diff --git a/site/en/api_docs/python/tf.md b/site/en/api_docs/python/tf.md new file mode 100644 index 00000000000..9200c0b0c49 --- /dev/null +++ b/site/en/api_docs/python/tf.md @@ -0,0 +1,613 @@ +description: ## TensorFlow + +
+ +# Module: tf + + + + + + + + + +## TensorFlow + + +``` +pip install tensorflow +``` + +## Modules + +[`audio`](./tf/audio.md) module: Public API for tf.audio namespace. + +[`autodiff`](./tf/autodiff.md) module: Public API for tf.autodiff namespace. + +[`autograph`](./tf/autograph.md) module: Conversion of plain Python into TensorFlow graph code. + +[`bitwise`](./tf/bitwise.md) module: Operations for manipulating the binary representations of integers. + +[`compat`](./tf/compat.md) module: Compatibility functions. + +[`config`](./tf/config.md) module: Public API for tf.config namespace. + +[`data`](./tf/data.md) module: tf.data.Dataset API for input pipelines. + +[`debugging`](./tf/debugging.md) module: Public API for tf.debugging namespace. + +[`distribute`](./tf/distribute.md) module: Library for running a computation across multiple devices. + +[`dtypes`](./tf/dtypes.md) module: Public API for tf.dtypes namespace. + +[`errors`](./tf/errors.md) module: Exception types for TensorFlow errors. + +[`estimator`](./tf/estimator.md) module: Estimator: High level tools for working with models. + +[`experimental`](./tf/experimental.md) module: Public API for tf.experimental namespace. + +[`feature_column`](./tf/feature_column.md) module: Public API for tf.feature_column namespace. + +[`graph_util`](./tf/graph_util.md) module: Helpers to manipulate a tensor graph in python. + +[`image`](./tf/image.md) module: Image ops. + +[`initializers`](./tf/keras/initializers.md) module: Keras initializer serialization / deserialization. + +[`io`](./tf/io.md) module: Public API for tf.io namespace. + +[`keras`](./tf/keras.md) module: Implementation of the Keras API meant to be a high-level API for TensorFlow. + +[`linalg`](./tf/linalg.md) module: Operations for linear algebra. + +[`lite`](./tf/lite.md) module: Public API for tf.lite namespace. + +[`lookup`](./tf/lookup.md) module: Public API for tf.lookup namespace. + +[`losses`](./tf/keras/losses.md) module: Built-in loss functions. + +[`math`](./tf/math.md) module: Math Operations. + +[`metrics`](./tf/keras/metrics.md) module: Built-in metrics. + +[`mixed_precision`](./tf/mixed_precision.md) module: Public API for tf.mixed_precision namespace. + +[`mlir`](./tf/mlir.md) module: Public API for tf.mlir namespace. + +[`nest`](./tf/nest.md) module: Public API for tf.nest namespace. + +[`nn`](./tf/nn.md) module: Wrappers for primitive Neural Net (NN) Operations. + +[`optimizers`](./tf/keras/optimizers.md) module: Built-in optimizer classes. + +[`profiler`](./tf/profiler.md) module: Public API for tf.profiler namespace. + +[`quantization`](./tf/quantization.md) module: Public API for tf.quantization namespace. + +[`queue`](./tf/queue.md) module: Public API for tf.queue namespace. + +[`ragged`](./tf/ragged.md) module: Ragged Tensors. + +[`random`](./tf/random.md) module: Public API for tf.random namespace. + +[`raw_ops`](./tf/raw_ops.md) module: Public API for tf.raw_ops namespace. + +[`saved_model`](./tf/saved_model.md) module: Public API for tf.saved_model namespace. + +[`sets`](./tf/sets.md) module: Tensorflow set operations. + +[`signal`](./tf/signal.md) module: Signal processing operations. + +[`sparse`](./tf/sparse.md) module: Sparse Tensor Representation. + +[`strings`](./tf/strings.md) module: Operations for working with string Tensors. + +[`summary`](./tf/summary.md) module: Operations for writing summary data, for use in analysis and visualization. + +[`sysconfig`](./tf/sysconfig.md) module: System configuration library. 
+ +[`test`](./tf/test.md) module: Testing. + +[`tpu`](./tf/tpu.md) module: Ops related to Tensor Processing Units. + +[`train`](./tf/train.md) module: Support for training models. + +[`version`](./tf/version.md) module: Public API for tf.version namespace. + +[`xla`](./tf/xla.md) module: Public API for tf.xla namespace. + +## Classes + +[`class AggregationMethod`](./tf/AggregationMethod.md): A class listing aggregation methods used to combine gradients. + +[`class CriticalSection`](./tf/CriticalSection.md): Critical section. + +[`class DType`](./tf/dtypes/DType.md): Represents the type of the elements in a `Tensor`. + +[`class DeviceSpec`](./tf/DeviceSpec.md): Represents a (possibly partial) specification for a TensorFlow device. + +[`class GradientTape`](./tf/GradientTape.md): Record operations for automatic differentiation. + +[`class Graph`](./tf/Graph.md): A TensorFlow computation, represented as a dataflow graph. + +[`class IndexedSlices`](./tf/IndexedSlices.md): A sparse representation of a set of tensor slices at given indices. + +[`class IndexedSlicesSpec`](./tf/IndexedSlicesSpec.md): Type specification for a tf.IndexedSlices. + +[`class Module`](./tf/Module.md): Base neural network module class. + +[`class Operation`](./tf/Operation.md): Represents a graph node that performs computation on tensors. + +[`class OptionalSpec`](./tf/OptionalSpec.md): Represents an optional potentially containing a structured value. + +[`class RaggedTensor`](./tf/RaggedTensor.md): Represents a ragged tensor. + +[`class RaggedTensorSpec`](./tf/RaggedTensorSpec.md): Type specification for a tf.RaggedTensor. + +[`class RegisterGradient`](./tf/RegisterGradient.md): A decorator for registering the gradient function for an op type. + +[`class SparseTensor`](./tf/sparse/SparseTensor.md): Represents a sparse tensor. + +[`class SparseTensorSpec`](./tf/SparseTensorSpec.md): Type specification for a tf.SparseTensor. + +[`class Tensor`](./tf/Tensor.md): A tensor represents a rectangular array of data. + +[`class TensorArray`](./tf/TensorArray.md): Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays. + +[`class TensorArraySpec`](./tf/TensorArraySpec.md): Type specification for a tf.TensorArray. + +[`class TensorShape`](./tf/TensorShape.md): Represents the shape of a `Tensor`. + +[`class TensorSpec`](./tf/TensorSpec.md): Describes a tf.Tensor. + +[`class TypeSpec`](./tf/TypeSpec.md): Specifies a TensorFlow value type. + +[`class UnconnectedGradients`](./tf/UnconnectedGradients.md): Controls how gradient computation behaves when y does not depend on x. + +[`class Variable`](./tf/Variable.md): See the [variable guide](https://tensorflow.org/guide/variable). + +[`class VariableAggregation`](./tf/VariableAggregation.md): Indicates how a distributed variable will be aggregated. + +[`class VariableSynchronization`](./tf/VariableSynchronization.md): Indicates when a distributed variable will be synced. + +[`class constant_initializer`](./tf/constant_initializer.md): Initializer that generates tensors with constant values. + +[`class name_scope`](./tf/name_scope.md): A context manager for use when defining a Python op. + +[`class ones_initializer`](./tf/ones_initializer.md): Initializer that generates tensors initialized to 1. + +[`class random_normal_initializer`](./tf/random_normal_initializer.md): Initializer that generates tensors with a normal distribution. + +[`class random_uniform_initializer`](./tf/random_uniform_initializer.md): Initializer that generates tensors with a uniform distribution. 
+ +[`class zeros_initializer`](./tf/zeros_initializer.md): Initializer that generates tensors initialized to 0. + +## Functions + +[`Assert(...)`](./tf/debugging/Assert.md): Asserts that the given condition is true. + +[`abs(...)`](./tf/math/abs.md): Computes the absolute value of a tensor. + +[`acos(...)`](./tf/math/acos.md): Computes acos of x element-wise. + +[`acosh(...)`](./tf/math/acosh.md): Computes inverse hyperbolic cosine of x element-wise. + +[`add(...)`](./tf/math/add.md): Returns x + y element-wise. + +[`add_n(...)`](./tf/math/add_n.md): Adds all input tensors element-wise. + +[`argmax(...)`](./tf/math/argmax.md): Returns the index with the largest value across axes of a tensor. + +[`argmin(...)`](./tf/math/argmin.md): Returns the index with the smallest value across axes of a tensor. + +[`argsort(...)`](./tf/argsort.md): Returns the indices of a tensor that give its sorted order along an axis. + +[`as_dtype(...)`](./tf/dtypes/as_dtype.md): Converts the given `type_value` to a `DType`. + +[`as_string(...)`](./tf/strings/as_string.md): Converts each entry in the given tensor to strings. + +[`asin(...)`](./tf/math/asin.md): Computes the trignometric inverse sine of x element-wise. + +[`asinh(...)`](./tf/math/asinh.md): Computes inverse hyperbolic sine of x element-wise. + +[`assert_equal(...)`](./tf/debugging/assert_equal.md): Assert the condition `x == y` holds element-wise. + +[`assert_greater(...)`](./tf/debugging/assert_greater.md): Assert the condition `x > y` holds element-wise. + +[`assert_less(...)`](./tf/debugging/assert_less.md): Assert the condition `x < y` holds element-wise. + +[`assert_rank(...)`](./tf/debugging/assert_rank.md): Assert that `x` has rank equal to `rank`. + +[`atan(...)`](./tf/math/atan.md): Computes the trignometric inverse tangent of x element-wise. + +[`atan2(...)`](./tf/math/atan2.md): Computes arctangent of `y/x` element-wise, respecting signs of the arguments. + +[`atanh(...)`](./tf/math/atanh.md): Computes inverse hyperbolic tangent of x element-wise. + +[`batch_to_space(...)`](./tf/batch_to_space.md): BatchToSpace for N-D tensors of type T. + +[`bitcast(...)`](./tf/bitcast.md): Bitcasts a tensor from one type to another without copying data. + +[`boolean_mask(...)`](./tf/boolean_mask.md): Apply boolean mask to tensor. + +[`broadcast_dynamic_shape(...)`](./tf/broadcast_dynamic_shape.md): Computes the shape of a broadcast given symbolic shapes. + +[`broadcast_static_shape(...)`](./tf/broadcast_static_shape.md): Computes the shape of a broadcast given known shapes. + +[`broadcast_to(...)`](./tf/broadcast_to.md): Broadcast an array for a compatible shape. + +[`case(...)`](./tf/case.md): Create a case operation. + +[`cast(...)`](./tf/cast.md): Casts a tensor to a new type. + +[`clip_by_global_norm(...)`](./tf/clip_by_global_norm.md): Clips values of multiple tensors by the ratio of the sum of their norms. + +[`clip_by_norm(...)`](./tf/clip_by_norm.md): Clips tensor values to a maximum L2-norm. + +[`clip_by_value(...)`](./tf/clip_by_value.md): Clips tensor values to a specified min and max. + +[`complex(...)`](./tf/dtypes/complex.md): Converts two real numbers to a complex number. + +[`concat(...)`](./tf/concat.md): Concatenates tensors along one dimension. + +[`cond(...)`](./tf/cond.md): Return `true_fn()` if the predicate `pred` is true else `false_fn()`. + +[`constant(...)`](./tf/constant.md): Creates a constant tensor from a tensor-like object. 
+ +[`control_dependencies(...)`](./tf/control_dependencies.md): Wrapper for Graph.control_dependencies() using the default graph. + +[`convert_to_tensor(...)`](./tf/convert_to_tensor.md): Converts the given `value` to a `Tensor`. + +[`cos(...)`](./tf/math/cos.md): Computes cos of x element-wise. + +[`cosh(...)`](./tf/math/cosh.md): Computes hyperbolic cosine of x element-wise. + +[`cumsum(...)`](./tf/math/cumsum.md): Compute the cumulative sum of the tensor `x` along `axis`. + +[`custom_gradient(...)`](./tf/custom_gradient.md): Decorator to define a function with a custom gradient. + +[`device(...)`](./tf/device.md): Specifies the device for ops created/executed in this context. + +[`divide(...)`](./tf/math/divide.md): Computes Python style division of `x` by `y`. + +[`dynamic_partition(...)`](./tf/dynamic_partition.md): Partitions `data` into `num_partitions` tensors using indices from `partitions`. + +[`dynamic_stitch(...)`](./tf/dynamic_stitch.md): Interleave the values from the `data` tensors into a single tensor. + +[`edit_distance(...)`](./tf/edit_distance.md): Computes the Levenshtein distance between sequences. + +[`eig(...)`](./tf/linalg/eig.md): Computes the eigen decomposition of a batch of matrices. + +[`eigvals(...)`](./tf/linalg/eigvals.md): Computes the eigenvalues of one or more matrices. + +[`einsum(...)`](./tf/einsum.md): Tensor contraction over specified indices and outer product. + +[`ensure_shape(...)`](./tf/ensure_shape.md): Updates the shape of a tensor and checks at runtime that the shape holds. + +[`equal(...)`](./tf/math/equal.md): Returns the truth value of (x == y) element-wise. + +[`executing_eagerly(...)`](./tf/executing_eagerly.md): Checks whether the current thread has eager execution enabled. + +[`exp(...)`](./tf/math/exp.md): Computes exponential of x element-wise. \\(y = e^x\\). + +[`expand_dims(...)`](./tf/expand_dims.md): Returns a tensor with an additional dimension inserted at index `axis`. + +[`extract_volume_patches(...)`](./tf/extract_volume_patches.md): Extract `patches` from `input` and put them in the "depth" output dimension. 3D extension of `extract_image_patches`. + +[`eye(...)`](./tf/eye.md): Construct an identity matrix, or a batch of matrices. + +[`fill(...)`](./tf/fill.md): Creates a tensor filled with a scalar value. + +[`fingerprint(...)`](./tf/fingerprint.md): Generates fingerprint values. + +[`floor(...)`](./tf/math/floor.md): Returns element-wise largest integer not greater than x. + +[`foldl(...)`](./tf/foldl.md): foldl on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values) + +[`foldr(...)`](./tf/foldr.md): foldr on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values) + +[`function(...)`](./tf/function.md): Compiles a function into a callable TensorFlow graph. + +[`gather(...)`](./tf/gather.md): Gather slices from params axis `axis` according to indices. + +[`gather_nd(...)`](./tf/gather_nd.md): Gather slices from `params` into a Tensor with shape specified by `indices`. + +[`get_logger(...)`](./tf/get_logger.md): Return TF logger instance. + +[`get_static_value(...)`](./tf/get_static_value.md): Returns the constant value of the given tensor, if efficiently calculable. + +[`grad_pass_through(...)`](./tf/grad_pass_through.md): Creates a grad-pass-through op with the forward behavior provided in f. + +[`gradients(...)`](./tf/gradients.md): Constructs symbolic derivatives of sum of `ys` w.r.t. x in `xs`. 
+ +[`greater(...)`](./tf/math/greater.md): Returns the truth value of (x > y) element-wise. + +[`greater_equal(...)`](./tf/math/greater_equal.md): Returns the truth value of (x >= y) element-wise. + +[`group(...)`](./tf/group.md): Create an op that groups multiple operations. + +[`guarantee_const(...)`](./tf/guarantee_const.md): Gives a guarantee to the TF runtime that the input tensor is a constant. + +[`hessians(...)`](./tf/hessians.md): Constructs the Hessian of sum of `ys` with respect to `x` in `xs`. + +[`histogram_fixed_width(...)`](./tf/histogram_fixed_width.md): Return histogram of values. + +[`histogram_fixed_width_bins(...)`](./tf/histogram_fixed_width_bins.md): Bins the given values for use in a histogram. + +[`identity(...)`](./tf/identity.md): Return a Tensor with the same shape and contents as input. + +[`identity_n(...)`](./tf/identity_n.md): Returns a list of tensors with the same shapes and contents as the input + +[`import_graph_def(...)`](./tf/graph_util/import_graph_def.md): Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments) + +[`init_scope(...)`](./tf/init_scope.md): A context manager that lifts ops out of control-flow scopes and function-building graphs. + +[`is_tensor(...)`](./tf/is_tensor.md): Checks whether `x` is a tensor or "tensor-like". + +[`less(...)`](./tf/math/less.md): Returns the truth value of (x < y) element-wise. + +[`less_equal(...)`](./tf/math/less_equal.md): Returns the truth value of (x <= y) element-wise. + +[`linspace(...)`](./tf/linspace.md): Generates values in an interval. + +[`load_library(...)`](./tf/load_library.md): Loads a TensorFlow plugin. + +[`load_op_library(...)`](./tf/load_op_library.md): Loads a TensorFlow plugin, containing custom ops and kernels. + +[`logical_and(...)`](./tf/math/logical_and.md): Logical AND function. + +[`logical_not(...)`](./tf/math/logical_not.md): Returns the truth value of `NOT x` element-wise. + +[`logical_or(...)`](./tf/math/logical_or.md): Returns the truth value of x OR y element-wise. + +[`make_ndarray(...)`](./tf/make_ndarray.md): Create a numpy ndarray from a tensor. + +[`make_tensor_proto(...)`](./tf/make_tensor_proto.md): Create a TensorProto. + +[`map_fn(...)`](./tf/map_fn.md): map on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values) + +[`matmul(...)`](./tf/linalg/matmul.md): Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +[`matrix_square_root(...)`](./tf/linalg/sqrtm.md): Computes the matrix square root of one or more square matrices: + +[`maximum(...)`](./tf/math/maximum.md): Returns the max of x and y (i.e. x > y ? x : y) element-wise. + +[`meshgrid(...)`](./tf/meshgrid.md): Broadcasts parameters for evaluation on an N-D grid. + +[`minimum(...)`](./tf/math/minimum.md): Returns the min of x and y (i.e. x < y ? x : y) element-wise. + +[`multiply(...)`](./tf/math/multiply.md): Returns an element-wise x * y. + +[`negative(...)`](./tf/math/negative.md): Computes numerical negative value element-wise. + +[`no_gradient(...)`](./tf/no_gradient.md): Specifies that ops of type `op_type` is not differentiable. + +[`no_op(...)`](./tf/no_op.md): Does nothing. Only useful as a placeholder for control edges. + +[`nondifferentiable_batch_function(...)`](./tf/nondifferentiable_batch_function.md): Batches the computation done by the decorated function. + +[`norm(...)`](./tf/norm.md): Computes the norm of vectors, matrices, and tensors. 
+ +[`not_equal(...)`](./tf/math/not_equal.md): Returns the truth value of (x != y) element-wise. + +[`numpy_function(...)`](./tf/numpy_function.md): Wraps a python function and uses it as a TensorFlow op. + +[`one_hot(...)`](./tf/one_hot.md): Returns a one-hot tensor. + +[`ones(...)`](./tf/ones.md): Creates a tensor with all elements set to one (1). + +[`ones_like(...)`](./tf/ones_like.md): Creates a tensor of all ones that has the same shape as the input. + +[`pad(...)`](./tf/pad.md): Pads a tensor. + +[`parallel_stack(...)`](./tf/parallel_stack.md): Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor in parallel. + +[`pow(...)`](./tf/math/pow.md): Computes the power of one value to another. + +[`print(...)`](./tf/print.md): Print the specified inputs. + +[`py_function(...)`](./tf/py_function.md): Wraps a python function into a TensorFlow op that executes it eagerly. + +[`range(...)`](./tf/range.md): Creates a sequence of numbers. + +[`rank(...)`](./tf/rank.md): Returns the rank of a tensor. + +[`realdiv(...)`](./tf/realdiv.md): Returns x / y element-wise for real types. + +[`recompute_grad(...)`](./tf/recompute_grad.md): An eager-compatible version of recompute_grad. + +[`reduce_all(...)`](./tf/reduce_all.md): Computes the "logical and" of elements across dimensions of a tensor. + +[`reduce_any(...)`](./tf/math/reduce_any.md): Computes the "logical or" of elements across dimensions of a tensor. + +[`reduce_logsumexp(...)`](./tf/math/reduce_logsumexp.md): Computes log(sum(exp(elements across dimensions of a tensor))). + +[`reduce_max(...)`](./tf/math/reduce_max.md): Computes the maximum of elements across dimensions of a tensor. + +[`reduce_mean(...)`](./tf/math/reduce_mean.md): Computes the mean of elements across dimensions of a tensor. + +[`reduce_min(...)`](./tf/math/reduce_min.md): Computes the minimum of elements across dimensions of a tensor. + +[`reduce_prod(...)`](./tf/math/reduce_prod.md): Computes the product of elements across dimensions of a tensor. + +[`reduce_sum(...)`](./tf/math/reduce_sum.md): Computes the sum of elements across dimensions of a tensor. + +[`register_tensor_conversion_function(...)`](./tf/register_tensor_conversion_function.md): Registers a function for converting objects of `base_type` to `Tensor`. + +[`repeat(...)`](./tf/repeat.md): Repeat elements of `input`. + +[`required_space_to_batch_paddings(...)`](./tf/required_space_to_batch_paddings.md): Calculate padding required to make block_shape divide input_shape. + +[`reshape(...)`](./tf/reshape.md): Reshapes a tensor. + +[`reverse(...)`](./tf/reverse.md): Reverses specific dimensions of a tensor. + +[`reverse_sequence(...)`](./tf/reverse_sequence.md): Reverses variable length slices. (deprecated arguments) (deprecated arguments) + +[`roll(...)`](./tf/roll.md): Rolls the elements of a tensor along an axis. + +[`round(...)`](./tf/math/round.md): Rounds the values of a tensor to the nearest integer, element-wise. + +[`saturate_cast(...)`](./tf/dtypes/saturate_cast.md): Performs a safe saturating cast of `value` to `dtype`. + +[`scalar_mul(...)`](./tf/math/scalar_mul.md): Multiplies a scalar times a `Tensor` or `IndexedSlices` object. + +[`scan(...)`](./tf/scan.md): scan on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values) + +[`scatter_nd(...)`](./tf/scatter_nd.md): Scatter `updates` into a new tensor according to `indices`. + +[`searchsorted(...)`](./tf/searchsorted.md): Searches input tensor for values on the innermost dimension. 
+ +[`sequence_mask(...)`](./tf/sequence_mask.md): Returns a mask tensor representing the first N positions of each cell. + +[`shape(...)`](./tf/shape.md): Returns the shape of a tensor. + +[`shape_n(...)`](./tf/shape_n.md): Returns shape of tensors. + +[`sigmoid(...)`](./tf/math/sigmoid.md): Computes sigmoid of `x` element-wise. + +[`sign(...)`](./tf/math/sign.md): Returns an element-wise indication of the sign of a number. + +[`sin(...)`](./tf/math/sin.md): Computes sine of x element-wise. + +[`sinh(...)`](./tf/math/sinh.md): Computes hyperbolic sine of x element-wise. + +[`size(...)`](./tf/size.md): Returns the size of a tensor. + +[`slice(...)`](./tf/slice.md): Extracts a slice from a tensor. + +[`sort(...)`](./tf/sort.md): Sorts a tensor. + +[`space_to_batch(...)`](./tf/space_to_batch.md): SpaceToBatch for N-D tensors of type T. + +[`space_to_batch_nd(...)`](./tf/space_to_batch_nd.md): SpaceToBatch for N-D tensors of type T. + +[`split(...)`](./tf/split.md): Splits a tensor `value` into a list of sub tensors. + +[`sqrt(...)`](./tf/math/sqrt.md): Computes element-wise square root of the input tensor. + +[`square(...)`](./tf/math/square.md): Computes square of x element-wise. + +[`squeeze(...)`](./tf/squeeze.md): Removes dimensions of size 1 from the shape of a tensor. + +[`stack(...)`](./tf/stack.md): Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor. + +[`stop_gradient(...)`](./tf/stop_gradient.md): Stops gradient computation. + +[`strided_slice(...)`](./tf/strided_slice.md): Extracts a strided slice of a tensor (generalized python array indexing). + +[`subtract(...)`](./tf/math/subtract.md): Returns x - y element-wise. + +[`switch_case(...)`](./tf/switch_case.md): Create a switch/case operation, i.e. an integer-indexed conditional. + +[`tan(...)`](./tf/math/tan.md): Computes tan of x element-wise. + +[`tanh(...)`](./tf/math/tanh.md): Computes hyperbolic tangent of `x` element-wise. + +[`tensor_scatter_nd_add(...)`](./tf/tensor_scatter_nd_add.md): Adds sparse `updates` to an existing tensor according to `indices`. + +[`tensor_scatter_nd_sub(...)`](./tf/tensor_scatter_nd_sub.md): Subtracts sparse `updates` from an existing tensor according to `indices`. + +[`tensor_scatter_nd_update(...)`](./tf/tensor_scatter_nd_update.md): Scatter `updates` into an existing tensor according to `indices`. + +[`tensordot(...)`](./tf/tensordot.md): Tensor contraction of a and b along specified axes and outer product. + +[`tile(...)`](./tf/tile.md): Constructs a tensor by tiling a given tensor. + +[`timestamp(...)`](./tf/timestamp.md): Provides the time since epoch in seconds. + +[`transpose(...)`](./tf/transpose.md): Transposes `a`, where `a` is a Tensor. + +[`truediv(...)`](./tf/math/truediv.md): Divides x / y elementwise (using Python 3 division operator semantics). + +[`truncatediv(...)`](./tf/truncatediv.md): Returns x / y element-wise for integer types. + +[`truncatemod(...)`](./tf/truncatemod.md): Returns element-wise remainder of division. This emulates C semantics in that + +[`tuple(...)`](./tf/tuple.md): Group tensors together. + +[`unique(...)`](./tf/unique.md): Finds unique elements in a 1-D tensor. + +[`unique_with_counts(...)`](./tf/unique_with_counts.md): Finds unique elements in a 1-D tensor. + +[`unravel_index(...)`](./tf/unravel_index.md): Converts an array of flat indices into a tuple of coordinate arrays. + +[`unstack(...)`](./tf/unstack.md): Unpacks the given dimension of a rank-`R` tensor into rank-`(R-1)` tensors. 
+ +[`variable_creator_scope(...)`](./tf/variable_creator_scope.md): Scope which defines a variable creation function to be used by variable(). + +[`vectorized_map(...)`](./tf/vectorized_map.md): Parallel map on the list of tensors unpacked from `elems` on dimension 0. + +[`where(...)`](./tf/where.md): Return the elements where `condition` is `True` (multiplexing `x` and `y`). + +[`while_loop(...)`](./tf/while_loop.md): Repeat `body` while the condition `cond` is true. (deprecated argument values) + +[`zeros(...)`](./tf/zeros.md): Creates a tensor with all elements set to zero. + +[`zeros_like(...)`](./tf/zeros_like.md): Creates a tensor with all elements set to zero. + +## Other Members + +* `__version__ = '2.2.0'` +* `bfloat16` +* `bool` +* `complex128` +* `complex64` +* `double` +* `float16` +* `float32` +* `float64` +* `half` +* `int16` +* `int32` +* `int64` +* `int8` +* `newaxis = None` +* `qint16` +* `qint32` +* `qint8` +* `quint16` +* `quint8` +* `resource` +* `string` +* `uint16` +* `uint32` +* `uint64` +* `uint8` +* `variant` diff --git a/site/en/api_docs/python/tf/AggregationMethod.md b/site/en/api_docs/python/tf/AggregationMethod.md new file mode 100644 index 00000000000..57cd423d7ff --- /dev/null +++ b/site/en/api_docs/python/tf/AggregationMethod.md @@ -0,0 +1,68 @@ +description: A class listing aggregation methods used to combine gradients. + +
+ +# tf.AggregationMethod + + + + + + + + + +A class listing aggregation methods used to combine gradients. + + + + + +Computing partial derivatives can require aggregating gradient +contributions. This class lists the various methods that can +be used to combine gradients in the graph. + +The following aggregation methods are part of the stable API for +aggregating gradients: + +* `ADD_N`: All of the gradient terms are summed as part of one + operation using the "AddN" op (see tf.add_n). This + method has the property that all gradients must be ready and + buffered separately in memory before any aggregation is performed. +* `DEFAULT`: The system-chosen default aggregation method. + +The following aggregation methods are experimental and may not +be supported in future releases: + +* `EXPERIMENTAL_TREE`: Gradient terms are summed in pairs using + using the "AddN" op. This method of summing gradients may reduce + performance, but it can improve memory utilization because the + gradients can be released earlier. + +## Class Variables + +* `ADD_N = 0` +* `DEFAULT = 0` +* `EXPERIMENTAL_ACCUMULATE_N = 2` +* `EXPERIMENTAL_TREE = 1` diff --git a/site/en/api_docs/python/tf/CriticalSection.md b/site/en/api_docs/python/tf/CriticalSection.md new file mode 100644 index 00000000000..54009b498e2 --- /dev/null +++ b/site/en/api_docs/python/tf/CriticalSection.md @@ -0,0 +1,231 @@ +description: Critical section. + +
+ +# tf.CriticalSection + + + + + + + + + +Critical section. + + + + + + + + + +A `CriticalSection` object is a resource in the graph which executes subgraphs +in **serial** order. A common example of a subgraph one may wish to run +exclusively is the one given by the following function: + +```python +v = resource_variable_ops.ResourceVariable(0.0, name="v") + +def count(): + value = v.read_value() + with tf.control_dependencies([value]): + with tf.control_dependencies([v.assign_add(1)]): + return tf.identity(value) +``` + +Here, a snapshot of `v` is captured in `value`; and then `v` is updated. +The snapshot value is returned. + +If multiple workers or threads all execute `count` in parallel, there is no +guarantee that access to the variable `v` is atomic at any point within +any thread's calculation of `count`. In fact, even implementing an atomic +counter that guarantees that the user will see each value `0, 1, ...,` is +currently impossible. + +The solution is to ensure any access to the underlying resource `v` is +only processed through a critical section: + +```python +cs = CriticalSection() +f1 = cs.execute(count) +f2 = cs.execute(count) +output = f1 + f2 +session.run(output) +``` +The functions `f1` and `f2` will be executed serially, and updates to `v` +will be atomic. + +**NOTES** + +All resource objects, including the critical section and any captured +variables of functions executed on that critical section, will be +colocated to the same device (host and cpu/gpu). + +When using multiple critical sections on the same resources, there is no +guarantee of exclusive access to those resources. This behavior is disallowed +by default (but see the kwarg `exclusive_resource_access`). + +For example, running the same function in two separate critical sections +will not ensure serial execution: + +```python +v = tf.compat.v1.get_variable("v", initializer=0.0, use_resource=True) +def accumulate(up): + x = v.read_value() + with tf.control_dependencies([x]): + with tf.control_dependencies([v.assign_add(up)]): + return tf.identity(x) +ex1 = CriticalSection().execute( + accumulate, 1.0, exclusive_resource_access=False) +ex2 = CriticalSection().execute( + accumulate, 1.0, exclusive_resource_access=False) +bad_sum = ex1 + ex2 +sess.run(v.initializer) +sess.run(bad_sum) # May return 0.0 +``` + + + + + + + + + + + + +
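The examples above use TF1-style resource variables and sessions. As a rough sketch only (assuming TensorFlow 2.x eager execution; the variable, its initial value, and the printed results are illustrative), the same counter pattern can be written as:

```python
import tensorflow as tf

v = tf.Variable(0.0)
cs = tf.CriticalSection()

def count():
  value = v.read_value()
  with tf.control_dependencies([value]):
    with tf.control_dependencies([v.assign_add(1.0)]):
      return tf.identity(value)

# Each `execute` call acquires the critical section, so the
# read-then-increment sequence in `count` cannot interleave
# with another execution of the same section.
first = cs.execute(count)
second = cs.execute(count)
print(first.numpy(), second.numpy())  # 0.0 1.0
```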
+`name` + + +
+ + + +## Methods + +

execute

+ +View source + + + +Execute function `fn()` inside the critical section. + +`fn` should not accept any arguments. To pass extra arguments when +calling `fn` in the critical section, create a lambda: + +```python +critical_section.execute(lambda: fn(*my_args, **my_kwargs)) +``` + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to execute. Must return at least one tensor. +
+`exclusive_resource_access` + +Whether the resources required by +`fn` should be exclusive to this `CriticalSection`. Default: `True`. +You may want to set this to `False` if you will be accessing a +resource in read-only mode in two different CriticalSections. +
+`name` + +The name to use when creating the execute operation. +
+ + + + + + + + + + + +
Returns
+The tensors returned from `fn()`. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If `fn` attempts to lock this `CriticalSection` in any nested +or lazy way that may cause a deadlock. +
+`ValueError` + +If `exclusive_resource_access == True` and +another `CriticalSection` has an execution requesting the same +resources as `fn`. Note that even if `exclusive_resource_access` is +`True`, if another execution in another `CriticalSection` was created +without `exclusive_resource_access=True`, a `ValueError` will be raised. +
+ + + + + diff --git a/site/en/api_docs/python/tf/DeviceSpec.md b/site/en/api_docs/python/tf/DeviceSpec.md new file mode 100644 index 00000000000..abea68f9919 --- /dev/null +++ b/site/en/api_docs/python/tf/DeviceSpec.md @@ -0,0 +1,524 @@ +description: Represents a (possibly partial) specification for a TensorFlow device. + +
+ +# tf.DeviceSpec + + + + + + + + + +Represents a (possibly partial) specification for a TensorFlow device. + + + + + + + +`DeviceSpec`s are used throughout TensorFlow to describe where state is stored +and computations occur. Using `DeviceSpec` allows you to parse device spec +strings to verify their validity, merge them or compose them programmatically. + +#### Example: + + + +```python +# Place the operations on device "GPU:0" in the "ps" job. +device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0) +with tf.device(device_spec.to_string()): + # Both my_var and squared_var will be placed on /job:ps/device:GPU:0. + my_var = tf.Variable(..., name="my_variable") + squared_var = tf.square(my_var) +``` + +With eager execution disabled (by default in TensorFlow 1.x and by calling +disable_eager_execution() in TensorFlow 2.x), the following syntax +can be used: + +```python +tf.compat.v1.disable_eager_execution() + +# Same as previous +device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0) +# No need of .to_string() method. +with tf.device(device_spec): + my_var = tf.Variable(..., name="my_variable") + squared_var = tf.square(my_var) + ``` + +If a `DeviceSpec` is partially specified, it will be merged with other +`DeviceSpec`s according to the scope in which it is defined. `DeviceSpec` +components defined in inner scopes take precedence over those defined in +outer scopes. + +```python +gpu0_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0) +with tf.device(DeviceSpec(job="train").to_string()): + with tf.device(gpu0_spec.to_string()): + # Nodes created here will be assigned to /job:ps/device:GPU:0. + with tf.device(DeviceSpec(device_type="GPU", device_index=1).to_string()): + # Nodes created here will be assigned to /job:train/device:GPU:1. +``` + +A `DeviceSpec` consists of 5 components -- each of +which is optionally specified: + +* Job: The job name. +* Replica: The replica index. +* Task: The task index. +* Device type: The device type string (e.g. "CPU" or "GPU"). +* Device index: The device index. + + + + + + + + + + + + + + + + + + + + + + +
+`job` + +string. Optional job name. +
+`replica` + +int. Optional replica index. +
+`task` + +int. Optional task index. +
+`device_type` + +Optional device type string (e.g. "CPU" or "GPU") +
+`device_index` + +int. Optional device index. If left +unspecified, device represents 'any' device_index. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`device_index` + + +
+`device_type` + + +
+`job` + + +
+`replica` + + +
+`task` + + +
+ + + +## Methods + +

from_string

+ +View source + + + +Construct a `DeviceSpec` from a string. + + + + + + + + + + + +
Args
+`spec` + +a string of the form +/job:<name>/replica:<id>/task:<id>/device:CPU:<id> +or +/job:<name>/replica:<id>/task:<id>/device:GPU:<id> +as cpu and gpu are mutually exclusive. +All entries are optional. +
+ + + + + + + + + + + +
Returns
+A DeviceSpec. +
+ + + +
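For illustration, a minimal sketch of parsing a full device string (the spec string and printed values are hypothetical, not from the original docstring):

```python
import tensorflow as tf

spec = tf.DeviceSpec.from_string("/job:ps/replica:0/task:3/device:GPU:1")
print(spec.job)           # 'ps'
print(spec.replica)       # 0
print(spec.task)          # 3
print(spec.device_type)   # 'GPU'
print(spec.device_index)  # 1
```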

make_merged_spec

+ +View source + + + +Returns a new DeviceSpec which incorporates `dev`. + +When combining specs, `dev` will take precedence over the current spec. +So for instance: +``` +first_spec = tf.DeviceSpec(job=0, device_type="CPU") +second_spec = tf.DeviceSpec(device_type="GPU") +combined_spec = first_spec.make_merged_spec(second_spec) +``` + +is equivalent to: +``` +combined_spec = tf.DeviceSpec(job=0, device_type="GPU") +``` + + + + + + + + + + +
Args
+`dev` + +a `DeviceSpec` +
+ + + + + + + + + + + +
Returns
+A new `DeviceSpec` which combines `self` and `dev` +
+ + + +

parse_from_string

+ +View source + + + +Parse a `DeviceSpec` name into its components. + +2.x behavior change: + In TensorFlow 1.x, this function mutates its own state and returns itself. + In 2.x, DeviceSpecs are immutable, and this function will return a + DeviceSpec which contains the spec. + + Recommended: + ``` + # my_spec and my_updated_spec are unrelated. + my_spec = tf.DeviceSpec.from_string("/CPU:0") + my_updated_spec = tf.DeviceSpec.from_string("/GPU:0") + with tf.device(my_updated_spec): + ... + ``` + + Will work in 1.x and 2.x (though deprecated in 2.x): + ``` + my_spec = tf.DeviceSpec.from_string("/CPU:0") + my_updated_spec = my_spec.parse_from_string("/GPU:0") + with tf.device(my_updated_spec): + ... + ``` + + Will NOT work in 2.x: + ``` + my_spec = tf.DeviceSpec.from_string("/CPU:0") + my_spec.parse_from_string("/GPU:0") # <== Will not update my_spec + with tf.device(my_spec): + ... + ``` + + In general, DeviceSpec.from_string should completely replace + DeviceSpec.parse_from_string, and DeviceSpec.replace should + completely replace setting attributes directly. + + + + + + + + + + +
Args
+`spec` + +an optional string of the form +/job:<name>/replica:<id>/task:<id>/device:CPU:<id> +or +/job:<name>/replica:<id>/task:<id>/device:GPU:<id> +as cpu and gpu are mutually exclusive. +All entries are optional. +
+ + + + + + + + + + + +
Returns
+The `DeviceSpec`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if the spec was not valid. +
+ + + +

replace

+ +View source + + + +Convenience method for making a new DeviceSpec by overriding fields. + + +#### For instance: + + +``` +my_spec = DeviceSpec(job="my_job", device_type="CPU") +my_updated_spec = my_spec.replace(device_type="GPU") +my_other_spec = my_spec.replace(device_type=None) +``` + + + + + + + + + + +
Args
+`**kwargs` + +This method takes the same args as the DeviceSpec constructor +
+ + + + + + + + + + + +
Returns
+A DeviceSpec with the fields specified in kwargs overridden. +
+ + + +

to_string

+ +View source + + + +Return a string representation of this `DeviceSpec`. + + + + + + + + + + +
Returns
+a string of the form +/job:<name>/replica:<id>/task:<id>/device:<device_type>:<id>. +
+ + + +
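As a quick sketch of the round trip (the component values here are arbitrary):

```python
import tensorflow as tf

spec = tf.DeviceSpec(job="ps", replica=0, task=1,
                     device_type="CPU", device_index=0)
print(spec.to_string())  # '/job:ps/replica:0/task:1/device:CPU:0'
```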

__eq__

+ +View source + + + +Checks whether the `other` DeviceSpec is the same as the current instance, i.e. has the same values for all of its internal fields. + + + + + + + + + + +
Args
+`other` + +Another DeviceSpec +
+ + + + + + + + + + + +
Returns
+Return `True` if `other` is also a DeviceSpec instance and has same value +as the current instance. +Return `False` otherwise. +
+ + + + + diff --git a/site/en/api_docs/python/tf/GradientTape.md b/site/en/api_docs/python/tf/GradientTape.md new file mode 100644 index 00000000000..e854b94c53b --- /dev/null +++ b/site/en/api_docs/python/tf/GradientTape.md @@ -0,0 +1,703 @@ +description: Record operations for automatic differentiation. + +
+ +# tf.GradientTape + + + + + + + + + +Record operations for automatic differentiation. + + + + + + + + + +Operations are recorded if they are executed within this context manager and +at least one of their inputs is being "watched". + +Trainable variables (created by tf.Variable or tf.compat.v1.get_variable, +where `trainable=True` is default in both cases) are automatically watched. +Tensors can be manually watched by invoking the `watch` method on this context +manager. + +For example, consider the function `y = x * x`. The gradient at `x = 3.0` can +be computed as: + +```python +x = tf.constant(3.0) +with tf.GradientTape() as g: + g.watch(x) + y = x * x +dy_dx = g.gradient(y, x) # Will compute to 6.0 +``` + +GradientTapes can be nested to compute higher-order derivatives. For example, + +```python +x = tf.constant(3.0) +with tf.GradientTape() as g: + g.watch(x) + with tf.GradientTape() as gg: + gg.watch(x) + y = x * x + dy_dx = gg.gradient(y, x) # Will compute to 6.0 +d2y_dx2 = g.gradient(dy_dx, x) # Will compute to 2.0 +``` + +By default, the resources held by a GradientTape are released as soon as +GradientTape.gradient() method is called. To compute multiple gradients over +the same computation, create a persistent gradient tape. This allows multiple +calls to the gradient() method as resources are released when the tape object +is garbage collected. For example: + +```python +x = tf.constant(3.0) +with tf.GradientTape(persistent=True) as g: + g.watch(x) + y = x * x + z = y * y +dz_dx = g.gradient(z, x) # 108.0 (4*x^3 at x = 3) +dy_dx = g.gradient(y, x) # 6.0 +del g # Drop the reference to the tape +``` + +By default GradientTape will automatically watch any trainable variables that +are accessed inside the context. If you want fine grained control over which +variables are watched you can disable automatic tracking by passing +`watch_accessed_variables=False` to the tape constructor: + +```python +with tf.GradientTape(watch_accessed_variables=False) as tape: + tape.watch(variable_a) + y = variable_a ** 2 # Gradients will be available for `variable_a`. + z = variable_b ** 3 # No gradients will be available since `variable_b` is + # not being watched. +``` + +Note that when using models you should ensure that your variables exist when +using `watch_accessed_variables=False`. Otherwise it's quite easy to make your +first iteration not have any gradients: + +```python +a = tf.keras.layers.Dense(32) +b = tf.keras.layers.Dense(32) + +with tf.GradientTape(watch_accessed_variables=False) as tape: + tape.watch(a.variables) # Since `a.build` has not been called at this point + # `a.variables` will return an empty list and the + # tape will not be watching anything. + result = b(a(inputs)) + tape.gradient(result, a.variables) # The result of this computation will be + # a list of `None`s since a's variables + # are not being watched. +``` + +Note that only tensors with real or complex dtypes are differentiable. + + + + + + + + + + + + + +
+`persistent` + +Boolean controlling whether a persistent gradient tape +is created. False by default, which means at most one call can +be made to the gradient() method on this object. +
+`watch_accessed_variables` + +Boolean controlling whether the tape will +automatically `watch` any (trainable) variables accessed while the tape +is active. Defaults to True meaning gradients can be requested from any +result computed in the tape derived from reading a trainable `Variable`. +If False users must explicitly `watch` any `Variable`s they want to +request gradients from. +
+ + + +## Methods + +

batch_jacobian

+ +View source + + + +Computes and stacks per-example jacobians. + +See [wikipedia article](http://en.wikipedia.org/wiki/jacobian_matrix_and_determinant) for the +definition of a Jacobian. This function is essentially an efficient +implementation of the following: + +`tf.stack([self.jacobian(y[i], x[i]) for i in range(x.shape[0])])`. + +Note that compared to GradientTape.jacobian which computes gradient of +each output value w.r.t each input value, this function is useful when +`target[i,...]` is independent of `source[j,...]` for `j != i`. This +assumption allows more efficient computation as compared to +GradientTape.jacobian. The output, as well as intermediate activations, +are lower dimensional and avoid a bunch of redundant zeros which would +result in the jacobian computation given the independence assumption. + +#### Example usage: + + + +```python +with tf.GradientTape() as g: + x = tf.constant([[1., 2.], [3., 4.]], dtype=tf.float32) + g.watch(x) + y = x * x +batch_jacobian = g.batch_jacobian(y, x) +# batch_jacobian is [[[2, 0], [0, 4]], [[6, 0], [0, 8]]] +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`target` + +A tensor with rank 2 or higher and with shape [b, y1, ..., y_n]. +`target[i,...]` should only depend on `source[i,...]`. +
+`source` + +A tensor with rank 2 or higher and with shape [b, x1, ..., x_m]. +
+`unconnected_gradients` + +a value which can either hold 'none' or 'zero' and +alters the value which will be returned if the target and sources are +unconnected. The possible values and effects are detailed in +'UnconnectedGradients' and it defaults to 'none'. +
+`parallel_iterations` + +A knob to control how many iterations are dispatched +in parallel. This knob can be used to control the total memory usage. +
+`experimental_use_pfor` + +If true, uses pfor for computing the Jacobian. Else +uses a tf.while_loop. +
+ + + + + + + + + + + +
Returns
+A tensor `t` with shape [b, y_1, ..., y_n, x1, ..., x_m] where `t[i, ...]` +is the jacobian of `target[i, ...]` w.r.t. `source[i, ...]`, i.e. stacked +per-example jacobians. +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If called on a non-persistent tape with eager execution +enabled and without enabling experimental_use_pfor. +
+`ValueError` + +If vectorization of jacobian computation fails or if first +dimension of `target` and `source` do not match. +
+ + + +

gradient

+ +View source + + + +Computes the gradient using operations recorded in context of this tape. + + + + + + + + + + + + + + + + + + + + +
Args
+`target` + +a list or nested structure of Tensors or Variables to be +differentiated. +
+`sources` + +a list or nested structure of Tensors or Variables. `target` +will be differentiated against elements in `sources`. +
+`output_gradients` + +a list of gradients, one for each element of +target. Defaults to None. +
+`unconnected_gradients` + +a value which can either hold 'none' or 'zero' and +alters the value which will be returned if the target and sources are +unconnected. The possible values and effects are detailed in +'UnconnectedGradients' and it defaults to 'none'. +
+ + + + + + + + + + + +
Returns
+a list or nested structure of Tensors (or IndexedSlices, or None), +one for each element in `sources`. Returned structure is the same as +the structure of `sources`. +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +if called inside the context of the tape, or if called more +than once on a non-persistent tape. +
+`ValueError` + +if the target is a variable or if unconnected gradients is +called with an unknown value. +
+ + + +
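A minimal sketch of the `unconnected_gradients` argument described above (the tensors and values are illustrative only):

```python
import tensorflow as tf

x = tf.constant(2.0)
unused = tf.constant(5.0)
with tf.GradientTape() as tape:
  tape.watch([x, unused])
  y = x * x  # `unused` plays no part in computing `y`.

# With the default 'none' the gradient for `unused` would be None;
# requesting 'zero' returns a zeros tensor of the same shape instead.
grads = tape.gradient(
    y, [x, unused],
    unconnected_gradients=tf.UnconnectedGradients.ZERO)
print(grads)  # grads[0] == 4.0, grads[1] == 0.0
```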

jacobian

+ +View source + + + +Computes the jacobian using operations recorded in context of this tape. + +See [wikipedia article](http://en.wikipedia.org/wiki/jacobian_matrix_and_determinant) for the +definition of a Jacobian. + +#### Example usage: + + + +```python +with tf.GradientTape() as g: + x = tf.constant([1.0, 2.0]) + g.watch(x) + y = x * x +jacobian = g.jacobian(y, x) +# jacobian value is [[2., 0.], [0., 4.]] +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`target` + +Tensor to be differentiated. +
+`sources` + +a list or nested structure of Tensors or Variables. `target` +will be differentiated against elements in `sources`. +
+`unconnected_gradients` + +a value which can either hold 'none' or 'zero' and +alters the value which will be returned if the target and sources are +unconnected. The possible values and effects are detailed in +'UnconnectedGradients' and it defaults to 'none'. +
+`parallel_iterations` + +A knob to control how many iterations are dispatched +in parallel. This knob can be used to control the total memory usage. +
+`experimental_use_pfor` + +If true, vectorizes the jacobian computation. Else +falls back to a sequential while_loop. Vectorization can sometimes fail +or lead to excessive memory usage. This option can be used to disable +vectorization in such cases. +
+ + + + + + + + + + + +
Returns
+A list or nested structure of Tensors (or None), one for each element in +`sources`. Returned structure is the same as the structure of `sources`. +Note if any gradient is sparse (IndexedSlices), jacobian function +currently makes it dense and returns a Tensor instead. This may change in +the future. +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If called on a non-persistent tape with eager execution +enabled and without enabling experimental_use_pfor. +
+`ValueError` + +If vectorization of jacobian computation fails. +
+ + + +

reset

+ +View source + + + +Clears all information stored in this tape. + +Equivalent to exiting and reentering the tape context manager with a new +tape. For example, the two following code blocks are equivalent: + +``` +with tf.GradientTape() as t: + loss = loss_fn() +with tf.GradientTape() as t: + loss += other_loss_fn() +t.gradient(loss, ...) # Only differentiates other_loss_fn, not loss_fn + + +# The following is equivalent to the above +with tf.GradientTape() as t: + loss = loss_fn() + t.reset() + loss += other_loss_fn() +t.gradient(loss, ...) # Only differentiates other_loss_fn, not loss_fn +``` + +This is useful if you don't want to exit the context manager for the tape, +or can't because the desired reset point is inside a control flow construct: + +``` +with tf.GradientTape() as t: + loss = ... + if loss > k: + t.reset() +``` + +

stop_recording

+ +View source + + + +Temporarily stops recording operations on this tape. + +Operations executed while this context manager is active will not be +recorded on the tape. This is useful for reducing the memory used by tracing +all computations. + +#### For example: + + + +``` + with tf.GradientTape(persistent=True) as t: + loss = compute_loss(model) + with t.stop_recording(): + # The gradient computation below is not traced, saving memory. + grads = t.gradient(loss, model.variables) +``` + +#### Yields: + +None + + + + + + + + + + + +
Raises
+`RuntimeError` + +if the tape is not currently recording. +
+ + + +

watch

+ +View source + + + +Ensures that `tensor` is being traced by this tape. + + + + + + + + + + + +
Args
+`tensor` + +a Tensor or list of Tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if it encounters something that is not a tensor. +
+ + + +

watched_variables

+ +View source + + + +Returns variables watched by this tape in order of construction. + + +

__enter__

+ +View source + + + +Enters a context inside which operations are recorded on this tape. + + +

__exit__

+ +View source + + + +Exits the recording context, no further operations are traced. + + + + diff --git a/site/en/api_docs/python/tf/Graph.md b/site/en/api_docs/python/tf/Graph.md new file mode 100644 index 00000000000..e05998d1ef4 --- /dev/null +++ b/site/en/api_docs/python/tf/Graph.md @@ -0,0 +1,1779 @@ +description: A TensorFlow computation, represented as a dataflow graph. + +
+ +# tf.Graph + + + + + + + + + +A TensorFlow computation, represented as a dataflow graph. + + + + + + + + + +Graphs are used by tf.functions to represent the function's computations. +Each graph contains a set of tf.Operation objects, which represent units of +computation; and tf.Tensor objects, which represent the units of data that +flow between operations. + +### Using graphs directly (deprecated) + +A tf.Graph can be constructed and used directly without a tf.function, as +was required in TensorFlow 1, but this is deprecated and it is recommended to +use a tf.function instead. If a graph is directly used, other deprecated +TensorFlow 1 classes are also required to execute the graph, such as a +tf.compat.v1.Session. + +A default graph can be registered with the tf.Graph.as_default context +manager. Then, operations will be added to the graph instead of being executed +eagerly. For example: + +```python +g = tf.Graph() +with g.as_default(): + # Define operations and tensors in `g`. + c = tf.constant(30.0) + assert c.graph is g +``` + +tf.compat.v1.get_default_graph() can be used to obtain the default graph. + +Important note: This class *is not* thread-safe for graph construction. All +operations should be created from a single thread, or external +synchronization must be provided. Unless otherwise specified, all methods +are not thread-safe. + +A `Graph` instance supports an arbitrary number of "collections" +that are identified by name. For convenience when building a large +graph, collections can store groups of related objects: for +example, the tf.Variable uses a collection (named +`tf.GraphKeys.GLOBAL_VARIABLES`) for +all variables that are created during the construction of a graph. The caller +may define additional collections by specifying a new name. + + + + + + + + + + + + + + + + + + + + + + + + + + + +
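As a small sketch of the collections mechanism described above (the collection name is arbitrary):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  c = tf.constant(30.0)
  # Store related objects under a caller-defined collection name.
  g.add_to_collection("my_constants", c)

print(g.get_collection("my_constants"))  # a list containing `c`
```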
+`building_function` + +Returns True iff this graph represents a function. +
+`collections` + +Returns the names of the collections known to this graph. +
+`finalized` + +True if this graph has been finalized. +
+`graph_def_versions` + +The GraphDef version information of this graph. + +For details on the meaning of each version, see +[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto). +
+`seed` + +The graph-level random seed of this graph. +
+`version` + +Returns a version number that increases as ops are added to the graph. + +Note that this is unrelated to the +tf.Graph.graph_def_versions. +
+ + + +## Methods + +

add_to_collection

+ +View source + + + +Stores `value` in the collection with the given `name`. + +Note that collections are not sets, so it is possible to add a value to +a collection several times. + + + + + + + + + + + + + +
Args
+`name` + +The key for the collection. The `GraphKeys` class contains many +standard names for collections. +
+`value` + +The value to add to the collection. +
+ + + +

add_to_collections

+ +View source + + + +Stores `value` in the collections given by `names`. + +Note that collections are not sets, so it is possible to add a value to +a collection several times. This function makes sure that duplicates in +`names` are ignored, but it will not check for pre-existing membership of +`value` in any of the collections in `names`. + +`names` can be any iterable, but if `names` is a string, it is treated as a +single collection name. + + + + + + + + + + + + + +
Args
+`names` + +The keys for the collections to add to. The `GraphKeys` class +contains many standard names for collections. +
+`value` + +The value to add to the collections. +
+ + + +
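A brief sketch of the behavior described above, assuming a freshly created graph (the names are arbitrary):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  x = tf.constant(1.0)

# A plain string is treated as a single collection name, and duplicate
# names in the iterable are only added once.
g.add_to_collections(["inputs", "inputs", "floats"], x)
print(len(g.get_collection("inputs")))  # 1
print(len(g.get_collection("floats")))  # 1
```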

as_default

+ +View source + + + +Returns a context manager that makes this `Graph` the default graph. + +This method should be used if you want to create multiple graphs +in the same process. For convenience, a global default graph is +provided, and all ops will be added to this graph if you do not +create a new graph explicitly. + +Use this method with the `with` keyword to specify that ops created within +the scope of a block should be added to this graph. In this case, once +the scope of the `with` is exited, the previous default graph is set again +as default. There is a stack, so it's ok to have multiple nested levels +of `as_default` calls. + +The default graph is a property of the current thread. If you +create a new thread, and wish to use the default graph in that +thread, you must explicitly add a `with g.as_default():` in that +thread's function. + +The following code examples are equivalent: + +```python +# 1. Using Graph.as_default(): +g = tf.Graph() +with g.as_default(): + c = tf.constant(5.0) + assert c.graph is g + +# 2. Constructing and making default: +with tf.Graph().as_default() as g: + c = tf.constant(5.0) + assert c.graph is g +``` + +If eager execution is enabled ops created under this context manager will be +added to the graph instead of executed eagerly. + + + + + + + + + +
Returns
+A context manager for using this graph as the default graph. +
+ + + +

as_graph_def

+ +View source + + + +Returns a serialized `GraphDef` representation of this graph. + +The serialized `GraphDef` can be imported into another `Graph` +(using tf.import_graph_def) or used with the +[C++ Session API](../../api_docs/cc/index.md). + +This method is thread-safe. + + + + + + + + + + + + + +
Args
+`from_version` + +Optional. If this is set, returns a `GraphDef` containing +only the nodes that were added to this graph since its `version` +property had the given value. +
+`add_shapes` + +If true, adds an "_output_shapes" list attr to each node with +the inferred shapes of each of its outputs. +
+ + + + + + + + + + + +
Returns
+A +[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) +protocol buffer. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `graph_def` would be too large. +
+ + + +
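For illustration, a minimal sketch (the node name is arbitrary):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  tf.constant(1.0, name="one")

graph_def = g.as_graph_def()
print([node.name for node in graph_def.node])  # ['one']
```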

as_graph_element

+ +View source + + + +Returns the object referred to by `obj`, as an `Operation` or `Tensor`. + +This function validates that `obj` represents an element of this +graph, and gives an informative error message if it is not. + +This function is the canonical way to get/validate an object of +one of the allowed types from an external argument reference in the +Session API. + +This method may be called concurrently from multiple threads. + + + + + + + + + + + + + + + + +
Args
+`obj` + +A `Tensor`, an `Operation`, or the name of a tensor or operation. Can +also be any object with an `_as_graph_element()` method that returns a +value of one of these types. Note: `_as_graph_element` will be called +inside the graph's lock and so may not modify the graph. +
+`allow_tensor` + +If true, `obj` may refer to a `Tensor`. +
+`allow_operation` + +If true, `obj` may refer to an `Operation`. +
+ + + + + + + + + + + +
Returns
+The `Tensor` or `Operation` in the Graph corresponding to `obj`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `obj` is not a type we support attempting to convert +to types. +
+`ValueError` + +If `obj` is of an appropriate type but invalid. For +example, an invalid string. +
+`KeyError` + +If `obj` is not an object in the graph. +
+ + + +
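A short sketch of the name-resolution rules (the constant and its name are arbitrary):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  c = tf.constant(5.0, name="c")

assert g.as_graph_element("c") is c.op  # an operation name
assert g.as_graph_element("c:0") is c   # a tensor name
assert g.as_graph_element(c) is c       # objects are validated and returned
```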

clear_collection

+ +View source + + + +Clears all values in a collection. + + + + + + + + + + + +
Args
+`name` + +The key for the collection. The `GraphKeys` class contains many +standard names for collections. +
+ + + +
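A minimal sketch (the collection key `"my_things"` is arbitrary):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  c = tf.constant(1.0, name="c")

g.add_to_collection("my_things", c)
assert g.get_collection("my_things") == [c]

g.clear_collection("my_things")
assert g.get_collection("my_things") == []
```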

colocate_with

+ +View source + + + +Returns a context manager that specifies an op to colocate with. + +Note: this function is not for public use, only for internal libraries. + +#### For example: + + + +```python +a = tf.Variable([1.0]) +with g.colocate_with(a): + b = tf.constant(1.0) + c = tf.add(a, b) +``` + +`b` and `c` will always be colocated with `a`, no matter where `a` +is eventually placed. + +**NOTE** Using a colocation scope resets any existing device constraints. + +If `op` is `None` then `ignore_existing` must be `True` and the new +scope resets all colocation and device constraints. + + + + + + + + + + + + + +
Args
+`op` + +The op to colocate all created ops with, or `None`. +
+`ignore_existing` + +If true, only applies colocation of this op within the +context, rather than applying all colocation properties on the stack. +If `op` is `None`, this value must be `True`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if op is None but ignore_existing is False. +
+ + + +#### Yields: + +A context manager that specifies the op with which to colocate +newly created ops. + + +

container

+ +View source + + + +Returns a context manager that specifies the resource container to use. + +Stateful operations, such as variables and queues, can maintain their +states on devices so that they can be shared by multiple processes. +A resource container is a string name under which these stateful +operations are tracked. These resources can be released or cleared +with `tf.Session.reset()`. + +#### For example: + + + +```python +with g.container('experiment0'): + # All stateful Operations constructed in this context will be placed + # in resource container "experiment0". + v1 = tf.Variable([1.0]) + v2 = tf.Variable([2.0]) + with g.container("experiment1"): + # All stateful Operations constructed in this context will be + # placed in resource container "experiment1". + v3 = tf.Variable([3.0]) + q1 = tf.queue.FIFOQueue(10, tf.float32) + # All stateful Operations constructed in this context will be + # placed in resource container "experiment0". + v4 = tf.Variable([4.0]) + q2 = tf.queue.FIFOQueue(20, tf.float32) + with g.container(""): + # All stateful Operations constructed in this context will be + # placed in the default resource container. + v5 = tf.Variable([5.0]) + q3 = tf.queue.FIFOQueue(30, tf.float32) + +# Resets container "experiment0", after which the state of v1, v2, v4, q2 +# will become undefined (such as uninitialized). +tf.Session.reset(target, ["experiment0"]) +``` + + + + + + + + + +
Args
+`container_name` + +container name string. +
+ + + + + + + + + + + +
Returns
+A context manager for defining resource containers for stateful ops, +yields the container name. +
+ + + +

control_dependencies

+ +View source + + + +Returns a context manager that specifies control dependencies. + +Use with the `with` keyword to specify that all operations constructed +within the context should have control dependencies on +`control_inputs`. For example: + +```python +with g.control_dependencies([a, b, c]): + # `d` and `e` will only run after `a`, `b`, and `c` have executed. + d = ... + e = ... +``` + +Multiple calls to `control_dependencies()` can be nested, and in +that case a new `Operation` will have control dependencies on the union +of `control_inputs` from all active contexts. + +```python +with g.control_dependencies([a, b]): + # Ops constructed here run after `a` and `b`. + with g.control_dependencies([c, d]): + # Ops constructed here run after `a`, `b`, `c`, and `d`. +``` + +You can pass None to clear the control dependencies: + +```python +with g.control_dependencies([a, b]): + # Ops constructed here run after `a` and `b`. + with g.control_dependencies(None): + # Ops constructed here run normally, not waiting for either `a` or `b`. + with g.control_dependencies([c, d]): + # Ops constructed here run after `c` and `d`, also not waiting + # for either `a` or `b`. +``` + +*N.B.* The control dependencies context applies *only* to ops that +are constructed within the context. Merely using an op or tensor +in the context does not add a control dependency. The following +example illustrates this point: + +```python +# WRONG +def my_func(pred, tensor): + t = tf.matmul(tensor, tensor) + with tf.control_dependencies([pred]): + # The matmul op is created outside the context, so no control + # dependency will be added. + return t + +# RIGHT +def my_func(pred, tensor): + with tf.control_dependencies([pred]): + # The matmul op is created in the context, so a control dependency + # will be added. + return tf.matmul(tensor, tensor) +``` + +Also note that though execution of ops created under this scope will trigger +execution of the dependencies, the ops created under this scope might still +be pruned from a normal tensorflow graph. For example, in the following +snippet of code the dependencies are never executed: + +```python + loss = model.loss() + with tf.control_dependencies(dependencies): + loss = loss + tf.constant(1) # note: dependencies ignored in the + # backward pass + return tf.gradients(loss, model.variables) +``` + +This is because evaluating the gradient graph does not require evaluating +the constant(1) op created in the forward pass. + + + + + + + + + + +
Args
+`control_inputs` + +A list of `Operation` or `Tensor` objects which must be +executed or computed before running the operations defined in the +context. Can also be `None` to clear the control dependencies. +
+ + + + + + + + + + + +
Returns
+A context manager that specifies control dependencies for all +operations constructed within the context. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If `control_inputs` is not a list of `Operation` or +`Tensor` objects. +
+ + + +

create_op

+ +View source + + + +Creates an `Operation` in this graph. (deprecated arguments) + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(compute_shapes)`. They will be removed in a future version. +Instructions for updating: +Shapes are always computed; don't use the compute_shapes as it has no effect. + +This is a low-level interface for creating an `Operation`. Most +programs will not call this method directly, and instead use the +Python op constructors, such as tf.constant(), which add ops to +the default graph. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`op_type` + +The `Operation` type to create. This corresponds to the +`OpDef.name` field for the proto that defines the operation. +
+`inputs` + +A list of `Tensor` objects that will be inputs to the `Operation`. +
+`dtypes` + +(Optional) A list of `DType` objects that will be the types of the +tensors that the operation produces. +
+`input_types` + +(Optional.) A list of `DType`s that will be the types of the +tensors that the operation consumes. By default, uses the base `DType` +of each input in `inputs`. Operations that expect reference-typed inputs +must specify `input_types` explicitly. +
+`name` + +(Optional.) A string name for the operation. If not specified, a +name is generated based on `op_type`. +
+`attrs` + +(Optional.) A dictionary where the key is the attribute name (a +string) and the value is the respective `attr` attribute of the +`NodeDef` proto that will represent the operation (an `AttrValue` +proto). +
+`op_def` + +(Optional.) The `OpDef` proto that describes the `op_type` that +the operation will have. +
+`compute_shapes` + +(Optional.) Deprecated. Has no effect (shapes are always +computed). +
+`compute_device` + +(Optional.) If True, device functions will be executed to +compute the device property of the Operation. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if any of the inputs is not a `Tensor`. +
+`ValueError` + +if colocation conflicts with existing device assignment. +
+ + + + + + + + + + + +
Returns
+An `Operation` object. +
+ + + +
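A rough sketch of this low-level interface (most code should prefer the op constructors; the `"Identity"` op type and the name below are illustrative):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  x = tf.constant(1.0)

# Add an "Identity" node to the graph directly.
op = g.create_op("Identity", inputs=[x], dtypes=[tf.float32], name="my_identity")
assert op.type == "Identity"
assert op.outputs[0].dtype == tf.float32
```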

device

+ +View source + + + +Returns a context manager that specifies the default device to use. + +The `device_name_or_function` argument may either be a device name +string, a device function, or None: + +* If it is a device name string, all operations constructed in + this context will be assigned to the device with that name, unless + overridden by a nested `device()` context. +* If it is a function, it will be treated as a function from + Operation objects to device name strings, and invoked each time + a new Operation is created. The Operation will be assigned to + the device with the returned name. +* If it is None, all `device()` invocations from the enclosing context + will be ignored. + +For information about the valid syntax of device name strings, see +the documentation in +[`DeviceNameUtils`](https://www.tensorflow.org/code/tensorflow/core/util/device_name_utils.h). + +#### For example: + + + +```python +with g.device('/device:GPU:0'): + # All operations constructed in this context will be placed + # on GPU 0. + with g.device(None): + # All operations constructed in this context will have no + # assigned device. + +# Defines a function from `Operation` to device string. +def matmul_on_gpu(n): + if n.type == "MatMul": + return "/device:GPU:0" + else: + return "/cpu:0" + +with g.device(matmul_on_gpu): + # All operations of type "MatMul" constructed in this context + # will be placed on GPU 0; all other operations will be placed + # on CPU 0. +``` + +**N.B.** The device scope may be overridden by op wrappers or +other library code. For example, a variable assignment op +`v.assign()` must be colocated with the tf.Variable `v`, and +incompatible device scopes will be ignored. + + + + + + + + + + +
Args
+`device_name_or_function` + +The device name or function to use in the +context. +
+ + + +#### Yields: + +A context manager that specifies the default device to use for newly +created ops. + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If device scopes are not properly nested. +
+ + + +

finalize

+ +View source + + + +Finalizes this graph, making it read-only. + +After calling `g.finalize()`, no new operations can be added to +`g`. This method is used to ensure that no operations are added +to a graph when it is shared between multiple threads, for example +when using a tf.compat.v1.train.QueueRunner. + +
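For example, attempting to add an op to a finalized graph raises a `RuntimeError` (a minimal sketch):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  tf.constant(1.0)

g.finalize()

try:
  with g.as_default():
    tf.constant(2.0)      # The graph is now read-only.
except RuntimeError as e:
  print(e)                # The graph reports that it is finalized.
```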

get_all_collection_keys

+ +View source + + + +Returns a list of collections used in this graph. + + +

get_collection

+ +View source + + + +Returns a list of values in the collection with the given `name`. + +This is different from `get_collection_ref()`, which always returns the +actual collection list if it exists: this method instead returns a new +copy of the list each time it is called. + + + + + + + + + + + + + +
Args
+`name` + +The key for the collection. For example, the `GraphKeys` class +contains many standard names for collections. +
+`scope` + +(Optional.) A string. If supplied, the resulting list is filtered +to include only items whose `name` attribute matches `scope` using +`re.match`. Items without a `name` attribute are never returned if a +scope is supplied. The choice of `re.match` means that a `scope` without +special tokens filters by prefix. +
+ + + + + + + + + + + +
Returns
+The list of values in the collection with the given `name`, or +an empty list if no value has been added to that collection. The +list contains the values in the order under which they were +collected. +
+ + + +
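A small sketch of the `scope` filtering described above (the collection key and scope names are illustrative):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  with g.name_scope("layer1"):
    a = tf.constant(1.0, name="a")   # name: "layer1/a:0"
  with g.name_scope("layer2"):
    b = tf.constant(2.0, name="b")   # name: "layer2/b:0"

g.add_to_collection("activations", a)
g.add_to_collection("activations", b)

assert g.get_collection("activations") == [a, b]
# `scope` filters by prefix on each item's `name` attribute.
assert g.get_collection("activations", scope="layer1") == [a]
```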

get_collection_ref

+ +View source + + + +Returns a list of values in the collection with the given `name`. + +If the collection exists, this returns the list itself, which can +be modified in place to change the collection. If the collection does +not exist, it is created as an empty list and the list is returned. + +This is different from `get_collection()` which always returns a copy of +the collection list if it exists and never creates an empty collection. + + + + + + + + + + +
Args
+`name` + +The key for the collection. For example, the `GraphKeys` class +contains many standard names for collections. +
+ + + + + + + + + + + +
Returns
+The list of values in the collection with the given `name`, or an empty +list if no value has been added to that collection. +
+ + + +
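A minimal sketch of the difference from `get_collection()` (the collection key is arbitrary):

```python
import tensorflow as tf

g = tf.Graph()

ref = g.get_collection_ref("my_collection")   # Creates an empty list and returns it.
ref.append("anything")

assert g.get_collection_ref("my_collection") is ref        # The same list object.
assert g.get_collection("my_collection") == ["anything"]   # A copy of the list...
assert g.get_collection("my_collection") is not ref        # ...not the list itself.
```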

get_name_scope

+ +View source + + + +Returns the current name scope. + + +#### For example: + + + +```python +with tf.name_scope('scope1'): + with tf.name_scope('scope2'): + print(tf.compat.v1.get_default_graph().get_name_scope()) +``` +would print the string `scope1/scope2`. + + + + + + + + + +
Returns
+A string representing the current name scope. +
+ + + +

get_operation_by_name

+ +View source + + + +Returns the `Operation` with the given `name`. + +This method may be called concurrently from multiple threads. + + + + + + + + + + +
Args
+`name` + +The name of the `Operation` to return. +
+ + + + + + + + + + + +
Returns
+The `Operation` with the given `name`. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `name` is not a string. +
+`KeyError` + +If `name` does not correspond to an operation in this graph. +
+ + + +
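For example (a minimal sketch; the op name is illustrative):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  c = tf.constant(5.0, name="c")

assert g.get_operation_by_name("c") is c.op
```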

get_operations

+ +View source + + + +Return the list of operations in the graph. + +You can modify the operations in place, but modifications +to the list such as inserts/delete have no effect on the +list of operations known to the graph. + +This method may be called concurrently from multiple threads. + + + + + + + + + +
Returns
+A list of Operations. +
+ + + +

get_tensor_by_name

+ +View source + + + +Returns the `Tensor` with the given `name`. + +This method may be called concurrently from multiple threads. + + + + + + + + + + +
Args
+`name` + +The name of the `Tensor` to return. +
+ + + + + + + + + + + +
Returns
+The `Tensor` with the given `name`. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `name` is not a string. +
+`KeyError` + +If `name` does not correspond to a tensor in this graph. +
+ + + +
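For example (a minimal sketch; tensor names take the form `"op_name:output_index"`):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  c = tf.constant(5.0, name="c")

assert g.get_tensor_by_name("c:0") is c
```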

gradient_override_map

+ +View source + + + +EXPERIMENTAL: A context manager for overriding gradient functions. + +This context manager can be used to override the gradient function +that will be used for ops within the scope of the context. + +#### For example: + + + +```python +@tf.RegisterGradient("CustomSquare") +def _custom_square_grad(op, grad): + # ... + +with tf.Graph().as_default() as g: + c = tf.constant(5.0) + s_1 = tf.square(c) # Uses the default gradient for tf.square. + with g.gradient_override_map({"Square": "CustomSquare"}): + s_2 = tf.square(c) # Uses _custom_square_grad to compute the + # gradient of s_2. +``` + + + + + + + + + + +
Args
+`op_type_map` + +A dictionary mapping op type strings to alternative op type +strings. +
+ + + + + + + + + + + +
Returns
+A context manager that sets the alternative op type to be used for one +or more ops created in that context. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If `op_type_map` is not a dictionary mapping strings to +strings. +
+ + + +

is_feedable

+ +View source + + + +Returns `True` if and only if `tensor` is feedable. + + +

is_fetchable

+ +View source + + + +Returns `True` if and only if `tensor_or_op` is fetchable. + + +

name_scope

+ +View source + + + +Returns a context manager that creates hierarchical names for operations. + +A graph maintains a stack of name scopes. A `with name_scope(...):` +statement pushes a new name onto the stack for the lifetime of the context. + +The `name` argument will be interpreted as follows: + +* A string (not ending with '/') will create a new name scope, in which + `name` is appended to the prefix of all operations created in the + context. If `name` has been used before, it will be made unique by + calling `self.unique_name(name)`. +* A scope previously captured from a `with g.name_scope(...) as + scope:` statement will be treated as an "absolute" name scope, which + makes it possible to re-enter existing scopes. +* A value of `None` or the empty string will reset the current name scope + to the top-level (empty) name scope. + +#### For example: + + + +```python +with tf.Graph().as_default() as g: + c = tf.constant(5.0, name="c") + assert c.op.name == "c" + c_1 = tf.constant(6.0, name="c") + assert c_1.op.name == "c_1" + + # Creates a scope called "nested" + with g.name_scope("nested") as scope: + nested_c = tf.constant(10.0, name="c") + assert nested_c.op.name == "nested/c" + + # Creates a nested scope called "inner". + with g.name_scope("inner"): + nested_inner_c = tf.constant(20.0, name="c") + assert nested_inner_c.op.name == "nested/inner/c" + + # Create a nested scope called "inner_1". + with g.name_scope("inner"): + nested_inner_1_c = tf.constant(30.0, name="c") + assert nested_inner_1_c.op.name == "nested/inner_1/c" + + # Treats `scope` as an absolute name scope, and + # switches to the "nested/" scope. + with g.name_scope(scope): + nested_d = tf.constant(40.0, name="d") + assert nested_d.op.name == "nested/d" + + with g.name_scope(""): + e = tf.constant(50.0, name="e") + assert e.op.name == "e" +``` + +The name of the scope itself can be captured by `with +g.name_scope(...) as scope:`, which stores the name of the scope +in the variable `scope`. This value can be used to name an +operation that represents the overall result of executing the ops +in a scope. For example: + +```python +inputs = tf.constant(...) +with g.name_scope('my_layer') as scope: + weights = tf.Variable(..., name="weights") + biases = tf.Variable(..., name="biases") + affine = tf.matmul(inputs, weights) + biases + output = tf.nn.relu(affine, name=scope) +``` + +NOTE: This constructor validates the given `name`. Valid scope +names match one of the following regular expressions: + + [A-Za-z0-9.][A-Za-z0-9_.\-/]* (for scopes at the root) + [A-Za-z0-9_.\-/]* (for other scopes) + + + + + + + + + + +
Args
+`name` + +A name for the scope. +
+ + + + + + + + + + + +
Returns
+A context manager that installs `name` as a new name scope. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `name` is not a valid scope name, according to the rules +above. +
+ + + +

prevent_feeding

+ +View source + + + +Marks the given `tensor` as unfeedable in this graph. + + +

prevent_fetching

+ +View source + + + +Marks the given `op` as unfetchable in this graph. + + +
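A minimal sketch tying together `is_feedable`, `is_fetchable`, `prevent_feeding`, and `prevent_fetching` (the graph contents are illustrative):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  c = tf.constant(1.0)

assert g.is_feedable(c)
g.prevent_feeding(c)
assert not g.is_feedable(c)

assert g.is_fetchable(c.op)
g.prevent_fetching(c.op)
assert not g.is_fetchable(c.op)
```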

switch_to_thread_local

+ +View source + + + +Make device, colocation and dependency stacks thread-local. + +Device, colocation and dependency stacks are not thread-local by default. +If multiple threads access them, then the state is shared. This means that +one thread may affect the behavior of another thread. + +After this method is called, the stacks become thread-local. If multiple +threads access them, then the state is not shared. Each thread uses its own +value; a thread doesn't affect other threads by mutating such a stack. + +The initial value for every thread's stack is set to the current value +of the stack when `switch_to_thread_local()` was first called. + +

unique_name

+ +View source + + + +Return a unique operation name for `name`. + +Note: You rarely need to call `unique_name()` directly. Most of +the time you just need to create `with g.name_scope()` blocks to +generate structured names. + +`unique_name` is used to generate structured names, separated by +`"/"`, to help identify operations when debugging a graph. +Operation names are displayed in error messages reported by the +TensorFlow runtime, and in various visualization tools such as +TensorBoard. + +If `mark_as_used` is set to `True`, which is the default, a new +unique name is created and marked as in use. If it's set to `False`, +the unique name is returned without actually being marked as used. +This is useful when the caller simply wants to know what the name +to be created will be. + + + + + + + + + + + + + +
Args
+`name` + +The name for an operation. +
+`mark_as_used` + +Whether to mark this name as being used. +
+ + + + + + + + + + + +
Returns
+A string to be passed to `create_op()` that will be used +to name the operation being created. +
+ + + + + diff --git a/site/en/api_docs/python/tf/IndexedSlices.md b/site/en/api_docs/python/tf/IndexedSlices.md new file mode 100644 index 00000000000..3481b661e5c --- /dev/null +++ b/site/en/api_docs/python/tf/IndexedSlices.md @@ -0,0 +1,173 @@ +description: A sparse representation of a set of tensor slices at given indices. + +
+ + + + + +
+ +# tf.IndexedSlices + + + + + + + + + +A sparse representation of a set of tensor slices at given indices. + + + + + + + + + +This class is a simple wrapper for a pair of `Tensor` objects: + +* `values`: A `Tensor` of any dtype with shape `[D0, D1, ..., Dn]`. +* `indices`: A 1-D integer `Tensor` with shape `[D0]`. + +An `IndexedSlices` is typically used to represent a subset of a larger +tensor `dense` of shape `[LARGE0, D1, .. , DN]` where `LARGE0 >> D0`. +The values in `indices` are the indices in the first dimension of +the slices that have been extracted from the larger tensor. + +The dense tensor `dense` represented by an `IndexedSlices` `slices` has + +```python +dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...] +``` + +The `IndexedSlices` class is used principally in the definition of +gradients for operations that have sparse gradients +(e.g. tf.gather). + +Contrast this representation with +tf.SparseTensor, +which uses multi-dimensional indices and scalar values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dense_shape` + +A 1-D `Tensor` containing the shape of the corresponding dense tensor. +
+`device` + +The name of the device on which `values` will be produced, or `None`. +
+`dtype` + +The `DType` of elements in this tensor. +
+`graph` + +The `Graph` that contains the values, indices, and shape tensors. +
+`indices` + +A 1-D `Tensor` containing the indices of the slices. +
+`name` + +The name of this `IndexedSlices`. +
+`op` + +The `Operation` that produces `values` as an output. +
+`shape` + +Gets the tf.TensorShape representing the shape of the dense tensor. +
+`values` + +A `Tensor` containing the values of the slices. +
+ + + +## Methods + +

consumers

+ +View source + + + + + + +

__neg__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/IndexedSlicesSpec.md b/site/en/api_docs/python/tf/IndexedSlicesSpec.md new file mode 100644 index 00000000000..acf89508ad4 --- /dev/null +++ b/site/en/api_docs/python/tf/IndexedSlicesSpec.md @@ -0,0 +1,214 @@ +description: Type specification for a tf.IndexedSlices. + +
+ + + + + + + +
+ +# tf.IndexedSlicesSpec + + + + + + + + + +Type specification for a tf.IndexedSlices. + +Inherits From: [`TypeSpec`](../tf/TypeSpec.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +The dense shape of the `IndexedSlices`, or `None` to allow any +dense shape. +
+`dtype` + +tf.DType of values in the `IndexedSlices`. +
+`indices_dtype` + +tf.DType of the `indices` in the `IndexedSlices`. One +of tf.int32 or tf.int64. +
+`dense_shape_dtype` + +tf.DType of the `dense_shape` in the `IndexedSlices`. +One of tf.int32, tf.int64, or `None` (if the `IndexedSlices` has +no `dense_shape` tensor). +
+`indices_shape` + +The shape of the `indices` component, which indicates +how many slices are in the `IndexedSlices`. +
+ + + + + + + + + + + + + + +
+`value_type` + + +
+ + + +## Methods + +

is_compatible_with

+ +View source + + + +Returns true if `spec_or_value` is compatible with this TypeSpec. + + +

most_specific_compatible_type

+ +View source + + + +Returns the most specific TypeSpec compatible with `self` and `other`. + + + + + + + + + + + +
Args
+`other` + +A `TypeSpec`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If there is no TypeSpec that is compatible with both `self` +and `other`. +
+ + + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/Module.md b/site/en/api_docs/python/tf/Module.md new file mode 100644 index 00000000000..a0676571974 --- /dev/null +++ b/site/en/api_docs/python/tf/Module.md @@ -0,0 +1,256 @@ +description: Base neural network module class. + +
+ + + + +
+ +# tf.Module + + + + + + + + + +Base neural network module class. + + + + + + + + + +A module is a named container for tf.Variables, other tf.Modules and +functions which apply to user input. For example a dense layer in a neural +network might be implemented as a tf.Module: + + ``` + >>> class Dense(tf.Module): + ... def __init__(self, in_features, out_features, name=None): + ... super(Dense, self).__init__(name=name) + ... self.w = tf.Variable( + ... tf.random.normal([in_features, out_features]), name='w') + ... self.b = tf.Variable(tf.zeros([out_features]), name='b') + ... def __call__(self, x): + ... y = tf.matmul(x, self.w) + self.b + ... return tf.nn.relu(y) + ``` + +You can use the Dense layer as you would expect: + +``` +>>> d = Dense(in_features=3, out_features=2) +>>> d(tf.ones([1, 3])) + +``` + + +By subclassing tf.Module instead of `object` any tf.Variable or +tf.Module instances assigned to object properties can be collected using +the `variables`, `trainable_variables` or `submodules` property: + +``` +>>> d.variables + (, + ) +``` + + +Subclasses of tf.Module can also take advantage of the `_flatten` method +which can be used to implement tracking of any other types. + +All tf.Module classes have an associated tf.name_scope which can be used +to group operations in TensorBoard and create hierarchies for variable names +which can help with debugging. We suggest using the name scope when creating +nested submodules/parameters or for forward methods whose graph you might want +to inspect in TensorBoard. You can enter the name scope explicitly using +`with self.name_scope:` or you can annotate methods (apart from `__init__`) +with `@tf.Module.with_name_scope`. + +```python +class MLP(tf.Module): + def __init__(self, input_size, sizes, name=None): + super(MLP, self).__init__(name=name) + self.layers = [] + with self.name_scope: + for size in sizes: + self.layers.append(Dense(input_size=input_size, output_size=size)) + input_size = size + + @tf.Module.with_name_scope + def __call__(self, x): + for layer in self.layers: + x = layer(x) + return x +``` + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +Returns the name of this module as passed or determined in the ctor. + +NOTE: This is not the same as the `self.name_scope.name` which includes +parent module names. +
+`name_scope` + +Returns a tf.name_scope instance for this class. +
+`submodules` + +Sequence of all sub-modules. + +Submodules are modules which are properties of this module, or found as +properties of modules which are properties of this module (and so on). + +``` +>>> a = tf.Module() +>>> b = tf.Module() +>>> c = tf.Module() +>>> a.b = b +>>> b.c = c +>>> list(a.submodules) == [b, c] +True +>>> list(b.submodules) == [c] +True +>>> list(c.submodules) == [] +True +``` +
+`trainable_variables` + +Sequence of trainable variables owned by this module and its submodules. + +Note: this method uses reflection to find variables on the current instance +and submodules. For performance reasons you may wish to cache the result +of calling this method if you don't expect the return value to change. +
+`variables` + +Sequence of variables owned by this module and its submodules. + +Note: this method uses reflection to find variables on the current instance +and submodules. For performance reasons you may wish to cache the result +of calling this method if you don't expect the return value to change. +
+ + + +## Methods + +

with_name_scope

+ +View source + + + +Decorator to automatically enter the module name scope. + +``` +>>> class MyModule(tf.Module): +... @tf.Module.with_name_scope +... def __call__(self, x): +... if not hasattr(self, 'w'): +... self.w = tf.Variable(tf.random.normal([x.shape[1], 3])) +... return tf.matmul(x, self.w) +``` + +Using the above module would produce tf.Variables and tf.Tensors whose +names included the module name: + +``` +>>> mod = MyModule() +>>> mod(tf.ones([1, 2])) + +>>> mod.w + +``` + + + + + + + + + + +
Args
+`method` + +The method to wrap. +
+ + + + + + + + + + + +
Returns
+The original method wrapped such that it enters the module's name scope. +
+ + + + + diff --git a/site/en/api_docs/python/tf/Operation.md b/site/en/api_docs/python/tf/Operation.md new file mode 100644 index 00000000000..25f9fd97565 --- /dev/null +++ b/site/en/api_docs/python/tf/Operation.md @@ -0,0 +1,391 @@ +description: Represents a graph node that performs computation on tensors. + +
+ + + + + + + +
+ +# tf.Operation + + + + + + + + + +Represents a graph node that performs computation on tensors. + + + + + + + + + +An `Operation` is a node in a tf.Graph that takes zero or more `Tensor` +objects as input, and produces zero or more `Tensor` objects as output. +Objects of type `Operation` are created by calling a Python op constructor +(such as tf.matmul) within a tf.function or under a tf.Graph.as_default +context manager. + +For example, within a tf.function, `c = tf.matmul(a, b)` creates an +`Operation` of type "MatMul" that takes tensors `a` and `b` as input, and +produces `c` as output. + +If a tf.compat.v1.Session is used, an `Operation` of a tf.Graph can be +executed by passing it to `tf.Session.run`. `op.run()` is a shortcut for +calling `tf.compat.v1.get_default_session().run(op)`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`node_def` + +`node_def_pb2.NodeDef`. `NodeDef` for the `Operation`. Used for +attributes of `node_def_pb2.NodeDef`, typically `name`, `op`, and +`device`. The `input` attribute is irrelevant here as it will be +computed when generating the model. +
+`g` + +`Graph`. The parent graph. +
+`inputs` + +list of `Tensor` objects. The inputs to this `Operation`. +
+`output_types` + +list of `DType` objects. List of the types of the `Tensors` +computed by this operation. The length of this list indicates the +number of output endpoints of the `Operation`. +
+`control_inputs` + +list of operations or tensors from which to have a control +dependency. +
+`input_types` + +List of `DType` objects representing the types of the tensors +accepted by the `Operation`. By default uses `[x.dtype.base_dtype for x +in inputs]`. Operations that expect reference-typed inputs must specify +these explicitly. +
+`original_op` + +Optional. Used to associate the new `Operation` with an +existing `Operation` (for example, a replica with the op that was +replicated). +
+`op_def` + +Optional. The `op_def_pb2.OpDef` proto that describes the op type +that this `Operation` represents. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if control inputs are not Operations or Tensors, +or if `node_def` is not a `NodeDef`, +or if `g` is not a `Graph`, +or if `inputs` are not tensors, +or if `inputs` and `input_types` are incompatible. +
+`ValueError` + +if the `node_def` name is not valid. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`control_inputs` + +The `Operation` objects on which this op has a control dependency. + +Before this op is executed, TensorFlow will ensure that the +operations in `self.control_inputs` have finished executing. This +mechanism can be used to run ops sequentially for performance +reasons, or to ensure that the side effects of an op are observed +in the correct order. +
+`device` + +The name of the device to which this op has been assigned, if any. +
+`graph` + +The `Graph` that contains this operation. +
+`inputs` + +The sequence of `Tensor` objects representing the data inputs of this op. +
+`name` + +The full name of this operation. +
+`node_def` + +Returns the `NodeDef` representation of this operation. +
+`op_def` + +Returns the `OpDef` proto that represents the type of this op. +
+`outputs` + +The list of `Tensor` objects representing the outputs of this op. +
+`traceback` + +Returns the call stack from when this operation was constructed. +
+`type` + +The type of the op (e.g. `"MatMul"`). +
+ + + +## Methods + +

colocation_groups

+ +View source + + + +Returns the list of colocation groups of the op. + + +

get_attr

+ +View source + + + +Returns the value of the attr of this op with the given `name`. + + + + + + + + + + + +
Args
+`name` + +The name of the attr to fetch. +
+ + + + + + + + + + + +
Returns
+The value of the attr, as a Python object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If this op does not have an attr with the given `name`. +
+ + + +
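For example, a `Const` operation exposes its `dtype` and `value` attrs (a minimal sketch; the op name is illustrative):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  tf.constant([1.0, 2.0], name="c")

op = g.get_operation_by_name("c")
print(op.get_attr("dtype"))   # The DType tf.float32.
print(op.get_attr("value"))   # The TensorProto holding [1.0, 2.0].
```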

run

+ +View source + + + +Runs this operation in a `Session`. + +Calling this method will execute all preceding operations that +produce the inputs needed for this operation. + +*N.B.* Before invoking Operation.run(), its graph must have been +launched in a session, and either a default session must be +available, or `session` must be specified explicitly. + + + + + + + + + + + + + +
Args
+`feed_dict` + +A dictionary that maps `Tensor` objects to feed values. See +`tf.Session.run` for a description of the valid feed values. +
+`session` + +(Optional.) The `Session` to be used to run this operation. If +`None`, the default session will be used. +
+ + + +
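A minimal sketch, assuming a tf.compat.v1.Session is available; entering the `with` block installs it as the default session that `run()` uses:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  noop = tf.no_op(name="noop")   # A trivial Operation.

with tf.compat.v1.Session(graph=g):
  noop.run()   # Equivalent to tf.compat.v1.get_default_session().run(noop).
```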

values

+ +View source + + + +DEPRECATED: Use outputs. + + + + diff --git a/site/en/api_docs/python/tf/OptionalSpec.md b/site/en/api_docs/python/tf/OptionalSpec.md new file mode 100644 index 00000000000..8f93e79b903 --- /dev/null +++ b/site/en/api_docs/python/tf/OptionalSpec.md @@ -0,0 +1,178 @@ +description: Represents an optional potentially containing a structured value. + +
+ + + + + + + + +
+ +# tf.OptionalSpec + + + + + + + + + +Represents an optional potentially containing a structured value. + +Inherits From: [`TypeSpec`](../tf/TypeSpec.md) + + + + + + + + + + + + + + + + + + + + + +
+`value_type` + +The Python type for values that are compatible with this TypeSpec. +
+ + + +## Methods + +

from_value

+ +View source + + + + + + +

is_compatible_with

+ +View source + + + +Returns true if `spec_or_value` is compatible with this TypeSpec. + + +

most_specific_compatible_type

+ +View source + + + +Returns the most specific TypeSpec compatible with `self` and `other`. + + + + + + + + + + + +
Args
+`other` + +A `TypeSpec`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If there is no TypeSpec that is compatible with both `self` +and `other`. +
+ + + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/RaggedTensor.md b/site/en/api_docs/python/tf/RaggedTensor.md new file mode 100644 index 00000000000..e03897ad69b --- /dev/null +++ b/site/en/api_docs/python/tf/RaggedTensor.md @@ -0,0 +1,4877 @@ +description: Represents a ragged tensor. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.RaggedTensor + + + + + + + + + +Represents a ragged tensor. + + + + + + + + + +A `RaggedTensor` is a tensor with one or more *ragged dimensions*, which are +dimensions whose slices may have different lengths. For example, the inner +(column) dimension of `rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is ragged, +since the column slices (`rt[0, :]`, ..., `rt[4, :]`) have different lengths. +Dimensions whose slices all have the same length are called *uniform +dimensions*. The outermost dimension of a `RaggedTensor` is always uniform, +since it consists of a single slice (and so there is no possibility for +differing slice lengths). + +The total number of dimensions in a `RaggedTensor` is called its *rank*, +and the number of ragged dimensions in a `RaggedTensor` is called its +*ragged-rank*. A `RaggedTensor`'s ragged-rank is fixed at graph creation +time: it can't depend on the runtime values of `Tensor`s, and can't vary +dynamically for different session runs. + +### Potentially Ragged Tensors + +Many ops support both `Tensor`s and `RaggedTensor`s. The term "potentially +ragged tensor" may be used to refer to a tensor that might be either a +`Tensor` or a `RaggedTensor`. The ragged-rank of a `Tensor` is zero. + +### Documenting RaggedTensor Shapes + +When documenting the shape of a RaggedTensor, ragged dimensions can be +indicated by enclosing them in parentheses. For example, the shape of +a 3-D `RaggedTensor` that stores the fixed-size word embedding for each +word in a sentence, for each sentence in a batch, could be written as +`[num_sentences, (num_words), embedding_size]`. The parentheses around +`(num_words)` indicate that dimension is ragged, and that the length +of each element list in that dimension may vary for each item. + +### Component Tensors + +Internally, a `RaggedTensor` consists of a concatenated list of values that +are partitioned into variable-length rows. In particular, each `RaggedTensor` +consists of: + + * A `values` tensor, which concatenates the variable-length rows into a + flattened list. For example, the `values` tensor for + `[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is `[3, 1, 4, 1, 5, 9, 2, 6]`. + + * A `row_splits` vector, which indicates how those flattened values are + divided into rows. In particular, the values for row `rt[i]` are stored + in the slice `rt.values[rt.row_splits[i]:rt.row_splits[i+1]]`. + +#### Example: + + + +``` +>>> print(tf.RaggedTensor.from_row_splits( +... values=[3, 1, 4, 1, 5, 9, 2, 6], +... row_splits=[0, 4, 4, 7, 8, 8])) + +``` + +### Alternative Row-Partitioning Schemes + +In addition to `row_splits`, ragged tensors provide support for four other +row-partitioning schemes: + + * `row_lengths`: a vector with shape `[nrows]`, which specifies the length + of each row. + + * `value_rowids` and `nrows`: `value_rowids` is a vector with shape + `[nvals]`, corresponding one-to-one with `values`, which specifies + each value's row index. In particular, the row `rt[row]` consists of the + values `rt.values[j]` where `value_rowids[j]==row`. `nrows` is an + integer scalar that specifies the number of rows in the + `RaggedTensor`. (`nrows` is used to indicate trailing empty rows.) + + * `row_starts`: a vector with shape `[nrows]`, which specifies the start + offset of each row. Equivalent to `row_splits[:-1]`. + + * `row_limits`: a vector with shape `[nrows]`, which specifies the stop + offset of each row. Equivalent to `row_splits[1:]`. + + * `uniform_row_length`: A scalar tensor, specifying the length of every + row. 
This row-partitioning scheme may only be used if all rows have + the same length. + +Example: The following ragged tensors are equivalent, and all represent the +nested list `[[3, 1, 4, 1], [], [5, 9, 2], [6], []]`. + +``` +>>> values = [3, 1, 4, 1, 5, 9, 2, 6] +>>> rt1 = RaggedTensor.from_row_splits(values, row_splits=[0, 4, 4, 7, 8, 8]) +>>> rt2 = RaggedTensor.from_row_lengths(values, row_lengths=[4, 0, 3, 1, 0]) +>>> rt3 = RaggedTensor.from_value_rowids( +... values, value_rowids=[0, 0, 0, 0, 2, 2, 2, 3], nrows=5) +>>> rt4 = RaggedTensor.from_row_starts(values, row_starts=[0, 4, 4, 7, 8]) +>>> rt5 = RaggedTensor.from_row_limits(values, row_limits=[4, 4, 7, 8, 8]) +``` + +### Multiple Ragged Dimensions + +`RaggedTensor`s with multiple ragged dimensions can be defined by using +a nested `RaggedTensor` for the `values` tensor. Each nested `RaggedTensor` +adds a single ragged dimension. + +``` +>>> inner_rt = RaggedTensor.from_row_splits( # =rt1 from above +... values=[3, 1, 4, 1, 5, 9, 2, 6], row_splits=[0, 4, 4, 7, 8, 8]) +>>> outer_rt = RaggedTensor.from_row_splits( +... values=inner_rt, row_splits=[0, 3, 3, 5]) +>>> print(outer_rt.to_list()) +[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]] +>>> print(outer_rt.ragged_rank) +2 +``` + +The factory function RaggedTensor.from_nested_row_splits may be used to +construct a `RaggedTensor` with multiple ragged dimensions directly, by +providing a list of `row_splits` tensors: + +``` +>>> RaggedTensor.from_nested_row_splits( +... flat_values=[3, 1, 4, 1, 5, 9, 2, 6], +... nested_row_splits=([0, 3, 3, 5], [0, 4, 4, 7, 8, 8])).to_list() +[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]] +``` + +### Uniform Inner Dimensions + +`RaggedTensor`s with uniform inner dimensions can be defined +by using a multidimensional `Tensor` for `values`. + +``` +>>> rt = RaggedTensor.from_row_splits(values=tf.ones([5, 3], tf.int32), +... row_splits=[0, 2, 5]) +>>> print(rt.to_list()) +[[[1, 1, 1], [1, 1, 1]], + [[1, 1, 1], [1, 1, 1], [1, 1, 1]]] +>>> print(rt.shape) +(2, None, 3) +``` + +### Uniform Outer Dimensions + +`RaggedTensor`s with uniform outer dimensions can be defined by using +one or more `RaggedTensor` with a `uniform_row_length` row-partitioning +tensor. For example, a `RaggedTensor` with shape `[2, 2, None]` can be +constructed with this method from a `RaggedTensor` values with shape +`[4, None]`: + +``` +>>> values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]]) +>>> print(values.shape) +(4, None) +>>> rt6 = tf.RaggedTensor.from_uniform_row_length(values, 2) +>>> print(rt6) + +>>> print(rt6.shape) +(2, 2, None) +``` + +Note that `rt6` only contains one ragged dimension (the innermost +dimension). In contrast, if `from_row_splits` is used to construct a similar +`RaggedTensor`, then that `RaggedTensor` will have two ragged dimensions: + +``` +>>> rt7 = tf.RaggedTensor.from_row_splits(values, [0, 2, 4]) +>>> print(rt7.shape) +(2, None, None) +``` + +Uniform and ragged outer dimensions may be interleaved, meaning that a +tensor with any combination of ragged and uniform dimensions may be created. 
+For example, a RaggedTensor `t4` with shape `[3, None, 4, 8, None, 2]` could +be constructed as follows: + +```python +t0 = tf.zeros([1000, 2]) # Shape: [1000, 2] +t1 = RaggedTensor.from_row_lengths(t0, [...]) # [160, None, 2] +t2 = RaggedTensor.from_uniform_row_length(t1, 8) # [20, 8, None, 2] +t3 = RaggedTensor.from_uniform_row_length(t2, 4) # [5, 4, 8, None, 2] +t4 = RaggedTensor.from_row_lengths(t3, [...]) # [3, None, 4, 8, None, 2] +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`values` + +A potentially ragged tensor of any dtype and shape `[nvals, ...]`. +
+`row_splits` + +A 1-D integer tensor with shape `[nrows+1]`. +
+`cached_row_lengths` + +A 1-D integer tensor with shape `[nrows]` +
+`cached_value_rowids` + +A 1-D integer tensor with shape `[nvals]`. +
+`cached_nrows` + +A 1-D integer scalar tensor. +
+`internal` + +True if the constructor is being called by one of the factory +methods. If false, an exception will be raised. +
+`uniform_row_length` + +A scalar tensor. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`TypeError` + +If a row partitioning tensor has an inappropriate dtype. +
+`TypeError` + +If exactly one row partitioning argument was not specified. +
+`ValueError` + +If a row partitioning tensor has an inappropriate shape. +
+`ValueError` + +If multiple partitioning arguments are specified. +
+`ValueError` + +If nrows is specified but value_rowids is not None. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +The `DType` of values in this tensor. +
+`flat_values` + +The innermost `values` tensor for this ragged tensor. + +Concretely, if `rt.values` is a `Tensor`, then `rt.flat_values` is +`rt.values`; otherwise, `rt.flat_values` is `rt.values.flat_values`. + +Conceptually, `flat_values` is the tensor formed by flattening the +outermost dimension and all of the ragged dimensions into a single +dimension. + +`rt.flat_values.shape = [nvals] + rt.shape[rt.ragged_rank + 1:]` +(where `nvals` is the number of items in the flattened dimensions). + +#### Example: + +``` +>>> rt = tf.ragged.constant([[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]) +>>> print(rt.flat_values) +tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32) +``` +
+`nested_row_splits` + +A tuple containing the row_splits for all ragged dimensions. + +`rt.nested_row_splits` is a tuple containing the `row_splits` tensors for +all ragged dimensions in `rt`, ordered from outermost to innermost. In +particular, `rt.nested_row_splits = (rt.row_splits,) + value_splits` where: + +* `value_splits = ()` if `rt.values` is a `Tensor`. +* `value_splits = rt.values.nested_row_splits` otherwise. + +#### Example: + +``` +>>> rt = tf.ragged.constant( +... [[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]]) +>>> for i, splits in enumerate(rt.nested_row_splits): +... print('Splits for dimension %d: %s' % (i+1, splits.numpy())) +Splits for dimension 1: [0 3] +Splits for dimension 2: [0 3 3 5] +Splits for dimension 3: [0 4 4 7 8 8] +``` +
+`ragged_rank` + +The number of ragged dimensions in this ragged tensor. +
+`row_splits` + +The row-split indices for this ragged tensor's `values`. + +`rt.row_splits` specifies where the values for each row begin and end in +`rt.values`. In particular, the values for row `rt[i]` are stored in +the slice `rt.values[rt.row_splits[i]:rt.row_splits[i+1]]`. + +#### Example: + +``` +>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []]) +>>> print(rt.row_splits) # indices of row splits in rt.values +tf.Tensor([0 4 4 7 8 8], shape=(6,), dtype=int64) +``` +
+`shape` + +The statically known shape of this ragged tensor. + + + +``` +>>> tf.ragged.constant([[0], [1, 2]]).shape +TensorShape([2, None]) +``` + +``` +>>> tf.ragged.constant([[[0, 1]], [[1, 2], [3, 4]]], ragged_rank=1).shape +TensorShape([2, None, 2]) +``` +
+`uniform_row_length` + +The length of each row in this ragged tensor, or None if rows are ragged. + +``` +>>> rt1 = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]]) +>>> print(rt1.uniform_row_length) # rows are ragged. +None +``` + +``` +>>> rt2 = tf.RaggedTensor.from_uniform_row_length( +... values=rt1, uniform_row_length=2) +>>> print(rt2) +<tf.RaggedTensor [[[1, 2, 3], [4]], [[5, 6], [7, 8, 9, 10]]]> +>>> print(rt2.uniform_row_length) # rows are not ragged (all have size 2). +tf.Tensor(2, shape=(), dtype=int64) +``` + +A RaggedTensor's rows are only considered to be uniform (i.e. non-ragged) +if it can be determined statically (at graph construction time) that the +rows all have the same length. +
+`values` + +The concatenated rows for this ragged tensor. + +`rt.values` is a potentially ragged tensor formed by flattening the two +outermost dimensions of `rt` into a single dimension. + +`rt.values.shape = [nvals] + rt.shape[2:]` (where `nvals` is the +number of items in the outer two dimensions of `rt`). + +`rt.ragged_rank = self.ragged_rank - 1` + +#### Example: + +``` +>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []]) +>>> print(rt.values) +tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32) +``` +
+ + + +## Methods + +

bounding_shape

+ +View source + + + +Returns the tight bounding box shape for this `RaggedTensor`. + + + + + + + + + + + + + + + + + +
Args
+`axis` + +An integer scalar or vector indicating which axes to return the +bounding box for. If not specified, then the full bounding box is +returned. +
+`name` + +A name prefix for the returned tensor (optional). +
+`out_type` + +`dtype` for the returned tensor. Defaults to +`self.row_splits.dtype`. +
+ + + + + + + + + + + +
Returns
+An integer `Tensor` (`dtype=self.row_splits.dtype`). If `axis` is not +specified, then `output` is a vector with +`output.shape=[self.shape.ndims]`. If `axis` is a scalar, then the +`output` is a scalar. If `axis` is a vector, then `output` is a vector, +where `output[i]` is the bounding size for dimension `axis[i]`. +
+ + +#### Example: + +``` +>>> rt = tf.ragged.constant([[1, 2, 3, 4], [5], [], [6, 7, 8, 9], [10]]) +>>> rt.bounding_shape().numpy() +array([5, 4]) +``` + +

consumers

+ +View source + + + + + + +

from_nested_row_lengths

+ +View source + + + +Creates a `RaggedTensor` from a nested list of `row_lengths` tensors. + + +#### Equivalent to: + + + +```python +result = flat_values +for row_lengths in reversed(nested_row_lengths): + result = from_row_lengths(result, row_lengths) +``` + + + + + + + + + + + + + + + + + + + +
Args
+`flat_values` + +A potentially ragged tensor. +
+`nested_row_lengths` + +A list of 1-D integer tensors. The `i`th tensor is +used as the `row_lengths` for the `i`th ragged dimension. +
+`name` + +A name prefix for the RaggedTensor (optional). +
+`validate` + +If true, then use assertions to check that the arguments form +a valid `RaggedTensor`. Note: these assertions incur a runtime cost, +since they must be checked for each tensor value. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor` (or `flat_values` if `nested_row_lengths` is empty). +
+ + + +
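For instance, the nested list used throughout this page can be reproduced as follows (a small sketch):

```python
import tensorflow as tf

rt = tf.RaggedTensor.from_nested_row_lengths(
    flat_values=[3, 1, 4, 1, 5, 9, 2, 6],
    nested_row_lengths=([3, 0, 2], [4, 0, 3, 1, 0]))
print(rt.to_list())
# [[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]
```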

from_nested_row_splits

+ +View source + + + +Creates a `RaggedTensor` from a nested list of `row_splits` tensors. + + +#### Equivalent to: + + + +```python +result = flat_values +for row_splits in reversed(nested_row_splits): + result = from_row_splits(result, row_splits) +``` + + + + + + + + + + + + + + + + + + + +
Args
+`flat_values` + +A potentially ragged tensor. +
+`nested_row_splits` + +A list of 1-D integer tensors. The `i`th tensor is +used as the `row_splits` for the `i`th ragged dimension. +
+`name` + +A name prefix for the RaggedTensor (optional). +
+`validate` + +If true, then use assertions to check that the arguments form +a valid `RaggedTensor`. Note: these assertions incur a runtime cost, +since they must be checked for each tensor value. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor` (or `flat_values` if `nested_row_splits` is empty). +
+ + + +

from_nested_value_rowids

+ +View source + + + +Creates a `RaggedTensor` from a nested list of `value_rowids` tensors. + + +#### Equivalent to: + + + +```python +result = flat_values +for (rowids, nrows) in reversed(zip(nested_value_rowids, nested_nrows)): + result = from_value_rowids(result, rowids, nrows) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`flat_values` + +A potentially ragged tensor. +
+`nested_value_rowids` + +A list of 1-D integer tensors. The `i`th tensor is +used as the `value_rowids` for the `i`th ragged dimension. +
+`nested_nrows` + +A list of integer scalars. The `i`th scalar is used as the +`nrows` for the `i`th ragged dimension. +
+`name` + +A name prefix for the RaggedTensor (optional). +
+`validate` + +If true, then use assertions to check that the arguments form +a valid `RaggedTensor`. Note: these assertions incur a runtime cost, +since they must be checked for each tensor value. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor` (or `flat_values` if `nested_value_rowids` is empty). +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `len(nested_values_rowids) != len(nested_nrows)`. +
+ + + +
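For instance, the same nested list used elsewhere on this page can be built from row ids (a small sketch; the outermost partition is listed first):

```python
import tensorflow as tf

rt = tf.RaggedTensor.from_nested_value_rowids(
    flat_values=[3, 1, 4, 1, 5, 9, 2, 6],
    nested_value_rowids=([0, 0, 0, 2, 2], [0, 0, 0, 0, 2, 2, 2, 3]),
    nested_nrows=(3, 5))
print(rt.to_list())
# [[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]
```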

from_row_lengths

+ +View source + + + +Creates a `RaggedTensor` with rows partitioned by `row_lengths`. + +The returned `RaggedTensor` corresponds with the python list defined by: + +```python +result = [[values.pop(0) for i in range(length)] + for length in row_lengths] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`values` + +A potentially ragged tensor with shape `[nvals, ...]`. +
+`row_lengths` + +A 1-D integer tensor with shape `[nrows]`. Must be +nonnegative. `sum(row_lengths)` must be `nvals`. +
+`name` + +A name prefix for the RaggedTensor (optional). +
+`validate` + +If true, then use assertions to check that the arguments form +a valid `RaggedTensor`. Note: these assertions incur a runtime cost, +since they must be checked for each tensor value. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor`. `result.rank = values.rank + 1`. +`result.ragged_rank = values.ragged_rank + 1`. +
+ + +#### Example: + +``` +>>> print(tf.RaggedTensor.from_row_lengths( +... values=[3, 1, 4, 1, 5, 9, 2, 6], +... row_lengths=[4, 0, 3, 1, 0])) +<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]> +``` + +

from_row_limits

+ +View source + + + +Creates a `RaggedTensor` with rows partitioned by `row_limits`. + +Equivalent to: `from_row_splits(values, concat([0, row_limits]))`. + + + + + + + + + + + + + + + + + + + +
Args
+`values` + +A potentially ragged tensor with shape `[nvals, ...]`. +
+`row_limits` + +A 1-D integer tensor with shape `[nrows]`. Must be sorted in +ascending order. If `nrows>0`, then `row_limits[-1]` must be `nvals`. +
+`name` + +A name prefix for the RaggedTensor (optional). +
+`validate` + +If true, then use assertions to check that the arguments form +a valid `RaggedTensor`. Note: these assertions incur a runtime cost, +since they must be checked for each tensor value. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor`. `result.rank = values.rank + 1`. +`result.ragged_rank = values.ragged_rank + 1`. +
+ + +#### Example: + +``` +>>> print(tf.RaggedTensor.from_row_limits( +... values=[3, 1, 4, 1, 5, 9, 2, 6], +... row_limits=[4, 4, 7, 8, 8])) +<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]> +``` + +

from_row_splits

+ +View source + + + +Creates a `RaggedTensor` with rows partitioned by `row_splits`. + +The returned `RaggedTensor` corresponds with the python list defined by: + +```python +result = [values[row_splits[i]:row_splits[i + 1]] + for i in range(len(row_splits) - 1)] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`values` + +A potentially ragged tensor with shape `[nvals, ...]`. +
+`row_splits` + +A 1-D integer tensor with shape `[nrows+1]`. Must not be +empty, and must be sorted in ascending order. `row_splits[0]` must be +zero and `row_splits[-1]` must be `nvals`. +
+`name` + +A name prefix for the RaggedTensor (optional). +
+`validate` + +If true, then use assertions to check that the arguments form +a valid `RaggedTensor`. Note: these assertions incur a runtime cost, +since they must be checked for each tensor value. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor`. `result.rank = values.rank + 1`. +`result.ragged_rank = values.ragged_rank + 1`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `row_splits` is an empty list. +
+ + +#### Example: + +``` +>>> print(tf.RaggedTensor.from_row_splits( +... values=[3, 1, 4, 1, 5, 9, 2, 6], +... row_splits=[0, 4, 4, 7, 8, 8])) +<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]> +``` + +

from_row_starts

+ +View source + + + +Creates a `RaggedTensor` with rows partitioned by `row_starts`. + +Equivalent to: `from_row_splits(values, concat([row_starts, nvals]))`. + + + + + + + + + + + + + + + + + + + +
Args
+`values` + +A potentially ragged tensor with shape `[nvals, ...]`. +
+`row_starts` + +A 1-D integer tensor with shape `[nrows]`. Must be +nonnegative and sorted in ascending order. If `nrows>0`, then +`row_starts[0]` must be zero. +
+`name` + +A name prefix for the RaggedTensor (optional). +
+`validate` + +If true, then use assertions to check that the arguments form +a valid `RaggedTensor`. Note: these assertions incur a runtime cost, +since they must be checked for each tensor value. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor`. `result.rank = values.rank + 1`. +`result.ragged_rank = values.ragged_rank + 1`. +
+ + +#### Example: + +``` +>>> print(tf.RaggedTensor.from_row_starts( +... values=[3, 1, 4, 1, 5, 9, 2, 6], +... row_starts=[0, 4, 4, 7, 8])) +<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]> +``` + +

from_sparse

+ +View source + + + +Converts a 2D tf.SparseTensor to a `RaggedTensor`. + +Each row of the `output` `RaggedTensor` will contain the explicit values +from the same row in `st_input`. `st_input` must be ragged-right; if it +is not ragged-right, an error will be generated. + +#### Example: + + + +``` +>>> st = tf.SparseTensor(indices=[[0, 0], [0, 1], [0, 2], [1, 0], [3, 0]], +... values=[1, 2, 3, 4, 5], +... dense_shape=[4, 3]) +>>> tf.RaggedTensor.from_sparse(st).to_list() +[[1, 2, 3], [4], [], [5]] +``` + +Currently, only two-dimensional `SparseTensors` are supported. + + + + + + + + + + + + + + +
Args
+`st_input` + +The sparse tensor to convert. Must have rank 2. +
+`name` + +A name prefix for the returned tensors (optional). +
+`row_splits_dtype` + +`dtype` for the returned `RaggedTensor`'s `row_splits` +tensor. One of tf.int32 or tf.int64. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor` with the same values as `st_input`. +`output.ragged_rank = rank(st_input) - 1`. +`output.shape = [st_input.dense_shape[0], None]`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the number of dimensions in `st_input` is not known +statically, or is not two. +
+ + + +

from_tensor

+ +View source + + + +Converts a tf.Tensor into a `RaggedTensor`. + +The set of absent/default values may be specified using a vector of lengths +or a padding value (but not both). If `lengths` is specified, then the +output tensor will satisfy `output[row] = tensor[row][:lengths[row]]`. If +'lengths' is a list of lists or tuple of lists, those lists will be used +as nested row lengths. If `padding` is specified, then any row *suffix* +consisting entirely of `padding` will be excluded from the returned +`RaggedTensor`. If neither `lengths` nor `padding` is specified, then the +returned `RaggedTensor` will have no absent/default values. + +#### Examples: + + + +``` +>>> dt = tf.constant([[5, 7, 0], [0, 3, 0], [6, 0, 0]]) +>>> tf.RaggedTensor.from_tensor(dt) + +>>> tf.RaggedTensor.from_tensor(dt, lengths=[1, 0, 3]) + +``` + +``` +>>> tf.RaggedTensor.from_tensor(dt, padding=0) + +``` + +``` +>>> dt = tf.constant([[[5, 0], [7, 0], [0, 0]], +... [[0, 0], [3, 0], [0, 0]], +... [[6, 0], [0, 0], [0, 0]]]) +>>> tf.RaggedTensor.from_tensor(dt, lengths=([2, 0, 3], [1, 1, 2, 0, 1])) + +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`tensor` + +The `Tensor` to convert. Must have rank `ragged_rank + 1` or +higher. +
+`lengths` + +An optional set of row lengths, specified using a 1-D integer +`Tensor` whose length is equal to `tensor.shape[0]` (the number of rows +in `tensor`). If specified, then `output[row]` will contain +`tensor[row][:lengths[row]]`. Negative lengths are treated as zero. You +may optionally pass a list or tuple of lengths to this argument, which +will be used as nested row lengths to construct a ragged tensor with +multiple ragged dimensions. +
+`padding` + +An optional padding value. If specified, then any row suffix +consisting entirely of `padding` will be excluded from the returned +RaggedTensor. `padding` is a `Tensor` with the same dtype as `tensor` +and with `shape=tensor.shape[ragged_rank + 1:]`. +
+`ragged_rank` + +Integer specifying the ragged rank for the returned +`RaggedTensor`. Must be greater than zero. +
+`name` + +A name prefix for the returned tensors (optional). +
+`row_splits_dtype` + +`dtype` for the returned `RaggedTensor`'s `row_splits` +tensor. One of tf.int32 or tf.int64. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor` with the specified `ragged_rank`. The shape of the +returned ragged tensor is compatible with the shape of `tensor`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If both `lengths` and `padding` are specified. +
+ + + +

from_uniform_row_length

+ +View source + + + +Creates a `RaggedTensor` with rows partitioned by `uniform_row_length`. + +This method can be used to create `RaggedTensor`s with multiple uniform +outer dimensions. For example, a `RaggedTensor` with shape `[2, 2, None]` +can be constructed with this method from a `RaggedTensor` values with shape +`[4, None]`: + +``` +>>> values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]]) +>>> print(values.shape) +(4, None) +>>> rt1 = tf.RaggedTensor.from_uniform_row_length(values, 2) +>>> print(rt1) + +>>> print(rt1.shape) +(2, 2, None) +``` + +Note that `rt1` only contains one ragged dimension (the innermost +dimension). In contrast, if `from_row_splits` is used to construct a similar +`RaggedTensor`, then that `RaggedTensor` will have two ragged dimensions: + +``` +>>> rt2 = tf.RaggedTensor.from_row_splits(values, [0, 2, 4]) +>>> print(rt2.shape) +(2, None, None) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`values` + +A potentially ragged tensor with shape `[nvals, ...]`. +
+`uniform_row_length` + +A scalar integer tensor. Must be nonnegative. +The size of the outer axis of `values` must be evenly divisible by +`uniform_row_length`. +
+`nrows` + +The number of rows in the constructed RaggedTensor. If not +specified, then it defaults to `nvals/uniform_row_length` (or `0` if +`uniform_row_length==0`). `nrows` only needs to be specified if +`uniform_row_length` might be zero. `uniform_row_length*nrows` must +be `nvals`. +
+`validate` + +If true, then use assertions to check that the arguments form +a valid `RaggedTensor`. Note: these assertions incur a runtime cost, +since they must be checked for each tensor value. +
+`name` + +A name prefix for the RaggedTensor (optional). +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor` that corresponds with the python list defined by:
+
+```python
+result = [[values.pop(0) for i in range(uniform_row_length)]
+          for _ in range(nrows)]
+```
+
+`result.rank = values.rank + 1`.
+`result.ragged_rank = values.ragged_rank + 1`.
+
+ + + +

from_value_rowids

+ +View source + + + +Creates a `RaggedTensor` with rows partitioned by `value_rowids`. + +The returned `RaggedTensor` corresponds with the python list defined by: + +```python +result = [[values[i] for i in range(len(values)) if value_rowids[i] == row] + for row in range(nrows)] +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`values` + +A potentially ragged tensor with shape `[nvals, ...]`. +
+`value_rowids` + +A 1-D integer tensor with shape `[nvals]`, which corresponds +one-to-one with `values`, and specifies each value's row index. Must be +nonnegative, and must be sorted in ascending order. +
+`nrows`
+
+An integer scalar specifying the number of rows. This should be
+specified if the `RaggedTensor` may contain empty trailing rows. Must
+be greater than `value_rowids[-1]` (or zero if `value_rowids` is empty).
+Defaults to `value_rowids[-1] + 1` (or zero if `value_rowids` is empty).
+
+`name` + +A name prefix for the RaggedTensor (optional). +
+`validate` + +If true, then use assertions to check that the arguments form +a valid `RaggedTensor`. Note: these assertions incur a runtime cost, +since they must be checked for each tensor value. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor`. `result.rank = values.rank + 1`. +`result.ragged_rank = values.ragged_rank + 1`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `nrows` is incompatible with `value_rowids`. +
+ + +#### Example: + +``` +>>> print(tf.RaggedTensor.from_value_rowids( +... values=[3, 1, 4, 1, 5, 9, 2, 6], +... value_rowids=[0, 0, 0, 0, 2, 2, 2, 3], +... nrows=5)) + +``` + +

merge_dims

+
+View source
+
+
+
+Merges outer_axis...inner_axis into a single dimension.
+
+Returns a copy of this RaggedTensor with the specified range of dimensions
+flattened into a single dimension, with elements in row-major order.
+
+#### Examples:
+
+```
+>>> rt = tf.ragged.constant([[[1, 2], [3]], [[4, 5, 6]]])
+>>> print(rt.merge_dims(0, 1))
+
+>>> print(rt.merge_dims(1, 2))
+
+>>> print(rt.merge_dims(0, 2))
+tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32)
+```
+
+To mimic the behavior of `np.flatten` (which flattens all dimensions), use
+`rt.merge_dims(0, -1)`. To mimic the behavior of `tf.layers.Flatten` (which
+flattens all dimensions except the outermost batch dimension), use
+`rt.merge_dims(1, -1)`.
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`outer_axis` + +`int`: The first dimension in the range of dimensions to +merge. May be negative if `self.shape.rank` is statically known. +
+`inner_axis` + +`int`: The last dimension in the range of dimensions to +merge. May be negative if `self.shape.rank` is statically known. +
+ + + + + + + + + + + +
Returns
+A copy of this tensor, with the specified dimensions merged into a +single dimension. The shape of the returned tensor will be +`self.shape[:outer_axis] + [N] + self.shape[inner_axis + 1:]`, where `N` +is the total number of slices in the merged dimensions. +
+ + + +

nested_row_lengths

+ +View source + + + +Returns a tuple containing the row_lengths for all ragged dimensions. + +`rt.nested_row_lengths()` is a tuple containing the `row_lengths` tensors +for all ragged dimensions in `rt`, ordered from outermost to innermost. + + + + + + + + + + +
Args
+`name` + +A name prefix for the returned tensors (optional). +
+ + + + + + + + + + + +
Returns
+A `tuple` of 1-D integer `Tensors`. The length of the tuple is equal to +`self.ragged_rank`. +
+ + + +
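+
+#### Example:
+
+The following sketch is an added illustration (not part of the upstream
+docstring); the printed lengths follow from the definition above, with the
+outermost ragged dimension reported first.
+
+```
+>>> rt = tf.ragged.constant([[[3, 1, 4], [1]], [], [[5, 9], [2]], [[6]], []])
+>>> for lengths in rt.nested_row_lengths():
+...   print(lengths.numpy())
+[2 0 2 1 0]
+[3 1 2 1 1]
+```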

nested_value_rowids

+ +View source + + + +Returns a tuple containing the value_rowids for all ragged dimensions. + +`rt.nested_value_rowids` is a tuple containing the `value_rowids` tensors +for +all ragged dimensions in `rt`, ordered from outermost to innermost. In +particular, `rt.nested_value_rowids = (rt.value_rowids(),) + value_ids` +where: + + * `value_ids = ()` if `rt.values` is a `Tensor`. + * `value_ids = rt.values.nested_value_rowids` otherwise. + + + + + + + + + + +
Args
+`name` + +A name prefix for the returned tensors (optional). +
+ + + + + + + + + + + +
Returns
+A `tuple` of 1-D integer `Tensor`s. +
+ + +#### Example: + +``` +>>> rt = tf.ragged.constant( +... [[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]]) +>>> for i, ids in enumerate(rt.nested_value_rowids()): +... print('row ids for dimension %d: %s' % (i+1, ids.numpy())) +row ids for dimension 1: [0 0 0] +row ids for dimension 2: [0 0 0 2 2] +row ids for dimension 3: [0 0 0 0 2 2 2 3] +``` + +

nrows

+ +View source + + + +Returns the number of rows in this ragged tensor. + +I.e., the size of the outermost dimension of the tensor. + + + + + + + + + + + + + +
Args
+`out_type` + +`dtype` for the returned tensor. Defaults to +`self.row_splits.dtype`. +
+`name` + +A name prefix for the returned tensor (optional). +
+ + + + + + + + + + + +
Returns
+A scalar `Tensor` with dtype `out_type`. +
+ + +#### Example: + +``` +>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []]) +>>> print(rt.nrows()) # rt has 5 rows. +tf.Tensor(5, shape=(), dtype=int64) +``` + +

numpy

+ +View source + + + +Returns a numpy `array` with the values for this `RaggedTensor`. + +Requires that this `RaggedTensor` was constructed in eager execution mode. + +Ragged dimensions are encoded using numpy `arrays` with `dtype=object` and +`rank=1`, where each element is a single row. + +#### Examples + +In the following example, the value returned by RaggedTensor.numpy() +contains three numpy `array` objects: one for each row (with `rank=1` and +`dtype=int64`), and one to combine them (with `rank=1` and `dtype=object`): + +``` +>>> tf.ragged.constant([[1, 2, 3], [4, 5]], dtype=tf.int64).numpy() +array([array([1, 2, 3]), array([4, 5])], dtype=object) +``` + +Uniform dimensions are encoded using multidimensional numpy `array`s. In +the following example, the value returned by RaggedTensor.numpy() contains +a single numpy `array` object, with `rank=2` and `dtype=int64`: + +``` +>>> tf.ragged.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int64).numpy() +array([[1, 2, 3], [4, 5, 6]]) +``` + + + + + + + + + +
Returns
+A numpy `array`. +
+ + + +

row_lengths

+ +View source + + + +Returns the lengths of the rows in this ragged tensor. + +`rt.row_lengths()[i]` indicates the number of values in the +`i`th row of `rt`. + + + + + + + + + + + + + +
Args
+`axis` + +An integer constant indicating the axis whose row lengths should be +returned. +
+`name` + +A name prefix for the returned tensor (optional). +
+ + + + + + + + + + + +
Returns
+A potentially ragged integer Tensor with shape `self.shape[:axis]`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `axis` is out of bounds. +
+ + +#### Example: + +``` +>>> rt = tf.ragged.constant( +... [[[3, 1, 4], [1]], [], [[5, 9], [2]], [[6]], []]) +>>> print(rt.row_lengths()) # lengths of rows in rt +tf.Tensor([2 0 2 1 0], shape=(5,), dtype=int64) +>>> print(rt.row_lengths(axis=2)) # lengths of axis=2 rows. + +``` + +

row_limits

+
+View source
+
+
+
+Returns the limit indices for rows in this ragged tensor.
+
+These indices specify where the values for each row end in
+`self.values`. `rt.row_limits(self)` is equal to `rt.row_splits[1:]`.
+
+
+
+
+
+
+
+
+
+
Args
+`name` + +A name prefix for the returned tensor (optional). +
+ + + + + + + + + + + +
Returns
+A 1-D integer Tensor with shape `[nrows]`. +The returned tensor is nonnegative, and is sorted in ascending order. +
+ + +#### Example: + +``` +>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []]) +>>> print(rt.values) +tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32) +>>> print(rt.row_limits()) # indices of row limits in rt.values +tf.Tensor([4 4 7 8 8], shape=(5,), dtype=int64) +``` + +

row_starts

+ +View source + + + +Returns the start indices for rows in this ragged tensor. + +These indices specify where the values for each row begin in +`self.values`. `rt.row_starts()` is equal to `rt.row_splits[:-1]`. + + + + + + + + + + +
Args
+`name` + +A name prefix for the returned tensor (optional). +
+ + + + + + + + + + + +
Returns
+A 1-D integer Tensor with shape `[nrows]`. +The returned tensor is nonnegative, and is sorted in ascending order. +
+ + +#### Example: + +``` +>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []]) +>>> print(rt.values) +tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32) +>>> print(rt.row_starts()) # indices of row starts in rt.values +tf.Tensor([0 4 4 7 8], shape=(5,), dtype=int64) +``` + +

to_list

+ +View source + + + +Returns a nested Python `list` with the values for this `RaggedTensor`. + +Requires that `rt` was constructed in eager execution mode. + + + + + + + + + +
Returns
+A nested Python `list`. +
+ + + +
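+
+#### Example:
+
+An added illustrative sketch (not part of the upstream docstring) showing
+the nested list returned for an eagerly constructed ragged tensor:
+
+```
+>>> rt = tf.ragged.constant([[1, 2], [3], []])
+>>> rt.to_list()
+[[1, 2], [3], []]
+```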

to_sparse

+ +View source + + + +Converts this `RaggedTensor` into a tf.SparseTensor. + + +#### Example: + + + +``` +>>> rt = tf.ragged.constant([[1, 2, 3], [4], [], [5, 6]]) +>>> print(rt.to_sparse()) +SparseTensor(indices=tf.Tensor( + [[0 0] [0 1] [0 2] [1 0] [3 0] [3 1]], + shape=(6, 2), dtype=int64), + values=tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32), + dense_shape=tf.Tensor([4 3], shape=(2,), dtype=int64)) +``` + + + + + + + + + + +
Args
+`name` + +A name prefix for the returned tensors (optional). +
+ + + + + + + + + + + +
Returns
+A SparseTensor with the same values as `self`. +
+ + + +

to_tensor

+ +View source + + + +Converts this `RaggedTensor` into a tf.Tensor. + +If `shape` is specified, then the result is padded and/or truncated to +the specified shape. + +#### Examples: + + + +``` +>>> rt = tf.ragged.constant([[9, 8, 7], [], [6, 5], [4]]) +>>> print(rt.to_tensor()) +tf.Tensor( + [[9 8 7] [0 0 0] [6 5 0] [4 0 0]], shape=(4, 3), dtype=int32) +>>> print(rt.to_tensor(shape=[5, 2])) +tf.Tensor( + [[9 8] [0 0] [6 5] [4 0] [0 0]], shape=(5, 2), dtype=int32) +``` + + + + + + + + + + + + + + + + +
Args
+`default_value` + +Value to set for indices not specified in `self`. Defaults +to zero. `default_value` must be broadcastable to +`self.shape[self.ragged_rank + 1:]`. +
+`name` + +A name prefix for the returned tensors (optional). +
+`shape`
+
+The shape of the resulting dense tensor. In particular,
+`result.shape[i]` is `shape[i]` (if `shape[i]` is not None), or
+`self.bounding_shape(i)` (otherwise). `shape.rank` must be `None` or
+equal to `self.rank`.
+
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `ragged.bounding_shape(self)` and the +values specified by the non-empty values in `self`. Empty values are +assigned `default_value`. +
+ + + +

value_rowids

+ +View source + + + +Returns the row indices for the `values` in this ragged tensor. + +`rt.value_rowids()` corresponds one-to-one with the outermost dimension of +`rt.values`, and specifies the row containing each value. In particular, +the row `rt[row]` consists of the values `rt.values[j]` where +`rt.value_rowids()[j] == row`. + + + + + + + + + + +
Args
+`name` + +A name prefix for the returned tensor (optional). +
+ + + + + + + + + + + +
Returns
+A 1-D integer `Tensor` with shape `self.values.shape[:1]`. +The returned tensor is nonnegative, and is sorted in ascending order. +
+ + +#### Example: + +``` +>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []]) +>>> print(rt.values) +tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32) +>>> print(rt.value_rowids()) # corresponds 1:1 with rt.values +tf.Tensor([0 0 0 0 2 2 2 3], shape=(8,), dtype=int64) +``` + +

with_flat_values

+ +View source + + + +Returns a copy of `self` with `flat_values` replaced by `new_value`. + +Preserves cached row-partitioning tensors such as `self.cached_nrows` and +`self.cached_value_rowids` if they have values. + + + + + + + + + + +
Args
+`new_values` + +Potentially ragged tensor that should replace +`self.flat_values`. Must have `rank > 0`, and must have the same +number of rows as `self.flat_values`. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor`. +`result.rank = self.ragged_rank + new_values.rank`. +`result.ragged_rank = self.ragged_rank + new_values.ragged_rank`. +
+ + + +
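+
+#### Example:
+
+An added illustrative sketch (not from the upstream docstring): replacing
+`flat_values` keeps the row partitions and only swaps the underlying values.
+
+```
+>>> rt = tf.ragged.constant([[1, 2], [3], [], [4, 5]])
+>>> rt.with_flat_values(rt.flat_values + 100).to_list()
+[[101, 102], [103], [], [104, 105]]
+```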

with_row_splits_dtype

+ +View source + + + +Returns a copy of this RaggedTensor with the given `row_splits` dtype. + +For RaggedTensors with multiple ragged dimensions, the `row_splits` for all +nested `RaggedTensor` objects are cast to the given dtype. + + + + + + + + + + +
Args
+`dtype` + +The dtype for `row_splits`. One of tf.int32 or tf.int64. +
+ + + + + + + + + + + +
Returns
+A copy of this RaggedTensor, with the `row_splits` cast to the given +type. +
+ + + +
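+
+#### Example:
+
+An added illustrative sketch (not from the upstream docstring) showing the
+`row_splits` dtype before and after the cast:
+
+```
+>>> rt = tf.ragged.constant([[1, 2], [3]])
+>>> rt.row_splits.dtype
+tf.int64
+>>> rt.with_row_splits_dtype(tf.int32).row_splits.dtype
+tf.int32
+```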

with_values

+ +View source + + + +Returns a copy of `self` with `values` replaced by `new_value`. + +Preserves cached row-partitioning tensors such as `self.cached_nrows` and +`self.cached_value_rowids` if they have values. + + + + + + + + + + +
Args
+`new_values` + +Potentially ragged tensor to use as the `values` for the +returned `RaggedTensor`. Must have `rank > 0`, and must have the same +number of rows as `self.values`. +
+ + + + + + + + + + + +
Returns
+A `RaggedTensor`. `result.rank = 1 + new_values.rank`. +`result.ragged_rank = 1 + new_values.ragged_rank` +
+ + + +
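+
+#### Example:
+
+An added illustrative sketch (not from the upstream docstring): the row
+partitioning of `self` is reused with the new values.
+
+```
+>>> rt = tf.ragged.constant([[1, 2], [3], [], [4, 5]])
+>>> rt.with_values(rt.values * 10).to_list()
+[[10, 20], [30], [], [40, 50]]
+```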

__abs__

+ +View source + + + +Computes the absolute value of a tensor. + +Given a tensor of integer or floating-point values, this operation returns a +tensor of the same type, where each element contains the absolute value of the +corresponding element in the input. + +Given a tensor `x` of complex numbers, this operation returns a tensor of type +`float32` or `float64` that is the absolute value of each element in `x`. For +a complex number \\(a + bj\\), its absolute value is computed as \\(\sqrt{a^2 ++ b^2}\\). For example: + +``` +>>> x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]]) +>>> tf.abs(x) + +``` + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` or `SparseTensor` of type `float16`, `float32`, `float64`, +`int32`, `int64`, `complex64` or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` or `SparseTensor` of the same size, type and sparsity as `x`, +with absolute values. Note, for `complex64` or `complex128` input, the +returned `Tensor` will be of type `float32` or `float64`, respectively. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)` +
+ + + +

__add__

+ + + +Returns x + y element-wise. + +*NOTE*: math.add supports broadcasting. `AddN` does not. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +
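+
+#### Example:
+
+An added illustrative sketch (not part of the upstream docstring): when one
+operand is a `RaggedTensor`, the addition is applied elementwise and the
+ragged structure is preserved.
+
+```
+>>> rt = tf.ragged.constant([[1, 2], [3]])
+>>> (rt + 10).to_list()
+[[11, 12], [13]]
+```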

__and__

+ +View source + + + +Logical AND function. + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical AND with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical AND of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_and(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_and(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_and(y, z) + +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A tf.Tensor type bool. +
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + +

__bool__

+ +View source + + + +Dummy method to prevent a RaggedTensor from being used as a Python bool. + + +

__div__

+ +View source + + + +Divides x / y elementwise (using Python 2 division operator semantics). (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Deprecated in favor of operator or tf.math.divide. + +NOTE: Prefer using the Tensor division operator or tf.divide which obey Python +3 division operator semantics. + +This function divides `x` and `y`, forcing Python 2 semantics. That is, if `x` +and `y` are both integers then the result will be an integer. This is in +contrast to Python 3, where division with `/` is always a float while division +with `//` is always an integer. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` returns the quotient of x and y. +
+ + + +

__floordiv__

+ +View source + + + +Divides `x / y` elementwise, rounding toward the most negative integer. + +The same as tf.compat.v1.div(x,y) for integers, but uses +`tf.floor(tf.compat.v1.div(x,y))` for +floating point arguments so that the result is always an integer (though +possibly an integer represented as floating point). This op is generated by +`x // y` floor division in Python 3 and in Python 2.7 with +`from __future__ import division`. + +`x` and `y` must have the same type, and the result will have the same type +as well. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` rounded down. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the inputs are complex. +
+ + + +
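+
+#### Example:
+
+An added illustrative sketch (not part of the upstream docstring), using a
+ragged operand and a scalar divisor:
+
+```
+>>> rt = tf.ragged.constant([[7, 8], [9]])
+>>> (rt // 2).to_list()
+[[3, 4], [4]]
+```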

__ge__

+ + + +Returns the truth value of (x >= y) element-wise. + +*NOTE*: math.greater_equal supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5, 2, 5, 10]) +tf.math.greater_equal(x, y) ==> [True, True, True, False] + +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5]) +tf.math.greater_equal(x, y) ==> [True, False, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__getitem__

+ +View source + + + +Returns the specified piece of this RaggedTensor. + +Supports multidimensional indexing and slicing, with one restriction: +indexing into a ragged inner dimension is not allowed. This case is +problematic because the indicated value may exist in some rows but not +others. In such cases, it's not obvious whether we should (1) report an +IndexError; (2) use a default value; or (3) skip that value and return a +tensor with fewer rows than we started with. Following the guiding +principles of Python ("In the face of ambiguity, refuse the temptation to +guess"), we simply disallow this operation. + + + + + + + + + + + + + +
Args
+`self` + +The RaggedTensor to slice. +
+`key` + +Indicates which piece of the RaggedTensor to return, using standard +Python semantics (e.g., negative values index from the end). `key` +may have any of the following types: + +* `int` constant +* Scalar integer `Tensor` +* `slice` containing integer constants and/or scalar integer +`Tensor`s +* `Ellipsis` +* tf.newaxis +* `tuple` containing any of the above (for multidimensional indexing) +
+ + + + + + + + + + + +
Returns
+A `Tensor` or `RaggedTensor` object. Values that include at least one +ragged dimension are returned as `RaggedTensor`. Values that include no +ragged dimensions are returned as `Tensor`. See above for examples of +expressions that return `Tensor`s vs `RaggedTensor`s. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If `key` is out of bounds. +
+`ValueError` + +If `key` is not supported. +
+`TypeError` + +If the indices in `key` have an unsupported type. +
+ + + +#### Examples: + + + +``` +>>> # A 2-D ragged tensor with 1 ragged dimension. +>>> rt = tf.ragged.constant([['a', 'b', 'c'], ['d', 'e'], ['f'], ['g']]) +>>> rt[0].numpy() # First row (1-D `Tensor`) +array([b'a', b'b', b'c'], dtype=object) +>>> rt[:3].to_list() # First three rows (2-D RaggedTensor) +[[b'a', b'b', b'c'], [b'd', b'e'], [b'f']] +>>> rt[3, 0].numpy() # 1st element of 4th row (scalar) +b'g' +``` + +``` +>>> # A 3-D ragged tensor with 2 ragged dimensions. +>>> rt = tf.ragged.constant([[[1, 2, 3], [4]], +... [[5], [], [6]], +... [[7]], +... [[8, 9], [10]]]) +>>> rt[1].to_list() # Second row (2-D RaggedTensor) +[[5], [], [6]] +>>> rt[3, 0].numpy() # First element of fourth row (1-D Tensor) +array([8, 9], dtype=int32) +>>> rt[:, 1:3].to_list() # Items 1-3 of each row (3-D RaggedTensor) +[[[4]], [[], [6]], [], [[10]]] +>>> rt[:, -1:].to_list() # Last item of each row (3-D RaggedTensor) +[[[4]], [[6]], [[7]], [[10]]] +``` + +

__gt__

+ + + +Returns the truth value of (x > y) element-wise. + +*NOTE*: math.greater supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 2, 5]) +tf.math.greater(x, y) ==> [False, True, True] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.greater(x, y) ==> [False, False, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__invert__

+ + + +Returns the truth value of `NOT x` element-wise. + + +#### Example: + + + +``` +>>> tf.math.logical_not(tf.constant([True, False])) + +``` + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__le__

+ + + +Returns the truth value of (x <= y) element-wise. + +*NOTE*: math.less_equal supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less_equal(x, y) ==> [True, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 6]) +tf.math.less_equal(x, y) ==> [True, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__lt__

+ + + +Returns the truth value of (x < y) element-wise. + +*NOTE*: math.less supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less(x, y) ==> [False, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 7]) +tf.math.less(x, y) ==> [False, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__mod__

+
+
+
+Returns element-wise remainder of division.
+
+When `x < 0` xor `y < 0` is true, this follows Python semantics in that the
+result here is consistent with a flooring divide. E.g.
+`floor(x / y) * y + mod(x, y) = x`.
+
+*NOTE*: math.floormod supports broadcasting. More about broadcasting
+[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +
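+
+#### Example:
+
+An added illustrative sketch (not part of the upstream docstring). Note the
+flooring semantics for a negative numerator:
+
+```
+>>> (tf.constant([7, -7]) % tf.constant([3, 3])).numpy().tolist()
+[1, 2]
+```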

__mul__

+
+View source
+
+
+
+Returns an element-wise x * y.
+
+
+#### For example:
+
+
+
+```
+>>> x = tf.constant(([1, 2, 3, 4]))
+>>> tf.math.multiply(x, x)
+
+```
+
+Since tf.math.multiply will convert its arguments to `Tensor`s, you can also
+pass in non-`Tensor` arguments:
+
+```
+>>> tf.math.multiply(7,6)
+
+```
+
+If `x.shape` is not the same as `y.shape`, they will be broadcast to a
+compatible shape. (More about broadcasting
+[here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).)
+
+#### For example:
+
+
+
+```
+>>> x = tf.ones([1, 2]);
+>>> y = tf.ones([2, 1]);
+>>> x * y # Taking advantage of operator overriding
+
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`x` + +A Tensor. Must be one of the following types: `bfloat16`, +`half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, +`int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + +
Returns
+ + +A `Tensor`. Has the same type as `x`. + + + + + + + + + +
Raises
+* InvalidArgumentError: When `x` and `y` have incompatible shapes or types.
+
+ + + +

__neg__

+ + + +Computes numerical negative value element-wise. + +I.e., \\(y = -x\\). + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)` +
+ + + +
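+
+#### Example:
+
+An added minimal sketch, not part of the upstream docstring:
+
+```
+>>> (-tf.constant([1, -2, 3])).numpy().tolist()
+[-1, 2, -3]
+```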

__nonzero__

+ +View source + + + +Dummy method to prevent a RaggedTensor from being used as a Python bool. + + +

__or__

+ + + +Returns the truth value of x OR y element-wise. + +*NOTE*: math.logical_or supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +
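+
+#### Example:
+
+An added minimal sketch, not part of the upstream docstring:
+
+```
+>>> (tf.constant([True, False]) | tf.constant([False, False])).numpy().tolist()
+[True, False]
+```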

__pow__

+ +View source + + + +Computes the power of one value to another. + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +```python +x = tf.constant([[2, 2], [3, 3]]) +y = tf.constant([[8, 16], [2, 3]]) +tf.pow(x, y) # [[256, 65536], [9, 27]] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`y` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

__radd__

+ + + +Returns x + y element-wise. + +*NOTE*: math.add supports broadcasting. `AddN` does not. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__rand__

+ +View source + + + +Logical AND function. + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical AND with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical AND of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_and(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_and(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_and(y, z) + +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A tf.Tensor type bool. +
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + +

__rdiv__

+ +View source + + + +Divides x / y elementwise (using Python 2 division operator semantics). (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Deprecated in favor of operator or tf.math.divide. + +NOTE: Prefer using the Tensor division operator or tf.divide which obey Python +3 division operator semantics. + +This function divides `x` and `y`, forcing Python 2 semantics. That is, if `x` +and `y` are both integers then the result will be an integer. This is in +contrast to Python 3, where division with `/` is always a float while division +with `//` is always an integer. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` returns the quotient of x and y. +
+ + + +

__rfloordiv__

+ +View source + + + +Divides `x / y` elementwise, rounding toward the most negative integer. + +The same as tf.compat.v1.div(x,y) for integers, but uses +`tf.floor(tf.compat.v1.div(x,y))` for +floating point arguments so that the result is always an integer (though +possibly an integer represented as floating point). This op is generated by +`x // y` floor division in Python 3 and in Python 2.7 with +`from __future__ import division`. + +`x` and `y` must have the same type, and the result will have the same type +as well. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` rounded down. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the inputs are complex. +
+ + + +

__rmod__

+
+
+
+Returns element-wise remainder of division.
+
+When `x < 0` xor `y < 0` is true, this follows Python semantics in that the
+result here is consistent with a flooring divide. E.g.
+`floor(x / y) * y + mod(x, y) = x`.
+
+*NOTE*: math.floormod supports broadcasting. More about broadcasting
+[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__rmul__

+
+View source
+
+
+
+Returns an element-wise x * y.
+
+
+#### For example:
+
+
+
+```
+>>> x = tf.constant(([1, 2, 3, 4]))
+>>> tf.math.multiply(x, x)
+
+```
+
+Since tf.math.multiply will convert its arguments to `Tensor`s, you can also
+pass in non-`Tensor` arguments:
+
+```
+>>> tf.math.multiply(7,6)
+
+```
+
+If `x.shape` is not the same as `y.shape`, they will be broadcast to a
+compatible shape. (More about broadcasting
+[here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).)
+
+#### For example:
+
+
+
+```
+>>> x = tf.ones([1, 2]);
+>>> y = tf.ones([2, 1]);
+>>> x * y # Taking advantage of operator overriding
+
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`x` + +A Tensor. Must be one of the following types: `bfloat16`, +`half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, +`int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + +
Returns
+ + +A `Tensor`. Has the same type as `x`. + + + + + + + + + +
Raises
+* InvalidArgumentError: When `x` and `y` have incompatible shapes or types.
+
+ + + +

__ror__

+ + + +Returns the truth value of x OR y element-wise. + +*NOTE*: math.logical_or supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__rpow__

+ +View source + + + +Computes the power of one value to another. + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +```python +x = tf.constant([[2, 2], [3, 3]]) +y = tf.constant([[8, 16], [2, 3]]) +tf.pow(x, y) # [[256, 65536], [9, 27]] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`y` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

__rsub__

+ +View source + + + +Returns x - y element-wise. + +*NOTE*: `Subtract` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__rtruediv__

+ +View source + + + +Divides x / y elementwise (using Python 3 division operator semantics). + +NOTE: Prefer using the Tensor operator or tf.divide which obey Python +division operator semantics. + +This function forces Python 3 division operator semantics where all integer +arguments are cast to floating types first. This op is generated by normal +`x / y` division in Python 3 and in Python 2.7 with +`from __future__ import division`. If you want integer division that rounds +down, use `x // y` or `tf.math.floordiv`. + +`x` and `y` must have the same numeric type. If the inputs are floating +point, the output will have the same type. If the inputs are integral, the +inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32` +and `int64` (matching the behavior of Numpy). + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of numeric type. +
+`y` + +`Tensor` denominator of numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` evaluated in floating point. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If `x` and `y` have different dtypes. +
+ + + +

__rxor__

+ +View source + + + +Logical XOR function. + +x ^ y = (x | y) & ~(x & y) + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical XOR with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical XOR of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_xor(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_xor(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_xor(y, z) + +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A tf.Tensor type bool. +
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + +

__sub__

+ +View source + + + +Returns x - y element-wise. + +*NOTE*: `Subtract` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +
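+
+#### Example:
+
+An added illustrative sketch (not part of the upstream docstring), using a
+ragged operand and a scalar:
+
+```
+>>> rt = tf.ragged.constant([[5, 9], [2]])
+>>> (rt - 1).to_list()
+[[4, 8], [1]]
+```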

__truediv__

+ +View source + + + +Divides x / y elementwise (using Python 3 division operator semantics). + +NOTE: Prefer using the Tensor operator or tf.divide which obey Python +division operator semantics. + +This function forces Python 3 division operator semantics where all integer +arguments are cast to floating types first. This op is generated by normal +`x / y` division in Python 3 and in Python 2.7 with +`from __future__ import division`. If you want integer division that rounds +down, use `x // y` or `tf.math.floordiv`. + +`x` and `y` must have the same numeric type. If the inputs are floating +point, the output will have the same type. If the inputs are integral, the +inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32` +and `int64` (matching the behavior of Numpy). + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of numeric type. +
+`y` + +`Tensor` denominator of numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` evaluated in floating point. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If `x` and `y` have different dtypes. +
+ + + +
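+
+#### Example:
+
+An added minimal sketch, not part of the upstream docstring; the integer
+inputs are cast to floating point as described above:
+
+```
+>>> (tf.constant([1, 2]) / tf.constant([2, 4])).numpy().tolist()
+[0.5, 0.5]
+```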

__xor__

+ +View source + + + +Logical XOR function. + +x ^ y = (x | y) & ~(x & y) + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical XOR with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical XOR of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_xor(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_xor(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_xor(y, z) + +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A tf.Tensor type bool. +
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + + + diff --git a/site/en/api_docs/python/tf/RaggedTensorSpec.md b/site/en/api_docs/python/tf/RaggedTensorSpec.md new file mode 100644 index 00000000000..017f488b65b --- /dev/null +++ b/site/en/api_docs/python/tf/RaggedTensorSpec.md @@ -0,0 +1,218 @@ +description: Type specification for a tf.RaggedTensor. + +
+ + + + + + + + +
+ +# tf.RaggedTensorSpec + + + + + + + + + +Type specification for a tf.RaggedTensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +The shape of the RaggedTensor, or `None` to allow any shape. If +a shape is specified, then all ragged dimensions must have size `None`. +
+`dtype` + +tf.DType of values in the RaggedTensor. +
+`ragged_rank` + +Python integer, the ragged rank of the RaggedTensor +to be described. Defaults to `shape.ndims - 1`. +
+`row_splits_dtype` + +`dtype` for the RaggedTensor's `row_splits` tensor. +One of tf.int32 or tf.int64. +
+ + + + + + + + + + + + + + +
+`value_type` + +The Python type for values that are compatible with this TypeSpec. +
+ + + +## Methods + +

from_value

+ +View source + + + + + + +

is_compatible_with

+ +View source + + + +Returns true if `spec_or_value` is compatible with this TypeSpec. + + +

most_specific_compatible_type

+ +View source + + + +Returns the most specific TypeSpec compatible with `self` and `other`. + + + + + + + + + + + +
Args
+`other` + +A `TypeSpec`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If there is no TypeSpec that is compatible with both `self` +and `other`. +
+ + + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/RegisterGradient.md b/site/en/api_docs/python/tf/RegisterGradient.md new file mode 100644 index 00000000000..c94daddfea6 --- /dev/null +++ b/site/en/api_docs/python/tf/RegisterGradient.md @@ -0,0 +1,120 @@ +description: A decorator for registering the gradient function for an op type. + +
+ + + + +
+ +# tf.RegisterGradient + + + + + + + + + +A decorator for registering the gradient function for an op type. + + + + + + + + + +This decorator is only used when defining a new op type. For an op +with `m` inputs and `n` outputs, the gradient function is a function +that takes the original `Operation` and `n` `Tensor` objects +(representing the gradients with respect to each output of the op), +and returns `m` `Tensor` objects (representing the partial gradients +with respect to each input of the op). + +For example, assuming that operations of type `"Sub"` take two +inputs `x` and `y`, and return a single output `x - y`, the +following gradient function would be registered: + +```python +@tf.RegisterGradient("Sub") +def _sub_grad(unused_op, grad): + return grad, tf.negative(grad) +``` + +The decorator argument `op_type` is the string type of an +operation. This corresponds to the `OpDef.name` field for the proto +that defines the operation. + + + + + + + + + + +
+`op_type` + +The string type of an operation. This corresponds to the +`OpDef.name` field for the proto that defines the operation. +
+ + + + + + + + + + + + +
+`TypeError` + +If `op_type` is not string. +
+ + + +## Methods + +

__call__

+ +View source + + + +Registers the function `f` as gradient function for `op_type`. + + + + diff --git a/site/en/api_docs/python/tf/SparseTensorSpec.md b/site/en/api_docs/python/tf/SparseTensorSpec.md new file mode 100644 index 00000000000..56ff3e805cb --- /dev/null +++ b/site/en/api_docs/python/tf/SparseTensorSpec.md @@ -0,0 +1,215 @@ +description: Type specification for a tf.SparseTensor. + +
+ + + + + + + + +
+ +# tf.SparseTensorSpec + + + + + + + + + +Type specification for a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +The dense shape of the `SparseTensor`, or `None` to allow +any dense shape. +
+`dtype` + +tf.DType of values in the `SparseTensor`. +
+ + + + + + + + + + + + + + + + + + + + +
+`dtype` + +The tf.dtypes.DType specified by this type for the SparseTensor. +
+`shape` + +The tf.TensorShape specified by this type for the SparseTensor. +
+`value_type` + + +
+ + + +## Methods + +

from_value

+ +View source + + + + + + +

is_compatible_with

+ +View source + + + +Returns true if `spec_or_value` is compatible with this TypeSpec. + + +

most_specific_compatible_type

+ +View source + + + +Returns the most specific TypeSpec compatible with `self` and `other`. + + + + + + + + + + + +
Args
+`other` + +A `TypeSpec`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If there is no TypeSpec that is compatible with both `self` +and `other`. +
+ + + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/Tensor.md b/site/en/api_docs/python/tf/Tensor.md new file mode 100644 index 00000000000..b631e1881a9 --- /dev/null +++ b/site/en/api_docs/python/tf/Tensor.md @@ -0,0 +1,2856 @@ +description: A tensor represents a rectangular array of data. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.Tensor + + + + + + + + + +A tensor represents a rectangular array of data. + + + + + + + + + +When writing a TensorFlow program, the main object you manipulate and pass +around is the tf.Tensor. A tf.Tensor object represents a rectangular array +of arbitrary dimension, filled with data of a specific data type. + +A tf.Tensor has the following properties: + +* a data type (float32, int32, or string, for example) +* a shape + +Each element in the Tensor has the same data type, and the data type is always +known. + +In eager execution, which is the default mode in TensorFlow, results are +calculated immediately. + +``` +>>> # Compute some values using a Tensor +>>> c = tf.constant([[1.0, 2.0], [3.0, 4.0]]) +>>> d = tf.constant([[1.0, 1.0], [0.0, 1.0]]) +>>> e = tf.matmul(c, d) +>>> print(e) +tf.Tensor( +[[1. 3.] + [3. 7.]], shape=(2, 2), dtype=float32) +``` + + +Note that during eager execution, you may discover your `Tensors` are actually +of type `EagerTensor`. This is an internal detail, but it does give you +access to a useful function, `numpy`: + +``` +>>> type(e) + +>>> print(e.numpy()) + [[1. 3.] + [3. 7.]] +``` + +TensorFlow can define computations without immediately executing them, most +commonly inside tf.functions, as well as in (legacy) Graph mode. In those +cases, the shape (that is, the rank of the Tensor and the size of +each dimension) might be only partially known. + +Most operations produce tensors of fully-known shapes if the shapes of their +inputs are also fully known, but in some cases it's only possible to find the +shape of a tensor at execution time. + +There are specialized tensors; for these, see tf.Variable, tf.constant, +`tf.placeholder`, tf.SparseTensor, and tf.RaggedTensor. + +For more on Tensors, see the [guide](https://tensorflow.org/guide/tensor`). + + + + + + + + + + + + + + + + +
+`op` + +An `Operation`. `Operation` that computes this tensor. +
+`value_index` + +An `int`. Index of the operation's endpoint that produces +this tensor. +
+`dtype` + +A `DType`. Type of elements stored in this tensor. +
+ + + + + + + + + + + + +
+`TypeError` + +If the op is not an `Operation`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`device` + +The name of the device on which this tensor will be produced, or None. +
+`dtype` + +The `DType` of elements in this tensor. +
+`graph` + +The `Graph` that contains this tensor. +
+`name` + +The string name of this tensor. +
+`op` + +The `Operation` that produces this tensor as an output. +
+`shape`
+
+Returns the `TensorShape` that represents the shape of this tensor.
+
+The shape is computed using shape inference functions that are
+registered in the Op for each `Operation`. See
+tf.TensorShape
+for more details of what a shape represents.
+
+The inferred shape of a tensor is used to provide shape
+information without having to execute the underlying kernel. This
+can be used for debugging and providing early error messages. For
+example:
+
+```
+>>> c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
+>>> print(c.shape) # will be TensorShape([2, 3])
+(2, 3)
+>>> d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
+>>> print(d.shape)
+(4, 2)
+```
+
+```
+>>> # Raises a ValueError, because `c` and `d` do not have compatible
+>>> # inner dimensions.
+>>> e = tf.matmul(c, d)
+Traceback (most recent call last):
+...
+tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix
+size-incompatible: In[0]: [2,3], In[1]: [4,2] [Op:MatMul] name: MatMul/
+>>> # This works because we have compatible shapes.
+>>> f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
+>>> print(f.shape)
+(3, 4)
+```
+
+In some cases, the inferred shape may have unknown dimensions. If
+the caller has additional information about the values of these
+dimensions, Tensor.set_shape() can be used to augment the
+inferred shape.
+
+`value_index` + +The index of this tensor in the outputs of its `Operation`. +
+ + + +## Methods + +

consumers

+ +View source + + + +Returns a list of `Operation`s that consume this tensor. + + + + + + + + + + +
Returns
+A list of `Operation`s. +
+ + + +

eval

+ +View source + + + +Evaluates this tensor in a `Session`. + +Note: If you are not using compat.v1 libraries, you should not need this, +(or `feed_dict` or `Session`). In eager execution (or within tf.function) +you do not need to call `eval`. + +Calling this method will execute all preceding operations that +produce the inputs needed for the operation that produces this +tensor. + +*N.B.* Before invoking Tensor.eval(), its graph must have been +launched in a session, and either a default session must be +available, or `session` must be specified explicitly. + + + + + + + + + + + + + +
Args
+`feed_dict` + +A dictionary that maps `Tensor` objects to feed values. See +`tf.Session.run` for a description of the valid feed values. +
+`session` + +(Optional.) The `Session` to be used to evaluate this tensor. If +none, the default session will be used. +
+ + + + + + + + + + + +
Returns
+A numpy array corresponding to the value of this tensor. +
+ + + +

experimental_ref

+ +View source + + + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use ref() instead. + +

get_shape

+ +View source + + + +Alias of tf.Tensor.shape. + + +
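+
+#### Example:
+
+An added minimal sketch, not part of the upstream docstring:
+
+```
+>>> tf.constant([[1, 2, 3], [4, 5, 6]]).get_shape()
+TensorShape([2, 3])
+```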

ref

+ +View source + + + +Returns a hashable reference object to this Tensor. + +The primary use case for this API is to put tensors in a set/dictionary. +We can't put tensors in a set/dictionary as `tensor.__hash__()` is no longer +available starting Tensorflow 2.0. + +The following will raise an exception starting 2.0 + +``` +>>> x = tf.constant(5) +>>> y = tf.constant(10) +>>> z = tf.constant(10) +>>> tensor_set = {x, y, z} +Traceback (most recent call last): + ... +TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key. +>>> tensor_dict = {x: 'five', y: 'ten'} +Traceback (most recent call last): + ... +TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key. +``` + +Instead, we can use `tensor.ref()`. + +``` +>>> tensor_set = {x.ref(), y.ref(), z.ref()} +>>> x.ref() in tensor_set +True +>>> tensor_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'} +>>> tensor_dict[y.ref()] +'ten' +``` + +Also, the reference object provides `.deref()` function that returns the +original Tensor. + +``` +>>> x = tf.constant(5) +>>> x.ref().deref() + +``` + +

set_shape

+ +View source + + + +Updates the shape of this tensor. + +This method can be called multiple times, and will merge the given +`shape` with the current shape of this tensor. It can be used to +provide additional information about the shape of this tensor that +cannot be inferred from the graph alone. For example, this can be used +to provide additional information about the shapes of images: + +```python +_, image_data = tf.compat.v1.TFRecordReader(...).read(...) +image = tf.image.decode_png(image_data, channels=3) + +# The height and width dimensions of `image` are data dependent, and +# cannot be computed without executing the op. +print(image.shape) +==> TensorShape([Dimension(None), Dimension(None), Dimension(3)]) + +# We know that each image in this dataset is 28 x 28 pixels. +image.set_shape([28, 28, 3]) +print(image.shape) +==> TensorShape([Dimension(28), Dimension(28), Dimension(3)]) +``` + +NOTE: This shape is not enforced at runtime. Setting incorrect shapes can +result in inconsistencies between the statically-known graph and the runtime +value of tensors. For runtime validation of the shape, use tf.ensure_shape +instead. + + + + + + + + + + +
Args
+`shape` + +A `TensorShape` representing the shape of this tensor, a +`TensorShapeProto`, a list, a tuple, or None. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `shape` is not compatible with the current shape of +this tensor. +
+ + + +

__abs__

+ +View source + + + +Computes the absolute value of a tensor. + +Given a tensor of integer or floating-point values, this operation returns a +tensor of the same type, where each element contains the absolute value of the +corresponding element in the input. + +Given a tensor `x` of complex numbers, this operation returns a tensor of type +`float32` or `float64` that is the absolute value of each element in `x`. For +a complex number \\(a + bj\\), its absolute value is computed as \\(\sqrt{a^2 ++ b^2}\\). For example: + +``` +>>> x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]]) +>>> tf.abs(x) + +``` + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` or `SparseTensor` of type `float16`, `float32`, `float64`, +`int32`, `int64`, `complex64` or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` or `SparseTensor` of the same size, type and sparsity as `x`, +with absolute values. Note, for `complex64` or `complex128` input, the +returned `Tensor` will be of type `float32` or `float64`, respectively. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)` +
+ + + +
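Because this overload backs the Python built-in `abs()`, a brief sketch of the operator form (input values are arbitrary):

```python
import tensorflow as tf

x = tf.constant([-2, 0, 3])
print(abs(x))                  # tf.Tensor([2 0 3], shape=(3,), dtype=int32)
print(abs(tf.constant(-1.5)))  # tf.Tensor(1.5, shape=(), dtype=float32)
```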

__add__

+ +View source + + + +Dispatches to add for strings and add_v2 for all other types. + + +
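A minimal sketch of the `+` overload (values chosen arbitrarily); per the dispatch rule above, string tensors are concatenated elementwise while numeric tensors are summed:

```python
import tensorflow as tf

print(tf.constant([1, 2]) + tf.constant([10, 20]))  # tf.Tensor([11 22], shape=(2,), dtype=int32)
print(tf.constant(["ab"]) + tf.constant(["cd"]))    # tf.Tensor([b'abcd'], shape=(1,), dtype=string)
```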

__and__

+ +View source + + + +Returns the truth value of x AND y element-wise. + +*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__bool__

+ +View source + + + +Dummy method to prevent a tensor from being used as a Python `bool`. + +This overload raises a `TypeError` when the user inadvertently +treats a `Tensor` as a boolean (most commonly in an `if` or `while` +statement), in code that was not converted by AutoGraph. For example: + +```python +if tf.constant(True): # Will raise. + # ... + +if tf.constant(5) < tf.constant(7): # Will raise. + # ... +``` + + + + + + + + + +
Raises
+`TypeError`. +
+ + + +
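When a tensor-valued condition is needed in graph code that AutoGraph does not convert, one graph-safe pattern is to make both branches explicit with tf.cond; a rough sketch (the branch values are arbitrary):

```python
import tensorflow as tf

pred = tf.constant(5) < tf.constant(7)
# Instead of `if pred:`, which would raise in unconverted graph code,
# provide both branches as callables and let tf.cond select one.
result = tf.cond(pred, lambda: tf.constant(1.0), lambda: tf.constant(0.0))
```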

__div__

+ +View source + + + +Divide two values using Python 2 semantics. + +Used for Tensor.__div__. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` returns the quotient of x and y. +
+ + + +

__eq__

+ +View source + + + +Compares two tensors element-wise for equality. + + +
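A short sketch of the elementwise behaviour of the `==` overload in TensorFlow 2 (input values are arbitrary):

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([1, 0, 3])
print(x == y)   # tf.Tensor([ True False  True], shape=(3,), dtype=bool)
```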

__floordiv__

+ +View source + + + +Divides `x / y` elementwise, rounding toward the most negative integer. + +The same as tf.compat.v1.div(x,y) for integers, but uses +`tf.floor(tf.compat.v1.div(x,y))` for +floating point arguments so that the result is always an integer (though +possibly an integer represented as floating point). This op is generated by +`x // y` floor division in Python 3 and in Python 2.7 with +`from __future__ import division`. + +`x` and `y` must have the same type, and the result will have the same type +as well. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` rounded down. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the inputs are complex. +
+ + + +
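A brief sketch of the `//` overload, showing the round-toward-negative-infinity behaviour described above (values are arbitrary):

```python
import tensorflow as tf

x = tf.constant([7, -7])
y = tf.constant([2, 2])
print(x // y)   # tf.Tensor([ 3 -4], shape=(2,), dtype=int32)
```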

__ge__

+ + + +Returns the truth value of (x >= y) element-wise. + +*NOTE*: math.greater_equal supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5, 2, 5, 10]) +tf.math.greater_equal(x, y) ==> [True, True, True, False] + +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5]) +tf.math.greater_equal(x, y) ==> [True, False, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__getitem__

+ +View source + + + +Overload for Tensor.__getitem__. + +This operation extracts the specified region from the tensor. +The notation is similar to NumPy with the restriction that +currently only support basic indexing. That means that +using a non-scalar tensor as input is not currently allowed. + +#### Some useful examples: + + + +```python +# Strip leading and trailing 2 elements +foo = tf.constant([1,2,3,4,5,6]) +print(foo[2:-2].eval()) # => [3,4] + +# Skip every other row and reverse the order of the columns +foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]]) +print(foo[::2,::-1].eval()) # => [[3,2,1], [9,8,7]] + +# Use scalar tensors as indices on both dimensions +print(foo[tf.constant(0), tf.constant(2)].eval()) # => 3 + +# Insert another dimension +foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]]) +print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]] +print(foo[:, tf.newaxis, :].eval()) # => [[[1,2,3]], [[4,5,6]], [[7,8,9]]] +print(foo[:, :, tf.newaxis].eval()) # => [[[1],[2],[3]], [[4],[5],[6]], +[[7],[8],[9]]] + +# Ellipses (3 equivalent operations) +foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]]) +print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]] +print(foo[tf.newaxis, ...].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]] +print(foo[tf.newaxis].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]] + +# Masks +foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]]) +print(foo[foo > 2].eval()) # => [3, 4, 5, 6, 7, 8, 9] +``` + +#### Notes: + +- tf.newaxis is `None` as in NumPy. +- An implicit ellipsis is placed at the end of the `slice_spec` +- NumPy advanced indexing is currently not supported. + + + + + + + + + + + + + + + + + + +
Args
+`tensor` + +An ops.Tensor object. +
+`slice_spec` + +The arguments to Tensor.__getitem__. +
+`var` + +In the case of variable slice assignment, the Variable object to slice +(i.e. tensor is the read-only view of this variable). +
+ + + + + + + + + + + +
Returns
+The appropriate slice of "tensor", based on "slice_spec". +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If a slice range has a negative size. +
+`TypeError` + +If the slice indices aren't int, slice, ellipsis, +tf.newaxis or scalar int32/int64 tensors. +
+ + + +

__gt__

+ + + +Returns the truth value of (x > y) element-wise. + +*NOTE*: math.greater supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 2, 5]) +tf.math.greater(x, y) ==> [False, True, True] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.greater(x, y) ==> [False, False, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__invert__

+ + + +Returns the truth value of `NOT x` element-wise. + + +#### Example: + + + +``` +>>> tf.math.logical_not(tf.constant([True, False])) + +``` + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__iter__

+ +View source + + + + + + +
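The overload carries no description here; as a rough eager-mode sketch, iterating a tensor yields its slices along the first dimension (the values below are arbitrary):

```python
import tensorflow as tf

x = tf.constant([[1, 2], [3, 4]])
for row in x:            # yields x[0], then x[1]
    print(row.numpy())   # [1 2], then [3 4]
```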

__le__

+ + + +Returns the truth value of (x <= y) element-wise. + +*NOTE*: math.less_equal supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less_equal(x, y) ==> [True, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 6]) +tf.math.less_equal(x, y) ==> [True, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__len__

+ +View source + + + + + + +
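The overload carries no description here; as a rough sketch, `len()` returns the statically known size of the first dimension and raises if that size is unknown (the shape is arbitrary):

```python
import tensorflow as tf

x = tf.zeros([3, 4])
print(len(x))   # 3
```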

__lt__

+ + + +Returns the truth value of (x < y) element-wise. + +*NOTE*: math.less supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less(x, y) ==> [False, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 7]) +tf.math.less(x, y) ==> [False, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__matmul__

+ +View source + + + +Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +The inputs must, following any transpositions, be tensors of rank >= 2 +where the inner 2 dimensions specify valid matrix multiplication dimensions, +and any further outer dimensions specify matching batch size. + +Both matrices must be of the same type. The supported types are: +`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`. + +Either matrix can be transposed or adjointed (conjugated and transposed) on +the fly by setting one of the corresponding flag to `True`. These are `False` +by default. + +If one or both of the matrices contain a lot of zeros, a more efficient +multiplication algorithm can be used by setting the corresponding +`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. +This optimization is only available for plain matrices (rank-2 tensors) with +datatypes `bfloat16` or `float32`. + +A simple 2-D tensor matrix multiplication: + +``` +>>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) +>>> a # 2-D tensor + +>>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) +>>> b # 2-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +A batch matrix multiplication with batch shape [2]: + +``` +>>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) +>>> a # 3-D tensor + +>>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) +>>> b # 3-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +Since python >= 3.5 the @ operator is supported +(see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow, +it simply calls the tf.matmul() function, so the following lines are +equivalent: + +``` +>>> d = a @ b @ [[10], [11]] +>>> d = tf.matmul(tf.matmul(a, b), [[10], [11]]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`a` + +tf.Tensor of type `float16`, `float32`, `float64`, `int32`, +`complex64`, `complex128` and rank > 1. +
+`b` + +tf.Tensor with same type and rank as `a`. +
+`transpose_a` + +If `True`, `a` is transposed before multiplication. +
+`transpose_b` + +If `True`, `b` is transposed before multiplication. +
+`adjoint_a` + +If `True`, `a` is conjugated and transposed before +multiplication. +
+`adjoint_b` + +If `True`, `b` is conjugated and transposed before +multiplication. +
+`a_is_sparse` + +If `True`, `a` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `a` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`b_is_sparse` + +If `True`, `b` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `b` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`name` + +Name for the operation (optional). +
+ + + + + + + + + + + + + + +
Returns
+A tf.Tensor of the same type as `a` and `b` where each inner-most matrix +is the product of the corresponding matrices in `a` and `b`, e.g. if all +transpose or adjoint attributes are `False`: + +`output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, +for all indices `i`, `j`. +
+`Note` + +This is matrix product, not element-wise product. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `transpose_a` and `adjoint_a`, or `transpose_b` and +`adjoint_b` are both set to `True`. +
+ + + +

__mod__

+ +View source + + + +Returns element-wise remainder of division. When `x < 0` xor `y < 0` is +true, this follows Python semantics in that the result here is consistent +with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`. + +*NOTE*: math.floormod supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +
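A small sketch of the `%` overload with a negative operand, illustrating the flooring-divide identity quoted above (values are arbitrary):

```python
import tensorflow as tf

x = tf.constant([7, -7])
y = tf.constant([3, 3])
print(x % y)                   # tf.Tensor([1 2], shape=(2,), dtype=int32)
print((x // y) * y + (x % y))  # recovers x: tf.Tensor([ 7 -7], shape=(2,), dtype=int32)
```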

__mul__

+ +View source + + + +Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse". + + +
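A minimal sketch of the `*` overload for the dense-dense case (values are arbitrary); note that this is elementwise multiplication, not matrix multiplication:

```python
import tensorflow as tf

a = tf.constant([1., 2., 3.])
b = tf.constant([4., 5., 6.])
print(a * b)   # tf.Tensor([ 4. 10. 18.], shape=(3,), dtype=float32)
```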

__ne__

+ +View source + + + +Compares two tensors element-wise for inequality. + + +
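A short sketch mirroring the `__eq__` example above (values are arbitrary):

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([1, 0, 3])
print(x != y)   # tf.Tensor([False  True False], shape=(3,), dtype=bool)
```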

__neg__

+ + + +Computes numerical negative value element-wise. + +I.e., \\(y = -x\\). + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)` +
+ + + +

__nonzero__

+ +View source + + + +Dummy method to prevent a tensor from being used as a Python `bool`. + +This is the Python 2.x counterpart to `__bool__()` above. + + + + + + + + + +
Raises
+`TypeError`. +
+ + + +

__or__

+ +View source + + + +Returns the truth value of x OR y element-wise. + +*NOTE*: math.logical_or supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__pow__

+ +View source + + + +Computes the power of one value to another. + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +```python +x = tf.constant([[2, 2], [3, 3]]) +y = tf.constant([[8, 16], [2, 3]]) +tf.pow(x, y) # [[256, 65536], [9, 27]] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`y` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

__radd__

+ +View source + + + +Dispatches to add for strings and add_v2 for all other types. + + +

__rand__

+ +View source + + + +Returns the truth value of x AND y element-wise. + +*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__rdiv__

+ +View source + + + +Divide two values using Python 2 semantics. + +Used for Tensor.__div__. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` returns the quotient of x and y. +
+ + + +

__rfloordiv__

+ +View source + + + +Divides `x / y` elementwise, rounding toward the most negative integer. + +The same as tf.compat.v1.div(x,y) for integers, but uses +`tf.floor(tf.compat.v1.div(x,y))` for +floating point arguments so that the result is always an integer (though +possibly an integer represented as floating point). This op is generated by +`x // y` floor division in Python 3 and in Python 2.7 with +`from __future__ import division`. + +`x` and `y` must have the same type, and the result will have the same type +as well. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` rounded down. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the inputs are complex. +
+ + + +

__rmatmul__

+ +View source + + + +Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +The inputs must, following any transpositions, be tensors of rank >= 2 +where the inner 2 dimensions specify valid matrix multiplication dimensions, +and any further outer dimensions specify matching batch size. + +Both matrices must be of the same type. The supported types are: +`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`. + +Either matrix can be transposed or adjointed (conjugated and transposed) on +the fly by setting one of the corresponding flag to `True`. These are `False` +by default. + +If one or both of the matrices contain a lot of zeros, a more efficient +multiplication algorithm can be used by setting the corresponding +`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. +This optimization is only available for plain matrices (rank-2 tensors) with +datatypes `bfloat16` or `float32`. + +A simple 2-D tensor matrix multiplication: + +``` +>>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) +>>> a # 2-D tensor + +>>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) +>>> b # 2-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +A batch matrix multiplication with batch shape [2]: + +``` +>>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) +>>> a # 3-D tensor + +>>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) +>>> b # 3-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +Since python >= 3.5 the @ operator is supported +(see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow, +it simply calls the tf.matmul() function, so the following lines are +equivalent: + +``` +>>> d = a @ b @ [[10], [11]] +>>> d = tf.matmul(tf.matmul(a, b), [[10], [11]]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`a` + +tf.Tensor of type `float16`, `float32`, `float64`, `int32`, +`complex64`, `complex128` and rank > 1. +
+`b` + +tf.Tensor with same type and rank as `a`. +
+`transpose_a` + +If `True`, `a` is transposed before multiplication. +
+`transpose_b` + +If `True`, `b` is transposed before multiplication. +
+`adjoint_a` + +If `True`, `a` is conjugated and transposed before +multiplication. +
+`adjoint_b` + +If `True`, `b` is conjugated and transposed before +multiplication. +
+`a_is_sparse` + +If `True`, `a` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `a` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`b_is_sparse` + +If `True`, `b` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `b` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`name` + +Name for the operation (optional). +
+ + + + + + + + + + + + + + +
Returns
+A tf.Tensor of the same type as `a` and `b` where each inner-most matrix +is the product of the corresponding matrices in `a` and `b`, e.g. if all +transpose or adjoint attributes are `False`: + +`output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, +for all indices `i`, `j`. +
+`Note` + +This is matrix product, not element-wise product. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `transpose_a` and `adjoint_a`, or `transpose_b` and +`adjoint_b` are both set to `True`. +
+ + + +

__rmod__

+ +View source + + + +Returns element-wise remainder of division. When `x < 0` xor `y < 0` is +true, this follows Python semantics in that the result here is consistent +with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`. + +*NOTE*: math.floormod supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__rmul__

+ +View source + + + +Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse". + + +

__ror__

+ +View source + + + +Returns the truth value of x OR y element-wise. + +*NOTE*: math.logical_or supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__rpow__

+ +View source + + + +Computes the power of one value to another. + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +```python +x = tf.constant([[2, 2], [3, 3]]) +y = tf.constant([[8, 16], [2, 3]]) +tf.pow(x, y) # [[256, 65536], [9, 27]] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`y` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

__rsub__

+ +View source + + + +Returns x - y element-wise. + +*NOTE*: `Subtract` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__rtruediv__

+ +View source + + + + + + +

__rxor__

+ +View source + + + +Logical XOR function. + +x ^ y = (x | y) & ~(x & y) + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical XOR with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical XOR of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_xor(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_xor(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_xor(y, z) + +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A tf.Tensor of type bool. +
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + +

__sub__

+ +View source + + + +Returns x - y element-wise. + +*NOTE*: `Subtract` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__truediv__

+ +View source + + + + + + +
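The overload carries no description here; as a rough sketch, `/` always performs true (floating-point) division, casting integer operands as needed (values are arbitrary):

```python
import tensorflow as tf

print(tf.constant(3) / tf.constant(2))    # tf.Tensor(1.5, shape=(), dtype=float64)
print(tf.constant(3.) / tf.constant(2.))  # tf.Tensor(1.5, shape=(), dtype=float32)
```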

__xor__

+ +View source + + + +Logical XOR function. + +x ^ y = (x | y) & ~(x & y) + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical XOR with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical XOR of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_xor(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_xor(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_xor(y, z) + +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A tf.Tensor of type bool. +
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + + + +## Class Variables + +* `OVERLOADABLE_OPERATORS` diff --git a/site/en/api_docs/python/tf/TensorArray.md b/site/en/api_docs/python/tf/TensorArray.md new file mode 100644 index 00000000000..870c862c0f9 --- /dev/null +++ b/site/en/api_docs/python/tf/TensorArray.md @@ -0,0 +1,675 @@ +description: Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays. + +
+ + + + + + + + + + + + + + + +
+ +# tf.TensorArray + + + + + + + + + +Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays. + + + + + + + + + +This class is meant to be used with dynamic iteration primitives such as +`while_loop` and `map_fn`. It supports gradient back-propagation via special +"flow" control flow dependencies. + +Example 1: Plain reading and writing. + +``` +>>> ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False) +>>> ta = ta.write(0, 10) +>>> ta = ta.write(1, 20) +>>> ta = ta.write(2, 30) +>>> +>>> ta.read(0) + +>>> ta.read(1) + +>>> ta.read(2) + +>>> ta.stack() + +``` + +Example 2: Fibonacci sequence algorithm that writes in a loop then returns. + +``` +>>> @tf.function +... def fibonacci(n): +... ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True) +... ta = ta.unstack([0., 1.]) +... +... for i in range(2, n): +... ta = ta.write(i, ta.read(i - 1) + ta.read(i - 2)) +... +... return ta.stack() +>>> +>>> fibonacci(7) + +``` + +Example 3: A simple loop interacting with a tf.Variable. + +``` +>>> v = tf.Variable(1) +>>> +>>> @tf.function +... def f(x): +... ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True) +... +... for i in tf.range(x): +... v.assign_add(i) +... ta = ta.write(i, v) +... +... return ta.stack() +>>> +>>> f(5) + +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +(required) data type of the TensorArray. +
+`size` + +(optional) int32 scalar `Tensor`: the size of the TensorArray. +Required if handle is not provided. +
+`dynamic_size` + +(optional) Python bool: If true, writes to the TensorArray +can grow the TensorArray past its initial size. Default: False. +
+`clear_after_read` + +Boolean (optional, default: True). If True, clear +TensorArray values after reading them. This disables read-many +semantics, but allows early release of memory. +
+`tensor_array_name` + +(optional) Python string: the name of the TensorArray. +This is used when creating the TensorArray handle. If this value is +set, handle should be None. +
+`handle` + +(optional) A `Tensor` handle to an existing TensorArray. If this +is set, tensor_array_name should be None. Only supported in graph mode. +
+`flow` + +(optional) A float `Tensor` scalar coming from an existing +TensorArray.flow. Only supported in graph mode. +
+`infer_shape` + +(optional, default: True) If True, shape inference +is enabled. In this case, all elements must have the same shape. +
+`element_shape` + +(optional, default: None) A `TensorShape` object specifying +the shape constraints of each of the elements of the TensorArray. +Need not be fully defined. +
+`colocate_with_first_write_call` + +If `True`, the TensorArray will be +colocated on the same device as the Tensor used on its first write +(write operations include `write`, `unstack`, and `split`). If `False`, +the TensorArray will be placed on the device determined by the +device context available during its initialization. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if both handle and tensor_array_name are provided. +
+`TypeError` + +if handle is provided but is not a Tensor. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +The data type of this TensorArray. +
+`dynamic_size` + +Python bool; if `True` the TensorArray can grow dynamically. +
+`element_shape` + +The tf.TensorShape of elements in this TensorArray. +
+`flow` + +The flow `Tensor` forcing ops leading to this TensorArray state. +
+`handle` + +The reference to the TensorArray. +
+ + + +## Methods + +

close

+ +View source + + + +Close the current TensorArray. + +**NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. + +

concat

+ +View source + + + +Return the values in the TensorArray as a concatenated `Tensor`. + +All of the values must have been written, their ranks must match, and +their shapes must all match for all dimensions except the first. + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+All the tensors in the TensorArray concatenated into one tensor. +
+ + + +
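A minimal eager-mode sketch (the element values are arbitrary):

```python
import tensorflow as tf

ta = tf.TensorArray(tf.float32, size=2)
ta = ta.write(0, [1., 2.])
ta = ta.write(1, [3., 4.])
print(ta.concat())   # tf.Tensor([1. 2. 3. 4.], shape=(4,), dtype=float32)
```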

gather

+ +View source + + + +Return selected values in the TensorArray as a packed `Tensor`. + +All of the selected values must have been written and their shapes +must all match. + + + + + + + + + + + + +
Args
+`indices` + +A `1-D` `Tensor` taking values in `[0, max_value)`. If +the `TensorArray` is not dynamic, `max_value=size()`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tensors in the `TensorArray` selected by `indices`, packed into one +tensor. +
+ + + +
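A rough eager-mode sketch (the indices and values are arbitrary):

```python
import tensorflow as tf

ta = tf.TensorArray(tf.int32, size=3)
ta = ta.write(0, 10).write(1, 20).write(2, 30)
print(ta.gather([2, 0]))   # tf.Tensor([30 10], shape=(2,), dtype=int32)
```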

grad

+ +View source + + + + + + +

identity

+ +View source + + + +Returns a TensorArray with the same content and properties. + + + + + + + + + + +
Returns
+A new TensorArray object with flow that ensures the control dependencies +from the contexts will become control dependencies for writes, reads, etc. +Use this object for all subsequent operations. +
+ + + +

read

+ +View source + + + +Read the value at location `index` in the TensorArray. + + + + + + + + + + + + + + +
Args
+`index` + +0-D. int32 tensor with the index to read from. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tensor at index `index`. +
+ + + +

scatter

+ +View source + + + +Scatter the values of a `Tensor` in specific indices of a `TensorArray`. + + Args: + indices: A `1-D` `Tensor` taking values in `[0, max_value)`. If + the `TensorArray` is not dynamic, `max_value=size()`. + value: (N+1)-D. Tensor of type `dtype`. The Tensor to scatter. + name: A name for the operation (optional). + + Returns: + A new TensorArray object with flow that ensures the scatter occurs. + Use this object for all subsequent operations. + + Raises: + ValueError: if the shape inference fails. + + +**NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. + +

size

+ +View source + + + +Return the size of the TensorArray. + + +

split

+ +View source + + + +Split the values of a `Tensor` into the TensorArray. + + Args: + value: (N+1)-D. Tensor of type `dtype`. The Tensor to split. + lengths: 1-D. int32 vector with the lengths to use when splitting + `value` along its first dimension. + name: A name for the operation (optional). + + Returns: + A new TensorArray object with flow that ensures the split occurs. + Use this object for all subsequent operations. + + Raises: + ValueError: if the shape inference fails. + + +**NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. + +

stack

+ +View source + + + +Return the values in the TensorArray as a stacked `Tensor`. + +All of the values must have been written and their shapes must all match. +If input shapes have rank-`R`, then output shape will have rank-`(R+1)`. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+All the tensors in the TensorArray stacked into one tensor. +
+ + + +

unstack

+ +View source + + + +Unstack the values of a `Tensor` in the TensorArray. + + If input value shapes have rank-`R`, then the output TensorArray will + contain elements whose shapes are rank-`(R-1)`. + + Args: + value: (N+1)-D. Tensor of type `dtype`. The Tensor to unstack. + name: A name for the operation (optional). + + Returns: + A new TensorArray object with flow that ensures the unstack occurs. + Use this object for all subsequent operations. + + Raises: + ValueError: if the shape inference fails. + + +**NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. + +

write

+ +View source + + + +Write `value` into index `index` of the TensorArray. + + Args: + index: 0-D. int32 scalar with the index to write to. + value: N-D. Tensor of type `dtype`. The Tensor to write to this index. + name: A name for the operation (optional). + + Returns: + A new TensorArray object with flow that ensures the write occurs. + Use this object for all subsequent operations. + + Raises: + ValueError: if there are more writers than specified. + + +**NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. + + + diff --git a/site/en/api_docs/python/tf/TensorArraySpec.md b/site/en/api_docs/python/tf/TensorArraySpec.md new file mode 100644 index 00000000000..b33b9156ccf --- /dev/null +++ b/site/en/api_docs/python/tf/TensorArraySpec.md @@ -0,0 +1,217 @@ +description: Type specification for a tf.TensorArray. + +
+ + + + + + + + +
+ +# tf.TensorArraySpec + + + + + + + + + +Type specification for a tf.TensorArray. + +Inherits From: [`TypeSpec`](../tf/TypeSpec.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`element_shape` + +The shape of each element in the `TensorArray`. +
+`dtype` + +Data type of the `TensorArray`. +
+`dynamic_size` + +Whether the `TensorArray` can grow past its initial size. +
+`infer_shape` + +Whether shape inference is enabled. +
+ + + + + + + + + + + + + + +
+`value_type` + + +
+ + + +## Methods + +

from_value

+ +View source + + + + + + +

is_compatible_with

+ +View source + + + +Returns true if `spec_or_value` is compatible with this TypeSpec. + + +

most_specific_compatible_type

+ +View source + + + +Returns the most specific TypeSpec compatible with `self` and `other`. + + + + + + + + + + + +
Args
+`other` + +A `TypeSpec`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If there is no TypeSpec that is compatible with both `self` +and `other`. +
+ + + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/TensorShape.md b/site/en/api_docs/python/tf/TensorShape.md new file mode 100644 index 00000000000..85e2bb095f4 --- /dev/null +++ b/site/en/api_docs/python/tf/TensorShape.md @@ -0,0 +1,1008 @@ +description: Represents the shape of a Tensor. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.TensorShape + + + + + + + + + +Represents the shape of a `Tensor`. + + + + + + + + + +A `TensorShape` represents a possibly-partial shape specification for a +`Tensor`. It may be one of the following: + +* *Fully-known shape:* has a known number of dimensions and a known size + for each dimension. e.g. `TensorShape([16, 256])` +* *Partially-known shape:* has a known number of dimensions, and an unknown + size for one or more dimension. e.g. `TensorShape([None, 256])` +* *Unknown shape:* has an unknown number of dimensions, and an unknown + size in all dimensions. e.g. `TensorShape(None)` + +If a tensor is produced by an operation of type `"Foo"`, its shape +may be inferred if there is a registered shape function for +`"Foo"`. See [Shape +functions](https://tensorflow.org/extend/adding_an_op#shape_functions_in_c) +for details of shape functions and how to register them. Alternatively, +the shape may be set explicitly using tf.Tensor.set_shape. + + + + + + + + + + +
+`dims` + +A list of Dimensions, or None if the shape is unspecified. +
+ + + + + + + + + + + + +
+`TypeError` + +If dims cannot be converted to a list of dimensions. +
+ + + + + + + + + + + + + + + + + + + + +
+`dims` + +Deprecated. Returns list of dimensions for this shape. + +Suggest TensorShape.as_list instead. +
+`ndims` + +Deprecated accessor for `rank`. +
+`rank` + +Returns the rank of this shape, or None if it is unspecified. +
+ + + +## Methods + +

as_list

+ +View source + + + +Returns a list of integers or `None` for each dimension. + + + + + + + + + + +
Returns
+A list of integers or `None` for each dimension. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` is an unknown shape with an unknown rank. +
+ + + +

as_proto

+ +View source + + + +Returns this shape as a `TensorShapeProto`. + + +

assert_has_rank

+ +View source + + + +Raises an exception if `self` is not compatible with the given `rank`. + + + + + + + + + + + +
Args
+`rank` + +An integer. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` does not represent a shape with the given `rank`. +
+ + + +

assert_is_compatible_with

+ +View source + + + +Raises exception if `self` and `other` do not represent the same shape. + +This method can be used to assert that there exists a shape that both +`self` and `other` represent. + + + + + + + + + + +
Args
+`other` + +Another TensorShape. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` and `other` do not represent the same shape. +
+ + + +

assert_is_fully_defined

+ +View source + + + +Raises an exception if `self` is not fully defined in every dimension. + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` does not have a known value for every dimension. +
+ + + +

assert_same_rank

+ +View source + + + +Raises an exception if `self` and `other` do not have compatible ranks. + + + + + + + + + + + +
Args
+`other` + +Another `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` and `other` do not represent shapes with the +same rank. +
+ + + +

concatenate

+ +View source + + + +Returns the concatenation of the dimensions in `self` and `other`. + +*N.B.* If either `self` or `other` is completely unknown, +concatenation will discard information about the other shape. In +the future, we might support concatenation that preserves this +information for use with slicing. + + + + + + + + + +
Args
+`other` + +Another `TensorShape`. +
+ + + + + + + + + + + +
Returns
+A `TensorShape` whose dimensions are the concatenation of the +dimensions in `self` and `other`. +
+ + + +
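A brief sketch (the shapes are chosen arbitrarily):

```python
import tensorflow as tf

s = tf.TensorShape([2, 3]).concatenate(tf.TensorShape([None, 5]))
print(s)   # (2, 3, None, 5)
```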

is_compatible_with

+ +View source + + + +Returns True iff `self` is compatible with `other`. + +Two possibly-partially-defined shapes are compatible if there +exists a fully-defined shape that both shapes can represent. Thus, +compatibility allows the shape inference code to reason about +partially-defined shapes. For example: + +* TensorShape(None) is compatible with all shapes. + +* TensorShape([None, None]) is compatible with all two-dimensional + shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is + not compatible with, for example, TensorShape([None]) or + TensorShape([None, None, None]). + +* TensorShape([32, None]) is compatible with all two-dimensional shapes + with size 32 in the 0th dimension, and also TensorShape([None, None]) + and TensorShape(None). It is not compatible with, for example, + TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]). + +* TensorShape([32, 784]) is compatible with itself, and also + TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None, + None]) and TensorShape(None). It is not compatible with, for example, + TensorShape([32, 1, 784]) or TensorShape([None]). + +The compatibility relation is reflexive and symmetric, but not +transitive. For example, TensorShape([32, 784]) is compatible with +TensorShape(None), and TensorShape(None) is compatible with +TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with +TensorShape([4, 4]). + + + + + + + + + + +
Args
+`other` + +Another TensorShape. +
+ + + + + + + + + + + +
Returns
+True iff `self` is compatible with `other`. +
+ + + +

is_fully_defined

+ +View source + + + +Returns True iff `self` is fully defined in every dimension. + + +

merge_with

+ +View source + + + +Returns a `TensorShape` combining the information in `self` and `other`. + +The dimensions in `self` and `other` are merged elementwise, +according to the rules defined for `Dimension.merge_with()`. + + + + + + + + + + +
Args
+`other` + +Another `TensorShape`. +
+ + + + + + + + + + + +
Returns
+A `TensorShape` containing the combined information of `self` and +`other`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` and `other` are not compatible. +
+ + + +
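A short sketch (the shapes are chosen arbitrarily); merging incompatible shapes raises `ValueError`:

```python
import tensorflow as tf

s = tf.TensorShape([None, 3]).merge_with(tf.TensorShape([2, None]))
print(s)   # (2, 3)
```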

most_specific_compatible_shape

+ +View source + + + +Returns the most specific TensorShape compatible with `self` and `other`. + +* TensorShape([None, 1]) is the most specific TensorShape compatible with + both TensorShape([2, 1]) and TensorShape([5, 1]). Note that + TensorShape(None) is also compatible with the above-mentioned TensorShapes. + +* TensorShape([1, 2, 3]) is the most specific TensorShape compatible with + both TensorShape([1, 2, 3]) and TensorShape([1, 2, 3]). There are other, + less specific TensorShapes compatible with the above-mentioned TensorShapes, + e.g. TensorShape([1, 2, None]) and TensorShape(None). + + + + + + + + + +
Args
+`other` + +Another `TensorShape`. +
+ + + + + + + + + + + +
Returns
+A `TensorShape` which is the most specific compatible shape of `self` +and `other`. +
+ + + +

num_elements

+ +View source + + + +Returns the total number of elements, or `None` for incomplete shapes. + + +

with_rank

+ +View source + + + +Returns a shape based on `self` with the given rank. + +This method promotes a completely unknown shape to one with a +known rank. + + + + + + + + + + +
Args
+`rank` + +An integer. +
+ + + + + + + + + + + +
Returns
+A shape that is at least as specific as `self` with the given rank. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` does not represent a shape with the given `rank`. +
+ + + +

with_rank_at_least

+ +View source + + + +Returns a shape based on `self` with at least the given rank. + + + + + + + + + + + +
Args
+`rank` + +An integer. +
+ + + + + + + + + + + +
Returns
+A shape that is at least as specific as `self` with at least the given +rank. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` does not represent a shape with at least the given +`rank`. +
+ + + +

with_rank_at_most

+ +View source + + + +Returns a shape based on `self` with at most the given rank. + + + + + + + + + + + +
Args
+`rank` + +An integer. +
+ + + + + + + + + + + +
Returns
+A shape that is at least as specific as `self` with at most the given +rank. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` does not represent a shape with at most the given +`rank`. +
+ + + +

__add__

+ +View source + + + + + + +

__bool__

+ +View source + + + +Returns True if this shape contains non-zero information. + + +

__concat__

+ +View source + + + + + + +

__eq__

+ +View source + + + +Returns True if `self` is equivalent to `other`. + + +

__getitem__

+ +View source + + + +Returns the value of a dimension or a shape, depending on the key. + + + + + + + + + + + +
Args
+`key` + +If `key` is an integer, returns the dimension at that index; +otherwise if `key` is a slice, returns a TensorShape whose dimensions +are those selected by the slice from `self`. +
+ + + + + + + + + + + +
Returns
+An integer if `key` is an integer, or a `TensorShape` if `key` is a +slice. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `key` is a slice and `self` is completely unknown and +the step is set. +
+ + + +

__iter__

+ +View source + + + +Returns `self.dims` if the rank is known, otherwise raises ValueError. + + +

__len__

+ +View source + + + +Returns the rank of this shape, or raises ValueError if unspecified. + + +

__ne__

+ +View source + + + +Returns True if `self` is known to be different from `other`. + + +

__nonzero__

+ +View source + + + +Returns True if this shape contains non-zero information. + + +

__radd__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/TensorSpec.md b/site/en/api_docs/python/tf/TensorSpec.md new file mode 100644 index 00000000000..aab29895631 --- /dev/null +++ b/site/en/api_docs/python/tf/TensorSpec.md @@ -0,0 +1,296 @@ +description: Describes a tf.Tensor. + +
+ + + + + + + + + +
+ +# tf.TensorSpec + + + + + + + + + +Describes a tf.Tensor. + + + + + + + + + +Metadata for describing the tf.Tensor objects accepted or returned +by some TensorFlow APIs. + + + + + + + + + + + + + + + + +
+`shape` + +Value convertible to tf.TensorShape. The shape of the tensor. +
+`dtype` + +Value convertible to tf.DType. The type of the tensor values. +
+`name` + +Optional name for the Tensor. +
+ + + + + + + + + + + + +
+`TypeError` + +If shape is not convertible to a tf.TensorShape, or dtype is +not convertible to a tf.DType. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +Returns the `dtype` of elements in the tensor. +
+`name` + +Returns the (optionally provided) name of the described tensor. +
+`shape` + +Returns the `TensorShape` that represents the shape of the tensor. +
+`value_type` + + +
+ + + +## Methods + +

from_spec

+ +View source + + + + + + +

from_tensor

+ +View source + + + + + + +

is_compatible_with

+ +View source + + + +Returns True if spec_or_tensor is compatible with this TensorSpec. + +Two tensors are considered compatible if they have the same dtype +and their shapes are compatible (see tf.TensorShape.is_compatible_with). + + + + + + + + + + +
Args
+`spec_or_tensor` + +A tf.TensorSpec or a tf.Tensor +
+ + + + + + + + + + + +
Returns
+True if spec_or_tensor is compatible with self. +
+ + + +
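A small sketch (the spec and tensors are arbitrary):

```python
import tensorflow as tf

spec = tf.TensorSpec(shape=[None, 3], dtype=tf.float32)
print(spec.is_compatible_with(tf.zeros([5, 3])))                  # True
print(spec.is_compatible_with(tf.zeros([5, 4])))                  # False (incompatible shape)
print(spec.is_compatible_with(tf.zeros([5, 3], dtype=tf.int32)))  # False (different dtype)
```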

most_specific_compatible_type

+ +View source + + + +Returns the most specific TypeSpec compatible with `self` and `other`. + + + + + + + + + + + +
Args
+`other` + +A `TypeSpec`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If there is no TypeSpec that is compatible with both `self` +and `other`. +
+ + + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/TypeSpec.md b/site/en/api_docs/python/tf/TypeSpec.md new file mode 100644 index 00000000000..0d9ff562027 --- /dev/null +++ b/site/en/api_docs/python/tf/TypeSpec.md @@ -0,0 +1,162 @@ +description: Specifies a TensorFlow value type. + +
+ + + + + + +
+ +# tf.TypeSpec + + + + + + + + + +Specifies a TensorFlow value type. + + + + + +A tf.TypeSpec provides metadata describing an object accepted or returned +by TensorFlow APIs. Concrete subclasses, such as tf.TensorSpec and +tf.RaggedTensorSpec, are used to describe different value types. + +For example, tf.function's `input_signature` argument accepts a list +(or nested structure) of `TypeSpec`s. + +Creating new subclasses of TypeSpec (outside of TensorFlow core) is not +currently supported. In particular, we may make breaking changes to the +private methods and properties defined by this base class. + + + + + + + + + + + + +
+`value_type` + +The Python type for values that are compatible with this TypeSpec. +
+ + + +## Methods + +

is_compatible_with

+ +View source + + + +Returns true if `spec_or_value` is compatible with this TypeSpec. + + +

most_specific_compatible_type

+ +View source + + + +Returns the most specific TypeSpec compatible with `self` and `other`. + + + + + + + + + + + +
Args
+`other` + +A `TypeSpec`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If there is no TypeSpec that is compatible with both `self` +and `other`. +
+ + + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/UnconnectedGradients.md b/site/en/api_docs/python/tf/UnconnectedGradients.md new file mode 100644 index 00000000000..c6ec34a65cb --- /dev/null +++ b/site/en/api_docs/python/tf/UnconnectedGradients.md @@ -0,0 +1,55 @@ +description: Controls how gradient computation behaves when y does not depend on x. + +
+ + + + +
+ +# tf.UnconnectedGradients + + + + + + + + + +Controls how gradient computation behaves when y does not depend on x. + + + + + +The gradient of y with respect to x can be zero in two different ways: there +could be no differentiable path in the graph connecting x to y (and so we can +statically prove that the gradient is zero) or it could be that runtime values +of tensors in a particular execution lead to a gradient of zero (say, if a +relu unit happens to not be activated). To allow you to distinguish between +these two cases you can choose what value gets returned for the gradient when +there is no path in the graph from x to y: + +* `NONE`: Indicates that [None] will be returned if there is no path from x + to y +* `ZERO`: Indicates that a zero tensor will be returned in the shape of x. + +## Class Variables + +* `NONE` +* `ZERO` diff --git a/site/en/api_docs/python/tf/Variable.md b/site/en/api_docs/python/tf/Variable.md new file mode 100644 index 00000000000..0b9a50114c8 --- /dev/null +++ b/site/en/api_docs/python/tf/Variable.md @@ -0,0 +1,4382 @@ +description: See the [variable guide](https://tensorflow.org/guide/variable). + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.Variable + + + + + + + + + +See the [variable guide](https://tensorflow.org/guide/variable). + + + + + + + +A variable maintains shared, persistent state manipulated by a program. + +The `Variable()` constructor requires an initial value for the variable, which +can be a `Tensor` of any type and shape. This initial value defines the type +and shape of the variable. After construction, the type and shape of the +variable are fixed. The value can be changed using one of the assign methods. + +``` +>>> v = tf.Variable(1.) +>>> v.assign(2.) + +>>> v.assign_add(0.5) + +``` + +The `shape` argument to `Variable`'s constructor allows you to construct a +variable with a less defined shape than its `initial_value`: + +``` +>>> v = tf.Variable(1., shape=tf.TensorShape(None)) +>>> v.assign([[1.]]) + dtype=float32, numpy=array([[1.]], ...)> +``` + +Just like any `Tensor`, variables created with `Variable()` can be used as +inputs to operations. Additionally, all the operators overloaded for the +`Tensor` class are carried over to variables. + +``` +>>> w = tf.Variable([[1.], [2.]]) +>>> x = tf.constant([[3., 4.]]) +>>> tf.matmul(w, x) + +>>> tf.sigmoid(w + x) + +``` + +When building a machine learning model it is often convenient to distinguish +between variables holding trainable model parameters and other variables such +as a `step` variable used to count training steps. To make this easier, the +variable constructor supports a `trainable=` +parameter. tf.GradientTape watches trainable variables by default: + +``` +>>> with tf.GradientTape(persistent=True) as tape: +... trainable = tf.Variable(1.) +... non_trainable = tf.Variable(2., trainable=False) +... x1 = trainable * 2. +... x2 = non_trainable * 3. +>>> tape.gradient(x1, trainable) + +>>> assert tape.gradient(x2, non_trainable) is None # Unwatched +``` + +Variables are automatically tracked when assigned to attributes of types +inheriting from tf.Module. + +``` +>>> m = tf.Module() +>>> m.v = tf.Variable([1.]) +>>> m.trainable_variables +(,) +``` + +This tracking then allows saving variable values to +[training checkpoints](https://www.tensorflow.org/guide/checkpoint), or to +[SavedModels](https://www.tensorflow.org/guide/saved_model) which include +serialized TensorFlow graphs. + +Variables are often captured and manipulated by tf.functions. This works the +same way the un-decorated function would have: + +``` +>>> v = tf.Variable(0.) +>>> read_and_decrement = tf.function(lambda: v.assign_sub(0.1)) +>>> read_and_decrement() + +>>> read_and_decrement() + +``` + +Variables created inside a tf.function must be owned outside the function +and be created only once: + +``` +>>> class M(tf.Module): +... @tf.function +... def __call__(self, x): +... if not hasattr(self, "v"): # Or set self.v to None in __init__ +... self.v = tf.Variable(x) +... return self.v * x +>>> m = M() +>>> m(2.) + +>>> m(3.) + +>>> m.v + +``` + +See the tf.function documentation for details. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`initial_value` + +A `Tensor`, or Python object convertible to a `Tensor`, +which is the initial value for the Variable. The initial value must have +a shape specified unless `validate_shape` is set to False. Can also be a +callable with no argument that returns the initial value when called. In +that case, `dtype` must be specified. (Note that initializer functions +from init_ops.py must first be bound to a shape before being used here.) +
+`trainable` + +If `True`, GradientTapes automatically watch uses of this +variable. Defaults to `True`, unless `synchronization` is set to +`ON_READ`, in which case it defaults to `False`. +
+`validate_shape` + +If `False`, allows the variable to be initialized with a +value of unknown shape. If `True`, the default, the shape of +`initial_value` must be known. +
+`caching_device` + +Optional device string describing where the Variable +should be cached for reading. Defaults to the Variable's device. If not +`None`, caches on another device. Typical use is to cache on the device +where the Ops using the Variable reside, to deduplicate copying through +`Switch` and other conditional statements. +
+`name` + +Optional name for the variable. Defaults to `'Variable'` and gets +uniquified automatically. +
+`variable_def` + +`VariableDef` protocol buffer. If not `None`, recreates the +Variable object with its contents, referencing the variable's nodes in +the graph, which must already exist. The graph is not changed. +`variable_def` and the other arguments are mutually exclusive. +
+`dtype` + +If set, initial_value will be converted to the given type. If +`None`, either the datatype will be kept (if `initial_value` is a +Tensor), or `convert_to_tensor` will decide. +
+`import_scope` + +Optional `string`. Name scope to add to the `Variable.` Only +used when initializing from protocol buffer. +
+`constraint` + +An optional projection function to be applied to the variable +after being updated by an `Optimizer` (e.g. used to implement norm +constraints or value constraints for layer weights). The function must +take as input the unprojected Tensor representing the value of the +variable and return the Tensor for the projected value (which must have +the same shape). Constraints are not safe to use when doing asynchronous +distributed training. +
+`synchronization`
+
+Indicates when a distributed variable will be synchronized. Accepted values
+are constants defined in the class tf.VariableSynchronization. By default
+the synchronization is set to `AUTO` and the current `DistributionStrategy`
+chooses when to synchronize.
+
+`aggregation` + +Indicates how a distributed variable will be aggregated. +Accepted values are constants defined in the class +tf.VariableAggregation. +
+`shape` + +(optional) The shape of this variable. If None, the shape of +`initial_value` will be used. When setting this argument to +tf.TensorShape(None) (representing an unspecified shape), the variable +can be assigned with values of different shapes. +
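+A minimal sketch of the `constraint` argument described above. The clipping
+range and the variable contents are illustrative assumptions, not part of the
+API:
+
+```python
+import tensorflow as tf
+
+# Keep the variable's values inside [-1.0, 1.0] by passing a projection
+# function as `constraint`.
+w = tf.Variable(
+    tf.random.normal([3, 3]),
+    constraint=lambda t: tf.clip_by_value(t, -1.0, 1.0))
+
+# Optimizers that honor constraints re-apply the projection after each
+# update; it can also be invoked manually through the `constraint` property:
+w.assign(w.constraint(w + 10.0))   # values are projected back into [-1, 1]
+```
+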
+ + + + + + + + + + + + + + + +
+`ValueError` + +If both `variable_def` and initial_value are specified. +
+`ValueError` + +If the initial value is not specified, or does not have a +shape and `validate_shape` is `True`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`aggregation`
+
+The tf.VariableAggregation setting for this variable.
+
+`constraint` + +Returns the constraint function associated with this variable. +
+`device` + +The device of this variable. +
+`dtype` + +The `DType` of this variable. +
+`graph` + +The `Graph` of this variable. +
+`initial_value` + +Returns the Tensor used as the initial value for the variable. + +Note that this is different from `initialized_value()` which runs +the op that initializes the variable before returning its value. +This method returns the tensor that is used by the op that initializes +the variable. +
+`initializer` + +The initializer operation for this variable. +
+`name` + +The name of this variable. +
+`op` + +The `Operation` of this variable. +
+`shape` + +The `TensorShape` of this variable. +
+`synchronization`
+
+The tf.VariableSynchronization setting for this variable.
+
+`trainable`
+
+Whether this variable is trainable (watched by tf.GradientTape by default).
+
+ + + +## Child Classes +[`class SaveSliceInfo`](../tf/Variable/SaveSliceInfo.md) + +## Methods + +

assign

+ +View source + + + +Assigns a new value to the variable. + +This is essentially a shortcut for `assign(self, value)`. + + + + + + + + + + + + + + + + + + + +
Args
+`value` + +A `Tensor`. The new value for this variable. +
+`use_locking` + +If `True`, use locking during the assignment. +
+`name`
+
+The name of the operation to be created.
+
+`read_value`
+
+If `True`, will return something which evaluates to the new value of the
+variable; if `False`, will return the assign op.
+
+ + + + + + + + + + + +
Returns
+The updated variable. If `read_value` is false, instead returns None in +Eager mode and the assign op in graph mode. +
+ + + +
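+A small eager-mode sketch of how the `read_value` flag changes what `assign`
+returns (the values are arbitrary):
+
+```python
+import tensorflow as tf
+
+v = tf.Variable(3.0)
+
+# By default the returned object reflects the new value.
+print(v.assign(10.0).numpy())            # 10.0
+
+# With read_value=False, eager mode returns None: only the side effect of
+# the assignment matters and no read is materialized.
+result = v.assign(20.0, read_value=False)
+print(result)                            # None
+print(v.numpy())                         # 20.0
+```
+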

assign_add

+ +View source + + + +Adds a value to this variable. + + This is essentially a shortcut for `assign_add(self, delta)`. + + + + + + + + + + + + + + + + + + + +
Args
+`delta` + +A `Tensor`. The value to add to this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name`
+
+The name of the operation to be created.
+
+`read_value`
+
+If `True`, will return something which evaluates to the new value of the
+variable; if `False`, will return the assign op.
+
+ + + + + + + + + + + +
Returns
+The updated variable. If `read_value` is false, instead returns None in +Eager mode and the assign op in graph mode. +
+ + + +

assign_sub

+ +View source + + + +Subtracts a value from this variable. + +This is essentially a shortcut for `assign_sub(self, delta)`. + + + + + + + + + + + + + + + + + + + +
Args
+`delta` + +A `Tensor`. The value to subtract from this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name`
+
+The name of the operation to be created.
+
+`read_value`
+
+If `True`, will return something which evaluates to the new value of the
+variable; if `False`, will return the assign op.
+
+ + + + + + + + + + + +
Returns
+The updated variable. If `read_value` is false, instead returns None in +Eager mode and the assign op in graph mode. +
+ + + +
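+For completeness, a brief eager-mode sketch using `assign_add` and
+`assign_sub` together (values chosen arbitrarily):
+
+```python
+import tensorflow as tf
+
+total = tf.Variable(0.0)
+
+total.assign_add(2.5)    # total is now 2.5
+total.assign_sub(1.0)    # total is now 1.5
+print(total.numpy())     # 1.5
+
+# Both calls return the updated variable, so a read can be chained directly:
+print(total.assign_add(0.5).numpy())     # 2.0
+```
+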

batch_scatter_update

+
+View source
+
+Assigns tf.IndexedSlices to this variable batch-wise.
+
+Analogous to `batch_gather`. This assumes that this variable and the
+sparse_delta IndexedSlices have a series of leading dimensions that are the
+same for all of them, and the updates are performed on the last dimension of
+indices. In other words, the dimensions should be the following:
+
+`num_prefix_dims = sparse_delta.indices.ndims - 1`
+`batch_dim = num_prefix_dims + 1`
+`sparse_delta.updates.shape = sparse_delta.indices.shape + var.shape[
+  batch_dim:]`
+
+where
+
+`sparse_delta.updates.shape[:num_prefix_dims]`
+`== sparse_delta.indices.shape[:num_prefix_dims]`
+`== var.shape[:num_prefix_dims]`
+
+And the operation performed can be expressed as:
+
+`var[i_1, ..., i_n,
+     sparse_delta.indices[i_1, ..., i_n, j]] = sparse_delta.updates[
+        i_1, ..., i_n, j]`
+
+When sparse_delta.indices is a 1D tensor, this operation is equivalent to
+`scatter_update`.
+
+To avoid this operation, one could loop over the first `ndims` of the
+variable and use `scatter_update` on the subtensors that result from slicing
+the first dimension. This is a valid option for `ndims = 1`, but less
+efficient than this implementation.
+
Args
+`sparse_delta` + +tf.IndexedSlices to be assigned to this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

count_up_to

+ +View source + + + +Increments this variable until it reaches `limit`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Prefer Dataset.range instead. + +When that Op is run it tries to increment the variable by `1`. If +incrementing the variable would bring it above `limit` then the Op raises +the exception `OutOfRangeError`. + +If no error is raised, the Op outputs the value of the variable before +the increment. + +This is essentially a shortcut for `count_up_to(self, limit)`. + + + + + + + + + + +
Args
+`limit` + +value at which incrementing the variable raises an error. +
+ + + + + + + + + + + +
Returns
+A `Tensor` that will hold the variable value before the increment. If no +other Op modifies this variable, the values produced will all be +distinct. +
+ + + +
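+Since `count_up_to` is deprecated in favor of `tf.data.Dataset.range`, a
+rough sketch of the suggested replacement for a bounded counter loop (the
+loop body is illustrative only):
+
+```python
+import tensorflow as tf
+
+# Instead of incrementing a variable until OutOfRangeError is raised,
+# iterate over a bounded range dataset.
+for step in tf.data.Dataset.range(5):
+    # `step` is a scalar int64 tensor taking the values 0, 1, 2, 3, 4.
+    print(int(step))
+```
+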

eval

+ +View source + + + +In a session, computes and returns the value of this variable. + +This is not a graph construction method, it does not add ops to the graph. + +This convenience method requires a session where the graph +containing this variable has been launched. If no session is +passed, the default session is used. See tf.compat.v1.Session for more +information on launching a graph and on sessions. + +```python +v = tf.Variable([1, 2]) +init = tf.compat.v1.global_variables_initializer() + +with tf.compat.v1.Session() as sess: + sess.run(init) + # Usage passing the session explicitly. + print(v.eval(sess)) + # Usage with the default session. The 'with' block + # above makes 'sess' the default session. + print(v.eval()) +``` + + + + + + + + + + +
Args
+`session` + +The session to use to evaluate this variable. If none, the +default session is used. +
+ + + + + + + + + + + +
Returns
+A numpy `ndarray` with a copy of the value of this variable. +
+ + + +

experimental_ref

+ +View source + + + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use ref() instead. + +

from_proto

+ +View source + + + +Returns a `Variable` object created from `variable_def`. + + +

gather_nd

+ +View source + + + +Gather slices from `params` into a Tensor with shape specified by `indices`. + +See tf.gather_nd for details. + + + + + + + + + + + + + +
Args
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `params`. +
+ + + +
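+A small eager-mode sketch of `gather_nd` on a variable (the indices are
+arbitrary):
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([[1, 2], [3, 4], [5, 6]])
+
+# Each row of `indices` addresses a single element of `v`.
+print(v.gather_nd([[0, 1], [2, 0]]).numpy())   # [2 5]
+```
+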

get_shape

+ +View source + + + +Alias of Variable.shape. + + +

initialized_value

+ +View source + + + +Returns the value of the initialized variable. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. + +You should use this instead of the variable itself to initialize another +variable with a value that depends on the value of this variable. + +```python +# Initialize 'v' with a random tensor. +v = tf.Variable(tf.random.truncated_normal([10, 40])) +# Use `initialized_value` to guarantee that `v` has been +# initialized before its value is used to initialize `w`. +# The random values are picked only once. +w = tf.Variable(v.initialized_value() * 2.0) +``` + + + + + + + + + +
Returns
+A `Tensor` holding the value of this variable after its initializer +has run. +
+ + + +

load

+ +View source + + + +Load new value into this variable. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Prefer Variable.assign which has equivalent behavior in 2.X. + +Writes new value to variable's memory. Doesn't add ops to the graph. + +This convenience method requires a session where the graph +containing this variable has been launched. If no session is +passed, the default session is used. See tf.compat.v1.Session for more +information on launching a graph and on sessions. + +```python +v = tf.Variable([1, 2]) +init = tf.compat.v1.global_variables_initializer() + +with tf.compat.v1.Session() as sess: + sess.run(init) + # Usage passing the session explicitly. + v.load([2, 3], sess) + print(v.eval(sess)) # prints [2 3] + # Usage with the default session. The 'with' block + # above makes 'sess' the default session. + v.load([3, 4], sess) + print(v.eval()) # prints [3 4] +``` + + + + + + + + + + + + + +
Args
+`value` + +New variable value +
+`session` + +The session to use to evaluate this variable. If none, the +default session is used. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +Session is not passed and no default session +
+ + + +

read_value

+ +View source + + + +Returns the value of this variable, read in the current context. + +Can be different from value() if it's on another device, with control +dependencies, etc. + + + + + + + + + +
Returns
+A `Tensor` containing the value of the variable. +
+ + + +
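+A short eager-mode sketch showing that the tensor returned by `read_value`
+is a snapshot and does not track later assignments:
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([1.0, 2.0])
+snapshot = v.read_value()          # a Tensor holding the current value
+
+v.assign([3.0, 4.0])
+
+print(snapshot.numpy())            # [1. 2.] -- the snapshot is unchanged
+print(v.read_value().numpy())      # [3. 4.]
+```
+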

ref

+ +View source + + + +Returns a hashable reference object to this Variable. + +The primary use case for this API is to put variables in a set/dictionary. +We can't put variables in a set/dictionary as `variable.__hash__()` is no +longer available starting Tensorflow 2.0. + +The following will raise an exception starting 2.0 + +``` +>>> x = tf.Variable(5) +>>> y = tf.Variable(10) +>>> z = tf.Variable(10) +>>> variable_set = {x, y, z} +Traceback (most recent call last): + ... +TypeError: Variable is unhashable. Instead, use tensor.ref() as the key. +>>> variable_dict = {x: 'five', y: 'ten'} +Traceback (most recent call last): + ... +TypeError: Variable is unhashable. Instead, use tensor.ref() as the key. +``` + +Instead, we can use `variable.ref()`. + +``` +>>> variable_set = {x.ref(), y.ref(), z.ref()} +>>> x.ref() in variable_set +True +>>> variable_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'} +>>> variable_dict[y.ref()] +'ten' +``` + +Also, the reference object provides `.deref()` function that returns the +original Variable. + +``` +>>> x = tf.Variable(5) +>>> x.ref().deref() + +``` + +

scatter_add

+ +View source + + + +Adds tf.IndexedSlices to this variable. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to be added to this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +
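+A minimal eager-mode sketch of `scatter_add`, showing how the `sparse_delta`
+argument is built with tf.IndexedSlices (the indices and values are
+arbitrary):
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([1.0, 2.0, 3.0, 4.0])
+
+# Add 10 to element 0 and 20 to element 2.
+delta = tf.IndexedSlices(values=tf.constant([10.0, 20.0]),
+                         indices=tf.constant([0, 2]))
+v.scatter_add(delta)
+
+print(v.numpy())   # [11.  2. 23.  4.]
+```
+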

scatter_div

+ +View source + + + +Divide this variable by tf.IndexedSlices. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to divide this variable by. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

scatter_max

+ +View source + + + +Updates this variable with the max of tf.IndexedSlices and itself. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to use as an argument of max with this +variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

scatter_min

+ +View source + + + +Updates this variable with the min of tf.IndexedSlices and itself. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to use as an argument of min with this +variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

scatter_mul

+ +View source + + + +Multiply this variable by tf.IndexedSlices. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to multiply this variable by. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

scatter_nd_add

+
+View source
+
+Applies sparse addition to individual values or slices in a Variable.
+
+The Variable has rank `P` and `indices` is a `Tensor` of rank `Q`.
+
+`indices` must be an integer tensor, containing indices into self.
+It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
+
+The innermost dimension of `indices` (with length `K`) corresponds to
+indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
+dimension of self.
+
+`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
+
+```
+[d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]].
+```
+
+For example, say we want to add 4 scattered elements to a rank-1 tensor with
+8 elements. In Python, that update would look like this:
+
+```python
+  v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
+  indices = tf.constant([[4], [3], [1], [7]])
+  updates = tf.constant([9, 10, 11, 12])
+  add = v.scatter_nd_add(indices, updates)
+  with tf.compat.v1.Session() as sess:
+    print(sess.run(add))
+```
+
+The resulting update to v would look like this:
+
+    [1, 13, 3, 14, 14, 6, 7, 20]
+
+See tf.scatter_nd for more details about how to make updates to
+slices.
+
Args
+`indices` + +The indices to be used in the operation. +
+`updates` + +The values to be used in the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + +
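+The example above uses a tf.compat.v1 session; the same update can be
+sketched in eager mode as follows:
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
+indices = tf.constant([[4], [3], [1], [7]])
+updates = tf.constant([9, 10, 11, 12])
+
+v.scatter_nd_add(indices, updates)
+print(v.numpy())   # [ 1 13  3 14 14  6  7 20]
+```
+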

scatter_nd_sub

+
+View source
+
+Applies sparse subtraction to individual values or slices in a Variable.
+
+Assuming the variable has rank `P` and `indices` is a `Tensor` of rank `Q`.
+
+`indices` must be an integer tensor, containing indices into self.
+It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
+
+The innermost dimension of `indices` (with length `K`) corresponds to
+indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
+dimension of self.
+
+`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
+
+```
+[d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]].
+```
+
+For example, say we want to subtract 4 scattered elements from a rank-1
+tensor with 8 elements. In Python, that update would look like this:
+
+```python
+  v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
+  indices = tf.constant([[4], [3], [1], [7]])
+  updates = tf.constant([9, 10, 11, 12])
+  op = v.scatter_nd_sub(indices, updates)
+  with tf.compat.v1.Session() as sess:
+    print(sess.run(op))
+```
+
+The resulting update to v would look like this:
+
+    [1, -9, 3, -6, -4, 6, 7, -4]
+
+See tf.scatter_nd for more details about how to make updates to
+slices.
+
Args
+`indices` + +The indices to be used in the operation. +
+`updates` + +The values to be used in the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + +

scatter_nd_update

+
+View source
+
+Applies sparse assignment to individual values or slices in a Variable.
+
+The Variable has rank `P` and `indices` is a `Tensor` of rank `Q`.
+
+`indices` must be an integer tensor, containing indices into self.
+It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
+
+The innermost dimension of `indices` (with length `K`) corresponds to
+indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
+dimension of self.
+
+`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
+
+```
+[d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]].
+```
+
+For example, say we want to assign 4 scattered elements to a rank-1 tensor
+with 8 elements. In Python, that update would look like this:
+
+```python
+  v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
+  indices = tf.constant([[4], [3], [1], [7]])
+  updates = tf.constant([9, 10, 11, 12])
+  op = v.scatter_nd_update(indices, updates)
+  with tf.compat.v1.Session() as sess:
+    print(sess.run(op))
+```
+
+The resulting update to v would look like this:
+
+    [1, 11, 3, 10, 9, 6, 7, 12]
+
+See tf.scatter_nd for more details about how to make updates to
+slices.
+
Args
+`indices` + +The indices to be used in the operation. +
+`updates` + +The values to be used in the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + +

scatter_sub

+ +View source + + + +Subtracts tf.IndexedSlices from this variable. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to be subtracted from this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

scatter_update

+ +View source + + + +Assigns tf.IndexedSlices to this variable. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to be assigned to this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +
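+A brief eager-mode sketch of `scatter_update` (the index and value are
+arbitrary):
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([1.0, 2.0, 3.0])
+
+# Overwrite element 1 with 9.0; untouched elements keep their values.
+v.scatter_update(tf.IndexedSlices(values=tf.constant([9.0]),
+                                  indices=tf.constant([1])))
+print(v.numpy())   # [1. 9. 3.]
+```
+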

set_shape

+ +View source + + + +Overrides the shape for this variable. + + + + + + + + + + + +
Args
+`shape` + +the `TensorShape` representing the overridden shape. +
+ + + +

sparse_read

+
+View source
+
+Gathers slices from this variable along its first axis according to `indices`.
+
+This function supports a subset of tf.gather; see tf.gather for details on
+usage.
+
Args
+`indices` + +The index `Tensor`. Must be one of the following types: `int32`, +`int64`. Must be in range `[0, params.shape[axis])`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `params`. +
+ + + +
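+A short sketch of `sparse_read`, which gathers rows of the variable along
+its first axis (the indices are arbitrary):
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
+
+# Read rows 2 and 0 without materializing the whole variable.
+print(v.sparse_read([2, 0]).numpy())   # [[5. 6.]
+                                       #  [1. 2.]]
+```
+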

to_proto

+ +View source + + + +Converts a `Variable` to a `VariableDef` protocol buffer. + + + + + + + + + + + +
Args
+`export_scope` + +Optional `string`. Name scope to remove. +
+ + + + + + + + + + + +
Returns
+A `VariableDef` protocol buffer, or `None` if the `Variable` is not +in the specified name scope. +
+ + + +

value

+ +View source + + + +Returns the last snapshot of this variable. + +You usually do not need to call this method as all ops that need the value +of the variable call it automatically through a `convert_to_tensor()` call. + +Returns a `Tensor` which holds the value of the variable. You can not +assign a new value to this tensor as it is not a reference to the variable. + +To avoid copies, if the consumer of the returned value is on the same device +as the variable, this actually returns the live value of the variable, not +a copy. Updates to the variable are seen by the consumer. If the consumer +is on a different device it will get a copy of the variable. + + + + + + + + + +
Returns
+A `Tensor` containing the value of the variable. +
+ + + +

__abs__

+
+View source
+
+Computes the absolute value of a tensor.
+
+Given a tensor of integer or floating-point values, this operation returns a
+tensor of the same type, where each element contains the absolute value of the
+corresponding element in the input.
+
+Given a tensor `x` of complex numbers, this operation returns a tensor of type
+`float32` or `float64` that is the absolute value of each element in `x`. For
+a complex number \\(a + bj\\), its absolute value is computed as
+\\(\sqrt{a^2 + b^2}\\). For example:
+
+```
+>>> x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])
+>>> tf.abs(x)
+<tf.Tensor: shape=(2, 1), dtype=float64, numpy=
+array([[5.25594901],
+       [6.60492241]])>
+```
+
Args
+`x` + +A `Tensor` or `SparseTensor` of type `float16`, `float32`, `float64`, +`int32`, `int64`, `complex64` or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` or `SparseTensor` of the same size, type and sparsity as `x`, +with absolute values. Note, for `complex64` or `complex128` input, the +returned `Tensor` will be of type `float32` or `float64`, respectively. +
+ + + +

__add__

+ +View source + + + +Dispatches to add for strings and add_v2 for all other types. + + +

__and__

+ +View source + + + +Returns the truth value of x AND y element-wise. + +*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__div__

+ +View source + + + +Divide two values using Python 2 semantics. + +Used for Tensor.__div__. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` returns the quotient of x and y. +
+ + + +

__eq__

+ +View source + + + +Compares two variables element-wise for equality. + + +

__floordiv__

+ +View source + + + +Divides `x / y` elementwise, rounding toward the most negative integer. + +The same as tf.compat.v1.div(x,y) for integers, but uses +`tf.floor(tf.compat.v1.div(x,y))` for +floating point arguments so that the result is always an integer (though +possibly an integer represented as floating point). This op is generated by +`x // y` floor division in Python 3 and in Python 2.7 with +`from __future__ import division`. + +`x` and `y` must have the same type, and the result will have the same type +as well. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` rounded down. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the inputs are complex. +
+ + + +

__ge__

+ + + +Returns the truth value of (x >= y) element-wise. + +*NOTE*: math.greater_equal supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5, 2, 5, 10]) +tf.math.greater_equal(x, y) ==> [True, True, True, False] + +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5]) +tf.math.greater_equal(x, y) ==> [True, False, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__getitem__

+ +View source + + + +Creates a slice helper object given a variable. + +This allows creating a sub-tensor from part of the current contents +of a variable. See tf.Tensor.__getitem__ for detailed examples +of slicing. + +This function in addition also allows assignment to a sliced range. +This is similar to `__setitem__` functionality in Python. However, +the syntax is different so that the user can capture the assignment +operation for grouping or passing to `sess.run()`. +For example, + +```python +import tensorflow as tf +A = tf.Variable([[1,2,3], [4,5,6], [7,8,9]], dtype=tf.float32) +with tf.compat.v1.Session() as sess: + sess.run(tf.compat.v1.global_variables_initializer()) + print(sess.run(A[:2, :2])) # => [[1,2], [4,5]] + + op = A[:2,:2].assign(22. * tf.ones((2, 2))) + print(sess.run(op)) # => [[22, 22, 3], [22, 22, 6], [7,8,9]] +``` + +Note that assignments currently do not support NumPy broadcasting +semantics. + + + + + + + + + + + + + +
Args
+`var` + +An `ops.Variable` object. +
+`slice_spec` + +The arguments to Tensor.__getitem__. +
+ + + + + + + + + + + +
Returns
+The appropriate slice of "tensor", based on "slice_spec", as an operator.
+The operator also has an `assign()` method that can be used to generate an
+assignment operator.
+
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If a slice range is negative size. +
+`TypeError` + +TypeError: If the slice indices aren't int, slice, +ellipsis, tf.newaxis or int32/int64 tensors. +
+ + + +
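+The session-based example above also has an eager-mode counterpart; a
+minimal sketch of sliced reads and sliced assignment on a variable:
+
+```python
+import tensorflow as tf
+
+A = tf.Variable([[1.0, 2.0, 3.0],
+                 [4.0, 5.0, 6.0],
+                 [7.0, 8.0, 9.0]])
+
+# Plain slicing reads a sub-tensor.
+print(A[:2, :2].numpy())           # [[1. 2.]
+                                   #  [4. 5.]]
+
+# The slice helper also supports assignment back into the variable.
+A[:2, :2].assign(22.0 * tf.ones((2, 2)))
+print(A.numpy())
+# [[22. 22.  3.]
+#  [22. 22.  6.]
+#  [ 7.  8.  9.]]
+```
+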

__gt__

+ + + +Returns the truth value of (x > y) element-wise. + +*NOTE*: math.greater supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 2, 5]) +tf.math.greater(x, y) ==> [False, True, True] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.greater(x, y) ==> [False, False, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__invert__

+ + + +Returns the truth value of `NOT x` element-wise. + + +#### Example: + + + +``` +>>> tf.math.logical_not(tf.constant([True, False])) + +``` + + + + + + + + + + + + + +
Args
+`x`
+
+A `Tensor` of type `bool`.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__iter__

+ +View source + + + +Dummy method to prevent iteration. + +Do not call. + +NOTE(mrry): If we register __getitem__ as an overloaded operator, +Python will valiantly attempt to iterate over the variable's Tensor from 0 +to infinity. Declaring this method prevents this unintended behavior. + + + + + + + + + + +
Raises
+`TypeError` + +when invoked. +
+ + + +

__le__

+ + + +Returns the truth value of (x <= y) element-wise. + +*NOTE*: math.less_equal supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less_equal(x, y) ==> [True, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 6]) +tf.math.less_equal(x, y) ==> [True, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__lt__

+ + + +Returns the truth value of (x < y) element-wise. + +*NOTE*: math.less supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less(x, y) ==> [False, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 7]) +tf.math.less(x, y) ==> [False, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__matmul__

+ +View source + + + +Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +The inputs must, following any transpositions, be tensors of rank >= 2 +where the inner 2 dimensions specify valid matrix multiplication dimensions, +and any further outer dimensions specify matching batch size. + +Both matrices must be of the same type. The supported types are: +`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`. + +Either matrix can be transposed or adjointed (conjugated and transposed) on +the fly by setting one of the corresponding flag to `True`. These are `False` +by default. + +If one or both of the matrices contain a lot of zeros, a more efficient +multiplication algorithm can be used by setting the corresponding +`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. +This optimization is only available for plain matrices (rank-2 tensors) with +datatypes `bfloat16` or `float32`. + +A simple 2-D tensor matrix multiplication: + +``` +>>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) +>>> a # 2-D tensor + +>>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) +>>> b # 2-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +A batch matrix multiplication with batch shape [2]: + +``` +>>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) +>>> a # 3-D tensor + +>>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) +>>> b # 3-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +Since python >= 3.5 the @ operator is supported +(see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow, +it simply calls the tf.matmul() function, so the following lines are +equivalent: + +``` +>>> d = a @ b @ [[10], [11]] +>>> d = tf.matmul(tf.matmul(a, b), [[10], [11]]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`a` + +tf.Tensor of type `float16`, `float32`, `float64`, `int32`, +`complex64`, `complex128` and rank > 1. +
+`b` + +tf.Tensor with same type and rank as `a`. +
+`transpose_a` + +If `True`, `a` is transposed before multiplication. +
+`transpose_b` + +If `True`, `b` is transposed before multiplication. +
+`adjoint_a` + +If `True`, `a` is conjugated and transposed before +multiplication. +
+`adjoint_b` + +If `True`, `b` is conjugated and transposed before +multiplication. +
+`a_is_sparse` + +If `True`, `a` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `a` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`b_is_sparse` + +If `True`, `b` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `a` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`name` + +Name for the operation (optional). +
+ + + + + + + + + + + + + + +
Returns
+A tf.Tensor of the same type as `a` and `b` where each inner-most matrix +is the product of the corresponding matrices in `a` and `b`, e.g. if all +transpose or adjoint attributes are `False`: + +`output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, +for all indices `i`, `j`. +
+`Note` + +This is matrix product, not element-wise product. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `transpose_a` and `adjoint_a`, or `transpose_b` and +`adjoint_b` are both set to `True`. +
+ + + +

__mod__

+ +View source + + + +Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + +true, this follows Python semantics in that the result here is consistent +with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`. + +*NOTE*: math.floormod supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__mul__

+ +View source + + + +Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse". + + +

__ne__

+
+View source
+
+Compares two variables element-wise for inequality.
+

__neg__

+ + + +Computes numerical negative value element-wise. + +I.e., \\(y = -x\\). + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__or__

+ +View source + + + +Returns the truth value of x OR y element-wise. + +*NOTE*: math.logical_or supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__pow__

+ +View source + + + +Computes the power of one value to another. + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +```python +x = tf.constant([[2, 2], [3, 3]]) +y = tf.constant([[8, 16], [2, 3]]) +tf.pow(x, y) # [[256, 65536], [9, 27]] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`y` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

__radd__

+ +View source + + + +Dispatches to add for strings and add_v2 for all other types. + + +

__rand__

+ +View source + + + +Returns the truth value of x AND y element-wise. + +*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__rdiv__

+ +View source + + + +Divide two values using Python 2 semantics. + +Used for Tensor.__div__. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` returns the quotient of x and y. +
+ + + +

__rfloordiv__

+ +View source + + + +Divides `x / y` elementwise, rounding toward the most negative integer. + +The same as tf.compat.v1.div(x,y) for integers, but uses +`tf.floor(tf.compat.v1.div(x,y))` for +floating point arguments so that the result is always an integer (though +possibly an integer represented as floating point). This op is generated by +`x // y` floor division in Python 3 and in Python 2.7 with +`from __future__ import division`. + +`x` and `y` must have the same type, and the result will have the same type +as well. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` rounded down. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the inputs are complex. +
+ + + +

__rmatmul__

+ +View source + + + +Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +The inputs must, following any transpositions, be tensors of rank >= 2 +where the inner 2 dimensions specify valid matrix multiplication dimensions, +and any further outer dimensions specify matching batch size. + +Both matrices must be of the same type. The supported types are: +`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`. + +Either matrix can be transposed or adjointed (conjugated and transposed) on +the fly by setting one of the corresponding flag to `True`. These are `False` +by default. + +If one or both of the matrices contain a lot of zeros, a more efficient +multiplication algorithm can be used by setting the corresponding +`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. +This optimization is only available for plain matrices (rank-2 tensors) with +datatypes `bfloat16` or `float32`. + +A simple 2-D tensor matrix multiplication: + +``` +>>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) +>>> a # 2-D tensor + +>>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) +>>> b # 2-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +A batch matrix multiplication with batch shape [2]: + +``` +>>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) +>>> a # 3-D tensor + +>>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) +>>> b # 3-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +Since python >= 3.5 the @ operator is supported +(see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow, +it simply calls the tf.matmul() function, so the following lines are +equivalent: + +``` +>>> d = a @ b @ [[10], [11]] +>>> d = tf.matmul(tf.matmul(a, b), [[10], [11]]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`a` + +tf.Tensor of type `float16`, `float32`, `float64`, `int32`, +`complex64`, `complex128` and rank > 1. +
+`b` + +tf.Tensor with same type and rank as `a`. +
+`transpose_a` + +If `True`, `a` is transposed before multiplication. +
+`transpose_b` + +If `True`, `b` is transposed before multiplication. +
+`adjoint_a` + +If `True`, `a` is conjugated and transposed before +multiplication. +
+`adjoint_b` + +If `True`, `b` is conjugated and transposed before +multiplication. +
+`a_is_sparse` + +If `True`, `a` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `a` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`b_is_sparse` + +If `True`, `b` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `a` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`name` + +Name for the operation (optional). +
+ + + + + + + + + + + + + + +
Returns
+A tf.Tensor of the same type as `a` and `b` where each inner-most matrix +is the product of the corresponding matrices in `a` and `b`, e.g. if all +transpose or adjoint attributes are `False`: + +`output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, +for all indices `i`, `j`. +
+`Note` + +This is matrix product, not element-wise product. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `transpose_a` and `adjoint_a`, or `transpose_b` and +`adjoint_b` are both set to `True`. +
+ + + +

__rmod__

+ +View source + + + +Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + +true, this follows Python semantics in that the result here is consistent +with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`. + +*NOTE*: math.floormod supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__rmul__

+ +View source + + + +Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse". + + +

__ror__

+ +View source + + + +Returns the truth value of x OR y element-wise. + +*NOTE*: math.logical_or supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__rpow__

+ +View source + + + +Computes the power of one value to another. + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +```python +x = tf.constant([[2, 2], [3, 3]]) +y = tf.constant([[8, 16], [2, 3]]) +tf.pow(x, y) # [[256, 65536], [9, 27]] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`y` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

__rsub__

+ +View source + + + +Returns x - y element-wise. + +*NOTE*: `Subtract` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__rtruediv__

+
+View source
+
+Divides two values elementwise using Python 3 division (`/`) semantics.
+

__rxor__

+ +View source + + + +Logical XOR function. + +x ^ y = (x | y) & ~(x & y) + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical XOR with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical XOR of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_xor(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_xor(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_xor(y, z) + +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A tf.Tensor type bool. +
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + +

__sub__

+ +View source + + + +Returns x - y element-wise. + +*NOTE*: `Subtract` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__truediv__

+
+View source
+
+Divides two values elementwise using Python 3 division (`/`) semantics.
+

__xor__

+ +View source + + + +Logical XOR function. + +x ^ y = (x | y) & ~(x & y) + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical XOR with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical XOR of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_xor(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_xor(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_xor(y, z) + +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A tf.Tensor type bool. +
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + + + diff --git a/site/en/api_docs/python/tf/Variable/SaveSliceInfo.md b/site/en/api_docs/python/tf/Variable/SaveSliceInfo.md new file mode 100644 index 00000000000..6ecde170f19 --- /dev/null +++ b/site/en/api_docs/python/tf/Variable/SaveSliceInfo.md @@ -0,0 +1,185 @@ +description: Information on how to save this Variable as a slice. + +
+ + + + +
+ +# tf.Variable.SaveSliceInfo + + + + + + + + + +Information on how to save this Variable as a slice. + + + + + + + + + +Provides internal support for saving variables as slices of a larger +variable. This API is not public and is subject to change. + +#### Available properties: + + + +* full_name +* full_shape +* var_offset +* var_shape + + + + + + + + + + + + + + + + + + + + + + + + + +
+`full_name` + +Name of the full variable of which this `Variable` is a +slice. +
+`full_shape` + +Shape of the full variable, as a list of int. +
+`var_offset` + +Offset of this `Variable` into the full variable, as a list +of int. +
+`var_shape` + +Shape of this `Variable`, as a list of int. +
+`save_slice_info_def`
+
+`SaveSliceInfoDef` protocol buffer. If not `None`, recreates the
+SaveSliceInfo object from its contents. `save_slice_info_def` and other
+arguments are mutually exclusive.
+
+`import_scope` + +Optional `string`. Name scope to add. Only used when +initializing from protocol buffer. +
+ + + + + + + + + + + + + + +
+`spec` + +Computes the spec string used for saving. +
+ + + +## Methods + +

to_proto

+ +View source + + + +Returns a SaveSliceInfoDef() proto. + + + + + + + + + + + +
Args
+`export_scope` + +Optional `string`. Name scope to remove. +
+ + + + + + + + + + + +
Returns
+A `SaveSliceInfoDef` protocol buffer, or None if the `Variable` is not +in the specified name scope. +
+ + + + + diff --git a/site/en/api_docs/python/tf/VariableAggregation.md b/site/en/api_docs/python/tf/VariableAggregation.md new file mode 100644 index 00000000000..a56affdfc50 --- /dev/null +++ b/site/en/api_docs/python/tf/VariableAggregation.md @@ -0,0 +1,50 @@ +description: Indicates how a distributed variable will be aggregated. + +
+ + + + + + +
+ +# tf.VariableAggregation + + + + + + + + + +Indicates how a distributed variable will be aggregated. + + + +tf.distribute.Strategy distributes a model by making multiple copies +(called "replicas") acting data-parallel on different elements of the input +batch. When performing some variable-update operation, say +`var.assign_add(x)`, in a model, we need to resolve how to combine the +different values for `x` computed in the different replicas. + +* `NONE`: This is the default, giving an error if you use a + variable-update operation with multiple replicas. +* `SUM`: Add the updates across replicas. +* `MEAN`: Take the arithmetic mean ("average") of the updates across replicas. +* `ONLY_FIRST_REPLICA`: This is for when every replica is performing the same + update, but we only want to perform the update once. Used, e.g., for the + global step counter. + +## Class Variables + +* `MEAN` +* `NONE` +* `ONLY_FIRST_REPLICA` +* `SUM` diff --git a/site/en/api_docs/python/tf/VariableSynchronization.md b/site/en/api_docs/python/tf/VariableSynchronization.md new file mode 100644 index 00000000000..1d1ba59c808 --- /dev/null +++ b/site/en/api_docs/python/tf/VariableSynchronization.md @@ -0,0 +1,58 @@ +description: Indicates when a distributed variable will be synced. + +
+ + + + + + +
+ +# tf.VariableSynchronization + + + + + + + + + +Indicates when a distributed variable will be synced. + + + + + +* `AUTO`: Indicates that the synchronization will be determined by the current + `DistributionStrategy` (eg. With `MirroredStrategy` this would be + `ON_WRITE`). +* `NONE`: Indicates that there will only be one copy of the variable, so + there is no need to sync. +* `ON_WRITE`: Indicates that the variable will be updated across devices + every time it is written. +* `ON_READ`: Indicates that the variable will be aggregated across devices + when it is read (eg. when checkpointing or when evaluating an op that uses + the variable). + +## Class Variables + +* `AUTO` +* `NONE` +* `ON_READ` +* `ON_WRITE` diff --git a/site/en/api_docs/python/tf/_api_cache.json b/site/en/api_docs/python/tf/_api_cache.json new file mode 100644 index 00000000000..ac0d093b42e --- /dev/null +++ b/site/en/api_docs/python/tf/_api_cache.json @@ -0,0 +1,78926 @@ +{ + "duplicate_of": { + "tf.AggregationMethod.__eq__": "tf.keras.Model.__eq__", + "tf.AggregationMethod.__ge__": "tf.keras.Model.__ge__", + "tf.AggregationMethod.__gt__": "tf.keras.Model.__gt__", + "tf.AggregationMethod.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.AggregationMethod.__le__": "tf.keras.Model.__le__", + "tf.AggregationMethod.__lt__": "tf.keras.Model.__lt__", + "tf.AggregationMethod.__ne__": "tf.keras.Model.__ne__", + "tf.AggregationMethod.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.Assert": "tf.debugging.Assert", + "tf.CriticalSection.__eq__": "tf.keras.Model.__eq__", + "tf.CriticalSection.__ge__": "tf.keras.Model.__ge__", + "tf.CriticalSection.__gt__": "tf.keras.Model.__gt__", + "tf.CriticalSection.__le__": "tf.keras.Model.__le__", + "tf.CriticalSection.__lt__": "tf.keras.Model.__lt__", + "tf.CriticalSection.__ne__": "tf.keras.Model.__ne__", + "tf.CriticalSection.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.DType": "tf.dtypes.DType", + "tf.DType.__eq__": "tf.dtypes.DType.__eq__", + "tf.DType.__ge__": "tf.keras.Model.__ge__", + "tf.DType.__gt__": "tf.keras.Model.__gt__", + "tf.DType.__init__": "tf.dtypes.DType.__init__", + "tf.DType.__le__": "tf.keras.Model.__le__", + "tf.DType.__lt__": "tf.keras.Model.__lt__", + "tf.DType.__ne__": "tf.dtypes.DType.__ne__", + "tf.DType.__new__": "tf.dtypes.DType.__new__", + "tf.DType.as_datatype_enum": "tf.dtypes.DType.as_datatype_enum", + "tf.DType.as_numpy_dtype": "tf.dtypes.DType.as_numpy_dtype", + "tf.DType.base_dtype": "tf.dtypes.DType.base_dtype", + "tf.DType.is_bool": "tf.dtypes.DType.is_bool", + "tf.DType.is_compatible_with": "tf.dtypes.DType.is_compatible_with", + "tf.DType.is_complex": "tf.dtypes.DType.is_complex", + "tf.DType.is_floating": "tf.dtypes.DType.is_floating", + "tf.DType.is_integer": "tf.dtypes.DType.is_integer", + "tf.DType.is_numpy_compatible": "tf.dtypes.DType.is_numpy_compatible", + "tf.DType.is_quantized": "tf.dtypes.DType.is_quantized", + "tf.DType.is_unsigned": "tf.dtypes.DType.is_unsigned", + "tf.DType.limits": "tf.dtypes.DType.limits", + "tf.DType.max": "tf.dtypes.DType.max", + "tf.DType.min": "tf.dtypes.DType.min", + "tf.DType.name": "tf.dtypes.DType.name", + "tf.DType.real_dtype": "tf.dtypes.DType.real_dtype", + "tf.DType.size": "tf.dtypes.DType.size", + "tf.DeviceSpec.__ge__": "tf.keras.Model.__ge__", + "tf.DeviceSpec.__gt__": "tf.keras.Model.__gt__", + "tf.DeviceSpec.__le__": "tf.keras.Model.__le__", + "tf.DeviceSpec.__lt__": "tf.keras.Model.__lt__", + "tf.DeviceSpec.__ne__": "tf.keras.Model.__ne__", + "tf.DeviceSpec.__new__": 
"tf.keras.callbacks.BaseLogger.__new__", + "tf.GradientTape.__enter__": "tf.autodiff.GradientTape.__enter__", + "tf.GradientTape.__eq__": "tf.keras.Model.__eq__", + "tf.GradientTape.__exit__": "tf.autodiff.GradientTape.__exit__", + "tf.GradientTape.__ge__": "tf.keras.Model.__ge__", + "tf.GradientTape.__gt__": "tf.keras.Model.__gt__", + "tf.GradientTape.__init__": "tf.autodiff.GradientTape.__init__", + "tf.GradientTape.__le__": "tf.keras.Model.__le__", + "tf.GradientTape.__lt__": "tf.keras.Model.__lt__", + "tf.GradientTape.__ne__": "tf.keras.Model.__ne__", + "tf.GradientTape.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.GradientTape.batch_jacobian": "tf.autodiff.GradientTape.batch_jacobian", + "tf.GradientTape.gradient": "tf.autodiff.GradientTape.gradient", + "tf.GradientTape.jacobian": "tf.autodiff.GradientTape.jacobian", + "tf.GradientTape.reset": "tf.autodiff.GradientTape.reset", + "tf.GradientTape.stop_recording": "tf.autodiff.GradientTape.stop_recording", + "tf.GradientTape.watch": "tf.autodiff.GradientTape.watch", + "tf.GradientTape.watched_variables": "tf.autodiff.GradientTape.watched_variables", + "tf.Graph.__eq__": "tf.keras.Model.__eq__", + "tf.Graph.__ge__": "tf.keras.Model.__ge__", + "tf.Graph.__gt__": "tf.keras.Model.__gt__", + "tf.Graph.__le__": "tf.keras.Model.__le__", + "tf.Graph.__lt__": "tf.keras.Model.__lt__", + "tf.Graph.__ne__": "tf.keras.Model.__ne__", + "tf.Graph.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.IndexedSlices.__eq__": "tf.keras.Model.__eq__", + "tf.IndexedSlices.__ge__": "tf.keras.Model.__ge__", + "tf.IndexedSlices.__gt__": "tf.keras.Model.__gt__", + "tf.IndexedSlices.__le__": "tf.keras.Model.__le__", + "tf.IndexedSlices.__lt__": "tf.keras.Model.__lt__", + "tf.IndexedSlices.__ne__": "tf.keras.Model.__ne__", + "tf.IndexedSlices.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.IndexedSlicesSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.IndexedSlicesSpec.__ge__": "tf.keras.Model.__ge__", + "tf.IndexedSlicesSpec.__gt__": "tf.keras.Model.__gt__", + "tf.IndexedSlicesSpec.__le__": "tf.keras.Model.__le__", + "tf.IndexedSlicesSpec.__lt__": "tf.keras.Model.__lt__", + "tf.IndexedSlicesSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.IndexedSlicesSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.IndexedSlicesSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.IndexedSlicesSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.Module.__eq__": "tf.keras.Model.__eq__", + "tf.Module.__ge__": "tf.keras.Model.__ge__", + "tf.Module.__gt__": "tf.keras.Model.__gt__", + "tf.Module.__le__": "tf.keras.Model.__le__", + "tf.Module.__lt__": "tf.keras.Model.__lt__", + "tf.Module.__ne__": "tf.keras.Model.__ne__", + "tf.Module.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.Operation.__eq__": "tf.keras.Model.__eq__", + "tf.Operation.__ge__": "tf.keras.Model.__ge__", + "tf.Operation.__gt__": "tf.keras.Model.__gt__", + "tf.Operation.__le__": "tf.keras.Model.__le__", + "tf.Operation.__lt__": "tf.keras.Model.__lt__", + "tf.Operation.__ne__": "tf.keras.Model.__ne__", + "tf.Operation.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.OptionalSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.OptionalSpec.__ge__": "tf.keras.Model.__ge__", + "tf.OptionalSpec.__gt__": "tf.keras.Model.__gt__", + "tf.OptionalSpec.__le__": "tf.keras.Model.__le__", + "tf.OptionalSpec.__lt__": "tf.keras.Model.__lt__", + "tf.OptionalSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.OptionalSpec.__new__": 
"tf.keras.callbacks.BaseLogger.__new__", + "tf.OptionalSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.OptionalSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.RaggedTensor.__abs__": "tf.math.abs", + "tf.RaggedTensor.__add__": "tf.math.add", + "tf.RaggedTensor.__and__": "tf.math.logical_and", + "tf.RaggedTensor.__eq__": "tf.keras.Model.__eq__", + "tf.RaggedTensor.__floordiv__": "tf.math.floordiv", + "tf.RaggedTensor.__ge__": "tf.math.greater_equal", + "tf.RaggedTensor.__gt__": "tf.math.greater", + "tf.RaggedTensor.__invert__": "tf.math.logical_not", + "tf.RaggedTensor.__le__": "tf.math.less_equal", + "tf.RaggedTensor.__lt__": "tf.math.less", + "tf.RaggedTensor.__mod__": "tf.math.floormod", + "tf.RaggedTensor.__mul__": "tf.math.multiply", + "tf.RaggedTensor.__ne__": "tf.keras.Model.__ne__", + "tf.RaggedTensor.__neg__": "tf.math.negative", + "tf.RaggedTensor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.RaggedTensor.__nonzero__": "tf.RaggedTensor.__bool__", + "tf.RaggedTensor.__or__": "tf.math.logical_or", + "tf.RaggedTensor.__pow__": "tf.math.pow", + "tf.RaggedTensor.__sub__": "tf.math.subtract", + "tf.RaggedTensor.__truediv__": "tf.math.truediv", + "tf.RaggedTensor.__xor__": "tf.math.logical_xor", + "tf.RaggedTensorSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.RaggedTensorSpec.__ge__": "tf.keras.Model.__ge__", + "tf.RaggedTensorSpec.__gt__": "tf.keras.Model.__gt__", + "tf.RaggedTensorSpec.__le__": "tf.keras.Model.__le__", + "tf.RaggedTensorSpec.__lt__": "tf.keras.Model.__lt__", + "tf.RaggedTensorSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.RaggedTensorSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.RaggedTensorSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.RaggedTensorSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.RegisterGradient.__eq__": "tf.keras.Model.__eq__", + "tf.RegisterGradient.__ge__": "tf.keras.Model.__ge__", + "tf.RegisterGradient.__gt__": "tf.keras.Model.__gt__", + "tf.RegisterGradient.__le__": "tf.keras.Model.__le__", + "tf.RegisterGradient.__lt__": "tf.keras.Model.__lt__", + "tf.RegisterGradient.__ne__": "tf.keras.Model.__ne__", + "tf.RegisterGradient.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.SparseTensor": "tf.sparse.SparseTensor", + "tf.SparseTensor.__div__": "tf.sparse.SparseTensor.__div__", + "tf.SparseTensor.__eq__": "tf.keras.Model.__eq__", + "tf.SparseTensor.__ge__": "tf.keras.Model.__ge__", + "tf.SparseTensor.__gt__": "tf.keras.Model.__gt__", + "tf.SparseTensor.__init__": "tf.sparse.SparseTensor.__init__", + "tf.SparseTensor.__le__": "tf.keras.Model.__le__", + "tf.SparseTensor.__lt__": "tf.keras.Model.__lt__", + "tf.SparseTensor.__mul__": "tf.sparse.SparseTensor.__mul__", + "tf.SparseTensor.__ne__": "tf.keras.Model.__ne__", + "tf.SparseTensor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.SparseTensor.__truediv__": "tf.sparse.SparseTensor.__truediv__", + "tf.SparseTensor.consumers": "tf.sparse.SparseTensor.consumers", + "tf.SparseTensor.dense_shape": "tf.sparse.SparseTensor.dense_shape", + "tf.SparseTensor.dtype": "tf.sparse.SparseTensor.dtype", + "tf.SparseTensor.eval": "tf.sparse.SparseTensor.eval", + "tf.SparseTensor.get_shape": "tf.sparse.SparseTensor.get_shape", + "tf.SparseTensor.graph": "tf.sparse.SparseTensor.graph", + "tf.SparseTensor.indices": "tf.sparse.SparseTensor.indices", + "tf.SparseTensor.op": "tf.sparse.SparseTensor.op", + "tf.SparseTensor.shape": "tf.sparse.SparseTensor.shape", + 
"tf.SparseTensor.values": "tf.sparse.SparseTensor.values", + "tf.SparseTensorSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.SparseTensorSpec.__ge__": "tf.keras.Model.__ge__", + "tf.SparseTensorSpec.__gt__": "tf.keras.Model.__gt__", + "tf.SparseTensorSpec.__le__": "tf.keras.Model.__le__", + "tf.SparseTensorSpec.__lt__": "tf.keras.Model.__lt__", + "tf.SparseTensorSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.SparseTensorSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.SparseTensorSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.SparseTensorSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.Tensor.__abs__": "tf.math.abs", + "tf.Tensor.__ge__": "tf.math.greater_equal", + "tf.Tensor.__gt__": "tf.math.greater", + "tf.Tensor.__invert__": "tf.math.logical_not", + "tf.Tensor.__le__": "tf.math.less_equal", + "tf.Tensor.__lt__": "tf.math.less", + "tf.Tensor.__neg__": "tf.math.negative", + "tf.Tensor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.TensorArray.__eq__": "tf.keras.Model.__eq__", + "tf.TensorArray.__ge__": "tf.keras.Model.__ge__", + "tf.TensorArray.__gt__": "tf.keras.Model.__gt__", + "tf.TensorArray.__le__": "tf.keras.Model.__le__", + "tf.TensorArray.__lt__": "tf.keras.Model.__lt__", + "tf.TensorArray.__ne__": "tf.keras.Model.__ne__", + "tf.TensorArray.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.TensorArraySpec.__eq__": "tf.TypeSpec.__eq__", + "tf.TensorArraySpec.__ge__": "tf.keras.Model.__ge__", + "tf.TensorArraySpec.__gt__": "tf.keras.Model.__gt__", + "tf.TensorArraySpec.__le__": "tf.keras.Model.__le__", + "tf.TensorArraySpec.__lt__": "tf.keras.Model.__lt__", + "tf.TensorArraySpec.__ne__": "tf.TypeSpec.__ne__", + "tf.TensorArraySpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.TensorShape.__ge__": "tf.keras.Model.__ge__", + "tf.TensorShape.__gt__": "tf.keras.Model.__gt__", + "tf.TensorShape.__le__": "tf.keras.Model.__le__", + "tf.TensorShape.__lt__": "tf.keras.Model.__lt__", + "tf.TensorShape.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.TensorShape.__nonzero__": "tf.TensorShape.__bool__", + "tf.TensorSpec.__ge__": "tf.keras.Model.__ge__", + "tf.TensorSpec.__gt__": "tf.keras.Model.__gt__", + "tf.TensorSpec.__le__": "tf.keras.Model.__le__", + "tf.TensorSpec.__lt__": "tf.keras.Model.__lt__", + "tf.TensorSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.TypeSpec.__ge__": "tf.keras.Model.__ge__", + "tf.TypeSpec.__gt__": "tf.keras.Model.__gt__", + "tf.TypeSpec.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.TypeSpec.__le__": "tf.keras.Model.__le__", + "tf.TypeSpec.__lt__": "tf.keras.Model.__lt__", + "tf.TypeSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.UnconnectedGradients.name": "tf.distribute.InputReplicationMode.name", + "tf.UnconnectedGradients.value": "tf.distribute.InputReplicationMode.value", + "tf.Variable.SaveSliceInfo.__eq__": "tf.keras.Model.__eq__", + "tf.Variable.SaveSliceInfo.__ge__": "tf.keras.Model.__ge__", + "tf.Variable.SaveSliceInfo.__gt__": "tf.keras.Model.__gt__", + "tf.Variable.SaveSliceInfo.__le__": "tf.keras.Model.__le__", + "tf.Variable.SaveSliceInfo.__lt__": "tf.keras.Model.__lt__", + "tf.Variable.SaveSliceInfo.__ne__": "tf.keras.Model.__ne__", + "tf.Variable.SaveSliceInfo.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.Variable.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.VariableAggregation.name": "tf.distribute.InputReplicationMode.name", + "tf.VariableAggregation.value": 
"tf.distribute.InputReplicationMode.value", + "tf.VariableSynchronization.name": "tf.distribute.InputReplicationMode.name", + "tf.VariableSynchronization.value": "tf.distribute.InputReplicationMode.value", + "tf.abs": "tf.math.abs", + "tf.acos": "tf.math.acos", + "tf.acosh": "tf.math.acosh", + "tf.add": "tf.math.add", + "tf.add_n": "tf.math.add_n", + "tf.argmax": "tf.math.argmax", + "tf.argmin": "tf.math.argmin", + "tf.as_dtype": "tf.dtypes.as_dtype", + "tf.as_string": "tf.strings.as_string", + "tf.asin": "tf.math.asin", + "tf.asinh": "tf.math.asinh", + "tf.assert_equal": "tf.debugging.assert_equal", + "tf.assert_greater": "tf.debugging.assert_greater", + "tf.assert_less": "tf.debugging.assert_less", + "tf.assert_rank": "tf.debugging.assert_rank", + "tf.atan": "tf.math.atan", + "tf.atan2": "tf.math.atan2", + "tf.atanh": "tf.math.atanh", + "tf.autodiff.ForwardAccumulator.__eq__": "tf.keras.Model.__eq__", + "tf.autodiff.ForwardAccumulator.__ge__": "tf.keras.Model.__ge__", + "tf.autodiff.ForwardAccumulator.__gt__": "tf.keras.Model.__gt__", + "tf.autodiff.ForwardAccumulator.__le__": "tf.keras.Model.__le__", + "tf.autodiff.ForwardAccumulator.__lt__": "tf.keras.Model.__lt__", + "tf.autodiff.ForwardAccumulator.__ne__": "tf.keras.Model.__ne__", + "tf.autodiff.ForwardAccumulator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.autodiff.GradientTape": "tf.GradientTape", + "tf.autodiff.GradientTape.__eq__": "tf.keras.Model.__eq__", + "tf.autodiff.GradientTape.__ge__": "tf.keras.Model.__ge__", + "tf.autodiff.GradientTape.__gt__": "tf.keras.Model.__gt__", + "tf.autodiff.GradientTape.__le__": "tf.keras.Model.__le__", + "tf.autodiff.GradientTape.__lt__": "tf.keras.Model.__lt__", + "tf.autodiff.GradientTape.__ne__": "tf.keras.Model.__ne__", + "tf.autodiff.GradientTape.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.autograph.experimental.Feature.name": "tf.distribute.InputReplicationMode.name", + "tf.autograph.experimental.Feature.value": "tf.distribute.InputReplicationMode.value", + "tf.bfloat16": "tf.dtypes.bfloat16", + "tf.bool": "tf.dtypes.bool", + "tf.compat.v1.AggregationMethod": "tf.AggregationMethod", + "tf.compat.v1.AggregationMethod.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.AggregationMethod.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.AggregationMethod.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.AggregationMethod.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.AggregationMethod.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.AggregationMethod.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.AggregationMethod.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.AggregationMethod.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.Assert": "tf.debugging.Assert", + "tf.compat.v1.AttrValue.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.AttrValue.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.AttrValue.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.AttrValue.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.AttrValue.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.AttrValue.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.AttrValue.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.AttrValue.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.AttrValue.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.AttrValue.HasField": "tf.train.BytesList.HasField", + 
"tf.compat.v1.AttrValue.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.AttrValue.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.AttrValue.ListValue.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.AttrValue.ListValue.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.AttrValue.ListValue.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.AttrValue.ListValue.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.AttrValue.ListValue.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.AttrValue.ListValue.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.AttrValue.ListValue.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.AttrValue.ListValue.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.AttrValue.ListValue.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.AttrValue.ListValue.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.AttrValue.ListValue.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.AttrValue.ListValue.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.AttrValue.ListValue.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.AttrValue.ListValue.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.AttrValue.ListValue.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.AttrValue.ListValue.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.AttrValue.ListValue.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.AttrValue.ListValue.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.AttrValue.ListValue.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.AttrValue.ListValue.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.AttrValue.ListValue.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.AttrValue.ListValue.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.AttrValue.ListValue.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.AttrValue.ListValue.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.AttrValue.ListValue.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.AttrValue.ListValue.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.AttrValue.ListValue.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.AttrValue.ListValue.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.AttrValue.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.AttrValue.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.AttrValue.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.AttrValue.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.AttrValue.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.AttrValue.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.AttrValue.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.AttrValue.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.AttrValue.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.AttrValue.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.AttrValue.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.AttrValue.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.AttrValue.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.AttrValue.__lt__": "tf.train.BytesList.__lt__", + 
"tf.compat.v1.AttrValue.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.AttrValue.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.ConditionalAccumulator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.ConditionalAccumulator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.ConditionalAccumulator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.ConditionalAccumulator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.ConditionalAccumulator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.ConditionalAccumulator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.ConditionalAccumulator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.ConditionalAccumulator.accumulator_ref": "tf.compat.v1.ConditionalAccumulatorBase.accumulator_ref", + "tf.compat.v1.ConditionalAccumulator.dtype": "tf.compat.v1.ConditionalAccumulatorBase.dtype", + "tf.compat.v1.ConditionalAccumulator.name": "tf.compat.v1.ConditionalAccumulatorBase.name", + "tf.compat.v1.ConditionalAccumulator.num_accumulated": "tf.compat.v1.ConditionalAccumulatorBase.num_accumulated", + "tf.compat.v1.ConditionalAccumulator.set_global_step": "tf.compat.v1.ConditionalAccumulatorBase.set_global_step", + "tf.compat.v1.ConditionalAccumulatorBase.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.ConditionalAccumulatorBase.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.ConditionalAccumulatorBase.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.ConditionalAccumulatorBase.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.ConditionalAccumulatorBase.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.ConditionalAccumulatorBase.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.ConditionalAccumulatorBase.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.ConfigProto.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.ConfigProto.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.ConfigProto.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.ConfigProto.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.ConfigProto.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.ConfigProto.DeviceCountEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.ConfigProto.DeviceCountEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.ConfigProto.DeviceCountEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.ConfigProto.DeviceCountEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.ConfigProto.DeviceCountEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.ConfigProto.DeviceCountEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.ConfigProto.DeviceCountEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.ConfigProto.DeviceCountEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.ConfigProto.DeviceCountEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.ConfigProto.DeviceCountEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.ConfigProto.DeviceCountEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.ConfigProto.DeviceCountEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.ConfigProto.DeviceCountEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.ConfigProto.DeviceCountEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.ConfigProto.DeviceCountEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + 
"tf.compat.v1.ConfigProto.DeviceCountEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.ConfigProto.DeviceCountEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.ConfigProto.DeviceCountEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.ConfigProto.DeviceCountEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.ConfigProto.DeviceCountEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.ConfigProto.DeviceCountEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.ConfigProto.DeviceCountEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.ConfigProto.DeviceCountEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.ConfigProto.DeviceCountEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.ConfigProto.DeviceCountEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.ConfigProto.DeviceCountEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.ConfigProto.DeviceCountEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.ConfigProto.DeviceCountEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.ConfigProto.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.ConfigProto.Experimental.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.ConfigProto.Experimental.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.ConfigProto.Experimental.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.ConfigProto.Experimental.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.ConfigProto.Experimental.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.ConfigProto.Experimental.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.ConfigProto.Experimental.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.ConfigProto.Experimental.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.ConfigProto.Experimental.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.ConfigProto.Experimental.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.ConfigProto.Experimental.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.ConfigProto.Experimental.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.ConfigProto.Experimental.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.ConfigProto.Experimental.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.ConfigProto.Experimental.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.ConfigProto.Experimental.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.ConfigProto.Experimental.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.ConfigProto.Experimental.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.ConfigProto.Experimental.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.ConfigProto.Experimental.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.ConfigProto.Experimental.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.ConfigProto.Experimental.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.ConfigProto.Experimental.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.ConfigProto.Experimental.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.ConfigProto.Experimental.__le__": "tf.train.BytesList.__le__", + 
"tf.compat.v1.ConfigProto.Experimental.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.ConfigProto.Experimental.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.ConfigProto.Experimental.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.ConfigProto.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.ConfigProto.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.ConfigProto.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.ConfigProto.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.ConfigProto.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.ConfigProto.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.ConfigProto.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.ConfigProto.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.ConfigProto.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.ConfigProto.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.ConfigProto.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.ConfigProto.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.ConfigProto.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.ConfigProto.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.ConfigProto.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.ConfigProto.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.ConfigProto.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.ConfigProto.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.ConfigProto.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.ConfigProto.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.ConfigProto.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.ConfigProto.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.CriticalSection": "tf.CriticalSection", + "tf.compat.v1.CriticalSection.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.CriticalSection.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.CriticalSection.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.CriticalSection.__init__": "tf.CriticalSection.__init__", + "tf.compat.v1.CriticalSection.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.CriticalSection.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.CriticalSection.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.CriticalSection.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.CriticalSection.execute": "tf.CriticalSection.execute", + "tf.compat.v1.CriticalSection.name": "tf.CriticalSection.name", + "tf.compat.v1.DType": "tf.dtypes.DType", + "tf.compat.v1.DType.__eq__": "tf.dtypes.DType.__eq__", + "tf.compat.v1.DType.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.DType.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.DType.__init__": "tf.dtypes.DType.__init__", + "tf.compat.v1.DType.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.DType.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.DType.__ne__": "tf.dtypes.DType.__ne__", + "tf.compat.v1.DType.__new__": "tf.dtypes.DType.__new__", + "tf.compat.v1.DType.as_datatype_enum": "tf.dtypes.DType.as_datatype_enum", + "tf.compat.v1.DType.as_numpy_dtype": "tf.dtypes.DType.as_numpy_dtype", + "tf.compat.v1.DType.base_dtype": "tf.dtypes.DType.base_dtype", + "tf.compat.v1.DType.is_bool": "tf.dtypes.DType.is_bool", + "tf.compat.v1.DType.is_compatible_with": "tf.dtypes.DType.is_compatible_with", + "tf.compat.v1.DType.is_complex": 
"tf.dtypes.DType.is_complex", + "tf.compat.v1.DType.is_floating": "tf.dtypes.DType.is_floating", + "tf.compat.v1.DType.is_integer": "tf.dtypes.DType.is_integer", + "tf.compat.v1.DType.is_numpy_compatible": "tf.dtypes.DType.is_numpy_compatible", + "tf.compat.v1.DType.is_quantized": "tf.dtypes.DType.is_quantized", + "tf.compat.v1.DType.is_unsigned": "tf.dtypes.DType.is_unsigned", + "tf.compat.v1.DType.limits": "tf.dtypes.DType.limits", + "tf.compat.v1.DType.max": "tf.dtypes.DType.max", + "tf.compat.v1.DType.min": "tf.dtypes.DType.min", + "tf.compat.v1.DType.name": "tf.dtypes.DType.name", + "tf.compat.v1.DType.real_dtype": "tf.dtypes.DType.real_dtype", + "tf.compat.v1.DType.size": "tf.dtypes.DType.size", + "tf.compat.v1.DeviceSpec.__eq__": "tf.DeviceSpec.__eq__", + "tf.compat.v1.DeviceSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.DeviceSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.DeviceSpec.__init__": "tf.DeviceSpec.__init__", + "tf.compat.v1.DeviceSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.DeviceSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.DeviceSpec.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.DeviceSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.DeviceSpec.make_merged_spec": "tf.DeviceSpec.make_merged_spec", + "tf.compat.v1.DeviceSpec.replace": "tf.DeviceSpec.replace", + "tf.compat.v1.Dimension.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.Event.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.Event.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.Event.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.Event.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.Event.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.Event.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.Event.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.Event.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.Event.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.Event.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.Event.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.Event.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.Event.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.Event.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.Event.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.Event.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.Event.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.Event.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.Event.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.Event.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.Event.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.Event.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.Event.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.Event.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.Event.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.Event.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.Event.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.Event.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.FIFOQueue": "tf.queue.FIFOQueue", + "tf.compat.v1.FIFOQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.FIFOQueue.__ge__": 
"tf.keras.Model.__ge__", + "tf.compat.v1.FIFOQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.FIFOQueue.__init__": "tf.queue.FIFOQueue.__init__", + "tf.compat.v1.FIFOQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.FIFOQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.FIFOQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.FIFOQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.FIFOQueue.close": "tf.queue.QueueBase.close", + "tf.compat.v1.FIFOQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.FIFOQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.FIFOQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.FIFOQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.FIFOQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.FIFOQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.FIFOQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.FIFOQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.FIFOQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.FIFOQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.FIFOQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.FIFOQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.FIFOQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.FixedLenFeature": "tf.io.FixedLenFeature", + "tf.compat.v1.FixedLenFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.FixedLenFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.FixedLenFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.FixedLenFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.FixedLenFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.FixedLenFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.FixedLenFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.FixedLenFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.FixedLenFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.FixedLenFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.FixedLenFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.FixedLenFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.FixedLenFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.FixedLenFeature.__new__": "tf.io.FixedLenFeature.__new__", + "tf.compat.v1.FixedLenFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.FixedLenFeature.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.FixedLenFeature.default_value": "tf.io.FixedLenFeature.default_value", + "tf.compat.v1.FixedLenFeature.dtype": "tf.io.FixedLenFeature.dtype", + "tf.compat.v1.FixedLenFeature.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.FixedLenFeature.shape": "tf.io.FixedLenFeature.shape", + "tf.compat.v1.FixedLenSequenceFeature": "tf.io.FixedLenSequenceFeature", + "tf.compat.v1.FixedLenSequenceFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.FixedLenSequenceFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.FixedLenSequenceFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.FixedLenSequenceFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.FixedLenSequenceFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.FixedLenSequenceFeature.__gt__": 
"tf.config.LogicalDevice.__gt__", + "tf.compat.v1.FixedLenSequenceFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.FixedLenSequenceFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.FixedLenSequenceFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.FixedLenSequenceFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.FixedLenSequenceFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.FixedLenSequenceFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.FixedLenSequenceFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.FixedLenSequenceFeature.__new__": "tf.io.FixedLenSequenceFeature.__new__", + "tf.compat.v1.FixedLenSequenceFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.FixedLenSequenceFeature.allow_missing": "tf.io.FixedLenSequenceFeature.allow_missing", + "tf.compat.v1.FixedLenSequenceFeature.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.FixedLenSequenceFeature.default_value": "tf.io.FixedLenSequenceFeature.default_value", + "tf.compat.v1.FixedLenSequenceFeature.dtype": "tf.io.FixedLenSequenceFeature.dtype", + "tf.compat.v1.FixedLenSequenceFeature.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.FixedLenSequenceFeature.shape": "tf.io.FixedLenSequenceFeature.shape", + "tf.compat.v1.FixedLengthRecordReader.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.FixedLengthRecordReader.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.FixedLengthRecordReader.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.FixedLengthRecordReader.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.FixedLengthRecordReader.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.FixedLengthRecordReader.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.FixedLengthRecordReader.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.FixedLengthRecordReader.num_records_produced": "tf.compat.v1.ReaderBase.num_records_produced", + "tf.compat.v1.FixedLengthRecordReader.num_work_units_completed": "tf.compat.v1.ReaderBase.num_work_units_completed", + "tf.compat.v1.FixedLengthRecordReader.read": "tf.compat.v1.ReaderBase.read", + "tf.compat.v1.FixedLengthRecordReader.read_up_to": "tf.compat.v1.ReaderBase.read_up_to", + "tf.compat.v1.FixedLengthRecordReader.reader_ref": "tf.compat.v1.ReaderBase.reader_ref", + "tf.compat.v1.FixedLengthRecordReader.reset": "tf.compat.v1.ReaderBase.reset", + "tf.compat.v1.FixedLengthRecordReader.restore_state": "tf.compat.v1.ReaderBase.restore_state", + "tf.compat.v1.FixedLengthRecordReader.serialize_state": "tf.compat.v1.ReaderBase.serialize_state", + "tf.compat.v1.FixedLengthRecordReader.supports_serialize": "tf.compat.v1.ReaderBase.supports_serialize", + "tf.compat.v1.GPUOptions.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.GPUOptions.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.GPUOptions.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.GPUOptions.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.GPUOptions.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.GPUOptions.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.GPUOptions.Experimental.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.GPUOptions.Experimental.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.GPUOptions.Experimental.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.GPUOptions.Experimental.ClearField": "tf.train.BytesList.ClearField", + 
"tf.compat.v1.GPUOptions.Experimental.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.GPUOptions.Experimental.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.GPUOptions.Experimental.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.GPUOptions.Experimental.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.GPUOptions.Experimental.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.GPUOptions.Experimental.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.GPUOptions.Experimental.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.GPUOptions.Experimental.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.GPUOptions.Experimental.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.GPUOptions.Experimental.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.GPUOptions.Experimental.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.GPUOptions.Experimental.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.GPUOptions.Experimental.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.GPUOptions.Experimental.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.GPUOptions.Experimental.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.WhichOneof": "tf.train.BytesList.WhichOneof", + 
"tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.GPUOptions.Experimental.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.GPUOptions.Experimental.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.GPUOptions.Experimental.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.GPUOptions.Experimental.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.GPUOptions.Experimental.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.GPUOptions.Experimental.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.GPUOptions.Experimental.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.GPUOptions.Experimental.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.GPUOptions.Experimental.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.GPUOptions.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.GPUOptions.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.GPUOptions.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.GPUOptions.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.GPUOptions.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.GPUOptions.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.GPUOptions.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.GPUOptions.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.GPUOptions.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.GPUOptions.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.GPUOptions.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.GPUOptions.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.GPUOptions.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.GPUOptions.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.GPUOptions.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.GPUOptions.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.GPUOptions.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.GPUOptions.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.GPUOptions.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.GPUOptions.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.GPUOptions.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.GPUOptions.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.GradientTape": "tf.GradientTape", + "tf.compat.v1.GradientTape.__enter__": "tf.autodiff.GradientTape.__enter__", + "tf.compat.v1.GradientTape.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.GradientTape.__exit__": "tf.autodiff.GradientTape.__exit__", + "tf.compat.v1.GradientTape.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.GradientTape.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.GradientTape.__init__": 
"tf.autodiff.GradientTape.__init__", + "tf.compat.v1.GradientTape.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.GradientTape.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.GradientTape.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.GradientTape.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.GradientTape.batch_jacobian": "tf.autodiff.GradientTape.batch_jacobian", + "tf.compat.v1.GradientTape.gradient": "tf.autodiff.GradientTape.gradient", + "tf.compat.v1.GradientTape.jacobian": "tf.autodiff.GradientTape.jacobian", + "tf.compat.v1.GradientTape.reset": "tf.autodiff.GradientTape.reset", + "tf.compat.v1.GradientTape.stop_recording": "tf.autodiff.GradientTape.stop_recording", + "tf.compat.v1.GradientTape.watch": "tf.autodiff.GradientTape.watch", + "tf.compat.v1.GradientTape.watched_variables": "tf.autodiff.GradientTape.watched_variables", + "tf.compat.v1.Graph": "tf.Graph", + "tf.compat.v1.Graph.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.Graph.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.Graph.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.Graph.__init__": "tf.Graph.__init__", + "tf.compat.v1.Graph.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.Graph.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.Graph.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.Graph.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.Graph.add_to_collection": "tf.Graph.add_to_collection", + "tf.compat.v1.Graph.add_to_collections": "tf.Graph.add_to_collections", + "tf.compat.v1.Graph.as_default": "tf.Graph.as_default", + "tf.compat.v1.Graph.as_graph_def": "tf.Graph.as_graph_def", + "tf.compat.v1.Graph.as_graph_element": "tf.Graph.as_graph_element", + "tf.compat.v1.Graph.building_function": "tf.Graph.building_function", + "tf.compat.v1.Graph.clear_collection": "tf.Graph.clear_collection", + "tf.compat.v1.Graph.collections": "tf.Graph.collections", + "tf.compat.v1.Graph.colocate_with": "tf.Graph.colocate_with", + "tf.compat.v1.Graph.container": "tf.Graph.container", + "tf.compat.v1.Graph.control_dependencies": "tf.Graph.control_dependencies", + "tf.compat.v1.Graph.create_op": "tf.Graph.create_op", + "tf.compat.v1.Graph.device": "tf.Graph.device", + "tf.compat.v1.Graph.finalize": "tf.Graph.finalize", + "tf.compat.v1.Graph.finalized": "tf.Graph.finalized", + "tf.compat.v1.Graph.get_all_collection_keys": "tf.Graph.get_all_collection_keys", + "tf.compat.v1.Graph.get_collection": "tf.Graph.get_collection", + "tf.compat.v1.Graph.get_collection_ref": "tf.Graph.get_collection_ref", + "tf.compat.v1.Graph.get_name_scope": "tf.Graph.get_name_scope", + "tf.compat.v1.Graph.get_operation_by_name": "tf.Graph.get_operation_by_name", + "tf.compat.v1.Graph.get_operations": "tf.Graph.get_operations", + "tf.compat.v1.Graph.get_tensor_by_name": "tf.Graph.get_tensor_by_name", + "tf.compat.v1.Graph.gradient_override_map": "tf.Graph.gradient_override_map", + "tf.compat.v1.Graph.graph_def_versions": "tf.Graph.graph_def_versions", + "tf.compat.v1.Graph.is_feedable": "tf.Graph.is_feedable", + "tf.compat.v1.Graph.is_fetchable": "tf.Graph.is_fetchable", + "tf.compat.v1.Graph.name_scope": "tf.Graph.name_scope", + "tf.compat.v1.Graph.prevent_feeding": "tf.Graph.prevent_feeding", + "tf.compat.v1.Graph.prevent_fetching": "tf.Graph.prevent_fetching", + "tf.compat.v1.Graph.seed": "tf.Graph.seed", + "tf.compat.v1.Graph.switch_to_thread_local": "tf.Graph.switch_to_thread_local", + "tf.compat.v1.Graph.unique_name": "tf.Graph.unique_name", + "tf.compat.v1.Graph.version": "tf.Graph.version", + 
"tf.compat.v1.GraphDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.GraphDef.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.GraphDef.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.GraphDef.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.GraphDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.GraphDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.GraphDef.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.GraphDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.GraphDef.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.GraphDef.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.GraphDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.GraphDef.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.GraphDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.GraphDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.GraphDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.GraphDef.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.GraphDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.GraphDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.GraphDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.GraphDef.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.GraphDef.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.GraphDef.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.GraphDef.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.GraphDef.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.GraphDef.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.GraphDef.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.GraphDef.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.GraphDef.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.GraphKeys.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.GraphKeys.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.GraphKeys.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.GraphKeys.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.GraphKeys.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.GraphKeys.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.GraphKeys.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.GraphKeys.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.GraphOptions.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.GraphOptions.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.GraphOptions.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.GraphOptions.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.GraphOptions.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.GraphOptions.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.GraphOptions.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.GraphOptions.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.GraphOptions.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.GraphOptions.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.GraphOptions.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.GraphOptions.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.GraphOptions.MergeFrom": 
"tf.train.BytesList.MergeFrom", + "tf.compat.v1.GraphOptions.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.GraphOptions.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.GraphOptions.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.GraphOptions.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.GraphOptions.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.GraphOptions.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.GraphOptions.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.GraphOptions.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.GraphOptions.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.GraphOptions.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.GraphOptions.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.GraphOptions.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.GraphOptions.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.GraphOptions.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.GraphOptions.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.HistogramProto.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.HistogramProto.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.HistogramProto.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.HistogramProto.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.HistogramProto.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.HistogramProto.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.HistogramProto.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.HistogramProto.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.HistogramProto.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.HistogramProto.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.HistogramProto.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.HistogramProto.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.HistogramProto.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.HistogramProto.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.HistogramProto.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.HistogramProto.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.HistogramProto.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.HistogramProto.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.HistogramProto.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.HistogramProto.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.HistogramProto.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.HistogramProto.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.HistogramProto.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.HistogramProto.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.HistogramProto.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.HistogramProto.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.HistogramProto.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.HistogramProto.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.IdentityReader.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.IdentityReader.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.IdentityReader.__gt__": 
"tf.keras.Model.__gt__", + "tf.compat.v1.IdentityReader.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.IdentityReader.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.IdentityReader.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.IdentityReader.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.IdentityReader.num_records_produced": "tf.compat.v1.ReaderBase.num_records_produced", + "tf.compat.v1.IdentityReader.num_work_units_completed": "tf.compat.v1.ReaderBase.num_work_units_completed", + "tf.compat.v1.IdentityReader.read": "tf.compat.v1.ReaderBase.read", + "tf.compat.v1.IdentityReader.read_up_to": "tf.compat.v1.ReaderBase.read_up_to", + "tf.compat.v1.IdentityReader.reader_ref": "tf.compat.v1.ReaderBase.reader_ref", + "tf.compat.v1.IdentityReader.reset": "tf.compat.v1.ReaderBase.reset", + "tf.compat.v1.IdentityReader.restore_state": "tf.compat.v1.ReaderBase.restore_state", + "tf.compat.v1.IdentityReader.serialize_state": "tf.compat.v1.ReaderBase.serialize_state", + "tf.compat.v1.IdentityReader.supports_serialize": "tf.compat.v1.ReaderBase.supports_serialize", + "tf.compat.v1.IndexedSlices": "tf.IndexedSlices", + "tf.compat.v1.IndexedSlices.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.IndexedSlices.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.IndexedSlices.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.IndexedSlices.__init__": "tf.IndexedSlices.__init__", + "tf.compat.v1.IndexedSlices.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.IndexedSlices.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.IndexedSlices.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.IndexedSlices.__neg__": "tf.IndexedSlices.__neg__", + "tf.compat.v1.IndexedSlices.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.IndexedSlices.consumers": "tf.IndexedSlices.consumers", + "tf.compat.v1.IndexedSlices.dense_shape": "tf.IndexedSlices.dense_shape", + "tf.compat.v1.IndexedSlices.device": "tf.IndexedSlices.device", + "tf.compat.v1.IndexedSlices.dtype": "tf.IndexedSlices.dtype", + "tf.compat.v1.IndexedSlices.graph": "tf.IndexedSlices.graph", + "tf.compat.v1.IndexedSlices.indices": "tf.IndexedSlices.indices", + "tf.compat.v1.IndexedSlices.name": "tf.IndexedSlices.name", + "tf.compat.v1.IndexedSlices.op": "tf.IndexedSlices.op", + "tf.compat.v1.IndexedSlices.shape": "tf.IndexedSlices.shape", + "tf.compat.v1.IndexedSlices.values": "tf.IndexedSlices.values", + "tf.compat.v1.IndexedSlicesSpec": "tf.IndexedSlicesSpec", + "tf.compat.v1.IndexedSlicesSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.compat.v1.IndexedSlicesSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.IndexedSlicesSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.IndexedSlicesSpec.__init__": "tf.IndexedSlicesSpec.__init__", + "tf.compat.v1.IndexedSlicesSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.IndexedSlicesSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.IndexedSlicesSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.compat.v1.IndexedSlicesSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.IndexedSlicesSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.compat.v1.IndexedSlicesSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.compat.v1.IndexedSlicesSpec.value_type": "tf.IndexedSlicesSpec.value_type", + "tf.compat.v1.InteractiveSession.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.InteractiveSession.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.InteractiveSession.__gt__": "tf.keras.Model.__gt__", + 
"tf.compat.v1.InteractiveSession.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.InteractiveSession.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.InteractiveSession.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.InteractiveSession.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.LMDBReader.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.LMDBReader.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.LMDBReader.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.LMDBReader.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.LMDBReader.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.LMDBReader.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.LMDBReader.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.LMDBReader.num_records_produced": "tf.compat.v1.ReaderBase.num_records_produced", + "tf.compat.v1.LMDBReader.num_work_units_completed": "tf.compat.v1.ReaderBase.num_work_units_completed", + "tf.compat.v1.LMDBReader.read": "tf.compat.v1.ReaderBase.read", + "tf.compat.v1.LMDBReader.read_up_to": "tf.compat.v1.ReaderBase.read_up_to", + "tf.compat.v1.LMDBReader.reader_ref": "tf.compat.v1.ReaderBase.reader_ref", + "tf.compat.v1.LMDBReader.reset": "tf.compat.v1.ReaderBase.reset", + "tf.compat.v1.LMDBReader.restore_state": "tf.compat.v1.ReaderBase.restore_state", + "tf.compat.v1.LMDBReader.serialize_state": "tf.compat.v1.ReaderBase.serialize_state", + "tf.compat.v1.LMDBReader.supports_serialize": "tf.compat.v1.ReaderBase.supports_serialize", + "tf.compat.v1.LogMessage.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.LogMessage.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.LogMessage.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.LogMessage.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.LogMessage.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.LogMessage.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.LogMessage.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.LogMessage.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.LogMessage.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.LogMessage.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.LogMessage.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.LogMessage.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.LogMessage.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.LogMessage.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.LogMessage.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.LogMessage.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.LogMessage.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.LogMessage.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.LogMessage.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.LogMessage.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.LogMessage.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.LogMessage.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.LogMessage.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.LogMessage.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.LogMessage.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.LogMessage.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.LogMessage.__ne__": "tf.train.BytesList.__ne__", + 
"tf.compat.v1.LogMessage.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.MetaGraphDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.MetaGraphDef.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.MetaGraphDef.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.MetaGraphDef.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.MetaGraphDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.MetaGraphDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.MetaGraphDef.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.MetaGraphDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.MetaGraphDef.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.MetaGraphDef.HasField": "tf.train.BytesList.HasField", + 
"tf.compat.v1.MetaGraphDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.MetaGraphDef.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.MetaGraphDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.MetaGraphDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__ge__": "tf.train.BytesList.__ge__", + 
"tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.MetaGraphDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.MetaGraphDef.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.MetaGraphDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.MetaGraphDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + 
"tf.compat.v1.MetaGraphDef.SignatureDefEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.MetaGraphDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.MetaGraphDef.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.MetaGraphDef.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.MetaGraphDef.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.MetaGraphDef.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.MetaGraphDef.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.MetaGraphDef.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.MetaGraphDef.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.MetaGraphDef.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.MetaGraphDef.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.Module": "tf.Module", + "tf.compat.v1.Module.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.Module.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.Module.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.Module.__init__": "tf.Module.__init__", + "tf.compat.v1.Module.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.Module.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.Module.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.Module.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.Module.name": "tf.Module.name", + "tf.compat.v1.Module.name_scope": "tf.Module.name_scope", + "tf.compat.v1.Module.submodules": "tf.Module.submodules", + "tf.compat.v1.Module.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.Module.variables": "tf.Module.variables", + "tf.compat.v1.NameAttrList.AttrEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.NameAttrList.AttrEntry.Clear": "tf.train.BytesList.Clear", + 
"tf.compat.v1.NameAttrList.AttrEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.NameAttrList.AttrEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.NameAttrList.AttrEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.NameAttrList.AttrEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.NameAttrList.AttrEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.NameAttrList.AttrEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.NameAttrList.AttrEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.NameAttrList.AttrEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.NameAttrList.AttrEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.NameAttrList.AttrEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.NameAttrList.AttrEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.NameAttrList.AttrEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.NameAttrList.AttrEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.NameAttrList.AttrEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.NameAttrList.AttrEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.NameAttrList.AttrEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.NameAttrList.AttrEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.NameAttrList.AttrEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.NameAttrList.AttrEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.NameAttrList.AttrEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.NameAttrList.AttrEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.NameAttrList.AttrEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.NameAttrList.AttrEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.NameAttrList.AttrEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.NameAttrList.AttrEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.NameAttrList.AttrEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.NameAttrList.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.NameAttrList.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.NameAttrList.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.NameAttrList.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.NameAttrList.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.NameAttrList.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.NameAttrList.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.NameAttrList.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.NameAttrList.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.NameAttrList.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.NameAttrList.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.NameAttrList.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.NameAttrList.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.NameAttrList.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.NameAttrList.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.NameAttrList.SerializePartialToString": 
"tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.NameAttrList.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.NameAttrList.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.NameAttrList.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.NameAttrList.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.NameAttrList.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.NameAttrList.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.NameAttrList.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.NameAttrList.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.NameAttrList.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.NameAttrList.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.NameAttrList.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.NameAttrList.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.NoGradient": "tf.no_gradient", + "tf.compat.v1.NodeDef.AttrEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.NodeDef.AttrEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.NodeDef.AttrEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.NodeDef.AttrEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.NodeDef.AttrEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.NodeDef.AttrEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.NodeDef.AttrEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.NodeDef.AttrEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.NodeDef.AttrEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.NodeDef.AttrEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.NodeDef.AttrEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.NodeDef.AttrEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.NodeDef.AttrEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.NodeDef.AttrEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.NodeDef.AttrEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.NodeDef.AttrEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.NodeDef.AttrEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.NodeDef.AttrEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.NodeDef.AttrEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.NodeDef.AttrEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.NodeDef.AttrEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.NodeDef.AttrEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.NodeDef.AttrEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.NodeDef.AttrEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.NodeDef.AttrEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.NodeDef.AttrEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.NodeDef.AttrEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.NodeDef.AttrEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.NodeDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.NodeDef.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.NodeDef.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.NodeDef.ClearField": "tf.train.BytesList.ClearField", + 
"tf.compat.v1.NodeDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.NodeDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.NodeDef.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.NodeDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.NodeDef.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.NodeDef.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.NodeDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.NodeDef.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.NodeDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.NodeDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.NodeDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.NodeDef.SerializePartialToString": 
"tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.NodeDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.NodeDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.NodeDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.NodeDef.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.NodeDef.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.NodeDef.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.NodeDef.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.NodeDef.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.NodeDef.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.NodeDef.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.NodeDef.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.NodeDef.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.NotDifferentiable": "tf.no_gradient", + "tf.compat.v1.OpError": "tf.errors.OpError", + "tf.compat.v1.OpError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.OpError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.OpError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.OpError.__init__": "tf.errors.OpError.__init__", + "tf.compat.v1.OpError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.OpError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.OpError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.OpError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.OpError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.OpError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.OpError.message": "tf.errors.OpError.message", + "tf.compat.v1.OpError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.OpError.op": "tf.errors.OpError.op", + "tf.compat.v1.OpError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.Operation": "tf.Operation", + "tf.compat.v1.Operation.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.Operation.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.Operation.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.Operation.__init__": "tf.Operation.__init__", + "tf.compat.v1.Operation.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.Operation.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.Operation.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.Operation.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.Operation.colocation_groups": "tf.Operation.colocation_groups", + "tf.compat.v1.Operation.control_inputs": "tf.Operation.control_inputs", + "tf.compat.v1.Operation.device": "tf.Operation.device", + "tf.compat.v1.Operation.get_attr": "tf.Operation.get_attr", + "tf.compat.v1.Operation.graph": "tf.Operation.graph", + "tf.compat.v1.Operation.inputs": "tf.Operation.inputs", + "tf.compat.v1.Operation.name": "tf.Operation.name", + "tf.compat.v1.Operation.node_def": "tf.Operation.node_def", + "tf.compat.v1.Operation.op_def": "tf.Operation.op_def", + "tf.compat.v1.Operation.outputs": "tf.Operation.outputs", + "tf.compat.v1.Operation.run": "tf.Operation.run", + "tf.compat.v1.Operation.traceback": "tf.Operation.traceback", + "tf.compat.v1.Operation.type": "tf.Operation.type", + "tf.compat.v1.Operation.values": "tf.Operation.values", + "tf.compat.v1.OptimizerOptions.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.OptimizerOptions.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.OptimizerOptions.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.OptimizerOptions.ClearField": "tf.train.BytesList.ClearField", + 
"tf.compat.v1.OptimizerOptions.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.OptimizerOptions.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.OptimizerOptions.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.OptimizerOptions.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.OptimizerOptions.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.OptimizerOptions.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.OptimizerOptions.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.OptimizerOptions.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.OptimizerOptions.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.OptimizerOptions.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.OptimizerOptions.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.OptimizerOptions.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.OptimizerOptions.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.OptimizerOptions.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.OptimizerOptions.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.OptimizerOptions.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.OptimizerOptions.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.OptimizerOptions.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.OptimizerOptions.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.OptimizerOptions.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.OptimizerOptions.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.OptimizerOptions.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.OptimizerOptions.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.OptimizerOptions.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.OptionalSpec": "tf.OptionalSpec", + "tf.compat.v1.OptionalSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.compat.v1.OptionalSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.OptionalSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.OptionalSpec.__init__": "tf.OptionalSpec.__init__", + "tf.compat.v1.OptionalSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.OptionalSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.OptionalSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.compat.v1.OptionalSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.OptionalSpec.from_value": "tf.OptionalSpec.from_value", + "tf.compat.v1.OptionalSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.compat.v1.OptionalSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.compat.v1.OptionalSpec.value_type": "tf.OptionalSpec.value_type", + "tf.compat.v1.PaddingFIFOQueue": "tf.queue.PaddingFIFOQueue", + "tf.compat.v1.PaddingFIFOQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.PaddingFIFOQueue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.PaddingFIFOQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.PaddingFIFOQueue.__init__": "tf.queue.PaddingFIFOQueue.__init__", + "tf.compat.v1.PaddingFIFOQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.PaddingFIFOQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.PaddingFIFOQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.PaddingFIFOQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.PaddingFIFOQueue.close": 
"tf.queue.QueueBase.close", + "tf.compat.v1.PaddingFIFOQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.PaddingFIFOQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.PaddingFIFOQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.PaddingFIFOQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.PaddingFIFOQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.PaddingFIFOQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.PaddingFIFOQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.PaddingFIFOQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.PaddingFIFOQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.PaddingFIFOQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.PaddingFIFOQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.PaddingFIFOQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.PaddingFIFOQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.PriorityQueue": "tf.queue.PriorityQueue", + "tf.compat.v1.PriorityQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.PriorityQueue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.PriorityQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.PriorityQueue.__init__": "tf.queue.PriorityQueue.__init__", + "tf.compat.v1.PriorityQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.PriorityQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.PriorityQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.PriorityQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.PriorityQueue.close": "tf.queue.QueueBase.close", + "tf.compat.v1.PriorityQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.PriorityQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.PriorityQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.PriorityQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.PriorityQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.PriorityQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.PriorityQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.PriorityQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.PriorityQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.PriorityQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.PriorityQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.PriorityQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.PriorityQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.QUANTIZED_DTYPES": "tf.dtypes.QUANTIZED_DTYPES", + "tf.compat.v1.QueueBase": "tf.queue.QueueBase", + "tf.compat.v1.QueueBase.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.QueueBase.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.QueueBase.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.QueueBase.__init__": "tf.queue.QueueBase.__init__", + "tf.compat.v1.QueueBase.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.QueueBase.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.QueueBase.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.QueueBase.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.QueueBase.close": "tf.queue.QueueBase.close", + "tf.compat.v1.QueueBase.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.QueueBase.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.QueueBase.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.QueueBase.dtypes": 
"tf.queue.QueueBase.dtypes", + "tf.compat.v1.QueueBase.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.QueueBase.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.QueueBase.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.QueueBase.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.QueueBase.name": "tf.queue.QueueBase.name", + "tf.compat.v1.QueueBase.names": "tf.queue.QueueBase.names", + "tf.compat.v1.QueueBase.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.QueueBase.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.QueueBase.size": "tf.queue.QueueBase.size", + "tf.compat.v1.RaggedTensor": "tf.RaggedTensor", + "tf.compat.v1.RaggedTensor.__abs__": "tf.math.abs", + "tf.compat.v1.RaggedTensor.__add__": "tf.math.add", + "tf.compat.v1.RaggedTensor.__and__": "tf.math.logical_and", + "tf.compat.v1.RaggedTensor.__bool__": "tf.RaggedTensor.__bool__", + "tf.compat.v1.RaggedTensor.__div__": "tf.RaggedTensor.__div__", + "tf.compat.v1.RaggedTensor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.RaggedTensor.__floordiv__": "tf.math.floordiv", + "tf.compat.v1.RaggedTensor.__ge__": "tf.math.greater_equal", + "tf.compat.v1.RaggedTensor.__getitem__": "tf.RaggedTensor.__getitem__", + "tf.compat.v1.RaggedTensor.__gt__": "tf.math.greater", + "tf.compat.v1.RaggedTensor.__init__": "tf.RaggedTensor.__init__", + "tf.compat.v1.RaggedTensor.__invert__": "tf.math.logical_not", + "tf.compat.v1.RaggedTensor.__le__": "tf.math.less_equal", + "tf.compat.v1.RaggedTensor.__lt__": "tf.math.less", + "tf.compat.v1.RaggedTensor.__mod__": "tf.math.floormod", + "tf.compat.v1.RaggedTensor.__mul__": "tf.math.multiply", + "tf.compat.v1.RaggedTensor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.RaggedTensor.__neg__": "tf.math.negative", + "tf.compat.v1.RaggedTensor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.RaggedTensor.__nonzero__": "tf.RaggedTensor.__bool__", + "tf.compat.v1.RaggedTensor.__or__": "tf.math.logical_or", + "tf.compat.v1.RaggedTensor.__pow__": "tf.math.pow", + "tf.compat.v1.RaggedTensor.__radd__": "tf.RaggedTensor.__radd__", + "tf.compat.v1.RaggedTensor.__rand__": "tf.RaggedTensor.__rand__", + "tf.compat.v1.RaggedTensor.__rdiv__": "tf.RaggedTensor.__rdiv__", + "tf.compat.v1.RaggedTensor.__rfloordiv__": "tf.RaggedTensor.__rfloordiv__", + "tf.compat.v1.RaggedTensor.__rmod__": "tf.RaggedTensor.__rmod__", + "tf.compat.v1.RaggedTensor.__rmul__": "tf.RaggedTensor.__rmul__", + "tf.compat.v1.RaggedTensor.__ror__": "tf.RaggedTensor.__ror__", + "tf.compat.v1.RaggedTensor.__rpow__": "tf.RaggedTensor.__rpow__", + "tf.compat.v1.RaggedTensor.__rsub__": "tf.RaggedTensor.__rsub__", + "tf.compat.v1.RaggedTensor.__rtruediv__": "tf.RaggedTensor.__rtruediv__", + "tf.compat.v1.RaggedTensor.__rxor__": "tf.RaggedTensor.__rxor__", + "tf.compat.v1.RaggedTensor.__sub__": "tf.math.subtract", + "tf.compat.v1.RaggedTensor.__truediv__": "tf.math.truediv", + "tf.compat.v1.RaggedTensor.__xor__": "tf.math.logical_xor", + "tf.compat.v1.RaggedTensor.bounding_shape": "tf.RaggedTensor.bounding_shape", + "tf.compat.v1.RaggedTensor.consumers": "tf.RaggedTensor.consumers", + "tf.compat.v1.RaggedTensor.dtype": "tf.RaggedTensor.dtype", + "tf.compat.v1.RaggedTensor.flat_values": "tf.RaggedTensor.flat_values", + "tf.compat.v1.RaggedTensor.merge_dims": "tf.RaggedTensor.merge_dims", + "tf.compat.v1.RaggedTensor.nested_row_lengths": "tf.RaggedTensor.nested_row_lengths", + "tf.compat.v1.RaggedTensor.nested_row_splits": "tf.RaggedTensor.nested_row_splits", + 
"tf.compat.v1.RaggedTensor.nested_value_rowids": "tf.RaggedTensor.nested_value_rowids", + "tf.compat.v1.RaggedTensor.nrows": "tf.RaggedTensor.nrows", + "tf.compat.v1.RaggedTensor.numpy": "tf.RaggedTensor.numpy", + "tf.compat.v1.RaggedTensor.ragged_rank": "tf.RaggedTensor.ragged_rank", + "tf.compat.v1.RaggedTensor.row_lengths": "tf.RaggedTensor.row_lengths", + "tf.compat.v1.RaggedTensor.row_limits": "tf.RaggedTensor.row_limits", + "tf.compat.v1.RaggedTensor.row_splits": "tf.RaggedTensor.row_splits", + "tf.compat.v1.RaggedTensor.row_starts": "tf.RaggedTensor.row_starts", + "tf.compat.v1.RaggedTensor.shape": "tf.RaggedTensor.shape", + "tf.compat.v1.RaggedTensor.to_list": "tf.RaggedTensor.to_list", + "tf.compat.v1.RaggedTensor.to_sparse": "tf.RaggedTensor.to_sparse", + "tf.compat.v1.RaggedTensor.to_tensor": "tf.RaggedTensor.to_tensor", + "tf.compat.v1.RaggedTensor.uniform_row_length": "tf.RaggedTensor.uniform_row_length", + "tf.compat.v1.RaggedTensor.value_rowids": "tf.RaggedTensor.value_rowids", + "tf.compat.v1.RaggedTensor.values": "tf.RaggedTensor.values", + "tf.compat.v1.RaggedTensor.with_flat_values": "tf.RaggedTensor.with_flat_values", + "tf.compat.v1.RaggedTensor.with_row_splits_dtype": "tf.RaggedTensor.with_row_splits_dtype", + "tf.compat.v1.RaggedTensor.with_values": "tf.RaggedTensor.with_values", + "tf.compat.v1.RaggedTensorSpec": "tf.RaggedTensorSpec", + "tf.compat.v1.RaggedTensorSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.compat.v1.RaggedTensorSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.RaggedTensorSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.RaggedTensorSpec.__init__": "tf.RaggedTensorSpec.__init__", + "tf.compat.v1.RaggedTensorSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.RaggedTensorSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.RaggedTensorSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.compat.v1.RaggedTensorSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.RaggedTensorSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.compat.v1.RaggedTensorSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.compat.v1.RaggedTensorSpec.value_type": "tf.RaggedTensorSpec.value_type", + "tf.compat.v1.RandomShuffleQueue": "tf.queue.RandomShuffleQueue", + "tf.compat.v1.RandomShuffleQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.RandomShuffleQueue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.RandomShuffleQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.RandomShuffleQueue.__init__": "tf.queue.RandomShuffleQueue.__init__", + "tf.compat.v1.RandomShuffleQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.RandomShuffleQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.RandomShuffleQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.RandomShuffleQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.RandomShuffleQueue.close": "tf.queue.QueueBase.close", + "tf.compat.v1.RandomShuffleQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.RandomShuffleQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.RandomShuffleQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.RandomShuffleQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.RandomShuffleQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.RandomShuffleQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.RandomShuffleQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.RandomShuffleQueue.is_closed": 
"tf.queue.QueueBase.is_closed", + "tf.compat.v1.RandomShuffleQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.RandomShuffleQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.RandomShuffleQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.RandomShuffleQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.RandomShuffleQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.ReaderBase.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.ReaderBase.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.ReaderBase.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.ReaderBase.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.ReaderBase.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.ReaderBase.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.ReaderBase.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.RegisterGradient": "tf.RegisterGradient", + "tf.compat.v1.RegisterGradient.__call__": "tf.RegisterGradient.__call__", + "tf.compat.v1.RegisterGradient.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.RegisterGradient.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.RegisterGradient.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.RegisterGradient.__init__": "tf.RegisterGradient.__init__", + "tf.compat.v1.RegisterGradient.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.RegisterGradient.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.RegisterGradient.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.RegisterGradient.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.RunMetadata.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.RunMetadata.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.RunMetadata.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.RunMetadata.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.RunMetadata.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.RunMetadata.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.RunMetadata.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.RunMetadata.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.RunMetadata.FunctionGraphs.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.RunMetadata.FunctionGraphs.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.RunMetadata.FunctionGraphs.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.RunMetadata.FunctionGraphs.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.RunMetadata.FunctionGraphs.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.RunMetadata.FunctionGraphs.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.RunMetadata.FunctionGraphs.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.RunMetadata.FunctionGraphs.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.RunMetadata.FunctionGraphs.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.RunMetadata.FunctionGraphs.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.RunMetadata.FunctionGraphs.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.RunMetadata.FunctionGraphs.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.RunMetadata.FunctionGraphs.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.RunMetadata.FunctionGraphs.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.RunMetadata.FunctionGraphs.ParseFromString": 
"tf.train.BytesList.ParseFromString", + "tf.compat.v1.RunMetadata.FunctionGraphs.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.RunMetadata.FunctionGraphs.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.RunMetadata.FunctionGraphs.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.RunMetadata.FunctionGraphs.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.RunMetadata.FunctionGraphs.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.RunMetadata.FunctionGraphs.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.RunMetadata.FunctionGraphs.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.RunMetadata.FunctionGraphs.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.RunMetadata.FunctionGraphs.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.RunMetadata.FunctionGraphs.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.RunMetadata.FunctionGraphs.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.RunMetadata.FunctionGraphs.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.RunMetadata.FunctionGraphs.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.RunMetadata.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.RunMetadata.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.RunMetadata.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.RunMetadata.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.RunMetadata.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.RunMetadata.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.RunMetadata.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.RunMetadata.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.RunMetadata.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.RunMetadata.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.RunMetadata.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.RunMetadata.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.RunMetadata.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.RunMetadata.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.RunMetadata.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.RunMetadata.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.RunMetadata.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.RunMetadata.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.RunMetadata.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.RunMetadata.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.RunOptions.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.RunOptions.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.RunOptions.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.RunOptions.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.RunOptions.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.RunOptions.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.RunOptions.Experimental.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.RunOptions.Experimental.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.RunOptions.Experimental.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.RunOptions.Experimental.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.RunOptions.Experimental.CopyFrom": "tf.train.BytesList.CopyFrom", + 
"tf.compat.v1.RunOptions.Experimental.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.RunOptions.Experimental.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.RunOptions.Experimental.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.RunOptions.Experimental.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.RunOptions.Experimental.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.RunOptions.Experimental.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.RunOptions.Experimental.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.RunOptions.Experimental.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.RunOptions.Experimental.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.RunOptions.Experimental.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.RunOptions.Experimental.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.RunOptions.Experimental.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.RunOptions.Experimental.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.RunOptions.Experimental.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.RunOptions.Experimental.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.RunOptions.Experimental.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.RunOptions.Experimental.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.RunOptions.Experimental.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.RunOptions.Experimental.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.RunOptions.Experimental.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.RunOptions.Experimental.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.RunOptions.Experimental.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.RunOptions.Experimental.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.RunOptions.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.RunOptions.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.RunOptions.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.RunOptions.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.RunOptions.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.RunOptions.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.RunOptions.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.RunOptions.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.RunOptions.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.RunOptions.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.RunOptions.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.RunOptions.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.RunOptions.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.RunOptions.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.RunOptions.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.RunOptions.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.RunOptions.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.RunOptions.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.RunOptions.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.RunOptions.__lt__": 
"tf.train.BytesList.__lt__", + "tf.compat.v1.RunOptions.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.RunOptions.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.Session.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.Session.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.Session.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.Session.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.Session.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.Session.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.Session.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.Session.as_default": "tf.compat.v1.InteractiveSession.as_default", + "tf.compat.v1.Session.graph": "tf.compat.v1.InteractiveSession.graph", + "tf.compat.v1.Session.graph_def": "tf.compat.v1.InteractiveSession.graph_def", + "tf.compat.v1.Session.list_devices": "tf.compat.v1.InteractiveSession.list_devices", + "tf.compat.v1.Session.make_callable": "tf.compat.v1.InteractiveSession.make_callable", + "tf.compat.v1.Session.partial_run": "tf.compat.v1.InteractiveSession.partial_run", + "tf.compat.v1.Session.partial_run_setup": "tf.compat.v1.InteractiveSession.partial_run_setup", + "tf.compat.v1.Session.run": "tf.compat.v1.InteractiveSession.run", + "tf.compat.v1.Session.sess_str": "tf.compat.v1.InteractiveSession.sess_str", + "tf.compat.v1.SessionLog.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.SessionLog.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.SessionLog.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.SessionLog.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.SessionLog.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.SessionLog.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.SessionLog.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.SessionLog.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.SessionLog.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.SessionLog.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.SessionLog.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.SessionLog.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.SessionLog.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.SessionLog.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.SessionLog.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.SessionLog.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.SessionLog.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.SessionLog.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.SessionLog.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.SessionLog.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.SessionLog.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.SessionLog.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.SessionLog.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.SessionLog.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.SessionLog.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.SessionLog.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.SessionLog.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.SessionLog.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.SparseConditionalAccumulator.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.SparseConditionalAccumulator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.SparseConditionalAccumulator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.SparseConditionalAccumulator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.SparseConditionalAccumulator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.SparseConditionalAccumulator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.SparseConditionalAccumulator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.SparseConditionalAccumulator.accumulator_ref": "tf.compat.v1.ConditionalAccumulatorBase.accumulator_ref", + "tf.compat.v1.SparseConditionalAccumulator.dtype": "tf.compat.v1.ConditionalAccumulatorBase.dtype", + "tf.compat.v1.SparseConditionalAccumulator.name": "tf.compat.v1.ConditionalAccumulatorBase.name", + "tf.compat.v1.SparseFeature": "tf.io.SparseFeature", + "tf.compat.v1.SparseFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.SparseFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.SparseFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.SparseFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.SparseFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.SparseFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.SparseFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.SparseFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.SparseFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.SparseFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.SparseFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.SparseFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.SparseFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.SparseFeature.__new__": "tf.io.SparseFeature.__new__", + "tf.compat.v1.SparseFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.SparseFeature.already_sorted": "tf.io.SparseFeature.already_sorted", + "tf.compat.v1.SparseFeature.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.SparseFeature.dtype": "tf.io.SparseFeature.dtype", + "tf.compat.v1.SparseFeature.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.SparseFeature.index_key": "tf.io.SparseFeature.index_key", + "tf.compat.v1.SparseFeature.size": "tf.io.SparseFeature.size", + "tf.compat.v1.SparseFeature.value_key": "tf.io.SparseFeature.value_key", + "tf.compat.v1.SparseTensor": "tf.sparse.SparseTensor", + "tf.compat.v1.SparseTensor.__div__": "tf.sparse.SparseTensor.__div__", + "tf.compat.v1.SparseTensor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.SparseTensor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.SparseTensor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.SparseTensor.__init__": "tf.sparse.SparseTensor.__init__", + "tf.compat.v1.SparseTensor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.SparseTensor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.SparseTensor.__mul__": "tf.sparse.SparseTensor.__mul__", + "tf.compat.v1.SparseTensor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.SparseTensor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.SparseTensor.__truediv__": "tf.sparse.SparseTensor.__truediv__", + "tf.compat.v1.SparseTensor.consumers": "tf.sparse.SparseTensor.consumers", + "tf.compat.v1.SparseTensor.dense_shape": "tf.sparse.SparseTensor.dense_shape", + "tf.compat.v1.SparseTensor.dtype": 
"tf.sparse.SparseTensor.dtype", + "tf.compat.v1.SparseTensor.eval": "tf.sparse.SparseTensor.eval", + "tf.compat.v1.SparseTensor.get_shape": "tf.sparse.SparseTensor.get_shape", + "tf.compat.v1.SparseTensor.graph": "tf.sparse.SparseTensor.graph", + "tf.compat.v1.SparseTensor.indices": "tf.sparse.SparseTensor.indices", + "tf.compat.v1.SparseTensor.op": "tf.sparse.SparseTensor.op", + "tf.compat.v1.SparseTensor.shape": "tf.sparse.SparseTensor.shape", + "tf.compat.v1.SparseTensor.values": "tf.sparse.SparseTensor.values", + "tf.compat.v1.SparseTensorSpec": "tf.SparseTensorSpec", + "tf.compat.v1.SparseTensorSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.compat.v1.SparseTensorSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.SparseTensorSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.SparseTensorSpec.__init__": "tf.SparseTensorSpec.__init__", + "tf.compat.v1.SparseTensorSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.SparseTensorSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.SparseTensorSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.compat.v1.SparseTensorSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.SparseTensorSpec.dtype": "tf.SparseTensorSpec.dtype", + "tf.compat.v1.SparseTensorSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.compat.v1.SparseTensorSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.compat.v1.SparseTensorSpec.shape": "tf.SparseTensorSpec.shape", + "tf.compat.v1.SparseTensorSpec.value_type": "tf.SparseTensorSpec.value_type", + "tf.compat.v1.SparseTensorValue.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.SparseTensorValue.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.SparseTensorValue.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.SparseTensorValue.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.SparseTensorValue.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.SparseTensorValue.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.SparseTensorValue.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.SparseTensorValue.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.SparseTensorValue.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.SparseTensorValue.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.SparseTensorValue.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.SparseTensorValue.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.SparseTensorValue.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.SparseTensorValue.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.SparseTensorValue.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.SparseTensorValue.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.Summary.Audio.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.Summary.Audio.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.Summary.Audio.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.Summary.Audio.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.Summary.Audio.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.Summary.Audio.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.Summary.Audio.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.Summary.Audio.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.Summary.Audio.HasExtension": "tf.train.BytesList.HasExtension", 
+ "tf.compat.v1.Summary.Audio.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.Summary.Audio.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.Summary.Audio.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.Summary.Audio.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.Summary.Audio.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.Summary.Audio.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.Summary.Audio.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.Summary.Audio.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.Summary.Audio.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.Summary.Audio.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.Summary.Audio.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.Summary.Audio.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.Summary.Audio.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.Summary.Audio.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.Summary.Audio.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.Summary.Audio.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.Summary.Audio.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.Summary.Audio.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.Summary.Audio.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.Summary.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.Summary.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.Summary.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.Summary.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.Summary.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.Summary.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.Summary.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.Summary.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.Summary.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.Summary.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.Summary.Image.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.Summary.Image.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.Summary.Image.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.Summary.Image.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.Summary.Image.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.Summary.Image.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.Summary.Image.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.Summary.Image.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.Summary.Image.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.Summary.Image.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.Summary.Image.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.Summary.Image.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.Summary.Image.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.Summary.Image.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.Summary.Image.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.Summary.Image.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.Summary.Image.SerializeToString": 
"tf.train.BytesList.SerializeToString", + "tf.compat.v1.Summary.Image.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.Summary.Image.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.Summary.Image.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.Summary.Image.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.Summary.Image.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.Summary.Image.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.Summary.Image.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.Summary.Image.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.Summary.Image.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.Summary.Image.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.Summary.Image.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.Summary.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.Summary.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.Summary.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.Summary.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.Summary.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.Summary.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.Summary.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.Summary.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.Summary.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.Summary.Value.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.Summary.Value.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.Summary.Value.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.Summary.Value.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.Summary.Value.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.Summary.Value.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.Summary.Value.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.Summary.Value.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.Summary.Value.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.Summary.Value.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.Summary.Value.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.Summary.Value.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.Summary.Value.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.Summary.Value.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.Summary.Value.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.Summary.Value.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.Summary.Value.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.Summary.Value.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.Summary.Value.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.Summary.Value.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.Summary.Value.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.Summary.Value.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.Summary.Value.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.Summary.Value.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.Summary.Value.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.Summary.Value.__lt__": 
"tf.train.BytesList.__lt__", + "tf.compat.v1.Summary.Value.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.Summary.Value.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.Summary.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.Summary.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.Summary.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.Summary.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.Summary.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.Summary.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.Summary.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.Summary.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.Summary.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.SummaryMetadata.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.SummaryMetadata.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.SummaryMetadata.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.SummaryMetadata.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.SummaryMetadata.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.SummaryMetadata.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.SummaryMetadata.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.SummaryMetadata.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.SummaryMetadata.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.SummaryMetadata.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.SummaryMetadata.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.SummaryMetadata.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.SummaryMetadata.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.SummaryMetadata.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.SummaryMetadata.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.SummaryMetadata.PluginData.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.SummaryMetadata.PluginData.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.SummaryMetadata.PluginData.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.SummaryMetadata.PluginData.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.SummaryMetadata.PluginData.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.SummaryMetadata.PluginData.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.SummaryMetadata.PluginData.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.SummaryMetadata.PluginData.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.SummaryMetadata.PluginData.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.SummaryMetadata.PluginData.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.SummaryMetadata.PluginData.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.SummaryMetadata.PluginData.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.SummaryMetadata.PluginData.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.SummaryMetadata.PluginData.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.SummaryMetadata.PluginData.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.SummaryMetadata.PluginData.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.SummaryMetadata.PluginData.SerializeToString": 
"tf.train.BytesList.SerializeToString", + "tf.compat.v1.SummaryMetadata.PluginData.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.SummaryMetadata.PluginData.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.SummaryMetadata.PluginData.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.SummaryMetadata.PluginData.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.SummaryMetadata.PluginData.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.SummaryMetadata.PluginData.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.SummaryMetadata.PluginData.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.SummaryMetadata.PluginData.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.SummaryMetadata.PluginData.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.SummaryMetadata.PluginData.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.SummaryMetadata.PluginData.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.SummaryMetadata.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.SummaryMetadata.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.SummaryMetadata.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.SummaryMetadata.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.SummaryMetadata.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.SummaryMetadata.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.SummaryMetadata.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.SummaryMetadata.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.SummaryMetadata.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.SummaryMetadata.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.SummaryMetadata.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.SummaryMetadata.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.SummaryMetadata.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.TFRecordReader.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.TFRecordReader.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.TFRecordReader.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.TFRecordReader.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.TFRecordReader.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.TFRecordReader.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.TFRecordReader.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.TFRecordReader.num_records_produced": "tf.compat.v1.ReaderBase.num_records_produced", + "tf.compat.v1.TFRecordReader.num_work_units_completed": "tf.compat.v1.ReaderBase.num_work_units_completed", + "tf.compat.v1.TFRecordReader.read": "tf.compat.v1.ReaderBase.read", + "tf.compat.v1.TFRecordReader.read_up_to": "tf.compat.v1.ReaderBase.read_up_to", + "tf.compat.v1.TFRecordReader.reader_ref": "tf.compat.v1.ReaderBase.reader_ref", + "tf.compat.v1.TFRecordReader.reset": "tf.compat.v1.ReaderBase.reset", + "tf.compat.v1.TFRecordReader.restore_state": "tf.compat.v1.ReaderBase.restore_state", + "tf.compat.v1.TFRecordReader.serialize_state": "tf.compat.v1.ReaderBase.serialize_state", + "tf.compat.v1.TFRecordReader.supports_serialize": "tf.compat.v1.ReaderBase.supports_serialize", + "tf.compat.v1.Tensor": "tf.Tensor", + "tf.compat.v1.Tensor.OVERLOADABLE_OPERATORS": "tf.Tensor.OVERLOADABLE_OPERATORS", + "tf.compat.v1.Tensor.__abs__": "tf.math.abs", + "tf.compat.v1.Tensor.__add__": "tf.Tensor.__add__", + "tf.compat.v1.Tensor.__and__": "tf.Tensor.__and__", + "tf.compat.v1.Tensor.__bool__": 
"tf.Tensor.__bool__", + "tf.compat.v1.Tensor.__div__": "tf.Tensor.__div__", + "tf.compat.v1.Tensor.__eq__": "tf.Tensor.__eq__", + "tf.compat.v1.Tensor.__floordiv__": "tf.Tensor.__floordiv__", + "tf.compat.v1.Tensor.__ge__": "tf.math.greater_equal", + "tf.compat.v1.Tensor.__getitem__": "tf.Tensor.__getitem__", + "tf.compat.v1.Tensor.__gt__": "tf.math.greater", + "tf.compat.v1.Tensor.__init__": "tf.Tensor.__init__", + "tf.compat.v1.Tensor.__invert__": "tf.math.logical_not", + "tf.compat.v1.Tensor.__iter__": "tf.Tensor.__iter__", + "tf.compat.v1.Tensor.__le__": "tf.math.less_equal", + "tf.compat.v1.Tensor.__len__": "tf.Tensor.__len__", + "tf.compat.v1.Tensor.__lt__": "tf.math.less", + "tf.compat.v1.Tensor.__matmul__": "tf.Tensor.__matmul__", + "tf.compat.v1.Tensor.__mod__": "tf.Tensor.__mod__", + "tf.compat.v1.Tensor.__mul__": "tf.Tensor.__mul__", + "tf.compat.v1.Tensor.__ne__": "tf.Tensor.__ne__", + "tf.compat.v1.Tensor.__neg__": "tf.math.negative", + "tf.compat.v1.Tensor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.Tensor.__nonzero__": "tf.Tensor.__nonzero__", + "tf.compat.v1.Tensor.__or__": "tf.Tensor.__or__", + "tf.compat.v1.Tensor.__pow__": "tf.Tensor.__pow__", + "tf.compat.v1.Tensor.__radd__": "tf.Tensor.__radd__", + "tf.compat.v1.Tensor.__rand__": "tf.Tensor.__rand__", + "tf.compat.v1.Tensor.__rdiv__": "tf.Tensor.__rdiv__", + "tf.compat.v1.Tensor.__rfloordiv__": "tf.Tensor.__rfloordiv__", + "tf.compat.v1.Tensor.__rmatmul__": "tf.Tensor.__rmatmul__", + "tf.compat.v1.Tensor.__rmod__": "tf.Tensor.__rmod__", + "tf.compat.v1.Tensor.__rmul__": "tf.Tensor.__rmul__", + "tf.compat.v1.Tensor.__ror__": "tf.Tensor.__ror__", + "tf.compat.v1.Tensor.__rpow__": "tf.Tensor.__rpow__", + "tf.compat.v1.Tensor.__rsub__": "tf.Tensor.__rsub__", + "tf.compat.v1.Tensor.__rtruediv__": "tf.Tensor.__rtruediv__", + "tf.compat.v1.Tensor.__rxor__": "tf.Tensor.__rxor__", + "tf.compat.v1.Tensor.__sub__": "tf.Tensor.__sub__", + "tf.compat.v1.Tensor.__truediv__": "tf.Tensor.__truediv__", + "tf.compat.v1.Tensor.__xor__": "tf.Tensor.__xor__", + "tf.compat.v1.Tensor.consumers": "tf.Tensor.consumers", + "tf.compat.v1.Tensor.device": "tf.Tensor.device", + "tf.compat.v1.Tensor.dtype": "tf.Tensor.dtype", + "tf.compat.v1.Tensor.eval": "tf.Tensor.eval", + "tf.compat.v1.Tensor.experimental_ref": "tf.Tensor.experimental_ref", + "tf.compat.v1.Tensor.get_shape": "tf.Tensor.get_shape", + "tf.compat.v1.Tensor.graph": "tf.Tensor.graph", + "tf.compat.v1.Tensor.name": "tf.Tensor.name", + "tf.compat.v1.Tensor.op": "tf.Tensor.op", + "tf.compat.v1.Tensor.ref": "tf.Tensor.ref", + "tf.compat.v1.Tensor.set_shape": "tf.Tensor.set_shape", + "tf.compat.v1.Tensor.shape": "tf.Tensor.shape", + "tf.compat.v1.Tensor.value_index": "tf.Tensor.value_index", + "tf.compat.v1.TensorArray": "tf.TensorArray", + "tf.compat.v1.TensorArray.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.TensorArray.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.TensorArray.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.TensorArray.__init__": "tf.TensorArray.__init__", + "tf.compat.v1.TensorArray.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.TensorArray.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.TensorArray.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.TensorArray.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.TensorArray.close": "tf.TensorArray.close", + "tf.compat.v1.TensorArray.concat": "tf.TensorArray.concat", + "tf.compat.v1.TensorArray.dtype": "tf.TensorArray.dtype", + "tf.compat.v1.TensorArray.dynamic_size": 
"tf.TensorArray.dynamic_size", + "tf.compat.v1.TensorArray.element_shape": "tf.TensorArray.element_shape", + "tf.compat.v1.TensorArray.flow": "tf.TensorArray.flow", + "tf.compat.v1.TensorArray.gather": "tf.TensorArray.gather", + "tf.compat.v1.TensorArray.grad": "tf.TensorArray.grad", + "tf.compat.v1.TensorArray.handle": "tf.TensorArray.handle", + "tf.compat.v1.TensorArray.identity": "tf.TensorArray.identity", + "tf.compat.v1.TensorArray.read": "tf.TensorArray.read", + "tf.compat.v1.TensorArray.scatter": "tf.TensorArray.scatter", + "tf.compat.v1.TensorArray.size": "tf.TensorArray.size", + "tf.compat.v1.TensorArray.split": "tf.TensorArray.split", + "tf.compat.v1.TensorArray.stack": "tf.TensorArray.stack", + "tf.compat.v1.TensorArray.unstack": "tf.TensorArray.unstack", + "tf.compat.v1.TensorArray.write": "tf.TensorArray.write", + "tf.compat.v1.TensorArraySpec": "tf.TensorArraySpec", + "tf.compat.v1.TensorArraySpec.__eq__": "tf.TypeSpec.__eq__", + "tf.compat.v1.TensorArraySpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.TensorArraySpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.TensorArraySpec.__init__": "tf.TensorArraySpec.__init__", + "tf.compat.v1.TensorArraySpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.TensorArraySpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.TensorArraySpec.__ne__": "tf.TypeSpec.__ne__", + "tf.compat.v1.TensorArraySpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.TensorArraySpec.from_value": "tf.TensorArraySpec.from_value", + "tf.compat.v1.TensorArraySpec.is_compatible_with": "tf.TensorArraySpec.is_compatible_with", + "tf.compat.v1.TensorArraySpec.most_specific_compatible_type": "tf.TensorArraySpec.most_specific_compatible_type", + "tf.compat.v1.TensorArraySpec.value_type": "tf.TensorArraySpec.value_type", + "tf.compat.v1.TensorInfo.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.TensorInfo.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.TensorInfo.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.TensorInfo.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.TensorInfo.CompositeTensor.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.TensorInfo.CompositeTensor.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.TensorInfo.CompositeTensor.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.TensorInfo.CompositeTensor.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.TensorInfo.CompositeTensor.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.TensorInfo.CompositeTensor.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.TensorInfo.CompositeTensor.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.TensorInfo.CompositeTensor.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.TensorInfo.CompositeTensor.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.TensorInfo.CompositeTensor.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.TensorInfo.CompositeTensor.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.TensorInfo.CompositeTensor.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.TensorInfo.CompositeTensor.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.TensorInfo.CompositeTensor.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.TensorInfo.CompositeTensor.ParseFromString": "tf.train.BytesList.ParseFromString", + 
"tf.compat.v1.TensorInfo.CompositeTensor.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.TensorInfo.CompositeTensor.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.TensorInfo.CompositeTensor.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.TensorInfo.CompositeTensor.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.TensorInfo.CompositeTensor.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.TensorInfo.CompositeTensor.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.TensorInfo.CompositeTensor.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.TensorInfo.CompositeTensor.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.TensorInfo.CompositeTensor.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.TensorInfo.CompositeTensor.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.TensorInfo.CompositeTensor.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.TensorInfo.CompositeTensor.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.TensorInfo.CompositeTensor.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.TensorInfo.CooSparse.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.TensorInfo.CooSparse.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.TensorInfo.CooSparse.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.TensorInfo.CooSparse.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.TensorInfo.CooSparse.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.TensorInfo.CooSparse.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.TensorInfo.CooSparse.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.TensorInfo.CooSparse.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.TensorInfo.CooSparse.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.TensorInfo.CooSparse.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.TensorInfo.CooSparse.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.TensorInfo.CooSparse.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.TensorInfo.CooSparse.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.TensorInfo.CooSparse.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.TensorInfo.CooSparse.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.TensorInfo.CooSparse.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.TensorInfo.CooSparse.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.TensorInfo.CooSparse.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.TensorInfo.CooSparse.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.TensorInfo.CooSparse.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.TensorInfo.CooSparse.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.TensorInfo.CooSparse.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.TensorInfo.CooSparse.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.TensorInfo.CooSparse.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.TensorInfo.CooSparse.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.TensorInfo.CooSparse.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.TensorInfo.CooSparse.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.TensorInfo.CooSparse.__new__": "tf.train.BytesList.__new__", + 
"tf.compat.v1.TensorInfo.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.TensorInfo.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.TensorInfo.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.TensorInfo.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.TensorInfo.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.TensorInfo.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.TensorInfo.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.TensorInfo.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.TensorInfo.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.TensorInfo.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.TensorInfo.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.TensorInfo.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.TensorInfo.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.TensorInfo.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.TensorInfo.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.TensorInfo.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.TensorInfo.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.TensorInfo.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.TensorInfo.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.TensorInfo.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.TensorInfo.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.TensorInfo.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.TensorInfo.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.TensorInfo.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.TensorShape": "tf.TensorShape", + "tf.compat.v1.TensorShape.__add__": "tf.TensorShape.__add__", + "tf.compat.v1.TensorShape.__bool__": "tf.TensorShape.__bool__", + "tf.compat.v1.TensorShape.__concat__": "tf.TensorShape.__concat__", + "tf.compat.v1.TensorShape.__eq__": "tf.TensorShape.__eq__", + "tf.compat.v1.TensorShape.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.TensorShape.__getitem__": "tf.TensorShape.__getitem__", + "tf.compat.v1.TensorShape.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.TensorShape.__init__": "tf.TensorShape.__init__", + "tf.compat.v1.TensorShape.__iter__": "tf.TensorShape.__iter__", + "tf.compat.v1.TensorShape.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.TensorShape.__len__": "tf.TensorShape.__len__", + "tf.compat.v1.TensorShape.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.TensorShape.__ne__": "tf.TensorShape.__ne__", + "tf.compat.v1.TensorShape.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.TensorShape.__nonzero__": "tf.TensorShape.__bool__", + "tf.compat.v1.TensorShape.__radd__": "tf.TensorShape.__radd__", + "tf.compat.v1.TensorShape.as_list": "tf.TensorShape.as_list", + "tf.compat.v1.TensorShape.as_proto": "tf.TensorShape.as_proto", + "tf.compat.v1.TensorShape.assert_has_rank": "tf.TensorShape.assert_has_rank", + "tf.compat.v1.TensorShape.assert_is_compatible_with": "tf.TensorShape.assert_is_compatible_with", + "tf.compat.v1.TensorShape.assert_is_fully_defined": "tf.TensorShape.assert_is_fully_defined", + "tf.compat.v1.TensorShape.assert_same_rank": "tf.TensorShape.assert_same_rank", + "tf.compat.v1.TensorShape.concatenate": "tf.TensorShape.concatenate", + "tf.compat.v1.TensorShape.dims": "tf.TensorShape.dims", + 
"tf.compat.v1.TensorShape.is_compatible_with": "tf.TensorShape.is_compatible_with", + "tf.compat.v1.TensorShape.is_fully_defined": "tf.TensorShape.is_fully_defined", + "tf.compat.v1.TensorShape.merge_with": "tf.TensorShape.merge_with", + "tf.compat.v1.TensorShape.most_specific_compatible_shape": "tf.TensorShape.most_specific_compatible_shape", + "tf.compat.v1.TensorShape.ndims": "tf.TensorShape.ndims", + "tf.compat.v1.TensorShape.num_elements": "tf.TensorShape.num_elements", + "tf.compat.v1.TensorShape.rank": "tf.TensorShape.rank", + "tf.compat.v1.TensorShape.with_rank": "tf.TensorShape.with_rank", + "tf.compat.v1.TensorShape.with_rank_at_least": "tf.TensorShape.with_rank_at_least", + "tf.compat.v1.TensorShape.with_rank_at_most": "tf.TensorShape.with_rank_at_most", + "tf.compat.v1.TensorSpec": "tf.TensorSpec", + "tf.compat.v1.TensorSpec.__eq__": "tf.TensorSpec.__eq__", + "tf.compat.v1.TensorSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.TensorSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.TensorSpec.__init__": "tf.TensorSpec.__init__", + "tf.compat.v1.TensorSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.TensorSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.TensorSpec.__ne__": "tf.TensorSpec.__ne__", + "tf.compat.v1.TensorSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.TensorSpec.dtype": "tf.TensorSpec.dtype", + "tf.compat.v1.TensorSpec.is_compatible_with": "tf.TensorSpec.is_compatible_with", + "tf.compat.v1.TensorSpec.most_specific_compatible_type": "tf.TensorSpec.most_specific_compatible_type", + "tf.compat.v1.TensorSpec.name": "tf.TensorSpec.name", + "tf.compat.v1.TensorSpec.shape": "tf.TensorSpec.shape", + "tf.compat.v1.TensorSpec.value_type": "tf.TensorSpec.value_type", + "tf.compat.v1.TextLineReader.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.TextLineReader.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.TextLineReader.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.TextLineReader.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.TextLineReader.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.TextLineReader.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.TextLineReader.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.TextLineReader.num_records_produced": "tf.compat.v1.ReaderBase.num_records_produced", + "tf.compat.v1.TextLineReader.num_work_units_completed": "tf.compat.v1.ReaderBase.num_work_units_completed", + "tf.compat.v1.TextLineReader.read": "tf.compat.v1.ReaderBase.read", + "tf.compat.v1.TextLineReader.read_up_to": "tf.compat.v1.ReaderBase.read_up_to", + "tf.compat.v1.TextLineReader.reader_ref": "tf.compat.v1.ReaderBase.reader_ref", + "tf.compat.v1.TextLineReader.reset": "tf.compat.v1.ReaderBase.reset", + "tf.compat.v1.TextLineReader.restore_state": "tf.compat.v1.ReaderBase.restore_state", + "tf.compat.v1.TextLineReader.serialize_state": "tf.compat.v1.ReaderBase.serialize_state", + "tf.compat.v1.TextLineReader.supports_serialize": "tf.compat.v1.ReaderBase.supports_serialize", + "tf.compat.v1.TypeSpec": "tf.TypeSpec", + "tf.compat.v1.TypeSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.compat.v1.TypeSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.TypeSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.TypeSpec.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.TypeSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.TypeSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.TypeSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.compat.v1.TypeSpec.__new__": 
"tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.TypeSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.compat.v1.TypeSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.compat.v1.TypeSpec.value_type": "tf.TypeSpec.value_type", + "tf.compat.v1.UnconnectedGradients": "tf.UnconnectedGradients", + "tf.compat.v1.UnconnectedGradients.NONE": "tf.UnconnectedGradients.NONE", + "tf.compat.v1.UnconnectedGradients.ZERO": "tf.UnconnectedGradients.ZERO", + "tf.compat.v1.UnconnectedGradients.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.UnconnectedGradients.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.VarLenFeature": "tf.io.VarLenFeature", + "tf.compat.v1.VarLenFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.VarLenFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.VarLenFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.VarLenFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.VarLenFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.VarLenFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.VarLenFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.VarLenFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.VarLenFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.VarLenFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.VarLenFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.VarLenFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.VarLenFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.VarLenFeature.__new__": "tf.io.VarLenFeature.__new__", + "tf.compat.v1.VarLenFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.VarLenFeature.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.VarLenFeature.dtype": "tf.io.VarLenFeature.dtype", + "tf.compat.v1.VarLenFeature.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.Variable.SaveSliceInfo": "tf.Variable.SaveSliceInfo", + "tf.compat.v1.Variable.SaveSliceInfo.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.Variable.SaveSliceInfo.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.Variable.SaveSliceInfo.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.Variable.SaveSliceInfo.__init__": "tf.Variable.SaveSliceInfo.__init__", + "tf.compat.v1.Variable.SaveSliceInfo.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.Variable.SaveSliceInfo.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.Variable.SaveSliceInfo.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.Variable.SaveSliceInfo.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.Variable.SaveSliceInfo.spec": "tf.Variable.SaveSliceInfo.spec", + "tf.compat.v1.Variable.SaveSliceInfo.to_proto": "tf.Variable.SaveSliceInfo.to_proto", + "tf.compat.v1.Variable.__abs__": "tf.Variable.__abs__", + "tf.compat.v1.Variable.__add__": "tf.Variable.__add__", + "tf.compat.v1.Variable.__and__": "tf.Variable.__and__", + "tf.compat.v1.Variable.__div__": "tf.Variable.__div__", + "tf.compat.v1.Variable.__eq__": "tf.Variable.__eq__", + "tf.compat.v1.Variable.__floordiv__": "tf.Variable.__floordiv__", + "tf.compat.v1.Variable.__ge__": "tf.Variable.__ge__", + "tf.compat.v1.Variable.__getitem__": "tf.Variable.__getitem__", + "tf.compat.v1.Variable.__gt__": "tf.Variable.__gt__", + "tf.compat.v1.Variable.__invert__": "tf.Variable.__invert__", + 
"tf.compat.v1.Variable.__iter__": "tf.Variable.__iter__", + "tf.compat.v1.Variable.__le__": "tf.Variable.__le__", + "tf.compat.v1.Variable.__lt__": "tf.Variable.__lt__", + "tf.compat.v1.Variable.__matmul__": "tf.Variable.__matmul__", + "tf.compat.v1.Variable.__mod__": "tf.Variable.__mod__", + "tf.compat.v1.Variable.__mul__": "tf.Variable.__mul__", + "tf.compat.v1.Variable.__ne__": "tf.Variable.__ne__", + "tf.compat.v1.Variable.__neg__": "tf.Variable.__neg__", + "tf.compat.v1.Variable.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.Variable.__or__": "tf.Variable.__or__", + "tf.compat.v1.Variable.__pow__": "tf.Variable.__pow__", + "tf.compat.v1.Variable.__radd__": "tf.Variable.__radd__", + "tf.compat.v1.Variable.__rand__": "tf.Variable.__rand__", + "tf.compat.v1.Variable.__rdiv__": "tf.Variable.__rdiv__", + "tf.compat.v1.Variable.__rfloordiv__": "tf.Variable.__rfloordiv__", + "tf.compat.v1.Variable.__rmatmul__": "tf.Variable.__rmatmul__", + "tf.compat.v1.Variable.__rmod__": "tf.Variable.__rmod__", + "tf.compat.v1.Variable.__rmul__": "tf.Variable.__rmul__", + "tf.compat.v1.Variable.__ror__": "tf.Variable.__ror__", + "tf.compat.v1.Variable.__rpow__": "tf.Variable.__rpow__", + "tf.compat.v1.Variable.__rsub__": "tf.Variable.__rsub__", + "tf.compat.v1.Variable.__rtruediv__": "tf.Variable.__rtruediv__", + "tf.compat.v1.Variable.__rxor__": "tf.Variable.__rxor__", + "tf.compat.v1.Variable.__sub__": "tf.Variable.__sub__", + "tf.compat.v1.Variable.__truediv__": "tf.Variable.__truediv__", + "tf.compat.v1.Variable.__xor__": "tf.Variable.__xor__", + "tf.compat.v1.Variable.aggregation": "tf.Variable.aggregation", + "tf.compat.v1.Variable.assign": "tf.Variable.assign", + "tf.compat.v1.Variable.assign_add": "tf.Variable.assign_add", + "tf.compat.v1.Variable.assign_sub": "tf.Variable.assign_sub", + "tf.compat.v1.Variable.batch_scatter_update": "tf.Variable.batch_scatter_update", + "tf.compat.v1.Variable.constraint": "tf.Variable.constraint", + "tf.compat.v1.Variable.count_up_to": "tf.Variable.count_up_to", + "tf.compat.v1.Variable.device": "tf.Variable.device", + "tf.compat.v1.Variable.dtype": "tf.Variable.dtype", + "tf.compat.v1.Variable.eval": "tf.Variable.eval", + "tf.compat.v1.Variable.experimental_ref": "tf.Variable.experimental_ref", + "tf.compat.v1.Variable.from_proto": "tf.Variable.from_proto", + "tf.compat.v1.Variable.gather_nd": "tf.Variable.gather_nd", + "tf.compat.v1.Variable.get_shape": "tf.Variable.get_shape", + "tf.compat.v1.Variable.graph": "tf.Variable.graph", + "tf.compat.v1.Variable.initial_value": "tf.Variable.initial_value", + "tf.compat.v1.Variable.initialized_value": "tf.Variable.initialized_value", + "tf.compat.v1.Variable.initializer": "tf.Variable.initializer", + "tf.compat.v1.Variable.load": "tf.Variable.load", + "tf.compat.v1.Variable.name": "tf.Variable.name", + "tf.compat.v1.Variable.op": "tf.Variable.op", + "tf.compat.v1.Variable.read_value": "tf.Variable.read_value", + "tf.compat.v1.Variable.ref": "tf.Variable.ref", + "tf.compat.v1.Variable.scatter_add": "tf.Variable.scatter_add", + "tf.compat.v1.Variable.scatter_div": "tf.Variable.scatter_div", + "tf.compat.v1.Variable.scatter_max": "tf.Variable.scatter_max", + "tf.compat.v1.Variable.scatter_min": "tf.Variable.scatter_min", + "tf.compat.v1.Variable.scatter_mul": "tf.Variable.scatter_mul", + "tf.compat.v1.Variable.scatter_nd_add": "tf.Variable.scatter_nd_add", + "tf.compat.v1.Variable.scatter_nd_sub": "tf.Variable.scatter_nd_sub", + "tf.compat.v1.Variable.scatter_nd_update": "tf.Variable.scatter_nd_update", 
+ "tf.compat.v1.Variable.scatter_sub": "tf.Variable.scatter_sub", + "tf.compat.v1.Variable.scatter_update": "tf.Variable.scatter_update", + "tf.compat.v1.Variable.set_shape": "tf.Variable.set_shape", + "tf.compat.v1.Variable.shape": "tf.Variable.shape", + "tf.compat.v1.Variable.sparse_read": "tf.Variable.sparse_read", + "tf.compat.v1.Variable.synchronization": "tf.Variable.synchronization", + "tf.compat.v1.Variable.to_proto": "tf.Variable.to_proto", + "tf.compat.v1.Variable.trainable": "tf.Variable.trainable", + "tf.compat.v1.Variable.value": "tf.Variable.value", + "tf.compat.v1.VariableAggregation.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.VariableAggregation.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.VariableScope.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.VariableScope.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.VariableScope.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.VariableScope.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.VariableScope.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.VariableScope.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.VariableScope.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.VariableSynchronization": "tf.VariableSynchronization", + "tf.compat.v1.VariableSynchronization.AUTO": "tf.VariableSynchronization.AUTO", + "tf.compat.v1.VariableSynchronization.NONE": "tf.VariableSynchronization.NONE", + "tf.compat.v1.VariableSynchronization.ON_READ": "tf.VariableSynchronization.ON_READ", + "tf.compat.v1.VariableSynchronization.ON_WRITE": "tf.VariableSynchronization.ON_WRITE", + "tf.compat.v1.VariableSynchronization.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.VariableSynchronization.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.WholeFileReader.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.WholeFileReader.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.WholeFileReader.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.WholeFileReader.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.WholeFileReader.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.WholeFileReader.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.WholeFileReader.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.WholeFileReader.num_records_produced": "tf.compat.v1.ReaderBase.num_records_produced", + "tf.compat.v1.WholeFileReader.num_work_units_completed": "tf.compat.v1.ReaderBase.num_work_units_completed", + "tf.compat.v1.WholeFileReader.read": "tf.compat.v1.ReaderBase.read", + "tf.compat.v1.WholeFileReader.read_up_to": "tf.compat.v1.ReaderBase.read_up_to", + "tf.compat.v1.WholeFileReader.reader_ref": "tf.compat.v1.ReaderBase.reader_ref", + "tf.compat.v1.WholeFileReader.reset": "tf.compat.v1.ReaderBase.reset", + "tf.compat.v1.WholeFileReader.restore_state": "tf.compat.v1.ReaderBase.restore_state", + "tf.compat.v1.WholeFileReader.serialize_state": "tf.compat.v1.ReaderBase.serialize_state", + "tf.compat.v1.WholeFileReader.supports_serialize": "tf.compat.v1.ReaderBase.supports_serialize", + "tf.compat.v1.abs": "tf.math.abs", + "tf.compat.v1.accumulate_n": "tf.math.accumulate_n", + "tf.compat.v1.acos": "tf.math.acos", + "tf.compat.v1.acosh": "tf.math.acosh", + "tf.compat.v1.add": "tf.math.add", + "tf.compat.v1.add_n": "tf.math.add_n", + "tf.compat.v1.angle": "tf.math.angle", + "tf.compat.v1.app.flags": "tf.compat.v1.flags", + "tf.compat.v1.app.flags.ArgumentParser": "tf.compat.v1.flags.ArgumentParser", + 
"tf.compat.v1.app.flags.ArgumentParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.ArgumentParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.ArgumentParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.ArgumentParser.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.app.flags.ArgumentParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.ArgumentParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.ArgumentParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.ArgumentParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.ArgumentParser.flag_type": "tf.compat.v1.flags.ArgumentParser.flag_type", + "tf.compat.v1.app.flags.ArgumentParser.parse": "tf.compat.v1.flags.ArgumentParser.parse", + "tf.compat.v1.app.flags.ArgumentSerializer": "tf.compat.v1.flags.ArgumentSerializer", + "tf.compat.v1.app.flags.ArgumentSerializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.ArgumentSerializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.ArgumentSerializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.ArgumentSerializer.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.app.flags.ArgumentSerializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.ArgumentSerializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.ArgumentSerializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.ArgumentSerializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.ArgumentSerializer.serialize": "tf.compat.v1.flags.ArgumentSerializer.serialize", + "tf.compat.v1.app.flags.BaseListParser": "tf.compat.v1.flags.BaseListParser", + "tf.compat.v1.app.flags.BaseListParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.BaseListParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.BaseListParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.BaseListParser.__init__": "tf.compat.v1.flags.BaseListParser.__init__", + "tf.compat.v1.app.flags.BaseListParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.BaseListParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.BaseListParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.BaseListParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.BaseListParser.flag_type": "tf.compat.v1.flags.BaseListParser.flag_type", + "tf.compat.v1.app.flags.BaseListParser.parse": "tf.compat.v1.flags.BaseListParser.parse", + "tf.compat.v1.app.flags.BooleanFlag": "tf.compat.v1.flags.BooleanFlag", + "tf.compat.v1.app.flags.BooleanFlag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.app.flags.BooleanFlag.__ge__": "tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.app.flags.BooleanFlag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.app.flags.BooleanFlag.__init__": "tf.compat.v1.flags.BooleanFlag.__init__", + "tf.compat.v1.app.flags.BooleanFlag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.app.flags.BooleanFlag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.app.flags.BooleanFlag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.BooleanFlag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.BooleanFlag.flag_type": "tf.compat.v1.flags.Flag.flag_type", + "tf.compat.v1.app.flags.BooleanFlag.parse": "tf.compat.v1.flags.Flag.parse", + "tf.compat.v1.app.flags.BooleanFlag.serialize": 
"tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.app.flags.BooleanFlag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.app.flags.BooleanFlag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.app.flags.BooleanParser": "tf.compat.v1.flags.BooleanParser", + "tf.compat.v1.app.flags.BooleanParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.BooleanParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.BooleanParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.BooleanParser.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.app.flags.BooleanParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.BooleanParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.BooleanParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.BooleanParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.BooleanParser.flag_type": "tf.compat.v1.flags.BooleanParser.flag_type", + "tf.compat.v1.app.flags.BooleanParser.parse": "tf.compat.v1.flags.BooleanParser.parse", + "tf.compat.v1.app.flags.CantOpenFlagFileError": "tf.compat.v1.flags.CantOpenFlagFileError", + "tf.compat.v1.app.flags.CantOpenFlagFileError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.CantOpenFlagFileError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.CantOpenFlagFileError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.CantOpenFlagFileError.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.app.flags.CantOpenFlagFileError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.CantOpenFlagFileError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.CantOpenFlagFileError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.CantOpenFlagFileError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.app.flags.CantOpenFlagFileError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.app.flags.CantOpenFlagFileError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.app.flags.CsvListSerializer": "tf.compat.v1.flags.CsvListSerializer", + "tf.compat.v1.app.flags.CsvListSerializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.CsvListSerializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.CsvListSerializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.CsvListSerializer.__init__": "tf.compat.v1.flags.CsvListSerializer.__init__", + "tf.compat.v1.app.flags.CsvListSerializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.CsvListSerializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.CsvListSerializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.CsvListSerializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.CsvListSerializer.serialize": "tf.compat.v1.flags.CsvListSerializer.serialize", + "tf.compat.v1.app.flags.DEFINE": "tf.compat.v1.flags.DEFINE", + "tf.compat.v1.app.flags.DEFINE_alias": "tf.compat.v1.flags.DEFINE_alias", + "tf.compat.v1.app.flags.DEFINE_bool": "tf.compat.v1.flags.DEFINE_bool", + "tf.compat.v1.app.flags.DEFINE_boolean": "tf.compat.v1.flags.DEFINE_bool", + "tf.compat.v1.app.flags.DEFINE_enum": "tf.compat.v1.flags.DEFINE_enum", + "tf.compat.v1.app.flags.DEFINE_enum_class": "tf.compat.v1.flags.DEFINE_enum_class", + "tf.compat.v1.app.flags.DEFINE_flag": "tf.compat.v1.flags.DEFINE_flag", + "tf.compat.v1.app.flags.DEFINE_float": "tf.compat.v1.flags.DEFINE_float", + 
"tf.compat.v1.app.flags.DEFINE_integer": "tf.compat.v1.flags.DEFINE_integer", + "tf.compat.v1.app.flags.DEFINE_list": "tf.compat.v1.flags.DEFINE_list", + "tf.compat.v1.app.flags.DEFINE_multi": "tf.compat.v1.flags.DEFINE_multi", + "tf.compat.v1.app.flags.DEFINE_multi_enum": "tf.compat.v1.flags.DEFINE_multi_enum", + "tf.compat.v1.app.flags.DEFINE_multi_enum_class": "tf.compat.v1.flags.DEFINE_multi_enum_class", + "tf.compat.v1.app.flags.DEFINE_multi_float": "tf.compat.v1.flags.DEFINE_multi_float", + "tf.compat.v1.app.flags.DEFINE_multi_integer": "tf.compat.v1.flags.DEFINE_multi_integer", + "tf.compat.v1.app.flags.DEFINE_multi_string": "tf.compat.v1.flags.DEFINE_multi_string", + "tf.compat.v1.app.flags.DEFINE_spaceseplist": "tf.compat.v1.flags.DEFINE_spaceseplist", + "tf.compat.v1.app.flags.DEFINE_string": "tf.compat.v1.flags.DEFINE_string", + "tf.compat.v1.app.flags.DuplicateFlagError": "tf.compat.v1.flags.DuplicateFlagError", + "tf.compat.v1.app.flags.DuplicateFlagError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.DuplicateFlagError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.DuplicateFlagError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.DuplicateFlagError.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.app.flags.DuplicateFlagError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.DuplicateFlagError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.DuplicateFlagError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.DuplicateFlagError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.app.flags.DuplicateFlagError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.app.flags.DuplicateFlagError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.app.flags.EnumClassFlag": "tf.compat.v1.flags.EnumClassFlag", + "tf.compat.v1.app.flags.EnumClassFlag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.app.flags.EnumClassFlag.__ge__": "tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.app.flags.EnumClassFlag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.app.flags.EnumClassFlag.__init__": "tf.compat.v1.flags.EnumClassFlag.__init__", + "tf.compat.v1.app.flags.EnumClassFlag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.app.flags.EnumClassFlag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.app.flags.EnumClassFlag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.EnumClassFlag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.EnumClassFlag.flag_type": "tf.compat.v1.flags.Flag.flag_type", + "tf.compat.v1.app.flags.EnumClassFlag.parse": "tf.compat.v1.flags.Flag.parse", + "tf.compat.v1.app.flags.EnumClassFlag.serialize": "tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.app.flags.EnumClassFlag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.app.flags.EnumClassFlag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.app.flags.EnumClassParser": "tf.compat.v1.flags.EnumClassParser", + "tf.compat.v1.app.flags.EnumClassParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.EnumClassParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.EnumClassParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.EnumClassParser.__init__": "tf.compat.v1.flags.EnumClassParser.__init__", + "tf.compat.v1.app.flags.EnumClassParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.EnumClassParser.__lt__": "tf.keras.Model.__lt__", + 
"tf.compat.v1.app.flags.EnumClassParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.EnumClassParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.EnumClassParser.flag_type": "tf.compat.v1.flags.EnumClassParser.flag_type", + "tf.compat.v1.app.flags.EnumClassParser.parse": "tf.compat.v1.flags.EnumClassParser.parse", + "tf.compat.v1.app.flags.EnumFlag": "tf.compat.v1.flags.EnumFlag", + "tf.compat.v1.app.flags.EnumFlag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.app.flags.EnumFlag.__ge__": "tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.app.flags.EnumFlag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.app.flags.EnumFlag.__init__": "tf.compat.v1.flags.EnumFlag.__init__", + "tf.compat.v1.app.flags.EnumFlag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.app.flags.EnumFlag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.app.flags.EnumFlag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.EnumFlag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.EnumFlag.flag_type": "tf.compat.v1.flags.Flag.flag_type", + "tf.compat.v1.app.flags.EnumFlag.parse": "tf.compat.v1.flags.Flag.parse", + "tf.compat.v1.app.flags.EnumFlag.serialize": "tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.app.flags.EnumFlag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.app.flags.EnumFlag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.app.flags.EnumParser": "tf.compat.v1.flags.EnumParser", + "tf.compat.v1.app.flags.EnumParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.EnumParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.EnumParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.EnumParser.__init__": "tf.compat.v1.flags.EnumParser.__init__", + "tf.compat.v1.app.flags.EnumParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.EnumParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.EnumParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.EnumParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.EnumParser.flag_type": "tf.compat.v1.flags.EnumParser.flag_type", + "tf.compat.v1.app.flags.EnumParser.parse": "tf.compat.v1.flags.EnumParser.parse", + "tf.compat.v1.app.flags.Error": "tf.compat.v1.flags.Error", + "tf.compat.v1.app.flags.Error.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.Error.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.Error.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.Error.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.app.flags.Error.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.Error.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.Error.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.Error.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.app.flags.Error.args": "tf.errors.AbortedError.args", + "tf.compat.v1.app.flags.Error.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.app.flags.FLAGS": "tf.compat.v1.flags.FLAGS", + "tf.compat.v1.app.flags.Flag": "tf.compat.v1.flags.Flag", + "tf.compat.v1.app.flags.Flag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.app.flags.Flag.__ge__": "tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.app.flags.Flag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.app.flags.Flag.__init__": "tf.compat.v1.flags.Flag.__init__", + 
"tf.compat.v1.app.flags.Flag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.app.flags.Flag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.app.flags.Flag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.Flag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.Flag.flag_type": "tf.compat.v1.flags.Flag.flag_type", + "tf.compat.v1.app.flags.Flag.parse": "tf.compat.v1.flags.Flag.parse", + "tf.compat.v1.app.flags.Flag.serialize": "tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.app.flags.Flag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.app.flags.Flag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError": "tf.compat.v1.flags.FlagNameConflictsWithMethodError", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.app.flags.FlagValues": "tf.compat.v1.flags.FlagValues", + "tf.compat.v1.app.flags.FlagValues.__call__": "tf.compat.v1.flags.FlagValues.__call__", + "tf.compat.v1.app.flags.FlagValues.__contains__": "tf.compat.v1.flags.FlagValues.__contains__", + "tf.compat.v1.app.flags.FlagValues.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.FlagValues.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.FlagValues.__getitem__": "tf.compat.v1.flags.FlagValues.__getitem__", + "tf.compat.v1.app.flags.FlagValues.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.FlagValues.__init__": "tf.compat.v1.flags.FlagValues.__init__", + "tf.compat.v1.app.flags.FlagValues.__iter__": "tf.compat.v1.flags.FlagValues.__iter__", + "tf.compat.v1.app.flags.FlagValues.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.FlagValues.__len__": "tf.compat.v1.flags.FlagValues.__len__", + "tf.compat.v1.app.flags.FlagValues.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.FlagValues.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.FlagValues.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.FlagValues.append_flag_values": "tf.compat.v1.flags.FlagValues.append_flag_values", + "tf.compat.v1.app.flags.FlagValues.append_flags_into_file": "tf.compat.v1.flags.FlagValues.append_flags_into_file", + "tf.compat.v1.app.flags.FlagValues.find_module_defining_flag": "tf.compat.v1.flags.FlagValues.find_module_defining_flag", + "tf.compat.v1.app.flags.FlagValues.find_module_id_defining_flag": "tf.compat.v1.flags.FlagValues.find_module_id_defining_flag", + "tf.compat.v1.app.flags.FlagValues.flag_values_dict": "tf.compat.v1.flags.FlagValues.flag_values_dict", + 
"tf.compat.v1.app.flags.FlagValues.flags_by_module_dict": "tf.compat.v1.flags.FlagValues.flags_by_module_dict", + "tf.compat.v1.app.flags.FlagValues.flags_by_module_id_dict": "tf.compat.v1.flags.FlagValues.flags_by_module_id_dict", + "tf.compat.v1.app.flags.FlagValues.flags_into_string": "tf.compat.v1.flags.FlagValues.flags_into_string", + "tf.compat.v1.app.flags.FlagValues.get_flag_value": "tf.compat.v1.flags.FlagValues.get_flag_value", + "tf.compat.v1.app.flags.FlagValues.get_help": "tf.compat.v1.flags.FlagValues.get_help", + "tf.compat.v1.app.flags.FlagValues.get_key_flags_for_module": "tf.compat.v1.flags.FlagValues.get_key_flags_for_module", + "tf.compat.v1.app.flags.FlagValues.is_gnu_getopt": "tf.compat.v1.flags.FlagValues.is_gnu_getopt", + "tf.compat.v1.app.flags.FlagValues.is_parsed": "tf.compat.v1.flags.FlagValues.is_parsed", + "tf.compat.v1.app.flags.FlagValues.key_flags_by_module_dict": "tf.compat.v1.flags.FlagValues.key_flags_by_module_dict", + "tf.compat.v1.app.flags.FlagValues.main_module_help": "tf.compat.v1.flags.FlagValues.main_module_help", + "tf.compat.v1.app.flags.FlagValues.mark_as_parsed": "tf.compat.v1.flags.FlagValues.mark_as_parsed", + "tf.compat.v1.app.flags.FlagValues.module_help": "tf.compat.v1.flags.FlagValues.module_help", + "tf.compat.v1.app.flags.FlagValues.read_flags_from_files": "tf.compat.v1.flags.FlagValues.read_flags_from_files", + "tf.compat.v1.app.flags.FlagValues.register_flag_by_module": "tf.compat.v1.flags.FlagValues.register_flag_by_module", + "tf.compat.v1.app.flags.FlagValues.register_flag_by_module_id": "tf.compat.v1.flags.FlagValues.register_flag_by_module_id", + "tf.compat.v1.app.flags.FlagValues.register_key_flag_for_module": "tf.compat.v1.flags.FlagValues.register_key_flag_for_module", + "tf.compat.v1.app.flags.FlagValues.remove_flag_values": "tf.compat.v1.flags.FlagValues.remove_flag_values", + "tf.compat.v1.app.flags.FlagValues.set_default": "tf.compat.v1.flags.FlagValues.set_default", + "tf.compat.v1.app.flags.FlagValues.set_gnu_getopt": "tf.compat.v1.flags.FlagValues.set_gnu_getopt", + "tf.compat.v1.app.flags.FlagValues.unparse_flags": "tf.compat.v1.flags.FlagValues.unparse_flags", + "tf.compat.v1.app.flags.FlagValues.write_help_in_xml_format": "tf.compat.v1.flags.FlagValues.write_help_in_xml_format", + "tf.compat.v1.app.flags.FloatParser": "tf.compat.v1.flags.FloatParser", + "tf.compat.v1.app.flags.FloatParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.FloatParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.FloatParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.FloatParser.__init__": "tf.compat.v1.flags.FloatParser.__init__", + "tf.compat.v1.app.flags.FloatParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.FloatParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.FloatParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.FloatParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.FloatParser.convert": "tf.compat.v1.flags.FloatParser.convert", + "tf.compat.v1.app.flags.FloatParser.flag_type": "tf.compat.v1.flags.FloatParser.flag_type", + "tf.compat.v1.app.flags.FloatParser.is_outside_bounds": "tf.compat.v1.flags.FloatParser.is_outside_bounds", + "tf.compat.v1.app.flags.FloatParser.parse": "tf.compat.v1.flags.FloatParser.parse", + "tf.compat.v1.app.flags.IllegalFlagValueError": "tf.compat.v1.flags.IllegalFlagValueError", + "tf.compat.v1.app.flags.IllegalFlagValueError.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.app.flags.IllegalFlagValueError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.IllegalFlagValueError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.IllegalFlagValueError.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.app.flags.IllegalFlagValueError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.IllegalFlagValueError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.IllegalFlagValueError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.IllegalFlagValueError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.app.flags.IllegalFlagValueError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.app.flags.IllegalFlagValueError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.app.flags.IntegerParser": "tf.compat.v1.flags.IntegerParser", + "tf.compat.v1.app.flags.IntegerParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.IntegerParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.IntegerParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.IntegerParser.__init__": "tf.compat.v1.flags.IntegerParser.__init__", + "tf.compat.v1.app.flags.IntegerParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.IntegerParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.IntegerParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.IntegerParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.IntegerParser.convert": "tf.compat.v1.flags.IntegerParser.convert", + "tf.compat.v1.app.flags.IntegerParser.flag_type": "tf.compat.v1.flags.IntegerParser.flag_type", + "tf.compat.v1.app.flags.IntegerParser.is_outside_bounds": "tf.compat.v1.flags.FloatParser.is_outside_bounds", + "tf.compat.v1.app.flags.IntegerParser.parse": "tf.compat.v1.flags.FloatParser.parse", + "tf.compat.v1.app.flags.ListParser": "tf.compat.v1.flags.ListParser", + "tf.compat.v1.app.flags.ListParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.ListParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.ListParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.ListParser.__init__": "tf.compat.v1.flags.ListParser.__init__", + "tf.compat.v1.app.flags.ListParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.ListParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.ListParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.ListParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.ListParser.flag_type": "tf.compat.v1.flags.BaseListParser.flag_type", + "tf.compat.v1.app.flags.ListParser.parse": "tf.compat.v1.flags.ListParser.parse", + "tf.compat.v1.app.flags.ListSerializer": "tf.compat.v1.flags.ListSerializer", + "tf.compat.v1.app.flags.ListSerializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.ListSerializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.ListSerializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.ListSerializer.__init__": "tf.compat.v1.flags.ListSerializer.__init__", + "tf.compat.v1.app.flags.ListSerializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.ListSerializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.ListSerializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.ListSerializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.ListSerializer.serialize": 
"tf.compat.v1.flags.ListSerializer.serialize", + "tf.compat.v1.app.flags.MultiEnumClassFlag": "tf.compat.v1.flags.MultiEnumClassFlag", + "tf.compat.v1.app.flags.MultiEnumClassFlag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.app.flags.MultiEnumClassFlag.__ge__": "tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.app.flags.MultiEnumClassFlag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.app.flags.MultiEnumClassFlag.__init__": "tf.compat.v1.flags.MultiEnumClassFlag.__init__", + "tf.compat.v1.app.flags.MultiEnumClassFlag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.app.flags.MultiEnumClassFlag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.app.flags.MultiEnumClassFlag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.MultiEnumClassFlag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.MultiEnumClassFlag.flag_type": "tf.compat.v1.flags.MultiFlag.flag_type", + "tf.compat.v1.app.flags.MultiEnumClassFlag.parse": "tf.compat.v1.flags.MultiFlag.parse", + "tf.compat.v1.app.flags.MultiEnumClassFlag.serialize": "tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.app.flags.MultiEnumClassFlag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.app.flags.MultiEnumClassFlag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.app.flags.MultiFlag": "tf.compat.v1.flags.MultiFlag", + "tf.compat.v1.app.flags.MultiFlag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.app.flags.MultiFlag.__ge__": "tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.app.flags.MultiFlag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.app.flags.MultiFlag.__init__": "tf.compat.v1.flags.MultiFlag.__init__", + "tf.compat.v1.app.flags.MultiFlag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.app.flags.MultiFlag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.app.flags.MultiFlag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.MultiFlag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.MultiFlag.flag_type": "tf.compat.v1.flags.MultiFlag.flag_type", + "tf.compat.v1.app.flags.MultiFlag.parse": "tf.compat.v1.flags.MultiFlag.parse", + "tf.compat.v1.app.flags.MultiFlag.serialize": "tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.app.flags.MultiFlag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.app.flags.MultiFlag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.app.flags.UnparsedFlagAccessError": "tf.compat.v1.flags.UnparsedFlagAccessError", + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.app.flags.UnparsedFlagAccessError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.app.flags.UnparsedFlagAccessError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.app.flags.UnrecognizedFlagError": "tf.compat.v1.flags.UnrecognizedFlagError", + 
"tf.compat.v1.app.flags.UnrecognizedFlagError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.UnrecognizedFlagError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.UnrecognizedFlagError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.UnrecognizedFlagError.__init__": "tf.compat.v1.flags.UnrecognizedFlagError.__init__", + "tf.compat.v1.app.flags.UnrecognizedFlagError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.UnrecognizedFlagError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.UnrecognizedFlagError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.UnrecognizedFlagError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.app.flags.UnrecognizedFlagError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.app.flags.UnrecognizedFlagError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.app.flags.ValidationError": "tf.compat.v1.flags.ValidationError", + "tf.compat.v1.app.flags.ValidationError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.ValidationError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.ValidationError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.ValidationError.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.app.flags.ValidationError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.ValidationError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.ValidationError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.ValidationError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.app.flags.ValidationError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.app.flags.ValidationError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser": "tf.compat.v1.flags.WhitespaceSeparatedListParser", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__init__": "tf.compat.v1.flags.WhitespaceSeparatedListParser.__init__", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.flag_type": "tf.compat.v1.flags.BaseListParser.flag_type", + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.parse": "tf.compat.v1.flags.WhitespaceSeparatedListParser.parse", + "tf.compat.v1.app.flags.absolute_import": "tf.compat.v1.flags.absolute_import", + "tf.compat.v1.app.flags.adopt_module_key_flags": "tf.compat.v1.flags.adopt_module_key_flags", + "tf.compat.v1.app.flags.declare_key_flag": "tf.compat.v1.flags.declare_key_flag", + "tf.compat.v1.app.flags.disclaim_key_flags": "tf.compat.v1.flags.disclaim_key_flags", + "tf.compat.v1.app.flags.division": "tf.compat.v1.flags.division", + "tf.compat.v1.app.flags.doc_to_help": "tf.compat.v1.flags.doc_to_help", + "tf.compat.v1.app.flags.flag_dict_to_args": "tf.compat.v1.flags.flag_dict_to_args", + "tf.compat.v1.app.flags.get_help_width": 
"tf.compat.v1.flags.get_help_width", + "tf.compat.v1.app.flags.mark_bool_flags_as_mutual_exclusive": "tf.compat.v1.flags.mark_bool_flags_as_mutual_exclusive", + "tf.compat.v1.app.flags.mark_flag_as_required": "tf.compat.v1.flags.mark_flag_as_required", + "tf.compat.v1.app.flags.mark_flags_as_mutual_exclusive": "tf.compat.v1.flags.mark_flags_as_mutual_exclusive", + "tf.compat.v1.app.flags.mark_flags_as_required": "tf.compat.v1.flags.mark_flags_as_required", + "tf.compat.v1.app.flags.multi_flags_validator": "tf.compat.v1.flags.multi_flags_validator", + "tf.compat.v1.app.flags.print_function": "tf.compat.v1.flags.print_function", + "tf.compat.v1.app.flags.register_multi_flags_validator": "tf.compat.v1.flags.register_multi_flags_validator", + "tf.compat.v1.app.flags.register_validator": "tf.compat.v1.flags.register_validator", + "tf.compat.v1.app.flags.text_wrap": "tf.compat.v1.flags.text_wrap", + "tf.compat.v1.app.flags.tf_decorator": "tf.compat.v1.flags.tf_decorator", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator": "tf.compat.v1.flags.tf_decorator.TFDecorator", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__call__": "tf.compat.v1.flags.tf_decorator.TFDecorator.__call__", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__init__": "tf.compat.v1.flags.tf_decorator.TFDecorator.__init__", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.decorated_target": "tf.compat.v1.flags.tf_decorator.TFDecorator.decorated_target", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.decorator_argspec": "tf.compat.v1.flags.tf_decorator.TFDecorator.decorator_argspec", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.decorator_doc": "tf.compat.v1.flags.tf_decorator.TFDecorator.decorator_doc", + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.decorator_name": "tf.compat.v1.flags.tf_decorator.TFDecorator.decorator_name", + "tf.compat.v1.app.flags.tf_decorator.absolute_import": "tf.compat.v1.flags.absolute_import", + "tf.compat.v1.app.flags.tf_decorator.division": "tf.compat.v1.flags.division", + "tf.compat.v1.app.flags.tf_decorator.make_decorator": "tf.compat.v1.flags.tf_decorator.make_decorator", + "tf.compat.v1.app.flags.tf_decorator.print_function": "tf.compat.v1.flags.print_function", + "tf.compat.v1.app.flags.tf_decorator.rewrap": "tf.compat.v1.flags.tf_decorator.rewrap", + "tf.compat.v1.app.flags.tf_decorator.tf_stack": "tf.compat.v1.flags.tf_decorator.tf_stack", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter": "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__enter__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__enter__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__exit__": 
"tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__exit__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__init__": "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__init__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.get_filtered_filenames": "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.get_filtered_filenames", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.reset": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.reset", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__eq__": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__eq__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__getitem__": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__getitem__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__init__": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__init__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__iter__": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__iter__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__len__": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__len__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__ne__": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__ne__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__new__": "tf.dtypes.DType.__new__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.filename": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.filename", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.line": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.line", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.lineno": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.lineno", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.name": "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.name", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__bool__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__bool__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__contains__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__contains__", + 
"tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__eq__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__eq__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__getitem__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__getitem__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__init__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__init__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__iter__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__iter__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__len__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__len__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__ne__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__ne__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__new__": "tf.dtypes.DType.__new__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.append": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.append", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.count": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.count", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.extend": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.extend", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.insert": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.insert", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.pop": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.pop", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.remove": "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.remove", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__enter__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__enter__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__exit__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__exit__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.get_filtered_filenames": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.get_filtered_filenames", + 
"tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.reset": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.reset", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__enter__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__enter__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__exit__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__exit__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.get_effective_source_map": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.get_effective_source_map", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.reset": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.reset", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__enter__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__enter__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__exit__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__exit__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.reset": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.reset", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.absolute_import": "tf.compat.v1.flags.absolute_import", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.division": "tf.compat.v1.flags.division", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.extract_stack": "tf.compat.v1.flags.tf_decorator.tf_stack.extract_stack", + "tf.compat.v1.app.flags.tf_decorator.tf_stack.print_function": 
"tf.compat.v1.flags.print_function", + "tf.compat.v1.app.flags.tf_decorator.unwrap": "tf.compat.v1.flags.tf_decorator.unwrap", + "tf.compat.v1.app.flags.validator": "tf.compat.v1.flags.validator", + "tf.compat.v1.argsort": "tf.argsort", + "tf.compat.v1.as_dtype": "tf.dtypes.as_dtype", + "tf.compat.v1.as_string": "tf.strings.as_string", + "tf.compat.v1.asin": "tf.math.asin", + "tf.compat.v1.asinh": "tf.math.asinh", + "tf.compat.v1.assert_proper_iterable": "tf.debugging.assert_proper_iterable", + "tf.compat.v1.assert_same_float_dtype": "tf.debugging.assert_same_float_dtype", + "tf.compat.v1.atan": "tf.math.atan", + "tf.compat.v1.atan2": "tf.math.atan2", + "tf.compat.v1.atanh": "tf.math.atanh", + "tf.compat.v1.audio.decode_wav": "tf.audio.decode_wav", + "tf.compat.v1.audio.encode_wav": "tf.audio.encode_wav", + "tf.compat.v1.autograph.experimental.Feature": "tf.autograph.experimental.Feature", + "tf.compat.v1.autograph.experimental.Feature.ALL": "tf.autograph.experimental.Feature.ALL", + "tf.compat.v1.autograph.experimental.Feature.ASSERT_STATEMENTS": "tf.autograph.experimental.Feature.ASSERT_STATEMENTS", + "tf.compat.v1.autograph.experimental.Feature.AUTO_CONTROL_DEPS": "tf.autograph.experimental.Feature.AUTO_CONTROL_DEPS", + "tf.compat.v1.autograph.experimental.Feature.BUILTIN_FUNCTIONS": "tf.autograph.experimental.Feature.BUILTIN_FUNCTIONS", + "tf.compat.v1.autograph.experimental.Feature.EQUALITY_OPERATORS": "tf.autograph.experimental.Feature.EQUALITY_OPERATORS", + "tf.compat.v1.autograph.experimental.Feature.LISTS": "tf.autograph.experimental.Feature.LISTS", + "tf.compat.v1.autograph.experimental.Feature.NAME_SCOPES": "tf.autograph.experimental.Feature.NAME_SCOPES", + "tf.compat.v1.autograph.experimental.Feature.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.autograph.experimental.Feature.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.autograph.experimental.do_not_convert": "tf.autograph.experimental.do_not_convert", + "tf.compat.v1.autograph.experimental.set_loop_options": "tf.autograph.experimental.set_loop_options", + "tf.compat.v1.autograph.set_verbosity": "tf.autograph.set_verbosity", + "tf.compat.v1.autograph.trace": "tf.autograph.trace", + "tf.compat.v1.betainc": "tf.math.betainc", + "tf.compat.v1.bfloat16": "tf.dtypes.bfloat16", + "tf.compat.v1.bitcast": "tf.bitcast", + "tf.compat.v1.bitwise.bitwise_and": "tf.bitwise.bitwise_and", + "tf.compat.v1.bitwise.bitwise_or": "tf.bitwise.bitwise_or", + "tf.compat.v1.bitwise.bitwise_xor": "tf.bitwise.bitwise_xor", + "tf.compat.v1.bitwise.invert": "tf.bitwise.invert", + "tf.compat.v1.bitwise.left_shift": "tf.bitwise.left_shift", + "tf.compat.v1.bitwise.right_shift": "tf.bitwise.right_shift", + "tf.compat.v1.bool": "tf.dtypes.bool", + "tf.compat.v1.broadcast_dynamic_shape": "tf.broadcast_dynamic_shape", + "tf.compat.v1.broadcast_static_shape": "tf.broadcast_static_shape", + "tf.compat.v1.broadcast_to": "tf.broadcast_to", + "tf.compat.v1.cast": "tf.cast", + "tf.compat.v1.ceil": "tf.math.ceil", + "tf.compat.v1.check_numerics": "tf.debugging.check_numerics", + "tf.compat.v1.cholesky": "tf.linalg.cholesky", + "tf.compat.v1.cholesky_solve": "tf.linalg.cholesky_solve", + "tf.compat.v1.clip_by_global_norm": "tf.clip_by_global_norm", + "tf.compat.v1.clip_by_norm": "tf.clip_by_norm", + "tf.compat.v1.clip_by_value": "tf.clip_by_value", + "tf.compat.v1.compat.as_bytes": "tf.compat.as_bytes", + "tf.compat.v1.compat.as_str": "tf.compat.as_str", + "tf.compat.v1.compat.as_str_any": "tf.compat.as_str_any", + 
"tf.compat.v1.compat.as_text": "tf.compat.as_text", + "tf.compat.v1.compat.bytes_or_text_types": "tf.compat.bytes_or_text_types", + "tf.compat.v1.compat.complex_types": "tf.compat.complex_types", + "tf.compat.v1.compat.dimension_at_index": "tf.compat.dimension_at_index", + "tf.compat.v1.compat.dimension_value": "tf.compat.dimension_value", + "tf.compat.v1.compat.forward_compatibility_horizon": "tf.compat.forward_compatibility_horizon", + "tf.compat.v1.compat.forward_compatible": "tf.compat.forward_compatible", + "tf.compat.v1.compat.integral_types": "tf.compat.integral_types", + "tf.compat.v1.compat.path_to_str": "tf.compat.path_to_str", + "tf.compat.v1.compat.real_types": "tf.compat.real_types", + "tf.compat.v1.complex": "tf.dtypes.complex", + "tf.compat.v1.complex128": "tf.dtypes.complex128", + "tf.compat.v1.complex64": "tf.dtypes.complex64", + "tf.compat.v1.concat": "tf.concat", + "tf.compat.v1.config.LogicalDevice": "tf.config.LogicalDevice", + "tf.compat.v1.config.LogicalDevice.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.config.LogicalDevice.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.config.LogicalDevice.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.config.LogicalDevice.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.config.LogicalDevice.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.config.LogicalDevice.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.config.LogicalDevice.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.config.LogicalDevice.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.config.LogicalDevice.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.config.LogicalDevice.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.config.LogicalDevice.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.config.LogicalDevice.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.config.LogicalDevice.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.config.LogicalDevice.__new__": "tf.config.LogicalDevice.__new__", + "tf.compat.v1.config.LogicalDevice.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.config.LogicalDevice.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.config.LogicalDevice.device_type": "tf.config.LogicalDevice.device_type", + "tf.compat.v1.config.LogicalDevice.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.config.LogicalDevice.name": "tf.config.LogicalDevice.name", + "tf.compat.v1.config.LogicalDeviceConfiguration": "tf.config.LogicalDeviceConfiguration", + "tf.compat.v1.config.LogicalDeviceConfiguration.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__le__": "tf.config.LogicalDevice.__le__", + 
"tf.compat.v1.config.LogicalDeviceConfiguration.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__new__": "tf.config.LogicalDeviceConfiguration.__new__", + "tf.compat.v1.config.LogicalDeviceConfiguration.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.config.LogicalDeviceConfiguration.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.config.LogicalDeviceConfiguration.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.config.LogicalDeviceConfiguration.memory_limit": "tf.config.LogicalDeviceConfiguration.memory_limit", + "tf.compat.v1.config.PhysicalDevice": "tf.config.PhysicalDevice", + "tf.compat.v1.config.PhysicalDevice.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.config.PhysicalDevice.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.config.PhysicalDevice.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.config.PhysicalDevice.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.config.PhysicalDevice.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.config.PhysicalDevice.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.config.PhysicalDevice.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.config.PhysicalDevice.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.config.PhysicalDevice.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.config.PhysicalDevice.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.config.PhysicalDevice.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.config.PhysicalDevice.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.config.PhysicalDevice.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.config.PhysicalDevice.__new__": "tf.config.PhysicalDevice.__new__", + "tf.compat.v1.config.PhysicalDevice.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.config.PhysicalDevice.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.config.PhysicalDevice.device_type": "tf.config.PhysicalDevice.device_type", + "tf.compat.v1.config.PhysicalDevice.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.config.PhysicalDevice.name": "tf.config.PhysicalDevice.name", + "tf.compat.v1.config.experimental.ClusterDeviceFilters": "tf.config.experimental.ClusterDeviceFilters", + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__init__": "tf.config.experimental.ClusterDeviceFilters.__init__", + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.config.experimental.ClusterDeviceFilters.set_device_filters": 
"tf.config.experimental.ClusterDeviceFilters.set_device_filters", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration": "tf.config.LogicalDeviceConfiguration", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__new__": "tf.config.LogicalDeviceConfiguration.__new__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.memory_limit": "tf.config.LogicalDeviceConfiguration.memory_limit", + "tf.compat.v1.config.experimental.disable_mlir_bridge": "tf.config.experimental.disable_mlir_bridge", + "tf.compat.v1.config.experimental.enable_mlir_bridge": "tf.config.experimental.enable_mlir_bridge", + "tf.compat.v1.config.experimental.get_device_policy": "tf.config.experimental.get_device_policy", + "tf.compat.v1.config.experimental.get_memory_growth": "tf.config.experimental.get_memory_growth", + "tf.compat.v1.config.experimental.get_synchronous_execution": "tf.config.experimental.get_synchronous_execution", + "tf.compat.v1.config.experimental.get_virtual_device_configuration": "tf.config.get_logical_device_configuration", + "tf.compat.v1.config.experimental.get_visible_devices": "tf.config.get_visible_devices", + "tf.compat.v1.config.experimental.list_logical_devices": "tf.config.list_logical_devices", + "tf.compat.v1.config.experimental.list_physical_devices": "tf.config.list_physical_devices", + "tf.compat.v1.config.experimental.set_device_policy": "tf.config.experimental.set_device_policy", + "tf.compat.v1.config.experimental.set_memory_growth": "tf.config.experimental.set_memory_growth", + "tf.compat.v1.config.experimental.set_synchronous_execution": "tf.config.experimental.set_synchronous_execution", + "tf.compat.v1.config.experimental.set_virtual_device_configuration": "tf.config.set_logical_device_configuration", + 
"tf.compat.v1.config.experimental.set_visible_devices": "tf.config.set_visible_devices", + "tf.compat.v1.config.experimental_connect_to_cluster": "tf.config.experimental_connect_to_cluster", + "tf.compat.v1.config.experimental_connect_to_host": "tf.config.experimental_connect_to_host", + "tf.compat.v1.config.experimental_functions_run_eagerly": "tf.config.experimental_functions_run_eagerly", + "tf.compat.v1.config.experimental_run_functions_eagerly": "tf.config.experimental_run_functions_eagerly", + "tf.compat.v1.config.get_logical_device_configuration": "tf.config.get_logical_device_configuration", + "tf.compat.v1.config.get_soft_device_placement": "tf.config.get_soft_device_placement", + "tf.compat.v1.config.get_visible_devices": "tf.config.get_visible_devices", + "tf.compat.v1.config.list_logical_devices": "tf.config.list_logical_devices", + "tf.compat.v1.config.list_physical_devices": "tf.config.list_physical_devices", + "tf.compat.v1.config.optimizer.get_experimental_options": "tf.config.optimizer.get_experimental_options", + "tf.compat.v1.config.optimizer.get_jit": "tf.config.optimizer.get_jit", + "tf.compat.v1.config.optimizer.set_experimental_options": "tf.config.optimizer.set_experimental_options", + "tf.compat.v1.config.optimizer.set_jit": "tf.config.optimizer.set_jit", + "tf.compat.v1.config.set_logical_device_configuration": "tf.config.set_logical_device_configuration", + "tf.compat.v1.config.set_soft_device_placement": "tf.config.set_soft_device_placement", + "tf.compat.v1.config.set_visible_devices": "tf.config.set_visible_devices", + "tf.compat.v1.config.threading.get_inter_op_parallelism_threads": "tf.config.threading.get_inter_op_parallelism_threads", + "tf.compat.v1.config.threading.get_intra_op_parallelism_threads": "tf.config.threading.get_intra_op_parallelism_threads", + "tf.compat.v1.config.threading.set_inter_op_parallelism_threads": "tf.config.threading.set_inter_op_parallelism_threads", + "tf.compat.v1.config.threading.set_intra_op_parallelism_threads": "tf.config.threading.set_intra_op_parallelism_threads", + "tf.compat.v1.conj": "tf.math.conj", + "tf.compat.v1.constant_initializer": "tf.compat.v1.keras.initializers.Constant", + "tf.compat.v1.constant_initializer.__call__": "tf.compat.v1.keras.initializers.Constant.__call__", + "tf.compat.v1.constant_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.constant_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.constant_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.constant_initializer.__init__": "tf.compat.v1.keras.initializers.Constant.__init__", + "tf.compat.v1.constant_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.constant_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.constant_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.constant_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.constant_initializer.get_config": "tf.compat.v1.keras.initializers.Constant.get_config", + "tf.compat.v1.control_dependencies": "tf.control_dependencies", + "tf.compat.v1.cos": "tf.math.cos", + "tf.compat.v1.cosh": "tf.math.cosh", + "tf.compat.v1.cross": "tf.linalg.cross", + "tf.compat.v1.cumprod": "tf.math.cumprod", + "tf.compat.v1.cumsum": "tf.math.cumsum", + "tf.compat.v1.custom_gradient": "tf.custom_gradient", + "tf.compat.v1.data.Dataset.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.Dataset.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.Dataset.__gt__": "tf.keras.Model.__gt__", + 
"tf.compat.v1.data.Dataset.__iter__": "tf.data.Dataset.__iter__", + "tf.compat.v1.data.Dataset.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.Dataset.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.Dataset.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.Dataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.Dataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.compat.v1.data.Dataset.enumerate": "tf.data.Dataset.enumerate", + "tf.compat.v1.data.Dataset.options": "tf.data.Dataset.options", + "tf.compat.v1.data.Dataset.reduce": "tf.data.Dataset.reduce", + "tf.compat.v1.data.DatasetSpec": "tf.data.DatasetSpec", + "tf.compat.v1.data.DatasetSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.compat.v1.data.DatasetSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.DatasetSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.DatasetSpec.__init__": "tf.data.DatasetSpec.__init__", + "tf.compat.v1.data.DatasetSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.DatasetSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.DatasetSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.compat.v1.data.DatasetSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.DatasetSpec.from_value": "tf.data.DatasetSpec.from_value", + "tf.compat.v1.data.DatasetSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.compat.v1.data.DatasetSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.compat.v1.data.DatasetSpec.value_type": "tf.data.DatasetSpec.value_type", + "tf.compat.v1.data.FixedLengthRecordDataset.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.FixedLengthRecordDataset.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.FixedLengthRecordDataset.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.FixedLengthRecordDataset.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.FixedLengthRecordDataset.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.FixedLengthRecordDataset.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.FixedLengthRecordDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.FixedLengthRecordDataset.apply": "tf.compat.v1.data.Dataset.apply", + "tf.compat.v1.data.FixedLengthRecordDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.compat.v1.data.FixedLengthRecordDataset.batch": "tf.compat.v1.data.Dataset.batch", + "tf.compat.v1.data.FixedLengthRecordDataset.cache": "tf.compat.v1.data.Dataset.cache", + "tf.compat.v1.data.FixedLengthRecordDataset.concatenate": "tf.compat.v1.data.Dataset.concatenate", + "tf.compat.v1.data.FixedLengthRecordDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.compat.v1.data.FixedLengthRecordDataset.filter": "tf.compat.v1.data.Dataset.filter", + "tf.compat.v1.data.FixedLengthRecordDataset.filter_with_legacy_function": "tf.compat.v1.data.Dataset.filter_with_legacy_function", + "tf.compat.v1.data.FixedLengthRecordDataset.flat_map": "tf.compat.v1.data.Dataset.flat_map", + "tf.compat.v1.data.FixedLengthRecordDataset.from_generator": "tf.compat.v1.data.Dataset.from_generator", + "tf.compat.v1.data.FixedLengthRecordDataset.from_sparse_tensor_slices": "tf.compat.v1.data.Dataset.from_sparse_tensor_slices", + "tf.compat.v1.data.FixedLengthRecordDataset.from_tensor_slices": "tf.compat.v1.data.Dataset.from_tensor_slices", + "tf.compat.v1.data.FixedLengthRecordDataset.from_tensors": "tf.compat.v1.data.Dataset.from_tensors", + 
"tf.compat.v1.data.FixedLengthRecordDataset.interleave": "tf.compat.v1.data.Dataset.interleave", + "tf.compat.v1.data.FixedLengthRecordDataset.list_files": "tf.compat.v1.data.Dataset.list_files", + "tf.compat.v1.data.FixedLengthRecordDataset.make_initializable_iterator": "tf.compat.v1.data.Dataset.make_initializable_iterator", + "tf.compat.v1.data.FixedLengthRecordDataset.make_one_shot_iterator": "tf.compat.v1.data.Dataset.make_one_shot_iterator", + "tf.compat.v1.data.FixedLengthRecordDataset.map": "tf.compat.v1.data.Dataset.map", + "tf.compat.v1.data.FixedLengthRecordDataset.map_with_legacy_function": "tf.compat.v1.data.Dataset.map_with_legacy_function", + "tf.compat.v1.data.FixedLengthRecordDataset.output_classes": "tf.compat.v1.data.Dataset.output_classes", + "tf.compat.v1.data.FixedLengthRecordDataset.output_shapes": "tf.compat.v1.data.Dataset.output_shapes", + "tf.compat.v1.data.FixedLengthRecordDataset.output_types": "tf.compat.v1.data.Dataset.output_types", + "tf.compat.v1.data.FixedLengthRecordDataset.padded_batch": "tf.compat.v1.data.Dataset.padded_batch", + "tf.compat.v1.data.FixedLengthRecordDataset.prefetch": "tf.compat.v1.data.Dataset.prefetch", + "tf.compat.v1.data.FixedLengthRecordDataset.range": "tf.compat.v1.data.Dataset.range", + "tf.compat.v1.data.FixedLengthRecordDataset.reduce": "tf.data.Dataset.reduce", + "tf.compat.v1.data.FixedLengthRecordDataset.repeat": "tf.compat.v1.data.Dataset.repeat", + "tf.compat.v1.data.FixedLengthRecordDataset.shard": "tf.compat.v1.data.Dataset.shard", + "tf.compat.v1.data.FixedLengthRecordDataset.shuffle": "tf.compat.v1.data.Dataset.shuffle", + "tf.compat.v1.data.FixedLengthRecordDataset.skip": "tf.compat.v1.data.Dataset.skip", + "tf.compat.v1.data.FixedLengthRecordDataset.take": "tf.compat.v1.data.Dataset.take", + "tf.compat.v1.data.FixedLengthRecordDataset.unbatch": "tf.compat.v1.data.Dataset.unbatch", + "tf.compat.v1.data.FixedLengthRecordDataset.window": "tf.compat.v1.data.Dataset.window", + "tf.compat.v1.data.FixedLengthRecordDataset.with_options": "tf.compat.v1.data.Dataset.with_options", + "tf.compat.v1.data.FixedLengthRecordDataset.zip": "tf.compat.v1.data.Dataset.zip", + "tf.compat.v1.data.Iterator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.Iterator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.Iterator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.Iterator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.Iterator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.Iterator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.Iterator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.Options": "tf.data.Options", + "tf.compat.v1.data.Options.__eq__": "tf.data.Options.__eq__", + "tf.compat.v1.data.Options.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.Options.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.Options.__init__": "tf.data.Options.__init__", + "tf.compat.v1.data.Options.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.Options.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.Options.__ne__": "tf.data.Options.__ne__", + "tf.compat.v1.data.Options.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.Options.experimental_deterministic": "tf.data.Options.experimental_deterministic", + "tf.compat.v1.data.Options.experimental_distribute": "tf.data.Options.experimental_distribute", + "tf.compat.v1.data.Options.experimental_external_state_policy": "tf.data.Options.experimental_external_state_policy", + 
"tf.compat.v1.data.Options.experimental_optimization": "tf.data.Options.experimental_optimization", + "tf.compat.v1.data.Options.experimental_slack": "tf.data.Options.experimental_slack", + "tf.compat.v1.data.Options.experimental_stats": "tf.data.Options.experimental_stats", + "tf.compat.v1.data.Options.experimental_threading": "tf.data.Options.experimental_threading", + "tf.compat.v1.data.Options.merge": "tf.data.Options.merge", + "tf.compat.v1.data.TFRecordDataset.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.TFRecordDataset.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.TFRecordDataset.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.TFRecordDataset.__iter__": "tf.compat.v1.data.FixedLengthRecordDataset.__iter__", + "tf.compat.v1.data.TFRecordDataset.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.TFRecordDataset.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.TFRecordDataset.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.TFRecordDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.TFRecordDataset.apply": "tf.compat.v1.data.Dataset.apply", + "tf.compat.v1.data.TFRecordDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.compat.v1.data.TFRecordDataset.batch": "tf.compat.v1.data.Dataset.batch", + "tf.compat.v1.data.TFRecordDataset.cache": "tf.compat.v1.data.Dataset.cache", + "tf.compat.v1.data.TFRecordDataset.concatenate": "tf.compat.v1.data.Dataset.concatenate", + "tf.compat.v1.data.TFRecordDataset.element_spec": "tf.compat.v1.data.FixedLengthRecordDataset.element_spec", + "tf.compat.v1.data.TFRecordDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.compat.v1.data.TFRecordDataset.filter": "tf.compat.v1.data.Dataset.filter", + "tf.compat.v1.data.TFRecordDataset.filter_with_legacy_function": "tf.compat.v1.data.Dataset.filter_with_legacy_function", + "tf.compat.v1.data.TFRecordDataset.flat_map": "tf.compat.v1.data.Dataset.flat_map", + "tf.compat.v1.data.TFRecordDataset.from_generator": "tf.compat.v1.data.Dataset.from_generator", + "tf.compat.v1.data.TFRecordDataset.from_sparse_tensor_slices": "tf.compat.v1.data.Dataset.from_sparse_tensor_slices", + "tf.compat.v1.data.TFRecordDataset.from_tensor_slices": "tf.compat.v1.data.Dataset.from_tensor_slices", + "tf.compat.v1.data.TFRecordDataset.from_tensors": "tf.compat.v1.data.Dataset.from_tensors", + "tf.compat.v1.data.TFRecordDataset.interleave": "tf.compat.v1.data.Dataset.interleave", + "tf.compat.v1.data.TFRecordDataset.list_files": "tf.compat.v1.data.Dataset.list_files", + "tf.compat.v1.data.TFRecordDataset.make_initializable_iterator": "tf.compat.v1.data.Dataset.make_initializable_iterator", + "tf.compat.v1.data.TFRecordDataset.make_one_shot_iterator": "tf.compat.v1.data.Dataset.make_one_shot_iterator", + "tf.compat.v1.data.TFRecordDataset.map": "tf.compat.v1.data.Dataset.map", + "tf.compat.v1.data.TFRecordDataset.map_with_legacy_function": "tf.compat.v1.data.Dataset.map_with_legacy_function", + "tf.compat.v1.data.TFRecordDataset.options": "tf.compat.v1.data.FixedLengthRecordDataset.options", + "tf.compat.v1.data.TFRecordDataset.output_classes": "tf.compat.v1.data.Dataset.output_classes", + "tf.compat.v1.data.TFRecordDataset.output_shapes": "tf.compat.v1.data.Dataset.output_shapes", + "tf.compat.v1.data.TFRecordDataset.output_types": "tf.compat.v1.data.Dataset.output_types", + "tf.compat.v1.data.TFRecordDataset.padded_batch": "tf.compat.v1.data.Dataset.padded_batch", + "tf.compat.v1.data.TFRecordDataset.prefetch": 
"tf.compat.v1.data.Dataset.prefetch", + "tf.compat.v1.data.TFRecordDataset.range": "tf.compat.v1.data.Dataset.range", + "tf.compat.v1.data.TFRecordDataset.reduce": "tf.data.Dataset.reduce", + "tf.compat.v1.data.TFRecordDataset.repeat": "tf.compat.v1.data.Dataset.repeat", + "tf.compat.v1.data.TFRecordDataset.shard": "tf.compat.v1.data.Dataset.shard", + "tf.compat.v1.data.TFRecordDataset.shuffle": "tf.compat.v1.data.Dataset.shuffle", + "tf.compat.v1.data.TFRecordDataset.skip": "tf.compat.v1.data.Dataset.skip", + "tf.compat.v1.data.TFRecordDataset.take": "tf.compat.v1.data.Dataset.take", + "tf.compat.v1.data.TFRecordDataset.unbatch": "tf.compat.v1.data.Dataset.unbatch", + "tf.compat.v1.data.TFRecordDataset.window": "tf.compat.v1.data.Dataset.window", + "tf.compat.v1.data.TFRecordDataset.with_options": "tf.compat.v1.data.Dataset.with_options", + "tf.compat.v1.data.TFRecordDataset.zip": "tf.compat.v1.data.Dataset.zip", + "tf.compat.v1.data.TextLineDataset.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.TextLineDataset.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.TextLineDataset.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.TextLineDataset.__iter__": "tf.compat.v1.data.FixedLengthRecordDataset.__iter__", + "tf.compat.v1.data.TextLineDataset.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.TextLineDataset.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.TextLineDataset.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.TextLineDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.TextLineDataset.apply": "tf.compat.v1.data.Dataset.apply", + "tf.compat.v1.data.TextLineDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.compat.v1.data.TextLineDataset.batch": "tf.compat.v1.data.Dataset.batch", + "tf.compat.v1.data.TextLineDataset.cache": "tf.compat.v1.data.Dataset.cache", + "tf.compat.v1.data.TextLineDataset.concatenate": "tf.compat.v1.data.Dataset.concatenate", + "tf.compat.v1.data.TextLineDataset.element_spec": "tf.compat.v1.data.FixedLengthRecordDataset.element_spec", + "tf.compat.v1.data.TextLineDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.compat.v1.data.TextLineDataset.filter": "tf.compat.v1.data.Dataset.filter", + "tf.compat.v1.data.TextLineDataset.filter_with_legacy_function": "tf.compat.v1.data.Dataset.filter_with_legacy_function", + "tf.compat.v1.data.TextLineDataset.flat_map": "tf.compat.v1.data.Dataset.flat_map", + "tf.compat.v1.data.TextLineDataset.from_generator": "tf.compat.v1.data.Dataset.from_generator", + "tf.compat.v1.data.TextLineDataset.from_sparse_tensor_slices": "tf.compat.v1.data.Dataset.from_sparse_tensor_slices", + "tf.compat.v1.data.TextLineDataset.from_tensor_slices": "tf.compat.v1.data.Dataset.from_tensor_slices", + "tf.compat.v1.data.TextLineDataset.from_tensors": "tf.compat.v1.data.Dataset.from_tensors", + "tf.compat.v1.data.TextLineDataset.interleave": "tf.compat.v1.data.Dataset.interleave", + "tf.compat.v1.data.TextLineDataset.list_files": "tf.compat.v1.data.Dataset.list_files", + "tf.compat.v1.data.TextLineDataset.make_initializable_iterator": "tf.compat.v1.data.Dataset.make_initializable_iterator", + "tf.compat.v1.data.TextLineDataset.make_one_shot_iterator": "tf.compat.v1.data.Dataset.make_one_shot_iterator", + "tf.compat.v1.data.TextLineDataset.map": "tf.compat.v1.data.Dataset.map", + "tf.compat.v1.data.TextLineDataset.map_with_legacy_function": "tf.compat.v1.data.Dataset.map_with_legacy_function", + "tf.compat.v1.data.TextLineDataset.options": 
"tf.compat.v1.data.FixedLengthRecordDataset.options", + "tf.compat.v1.data.TextLineDataset.output_classes": "tf.compat.v1.data.Dataset.output_classes", + "tf.compat.v1.data.TextLineDataset.output_shapes": "tf.compat.v1.data.Dataset.output_shapes", + "tf.compat.v1.data.TextLineDataset.output_types": "tf.compat.v1.data.Dataset.output_types", + "tf.compat.v1.data.TextLineDataset.padded_batch": "tf.compat.v1.data.Dataset.padded_batch", + "tf.compat.v1.data.TextLineDataset.prefetch": "tf.compat.v1.data.Dataset.prefetch", + "tf.compat.v1.data.TextLineDataset.range": "tf.compat.v1.data.Dataset.range", + "tf.compat.v1.data.TextLineDataset.reduce": "tf.data.Dataset.reduce", + "tf.compat.v1.data.TextLineDataset.repeat": "tf.compat.v1.data.Dataset.repeat", + "tf.compat.v1.data.TextLineDataset.shard": "tf.compat.v1.data.Dataset.shard", + "tf.compat.v1.data.TextLineDataset.shuffle": "tf.compat.v1.data.Dataset.shuffle", + "tf.compat.v1.data.TextLineDataset.skip": "tf.compat.v1.data.Dataset.skip", + "tf.compat.v1.data.TextLineDataset.take": "tf.compat.v1.data.Dataset.take", + "tf.compat.v1.data.TextLineDataset.unbatch": "tf.compat.v1.data.Dataset.unbatch", + "tf.compat.v1.data.TextLineDataset.window": "tf.compat.v1.data.Dataset.window", + "tf.compat.v1.data.TextLineDataset.with_options": "tf.compat.v1.data.Dataset.with_options", + "tf.compat.v1.data.TextLineDataset.zip": "tf.compat.v1.data.Dataset.zip", + "tf.compat.v1.data.experimental.AutoShardPolicy": "tf.data.experimental.AutoShardPolicy", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook": "tf.data.experimental.CheckpointInputPipelineHook", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__init__": "tf.data.experimental.CheckpointInputPipelineHook.__init__", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.after_create_session": "tf.data.experimental.CheckpointInputPipelineHook.after_create_session", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.after_run": "tf.data.experimental.CheckpointInputPipelineHook.after_run", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.before_run": "tf.data.experimental.CheckpointInputPipelineHook.before_run", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.begin": "tf.data.experimental.CheckpointInputPipelineHook.begin", + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.end": "tf.data.experimental.CheckpointInputPipelineHook.end", + "tf.compat.v1.data.experimental.CsvDataset.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.experimental.CsvDataset.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.CsvDataset.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.CsvDataset.__iter__": "tf.compat.v1.data.FixedLengthRecordDataset.__iter__", + "tf.compat.v1.data.experimental.CsvDataset.__le__": "tf.keras.Model.__le__", 
+ "tf.compat.v1.data.experimental.CsvDataset.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.CsvDataset.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.experimental.CsvDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.CsvDataset.apply": "tf.compat.v1.data.Dataset.apply", + "tf.compat.v1.data.experimental.CsvDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.compat.v1.data.experimental.CsvDataset.batch": "tf.compat.v1.data.Dataset.batch", + "tf.compat.v1.data.experimental.CsvDataset.cache": "tf.compat.v1.data.Dataset.cache", + "tf.compat.v1.data.experimental.CsvDataset.concatenate": "tf.compat.v1.data.Dataset.concatenate", + "tf.compat.v1.data.experimental.CsvDataset.element_spec": "tf.compat.v1.data.FixedLengthRecordDataset.element_spec", + "tf.compat.v1.data.experimental.CsvDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.compat.v1.data.experimental.CsvDataset.filter": "tf.compat.v1.data.Dataset.filter", + "tf.compat.v1.data.experimental.CsvDataset.filter_with_legacy_function": "tf.compat.v1.data.Dataset.filter_with_legacy_function", + "tf.compat.v1.data.experimental.CsvDataset.flat_map": "tf.compat.v1.data.Dataset.flat_map", + "tf.compat.v1.data.experimental.CsvDataset.from_generator": "tf.compat.v1.data.Dataset.from_generator", + "tf.compat.v1.data.experimental.CsvDataset.from_sparse_tensor_slices": "tf.compat.v1.data.Dataset.from_sparse_tensor_slices", + "tf.compat.v1.data.experimental.CsvDataset.from_tensor_slices": "tf.compat.v1.data.Dataset.from_tensor_slices", + "tf.compat.v1.data.experimental.CsvDataset.from_tensors": "tf.compat.v1.data.Dataset.from_tensors", + "tf.compat.v1.data.experimental.CsvDataset.interleave": "tf.compat.v1.data.Dataset.interleave", + "tf.compat.v1.data.experimental.CsvDataset.list_files": "tf.compat.v1.data.Dataset.list_files", + "tf.compat.v1.data.experimental.CsvDataset.make_initializable_iterator": "tf.compat.v1.data.Dataset.make_initializable_iterator", + "tf.compat.v1.data.experimental.CsvDataset.make_one_shot_iterator": "tf.compat.v1.data.Dataset.make_one_shot_iterator", + "tf.compat.v1.data.experimental.CsvDataset.map": "tf.compat.v1.data.Dataset.map", + "tf.compat.v1.data.experimental.CsvDataset.map_with_legacy_function": "tf.compat.v1.data.Dataset.map_with_legacy_function", + "tf.compat.v1.data.experimental.CsvDataset.options": "tf.compat.v1.data.FixedLengthRecordDataset.options", + "tf.compat.v1.data.experimental.CsvDataset.output_classes": "tf.compat.v1.data.Dataset.output_classes", + "tf.compat.v1.data.experimental.CsvDataset.output_shapes": "tf.compat.v1.data.Dataset.output_shapes", + "tf.compat.v1.data.experimental.CsvDataset.output_types": "tf.compat.v1.data.Dataset.output_types", + "tf.compat.v1.data.experimental.CsvDataset.padded_batch": "tf.compat.v1.data.Dataset.padded_batch", + "tf.compat.v1.data.experimental.CsvDataset.prefetch": "tf.compat.v1.data.Dataset.prefetch", + "tf.compat.v1.data.experimental.CsvDataset.range": "tf.compat.v1.data.Dataset.range", + "tf.compat.v1.data.experimental.CsvDataset.reduce": "tf.data.Dataset.reduce", + "tf.compat.v1.data.experimental.CsvDataset.repeat": "tf.compat.v1.data.Dataset.repeat", + "tf.compat.v1.data.experimental.CsvDataset.shard": "tf.compat.v1.data.Dataset.shard", + "tf.compat.v1.data.experimental.CsvDataset.shuffle": "tf.compat.v1.data.Dataset.shuffle", + "tf.compat.v1.data.experimental.CsvDataset.skip": "tf.compat.v1.data.Dataset.skip", + "tf.compat.v1.data.experimental.CsvDataset.take": 
"tf.compat.v1.data.Dataset.take", + "tf.compat.v1.data.experimental.CsvDataset.unbatch": "tf.compat.v1.data.Dataset.unbatch", + "tf.compat.v1.data.experimental.CsvDataset.window": "tf.compat.v1.data.Dataset.window", + "tf.compat.v1.data.experimental.CsvDataset.with_options": "tf.compat.v1.data.Dataset.with_options", + "tf.compat.v1.data.experimental.CsvDataset.zip": "tf.compat.v1.data.Dataset.zip", + "tf.compat.v1.data.experimental.DatasetStructure": "tf.data.DatasetSpec", + "tf.compat.v1.data.experimental.DatasetStructure.__eq__": "tf.TypeSpec.__eq__", + "tf.compat.v1.data.experimental.DatasetStructure.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.DatasetStructure.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.DatasetStructure.__init__": "tf.data.DatasetSpec.__init__", + "tf.compat.v1.data.experimental.DatasetStructure.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.DatasetStructure.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.DatasetStructure.__ne__": "tf.TypeSpec.__ne__", + "tf.compat.v1.data.experimental.DatasetStructure.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.DatasetStructure.from_value": "tf.data.DatasetSpec.from_value", + "tf.compat.v1.data.experimental.DatasetStructure.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.compat.v1.data.experimental.DatasetStructure.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.compat.v1.data.experimental.DatasetStructure.value_type": "tf.data.DatasetSpec.value_type", + "tf.compat.v1.data.experimental.DistributeOptions": "tf.data.experimental.DistributeOptions", + "tf.compat.v1.data.experimental.DistributeOptions.__eq__": "tf.data.Options.__eq__", + "tf.compat.v1.data.experimental.DistributeOptions.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.DistributeOptions.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.DistributeOptions.__init__": "tf.data.Options.__init__", + "tf.compat.v1.data.experimental.DistributeOptions.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.DistributeOptions.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.DistributeOptions.__ne__": "tf.data.Options.__ne__", + "tf.compat.v1.data.experimental.DistributeOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.DistributeOptions.auto_shard_policy": "tf.data.experimental.DistributeOptions.auto_shard_policy", + "tf.compat.v1.data.experimental.DistributeOptions.num_devices": "tf.data.experimental.DistributeOptions.num_devices", + "tf.compat.v1.data.experimental.MapVectorizationOptions": "tf.data.experimental.MapVectorizationOptions", + "tf.compat.v1.data.experimental.MapVectorizationOptions.__eq__": "tf.data.Options.__eq__", + "tf.compat.v1.data.experimental.MapVectorizationOptions.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.MapVectorizationOptions.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.MapVectorizationOptions.__init__": "tf.data.Options.__init__", + "tf.compat.v1.data.experimental.MapVectorizationOptions.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.MapVectorizationOptions.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.MapVectorizationOptions.__ne__": "tf.data.Options.__ne__", + "tf.compat.v1.data.experimental.MapVectorizationOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.compat.v1.data.experimental.MapVectorizationOptions.enabled": "tf.data.experimental.MapVectorizationOptions.enabled", + "tf.compat.v1.data.experimental.MapVectorizationOptions.use_choose_fastest": "tf.data.experimental.MapVectorizationOptions.use_choose_fastest", + "tf.compat.v1.data.experimental.OptimizationOptions": "tf.data.experimental.OptimizationOptions", + "tf.compat.v1.data.experimental.OptimizationOptions.__eq__": "tf.data.Options.__eq__", + "tf.compat.v1.data.experimental.OptimizationOptions.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.OptimizationOptions.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.OptimizationOptions.__init__": "tf.data.Options.__init__", + "tf.compat.v1.data.experimental.OptimizationOptions.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.OptimizationOptions.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.OptimizationOptions.__ne__": "tf.data.Options.__ne__", + "tf.compat.v1.data.experimental.OptimizationOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.OptimizationOptions.apply_default_optimizations": "tf.data.experimental.OptimizationOptions.apply_default_optimizations", + "tf.compat.v1.data.experimental.OptimizationOptions.autotune": "tf.data.experimental.OptimizationOptions.autotune", + "tf.compat.v1.data.experimental.OptimizationOptions.autotune_buffers": "tf.data.experimental.OptimizationOptions.autotune_buffers", + "tf.compat.v1.data.experimental.OptimizationOptions.autotune_cpu_budget": "tf.data.experimental.OptimizationOptions.autotune_cpu_budget", + "tf.compat.v1.data.experimental.OptimizationOptions.filter_fusion": "tf.data.experimental.OptimizationOptions.filter_fusion", + "tf.compat.v1.data.experimental.OptimizationOptions.filter_with_random_uniform_fusion": "tf.data.experimental.OptimizationOptions.filter_with_random_uniform_fusion", + "tf.compat.v1.data.experimental.OptimizationOptions.hoist_random_uniform": "tf.data.experimental.OptimizationOptions.hoist_random_uniform", + "tf.compat.v1.data.experimental.OptimizationOptions.map_and_batch_fusion": "tf.data.experimental.OptimizationOptions.map_and_batch_fusion", + "tf.compat.v1.data.experimental.OptimizationOptions.map_and_filter_fusion": "tf.data.experimental.OptimizationOptions.map_and_filter_fusion", + "tf.compat.v1.data.experimental.OptimizationOptions.map_fusion": "tf.data.experimental.OptimizationOptions.map_fusion", + "tf.compat.v1.data.experimental.OptimizationOptions.map_parallelization": "tf.data.experimental.OptimizationOptions.map_parallelization", + "tf.compat.v1.data.experimental.OptimizationOptions.map_vectorization": "tf.data.experimental.OptimizationOptions.map_vectorization", + "tf.compat.v1.data.experimental.OptimizationOptions.noop_elimination": "tf.data.experimental.OptimizationOptions.noop_elimination", + "tf.compat.v1.data.experimental.OptimizationOptions.parallel_batch": "tf.data.experimental.OptimizationOptions.parallel_batch", + "tf.compat.v1.data.experimental.OptimizationOptions.shuffle_and_repeat_fusion": "tf.data.experimental.OptimizationOptions.shuffle_and_repeat_fusion", + "tf.compat.v1.data.experimental.Optional": "tf.data.experimental.Optional", + "tf.compat.v1.data.experimental.Optional.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.experimental.Optional.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.Optional.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.Optional.__init__": 
"tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.data.experimental.Optional.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.Optional.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.Optional.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.experimental.Optional.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.Optional.from_value": "tf.data.experimental.Optional.from_value", + "tf.compat.v1.data.experimental.Optional.get_value": "tf.data.experimental.Optional.get_value", + "tf.compat.v1.data.experimental.Optional.has_value": "tf.data.experimental.Optional.has_value", + "tf.compat.v1.data.experimental.Optional.none_from_structure": "tf.data.experimental.Optional.none_from_structure", + "tf.compat.v1.data.experimental.Optional.value_structure": "tf.data.experimental.Optional.value_structure", + "tf.compat.v1.data.experimental.OptionalStructure": "tf.OptionalSpec", + "tf.compat.v1.data.experimental.OptionalStructure.__eq__": "tf.TypeSpec.__eq__", + "tf.compat.v1.data.experimental.OptionalStructure.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.OptionalStructure.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.OptionalStructure.__init__": "tf.OptionalSpec.__init__", + "tf.compat.v1.data.experimental.OptionalStructure.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.OptionalStructure.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.OptionalStructure.__ne__": "tf.TypeSpec.__ne__", + "tf.compat.v1.data.experimental.OptionalStructure.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.OptionalStructure.from_value": "tf.OptionalSpec.from_value", + "tf.compat.v1.data.experimental.OptionalStructure.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.compat.v1.data.experimental.OptionalStructure.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.compat.v1.data.experimental.OptionalStructure.value_type": "tf.OptionalSpec.value_type", + "tf.compat.v1.data.experimental.RandomDataset.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.experimental.RandomDataset.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.RandomDataset.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.RandomDataset.__iter__": "tf.compat.v1.data.FixedLengthRecordDataset.__iter__", + "tf.compat.v1.data.experimental.RandomDataset.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.RandomDataset.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.RandomDataset.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.experimental.RandomDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.RandomDataset.apply": "tf.compat.v1.data.Dataset.apply", + "tf.compat.v1.data.experimental.RandomDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.compat.v1.data.experimental.RandomDataset.batch": "tf.compat.v1.data.Dataset.batch", + "tf.compat.v1.data.experimental.RandomDataset.cache": "tf.compat.v1.data.Dataset.cache", + "tf.compat.v1.data.experimental.RandomDataset.concatenate": "tf.compat.v1.data.Dataset.concatenate", + "tf.compat.v1.data.experimental.RandomDataset.element_spec": "tf.compat.v1.data.FixedLengthRecordDataset.element_spec", + "tf.compat.v1.data.experimental.RandomDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.compat.v1.data.experimental.RandomDataset.filter": 
"tf.compat.v1.data.Dataset.filter", + "tf.compat.v1.data.experimental.RandomDataset.filter_with_legacy_function": "tf.compat.v1.data.Dataset.filter_with_legacy_function", + "tf.compat.v1.data.experimental.RandomDataset.flat_map": "tf.compat.v1.data.Dataset.flat_map", + "tf.compat.v1.data.experimental.RandomDataset.from_generator": "tf.compat.v1.data.Dataset.from_generator", + "tf.compat.v1.data.experimental.RandomDataset.from_sparse_tensor_slices": "tf.compat.v1.data.Dataset.from_sparse_tensor_slices", + "tf.compat.v1.data.experimental.RandomDataset.from_tensor_slices": "tf.compat.v1.data.Dataset.from_tensor_slices", + "tf.compat.v1.data.experimental.RandomDataset.from_tensors": "tf.compat.v1.data.Dataset.from_tensors", + "tf.compat.v1.data.experimental.RandomDataset.interleave": "tf.compat.v1.data.Dataset.interleave", + "tf.compat.v1.data.experimental.RandomDataset.list_files": "tf.compat.v1.data.Dataset.list_files", + "tf.compat.v1.data.experimental.RandomDataset.make_initializable_iterator": "tf.compat.v1.data.Dataset.make_initializable_iterator", + "tf.compat.v1.data.experimental.RandomDataset.make_one_shot_iterator": "tf.compat.v1.data.Dataset.make_one_shot_iterator", + "tf.compat.v1.data.experimental.RandomDataset.map": "tf.compat.v1.data.Dataset.map", + "tf.compat.v1.data.experimental.RandomDataset.map_with_legacy_function": "tf.compat.v1.data.Dataset.map_with_legacy_function", + "tf.compat.v1.data.experimental.RandomDataset.options": "tf.compat.v1.data.FixedLengthRecordDataset.options", + "tf.compat.v1.data.experimental.RandomDataset.output_classes": "tf.compat.v1.data.Dataset.output_classes", + "tf.compat.v1.data.experimental.RandomDataset.output_shapes": "tf.compat.v1.data.Dataset.output_shapes", + "tf.compat.v1.data.experimental.RandomDataset.output_types": "tf.compat.v1.data.Dataset.output_types", + "tf.compat.v1.data.experimental.RandomDataset.padded_batch": "tf.compat.v1.data.Dataset.padded_batch", + "tf.compat.v1.data.experimental.RandomDataset.prefetch": "tf.compat.v1.data.Dataset.prefetch", + "tf.compat.v1.data.experimental.RandomDataset.range": "tf.compat.v1.data.Dataset.range", + "tf.compat.v1.data.experimental.RandomDataset.reduce": "tf.data.Dataset.reduce", + "tf.compat.v1.data.experimental.RandomDataset.repeat": "tf.compat.v1.data.Dataset.repeat", + "tf.compat.v1.data.experimental.RandomDataset.shard": "tf.compat.v1.data.Dataset.shard", + "tf.compat.v1.data.experimental.RandomDataset.shuffle": "tf.compat.v1.data.Dataset.shuffle", + "tf.compat.v1.data.experimental.RandomDataset.skip": "tf.compat.v1.data.Dataset.skip", + "tf.compat.v1.data.experimental.RandomDataset.take": "tf.compat.v1.data.Dataset.take", + "tf.compat.v1.data.experimental.RandomDataset.unbatch": "tf.compat.v1.data.Dataset.unbatch", + "tf.compat.v1.data.experimental.RandomDataset.window": "tf.compat.v1.data.Dataset.window", + "tf.compat.v1.data.experimental.RandomDataset.with_options": "tf.compat.v1.data.Dataset.with_options", + "tf.compat.v1.data.experimental.RandomDataset.zip": "tf.compat.v1.data.Dataset.zip", + "tf.compat.v1.data.experimental.Reducer": "tf.data.experimental.Reducer", + "tf.compat.v1.data.experimental.Reducer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.experimental.Reducer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.Reducer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.Reducer.__init__": "tf.data.experimental.Reducer.__init__", + "tf.compat.v1.data.experimental.Reducer.__le__": "tf.keras.Model.__le__", + 
"tf.compat.v1.data.experimental.Reducer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.Reducer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.experimental.Reducer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.Reducer.finalize_func": "tf.data.experimental.Reducer.finalize_func", + "tf.compat.v1.data.experimental.Reducer.init_func": "tf.data.experimental.Reducer.init_func", + "tf.compat.v1.data.experimental.Reducer.reduce_func": "tf.data.experimental.Reducer.reduce_func", + "tf.compat.v1.data.experimental.SqlDataset.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.experimental.SqlDataset.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.SqlDataset.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.SqlDataset.__iter__": "tf.compat.v1.data.FixedLengthRecordDataset.__iter__", + "tf.compat.v1.data.experimental.SqlDataset.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.SqlDataset.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.SqlDataset.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.experimental.SqlDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.SqlDataset.apply": "tf.compat.v1.data.Dataset.apply", + "tf.compat.v1.data.experimental.SqlDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.compat.v1.data.experimental.SqlDataset.batch": "tf.compat.v1.data.Dataset.batch", + "tf.compat.v1.data.experimental.SqlDataset.cache": "tf.compat.v1.data.Dataset.cache", + "tf.compat.v1.data.experimental.SqlDataset.concatenate": "tf.compat.v1.data.Dataset.concatenate", + "tf.compat.v1.data.experimental.SqlDataset.element_spec": "tf.compat.v1.data.FixedLengthRecordDataset.element_spec", + "tf.compat.v1.data.experimental.SqlDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.compat.v1.data.experimental.SqlDataset.filter": "tf.compat.v1.data.Dataset.filter", + "tf.compat.v1.data.experimental.SqlDataset.filter_with_legacy_function": "tf.compat.v1.data.Dataset.filter_with_legacy_function", + "tf.compat.v1.data.experimental.SqlDataset.flat_map": "tf.compat.v1.data.Dataset.flat_map", + "tf.compat.v1.data.experimental.SqlDataset.from_generator": "tf.compat.v1.data.Dataset.from_generator", + "tf.compat.v1.data.experimental.SqlDataset.from_sparse_tensor_slices": "tf.compat.v1.data.Dataset.from_sparse_tensor_slices", + "tf.compat.v1.data.experimental.SqlDataset.from_tensor_slices": "tf.compat.v1.data.Dataset.from_tensor_slices", + "tf.compat.v1.data.experimental.SqlDataset.from_tensors": "tf.compat.v1.data.Dataset.from_tensors", + "tf.compat.v1.data.experimental.SqlDataset.interleave": "tf.compat.v1.data.Dataset.interleave", + "tf.compat.v1.data.experimental.SqlDataset.list_files": "tf.compat.v1.data.Dataset.list_files", + "tf.compat.v1.data.experimental.SqlDataset.make_initializable_iterator": "tf.compat.v1.data.Dataset.make_initializable_iterator", + "tf.compat.v1.data.experimental.SqlDataset.make_one_shot_iterator": "tf.compat.v1.data.Dataset.make_one_shot_iterator", + "tf.compat.v1.data.experimental.SqlDataset.map": "tf.compat.v1.data.Dataset.map", + "tf.compat.v1.data.experimental.SqlDataset.map_with_legacy_function": "tf.compat.v1.data.Dataset.map_with_legacy_function", + "tf.compat.v1.data.experimental.SqlDataset.options": "tf.compat.v1.data.FixedLengthRecordDataset.options", + "tf.compat.v1.data.experimental.SqlDataset.output_classes": "tf.compat.v1.data.Dataset.output_classes", + 
"tf.compat.v1.data.experimental.SqlDataset.output_shapes": "tf.compat.v1.data.Dataset.output_shapes", + "tf.compat.v1.data.experimental.SqlDataset.output_types": "tf.compat.v1.data.Dataset.output_types", + "tf.compat.v1.data.experimental.SqlDataset.padded_batch": "tf.compat.v1.data.Dataset.padded_batch", + "tf.compat.v1.data.experimental.SqlDataset.prefetch": "tf.compat.v1.data.Dataset.prefetch", + "tf.compat.v1.data.experimental.SqlDataset.range": "tf.compat.v1.data.Dataset.range", + "tf.compat.v1.data.experimental.SqlDataset.reduce": "tf.data.Dataset.reduce", + "tf.compat.v1.data.experimental.SqlDataset.repeat": "tf.compat.v1.data.Dataset.repeat", + "tf.compat.v1.data.experimental.SqlDataset.shard": "tf.compat.v1.data.Dataset.shard", + "tf.compat.v1.data.experimental.SqlDataset.shuffle": "tf.compat.v1.data.Dataset.shuffle", + "tf.compat.v1.data.experimental.SqlDataset.skip": "tf.compat.v1.data.Dataset.skip", + "tf.compat.v1.data.experimental.SqlDataset.take": "tf.compat.v1.data.Dataset.take", + "tf.compat.v1.data.experimental.SqlDataset.unbatch": "tf.compat.v1.data.Dataset.unbatch", + "tf.compat.v1.data.experimental.SqlDataset.window": "tf.compat.v1.data.Dataset.window", + "tf.compat.v1.data.experimental.SqlDataset.with_options": "tf.compat.v1.data.Dataset.with_options", + "tf.compat.v1.data.experimental.SqlDataset.zip": "tf.compat.v1.data.Dataset.zip", + "tf.compat.v1.data.experimental.StatsAggregator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.experimental.StatsAggregator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.StatsAggregator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.StatsAggregator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.StatsAggregator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.StatsAggregator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.experimental.StatsAggregator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.StatsOptions": "tf.data.experimental.StatsOptions", + "tf.compat.v1.data.experimental.StatsOptions.__eq__": "tf.data.Options.__eq__", + "tf.compat.v1.data.experimental.StatsOptions.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.StatsOptions.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.StatsOptions.__init__": "tf.data.Options.__init__", + "tf.compat.v1.data.experimental.StatsOptions.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.StatsOptions.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.StatsOptions.__ne__": "tf.data.Options.__ne__", + "tf.compat.v1.data.experimental.StatsOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.StatsOptions.aggregator": "tf.data.experimental.StatsOptions.aggregator", + "tf.compat.v1.data.experimental.StatsOptions.counter_prefix": "tf.data.experimental.StatsOptions.counter_prefix", + "tf.compat.v1.data.experimental.StatsOptions.latency_all_edges": "tf.data.experimental.StatsOptions.latency_all_edges", + "tf.compat.v1.data.experimental.StatsOptions.prefix": "tf.data.experimental.StatsOptions.prefix", + "tf.compat.v1.data.experimental.Structure": "tf.TypeSpec", + "tf.compat.v1.data.experimental.Structure.__eq__": "tf.TypeSpec.__eq__", + "tf.compat.v1.data.experimental.Structure.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.Structure.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.Structure.__init__": 
"tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.data.experimental.Structure.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.Structure.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.Structure.__ne__": "tf.TypeSpec.__ne__", + "tf.compat.v1.data.experimental.Structure.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.Structure.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.compat.v1.data.experimental.Structure.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.compat.v1.data.experimental.Structure.value_type": "tf.TypeSpec.value_type", + "tf.compat.v1.data.experimental.TFRecordWriter": "tf.data.experimental.TFRecordWriter", + "tf.compat.v1.data.experimental.TFRecordWriter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.data.experimental.TFRecordWriter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.TFRecordWriter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.TFRecordWriter.__init__": "tf.data.experimental.TFRecordWriter.__init__", + "tf.compat.v1.data.experimental.TFRecordWriter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.TFRecordWriter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.TFRecordWriter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.data.experimental.TFRecordWriter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.TFRecordWriter.write": "tf.data.experimental.TFRecordWriter.write", + "tf.compat.v1.data.experimental.ThreadingOptions": "tf.data.experimental.ThreadingOptions", + "tf.compat.v1.data.experimental.ThreadingOptions.__eq__": "tf.data.Options.__eq__", + "tf.compat.v1.data.experimental.ThreadingOptions.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.data.experimental.ThreadingOptions.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.data.experimental.ThreadingOptions.__init__": "tf.data.Options.__init__", + "tf.compat.v1.data.experimental.ThreadingOptions.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.data.experimental.ThreadingOptions.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.data.experimental.ThreadingOptions.__ne__": "tf.data.Options.__ne__", + "tf.compat.v1.data.experimental.ThreadingOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.data.experimental.ThreadingOptions.max_intra_op_parallelism": "tf.data.experimental.ThreadingOptions.max_intra_op_parallelism", + "tf.compat.v1.data.experimental.ThreadingOptions.private_threadpool_size": "tf.data.experimental.ThreadingOptions.private_threadpool_size", + "tf.compat.v1.data.experimental.assert_cardinality": "tf.data.experimental.assert_cardinality", + "tf.compat.v1.data.experimental.bucket_by_sequence_length": "tf.data.experimental.bucket_by_sequence_length", + "tf.compat.v1.data.experimental.bytes_produced_stats": "tf.data.experimental.bytes_produced_stats", + "tf.compat.v1.data.experimental.cardinality": "tf.data.experimental.cardinality", + "tf.compat.v1.data.experimental.copy_to_device": "tf.data.experimental.copy_to_device", + "tf.compat.v1.data.experimental.dense_to_ragged_batch": "tf.data.experimental.dense_to_ragged_batch", + "tf.compat.v1.data.experimental.dense_to_sparse_batch": "tf.data.experimental.dense_to_sparse_batch", + "tf.compat.v1.data.experimental.enumerate_dataset": "tf.data.experimental.enumerate_dataset", + "tf.compat.v1.data.experimental.from_variant": "tf.data.experimental.from_variant", + 
"tf.compat.v1.data.experimental.get_next_as_optional": "tf.data.experimental.get_next_as_optional", + "tf.compat.v1.data.experimental.get_single_element": "tf.data.experimental.get_single_element", + "tf.compat.v1.data.experimental.get_structure": "tf.data.experimental.get_structure", + "tf.compat.v1.data.experimental.group_by_reducer": "tf.data.experimental.group_by_reducer", + "tf.compat.v1.data.experimental.group_by_window": "tf.data.experimental.group_by_window", + "tf.compat.v1.data.experimental.ignore_errors": "tf.data.experimental.ignore_errors", + "tf.compat.v1.data.experimental.latency_stats": "tf.data.experimental.latency_stats", + "tf.compat.v1.data.experimental.make_saveable_from_iterator": "tf.data.experimental.make_saveable_from_iterator", + "tf.compat.v1.data.experimental.map_and_batch": "tf.data.experimental.map_and_batch", + "tf.compat.v1.data.experimental.parallel_interleave": "tf.data.experimental.parallel_interleave", + "tf.compat.v1.data.experimental.parse_example_dataset": "tf.data.experimental.parse_example_dataset", + "tf.compat.v1.data.experimental.prefetch_to_device": "tf.data.experimental.prefetch_to_device", + "tf.compat.v1.data.experimental.rejection_resample": "tf.data.experimental.rejection_resample", + "tf.compat.v1.data.experimental.scan": "tf.data.experimental.scan", + "tf.compat.v1.data.experimental.shuffle_and_repeat": "tf.data.experimental.shuffle_and_repeat", + "tf.compat.v1.data.experimental.take_while": "tf.data.experimental.take_while", + "tf.compat.v1.data.experimental.to_variant": "tf.data.experimental.to_variant", + "tf.compat.v1.data.experimental.unbatch": "tf.data.experimental.unbatch", + "tf.compat.v1.data.experimental.unique": "tf.data.experimental.unique", + "tf.compat.v1.debugging.Assert": "tf.debugging.Assert", + "tf.compat.v1.debugging.assert_all_finite": "tf.compat.v1.verify_tensor_all_finite", + "tf.compat.v1.debugging.assert_equal": "tf.compat.v1.assert_equal", + "tf.compat.v1.debugging.assert_greater": "tf.compat.v1.assert_greater", + "tf.compat.v1.debugging.assert_greater_equal": "tf.compat.v1.assert_greater_equal", + "tf.compat.v1.debugging.assert_integer": "tf.compat.v1.assert_integer", + "tf.compat.v1.debugging.assert_less": "tf.compat.v1.assert_less", + "tf.compat.v1.debugging.assert_less_equal": "tf.compat.v1.assert_less_equal", + "tf.compat.v1.debugging.assert_near": "tf.compat.v1.assert_near", + "tf.compat.v1.debugging.assert_negative": "tf.compat.v1.assert_negative", + "tf.compat.v1.debugging.assert_non_negative": "tf.compat.v1.assert_non_negative", + "tf.compat.v1.debugging.assert_non_positive": "tf.compat.v1.assert_non_positive", + "tf.compat.v1.debugging.assert_none_equal": "tf.compat.v1.assert_none_equal", + "tf.compat.v1.debugging.assert_positive": "tf.compat.v1.assert_positive", + "tf.compat.v1.debugging.assert_proper_iterable": "tf.debugging.assert_proper_iterable", + "tf.compat.v1.debugging.assert_rank": "tf.compat.v1.assert_rank", + "tf.compat.v1.debugging.assert_rank_at_least": "tf.compat.v1.assert_rank_at_least", + "tf.compat.v1.debugging.assert_rank_in": "tf.compat.v1.assert_rank_in", + "tf.compat.v1.debugging.assert_same_float_dtype": "tf.debugging.assert_same_float_dtype", + "tf.compat.v1.debugging.assert_scalar": "tf.compat.v1.assert_scalar", + "tf.compat.v1.debugging.assert_type": "tf.compat.v1.assert_type", + "tf.compat.v1.debugging.check_numerics": "tf.debugging.check_numerics", + "tf.compat.v1.debugging.disable_check_numerics": "tf.debugging.disable_check_numerics", + 
"tf.compat.v1.debugging.enable_check_numerics": "tf.debugging.enable_check_numerics", + "tf.compat.v1.debugging.experimental.disable_dump_debug_info": "tf.debugging.experimental.disable_dump_debug_info", + "tf.compat.v1.debugging.experimental.enable_dump_debug_info": "tf.debugging.experimental.enable_dump_debug_info", + "tf.compat.v1.debugging.get_log_device_placement": "tf.debugging.get_log_device_placement", + "tf.compat.v1.debugging.is_finite": "tf.math.is_finite", + "tf.compat.v1.debugging.is_inf": "tf.math.is_inf", + "tf.compat.v1.debugging.is_nan": "tf.math.is_nan", + "tf.compat.v1.debugging.is_non_decreasing": "tf.math.is_non_decreasing", + "tf.compat.v1.debugging.is_numeric_tensor": "tf.debugging.is_numeric_tensor", + "tf.compat.v1.debugging.is_strictly_increasing": "tf.math.is_strictly_increasing", + "tf.compat.v1.debugging.set_log_device_placement": "tf.debugging.set_log_device_placement", + "tf.compat.v1.decode_base64": "tf.io.decode_base64", + "tf.compat.v1.decode_compressed": "tf.io.decode_compressed", + "tf.compat.v1.decode_json_example": "tf.io.decode_json_example", + "tf.compat.v1.dequantize": "tf.quantization.dequantize", + "tf.compat.v1.deserialize_many_sparse": "tf.io.deserialize_many_sparse", + "tf.compat.v1.diag": "tf.linalg.tensor_diag", + "tf.compat.v1.diag_part": "tf.linalg.tensor_diag_part", + "tf.compat.v1.digamma": "tf.math.digamma", + "tf.compat.v1.dimension_at_index": "tf.compat.dimension_at_index", + "tf.compat.v1.dimension_value": "tf.compat.dimension_value", + "tf.compat.v1.distribute.CrossDeviceOps": "tf.distribute.CrossDeviceOps", + "tf.compat.v1.distribute.CrossDeviceOps.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.CrossDeviceOps.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.CrossDeviceOps.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.CrossDeviceOps.__init__": "tf.distribute.CrossDeviceOps.__init__", + "tf.compat.v1.distribute.CrossDeviceOps.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.CrossDeviceOps.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.CrossDeviceOps.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.CrossDeviceOps.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.CrossDeviceOps.batch_reduce": "tf.distribute.CrossDeviceOps.batch_reduce", + "tf.compat.v1.distribute.CrossDeviceOps.batch_reduce_implementation": "tf.distribute.CrossDeviceOps.batch_reduce_implementation", + "tf.compat.v1.distribute.CrossDeviceOps.broadcast": "tf.distribute.CrossDeviceOps.broadcast", + "tf.compat.v1.distribute.CrossDeviceOps.broadcast_implementation": "tf.distribute.CrossDeviceOps.broadcast_implementation", + "tf.compat.v1.distribute.CrossDeviceOps.reduce": "tf.distribute.CrossDeviceOps.reduce", + "tf.compat.v1.distribute.CrossDeviceOps.reduce_implementation": "tf.distribute.CrossDeviceOps.reduce_implementation", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce": "tf.distribute.HierarchicalCopyAllReduce", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__init__": "tf.distribute.HierarchicalCopyAllReduce.__init__", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__lt__": "tf.keras.Model.__lt__", + 
"tf.compat.v1.distribute.HierarchicalCopyAllReduce.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.batch_reduce": "tf.distribute.CrossDeviceOps.batch_reduce", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.batch_reduce_implementation": "tf.distribute.HierarchicalCopyAllReduce.batch_reduce_implementation", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.broadcast": "tf.distribute.CrossDeviceOps.broadcast", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.broadcast_implementation": "tf.distribute.CrossDeviceOps.broadcast_implementation", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.reduce": "tf.distribute.CrossDeviceOps.reduce", + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.reduce_implementation": "tf.distribute.HierarchicalCopyAllReduce.reduce_implementation", + "tf.compat.v1.distribute.InputContext": "tf.distribute.InputContext", + "tf.compat.v1.distribute.InputContext.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.InputContext.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.InputContext.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.InputContext.__init__": "tf.distribute.InputContext.__init__", + "tf.compat.v1.distribute.InputContext.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.InputContext.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.InputContext.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.InputContext.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.InputContext.get_per_replica_batch_size": "tf.distribute.InputContext.get_per_replica_batch_size", + "tf.compat.v1.distribute.InputContext.input_pipeline_id": "tf.distribute.InputContext.input_pipeline_id", + "tf.compat.v1.distribute.InputContext.num_input_pipelines": "tf.distribute.InputContext.num_input_pipelines", + "tf.compat.v1.distribute.InputContext.num_replicas_in_sync": "tf.distribute.InputContext.num_replicas_in_sync", + "tf.compat.v1.distribute.InputReplicationMode": "tf.distribute.InputReplicationMode", + "tf.compat.v1.distribute.InputReplicationMode.PER_WORKER": "tf.distribute.InputReplicationMode.PER_WORKER", + "tf.compat.v1.distribute.InputReplicationMode.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.distribute.InputReplicationMode.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.distribute.MirroredStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.MirroredStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.MirroredStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.MirroredStrategy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.MirroredStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.MirroredStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.MirroredStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.MirroredStrategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.compat.v1.distribute.MirroredStrategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.compat.v1.distribute.MirroredStrategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + 
"tf.compat.v1.distribute.MirroredStrategy.experimental_make_numpy_dataset": "tf.compat.v1.distribute.Strategy.experimental_make_numpy_dataset", + "tf.compat.v1.distribute.MirroredStrategy.experimental_run": "tf.compat.v1.distribute.Strategy.experimental_run", + "tf.compat.v1.distribute.MirroredStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.compat.v1.distribute.MirroredStrategy.make_dataset_iterator": "tf.compat.v1.distribute.Strategy.make_dataset_iterator", + "tf.compat.v1.distribute.MirroredStrategy.make_input_fn_iterator": "tf.compat.v1.distribute.Strategy.make_input_fn_iterator", + "tf.compat.v1.distribute.MirroredStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.compat.v1.distribute.MirroredStrategy.reduce": "tf.compat.v1.distribute.Strategy.reduce", + "tf.compat.v1.distribute.MirroredStrategy.run": "tf.distribute.MirroredStrategy.run", + "tf.compat.v1.distribute.MirroredStrategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.compat.v1.distribute.MirroredStrategy.update_config_proto": "tf.compat.v1.distribute.Strategy.update_config_proto", + "tf.compat.v1.distribute.NcclAllReduce": "tf.distribute.NcclAllReduce", + "tf.compat.v1.distribute.NcclAllReduce.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.NcclAllReduce.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.NcclAllReduce.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.NcclAllReduce.__init__": "tf.distribute.NcclAllReduce.__init__", + "tf.compat.v1.distribute.NcclAllReduce.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.NcclAllReduce.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.NcclAllReduce.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.NcclAllReduce.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.NcclAllReduce.batch_reduce": "tf.distribute.CrossDeviceOps.batch_reduce", + "tf.compat.v1.distribute.NcclAllReduce.batch_reduce_implementation": "tf.distribute.HierarchicalCopyAllReduce.batch_reduce_implementation", + "tf.compat.v1.distribute.NcclAllReduce.broadcast": "tf.distribute.CrossDeviceOps.broadcast", + "tf.compat.v1.distribute.NcclAllReduce.broadcast_implementation": "tf.distribute.CrossDeviceOps.broadcast_implementation", + "tf.compat.v1.distribute.NcclAllReduce.reduce": "tf.distribute.CrossDeviceOps.reduce", + "tf.compat.v1.distribute.NcclAllReduce.reduce_implementation": "tf.distribute.HierarchicalCopyAllReduce.reduce_implementation", + "tf.compat.v1.distribute.OneDeviceStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.OneDeviceStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.OneDeviceStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.OneDeviceStrategy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.OneDeviceStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.OneDeviceStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.OneDeviceStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.OneDeviceStrategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.compat.v1.distribute.OneDeviceStrategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.compat.v1.distribute.OneDeviceStrategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + 
"tf.compat.v1.distribute.OneDeviceStrategy.experimental_make_numpy_dataset": "tf.compat.v1.distribute.Strategy.experimental_make_numpy_dataset", + "tf.compat.v1.distribute.OneDeviceStrategy.experimental_run": "tf.compat.v1.distribute.Strategy.experimental_run", + "tf.compat.v1.distribute.OneDeviceStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.compat.v1.distribute.OneDeviceStrategy.make_dataset_iterator": "tf.compat.v1.distribute.Strategy.make_dataset_iterator", + "tf.compat.v1.distribute.OneDeviceStrategy.make_input_fn_iterator": "tf.compat.v1.distribute.Strategy.make_input_fn_iterator", + "tf.compat.v1.distribute.OneDeviceStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.compat.v1.distribute.OneDeviceStrategy.reduce": "tf.compat.v1.distribute.Strategy.reduce", + "tf.compat.v1.distribute.OneDeviceStrategy.run": "tf.distribute.MirroredStrategy.run", + "tf.compat.v1.distribute.OneDeviceStrategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.compat.v1.distribute.OneDeviceStrategy.update_config_proto": "tf.compat.v1.distribute.Strategy.update_config_proto", + "tf.compat.v1.distribute.ReduceOp": "tf.distribute.ReduceOp", + "tf.compat.v1.distribute.ReduceOp.MEAN": "tf.distribute.ReduceOp.MEAN", + "tf.compat.v1.distribute.ReduceOp.SUM": "tf.distribute.ReduceOp.SUM", + "tf.compat.v1.distribute.ReduceOp.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.distribute.ReduceOp.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.distribute.ReductionToOneDevice": "tf.distribute.ReductionToOneDevice", + "tf.compat.v1.distribute.ReductionToOneDevice.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.ReductionToOneDevice.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.ReductionToOneDevice.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.ReductionToOneDevice.__init__": "tf.distribute.ReductionToOneDevice.__init__", + "tf.compat.v1.distribute.ReductionToOneDevice.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.ReductionToOneDevice.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.ReductionToOneDevice.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.ReductionToOneDevice.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.ReductionToOneDevice.batch_reduce": "tf.distribute.CrossDeviceOps.batch_reduce", + "tf.compat.v1.distribute.ReductionToOneDevice.batch_reduce_implementation": "tf.distribute.ReductionToOneDevice.batch_reduce_implementation", + "tf.compat.v1.distribute.ReductionToOneDevice.broadcast": "tf.distribute.CrossDeviceOps.broadcast", + "tf.compat.v1.distribute.ReductionToOneDevice.broadcast_implementation": "tf.distribute.CrossDeviceOps.broadcast_implementation", + "tf.compat.v1.distribute.ReductionToOneDevice.reduce": "tf.distribute.CrossDeviceOps.reduce", + "tf.compat.v1.distribute.ReductionToOneDevice.reduce_implementation": "tf.distribute.ReductionToOneDevice.reduce_implementation", + "tf.compat.v1.distribute.ReplicaContext": "tf.distribute.ReplicaContext", + "tf.compat.v1.distribute.ReplicaContext.__enter__": "tf.distribute.ReplicaContext.__enter__", + "tf.compat.v1.distribute.ReplicaContext.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.ReplicaContext.__exit__": "tf.distribute.ReplicaContext.__exit__", + "tf.compat.v1.distribute.ReplicaContext.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.ReplicaContext.__gt__": "tf.keras.Model.__gt__", + 
"tf.compat.v1.distribute.ReplicaContext.__init__": "tf.distribute.ReplicaContext.__init__", + "tf.compat.v1.distribute.ReplicaContext.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.ReplicaContext.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.ReplicaContext.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.ReplicaContext.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.ReplicaContext.all_reduce": "tf.distribute.ReplicaContext.all_reduce", + "tf.compat.v1.distribute.ReplicaContext.devices": "tf.distribute.ReplicaContext.devices", + "tf.compat.v1.distribute.ReplicaContext.merge_call": "tf.distribute.ReplicaContext.merge_call", + "tf.compat.v1.distribute.ReplicaContext.num_replicas_in_sync": "tf.distribute.ReplicaContext.num_replicas_in_sync", + "tf.compat.v1.distribute.ReplicaContext.replica_id_in_sync_group": "tf.distribute.ReplicaContext.replica_id_in_sync_group", + "tf.compat.v1.distribute.ReplicaContext.strategy": "tf.distribute.ReplicaContext.strategy", + "tf.compat.v1.distribute.RunOptions": "tf.distribute.RunOptions", + "tf.compat.v1.distribute.RunOptions.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.distribute.RunOptions.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.distribute.RunOptions.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.distribute.RunOptions.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.distribute.RunOptions.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.distribute.RunOptions.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.distribute.RunOptions.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.distribute.RunOptions.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.distribute.RunOptions.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.distribute.RunOptions.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.distribute.RunOptions.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.distribute.RunOptions.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.distribute.RunOptions.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.distribute.RunOptions.__new__": "tf.distribute.RunOptions.__new__", + "tf.compat.v1.distribute.RunOptions.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.distribute.RunOptions.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.distribute.RunOptions.experimental_bucketizing_dynamic_shape": "tf.distribute.RunOptions.experimental_bucketizing_dynamic_shape", + "tf.compat.v1.distribute.RunOptions.experimental_enable_dynamic_batch_size": "tf.distribute.RunOptions.experimental_enable_dynamic_batch_size", + "tf.compat.v1.distribute.RunOptions.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.distribute.Server": "tf.distribute.Server", + "tf.compat.v1.distribute.Server.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.Server.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.Server.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.Server.__init__": "tf.distribute.Server.__init__", + "tf.compat.v1.distribute.Server.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.Server.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.Server.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.Server.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.Server.create_local_server": "tf.distribute.Server.create_local_server", 
+ "tf.compat.v1.distribute.Server.join": "tf.distribute.Server.join", + "tf.compat.v1.distribute.Server.server_def": "tf.distribute.Server.server_def", + "tf.compat.v1.distribute.Server.start": "tf.distribute.Server.start", + "tf.compat.v1.distribute.Server.target": "tf.distribute.Server.target", + "tf.compat.v1.distribute.Strategy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.Strategy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.Strategy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.Strategy.__init__": "tf.distribute.Strategy.__init__", + "tf.compat.v1.distribute.Strategy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.Strategy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.Strategy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.Strategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.Strategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.compat.v1.distribute.Strategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.compat.v1.distribute.Strategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + "tf.compat.v1.distribute.Strategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.compat.v1.distribute.Strategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.compat.v1.distribute.Strategy.run": "tf.distribute.MirroredStrategy.run", + "tf.compat.v1.distribute.Strategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.compat.v1.distribute.StrategyExtended.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.StrategyExtended.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.StrategyExtended.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.StrategyExtended.__init__": "tf.distribute.StrategyExtended.__init__", + "tf.compat.v1.distribute.StrategyExtended.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.StrategyExtended.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.StrategyExtended.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.StrategyExtended.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.StrategyExtended.batch_reduce_to": "tf.distribute.StrategyExtended.batch_reduce_to", + "tf.compat.v1.distribute.StrategyExtended.colocate_vars_with": "tf.distribute.StrategyExtended.colocate_vars_with", + "tf.compat.v1.distribute.StrategyExtended.experimental_require_static_shapes": "tf.distribute.StrategyExtended.experimental_require_static_shapes", + "tf.compat.v1.distribute.StrategyExtended.non_slot_devices": "tf.distribute.StrategyExtended.non_slot_devices", + "tf.compat.v1.distribute.StrategyExtended.parameter_devices": "tf.distribute.StrategyExtended.parameter_devices", + "tf.compat.v1.distribute.StrategyExtended.reduce_to": "tf.distribute.StrategyExtended.reduce_to", + "tf.compat.v1.distribute.StrategyExtended.update": "tf.distribute.StrategyExtended.update", + "tf.compat.v1.distribute.StrategyExtended.update_non_slot": "tf.distribute.StrategyExtended.update_non_slot", + "tf.compat.v1.distribute.StrategyExtended.value_container": "tf.distribute.StrategyExtended.value_container", + "tf.compat.v1.distribute.StrategyExtended.variable_created_in_scope": "tf.distribute.StrategyExtended.variable_created_in_scope", + 
"tf.compat.v1.distribute.StrategyExtended.worker_devices": "tf.distribute.StrategyExtended.worker_devices", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver": "tf.distribute.cluster_resolver.ClusterResolver", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.cluster_spec": "tf.distribute.cluster_resolver.ClusterResolver.cluster_spec", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.environment": "tf.distribute.cluster_resolver.ClusterResolver.environment", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.master": "tf.distribute.cluster_resolver.ClusterResolver.master", + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.num_accelerators": "tf.distribute.cluster_resolver.ClusterResolver.num_accelerators", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver": "tf.distribute.cluster_resolver.GCEClusterResolver", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__init__": "tf.distribute.cluster_resolver.GCEClusterResolver.__init__", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.cluster_spec": "tf.distribute.cluster_resolver.GCEClusterResolver.cluster_spec", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.environment": "tf.distribute.cluster_resolver.ClusterResolver.environment", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.master": "tf.distribute.cluster_resolver.GCEClusterResolver.master", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.num_accelerators": "tf.distribute.cluster_resolver.ClusterResolver.num_accelerators", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.rpc_layer": "tf.distribute.cluster_resolver.GCEClusterResolver.rpc_layer", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.task_id": "tf.distribute.cluster_resolver.GCEClusterResolver.task_id", + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.task_type": "tf.distribute.cluster_resolver.GCEClusterResolver.task_type", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver": 
"tf.distribute.cluster_resolver.KubernetesClusterResolver", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__init__": "tf.distribute.cluster_resolver.KubernetesClusterResolver.__init__", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.cluster_spec": "tf.distribute.cluster_resolver.KubernetesClusterResolver.cluster_spec", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.environment": "tf.distribute.cluster_resolver.ClusterResolver.environment", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.master": "tf.distribute.cluster_resolver.KubernetesClusterResolver.master", + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.num_accelerators": "tf.distribute.cluster_resolver.ClusterResolver.num_accelerators", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver": "tf.distribute.cluster_resolver.SimpleClusterResolver", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__init__": "tf.distribute.cluster_resolver.SimpleClusterResolver.__init__", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.cluster_spec": "tf.distribute.cluster_resolver.SimpleClusterResolver.cluster_spec", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.environment": "tf.distribute.cluster_resolver.SimpleClusterResolver.environment", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.master": "tf.distribute.cluster_resolver.SimpleClusterResolver.master", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.num_accelerators": "tf.distribute.cluster_resolver.SimpleClusterResolver.num_accelerators", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.rpc_layer": "tf.distribute.cluster_resolver.SimpleClusterResolver.rpc_layer", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.task_id": "tf.distribute.cluster_resolver.SimpleClusterResolver.task_id", + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.task_type": 
"tf.distribute.cluster_resolver.SimpleClusterResolver.task_type", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver": "tf.distribute.cluster_resolver.SlurmClusterResolver", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__init__": "tf.distribute.cluster_resolver.SlurmClusterResolver.__init__", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.cluster_spec": "tf.distribute.cluster_resolver.SlurmClusterResolver.cluster_spec", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.environment": "tf.distribute.cluster_resolver.ClusterResolver.environment", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.get_task_info": "tf.distribute.cluster_resolver.SlurmClusterResolver.get_task_info", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.master": "tf.distribute.cluster_resolver.SlurmClusterResolver.master", + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.num_accelerators": "tf.distribute.cluster_resolver.SlurmClusterResolver.num_accelerators", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver": "tf.distribute.cluster_resolver.TFConfigClusterResolver", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__init__": "tf.distribute.cluster_resolver.TFConfigClusterResolver.__init__", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.cluster_spec": "tf.distribute.cluster_resolver.TFConfigClusterResolver.cluster_spec", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.environment": "tf.distribute.cluster_resolver.TFConfigClusterResolver.environment", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.master": "tf.distribute.cluster_resolver.TFConfigClusterResolver.master", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.num_accelerators": "tf.distribute.cluster_resolver.TFConfigClusterResolver.num_accelerators", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.rpc_layer": "tf.distribute.cluster_resolver.TFConfigClusterResolver.rpc_layer", + 
"tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.task_id": "tf.distribute.cluster_resolver.TFConfigClusterResolver.task_id", + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.task_type": "tf.distribute.cluster_resolver.TFConfigClusterResolver.task_type", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver": "tf.distribute.cluster_resolver.TPUClusterResolver", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__enter__": "tf.distribute.cluster_resolver.TPUClusterResolver.__enter__", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__exit__": "tf.distribute.cluster_resolver.TPUClusterResolver.__exit__", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__init__": "tf.distribute.cluster_resolver.TPUClusterResolver.__init__", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.cluster_spec": "tf.distribute.cluster_resolver.TPUClusterResolver.cluster_spec", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.environment": "tf.distribute.cluster_resolver.TPUClusterResolver.environment", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.get_job_name": "tf.distribute.cluster_resolver.TPUClusterResolver.get_job_name", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.get_master": "tf.distribute.cluster_resolver.TPUClusterResolver.get_master", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.master": "tf.distribute.cluster_resolver.TPUClusterResolver.master", + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.num_accelerators": "tf.distribute.cluster_resolver.TPUClusterResolver.num_accelerators", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver": "tf.distribute.cluster_resolver.UnionResolver", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__init__": "tf.distribute.cluster_resolver.UnionResolver.__init__", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.cluster_spec": "tf.distribute.cluster_resolver.UnionResolver.cluster_spec", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.environment": "tf.distribute.cluster_resolver.UnionResolver.environment", + 
"tf.compat.v1.distribute.cluster_resolver.UnionResolver.master": "tf.distribute.cluster_resolver.UnionResolver.master", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.num_accelerators": "tf.distribute.cluster_resolver.UnionResolver.num_accelerators", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.rpc_layer": "tf.distribute.cluster_resolver.UnionResolver.rpc_layer", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.task_id": "tf.distribute.cluster_resolver.UnionResolver.task_id", + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.task_type": "tf.distribute.cluster_resolver.UnionResolver.task_type", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.experimental_make_numpy_dataset": "tf.compat.v1.distribute.Strategy.experimental_make_numpy_dataset", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.experimental_run": "tf.compat.v1.distribute.Strategy.experimental_run", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.make_dataset_iterator": "tf.compat.v1.distribute.Strategy.make_dataset_iterator", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.make_input_fn_iterator": "tf.compat.v1.distribute.Strategy.make_input_fn_iterator", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.reduce": "tf.compat.v1.distribute.Strategy.reduce", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.run": "tf.distribute.MirroredStrategy.run", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.update_config_proto": "tf.compat.v1.distribute.Strategy.update_config_proto", + "tf.compat.v1.distribute.experimental.CollectiveCommunication": "tf.distribute.experimental.CollectiveCommunication", + "tf.compat.v1.distribute.experimental.CollectiveCommunication.AUTO": "tf.distribute.experimental.CollectiveCommunication.AUTO", + "tf.compat.v1.distribute.experimental.CollectiveCommunication.NCCL": "tf.distribute.experimental.CollectiveCommunication.NCCL", + 
"tf.compat.v1.distribute.experimental.CollectiveCommunication.RING": "tf.distribute.experimental.CollectiveCommunication.RING", + "tf.compat.v1.distribute.experimental.CollectiveCommunication.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.distribute.experimental.CollectiveCommunication.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.distribute.experimental.CollectiveHints": "tf.distribute.experimental.CollectiveHints", + "tf.compat.v1.distribute.experimental.CollectiveHints.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.experimental.CollectiveHints.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.experimental.CollectiveHints.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.experimental.CollectiveHints.__init__": "tf.distribute.experimental.CollectiveHints.__init__", + "tf.compat.v1.distribute.experimental.CollectiveHints.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.experimental.CollectiveHints.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.experimental.CollectiveHints.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.experimental.CollectiveHints.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.experimental_make_numpy_dataset": "tf.compat.v1.distribute.Strategy.experimental_make_numpy_dataset", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.experimental_run": "tf.compat.v1.distribute.Strategy.experimental_run", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.make_dataset_iterator": "tf.compat.v1.distribute.Strategy.make_dataset_iterator", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.make_input_fn_iterator": "tf.compat.v1.distribute.Strategy.make_input_fn_iterator", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.reduce": "tf.compat.v1.distribute.Strategy.reduce", + 
"tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.run": "tf.distribute.MirroredStrategy.run", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.update_config_proto": "tf.compat.v1.distribute.Strategy.update_config_proto", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.experimental_make_numpy_dataset": "tf.compat.v1.distribute.Strategy.experimental_make_numpy_dataset", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.experimental_run": "tf.compat.v1.distribute.Strategy.experimental_run", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.make_dataset_iterator": "tf.compat.v1.distribute.Strategy.make_dataset_iterator", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.make_input_fn_iterator": "tf.compat.v1.distribute.Strategy.make_input_fn_iterator", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.reduce": "tf.compat.v1.distribute.Strategy.reduce", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.run": "tf.distribute.MirroredStrategy.run", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.update_config_proto": "tf.compat.v1.distribute.Strategy.update_config_proto", + "tf.compat.v1.distribute.experimental.TPUStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distribute.experimental.TPUStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distribute.experimental.TPUStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distribute.experimental.TPUStrategy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distribute.experimental.TPUStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distribute.experimental.TPUStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distribute.experimental.TPUStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.compat.v1.distribute.experimental.TPUStrategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.compat.v1.distribute.experimental.TPUStrategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.compat.v1.distribute.experimental.TPUStrategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + "tf.compat.v1.distribute.experimental.TPUStrategy.experimental_make_numpy_dataset": "tf.compat.v1.distribute.Strategy.experimental_make_numpy_dataset", + "tf.compat.v1.distribute.experimental.TPUStrategy.experimental_run": "tf.compat.v1.distribute.Strategy.experimental_run", + "tf.compat.v1.distribute.experimental.TPUStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.compat.v1.distribute.experimental.TPUStrategy.make_dataset_iterator": "tf.compat.v1.distribute.Strategy.make_dataset_iterator", + "tf.compat.v1.distribute.experimental.TPUStrategy.make_input_fn_iterator": "tf.compat.v1.distribute.Strategy.make_input_fn_iterator", + "tf.compat.v1.distribute.experimental.TPUStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.compat.v1.distribute.experimental.TPUStrategy.reduce": "tf.compat.v1.distribute.Strategy.reduce", + "tf.compat.v1.distribute.experimental.TPUStrategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.compat.v1.distribute.experimental.TPUStrategy.update_config_proto": "tf.compat.v1.distribute.Strategy.update_config_proto", + "tf.compat.v1.distribute.experimental_set_strategy": "tf.distribute.experimental_set_strategy", + "tf.compat.v1.distribute.get_replica_context": "tf.distribute.get_replica_context", + "tf.compat.v1.distribute.get_strategy": "tf.distribute.get_strategy", + "tf.compat.v1.distribute.has_strategy": "tf.distribute.has_strategy", + "tf.compat.v1.distribute.in_cross_replica_context": "tf.distribute.in_cross_replica_context", + "tf.compat.v1.distributions.Bernoulli.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Bernoulli.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Bernoulli.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Bernoulli.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Bernoulli.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Bernoulli.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Bernoulli.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Bernoulli.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.Bernoulli.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.Bernoulli.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.Bernoulli.cdf": "tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.Bernoulli.copy": "tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.Bernoulli.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.Bernoulli.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.Bernoulli.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.Bernoulli.entropy": "tf.compat.v1.distributions.Distribution.entropy", + 
"tf.compat.v1.distributions.Bernoulli.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.Bernoulli.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.Bernoulli.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.Bernoulli.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.Bernoulli.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.Bernoulli.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.Bernoulli.log_prob": "tf.compat.v1.distributions.Distribution.log_prob", + "tf.compat.v1.distributions.Bernoulli.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.Bernoulli.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.Bernoulli.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.Bernoulli.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.Bernoulli.prob": "tf.compat.v1.distributions.Distribution.prob", + "tf.compat.v1.distributions.Bernoulli.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.Bernoulli.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.Bernoulli.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.Bernoulli.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.Bernoulli.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.Bernoulli.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Bernoulli.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.distributions.Beta.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Beta.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Beta.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Beta.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Beta.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Beta.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Beta.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Beta.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.Beta.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.Beta.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.Beta.copy": "tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.Beta.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.Beta.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.Beta.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.Beta.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.Beta.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.Beta.event_shape_tensor": 
"tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.Beta.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.Beta.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.Beta.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.Beta.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.Beta.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.Beta.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.Beta.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.Beta.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.Beta.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.Beta.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.Beta.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.Beta.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.Beta.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Beta.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.distributions.Categorical.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Categorical.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Categorical.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Categorical.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Categorical.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Categorical.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Categorical.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Categorical.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.Categorical.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.Categorical.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.Categorical.cdf": "tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.Categorical.copy": "tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.Categorical.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.Categorical.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.Categorical.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.Categorical.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.Categorical.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.Categorical.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.Categorical.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.Categorical.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + 
"tf.compat.v1.distributions.Categorical.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.Categorical.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.Categorical.log_prob": "tf.compat.v1.distributions.Distribution.log_prob", + "tf.compat.v1.distributions.Categorical.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.Categorical.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.Categorical.mode": "tf.compat.v1.distributions.Distribution.mode", + "tf.compat.v1.distributions.Categorical.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.Categorical.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.Categorical.prob": "tf.compat.v1.distributions.Distribution.prob", + "tf.compat.v1.distributions.Categorical.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.Categorical.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.Categorical.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.Categorical.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.Categorical.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.Categorical.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Categorical.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.distributions.Dirichlet.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Dirichlet.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Dirichlet.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Dirichlet.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Dirichlet.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Dirichlet.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Dirichlet.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Dirichlet.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.Dirichlet.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.Dirichlet.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.Dirichlet.cdf": "tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.Dirichlet.copy": "tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.Dirichlet.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.Dirichlet.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.Dirichlet.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.Dirichlet.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.Dirichlet.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.Dirichlet.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.Dirichlet.is_scalar_batch": 
"tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.Dirichlet.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.Dirichlet.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.Dirichlet.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.Dirichlet.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.Dirichlet.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.Dirichlet.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.Dirichlet.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.Dirichlet.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.Dirichlet.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.Dirichlet.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.Dirichlet.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.Dirichlet.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.Dirichlet.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Dirichlet.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.distributions.DirichletMultinomial.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.DirichletMultinomial.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.DirichletMultinomial.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.DirichletMultinomial.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.DirichletMultinomial.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.DirichletMultinomial.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.DirichletMultinomial.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.DirichletMultinomial.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.DirichletMultinomial.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.DirichletMultinomial.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.DirichletMultinomial.cdf": "tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.DirichletMultinomial.copy": "tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.DirichletMultinomial.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.DirichletMultinomial.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.DirichletMultinomial.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.DirichletMultinomial.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.DirichletMultinomial.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.DirichletMultinomial.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + 
"tf.compat.v1.distributions.DirichletMultinomial.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.DirichletMultinomial.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.DirichletMultinomial.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.DirichletMultinomial.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.DirichletMultinomial.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.DirichletMultinomial.mode": "tf.compat.v1.distributions.Distribution.mode", + "tf.compat.v1.distributions.DirichletMultinomial.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.DirichletMultinomial.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.DirichletMultinomial.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.DirichletMultinomial.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.DirichletMultinomial.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.DirichletMultinomial.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.DirichletMultinomial.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.DirichletMultinomial.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.DirichletMultinomial.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.distributions.Distribution.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Distribution.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Distribution.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Distribution.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Distribution.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Distribution.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Distribution.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Exponential.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Exponential.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Exponential.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Exponential.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Exponential.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Exponential.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Exponential.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Exponential.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.Exponential.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.Exponential.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.Exponential.cdf": "tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.Exponential.concentration": "tf.compat.v1.distributions.Gamma.concentration", + "tf.compat.v1.distributions.Exponential.copy": "tf.compat.v1.distributions.Distribution.copy", + 
"tf.compat.v1.distributions.Exponential.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.Exponential.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.Exponential.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.Exponential.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.Exponential.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.Exponential.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.Exponential.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.Exponential.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.Exponential.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.Exponential.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.Exponential.log_prob": "tf.compat.v1.distributions.Distribution.log_prob", + "tf.compat.v1.distributions.Exponential.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.Exponential.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.Exponential.mode": "tf.compat.v1.distributions.Gamma.mode", + "tf.compat.v1.distributions.Exponential.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.Exponential.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.Exponential.prob": "tf.compat.v1.distributions.Distribution.prob", + "tf.compat.v1.distributions.Exponential.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.Exponential.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.Exponential.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.Exponential.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.Exponential.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.Exponential.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Exponential.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.distributions.Gamma.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Gamma.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Gamma.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Gamma.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Gamma.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Gamma.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Gamma.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Gamma.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.Gamma.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.Gamma.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.Gamma.cdf": 
"tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.Gamma.copy": "tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.Gamma.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.Gamma.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.Gamma.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.Gamma.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.Gamma.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.Gamma.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.Gamma.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.Gamma.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.Gamma.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.Gamma.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.Gamma.log_prob": "tf.compat.v1.distributions.Distribution.log_prob", + "tf.compat.v1.distributions.Gamma.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.Gamma.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.Gamma.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.Gamma.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.Gamma.prob": "tf.compat.v1.distributions.Distribution.prob", + "tf.compat.v1.distributions.Gamma.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.Gamma.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.Gamma.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.Gamma.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.Gamma.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.Gamma.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Gamma.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.distributions.Laplace.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Laplace.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Laplace.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Laplace.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Laplace.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Laplace.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Laplace.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Laplace.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.Laplace.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.Laplace.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.Laplace.cdf": "tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.Laplace.copy": 
"tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.Laplace.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.Laplace.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.Laplace.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.Laplace.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.Laplace.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.Laplace.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.Laplace.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.Laplace.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.Laplace.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.Laplace.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.Laplace.log_prob": "tf.compat.v1.distributions.Distribution.log_prob", + "tf.compat.v1.distributions.Laplace.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.Laplace.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.Laplace.mode": "tf.compat.v1.distributions.Distribution.mode", + "tf.compat.v1.distributions.Laplace.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.Laplace.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.Laplace.prob": "tf.compat.v1.distributions.Distribution.prob", + "tf.compat.v1.distributions.Laplace.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.Laplace.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.Laplace.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.Laplace.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.Laplace.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.Laplace.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Laplace.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.distributions.Multinomial.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Multinomial.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Multinomial.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Multinomial.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Multinomial.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Multinomial.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Multinomial.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Multinomial.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.Multinomial.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.Multinomial.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.Multinomial.cdf": 
"tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.Multinomial.copy": "tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.Multinomial.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.Multinomial.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.Multinomial.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.Multinomial.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.Multinomial.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.Multinomial.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.Multinomial.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.Multinomial.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.Multinomial.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.Multinomial.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.Multinomial.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.Multinomial.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.Multinomial.mode": "tf.compat.v1.distributions.Distribution.mode", + "tf.compat.v1.distributions.Multinomial.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.Multinomial.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.Multinomial.prob": "tf.compat.v1.distributions.Distribution.prob", + "tf.compat.v1.distributions.Multinomial.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.Multinomial.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.Multinomial.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.Multinomial.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.Multinomial.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.Multinomial.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Multinomial.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.distributions.Normal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Normal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Normal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Normal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Normal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Normal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Normal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Normal.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.Normal.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.Normal.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + 
"tf.compat.v1.distributions.Normal.cdf": "tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.Normal.copy": "tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.Normal.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.Normal.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.Normal.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.Normal.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.Normal.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.Normal.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.Normal.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.Normal.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.Normal.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.Normal.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.Normal.log_prob": "tf.compat.v1.distributions.Distribution.log_prob", + "tf.compat.v1.distributions.Normal.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.Normal.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.Normal.mode": "tf.compat.v1.distributions.Distribution.mode", + "tf.compat.v1.distributions.Normal.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.Normal.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.Normal.prob": "tf.compat.v1.distributions.Distribution.prob", + "tf.compat.v1.distributions.Normal.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.Normal.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.Normal.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.Normal.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.Normal.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.Normal.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Normal.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.distributions.RegisterKL.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.RegisterKL.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.RegisterKL.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.RegisterKL.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.RegisterKL.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.RegisterKL.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.RegisterKL.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.ReparameterizationType.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.ReparameterizationType.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.ReparameterizationType.__le__": "tf.keras.Model.__le__", + 
"tf.compat.v1.distributions.ReparameterizationType.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.ReparameterizationType.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.ReparameterizationType.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.StudentT.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.StudentT.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.StudentT.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.StudentT.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.StudentT.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.StudentT.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.StudentT.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.StudentT.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.StudentT.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.StudentT.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.StudentT.cdf": "tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.StudentT.copy": "tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.StudentT.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.StudentT.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.StudentT.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.StudentT.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.StudentT.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.StudentT.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.StudentT.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.StudentT.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.StudentT.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.StudentT.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.StudentT.log_prob": "tf.compat.v1.distributions.Distribution.log_prob", + "tf.compat.v1.distributions.StudentT.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.StudentT.mode": "tf.compat.v1.distributions.Distribution.mode", + "tf.compat.v1.distributions.StudentT.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.StudentT.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.StudentT.prob": "tf.compat.v1.distributions.Distribution.prob", + "tf.compat.v1.distributions.StudentT.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.StudentT.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.StudentT.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.StudentT.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.StudentT.survival_function": 
"tf.compat.v1.distributions.Distribution.survival_function", + "tf.compat.v1.distributions.StudentT.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Uniform.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.distributions.Uniform.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.distributions.Uniform.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.distributions.Uniform.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.distributions.Uniform.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.distributions.Uniform.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.distributions.Uniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.distributions.Uniform.allow_nan_stats": "tf.compat.v1.distributions.Distribution.allow_nan_stats", + "tf.compat.v1.distributions.Uniform.batch_shape": "tf.compat.v1.distributions.Distribution.batch_shape", + "tf.compat.v1.distributions.Uniform.batch_shape_tensor": "tf.compat.v1.distributions.Distribution.batch_shape_tensor", + "tf.compat.v1.distributions.Uniform.cdf": "tf.compat.v1.distributions.Distribution.cdf", + "tf.compat.v1.distributions.Uniform.copy": "tf.compat.v1.distributions.Distribution.copy", + "tf.compat.v1.distributions.Uniform.covariance": "tf.compat.v1.distributions.Distribution.covariance", + "tf.compat.v1.distributions.Uniform.cross_entropy": "tf.compat.v1.distributions.Distribution.cross_entropy", + "tf.compat.v1.distributions.Uniform.dtype": "tf.compat.v1.distributions.Distribution.dtype", + "tf.compat.v1.distributions.Uniform.entropy": "tf.compat.v1.distributions.Distribution.entropy", + "tf.compat.v1.distributions.Uniform.event_shape": "tf.compat.v1.distributions.Distribution.event_shape", + "tf.compat.v1.distributions.Uniform.event_shape_tensor": "tf.compat.v1.distributions.Distribution.event_shape_tensor", + "tf.compat.v1.distributions.Uniform.is_scalar_batch": "tf.compat.v1.distributions.Distribution.is_scalar_batch", + "tf.compat.v1.distributions.Uniform.is_scalar_event": "tf.compat.v1.distributions.Distribution.is_scalar_event", + "tf.compat.v1.distributions.Uniform.kl_divergence": "tf.compat.v1.distributions.Distribution.kl_divergence", + "tf.compat.v1.distributions.Uniform.log_cdf": "tf.compat.v1.distributions.Distribution.log_cdf", + "tf.compat.v1.distributions.Uniform.log_prob": "tf.compat.v1.distributions.Distribution.log_prob", + "tf.compat.v1.distributions.Uniform.log_survival_function": "tf.compat.v1.distributions.Distribution.log_survival_function", + "tf.compat.v1.distributions.Uniform.mean": "tf.compat.v1.distributions.Distribution.mean", + "tf.compat.v1.distributions.Uniform.mode": "tf.compat.v1.distributions.Distribution.mode", + "tf.compat.v1.distributions.Uniform.name": "tf.compat.v1.distributions.Distribution.name", + "tf.compat.v1.distributions.Uniform.parameters": "tf.compat.v1.distributions.Distribution.parameters", + "tf.compat.v1.distributions.Uniform.prob": "tf.compat.v1.distributions.Distribution.prob", + "tf.compat.v1.distributions.Uniform.quantile": "tf.compat.v1.distributions.Distribution.quantile", + "tf.compat.v1.distributions.Uniform.reparameterization_type": "tf.compat.v1.distributions.Distribution.reparameterization_type", + "tf.compat.v1.distributions.Uniform.sample": "tf.compat.v1.distributions.Distribution.sample", + "tf.compat.v1.distributions.Uniform.stddev": "tf.compat.v1.distributions.Distribution.stddev", + "tf.compat.v1.distributions.Uniform.survival_function": "tf.compat.v1.distributions.Distribution.survival_function", + 
"tf.compat.v1.distributions.Uniform.validate_args": "tf.compat.v1.distributions.Distribution.validate_args", + "tf.compat.v1.distributions.Uniform.variance": "tf.compat.v1.distributions.Distribution.variance", + "tf.compat.v1.div": "tf.RaggedTensor.__div__", + "tf.compat.v1.div_no_nan": "tf.math.divide_no_nan", + "tf.compat.v1.divide": "tf.math.divide", + "tf.compat.v1.double": "tf.dtypes.double", + "tf.compat.v1.dtypes.DType": "tf.dtypes.DType", + "tf.compat.v1.dtypes.DType.__eq__": "tf.dtypes.DType.__eq__", + "tf.compat.v1.dtypes.DType.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.dtypes.DType.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.dtypes.DType.__init__": "tf.dtypes.DType.__init__", + "tf.compat.v1.dtypes.DType.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.dtypes.DType.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.dtypes.DType.__ne__": "tf.dtypes.DType.__ne__", + "tf.compat.v1.dtypes.DType.__new__": "tf.dtypes.DType.__new__", + "tf.compat.v1.dtypes.DType.as_datatype_enum": "tf.dtypes.DType.as_datatype_enum", + "tf.compat.v1.dtypes.DType.as_numpy_dtype": "tf.dtypes.DType.as_numpy_dtype", + "tf.compat.v1.dtypes.DType.base_dtype": "tf.dtypes.DType.base_dtype", + "tf.compat.v1.dtypes.DType.is_bool": "tf.dtypes.DType.is_bool", + "tf.compat.v1.dtypes.DType.is_compatible_with": "tf.dtypes.DType.is_compatible_with", + "tf.compat.v1.dtypes.DType.is_complex": "tf.dtypes.DType.is_complex", + "tf.compat.v1.dtypes.DType.is_floating": "tf.dtypes.DType.is_floating", + "tf.compat.v1.dtypes.DType.is_integer": "tf.dtypes.DType.is_integer", + "tf.compat.v1.dtypes.DType.is_numpy_compatible": "tf.dtypes.DType.is_numpy_compatible", + "tf.compat.v1.dtypes.DType.is_quantized": "tf.dtypes.DType.is_quantized", + "tf.compat.v1.dtypes.DType.is_unsigned": "tf.dtypes.DType.is_unsigned", + "tf.compat.v1.dtypes.DType.limits": "tf.dtypes.DType.limits", + "tf.compat.v1.dtypes.DType.max": "tf.dtypes.DType.max", + "tf.compat.v1.dtypes.DType.min": "tf.dtypes.DType.min", + "tf.compat.v1.dtypes.DType.name": "tf.dtypes.DType.name", + "tf.compat.v1.dtypes.DType.real_dtype": "tf.dtypes.DType.real_dtype", + "tf.compat.v1.dtypes.DType.size": "tf.dtypes.DType.size", + "tf.compat.v1.dtypes.QUANTIZED_DTYPES": "tf.dtypes.QUANTIZED_DTYPES", + "tf.compat.v1.dtypes.as_dtype": "tf.dtypes.as_dtype", + "tf.compat.v1.dtypes.as_string": "tf.strings.as_string", + "tf.compat.v1.dtypes.bfloat16": "tf.dtypes.bfloat16", + "tf.compat.v1.dtypes.bool": "tf.dtypes.bool", + "tf.compat.v1.dtypes.cast": "tf.cast", + "tf.compat.v1.dtypes.complex": "tf.dtypes.complex", + "tf.compat.v1.dtypes.complex128": "tf.dtypes.complex128", + "tf.compat.v1.dtypes.complex64": "tf.dtypes.complex64", + "tf.compat.v1.dtypes.double": "tf.dtypes.double", + "tf.compat.v1.dtypes.float16": "tf.dtypes.float16", + "tf.compat.v1.dtypes.float32": "tf.dtypes.float32", + "tf.compat.v1.dtypes.float64": "tf.dtypes.double", + "tf.compat.v1.dtypes.half": "tf.dtypes.float16", + "tf.compat.v1.dtypes.int16": "tf.dtypes.int16", + "tf.compat.v1.dtypes.int32": "tf.dtypes.int32", + "tf.compat.v1.dtypes.int64": "tf.dtypes.int64", + "tf.compat.v1.dtypes.int8": "tf.dtypes.int8", + "tf.compat.v1.dtypes.qint16": "tf.dtypes.qint16", + "tf.compat.v1.dtypes.qint32": "tf.dtypes.qint32", + "tf.compat.v1.dtypes.qint8": "tf.dtypes.qint8", + "tf.compat.v1.dtypes.quint16": "tf.dtypes.quint16", + "tf.compat.v1.dtypes.quint8": "tf.dtypes.quint8", + "tf.compat.v1.dtypes.resource": "tf.dtypes.resource", + "tf.compat.v1.dtypes.saturate_cast": "tf.dtypes.saturate_cast", + 
"tf.compat.v1.dtypes.string": "tf.dtypes.string", + "tf.compat.v1.dtypes.uint16": "tf.dtypes.uint16", + "tf.compat.v1.dtypes.uint32": "tf.dtypes.uint32", + "tf.compat.v1.dtypes.uint64": "tf.dtypes.uint64", + "tf.compat.v1.dtypes.uint8": "tf.dtypes.uint8", + "tf.compat.v1.dtypes.variant": "tf.dtypes.variant", + "tf.compat.v1.dynamic_partition": "tf.dynamic_partition", + "tf.compat.v1.dynamic_stitch": "tf.dynamic_stitch", + "tf.compat.v1.edit_distance": "tf.edit_distance", + "tf.compat.v1.einsum": "tf.einsum", + "tf.compat.v1.encode_base64": "tf.io.encode_base64", + "tf.compat.v1.ensure_shape": "tf.ensure_shape", + "tf.compat.v1.equal": "tf.math.equal", + "tf.compat.v1.erf": "tf.math.erf", + "tf.compat.v1.erfc": "tf.math.erfc", + "tf.compat.v1.errors.AbortedError": "tf.errors.AbortedError", + "tf.compat.v1.errors.AbortedError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.AbortedError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.AbortedError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.AbortedError.__init__": "tf.errors.AbortedError.__init__", + "tf.compat.v1.errors.AbortedError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.AbortedError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.AbortedError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.AbortedError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.AbortedError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.AbortedError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.AbortedError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.AbortedError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.AbortedError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.AbortedError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.AlreadyExistsError": "tf.errors.AlreadyExistsError", + "tf.compat.v1.errors.AlreadyExistsError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.AlreadyExistsError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.AlreadyExistsError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.AlreadyExistsError.__init__": "tf.errors.AlreadyExistsError.__init__", + "tf.compat.v1.errors.AlreadyExistsError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.AlreadyExistsError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.AlreadyExistsError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.AlreadyExistsError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.AlreadyExistsError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.AlreadyExistsError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.AlreadyExistsError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.AlreadyExistsError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.AlreadyExistsError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.AlreadyExistsError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.CancelledError": "tf.errors.CancelledError", + "tf.compat.v1.errors.CancelledError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.CancelledError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.CancelledError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.CancelledError.__init__": "tf.errors.CancelledError.__init__", + "tf.compat.v1.errors.CancelledError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.CancelledError.__lt__": 
"tf.keras.Model.__lt__", + "tf.compat.v1.errors.CancelledError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.CancelledError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.CancelledError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.CancelledError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.CancelledError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.CancelledError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.CancelledError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.CancelledError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.DataLossError": "tf.errors.DataLossError", + "tf.compat.v1.errors.DataLossError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.DataLossError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.DataLossError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.DataLossError.__init__": "tf.errors.DataLossError.__init__", + "tf.compat.v1.errors.DataLossError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.DataLossError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.DataLossError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.DataLossError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.DataLossError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.DataLossError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.DataLossError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.DataLossError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.DataLossError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.DataLossError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.DeadlineExceededError": "tf.errors.DeadlineExceededError", + "tf.compat.v1.errors.DeadlineExceededError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.DeadlineExceededError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.DeadlineExceededError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.DeadlineExceededError.__init__": "tf.errors.DeadlineExceededError.__init__", + "tf.compat.v1.errors.DeadlineExceededError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.DeadlineExceededError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.DeadlineExceededError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.DeadlineExceededError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.DeadlineExceededError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.DeadlineExceededError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.DeadlineExceededError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.DeadlineExceededError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.DeadlineExceededError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.DeadlineExceededError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.FailedPreconditionError": "tf.errors.FailedPreconditionError", + "tf.compat.v1.errors.FailedPreconditionError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.FailedPreconditionError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.FailedPreconditionError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.FailedPreconditionError.__init__": "tf.errors.FailedPreconditionError.__init__", + "tf.compat.v1.errors.FailedPreconditionError.__le__": 
"tf.keras.Model.__le__", + "tf.compat.v1.errors.FailedPreconditionError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.FailedPreconditionError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.FailedPreconditionError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.FailedPreconditionError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.FailedPreconditionError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.FailedPreconditionError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.FailedPreconditionError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.FailedPreconditionError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.FailedPreconditionError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.InternalError": "tf.errors.InternalError", + "tf.compat.v1.errors.InternalError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.InternalError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.InternalError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.InternalError.__init__": "tf.errors.InternalError.__init__", + "tf.compat.v1.errors.InternalError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.InternalError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.InternalError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.InternalError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.InternalError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.InternalError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.InternalError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.InternalError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.InternalError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.InternalError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.InvalidArgumentError": "tf.errors.InvalidArgumentError", + "tf.compat.v1.errors.InvalidArgumentError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.InvalidArgumentError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.InvalidArgumentError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.InvalidArgumentError.__init__": "tf.errors.InvalidArgumentError.__init__", + "tf.compat.v1.errors.InvalidArgumentError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.InvalidArgumentError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.InvalidArgumentError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.InvalidArgumentError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.InvalidArgumentError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.InvalidArgumentError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.InvalidArgumentError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.InvalidArgumentError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.InvalidArgumentError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.InvalidArgumentError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.NotFoundError": "tf.errors.NotFoundError", + "tf.compat.v1.errors.NotFoundError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.NotFoundError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.NotFoundError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.NotFoundError.__init__": "tf.errors.NotFoundError.__init__", + 
"tf.compat.v1.errors.NotFoundError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.NotFoundError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.NotFoundError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.NotFoundError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.NotFoundError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.NotFoundError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.NotFoundError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.NotFoundError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.NotFoundError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.NotFoundError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.OpError": "tf.errors.OpError", + "tf.compat.v1.errors.OpError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.OpError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.OpError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.OpError.__init__": "tf.errors.OpError.__init__", + "tf.compat.v1.errors.OpError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.OpError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.OpError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.OpError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.OpError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.OpError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.OpError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.OpError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.OpError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.OpError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.OutOfRangeError": "tf.errors.OutOfRangeError", + "tf.compat.v1.errors.OutOfRangeError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.OutOfRangeError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.OutOfRangeError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.OutOfRangeError.__init__": "tf.errors.OutOfRangeError.__init__", + "tf.compat.v1.errors.OutOfRangeError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.OutOfRangeError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.OutOfRangeError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.OutOfRangeError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.OutOfRangeError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.OutOfRangeError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.OutOfRangeError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.OutOfRangeError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.OutOfRangeError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.OutOfRangeError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.PermissionDeniedError": "tf.errors.PermissionDeniedError", + "tf.compat.v1.errors.PermissionDeniedError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.PermissionDeniedError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.PermissionDeniedError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.PermissionDeniedError.__init__": "tf.errors.PermissionDeniedError.__init__", + "tf.compat.v1.errors.PermissionDeniedError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.PermissionDeniedError.__lt__": "tf.keras.Model.__lt__", + 
"tf.compat.v1.errors.PermissionDeniedError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.PermissionDeniedError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.PermissionDeniedError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.PermissionDeniedError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.PermissionDeniedError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.PermissionDeniedError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.PermissionDeniedError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.PermissionDeniedError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.ResourceExhaustedError": "tf.errors.ResourceExhaustedError", + "tf.compat.v1.errors.ResourceExhaustedError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.ResourceExhaustedError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.ResourceExhaustedError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.ResourceExhaustedError.__init__": "tf.errors.ResourceExhaustedError.__init__", + "tf.compat.v1.errors.ResourceExhaustedError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.ResourceExhaustedError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.ResourceExhaustedError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.ResourceExhaustedError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.ResourceExhaustedError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.ResourceExhaustedError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.ResourceExhaustedError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.ResourceExhaustedError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.ResourceExhaustedError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.ResourceExhaustedError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.UnauthenticatedError": "tf.errors.UnauthenticatedError", + "tf.compat.v1.errors.UnauthenticatedError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.UnauthenticatedError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.UnauthenticatedError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.UnauthenticatedError.__init__": "tf.errors.UnauthenticatedError.__init__", + "tf.compat.v1.errors.UnauthenticatedError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.UnauthenticatedError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.UnauthenticatedError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.UnauthenticatedError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.UnauthenticatedError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.UnauthenticatedError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.UnauthenticatedError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.UnauthenticatedError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.UnauthenticatedError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.UnauthenticatedError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.UnavailableError": "tf.errors.UnavailableError", + "tf.compat.v1.errors.UnavailableError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.UnavailableError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.UnavailableError.__gt__": "tf.keras.Model.__gt__", + 
"tf.compat.v1.errors.UnavailableError.__init__": "tf.errors.UnavailableError.__init__", + "tf.compat.v1.errors.UnavailableError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.UnavailableError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.UnavailableError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.UnavailableError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.UnavailableError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.UnavailableError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.UnavailableError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.UnavailableError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.UnavailableError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.UnavailableError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.UnimplementedError": "tf.errors.UnimplementedError", + "tf.compat.v1.errors.UnimplementedError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.UnimplementedError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.UnimplementedError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.UnimplementedError.__init__": "tf.errors.UnimplementedError.__init__", + "tf.compat.v1.errors.UnimplementedError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.UnimplementedError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.UnimplementedError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.UnimplementedError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.UnimplementedError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.UnimplementedError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.UnimplementedError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.UnimplementedError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.UnimplementedError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.UnimplementedError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.UnknownError": "tf.errors.UnknownError", + "tf.compat.v1.errors.UnknownError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.UnknownError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.UnknownError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.UnknownError.__init__": "tf.errors.UnknownError.__init__", + "tf.compat.v1.errors.UnknownError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.UnknownError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.UnknownError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.UnknownError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.errors.UnknownError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.errors.UnknownError.error_code": "tf.errors.OpError.error_code", + "tf.compat.v1.errors.UnknownError.message": "tf.errors.OpError.message", + "tf.compat.v1.errors.UnknownError.node_def": "tf.errors.OpError.node_def", + "tf.compat.v1.errors.UnknownError.op": "tf.errors.OpError.op", + "tf.compat.v1.errors.UnknownError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__init__": 
"tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.BaselineClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.BaselineClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.BaselineClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.BaselineClassifier.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.BaselineClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.BaselineClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.BaselineClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.BaselineClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.BaselineClassifier.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.BaselineClassifier.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.BaselineClassifier.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.BaselineClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.BaselineClassifier.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.BaselineClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.BaselineClassifier.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.BaselineClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.BaselineClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.BaselineClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.BaselineClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.BaselineClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.BaselineClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.BaselineEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.BaselineEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.BaselineEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.BaselineEstimator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.BaselineEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.BaselineEstimator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.BaselineEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.BaselineEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.BaselineEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.BaselineEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.BaselineEstimator.experimental_export_all_saved_models": 
"tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.BaselineEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.BaselineEstimator.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.BaselineEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.BaselineEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.BaselineEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.BaselineEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.BaselineEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.BaselineEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.BaselineEstimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.BaselineEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.BaselineRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.BaselineRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.BaselineRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.BaselineRegressor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.BaselineRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.BaselineRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.BaselineRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.BaselineRegressor.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.BaselineRegressor.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.BaselineRegressor.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.BaselineRegressor.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.BaselineRegressor.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.BaselineRegressor.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.BaselineRegressor.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.BaselineRegressor.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.BaselineRegressor.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.BaselineRegressor.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.BaselineRegressor.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.BaselineRegressor.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.BaselineRegressor.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.BaselineRegressor.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.BestExporter": "tf.estimator.BestExporter", + "tf.compat.v1.estimator.BestExporter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.BestExporter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.BestExporter.__gt__": "tf.keras.Model.__gt__", + 
"tf.compat.v1.estimator.BestExporter.__init__": "tf.estimator.BestExporter.__init__", + "tf.compat.v1.estimator.BestExporter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.BestExporter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.BestExporter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.BestExporter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.BestExporter.export": "tf.estimator.BestExporter.export", + "tf.compat.v1.estimator.BestExporter.name": "tf.estimator.BestExporter.name", + "tf.compat.v1.estimator.BinaryClassHead": "tf.estimator.BinaryClassHead", + "tf.compat.v1.estimator.BinaryClassHead.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.BinaryClassHead.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.BinaryClassHead.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.BinaryClassHead.__init__": "tf.estimator.BinaryClassHead.__init__", + "tf.compat.v1.estimator.BinaryClassHead.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.BinaryClassHead.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.BinaryClassHead.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.BinaryClassHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.BinaryClassHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.compat.v1.estimator.BinaryClassHead.logits_dimension": "tf.estimator.BinaryClassHead.logits_dimension", + "tf.compat.v1.estimator.BinaryClassHead.loss": "tf.estimator.BinaryClassHead.loss", + "tf.compat.v1.estimator.BinaryClassHead.loss_reduction": "tf.estimator.BinaryClassHead.loss_reduction", + "tf.compat.v1.estimator.BinaryClassHead.metrics": "tf.estimator.BinaryClassHead.metrics", + "tf.compat.v1.estimator.BinaryClassHead.name": "tf.estimator.BinaryClassHead.name", + "tf.compat.v1.estimator.BinaryClassHead.predictions": "tf.estimator.BinaryClassHead.predictions", + "tf.compat.v1.estimator.BinaryClassHead.update_metrics": "tf.estimator.BinaryClassHead.update_metrics", + "tf.compat.v1.estimator.BoostedTreesClassifier": "tf.estimator.BoostedTreesClassifier", + "tf.compat.v1.estimator.BoostedTreesClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.BoostedTreesClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.BoostedTreesClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.BoostedTreesClassifier.__init__": "tf.estimator.BoostedTreesClassifier.__init__", + "tf.compat.v1.estimator.BoostedTreesClassifier.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.BoostedTreesClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.BoostedTreesClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.BoostedTreesClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.BoostedTreesClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.BoostedTreesClassifier.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.BoostedTreesClassifier.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.BoostedTreesClassifier.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.BoostedTreesClassifier.experimental_feature_importances": "tf.estimator.BoostedTreesClassifier.experimental_feature_importances", + 
"tf.compat.v1.estimator.BoostedTreesClassifier.experimental_predict_with_explanations": "tf.estimator.BoostedTreesClassifier.experimental_predict_with_explanations", + "tf.compat.v1.estimator.BoostedTreesClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.BoostedTreesClassifier.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.BoostedTreesClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.BoostedTreesClassifier.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.BoostedTreesClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.BoostedTreesClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.BoostedTreesClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.BoostedTreesClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.BoostedTreesClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.BoostedTreesClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.BoostedTreesEstimator": "tf.estimator.BoostedTreesEstimator", + "tf.compat.v1.estimator.BoostedTreesEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.BoostedTreesEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.BoostedTreesEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.BoostedTreesEstimator.__init__": "tf.estimator.BoostedTreesEstimator.__init__", + "tf.compat.v1.estimator.BoostedTreesEstimator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.BoostedTreesEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.BoostedTreesEstimator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.BoostedTreesEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.BoostedTreesEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.BoostedTreesEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.BoostedTreesEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.BoostedTreesEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.BoostedTreesEstimator.experimental_feature_importances": "tf.estimator.BoostedTreesClassifier.experimental_feature_importances", + "tf.compat.v1.estimator.BoostedTreesEstimator.experimental_predict_with_explanations": "tf.estimator.BoostedTreesClassifier.experimental_predict_with_explanations", + "tf.compat.v1.estimator.BoostedTreesEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.BoostedTreesEstimator.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.BoostedTreesEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.BoostedTreesEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.BoostedTreesEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.BoostedTreesEstimator.model_dir": 
"tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.BoostedTreesEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.BoostedTreesEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.BoostedTreesEstimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.BoostedTreesEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.BoostedTreesRegressor": "tf.estimator.BoostedTreesRegressor", + "tf.compat.v1.estimator.BoostedTreesRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.BoostedTreesRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.BoostedTreesRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.BoostedTreesRegressor.__init__": "tf.estimator.BoostedTreesRegressor.__init__", + "tf.compat.v1.estimator.BoostedTreesRegressor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.BoostedTreesRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.BoostedTreesRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.BoostedTreesRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.BoostedTreesRegressor.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.BoostedTreesRegressor.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.BoostedTreesRegressor.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.BoostedTreesRegressor.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.BoostedTreesRegressor.experimental_feature_importances": "tf.estimator.BoostedTreesClassifier.experimental_feature_importances", + "tf.compat.v1.estimator.BoostedTreesRegressor.experimental_predict_with_explanations": "tf.estimator.BoostedTreesClassifier.experimental_predict_with_explanations", + "tf.compat.v1.estimator.BoostedTreesRegressor.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.BoostedTreesRegressor.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.BoostedTreesRegressor.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.BoostedTreesRegressor.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.BoostedTreesRegressor.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.BoostedTreesRegressor.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.BoostedTreesRegressor.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.BoostedTreesRegressor.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.BoostedTreesRegressor.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.BoostedTreesRegressor.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.CheckpointSaverHook": "tf.estimator.CheckpointSaverHook", + "tf.compat.v1.estimator.CheckpointSaverHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.CheckpointSaverHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.CheckpointSaverHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.CheckpointSaverHook.__init__": 
"tf.estimator.CheckpointSaverHook.__init__", + "tf.compat.v1.estimator.CheckpointSaverHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.CheckpointSaverHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.CheckpointSaverHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.CheckpointSaverHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.CheckpointSaverHook.after_create_session": "tf.estimator.CheckpointSaverHook.after_create_session", + "tf.compat.v1.estimator.CheckpointSaverHook.after_run": "tf.estimator.CheckpointSaverHook.after_run", + "tf.compat.v1.estimator.CheckpointSaverHook.before_run": "tf.estimator.CheckpointSaverHook.before_run", + "tf.compat.v1.estimator.CheckpointSaverHook.begin": "tf.estimator.CheckpointSaverHook.begin", + "tf.compat.v1.estimator.CheckpointSaverHook.end": "tf.estimator.CheckpointSaverHook.end", + "tf.compat.v1.estimator.CheckpointSaverListener": "tf.estimator.CheckpointSaverListener", + "tf.compat.v1.estimator.CheckpointSaverListener.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.CheckpointSaverListener.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.CheckpointSaverListener.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.CheckpointSaverListener.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.CheckpointSaverListener.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.CheckpointSaverListener.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.CheckpointSaverListener.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.CheckpointSaverListener.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.CheckpointSaverListener.after_save": "tf.estimator.CheckpointSaverListener.after_save", + "tf.compat.v1.estimator.CheckpointSaverListener.before_save": "tf.estimator.CheckpointSaverListener.before_save", + "tf.compat.v1.estimator.CheckpointSaverListener.begin": "tf.estimator.CheckpointSaverListener.begin", + "tf.compat.v1.estimator.CheckpointSaverListener.end": "tf.estimator.CheckpointSaverListener.end", + "tf.compat.v1.estimator.DNNClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.DNNClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.DNNClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.DNNClassifier.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.DNNClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.DNNClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.DNNClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.DNNClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.DNNClassifier.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.DNNClassifier.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.DNNClassifier.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.DNNClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.DNNClassifier.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.DNNClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.DNNClassifier.get_variable_value": 
"tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.DNNClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.DNNClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.DNNClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.DNNClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.DNNClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.DNNClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.DNNEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.DNNEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.DNNEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.DNNEstimator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.DNNEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.DNNEstimator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.DNNEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.DNNEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.DNNEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.DNNEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.DNNEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.DNNEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.DNNEstimator.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.DNNEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.DNNEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.DNNEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.DNNEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.DNNEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.DNNEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.DNNEstimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.DNNEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.evaluate": 
"tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.predict": 
"tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.DNNRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.DNNRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.DNNRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.DNNRegressor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.DNNRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.DNNRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.DNNRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.DNNRegressor.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.DNNRegressor.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.DNNRegressor.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.DNNRegressor.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.DNNRegressor.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + 
"tf.compat.v1.estimator.DNNRegressor.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.DNNRegressor.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.DNNRegressor.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.DNNRegressor.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.DNNRegressor.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.DNNRegressor.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.DNNRegressor.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.DNNRegressor.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.DNNRegressor.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.Estimator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.Estimator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.Estimator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.Estimator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.Estimator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.Estimator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.Estimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.EstimatorSpec": "tf.estimator.EstimatorSpec", + "tf.compat.v1.estimator.EstimatorSpec.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.EstimatorSpec.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.EstimatorSpec.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.EstimatorSpec.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.EstimatorSpec.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.EstimatorSpec.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.EstimatorSpec.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.EstimatorSpec.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.EstimatorSpec.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.EstimatorSpec.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.EstimatorSpec.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.EstimatorSpec.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.EstimatorSpec.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.EstimatorSpec.__new__": "tf.estimator.EstimatorSpec.__new__", + "tf.compat.v1.estimator.EstimatorSpec.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.EstimatorSpec.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.EstimatorSpec.eval_metric_ops": "tf.estimator.EstimatorSpec.eval_metric_ops", + "tf.compat.v1.estimator.EstimatorSpec.evaluation_hooks": "tf.estimator.EstimatorSpec.evaluation_hooks", + "tf.compat.v1.estimator.EstimatorSpec.export_outputs": "tf.estimator.EstimatorSpec.export_outputs", + "tf.compat.v1.estimator.EstimatorSpec.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.EstimatorSpec.loss": "tf.estimator.EstimatorSpec.loss", + "tf.compat.v1.estimator.EstimatorSpec.mode": "tf.estimator.EstimatorSpec.mode", + "tf.compat.v1.estimator.EstimatorSpec.prediction_hooks": "tf.estimator.EstimatorSpec.prediction_hooks", + 
"tf.compat.v1.estimator.EstimatorSpec.predictions": "tf.estimator.EstimatorSpec.predictions", + "tf.compat.v1.estimator.EstimatorSpec.scaffold": "tf.estimator.EstimatorSpec.scaffold", + "tf.compat.v1.estimator.EstimatorSpec.train_op": "tf.estimator.EstimatorSpec.train_op", + "tf.compat.v1.estimator.EstimatorSpec.training_chief_hooks": "tf.estimator.EstimatorSpec.training_chief_hooks", + "tf.compat.v1.estimator.EstimatorSpec.training_hooks": "tf.estimator.EstimatorSpec.training_hooks", + "tf.compat.v1.estimator.EvalSpec": "tf.estimator.EvalSpec", + "tf.compat.v1.estimator.EvalSpec.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.EvalSpec.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.EvalSpec.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.EvalSpec.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.EvalSpec.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.EvalSpec.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.EvalSpec.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.EvalSpec.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.EvalSpec.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.EvalSpec.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.EvalSpec.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.EvalSpec.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.EvalSpec.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.EvalSpec.__new__": "tf.estimator.EvalSpec.__new__", + "tf.compat.v1.estimator.EvalSpec.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.EvalSpec.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.EvalSpec.exporters": "tf.estimator.EvalSpec.exporters", + "tf.compat.v1.estimator.EvalSpec.hooks": "tf.estimator.EvalSpec.hooks", + "tf.compat.v1.estimator.EvalSpec.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.EvalSpec.input_fn": "tf.estimator.EvalSpec.input_fn", + "tf.compat.v1.estimator.EvalSpec.name": "tf.estimator.EvalSpec.name", + "tf.compat.v1.estimator.EvalSpec.start_delay_secs": "tf.estimator.EvalSpec.start_delay_secs", + "tf.compat.v1.estimator.EvalSpec.steps": "tf.estimator.EvalSpec.steps", + "tf.compat.v1.estimator.EvalSpec.throttle_secs": "tf.estimator.EvalSpec.throttle_secs", + "tf.compat.v1.estimator.Exporter": "tf.estimator.Exporter", + "tf.compat.v1.estimator.Exporter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.Exporter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.Exporter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.Exporter.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.Exporter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.Exporter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.Exporter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.Exporter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.Exporter.export": "tf.estimator.Exporter.export", + "tf.compat.v1.estimator.Exporter.name": "tf.estimator.Exporter.name", + "tf.compat.v1.estimator.FeedFnHook": "tf.estimator.FeedFnHook", + "tf.compat.v1.estimator.FeedFnHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.FeedFnHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.FeedFnHook.__gt__": 
"tf.keras.Model.__gt__", + "tf.compat.v1.estimator.FeedFnHook.__init__": "tf.estimator.FeedFnHook.__init__", + "tf.compat.v1.estimator.FeedFnHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.FeedFnHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.FeedFnHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.FeedFnHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.FeedFnHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.estimator.FeedFnHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.compat.v1.estimator.FeedFnHook.before_run": "tf.estimator.FeedFnHook.before_run", + "tf.compat.v1.estimator.FeedFnHook.begin": "tf.estimator.SessionRunHook.begin", + "tf.compat.v1.estimator.FeedFnHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.estimator.FinalExporter": "tf.estimator.FinalExporter", + "tf.compat.v1.estimator.FinalExporter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.FinalExporter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.FinalExporter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.FinalExporter.__init__": "tf.estimator.FinalExporter.__init__", + "tf.compat.v1.estimator.FinalExporter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.FinalExporter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.FinalExporter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.FinalExporter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.FinalExporter.export": "tf.estimator.FinalExporter.export", + "tf.compat.v1.estimator.FinalExporter.name": "tf.estimator.FinalExporter.name", + "tf.compat.v1.estimator.FinalOpsHook": "tf.estimator.FinalOpsHook", + "tf.compat.v1.estimator.FinalOpsHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.FinalOpsHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.FinalOpsHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.FinalOpsHook.__init__": "tf.estimator.FinalOpsHook.__init__", + "tf.compat.v1.estimator.FinalOpsHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.FinalOpsHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.FinalOpsHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.FinalOpsHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.FinalOpsHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.estimator.FinalOpsHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.compat.v1.estimator.FinalOpsHook.before_run": "tf.estimator.SessionRunHook.before_run", + "tf.compat.v1.estimator.FinalOpsHook.begin": "tf.estimator.SessionRunHook.begin", + "tf.compat.v1.estimator.FinalOpsHook.end": "tf.estimator.FinalOpsHook.end", + "tf.compat.v1.estimator.FinalOpsHook.final_ops_values": "tf.estimator.FinalOpsHook.final_ops_values", + "tf.compat.v1.estimator.GlobalStepWaiterHook": "tf.estimator.GlobalStepWaiterHook", + "tf.compat.v1.estimator.GlobalStepWaiterHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.GlobalStepWaiterHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.GlobalStepWaiterHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.GlobalStepWaiterHook.__init__": "tf.estimator.GlobalStepWaiterHook.__init__", + "tf.compat.v1.estimator.GlobalStepWaiterHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.GlobalStepWaiterHook.__lt__": 
"tf.keras.Model.__lt__", + "tf.compat.v1.estimator.GlobalStepWaiterHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.GlobalStepWaiterHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.GlobalStepWaiterHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.estimator.GlobalStepWaiterHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.compat.v1.estimator.GlobalStepWaiterHook.before_run": "tf.estimator.GlobalStepWaiterHook.before_run", + "tf.compat.v1.estimator.GlobalStepWaiterHook.begin": "tf.estimator.GlobalStepWaiterHook.begin", + "tf.compat.v1.estimator.GlobalStepWaiterHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.estimator.Head": "tf.estimator.Head", + "tf.compat.v1.estimator.Head.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.Head.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.Head.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.Head.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.Head.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.Head.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.Head.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.Head.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.Head.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.compat.v1.estimator.Head.logits_dimension": "tf.estimator.Head.logits_dimension", + "tf.compat.v1.estimator.Head.loss": "tf.estimator.Head.loss", + "tf.compat.v1.estimator.Head.loss_reduction": "tf.estimator.Head.loss_reduction", + "tf.compat.v1.estimator.Head.metrics": "tf.estimator.Head.metrics", + "tf.compat.v1.estimator.Head.name": "tf.estimator.Head.name", + "tf.compat.v1.estimator.Head.predictions": "tf.estimator.Head.predictions", + "tf.compat.v1.estimator.Head.update_metrics": "tf.estimator.Head.update_metrics", + "tf.compat.v1.estimator.LatestExporter": "tf.estimator.LatestExporter", + "tf.compat.v1.estimator.LatestExporter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.LatestExporter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.LatestExporter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.LatestExporter.__init__": "tf.estimator.LatestExporter.__init__", + "tf.compat.v1.estimator.LatestExporter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.LatestExporter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.LatestExporter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.LatestExporter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.LatestExporter.export": "tf.estimator.LatestExporter.export", + "tf.compat.v1.estimator.LatestExporter.name": "tf.estimator.LatestExporter.name", + "tf.compat.v1.estimator.LinearClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.LinearClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.LinearClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.LinearClassifier.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.LinearClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.LinearClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.LinearClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.LinearClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.LinearClassifier.eval_dir": 
"tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.LinearClassifier.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.LinearClassifier.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.LinearClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.LinearClassifier.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.LinearClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.LinearClassifier.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.LinearClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.LinearClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.LinearClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.LinearClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.LinearClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.LinearClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.LinearEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.LinearEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.LinearEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.LinearEstimator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.LinearEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.LinearEstimator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.LinearEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.LinearEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.LinearEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.LinearEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.LinearEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.LinearEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.LinearEstimator.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.LinearEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.LinearEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.LinearEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.LinearEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.LinearEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.LinearEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.LinearEstimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.LinearEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.LinearRegressor.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.estimator.LinearRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.LinearRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.LinearRegressor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.LinearRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.LinearRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.LinearRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.LinearRegressor.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.LinearRegressor.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.LinearRegressor.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.LinearRegressor.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.LinearRegressor.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.LinearRegressor.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.LinearRegressor.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.LinearRegressor.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.LinearRegressor.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.LinearRegressor.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.LinearRegressor.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.LinearRegressor.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.LinearRegressor.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.LinearRegressor.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.LoggingTensorHook": "tf.estimator.LoggingTensorHook", + "tf.compat.v1.estimator.LoggingTensorHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.LoggingTensorHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.LoggingTensorHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.LoggingTensorHook.__init__": "tf.estimator.LoggingTensorHook.__init__", + "tf.compat.v1.estimator.LoggingTensorHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.LoggingTensorHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.LoggingTensorHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.LoggingTensorHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.LoggingTensorHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.estimator.LoggingTensorHook.after_run": "tf.estimator.LoggingTensorHook.after_run", + "tf.compat.v1.estimator.LoggingTensorHook.before_run": "tf.estimator.LoggingTensorHook.before_run", + "tf.compat.v1.estimator.LoggingTensorHook.begin": "tf.estimator.LoggingTensorHook.begin", + "tf.compat.v1.estimator.LoggingTensorHook.end": "tf.estimator.LoggingTensorHook.end", + "tf.compat.v1.estimator.LogisticRegressionHead": "tf.estimator.LogisticRegressionHead", + "tf.compat.v1.estimator.LogisticRegressionHead.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.LogisticRegressionHead.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.LogisticRegressionHead.__gt__": 
"tf.keras.Model.__gt__", + "tf.compat.v1.estimator.LogisticRegressionHead.__init__": "tf.estimator.LogisticRegressionHead.__init__", + "tf.compat.v1.estimator.LogisticRegressionHead.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.LogisticRegressionHead.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.LogisticRegressionHead.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.LogisticRegressionHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.LogisticRegressionHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.compat.v1.estimator.LogisticRegressionHead.logits_dimension": "tf.estimator.RegressionHead.logits_dimension", + "tf.compat.v1.estimator.LogisticRegressionHead.loss": "tf.estimator.RegressionHead.loss", + "tf.compat.v1.estimator.LogisticRegressionHead.loss_reduction": "tf.estimator.RegressionHead.loss_reduction", + "tf.compat.v1.estimator.LogisticRegressionHead.metrics": "tf.estimator.RegressionHead.metrics", + "tf.compat.v1.estimator.LogisticRegressionHead.name": "tf.estimator.RegressionHead.name", + "tf.compat.v1.estimator.LogisticRegressionHead.predictions": "tf.estimator.RegressionHead.predictions", + "tf.compat.v1.estimator.LogisticRegressionHead.update_metrics": "tf.estimator.RegressionHead.update_metrics", + "tf.compat.v1.estimator.ModeKeys": "tf.estimator.ModeKeys", + "tf.compat.v1.estimator.ModeKeys.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.ModeKeys.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.ModeKeys.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.ModeKeys.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.ModeKeys.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.ModeKeys.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.ModeKeys.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.ModeKeys.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.MultiClassHead": "tf.estimator.MultiClassHead", + "tf.compat.v1.estimator.MultiClassHead.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.MultiClassHead.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.MultiClassHead.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.MultiClassHead.__init__": "tf.estimator.MultiClassHead.__init__", + "tf.compat.v1.estimator.MultiClassHead.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.MultiClassHead.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.MultiClassHead.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.MultiClassHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.MultiClassHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.compat.v1.estimator.MultiClassHead.logits_dimension": "tf.estimator.MultiClassHead.logits_dimension", + "tf.compat.v1.estimator.MultiClassHead.loss": "tf.estimator.MultiClassHead.loss", + "tf.compat.v1.estimator.MultiClassHead.loss_reduction": "tf.estimator.MultiClassHead.loss_reduction", + "tf.compat.v1.estimator.MultiClassHead.metrics": "tf.estimator.MultiClassHead.metrics", + "tf.compat.v1.estimator.MultiClassHead.name": "tf.estimator.MultiClassHead.name", + "tf.compat.v1.estimator.MultiClassHead.predictions": "tf.estimator.MultiClassHead.predictions", + "tf.compat.v1.estimator.MultiClassHead.update_metrics": "tf.estimator.MultiClassHead.update_metrics", + "tf.compat.v1.estimator.MultiHead": "tf.estimator.MultiHead", + 
"tf.compat.v1.estimator.MultiHead.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.MultiHead.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.MultiHead.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.MultiHead.__init__": "tf.estimator.MultiHead.__init__", + "tf.compat.v1.estimator.MultiHead.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.MultiHead.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.MultiHead.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.MultiHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.MultiHead.create_estimator_spec": "tf.estimator.MultiHead.create_estimator_spec", + "tf.compat.v1.estimator.MultiHead.logits_dimension": "tf.estimator.MultiHead.logits_dimension", + "tf.compat.v1.estimator.MultiHead.loss": "tf.estimator.MultiHead.loss", + "tf.compat.v1.estimator.MultiHead.loss_reduction": "tf.estimator.MultiHead.loss_reduction", + "tf.compat.v1.estimator.MultiHead.metrics": "tf.estimator.MultiHead.metrics", + "tf.compat.v1.estimator.MultiHead.name": "tf.estimator.MultiHead.name", + "tf.compat.v1.estimator.MultiHead.predictions": "tf.estimator.MultiHead.predictions", + "tf.compat.v1.estimator.MultiHead.update_metrics": "tf.estimator.MultiHead.update_metrics", + "tf.compat.v1.estimator.MultiLabelHead": "tf.estimator.MultiLabelHead", + "tf.compat.v1.estimator.MultiLabelHead.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.MultiLabelHead.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.MultiLabelHead.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.MultiLabelHead.__init__": "tf.estimator.MultiLabelHead.__init__", + "tf.compat.v1.estimator.MultiLabelHead.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.MultiLabelHead.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.MultiLabelHead.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.MultiLabelHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.MultiLabelHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.compat.v1.estimator.MultiLabelHead.logits_dimension": "tf.estimator.MultiLabelHead.logits_dimension", + "tf.compat.v1.estimator.MultiLabelHead.loss": "tf.estimator.MultiLabelHead.loss", + "tf.compat.v1.estimator.MultiLabelHead.loss_reduction": "tf.estimator.MultiLabelHead.loss_reduction", + "tf.compat.v1.estimator.MultiLabelHead.metrics": "tf.estimator.MultiLabelHead.metrics", + "tf.compat.v1.estimator.MultiLabelHead.name": "tf.estimator.MultiLabelHead.name", + "tf.compat.v1.estimator.MultiLabelHead.predictions": "tf.estimator.MultiLabelHead.predictions", + "tf.compat.v1.estimator.MultiLabelHead.update_metrics": "tf.estimator.MultiLabelHead.update_metrics", + "tf.compat.v1.estimator.NanLossDuringTrainingError": "tf.estimator.NanLossDuringTrainingError", + "tf.compat.v1.estimator.NanLossDuringTrainingError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.NanLossDuringTrainingError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.NanLossDuringTrainingError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.NanLossDuringTrainingError.__init__": "tf.estimator.NanLossDuringTrainingError.__init__", + "tf.compat.v1.estimator.NanLossDuringTrainingError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.NanLossDuringTrainingError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.NanLossDuringTrainingError.__ne__": "tf.keras.Model.__ne__", + 
"tf.compat.v1.estimator.NanLossDuringTrainingError.__new__": "tf.estimator.NanLossDuringTrainingError.__new__", + "tf.compat.v1.estimator.NanLossDuringTrainingError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.estimator.NanLossDuringTrainingError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.estimator.NanTensorHook": "tf.estimator.NanTensorHook", + "tf.compat.v1.estimator.NanTensorHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.NanTensorHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.NanTensorHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.NanTensorHook.__init__": "tf.estimator.NanTensorHook.__init__", + "tf.compat.v1.estimator.NanTensorHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.NanTensorHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.NanTensorHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.NanTensorHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.NanTensorHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.estimator.NanTensorHook.after_run": "tf.estimator.NanTensorHook.after_run", + "tf.compat.v1.estimator.NanTensorHook.before_run": "tf.estimator.NanTensorHook.before_run", + "tf.compat.v1.estimator.NanTensorHook.begin": "tf.estimator.SessionRunHook.begin", + "tf.compat.v1.estimator.NanTensorHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.estimator.PoissonRegressionHead": "tf.estimator.PoissonRegressionHead", + "tf.compat.v1.estimator.PoissonRegressionHead.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.PoissonRegressionHead.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.PoissonRegressionHead.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.PoissonRegressionHead.__init__": "tf.estimator.PoissonRegressionHead.__init__", + "tf.compat.v1.estimator.PoissonRegressionHead.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.PoissonRegressionHead.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.PoissonRegressionHead.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.PoissonRegressionHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.PoissonRegressionHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.compat.v1.estimator.PoissonRegressionHead.logits_dimension": "tf.estimator.RegressionHead.logits_dimension", + "tf.compat.v1.estimator.PoissonRegressionHead.loss": "tf.estimator.RegressionHead.loss", + "tf.compat.v1.estimator.PoissonRegressionHead.loss_reduction": "tf.estimator.RegressionHead.loss_reduction", + "tf.compat.v1.estimator.PoissonRegressionHead.metrics": "tf.estimator.RegressionHead.metrics", + "tf.compat.v1.estimator.PoissonRegressionHead.name": "tf.estimator.RegressionHead.name", + "tf.compat.v1.estimator.PoissonRegressionHead.predictions": "tf.estimator.RegressionHead.predictions", + "tf.compat.v1.estimator.PoissonRegressionHead.update_metrics": "tf.estimator.RegressionHead.update_metrics", + "tf.compat.v1.estimator.ProfilerHook": "tf.estimator.ProfilerHook", + "tf.compat.v1.estimator.ProfilerHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.ProfilerHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.ProfilerHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.ProfilerHook.__init__": "tf.estimator.ProfilerHook.__init__", + "tf.compat.v1.estimator.ProfilerHook.__le__": "tf.keras.Model.__le__", + 
"tf.compat.v1.estimator.ProfilerHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.ProfilerHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.ProfilerHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.ProfilerHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.estimator.ProfilerHook.after_run": "tf.estimator.ProfilerHook.after_run", + "tf.compat.v1.estimator.ProfilerHook.before_run": "tf.estimator.ProfilerHook.before_run", + "tf.compat.v1.estimator.ProfilerHook.begin": "tf.estimator.ProfilerHook.begin", + "tf.compat.v1.estimator.ProfilerHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.estimator.RegressionHead": "tf.estimator.RegressionHead", + "tf.compat.v1.estimator.RegressionHead.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.RegressionHead.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.RegressionHead.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.RegressionHead.__init__": "tf.estimator.RegressionHead.__init__", + "tf.compat.v1.estimator.RegressionHead.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.RegressionHead.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.RegressionHead.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.RegressionHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.RegressionHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.compat.v1.estimator.RegressionHead.logits_dimension": "tf.estimator.RegressionHead.logits_dimension", + "tf.compat.v1.estimator.RegressionHead.loss": "tf.estimator.RegressionHead.loss", + "tf.compat.v1.estimator.RegressionHead.loss_reduction": "tf.estimator.RegressionHead.loss_reduction", + "tf.compat.v1.estimator.RegressionHead.metrics": "tf.estimator.RegressionHead.metrics", + "tf.compat.v1.estimator.RegressionHead.name": "tf.estimator.RegressionHead.name", + "tf.compat.v1.estimator.RegressionHead.predictions": "tf.estimator.RegressionHead.predictions", + "tf.compat.v1.estimator.RegressionHead.update_metrics": "tf.estimator.RegressionHead.update_metrics", + "tf.compat.v1.estimator.RunConfig": "tf.estimator.RunConfig", + "tf.compat.v1.estimator.RunConfig.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.RunConfig.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.RunConfig.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.RunConfig.__init__": "tf.estimator.RunConfig.__init__", + "tf.compat.v1.estimator.RunConfig.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.RunConfig.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.RunConfig.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.RunConfig.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.RunConfig.cluster_spec": "tf.estimator.RunConfig.cluster_spec", + "tf.compat.v1.estimator.RunConfig.device_fn": "tf.estimator.RunConfig.device_fn", + "tf.compat.v1.estimator.RunConfig.eval_distribute": "tf.estimator.RunConfig.eval_distribute", + "tf.compat.v1.estimator.RunConfig.evaluation_master": "tf.estimator.RunConfig.evaluation_master", + "tf.compat.v1.estimator.RunConfig.experimental_max_worker_delay_secs": "tf.estimator.RunConfig.experimental_max_worker_delay_secs", + "tf.compat.v1.estimator.RunConfig.global_id_in_cluster": "tf.estimator.RunConfig.global_id_in_cluster", + "tf.compat.v1.estimator.RunConfig.is_chief": "tf.estimator.RunConfig.is_chief", + 
"tf.compat.v1.estimator.RunConfig.keep_checkpoint_every_n_hours": "tf.estimator.RunConfig.keep_checkpoint_every_n_hours", + "tf.compat.v1.estimator.RunConfig.keep_checkpoint_max": "tf.estimator.RunConfig.keep_checkpoint_max", + "tf.compat.v1.estimator.RunConfig.log_step_count_steps": "tf.estimator.RunConfig.log_step_count_steps", + "tf.compat.v1.estimator.RunConfig.master": "tf.estimator.RunConfig.master", + "tf.compat.v1.estimator.RunConfig.model_dir": "tf.estimator.RunConfig.model_dir", + "tf.compat.v1.estimator.RunConfig.num_ps_replicas": "tf.estimator.RunConfig.num_ps_replicas", + "tf.compat.v1.estimator.RunConfig.num_worker_replicas": "tf.estimator.RunConfig.num_worker_replicas", + "tf.compat.v1.estimator.RunConfig.protocol": "tf.estimator.RunConfig.protocol", + "tf.compat.v1.estimator.RunConfig.replace": "tf.estimator.RunConfig.replace", + "tf.compat.v1.estimator.RunConfig.save_checkpoints_secs": "tf.estimator.RunConfig.save_checkpoints_secs", + "tf.compat.v1.estimator.RunConfig.save_checkpoints_steps": "tf.estimator.RunConfig.save_checkpoints_steps", + "tf.compat.v1.estimator.RunConfig.save_summary_steps": "tf.estimator.RunConfig.save_summary_steps", + "tf.compat.v1.estimator.RunConfig.service": "tf.estimator.RunConfig.service", + "tf.compat.v1.estimator.RunConfig.session_config": "tf.estimator.RunConfig.session_config", + "tf.compat.v1.estimator.RunConfig.session_creation_timeout_secs": "tf.estimator.RunConfig.session_creation_timeout_secs", + "tf.compat.v1.estimator.RunConfig.task_id": "tf.estimator.RunConfig.task_id", + "tf.compat.v1.estimator.RunConfig.task_type": "tf.estimator.RunConfig.task_type", + "tf.compat.v1.estimator.RunConfig.tf_random_seed": "tf.estimator.RunConfig.tf_random_seed", + "tf.compat.v1.estimator.RunConfig.train_distribute": "tf.estimator.RunConfig.train_distribute", + "tf.compat.v1.estimator.SecondOrStepTimer": "tf.estimator.SecondOrStepTimer", + "tf.compat.v1.estimator.SecondOrStepTimer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.SecondOrStepTimer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.SecondOrStepTimer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.SecondOrStepTimer.__init__": "tf.estimator.SecondOrStepTimer.__init__", + "tf.compat.v1.estimator.SecondOrStepTimer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.SecondOrStepTimer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.SecondOrStepTimer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.SecondOrStepTimer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.SecondOrStepTimer.last_triggered_step": "tf.estimator.SecondOrStepTimer.last_triggered_step", + "tf.compat.v1.estimator.SecondOrStepTimer.reset": "tf.estimator.SecondOrStepTimer.reset", + "tf.compat.v1.estimator.SecondOrStepTimer.should_trigger_for_step": "tf.estimator.SecondOrStepTimer.should_trigger_for_step", + "tf.compat.v1.estimator.SecondOrStepTimer.update_last_triggered_step": "tf.estimator.SecondOrStepTimer.update_last_triggered_step", + "tf.compat.v1.estimator.SessionRunArgs": "tf.estimator.SessionRunArgs", + "tf.compat.v1.estimator.SessionRunArgs.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.SessionRunArgs.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.SessionRunArgs.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.SessionRunArgs.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.SessionRunArgs.__getitem__": 
"tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.SessionRunArgs.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.SessionRunArgs.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.SessionRunArgs.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.SessionRunArgs.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.SessionRunArgs.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.SessionRunArgs.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.SessionRunArgs.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.SessionRunArgs.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.SessionRunArgs.__new__": "tf.estimator.SessionRunArgs.__new__", + "tf.compat.v1.estimator.SessionRunArgs.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.SessionRunArgs.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.SessionRunArgs.feed_dict": "tf.estimator.SessionRunArgs.feed_dict", + "tf.compat.v1.estimator.SessionRunArgs.fetches": "tf.estimator.SessionRunArgs.fetches", + "tf.compat.v1.estimator.SessionRunArgs.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.SessionRunArgs.options": "tf.estimator.SessionRunArgs.options", + "tf.compat.v1.estimator.SessionRunContext": "tf.estimator.SessionRunContext", + "tf.compat.v1.estimator.SessionRunContext.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.SessionRunContext.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.SessionRunContext.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.SessionRunContext.__init__": "tf.estimator.SessionRunContext.__init__", + "tf.compat.v1.estimator.SessionRunContext.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.SessionRunContext.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.SessionRunContext.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.SessionRunContext.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.SessionRunContext.original_args": "tf.estimator.SessionRunContext.original_args", + "tf.compat.v1.estimator.SessionRunContext.request_stop": "tf.estimator.SessionRunContext.request_stop", + "tf.compat.v1.estimator.SessionRunContext.session": "tf.estimator.SessionRunContext.session", + "tf.compat.v1.estimator.SessionRunContext.stop_requested": "tf.estimator.SessionRunContext.stop_requested", + "tf.compat.v1.estimator.SessionRunHook": "tf.estimator.SessionRunHook", + "tf.compat.v1.estimator.SessionRunHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.SessionRunHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.SessionRunHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.SessionRunHook.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.SessionRunHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.SessionRunHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.SessionRunHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.SessionRunHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.SessionRunHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.estimator.SessionRunHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.compat.v1.estimator.SessionRunHook.before_run": "tf.estimator.SessionRunHook.before_run", + 
"tf.compat.v1.estimator.SessionRunHook.begin": "tf.estimator.SessionRunHook.begin", + "tf.compat.v1.estimator.SessionRunHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.estimator.SessionRunValues": "tf.estimator.SessionRunValues", + "tf.compat.v1.estimator.SessionRunValues.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.SessionRunValues.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.SessionRunValues.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.SessionRunValues.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.SessionRunValues.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.SessionRunValues.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.SessionRunValues.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.SessionRunValues.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.SessionRunValues.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.SessionRunValues.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.SessionRunValues.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.SessionRunValues.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.SessionRunValues.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.SessionRunValues.__new__": "tf.estimator.SessionRunValues.__new__", + "tf.compat.v1.estimator.SessionRunValues.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.SessionRunValues.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.SessionRunValues.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.SessionRunValues.options": "tf.estimator.SessionRunValues.options", + "tf.compat.v1.estimator.SessionRunValues.results": "tf.estimator.SessionRunValues.results", + "tf.compat.v1.estimator.SessionRunValues.run_metadata": "tf.estimator.SessionRunValues.run_metadata", + "tf.compat.v1.estimator.StepCounterHook": "tf.estimator.StepCounterHook", + "tf.compat.v1.estimator.StepCounterHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.StepCounterHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.StepCounterHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.StepCounterHook.__init__": "tf.estimator.StepCounterHook.__init__", + "tf.compat.v1.estimator.StepCounterHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.StepCounterHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.StepCounterHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.StepCounterHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.StepCounterHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.estimator.StepCounterHook.after_run": "tf.estimator.StepCounterHook.after_run", + "tf.compat.v1.estimator.StepCounterHook.before_run": "tf.estimator.StepCounterHook.before_run", + "tf.compat.v1.estimator.StepCounterHook.begin": "tf.estimator.StepCounterHook.begin", + "tf.compat.v1.estimator.StepCounterHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.estimator.StopAtStepHook": "tf.estimator.StopAtStepHook", + "tf.compat.v1.estimator.StopAtStepHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.StopAtStepHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.StopAtStepHook.__gt__": "tf.keras.Model.__gt__", + 
"tf.compat.v1.estimator.StopAtStepHook.__init__": "tf.estimator.StopAtStepHook.__init__", + "tf.compat.v1.estimator.StopAtStepHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.StopAtStepHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.StopAtStepHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.StopAtStepHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.StopAtStepHook.after_create_session": "tf.estimator.StopAtStepHook.after_create_session", + "tf.compat.v1.estimator.StopAtStepHook.after_run": "tf.estimator.StopAtStepHook.after_run", + "tf.compat.v1.estimator.StopAtStepHook.before_run": "tf.estimator.StopAtStepHook.before_run", + "tf.compat.v1.estimator.StopAtStepHook.begin": "tf.estimator.StopAtStepHook.begin", + "tf.compat.v1.estimator.StopAtStepHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.estimator.SummarySaverHook": "tf.estimator.SummarySaverHook", + "tf.compat.v1.estimator.SummarySaverHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.SummarySaverHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.SummarySaverHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.SummarySaverHook.__init__": "tf.estimator.SummarySaverHook.__init__", + "tf.compat.v1.estimator.SummarySaverHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.SummarySaverHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.SummarySaverHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.SummarySaverHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.SummarySaverHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.estimator.SummarySaverHook.after_run": "tf.estimator.SummarySaverHook.after_run", + "tf.compat.v1.estimator.SummarySaverHook.before_run": "tf.estimator.SummarySaverHook.before_run", + "tf.compat.v1.estimator.SummarySaverHook.begin": "tf.estimator.SummarySaverHook.begin", + "tf.compat.v1.estimator.SummarySaverHook.end": "tf.estimator.SummarySaverHook.end", + "tf.compat.v1.estimator.TrainSpec": "tf.estimator.TrainSpec", + "tf.compat.v1.estimator.TrainSpec.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.TrainSpec.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.TrainSpec.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.TrainSpec.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.TrainSpec.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.TrainSpec.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.TrainSpec.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.TrainSpec.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.TrainSpec.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.TrainSpec.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.TrainSpec.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.TrainSpec.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.TrainSpec.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.TrainSpec.__new__": "tf.estimator.TrainSpec.__new__", + "tf.compat.v1.estimator.TrainSpec.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.TrainSpec.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.TrainSpec.hooks": "tf.estimator.TrainSpec.hooks", 
+ "tf.compat.v1.estimator.TrainSpec.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.TrainSpec.input_fn": "tf.estimator.TrainSpec.input_fn", + "tf.compat.v1.estimator.TrainSpec.max_steps": "tf.estimator.TrainSpec.max_steps", + "tf.compat.v1.estimator.VocabInfo": "tf.estimator.VocabInfo", + "tf.compat.v1.estimator.VocabInfo.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.VocabInfo.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.VocabInfo.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.VocabInfo.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.VocabInfo.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.VocabInfo.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.VocabInfo.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.VocabInfo.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.VocabInfo.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.VocabInfo.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.VocabInfo.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.VocabInfo.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.VocabInfo.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.VocabInfo.__new__": "tf.estimator.VocabInfo.__new__", + "tf.compat.v1.estimator.VocabInfo.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.VocabInfo.axis": "tf.estimator.VocabInfo.axis", + "tf.compat.v1.estimator.VocabInfo.backup_initializer": "tf.estimator.VocabInfo.backup_initializer", + "tf.compat.v1.estimator.VocabInfo.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.VocabInfo.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.VocabInfo.new_vocab": "tf.estimator.VocabInfo.new_vocab", + "tf.compat.v1.estimator.VocabInfo.new_vocab_size": "tf.estimator.VocabInfo.new_vocab_size", + "tf.compat.v1.estimator.VocabInfo.num_oov_buckets": "tf.estimator.VocabInfo.num_oov_buckets", + "tf.compat.v1.estimator.VocabInfo.old_vocab": "tf.estimator.VocabInfo.old_vocab", + "tf.compat.v1.estimator.VocabInfo.old_vocab_size": "tf.estimator.VocabInfo.old_vocab_size", + "tf.compat.v1.estimator.WarmStartSettings": "tf.estimator.WarmStartSettings", + "tf.compat.v1.estimator.WarmStartSettings.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.WarmStartSettings.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.WarmStartSettings.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.WarmStartSettings.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.WarmStartSettings.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.WarmStartSettings.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.WarmStartSettings.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.WarmStartSettings.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.WarmStartSettings.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.WarmStartSettings.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.WarmStartSettings.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.WarmStartSettings.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.WarmStartSettings.__ne__": 
"tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.WarmStartSettings.__new__": "tf.estimator.WarmStartSettings.__new__", + "tf.compat.v1.estimator.WarmStartSettings.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.WarmStartSettings.ckpt_to_initialize_from": "tf.estimator.WarmStartSettings.ckpt_to_initialize_from", + "tf.compat.v1.estimator.WarmStartSettings.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.WarmStartSettings.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.WarmStartSettings.var_name_to_prev_var_name": "tf.estimator.WarmStartSettings.var_name_to_prev_var_name", + "tf.compat.v1.estimator.WarmStartSettings.var_name_to_vocab_info": "tf.estimator.WarmStartSettings.var_name_to_vocab_info", + "tf.compat.v1.estimator.WarmStartSettings.vars_to_warm_start": "tf.estimator.WarmStartSettings.vars_to_warm_start", + "tf.compat.v1.estimator.add_metrics": "tf.estimator.add_metrics", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook": "tf.estimator.experimental.InMemoryEvaluatorHook", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__init__": "tf.estimator.experimental.InMemoryEvaluatorHook.__init__", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.after_create_session": "tf.estimator.experimental.InMemoryEvaluatorHook.after_create_session", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.after_run": "tf.estimator.experimental.InMemoryEvaluatorHook.after_run", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.before_run": "tf.estimator.SessionRunHook.before_run", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.begin": "tf.estimator.experimental.InMemoryEvaluatorHook.begin", + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.end": "tf.estimator.experimental.InMemoryEvaluatorHook.end", + "tf.compat.v1.estimator.experimental.KMeans.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.experimental.KMeans.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.experimental.KMeans.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.experimental.KMeans.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.experimental.KMeans.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.experimental.KMeans.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.experimental.KMeans.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.experimental.KMeans.config": "tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.experimental.KMeans.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.experimental.KMeans.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.compat.v1.estimator.experimental.KMeans.experimental_export_all_saved_models": 
"tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.experimental.KMeans.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.experimental.KMeans.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.experimental.KMeans.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.experimental.KMeans.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.experimental.KMeans.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.experimental.KMeans.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.experimental.KMeans.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.experimental.KMeans.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.experimental.KMeans.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.compat.v1.estimator.experimental.KMeans.train": "tf.compat.v1.estimator.Estimator.train", + "tf.compat.v1.estimator.experimental.LinearSDCA": "tf.estimator.experimental.LinearSDCA", + "tf.compat.v1.estimator.experimental.LinearSDCA.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.experimental.LinearSDCA.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.experimental.LinearSDCA.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.experimental.LinearSDCA.__init__": "tf.estimator.experimental.LinearSDCA.__init__", + "tf.compat.v1.estimator.experimental.LinearSDCA.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.experimental.LinearSDCA.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.experimental.LinearSDCA.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.experimental.LinearSDCA.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.experimental.LinearSDCA.get_train_step": "tf.estimator.experimental.LinearSDCA.get_train_step", + "tf.compat.v1.estimator.experimental.build_raw_supervised_input_receiver_fn": "tf.estimator.experimental.build_raw_supervised_input_receiver_fn", + "tf.compat.v1.estimator.experimental.call_logit_fn": "tf.estimator.experimental.call_logit_fn", + "tf.compat.v1.estimator.experimental.make_early_stopping_hook": "tf.estimator.experimental.make_early_stopping_hook", + "tf.compat.v1.estimator.experimental.make_stop_at_checkpoint_step_hook": "tf.estimator.experimental.make_stop_at_checkpoint_step_hook", + "tf.compat.v1.estimator.experimental.stop_if_higher_hook": "tf.estimator.experimental.stop_if_higher_hook", + "tf.compat.v1.estimator.experimental.stop_if_lower_hook": "tf.estimator.experimental.stop_if_lower_hook", + "tf.compat.v1.estimator.experimental.stop_if_no_decrease_hook": "tf.estimator.experimental.stop_if_no_decrease_hook", + "tf.compat.v1.estimator.experimental.stop_if_no_increase_hook": "tf.estimator.experimental.stop_if_no_increase_hook", + "tf.compat.v1.estimator.export.ClassificationOutput": "tf.estimator.export.ClassificationOutput", + "tf.compat.v1.estimator.export.ClassificationOutput.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.export.ClassificationOutput.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.export.ClassificationOutput.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.export.ClassificationOutput.__init__": "tf.estimator.export.ClassificationOutput.__init__", + 
"tf.compat.v1.estimator.export.ClassificationOutput.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.export.ClassificationOutput.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.export.ClassificationOutput.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.export.ClassificationOutput.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.export.ClassificationOutput.as_signature_def": "tf.estimator.export.ClassificationOutput.as_signature_def", + "tf.compat.v1.estimator.export.ClassificationOutput.classes": "tf.estimator.export.ClassificationOutput.classes", + "tf.compat.v1.estimator.export.ClassificationOutput.scores": "tf.estimator.export.ClassificationOutput.scores", + "tf.compat.v1.estimator.export.ExportOutput": "tf.estimator.export.ExportOutput", + "tf.compat.v1.estimator.export.ExportOutput.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.export.ExportOutput.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.export.ExportOutput.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.export.ExportOutput.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.export.ExportOutput.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.export.ExportOutput.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.export.ExportOutput.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.export.ExportOutput.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.export.ExportOutput.as_signature_def": "tf.estimator.export.ExportOutput.as_signature_def", + "tf.compat.v1.estimator.export.PredictOutput": "tf.estimator.export.PredictOutput", + "tf.compat.v1.estimator.export.PredictOutput.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.export.PredictOutput.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.export.PredictOutput.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.export.PredictOutput.__init__": "tf.estimator.export.PredictOutput.__init__", + "tf.compat.v1.estimator.export.PredictOutput.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.export.PredictOutput.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.export.PredictOutput.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.export.PredictOutput.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.export.PredictOutput.as_signature_def": "tf.estimator.export.PredictOutput.as_signature_def", + "tf.compat.v1.estimator.export.PredictOutput.outputs": "tf.estimator.export.PredictOutput.outputs", + "tf.compat.v1.estimator.export.RegressionOutput": "tf.estimator.export.RegressionOutput", + "tf.compat.v1.estimator.export.RegressionOutput.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.export.RegressionOutput.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.export.RegressionOutput.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.export.RegressionOutput.__init__": "tf.estimator.export.RegressionOutput.__init__", + "tf.compat.v1.estimator.export.RegressionOutput.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.export.RegressionOutput.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.export.RegressionOutput.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.export.RegressionOutput.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.export.RegressionOutput.as_signature_def": "tf.estimator.export.RegressionOutput.as_signature_def", + 
"tf.compat.v1.estimator.export.RegressionOutput.value": "tf.estimator.export.RegressionOutput.value", + "tf.compat.v1.estimator.export.ServingInputReceiver": "tf.estimator.export.ServingInputReceiver", + "tf.compat.v1.estimator.export.ServingInputReceiver.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__new__": "tf.estimator.export.ServingInputReceiver.__new__", + "tf.compat.v1.estimator.export.ServingInputReceiver.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.export.ServingInputReceiver.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.export.ServingInputReceiver.features": "tf.estimator.export.ServingInputReceiver.features", + "tf.compat.v1.estimator.export.ServingInputReceiver.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.export.ServingInputReceiver.receiver_tensors": "tf.estimator.export.ServingInputReceiver.receiver_tensors", + "tf.compat.v1.estimator.export.ServingInputReceiver.receiver_tensors_alternatives": "tf.estimator.export.ServingInputReceiver.receiver_tensors_alternatives", + "tf.compat.v1.estimator.export.TensorServingInputReceiver": "tf.estimator.export.TensorServingInputReceiver", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__len__": "tf.config.LogicalDevice.__len__", + 
"tf.compat.v1.estimator.export.TensorServingInputReceiver.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__new__": "tf.estimator.export.TensorServingInputReceiver.__new__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.features": "tf.estimator.export.TensorServingInputReceiver.features", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.receiver_tensors": "tf.estimator.export.TensorServingInputReceiver.receiver_tensors", + "tf.compat.v1.estimator.export.TensorServingInputReceiver.receiver_tensors_alternatives": "tf.estimator.export.TensorServingInputReceiver.receiver_tensors_alternatives", + "tf.compat.v1.estimator.export.build_parsing_serving_input_receiver_fn": "tf.estimator.export.build_parsing_serving_input_receiver_fn", + "tf.compat.v1.estimator.export.build_raw_serving_input_receiver_fn": "tf.estimator.export.build_raw_serving_input_receiver_fn", + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.tpu.RunConfig.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.tpu.RunConfig.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.tpu.RunConfig.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.tpu.RunConfig.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.tpu.RunConfig.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.tpu.RunConfig.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.tpu.RunConfig.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.tpu.RunConfig.cluster_spec": "tf.estimator.RunConfig.cluster_spec", + "tf.compat.v1.estimator.tpu.RunConfig.device_fn": "tf.estimator.RunConfig.device_fn", + "tf.compat.v1.estimator.tpu.RunConfig.eval_distribute": "tf.estimator.RunConfig.eval_distribute", + "tf.compat.v1.estimator.tpu.RunConfig.experimental_max_worker_delay_secs": "tf.estimator.RunConfig.experimental_max_worker_delay_secs", + "tf.compat.v1.estimator.tpu.RunConfig.global_id_in_cluster": "tf.estimator.RunConfig.global_id_in_cluster", + "tf.compat.v1.estimator.tpu.RunConfig.is_chief": "tf.estimator.RunConfig.is_chief", + "tf.compat.v1.estimator.tpu.RunConfig.keep_checkpoint_every_n_hours": "tf.estimator.RunConfig.keep_checkpoint_every_n_hours", + "tf.compat.v1.estimator.tpu.RunConfig.keep_checkpoint_max": 
"tf.estimator.RunConfig.keep_checkpoint_max", + "tf.compat.v1.estimator.tpu.RunConfig.log_step_count_steps": "tf.estimator.RunConfig.log_step_count_steps", + "tf.compat.v1.estimator.tpu.RunConfig.model_dir": "tf.estimator.RunConfig.model_dir", + "tf.compat.v1.estimator.tpu.RunConfig.num_ps_replicas": "tf.estimator.RunConfig.num_ps_replicas", + "tf.compat.v1.estimator.tpu.RunConfig.num_worker_replicas": "tf.estimator.RunConfig.num_worker_replicas", + "tf.compat.v1.estimator.tpu.RunConfig.protocol": "tf.estimator.RunConfig.protocol", + "tf.compat.v1.estimator.tpu.RunConfig.save_checkpoints_secs": "tf.estimator.RunConfig.save_checkpoints_secs", + "tf.compat.v1.estimator.tpu.RunConfig.save_checkpoints_steps": "tf.estimator.RunConfig.save_checkpoints_steps", + "tf.compat.v1.estimator.tpu.RunConfig.save_summary_steps": "tf.estimator.RunConfig.save_summary_steps", + "tf.compat.v1.estimator.tpu.RunConfig.service": "tf.estimator.RunConfig.service", + "tf.compat.v1.estimator.tpu.RunConfig.session_config": "tf.estimator.RunConfig.session_config", + "tf.compat.v1.estimator.tpu.RunConfig.session_creation_timeout_secs": "tf.estimator.RunConfig.session_creation_timeout_secs", + "tf.compat.v1.estimator.tpu.RunConfig.task_id": "tf.estimator.RunConfig.task_id", + "tf.compat.v1.estimator.tpu.RunConfig.task_type": "tf.estimator.RunConfig.task_type", + "tf.compat.v1.estimator.tpu.RunConfig.tf_random_seed": "tf.estimator.RunConfig.tf_random_seed", + "tf.compat.v1.estimator.tpu.RunConfig.train_distribute": "tf.estimator.RunConfig.train_distribute", + "tf.compat.v1.estimator.tpu.TPUConfig.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.tpu.TPUConfig.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.tpu.TPUConfig.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.tpu.TPUConfig.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.tpu.TPUConfig.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.tpu.TPUConfig.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.tpu.TPUConfig.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.tpu.TPUConfig.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.tpu.TPUConfig.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.tpu.TPUConfig.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.tpu.TPUConfig.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.tpu.TPUConfig.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.tpu.TPUConfig.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.tpu.TPUConfig.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.tpu.TPUConfig.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.tpu.TPUConfig.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.tpu.TPUEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.estimator.tpu.TPUEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.estimator.tpu.TPUEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.estimator.tpu.TPUEstimator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.estimator.tpu.TPUEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.estimator.tpu.TPUEstimator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.estimator.tpu.TPUEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.estimator.tpu.TPUEstimator.config": 
"tf.compat.v1.estimator.Estimator.config", + "tf.compat.v1.estimator.tpu.TPUEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.compat.v1.estimator.tpu.TPUEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.compat.v1.estimator.tpu.TPUEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.compat.v1.estimator.tpu.TPUEstimator.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.compat.v1.estimator.tpu.TPUEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.compat.v1.estimator.tpu.TPUEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.compat.v1.estimator.tpu.TPUEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.compat.v1.estimator.tpu.TPUEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.compat.v1.estimator.tpu.TPUEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.compat.v1.estimator.tpu.TPUEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__iter__": "tf.config.LogicalDevice.__iter__", 
+ "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.estimator.train_and_evaluate": "tf.estimator.train_and_evaluate", + "tf.compat.v1.exp": "tf.math.exp", + "tf.compat.v1.experimental.async_clear_error": "tf.experimental.async_clear_error", + "tf.compat.v1.experimental.async_scope": "tf.experimental.async_scope", + "tf.compat.v1.experimental.function_executor_type": "tf.experimental.function_executor_type", + "tf.compat.v1.expm1": "tf.math.expm1", + "tf.compat.v1.extract_volume_patches": "tf.extract_volume_patches", + "tf.compat.v1.eye": "tf.eye", + "tf.compat.v1.fake_quant_with_min_max_args": "tf.quantization.fake_quant_with_min_max_args", + "tf.compat.v1.fake_quant_with_min_max_args_gradient": "tf.quantization.fake_quant_with_min_max_args_gradient", + "tf.compat.v1.fake_quant_with_min_max_vars": "tf.quantization.fake_quant_with_min_max_vars", + "tf.compat.v1.fake_quant_with_min_max_vars_gradient": "tf.quantization.fake_quant_with_min_max_vars_gradient", + "tf.compat.v1.fake_quant_with_min_max_vars_per_channel": "tf.quantization.fake_quant_with_min_max_vars_per_channel", + "tf.compat.v1.fake_quant_with_min_max_vars_per_channel_gradient": "tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient", + "tf.compat.v1.feature_column.bucketized_column": "tf.feature_column.bucketized_column", + "tf.compat.v1.feature_column.categorical_column_with_hash_bucket": "tf.feature_column.categorical_column_with_hash_bucket", + "tf.compat.v1.feature_column.categorical_column_with_identity": "tf.feature_column.categorical_column_with_identity", + "tf.compat.v1.feature_column.categorical_column_with_vocabulary_list": "tf.feature_column.categorical_column_with_vocabulary_list", + "tf.compat.v1.feature_column.crossed_column": "tf.feature_column.crossed_column", + "tf.compat.v1.feature_column.embedding_column": "tf.feature_column.embedding_column", + "tf.compat.v1.feature_column.indicator_column": "tf.feature_column.indicator_column", + "tf.compat.v1.feature_column.numeric_column": "tf.feature_column.numeric_column", + "tf.compat.v1.feature_column.sequence_categorical_column_with_hash_bucket": "tf.feature_column.sequence_categorical_column_with_hash_bucket", + "tf.compat.v1.feature_column.sequence_categorical_column_with_identity": "tf.feature_column.sequence_categorical_column_with_identity", + "tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_file": "tf.feature_column.sequence_categorical_column_with_vocabulary_file", + "tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_list": "tf.feature_column.sequence_categorical_column_with_vocabulary_list", + "tf.compat.v1.feature_column.sequence_numeric_column": "tf.feature_column.sequence_numeric_column", + 
"tf.compat.v1.feature_column.weighted_categorical_column": "tf.feature_column.weighted_categorical_column", + "tf.compat.v1.fft": "tf.signal.fft", + "tf.compat.v1.fft2d": "tf.signal.fft2d", + "tf.compat.v1.fft3d": "tf.signal.fft3d", + "tf.compat.v1.fill": "tf.fill", + "tf.compat.v1.fingerprint": "tf.fingerprint", + "tf.compat.v1.flags.ArgumentParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.ArgumentParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.ArgumentParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.ArgumentParser.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.flags.ArgumentParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.ArgumentParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.ArgumentParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.ArgumentParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.ArgumentSerializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.ArgumentSerializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.ArgumentSerializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.ArgumentSerializer.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.flags.ArgumentSerializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.ArgumentSerializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.ArgumentSerializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.ArgumentSerializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.BaseListParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.BaseListParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.BaseListParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.BaseListParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.BaseListParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.BaseListParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.BaseListParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.BooleanFlag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.flags.BooleanFlag.__ge__": "tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.flags.BooleanFlag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.flags.BooleanFlag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.flags.BooleanFlag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.flags.BooleanFlag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.BooleanFlag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.BooleanFlag.flag_type": "tf.compat.v1.flags.Flag.flag_type", + "tf.compat.v1.flags.BooleanFlag.parse": "tf.compat.v1.flags.Flag.parse", + "tf.compat.v1.flags.BooleanFlag.serialize": "tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.flags.BooleanFlag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.flags.BooleanFlag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.flags.BooleanParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.BooleanParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.BooleanParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.BooleanParser.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.flags.BooleanParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.BooleanParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.BooleanParser.__ne__": "tf.keras.Model.__ne__", + 
"tf.compat.v1.flags.BooleanParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.CantOpenFlagFileError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.CantOpenFlagFileError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.CantOpenFlagFileError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.CantOpenFlagFileError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.CantOpenFlagFileError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.CantOpenFlagFileError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.CantOpenFlagFileError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.flags.CantOpenFlagFileError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.flags.CantOpenFlagFileError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.flags.CsvListSerializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.CsvListSerializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.CsvListSerializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.CsvListSerializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.CsvListSerializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.CsvListSerializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.CsvListSerializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.DEFINE_boolean": "tf.compat.v1.flags.DEFINE_bool", + "tf.compat.v1.flags.DuplicateFlagError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.DuplicateFlagError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.DuplicateFlagError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.DuplicateFlagError.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.flags.DuplicateFlagError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.DuplicateFlagError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.DuplicateFlagError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.DuplicateFlagError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.flags.DuplicateFlagError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.flags.DuplicateFlagError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.flags.EnumClassFlag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.flags.EnumClassFlag.__ge__": "tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.flags.EnumClassFlag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.flags.EnumClassFlag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.flags.EnumClassFlag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.flags.EnumClassFlag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.EnumClassFlag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.EnumClassFlag.flag_type": "tf.compat.v1.flags.Flag.flag_type", + "tf.compat.v1.flags.EnumClassFlag.parse": "tf.compat.v1.flags.Flag.parse", + "tf.compat.v1.flags.EnumClassFlag.serialize": "tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.flags.EnumClassFlag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.flags.EnumClassFlag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.flags.EnumClassParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.EnumClassParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.EnumClassParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.EnumClassParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.EnumClassParser.__lt__": 
"tf.keras.Model.__lt__", + "tf.compat.v1.flags.EnumClassParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.EnumClassParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.EnumFlag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.flags.EnumFlag.__ge__": "tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.flags.EnumFlag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.flags.EnumFlag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.flags.EnumFlag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.flags.EnumFlag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.EnumFlag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.EnumFlag.flag_type": "tf.compat.v1.flags.Flag.flag_type", + "tf.compat.v1.flags.EnumFlag.parse": "tf.compat.v1.flags.Flag.parse", + "tf.compat.v1.flags.EnumFlag.serialize": "tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.flags.EnumFlag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.flags.EnumFlag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.flags.EnumParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.EnumParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.EnumParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.EnumParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.EnumParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.EnumParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.EnumParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.Error.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.Error.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.Error.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.Error.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.flags.Error.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.Error.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.Error.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.Error.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.flags.Error.args": "tf.errors.AbortedError.args", + "tf.compat.v1.flags.Error.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.flags.Flag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.Flag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.flags.FlagValues.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.FlagValues.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.FlagValues.__gt__": "tf.keras.Model.__gt__", 
+ "tf.compat.v1.flags.FlagValues.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.FlagValues.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.FlagValues.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.FlagValues.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.FloatParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.FloatParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.FloatParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.FloatParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.FloatParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.FloatParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.FloatParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.IllegalFlagValueError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.IllegalFlagValueError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.IllegalFlagValueError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.IllegalFlagValueError.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.flags.IllegalFlagValueError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.IllegalFlagValueError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.IllegalFlagValueError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.IllegalFlagValueError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.flags.IllegalFlagValueError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.flags.IllegalFlagValueError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.flags.IntegerParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.IntegerParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.IntegerParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.IntegerParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.IntegerParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.IntegerParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.IntegerParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.IntegerParser.is_outside_bounds": "tf.compat.v1.flags.FloatParser.is_outside_bounds", + "tf.compat.v1.flags.IntegerParser.parse": "tf.compat.v1.flags.FloatParser.parse", + "tf.compat.v1.flags.ListParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.ListParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.ListParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.ListParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.ListParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.ListParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.ListParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.ListParser.flag_type": "tf.compat.v1.flags.BaseListParser.flag_type", + "tf.compat.v1.flags.ListSerializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.ListSerializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.ListSerializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.ListSerializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.ListSerializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.ListSerializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.ListSerializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.MultiEnumClassFlag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.flags.MultiEnumClassFlag.__ge__": 
"tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.flags.MultiEnumClassFlag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.flags.MultiEnumClassFlag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.flags.MultiEnumClassFlag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.flags.MultiEnumClassFlag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.MultiEnumClassFlag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.MultiEnumClassFlag.flag_type": "tf.compat.v1.flags.MultiFlag.flag_type", + "tf.compat.v1.flags.MultiEnumClassFlag.parse": "tf.compat.v1.flags.MultiFlag.parse", + "tf.compat.v1.flags.MultiEnumClassFlag.serialize": "tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.flags.MultiEnumClassFlag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.flags.MultiEnumClassFlag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.flags.MultiFlag.__eq__": "tf.compat.v1.flags.Flag.__eq__", + "tf.compat.v1.flags.MultiFlag.__ge__": "tf.compat.v1.flags.Flag.__ge__", + "tf.compat.v1.flags.MultiFlag.__gt__": "tf.compat.v1.flags.Flag.__gt__", + "tf.compat.v1.flags.MultiFlag.__le__": "tf.compat.v1.flags.Flag.__le__", + "tf.compat.v1.flags.MultiFlag.__lt__": "tf.compat.v1.flags.Flag.__lt__", + "tf.compat.v1.flags.MultiFlag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.MultiFlag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.MultiFlag.serialize": "tf.compat.v1.flags.Flag.serialize", + "tf.compat.v1.flags.MultiFlag.unparse": "tf.compat.v1.flags.Flag.unparse", + "tf.compat.v1.flags.MultiFlag.value": "tf.compat.v1.flags.Flag.value", + "tf.compat.v1.flags.UnparsedFlagAccessError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.UnparsedFlagAccessError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.UnparsedFlagAccessError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.UnparsedFlagAccessError.__init__": "tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.flags.UnparsedFlagAccessError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.UnparsedFlagAccessError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.UnparsedFlagAccessError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.UnparsedFlagAccessError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.flags.UnparsedFlagAccessError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.flags.UnparsedFlagAccessError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.flags.UnrecognizedFlagError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.UnrecognizedFlagError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.UnrecognizedFlagError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.UnrecognizedFlagError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.UnrecognizedFlagError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.UnrecognizedFlagError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.UnrecognizedFlagError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.flags.UnrecognizedFlagError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.flags.UnrecognizedFlagError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.flags.ValidationError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.ValidationError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.ValidationError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.ValidationError.__init__": 
"tf.compat.v1.flags.CantOpenFlagFileError.__init__", + "tf.compat.v1.flags.ValidationError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.ValidationError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.ValidationError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.ValidationError.__new__": "tf.errors.AbortedError.__new__", + "tf.compat.v1.flags.ValidationError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.flags.ValidationError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.WhitespaceSeparatedListParser.flag_type": "tf.compat.v1.flags.BaseListParser.flag_type", + "tf.compat.v1.flags.tf_decorator.TFDecorator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.tf_decorator.TFDecorator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.tf_decorator.TFDecorator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.tf_decorator.TFDecorator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.tf_decorator.TFDecorator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.tf_decorator.TFDecorator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.tf_decorator.TFDecorator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.tf_decorator.absolute_import": "tf.compat.v1.flags.absolute_import", + "tf.compat.v1.flags.tf_decorator.division": "tf.compat.v1.flags.division", + "tf.compat.v1.flags.tf_decorator.print_function": "tf.compat.v1.flags.print_function", + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__enter__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__enter__", + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__exit__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__exit__", + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.reset": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.reset", + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__le__": "tf.keras.Model.__le__", + 
"tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__new__": "tf.dtypes.DType.__new__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__new__": "tf.dtypes.DType.__new__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__enter__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__enter__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__exit__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__exit__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__enter__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__enter__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__exit__": "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__exit__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__ne__": "tf.keras.Model.__ne__", + 
"tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.flags.tf_decorator.tf_stack.absolute_import": "tf.compat.v1.flags.absolute_import", + "tf.compat.v1.flags.tf_decorator.tf_stack.division": "tf.compat.v1.flags.division", + "tf.compat.v1.flags.tf_decorator.tf_stack.print_function": "tf.compat.v1.flags.print_function", + "tf.compat.v1.float16": "tf.dtypes.float16", + "tf.compat.v1.float32": "tf.dtypes.float32", + "tf.compat.v1.float64": "tf.dtypes.double", + "tf.compat.v1.floor": "tf.math.floor", + "tf.compat.v1.floordiv": "tf.math.floordiv", + "tf.compat.v1.floormod": "tf.math.floormod", + "tf.compat.v1.function": "tf.function", + "tf.compat.v1.get_logger": "tf.get_logger", + "tf.compat.v1.get_static_value": "tf.get_static_value", + "tf.compat.v1.gfile.FastGFile.__enter__": "tf.io.gfile.GFile.__enter__", + "tf.compat.v1.gfile.FastGFile.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.gfile.FastGFile.__exit__": "tf.io.gfile.GFile.__exit__", + "tf.compat.v1.gfile.FastGFile.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.gfile.FastGFile.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.gfile.FastGFile.__iter__": "tf.io.gfile.GFile.__iter__", + "tf.compat.v1.gfile.FastGFile.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.gfile.FastGFile.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.gfile.FastGFile.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.gfile.FastGFile.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.gfile.FastGFile.close": "tf.io.gfile.GFile.close", + "tf.compat.v1.gfile.FastGFile.flush": "tf.io.gfile.GFile.flush", + "tf.compat.v1.gfile.FastGFile.mode": "tf.io.gfile.GFile.mode", + "tf.compat.v1.gfile.FastGFile.name": "tf.io.gfile.GFile.name", + "tf.compat.v1.gfile.FastGFile.next": "tf.io.gfile.GFile.next", + "tf.compat.v1.gfile.FastGFile.read": "tf.io.gfile.GFile.read", + "tf.compat.v1.gfile.FastGFile.readline": "tf.io.gfile.GFile.readline", + "tf.compat.v1.gfile.FastGFile.readlines": "tf.io.gfile.GFile.readlines", + "tf.compat.v1.gfile.FastGFile.seek": "tf.io.gfile.GFile.seek", + "tf.compat.v1.gfile.FastGFile.seekable": "tf.io.gfile.GFile.seekable", + "tf.compat.v1.gfile.FastGFile.size": "tf.io.gfile.GFile.size", + "tf.compat.v1.gfile.FastGFile.tell": "tf.io.gfile.GFile.tell", + "tf.compat.v1.gfile.FastGFile.write": "tf.io.gfile.GFile.write", + "tf.compat.v1.gfile.GFile": "tf.io.gfile.GFile", + "tf.compat.v1.gfile.GFile.__enter__": "tf.io.gfile.GFile.__enter__", + "tf.compat.v1.gfile.GFile.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.gfile.GFile.__exit__": "tf.io.gfile.GFile.__exit__", + "tf.compat.v1.gfile.GFile.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.gfile.GFile.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.gfile.GFile.__init__": "tf.io.gfile.GFile.__init__", + "tf.compat.v1.gfile.GFile.__iter__": "tf.io.gfile.GFile.__iter__", + "tf.compat.v1.gfile.GFile.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.gfile.GFile.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.gfile.GFile.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.gfile.GFile.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.gfile.GFile.close": "tf.io.gfile.GFile.close", + "tf.compat.v1.gfile.GFile.flush": "tf.io.gfile.GFile.flush", + "tf.compat.v1.gfile.GFile.mode": "tf.io.gfile.GFile.mode", + "tf.compat.v1.gfile.GFile.name": "tf.io.gfile.GFile.name", + "tf.compat.v1.gfile.GFile.next": "tf.io.gfile.GFile.next", + "tf.compat.v1.gfile.GFile.read": "tf.io.gfile.GFile.read", + 
"tf.compat.v1.gfile.GFile.readline": "tf.io.gfile.GFile.readline", + "tf.compat.v1.gfile.GFile.readlines": "tf.io.gfile.GFile.readlines", + "tf.compat.v1.gfile.GFile.seek": "tf.io.gfile.GFile.seek", + "tf.compat.v1.gfile.GFile.seekable": "tf.io.gfile.GFile.seekable", + "tf.compat.v1.gfile.GFile.size": "tf.io.gfile.GFile.size", + "tf.compat.v1.gfile.GFile.tell": "tf.io.gfile.GFile.tell", + "tf.compat.v1.gfile.GFile.write": "tf.io.gfile.GFile.write", + "tf.compat.v1.gfile.Open": "tf.io.gfile.GFile", + "tf.compat.v1.gfile.Open.__enter__": "tf.io.gfile.GFile.__enter__", + "tf.compat.v1.gfile.Open.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.gfile.Open.__exit__": "tf.io.gfile.GFile.__exit__", + "tf.compat.v1.gfile.Open.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.gfile.Open.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.gfile.Open.__init__": "tf.io.gfile.GFile.__init__", + "tf.compat.v1.gfile.Open.__iter__": "tf.io.gfile.GFile.__iter__", + "tf.compat.v1.gfile.Open.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.gfile.Open.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.gfile.Open.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.gfile.Open.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.gfile.Open.close": "tf.io.gfile.GFile.close", + "tf.compat.v1.gfile.Open.flush": "tf.io.gfile.GFile.flush", + "tf.compat.v1.gfile.Open.mode": "tf.io.gfile.GFile.mode", + "tf.compat.v1.gfile.Open.name": "tf.io.gfile.GFile.name", + "tf.compat.v1.gfile.Open.next": "tf.io.gfile.GFile.next", + "tf.compat.v1.gfile.Open.read": "tf.io.gfile.GFile.read", + "tf.compat.v1.gfile.Open.readline": "tf.io.gfile.GFile.readline", + "tf.compat.v1.gfile.Open.readlines": "tf.io.gfile.GFile.readlines", + "tf.compat.v1.gfile.Open.seek": "tf.io.gfile.GFile.seek", + "tf.compat.v1.gfile.Open.seekable": "tf.io.gfile.GFile.seekable", + "tf.compat.v1.gfile.Open.size": "tf.io.gfile.GFile.size", + "tf.compat.v1.gfile.Open.tell": "tf.io.gfile.GFile.tell", + "tf.compat.v1.gfile.Open.write": "tf.io.gfile.GFile.write", + "tf.compat.v1.global_norm": "tf.linalg.global_norm", + "tf.compat.v1.glorot_normal_initializer": "tf.compat.v1.keras.initializers.glorot_normal", + "tf.compat.v1.glorot_normal_initializer.__call__": "tf.compat.v1.keras.initializers.VarianceScaling.__call__", + "tf.compat.v1.glorot_normal_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.glorot_normal_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.glorot_normal_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.glorot_normal_initializer.__init__": "tf.compat.v1.keras.initializers.glorot_normal.__init__", + "tf.compat.v1.glorot_normal_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.glorot_normal_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.glorot_normal_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.glorot_normal_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.glorot_normal_initializer.get_config": "tf.compat.v1.keras.initializers.glorot_normal.get_config", + "tf.compat.v1.glorot_uniform_initializer": "tf.compat.v1.keras.initializers.glorot_uniform", + "tf.compat.v1.glorot_uniform_initializer.__call__": "tf.compat.v1.keras.initializers.VarianceScaling.__call__", + "tf.compat.v1.glorot_uniform_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.glorot_uniform_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.glorot_uniform_initializer.__gt__": "tf.keras.Model.__gt__", + 
"tf.compat.v1.glorot_uniform_initializer.__init__": "tf.compat.v1.keras.initializers.glorot_uniform.__init__", + "tf.compat.v1.glorot_uniform_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.glorot_uniform_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.glorot_uniform_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.glorot_uniform_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.glorot_uniform_initializer.get_config": "tf.compat.v1.keras.initializers.glorot_uniform.get_config", + "tf.compat.v1.grad_pass_through": "tf.grad_pass_through", + "tf.compat.v1.graph_util.import_graph_def": "tf.graph_util.import_graph_def", + "tf.compat.v1.greater": "tf.math.greater", + "tf.compat.v1.greater_equal": "tf.math.greater_equal", + "tf.compat.v1.group": "tf.group", + "tf.compat.v1.guarantee_const": "tf.guarantee_const", + "tf.compat.v1.half": "tf.dtypes.float16", + "tf.compat.v1.histogram_fixed_width": "tf.histogram_fixed_width", + "tf.compat.v1.histogram_fixed_width_bins": "tf.histogram_fixed_width_bins", + "tf.compat.v1.identity": "tf.identity", + "tf.compat.v1.identity_n": "tf.identity_n", + "tf.compat.v1.ifft": "tf.signal.ifft", + "tf.compat.v1.ifft2d": "tf.signal.ifft2d", + "tf.compat.v1.ifft3d": "tf.signal.ifft3d", + "tf.compat.v1.igamma": "tf.math.igamma", + "tf.compat.v1.igammac": "tf.math.igammac", + "tf.compat.v1.imag": "tf.math.imag", + "tf.compat.v1.image.ResizeMethod.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.image.ResizeMethod.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.image.ResizeMethod.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.image.ResizeMethod.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.image.ResizeMethod.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.image.ResizeMethod.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.image.ResizeMethod.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.image.ResizeMethod.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.image.adjust_brightness": "tf.image.adjust_brightness", + "tf.compat.v1.image.adjust_contrast": "tf.image.adjust_contrast", + "tf.compat.v1.image.adjust_gamma": "tf.image.adjust_gamma", + "tf.compat.v1.image.adjust_hue": "tf.image.adjust_hue", + "tf.compat.v1.image.adjust_jpeg_quality": "tf.image.adjust_jpeg_quality", + "tf.compat.v1.image.adjust_saturation": "tf.image.adjust_saturation", + "tf.compat.v1.image.central_crop": "tf.image.central_crop", + "tf.compat.v1.image.combined_non_max_suppression": "tf.image.combined_non_max_suppression", + "tf.compat.v1.image.convert_image_dtype": "tf.image.convert_image_dtype", + "tf.compat.v1.image.crop_to_bounding_box": "tf.image.crop_to_bounding_box", + "tf.compat.v1.image.decode_and_crop_jpeg": "tf.io.decode_and_crop_jpeg", + "tf.compat.v1.image.decode_bmp": "tf.io.decode_bmp", + "tf.compat.v1.image.decode_gif": "tf.io.decode_gif", + "tf.compat.v1.image.decode_image": "tf.io.decode_image", + "tf.compat.v1.image.decode_jpeg": "tf.io.decode_jpeg", + "tf.compat.v1.image.decode_png": "tf.io.decode_png", + "tf.compat.v1.image.encode_jpeg": "tf.io.encode_jpeg", + "tf.compat.v1.image.encode_png": "tf.image.encode_png", + "tf.compat.v1.image.extract_image_patches": "tf.compat.v1.extract_image_patches", + "tf.compat.v1.image.extract_jpeg_shape": "tf.io.extract_jpeg_shape", + "tf.compat.v1.image.extract_patches": "tf.image.extract_patches", + "tf.compat.v1.image.flip_left_right": "tf.image.flip_left_right", + "tf.compat.v1.image.flip_up_down": "tf.image.flip_up_down", 
+ "tf.compat.v1.image.generate_bounding_box_proposals": "tf.image.generate_bounding_box_proposals", + "tf.compat.v1.image.grayscale_to_rgb": "tf.image.grayscale_to_rgb", + "tf.compat.v1.image.hsv_to_rgb": "tf.image.hsv_to_rgb", + "tf.compat.v1.image.image_gradients": "tf.image.image_gradients", + "tf.compat.v1.image.is_jpeg": "tf.io.is_jpeg", + "tf.compat.v1.image.non_max_suppression": "tf.image.non_max_suppression", + "tf.compat.v1.image.non_max_suppression_overlaps": "tf.image.non_max_suppression_overlaps", + "tf.compat.v1.image.non_max_suppression_padded": "tf.image.non_max_suppression_padded", + "tf.compat.v1.image.non_max_suppression_with_scores": "tf.image.non_max_suppression_with_scores", + "tf.compat.v1.image.pad_to_bounding_box": "tf.image.pad_to_bounding_box", + "tf.compat.v1.image.per_image_standardization": "tf.image.per_image_standardization", + "tf.compat.v1.image.psnr": "tf.image.psnr", + "tf.compat.v1.image.random_brightness": "tf.image.random_brightness", + "tf.compat.v1.image.random_contrast": "tf.image.random_contrast", + "tf.compat.v1.image.random_crop": "tf.image.random_crop", + "tf.compat.v1.image.random_flip_left_right": "tf.image.random_flip_left_right", + "tf.compat.v1.image.random_flip_up_down": "tf.image.random_flip_up_down", + "tf.compat.v1.image.random_hue": "tf.image.random_hue", + "tf.compat.v1.image.random_jpeg_quality": "tf.image.random_jpeg_quality", + "tf.compat.v1.image.random_saturation": "tf.image.random_saturation", + "tf.compat.v1.image.resize_image_with_crop_or_pad": "tf.image.resize_with_crop_or_pad", + "tf.compat.v1.image.resize_images": "tf.compat.v1.image.resize", + "tf.compat.v1.image.resize_with_crop_or_pad": "tf.image.resize_with_crop_or_pad", + "tf.compat.v1.image.rgb_to_grayscale": "tf.image.rgb_to_grayscale", + "tf.compat.v1.image.rgb_to_hsv": "tf.image.rgb_to_hsv", + "tf.compat.v1.image.rgb_to_yiq": "tf.image.rgb_to_yiq", + "tf.compat.v1.image.rgb_to_yuv": "tf.image.rgb_to_yuv", + "tf.compat.v1.image.rot90": "tf.image.rot90", + "tf.compat.v1.image.sobel_edges": "tf.image.sobel_edges", + "tf.compat.v1.image.ssim": "tf.image.ssim", + "tf.compat.v1.image.ssim_multiscale": "tf.image.ssim_multiscale", + "tf.compat.v1.image.total_variation": "tf.image.total_variation", + "tf.compat.v1.image.transpose": "tf.image.transpose", + "tf.compat.v1.image.transpose_image": "tf.image.transpose", + "tf.compat.v1.image.yiq_to_rgb": "tf.image.yiq_to_rgb", + "tf.compat.v1.image.yuv_to_rgb": "tf.image.yuv_to_rgb", + "tf.compat.v1.import_graph_def": "tf.graph_util.import_graph_def", + "tf.compat.v1.init_scope": "tf.init_scope", + "tf.compat.v1.initializers.constant": "tf.compat.v1.keras.initializers.Constant", + "tf.compat.v1.initializers.constant.__call__": "tf.compat.v1.keras.initializers.Constant.__call__", + "tf.compat.v1.initializers.constant.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.constant.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.constant.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.constant.__init__": "tf.compat.v1.keras.initializers.Constant.__init__", + "tf.compat.v1.initializers.constant.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.constant.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.constant.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.constant.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.constant.get_config": "tf.compat.v1.keras.initializers.Constant.get_config", + 
"tf.compat.v1.initializers.global_variables": "tf.compat.v1.global_variables_initializer", + "tf.compat.v1.initializers.glorot_normal": "tf.compat.v1.keras.initializers.glorot_normal", + "tf.compat.v1.initializers.glorot_normal.__call__": "tf.compat.v1.keras.initializers.VarianceScaling.__call__", + "tf.compat.v1.initializers.glorot_normal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.glorot_normal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.glorot_normal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.glorot_normal.__init__": "tf.compat.v1.keras.initializers.glorot_normal.__init__", + "tf.compat.v1.initializers.glorot_normal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.glorot_normal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.glorot_normal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.glorot_normal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.glorot_normal.get_config": "tf.compat.v1.keras.initializers.glorot_normal.get_config", + "tf.compat.v1.initializers.glorot_uniform": "tf.compat.v1.keras.initializers.glorot_uniform", + "tf.compat.v1.initializers.glorot_uniform.__call__": "tf.compat.v1.keras.initializers.VarianceScaling.__call__", + "tf.compat.v1.initializers.glorot_uniform.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.glorot_uniform.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.glorot_uniform.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.glorot_uniform.__init__": "tf.compat.v1.keras.initializers.glorot_uniform.__init__", + "tf.compat.v1.initializers.glorot_uniform.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.glorot_uniform.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.glorot_uniform.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.glorot_uniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.glorot_uniform.get_config": "tf.compat.v1.keras.initializers.glorot_uniform.get_config", + "tf.compat.v1.initializers.he_normal": "tf.compat.v1.keras.initializers.he_normal", + "tf.compat.v1.initializers.he_uniform": "tf.compat.v1.keras.initializers.he_uniform", + "tf.compat.v1.initializers.identity": "tf.compat.v1.keras.initializers.Identity", + "tf.compat.v1.initializers.identity.__call__": "tf.compat.v1.keras.initializers.Identity.__call__", + "tf.compat.v1.initializers.identity.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.identity.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.identity.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.identity.__init__": "tf.compat.v1.keras.initializers.Identity.__init__", + "tf.compat.v1.initializers.identity.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.identity.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.identity.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.identity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.identity.get_config": "tf.compat.v1.keras.initializers.Identity.get_config", + "tf.compat.v1.initializers.lecun_normal": "tf.compat.v1.keras.initializers.lecun_normal", + "tf.compat.v1.initializers.lecun_uniform": "tf.compat.v1.keras.initializers.lecun_uniform", + "tf.compat.v1.initializers.local_variables": "tf.compat.v1.local_variables_initializer", + "tf.compat.v1.initializers.ones": "tf.compat.v1.keras.initializers.Ones", + 
"tf.compat.v1.initializers.ones.__call__": "tf.compat.v1.keras.initializers.Ones.__call__", + "tf.compat.v1.initializers.ones.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.ones.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.ones.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.ones.__init__": "tf.compat.v1.keras.initializers.Ones.__init__", + "tf.compat.v1.initializers.ones.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.ones.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.ones.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.ones.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.ones.get_config": "tf.compat.v1.keras.initializers.Ones.get_config", + "tf.compat.v1.initializers.orthogonal": "tf.compat.v1.keras.initializers.Orthogonal", + "tf.compat.v1.initializers.orthogonal.__call__": "tf.compat.v1.keras.initializers.Orthogonal.__call__", + "tf.compat.v1.initializers.orthogonal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.orthogonal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.orthogonal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.orthogonal.__init__": "tf.compat.v1.keras.initializers.Orthogonal.__init__", + "tf.compat.v1.initializers.orthogonal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.orthogonal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.orthogonal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.orthogonal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.orthogonal.get_config": "tf.compat.v1.keras.initializers.Orthogonal.get_config", + "tf.compat.v1.initializers.random_normal": "tf.compat.v1.random_normal_initializer", + "tf.compat.v1.initializers.random_normal.__call__": "tf.compat.v1.random_normal_initializer.__call__", + "tf.compat.v1.initializers.random_normal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.random_normal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.random_normal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.random_normal.__init__": "tf.compat.v1.random_normal_initializer.__init__", + "tf.compat.v1.initializers.random_normal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.random_normal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.random_normal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.random_normal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.random_normal.get_config": "tf.compat.v1.random_normal_initializer.get_config", + "tf.compat.v1.initializers.random_uniform": "tf.compat.v1.random_uniform_initializer", + "tf.compat.v1.initializers.random_uniform.__call__": "tf.compat.v1.random_uniform_initializer.__call__", + "tf.compat.v1.initializers.random_uniform.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.random_uniform.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.random_uniform.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.random_uniform.__init__": "tf.compat.v1.random_uniform_initializer.__init__", + "tf.compat.v1.initializers.random_uniform.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.random_uniform.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.random_uniform.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.random_uniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.compat.v1.initializers.random_uniform.get_config": "tf.compat.v1.random_uniform_initializer.get_config", + "tf.compat.v1.initializers.tables_initializer": "tf.compat.v1.tables_initializer", + "tf.compat.v1.initializers.truncated_normal": "tf.compat.v1.truncated_normal_initializer", + "tf.compat.v1.initializers.truncated_normal.__call__": "tf.compat.v1.truncated_normal_initializer.__call__", + "tf.compat.v1.initializers.truncated_normal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.truncated_normal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.truncated_normal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.truncated_normal.__init__": "tf.compat.v1.truncated_normal_initializer.__init__", + "tf.compat.v1.initializers.truncated_normal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.truncated_normal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.truncated_normal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.truncated_normal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.truncated_normal.get_config": "tf.compat.v1.truncated_normal_initializer.get_config", + "tf.compat.v1.initializers.uniform_unit_scaling": "tf.compat.v1.uniform_unit_scaling_initializer", + "tf.compat.v1.initializers.uniform_unit_scaling.__call__": "tf.compat.v1.uniform_unit_scaling_initializer.__call__", + "tf.compat.v1.initializers.uniform_unit_scaling.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.uniform_unit_scaling.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.uniform_unit_scaling.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.uniform_unit_scaling.__init__": "tf.compat.v1.uniform_unit_scaling_initializer.__init__", + "tf.compat.v1.initializers.uniform_unit_scaling.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.uniform_unit_scaling.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.uniform_unit_scaling.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.uniform_unit_scaling.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.uniform_unit_scaling.get_config": "tf.compat.v1.uniform_unit_scaling_initializer.get_config", + "tf.compat.v1.initializers.variables": "tf.compat.v1.variables_initializer", + "tf.compat.v1.initializers.variance_scaling": "tf.compat.v1.keras.initializers.VarianceScaling", + "tf.compat.v1.initializers.variance_scaling.__call__": "tf.compat.v1.keras.initializers.VarianceScaling.__call__", + "tf.compat.v1.initializers.variance_scaling.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.variance_scaling.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.variance_scaling.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.variance_scaling.__init__": "tf.compat.v1.keras.initializers.VarianceScaling.__init__", + "tf.compat.v1.initializers.variance_scaling.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.variance_scaling.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.variance_scaling.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.variance_scaling.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.variance_scaling.get_config": "tf.compat.v1.keras.initializers.VarianceScaling.get_config", + "tf.compat.v1.initializers.zeros": "tf.compat.v1.keras.initializers.Zeros", + "tf.compat.v1.initializers.zeros.__call__": "tf.compat.v1.keras.initializers.Zeros.__call__", + 
"tf.compat.v1.initializers.zeros.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.initializers.zeros.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.initializers.zeros.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.initializers.zeros.__init__": "tf.compat.v1.keras.initializers.Zeros.__init__", + "tf.compat.v1.initializers.zeros.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.initializers.zeros.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.initializers.zeros.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.initializers.zeros.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.initializers.zeros.get_config": "tf.compat.v1.keras.initializers.Zeros.get_config", + "tf.compat.v1.int16": "tf.dtypes.int16", + "tf.compat.v1.int32": "tf.dtypes.int32", + "tf.compat.v1.int64": "tf.dtypes.int64", + "tf.compat.v1.int8": "tf.dtypes.int8", + "tf.compat.v1.invert_permutation": "tf.math.invert_permutation", + "tf.compat.v1.io.FixedLenFeature": "tf.io.FixedLenFeature", + "tf.compat.v1.io.FixedLenFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.FixedLenFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.FixedLenFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.FixedLenFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.FixedLenFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.FixedLenFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.FixedLenFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.FixedLenFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.io.FixedLenFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.FixedLenFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.io.FixedLenFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.FixedLenFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.io.FixedLenFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.io.FixedLenFeature.__new__": "tf.io.FixedLenFeature.__new__", + "tf.compat.v1.io.FixedLenFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.FixedLenFeature.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.FixedLenFeature.default_value": "tf.io.FixedLenFeature.default_value", + "tf.compat.v1.io.FixedLenFeature.dtype": "tf.io.FixedLenFeature.dtype", + "tf.compat.v1.io.FixedLenFeature.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.FixedLenFeature.shape": "tf.io.FixedLenFeature.shape", + "tf.compat.v1.io.FixedLenSequenceFeature": "tf.io.FixedLenSequenceFeature", + "tf.compat.v1.io.FixedLenSequenceFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.FixedLenSequenceFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.FixedLenSequenceFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.FixedLenSequenceFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.FixedLenSequenceFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.FixedLenSequenceFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.FixedLenSequenceFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.FixedLenSequenceFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.io.FixedLenSequenceFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.FixedLenSequenceFeature.__len__": 
"tf.config.LogicalDevice.__len__", + "tf.compat.v1.io.FixedLenSequenceFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.FixedLenSequenceFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.io.FixedLenSequenceFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.io.FixedLenSequenceFeature.__new__": "tf.io.FixedLenSequenceFeature.__new__", + "tf.compat.v1.io.FixedLenSequenceFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.FixedLenSequenceFeature.allow_missing": "tf.io.FixedLenSequenceFeature.allow_missing", + "tf.compat.v1.io.FixedLenSequenceFeature.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.FixedLenSequenceFeature.default_value": "tf.io.FixedLenSequenceFeature.default_value", + "tf.compat.v1.io.FixedLenSequenceFeature.dtype": "tf.io.FixedLenSequenceFeature.dtype", + "tf.compat.v1.io.FixedLenSequenceFeature.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.FixedLenSequenceFeature.shape": "tf.io.FixedLenSequenceFeature.shape", + "tf.compat.v1.io.PaddingFIFOQueue": "tf.queue.PaddingFIFOQueue", + "tf.compat.v1.io.PaddingFIFOQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.io.PaddingFIFOQueue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.io.PaddingFIFOQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.io.PaddingFIFOQueue.__init__": "tf.queue.PaddingFIFOQueue.__init__", + "tf.compat.v1.io.PaddingFIFOQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.io.PaddingFIFOQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.io.PaddingFIFOQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.io.PaddingFIFOQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.io.PaddingFIFOQueue.close": "tf.queue.QueueBase.close", + "tf.compat.v1.io.PaddingFIFOQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.io.PaddingFIFOQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.io.PaddingFIFOQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.io.PaddingFIFOQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.io.PaddingFIFOQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.io.PaddingFIFOQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.io.PaddingFIFOQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.io.PaddingFIFOQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.io.PaddingFIFOQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.io.PaddingFIFOQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.io.PaddingFIFOQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.io.PaddingFIFOQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.io.PaddingFIFOQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.io.PriorityQueue": "tf.queue.PriorityQueue", + "tf.compat.v1.io.PriorityQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.io.PriorityQueue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.io.PriorityQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.io.PriorityQueue.__init__": "tf.queue.PriorityQueue.__init__", + "tf.compat.v1.io.PriorityQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.io.PriorityQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.io.PriorityQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.io.PriorityQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.io.PriorityQueue.close": "tf.queue.QueueBase.close", + "tf.compat.v1.io.PriorityQueue.dequeue": "tf.queue.QueueBase.dequeue", + 
"tf.compat.v1.io.PriorityQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.io.PriorityQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.io.PriorityQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.io.PriorityQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.io.PriorityQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.io.PriorityQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.io.PriorityQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.io.PriorityQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.io.PriorityQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.io.PriorityQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.io.PriorityQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.io.PriorityQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.io.QueueBase": "tf.queue.QueueBase", + "tf.compat.v1.io.QueueBase.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.io.QueueBase.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.io.QueueBase.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.io.QueueBase.__init__": "tf.queue.QueueBase.__init__", + "tf.compat.v1.io.QueueBase.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.io.QueueBase.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.io.QueueBase.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.io.QueueBase.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.io.QueueBase.close": "tf.queue.QueueBase.close", + "tf.compat.v1.io.QueueBase.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.io.QueueBase.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.io.QueueBase.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.io.QueueBase.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.io.QueueBase.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.io.QueueBase.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.io.QueueBase.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.io.QueueBase.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.io.QueueBase.name": "tf.queue.QueueBase.name", + "tf.compat.v1.io.QueueBase.names": "tf.queue.QueueBase.names", + "tf.compat.v1.io.QueueBase.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.io.QueueBase.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.io.QueueBase.size": "tf.queue.QueueBase.size", + "tf.compat.v1.io.RaggedFeature": "tf.io.RaggedFeature", + "tf.compat.v1.io.RaggedFeature.RowLengths": "tf.io.RaggedFeature.RowLengths", + "tf.compat.v1.io.RaggedFeature.RowLengths.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__len__": "tf.config.LogicalDevice.__len__", + 
"tf.compat.v1.io.RaggedFeature.RowLengths.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__new__": "tf.io.RaggedFeature.RowLengths.__new__", + "tf.compat.v1.io.RaggedFeature.RowLengths.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.RaggedFeature.RowLengths.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.RaggedFeature.RowLengths.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.RaggedFeature.RowLengths.key": "tf.io.RaggedFeature.RowLengths.key", + "tf.compat.v1.io.RaggedFeature.RowLimits": "tf.io.RaggedFeature.RowLimits", + "tf.compat.v1.io.RaggedFeature.RowLimits.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__new__": "tf.io.RaggedFeature.RowLimits.__new__", + "tf.compat.v1.io.RaggedFeature.RowLimits.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.RaggedFeature.RowLimits.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.RaggedFeature.RowLimits.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.RaggedFeature.RowLimits.key": "tf.io.RaggedFeature.RowLimits.key", + "tf.compat.v1.io.RaggedFeature.RowSplits": "tf.io.RaggedFeature.RowSplits", + "tf.compat.v1.io.RaggedFeature.RowSplits.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__mul__": "tf.config.LogicalDevice.__mul__", + 
"tf.compat.v1.io.RaggedFeature.RowSplits.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__new__": "tf.io.RaggedFeature.RowSplits.__new__", + "tf.compat.v1.io.RaggedFeature.RowSplits.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.RaggedFeature.RowSplits.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.RaggedFeature.RowSplits.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.RaggedFeature.RowSplits.key": "tf.io.RaggedFeature.RowSplits.key", + "tf.compat.v1.io.RaggedFeature.RowStarts": "tf.io.RaggedFeature.RowStarts", + "tf.compat.v1.io.RaggedFeature.RowStarts.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__new__": "tf.io.RaggedFeature.RowStarts.__new__", + "tf.compat.v1.io.RaggedFeature.RowStarts.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.RaggedFeature.RowStarts.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.RaggedFeature.RowStarts.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.RaggedFeature.RowStarts.key": "tf.io.RaggedFeature.RowStarts.key", + "tf.compat.v1.io.RaggedFeature.UniformRowLength": "tf.io.RaggedFeature.UniformRowLength", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__ne__": "tf.config.LogicalDevice.__ne__", + 
"tf.compat.v1.io.RaggedFeature.UniformRowLength.__new__": "tf.io.RaggedFeature.UniformRowLength.__new__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.RaggedFeature.UniformRowLength.length": "tf.io.RaggedFeature.UniformRowLength.length", + "tf.compat.v1.io.RaggedFeature.ValueRowIds": "tf.io.RaggedFeature.ValueRowIds", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__new__": "tf.io.RaggedFeature.ValueRowIds.__new__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.RaggedFeature.ValueRowIds.key": "tf.io.RaggedFeature.ValueRowIds.key", + "tf.compat.v1.io.RaggedFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.RaggedFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.RaggedFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.RaggedFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.RaggedFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.RaggedFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.RaggedFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.RaggedFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.io.RaggedFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.RaggedFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.io.RaggedFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.RaggedFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.io.RaggedFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.io.RaggedFeature.__new__": "tf.io.RaggedFeature.__new__", + "tf.compat.v1.io.RaggedFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.RaggedFeature.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.RaggedFeature.dtype": "tf.io.RaggedFeature.dtype", + 
"tf.compat.v1.io.RaggedFeature.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.RaggedFeature.partitions": "tf.io.RaggedFeature.partitions", + "tf.compat.v1.io.RaggedFeature.row_splits_dtype": "tf.io.RaggedFeature.row_splits_dtype", + "tf.compat.v1.io.RaggedFeature.validate": "tf.io.RaggedFeature.validate", + "tf.compat.v1.io.RaggedFeature.value_key": "tf.io.RaggedFeature.value_key", + "tf.compat.v1.io.RandomShuffleQueue": "tf.queue.RandomShuffleQueue", + "tf.compat.v1.io.RandomShuffleQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.io.RandomShuffleQueue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.io.RandomShuffleQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.io.RandomShuffleQueue.__init__": "tf.queue.RandomShuffleQueue.__init__", + "tf.compat.v1.io.RandomShuffleQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.io.RandomShuffleQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.io.RandomShuffleQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.io.RandomShuffleQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.io.RandomShuffleQueue.close": "tf.queue.QueueBase.close", + "tf.compat.v1.io.RandomShuffleQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.io.RandomShuffleQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.io.RandomShuffleQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.io.RandomShuffleQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.io.RandomShuffleQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.io.RandomShuffleQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.io.RandomShuffleQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.io.RandomShuffleQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.io.RandomShuffleQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.io.RandomShuffleQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.io.RandomShuffleQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.io.RandomShuffleQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.io.RandomShuffleQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.io.SparseFeature": "tf.io.SparseFeature", + "tf.compat.v1.io.SparseFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.SparseFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.SparseFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.SparseFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.SparseFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.SparseFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.SparseFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.SparseFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.io.SparseFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.SparseFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.io.SparseFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.SparseFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.io.SparseFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.io.SparseFeature.__new__": "tf.io.SparseFeature.__new__", + "tf.compat.v1.io.SparseFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.SparseFeature.already_sorted": "tf.io.SparseFeature.already_sorted", + 
"tf.compat.v1.io.SparseFeature.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.SparseFeature.dtype": "tf.io.SparseFeature.dtype", + "tf.compat.v1.io.SparseFeature.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.SparseFeature.index_key": "tf.io.SparseFeature.index_key", + "tf.compat.v1.io.SparseFeature.size": "tf.io.SparseFeature.size", + "tf.compat.v1.io.SparseFeature.value_key": "tf.io.SparseFeature.value_key", + "tf.compat.v1.io.TFRecordCompressionType.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.io.TFRecordCompressionType.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.io.TFRecordCompressionType.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.io.TFRecordCompressionType.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.TFRecordCompressionType.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.io.TFRecordCompressionType.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.io.TFRecordCompressionType.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.io.TFRecordCompressionType.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.io.TFRecordOptions": "tf.io.TFRecordOptions", + "tf.compat.v1.io.TFRecordOptions.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.io.TFRecordOptions.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.io.TFRecordOptions.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.io.TFRecordOptions.__init__": "tf.io.TFRecordOptions.__init__", + "tf.compat.v1.io.TFRecordOptions.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.io.TFRecordOptions.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.io.TFRecordOptions.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.io.TFRecordOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.io.TFRecordOptions.compression_type_map": "tf.io.TFRecordOptions.compression_type_map", + "tf.compat.v1.io.TFRecordWriter": "tf.io.TFRecordWriter", + "tf.compat.v1.io.TFRecordWriter.__enter__": "tf.io.TFRecordWriter.__enter__", + "tf.compat.v1.io.TFRecordWriter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.io.TFRecordWriter.__exit__": "tf.io.TFRecordWriter.__exit__", + "tf.compat.v1.io.TFRecordWriter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.io.TFRecordWriter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.io.TFRecordWriter.__init__": "tf.io.TFRecordWriter.__init__", + "tf.compat.v1.io.TFRecordWriter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.io.TFRecordWriter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.io.TFRecordWriter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.io.TFRecordWriter.__new__": "tf.dtypes.DType.__new__", + "tf.compat.v1.io.TFRecordWriter.close": "tf.io.TFRecordWriter.close", + "tf.compat.v1.io.TFRecordWriter.flush": "tf.io.TFRecordWriter.flush", + "tf.compat.v1.io.TFRecordWriter.write": "tf.io.TFRecordWriter.write", + "tf.compat.v1.io.VarLenFeature": "tf.io.VarLenFeature", + "tf.compat.v1.io.VarLenFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.io.VarLenFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.io.VarLenFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.io.VarLenFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.io.VarLenFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.io.VarLenFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.io.VarLenFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.io.VarLenFeature.__iter__": "tf.config.LogicalDevice.__iter__", + 
"tf.compat.v1.io.VarLenFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.io.VarLenFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.io.VarLenFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.io.VarLenFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.io.VarLenFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.io.VarLenFeature.__new__": "tf.io.VarLenFeature.__new__", + "tf.compat.v1.io.VarLenFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.io.VarLenFeature.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.io.VarLenFeature.dtype": "tf.io.VarLenFeature.dtype", + "tf.compat.v1.io.VarLenFeature.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.io.decode_and_crop_jpeg": "tf.io.decode_and_crop_jpeg", + "tf.compat.v1.io.decode_base64": "tf.io.decode_base64", + "tf.compat.v1.io.decode_bmp": "tf.io.decode_bmp", + "tf.compat.v1.io.decode_compressed": "tf.io.decode_compressed", + "tf.compat.v1.io.decode_csv": "tf.compat.v1.decode_csv", + "tf.compat.v1.io.decode_gif": "tf.io.decode_gif", + "tf.compat.v1.io.decode_image": "tf.io.decode_image", + "tf.compat.v1.io.decode_jpeg": "tf.io.decode_jpeg", + "tf.compat.v1.io.decode_json_example": "tf.io.decode_json_example", + "tf.compat.v1.io.decode_png": "tf.io.decode_png", + "tf.compat.v1.io.decode_proto": "tf.io.decode_proto", + "tf.compat.v1.io.decode_raw": "tf.compat.v1.decode_raw", + "tf.compat.v1.io.deserialize_many_sparse": "tf.io.deserialize_many_sparse", + "tf.compat.v1.io.encode_base64": "tf.io.encode_base64", + "tf.compat.v1.io.encode_jpeg": "tf.io.encode_jpeg", + "tf.compat.v1.io.encode_proto": "tf.io.encode_proto", + "tf.compat.v1.io.extract_jpeg_shape": "tf.io.extract_jpeg_shape", + "tf.compat.v1.io.gfile.GFile": "tf.io.gfile.GFile", + "tf.compat.v1.io.gfile.GFile.__enter__": "tf.io.gfile.GFile.__enter__", + "tf.compat.v1.io.gfile.GFile.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.io.gfile.GFile.__exit__": "tf.io.gfile.GFile.__exit__", + "tf.compat.v1.io.gfile.GFile.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.io.gfile.GFile.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.io.gfile.GFile.__init__": "tf.io.gfile.GFile.__init__", + "tf.compat.v1.io.gfile.GFile.__iter__": "tf.io.gfile.GFile.__iter__", + "tf.compat.v1.io.gfile.GFile.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.io.gfile.GFile.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.io.gfile.GFile.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.io.gfile.GFile.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.io.gfile.GFile.close": "tf.io.gfile.GFile.close", + "tf.compat.v1.io.gfile.GFile.flush": "tf.io.gfile.GFile.flush", + "tf.compat.v1.io.gfile.GFile.mode": "tf.io.gfile.GFile.mode", + "tf.compat.v1.io.gfile.GFile.name": "tf.io.gfile.GFile.name", + "tf.compat.v1.io.gfile.GFile.next": "tf.io.gfile.GFile.next", + "tf.compat.v1.io.gfile.GFile.read": "tf.io.gfile.GFile.read", + "tf.compat.v1.io.gfile.GFile.readline": "tf.io.gfile.GFile.readline", + "tf.compat.v1.io.gfile.GFile.readlines": "tf.io.gfile.GFile.readlines", + "tf.compat.v1.io.gfile.GFile.seek": "tf.io.gfile.GFile.seek", + "tf.compat.v1.io.gfile.GFile.seekable": "tf.io.gfile.GFile.seekable", + "tf.compat.v1.io.gfile.GFile.size": "tf.io.gfile.GFile.size", + "tf.compat.v1.io.gfile.GFile.tell": "tf.io.gfile.GFile.tell", + "tf.compat.v1.io.gfile.GFile.write": "tf.io.gfile.GFile.write", + "tf.compat.v1.io.gfile.copy": "tf.io.gfile.copy", + "tf.compat.v1.io.gfile.exists": 
"tf.io.gfile.exists", + "tf.compat.v1.io.gfile.glob": "tf.io.gfile.glob", + "tf.compat.v1.io.gfile.isdir": "tf.io.gfile.isdir", + "tf.compat.v1.io.gfile.listdir": "tf.io.gfile.listdir", + "tf.compat.v1.io.gfile.makedirs": "tf.io.gfile.makedirs", + "tf.compat.v1.io.gfile.mkdir": "tf.io.gfile.mkdir", + "tf.compat.v1.io.gfile.remove": "tf.io.gfile.remove", + "tf.compat.v1.io.gfile.rename": "tf.io.gfile.rename", + "tf.compat.v1.io.gfile.rmtree": "tf.io.gfile.rmtree", + "tf.compat.v1.io.gfile.stat": "tf.io.gfile.stat", + "tf.compat.v1.io.gfile.walk": "tf.io.gfile.walk", + "tf.compat.v1.io.is_jpeg": "tf.io.is_jpeg", + "tf.compat.v1.io.match_filenames_once": "tf.io.match_filenames_once", + "tf.compat.v1.io.matching_files": "tf.io.matching_files", + "tf.compat.v1.io.parse_example": "tf.compat.v1.parse_example", + "tf.compat.v1.io.parse_sequence_example": "tf.io.parse_sequence_example", + "tf.compat.v1.io.parse_single_example": "tf.compat.v1.parse_single_example", + "tf.compat.v1.io.parse_single_sequence_example": "tf.io.parse_single_sequence_example", + "tf.compat.v1.io.parse_tensor": "tf.io.parse_tensor", + "tf.compat.v1.io.read_file": "tf.io.read_file", + "tf.compat.v1.io.serialize_many_sparse": "tf.compat.v1.serialize_many_sparse", + "tf.compat.v1.io.serialize_sparse": "tf.compat.v1.serialize_sparse", + "tf.compat.v1.io.serialize_tensor": "tf.io.serialize_tensor", + "tf.compat.v1.io.write_file": "tf.io.write_file", + "tf.compat.v1.io.write_graph": "tf.io.write_graph", + "tf.compat.v1.is_finite": "tf.math.is_finite", + "tf.compat.v1.is_inf": "tf.math.is_inf", + "tf.compat.v1.is_nan": "tf.math.is_nan", + "tf.compat.v1.is_non_decreasing": "tf.math.is_non_decreasing", + "tf.compat.v1.is_numeric_tensor": "tf.debugging.is_numeric_tensor", + "tf.compat.v1.is_strictly_increasing": "tf.math.is_strictly_increasing", + "tf.compat.v1.is_tensor": "tf.is_tensor", + "tf.compat.v1.keras.Input": "tf.keras.Input", + "tf.compat.v1.keras.Model": "tf.keras.Model", + "tf.compat.v1.keras.Model.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.Model.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.Model.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.Model.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.Model.__init__": "tf.keras.Model.__init__", + "tf.compat.v1.keras.Model.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.Model.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.Model.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.Model.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.Model.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.Model.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.Model.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.Model.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.Model.build": "tf.keras.Model.build", + "tf.compat.v1.keras.Model.call": "tf.keras.Model.call", + "tf.compat.v1.keras.Model.compile": "tf.keras.Model.compile", + "tf.compat.v1.keras.Model.compute_mask": "tf.keras.Model.compute_mask", + "tf.compat.v1.keras.Model.compute_output_shape": "tf.keras.Model.compute_output_shape", + "tf.compat.v1.keras.Model.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.Model.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.Model.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.compat.v1.keras.Model.dtype": 
"tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.Model.dynamic": "tf.keras.Model.dynamic", + "tf.compat.v1.keras.Model.evaluate": "tf.keras.Model.evaluate", + "tf.compat.v1.keras.Model.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.compat.v1.keras.Model.fit": "tf.keras.Model.fit", + "tf.compat.v1.keras.Model.fit_generator": "tf.keras.Model.fit_generator", + "tf.compat.v1.keras.Model.get_config": "tf.keras.Model.get_config", + "tf.compat.v1.keras.Model.get_layer": "tf.keras.Model.get_layer", + "tf.compat.v1.keras.Model.get_weights": "tf.keras.Model.get_weights", + "tf.compat.v1.keras.Model.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.Model.input_spec": "tf.keras.Model.input_spec", + "tf.compat.v1.keras.Model.layers": "tf.keras.Model.layers", + "tf.compat.v1.keras.Model.load_weights": "tf.keras.Model.load_weights", + "tf.compat.v1.keras.Model.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.Model.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.compat.v1.keras.Model.make_test_function": "tf.keras.Model.make_test_function", + "tf.compat.v1.keras.Model.make_train_function": "tf.keras.Model.make_train_function", + "tf.compat.v1.keras.Model.metrics": "tf.keras.Model.metrics", + "tf.compat.v1.keras.Model.metrics_names": "tf.keras.Model.metrics_names", + "tf.compat.v1.keras.Model.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.Model.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.Model.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.compat.v1.keras.Model.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.Model.predict": "tf.keras.Model.predict", + "tf.compat.v1.keras.Model.predict_generator": "tf.keras.Model.predict_generator", + "tf.compat.v1.keras.Model.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.compat.v1.keras.Model.predict_step": "tf.keras.Model.predict_step", + "tf.compat.v1.keras.Model.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.compat.v1.keras.Model.reset_states": "tf.keras.Model.reset_states", + "tf.compat.v1.keras.Model.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.compat.v1.keras.Model.save": "tf.keras.Model.save", + "tf.compat.v1.keras.Model.save_weights": "tf.keras.Model.save_weights", + "tf.compat.v1.keras.Model.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.Model.state_updates": "tf.keras.Model.state_updates", + "tf.compat.v1.keras.Model.stateful": "tf.keras.Model.stateful", + "tf.compat.v1.keras.Model.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.Model.summary": "tf.keras.Model.summary", + "tf.compat.v1.keras.Model.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.compat.v1.keras.Model.test_step": "tf.keras.Model.test_step", + "tf.compat.v1.keras.Model.to_json": "tf.keras.Model.to_json", + "tf.compat.v1.keras.Model.to_yaml": "tf.keras.Model.to_yaml", + "tf.compat.v1.keras.Model.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.compat.v1.keras.Model.train_step": "tf.keras.Model.train_step", + "tf.compat.v1.keras.Model.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.Model.trainable_weights": "tf.keras.Model.trainable_weights", + "tf.compat.v1.keras.Model.weights": "tf.keras.Model.weights", + "tf.compat.v1.keras.Sequential": "tf.keras.Sequential", + "tf.compat.v1.keras.Sequential.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.Sequential.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.Sequential.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.keras.Sequential.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.Sequential.__init__": "tf.keras.Sequential.__init__", + "tf.compat.v1.keras.Sequential.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.Sequential.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.Sequential.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.Sequential.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.Sequential.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.Sequential.add": "tf.keras.Sequential.add", + "tf.compat.v1.keras.Sequential.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.Sequential.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.Sequential.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.Sequential.build": "tf.keras.Sequential.build", + "tf.compat.v1.keras.Sequential.call": "tf.keras.Sequential.call", + "tf.compat.v1.keras.Sequential.compile": "tf.keras.Model.compile", + "tf.compat.v1.keras.Sequential.compute_mask": "tf.keras.Sequential.compute_mask", + "tf.compat.v1.keras.Sequential.compute_output_shape": "tf.keras.Sequential.compute_output_shape", + "tf.compat.v1.keras.Sequential.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.Sequential.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.Sequential.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.compat.v1.keras.Sequential.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.Sequential.dynamic": "tf.keras.Sequential.dynamic", + "tf.compat.v1.keras.Sequential.evaluate": "tf.keras.Model.evaluate", + "tf.compat.v1.keras.Sequential.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.compat.v1.keras.Sequential.fit": "tf.keras.Model.fit", + "tf.compat.v1.keras.Sequential.fit_generator": "tf.keras.Model.fit_generator", + "tf.compat.v1.keras.Sequential.get_config": "tf.keras.Sequential.get_config", + "tf.compat.v1.keras.Sequential.get_layer": "tf.keras.Model.get_layer", + "tf.compat.v1.keras.Sequential.get_weights": "tf.keras.Model.get_weights", + "tf.compat.v1.keras.Sequential.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.Sequential.input_spec": "tf.keras.Sequential.input_spec", + "tf.compat.v1.keras.Sequential.layers": "tf.keras.Sequential.layers", + "tf.compat.v1.keras.Sequential.load_weights": "tf.keras.Model.load_weights", + "tf.compat.v1.keras.Sequential.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.Sequential.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.compat.v1.keras.Sequential.make_test_function": "tf.keras.Model.make_test_function", + "tf.compat.v1.keras.Sequential.make_train_function": "tf.keras.Model.make_train_function", + "tf.compat.v1.keras.Sequential.metrics": "tf.keras.Model.metrics", + "tf.compat.v1.keras.Sequential.metrics_names": "tf.keras.Model.metrics_names", + "tf.compat.v1.keras.Sequential.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.Sequential.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.Sequential.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.compat.v1.keras.Sequential.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.Sequential.pop": "tf.keras.Sequential.pop", + "tf.compat.v1.keras.Sequential.predict": "tf.keras.Model.predict", + "tf.compat.v1.keras.Sequential.predict_classes": "tf.keras.Sequential.predict_classes", + 
"tf.compat.v1.keras.Sequential.predict_generator": "tf.keras.Model.predict_generator", + "tf.compat.v1.keras.Sequential.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.compat.v1.keras.Sequential.predict_proba": "tf.keras.Sequential.predict_proba", + "tf.compat.v1.keras.Sequential.predict_step": "tf.keras.Model.predict_step", + "tf.compat.v1.keras.Sequential.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.compat.v1.keras.Sequential.reset_states": "tf.keras.Model.reset_states", + "tf.compat.v1.keras.Sequential.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.compat.v1.keras.Sequential.save": "tf.keras.Model.save", + "tf.compat.v1.keras.Sequential.save_weights": "tf.keras.Model.save_weights", + "tf.compat.v1.keras.Sequential.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.Sequential.state_updates": "tf.keras.Model.state_updates", + "tf.compat.v1.keras.Sequential.stateful": "tf.keras.Model.stateful", + "tf.compat.v1.keras.Sequential.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.Sequential.summary": "tf.keras.Model.summary", + "tf.compat.v1.keras.Sequential.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.compat.v1.keras.Sequential.test_step": "tf.keras.Model.test_step", + "tf.compat.v1.keras.Sequential.to_json": "tf.keras.Model.to_json", + "tf.compat.v1.keras.Sequential.to_yaml": "tf.keras.Model.to_yaml", + "tf.compat.v1.keras.Sequential.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.compat.v1.keras.Sequential.train_step": "tf.keras.Model.train_step", + "tf.compat.v1.keras.Sequential.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.Sequential.trainable_weights": "tf.keras.Model.trainable_weights", + "tf.compat.v1.keras.Sequential.weights": "tf.keras.Model.weights", + "tf.compat.v1.keras.activations.deserialize": "tf.keras.activations.deserialize", + "tf.compat.v1.keras.activations.elu": "tf.keras.activations.elu", + "tf.compat.v1.keras.activations.exponential": "tf.keras.activations.exponential", + "tf.compat.v1.keras.activations.get": "tf.keras.activations.get", + "tf.compat.v1.keras.activations.hard_sigmoid": "tf.keras.activations.hard_sigmoid", + "tf.compat.v1.keras.activations.linear": "tf.keras.activations.linear", + "tf.compat.v1.keras.activations.relu": "tf.keras.activations.relu", + "tf.compat.v1.keras.activations.selu": "tf.keras.activations.selu", + "tf.compat.v1.keras.activations.serialize": "tf.keras.activations.serialize", + "tf.compat.v1.keras.activations.sigmoid": "tf.keras.activations.sigmoid", + "tf.compat.v1.keras.activations.softmax": "tf.keras.activations.softmax", + "tf.compat.v1.keras.activations.softplus": "tf.keras.activations.softplus", + "tf.compat.v1.keras.activations.softsign": "tf.keras.activations.softsign", + "tf.compat.v1.keras.activations.swish": "tf.keras.activations.swish", + "tf.compat.v1.keras.activations.tanh": "tf.keras.activations.tanh", + "tf.compat.v1.keras.applications.DenseNet121": "tf.keras.applications.DenseNet121", + "tf.compat.v1.keras.applications.DenseNet169": "tf.keras.applications.DenseNet169", + "tf.compat.v1.keras.applications.DenseNet201": "tf.keras.applications.DenseNet201", + "tf.compat.v1.keras.applications.InceptionResNetV2": "tf.keras.applications.InceptionResNetV2", + "tf.compat.v1.keras.applications.InceptionV3": "tf.keras.applications.InceptionV3", + "tf.compat.v1.keras.applications.MobileNet": "tf.keras.applications.MobileNet", + "tf.compat.v1.keras.applications.MobileNetV2": "tf.keras.applications.MobileNetV2", + 
"tf.compat.v1.keras.applications.NASNetLarge": "tf.keras.applications.NASNetLarge", + "tf.compat.v1.keras.applications.NASNetMobile": "tf.keras.applications.NASNetMobile", + "tf.compat.v1.keras.applications.ResNet101": "tf.keras.applications.ResNet101", + "tf.compat.v1.keras.applications.ResNet101V2": "tf.keras.applications.ResNet101V2", + "tf.compat.v1.keras.applications.ResNet152": "tf.keras.applications.ResNet152", + "tf.compat.v1.keras.applications.ResNet152V2": "tf.keras.applications.ResNet152V2", + "tf.compat.v1.keras.applications.ResNet50": "tf.keras.applications.ResNet50", + "tf.compat.v1.keras.applications.ResNet50V2": "tf.keras.applications.ResNet50V2", + "tf.compat.v1.keras.applications.VGG16": "tf.keras.applications.VGG16", + "tf.compat.v1.keras.applications.VGG19": "tf.keras.applications.VGG19", + "tf.compat.v1.keras.applications.Xception": "tf.keras.applications.Xception", + "tf.compat.v1.keras.applications.densenet.DenseNet121": "tf.keras.applications.DenseNet121", + "tf.compat.v1.keras.applications.densenet.DenseNet169": "tf.keras.applications.DenseNet169", + "tf.compat.v1.keras.applications.densenet.DenseNet201": "tf.keras.applications.DenseNet201", + "tf.compat.v1.keras.applications.densenet.decode_predictions": "tf.keras.applications.densenet.decode_predictions", + "tf.compat.v1.keras.applications.densenet.preprocess_input": "tf.keras.applications.densenet.preprocess_input", + "tf.compat.v1.keras.applications.imagenet_utils.decode_predictions": "tf.keras.applications.imagenet_utils.decode_predictions", + "tf.compat.v1.keras.applications.imagenet_utils.preprocess_input": "tf.keras.applications.imagenet_utils.preprocess_input", + "tf.compat.v1.keras.applications.inception_resnet_v2.InceptionResNetV2": "tf.keras.applications.InceptionResNetV2", + "tf.compat.v1.keras.applications.inception_resnet_v2.decode_predictions": "tf.keras.applications.inception_resnet_v2.decode_predictions", + "tf.compat.v1.keras.applications.inception_resnet_v2.preprocess_input": "tf.keras.applications.inception_resnet_v2.preprocess_input", + "tf.compat.v1.keras.applications.inception_v3.InceptionV3": "tf.keras.applications.InceptionV3", + "tf.compat.v1.keras.applications.inception_v3.decode_predictions": "tf.keras.applications.inception_v3.decode_predictions", + "tf.compat.v1.keras.applications.inception_v3.preprocess_input": "tf.keras.applications.inception_v3.preprocess_input", + "tf.compat.v1.keras.applications.mobilenet.MobileNet": "tf.keras.applications.MobileNet", + "tf.compat.v1.keras.applications.mobilenet.decode_predictions": "tf.keras.applications.mobilenet.decode_predictions", + "tf.compat.v1.keras.applications.mobilenet.preprocess_input": "tf.keras.applications.mobilenet.preprocess_input", + "tf.compat.v1.keras.applications.mobilenet_v2.MobileNetV2": "tf.keras.applications.MobileNetV2", + "tf.compat.v1.keras.applications.mobilenet_v2.decode_predictions": "tf.keras.applications.mobilenet_v2.decode_predictions", + "tf.compat.v1.keras.applications.mobilenet_v2.preprocess_input": "tf.keras.applications.mobilenet_v2.preprocess_input", + "tf.compat.v1.keras.applications.nasnet.NASNetLarge": "tf.keras.applications.NASNetLarge", + "tf.compat.v1.keras.applications.nasnet.NASNetMobile": "tf.keras.applications.NASNetMobile", + "tf.compat.v1.keras.applications.nasnet.decode_predictions": "tf.keras.applications.nasnet.decode_predictions", + "tf.compat.v1.keras.applications.nasnet.preprocess_input": "tf.keras.applications.nasnet.preprocess_input", + "tf.compat.v1.keras.applications.resnet.ResNet101": 
"tf.keras.applications.ResNet101", + "tf.compat.v1.keras.applications.resnet.ResNet152": "tf.keras.applications.ResNet152", + "tf.compat.v1.keras.applications.resnet.ResNet50": "tf.keras.applications.ResNet50", + "tf.compat.v1.keras.applications.resnet.decode_predictions": "tf.keras.applications.resnet.decode_predictions", + "tf.compat.v1.keras.applications.resnet.preprocess_input": "tf.keras.applications.resnet.preprocess_input", + "tf.compat.v1.keras.applications.resnet50.ResNet50": "tf.keras.applications.ResNet50", + "tf.compat.v1.keras.applications.resnet50.decode_predictions": "tf.keras.applications.resnet.decode_predictions", + "tf.compat.v1.keras.applications.resnet50.preprocess_input": "tf.keras.applications.resnet.preprocess_input", + "tf.compat.v1.keras.applications.resnet_v2.ResNet101V2": "tf.keras.applications.ResNet101V2", + "tf.compat.v1.keras.applications.resnet_v2.ResNet152V2": "tf.keras.applications.ResNet152V2", + "tf.compat.v1.keras.applications.resnet_v2.ResNet50V2": "tf.keras.applications.ResNet50V2", + "tf.compat.v1.keras.applications.resnet_v2.decode_predictions": "tf.keras.applications.resnet_v2.decode_predictions", + "tf.compat.v1.keras.applications.resnet_v2.preprocess_input": "tf.keras.applications.resnet_v2.preprocess_input", + "tf.compat.v1.keras.applications.vgg16.VGG16": "tf.keras.applications.VGG16", + "tf.compat.v1.keras.applications.vgg16.decode_predictions": "tf.keras.applications.vgg16.decode_predictions", + "tf.compat.v1.keras.applications.vgg16.preprocess_input": "tf.keras.applications.vgg16.preprocess_input", + "tf.compat.v1.keras.applications.vgg19.VGG19": "tf.keras.applications.VGG19", + "tf.compat.v1.keras.applications.vgg19.decode_predictions": "tf.keras.applications.vgg19.decode_predictions", + "tf.compat.v1.keras.applications.vgg19.preprocess_input": "tf.keras.applications.vgg19.preprocess_input", + "tf.compat.v1.keras.applications.xception.Xception": "tf.keras.applications.Xception", + "tf.compat.v1.keras.applications.xception.decode_predictions": "tf.keras.applications.xception.decode_predictions", + "tf.compat.v1.keras.applications.xception.preprocess_input": "tf.keras.applications.xception.preprocess_input", + "tf.compat.v1.keras.backend.abs": "tf.keras.backend.abs", + "tf.compat.v1.keras.backend.all": "tf.keras.backend.all", + "tf.compat.v1.keras.backend.any": "tf.keras.backend.any", + "tf.compat.v1.keras.backend.arange": "tf.keras.backend.arange", + "tf.compat.v1.keras.backend.argmax": "tf.keras.backend.argmax", + "tf.compat.v1.keras.backend.argmin": "tf.keras.backend.argmin", + "tf.compat.v1.keras.backend.backend": "tf.keras.backend.backend", + "tf.compat.v1.keras.backend.batch_dot": "tf.keras.backend.batch_dot", + "tf.compat.v1.keras.backend.batch_flatten": "tf.keras.backend.batch_flatten", + "tf.compat.v1.keras.backend.batch_get_value": "tf.keras.backend.batch_get_value", + "tf.compat.v1.keras.backend.batch_normalization": "tf.keras.backend.batch_normalization", + "tf.compat.v1.keras.backend.batch_set_value": "tf.keras.backend.batch_set_value", + "tf.compat.v1.keras.backend.bias_add": "tf.keras.backend.bias_add", + "tf.compat.v1.keras.backend.binary_crossentropy": "tf.keras.backend.binary_crossentropy", + "tf.compat.v1.keras.backend.cast": "tf.keras.backend.cast", + "tf.compat.v1.keras.backend.cast_to_floatx": "tf.keras.backend.cast_to_floatx", + "tf.compat.v1.keras.backend.categorical_crossentropy": "tf.keras.backend.categorical_crossentropy", + "tf.compat.v1.keras.backend.clear_session": "tf.keras.backend.clear_session", + 
"tf.compat.v1.keras.backend.clip": "tf.keras.backend.clip", + "tf.compat.v1.keras.backend.concatenate": "tf.keras.backend.concatenate", + "tf.compat.v1.keras.backend.constant": "tf.keras.backend.constant", + "tf.compat.v1.keras.backend.conv1d": "tf.keras.backend.conv1d", + "tf.compat.v1.keras.backend.conv2d": "tf.keras.backend.conv2d", + "tf.compat.v1.keras.backend.conv2d_transpose": "tf.keras.backend.conv2d_transpose", + "tf.compat.v1.keras.backend.conv3d": "tf.keras.backend.conv3d", + "tf.compat.v1.keras.backend.cos": "tf.keras.backend.cos", + "tf.compat.v1.keras.backend.count_params": "tf.keras.backend.count_params", + "tf.compat.v1.keras.backend.ctc_batch_cost": "tf.keras.backend.ctc_batch_cost", + "tf.compat.v1.keras.backend.ctc_decode": "tf.keras.backend.ctc_decode", + "tf.compat.v1.keras.backend.ctc_label_dense_to_sparse": "tf.keras.backend.ctc_label_dense_to_sparse", + "tf.compat.v1.keras.backend.cumprod": "tf.keras.backend.cumprod", + "tf.compat.v1.keras.backend.cumsum": "tf.keras.backend.cumsum", + "tf.compat.v1.keras.backend.depthwise_conv2d": "tf.keras.backend.depthwise_conv2d", + "tf.compat.v1.keras.backend.dot": "tf.keras.backend.dot", + "tf.compat.v1.keras.backend.dropout": "tf.keras.backend.dropout", + "tf.compat.v1.keras.backend.dtype": "tf.keras.backend.dtype", + "tf.compat.v1.keras.backend.elu": "tf.keras.backend.elu", + "tf.compat.v1.keras.backend.epsilon": "tf.keras.backend.epsilon", + "tf.compat.v1.keras.backend.equal": "tf.keras.backend.equal", + "tf.compat.v1.keras.backend.eval": "tf.keras.backend.eval", + "tf.compat.v1.keras.backend.exp": "tf.keras.backend.exp", + "tf.compat.v1.keras.backend.expand_dims": "tf.keras.backend.expand_dims", + "tf.compat.v1.keras.backend.eye": "tf.keras.backend.eye", + "tf.compat.v1.keras.backend.flatten": "tf.keras.backend.flatten", + "tf.compat.v1.keras.backend.floatx": "tf.keras.backend.floatx", + "tf.compat.v1.keras.backend.foldl": "tf.keras.backend.foldl", + "tf.compat.v1.keras.backend.foldr": "tf.keras.backend.foldr", + "tf.compat.v1.keras.backend.function": "tf.keras.backend.function", + "tf.compat.v1.keras.backend.gather": "tf.keras.backend.gather", + "tf.compat.v1.keras.backend.get_uid": "tf.keras.backend.get_uid", + "tf.compat.v1.keras.backend.get_value": "tf.keras.backend.get_value", + "tf.compat.v1.keras.backend.gradients": "tf.keras.backend.gradients", + "tf.compat.v1.keras.backend.greater": "tf.keras.backend.greater", + "tf.compat.v1.keras.backend.greater_equal": "tf.keras.backend.greater_equal", + "tf.compat.v1.keras.backend.hard_sigmoid": "tf.keras.backend.hard_sigmoid", + "tf.compat.v1.keras.backend.image_data_format": "tf.keras.backend.image_data_format", + "tf.compat.v1.keras.backend.in_test_phase": "tf.keras.backend.in_test_phase", + "tf.compat.v1.keras.backend.in_top_k": "tf.keras.backend.in_top_k", + "tf.compat.v1.keras.backend.in_train_phase": "tf.keras.backend.in_train_phase", + "tf.compat.v1.keras.backend.int_shape": "tf.keras.backend.int_shape", + "tf.compat.v1.keras.backend.is_keras_tensor": "tf.keras.backend.is_keras_tensor", + "tf.compat.v1.keras.backend.is_sparse": "tf.keras.backend.is_sparse", + "tf.compat.v1.keras.backend.l2_normalize": "tf.keras.backend.l2_normalize", + "tf.compat.v1.keras.backend.learning_phase": "tf.keras.backend.learning_phase", + "tf.compat.v1.keras.backend.learning_phase_scope": "tf.keras.backend.learning_phase_scope", + "tf.compat.v1.keras.backend.less": "tf.keras.backend.less", + "tf.compat.v1.keras.backend.less_equal": "tf.keras.backend.less_equal", + 
"tf.compat.v1.keras.backend.local_conv1d": "tf.keras.backend.local_conv1d", + "tf.compat.v1.keras.backend.local_conv2d": "tf.keras.backend.local_conv2d", + "tf.compat.v1.keras.backend.log": "tf.keras.backend.log", + "tf.compat.v1.keras.backend.manual_variable_initialization": "tf.keras.backend.manual_variable_initialization", + "tf.compat.v1.keras.backend.map_fn": "tf.keras.backend.map_fn", + "tf.compat.v1.keras.backend.max": "tf.keras.backend.max", + "tf.compat.v1.keras.backend.maximum": "tf.keras.backend.maximum", + "tf.compat.v1.keras.backend.mean": "tf.keras.backend.mean", + "tf.compat.v1.keras.backend.min": "tf.keras.backend.min", + "tf.compat.v1.keras.backend.minimum": "tf.keras.backend.minimum", + "tf.compat.v1.keras.backend.moving_average_update": "tf.keras.backend.moving_average_update", + "tf.compat.v1.keras.backend.name_scope.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.backend.name_scope.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.backend.name_scope.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.backend.name_scope.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.backend.name_scope.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.backend.name_scope.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.backend.name_scope.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.backend.ndim": "tf.keras.backend.ndim", + "tf.compat.v1.keras.backend.normalize_batch_in_training": "tf.keras.backend.normalize_batch_in_training", + "tf.compat.v1.keras.backend.not_equal": "tf.keras.backend.not_equal", + "tf.compat.v1.keras.backend.one_hot": "tf.keras.backend.one_hot", + "tf.compat.v1.keras.backend.ones": "tf.keras.backend.ones", + "tf.compat.v1.keras.backend.ones_like": "tf.keras.backend.ones_like", + "tf.compat.v1.keras.backend.permute_dimensions": "tf.keras.backend.permute_dimensions", + "tf.compat.v1.keras.backend.placeholder": "tf.keras.backend.placeholder", + "tf.compat.v1.keras.backend.pool2d": "tf.keras.backend.pool2d", + "tf.compat.v1.keras.backend.pool3d": "tf.keras.backend.pool3d", + "tf.compat.v1.keras.backend.pow": "tf.keras.backend.pow", + "tf.compat.v1.keras.backend.print_tensor": "tf.keras.backend.print_tensor", + "tf.compat.v1.keras.backend.prod": "tf.keras.backend.prod", + "tf.compat.v1.keras.backend.random_binomial": "tf.keras.backend.random_binomial", + "tf.compat.v1.keras.backend.random_normal": "tf.keras.backend.random_normal", + "tf.compat.v1.keras.backend.random_normal_variable": "tf.keras.backend.random_normal_variable", + "tf.compat.v1.keras.backend.random_uniform": "tf.keras.backend.random_uniform", + "tf.compat.v1.keras.backend.random_uniform_variable": "tf.keras.backend.random_uniform_variable", + "tf.compat.v1.keras.backend.relu": "tf.keras.backend.relu", + "tf.compat.v1.keras.backend.repeat": "tf.keras.backend.repeat", + "tf.compat.v1.keras.backend.repeat_elements": "tf.keras.backend.repeat_elements", + "tf.compat.v1.keras.backend.reset_uids": "tf.keras.backend.reset_uids", + "tf.compat.v1.keras.backend.reshape": "tf.keras.backend.reshape", + "tf.compat.v1.keras.backend.resize_images": "tf.keras.backend.resize_images", + "tf.compat.v1.keras.backend.resize_volumes": "tf.keras.backend.resize_volumes", + "tf.compat.v1.keras.backend.reverse": "tf.keras.backend.reverse", + "tf.compat.v1.keras.backend.rnn": "tf.keras.backend.rnn", + "tf.compat.v1.keras.backend.round": "tf.keras.backend.round", + "tf.compat.v1.keras.backend.separable_conv2d": "tf.keras.backend.separable_conv2d", + 
"tf.compat.v1.keras.backend.set_epsilon": "tf.keras.backend.set_epsilon", + "tf.compat.v1.keras.backend.set_floatx": "tf.keras.backend.set_floatx", + "tf.compat.v1.keras.backend.set_image_data_format": "tf.keras.backend.set_image_data_format", + "tf.compat.v1.keras.backend.set_learning_phase": "tf.keras.backend.set_learning_phase", + "tf.compat.v1.keras.backend.set_value": "tf.keras.backend.set_value", + "tf.compat.v1.keras.backend.shape": "tf.keras.backend.shape", + "tf.compat.v1.keras.backend.sigmoid": "tf.keras.backend.sigmoid", + "tf.compat.v1.keras.backend.sign": "tf.keras.backend.sign", + "tf.compat.v1.keras.backend.sin": "tf.keras.backend.sin", + "tf.compat.v1.keras.backend.softmax": "tf.keras.backend.softmax", + "tf.compat.v1.keras.backend.softplus": "tf.keras.backend.softplus", + "tf.compat.v1.keras.backend.softsign": "tf.keras.backend.softsign", + "tf.compat.v1.keras.backend.sparse_categorical_crossentropy": "tf.keras.backend.sparse_categorical_crossentropy", + "tf.compat.v1.keras.backend.spatial_2d_padding": "tf.keras.backend.spatial_2d_padding", + "tf.compat.v1.keras.backend.spatial_3d_padding": "tf.keras.backend.spatial_3d_padding", + "tf.compat.v1.keras.backend.sqrt": "tf.keras.backend.sqrt", + "tf.compat.v1.keras.backend.square": "tf.keras.backend.square", + "tf.compat.v1.keras.backend.squeeze": "tf.keras.backend.squeeze", + "tf.compat.v1.keras.backend.stack": "tf.keras.backend.stack", + "tf.compat.v1.keras.backend.std": "tf.keras.backend.std", + "tf.compat.v1.keras.backend.stop_gradient": "tf.keras.backend.stop_gradient", + "tf.compat.v1.keras.backend.sum": "tf.keras.backend.sum", + "tf.compat.v1.keras.backend.switch": "tf.keras.backend.switch", + "tf.compat.v1.keras.backend.tanh": "tf.keras.backend.tanh", + "tf.compat.v1.keras.backend.temporal_padding": "tf.keras.backend.temporal_padding", + "tf.compat.v1.keras.backend.tile": "tf.keras.backend.tile", + "tf.compat.v1.keras.backend.to_dense": "tf.keras.backend.to_dense", + "tf.compat.v1.keras.backend.transpose": "tf.keras.backend.transpose", + "tf.compat.v1.keras.backend.truncated_normal": "tf.keras.backend.truncated_normal", + "tf.compat.v1.keras.backend.update": "tf.keras.backend.update", + "tf.compat.v1.keras.backend.update_add": "tf.keras.backend.update_add", + "tf.compat.v1.keras.backend.update_sub": "tf.keras.backend.update_sub", + "tf.compat.v1.keras.backend.var": "tf.keras.backend.var", + "tf.compat.v1.keras.backend.variable": "tf.keras.backend.variable", + "tf.compat.v1.keras.backend.zeros": "tf.keras.backend.zeros", + "tf.compat.v1.keras.backend.zeros_like": "tf.keras.backend.zeros_like", + "tf.compat.v1.keras.callbacks.BaseLogger": "tf.keras.callbacks.BaseLogger", + "tf.compat.v1.keras.callbacks.BaseLogger.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.BaseLogger.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.BaseLogger.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.BaseLogger.__init__": "tf.keras.callbacks.BaseLogger.__init__", + "tf.compat.v1.keras.callbacks.BaseLogger.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.BaseLogger.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.BaseLogger.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.BaseLogger.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.BaseLogger.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.BaseLogger.on_batch_end": "tf.keras.callbacks.BaseLogger.on_batch_end", + 
"tf.compat.v1.keras.callbacks.BaseLogger.on_epoch_begin": "tf.keras.callbacks.BaseLogger.on_epoch_begin", + "tf.compat.v1.keras.callbacks.BaseLogger.on_epoch_end": "tf.keras.callbacks.BaseLogger.on_epoch_end", + "tf.compat.v1.keras.callbacks.BaseLogger.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.BaseLogger.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.BaseLogger.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.BaseLogger.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.BaseLogger.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.BaseLogger.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.BaseLogger.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.BaseLogger.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.BaseLogger.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.BaseLogger.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.BaseLogger.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.compat.v1.keras.callbacks.BaseLogger.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.compat.v1.keras.callbacks.BaseLogger.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.BaseLogger.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.CSVLogger": "tf.keras.callbacks.CSVLogger", + "tf.compat.v1.keras.callbacks.CSVLogger.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.CSVLogger.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.CSVLogger.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.CSVLogger.__init__": "tf.keras.callbacks.CSVLogger.__init__", + "tf.compat.v1.keras.callbacks.CSVLogger.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.CSVLogger.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.CSVLogger.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.CSVLogger.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.CSVLogger.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.CSVLogger.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.compat.v1.keras.callbacks.CSVLogger.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.compat.v1.keras.callbacks.CSVLogger.on_epoch_end": "tf.keras.callbacks.CSVLogger.on_epoch_end", + "tf.compat.v1.keras.callbacks.CSVLogger.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.CSVLogger.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.CSVLogger.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.CSVLogger.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.CSVLogger.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.CSVLogger.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + 
"tf.compat.v1.keras.callbacks.CSVLogger.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.CSVLogger.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.CSVLogger.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.CSVLogger.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.CSVLogger.on_train_begin": "tf.keras.callbacks.CSVLogger.on_train_begin", + "tf.compat.v1.keras.callbacks.CSVLogger.on_train_end": "tf.keras.callbacks.CSVLogger.on_train_end", + "tf.compat.v1.keras.callbacks.CSVLogger.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.CSVLogger.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.Callback": "tf.keras.callbacks.Callback", + "tf.compat.v1.keras.callbacks.Callback.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.Callback.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.Callback.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.Callback.__init__": "tf.keras.callbacks.Callback.__init__", + "tf.compat.v1.keras.callbacks.Callback.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.Callback.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.Callback.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.Callback.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.Callback.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.Callback.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.compat.v1.keras.callbacks.Callback.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.compat.v1.keras.callbacks.Callback.on_epoch_end": "tf.keras.callbacks.Callback.on_epoch_end", + "tf.compat.v1.keras.callbacks.Callback.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.Callback.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.Callback.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.Callback.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.Callback.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.Callback.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.Callback.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.Callback.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.Callback.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.Callback.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.Callback.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.compat.v1.keras.callbacks.Callback.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.compat.v1.keras.callbacks.Callback.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.Callback.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.EarlyStopping": "tf.keras.callbacks.EarlyStopping", + 
"tf.compat.v1.keras.callbacks.EarlyStopping.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.EarlyStopping.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.EarlyStopping.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.EarlyStopping.__init__": "tf.keras.callbacks.EarlyStopping.__init__", + "tf.compat.v1.keras.callbacks.EarlyStopping.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.EarlyStopping.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.EarlyStopping.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.EarlyStopping.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.EarlyStopping.get_monitor_value": "tf.keras.callbacks.EarlyStopping.get_monitor_value", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_epoch_end": "tf.keras.callbacks.EarlyStopping.on_epoch_end", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_train_begin": "tf.keras.callbacks.EarlyStopping.on_train_begin", + "tf.compat.v1.keras.callbacks.EarlyStopping.on_train_end": "tf.keras.callbacks.EarlyStopping.on_train_end", + "tf.compat.v1.keras.callbacks.EarlyStopping.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.EarlyStopping.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.History": "tf.keras.callbacks.History", + "tf.compat.v1.keras.callbacks.History.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.History.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.History.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.History.__init__": "tf.keras.callbacks.History.__init__", + "tf.compat.v1.keras.callbacks.History.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.History.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.History.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.History.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.compat.v1.keras.callbacks.History.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.History.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.compat.v1.keras.callbacks.History.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.compat.v1.keras.callbacks.History.on_epoch_end": "tf.keras.callbacks.History.on_epoch_end", + "tf.compat.v1.keras.callbacks.History.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.History.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.History.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.History.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.History.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.History.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.History.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.History.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.History.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.History.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.History.on_train_begin": "tf.keras.callbacks.History.on_train_begin", + "tf.compat.v1.keras.callbacks.History.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.compat.v1.keras.callbacks.History.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.History.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.LambdaCallback": "tf.keras.callbacks.LambdaCallback", + "tf.compat.v1.keras.callbacks.LambdaCallback.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.LambdaCallback.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.LambdaCallback.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.LambdaCallback.__init__": "tf.keras.callbacks.LambdaCallback.__init__", + "tf.compat.v1.keras.callbacks.LambdaCallback.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.LambdaCallback.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.LambdaCallback.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.LambdaCallback.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_epoch_end": "tf.keras.callbacks.Callback.on_epoch_end", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_predict_end": 
"tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.compat.v1.keras.callbacks.LambdaCallback.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.compat.v1.keras.callbacks.LambdaCallback.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.LambdaCallback.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.LearningRateScheduler": "tf.keras.callbacks.LearningRateScheduler", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__init__": "tf.keras.callbacks.LearningRateScheduler.__init__", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_epoch_begin": "tf.keras.callbacks.LearningRateScheduler.on_epoch_begin", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_epoch_end": "tf.keras.callbacks.LearningRateScheduler.on_epoch_end", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_train_batch_begin": 
"tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.LearningRateScheduler.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.ModelCheckpoint": "tf.keras.callbacks.ModelCheckpoint", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__init__": "tf.keras.callbacks.ModelCheckpoint.__init__", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_batch_end": "tf.keras.callbacks.ModelCheckpoint.on_batch_end", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_epoch_begin": "tf.keras.callbacks.ModelCheckpoint.on_epoch_begin", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_epoch_end": "tf.keras.callbacks.ModelCheckpoint.on_epoch_end", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_train_begin": "tf.keras.callbacks.ModelCheckpoint.on_train_begin", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_train_end": "tf.keras.callbacks.ModelCheckpoint.on_train_end", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.set_model": "tf.keras.callbacks.ModelCheckpoint.set_model", + "tf.compat.v1.keras.callbacks.ModelCheckpoint.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.ProgbarLogger": 
"tf.keras.callbacks.ProgbarLogger", + "tf.compat.v1.keras.callbacks.ProgbarLogger.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.ProgbarLogger.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.ProgbarLogger.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.ProgbarLogger.__init__": "tf.keras.callbacks.ProgbarLogger.__init__", + "tf.compat.v1.keras.callbacks.ProgbarLogger.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.ProgbarLogger.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.ProgbarLogger.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.ProgbarLogger.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_epoch_begin": "tf.keras.callbacks.ProgbarLogger.on_epoch_begin", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_epoch_end": "tf.keras.callbacks.ProgbarLogger.on_epoch_end", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_predict_batch_end": "tf.keras.callbacks.ProgbarLogger.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_predict_begin": "tf.keras.callbacks.ProgbarLogger.on_predict_begin", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_predict_end": "tf.keras.callbacks.ProgbarLogger.on_predict_end", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_test_batch_end": "tf.keras.callbacks.ProgbarLogger.on_test_batch_end", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_test_begin": "tf.keras.callbacks.ProgbarLogger.on_test_begin", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_test_end": "tf.keras.callbacks.ProgbarLogger.on_test_end", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_train_batch_end": "tf.keras.callbacks.ProgbarLogger.on_train_batch_end", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_train_begin": "tf.keras.callbacks.ProgbarLogger.on_train_begin", + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.compat.v1.keras.callbacks.ProgbarLogger.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.ProgbarLogger.set_params": "tf.keras.callbacks.ProgbarLogger.set_params", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau": "tf.keras.callbacks.ReduceLROnPlateau", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__init__": "tf.keras.callbacks.ReduceLROnPlateau.__init__", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__new__": 
"tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.in_cooldown": "tf.keras.callbacks.ReduceLROnPlateau.in_cooldown", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_epoch_end": "tf.keras.callbacks.ReduceLROnPlateau.on_epoch_end", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_train_begin": "tf.keras.callbacks.ReduceLROnPlateau.on_train_begin", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.RemoteMonitor": "tf.keras.callbacks.RemoteMonitor", + "tf.compat.v1.keras.callbacks.RemoteMonitor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.RemoteMonitor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.RemoteMonitor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.RemoteMonitor.__init__": "tf.keras.callbacks.RemoteMonitor.__init__", + "tf.compat.v1.keras.callbacks.RemoteMonitor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.RemoteMonitor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.RemoteMonitor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.RemoteMonitor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_epoch_end": "tf.keras.callbacks.RemoteMonitor.on_epoch_end", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_predict_batch_begin": 
"tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.compat.v1.keras.callbacks.RemoteMonitor.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.RemoteMonitor.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.TensorBoard.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.TensorBoard.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.TensorBoard.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.TensorBoard.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.TensorBoard.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.TensorBoard.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.TensorBoard.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.TensorBoard.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.TensorBoard.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.TensorBoard.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.TensorBoard.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.TensorBoard.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.TensorBoard.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.TensorBoard.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.TensorBoard.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.TensorBoard.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.TensorBoard.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.TensorBoard.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.TensorBoard.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.callbacks.TerminateOnNaN": "tf.keras.callbacks.TerminateOnNaN", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__eq__": 
"tf.keras.Model.__eq__", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__init__": "tf.keras.callbacks.Callback.__init__", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_batch_end": "tf.keras.callbacks.TerminateOnNaN.on_batch_end", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_epoch_end": "tf.keras.callbacks.Callback.on_epoch_end", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.compat.v1.keras.callbacks.TerminateOnNaN.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.compat.v1.keras.constraints.Constraint": "tf.keras.constraints.Constraint", + "tf.compat.v1.keras.constraints.Constraint.__call__": "tf.keras.constraints.Constraint.__call__", + "tf.compat.v1.keras.constraints.Constraint.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.constraints.Constraint.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.Constraint.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.Constraint.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.keras.constraints.Constraint.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.Constraint.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.constraints.Constraint.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.Constraint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.Constraint.get_config": 
"tf.keras.constraints.Constraint.get_config", + "tf.compat.v1.keras.constraints.MaxNorm": "tf.keras.constraints.MaxNorm", + "tf.compat.v1.keras.constraints.MaxNorm.__call__": "tf.keras.constraints.MaxNorm.__call__", + "tf.compat.v1.keras.constraints.MaxNorm.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.constraints.MaxNorm.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.MaxNorm.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.MaxNorm.__init__": "tf.keras.constraints.MaxNorm.__init__", + "tf.compat.v1.keras.constraints.MaxNorm.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.MaxNorm.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.constraints.MaxNorm.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.MaxNorm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.MaxNorm.get_config": "tf.keras.constraints.MaxNorm.get_config", + "tf.compat.v1.keras.constraints.MinMaxNorm": "tf.keras.constraints.MinMaxNorm", + "tf.compat.v1.keras.constraints.MinMaxNorm.__call__": "tf.keras.constraints.MinMaxNorm.__call__", + "tf.compat.v1.keras.constraints.MinMaxNorm.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.constraints.MinMaxNorm.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.MinMaxNorm.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.MinMaxNorm.__init__": "tf.keras.constraints.MinMaxNorm.__init__", + "tf.compat.v1.keras.constraints.MinMaxNorm.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.MinMaxNorm.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.constraints.MinMaxNorm.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.MinMaxNorm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.MinMaxNorm.get_config": "tf.keras.constraints.MinMaxNorm.get_config", + "tf.compat.v1.keras.constraints.NonNeg": "tf.keras.constraints.NonNeg", + "tf.compat.v1.keras.constraints.NonNeg.__call__": "tf.keras.constraints.NonNeg.__call__", + "tf.compat.v1.keras.constraints.NonNeg.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.constraints.NonNeg.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.NonNeg.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.NonNeg.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.keras.constraints.NonNeg.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.NonNeg.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.constraints.NonNeg.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.NonNeg.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.NonNeg.get_config": "tf.keras.constraints.Constraint.get_config", + "tf.compat.v1.keras.constraints.RadialConstraint": "tf.keras.constraints.RadialConstraint", + "tf.compat.v1.keras.constraints.RadialConstraint.__call__": "tf.keras.constraints.RadialConstraint.__call__", + "tf.compat.v1.keras.constraints.RadialConstraint.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.constraints.RadialConstraint.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.RadialConstraint.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.RadialConstraint.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.keras.constraints.RadialConstraint.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.RadialConstraint.__lt__": "tf.keras.Model.__lt__", 
+ "tf.compat.v1.keras.constraints.RadialConstraint.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.RadialConstraint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.RadialConstraint.get_config": "tf.keras.constraints.Constraint.get_config", + "tf.compat.v1.keras.constraints.UnitNorm": "tf.keras.constraints.UnitNorm", + "tf.compat.v1.keras.constraints.UnitNorm.__call__": "tf.keras.constraints.UnitNorm.__call__", + "tf.compat.v1.keras.constraints.UnitNorm.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.constraints.UnitNorm.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.UnitNorm.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.UnitNorm.__init__": "tf.keras.constraints.UnitNorm.__init__", + "tf.compat.v1.keras.constraints.UnitNorm.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.UnitNorm.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.constraints.UnitNorm.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.UnitNorm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.UnitNorm.get_config": "tf.keras.constraints.UnitNorm.get_config", + "tf.compat.v1.keras.constraints.deserialize": "tf.keras.constraints.deserialize", + "tf.compat.v1.keras.constraints.get": "tf.keras.constraints.get", + "tf.compat.v1.keras.constraints.max_norm": "tf.keras.constraints.MaxNorm", + "tf.compat.v1.keras.constraints.max_norm.__call__": "tf.keras.constraints.MaxNorm.__call__", + "tf.compat.v1.keras.constraints.max_norm.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.constraints.max_norm.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.max_norm.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.max_norm.__init__": "tf.keras.constraints.MaxNorm.__init__", + "tf.compat.v1.keras.constraints.max_norm.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.max_norm.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.constraints.max_norm.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.max_norm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.max_norm.get_config": "tf.keras.constraints.MaxNorm.get_config", + "tf.compat.v1.keras.constraints.min_max_norm": "tf.keras.constraints.MinMaxNorm", + "tf.compat.v1.keras.constraints.min_max_norm.__call__": "tf.keras.constraints.MinMaxNorm.__call__", + "tf.compat.v1.keras.constraints.min_max_norm.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.constraints.min_max_norm.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.min_max_norm.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.min_max_norm.__init__": "tf.keras.constraints.MinMaxNorm.__init__", + "tf.compat.v1.keras.constraints.min_max_norm.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.min_max_norm.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.constraints.min_max_norm.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.min_max_norm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.min_max_norm.get_config": "tf.keras.constraints.MinMaxNorm.get_config", + "tf.compat.v1.keras.constraints.non_neg": "tf.keras.constraints.NonNeg", + "tf.compat.v1.keras.constraints.non_neg.__call__": "tf.keras.constraints.NonNeg.__call__", + "tf.compat.v1.keras.constraints.non_neg.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.keras.constraints.non_neg.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.non_neg.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.non_neg.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.keras.constraints.non_neg.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.non_neg.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.constraints.non_neg.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.non_neg.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.non_neg.get_config": "tf.keras.constraints.Constraint.get_config", + "tf.compat.v1.keras.constraints.radial_constraint": "tf.keras.constraints.RadialConstraint", + "tf.compat.v1.keras.constraints.radial_constraint.__call__": "tf.keras.constraints.RadialConstraint.__call__", + "tf.compat.v1.keras.constraints.radial_constraint.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.constraints.radial_constraint.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.radial_constraint.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.radial_constraint.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.keras.constraints.radial_constraint.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.radial_constraint.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.constraints.radial_constraint.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.radial_constraint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.radial_constraint.get_config": "tf.keras.constraints.Constraint.get_config", + "tf.compat.v1.keras.constraints.serialize": "tf.keras.constraints.serialize", + "tf.compat.v1.keras.constraints.unit_norm": "tf.keras.constraints.UnitNorm", + "tf.compat.v1.keras.constraints.unit_norm.__call__": "tf.keras.constraints.UnitNorm.__call__", + "tf.compat.v1.keras.constraints.unit_norm.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.constraints.unit_norm.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.constraints.unit_norm.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.constraints.unit_norm.__init__": "tf.keras.constraints.UnitNorm.__init__", + "tf.compat.v1.keras.constraints.unit_norm.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.constraints.unit_norm.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.constraints.unit_norm.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.constraints.unit_norm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.constraints.unit_norm.get_config": "tf.keras.constraints.UnitNorm.get_config", + "tf.compat.v1.keras.datasets.boston_housing.load_data": "tf.keras.datasets.boston_housing.load_data", + "tf.compat.v1.keras.datasets.cifar10.load_data": "tf.keras.datasets.cifar10.load_data", + "tf.compat.v1.keras.datasets.cifar100.load_data": "tf.keras.datasets.cifar100.load_data", + "tf.compat.v1.keras.datasets.fashion_mnist.load_data": "tf.keras.datasets.fashion_mnist.load_data", + "tf.compat.v1.keras.datasets.imdb.get_word_index": "tf.keras.datasets.imdb.get_word_index", + "tf.compat.v1.keras.datasets.imdb.load_data": "tf.keras.datasets.imdb.load_data", + "tf.compat.v1.keras.datasets.mnist.load_data": "tf.keras.datasets.mnist.load_data", + "tf.compat.v1.keras.datasets.reuters.get_word_index": "tf.keras.datasets.reuters.get_word_index", + "tf.compat.v1.keras.datasets.reuters.load_data": 
"tf.keras.datasets.reuters.load_data", + "tf.compat.v1.keras.experimental.CosineDecay": "tf.keras.experimental.CosineDecay", + "tf.compat.v1.keras.experimental.CosineDecay.__call__": "tf.keras.experimental.CosineDecay.__call__", + "tf.compat.v1.keras.experimental.CosineDecay.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.experimental.CosineDecay.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.experimental.CosineDecay.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.experimental.CosineDecay.__init__": "tf.keras.experimental.CosineDecay.__init__", + "tf.compat.v1.keras.experimental.CosineDecay.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.experimental.CosineDecay.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.experimental.CosineDecay.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.experimental.CosineDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.experimental.CosineDecay.get_config": "tf.keras.experimental.CosineDecay.get_config", + "tf.compat.v1.keras.experimental.CosineDecayRestarts": "tf.keras.experimental.CosineDecayRestarts", + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__call__": "tf.keras.experimental.CosineDecayRestarts.__call__", + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__init__": "tf.keras.experimental.CosineDecayRestarts.__init__", + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.experimental.CosineDecayRestarts.get_config": "tf.keras.experimental.CosineDecayRestarts.get_config", + "tf.compat.v1.keras.experimental.LinearCosineDecay": "tf.keras.experimental.LinearCosineDecay", + "tf.compat.v1.keras.experimental.LinearCosineDecay.__call__": "tf.keras.experimental.LinearCosineDecay.__call__", + "tf.compat.v1.keras.experimental.LinearCosineDecay.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.experimental.LinearCosineDecay.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.experimental.LinearCosineDecay.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.experimental.LinearCosineDecay.__init__": "tf.keras.experimental.LinearCosineDecay.__init__", + "tf.compat.v1.keras.experimental.LinearCosineDecay.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.experimental.LinearCosineDecay.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.experimental.LinearCosineDecay.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.experimental.LinearCosineDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.experimental.LinearCosineDecay.get_config": "tf.keras.experimental.LinearCosineDecay.get_config", + "tf.compat.v1.keras.experimental.LinearModel": "tf.keras.experimental.LinearModel", + "tf.compat.v1.keras.experimental.LinearModel.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.experimental.LinearModel.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.experimental.LinearModel.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.keras.experimental.LinearModel.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.experimental.LinearModel.__init__": "tf.keras.experimental.LinearModel.__init__", + "tf.compat.v1.keras.experimental.LinearModel.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.experimental.LinearModel.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.experimental.LinearModel.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.experimental.LinearModel.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.experimental.LinearModel.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.experimental.LinearModel.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.experimental.LinearModel.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.experimental.LinearModel.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.experimental.LinearModel.build": "tf.keras.experimental.LinearModel.build", + "tf.compat.v1.keras.experimental.LinearModel.call": "tf.keras.experimental.LinearModel.call", + "tf.compat.v1.keras.experimental.LinearModel.compile": "tf.keras.Model.compile", + "tf.compat.v1.keras.experimental.LinearModel.compute_mask": "tf.keras.Model.compute_mask", + "tf.compat.v1.keras.experimental.LinearModel.compute_output_shape": "tf.keras.Model.compute_output_shape", + "tf.compat.v1.keras.experimental.LinearModel.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.experimental.LinearModel.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.experimental.LinearModel.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.compat.v1.keras.experimental.LinearModel.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.experimental.LinearModel.dynamic": "tf.keras.Model.dynamic", + "tf.compat.v1.keras.experimental.LinearModel.evaluate": "tf.keras.Model.evaluate", + "tf.compat.v1.keras.experimental.LinearModel.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.compat.v1.keras.experimental.LinearModel.fit": "tf.keras.Model.fit", + "tf.compat.v1.keras.experimental.LinearModel.fit_generator": "tf.keras.Model.fit_generator", + "tf.compat.v1.keras.experimental.LinearModel.get_config": "tf.keras.experimental.LinearModel.get_config", + "tf.compat.v1.keras.experimental.LinearModel.get_layer": "tf.keras.Model.get_layer", + "tf.compat.v1.keras.experimental.LinearModel.get_weights": "tf.keras.Model.get_weights", + "tf.compat.v1.keras.experimental.LinearModel.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.experimental.LinearModel.input_spec": "tf.keras.Model.input_spec", + "tf.compat.v1.keras.experimental.LinearModel.layers": "tf.keras.Model.layers", + "tf.compat.v1.keras.experimental.LinearModel.load_weights": "tf.keras.Model.load_weights", + "tf.compat.v1.keras.experimental.LinearModel.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.experimental.LinearModel.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.compat.v1.keras.experimental.LinearModel.make_test_function": "tf.keras.Model.make_test_function", + "tf.compat.v1.keras.experimental.LinearModel.make_train_function": "tf.keras.Model.make_train_function", + "tf.compat.v1.keras.experimental.LinearModel.metrics": "tf.keras.Model.metrics", + "tf.compat.v1.keras.experimental.LinearModel.metrics_names": "tf.keras.Model.metrics_names", + "tf.compat.v1.keras.experimental.LinearModel.name": 
"tf.keras.layers.Layer.name", + "tf.compat.v1.keras.experimental.LinearModel.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.experimental.LinearModel.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.compat.v1.keras.experimental.LinearModel.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.experimental.LinearModel.predict": "tf.keras.Model.predict", + "tf.compat.v1.keras.experimental.LinearModel.predict_generator": "tf.keras.Model.predict_generator", + "tf.compat.v1.keras.experimental.LinearModel.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.compat.v1.keras.experimental.LinearModel.predict_step": "tf.keras.Model.predict_step", + "tf.compat.v1.keras.experimental.LinearModel.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.compat.v1.keras.experimental.LinearModel.reset_states": "tf.keras.Model.reset_states", + "tf.compat.v1.keras.experimental.LinearModel.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.compat.v1.keras.experimental.LinearModel.save": "tf.keras.Model.save", + "tf.compat.v1.keras.experimental.LinearModel.save_weights": "tf.keras.Model.save_weights", + "tf.compat.v1.keras.experimental.LinearModel.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.experimental.LinearModel.state_updates": "tf.keras.Model.state_updates", + "tf.compat.v1.keras.experimental.LinearModel.stateful": "tf.keras.Model.stateful", + "tf.compat.v1.keras.experimental.LinearModel.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.experimental.LinearModel.summary": "tf.keras.Model.summary", + "tf.compat.v1.keras.experimental.LinearModel.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.compat.v1.keras.experimental.LinearModel.test_step": "tf.keras.Model.test_step", + "tf.compat.v1.keras.experimental.LinearModel.to_json": "tf.keras.Model.to_json", + "tf.compat.v1.keras.experimental.LinearModel.to_yaml": "tf.keras.Model.to_yaml", + "tf.compat.v1.keras.experimental.LinearModel.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.compat.v1.keras.experimental.LinearModel.train_step": "tf.keras.Model.train_step", + "tf.compat.v1.keras.experimental.LinearModel.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.experimental.LinearModel.trainable_weights": "tf.keras.Model.trainable_weights", + "tf.compat.v1.keras.experimental.LinearModel.weights": "tf.keras.Model.weights", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay": "tf.keras.experimental.NoisyLinearCosineDecay", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__call__": "tf.keras.experimental.NoisyLinearCosineDecay.__call__", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__init__": "tf.keras.experimental.NoisyLinearCosineDecay.__init__", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.get_config": "tf.keras.experimental.NoisyLinearCosineDecay.get_config", + 
"tf.compat.v1.keras.experimental.PeepholeLSTMCell": "tf.keras.experimental.PeepholeLSTMCell", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__init__": "tf.compat.v1.keras.layers.LSTMCell.__init__", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.build": "tf.keras.experimental.PeepholeLSTMCell.build", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.call": "tf.compat.v1.keras.layers.LSTMCell.call", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.get_config": "tf.compat.v1.keras.layers.LSTMCell.get_config", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.get_dropout_mask_for_cell": "tf.keras.layers.GRU.get_dropout_mask_for_cell", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.get_initial_state": "tf.compat.v1.keras.layers.LSTMCell.get_initial_state", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.get_recurrent_dropout_mask_for_cell": "tf.keras.layers.GRU.get_recurrent_dropout_mask_for_cell", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.output": 
"tf.keras.layers.Layer.output", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.reset_dropout_mask": "tf.keras.layers.GRU.reset_dropout_mask", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.reset_recurrent_dropout_mask": "tf.keras.layers.GRU.reset_recurrent_dropout_mask", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.experimental.SequenceFeatures": "tf.keras.experimental.SequenceFeatures", + "tf.compat.v1.keras.experimental.SequenceFeatures.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.experimental.SequenceFeatures.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.experimental.SequenceFeatures.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.experimental.SequenceFeatures.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.experimental.SequenceFeatures.__init__": "tf.keras.experimental.SequenceFeatures.__init__", + "tf.compat.v1.keras.experimental.SequenceFeatures.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.experimental.SequenceFeatures.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.experimental.SequenceFeatures.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.experimental.SequenceFeatures.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.experimental.SequenceFeatures.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.experimental.SequenceFeatures.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.experimental.SequenceFeatures.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.experimental.SequenceFeatures.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.experimental.SequenceFeatures.build": "tf.compat.v1.keras.layers.DenseFeatures.build", + "tf.compat.v1.keras.experimental.SequenceFeatures.call": "tf.keras.experimental.SequenceFeatures.call", + "tf.compat.v1.keras.experimental.SequenceFeatures.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.experimental.SequenceFeatures.compute_output_shape": "tf.keras.layers.DenseFeatures.compute_output_shape", + "tf.compat.v1.keras.experimental.SequenceFeatures.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.experimental.SequenceFeatures.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.experimental.SequenceFeatures.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.experimental.SequenceFeatures.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.experimental.SequenceFeatures.get_config": "tf.keras.layers.DenseFeatures.get_config", + "tf.compat.v1.keras.experimental.SequenceFeatures.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.experimental.SequenceFeatures.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.experimental.SequenceFeatures.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.experimental.SequenceFeatures.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.experimental.SequenceFeatures.metrics": 
"tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.experimental.SequenceFeatures.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.experimental.SequenceFeatures.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.experimental.SequenceFeatures.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.experimental.SequenceFeatures.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.experimental.SequenceFeatures.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.experimental.SequenceFeatures.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.experimental.SequenceFeatures.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.experimental.SequenceFeatures.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.experimental.SequenceFeatures.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.experimental.WideDeepModel": "tf.keras.experimental.WideDeepModel", + "tf.compat.v1.keras.experimental.WideDeepModel.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.experimental.WideDeepModel.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.experimental.WideDeepModel.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.experimental.WideDeepModel.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.experimental.WideDeepModel.__init__": "tf.keras.experimental.WideDeepModel.__init__", + "tf.compat.v1.keras.experimental.WideDeepModel.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.experimental.WideDeepModel.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.experimental.WideDeepModel.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.experimental.WideDeepModel.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.experimental.WideDeepModel.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.experimental.WideDeepModel.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.experimental.WideDeepModel.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.experimental.WideDeepModel.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.experimental.WideDeepModel.build": "tf.keras.Model.build", + "tf.compat.v1.keras.experimental.WideDeepModel.call": "tf.keras.experimental.WideDeepModel.call", + "tf.compat.v1.keras.experimental.WideDeepModel.compile": "tf.keras.Model.compile", + "tf.compat.v1.keras.experimental.WideDeepModel.compute_mask": "tf.keras.Model.compute_mask", + "tf.compat.v1.keras.experimental.WideDeepModel.compute_output_shape": "tf.keras.Model.compute_output_shape", + "tf.compat.v1.keras.experimental.WideDeepModel.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.experimental.WideDeepModel.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.experimental.WideDeepModel.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.compat.v1.keras.experimental.WideDeepModel.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.experimental.WideDeepModel.dynamic": "tf.keras.Model.dynamic", + "tf.compat.v1.keras.experimental.WideDeepModel.evaluate": "tf.keras.Model.evaluate", + "tf.compat.v1.keras.experimental.WideDeepModel.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.compat.v1.keras.experimental.WideDeepModel.fit": "tf.keras.Model.fit", + "tf.compat.v1.keras.experimental.WideDeepModel.fit_generator": 
"tf.keras.Model.fit_generator", + "tf.compat.v1.keras.experimental.WideDeepModel.get_config": "tf.keras.experimental.WideDeepModel.get_config", + "tf.compat.v1.keras.experimental.WideDeepModel.get_layer": "tf.keras.Model.get_layer", + "tf.compat.v1.keras.experimental.WideDeepModel.get_weights": "tf.keras.Model.get_weights", + "tf.compat.v1.keras.experimental.WideDeepModel.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.experimental.WideDeepModel.input_spec": "tf.keras.Model.input_spec", + "tf.compat.v1.keras.experimental.WideDeepModel.layers": "tf.keras.Model.layers", + "tf.compat.v1.keras.experimental.WideDeepModel.load_weights": "tf.keras.Model.load_weights", + "tf.compat.v1.keras.experimental.WideDeepModel.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.experimental.WideDeepModel.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.compat.v1.keras.experimental.WideDeepModel.make_test_function": "tf.keras.Model.make_test_function", + "tf.compat.v1.keras.experimental.WideDeepModel.make_train_function": "tf.keras.Model.make_train_function", + "tf.compat.v1.keras.experimental.WideDeepModel.metrics": "tf.keras.Model.metrics", + "tf.compat.v1.keras.experimental.WideDeepModel.metrics_names": "tf.keras.Model.metrics_names", + "tf.compat.v1.keras.experimental.WideDeepModel.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.experimental.WideDeepModel.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.experimental.WideDeepModel.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.compat.v1.keras.experimental.WideDeepModel.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.experimental.WideDeepModel.predict": "tf.keras.Model.predict", + "tf.compat.v1.keras.experimental.WideDeepModel.predict_generator": "tf.keras.Model.predict_generator", + "tf.compat.v1.keras.experimental.WideDeepModel.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.compat.v1.keras.experimental.WideDeepModel.predict_step": "tf.keras.Model.predict_step", + "tf.compat.v1.keras.experimental.WideDeepModel.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.compat.v1.keras.experimental.WideDeepModel.reset_states": "tf.keras.Model.reset_states", + "tf.compat.v1.keras.experimental.WideDeepModel.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.compat.v1.keras.experimental.WideDeepModel.save": "tf.keras.Model.save", + "tf.compat.v1.keras.experimental.WideDeepModel.save_weights": "tf.keras.Model.save_weights", + "tf.compat.v1.keras.experimental.WideDeepModel.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.experimental.WideDeepModel.state_updates": "tf.keras.Model.state_updates", + "tf.compat.v1.keras.experimental.WideDeepModel.stateful": "tf.keras.Model.stateful", + "tf.compat.v1.keras.experimental.WideDeepModel.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.experimental.WideDeepModel.summary": "tf.keras.Model.summary", + "tf.compat.v1.keras.experimental.WideDeepModel.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.compat.v1.keras.experimental.WideDeepModel.test_step": "tf.keras.Model.test_step", + "tf.compat.v1.keras.experimental.WideDeepModel.to_json": "tf.keras.Model.to_json", + "tf.compat.v1.keras.experimental.WideDeepModel.to_yaml": "tf.keras.Model.to_yaml", + "tf.compat.v1.keras.experimental.WideDeepModel.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.compat.v1.keras.experimental.WideDeepModel.train_step": "tf.keras.experimental.WideDeepModel.train_step", + 
"tf.compat.v1.keras.experimental.WideDeepModel.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.experimental.WideDeepModel.trainable_weights": "tf.keras.Model.trainable_weights", + "tf.compat.v1.keras.experimental.WideDeepModel.weights": "tf.keras.Model.weights", + "tf.compat.v1.keras.experimental.terminate_keras_multiprocessing_pools": "tf.keras.experimental.terminate_keras_multiprocessing_pools", + "tf.compat.v1.keras.initializers.Constant.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.Constant.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.Constant.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.Constant.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.Constant.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.Constant.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.Constant.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.Identity.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.Identity.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.Identity.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.Identity.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.Identity.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.Identity.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.Identity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.Initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.Initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.Initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.Initializer.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.keras.initializers.Initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.Initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.Initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.Initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.Ones.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.Ones.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.Ones.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.Ones.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.Ones.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.Ones.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.Ones.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.Orthogonal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.Orthogonal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.Orthogonal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.Orthogonal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.Orthogonal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.Orthogonal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.Orthogonal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.RandomNormal.__call__": "tf.compat.v1.random_normal_initializer.__call__", + "tf.compat.v1.keras.initializers.RandomNormal.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.keras.initializers.RandomNormal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.RandomNormal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.RandomNormal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.RandomNormal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.RandomNormal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.RandomNormal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.RandomNormal.get_config": "tf.compat.v1.random_normal_initializer.get_config", + "tf.compat.v1.keras.initializers.RandomUniform.__call__": "tf.compat.v1.random_uniform_initializer.__call__", + "tf.compat.v1.keras.initializers.RandomUniform.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.RandomUniform.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.RandomUniform.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.RandomUniform.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.RandomUniform.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.RandomUniform.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.RandomUniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.RandomUniform.get_config": "tf.compat.v1.random_uniform_initializer.get_config", + "tf.compat.v1.keras.initializers.TruncatedNormal.__call__": "tf.compat.v1.truncated_normal_initializer.__call__", + "tf.compat.v1.keras.initializers.TruncatedNormal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.TruncatedNormal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.TruncatedNormal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.TruncatedNormal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.TruncatedNormal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.TruncatedNormal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.TruncatedNormal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.TruncatedNormal.get_config": "tf.compat.v1.truncated_normal_initializer.get_config", + "tf.compat.v1.keras.initializers.VarianceScaling.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.VarianceScaling.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.VarianceScaling.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.VarianceScaling.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.VarianceScaling.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.VarianceScaling.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.VarianceScaling.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.Zeros.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.Zeros.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.Zeros.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.Zeros.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.Zeros.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.Zeros.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.Zeros.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.constant": "tf.compat.v1.keras.initializers.Constant", + 
"tf.compat.v1.keras.initializers.constant.__call__": "tf.compat.v1.keras.initializers.Constant.__call__", + "tf.compat.v1.keras.initializers.constant.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.constant.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.constant.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.constant.__init__": "tf.compat.v1.keras.initializers.Constant.__init__", + "tf.compat.v1.keras.initializers.constant.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.constant.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.constant.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.constant.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.constant.get_config": "tf.compat.v1.keras.initializers.Constant.get_config", + "tf.compat.v1.keras.initializers.deserialize": "tf.keras.initializers.deserialize", + "tf.compat.v1.keras.initializers.get": "tf.keras.initializers.get", + "tf.compat.v1.keras.initializers.glorot_normal.__call__": "tf.compat.v1.keras.initializers.VarianceScaling.__call__", + "tf.compat.v1.keras.initializers.glorot_normal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.glorot_normal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.glorot_normal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.glorot_normal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.glorot_normal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.glorot_normal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.glorot_normal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.glorot_uniform.__call__": "tf.compat.v1.keras.initializers.VarianceScaling.__call__", + "tf.compat.v1.keras.initializers.glorot_uniform.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.glorot_uniform.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.glorot_uniform.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.glorot_uniform.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.glorot_uniform.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.glorot_uniform.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.glorot_uniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.identity": "tf.compat.v1.keras.initializers.Identity", + "tf.compat.v1.keras.initializers.identity.__call__": "tf.compat.v1.keras.initializers.Identity.__call__", + "tf.compat.v1.keras.initializers.identity.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.identity.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.identity.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.identity.__init__": "tf.compat.v1.keras.initializers.Identity.__init__", + "tf.compat.v1.keras.initializers.identity.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.identity.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.identity.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.identity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.identity.get_config": "tf.compat.v1.keras.initializers.Identity.get_config", + "tf.compat.v1.keras.initializers.normal": 
"tf.compat.v1.keras.initializers.RandomNormal", + "tf.compat.v1.keras.initializers.normal.__call__": "tf.compat.v1.random_normal_initializer.__call__", + "tf.compat.v1.keras.initializers.normal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.normal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.normal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.normal.__init__": "tf.compat.v1.keras.initializers.RandomNormal.__init__", + "tf.compat.v1.keras.initializers.normal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.normal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.normal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.normal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.normal.get_config": "tf.compat.v1.random_normal_initializer.get_config", + "tf.compat.v1.keras.initializers.ones": "tf.compat.v1.keras.initializers.Ones", + "tf.compat.v1.keras.initializers.ones.__call__": "tf.compat.v1.keras.initializers.Ones.__call__", + "tf.compat.v1.keras.initializers.ones.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.ones.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.ones.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.ones.__init__": "tf.compat.v1.keras.initializers.Ones.__init__", + "tf.compat.v1.keras.initializers.ones.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.ones.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.ones.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.ones.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.ones.get_config": "tf.compat.v1.keras.initializers.Ones.get_config", + "tf.compat.v1.keras.initializers.orthogonal": "tf.compat.v1.keras.initializers.Orthogonal", + "tf.compat.v1.keras.initializers.orthogonal.__call__": "tf.compat.v1.keras.initializers.Orthogonal.__call__", + "tf.compat.v1.keras.initializers.orthogonal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.orthogonal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.orthogonal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.orthogonal.__init__": "tf.compat.v1.keras.initializers.Orthogonal.__init__", + "tf.compat.v1.keras.initializers.orthogonal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.orthogonal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.orthogonal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.orthogonal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.orthogonal.get_config": "tf.compat.v1.keras.initializers.Orthogonal.get_config", + "tf.compat.v1.keras.initializers.random_normal": "tf.compat.v1.keras.initializers.RandomNormal", + "tf.compat.v1.keras.initializers.random_normal.__call__": "tf.compat.v1.random_normal_initializer.__call__", + "tf.compat.v1.keras.initializers.random_normal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.random_normal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.random_normal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.random_normal.__init__": "tf.compat.v1.keras.initializers.RandomNormal.__init__", + "tf.compat.v1.keras.initializers.random_normal.__le__": "tf.keras.Model.__le__", + 
"tf.compat.v1.keras.initializers.random_normal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.random_normal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.random_normal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.random_normal.get_config": "tf.compat.v1.random_normal_initializer.get_config", + "tf.compat.v1.keras.initializers.random_uniform": "tf.compat.v1.keras.initializers.RandomUniform", + "tf.compat.v1.keras.initializers.random_uniform.__call__": "tf.compat.v1.random_uniform_initializer.__call__", + "tf.compat.v1.keras.initializers.random_uniform.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.random_uniform.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.random_uniform.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.random_uniform.__init__": "tf.compat.v1.keras.initializers.RandomUniform.__init__", + "tf.compat.v1.keras.initializers.random_uniform.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.random_uniform.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.random_uniform.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.random_uniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.random_uniform.get_config": "tf.compat.v1.random_uniform_initializer.get_config", + "tf.compat.v1.keras.initializers.serialize": "tf.keras.initializers.serialize", + "tf.compat.v1.keras.initializers.truncated_normal": "tf.compat.v1.keras.initializers.TruncatedNormal", + "tf.compat.v1.keras.initializers.truncated_normal.__call__": "tf.compat.v1.truncated_normal_initializer.__call__", + "tf.compat.v1.keras.initializers.truncated_normal.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.truncated_normal.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.truncated_normal.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.truncated_normal.__init__": "tf.compat.v1.keras.initializers.TruncatedNormal.__init__", + "tf.compat.v1.keras.initializers.truncated_normal.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.truncated_normal.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.truncated_normal.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.truncated_normal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.truncated_normal.get_config": "tf.compat.v1.truncated_normal_initializer.get_config", + "tf.compat.v1.keras.initializers.uniform": "tf.compat.v1.keras.initializers.RandomUniform", + "tf.compat.v1.keras.initializers.uniform.__call__": "tf.compat.v1.random_uniform_initializer.__call__", + "tf.compat.v1.keras.initializers.uniform.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.uniform.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.uniform.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.uniform.__init__": "tf.compat.v1.keras.initializers.RandomUniform.__init__", + "tf.compat.v1.keras.initializers.uniform.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.uniform.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.uniform.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.uniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.uniform.get_config": 
"tf.compat.v1.random_uniform_initializer.get_config", + "tf.compat.v1.keras.initializers.zeros": "tf.compat.v1.keras.initializers.Zeros", + "tf.compat.v1.keras.initializers.zeros.__call__": "tf.compat.v1.keras.initializers.Zeros.__call__", + "tf.compat.v1.keras.initializers.zeros.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.initializers.zeros.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.initializers.zeros.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.initializers.zeros.__init__": "tf.compat.v1.keras.initializers.Zeros.__init__", + "tf.compat.v1.keras.initializers.zeros.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.initializers.zeros.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.initializers.zeros.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.initializers.zeros.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.initializers.zeros.get_config": "tf.compat.v1.keras.initializers.Zeros.get_config", + "tf.compat.v1.keras.layers.AbstractRNNCell": "tf.keras.layers.AbstractRNNCell", + "tf.compat.v1.keras.layers.AbstractRNNCell.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.AbstractRNNCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.AbstractRNNCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.AbstractRNNCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.AbstractRNNCell.__init__": "tf.keras.layers.Layer.__init__", + "tf.compat.v1.keras.layers.AbstractRNNCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.AbstractRNNCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.AbstractRNNCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.AbstractRNNCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.AbstractRNNCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.AbstractRNNCell.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.AbstractRNNCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.AbstractRNNCell.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.AbstractRNNCell.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.AbstractRNNCell.call": "tf.keras.layers.AbstractRNNCell.call", + "tf.compat.v1.keras.layers.AbstractRNNCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.AbstractRNNCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.layers.AbstractRNNCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.AbstractRNNCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.AbstractRNNCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.AbstractRNNCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.AbstractRNNCell.get_config": "tf.keras.layers.Layer.get_config", + "tf.compat.v1.keras.layers.AbstractRNNCell.get_initial_state": "tf.keras.layers.AbstractRNNCell.get_initial_state", + "tf.compat.v1.keras.layers.AbstractRNNCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.AbstractRNNCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.AbstractRNNCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.AbstractRNNCell.losses": "tf.keras.layers.Layer.losses", + 
"tf.compat.v1.keras.layers.AbstractRNNCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.AbstractRNNCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.AbstractRNNCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.AbstractRNNCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.AbstractRNNCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.AbstractRNNCell.output_size": "tf.keras.layers.AbstractRNNCell.output_size", + "tf.compat.v1.keras.layers.AbstractRNNCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.AbstractRNNCell.state_size": "tf.keras.layers.AbstractRNNCell.state_size", + "tf.compat.v1.keras.layers.AbstractRNNCell.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.AbstractRNNCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.AbstractRNNCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.AbstractRNNCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Activation": "tf.keras.layers.Activation", + "tf.compat.v1.keras.layers.Activation.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Activation.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Activation.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Activation.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Activation.__init__": "tf.keras.layers.Activation.__init__", + "tf.compat.v1.keras.layers.Activation.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Activation.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Activation.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Activation.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Activation.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Activation.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Activation.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Activation.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Activation.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Activation.call": "tf.keras.layers.Activation.call", + "tf.compat.v1.keras.layers.Activation.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Activation.compute_output_shape": "tf.keras.layers.Activation.compute_output_shape", + "tf.compat.v1.keras.layers.Activation.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Activation.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Activation.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Activation.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Activation.get_config": "tf.keras.layers.Activation.get_config", + "tf.compat.v1.keras.layers.Activation.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Activation.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Activation.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Activation.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Activation.metrics": "tf.keras.layers.Layer.metrics", + 
"tf.compat.v1.keras.layers.Activation.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Activation.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Activation.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Activation.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Activation.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Activation.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Activation.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Activation.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Activation.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.ActivityRegularization": "tf.keras.layers.ActivityRegularization", + "tf.compat.v1.keras.layers.ActivityRegularization.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.ActivityRegularization.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.ActivityRegularization.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.ActivityRegularization.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.ActivityRegularization.__init__": "tf.keras.layers.ActivityRegularization.__init__", + "tf.compat.v1.keras.layers.ActivityRegularization.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.ActivityRegularization.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.ActivityRegularization.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.ActivityRegularization.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.ActivityRegularization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.ActivityRegularization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.ActivityRegularization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.ActivityRegularization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.ActivityRegularization.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.ActivityRegularization.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.layers.ActivityRegularization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.ActivityRegularization.compute_output_shape": "tf.keras.layers.ActivityRegularization.compute_output_shape", + "tf.compat.v1.keras.layers.ActivityRegularization.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.ActivityRegularization.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.ActivityRegularization.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.ActivityRegularization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.ActivityRegularization.get_config": "tf.keras.layers.ActivityRegularization.get_config", + "tf.compat.v1.keras.layers.ActivityRegularization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.ActivityRegularization.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.ActivityRegularization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.ActivityRegularization.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.ActivityRegularization.metrics": 
"tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.ActivityRegularization.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.ActivityRegularization.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.ActivityRegularization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.ActivityRegularization.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.ActivityRegularization.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.ActivityRegularization.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.ActivityRegularization.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.ActivityRegularization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.ActivityRegularization.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Add": "tf.keras.layers.Add", + "tf.compat.v1.keras.layers.Add.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Add.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Add.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Add.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Add.__init__": "tf.keras.layers.Add.__init__", + "tf.compat.v1.keras.layers.Add.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Add.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Add.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Add.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Add.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Add.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Add.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Add.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Add.build": "tf.keras.layers.Add.build", + "tf.compat.v1.keras.layers.Add.call": "tf.keras.layers.Add.call", + "tf.compat.v1.keras.layers.Add.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.compat.v1.keras.layers.Add.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + "tf.compat.v1.keras.layers.Add.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Add.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Add.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Add.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Add.get_config": "tf.keras.layers.Layer.get_config", + "tf.compat.v1.keras.layers.Add.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Add.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Add.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Add.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Add.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Add.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Add.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Add.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Add.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Add.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Add.submodules": 
"tf.Module.submodules", + "tf.compat.v1.keras.layers.Add.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Add.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Add.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.AdditiveAttention": "tf.keras.layers.AdditiveAttention", + "tf.compat.v1.keras.layers.AdditiveAttention.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.AdditiveAttention.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.AdditiveAttention.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.AdditiveAttention.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.AdditiveAttention.__init__": "tf.keras.layers.AdditiveAttention.__init__", + "tf.compat.v1.keras.layers.AdditiveAttention.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.AdditiveAttention.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.AdditiveAttention.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.AdditiveAttention.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.AdditiveAttention.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.AdditiveAttention.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.AdditiveAttention.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.AdditiveAttention.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.AdditiveAttention.build": "tf.keras.layers.AdditiveAttention.build", + "tf.compat.v1.keras.layers.AdditiveAttention.call": "tf.keras.layers.AdditiveAttention.call", + "tf.compat.v1.keras.layers.AdditiveAttention.compute_mask": "tf.keras.layers.AdditiveAttention.compute_mask", + "tf.compat.v1.keras.layers.AdditiveAttention.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.layers.AdditiveAttention.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.AdditiveAttention.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.AdditiveAttention.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.AdditiveAttention.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.AdditiveAttention.get_config": "tf.keras.layers.AdditiveAttention.get_config", + "tf.compat.v1.keras.layers.AdditiveAttention.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.AdditiveAttention.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.AdditiveAttention.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.AdditiveAttention.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.AdditiveAttention.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.AdditiveAttention.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.AdditiveAttention.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.AdditiveAttention.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.AdditiveAttention.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.AdditiveAttention.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.AdditiveAttention.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.AdditiveAttention.trainable": 
"tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.AdditiveAttention.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.AdditiveAttention.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.AlphaDropout": "tf.keras.layers.AlphaDropout", + "tf.compat.v1.keras.layers.AlphaDropout.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.AlphaDropout.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.AlphaDropout.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.AlphaDropout.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.AlphaDropout.__init__": "tf.keras.layers.AlphaDropout.__init__", + "tf.compat.v1.keras.layers.AlphaDropout.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.AlphaDropout.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.AlphaDropout.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.AlphaDropout.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.AlphaDropout.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.AlphaDropout.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.AlphaDropout.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.AlphaDropout.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.AlphaDropout.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.AlphaDropout.call": "tf.keras.layers.AlphaDropout.call", + "tf.compat.v1.keras.layers.AlphaDropout.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.AlphaDropout.compute_output_shape": "tf.keras.layers.AlphaDropout.compute_output_shape", + "tf.compat.v1.keras.layers.AlphaDropout.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.AlphaDropout.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.AlphaDropout.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.AlphaDropout.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.AlphaDropout.get_config": "tf.keras.layers.AlphaDropout.get_config", + "tf.compat.v1.keras.layers.AlphaDropout.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.AlphaDropout.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.AlphaDropout.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.AlphaDropout.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.AlphaDropout.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.AlphaDropout.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.AlphaDropout.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.AlphaDropout.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.AlphaDropout.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.AlphaDropout.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.AlphaDropout.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.AlphaDropout.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.AlphaDropout.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.AlphaDropout.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Attention": 
"tf.keras.layers.Attention", + "tf.compat.v1.keras.layers.Attention.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Attention.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Attention.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Attention.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Attention.__init__": "tf.keras.layers.Attention.__init__", + "tf.compat.v1.keras.layers.Attention.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Attention.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Attention.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Attention.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Attention.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Attention.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Attention.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Attention.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Attention.build": "tf.keras.layers.Attention.build", + "tf.compat.v1.keras.layers.Attention.call": "tf.keras.layers.AdditiveAttention.call", + "tf.compat.v1.keras.layers.Attention.compute_mask": "tf.keras.layers.AdditiveAttention.compute_mask", + "tf.compat.v1.keras.layers.Attention.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.layers.Attention.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Attention.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Attention.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Attention.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Attention.get_config": "tf.keras.layers.Attention.get_config", + "tf.compat.v1.keras.layers.Attention.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Attention.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Attention.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Attention.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Attention.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Attention.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Attention.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Attention.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Attention.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Attention.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Attention.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Attention.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Attention.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Attention.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Average": "tf.keras.layers.Average", + "tf.compat.v1.keras.layers.Average.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Average.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Average.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Average.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Average.__init__": 
"tf.keras.layers.Add.__init__", + "tf.compat.v1.keras.layers.Average.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Average.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Average.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Average.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Average.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Average.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Average.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Average.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Average.build": "tf.keras.layers.Add.build", + "tf.compat.v1.keras.layers.Average.call": "tf.keras.layers.Add.call", + "tf.compat.v1.keras.layers.Average.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.compat.v1.keras.layers.Average.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + "tf.compat.v1.keras.layers.Average.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Average.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Average.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Average.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Average.get_config": "tf.keras.layers.Layer.get_config", + "tf.compat.v1.keras.layers.Average.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Average.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Average.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Average.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Average.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Average.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Average.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Average.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Average.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Average.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Average.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Average.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Average.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Average.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.AveragePooling1D": "tf.keras.layers.AveragePooling1D", + "tf.compat.v1.keras.layers.AveragePooling1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.AveragePooling1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.AveragePooling1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.AveragePooling1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.AveragePooling1D.__init__": "tf.keras.layers.AveragePooling1D.__init__", + "tf.compat.v1.keras.layers.AveragePooling1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.AveragePooling1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.AveragePooling1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.AveragePooling1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.AveragePooling1D.activity_regularizer": 
"tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.AveragePooling1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.AveragePooling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.AveragePooling1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.AveragePooling1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.AveragePooling1D.call": "tf.keras.layers.AveragePooling1D.call", + "tf.compat.v1.keras.layers.AveragePooling1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.AveragePooling1D.compute_output_shape": "tf.keras.layers.AveragePooling1D.compute_output_shape", + "tf.compat.v1.keras.layers.AveragePooling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.AveragePooling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.AveragePooling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.AveragePooling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.AveragePooling1D.get_config": "tf.keras.layers.AveragePooling1D.get_config", + "tf.compat.v1.keras.layers.AveragePooling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.AveragePooling1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.AveragePooling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.AveragePooling1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.AveragePooling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.AveragePooling1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.AveragePooling1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.AveragePooling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.AveragePooling1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.AveragePooling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.AveragePooling1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.AveragePooling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.AveragePooling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.AveragePooling1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.AveragePooling2D": "tf.keras.layers.AveragePooling2D", + "tf.compat.v1.keras.layers.AveragePooling2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.AveragePooling2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.AveragePooling2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.AveragePooling2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.AveragePooling2D.__init__": "tf.keras.layers.AveragePooling2D.__init__", + "tf.compat.v1.keras.layers.AveragePooling2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.AveragePooling2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.AveragePooling2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.AveragePooling2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.AveragePooling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.AveragePooling2D.add_loss": 
"tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.AveragePooling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.AveragePooling2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.AveragePooling2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.AveragePooling2D.call": "tf.keras.layers.AveragePooling2D.call", + "tf.compat.v1.keras.layers.AveragePooling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.AveragePooling2D.compute_output_shape": "tf.keras.layers.AveragePooling2D.compute_output_shape", + "tf.compat.v1.keras.layers.AveragePooling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.AveragePooling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.AveragePooling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.AveragePooling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.AveragePooling2D.get_config": "tf.keras.layers.AveragePooling2D.get_config", + "tf.compat.v1.keras.layers.AveragePooling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.AveragePooling2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.AveragePooling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.AveragePooling2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.AveragePooling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.AveragePooling2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.AveragePooling2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.AveragePooling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.AveragePooling2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.AveragePooling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.AveragePooling2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.AveragePooling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.AveragePooling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.AveragePooling2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.AveragePooling3D": "tf.keras.layers.AveragePooling3D", + "tf.compat.v1.keras.layers.AveragePooling3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.AveragePooling3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.AveragePooling3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.AveragePooling3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.AveragePooling3D.__init__": "tf.keras.layers.AveragePooling3D.__init__", + "tf.compat.v1.keras.layers.AveragePooling3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.AveragePooling3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.AveragePooling3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.AveragePooling3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.AveragePooling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.AveragePooling3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.AveragePooling3D.add_metric": 
"tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.AveragePooling3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.AveragePooling3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.AveragePooling3D.call": "tf.keras.layers.AveragePooling3D.call", + "tf.compat.v1.keras.layers.AveragePooling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.AveragePooling3D.compute_output_shape": "tf.keras.layers.AveragePooling3D.compute_output_shape", + "tf.compat.v1.keras.layers.AveragePooling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.AveragePooling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.AveragePooling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.AveragePooling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.AveragePooling3D.get_config": "tf.keras.layers.AveragePooling3D.get_config", + "tf.compat.v1.keras.layers.AveragePooling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.AveragePooling3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.AveragePooling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.AveragePooling3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.AveragePooling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.AveragePooling3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.AveragePooling3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.AveragePooling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.AveragePooling3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.AveragePooling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.AveragePooling3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.AveragePooling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.AveragePooling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.AveragePooling3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.AvgPool1D": "tf.keras.layers.AveragePooling1D", + "tf.compat.v1.keras.layers.AvgPool1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.AvgPool1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.AvgPool1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.AvgPool1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.AvgPool1D.__init__": "tf.keras.layers.AveragePooling1D.__init__", + "tf.compat.v1.keras.layers.AvgPool1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.AvgPool1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.AvgPool1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.AvgPool1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.AvgPool1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.AvgPool1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.AvgPool1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.AvgPool1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.AvgPool1D.build": "tf.keras.layers.Layer.build", + 
"tf.compat.v1.keras.layers.AvgPool1D.call": "tf.keras.layers.AveragePooling1D.call", + "tf.compat.v1.keras.layers.AvgPool1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.AvgPool1D.compute_output_shape": "tf.keras.layers.AveragePooling1D.compute_output_shape", + "tf.compat.v1.keras.layers.AvgPool1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.AvgPool1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.AvgPool1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.AvgPool1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.AvgPool1D.get_config": "tf.keras.layers.AveragePooling1D.get_config", + "tf.compat.v1.keras.layers.AvgPool1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.AvgPool1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.AvgPool1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.AvgPool1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.AvgPool1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.AvgPool1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.AvgPool1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.AvgPool1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.AvgPool1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.AvgPool1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.AvgPool1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.AvgPool1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.AvgPool1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.AvgPool1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.AvgPool2D": "tf.keras.layers.AveragePooling2D", + "tf.compat.v1.keras.layers.AvgPool2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.AvgPool2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.AvgPool2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.AvgPool2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.AvgPool2D.__init__": "tf.keras.layers.AveragePooling2D.__init__", + "tf.compat.v1.keras.layers.AvgPool2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.AvgPool2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.AvgPool2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.AvgPool2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.AvgPool2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.AvgPool2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.AvgPool2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.AvgPool2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.AvgPool2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.AvgPool2D.call": "tf.keras.layers.AveragePooling2D.call", + "tf.compat.v1.keras.layers.AvgPool2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.AvgPool2D.compute_output_shape": "tf.keras.layers.AveragePooling2D.compute_output_shape", + "tf.compat.v1.keras.layers.AvgPool2D.compute_output_signature": 
"tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.AvgPool2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.AvgPool2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.AvgPool2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.AvgPool2D.get_config": "tf.keras.layers.AveragePooling2D.get_config", + "tf.compat.v1.keras.layers.AvgPool2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.AvgPool2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.AvgPool2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.AvgPool2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.AvgPool2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.AvgPool2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.AvgPool2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.AvgPool2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.AvgPool2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.AvgPool2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.AvgPool2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.AvgPool2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.AvgPool2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.AvgPool2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.AvgPool3D": "tf.keras.layers.AveragePooling3D", + "tf.compat.v1.keras.layers.AvgPool3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.AvgPool3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.AvgPool3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.AvgPool3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.AvgPool3D.__init__": "tf.keras.layers.AveragePooling3D.__init__", + "tf.compat.v1.keras.layers.AvgPool3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.AvgPool3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.AvgPool3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.AvgPool3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.AvgPool3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.AvgPool3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.AvgPool3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.AvgPool3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.AvgPool3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.AvgPool3D.call": "tf.keras.layers.AveragePooling3D.call", + "tf.compat.v1.keras.layers.AvgPool3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.AvgPool3D.compute_output_shape": "tf.keras.layers.AveragePooling3D.compute_output_shape", + "tf.compat.v1.keras.layers.AvgPool3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.AvgPool3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.AvgPool3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.AvgPool3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.AvgPool3D.get_config": 
"tf.keras.layers.AveragePooling3D.get_config", + "tf.compat.v1.keras.layers.AvgPool3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.AvgPool3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.AvgPool3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.AvgPool3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.AvgPool3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.AvgPool3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.AvgPool3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.AvgPool3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.AvgPool3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.AvgPool3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.AvgPool3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.AvgPool3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.AvgPool3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.AvgPool3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.BatchNormalization.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.BatchNormalization.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.BatchNormalization.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.BatchNormalization.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.BatchNormalization.__init__": "tf.keras.layers.BatchNormalization.__init__", + "tf.compat.v1.keras.layers.BatchNormalization.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.BatchNormalization.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.BatchNormalization.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.BatchNormalization.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.BatchNormalization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.BatchNormalization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.BatchNormalization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.BatchNormalization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.BatchNormalization.build": "tf.keras.layers.BatchNormalization.build", + "tf.compat.v1.keras.layers.BatchNormalization.call": "tf.keras.layers.BatchNormalization.call", + "tf.compat.v1.keras.layers.BatchNormalization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.BatchNormalization.compute_output_shape": "tf.keras.layers.BatchNormalization.compute_output_shape", + "tf.compat.v1.keras.layers.BatchNormalization.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.BatchNormalization.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.BatchNormalization.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.BatchNormalization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.BatchNormalization.get_config": "tf.keras.layers.BatchNormalization.get_config", + "tf.compat.v1.keras.layers.BatchNormalization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.BatchNormalization.input": "tf.keras.layers.Layer.input", + 
"tf.compat.v1.keras.layers.BatchNormalization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.BatchNormalization.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.BatchNormalization.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.BatchNormalization.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.BatchNormalization.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.BatchNormalization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.BatchNormalization.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.BatchNormalization.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.BatchNormalization.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.BatchNormalization.trainable": "tf.keras.layers.BatchNormalization.trainable", + "tf.compat.v1.keras.layers.BatchNormalization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.BatchNormalization.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Bidirectional": "tf.keras.layers.Bidirectional", + "tf.compat.v1.keras.layers.Bidirectional.__call__": "tf.keras.layers.Bidirectional.__call__", + "tf.compat.v1.keras.layers.Bidirectional.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Bidirectional.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Bidirectional.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Bidirectional.__init__": "tf.keras.layers.Bidirectional.__init__", + "tf.compat.v1.keras.layers.Bidirectional.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Bidirectional.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Bidirectional.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Bidirectional.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Bidirectional.activity_regularizer": "tf.keras.layers.Wrapper.activity_regularizer", + "tf.compat.v1.keras.layers.Bidirectional.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Bidirectional.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Bidirectional.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Bidirectional.build": "tf.keras.layers.Bidirectional.build", + "tf.compat.v1.keras.layers.Bidirectional.call": "tf.keras.layers.Bidirectional.call", + "tf.compat.v1.keras.layers.Bidirectional.compute_mask": "tf.keras.layers.Bidirectional.compute_mask", + "tf.compat.v1.keras.layers.Bidirectional.compute_output_shape": "tf.keras.layers.Bidirectional.compute_output_shape", + "tf.compat.v1.keras.layers.Bidirectional.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Bidirectional.constraints": "tf.keras.layers.Bidirectional.constraints", + "tf.compat.v1.keras.layers.Bidirectional.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Bidirectional.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Bidirectional.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Bidirectional.get_config": "tf.keras.layers.Bidirectional.get_config", + "tf.compat.v1.keras.layers.Bidirectional.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Bidirectional.input": "tf.keras.layers.Layer.input", + 
"tf.compat.v1.keras.layers.Bidirectional.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Bidirectional.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Bidirectional.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Bidirectional.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Bidirectional.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Bidirectional.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Bidirectional.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Bidirectional.reset_states": "tf.keras.layers.Bidirectional.reset_states", + "tf.compat.v1.keras.layers.Bidirectional.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Bidirectional.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Bidirectional.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Bidirectional.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Bidirectional.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Concatenate": "tf.keras.layers.Concatenate", + "tf.compat.v1.keras.layers.Concatenate.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Concatenate.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Concatenate.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Concatenate.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Concatenate.__init__": "tf.keras.layers.Concatenate.__init__", + "tf.compat.v1.keras.layers.Concatenate.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Concatenate.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Concatenate.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Concatenate.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Concatenate.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Concatenate.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Concatenate.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Concatenate.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Concatenate.build": "tf.keras.layers.Concatenate.build", + "tf.compat.v1.keras.layers.Concatenate.call": "tf.keras.layers.Add.call", + "tf.compat.v1.keras.layers.Concatenate.compute_mask": "tf.keras.layers.Concatenate.compute_mask", + "tf.compat.v1.keras.layers.Concatenate.compute_output_shape": "tf.keras.layers.Concatenate.compute_output_shape", + "tf.compat.v1.keras.layers.Concatenate.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Concatenate.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Concatenate.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Concatenate.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Concatenate.get_config": "tf.keras.layers.Concatenate.get_config", + "tf.compat.v1.keras.layers.Concatenate.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Concatenate.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Concatenate.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Concatenate.losses": "tf.keras.layers.Layer.losses", + 
"tf.compat.v1.keras.layers.Concatenate.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Concatenate.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Concatenate.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Concatenate.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Concatenate.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Concatenate.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Concatenate.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Concatenate.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Concatenate.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Concatenate.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Conv1D": "tf.keras.layers.Conv1D", + "tf.compat.v1.keras.layers.Conv1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Conv1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Conv1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Conv1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Conv1D.__init__": "tf.keras.layers.Conv1D.__init__", + "tf.compat.v1.keras.layers.Conv1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Conv1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Conv1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Conv1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Conv1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Conv1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Conv1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Conv1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Conv1D.build": "tf.keras.layers.Conv1D.build", + "tf.compat.v1.keras.layers.Conv1D.call": "tf.keras.layers.Conv1D.call", + "tf.compat.v1.keras.layers.Conv1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Conv1D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.keras.layers.Conv1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Conv1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Conv1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Conv1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Conv1D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.compat.v1.keras.layers.Conv1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Conv1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Conv1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Conv1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Conv1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Conv1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Conv1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Conv1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Conv1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Conv1D.set_weights": 
"tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Conv1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Conv1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Conv1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Conv1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Conv2D": "tf.keras.layers.Conv2D", + "tf.compat.v1.keras.layers.Conv2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Conv2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Conv2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Conv2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Conv2D.__init__": "tf.keras.layers.Conv2D.__init__", + "tf.compat.v1.keras.layers.Conv2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Conv2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Conv2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Conv2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Conv2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Conv2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Conv2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Conv2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Conv2D.build": "tf.keras.layers.Conv1D.build", + "tf.compat.v1.keras.layers.Conv2D.call": "tf.keras.layers.Conv1D.call", + "tf.compat.v1.keras.layers.Conv2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Conv2D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.keras.layers.Conv2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Conv2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Conv2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Conv2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Conv2D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.compat.v1.keras.layers.Conv2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Conv2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Conv2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Conv2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Conv2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Conv2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Conv2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Conv2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Conv2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Conv2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Conv2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Conv2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Conv2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Conv2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Conv2DTranspose": "tf.keras.layers.Conv2DTranspose", + "tf.compat.v1.keras.layers.Conv2DTranspose.__call__": 
"tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Conv2DTranspose.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Conv2DTranspose.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Conv2DTranspose.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Conv2DTranspose.__init__": "tf.keras.layers.Conv2DTranspose.__init__", + "tf.compat.v1.keras.layers.Conv2DTranspose.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Conv2DTranspose.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Conv2DTranspose.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Conv2DTranspose.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Conv2DTranspose.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Conv2DTranspose.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Conv2DTranspose.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Conv2DTranspose.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Conv2DTranspose.build": "tf.keras.layers.Conv2DTranspose.build", + "tf.compat.v1.keras.layers.Conv2DTranspose.call": "tf.keras.layers.Conv2DTranspose.call", + "tf.compat.v1.keras.layers.Conv2DTranspose.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Conv2DTranspose.compute_output_shape": "tf.keras.layers.Conv2DTranspose.compute_output_shape", + "tf.compat.v1.keras.layers.Conv2DTranspose.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Conv2DTranspose.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Conv2DTranspose.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Conv2DTranspose.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Conv2DTranspose.get_config": "tf.keras.layers.Conv2DTranspose.get_config", + "tf.compat.v1.keras.layers.Conv2DTranspose.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Conv2DTranspose.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Conv2DTranspose.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Conv2DTranspose.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Conv2DTranspose.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Conv2DTranspose.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Conv2DTranspose.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Conv2DTranspose.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Conv2DTranspose.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Conv2DTranspose.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Conv2DTranspose.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Conv2DTranspose.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Conv2DTranspose.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Conv2DTranspose.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Conv3D": "tf.keras.layers.Conv3D", + "tf.compat.v1.keras.layers.Conv3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Conv3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Conv3D.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.keras.layers.Conv3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Conv3D.__init__": "tf.keras.layers.Conv3D.__init__", + "tf.compat.v1.keras.layers.Conv3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Conv3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Conv3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Conv3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Conv3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Conv3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Conv3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Conv3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Conv3D.build": "tf.keras.layers.Conv1D.build", + "tf.compat.v1.keras.layers.Conv3D.call": "tf.keras.layers.Conv1D.call", + "tf.compat.v1.keras.layers.Conv3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Conv3D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.keras.layers.Conv3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Conv3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Conv3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Conv3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Conv3D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.compat.v1.keras.layers.Conv3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Conv3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Conv3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Conv3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Conv3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Conv3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Conv3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Conv3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Conv3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Conv3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Conv3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Conv3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Conv3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Conv3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Conv3DTranspose": "tf.keras.layers.Conv3DTranspose", + "tf.compat.v1.keras.layers.Conv3DTranspose.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Conv3DTranspose.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Conv3DTranspose.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Conv3DTranspose.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Conv3DTranspose.__init__": "tf.keras.layers.Conv3DTranspose.__init__", + "tf.compat.v1.keras.layers.Conv3DTranspose.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Conv3DTranspose.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Conv3DTranspose.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Conv3DTranspose.__new__": 
"tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Conv3DTranspose.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Conv3DTranspose.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Conv3DTranspose.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Conv3DTranspose.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Conv3DTranspose.build": "tf.keras.layers.Conv3DTranspose.build", + "tf.compat.v1.keras.layers.Conv3DTranspose.call": "tf.keras.layers.Conv3DTranspose.call", + "tf.compat.v1.keras.layers.Conv3DTranspose.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Conv3DTranspose.compute_output_shape": "tf.keras.layers.Conv3DTranspose.compute_output_shape", + "tf.compat.v1.keras.layers.Conv3DTranspose.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Conv3DTranspose.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Conv3DTranspose.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Conv3DTranspose.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Conv3DTranspose.get_config": "tf.keras.layers.Conv3DTranspose.get_config", + "tf.compat.v1.keras.layers.Conv3DTranspose.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Conv3DTranspose.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Conv3DTranspose.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Conv3DTranspose.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Conv3DTranspose.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Conv3DTranspose.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Conv3DTranspose.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Conv3DTranspose.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Conv3DTranspose.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Conv3DTranspose.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Conv3DTranspose.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Conv3DTranspose.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Conv3DTranspose.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Conv3DTranspose.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.ConvLSTM2D": "tf.keras.layers.ConvLSTM2D", + "tf.compat.v1.keras.layers.ConvLSTM2D.__call__": "tf.keras.layers.RNN.__call__", + "tf.compat.v1.keras.layers.ConvLSTM2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.ConvLSTM2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.ConvLSTM2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.ConvLSTM2D.__init__": "tf.keras.layers.ConvLSTM2D.__init__", + "tf.compat.v1.keras.layers.ConvLSTM2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.ConvLSTM2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.ConvLSTM2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.ConvLSTM2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.ConvLSTM2D.activation": "tf.keras.layers.ConvLSTM2D.activation", + "tf.compat.v1.keras.layers.ConvLSTM2D.activity_regularizer": 
"tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.ConvLSTM2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.ConvLSTM2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.ConvLSTM2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.ConvLSTM2D.bias_constraint": "tf.keras.layers.ConvLSTM2D.bias_constraint", + "tf.compat.v1.keras.layers.ConvLSTM2D.bias_initializer": "tf.keras.layers.ConvLSTM2D.bias_initializer", + "tf.compat.v1.keras.layers.ConvLSTM2D.bias_regularizer": "tf.keras.layers.ConvLSTM2D.bias_regularizer", + "tf.compat.v1.keras.layers.ConvLSTM2D.build": "tf.keras.layers.ConvLSTM2D.build", + "tf.compat.v1.keras.layers.ConvLSTM2D.call": "tf.keras.layers.ConvLSTM2D.call", + "tf.compat.v1.keras.layers.ConvLSTM2D.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.compat.v1.keras.layers.ConvLSTM2D.compute_output_shape": "tf.keras.layers.ConvLSTM2D.compute_output_shape", + "tf.compat.v1.keras.layers.ConvLSTM2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.ConvLSTM2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.ConvLSTM2D.data_format": "tf.keras.layers.ConvLSTM2D.data_format", + "tf.compat.v1.keras.layers.ConvLSTM2D.dilation_rate": "tf.keras.layers.ConvLSTM2D.dilation_rate", + "tf.compat.v1.keras.layers.ConvLSTM2D.dropout": "tf.keras.layers.ConvLSTM2D.dropout", + "tf.compat.v1.keras.layers.ConvLSTM2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.ConvLSTM2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.ConvLSTM2D.filters": "tf.keras.layers.ConvLSTM2D.filters", + "tf.compat.v1.keras.layers.ConvLSTM2D.get_config": "tf.keras.layers.ConvLSTM2D.get_config", + "tf.compat.v1.keras.layers.ConvLSTM2D.get_initial_state": "tf.keras.layers.ConvLSTM2D.get_initial_state", + "tf.compat.v1.keras.layers.ConvLSTM2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.ConvLSTM2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.ConvLSTM2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.ConvLSTM2D.kernel_constraint": "tf.keras.layers.ConvLSTM2D.kernel_constraint", + "tf.compat.v1.keras.layers.ConvLSTM2D.kernel_initializer": "tf.keras.layers.ConvLSTM2D.kernel_initializer", + "tf.compat.v1.keras.layers.ConvLSTM2D.kernel_regularizer": "tf.keras.layers.ConvLSTM2D.kernel_regularizer", + "tf.compat.v1.keras.layers.ConvLSTM2D.kernel_size": "tf.keras.layers.ConvLSTM2D.kernel_size", + "tf.compat.v1.keras.layers.ConvLSTM2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.ConvLSTM2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.ConvLSTM2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.ConvLSTM2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.ConvLSTM2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.ConvLSTM2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.ConvLSTM2D.padding": "tf.keras.layers.ConvLSTM2D.padding", + "tf.compat.v1.keras.layers.ConvLSTM2D.recurrent_activation": "tf.keras.layers.ConvLSTM2D.recurrent_activation", + "tf.compat.v1.keras.layers.ConvLSTM2D.recurrent_constraint": "tf.keras.layers.ConvLSTM2D.recurrent_constraint", + "tf.compat.v1.keras.layers.ConvLSTM2D.recurrent_dropout": 
"tf.keras.layers.ConvLSTM2D.recurrent_dropout", + "tf.compat.v1.keras.layers.ConvLSTM2D.recurrent_initializer": "tf.keras.layers.ConvLSTM2D.recurrent_initializer", + "tf.compat.v1.keras.layers.ConvLSTM2D.recurrent_regularizer": "tf.keras.layers.ConvLSTM2D.recurrent_regularizer", + "tf.compat.v1.keras.layers.ConvLSTM2D.reset_states": "tf.keras.layers.ConvLSTM2D.reset_states", + "tf.compat.v1.keras.layers.ConvLSTM2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.ConvLSTM2D.states": "tf.keras.layers.RNN.states", + "tf.compat.v1.keras.layers.ConvLSTM2D.strides": "tf.keras.layers.ConvLSTM2D.strides", + "tf.compat.v1.keras.layers.ConvLSTM2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.ConvLSTM2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.ConvLSTM2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.ConvLSTM2D.unit_forget_bias": "tf.keras.layers.ConvLSTM2D.unit_forget_bias", + "tf.compat.v1.keras.layers.ConvLSTM2D.use_bias": "tf.keras.layers.ConvLSTM2D.use_bias", + "tf.compat.v1.keras.layers.ConvLSTM2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Convolution1D": "tf.keras.layers.Conv1D", + "tf.compat.v1.keras.layers.Convolution1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Convolution1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Convolution1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Convolution1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Convolution1D.__init__": "tf.keras.layers.Conv1D.__init__", + "tf.compat.v1.keras.layers.Convolution1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Convolution1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Convolution1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Convolution1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Convolution1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Convolution1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Convolution1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Convolution1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Convolution1D.build": "tf.keras.layers.Conv1D.build", + "tf.compat.v1.keras.layers.Convolution1D.call": "tf.keras.layers.Conv1D.call", + "tf.compat.v1.keras.layers.Convolution1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Convolution1D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.keras.layers.Convolution1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Convolution1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Convolution1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Convolution1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Convolution1D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.compat.v1.keras.layers.Convolution1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Convolution1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Convolution1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Convolution1D.losses": 
"tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Convolution1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Convolution1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Convolution1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Convolution1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Convolution1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Convolution1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Convolution1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Convolution1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Convolution1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Convolution1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Convolution2D": "tf.keras.layers.Conv2D", + "tf.compat.v1.keras.layers.Convolution2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Convolution2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Convolution2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Convolution2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Convolution2D.__init__": "tf.keras.layers.Conv2D.__init__", + "tf.compat.v1.keras.layers.Convolution2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Convolution2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Convolution2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Convolution2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Convolution2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Convolution2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Convolution2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Convolution2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Convolution2D.build": "tf.keras.layers.Conv1D.build", + "tf.compat.v1.keras.layers.Convolution2D.call": "tf.keras.layers.Conv1D.call", + "tf.compat.v1.keras.layers.Convolution2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Convolution2D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.keras.layers.Convolution2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Convolution2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Convolution2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Convolution2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Convolution2D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.compat.v1.keras.layers.Convolution2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Convolution2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Convolution2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Convolution2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Convolution2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Convolution2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Convolution2D.name_scope": "tf.Module.name_scope", + 
"tf.compat.v1.keras.layers.Convolution2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Convolution2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Convolution2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Convolution2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Convolution2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Convolution2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Convolution2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Convolution2DTranspose": "tf.keras.layers.Conv2DTranspose", + "tf.compat.v1.keras.layers.Convolution2DTranspose.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Convolution2DTranspose.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Convolution2DTranspose.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Convolution2DTranspose.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Convolution2DTranspose.__init__": "tf.keras.layers.Conv2DTranspose.__init__", + "tf.compat.v1.keras.layers.Convolution2DTranspose.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Convolution2DTranspose.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Convolution2DTranspose.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Convolution2DTranspose.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Convolution2DTranspose.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Convolution2DTranspose.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Convolution2DTranspose.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Convolution2DTranspose.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Convolution2DTranspose.build": "tf.keras.layers.Conv2DTranspose.build", + "tf.compat.v1.keras.layers.Convolution2DTranspose.call": "tf.keras.layers.Conv2DTranspose.call", + "tf.compat.v1.keras.layers.Convolution2DTranspose.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Convolution2DTranspose.compute_output_shape": "tf.keras.layers.Conv2DTranspose.compute_output_shape", + "tf.compat.v1.keras.layers.Convolution2DTranspose.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Convolution2DTranspose.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Convolution2DTranspose.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Convolution2DTranspose.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Convolution2DTranspose.get_config": "tf.keras.layers.Conv2DTranspose.get_config", + "tf.compat.v1.keras.layers.Convolution2DTranspose.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Convolution2DTranspose.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Convolution2DTranspose.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Convolution2DTranspose.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Convolution2DTranspose.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Convolution2DTranspose.name": "tf.keras.layers.Layer.name", + 
"tf.compat.v1.keras.layers.Convolution2DTranspose.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Convolution2DTranspose.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Convolution2DTranspose.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Convolution2DTranspose.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Convolution2DTranspose.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Convolution2DTranspose.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Convolution2DTranspose.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Convolution2DTranspose.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Convolution3D": "tf.keras.layers.Conv3D", + "tf.compat.v1.keras.layers.Convolution3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Convolution3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Convolution3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Convolution3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Convolution3D.__init__": "tf.keras.layers.Conv3D.__init__", + "tf.compat.v1.keras.layers.Convolution3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Convolution3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Convolution3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Convolution3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Convolution3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Convolution3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Convolution3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Convolution3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Convolution3D.build": "tf.keras.layers.Conv1D.build", + "tf.compat.v1.keras.layers.Convolution3D.call": "tf.keras.layers.Conv1D.call", + "tf.compat.v1.keras.layers.Convolution3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Convolution3D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.keras.layers.Convolution3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Convolution3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Convolution3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Convolution3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Convolution3D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.compat.v1.keras.layers.Convolution3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Convolution3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Convolution3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Convolution3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Convolution3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Convolution3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Convolution3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Convolution3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + 
"tf.compat.v1.keras.layers.Convolution3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Convolution3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Convolution3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Convolution3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Convolution3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Convolution3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Convolution3DTranspose": "tf.keras.layers.Conv3DTranspose", + "tf.compat.v1.keras.layers.Convolution3DTranspose.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Convolution3DTranspose.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Convolution3DTranspose.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Convolution3DTranspose.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Convolution3DTranspose.__init__": "tf.keras.layers.Conv3DTranspose.__init__", + "tf.compat.v1.keras.layers.Convolution3DTranspose.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Convolution3DTranspose.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Convolution3DTranspose.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Convolution3DTranspose.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Convolution3DTranspose.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Convolution3DTranspose.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Convolution3DTranspose.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Convolution3DTranspose.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Convolution3DTranspose.build": "tf.keras.layers.Conv3DTranspose.build", + "tf.compat.v1.keras.layers.Convolution3DTranspose.call": "tf.keras.layers.Conv3DTranspose.call", + "tf.compat.v1.keras.layers.Convolution3DTranspose.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Convolution3DTranspose.compute_output_shape": "tf.keras.layers.Conv3DTranspose.compute_output_shape", + "tf.compat.v1.keras.layers.Convolution3DTranspose.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Convolution3DTranspose.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Convolution3DTranspose.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Convolution3DTranspose.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Convolution3DTranspose.get_config": "tf.keras.layers.Conv3DTranspose.get_config", + "tf.compat.v1.keras.layers.Convolution3DTranspose.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Convolution3DTranspose.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Convolution3DTranspose.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Convolution3DTranspose.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Convolution3DTranspose.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Convolution3DTranspose.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Convolution3DTranspose.name_scope": "tf.Module.name_scope", + 
"tf.compat.v1.keras.layers.Convolution3DTranspose.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Convolution3DTranspose.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Convolution3DTranspose.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Convolution3DTranspose.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Convolution3DTranspose.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Convolution3DTranspose.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Convolution3DTranspose.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Cropping1D": "tf.keras.layers.Cropping1D", + "tf.compat.v1.keras.layers.Cropping1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Cropping1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Cropping1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Cropping1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Cropping1D.__init__": "tf.keras.layers.Cropping1D.__init__", + "tf.compat.v1.keras.layers.Cropping1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Cropping1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Cropping1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Cropping1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Cropping1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Cropping1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Cropping1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Cropping1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Cropping1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Cropping1D.call": "tf.keras.layers.Cropping1D.call", + "tf.compat.v1.keras.layers.Cropping1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Cropping1D.compute_output_shape": "tf.keras.layers.Cropping1D.compute_output_shape", + "tf.compat.v1.keras.layers.Cropping1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Cropping1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Cropping1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Cropping1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Cropping1D.get_config": "tf.keras.layers.Cropping1D.get_config", + "tf.compat.v1.keras.layers.Cropping1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Cropping1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Cropping1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Cropping1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Cropping1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Cropping1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Cropping1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Cropping1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Cropping1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Cropping1D.set_weights": "tf.keras.layers.Layer.set_weights", + 
"tf.compat.v1.keras.layers.Cropping1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Cropping1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Cropping1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Cropping1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Cropping2D": "tf.keras.layers.Cropping2D", + "tf.compat.v1.keras.layers.Cropping2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Cropping2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Cropping2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Cropping2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Cropping2D.__init__": "tf.keras.layers.Cropping2D.__init__", + "tf.compat.v1.keras.layers.Cropping2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Cropping2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Cropping2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Cropping2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Cropping2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Cropping2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Cropping2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Cropping2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Cropping2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Cropping2D.call": "tf.keras.layers.Cropping2D.call", + "tf.compat.v1.keras.layers.Cropping2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Cropping2D.compute_output_shape": "tf.keras.layers.Cropping2D.compute_output_shape", + "tf.compat.v1.keras.layers.Cropping2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Cropping2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Cropping2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Cropping2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Cropping2D.get_config": "tf.keras.layers.Cropping2D.get_config", + "tf.compat.v1.keras.layers.Cropping2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Cropping2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Cropping2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Cropping2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Cropping2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Cropping2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Cropping2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Cropping2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Cropping2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Cropping2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Cropping2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Cropping2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Cropping2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Cropping2D.weights": "tf.keras.layers.Layer.weights", + 
"tf.compat.v1.keras.layers.Cropping3D": "tf.keras.layers.Cropping3D", + "tf.compat.v1.keras.layers.Cropping3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Cropping3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Cropping3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Cropping3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Cropping3D.__init__": "tf.keras.layers.Cropping3D.__init__", + "tf.compat.v1.keras.layers.Cropping3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Cropping3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Cropping3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Cropping3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Cropping3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Cropping3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Cropping3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Cropping3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Cropping3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Cropping3D.call": "tf.keras.layers.Cropping3D.call", + "tf.compat.v1.keras.layers.Cropping3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Cropping3D.compute_output_shape": "tf.keras.layers.Cropping3D.compute_output_shape", + "tf.compat.v1.keras.layers.Cropping3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Cropping3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Cropping3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Cropping3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Cropping3D.get_config": "tf.keras.layers.Cropping3D.get_config", + "tf.compat.v1.keras.layers.Cropping3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Cropping3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Cropping3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Cropping3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Cropping3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Cropping3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Cropping3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Cropping3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Cropping3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Cropping3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Cropping3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Cropping3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Cropping3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Cropping3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.CuDNNGRU.__call__": "tf.keras.layers.RNN.__call__", + "tf.compat.v1.keras.layers.CuDNNGRU.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.CuDNNGRU.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.CuDNNGRU.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.CuDNNGRU.__le__": "tf.keras.Model.__le__", + 
"tf.compat.v1.keras.layers.CuDNNGRU.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.CuDNNGRU.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.CuDNNGRU.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.CuDNNGRU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.CuDNNGRU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.CuDNNGRU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.CuDNNGRU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.CuDNNGRU.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.compat.v1.keras.layers.CuDNNGRU.compute_output_shape": "tf.keras.layers.RNN.compute_output_shape", + "tf.compat.v1.keras.layers.CuDNNGRU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.CuDNNGRU.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.CuDNNGRU.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.CuDNNGRU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.CuDNNGRU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.CuDNNGRU.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.CuDNNGRU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.CuDNNGRU.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.CuDNNGRU.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.CuDNNGRU.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.CuDNNGRU.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.CuDNNGRU.reset_states": "tf.keras.layers.RNN.reset_states", + "tf.compat.v1.keras.layers.CuDNNGRU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.CuDNNGRU.states": "tf.keras.layers.RNN.states", + "tf.compat.v1.keras.layers.CuDNNGRU.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.CuDNNGRU.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.CuDNNGRU.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.CuDNNLSTM.__call__": "tf.keras.layers.RNN.__call__", + "tf.compat.v1.keras.layers.CuDNNLSTM.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.CuDNNLSTM.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.CuDNNLSTM.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.CuDNNLSTM.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.CuDNNLSTM.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.CuDNNLSTM.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.CuDNNLSTM.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.CuDNNLSTM.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.CuDNNLSTM.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.CuDNNLSTM.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.CuDNNLSTM.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.CuDNNLSTM.call": "tf.compat.v1.keras.layers.CuDNNGRU.call", + "tf.compat.v1.keras.layers.CuDNNLSTM.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.compat.v1.keras.layers.CuDNNLSTM.compute_output_shape": "tf.keras.layers.RNN.compute_output_shape", + "tf.compat.v1.keras.layers.CuDNNLSTM.compute_output_signature": 
"tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.CuDNNLSTM.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.CuDNNLSTM.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.CuDNNLSTM.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.CuDNNLSTM.get_losses_for": "tf.compat.v1.keras.layers.CuDNNGRU.get_losses_for", + "tf.compat.v1.keras.layers.CuDNNLSTM.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.CuDNNLSTM.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.CuDNNLSTM.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.CuDNNLSTM.losses": "tf.compat.v1.keras.layers.CuDNNGRU.losses", + "tf.compat.v1.keras.layers.CuDNNLSTM.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.CuDNNLSTM.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.CuDNNLSTM.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.CuDNNLSTM.non_trainable_weights": "tf.compat.v1.keras.layers.CuDNNGRU.non_trainable_weights", + "tf.compat.v1.keras.layers.CuDNNLSTM.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.CuDNNLSTM.reset_states": "tf.keras.layers.RNN.reset_states", + "tf.compat.v1.keras.layers.CuDNNLSTM.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.CuDNNLSTM.states": "tf.keras.layers.RNN.states", + "tf.compat.v1.keras.layers.CuDNNLSTM.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.CuDNNLSTM.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.CuDNNLSTM.trainable_weights": "tf.compat.v1.keras.layers.CuDNNGRU.trainable_weights", + "tf.compat.v1.keras.layers.CuDNNLSTM.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Dense": "tf.keras.layers.Dense", + "tf.compat.v1.keras.layers.Dense.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Dense.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Dense.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Dense.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Dense.__init__": "tf.keras.layers.Dense.__init__", + "tf.compat.v1.keras.layers.Dense.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Dense.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Dense.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Dense.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Dense.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Dense.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Dense.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Dense.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Dense.build": "tf.keras.layers.Dense.build", + "tf.compat.v1.keras.layers.Dense.call": "tf.keras.layers.Dense.call", + "tf.compat.v1.keras.layers.Dense.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Dense.compute_output_shape": "tf.keras.layers.Dense.compute_output_shape", + "tf.compat.v1.keras.layers.Dense.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Dense.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Dense.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Dense.dynamic": 
"tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Dense.get_config": "tf.keras.layers.Dense.get_config", + "tf.compat.v1.keras.layers.Dense.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Dense.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Dense.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Dense.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Dense.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Dense.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Dense.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Dense.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Dense.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Dense.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Dense.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Dense.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Dense.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Dense.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.DenseFeatures.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.DenseFeatures.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.DenseFeatures.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.DenseFeatures.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.DenseFeatures.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.DenseFeatures.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.DenseFeatures.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.DenseFeatures.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.DenseFeatures.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.DenseFeatures.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.DenseFeatures.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.DenseFeatures.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.DenseFeatures.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.DenseFeatures.compute_output_shape": "tf.keras.layers.DenseFeatures.compute_output_shape", + "tf.compat.v1.keras.layers.DenseFeatures.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.DenseFeatures.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.DenseFeatures.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.DenseFeatures.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.DenseFeatures.get_config": "tf.keras.layers.DenseFeatures.get_config", + "tf.compat.v1.keras.layers.DenseFeatures.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.DenseFeatures.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.DenseFeatures.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.DenseFeatures.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.DenseFeatures.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.DenseFeatures.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.DenseFeatures.name_scope": 
"tf.Module.name_scope", + "tf.compat.v1.keras.layers.DenseFeatures.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.DenseFeatures.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.DenseFeatures.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.DenseFeatures.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.DenseFeatures.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.DenseFeatures.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.DenseFeatures.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.DepthwiseConv2D": "tf.keras.layers.DepthwiseConv2D", + "tf.compat.v1.keras.layers.DepthwiseConv2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.DepthwiseConv2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.DepthwiseConv2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.DepthwiseConv2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.DepthwiseConv2D.__init__": "tf.keras.layers.DepthwiseConv2D.__init__", + "tf.compat.v1.keras.layers.DepthwiseConv2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.DepthwiseConv2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.DepthwiseConv2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.DepthwiseConv2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.DepthwiseConv2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.DepthwiseConv2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.DepthwiseConv2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.DepthwiseConv2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.DepthwiseConv2D.build": "tf.keras.layers.DepthwiseConv2D.build", + "tf.compat.v1.keras.layers.DepthwiseConv2D.call": "tf.keras.layers.DepthwiseConv2D.call", + "tf.compat.v1.keras.layers.DepthwiseConv2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.DepthwiseConv2D.compute_output_shape": "tf.keras.layers.DepthwiseConv2D.compute_output_shape", + "tf.compat.v1.keras.layers.DepthwiseConv2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.DepthwiseConv2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.DepthwiseConv2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.DepthwiseConv2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.DepthwiseConv2D.get_config": "tf.keras.layers.DepthwiseConv2D.get_config", + "tf.compat.v1.keras.layers.DepthwiseConv2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.DepthwiseConv2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.DepthwiseConv2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.DepthwiseConv2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.DepthwiseConv2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.DepthwiseConv2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.DepthwiseConv2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.DepthwiseConv2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + 
"tf.compat.v1.keras.layers.DepthwiseConv2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.DepthwiseConv2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.DepthwiseConv2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.DepthwiseConv2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.DepthwiseConv2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.DepthwiseConv2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Dot": "tf.keras.layers.Dot", + "tf.compat.v1.keras.layers.Dot.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Dot.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Dot.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Dot.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Dot.__init__": "tf.keras.layers.Dot.__init__", + "tf.compat.v1.keras.layers.Dot.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Dot.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Dot.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Dot.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Dot.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Dot.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Dot.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Dot.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Dot.build": "tf.keras.layers.Dot.build", + "tf.compat.v1.keras.layers.Dot.call": "tf.keras.layers.Add.call", + "tf.compat.v1.keras.layers.Dot.compute_mask": "tf.keras.layers.Dot.compute_mask", + "tf.compat.v1.keras.layers.Dot.compute_output_shape": "tf.keras.layers.Dot.compute_output_shape", + "tf.compat.v1.keras.layers.Dot.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Dot.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Dot.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Dot.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Dot.get_config": "tf.keras.layers.Dot.get_config", + "tf.compat.v1.keras.layers.Dot.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Dot.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Dot.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Dot.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Dot.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Dot.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Dot.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Dot.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Dot.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Dot.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Dot.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Dot.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Dot.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Dot.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Dropout": "tf.keras.layers.Dropout", + "tf.compat.v1.keras.layers.Dropout.__call__": 
"tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Dropout.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Dropout.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Dropout.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Dropout.__init__": "tf.keras.layers.Dropout.__init__", + "tf.compat.v1.keras.layers.Dropout.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Dropout.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Dropout.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Dropout.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Dropout.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Dropout.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Dropout.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Dropout.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Dropout.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Dropout.call": "tf.keras.layers.Dropout.call", + "tf.compat.v1.keras.layers.Dropout.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Dropout.compute_output_shape": "tf.keras.layers.Dropout.compute_output_shape", + "tf.compat.v1.keras.layers.Dropout.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Dropout.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Dropout.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Dropout.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Dropout.get_config": "tf.keras.layers.Dropout.get_config", + "tf.compat.v1.keras.layers.Dropout.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Dropout.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Dropout.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Dropout.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Dropout.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Dropout.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Dropout.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Dropout.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Dropout.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Dropout.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Dropout.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Dropout.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Dropout.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Dropout.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.ELU": "tf.keras.layers.ELU", + "tf.compat.v1.keras.layers.ELU.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.ELU.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.ELU.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.ELU.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.ELU.__init__": "tf.keras.layers.ELU.__init__", + "tf.compat.v1.keras.layers.ELU.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.ELU.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.ELU.__ne__": 
"tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.ELU.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.ELU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.ELU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.ELU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.ELU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.ELU.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.ELU.call": "tf.keras.layers.ELU.call", + "tf.compat.v1.keras.layers.ELU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.ELU.compute_output_shape": "tf.keras.layers.ELU.compute_output_shape", + "tf.compat.v1.keras.layers.ELU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.ELU.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.ELU.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.ELU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.ELU.get_config": "tf.keras.layers.ELU.get_config", + "tf.compat.v1.keras.layers.ELU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.ELU.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.ELU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.ELU.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.ELU.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.ELU.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.ELU.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.ELU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.ELU.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.ELU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.ELU.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.ELU.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.ELU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.ELU.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Embedding": "tf.keras.layers.Embedding", + "tf.compat.v1.keras.layers.Embedding.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Embedding.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Embedding.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Embedding.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Embedding.__init__": "tf.keras.layers.Embedding.__init__", + "tf.compat.v1.keras.layers.Embedding.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Embedding.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Embedding.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Embedding.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Embedding.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Embedding.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Embedding.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Embedding.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Embedding.build": "tf.keras.layers.Embedding.build", + "tf.compat.v1.keras.layers.Embedding.call": 
"tf.keras.layers.Embedding.call", + "tf.compat.v1.keras.layers.Embedding.compute_mask": "tf.keras.layers.Embedding.compute_mask", + "tf.compat.v1.keras.layers.Embedding.compute_output_shape": "tf.keras.layers.Embedding.compute_output_shape", + "tf.compat.v1.keras.layers.Embedding.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Embedding.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Embedding.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Embedding.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Embedding.get_config": "tf.keras.layers.Embedding.get_config", + "tf.compat.v1.keras.layers.Embedding.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Embedding.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Embedding.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Embedding.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Embedding.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Embedding.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Embedding.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Embedding.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Embedding.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Embedding.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Embedding.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Embedding.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Embedding.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Embedding.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Flatten": "tf.keras.layers.Flatten", + "tf.compat.v1.keras.layers.Flatten.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Flatten.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Flatten.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Flatten.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Flatten.__init__": "tf.keras.layers.Flatten.__init__", + "tf.compat.v1.keras.layers.Flatten.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Flatten.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Flatten.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Flatten.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Flatten.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Flatten.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Flatten.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Flatten.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Flatten.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Flatten.call": "tf.keras.layers.Flatten.call", + "tf.compat.v1.keras.layers.Flatten.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Flatten.compute_output_shape": "tf.keras.layers.Flatten.compute_output_shape", + "tf.compat.v1.keras.layers.Flatten.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Flatten.count_params": "tf.keras.layers.Layer.count_params", + 
"tf.compat.v1.keras.layers.Flatten.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Flatten.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Flatten.get_config": "tf.keras.layers.Flatten.get_config", + "tf.compat.v1.keras.layers.Flatten.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Flatten.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Flatten.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Flatten.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Flatten.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Flatten.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Flatten.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Flatten.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Flatten.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Flatten.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Flatten.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Flatten.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Flatten.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Flatten.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GRU.__call__": "tf.keras.layers.RNN.__call__", + "tf.compat.v1.keras.layers.GRU.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GRU.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GRU.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GRU.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GRU.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GRU.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GRU.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GRU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GRU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GRU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GRU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GRU.build": "tf.keras.layers.RNN.build", + "tf.compat.v1.keras.layers.GRU.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.compat.v1.keras.layers.GRU.compute_output_shape": "tf.keras.layers.RNN.compute_output_shape", + "tf.compat.v1.keras.layers.GRU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GRU.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GRU.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GRU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GRU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GRU.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GRU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GRU.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GRU.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GRU.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GRU.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GRU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + 
"tf.compat.v1.keras.layers.GRU.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GRU.reset_states": "tf.keras.layers.RNN.reset_states", + "tf.compat.v1.keras.layers.GRU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GRU.states": "tf.keras.layers.RNN.states", + "tf.compat.v1.keras.layers.GRU.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GRU.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GRU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GRU.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GRUCell.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GRUCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GRUCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GRUCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GRUCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GRUCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GRUCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GRUCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GRUCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GRUCell.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GRUCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GRUCell.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GRUCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GRUCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.layers.GRUCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GRUCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GRUCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GRUCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GRUCell.get_dropout_mask_for_cell": "tf.keras.layers.GRU.get_dropout_mask_for_cell", + "tf.compat.v1.keras.layers.GRUCell.get_recurrent_dropout_mask_for_cell": "tf.keras.layers.GRU.get_recurrent_dropout_mask_for_cell", + "tf.compat.v1.keras.layers.GRUCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GRUCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GRUCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GRUCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GRUCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GRUCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GRUCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GRUCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GRUCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GRUCell.reset_dropout_mask": "tf.keras.layers.GRU.reset_dropout_mask", + "tf.compat.v1.keras.layers.GRUCell.reset_recurrent_dropout_mask": "tf.keras.layers.GRU.reset_recurrent_dropout_mask", + "tf.compat.v1.keras.layers.GRUCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GRUCell.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GRUCell.trainable": 
"tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GRUCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GRUCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GaussianDropout": "tf.keras.layers.GaussianDropout", + "tf.compat.v1.keras.layers.GaussianDropout.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GaussianDropout.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GaussianDropout.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GaussianDropout.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GaussianDropout.__init__": "tf.keras.layers.GaussianDropout.__init__", + "tf.compat.v1.keras.layers.GaussianDropout.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GaussianDropout.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GaussianDropout.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GaussianDropout.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GaussianDropout.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GaussianDropout.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GaussianDropout.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GaussianDropout.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GaussianDropout.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GaussianDropout.call": "tf.keras.layers.GaussianDropout.call", + "tf.compat.v1.keras.layers.GaussianDropout.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GaussianDropout.compute_output_shape": "tf.keras.layers.GaussianDropout.compute_output_shape", + "tf.compat.v1.keras.layers.GaussianDropout.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GaussianDropout.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GaussianDropout.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GaussianDropout.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GaussianDropout.get_config": "tf.keras.layers.GaussianDropout.get_config", + "tf.compat.v1.keras.layers.GaussianDropout.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GaussianDropout.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GaussianDropout.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GaussianDropout.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GaussianDropout.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GaussianDropout.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GaussianDropout.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GaussianDropout.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GaussianDropout.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GaussianDropout.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GaussianDropout.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GaussianDropout.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GaussianDropout.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + 
"tf.compat.v1.keras.layers.GaussianDropout.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GaussianNoise": "tf.keras.layers.GaussianNoise", + "tf.compat.v1.keras.layers.GaussianNoise.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GaussianNoise.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GaussianNoise.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GaussianNoise.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GaussianNoise.__init__": "tf.keras.layers.GaussianNoise.__init__", + "tf.compat.v1.keras.layers.GaussianNoise.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GaussianNoise.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GaussianNoise.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GaussianNoise.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GaussianNoise.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GaussianNoise.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GaussianNoise.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GaussianNoise.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GaussianNoise.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GaussianNoise.call": "tf.keras.layers.GaussianNoise.call", + "tf.compat.v1.keras.layers.GaussianNoise.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GaussianNoise.compute_output_shape": "tf.keras.layers.GaussianNoise.compute_output_shape", + "tf.compat.v1.keras.layers.GaussianNoise.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GaussianNoise.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GaussianNoise.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GaussianNoise.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GaussianNoise.get_config": "tf.keras.layers.GaussianNoise.get_config", + "tf.compat.v1.keras.layers.GaussianNoise.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GaussianNoise.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GaussianNoise.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GaussianNoise.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GaussianNoise.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GaussianNoise.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GaussianNoise.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GaussianNoise.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GaussianNoise.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GaussianNoise.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GaussianNoise.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GaussianNoise.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GaussianNoise.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GaussianNoise.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D": "tf.keras.layers.GlobalAveragePooling1D", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__call__": 
"tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__init__": "tf.keras.layers.GlobalAveragePooling1D.__init__", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.call": "tf.keras.layers.GlobalAveragePooling1D.call", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.compute_mask": "tf.keras.layers.GlobalAveragePooling1D.compute_mask", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling1D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.get_config": "tf.keras.layers.GlobalAveragePooling1D.get_config", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.weights": "tf.keras.layers.Layer.weights", + 
"tf.compat.v1.keras.layers.GlobalAveragePooling2D": "tf.keras.layers.GlobalAveragePooling2D", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__init__": "tf.keras.layers.GlobalAveragePooling2D.__init__", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.call": "tf.keras.layers.GlobalAveragePooling2D.call", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling2D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.get_config": "tf.keras.layers.GlobalAveragePooling2D.get_config", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.trainable_weights": 
"tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D": "tf.keras.layers.GlobalAveragePooling3D", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__init__": "tf.keras.layers.GlobalAveragePooling3D.__init__", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.call": "tf.keras.layers.GlobalAveragePooling3D.call", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling3D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.get_config": "tf.keras.layers.GlobalAveragePooling3D.get_config", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.submodules": "tf.Module.submodules", + 
"tf.compat.v1.keras.layers.GlobalAveragePooling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalAvgPool1D": "tf.keras.layers.GlobalAveragePooling1D", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__init__": "tf.keras.layers.GlobalAveragePooling1D.__init__", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.call": "tf.keras.layers.GlobalAveragePooling1D.call", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.compute_mask": "tf.keras.layers.GlobalAveragePooling1D.compute_mask", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling1D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.get_config": "tf.keras.layers.GlobalAveragePooling1D.get_config", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.trainable": 
"tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalAvgPool2D": "tf.keras.layers.GlobalAveragePooling2D", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__init__": "tf.keras.layers.GlobalAveragePooling2D.__init__", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.call": "tf.keras.layers.GlobalAveragePooling2D.call", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling2D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.get_config": "tf.keras.layers.GlobalAveragePooling2D.get_config", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.trainable_weights": 
"tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalAvgPool3D": "tf.keras.layers.GlobalAveragePooling3D", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__init__": "tf.keras.layers.GlobalAveragePooling3D.__init__", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.call": "tf.keras.layers.GlobalAveragePooling3D.call", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling3D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.get_config": "tf.keras.layers.GlobalAveragePooling3D.get_config", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalAvgPool3D.weights": 
"tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalMaxPool1D": "tf.keras.layers.GlobalMaxPool1D", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__init__": "tf.keras.layers.GlobalMaxPool1D.__init__", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.call": "tf.keras.layers.GlobalMaxPool1D.call", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling1D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.get_config": "tf.keras.layers.GlobalAveragePooling1D.get_config", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalMaxPool2D": "tf.keras.layers.GlobalMaxPool2D", + 
"tf.compat.v1.keras.layers.GlobalMaxPool2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__init__": "tf.keras.layers.GlobalAveragePooling2D.__init__", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.call": "tf.keras.layers.GlobalMaxPool2D.call", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling2D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.get_config": "tf.keras.layers.GlobalAveragePooling2D.get_config", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalMaxPool3D": "tf.keras.layers.GlobalMaxPool3D", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__call__": "tf.keras.layers.Layer.__call__", + 
"tf.compat.v1.keras.layers.GlobalMaxPool3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__init__": "tf.keras.layers.GlobalAveragePooling3D.__init__", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.call": "tf.keras.layers.GlobalMaxPool3D.call", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling3D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.get_config": "tf.keras.layers.GlobalAveragePooling3D.get_config", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPool3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D": "tf.keras.layers.GlobalMaxPool1D", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.keras.layers.GlobalMaxPooling1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__init__": "tf.keras.layers.GlobalMaxPool1D.__init__", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.call": "tf.keras.layers.GlobalMaxPool1D.call", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling1D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.get_config": "tf.keras.layers.GlobalAveragePooling1D.get_config", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D": "tf.keras.layers.GlobalMaxPool2D", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.keras.layers.GlobalMaxPooling2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__init__": "tf.keras.layers.GlobalAveragePooling2D.__init__", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.call": "tf.keras.layers.GlobalMaxPool2D.call", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling2D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.get_config": "tf.keras.layers.GlobalAveragePooling2D.get_config", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D": "tf.keras.layers.GlobalMaxPool3D", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.keras.layers.GlobalMaxPooling3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__init__": "tf.keras.layers.GlobalAveragePooling3D.__init__", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.call": "tf.keras.layers.GlobalMaxPool3D.call", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling3D.compute_output_shape", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.get_config": "tf.keras.layers.GlobalAveragePooling3D.get_config", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Input": "tf.keras.Input", + "tf.compat.v1.keras.layers.InputLayer": "tf.keras.layers.InputLayer", + "tf.compat.v1.keras.layers.InputLayer.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.InputLayer.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.keras.layers.InputLayer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.InputLayer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.InputLayer.__init__": "tf.keras.layers.InputLayer.__init__", + "tf.compat.v1.keras.layers.InputLayer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.InputLayer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.InputLayer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.InputLayer.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.InputLayer.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.InputLayer.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.InputLayer.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.InputLayer.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.InputLayer.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.InputLayer.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.layers.InputLayer.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.InputLayer.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.layers.InputLayer.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.InputLayer.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.InputLayer.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.InputLayer.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.InputLayer.get_config": "tf.keras.layers.InputLayer.get_config", + "tf.compat.v1.keras.layers.InputLayer.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.InputLayer.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.InputLayer.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.InputLayer.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.InputLayer.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.InputLayer.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.InputLayer.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.InputLayer.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.InputLayer.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.InputLayer.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.InputLayer.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.InputLayer.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.InputLayer.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.InputLayer.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.InputSpec": "tf.keras.layers.InputSpec", + "tf.compat.v1.keras.layers.InputSpec.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.InputSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.InputSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.InputSpec.__init__": "tf.keras.layers.InputSpec.__init__", + "tf.compat.v1.keras.layers.InputSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.InputSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.InputSpec.__ne__": "tf.keras.Model.__ne__", + 
"tf.compat.v1.keras.layers.InputSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.layers.InputSpec.get_config": "tf.keras.layers.InputSpec.get_config", + "tf.compat.v1.keras.layers.LSTM.__call__": "tf.keras.layers.RNN.__call__", + "tf.compat.v1.keras.layers.LSTM.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.LSTM.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.LSTM.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.LSTM.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.LSTM.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.LSTM.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.LSTM.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.LSTM.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.LSTM.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.LSTM.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.LSTM.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.LSTM.build": "tf.keras.layers.RNN.build", + "tf.compat.v1.keras.layers.LSTM.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.compat.v1.keras.layers.LSTM.compute_output_shape": "tf.keras.layers.RNN.compute_output_shape", + "tf.compat.v1.keras.layers.LSTM.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.LSTM.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.LSTM.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.LSTM.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.LSTM.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.LSTM.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.LSTM.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.LSTM.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.LSTM.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.LSTM.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.LSTM.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.LSTM.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.LSTM.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.LSTM.reset_states": "tf.keras.layers.RNN.reset_states", + "tf.compat.v1.keras.layers.LSTM.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.LSTM.states": "tf.keras.layers.RNN.states", + "tf.compat.v1.keras.layers.LSTM.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.LSTM.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.LSTM.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.LSTM.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.LSTMCell.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.LSTMCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.LSTMCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.LSTMCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.LSTMCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.LSTMCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.LSTMCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.LSTMCell.__new__": 
"tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.LSTMCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.LSTMCell.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.LSTMCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.LSTMCell.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.LSTMCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.LSTMCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.layers.LSTMCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.LSTMCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.LSTMCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.LSTMCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.LSTMCell.get_dropout_mask_for_cell": "tf.keras.layers.GRU.get_dropout_mask_for_cell", + "tf.compat.v1.keras.layers.LSTMCell.get_recurrent_dropout_mask_for_cell": "tf.keras.layers.GRU.get_recurrent_dropout_mask_for_cell", + "tf.compat.v1.keras.layers.LSTMCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.LSTMCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.LSTMCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.LSTMCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.LSTMCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.LSTMCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.LSTMCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.LSTMCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.LSTMCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.LSTMCell.reset_dropout_mask": "tf.keras.layers.GRU.reset_dropout_mask", + "tf.compat.v1.keras.layers.LSTMCell.reset_recurrent_dropout_mask": "tf.keras.layers.GRU.reset_recurrent_dropout_mask", + "tf.compat.v1.keras.layers.LSTMCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.LSTMCell.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.LSTMCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.LSTMCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.LSTMCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Lambda": "tf.keras.layers.Lambda", + "tf.compat.v1.keras.layers.Lambda.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Lambda.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Lambda.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Lambda.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Lambda.__init__": "tf.keras.layers.Lambda.__init__", + "tf.compat.v1.keras.layers.Lambda.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Lambda.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Lambda.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Lambda.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Lambda.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Lambda.add_loss": "tf.keras.layers.Layer.add_loss", + 
"tf.compat.v1.keras.layers.Lambda.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Lambda.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Lambda.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Lambda.call": "tf.keras.layers.Lambda.call", + "tf.compat.v1.keras.layers.Lambda.compute_mask": "tf.keras.layers.Lambda.compute_mask", + "tf.compat.v1.keras.layers.Lambda.compute_output_shape": "tf.keras.layers.Lambda.compute_output_shape", + "tf.compat.v1.keras.layers.Lambda.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Lambda.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Lambda.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Lambda.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Lambda.get_config": "tf.keras.layers.Lambda.get_config", + "tf.compat.v1.keras.layers.Lambda.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Lambda.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Lambda.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Lambda.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Lambda.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Lambda.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Lambda.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Lambda.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Lambda.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Lambda.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Lambda.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Lambda.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Lambda.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Lambda.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Layer": "tf.keras.layers.Layer", + "tf.compat.v1.keras.layers.Layer.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Layer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Layer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Layer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Layer.__init__": "tf.keras.layers.Layer.__init__", + "tf.compat.v1.keras.layers.Layer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Layer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Layer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Layer.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Layer.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Layer.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Layer.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Layer.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Layer.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Layer.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.layers.Layer.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Layer.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + 
"tf.compat.v1.keras.layers.Layer.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Layer.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Layer.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Layer.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Layer.get_config": "tf.keras.layers.Layer.get_config", + "tf.compat.v1.keras.layers.Layer.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Layer.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Layer.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Layer.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Layer.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Layer.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Layer.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Layer.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Layer.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Layer.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Layer.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Layer.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Layer.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Layer.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.LayerNormalization": "tf.keras.layers.LayerNormalization", + "tf.compat.v1.keras.layers.LayerNormalization.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.LayerNormalization.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.LayerNormalization.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.LayerNormalization.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.LayerNormalization.__init__": "tf.keras.layers.LayerNormalization.__init__", + "tf.compat.v1.keras.layers.LayerNormalization.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.LayerNormalization.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.LayerNormalization.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.LayerNormalization.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.LayerNormalization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.LayerNormalization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.LayerNormalization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.LayerNormalization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.LayerNormalization.build": "tf.keras.layers.LayerNormalization.build", + "tf.compat.v1.keras.layers.LayerNormalization.call": "tf.keras.layers.LayerNormalization.call", + "tf.compat.v1.keras.layers.LayerNormalization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.LayerNormalization.compute_output_shape": "tf.keras.layers.LayerNormalization.compute_output_shape", + "tf.compat.v1.keras.layers.LayerNormalization.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.LayerNormalization.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.LayerNormalization.dtype": 
"tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.LayerNormalization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.LayerNormalization.get_config": "tf.keras.layers.LayerNormalization.get_config", + "tf.compat.v1.keras.layers.LayerNormalization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.LayerNormalization.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.LayerNormalization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.LayerNormalization.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.LayerNormalization.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.LayerNormalization.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.LayerNormalization.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.LayerNormalization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.LayerNormalization.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.LayerNormalization.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.LayerNormalization.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.LayerNormalization.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.LayerNormalization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.LayerNormalization.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.LeakyReLU": "tf.keras.layers.LeakyReLU", + "tf.compat.v1.keras.layers.LeakyReLU.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.LeakyReLU.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.LeakyReLU.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.LeakyReLU.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.LeakyReLU.__init__": "tf.keras.layers.LeakyReLU.__init__", + "tf.compat.v1.keras.layers.LeakyReLU.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.LeakyReLU.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.LeakyReLU.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.LeakyReLU.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.LeakyReLU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.LeakyReLU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.LeakyReLU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.LeakyReLU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.LeakyReLU.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.LeakyReLU.call": "tf.keras.layers.LeakyReLU.call", + "tf.compat.v1.keras.layers.LeakyReLU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.LeakyReLU.compute_output_shape": "tf.keras.layers.LeakyReLU.compute_output_shape", + "tf.compat.v1.keras.layers.LeakyReLU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.LeakyReLU.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.LeakyReLU.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.LeakyReLU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.LeakyReLU.get_config": "tf.keras.layers.LeakyReLU.get_config", + "tf.compat.v1.keras.layers.LeakyReLU.get_weights": 
"tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.LeakyReLU.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.LeakyReLU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.LeakyReLU.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.LeakyReLU.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.LeakyReLU.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.LeakyReLU.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.LeakyReLU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.LeakyReLU.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.LeakyReLU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.LeakyReLU.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.LeakyReLU.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.LeakyReLU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.LeakyReLU.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.LocallyConnected1D": "tf.keras.layers.LocallyConnected1D", + "tf.compat.v1.keras.layers.LocallyConnected1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.LocallyConnected1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.LocallyConnected1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.LocallyConnected1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.LocallyConnected1D.__init__": "tf.keras.layers.LocallyConnected1D.__init__", + "tf.compat.v1.keras.layers.LocallyConnected1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.LocallyConnected1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.LocallyConnected1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.LocallyConnected1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.LocallyConnected1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.LocallyConnected1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.LocallyConnected1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.LocallyConnected1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.LocallyConnected1D.build": "tf.keras.layers.LocallyConnected1D.build", + "tf.compat.v1.keras.layers.LocallyConnected1D.call": "tf.keras.layers.LocallyConnected1D.call", + "tf.compat.v1.keras.layers.LocallyConnected1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.LocallyConnected1D.compute_output_shape": "tf.keras.layers.LocallyConnected1D.compute_output_shape", + "tf.compat.v1.keras.layers.LocallyConnected1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.LocallyConnected1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.LocallyConnected1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.LocallyConnected1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.LocallyConnected1D.get_config": "tf.keras.layers.LocallyConnected1D.get_config", + "tf.compat.v1.keras.layers.LocallyConnected1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.LocallyConnected1D.input": "tf.keras.layers.Layer.input", + 
"tf.compat.v1.keras.layers.LocallyConnected1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.LocallyConnected1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.LocallyConnected1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.LocallyConnected1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.LocallyConnected1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.LocallyConnected1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.LocallyConnected1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.LocallyConnected1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.LocallyConnected1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.LocallyConnected1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.LocallyConnected1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.LocallyConnected1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.LocallyConnected2D": "tf.keras.layers.LocallyConnected2D", + "tf.compat.v1.keras.layers.LocallyConnected2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.LocallyConnected2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.LocallyConnected2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.LocallyConnected2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.LocallyConnected2D.__init__": "tf.keras.layers.LocallyConnected2D.__init__", + "tf.compat.v1.keras.layers.LocallyConnected2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.LocallyConnected2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.LocallyConnected2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.LocallyConnected2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.LocallyConnected2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.LocallyConnected2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.LocallyConnected2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.LocallyConnected2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.LocallyConnected2D.build": "tf.keras.layers.LocallyConnected2D.build", + "tf.compat.v1.keras.layers.LocallyConnected2D.call": "tf.keras.layers.LocallyConnected2D.call", + "tf.compat.v1.keras.layers.LocallyConnected2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.LocallyConnected2D.compute_output_shape": "tf.keras.layers.LocallyConnected2D.compute_output_shape", + "tf.compat.v1.keras.layers.LocallyConnected2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.LocallyConnected2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.LocallyConnected2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.LocallyConnected2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.LocallyConnected2D.get_config": "tf.keras.layers.LocallyConnected2D.get_config", + "tf.compat.v1.keras.layers.LocallyConnected2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.LocallyConnected2D.input": "tf.keras.layers.Layer.input", + 
"tf.compat.v1.keras.layers.LocallyConnected2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.LocallyConnected2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.LocallyConnected2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.LocallyConnected2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.LocallyConnected2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.LocallyConnected2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.LocallyConnected2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.LocallyConnected2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.LocallyConnected2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.LocallyConnected2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.LocallyConnected2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.LocallyConnected2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Masking": "tf.keras.layers.Masking", + "tf.compat.v1.keras.layers.Masking.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Masking.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Masking.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Masking.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Masking.__init__": "tf.keras.layers.Masking.__init__", + "tf.compat.v1.keras.layers.Masking.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Masking.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Masking.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Masking.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Masking.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Masking.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Masking.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Masking.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Masking.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Masking.call": "tf.keras.layers.Masking.call", + "tf.compat.v1.keras.layers.Masking.compute_mask": "tf.keras.layers.Masking.compute_mask", + "tf.compat.v1.keras.layers.Masking.compute_output_shape": "tf.keras.layers.Masking.compute_output_shape", + "tf.compat.v1.keras.layers.Masking.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Masking.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Masking.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Masking.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Masking.get_config": "tf.keras.layers.Masking.get_config", + "tf.compat.v1.keras.layers.Masking.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Masking.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Masking.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Masking.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Masking.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Masking.name": "tf.keras.layers.Layer.name", + 
"tf.compat.v1.keras.layers.Masking.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Masking.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Masking.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Masking.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Masking.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Masking.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Masking.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Masking.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.MaxPool1D": "tf.keras.layers.MaxPool1D", + "tf.compat.v1.keras.layers.MaxPool1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.MaxPool1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.MaxPool1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.MaxPool1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.MaxPool1D.__init__": "tf.keras.layers.MaxPool1D.__init__", + "tf.compat.v1.keras.layers.MaxPool1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.MaxPool1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.MaxPool1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.MaxPool1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.MaxPool1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.MaxPool1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.MaxPool1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.MaxPool1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.MaxPool1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.MaxPool1D.call": "tf.keras.layers.AveragePooling1D.call", + "tf.compat.v1.keras.layers.MaxPool1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.MaxPool1D.compute_output_shape": "tf.keras.layers.AveragePooling1D.compute_output_shape", + "tf.compat.v1.keras.layers.MaxPool1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.MaxPool1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.MaxPool1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.MaxPool1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.MaxPool1D.get_config": "tf.keras.layers.AveragePooling1D.get_config", + "tf.compat.v1.keras.layers.MaxPool1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.MaxPool1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.MaxPool1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.MaxPool1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.MaxPool1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.MaxPool1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.MaxPool1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.MaxPool1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.MaxPool1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.MaxPool1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.MaxPool1D.submodules": 
"tf.Module.submodules", + "tf.compat.v1.keras.layers.MaxPool1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.MaxPool1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.MaxPool1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.MaxPool2D": "tf.keras.layers.MaxPool2D", + "tf.compat.v1.keras.layers.MaxPool2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.MaxPool2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.MaxPool2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.MaxPool2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.MaxPool2D.__init__": "tf.keras.layers.MaxPool2D.__init__", + "tf.compat.v1.keras.layers.MaxPool2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.MaxPool2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.MaxPool2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.MaxPool2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.MaxPool2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.MaxPool2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.MaxPool2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.MaxPool2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.MaxPool2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.MaxPool2D.call": "tf.keras.layers.AveragePooling2D.call", + "tf.compat.v1.keras.layers.MaxPool2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.MaxPool2D.compute_output_shape": "tf.keras.layers.AveragePooling2D.compute_output_shape", + "tf.compat.v1.keras.layers.MaxPool2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.MaxPool2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.MaxPool2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.MaxPool2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.MaxPool2D.get_config": "tf.keras.layers.AveragePooling2D.get_config", + "tf.compat.v1.keras.layers.MaxPool2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.MaxPool2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.MaxPool2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.MaxPool2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.MaxPool2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.MaxPool2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.MaxPool2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.MaxPool2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.MaxPool2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.MaxPool2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.MaxPool2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.MaxPool2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.MaxPool2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.MaxPool2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.MaxPool3D": "tf.keras.layers.MaxPool3D", + 
"tf.compat.v1.keras.layers.MaxPool3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.MaxPool3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.MaxPool3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.MaxPool3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.MaxPool3D.__init__": "tf.keras.layers.MaxPool3D.__init__", + "tf.compat.v1.keras.layers.MaxPool3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.MaxPool3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.MaxPool3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.MaxPool3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.MaxPool3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.MaxPool3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.MaxPool3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.MaxPool3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.MaxPool3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.MaxPool3D.call": "tf.keras.layers.AveragePooling3D.call", + "tf.compat.v1.keras.layers.MaxPool3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.MaxPool3D.compute_output_shape": "tf.keras.layers.AveragePooling3D.compute_output_shape", + "tf.compat.v1.keras.layers.MaxPool3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.MaxPool3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.MaxPool3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.MaxPool3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.MaxPool3D.get_config": "tf.keras.layers.AveragePooling3D.get_config", + "tf.compat.v1.keras.layers.MaxPool3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.MaxPool3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.MaxPool3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.MaxPool3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.MaxPool3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.MaxPool3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.MaxPool3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.MaxPool3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.MaxPool3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.MaxPool3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.MaxPool3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.MaxPool3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.MaxPool3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.MaxPool3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.MaxPooling1D": "tf.keras.layers.MaxPool1D", + "tf.compat.v1.keras.layers.MaxPooling1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.MaxPooling1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.MaxPooling1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.MaxPooling1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.MaxPooling1D.__init__": 
"tf.keras.layers.MaxPool1D.__init__", + "tf.compat.v1.keras.layers.MaxPooling1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.MaxPooling1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.MaxPooling1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.MaxPooling1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.MaxPooling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.MaxPooling1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.MaxPooling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.MaxPooling1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.MaxPooling1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.MaxPooling1D.call": "tf.keras.layers.AveragePooling1D.call", + "tf.compat.v1.keras.layers.MaxPooling1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.MaxPooling1D.compute_output_shape": "tf.keras.layers.AveragePooling1D.compute_output_shape", + "tf.compat.v1.keras.layers.MaxPooling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.MaxPooling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.MaxPooling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.MaxPooling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.MaxPooling1D.get_config": "tf.keras.layers.AveragePooling1D.get_config", + "tf.compat.v1.keras.layers.MaxPooling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.MaxPooling1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.MaxPooling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.MaxPooling1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.MaxPooling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.MaxPooling1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.MaxPooling1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.MaxPooling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.MaxPooling1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.MaxPooling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.MaxPooling1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.MaxPooling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.MaxPooling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.MaxPooling1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.MaxPooling2D": "tf.keras.layers.MaxPool2D", + "tf.compat.v1.keras.layers.MaxPooling2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.MaxPooling2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.MaxPooling2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.MaxPooling2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.MaxPooling2D.__init__": "tf.keras.layers.MaxPool2D.__init__", + "tf.compat.v1.keras.layers.MaxPooling2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.MaxPooling2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.MaxPooling2D.__ne__": "tf.keras.Model.__ne__", + 
"tf.compat.v1.keras.layers.MaxPooling2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.MaxPooling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.MaxPooling2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.MaxPooling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.MaxPooling2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.MaxPooling2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.MaxPooling2D.call": "tf.keras.layers.AveragePooling2D.call", + "tf.compat.v1.keras.layers.MaxPooling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.MaxPooling2D.compute_output_shape": "tf.keras.layers.AveragePooling2D.compute_output_shape", + "tf.compat.v1.keras.layers.MaxPooling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.MaxPooling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.MaxPooling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.MaxPooling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.MaxPooling2D.get_config": "tf.keras.layers.AveragePooling2D.get_config", + "tf.compat.v1.keras.layers.MaxPooling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.MaxPooling2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.MaxPooling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.MaxPooling2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.MaxPooling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.MaxPooling2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.MaxPooling2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.MaxPooling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.MaxPooling2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.MaxPooling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.MaxPooling2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.MaxPooling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.MaxPooling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.MaxPooling2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.MaxPooling3D": "tf.keras.layers.MaxPool3D", + "tf.compat.v1.keras.layers.MaxPooling3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.MaxPooling3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.MaxPooling3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.MaxPooling3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.MaxPooling3D.__init__": "tf.keras.layers.MaxPool3D.__init__", + "tf.compat.v1.keras.layers.MaxPooling3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.MaxPooling3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.MaxPooling3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.MaxPooling3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.MaxPooling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.MaxPooling3D.add_loss": "tf.keras.layers.Layer.add_loss", + 
"tf.compat.v1.keras.layers.MaxPooling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.MaxPooling3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.MaxPooling3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.MaxPooling3D.call": "tf.keras.layers.AveragePooling3D.call", + "tf.compat.v1.keras.layers.MaxPooling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.MaxPooling3D.compute_output_shape": "tf.keras.layers.AveragePooling3D.compute_output_shape", + "tf.compat.v1.keras.layers.MaxPooling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.MaxPooling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.MaxPooling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.MaxPooling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.MaxPooling3D.get_config": "tf.keras.layers.AveragePooling3D.get_config", + "tf.compat.v1.keras.layers.MaxPooling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.MaxPooling3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.MaxPooling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.MaxPooling3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.MaxPooling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.MaxPooling3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.MaxPooling3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.MaxPooling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.MaxPooling3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.MaxPooling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.MaxPooling3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.MaxPooling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.MaxPooling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.MaxPooling3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Maximum": "tf.keras.layers.Maximum", + "tf.compat.v1.keras.layers.Maximum.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Maximum.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Maximum.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Maximum.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Maximum.__init__": "tf.keras.layers.Add.__init__", + "tf.compat.v1.keras.layers.Maximum.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Maximum.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Maximum.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Maximum.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Maximum.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Maximum.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Maximum.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Maximum.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Maximum.build": "tf.keras.layers.Add.build", + "tf.compat.v1.keras.layers.Maximum.call": "tf.keras.layers.Add.call", + 
"tf.compat.v1.keras.layers.Maximum.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.compat.v1.keras.layers.Maximum.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + "tf.compat.v1.keras.layers.Maximum.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Maximum.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Maximum.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Maximum.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Maximum.get_config": "tf.keras.layers.Layer.get_config", + "tf.compat.v1.keras.layers.Maximum.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Maximum.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Maximum.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Maximum.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Maximum.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Maximum.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Maximum.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Maximum.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Maximum.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Maximum.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Maximum.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Maximum.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Maximum.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Maximum.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Minimum": "tf.keras.layers.Minimum", + "tf.compat.v1.keras.layers.Minimum.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Minimum.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Minimum.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Minimum.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Minimum.__init__": "tf.keras.layers.Add.__init__", + "tf.compat.v1.keras.layers.Minimum.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Minimum.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Minimum.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Minimum.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Minimum.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Minimum.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Minimum.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Minimum.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Minimum.build": "tf.keras.layers.Add.build", + "tf.compat.v1.keras.layers.Minimum.call": "tf.keras.layers.Add.call", + "tf.compat.v1.keras.layers.Minimum.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.compat.v1.keras.layers.Minimum.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + "tf.compat.v1.keras.layers.Minimum.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Minimum.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Minimum.dtype": "tf.keras.layers.Layer.dtype", + 
"tf.compat.v1.keras.layers.Minimum.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Minimum.get_config": "tf.keras.layers.Layer.get_config", + "tf.compat.v1.keras.layers.Minimum.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Minimum.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Minimum.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Minimum.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Minimum.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Minimum.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Minimum.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Minimum.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Minimum.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Minimum.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Minimum.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Minimum.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Minimum.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Minimum.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Multiply": "tf.keras.layers.Multiply", + "tf.compat.v1.keras.layers.Multiply.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Multiply.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Multiply.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Multiply.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Multiply.__init__": "tf.keras.layers.Add.__init__", + "tf.compat.v1.keras.layers.Multiply.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Multiply.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Multiply.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Multiply.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Multiply.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Multiply.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Multiply.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Multiply.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Multiply.build": "tf.keras.layers.Add.build", + "tf.compat.v1.keras.layers.Multiply.call": "tf.keras.layers.Add.call", + "tf.compat.v1.keras.layers.Multiply.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.compat.v1.keras.layers.Multiply.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + "tf.compat.v1.keras.layers.Multiply.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Multiply.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Multiply.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Multiply.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Multiply.get_config": "tf.keras.layers.Layer.get_config", + "tf.compat.v1.keras.layers.Multiply.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Multiply.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Multiply.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Multiply.losses": 
"tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Multiply.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Multiply.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Multiply.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Multiply.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Multiply.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Multiply.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Multiply.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Multiply.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Multiply.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Multiply.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.PReLU": "tf.keras.layers.PReLU", + "tf.compat.v1.keras.layers.PReLU.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.PReLU.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.PReLU.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.PReLU.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.PReLU.__init__": "tf.keras.layers.PReLU.__init__", + "tf.compat.v1.keras.layers.PReLU.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.PReLU.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.PReLU.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.PReLU.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.PReLU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.PReLU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.PReLU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.PReLU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.PReLU.build": "tf.keras.layers.PReLU.build", + "tf.compat.v1.keras.layers.PReLU.call": "tf.keras.layers.PReLU.call", + "tf.compat.v1.keras.layers.PReLU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.PReLU.compute_output_shape": "tf.keras.layers.PReLU.compute_output_shape", + "tf.compat.v1.keras.layers.PReLU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.PReLU.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.PReLU.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.PReLU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.PReLU.get_config": "tf.keras.layers.PReLU.get_config", + "tf.compat.v1.keras.layers.PReLU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.PReLU.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.PReLU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.PReLU.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.PReLU.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.PReLU.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.PReLU.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.PReLU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.PReLU.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.PReLU.set_weights": "tf.keras.layers.Layer.set_weights", + 
"tf.compat.v1.keras.layers.PReLU.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.PReLU.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.PReLU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.PReLU.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Permute": "tf.keras.layers.Permute", + "tf.compat.v1.keras.layers.Permute.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Permute.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Permute.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Permute.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Permute.__init__": "tf.keras.layers.Permute.__init__", + "tf.compat.v1.keras.layers.Permute.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Permute.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Permute.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Permute.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Permute.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Permute.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Permute.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Permute.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Permute.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Permute.call": "tf.keras.layers.Permute.call", + "tf.compat.v1.keras.layers.Permute.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Permute.compute_output_shape": "tf.keras.layers.Permute.compute_output_shape", + "tf.compat.v1.keras.layers.Permute.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Permute.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Permute.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Permute.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Permute.get_config": "tf.keras.layers.Permute.get_config", + "tf.compat.v1.keras.layers.Permute.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Permute.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Permute.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Permute.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Permute.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Permute.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Permute.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Permute.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Permute.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Permute.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Permute.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Permute.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Permute.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Permute.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.RNN": "tf.keras.layers.RNN", + "tf.compat.v1.keras.layers.RNN.__call__": "tf.keras.layers.RNN.__call__", + 
"tf.compat.v1.keras.layers.RNN.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.RNN.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.RNN.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.RNN.__init__": "tf.keras.layers.RNN.__init__", + "tf.compat.v1.keras.layers.RNN.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.RNN.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.RNN.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.RNN.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.RNN.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.RNN.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.RNN.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.RNN.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.RNN.build": "tf.keras.layers.RNN.build", + "tf.compat.v1.keras.layers.RNN.call": "tf.keras.layers.RNN.call", + "tf.compat.v1.keras.layers.RNN.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.compat.v1.keras.layers.RNN.compute_output_shape": "tf.keras.layers.RNN.compute_output_shape", + "tf.compat.v1.keras.layers.RNN.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.RNN.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.RNN.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.RNN.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.RNN.get_config": "tf.keras.layers.RNN.get_config", + "tf.compat.v1.keras.layers.RNN.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.RNN.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.RNN.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.RNN.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.RNN.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.RNN.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.RNN.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.RNN.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.RNN.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.RNN.reset_states": "tf.keras.layers.RNN.reset_states", + "tf.compat.v1.keras.layers.RNN.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.RNN.states": "tf.keras.layers.RNN.states", + "tf.compat.v1.keras.layers.RNN.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.RNN.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.RNN.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.RNN.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.ReLU": "tf.keras.layers.ReLU", + "tf.compat.v1.keras.layers.ReLU.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.ReLU.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.ReLU.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.ReLU.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.ReLU.__init__": "tf.keras.layers.ReLU.__init__", + "tf.compat.v1.keras.layers.ReLU.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.ReLU.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.ReLU.__ne__": "tf.keras.Model.__ne__", + 
"tf.compat.v1.keras.layers.ReLU.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.ReLU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.ReLU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.ReLU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.ReLU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.ReLU.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.ReLU.call": "tf.keras.layers.ReLU.call", + "tf.compat.v1.keras.layers.ReLU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.ReLU.compute_output_shape": "tf.keras.layers.ReLU.compute_output_shape", + "tf.compat.v1.keras.layers.ReLU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.ReLU.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.ReLU.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.ReLU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.ReLU.get_config": "tf.keras.layers.ReLU.get_config", + "tf.compat.v1.keras.layers.ReLU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.ReLU.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.ReLU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.ReLU.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.ReLU.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.ReLU.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.ReLU.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.ReLU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.ReLU.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.ReLU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.ReLU.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.ReLU.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.ReLU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.ReLU.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.RepeatVector": "tf.keras.layers.RepeatVector", + "tf.compat.v1.keras.layers.RepeatVector.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.RepeatVector.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.RepeatVector.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.RepeatVector.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.RepeatVector.__init__": "tf.keras.layers.RepeatVector.__init__", + "tf.compat.v1.keras.layers.RepeatVector.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.RepeatVector.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.RepeatVector.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.RepeatVector.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.RepeatVector.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.RepeatVector.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.RepeatVector.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.RepeatVector.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.RepeatVector.build": 
"tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.RepeatVector.call": "tf.keras.layers.RepeatVector.call", + "tf.compat.v1.keras.layers.RepeatVector.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.RepeatVector.compute_output_shape": "tf.keras.layers.RepeatVector.compute_output_shape", + "tf.compat.v1.keras.layers.RepeatVector.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.RepeatVector.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.RepeatVector.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.RepeatVector.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.RepeatVector.get_config": "tf.keras.layers.RepeatVector.get_config", + "tf.compat.v1.keras.layers.RepeatVector.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.RepeatVector.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.RepeatVector.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.RepeatVector.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.RepeatVector.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.RepeatVector.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.RepeatVector.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.RepeatVector.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.RepeatVector.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.RepeatVector.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.RepeatVector.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.RepeatVector.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.RepeatVector.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.RepeatVector.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Reshape": "tf.keras.layers.Reshape", + "tf.compat.v1.keras.layers.Reshape.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Reshape.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Reshape.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Reshape.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Reshape.__init__": "tf.keras.layers.Reshape.__init__", + "tf.compat.v1.keras.layers.Reshape.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Reshape.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Reshape.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Reshape.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Reshape.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Reshape.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Reshape.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Reshape.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Reshape.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Reshape.call": "tf.keras.layers.Reshape.call", + "tf.compat.v1.keras.layers.Reshape.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Reshape.compute_output_shape": "tf.keras.layers.Reshape.compute_output_shape", + "tf.compat.v1.keras.layers.Reshape.compute_output_signature": 
"tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Reshape.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Reshape.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Reshape.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Reshape.get_config": "tf.keras.layers.Reshape.get_config", + "tf.compat.v1.keras.layers.Reshape.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Reshape.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Reshape.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Reshape.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Reshape.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Reshape.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Reshape.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Reshape.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Reshape.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Reshape.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Reshape.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Reshape.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Reshape.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Reshape.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.SeparableConv1D": "tf.keras.layers.SeparableConv1D", + "tf.compat.v1.keras.layers.SeparableConv1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.SeparableConv1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.SeparableConv1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.SeparableConv1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.SeparableConv1D.__init__": "tf.keras.layers.SeparableConv1D.__init__", + "tf.compat.v1.keras.layers.SeparableConv1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.SeparableConv1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.SeparableConv1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.SeparableConv1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.SeparableConv1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.SeparableConv1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.SeparableConv1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.SeparableConv1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.SeparableConv1D.build": "tf.keras.layers.SeparableConv1D.build", + "tf.compat.v1.keras.layers.SeparableConv1D.call": "tf.keras.layers.SeparableConv1D.call", + "tf.compat.v1.keras.layers.SeparableConv1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.SeparableConv1D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.keras.layers.SeparableConv1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.SeparableConv1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.SeparableConv1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.SeparableConv1D.dynamic": 
"tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.SeparableConv1D.get_config": "tf.keras.layers.SeparableConv1D.get_config", + "tf.compat.v1.keras.layers.SeparableConv1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.SeparableConv1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.SeparableConv1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.SeparableConv1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.SeparableConv1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.SeparableConv1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.SeparableConv1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.SeparableConv1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.SeparableConv1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.SeparableConv1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.SeparableConv1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.SeparableConv1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.SeparableConv1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.SeparableConv1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.SeparableConv2D": "tf.keras.layers.SeparableConv2D", + "tf.compat.v1.keras.layers.SeparableConv2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.SeparableConv2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.SeparableConv2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.SeparableConv2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.SeparableConv2D.__init__": "tf.keras.layers.SeparableConv2D.__init__", + "tf.compat.v1.keras.layers.SeparableConv2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.SeparableConv2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.SeparableConv2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.SeparableConv2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.SeparableConv2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.SeparableConv2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.SeparableConv2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.SeparableConv2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.SeparableConv2D.build": "tf.keras.layers.SeparableConv1D.build", + "tf.compat.v1.keras.layers.SeparableConv2D.call": "tf.keras.layers.SeparableConv2D.call", + "tf.compat.v1.keras.layers.SeparableConv2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.SeparableConv2D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.keras.layers.SeparableConv2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.SeparableConv2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.SeparableConv2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.SeparableConv2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.SeparableConv2D.get_config": "tf.keras.layers.SeparableConv1D.get_config", + 
"tf.compat.v1.keras.layers.SeparableConv2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.SeparableConv2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.SeparableConv2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.SeparableConv2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.SeparableConv2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.SeparableConv2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.SeparableConv2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.SeparableConv2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.SeparableConv2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.SeparableConv2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.SeparableConv2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.SeparableConv2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.SeparableConv2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.SeparableConv2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.SeparableConvolution1D": "tf.keras.layers.SeparableConv1D", + "tf.compat.v1.keras.layers.SeparableConvolution1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.SeparableConvolution1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.SeparableConvolution1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.SeparableConvolution1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.SeparableConvolution1D.__init__": "tf.keras.layers.SeparableConv1D.__init__", + "tf.compat.v1.keras.layers.SeparableConvolution1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.SeparableConvolution1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.SeparableConvolution1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.SeparableConvolution1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.SeparableConvolution1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.SeparableConvolution1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.SeparableConvolution1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.SeparableConvolution1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.SeparableConvolution1D.build": "tf.keras.layers.SeparableConv1D.build", + "tf.compat.v1.keras.layers.SeparableConvolution1D.call": "tf.keras.layers.SeparableConv1D.call", + "tf.compat.v1.keras.layers.SeparableConvolution1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.SeparableConvolution1D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.keras.layers.SeparableConvolution1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.SeparableConvolution1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.SeparableConvolution1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.SeparableConvolution1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.SeparableConvolution1D.get_config": "tf.keras.layers.SeparableConv1D.get_config", + 
"tf.compat.v1.keras.layers.SeparableConvolution1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.SeparableConvolution1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.SeparableConvolution1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.SeparableConvolution1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.SeparableConvolution1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.SeparableConvolution1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.SeparableConvolution1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.SeparableConvolution1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.SeparableConvolution1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.SeparableConvolution1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.SeparableConvolution1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.SeparableConvolution1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.SeparableConvolution1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.SeparableConvolution1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.SeparableConvolution2D": "tf.keras.layers.SeparableConv2D", + "tf.compat.v1.keras.layers.SeparableConvolution2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.SeparableConvolution2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.SeparableConvolution2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.SeparableConvolution2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.SeparableConvolution2D.__init__": "tf.keras.layers.SeparableConv2D.__init__", + "tf.compat.v1.keras.layers.SeparableConvolution2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.SeparableConvolution2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.SeparableConvolution2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.SeparableConvolution2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.SeparableConvolution2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.SeparableConvolution2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.SeparableConvolution2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.SeparableConvolution2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.SeparableConvolution2D.build": "tf.keras.layers.SeparableConv1D.build", + "tf.compat.v1.keras.layers.SeparableConvolution2D.call": "tf.keras.layers.SeparableConv2D.call", + "tf.compat.v1.keras.layers.SeparableConvolution2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.SeparableConvolution2D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.keras.layers.SeparableConvolution2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.SeparableConvolution2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.SeparableConvolution2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.SeparableConvolution2D.dynamic": "tf.keras.layers.Layer.dynamic", + 
"tf.compat.v1.keras.layers.SeparableConvolution2D.get_config": "tf.keras.layers.SeparableConv1D.get_config", + "tf.compat.v1.keras.layers.SeparableConvolution2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.SeparableConvolution2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.SeparableConvolution2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.SeparableConvolution2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.SeparableConvolution2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.SeparableConvolution2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.SeparableConvolution2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.SeparableConvolution2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.SeparableConvolution2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.SeparableConvolution2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.SeparableConvolution2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.SeparableConvolution2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.SeparableConvolution2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.SeparableConvolution2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.SimpleRNN": "tf.keras.layers.SimpleRNN", + "tf.compat.v1.keras.layers.SimpleRNN.__call__": "tf.keras.layers.RNN.__call__", + "tf.compat.v1.keras.layers.SimpleRNN.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.SimpleRNN.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.SimpleRNN.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.SimpleRNN.__init__": "tf.keras.layers.SimpleRNN.__init__", + "tf.compat.v1.keras.layers.SimpleRNN.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.SimpleRNN.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.SimpleRNN.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.SimpleRNN.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.SimpleRNN.activation": "tf.keras.layers.SimpleRNN.activation", + "tf.compat.v1.keras.layers.SimpleRNN.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.SimpleRNN.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.SimpleRNN.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.SimpleRNN.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.SimpleRNN.bias_constraint": "tf.keras.layers.SimpleRNN.bias_constraint", + "tf.compat.v1.keras.layers.SimpleRNN.bias_initializer": "tf.keras.layers.SimpleRNN.bias_initializer", + "tf.compat.v1.keras.layers.SimpleRNN.bias_regularizer": "tf.keras.layers.SimpleRNN.bias_regularizer", + "tf.compat.v1.keras.layers.SimpleRNN.build": "tf.keras.layers.RNN.build", + "tf.compat.v1.keras.layers.SimpleRNN.call": "tf.keras.layers.SimpleRNN.call", + "tf.compat.v1.keras.layers.SimpleRNN.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.compat.v1.keras.layers.SimpleRNN.compute_output_shape": "tf.keras.layers.RNN.compute_output_shape", + "tf.compat.v1.keras.layers.SimpleRNN.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.SimpleRNN.count_params": 
"tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.SimpleRNN.dropout": "tf.keras.layers.SimpleRNN.dropout", + "tf.compat.v1.keras.layers.SimpleRNN.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.SimpleRNN.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.SimpleRNN.get_config": "tf.keras.layers.SimpleRNN.get_config", + "tf.compat.v1.keras.layers.SimpleRNN.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.SimpleRNN.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.SimpleRNN.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.SimpleRNN.kernel_constraint": "tf.keras.layers.SimpleRNN.kernel_constraint", + "tf.compat.v1.keras.layers.SimpleRNN.kernel_initializer": "tf.keras.layers.SimpleRNN.kernel_initializer", + "tf.compat.v1.keras.layers.SimpleRNN.kernel_regularizer": "tf.keras.layers.SimpleRNN.kernel_regularizer", + "tf.compat.v1.keras.layers.SimpleRNN.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.SimpleRNN.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.SimpleRNN.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.SimpleRNN.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.SimpleRNN.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.SimpleRNN.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.SimpleRNN.recurrent_constraint": "tf.keras.layers.SimpleRNN.recurrent_constraint", + "tf.compat.v1.keras.layers.SimpleRNN.recurrent_dropout": "tf.keras.layers.SimpleRNN.recurrent_dropout", + "tf.compat.v1.keras.layers.SimpleRNN.recurrent_initializer": "tf.keras.layers.SimpleRNN.recurrent_initializer", + "tf.compat.v1.keras.layers.SimpleRNN.recurrent_regularizer": "tf.keras.layers.SimpleRNN.recurrent_regularizer", + "tf.compat.v1.keras.layers.SimpleRNN.reset_states": "tf.keras.layers.RNN.reset_states", + "tf.compat.v1.keras.layers.SimpleRNN.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.SimpleRNN.states": "tf.keras.layers.RNN.states", + "tf.compat.v1.keras.layers.SimpleRNN.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.SimpleRNN.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.SimpleRNN.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.SimpleRNN.units": "tf.keras.layers.SimpleRNN.units", + "tf.compat.v1.keras.layers.SimpleRNN.use_bias": "tf.keras.layers.SimpleRNN.use_bias", + "tf.compat.v1.keras.layers.SimpleRNN.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.SimpleRNNCell": "tf.keras.layers.SimpleRNNCell", + "tf.compat.v1.keras.layers.SimpleRNNCell.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.SimpleRNNCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.SimpleRNNCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.SimpleRNNCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.SimpleRNNCell.__init__": "tf.keras.layers.SimpleRNNCell.__init__", + "tf.compat.v1.keras.layers.SimpleRNNCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.SimpleRNNCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.SimpleRNNCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.SimpleRNNCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.SimpleRNNCell.activity_regularizer": 
"tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.SimpleRNNCell.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.SimpleRNNCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.SimpleRNNCell.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.SimpleRNNCell.build": "tf.keras.layers.SimpleRNNCell.build", + "tf.compat.v1.keras.layers.SimpleRNNCell.call": "tf.keras.layers.SimpleRNNCell.call", + "tf.compat.v1.keras.layers.SimpleRNNCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.SimpleRNNCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.layers.SimpleRNNCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.SimpleRNNCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.SimpleRNNCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.SimpleRNNCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.SimpleRNNCell.get_config": "tf.keras.layers.SimpleRNNCell.get_config", + "tf.compat.v1.keras.layers.SimpleRNNCell.get_dropout_mask_for_cell": "tf.keras.layers.GRU.get_dropout_mask_for_cell", + "tf.compat.v1.keras.layers.SimpleRNNCell.get_initial_state": "tf.keras.layers.SimpleRNNCell.get_initial_state", + "tf.compat.v1.keras.layers.SimpleRNNCell.get_recurrent_dropout_mask_for_cell": "tf.keras.layers.GRU.get_recurrent_dropout_mask_for_cell", + "tf.compat.v1.keras.layers.SimpleRNNCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.SimpleRNNCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.SimpleRNNCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.SimpleRNNCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.SimpleRNNCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.SimpleRNNCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.SimpleRNNCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.SimpleRNNCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.SimpleRNNCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.SimpleRNNCell.reset_dropout_mask": "tf.keras.layers.GRU.reset_dropout_mask", + "tf.compat.v1.keras.layers.SimpleRNNCell.reset_recurrent_dropout_mask": "tf.keras.layers.GRU.reset_recurrent_dropout_mask", + "tf.compat.v1.keras.layers.SimpleRNNCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.SimpleRNNCell.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.SimpleRNNCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.SimpleRNNCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.SimpleRNNCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Softmax": "tf.keras.layers.Softmax", + "tf.compat.v1.keras.layers.Softmax.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Softmax.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Softmax.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Softmax.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Softmax.__init__": "tf.keras.layers.Softmax.__init__", + "tf.compat.v1.keras.layers.Softmax.__le__": 
"tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Softmax.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Softmax.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Softmax.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Softmax.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Softmax.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Softmax.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Softmax.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Softmax.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.Softmax.call": "tf.keras.layers.Softmax.call", + "tf.compat.v1.keras.layers.Softmax.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Softmax.compute_output_shape": "tf.keras.layers.Softmax.compute_output_shape", + "tf.compat.v1.keras.layers.Softmax.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Softmax.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Softmax.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Softmax.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Softmax.get_config": "tf.keras.layers.Softmax.get_config", + "tf.compat.v1.keras.layers.Softmax.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Softmax.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Softmax.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Softmax.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Softmax.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Softmax.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Softmax.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Softmax.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Softmax.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Softmax.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Softmax.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Softmax.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Softmax.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Softmax.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.SpatialDropout1D": "tf.keras.layers.SpatialDropout1D", + "tf.compat.v1.keras.layers.SpatialDropout1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.SpatialDropout1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.SpatialDropout1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.SpatialDropout1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.SpatialDropout1D.__init__": "tf.keras.layers.SpatialDropout1D.__init__", + "tf.compat.v1.keras.layers.SpatialDropout1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.SpatialDropout1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.SpatialDropout1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.SpatialDropout1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.SpatialDropout1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + 
"tf.compat.v1.keras.layers.SpatialDropout1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.SpatialDropout1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.SpatialDropout1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.SpatialDropout1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.SpatialDropout1D.call": "tf.keras.layers.Dropout.call", + "tf.compat.v1.keras.layers.SpatialDropout1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.SpatialDropout1D.compute_output_shape": "tf.keras.layers.Dropout.compute_output_shape", + "tf.compat.v1.keras.layers.SpatialDropout1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.SpatialDropout1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.SpatialDropout1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.SpatialDropout1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.SpatialDropout1D.get_config": "tf.keras.layers.Dropout.get_config", + "tf.compat.v1.keras.layers.SpatialDropout1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.SpatialDropout1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.SpatialDropout1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.SpatialDropout1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.SpatialDropout1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.SpatialDropout1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.SpatialDropout1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.SpatialDropout1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.SpatialDropout1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.SpatialDropout1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.SpatialDropout1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.SpatialDropout1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.SpatialDropout1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.SpatialDropout1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.SpatialDropout2D": "tf.keras.layers.SpatialDropout2D", + "tf.compat.v1.keras.layers.SpatialDropout2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.SpatialDropout2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.SpatialDropout2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.SpatialDropout2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.SpatialDropout2D.__init__": "tf.keras.layers.SpatialDropout2D.__init__", + "tf.compat.v1.keras.layers.SpatialDropout2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.SpatialDropout2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.SpatialDropout2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.SpatialDropout2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.SpatialDropout2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.SpatialDropout2D.add_loss": "tf.keras.layers.Layer.add_loss", + 
"tf.compat.v1.keras.layers.SpatialDropout2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.SpatialDropout2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.SpatialDropout2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.SpatialDropout2D.call": "tf.keras.layers.Dropout.call", + "tf.compat.v1.keras.layers.SpatialDropout2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.SpatialDropout2D.compute_output_shape": "tf.keras.layers.Dropout.compute_output_shape", + "tf.compat.v1.keras.layers.SpatialDropout2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.SpatialDropout2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.SpatialDropout2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.SpatialDropout2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.SpatialDropout2D.get_config": "tf.keras.layers.Dropout.get_config", + "tf.compat.v1.keras.layers.SpatialDropout2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.SpatialDropout2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.SpatialDropout2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.SpatialDropout2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.SpatialDropout2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.SpatialDropout2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.SpatialDropout2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.SpatialDropout2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.SpatialDropout2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.SpatialDropout2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.SpatialDropout2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.SpatialDropout2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.SpatialDropout2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.SpatialDropout2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.SpatialDropout3D": "tf.keras.layers.SpatialDropout3D", + "tf.compat.v1.keras.layers.SpatialDropout3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.SpatialDropout3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.SpatialDropout3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.SpatialDropout3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.SpatialDropout3D.__init__": "tf.keras.layers.SpatialDropout3D.__init__", + "tf.compat.v1.keras.layers.SpatialDropout3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.SpatialDropout3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.SpatialDropout3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.SpatialDropout3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.SpatialDropout3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.SpatialDropout3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.SpatialDropout3D.add_metric": "tf.keras.layers.Layer.add_metric", + 
"tf.compat.v1.keras.layers.SpatialDropout3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.SpatialDropout3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.SpatialDropout3D.call": "tf.keras.layers.Dropout.call", + "tf.compat.v1.keras.layers.SpatialDropout3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.SpatialDropout3D.compute_output_shape": "tf.keras.layers.Dropout.compute_output_shape", + "tf.compat.v1.keras.layers.SpatialDropout3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.SpatialDropout3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.SpatialDropout3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.SpatialDropout3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.SpatialDropout3D.get_config": "tf.keras.layers.Dropout.get_config", + "tf.compat.v1.keras.layers.SpatialDropout3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.SpatialDropout3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.SpatialDropout3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.SpatialDropout3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.SpatialDropout3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.SpatialDropout3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.SpatialDropout3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.SpatialDropout3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.SpatialDropout3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.SpatialDropout3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.SpatialDropout3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.SpatialDropout3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.SpatialDropout3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.SpatialDropout3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.StackedRNNCells": "tf.keras.layers.StackedRNNCells", + "tf.compat.v1.keras.layers.StackedRNNCells.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.StackedRNNCells.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.StackedRNNCells.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.StackedRNNCells.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.StackedRNNCells.__init__": "tf.keras.layers.StackedRNNCells.__init__", + "tf.compat.v1.keras.layers.StackedRNNCells.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.StackedRNNCells.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.StackedRNNCells.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.StackedRNNCells.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.StackedRNNCells.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.StackedRNNCells.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.StackedRNNCells.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.StackedRNNCells.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.StackedRNNCells.build": 
"tf.keras.layers.StackedRNNCells.build", + "tf.compat.v1.keras.layers.StackedRNNCells.call": "tf.keras.layers.StackedRNNCells.call", + "tf.compat.v1.keras.layers.StackedRNNCells.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.StackedRNNCells.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.layers.StackedRNNCells.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.StackedRNNCells.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.StackedRNNCells.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.StackedRNNCells.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.StackedRNNCells.get_config": "tf.keras.layers.StackedRNNCells.get_config", + "tf.compat.v1.keras.layers.StackedRNNCells.get_initial_state": "tf.keras.layers.StackedRNNCells.get_initial_state", + "tf.compat.v1.keras.layers.StackedRNNCells.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.StackedRNNCells.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.StackedRNNCells.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.StackedRNNCells.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.StackedRNNCells.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.StackedRNNCells.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.StackedRNNCells.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.StackedRNNCells.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.StackedRNNCells.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.StackedRNNCells.output_size": "tf.keras.layers.StackedRNNCells.output_size", + "tf.compat.v1.keras.layers.StackedRNNCells.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.StackedRNNCells.state_size": "tf.keras.layers.StackedRNNCells.state_size", + "tf.compat.v1.keras.layers.StackedRNNCells.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.StackedRNNCells.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.StackedRNNCells.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.StackedRNNCells.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Subtract": "tf.keras.layers.Subtract", + "tf.compat.v1.keras.layers.Subtract.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Subtract.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Subtract.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Subtract.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Subtract.__init__": "tf.keras.layers.Add.__init__", + "tf.compat.v1.keras.layers.Subtract.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Subtract.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Subtract.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Subtract.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Subtract.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.Subtract.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Subtract.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Subtract.add_weight": "tf.keras.layers.Layer.add_weight", + 
"tf.compat.v1.keras.layers.Subtract.build": "tf.keras.layers.Subtract.build", + "tf.compat.v1.keras.layers.Subtract.call": "tf.keras.layers.Add.call", + "tf.compat.v1.keras.layers.Subtract.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.compat.v1.keras.layers.Subtract.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + "tf.compat.v1.keras.layers.Subtract.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Subtract.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Subtract.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Subtract.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Subtract.get_config": "tf.keras.layers.Layer.get_config", + "tf.compat.v1.keras.layers.Subtract.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Subtract.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Subtract.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Subtract.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Subtract.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Subtract.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Subtract.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Subtract.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Subtract.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.Subtract.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Subtract.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Subtract.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Subtract.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Subtract.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.ThresholdedReLU": "tf.keras.layers.ThresholdedReLU", + "tf.compat.v1.keras.layers.ThresholdedReLU.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.ThresholdedReLU.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.ThresholdedReLU.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.ThresholdedReLU.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.ThresholdedReLU.__init__": "tf.keras.layers.ThresholdedReLU.__init__", + "tf.compat.v1.keras.layers.ThresholdedReLU.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.ThresholdedReLU.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.ThresholdedReLU.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.ThresholdedReLU.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.ThresholdedReLU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.ThresholdedReLU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.ThresholdedReLU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.ThresholdedReLU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.ThresholdedReLU.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.ThresholdedReLU.call": "tf.keras.layers.ThresholdedReLU.call", + "tf.compat.v1.keras.layers.ThresholdedReLU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.ThresholdedReLU.compute_output_shape": 
"tf.keras.layers.ThresholdedReLU.compute_output_shape", + "tf.compat.v1.keras.layers.ThresholdedReLU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.ThresholdedReLU.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.ThresholdedReLU.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.ThresholdedReLU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.ThresholdedReLU.get_config": "tf.keras.layers.ThresholdedReLU.get_config", + "tf.compat.v1.keras.layers.ThresholdedReLU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.ThresholdedReLU.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.ThresholdedReLU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.ThresholdedReLU.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.ThresholdedReLU.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.ThresholdedReLU.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.ThresholdedReLU.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.ThresholdedReLU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.ThresholdedReLU.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.ThresholdedReLU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.ThresholdedReLU.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.ThresholdedReLU.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.ThresholdedReLU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.ThresholdedReLU.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.TimeDistributed": "tf.keras.layers.TimeDistributed", + "tf.compat.v1.keras.layers.TimeDistributed.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.TimeDistributed.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.TimeDistributed.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.TimeDistributed.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.TimeDistributed.__init__": "tf.keras.layers.TimeDistributed.__init__", + "tf.compat.v1.keras.layers.TimeDistributed.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.TimeDistributed.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.TimeDistributed.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.TimeDistributed.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.TimeDistributed.activity_regularizer": "tf.keras.layers.Wrapper.activity_regularizer", + "tf.compat.v1.keras.layers.TimeDistributed.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.TimeDistributed.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.TimeDistributed.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.TimeDistributed.build": "tf.keras.layers.TimeDistributed.build", + "tf.compat.v1.keras.layers.TimeDistributed.call": "tf.keras.layers.TimeDistributed.call", + "tf.compat.v1.keras.layers.TimeDistributed.compute_mask": "tf.keras.layers.TimeDistributed.compute_mask", + "tf.compat.v1.keras.layers.TimeDistributed.compute_output_shape": "tf.keras.layers.TimeDistributed.compute_output_shape", + "tf.compat.v1.keras.layers.TimeDistributed.compute_output_signature": 
"tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.TimeDistributed.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.TimeDistributed.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.TimeDistributed.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.TimeDistributed.get_config": "tf.keras.layers.Wrapper.get_config", + "tf.compat.v1.keras.layers.TimeDistributed.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.TimeDistributed.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.TimeDistributed.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.TimeDistributed.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.TimeDistributed.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.TimeDistributed.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.TimeDistributed.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.TimeDistributed.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.TimeDistributed.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.TimeDistributed.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.TimeDistributed.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.TimeDistributed.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.TimeDistributed.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.TimeDistributed.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.UpSampling1D": "tf.keras.layers.UpSampling1D", + "tf.compat.v1.keras.layers.UpSampling1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.UpSampling1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.UpSampling1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.UpSampling1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.UpSampling1D.__init__": "tf.keras.layers.UpSampling1D.__init__", + "tf.compat.v1.keras.layers.UpSampling1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.UpSampling1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.UpSampling1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.UpSampling1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.UpSampling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.UpSampling1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.UpSampling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.UpSampling1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.UpSampling1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.UpSampling1D.call": "tf.keras.layers.UpSampling1D.call", + "tf.compat.v1.keras.layers.UpSampling1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.UpSampling1D.compute_output_shape": "tf.keras.layers.UpSampling1D.compute_output_shape", + "tf.compat.v1.keras.layers.UpSampling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.UpSampling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.UpSampling1D.dtype": "tf.keras.layers.Layer.dtype", + 
"tf.compat.v1.keras.layers.UpSampling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.UpSampling1D.get_config": "tf.keras.layers.UpSampling1D.get_config", + "tf.compat.v1.keras.layers.UpSampling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.UpSampling1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.UpSampling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.UpSampling1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.UpSampling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.UpSampling1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.UpSampling1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.UpSampling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.UpSampling1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.UpSampling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.UpSampling1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.UpSampling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.UpSampling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.UpSampling1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.UpSampling2D": "tf.keras.layers.UpSampling2D", + "tf.compat.v1.keras.layers.UpSampling2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.UpSampling2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.UpSampling2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.UpSampling2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.UpSampling2D.__init__": "tf.keras.layers.UpSampling2D.__init__", + "tf.compat.v1.keras.layers.UpSampling2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.UpSampling2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.UpSampling2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.UpSampling2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.UpSampling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.UpSampling2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.UpSampling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.UpSampling2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.UpSampling2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.UpSampling2D.call": "tf.keras.layers.UpSampling2D.call", + "tf.compat.v1.keras.layers.UpSampling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.UpSampling2D.compute_output_shape": "tf.keras.layers.UpSampling2D.compute_output_shape", + "tf.compat.v1.keras.layers.UpSampling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.UpSampling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.UpSampling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.UpSampling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.UpSampling2D.get_config": "tf.keras.layers.UpSampling2D.get_config", + "tf.compat.v1.keras.layers.UpSampling2D.get_weights": "tf.keras.layers.Layer.get_weights", + 
"tf.compat.v1.keras.layers.UpSampling2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.UpSampling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.UpSampling2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.UpSampling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.UpSampling2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.UpSampling2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.UpSampling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.UpSampling2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.UpSampling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.UpSampling2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.UpSampling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.UpSampling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.UpSampling2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.UpSampling3D": "tf.keras.layers.UpSampling3D", + "tf.compat.v1.keras.layers.UpSampling3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.UpSampling3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.UpSampling3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.UpSampling3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.UpSampling3D.__init__": "tf.keras.layers.UpSampling3D.__init__", + "tf.compat.v1.keras.layers.UpSampling3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.UpSampling3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.UpSampling3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.UpSampling3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.UpSampling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.UpSampling3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.UpSampling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.UpSampling3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.UpSampling3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.UpSampling3D.call": "tf.keras.layers.UpSampling3D.call", + "tf.compat.v1.keras.layers.UpSampling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.UpSampling3D.compute_output_shape": "tf.keras.layers.UpSampling3D.compute_output_shape", + "tf.compat.v1.keras.layers.UpSampling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.UpSampling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.UpSampling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.UpSampling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.UpSampling3D.get_config": "tf.keras.layers.UpSampling3D.get_config", + "tf.compat.v1.keras.layers.UpSampling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.UpSampling3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.UpSampling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.UpSampling3D.losses": "tf.keras.layers.Layer.losses", + 
"tf.compat.v1.keras.layers.UpSampling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.UpSampling3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.UpSampling3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.UpSampling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.UpSampling3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.UpSampling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.UpSampling3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.UpSampling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.UpSampling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.UpSampling3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.Wrapper": "tf.keras.layers.Wrapper", + "tf.compat.v1.keras.layers.Wrapper.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.Wrapper.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.Wrapper.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.Wrapper.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.Wrapper.__init__": "tf.keras.layers.Wrapper.__init__", + "tf.compat.v1.keras.layers.Wrapper.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.Wrapper.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.Wrapper.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.Wrapper.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.Wrapper.activity_regularizer": "tf.keras.layers.Wrapper.activity_regularizer", + "tf.compat.v1.keras.layers.Wrapper.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.Wrapper.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.Wrapper.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.Wrapper.build": "tf.keras.layers.Wrapper.build", + "tf.compat.v1.keras.layers.Wrapper.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.layers.Wrapper.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.Wrapper.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.layers.Wrapper.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.Wrapper.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.Wrapper.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.Wrapper.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.Wrapper.get_config": "tf.keras.layers.Wrapper.get_config", + "tf.compat.v1.keras.layers.Wrapper.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.Wrapper.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.Wrapper.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.Wrapper.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.Wrapper.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.Wrapper.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.Wrapper.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.Wrapper.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.Wrapper.output": "tf.keras.layers.Layer.output", + 
"tf.compat.v1.keras.layers.Wrapper.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.Wrapper.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.Wrapper.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.Wrapper.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.Wrapper.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.ZeroPadding1D": "tf.keras.layers.ZeroPadding1D", + "tf.compat.v1.keras.layers.ZeroPadding1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.ZeroPadding1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.ZeroPadding1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.ZeroPadding1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.ZeroPadding1D.__init__": "tf.keras.layers.ZeroPadding1D.__init__", + "tf.compat.v1.keras.layers.ZeroPadding1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.ZeroPadding1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.ZeroPadding1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.ZeroPadding1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.ZeroPadding1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.ZeroPadding1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.ZeroPadding1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.ZeroPadding1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.ZeroPadding1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.ZeroPadding1D.call": "tf.keras.layers.ZeroPadding1D.call", + "tf.compat.v1.keras.layers.ZeroPadding1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.ZeroPadding1D.compute_output_shape": "tf.keras.layers.ZeroPadding1D.compute_output_shape", + "tf.compat.v1.keras.layers.ZeroPadding1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.ZeroPadding1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.ZeroPadding1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.ZeroPadding1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.ZeroPadding1D.get_config": "tf.keras.layers.ZeroPadding1D.get_config", + "tf.compat.v1.keras.layers.ZeroPadding1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.ZeroPadding1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.ZeroPadding1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.ZeroPadding1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.ZeroPadding1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.ZeroPadding1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.ZeroPadding1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.ZeroPadding1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.ZeroPadding1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.ZeroPadding1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.ZeroPadding1D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.ZeroPadding1D.trainable": "tf.keras.layers.Layer.trainable", + 
"tf.compat.v1.keras.layers.ZeroPadding1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.ZeroPadding1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.ZeroPadding2D": "tf.keras.layers.ZeroPadding2D", + "tf.compat.v1.keras.layers.ZeroPadding2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.ZeroPadding2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.ZeroPadding2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.ZeroPadding2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.ZeroPadding2D.__init__": "tf.keras.layers.ZeroPadding2D.__init__", + "tf.compat.v1.keras.layers.ZeroPadding2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.ZeroPadding2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.ZeroPadding2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.ZeroPadding2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.ZeroPadding2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.ZeroPadding2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.ZeroPadding2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.ZeroPadding2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.ZeroPadding2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.ZeroPadding2D.call": "tf.keras.layers.ZeroPadding2D.call", + "tf.compat.v1.keras.layers.ZeroPadding2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.ZeroPadding2D.compute_output_shape": "tf.keras.layers.ZeroPadding2D.compute_output_shape", + "tf.compat.v1.keras.layers.ZeroPadding2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.ZeroPadding2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.ZeroPadding2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.ZeroPadding2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.ZeroPadding2D.get_config": "tf.keras.layers.ZeroPadding2D.get_config", + "tf.compat.v1.keras.layers.ZeroPadding2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.ZeroPadding2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.ZeroPadding2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.ZeroPadding2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.ZeroPadding2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.ZeroPadding2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.ZeroPadding2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.ZeroPadding2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.ZeroPadding2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.ZeroPadding2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.ZeroPadding2D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.ZeroPadding2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.ZeroPadding2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.ZeroPadding2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.ZeroPadding3D": 
"tf.keras.layers.ZeroPadding3D", + "tf.compat.v1.keras.layers.ZeroPadding3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.ZeroPadding3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.ZeroPadding3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.ZeroPadding3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.ZeroPadding3D.__init__": "tf.keras.layers.ZeroPadding3D.__init__", + "tf.compat.v1.keras.layers.ZeroPadding3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.ZeroPadding3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.ZeroPadding3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.ZeroPadding3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.ZeroPadding3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.ZeroPadding3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.ZeroPadding3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.ZeroPadding3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.ZeroPadding3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.ZeroPadding3D.call": "tf.keras.layers.ZeroPadding3D.call", + "tf.compat.v1.keras.layers.ZeroPadding3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.ZeroPadding3D.compute_output_shape": "tf.keras.layers.ZeroPadding3D.compute_output_shape", + "tf.compat.v1.keras.layers.ZeroPadding3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.ZeroPadding3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.ZeroPadding3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.ZeroPadding3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.ZeroPadding3D.get_config": "tf.keras.layers.ZeroPadding3D.get_config", + "tf.compat.v1.keras.layers.ZeroPadding3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.ZeroPadding3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.ZeroPadding3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.ZeroPadding3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.ZeroPadding3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.ZeroPadding3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.ZeroPadding3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.ZeroPadding3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.ZeroPadding3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.ZeroPadding3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.ZeroPadding3D.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.ZeroPadding3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.ZeroPadding3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.ZeroPadding3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.add": "tf.keras.layers.add", + "tf.compat.v1.keras.layers.average": "tf.keras.layers.average", + "tf.compat.v1.keras.layers.concatenate": "tf.keras.layers.concatenate", + "tf.compat.v1.keras.layers.deserialize": "tf.keras.layers.deserialize", + 
"tf.compat.v1.keras.layers.dot": "tf.keras.layers.dot", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop": "tf.keras.layers.experimental.preprocessing.CenterCrop", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__init__": "tf.keras.layers.experimental.preprocessing.CenterCrop.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.build": "tf.keras.layers.experimental.preprocessing.CenterCrop.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.call": "tf.keras.layers.experimental.preprocessing.CenterCrop.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.compute_output_shape": "tf.keras.layers.experimental.preprocessing.CenterCrop.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.get_config": "tf.keras.layers.experimental.preprocessing.CenterCrop.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.name": "tf.keras.layers.Layer.name", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__init__": "tf.keras.layers.experimental.preprocessing.Normalization.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.adapt": "tf.keras.layers.experimental.preprocessing.Normalization.adapt", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.build": "tf.keras.layers.experimental.preprocessing.Normalization.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.call": "tf.keras.layers.experimental.preprocessing.Normalization.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.compute_output_shape": "tf.keras.layers.experimental.preprocessing.Normalization.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.compute_output_signature": "tf.keras.layers.experimental.preprocessing.Normalization.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.dtype": "tf.keras.layers.Layer.dtype", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.get_config": "tf.keras.layers.experimental.preprocessing.Normalization.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.set_weights": "tf.keras.layers.experimental.preprocessing.Normalization.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer": "tf.keras.layers.experimental.preprocessing.PreprocessingLayer", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__init__": "tf.keras.layers.Layer.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.adapt": "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.adapt", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.add_loss": "tf.keras.layers.Layer.add_loss", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.get_config": "tf.keras.layers.Layer.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast": "tf.keras.layers.experimental.preprocessing.RandomContrast", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__init__": "tf.keras.layers.experimental.preprocessing.RandomContrast.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.call": "tf.keras.layers.experimental.preprocessing.RandomContrast.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.compute_output_shape": "tf.keras.layers.experimental.preprocessing.RandomContrast.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.get_config": "tf.keras.layers.experimental.preprocessing.RandomContrast.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.set_weights": "tf.keras.layers.Layer.set_weights", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop": "tf.keras.layers.experimental.preprocessing.RandomCrop", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__init__": "tf.keras.layers.experimental.preprocessing.RandomCrop.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.call": "tf.keras.layers.experimental.preprocessing.RandomCrop.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.compute_output_shape": "tf.keras.layers.experimental.preprocessing.RandomCrop.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.get_config": "tf.keras.layers.experimental.preprocessing.RandomCrop.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.input_spec": "tf.keras.layers.Layer.input_spec", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip": "tf.keras.layers.experimental.preprocessing.RandomFlip", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__init__": "tf.keras.layers.experimental.preprocessing.RandomFlip.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.call": "tf.keras.layers.experimental.preprocessing.RandomFlip.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.compute_output_shape": "tf.keras.layers.experimental.preprocessing.RandomFlip.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.get_config": "tf.keras.layers.experimental.preprocessing.RandomFlip.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight": "tf.keras.layers.experimental.preprocessing.RandomHeight", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__init__": "tf.keras.layers.experimental.preprocessing.RandomHeight.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.add_metric": "tf.keras.layers.Layer.add_metric", 
+ "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.call": "tf.keras.layers.experimental.preprocessing.RandomHeight.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.compute_output_shape": "tf.keras.layers.experimental.preprocessing.RandomHeight.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.get_config": "tf.keras.layers.experimental.preprocessing.RandomHeight.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation": "tf.keras.layers.experimental.preprocessing.RandomRotation", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__gt__": "tf.keras.Model.__gt__", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__init__": "tf.keras.layers.experimental.preprocessing.RandomRotation.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.call": "tf.keras.layers.experimental.preprocessing.RandomRotation.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.compute_output_shape": "tf.keras.layers.experimental.preprocessing.RandomRotation.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.get_config": "tf.keras.layers.experimental.preprocessing.RandomRotation.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.submodules": "tf.Module.submodules", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation": "tf.keras.layers.experimental.preprocessing.RandomTranslation", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__init__": "tf.keras.layers.experimental.preprocessing.RandomTranslation.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.call": "tf.keras.layers.experimental.preprocessing.RandomTranslation.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.compute_output_shape": "tf.keras.layers.experimental.preprocessing.RandomTranslation.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.get_config": "tf.keras.layers.experimental.preprocessing.RandomTranslation.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.input": "tf.keras.layers.Layer.input", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth": "tf.keras.layers.experimental.preprocessing.RandomWidth", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__init__": "tf.keras.layers.experimental.preprocessing.RandomWidth.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.call": "tf.keras.layers.experimental.preprocessing.RandomWidth.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.compute_output_shape": 
"tf.keras.layers.experimental.preprocessing.RandomWidth.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.get_config": "tf.keras.layers.experimental.preprocessing.RandomWidth.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling": "tf.keras.layers.experimental.preprocessing.Rescaling", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__init__": "tf.keras.layers.experimental.preprocessing.Rescaling.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.call": "tf.keras.layers.experimental.preprocessing.Rescaling.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.compute_output_shape": "tf.keras.layers.experimental.preprocessing.Rescaling.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.get_config": "tf.keras.layers.experimental.preprocessing.Rescaling.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing": "tf.keras.layers.experimental.preprocessing.Resizing", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__init__": "tf.keras.layers.experimental.preprocessing.Resizing.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.build": "tf.keras.layers.experimental.preprocessing.Resizing.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.call": "tf.keras.layers.experimental.preprocessing.Resizing.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.compute_output_shape": "tf.keras.layers.experimental.preprocessing.Resizing.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.get_config": "tf.keras.layers.experimental.preprocessing.Resizing.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.submodules": "tf.Module.submodules", + 
"tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__init__": "tf.keras.layers.experimental.preprocessing.TextVectorization.__init__", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.adapt": "tf.keras.layers.experimental.preprocessing.TextVectorization.adapt", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.build": "tf.keras.layers.experimental.preprocessing.TextVectorization.build", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.call": "tf.keras.layers.experimental.preprocessing.TextVectorization.call", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.compute_output_shape": "tf.keras.layers.experimental.preprocessing.TextVectorization.compute_output_shape", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.compute_output_signature": "tf.keras.layers.experimental.preprocessing.TextVectorization.compute_output_signature", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.count_params": "tf.keras.layers.experimental.preprocessing.TextVectorization.count_params", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.get_config": "tf.keras.layers.experimental.preprocessing.TextVectorization.get_config", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.get_vocabulary": 
"tf.keras.layers.experimental.preprocessing.TextVectorization.get_vocabulary", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.set_vocabulary": "tf.keras.layers.experimental.preprocessing.TextVectorization.set_vocabulary", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.layers.maximum": "tf.keras.layers.maximum", + "tf.compat.v1.keras.layers.minimum": "tf.keras.layers.minimum", + "tf.compat.v1.keras.layers.multiply": "tf.keras.layers.multiply", + "tf.compat.v1.keras.layers.serialize": "tf.keras.layers.serialize", + "tf.compat.v1.keras.layers.subtract": "tf.keras.layers.subtract", + "tf.compat.v1.keras.losses.BinaryCrossentropy": "tf.keras.losses.BinaryCrossentropy", + "tf.compat.v1.keras.losses.BinaryCrossentropy.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.BinaryCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.BinaryCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.BinaryCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.BinaryCrossentropy.__init__": "tf.keras.losses.BinaryCrossentropy.__init__", + "tf.compat.v1.keras.losses.BinaryCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.BinaryCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.BinaryCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.BinaryCrossentropy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.BinaryCrossentropy.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.BinaryCrossentropy.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.CategoricalCrossentropy": "tf.keras.losses.CategoricalCrossentropy", + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__call__": "tf.keras.losses.Loss.__call__", + 
"tf.compat.v1.keras.losses.CategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__init__": "tf.keras.losses.CategoricalCrossentropy.__init__", + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.CategoricalCrossentropy.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.CategoricalCrossentropy.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.CategoricalHinge": "tf.keras.losses.CategoricalHinge", + "tf.compat.v1.keras.losses.CategoricalHinge.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.CategoricalHinge.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.CategoricalHinge.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.CategoricalHinge.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.CategoricalHinge.__init__": "tf.keras.losses.CategoricalHinge.__init__", + "tf.compat.v1.keras.losses.CategoricalHinge.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.CategoricalHinge.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.CategoricalHinge.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.CategoricalHinge.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.CategoricalHinge.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.CategoricalHinge.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.CosineSimilarity": "tf.keras.losses.CosineSimilarity", + "tf.compat.v1.keras.losses.CosineSimilarity.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.CosineSimilarity.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.CosineSimilarity.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.CosineSimilarity.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.CosineSimilarity.__init__": "tf.keras.losses.CosineSimilarity.__init__", + "tf.compat.v1.keras.losses.CosineSimilarity.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.CosineSimilarity.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.CosineSimilarity.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.CosineSimilarity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.CosineSimilarity.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.CosineSimilarity.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.Hinge": "tf.keras.losses.Hinge", + "tf.compat.v1.keras.losses.Hinge.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.Hinge.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.Hinge.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.Hinge.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.Hinge.__init__": "tf.keras.losses.Hinge.__init__", + "tf.compat.v1.keras.losses.Hinge.__le__": 
"tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.Hinge.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.Hinge.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.Hinge.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.Hinge.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.Hinge.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.Huber": "tf.keras.losses.Huber", + "tf.compat.v1.keras.losses.Huber.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.Huber.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.Huber.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.Huber.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.Huber.__init__": "tf.keras.losses.Huber.__init__", + "tf.compat.v1.keras.losses.Huber.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.Huber.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.Huber.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.Huber.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.Huber.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.Huber.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.KLD": "tf.keras.losses.KLD", + "tf.compat.v1.keras.losses.KLDivergence": "tf.keras.losses.KLDivergence", + "tf.compat.v1.keras.losses.KLDivergence.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.KLDivergence.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.KLDivergence.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.KLDivergence.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.KLDivergence.__init__": "tf.keras.losses.KLDivergence.__init__", + "tf.compat.v1.keras.losses.KLDivergence.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.KLDivergence.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.KLDivergence.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.KLDivergence.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.KLDivergence.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.KLDivergence.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.LogCosh": "tf.keras.losses.LogCosh", + "tf.compat.v1.keras.losses.LogCosh.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.LogCosh.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.LogCosh.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.LogCosh.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.LogCosh.__init__": "tf.keras.losses.LogCosh.__init__", + "tf.compat.v1.keras.losses.LogCosh.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.LogCosh.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.LogCosh.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.LogCosh.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.LogCosh.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.LogCosh.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.Loss": "tf.keras.losses.Loss", + "tf.compat.v1.keras.losses.Loss.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.Loss.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.keras.losses.Loss.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.Loss.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.Loss.__init__": "tf.keras.losses.Loss.__init__", + "tf.compat.v1.keras.losses.Loss.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.Loss.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.Loss.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.Loss.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.Loss.call": "tf.keras.losses.Loss.call", + "tf.compat.v1.keras.losses.Loss.get_config": "tf.keras.losses.Loss.get_config", + "tf.compat.v1.keras.losses.MAE": "tf.keras.losses.MAE", + "tf.compat.v1.keras.losses.MAPE": "tf.keras.losses.MAPE", + "tf.compat.v1.keras.losses.MSE": "tf.keras.losses.MSE", + "tf.compat.v1.keras.losses.MSLE": "tf.keras.losses.MSLE", + "tf.compat.v1.keras.losses.MeanAbsoluteError": "tf.keras.losses.MeanAbsoluteError", + "tf.compat.v1.keras.losses.MeanAbsoluteError.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.MeanAbsoluteError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.MeanAbsoluteError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.MeanAbsoluteError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.MeanAbsoluteError.__init__": "tf.keras.losses.MeanAbsoluteError.__init__", + "tf.compat.v1.keras.losses.MeanAbsoluteError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.MeanAbsoluteError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.MeanAbsoluteError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.MeanAbsoluteError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.MeanAbsoluteError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.MeanAbsoluteError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError": "tf.keras.losses.MeanAbsolutePercentageError", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__init__": "tf.keras.losses.MeanAbsolutePercentageError.__init__", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.MeanSquaredError": "tf.keras.losses.MeanSquaredError", + "tf.compat.v1.keras.losses.MeanSquaredError.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.MeanSquaredError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.MeanSquaredError.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.keras.losses.MeanSquaredError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.MeanSquaredError.__init__": "tf.keras.losses.MeanSquaredError.__init__", + "tf.compat.v1.keras.losses.MeanSquaredError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.MeanSquaredError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.MeanSquaredError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.MeanSquaredError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.MeanSquaredError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.MeanSquaredError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError": "tf.keras.losses.MeanSquaredLogarithmicError", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__init__": "tf.keras.losses.MeanSquaredLogarithmicError.__init__", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.Poisson": "tf.keras.losses.Poisson", + "tf.compat.v1.keras.losses.Poisson.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.Poisson.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.Poisson.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.Poisson.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.Poisson.__init__": "tf.keras.losses.Poisson.__init__", + "tf.compat.v1.keras.losses.Poisson.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.Poisson.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.Poisson.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.Poisson.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.Poisson.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.Poisson.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy": "tf.keras.losses.SparseCategoricalCrossentropy", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__init__": "tf.keras.losses.SparseCategoricalCrossentropy.__init__", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__le__": 
"tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.SquaredHinge": "tf.keras.losses.SquaredHinge", + "tf.compat.v1.keras.losses.SquaredHinge.__call__": "tf.keras.losses.Loss.__call__", + "tf.compat.v1.keras.losses.SquaredHinge.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.losses.SquaredHinge.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.losses.SquaredHinge.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.losses.SquaredHinge.__init__": "tf.keras.losses.SquaredHinge.__init__", + "tf.compat.v1.keras.losses.SquaredHinge.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.losses.SquaredHinge.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.losses.SquaredHinge.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.losses.SquaredHinge.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.losses.SquaredHinge.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.compat.v1.keras.losses.SquaredHinge.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.compat.v1.keras.losses.binary_crossentropy": "tf.keras.losses.binary_crossentropy", + "tf.compat.v1.keras.losses.categorical_crossentropy": "tf.keras.losses.categorical_crossentropy", + "tf.compat.v1.keras.losses.categorical_hinge": "tf.keras.losses.categorical_hinge", + "tf.compat.v1.keras.losses.cosine": "tf.keras.losses.cosine_similarity", + "tf.compat.v1.keras.losses.cosine_proximity": "tf.keras.losses.cosine_similarity", + "tf.compat.v1.keras.losses.cosine_similarity": "tf.keras.losses.cosine_similarity", + "tf.compat.v1.keras.losses.deserialize": "tf.keras.losses.deserialize", + "tf.compat.v1.keras.losses.get": "tf.keras.losses.get", + "tf.compat.v1.keras.losses.hinge": "tf.keras.losses.hinge", + "tf.compat.v1.keras.losses.kld": "tf.keras.losses.KLD", + "tf.compat.v1.keras.losses.kullback_leibler_divergence": "tf.keras.losses.KLD", + "tf.compat.v1.keras.losses.logcosh": "tf.keras.losses.logcosh", + "tf.compat.v1.keras.losses.mae": "tf.keras.losses.MAE", + "tf.compat.v1.keras.losses.mape": "tf.keras.losses.MAPE", + "tf.compat.v1.keras.losses.mean_absolute_error": "tf.keras.losses.MAE", + "tf.compat.v1.keras.losses.mean_absolute_percentage_error": "tf.keras.losses.MAPE", + "tf.compat.v1.keras.losses.mean_squared_error": "tf.keras.losses.MSE", + "tf.compat.v1.keras.losses.mean_squared_logarithmic_error": "tf.keras.losses.MSLE", + "tf.compat.v1.keras.losses.mse": "tf.keras.losses.MSE", + "tf.compat.v1.keras.losses.msle": "tf.keras.losses.MSLE", + "tf.compat.v1.keras.losses.poisson": "tf.keras.losses.poisson", + "tf.compat.v1.keras.losses.serialize": "tf.keras.losses.serialize", + "tf.compat.v1.keras.losses.sparse_categorical_crossentropy": "tf.keras.losses.sparse_categorical_crossentropy", + "tf.compat.v1.keras.losses.squared_hinge": "tf.keras.losses.squared_hinge", + "tf.compat.v1.keras.metrics.AUC": "tf.keras.metrics.AUC", + "tf.compat.v1.keras.metrics.AUC.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.AUC.__eq__": 
"tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.AUC.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.AUC.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.AUC.__init__": "tf.keras.metrics.AUC.__init__", + "tf.compat.v1.keras.metrics.AUC.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.AUC.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.AUC.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.AUC.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.AUC.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.AUC.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.AUC.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.AUC.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.AUC.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.AUC.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.AUC.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.AUC.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.AUC.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.AUC.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.AUC.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.AUC.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.AUC.get_config": "tf.keras.metrics.AUC.get_config", + "tf.compat.v1.keras.metrics.AUC.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.AUC.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.AUC.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.AUC.interpolate_pr_auc": "tf.keras.metrics.AUC.interpolate_pr_auc", + "tf.compat.v1.keras.metrics.AUC.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.AUC.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.AUC.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.AUC.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.AUC.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.AUC.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.AUC.reset_states": "tf.keras.metrics.AUC.reset_states", + "tf.compat.v1.keras.metrics.AUC.result": "tf.keras.metrics.AUC.result", + "tf.compat.v1.keras.metrics.AUC.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.AUC.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.AUC.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.AUC.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.AUC.update_state": "tf.keras.metrics.AUC.update_state", + "tf.compat.v1.keras.metrics.AUC.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.Accuracy": "tf.keras.metrics.Accuracy", + "tf.compat.v1.keras.metrics.Accuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.Accuracy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.Accuracy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.Accuracy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.Accuracy.__init__": 
"tf.keras.metrics.Accuracy.__init__", + "tf.compat.v1.keras.metrics.Accuracy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.Accuracy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.Accuracy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.Accuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.Accuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.Accuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.Accuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.Accuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.Accuracy.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.Accuracy.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.Accuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.Accuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.Accuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.Accuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.Accuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.Accuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.Accuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.Accuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.Accuracy.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.Accuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.Accuracy.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.Accuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.Accuracy.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.Accuracy.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.Accuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.Accuracy.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.Accuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.Accuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.Accuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.Accuracy.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.Accuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.Accuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.Accuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.Accuracy.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.BinaryAccuracy": "tf.keras.metrics.BinaryAccuracy", + "tf.compat.v1.keras.metrics.BinaryAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.BinaryAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.BinaryAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.BinaryAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.BinaryAccuracy.__init__": "tf.keras.metrics.BinaryAccuracy.__init__", + 
"tf.compat.v1.keras.metrics.BinaryAccuracy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.BinaryAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.BinaryAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.BinaryAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.BinaryAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.BinaryAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.BinaryAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.BinaryAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.BinaryAccuracy.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.BinaryAccuracy.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.BinaryAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.BinaryAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.BinaryAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.BinaryAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.BinaryAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.BinaryAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.BinaryAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.BinaryAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.BinaryAccuracy.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.BinaryAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.BinaryAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.BinaryAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.BinaryAccuracy.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.BinaryAccuracy.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.BinaryAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.BinaryAccuracy.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.BinaryAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.BinaryAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.BinaryAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.BinaryAccuracy.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.BinaryAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.BinaryAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.BinaryAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.BinaryAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.BinaryCrossentropy": "tf.keras.metrics.BinaryCrossentropy", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.keras.metrics.BinaryCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__init__": "tf.keras.metrics.BinaryCrossentropy.__init__", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.BinaryCrossentropy.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.CategoricalAccuracy": 
"tf.keras.metrics.CategoricalAccuracy", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__init__": "tf.keras.metrics.CategoricalAccuracy.__init__", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + 
"tf.compat.v1.keras.metrics.CategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.CategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy": "tf.keras.metrics.CategoricalCrossentropy", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__init__": "tf.keras.metrics.CategoricalCrossentropy.__init__", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.output": "tf.keras.layers.Layer.output", + 
"tf.compat.v1.keras.metrics.CategoricalCrossentropy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.CategoricalHinge": "tf.keras.metrics.CategoricalHinge", + "tf.compat.v1.keras.metrics.CategoricalHinge.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.CategoricalHinge.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.CategoricalHinge.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.CategoricalHinge.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.CategoricalHinge.__init__": "tf.keras.metrics.CategoricalHinge.__init__", + "tf.compat.v1.keras.metrics.CategoricalHinge.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.CategoricalHinge.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.CategoricalHinge.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.CategoricalHinge.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.CategoricalHinge.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.CategoricalHinge.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.CategoricalHinge.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.CategoricalHinge.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.CategoricalHinge.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.CategoricalHinge.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.CategoricalHinge.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.CategoricalHinge.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.CategoricalHinge.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.CategoricalHinge.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.CategoricalHinge.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.CategoricalHinge.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.CategoricalHinge.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.CategoricalHinge.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.CategoricalHinge.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.CategoricalHinge.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.CategoricalHinge.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.CategoricalHinge.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.CategoricalHinge.name": "tf.keras.layers.Layer.name", + 
"tf.compat.v1.keras.metrics.CategoricalHinge.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.CategoricalHinge.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.CategoricalHinge.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.CategoricalHinge.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.CategoricalHinge.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.CategoricalHinge.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.CategoricalHinge.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.CategoricalHinge.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.CategoricalHinge.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.CategoricalHinge.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.CategoricalHinge.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.CosineSimilarity": "tf.keras.metrics.CosineSimilarity", + "tf.compat.v1.keras.metrics.CosineSimilarity.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.CosineSimilarity.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.CosineSimilarity.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.CosineSimilarity.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.CosineSimilarity.__init__": "tf.keras.metrics.CosineSimilarity.__init__", + "tf.compat.v1.keras.metrics.CosineSimilarity.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.CosineSimilarity.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.CosineSimilarity.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.CosineSimilarity.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.CosineSimilarity.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.CosineSimilarity.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.CosineSimilarity.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.CosineSimilarity.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.CosineSimilarity.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.CosineSimilarity.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.CosineSimilarity.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.CosineSimilarity.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.CosineSimilarity.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.CosineSimilarity.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.CosineSimilarity.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.CosineSimilarity.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.CosineSimilarity.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.CosineSimilarity.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.CosineSimilarity.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.CosineSimilarity.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.CosineSimilarity.losses": 
"tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.CosineSimilarity.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.CosineSimilarity.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.CosineSimilarity.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.CosineSimilarity.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.CosineSimilarity.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.CosineSimilarity.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.CosineSimilarity.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.CosineSimilarity.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.CosineSimilarity.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.CosineSimilarity.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.CosineSimilarity.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.CosineSimilarity.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.CosineSimilarity.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.FalseNegatives": "tf.keras.metrics.FalseNegatives", + "tf.compat.v1.keras.metrics.FalseNegatives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.FalseNegatives.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.FalseNegatives.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.FalseNegatives.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.FalseNegatives.__init__": "tf.keras.metrics.FalseNegatives.__init__", + "tf.compat.v1.keras.metrics.FalseNegatives.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.FalseNegatives.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.FalseNegatives.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.FalseNegatives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.FalseNegatives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.FalseNegatives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.FalseNegatives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.FalseNegatives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.FalseNegatives.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.FalseNegatives.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.FalseNegatives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.FalseNegatives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.FalseNegatives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.FalseNegatives.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.FalseNegatives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.FalseNegatives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.FalseNegatives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + "tf.compat.v1.keras.metrics.FalseNegatives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.FalseNegatives.input": 
"tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.FalseNegatives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.FalseNegatives.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.FalseNegatives.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.FalseNegatives.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.FalseNegatives.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.FalseNegatives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.FalseNegatives.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.FalseNegatives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + "tf.compat.v1.keras.metrics.FalseNegatives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.compat.v1.keras.metrics.FalseNegatives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.FalseNegatives.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.FalseNegatives.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.FalseNegatives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.FalseNegatives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.compat.v1.keras.metrics.FalseNegatives.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.FalsePositives": "tf.keras.metrics.FalsePositives", + "tf.compat.v1.keras.metrics.FalsePositives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.FalsePositives.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.FalsePositives.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.FalsePositives.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.FalsePositives.__init__": "tf.keras.metrics.FalsePositives.__init__", + "tf.compat.v1.keras.metrics.FalsePositives.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.FalsePositives.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.FalsePositives.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.FalsePositives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.FalsePositives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.FalsePositives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.FalsePositives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.FalsePositives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.FalsePositives.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.FalsePositives.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.FalsePositives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.FalsePositives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.FalsePositives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.FalsePositives.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.FalsePositives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.FalsePositives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.FalsePositives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + 
"tf.compat.v1.keras.metrics.FalsePositives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.FalsePositives.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.FalsePositives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.FalsePositives.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.FalsePositives.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.FalsePositives.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.FalsePositives.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.FalsePositives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.FalsePositives.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.FalsePositives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + "tf.compat.v1.keras.metrics.FalsePositives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.compat.v1.keras.metrics.FalsePositives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.FalsePositives.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.FalsePositives.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.FalsePositives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.FalsePositives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.compat.v1.keras.metrics.FalsePositives.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.Hinge": "tf.keras.metrics.Hinge", + "tf.compat.v1.keras.metrics.Hinge.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.Hinge.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.Hinge.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.Hinge.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.Hinge.__init__": "tf.keras.metrics.Hinge.__init__", + "tf.compat.v1.keras.metrics.Hinge.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.Hinge.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.Hinge.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.Hinge.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.Hinge.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.Hinge.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.Hinge.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.Hinge.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.Hinge.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.Hinge.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.Hinge.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.Hinge.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.Hinge.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.Hinge.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.Hinge.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.Hinge.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.Hinge.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.Hinge.get_weights": "tf.keras.layers.Layer.get_weights", 
+ "tf.compat.v1.keras.metrics.Hinge.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.Hinge.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.Hinge.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.Hinge.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.Hinge.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.Hinge.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.Hinge.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.Hinge.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.Hinge.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.Hinge.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.Hinge.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.Hinge.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.Hinge.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.Hinge.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.Hinge.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.Hinge.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.KLD": "tf.keras.losses.KLD", + "tf.compat.v1.keras.metrics.KLDivergence": "tf.keras.metrics.KLDivergence", + "tf.compat.v1.keras.metrics.KLDivergence.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.KLDivergence.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.KLDivergence.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.KLDivergence.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.KLDivergence.__init__": "tf.keras.metrics.KLDivergence.__init__", + "tf.compat.v1.keras.metrics.KLDivergence.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.KLDivergence.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.KLDivergence.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.KLDivergence.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.KLDivergence.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.KLDivergence.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.KLDivergence.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.KLDivergence.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.KLDivergence.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.KLDivergence.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.KLDivergence.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.KLDivergence.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.KLDivergence.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.KLDivergence.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.KLDivergence.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.KLDivergence.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.KLDivergence.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.KLDivergence.get_weights": "tf.keras.layers.Layer.get_weights", + 
"tf.compat.v1.keras.metrics.KLDivergence.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.KLDivergence.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.KLDivergence.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.KLDivergence.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.KLDivergence.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.KLDivergence.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.KLDivergence.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.KLDivergence.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.KLDivergence.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.KLDivergence.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.KLDivergence.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.KLDivergence.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.KLDivergence.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.KLDivergence.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.KLDivergence.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.KLDivergence.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.LogCoshError": "tf.keras.metrics.LogCoshError", + "tf.compat.v1.keras.metrics.LogCoshError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.LogCoshError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.LogCoshError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.LogCoshError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.LogCoshError.__init__": "tf.keras.metrics.LogCoshError.__init__", + "tf.compat.v1.keras.metrics.LogCoshError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.LogCoshError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.LogCoshError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.LogCoshError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.LogCoshError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.LogCoshError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.LogCoshError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.LogCoshError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.LogCoshError.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.LogCoshError.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.LogCoshError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.LogCoshError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.LogCoshError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.LogCoshError.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.LogCoshError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.LogCoshError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.LogCoshError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.LogCoshError.get_weights": 
"tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.LogCoshError.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.LogCoshError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.LogCoshError.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.LogCoshError.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.LogCoshError.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.LogCoshError.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.LogCoshError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.LogCoshError.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.LogCoshError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.LogCoshError.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.LogCoshError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.LogCoshError.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.LogCoshError.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.LogCoshError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.LogCoshError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.LogCoshError.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.MAE": "tf.keras.losses.MAE", + "tf.compat.v1.keras.metrics.MAPE": "tf.keras.losses.MAPE", + "tf.compat.v1.keras.metrics.MSE": "tf.keras.losses.MSE", + "tf.compat.v1.keras.metrics.MSLE": "tf.keras.losses.MSLE", + "tf.compat.v1.keras.metrics.Mean": "tf.keras.metrics.Mean", + "tf.compat.v1.keras.metrics.Mean.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.Mean.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.Mean.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.Mean.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.Mean.__init__": "tf.keras.metrics.Mean.__init__", + "tf.compat.v1.keras.metrics.Mean.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.Mean.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.Mean.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.Mean.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.Mean.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.Mean.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.Mean.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.Mean.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.Mean.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.Mean.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.Mean.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.Mean.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.Mean.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.Mean.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.Mean.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.Mean.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.Mean.get_config": 
"tf.keras.metrics.Metric.get_config", + "tf.compat.v1.keras.metrics.Mean.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.Mean.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.Mean.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.Mean.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.Mean.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.Mean.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.Mean.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.Mean.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.Mean.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.Mean.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.Mean.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.Mean.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.Mean.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.Mean.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.Mean.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.Mean.update_state": "tf.keras.metrics.Mean.update_state", + "tf.compat.v1.keras.metrics.Mean.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.MeanAbsoluteError": "tf.keras.metrics.MeanAbsoluteError", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__init__": "tf.keras.metrics.MeanAbsoluteError.__init__", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.dynamic": "tf.keras.layers.Layer.dynamic", + 
"tf.compat.v1.keras.metrics.MeanAbsoluteError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.MeanAbsoluteError.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError": "tf.keras.metrics.MeanAbsolutePercentageError", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__init__": "tf.keras.metrics.MeanAbsolutePercentageError.__init__", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.compute_output_shape": 
"tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.MeanIoU": "tf.keras.metrics.MeanIoU", + "tf.compat.v1.keras.metrics.MeanIoU.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.MeanIoU.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.MeanIoU.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.MeanIoU.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.MeanIoU.__init__": "tf.keras.metrics.MeanIoU.__init__", + "tf.compat.v1.keras.metrics.MeanIoU.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.MeanIoU.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.MeanIoU.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.MeanIoU.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.MeanIoU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.MeanIoU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.MeanIoU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.MeanIoU.add_weight": "tf.keras.metrics.Metric.add_weight", + 
"tf.compat.v1.keras.metrics.MeanIoU.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.MeanIoU.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.MeanIoU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.MeanIoU.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.MeanIoU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.MeanIoU.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.MeanIoU.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.MeanIoU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.MeanIoU.get_config": "tf.keras.metrics.MeanIoU.get_config", + "tf.compat.v1.keras.metrics.MeanIoU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.MeanIoU.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.MeanIoU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.MeanIoU.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.MeanIoU.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.MeanIoU.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.MeanIoU.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.MeanIoU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.MeanIoU.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.MeanIoU.reset_states": "tf.keras.metrics.MeanIoU.reset_states", + "tf.compat.v1.keras.metrics.MeanIoU.result": "tf.keras.metrics.MeanIoU.result", + "tf.compat.v1.keras.metrics.MeanIoU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.MeanIoU.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.MeanIoU.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.MeanIoU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.MeanIoU.update_state": "tf.keras.metrics.MeanIoU.update_state", + "tf.compat.v1.keras.metrics.MeanIoU.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.MeanRelativeError": "tf.keras.metrics.MeanRelativeError", + "tf.compat.v1.keras.metrics.MeanRelativeError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.MeanRelativeError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.MeanRelativeError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.MeanRelativeError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.MeanRelativeError.__init__": "tf.keras.metrics.MeanRelativeError.__init__", + "tf.compat.v1.keras.metrics.MeanRelativeError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.MeanRelativeError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.MeanRelativeError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.MeanRelativeError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.MeanRelativeError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.MeanRelativeError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.MeanRelativeError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.MeanRelativeError.add_weight": "tf.keras.metrics.Metric.add_weight", + 
"tf.compat.v1.keras.metrics.MeanRelativeError.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.MeanRelativeError.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.MeanRelativeError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.MeanRelativeError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.MeanRelativeError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.MeanRelativeError.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.MeanRelativeError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.MeanRelativeError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.MeanRelativeError.get_config": "tf.keras.metrics.MeanRelativeError.get_config", + "tf.compat.v1.keras.metrics.MeanRelativeError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.MeanRelativeError.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.MeanRelativeError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.MeanRelativeError.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.MeanRelativeError.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.MeanRelativeError.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.MeanRelativeError.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.MeanRelativeError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.MeanRelativeError.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.MeanRelativeError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.MeanRelativeError.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.MeanRelativeError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.MeanRelativeError.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.MeanRelativeError.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.MeanRelativeError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.MeanRelativeError.update_state": "tf.keras.metrics.MeanRelativeError.update_state", + "tf.compat.v1.keras.metrics.MeanRelativeError.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.MeanSquaredError": "tf.keras.metrics.MeanSquaredError", + "tf.compat.v1.keras.metrics.MeanSquaredError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.MeanSquaredError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.MeanSquaredError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.MeanSquaredError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.MeanSquaredError.__init__": "tf.keras.metrics.MeanSquaredError.__init__", + "tf.compat.v1.keras.metrics.MeanSquaredError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.MeanSquaredError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.MeanSquaredError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.MeanSquaredError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.MeanSquaredError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + 
"tf.compat.v1.keras.metrics.MeanSquaredError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.MeanSquaredError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.MeanSquaredError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.MeanSquaredError.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.MeanSquaredError.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.MeanSquaredError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.MeanSquaredError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.MeanSquaredError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.MeanSquaredError.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.MeanSquaredError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.MeanSquaredError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.MeanSquaredError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.MeanSquaredError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.MeanSquaredError.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.MeanSquaredError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.MeanSquaredError.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.MeanSquaredError.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.MeanSquaredError.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.MeanSquaredError.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.MeanSquaredError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.MeanSquaredError.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.MeanSquaredError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.MeanSquaredError.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.MeanSquaredError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.MeanSquaredError.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.MeanSquaredError.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.MeanSquaredError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.MeanSquaredError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.MeanSquaredError.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError": "tf.keras.metrics.MeanSquaredLogarithmicError", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__init__": "tf.keras.metrics.MeanSquaredLogarithmicError.__init__", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__lt__": 
"tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.MeanTensor": "tf.keras.metrics.MeanTensor", + 
"tf.compat.v1.keras.metrics.MeanTensor.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.MeanTensor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.MeanTensor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.MeanTensor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.MeanTensor.__init__": "tf.keras.metrics.MeanTensor.__init__", + "tf.compat.v1.keras.metrics.MeanTensor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.MeanTensor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.MeanTensor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.MeanTensor.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.MeanTensor.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.MeanTensor.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.MeanTensor.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.MeanTensor.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.MeanTensor.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.MeanTensor.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.MeanTensor.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.MeanTensor.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.MeanTensor.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.MeanTensor.count": "tf.keras.metrics.MeanTensor.count", + "tf.compat.v1.keras.metrics.MeanTensor.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.MeanTensor.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.MeanTensor.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.MeanTensor.get_config": "tf.keras.metrics.Metric.get_config", + "tf.compat.v1.keras.metrics.MeanTensor.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.MeanTensor.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.MeanTensor.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.MeanTensor.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.MeanTensor.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.MeanTensor.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.MeanTensor.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.MeanTensor.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.MeanTensor.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.MeanTensor.reset_states": "tf.keras.metrics.MeanTensor.reset_states", + "tf.compat.v1.keras.metrics.MeanTensor.result": "tf.keras.metrics.MeanTensor.result", + "tf.compat.v1.keras.metrics.MeanTensor.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.MeanTensor.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.MeanTensor.total": "tf.keras.metrics.MeanTensor.total", + "tf.compat.v1.keras.metrics.MeanTensor.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.MeanTensor.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.MeanTensor.update_state": "tf.keras.metrics.MeanTensor.update_state", + 
"tf.compat.v1.keras.metrics.MeanTensor.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.Metric": "tf.keras.metrics.Metric", + "tf.compat.v1.keras.metrics.Metric.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.Metric.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.Metric.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.Metric.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.Metric.__init__": "tf.keras.metrics.Metric.__init__", + "tf.compat.v1.keras.metrics.Metric.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.Metric.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.Metric.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.Metric.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.Metric.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.Metric.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.Metric.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.Metric.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.Metric.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.Metric.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.Metric.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.Metric.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.Metric.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.Metric.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.Metric.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.Metric.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.Metric.get_config": "tf.keras.metrics.Metric.get_config", + "tf.compat.v1.keras.metrics.Metric.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.Metric.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.Metric.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.Metric.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.Metric.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.Metric.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.Metric.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.Metric.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.Metric.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.Metric.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.Metric.result": "tf.keras.metrics.Metric.result", + "tf.compat.v1.keras.metrics.Metric.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.Metric.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.Metric.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.Metric.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.Metric.update_state": "tf.keras.metrics.Metric.update_state", + "tf.compat.v1.keras.metrics.Metric.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.Poisson": "tf.keras.metrics.Poisson", + "tf.compat.v1.keras.metrics.Poisson.__call__": 
"tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.Poisson.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.Poisson.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.Poisson.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.Poisson.__init__": "tf.keras.metrics.Poisson.__init__", + "tf.compat.v1.keras.metrics.Poisson.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.Poisson.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.Poisson.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.Poisson.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.Poisson.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.Poisson.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.Poisson.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.Poisson.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.Poisson.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.Poisson.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.Poisson.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.Poisson.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.Poisson.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.Poisson.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.Poisson.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.Poisson.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.Poisson.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.Poisson.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.Poisson.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.Poisson.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.Poisson.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.Poisson.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.Poisson.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.Poisson.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.Poisson.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.Poisson.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.Poisson.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.Poisson.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.Poisson.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.Poisson.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.Poisson.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.Poisson.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.Poisson.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.Poisson.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.Precision": "tf.keras.metrics.Precision", + "tf.compat.v1.keras.metrics.Precision.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.Precision.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.Precision.__ge__": 
"tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.Precision.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.Precision.__init__": "tf.keras.metrics.Precision.__init__", + "tf.compat.v1.keras.metrics.Precision.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.Precision.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.Precision.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.Precision.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.Precision.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.Precision.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.Precision.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.Precision.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.Precision.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.Precision.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.Precision.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.Precision.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.Precision.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.Precision.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.Precision.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.Precision.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.Precision.get_config": "tf.keras.metrics.Precision.get_config", + "tf.compat.v1.keras.metrics.Precision.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.Precision.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.Precision.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.Precision.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.Precision.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.Precision.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.Precision.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.Precision.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.Precision.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.Precision.reset_states": "tf.keras.metrics.Precision.reset_states", + "tf.compat.v1.keras.metrics.Precision.result": "tf.keras.metrics.Precision.result", + "tf.compat.v1.keras.metrics.Precision.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.Precision.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.Precision.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.Precision.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.Precision.update_state": "tf.keras.metrics.Precision.update_state", + "tf.compat.v1.keras.metrics.Precision.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.PrecisionAtRecall": "tf.keras.metrics.PrecisionAtRecall", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.keras.metrics.PrecisionAtRecall.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__init__": "tf.keras.metrics.PrecisionAtRecall.__init__", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.get_config": "tf.keras.metrics.PrecisionAtRecall.get_config", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.result": "tf.keras.metrics.PrecisionAtRecall.result", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.compat.v1.keras.metrics.PrecisionAtRecall.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.Recall": "tf.keras.metrics.Recall", 
+ "tf.compat.v1.keras.metrics.Recall.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.Recall.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.Recall.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.Recall.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.Recall.__init__": "tf.keras.metrics.Recall.__init__", + "tf.compat.v1.keras.metrics.Recall.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.Recall.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.Recall.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.Recall.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.Recall.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.Recall.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.Recall.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.Recall.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.Recall.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.Recall.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.Recall.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.Recall.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.Recall.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.Recall.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.Recall.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.Recall.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.Recall.get_config": "tf.keras.metrics.Recall.get_config", + "tf.compat.v1.keras.metrics.Recall.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.Recall.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.Recall.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.Recall.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.Recall.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.Recall.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.Recall.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.Recall.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.Recall.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.Recall.reset_states": "tf.keras.metrics.Recall.reset_states", + "tf.compat.v1.keras.metrics.Recall.result": "tf.keras.metrics.Recall.result", + "tf.compat.v1.keras.metrics.Recall.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.Recall.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.Recall.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.Recall.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.Recall.update_state": "tf.keras.metrics.Recall.update_state", + "tf.compat.v1.keras.metrics.Recall.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.RecallAtPrecision": "tf.keras.metrics.RecallAtPrecision", + "tf.compat.v1.keras.metrics.RecallAtPrecision.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.RecallAtPrecision.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.keras.metrics.RecallAtPrecision.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.RecallAtPrecision.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.RecallAtPrecision.__init__": "tf.keras.metrics.RecallAtPrecision.__init__", + "tf.compat.v1.keras.metrics.RecallAtPrecision.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.RecallAtPrecision.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.RecallAtPrecision.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.RecallAtPrecision.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.RecallAtPrecision.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.RecallAtPrecision.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.RecallAtPrecision.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.RecallAtPrecision.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.RecallAtPrecision.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.RecallAtPrecision.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.RecallAtPrecision.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.RecallAtPrecision.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.RecallAtPrecision.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.RecallAtPrecision.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.RecallAtPrecision.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.RecallAtPrecision.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.RecallAtPrecision.get_config": "tf.keras.metrics.RecallAtPrecision.get_config", + "tf.compat.v1.keras.metrics.RecallAtPrecision.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.RecallAtPrecision.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.RecallAtPrecision.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.RecallAtPrecision.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.RecallAtPrecision.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.RecallAtPrecision.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.RecallAtPrecision.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.RecallAtPrecision.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.RecallAtPrecision.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.RecallAtPrecision.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.compat.v1.keras.metrics.RecallAtPrecision.result": "tf.keras.metrics.RecallAtPrecision.result", + "tf.compat.v1.keras.metrics.RecallAtPrecision.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.RecallAtPrecision.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.RecallAtPrecision.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.RecallAtPrecision.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.RecallAtPrecision.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.compat.v1.keras.metrics.RecallAtPrecision.weights": 
"tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.RootMeanSquaredError": "tf.keras.metrics.RootMeanSquaredError", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__init__": "tf.keras.metrics.RootMeanSquaredError.__init__", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.get_config": "tf.keras.metrics.Metric.get_config", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.result": "tf.keras.metrics.RootMeanSquaredError.result", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.submodules": "tf.Module.submodules", 
+ "tf.compat.v1.keras.metrics.RootMeanSquaredError.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.update_state": "tf.keras.metrics.RootMeanSquaredError.update_state", + "tf.compat.v1.keras.metrics.RootMeanSquaredError.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity": "tf.keras.metrics.SensitivityAtSpecificity", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__init__": "tf.keras.metrics.SensitivityAtSpecificity.__init__", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.get_config": "tf.keras.metrics.SensitivityAtSpecificity.get_config", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.non_trainable_weights": 
"tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.result": "tf.keras.metrics.SensitivityAtSpecificity.result", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy": "tf.keras.metrics.SparseCategoricalAccuracy", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__init__": "tf.keras.metrics.SparseCategoricalAccuracy.__init__", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + 
"tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy": "tf.keras.metrics.SparseCategoricalCrossentropy", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__init__": "tf.keras.metrics.SparseCategoricalCrossentropy.__init__", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.compute_output_shape": 
"tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy": "tf.keras.metrics.SparseTopKCategoricalAccuracy", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__init__": "tf.keras.metrics.SparseTopKCategoricalAccuracy.__init__", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.activity_regularizer": 
"tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity": "tf.keras.metrics.SpecificityAtSensitivity", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__init__": "tf.keras.metrics.SpecificityAtSensitivity.__init__", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.get_config": "tf.keras.metrics.SpecificityAtSensitivity.get_config", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.result": "tf.keras.metrics.SpecificityAtSensitivity.result", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.trainable": "tf.keras.layers.Layer.trainable", + 
"tf.compat.v1.keras.metrics.SpecificityAtSensitivity.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.SquaredHinge": "tf.keras.metrics.SquaredHinge", + "tf.compat.v1.keras.metrics.SquaredHinge.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.SquaredHinge.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.SquaredHinge.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.SquaredHinge.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.SquaredHinge.__init__": "tf.keras.metrics.SquaredHinge.__init__", + "tf.compat.v1.keras.metrics.SquaredHinge.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.SquaredHinge.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.SquaredHinge.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.SquaredHinge.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.SquaredHinge.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.SquaredHinge.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.SquaredHinge.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.SquaredHinge.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.SquaredHinge.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.SquaredHinge.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.SquaredHinge.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.SquaredHinge.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.SquaredHinge.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.SquaredHinge.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.SquaredHinge.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.SquaredHinge.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.SquaredHinge.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.SquaredHinge.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.SquaredHinge.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.SquaredHinge.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.SquaredHinge.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.SquaredHinge.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.SquaredHinge.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.SquaredHinge.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.SquaredHinge.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.SquaredHinge.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.SquaredHinge.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.SquaredHinge.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.SquaredHinge.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.SquaredHinge.submodules": "tf.Module.submodules", + 
"tf.compat.v1.keras.metrics.SquaredHinge.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.SquaredHinge.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.SquaredHinge.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.SquaredHinge.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.Sum": "tf.keras.metrics.Sum", + "tf.compat.v1.keras.metrics.Sum.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.Sum.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.Sum.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.Sum.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.Sum.__init__": "tf.keras.metrics.Sum.__init__", + "tf.compat.v1.keras.metrics.Sum.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.Sum.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.Sum.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.Sum.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.Sum.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.Sum.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.Sum.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.Sum.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.Sum.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.Sum.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.Sum.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.Sum.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.Sum.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.Sum.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.Sum.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.Sum.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.Sum.get_config": "tf.keras.metrics.Metric.get_config", + "tf.compat.v1.keras.metrics.Sum.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.Sum.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.Sum.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.Sum.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.Sum.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.Sum.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.Sum.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.Sum.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.Sum.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.Sum.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.Sum.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.Sum.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.Sum.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.Sum.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.Sum.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.Sum.update_state": "tf.keras.metrics.Mean.update_state", + "tf.compat.v1.keras.metrics.Sum.weights": 
"tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy": "tf.keras.metrics.TopKCategoricalAccuracy", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__init__": "tf.keras.metrics.TopKCategoricalAccuracy.__init__", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.set_weights": 
"tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.TrueNegatives": "tf.keras.metrics.TrueNegatives", + "tf.compat.v1.keras.metrics.TrueNegatives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.TrueNegatives.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.TrueNegatives.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.TrueNegatives.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.TrueNegatives.__init__": "tf.keras.metrics.TrueNegatives.__init__", + "tf.compat.v1.keras.metrics.TrueNegatives.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.TrueNegatives.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.TrueNegatives.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.TrueNegatives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.TrueNegatives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.TrueNegatives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.TrueNegatives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.TrueNegatives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.TrueNegatives.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.TrueNegatives.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.TrueNegatives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.TrueNegatives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.TrueNegatives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.TrueNegatives.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.TrueNegatives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.TrueNegatives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.TrueNegatives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + "tf.compat.v1.keras.metrics.TrueNegatives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.TrueNegatives.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.TrueNegatives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.TrueNegatives.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.TrueNegatives.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.TrueNegatives.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.TrueNegatives.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.TrueNegatives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.TrueNegatives.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.metrics.TrueNegatives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + 
"tf.compat.v1.keras.metrics.TrueNegatives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.compat.v1.keras.metrics.TrueNegatives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.TrueNegatives.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.TrueNegatives.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.TrueNegatives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.TrueNegatives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.compat.v1.keras.metrics.TrueNegatives.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.TruePositives": "tf.keras.metrics.TruePositives", + "tf.compat.v1.keras.metrics.TruePositives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.compat.v1.keras.metrics.TruePositives.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.metrics.TruePositives.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.metrics.TruePositives.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.metrics.TruePositives.__init__": "tf.keras.metrics.TruePositives.__init__", + "tf.compat.v1.keras.metrics.TruePositives.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.metrics.TruePositives.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.metrics.TruePositives.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.metrics.TruePositives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.compat.v1.keras.metrics.TruePositives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.metrics.TruePositives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.metrics.TruePositives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.metrics.TruePositives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.compat.v1.keras.metrics.TruePositives.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.keras.metrics.TruePositives.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.keras.metrics.TruePositives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.keras.metrics.TruePositives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.keras.metrics.TruePositives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.metrics.TruePositives.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.metrics.TruePositives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.compat.v1.keras.metrics.TruePositives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.keras.metrics.TruePositives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + "tf.compat.v1.keras.metrics.TruePositives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.keras.metrics.TruePositives.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.metrics.TruePositives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.keras.metrics.TruePositives.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.metrics.TruePositives.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.keras.metrics.TruePositives.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.metrics.TruePositives.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.metrics.TruePositives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.keras.metrics.TruePositives.output": "tf.keras.layers.Layer.output", + 
"tf.compat.v1.keras.metrics.TruePositives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + "tf.compat.v1.keras.metrics.TruePositives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.compat.v1.keras.metrics.TruePositives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.metrics.TruePositives.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.metrics.TruePositives.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.metrics.TruePositives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.keras.metrics.TruePositives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.compat.v1.keras.metrics.TruePositives.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.keras.metrics.binary_accuracy": "tf.keras.metrics.binary_accuracy", + "tf.compat.v1.keras.metrics.binary_crossentropy": "tf.keras.losses.binary_crossentropy", + "tf.compat.v1.keras.metrics.categorical_accuracy": "tf.keras.metrics.categorical_accuracy", + "tf.compat.v1.keras.metrics.categorical_crossentropy": "tf.keras.losses.categorical_crossentropy", + "tf.compat.v1.keras.metrics.cosine": "tf.keras.losses.cosine_similarity", + "tf.compat.v1.keras.metrics.cosine_proximity": "tf.keras.losses.cosine_similarity", + "tf.compat.v1.keras.metrics.deserialize": "tf.keras.metrics.deserialize", + "tf.compat.v1.keras.metrics.get": "tf.keras.metrics.get", + "tf.compat.v1.keras.metrics.hinge": "tf.keras.losses.hinge", + "tf.compat.v1.keras.metrics.kld": "tf.keras.losses.KLD", + "tf.compat.v1.keras.metrics.kullback_leibler_divergence": "tf.keras.losses.KLD", + "tf.compat.v1.keras.metrics.mae": "tf.keras.losses.MAE", + "tf.compat.v1.keras.metrics.mape": "tf.keras.losses.MAPE", + "tf.compat.v1.keras.metrics.mean_absolute_error": "tf.keras.losses.MAE", + "tf.compat.v1.keras.metrics.mean_absolute_percentage_error": "tf.keras.losses.MAPE", + "tf.compat.v1.keras.metrics.mean_squared_error": "tf.keras.losses.MSE", + "tf.compat.v1.keras.metrics.mean_squared_logarithmic_error": "tf.keras.losses.MSLE", + "tf.compat.v1.keras.metrics.mse": "tf.keras.losses.MSE", + "tf.compat.v1.keras.metrics.msle": "tf.keras.losses.MSLE", + "tf.compat.v1.keras.metrics.poisson": "tf.keras.losses.poisson", + "tf.compat.v1.keras.metrics.serialize": "tf.keras.metrics.serialize", + "tf.compat.v1.keras.metrics.sparse_categorical_accuracy": "tf.keras.metrics.sparse_categorical_accuracy", + "tf.compat.v1.keras.metrics.sparse_categorical_crossentropy": "tf.keras.losses.sparse_categorical_crossentropy", + "tf.compat.v1.keras.metrics.sparse_top_k_categorical_accuracy": "tf.keras.metrics.sparse_top_k_categorical_accuracy", + "tf.compat.v1.keras.metrics.squared_hinge": "tf.keras.losses.squared_hinge", + "tf.compat.v1.keras.metrics.top_k_categorical_accuracy": "tf.keras.metrics.top_k_categorical_accuracy", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer": "tf.keras.mixed_precision.experimental.LossScaleOptimizer", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__init__": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__init__", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__le__": 
"tf.keras.Model.__le__", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.add_slot": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.add_slot", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.apply_gradients": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.apply_gradients", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_config": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_config", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_gradients": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_gradients", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_scaled_loss": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_scaled_loss", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_slot": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_slot", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_slot_names": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_slot_names", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_unscaled_gradients": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_unscaled_gradients", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_weights": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_weights", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.iterations": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.iterations", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.learning_rate": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.learning_rate", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.loss_scale": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.loss_scale", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.lr": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.lr", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.set_weights": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.set_weights", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.variables": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.variables", + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.weights": "tf.keras.mixed_precision.experimental.LossScaleOptimizer.weights", + "tf.compat.v1.keras.mixed_precision.experimental.Policy": "tf.keras.mixed_precision.experimental.Policy", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.keras.mixed_precision.experimental.Policy.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__init__": "tf.keras.mixed_precision.experimental.Policy.__init__", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.compute_dtype": "tf.keras.mixed_precision.experimental.Policy.compute_dtype", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.get_config": "tf.keras.mixed_precision.experimental.Policy.get_config", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.loss_scale": "tf.keras.mixed_precision.experimental.Policy.loss_scale", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.name": "tf.keras.mixed_precision.experimental.Policy.name", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.should_cast_variables": "tf.keras.mixed_precision.experimental.Policy.should_cast_variables", + "tf.compat.v1.keras.mixed_precision.experimental.Policy.variable_dtype": "tf.keras.mixed_precision.experimental.Policy.variable_dtype", + "tf.compat.v1.keras.mixed_precision.experimental.get_layer_policy": "tf.keras.mixed_precision.experimental.get_layer_policy", + "tf.compat.v1.keras.mixed_precision.experimental.global_policy": "tf.keras.mixed_precision.experimental.global_policy", + "tf.compat.v1.keras.mixed_precision.experimental.set_policy": "tf.keras.mixed_precision.experimental.set_policy", + "tf.compat.v1.keras.models.Model": "tf.keras.Model", + "tf.compat.v1.keras.models.Model.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.models.Model.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.models.Model.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.models.Model.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.models.Model.__init__": "tf.keras.Model.__init__", + "tf.compat.v1.keras.models.Model.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.models.Model.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.models.Model.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.models.Model.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.models.Model.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.models.Model.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.models.Model.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.models.Model.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.models.Model.build": "tf.keras.Model.build", + "tf.compat.v1.keras.models.Model.call": "tf.keras.Model.call", + "tf.compat.v1.keras.models.Model.compile": "tf.keras.Model.compile", + "tf.compat.v1.keras.models.Model.compute_mask": "tf.keras.Model.compute_mask", + "tf.compat.v1.keras.models.Model.compute_output_shape": "tf.keras.Model.compute_output_shape", + "tf.compat.v1.keras.models.Model.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.models.Model.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.models.Model.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.compat.v1.keras.models.Model.dtype": 
"tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.models.Model.dynamic": "tf.keras.Model.dynamic", + "tf.compat.v1.keras.models.Model.evaluate": "tf.keras.Model.evaluate", + "tf.compat.v1.keras.models.Model.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.compat.v1.keras.models.Model.fit": "tf.keras.Model.fit", + "tf.compat.v1.keras.models.Model.fit_generator": "tf.keras.Model.fit_generator", + "tf.compat.v1.keras.models.Model.get_config": "tf.keras.Model.get_config", + "tf.compat.v1.keras.models.Model.get_layer": "tf.keras.Model.get_layer", + "tf.compat.v1.keras.models.Model.get_weights": "tf.keras.Model.get_weights", + "tf.compat.v1.keras.models.Model.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.models.Model.input_spec": "tf.keras.Model.input_spec", + "tf.compat.v1.keras.models.Model.layers": "tf.keras.Model.layers", + "tf.compat.v1.keras.models.Model.load_weights": "tf.keras.Model.load_weights", + "tf.compat.v1.keras.models.Model.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.models.Model.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.compat.v1.keras.models.Model.make_test_function": "tf.keras.Model.make_test_function", + "tf.compat.v1.keras.models.Model.make_train_function": "tf.keras.Model.make_train_function", + "tf.compat.v1.keras.models.Model.metrics": "tf.keras.Model.metrics", + "tf.compat.v1.keras.models.Model.metrics_names": "tf.keras.Model.metrics_names", + "tf.compat.v1.keras.models.Model.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.models.Model.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.models.Model.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.compat.v1.keras.models.Model.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.models.Model.predict": "tf.keras.Model.predict", + "tf.compat.v1.keras.models.Model.predict_generator": "tf.keras.Model.predict_generator", + "tf.compat.v1.keras.models.Model.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.compat.v1.keras.models.Model.predict_step": "tf.keras.Model.predict_step", + "tf.compat.v1.keras.models.Model.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.compat.v1.keras.models.Model.reset_states": "tf.keras.Model.reset_states", + "tf.compat.v1.keras.models.Model.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.compat.v1.keras.models.Model.save": "tf.keras.Model.save", + "tf.compat.v1.keras.models.Model.save_weights": "tf.keras.Model.save_weights", + "tf.compat.v1.keras.models.Model.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.models.Model.state_updates": "tf.keras.Model.state_updates", + "tf.compat.v1.keras.models.Model.stateful": "tf.keras.Model.stateful", + "tf.compat.v1.keras.models.Model.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.models.Model.summary": "tf.keras.Model.summary", + "tf.compat.v1.keras.models.Model.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.compat.v1.keras.models.Model.test_step": "tf.keras.Model.test_step", + "tf.compat.v1.keras.models.Model.to_json": "tf.keras.Model.to_json", + "tf.compat.v1.keras.models.Model.to_yaml": "tf.keras.Model.to_yaml", + "tf.compat.v1.keras.models.Model.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.compat.v1.keras.models.Model.train_step": "tf.keras.Model.train_step", + "tf.compat.v1.keras.models.Model.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.models.Model.trainable_weights": "tf.keras.Model.trainable_weights", + 
"tf.compat.v1.keras.models.Model.weights": "tf.keras.Model.weights", + "tf.compat.v1.keras.models.Sequential": "tf.keras.Sequential", + "tf.compat.v1.keras.models.Sequential.__call__": "tf.keras.layers.Layer.__call__", + "tf.compat.v1.keras.models.Sequential.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.models.Sequential.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.models.Sequential.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.models.Sequential.__init__": "tf.keras.Sequential.__init__", + "tf.compat.v1.keras.models.Sequential.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.models.Sequential.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.models.Sequential.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.models.Sequential.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.keras.models.Sequential.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.keras.models.Sequential.add": "tf.keras.Sequential.add", + "tf.compat.v1.keras.models.Sequential.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.compat.v1.keras.models.Sequential.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.keras.models.Sequential.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.compat.v1.keras.models.Sequential.build": "tf.keras.Sequential.build", + "tf.compat.v1.keras.models.Sequential.call": "tf.keras.Sequential.call", + "tf.compat.v1.keras.models.Sequential.compile": "tf.keras.Model.compile", + "tf.compat.v1.keras.models.Sequential.compute_mask": "tf.keras.Sequential.compute_mask", + "tf.compat.v1.keras.models.Sequential.compute_output_shape": "tf.keras.Sequential.compute_output_shape", + "tf.compat.v1.keras.models.Sequential.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.keras.models.Sequential.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.keras.models.Sequential.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.compat.v1.keras.models.Sequential.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.keras.models.Sequential.dynamic": "tf.keras.Sequential.dynamic", + "tf.compat.v1.keras.models.Sequential.evaluate": "tf.keras.Model.evaluate", + "tf.compat.v1.keras.models.Sequential.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.compat.v1.keras.models.Sequential.fit": "tf.keras.Model.fit", + "tf.compat.v1.keras.models.Sequential.fit_generator": "tf.keras.Model.fit_generator", + "tf.compat.v1.keras.models.Sequential.get_config": "tf.keras.Sequential.get_config", + "tf.compat.v1.keras.models.Sequential.get_layer": "tf.keras.Model.get_layer", + "tf.compat.v1.keras.models.Sequential.get_weights": "tf.keras.Model.get_weights", + "tf.compat.v1.keras.models.Sequential.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.keras.models.Sequential.input_spec": "tf.keras.Sequential.input_spec", + "tf.compat.v1.keras.models.Sequential.layers": "tf.keras.Sequential.layers", + "tf.compat.v1.keras.models.Sequential.load_weights": "tf.keras.Model.load_weights", + "tf.compat.v1.keras.models.Sequential.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.keras.models.Sequential.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.compat.v1.keras.models.Sequential.make_test_function": "tf.keras.Model.make_test_function", + "tf.compat.v1.keras.models.Sequential.make_train_function": "tf.keras.Model.make_train_function", + "tf.compat.v1.keras.models.Sequential.metrics": "tf.keras.Model.metrics", + 
"tf.compat.v1.keras.models.Sequential.metrics_names": "tf.keras.Model.metrics_names", + "tf.compat.v1.keras.models.Sequential.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.keras.models.Sequential.name_scope": "tf.Module.name_scope", + "tf.compat.v1.keras.models.Sequential.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.compat.v1.keras.models.Sequential.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.keras.models.Sequential.pop": "tf.keras.Sequential.pop", + "tf.compat.v1.keras.models.Sequential.predict": "tf.keras.Model.predict", + "tf.compat.v1.keras.models.Sequential.predict_classes": "tf.keras.Sequential.predict_classes", + "tf.compat.v1.keras.models.Sequential.predict_generator": "tf.keras.Model.predict_generator", + "tf.compat.v1.keras.models.Sequential.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.compat.v1.keras.models.Sequential.predict_proba": "tf.keras.Sequential.predict_proba", + "tf.compat.v1.keras.models.Sequential.predict_step": "tf.keras.Model.predict_step", + "tf.compat.v1.keras.models.Sequential.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.compat.v1.keras.models.Sequential.reset_states": "tf.keras.Model.reset_states", + "tf.compat.v1.keras.models.Sequential.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.compat.v1.keras.models.Sequential.save": "tf.keras.Model.save", + "tf.compat.v1.keras.models.Sequential.save_weights": "tf.keras.Model.save_weights", + "tf.compat.v1.keras.models.Sequential.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.keras.models.Sequential.state_updates": "tf.keras.Model.state_updates", + "tf.compat.v1.keras.models.Sequential.stateful": "tf.keras.Model.stateful", + "tf.compat.v1.keras.models.Sequential.submodules": "tf.Module.submodules", + "tf.compat.v1.keras.models.Sequential.summary": "tf.keras.Model.summary", + "tf.compat.v1.keras.models.Sequential.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.compat.v1.keras.models.Sequential.test_step": "tf.keras.Model.test_step", + "tf.compat.v1.keras.models.Sequential.to_json": "tf.keras.Model.to_json", + "tf.compat.v1.keras.models.Sequential.to_yaml": "tf.keras.Model.to_yaml", + "tf.compat.v1.keras.models.Sequential.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.compat.v1.keras.models.Sequential.train_step": "tf.keras.Model.train_step", + "tf.compat.v1.keras.models.Sequential.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.keras.models.Sequential.trainable_weights": "tf.keras.Model.trainable_weights", + "tf.compat.v1.keras.models.Sequential.weights": "tf.keras.Model.weights", + "tf.compat.v1.keras.models.clone_model": "tf.keras.models.clone_model", + "tf.compat.v1.keras.models.load_model": "tf.keras.models.load_model", + "tf.compat.v1.keras.models.model_from_config": "tf.keras.models.model_from_config", + "tf.compat.v1.keras.models.model_from_json": "tf.keras.models.model_from_json", + "tf.compat.v1.keras.models.model_from_yaml": "tf.keras.models.model_from_yaml", + "tf.compat.v1.keras.models.save_model": "tf.keras.models.save_model", + "tf.compat.v1.keras.optimizers.Adadelta": "tf.keras.optimizers.Adadelta", + "tf.compat.v1.keras.optimizers.Adadelta.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.Adadelta.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.Adadelta.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.Adadelta.__init__": "tf.keras.optimizers.Adadelta.__init__", + "tf.compat.v1.keras.optimizers.Adadelta.__le__": 
"tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.Adadelta.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.Adadelta.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.Adadelta.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.Adadelta.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.compat.v1.keras.optimizers.Adadelta.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.compat.v1.keras.optimizers.Adadelta.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.compat.v1.keras.optimizers.Adadelta.get_config": "tf.keras.optimizers.Adadelta.get_config", + "tf.compat.v1.keras.optimizers.Adadelta.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.compat.v1.keras.optimizers.Adadelta.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.compat.v1.keras.optimizers.Adadelta.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.compat.v1.keras.optimizers.Adadelta.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.compat.v1.keras.optimizers.Adadelta.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.compat.v1.keras.optimizers.Adadelta.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.compat.v1.keras.optimizers.Adadelta.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.compat.v1.keras.optimizers.Adadelta.set_weights": "tf.keras.optimizers.Adadelta.set_weights", + "tf.compat.v1.keras.optimizers.Adadelta.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.compat.v1.keras.optimizers.Adadelta.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.compat.v1.keras.optimizers.Adagrad": "tf.keras.optimizers.Adagrad", + "tf.compat.v1.keras.optimizers.Adagrad.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.Adagrad.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.Adagrad.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.Adagrad.__init__": "tf.keras.optimizers.Adagrad.__init__", + "tf.compat.v1.keras.optimizers.Adagrad.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.Adagrad.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.Adagrad.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.Adagrad.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.Adagrad.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.compat.v1.keras.optimizers.Adagrad.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.compat.v1.keras.optimizers.Adagrad.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.compat.v1.keras.optimizers.Adagrad.get_config": "tf.keras.optimizers.Adagrad.get_config", + "tf.compat.v1.keras.optimizers.Adagrad.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.compat.v1.keras.optimizers.Adagrad.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.compat.v1.keras.optimizers.Adagrad.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.compat.v1.keras.optimizers.Adagrad.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.compat.v1.keras.optimizers.Adagrad.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.compat.v1.keras.optimizers.Adagrad.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.compat.v1.keras.optimizers.Adagrad.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.compat.v1.keras.optimizers.Adagrad.set_weights": 
"tf.keras.optimizers.Adagrad.set_weights", + "tf.compat.v1.keras.optimizers.Adagrad.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.compat.v1.keras.optimizers.Adagrad.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.compat.v1.keras.optimizers.Adam": "tf.keras.optimizers.Adam", + "tf.compat.v1.keras.optimizers.Adam.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.Adam.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.Adam.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.Adam.__init__": "tf.keras.optimizers.Adam.__init__", + "tf.compat.v1.keras.optimizers.Adam.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.Adam.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.Adam.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.Adam.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.Adam.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.compat.v1.keras.optimizers.Adam.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.compat.v1.keras.optimizers.Adam.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.compat.v1.keras.optimizers.Adam.get_config": "tf.keras.optimizers.Adam.get_config", + "tf.compat.v1.keras.optimizers.Adam.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.compat.v1.keras.optimizers.Adam.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.compat.v1.keras.optimizers.Adam.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.compat.v1.keras.optimizers.Adam.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.compat.v1.keras.optimizers.Adam.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.compat.v1.keras.optimizers.Adam.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.compat.v1.keras.optimizers.Adam.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.compat.v1.keras.optimizers.Adam.set_weights": "tf.keras.optimizers.Adam.set_weights", + "tf.compat.v1.keras.optimizers.Adam.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.compat.v1.keras.optimizers.Adam.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.compat.v1.keras.optimizers.Adamax": "tf.keras.optimizers.Adamax", + "tf.compat.v1.keras.optimizers.Adamax.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.Adamax.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.Adamax.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.Adamax.__init__": "tf.keras.optimizers.Adamax.__init__", + "tf.compat.v1.keras.optimizers.Adamax.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.Adamax.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.Adamax.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.Adamax.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.Adamax.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.compat.v1.keras.optimizers.Adamax.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.compat.v1.keras.optimizers.Adamax.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.compat.v1.keras.optimizers.Adamax.get_config": "tf.keras.optimizers.Adamax.get_config", + "tf.compat.v1.keras.optimizers.Adamax.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.compat.v1.keras.optimizers.Adamax.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + 
"tf.compat.v1.keras.optimizers.Adamax.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.compat.v1.keras.optimizers.Adamax.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.compat.v1.keras.optimizers.Adamax.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.compat.v1.keras.optimizers.Adamax.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.compat.v1.keras.optimizers.Adamax.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.compat.v1.keras.optimizers.Adamax.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.compat.v1.keras.optimizers.Adamax.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.compat.v1.keras.optimizers.Adamax.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.compat.v1.keras.optimizers.Ftrl": "tf.keras.optimizers.Ftrl", + "tf.compat.v1.keras.optimizers.Ftrl.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.Ftrl.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.Ftrl.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.Ftrl.__init__": "tf.keras.optimizers.Ftrl.__init__", + "tf.compat.v1.keras.optimizers.Ftrl.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.Ftrl.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.Ftrl.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.Ftrl.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.Ftrl.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.compat.v1.keras.optimizers.Ftrl.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.compat.v1.keras.optimizers.Ftrl.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.compat.v1.keras.optimizers.Ftrl.get_config": "tf.keras.optimizers.Ftrl.get_config", + "tf.compat.v1.keras.optimizers.Ftrl.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.compat.v1.keras.optimizers.Ftrl.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.compat.v1.keras.optimizers.Ftrl.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.compat.v1.keras.optimizers.Ftrl.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.compat.v1.keras.optimizers.Ftrl.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.compat.v1.keras.optimizers.Ftrl.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.compat.v1.keras.optimizers.Ftrl.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.compat.v1.keras.optimizers.Ftrl.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.compat.v1.keras.optimizers.Ftrl.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.compat.v1.keras.optimizers.Ftrl.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.compat.v1.keras.optimizers.Nadam": "tf.keras.optimizers.Nadam", + "tf.compat.v1.keras.optimizers.Nadam.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.Nadam.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.Nadam.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.Nadam.__init__": "tf.keras.optimizers.Nadam.__init__", + "tf.compat.v1.keras.optimizers.Nadam.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.Nadam.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.Nadam.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.Nadam.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.Nadam.add_slot": 
"tf.keras.optimizers.Optimizer.add_slot", + "tf.compat.v1.keras.optimizers.Nadam.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.compat.v1.keras.optimizers.Nadam.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.compat.v1.keras.optimizers.Nadam.get_config": "tf.keras.optimizers.Nadam.get_config", + "tf.compat.v1.keras.optimizers.Nadam.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.compat.v1.keras.optimizers.Nadam.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.compat.v1.keras.optimizers.Nadam.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.compat.v1.keras.optimizers.Nadam.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.compat.v1.keras.optimizers.Nadam.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.compat.v1.keras.optimizers.Nadam.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.compat.v1.keras.optimizers.Nadam.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.compat.v1.keras.optimizers.Nadam.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.compat.v1.keras.optimizers.Nadam.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.compat.v1.keras.optimizers.Nadam.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.compat.v1.keras.optimizers.Optimizer": "tf.keras.optimizers.Optimizer", + "tf.compat.v1.keras.optimizers.Optimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.Optimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.Optimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.Optimizer.__init__": "tf.keras.optimizers.Optimizer.__init__", + "tf.compat.v1.keras.optimizers.Optimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.Optimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.Optimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.Optimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.Optimizer.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.compat.v1.keras.optimizers.Optimizer.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.compat.v1.keras.optimizers.Optimizer.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.compat.v1.keras.optimizers.Optimizer.get_config": "tf.keras.optimizers.Optimizer.get_config", + "tf.compat.v1.keras.optimizers.Optimizer.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.compat.v1.keras.optimizers.Optimizer.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.compat.v1.keras.optimizers.Optimizer.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.compat.v1.keras.optimizers.Optimizer.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.compat.v1.keras.optimizers.Optimizer.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.compat.v1.keras.optimizers.Optimizer.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.compat.v1.keras.optimizers.Optimizer.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.compat.v1.keras.optimizers.Optimizer.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.compat.v1.keras.optimizers.Optimizer.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.compat.v1.keras.optimizers.Optimizer.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.compat.v1.keras.optimizers.RMSprop": "tf.keras.optimizers.RMSprop", + 
"tf.compat.v1.keras.optimizers.RMSprop.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.RMSprop.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.RMSprop.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.RMSprop.__init__": "tf.keras.optimizers.RMSprop.__init__", + "tf.compat.v1.keras.optimizers.RMSprop.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.RMSprop.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.RMSprop.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.RMSprop.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.RMSprop.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.compat.v1.keras.optimizers.RMSprop.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.compat.v1.keras.optimizers.RMSprop.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.compat.v1.keras.optimizers.RMSprop.get_config": "tf.keras.optimizers.RMSprop.get_config", + "tf.compat.v1.keras.optimizers.RMSprop.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.compat.v1.keras.optimizers.RMSprop.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.compat.v1.keras.optimizers.RMSprop.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.compat.v1.keras.optimizers.RMSprop.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.compat.v1.keras.optimizers.RMSprop.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.compat.v1.keras.optimizers.RMSprop.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.compat.v1.keras.optimizers.RMSprop.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.compat.v1.keras.optimizers.RMSprop.set_weights": "tf.keras.optimizers.RMSprop.set_weights", + "tf.compat.v1.keras.optimizers.RMSprop.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.compat.v1.keras.optimizers.RMSprop.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.compat.v1.keras.optimizers.SGD": "tf.keras.optimizers.SGD", + "tf.compat.v1.keras.optimizers.SGD.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.SGD.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.SGD.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.SGD.__init__": "tf.keras.optimizers.SGD.__init__", + "tf.compat.v1.keras.optimizers.SGD.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.SGD.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.SGD.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.SGD.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.SGD.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.compat.v1.keras.optimizers.SGD.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.compat.v1.keras.optimizers.SGD.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.compat.v1.keras.optimizers.SGD.get_config": "tf.keras.optimizers.SGD.get_config", + "tf.compat.v1.keras.optimizers.SGD.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.compat.v1.keras.optimizers.SGD.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.compat.v1.keras.optimizers.SGD.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.compat.v1.keras.optimizers.SGD.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.compat.v1.keras.optimizers.SGD.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + 
"tf.compat.v1.keras.optimizers.SGD.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.compat.v1.keras.optimizers.SGD.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.compat.v1.keras.optimizers.SGD.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.compat.v1.keras.optimizers.SGD.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.compat.v1.keras.optimizers.SGD.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.compat.v1.keras.optimizers.deserialize": "tf.keras.optimizers.deserialize", + "tf.compat.v1.keras.optimizers.get": "tf.keras.optimizers.get", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay": "tf.keras.optimizers.schedules.ExponentialDecay", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__call__": "tf.keras.optimizers.schedules.ExponentialDecay.__call__", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__init__": "tf.keras.optimizers.schedules.ExponentialDecay.__init__", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.get_config": "tf.keras.optimizers.schedules.ExponentialDecay.get_config", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay": "tf.keras.optimizers.schedules.InverseTimeDecay", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__call__": "tf.keras.optimizers.schedules.InverseTimeDecay.__call__", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__init__": "tf.keras.optimizers.schedules.InverseTimeDecay.__init__", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.get_config": "tf.keras.optimizers.schedules.InverseTimeDecay.get_config", + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule": "tf.keras.optimizers.schedules.LearningRateSchedule", + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__call__": "tf.keras.optimizers.schedules.LearningRateSchedule.__call__", + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__gt__": "tf.keras.Model.__gt__", + 
"tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.get_config": "tf.keras.optimizers.schedules.LearningRateSchedule.get_config", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay": "tf.keras.optimizers.schedules.PiecewiseConstantDecay", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__call__": "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__call__", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__init__": "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__init__", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.get_config": "tf.keras.optimizers.schedules.PiecewiseConstantDecay.get_config", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay": "tf.keras.optimizers.schedules.PolynomialDecay", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__call__": "tf.keras.optimizers.schedules.PolynomialDecay.__call__", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__init__": "tf.keras.optimizers.schedules.PolynomialDecay.__init__", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.get_config": "tf.keras.optimizers.schedules.PolynomialDecay.get_config", + "tf.compat.v1.keras.optimizers.schedules.deserialize": "tf.keras.optimizers.schedules.deserialize", + "tf.compat.v1.keras.optimizers.schedules.serialize": "tf.keras.optimizers.schedules.serialize", + "tf.compat.v1.keras.optimizers.serialize": "tf.keras.optimizers.serialize", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator": "tf.keras.preprocessing.image.DirectoryIterator", + 
"tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__getitem__": "tf.keras.preprocessing.image.DirectoryIterator.__getitem__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__init__": "tf.keras.preprocessing.image.DirectoryIterator.__init__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__iter__": "tf.keras.preprocessing.image.DirectoryIterator.__iter__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__len__": "tf.keras.preprocessing.image.DirectoryIterator.__len__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__new__": "tf.keras.preprocessing.image.DirectoryIterator.__new__", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.allowed_class_modes": "tf.keras.preprocessing.image.DirectoryIterator.allowed_class_modes", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.filepaths": "tf.keras.preprocessing.image.DirectoryIterator.filepaths", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.labels": "tf.keras.preprocessing.image.DirectoryIterator.labels", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.next": "tf.keras.preprocessing.image.DirectoryIterator.next", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.on_epoch_end": "tf.keras.preprocessing.image.DirectoryIterator.on_epoch_end", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.reset": "tf.keras.preprocessing.image.DirectoryIterator.reset", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.sample_weight": "tf.keras.preprocessing.image.DirectoryIterator.sample_weight", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.set_processing_attrs": "tf.keras.preprocessing.image.DirectoryIterator.set_processing_attrs", + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.white_list_formats": "tf.keras.preprocessing.image.DirectoryIterator.white_list_formats", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator": "tf.keras.preprocessing.image.ImageDataGenerator", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__init__": "tf.keras.preprocessing.image.ImageDataGenerator.__init__", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.apply_transform": "tf.keras.preprocessing.image.ImageDataGenerator.apply_transform", + 
"tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.fit": "tf.keras.preprocessing.image.ImageDataGenerator.fit", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.flow": "tf.keras.preprocessing.image.ImageDataGenerator.flow", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.flow_from_dataframe": "tf.keras.preprocessing.image.ImageDataGenerator.flow_from_dataframe", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.flow_from_directory": "tf.keras.preprocessing.image.ImageDataGenerator.flow_from_directory", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.get_random_transform": "tf.keras.preprocessing.image.ImageDataGenerator.get_random_transform", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.random_transform": "tf.keras.preprocessing.image.ImageDataGenerator.random_transform", + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.standardize": "tf.keras.preprocessing.image.ImageDataGenerator.standardize", + "tf.compat.v1.keras.preprocessing.image.Iterator": "tf.keras.preprocessing.image.Iterator", + "tf.compat.v1.keras.preprocessing.image.Iterator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.preprocessing.image.Iterator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.preprocessing.image.Iterator.__getitem__": "tf.keras.preprocessing.image.DirectoryIterator.__getitem__", + "tf.compat.v1.keras.preprocessing.image.Iterator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.preprocessing.image.Iterator.__init__": "tf.keras.preprocessing.image.Iterator.__init__", + "tf.compat.v1.keras.preprocessing.image.Iterator.__iter__": "tf.keras.preprocessing.image.DirectoryIterator.__iter__", + "tf.compat.v1.keras.preprocessing.image.Iterator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.preprocessing.image.Iterator.__len__": "tf.keras.preprocessing.image.DirectoryIterator.__len__", + "tf.compat.v1.keras.preprocessing.image.Iterator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.preprocessing.image.Iterator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.preprocessing.image.Iterator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.preprocessing.image.Iterator.next": "tf.keras.preprocessing.image.DirectoryIterator.next", + "tf.compat.v1.keras.preprocessing.image.Iterator.on_epoch_end": "tf.keras.preprocessing.image.DirectoryIterator.on_epoch_end", + "tf.compat.v1.keras.preprocessing.image.Iterator.reset": "tf.keras.preprocessing.image.DirectoryIterator.reset", + "tf.compat.v1.keras.preprocessing.image.Iterator.white_list_formats": "tf.keras.preprocessing.image.DirectoryIterator.white_list_formats", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator": "tf.keras.preprocessing.image.NumpyArrayIterator", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__getitem__": "tf.keras.preprocessing.image.DirectoryIterator.__getitem__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__init__": "tf.keras.preprocessing.image.NumpyArrayIterator.__init__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__iter__": "tf.keras.preprocessing.image.DirectoryIterator.__iter__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__le__": 
"tf.keras.Model.__le__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__len__": "tf.keras.preprocessing.image.DirectoryIterator.__len__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__new__": "tf.keras.preprocessing.image.NumpyArrayIterator.__new__", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.next": "tf.keras.preprocessing.image.DirectoryIterator.next", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.on_epoch_end": "tf.keras.preprocessing.image.DirectoryIterator.on_epoch_end", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.reset": "tf.keras.preprocessing.image.DirectoryIterator.reset", + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.white_list_formats": "tf.keras.preprocessing.image.DirectoryIterator.white_list_formats", + "tf.compat.v1.keras.preprocessing.image.apply_affine_transform": "tf.keras.preprocessing.image.apply_affine_transform", + "tf.compat.v1.keras.preprocessing.image.apply_brightness_shift": "tf.keras.preprocessing.image.apply_brightness_shift", + "tf.compat.v1.keras.preprocessing.image.apply_channel_shift": "tf.keras.preprocessing.image.apply_channel_shift", + "tf.compat.v1.keras.preprocessing.image.array_to_img": "tf.keras.preprocessing.image.array_to_img", + "tf.compat.v1.keras.preprocessing.image.img_to_array": "tf.keras.preprocessing.image.img_to_array", + "tf.compat.v1.keras.preprocessing.image.load_img": "tf.keras.preprocessing.image.load_img", + "tf.compat.v1.keras.preprocessing.image.random_brightness": "tf.keras.preprocessing.image.random_brightness", + "tf.compat.v1.keras.preprocessing.image.random_channel_shift": "tf.keras.preprocessing.image.random_channel_shift", + "tf.compat.v1.keras.preprocessing.image.random_rotation": "tf.keras.preprocessing.image.random_rotation", + "tf.compat.v1.keras.preprocessing.image.random_shear": "tf.keras.preprocessing.image.random_shear", + "tf.compat.v1.keras.preprocessing.image.random_shift": "tf.keras.preprocessing.image.random_shift", + "tf.compat.v1.keras.preprocessing.image.random_zoom": "tf.keras.preprocessing.image.random_zoom", + "tf.compat.v1.keras.preprocessing.image.save_img": "tf.keras.preprocessing.image.save_img", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator": "tf.keras.preprocessing.sequence.TimeseriesGenerator", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__getitem__": "tf.keras.preprocessing.sequence.TimeseriesGenerator.__getitem__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__init__": "tf.keras.preprocessing.sequence.TimeseriesGenerator.__init__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__iter__": "tf.keras.utils.Sequence.__iter__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__len__": "tf.keras.preprocessing.sequence.TimeseriesGenerator.__len__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__lt__": 
"tf.keras.Model.__lt__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.get_config": "tf.keras.preprocessing.sequence.TimeseriesGenerator.get_config", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.on_epoch_end": "tf.keras.utils.Sequence.on_epoch_end", + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.to_json": "tf.keras.preprocessing.sequence.TimeseriesGenerator.to_json", + "tf.compat.v1.keras.preprocessing.sequence.make_sampling_table": "tf.keras.preprocessing.sequence.make_sampling_table", + "tf.compat.v1.keras.preprocessing.sequence.pad_sequences": "tf.keras.preprocessing.sequence.pad_sequences", + "tf.compat.v1.keras.preprocessing.sequence.skipgrams": "tf.keras.preprocessing.sequence.skipgrams", + "tf.compat.v1.keras.preprocessing.text.Tokenizer": "tf.keras.preprocessing.text.Tokenizer", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__init__": "tf.keras.preprocessing.text.Tokenizer.__init__", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.fit_on_sequences": "tf.keras.preprocessing.text.Tokenizer.fit_on_sequences", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.fit_on_texts": "tf.keras.preprocessing.text.Tokenizer.fit_on_texts", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.get_config": "tf.keras.preprocessing.text.Tokenizer.get_config", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.sequences_to_matrix": "tf.keras.preprocessing.text.Tokenizer.sequences_to_matrix", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.sequences_to_texts": "tf.keras.preprocessing.text.Tokenizer.sequences_to_texts", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.sequences_to_texts_generator": "tf.keras.preprocessing.text.Tokenizer.sequences_to_texts_generator", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.texts_to_matrix": "tf.keras.preprocessing.text.Tokenizer.texts_to_matrix", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.texts_to_sequences": "tf.keras.preprocessing.text.Tokenizer.texts_to_sequences", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.texts_to_sequences_generator": "tf.keras.preprocessing.text.Tokenizer.texts_to_sequences_generator", + "tf.compat.v1.keras.preprocessing.text.Tokenizer.to_json": "tf.keras.preprocessing.text.Tokenizer.to_json", + "tf.compat.v1.keras.preprocessing.text.hashing_trick": "tf.keras.preprocessing.text.hashing_trick", + "tf.compat.v1.keras.preprocessing.text.one_hot": "tf.keras.preprocessing.text.one_hot", + "tf.compat.v1.keras.preprocessing.text.text_to_word_sequence": "tf.keras.preprocessing.text.text_to_word_sequence", + "tf.compat.v1.keras.preprocessing.text.tokenizer_from_json": "tf.keras.preprocessing.text.tokenizer_from_json", + 
"tf.compat.v1.keras.regularizers.L1L2": "tf.keras.regularizers.L1L2", + "tf.compat.v1.keras.regularizers.L1L2.__call__": "tf.keras.regularizers.L1L2.__call__", + "tf.compat.v1.keras.regularizers.L1L2.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.regularizers.L1L2.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.regularizers.L1L2.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.regularizers.L1L2.__init__": "tf.keras.regularizers.L1L2.__init__", + "tf.compat.v1.keras.regularizers.L1L2.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.regularizers.L1L2.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.regularizers.L1L2.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.regularizers.L1L2.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.regularizers.L1L2.get_config": "tf.keras.regularizers.L1L2.get_config", + "tf.compat.v1.keras.regularizers.Regularizer": "tf.keras.regularizers.Regularizer", + "tf.compat.v1.keras.regularizers.Regularizer.__call__": "tf.keras.regularizers.Regularizer.__call__", + "tf.compat.v1.keras.regularizers.Regularizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.regularizers.Regularizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.regularizers.Regularizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.regularizers.Regularizer.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.keras.regularizers.Regularizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.regularizers.Regularizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.regularizers.Regularizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.regularizers.Regularizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.regularizers.Regularizer.get_config": "tf.keras.regularizers.Regularizer.get_config", + "tf.compat.v1.keras.regularizers.deserialize": "tf.keras.regularizers.deserialize", + "tf.compat.v1.keras.regularizers.get": "tf.keras.regularizers.get", + "tf.compat.v1.keras.regularizers.l1": "tf.keras.regularizers.l1", + "tf.compat.v1.keras.regularizers.l1_l2": "tf.keras.regularizers.l1_l2", + "tf.compat.v1.keras.regularizers.l2": "tf.keras.regularizers.l2", + "tf.compat.v1.keras.regularizers.serialize": "tf.keras.regularizers.serialize", + "tf.compat.v1.keras.utils.CustomObjectScope": "tf.keras.utils.CustomObjectScope", + "tf.compat.v1.keras.utils.CustomObjectScope.__enter__": "tf.keras.utils.CustomObjectScope.__enter__", + "tf.compat.v1.keras.utils.CustomObjectScope.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.utils.CustomObjectScope.__exit__": "tf.keras.utils.CustomObjectScope.__exit__", + "tf.compat.v1.keras.utils.CustomObjectScope.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.utils.CustomObjectScope.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.utils.CustomObjectScope.__init__": "tf.keras.utils.CustomObjectScope.__init__", + "tf.compat.v1.keras.utils.CustomObjectScope.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.utils.CustomObjectScope.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.utils.CustomObjectScope.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.utils.CustomObjectScope.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.utils.GeneratorEnqueuer": "tf.keras.utils.GeneratorEnqueuer", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.keras.utils.GeneratorEnqueuer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__init__": "tf.keras.utils.GeneratorEnqueuer.__init__", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.get": "tf.keras.utils.GeneratorEnqueuer.get", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.is_running": "tf.keras.utils.SequenceEnqueuer.is_running", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.start": "tf.keras.utils.SequenceEnqueuer.start", + "tf.compat.v1.keras.utils.GeneratorEnqueuer.stop": "tf.keras.utils.SequenceEnqueuer.stop", + "tf.compat.v1.keras.utils.HDF5Matrix": "tf.keras.utils.HDF5Matrix", + "tf.compat.v1.keras.utils.HDF5Matrix.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.utils.HDF5Matrix.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.utils.HDF5Matrix.__getitem__": "tf.keras.utils.HDF5Matrix.__getitem__", + "tf.compat.v1.keras.utils.HDF5Matrix.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.utils.HDF5Matrix.__init__": "tf.keras.utils.HDF5Matrix.__init__", + "tf.compat.v1.keras.utils.HDF5Matrix.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.utils.HDF5Matrix.__len__": "tf.keras.utils.HDF5Matrix.__len__", + "tf.compat.v1.keras.utils.HDF5Matrix.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.utils.HDF5Matrix.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.utils.HDF5Matrix.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.utils.HDF5Matrix.dtype": "tf.keras.utils.HDF5Matrix.dtype", + "tf.compat.v1.keras.utils.HDF5Matrix.ndim": "tf.keras.utils.HDF5Matrix.ndim", + "tf.compat.v1.keras.utils.HDF5Matrix.refs": "tf.keras.utils.HDF5Matrix.refs", + "tf.compat.v1.keras.utils.HDF5Matrix.shape": "tf.keras.utils.HDF5Matrix.shape", + "tf.compat.v1.keras.utils.HDF5Matrix.size": "tf.keras.utils.HDF5Matrix.size", + "tf.compat.v1.keras.utils.OrderedEnqueuer": "tf.keras.utils.OrderedEnqueuer", + "tf.compat.v1.keras.utils.OrderedEnqueuer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.utils.OrderedEnqueuer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.utils.OrderedEnqueuer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.utils.OrderedEnqueuer.__init__": "tf.keras.utils.OrderedEnqueuer.__init__", + "tf.compat.v1.keras.utils.OrderedEnqueuer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.utils.OrderedEnqueuer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.utils.OrderedEnqueuer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.utils.OrderedEnqueuer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.utils.OrderedEnqueuer.get": "tf.keras.utils.OrderedEnqueuer.get", + "tf.compat.v1.keras.utils.OrderedEnqueuer.is_running": "tf.keras.utils.SequenceEnqueuer.is_running", + "tf.compat.v1.keras.utils.OrderedEnqueuer.start": "tf.keras.utils.SequenceEnqueuer.start", + "tf.compat.v1.keras.utils.OrderedEnqueuer.stop": "tf.keras.utils.SequenceEnqueuer.stop", + "tf.compat.v1.keras.utils.Progbar": "tf.keras.utils.Progbar", + "tf.compat.v1.keras.utils.Progbar.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.utils.Progbar.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.utils.Progbar.__gt__": 
"tf.keras.Model.__gt__", + "tf.compat.v1.keras.utils.Progbar.__init__": "tf.keras.utils.Progbar.__init__", + "tf.compat.v1.keras.utils.Progbar.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.utils.Progbar.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.utils.Progbar.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.utils.Progbar.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.utils.Progbar.add": "tf.keras.utils.Progbar.add", + "tf.compat.v1.keras.utils.Progbar.update": "tf.keras.utils.Progbar.update", + "tf.compat.v1.keras.utils.Sequence": "tf.keras.utils.Sequence", + "tf.compat.v1.keras.utils.Sequence.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.utils.Sequence.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.utils.Sequence.__getitem__": "tf.keras.utils.Sequence.__getitem__", + "tf.compat.v1.keras.utils.Sequence.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.utils.Sequence.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.keras.utils.Sequence.__iter__": "tf.keras.utils.Sequence.__iter__", + "tf.compat.v1.keras.utils.Sequence.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.utils.Sequence.__len__": "tf.keras.utils.Sequence.__len__", + "tf.compat.v1.keras.utils.Sequence.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.utils.Sequence.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.utils.Sequence.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.utils.Sequence.on_epoch_end": "tf.keras.utils.Sequence.on_epoch_end", + "tf.compat.v1.keras.utils.SequenceEnqueuer": "tf.keras.utils.SequenceEnqueuer", + "tf.compat.v1.keras.utils.SequenceEnqueuer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.utils.SequenceEnqueuer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.utils.SequenceEnqueuer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.utils.SequenceEnqueuer.__init__": "tf.keras.utils.SequenceEnqueuer.__init__", + "tf.compat.v1.keras.utils.SequenceEnqueuer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.utils.SequenceEnqueuer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.utils.SequenceEnqueuer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.utils.SequenceEnqueuer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.utils.SequenceEnqueuer.get": "tf.keras.utils.SequenceEnqueuer.get", + "tf.compat.v1.keras.utils.SequenceEnqueuer.is_running": "tf.keras.utils.SequenceEnqueuer.is_running", + "tf.compat.v1.keras.utils.SequenceEnqueuer.start": "tf.keras.utils.SequenceEnqueuer.start", + "tf.compat.v1.keras.utils.SequenceEnqueuer.stop": "tf.keras.utils.SequenceEnqueuer.stop", + "tf.compat.v1.keras.utils.convert_all_kernels_in_model": "tf.keras.utils.convert_all_kernels_in_model", + "tf.compat.v1.keras.utils.custom_object_scope": "tf.keras.utils.custom_object_scope", + "tf.compat.v1.keras.utils.deserialize_keras_object": "tf.keras.utils.deserialize_keras_object", + "tf.compat.v1.keras.utils.get_custom_objects": "tf.keras.utils.get_custom_objects", + "tf.compat.v1.keras.utils.get_file": "tf.keras.utils.get_file", + "tf.compat.v1.keras.utils.get_registered_name": "tf.keras.utils.get_registered_name", + "tf.compat.v1.keras.utils.get_registered_object": "tf.keras.utils.get_registered_object", + "tf.compat.v1.keras.utils.get_source_inputs": "tf.keras.utils.get_source_inputs", + "tf.compat.v1.keras.utils.model_to_dot": "tf.keras.utils.model_to_dot", + "tf.compat.v1.keras.utils.multi_gpu_model": 
"tf.keras.utils.multi_gpu_model", + "tf.compat.v1.keras.utils.normalize": "tf.keras.utils.normalize", + "tf.compat.v1.keras.utils.plot_model": "tf.keras.utils.plot_model", + "tf.compat.v1.keras.utils.register_keras_serializable": "tf.keras.utils.register_keras_serializable", + "tf.compat.v1.keras.utils.serialize_keras_object": "tf.keras.utils.serialize_keras_object", + "tf.compat.v1.keras.utils.to_categorical": "tf.keras.utils.to_categorical", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier": "tf.keras.wrappers.scikit_learn.KerasClassifier", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__init__": "tf.keras.wrappers.scikit_learn.KerasClassifier.__init__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.check_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.check_params", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.filter_sk_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.filter_sk_params", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.fit": "tf.keras.wrappers.scikit_learn.KerasClassifier.fit", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.get_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.get_params", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.predict": "tf.keras.wrappers.scikit_learn.KerasClassifier.predict", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.predict_proba": "tf.keras.wrappers.scikit_learn.KerasClassifier.predict_proba", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.score": "tf.keras.wrappers.scikit_learn.KerasClassifier.score", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.set_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.set_params", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor": "tf.keras.wrappers.scikit_learn.KerasRegressor", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__init__": "tf.keras.wrappers.scikit_learn.KerasClassifier.__init__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.check_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.check_params", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.filter_sk_params": 
"tf.keras.wrappers.scikit_learn.KerasClassifier.filter_sk_params", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.fit": "tf.keras.wrappers.scikit_learn.KerasRegressor.fit", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.get_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.get_params", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.predict": "tf.keras.wrappers.scikit_learn.KerasRegressor.predict", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.score": "tf.keras.wrappers.scikit_learn.KerasRegressor.score", + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.set_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.set_params", + "tf.compat.v1.layers.AveragePooling1D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.AveragePooling1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.AveragePooling1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.AveragePooling1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.AveragePooling1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.AveragePooling1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.AveragePooling1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.AveragePooling1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.AveragePooling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.AveragePooling1D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.AveragePooling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.AveragePooling1D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.AveragePooling1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.layers.AveragePooling1D.call": "tf.keras.layers.AveragePooling1D.call", + "tf.compat.v1.layers.AveragePooling1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.AveragePooling1D.compute_output_shape": "tf.keras.layers.AveragePooling1D.compute_output_shape", + "tf.compat.v1.layers.AveragePooling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.AveragePooling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.AveragePooling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.AveragePooling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.AveragePooling1D.get_config": "tf.keras.layers.AveragePooling1D.get_config", + "tf.compat.v1.layers.AveragePooling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.AveragePooling1D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.AveragePooling1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.AveragePooling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.AveragePooling1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.AveragePooling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.AveragePooling1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.AveragePooling1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.AveragePooling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.AveragePooling1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.AveragePooling1D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.AveragePooling1D.set_weights": 
"tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.AveragePooling1D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.AveragePooling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.AveragePooling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.AveragePooling1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.AveragePooling2D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.AveragePooling2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.AveragePooling2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.AveragePooling2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.AveragePooling2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.AveragePooling2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.AveragePooling2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.AveragePooling2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.AveragePooling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.AveragePooling2D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.AveragePooling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.AveragePooling2D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.AveragePooling2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.layers.AveragePooling2D.call": "tf.keras.layers.AveragePooling2D.call", + "tf.compat.v1.layers.AveragePooling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.AveragePooling2D.compute_output_shape": "tf.keras.layers.AveragePooling2D.compute_output_shape", + "tf.compat.v1.layers.AveragePooling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.AveragePooling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.AveragePooling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.AveragePooling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.AveragePooling2D.get_config": "tf.keras.layers.AveragePooling2D.get_config", + "tf.compat.v1.layers.AveragePooling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.AveragePooling2D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.AveragePooling2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.AveragePooling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.AveragePooling2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.AveragePooling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.AveragePooling2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.AveragePooling2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.AveragePooling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.AveragePooling2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.AveragePooling2D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.AveragePooling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.AveragePooling2D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.AveragePooling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.AveragePooling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + 
"tf.compat.v1.layers.AveragePooling2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.AveragePooling3D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.AveragePooling3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.AveragePooling3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.AveragePooling3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.AveragePooling3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.AveragePooling3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.AveragePooling3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.AveragePooling3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.AveragePooling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.AveragePooling3D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.AveragePooling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.AveragePooling3D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.AveragePooling3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.layers.AveragePooling3D.call": "tf.keras.layers.AveragePooling3D.call", + "tf.compat.v1.layers.AveragePooling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.AveragePooling3D.compute_output_shape": "tf.keras.layers.AveragePooling3D.compute_output_shape", + "tf.compat.v1.layers.AveragePooling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.AveragePooling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.AveragePooling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.AveragePooling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.AveragePooling3D.get_config": "tf.keras.layers.AveragePooling3D.get_config", + "tf.compat.v1.layers.AveragePooling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.AveragePooling3D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.AveragePooling3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.AveragePooling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.AveragePooling3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.AveragePooling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.AveragePooling3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.AveragePooling3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.AveragePooling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.AveragePooling3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.AveragePooling3D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.AveragePooling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.AveragePooling3D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.AveragePooling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.AveragePooling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.AveragePooling3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.BatchNormalization.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.BatchNormalization.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.BatchNormalization.__ge__": "tf.keras.Model.__ge__", + 
"tf.compat.v1.layers.BatchNormalization.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.BatchNormalization.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.BatchNormalization.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.BatchNormalization.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.BatchNormalization.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.BatchNormalization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.BatchNormalization.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.BatchNormalization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.BatchNormalization.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.BatchNormalization.build": "tf.keras.layers.BatchNormalization.build", + "tf.compat.v1.layers.BatchNormalization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.BatchNormalization.compute_output_shape": "tf.keras.layers.BatchNormalization.compute_output_shape", + "tf.compat.v1.layers.BatchNormalization.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.BatchNormalization.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.BatchNormalization.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.BatchNormalization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.BatchNormalization.get_config": "tf.keras.layers.BatchNormalization.get_config", + "tf.compat.v1.layers.BatchNormalization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.BatchNormalization.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.BatchNormalization.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.BatchNormalization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.BatchNormalization.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.BatchNormalization.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.BatchNormalization.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.BatchNormalization.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.BatchNormalization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.BatchNormalization.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.BatchNormalization.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.BatchNormalization.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.BatchNormalization.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.BatchNormalization.trainable": "tf.keras.layers.BatchNormalization.trainable", + "tf.compat.v1.layers.BatchNormalization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.BatchNormalization.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.Conv1D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.Conv1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.Conv1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.Conv1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.Conv1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.Conv1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.Conv1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.Conv1D.__new__": "tf.keras.Model.__new__", + 
"tf.compat.v1.layers.Conv1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.Conv1D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.Conv1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.Conv1D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.Conv1D.build": "tf.keras.layers.Conv1D.build", + "tf.compat.v1.layers.Conv1D.call": "tf.keras.layers.Conv1D.call", + "tf.compat.v1.layers.Conv1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.Conv1D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.layers.Conv1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.Conv1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.Conv1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.Conv1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.Conv1D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.compat.v1.layers.Conv1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.Conv1D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.Conv1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.Conv1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.Conv1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.Conv1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.Conv1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.Conv1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.Conv1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.Conv1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.Conv1D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.Conv1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.Conv1D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.Conv1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.Conv1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.Conv1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.Conv2D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.Conv2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.Conv2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.Conv2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.Conv2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.Conv2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.Conv2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.Conv2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.Conv2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.Conv2D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.Conv2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.Conv2D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.Conv2D.build": "tf.keras.layers.Conv1D.build", + "tf.compat.v1.layers.Conv2D.call": "tf.keras.layers.Conv1D.call", + "tf.compat.v1.layers.Conv2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.Conv2D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.layers.Conv2D.compute_output_signature": 
"tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.Conv2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.Conv2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.Conv2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.Conv2D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.compat.v1.layers.Conv2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.Conv2D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.Conv2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.Conv2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.Conv2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.Conv2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.Conv2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.Conv2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.Conv2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.Conv2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.Conv2D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.Conv2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.Conv2D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.Conv2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.Conv2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.Conv2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.Conv2DTranspose.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.Conv2DTranspose.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.Conv2DTranspose.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.Conv2DTranspose.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.Conv2DTranspose.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.Conv2DTranspose.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.Conv2DTranspose.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.Conv2DTranspose.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.Conv2DTranspose.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.Conv2DTranspose.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.Conv2DTranspose.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.Conv2DTranspose.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.Conv2DTranspose.build": "tf.keras.layers.Conv2DTranspose.build", + "tf.compat.v1.layers.Conv2DTranspose.call": "tf.keras.layers.Conv2DTranspose.call", + "tf.compat.v1.layers.Conv2DTranspose.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.Conv2DTranspose.compute_output_shape": "tf.keras.layers.Conv2DTranspose.compute_output_shape", + "tf.compat.v1.layers.Conv2DTranspose.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.Conv2DTranspose.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.Conv2DTranspose.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.Conv2DTranspose.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.Conv2DTranspose.get_config": "tf.keras.layers.Conv2DTranspose.get_config", + "tf.compat.v1.layers.Conv2DTranspose.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.Conv2DTranspose.graph": 
"tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.Conv2DTranspose.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.Conv2DTranspose.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.Conv2DTranspose.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.Conv2DTranspose.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.Conv2DTranspose.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.Conv2DTranspose.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.Conv2DTranspose.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.Conv2DTranspose.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.Conv2DTranspose.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.Conv2DTranspose.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.Conv2DTranspose.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.Conv2DTranspose.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.Conv2DTranspose.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.Conv2DTranspose.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.Conv3D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.Conv3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.Conv3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.Conv3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.Conv3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.Conv3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.Conv3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.Conv3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.Conv3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.Conv3D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.Conv3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.Conv3D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.Conv3D.build": "tf.keras.layers.Conv1D.build", + "tf.compat.v1.layers.Conv3D.call": "tf.keras.layers.Conv1D.call", + "tf.compat.v1.layers.Conv3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.Conv3D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.layers.Conv3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.Conv3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.Conv3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.Conv3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.Conv3D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.compat.v1.layers.Conv3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.Conv3D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.Conv3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.Conv3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.Conv3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.Conv3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.Conv3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.Conv3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.Conv3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.Conv3D.output": 
"tf.keras.layers.Layer.output", + "tf.compat.v1.layers.Conv3D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.Conv3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.Conv3D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.Conv3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.Conv3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.Conv3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.Conv3DTranspose.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.Conv3DTranspose.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.Conv3DTranspose.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.Conv3DTranspose.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.Conv3DTranspose.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.Conv3DTranspose.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.Conv3DTranspose.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.Conv3DTranspose.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.Conv3DTranspose.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.Conv3DTranspose.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.Conv3DTranspose.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.Conv3DTranspose.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.Conv3DTranspose.build": "tf.keras.layers.Conv3DTranspose.build", + "tf.compat.v1.layers.Conv3DTranspose.call": "tf.keras.layers.Conv3DTranspose.call", + "tf.compat.v1.layers.Conv3DTranspose.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.Conv3DTranspose.compute_output_shape": "tf.keras.layers.Conv3DTranspose.compute_output_shape", + "tf.compat.v1.layers.Conv3DTranspose.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.Conv3DTranspose.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.Conv3DTranspose.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.Conv3DTranspose.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.Conv3DTranspose.get_config": "tf.keras.layers.Conv3DTranspose.get_config", + "tf.compat.v1.layers.Conv3DTranspose.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.Conv3DTranspose.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.Conv3DTranspose.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.Conv3DTranspose.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.Conv3DTranspose.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.Conv3DTranspose.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.Conv3DTranspose.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.Conv3DTranspose.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.Conv3DTranspose.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.Conv3DTranspose.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.Conv3DTranspose.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.Conv3DTranspose.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.Conv3DTranspose.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.Conv3DTranspose.trainable": "tf.keras.layers.Layer.trainable", + 
"tf.compat.v1.layers.Conv3DTranspose.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.Conv3DTranspose.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.Dense.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.Dense.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.Dense.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.Dense.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.Dense.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.Dense.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.Dense.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.Dense.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.Dense.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.Dense.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.Dense.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.Dense.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.Dense.build": "tf.keras.layers.Dense.build", + "tf.compat.v1.layers.Dense.call": "tf.keras.layers.Dense.call", + "tf.compat.v1.layers.Dense.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.Dense.compute_output_shape": "tf.keras.layers.Dense.compute_output_shape", + "tf.compat.v1.layers.Dense.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.Dense.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.Dense.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.Dense.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.Dense.get_config": "tf.keras.layers.Dense.get_config", + "tf.compat.v1.layers.Dense.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.Dense.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.Dense.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.Dense.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.Dense.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.Dense.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.Dense.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.Dense.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.Dense.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.Dense.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.Dense.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.Dense.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.Dense.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.Dense.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.Dense.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.Dense.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.Dropout.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.Dropout.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.Dropout.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.Dropout.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.Dropout.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.Dropout.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.Dropout.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.Dropout.__new__": "tf.keras.Model.__new__", + 
"tf.compat.v1.layers.Dropout.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.Dropout.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.Dropout.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.Dropout.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.Dropout.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.layers.Dropout.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.Dropout.compute_output_shape": "tf.keras.layers.Dropout.compute_output_shape", + "tf.compat.v1.layers.Dropout.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.Dropout.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.Dropout.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.Dropout.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.Dropout.get_config": "tf.keras.layers.Dropout.get_config", + "tf.compat.v1.layers.Dropout.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.Dropout.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.Dropout.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.Dropout.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.Dropout.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.Dropout.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.Dropout.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.Dropout.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.Dropout.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.Dropout.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.Dropout.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.Dropout.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.Dropout.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.Dropout.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.Dropout.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.Dropout.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.Flatten.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.Flatten.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.Flatten.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.Flatten.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.Flatten.__init__": "tf.keras.layers.Flatten.__init__", + "tf.compat.v1.layers.Flatten.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.Flatten.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.Flatten.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.Flatten.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.Flatten.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.Flatten.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.Flatten.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.Flatten.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.Flatten.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.layers.Flatten.call": "tf.keras.layers.Flatten.call", + "tf.compat.v1.layers.Flatten.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.Flatten.compute_output_shape": "tf.keras.layers.Flatten.compute_output_shape", + 
"tf.compat.v1.layers.Flatten.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.Flatten.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.Flatten.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.Flatten.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.Flatten.get_config": "tf.keras.layers.Flatten.get_config", + "tf.compat.v1.layers.Flatten.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.Flatten.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.Flatten.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.Flatten.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.Flatten.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.Flatten.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.Flatten.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.Flatten.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.Flatten.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.Flatten.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.Flatten.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.Flatten.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.Flatten.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.Flatten.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.Flatten.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.Flatten.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.InputSpec": "tf.keras.layers.InputSpec", + "tf.compat.v1.layers.InputSpec.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.InputSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.InputSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.InputSpec.__init__": "tf.keras.layers.InputSpec.__init__", + "tf.compat.v1.layers.InputSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.InputSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.InputSpec.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.InputSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.layers.InputSpec.get_config": "tf.keras.layers.InputSpec.get_config", + "tf.compat.v1.layers.Layer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.Layer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.Layer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.Layer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.Layer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.Layer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.Layer.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.Layer.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.Layer.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.Layer.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.layers.Layer.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.layers.Layer.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.Layer.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.layers.Layer.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.Layer.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.Layer.dtype": 
"tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.Layer.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.Layer.get_config": "tf.keras.layers.Layer.get_config", + "tf.compat.v1.layers.Layer.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.Layer.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.Layer.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.Layer.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.Layer.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.Layer.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.Layer.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.Layer.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.Layer.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.Layer.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.Layer.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.Layer.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.Layer.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.Layer.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.MaxPooling1D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.MaxPooling1D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.MaxPooling1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.MaxPooling1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.MaxPooling1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.MaxPooling1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.MaxPooling1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.MaxPooling1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.MaxPooling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.MaxPooling1D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.MaxPooling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.MaxPooling1D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.MaxPooling1D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.layers.MaxPooling1D.call": "tf.keras.layers.AveragePooling1D.call", + "tf.compat.v1.layers.MaxPooling1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.MaxPooling1D.compute_output_shape": "tf.keras.layers.AveragePooling1D.compute_output_shape", + "tf.compat.v1.layers.MaxPooling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.MaxPooling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.MaxPooling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.MaxPooling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.MaxPooling1D.get_config": "tf.keras.layers.AveragePooling1D.get_config", + "tf.compat.v1.layers.MaxPooling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.MaxPooling1D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.MaxPooling1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.MaxPooling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.MaxPooling1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.MaxPooling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.MaxPooling1D.name": "tf.keras.layers.Layer.name", + 
"tf.compat.v1.layers.MaxPooling1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.MaxPooling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.MaxPooling1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.MaxPooling1D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.MaxPooling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.MaxPooling1D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.MaxPooling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.MaxPooling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.MaxPooling1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.MaxPooling2D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.MaxPooling2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.MaxPooling2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.MaxPooling2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.MaxPooling2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.MaxPooling2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.MaxPooling2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.MaxPooling2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.MaxPooling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.MaxPooling2D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.MaxPooling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.MaxPooling2D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.MaxPooling2D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.layers.MaxPooling2D.call": "tf.keras.layers.AveragePooling2D.call", + "tf.compat.v1.layers.MaxPooling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.MaxPooling2D.compute_output_shape": "tf.keras.layers.AveragePooling2D.compute_output_shape", + "tf.compat.v1.layers.MaxPooling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.MaxPooling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.MaxPooling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.MaxPooling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.MaxPooling2D.get_config": "tf.keras.layers.AveragePooling2D.get_config", + "tf.compat.v1.layers.MaxPooling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.MaxPooling2D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.MaxPooling2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.MaxPooling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.MaxPooling2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.MaxPooling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.MaxPooling2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.MaxPooling2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.MaxPooling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.MaxPooling2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.MaxPooling2D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.MaxPooling2D.set_weights": "tf.keras.layers.Layer.set_weights", + 
"tf.compat.v1.layers.MaxPooling2D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.MaxPooling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.MaxPooling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.MaxPooling2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.MaxPooling3D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.MaxPooling3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.MaxPooling3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.MaxPooling3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.MaxPooling3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.MaxPooling3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.MaxPooling3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.MaxPooling3D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.MaxPooling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.MaxPooling3D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.MaxPooling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.MaxPooling3D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.MaxPooling3D.build": "tf.keras.layers.Layer.build", + "tf.compat.v1.layers.MaxPooling3D.call": "tf.keras.layers.AveragePooling3D.call", + "tf.compat.v1.layers.MaxPooling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.MaxPooling3D.compute_output_shape": "tf.keras.layers.AveragePooling3D.compute_output_shape", + "tf.compat.v1.layers.MaxPooling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.MaxPooling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.MaxPooling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.MaxPooling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.MaxPooling3D.get_config": "tf.keras.layers.AveragePooling3D.get_config", + "tf.compat.v1.layers.MaxPooling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.MaxPooling3D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.MaxPooling3D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.MaxPooling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.MaxPooling3D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.MaxPooling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.MaxPooling3D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.MaxPooling3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.MaxPooling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.MaxPooling3D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.MaxPooling3D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.MaxPooling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.MaxPooling3D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.MaxPooling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.MaxPooling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.MaxPooling3D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.SeparableConv1D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.SeparableConv1D.__eq__": 
"tf.keras.Model.__eq__", + "tf.compat.v1.layers.SeparableConv1D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.SeparableConv1D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.SeparableConv1D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.SeparableConv1D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.SeparableConv1D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.layers.SeparableConv1D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.SeparableConv1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.SeparableConv1D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.SeparableConv1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.SeparableConv1D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.SeparableConv1D.build": "tf.keras.layers.SeparableConv1D.build", + "tf.compat.v1.layers.SeparableConv1D.call": "tf.keras.layers.SeparableConv1D.call", + "tf.compat.v1.layers.SeparableConv1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.SeparableConv1D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.layers.SeparableConv1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.SeparableConv1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.SeparableConv1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.SeparableConv1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.SeparableConv1D.get_config": "tf.keras.layers.SeparableConv1D.get_config", + "tf.compat.v1.layers.SeparableConv1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.SeparableConv1D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.SeparableConv1D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.SeparableConv1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.SeparableConv1D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.SeparableConv1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.SeparableConv1D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.SeparableConv1D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.SeparableConv1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.SeparableConv1D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.SeparableConv1D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.SeparableConv1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.SeparableConv1D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.SeparableConv1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.SeparableConv1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.SeparableConv1D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.layers.SeparableConv2D.__call__": "tf.compat.v1.layers.Layer.__call__", + "tf.compat.v1.layers.SeparableConv2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.layers.SeparableConv2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.layers.SeparableConv2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.layers.SeparableConv2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.layers.SeparableConv2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.layers.SeparableConv2D.__ne__": 
"tf.keras.Model.__ne__", + "tf.compat.v1.layers.SeparableConv2D.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.layers.SeparableConv2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.layers.SeparableConv2D.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.layers.SeparableConv2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.layers.SeparableConv2D.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.layers.SeparableConv2D.build": "tf.keras.layers.SeparableConv1D.build", + "tf.compat.v1.layers.SeparableConv2D.call": "tf.keras.layers.SeparableConv2D.call", + "tf.compat.v1.layers.SeparableConv2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.layers.SeparableConv2D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.compat.v1.layers.SeparableConv2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.layers.SeparableConv2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.layers.SeparableConv2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.layers.SeparableConv2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.layers.SeparableConv2D.get_config": "tf.keras.layers.SeparableConv1D.get_config", + "tf.compat.v1.layers.SeparableConv2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.layers.SeparableConv2D.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.layers.SeparableConv2D.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.layers.SeparableConv2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.layers.SeparableConv2D.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.layers.SeparableConv2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.layers.SeparableConv2D.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.layers.SeparableConv2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.layers.SeparableConv2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.layers.SeparableConv2D.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.layers.SeparableConv2D.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.layers.SeparableConv2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.layers.SeparableConv2D.submodules": "tf.Module.submodules", + "tf.compat.v1.layers.SeparableConv2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.layers.SeparableConv2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.layers.SeparableConv2D.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.lbeta": "tf.math.lbeta", + "tf.compat.v1.less": "tf.math.less", + "tf.compat.v1.less_equal": "tf.math.less_equal", + "tf.compat.v1.lgamma": "tf.math.lgamma", + "tf.compat.v1.lin_space": "tf.linspace", + "tf.compat.v1.linalg.LinearOperator": "tf.linalg.LinearOperator", + "tf.compat.v1.linalg.LinearOperator.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperator.__init__": "tf.linalg.LinearOperator.__init__", + "tf.compat.v1.linalg.LinearOperator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperator.__matmul__": 
"tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperator.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperator.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperator.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperator.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperator.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperator.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperator.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperator.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperator.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperator.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperator.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperator.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperator.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperator.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperator.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperator.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperator.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperator.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperator.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperator.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperator.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperator.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperator.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperator.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperator.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperator.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperator.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperator.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperator.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperator.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperator.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperator.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperator.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperator.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperator.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperator.to_dense": "tf.linalg.LinearOperator.to_dense", 
+ "tf.compat.v1.linalg.LinearOperator.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperator.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperator.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorAdjoint": "tf.linalg.LinearOperatorAdjoint", + "tf.compat.v1.linalg.LinearOperatorAdjoint.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorAdjoint.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorAdjoint.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorAdjoint.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorAdjoint.__init__": "tf.linalg.LinearOperatorAdjoint.__init__", + "tf.compat.v1.linalg.LinearOperatorAdjoint.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorAdjoint.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorAdjoint.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorAdjoint.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorAdjoint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorAdjoint.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorAdjoint.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorAdjoint.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorAdjoint.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorAdjoint.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorAdjoint.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorAdjoint.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorAdjoint.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorAdjoint.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorAdjoint.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorAdjoint.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorAdjoint.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorAdjoint.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorAdjoint.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorAdjoint.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorAdjoint.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorAdjoint.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorAdjoint.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorAdjoint.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorAdjoint.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorAdjoint.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorAdjoint.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorAdjoint.matmul": "tf.linalg.LinearOperator.matmul", + 
"tf.compat.v1.linalg.LinearOperatorAdjoint.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorAdjoint.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorAdjoint.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorAdjoint.operator": "tf.linalg.LinearOperatorAdjoint.operator", + "tf.compat.v1.linalg.LinearOperatorAdjoint.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorAdjoint.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorAdjoint.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorAdjoint.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorAdjoint.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorAdjoint.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorAdjoint.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorAdjoint.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorAdjoint.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorAdjoint.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorAdjoint.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorAdjoint.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorAdjoint.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorBlockDiag": "tf.linalg.LinearOperatorBlockDiag", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__init__": "tf.linalg.LinearOperatorBlockDiag.__init__", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.cond": "tf.linalg.LinearOperator.cond", + 
"tf.compat.v1.linalg.LinearOperatorBlockDiag.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.operators": "tf.linalg.LinearOperatorBlockDiag.operators", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorBlockDiag.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular": "tf.linalg.LinearOperatorBlockLowerTriangular", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__gt__": 
"tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__init__": "tf.linalg.LinearOperatorBlockLowerTriangular.__init__", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.name": "tf.linalg.LinearOperator.name", + 
"tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.operators": "tf.linalg.LinearOperatorBlockLowerTriangular.operators", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorCirculant": "tf.linalg.LinearOperatorCirculant", + "tf.compat.v1.linalg.LinearOperatorCirculant.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorCirculant.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorCirculant.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorCirculant.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorCirculant.__init__": "tf.linalg.LinearOperatorCirculant.__init__", + "tf.compat.v1.linalg.LinearOperatorCirculant.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorCirculant.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorCirculant.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorCirculant.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorCirculant.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorCirculant.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorCirculant.assert_hermitian_spectrum": "tf.linalg.LinearOperatorCirculant.assert_hermitian_spectrum", + "tf.compat.v1.linalg.LinearOperatorCirculant.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorCirculant.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorCirculant.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorCirculant.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorCirculant.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + 
"tf.compat.v1.linalg.LinearOperatorCirculant.block_depth": "tf.linalg.LinearOperatorCirculant.block_depth", + "tf.compat.v1.linalg.LinearOperatorCirculant.block_shape": "tf.linalg.LinearOperatorCirculant.block_shape", + "tf.compat.v1.linalg.LinearOperatorCirculant.block_shape_tensor": "tf.linalg.LinearOperatorCirculant.block_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorCirculant.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorCirculant.convolution_kernel": "tf.linalg.LinearOperatorCirculant.convolution_kernel", + "tf.compat.v1.linalg.LinearOperatorCirculant.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorCirculant.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorCirculant.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorCirculant.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorCirculant.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorCirculant.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorCirculant.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorCirculant.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorCirculant.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorCirculant.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorCirculant.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorCirculant.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorCirculant.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorCirculant.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorCirculant.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorCirculant.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorCirculant.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorCirculant.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorCirculant.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorCirculant.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorCirculant.spectrum": "tf.linalg.LinearOperatorCirculant.spectrum", + "tf.compat.v1.linalg.LinearOperatorCirculant.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorCirculant.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorCirculant.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorCirculant.trace": 
"tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorCirculant.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorCirculant.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorCirculant2D": "tf.linalg.LinearOperatorCirculant2D", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__init__": "tf.linalg.LinearOperatorCirculant2D.__init__", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.assert_hermitian_spectrum": "tf.linalg.LinearOperatorCirculant.assert_hermitian_spectrum", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.block_depth": "tf.linalg.LinearOperatorCirculant.block_depth", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.block_shape": "tf.linalg.LinearOperatorCirculant.block_shape", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.block_shape_tensor": "tf.linalg.LinearOperatorCirculant.block_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.convolution_kernel": "tf.linalg.LinearOperatorCirculant.convolution_kernel", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.graph_parents": "tf.linalg.LinearOperator.graph_parents", + 
"tf.compat.v1.linalg.LinearOperatorCirculant2D.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.spectrum": "tf.linalg.LinearOperatorCirculant.spectrum", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorCirculant2D.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorCirculant3D": "tf.linalg.LinearOperatorCirculant3D", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__init__": "tf.linalg.LinearOperatorCirculant3D.__init__", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + 
"tf.compat.v1.linalg.LinearOperatorCirculant3D.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.assert_hermitian_spectrum": "tf.linalg.LinearOperatorCirculant.assert_hermitian_spectrum", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.block_depth": "tf.linalg.LinearOperatorCirculant.block_depth", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.block_shape": "tf.linalg.LinearOperatorCirculant.block_shape", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.block_shape_tensor": "tf.linalg.LinearOperatorCirculant.block_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.convolution_kernel": "tf.linalg.LinearOperatorCirculant.convolution_kernel", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.shape": 
"tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.spectrum": "tf.linalg.LinearOperatorCirculant.spectrum", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorCirculant3D.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorComposition": "tf.linalg.LinearOperatorComposition", + "tf.compat.v1.linalg.LinearOperatorComposition.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorComposition.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorComposition.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorComposition.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorComposition.__init__": "tf.linalg.LinearOperatorComposition.__init__", + "tf.compat.v1.linalg.LinearOperatorComposition.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorComposition.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorComposition.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorComposition.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorComposition.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorComposition.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorComposition.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorComposition.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorComposition.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorComposition.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorComposition.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorComposition.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorComposition.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorComposition.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorComposition.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorComposition.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorComposition.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorComposition.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + 
"tf.compat.v1.linalg.LinearOperatorComposition.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorComposition.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorComposition.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorComposition.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorComposition.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorComposition.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorComposition.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorComposition.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorComposition.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorComposition.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorComposition.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorComposition.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorComposition.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorComposition.operators": "tf.linalg.LinearOperatorComposition.operators", + "tf.compat.v1.linalg.LinearOperatorComposition.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorComposition.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorComposition.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorComposition.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorComposition.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorComposition.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorComposition.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorComposition.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorComposition.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorComposition.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorComposition.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorComposition.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorComposition.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorDiag": "tf.linalg.LinearOperatorDiag", + "tf.compat.v1.linalg.LinearOperatorDiag.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorDiag.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorDiag.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorDiag.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorDiag.__init__": "tf.linalg.LinearOperatorDiag.__init__", + "tf.compat.v1.linalg.LinearOperatorDiag.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorDiag.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorDiag.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorDiag.__ne__": "tf.keras.Model.__ne__", + 
"tf.compat.v1.linalg.LinearOperatorDiag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorDiag.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorDiag.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorDiag.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorDiag.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorDiag.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorDiag.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorDiag.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorDiag.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorDiag.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorDiag.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorDiag.diag": "tf.linalg.LinearOperatorDiag.diag", + "tf.compat.v1.linalg.LinearOperatorDiag.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorDiag.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorDiag.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorDiag.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorDiag.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorDiag.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorDiag.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorDiag.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorDiag.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorDiag.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorDiag.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorDiag.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorDiag.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorDiag.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorDiag.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorDiag.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorDiag.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorDiag.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorDiag.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorDiag.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorDiag.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorDiag.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorDiag.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorDiag.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorDiag.tensor_rank_tensor": 
"tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorDiag.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorDiag.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorDiag.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorDiag.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorFullMatrix": "tf.linalg.LinearOperatorFullMatrix", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__init__": "tf.linalg.LinearOperatorFullMatrix.__init__", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.is_square": 
"tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorFullMatrix.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorHouseholder": "tf.linalg.LinearOperatorHouseholder", + "tf.compat.v1.linalg.LinearOperatorHouseholder.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorHouseholder.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorHouseholder.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorHouseholder.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorHouseholder.__init__": "tf.linalg.LinearOperatorHouseholder.__init__", + "tf.compat.v1.linalg.LinearOperatorHouseholder.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorHouseholder.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorHouseholder.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorHouseholder.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorHouseholder.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorHouseholder.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorHouseholder.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorHouseholder.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorHouseholder.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorHouseholder.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorHouseholder.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorHouseholder.batch_shape_tensor": 
"tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorHouseholder.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorHouseholder.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorHouseholder.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorHouseholder.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorHouseholder.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorHouseholder.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorHouseholder.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorHouseholder.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorHouseholder.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorHouseholder.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorHouseholder.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorHouseholder.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorHouseholder.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorHouseholder.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorHouseholder.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorHouseholder.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorHouseholder.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorHouseholder.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorHouseholder.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorHouseholder.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorHouseholder.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorHouseholder.reflection_axis": "tf.linalg.LinearOperatorHouseholder.reflection_axis", + "tf.compat.v1.linalg.LinearOperatorHouseholder.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorHouseholder.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorHouseholder.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorHouseholder.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorHouseholder.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorHouseholder.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorHouseholder.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorHouseholder.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorHouseholder.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorHouseholder.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorHouseholder.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorIdentity": "tf.linalg.LinearOperatorIdentity", + "tf.compat.v1.linalg.LinearOperatorIdentity.H": 
"tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorIdentity.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorIdentity.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorIdentity.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorIdentity.__init__": "tf.linalg.LinearOperatorIdentity.__init__", + "tf.compat.v1.linalg.LinearOperatorIdentity.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorIdentity.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorIdentity.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorIdentity.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorIdentity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorIdentity.add_to_tensor": "tf.linalg.LinearOperatorIdentity.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorIdentity.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorIdentity.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorIdentity.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorIdentity.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorIdentity.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorIdentity.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorIdentity.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorIdentity.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorIdentity.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorIdentity.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorIdentity.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorIdentity.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorIdentity.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorIdentity.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorIdentity.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorIdentity.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorIdentity.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorIdentity.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorIdentity.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorIdentity.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorIdentity.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorIdentity.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorIdentity.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorIdentity.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorIdentity.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorIdentity.range_dimension": 
"tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorIdentity.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorIdentity.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorIdentity.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorIdentity.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorIdentity.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorIdentity.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorIdentity.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorIdentity.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorIdentity.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorIdentity.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorIdentity.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorIdentity.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorInversion": "tf.linalg.LinearOperatorInversion", + "tf.compat.v1.linalg.LinearOperatorInversion.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorInversion.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorInversion.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorInversion.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorInversion.__init__": "tf.linalg.LinearOperatorInversion.__init__", + "tf.compat.v1.linalg.LinearOperatorInversion.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorInversion.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorInversion.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorInversion.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorInversion.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorInversion.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorInversion.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorInversion.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorInversion.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorInversion.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorInversion.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorInversion.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorInversion.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorInversion.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorInversion.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorInversion.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorInversion.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorInversion.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + 
"tf.compat.v1.linalg.LinearOperatorInversion.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorInversion.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorInversion.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorInversion.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorInversion.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorInversion.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorInversion.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorInversion.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorInversion.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorInversion.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorInversion.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorInversion.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorInversion.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorInversion.operator": "tf.linalg.LinearOperatorInversion.operator", + "tf.compat.v1.linalg.LinearOperatorInversion.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorInversion.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorInversion.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorInversion.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorInversion.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorInversion.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorInversion.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorInversion.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorInversion.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorInversion.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorInversion.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorInversion.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorInversion.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorKronecker": "tf.linalg.LinearOperatorKronecker", + "tf.compat.v1.linalg.LinearOperatorKronecker.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorKronecker.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorKronecker.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorKronecker.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorKronecker.__init__": "tf.linalg.LinearOperatorKronecker.__init__", + "tf.compat.v1.linalg.LinearOperatorKronecker.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorKronecker.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorKronecker.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorKronecker.__ne__": "tf.keras.Model.__ne__", + 
"tf.compat.v1.linalg.LinearOperatorKronecker.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorKronecker.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorKronecker.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorKronecker.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorKronecker.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorKronecker.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorKronecker.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorKronecker.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorKronecker.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorKronecker.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorKronecker.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorKronecker.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorKronecker.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorKronecker.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorKronecker.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorKronecker.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorKronecker.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorKronecker.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorKronecker.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorKronecker.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorKronecker.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorKronecker.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorKronecker.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorKronecker.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorKronecker.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorKronecker.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorKronecker.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorKronecker.operators": "tf.linalg.LinearOperatorKronecker.operators", + "tf.compat.v1.linalg.LinearOperatorKronecker.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorKronecker.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorKronecker.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorKronecker.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorKronecker.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorKronecker.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorKronecker.submodules": 
"tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorKronecker.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorKronecker.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorKronecker.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorKronecker.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorKronecker.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorKronecker.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate": "tf.linalg.LinearOperatorLowRankUpdate", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__init__": "tf.linalg.LinearOperatorLowRankUpdate.__init__", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.base_operator": "tf.linalg.LinearOperatorLowRankUpdate.base_operator", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.diag_operator": "tf.linalg.LinearOperatorLowRankUpdate.diag_operator", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.diag_update": "tf.linalg.LinearOperatorLowRankUpdate.diag_update", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.dtype": "tf.linalg.LinearOperator.dtype", + 
"tf.compat.v1.linalg.LinearOperatorLowRankUpdate.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.is_diag_update_positive": "tf.linalg.LinearOperatorLowRankUpdate.is_diag_update_positive", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.u": "tf.linalg.LinearOperatorLowRankUpdate.u", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.v": "tf.linalg.LinearOperatorLowRankUpdate.v", + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular": "tf.linalg.LinearOperatorLowerTriangular", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__init__": "tf.linalg.LinearOperatorLowerTriangular.__init__", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__le__": 
"tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.shape": 
"tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorPermutation": "tf.linalg.LinearOperatorPermutation", + "tf.compat.v1.linalg.LinearOperatorPermutation.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorPermutation.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorPermutation.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorPermutation.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorPermutation.__init__": "tf.linalg.LinearOperatorPermutation.__init__", + "tf.compat.v1.linalg.LinearOperatorPermutation.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorPermutation.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorPermutation.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorPermutation.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorPermutation.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorPermutation.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorPermutation.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorPermutation.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorPermutation.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorPermutation.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorPermutation.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorPermutation.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorPermutation.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorPermutation.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorPermutation.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorPermutation.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorPermutation.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorPermutation.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorPermutation.dtype": "tf.linalg.LinearOperator.dtype", + 
"tf.compat.v1.linalg.LinearOperatorPermutation.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorPermutation.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorPermutation.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorPermutation.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorPermutation.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorPermutation.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorPermutation.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorPermutation.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorPermutation.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorPermutation.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorPermutation.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorPermutation.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorPermutation.perm": "tf.linalg.LinearOperatorPermutation.perm", + "tf.compat.v1.linalg.LinearOperatorPermutation.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorPermutation.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorPermutation.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorPermutation.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorPermutation.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorPermutation.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorPermutation.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorPermutation.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorPermutation.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorPermutation.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorPermutation.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorPermutation.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorPermutation.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity": "tf.linalg.LinearOperatorScaledIdentity", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__init__": "tf.linalg.LinearOperatorScaledIdentity.__init__", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__ne__": "tf.keras.Model.__ne__", + 
"tf.compat.v1.linalg.LinearOperatorScaledIdentity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.add_to_tensor": "tf.linalg.LinearOperatorScaledIdentity.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.multiplier": "tf.linalg.LinearOperatorScaledIdentity.multiplier", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.solve": 
"tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorToeplitz": "tf.linalg.LinearOperatorToeplitz", + "tf.compat.v1.linalg.LinearOperatorToeplitz.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorToeplitz.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorToeplitz.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorToeplitz.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorToeplitz.__init__": "tf.linalg.LinearOperatorToeplitz.__init__", + "tf.compat.v1.linalg.LinearOperatorToeplitz.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorToeplitz.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorToeplitz.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorToeplitz.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorToeplitz.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorToeplitz.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorToeplitz.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorToeplitz.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorToeplitz.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorToeplitz.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorToeplitz.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorToeplitz.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorToeplitz.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorToeplitz.col": "tf.linalg.LinearOperatorToeplitz.col", + "tf.compat.v1.linalg.LinearOperatorToeplitz.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorToeplitz.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorToeplitz.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorToeplitz.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorToeplitz.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorToeplitz.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorToeplitz.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorToeplitz.graph_parents": "tf.linalg.LinearOperator.graph_parents", + 
"tf.compat.v1.linalg.LinearOperatorToeplitz.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorToeplitz.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorToeplitz.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorToeplitz.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorToeplitz.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorToeplitz.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorToeplitz.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorToeplitz.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorToeplitz.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorToeplitz.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorToeplitz.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorToeplitz.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorToeplitz.row": "tf.linalg.LinearOperatorToeplitz.row", + "tf.compat.v1.linalg.LinearOperatorToeplitz.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorToeplitz.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorToeplitz.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorToeplitz.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorToeplitz.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorToeplitz.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorToeplitz.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorToeplitz.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorToeplitz.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorToeplitz.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorToeplitz.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorTridiag": "tf.linalg.LinearOperatorTridiag", + "tf.compat.v1.linalg.LinearOperatorTridiag.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorTridiag.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorTridiag.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorTridiag.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorTridiag.__init__": "tf.linalg.LinearOperatorTridiag.__init__", + "tf.compat.v1.linalg.LinearOperatorTridiag.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorTridiag.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorTridiag.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorTridiag.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorTridiag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorTridiag.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorTridiag.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorTridiag.assert_non_singular": 
"tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorTridiag.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorTridiag.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorTridiag.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorTridiag.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorTridiag.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorTridiag.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorTridiag.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorTridiag.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorTridiag.diagonals": "tf.linalg.LinearOperatorTridiag.diagonals", + "tf.compat.v1.linalg.LinearOperatorTridiag.diagonals_format": "tf.linalg.LinearOperatorTridiag.diagonals_format", + "tf.compat.v1.linalg.LinearOperatorTridiag.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorTridiag.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorTridiag.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorTridiag.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorTridiag.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorTridiag.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorTridiag.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorTridiag.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorTridiag.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorTridiag.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorTridiag.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorTridiag.matmul": "tf.linalg.LinearOperator.matmul", + "tf.compat.v1.linalg.LinearOperatorTridiag.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorTridiag.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorTridiag.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorTridiag.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorTridiag.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorTridiag.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorTridiag.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorTridiag.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorTridiag.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorTridiag.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorTridiag.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorTridiag.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorTridiag.to_dense": 
"tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorTridiag.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorTridiag.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorTridiag.variables": "tf.Module.variables", + "tf.compat.v1.linalg.LinearOperatorZeros": "tf.linalg.LinearOperatorZeros", + "tf.compat.v1.linalg.LinearOperatorZeros.H": "tf.linalg.LinearOperator.H", + "tf.compat.v1.linalg.LinearOperatorZeros.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.linalg.LinearOperatorZeros.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.linalg.LinearOperatorZeros.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.linalg.LinearOperatorZeros.__init__": "tf.linalg.LinearOperatorZeros.__init__", + "tf.compat.v1.linalg.LinearOperatorZeros.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.linalg.LinearOperatorZeros.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.linalg.LinearOperatorZeros.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.compat.v1.linalg.LinearOperatorZeros.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.linalg.LinearOperatorZeros.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.linalg.LinearOperatorZeros.add_to_tensor": "tf.linalg.LinearOperatorZeros.add_to_tensor", + "tf.compat.v1.linalg.LinearOperatorZeros.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.compat.v1.linalg.LinearOperatorZeros.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.compat.v1.linalg.LinearOperatorZeros.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.compat.v1.linalg.LinearOperatorZeros.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorZeros.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.compat.v1.linalg.LinearOperatorZeros.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.compat.v1.linalg.LinearOperatorZeros.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.compat.v1.linalg.LinearOperatorZeros.cond": "tf.linalg.LinearOperator.cond", + "tf.compat.v1.linalg.LinearOperatorZeros.determinant": "tf.linalg.LinearOperator.determinant", + "tf.compat.v1.linalg.LinearOperatorZeros.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.compat.v1.linalg.LinearOperatorZeros.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.compat.v1.linalg.LinearOperatorZeros.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorZeros.dtype": "tf.linalg.LinearOperator.dtype", + "tf.compat.v1.linalg.LinearOperatorZeros.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.compat.v1.linalg.LinearOperatorZeros.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.compat.v1.linalg.LinearOperatorZeros.inverse": "tf.linalg.LinearOperator.inverse", + "tf.compat.v1.linalg.LinearOperatorZeros.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.compat.v1.linalg.LinearOperatorZeros.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.compat.v1.linalg.LinearOperatorZeros.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.compat.v1.linalg.LinearOperatorZeros.is_square": "tf.linalg.LinearOperator.is_square", + "tf.compat.v1.linalg.LinearOperatorZeros.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.compat.v1.linalg.LinearOperatorZeros.matmul": "tf.linalg.LinearOperator.matmul", + 
"tf.compat.v1.linalg.LinearOperatorZeros.matvec": "tf.linalg.LinearOperator.matvec", + "tf.compat.v1.linalg.LinearOperatorZeros.name": "tf.linalg.LinearOperator.name", + "tf.compat.v1.linalg.LinearOperatorZeros.name_scope": "tf.Module.name_scope", + "tf.compat.v1.linalg.LinearOperatorZeros.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.compat.v1.linalg.LinearOperatorZeros.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.compat.v1.linalg.LinearOperatorZeros.shape": "tf.linalg.LinearOperator.shape", + "tf.compat.v1.linalg.LinearOperatorZeros.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.compat.v1.linalg.LinearOperatorZeros.solve": "tf.linalg.LinearOperator.solve", + "tf.compat.v1.linalg.LinearOperatorZeros.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.compat.v1.linalg.LinearOperatorZeros.submodules": "tf.Module.submodules", + "tf.compat.v1.linalg.LinearOperatorZeros.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.compat.v1.linalg.LinearOperatorZeros.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.compat.v1.linalg.LinearOperatorZeros.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.compat.v1.linalg.LinearOperatorZeros.trace": "tf.linalg.LinearOperator.trace", + "tf.compat.v1.linalg.LinearOperatorZeros.trainable_variables": "tf.Module.trainable_variables", + "tf.compat.v1.linalg.LinearOperatorZeros.variables": "tf.Module.variables", + "tf.compat.v1.linalg.adjoint": "tf.linalg.adjoint", + "tf.compat.v1.linalg.band_part": "tf.linalg.band_part", + "tf.compat.v1.linalg.cholesky": "tf.linalg.cholesky", + "tf.compat.v1.linalg.cholesky_solve": "tf.linalg.cholesky_solve", + "tf.compat.v1.linalg.cross": "tf.linalg.cross", + "tf.compat.v1.linalg.det": "tf.linalg.det", + "tf.compat.v1.linalg.diag": "tf.linalg.diag", + "tf.compat.v1.linalg.diag_part": "tf.linalg.diag_part", + "tf.compat.v1.linalg.eigh": "tf.linalg.eigh", + "tf.compat.v1.linalg.eigvalsh": "tf.linalg.eigvalsh", + "tf.compat.v1.linalg.einsum": "tf.einsum", + "tf.compat.v1.linalg.experimental.conjugate_gradient": "tf.linalg.experimental.conjugate_gradient", + "tf.compat.v1.linalg.expm": "tf.linalg.expm", + "tf.compat.v1.linalg.eye": "tf.eye", + "tf.compat.v1.linalg.global_norm": "tf.linalg.global_norm", + "tf.compat.v1.linalg.inv": "tf.linalg.inv", + "tf.compat.v1.linalg.logdet": "tf.linalg.logdet", + "tf.compat.v1.linalg.logm": "tf.linalg.logm", + "tf.compat.v1.linalg.lstsq": "tf.linalg.lstsq", + "tf.compat.v1.linalg.lu": "tf.linalg.lu", + "tf.compat.v1.linalg.lu_matrix_inverse": "tf.linalg.lu_matrix_inverse", + "tf.compat.v1.linalg.lu_reconstruct": "tf.linalg.lu_reconstruct", + "tf.compat.v1.linalg.lu_solve": "tf.linalg.lu_solve", + "tf.compat.v1.linalg.matmul": "tf.linalg.matmul", + "tf.compat.v1.linalg.matrix_rank": "tf.linalg.matrix_rank", + "tf.compat.v1.linalg.matrix_transpose": "tf.linalg.matrix_transpose", + "tf.compat.v1.linalg.matvec": "tf.linalg.matvec", + "tf.compat.v1.linalg.norm": "tf.compat.v1.norm", + "tf.compat.v1.linalg.normalize": "tf.linalg.normalize", + "tf.compat.v1.linalg.pinv": "tf.linalg.pinv", + "tf.compat.v1.linalg.qr": "tf.linalg.qr", + "tf.compat.v1.linalg.set_diag": "tf.linalg.set_diag", + "tf.compat.v1.linalg.slogdet": "tf.linalg.slogdet", + "tf.compat.v1.linalg.solve": "tf.linalg.solve", + "tf.compat.v1.linalg.sqrtm": "tf.linalg.sqrtm", + "tf.compat.v1.linalg.svd": "tf.linalg.svd", + "tf.compat.v1.linalg.tensor_diag": "tf.linalg.tensor_diag", + "tf.compat.v1.linalg.tensor_diag_part": 
"tf.linalg.tensor_diag_part", + "tf.compat.v1.linalg.tensordot": "tf.tensordot", + "tf.compat.v1.linalg.trace": "tf.linalg.trace", + "tf.compat.v1.linalg.transpose": "tf.linalg.matrix_transpose", + "tf.compat.v1.linalg.triangular_solve": "tf.linalg.triangular_solve", + "tf.compat.v1.linalg.tridiagonal_matmul": "tf.linalg.tridiagonal_matmul", + "tf.compat.v1.linalg.tridiagonal_solve": "tf.linalg.tridiagonal_solve", + "tf.compat.v1.linspace": "tf.linspace", + "tf.compat.v1.lite.Interpreter": "tf.lite.Interpreter", + "tf.compat.v1.lite.Interpreter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lite.Interpreter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lite.Interpreter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lite.Interpreter.__init__": "tf.lite.Interpreter.__init__", + "tf.compat.v1.lite.Interpreter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lite.Interpreter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lite.Interpreter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lite.Interpreter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lite.Interpreter.allocate_tensors": "tf.lite.Interpreter.allocate_tensors", + "tf.compat.v1.lite.Interpreter.get_input_details": "tf.lite.Interpreter.get_input_details", + "tf.compat.v1.lite.Interpreter.get_output_details": "tf.lite.Interpreter.get_output_details", + "tf.compat.v1.lite.Interpreter.get_tensor": "tf.lite.Interpreter.get_tensor", + "tf.compat.v1.lite.Interpreter.get_tensor_details": "tf.lite.Interpreter.get_tensor_details", + "tf.compat.v1.lite.Interpreter.invoke": "tf.lite.Interpreter.invoke", + "tf.compat.v1.lite.Interpreter.reset_all_variables": "tf.lite.Interpreter.reset_all_variables", + "tf.compat.v1.lite.Interpreter.resize_tensor_input": "tf.lite.Interpreter.resize_tensor_input", + "tf.compat.v1.lite.Interpreter.set_tensor": "tf.lite.Interpreter.set_tensor", + "tf.compat.v1.lite.Interpreter.tensor": "tf.lite.Interpreter.tensor", + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lite.OpHint.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lite.OpHint.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lite.OpHint.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lite.OpHint.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lite.OpHint.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lite.OpHint.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lite.OpHint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lite.OpsSet": "tf.lite.OpsSet", + "tf.compat.v1.lite.OpsSet.SELECT_TF_OPS": "tf.lite.OpsSet.SELECT_TF_OPS", + "tf.compat.v1.lite.OpsSet.TFLITE_BUILTINS": "tf.lite.OpsSet.TFLITE_BUILTINS", + "tf.compat.v1.lite.OpsSet.TFLITE_BUILTINS_INT8": "tf.lite.OpsSet.TFLITE_BUILTINS_INT8", + "tf.compat.v1.lite.OpsSet.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.lite.OpsSet.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.lite.Optimize": "tf.lite.Optimize", + 
"tf.compat.v1.lite.Optimize.DEFAULT": "tf.lite.Optimize.DEFAULT", + "tf.compat.v1.lite.Optimize.OPTIMIZE_FOR_LATENCY": "tf.lite.Optimize.OPTIMIZE_FOR_LATENCY", + "tf.compat.v1.lite.Optimize.OPTIMIZE_FOR_SIZE": "tf.lite.Optimize.OPTIMIZE_FOR_SIZE", + "tf.compat.v1.lite.Optimize.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.lite.Optimize.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.lite.RepresentativeDataset": "tf.lite.RepresentativeDataset", + "tf.compat.v1.lite.RepresentativeDataset.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lite.RepresentativeDataset.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lite.RepresentativeDataset.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lite.RepresentativeDataset.__init__": "tf.lite.RepresentativeDataset.__init__", + "tf.compat.v1.lite.RepresentativeDataset.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lite.RepresentativeDataset.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lite.RepresentativeDataset.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lite.RepresentativeDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lite.TFLiteConverter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lite.TFLiteConverter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lite.TFLiteConverter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lite.TFLiteConverter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lite.TFLiteConverter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lite.TFLiteConverter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lite.TFLiteConverter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lite.TargetSpec": "tf.lite.TargetSpec", + "tf.compat.v1.lite.TargetSpec.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lite.TargetSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lite.TargetSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lite.TargetSpec.__init__": "tf.lite.TargetSpec.__init__", + "tf.compat.v1.lite.TargetSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lite.TargetSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lite.TargetSpec.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lite.TargetSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lite.TocoConverter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lite.TocoConverter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lite.TocoConverter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lite.TocoConverter.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.lite.TocoConverter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lite.TocoConverter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lite.TocoConverter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lite.TocoConverter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lite.constants.FLOAT": "tf.dtypes.float32", + "tf.compat.v1.lite.constants.FLOAT16": "tf.dtypes.float16", + "tf.compat.v1.lite.constants.INT32": "tf.dtypes.int32", + "tf.compat.v1.lite.constants.INT64": "tf.dtypes.int64", + "tf.compat.v1.lite.constants.INT8": "tf.dtypes.int8", + "tf.compat.v1.lite.constants.QUANTIZED_UINT8": "tf.dtypes.uint8", + "tf.compat.v1.lite.constants.STRING": "tf.dtypes.string", + "tf.compat.v1.lite.experimental.load_delegate": "tf.lite.experimental.load_delegate", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__call__": "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__call__", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__eq__": 
"tf.keras.Model.__eq__", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.get_initial_state": "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.submodules": "tf.Module.submodules", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.zero_state": "tf.compat.v1.nn.rnn_cell.RNNCell.zero_state", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__call__": "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__call__", + 
"tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.get_initial_state": "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.submodules": "tf.Module.submodules", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.zero_state": "tf.compat.v1.nn.rnn_cell.RNNCell.zero_state", + "tf.compat.v1.load_library": "tf.load_library", + "tf.compat.v1.load_op_library": 
"tf.load_op_library", + "tf.compat.v1.log": "tf.math.log", + "tf.compat.v1.log1p": "tf.math.log1p", + "tf.compat.v1.log_sigmoid": "tf.math.log_sigmoid", + "tf.compat.v1.logical_and": "tf.math.logical_and", + "tf.compat.v1.logical_not": "tf.math.logical_not", + "tf.compat.v1.logical_or": "tf.math.logical_or", + "tf.compat.v1.logical_xor": "tf.math.logical_xor", + "tf.compat.v1.lookup.KeyValueTensorInitializer": "tf.lookup.KeyValueTensorInitializer", + "tf.compat.v1.lookup.KeyValueTensorInitializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lookup.KeyValueTensorInitializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lookup.KeyValueTensorInitializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lookup.KeyValueTensorInitializer.__init__": "tf.lookup.KeyValueTensorInitializer.__init__", + "tf.compat.v1.lookup.KeyValueTensorInitializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lookup.KeyValueTensorInitializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lookup.KeyValueTensorInitializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lookup.KeyValueTensorInitializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lookup.KeyValueTensorInitializer.initialize": "tf.lookup.KeyValueTensorInitializer.initialize", + "tf.compat.v1.lookup.KeyValueTensorInitializer.key_dtype": "tf.lookup.KeyValueTensorInitializer.key_dtype", + "tf.compat.v1.lookup.KeyValueTensorInitializer.value_dtype": "tf.lookup.KeyValueTensorInitializer.value_dtype", + "tf.compat.v1.lookup.StaticHashTable.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lookup.StaticHashTable.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lookup.StaticHashTable.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lookup.StaticHashTable.__init__": "tf.lookup.StaticHashTable.__init__", + "tf.compat.v1.lookup.StaticHashTable.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lookup.StaticHashTable.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lookup.StaticHashTable.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lookup.StaticHashTable.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lookup.StaticHashTable.default_value": "tf.lookup.StaticHashTable.default_value", + "tf.compat.v1.lookup.StaticHashTable.export": "tf.lookup.StaticHashTable.export", + "tf.compat.v1.lookup.StaticHashTable.key_dtype": "tf.lookup.StaticHashTable.key_dtype", + "tf.compat.v1.lookup.StaticHashTable.lookup": "tf.lookup.StaticHashTable.lookup", + "tf.compat.v1.lookup.StaticHashTable.name": "tf.lookup.StaticHashTable.name", + "tf.compat.v1.lookup.StaticHashTable.resource_handle": "tf.lookup.StaticHashTable.resource_handle", + "tf.compat.v1.lookup.StaticHashTable.size": "tf.lookup.StaticHashTable.size", + "tf.compat.v1.lookup.StaticHashTable.value_dtype": "tf.lookup.StaticHashTable.value_dtype", + "tf.compat.v1.lookup.StaticVocabularyTable.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lookup.StaticVocabularyTable.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lookup.StaticVocabularyTable.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lookup.StaticVocabularyTable.__init__": "tf.lookup.StaticVocabularyTable.__init__", + "tf.compat.v1.lookup.StaticVocabularyTable.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lookup.StaticVocabularyTable.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lookup.StaticVocabularyTable.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lookup.StaticVocabularyTable.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lookup.StaticVocabularyTable.key_dtype": 
"tf.lookup.StaticHashTable.key_dtype", + "tf.compat.v1.lookup.StaticVocabularyTable.lookup": "tf.lookup.StaticVocabularyTable.lookup", + "tf.compat.v1.lookup.StaticVocabularyTable.name": "tf.lookup.StaticVocabularyTable.name", + "tf.compat.v1.lookup.StaticVocabularyTable.resource_handle": "tf.lookup.StaticVocabularyTable.resource_handle", + "tf.compat.v1.lookup.StaticVocabularyTable.size": "tf.lookup.StaticVocabularyTable.size", + "tf.compat.v1.lookup.StaticVocabularyTable.value_dtype": "tf.lookup.StaticHashTable.value_dtype", + "tf.compat.v1.lookup.TextFileIndex": "tf.lookup.TextFileIndex", + "tf.compat.v1.lookup.TextFileIndex.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lookup.TextFileIndex.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lookup.TextFileIndex.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lookup.TextFileIndex.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.lookup.TextFileIndex.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lookup.TextFileIndex.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lookup.TextFileIndex.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lookup.TextFileIndex.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lookup.TextFileInitializer": "tf.lookup.TextFileInitializer", + "tf.compat.v1.lookup.TextFileInitializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lookup.TextFileInitializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lookup.TextFileInitializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lookup.TextFileInitializer.__init__": "tf.lookup.TextFileInitializer.__init__", + "tf.compat.v1.lookup.TextFileInitializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lookup.TextFileInitializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lookup.TextFileInitializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lookup.TextFileInitializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lookup.TextFileInitializer.initialize": "tf.lookup.TextFileInitializer.initialize", + "tf.compat.v1.lookup.TextFileInitializer.key_dtype": "tf.lookup.KeyValueTensorInitializer.key_dtype", + "tf.compat.v1.lookup.TextFileInitializer.value_dtype": "tf.lookup.KeyValueTensorInitializer.value_dtype", + "tf.compat.v1.lookup.experimental.DenseHashTable": "tf.lookup.experimental.DenseHashTable", + "tf.compat.v1.lookup.experimental.DenseHashTable.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.lookup.experimental.DenseHashTable.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.lookup.experimental.DenseHashTable.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.lookup.experimental.DenseHashTable.__init__": "tf.lookup.experimental.DenseHashTable.__init__", + "tf.compat.v1.lookup.experimental.DenseHashTable.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.lookup.experimental.DenseHashTable.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.lookup.experimental.DenseHashTable.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.lookup.experimental.DenseHashTable.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.lookup.experimental.DenseHashTable.erase": "tf.lookup.experimental.DenseHashTable.erase", + "tf.compat.v1.lookup.experimental.DenseHashTable.export": "tf.lookup.experimental.DenseHashTable.export", + "tf.compat.v1.lookup.experimental.DenseHashTable.insert": "tf.lookup.experimental.DenseHashTable.insert", + "tf.compat.v1.lookup.experimental.DenseHashTable.insert_or_assign": "tf.lookup.experimental.DenseHashTable.insert_or_assign", + 
"tf.compat.v1.lookup.experimental.DenseHashTable.key_dtype": "tf.lookup.StaticHashTable.key_dtype", + "tf.compat.v1.lookup.experimental.DenseHashTable.lookup": "tf.lookup.experimental.DenseHashTable.lookup", + "tf.compat.v1.lookup.experimental.DenseHashTable.name": "tf.lookup.experimental.DenseHashTable.name", + "tf.compat.v1.lookup.experimental.DenseHashTable.remove": "tf.lookup.experimental.DenseHashTable.remove", + "tf.compat.v1.lookup.experimental.DenseHashTable.resource_handle": "tf.lookup.StaticHashTable.resource_handle", + "tf.compat.v1.lookup.experimental.DenseHashTable.size": "tf.lookup.experimental.DenseHashTable.size", + "tf.compat.v1.lookup.experimental.DenseHashTable.value_dtype": "tf.lookup.StaticHashTable.value_dtype", + "tf.compat.v1.losses.Reduction.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.losses.Reduction.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.losses.Reduction.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.losses.Reduction.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.losses.Reduction.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.losses.Reduction.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.losses.Reduction.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.losses.Reduction.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.make_ndarray": "tf.make_ndarray", + "tf.compat.v1.make_tensor_proto": "tf.make_tensor_proto", + "tf.compat.v1.manip.batch_to_space_nd": "tf.compat.v1.batch_to_space_nd", + "tf.compat.v1.manip.gather_nd": "tf.compat.v1.gather_nd", + "tf.compat.v1.manip.reshape": "tf.reshape", + "tf.compat.v1.manip.reverse": "tf.reverse", + "tf.compat.v1.manip.roll": "tf.roll", + "tf.compat.v1.manip.scatter_nd": "tf.scatter_nd", + "tf.compat.v1.manip.space_to_batch_nd": "tf.space_to_batch_nd", + "tf.compat.v1.manip.tile": "tf.tile", + "tf.compat.v1.matching_files": "tf.io.matching_files", + "tf.compat.v1.math.abs": "tf.math.abs", + "tf.compat.v1.math.accumulate_n": "tf.math.accumulate_n", + "tf.compat.v1.math.acos": "tf.math.acos", + "tf.compat.v1.math.acosh": "tf.math.acosh", + "tf.compat.v1.math.add": "tf.math.add", + "tf.compat.v1.math.add_n": "tf.math.add_n", + "tf.compat.v1.math.angle": "tf.math.angle", + "tf.compat.v1.math.argmax": "tf.compat.v1.argmax", + "tf.compat.v1.math.argmin": "tf.compat.v1.argmin", + "tf.compat.v1.math.asin": "tf.math.asin", + "tf.compat.v1.math.asinh": "tf.math.asinh", + "tf.compat.v1.math.atan": "tf.math.atan", + "tf.compat.v1.math.atan2": "tf.math.atan2", + "tf.compat.v1.math.atanh": "tf.math.atanh", + "tf.compat.v1.math.bessel_i0": "tf.math.bessel_i0", + "tf.compat.v1.math.bessel_i0e": "tf.math.bessel_i0e", + "tf.compat.v1.math.bessel_i1": "tf.math.bessel_i1", + "tf.compat.v1.math.bessel_i1e": "tf.math.bessel_i1e", + "tf.compat.v1.math.betainc": "tf.math.betainc", + "tf.compat.v1.math.bincount": "tf.compat.v1.bincount", + "tf.compat.v1.math.ceil": "tf.math.ceil", + "tf.compat.v1.math.confusion_matrix": "tf.compat.v1.confusion_matrix", + "tf.compat.v1.math.conj": "tf.math.conj", + "tf.compat.v1.math.cos": "tf.math.cos", + "tf.compat.v1.math.cosh": "tf.math.cosh", + "tf.compat.v1.math.count_nonzero": "tf.compat.v1.count_nonzero", + "tf.compat.v1.math.cumprod": "tf.math.cumprod", + "tf.compat.v1.math.cumsum": "tf.math.cumsum", + "tf.compat.v1.math.cumulative_logsumexp": "tf.math.cumulative_logsumexp", + "tf.compat.v1.math.digamma": "tf.math.digamma", + "tf.compat.v1.math.divide": "tf.math.divide", + "tf.compat.v1.math.divide_no_nan": "tf.math.divide_no_nan", + 
"tf.compat.v1.math.equal": "tf.math.equal", + "tf.compat.v1.math.erf": "tf.math.erf", + "tf.compat.v1.math.erfc": "tf.math.erfc", + "tf.compat.v1.math.erfinv": "tf.math.erfinv", + "tf.compat.v1.math.exp": "tf.math.exp", + "tf.compat.v1.math.expm1": "tf.math.expm1", + "tf.compat.v1.math.floor": "tf.math.floor", + "tf.compat.v1.math.floordiv": "tf.math.floordiv", + "tf.compat.v1.math.floormod": "tf.math.floormod", + "tf.compat.v1.math.greater": "tf.math.greater", + "tf.compat.v1.math.greater_equal": "tf.math.greater_equal", + "tf.compat.v1.math.igamma": "tf.math.igamma", + "tf.compat.v1.math.igammac": "tf.math.igammac", + "tf.compat.v1.math.imag": "tf.math.imag", + "tf.compat.v1.math.invert_permutation": "tf.math.invert_permutation", + "tf.compat.v1.math.is_finite": "tf.math.is_finite", + "tf.compat.v1.math.is_inf": "tf.math.is_inf", + "tf.compat.v1.math.is_nan": "tf.math.is_nan", + "tf.compat.v1.math.is_non_decreasing": "tf.math.is_non_decreasing", + "tf.compat.v1.math.is_strictly_increasing": "tf.math.is_strictly_increasing", + "tf.compat.v1.math.l2_normalize": "tf.compat.v1.linalg.l2_normalize", + "tf.compat.v1.math.lbeta": "tf.math.lbeta", + "tf.compat.v1.math.less": "tf.math.less", + "tf.compat.v1.math.less_equal": "tf.math.less_equal", + "tf.compat.v1.math.lgamma": "tf.math.lgamma", + "tf.compat.v1.math.log": "tf.math.log", + "tf.compat.v1.math.log1p": "tf.math.log1p", + "tf.compat.v1.math.log_sigmoid": "tf.math.log_sigmoid", + "tf.compat.v1.math.logical_and": "tf.math.logical_and", + "tf.compat.v1.math.logical_not": "tf.math.logical_not", + "tf.compat.v1.math.logical_or": "tf.math.logical_or", + "tf.compat.v1.math.logical_xor": "tf.math.logical_xor", + "tf.compat.v1.math.maximum": "tf.math.maximum", + "tf.compat.v1.math.minimum": "tf.math.minimum", + "tf.compat.v1.math.mod": "tf.math.floormod", + "tf.compat.v1.math.multiply": "tf.math.multiply", + "tf.compat.v1.math.multiply_no_nan": "tf.math.multiply_no_nan", + "tf.compat.v1.math.ndtri": "tf.math.ndtri", + "tf.compat.v1.math.negative": "tf.math.negative", + "tf.compat.v1.math.nextafter": "tf.math.nextafter", + "tf.compat.v1.math.not_equal": "tf.math.not_equal", + "tf.compat.v1.math.polygamma": "tf.math.polygamma", + "tf.compat.v1.math.polyval": "tf.math.polyval", + "tf.compat.v1.math.pow": "tf.math.pow", + "tf.compat.v1.math.real": "tf.math.real", + "tf.compat.v1.math.reciprocal": "tf.math.reciprocal", + "tf.compat.v1.math.reciprocal_no_nan": "tf.math.reciprocal_no_nan", + "tf.compat.v1.math.reduce_all": "tf.compat.v1.reduce_all", + "tf.compat.v1.math.reduce_any": "tf.compat.v1.reduce_any", + "tf.compat.v1.math.reduce_euclidean_norm": "tf.math.reduce_euclidean_norm", + "tf.compat.v1.math.reduce_logsumexp": "tf.compat.v1.reduce_logsumexp", + "tf.compat.v1.math.reduce_max": "tf.compat.v1.reduce_max", + "tf.compat.v1.math.reduce_mean": "tf.compat.v1.reduce_mean", + "tf.compat.v1.math.reduce_min": "tf.compat.v1.reduce_min", + "tf.compat.v1.math.reduce_prod": "tf.compat.v1.reduce_prod", + "tf.compat.v1.math.reduce_std": "tf.math.reduce_std", + "tf.compat.v1.math.reduce_sum": "tf.compat.v1.reduce_sum", + "tf.compat.v1.math.reduce_variance": "tf.math.reduce_variance", + "tf.compat.v1.math.rint": "tf.math.rint", + "tf.compat.v1.math.round": "tf.math.round", + "tf.compat.v1.math.rsqrt": "tf.math.rsqrt", + "tf.compat.v1.math.scalar_mul": "tf.compat.v1.scalar_mul", + "tf.compat.v1.math.segment_max": "tf.math.segment_max", + "tf.compat.v1.math.segment_mean": "tf.math.segment_mean", + "tf.compat.v1.math.segment_min": "tf.math.segment_min", + 
"tf.compat.v1.math.segment_prod": "tf.math.segment_prod", + "tf.compat.v1.math.segment_sum": "tf.math.segment_sum", + "tf.compat.v1.math.sigmoid": "tf.math.sigmoid", + "tf.compat.v1.math.sign": "tf.math.sign", + "tf.compat.v1.math.sin": "tf.math.sin", + "tf.compat.v1.math.sinh": "tf.math.sinh", + "tf.compat.v1.math.sobol_sample": "tf.math.sobol_sample", + "tf.compat.v1.math.softplus": "tf.math.softplus", + "tf.compat.v1.math.softsign": "tf.nn.softsign", + "tf.compat.v1.math.special.dawsn": "tf.math.special.dawsn", + "tf.compat.v1.math.special.expint": "tf.math.special.expint", + "tf.compat.v1.math.special.fresnel_cos": "tf.math.special.fresnel_cos", + "tf.compat.v1.math.special.fresnel_sin": "tf.math.special.fresnel_sin", + "tf.compat.v1.math.special.spence": "tf.math.special.spence", + "tf.compat.v1.math.sqrt": "tf.math.sqrt", + "tf.compat.v1.math.square": "tf.math.square", + "tf.compat.v1.math.squared_difference": "tf.math.squared_difference", + "tf.compat.v1.math.subtract": "tf.math.subtract", + "tf.compat.v1.math.tan": "tf.math.tan", + "tf.compat.v1.math.tanh": "tf.math.tanh", + "tf.compat.v1.math.top_k": "tf.math.top_k", + "tf.compat.v1.math.truediv": "tf.math.truediv", + "tf.compat.v1.math.unsorted_segment_max": "tf.math.unsorted_segment_max", + "tf.compat.v1.math.unsorted_segment_mean": "tf.math.unsorted_segment_mean", + "tf.compat.v1.math.unsorted_segment_min": "tf.math.unsorted_segment_min", + "tf.compat.v1.math.unsorted_segment_prod": "tf.math.unsorted_segment_prod", + "tf.compat.v1.math.unsorted_segment_sqrt_n": "tf.math.unsorted_segment_sqrt_n", + "tf.compat.v1.math.unsorted_segment_sum": "tf.math.unsorted_segment_sum", + "tf.compat.v1.math.xdivy": "tf.math.xdivy", + "tf.compat.v1.math.xlog1py": "tf.math.xlog1py", + "tf.compat.v1.math.xlogy": "tf.math.xlogy", + "tf.compat.v1.math.zero_fraction": "tf.math.zero_fraction", + "tf.compat.v1.math.zeta": "tf.math.zeta", + "tf.compat.v1.matmul": "tf.linalg.matmul", + "tf.compat.v1.matrix_band_part": "tf.linalg.band_part", + "tf.compat.v1.matrix_determinant": "tf.linalg.det", + "tf.compat.v1.matrix_diag": "tf.linalg.diag", + "tf.compat.v1.matrix_diag_part": "tf.linalg.diag_part", + "tf.compat.v1.matrix_inverse": "tf.linalg.inv", + "tf.compat.v1.matrix_set_diag": "tf.linalg.set_diag", + "tf.compat.v1.matrix_solve": "tf.linalg.solve", + "tf.compat.v1.matrix_solve_ls": "tf.linalg.lstsq", + "tf.compat.v1.matrix_square_root": "tf.linalg.sqrtm", + "tf.compat.v1.matrix_transpose": "tf.linalg.matrix_transpose", + "tf.compat.v1.matrix_triangular_solve": "tf.linalg.triangular_solve", + "tf.compat.v1.maximum": "tf.math.maximum", + "tf.compat.v1.meshgrid": "tf.meshgrid", + "tf.compat.v1.minimum": "tf.math.minimum", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale": "tf.mixed_precision.experimental.DynamicLossScale", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__call__": "tf.mixed_precision.experimental.DynamicLossScale.__call__", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__init__": "tf.mixed_precision.experimental.DynamicLossScale.__init__", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__lt__": 
"tf.keras.Model.__lt__", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.get_config": "tf.mixed_precision.experimental.DynamicLossScale.get_config", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.increment_period": "tf.mixed_precision.experimental.DynamicLossScale.increment_period", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.initial_loss_scale": "tf.mixed_precision.experimental.DynamicLossScale.initial_loss_scale", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.multiplier": "tf.mixed_precision.experimental.DynamicLossScale.multiplier", + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.update": "tf.mixed_precision.experimental.DynamicLossScale.update", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale": "tf.mixed_precision.experimental.FixedLossScale", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__call__": "tf.mixed_precision.experimental.FixedLossScale.__call__", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__init__": "tf.mixed_precision.experimental.FixedLossScale.__init__", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.get_config": "tf.mixed_precision.experimental.FixedLossScale.get_config", + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.update": "tf.mixed_precision.experimental.FixedLossScale.update", + "tf.compat.v1.mixed_precision.experimental.LossScale": "tf.mixed_precision.experimental.LossScale", + "tf.compat.v1.mixed_precision.experimental.LossScale.__call__": "tf.mixed_precision.experimental.LossScale.__call__", + "tf.compat.v1.mixed_precision.experimental.LossScale.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.mixed_precision.experimental.LossScale.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.mixed_precision.experimental.LossScale.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.mixed_precision.experimental.LossScale.__init__": "tf.mixed_precision.experimental.LossScale.__init__", + "tf.compat.v1.mixed_precision.experimental.LossScale.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.mixed_precision.experimental.LossScale.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.mixed_precision.experimental.LossScale.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.mixed_precision.experimental.LossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.mixed_precision.experimental.LossScale.get_config": "tf.mixed_precision.experimental.LossScale.get_config", + "tf.compat.v1.mixed_precision.experimental.LossScale.update": "tf.mixed_precision.experimental.LossScale.update", + "tf.compat.v1.mlir.experimental.convert_graph_def": 
"tf.mlir.experimental.convert_graph_def", + "tf.compat.v1.mod": "tf.math.floormod", + "tf.compat.v1.multiply": "tf.math.multiply", + "tf.compat.v1.name_scope": "tf.compat.v1.keras.backend.name_scope", + "tf.compat.v1.name_scope.__enter__": "tf.compat.v1.keras.backend.name_scope.__enter__", + "tf.compat.v1.name_scope.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.name_scope.__exit__": "tf.compat.v1.keras.backend.name_scope.__exit__", + "tf.compat.v1.name_scope.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.name_scope.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.name_scope.__init__": "tf.compat.v1.keras.backend.name_scope.__init__", + "tf.compat.v1.name_scope.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.name_scope.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.name_scope.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.name_scope.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.name_scope.name": "tf.compat.v1.keras.backend.name_scope.name", + "tf.compat.v1.negative": "tf.math.negative", + "tf.compat.v1.nest.assert_same_structure": "tf.nest.assert_same_structure", + "tf.compat.v1.nest.flatten": "tf.nest.flatten", + "tf.compat.v1.nest.is_nested": "tf.nest.is_nested", + "tf.compat.v1.nest.map_structure": "tf.nest.map_structure", + "tf.compat.v1.nest.pack_sequence_as": "tf.nest.pack_sequence_as", + "tf.compat.v1.nn.all_candidate_sampler": "tf.random.all_candidate_sampler", + "tf.compat.v1.nn.atrous_conv2d": "tf.nn.atrous_conv2d", + "tf.compat.v1.nn.atrous_conv2d_transpose": "tf.nn.atrous_conv2d_transpose", + "tf.compat.v1.nn.avg_pool1d": "tf.nn.avg_pool1d", + "tf.compat.v1.nn.avg_pool2d": "tf.compat.v1.nn.avg_pool", + "tf.compat.v1.nn.avg_pool3d": "tf.nn.avg_pool3d", + "tf.compat.v1.nn.avg_pool_v2": "tf.nn.avg_pool", + "tf.compat.v1.nn.batch_normalization": "tf.nn.batch_normalization", + "tf.compat.v1.nn.bias_add": "tf.nn.bias_add", + "tf.compat.v1.nn.collapse_repeated": "tf.nn.collapse_repeated", + "tf.compat.v1.nn.compute_accidental_hits": "tf.nn.compute_accidental_hits", + "tf.compat.v1.nn.compute_average_loss": "tf.nn.compute_average_loss", + "tf.compat.v1.nn.conv1d_transpose": "tf.nn.conv1d_transpose", + "tf.compat.v1.nn.conv3d_backprop_filter_v2": "tf.compat.v1.nn.conv3d_backprop_filter", + "tf.compat.v1.nn.conv_transpose": "tf.nn.conv_transpose", + "tf.compat.v1.nn.ctc_beam_search_decoder_v2": "tf.nn.ctc_beam_search_decoder", + "tf.compat.v1.nn.ctc_greedy_decoder": "tf.nn.ctc_greedy_decoder", + "tf.compat.v1.nn.ctc_unique_labels": "tf.nn.ctc_unique_labels", + "tf.compat.v1.nn.depth_to_space": "tf.compat.v1.depth_to_space", + "tf.compat.v1.nn.depthwise_conv2d_backprop_filter": "tf.nn.depthwise_conv2d_backprop_filter", + "tf.compat.v1.nn.depthwise_conv2d_backprop_input": "tf.nn.depthwise_conv2d_backprop_input", + "tf.compat.v1.nn.depthwise_conv2d_native_backprop_filter": "tf.nn.depthwise_conv2d_backprop_filter", + "tf.compat.v1.nn.depthwise_conv2d_native_backprop_input": "tf.nn.depthwise_conv2d_backprop_input", + "tf.compat.v1.nn.elu": "tf.nn.elu", + "tf.compat.v1.nn.fixed_unigram_candidate_sampler": "tf.random.fixed_unigram_candidate_sampler", + "tf.compat.v1.nn.in_top_k": "tf.compat.v1.math.in_top_k", + "tf.compat.v1.nn.l2_loss": "tf.nn.l2_loss", + "tf.compat.v1.nn.l2_normalize": "tf.compat.v1.linalg.l2_normalize", + "tf.compat.v1.nn.leaky_relu": "tf.nn.leaky_relu", + "tf.compat.v1.nn.learned_unigram_candidate_sampler": "tf.random.learned_unigram_candidate_sampler", + "tf.compat.v1.nn.local_response_normalization": 
"tf.nn.local_response_normalization", + "tf.compat.v1.nn.log_poisson_loss": "tf.nn.log_poisson_loss", + "tf.compat.v1.nn.log_softmax": "tf.compat.v1.math.log_softmax", + "tf.compat.v1.nn.log_uniform_candidate_sampler": "tf.random.log_uniform_candidate_sampler", + "tf.compat.v1.nn.lrn": "tf.nn.local_response_normalization", + "tf.compat.v1.nn.max_pool1d": "tf.nn.max_pool1d", + "tf.compat.v1.nn.max_pool2d": "tf.nn.max_pool2d", + "tf.compat.v1.nn.max_pool3d": "tf.nn.max_pool3d", + "tf.compat.v1.nn.max_pool_v2": "tf.nn.max_pool", + "tf.compat.v1.nn.normalize_moments": "tf.nn.normalize_moments", + "tf.compat.v1.nn.relu": "tf.nn.relu", + "tf.compat.v1.nn.relu6": "tf.nn.relu6", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.get_initial_state": "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.submodules": "tf.Module.submodules", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.trainable": "tf.keras.layers.Layer.trainable", + 
"tf.compat.v1.nn.rnn_cell.BasicLSTMCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.zero_state": "tf.compat.v1.nn.rnn_cell.RNNCell.zero_state", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__call__": "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__call__", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.get_initial_state": "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.submodules": "tf.Module.submodules", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.zero_state": "tf.compat.v1.nn.rnn_cell.RNNCell.zero_state", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__eq__": 
"tf.keras.Model.__eq__", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.build": "tf.compat.v1.nn.rnn_cell.RNNCell.build", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.get_config": "tf.nn.RNNCellDeviceWrapper.get_config", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.get_initial_state": "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.name_scope": "tf.Module.name_scope", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.output_size": "tf.nn.RNNCellDeviceWrapper.output_size", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.state_size": "tf.nn.RNNCellDeviceWrapper.state_size", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.submodules": "tf.Module.submodules", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.zero_state": "tf.nn.RNNCellDeviceWrapper.zero_state", + 
"tf.compat.v1.nn.rnn_cell.DropoutWrapper.__call__": "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__call__", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.build": "tf.nn.RNNCellDropoutWrapper.build", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.get_config": "tf.nn.RNNCellDropoutWrapper.get_config", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.get_initial_state": "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.name_scope": "tf.Module.name_scope", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.output_size": "tf.nn.RNNCellDropoutWrapper.output_size", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.state_size": "tf.nn.RNNCellDropoutWrapper.state_size", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.submodules": "tf.Module.submodules", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.trainable_weights": 
"tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.wrapped_cell": "tf.nn.RNNCellDropoutWrapper.wrapped_cell", + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.zero_state": "tf.nn.RNNCellDropoutWrapper.zero_state", + "tf.compat.v1.nn.rnn_cell.GRUCell.__call__": "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__call__", + "tf.compat.v1.nn.rnn_cell.GRUCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.nn.rnn_cell.GRUCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.nn.rnn_cell.GRUCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.nn.rnn_cell.GRUCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.nn.rnn_cell.GRUCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.nn.rnn_cell.GRUCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.nn.rnn_cell.GRUCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.nn.rnn_cell.GRUCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.nn.rnn_cell.GRUCell.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.nn.rnn_cell.GRUCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.nn.rnn_cell.GRUCell.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.nn.rnn_cell.GRUCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.nn.rnn_cell.GRUCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.nn.rnn_cell.GRUCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.nn.rnn_cell.GRUCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.nn.rnn_cell.GRUCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.nn.rnn_cell.GRUCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.nn.rnn_cell.GRUCell.get_initial_state": "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state", + "tf.compat.v1.nn.rnn_cell.GRUCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.nn.rnn_cell.GRUCell.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.nn.rnn_cell.GRUCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.nn.rnn_cell.GRUCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.nn.rnn_cell.GRUCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.nn.rnn_cell.GRUCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.nn.rnn_cell.GRUCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.nn.rnn_cell.GRUCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.nn.rnn_cell.GRUCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.nn.rnn_cell.GRUCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.nn.rnn_cell.GRUCell.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.nn.rnn_cell.GRUCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.nn.rnn_cell.GRUCell.submodules": "tf.Module.submodules", + "tf.compat.v1.nn.rnn_cell.GRUCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.nn.rnn_cell.GRUCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.nn.rnn_cell.GRUCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.nn.rnn_cell.GRUCell.zero_state": "tf.compat.v1.nn.rnn_cell.RNNCell.zero_state", + "tf.compat.v1.nn.rnn_cell.LSTMCell.__call__": "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__call__", + "tf.compat.v1.nn.rnn_cell.LSTMCell.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.nn.rnn_cell.LSTMCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.nn.rnn_cell.LSTMCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.nn.rnn_cell.LSTMCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.nn.rnn_cell.LSTMCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.nn.rnn_cell.LSTMCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.nn.rnn_cell.LSTMCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.nn.rnn_cell.LSTMCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.nn.rnn_cell.LSTMCell.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.nn.rnn_cell.LSTMCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.nn.rnn_cell.LSTMCell.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.nn.rnn_cell.LSTMCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.nn.rnn_cell.LSTMCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.nn.rnn_cell.LSTMCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.nn.rnn_cell.LSTMCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.nn.rnn_cell.LSTMCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.nn.rnn_cell.LSTMCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.nn.rnn_cell.LSTMCell.get_initial_state": "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state", + "tf.compat.v1.nn.rnn_cell.LSTMCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.nn.rnn_cell.LSTMCell.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.nn.rnn_cell.LSTMCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.nn.rnn_cell.LSTMCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.nn.rnn_cell.LSTMCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.nn.rnn_cell.LSTMCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.nn.rnn_cell.LSTMCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.nn.rnn_cell.LSTMCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.nn.rnn_cell.LSTMCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.nn.rnn_cell.LSTMCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.nn.rnn_cell.LSTMCell.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.nn.rnn_cell.LSTMCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.nn.rnn_cell.LSTMCell.submodules": "tf.Module.submodules", + "tf.compat.v1.nn.rnn_cell.LSTMCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.nn.rnn_cell.LSTMCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.nn.rnn_cell.LSTMCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.nn.rnn_cell.LSTMCell.zero_state": "tf.compat.v1.nn.rnn_cell.RNNCell.zero_state", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__init__": "tf.keras.constraints.Constraint.__init__", + 
"tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__call__": "tf.compat.v1.nn.rnn_cell.RNNCell.__call__", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.build": "tf.compat.v1.nn.rnn_cell.RNNCell.build", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.get_config": "tf.compat.v1.nn.rnn_cell.RNNCell.get_config", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.get_initial_state": "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.scope_name": "tf.compat.v1.layers.Layer.scope_name", + 
"tf.compat.v1.nn.rnn_cell.MultiRNNCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.submodules": "tf.Module.submodules", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.nn.rnn_cell.RNNCell.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.nn.rnn_cell.RNNCell.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.nn.rnn_cell.RNNCell.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.nn.rnn_cell.RNNCell.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.nn.rnn_cell.RNNCell.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.nn.rnn_cell.RNNCell.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.nn.rnn_cell.RNNCell.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.nn.rnn_cell.RNNCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.nn.rnn_cell.RNNCell.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.nn.rnn_cell.RNNCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.nn.rnn_cell.RNNCell.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.nn.rnn_cell.RNNCell.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.nn.rnn_cell.RNNCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.nn.rnn_cell.RNNCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.nn.rnn_cell.RNNCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.nn.rnn_cell.RNNCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.nn.rnn_cell.RNNCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.nn.rnn_cell.RNNCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.nn.rnn_cell.RNNCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.nn.rnn_cell.RNNCell.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.nn.rnn_cell.RNNCell.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.nn.rnn_cell.RNNCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.nn.rnn_cell.RNNCell.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.nn.rnn_cell.RNNCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.nn.rnn_cell.RNNCell.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.nn.rnn_cell.RNNCell.name_scope": "tf.Module.name_scope", + "tf.compat.v1.nn.rnn_cell.RNNCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.nn.rnn_cell.RNNCell.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.nn.rnn_cell.RNNCell.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.nn.rnn_cell.RNNCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.nn.rnn_cell.RNNCell.submodules": "tf.Module.submodules", + "tf.compat.v1.nn.rnn_cell.RNNCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.nn.rnn_cell.RNNCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.nn.rnn_cell.RNNCell.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__call__": "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__call__", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__le__": 
"tf.keras.Model.__le__", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__new__": "tf.keras.Model.__new__", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.add_loss": "tf.compat.v1.layers.Layer.add_loss", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.add_weight": "tf.compat.v1.layers.Layer.add_weight", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.build": "tf.compat.v1.nn.rnn_cell.RNNCell.build", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.call": "tf.keras.layers.Layer.call", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.count_params": "tf.keras.layers.Layer.count_params", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.dtype": "tf.keras.layers.Layer.dtype", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.get_config": "tf.nn.RNNCellResidualWrapper.get_config", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.get_initial_state": "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.graph": "tf.compat.v1.layers.Layer.graph", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.input": "tf.keras.layers.Layer.input", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.losses": "tf.keras.layers.Layer.losses", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.metrics": "tf.keras.layers.Layer.metrics", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.name": "tf.keras.layers.Layer.name", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.name_scope": "tf.Module.name_scope", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.output": "tf.keras.layers.Layer.output", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.output_size": "tf.nn.RNNCellResidualWrapper.output_size", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.scope_name": "tf.compat.v1.layers.Layer.scope_name", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.state_size": "tf.nn.RNNCellResidualWrapper.state_size", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.submodules": "tf.Module.submodules", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.trainable": "tf.keras.layers.Layer.trainable", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.weights": "tf.keras.layers.Layer.weights", + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.zero_state": "tf.nn.RNNCellResidualWrapper.zero_state", + "tf.compat.v1.nn.scale_regularization_loss": "tf.nn.scale_regularization_loss", + "tf.compat.v1.nn.selu": "tf.nn.selu", + "tf.compat.v1.nn.sigmoid": 
"tf.math.sigmoid", + "tf.compat.v1.nn.softmax": "tf.compat.v1.math.softmax", + "tf.compat.v1.nn.softplus": "tf.math.softplus", + "tf.compat.v1.nn.softsign": "tf.nn.softsign", + "tf.compat.v1.nn.space_to_batch": "tf.compat.v1.space_to_batch", + "tf.compat.v1.nn.space_to_depth": "tf.compat.v1.space_to_depth", + "tf.compat.v1.nn.swish": "tf.nn.swish", + "tf.compat.v1.nn.tanh": "tf.math.tanh", + "tf.compat.v1.nn.top_k": "tf.math.top_k", + "tf.compat.v1.nn.uniform_candidate_sampler": "tf.random.uniform_candidate_sampler", + "tf.compat.v1.nn.with_space_to_batch": "tf.nn.with_space_to_batch", + "tf.compat.v1.nn.zero_fraction": "tf.math.zero_fraction", + "tf.compat.v1.no_gradient": "tf.no_gradient", + "tf.compat.v1.no_op": "tf.no_op", + "tf.compat.v1.nondifferentiable_batch_function": "tf.nondifferentiable_batch_function", + "tf.compat.v1.not_equal": "tf.math.not_equal", + "tf.compat.v1.numpy_function": "tf.numpy_function", + "tf.compat.v1.one_hot": "tf.one_hot", + "tf.compat.v1.ones": "tf.ones", + "tf.compat.v1.ones_initializer": "tf.compat.v1.keras.initializers.Ones", + "tf.compat.v1.ones_initializer.__call__": "tf.compat.v1.keras.initializers.Ones.__call__", + "tf.compat.v1.ones_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.ones_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.ones_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.ones_initializer.__init__": "tf.compat.v1.keras.initializers.Ones.__init__", + "tf.compat.v1.ones_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.ones_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.ones_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.ones_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.ones_initializer.get_config": "tf.compat.v1.keras.initializers.Ones.get_config", + "tf.compat.v1.orthogonal_initializer": "tf.compat.v1.keras.initializers.Orthogonal", + "tf.compat.v1.orthogonal_initializer.__call__": "tf.compat.v1.keras.initializers.Orthogonal.__call__", + "tf.compat.v1.orthogonal_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.orthogonal_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.orthogonal_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.orthogonal_initializer.__init__": "tf.compat.v1.keras.initializers.Orthogonal.__init__", + "tf.compat.v1.orthogonal_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.orthogonal_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.orthogonal_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.orthogonal_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.orthogonal_initializer.get_config": "tf.compat.v1.keras.initializers.Orthogonal.get_config", + "tf.compat.v1.parallel_stack": "tf.parallel_stack", + "tf.compat.v1.parse_single_sequence_example": "tf.io.parse_single_sequence_example", + "tf.compat.v1.parse_tensor": "tf.io.parse_tensor", + "tf.compat.v1.polygamma": "tf.math.polygamma", + "tf.compat.v1.pow": "tf.math.pow", + "tf.compat.v1.print": "tf.print", + "tf.compat.v1.profiler.AdviceProto.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.profiler.AdviceProto.Checker.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.profiler.AdviceProto.Checker.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.profiler.AdviceProto.Checker.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.profiler.AdviceProto.Checker.ClearField": "tf.train.BytesList.ClearField", + 
"tf.compat.v1.profiler.AdviceProto.Checker.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.profiler.AdviceProto.Checker.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.profiler.AdviceProto.Checker.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.profiler.AdviceProto.Checker.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.profiler.AdviceProto.Checker.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.profiler.AdviceProto.Checker.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.profiler.AdviceProto.Checker.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.profiler.AdviceProto.Checker.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.profiler.AdviceProto.Checker.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.profiler.AdviceProto.Checker.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.profiler.AdviceProto.Checker.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.profiler.AdviceProto.Checker.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.profiler.AdviceProto.Checker.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.profiler.AdviceProto.Checker.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.profiler.AdviceProto.Checker.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.profiler.AdviceProto.Checker.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.profiler.AdviceProto.Checker.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.profiler.AdviceProto.Checker.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.profiler.AdviceProto.Checker.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.profiler.AdviceProto.Checker.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.profiler.AdviceProto.Checker.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.profiler.AdviceProto.Checker.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.profiler.AdviceProto.Checker.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.profiler.AdviceProto.Checker.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + 
"tf.compat.v1.profiler.AdviceProto.CheckersEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.profiler.AdviceProto.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.profiler.AdviceProto.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.profiler.AdviceProto.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.profiler.AdviceProto.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.profiler.AdviceProto.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.profiler.AdviceProto.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.profiler.AdviceProto.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.profiler.AdviceProto.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.profiler.AdviceProto.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.profiler.AdviceProto.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.profiler.AdviceProto.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.profiler.AdviceProto.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.profiler.AdviceProto.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.profiler.AdviceProto.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.profiler.AdviceProto.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.profiler.AdviceProto.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.profiler.AdviceProto.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.profiler.AdviceProto.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.profiler.AdviceProto.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.profiler.AdviceProto.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.profiler.AdviceProto.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.profiler.AdviceProto.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.profiler.AdviceProto.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.profiler.AdviceProto.__le__": "tf.train.BytesList.__le__", + 
"tf.compat.v1.profiler.AdviceProto.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.profiler.AdviceProto.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.profiler.AdviceProto.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.profiler.GraphNodeProto.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.profiler.GraphNodeProto.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.profiler.GraphNodeProto.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.profiler.GraphNodeProto.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.profiler.GraphNodeProto.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.profiler.GraphNodeProto.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.profiler.GraphNodeProto.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.profiler.GraphNodeProto.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.profiler.GraphNodeProto.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.profiler.GraphNodeProto.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__gt__": 
"tf.train.BytesList.__gt__", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.profiler.GraphNodeProto.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.profiler.GraphNodeProto.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.profiler.GraphNodeProto.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.profiler.GraphNodeProto.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.profiler.GraphNodeProto.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.profiler.GraphNodeProto.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.profiler.GraphNodeProto.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.profiler.GraphNodeProto.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.profiler.GraphNodeProto.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.profiler.GraphNodeProto.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.profiler.GraphNodeProto.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.profiler.GraphNodeProto.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.profiler.GraphNodeProto.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.profiler.GraphNodeProto.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.profiler.GraphNodeProto.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.profiler.GraphNodeProto.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.profiler.GraphNodeProto.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.profiler.GraphNodeProto.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.profiler.MultiGraphNodeProto.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.profiler.MultiGraphNodeProto.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.profiler.MultiGraphNodeProto.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.profiler.MultiGraphNodeProto.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.profiler.MultiGraphNodeProto.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.profiler.MultiGraphNodeProto.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.profiler.MultiGraphNodeProto.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.profiler.MultiGraphNodeProto.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.profiler.MultiGraphNodeProto.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.profiler.MultiGraphNodeProto.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.profiler.MultiGraphNodeProto.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.profiler.MultiGraphNodeProto.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.profiler.MultiGraphNodeProto.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.profiler.MultiGraphNodeProto.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.profiler.MultiGraphNodeProto.ParseFromString": "tf.train.BytesList.ParseFromString", + 
"tf.compat.v1.profiler.MultiGraphNodeProto.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.profiler.MultiGraphNodeProto.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.profiler.MultiGraphNodeProto.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.profiler.MultiGraphNodeProto.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.profiler.MultiGraphNodeProto.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.profiler.MultiGraphNodeProto.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.profiler.MultiGraphNodeProto.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.profiler.MultiGraphNodeProto.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.profiler.MultiGraphNodeProto.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.profiler.MultiGraphNodeProto.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.profiler.MultiGraphNodeProto.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.profiler.MultiGraphNodeProto.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.profiler.MultiGraphNodeProto.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.profiler.OpLogProto.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.profiler.OpLogProto.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.profiler.OpLogProto.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.profiler.OpLogProto.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.profiler.OpLogProto.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.profiler.OpLogProto.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.profiler.OpLogProto.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.profiler.OpLogProto.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.profiler.OpLogProto.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.profiler.OpLogProto.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.ParseFromString": 
"tf.train.BytesList.ParseFromString", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.profiler.OpLogProto.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.profiler.OpLogProto.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.profiler.OpLogProto.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.profiler.OpLogProto.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.profiler.OpLogProto.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.profiler.OpLogProto.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.profiler.OpLogProto.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.profiler.OpLogProto.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.profiler.OpLogProto.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.profiler.OpLogProto.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.profiler.OpLogProto.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.profiler.OpLogProto.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.profiler.OpLogProto.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.profiler.OpLogProto.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.profiler.OpLogProto.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.profiler.OpLogProto.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.profiler.OpLogProto.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.profiler.OpLogProto.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.profiler.ProfileOptionBuilder.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.profiler.ProfileOptionBuilder.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.profiler.ProfileOptionBuilder.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.profiler.ProfileOptionBuilder.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.profiler.ProfileOptionBuilder.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.profiler.ProfileOptionBuilder.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.profiler.ProfileOptionBuilder.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.profiler.Profiler.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.profiler.Profiler.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.profiler.Profiler.__gt__": 
"tf.keras.Model.__gt__", + "tf.compat.v1.profiler.Profiler.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.profiler.Profiler.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.profiler.Profiler.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.profiler.Profiler.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.py_function": "tf.py_function", + "tf.compat.v1.python_io.TFRecordCompressionType": "tf.compat.v1.io.TFRecordCompressionType", + "tf.compat.v1.python_io.TFRecordCompressionType.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.python_io.TFRecordCompressionType.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.python_io.TFRecordCompressionType.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.python_io.TFRecordCompressionType.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.python_io.TFRecordCompressionType.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.python_io.TFRecordCompressionType.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.python_io.TFRecordCompressionType.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.python_io.TFRecordCompressionType.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.python_io.TFRecordOptions": "tf.io.TFRecordOptions", + "tf.compat.v1.python_io.TFRecordOptions.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.python_io.TFRecordOptions.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.python_io.TFRecordOptions.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.python_io.TFRecordOptions.__init__": "tf.io.TFRecordOptions.__init__", + "tf.compat.v1.python_io.TFRecordOptions.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.python_io.TFRecordOptions.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.python_io.TFRecordOptions.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.python_io.TFRecordOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.python_io.TFRecordOptions.compression_type_map": "tf.io.TFRecordOptions.compression_type_map", + "tf.compat.v1.python_io.TFRecordWriter": "tf.io.TFRecordWriter", + "tf.compat.v1.python_io.TFRecordWriter.__enter__": "tf.io.TFRecordWriter.__enter__", + "tf.compat.v1.python_io.TFRecordWriter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.python_io.TFRecordWriter.__exit__": "tf.io.TFRecordWriter.__exit__", + "tf.compat.v1.python_io.TFRecordWriter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.python_io.TFRecordWriter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.python_io.TFRecordWriter.__init__": "tf.io.TFRecordWriter.__init__", + "tf.compat.v1.python_io.TFRecordWriter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.python_io.TFRecordWriter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.python_io.TFRecordWriter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.python_io.TFRecordWriter.__new__": "tf.dtypes.DType.__new__", + "tf.compat.v1.python_io.TFRecordWriter.close": "tf.io.TFRecordWriter.close", + "tf.compat.v1.python_io.TFRecordWriter.flush": "tf.io.TFRecordWriter.flush", + "tf.compat.v1.python_io.TFRecordWriter.write": "tf.io.TFRecordWriter.write", + "tf.compat.v1.python_io.tf_record_iterator": "tf.compat.v1.io.tf_record_iterator", + "tf.compat.v1.qint16": "tf.dtypes.qint16", + "tf.compat.v1.qint32": "tf.dtypes.qint32", + "tf.compat.v1.qint8": "tf.dtypes.qint8", + "tf.compat.v1.qr": "tf.linalg.qr", + "tf.compat.v1.quantization.dequantize": "tf.quantization.dequantize", + "tf.compat.v1.quantization.fake_quant_with_min_max_args": "tf.quantization.fake_quant_with_min_max_args", + 
"tf.compat.v1.quantization.fake_quant_with_min_max_args_gradient": "tf.quantization.fake_quant_with_min_max_args_gradient", + "tf.compat.v1.quantization.fake_quant_with_min_max_vars": "tf.quantization.fake_quant_with_min_max_vars", + "tf.compat.v1.quantization.fake_quant_with_min_max_vars_gradient": "tf.quantization.fake_quant_with_min_max_vars_gradient", + "tf.compat.v1.quantization.fake_quant_with_min_max_vars_per_channel": "tf.quantization.fake_quant_with_min_max_vars_per_channel", + "tf.compat.v1.quantization.fake_quant_with_min_max_vars_per_channel_gradient": "tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient", + "tf.compat.v1.quantization.quantize": "tf.quantization.quantize", + "tf.compat.v1.quantization.quantize_and_dequantize": "tf.quantization.quantize_and_dequantize", + "tf.compat.v1.quantization.quantized_concat": "tf.quantization.quantized_concat", + "tf.compat.v1.quantize": "tf.quantization.quantize", + "tf.compat.v1.quantized_concat": "tf.quantization.quantized_concat", + "tf.compat.v1.queue.FIFOQueue": "tf.queue.FIFOQueue", + "tf.compat.v1.queue.FIFOQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.queue.FIFOQueue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.queue.FIFOQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.queue.FIFOQueue.__init__": "tf.queue.FIFOQueue.__init__", + "tf.compat.v1.queue.FIFOQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.queue.FIFOQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.queue.FIFOQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.queue.FIFOQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.queue.FIFOQueue.close": "tf.queue.QueueBase.close", + "tf.compat.v1.queue.FIFOQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.queue.FIFOQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.queue.FIFOQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.queue.FIFOQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.queue.FIFOQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.queue.FIFOQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.queue.FIFOQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.queue.FIFOQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.queue.FIFOQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.queue.FIFOQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.queue.FIFOQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.queue.FIFOQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.queue.FIFOQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.queue.PaddingFIFOQueue": "tf.queue.PaddingFIFOQueue", + "tf.compat.v1.queue.PaddingFIFOQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.queue.PaddingFIFOQueue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.queue.PaddingFIFOQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.queue.PaddingFIFOQueue.__init__": "tf.queue.PaddingFIFOQueue.__init__", + "tf.compat.v1.queue.PaddingFIFOQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.queue.PaddingFIFOQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.queue.PaddingFIFOQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.queue.PaddingFIFOQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.queue.PaddingFIFOQueue.close": "tf.queue.QueueBase.close", + "tf.compat.v1.queue.PaddingFIFOQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.queue.PaddingFIFOQueue.dequeue_many": 
"tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.queue.PaddingFIFOQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.queue.PaddingFIFOQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.queue.PaddingFIFOQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.queue.PaddingFIFOQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.queue.PaddingFIFOQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.queue.PaddingFIFOQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.queue.PaddingFIFOQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.queue.PaddingFIFOQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.queue.PaddingFIFOQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.queue.PaddingFIFOQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.queue.PaddingFIFOQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.queue.PriorityQueue": "tf.queue.PriorityQueue", + "tf.compat.v1.queue.PriorityQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.queue.PriorityQueue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.queue.PriorityQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.queue.PriorityQueue.__init__": "tf.queue.PriorityQueue.__init__", + "tf.compat.v1.queue.PriorityQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.queue.PriorityQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.queue.PriorityQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.queue.PriorityQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.queue.PriorityQueue.close": "tf.queue.QueueBase.close", + "tf.compat.v1.queue.PriorityQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.queue.PriorityQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.queue.PriorityQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.queue.PriorityQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.queue.PriorityQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.queue.PriorityQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.queue.PriorityQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.queue.PriorityQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.queue.PriorityQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.queue.PriorityQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.queue.PriorityQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.queue.PriorityQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.queue.PriorityQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.queue.QueueBase": "tf.queue.QueueBase", + "tf.compat.v1.queue.QueueBase.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.queue.QueueBase.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.queue.QueueBase.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.queue.QueueBase.__init__": "tf.queue.QueueBase.__init__", + "tf.compat.v1.queue.QueueBase.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.queue.QueueBase.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.queue.QueueBase.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.queue.QueueBase.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.queue.QueueBase.close": "tf.queue.QueueBase.close", + "tf.compat.v1.queue.QueueBase.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.queue.QueueBase.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.queue.QueueBase.dequeue_up_to": 
"tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.queue.QueueBase.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.queue.QueueBase.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.queue.QueueBase.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.queue.QueueBase.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.queue.QueueBase.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.queue.QueueBase.name": "tf.queue.QueueBase.name", + "tf.compat.v1.queue.QueueBase.names": "tf.queue.QueueBase.names", + "tf.compat.v1.queue.QueueBase.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.queue.QueueBase.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.queue.QueueBase.size": "tf.queue.QueueBase.size", + "tf.compat.v1.queue.RandomShuffleQueue": "tf.queue.RandomShuffleQueue", + "tf.compat.v1.queue.RandomShuffleQueue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.queue.RandomShuffleQueue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.queue.RandomShuffleQueue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.queue.RandomShuffleQueue.__init__": "tf.queue.RandomShuffleQueue.__init__", + "tf.compat.v1.queue.RandomShuffleQueue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.queue.RandomShuffleQueue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.queue.RandomShuffleQueue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.queue.RandomShuffleQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.queue.RandomShuffleQueue.close": "tf.queue.QueueBase.close", + "tf.compat.v1.queue.RandomShuffleQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.compat.v1.queue.RandomShuffleQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.compat.v1.queue.RandomShuffleQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.compat.v1.queue.RandomShuffleQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.compat.v1.queue.RandomShuffleQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.compat.v1.queue.RandomShuffleQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.compat.v1.queue.RandomShuffleQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.compat.v1.queue.RandomShuffleQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.compat.v1.queue.RandomShuffleQueue.name": "tf.queue.QueueBase.name", + "tf.compat.v1.queue.RandomShuffleQueue.names": "tf.queue.QueueBase.names", + "tf.compat.v1.queue.RandomShuffleQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.compat.v1.queue.RandomShuffleQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.compat.v1.queue.RandomShuffleQueue.size": "tf.queue.QueueBase.size", + "tf.compat.v1.quint16": "tf.dtypes.quint16", + "tf.compat.v1.quint8": "tf.dtypes.quint8", + "tf.compat.v1.ragged.RaggedTensorValue.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.ragged.RaggedTensorValue.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.ragged.RaggedTensorValue.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.ragged.RaggedTensorValue.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.ragged.RaggedTensorValue.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.ragged.RaggedTensorValue.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.ragged.RaggedTensorValue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.ragged.boolean_mask": "tf.ragged.boolean_mask", + "tf.compat.v1.ragged.constant": "tf.ragged.constant", + "tf.compat.v1.ragged.map_flat_values": "tf.ragged.map_flat_values", + "tf.compat.v1.ragged.range": "tf.ragged.range", + "tf.compat.v1.ragged.row_splits_to_segment_ids": 
"tf.ragged.row_splits_to_segment_ids", + "tf.compat.v1.ragged.segment_ids_to_row_splits": "tf.ragged.segment_ids_to_row_splits", + "tf.compat.v1.ragged.stack": "tf.ragged.stack", + "tf.compat.v1.ragged.stack_dynamic_partitions": "tf.ragged.stack_dynamic_partitions", + "tf.compat.v1.random.Algorithm": "tf.random.Algorithm", + "tf.compat.v1.random.Algorithm.PHILOX": "tf.random.Algorithm.PHILOX", + "tf.compat.v1.random.Algorithm.THREEFRY": "tf.random.Algorithm.THREEFRY", + "tf.compat.v1.random.Algorithm.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.random.Algorithm.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.random.Generator": "tf.random.Generator", + "tf.compat.v1.random.Generator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.random.Generator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.random.Generator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.random.Generator.__init__": "tf.random.Generator.__init__", + "tf.compat.v1.random.Generator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.random.Generator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.random.Generator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.random.Generator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.random.Generator.algorithm": "tf.random.Generator.algorithm", + "tf.compat.v1.random.Generator.binomial": "tf.random.Generator.binomial", + "tf.compat.v1.random.Generator.key": "tf.random.Generator.key", + "tf.compat.v1.random.Generator.make_seeds": "tf.random.Generator.make_seeds", + "tf.compat.v1.random.Generator.normal": "tf.random.Generator.normal", + "tf.compat.v1.random.Generator.reset": "tf.random.Generator.reset", + "tf.compat.v1.random.Generator.reset_from_key_counter": "tf.random.Generator.reset_from_key_counter", + "tf.compat.v1.random.Generator.reset_from_seed": "tf.random.Generator.reset_from_seed", + "tf.compat.v1.random.Generator.skip": "tf.random.Generator.skip", + "tf.compat.v1.random.Generator.split": "tf.random.Generator.split", + "tf.compat.v1.random.Generator.state": "tf.random.Generator.state", + "tf.compat.v1.random.Generator.truncated_normal": "tf.random.Generator.truncated_normal", + "tf.compat.v1.random.Generator.uniform": "tf.random.Generator.uniform", + "tf.compat.v1.random.Generator.uniform_full_int": "tf.random.Generator.uniform_full_int", + "tf.compat.v1.random.all_candidate_sampler": "tf.random.all_candidate_sampler", + "tf.compat.v1.random.categorical": "tf.random.categorical", + "tf.compat.v1.random.create_rng_state": "tf.random.create_rng_state", + "tf.compat.v1.random.experimental.Algorithm": "tf.random.Algorithm", + "tf.compat.v1.random.experimental.Algorithm.PHILOX": "tf.random.Algorithm.PHILOX", + "tf.compat.v1.random.experimental.Algorithm.THREEFRY": "tf.random.Algorithm.THREEFRY", + "tf.compat.v1.random.experimental.Algorithm.name": "tf.distribute.InputReplicationMode.name", + "tf.compat.v1.random.experimental.Algorithm.value": "tf.distribute.InputReplicationMode.value", + "tf.compat.v1.random.experimental.Generator": "tf.random.Generator", + "tf.compat.v1.random.experimental.Generator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.random.experimental.Generator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.random.experimental.Generator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.random.experimental.Generator.__init__": "tf.random.Generator.__init__", + "tf.compat.v1.random.experimental.Generator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.random.experimental.Generator.__lt__": 
"tf.keras.Model.__lt__", + "tf.compat.v1.random.experimental.Generator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.random.experimental.Generator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.random.experimental.Generator.algorithm": "tf.random.Generator.algorithm", + "tf.compat.v1.random.experimental.Generator.binomial": "tf.random.Generator.binomial", + "tf.compat.v1.random.experimental.Generator.key": "tf.random.Generator.key", + "tf.compat.v1.random.experimental.Generator.make_seeds": "tf.random.Generator.make_seeds", + "tf.compat.v1.random.experimental.Generator.normal": "tf.random.Generator.normal", + "tf.compat.v1.random.experimental.Generator.reset": "tf.random.Generator.reset", + "tf.compat.v1.random.experimental.Generator.reset_from_key_counter": "tf.random.Generator.reset_from_key_counter", + "tf.compat.v1.random.experimental.Generator.reset_from_seed": "tf.random.Generator.reset_from_seed", + "tf.compat.v1.random.experimental.Generator.skip": "tf.random.Generator.skip", + "tf.compat.v1.random.experimental.Generator.split": "tf.random.Generator.split", + "tf.compat.v1.random.experimental.Generator.state": "tf.random.Generator.state", + "tf.compat.v1.random.experimental.Generator.truncated_normal": "tf.random.Generator.truncated_normal", + "tf.compat.v1.random.experimental.Generator.uniform": "tf.random.Generator.uniform", + "tf.compat.v1.random.experimental.Generator.uniform_full_int": "tf.random.Generator.uniform_full_int", + "tf.compat.v1.random.experimental.create_rng_state": "tf.random.create_rng_state", + "tf.compat.v1.random.experimental.get_global_generator": "tf.random.get_global_generator", + "tf.compat.v1.random.experimental.set_global_generator": "tf.random.set_global_generator", + "tf.compat.v1.random.fixed_unigram_candidate_sampler": "tf.random.fixed_unigram_candidate_sampler", + "tf.compat.v1.random.gamma": "tf.random.gamma", + "tf.compat.v1.random.get_global_generator": "tf.random.get_global_generator", + "tf.compat.v1.random.get_seed": "tf.compat.v1.get_seed", + "tf.compat.v1.random.learned_unigram_candidate_sampler": "tf.random.learned_unigram_candidate_sampler", + "tf.compat.v1.random.log_uniform_candidate_sampler": "tf.random.log_uniform_candidate_sampler", + "tf.compat.v1.random.multinomial": "tf.compat.v1.multinomial", + "tf.compat.v1.random.normal": "tf.random.normal", + "tf.compat.v1.random.poisson": "tf.compat.v1.random_poisson", + "tf.compat.v1.random.set_global_generator": "tf.random.set_global_generator", + "tf.compat.v1.random.set_random_seed": "tf.compat.v1.set_random_seed", + "tf.compat.v1.random.shuffle": "tf.random.shuffle", + "tf.compat.v1.random.stateless_binomial": "tf.random.stateless_binomial", + "tf.compat.v1.random.stateless_categorical": "tf.random.stateless_categorical", + "tf.compat.v1.random.stateless_gamma": "tf.random.stateless_gamma", + "tf.compat.v1.random.stateless_normal": "tf.random.stateless_normal", + "tf.compat.v1.random.stateless_poisson": "tf.random.stateless_poisson", + "tf.compat.v1.random.stateless_truncated_normal": "tf.random.stateless_truncated_normal", + "tf.compat.v1.random.stateless_uniform": "tf.random.stateless_uniform", + "tf.compat.v1.random.truncated_normal": "tf.random.truncated_normal", + "tf.compat.v1.random.uniform": "tf.random.uniform", + "tf.compat.v1.random.uniform_candidate_sampler": "tf.random.uniform_candidate_sampler", + "tf.compat.v1.random_crop": "tf.image.random_crop", + "tf.compat.v1.random_gamma": "tf.random.gamma", + "tf.compat.v1.random_normal": "tf.random.normal", + 
"tf.compat.v1.random_normal_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.random_normal_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.random_normal_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.random_normal_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.random_normal_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.random_normal_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.random_normal_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.random_shuffle": "tf.random.shuffle", + "tf.compat.v1.random_uniform": "tf.random.uniform", + "tf.compat.v1.random_uniform_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.random_uniform_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.random_uniform_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.random_uniform_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.random_uniform_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.random_uniform_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.random_uniform_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.range": "tf.range", + "tf.compat.v1.rank": "tf.rank", + "tf.compat.v1.raw_ops.Abort": "tf.raw_ops.Abort", + "tf.compat.v1.raw_ops.Abs": "tf.raw_ops.Abs", + "tf.compat.v1.raw_ops.AccumulateNV2": "tf.raw_ops.AccumulateNV2", + "tf.compat.v1.raw_ops.AccumulatorApplyGradient": "tf.raw_ops.AccumulatorApplyGradient", + "tf.compat.v1.raw_ops.AccumulatorNumAccumulated": "tf.raw_ops.AccumulatorNumAccumulated", + "tf.compat.v1.raw_ops.AccumulatorSetGlobalStep": "tf.raw_ops.AccumulatorSetGlobalStep", + "tf.compat.v1.raw_ops.AccumulatorTakeGradient": "tf.raw_ops.AccumulatorTakeGradient", + "tf.compat.v1.raw_ops.Acos": "tf.raw_ops.Acos", + "tf.compat.v1.raw_ops.Acosh": "tf.raw_ops.Acosh", + "tf.compat.v1.raw_ops.Add": "tf.raw_ops.Add", + "tf.compat.v1.raw_ops.AddManySparseToTensorsMap": "tf.raw_ops.AddManySparseToTensorsMap", + "tf.compat.v1.raw_ops.AddN": "tf.raw_ops.AddN", + "tf.compat.v1.raw_ops.AddSparseToTensorsMap": "tf.raw_ops.AddSparseToTensorsMap", + "tf.compat.v1.raw_ops.AddV2": "tf.raw_ops.AddV2", + "tf.compat.v1.raw_ops.AdjustContrast": "tf.raw_ops.AdjustContrast", + "tf.compat.v1.raw_ops.AdjustContrastv2": "tf.raw_ops.AdjustContrastv2", + "tf.compat.v1.raw_ops.AdjustHue": "tf.raw_ops.AdjustHue", + "tf.compat.v1.raw_ops.AdjustSaturation": "tf.raw_ops.AdjustSaturation", + "tf.compat.v1.raw_ops.All": "tf.raw_ops.All", + "tf.compat.v1.raw_ops.AllCandidateSampler": "tf.raw_ops.AllCandidateSampler", + "tf.compat.v1.raw_ops.AllToAll": "tf.raw_ops.AllToAll", + "tf.compat.v1.raw_ops.Angle": "tf.raw_ops.Angle", + "tf.compat.v1.raw_ops.AnonymousIterator": "tf.raw_ops.AnonymousIterator", + "tf.compat.v1.raw_ops.AnonymousIteratorV2": "tf.raw_ops.AnonymousIteratorV2", + "tf.compat.v1.raw_ops.AnonymousMemoryCache": "tf.raw_ops.AnonymousMemoryCache", + "tf.compat.v1.raw_ops.AnonymousMultiDeviceIterator": "tf.raw_ops.AnonymousMultiDeviceIterator", + "tf.compat.v1.raw_ops.AnonymousRandomSeedGenerator": "tf.raw_ops.AnonymousRandomSeedGenerator", + "tf.compat.v1.raw_ops.Any": "tf.raw_ops.Any", + "tf.compat.v1.raw_ops.ApplyAdaMax": "tf.raw_ops.ApplyAdaMax", + "tf.compat.v1.raw_ops.ApplyAdadelta": "tf.raw_ops.ApplyAdadelta", + "tf.compat.v1.raw_ops.ApplyAdagrad": "tf.raw_ops.ApplyAdagrad", + "tf.compat.v1.raw_ops.ApplyAdagradDA": "tf.raw_ops.ApplyAdagradDA", + "tf.compat.v1.raw_ops.ApplyAdagradV2": 
"tf.raw_ops.ApplyAdagradV2", + "tf.compat.v1.raw_ops.ApplyAdam": "tf.raw_ops.ApplyAdam", + "tf.compat.v1.raw_ops.ApplyAddSign": "tf.raw_ops.ApplyAddSign", + "tf.compat.v1.raw_ops.ApplyCenteredRMSProp": "tf.raw_ops.ApplyCenteredRMSProp", + "tf.compat.v1.raw_ops.ApplyFtrl": "tf.raw_ops.ApplyFtrl", + "tf.compat.v1.raw_ops.ApplyFtrlV2": "tf.raw_ops.ApplyFtrlV2", + "tf.compat.v1.raw_ops.ApplyGradientDescent": "tf.raw_ops.ApplyGradientDescent", + "tf.compat.v1.raw_ops.ApplyMomentum": "tf.raw_ops.ApplyMomentum", + "tf.compat.v1.raw_ops.ApplyPowerSign": "tf.raw_ops.ApplyPowerSign", + "tf.compat.v1.raw_ops.ApplyProximalAdagrad": "tf.raw_ops.ApplyProximalAdagrad", + "tf.compat.v1.raw_ops.ApplyProximalGradientDescent": "tf.raw_ops.ApplyProximalGradientDescent", + "tf.compat.v1.raw_ops.ApplyRMSProp": "tf.raw_ops.ApplyRMSProp", + "tf.compat.v1.raw_ops.ApproximateEqual": "tf.raw_ops.ApproximateEqual", + "tf.compat.v1.raw_ops.ArgMax": "tf.raw_ops.ArgMax", + "tf.compat.v1.raw_ops.ArgMin": "tf.raw_ops.ArgMin", + "tf.compat.v1.raw_ops.AsString": "tf.raw_ops.AsString", + "tf.compat.v1.raw_ops.Asin": "tf.raw_ops.Asin", + "tf.compat.v1.raw_ops.Asinh": "tf.raw_ops.Asinh", + "tf.compat.v1.raw_ops.Assert": "tf.raw_ops.Assert", + "tf.compat.v1.raw_ops.AssertCardinalityDataset": "tf.raw_ops.AssertCardinalityDataset", + "tf.compat.v1.raw_ops.AssertNextDataset": "tf.raw_ops.AssertNextDataset", + "tf.compat.v1.raw_ops.Assign": "tf.raw_ops.Assign", + "tf.compat.v1.raw_ops.AssignAdd": "tf.raw_ops.AssignAdd", + "tf.compat.v1.raw_ops.AssignAddVariableOp": "tf.raw_ops.AssignAddVariableOp", + "tf.compat.v1.raw_ops.AssignSub": "tf.raw_ops.AssignSub", + "tf.compat.v1.raw_ops.AssignSubVariableOp": "tf.raw_ops.AssignSubVariableOp", + "tf.compat.v1.raw_ops.AssignVariableOp": "tf.raw_ops.AssignVariableOp", + "tf.compat.v1.raw_ops.Atan": "tf.raw_ops.Atan", + "tf.compat.v1.raw_ops.Atan2": "tf.raw_ops.Atan2", + "tf.compat.v1.raw_ops.Atanh": "tf.raw_ops.Atanh", + "tf.compat.v1.raw_ops.AudioSpectrogram": "tf.raw_ops.AudioSpectrogram", + "tf.compat.v1.raw_ops.AudioSummary": "tf.raw_ops.AudioSummary", + "tf.compat.v1.raw_ops.AudioSummaryV2": "tf.raw_ops.AudioSummaryV2", + "tf.compat.v1.raw_ops.AutoShardDataset": "tf.raw_ops.AutoShardDataset", + "tf.compat.v1.raw_ops.AvgPool": "tf.raw_ops.AvgPool", + "tf.compat.v1.raw_ops.AvgPool3D": "tf.raw_ops.AvgPool3D", + "tf.compat.v1.raw_ops.AvgPool3DGrad": "tf.raw_ops.AvgPool3DGrad", + "tf.compat.v1.raw_ops.AvgPoolGrad": "tf.raw_ops.AvgPoolGrad", + "tf.compat.v1.raw_ops.Barrier": "tf.raw_ops.Barrier", + "tf.compat.v1.raw_ops.BarrierClose": "tf.raw_ops.BarrierClose", + "tf.compat.v1.raw_ops.BarrierIncompleteSize": "tf.raw_ops.BarrierIncompleteSize", + "tf.compat.v1.raw_ops.BarrierInsertMany": "tf.raw_ops.BarrierInsertMany", + "tf.compat.v1.raw_ops.BarrierReadySize": "tf.raw_ops.BarrierReadySize", + "tf.compat.v1.raw_ops.BarrierTakeMany": "tf.raw_ops.BarrierTakeMany", + "tf.compat.v1.raw_ops.Batch": "tf.raw_ops.Batch", + "tf.compat.v1.raw_ops.BatchCholesky": "tf.raw_ops.BatchCholesky", + "tf.compat.v1.raw_ops.BatchCholeskyGrad": "tf.raw_ops.BatchCholeskyGrad", + "tf.compat.v1.raw_ops.BatchDataset": "tf.raw_ops.BatchDataset", + "tf.compat.v1.raw_ops.BatchDatasetV2": "tf.raw_ops.BatchDatasetV2", + "tf.compat.v1.raw_ops.BatchFFT": "tf.raw_ops.BatchFFT", + "tf.compat.v1.raw_ops.BatchFFT2D": "tf.raw_ops.BatchFFT2D", + "tf.compat.v1.raw_ops.BatchFFT3D": "tf.raw_ops.BatchFFT3D", + "tf.compat.v1.raw_ops.BatchFunction": "tf.raw_ops.BatchFunction", + "tf.compat.v1.raw_ops.BatchIFFT": "tf.raw_ops.BatchIFFT", + 
"tf.compat.v1.raw_ops.BatchIFFT2D": "tf.raw_ops.BatchIFFT2D", + "tf.compat.v1.raw_ops.BatchIFFT3D": "tf.raw_ops.BatchIFFT3D", + "tf.compat.v1.raw_ops.BatchMatMul": "tf.raw_ops.BatchMatMul", + "tf.compat.v1.raw_ops.BatchMatMulV2": "tf.raw_ops.BatchMatMulV2", + "tf.compat.v1.raw_ops.BatchMatrixBandPart": "tf.raw_ops.BatchMatrixBandPart", + "tf.compat.v1.raw_ops.BatchMatrixDeterminant": "tf.raw_ops.BatchMatrixDeterminant", + "tf.compat.v1.raw_ops.BatchMatrixDiag": "tf.raw_ops.BatchMatrixDiag", + "tf.compat.v1.raw_ops.BatchMatrixDiagPart": "tf.raw_ops.BatchMatrixDiagPart", + "tf.compat.v1.raw_ops.BatchMatrixInverse": "tf.raw_ops.BatchMatrixInverse", + "tf.compat.v1.raw_ops.BatchMatrixSetDiag": "tf.raw_ops.BatchMatrixSetDiag", + "tf.compat.v1.raw_ops.BatchMatrixSolve": "tf.raw_ops.BatchMatrixSolve", + "tf.compat.v1.raw_ops.BatchMatrixSolveLs": "tf.raw_ops.BatchMatrixSolveLs", + "tf.compat.v1.raw_ops.BatchMatrixTriangularSolve": "tf.raw_ops.BatchMatrixTriangularSolve", + "tf.compat.v1.raw_ops.BatchNormWithGlobalNormalization": "tf.raw_ops.BatchNormWithGlobalNormalization", + "tf.compat.v1.raw_ops.BatchNormWithGlobalNormalizationGrad": "tf.raw_ops.BatchNormWithGlobalNormalizationGrad", + "tf.compat.v1.raw_ops.BatchSelfAdjointEig": "tf.raw_ops.BatchSelfAdjointEig", + "tf.compat.v1.raw_ops.BatchSelfAdjointEigV2": "tf.raw_ops.BatchSelfAdjointEigV2", + "tf.compat.v1.raw_ops.BatchSvd": "tf.raw_ops.BatchSvd", + "tf.compat.v1.raw_ops.BatchToSpace": "tf.raw_ops.BatchToSpace", + "tf.compat.v1.raw_ops.BatchToSpaceND": "tf.raw_ops.BatchToSpaceND", + "tf.compat.v1.raw_ops.BesselI0e": "tf.raw_ops.BesselI0e", + "tf.compat.v1.raw_ops.BesselI1e": "tf.raw_ops.BesselI1e", + "tf.compat.v1.raw_ops.Betainc": "tf.raw_ops.Betainc", + "tf.compat.v1.raw_ops.BiasAdd": "tf.raw_ops.BiasAdd", + "tf.compat.v1.raw_ops.BiasAddGrad": "tf.raw_ops.BiasAddGrad", + "tf.compat.v1.raw_ops.BiasAddV1": "tf.raw_ops.BiasAddV1", + "tf.compat.v1.raw_ops.Bincount": "tf.raw_ops.Bincount", + "tf.compat.v1.raw_ops.Bitcast": "tf.raw_ops.Bitcast", + "tf.compat.v1.raw_ops.BitwiseAnd": "tf.raw_ops.BitwiseAnd", + "tf.compat.v1.raw_ops.BitwiseOr": "tf.raw_ops.BitwiseOr", + "tf.compat.v1.raw_ops.BitwiseXor": "tf.raw_ops.BitwiseXor", + "tf.compat.v1.raw_ops.BlockLSTM": "tf.raw_ops.BlockLSTM", + "tf.compat.v1.raw_ops.BlockLSTMGrad": "tf.raw_ops.BlockLSTMGrad", + "tf.compat.v1.raw_ops.BlockLSTMGradV2": "tf.raw_ops.BlockLSTMGradV2", + "tf.compat.v1.raw_ops.BlockLSTMV2": "tf.raw_ops.BlockLSTMV2", + "tf.compat.v1.raw_ops.BoostedTreesAggregateStats": "tf.raw_ops.BoostedTreesAggregateStats", + "tf.compat.v1.raw_ops.BoostedTreesBucketize": "tf.raw_ops.BoostedTreesBucketize", + "tf.compat.v1.raw_ops.BoostedTreesCalculateBestFeatureSplit": "tf.raw_ops.BoostedTreesCalculateBestFeatureSplit", + "tf.compat.v1.raw_ops.BoostedTreesCalculateBestFeatureSplitV2": "tf.raw_ops.BoostedTreesCalculateBestFeatureSplitV2", + "tf.compat.v1.raw_ops.BoostedTreesCalculateBestGainsPerFeature": "tf.raw_ops.BoostedTreesCalculateBestGainsPerFeature", + "tf.compat.v1.raw_ops.BoostedTreesCenterBias": "tf.raw_ops.BoostedTreesCenterBias", + "tf.compat.v1.raw_ops.BoostedTreesCreateEnsemble": "tf.raw_ops.BoostedTreesCreateEnsemble", + "tf.compat.v1.raw_ops.BoostedTreesCreateQuantileStreamResource": "tf.raw_ops.BoostedTreesCreateQuantileStreamResource", + "tf.compat.v1.raw_ops.BoostedTreesDeserializeEnsemble": "tf.raw_ops.BoostedTreesDeserializeEnsemble", + "tf.compat.v1.raw_ops.BoostedTreesEnsembleResourceHandleOp": "tf.raw_ops.BoostedTreesEnsembleResourceHandleOp", + 
"tf.compat.v1.raw_ops.BoostedTreesExampleDebugOutputs": "tf.raw_ops.BoostedTreesExampleDebugOutputs", + "tf.compat.v1.raw_ops.BoostedTreesFlushQuantileSummaries": "tf.raw_ops.BoostedTreesFlushQuantileSummaries", + "tf.compat.v1.raw_ops.BoostedTreesGetEnsembleStates": "tf.raw_ops.BoostedTreesGetEnsembleStates", + "tf.compat.v1.raw_ops.BoostedTreesMakeQuantileSummaries": "tf.raw_ops.BoostedTreesMakeQuantileSummaries", + "tf.compat.v1.raw_ops.BoostedTreesMakeStatsSummary": "tf.raw_ops.BoostedTreesMakeStatsSummary", + "tf.compat.v1.raw_ops.BoostedTreesPredict": "tf.raw_ops.BoostedTreesPredict", + "tf.compat.v1.raw_ops.BoostedTreesQuantileStreamResourceAddSummaries": "tf.raw_ops.BoostedTreesQuantileStreamResourceAddSummaries", + "tf.compat.v1.raw_ops.BoostedTreesQuantileStreamResourceDeserialize": "tf.raw_ops.BoostedTreesQuantileStreamResourceDeserialize", + "tf.compat.v1.raw_ops.BoostedTreesQuantileStreamResourceFlush": "tf.raw_ops.BoostedTreesQuantileStreamResourceFlush", + "tf.compat.v1.raw_ops.BoostedTreesQuantileStreamResourceGetBucketBoundaries": "tf.raw_ops.BoostedTreesQuantileStreamResourceGetBucketBoundaries", + "tf.compat.v1.raw_ops.BoostedTreesQuantileStreamResourceHandleOp": "tf.raw_ops.BoostedTreesQuantileStreamResourceHandleOp", + "tf.compat.v1.raw_ops.BoostedTreesSerializeEnsemble": "tf.raw_ops.BoostedTreesSerializeEnsemble", + "tf.compat.v1.raw_ops.BoostedTreesSparseAggregateStats": "tf.raw_ops.BoostedTreesSparseAggregateStats", + "tf.compat.v1.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit": "tf.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit", + "tf.compat.v1.raw_ops.BoostedTreesTrainingPredict": "tf.raw_ops.BoostedTreesTrainingPredict", + "tf.compat.v1.raw_ops.BoostedTreesUpdateEnsemble": "tf.raw_ops.BoostedTreesUpdateEnsemble", + "tf.compat.v1.raw_ops.BoostedTreesUpdateEnsembleV2": "tf.raw_ops.BoostedTreesUpdateEnsembleV2", + "tf.compat.v1.raw_ops.BroadcastArgs": "tf.raw_ops.BroadcastArgs", + "tf.compat.v1.raw_ops.BroadcastGradientArgs": "tf.raw_ops.BroadcastGradientArgs", + "tf.compat.v1.raw_ops.BroadcastTo": "tf.raw_ops.BroadcastTo", + "tf.compat.v1.raw_ops.Bucketize": "tf.raw_ops.Bucketize", + "tf.compat.v1.raw_ops.BytesProducedStatsDataset": "tf.raw_ops.BytesProducedStatsDataset", + "tf.compat.v1.raw_ops.CSRSparseMatrixComponents": "tf.raw_ops.CSRSparseMatrixComponents", + "tf.compat.v1.raw_ops.CSRSparseMatrixToDense": "tf.raw_ops.CSRSparseMatrixToDense", + "tf.compat.v1.raw_ops.CSRSparseMatrixToSparseTensor": "tf.raw_ops.CSRSparseMatrixToSparseTensor", + "tf.compat.v1.raw_ops.CSVDataset": "tf.raw_ops.CSVDataset", + "tf.compat.v1.raw_ops.CTCBeamSearchDecoder": "tf.raw_ops.CTCBeamSearchDecoder", + "tf.compat.v1.raw_ops.CTCGreedyDecoder": "tf.raw_ops.CTCGreedyDecoder", + "tf.compat.v1.raw_ops.CTCLoss": "tf.raw_ops.CTCLoss", + "tf.compat.v1.raw_ops.CTCLossV2": "tf.raw_ops.CTCLossV2", + "tf.compat.v1.raw_ops.CacheDataset": "tf.raw_ops.CacheDataset", + "tf.compat.v1.raw_ops.CacheDatasetV2": "tf.raw_ops.CacheDatasetV2", + "tf.compat.v1.raw_ops.Case": "tf.raw_ops.Case", + "tf.compat.v1.raw_ops.Cast": "tf.raw_ops.Cast", + "tf.compat.v1.raw_ops.Ceil": "tf.raw_ops.Ceil", + "tf.compat.v1.raw_ops.CheckNumerics": "tf.raw_ops.CheckNumerics", + "tf.compat.v1.raw_ops.CheckNumericsV2": "tf.raw_ops.CheckNumericsV2", + "tf.compat.v1.raw_ops.Cholesky": "tf.raw_ops.Cholesky", + "tf.compat.v1.raw_ops.CholeskyGrad": "tf.raw_ops.CholeskyGrad", + "tf.compat.v1.raw_ops.ChooseFastestBranchDataset": "tf.raw_ops.ChooseFastestBranchDataset", + "tf.compat.v1.raw_ops.ChooseFastestDataset": 
"tf.raw_ops.ChooseFastestDataset", + "tf.compat.v1.raw_ops.ClipByValue": "tf.raw_ops.ClipByValue", + "tf.compat.v1.raw_ops.CloseSummaryWriter": "tf.raw_ops.CloseSummaryWriter", + "tf.compat.v1.raw_ops.CollectiveBcastRecv": "tf.raw_ops.CollectiveBcastRecv", + "tf.compat.v1.raw_ops.CollectiveBcastSend": "tf.raw_ops.CollectiveBcastSend", + "tf.compat.v1.raw_ops.CollectiveGather": "tf.raw_ops.CollectiveGather", + "tf.compat.v1.raw_ops.CollectivePermute": "tf.raw_ops.CollectivePermute", + "tf.compat.v1.raw_ops.CollectiveReduce": "tf.raw_ops.CollectiveReduce", + "tf.compat.v1.raw_ops.CombinedNonMaxSuppression": "tf.raw_ops.CombinedNonMaxSuppression", + "tf.compat.v1.raw_ops.CompareAndBitpack": "tf.raw_ops.CompareAndBitpack", + "tf.compat.v1.raw_ops.Complex": "tf.raw_ops.Complex", + "tf.compat.v1.raw_ops.ComplexAbs": "tf.raw_ops.ComplexAbs", + "tf.compat.v1.raw_ops.ComputeAccidentalHits": "tf.raw_ops.ComputeAccidentalHits", + "tf.compat.v1.raw_ops.Concat": "tf.raw_ops.Concat", + "tf.compat.v1.raw_ops.ConcatOffset": "tf.raw_ops.ConcatOffset", + "tf.compat.v1.raw_ops.ConcatV2": "tf.raw_ops.ConcatV2", + "tf.compat.v1.raw_ops.ConcatenateDataset": "tf.raw_ops.ConcatenateDataset", + "tf.compat.v1.raw_ops.ConditionalAccumulator": "tf.raw_ops.ConditionalAccumulator", + "tf.compat.v1.raw_ops.ConfigureDistributedTPU": "tf.raw_ops.ConfigureDistributedTPU", + "tf.compat.v1.raw_ops.ConfigureTPUEmbedding": "tf.raw_ops.ConfigureTPUEmbedding", + "tf.compat.v1.raw_ops.Conj": "tf.raw_ops.Conj", + "tf.compat.v1.raw_ops.ConjugateTranspose": "tf.raw_ops.ConjugateTranspose", + "tf.compat.v1.raw_ops.Const": "tf.raw_ops.Const", + "tf.compat.v1.raw_ops.ConsumeMutexLock": "tf.raw_ops.ConsumeMutexLock", + "tf.compat.v1.raw_ops.ControlTrigger": "tf.raw_ops.ControlTrigger", + "tf.compat.v1.raw_ops.Conv2D": "tf.raw_ops.Conv2D", + "tf.compat.v1.raw_ops.Conv2DBackpropFilter": "tf.raw_ops.Conv2DBackpropFilter", + "tf.compat.v1.raw_ops.Conv2DBackpropInput": "tf.raw_ops.Conv2DBackpropInput", + "tf.compat.v1.raw_ops.Conv3D": "tf.raw_ops.Conv3D", + "tf.compat.v1.raw_ops.Conv3DBackpropFilter": "tf.raw_ops.Conv3DBackpropFilter", + "tf.compat.v1.raw_ops.Conv3DBackpropFilterV2": "tf.raw_ops.Conv3DBackpropFilterV2", + "tf.compat.v1.raw_ops.Conv3DBackpropInput": "tf.raw_ops.Conv3DBackpropInput", + "tf.compat.v1.raw_ops.Conv3DBackpropInputV2": "tf.raw_ops.Conv3DBackpropInputV2", + "tf.compat.v1.raw_ops.Copy": "tf.raw_ops.Copy", + "tf.compat.v1.raw_ops.CopyHost": "tf.raw_ops.CopyHost", + "tf.compat.v1.raw_ops.Cos": "tf.raw_ops.Cos", + "tf.compat.v1.raw_ops.Cosh": "tf.raw_ops.Cosh", + "tf.compat.v1.raw_ops.CountUpTo": "tf.raw_ops.CountUpTo", + "tf.compat.v1.raw_ops.CreateSummaryDbWriter": "tf.raw_ops.CreateSummaryDbWriter", + "tf.compat.v1.raw_ops.CreateSummaryFileWriter": "tf.raw_ops.CreateSummaryFileWriter", + "tf.compat.v1.raw_ops.CropAndResize": "tf.raw_ops.CropAndResize", + "tf.compat.v1.raw_ops.CropAndResizeGradBoxes": "tf.raw_ops.CropAndResizeGradBoxes", + "tf.compat.v1.raw_ops.CropAndResizeGradImage": "tf.raw_ops.CropAndResizeGradImage", + "tf.compat.v1.raw_ops.Cross": "tf.raw_ops.Cross", + "tf.compat.v1.raw_ops.CrossReplicaSum": "tf.raw_ops.CrossReplicaSum", + "tf.compat.v1.raw_ops.CudnnRNN": "tf.raw_ops.CudnnRNN", + "tf.compat.v1.raw_ops.CudnnRNNBackprop": "tf.raw_ops.CudnnRNNBackprop", + "tf.compat.v1.raw_ops.CudnnRNNBackpropV2": "tf.raw_ops.CudnnRNNBackpropV2", + "tf.compat.v1.raw_ops.CudnnRNNBackpropV3": "tf.raw_ops.CudnnRNNBackpropV3", + "tf.compat.v1.raw_ops.CudnnRNNCanonicalToParams": "tf.raw_ops.CudnnRNNCanonicalToParams", + 
"tf.compat.v1.raw_ops.CudnnRNNCanonicalToParamsV2": "tf.raw_ops.CudnnRNNCanonicalToParamsV2", + "tf.compat.v1.raw_ops.CudnnRNNParamsSize": "tf.raw_ops.CudnnRNNParamsSize", + "tf.compat.v1.raw_ops.CudnnRNNParamsToCanonical": "tf.raw_ops.CudnnRNNParamsToCanonical", + "tf.compat.v1.raw_ops.CudnnRNNParamsToCanonicalV2": "tf.raw_ops.CudnnRNNParamsToCanonicalV2", + "tf.compat.v1.raw_ops.CudnnRNNV2": "tf.raw_ops.CudnnRNNV2", + "tf.compat.v1.raw_ops.CudnnRNNV3": "tf.raw_ops.CudnnRNNV3", + "tf.compat.v1.raw_ops.Cumprod": "tf.raw_ops.Cumprod", + "tf.compat.v1.raw_ops.Cumsum": "tf.raw_ops.Cumsum", + "tf.compat.v1.raw_ops.CumulativeLogsumexp": "tf.raw_ops.CumulativeLogsumexp", + "tf.compat.v1.raw_ops.DataFormatDimMap": "tf.raw_ops.DataFormatDimMap", + "tf.compat.v1.raw_ops.DataFormatVecPermute": "tf.raw_ops.DataFormatVecPermute", + "tf.compat.v1.raw_ops.DatasetCardinality": "tf.raw_ops.DatasetCardinality", + "tf.compat.v1.raw_ops.DatasetFromGraph": "tf.raw_ops.DatasetFromGraph", + "tf.compat.v1.raw_ops.DatasetToGraph": "tf.raw_ops.DatasetToGraph", + "tf.compat.v1.raw_ops.DatasetToGraphV2": "tf.raw_ops.DatasetToGraphV2", + "tf.compat.v1.raw_ops.DatasetToSingleElement": "tf.raw_ops.DatasetToSingleElement", + "tf.compat.v1.raw_ops.DatasetToTFRecord": "tf.raw_ops.DatasetToTFRecord", + "tf.compat.v1.raw_ops.Dawsn": "tf.raw_ops.Dawsn", + "tf.compat.v1.raw_ops.DebugGradientIdentity": "tf.raw_ops.DebugGradientIdentity", + "tf.compat.v1.raw_ops.DebugGradientRefIdentity": "tf.raw_ops.DebugGradientRefIdentity", + "tf.compat.v1.raw_ops.DebugIdentity": "tf.raw_ops.DebugIdentity", + "tf.compat.v1.raw_ops.DebugIdentityV2": "tf.raw_ops.DebugIdentityV2", + "tf.compat.v1.raw_ops.DebugNanCount": "tf.raw_ops.DebugNanCount", + "tf.compat.v1.raw_ops.DebugNumericSummary": "tf.raw_ops.DebugNumericSummary", + "tf.compat.v1.raw_ops.DebugNumericSummaryV2": "tf.raw_ops.DebugNumericSummaryV2", + "tf.compat.v1.raw_ops.DecodeAndCropJpeg": "tf.raw_ops.DecodeAndCropJpeg", + "tf.compat.v1.raw_ops.DecodeBase64": "tf.raw_ops.DecodeBase64", + "tf.compat.v1.raw_ops.DecodeBmp": "tf.raw_ops.DecodeBmp", + "tf.compat.v1.raw_ops.DecodeCSV": "tf.raw_ops.DecodeCSV", + "tf.compat.v1.raw_ops.DecodeCompressed": "tf.raw_ops.DecodeCompressed", + "tf.compat.v1.raw_ops.DecodeGif": "tf.raw_ops.DecodeGif", + "tf.compat.v1.raw_ops.DecodeJSONExample": "tf.raw_ops.DecodeJSONExample", + "tf.compat.v1.raw_ops.DecodeJpeg": "tf.raw_ops.DecodeJpeg", + "tf.compat.v1.raw_ops.DecodePaddedRaw": "tf.raw_ops.DecodePaddedRaw", + "tf.compat.v1.raw_ops.DecodePng": "tf.raw_ops.DecodePng", + "tf.compat.v1.raw_ops.DecodeProtoV2": "tf.raw_ops.DecodeProtoV2", + "tf.compat.v1.raw_ops.DecodeRaw": "tf.raw_ops.DecodeRaw", + "tf.compat.v1.raw_ops.DecodeWav": "tf.raw_ops.DecodeWav", + "tf.compat.v1.raw_ops.DeepCopy": "tf.raw_ops.DeepCopy", + "tf.compat.v1.raw_ops.DeleteIterator": "tf.raw_ops.DeleteIterator", + "tf.compat.v1.raw_ops.DeleteMemoryCache": "tf.raw_ops.DeleteMemoryCache", + "tf.compat.v1.raw_ops.DeleteMultiDeviceIterator": "tf.raw_ops.DeleteMultiDeviceIterator", + "tf.compat.v1.raw_ops.DeleteRandomSeedGenerator": "tf.raw_ops.DeleteRandomSeedGenerator", + "tf.compat.v1.raw_ops.DeleteSessionTensor": "tf.raw_ops.DeleteSessionTensor", + "tf.compat.v1.raw_ops.DenseToCSRSparseMatrix": "tf.raw_ops.DenseToCSRSparseMatrix", + "tf.compat.v1.raw_ops.DenseToDenseSetOperation": "tf.raw_ops.DenseToDenseSetOperation", + "tf.compat.v1.raw_ops.DenseToSparseBatchDataset": "tf.raw_ops.DenseToSparseBatchDataset", + "tf.compat.v1.raw_ops.DenseToSparseSetOperation": 
"tf.raw_ops.DenseToSparseSetOperation", + "tf.compat.v1.raw_ops.DepthToSpace": "tf.raw_ops.DepthToSpace", + "tf.compat.v1.raw_ops.DepthwiseConv2dNative": "tf.raw_ops.DepthwiseConv2dNative", + "tf.compat.v1.raw_ops.DepthwiseConv2dNativeBackpropFilter": "tf.raw_ops.DepthwiseConv2dNativeBackpropFilter", + "tf.compat.v1.raw_ops.DepthwiseConv2dNativeBackpropInput": "tf.raw_ops.DepthwiseConv2dNativeBackpropInput", + "tf.compat.v1.raw_ops.Dequantize": "tf.raw_ops.Dequantize", + "tf.compat.v1.raw_ops.DeserializeIterator": "tf.raw_ops.DeserializeIterator", + "tf.compat.v1.raw_ops.DeserializeManySparse": "tf.raw_ops.DeserializeManySparse", + "tf.compat.v1.raw_ops.DeserializeSparse": "tf.raw_ops.DeserializeSparse", + "tf.compat.v1.raw_ops.DestroyResourceOp": "tf.raw_ops.DestroyResourceOp", + "tf.compat.v1.raw_ops.DestroyTemporaryVariable": "tf.raw_ops.DestroyTemporaryVariable", + "tf.compat.v1.raw_ops.Diag": "tf.raw_ops.Diag", + "tf.compat.v1.raw_ops.DiagPart": "tf.raw_ops.DiagPart", + "tf.compat.v1.raw_ops.Digamma": "tf.raw_ops.Digamma", + "tf.compat.v1.raw_ops.Dilation2D": "tf.raw_ops.Dilation2D", + "tf.compat.v1.raw_ops.Dilation2DBackpropFilter": "tf.raw_ops.Dilation2DBackpropFilter", + "tf.compat.v1.raw_ops.Dilation2DBackpropInput": "tf.raw_ops.Dilation2DBackpropInput", + "tf.compat.v1.raw_ops.DirectedInterleaveDataset": "tf.raw_ops.DirectedInterleaveDataset", + "tf.compat.v1.raw_ops.Div": "tf.raw_ops.Div", + "tf.compat.v1.raw_ops.DivNoNan": "tf.raw_ops.DivNoNan", + "tf.compat.v1.raw_ops.DrawBoundingBoxes": "tf.raw_ops.DrawBoundingBoxes", + "tf.compat.v1.raw_ops.DrawBoundingBoxesV2": "tf.raw_ops.DrawBoundingBoxesV2", + "tf.compat.v1.raw_ops.DummyMemoryCache": "tf.raw_ops.DummyMemoryCache", + "tf.compat.v1.raw_ops.DynamicPartition": "tf.raw_ops.DynamicPartition", + "tf.compat.v1.raw_ops.DynamicStitch": "tf.raw_ops.DynamicStitch", + "tf.compat.v1.raw_ops.EagerPyFunc": "tf.raw_ops.EagerPyFunc", + "tf.compat.v1.raw_ops.EditDistance": "tf.raw_ops.EditDistance", + "tf.compat.v1.raw_ops.Eig": "tf.raw_ops.Eig", + "tf.compat.v1.raw_ops.Einsum": "tf.raw_ops.Einsum", + "tf.compat.v1.raw_ops.Elu": "tf.raw_ops.Elu", + "tf.compat.v1.raw_ops.EluGrad": "tf.raw_ops.EluGrad", + "tf.compat.v1.raw_ops.Empty": "tf.raw_ops.Empty", + "tf.compat.v1.raw_ops.EmptyTensorList": "tf.raw_ops.EmptyTensorList", + "tf.compat.v1.raw_ops.EncodeBase64": "tf.raw_ops.EncodeBase64", + "tf.compat.v1.raw_ops.EncodeJpeg": "tf.raw_ops.EncodeJpeg", + "tf.compat.v1.raw_ops.EncodeJpegVariableQuality": "tf.raw_ops.EncodeJpegVariableQuality", + "tf.compat.v1.raw_ops.EncodePng": "tf.raw_ops.EncodePng", + "tf.compat.v1.raw_ops.EncodeProto": "tf.raw_ops.EncodeProto", + "tf.compat.v1.raw_ops.EncodeWav": "tf.raw_ops.EncodeWav", + "tf.compat.v1.raw_ops.EnqueueTPUEmbeddingIntegerBatch": "tf.raw_ops.EnqueueTPUEmbeddingIntegerBatch", + "tf.compat.v1.raw_ops.EnqueueTPUEmbeddingSparseBatch": "tf.raw_ops.EnqueueTPUEmbeddingSparseBatch", + "tf.compat.v1.raw_ops.EnqueueTPUEmbeddingSparseTensorBatch": "tf.raw_ops.EnqueueTPUEmbeddingSparseTensorBatch", + "tf.compat.v1.raw_ops.EnsureShape": "tf.raw_ops.EnsureShape", + "tf.compat.v1.raw_ops.Enter": "tf.raw_ops.Enter", + "tf.compat.v1.raw_ops.Equal": "tf.raw_ops.Equal", + "tf.compat.v1.raw_ops.Erf": "tf.raw_ops.Erf", + "tf.compat.v1.raw_ops.Erfc": "tf.raw_ops.Erfc", + "tf.compat.v1.raw_ops.Erfinv": "tf.raw_ops.Erfinv", + "tf.compat.v1.raw_ops.EuclideanNorm": "tf.raw_ops.EuclideanNorm", + "tf.compat.v1.raw_ops.Exit": "tf.raw_ops.Exit", + "tf.compat.v1.raw_ops.Exp": "tf.raw_ops.Exp", + 
"tf.compat.v1.raw_ops.ExpandDims": "tf.raw_ops.ExpandDims", + "tf.compat.v1.raw_ops.ExperimentalAssertNextDataset": "tf.raw_ops.ExperimentalAssertNextDataset", + "tf.compat.v1.raw_ops.ExperimentalAutoShardDataset": "tf.raw_ops.ExperimentalAutoShardDataset", + "tf.compat.v1.raw_ops.ExperimentalBytesProducedStatsDataset": "tf.raw_ops.ExperimentalBytesProducedStatsDataset", + "tf.compat.v1.raw_ops.ExperimentalCSVDataset": "tf.raw_ops.ExperimentalCSVDataset", + "tf.compat.v1.raw_ops.ExperimentalChooseFastestDataset": "tf.raw_ops.ExperimentalChooseFastestDataset", + "tf.compat.v1.raw_ops.ExperimentalDatasetCardinality": "tf.raw_ops.ExperimentalDatasetCardinality", + "tf.compat.v1.raw_ops.ExperimentalDatasetToTFRecord": "tf.raw_ops.ExperimentalDatasetToTFRecord", + "tf.compat.v1.raw_ops.ExperimentalDenseToSparseBatchDataset": "tf.raw_ops.ExperimentalDenseToSparseBatchDataset", + "tf.compat.v1.raw_ops.ExperimentalDirectedInterleaveDataset": "tf.raw_ops.ExperimentalDirectedInterleaveDataset", + "tf.compat.v1.raw_ops.ExperimentalGroupByReducerDataset": "tf.raw_ops.ExperimentalGroupByReducerDataset", + "tf.compat.v1.raw_ops.ExperimentalGroupByWindowDataset": "tf.raw_ops.ExperimentalGroupByWindowDataset", + "tf.compat.v1.raw_ops.ExperimentalIgnoreErrorsDataset": "tf.raw_ops.ExperimentalIgnoreErrorsDataset", + "tf.compat.v1.raw_ops.ExperimentalIteratorGetDevice": "tf.raw_ops.ExperimentalIteratorGetDevice", + "tf.compat.v1.raw_ops.ExperimentalLMDBDataset": "tf.raw_ops.ExperimentalLMDBDataset", + "tf.compat.v1.raw_ops.ExperimentalLatencyStatsDataset": "tf.raw_ops.ExperimentalLatencyStatsDataset", + "tf.compat.v1.raw_ops.ExperimentalMapAndBatchDataset": "tf.raw_ops.ExperimentalMapAndBatchDataset", + "tf.compat.v1.raw_ops.ExperimentalMapDataset": "tf.raw_ops.ExperimentalMapDataset", + "tf.compat.v1.raw_ops.ExperimentalMatchingFilesDataset": "tf.raw_ops.ExperimentalMatchingFilesDataset", + "tf.compat.v1.raw_ops.ExperimentalMaxIntraOpParallelismDataset": "tf.raw_ops.ExperimentalMaxIntraOpParallelismDataset", + "tf.compat.v1.raw_ops.ExperimentalNonSerializableDataset": "tf.raw_ops.ExperimentalNonSerializableDataset", + "tf.compat.v1.raw_ops.ExperimentalParallelInterleaveDataset": "tf.raw_ops.ExperimentalParallelInterleaveDataset", + "tf.compat.v1.raw_ops.ExperimentalParseExampleDataset": "tf.raw_ops.ExperimentalParseExampleDataset", + "tf.compat.v1.raw_ops.ExperimentalPrivateThreadPoolDataset": "tf.raw_ops.ExperimentalPrivateThreadPoolDataset", + "tf.compat.v1.raw_ops.ExperimentalRandomDataset": "tf.raw_ops.ExperimentalRandomDataset", + "tf.compat.v1.raw_ops.ExperimentalRebatchDataset": "tf.raw_ops.ExperimentalRebatchDataset", + "tf.compat.v1.raw_ops.ExperimentalScanDataset": "tf.raw_ops.ExperimentalScanDataset", + "tf.compat.v1.raw_ops.ExperimentalSetStatsAggregatorDataset": "tf.raw_ops.ExperimentalSetStatsAggregatorDataset", + "tf.compat.v1.raw_ops.ExperimentalSleepDataset": "tf.raw_ops.ExperimentalSleepDataset", + "tf.compat.v1.raw_ops.ExperimentalSlidingWindowDataset": "tf.raw_ops.ExperimentalSlidingWindowDataset", + "tf.compat.v1.raw_ops.ExperimentalSqlDataset": "tf.raw_ops.ExperimentalSqlDataset", + "tf.compat.v1.raw_ops.ExperimentalStatsAggregatorHandle": "tf.raw_ops.ExperimentalStatsAggregatorHandle", + "tf.compat.v1.raw_ops.ExperimentalStatsAggregatorSummary": "tf.raw_ops.ExperimentalStatsAggregatorSummary", + "tf.compat.v1.raw_ops.ExperimentalTakeWhileDataset": "tf.raw_ops.ExperimentalTakeWhileDataset", + "tf.compat.v1.raw_ops.ExperimentalThreadPoolDataset": 
"tf.raw_ops.ExperimentalThreadPoolDataset", + "tf.compat.v1.raw_ops.ExperimentalThreadPoolHandle": "tf.raw_ops.ExperimentalThreadPoolHandle", + "tf.compat.v1.raw_ops.ExperimentalUnbatchDataset": "tf.raw_ops.ExperimentalUnbatchDataset", + "tf.compat.v1.raw_ops.ExperimentalUniqueDataset": "tf.raw_ops.ExperimentalUniqueDataset", + "tf.compat.v1.raw_ops.Expint": "tf.raw_ops.Expint", + "tf.compat.v1.raw_ops.Expm1": "tf.raw_ops.Expm1", + "tf.compat.v1.raw_ops.ExtractGlimpse": "tf.raw_ops.ExtractGlimpse", + "tf.compat.v1.raw_ops.ExtractImagePatches": "tf.raw_ops.ExtractImagePatches", + "tf.compat.v1.raw_ops.ExtractJpegShape": "tf.raw_ops.ExtractJpegShape", + "tf.compat.v1.raw_ops.ExtractVolumePatches": "tf.raw_ops.ExtractVolumePatches", + "tf.compat.v1.raw_ops.FFT": "tf.raw_ops.FFT", + "tf.compat.v1.raw_ops.FFT2D": "tf.raw_ops.FFT2D", + "tf.compat.v1.raw_ops.FFT3D": "tf.raw_ops.FFT3D", + "tf.compat.v1.raw_ops.FIFOQueue": "tf.raw_ops.FIFOQueue", + "tf.compat.v1.raw_ops.FIFOQueueV2": "tf.raw_ops.FIFOQueueV2", + "tf.compat.v1.raw_ops.Fact": "tf.raw_ops.Fact", + "tf.compat.v1.raw_ops.FakeParam": "tf.raw_ops.FakeParam", + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxArgs": "tf.raw_ops.FakeQuantWithMinMaxArgs", + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxArgsGradient": "tf.raw_ops.FakeQuantWithMinMaxArgsGradient", + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxVars": "tf.raw_ops.FakeQuantWithMinMaxVars", + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxVarsGradient": "tf.raw_ops.FakeQuantWithMinMaxVarsGradient", + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxVarsPerChannel": "tf.raw_ops.FakeQuantWithMinMaxVarsPerChannel", + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxVarsPerChannelGradient": "tf.raw_ops.FakeQuantWithMinMaxVarsPerChannelGradient", + "tf.compat.v1.raw_ops.FakeQueue": "tf.raw_ops.FakeQueue", + "tf.compat.v1.raw_ops.Fill": "tf.raw_ops.Fill", + "tf.compat.v1.raw_ops.FilterByLastComponentDataset": "tf.raw_ops.FilterByLastComponentDataset", + "tf.compat.v1.raw_ops.FilterDataset": "tf.raw_ops.FilterDataset", + "tf.compat.v1.raw_ops.Fingerprint": "tf.raw_ops.Fingerprint", + "tf.compat.v1.raw_ops.FixedLengthRecordDataset": "tf.raw_ops.FixedLengthRecordDataset", + "tf.compat.v1.raw_ops.FixedLengthRecordDatasetV2": "tf.raw_ops.FixedLengthRecordDatasetV2", + "tf.compat.v1.raw_ops.FixedLengthRecordReader": "tf.raw_ops.FixedLengthRecordReader", + "tf.compat.v1.raw_ops.FixedLengthRecordReaderV2": "tf.raw_ops.FixedLengthRecordReaderV2", + "tf.compat.v1.raw_ops.FixedUnigramCandidateSampler": "tf.raw_ops.FixedUnigramCandidateSampler", + "tf.compat.v1.raw_ops.FlatMapDataset": "tf.raw_ops.FlatMapDataset", + "tf.compat.v1.raw_ops.Floor": "tf.raw_ops.Floor", + "tf.compat.v1.raw_ops.FloorDiv": "tf.raw_ops.FloorDiv", + "tf.compat.v1.raw_ops.FloorMod": "tf.raw_ops.FloorMod", + "tf.compat.v1.raw_ops.FlushSummaryWriter": "tf.raw_ops.FlushSummaryWriter", + "tf.compat.v1.raw_ops.For": "tf.raw_ops.For", + "tf.compat.v1.raw_ops.FractionalAvgPool": "tf.raw_ops.FractionalAvgPool", + "tf.compat.v1.raw_ops.FractionalAvgPoolGrad": "tf.raw_ops.FractionalAvgPoolGrad", + "tf.compat.v1.raw_ops.FractionalMaxPool": "tf.raw_ops.FractionalMaxPool", + "tf.compat.v1.raw_ops.FractionalMaxPoolGrad": "tf.raw_ops.FractionalMaxPoolGrad", + "tf.compat.v1.raw_ops.FresnelCos": "tf.raw_ops.FresnelCos", + "tf.compat.v1.raw_ops.FresnelSin": "tf.raw_ops.FresnelSin", + "tf.compat.v1.raw_ops.FusedBatchNorm": "tf.raw_ops.FusedBatchNorm", + "tf.compat.v1.raw_ops.FusedBatchNormGrad": "tf.raw_ops.FusedBatchNormGrad", + "tf.compat.v1.raw_ops.FusedBatchNormGradV2": 
"tf.raw_ops.FusedBatchNormGradV2", + "tf.compat.v1.raw_ops.FusedBatchNormGradV3": "tf.raw_ops.FusedBatchNormGradV3", + "tf.compat.v1.raw_ops.FusedBatchNormV2": "tf.raw_ops.FusedBatchNormV2", + "tf.compat.v1.raw_ops.FusedBatchNormV3": "tf.raw_ops.FusedBatchNormV3", + "tf.compat.v1.raw_ops.FusedPadConv2D": "tf.raw_ops.FusedPadConv2D", + "tf.compat.v1.raw_ops.FusedResizeAndPadConv2D": "tf.raw_ops.FusedResizeAndPadConv2D", + "tf.compat.v1.raw_ops.GRUBlockCell": "tf.raw_ops.GRUBlockCell", + "tf.compat.v1.raw_ops.GRUBlockCellGrad": "tf.raw_ops.GRUBlockCellGrad", + "tf.compat.v1.raw_ops.Gather": "tf.raw_ops.Gather", + "tf.compat.v1.raw_ops.GatherNd": "tf.raw_ops.GatherNd", + "tf.compat.v1.raw_ops.GatherV2": "tf.raw_ops.GatherV2", + "tf.compat.v1.raw_ops.GenerateBoundingBoxProposals": "tf.raw_ops.GenerateBoundingBoxProposals", + "tf.compat.v1.raw_ops.GenerateVocabRemapping": "tf.raw_ops.GenerateVocabRemapping", + "tf.compat.v1.raw_ops.GeneratorDataset": "tf.raw_ops.GeneratorDataset", + "tf.compat.v1.raw_ops.GetSessionHandle": "tf.raw_ops.GetSessionHandle", + "tf.compat.v1.raw_ops.GetSessionHandleV2": "tf.raw_ops.GetSessionHandleV2", + "tf.compat.v1.raw_ops.GetSessionTensor": "tf.raw_ops.GetSessionTensor", + "tf.compat.v1.raw_ops.Greater": "tf.raw_ops.Greater", + "tf.compat.v1.raw_ops.GreaterEqual": "tf.raw_ops.GreaterEqual", + "tf.compat.v1.raw_ops.GroupByReducerDataset": "tf.raw_ops.GroupByReducerDataset", + "tf.compat.v1.raw_ops.GroupByWindowDataset": "tf.raw_ops.GroupByWindowDataset", + "tf.compat.v1.raw_ops.GuaranteeConst": "tf.raw_ops.GuaranteeConst", + "tf.compat.v1.raw_ops.HSVToRGB": "tf.raw_ops.HSVToRGB", + "tf.compat.v1.raw_ops.HashTable": "tf.raw_ops.HashTable", + "tf.compat.v1.raw_ops.HashTableV2": "tf.raw_ops.HashTableV2", + "tf.compat.v1.raw_ops.HistogramFixedWidth": "tf.raw_ops.HistogramFixedWidth", + "tf.compat.v1.raw_ops.HistogramSummary": "tf.raw_ops.HistogramSummary", + "tf.compat.v1.raw_ops.IFFT": "tf.raw_ops.IFFT", + "tf.compat.v1.raw_ops.IFFT2D": "tf.raw_ops.IFFT2D", + "tf.compat.v1.raw_ops.IFFT3D": "tf.raw_ops.IFFT3D", + "tf.compat.v1.raw_ops.IRFFT": "tf.raw_ops.IRFFT", + "tf.compat.v1.raw_ops.IRFFT2D": "tf.raw_ops.IRFFT2D", + "tf.compat.v1.raw_ops.IRFFT3D": "tf.raw_ops.IRFFT3D", + "tf.compat.v1.raw_ops.Identity": "tf.raw_ops.Identity", + "tf.compat.v1.raw_ops.IdentityN": "tf.raw_ops.IdentityN", + "tf.compat.v1.raw_ops.IdentityReader": "tf.raw_ops.IdentityReader", + "tf.compat.v1.raw_ops.IdentityReaderV2": "tf.raw_ops.IdentityReaderV2", + "tf.compat.v1.raw_ops.If": "tf.raw_ops.If", + "tf.compat.v1.raw_ops.Igamma": "tf.raw_ops.Igamma", + "tf.compat.v1.raw_ops.IgammaGradA": "tf.raw_ops.IgammaGradA", + "tf.compat.v1.raw_ops.Igammac": "tf.raw_ops.Igammac", + "tf.compat.v1.raw_ops.IgnoreErrorsDataset": "tf.raw_ops.IgnoreErrorsDataset", + "tf.compat.v1.raw_ops.Imag": "tf.raw_ops.Imag", + "tf.compat.v1.raw_ops.ImageProjectiveTransformV2": "tf.raw_ops.ImageProjectiveTransformV2", + "tf.compat.v1.raw_ops.ImageSummary": "tf.raw_ops.ImageSummary", + "tf.compat.v1.raw_ops.ImmutableConst": "tf.raw_ops.ImmutableConst", + "tf.compat.v1.raw_ops.ImportEvent": "tf.raw_ops.ImportEvent", + "tf.compat.v1.raw_ops.InTopK": "tf.raw_ops.InTopK", + "tf.compat.v1.raw_ops.InTopKV2": "tf.raw_ops.InTopKV2", + "tf.compat.v1.raw_ops.InfeedDequeue": "tf.raw_ops.InfeedDequeue", + "tf.compat.v1.raw_ops.InfeedDequeueTuple": "tf.raw_ops.InfeedDequeueTuple", + "tf.compat.v1.raw_ops.InfeedEnqueue": "tf.raw_ops.InfeedEnqueue", + "tf.compat.v1.raw_ops.InfeedEnqueuePrelinearizedBuffer": 
"tf.raw_ops.InfeedEnqueuePrelinearizedBuffer", + "tf.compat.v1.raw_ops.InfeedEnqueueTuple": "tf.raw_ops.InfeedEnqueueTuple", + "tf.compat.v1.raw_ops.InitializeTable": "tf.raw_ops.InitializeTable", + "tf.compat.v1.raw_ops.InitializeTableFromTextFile": "tf.raw_ops.InitializeTableFromTextFile", + "tf.compat.v1.raw_ops.InitializeTableFromTextFileV2": "tf.raw_ops.InitializeTableFromTextFileV2", + "tf.compat.v1.raw_ops.InitializeTableV2": "tf.raw_ops.InitializeTableV2", + "tf.compat.v1.raw_ops.InplaceAdd": "tf.raw_ops.InplaceAdd", + "tf.compat.v1.raw_ops.InplaceSub": "tf.raw_ops.InplaceSub", + "tf.compat.v1.raw_ops.InplaceUpdate": "tf.raw_ops.InplaceUpdate", + "tf.compat.v1.raw_ops.InterleaveDataset": "tf.raw_ops.InterleaveDataset", + "tf.compat.v1.raw_ops.Inv": "tf.raw_ops.Inv", + "tf.compat.v1.raw_ops.InvGrad": "tf.raw_ops.InvGrad", + "tf.compat.v1.raw_ops.Invert": "tf.raw_ops.Invert", + "tf.compat.v1.raw_ops.InvertPermutation": "tf.raw_ops.InvertPermutation", + "tf.compat.v1.raw_ops.IsBoostedTreesEnsembleInitialized": "tf.raw_ops.IsBoostedTreesEnsembleInitialized", + "tf.compat.v1.raw_ops.IsBoostedTreesQuantileStreamResourceInitialized": "tf.raw_ops.IsBoostedTreesQuantileStreamResourceInitialized", + "tf.compat.v1.raw_ops.IsFinite": "tf.raw_ops.IsFinite", + "tf.compat.v1.raw_ops.IsInf": "tf.raw_ops.IsInf", + "tf.compat.v1.raw_ops.IsNan": "tf.raw_ops.IsNan", + "tf.compat.v1.raw_ops.IsVariableInitialized": "tf.raw_ops.IsVariableInitialized", + "tf.compat.v1.raw_ops.Iterator": "tf.raw_ops.Iterator", + "tf.compat.v1.raw_ops.IteratorFromStringHandle": "tf.raw_ops.IteratorFromStringHandle", + "tf.compat.v1.raw_ops.IteratorFromStringHandleV2": "tf.raw_ops.IteratorFromStringHandleV2", + "tf.compat.v1.raw_ops.IteratorGetDevice": "tf.raw_ops.IteratorGetDevice", + "tf.compat.v1.raw_ops.IteratorGetNext": "tf.raw_ops.IteratorGetNext", + "tf.compat.v1.raw_ops.IteratorGetNextAsOptional": "tf.raw_ops.IteratorGetNextAsOptional", + "tf.compat.v1.raw_ops.IteratorGetNextSync": "tf.raw_ops.IteratorGetNextSync", + "tf.compat.v1.raw_ops.IteratorToStringHandle": "tf.raw_ops.IteratorToStringHandle", + "tf.compat.v1.raw_ops.IteratorV2": "tf.raw_ops.IteratorV2", + "tf.compat.v1.raw_ops.L2Loss": "tf.raw_ops.L2Loss", + "tf.compat.v1.raw_ops.LMDBDataset": "tf.raw_ops.LMDBDataset", + "tf.compat.v1.raw_ops.LMDBReader": "tf.raw_ops.LMDBReader", + "tf.compat.v1.raw_ops.LRN": "tf.raw_ops.LRN", + "tf.compat.v1.raw_ops.LRNGrad": "tf.raw_ops.LRNGrad", + "tf.compat.v1.raw_ops.LSTMBlockCell": "tf.raw_ops.LSTMBlockCell", + "tf.compat.v1.raw_ops.LSTMBlockCellGrad": "tf.raw_ops.LSTMBlockCellGrad", + "tf.compat.v1.raw_ops.LatencyStatsDataset": "tf.raw_ops.LatencyStatsDataset", + "tf.compat.v1.raw_ops.LeakyRelu": "tf.raw_ops.LeakyRelu", + "tf.compat.v1.raw_ops.LeakyReluGrad": "tf.raw_ops.LeakyReluGrad", + "tf.compat.v1.raw_ops.LearnedUnigramCandidateSampler": "tf.raw_ops.LearnedUnigramCandidateSampler", + "tf.compat.v1.raw_ops.LeftShift": "tf.raw_ops.LeftShift", + "tf.compat.v1.raw_ops.LegacyParallelInterleaveDatasetV2": "tf.raw_ops.LegacyParallelInterleaveDatasetV2", + "tf.compat.v1.raw_ops.Less": "tf.raw_ops.Less", + "tf.compat.v1.raw_ops.LessEqual": "tf.raw_ops.LessEqual", + "tf.compat.v1.raw_ops.Lgamma": "tf.raw_ops.Lgamma", + "tf.compat.v1.raw_ops.LinSpace": "tf.raw_ops.LinSpace", + "tf.compat.v1.raw_ops.ListDiff": "tf.raw_ops.ListDiff", + "tf.compat.v1.raw_ops.LoadAndRemapMatrix": "tf.raw_ops.LoadAndRemapMatrix", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingADAMParameters": "tf.raw_ops.LoadTPUEmbeddingADAMParameters", + 
"tf.compat.v1.raw_ops.LoadTPUEmbeddingADAMParametersGradAccumDebug": "tf.raw_ops.LoadTPUEmbeddingADAMParametersGradAccumDebug", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingAdadeltaParameters": "tf.raw_ops.LoadTPUEmbeddingAdadeltaParameters", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingAdadeltaParametersGradAccumDebug": "tf.raw_ops.LoadTPUEmbeddingAdadeltaParametersGradAccumDebug", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingAdagradParameters": "tf.raw_ops.LoadTPUEmbeddingAdagradParameters", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingAdagradParametersGradAccumDebug": "tf.raw_ops.LoadTPUEmbeddingAdagradParametersGradAccumDebug", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingCenteredRMSPropParameters": "tf.raw_ops.LoadTPUEmbeddingCenteredRMSPropParameters", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingFTRLParameters": "tf.raw_ops.LoadTPUEmbeddingFTRLParameters", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingFTRLParametersGradAccumDebug": "tf.raw_ops.LoadTPUEmbeddingFTRLParametersGradAccumDebug", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingMDLAdagradLightParameters": "tf.raw_ops.LoadTPUEmbeddingMDLAdagradLightParameters", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingMomentumParameters": "tf.raw_ops.LoadTPUEmbeddingMomentumParameters", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingMomentumParametersGradAccumDebug": "tf.raw_ops.LoadTPUEmbeddingMomentumParametersGradAccumDebug", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingProximalAdagradParameters": "tf.raw_ops.LoadTPUEmbeddingProximalAdagradParameters", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug": "tf.raw_ops.LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingRMSPropParameters": "tf.raw_ops.LoadTPUEmbeddingRMSPropParameters", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingRMSPropParametersGradAccumDebug": "tf.raw_ops.LoadTPUEmbeddingRMSPropParametersGradAccumDebug", + "tf.compat.v1.raw_ops.LoadTPUEmbeddingStochasticGradientDescentParameters": "tf.raw_ops.LoadTPUEmbeddingStochasticGradientDescentParameters", + "tf.compat.v1.raw_ops.Log": "tf.raw_ops.Log", + "tf.compat.v1.raw_ops.Log1p": "tf.raw_ops.Log1p", + "tf.compat.v1.raw_ops.LogMatrixDeterminant": "tf.raw_ops.LogMatrixDeterminant", + "tf.compat.v1.raw_ops.LogSoftmax": "tf.raw_ops.LogSoftmax", + "tf.compat.v1.raw_ops.LogUniformCandidateSampler": "tf.raw_ops.LogUniformCandidateSampler", + "tf.compat.v1.raw_ops.LogicalAnd": "tf.raw_ops.LogicalAnd", + "tf.compat.v1.raw_ops.LogicalNot": "tf.raw_ops.LogicalNot", + "tf.compat.v1.raw_ops.LogicalOr": "tf.raw_ops.LogicalOr", + "tf.compat.v1.raw_ops.LookupTableExport": "tf.raw_ops.LookupTableExport", + "tf.compat.v1.raw_ops.LookupTableExportV2": "tf.raw_ops.LookupTableExportV2", + "tf.compat.v1.raw_ops.LookupTableFind": "tf.raw_ops.LookupTableFind", + "tf.compat.v1.raw_ops.LookupTableFindV2": "tf.raw_ops.LookupTableFindV2", + "tf.compat.v1.raw_ops.LookupTableImport": "tf.raw_ops.LookupTableImport", + "tf.compat.v1.raw_ops.LookupTableImportV2": "tf.raw_ops.LookupTableImportV2", + "tf.compat.v1.raw_ops.LookupTableInsert": "tf.raw_ops.LookupTableInsert", + "tf.compat.v1.raw_ops.LookupTableInsertV2": "tf.raw_ops.LookupTableInsertV2", + "tf.compat.v1.raw_ops.LookupTableRemoveV2": "tf.raw_ops.LookupTableRemoveV2", + "tf.compat.v1.raw_ops.LookupTableSize": "tf.raw_ops.LookupTableSize", + "tf.compat.v1.raw_ops.LookupTableSizeV2": "tf.raw_ops.LookupTableSizeV2", + "tf.compat.v1.raw_ops.LoopCond": "tf.raw_ops.LoopCond", + "tf.compat.v1.raw_ops.LowerBound": "tf.raw_ops.LowerBound", + "tf.compat.v1.raw_ops.Lu": "tf.raw_ops.Lu", + 
"tf.compat.v1.raw_ops.MakeIterator": "tf.raw_ops.MakeIterator", + "tf.compat.v1.raw_ops.MapAndBatchDataset": "tf.raw_ops.MapAndBatchDataset", + "tf.compat.v1.raw_ops.MapClear": "tf.raw_ops.MapClear", + "tf.compat.v1.raw_ops.MapDataset": "tf.raw_ops.MapDataset", + "tf.compat.v1.raw_ops.MapDefun": "tf.raw_ops.MapDefun", + "tf.compat.v1.raw_ops.MapIncompleteSize": "tf.raw_ops.MapIncompleteSize", + "tf.compat.v1.raw_ops.MapPeek": "tf.raw_ops.MapPeek", + "tf.compat.v1.raw_ops.MapSize": "tf.raw_ops.MapSize", + "tf.compat.v1.raw_ops.MapStage": "tf.raw_ops.MapStage", + "tf.compat.v1.raw_ops.MapUnstage": "tf.raw_ops.MapUnstage", + "tf.compat.v1.raw_ops.MapUnstageNoKey": "tf.raw_ops.MapUnstageNoKey", + "tf.compat.v1.raw_ops.MatMul": "tf.raw_ops.MatMul", + "tf.compat.v1.raw_ops.MatchingFiles": "tf.raw_ops.MatchingFiles", + "tf.compat.v1.raw_ops.MatchingFilesDataset": "tf.raw_ops.MatchingFilesDataset", + "tf.compat.v1.raw_ops.MatrixBandPart": "tf.raw_ops.MatrixBandPart", + "tf.compat.v1.raw_ops.MatrixDeterminant": "tf.raw_ops.MatrixDeterminant", + "tf.compat.v1.raw_ops.MatrixDiag": "tf.raw_ops.MatrixDiag", + "tf.compat.v1.raw_ops.MatrixDiagPart": "tf.raw_ops.MatrixDiagPart", + "tf.compat.v1.raw_ops.MatrixDiagPartV2": "tf.raw_ops.MatrixDiagPartV2", + "tf.compat.v1.raw_ops.MatrixDiagPartV3": "tf.raw_ops.MatrixDiagPartV3", + "tf.compat.v1.raw_ops.MatrixDiagV2": "tf.raw_ops.MatrixDiagV2", + "tf.compat.v1.raw_ops.MatrixDiagV3": "tf.raw_ops.MatrixDiagV3", + "tf.compat.v1.raw_ops.MatrixExponential": "tf.raw_ops.MatrixExponential", + "tf.compat.v1.raw_ops.MatrixInverse": "tf.raw_ops.MatrixInverse", + "tf.compat.v1.raw_ops.MatrixLogarithm": "tf.raw_ops.MatrixLogarithm", + "tf.compat.v1.raw_ops.MatrixSetDiag": "tf.raw_ops.MatrixSetDiag", + "tf.compat.v1.raw_ops.MatrixSetDiagV2": "tf.raw_ops.MatrixSetDiagV2", + "tf.compat.v1.raw_ops.MatrixSetDiagV3": "tf.raw_ops.MatrixSetDiagV3", + "tf.compat.v1.raw_ops.MatrixSolve": "tf.raw_ops.MatrixSolve", + "tf.compat.v1.raw_ops.MatrixSolveLs": "tf.raw_ops.MatrixSolveLs", + "tf.compat.v1.raw_ops.MatrixSquareRoot": "tf.raw_ops.MatrixSquareRoot", + "tf.compat.v1.raw_ops.MatrixTriangularSolve": "tf.raw_ops.MatrixTriangularSolve", + "tf.compat.v1.raw_ops.Max": "tf.raw_ops.Max", + "tf.compat.v1.raw_ops.MaxIntraOpParallelismDataset": "tf.raw_ops.MaxIntraOpParallelismDataset", + "tf.compat.v1.raw_ops.MaxPool": "tf.raw_ops.MaxPool", + "tf.compat.v1.raw_ops.MaxPool3D": "tf.raw_ops.MaxPool3D", + "tf.compat.v1.raw_ops.MaxPool3DGrad": "tf.raw_ops.MaxPool3DGrad", + "tf.compat.v1.raw_ops.MaxPool3DGradGrad": "tf.raw_ops.MaxPool3DGradGrad", + "tf.compat.v1.raw_ops.MaxPoolGrad": "tf.raw_ops.MaxPoolGrad", + "tf.compat.v1.raw_ops.MaxPoolGradGrad": "tf.raw_ops.MaxPoolGradGrad", + "tf.compat.v1.raw_ops.MaxPoolGradGradV2": "tf.raw_ops.MaxPoolGradGradV2", + "tf.compat.v1.raw_ops.MaxPoolGradGradWithArgmax": "tf.raw_ops.MaxPoolGradGradWithArgmax", + "tf.compat.v1.raw_ops.MaxPoolGradV2": "tf.raw_ops.MaxPoolGradV2", + "tf.compat.v1.raw_ops.MaxPoolGradWithArgmax": "tf.raw_ops.MaxPoolGradWithArgmax", + "tf.compat.v1.raw_ops.MaxPoolV2": "tf.raw_ops.MaxPoolV2", + "tf.compat.v1.raw_ops.MaxPoolWithArgmax": "tf.raw_ops.MaxPoolWithArgmax", + "tf.compat.v1.raw_ops.Maximum": "tf.raw_ops.Maximum", + "tf.compat.v1.raw_ops.Mean": "tf.raw_ops.Mean", + "tf.compat.v1.raw_ops.Merge": "tf.raw_ops.Merge", + "tf.compat.v1.raw_ops.MergeSummary": "tf.raw_ops.MergeSummary", + "tf.compat.v1.raw_ops.MergeV2Checkpoints": "tf.raw_ops.MergeV2Checkpoints", + "tf.compat.v1.raw_ops.Mfcc": "tf.raw_ops.Mfcc", + 
"tf.compat.v1.raw_ops.Min": "tf.raw_ops.Min", + "tf.compat.v1.raw_ops.Minimum": "tf.raw_ops.Minimum", + "tf.compat.v1.raw_ops.MirrorPad": "tf.raw_ops.MirrorPad", + "tf.compat.v1.raw_ops.MirrorPadGrad": "tf.raw_ops.MirrorPadGrad", + "tf.compat.v1.raw_ops.Mod": "tf.raw_ops.Mod", + "tf.compat.v1.raw_ops.ModelDataset": "tf.raw_ops.ModelDataset", + "tf.compat.v1.raw_ops.Mul": "tf.raw_ops.Mul", + "tf.compat.v1.raw_ops.MulNoNan": "tf.raw_ops.MulNoNan", + "tf.compat.v1.raw_ops.MultiDeviceIterator": "tf.raw_ops.MultiDeviceIterator", + "tf.compat.v1.raw_ops.MultiDeviceIteratorFromStringHandle": "tf.raw_ops.MultiDeviceIteratorFromStringHandle", + "tf.compat.v1.raw_ops.MultiDeviceIteratorGetNextFromShard": "tf.raw_ops.MultiDeviceIteratorGetNextFromShard", + "tf.compat.v1.raw_ops.MultiDeviceIteratorInit": "tf.raw_ops.MultiDeviceIteratorInit", + "tf.compat.v1.raw_ops.MultiDeviceIteratorToStringHandle": "tf.raw_ops.MultiDeviceIteratorToStringHandle", + "tf.compat.v1.raw_ops.Multinomial": "tf.raw_ops.Multinomial", + "tf.compat.v1.raw_ops.MutableDenseHashTable": "tf.raw_ops.MutableDenseHashTable", + "tf.compat.v1.raw_ops.MutableDenseHashTableV2": "tf.raw_ops.MutableDenseHashTableV2", + "tf.compat.v1.raw_ops.MutableHashTable": "tf.raw_ops.MutableHashTable", + "tf.compat.v1.raw_ops.MutableHashTableOfTensors": "tf.raw_ops.MutableHashTableOfTensors", + "tf.compat.v1.raw_ops.MutableHashTableOfTensorsV2": "tf.raw_ops.MutableHashTableOfTensorsV2", + "tf.compat.v1.raw_ops.MutableHashTableV2": "tf.raw_ops.MutableHashTableV2", + "tf.compat.v1.raw_ops.MutexLock": "tf.raw_ops.MutexLock", + "tf.compat.v1.raw_ops.MutexV2": "tf.raw_ops.MutexV2", + "tf.compat.v1.raw_ops.NcclAllReduce": "tf.raw_ops.NcclAllReduce", + "tf.compat.v1.raw_ops.NcclBroadcast": "tf.raw_ops.NcclBroadcast", + "tf.compat.v1.raw_ops.NcclReduce": "tf.raw_ops.NcclReduce", + "tf.compat.v1.raw_ops.Ndtri": "tf.raw_ops.Ndtri", + "tf.compat.v1.raw_ops.Neg": "tf.raw_ops.Neg", + "tf.compat.v1.raw_ops.NextAfter": "tf.raw_ops.NextAfter", + "tf.compat.v1.raw_ops.NextIteration": "tf.raw_ops.NextIteration", + "tf.compat.v1.raw_ops.NoOp": "tf.raw_ops.NoOp", + "tf.compat.v1.raw_ops.NonDeterministicInts": "tf.raw_ops.NonDeterministicInts", + "tf.compat.v1.raw_ops.NonMaxSuppression": "tf.raw_ops.NonMaxSuppression", + "tf.compat.v1.raw_ops.NonMaxSuppressionV2": "tf.raw_ops.NonMaxSuppressionV2", + "tf.compat.v1.raw_ops.NonMaxSuppressionV3": "tf.raw_ops.NonMaxSuppressionV3", + "tf.compat.v1.raw_ops.NonMaxSuppressionV4": "tf.raw_ops.NonMaxSuppressionV4", + "tf.compat.v1.raw_ops.NonMaxSuppressionV5": "tf.raw_ops.NonMaxSuppressionV5", + "tf.compat.v1.raw_ops.NonMaxSuppressionWithOverlaps": "tf.raw_ops.NonMaxSuppressionWithOverlaps", + "tf.compat.v1.raw_ops.NonSerializableDataset": "tf.raw_ops.NonSerializableDataset", + "tf.compat.v1.raw_ops.NotEqual": "tf.raw_ops.NotEqual", + "tf.compat.v1.raw_ops.NthElement": "tf.raw_ops.NthElement", + "tf.compat.v1.raw_ops.OneHot": "tf.raw_ops.OneHot", + "tf.compat.v1.raw_ops.OneShotIterator": "tf.raw_ops.OneShotIterator", + "tf.compat.v1.raw_ops.OnesLike": "tf.raw_ops.OnesLike", + "tf.compat.v1.raw_ops.OptimizeDataset": "tf.raw_ops.OptimizeDataset", + "tf.compat.v1.raw_ops.OptionalFromValue": "tf.raw_ops.OptionalFromValue", + "tf.compat.v1.raw_ops.OptionalGetValue": "tf.raw_ops.OptionalGetValue", + "tf.compat.v1.raw_ops.OptionalHasValue": "tf.raw_ops.OptionalHasValue", + "tf.compat.v1.raw_ops.OptionalNone": "tf.raw_ops.OptionalNone", + "tf.compat.v1.raw_ops.OrderedMapClear": "tf.raw_ops.OrderedMapClear", + 
"tf.compat.v1.raw_ops.OrderedMapIncompleteSize": "tf.raw_ops.OrderedMapIncompleteSize", + "tf.compat.v1.raw_ops.OrderedMapPeek": "tf.raw_ops.OrderedMapPeek", + "tf.compat.v1.raw_ops.OrderedMapSize": "tf.raw_ops.OrderedMapSize", + "tf.compat.v1.raw_ops.OrderedMapStage": "tf.raw_ops.OrderedMapStage", + "tf.compat.v1.raw_ops.OrderedMapUnstage": "tf.raw_ops.OrderedMapUnstage", + "tf.compat.v1.raw_ops.OrderedMapUnstageNoKey": "tf.raw_ops.OrderedMapUnstageNoKey", + "tf.compat.v1.raw_ops.OutfeedDequeue": "tf.raw_ops.OutfeedDequeue", + "tf.compat.v1.raw_ops.OutfeedDequeueTuple": "tf.raw_ops.OutfeedDequeueTuple", + "tf.compat.v1.raw_ops.OutfeedEnqueue": "tf.raw_ops.OutfeedEnqueue", + "tf.compat.v1.raw_ops.OutfeedEnqueueTuple": "tf.raw_ops.OutfeedEnqueueTuple", + "tf.compat.v1.raw_ops.Pack": "tf.raw_ops.Pack", + "tf.compat.v1.raw_ops.Pad": "tf.raw_ops.Pad", + "tf.compat.v1.raw_ops.PadV2": "tf.raw_ops.PadV2", + "tf.compat.v1.raw_ops.PaddedBatchDataset": "tf.raw_ops.PaddedBatchDataset", + "tf.compat.v1.raw_ops.PaddedBatchDatasetV2": "tf.raw_ops.PaddedBatchDatasetV2", + "tf.compat.v1.raw_ops.PaddingFIFOQueue": "tf.raw_ops.PaddingFIFOQueue", + "tf.compat.v1.raw_ops.PaddingFIFOQueueV2": "tf.raw_ops.PaddingFIFOQueueV2", + "tf.compat.v1.raw_ops.ParallelConcat": "tf.raw_ops.ParallelConcat", + "tf.compat.v1.raw_ops.ParallelDynamicStitch": "tf.raw_ops.ParallelDynamicStitch", + "tf.compat.v1.raw_ops.ParallelInterleaveDataset": "tf.raw_ops.ParallelInterleaveDataset", + "tf.compat.v1.raw_ops.ParallelInterleaveDatasetV2": "tf.raw_ops.ParallelInterleaveDatasetV2", + "tf.compat.v1.raw_ops.ParallelInterleaveDatasetV3": "tf.raw_ops.ParallelInterleaveDatasetV3", + "tf.compat.v1.raw_ops.ParallelInterleaveDatasetV4": "tf.raw_ops.ParallelInterleaveDatasetV4", + "tf.compat.v1.raw_ops.ParallelMapDataset": "tf.raw_ops.ParallelMapDataset", + "tf.compat.v1.raw_ops.ParallelMapDatasetV2": "tf.raw_ops.ParallelMapDatasetV2", + "tf.compat.v1.raw_ops.ParameterizedTruncatedNormal": "tf.raw_ops.ParameterizedTruncatedNormal", + "tf.compat.v1.raw_ops.ParseExample": "tf.raw_ops.ParseExample", + "tf.compat.v1.raw_ops.ParseExampleDataset": "tf.raw_ops.ParseExampleDataset", + "tf.compat.v1.raw_ops.ParseExampleDatasetV2": "tf.raw_ops.ParseExampleDatasetV2", + "tf.compat.v1.raw_ops.ParseExampleV2": "tf.raw_ops.ParseExampleV2", + "tf.compat.v1.raw_ops.ParseSequenceExample": "tf.raw_ops.ParseSequenceExample", + "tf.compat.v1.raw_ops.ParseSequenceExampleV2": "tf.raw_ops.ParseSequenceExampleV2", + "tf.compat.v1.raw_ops.ParseSingleExample": "tf.raw_ops.ParseSingleExample", + "tf.compat.v1.raw_ops.ParseSingleSequenceExample": "tf.raw_ops.ParseSingleSequenceExample", + "tf.compat.v1.raw_ops.ParseTensor": "tf.raw_ops.ParseTensor", + "tf.compat.v1.raw_ops.PartitionedCall": "tf.raw_ops.PartitionedCall", + "tf.compat.v1.raw_ops.Placeholder": "tf.raw_ops.Placeholder", + "tf.compat.v1.raw_ops.PlaceholderV2": "tf.raw_ops.PlaceholderV2", + "tf.compat.v1.raw_ops.PlaceholderWithDefault": "tf.raw_ops.PlaceholderWithDefault", + "tf.compat.v1.raw_ops.Polygamma": "tf.raw_ops.Polygamma", + "tf.compat.v1.raw_ops.PopulationCount": "tf.raw_ops.PopulationCount", + "tf.compat.v1.raw_ops.Pow": "tf.raw_ops.Pow", + "tf.compat.v1.raw_ops.PrefetchDataset": "tf.raw_ops.PrefetchDataset", + "tf.compat.v1.raw_ops.Prelinearize": "tf.raw_ops.Prelinearize", + "tf.compat.v1.raw_ops.PrelinearizeTuple": "tf.raw_ops.PrelinearizeTuple", + "tf.compat.v1.raw_ops.PreventGradient": "tf.raw_ops.PreventGradient", + "tf.compat.v1.raw_ops.Print": "tf.raw_ops.Print", + 
"tf.compat.v1.raw_ops.PrintV2": "tf.raw_ops.PrintV2", + "tf.compat.v1.raw_ops.PriorityQueue": "tf.raw_ops.PriorityQueue", + "tf.compat.v1.raw_ops.PriorityQueueV2": "tf.raw_ops.PriorityQueueV2", + "tf.compat.v1.raw_ops.PrivateThreadPoolDataset": "tf.raw_ops.PrivateThreadPoolDataset", + "tf.compat.v1.raw_ops.Prod": "tf.raw_ops.Prod", + "tf.compat.v1.raw_ops.PyFunc": "tf.raw_ops.PyFunc", + "tf.compat.v1.raw_ops.PyFuncStateless": "tf.raw_ops.PyFuncStateless", + "tf.compat.v1.raw_ops.Qr": "tf.raw_ops.Qr", + "tf.compat.v1.raw_ops.QuantizeAndDequantize": "tf.raw_ops.QuantizeAndDequantize", + "tf.compat.v1.raw_ops.QuantizeAndDequantizeV2": "tf.raw_ops.QuantizeAndDequantizeV2", + "tf.compat.v1.raw_ops.QuantizeAndDequantizeV3": "tf.raw_ops.QuantizeAndDequantizeV3", + "tf.compat.v1.raw_ops.QuantizeDownAndShrinkRange": "tf.raw_ops.QuantizeDownAndShrinkRange", + "tf.compat.v1.raw_ops.QuantizeV2": "tf.raw_ops.QuantizeV2", + "tf.compat.v1.raw_ops.QuantizedAdd": "tf.raw_ops.QuantizedAdd", + "tf.compat.v1.raw_ops.QuantizedAvgPool": "tf.raw_ops.QuantizedAvgPool", + "tf.compat.v1.raw_ops.QuantizedBatchNormWithGlobalNormalization": "tf.raw_ops.QuantizedBatchNormWithGlobalNormalization", + "tf.compat.v1.raw_ops.QuantizedBiasAdd": "tf.raw_ops.QuantizedBiasAdd", + "tf.compat.v1.raw_ops.QuantizedConcat": "tf.raw_ops.QuantizedConcat", + "tf.compat.v1.raw_ops.QuantizedConv2D": "tf.raw_ops.QuantizedConv2D", + "tf.compat.v1.raw_ops.QuantizedConv2DAndRelu": "tf.raw_ops.QuantizedConv2DAndRelu", + "tf.compat.v1.raw_ops.QuantizedConv2DAndReluAndRequantize": "tf.raw_ops.QuantizedConv2DAndReluAndRequantize", + "tf.compat.v1.raw_ops.QuantizedConv2DAndRequantize": "tf.raw_ops.QuantizedConv2DAndRequantize", + "tf.compat.v1.raw_ops.QuantizedConv2DPerChannel": "tf.raw_ops.QuantizedConv2DPerChannel", + "tf.compat.v1.raw_ops.QuantizedConv2DWithBias": "tf.raw_ops.QuantizedConv2DWithBias", + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasAndRelu": "tf.raw_ops.QuantizedConv2DWithBiasAndRelu", + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasAndReluAndRequantize": "tf.raw_ops.QuantizedConv2DWithBiasAndReluAndRequantize", + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasAndRequantize": "tf.raw_ops.QuantizedConv2DWithBiasAndRequantize", + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasSignedSumAndReluAndRequantize": "tf.raw_ops.QuantizedConv2DWithBiasSignedSumAndReluAndRequantize", + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasSumAndRelu": "tf.raw_ops.QuantizedConv2DWithBiasSumAndRelu", + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize": "tf.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize", + "tf.compat.v1.raw_ops.QuantizedDepthwiseConv2D": "tf.raw_ops.QuantizedDepthwiseConv2D", + "tf.compat.v1.raw_ops.QuantizedDepthwiseConv2DWithBias": "tf.raw_ops.QuantizedDepthwiseConv2DWithBias", + "tf.compat.v1.raw_ops.QuantizedDepthwiseConv2DWithBiasAndRelu": "tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndRelu", + "tf.compat.v1.raw_ops.QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize": "tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize", + "tf.compat.v1.raw_ops.QuantizedInstanceNorm": "tf.raw_ops.QuantizedInstanceNorm", + "tf.compat.v1.raw_ops.QuantizedMatMul": "tf.raw_ops.QuantizedMatMul", + "tf.compat.v1.raw_ops.QuantizedMatMulWithBias": "tf.raw_ops.QuantizedMatMulWithBias", + "tf.compat.v1.raw_ops.QuantizedMatMulWithBiasAndDequantize": "tf.raw_ops.QuantizedMatMulWithBiasAndDequantize", + "tf.compat.v1.raw_ops.QuantizedMatMulWithBiasAndRelu": "tf.raw_ops.QuantizedMatMulWithBiasAndRelu", + 
"tf.compat.v1.raw_ops.QuantizedMatMulWithBiasAndReluAndRequantize": "tf.raw_ops.QuantizedMatMulWithBiasAndReluAndRequantize", + "tf.compat.v1.raw_ops.QuantizedMatMulWithBiasAndRequantize": "tf.raw_ops.QuantizedMatMulWithBiasAndRequantize", + "tf.compat.v1.raw_ops.QuantizedMaxPool": "tf.raw_ops.QuantizedMaxPool", + "tf.compat.v1.raw_ops.QuantizedMul": "tf.raw_ops.QuantizedMul", + "tf.compat.v1.raw_ops.QuantizedRelu": "tf.raw_ops.QuantizedRelu", + "tf.compat.v1.raw_ops.QuantizedRelu6": "tf.raw_ops.QuantizedRelu6", + "tf.compat.v1.raw_ops.QuantizedReluX": "tf.raw_ops.QuantizedReluX", + "tf.compat.v1.raw_ops.QuantizedReshape": "tf.raw_ops.QuantizedReshape", + "tf.compat.v1.raw_ops.QuantizedResizeBilinear": "tf.raw_ops.QuantizedResizeBilinear", + "tf.compat.v1.raw_ops.QueueClose": "tf.raw_ops.QueueClose", + "tf.compat.v1.raw_ops.QueueCloseV2": "tf.raw_ops.QueueCloseV2", + "tf.compat.v1.raw_ops.QueueDequeue": "tf.raw_ops.QueueDequeue", + "tf.compat.v1.raw_ops.QueueDequeueMany": "tf.raw_ops.QueueDequeueMany", + "tf.compat.v1.raw_ops.QueueDequeueManyV2": "tf.raw_ops.QueueDequeueManyV2", + "tf.compat.v1.raw_ops.QueueDequeueUpTo": "tf.raw_ops.QueueDequeueUpTo", + "tf.compat.v1.raw_ops.QueueDequeueUpToV2": "tf.raw_ops.QueueDequeueUpToV2", + "tf.compat.v1.raw_ops.QueueDequeueV2": "tf.raw_ops.QueueDequeueV2", + "tf.compat.v1.raw_ops.QueueEnqueue": "tf.raw_ops.QueueEnqueue", + "tf.compat.v1.raw_ops.QueueEnqueueMany": "tf.raw_ops.QueueEnqueueMany", + "tf.compat.v1.raw_ops.QueueEnqueueManyV2": "tf.raw_ops.QueueEnqueueManyV2", + "tf.compat.v1.raw_ops.QueueEnqueueV2": "tf.raw_ops.QueueEnqueueV2", + "tf.compat.v1.raw_ops.QueueIsClosed": "tf.raw_ops.QueueIsClosed", + "tf.compat.v1.raw_ops.QueueIsClosedV2": "tf.raw_ops.QueueIsClosedV2", + "tf.compat.v1.raw_ops.QueueSize": "tf.raw_ops.QueueSize", + "tf.compat.v1.raw_ops.QueueSizeV2": "tf.raw_ops.QueueSizeV2", + "tf.compat.v1.raw_ops.RFFT": "tf.raw_ops.RFFT", + "tf.compat.v1.raw_ops.RFFT2D": "tf.raw_ops.RFFT2D", + "tf.compat.v1.raw_ops.RFFT3D": "tf.raw_ops.RFFT3D", + "tf.compat.v1.raw_ops.RGBToHSV": "tf.raw_ops.RGBToHSV", + "tf.compat.v1.raw_ops.RaggedGather": "tf.raw_ops.RaggedGather", + "tf.compat.v1.raw_ops.RaggedRange": "tf.raw_ops.RaggedRange", + "tf.compat.v1.raw_ops.RaggedTensorFromVariant": "tf.raw_ops.RaggedTensorFromVariant", + "tf.compat.v1.raw_ops.RaggedTensorToSparse": "tf.raw_ops.RaggedTensorToSparse", + "tf.compat.v1.raw_ops.RaggedTensorToTensor": "tf.raw_ops.RaggedTensorToTensor", + "tf.compat.v1.raw_ops.RaggedTensorToVariant": "tf.raw_ops.RaggedTensorToVariant", + "tf.compat.v1.raw_ops.RandomCrop": "tf.raw_ops.RandomCrop", + "tf.compat.v1.raw_ops.RandomDataset": "tf.raw_ops.RandomDataset", + "tf.compat.v1.raw_ops.RandomGamma": "tf.raw_ops.RandomGamma", + "tf.compat.v1.raw_ops.RandomGammaGrad": "tf.raw_ops.RandomGammaGrad", + "tf.compat.v1.raw_ops.RandomPoisson": "tf.raw_ops.RandomPoisson", + "tf.compat.v1.raw_ops.RandomPoissonV2": "tf.raw_ops.RandomPoissonV2", + "tf.compat.v1.raw_ops.RandomShuffle": "tf.raw_ops.RandomShuffle", + "tf.compat.v1.raw_ops.RandomShuffleQueue": "tf.raw_ops.RandomShuffleQueue", + "tf.compat.v1.raw_ops.RandomShuffleQueueV2": "tf.raw_ops.RandomShuffleQueueV2", + "tf.compat.v1.raw_ops.RandomStandardNormal": "tf.raw_ops.RandomStandardNormal", + "tf.compat.v1.raw_ops.RandomUniform": "tf.raw_ops.RandomUniform", + "tf.compat.v1.raw_ops.RandomUniformInt": "tf.raw_ops.RandomUniformInt", + "tf.compat.v1.raw_ops.Range": "tf.raw_ops.Range", + "tf.compat.v1.raw_ops.RangeDataset": "tf.raw_ops.RangeDataset", + 
"tf.compat.v1.raw_ops.Rank": "tf.raw_ops.Rank", + "tf.compat.v1.raw_ops.ReadFile": "tf.raw_ops.ReadFile", + "tf.compat.v1.raw_ops.ReadVariableOp": "tf.raw_ops.ReadVariableOp", + "tf.compat.v1.raw_ops.ReaderNumRecordsProduced": "tf.raw_ops.ReaderNumRecordsProduced", + "tf.compat.v1.raw_ops.ReaderNumRecordsProducedV2": "tf.raw_ops.ReaderNumRecordsProducedV2", + "tf.compat.v1.raw_ops.ReaderNumWorkUnitsCompleted": "tf.raw_ops.ReaderNumWorkUnitsCompleted", + "tf.compat.v1.raw_ops.ReaderNumWorkUnitsCompletedV2": "tf.raw_ops.ReaderNumWorkUnitsCompletedV2", + "tf.compat.v1.raw_ops.ReaderRead": "tf.raw_ops.ReaderRead", + "tf.compat.v1.raw_ops.ReaderReadUpTo": "tf.raw_ops.ReaderReadUpTo", + "tf.compat.v1.raw_ops.ReaderReadUpToV2": "tf.raw_ops.ReaderReadUpToV2", + "tf.compat.v1.raw_ops.ReaderReadV2": "tf.raw_ops.ReaderReadV2", + "tf.compat.v1.raw_ops.ReaderReset": "tf.raw_ops.ReaderReset", + "tf.compat.v1.raw_ops.ReaderResetV2": "tf.raw_ops.ReaderResetV2", + "tf.compat.v1.raw_ops.ReaderRestoreState": "tf.raw_ops.ReaderRestoreState", + "tf.compat.v1.raw_ops.ReaderRestoreStateV2": "tf.raw_ops.ReaderRestoreStateV2", + "tf.compat.v1.raw_ops.ReaderSerializeState": "tf.raw_ops.ReaderSerializeState", + "tf.compat.v1.raw_ops.ReaderSerializeStateV2": "tf.raw_ops.ReaderSerializeStateV2", + "tf.compat.v1.raw_ops.Real": "tf.raw_ops.Real", + "tf.compat.v1.raw_ops.RealDiv": "tf.raw_ops.RealDiv", + "tf.compat.v1.raw_ops.RebatchDataset": "tf.raw_ops.RebatchDataset", + "tf.compat.v1.raw_ops.Reciprocal": "tf.raw_ops.Reciprocal", + "tf.compat.v1.raw_ops.ReciprocalGrad": "tf.raw_ops.ReciprocalGrad", + "tf.compat.v1.raw_ops.RecordInput": "tf.raw_ops.RecordInput", + "tf.compat.v1.raw_ops.Recv": "tf.raw_ops.Recv", + "tf.compat.v1.raw_ops.RecvTPUEmbeddingActivations": "tf.raw_ops.RecvTPUEmbeddingActivations", + "tf.compat.v1.raw_ops.ReduceDataset": "tf.raw_ops.ReduceDataset", + "tf.compat.v1.raw_ops.ReduceJoin": "tf.raw_ops.ReduceJoin", + "tf.compat.v1.raw_ops.RefEnter": "tf.raw_ops.RefEnter", + "tf.compat.v1.raw_ops.RefExit": "tf.raw_ops.RefExit", + "tf.compat.v1.raw_ops.RefIdentity": "tf.raw_ops.RefIdentity", + "tf.compat.v1.raw_ops.RefMerge": "tf.raw_ops.RefMerge", + "tf.compat.v1.raw_ops.RefNextIteration": "tf.raw_ops.RefNextIteration", + "tf.compat.v1.raw_ops.RefSelect": "tf.raw_ops.RefSelect", + "tf.compat.v1.raw_ops.RefSwitch": "tf.raw_ops.RefSwitch", + "tf.compat.v1.raw_ops.RegexFullMatch": "tf.raw_ops.RegexFullMatch", + "tf.compat.v1.raw_ops.RegexReplace": "tf.raw_ops.RegexReplace", + "tf.compat.v1.raw_ops.Relu": "tf.raw_ops.Relu", + "tf.compat.v1.raw_ops.Relu6": "tf.raw_ops.Relu6", + "tf.compat.v1.raw_ops.Relu6Grad": "tf.raw_ops.Relu6Grad", + "tf.compat.v1.raw_ops.ReluGrad": "tf.raw_ops.ReluGrad", + "tf.compat.v1.raw_ops.RemoteCall": "tf.raw_ops.RemoteCall", + "tf.compat.v1.raw_ops.RepeatDataset": "tf.raw_ops.RepeatDataset", + "tf.compat.v1.raw_ops.RequantizationRange": "tf.raw_ops.RequantizationRange", + "tf.compat.v1.raw_ops.RequantizationRangePerChannel": "tf.raw_ops.RequantizationRangePerChannel", + "tf.compat.v1.raw_ops.Requantize": "tf.raw_ops.Requantize", + "tf.compat.v1.raw_ops.RequantizePerChannel": "tf.raw_ops.RequantizePerChannel", + "tf.compat.v1.raw_ops.Reshape": "tf.raw_ops.Reshape", + "tf.compat.v1.raw_ops.ResizeArea": "tf.raw_ops.ResizeArea", + "tf.compat.v1.raw_ops.ResizeBicubic": "tf.raw_ops.ResizeBicubic", + "tf.compat.v1.raw_ops.ResizeBicubicGrad": "tf.raw_ops.ResizeBicubicGrad", + "tf.compat.v1.raw_ops.ResizeBilinear": "tf.raw_ops.ResizeBilinear", + "tf.compat.v1.raw_ops.ResizeBilinearGrad": 
"tf.raw_ops.ResizeBilinearGrad", + "tf.compat.v1.raw_ops.ResizeNearestNeighbor": "tf.raw_ops.ResizeNearestNeighbor", + "tf.compat.v1.raw_ops.ResizeNearestNeighborGrad": "tf.raw_ops.ResizeNearestNeighborGrad", + "tf.compat.v1.raw_ops.ResourceAccumulatorApplyGradient": "tf.raw_ops.ResourceAccumulatorApplyGradient", + "tf.compat.v1.raw_ops.ResourceAccumulatorNumAccumulated": "tf.raw_ops.ResourceAccumulatorNumAccumulated", + "tf.compat.v1.raw_ops.ResourceAccumulatorSetGlobalStep": "tf.raw_ops.ResourceAccumulatorSetGlobalStep", + "tf.compat.v1.raw_ops.ResourceAccumulatorTakeGradient": "tf.raw_ops.ResourceAccumulatorTakeGradient", + "tf.compat.v1.raw_ops.ResourceApplyAdaMax": "tf.raw_ops.ResourceApplyAdaMax", + "tf.compat.v1.raw_ops.ResourceApplyAdadelta": "tf.raw_ops.ResourceApplyAdadelta", + "tf.compat.v1.raw_ops.ResourceApplyAdagrad": "tf.raw_ops.ResourceApplyAdagrad", + "tf.compat.v1.raw_ops.ResourceApplyAdagradDA": "tf.raw_ops.ResourceApplyAdagradDA", + "tf.compat.v1.raw_ops.ResourceApplyAdagradV2": "tf.raw_ops.ResourceApplyAdagradV2", + "tf.compat.v1.raw_ops.ResourceApplyAdam": "tf.raw_ops.ResourceApplyAdam", + "tf.compat.v1.raw_ops.ResourceApplyAdamWithAmsgrad": "tf.raw_ops.ResourceApplyAdamWithAmsgrad", + "tf.compat.v1.raw_ops.ResourceApplyAddSign": "tf.raw_ops.ResourceApplyAddSign", + "tf.compat.v1.raw_ops.ResourceApplyCenteredRMSProp": "tf.raw_ops.ResourceApplyCenteredRMSProp", + "tf.compat.v1.raw_ops.ResourceApplyFtrl": "tf.raw_ops.ResourceApplyFtrl", + "tf.compat.v1.raw_ops.ResourceApplyFtrlV2": "tf.raw_ops.ResourceApplyFtrlV2", + "tf.compat.v1.raw_ops.ResourceApplyGradientDescent": "tf.raw_ops.ResourceApplyGradientDescent", + "tf.compat.v1.raw_ops.ResourceApplyKerasMomentum": "tf.raw_ops.ResourceApplyKerasMomentum", + "tf.compat.v1.raw_ops.ResourceApplyMomentum": "tf.raw_ops.ResourceApplyMomentum", + "tf.compat.v1.raw_ops.ResourceApplyPowerSign": "tf.raw_ops.ResourceApplyPowerSign", + "tf.compat.v1.raw_ops.ResourceApplyProximalAdagrad": "tf.raw_ops.ResourceApplyProximalAdagrad", + "tf.compat.v1.raw_ops.ResourceApplyProximalGradientDescent": "tf.raw_ops.ResourceApplyProximalGradientDescent", + "tf.compat.v1.raw_ops.ResourceApplyRMSProp": "tf.raw_ops.ResourceApplyRMSProp", + "tf.compat.v1.raw_ops.ResourceConditionalAccumulator": "tf.raw_ops.ResourceConditionalAccumulator", + "tf.compat.v1.raw_ops.ResourceCountUpTo": "tf.raw_ops.ResourceCountUpTo", + "tf.compat.v1.raw_ops.ResourceGather": "tf.raw_ops.ResourceGather", + "tf.compat.v1.raw_ops.ResourceGatherNd": "tf.raw_ops.ResourceGatherNd", + "tf.compat.v1.raw_ops.ResourceScatterAdd": "tf.raw_ops.ResourceScatterAdd", + "tf.compat.v1.raw_ops.ResourceScatterDiv": "tf.raw_ops.ResourceScatterDiv", + "tf.compat.v1.raw_ops.ResourceScatterMax": "tf.raw_ops.ResourceScatterMax", + "tf.compat.v1.raw_ops.ResourceScatterMin": "tf.raw_ops.ResourceScatterMin", + "tf.compat.v1.raw_ops.ResourceScatterMul": "tf.raw_ops.ResourceScatterMul", + "tf.compat.v1.raw_ops.ResourceScatterNdAdd": "tf.raw_ops.ResourceScatterNdAdd", + "tf.compat.v1.raw_ops.ResourceScatterNdSub": "tf.raw_ops.ResourceScatterNdSub", + "tf.compat.v1.raw_ops.ResourceScatterNdUpdate": "tf.raw_ops.ResourceScatterNdUpdate", + "tf.compat.v1.raw_ops.ResourceScatterSub": "tf.raw_ops.ResourceScatterSub", + "tf.compat.v1.raw_ops.ResourceScatterUpdate": "tf.raw_ops.ResourceScatterUpdate", + "tf.compat.v1.raw_ops.ResourceSparseApplyAdadelta": "tf.raw_ops.ResourceSparseApplyAdadelta", + "tf.compat.v1.raw_ops.ResourceSparseApplyAdagrad": "tf.raw_ops.ResourceSparseApplyAdagrad", + 
"tf.compat.v1.raw_ops.ResourceSparseApplyAdagradDA": "tf.raw_ops.ResourceSparseApplyAdagradDA", + "tf.compat.v1.raw_ops.ResourceSparseApplyAdagradV2": "tf.raw_ops.ResourceSparseApplyAdagradV2", + "tf.compat.v1.raw_ops.ResourceSparseApplyCenteredRMSProp": "tf.raw_ops.ResourceSparseApplyCenteredRMSProp", + "tf.compat.v1.raw_ops.ResourceSparseApplyFtrl": "tf.raw_ops.ResourceSparseApplyFtrl", + "tf.compat.v1.raw_ops.ResourceSparseApplyFtrlV2": "tf.raw_ops.ResourceSparseApplyFtrlV2", + "tf.compat.v1.raw_ops.ResourceSparseApplyKerasMomentum": "tf.raw_ops.ResourceSparseApplyKerasMomentum", + "tf.compat.v1.raw_ops.ResourceSparseApplyMomentum": "tf.raw_ops.ResourceSparseApplyMomentum", + "tf.compat.v1.raw_ops.ResourceSparseApplyProximalAdagrad": "tf.raw_ops.ResourceSparseApplyProximalAdagrad", + "tf.compat.v1.raw_ops.ResourceSparseApplyProximalGradientDescent": "tf.raw_ops.ResourceSparseApplyProximalGradientDescent", + "tf.compat.v1.raw_ops.ResourceSparseApplyRMSProp": "tf.raw_ops.ResourceSparseApplyRMSProp", + "tf.compat.v1.raw_ops.ResourceStridedSliceAssign": "tf.raw_ops.ResourceStridedSliceAssign", + "tf.compat.v1.raw_ops.Restore": "tf.raw_ops.Restore", + "tf.compat.v1.raw_ops.RestoreSlice": "tf.raw_ops.RestoreSlice", + "tf.compat.v1.raw_ops.RestoreV2": "tf.raw_ops.RestoreV2", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingADAMParameters": "tf.raw_ops.RetrieveTPUEmbeddingADAMParameters", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingADAMParametersGradAccumDebug": "tf.raw_ops.RetrieveTPUEmbeddingADAMParametersGradAccumDebug", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdadeltaParameters": "tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParameters", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug": "tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdagradParameters": "tf.raw_ops.RetrieveTPUEmbeddingAdagradParameters", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug": "tf.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters": "tf.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingFTRLParameters": "tf.raw_ops.RetrieveTPUEmbeddingFTRLParameters", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug": "tf.raw_ops.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingMDLAdagradLightParameters": "tf.raw_ops.RetrieveTPUEmbeddingMDLAdagradLightParameters", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingMomentumParameters": "tf.raw_ops.RetrieveTPUEmbeddingMomentumParameters", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingMomentumParametersGradAccumDebug": "tf.raw_ops.RetrieveTPUEmbeddingMomentumParametersGradAccumDebug", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters": "tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug": "tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingRMSPropParameters": "tf.raw_ops.RetrieveTPUEmbeddingRMSPropParameters", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug": "tf.raw_ops.RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug", + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParameters": "tf.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParameters", + 
"tf.compat.v1.raw_ops.Reverse": "tf.raw_ops.Reverse", + "tf.compat.v1.raw_ops.ReverseSequence": "tf.raw_ops.ReverseSequence", + "tf.compat.v1.raw_ops.ReverseV2": "tf.raw_ops.ReverseV2", + "tf.compat.v1.raw_ops.RightShift": "tf.raw_ops.RightShift", + "tf.compat.v1.raw_ops.Rint": "tf.raw_ops.Rint", + "tf.compat.v1.raw_ops.RngSkip": "tf.raw_ops.RngSkip", + "tf.compat.v1.raw_ops.Roll": "tf.raw_ops.Roll", + "tf.compat.v1.raw_ops.Round": "tf.raw_ops.Round", + "tf.compat.v1.raw_ops.Rsqrt": "tf.raw_ops.Rsqrt", + "tf.compat.v1.raw_ops.RsqrtGrad": "tf.raw_ops.RsqrtGrad", + "tf.compat.v1.raw_ops.SampleDistortedBoundingBox": "tf.raw_ops.SampleDistortedBoundingBox", + "tf.compat.v1.raw_ops.SampleDistortedBoundingBoxV2": "tf.raw_ops.SampleDistortedBoundingBoxV2", + "tf.compat.v1.raw_ops.SamplingDataset": "tf.raw_ops.SamplingDataset", + "tf.compat.v1.raw_ops.Save": "tf.raw_ops.Save", + "tf.compat.v1.raw_ops.SaveSlices": "tf.raw_ops.SaveSlices", + "tf.compat.v1.raw_ops.SaveV2": "tf.raw_ops.SaveV2", + "tf.compat.v1.raw_ops.ScalarSummary": "tf.raw_ops.ScalarSummary", + "tf.compat.v1.raw_ops.ScaleAndTranslate": "tf.raw_ops.ScaleAndTranslate", + "tf.compat.v1.raw_ops.ScaleAndTranslateGrad": "tf.raw_ops.ScaleAndTranslateGrad", + "tf.compat.v1.raw_ops.ScanDataset": "tf.raw_ops.ScanDataset", + "tf.compat.v1.raw_ops.ScatterAdd": "tf.raw_ops.ScatterAdd", + "tf.compat.v1.raw_ops.ScatterDiv": "tf.raw_ops.ScatterDiv", + "tf.compat.v1.raw_ops.ScatterMax": "tf.raw_ops.ScatterMax", + "tf.compat.v1.raw_ops.ScatterMin": "tf.raw_ops.ScatterMin", + "tf.compat.v1.raw_ops.ScatterMul": "tf.raw_ops.ScatterMul", + "tf.compat.v1.raw_ops.ScatterNd": "tf.raw_ops.ScatterNd", + "tf.compat.v1.raw_ops.ScatterNdAdd": "tf.raw_ops.ScatterNdAdd", + "tf.compat.v1.raw_ops.ScatterNdNonAliasingAdd": "tf.raw_ops.ScatterNdNonAliasingAdd", + "tf.compat.v1.raw_ops.ScatterNdSub": "tf.raw_ops.ScatterNdSub", + "tf.compat.v1.raw_ops.ScatterNdUpdate": "tf.raw_ops.ScatterNdUpdate", + "tf.compat.v1.raw_ops.ScatterSub": "tf.raw_ops.ScatterSub", + "tf.compat.v1.raw_ops.ScatterUpdate": "tf.raw_ops.ScatterUpdate", + "tf.compat.v1.raw_ops.SdcaFprint": "tf.raw_ops.SdcaFprint", + "tf.compat.v1.raw_ops.SdcaOptimizer": "tf.raw_ops.SdcaOptimizer", + "tf.compat.v1.raw_ops.SdcaOptimizerV2": "tf.raw_ops.SdcaOptimizerV2", + "tf.compat.v1.raw_ops.SdcaShrinkL1": "tf.raw_ops.SdcaShrinkL1", + "tf.compat.v1.raw_ops.SegmentMax": "tf.raw_ops.SegmentMax", + "tf.compat.v1.raw_ops.SegmentMean": "tf.raw_ops.SegmentMean", + "tf.compat.v1.raw_ops.SegmentMin": "tf.raw_ops.SegmentMin", + "tf.compat.v1.raw_ops.SegmentProd": "tf.raw_ops.SegmentProd", + "tf.compat.v1.raw_ops.SegmentSum": "tf.raw_ops.SegmentSum", + "tf.compat.v1.raw_ops.Select": "tf.raw_ops.Select", + "tf.compat.v1.raw_ops.SelectV2": "tf.raw_ops.SelectV2", + "tf.compat.v1.raw_ops.SelfAdjointEig": "tf.raw_ops.SelfAdjointEig", + "tf.compat.v1.raw_ops.SelfAdjointEigV2": "tf.raw_ops.SelfAdjointEigV2", + "tf.compat.v1.raw_ops.Selu": "tf.raw_ops.Selu", + "tf.compat.v1.raw_ops.SeluGrad": "tf.raw_ops.SeluGrad", + "tf.compat.v1.raw_ops.Send": "tf.raw_ops.Send", + "tf.compat.v1.raw_ops.SendTPUEmbeddingGradients": "tf.raw_ops.SendTPUEmbeddingGradients", + "tf.compat.v1.raw_ops.SerializeIterator": "tf.raw_ops.SerializeIterator", + "tf.compat.v1.raw_ops.SerializeManySparse": "tf.raw_ops.SerializeManySparse", + "tf.compat.v1.raw_ops.SerializeSparse": "tf.raw_ops.SerializeSparse", + "tf.compat.v1.raw_ops.SerializeTensor": "tf.raw_ops.SerializeTensor", + "tf.compat.v1.raw_ops.SetSize": "tf.raw_ops.SetSize", + 
"tf.compat.v1.raw_ops.SetStatsAggregatorDataset": "tf.raw_ops.SetStatsAggregatorDataset", + "tf.compat.v1.raw_ops.Shape": "tf.raw_ops.Shape", + "tf.compat.v1.raw_ops.ShapeN": "tf.raw_ops.ShapeN", + "tf.compat.v1.raw_ops.ShardDataset": "tf.raw_ops.ShardDataset", + "tf.compat.v1.raw_ops.ShardedFilename": "tf.raw_ops.ShardedFilename", + "tf.compat.v1.raw_ops.ShardedFilespec": "tf.raw_ops.ShardedFilespec", + "tf.compat.v1.raw_ops.ShuffleAndRepeatDataset": "tf.raw_ops.ShuffleAndRepeatDataset", + "tf.compat.v1.raw_ops.ShuffleDataset": "tf.raw_ops.ShuffleDataset", + "tf.compat.v1.raw_ops.ShuffleDatasetV2": "tf.raw_ops.ShuffleDatasetV2", + "tf.compat.v1.raw_ops.ShutdownDistributedTPU": "tf.raw_ops.ShutdownDistributedTPU", + "tf.compat.v1.raw_ops.Sigmoid": "tf.raw_ops.Sigmoid", + "tf.compat.v1.raw_ops.SigmoidGrad": "tf.raw_ops.SigmoidGrad", + "tf.compat.v1.raw_ops.Sign": "tf.raw_ops.Sign", + "tf.compat.v1.raw_ops.Sin": "tf.raw_ops.Sin", + "tf.compat.v1.raw_ops.Sinh": "tf.raw_ops.Sinh", + "tf.compat.v1.raw_ops.Size": "tf.raw_ops.Size", + "tf.compat.v1.raw_ops.SkipDataset": "tf.raw_ops.SkipDataset", + "tf.compat.v1.raw_ops.SleepDataset": "tf.raw_ops.SleepDataset", + "tf.compat.v1.raw_ops.Slice": "tf.raw_ops.Slice", + "tf.compat.v1.raw_ops.SlidingWindowDataset": "tf.raw_ops.SlidingWindowDataset", + "tf.compat.v1.raw_ops.Snapshot": "tf.raw_ops.Snapshot", + "tf.compat.v1.raw_ops.SnapshotDataset": "tf.raw_ops.SnapshotDataset", + "tf.compat.v1.raw_ops.SobolSample": "tf.raw_ops.SobolSample", + "tf.compat.v1.raw_ops.Softmax": "tf.raw_ops.Softmax", + "tf.compat.v1.raw_ops.SoftmaxCrossEntropyWithLogits": "tf.raw_ops.SoftmaxCrossEntropyWithLogits", + "tf.compat.v1.raw_ops.Softplus": "tf.raw_ops.Softplus", + "tf.compat.v1.raw_ops.SoftplusGrad": "tf.raw_ops.SoftplusGrad", + "tf.compat.v1.raw_ops.Softsign": "tf.raw_ops.Softsign", + "tf.compat.v1.raw_ops.SoftsignGrad": "tf.raw_ops.SoftsignGrad", + "tf.compat.v1.raw_ops.SpaceToBatch": "tf.raw_ops.SpaceToBatch", + "tf.compat.v1.raw_ops.SpaceToBatchND": "tf.raw_ops.SpaceToBatchND", + "tf.compat.v1.raw_ops.SpaceToDepth": "tf.raw_ops.SpaceToDepth", + "tf.compat.v1.raw_ops.SparseAccumulatorApplyGradient": "tf.raw_ops.SparseAccumulatorApplyGradient", + "tf.compat.v1.raw_ops.SparseAccumulatorTakeGradient": "tf.raw_ops.SparseAccumulatorTakeGradient", + "tf.compat.v1.raw_ops.SparseAdd": "tf.raw_ops.SparseAdd", + "tf.compat.v1.raw_ops.SparseAddGrad": "tf.raw_ops.SparseAddGrad", + "tf.compat.v1.raw_ops.SparseApplyAdadelta": "tf.raw_ops.SparseApplyAdadelta", + "tf.compat.v1.raw_ops.SparseApplyAdagrad": "tf.raw_ops.SparseApplyAdagrad", + "tf.compat.v1.raw_ops.SparseApplyAdagradDA": "tf.raw_ops.SparseApplyAdagradDA", + "tf.compat.v1.raw_ops.SparseApplyAdagradV2": "tf.raw_ops.SparseApplyAdagradV2", + "tf.compat.v1.raw_ops.SparseApplyCenteredRMSProp": "tf.raw_ops.SparseApplyCenteredRMSProp", + "tf.compat.v1.raw_ops.SparseApplyFtrl": "tf.raw_ops.SparseApplyFtrl", + "tf.compat.v1.raw_ops.SparseApplyFtrlV2": "tf.raw_ops.SparseApplyFtrlV2", + "tf.compat.v1.raw_ops.SparseApplyMomentum": "tf.raw_ops.SparseApplyMomentum", + "tf.compat.v1.raw_ops.SparseApplyProximalAdagrad": "tf.raw_ops.SparseApplyProximalAdagrad", + "tf.compat.v1.raw_ops.SparseApplyProximalGradientDescent": "tf.raw_ops.SparseApplyProximalGradientDescent", + "tf.compat.v1.raw_ops.SparseApplyRMSProp": "tf.raw_ops.SparseApplyRMSProp", + "tf.compat.v1.raw_ops.SparseConcat": "tf.raw_ops.SparseConcat", + "tf.compat.v1.raw_ops.SparseConditionalAccumulator": "tf.raw_ops.SparseConditionalAccumulator", + 
"tf.compat.v1.raw_ops.SparseCross": "tf.raw_ops.SparseCross", + "tf.compat.v1.raw_ops.SparseDenseCwiseAdd": "tf.raw_ops.SparseDenseCwiseAdd", + "tf.compat.v1.raw_ops.SparseDenseCwiseDiv": "tf.raw_ops.SparseDenseCwiseDiv", + "tf.compat.v1.raw_ops.SparseDenseCwiseMul": "tf.raw_ops.SparseDenseCwiseMul", + "tf.compat.v1.raw_ops.SparseFillEmptyRows": "tf.raw_ops.SparseFillEmptyRows", + "tf.compat.v1.raw_ops.SparseFillEmptyRowsGrad": "tf.raw_ops.SparseFillEmptyRowsGrad", + "tf.compat.v1.raw_ops.SparseMatMul": "tf.raw_ops.SparseMatMul", + "tf.compat.v1.raw_ops.SparseMatrixAdd": "tf.raw_ops.SparseMatrixAdd", + "tf.compat.v1.raw_ops.SparseMatrixMatMul": "tf.raw_ops.SparseMatrixMatMul", + "tf.compat.v1.raw_ops.SparseMatrixMul": "tf.raw_ops.SparseMatrixMul", + "tf.compat.v1.raw_ops.SparseMatrixNNZ": "tf.raw_ops.SparseMatrixNNZ", + "tf.compat.v1.raw_ops.SparseMatrixOrderingAMD": "tf.raw_ops.SparseMatrixOrderingAMD", + "tf.compat.v1.raw_ops.SparseMatrixSoftmax": "tf.raw_ops.SparseMatrixSoftmax", + "tf.compat.v1.raw_ops.SparseMatrixSoftmaxGrad": "tf.raw_ops.SparseMatrixSoftmaxGrad", + "tf.compat.v1.raw_ops.SparseMatrixSparseCholesky": "tf.raw_ops.SparseMatrixSparseCholesky", + "tf.compat.v1.raw_ops.SparseMatrixSparseMatMul": "tf.raw_ops.SparseMatrixSparseMatMul", + "tf.compat.v1.raw_ops.SparseMatrixTranspose": "tf.raw_ops.SparseMatrixTranspose", + "tf.compat.v1.raw_ops.SparseMatrixZeros": "tf.raw_ops.SparseMatrixZeros", + "tf.compat.v1.raw_ops.SparseReduceMax": "tf.raw_ops.SparseReduceMax", + "tf.compat.v1.raw_ops.SparseReduceMaxSparse": "tf.raw_ops.SparseReduceMaxSparse", + "tf.compat.v1.raw_ops.SparseReduceSum": "tf.raw_ops.SparseReduceSum", + "tf.compat.v1.raw_ops.SparseReduceSumSparse": "tf.raw_ops.SparseReduceSumSparse", + "tf.compat.v1.raw_ops.SparseReorder": "tf.raw_ops.SparseReorder", + "tf.compat.v1.raw_ops.SparseReshape": "tf.raw_ops.SparseReshape", + "tf.compat.v1.raw_ops.SparseSegmentMean": "tf.raw_ops.SparseSegmentMean", + "tf.compat.v1.raw_ops.SparseSegmentMeanGrad": "tf.raw_ops.SparseSegmentMeanGrad", + "tf.compat.v1.raw_ops.SparseSegmentMeanWithNumSegments": "tf.raw_ops.SparseSegmentMeanWithNumSegments", + "tf.compat.v1.raw_ops.SparseSegmentSqrtN": "tf.raw_ops.SparseSegmentSqrtN", + "tf.compat.v1.raw_ops.SparseSegmentSqrtNGrad": "tf.raw_ops.SparseSegmentSqrtNGrad", + "tf.compat.v1.raw_ops.SparseSegmentSqrtNWithNumSegments": "tf.raw_ops.SparseSegmentSqrtNWithNumSegments", + "tf.compat.v1.raw_ops.SparseSegmentSum": "tf.raw_ops.SparseSegmentSum", + "tf.compat.v1.raw_ops.SparseSegmentSumWithNumSegments": "tf.raw_ops.SparseSegmentSumWithNumSegments", + "tf.compat.v1.raw_ops.SparseSlice": "tf.raw_ops.SparseSlice", + "tf.compat.v1.raw_ops.SparseSliceGrad": "tf.raw_ops.SparseSliceGrad", + "tf.compat.v1.raw_ops.SparseSoftmax": "tf.raw_ops.SparseSoftmax", + "tf.compat.v1.raw_ops.SparseSoftmaxCrossEntropyWithLogits": "tf.raw_ops.SparseSoftmaxCrossEntropyWithLogits", + "tf.compat.v1.raw_ops.SparseSparseMaximum": "tf.raw_ops.SparseSparseMaximum", + "tf.compat.v1.raw_ops.SparseSparseMinimum": "tf.raw_ops.SparseSparseMinimum", + "tf.compat.v1.raw_ops.SparseSplit": "tf.raw_ops.SparseSplit", + "tf.compat.v1.raw_ops.SparseTensorDenseAdd": "tf.raw_ops.SparseTensorDenseAdd", + "tf.compat.v1.raw_ops.SparseTensorDenseMatMul": "tf.raw_ops.SparseTensorDenseMatMul", + "tf.compat.v1.raw_ops.SparseTensorSliceDataset": "tf.raw_ops.SparseTensorSliceDataset", + "tf.compat.v1.raw_ops.SparseTensorToCSRSparseMatrix": "tf.raw_ops.SparseTensorToCSRSparseMatrix", + "tf.compat.v1.raw_ops.SparseToDense": 
"tf.raw_ops.SparseToDense", + "tf.compat.v1.raw_ops.SparseToSparseSetOperation": "tf.raw_ops.SparseToSparseSetOperation", + "tf.compat.v1.raw_ops.Spence": "tf.raw_ops.Spence", + "tf.compat.v1.raw_ops.Split": "tf.raw_ops.Split", + "tf.compat.v1.raw_ops.SplitV": "tf.raw_ops.SplitV", + "tf.compat.v1.raw_ops.SqlDataset": "tf.raw_ops.SqlDataset", + "tf.compat.v1.raw_ops.Sqrt": "tf.raw_ops.Sqrt", + "tf.compat.v1.raw_ops.SqrtGrad": "tf.raw_ops.SqrtGrad", + "tf.compat.v1.raw_ops.Square": "tf.raw_ops.Square", + "tf.compat.v1.raw_ops.SquaredDifference": "tf.raw_ops.SquaredDifference", + "tf.compat.v1.raw_ops.Squeeze": "tf.raw_ops.Squeeze", + "tf.compat.v1.raw_ops.Stack": "tf.raw_ops.Stack", + "tf.compat.v1.raw_ops.StackClose": "tf.raw_ops.StackClose", + "tf.compat.v1.raw_ops.StackCloseV2": "tf.raw_ops.StackCloseV2", + "tf.compat.v1.raw_ops.StackPop": "tf.raw_ops.StackPop", + "tf.compat.v1.raw_ops.StackPopV2": "tf.raw_ops.StackPopV2", + "tf.compat.v1.raw_ops.StackPush": "tf.raw_ops.StackPush", + "tf.compat.v1.raw_ops.StackPushV2": "tf.raw_ops.StackPushV2", + "tf.compat.v1.raw_ops.StackV2": "tf.raw_ops.StackV2", + "tf.compat.v1.raw_ops.Stage": "tf.raw_ops.Stage", + "tf.compat.v1.raw_ops.StageClear": "tf.raw_ops.StageClear", + "tf.compat.v1.raw_ops.StagePeek": "tf.raw_ops.StagePeek", + "tf.compat.v1.raw_ops.StageSize": "tf.raw_ops.StageSize", + "tf.compat.v1.raw_ops.StatefulPartitionedCall": "tf.raw_ops.StatefulPartitionedCall", + "tf.compat.v1.raw_ops.StatefulRandomBinomial": "tf.raw_ops.StatefulRandomBinomial", + "tf.compat.v1.raw_ops.StatefulStandardNormal": "tf.raw_ops.StatefulStandardNormal", + "tf.compat.v1.raw_ops.StatefulStandardNormalV2": "tf.raw_ops.StatefulStandardNormalV2", + "tf.compat.v1.raw_ops.StatefulTruncatedNormal": "tf.raw_ops.StatefulTruncatedNormal", + "tf.compat.v1.raw_ops.StatefulUniform": "tf.raw_ops.StatefulUniform", + "tf.compat.v1.raw_ops.StatefulUniformFullInt": "tf.raw_ops.StatefulUniformFullInt", + "tf.compat.v1.raw_ops.StatefulUniformInt": "tf.raw_ops.StatefulUniformInt", + "tf.compat.v1.raw_ops.StatelessIf": "tf.raw_ops.StatelessIf", + "tf.compat.v1.raw_ops.StatelessMultinomial": "tf.raw_ops.StatelessMultinomial", + "tf.compat.v1.raw_ops.StatelessRandomBinomial": "tf.raw_ops.StatelessRandomBinomial", + "tf.compat.v1.raw_ops.StatelessRandomGammaV2": "tf.raw_ops.StatelessRandomGammaV2", + "tf.compat.v1.raw_ops.StatelessRandomNormal": "tf.raw_ops.StatelessRandomNormal", + "tf.compat.v1.raw_ops.StatelessRandomPoisson": "tf.raw_ops.StatelessRandomPoisson", + "tf.compat.v1.raw_ops.StatelessRandomUniform": "tf.raw_ops.StatelessRandomUniform", + "tf.compat.v1.raw_ops.StatelessRandomUniformFullInt": "tf.raw_ops.StatelessRandomUniformFullInt", + "tf.compat.v1.raw_ops.StatelessRandomUniformInt": "tf.raw_ops.StatelessRandomUniformInt", + "tf.compat.v1.raw_ops.StatelessTruncatedNormal": "tf.raw_ops.StatelessTruncatedNormal", + "tf.compat.v1.raw_ops.StatelessWhile": "tf.raw_ops.StatelessWhile", + "tf.compat.v1.raw_ops.StaticRegexFullMatch": "tf.raw_ops.StaticRegexFullMatch", + "tf.compat.v1.raw_ops.StaticRegexReplace": "tf.raw_ops.StaticRegexReplace", + "tf.compat.v1.raw_ops.StatsAggregatorHandle": "tf.raw_ops.StatsAggregatorHandle", + "tf.compat.v1.raw_ops.StatsAggregatorHandleV2": "tf.raw_ops.StatsAggregatorHandleV2", + "tf.compat.v1.raw_ops.StatsAggregatorSetSummaryWriter": "tf.raw_ops.StatsAggregatorSetSummaryWriter", + "tf.compat.v1.raw_ops.StatsAggregatorSummary": "tf.raw_ops.StatsAggregatorSummary", + "tf.compat.v1.raw_ops.StopGradient": "tf.raw_ops.StopGradient", + 
"tf.compat.v1.raw_ops.StridedSlice": "tf.raw_ops.StridedSlice", + "tf.compat.v1.raw_ops.StridedSliceAssign": "tf.raw_ops.StridedSliceAssign", + "tf.compat.v1.raw_ops.StridedSliceGrad": "tf.raw_ops.StridedSliceGrad", + "tf.compat.v1.raw_ops.StringFormat": "tf.raw_ops.StringFormat", + "tf.compat.v1.raw_ops.StringJoin": "tf.raw_ops.StringJoin", + "tf.compat.v1.raw_ops.StringLength": "tf.raw_ops.StringLength", + "tf.compat.v1.raw_ops.StringLower": "tf.raw_ops.StringLower", + "tf.compat.v1.raw_ops.StringNGrams": "tf.raw_ops.StringNGrams", + "tf.compat.v1.raw_ops.StringSplit": "tf.raw_ops.StringSplit", + "tf.compat.v1.raw_ops.StringSplitV2": "tf.raw_ops.StringSplitV2", + "tf.compat.v1.raw_ops.StringStrip": "tf.raw_ops.StringStrip", + "tf.compat.v1.raw_ops.StringToHashBucket": "tf.raw_ops.StringToHashBucket", + "tf.compat.v1.raw_ops.StringToHashBucketFast": "tf.raw_ops.StringToHashBucketFast", + "tf.compat.v1.raw_ops.StringToHashBucketStrong": "tf.raw_ops.StringToHashBucketStrong", + "tf.compat.v1.raw_ops.StringToNumber": "tf.raw_ops.StringToNumber", + "tf.compat.v1.raw_ops.StringUpper": "tf.raw_ops.StringUpper", + "tf.compat.v1.raw_ops.Sub": "tf.raw_ops.Sub", + "tf.compat.v1.raw_ops.Substr": "tf.raw_ops.Substr", + "tf.compat.v1.raw_ops.Sum": "tf.raw_ops.Sum", + "tf.compat.v1.raw_ops.SummaryWriter": "tf.raw_ops.SummaryWriter", + "tf.compat.v1.raw_ops.Svd": "tf.raw_ops.Svd", + "tf.compat.v1.raw_ops.Switch": "tf.raw_ops.Switch", + "tf.compat.v1.raw_ops.SymbolicGradient": "tf.raw_ops.SymbolicGradient", + "tf.compat.v1.raw_ops.TFRecordDataset": "tf.raw_ops.TFRecordDataset", + "tf.compat.v1.raw_ops.TFRecordReader": "tf.raw_ops.TFRecordReader", + "tf.compat.v1.raw_ops.TFRecordReaderV2": "tf.raw_ops.TFRecordReaderV2", + "tf.compat.v1.raw_ops.TPUCompilationResult": "tf.raw_ops.TPUCompilationResult", + "tf.compat.v1.raw_ops.TPUEmbeddingActivations": "tf.raw_ops.TPUEmbeddingActivations", + "tf.compat.v1.raw_ops.TPUOrdinalSelector": "tf.raw_ops.TPUOrdinalSelector", + "tf.compat.v1.raw_ops.TPUPartitionedCall": "tf.raw_ops.TPUPartitionedCall", + "tf.compat.v1.raw_ops.TPUReplicateMetadata": "tf.raw_ops.TPUReplicateMetadata", + "tf.compat.v1.raw_ops.TPUReplicatedInput": "tf.raw_ops.TPUReplicatedInput", + "tf.compat.v1.raw_ops.TPUReplicatedOutput": "tf.raw_ops.TPUReplicatedOutput", + "tf.compat.v1.raw_ops.TakeDataset": "tf.raw_ops.TakeDataset", + "tf.compat.v1.raw_ops.TakeManySparseFromTensorsMap": "tf.raw_ops.TakeManySparseFromTensorsMap", + "tf.compat.v1.raw_ops.TakeWhileDataset": "tf.raw_ops.TakeWhileDataset", + "tf.compat.v1.raw_ops.Tan": "tf.raw_ops.Tan", + "tf.compat.v1.raw_ops.Tanh": "tf.raw_ops.Tanh", + "tf.compat.v1.raw_ops.TanhGrad": "tf.raw_ops.TanhGrad", + "tf.compat.v1.raw_ops.TemporaryVariable": "tf.raw_ops.TemporaryVariable", + "tf.compat.v1.raw_ops.TensorArray": "tf.raw_ops.TensorArray", + "tf.compat.v1.raw_ops.TensorArrayClose": "tf.raw_ops.TensorArrayClose", + "tf.compat.v1.raw_ops.TensorArrayCloseV2": "tf.raw_ops.TensorArrayCloseV2", + "tf.compat.v1.raw_ops.TensorArrayCloseV3": "tf.raw_ops.TensorArrayCloseV3", + "tf.compat.v1.raw_ops.TensorArrayConcat": "tf.raw_ops.TensorArrayConcat", + "tf.compat.v1.raw_ops.TensorArrayConcatV2": "tf.raw_ops.TensorArrayConcatV2", + "tf.compat.v1.raw_ops.TensorArrayConcatV3": "tf.raw_ops.TensorArrayConcatV3", + "tf.compat.v1.raw_ops.TensorArrayGather": "tf.raw_ops.TensorArrayGather", + "tf.compat.v1.raw_ops.TensorArrayGatherV2": "tf.raw_ops.TensorArrayGatherV2", + "tf.compat.v1.raw_ops.TensorArrayGatherV3": "tf.raw_ops.TensorArrayGatherV3", + 
"tf.compat.v1.raw_ops.TensorArrayGrad": "tf.raw_ops.TensorArrayGrad", + "tf.compat.v1.raw_ops.TensorArrayGradV2": "tf.raw_ops.TensorArrayGradV2", + "tf.compat.v1.raw_ops.TensorArrayGradV3": "tf.raw_ops.TensorArrayGradV3", + "tf.compat.v1.raw_ops.TensorArrayGradWithShape": "tf.raw_ops.TensorArrayGradWithShape", + "tf.compat.v1.raw_ops.TensorArrayPack": "tf.raw_ops.TensorArrayPack", + "tf.compat.v1.raw_ops.TensorArrayRead": "tf.raw_ops.TensorArrayRead", + "tf.compat.v1.raw_ops.TensorArrayReadV2": "tf.raw_ops.TensorArrayReadV2", + "tf.compat.v1.raw_ops.TensorArrayReadV3": "tf.raw_ops.TensorArrayReadV3", + "tf.compat.v1.raw_ops.TensorArrayScatter": "tf.raw_ops.TensorArrayScatter", + "tf.compat.v1.raw_ops.TensorArrayScatterV2": "tf.raw_ops.TensorArrayScatterV2", + "tf.compat.v1.raw_ops.TensorArrayScatterV3": "tf.raw_ops.TensorArrayScatterV3", + "tf.compat.v1.raw_ops.TensorArraySize": "tf.raw_ops.TensorArraySize", + "tf.compat.v1.raw_ops.TensorArraySizeV2": "tf.raw_ops.TensorArraySizeV2", + "tf.compat.v1.raw_ops.TensorArraySizeV3": "tf.raw_ops.TensorArraySizeV3", + "tf.compat.v1.raw_ops.TensorArraySplit": "tf.raw_ops.TensorArraySplit", + "tf.compat.v1.raw_ops.TensorArraySplitV2": "tf.raw_ops.TensorArraySplitV2", + "tf.compat.v1.raw_ops.TensorArraySplitV3": "tf.raw_ops.TensorArraySplitV3", + "tf.compat.v1.raw_ops.TensorArrayUnpack": "tf.raw_ops.TensorArrayUnpack", + "tf.compat.v1.raw_ops.TensorArrayV2": "tf.raw_ops.TensorArrayV2", + "tf.compat.v1.raw_ops.TensorArrayV3": "tf.raw_ops.TensorArrayV3", + "tf.compat.v1.raw_ops.TensorArrayWrite": "tf.raw_ops.TensorArrayWrite", + "tf.compat.v1.raw_ops.TensorArrayWriteV2": "tf.raw_ops.TensorArrayWriteV2", + "tf.compat.v1.raw_ops.TensorArrayWriteV3": "tf.raw_ops.TensorArrayWriteV3", + "tf.compat.v1.raw_ops.TensorDataset": "tf.raw_ops.TensorDataset", + "tf.compat.v1.raw_ops.TensorListConcat": "tf.raw_ops.TensorListConcat", + "tf.compat.v1.raw_ops.TensorListConcatLists": "tf.raw_ops.TensorListConcatLists", + "tf.compat.v1.raw_ops.TensorListConcatV2": "tf.raw_ops.TensorListConcatV2", + "tf.compat.v1.raw_ops.TensorListElementShape": "tf.raw_ops.TensorListElementShape", + "tf.compat.v1.raw_ops.TensorListFromTensor": "tf.raw_ops.TensorListFromTensor", + "tf.compat.v1.raw_ops.TensorListGather": "tf.raw_ops.TensorListGather", + "tf.compat.v1.raw_ops.TensorListGetItem": "tf.raw_ops.TensorListGetItem", + "tf.compat.v1.raw_ops.TensorListLength": "tf.raw_ops.TensorListLength", + "tf.compat.v1.raw_ops.TensorListPopBack": "tf.raw_ops.TensorListPopBack", + "tf.compat.v1.raw_ops.TensorListPushBack": "tf.raw_ops.TensorListPushBack", + "tf.compat.v1.raw_ops.TensorListPushBackBatch": "tf.raw_ops.TensorListPushBackBatch", + "tf.compat.v1.raw_ops.TensorListReserve": "tf.raw_ops.TensorListReserve", + "tf.compat.v1.raw_ops.TensorListResize": "tf.raw_ops.TensorListResize", + "tf.compat.v1.raw_ops.TensorListScatter": "tf.raw_ops.TensorListScatter", + "tf.compat.v1.raw_ops.TensorListScatterIntoExistingList": "tf.raw_ops.TensorListScatterIntoExistingList", + "tf.compat.v1.raw_ops.TensorListScatterV2": "tf.raw_ops.TensorListScatterV2", + "tf.compat.v1.raw_ops.TensorListSetItem": "tf.raw_ops.TensorListSetItem", + "tf.compat.v1.raw_ops.TensorListSplit": "tf.raw_ops.TensorListSplit", + "tf.compat.v1.raw_ops.TensorListStack": "tf.raw_ops.TensorListStack", + "tf.compat.v1.raw_ops.TensorScatterAdd": "tf.raw_ops.TensorScatterAdd", + "tf.compat.v1.raw_ops.TensorScatterSub": "tf.raw_ops.TensorScatterSub", + "tf.compat.v1.raw_ops.TensorScatterUpdate": "tf.raw_ops.TensorScatterUpdate", + 
"tf.compat.v1.raw_ops.TensorSliceDataset": "tf.raw_ops.TensorSliceDataset", + "tf.compat.v1.raw_ops.TensorStridedSliceUpdate": "tf.raw_ops.TensorStridedSliceUpdate", + "tf.compat.v1.raw_ops.TensorSummary": "tf.raw_ops.TensorSummary", + "tf.compat.v1.raw_ops.TensorSummaryV2": "tf.raw_ops.TensorSummaryV2", + "tf.compat.v1.raw_ops.TextLineDataset": "tf.raw_ops.TextLineDataset", + "tf.compat.v1.raw_ops.TextLineReader": "tf.raw_ops.TextLineReader", + "tf.compat.v1.raw_ops.TextLineReaderV2": "tf.raw_ops.TextLineReaderV2", + "tf.compat.v1.raw_ops.ThreadPoolDataset": "tf.raw_ops.ThreadPoolDataset", + "tf.compat.v1.raw_ops.ThreadPoolHandle": "tf.raw_ops.ThreadPoolHandle", + "tf.compat.v1.raw_ops.ThreadUnsafeUnigramCandidateSampler": "tf.raw_ops.ThreadUnsafeUnigramCandidateSampler", + "tf.compat.v1.raw_ops.Tile": "tf.raw_ops.Tile", + "tf.compat.v1.raw_ops.TileGrad": "tf.raw_ops.TileGrad", + "tf.compat.v1.raw_ops.Timestamp": "tf.raw_ops.Timestamp", + "tf.compat.v1.raw_ops.ToBool": "tf.raw_ops.ToBool", + "tf.compat.v1.raw_ops.TopK": "tf.raw_ops.TopK", + "tf.compat.v1.raw_ops.TopKV2": "tf.raw_ops.TopKV2", + "tf.compat.v1.raw_ops.Transpose": "tf.raw_ops.Transpose", + "tf.compat.v1.raw_ops.TridiagonalMatMul": "tf.raw_ops.TridiagonalMatMul", + "tf.compat.v1.raw_ops.TridiagonalSolve": "tf.raw_ops.TridiagonalSolve", + "tf.compat.v1.raw_ops.TruncateDiv": "tf.raw_ops.TruncateDiv", + "tf.compat.v1.raw_ops.TruncateMod": "tf.raw_ops.TruncateMod", + "tf.compat.v1.raw_ops.TruncatedNormal": "tf.raw_ops.TruncatedNormal", + "tf.compat.v1.raw_ops.Unbatch": "tf.raw_ops.Unbatch", + "tf.compat.v1.raw_ops.UnbatchDataset": "tf.raw_ops.UnbatchDataset", + "tf.compat.v1.raw_ops.UnbatchGrad": "tf.raw_ops.UnbatchGrad", + "tf.compat.v1.raw_ops.UnicodeDecode": "tf.raw_ops.UnicodeDecode", + "tf.compat.v1.raw_ops.UnicodeDecodeWithOffsets": "tf.raw_ops.UnicodeDecodeWithOffsets", + "tf.compat.v1.raw_ops.UnicodeEncode": "tf.raw_ops.UnicodeEncode", + "tf.compat.v1.raw_ops.UnicodeScript": "tf.raw_ops.UnicodeScript", + "tf.compat.v1.raw_ops.UnicodeTranscode": "tf.raw_ops.UnicodeTranscode", + "tf.compat.v1.raw_ops.UniformCandidateSampler": "tf.raw_ops.UniformCandidateSampler", + "tf.compat.v1.raw_ops.Unique": "tf.raw_ops.Unique", + "tf.compat.v1.raw_ops.UniqueDataset": "tf.raw_ops.UniqueDataset", + "tf.compat.v1.raw_ops.UniqueV2": "tf.raw_ops.UniqueV2", + "tf.compat.v1.raw_ops.UniqueWithCounts": "tf.raw_ops.UniqueWithCounts", + "tf.compat.v1.raw_ops.UniqueWithCountsV2": "tf.raw_ops.UniqueWithCountsV2", + "tf.compat.v1.raw_ops.Unpack": "tf.raw_ops.Unpack", + "tf.compat.v1.raw_ops.UnravelIndex": "tf.raw_ops.UnravelIndex", + "tf.compat.v1.raw_ops.UnsortedSegmentJoin": "tf.raw_ops.UnsortedSegmentJoin", + "tf.compat.v1.raw_ops.UnsortedSegmentMax": "tf.raw_ops.UnsortedSegmentMax", + "tf.compat.v1.raw_ops.UnsortedSegmentMin": "tf.raw_ops.UnsortedSegmentMin", + "tf.compat.v1.raw_ops.UnsortedSegmentProd": "tf.raw_ops.UnsortedSegmentProd", + "tf.compat.v1.raw_ops.UnsortedSegmentSum": "tf.raw_ops.UnsortedSegmentSum", + "tf.compat.v1.raw_ops.Unstage": "tf.raw_ops.Unstage", + "tf.compat.v1.raw_ops.UnwrapDatasetVariant": "tf.raw_ops.UnwrapDatasetVariant", + "tf.compat.v1.raw_ops.UpperBound": "tf.raw_ops.UpperBound", + "tf.compat.v1.raw_ops.VarHandleOp": "tf.raw_ops.VarHandleOp", + "tf.compat.v1.raw_ops.VarIsInitializedOp": "tf.raw_ops.VarIsInitializedOp", + "tf.compat.v1.raw_ops.Variable": "tf.raw_ops.Variable", + "tf.compat.v1.raw_ops.VariableShape": "tf.raw_ops.VariableShape", + "tf.compat.v1.raw_ops.VariableV2": "tf.raw_ops.VariableV2", + 
"tf.compat.v1.raw_ops.Where": "tf.raw_ops.Where", + "tf.compat.v1.raw_ops.While": "tf.raw_ops.While", + "tf.compat.v1.raw_ops.WholeFileReader": "tf.raw_ops.WholeFileReader", + "tf.compat.v1.raw_ops.WholeFileReaderV2": "tf.raw_ops.WholeFileReaderV2", + "tf.compat.v1.raw_ops.WindowDataset": "tf.raw_ops.WindowDataset", + "tf.compat.v1.raw_ops.WorkerHeartbeat": "tf.raw_ops.WorkerHeartbeat", + "tf.compat.v1.raw_ops.WrapDatasetVariant": "tf.raw_ops.WrapDatasetVariant", + "tf.compat.v1.raw_ops.WriteAudioSummary": "tf.raw_ops.WriteAudioSummary", + "tf.compat.v1.raw_ops.WriteFile": "tf.raw_ops.WriteFile", + "tf.compat.v1.raw_ops.WriteGraphSummary": "tf.raw_ops.WriteGraphSummary", + "tf.compat.v1.raw_ops.WriteHistogramSummary": "tf.raw_ops.WriteHistogramSummary", + "tf.compat.v1.raw_ops.WriteImageSummary": "tf.raw_ops.WriteImageSummary", + "tf.compat.v1.raw_ops.WriteRawProtoSummary": "tf.raw_ops.WriteRawProtoSummary", + "tf.compat.v1.raw_ops.WriteScalarSummary": "tf.raw_ops.WriteScalarSummary", + "tf.compat.v1.raw_ops.WriteSummary": "tf.raw_ops.WriteSummary", + "tf.compat.v1.raw_ops.Xdivy": "tf.raw_ops.Xdivy", + "tf.compat.v1.raw_ops.Xlog1py": "tf.raw_ops.Xlog1py", + "tf.compat.v1.raw_ops.Xlogy": "tf.raw_ops.Xlogy", + "tf.compat.v1.raw_ops.ZerosLike": "tf.raw_ops.ZerosLike", + "tf.compat.v1.raw_ops.Zeta": "tf.raw_ops.Zeta", + "tf.compat.v1.raw_ops.ZipDataset": "tf.raw_ops.ZipDataset", + "tf.compat.v1.read_file": "tf.io.read_file", + "tf.compat.v1.real": "tf.math.real", + "tf.compat.v1.realdiv": "tf.realdiv", + "tf.compat.v1.reciprocal": "tf.math.reciprocal", + "tf.compat.v1.recompute_grad": "tf.recompute_grad", + "tf.compat.v1.regex_replace": "tf.strings.regex_replace", + "tf.compat.v1.register_tensor_conversion_function": "tf.register_tensor_conversion_function", + "tf.compat.v1.repeat": "tf.repeat", + "tf.compat.v1.required_space_to_batch_paddings": "tf.required_space_to_batch_paddings", + "tf.compat.v1.reshape": "tf.reshape", + "tf.compat.v1.resource": "tf.dtypes.resource", + "tf.compat.v1.reverse": "tf.reverse", + "tf.compat.v1.reverse_v2": "tf.reverse", + "tf.compat.v1.rint": "tf.math.rint", + "tf.compat.v1.roll": "tf.roll", + "tf.compat.v1.round": "tf.math.round", + "tf.compat.v1.rsqrt": "tf.math.rsqrt", + "tf.compat.v1.saturate_cast": "tf.dtypes.saturate_cast", + "tf.compat.v1.saved_model.Asset": "tf.saved_model.Asset", + "tf.compat.v1.saved_model.Asset.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.saved_model.Asset.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.saved_model.Asset.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.saved_model.Asset.__init__": "tf.saved_model.Asset.__init__", + "tf.compat.v1.saved_model.Asset.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.saved_model.Asset.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.saved_model.Asset.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.saved_model.Asset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.saved_model.Asset.asset_path": "tf.saved_model.Asset.asset_path", + "tf.compat.v1.saved_model.Builder.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.saved_model.Builder.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.saved_model.Builder.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.saved_model.Builder.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.saved_model.Builder.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.saved_model.Builder.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.saved_model.Builder.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.compat.v1.saved_model.SaveOptions": "tf.saved_model.SaveOptions", + "tf.compat.v1.saved_model.SaveOptions.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.saved_model.SaveOptions.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.saved_model.SaveOptions.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.saved_model.SaveOptions.__init__": "tf.saved_model.SaveOptions.__init__", + "tf.compat.v1.saved_model.SaveOptions.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.saved_model.SaveOptions.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.saved_model.SaveOptions.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.saved_model.SaveOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.saved_model.SaveOptions.function_aliases": "tf.saved_model.SaveOptions.function_aliases", + "tf.compat.v1.saved_model.SaveOptions.namespace_whitelist": "tf.saved_model.SaveOptions.namespace_whitelist", + "tf.compat.v1.saved_model.SaveOptions.save_debug_info": "tf.saved_model.SaveOptions.save_debug_info", + "tf.compat.v1.saved_model.builder.SavedModelBuilder": "tf.compat.v1.saved_model.Builder", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__init__": "tf.compat.v1.saved_model.Builder.__init__", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.add_meta_graph": "tf.compat.v1.saved_model.Builder.add_meta_graph", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables": "tf.compat.v1.saved_model.Builder.add_meta_graph_and_variables", + "tf.compat.v1.saved_model.builder.SavedModelBuilder.save": "tf.compat.v1.saved_model.Builder.save", + "tf.compat.v1.saved_model.experimental.save": "tf.saved_model.save", + "tf.compat.v1.saved_model.load_v2": "tf.saved_model.load", + "tf.compat.v1.saved_model.loader.load": "tf.compat.v1.saved_model.load", + "tf.compat.v1.saved_model.loader.maybe_saved_model_directory": "tf.compat.v1.saved_model.contains_saved_model", + "tf.compat.v1.saved_model.main_op.main_op_with_restore": "tf.compat.v1.saved_model.main_op_with_restore", + "tf.compat.v1.saved_model.maybe_saved_model_directory": "tf.compat.v1.saved_model.contains_saved_model", + "tf.compat.v1.saved_model.save": "tf.saved_model.save", + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.compat.v1.saved_model.signature_def_utils.build_signature_def": "tf.compat.v1.saved_model.build_signature_def", + "tf.compat.v1.saved_model.signature_def_utils.classification_signature_def": "tf.compat.v1.saved_model.classification_signature_def", + "tf.compat.v1.saved_model.signature_def_utils.is_valid_signature": "tf.compat.v1.saved_model.is_valid_signature", + "tf.compat.v1.saved_model.signature_def_utils.predict_signature_def": "tf.compat.v1.saved_model.predict_signature_def", + "tf.compat.v1.saved_model.signature_def_utils.regression_signature_def": "tf.compat.v1.saved_model.regression_signature_def", + "tf.compat.v1.saved_model.utils.build_tensor_info": "tf.compat.v1.saved_model.build_tensor_info", + "tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info": "tf.compat.v1.saved_model.get_tensor_from_tensor_info", + "tf.compat.v1.scatter_nd": "tf.scatter_nd", + "tf.compat.v1.searchsorted": "tf.searchsorted", + "tf.compat.v1.segment_max": "tf.math.segment_max", + "tf.compat.v1.segment_mean": "tf.math.segment_mean", + "tf.compat.v1.segment_min": "tf.math.segment_min", + "tf.compat.v1.segment_prod": "tf.math.segment_prod", + "tf.compat.v1.segment_sum": "tf.math.segment_sum", + "tf.compat.v1.self_adjoint_eig": "tf.linalg.eigh", + "tf.compat.v1.self_adjoint_eigvals": "tf.linalg.eigvalsh", + "tf.compat.v1.sequence_mask": "tf.sequence_mask", + "tf.compat.v1.serialize_tensor": "tf.io.serialize_tensor", + "tf.compat.v1.sets.difference": "tf.sets.difference", + "tf.compat.v1.sets.intersection": "tf.sets.intersection", + "tf.compat.v1.sets.set_difference": "tf.sets.difference", + "tf.compat.v1.sets.set_intersection": "tf.sets.intersection", + "tf.compat.v1.sets.set_size": "tf.sets.size", + "tf.compat.v1.sets.set_union": "tf.sets.union", + "tf.compat.v1.sets.size": "tf.sets.size", + "tf.compat.v1.sets.union": "tf.sets.union", + "tf.compat.v1.shape_n": "tf.shape_n", + "tf.compat.v1.sigmoid": "tf.math.sigmoid", + "tf.compat.v1.sign": "tf.math.sign", + "tf.compat.v1.signal.dct": "tf.signal.dct", + "tf.compat.v1.signal.fft": "tf.signal.fft", + "tf.compat.v1.signal.fft2d": "tf.signal.fft2d", + "tf.compat.v1.signal.fft3d": "tf.signal.fft3d", + "tf.compat.v1.signal.fftshift": "tf.signal.fftshift", + "tf.compat.v1.signal.frame": "tf.signal.frame", + "tf.compat.v1.signal.hamming_window": "tf.signal.hamming_window", + "tf.compat.v1.signal.hann_window": "tf.signal.hann_window", + "tf.compat.v1.signal.idct": "tf.signal.idct", + "tf.compat.v1.signal.ifft": "tf.signal.ifft", + "tf.compat.v1.signal.ifft2d": "tf.signal.ifft2d", + "tf.compat.v1.signal.ifft3d": "tf.signal.ifft3d", + "tf.compat.v1.signal.ifftshift": "tf.signal.ifftshift", + "tf.compat.v1.signal.inverse_mdct": "tf.signal.inverse_mdct", + "tf.compat.v1.signal.inverse_stft": "tf.signal.inverse_stft", + "tf.compat.v1.signal.inverse_stft_window_fn": "tf.signal.inverse_stft_window_fn", + "tf.compat.v1.signal.irfft": "tf.signal.irfft", + "tf.compat.v1.signal.irfft2d": "tf.signal.irfft2d", + "tf.compat.v1.signal.irfft3d": "tf.signal.irfft3d", + "tf.compat.v1.signal.kaiser_bessel_derived_window": "tf.signal.kaiser_bessel_derived_window", + "tf.compat.v1.signal.kaiser_window": "tf.signal.kaiser_window", + "tf.compat.v1.signal.linear_to_mel_weight_matrix": "tf.signal.linear_to_mel_weight_matrix", + "tf.compat.v1.signal.mdct": "tf.signal.mdct", + "tf.compat.v1.signal.mfccs_from_log_mel_spectrograms": "tf.signal.mfccs_from_log_mel_spectrograms", + "tf.compat.v1.signal.overlap_and_add": "tf.signal.overlap_and_add", + "tf.compat.v1.signal.rfft": 
"tf.signal.rfft", + "tf.compat.v1.signal.rfft2d": "tf.signal.rfft2d", + "tf.compat.v1.signal.rfft3d": "tf.signal.rfft3d", + "tf.compat.v1.signal.stft": "tf.signal.stft", + "tf.compat.v1.signal.vorbis_window": "tf.signal.vorbis_window", + "tf.compat.v1.sin": "tf.math.sin", + "tf.compat.v1.sinh": "tf.math.sinh", + "tf.compat.v1.slice": "tf.slice", + "tf.compat.v1.sort": "tf.sort", + "tf.compat.v1.space_to_batch_nd": "tf.space_to_batch_nd", + "tf.compat.v1.sparse.SparseConditionalAccumulator": "tf.compat.v1.SparseConditionalAccumulator", + "tf.compat.v1.sparse.SparseConditionalAccumulator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.sparse.SparseConditionalAccumulator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.sparse.SparseConditionalAccumulator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.sparse.SparseConditionalAccumulator.__init__": "tf.compat.v1.SparseConditionalAccumulator.__init__", + "tf.compat.v1.sparse.SparseConditionalAccumulator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.sparse.SparseConditionalAccumulator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.sparse.SparseConditionalAccumulator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.sparse.SparseConditionalAccumulator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.sparse.SparseConditionalAccumulator.accumulator_ref": "tf.compat.v1.ConditionalAccumulatorBase.accumulator_ref", + "tf.compat.v1.sparse.SparseConditionalAccumulator.apply_grad": "tf.compat.v1.SparseConditionalAccumulator.apply_grad", + "tf.compat.v1.sparse.SparseConditionalAccumulator.apply_indexed_slices_grad": "tf.compat.v1.SparseConditionalAccumulator.apply_indexed_slices_grad", + "tf.compat.v1.sparse.SparseConditionalAccumulator.dtype": "tf.compat.v1.ConditionalAccumulatorBase.dtype", + "tf.compat.v1.sparse.SparseConditionalAccumulator.name": "tf.compat.v1.ConditionalAccumulatorBase.name", + "tf.compat.v1.sparse.SparseConditionalAccumulator.num_accumulated": "tf.compat.v1.SparseConditionalAccumulator.num_accumulated", + "tf.compat.v1.sparse.SparseConditionalAccumulator.set_global_step": "tf.compat.v1.SparseConditionalAccumulator.set_global_step", + "tf.compat.v1.sparse.SparseConditionalAccumulator.take_grad": "tf.compat.v1.SparseConditionalAccumulator.take_grad", + "tf.compat.v1.sparse.SparseConditionalAccumulator.take_indexed_slices_grad": "tf.compat.v1.SparseConditionalAccumulator.take_indexed_slices_grad", + "tf.compat.v1.sparse.SparseTensor": "tf.sparse.SparseTensor", + "tf.compat.v1.sparse.SparseTensor.__div__": "tf.sparse.SparseTensor.__div__", + "tf.compat.v1.sparse.SparseTensor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.sparse.SparseTensor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.sparse.SparseTensor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.sparse.SparseTensor.__init__": "tf.sparse.SparseTensor.__init__", + "tf.compat.v1.sparse.SparseTensor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.sparse.SparseTensor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.sparse.SparseTensor.__mul__": "tf.sparse.SparseTensor.__mul__", + "tf.compat.v1.sparse.SparseTensor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.sparse.SparseTensor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.sparse.SparseTensor.__truediv__": "tf.sparse.SparseTensor.__truediv__", + "tf.compat.v1.sparse.SparseTensor.consumers": "tf.sparse.SparseTensor.consumers", + "tf.compat.v1.sparse.SparseTensor.dense_shape": "tf.sparse.SparseTensor.dense_shape", + "tf.compat.v1.sparse.SparseTensor.dtype": 
"tf.sparse.SparseTensor.dtype", + "tf.compat.v1.sparse.SparseTensor.eval": "tf.sparse.SparseTensor.eval", + "tf.compat.v1.sparse.SparseTensor.get_shape": "tf.sparse.SparseTensor.get_shape", + "tf.compat.v1.sparse.SparseTensor.graph": "tf.sparse.SparseTensor.graph", + "tf.compat.v1.sparse.SparseTensor.indices": "tf.sparse.SparseTensor.indices", + "tf.compat.v1.sparse.SparseTensor.op": "tf.sparse.SparseTensor.op", + "tf.compat.v1.sparse.SparseTensor.shape": "tf.sparse.SparseTensor.shape", + "tf.compat.v1.sparse.SparseTensor.values": "tf.sparse.SparseTensor.values", + "tf.compat.v1.sparse.add": "tf.compat.v1.sparse_add", + "tf.compat.v1.sparse.concat": "tf.compat.v1.sparse_concat", + "tf.compat.v1.sparse.cross": "tf.sparse.cross", + "tf.compat.v1.sparse.cross_hashed": "tf.sparse.cross_hashed", + "tf.compat.v1.sparse.expand_dims": "tf.sparse.expand_dims", + "tf.compat.v1.sparse.eye": "tf.sparse.eye", + "tf.compat.v1.sparse.fill_empty_rows": "tf.sparse.fill_empty_rows", + "tf.compat.v1.sparse.from_dense": "tf.sparse.from_dense", + "tf.compat.v1.sparse.mask": "tf.sparse.mask", + "tf.compat.v1.sparse.matmul": "tf.sparse.sparse_dense_matmul", + "tf.compat.v1.sparse.maximum": "tf.sparse.maximum", + "tf.compat.v1.sparse.merge": "tf.compat.v1.sparse_merge", + "tf.compat.v1.sparse.minimum": "tf.sparse.minimum", + "tf.compat.v1.sparse.placeholder": "tf.compat.v1.sparse_placeholder", + "tf.compat.v1.sparse.reduce_max": "tf.compat.v1.sparse_reduce_max", + "tf.compat.v1.sparse.reduce_max_sparse": "tf.compat.v1.sparse_reduce_max_sparse", + "tf.compat.v1.sparse.reduce_sum": "tf.compat.v1.sparse_reduce_sum", + "tf.compat.v1.sparse.reduce_sum_sparse": "tf.compat.v1.sparse_reduce_sum_sparse", + "tf.compat.v1.sparse.reorder": "tf.sparse.reorder", + "tf.compat.v1.sparse.reset_shape": "tf.sparse.reset_shape", + "tf.compat.v1.sparse.reshape": "tf.sparse.reshape", + "tf.compat.v1.sparse.retain": "tf.sparse.retain", + "tf.compat.v1.sparse.segment_mean": "tf.compat.v1.sparse_segment_mean", + "tf.compat.v1.sparse.segment_sqrt_n": "tf.compat.v1.sparse_segment_sqrt_n", + "tf.compat.v1.sparse.segment_sum": "tf.compat.v1.sparse_segment_sum", + "tf.compat.v1.sparse.slice": "tf.sparse.slice", + "tf.compat.v1.sparse.softmax": "tf.sparse.softmax", + "tf.compat.v1.sparse.sparse_dense_matmul": "tf.sparse.sparse_dense_matmul", + "tf.compat.v1.sparse.split": "tf.compat.v1.sparse_split", + "tf.compat.v1.sparse.to_dense": "tf.sparse.to_dense", + "tf.compat.v1.sparse.to_indicator": "tf.sparse.to_indicator", + "tf.compat.v1.sparse.transpose": "tf.sparse.transpose", + "tf.compat.v1.sparse_fill_empty_rows": "tf.sparse.fill_empty_rows", + "tf.compat.v1.sparse_mask": "tf.sparse.mask", + "tf.compat.v1.sparse_maximum": "tf.sparse.maximum", + "tf.compat.v1.sparse_minimum": "tf.sparse.minimum", + "tf.compat.v1.sparse_reorder": "tf.sparse.reorder", + "tf.compat.v1.sparse_reset_shape": "tf.sparse.reset_shape", + "tf.compat.v1.sparse_reshape": "tf.sparse.reshape", + "tf.compat.v1.sparse_retain": "tf.sparse.retain", + "tf.compat.v1.sparse_slice": "tf.sparse.slice", + "tf.compat.v1.sparse_softmax": "tf.sparse.softmax", + "tf.compat.v1.sparse_tensor_dense_matmul": "tf.sparse.sparse_dense_matmul", + "tf.compat.v1.sparse_tensor_to_dense": "tf.sparse.to_dense", + "tf.compat.v1.sparse_to_indicator": "tf.sparse.to_indicator", + "tf.compat.v1.sparse_transpose": "tf.sparse.transpose", + "tf.compat.v1.spectral.dct": "tf.signal.dct", + "tf.compat.v1.spectral.fft": "tf.signal.fft", + "tf.compat.v1.spectral.fft2d": "tf.signal.fft2d", + 
"tf.compat.v1.spectral.fft3d": "tf.signal.fft3d", + "tf.compat.v1.spectral.idct": "tf.signal.idct", + "tf.compat.v1.spectral.ifft": "tf.signal.ifft", + "tf.compat.v1.spectral.ifft2d": "tf.signal.ifft2d", + "tf.compat.v1.spectral.ifft3d": "tf.signal.ifft3d", + "tf.compat.v1.spectral.irfft": "tf.signal.irfft", + "tf.compat.v1.spectral.irfft2d": "tf.signal.irfft2d", + "tf.compat.v1.spectral.irfft3d": "tf.signal.irfft3d", + "tf.compat.v1.spectral.rfft": "tf.signal.rfft", + "tf.compat.v1.spectral.rfft2d": "tf.signal.rfft2d", + "tf.compat.v1.spectral.rfft3d": "tf.signal.rfft3d", + "tf.compat.v1.split": "tf.split", + "tf.compat.v1.sqrt": "tf.math.sqrt", + "tf.compat.v1.square": "tf.math.square", + "tf.compat.v1.squared_difference": "tf.math.squared_difference", + "tf.compat.v1.stack": "tf.stack", + "tf.compat.v1.stop_gradient": "tf.stop_gradient", + "tf.compat.v1.strided_slice": "tf.strided_slice", + "tf.compat.v1.string": "tf.dtypes.string", + "tf.compat.v1.string_join": "tf.strings.join", + "tf.compat.v1.string_strip": "tf.strings.strip", + "tf.compat.v1.string_to_hash_bucket_fast": "tf.strings.to_hash_bucket_fast", + "tf.compat.v1.string_to_hash_bucket_strong": "tf.strings.to_hash_bucket_strong", + "tf.compat.v1.strings.as_string": "tf.strings.as_string", + "tf.compat.v1.strings.bytes_split": "tf.strings.bytes_split", + "tf.compat.v1.strings.format": "tf.strings.format", + "tf.compat.v1.strings.join": "tf.strings.join", + "tf.compat.v1.strings.lower": "tf.strings.lower", + "tf.compat.v1.strings.ngrams": "tf.strings.ngrams", + "tf.compat.v1.strings.reduce_join": "tf.compat.v1.reduce_join", + "tf.compat.v1.strings.regex_full_match": "tf.strings.regex_full_match", + "tf.compat.v1.strings.regex_replace": "tf.strings.regex_replace", + "tf.compat.v1.strings.strip": "tf.strings.strip", + "tf.compat.v1.strings.to_hash_bucket": "tf.compat.v1.string_to_hash_bucket", + "tf.compat.v1.strings.to_hash_bucket_fast": "tf.strings.to_hash_bucket_fast", + "tf.compat.v1.strings.to_hash_bucket_strong": "tf.strings.to_hash_bucket_strong", + "tf.compat.v1.strings.to_number": "tf.compat.v1.string_to_number", + "tf.compat.v1.strings.unicode_decode": "tf.strings.unicode_decode", + "tf.compat.v1.strings.unicode_decode_with_offsets": "tf.strings.unicode_decode_with_offsets", + "tf.compat.v1.strings.unicode_encode": "tf.strings.unicode_encode", + "tf.compat.v1.strings.unicode_script": "tf.strings.unicode_script", + "tf.compat.v1.strings.unicode_split": "tf.strings.unicode_split", + "tf.compat.v1.strings.unicode_split_with_offsets": "tf.strings.unicode_split_with_offsets", + "tf.compat.v1.strings.unicode_transcode": "tf.strings.unicode_transcode", + "tf.compat.v1.strings.unsorted_segment_join": "tf.strings.unsorted_segment_join", + "tf.compat.v1.strings.upper": "tf.strings.upper", + "tf.compat.v1.subtract": "tf.math.subtract", + "tf.compat.v1.summary.Event": "tf.compat.v1.Event", + "tf.compat.v1.summary.Event.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.summary.Event.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.summary.Event.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.summary.Event.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.summary.Event.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.summary.Event.DESCRIPTOR": "tf.compat.v1.Event.DESCRIPTOR", + "tf.compat.v1.summary.Event.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.summary.Event.Extensions": "tf.train.BytesList.Extensions", + 
"tf.compat.v1.summary.Event.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.summary.Event.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.summary.Event.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.summary.Event.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.summary.Event.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.summary.Event.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.summary.Event.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.summary.Event.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.summary.Event.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.summary.Event.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.summary.Event.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.summary.Event.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.summary.Event.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.summary.Event.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.summary.Event.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.summary.Event.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.summary.Event.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.summary.Event.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.summary.Event.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.summary.Event.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.summary.Event.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.summary.FileWriter.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.summary.FileWriter.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.summary.FileWriter.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.summary.FileWriter.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.summary.FileWriter.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.summary.FileWriter.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.summary.FileWriter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.summary.FileWriterCache.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.summary.FileWriterCache.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.summary.FileWriterCache.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.summary.FileWriterCache.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.summary.FileWriterCache.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.summary.FileWriterCache.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.summary.FileWriterCache.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.summary.FileWriterCache.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.summary.SessionLog": "tf.compat.v1.SessionLog", + "tf.compat.v1.summary.SessionLog.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.summary.SessionLog.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.summary.SessionLog.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.summary.SessionLog.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.summary.SessionLog.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.summary.SessionLog.DESCRIPTOR": "tf.compat.v1.SessionLog.DESCRIPTOR", + "tf.compat.v1.summary.SessionLog.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.summary.SessionLog.Extensions": "tf.train.BytesList.Extensions", + 
"tf.compat.v1.summary.SessionLog.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.summary.SessionLog.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.summary.SessionLog.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.summary.SessionLog.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.summary.SessionLog.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.summary.SessionLog.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.summary.SessionLog.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.summary.SessionLog.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.summary.SessionLog.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.summary.SessionLog.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.summary.SessionLog.SessionStatus": "tf.compat.v1.SessionLog.SessionStatus", + "tf.compat.v1.summary.SessionLog.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.summary.SessionLog.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.summary.SessionLog.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.summary.SessionLog.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.summary.SessionLog.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.summary.SessionLog.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.summary.SessionLog.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.summary.SessionLog.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.summary.SessionLog.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.summary.SessionLog.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.summary.SessionLog.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.summary.Summary": "tf.compat.v1.Summary", + "tf.compat.v1.summary.Summary.Audio": "tf.compat.v1.Summary.Audio", + "tf.compat.v1.summary.Summary.Audio.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.summary.Summary.Audio.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.summary.Summary.Audio.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.summary.Summary.Audio.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.summary.Summary.Audio.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.summary.Summary.Audio.DESCRIPTOR": "tf.compat.v1.Summary.Audio.DESCRIPTOR", + "tf.compat.v1.summary.Summary.Audio.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.summary.Summary.Audio.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.summary.Summary.Audio.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.summary.Summary.Audio.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.summary.Summary.Audio.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.summary.Summary.Audio.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.summary.Summary.Audio.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.summary.Summary.Audio.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.summary.Summary.Audio.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.summary.Summary.Audio.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.summary.Summary.Audio.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.summary.Summary.Audio.SerializeToString": 
"tf.train.BytesList.SerializeToString", + "tf.compat.v1.summary.Summary.Audio.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.summary.Summary.Audio.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.summary.Summary.Audio.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.summary.Summary.Audio.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.summary.Summary.Audio.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.summary.Summary.Audio.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.summary.Summary.Audio.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.summary.Summary.Audio.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.summary.Summary.Audio.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.summary.Summary.Audio.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.summary.Summary.Audio.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.summary.Summary.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.summary.Summary.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.summary.Summary.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.summary.Summary.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.summary.Summary.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.summary.Summary.DESCRIPTOR": "tf.compat.v1.Summary.DESCRIPTOR", + "tf.compat.v1.summary.Summary.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.summary.Summary.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.summary.Summary.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.summary.Summary.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.summary.Summary.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.summary.Summary.Image": "tf.compat.v1.Summary.Image", + "tf.compat.v1.summary.Summary.Image.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.summary.Summary.Image.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.summary.Summary.Image.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.summary.Summary.Image.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.summary.Summary.Image.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.summary.Summary.Image.DESCRIPTOR": "tf.compat.v1.Summary.Image.DESCRIPTOR", + "tf.compat.v1.summary.Summary.Image.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.summary.Summary.Image.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.summary.Summary.Image.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.summary.Summary.Image.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.summary.Summary.Image.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.summary.Summary.Image.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.summary.Summary.Image.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.summary.Summary.Image.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.summary.Summary.Image.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.summary.Summary.Image.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.summary.Summary.Image.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.summary.Summary.Image.SerializeToString": "tf.train.BytesList.SerializeToString", + 
"tf.compat.v1.summary.Summary.Image.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.summary.Summary.Image.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.summary.Summary.Image.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.summary.Summary.Image.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.summary.Summary.Image.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.summary.Summary.Image.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.summary.Summary.Image.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.summary.Summary.Image.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.summary.Summary.Image.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.summary.Summary.Image.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.summary.Summary.Image.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.summary.Summary.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.summary.Summary.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.summary.Summary.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.summary.Summary.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.summary.Summary.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.summary.Summary.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.summary.Summary.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.summary.Summary.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.summary.Summary.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.summary.Summary.Value": "tf.compat.v1.Summary.Value", + "tf.compat.v1.summary.Summary.Value.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.summary.Summary.Value.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.summary.Summary.Value.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.summary.Summary.Value.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.summary.Summary.Value.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.summary.Summary.Value.DESCRIPTOR": "tf.compat.v1.Summary.Value.DESCRIPTOR", + "tf.compat.v1.summary.Summary.Value.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.summary.Summary.Value.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.summary.Summary.Value.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.summary.Summary.Value.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.summary.Summary.Value.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.summary.Summary.Value.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.summary.Summary.Value.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.summary.Summary.Value.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.summary.Summary.Value.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.summary.Summary.Value.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.summary.Summary.Value.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.summary.Summary.Value.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.summary.Summary.Value.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.summary.Summary.Value.UnknownFields": "tf.train.BytesList.UnknownFields", + 
"tf.compat.v1.summary.Summary.Value.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.summary.Summary.Value.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.summary.Summary.Value.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.summary.Summary.Value.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.summary.Summary.Value.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.summary.Summary.Value.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.summary.Summary.Value.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.summary.Summary.Value.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.summary.Summary.Value.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.summary.Summary.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.summary.Summary.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.summary.Summary.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.summary.Summary.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.summary.Summary.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.summary.Summary.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.summary.Summary.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.summary.Summary.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.summary.Summary.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.summary.SummaryDescription.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.summary.SummaryDescription.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.summary.SummaryDescription.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.summary.SummaryDescription.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.summary.SummaryDescription.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.summary.SummaryDescription.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.summary.SummaryDescription.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.summary.SummaryDescription.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.summary.SummaryDescription.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.summary.SummaryDescription.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.summary.SummaryDescription.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.summary.SummaryDescription.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.summary.SummaryDescription.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.summary.SummaryDescription.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.summary.SummaryDescription.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.summary.SummaryDescription.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.summary.SummaryDescription.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.summary.SummaryDescription.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.summary.SummaryDescription.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.summary.SummaryDescription.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.summary.SummaryDescription.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.summary.SummaryDescription.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.summary.SummaryDescription.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.summary.SummaryDescription.__init__": 
"tf.train.BytesList.__init__", + "tf.compat.v1.summary.SummaryDescription.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.summary.SummaryDescription.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.summary.SummaryDescription.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.summary.SummaryDescription.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.summary.TaggedRunMetadata.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.summary.TaggedRunMetadata.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.summary.TaggedRunMetadata.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.summary.TaggedRunMetadata.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.summary.TaggedRunMetadata.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.summary.TaggedRunMetadata.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.summary.TaggedRunMetadata.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.summary.TaggedRunMetadata.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.summary.TaggedRunMetadata.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.summary.TaggedRunMetadata.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.summary.TaggedRunMetadata.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.summary.TaggedRunMetadata.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.summary.TaggedRunMetadata.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.summary.TaggedRunMetadata.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.summary.TaggedRunMetadata.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.summary.TaggedRunMetadata.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.summary.TaggedRunMetadata.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.summary.TaggedRunMetadata.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.summary.TaggedRunMetadata.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.summary.TaggedRunMetadata.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.summary.TaggedRunMetadata.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.summary.TaggedRunMetadata.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.summary.TaggedRunMetadata.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.summary.TaggedRunMetadata.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.summary.TaggedRunMetadata.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.summary.TaggedRunMetadata.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.summary.TaggedRunMetadata.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.summary.TaggedRunMetadata.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.svd": "tf.linalg.svd", + "tf.compat.v1.switch_case": "tf.switch_case", + "tf.compat.v1.sysconfig.get_compile_flags": "tf.sysconfig.get_compile_flags", + "tf.compat.v1.sysconfig.get_include": "tf.sysconfig.get_include", + "tf.compat.v1.sysconfig.get_lib": "tf.sysconfig.get_lib", + "tf.compat.v1.sysconfig.get_link_flags": "tf.sysconfig.get_link_flags", + "tf.compat.v1.tan": "tf.math.tan", + "tf.compat.v1.tanh": "tf.math.tanh", + "tf.compat.v1.tensor_scatter_add": "tf.tensor_scatter_nd_add", + "tf.compat.v1.tensor_scatter_nd_add": "tf.tensor_scatter_nd_add", + "tf.compat.v1.tensor_scatter_nd_sub": "tf.tensor_scatter_nd_sub", + 
"tf.compat.v1.tensor_scatter_nd_update": "tf.tensor_scatter_nd_update", + "tf.compat.v1.tensor_scatter_sub": "tf.tensor_scatter_nd_sub", + "tf.compat.v1.tensor_scatter_update": "tf.tensor_scatter_nd_update", + "tf.compat.v1.tensordot": "tf.tensordot", + "tf.compat.v1.test.Benchmark": "tf.test.Benchmark", + "tf.compat.v1.test.Benchmark.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.test.Benchmark.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.test.Benchmark.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.test.Benchmark.__init__": "tf.test.Benchmark.__init__", + "tf.compat.v1.test.Benchmark.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.test.Benchmark.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.test.Benchmark.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.test.Benchmark.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.test.Benchmark.evaluate": "tf.test.Benchmark.evaluate", + "tf.compat.v1.test.Benchmark.report_benchmark": "tf.test.Benchmark.report_benchmark", + "tf.compat.v1.test.Benchmark.run_op_benchmark": "tf.test.Benchmark.run_op_benchmark", + "tf.compat.v1.test.StubOutForTesting.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.test.StubOutForTesting.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.test.StubOutForTesting.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.test.StubOutForTesting.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.test.StubOutForTesting.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.test.StubOutForTesting.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.test.StubOutForTesting.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.test.TestCase": "tf.test.TestCase", + "tf.compat.v1.test.TestCase.__call__": "tf.test.TestCase.__call__", + "tf.compat.v1.test.TestCase.__eq__": "tf.test.TestCase.__eq__", + "tf.compat.v1.test.TestCase.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.test.TestCase.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.test.TestCase.__init__": "tf.test.TestCase.__init__", + "tf.compat.v1.test.TestCase.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.test.TestCase.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.test.TestCase.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.test.TestCase.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.test.TestCase.addCleanup": "tf.test.TestCase.addCleanup", + "tf.compat.v1.test.TestCase.addTypeEqualityFunc": "tf.test.TestCase.addTypeEqualityFunc", + "tf.compat.v1.test.TestCase.assertAllClose": "tf.test.TestCase.assertAllClose", + "tf.compat.v1.test.TestCase.assertAllCloseAccordingToType": "tf.test.TestCase.assertAllCloseAccordingToType", + "tf.compat.v1.test.TestCase.assertAllEqual": "tf.test.TestCase.assertAllEqual", + "tf.compat.v1.test.TestCase.assertAllGreater": "tf.test.TestCase.assertAllGreater", + "tf.compat.v1.test.TestCase.assertAllGreaterEqual": "tf.test.TestCase.assertAllGreaterEqual", + "tf.compat.v1.test.TestCase.assertAllInRange": "tf.test.TestCase.assertAllInRange", + "tf.compat.v1.test.TestCase.assertAllInSet": "tf.test.TestCase.assertAllInSet", + "tf.compat.v1.test.TestCase.assertAllLess": "tf.test.TestCase.assertAllLess", + "tf.compat.v1.test.TestCase.assertAllLessEqual": "tf.test.TestCase.assertAllLessEqual", + "tf.compat.v1.test.TestCase.assertAlmostEqual": "tf.test.TestCase.assertAlmostEqual", + "tf.compat.v1.test.TestCase.assertAlmostEquals": "tf.test.TestCase.assertAlmostEquals", + "tf.compat.v1.test.TestCase.assertArrayNear": "tf.test.TestCase.assertArrayNear", + "tf.compat.v1.test.TestCase.assertBetween": 
"tf.test.TestCase.assertBetween", + "tf.compat.v1.test.TestCase.assertCommandFails": "tf.test.TestCase.assertCommandFails", + "tf.compat.v1.test.TestCase.assertCommandSucceeds": "tf.test.TestCase.assertCommandSucceeds", + "tf.compat.v1.test.TestCase.assertContainsExactSubsequence": "tf.test.TestCase.assertContainsExactSubsequence", + "tf.compat.v1.test.TestCase.assertContainsInOrder": "tf.test.TestCase.assertContainsInOrder", + "tf.compat.v1.test.TestCase.assertContainsSubsequence": "tf.test.TestCase.assertContainsSubsequence", + "tf.compat.v1.test.TestCase.assertContainsSubset": "tf.test.TestCase.assertContainsSubset", + "tf.compat.v1.test.TestCase.assertCountEqual": "tf.test.TestCase.assertItemsEqual", + "tf.compat.v1.test.TestCase.assertDTypeEqual": "tf.test.TestCase.assertDTypeEqual", + "tf.compat.v1.test.TestCase.assertDeviceEqual": "tf.test.TestCase.assertDeviceEqual", + "tf.compat.v1.test.TestCase.assertDictContainsSubset": "tf.test.TestCase.assertDictContainsSubset", + "tf.compat.v1.test.TestCase.assertDictEqual": "tf.test.TestCase.assertDictEqual", + "tf.compat.v1.test.TestCase.assertEmpty": "tf.test.TestCase.assertEmpty", + "tf.compat.v1.test.TestCase.assertEndsWith": "tf.test.TestCase.assertEndsWith", + "tf.compat.v1.test.TestCase.assertEqual": "tf.test.TestCase.assertEqual", + "tf.compat.v1.test.TestCase.assertEquals": "tf.test.TestCase.assertEquals", + "tf.compat.v1.test.TestCase.assertFalse": "tf.test.TestCase.assertFalse", + "tf.compat.v1.test.TestCase.assertGreater": "tf.test.TestCase.assertGreater", + "tf.compat.v1.test.TestCase.assertGreaterEqual": "tf.test.TestCase.assertGreaterEqual", + "tf.compat.v1.test.TestCase.assertIn": "tf.test.TestCase.assertIn", + "tf.compat.v1.test.TestCase.assertIs": "tf.test.TestCase.assertIs", + "tf.compat.v1.test.TestCase.assertIsInstance": "tf.test.TestCase.assertIsInstance", + "tf.compat.v1.test.TestCase.assertIsNone": "tf.test.TestCase.assertIsNone", + "tf.compat.v1.test.TestCase.assertIsNot": "tf.test.TestCase.assertIsNot", + "tf.compat.v1.test.TestCase.assertIsNotNone": "tf.test.TestCase.assertIsNotNone", + "tf.compat.v1.test.TestCase.assertItemsEqual": "tf.test.TestCase.assertItemsEqual", + "tf.compat.v1.test.TestCase.assertJsonEqual": "tf.test.TestCase.assertJsonEqual", + "tf.compat.v1.test.TestCase.assertLen": "tf.test.TestCase.assertLen", + "tf.compat.v1.test.TestCase.assertLess": "tf.test.TestCase.assertLess", + "tf.compat.v1.test.TestCase.assertLessEqual": "tf.test.TestCase.assertLessEqual", + "tf.compat.v1.test.TestCase.assertListEqual": "tf.test.TestCase.assertListEqual", + "tf.compat.v1.test.TestCase.assertLogs": "tf.test.TestCase.assertLogs", + "tf.compat.v1.test.TestCase.assertMultiLineEqual": "tf.test.TestCase.assertMultiLineEqual", + "tf.compat.v1.test.TestCase.assertNDArrayNear": "tf.test.TestCase.assertNDArrayNear", + "tf.compat.v1.test.TestCase.assertNear": "tf.test.TestCase.assertNear", + "tf.compat.v1.test.TestCase.assertNoCommonElements": "tf.test.TestCase.assertNoCommonElements", + "tf.compat.v1.test.TestCase.assertNotAllClose": "tf.test.TestCase.assertNotAllClose", + "tf.compat.v1.test.TestCase.assertNotAllEqual": "tf.test.TestCase.assertNotAllEqual", + "tf.compat.v1.test.TestCase.assertNotAlmostEqual": "tf.test.TestCase.assertNotAlmostEqual", + "tf.compat.v1.test.TestCase.assertNotAlmostEquals": "tf.test.TestCase.assertNotAlmostEquals", + "tf.compat.v1.test.TestCase.assertNotEmpty": "tf.test.TestCase.assertNotEmpty", + "tf.compat.v1.test.TestCase.assertNotEndsWith": "tf.test.TestCase.assertNotEndsWith", + 
"tf.compat.v1.test.TestCase.assertNotEqual": "tf.test.TestCase.assertNotEqual", + "tf.compat.v1.test.TestCase.assertNotEquals": "tf.test.TestCase.assertNotEquals", + "tf.compat.v1.test.TestCase.assertNotIn": "tf.test.TestCase.assertNotIn", + "tf.compat.v1.test.TestCase.assertNotIsInstance": "tf.test.TestCase.assertNotIsInstance", + "tf.compat.v1.test.TestCase.assertNotRegex": "tf.test.TestCase.assertNotRegex", + "tf.compat.v1.test.TestCase.assertNotRegexpMatches": "tf.test.TestCase.assertNotRegexpMatches", + "tf.compat.v1.test.TestCase.assertNotStartsWith": "tf.test.TestCase.assertNotStartsWith", + "tf.compat.v1.test.TestCase.assertProtoEquals": "tf.test.TestCase.assertProtoEquals", + "tf.compat.v1.test.TestCase.assertProtoEqualsVersion": "tf.test.TestCase.assertProtoEqualsVersion", + "tf.compat.v1.test.TestCase.assertRaises": "tf.test.TestCase.assertRaises", + "tf.compat.v1.test.TestCase.assertRaisesOpError": "tf.test.TestCase.assertRaisesOpError", + "tf.compat.v1.test.TestCase.assertRaisesRegex": "tf.test.TestCase.assertRaisesRegexp", + "tf.compat.v1.test.TestCase.assertRaisesRegexp": "tf.test.TestCase.assertRaisesRegexp", + "tf.compat.v1.test.TestCase.assertRaisesWithLiteralMatch": "tf.test.TestCase.assertRaisesWithLiteralMatch", + "tf.compat.v1.test.TestCase.assertRaisesWithPredicateMatch": "tf.test.TestCase.assertRaisesWithPredicateMatch", + "tf.compat.v1.test.TestCase.assertRegex": "tf.test.TestCase.assertRegex", + "tf.compat.v1.test.TestCase.assertRegexMatch": "tf.test.TestCase.assertRegexMatch", + "tf.compat.v1.test.TestCase.assertRegexpMatches": "tf.test.TestCase.assertRegexpMatches", + "tf.compat.v1.test.TestCase.assertSameElements": "tf.test.TestCase.assertSameElements", + "tf.compat.v1.test.TestCase.assertSameStructure": "tf.test.TestCase.assertSameStructure", + "tf.compat.v1.test.TestCase.assertSequenceAlmostEqual": "tf.test.TestCase.assertSequenceAlmostEqual", + "tf.compat.v1.test.TestCase.assertSequenceEqual": "tf.test.TestCase.assertSequenceEqual", + "tf.compat.v1.test.TestCase.assertSequenceStartsWith": "tf.test.TestCase.assertSequenceStartsWith", + "tf.compat.v1.test.TestCase.assertSetEqual": "tf.test.TestCase.assertSetEqual", + "tf.compat.v1.test.TestCase.assertShapeEqual": "tf.test.TestCase.assertShapeEqual", + "tf.compat.v1.test.TestCase.assertStartsWith": "tf.test.TestCase.assertStartsWith", + "tf.compat.v1.test.TestCase.assertTotallyOrdered": "tf.test.TestCase.assertTotallyOrdered", + "tf.compat.v1.test.TestCase.assertTrue": "tf.test.TestCase.assertTrue", + "tf.compat.v1.test.TestCase.assertTupleEqual": "tf.test.TestCase.assertTupleEqual", + "tf.compat.v1.test.TestCase.assertUrlEqual": "tf.test.TestCase.assertUrlEqual", + "tf.compat.v1.test.TestCase.assertWarns": "tf.test.TestCase.assertWarns", + "tf.compat.v1.test.TestCase.assertWarnsRegex": "tf.test.TestCase.assertWarnsRegex", + "tf.compat.v1.test.TestCase.assert_": "tf.test.TestCase.assert_", + "tf.compat.v1.test.TestCase.cached_session": "tf.test.TestCase.cached_session", + "tf.compat.v1.test.TestCase.captureWritesToStream": "tf.test.TestCase.captureWritesToStream", + "tf.compat.v1.test.TestCase.checkedThread": "tf.test.TestCase.checkedThread", + "tf.compat.v1.test.TestCase.countTestCases": "tf.test.TestCase.countTestCases", + "tf.compat.v1.test.TestCase.create_tempdir": "tf.test.TestCase.create_tempdir", + "tf.compat.v1.test.TestCase.create_tempfile": "tf.test.TestCase.create_tempfile", + "tf.compat.v1.test.TestCase.debug": "tf.test.TestCase.debug", + "tf.compat.v1.test.TestCase.defaultTestResult": 
"tf.test.TestCase.defaultTestResult", + "tf.compat.v1.test.TestCase.doCleanups": "tf.test.TestCase.doCleanups", + "tf.compat.v1.test.TestCase.enter_context": "tf.test.TestCase.enter_context", + "tf.compat.v1.test.TestCase.evaluate": "tf.test.TestCase.evaluate", + "tf.compat.v1.test.TestCase.fail": "tf.test.TestCase.fail", + "tf.compat.v1.test.TestCase.failIf": "tf.test.TestCase.failIf", + "tf.compat.v1.test.TestCase.failIfAlmostEqual": "tf.test.TestCase.assertNotAlmostEquals", + "tf.compat.v1.test.TestCase.failIfEqual": "tf.test.TestCase.assertNotEquals", + "tf.compat.v1.test.TestCase.failUnless": "tf.test.TestCase.assert_", + "tf.compat.v1.test.TestCase.failUnlessAlmostEqual": "tf.test.TestCase.assertAlmostEquals", + "tf.compat.v1.test.TestCase.failUnlessEqual": "tf.test.TestCase.assertEquals", + "tf.compat.v1.test.TestCase.failUnlessRaises": "tf.test.TestCase.failUnlessRaises", + "tf.compat.v1.test.TestCase.failureException": "tf.test.TestCase.failureException", + "tf.compat.v1.test.TestCase.failureException.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.test.TestCase.failureException.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.test.TestCase.failureException.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.test.TestCase.failureException.__init__": "tf.test.TestCase.failureException.__init__", + "tf.compat.v1.test.TestCase.failureException.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.test.TestCase.failureException.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.test.TestCase.failureException.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.test.TestCase.failureException.__new__": "tf.test.TestCase.failureException.__new__", + "tf.compat.v1.test.TestCase.failureException.args": "tf.errors.AbortedError.args", + "tf.compat.v1.test.TestCase.failureException.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.test.TestCase.get_temp_dir": "tf.test.TestCase.get_temp_dir", + "tf.compat.v1.test.TestCase.id": "tf.test.TestCase.id", + "tf.compat.v1.test.TestCase.run": "tf.test.TestCase.run", + "tf.compat.v1.test.TestCase.session": "tf.test.TestCase.session", + "tf.compat.v1.test.TestCase.setUp": "tf.test.TestCase.setUp", + "tf.compat.v1.test.TestCase.shortDescription": "tf.test.TestCase.shortDescription", + "tf.compat.v1.test.TestCase.skipTest": "tf.test.TestCase.skipTest", + "tf.compat.v1.test.TestCase.subTest": "tf.test.TestCase.subTest", + "tf.compat.v1.test.TestCase.tearDown": "tf.test.TestCase.tearDown", + "tf.compat.v1.test.TestCase.tempfile_cleanup": "tf.test.TestCase.tempfile_cleanup", + "tf.compat.v1.test.TestCase.test_session": "tf.test.TestCase.test_session", + "tf.compat.v1.test.benchmark_config": "tf.test.benchmark_config", + "tf.compat.v1.test.create_local_cluster": "tf.test.create_local_cluster", + "tf.compat.v1.test.gpu_device_name": "tf.test.gpu_device_name", + "tf.compat.v1.test.is_built_with_cuda": "tf.test.is_built_with_cuda", + "tf.compat.v1.test.is_built_with_gpu_support": "tf.test.is_built_with_gpu_support", + "tf.compat.v1.test.is_built_with_rocm": "tf.test.is_built_with_rocm", + "tf.compat.v1.test.is_built_with_xla": "tf.test.is_built_with_xla", + "tf.compat.v1.test.is_gpu_available": "tf.test.is_gpu_available", + "tf.compat.v1.test.main": "tf.test.main", + "tf.compat.v1.tile": "tf.tile", + "tf.compat.v1.timestamp": "tf.timestamp", + "tf.compat.v1.tpu.CrossShardOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.tpu.CrossShardOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.tpu.CrossShardOptimizer.__gt__": 
"tf.keras.Model.__gt__", + "tf.compat.v1.tpu.CrossShardOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.tpu.CrossShardOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.tpu.CrossShardOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.tpu.CrossShardOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.tpu.CrossShardOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.tpu.CrossShardOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.tpu.experimental.AdagradParameters.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.tpu.experimental.AdagradParameters.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.tpu.experimental.AdagradParameters.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.tpu.experimental.AdagradParameters.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.tpu.experimental.AdagradParameters.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.tpu.experimental.AdagradParameters.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.tpu.experimental.AdagradParameters.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.tpu.experimental.AdamParameters.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.tpu.experimental.AdamParameters.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.tpu.experimental.AdamParameters.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.tpu.experimental.AdamParameters.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.tpu.experimental.AdamParameters.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.tpu.experimental.AdamParameters.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.tpu.experimental.AdamParameters.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.tpu.experimental.DeviceAssignment": "tf.tpu.experimental.DeviceAssignment", + "tf.compat.v1.tpu.experimental.DeviceAssignment.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.tpu.experimental.DeviceAssignment.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.tpu.experimental.DeviceAssignment.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.tpu.experimental.DeviceAssignment.__init__": "tf.tpu.experimental.DeviceAssignment.__init__", + "tf.compat.v1.tpu.experimental.DeviceAssignment.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.tpu.experimental.DeviceAssignment.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.tpu.experimental.DeviceAssignment.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.tpu.experimental.DeviceAssignment.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.tpu.experimental.DeviceAssignment.build": "tf.tpu.experimental.DeviceAssignment.build", + "tf.compat.v1.tpu.experimental.DeviceAssignment.coordinates": "tf.tpu.experimental.DeviceAssignment.coordinates", + "tf.compat.v1.tpu.experimental.DeviceAssignment.core_assignment": "tf.tpu.experimental.DeviceAssignment.core_assignment", + "tf.compat.v1.tpu.experimental.DeviceAssignment.host_device": "tf.tpu.experimental.DeviceAssignment.host_device", + "tf.compat.v1.tpu.experimental.DeviceAssignment.lookup_replicas": "tf.tpu.experimental.DeviceAssignment.lookup_replicas", + "tf.compat.v1.tpu.experimental.DeviceAssignment.num_cores_per_replica": "tf.tpu.experimental.DeviceAssignment.num_cores_per_replica", + "tf.compat.v1.tpu.experimental.DeviceAssignment.num_replicas": "tf.tpu.experimental.DeviceAssignment.num_replicas", + "tf.compat.v1.tpu.experimental.DeviceAssignment.topology": "tf.tpu.experimental.DeviceAssignment.topology", + "tf.compat.v1.tpu.experimental.DeviceAssignment.tpu_device": 
"tf.tpu.experimental.DeviceAssignment.tpu_device", + "tf.compat.v1.tpu.experimental.DeviceAssignment.tpu_ordinal": "tf.tpu.experimental.DeviceAssignment.tpu_ordinal", + "tf.compat.v1.tpu.experimental.FtrlParameters.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.tpu.experimental.FtrlParameters.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.tpu.experimental.FtrlParameters.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.tpu.experimental.FtrlParameters.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.tpu.experimental.FtrlParameters.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.tpu.experimental.FtrlParameters.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.tpu.experimental.FtrlParameters.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.tpu.experimental.initialize_tpu_system": "tf.tpu.experimental.initialize_tpu_system", + "tf.compat.v1.tpu.experimental.shutdown_tpu_system": "tf.tpu.experimental.shutdown_tpu_system", + "tf.compat.v1.trace": "tf.linalg.trace", + "tf.compat.v1.train.AdadeltaOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.AdadeltaOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.AdadeltaOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.AdadeltaOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.AdadeltaOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.AdadeltaOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.AdadeltaOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.AdadeltaOptimizer.apply_gradients": "tf.compat.v1.train.Optimizer.apply_gradients", + "tf.compat.v1.train.AdadeltaOptimizer.compute_gradients": "tf.compat.v1.train.Optimizer.compute_gradients", + "tf.compat.v1.train.AdadeltaOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.AdadeltaOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", + "tf.compat.v1.train.AdadeltaOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.AdadeltaOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.AdadeltaOptimizer.variables": "tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.AdagradDAOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.AdagradDAOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.AdagradDAOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.AdagradDAOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.AdagradDAOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.AdagradDAOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.AdagradDAOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.compat.v1.train.AdagradDAOptimizer.apply_gradients": "tf.compat.v1.train.Optimizer.apply_gradients", + "tf.compat.v1.train.AdagradDAOptimizer.compute_gradients": "tf.compat.v1.train.Optimizer.compute_gradients", + "tf.compat.v1.train.AdagradDAOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.AdagradDAOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", + "tf.compat.v1.train.AdagradDAOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.AdagradDAOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.AdagradDAOptimizer.variables": "tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.AdagradOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.AdagradOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.AdagradOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.AdagradOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.AdagradOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.AdagradOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.AdagradOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.AdagradOptimizer.apply_gradients": "tf.compat.v1.train.Optimizer.apply_gradients", + "tf.compat.v1.train.AdagradOptimizer.compute_gradients": "tf.compat.v1.train.Optimizer.compute_gradients", + "tf.compat.v1.train.AdagradOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.AdagradOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", + "tf.compat.v1.train.AdagradOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.AdagradOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.AdagradOptimizer.variables": "tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.AdamOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.AdamOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.AdamOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.AdamOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.AdamOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.AdamOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.AdamOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.AdamOptimizer.apply_gradients": "tf.compat.v1.train.Optimizer.apply_gradients", + "tf.compat.v1.train.AdamOptimizer.compute_gradients": "tf.compat.v1.train.Optimizer.compute_gradients", + "tf.compat.v1.train.AdamOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.AdamOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", + "tf.compat.v1.train.AdamOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.AdamOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.AdamOptimizer.variables": "tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.BytesList": "tf.train.BytesList", + "tf.compat.v1.train.BytesList.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.BytesList.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.BytesList.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.BytesList.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.BytesList.CopyFrom": "tf.train.BytesList.CopyFrom", + 
"tf.compat.v1.train.BytesList.DESCRIPTOR": "tf.train.BytesList.DESCRIPTOR", + "tf.compat.v1.train.BytesList.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.BytesList.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.BytesList.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.BytesList.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.BytesList.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.BytesList.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.BytesList.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.BytesList.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.BytesList.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.BytesList.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.BytesList.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.BytesList.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.BytesList.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.BytesList.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.BytesList.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.BytesList.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.BytesList.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.BytesList.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.BytesList.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.BytesList.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.BytesList.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.BytesList.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.BytesList.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.Checkpoint.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.Checkpoint.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.Checkpoint.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.Checkpoint.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.Checkpoint.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.Checkpoint.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.Checkpoint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.CheckpointManager": "tf.train.CheckpointManager", + "tf.compat.v1.train.CheckpointManager.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.CheckpointManager.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.CheckpointManager.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.CheckpointManager.__init__": "tf.train.CheckpointManager.__init__", + "tf.compat.v1.train.CheckpointManager.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.CheckpointManager.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.CheckpointManager.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.CheckpointManager.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.CheckpointManager.checkpoint": "tf.train.CheckpointManager.checkpoint", + "tf.compat.v1.train.CheckpointManager.checkpoint_interval": "tf.train.CheckpointManager.checkpoint_interval", + "tf.compat.v1.train.CheckpointManager.checkpoints": "tf.train.CheckpointManager.checkpoints", + "tf.compat.v1.train.CheckpointManager.directory": "tf.train.CheckpointManager.directory", + 
"tf.compat.v1.train.CheckpointManager.latest_checkpoint": "tf.train.CheckpointManager.latest_checkpoint", + "tf.compat.v1.train.CheckpointManager.restore_or_initialize": "tf.train.CheckpointManager.restore_or_initialize", + "tf.compat.v1.train.CheckpointManager.save": "tf.train.CheckpointManager.save", + "tf.compat.v1.train.CheckpointSaverHook": "tf.estimator.CheckpointSaverHook", + "tf.compat.v1.train.CheckpointSaverHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.CheckpointSaverHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.CheckpointSaverHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.CheckpointSaverHook.__init__": "tf.estimator.CheckpointSaverHook.__init__", + "tf.compat.v1.train.CheckpointSaverHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.CheckpointSaverHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.CheckpointSaverHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.CheckpointSaverHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.CheckpointSaverHook.after_create_session": "tf.estimator.CheckpointSaverHook.after_create_session", + "tf.compat.v1.train.CheckpointSaverHook.after_run": "tf.estimator.CheckpointSaverHook.after_run", + "tf.compat.v1.train.CheckpointSaverHook.before_run": "tf.estimator.CheckpointSaverHook.before_run", + "tf.compat.v1.train.CheckpointSaverHook.begin": "tf.estimator.CheckpointSaverHook.begin", + "tf.compat.v1.train.CheckpointSaverHook.end": "tf.estimator.CheckpointSaverHook.end", + "tf.compat.v1.train.CheckpointSaverListener": "tf.estimator.CheckpointSaverListener", + "tf.compat.v1.train.CheckpointSaverListener.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.CheckpointSaverListener.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.CheckpointSaverListener.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.CheckpointSaverListener.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.train.CheckpointSaverListener.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.CheckpointSaverListener.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.CheckpointSaverListener.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.CheckpointSaverListener.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.CheckpointSaverListener.after_save": "tf.estimator.CheckpointSaverListener.after_save", + "tf.compat.v1.train.CheckpointSaverListener.before_save": "tf.estimator.CheckpointSaverListener.before_save", + "tf.compat.v1.train.CheckpointSaverListener.begin": "tf.estimator.CheckpointSaverListener.begin", + "tf.compat.v1.train.CheckpointSaverListener.end": "tf.estimator.CheckpointSaverListener.end", + "tf.compat.v1.train.ChiefSessionCreator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.ChiefSessionCreator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.ChiefSessionCreator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.ChiefSessionCreator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.ChiefSessionCreator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.ChiefSessionCreator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.ChiefSessionCreator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.ClusterDef": "tf.train.ClusterDef", + "tf.compat.v1.train.ClusterDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.ClusterDef.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.ClusterDef.ClearExtension": 
"tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.ClusterDef.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.ClusterDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.ClusterDef.DESCRIPTOR": "tf.train.ClusterDef.DESCRIPTOR", + "tf.compat.v1.train.ClusterDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.ClusterDef.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.ClusterDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.ClusterDef.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.ClusterDef.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.ClusterDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.ClusterDef.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.ClusterDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.ClusterDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.ClusterDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.ClusterDef.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.ClusterDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.ClusterDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.ClusterDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.ClusterDef.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.ClusterDef.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.ClusterDef.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.ClusterDef.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.ClusterDef.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.ClusterDef.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.ClusterDef.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.ClusterDef.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.ClusterDef.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.ClusterSpec": "tf.train.ClusterSpec", + "tf.compat.v1.train.ClusterSpec.__bool__": "tf.train.ClusterSpec.__bool__", + "tf.compat.v1.train.ClusterSpec.__eq__": "tf.train.ClusterSpec.__eq__", + "tf.compat.v1.train.ClusterSpec.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.ClusterSpec.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.ClusterSpec.__init__": "tf.train.ClusterSpec.__init__", + "tf.compat.v1.train.ClusterSpec.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.ClusterSpec.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.ClusterSpec.__ne__": "tf.train.ClusterSpec.__ne__", + "tf.compat.v1.train.ClusterSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.ClusterSpec.__nonzero__": "tf.train.ClusterSpec.__bool__", + "tf.compat.v1.train.ClusterSpec.as_cluster_def": "tf.train.ClusterSpec.as_cluster_def", + "tf.compat.v1.train.ClusterSpec.as_dict": "tf.train.ClusterSpec.as_dict", + "tf.compat.v1.train.ClusterSpec.job_tasks": "tf.train.ClusterSpec.job_tasks", + "tf.compat.v1.train.ClusterSpec.jobs": "tf.train.ClusterSpec.jobs", + "tf.compat.v1.train.ClusterSpec.num_tasks": "tf.train.ClusterSpec.num_tasks", + "tf.compat.v1.train.ClusterSpec.task_address": "tf.train.ClusterSpec.task_address", + "tf.compat.v1.train.ClusterSpec.task_indices": "tf.train.ClusterSpec.task_indices", + 
"tf.compat.v1.train.Coordinator": "tf.train.Coordinator", + "tf.compat.v1.train.Coordinator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.Coordinator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.Coordinator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.Coordinator.__init__": "tf.train.Coordinator.__init__", + "tf.compat.v1.train.Coordinator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.Coordinator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.Coordinator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.Coordinator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.Coordinator.clear_stop": "tf.train.Coordinator.clear_stop", + "tf.compat.v1.train.Coordinator.join": "tf.train.Coordinator.join", + "tf.compat.v1.train.Coordinator.joined": "tf.train.Coordinator.joined", + "tf.compat.v1.train.Coordinator.raise_requested_exception": "tf.train.Coordinator.raise_requested_exception", + "tf.compat.v1.train.Coordinator.register_thread": "tf.train.Coordinator.register_thread", + "tf.compat.v1.train.Coordinator.request_stop": "tf.train.Coordinator.request_stop", + "tf.compat.v1.train.Coordinator.should_stop": "tf.train.Coordinator.should_stop", + "tf.compat.v1.train.Coordinator.stop_on_exception": "tf.train.Coordinator.stop_on_exception", + "tf.compat.v1.train.Coordinator.wait_for_stop": "tf.train.Coordinator.wait_for_stop", + "tf.compat.v1.train.Example": "tf.train.Example", + "tf.compat.v1.train.Example.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.Example.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.Example.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.Example.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.Example.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.Example.DESCRIPTOR": "tf.train.Example.DESCRIPTOR", + "tf.compat.v1.train.Example.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.Example.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.Example.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.Example.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.Example.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.Example.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.Example.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.Example.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.Example.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.Example.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.Example.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.Example.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.Example.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.Example.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.Example.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.Example.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.Example.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.Example.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.Example.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.Example.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.Example.__lt__": 
"tf.train.BytesList.__lt__", + "tf.compat.v1.train.Example.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.Example.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.ExponentialMovingAverage": "tf.train.ExponentialMovingAverage", + "tf.compat.v1.train.ExponentialMovingAverage.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.ExponentialMovingAverage.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.ExponentialMovingAverage.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.ExponentialMovingAverage.__init__": "tf.train.ExponentialMovingAverage.__init__", + "tf.compat.v1.train.ExponentialMovingAverage.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.ExponentialMovingAverage.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.ExponentialMovingAverage.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.ExponentialMovingAverage.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.ExponentialMovingAverage.apply": "tf.train.ExponentialMovingAverage.apply", + "tf.compat.v1.train.ExponentialMovingAverage.average": "tf.train.ExponentialMovingAverage.average", + "tf.compat.v1.train.ExponentialMovingAverage.average_name": "tf.train.ExponentialMovingAverage.average_name", + "tf.compat.v1.train.ExponentialMovingAverage.name": "tf.train.ExponentialMovingAverage.name", + "tf.compat.v1.train.ExponentialMovingAverage.variables_to_restore": "tf.train.ExponentialMovingAverage.variables_to_restore", + "tf.compat.v1.train.Feature": "tf.train.Feature", + "tf.compat.v1.train.Feature.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.Feature.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.Feature.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.Feature.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.Feature.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.Feature.DESCRIPTOR": "tf.train.Feature.DESCRIPTOR", + "tf.compat.v1.train.Feature.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.Feature.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.Feature.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.Feature.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.Feature.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.Feature.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.Feature.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.Feature.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.Feature.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.Feature.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.Feature.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.Feature.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.Feature.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.Feature.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.Feature.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.Feature.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.Feature.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.Feature.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.Feature.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.Feature.__le__": 
"tf.train.BytesList.__le__", + "tf.compat.v1.train.Feature.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.Feature.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.Feature.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.FeatureList": "tf.train.FeatureList", + "tf.compat.v1.train.FeatureList.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.FeatureList.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.FeatureList.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.FeatureList.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.FeatureList.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.FeatureList.DESCRIPTOR": "tf.train.FeatureList.DESCRIPTOR", + "tf.compat.v1.train.FeatureList.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.FeatureList.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.FeatureList.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.FeatureList.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.FeatureList.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.FeatureList.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.FeatureList.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.FeatureList.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.FeatureList.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.FeatureList.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.FeatureList.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.FeatureList.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.FeatureList.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.FeatureList.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.FeatureList.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.FeatureList.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.FeatureList.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.FeatureList.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.FeatureList.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.FeatureList.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.FeatureList.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.FeatureList.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.FeatureList.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.FeatureLists": "tf.train.FeatureLists", + "tf.compat.v1.train.FeatureLists.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.FeatureLists.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.FeatureLists.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.FeatureLists.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.FeatureLists.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.FeatureLists.DESCRIPTOR": "tf.train.FeatureLists.DESCRIPTOR", + "tf.compat.v1.train.FeatureLists.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.FeatureLists.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.FeatureLists.FeatureListEntry": "tf.train.FeatureLists.FeatureListEntry", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.ByteSize": 
"tf.train.BytesList.ByteSize", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.DESCRIPTOR": "tf.train.FeatureLists.FeatureListEntry.DESCRIPTOR", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.FeatureLists.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.FeatureLists.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.FeatureLists.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.FeatureLists.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.FeatureLists.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.FeatureLists.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.FeatureLists.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.FeatureLists.ParseFromString": "tf.train.BytesList.ParseFromString", + 
"tf.compat.v1.train.FeatureLists.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.FeatureLists.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.FeatureLists.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.FeatureLists.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.FeatureLists.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.FeatureLists.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.FeatureLists.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.FeatureLists.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.FeatureLists.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.FeatureLists.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.FeatureLists.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.FeatureLists.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.FeatureLists.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.Features": "tf.train.Features", + "tf.compat.v1.train.Features.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.Features.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.Features.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.Features.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.Features.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.Features.DESCRIPTOR": "tf.train.Features.DESCRIPTOR", + "tf.compat.v1.train.Features.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.Features.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.Features.FeatureEntry": "tf.train.Features.FeatureEntry", + "tf.compat.v1.train.Features.FeatureEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.Features.FeatureEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.Features.FeatureEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.Features.FeatureEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.Features.FeatureEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.Features.FeatureEntry.DESCRIPTOR": "tf.train.Features.FeatureEntry.DESCRIPTOR", + "tf.compat.v1.train.Features.FeatureEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.Features.FeatureEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.Features.FeatureEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.Features.FeatureEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.Features.FeatureEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.Features.FeatureEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.Features.FeatureEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.Features.FeatureEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.Features.FeatureEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.Features.FeatureEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.Features.FeatureEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.Features.FeatureEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + 
"tf.compat.v1.train.Features.FeatureEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.Features.FeatureEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.Features.FeatureEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.Features.FeatureEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.Features.FeatureEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.Features.FeatureEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.Features.FeatureEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.Features.FeatureEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.Features.FeatureEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.Features.FeatureEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.Features.FeatureEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.Features.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.Features.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.Features.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.Features.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.Features.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.Features.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.Features.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.Features.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.Features.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.Features.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.Features.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.Features.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.Features.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.Features.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.Features.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.Features.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.Features.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.Features.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.Features.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.Features.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.Features.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.FeedFnHook": "tf.estimator.FeedFnHook", + "tf.compat.v1.train.FeedFnHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.FeedFnHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.FeedFnHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.FeedFnHook.__init__": "tf.estimator.FeedFnHook.__init__", + "tf.compat.v1.train.FeedFnHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.FeedFnHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.FeedFnHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.FeedFnHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.FeedFnHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.train.FeedFnHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.compat.v1.train.FeedFnHook.before_run": "tf.estimator.FeedFnHook.before_run", + "tf.compat.v1.train.FeedFnHook.begin": 
"tf.estimator.SessionRunHook.begin", + "tf.compat.v1.train.FeedFnHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.train.FinalOpsHook": "tf.estimator.FinalOpsHook", + "tf.compat.v1.train.FinalOpsHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.FinalOpsHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.FinalOpsHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.FinalOpsHook.__init__": "tf.estimator.FinalOpsHook.__init__", + "tf.compat.v1.train.FinalOpsHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.FinalOpsHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.FinalOpsHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.FinalOpsHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.FinalOpsHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.train.FinalOpsHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.compat.v1.train.FinalOpsHook.before_run": "tf.estimator.SessionRunHook.before_run", + "tf.compat.v1.train.FinalOpsHook.begin": "tf.estimator.SessionRunHook.begin", + "tf.compat.v1.train.FinalOpsHook.end": "tf.estimator.FinalOpsHook.end", + "tf.compat.v1.train.FinalOpsHook.final_ops_values": "tf.estimator.FinalOpsHook.final_ops_values", + "tf.compat.v1.train.FloatList": "tf.train.FloatList", + "tf.compat.v1.train.FloatList.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.FloatList.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.FloatList.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.FloatList.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.FloatList.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.FloatList.DESCRIPTOR": "tf.train.FloatList.DESCRIPTOR", + "tf.compat.v1.train.FloatList.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.FloatList.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.FloatList.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.FloatList.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.FloatList.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.FloatList.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.FloatList.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.FloatList.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.FloatList.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.FloatList.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.FloatList.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.FloatList.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.FloatList.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.FloatList.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.FloatList.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.FloatList.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.FloatList.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.FloatList.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.FloatList.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.FloatList.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.FloatList.__lt__": "tf.train.BytesList.__lt__", + 
"tf.compat.v1.train.FloatList.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.FloatList.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.FtrlOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.FtrlOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.FtrlOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.FtrlOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.FtrlOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.FtrlOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.FtrlOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.FtrlOptimizer.apply_gradients": "tf.compat.v1.train.Optimizer.apply_gradients", + "tf.compat.v1.train.FtrlOptimizer.compute_gradients": "tf.compat.v1.train.Optimizer.compute_gradients", + "tf.compat.v1.train.FtrlOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.FtrlOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", + "tf.compat.v1.train.FtrlOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.FtrlOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.FtrlOptimizer.variables": "tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.GlobalStepWaiterHook": "tf.estimator.GlobalStepWaiterHook", + "tf.compat.v1.train.GlobalStepWaiterHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.GlobalStepWaiterHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.GlobalStepWaiterHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.GlobalStepWaiterHook.__init__": "tf.estimator.GlobalStepWaiterHook.__init__", + "tf.compat.v1.train.GlobalStepWaiterHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.GlobalStepWaiterHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.GlobalStepWaiterHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.GlobalStepWaiterHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.GlobalStepWaiterHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.train.GlobalStepWaiterHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.compat.v1.train.GlobalStepWaiterHook.before_run": "tf.estimator.GlobalStepWaiterHook.before_run", + "tf.compat.v1.train.GlobalStepWaiterHook.begin": "tf.estimator.GlobalStepWaiterHook.begin", + "tf.compat.v1.train.GlobalStepWaiterHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.train.GradientDescentOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.GradientDescentOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.GradientDescentOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.GradientDescentOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.GradientDescentOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.GradientDescentOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.GradientDescentOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.GradientDescentOptimizer.apply_gradients": "tf.compat.v1.train.Optimizer.apply_gradients", + "tf.compat.v1.train.GradientDescentOptimizer.compute_gradients": "tf.compat.v1.train.Optimizer.compute_gradients", + "tf.compat.v1.train.GradientDescentOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.GradientDescentOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", 
+ "tf.compat.v1.train.GradientDescentOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.GradientDescentOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.GradientDescentOptimizer.variables": "tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.Int64List": "tf.train.Int64List", + "tf.compat.v1.train.Int64List.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.Int64List.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.Int64List.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.Int64List.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.Int64List.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.Int64List.DESCRIPTOR": "tf.train.Int64List.DESCRIPTOR", + "tf.compat.v1.train.Int64List.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.Int64List.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.Int64List.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.Int64List.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.Int64List.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.Int64List.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.Int64List.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.Int64List.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.Int64List.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.Int64List.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.Int64List.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.Int64List.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.Int64List.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.Int64List.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.Int64List.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.Int64List.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.Int64List.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.Int64List.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.Int64List.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.Int64List.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.Int64List.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.Int64List.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.Int64List.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.JobDef": "tf.train.JobDef", + "tf.compat.v1.train.JobDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.JobDef.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.JobDef.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.JobDef.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.JobDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.JobDef.DESCRIPTOR": "tf.train.JobDef.DESCRIPTOR", + "tf.compat.v1.train.JobDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.JobDef.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.JobDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.JobDef.HasExtension": "tf.train.BytesList.HasExtension", + 
"tf.compat.v1.train.JobDef.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.JobDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.JobDef.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.JobDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.JobDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.JobDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.JobDef.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.JobDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.JobDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.JobDef.TasksEntry": "tf.train.JobDef.TasksEntry", + "tf.compat.v1.train.JobDef.TasksEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.JobDef.TasksEntry.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.JobDef.TasksEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.JobDef.TasksEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.JobDef.TasksEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.JobDef.TasksEntry.DESCRIPTOR": "tf.train.JobDef.TasksEntry.DESCRIPTOR", + "tf.compat.v1.train.JobDef.TasksEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.JobDef.TasksEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.JobDef.TasksEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.JobDef.TasksEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.JobDef.TasksEntry.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.JobDef.TasksEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.JobDef.TasksEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.JobDef.TasksEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.JobDef.TasksEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.JobDef.TasksEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.JobDef.TasksEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.JobDef.TasksEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.JobDef.TasksEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.JobDef.TasksEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.JobDef.TasksEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.JobDef.TasksEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.JobDef.TasksEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.JobDef.TasksEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.JobDef.TasksEntry.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.JobDef.TasksEntry.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.JobDef.TasksEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.JobDef.TasksEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.JobDef.TasksEntry.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.JobDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.JobDef.WhichOneof": "tf.train.BytesList.WhichOneof", + 
"tf.compat.v1.train.JobDef.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.JobDef.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.JobDef.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.JobDef.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.JobDef.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.JobDef.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.JobDef.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.JobDef.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.LoggingTensorHook": "tf.estimator.LoggingTensorHook", + "tf.compat.v1.train.LoggingTensorHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.LoggingTensorHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.LoggingTensorHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.LoggingTensorHook.__init__": "tf.estimator.LoggingTensorHook.__init__", + "tf.compat.v1.train.LoggingTensorHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.LoggingTensorHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.LoggingTensorHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.LoggingTensorHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.LoggingTensorHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.train.LoggingTensorHook.after_run": "tf.estimator.LoggingTensorHook.after_run", + "tf.compat.v1.train.LoggingTensorHook.before_run": "tf.estimator.LoggingTensorHook.before_run", + "tf.compat.v1.train.LoggingTensorHook.begin": "tf.estimator.LoggingTensorHook.begin", + "tf.compat.v1.train.LoggingTensorHook.end": "tf.estimator.LoggingTensorHook.end", + "tf.compat.v1.train.LooperThread.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.LooperThread.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.LooperThread.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.LooperThread.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.LooperThread.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.LooperThread.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.LooperThread.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.MomentumOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.MomentumOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.MomentumOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.MomentumOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.MomentumOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.MomentumOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.MomentumOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.MomentumOptimizer.apply_gradients": "tf.compat.v1.train.Optimizer.apply_gradients", + "tf.compat.v1.train.MomentumOptimizer.compute_gradients": "tf.compat.v1.train.Optimizer.compute_gradients", + "tf.compat.v1.train.MomentumOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.MomentumOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", + "tf.compat.v1.train.MomentumOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.MomentumOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.MomentumOptimizer.variables": "tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.MonitoredSession.StepContext.__eq__": "tf.keras.Model.__eq__", + 
"tf.compat.v1.train.MonitoredSession.StepContext.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.MonitoredSession.StepContext.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.MonitoredSession.StepContext.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.MonitoredSession.StepContext.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.MonitoredSession.StepContext.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.MonitoredSession.StepContext.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.MonitoredSession.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.MonitoredSession.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.MonitoredSession.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.MonitoredSession.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.MonitoredSession.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.MonitoredSession.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.MonitoredSession.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.NanLossDuringTrainingError": "tf.estimator.NanLossDuringTrainingError", + "tf.compat.v1.train.NanLossDuringTrainingError.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.NanLossDuringTrainingError.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.NanLossDuringTrainingError.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.NanLossDuringTrainingError.__init__": "tf.estimator.NanLossDuringTrainingError.__init__", + "tf.compat.v1.train.NanLossDuringTrainingError.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.NanLossDuringTrainingError.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.NanLossDuringTrainingError.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.NanLossDuringTrainingError.__new__": "tf.estimator.NanLossDuringTrainingError.__new__", + "tf.compat.v1.train.NanLossDuringTrainingError.args": "tf.errors.AbortedError.args", + "tf.compat.v1.train.NanLossDuringTrainingError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.compat.v1.train.NanTensorHook": "tf.estimator.NanTensorHook", + "tf.compat.v1.train.NanTensorHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.NanTensorHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.NanTensorHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.NanTensorHook.__init__": "tf.estimator.NanTensorHook.__init__", + "tf.compat.v1.train.NanTensorHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.NanTensorHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.NanTensorHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.NanTensorHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.NanTensorHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.train.NanTensorHook.after_run": "tf.estimator.NanTensorHook.after_run", + "tf.compat.v1.train.NanTensorHook.before_run": "tf.estimator.NanTensorHook.before_run", + "tf.compat.v1.train.NanTensorHook.begin": "tf.estimator.SessionRunHook.begin", + "tf.compat.v1.train.NanTensorHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.train.Optimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.Optimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.Optimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.Optimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.Optimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.Optimizer.__ne__": 
"tf.keras.Model.__ne__", + "tf.compat.v1.train.Optimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.ProfilerHook": "tf.estimator.ProfilerHook", + "tf.compat.v1.train.ProfilerHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.ProfilerHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.ProfilerHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.ProfilerHook.__init__": "tf.estimator.ProfilerHook.__init__", + "tf.compat.v1.train.ProfilerHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.ProfilerHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.ProfilerHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.ProfilerHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.ProfilerHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.train.ProfilerHook.after_run": "tf.estimator.ProfilerHook.after_run", + "tf.compat.v1.train.ProfilerHook.before_run": "tf.estimator.ProfilerHook.before_run", + "tf.compat.v1.train.ProfilerHook.begin": "tf.estimator.ProfilerHook.begin", + "tf.compat.v1.train.ProfilerHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.train.ProximalAdagradOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.ProximalAdagradOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.ProximalAdagradOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.ProximalAdagradOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.ProximalAdagradOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.ProximalAdagradOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.ProximalAdagradOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.ProximalAdagradOptimizer.apply_gradients": "tf.compat.v1.train.Optimizer.apply_gradients", + "tf.compat.v1.train.ProximalAdagradOptimizer.compute_gradients": "tf.compat.v1.train.Optimizer.compute_gradients", + "tf.compat.v1.train.ProximalAdagradOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.ProximalAdagradOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", + "tf.compat.v1.train.ProximalAdagradOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.ProximalAdagradOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.ProximalAdagradOptimizer.variables": "tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.apply_gradients": "tf.compat.v1.train.Optimizer.apply_gradients", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.compute_gradients": "tf.compat.v1.train.Optimizer.compute_gradients", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + 
"tf.compat.v1.train.ProximalGradientDescentOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.ProximalGradientDescentOptimizer.variables": "tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.QueueRunner.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.QueueRunner.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.QueueRunner.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.QueueRunner.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.QueueRunner.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.QueueRunner.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.QueueRunner.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.RMSPropOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.RMSPropOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.RMSPropOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.RMSPropOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.RMSPropOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.RMSPropOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.RMSPropOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.RMSPropOptimizer.apply_gradients": "tf.compat.v1.train.Optimizer.apply_gradients", + "tf.compat.v1.train.RMSPropOptimizer.compute_gradients": "tf.compat.v1.train.Optimizer.compute_gradients", + "tf.compat.v1.train.RMSPropOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.RMSPropOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", + "tf.compat.v1.train.RMSPropOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.RMSPropOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.RMSPropOptimizer.variables": "tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.Saver.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.Saver.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.Saver.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.Saver.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.Saver.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.Saver.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.Saver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SaverDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.SaverDef.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.SaverDef.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.SaverDef.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.SaverDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.SaverDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.SaverDef.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.SaverDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.SaverDef.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.SaverDef.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.SaverDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.SaverDef.ListFields": 
"tf.train.BytesList.ListFields", + "tf.compat.v1.train.SaverDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.SaverDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.SaverDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.SaverDef.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.SaverDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.SaverDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.SaverDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.SaverDef.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.SaverDef.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.SaverDef.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.SaverDef.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.SaverDef.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.SaverDef.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.SaverDef.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.SaverDef.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.SaverDef.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.Scaffold.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.Scaffold.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.Scaffold.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.Scaffold.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.Scaffold.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.Scaffold.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.Scaffold.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SecondOrStepTimer": "tf.estimator.SecondOrStepTimer", + "tf.compat.v1.train.SecondOrStepTimer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.SecondOrStepTimer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.SecondOrStepTimer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.SecondOrStepTimer.__init__": "tf.estimator.SecondOrStepTimer.__init__", + "tf.compat.v1.train.SecondOrStepTimer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.SecondOrStepTimer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.SecondOrStepTimer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.SecondOrStepTimer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SecondOrStepTimer.last_triggered_step": "tf.estimator.SecondOrStepTimer.last_triggered_step", + "tf.compat.v1.train.SecondOrStepTimer.reset": "tf.estimator.SecondOrStepTimer.reset", + "tf.compat.v1.train.SecondOrStepTimer.should_trigger_for_step": "tf.estimator.SecondOrStepTimer.should_trigger_for_step", + "tf.compat.v1.train.SecondOrStepTimer.update_last_triggered_step": "tf.estimator.SecondOrStepTimer.update_last_triggered_step", + "tf.compat.v1.train.SequenceExample": "tf.train.SequenceExample", + "tf.compat.v1.train.SequenceExample.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.SequenceExample.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.SequenceExample.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.SequenceExample.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.SequenceExample.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.SequenceExample.DESCRIPTOR": "tf.train.SequenceExample.DESCRIPTOR", + "tf.compat.v1.train.SequenceExample.DiscardUnknownFields": 
"tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.SequenceExample.Extensions": "tf.train.BytesList.Extensions", + "tf.compat.v1.train.SequenceExample.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.SequenceExample.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.SequenceExample.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.SequenceExample.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.SequenceExample.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.SequenceExample.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.SequenceExample.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.SequenceExample.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.SequenceExample.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.SequenceExample.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.SequenceExample.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.SequenceExample.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.SequenceExample.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.SequenceExample.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.SequenceExample.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.SequenceExample.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.SequenceExample.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.SequenceExample.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.SequenceExample.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.SequenceExample.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.SequenceExample.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.Server": "tf.distribute.Server", + "tf.compat.v1.train.Server.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.Server.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.Server.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.Server.__init__": "tf.distribute.Server.__init__", + "tf.compat.v1.train.Server.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.Server.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.Server.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.Server.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.Server.create_local_server": "tf.distribute.Server.create_local_server", + "tf.compat.v1.train.Server.join": "tf.distribute.Server.join", + "tf.compat.v1.train.Server.server_def": "tf.distribute.Server.server_def", + "tf.compat.v1.train.Server.start": "tf.distribute.Server.start", + "tf.compat.v1.train.Server.target": "tf.distribute.Server.target", + "tf.compat.v1.train.ServerDef": "tf.train.ServerDef", + "tf.compat.v1.train.ServerDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.compat.v1.train.ServerDef.Clear": "tf.train.BytesList.Clear", + "tf.compat.v1.train.ServerDef.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.compat.v1.train.ServerDef.ClearField": "tf.train.BytesList.ClearField", + "tf.compat.v1.train.ServerDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.compat.v1.train.ServerDef.DESCRIPTOR": "tf.train.ServerDef.DESCRIPTOR", + "tf.compat.v1.train.ServerDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.compat.v1.train.ServerDef.Extensions": 
"tf.train.BytesList.Extensions", + "tf.compat.v1.train.ServerDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.compat.v1.train.ServerDef.HasExtension": "tf.train.BytesList.HasExtension", + "tf.compat.v1.train.ServerDef.HasField": "tf.train.BytesList.HasField", + "tf.compat.v1.train.ServerDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.compat.v1.train.ServerDef.ListFields": "tf.train.BytesList.ListFields", + "tf.compat.v1.train.ServerDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.compat.v1.train.ServerDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.compat.v1.train.ServerDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.compat.v1.train.ServerDef.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.compat.v1.train.ServerDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.compat.v1.train.ServerDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.compat.v1.train.ServerDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.compat.v1.train.ServerDef.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.compat.v1.train.ServerDef.__eq__": "tf.train.BytesList.__eq__", + "tf.compat.v1.train.ServerDef.__ge__": "tf.train.BytesList.__ge__", + "tf.compat.v1.train.ServerDef.__gt__": "tf.train.BytesList.__gt__", + "tf.compat.v1.train.ServerDef.__init__": "tf.train.BytesList.__init__", + "tf.compat.v1.train.ServerDef.__le__": "tf.train.BytesList.__le__", + "tf.compat.v1.train.ServerDef.__lt__": "tf.train.BytesList.__lt__", + "tf.compat.v1.train.ServerDef.__ne__": "tf.train.BytesList.__ne__", + "tf.compat.v1.train.ServerDef.__new__": "tf.train.BytesList.__new__", + "tf.compat.v1.train.SessionCreator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.SessionCreator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.SessionCreator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.SessionCreator.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.train.SessionCreator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.SessionCreator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.SessionCreator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.SessionCreator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SessionManager.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.SessionManager.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.SessionManager.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.SessionManager.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.SessionManager.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.SessionManager.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.SessionManager.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SessionRunArgs": "tf.estimator.SessionRunArgs", + "tf.compat.v1.train.SessionRunArgs.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.train.SessionRunArgs.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.train.SessionRunArgs.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.train.SessionRunArgs.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.train.SessionRunArgs.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.train.SessionRunArgs.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.train.SessionRunArgs.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.train.SessionRunArgs.__iter__": 
"tf.config.LogicalDevice.__iter__", + "tf.compat.v1.train.SessionRunArgs.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.train.SessionRunArgs.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.train.SessionRunArgs.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.train.SessionRunArgs.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.train.SessionRunArgs.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.train.SessionRunArgs.__new__": "tf.estimator.SessionRunArgs.__new__", + "tf.compat.v1.train.SessionRunArgs.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.train.SessionRunArgs.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.train.SessionRunArgs.feed_dict": "tf.estimator.SessionRunArgs.feed_dict", + "tf.compat.v1.train.SessionRunArgs.fetches": "tf.estimator.SessionRunArgs.fetches", + "tf.compat.v1.train.SessionRunArgs.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.train.SessionRunArgs.options": "tf.estimator.SessionRunArgs.options", + "tf.compat.v1.train.SessionRunContext": "tf.estimator.SessionRunContext", + "tf.compat.v1.train.SessionRunContext.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.SessionRunContext.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.SessionRunContext.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.SessionRunContext.__init__": "tf.estimator.SessionRunContext.__init__", + "tf.compat.v1.train.SessionRunContext.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.SessionRunContext.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.SessionRunContext.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.SessionRunContext.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SessionRunContext.original_args": "tf.estimator.SessionRunContext.original_args", + "tf.compat.v1.train.SessionRunContext.request_stop": "tf.estimator.SessionRunContext.request_stop", + "tf.compat.v1.train.SessionRunContext.session": "tf.estimator.SessionRunContext.session", + "tf.compat.v1.train.SessionRunContext.stop_requested": "tf.estimator.SessionRunContext.stop_requested", + "tf.compat.v1.train.SessionRunHook": "tf.estimator.SessionRunHook", + "tf.compat.v1.train.SessionRunHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.SessionRunHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.SessionRunHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.SessionRunHook.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.train.SessionRunHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.SessionRunHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.SessionRunHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.SessionRunHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SessionRunHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.train.SessionRunHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.compat.v1.train.SessionRunHook.before_run": "tf.estimator.SessionRunHook.before_run", + "tf.compat.v1.train.SessionRunHook.begin": "tf.estimator.SessionRunHook.begin", + "tf.compat.v1.train.SessionRunHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.train.SessionRunValues": "tf.estimator.SessionRunValues", + "tf.compat.v1.train.SessionRunValues.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.train.SessionRunValues.__contains__": "tf.config.LogicalDevice.__contains__", + 
"tf.compat.v1.train.SessionRunValues.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.train.SessionRunValues.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.train.SessionRunValues.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.train.SessionRunValues.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.train.SessionRunValues.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.train.SessionRunValues.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.train.SessionRunValues.__le__": "tf.config.LogicalDevice.__le__", + "tf.compat.v1.train.SessionRunValues.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.train.SessionRunValues.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.train.SessionRunValues.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.train.SessionRunValues.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.train.SessionRunValues.__new__": "tf.estimator.SessionRunValues.__new__", + "tf.compat.v1.train.SessionRunValues.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.train.SessionRunValues.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.train.SessionRunValues.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.train.SessionRunValues.options": "tf.estimator.SessionRunValues.options", + "tf.compat.v1.train.SessionRunValues.results": "tf.estimator.SessionRunValues.results", + "tf.compat.v1.train.SessionRunValues.run_metadata": "tf.estimator.SessionRunValues.run_metadata", + "tf.compat.v1.train.SingularMonitoredSession.StepContext": "tf.compat.v1.train.MonitoredSession.StepContext", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__init__": "tf.compat.v1.train.MonitoredSession.StepContext.__init__", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.request_stop": "tf.compat.v1.train.MonitoredSession.StepContext.request_stop", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.run_with_hooks": "tf.compat.v1.train.MonitoredSession.StepContext.run_with_hooks", + "tf.compat.v1.train.SingularMonitoredSession.StepContext.session": "tf.compat.v1.train.MonitoredSession.StepContext.session", + "tf.compat.v1.train.SingularMonitoredSession.__enter__": "tf.compat.v1.train.MonitoredSession.__enter__", + "tf.compat.v1.train.SingularMonitoredSession.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.SingularMonitoredSession.__exit__": "tf.compat.v1.train.MonitoredSession.__exit__", + "tf.compat.v1.train.SingularMonitoredSession.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.SingularMonitoredSession.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.SingularMonitoredSession.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.SingularMonitoredSession.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.SingularMonitoredSession.__ne__": 
"tf.keras.Model.__ne__", + "tf.compat.v1.train.SingularMonitoredSession.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SingularMonitoredSession.close": "tf.compat.v1.train.MonitoredSession.close", + "tf.compat.v1.train.SingularMonitoredSession.graph": "tf.compat.v1.train.MonitoredSession.graph", + "tf.compat.v1.train.SingularMonitoredSession.run": "tf.compat.v1.train.MonitoredSession.run", + "tf.compat.v1.train.SingularMonitoredSession.run_step_fn": "tf.compat.v1.train.MonitoredSession.run_step_fn", + "tf.compat.v1.train.SingularMonitoredSession.should_stop": "tf.compat.v1.train.MonitoredSession.should_stop", + "tf.compat.v1.train.StepCounterHook": "tf.estimator.StepCounterHook", + "tf.compat.v1.train.StepCounterHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.StepCounterHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.StepCounterHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.StepCounterHook.__init__": "tf.estimator.StepCounterHook.__init__", + "tf.compat.v1.train.StepCounterHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.StepCounterHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.StepCounterHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.StepCounterHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.StepCounterHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.train.StepCounterHook.after_run": "tf.estimator.StepCounterHook.after_run", + "tf.compat.v1.train.StepCounterHook.before_run": "tf.estimator.StepCounterHook.before_run", + "tf.compat.v1.train.StepCounterHook.begin": "tf.estimator.StepCounterHook.begin", + "tf.compat.v1.train.StepCounterHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.train.StopAtStepHook": "tf.estimator.StopAtStepHook", + "tf.compat.v1.train.StopAtStepHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.StopAtStepHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.StopAtStepHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.StopAtStepHook.__init__": "tf.estimator.StopAtStepHook.__init__", + "tf.compat.v1.train.StopAtStepHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.StopAtStepHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.StopAtStepHook.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.StopAtStepHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.StopAtStepHook.after_create_session": "tf.estimator.StopAtStepHook.after_create_session", + "tf.compat.v1.train.StopAtStepHook.after_run": "tf.estimator.StopAtStepHook.after_run", + "tf.compat.v1.train.StopAtStepHook.before_run": "tf.estimator.StopAtStepHook.before_run", + "tf.compat.v1.train.StopAtStepHook.begin": "tf.estimator.StopAtStepHook.begin", + "tf.compat.v1.train.StopAtStepHook.end": "tf.estimator.SessionRunHook.end", + "tf.compat.v1.train.SummarySaverHook": "tf.estimator.SummarySaverHook", + "tf.compat.v1.train.SummarySaverHook.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.SummarySaverHook.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.SummarySaverHook.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.SummarySaverHook.__init__": "tf.estimator.SummarySaverHook.__init__", + "tf.compat.v1.train.SummarySaverHook.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.SummarySaverHook.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.SummarySaverHook.__ne__": "tf.keras.Model.__ne__", + 
"tf.compat.v1.train.SummarySaverHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SummarySaverHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.compat.v1.train.SummarySaverHook.after_run": "tf.estimator.SummarySaverHook.after_run", + "tf.compat.v1.train.SummarySaverHook.before_run": "tf.estimator.SummarySaverHook.before_run", + "tf.compat.v1.train.SummarySaverHook.begin": "tf.estimator.SummarySaverHook.begin", + "tf.compat.v1.train.SummarySaverHook.end": "tf.estimator.SummarySaverHook.end", + "tf.compat.v1.train.Supervisor.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.Supervisor.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.Supervisor.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.Supervisor.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.Supervisor.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.Supervisor.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.Supervisor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.Supervisor.loop": "tf.compat.v1.train.Supervisor.Loop", + "tf.compat.v1.train.Supervisor.prepare_or_wait_for_session": "tf.compat.v1.train.Supervisor.PrepareSession", + "tf.compat.v1.train.Supervisor.request_stop": "tf.compat.v1.train.Supervisor.RequestStop", + "tf.compat.v1.train.Supervisor.should_stop": "tf.compat.v1.train.Supervisor.ShouldStop", + "tf.compat.v1.train.Supervisor.start_queue_runners": "tf.compat.v1.train.Supervisor.StartQueueRunners", + "tf.compat.v1.train.Supervisor.start_standard_services": "tf.compat.v1.train.Supervisor.StartStandardServices", + "tf.compat.v1.train.Supervisor.stop": "tf.compat.v1.train.Supervisor.Stop", + "tf.compat.v1.train.Supervisor.stop_on_exception": "tf.compat.v1.train.Supervisor.StopOnException", + "tf.compat.v1.train.Supervisor.summary_computed": "tf.compat.v1.train.Supervisor.SummaryComputed", + "tf.compat.v1.train.Supervisor.wait_for_stop": "tf.compat.v1.train.Supervisor.WaitForStop", + "tf.compat.v1.train.SyncReplicasOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.SyncReplicasOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.SyncReplicasOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.SyncReplicasOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.SyncReplicasOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.SyncReplicasOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.SyncReplicasOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.SyncReplicasOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.SyncReplicasOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.VocabInfo": "tf.estimator.VocabInfo", + "tf.compat.v1.train.VocabInfo.__add__": "tf.config.LogicalDevice.__add__", + "tf.compat.v1.train.VocabInfo.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.compat.v1.train.VocabInfo.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.compat.v1.train.VocabInfo.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.compat.v1.train.VocabInfo.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.compat.v1.train.VocabInfo.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.compat.v1.train.VocabInfo.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.train.VocabInfo.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.compat.v1.train.VocabInfo.__le__": "tf.config.LogicalDevice.__le__", + 
"tf.compat.v1.train.VocabInfo.__len__": "tf.config.LogicalDevice.__len__", + "tf.compat.v1.train.VocabInfo.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.compat.v1.train.VocabInfo.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.compat.v1.train.VocabInfo.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.compat.v1.train.VocabInfo.__new__": "tf.estimator.VocabInfo.__new__", + "tf.compat.v1.train.VocabInfo.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.compat.v1.train.VocabInfo.axis": "tf.estimator.VocabInfo.axis", + "tf.compat.v1.train.VocabInfo.backup_initializer": "tf.estimator.VocabInfo.backup_initializer", + "tf.compat.v1.train.VocabInfo.count": "tf.config.LogicalDevice.count", + "tf.compat.v1.train.VocabInfo.index": "tf.config.LogicalDevice.index", + "tf.compat.v1.train.VocabInfo.new_vocab": "tf.estimator.VocabInfo.new_vocab", + "tf.compat.v1.train.VocabInfo.new_vocab_size": "tf.estimator.VocabInfo.new_vocab_size", + "tf.compat.v1.train.VocabInfo.num_oov_buckets": "tf.estimator.VocabInfo.num_oov_buckets", + "tf.compat.v1.train.VocabInfo.old_vocab": "tf.estimator.VocabInfo.old_vocab", + "tf.compat.v1.train.VocabInfo.old_vocab_size": "tf.estimator.VocabInfo.old_vocab_size", + "tf.compat.v1.train.WorkerSessionCreator.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.WorkerSessionCreator.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.WorkerSessionCreator.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.WorkerSessionCreator.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.WorkerSessionCreator.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.WorkerSessionCreator.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.WorkerSessionCreator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.checkpoints_iterator": "tf.train.checkpoints_iterator", + "tf.compat.v1.train.experimental.DynamicLossScale": "tf.mixed_precision.experimental.DynamicLossScale", + "tf.compat.v1.train.experimental.DynamicLossScale.__call__": "tf.mixed_precision.experimental.DynamicLossScale.__call__", + "tf.compat.v1.train.experimental.DynamicLossScale.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.experimental.DynamicLossScale.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.experimental.DynamicLossScale.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.experimental.DynamicLossScale.__init__": "tf.mixed_precision.experimental.DynamicLossScale.__init__", + "tf.compat.v1.train.experimental.DynamicLossScale.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.experimental.DynamicLossScale.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.experimental.DynamicLossScale.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.experimental.DynamicLossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.experimental.DynamicLossScale.get_config": "tf.mixed_precision.experimental.DynamicLossScale.get_config", + "tf.compat.v1.train.experimental.DynamicLossScale.increment_period": "tf.mixed_precision.experimental.DynamicLossScale.increment_period", + "tf.compat.v1.train.experimental.DynamicLossScale.initial_loss_scale": "tf.mixed_precision.experimental.DynamicLossScale.initial_loss_scale", + "tf.compat.v1.train.experimental.DynamicLossScale.multiplier": "tf.mixed_precision.experimental.DynamicLossScale.multiplier", + "tf.compat.v1.train.experimental.DynamicLossScale.update": "tf.mixed_precision.experimental.DynamicLossScale.update", + "tf.compat.v1.train.experimental.FixedLossScale": 
"tf.mixed_precision.experimental.FixedLossScale", + "tf.compat.v1.train.experimental.FixedLossScale.__call__": "tf.mixed_precision.experimental.FixedLossScale.__call__", + "tf.compat.v1.train.experimental.FixedLossScale.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.experimental.FixedLossScale.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.experimental.FixedLossScale.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.experimental.FixedLossScale.__init__": "tf.mixed_precision.experimental.FixedLossScale.__init__", + "tf.compat.v1.train.experimental.FixedLossScale.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.experimental.FixedLossScale.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.experimental.FixedLossScale.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.experimental.FixedLossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.experimental.FixedLossScale.get_config": "tf.mixed_precision.experimental.FixedLossScale.get_config", + "tf.compat.v1.train.experimental.FixedLossScale.update": "tf.mixed_precision.experimental.FixedLossScale.update", + "tf.compat.v1.train.experimental.LossScale": "tf.mixed_precision.experimental.LossScale", + "tf.compat.v1.train.experimental.LossScale.__call__": "tf.mixed_precision.experimental.LossScale.__call__", + "tf.compat.v1.train.experimental.LossScale.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.experimental.LossScale.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.experimental.LossScale.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.experimental.LossScale.__init__": "tf.mixed_precision.experimental.LossScale.__init__", + "tf.compat.v1.train.experimental.LossScale.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.experimental.LossScale.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.experimental.LossScale.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.experimental.LossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.experimental.LossScale.get_config": "tf.mixed_precision.experimental.LossScale.get_config", + "tf.compat.v1.train.experimental.LossScale.update": "tf.mixed_precision.experimental.LossScale.update", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.get_name": "tf.compat.v1.train.Optimizer.get_name", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.get_slot": "tf.compat.v1.train.Optimizer.get_slot", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.get_slot_names": "tf.compat.v1.train.Optimizer.get_slot_names", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.minimize": "tf.compat.v1.train.Optimizer.minimize", + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.variables": 
"tf.compat.v1.train.Optimizer.variables", + "tf.compat.v1.train.experimental.PythonState": "tf.train.experimental.PythonState", + "tf.compat.v1.train.experimental.PythonState.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.experimental.PythonState.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.experimental.PythonState.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.experimental.PythonState.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.compat.v1.train.experimental.PythonState.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.experimental.PythonState.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.experimental.PythonState.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.experimental.PythonState.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.experimental.PythonState.deserialize": "tf.train.experimental.PythonState.deserialize", + "tf.compat.v1.train.experimental.PythonState.serialize": "tf.train.experimental.PythonState.serialize", + "tf.compat.v1.train.get_checkpoint_state": "tf.train.get_checkpoint_state", + "tf.compat.v1.train.latest_checkpoint": "tf.train.latest_checkpoint", + "tf.compat.v1.train.list_variables": "tf.train.list_variables", + "tf.compat.v1.train.load_checkpoint": "tf.train.load_checkpoint", + "tf.compat.v1.train.load_variable": "tf.train.load_variable", + "tf.compat.v1.train.match_filenames_once": "tf.io.match_filenames_once", + "tf.compat.v1.train.piecewise_constant_decay": "tf.compat.v1.train.piecewise_constant", + "tf.compat.v1.train.queue_runner.QueueRunner": "tf.compat.v1.train.QueueRunner", + "tf.compat.v1.train.queue_runner.QueueRunner.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.train.queue_runner.QueueRunner.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.train.queue_runner.QueueRunner.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.train.queue_runner.QueueRunner.__init__": "tf.compat.v1.train.QueueRunner.__init__", + "tf.compat.v1.train.queue_runner.QueueRunner.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.train.queue_runner.QueueRunner.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.train.queue_runner.QueueRunner.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.train.queue_runner.QueueRunner.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.train.queue_runner.QueueRunner.cancel_op": "tf.compat.v1.train.QueueRunner.cancel_op", + "tf.compat.v1.train.queue_runner.QueueRunner.close_op": "tf.compat.v1.train.QueueRunner.close_op", + "tf.compat.v1.train.queue_runner.QueueRunner.create_threads": "tf.compat.v1.train.QueueRunner.create_threads", + "tf.compat.v1.train.queue_runner.QueueRunner.enqueue_ops": "tf.compat.v1.train.QueueRunner.enqueue_ops", + "tf.compat.v1.train.queue_runner.QueueRunner.exceptions_raised": "tf.compat.v1.train.QueueRunner.exceptions_raised", + "tf.compat.v1.train.queue_runner.QueueRunner.from_proto": "tf.compat.v1.train.QueueRunner.from_proto", + "tf.compat.v1.train.queue_runner.QueueRunner.name": "tf.compat.v1.train.QueueRunner.name", + "tf.compat.v1.train.queue_runner.QueueRunner.queue": "tf.compat.v1.train.QueueRunner.queue", + "tf.compat.v1.train.queue_runner.QueueRunner.queue_closed_exception_types": "tf.compat.v1.train.QueueRunner.queue_closed_exception_types", + "tf.compat.v1.train.queue_runner.QueueRunner.to_proto": "tf.compat.v1.train.QueueRunner.to_proto", + "tf.compat.v1.train.queue_runner.add_queue_runner": "tf.compat.v1.train.add_queue_runner", + "tf.compat.v1.train.queue_runner.start_queue_runners": 
"tf.compat.v1.train.start_queue_runners", + "tf.compat.v1.train.write_graph": "tf.io.write_graph", + "tf.compat.v1.truediv": "tf.math.truediv", + "tf.compat.v1.truncated_normal": "tf.random.truncated_normal", + "tf.compat.v1.truncated_normal_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.truncated_normal_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.truncated_normal_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.truncated_normal_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.truncated_normal_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.truncated_normal_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.truncated_normal_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.truncatediv": "tf.truncatediv", + "tf.compat.v1.truncatemod": "tf.truncatemod", + "tf.compat.v1.uint16": "tf.dtypes.uint16", + "tf.compat.v1.uint32": "tf.dtypes.uint32", + "tf.compat.v1.uint64": "tf.dtypes.uint64", + "tf.compat.v1.uint8": "tf.dtypes.uint8", + "tf.compat.v1.uniform_unit_scaling_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.uniform_unit_scaling_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.uniform_unit_scaling_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.uniform_unit_scaling_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.uniform_unit_scaling_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.uniform_unit_scaling_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.uniform_unit_scaling_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.unique": "tf.unique", + "tf.compat.v1.unique_with_counts": "tf.unique_with_counts", + "tf.compat.v1.unravel_index": "tf.unravel_index", + "tf.compat.v1.unsorted_segment_max": "tf.math.unsorted_segment_max", + "tf.compat.v1.unsorted_segment_mean": "tf.math.unsorted_segment_mean", + "tf.compat.v1.unsorted_segment_min": "tf.math.unsorted_segment_min", + "tf.compat.v1.unsorted_segment_prod": "tf.math.unsorted_segment_prod", + "tf.compat.v1.unsorted_segment_sqrt_n": "tf.math.unsorted_segment_sqrt_n", + "tf.compat.v1.unsorted_segment_sum": "tf.math.unsorted_segment_sum", + "tf.compat.v1.unstack": "tf.unstack", + "tf.compat.v1.variable_scope.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.variable_scope.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.variable_scope.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.variable_scope.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.variable_scope.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.variable_scope.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.variable_scope.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.variance_scaling_initializer": "tf.compat.v1.keras.initializers.VarianceScaling", + "tf.compat.v1.variance_scaling_initializer.__call__": "tf.compat.v1.keras.initializers.VarianceScaling.__call__", + "tf.compat.v1.variance_scaling_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.variance_scaling_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.variance_scaling_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.variance_scaling_initializer.__init__": "tf.compat.v1.keras.initializers.VarianceScaling.__init__", + "tf.compat.v1.variance_scaling_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.variance_scaling_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.variance_scaling_initializer.__ne__": "tf.keras.Model.__ne__", 
+ "tf.compat.v1.variance_scaling_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.variance_scaling_initializer.get_config": "tf.compat.v1.keras.initializers.VarianceScaling.get_config", + "tf.compat.v1.variant": "tf.dtypes.variant", + "tf.compat.v1.vectorized_map": "tf.vectorized_map", + "tf.compat.v1.where_v2": "tf.where", + "tf.compat.v1.write_file": "tf.io.write_file", + "tf.compat.v1.xla.experimental.compile": "tf.xla.experimental.compile", + "tf.compat.v1.xla.experimental.jit_scope": "tf.xla.experimental.jit_scope", + "tf.compat.v1.zeros": "tf.zeros", + "tf.compat.v1.zeros_initializer": "tf.compat.v1.keras.initializers.Zeros", + "tf.compat.v1.zeros_initializer.__call__": "tf.compat.v1.keras.initializers.Zeros.__call__", + "tf.compat.v1.zeros_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.compat.v1.zeros_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.compat.v1.zeros_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.compat.v1.zeros_initializer.__init__": "tf.compat.v1.keras.initializers.Zeros.__init__", + "tf.compat.v1.zeros_initializer.__le__": "tf.keras.Model.__le__", + "tf.compat.v1.zeros_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.compat.v1.zeros_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.compat.v1.zeros_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.compat.v1.zeros_initializer.get_config": "tf.compat.v1.keras.initializers.Zeros.get_config", + "tf.compat.v1.zeta": "tf.math.zeta", + "tf.complex": "tf.dtypes.complex", + "tf.complex128": "tf.dtypes.complex128", + "tf.complex64": "tf.dtypes.complex64", + "tf.config.LogicalDevice.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.config.LogicalDeviceConfiguration.__add__": "tf.config.LogicalDevice.__add__", + "tf.config.LogicalDeviceConfiguration.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.config.LogicalDeviceConfiguration.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.config.LogicalDeviceConfiguration.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.config.LogicalDeviceConfiguration.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.config.LogicalDeviceConfiguration.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.config.LogicalDeviceConfiguration.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.config.LogicalDeviceConfiguration.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.config.LogicalDeviceConfiguration.__le__": "tf.config.LogicalDevice.__le__", + "tf.config.LogicalDeviceConfiguration.__len__": "tf.config.LogicalDevice.__len__", + "tf.config.LogicalDeviceConfiguration.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.config.LogicalDeviceConfiguration.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.config.LogicalDeviceConfiguration.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.config.LogicalDeviceConfiguration.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.config.LogicalDeviceConfiguration.count": "tf.config.LogicalDevice.count", + "tf.config.LogicalDeviceConfiguration.index": "tf.config.LogicalDevice.index", + "tf.config.PhysicalDevice.__add__": "tf.config.LogicalDevice.__add__", + "tf.config.PhysicalDevice.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.config.PhysicalDevice.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.config.PhysicalDevice.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.config.PhysicalDevice.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.config.PhysicalDevice.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.config.PhysicalDevice.__init__": 
"tf.keras.constraints.Constraint.__init__", + "tf.config.PhysicalDevice.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.config.PhysicalDevice.__le__": "tf.config.LogicalDevice.__le__", + "tf.config.PhysicalDevice.__len__": "tf.config.LogicalDevice.__len__", + "tf.config.PhysicalDevice.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.config.PhysicalDevice.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.config.PhysicalDevice.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.config.PhysicalDevice.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.config.PhysicalDevice.count": "tf.config.LogicalDevice.count", + "tf.config.PhysicalDevice.index": "tf.config.LogicalDevice.index", + "tf.config.experimental.ClusterDeviceFilters.__eq__": "tf.keras.Model.__eq__", + "tf.config.experimental.ClusterDeviceFilters.__ge__": "tf.keras.Model.__ge__", + "tf.config.experimental.ClusterDeviceFilters.__gt__": "tf.keras.Model.__gt__", + "tf.config.experimental.ClusterDeviceFilters.__le__": "tf.keras.Model.__le__", + "tf.config.experimental.ClusterDeviceFilters.__lt__": "tf.keras.Model.__lt__", + "tf.config.experimental.ClusterDeviceFilters.__ne__": "tf.keras.Model.__ne__", + "tf.config.experimental.ClusterDeviceFilters.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.config.experimental.VirtualDeviceConfiguration": "tf.config.LogicalDeviceConfiguration", + "tf.config.experimental.VirtualDeviceConfiguration.__add__": "tf.config.LogicalDevice.__add__", + "tf.config.experimental.VirtualDeviceConfiguration.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.config.experimental.VirtualDeviceConfiguration.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.config.experimental.VirtualDeviceConfiguration.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.config.experimental.VirtualDeviceConfiguration.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.config.experimental.VirtualDeviceConfiguration.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.config.experimental.VirtualDeviceConfiguration.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.config.experimental.VirtualDeviceConfiguration.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.config.experimental.VirtualDeviceConfiguration.__le__": "tf.config.LogicalDevice.__le__", + "tf.config.experimental.VirtualDeviceConfiguration.__len__": "tf.config.LogicalDevice.__len__", + "tf.config.experimental.VirtualDeviceConfiguration.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.config.experimental.VirtualDeviceConfiguration.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.config.experimental.VirtualDeviceConfiguration.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.config.experimental.VirtualDeviceConfiguration.__new__": "tf.config.LogicalDeviceConfiguration.__new__", + "tf.config.experimental.VirtualDeviceConfiguration.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.config.experimental.VirtualDeviceConfiguration.count": "tf.config.LogicalDevice.count", + "tf.config.experimental.VirtualDeviceConfiguration.index": "tf.config.LogicalDevice.index", + "tf.config.experimental.VirtualDeviceConfiguration.memory_limit": "tf.config.LogicalDeviceConfiguration.memory_limit", + "tf.config.experimental.get_virtual_device_configuration": "tf.config.get_logical_device_configuration", + "tf.config.experimental.get_visible_devices": "tf.config.get_visible_devices", + "tf.config.experimental.list_logical_devices": "tf.config.list_logical_devices", + "tf.config.experimental.list_physical_devices": "tf.config.list_physical_devices", + 
"tf.config.experimental.set_virtual_device_configuration": "tf.config.set_logical_device_configuration", + "tf.config.experimental.set_visible_devices": "tf.config.set_visible_devices", + "tf.constant_initializer.__call__": "tf.keras.initializers.Constant.__call__", + "tf.constant_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.constant_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.constant_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.constant_initializer.__init__": "tf.keras.initializers.Constant.__init__", + "tf.constant_initializer.__le__": "tf.keras.Model.__le__", + "tf.constant_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.constant_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.constant_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.constant_initializer.get_config": "tf.keras.initializers.Constant.get_config", + "tf.cos": "tf.math.cos", + "tf.cosh": "tf.math.cosh", + "tf.cumsum": "tf.math.cumsum", + "tf.data.Dataset.__eq__": "tf.keras.Model.__eq__", + "tf.data.Dataset.__ge__": "tf.keras.Model.__ge__", + "tf.data.Dataset.__gt__": "tf.keras.Model.__gt__", + "tf.data.Dataset.__le__": "tf.keras.Model.__le__", + "tf.data.Dataset.__lt__": "tf.keras.Model.__lt__", + "tf.data.Dataset.__ne__": "tf.keras.Model.__ne__", + "tf.data.Dataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.DatasetSpec.__eq__": "tf.TypeSpec.__eq__", + "tf.data.DatasetSpec.__ge__": "tf.keras.Model.__ge__", + "tf.data.DatasetSpec.__gt__": "tf.keras.Model.__gt__", + "tf.data.DatasetSpec.__le__": "tf.keras.Model.__le__", + "tf.data.DatasetSpec.__lt__": "tf.keras.Model.__lt__", + "tf.data.DatasetSpec.__ne__": "tf.TypeSpec.__ne__", + "tf.data.DatasetSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.DatasetSpec.is_compatible_with": "tf.TypeSpec.is_compatible_with", + "tf.data.DatasetSpec.most_specific_compatible_type": "tf.TypeSpec.most_specific_compatible_type", + "tf.data.FixedLengthRecordDataset.__eq__": "tf.keras.Model.__eq__", + "tf.data.FixedLengthRecordDataset.__ge__": "tf.keras.Model.__ge__", + "tf.data.FixedLengthRecordDataset.__gt__": "tf.keras.Model.__gt__", + "tf.data.FixedLengthRecordDataset.__iter__": "tf.data.Dataset.__iter__", + "tf.data.FixedLengthRecordDataset.__le__": "tf.keras.Model.__le__", + "tf.data.FixedLengthRecordDataset.__lt__": "tf.keras.Model.__lt__", + "tf.data.FixedLengthRecordDataset.__ne__": "tf.keras.Model.__ne__", + "tf.data.FixedLengthRecordDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.FixedLengthRecordDataset.apply": "tf.data.Dataset.apply", + "tf.data.FixedLengthRecordDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.data.FixedLengthRecordDataset.batch": "tf.data.Dataset.batch", + "tf.data.FixedLengthRecordDataset.cache": "tf.data.Dataset.cache", + "tf.data.FixedLengthRecordDataset.concatenate": "tf.data.Dataset.concatenate", + "tf.data.FixedLengthRecordDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.data.FixedLengthRecordDataset.filter": "tf.data.Dataset.filter", + "tf.data.FixedLengthRecordDataset.flat_map": "tf.data.Dataset.flat_map", + "tf.data.FixedLengthRecordDataset.from_generator": "tf.data.Dataset.from_generator", + "tf.data.FixedLengthRecordDataset.from_tensor_slices": "tf.data.Dataset.from_tensor_slices", + "tf.data.FixedLengthRecordDataset.from_tensors": "tf.data.Dataset.from_tensors", + "tf.data.FixedLengthRecordDataset.interleave": "tf.data.Dataset.interleave", + "tf.data.FixedLengthRecordDataset.list_files": "tf.data.Dataset.list_files", + 
"tf.data.FixedLengthRecordDataset.map": "tf.data.Dataset.map", + "tf.data.FixedLengthRecordDataset.options": "tf.data.Dataset.options", + "tf.data.FixedLengthRecordDataset.padded_batch": "tf.data.Dataset.padded_batch", + "tf.data.FixedLengthRecordDataset.prefetch": "tf.data.Dataset.prefetch", + "tf.data.FixedLengthRecordDataset.range": "tf.data.Dataset.range", + "tf.data.FixedLengthRecordDataset.reduce": "tf.data.Dataset.reduce", + "tf.data.FixedLengthRecordDataset.repeat": "tf.data.Dataset.repeat", + "tf.data.FixedLengthRecordDataset.shard": "tf.data.Dataset.shard", + "tf.data.FixedLengthRecordDataset.shuffle": "tf.data.Dataset.shuffle", + "tf.data.FixedLengthRecordDataset.skip": "tf.data.Dataset.skip", + "tf.data.FixedLengthRecordDataset.take": "tf.data.Dataset.take", + "tf.data.FixedLengthRecordDataset.unbatch": "tf.data.Dataset.unbatch", + "tf.data.FixedLengthRecordDataset.window": "tf.data.Dataset.window", + "tf.data.FixedLengthRecordDataset.with_options": "tf.data.Dataset.with_options", + "tf.data.FixedLengthRecordDataset.zip": "tf.data.Dataset.zip", + "tf.data.Options.__ge__": "tf.keras.Model.__ge__", + "tf.data.Options.__gt__": "tf.keras.Model.__gt__", + "tf.data.Options.__le__": "tf.keras.Model.__le__", + "tf.data.Options.__lt__": "tf.keras.Model.__lt__", + "tf.data.Options.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.TFRecordDataset.__eq__": "tf.keras.Model.__eq__", + "tf.data.TFRecordDataset.__ge__": "tf.keras.Model.__ge__", + "tf.data.TFRecordDataset.__gt__": "tf.keras.Model.__gt__", + "tf.data.TFRecordDataset.__iter__": "tf.data.Dataset.__iter__", + "tf.data.TFRecordDataset.__le__": "tf.keras.Model.__le__", + "tf.data.TFRecordDataset.__lt__": "tf.keras.Model.__lt__", + "tf.data.TFRecordDataset.__ne__": "tf.keras.Model.__ne__", + "tf.data.TFRecordDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.TFRecordDataset.apply": "tf.data.Dataset.apply", + "tf.data.TFRecordDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.data.TFRecordDataset.batch": "tf.data.Dataset.batch", + "tf.data.TFRecordDataset.cache": "tf.data.Dataset.cache", + "tf.data.TFRecordDataset.concatenate": "tf.data.Dataset.concatenate", + "tf.data.TFRecordDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.data.TFRecordDataset.filter": "tf.data.Dataset.filter", + "tf.data.TFRecordDataset.flat_map": "tf.data.Dataset.flat_map", + "tf.data.TFRecordDataset.from_generator": "tf.data.Dataset.from_generator", + "tf.data.TFRecordDataset.from_tensor_slices": "tf.data.Dataset.from_tensor_slices", + "tf.data.TFRecordDataset.from_tensors": "tf.data.Dataset.from_tensors", + "tf.data.TFRecordDataset.interleave": "tf.data.Dataset.interleave", + "tf.data.TFRecordDataset.list_files": "tf.data.Dataset.list_files", + "tf.data.TFRecordDataset.map": "tf.data.Dataset.map", + "tf.data.TFRecordDataset.options": "tf.data.Dataset.options", + "tf.data.TFRecordDataset.padded_batch": "tf.data.Dataset.padded_batch", + "tf.data.TFRecordDataset.prefetch": "tf.data.Dataset.prefetch", + "tf.data.TFRecordDataset.range": "tf.data.Dataset.range", + "tf.data.TFRecordDataset.reduce": "tf.data.Dataset.reduce", + "tf.data.TFRecordDataset.repeat": "tf.data.Dataset.repeat", + "tf.data.TFRecordDataset.shard": "tf.data.Dataset.shard", + "tf.data.TFRecordDataset.shuffle": "tf.data.Dataset.shuffle", + "tf.data.TFRecordDataset.skip": "tf.data.Dataset.skip", + "tf.data.TFRecordDataset.take": "tf.data.Dataset.take", + "tf.data.TFRecordDataset.unbatch": "tf.data.Dataset.unbatch", + 
"tf.data.TFRecordDataset.window": "tf.data.Dataset.window", + "tf.data.TFRecordDataset.with_options": "tf.data.Dataset.with_options", + "tf.data.TFRecordDataset.zip": "tf.data.Dataset.zip", + "tf.data.TextLineDataset.__eq__": "tf.keras.Model.__eq__", + "tf.data.TextLineDataset.__ge__": "tf.keras.Model.__ge__", + "tf.data.TextLineDataset.__gt__": "tf.keras.Model.__gt__", + "tf.data.TextLineDataset.__iter__": "tf.data.Dataset.__iter__", + "tf.data.TextLineDataset.__le__": "tf.keras.Model.__le__", + "tf.data.TextLineDataset.__lt__": "tf.keras.Model.__lt__", + "tf.data.TextLineDataset.__ne__": "tf.keras.Model.__ne__", + "tf.data.TextLineDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.TextLineDataset.apply": "tf.data.Dataset.apply", + "tf.data.TextLineDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.data.TextLineDataset.batch": "tf.data.Dataset.batch", + "tf.data.TextLineDataset.cache": "tf.data.Dataset.cache", + "tf.data.TextLineDataset.concatenate": "tf.data.Dataset.concatenate", + "tf.data.TextLineDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.data.TextLineDataset.filter": "tf.data.Dataset.filter", + "tf.data.TextLineDataset.flat_map": "tf.data.Dataset.flat_map", + "tf.data.TextLineDataset.from_generator": "tf.data.Dataset.from_generator", + "tf.data.TextLineDataset.from_tensor_slices": "tf.data.Dataset.from_tensor_slices", + "tf.data.TextLineDataset.from_tensors": "tf.data.Dataset.from_tensors", + "tf.data.TextLineDataset.interleave": "tf.data.Dataset.interleave", + "tf.data.TextLineDataset.list_files": "tf.data.Dataset.list_files", + "tf.data.TextLineDataset.map": "tf.data.Dataset.map", + "tf.data.TextLineDataset.options": "tf.data.Dataset.options", + "tf.data.TextLineDataset.padded_batch": "tf.data.Dataset.padded_batch", + "tf.data.TextLineDataset.prefetch": "tf.data.Dataset.prefetch", + "tf.data.TextLineDataset.range": "tf.data.Dataset.range", + "tf.data.TextLineDataset.reduce": "tf.data.Dataset.reduce", + "tf.data.TextLineDataset.repeat": "tf.data.Dataset.repeat", + "tf.data.TextLineDataset.shard": "tf.data.Dataset.shard", + "tf.data.TextLineDataset.shuffle": "tf.data.Dataset.shuffle", + "tf.data.TextLineDataset.skip": "tf.data.Dataset.skip", + "tf.data.TextLineDataset.take": "tf.data.Dataset.take", + "tf.data.TextLineDataset.unbatch": "tf.data.Dataset.unbatch", + "tf.data.TextLineDataset.window": "tf.data.Dataset.window", + "tf.data.TextLineDataset.with_options": "tf.data.Dataset.with_options", + "tf.data.TextLineDataset.zip": "tf.data.Dataset.zip", + "tf.data.experimental.CheckpointInputPipelineHook.__eq__": "tf.keras.Model.__eq__", + "tf.data.experimental.CheckpointInputPipelineHook.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.CheckpointInputPipelineHook.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.CheckpointInputPipelineHook.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.CheckpointInputPipelineHook.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.CheckpointInputPipelineHook.__ne__": "tf.keras.Model.__ne__", + "tf.data.experimental.CheckpointInputPipelineHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.CsvDataset.__eq__": "tf.keras.Model.__eq__", + "tf.data.experimental.CsvDataset.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.CsvDataset.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.CsvDataset.__iter__": "tf.data.Dataset.__iter__", + "tf.data.experimental.CsvDataset.__le__": "tf.keras.Model.__le__", + 
"tf.data.experimental.CsvDataset.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.CsvDataset.__ne__": "tf.keras.Model.__ne__", + "tf.data.experimental.CsvDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.CsvDataset.apply": "tf.data.Dataset.apply", + "tf.data.experimental.CsvDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.data.experimental.CsvDataset.batch": "tf.data.Dataset.batch", + "tf.data.experimental.CsvDataset.cache": "tf.data.Dataset.cache", + "tf.data.experimental.CsvDataset.concatenate": "tf.data.Dataset.concatenate", + "tf.data.experimental.CsvDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.data.experimental.CsvDataset.filter": "tf.data.Dataset.filter", + "tf.data.experimental.CsvDataset.flat_map": "tf.data.Dataset.flat_map", + "tf.data.experimental.CsvDataset.from_generator": "tf.data.Dataset.from_generator", + "tf.data.experimental.CsvDataset.from_tensor_slices": "tf.data.Dataset.from_tensor_slices", + "tf.data.experimental.CsvDataset.from_tensors": "tf.data.Dataset.from_tensors", + "tf.data.experimental.CsvDataset.interleave": "tf.data.Dataset.interleave", + "tf.data.experimental.CsvDataset.list_files": "tf.data.Dataset.list_files", + "tf.data.experimental.CsvDataset.map": "tf.data.Dataset.map", + "tf.data.experimental.CsvDataset.options": "tf.data.Dataset.options", + "tf.data.experimental.CsvDataset.padded_batch": "tf.data.Dataset.padded_batch", + "tf.data.experimental.CsvDataset.prefetch": "tf.data.Dataset.prefetch", + "tf.data.experimental.CsvDataset.range": "tf.data.Dataset.range", + "tf.data.experimental.CsvDataset.reduce": "tf.data.Dataset.reduce", + "tf.data.experimental.CsvDataset.repeat": "tf.data.Dataset.repeat", + "tf.data.experimental.CsvDataset.shard": "tf.data.Dataset.shard", + "tf.data.experimental.CsvDataset.shuffle": "tf.data.Dataset.shuffle", + "tf.data.experimental.CsvDataset.skip": "tf.data.Dataset.skip", + "tf.data.experimental.CsvDataset.take": "tf.data.Dataset.take", + "tf.data.experimental.CsvDataset.unbatch": "tf.data.Dataset.unbatch", + "tf.data.experimental.CsvDataset.window": "tf.data.Dataset.window", + "tf.data.experimental.CsvDataset.with_options": "tf.data.Dataset.with_options", + "tf.data.experimental.CsvDataset.zip": "tf.data.Dataset.zip", + "tf.data.experimental.DistributeOptions.__eq__": "tf.data.Options.__eq__", + "tf.data.experimental.DistributeOptions.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.DistributeOptions.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.DistributeOptions.__init__": "tf.data.Options.__init__", + "tf.data.experimental.DistributeOptions.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.DistributeOptions.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.DistributeOptions.__ne__": "tf.data.Options.__ne__", + "tf.data.experimental.DistributeOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.MapVectorizationOptions.__eq__": "tf.data.Options.__eq__", + "tf.data.experimental.MapVectorizationOptions.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.MapVectorizationOptions.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.MapVectorizationOptions.__init__": "tf.data.Options.__init__", + "tf.data.experimental.MapVectorizationOptions.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.MapVectorizationOptions.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.MapVectorizationOptions.__ne__": "tf.data.Options.__ne__", + 
"tf.data.experimental.MapVectorizationOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.OptimizationOptions.__eq__": "tf.data.Options.__eq__", + "tf.data.experimental.OptimizationOptions.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.OptimizationOptions.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.OptimizationOptions.__init__": "tf.data.Options.__init__", + "tf.data.experimental.OptimizationOptions.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.OptimizationOptions.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.OptimizationOptions.__ne__": "tf.data.Options.__ne__", + "tf.data.experimental.OptimizationOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.Optional.__eq__": "tf.keras.Model.__eq__", + "tf.data.experimental.Optional.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.Optional.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.Optional.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.data.experimental.Optional.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.Optional.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.Optional.__ne__": "tf.keras.Model.__ne__", + "tf.data.experimental.Optional.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.RandomDataset.__eq__": "tf.keras.Model.__eq__", + "tf.data.experimental.RandomDataset.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.RandomDataset.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.RandomDataset.__iter__": "tf.data.Dataset.__iter__", + "tf.data.experimental.RandomDataset.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.RandomDataset.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.RandomDataset.__ne__": "tf.keras.Model.__ne__", + "tf.data.experimental.RandomDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.RandomDataset.apply": "tf.data.Dataset.apply", + "tf.data.experimental.RandomDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.data.experimental.RandomDataset.batch": "tf.data.Dataset.batch", + "tf.data.experimental.RandomDataset.cache": "tf.data.Dataset.cache", + "tf.data.experimental.RandomDataset.concatenate": "tf.data.Dataset.concatenate", + "tf.data.experimental.RandomDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.data.experimental.RandomDataset.filter": "tf.data.Dataset.filter", + "tf.data.experimental.RandomDataset.flat_map": "tf.data.Dataset.flat_map", + "tf.data.experimental.RandomDataset.from_generator": "tf.data.Dataset.from_generator", + "tf.data.experimental.RandomDataset.from_tensor_slices": "tf.data.Dataset.from_tensor_slices", + "tf.data.experimental.RandomDataset.from_tensors": "tf.data.Dataset.from_tensors", + "tf.data.experimental.RandomDataset.interleave": "tf.data.Dataset.interleave", + "tf.data.experimental.RandomDataset.list_files": "tf.data.Dataset.list_files", + "tf.data.experimental.RandomDataset.map": "tf.data.Dataset.map", + "tf.data.experimental.RandomDataset.options": "tf.data.Dataset.options", + "tf.data.experimental.RandomDataset.padded_batch": "tf.data.Dataset.padded_batch", + "tf.data.experimental.RandomDataset.prefetch": "tf.data.Dataset.prefetch", + "tf.data.experimental.RandomDataset.range": "tf.data.Dataset.range", + "tf.data.experimental.RandomDataset.reduce": "tf.data.Dataset.reduce", + "tf.data.experimental.RandomDataset.repeat": "tf.data.Dataset.repeat", + "tf.data.experimental.RandomDataset.shard": 
"tf.data.Dataset.shard", + "tf.data.experimental.RandomDataset.shuffle": "tf.data.Dataset.shuffle", + "tf.data.experimental.RandomDataset.skip": "tf.data.Dataset.skip", + "tf.data.experimental.RandomDataset.take": "tf.data.Dataset.take", + "tf.data.experimental.RandomDataset.unbatch": "tf.data.Dataset.unbatch", + "tf.data.experimental.RandomDataset.window": "tf.data.Dataset.window", + "tf.data.experimental.RandomDataset.with_options": "tf.data.Dataset.with_options", + "tf.data.experimental.RandomDataset.zip": "tf.data.Dataset.zip", + "tf.data.experimental.Reducer.__eq__": "tf.keras.Model.__eq__", + "tf.data.experimental.Reducer.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.Reducer.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.Reducer.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.Reducer.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.Reducer.__ne__": "tf.keras.Model.__ne__", + "tf.data.experimental.Reducer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.SqlDataset.__eq__": "tf.keras.Model.__eq__", + "tf.data.experimental.SqlDataset.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.SqlDataset.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.SqlDataset.__iter__": "tf.data.Dataset.__iter__", + "tf.data.experimental.SqlDataset.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.SqlDataset.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.SqlDataset.__ne__": "tf.keras.Model.__ne__", + "tf.data.experimental.SqlDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.SqlDataset.apply": "tf.data.Dataset.apply", + "tf.data.experimental.SqlDataset.as_numpy_iterator": "tf.data.Dataset.as_numpy_iterator", + "tf.data.experimental.SqlDataset.batch": "tf.data.Dataset.batch", + "tf.data.experimental.SqlDataset.cache": "tf.data.Dataset.cache", + "tf.data.experimental.SqlDataset.concatenate": "tf.data.Dataset.concatenate", + "tf.data.experimental.SqlDataset.enumerate": "tf.data.Dataset.enumerate", + "tf.data.experimental.SqlDataset.filter": "tf.data.Dataset.filter", + "tf.data.experimental.SqlDataset.flat_map": "tf.data.Dataset.flat_map", + "tf.data.experimental.SqlDataset.from_generator": "tf.data.Dataset.from_generator", + "tf.data.experimental.SqlDataset.from_tensor_slices": "tf.data.Dataset.from_tensor_slices", + "tf.data.experimental.SqlDataset.from_tensors": "tf.data.Dataset.from_tensors", + "tf.data.experimental.SqlDataset.interleave": "tf.data.Dataset.interleave", + "tf.data.experimental.SqlDataset.list_files": "tf.data.Dataset.list_files", + "tf.data.experimental.SqlDataset.map": "tf.data.Dataset.map", + "tf.data.experimental.SqlDataset.options": "tf.data.Dataset.options", + "tf.data.experimental.SqlDataset.padded_batch": "tf.data.Dataset.padded_batch", + "tf.data.experimental.SqlDataset.prefetch": "tf.data.Dataset.prefetch", + "tf.data.experimental.SqlDataset.range": "tf.data.Dataset.range", + "tf.data.experimental.SqlDataset.reduce": "tf.data.Dataset.reduce", + "tf.data.experimental.SqlDataset.repeat": "tf.data.Dataset.repeat", + "tf.data.experimental.SqlDataset.shard": "tf.data.Dataset.shard", + "tf.data.experimental.SqlDataset.shuffle": "tf.data.Dataset.shuffle", + "tf.data.experimental.SqlDataset.skip": "tf.data.Dataset.skip", + "tf.data.experimental.SqlDataset.take": "tf.data.Dataset.take", + "tf.data.experimental.SqlDataset.unbatch": "tf.data.Dataset.unbatch", + "tf.data.experimental.SqlDataset.window": "tf.data.Dataset.window", + 
"tf.data.experimental.SqlDataset.with_options": "tf.data.Dataset.with_options", + "tf.data.experimental.SqlDataset.zip": "tf.data.Dataset.zip", + "tf.data.experimental.StatsAggregator.__eq__": "tf.keras.Model.__eq__", + "tf.data.experimental.StatsAggregator.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.StatsAggregator.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.StatsAggregator.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.StatsAggregator.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.StatsAggregator.__ne__": "tf.keras.Model.__ne__", + "tf.data.experimental.StatsAggregator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.StatsOptions.__eq__": "tf.data.Options.__eq__", + "tf.data.experimental.StatsOptions.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.StatsOptions.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.StatsOptions.__init__": "tf.data.Options.__init__", + "tf.data.experimental.StatsOptions.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.StatsOptions.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.StatsOptions.__ne__": "tf.data.Options.__ne__", + "tf.data.experimental.StatsOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.TFRecordWriter.__eq__": "tf.keras.Model.__eq__", + "tf.data.experimental.TFRecordWriter.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.TFRecordWriter.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.TFRecordWriter.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.TFRecordWriter.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.TFRecordWriter.__ne__": "tf.keras.Model.__ne__", + "tf.data.experimental.TFRecordWriter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.data.experimental.ThreadingOptions.__eq__": "tf.data.Options.__eq__", + "tf.data.experimental.ThreadingOptions.__ge__": "tf.keras.Model.__ge__", + "tf.data.experimental.ThreadingOptions.__gt__": "tf.keras.Model.__gt__", + "tf.data.experimental.ThreadingOptions.__init__": "tf.data.Options.__init__", + "tf.data.experimental.ThreadingOptions.__le__": "tf.keras.Model.__le__", + "tf.data.experimental.ThreadingOptions.__lt__": "tf.keras.Model.__lt__", + "tf.data.experimental.ThreadingOptions.__ne__": "tf.data.Options.__ne__", + "tf.data.experimental.ThreadingOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.CrossDeviceOps.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.CrossDeviceOps.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.CrossDeviceOps.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.CrossDeviceOps.__le__": "tf.keras.Model.__le__", + "tf.distribute.CrossDeviceOps.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.CrossDeviceOps.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.CrossDeviceOps.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.DistributedValues.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.DistributedValues.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.DistributedValues.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.DistributedValues.__le__": "tf.keras.Model.__le__", + "tf.distribute.DistributedValues.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.DistributedValues.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.DistributedValues.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.HierarchicalCopyAllReduce.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.HierarchicalCopyAllReduce.__ge__": 
"tf.keras.Model.__ge__", + "tf.distribute.HierarchicalCopyAllReduce.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.HierarchicalCopyAllReduce.__le__": "tf.keras.Model.__le__", + "tf.distribute.HierarchicalCopyAllReduce.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.HierarchicalCopyAllReduce.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.HierarchicalCopyAllReduce.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.HierarchicalCopyAllReduce.batch_reduce": "tf.distribute.CrossDeviceOps.batch_reduce", + "tf.distribute.HierarchicalCopyAllReduce.broadcast": "tf.distribute.CrossDeviceOps.broadcast", + "tf.distribute.HierarchicalCopyAllReduce.broadcast_implementation": "tf.distribute.CrossDeviceOps.broadcast_implementation", + "tf.distribute.HierarchicalCopyAllReduce.reduce": "tf.distribute.CrossDeviceOps.reduce", + "tf.distribute.InputContext.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.InputContext.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.InputContext.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.InputContext.__le__": "tf.keras.Model.__le__", + "tf.distribute.InputContext.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.InputContext.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.InputContext.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.MirroredStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.MirroredStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.MirroredStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.MirroredStrategy.__le__": "tf.keras.Model.__le__", + "tf.distribute.MirroredStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.MirroredStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.MirroredStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.MirroredStrategy.experimental_assign_to_logical_device": "tf.distribute.Strategy.experimental_assign_to_logical_device", + "tf.distribute.MirroredStrategy.experimental_distribute_values_from_function": "tf.distribute.Strategy.experimental_distribute_values_from_function", + "tf.distribute.MirroredStrategy.experimental_replicate_to_logical_devices": "tf.distribute.Strategy.experimental_replicate_to_logical_devices", + "tf.distribute.MirroredStrategy.experimental_split_to_logical_devices": "tf.distribute.Strategy.experimental_split_to_logical_devices", + "tf.distribute.NcclAllReduce.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.NcclAllReduce.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.NcclAllReduce.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.NcclAllReduce.__le__": "tf.keras.Model.__le__", + "tf.distribute.NcclAllReduce.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.NcclAllReduce.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.NcclAllReduce.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.NcclAllReduce.batch_reduce": "tf.distribute.CrossDeviceOps.batch_reduce", + "tf.distribute.NcclAllReduce.batch_reduce_implementation": "tf.distribute.HierarchicalCopyAllReduce.batch_reduce_implementation", + "tf.distribute.NcclAllReduce.broadcast": "tf.distribute.CrossDeviceOps.broadcast", + "tf.distribute.NcclAllReduce.broadcast_implementation": "tf.distribute.CrossDeviceOps.broadcast_implementation", + "tf.distribute.NcclAllReduce.reduce": "tf.distribute.CrossDeviceOps.reduce", + "tf.distribute.NcclAllReduce.reduce_implementation": "tf.distribute.HierarchicalCopyAllReduce.reduce_implementation", + "tf.distribute.OneDeviceStrategy.__eq__": "tf.keras.Model.__eq__", + 
"tf.distribute.OneDeviceStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.OneDeviceStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.OneDeviceStrategy.__le__": "tf.keras.Model.__le__", + "tf.distribute.OneDeviceStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.OneDeviceStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.OneDeviceStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.OneDeviceStrategy.experimental_assign_to_logical_device": "tf.distribute.Strategy.experimental_assign_to_logical_device", + "tf.distribute.OneDeviceStrategy.experimental_distribute_values_from_function": "tf.distribute.Strategy.experimental_distribute_values_from_function", + "tf.distribute.OneDeviceStrategy.experimental_make_numpy_dataset": "tf.distribute.MirroredStrategy.experimental_make_numpy_dataset", + "tf.distribute.OneDeviceStrategy.experimental_replicate_to_logical_devices": "tf.distribute.Strategy.experimental_replicate_to_logical_devices", + "tf.distribute.OneDeviceStrategy.experimental_split_to_logical_devices": "tf.distribute.Strategy.experimental_split_to_logical_devices", + "tf.distribute.OneDeviceStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.distribute.OneDeviceStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.distribute.ReduceOp.name": "tf.distribute.InputReplicationMode.name", + "tf.distribute.ReduceOp.value": "tf.distribute.InputReplicationMode.value", + "tf.distribute.ReductionToOneDevice.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.ReductionToOneDevice.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.ReductionToOneDevice.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.ReductionToOneDevice.__le__": "tf.keras.Model.__le__", + "tf.distribute.ReductionToOneDevice.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.ReductionToOneDevice.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.ReductionToOneDevice.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.ReductionToOneDevice.batch_reduce": "tf.distribute.CrossDeviceOps.batch_reduce", + "tf.distribute.ReductionToOneDevice.broadcast": "tf.distribute.CrossDeviceOps.broadcast", + "tf.distribute.ReductionToOneDevice.broadcast_implementation": "tf.distribute.CrossDeviceOps.broadcast_implementation", + "tf.distribute.ReductionToOneDevice.reduce": "tf.distribute.CrossDeviceOps.reduce", + "tf.distribute.ReplicaContext.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.ReplicaContext.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.ReplicaContext.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.ReplicaContext.__le__": "tf.keras.Model.__le__", + "tf.distribute.ReplicaContext.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.ReplicaContext.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.ReplicaContext.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.RunOptions.__add__": "tf.config.LogicalDevice.__add__", + "tf.distribute.RunOptions.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.distribute.RunOptions.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.distribute.RunOptions.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.distribute.RunOptions.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.distribute.RunOptions.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.distribute.RunOptions.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.distribute.RunOptions.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.distribute.RunOptions.__le__": 
"tf.config.LogicalDevice.__le__", + "tf.distribute.RunOptions.__len__": "tf.config.LogicalDevice.__len__", + "tf.distribute.RunOptions.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.distribute.RunOptions.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.distribute.RunOptions.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.distribute.RunOptions.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.distribute.RunOptions.count": "tf.config.LogicalDevice.count", + "tf.distribute.RunOptions.index": "tf.config.LogicalDevice.index", + "tf.distribute.Server.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.Server.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.Server.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.Server.__le__": "tf.keras.Model.__le__", + "tf.distribute.Server.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.Server.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.Server.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.Strategy.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.Strategy.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.Strategy.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.Strategy.__le__": "tf.keras.Model.__le__", + "tf.distribute.Strategy.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.Strategy.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.Strategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.Strategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.distribute.Strategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.distribute.Strategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + "tf.distribute.Strategy.experimental_make_numpy_dataset": "tf.distribute.MirroredStrategy.experimental_make_numpy_dataset", + "tf.distribute.Strategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.distribute.Strategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.distribute.Strategy.reduce": "tf.distribute.MirroredStrategy.reduce", + "tf.distribute.Strategy.run": "tf.distribute.MirroredStrategy.run", + "tf.distribute.Strategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.distribute.StrategyExtended.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.StrategyExtended.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.StrategyExtended.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.StrategyExtended.__le__": "tf.keras.Model.__le__", + "tf.distribute.StrategyExtended.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.StrategyExtended.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.StrategyExtended.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.cluster_resolver.ClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.cluster_resolver.ClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.cluster_resolver.ClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.cluster_resolver.ClusterResolver.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.distribute.cluster_resolver.ClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.distribute.cluster_resolver.ClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.cluster_resolver.ClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.cluster_resolver.ClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.distribute.cluster_resolver.GCEClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.cluster_resolver.GCEClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.cluster_resolver.GCEClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.cluster_resolver.GCEClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.distribute.cluster_resolver.GCEClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.cluster_resolver.GCEClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.cluster_resolver.GCEClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.cluster_resolver.GCEClusterResolver.environment": "tf.distribute.cluster_resolver.ClusterResolver.environment", + "tf.distribute.cluster_resolver.GCEClusterResolver.num_accelerators": "tf.distribute.cluster_resolver.ClusterResolver.num_accelerators", + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.cluster_resolver.KubernetesClusterResolver.environment": "tf.distribute.cluster_resolver.ClusterResolver.environment", + "tf.distribute.cluster_resolver.KubernetesClusterResolver.num_accelerators": "tf.distribute.cluster_resolver.ClusterResolver.num_accelerators", + "tf.distribute.cluster_resolver.SimpleClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.cluster_resolver.SimpleClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.cluster_resolver.SimpleClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.cluster_resolver.SimpleClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.distribute.cluster_resolver.SimpleClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.cluster_resolver.SimpleClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.cluster_resolver.SimpleClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.cluster_resolver.SlurmClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.cluster_resolver.SlurmClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.cluster_resolver.SlurmClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.cluster_resolver.SlurmClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.distribute.cluster_resolver.SlurmClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.cluster_resolver.SlurmClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.cluster_resolver.SlurmClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.cluster_resolver.SlurmClusterResolver.environment": "tf.distribute.cluster_resolver.ClusterResolver.environment", + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__gt__": "tf.keras.Model.__gt__", + 
"tf.distribute.cluster_resolver.TFConfigClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.cluster_resolver.TPUClusterResolver.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.cluster_resolver.TPUClusterResolver.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.cluster_resolver.TPUClusterResolver.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.cluster_resolver.TPUClusterResolver.__le__": "tf.keras.Model.__le__", + "tf.distribute.cluster_resolver.TPUClusterResolver.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.cluster_resolver.TPUClusterResolver.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.cluster_resolver.TPUClusterResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.cluster_resolver.UnionResolver.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.cluster_resolver.UnionResolver.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.cluster_resolver.UnionResolver.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.cluster_resolver.UnionResolver.__le__": "tf.keras.Model.__le__", + "tf.distribute.cluster_resolver.UnionResolver.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.cluster_resolver.UnionResolver.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.cluster_resolver.UnionResolver.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.experimental.CentralStorageStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.experimental.CentralStorageStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.experimental.CentralStorageStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.experimental.CentralStorageStrategy.__le__": "tf.keras.Model.__le__", + "tf.distribute.experimental.CentralStorageStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.experimental.CentralStorageStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.experimental.CentralStorageStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.experimental.CentralStorageStrategy.experimental_assign_to_logical_device": "tf.distribute.Strategy.experimental_assign_to_logical_device", + "tf.distribute.experimental.CentralStorageStrategy.experimental_distribute_values_from_function": "tf.distribute.Strategy.experimental_distribute_values_from_function", + "tf.distribute.experimental.CentralStorageStrategy.experimental_make_numpy_dataset": "tf.distribute.MirroredStrategy.experimental_make_numpy_dataset", + "tf.distribute.experimental.CentralStorageStrategy.experimental_replicate_to_logical_devices": "tf.distribute.Strategy.experimental_replicate_to_logical_devices", + "tf.distribute.experimental.CentralStorageStrategy.experimental_split_to_logical_devices": "tf.distribute.Strategy.experimental_split_to_logical_devices", + "tf.distribute.experimental.CentralStorageStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.distribute.experimental.CentralStorageStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.distribute.experimental.CentralStorageStrategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.distribute.experimental.CollectiveCommunication.name": "tf.distribute.InputReplicationMode.name", + "tf.distribute.experimental.CollectiveCommunication.value": 
"tf.distribute.InputReplicationMode.value", + "tf.distribute.experimental.CollectiveHints.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.experimental.CollectiveHints.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.experimental.CollectiveHints.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.experimental.CollectiveHints.__le__": "tf.keras.Model.__le__", + "tf.distribute.experimental.CollectiveHints.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.experimental.CollectiveHints.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.experimental.CollectiveHints.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__le__": "tf.keras.Model.__le__", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_assign_to_logical_device": "tf.distribute.Strategy.experimental_assign_to_logical_device", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_values_from_function": "tf.distribute.Strategy.experimental_distribute_values_from_function", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_make_numpy_dataset": "tf.distribute.MirroredStrategy.experimental_make_numpy_dataset", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_replicate_to_logical_devices": "tf.distribute.Strategy.experimental_replicate_to_logical_devices", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_split_to_logical_devices": "tf.distribute.Strategy.experimental_split_to_logical_devices", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.reduce": "tf.distribute.MirroredStrategy.reduce", + "tf.distribute.experimental.MultiWorkerMirroredStrategy.run": "tf.distribute.MirroredStrategy.run", + "tf.distribute.experimental.ParameterServerStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.experimental.ParameterServerStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.experimental.ParameterServerStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.experimental.ParameterServerStrategy.__le__": "tf.keras.Model.__le__", + "tf.distribute.experimental.ParameterServerStrategy.__lt__": "tf.keras.Model.__lt__", + 
"tf.distribute.experimental.ParameterServerStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.experimental.ParameterServerStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.experimental.ParameterServerStrategy.experimental_assign_to_logical_device": "tf.distribute.Strategy.experimental_assign_to_logical_device", + "tf.distribute.experimental.ParameterServerStrategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.distribute.experimental.ParameterServerStrategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.distribute.experimental.ParameterServerStrategy.experimental_distribute_values_from_function": "tf.distribute.Strategy.experimental_distribute_values_from_function", + "tf.distribute.experimental.ParameterServerStrategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + "tf.distribute.experimental.ParameterServerStrategy.experimental_make_numpy_dataset": "tf.distribute.MirroredStrategy.experimental_make_numpy_dataset", + "tf.distribute.experimental.ParameterServerStrategy.experimental_replicate_to_logical_devices": "tf.distribute.Strategy.experimental_replicate_to_logical_devices", + "tf.distribute.experimental.ParameterServerStrategy.experimental_split_to_logical_devices": "tf.distribute.Strategy.experimental_split_to_logical_devices", + "tf.distribute.experimental.ParameterServerStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.distribute.experimental.ParameterServerStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.distribute.experimental.ParameterServerStrategy.reduce": "tf.distribute.MirroredStrategy.reduce", + "tf.distribute.experimental.ParameterServerStrategy.run": "tf.distribute.MirroredStrategy.run", + "tf.distribute.experimental.ParameterServerStrategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.distribute.experimental.TPUStrategy.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.experimental.TPUStrategy.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.experimental.TPUStrategy.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.experimental.TPUStrategy.__le__": "tf.keras.Model.__le__", + "tf.distribute.experimental.TPUStrategy.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.experimental.TPUStrategy.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.experimental.TPUStrategy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.distribute.experimental.TPUStrategy.experimental_assign_to_logical_device": "tf.distribute.Strategy.experimental_assign_to_logical_device", + "tf.distribute.experimental.TPUStrategy.experimental_distribute_dataset": "tf.distribute.MirroredStrategy.experimental_distribute_dataset", + "tf.distribute.experimental.TPUStrategy.experimental_distribute_datasets_from_function": "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function", + "tf.distribute.experimental.TPUStrategy.experimental_distribute_values_from_function": "tf.distribute.Strategy.experimental_distribute_values_from_function", + "tf.distribute.experimental.TPUStrategy.experimental_local_results": "tf.distribute.MirroredStrategy.experimental_local_results", + "tf.distribute.experimental.TPUStrategy.experimental_make_numpy_dataset": "tf.distribute.MirroredStrategy.experimental_make_numpy_dataset", + "tf.distribute.experimental.TPUStrategy.experimental_replicate_to_logical_devices": 
"tf.distribute.Strategy.experimental_replicate_to_logical_devices", + "tf.distribute.experimental.TPUStrategy.experimental_split_to_logical_devices": "tf.distribute.Strategy.experimental_split_to_logical_devices", + "tf.distribute.experimental.TPUStrategy.extended": "tf.distribute.MirroredStrategy.extended", + "tf.distribute.experimental.TPUStrategy.num_replicas_in_sync": "tf.distribute.MirroredStrategy.num_replicas_in_sync", + "tf.distribute.experimental.TPUStrategy.reduce": "tf.distribute.MirroredStrategy.reduce", + "tf.distribute.experimental.TPUStrategy.scope": "tf.distribute.MirroredStrategy.scope", + "tf.distribute.experimental.ValueContext.__eq__": "tf.keras.Model.__eq__", + "tf.distribute.experimental.ValueContext.__ge__": "tf.keras.Model.__ge__", + "tf.distribute.experimental.ValueContext.__gt__": "tf.keras.Model.__gt__", + "tf.distribute.experimental.ValueContext.__le__": "tf.keras.Model.__le__", + "tf.distribute.experimental.ValueContext.__lt__": "tf.keras.Model.__lt__", + "tf.distribute.experimental.ValueContext.__ne__": "tf.keras.Model.__ne__", + "tf.distribute.experimental.ValueContext.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.divide": "tf.math.divide", + "tf.double": "tf.dtypes.double", + "tf.dtypes.DType.__ge__": "tf.keras.Model.__ge__", + "tf.dtypes.DType.__gt__": "tf.keras.Model.__gt__", + "tf.dtypes.DType.__le__": "tf.keras.Model.__le__", + "tf.dtypes.DType.__lt__": "tf.keras.Model.__lt__", + "tf.dtypes.cast": "tf.cast", + "tf.dtypes.float64": "tf.dtypes.double", + "tf.dtypes.half": "tf.dtypes.float16", + "tf.eig": "tf.linalg.eig", + "tf.eigvals": "tf.linalg.eigvals", + "tf.equal": "tf.math.equal", + "tf.errors.AbortedError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.AbortedError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.AbortedError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.AbortedError.__le__": "tf.keras.Model.__le__", + "tf.errors.AbortedError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.AbortedError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.AbortedError.error_code": "tf.errors.OpError.error_code", + "tf.errors.AbortedError.message": "tf.errors.OpError.message", + "tf.errors.AbortedError.node_def": "tf.errors.OpError.node_def", + "tf.errors.AbortedError.op": "tf.errors.OpError.op", + "tf.errors.AlreadyExistsError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.AlreadyExistsError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.AlreadyExistsError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.AlreadyExistsError.__le__": "tf.keras.Model.__le__", + "tf.errors.AlreadyExistsError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.AlreadyExistsError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.AlreadyExistsError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.AlreadyExistsError.args": "tf.errors.AbortedError.args", + "tf.errors.AlreadyExistsError.error_code": "tf.errors.OpError.error_code", + "tf.errors.AlreadyExistsError.message": "tf.errors.OpError.message", + "tf.errors.AlreadyExistsError.node_def": "tf.errors.OpError.node_def", + "tf.errors.AlreadyExistsError.op": "tf.errors.OpError.op", + "tf.errors.AlreadyExistsError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.CancelledError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.CancelledError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.CancelledError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.CancelledError.__le__": "tf.keras.Model.__le__", + "tf.errors.CancelledError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.CancelledError.__ne__": 
"tf.keras.Model.__ne__", + "tf.errors.CancelledError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.CancelledError.args": "tf.errors.AbortedError.args", + "tf.errors.CancelledError.error_code": "tf.errors.OpError.error_code", + "tf.errors.CancelledError.message": "tf.errors.OpError.message", + "tf.errors.CancelledError.node_def": "tf.errors.OpError.node_def", + "tf.errors.CancelledError.op": "tf.errors.OpError.op", + "tf.errors.CancelledError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.DataLossError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.DataLossError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.DataLossError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.DataLossError.__le__": "tf.keras.Model.__le__", + "tf.errors.DataLossError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.DataLossError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.DataLossError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.DataLossError.args": "tf.errors.AbortedError.args", + "tf.errors.DataLossError.error_code": "tf.errors.OpError.error_code", + "tf.errors.DataLossError.message": "tf.errors.OpError.message", + "tf.errors.DataLossError.node_def": "tf.errors.OpError.node_def", + "tf.errors.DataLossError.op": "tf.errors.OpError.op", + "tf.errors.DataLossError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.DeadlineExceededError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.DeadlineExceededError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.DeadlineExceededError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.DeadlineExceededError.__le__": "tf.keras.Model.__le__", + "tf.errors.DeadlineExceededError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.DeadlineExceededError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.DeadlineExceededError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.DeadlineExceededError.args": "tf.errors.AbortedError.args", + "tf.errors.DeadlineExceededError.error_code": "tf.errors.OpError.error_code", + "tf.errors.DeadlineExceededError.message": "tf.errors.OpError.message", + "tf.errors.DeadlineExceededError.node_def": "tf.errors.OpError.node_def", + "tf.errors.DeadlineExceededError.op": "tf.errors.OpError.op", + "tf.errors.DeadlineExceededError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.FailedPreconditionError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.FailedPreconditionError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.FailedPreconditionError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.FailedPreconditionError.__le__": "tf.keras.Model.__le__", + "tf.errors.FailedPreconditionError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.FailedPreconditionError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.FailedPreconditionError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.FailedPreconditionError.args": "tf.errors.AbortedError.args", + "tf.errors.FailedPreconditionError.error_code": "tf.errors.OpError.error_code", + "tf.errors.FailedPreconditionError.message": "tf.errors.OpError.message", + "tf.errors.FailedPreconditionError.node_def": "tf.errors.OpError.node_def", + "tf.errors.FailedPreconditionError.op": "tf.errors.OpError.op", + "tf.errors.FailedPreconditionError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.InternalError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.InternalError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.InternalError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.InternalError.__le__": "tf.keras.Model.__le__", + 
"tf.errors.InternalError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.InternalError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.InternalError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.InternalError.args": "tf.errors.AbortedError.args", + "tf.errors.InternalError.error_code": "tf.errors.OpError.error_code", + "tf.errors.InternalError.message": "tf.errors.OpError.message", + "tf.errors.InternalError.node_def": "tf.errors.OpError.node_def", + "tf.errors.InternalError.op": "tf.errors.OpError.op", + "tf.errors.InternalError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.InvalidArgumentError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.InvalidArgumentError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.InvalidArgumentError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.InvalidArgumentError.__le__": "tf.keras.Model.__le__", + "tf.errors.InvalidArgumentError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.InvalidArgumentError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.InvalidArgumentError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.InvalidArgumentError.args": "tf.errors.AbortedError.args", + "tf.errors.InvalidArgumentError.error_code": "tf.errors.OpError.error_code", + "tf.errors.InvalidArgumentError.message": "tf.errors.OpError.message", + "tf.errors.InvalidArgumentError.node_def": "tf.errors.OpError.node_def", + "tf.errors.InvalidArgumentError.op": "tf.errors.OpError.op", + "tf.errors.InvalidArgumentError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.NotFoundError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.NotFoundError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.NotFoundError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.NotFoundError.__le__": "tf.keras.Model.__le__", + "tf.errors.NotFoundError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.NotFoundError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.NotFoundError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.NotFoundError.args": "tf.errors.AbortedError.args", + "tf.errors.NotFoundError.error_code": "tf.errors.OpError.error_code", + "tf.errors.NotFoundError.message": "tf.errors.OpError.message", + "tf.errors.NotFoundError.node_def": "tf.errors.OpError.node_def", + "tf.errors.NotFoundError.op": "tf.errors.OpError.op", + "tf.errors.NotFoundError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.OpError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.OpError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.OpError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.OpError.__le__": "tf.keras.Model.__le__", + "tf.errors.OpError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.OpError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.OpError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.OpError.args": "tf.errors.AbortedError.args", + "tf.errors.OpError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.OutOfRangeError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.OutOfRangeError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.OutOfRangeError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.OutOfRangeError.__le__": "tf.keras.Model.__le__", + "tf.errors.OutOfRangeError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.OutOfRangeError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.OutOfRangeError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.OutOfRangeError.args": "tf.errors.AbortedError.args", + "tf.errors.OutOfRangeError.error_code": "tf.errors.OpError.error_code", + 
"tf.errors.OutOfRangeError.message": "tf.errors.OpError.message", + "tf.errors.OutOfRangeError.node_def": "tf.errors.OpError.node_def", + "tf.errors.OutOfRangeError.op": "tf.errors.OpError.op", + "tf.errors.OutOfRangeError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.PermissionDeniedError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.PermissionDeniedError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.PermissionDeniedError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.PermissionDeniedError.__le__": "tf.keras.Model.__le__", + "tf.errors.PermissionDeniedError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.PermissionDeniedError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.PermissionDeniedError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.PermissionDeniedError.args": "tf.errors.AbortedError.args", + "tf.errors.PermissionDeniedError.error_code": "tf.errors.OpError.error_code", + "tf.errors.PermissionDeniedError.message": "tf.errors.OpError.message", + "tf.errors.PermissionDeniedError.node_def": "tf.errors.OpError.node_def", + "tf.errors.PermissionDeniedError.op": "tf.errors.OpError.op", + "tf.errors.PermissionDeniedError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.ResourceExhaustedError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.ResourceExhaustedError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.ResourceExhaustedError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.ResourceExhaustedError.__le__": "tf.keras.Model.__le__", + "tf.errors.ResourceExhaustedError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.ResourceExhaustedError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.ResourceExhaustedError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.ResourceExhaustedError.args": "tf.errors.AbortedError.args", + "tf.errors.ResourceExhaustedError.error_code": "tf.errors.OpError.error_code", + "tf.errors.ResourceExhaustedError.message": "tf.errors.OpError.message", + "tf.errors.ResourceExhaustedError.node_def": "tf.errors.OpError.node_def", + "tf.errors.ResourceExhaustedError.op": "tf.errors.OpError.op", + "tf.errors.ResourceExhaustedError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.UnauthenticatedError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.UnauthenticatedError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.UnauthenticatedError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.UnauthenticatedError.__le__": "tf.keras.Model.__le__", + "tf.errors.UnauthenticatedError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.UnauthenticatedError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.UnauthenticatedError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.UnauthenticatedError.args": "tf.errors.AbortedError.args", + "tf.errors.UnauthenticatedError.error_code": "tf.errors.OpError.error_code", + "tf.errors.UnauthenticatedError.message": "tf.errors.OpError.message", + "tf.errors.UnauthenticatedError.node_def": "tf.errors.OpError.node_def", + "tf.errors.UnauthenticatedError.op": "tf.errors.OpError.op", + "tf.errors.UnauthenticatedError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.UnavailableError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.UnavailableError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.UnavailableError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.UnavailableError.__le__": "tf.keras.Model.__le__", + "tf.errors.UnavailableError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.UnavailableError.__ne__": "tf.keras.Model.__ne__", + 
"tf.errors.UnavailableError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.UnavailableError.args": "tf.errors.AbortedError.args", + "tf.errors.UnavailableError.error_code": "tf.errors.OpError.error_code", + "tf.errors.UnavailableError.message": "tf.errors.OpError.message", + "tf.errors.UnavailableError.node_def": "tf.errors.OpError.node_def", + "tf.errors.UnavailableError.op": "tf.errors.OpError.op", + "tf.errors.UnavailableError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.UnimplementedError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.UnimplementedError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.UnimplementedError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.UnimplementedError.__le__": "tf.keras.Model.__le__", + "tf.errors.UnimplementedError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.UnimplementedError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.UnimplementedError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.UnimplementedError.args": "tf.errors.AbortedError.args", + "tf.errors.UnimplementedError.error_code": "tf.errors.OpError.error_code", + "tf.errors.UnimplementedError.message": "tf.errors.OpError.message", + "tf.errors.UnimplementedError.node_def": "tf.errors.OpError.node_def", + "tf.errors.UnimplementedError.op": "tf.errors.OpError.op", + "tf.errors.UnimplementedError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.errors.UnknownError.__eq__": "tf.keras.Model.__eq__", + "tf.errors.UnknownError.__ge__": "tf.keras.Model.__ge__", + "tf.errors.UnknownError.__gt__": "tf.keras.Model.__gt__", + "tf.errors.UnknownError.__le__": "tf.keras.Model.__le__", + "tf.errors.UnknownError.__lt__": "tf.keras.Model.__lt__", + "tf.errors.UnknownError.__ne__": "tf.keras.Model.__ne__", + "tf.errors.UnknownError.__new__": "tf.errors.AbortedError.__new__", + "tf.errors.UnknownError.args": "tf.errors.AbortedError.args", + "tf.errors.UnknownError.error_code": "tf.errors.OpError.error_code", + "tf.errors.UnknownError.message": "tf.errors.OpError.message", + "tf.errors.UnknownError.node_def": "tf.errors.OpError.node_def", + "tf.errors.UnknownError.op": "tf.errors.OpError.op", + "tf.errors.UnknownError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.estimator.BaselineClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.BaselineClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.BaselineClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.BaselineClassifier.__le__": "tf.keras.Model.__le__", + "tf.estimator.BaselineClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.BaselineClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.BaselineClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.BaselineClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.BaselineClassifier.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.BaselineClassifier.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.BaselineClassifier.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.BaselineClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.BaselineClassifier.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.BaselineClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.BaselineClassifier.get_variable_value": 
"tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.BaselineClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.BaselineClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.BaselineClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.BaselineClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.BaselineClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.BaselineClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.BaselineEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.BaselineEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.BaselineEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.BaselineEstimator.__le__": "tf.keras.Model.__le__", + "tf.estimator.BaselineEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.BaselineEstimator.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.BaselineEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.BaselineEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.BaselineEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.BaselineEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.BaselineEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.BaselineEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.BaselineEstimator.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.BaselineEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.BaselineEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.BaselineEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.BaselineEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.BaselineEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.BaselineEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.BaselineEstimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.BaselineEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.BaselineRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.BaselineRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.BaselineRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.BaselineRegressor.__le__": "tf.keras.Model.__le__", + "tf.estimator.BaselineRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.BaselineRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.BaselineRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.BaselineRegressor.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.BaselineRegressor.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.BaselineRegressor.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.BaselineRegressor.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.BaselineRegressor.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + 
"tf.estimator.BaselineRegressor.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.BaselineRegressor.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.BaselineRegressor.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.BaselineRegressor.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.BaselineRegressor.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.BaselineRegressor.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.BaselineRegressor.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.BaselineRegressor.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.BaselineRegressor.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.BestExporter.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.BestExporter.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.BestExporter.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.BestExporter.__le__": "tf.keras.Model.__le__", + "tf.estimator.BestExporter.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.BestExporter.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.BestExporter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.BinaryClassHead.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.BinaryClassHead.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.BinaryClassHead.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.BinaryClassHead.__le__": "tf.keras.Model.__le__", + "tf.estimator.BinaryClassHead.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.BinaryClassHead.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.BinaryClassHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.BinaryClassHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.estimator.BoostedTreesClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.BoostedTreesClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.BoostedTreesClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.BoostedTreesClassifier.__le__": "tf.keras.Model.__le__", + "tf.estimator.BoostedTreesClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.BoostedTreesClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.BoostedTreesClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.BoostedTreesClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.BoostedTreesClassifier.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.BoostedTreesClassifier.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.BoostedTreesClassifier.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.BoostedTreesClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.BoostedTreesClassifier.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.estimator.BoostedTreesClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.BoostedTreesClassifier.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.BoostedTreesClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.BoostedTreesClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + 
"tf.estimator.BoostedTreesClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.BoostedTreesClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.BoostedTreesClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.BoostedTreesClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.BoostedTreesEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.BoostedTreesEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.BoostedTreesEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.BoostedTreesEstimator.__le__": "tf.keras.Model.__le__", + "tf.estimator.BoostedTreesEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.BoostedTreesEstimator.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.BoostedTreesEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.BoostedTreesEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.BoostedTreesEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.BoostedTreesEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.BoostedTreesEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.BoostedTreesEstimator.experimental_feature_importances": "tf.estimator.BoostedTreesClassifier.experimental_feature_importances", + "tf.estimator.BoostedTreesEstimator.experimental_predict_with_explanations": "tf.estimator.BoostedTreesClassifier.experimental_predict_with_explanations", + "tf.estimator.BoostedTreesEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.BoostedTreesEstimator.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.estimator.BoostedTreesEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.BoostedTreesEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.BoostedTreesEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.BoostedTreesEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.BoostedTreesEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.BoostedTreesEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.BoostedTreesEstimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.BoostedTreesEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.BoostedTreesRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.BoostedTreesRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.BoostedTreesRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.BoostedTreesRegressor.__le__": "tf.keras.Model.__le__", + "tf.estimator.BoostedTreesRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.BoostedTreesRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.BoostedTreesRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.BoostedTreesRegressor.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.BoostedTreesRegressor.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.BoostedTreesRegressor.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.BoostedTreesRegressor.experimental_export_all_saved_models": 
"tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.BoostedTreesRegressor.experimental_feature_importances": "tf.estimator.BoostedTreesClassifier.experimental_feature_importances", + "tf.estimator.BoostedTreesRegressor.experimental_predict_with_explanations": "tf.estimator.BoostedTreesClassifier.experimental_predict_with_explanations", + "tf.estimator.BoostedTreesRegressor.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.BoostedTreesRegressor.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.estimator.BoostedTreesRegressor.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.BoostedTreesRegressor.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.BoostedTreesRegressor.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.BoostedTreesRegressor.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.BoostedTreesRegressor.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.BoostedTreesRegressor.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.BoostedTreesRegressor.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.BoostedTreesRegressor.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.CheckpointSaverHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.CheckpointSaverHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.CheckpointSaverHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.CheckpointSaverHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.CheckpointSaverHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.CheckpointSaverHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.CheckpointSaverHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.CheckpointSaverListener.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.CheckpointSaverListener.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.CheckpointSaverListener.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.CheckpointSaverListener.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.CheckpointSaverListener.__le__": "tf.keras.Model.__le__", + "tf.estimator.CheckpointSaverListener.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.CheckpointSaverListener.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.CheckpointSaverListener.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.DNNClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.DNNClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.DNNClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.DNNClassifier.__le__": "tf.keras.Model.__le__", + "tf.estimator.DNNClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.DNNClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.DNNClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.DNNClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.DNNClassifier.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.DNNClassifier.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.DNNClassifier.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.DNNClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.DNNClassifier.export_savedmodel": 
"tf.estimator.Estimator.export_savedmodel", + "tf.estimator.DNNClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.DNNClassifier.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.DNNClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.DNNClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.DNNClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.DNNClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.DNNClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.DNNClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.DNNEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.DNNEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.DNNEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.DNNEstimator.__le__": "tf.keras.Model.__le__", + "tf.estimator.DNNEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.DNNEstimator.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.DNNEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.DNNEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.DNNEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.DNNEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.DNNEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.DNNEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.DNNEstimator.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.DNNEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.DNNEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.DNNEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.DNNEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.DNNEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.DNNEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.DNNEstimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.DNNEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.DNNLinearCombinedClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.DNNLinearCombinedClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.DNNLinearCombinedClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.DNNLinearCombinedClassifier.__le__": "tf.keras.Model.__le__", + "tf.estimator.DNNLinearCombinedClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.DNNLinearCombinedClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.DNNLinearCombinedClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.DNNLinearCombinedClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.DNNLinearCombinedClassifier.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.DNNLinearCombinedClassifier.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.DNNLinearCombinedClassifier.experimental_export_all_saved_models": 
"tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.DNNLinearCombinedClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.DNNLinearCombinedClassifier.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.DNNLinearCombinedClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.DNNLinearCombinedClassifier.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.DNNLinearCombinedClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.DNNLinearCombinedClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.DNNLinearCombinedClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.DNNLinearCombinedClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.DNNLinearCombinedClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.DNNLinearCombinedClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.DNNLinearCombinedEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.DNNLinearCombinedEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.DNNLinearCombinedEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.DNNLinearCombinedEstimator.__le__": "tf.keras.Model.__le__", + "tf.estimator.DNNLinearCombinedEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.DNNLinearCombinedEstimator.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.DNNLinearCombinedEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.DNNLinearCombinedEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.DNNLinearCombinedEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.DNNLinearCombinedEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.DNNLinearCombinedEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.DNNLinearCombinedEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.DNNLinearCombinedEstimator.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.DNNLinearCombinedEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.DNNLinearCombinedEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.DNNLinearCombinedEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.DNNLinearCombinedEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.DNNLinearCombinedEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.DNNLinearCombinedEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.DNNLinearCombinedEstimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.DNNLinearCombinedEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.DNNLinearCombinedRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.DNNLinearCombinedRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.DNNLinearCombinedRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.DNNLinearCombinedRegressor.__le__": "tf.keras.Model.__le__", + 
"tf.estimator.DNNLinearCombinedRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.DNNLinearCombinedRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.DNNLinearCombinedRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.DNNLinearCombinedRegressor.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.DNNLinearCombinedRegressor.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.DNNLinearCombinedRegressor.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.DNNLinearCombinedRegressor.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.DNNLinearCombinedRegressor.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.DNNLinearCombinedRegressor.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.DNNLinearCombinedRegressor.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.DNNLinearCombinedRegressor.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.DNNLinearCombinedRegressor.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.DNNLinearCombinedRegressor.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.DNNLinearCombinedRegressor.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.DNNLinearCombinedRegressor.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.DNNLinearCombinedRegressor.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.DNNLinearCombinedRegressor.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.DNNRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.DNNRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.DNNRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.DNNRegressor.__le__": "tf.keras.Model.__le__", + "tf.estimator.DNNRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.DNNRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.DNNRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.DNNRegressor.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.DNNRegressor.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.DNNRegressor.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.DNNRegressor.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.DNNRegressor.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.DNNRegressor.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.DNNRegressor.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.DNNRegressor.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.DNNRegressor.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.DNNRegressor.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.DNNRegressor.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.DNNRegressor.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.DNNRegressor.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.DNNRegressor.train": "tf.compat.v1.estimator.Estimator.train", + 
"tf.estimator.Estimator.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.Estimator.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.Estimator.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.Estimator.__init__": "tf.compat.v1.estimator.Estimator.__init__", + "tf.estimator.Estimator.__le__": "tf.keras.Model.__le__", + "tf.estimator.Estimator.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.Estimator.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.Estimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.Estimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.Estimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.Estimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.Estimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.Estimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.Estimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.Estimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.Estimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.Estimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.Estimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.Estimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.Estimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.Estimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.EstimatorSpec.__add__": "tf.config.LogicalDevice.__add__", + "tf.estimator.EstimatorSpec.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.estimator.EstimatorSpec.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.estimator.EstimatorSpec.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.estimator.EstimatorSpec.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.estimator.EstimatorSpec.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.estimator.EstimatorSpec.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.EstimatorSpec.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.estimator.EstimatorSpec.__le__": "tf.config.LogicalDevice.__le__", + "tf.estimator.EstimatorSpec.__len__": "tf.config.LogicalDevice.__len__", + "tf.estimator.EstimatorSpec.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.estimator.EstimatorSpec.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.estimator.EstimatorSpec.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.estimator.EstimatorSpec.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.estimator.EstimatorSpec.count": "tf.config.LogicalDevice.count", + "tf.estimator.EstimatorSpec.index": "tf.config.LogicalDevice.index", + "tf.estimator.EvalSpec.__add__": "tf.config.LogicalDevice.__add__", + "tf.estimator.EvalSpec.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.estimator.EvalSpec.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.estimator.EvalSpec.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.estimator.EvalSpec.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.estimator.EvalSpec.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.estimator.EvalSpec.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.EvalSpec.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.estimator.EvalSpec.__le__": 
"tf.config.LogicalDevice.__le__", + "tf.estimator.EvalSpec.__len__": "tf.config.LogicalDevice.__len__", + "tf.estimator.EvalSpec.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.estimator.EvalSpec.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.estimator.EvalSpec.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.estimator.EvalSpec.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.estimator.EvalSpec.count": "tf.config.LogicalDevice.count", + "tf.estimator.EvalSpec.index": "tf.config.LogicalDevice.index", + "tf.estimator.Exporter.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.Exporter.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.Exporter.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.Exporter.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.Exporter.__le__": "tf.keras.Model.__le__", + "tf.estimator.Exporter.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.Exporter.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.Exporter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.FeedFnHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.FeedFnHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.FeedFnHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.FeedFnHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.FeedFnHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.FeedFnHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.FeedFnHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.FeedFnHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.estimator.FeedFnHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.estimator.FeedFnHook.begin": "tf.estimator.SessionRunHook.begin", + "tf.estimator.FeedFnHook.end": "tf.estimator.SessionRunHook.end", + "tf.estimator.FinalExporter.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.FinalExporter.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.FinalExporter.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.FinalExporter.__le__": "tf.keras.Model.__le__", + "tf.estimator.FinalExporter.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.FinalExporter.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.FinalExporter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.FinalOpsHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.FinalOpsHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.FinalOpsHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.FinalOpsHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.FinalOpsHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.FinalOpsHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.FinalOpsHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.FinalOpsHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.estimator.FinalOpsHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.estimator.FinalOpsHook.before_run": "tf.estimator.SessionRunHook.before_run", + "tf.estimator.FinalOpsHook.begin": "tf.estimator.SessionRunHook.begin", + "tf.estimator.GlobalStepWaiterHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.GlobalStepWaiterHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.GlobalStepWaiterHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.GlobalStepWaiterHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.GlobalStepWaiterHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.GlobalStepWaiterHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.GlobalStepWaiterHook.__new__": 
"tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.GlobalStepWaiterHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.estimator.GlobalStepWaiterHook.after_run": "tf.estimator.SessionRunHook.after_run", + "tf.estimator.GlobalStepWaiterHook.end": "tf.estimator.SessionRunHook.end", + "tf.estimator.Head.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.Head.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.Head.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.Head.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.Head.__le__": "tf.keras.Model.__le__", + "tf.estimator.Head.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.Head.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.Head.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.LatestExporter.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.LatestExporter.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.LatestExporter.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.LatestExporter.__le__": "tf.keras.Model.__le__", + "tf.estimator.LatestExporter.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.LatestExporter.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.LatestExporter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.LinearClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.LinearClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.LinearClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.LinearClassifier.__le__": "tf.keras.Model.__le__", + "tf.estimator.LinearClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.LinearClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.LinearClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.LinearClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.LinearClassifier.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.LinearClassifier.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.LinearClassifier.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.LinearClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.LinearClassifier.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.LinearClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.LinearClassifier.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.LinearClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.LinearClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.LinearClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.LinearClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.LinearClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.LinearClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.LinearEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.LinearEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.LinearEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.LinearEstimator.__le__": "tf.keras.Model.__le__", + "tf.estimator.LinearEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.LinearEstimator.__ne__": "tf.keras.Model.__ne__", + 
"tf.estimator.LinearEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.LinearEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.LinearEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.LinearEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.LinearEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.LinearEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.LinearEstimator.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.LinearEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.LinearEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.LinearEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.LinearEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.LinearEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.LinearEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.LinearEstimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.LinearEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.LinearRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.LinearRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.LinearRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.LinearRegressor.__le__": "tf.keras.Model.__le__", + "tf.estimator.LinearRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.LinearRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.LinearRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.LinearRegressor.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.LinearRegressor.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.LinearRegressor.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.LinearRegressor.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.LinearRegressor.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.LinearRegressor.export_savedmodel": "tf.estimator.Estimator.export_savedmodel", + "tf.estimator.LinearRegressor.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.LinearRegressor.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.LinearRegressor.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.LinearRegressor.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.LinearRegressor.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.LinearRegressor.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.LinearRegressor.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.LinearRegressor.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.LoggingTensorHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.LoggingTensorHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.LoggingTensorHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.LoggingTensorHook.__le__": "tf.keras.Model.__le__", + 
"tf.estimator.LoggingTensorHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.LoggingTensorHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.LoggingTensorHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.LoggingTensorHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.estimator.LogisticRegressionHead.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.LogisticRegressionHead.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.LogisticRegressionHead.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.LogisticRegressionHead.__le__": "tf.keras.Model.__le__", + "tf.estimator.LogisticRegressionHead.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.LogisticRegressionHead.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.LogisticRegressionHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.LogisticRegressionHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.estimator.LogisticRegressionHead.logits_dimension": "tf.estimator.RegressionHead.logits_dimension", + "tf.estimator.LogisticRegressionHead.loss": "tf.estimator.RegressionHead.loss", + "tf.estimator.LogisticRegressionHead.loss_reduction": "tf.estimator.RegressionHead.loss_reduction", + "tf.estimator.LogisticRegressionHead.metrics": "tf.estimator.RegressionHead.metrics", + "tf.estimator.LogisticRegressionHead.name": "tf.estimator.RegressionHead.name", + "tf.estimator.LogisticRegressionHead.predictions": "tf.estimator.RegressionHead.predictions", + "tf.estimator.LogisticRegressionHead.update_metrics": "tf.estimator.RegressionHead.update_metrics", + "tf.estimator.ModeKeys.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.ModeKeys.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.ModeKeys.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.ModeKeys.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.ModeKeys.__le__": "tf.keras.Model.__le__", + "tf.estimator.ModeKeys.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.ModeKeys.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.ModeKeys.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.MultiClassHead.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.MultiClassHead.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.MultiClassHead.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.MultiClassHead.__le__": "tf.keras.Model.__le__", + "tf.estimator.MultiClassHead.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.MultiClassHead.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.MultiClassHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.MultiClassHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.estimator.MultiHead.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.MultiHead.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.MultiHead.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.MultiHead.__le__": "tf.keras.Model.__le__", + "tf.estimator.MultiHead.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.MultiHead.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.MultiHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.MultiLabelHead.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.MultiLabelHead.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.MultiLabelHead.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.MultiLabelHead.__le__": "tf.keras.Model.__le__", + "tf.estimator.MultiLabelHead.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.MultiLabelHead.__ne__": "tf.keras.Model.__ne__", + 
"tf.estimator.MultiLabelHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.MultiLabelHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.estimator.NanLossDuringTrainingError.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.NanLossDuringTrainingError.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.NanLossDuringTrainingError.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.NanLossDuringTrainingError.__le__": "tf.keras.Model.__le__", + "tf.estimator.NanLossDuringTrainingError.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.NanLossDuringTrainingError.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.NanLossDuringTrainingError.args": "tf.errors.AbortedError.args", + "tf.estimator.NanLossDuringTrainingError.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.estimator.NanTensorHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.NanTensorHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.NanTensorHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.NanTensorHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.NanTensorHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.NanTensorHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.NanTensorHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.NanTensorHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.estimator.NanTensorHook.begin": "tf.estimator.SessionRunHook.begin", + "tf.estimator.NanTensorHook.end": "tf.estimator.SessionRunHook.end", + "tf.estimator.PoissonRegressionHead.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.PoissonRegressionHead.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.PoissonRegressionHead.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.PoissonRegressionHead.__le__": "tf.keras.Model.__le__", + "tf.estimator.PoissonRegressionHead.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.PoissonRegressionHead.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.PoissonRegressionHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.PoissonRegressionHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.estimator.PoissonRegressionHead.logits_dimension": "tf.estimator.RegressionHead.logits_dimension", + "tf.estimator.PoissonRegressionHead.loss": "tf.estimator.RegressionHead.loss", + "tf.estimator.PoissonRegressionHead.loss_reduction": "tf.estimator.RegressionHead.loss_reduction", + "tf.estimator.PoissonRegressionHead.metrics": "tf.estimator.RegressionHead.metrics", + "tf.estimator.PoissonRegressionHead.name": "tf.estimator.RegressionHead.name", + "tf.estimator.PoissonRegressionHead.predictions": "tf.estimator.RegressionHead.predictions", + "tf.estimator.PoissonRegressionHead.update_metrics": "tf.estimator.RegressionHead.update_metrics", + "tf.estimator.ProfilerHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.ProfilerHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.ProfilerHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.ProfilerHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.ProfilerHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.ProfilerHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.ProfilerHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.ProfilerHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.estimator.ProfilerHook.end": "tf.estimator.SessionRunHook.end", + "tf.estimator.RegressionHead.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.RegressionHead.__ge__": 
"tf.keras.Model.__ge__", + "tf.estimator.RegressionHead.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.RegressionHead.__le__": "tf.keras.Model.__le__", + "tf.estimator.RegressionHead.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.RegressionHead.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.RegressionHead.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.RegressionHead.create_estimator_spec": "tf.estimator.Head.create_estimator_spec", + "tf.estimator.RunConfig.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.RunConfig.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.RunConfig.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.RunConfig.__le__": "tf.keras.Model.__le__", + "tf.estimator.RunConfig.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.RunConfig.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.RunConfig.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.SecondOrStepTimer.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.SecondOrStepTimer.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.SecondOrStepTimer.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.SecondOrStepTimer.__le__": "tf.keras.Model.__le__", + "tf.estimator.SecondOrStepTimer.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.SecondOrStepTimer.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.SecondOrStepTimer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.SessionRunArgs.__add__": "tf.config.LogicalDevice.__add__", + "tf.estimator.SessionRunArgs.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.estimator.SessionRunArgs.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.estimator.SessionRunArgs.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.estimator.SessionRunArgs.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.estimator.SessionRunArgs.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.estimator.SessionRunArgs.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.SessionRunArgs.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.estimator.SessionRunArgs.__le__": "tf.config.LogicalDevice.__le__", + "tf.estimator.SessionRunArgs.__len__": "tf.config.LogicalDevice.__len__", + "tf.estimator.SessionRunArgs.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.estimator.SessionRunArgs.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.estimator.SessionRunArgs.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.estimator.SessionRunArgs.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.estimator.SessionRunArgs.count": "tf.config.LogicalDevice.count", + "tf.estimator.SessionRunArgs.index": "tf.config.LogicalDevice.index", + "tf.estimator.SessionRunContext.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.SessionRunContext.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.SessionRunContext.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.SessionRunContext.__le__": "tf.keras.Model.__le__", + "tf.estimator.SessionRunContext.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.SessionRunContext.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.SessionRunContext.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.SessionRunHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.SessionRunHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.SessionRunHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.SessionRunHook.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.SessionRunHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.SessionRunHook.__lt__": "tf.keras.Model.__lt__", + 
"tf.estimator.SessionRunHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.SessionRunHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.SessionRunValues.__add__": "tf.config.LogicalDevice.__add__", + "tf.estimator.SessionRunValues.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.estimator.SessionRunValues.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.estimator.SessionRunValues.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.estimator.SessionRunValues.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.estimator.SessionRunValues.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.estimator.SessionRunValues.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.SessionRunValues.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.estimator.SessionRunValues.__le__": "tf.config.LogicalDevice.__le__", + "tf.estimator.SessionRunValues.__len__": "tf.config.LogicalDevice.__len__", + "tf.estimator.SessionRunValues.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.estimator.SessionRunValues.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.estimator.SessionRunValues.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.estimator.SessionRunValues.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.estimator.SessionRunValues.count": "tf.config.LogicalDevice.count", + "tf.estimator.SessionRunValues.index": "tf.config.LogicalDevice.index", + "tf.estimator.StepCounterHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.StepCounterHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.StepCounterHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.StepCounterHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.StepCounterHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.StepCounterHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.StepCounterHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.StepCounterHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.estimator.StepCounterHook.end": "tf.estimator.SessionRunHook.end", + "tf.estimator.StopAtStepHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.StopAtStepHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.StopAtStepHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.StopAtStepHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.StopAtStepHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.StopAtStepHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.StopAtStepHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.StopAtStepHook.end": "tf.estimator.SessionRunHook.end", + "tf.estimator.SummarySaverHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.SummarySaverHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.SummarySaverHook.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.SummarySaverHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.SummarySaverHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.SummarySaverHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.SummarySaverHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.SummarySaverHook.after_create_session": "tf.estimator.SessionRunHook.after_create_session", + "tf.estimator.TrainSpec.__add__": "tf.config.LogicalDevice.__add__", + "tf.estimator.TrainSpec.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.estimator.TrainSpec.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.estimator.TrainSpec.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.estimator.TrainSpec.__getitem__": 
"tf.config.LogicalDevice.__getitem__", + "tf.estimator.TrainSpec.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.estimator.TrainSpec.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.TrainSpec.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.estimator.TrainSpec.__le__": "tf.config.LogicalDevice.__le__", + "tf.estimator.TrainSpec.__len__": "tf.config.LogicalDevice.__len__", + "tf.estimator.TrainSpec.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.estimator.TrainSpec.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.estimator.TrainSpec.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.estimator.TrainSpec.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.estimator.TrainSpec.count": "tf.config.LogicalDevice.count", + "tf.estimator.TrainSpec.index": "tf.config.LogicalDevice.index", + "tf.estimator.VocabInfo.__add__": "tf.config.LogicalDevice.__add__", + "tf.estimator.VocabInfo.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.estimator.VocabInfo.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.estimator.VocabInfo.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.estimator.VocabInfo.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.estimator.VocabInfo.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.estimator.VocabInfo.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.VocabInfo.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.estimator.VocabInfo.__le__": "tf.config.LogicalDevice.__le__", + "tf.estimator.VocabInfo.__len__": "tf.config.LogicalDevice.__len__", + "tf.estimator.VocabInfo.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.estimator.VocabInfo.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.estimator.VocabInfo.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.estimator.VocabInfo.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.estimator.VocabInfo.count": "tf.config.LogicalDevice.count", + "tf.estimator.VocabInfo.index": "tf.config.LogicalDevice.index", + "tf.estimator.WarmStartSettings.__add__": "tf.config.LogicalDevice.__add__", + "tf.estimator.WarmStartSettings.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.estimator.WarmStartSettings.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.estimator.WarmStartSettings.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.estimator.WarmStartSettings.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.estimator.WarmStartSettings.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.estimator.WarmStartSettings.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.WarmStartSettings.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.estimator.WarmStartSettings.__le__": "tf.config.LogicalDevice.__le__", + "tf.estimator.WarmStartSettings.__len__": "tf.config.LogicalDevice.__len__", + "tf.estimator.WarmStartSettings.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.estimator.WarmStartSettings.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.estimator.WarmStartSettings.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.estimator.WarmStartSettings.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.estimator.WarmStartSettings.count": "tf.config.LogicalDevice.count", + "tf.estimator.WarmStartSettings.index": "tf.config.LogicalDevice.index", + "tf.estimator.experimental.InMemoryEvaluatorHook.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.experimental.InMemoryEvaluatorHook.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.experimental.InMemoryEvaluatorHook.__gt__": "tf.keras.Model.__gt__", + 
"tf.estimator.experimental.InMemoryEvaluatorHook.__le__": "tf.keras.Model.__le__", + "tf.estimator.experimental.InMemoryEvaluatorHook.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.experimental.InMemoryEvaluatorHook.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.experimental.InMemoryEvaluatorHook.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.experimental.InMemoryEvaluatorHook.before_run": "tf.estimator.SessionRunHook.before_run", + "tf.estimator.experimental.LinearSDCA.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.experimental.LinearSDCA.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.experimental.LinearSDCA.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.experimental.LinearSDCA.__le__": "tf.keras.Model.__le__", + "tf.estimator.experimental.LinearSDCA.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.experimental.LinearSDCA.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.experimental.LinearSDCA.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.experimental.RNNClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.experimental.RNNClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.experimental.RNNClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.experimental.RNNClassifier.__le__": "tf.keras.Model.__le__", + "tf.estimator.experimental.RNNClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.experimental.RNNClassifier.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.experimental.RNNClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.experimental.RNNClassifier.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.experimental.RNNClassifier.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.experimental.RNNClassifier.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.experimental.RNNClassifier.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.experimental.RNNClassifier.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.experimental.RNNClassifier.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.estimator.experimental.RNNClassifier.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.experimental.RNNClassifier.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.experimental.RNNClassifier.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.experimental.RNNClassifier.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.experimental.RNNClassifier.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.experimental.RNNClassifier.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.experimental.RNNClassifier.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.experimental.RNNClassifier.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.experimental.RNNEstimator.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.experimental.RNNEstimator.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.experimental.RNNEstimator.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.experimental.RNNEstimator.__le__": "tf.keras.Model.__le__", + "tf.estimator.experimental.RNNEstimator.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.experimental.RNNEstimator.__ne__": "tf.keras.Model.__ne__", + 
"tf.estimator.experimental.RNNEstimator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.experimental.RNNEstimator.config": "tf.compat.v1.estimator.Estimator.config", + "tf.estimator.experimental.RNNEstimator.eval_dir": "tf.compat.v1.estimator.Estimator.eval_dir", + "tf.estimator.experimental.RNNEstimator.evaluate": "tf.compat.v1.estimator.Estimator.evaluate", + "tf.estimator.experimental.RNNEstimator.experimental_export_all_saved_models": "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models", + "tf.estimator.experimental.RNNEstimator.export_saved_model": "tf.compat.v1.estimator.Estimator.export_saved_model", + "tf.estimator.experimental.RNNEstimator.export_savedmodel": "tf.compat.v1.estimator.Estimator.export_savedmodel", + "tf.estimator.experimental.RNNEstimator.get_variable_names": "tf.compat.v1.estimator.Estimator.get_variable_names", + "tf.estimator.experimental.RNNEstimator.get_variable_value": "tf.compat.v1.estimator.Estimator.get_variable_value", + "tf.estimator.experimental.RNNEstimator.latest_checkpoint": "tf.compat.v1.estimator.Estimator.latest_checkpoint", + "tf.estimator.experimental.RNNEstimator.model_dir": "tf.compat.v1.estimator.Estimator.model_dir", + "tf.estimator.experimental.RNNEstimator.model_fn": "tf.compat.v1.estimator.Estimator.model_fn", + "tf.estimator.experimental.RNNEstimator.params": "tf.compat.v1.estimator.Estimator.params", + "tf.estimator.experimental.RNNEstimator.predict": "tf.compat.v1.estimator.Estimator.predict", + "tf.estimator.experimental.RNNEstimator.train": "tf.compat.v1.estimator.Estimator.train", + "tf.estimator.export.ClassificationOutput.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.export.ClassificationOutput.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.export.ClassificationOutput.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.export.ClassificationOutput.__le__": "tf.keras.Model.__le__", + "tf.estimator.export.ClassificationOutput.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.export.ClassificationOutput.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.export.ClassificationOutput.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.export.ExportOutput.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.export.ExportOutput.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.export.ExportOutput.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.export.ExportOutput.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.export.ExportOutput.__le__": "tf.keras.Model.__le__", + "tf.estimator.export.ExportOutput.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.export.ExportOutput.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.export.ExportOutput.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.export.PredictOutput.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.export.PredictOutput.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.export.PredictOutput.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.export.PredictOutput.__le__": "tf.keras.Model.__le__", + "tf.estimator.export.PredictOutput.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.export.PredictOutput.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.export.PredictOutput.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.export.RegressionOutput.__eq__": "tf.keras.Model.__eq__", + "tf.estimator.export.RegressionOutput.__ge__": "tf.keras.Model.__ge__", + "tf.estimator.export.RegressionOutput.__gt__": "tf.keras.Model.__gt__", + "tf.estimator.export.RegressionOutput.__le__": 
"tf.keras.Model.__le__", + "tf.estimator.export.RegressionOutput.__lt__": "tf.keras.Model.__lt__", + "tf.estimator.export.RegressionOutput.__ne__": "tf.keras.Model.__ne__", + "tf.estimator.export.RegressionOutput.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.estimator.export.ServingInputReceiver.__add__": "tf.config.LogicalDevice.__add__", + "tf.estimator.export.ServingInputReceiver.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.estimator.export.ServingInputReceiver.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.estimator.export.ServingInputReceiver.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.estimator.export.ServingInputReceiver.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.estimator.export.ServingInputReceiver.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.estimator.export.ServingInputReceiver.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.export.ServingInputReceiver.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.estimator.export.ServingInputReceiver.__le__": "tf.config.LogicalDevice.__le__", + "tf.estimator.export.ServingInputReceiver.__len__": "tf.config.LogicalDevice.__len__", + "tf.estimator.export.ServingInputReceiver.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.estimator.export.ServingInputReceiver.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.estimator.export.ServingInputReceiver.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.estimator.export.ServingInputReceiver.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.estimator.export.ServingInputReceiver.count": "tf.config.LogicalDevice.count", + "tf.estimator.export.ServingInputReceiver.index": "tf.config.LogicalDevice.index", + "tf.estimator.export.TensorServingInputReceiver.__add__": "tf.config.LogicalDevice.__add__", + "tf.estimator.export.TensorServingInputReceiver.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.estimator.export.TensorServingInputReceiver.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.estimator.export.TensorServingInputReceiver.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.estimator.export.TensorServingInputReceiver.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.estimator.export.TensorServingInputReceiver.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.estimator.export.TensorServingInputReceiver.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.estimator.export.TensorServingInputReceiver.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.estimator.export.TensorServingInputReceiver.__le__": "tf.config.LogicalDevice.__le__", + "tf.estimator.export.TensorServingInputReceiver.__len__": "tf.config.LogicalDevice.__len__", + "tf.estimator.export.TensorServingInputReceiver.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.estimator.export.TensorServingInputReceiver.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.estimator.export.TensorServingInputReceiver.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.estimator.export.TensorServingInputReceiver.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.estimator.export.TensorServingInputReceiver.count": "tf.config.LogicalDevice.count", + "tf.estimator.export.TensorServingInputReceiver.index": "tf.config.LogicalDevice.index", + "tf.exp": "tf.math.exp", + "tf.experimental.tensorrt.ConversionParams.__add__": "tf.config.LogicalDevice.__add__", + "tf.experimental.tensorrt.ConversionParams.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.experimental.tensorrt.ConversionParams.__eq__": "tf.config.LogicalDevice.__eq__", + 
"tf.experimental.tensorrt.ConversionParams.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.experimental.tensorrt.ConversionParams.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.experimental.tensorrt.ConversionParams.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.experimental.tensorrt.ConversionParams.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.experimental.tensorrt.ConversionParams.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.experimental.tensorrt.ConversionParams.__le__": "tf.config.LogicalDevice.__le__", + "tf.experimental.tensorrt.ConversionParams.__len__": "tf.config.LogicalDevice.__len__", + "tf.experimental.tensorrt.ConversionParams.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.experimental.tensorrt.ConversionParams.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.experimental.tensorrt.ConversionParams.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.experimental.tensorrt.ConversionParams.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.experimental.tensorrt.ConversionParams.count": "tf.config.LogicalDevice.count", + "tf.experimental.tensorrt.ConversionParams.index": "tf.config.LogicalDevice.index", + "tf.experimental.tensorrt.Converter.__eq__": "tf.keras.Model.__eq__", + "tf.experimental.tensorrt.Converter.__ge__": "tf.keras.Model.__ge__", + "tf.experimental.tensorrt.Converter.__gt__": "tf.keras.Model.__gt__", + "tf.experimental.tensorrt.Converter.__le__": "tf.keras.Model.__le__", + "tf.experimental.tensorrt.Converter.__lt__": "tf.keras.Model.__lt__", + "tf.experimental.tensorrt.Converter.__ne__": "tf.keras.Model.__ne__", + "tf.experimental.tensorrt.Converter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.float16": "tf.dtypes.float16", + "tf.float32": "tf.dtypes.float32", + "tf.float64": "tf.dtypes.double", + "tf.floor": "tf.math.floor", + "tf.greater": "tf.math.greater", + "tf.greater_equal": "tf.math.greater_equal", + "tf.half": "tf.dtypes.float16", + "tf.image.ResizeMethod.__eq__": "tf.keras.Model.__eq__", + "tf.image.ResizeMethod.__ge__": "tf.keras.Model.__ge__", + "tf.image.ResizeMethod.__gt__": "tf.keras.Model.__gt__", + "tf.image.ResizeMethod.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.image.ResizeMethod.__le__": "tf.keras.Model.__le__", + "tf.image.ResizeMethod.__lt__": "tf.keras.Model.__lt__", + "tf.image.ResizeMethod.__ne__": "tf.keras.Model.__ne__", + "tf.image.ResizeMethod.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.image.decode_and_crop_jpeg": "tf.io.decode_and_crop_jpeg", + "tf.image.decode_bmp": "tf.io.decode_bmp", + "tf.image.decode_gif": "tf.io.decode_gif", + "tf.image.decode_image": "tf.io.decode_image", + "tf.image.decode_jpeg": "tf.io.decode_jpeg", + "tf.image.decode_png": "tf.io.decode_png", + "tf.image.encode_jpeg": "tf.io.encode_jpeg", + "tf.image.extract_jpeg_shape": "tf.io.extract_jpeg_shape", + "tf.image.is_jpeg": "tf.io.is_jpeg", + "tf.import_graph_def": "tf.graph_util.import_graph_def", + "tf.initializers": "tf.keras.initializers", + "tf.initializers.Constant": "tf.constant_initializer", + "tf.initializers.Constant.__call__": "tf.keras.initializers.Constant.__call__", + "tf.initializers.Constant.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.Constant.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.Constant.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.Constant.__init__": "tf.keras.initializers.Constant.__init__", + "tf.initializers.Constant.__le__": "tf.keras.Model.__le__", + "tf.initializers.Constant.__lt__": "tf.keras.Model.__lt__", + 
"tf.initializers.Constant.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.Constant.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.Constant.get_config": "tf.keras.initializers.Constant.get_config", + "tf.initializers.GlorotNormal": "tf.keras.initializers.GlorotNormal", + "tf.initializers.GlorotNormal.__call__": "tf.keras.initializers.VarianceScaling.__call__", + "tf.initializers.GlorotNormal.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.GlorotNormal.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.GlorotNormal.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.GlorotNormal.__init__": "tf.keras.initializers.GlorotNormal.__init__", + "tf.initializers.GlorotNormal.__le__": "tf.keras.Model.__le__", + "tf.initializers.GlorotNormal.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.GlorotNormal.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.GlorotNormal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.GlorotNormal.get_config": "tf.keras.initializers.GlorotNormal.get_config", + "tf.initializers.GlorotUniform": "tf.keras.initializers.GlorotUniform", + "tf.initializers.GlorotUniform.__call__": "tf.keras.initializers.VarianceScaling.__call__", + "tf.initializers.GlorotUniform.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.GlorotUniform.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.GlorotUniform.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.GlorotUniform.__init__": "tf.keras.initializers.GlorotUniform.__init__", + "tf.initializers.GlorotUniform.__le__": "tf.keras.Model.__le__", + "tf.initializers.GlorotUniform.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.GlorotUniform.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.GlorotUniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.GlorotUniform.get_config": "tf.keras.initializers.GlorotUniform.get_config", + "tf.initializers.Identity": "tf.keras.initializers.Identity", + "tf.initializers.Identity.__call__": "tf.keras.initializers.Identity.__call__", + "tf.initializers.Identity.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.Identity.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.Identity.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.Identity.__init__": "tf.keras.initializers.Identity.__init__", + "tf.initializers.Identity.__le__": "tf.keras.Model.__le__", + "tf.initializers.Identity.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.Identity.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.Identity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.Identity.get_config": "tf.keras.initializers.Identity.get_config", + "tf.initializers.Initializer": "tf.keras.initializers.Initializer", + "tf.initializers.Initializer.__call__": "tf.keras.initializers.Initializer.__call__", + "tf.initializers.Initializer.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.Initializer.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.Initializer.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.Initializer.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.initializers.Initializer.__le__": "tf.keras.Model.__le__", + "tf.initializers.Initializer.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.Initializer.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.Initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.Initializer.get_config": "tf.keras.initializers.Initializer.get_config", + "tf.initializers.Ones": "tf.ones_initializer", + "tf.initializers.Ones.__call__": 
"tf.keras.initializers.Ones.__call__", + "tf.initializers.Ones.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.Ones.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.Ones.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.Ones.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.initializers.Ones.__le__": "tf.keras.Model.__le__", + "tf.initializers.Ones.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.Ones.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.Ones.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.Ones.get_config": "tf.keras.initializers.Initializer.get_config", + "tf.initializers.Orthogonal": "tf.keras.initializers.Orthogonal", + "tf.initializers.Orthogonal.__call__": "tf.keras.initializers.Orthogonal.__call__", + "tf.initializers.Orthogonal.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.Orthogonal.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.Orthogonal.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.Orthogonal.__init__": "tf.keras.initializers.Orthogonal.__init__", + "tf.initializers.Orthogonal.__le__": "tf.keras.Model.__le__", + "tf.initializers.Orthogonal.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.Orthogonal.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.Orthogonal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.Orthogonal.get_config": "tf.keras.initializers.Orthogonal.get_config", + "tf.initializers.RandomNormal": "tf.random_normal_initializer", + "tf.initializers.RandomNormal.__call__": "tf.keras.initializers.RandomNormal.__call__", + "tf.initializers.RandomNormal.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.RandomNormal.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.RandomNormal.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.RandomNormal.__init__": "tf.keras.initializers.RandomNormal.__init__", + "tf.initializers.RandomNormal.__le__": "tf.keras.Model.__le__", + "tf.initializers.RandomNormal.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.RandomNormal.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.RandomNormal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.RandomNormal.get_config": "tf.keras.initializers.RandomNormal.get_config", + "tf.initializers.RandomUniform": "tf.random_uniform_initializer", + "tf.initializers.RandomUniform.__call__": "tf.keras.initializers.RandomUniform.__call__", + "tf.initializers.RandomUniform.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.RandomUniform.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.RandomUniform.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.RandomUniform.__init__": "tf.keras.initializers.RandomUniform.__init__", + "tf.initializers.RandomUniform.__le__": "tf.keras.Model.__le__", + "tf.initializers.RandomUniform.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.RandomUniform.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.RandomUniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.RandomUniform.get_config": "tf.keras.initializers.RandomUniform.get_config", + "tf.initializers.TruncatedNormal": "tf.keras.initializers.TruncatedNormal", + "tf.initializers.TruncatedNormal.__call__": "tf.keras.initializers.TruncatedNormal.__call__", + "tf.initializers.TruncatedNormal.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.TruncatedNormal.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.TruncatedNormal.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.TruncatedNormal.__init__": "tf.keras.initializers.TruncatedNormal.__init__", + 
"tf.initializers.TruncatedNormal.__le__": "tf.keras.Model.__le__", + "tf.initializers.TruncatedNormal.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.TruncatedNormal.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.TruncatedNormal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.TruncatedNormal.get_config": "tf.keras.initializers.TruncatedNormal.get_config", + "tf.initializers.VarianceScaling": "tf.keras.initializers.VarianceScaling", + "tf.initializers.VarianceScaling.__call__": "tf.keras.initializers.VarianceScaling.__call__", + "tf.initializers.VarianceScaling.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.VarianceScaling.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.VarianceScaling.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.VarianceScaling.__init__": "tf.keras.initializers.VarianceScaling.__init__", + "tf.initializers.VarianceScaling.__le__": "tf.keras.Model.__le__", + "tf.initializers.VarianceScaling.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.VarianceScaling.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.VarianceScaling.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.VarianceScaling.get_config": "tf.keras.initializers.VarianceScaling.get_config", + "tf.initializers.Zeros": "tf.zeros_initializer", + "tf.initializers.Zeros.__call__": "tf.keras.initializers.Zeros.__call__", + "tf.initializers.Zeros.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.Zeros.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.Zeros.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.Zeros.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.initializers.Zeros.__le__": "tf.keras.Model.__le__", + "tf.initializers.Zeros.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.Zeros.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.Zeros.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.Zeros.get_config": "tf.keras.initializers.Initializer.get_config", + "tf.initializers.constant": "tf.constant_initializer", + "tf.initializers.constant.__call__": "tf.keras.initializers.Constant.__call__", + "tf.initializers.constant.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.constant.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.constant.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.constant.__init__": "tf.keras.initializers.Constant.__init__", + "tf.initializers.constant.__le__": "tf.keras.Model.__le__", + "tf.initializers.constant.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.constant.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.constant.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.constant.get_config": "tf.keras.initializers.Constant.get_config", + "tf.initializers.deserialize": "tf.keras.initializers.deserialize", + "tf.initializers.get": "tf.keras.initializers.get", + "tf.initializers.glorot_normal": "tf.keras.initializers.GlorotNormal", + "tf.initializers.glorot_normal.__call__": "tf.keras.initializers.VarianceScaling.__call__", + "tf.initializers.glorot_normal.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.glorot_normal.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.glorot_normal.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.glorot_normal.__init__": "tf.keras.initializers.GlorotNormal.__init__", + "tf.initializers.glorot_normal.__le__": "tf.keras.Model.__le__", + "tf.initializers.glorot_normal.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.glorot_normal.__ne__": "tf.keras.Model.__ne__", + 
"tf.initializers.glorot_normal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.glorot_normal.get_config": "tf.keras.initializers.GlorotNormal.get_config", + "tf.initializers.glorot_uniform": "tf.keras.initializers.GlorotUniform", + "tf.initializers.glorot_uniform.__call__": "tf.keras.initializers.VarianceScaling.__call__", + "tf.initializers.glorot_uniform.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.glorot_uniform.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.glorot_uniform.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.glorot_uniform.__init__": "tf.keras.initializers.GlorotUniform.__init__", + "tf.initializers.glorot_uniform.__le__": "tf.keras.Model.__le__", + "tf.initializers.glorot_uniform.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.glorot_uniform.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.glorot_uniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.glorot_uniform.get_config": "tf.keras.initializers.GlorotUniform.get_config", + "tf.initializers.he_normal": "tf.keras.initializers.he_normal", + "tf.initializers.he_uniform": "tf.keras.initializers.he_uniform", + "tf.initializers.identity": "tf.keras.initializers.Identity", + "tf.initializers.identity.__call__": "tf.keras.initializers.Identity.__call__", + "tf.initializers.identity.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.identity.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.identity.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.identity.__init__": "tf.keras.initializers.Identity.__init__", + "tf.initializers.identity.__le__": "tf.keras.Model.__le__", + "tf.initializers.identity.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.identity.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.identity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.identity.get_config": "tf.keras.initializers.Identity.get_config", + "tf.initializers.lecun_normal": "tf.keras.initializers.lecun_normal", + "tf.initializers.lecun_uniform": "tf.keras.initializers.lecun_uniform", + "tf.initializers.ones": "tf.ones_initializer", + "tf.initializers.ones.__call__": "tf.keras.initializers.Ones.__call__", + "tf.initializers.ones.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.ones.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.ones.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.ones.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.initializers.ones.__le__": "tf.keras.Model.__le__", + "tf.initializers.ones.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.ones.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.ones.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.ones.get_config": "tf.keras.initializers.Initializer.get_config", + "tf.initializers.orthogonal": "tf.keras.initializers.Orthogonal", + "tf.initializers.orthogonal.__call__": "tf.keras.initializers.Orthogonal.__call__", + "tf.initializers.orthogonal.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.orthogonal.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.orthogonal.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.orthogonal.__init__": "tf.keras.initializers.Orthogonal.__init__", + "tf.initializers.orthogonal.__le__": "tf.keras.Model.__le__", + "tf.initializers.orthogonal.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.orthogonal.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.orthogonal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.orthogonal.get_config": 
"tf.keras.initializers.Orthogonal.get_config", + "tf.initializers.serialize": "tf.keras.initializers.serialize", + "tf.initializers.zeros": "tf.zeros_initializer", + "tf.initializers.zeros.__call__": "tf.keras.initializers.Zeros.__call__", + "tf.initializers.zeros.__eq__": "tf.keras.Model.__eq__", + "tf.initializers.zeros.__ge__": "tf.keras.Model.__ge__", + "tf.initializers.zeros.__gt__": "tf.keras.Model.__gt__", + "tf.initializers.zeros.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.initializers.zeros.__le__": "tf.keras.Model.__le__", + "tf.initializers.zeros.__lt__": "tf.keras.Model.__lt__", + "tf.initializers.zeros.__ne__": "tf.keras.Model.__ne__", + "tf.initializers.zeros.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.initializers.zeros.get_config": "tf.keras.initializers.Initializer.get_config", + "tf.int16": "tf.dtypes.int16", + "tf.int32": "tf.dtypes.int32", + "tf.int64": "tf.dtypes.int64", + "tf.int8": "tf.dtypes.int8", + "tf.io.FixedLenFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.io.FixedLenFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.io.FixedLenFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.FixedLenFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.FixedLenFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.FixedLenFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.FixedLenFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.FixedLenFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.FixedLenFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.FixedLenFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.io.FixedLenFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.FixedLenFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.FixedLenFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.FixedLenFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.FixedLenFeature.count": "tf.config.LogicalDevice.count", + "tf.io.FixedLenFeature.index": "tf.config.LogicalDevice.index", + "tf.io.FixedLenSequenceFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.io.FixedLenSequenceFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.io.FixedLenSequenceFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.FixedLenSequenceFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.FixedLenSequenceFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.FixedLenSequenceFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.FixedLenSequenceFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.FixedLenSequenceFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.FixedLenSequenceFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.FixedLenSequenceFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.io.FixedLenSequenceFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.FixedLenSequenceFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.FixedLenSequenceFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.FixedLenSequenceFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.FixedLenSequenceFeature.count": "tf.config.LogicalDevice.count", + "tf.io.FixedLenSequenceFeature.index": "tf.config.LogicalDevice.index", + "tf.io.RaggedFeature.RowLengths.__add__": "tf.config.LogicalDevice.__add__", + "tf.io.RaggedFeature.RowLengths.__contains__": "tf.config.LogicalDevice.__contains__", + 
"tf.io.RaggedFeature.RowLengths.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.RaggedFeature.RowLengths.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.RaggedFeature.RowLengths.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.RaggedFeature.RowLengths.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.RaggedFeature.RowLengths.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.RaggedFeature.RowLengths.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.RaggedFeature.RowLengths.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.RaggedFeature.RowLengths.__len__": "tf.config.LogicalDevice.__len__", + "tf.io.RaggedFeature.RowLengths.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.RaggedFeature.RowLengths.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.RaggedFeature.RowLengths.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.RaggedFeature.RowLengths.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.RaggedFeature.RowLengths.count": "tf.config.LogicalDevice.count", + "tf.io.RaggedFeature.RowLengths.index": "tf.config.LogicalDevice.index", + "tf.io.RaggedFeature.RowLimits.__add__": "tf.config.LogicalDevice.__add__", + "tf.io.RaggedFeature.RowLimits.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.io.RaggedFeature.RowLimits.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.RaggedFeature.RowLimits.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.RaggedFeature.RowLimits.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.RaggedFeature.RowLimits.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.RaggedFeature.RowLimits.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.RaggedFeature.RowLimits.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.RaggedFeature.RowLimits.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.RaggedFeature.RowLimits.__len__": "tf.config.LogicalDevice.__len__", + "tf.io.RaggedFeature.RowLimits.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.RaggedFeature.RowLimits.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.RaggedFeature.RowLimits.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.RaggedFeature.RowLimits.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.RaggedFeature.RowLimits.count": "tf.config.LogicalDevice.count", + "tf.io.RaggedFeature.RowLimits.index": "tf.config.LogicalDevice.index", + "tf.io.RaggedFeature.RowSplits.__add__": "tf.config.LogicalDevice.__add__", + "tf.io.RaggedFeature.RowSplits.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.io.RaggedFeature.RowSplits.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.RaggedFeature.RowSplits.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.RaggedFeature.RowSplits.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.RaggedFeature.RowSplits.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.RaggedFeature.RowSplits.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.RaggedFeature.RowSplits.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.RaggedFeature.RowSplits.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.RaggedFeature.RowSplits.__len__": "tf.config.LogicalDevice.__len__", + "tf.io.RaggedFeature.RowSplits.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.RaggedFeature.RowSplits.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.RaggedFeature.RowSplits.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.RaggedFeature.RowSplits.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.RaggedFeature.RowSplits.count": "tf.config.LogicalDevice.count", + 
"tf.io.RaggedFeature.RowSplits.index": "tf.config.LogicalDevice.index", + "tf.io.RaggedFeature.RowStarts.__add__": "tf.config.LogicalDevice.__add__", + "tf.io.RaggedFeature.RowStarts.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.io.RaggedFeature.RowStarts.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.RaggedFeature.RowStarts.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.RaggedFeature.RowStarts.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.RaggedFeature.RowStarts.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.RaggedFeature.RowStarts.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.RaggedFeature.RowStarts.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.RaggedFeature.RowStarts.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.RaggedFeature.RowStarts.__len__": "tf.config.LogicalDevice.__len__", + "tf.io.RaggedFeature.RowStarts.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.RaggedFeature.RowStarts.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.RaggedFeature.RowStarts.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.RaggedFeature.RowStarts.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.RaggedFeature.RowStarts.count": "tf.config.LogicalDevice.count", + "tf.io.RaggedFeature.RowStarts.index": "tf.config.LogicalDevice.index", + "tf.io.RaggedFeature.UniformRowLength.__add__": "tf.config.LogicalDevice.__add__", + "tf.io.RaggedFeature.UniformRowLength.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.io.RaggedFeature.UniformRowLength.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.RaggedFeature.UniformRowLength.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.RaggedFeature.UniformRowLength.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.RaggedFeature.UniformRowLength.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.RaggedFeature.UniformRowLength.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.RaggedFeature.UniformRowLength.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.RaggedFeature.UniformRowLength.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.RaggedFeature.UniformRowLength.__len__": "tf.config.LogicalDevice.__len__", + "tf.io.RaggedFeature.UniformRowLength.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.RaggedFeature.UniformRowLength.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.RaggedFeature.UniformRowLength.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.RaggedFeature.UniformRowLength.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.RaggedFeature.UniformRowLength.count": "tf.config.LogicalDevice.count", + "tf.io.RaggedFeature.UniformRowLength.index": "tf.config.LogicalDevice.index", + "tf.io.RaggedFeature.ValueRowIds.__add__": "tf.config.LogicalDevice.__add__", + "tf.io.RaggedFeature.ValueRowIds.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.io.RaggedFeature.ValueRowIds.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.RaggedFeature.ValueRowIds.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.RaggedFeature.ValueRowIds.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.RaggedFeature.ValueRowIds.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.RaggedFeature.ValueRowIds.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.RaggedFeature.ValueRowIds.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.RaggedFeature.ValueRowIds.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.RaggedFeature.ValueRowIds.__len__": "tf.config.LogicalDevice.__len__", + 
"tf.io.RaggedFeature.ValueRowIds.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.RaggedFeature.ValueRowIds.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.RaggedFeature.ValueRowIds.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.RaggedFeature.ValueRowIds.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.RaggedFeature.ValueRowIds.count": "tf.config.LogicalDevice.count", + "tf.io.RaggedFeature.ValueRowIds.index": "tf.config.LogicalDevice.index", + "tf.io.RaggedFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.io.RaggedFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.io.RaggedFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.RaggedFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.RaggedFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.RaggedFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.RaggedFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.RaggedFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.RaggedFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.RaggedFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.io.RaggedFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.RaggedFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.RaggedFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.RaggedFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.RaggedFeature.count": "tf.config.LogicalDevice.count", + "tf.io.RaggedFeature.index": "tf.config.LogicalDevice.index", + "tf.io.SparseFeature.__add__": "tf.config.LogicalDevice.__add__", + "tf.io.SparseFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.io.SparseFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.SparseFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.SparseFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.SparseFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.SparseFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.SparseFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.SparseFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.SparseFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.io.SparseFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.SparseFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.SparseFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.SparseFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.SparseFeature.count": "tf.config.LogicalDevice.count", + "tf.io.SparseFeature.index": "tf.config.LogicalDevice.index", + "tf.io.TFRecordOptions.__eq__": "tf.keras.Model.__eq__", + "tf.io.TFRecordOptions.__ge__": "tf.keras.Model.__ge__", + "tf.io.TFRecordOptions.__gt__": "tf.keras.Model.__gt__", + "tf.io.TFRecordOptions.__le__": "tf.keras.Model.__le__", + "tf.io.TFRecordOptions.__lt__": "tf.keras.Model.__lt__", + "tf.io.TFRecordOptions.__ne__": "tf.keras.Model.__ne__", + "tf.io.TFRecordOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.io.TFRecordWriter.__eq__": "tf.keras.Model.__eq__", + "tf.io.TFRecordWriter.__ge__": "tf.keras.Model.__ge__", + "tf.io.TFRecordWriter.__gt__": "tf.keras.Model.__gt__", + "tf.io.TFRecordWriter.__le__": "tf.keras.Model.__le__", + "tf.io.TFRecordWriter.__lt__": "tf.keras.Model.__lt__", + "tf.io.TFRecordWriter.__ne__": "tf.keras.Model.__ne__", + "tf.io.TFRecordWriter.__new__": "tf.dtypes.DType.__new__", + "tf.io.VarLenFeature.__add__": 
"tf.config.LogicalDevice.__add__", + "tf.io.VarLenFeature.__contains__": "tf.config.LogicalDevice.__contains__", + "tf.io.VarLenFeature.__eq__": "tf.config.LogicalDevice.__eq__", + "tf.io.VarLenFeature.__ge__": "tf.config.LogicalDevice.__ge__", + "tf.io.VarLenFeature.__getitem__": "tf.config.LogicalDevice.__getitem__", + "tf.io.VarLenFeature.__gt__": "tf.config.LogicalDevice.__gt__", + "tf.io.VarLenFeature.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.io.VarLenFeature.__iter__": "tf.config.LogicalDevice.__iter__", + "tf.io.VarLenFeature.__le__": "tf.config.LogicalDevice.__le__", + "tf.io.VarLenFeature.__len__": "tf.config.LogicalDevice.__len__", + "tf.io.VarLenFeature.__lt__": "tf.config.LogicalDevice.__lt__", + "tf.io.VarLenFeature.__mul__": "tf.config.LogicalDevice.__mul__", + "tf.io.VarLenFeature.__ne__": "tf.config.LogicalDevice.__ne__", + "tf.io.VarLenFeature.__rmul__": "tf.config.LogicalDevice.__rmul__", + "tf.io.VarLenFeature.count": "tf.config.LogicalDevice.count", + "tf.io.VarLenFeature.index": "tf.config.LogicalDevice.index", + "tf.io.gfile.GFile.__eq__": "tf.keras.Model.__eq__", + "tf.io.gfile.GFile.__ge__": "tf.keras.Model.__ge__", + "tf.io.gfile.GFile.__gt__": "tf.keras.Model.__gt__", + "tf.io.gfile.GFile.__le__": "tf.keras.Model.__le__", + "tf.io.gfile.GFile.__lt__": "tf.keras.Model.__lt__", + "tf.io.gfile.GFile.__ne__": "tf.keras.Model.__ne__", + "tf.io.gfile.GFile.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.Model.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.Model.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.Model.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.Model.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.Model.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.Model.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.Model.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.Model.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.Model.input": "tf.keras.layers.Layer.input", + "tf.keras.Model.losses": "tf.keras.layers.Layer.losses", + "tf.keras.Model.name": "tf.keras.layers.Layer.name", + "tf.keras.Model.name_scope": "tf.Module.name_scope", + "tf.keras.Model.output": "tf.keras.layers.Layer.output", + "tf.keras.Model.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.Model.submodules": "tf.Module.submodules", + "tf.keras.Model.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.Sequential.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.Sequential.__eq__": "tf.keras.Model.__eq__", + "tf.keras.Sequential.__ge__": "tf.keras.Model.__ge__", + "tf.keras.Sequential.__gt__": "tf.keras.Model.__gt__", + "tf.keras.Sequential.__le__": "tf.keras.Model.__le__", + "tf.keras.Sequential.__lt__": "tf.keras.Model.__lt__", + "tf.keras.Sequential.__ne__": "tf.keras.Model.__ne__", + "tf.keras.Sequential.__new__": "tf.keras.Model.__new__", + "tf.keras.Sequential.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.Sequential.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.Sequential.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.Sequential.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.Sequential.compile": "tf.keras.Model.compile", + "tf.keras.Sequential.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.Sequential.count_params": "tf.keras.layers.Layer.count_params", + 
"tf.keras.Sequential.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.keras.Sequential.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.Sequential.evaluate": "tf.keras.Model.evaluate", + "tf.keras.Sequential.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.keras.Sequential.fit": "tf.keras.Model.fit", + "tf.keras.Sequential.fit_generator": "tf.keras.Model.fit_generator", + "tf.keras.Sequential.get_layer": "tf.keras.Model.get_layer", + "tf.keras.Sequential.get_weights": "tf.keras.Model.get_weights", + "tf.keras.Sequential.input": "tf.keras.layers.Layer.input", + "tf.keras.Sequential.load_weights": "tf.keras.Model.load_weights", + "tf.keras.Sequential.losses": "tf.keras.layers.Layer.losses", + "tf.keras.Sequential.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.keras.Sequential.make_test_function": "tf.keras.Model.make_test_function", + "tf.keras.Sequential.make_train_function": "tf.keras.Model.make_train_function", + "tf.keras.Sequential.metrics": "tf.keras.Model.metrics", + "tf.keras.Sequential.metrics_names": "tf.keras.Model.metrics_names", + "tf.keras.Sequential.name": "tf.keras.layers.Layer.name", + "tf.keras.Sequential.name_scope": "tf.Module.name_scope", + "tf.keras.Sequential.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.keras.Sequential.output": "tf.keras.layers.Layer.output", + "tf.keras.Sequential.predict": "tf.keras.Model.predict", + "tf.keras.Sequential.predict_generator": "tf.keras.Model.predict_generator", + "tf.keras.Sequential.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.keras.Sequential.predict_step": "tf.keras.Model.predict_step", + "tf.keras.Sequential.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.keras.Sequential.reset_states": "tf.keras.Model.reset_states", + "tf.keras.Sequential.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.keras.Sequential.save": "tf.keras.Model.save", + "tf.keras.Sequential.save_weights": "tf.keras.Model.save_weights", + "tf.keras.Sequential.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.Sequential.state_updates": "tf.keras.Model.state_updates", + "tf.keras.Sequential.stateful": "tf.keras.Model.stateful", + "tf.keras.Sequential.submodules": "tf.Module.submodules", + "tf.keras.Sequential.summary": "tf.keras.Model.summary", + "tf.keras.Sequential.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.keras.Sequential.test_step": "tf.keras.Model.test_step", + "tf.keras.Sequential.to_json": "tf.keras.Model.to_json", + "tf.keras.Sequential.to_yaml": "tf.keras.Model.to_yaml", + "tf.keras.Sequential.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.keras.Sequential.train_step": "tf.keras.Model.train_step", + "tf.keras.Sequential.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.Sequential.trainable_weights": "tf.keras.Model.trainable_weights", + "tf.keras.Sequential.weights": "tf.keras.Model.weights", + "tf.keras.applications.densenet.DenseNet121": "tf.keras.applications.DenseNet121", + "tf.keras.applications.densenet.DenseNet169": "tf.keras.applications.DenseNet169", + "tf.keras.applications.densenet.DenseNet201": "tf.keras.applications.DenseNet201", + "tf.keras.applications.inception_resnet_v2.InceptionResNetV2": "tf.keras.applications.InceptionResNetV2", + "tf.keras.applications.inception_v3.InceptionV3": "tf.keras.applications.InceptionV3", + "tf.keras.applications.mobilenet.MobileNet": "tf.keras.applications.MobileNet", + "tf.keras.applications.mobilenet_v2.MobileNetV2": "tf.keras.applications.MobileNetV2", + 
"tf.keras.applications.nasnet.NASNetLarge": "tf.keras.applications.NASNetLarge", + "tf.keras.applications.nasnet.NASNetMobile": "tf.keras.applications.NASNetMobile", + "tf.keras.applications.resnet.ResNet101": "tf.keras.applications.ResNet101", + "tf.keras.applications.resnet.ResNet152": "tf.keras.applications.ResNet152", + "tf.keras.applications.resnet.ResNet50": "tf.keras.applications.ResNet50", + "tf.keras.applications.resnet50.ResNet50": "tf.keras.applications.ResNet50", + "tf.keras.applications.resnet50.decode_predictions": "tf.keras.applications.resnet.decode_predictions", + "tf.keras.applications.resnet50.preprocess_input": "tf.keras.applications.resnet.preprocess_input", + "tf.keras.applications.resnet_v2.ResNet101V2": "tf.keras.applications.ResNet101V2", + "tf.keras.applications.resnet_v2.ResNet152V2": "tf.keras.applications.ResNet152V2", + "tf.keras.applications.resnet_v2.ResNet50V2": "tf.keras.applications.ResNet50V2", + "tf.keras.applications.vgg16.VGG16": "tf.keras.applications.VGG16", + "tf.keras.applications.vgg19.VGG19": "tf.keras.applications.VGG19", + "tf.keras.applications.xception.Xception": "tf.keras.applications.Xception", + "tf.keras.callbacks.BaseLogger.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.BaseLogger.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.BaseLogger.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.BaseLogger.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.BaseLogger.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.BaseLogger.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.BaseLogger.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.BaseLogger.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.BaseLogger.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.BaseLogger.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.BaseLogger.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.BaseLogger.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.BaseLogger.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.keras.callbacks.BaseLogger.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.keras.callbacks.BaseLogger.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.BaseLogger.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.BaseLogger.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.keras.callbacks.BaseLogger.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.keras.callbacks.BaseLogger.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.keras.callbacks.BaseLogger.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.keras.callbacks.BaseLogger.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.keras.callbacks.CSVLogger.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.CSVLogger.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.CSVLogger.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.CSVLogger.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.CSVLogger.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.CSVLogger.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.CSVLogger.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.keras.callbacks.CSVLogger.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.CSVLogger.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.keras.callbacks.CSVLogger.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.keras.callbacks.CSVLogger.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.CSVLogger.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.CSVLogger.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.CSVLogger.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.CSVLogger.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.CSVLogger.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.keras.callbacks.CSVLogger.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.keras.callbacks.CSVLogger.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.CSVLogger.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.CSVLogger.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.keras.callbacks.CSVLogger.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.keras.callbacks.CSVLogger.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.keras.callbacks.Callback.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.Callback.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.Callback.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.Callback.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.Callback.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.Callback.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.Callback.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.EarlyStopping.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.EarlyStopping.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.EarlyStopping.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.EarlyStopping.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.EarlyStopping.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.EarlyStopping.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.EarlyStopping.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.EarlyStopping.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.EarlyStopping.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.keras.callbacks.EarlyStopping.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.keras.callbacks.EarlyStopping.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.EarlyStopping.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.EarlyStopping.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.EarlyStopping.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.EarlyStopping.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.EarlyStopping.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.keras.callbacks.EarlyStopping.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.keras.callbacks.EarlyStopping.on_test_end": 
"tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.EarlyStopping.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.EarlyStopping.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.keras.callbacks.EarlyStopping.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.keras.callbacks.EarlyStopping.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.keras.callbacks.History.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.History.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.History.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.History.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.History.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.History.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.History.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.History.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.History.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.keras.callbacks.History.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.keras.callbacks.History.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.History.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.History.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.History.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.History.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.History.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.keras.callbacks.History.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.keras.callbacks.History.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.History.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.History.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.keras.callbacks.History.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.keras.callbacks.History.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.keras.callbacks.History.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.keras.callbacks.LambdaCallback.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.LambdaCallback.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.LambdaCallback.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.LambdaCallback.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.LambdaCallback.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.LambdaCallback.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.LambdaCallback.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.LambdaCallback.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.LambdaCallback.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.keras.callbacks.LambdaCallback.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.keras.callbacks.LambdaCallback.on_epoch_end": "tf.keras.callbacks.Callback.on_epoch_end", + "tf.keras.callbacks.LambdaCallback.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.LambdaCallback.on_predict_batch_end": 
"tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.LambdaCallback.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.LambdaCallback.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.LambdaCallback.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.LambdaCallback.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.keras.callbacks.LambdaCallback.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.keras.callbacks.LambdaCallback.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.LambdaCallback.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.LambdaCallback.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.keras.callbacks.LambdaCallback.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.keras.callbacks.LambdaCallback.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.keras.callbacks.LambdaCallback.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.keras.callbacks.LambdaCallback.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.keras.callbacks.LearningRateScheduler.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.LearningRateScheduler.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.LearningRateScheduler.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.LearningRateScheduler.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.LearningRateScheduler.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.LearningRateScheduler.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.LearningRateScheduler.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.LearningRateScheduler.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.LearningRateScheduler.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.keras.callbacks.LearningRateScheduler.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.LearningRateScheduler.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.LearningRateScheduler.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.LearningRateScheduler.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.LearningRateScheduler.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.LearningRateScheduler.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.keras.callbacks.LearningRateScheduler.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.keras.callbacks.LearningRateScheduler.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.LearningRateScheduler.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.LearningRateScheduler.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.keras.callbacks.LearningRateScheduler.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.keras.callbacks.LearningRateScheduler.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.keras.callbacks.LearningRateScheduler.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.keras.callbacks.LearningRateScheduler.set_params": 
"tf.keras.callbacks.Callback.set_params", + "tf.keras.callbacks.ModelCheckpoint.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.ModelCheckpoint.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.ModelCheckpoint.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.ModelCheckpoint.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.ModelCheckpoint.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.ModelCheckpoint.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.ModelCheckpoint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.ModelCheckpoint.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.ModelCheckpoint.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.ModelCheckpoint.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.ModelCheckpoint.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.ModelCheckpoint.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.ModelCheckpoint.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.ModelCheckpoint.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.keras.callbacks.ModelCheckpoint.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.keras.callbacks.ModelCheckpoint.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.ModelCheckpoint.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.ModelCheckpoint.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.keras.callbacks.ModelCheckpoint.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.keras.callbacks.ProgbarLogger.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.ProgbarLogger.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.ProgbarLogger.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.ProgbarLogger.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.ProgbarLogger.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.ProgbarLogger.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.ProgbarLogger.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.ProgbarLogger.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.ProgbarLogger.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.keras.callbacks.ProgbarLogger.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.ProgbarLogger.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.ProgbarLogger.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.ProgbarLogger.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.keras.callbacks.ProgbarLogger.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.keras.callbacks.ReduceLROnPlateau.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.ReduceLROnPlateau.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.ReduceLROnPlateau.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.ReduceLROnPlateau.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.ReduceLROnPlateau.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.ReduceLROnPlateau.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.ReduceLROnPlateau.__new__": 
"tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.ReduceLROnPlateau.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.ReduceLROnPlateau.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.keras.callbacks.ReduceLROnPlateau.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.keras.callbacks.ReduceLROnPlateau.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.ReduceLROnPlateau.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.ReduceLROnPlateau.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.ReduceLROnPlateau.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.ReduceLROnPlateau.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.ReduceLROnPlateau.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.keras.callbacks.ReduceLROnPlateau.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.keras.callbacks.ReduceLROnPlateau.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.ReduceLROnPlateau.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.ReduceLROnPlateau.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.keras.callbacks.ReduceLROnPlateau.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.keras.callbacks.ReduceLROnPlateau.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.keras.callbacks.ReduceLROnPlateau.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.keras.callbacks.RemoteMonitor.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.RemoteMonitor.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.RemoteMonitor.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.RemoteMonitor.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.RemoteMonitor.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.RemoteMonitor.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.RemoteMonitor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.RemoteMonitor.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.RemoteMonitor.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.keras.callbacks.RemoteMonitor.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.keras.callbacks.RemoteMonitor.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.RemoteMonitor.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.RemoteMonitor.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.RemoteMonitor.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.RemoteMonitor.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.RemoteMonitor.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.keras.callbacks.RemoteMonitor.on_test_begin": "tf.keras.callbacks.Callback.on_test_begin", + "tf.keras.callbacks.RemoteMonitor.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.RemoteMonitor.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.RemoteMonitor.on_train_batch_end": 
"tf.keras.callbacks.Callback.on_train_batch_end", + "tf.keras.callbacks.RemoteMonitor.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.keras.callbacks.RemoteMonitor.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.keras.callbacks.RemoteMonitor.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.keras.callbacks.RemoteMonitor.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.keras.callbacks.TensorBoard.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.TensorBoard.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.TensorBoard.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.TensorBoard.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.TensorBoard.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.TensorBoard.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.TensorBoard.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.TensorBoard.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.TensorBoard.on_batch_end": "tf.keras.callbacks.Callback.on_batch_end", + "tf.keras.callbacks.TensorBoard.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.TensorBoard.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.TensorBoard.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.TensorBoard.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.TensorBoard.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.TensorBoard.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.TensorBoard.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.TensorBoard.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.keras.callbacks.TerminateOnNaN.__eq__": "tf.keras.Model.__eq__", + "tf.keras.callbacks.TerminateOnNaN.__ge__": "tf.keras.Model.__ge__", + "tf.keras.callbacks.TerminateOnNaN.__gt__": "tf.keras.Model.__gt__", + "tf.keras.callbacks.TerminateOnNaN.__init__": "tf.keras.callbacks.Callback.__init__", + "tf.keras.callbacks.TerminateOnNaN.__le__": "tf.keras.Model.__le__", + "tf.keras.callbacks.TerminateOnNaN.__lt__": "tf.keras.Model.__lt__", + "tf.keras.callbacks.TerminateOnNaN.__ne__": "tf.keras.Model.__ne__", + "tf.keras.callbacks.TerminateOnNaN.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.callbacks.TerminateOnNaN.on_batch_begin": "tf.keras.callbacks.Callback.on_batch_begin", + "tf.keras.callbacks.TerminateOnNaN.on_epoch_begin": "tf.keras.callbacks.Callback.on_epoch_begin", + "tf.keras.callbacks.TerminateOnNaN.on_epoch_end": "tf.keras.callbacks.Callback.on_epoch_end", + "tf.keras.callbacks.TerminateOnNaN.on_predict_batch_begin": "tf.keras.callbacks.Callback.on_predict_batch_begin", + "tf.keras.callbacks.TerminateOnNaN.on_predict_batch_end": "tf.keras.callbacks.Callback.on_predict_batch_end", + "tf.keras.callbacks.TerminateOnNaN.on_predict_begin": "tf.keras.callbacks.Callback.on_predict_begin", + "tf.keras.callbacks.TerminateOnNaN.on_predict_end": "tf.keras.callbacks.Callback.on_predict_end", + "tf.keras.callbacks.TerminateOnNaN.on_test_batch_begin": "tf.keras.callbacks.Callback.on_test_batch_begin", + "tf.keras.callbacks.TerminateOnNaN.on_test_batch_end": "tf.keras.callbacks.Callback.on_test_batch_end", + "tf.keras.callbacks.TerminateOnNaN.on_test_begin": 
"tf.keras.callbacks.Callback.on_test_begin", + "tf.keras.callbacks.TerminateOnNaN.on_test_end": "tf.keras.callbacks.Callback.on_test_end", + "tf.keras.callbacks.TerminateOnNaN.on_train_batch_begin": "tf.keras.callbacks.Callback.on_train_batch_begin", + "tf.keras.callbacks.TerminateOnNaN.on_train_batch_end": "tf.keras.callbacks.Callback.on_train_batch_end", + "tf.keras.callbacks.TerminateOnNaN.on_train_begin": "tf.keras.callbacks.Callback.on_train_begin", + "tf.keras.callbacks.TerminateOnNaN.on_train_end": "tf.keras.callbacks.Callback.on_train_end", + "tf.keras.callbacks.TerminateOnNaN.set_model": "tf.keras.callbacks.Callback.set_model", + "tf.keras.callbacks.TerminateOnNaN.set_params": "tf.keras.callbacks.Callback.set_params", + "tf.keras.constraints.Constraint.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.Constraint.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.Constraint.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.Constraint.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.Constraint.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.Constraint.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.Constraint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.constraints.MaxNorm.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.MaxNorm.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.MaxNorm.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.MaxNorm.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.MaxNorm.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.MaxNorm.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.MaxNorm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.constraints.MinMaxNorm.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.MinMaxNorm.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.MinMaxNorm.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.MinMaxNorm.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.MinMaxNorm.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.MinMaxNorm.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.MinMaxNorm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.constraints.NonNeg.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.NonNeg.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.NonNeg.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.NonNeg.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.constraints.NonNeg.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.NonNeg.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.NonNeg.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.NonNeg.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.constraints.NonNeg.get_config": "tf.keras.constraints.Constraint.get_config", + "tf.keras.constraints.RadialConstraint.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.RadialConstraint.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.RadialConstraint.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.RadialConstraint.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.constraints.RadialConstraint.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.RadialConstraint.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.RadialConstraint.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.RadialConstraint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.keras.constraints.RadialConstraint.get_config": "tf.keras.constraints.Constraint.get_config", + "tf.keras.constraints.UnitNorm.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.UnitNorm.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.UnitNorm.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.UnitNorm.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.UnitNorm.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.UnitNorm.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.UnitNorm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.constraints.max_norm": "tf.keras.constraints.MaxNorm", + "tf.keras.constraints.max_norm.__call__": "tf.keras.constraints.MaxNorm.__call__", + "tf.keras.constraints.max_norm.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.max_norm.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.max_norm.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.max_norm.__init__": "tf.keras.constraints.MaxNorm.__init__", + "tf.keras.constraints.max_norm.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.max_norm.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.max_norm.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.max_norm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.constraints.max_norm.get_config": "tf.keras.constraints.MaxNorm.get_config", + "tf.keras.constraints.min_max_norm": "tf.keras.constraints.MinMaxNorm", + "tf.keras.constraints.min_max_norm.__call__": "tf.keras.constraints.MinMaxNorm.__call__", + "tf.keras.constraints.min_max_norm.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.min_max_norm.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.min_max_norm.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.min_max_norm.__init__": "tf.keras.constraints.MinMaxNorm.__init__", + "tf.keras.constraints.min_max_norm.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.min_max_norm.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.min_max_norm.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.min_max_norm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.constraints.min_max_norm.get_config": "tf.keras.constraints.MinMaxNorm.get_config", + "tf.keras.constraints.non_neg": "tf.keras.constraints.NonNeg", + "tf.keras.constraints.non_neg.__call__": "tf.keras.constraints.NonNeg.__call__", + "tf.keras.constraints.non_neg.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.non_neg.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.non_neg.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.non_neg.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.constraints.non_neg.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.non_neg.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.non_neg.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.non_neg.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.constraints.non_neg.get_config": "tf.keras.constraints.Constraint.get_config", + "tf.keras.constraints.radial_constraint": "tf.keras.constraints.RadialConstraint", + "tf.keras.constraints.radial_constraint.__call__": "tf.keras.constraints.RadialConstraint.__call__", + "tf.keras.constraints.radial_constraint.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.radial_constraint.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.radial_constraint.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.radial_constraint.__init__": 
"tf.keras.constraints.Constraint.__init__", + "tf.keras.constraints.radial_constraint.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.radial_constraint.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.radial_constraint.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.radial_constraint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.constraints.radial_constraint.get_config": "tf.keras.constraints.Constraint.get_config", + "tf.keras.constraints.unit_norm": "tf.keras.constraints.UnitNorm", + "tf.keras.constraints.unit_norm.__call__": "tf.keras.constraints.UnitNorm.__call__", + "tf.keras.constraints.unit_norm.__eq__": "tf.keras.Model.__eq__", + "tf.keras.constraints.unit_norm.__ge__": "tf.keras.Model.__ge__", + "tf.keras.constraints.unit_norm.__gt__": "tf.keras.Model.__gt__", + "tf.keras.constraints.unit_norm.__init__": "tf.keras.constraints.UnitNorm.__init__", + "tf.keras.constraints.unit_norm.__le__": "tf.keras.Model.__le__", + "tf.keras.constraints.unit_norm.__lt__": "tf.keras.Model.__lt__", + "tf.keras.constraints.unit_norm.__ne__": "tf.keras.Model.__ne__", + "tf.keras.constraints.unit_norm.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.constraints.unit_norm.get_config": "tf.keras.constraints.UnitNorm.get_config", + "tf.keras.experimental.CosineDecay.__eq__": "tf.keras.Model.__eq__", + "tf.keras.experimental.CosineDecay.__ge__": "tf.keras.Model.__ge__", + "tf.keras.experimental.CosineDecay.__gt__": "tf.keras.Model.__gt__", + "tf.keras.experimental.CosineDecay.__le__": "tf.keras.Model.__le__", + "tf.keras.experimental.CosineDecay.__lt__": "tf.keras.Model.__lt__", + "tf.keras.experimental.CosineDecay.__ne__": "tf.keras.Model.__ne__", + "tf.keras.experimental.CosineDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.experimental.CosineDecayRestarts.__eq__": "tf.keras.Model.__eq__", + "tf.keras.experimental.CosineDecayRestarts.__ge__": "tf.keras.Model.__ge__", + "tf.keras.experimental.CosineDecayRestarts.__gt__": "tf.keras.Model.__gt__", + "tf.keras.experimental.CosineDecayRestarts.__le__": "tf.keras.Model.__le__", + "tf.keras.experimental.CosineDecayRestarts.__lt__": "tf.keras.Model.__lt__", + "tf.keras.experimental.CosineDecayRestarts.__ne__": "tf.keras.Model.__ne__", + "tf.keras.experimental.CosineDecayRestarts.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.experimental.LinearCosineDecay.__eq__": "tf.keras.Model.__eq__", + "tf.keras.experimental.LinearCosineDecay.__ge__": "tf.keras.Model.__ge__", + "tf.keras.experimental.LinearCosineDecay.__gt__": "tf.keras.Model.__gt__", + "tf.keras.experimental.LinearCosineDecay.__le__": "tf.keras.Model.__le__", + "tf.keras.experimental.LinearCosineDecay.__lt__": "tf.keras.Model.__lt__", + "tf.keras.experimental.LinearCosineDecay.__ne__": "tf.keras.Model.__ne__", + "tf.keras.experimental.LinearCosineDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.experimental.LinearModel.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.experimental.LinearModel.__eq__": "tf.keras.Model.__eq__", + "tf.keras.experimental.LinearModel.__ge__": "tf.keras.Model.__ge__", + "tf.keras.experimental.LinearModel.__gt__": "tf.keras.Model.__gt__", + "tf.keras.experimental.LinearModel.__le__": "tf.keras.Model.__le__", + "tf.keras.experimental.LinearModel.__lt__": "tf.keras.Model.__lt__", + "tf.keras.experimental.LinearModel.__ne__": "tf.keras.Model.__ne__", + "tf.keras.experimental.LinearModel.__new__": "tf.keras.Model.__new__", + 
"tf.keras.experimental.LinearModel.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.experimental.LinearModel.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.experimental.LinearModel.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.experimental.LinearModel.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.experimental.LinearModel.compile": "tf.keras.Model.compile", + "tf.keras.experimental.LinearModel.compute_mask": "tf.keras.Model.compute_mask", + "tf.keras.experimental.LinearModel.compute_output_shape": "tf.keras.Model.compute_output_shape", + "tf.keras.experimental.LinearModel.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.experimental.LinearModel.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.experimental.LinearModel.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.keras.experimental.LinearModel.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.experimental.LinearModel.dynamic": "tf.keras.Model.dynamic", + "tf.keras.experimental.LinearModel.evaluate": "tf.keras.Model.evaluate", + "tf.keras.experimental.LinearModel.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.keras.experimental.LinearModel.fit": "tf.keras.Model.fit", + "tf.keras.experimental.LinearModel.fit_generator": "tf.keras.Model.fit_generator", + "tf.keras.experimental.LinearModel.get_layer": "tf.keras.Model.get_layer", + "tf.keras.experimental.LinearModel.get_weights": "tf.keras.Model.get_weights", + "tf.keras.experimental.LinearModel.input": "tf.keras.layers.Layer.input", + "tf.keras.experimental.LinearModel.input_spec": "tf.keras.Model.input_spec", + "tf.keras.experimental.LinearModel.layers": "tf.keras.Model.layers", + "tf.keras.experimental.LinearModel.load_weights": "tf.keras.Model.load_weights", + "tf.keras.experimental.LinearModel.losses": "tf.keras.layers.Layer.losses", + "tf.keras.experimental.LinearModel.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.keras.experimental.LinearModel.make_test_function": "tf.keras.Model.make_test_function", + "tf.keras.experimental.LinearModel.make_train_function": "tf.keras.Model.make_train_function", + "tf.keras.experimental.LinearModel.metrics": "tf.keras.Model.metrics", + "tf.keras.experimental.LinearModel.metrics_names": "tf.keras.Model.metrics_names", + "tf.keras.experimental.LinearModel.name": "tf.keras.layers.Layer.name", + "tf.keras.experimental.LinearModel.name_scope": "tf.Module.name_scope", + "tf.keras.experimental.LinearModel.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.keras.experimental.LinearModel.output": "tf.keras.layers.Layer.output", + "tf.keras.experimental.LinearModel.predict": "tf.keras.Model.predict", + "tf.keras.experimental.LinearModel.predict_generator": "tf.keras.Model.predict_generator", + "tf.keras.experimental.LinearModel.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.keras.experimental.LinearModel.predict_step": "tf.keras.Model.predict_step", + "tf.keras.experimental.LinearModel.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.keras.experimental.LinearModel.reset_states": "tf.keras.Model.reset_states", + "tf.keras.experimental.LinearModel.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.keras.experimental.LinearModel.save": "tf.keras.Model.save", + "tf.keras.experimental.LinearModel.save_weights": "tf.keras.Model.save_weights", + "tf.keras.experimental.LinearModel.set_weights": "tf.keras.layers.Layer.set_weights", + 
"tf.keras.experimental.LinearModel.state_updates": "tf.keras.Model.state_updates", + "tf.keras.experimental.LinearModel.stateful": "tf.keras.Model.stateful", + "tf.keras.experimental.LinearModel.submodules": "tf.Module.submodules", + "tf.keras.experimental.LinearModel.summary": "tf.keras.Model.summary", + "tf.keras.experimental.LinearModel.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.keras.experimental.LinearModel.test_step": "tf.keras.Model.test_step", + "tf.keras.experimental.LinearModel.to_json": "tf.keras.Model.to_json", + "tf.keras.experimental.LinearModel.to_yaml": "tf.keras.Model.to_yaml", + "tf.keras.experimental.LinearModel.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.keras.experimental.LinearModel.train_step": "tf.keras.Model.train_step", + "tf.keras.experimental.LinearModel.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.experimental.LinearModel.trainable_weights": "tf.keras.Model.trainable_weights", + "tf.keras.experimental.LinearModel.weights": "tf.keras.Model.weights", + "tf.keras.experimental.NoisyLinearCosineDecay.__eq__": "tf.keras.Model.__eq__", + "tf.keras.experimental.NoisyLinearCosineDecay.__ge__": "tf.keras.Model.__ge__", + "tf.keras.experimental.NoisyLinearCosineDecay.__gt__": "tf.keras.Model.__gt__", + "tf.keras.experimental.NoisyLinearCosineDecay.__le__": "tf.keras.Model.__le__", + "tf.keras.experimental.NoisyLinearCosineDecay.__lt__": "tf.keras.Model.__lt__", + "tf.keras.experimental.NoisyLinearCosineDecay.__ne__": "tf.keras.Model.__ne__", + "tf.keras.experimental.NoisyLinearCosineDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.experimental.PeepholeLSTMCell.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.experimental.PeepholeLSTMCell.__eq__": "tf.keras.Model.__eq__", + "tf.keras.experimental.PeepholeLSTMCell.__ge__": "tf.keras.Model.__ge__", + "tf.keras.experimental.PeepholeLSTMCell.__gt__": "tf.keras.Model.__gt__", + "tf.keras.experimental.PeepholeLSTMCell.__init__": "tf.compat.v1.keras.layers.LSTMCell.__init__", + "tf.keras.experimental.PeepholeLSTMCell.__le__": "tf.keras.Model.__le__", + "tf.keras.experimental.PeepholeLSTMCell.__lt__": "tf.keras.Model.__lt__", + "tf.keras.experimental.PeepholeLSTMCell.__ne__": "tf.keras.Model.__ne__", + "tf.keras.experimental.PeepholeLSTMCell.__new__": "tf.keras.Model.__new__", + "tf.keras.experimental.PeepholeLSTMCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.experimental.PeepholeLSTMCell.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.experimental.PeepholeLSTMCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.experimental.PeepholeLSTMCell.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.experimental.PeepholeLSTMCell.call": "tf.compat.v1.keras.layers.LSTMCell.call", + "tf.keras.experimental.PeepholeLSTMCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.experimental.PeepholeLSTMCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.experimental.PeepholeLSTMCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.experimental.PeepholeLSTMCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.experimental.PeepholeLSTMCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.experimental.PeepholeLSTMCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.experimental.PeepholeLSTMCell.get_config": "tf.compat.v1.keras.layers.LSTMCell.get_config", + 
"tf.keras.experimental.PeepholeLSTMCell.get_dropout_mask_for_cell": "tf.keras.layers.GRU.get_dropout_mask_for_cell", + "tf.keras.experimental.PeepholeLSTMCell.get_initial_state": "tf.compat.v1.keras.layers.LSTMCell.get_initial_state", + "tf.keras.experimental.PeepholeLSTMCell.get_recurrent_dropout_mask_for_cell": "tf.keras.layers.GRU.get_recurrent_dropout_mask_for_cell", + "tf.keras.experimental.PeepholeLSTMCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.experimental.PeepholeLSTMCell.input": "tf.keras.layers.Layer.input", + "tf.keras.experimental.PeepholeLSTMCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.experimental.PeepholeLSTMCell.losses": "tf.keras.layers.Layer.losses", + "tf.keras.experimental.PeepholeLSTMCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.experimental.PeepholeLSTMCell.name": "tf.keras.layers.Layer.name", + "tf.keras.experimental.PeepholeLSTMCell.name_scope": "tf.Module.name_scope", + "tf.keras.experimental.PeepholeLSTMCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.experimental.PeepholeLSTMCell.output": "tf.keras.layers.Layer.output", + "tf.keras.experimental.PeepholeLSTMCell.reset_dropout_mask": "tf.keras.layers.GRU.reset_dropout_mask", + "tf.keras.experimental.PeepholeLSTMCell.reset_recurrent_dropout_mask": "tf.keras.layers.GRU.reset_recurrent_dropout_mask", + "tf.keras.experimental.PeepholeLSTMCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.experimental.PeepholeLSTMCell.submodules": "tf.Module.submodules", + "tf.keras.experimental.PeepholeLSTMCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.experimental.PeepholeLSTMCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.experimental.PeepholeLSTMCell.weights": "tf.keras.layers.Layer.weights", + "tf.keras.experimental.SequenceFeatures.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.experimental.SequenceFeatures.__eq__": "tf.keras.Model.__eq__", + "tf.keras.experimental.SequenceFeatures.__ge__": "tf.keras.Model.__ge__", + "tf.keras.experimental.SequenceFeatures.__gt__": "tf.keras.Model.__gt__", + "tf.keras.experimental.SequenceFeatures.__le__": "tf.keras.Model.__le__", + "tf.keras.experimental.SequenceFeatures.__lt__": "tf.keras.Model.__lt__", + "tf.keras.experimental.SequenceFeatures.__ne__": "tf.keras.Model.__ne__", + "tf.keras.experimental.SequenceFeatures.__new__": "tf.keras.Model.__new__", + "tf.keras.experimental.SequenceFeatures.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.experimental.SequenceFeatures.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.experimental.SequenceFeatures.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.experimental.SequenceFeatures.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.experimental.SequenceFeatures.build": "tf.compat.v1.keras.layers.DenseFeatures.build", + "tf.keras.experimental.SequenceFeatures.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.experimental.SequenceFeatures.compute_output_shape": "tf.keras.layers.DenseFeatures.compute_output_shape", + "tf.keras.experimental.SequenceFeatures.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.experimental.SequenceFeatures.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.experimental.SequenceFeatures.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.experimental.SequenceFeatures.dynamic": "tf.keras.layers.Layer.dynamic", + 
"tf.keras.experimental.SequenceFeatures.get_config": "tf.keras.layers.DenseFeatures.get_config", + "tf.keras.experimental.SequenceFeatures.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.experimental.SequenceFeatures.input": "tf.keras.layers.Layer.input", + "tf.keras.experimental.SequenceFeatures.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.experimental.SequenceFeatures.losses": "tf.keras.layers.Layer.losses", + "tf.keras.experimental.SequenceFeatures.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.experimental.SequenceFeatures.name": "tf.keras.layers.Layer.name", + "tf.keras.experimental.SequenceFeatures.name_scope": "tf.Module.name_scope", + "tf.keras.experimental.SequenceFeatures.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.experimental.SequenceFeatures.output": "tf.keras.layers.Layer.output", + "tf.keras.experimental.SequenceFeatures.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.experimental.SequenceFeatures.submodules": "tf.Module.submodules", + "tf.keras.experimental.SequenceFeatures.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.experimental.SequenceFeatures.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.experimental.SequenceFeatures.weights": "tf.keras.layers.Layer.weights", + "tf.keras.experimental.WideDeepModel.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.experimental.WideDeepModel.__eq__": "tf.keras.Model.__eq__", + "tf.keras.experimental.WideDeepModel.__ge__": "tf.keras.Model.__ge__", + "tf.keras.experimental.WideDeepModel.__gt__": "tf.keras.Model.__gt__", + "tf.keras.experimental.WideDeepModel.__le__": "tf.keras.Model.__le__", + "tf.keras.experimental.WideDeepModel.__lt__": "tf.keras.Model.__lt__", + "tf.keras.experimental.WideDeepModel.__ne__": "tf.keras.Model.__ne__", + "tf.keras.experimental.WideDeepModel.__new__": "tf.keras.Model.__new__", + "tf.keras.experimental.WideDeepModel.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.experimental.WideDeepModel.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.experimental.WideDeepModel.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.experimental.WideDeepModel.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.experimental.WideDeepModel.build": "tf.keras.Model.build", + "tf.keras.experimental.WideDeepModel.compile": "tf.keras.Model.compile", + "tf.keras.experimental.WideDeepModel.compute_mask": "tf.keras.Model.compute_mask", + "tf.keras.experimental.WideDeepModel.compute_output_shape": "tf.keras.Model.compute_output_shape", + "tf.keras.experimental.WideDeepModel.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.experimental.WideDeepModel.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.experimental.WideDeepModel.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.keras.experimental.WideDeepModel.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.experimental.WideDeepModel.dynamic": "tf.keras.Model.dynamic", + "tf.keras.experimental.WideDeepModel.evaluate": "tf.keras.Model.evaluate", + "tf.keras.experimental.WideDeepModel.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.keras.experimental.WideDeepModel.fit": "tf.keras.Model.fit", + "tf.keras.experimental.WideDeepModel.fit_generator": "tf.keras.Model.fit_generator", + "tf.keras.experimental.WideDeepModel.get_layer": "tf.keras.Model.get_layer", + "tf.keras.experimental.WideDeepModel.get_weights": 
"tf.keras.Model.get_weights", + "tf.keras.experimental.WideDeepModel.input": "tf.keras.layers.Layer.input", + "tf.keras.experimental.WideDeepModel.input_spec": "tf.keras.Model.input_spec", + "tf.keras.experimental.WideDeepModel.layers": "tf.keras.Model.layers", + "tf.keras.experimental.WideDeepModel.load_weights": "tf.keras.Model.load_weights", + "tf.keras.experimental.WideDeepModel.losses": "tf.keras.layers.Layer.losses", + "tf.keras.experimental.WideDeepModel.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.keras.experimental.WideDeepModel.make_test_function": "tf.keras.Model.make_test_function", + "tf.keras.experimental.WideDeepModel.make_train_function": "tf.keras.Model.make_train_function", + "tf.keras.experimental.WideDeepModel.metrics": "tf.keras.Model.metrics", + "tf.keras.experimental.WideDeepModel.metrics_names": "tf.keras.Model.metrics_names", + "tf.keras.experimental.WideDeepModel.name": "tf.keras.layers.Layer.name", + "tf.keras.experimental.WideDeepModel.name_scope": "tf.Module.name_scope", + "tf.keras.experimental.WideDeepModel.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.keras.experimental.WideDeepModel.output": "tf.keras.layers.Layer.output", + "tf.keras.experimental.WideDeepModel.predict": "tf.keras.Model.predict", + "tf.keras.experimental.WideDeepModel.predict_generator": "tf.keras.Model.predict_generator", + "tf.keras.experimental.WideDeepModel.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.keras.experimental.WideDeepModel.predict_step": "tf.keras.Model.predict_step", + "tf.keras.experimental.WideDeepModel.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.keras.experimental.WideDeepModel.reset_states": "tf.keras.Model.reset_states", + "tf.keras.experimental.WideDeepModel.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.keras.experimental.WideDeepModel.save": "tf.keras.Model.save", + "tf.keras.experimental.WideDeepModel.save_weights": "tf.keras.Model.save_weights", + "tf.keras.experimental.WideDeepModel.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.experimental.WideDeepModel.state_updates": "tf.keras.Model.state_updates", + "tf.keras.experimental.WideDeepModel.stateful": "tf.keras.Model.stateful", + "tf.keras.experimental.WideDeepModel.submodules": "tf.Module.submodules", + "tf.keras.experimental.WideDeepModel.summary": "tf.keras.Model.summary", + "tf.keras.experimental.WideDeepModel.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.keras.experimental.WideDeepModel.test_step": "tf.keras.Model.test_step", + "tf.keras.experimental.WideDeepModel.to_json": "tf.keras.Model.to_json", + "tf.keras.experimental.WideDeepModel.to_yaml": "tf.keras.Model.to_yaml", + "tf.keras.experimental.WideDeepModel.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.keras.experimental.WideDeepModel.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.experimental.WideDeepModel.trainable_weights": "tf.keras.Model.trainable_weights", + "tf.keras.experimental.WideDeepModel.weights": "tf.keras.Model.weights", + "tf.keras.initializers.Constant": "tf.constant_initializer", + "tf.keras.initializers.Constant.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.Constant.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.Constant.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.Constant.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.Constant.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.Constant.__ne__": "tf.keras.Model.__ne__", + 
"tf.keras.initializers.Constant.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.GlorotNormal.__call__": "tf.keras.initializers.VarianceScaling.__call__", + "tf.keras.initializers.GlorotNormal.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.GlorotNormal.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.GlorotNormal.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.GlorotNormal.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.GlorotNormal.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.GlorotNormal.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.GlorotNormal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.GlorotUniform.__call__": "tf.keras.initializers.VarianceScaling.__call__", + "tf.keras.initializers.GlorotUniform.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.GlorotUniform.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.GlorotUniform.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.GlorotUniform.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.GlorotUniform.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.GlorotUniform.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.GlorotUniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.Identity.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.Identity.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.Identity.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.Identity.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.Identity.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.Identity.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.Identity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.Initializer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.Initializer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.Initializer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.Initializer.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.initializers.Initializer.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.Initializer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.Initializer.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.Initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.Ones": "tf.ones_initializer", + "tf.keras.initializers.Ones.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.Ones.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.Ones.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.Ones.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.initializers.Ones.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.Ones.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.Ones.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.Ones.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.Ones.get_config": "tf.keras.initializers.Initializer.get_config", + "tf.keras.initializers.Orthogonal.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.Orthogonal.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.Orthogonal.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.Orthogonal.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.Orthogonal.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.Orthogonal.__ne__": "tf.keras.Model.__ne__", + 
"tf.keras.initializers.Orthogonal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.RandomNormal": "tf.random_normal_initializer", + "tf.keras.initializers.RandomNormal.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.RandomNormal.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.RandomNormal.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.RandomNormal.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.RandomNormal.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.RandomNormal.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.RandomNormal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.RandomUniform": "tf.random_uniform_initializer", + "tf.keras.initializers.RandomUniform.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.RandomUniform.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.RandomUniform.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.RandomUniform.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.RandomUniform.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.RandomUniform.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.RandomUniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.TruncatedNormal.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.TruncatedNormal.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.TruncatedNormal.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.TruncatedNormal.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.TruncatedNormal.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.TruncatedNormal.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.TruncatedNormal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.VarianceScaling.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.VarianceScaling.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.VarianceScaling.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.VarianceScaling.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.VarianceScaling.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.VarianceScaling.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.VarianceScaling.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.Zeros": "tf.zeros_initializer", + "tf.keras.initializers.Zeros.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.Zeros.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.Zeros.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.Zeros.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.initializers.Zeros.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.Zeros.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.Zeros.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.Zeros.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.Zeros.get_config": "tf.keras.initializers.Initializer.get_config", + "tf.keras.initializers.constant": "tf.constant_initializer", + "tf.keras.initializers.constant.__call__": "tf.keras.initializers.Constant.__call__", + "tf.keras.initializers.constant.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.constant.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.constant.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.constant.__init__": "tf.keras.initializers.Constant.__init__", + 
"tf.keras.initializers.constant.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.constant.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.constant.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.constant.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.constant.get_config": "tf.keras.initializers.Constant.get_config", + "tf.keras.initializers.glorot_normal": "tf.keras.initializers.GlorotNormal", + "tf.keras.initializers.glorot_normal.__call__": "tf.keras.initializers.VarianceScaling.__call__", + "tf.keras.initializers.glorot_normal.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.glorot_normal.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.glorot_normal.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.glorot_normal.__init__": "tf.keras.initializers.GlorotNormal.__init__", + "tf.keras.initializers.glorot_normal.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.glorot_normal.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.glorot_normal.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.glorot_normal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.glorot_normal.get_config": "tf.keras.initializers.GlorotNormal.get_config", + "tf.keras.initializers.glorot_uniform": "tf.keras.initializers.GlorotUniform", + "tf.keras.initializers.glorot_uniform.__call__": "tf.keras.initializers.VarianceScaling.__call__", + "tf.keras.initializers.glorot_uniform.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.glorot_uniform.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.glorot_uniform.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.glorot_uniform.__init__": "tf.keras.initializers.GlorotUniform.__init__", + "tf.keras.initializers.glorot_uniform.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.glorot_uniform.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.glorot_uniform.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.glorot_uniform.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.glorot_uniform.get_config": "tf.keras.initializers.GlorotUniform.get_config", + "tf.keras.initializers.identity": "tf.keras.initializers.Identity", + "tf.keras.initializers.identity.__call__": "tf.keras.initializers.Identity.__call__", + "tf.keras.initializers.identity.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.identity.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.identity.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.identity.__init__": "tf.keras.initializers.Identity.__init__", + "tf.keras.initializers.identity.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.identity.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.identity.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.identity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.identity.get_config": "tf.keras.initializers.Identity.get_config", + "tf.keras.initializers.ones": "tf.ones_initializer", + "tf.keras.initializers.ones.__call__": "tf.keras.initializers.Ones.__call__", + "tf.keras.initializers.ones.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.ones.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.ones.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.ones.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.initializers.ones.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.ones.__lt__": 
"tf.keras.Model.__lt__", + "tf.keras.initializers.ones.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.ones.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.ones.get_config": "tf.keras.initializers.Initializer.get_config", + "tf.keras.initializers.orthogonal": "tf.keras.initializers.Orthogonal", + "tf.keras.initializers.orthogonal.__call__": "tf.keras.initializers.Orthogonal.__call__", + "tf.keras.initializers.orthogonal.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.orthogonal.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.orthogonal.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.orthogonal.__init__": "tf.keras.initializers.Orthogonal.__init__", + "tf.keras.initializers.orthogonal.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.orthogonal.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.orthogonal.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.orthogonal.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.orthogonal.get_config": "tf.keras.initializers.Orthogonal.get_config", + "tf.keras.initializers.zeros": "tf.zeros_initializer", + "tf.keras.initializers.zeros.__call__": "tf.keras.initializers.Zeros.__call__", + "tf.keras.initializers.zeros.__eq__": "tf.keras.Model.__eq__", + "tf.keras.initializers.zeros.__ge__": "tf.keras.Model.__ge__", + "tf.keras.initializers.zeros.__gt__": "tf.keras.Model.__gt__", + "tf.keras.initializers.zeros.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.initializers.zeros.__le__": "tf.keras.Model.__le__", + "tf.keras.initializers.zeros.__lt__": "tf.keras.Model.__lt__", + "tf.keras.initializers.zeros.__ne__": "tf.keras.Model.__ne__", + "tf.keras.initializers.zeros.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.initializers.zeros.get_config": "tf.keras.initializers.Initializer.get_config", + "tf.keras.layers.AbstractRNNCell.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.AbstractRNNCell.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.AbstractRNNCell.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.AbstractRNNCell.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.AbstractRNNCell.__init__": "tf.keras.layers.Layer.__init__", + "tf.keras.layers.AbstractRNNCell.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.AbstractRNNCell.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.AbstractRNNCell.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.AbstractRNNCell.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.AbstractRNNCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.AbstractRNNCell.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.AbstractRNNCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.AbstractRNNCell.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.AbstractRNNCell.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.AbstractRNNCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.AbstractRNNCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.layers.AbstractRNNCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.AbstractRNNCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.AbstractRNNCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.AbstractRNNCell.dynamic": "tf.keras.layers.Layer.dynamic", + 
"tf.keras.layers.AbstractRNNCell.get_config": "tf.keras.layers.Layer.get_config", + "tf.keras.layers.AbstractRNNCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.AbstractRNNCell.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.AbstractRNNCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.AbstractRNNCell.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.AbstractRNNCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.AbstractRNNCell.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.AbstractRNNCell.name_scope": "tf.Module.name_scope", + "tf.keras.layers.AbstractRNNCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.AbstractRNNCell.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.AbstractRNNCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.AbstractRNNCell.submodules": "tf.Module.submodules", + "tf.keras.layers.AbstractRNNCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.AbstractRNNCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.AbstractRNNCell.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Activation.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Activation.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Activation.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Activation.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Activation.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Activation.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Activation.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Activation.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Activation.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Activation.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Activation.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Activation.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Activation.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Activation.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Activation.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Activation.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Activation.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Activation.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Activation.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Activation.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Activation.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Activation.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Activation.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Activation.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Activation.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Activation.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Activation.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Activation.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Activation.submodules": "tf.Module.submodules", + "tf.keras.layers.Activation.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Activation.trainable_weights": 
"tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Activation.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.ActivityRegularization.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.ActivityRegularization.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.ActivityRegularization.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.ActivityRegularization.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.ActivityRegularization.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.ActivityRegularization.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.ActivityRegularization.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.ActivityRegularization.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.ActivityRegularization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.ActivityRegularization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.ActivityRegularization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.ActivityRegularization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.ActivityRegularization.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.ActivityRegularization.call": "tf.keras.layers.Layer.call", + "tf.keras.layers.ActivityRegularization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.ActivityRegularization.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.ActivityRegularization.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.ActivityRegularization.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.ActivityRegularization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.ActivityRegularization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.ActivityRegularization.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.ActivityRegularization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.ActivityRegularization.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.ActivityRegularization.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.ActivityRegularization.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.ActivityRegularization.name_scope": "tf.Module.name_scope", + "tf.keras.layers.ActivityRegularization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.ActivityRegularization.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.ActivityRegularization.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.ActivityRegularization.submodules": "tf.Module.submodules", + "tf.keras.layers.ActivityRegularization.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.ActivityRegularization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.ActivityRegularization.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Add.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Add.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Add.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Add.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Add.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Add.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Add.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Add.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Add.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + 
"tf.keras.layers.Add.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Add.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Add.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Add.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Add.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Add.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Add.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Add.get_config": "tf.keras.layers.Layer.get_config", + "tf.keras.layers.Add.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Add.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Add.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Add.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Add.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Add.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Add.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Add.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Add.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Add.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Add.submodules": "tf.Module.submodules", + "tf.keras.layers.Add.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Add.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Add.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.AdditiveAttention.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.AdditiveAttention.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.AdditiveAttention.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.AdditiveAttention.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.AdditiveAttention.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.AdditiveAttention.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.AdditiveAttention.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.AdditiveAttention.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.AdditiveAttention.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.AdditiveAttention.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.AdditiveAttention.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.AdditiveAttention.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.AdditiveAttention.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.layers.AdditiveAttention.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.AdditiveAttention.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.AdditiveAttention.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.AdditiveAttention.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.AdditiveAttention.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.AdditiveAttention.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.AdditiveAttention.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.AdditiveAttention.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.AdditiveAttention.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.AdditiveAttention.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.AdditiveAttention.name_scope": "tf.Module.name_scope", + 
"tf.keras.layers.AdditiveAttention.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.AdditiveAttention.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.AdditiveAttention.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.AdditiveAttention.submodules": "tf.Module.submodules", + "tf.keras.layers.AdditiveAttention.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.AdditiveAttention.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.AdditiveAttention.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.AlphaDropout.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.AlphaDropout.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.AlphaDropout.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.AlphaDropout.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.AlphaDropout.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.AlphaDropout.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.AlphaDropout.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.AlphaDropout.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.AlphaDropout.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.AlphaDropout.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.AlphaDropout.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.AlphaDropout.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.AlphaDropout.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.AlphaDropout.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.AlphaDropout.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.AlphaDropout.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.AlphaDropout.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.AlphaDropout.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.AlphaDropout.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.AlphaDropout.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.AlphaDropout.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.AlphaDropout.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.AlphaDropout.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.AlphaDropout.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.AlphaDropout.name_scope": "tf.Module.name_scope", + "tf.keras.layers.AlphaDropout.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.AlphaDropout.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.AlphaDropout.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.AlphaDropout.submodules": "tf.Module.submodules", + "tf.keras.layers.AlphaDropout.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.AlphaDropout.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.AlphaDropout.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Attention.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Attention.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Attention.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Attention.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Attention.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Attention.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Attention.__ne__": "tf.keras.Model.__ne__", + 
"tf.keras.layers.Attention.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Attention.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Attention.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Attention.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Attention.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Attention.call": "tf.keras.layers.AdditiveAttention.call", + "tf.keras.layers.Attention.compute_mask": "tf.keras.layers.AdditiveAttention.compute_mask", + "tf.keras.layers.Attention.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.layers.Attention.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Attention.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Attention.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Attention.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Attention.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Attention.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Attention.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Attention.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Attention.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Attention.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Attention.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Attention.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Attention.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Attention.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Attention.submodules": "tf.Module.submodules", + "tf.keras.layers.Attention.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Attention.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Attention.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Average.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Average.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Average.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Average.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Average.__init__": "tf.keras.layers.Add.__init__", + "tf.keras.layers.Average.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Average.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Average.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Average.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Average.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Average.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Average.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Average.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Average.build": "tf.keras.layers.Add.build", + "tf.keras.layers.Average.call": "tf.keras.layers.Add.call", + "tf.keras.layers.Average.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.keras.layers.Average.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + "tf.keras.layers.Average.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Average.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Average.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Average.dynamic": "tf.keras.layers.Layer.dynamic", 
+ "tf.keras.layers.Average.get_config": "tf.keras.layers.Layer.get_config", + "tf.keras.layers.Average.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Average.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Average.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Average.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Average.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Average.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Average.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Average.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Average.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Average.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Average.submodules": "tf.Module.submodules", + "tf.keras.layers.Average.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Average.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Average.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.AveragePooling1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.AveragePooling1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.AveragePooling1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.AveragePooling1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.AveragePooling1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.AveragePooling1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.AveragePooling1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.AveragePooling1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.AveragePooling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.AveragePooling1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.AveragePooling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.AveragePooling1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.AveragePooling1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.AveragePooling1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.AveragePooling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.AveragePooling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.AveragePooling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.AveragePooling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.AveragePooling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.AveragePooling1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.AveragePooling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.AveragePooling1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.AveragePooling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.AveragePooling1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.AveragePooling1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.AveragePooling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.AveragePooling1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.AveragePooling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.AveragePooling1D.submodules": "tf.Module.submodules", + "tf.keras.layers.AveragePooling1D.trainable": "tf.keras.layers.Layer.trainable", + 
"tf.keras.layers.AveragePooling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.AveragePooling1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.AveragePooling2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.AveragePooling2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.AveragePooling2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.AveragePooling2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.AveragePooling2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.AveragePooling2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.AveragePooling2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.AveragePooling2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.AveragePooling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.AveragePooling2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.AveragePooling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.AveragePooling2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.AveragePooling2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.AveragePooling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.AveragePooling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.AveragePooling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.AveragePooling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.AveragePooling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.AveragePooling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.AveragePooling2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.AveragePooling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.AveragePooling2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.AveragePooling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.AveragePooling2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.AveragePooling2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.AveragePooling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.AveragePooling2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.AveragePooling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.AveragePooling2D.submodules": "tf.Module.submodules", + "tf.keras.layers.AveragePooling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.AveragePooling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.AveragePooling2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.AveragePooling3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.AveragePooling3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.AveragePooling3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.AveragePooling3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.AveragePooling3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.AveragePooling3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.AveragePooling3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.AveragePooling3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.AveragePooling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.AveragePooling3D.add_loss": "tf.keras.layers.Layer.add_loss", + 
"tf.keras.layers.AveragePooling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.AveragePooling3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.AveragePooling3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.AveragePooling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.AveragePooling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.AveragePooling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.AveragePooling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.AveragePooling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.AveragePooling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.AveragePooling3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.AveragePooling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.AveragePooling3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.AveragePooling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.AveragePooling3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.AveragePooling3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.AveragePooling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.AveragePooling3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.AveragePooling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.AveragePooling3D.submodules": "tf.Module.submodules", + "tf.keras.layers.AveragePooling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.AveragePooling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.AveragePooling3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.AvgPool1D": "tf.keras.layers.AveragePooling1D", + "tf.keras.layers.AvgPool1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.AvgPool1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.AvgPool1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.AvgPool1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.AvgPool1D.__init__": "tf.keras.layers.AveragePooling1D.__init__", + "tf.keras.layers.AvgPool1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.AvgPool1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.AvgPool1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.AvgPool1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.AvgPool1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.AvgPool1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.AvgPool1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.AvgPool1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.AvgPool1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.AvgPool1D.call": "tf.keras.layers.AveragePooling1D.call", + "tf.keras.layers.AvgPool1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.AvgPool1D.compute_output_shape": "tf.keras.layers.AveragePooling1D.compute_output_shape", + "tf.keras.layers.AvgPool1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.AvgPool1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.AvgPool1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.AvgPool1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.AvgPool1D.get_config": 
"tf.keras.layers.AveragePooling1D.get_config", + "tf.keras.layers.AvgPool1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.AvgPool1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.AvgPool1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.AvgPool1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.AvgPool1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.AvgPool1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.AvgPool1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.AvgPool1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.AvgPool1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.AvgPool1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.AvgPool1D.submodules": "tf.Module.submodules", + "tf.keras.layers.AvgPool1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.AvgPool1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.AvgPool1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.AvgPool2D": "tf.keras.layers.AveragePooling2D", + "tf.keras.layers.AvgPool2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.AvgPool2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.AvgPool2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.AvgPool2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.AvgPool2D.__init__": "tf.keras.layers.AveragePooling2D.__init__", + "tf.keras.layers.AvgPool2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.AvgPool2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.AvgPool2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.AvgPool2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.AvgPool2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.AvgPool2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.AvgPool2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.AvgPool2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.AvgPool2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.AvgPool2D.call": "tf.keras.layers.AveragePooling2D.call", + "tf.keras.layers.AvgPool2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.AvgPool2D.compute_output_shape": "tf.keras.layers.AveragePooling2D.compute_output_shape", + "tf.keras.layers.AvgPool2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.AvgPool2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.AvgPool2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.AvgPool2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.AvgPool2D.get_config": "tf.keras.layers.AveragePooling2D.get_config", + "tf.keras.layers.AvgPool2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.AvgPool2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.AvgPool2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.AvgPool2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.AvgPool2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.AvgPool2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.AvgPool2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.AvgPool2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.AvgPool2D.output": "tf.keras.layers.Layer.output", + 
"tf.keras.layers.AvgPool2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.AvgPool2D.submodules": "tf.Module.submodules", + "tf.keras.layers.AvgPool2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.AvgPool2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.AvgPool2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.AvgPool3D": "tf.keras.layers.AveragePooling3D", + "tf.keras.layers.AvgPool3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.AvgPool3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.AvgPool3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.AvgPool3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.AvgPool3D.__init__": "tf.keras.layers.AveragePooling3D.__init__", + "tf.keras.layers.AvgPool3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.AvgPool3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.AvgPool3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.AvgPool3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.AvgPool3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.AvgPool3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.AvgPool3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.AvgPool3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.AvgPool3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.AvgPool3D.call": "tf.keras.layers.AveragePooling3D.call", + "tf.keras.layers.AvgPool3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.AvgPool3D.compute_output_shape": "tf.keras.layers.AveragePooling3D.compute_output_shape", + "tf.keras.layers.AvgPool3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.AvgPool3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.AvgPool3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.AvgPool3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.AvgPool3D.get_config": "tf.keras.layers.AveragePooling3D.get_config", + "tf.keras.layers.AvgPool3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.AvgPool3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.AvgPool3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.AvgPool3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.AvgPool3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.AvgPool3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.AvgPool3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.AvgPool3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.AvgPool3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.AvgPool3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.AvgPool3D.submodules": "tf.Module.submodules", + "tf.keras.layers.AvgPool3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.AvgPool3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.AvgPool3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.BatchNormalization.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.BatchNormalization.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.BatchNormalization.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.BatchNormalization.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.BatchNormalization.__le__": 
"tf.keras.Model.__le__", + "tf.keras.layers.BatchNormalization.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.BatchNormalization.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.BatchNormalization.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.BatchNormalization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.BatchNormalization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.BatchNormalization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.BatchNormalization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.BatchNormalization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.BatchNormalization.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.BatchNormalization.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.BatchNormalization.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.BatchNormalization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.BatchNormalization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.BatchNormalization.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.BatchNormalization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.BatchNormalization.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.BatchNormalization.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.BatchNormalization.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.BatchNormalization.name_scope": "tf.Module.name_scope", + "tf.keras.layers.BatchNormalization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.BatchNormalization.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.BatchNormalization.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.BatchNormalization.submodules": "tf.Module.submodules", + "tf.keras.layers.BatchNormalization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.BatchNormalization.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Bidirectional.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Bidirectional.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Bidirectional.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Bidirectional.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Bidirectional.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Bidirectional.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Bidirectional.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Bidirectional.activity_regularizer": "tf.keras.layers.Wrapper.activity_regularizer", + "tf.keras.layers.Bidirectional.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Bidirectional.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Bidirectional.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Bidirectional.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Bidirectional.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Bidirectional.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Bidirectional.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Bidirectional.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Bidirectional.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Bidirectional.input_spec": 
"tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Bidirectional.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Bidirectional.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Bidirectional.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Bidirectional.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Bidirectional.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Bidirectional.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Bidirectional.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Bidirectional.submodules": "tf.Module.submodules", + "tf.keras.layers.Bidirectional.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Bidirectional.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Bidirectional.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Concatenate.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Concatenate.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Concatenate.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Concatenate.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Concatenate.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Concatenate.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Concatenate.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Concatenate.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Concatenate.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Concatenate.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Concatenate.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Concatenate.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Concatenate.call": "tf.keras.layers.Add.call", + "tf.keras.layers.Concatenate.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Concatenate.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Concatenate.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Concatenate.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Concatenate.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Concatenate.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Concatenate.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Concatenate.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Concatenate.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Concatenate.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Concatenate.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Concatenate.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Concatenate.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Concatenate.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Concatenate.submodules": "tf.Module.submodules", + "tf.keras.layers.Concatenate.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Concatenate.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Concatenate.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Conv1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Conv1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Conv1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Conv1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Conv1D.__le__": 
"tf.keras.Model.__le__", + "tf.keras.layers.Conv1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Conv1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Conv1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Conv1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Conv1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Conv1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Conv1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Conv1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Conv1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Conv1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Conv1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Conv1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Conv1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Conv1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Conv1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Conv1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Conv1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Conv1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Conv1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Conv1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Conv1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Conv1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Conv1D.submodules": "tf.Module.submodules", + "tf.keras.layers.Conv1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Conv1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Conv1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Conv2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Conv2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Conv2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Conv2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Conv2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Conv2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Conv2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Conv2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Conv2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Conv2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Conv2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Conv2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Conv2D.build": "tf.keras.layers.Conv1D.build", + "tf.keras.layers.Conv2D.call": "tf.keras.layers.Conv1D.call", + "tf.keras.layers.Conv2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Conv2D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.keras.layers.Conv2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Conv2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Conv2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Conv2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Conv2D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.keras.layers.Conv2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Conv2D.input": 
"tf.keras.layers.Layer.input", + "tf.keras.layers.Conv2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Conv2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Conv2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Conv2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Conv2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Conv2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Conv2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Conv2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Conv2D.submodules": "tf.Module.submodules", + "tf.keras.layers.Conv2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Conv2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Conv2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Conv2DTranspose.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Conv2DTranspose.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Conv2DTranspose.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Conv2DTranspose.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Conv2DTranspose.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Conv2DTranspose.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Conv2DTranspose.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Conv2DTranspose.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Conv2DTranspose.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Conv2DTranspose.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Conv2DTranspose.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Conv2DTranspose.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Conv2DTranspose.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Conv2DTranspose.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Conv2DTranspose.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Conv2DTranspose.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Conv2DTranspose.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Conv2DTranspose.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Conv2DTranspose.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Conv2DTranspose.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Conv2DTranspose.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Conv2DTranspose.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Conv2DTranspose.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Conv2DTranspose.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Conv2DTranspose.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Conv2DTranspose.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Conv2DTranspose.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Conv2DTranspose.submodules": "tf.Module.submodules", + "tf.keras.layers.Conv2DTranspose.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Conv2DTranspose.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Conv2DTranspose.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Conv3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Conv3D.__eq__": "tf.keras.Model.__eq__", + 
"tf.keras.layers.Conv3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Conv3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Conv3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Conv3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Conv3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Conv3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Conv3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Conv3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Conv3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Conv3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Conv3D.build": "tf.keras.layers.Conv1D.build", + "tf.keras.layers.Conv3D.call": "tf.keras.layers.Conv1D.call", + "tf.keras.layers.Conv3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Conv3D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.keras.layers.Conv3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Conv3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Conv3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Conv3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Conv3D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.keras.layers.Conv3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Conv3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Conv3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Conv3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Conv3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Conv3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Conv3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Conv3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Conv3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Conv3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Conv3D.submodules": "tf.Module.submodules", + "tf.keras.layers.Conv3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Conv3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Conv3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Conv3DTranspose.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Conv3DTranspose.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Conv3DTranspose.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Conv3DTranspose.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Conv3DTranspose.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Conv3DTranspose.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Conv3DTranspose.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Conv3DTranspose.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Conv3DTranspose.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Conv3DTranspose.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Conv3DTranspose.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Conv3DTranspose.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Conv3DTranspose.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Conv3DTranspose.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Conv3DTranspose.count_params": 
"tf.keras.layers.Layer.count_params", + "tf.keras.layers.Conv3DTranspose.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Conv3DTranspose.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Conv3DTranspose.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Conv3DTranspose.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Conv3DTranspose.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Conv3DTranspose.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Conv3DTranspose.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Conv3DTranspose.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Conv3DTranspose.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Conv3DTranspose.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Conv3DTranspose.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Conv3DTranspose.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Conv3DTranspose.submodules": "tf.Module.submodules", + "tf.keras.layers.Conv3DTranspose.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Conv3DTranspose.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Conv3DTranspose.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.ConvLSTM2D.__call__": "tf.keras.layers.RNN.__call__", + "tf.keras.layers.ConvLSTM2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.ConvLSTM2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.ConvLSTM2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.ConvLSTM2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.ConvLSTM2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.ConvLSTM2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.ConvLSTM2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.ConvLSTM2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.ConvLSTM2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.ConvLSTM2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.ConvLSTM2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.ConvLSTM2D.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.keras.layers.ConvLSTM2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.ConvLSTM2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.ConvLSTM2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.ConvLSTM2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.ConvLSTM2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.ConvLSTM2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.ConvLSTM2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.ConvLSTM2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.ConvLSTM2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.ConvLSTM2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.ConvLSTM2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.ConvLSTM2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.ConvLSTM2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.ConvLSTM2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.ConvLSTM2D.states": "tf.keras.layers.RNN.states", + "tf.keras.layers.ConvLSTM2D.submodules": "tf.Module.submodules", + "tf.keras.layers.ConvLSTM2D.trainable": 
"tf.keras.layers.Layer.trainable", + "tf.keras.layers.ConvLSTM2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.ConvLSTM2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Convolution1D": "tf.keras.layers.Conv1D", + "tf.keras.layers.Convolution1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Convolution1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Convolution1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Convolution1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Convolution1D.__init__": "tf.keras.layers.Conv1D.__init__", + "tf.keras.layers.Convolution1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Convolution1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Convolution1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Convolution1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Convolution1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Convolution1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Convolution1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Convolution1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Convolution1D.build": "tf.keras.layers.Conv1D.build", + "tf.keras.layers.Convolution1D.call": "tf.keras.layers.Conv1D.call", + "tf.keras.layers.Convolution1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Convolution1D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.keras.layers.Convolution1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Convolution1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Convolution1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Convolution1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Convolution1D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.keras.layers.Convolution1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Convolution1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Convolution1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Convolution1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Convolution1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Convolution1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Convolution1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Convolution1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Convolution1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Convolution1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Convolution1D.submodules": "tf.Module.submodules", + "tf.keras.layers.Convolution1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Convolution1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Convolution1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Convolution2D": "tf.keras.layers.Conv2D", + "tf.keras.layers.Convolution2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Convolution2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Convolution2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Convolution2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Convolution2D.__init__": "tf.keras.layers.Conv2D.__init__", + 
"tf.keras.layers.Convolution2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Convolution2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Convolution2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Convolution2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Convolution2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Convolution2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Convolution2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Convolution2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Convolution2D.build": "tf.keras.layers.Conv1D.build", + "tf.keras.layers.Convolution2D.call": "tf.keras.layers.Conv1D.call", + "tf.keras.layers.Convolution2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Convolution2D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.keras.layers.Convolution2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Convolution2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Convolution2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Convolution2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Convolution2D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.keras.layers.Convolution2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Convolution2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Convolution2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Convolution2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Convolution2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Convolution2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Convolution2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Convolution2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Convolution2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Convolution2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Convolution2D.submodules": "tf.Module.submodules", + "tf.keras.layers.Convolution2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Convolution2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Convolution2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Convolution2DTranspose": "tf.keras.layers.Conv2DTranspose", + "tf.keras.layers.Convolution2DTranspose.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Convolution2DTranspose.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Convolution2DTranspose.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Convolution2DTranspose.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Convolution2DTranspose.__init__": "tf.keras.layers.Conv2DTranspose.__init__", + "tf.keras.layers.Convolution2DTranspose.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Convolution2DTranspose.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Convolution2DTranspose.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Convolution2DTranspose.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Convolution2DTranspose.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Convolution2DTranspose.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Convolution2DTranspose.add_metric": 
"tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Convolution2DTranspose.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Convolution2DTranspose.build": "tf.keras.layers.Conv2DTranspose.build", + "tf.keras.layers.Convolution2DTranspose.call": "tf.keras.layers.Conv2DTranspose.call", + "tf.keras.layers.Convolution2DTranspose.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Convolution2DTranspose.compute_output_shape": "tf.keras.layers.Conv2DTranspose.compute_output_shape", + "tf.keras.layers.Convolution2DTranspose.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Convolution2DTranspose.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Convolution2DTranspose.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Convolution2DTranspose.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Convolution2DTranspose.get_config": "tf.keras.layers.Conv2DTranspose.get_config", + "tf.keras.layers.Convolution2DTranspose.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Convolution2DTranspose.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Convolution2DTranspose.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Convolution2DTranspose.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Convolution2DTranspose.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Convolution2DTranspose.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Convolution2DTranspose.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Convolution2DTranspose.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Convolution2DTranspose.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Convolution2DTranspose.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Convolution2DTranspose.submodules": "tf.Module.submodules", + "tf.keras.layers.Convolution2DTranspose.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Convolution2DTranspose.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Convolution2DTranspose.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Convolution3D": "tf.keras.layers.Conv3D", + "tf.keras.layers.Convolution3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Convolution3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Convolution3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Convolution3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Convolution3D.__init__": "tf.keras.layers.Conv3D.__init__", + "tf.keras.layers.Convolution3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Convolution3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Convolution3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Convolution3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Convolution3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Convolution3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Convolution3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Convolution3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Convolution3D.build": "tf.keras.layers.Conv1D.build", + "tf.keras.layers.Convolution3D.call": "tf.keras.layers.Conv1D.call", + "tf.keras.layers.Convolution3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + 
"tf.keras.layers.Convolution3D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.keras.layers.Convolution3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Convolution3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Convolution3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Convolution3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Convolution3D.get_config": "tf.keras.layers.Conv1D.get_config", + "tf.keras.layers.Convolution3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Convolution3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Convolution3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Convolution3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Convolution3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Convolution3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Convolution3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Convolution3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Convolution3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Convolution3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Convolution3D.submodules": "tf.Module.submodules", + "tf.keras.layers.Convolution3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Convolution3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Convolution3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Convolution3DTranspose": "tf.keras.layers.Conv3DTranspose", + "tf.keras.layers.Convolution3DTranspose.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Convolution3DTranspose.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Convolution3DTranspose.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Convolution3DTranspose.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Convolution3DTranspose.__init__": "tf.keras.layers.Conv3DTranspose.__init__", + "tf.keras.layers.Convolution3DTranspose.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Convolution3DTranspose.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Convolution3DTranspose.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Convolution3DTranspose.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Convolution3DTranspose.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Convolution3DTranspose.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Convolution3DTranspose.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Convolution3DTranspose.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Convolution3DTranspose.build": "tf.keras.layers.Conv3DTranspose.build", + "tf.keras.layers.Convolution3DTranspose.call": "tf.keras.layers.Conv3DTranspose.call", + "tf.keras.layers.Convolution3DTranspose.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Convolution3DTranspose.compute_output_shape": "tf.keras.layers.Conv3DTranspose.compute_output_shape", + "tf.keras.layers.Convolution3DTranspose.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Convolution3DTranspose.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Convolution3DTranspose.dtype": "tf.keras.layers.Layer.dtype", + 
"tf.keras.layers.Convolution3DTranspose.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Convolution3DTranspose.get_config": "tf.keras.layers.Conv3DTranspose.get_config", + "tf.keras.layers.Convolution3DTranspose.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Convolution3DTranspose.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Convolution3DTranspose.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Convolution3DTranspose.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Convolution3DTranspose.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Convolution3DTranspose.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Convolution3DTranspose.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Convolution3DTranspose.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Convolution3DTranspose.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Convolution3DTranspose.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Convolution3DTranspose.submodules": "tf.Module.submodules", + "tf.keras.layers.Convolution3DTranspose.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Convolution3DTranspose.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Convolution3DTranspose.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Cropping1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Cropping1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Cropping1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Cropping1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Cropping1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Cropping1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Cropping1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Cropping1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Cropping1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Cropping1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Cropping1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Cropping1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Cropping1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Cropping1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Cropping1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Cropping1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Cropping1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Cropping1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Cropping1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Cropping1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Cropping1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Cropping1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Cropping1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Cropping1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Cropping1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Cropping1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Cropping1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Cropping1D.set_weights": "tf.keras.layers.Layer.set_weights", + 
"tf.keras.layers.Cropping1D.submodules": "tf.Module.submodules", + "tf.keras.layers.Cropping1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Cropping1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Cropping1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Cropping2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Cropping2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Cropping2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Cropping2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Cropping2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Cropping2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Cropping2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Cropping2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Cropping2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Cropping2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Cropping2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Cropping2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Cropping2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Cropping2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Cropping2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Cropping2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Cropping2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Cropping2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Cropping2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Cropping2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Cropping2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Cropping2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Cropping2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Cropping2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Cropping2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Cropping2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Cropping2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Cropping2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Cropping2D.submodules": "tf.Module.submodules", + "tf.keras.layers.Cropping2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Cropping2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Cropping2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Cropping3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Cropping3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Cropping3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Cropping3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Cropping3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Cropping3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Cropping3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Cropping3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Cropping3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Cropping3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Cropping3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Cropping3D.add_weight": 
"tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Cropping3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Cropping3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Cropping3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Cropping3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Cropping3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Cropping3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Cropping3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Cropping3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Cropping3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Cropping3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Cropping3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Cropping3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Cropping3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Cropping3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Cropping3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Cropping3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Cropping3D.submodules": "tf.Module.submodules", + "tf.keras.layers.Cropping3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Cropping3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Cropping3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Dense.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Dense.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Dense.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Dense.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Dense.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Dense.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Dense.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Dense.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Dense.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Dense.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Dense.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Dense.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Dense.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Dense.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Dense.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Dense.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Dense.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Dense.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Dense.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Dense.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Dense.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Dense.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Dense.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Dense.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Dense.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Dense.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Dense.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Dense.submodules": "tf.Module.submodules", + 
"tf.keras.layers.Dense.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Dense.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Dense.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.DenseFeatures.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.DenseFeatures.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.DenseFeatures.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.DenseFeatures.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.DenseFeatures.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.DenseFeatures.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.DenseFeatures.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.DenseFeatures.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.DenseFeatures.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.DenseFeatures.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.DenseFeatures.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.DenseFeatures.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.DenseFeatures.call": "tf.compat.v1.keras.layers.DenseFeatures.call", + "tf.keras.layers.DenseFeatures.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.DenseFeatures.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.DenseFeatures.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.DenseFeatures.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.DenseFeatures.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.DenseFeatures.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.DenseFeatures.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.DenseFeatures.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.DenseFeatures.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.DenseFeatures.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.DenseFeatures.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.DenseFeatures.name_scope": "tf.Module.name_scope", + "tf.keras.layers.DenseFeatures.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.DenseFeatures.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.DenseFeatures.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.DenseFeatures.submodules": "tf.Module.submodules", + "tf.keras.layers.DenseFeatures.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.DenseFeatures.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.DenseFeatures.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.DepthwiseConv2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.DepthwiseConv2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.DepthwiseConv2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.DepthwiseConv2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.DepthwiseConv2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.DepthwiseConv2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.DepthwiseConv2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.DepthwiseConv2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.DepthwiseConv2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.DepthwiseConv2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.DepthwiseConv2D.add_metric": 
"tf.keras.layers.Layer.add_metric", + "tf.keras.layers.DepthwiseConv2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.DepthwiseConv2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.DepthwiseConv2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.DepthwiseConv2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.DepthwiseConv2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.DepthwiseConv2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.DepthwiseConv2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.DepthwiseConv2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.DepthwiseConv2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.DepthwiseConv2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.DepthwiseConv2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.DepthwiseConv2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.DepthwiseConv2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.DepthwiseConv2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.DepthwiseConv2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.DepthwiseConv2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.DepthwiseConv2D.submodules": "tf.Module.submodules", + "tf.keras.layers.DepthwiseConv2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.DepthwiseConv2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.DepthwiseConv2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Dot.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Dot.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Dot.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Dot.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Dot.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Dot.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Dot.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Dot.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Dot.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Dot.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Dot.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Dot.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Dot.call": "tf.keras.layers.Add.call", + "tf.keras.layers.Dot.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Dot.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Dot.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Dot.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Dot.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Dot.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Dot.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Dot.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Dot.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Dot.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Dot.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Dot.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Dot.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Dot.set_weights": "tf.keras.layers.Layer.set_weights", + 
"tf.keras.layers.Dot.submodules": "tf.Module.submodules", + "tf.keras.layers.Dot.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Dot.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Dot.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Dropout.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Dropout.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Dropout.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Dropout.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Dropout.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Dropout.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Dropout.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Dropout.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Dropout.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Dropout.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Dropout.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Dropout.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Dropout.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Dropout.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Dropout.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Dropout.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Dropout.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Dropout.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Dropout.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Dropout.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Dropout.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Dropout.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Dropout.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Dropout.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Dropout.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Dropout.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Dropout.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Dropout.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Dropout.submodules": "tf.Module.submodules", + "tf.keras.layers.Dropout.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Dropout.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Dropout.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.ELU.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.ELU.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.ELU.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.ELU.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.ELU.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.ELU.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.ELU.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.ELU.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.ELU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.ELU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.ELU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.ELU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.ELU.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.ELU.compute_mask": "tf.keras.layers.Layer.compute_mask", + 
"tf.keras.layers.ELU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.ELU.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.ELU.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.ELU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.ELU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.ELU.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.ELU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.ELU.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.ELU.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.ELU.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.ELU.name_scope": "tf.Module.name_scope", + "tf.keras.layers.ELU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.ELU.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.ELU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.ELU.submodules": "tf.Module.submodules", + "tf.keras.layers.ELU.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.ELU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.ELU.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Embedding.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Embedding.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Embedding.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Embedding.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Embedding.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Embedding.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Embedding.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Embedding.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Embedding.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Embedding.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Embedding.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Embedding.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Embedding.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Embedding.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Embedding.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Embedding.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Embedding.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Embedding.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Embedding.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Embedding.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Embedding.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Embedding.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Embedding.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Embedding.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Embedding.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Embedding.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Embedding.submodules": "tf.Module.submodules", + "tf.keras.layers.Embedding.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Embedding.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Embedding.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Flatten.__call__": 
"tf.keras.layers.Layer.__call__", + "tf.keras.layers.Flatten.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Flatten.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Flatten.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Flatten.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Flatten.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Flatten.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Flatten.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Flatten.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Flatten.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Flatten.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Flatten.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Flatten.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Flatten.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Flatten.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Flatten.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Flatten.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Flatten.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Flatten.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Flatten.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Flatten.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Flatten.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Flatten.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Flatten.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Flatten.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Flatten.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Flatten.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Flatten.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Flatten.submodules": "tf.Module.submodules", + "tf.keras.layers.Flatten.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Flatten.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Flatten.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GRU.__call__": "tf.keras.layers.RNN.__call__", + "tf.keras.layers.GRU.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GRU.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GRU.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GRU.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GRU.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GRU.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GRU.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GRU.activation": "tf.compat.v1.keras.layers.GRU.activation", + "tf.keras.layers.GRU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GRU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GRU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GRU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GRU.bias_constraint": "tf.compat.v1.keras.layers.GRU.bias_constraint", + "tf.keras.layers.GRU.bias_initializer": "tf.compat.v1.keras.layers.GRU.bias_initializer", + "tf.keras.layers.GRU.bias_regularizer": "tf.compat.v1.keras.layers.GRU.bias_regularizer", + "tf.keras.layers.GRU.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.keras.layers.GRU.compute_output_shape": 
"tf.keras.layers.RNN.compute_output_shape", + "tf.keras.layers.GRU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GRU.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GRU.dropout": "tf.compat.v1.keras.layers.GRU.dropout", + "tf.keras.layers.GRU.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GRU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GRU.get_config": "tf.compat.v1.keras.layers.GRU.get_config", + "tf.keras.layers.GRU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GRU.implementation": "tf.compat.v1.keras.layers.GRU.implementation", + "tf.keras.layers.GRU.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GRU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GRU.kernel_constraint": "tf.compat.v1.keras.layers.GRU.kernel_constraint", + "tf.keras.layers.GRU.kernel_initializer": "tf.compat.v1.keras.layers.GRU.kernel_initializer", + "tf.keras.layers.GRU.kernel_regularizer": "tf.compat.v1.keras.layers.GRU.kernel_regularizer", + "tf.keras.layers.GRU.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GRU.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GRU.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GRU.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GRU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GRU.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GRU.recurrent_activation": "tf.compat.v1.keras.layers.GRU.recurrent_activation", + "tf.keras.layers.GRU.recurrent_constraint": "tf.compat.v1.keras.layers.GRU.recurrent_constraint", + "tf.keras.layers.GRU.recurrent_dropout": "tf.compat.v1.keras.layers.GRU.recurrent_dropout", + "tf.keras.layers.GRU.recurrent_initializer": "tf.compat.v1.keras.layers.GRU.recurrent_initializer", + "tf.keras.layers.GRU.recurrent_regularizer": "tf.compat.v1.keras.layers.GRU.recurrent_regularizer", + "tf.keras.layers.GRU.reset_after": "tf.compat.v1.keras.layers.GRU.reset_after", + "tf.keras.layers.GRU.reset_states": "tf.keras.layers.RNN.reset_states", + "tf.keras.layers.GRU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GRU.states": "tf.keras.layers.RNN.states", + "tf.keras.layers.GRU.submodules": "tf.Module.submodules", + "tf.keras.layers.GRU.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GRU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GRU.units": "tf.compat.v1.keras.layers.GRU.units", + "tf.keras.layers.GRU.use_bias": "tf.compat.v1.keras.layers.GRU.use_bias", + "tf.keras.layers.GRU.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GRUCell.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GRUCell.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GRUCell.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GRUCell.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GRUCell.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GRUCell.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GRUCell.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GRUCell.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GRUCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GRUCell.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GRUCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GRUCell.add_weight": "tf.keras.layers.Layer.add_weight", + 
"tf.keras.layers.GRUCell.build": "tf.compat.v1.keras.layers.GRUCell.build", + "tf.keras.layers.GRUCell.call": "tf.compat.v1.keras.layers.GRUCell.call", + "tf.keras.layers.GRUCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GRUCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.layers.GRUCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GRUCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GRUCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GRUCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GRUCell.get_config": "tf.compat.v1.keras.layers.GRUCell.get_config", + "tf.keras.layers.GRUCell.get_dropout_mask_for_cell": "tf.keras.layers.GRU.get_dropout_mask_for_cell", + "tf.keras.layers.GRUCell.get_initial_state": "tf.compat.v1.keras.layers.GRUCell.get_initial_state", + "tf.keras.layers.GRUCell.get_recurrent_dropout_mask_for_cell": "tf.keras.layers.GRU.get_recurrent_dropout_mask_for_cell", + "tf.keras.layers.GRUCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GRUCell.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GRUCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GRUCell.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GRUCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GRUCell.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GRUCell.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GRUCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GRUCell.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GRUCell.reset_dropout_mask": "tf.keras.layers.GRU.reset_dropout_mask", + "tf.keras.layers.GRUCell.reset_recurrent_dropout_mask": "tf.keras.layers.GRU.reset_recurrent_dropout_mask", + "tf.keras.layers.GRUCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GRUCell.submodules": "tf.Module.submodules", + "tf.keras.layers.GRUCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GRUCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GRUCell.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GaussianDropout.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GaussianDropout.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GaussianDropout.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GaussianDropout.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GaussianDropout.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GaussianDropout.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GaussianDropout.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GaussianDropout.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GaussianDropout.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GaussianDropout.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GaussianDropout.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GaussianDropout.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GaussianDropout.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GaussianDropout.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GaussianDropout.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GaussianDropout.count_params": 
"tf.keras.layers.Layer.count_params", + "tf.keras.layers.GaussianDropout.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GaussianDropout.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GaussianDropout.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GaussianDropout.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GaussianDropout.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GaussianDropout.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GaussianDropout.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GaussianDropout.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GaussianDropout.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GaussianDropout.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GaussianDropout.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GaussianDropout.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GaussianDropout.submodules": "tf.Module.submodules", + "tf.keras.layers.GaussianDropout.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GaussianDropout.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GaussianDropout.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GaussianNoise.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GaussianNoise.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GaussianNoise.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GaussianNoise.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GaussianNoise.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GaussianNoise.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GaussianNoise.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GaussianNoise.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GaussianNoise.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GaussianNoise.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GaussianNoise.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GaussianNoise.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GaussianNoise.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GaussianNoise.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GaussianNoise.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GaussianNoise.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GaussianNoise.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GaussianNoise.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GaussianNoise.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GaussianNoise.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GaussianNoise.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GaussianNoise.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GaussianNoise.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GaussianNoise.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GaussianNoise.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GaussianNoise.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GaussianNoise.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GaussianNoise.set_weights": "tf.keras.layers.Layer.set_weights", + 
"tf.keras.layers.GaussianNoise.submodules": "tf.Module.submodules", + "tf.keras.layers.GaussianNoise.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GaussianNoise.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GaussianNoise.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalAveragePooling1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalAveragePooling1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalAveragePooling1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalAveragePooling1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalAveragePooling1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalAveragePooling1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalAveragePooling1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalAveragePooling1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalAveragePooling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalAveragePooling1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalAveragePooling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalAveragePooling1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalAveragePooling1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalAveragePooling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalAveragePooling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalAveragePooling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalAveragePooling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalAveragePooling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalAveragePooling1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalAveragePooling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalAveragePooling1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalAveragePooling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalAveragePooling1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalAveragePooling1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalAveragePooling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalAveragePooling1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalAveragePooling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalAveragePooling1D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalAveragePooling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalAveragePooling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalAveragePooling1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalAveragePooling2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalAveragePooling2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalAveragePooling2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalAveragePooling2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalAveragePooling2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalAveragePooling2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalAveragePooling2D.__ne__": 
"tf.keras.Model.__ne__", + "tf.keras.layers.GlobalAveragePooling2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalAveragePooling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalAveragePooling2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalAveragePooling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalAveragePooling2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalAveragePooling2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalAveragePooling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GlobalAveragePooling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalAveragePooling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalAveragePooling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalAveragePooling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalAveragePooling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalAveragePooling2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalAveragePooling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalAveragePooling2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalAveragePooling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalAveragePooling2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalAveragePooling2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalAveragePooling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalAveragePooling2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalAveragePooling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalAveragePooling2D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalAveragePooling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalAveragePooling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalAveragePooling2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalAveragePooling3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalAveragePooling3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalAveragePooling3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalAveragePooling3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalAveragePooling3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalAveragePooling3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalAveragePooling3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalAveragePooling3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalAveragePooling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalAveragePooling3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalAveragePooling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalAveragePooling3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalAveragePooling3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalAveragePooling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GlobalAveragePooling3D.compute_output_signature": 
"tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalAveragePooling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalAveragePooling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalAveragePooling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalAveragePooling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalAveragePooling3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalAveragePooling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalAveragePooling3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalAveragePooling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalAveragePooling3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalAveragePooling3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalAveragePooling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalAveragePooling3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalAveragePooling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalAveragePooling3D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalAveragePooling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalAveragePooling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalAveragePooling3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalAvgPool1D": "tf.keras.layers.GlobalAveragePooling1D", + "tf.keras.layers.GlobalAvgPool1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalAvgPool1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalAvgPool1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalAvgPool1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalAvgPool1D.__init__": "tf.keras.layers.GlobalAveragePooling1D.__init__", + "tf.keras.layers.GlobalAvgPool1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalAvgPool1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalAvgPool1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalAvgPool1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalAvgPool1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalAvgPool1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalAvgPool1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalAvgPool1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalAvgPool1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalAvgPool1D.call": "tf.keras.layers.GlobalAveragePooling1D.call", + "tf.keras.layers.GlobalAvgPool1D.compute_mask": "tf.keras.layers.GlobalAveragePooling1D.compute_mask", + "tf.keras.layers.GlobalAvgPool1D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling1D.compute_output_shape", + "tf.keras.layers.GlobalAvgPool1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalAvgPool1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalAvgPool1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalAvgPool1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalAvgPool1D.get_config": "tf.keras.layers.GlobalAveragePooling1D.get_config", + "tf.keras.layers.GlobalAvgPool1D.get_weights": 
"tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalAvgPool1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalAvgPool1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalAvgPool1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalAvgPool1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalAvgPool1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalAvgPool1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalAvgPool1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalAvgPool1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalAvgPool1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalAvgPool1D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalAvgPool1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalAvgPool1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalAvgPool1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalAvgPool2D": "tf.keras.layers.GlobalAveragePooling2D", + "tf.keras.layers.GlobalAvgPool2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalAvgPool2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalAvgPool2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalAvgPool2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalAvgPool2D.__init__": "tf.keras.layers.GlobalAveragePooling2D.__init__", + "tf.keras.layers.GlobalAvgPool2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalAvgPool2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalAvgPool2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalAvgPool2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalAvgPool2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalAvgPool2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalAvgPool2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalAvgPool2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalAvgPool2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalAvgPool2D.call": "tf.keras.layers.GlobalAveragePooling2D.call", + "tf.keras.layers.GlobalAvgPool2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GlobalAvgPool2D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling2D.compute_output_shape", + "tf.keras.layers.GlobalAvgPool2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalAvgPool2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalAvgPool2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalAvgPool2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalAvgPool2D.get_config": "tf.keras.layers.GlobalAveragePooling2D.get_config", + "tf.keras.layers.GlobalAvgPool2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalAvgPool2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalAvgPool2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalAvgPool2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalAvgPool2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalAvgPool2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalAvgPool2D.name_scope": "tf.Module.name_scope", + 
"tf.keras.layers.GlobalAvgPool2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalAvgPool2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalAvgPool2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalAvgPool2D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalAvgPool2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalAvgPool2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalAvgPool2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalAvgPool3D": "tf.keras.layers.GlobalAveragePooling3D", + "tf.keras.layers.GlobalAvgPool3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalAvgPool3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalAvgPool3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalAvgPool3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalAvgPool3D.__init__": "tf.keras.layers.GlobalAveragePooling3D.__init__", + "tf.keras.layers.GlobalAvgPool3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalAvgPool3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalAvgPool3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalAvgPool3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalAvgPool3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalAvgPool3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalAvgPool3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalAvgPool3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalAvgPool3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalAvgPool3D.call": "tf.keras.layers.GlobalAveragePooling3D.call", + "tf.keras.layers.GlobalAvgPool3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GlobalAvgPool3D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling3D.compute_output_shape", + "tf.keras.layers.GlobalAvgPool3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalAvgPool3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalAvgPool3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalAvgPool3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalAvgPool3D.get_config": "tf.keras.layers.GlobalAveragePooling3D.get_config", + "tf.keras.layers.GlobalAvgPool3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalAvgPool3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalAvgPool3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalAvgPool3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalAvgPool3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalAvgPool3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalAvgPool3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalAvgPool3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalAvgPool3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalAvgPool3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalAvgPool3D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalAvgPool3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalAvgPool3D.trainable_weights": 
"tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalAvgPool3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalMaxPool1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalMaxPool1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalMaxPool1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalMaxPool1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalMaxPool1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalMaxPool1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalMaxPool1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalMaxPool1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalMaxPool1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalMaxPool1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalMaxPool1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalMaxPool1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalMaxPool1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalMaxPool1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GlobalMaxPool1D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling1D.compute_output_shape", + "tf.keras.layers.GlobalMaxPool1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalMaxPool1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalMaxPool1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalMaxPool1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalMaxPool1D.get_config": "tf.keras.layers.GlobalAveragePooling1D.get_config", + "tf.keras.layers.GlobalMaxPool1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalMaxPool1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalMaxPool1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalMaxPool1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalMaxPool1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalMaxPool1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalMaxPool1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalMaxPool1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalMaxPool1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalMaxPool1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalMaxPool1D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalMaxPool1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalMaxPool1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalMaxPool1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalMaxPool2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalMaxPool2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalMaxPool2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalMaxPool2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalMaxPool2D.__init__": "tf.keras.layers.GlobalAveragePooling2D.__init__", + "tf.keras.layers.GlobalMaxPool2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalMaxPool2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalMaxPool2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalMaxPool2D.__new__": 
"tf.keras.Model.__new__", + "tf.keras.layers.GlobalMaxPool2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalMaxPool2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalMaxPool2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalMaxPool2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalMaxPool2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalMaxPool2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GlobalMaxPool2D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling2D.compute_output_shape", + "tf.keras.layers.GlobalMaxPool2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalMaxPool2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalMaxPool2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalMaxPool2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalMaxPool2D.get_config": "tf.keras.layers.GlobalAveragePooling2D.get_config", + "tf.keras.layers.GlobalMaxPool2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalMaxPool2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalMaxPool2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalMaxPool2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalMaxPool2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalMaxPool2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalMaxPool2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalMaxPool2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalMaxPool2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalMaxPool2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalMaxPool2D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalMaxPool2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalMaxPool2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalMaxPool2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalMaxPool3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalMaxPool3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalMaxPool3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalMaxPool3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalMaxPool3D.__init__": "tf.keras.layers.GlobalAveragePooling3D.__init__", + "tf.keras.layers.GlobalMaxPool3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalMaxPool3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalMaxPool3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalMaxPool3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalMaxPool3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalMaxPool3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalMaxPool3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalMaxPool3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalMaxPool3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalMaxPool3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GlobalMaxPool3D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling3D.compute_output_shape", + 
"tf.keras.layers.GlobalMaxPool3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalMaxPool3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalMaxPool3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalMaxPool3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalMaxPool3D.get_config": "tf.keras.layers.GlobalAveragePooling3D.get_config", + "tf.keras.layers.GlobalMaxPool3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalMaxPool3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalMaxPool3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalMaxPool3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalMaxPool3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalMaxPool3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalMaxPool3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalMaxPool3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalMaxPool3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalMaxPool3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalMaxPool3D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalMaxPool3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalMaxPool3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalMaxPool3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalMaxPooling1D": "tf.keras.layers.GlobalMaxPool1D", + "tf.keras.layers.GlobalMaxPooling1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalMaxPooling1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalMaxPooling1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalMaxPooling1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalMaxPooling1D.__init__": "tf.keras.layers.GlobalMaxPool1D.__init__", + "tf.keras.layers.GlobalMaxPooling1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalMaxPooling1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalMaxPooling1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalMaxPooling1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalMaxPooling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalMaxPooling1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalMaxPooling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalMaxPooling1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalMaxPooling1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalMaxPooling1D.call": "tf.keras.layers.GlobalMaxPool1D.call", + "tf.keras.layers.GlobalMaxPooling1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GlobalMaxPooling1D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling1D.compute_output_shape", + "tf.keras.layers.GlobalMaxPooling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalMaxPooling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalMaxPooling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalMaxPooling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalMaxPooling1D.get_config": 
"tf.keras.layers.GlobalAveragePooling1D.get_config", + "tf.keras.layers.GlobalMaxPooling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalMaxPooling1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalMaxPooling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalMaxPooling1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalMaxPooling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalMaxPooling1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalMaxPooling1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalMaxPooling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalMaxPooling1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalMaxPooling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalMaxPooling1D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalMaxPooling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalMaxPooling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalMaxPooling1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalMaxPooling2D": "tf.keras.layers.GlobalMaxPool2D", + "tf.keras.layers.GlobalMaxPooling2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalMaxPooling2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalMaxPooling2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalMaxPooling2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalMaxPooling2D.__init__": "tf.keras.layers.GlobalAveragePooling2D.__init__", + "tf.keras.layers.GlobalMaxPooling2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalMaxPooling2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalMaxPooling2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalMaxPooling2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalMaxPooling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalMaxPooling2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalMaxPooling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalMaxPooling2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalMaxPooling2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalMaxPooling2D.call": "tf.keras.layers.GlobalMaxPool2D.call", + "tf.keras.layers.GlobalMaxPooling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GlobalMaxPooling2D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling2D.compute_output_shape", + "tf.keras.layers.GlobalMaxPooling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalMaxPooling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalMaxPooling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalMaxPooling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalMaxPooling2D.get_config": "tf.keras.layers.GlobalAveragePooling2D.get_config", + "tf.keras.layers.GlobalMaxPooling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalMaxPooling2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalMaxPooling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalMaxPooling2D.losses": "tf.keras.layers.Layer.losses", + 
"tf.keras.layers.GlobalMaxPooling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalMaxPooling2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalMaxPooling2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalMaxPooling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalMaxPooling2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalMaxPooling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalMaxPooling2D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalMaxPooling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalMaxPooling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalMaxPooling2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.GlobalMaxPooling3D": "tf.keras.layers.GlobalMaxPool3D", + "tf.keras.layers.GlobalMaxPooling3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.GlobalMaxPooling3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.GlobalMaxPooling3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.GlobalMaxPooling3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.GlobalMaxPooling3D.__init__": "tf.keras.layers.GlobalAveragePooling3D.__init__", + "tf.keras.layers.GlobalMaxPooling3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.GlobalMaxPooling3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.GlobalMaxPooling3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.GlobalMaxPooling3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.GlobalMaxPooling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.GlobalMaxPooling3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.GlobalMaxPooling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.GlobalMaxPooling3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.GlobalMaxPooling3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.GlobalMaxPooling3D.call": "tf.keras.layers.GlobalMaxPool3D.call", + "tf.keras.layers.GlobalMaxPooling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.GlobalMaxPooling3D.compute_output_shape": "tf.keras.layers.GlobalAveragePooling3D.compute_output_shape", + "tf.keras.layers.GlobalMaxPooling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.GlobalMaxPooling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.GlobalMaxPooling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.GlobalMaxPooling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.GlobalMaxPooling3D.get_config": "tf.keras.layers.GlobalAveragePooling3D.get_config", + "tf.keras.layers.GlobalMaxPooling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.GlobalMaxPooling3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.GlobalMaxPooling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.GlobalMaxPooling3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.GlobalMaxPooling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.GlobalMaxPooling3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.GlobalMaxPooling3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.GlobalMaxPooling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.GlobalMaxPooling3D.output": 
"tf.keras.layers.Layer.output", + "tf.keras.layers.GlobalMaxPooling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.GlobalMaxPooling3D.submodules": "tf.Module.submodules", + "tf.keras.layers.GlobalMaxPooling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.GlobalMaxPooling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.GlobalMaxPooling3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Input": "tf.keras.Input", + "tf.keras.layers.InputLayer.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.InputLayer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.InputLayer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.InputLayer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.InputLayer.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.InputLayer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.InputLayer.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.InputLayer.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.InputLayer.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.InputLayer.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.InputLayer.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.InputLayer.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.InputLayer.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.InputLayer.call": "tf.keras.layers.Layer.call", + "tf.keras.layers.InputLayer.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.InputLayer.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.layers.InputLayer.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.InputLayer.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.InputLayer.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.InputLayer.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.InputLayer.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.InputLayer.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.InputLayer.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.InputLayer.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.InputLayer.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.InputLayer.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.InputLayer.name_scope": "tf.Module.name_scope", + "tf.keras.layers.InputLayer.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.InputLayer.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.InputLayer.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.InputLayer.submodules": "tf.Module.submodules", + "tf.keras.layers.InputLayer.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.InputLayer.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.InputLayer.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.InputSpec.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.InputSpec.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.InputSpec.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.InputSpec.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.InputSpec.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.InputSpec.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.InputSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.keras.layers.LSTM.__call__": "tf.keras.layers.RNN.__call__", + "tf.keras.layers.LSTM.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.LSTM.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.LSTM.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.LSTM.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.LSTM.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.LSTM.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.LSTM.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.LSTM.activation": "tf.compat.v1.keras.layers.LSTM.activation", + "tf.keras.layers.LSTM.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.LSTM.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.LSTM.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.LSTM.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.LSTM.bias_constraint": "tf.compat.v1.keras.layers.LSTM.bias_constraint", + "tf.keras.layers.LSTM.bias_initializer": "tf.compat.v1.keras.layers.LSTM.bias_initializer", + "tf.keras.layers.LSTM.bias_regularizer": "tf.compat.v1.keras.layers.LSTM.bias_regularizer", + "tf.keras.layers.LSTM.build": "tf.keras.layers.RNN.build", + "tf.keras.layers.LSTM.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.keras.layers.LSTM.compute_output_shape": "tf.keras.layers.RNN.compute_output_shape", + "tf.keras.layers.LSTM.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.LSTM.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.LSTM.dropout": "tf.compat.v1.keras.layers.LSTM.dropout", + "tf.keras.layers.LSTM.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.LSTM.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.LSTM.get_config": "tf.compat.v1.keras.layers.LSTM.get_config", + "tf.keras.layers.LSTM.get_dropout_mask_for_cell": "tf.keras.layers.GRU.get_dropout_mask_for_cell", + "tf.keras.layers.LSTM.get_recurrent_dropout_mask_for_cell": "tf.keras.layers.GRU.get_recurrent_dropout_mask_for_cell", + "tf.keras.layers.LSTM.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.LSTM.implementation": "tf.compat.v1.keras.layers.LSTM.implementation", + "tf.keras.layers.LSTM.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.LSTM.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.LSTM.kernel_constraint": "tf.compat.v1.keras.layers.LSTM.kernel_constraint", + "tf.keras.layers.LSTM.kernel_initializer": "tf.compat.v1.keras.layers.LSTM.kernel_initializer", + "tf.keras.layers.LSTM.kernel_regularizer": "tf.compat.v1.keras.layers.LSTM.kernel_regularizer", + "tf.keras.layers.LSTM.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.LSTM.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.LSTM.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.LSTM.name_scope": "tf.Module.name_scope", + "tf.keras.layers.LSTM.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.LSTM.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.LSTM.recurrent_activation": "tf.compat.v1.keras.layers.LSTM.recurrent_activation", + "tf.keras.layers.LSTM.recurrent_constraint": "tf.compat.v1.keras.layers.LSTM.recurrent_constraint", + "tf.keras.layers.LSTM.recurrent_dropout": "tf.compat.v1.keras.layers.LSTM.recurrent_dropout", + "tf.keras.layers.LSTM.recurrent_initializer": "tf.compat.v1.keras.layers.LSTM.recurrent_initializer", + "tf.keras.layers.LSTM.recurrent_regularizer": 
"tf.compat.v1.keras.layers.LSTM.recurrent_regularizer", + "tf.keras.layers.LSTM.reset_dropout_mask": "tf.keras.layers.GRU.reset_dropout_mask", + "tf.keras.layers.LSTM.reset_recurrent_dropout_mask": "tf.keras.layers.GRU.reset_recurrent_dropout_mask", + "tf.keras.layers.LSTM.reset_states": "tf.keras.layers.RNN.reset_states", + "tf.keras.layers.LSTM.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.LSTM.states": "tf.keras.layers.RNN.states", + "tf.keras.layers.LSTM.submodules": "tf.Module.submodules", + "tf.keras.layers.LSTM.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.LSTM.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.LSTM.unit_forget_bias": "tf.compat.v1.keras.layers.LSTM.unit_forget_bias", + "tf.keras.layers.LSTM.units": "tf.compat.v1.keras.layers.LSTM.units", + "tf.keras.layers.LSTM.use_bias": "tf.compat.v1.keras.layers.LSTM.use_bias", + "tf.keras.layers.LSTM.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.LSTMCell.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.LSTMCell.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.LSTMCell.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.LSTMCell.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.LSTMCell.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.LSTMCell.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.LSTMCell.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.LSTMCell.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.LSTMCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.LSTMCell.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.LSTMCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.LSTMCell.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.LSTMCell.build": "tf.compat.v1.keras.layers.LSTMCell.build", + "tf.keras.layers.LSTMCell.call": "tf.compat.v1.keras.layers.LSTMCell.call", + "tf.keras.layers.LSTMCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.LSTMCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.layers.LSTMCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.LSTMCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.LSTMCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.LSTMCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.LSTMCell.get_config": "tf.compat.v1.keras.layers.LSTMCell.get_config", + "tf.keras.layers.LSTMCell.get_dropout_mask_for_cell": "tf.keras.layers.GRU.get_dropout_mask_for_cell", + "tf.keras.layers.LSTMCell.get_initial_state": "tf.compat.v1.keras.layers.LSTMCell.get_initial_state", + "tf.keras.layers.LSTMCell.get_recurrent_dropout_mask_for_cell": "tf.keras.layers.GRU.get_recurrent_dropout_mask_for_cell", + "tf.keras.layers.LSTMCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.LSTMCell.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.LSTMCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.LSTMCell.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.LSTMCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.LSTMCell.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.LSTMCell.name_scope": "tf.Module.name_scope", + "tf.keras.layers.LSTMCell.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.LSTMCell.output": 
"tf.keras.layers.Layer.output", + "tf.keras.layers.LSTMCell.reset_dropout_mask": "tf.keras.layers.GRU.reset_dropout_mask", + "tf.keras.layers.LSTMCell.reset_recurrent_dropout_mask": "tf.keras.layers.GRU.reset_recurrent_dropout_mask", + "tf.keras.layers.LSTMCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.LSTMCell.submodules": "tf.Module.submodules", + "tf.keras.layers.LSTMCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.LSTMCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.LSTMCell.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Lambda.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Lambda.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Lambda.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Lambda.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Lambda.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Lambda.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Lambda.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Lambda.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Lambda.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Lambda.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Lambda.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Lambda.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Lambda.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Lambda.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Lambda.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Lambda.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Lambda.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Lambda.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Lambda.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Lambda.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Lambda.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Lambda.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Lambda.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Lambda.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Lambda.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Lambda.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Lambda.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Lambda.submodules": "tf.Module.submodules", + "tf.keras.layers.Lambda.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Lambda.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Lambda.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Layer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Layer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Layer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Layer.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Layer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Layer.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Layer.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Layer.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Layer.submodules": "tf.Module.submodules", + "tf.keras.layers.LayerNormalization.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.LayerNormalization.__eq__": "tf.keras.Model.__eq__", + 
"tf.keras.layers.LayerNormalization.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.LayerNormalization.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.LayerNormalization.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.LayerNormalization.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.LayerNormalization.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.LayerNormalization.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.LayerNormalization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.LayerNormalization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.LayerNormalization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.LayerNormalization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.LayerNormalization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.LayerNormalization.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.LayerNormalization.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.LayerNormalization.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.LayerNormalization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.LayerNormalization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.LayerNormalization.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.LayerNormalization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.LayerNormalization.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.LayerNormalization.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.LayerNormalization.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.LayerNormalization.name_scope": "tf.Module.name_scope", + "tf.keras.layers.LayerNormalization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.LayerNormalization.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.LayerNormalization.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.LayerNormalization.submodules": "tf.Module.submodules", + "tf.keras.layers.LayerNormalization.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.LayerNormalization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.LayerNormalization.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.LeakyReLU.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.LeakyReLU.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.LeakyReLU.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.LeakyReLU.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.LeakyReLU.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.LeakyReLU.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.LeakyReLU.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.LeakyReLU.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.LeakyReLU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.LeakyReLU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.LeakyReLU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.LeakyReLU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.LeakyReLU.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.LeakyReLU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.LeakyReLU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + 
"tf.keras.layers.LeakyReLU.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.LeakyReLU.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.LeakyReLU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.LeakyReLU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.LeakyReLU.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.LeakyReLU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.LeakyReLU.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.LeakyReLU.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.LeakyReLU.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.LeakyReLU.name_scope": "tf.Module.name_scope", + "tf.keras.layers.LeakyReLU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.LeakyReLU.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.LeakyReLU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.LeakyReLU.submodules": "tf.Module.submodules", + "tf.keras.layers.LeakyReLU.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.LeakyReLU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.LeakyReLU.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.LocallyConnected1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.LocallyConnected1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.LocallyConnected1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.LocallyConnected1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.LocallyConnected1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.LocallyConnected1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.LocallyConnected1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.LocallyConnected1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.LocallyConnected1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.LocallyConnected1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.LocallyConnected1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.LocallyConnected1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.LocallyConnected1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.LocallyConnected1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.LocallyConnected1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.LocallyConnected1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.LocallyConnected1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.LocallyConnected1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.LocallyConnected1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.LocallyConnected1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.LocallyConnected1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.LocallyConnected1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.LocallyConnected1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.LocallyConnected1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.LocallyConnected1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.LocallyConnected1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.LocallyConnected1D.set_weights": "tf.keras.layers.Layer.set_weights", + 
"tf.keras.layers.LocallyConnected1D.submodules": "tf.Module.submodules", + "tf.keras.layers.LocallyConnected1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.LocallyConnected1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.LocallyConnected1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.LocallyConnected2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.LocallyConnected2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.LocallyConnected2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.LocallyConnected2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.LocallyConnected2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.LocallyConnected2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.LocallyConnected2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.LocallyConnected2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.LocallyConnected2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.LocallyConnected2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.LocallyConnected2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.LocallyConnected2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.LocallyConnected2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.LocallyConnected2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.LocallyConnected2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.LocallyConnected2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.LocallyConnected2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.LocallyConnected2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.LocallyConnected2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.LocallyConnected2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.LocallyConnected2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.LocallyConnected2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.LocallyConnected2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.LocallyConnected2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.LocallyConnected2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.LocallyConnected2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.LocallyConnected2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.LocallyConnected2D.submodules": "tf.Module.submodules", + "tf.keras.layers.LocallyConnected2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.LocallyConnected2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.LocallyConnected2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Masking.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Masking.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Masking.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Masking.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Masking.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Masking.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Masking.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Masking.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Masking.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + 
"tf.keras.layers.Masking.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Masking.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Masking.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Masking.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Masking.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Masking.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Masking.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Masking.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Masking.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Masking.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Masking.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Masking.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Masking.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Masking.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Masking.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Masking.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Masking.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Masking.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Masking.submodules": "tf.Module.submodules", + "tf.keras.layers.Masking.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Masking.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Masking.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.MaxPool1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.MaxPool1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.MaxPool1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.MaxPool1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.MaxPool1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.MaxPool1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.MaxPool1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.MaxPool1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.MaxPool1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.MaxPool1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.MaxPool1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.MaxPool1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.MaxPool1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.MaxPool1D.call": "tf.keras.layers.AveragePooling1D.call", + "tf.keras.layers.MaxPool1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.MaxPool1D.compute_output_shape": "tf.keras.layers.AveragePooling1D.compute_output_shape", + "tf.keras.layers.MaxPool1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.MaxPool1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.MaxPool1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.MaxPool1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.MaxPool1D.get_config": "tf.keras.layers.AveragePooling1D.get_config", + "tf.keras.layers.MaxPool1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.MaxPool1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.MaxPool1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.MaxPool1D.losses": "tf.keras.layers.Layer.losses", + 
"tf.keras.layers.MaxPool1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.MaxPool1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.MaxPool1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.MaxPool1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.MaxPool1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.MaxPool1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.MaxPool1D.submodules": "tf.Module.submodules", + "tf.keras.layers.MaxPool1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.MaxPool1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.MaxPool1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.MaxPool2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.MaxPool2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.MaxPool2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.MaxPool2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.MaxPool2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.MaxPool2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.MaxPool2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.MaxPool2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.MaxPool2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.MaxPool2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.MaxPool2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.MaxPool2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.MaxPool2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.MaxPool2D.call": "tf.keras.layers.AveragePooling2D.call", + "tf.keras.layers.MaxPool2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.MaxPool2D.compute_output_shape": "tf.keras.layers.AveragePooling2D.compute_output_shape", + "tf.keras.layers.MaxPool2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.MaxPool2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.MaxPool2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.MaxPool2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.MaxPool2D.get_config": "tf.keras.layers.AveragePooling2D.get_config", + "tf.keras.layers.MaxPool2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.MaxPool2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.MaxPool2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.MaxPool2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.MaxPool2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.MaxPool2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.MaxPool2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.MaxPool2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.MaxPool2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.MaxPool2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.MaxPool2D.submodules": "tf.Module.submodules", + "tf.keras.layers.MaxPool2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.MaxPool2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.MaxPool2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.MaxPool3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.MaxPool3D.__eq__": "tf.keras.Model.__eq__", + 
"tf.keras.layers.MaxPool3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.MaxPool3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.MaxPool3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.MaxPool3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.MaxPool3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.MaxPool3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.MaxPool3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.MaxPool3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.MaxPool3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.MaxPool3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.MaxPool3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.MaxPool3D.call": "tf.keras.layers.AveragePooling3D.call", + "tf.keras.layers.MaxPool3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.MaxPool3D.compute_output_shape": "tf.keras.layers.AveragePooling3D.compute_output_shape", + "tf.keras.layers.MaxPool3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.MaxPool3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.MaxPool3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.MaxPool3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.MaxPool3D.get_config": "tf.keras.layers.AveragePooling3D.get_config", + "tf.keras.layers.MaxPool3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.MaxPool3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.MaxPool3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.MaxPool3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.MaxPool3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.MaxPool3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.MaxPool3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.MaxPool3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.MaxPool3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.MaxPool3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.MaxPool3D.submodules": "tf.Module.submodules", + "tf.keras.layers.MaxPool3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.MaxPool3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.MaxPool3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.MaxPooling1D": "tf.keras.layers.MaxPool1D", + "tf.keras.layers.MaxPooling1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.MaxPooling1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.MaxPooling1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.MaxPooling1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.MaxPooling1D.__init__": "tf.keras.layers.MaxPool1D.__init__", + "tf.keras.layers.MaxPooling1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.MaxPooling1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.MaxPooling1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.MaxPooling1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.MaxPooling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.MaxPooling1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.MaxPooling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.MaxPooling1D.add_weight": "tf.keras.layers.Layer.add_weight", + 
"tf.keras.layers.MaxPooling1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.MaxPooling1D.call": "tf.keras.layers.AveragePooling1D.call", + "tf.keras.layers.MaxPooling1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.MaxPooling1D.compute_output_shape": "tf.keras.layers.AveragePooling1D.compute_output_shape", + "tf.keras.layers.MaxPooling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.MaxPooling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.MaxPooling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.MaxPooling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.MaxPooling1D.get_config": "tf.keras.layers.AveragePooling1D.get_config", + "tf.keras.layers.MaxPooling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.MaxPooling1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.MaxPooling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.MaxPooling1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.MaxPooling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.MaxPooling1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.MaxPooling1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.MaxPooling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.MaxPooling1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.MaxPooling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.MaxPooling1D.submodules": "tf.Module.submodules", + "tf.keras.layers.MaxPooling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.MaxPooling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.MaxPooling1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.MaxPooling2D": "tf.keras.layers.MaxPool2D", + "tf.keras.layers.MaxPooling2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.MaxPooling2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.MaxPooling2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.MaxPooling2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.MaxPooling2D.__init__": "tf.keras.layers.MaxPool2D.__init__", + "tf.keras.layers.MaxPooling2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.MaxPooling2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.MaxPooling2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.MaxPooling2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.MaxPooling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.MaxPooling2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.MaxPooling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.MaxPooling2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.MaxPooling2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.MaxPooling2D.call": "tf.keras.layers.AveragePooling2D.call", + "tf.keras.layers.MaxPooling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.MaxPooling2D.compute_output_shape": "tf.keras.layers.AveragePooling2D.compute_output_shape", + "tf.keras.layers.MaxPooling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.MaxPooling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.MaxPooling2D.dtype": "tf.keras.layers.Layer.dtype", + 
"tf.keras.layers.MaxPooling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.MaxPooling2D.get_config": "tf.keras.layers.AveragePooling2D.get_config", + "tf.keras.layers.MaxPooling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.MaxPooling2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.MaxPooling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.MaxPooling2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.MaxPooling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.MaxPooling2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.MaxPooling2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.MaxPooling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.MaxPooling2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.MaxPooling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.MaxPooling2D.submodules": "tf.Module.submodules", + "tf.keras.layers.MaxPooling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.MaxPooling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.MaxPooling2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.MaxPooling3D": "tf.keras.layers.MaxPool3D", + "tf.keras.layers.MaxPooling3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.MaxPooling3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.MaxPooling3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.MaxPooling3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.MaxPooling3D.__init__": "tf.keras.layers.MaxPool3D.__init__", + "tf.keras.layers.MaxPooling3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.MaxPooling3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.MaxPooling3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.MaxPooling3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.MaxPooling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.MaxPooling3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.MaxPooling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.MaxPooling3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.MaxPooling3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.MaxPooling3D.call": "tf.keras.layers.AveragePooling3D.call", + "tf.keras.layers.MaxPooling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.MaxPooling3D.compute_output_shape": "tf.keras.layers.AveragePooling3D.compute_output_shape", + "tf.keras.layers.MaxPooling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.MaxPooling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.MaxPooling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.MaxPooling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.MaxPooling3D.get_config": "tf.keras.layers.AveragePooling3D.get_config", + "tf.keras.layers.MaxPooling3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.MaxPooling3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.MaxPooling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.MaxPooling3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.MaxPooling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.MaxPooling3D.name": "tf.keras.layers.Layer.name", + 
"tf.keras.layers.MaxPooling3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.MaxPooling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.MaxPooling3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.MaxPooling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.MaxPooling3D.submodules": "tf.Module.submodules", + "tf.keras.layers.MaxPooling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.MaxPooling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.MaxPooling3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Maximum.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Maximum.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Maximum.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Maximum.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Maximum.__init__": "tf.keras.layers.Add.__init__", + "tf.keras.layers.Maximum.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Maximum.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Maximum.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Maximum.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Maximum.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Maximum.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Maximum.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Maximum.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Maximum.build": "tf.keras.layers.Add.build", + "tf.keras.layers.Maximum.call": "tf.keras.layers.Add.call", + "tf.keras.layers.Maximum.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.keras.layers.Maximum.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + "tf.keras.layers.Maximum.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Maximum.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Maximum.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Maximum.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Maximum.get_config": "tf.keras.layers.Layer.get_config", + "tf.keras.layers.Maximum.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Maximum.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Maximum.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Maximum.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Maximum.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Maximum.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Maximum.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Maximum.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Maximum.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Maximum.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Maximum.submodules": "tf.Module.submodules", + "tf.keras.layers.Maximum.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Maximum.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Maximum.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Minimum.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Minimum.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Minimum.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Minimum.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Minimum.__init__": 
"tf.keras.layers.Add.__init__", + "tf.keras.layers.Minimum.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Minimum.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Minimum.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Minimum.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Minimum.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Minimum.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Minimum.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Minimum.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Minimum.build": "tf.keras.layers.Add.build", + "tf.keras.layers.Minimum.call": "tf.keras.layers.Add.call", + "tf.keras.layers.Minimum.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.keras.layers.Minimum.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + "tf.keras.layers.Minimum.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Minimum.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Minimum.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Minimum.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Minimum.get_config": "tf.keras.layers.Layer.get_config", + "tf.keras.layers.Minimum.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Minimum.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Minimum.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Minimum.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Minimum.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Minimum.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Minimum.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Minimum.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Minimum.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Minimum.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Minimum.submodules": "tf.Module.submodules", + "tf.keras.layers.Minimum.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Minimum.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Minimum.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Multiply.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Multiply.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Multiply.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Multiply.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Multiply.__init__": "tf.keras.layers.Add.__init__", + "tf.keras.layers.Multiply.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Multiply.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Multiply.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Multiply.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Multiply.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Multiply.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Multiply.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Multiply.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Multiply.build": "tf.keras.layers.Add.build", + "tf.keras.layers.Multiply.call": "tf.keras.layers.Add.call", + "tf.keras.layers.Multiply.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.keras.layers.Multiply.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + 
"tf.keras.layers.Multiply.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Multiply.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Multiply.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Multiply.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Multiply.get_config": "tf.keras.layers.Layer.get_config", + "tf.keras.layers.Multiply.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Multiply.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Multiply.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Multiply.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Multiply.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Multiply.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Multiply.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Multiply.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Multiply.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Multiply.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Multiply.submodules": "tf.Module.submodules", + "tf.keras.layers.Multiply.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Multiply.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Multiply.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.PReLU.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.PReLU.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.PReLU.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.PReLU.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.PReLU.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.PReLU.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.PReLU.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.PReLU.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.PReLU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.PReLU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.PReLU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.PReLU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.PReLU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.PReLU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.PReLU.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.PReLU.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.PReLU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.PReLU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.PReLU.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.PReLU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.PReLU.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.PReLU.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.PReLU.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.PReLU.name_scope": "tf.Module.name_scope", + "tf.keras.layers.PReLU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.PReLU.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.PReLU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.PReLU.submodules": "tf.Module.submodules", + "tf.keras.layers.PReLU.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.PReLU.trainable_weights": 
"tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.PReLU.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Permute.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Permute.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Permute.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Permute.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Permute.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Permute.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Permute.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Permute.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Permute.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Permute.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Permute.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Permute.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Permute.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Permute.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Permute.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Permute.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Permute.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Permute.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Permute.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Permute.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Permute.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Permute.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Permute.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Permute.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Permute.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Permute.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Permute.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Permute.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Permute.submodules": "tf.Module.submodules", + "tf.keras.layers.Permute.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Permute.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Permute.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.RNN.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.RNN.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.RNN.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.RNN.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.RNN.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.RNN.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.RNN.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.RNN.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.RNN.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.RNN.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.RNN.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.RNN.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.RNN.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.RNN.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.RNN.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.RNN.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.RNN.input": 
"tf.keras.layers.Layer.input", + "tf.keras.layers.RNN.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.RNN.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.RNN.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.RNN.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.RNN.name_scope": "tf.Module.name_scope", + "tf.keras.layers.RNN.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.RNN.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.RNN.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.RNN.submodules": "tf.Module.submodules", + "tf.keras.layers.RNN.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.RNN.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.RNN.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.ReLU.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.ReLU.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.ReLU.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.ReLU.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.ReLU.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.ReLU.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.ReLU.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.ReLU.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.ReLU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.ReLU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.ReLU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.ReLU.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.ReLU.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.ReLU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.ReLU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.ReLU.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.ReLU.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.ReLU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.ReLU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.ReLU.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.ReLU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.ReLU.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.ReLU.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.ReLU.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.ReLU.name_scope": "tf.Module.name_scope", + "tf.keras.layers.ReLU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.ReLU.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.ReLU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.ReLU.submodules": "tf.Module.submodules", + "tf.keras.layers.ReLU.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.ReLU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.ReLU.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.RepeatVector.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.RepeatVector.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.RepeatVector.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.RepeatVector.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.RepeatVector.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.RepeatVector.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.RepeatVector.__ne__": 
"tf.keras.Model.__ne__", + "tf.keras.layers.RepeatVector.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.RepeatVector.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.RepeatVector.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.RepeatVector.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.RepeatVector.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.RepeatVector.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.RepeatVector.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.RepeatVector.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.RepeatVector.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.RepeatVector.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.RepeatVector.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.RepeatVector.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.RepeatVector.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.RepeatVector.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.RepeatVector.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.RepeatVector.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.RepeatVector.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.RepeatVector.name_scope": "tf.Module.name_scope", + "tf.keras.layers.RepeatVector.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.RepeatVector.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.RepeatVector.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.RepeatVector.submodules": "tf.Module.submodules", + "tf.keras.layers.RepeatVector.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.RepeatVector.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.RepeatVector.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Reshape.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Reshape.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Reshape.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Reshape.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Reshape.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Reshape.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Reshape.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Reshape.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Reshape.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Reshape.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Reshape.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Reshape.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Reshape.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Reshape.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Reshape.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Reshape.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Reshape.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Reshape.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Reshape.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Reshape.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Reshape.input_spec": "tf.keras.layers.Layer.input_spec", + 
"tf.keras.layers.Reshape.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Reshape.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Reshape.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Reshape.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Reshape.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Reshape.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Reshape.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Reshape.submodules": "tf.Module.submodules", + "tf.keras.layers.Reshape.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Reshape.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Reshape.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.SeparableConv1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.SeparableConv1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.SeparableConv1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.SeparableConv1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.SeparableConv1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.SeparableConv1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.SeparableConv1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.SeparableConv1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.SeparableConv1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.SeparableConv1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.SeparableConv1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.SeparableConv1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.SeparableConv1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.SeparableConv1D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.keras.layers.SeparableConv1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.SeparableConv1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.SeparableConv1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.SeparableConv1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.SeparableConv1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.SeparableConv1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.SeparableConv1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.SeparableConv1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.SeparableConv1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.SeparableConv1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.SeparableConv1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.SeparableConv1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.SeparableConv1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.SeparableConv1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.SeparableConv1D.submodules": "tf.Module.submodules", + "tf.keras.layers.SeparableConv1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.SeparableConv1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.SeparableConv1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.SeparableConv2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.SeparableConv2D.__eq__": 
"tf.keras.Model.__eq__", + "tf.keras.layers.SeparableConv2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.SeparableConv2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.SeparableConv2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.SeparableConv2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.SeparableConv2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.SeparableConv2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.SeparableConv2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.SeparableConv2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.SeparableConv2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.SeparableConv2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.SeparableConv2D.build": "tf.keras.layers.SeparableConv1D.build", + "tf.keras.layers.SeparableConv2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.SeparableConv2D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.keras.layers.SeparableConv2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.SeparableConv2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.SeparableConv2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.SeparableConv2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.SeparableConv2D.get_config": "tf.keras.layers.SeparableConv1D.get_config", + "tf.keras.layers.SeparableConv2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.SeparableConv2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.SeparableConv2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.SeparableConv2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.SeparableConv2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.SeparableConv2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.SeparableConv2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.SeparableConv2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.SeparableConv2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.SeparableConv2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.SeparableConv2D.submodules": "tf.Module.submodules", + "tf.keras.layers.SeparableConv2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.SeparableConv2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.SeparableConv2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.SeparableConvolution1D": "tf.keras.layers.SeparableConv1D", + "tf.keras.layers.SeparableConvolution1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.SeparableConvolution1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.SeparableConvolution1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.SeparableConvolution1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.SeparableConvolution1D.__init__": "tf.keras.layers.SeparableConv1D.__init__", + "tf.keras.layers.SeparableConvolution1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.SeparableConvolution1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.SeparableConvolution1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.SeparableConvolution1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.SeparableConvolution1D.activity_regularizer": 
"tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.SeparableConvolution1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.SeparableConvolution1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.SeparableConvolution1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.SeparableConvolution1D.build": "tf.keras.layers.SeparableConv1D.build", + "tf.keras.layers.SeparableConvolution1D.call": "tf.keras.layers.SeparableConv1D.call", + "tf.keras.layers.SeparableConvolution1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.SeparableConvolution1D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.keras.layers.SeparableConvolution1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.SeparableConvolution1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.SeparableConvolution1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.SeparableConvolution1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.SeparableConvolution1D.get_config": "tf.keras.layers.SeparableConv1D.get_config", + "tf.keras.layers.SeparableConvolution1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.SeparableConvolution1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.SeparableConvolution1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.SeparableConvolution1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.SeparableConvolution1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.SeparableConvolution1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.SeparableConvolution1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.SeparableConvolution1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.SeparableConvolution1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.SeparableConvolution1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.SeparableConvolution1D.submodules": "tf.Module.submodules", + "tf.keras.layers.SeparableConvolution1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.SeparableConvolution1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.SeparableConvolution1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.SeparableConvolution2D": "tf.keras.layers.SeparableConv2D", + "tf.keras.layers.SeparableConvolution2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.SeparableConvolution2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.SeparableConvolution2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.SeparableConvolution2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.SeparableConvolution2D.__init__": "tf.keras.layers.SeparableConv2D.__init__", + "tf.keras.layers.SeparableConvolution2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.SeparableConvolution2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.SeparableConvolution2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.SeparableConvolution2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.SeparableConvolution2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.SeparableConvolution2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.SeparableConvolution2D.add_metric": "tf.keras.layers.Layer.add_metric", + 
"tf.keras.layers.SeparableConvolution2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.SeparableConvolution2D.build": "tf.keras.layers.SeparableConv1D.build", + "tf.keras.layers.SeparableConvolution2D.call": "tf.keras.layers.SeparableConv2D.call", + "tf.keras.layers.SeparableConvolution2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.SeparableConvolution2D.compute_output_shape": "tf.keras.layers.Conv1D.compute_output_shape", + "tf.keras.layers.SeparableConvolution2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.SeparableConvolution2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.SeparableConvolution2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.SeparableConvolution2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.SeparableConvolution2D.get_config": "tf.keras.layers.SeparableConv1D.get_config", + "tf.keras.layers.SeparableConvolution2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.SeparableConvolution2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.SeparableConvolution2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.SeparableConvolution2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.SeparableConvolution2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.SeparableConvolution2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.SeparableConvolution2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.SeparableConvolution2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.SeparableConvolution2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.SeparableConvolution2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.SeparableConvolution2D.submodules": "tf.Module.submodules", + "tf.keras.layers.SeparableConvolution2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.SeparableConvolution2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.SeparableConvolution2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.SimpleRNN.__call__": "tf.keras.layers.RNN.__call__", + "tf.keras.layers.SimpleRNN.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.SimpleRNN.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.SimpleRNN.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.SimpleRNN.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.SimpleRNN.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.SimpleRNN.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.SimpleRNN.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.SimpleRNN.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.SimpleRNN.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.SimpleRNN.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.SimpleRNN.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.SimpleRNN.build": "tf.keras.layers.RNN.build", + "tf.keras.layers.SimpleRNN.compute_mask": "tf.keras.layers.RNN.compute_mask", + "tf.keras.layers.SimpleRNN.compute_output_shape": "tf.keras.layers.RNN.compute_output_shape", + "tf.keras.layers.SimpleRNN.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.SimpleRNN.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.SimpleRNN.dtype": "tf.keras.layers.Layer.dtype", + 
"tf.keras.layers.SimpleRNN.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.SimpleRNN.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.SimpleRNN.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.SimpleRNN.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.SimpleRNN.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.SimpleRNN.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.SimpleRNN.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.SimpleRNN.name_scope": "tf.Module.name_scope", + "tf.keras.layers.SimpleRNN.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.SimpleRNN.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.SimpleRNN.reset_states": "tf.keras.layers.RNN.reset_states", + "tf.keras.layers.SimpleRNN.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.SimpleRNN.states": "tf.keras.layers.RNN.states", + "tf.keras.layers.SimpleRNN.submodules": "tf.Module.submodules", + "tf.keras.layers.SimpleRNN.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.SimpleRNN.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.SimpleRNN.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.SimpleRNNCell.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.SimpleRNNCell.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.SimpleRNNCell.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.SimpleRNNCell.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.SimpleRNNCell.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.SimpleRNNCell.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.SimpleRNNCell.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.SimpleRNNCell.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.SimpleRNNCell.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.SimpleRNNCell.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.SimpleRNNCell.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.SimpleRNNCell.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.SimpleRNNCell.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.SimpleRNNCell.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.layers.SimpleRNNCell.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.SimpleRNNCell.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.SimpleRNNCell.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.SimpleRNNCell.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.SimpleRNNCell.get_dropout_mask_for_cell": "tf.keras.layers.GRU.get_dropout_mask_for_cell", + "tf.keras.layers.SimpleRNNCell.get_recurrent_dropout_mask_for_cell": "tf.keras.layers.GRU.get_recurrent_dropout_mask_for_cell", + "tf.keras.layers.SimpleRNNCell.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.SimpleRNNCell.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.SimpleRNNCell.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.SimpleRNNCell.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.SimpleRNNCell.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.SimpleRNNCell.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.SimpleRNNCell.name_scope": "tf.Module.name_scope", + "tf.keras.layers.SimpleRNNCell.non_trainable_weights": 
"tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.SimpleRNNCell.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.SimpleRNNCell.reset_dropout_mask": "tf.keras.layers.GRU.reset_dropout_mask", + "tf.keras.layers.SimpleRNNCell.reset_recurrent_dropout_mask": "tf.keras.layers.GRU.reset_recurrent_dropout_mask", + "tf.keras.layers.SimpleRNNCell.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.SimpleRNNCell.submodules": "tf.Module.submodules", + "tf.keras.layers.SimpleRNNCell.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.SimpleRNNCell.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.SimpleRNNCell.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Softmax.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Softmax.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Softmax.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Softmax.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Softmax.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Softmax.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Softmax.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Softmax.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Softmax.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Softmax.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Softmax.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Softmax.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Softmax.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.Softmax.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Softmax.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Softmax.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Softmax.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Softmax.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Softmax.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Softmax.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Softmax.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Softmax.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Softmax.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Softmax.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Softmax.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Softmax.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Softmax.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Softmax.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Softmax.submodules": "tf.Module.submodules", + "tf.keras.layers.Softmax.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Softmax.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Softmax.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.SpatialDropout1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.SpatialDropout1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.SpatialDropout1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.SpatialDropout1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.SpatialDropout1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.SpatialDropout1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.SpatialDropout1D.__ne__": "tf.keras.Model.__ne__", 
+ "tf.keras.layers.SpatialDropout1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.SpatialDropout1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.SpatialDropout1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.SpatialDropout1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.SpatialDropout1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.SpatialDropout1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.SpatialDropout1D.call": "tf.keras.layers.Dropout.call", + "tf.keras.layers.SpatialDropout1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.SpatialDropout1D.compute_output_shape": "tf.keras.layers.Dropout.compute_output_shape", + "tf.keras.layers.SpatialDropout1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.SpatialDropout1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.SpatialDropout1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.SpatialDropout1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.SpatialDropout1D.get_config": "tf.keras.layers.Dropout.get_config", + "tf.keras.layers.SpatialDropout1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.SpatialDropout1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.SpatialDropout1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.SpatialDropout1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.SpatialDropout1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.SpatialDropout1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.SpatialDropout1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.SpatialDropout1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.SpatialDropout1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.SpatialDropout1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.SpatialDropout1D.submodules": "tf.Module.submodules", + "tf.keras.layers.SpatialDropout1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.SpatialDropout1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.SpatialDropout1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.SpatialDropout2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.SpatialDropout2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.SpatialDropout2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.SpatialDropout2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.SpatialDropout2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.SpatialDropout2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.SpatialDropout2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.SpatialDropout2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.SpatialDropout2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.SpatialDropout2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.SpatialDropout2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.SpatialDropout2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.SpatialDropout2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.SpatialDropout2D.call": "tf.keras.layers.Dropout.call", + "tf.keras.layers.SpatialDropout2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + 
"tf.keras.layers.SpatialDropout2D.compute_output_shape": "tf.keras.layers.Dropout.compute_output_shape", + "tf.keras.layers.SpatialDropout2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.SpatialDropout2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.SpatialDropout2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.SpatialDropout2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.SpatialDropout2D.get_config": "tf.keras.layers.Dropout.get_config", + "tf.keras.layers.SpatialDropout2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.SpatialDropout2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.SpatialDropout2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.SpatialDropout2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.SpatialDropout2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.SpatialDropout2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.SpatialDropout2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.SpatialDropout2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.SpatialDropout2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.SpatialDropout2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.SpatialDropout2D.submodules": "tf.Module.submodules", + "tf.keras.layers.SpatialDropout2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.SpatialDropout2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.SpatialDropout2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.SpatialDropout3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.SpatialDropout3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.SpatialDropout3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.SpatialDropout3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.SpatialDropout3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.SpatialDropout3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.SpatialDropout3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.SpatialDropout3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.SpatialDropout3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.SpatialDropout3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.SpatialDropout3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.SpatialDropout3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.SpatialDropout3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.SpatialDropout3D.call": "tf.keras.layers.Dropout.call", + "tf.keras.layers.SpatialDropout3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.SpatialDropout3D.compute_output_shape": "tf.keras.layers.Dropout.compute_output_shape", + "tf.keras.layers.SpatialDropout3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.SpatialDropout3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.SpatialDropout3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.SpatialDropout3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.SpatialDropout3D.get_config": "tf.keras.layers.Dropout.get_config", + "tf.keras.layers.SpatialDropout3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.SpatialDropout3D.input": 
"tf.keras.layers.Layer.input", + "tf.keras.layers.SpatialDropout3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.SpatialDropout3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.SpatialDropout3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.SpatialDropout3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.SpatialDropout3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.SpatialDropout3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.SpatialDropout3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.SpatialDropout3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.SpatialDropout3D.submodules": "tf.Module.submodules", + "tf.keras.layers.SpatialDropout3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.SpatialDropout3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.SpatialDropout3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.StackedRNNCells.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.StackedRNNCells.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.StackedRNNCells.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.StackedRNNCells.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.StackedRNNCells.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.StackedRNNCells.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.StackedRNNCells.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.StackedRNNCells.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.StackedRNNCells.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.StackedRNNCells.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.StackedRNNCells.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.StackedRNNCells.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.StackedRNNCells.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.StackedRNNCells.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.layers.StackedRNNCells.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.StackedRNNCells.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.StackedRNNCells.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.StackedRNNCells.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.StackedRNNCells.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.StackedRNNCells.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.StackedRNNCells.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.StackedRNNCells.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.StackedRNNCells.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.StackedRNNCells.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.StackedRNNCells.name_scope": "tf.Module.name_scope", + "tf.keras.layers.StackedRNNCells.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.StackedRNNCells.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.StackedRNNCells.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.StackedRNNCells.submodules": "tf.Module.submodules", + "tf.keras.layers.StackedRNNCells.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.StackedRNNCells.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + 
"tf.keras.layers.StackedRNNCells.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Subtract.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Subtract.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Subtract.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Subtract.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Subtract.__init__": "tf.keras.layers.Add.__init__", + "tf.keras.layers.Subtract.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Subtract.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Subtract.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Subtract.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Subtract.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.Subtract.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Subtract.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Subtract.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Subtract.call": "tf.keras.layers.Add.call", + "tf.keras.layers.Subtract.compute_mask": "tf.keras.layers.Add.compute_mask", + "tf.keras.layers.Subtract.compute_output_shape": "tf.keras.layers.Add.compute_output_shape", + "tf.keras.layers.Subtract.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Subtract.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Subtract.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Subtract.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Subtract.get_config": "tf.keras.layers.Layer.get_config", + "tf.keras.layers.Subtract.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Subtract.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Subtract.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Subtract.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Subtract.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Subtract.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Subtract.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Subtract.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Subtract.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Subtract.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Subtract.submodules": "tf.Module.submodules", + "tf.keras.layers.Subtract.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Subtract.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Subtract.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.ThresholdedReLU.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.ThresholdedReLU.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.ThresholdedReLU.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.ThresholdedReLU.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.ThresholdedReLU.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.ThresholdedReLU.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.ThresholdedReLU.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.ThresholdedReLU.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.ThresholdedReLU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.ThresholdedReLU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.ThresholdedReLU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.ThresholdedReLU.add_weight": 
"tf.keras.layers.Layer.add_weight", + "tf.keras.layers.ThresholdedReLU.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.ThresholdedReLU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.ThresholdedReLU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.ThresholdedReLU.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.ThresholdedReLU.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.ThresholdedReLU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.ThresholdedReLU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.ThresholdedReLU.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.ThresholdedReLU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.ThresholdedReLU.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.ThresholdedReLU.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.ThresholdedReLU.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.ThresholdedReLU.name_scope": "tf.Module.name_scope", + "tf.keras.layers.ThresholdedReLU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.ThresholdedReLU.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.ThresholdedReLU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.ThresholdedReLU.submodules": "tf.Module.submodules", + "tf.keras.layers.ThresholdedReLU.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.ThresholdedReLU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.ThresholdedReLU.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.TimeDistributed.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.TimeDistributed.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.TimeDistributed.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.TimeDistributed.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.TimeDistributed.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.TimeDistributed.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.TimeDistributed.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.TimeDistributed.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.TimeDistributed.activity_regularizer": "tf.keras.layers.Wrapper.activity_regularizer", + "tf.keras.layers.TimeDistributed.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.TimeDistributed.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.TimeDistributed.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.TimeDistributed.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.TimeDistributed.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.TimeDistributed.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.TimeDistributed.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.TimeDistributed.get_config": "tf.keras.layers.Wrapper.get_config", + "tf.keras.layers.TimeDistributed.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.TimeDistributed.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.TimeDistributed.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.TimeDistributed.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.TimeDistributed.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.TimeDistributed.name": "tf.keras.layers.Layer.name", + 
"tf.keras.layers.TimeDistributed.name_scope": "tf.Module.name_scope", + "tf.keras.layers.TimeDistributed.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.TimeDistributed.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.TimeDistributed.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.TimeDistributed.submodules": "tf.Module.submodules", + "tf.keras.layers.TimeDistributed.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.TimeDistributed.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.TimeDistributed.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.UpSampling1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.UpSampling1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.UpSampling1D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.UpSampling1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.UpSampling1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.UpSampling1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.UpSampling1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.UpSampling1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.UpSampling1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.UpSampling1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.UpSampling1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.UpSampling1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.UpSampling1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.UpSampling1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.UpSampling1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.UpSampling1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.UpSampling1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.UpSampling1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.UpSampling1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.UpSampling1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.UpSampling1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.UpSampling1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.UpSampling1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.UpSampling1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.UpSampling1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.UpSampling1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.UpSampling1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.UpSampling1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.UpSampling1D.submodules": "tf.Module.submodules", + "tf.keras.layers.UpSampling1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.UpSampling1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.UpSampling1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.UpSampling2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.UpSampling2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.UpSampling2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.UpSampling2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.UpSampling2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.UpSampling2D.__lt__": "tf.keras.Model.__lt__", + 
"tf.keras.layers.UpSampling2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.UpSampling2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.UpSampling2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.UpSampling2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.UpSampling2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.UpSampling2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.UpSampling2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.UpSampling2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.UpSampling2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.UpSampling2D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.UpSampling2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.UpSampling2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.UpSampling2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.UpSampling2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.UpSampling2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.UpSampling2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.UpSampling2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.UpSampling2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.UpSampling2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.UpSampling2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.UpSampling2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.UpSampling2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.UpSampling2D.submodules": "tf.Module.submodules", + "tf.keras.layers.UpSampling2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.UpSampling2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.UpSampling2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.UpSampling3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.UpSampling3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.UpSampling3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.UpSampling3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.UpSampling3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.UpSampling3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.UpSampling3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.UpSampling3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.UpSampling3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.UpSampling3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.UpSampling3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.UpSampling3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.UpSampling3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.UpSampling3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.UpSampling3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.UpSampling3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.UpSampling3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.UpSampling3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.UpSampling3D.get_weights": "tf.keras.layers.Layer.get_weights", + 
"tf.keras.layers.UpSampling3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.UpSampling3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.UpSampling3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.UpSampling3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.UpSampling3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.UpSampling3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.UpSampling3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.UpSampling3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.UpSampling3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.UpSampling3D.submodules": "tf.Module.submodules", + "tf.keras.layers.UpSampling3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.UpSampling3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.UpSampling3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.Wrapper.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.Wrapper.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.Wrapper.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.Wrapper.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.Wrapper.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.Wrapper.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.Wrapper.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.Wrapper.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.Wrapper.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.Wrapper.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.Wrapper.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.Wrapper.call": "tf.keras.layers.Layer.call", + "tf.keras.layers.Wrapper.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.Wrapper.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.layers.Wrapper.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.Wrapper.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.Wrapper.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.Wrapper.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.Wrapper.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.Wrapper.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.Wrapper.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.Wrapper.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.Wrapper.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.Wrapper.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.Wrapper.name_scope": "tf.Module.name_scope", + "tf.keras.layers.Wrapper.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.Wrapper.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.Wrapper.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.Wrapper.submodules": "tf.Module.submodules", + "tf.keras.layers.Wrapper.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.Wrapper.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.Wrapper.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.ZeroPadding1D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.ZeroPadding1D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.ZeroPadding1D.__ge__": "tf.keras.Model.__ge__", + 
"tf.keras.layers.ZeroPadding1D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.ZeroPadding1D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.ZeroPadding1D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.ZeroPadding1D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.ZeroPadding1D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.ZeroPadding1D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.ZeroPadding1D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.ZeroPadding1D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.ZeroPadding1D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.ZeroPadding1D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.ZeroPadding1D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.ZeroPadding1D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.ZeroPadding1D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.ZeroPadding1D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.ZeroPadding1D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.ZeroPadding1D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.ZeroPadding1D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.ZeroPadding1D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.ZeroPadding1D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.ZeroPadding1D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.ZeroPadding1D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.ZeroPadding1D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.ZeroPadding1D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.ZeroPadding1D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.ZeroPadding1D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.ZeroPadding1D.submodules": "tf.Module.submodules", + "tf.keras.layers.ZeroPadding1D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.ZeroPadding1D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.ZeroPadding1D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.ZeroPadding2D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.ZeroPadding2D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.ZeroPadding2D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.ZeroPadding2D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.ZeroPadding2D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.ZeroPadding2D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.ZeroPadding2D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.ZeroPadding2D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.ZeroPadding2D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.ZeroPadding2D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.ZeroPadding2D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.ZeroPadding2D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.ZeroPadding2D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.ZeroPadding2D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.ZeroPadding2D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.ZeroPadding2D.count_params": "tf.keras.layers.Layer.count_params", + 
"tf.keras.layers.ZeroPadding2D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.ZeroPadding2D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.ZeroPadding2D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.ZeroPadding2D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.ZeroPadding2D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.ZeroPadding2D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.ZeroPadding2D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.ZeroPadding2D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.ZeroPadding2D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.ZeroPadding2D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.ZeroPadding2D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.ZeroPadding2D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.ZeroPadding2D.submodules": "tf.Module.submodules", + "tf.keras.layers.ZeroPadding2D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.ZeroPadding2D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.ZeroPadding2D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.ZeroPadding3D.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.ZeroPadding3D.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.ZeroPadding3D.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.ZeroPadding3D.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.ZeroPadding3D.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.ZeroPadding3D.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.ZeroPadding3D.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.ZeroPadding3D.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.ZeroPadding3D.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.ZeroPadding3D.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.ZeroPadding3D.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.ZeroPadding3D.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.ZeroPadding3D.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.ZeroPadding3D.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.ZeroPadding3D.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.ZeroPadding3D.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.ZeroPadding3D.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.ZeroPadding3D.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.ZeroPadding3D.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.ZeroPadding3D.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.ZeroPadding3D.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.ZeroPadding3D.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.ZeroPadding3D.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.ZeroPadding3D.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.ZeroPadding3D.name_scope": "tf.Module.name_scope", + "tf.keras.layers.ZeroPadding3D.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.ZeroPadding3D.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.ZeroPadding3D.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.ZeroPadding3D.submodules": "tf.Module.submodules", + 
"tf.keras.layers.ZeroPadding3D.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.ZeroPadding3D.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.ZeroPadding3D.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.SyncBatchNormalization.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.SyncBatchNormalization.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.SyncBatchNormalization.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.SyncBatchNormalization.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.SyncBatchNormalization.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.SyncBatchNormalization.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.SyncBatchNormalization.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.SyncBatchNormalization.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.SyncBatchNormalization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.SyncBatchNormalization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.SyncBatchNormalization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.SyncBatchNormalization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.SyncBatchNormalization.build": "tf.keras.layers.BatchNormalization.build", + "tf.keras.layers.experimental.SyncBatchNormalization.call": "tf.keras.layers.BatchNormalization.call", + "tf.keras.layers.experimental.SyncBatchNormalization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.SyncBatchNormalization.compute_output_shape": "tf.keras.layers.BatchNormalization.compute_output_shape", + "tf.keras.layers.experimental.SyncBatchNormalization.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.SyncBatchNormalization.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.SyncBatchNormalization.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.SyncBatchNormalization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.SyncBatchNormalization.get_config": "tf.keras.layers.BatchNormalization.get_config", + "tf.keras.layers.experimental.SyncBatchNormalization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.SyncBatchNormalization.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.SyncBatchNormalization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.SyncBatchNormalization.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.SyncBatchNormalization.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.SyncBatchNormalization.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.SyncBatchNormalization.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.SyncBatchNormalization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.SyncBatchNormalization.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.SyncBatchNormalization.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.SyncBatchNormalization.submodules": "tf.Module.submodules", + 
"tf.keras.layers.experimental.SyncBatchNormalization.trainable": "tf.keras.layers.BatchNormalization.trainable", + "tf.keras.layers.experimental.SyncBatchNormalization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.SyncBatchNormalization.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.CenterCrop.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.CenterCrop.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.CenterCrop.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.CenterCrop.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.CenterCrop.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.CenterCrop.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.CenterCrop.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.CenterCrop.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.CenterCrop.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.CenterCrop.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.CenterCrop.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.CenterCrop.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.CenterCrop.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.CenterCrop.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.CenterCrop.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.CenterCrop.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.CenterCrop.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.CenterCrop.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.CenterCrop.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.CenterCrop.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.CenterCrop.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.CenterCrop.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.CenterCrop.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.CenterCrop.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.CenterCrop.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.CenterCrop.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.CenterCrop.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.CenterCrop.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.CenterCrop.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.CenterCrop.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.CenterCrop.weights": "tf.keras.layers.Layer.weights", + 
"tf.keras.layers.experimental.preprocessing.Normalization.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.Normalization.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.Normalization.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.Normalization.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.Normalization.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.Normalization.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.Normalization.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.Normalization.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.Normalization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.Normalization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.Normalization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.Normalization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.Normalization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.Normalization.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.Normalization.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.Normalization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.Normalization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.Normalization.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.Normalization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.Normalization.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.Normalization.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.Normalization.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.Normalization.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.Normalization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.Normalization.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.Normalization.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.Normalization.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.Normalization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.Normalization.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__init__": "tf.keras.layers.Layer.__init__", + 
"tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.call": "tf.keras.layers.Layer.call", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.get_config": "tf.keras.layers.Layer.get_config", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.RandomContrast.__call__": 
"tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.RandomContrast.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.RandomContrast.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.RandomContrast.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.RandomContrast.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.RandomContrast.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.RandomContrast.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.RandomContrast.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.RandomContrast.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.RandomContrast.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.RandomContrast.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.RandomContrast.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.RandomContrast.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.experimental.preprocessing.RandomContrast.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.RandomContrast.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.RandomContrast.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.RandomContrast.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.RandomContrast.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.RandomContrast.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.RandomContrast.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.RandomContrast.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.RandomContrast.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.RandomContrast.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.RandomContrast.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.RandomContrast.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.RandomContrast.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomContrast.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.RandomContrast.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.RandomContrast.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.RandomContrast.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.RandomContrast.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomContrast.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.RandomCrop.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.RandomCrop.__eq__": "tf.keras.Model.__eq__", + 
"tf.keras.layers.experimental.preprocessing.RandomCrop.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.RandomCrop.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.RandomCrop.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.RandomCrop.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.RandomCrop.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.RandomCrop.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.RandomCrop.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.RandomCrop.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.RandomCrop.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.RandomCrop.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.RandomCrop.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.experimental.preprocessing.RandomCrop.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.RandomCrop.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.RandomCrop.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.RandomCrop.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.RandomCrop.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.RandomCrop.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.RandomCrop.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.RandomCrop.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.RandomCrop.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.RandomCrop.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.RandomCrop.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.RandomCrop.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.RandomCrop.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomCrop.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.RandomCrop.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.RandomCrop.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.RandomCrop.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.RandomCrop.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomCrop.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.RandomFlip.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.RandomFlip.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.RandomFlip.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.RandomFlip.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.RandomFlip.__le__": "tf.keras.Model.__le__", + 
"tf.keras.layers.experimental.preprocessing.RandomFlip.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.RandomFlip.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.RandomFlip.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.RandomFlip.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.RandomFlip.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.RandomFlip.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.RandomFlip.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.RandomFlip.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.experimental.preprocessing.RandomFlip.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.RandomFlip.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.RandomFlip.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.RandomFlip.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.RandomFlip.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.RandomFlip.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.RandomFlip.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.RandomFlip.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.RandomFlip.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.RandomFlip.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.RandomFlip.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.RandomFlip.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.RandomFlip.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomFlip.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.RandomFlip.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.RandomFlip.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.RandomFlip.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.RandomFlip.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomFlip.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.RandomHeight.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.RandomHeight.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.RandomHeight.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.RandomHeight.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.RandomHeight.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.RandomHeight.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.RandomHeight.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.RandomHeight.__new__": 
"tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.RandomHeight.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.RandomHeight.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.RandomHeight.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.RandomHeight.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.RandomHeight.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.experimental.preprocessing.RandomHeight.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.RandomHeight.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.RandomHeight.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.RandomHeight.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.RandomHeight.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.RandomHeight.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.RandomHeight.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.RandomHeight.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.RandomHeight.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.RandomHeight.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.RandomHeight.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.RandomHeight.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.RandomHeight.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomHeight.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.RandomHeight.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.RandomHeight.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.RandomHeight.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.RandomHeight.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomHeight.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.RandomRotation.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.RandomRotation.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.RandomRotation.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.RandomRotation.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.RandomRotation.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.RandomRotation.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.RandomRotation.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.RandomRotation.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.RandomRotation.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + 
"tf.keras.layers.experimental.preprocessing.RandomRotation.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.RandomRotation.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.RandomRotation.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.RandomRotation.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.experimental.preprocessing.RandomRotation.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.RandomRotation.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.RandomRotation.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.RandomRotation.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.RandomRotation.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.RandomRotation.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.RandomRotation.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.RandomRotation.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.RandomRotation.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.RandomRotation.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.RandomRotation.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.RandomRotation.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.RandomRotation.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomRotation.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.RandomRotation.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.RandomRotation.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.RandomRotation.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.RandomRotation.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomRotation.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.add_loss": "tf.keras.layers.Layer.add_loss", + 
"tf.keras.layers.experimental.preprocessing.RandomTranslation.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomTranslation.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.RandomWidth.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.RandomWidth.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.RandomWidth.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.RandomWidth.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.RandomWidth.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.RandomWidth.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.RandomWidth.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.RandomWidth.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.RandomWidth.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.RandomWidth.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.RandomWidth.add_metric": "tf.keras.layers.Layer.add_metric", + 
"tf.keras.layers.experimental.preprocessing.RandomWidth.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.RandomWidth.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.experimental.preprocessing.RandomWidth.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.RandomWidth.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.RandomWidth.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.RandomWidth.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.RandomWidth.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.RandomWidth.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.RandomWidth.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.RandomWidth.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.RandomWidth.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.RandomWidth.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.RandomWidth.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.RandomWidth.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.RandomWidth.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomWidth.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.RandomWidth.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.RandomWidth.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.RandomWidth.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.RandomWidth.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.RandomWidth.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.Rescaling.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.Rescaling.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.Rescaling.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.Rescaling.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.Rescaling.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.Rescaling.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.Rescaling.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.Rescaling.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.Rescaling.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.Rescaling.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.Rescaling.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.Rescaling.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.Rescaling.build": "tf.keras.layers.Layer.build", + "tf.keras.layers.experimental.preprocessing.Rescaling.compute_mask": 
"tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.Rescaling.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.Rescaling.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.Rescaling.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.Rescaling.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.Rescaling.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.Rescaling.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.Rescaling.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.Rescaling.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.Rescaling.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.Rescaling.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.Rescaling.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.Rescaling.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.Rescaling.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.Rescaling.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.Rescaling.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.Rescaling.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.Rescaling.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.Rescaling.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.Resizing.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.Resizing.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.Resizing.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.Resizing.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.Resizing.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.Resizing.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.Resizing.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.Resizing.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.Resizing.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.Resizing.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.Resizing.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.Resizing.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.Resizing.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.Resizing.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.layers.experimental.preprocessing.Resizing.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.layers.experimental.preprocessing.Resizing.dtype": "tf.keras.layers.Layer.dtype", + 
"tf.keras.layers.experimental.preprocessing.Resizing.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.Resizing.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.Resizing.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.Resizing.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.Resizing.losses": "tf.keras.layers.Layer.losses", + "tf.keras.layers.experimental.preprocessing.Resizing.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.Resizing.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.Resizing.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.Resizing.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.Resizing.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.Resizing.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.Resizing.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.Resizing.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.Resizing.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.Resizing.weights": "tf.keras.layers.Layer.weights", + "tf.keras.layers.experimental.preprocessing.TextVectorization.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.layers.experimental.preprocessing.TextVectorization.__eq__": "tf.keras.Model.__eq__", + "tf.keras.layers.experimental.preprocessing.TextVectorization.__ge__": "tf.keras.Model.__ge__", + "tf.keras.layers.experimental.preprocessing.TextVectorization.__gt__": "tf.keras.Model.__gt__", + "tf.keras.layers.experimental.preprocessing.TextVectorization.__le__": "tf.keras.Model.__le__", + "tf.keras.layers.experimental.preprocessing.TextVectorization.__lt__": "tf.keras.Model.__lt__", + "tf.keras.layers.experimental.preprocessing.TextVectorization.__ne__": "tf.keras.Model.__ne__", + "tf.keras.layers.experimental.preprocessing.TextVectorization.__new__": "tf.keras.Model.__new__", + "tf.keras.layers.experimental.preprocessing.TextVectorization.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.layers.experimental.preprocessing.TextVectorization.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.layers.experimental.preprocessing.TextVectorization.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.layers.experimental.preprocessing.TextVectorization.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.layers.experimental.preprocessing.TextVectorization.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.layers.experimental.preprocessing.TextVectorization.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.layers.experimental.preprocessing.TextVectorization.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.layers.experimental.preprocessing.TextVectorization.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.layers.experimental.preprocessing.TextVectorization.input": "tf.keras.layers.Layer.input", + "tf.keras.layers.experimental.preprocessing.TextVectorization.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.layers.experimental.preprocessing.TextVectorization.losses": "tf.keras.layers.Layer.losses", + 
"tf.keras.layers.experimental.preprocessing.TextVectorization.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.layers.experimental.preprocessing.TextVectorization.name": "tf.keras.layers.Layer.name", + "tf.keras.layers.experimental.preprocessing.TextVectorization.name_scope": "tf.Module.name_scope", + "tf.keras.layers.experimental.preprocessing.TextVectorization.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.layers.experimental.preprocessing.TextVectorization.output": "tf.keras.layers.Layer.output", + "tf.keras.layers.experimental.preprocessing.TextVectorization.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.layers.experimental.preprocessing.TextVectorization.submodules": "tf.Module.submodules", + "tf.keras.layers.experimental.preprocessing.TextVectorization.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.layers.experimental.preprocessing.TextVectorization.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.layers.experimental.preprocessing.TextVectorization.weights": "tf.keras.layers.Layer.weights", + "tf.keras.losses.BinaryCrossentropy.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.BinaryCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.BinaryCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.BinaryCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.BinaryCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.BinaryCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.BinaryCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.BinaryCrossentropy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.CategoricalCrossentropy.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.CategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.CategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.CategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.CategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.CategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.CategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.CategoricalCrossentropy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.CategoricalCrossentropy.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.CategoricalCrossentropy.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.CategoricalHinge.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.CategoricalHinge.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.CategoricalHinge.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.CategoricalHinge.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.CategoricalHinge.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.CategoricalHinge.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.CategoricalHinge.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.CategoricalHinge.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.CategoricalHinge.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.CategoricalHinge.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.CosineSimilarity.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.CosineSimilarity.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.CosineSimilarity.__ge__": "tf.keras.Model.__ge__", + 
"tf.keras.losses.CosineSimilarity.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.CosineSimilarity.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.CosineSimilarity.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.CosineSimilarity.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.CosineSimilarity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.CosineSimilarity.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.CosineSimilarity.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.Hinge.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.Hinge.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.Hinge.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.Hinge.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.Hinge.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.Hinge.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.Hinge.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.Hinge.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.Hinge.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.Hinge.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.Huber.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.Huber.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.Huber.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.Huber.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.Huber.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.Huber.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.Huber.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.Huber.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.Huber.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.Huber.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.KLDivergence.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.KLDivergence.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.KLDivergence.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.KLDivergence.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.KLDivergence.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.KLDivergence.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.KLDivergence.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.KLDivergence.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.KLDivergence.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.KLDivergence.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.LogCosh.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.LogCosh.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.LogCosh.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.LogCosh.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.LogCosh.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.LogCosh.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.LogCosh.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.LogCosh.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.LogCosh.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.LogCosh.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.Loss.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.Loss.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.Loss.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.Loss.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.Loss.__lt__": 
"tf.keras.Model.__lt__", + "tf.keras.losses.Loss.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.Loss.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.MeanAbsoluteError.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.MeanAbsoluteError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.MeanAbsoluteError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.MeanAbsoluteError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.MeanAbsoluteError.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.MeanAbsoluteError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.MeanAbsoluteError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.MeanAbsoluteError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.MeanAbsoluteError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.MeanAbsoluteError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.MeanAbsolutePercentageError.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.MeanAbsolutePercentageError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.MeanAbsolutePercentageError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.MeanAbsolutePercentageError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.MeanAbsolutePercentageError.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.MeanAbsolutePercentageError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.MeanAbsolutePercentageError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.MeanAbsolutePercentageError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.MeanAbsolutePercentageError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.MeanAbsolutePercentageError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.MeanSquaredError.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.MeanSquaredError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.MeanSquaredError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.MeanSquaredError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.MeanSquaredError.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.MeanSquaredError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.MeanSquaredError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.MeanSquaredError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.MeanSquaredError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.MeanSquaredError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.MeanSquaredLogarithmicError.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.MeanSquaredLogarithmicError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.MeanSquaredLogarithmicError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.MeanSquaredLogarithmicError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.MeanSquaredLogarithmicError.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.MeanSquaredLogarithmicError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.MeanSquaredLogarithmicError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.MeanSquaredLogarithmicError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.MeanSquaredLogarithmicError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.MeanSquaredLogarithmicError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.Poisson.__call__": "tf.keras.losses.Loss.__call__", + 
"tf.keras.losses.Poisson.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.Poisson.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.Poisson.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.Poisson.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.Poisson.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.Poisson.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.Poisson.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.Poisson.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.Poisson.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.Reduction.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.Reduction.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.Reduction.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.Reduction.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.losses.Reduction.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.Reduction.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.Reduction.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.Reduction.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.SparseCategoricalCrossentropy.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.SparseCategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.SparseCategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.SparseCategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.SparseCategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.SparseCategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.SparseCategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.SparseCategoricalCrossentropy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.SparseCategoricalCrossentropy.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.SparseCategoricalCrossentropy.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.SquaredHinge.__call__": "tf.keras.losses.Loss.__call__", + "tf.keras.losses.SquaredHinge.__eq__": "tf.keras.Model.__eq__", + "tf.keras.losses.SquaredHinge.__ge__": "tf.keras.Model.__ge__", + "tf.keras.losses.SquaredHinge.__gt__": "tf.keras.Model.__gt__", + "tf.keras.losses.SquaredHinge.__le__": "tf.keras.Model.__le__", + "tf.keras.losses.SquaredHinge.__lt__": "tf.keras.Model.__lt__", + "tf.keras.losses.SquaredHinge.__ne__": "tf.keras.Model.__ne__", + "tf.keras.losses.SquaredHinge.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.losses.SquaredHinge.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.keras.losses.SquaredHinge.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.keras.losses.kld": "tf.keras.losses.KLD", + "tf.keras.losses.kullback_leibler_divergence": "tf.keras.losses.KLD", + "tf.keras.losses.mae": "tf.keras.losses.MAE", + "tf.keras.losses.mape": "tf.keras.losses.MAPE", + "tf.keras.losses.mean_absolute_error": "tf.keras.losses.MAE", + "tf.keras.losses.mean_absolute_percentage_error": "tf.keras.losses.MAPE", + "tf.keras.losses.mean_squared_error": "tf.keras.losses.MSE", + "tf.keras.losses.mean_squared_logarithmic_error": "tf.keras.losses.MSLE", + "tf.keras.losses.mse": "tf.keras.losses.MSE", + "tf.keras.losses.msle": "tf.keras.losses.MSLE", + "tf.keras.metrics.AUC.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.AUC.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.AUC.__ge__": 
"tf.keras.Model.__ge__", + "tf.keras.metrics.AUC.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.AUC.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.AUC.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.AUC.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.AUC.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.AUC.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.AUC.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.AUC.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.AUC.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.AUC.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.AUC.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.AUC.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.AUC.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.AUC.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.AUC.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.AUC.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.AUC.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.AUC.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.AUC.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.AUC.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.AUC.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.AUC.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.AUC.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.AUC.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.AUC.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.AUC.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.AUC.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.AUC.submodules": "tf.Module.submodules", + "tf.keras.metrics.AUC.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.AUC.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.AUC.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.Accuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.Accuracy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.Accuracy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.Accuracy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.Accuracy.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.Accuracy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.Accuracy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.Accuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.Accuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.Accuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.Accuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.Accuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.Accuracy.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.Accuracy.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.Accuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.Accuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.Accuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.Accuracy.count_params": 
"tf.keras.layers.Layer.count_params", + "tf.keras.metrics.Accuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.Accuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.Accuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.Accuracy.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.Accuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.Accuracy.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.Accuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.Accuracy.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.Accuracy.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.Accuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.Accuracy.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.Accuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.Accuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.Accuracy.submodules": "tf.Module.submodules", + "tf.keras.metrics.Accuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.Accuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.Accuracy.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.BinaryAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.BinaryAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.BinaryAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.BinaryAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.BinaryAccuracy.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.BinaryAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.BinaryAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.BinaryAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.BinaryAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.BinaryAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.BinaryAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.BinaryAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.BinaryAccuracy.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.BinaryAccuracy.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.BinaryAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.BinaryAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.BinaryAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.BinaryAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.BinaryAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.BinaryAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.BinaryAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.BinaryAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.BinaryAccuracy.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.BinaryAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.BinaryAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.BinaryAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.BinaryAccuracy.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.BinaryAccuracy.name_scope": 
"tf.Module.name_scope", + "tf.keras.metrics.BinaryAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.BinaryAccuracy.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.BinaryAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.BinaryAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.BinaryAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.BinaryAccuracy.submodules": "tf.Module.submodules", + "tf.keras.metrics.BinaryAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.BinaryAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.BinaryAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.BinaryAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.BinaryCrossentropy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.BinaryCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.BinaryCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.BinaryCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.BinaryCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.BinaryCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.BinaryCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.BinaryCrossentropy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.BinaryCrossentropy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.BinaryCrossentropy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.BinaryCrossentropy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.BinaryCrossentropy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.BinaryCrossentropy.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.BinaryCrossentropy.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.BinaryCrossentropy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.BinaryCrossentropy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.BinaryCrossentropy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.BinaryCrossentropy.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.BinaryCrossentropy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.BinaryCrossentropy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.BinaryCrossentropy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.BinaryCrossentropy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.BinaryCrossentropy.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.BinaryCrossentropy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.BinaryCrossentropy.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.BinaryCrossentropy.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.BinaryCrossentropy.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.BinaryCrossentropy.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.BinaryCrossentropy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.BinaryCrossentropy.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.BinaryCrossentropy.reset_states": "tf.keras.metrics.Metric.reset_states", + 
"tf.keras.metrics.BinaryCrossentropy.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.BinaryCrossentropy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.BinaryCrossentropy.submodules": "tf.Module.submodules", + "tf.keras.metrics.BinaryCrossentropy.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.BinaryCrossentropy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.BinaryCrossentropy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.BinaryCrossentropy.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.CategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.CategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.CategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.CategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.CategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.CategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.CategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.CategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.CategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.CategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.CategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.CategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.CategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.CategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.CategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.CategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.CategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.CategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.CategoricalAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.CategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.CategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.CategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.CategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.CategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.CategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.CategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.CategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.CategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.CategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.CategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.CategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.CategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.CategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.CategoricalAccuracy.submodules": "tf.Module.submodules", + 
"tf.keras.metrics.CategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.CategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.CategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.CategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.CategoricalCrossentropy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.CategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.CategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.CategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.CategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.CategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.CategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.CategoricalCrossentropy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.CategoricalCrossentropy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.CategoricalCrossentropy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.CategoricalCrossentropy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.CategoricalCrossentropy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.CategoricalCrossentropy.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.CategoricalCrossentropy.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.CategoricalCrossentropy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.CategoricalCrossentropy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.CategoricalCrossentropy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.CategoricalCrossentropy.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.CategoricalCrossentropy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.CategoricalCrossentropy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.CategoricalCrossentropy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.CategoricalCrossentropy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.CategoricalCrossentropy.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.CategoricalCrossentropy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.CategoricalCrossentropy.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.CategoricalCrossentropy.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.CategoricalCrossentropy.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.CategoricalCrossentropy.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.CategoricalCrossentropy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.CategoricalCrossentropy.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.CategoricalCrossentropy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.CategoricalCrossentropy.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.CategoricalCrossentropy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.CategoricalCrossentropy.submodules": "tf.Module.submodules", + "tf.keras.metrics.CategoricalCrossentropy.trainable": "tf.keras.layers.Layer.trainable", + 
"tf.keras.metrics.CategoricalCrossentropy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.CategoricalCrossentropy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.CategoricalCrossentropy.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.CategoricalHinge.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.CategoricalHinge.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.CategoricalHinge.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.CategoricalHinge.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.CategoricalHinge.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.CategoricalHinge.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.CategoricalHinge.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.CategoricalHinge.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.CategoricalHinge.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.CategoricalHinge.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.CategoricalHinge.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.CategoricalHinge.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.CategoricalHinge.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.CategoricalHinge.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.CategoricalHinge.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.CategoricalHinge.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.CategoricalHinge.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.CategoricalHinge.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.CategoricalHinge.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.CategoricalHinge.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.CategoricalHinge.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.CategoricalHinge.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.CategoricalHinge.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.CategoricalHinge.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.CategoricalHinge.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.CategoricalHinge.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.CategoricalHinge.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.CategoricalHinge.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.CategoricalHinge.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.CategoricalHinge.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.CategoricalHinge.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.CategoricalHinge.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.CategoricalHinge.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.CategoricalHinge.submodules": "tf.Module.submodules", + "tf.keras.metrics.CategoricalHinge.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.CategoricalHinge.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.CategoricalHinge.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.CategoricalHinge.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.CosineSimilarity.__call__": 
"tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.CosineSimilarity.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.CosineSimilarity.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.CosineSimilarity.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.CosineSimilarity.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.CosineSimilarity.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.CosineSimilarity.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.CosineSimilarity.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.CosineSimilarity.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.CosineSimilarity.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.CosineSimilarity.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.CosineSimilarity.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.CosineSimilarity.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.CosineSimilarity.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.CosineSimilarity.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.CosineSimilarity.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.CosineSimilarity.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.CosineSimilarity.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.CosineSimilarity.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.CosineSimilarity.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.CosineSimilarity.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.CosineSimilarity.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.CosineSimilarity.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.CosineSimilarity.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.CosineSimilarity.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.CosineSimilarity.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.CosineSimilarity.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.CosineSimilarity.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.CosineSimilarity.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.CosineSimilarity.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.CosineSimilarity.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.CosineSimilarity.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.CosineSimilarity.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.CosineSimilarity.submodules": "tf.Module.submodules", + "tf.keras.metrics.CosineSimilarity.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.CosineSimilarity.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.CosineSimilarity.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.CosineSimilarity.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.FalseNegatives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.FalseNegatives.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.FalseNegatives.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.FalseNegatives.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.FalseNegatives.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.FalseNegatives.__lt__": 
"tf.keras.Model.__lt__", + "tf.keras.metrics.FalseNegatives.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.FalseNegatives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.FalseNegatives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.FalseNegatives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.FalseNegatives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.FalseNegatives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.FalseNegatives.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.FalseNegatives.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.FalseNegatives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.FalseNegatives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.FalseNegatives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.FalseNegatives.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.FalseNegatives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.FalseNegatives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.FalseNegatives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.FalseNegatives.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.FalseNegatives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.FalseNegatives.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.FalseNegatives.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.FalseNegatives.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.FalseNegatives.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.FalseNegatives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.FalseNegatives.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.FalseNegatives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.FalseNegatives.submodules": "tf.Module.submodules", + "tf.keras.metrics.FalseNegatives.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.FalseNegatives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.FalseNegatives.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.FalsePositives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.FalsePositives.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.FalsePositives.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.FalsePositives.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.FalsePositives.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.FalsePositives.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.FalsePositives.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.FalsePositives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.FalsePositives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.FalsePositives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.FalsePositives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.FalsePositives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.FalsePositives.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.FalsePositives.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.FalsePositives.compute_mask": "tf.keras.layers.Layer.compute_mask", + 
"tf.keras.metrics.FalsePositives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.FalsePositives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.FalsePositives.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.FalsePositives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.FalsePositives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.FalsePositives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + "tf.keras.metrics.FalsePositives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.FalsePositives.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.FalsePositives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.FalsePositives.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.FalsePositives.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.FalsePositives.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.FalsePositives.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.FalsePositives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.FalsePositives.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.FalsePositives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + "tf.keras.metrics.FalsePositives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.keras.metrics.FalsePositives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.FalsePositives.submodules": "tf.Module.submodules", + "tf.keras.metrics.FalsePositives.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.FalsePositives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.FalsePositives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.keras.metrics.FalsePositives.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.Hinge.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.Hinge.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.Hinge.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.Hinge.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.Hinge.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.Hinge.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.Hinge.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.Hinge.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.Hinge.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.Hinge.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.Hinge.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.Hinge.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.Hinge.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.Hinge.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.Hinge.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.Hinge.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.Hinge.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.Hinge.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.Hinge.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.Hinge.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.Hinge.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.Hinge.get_weights": 
"tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.Hinge.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.Hinge.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.Hinge.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.Hinge.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.Hinge.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.Hinge.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.Hinge.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.Hinge.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.Hinge.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.Hinge.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.Hinge.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.Hinge.submodules": "tf.Module.submodules", + "tf.keras.metrics.Hinge.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.Hinge.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.Hinge.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.Hinge.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.KLD": "tf.keras.losses.KLD", + "tf.keras.metrics.KLDivergence.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.KLDivergence.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.KLDivergence.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.KLDivergence.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.KLDivergence.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.KLDivergence.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.KLDivergence.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.KLDivergence.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.KLDivergence.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.KLDivergence.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.KLDivergence.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.KLDivergence.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.KLDivergence.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.KLDivergence.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.KLDivergence.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.KLDivergence.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.KLDivergence.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.KLDivergence.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.KLDivergence.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.KLDivergence.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.KLDivergence.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.KLDivergence.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.KLDivergence.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.KLDivergence.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.KLDivergence.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.KLDivergence.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.KLDivergence.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.KLDivergence.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.KLDivergence.non_trainable_weights": 
"tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.KLDivergence.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.KLDivergence.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.KLDivergence.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.KLDivergence.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.KLDivergence.submodules": "tf.Module.submodules", + "tf.keras.metrics.KLDivergence.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.KLDivergence.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.KLDivergence.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.KLDivergence.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.LogCoshError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.LogCoshError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.LogCoshError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.LogCoshError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.LogCoshError.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.LogCoshError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.LogCoshError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.LogCoshError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.LogCoshError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.LogCoshError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.LogCoshError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.LogCoshError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.LogCoshError.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.LogCoshError.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.LogCoshError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.LogCoshError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.LogCoshError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.LogCoshError.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.LogCoshError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.LogCoshError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.LogCoshError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.LogCoshError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.LogCoshError.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.LogCoshError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.LogCoshError.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.LogCoshError.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.LogCoshError.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.LogCoshError.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.LogCoshError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.LogCoshError.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.LogCoshError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.LogCoshError.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.LogCoshError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.LogCoshError.submodules": "tf.Module.submodules", + "tf.keras.metrics.LogCoshError.trainable": 
"tf.keras.layers.Layer.trainable", + "tf.keras.metrics.LogCoshError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.LogCoshError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.LogCoshError.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.MAE": "tf.keras.losses.MAE", + "tf.keras.metrics.MAPE": "tf.keras.losses.MAPE", + "tf.keras.metrics.MSE": "tf.keras.losses.MSE", + "tf.keras.metrics.MSLE": "tf.keras.losses.MSLE", + "tf.keras.metrics.Mean.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.Mean.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.Mean.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.Mean.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.Mean.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.Mean.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.Mean.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.Mean.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.Mean.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.Mean.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.Mean.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.Mean.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.Mean.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.Mean.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.Mean.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.Mean.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.Mean.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.Mean.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.Mean.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.Mean.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.Mean.get_config": "tf.keras.metrics.Metric.get_config", + "tf.keras.metrics.Mean.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.Mean.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.Mean.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.Mean.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.Mean.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.Mean.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.Mean.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.Mean.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.Mean.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.Mean.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.Mean.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.Mean.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.Mean.submodules": "tf.Module.submodules", + "tf.keras.metrics.Mean.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.Mean.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.Mean.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.MeanAbsoluteError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.MeanAbsoluteError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.MeanAbsoluteError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.MeanAbsoluteError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.MeanAbsoluteError.__le__": "tf.keras.Model.__le__", + 
"tf.keras.metrics.MeanAbsoluteError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.MeanAbsoluteError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.MeanAbsoluteError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.MeanAbsoluteError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.MeanAbsoluteError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.MeanAbsoluteError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.MeanAbsoluteError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.MeanAbsoluteError.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.MeanAbsoluteError.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.MeanAbsoluteError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.MeanAbsoluteError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.MeanAbsoluteError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.MeanAbsoluteError.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.MeanAbsoluteError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.MeanAbsoluteError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.MeanAbsoluteError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.MeanAbsoluteError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.MeanAbsoluteError.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.MeanAbsoluteError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.MeanAbsoluteError.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.MeanAbsoluteError.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.MeanAbsoluteError.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.MeanAbsoluteError.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.MeanAbsoluteError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.MeanAbsoluteError.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.MeanAbsoluteError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.MeanAbsoluteError.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.MeanAbsoluteError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.MeanAbsoluteError.submodules": "tf.Module.submodules", + "tf.keras.metrics.MeanAbsoluteError.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.MeanAbsoluteError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.MeanAbsoluteError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.MeanAbsoluteError.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.MeanAbsolutePercentageError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.MeanAbsolutePercentageError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.MeanAbsolutePercentageError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.MeanAbsolutePercentageError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.MeanAbsolutePercentageError.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.MeanAbsolutePercentageError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.MeanAbsolutePercentageError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.MeanAbsolutePercentageError.__new__": "tf.keras.metrics.Metric.__new__", + 
"tf.keras.metrics.MeanAbsolutePercentageError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.MeanAbsolutePercentageError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.MeanAbsolutePercentageError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.MeanAbsolutePercentageError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.MeanAbsolutePercentageError.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.MeanAbsolutePercentageError.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.MeanAbsolutePercentageError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.MeanAbsolutePercentageError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.MeanAbsolutePercentageError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.MeanAbsolutePercentageError.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.MeanAbsolutePercentageError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.MeanAbsolutePercentageError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.MeanAbsolutePercentageError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.MeanAbsolutePercentageError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.MeanAbsolutePercentageError.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.MeanAbsolutePercentageError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.MeanAbsolutePercentageError.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.MeanAbsolutePercentageError.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.MeanAbsolutePercentageError.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.MeanAbsolutePercentageError.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.MeanAbsolutePercentageError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.MeanAbsolutePercentageError.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.MeanAbsolutePercentageError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.MeanAbsolutePercentageError.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.MeanAbsolutePercentageError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.MeanAbsolutePercentageError.submodules": "tf.Module.submodules", + "tf.keras.metrics.MeanAbsolutePercentageError.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.MeanAbsolutePercentageError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.MeanAbsolutePercentageError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.MeanAbsolutePercentageError.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.MeanIoU.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.MeanIoU.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.MeanIoU.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.MeanIoU.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.MeanIoU.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.MeanIoU.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.MeanIoU.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.MeanIoU.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.MeanIoU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + 
"tf.keras.metrics.MeanIoU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.MeanIoU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.MeanIoU.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.MeanIoU.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.MeanIoU.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.MeanIoU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.MeanIoU.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.MeanIoU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.MeanIoU.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.MeanIoU.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.MeanIoU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.MeanIoU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.MeanIoU.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.MeanIoU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.MeanIoU.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.MeanIoU.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.MeanIoU.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.MeanIoU.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.MeanIoU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.MeanIoU.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.MeanIoU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.MeanIoU.submodules": "tf.Module.submodules", + "tf.keras.metrics.MeanIoU.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.MeanIoU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.MeanIoU.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.MeanRelativeError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.MeanRelativeError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.MeanRelativeError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.MeanRelativeError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.MeanRelativeError.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.MeanRelativeError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.MeanRelativeError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.MeanRelativeError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.MeanRelativeError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.MeanRelativeError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.MeanRelativeError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.MeanRelativeError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.MeanRelativeError.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.MeanRelativeError.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.MeanRelativeError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.MeanRelativeError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.MeanRelativeError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.MeanRelativeError.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.MeanRelativeError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.MeanRelativeError.dynamic": 
"tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.MeanRelativeError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.MeanRelativeError.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.MeanRelativeError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.MeanRelativeError.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.MeanRelativeError.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.MeanRelativeError.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.MeanRelativeError.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.MeanRelativeError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.MeanRelativeError.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.MeanRelativeError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.MeanRelativeError.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.MeanRelativeError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.MeanRelativeError.submodules": "tf.Module.submodules", + "tf.keras.metrics.MeanRelativeError.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.MeanRelativeError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.MeanRelativeError.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.MeanSquaredError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.MeanSquaredError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.MeanSquaredError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.MeanSquaredError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.MeanSquaredError.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.MeanSquaredError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.MeanSquaredError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.MeanSquaredError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.MeanSquaredError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.MeanSquaredError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.MeanSquaredError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.MeanSquaredError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.MeanSquaredError.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.MeanSquaredError.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.MeanSquaredError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.MeanSquaredError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.MeanSquaredError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.MeanSquaredError.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.MeanSquaredError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.MeanSquaredError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.MeanSquaredError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.MeanSquaredError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.MeanSquaredError.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.MeanSquaredError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.MeanSquaredError.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.MeanSquaredError.metrics": "tf.keras.layers.Layer.metrics", + 
"tf.keras.metrics.MeanSquaredError.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.MeanSquaredError.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.MeanSquaredError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.MeanSquaredError.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.MeanSquaredError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.MeanSquaredError.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.MeanSquaredError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.MeanSquaredError.submodules": "tf.Module.submodules", + "tf.keras.metrics.MeanSquaredError.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.MeanSquaredError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.MeanSquaredError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.MeanSquaredError.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.MeanSquaredLogarithmicError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.MeanSquaredLogarithmicError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.MeanSquaredLogarithmicError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.MeanSquaredLogarithmicError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.MeanSquaredLogarithmicError.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.MeanSquaredLogarithmicError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.MeanSquaredLogarithmicError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.MeanSquaredLogarithmicError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.MeanSquaredLogarithmicError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.MeanSquaredLogarithmicError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.MeanSquaredLogarithmicError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.MeanSquaredLogarithmicError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.MeanSquaredLogarithmicError.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.MeanSquaredLogarithmicError.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.MeanSquaredLogarithmicError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.MeanSquaredLogarithmicError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.MeanSquaredLogarithmicError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.MeanSquaredLogarithmicError.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.MeanSquaredLogarithmicError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.MeanSquaredLogarithmicError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.MeanSquaredLogarithmicError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.MeanSquaredLogarithmicError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.MeanSquaredLogarithmicError.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.MeanSquaredLogarithmicError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.MeanSquaredLogarithmicError.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.MeanSquaredLogarithmicError.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.MeanSquaredLogarithmicError.name": 
"tf.keras.layers.Layer.name", + "tf.keras.metrics.MeanSquaredLogarithmicError.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.MeanSquaredLogarithmicError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.MeanSquaredLogarithmicError.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.MeanSquaredLogarithmicError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.MeanSquaredLogarithmicError.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.MeanSquaredLogarithmicError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.MeanSquaredLogarithmicError.submodules": "tf.Module.submodules", + "tf.keras.metrics.MeanSquaredLogarithmicError.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.MeanSquaredLogarithmicError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.MeanSquaredLogarithmicError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.MeanSquaredLogarithmicError.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.MeanTensor.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.MeanTensor.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.MeanTensor.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.MeanTensor.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.MeanTensor.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.MeanTensor.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.MeanTensor.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.MeanTensor.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.MeanTensor.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.MeanTensor.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.MeanTensor.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.MeanTensor.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.MeanTensor.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.MeanTensor.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.MeanTensor.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.MeanTensor.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.MeanTensor.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.MeanTensor.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.MeanTensor.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.MeanTensor.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.MeanTensor.get_config": "tf.keras.metrics.Metric.get_config", + "tf.keras.metrics.MeanTensor.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.MeanTensor.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.MeanTensor.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.MeanTensor.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.MeanTensor.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.MeanTensor.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.MeanTensor.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.MeanTensor.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.MeanTensor.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.MeanTensor.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.MeanTensor.submodules": 
"tf.Module.submodules", + "tf.keras.metrics.MeanTensor.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.MeanTensor.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.MeanTensor.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.Metric.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.Metric.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.Metric.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.Metric.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.Metric.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.Metric.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.Metric.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.Metric.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.Metric.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.Metric.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.Metric.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.Metric.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.Metric.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.Metric.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.Metric.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.Metric.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.Metric.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.Metric.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.Metric.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.Metric.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.Metric.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.Metric.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.Metric.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.Metric.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.Metric.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.Metric.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.Metric.submodules": "tf.Module.submodules", + "tf.keras.metrics.Metric.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.Metric.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.Metric.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.Poisson.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.Poisson.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.Poisson.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.Poisson.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.Poisson.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.Poisson.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.Poisson.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.Poisson.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.Poisson.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.Poisson.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.Poisson.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.Poisson.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.Poisson.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.Poisson.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.Poisson.compute_mask": "tf.keras.layers.Layer.compute_mask", + 
"tf.keras.metrics.Poisson.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.Poisson.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.Poisson.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.Poisson.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.Poisson.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.Poisson.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.Poisson.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.Poisson.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.Poisson.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.Poisson.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.Poisson.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.Poisson.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.Poisson.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.Poisson.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.Poisson.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.Poisson.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.Poisson.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.Poisson.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.Poisson.submodules": "tf.Module.submodules", + "tf.keras.metrics.Poisson.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.Poisson.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.Poisson.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.Poisson.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.Precision.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.Precision.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.Precision.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.Precision.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.Precision.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.Precision.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.Precision.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.Precision.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.Precision.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.Precision.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.Precision.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.Precision.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.Precision.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.Precision.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.Precision.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.Precision.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.Precision.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.Precision.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.Precision.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.Precision.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.Precision.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.Precision.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.Precision.input_spec": "tf.keras.layers.Layer.input_spec", + 
"tf.keras.metrics.Precision.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.Precision.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.Precision.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.Precision.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.Precision.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.Precision.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.Precision.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.Precision.submodules": "tf.Module.submodules", + "tf.keras.metrics.Precision.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.Precision.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.Precision.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.PrecisionAtRecall.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.PrecisionAtRecall.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.PrecisionAtRecall.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.PrecisionAtRecall.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.PrecisionAtRecall.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.PrecisionAtRecall.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.PrecisionAtRecall.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.PrecisionAtRecall.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.PrecisionAtRecall.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.PrecisionAtRecall.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.PrecisionAtRecall.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.PrecisionAtRecall.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.PrecisionAtRecall.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.PrecisionAtRecall.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.PrecisionAtRecall.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.PrecisionAtRecall.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.PrecisionAtRecall.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.PrecisionAtRecall.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.PrecisionAtRecall.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.PrecisionAtRecall.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.PrecisionAtRecall.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.PrecisionAtRecall.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.PrecisionAtRecall.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.PrecisionAtRecall.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.PrecisionAtRecall.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.PrecisionAtRecall.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.PrecisionAtRecall.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.PrecisionAtRecall.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.PrecisionAtRecall.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.PrecisionAtRecall.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.PrecisionAtRecall.submodules": "tf.Module.submodules", + "tf.keras.metrics.PrecisionAtRecall.trainable": "tf.keras.layers.Layer.trainable", + 
"tf.keras.metrics.PrecisionAtRecall.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.PrecisionAtRecall.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.Recall.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.Recall.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.Recall.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.Recall.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.Recall.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.Recall.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.Recall.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.Recall.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.Recall.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.Recall.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.Recall.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.Recall.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.Recall.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.Recall.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.Recall.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.Recall.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.Recall.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.Recall.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.Recall.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.Recall.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.Recall.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.Recall.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.Recall.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.Recall.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.Recall.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.Recall.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.Recall.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.Recall.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.Recall.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.Recall.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.Recall.submodules": "tf.Module.submodules", + "tf.keras.metrics.Recall.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.Recall.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.Recall.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.RecallAtPrecision.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.RecallAtPrecision.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.RecallAtPrecision.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.RecallAtPrecision.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.RecallAtPrecision.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.RecallAtPrecision.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.RecallAtPrecision.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.RecallAtPrecision.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.RecallAtPrecision.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.RecallAtPrecision.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.RecallAtPrecision.add_metric": "tf.keras.layers.Layer.add_metric", + 
"tf.keras.metrics.RecallAtPrecision.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.RecallAtPrecision.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.RecallAtPrecision.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.RecallAtPrecision.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.RecallAtPrecision.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.RecallAtPrecision.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.RecallAtPrecision.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.RecallAtPrecision.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.RecallAtPrecision.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.RecallAtPrecision.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.RecallAtPrecision.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.RecallAtPrecision.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.RecallAtPrecision.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.RecallAtPrecision.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.RecallAtPrecision.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.RecallAtPrecision.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.RecallAtPrecision.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.RecallAtPrecision.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.RecallAtPrecision.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.keras.metrics.RecallAtPrecision.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.RecallAtPrecision.submodules": "tf.Module.submodules", + "tf.keras.metrics.RecallAtPrecision.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.RecallAtPrecision.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.RecallAtPrecision.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.keras.metrics.RecallAtPrecision.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.RootMeanSquaredError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.RootMeanSquaredError.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.RootMeanSquaredError.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.RootMeanSquaredError.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.RootMeanSquaredError.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.RootMeanSquaredError.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.RootMeanSquaredError.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.RootMeanSquaredError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.RootMeanSquaredError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.RootMeanSquaredError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.RootMeanSquaredError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.RootMeanSquaredError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.RootMeanSquaredError.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.RootMeanSquaredError.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.RootMeanSquaredError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.RootMeanSquaredError.compute_output_shape": 
"tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.RootMeanSquaredError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.RootMeanSquaredError.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.RootMeanSquaredError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.RootMeanSquaredError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.RootMeanSquaredError.get_config": "tf.keras.metrics.Metric.get_config", + "tf.keras.metrics.RootMeanSquaredError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.RootMeanSquaredError.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.RootMeanSquaredError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.RootMeanSquaredError.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.RootMeanSquaredError.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.RootMeanSquaredError.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.RootMeanSquaredError.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.RootMeanSquaredError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.RootMeanSquaredError.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.RootMeanSquaredError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.RootMeanSquaredError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.RootMeanSquaredError.submodules": "tf.Module.submodules", + "tf.keras.metrics.RootMeanSquaredError.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.RootMeanSquaredError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.RootMeanSquaredError.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.SensitivityAtSpecificity.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.SensitivityAtSpecificity.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.SensitivityAtSpecificity.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.SensitivityAtSpecificity.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.SensitivityAtSpecificity.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.SensitivityAtSpecificity.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.SensitivityAtSpecificity.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.SensitivityAtSpecificity.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.SensitivityAtSpecificity.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.SensitivityAtSpecificity.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.SensitivityAtSpecificity.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.SensitivityAtSpecificity.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.SensitivityAtSpecificity.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.SensitivityAtSpecificity.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.SensitivityAtSpecificity.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.SensitivityAtSpecificity.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.SensitivityAtSpecificity.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.SensitivityAtSpecificity.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.SensitivityAtSpecificity.dtype": 
"tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.SensitivityAtSpecificity.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.SensitivityAtSpecificity.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.SensitivityAtSpecificity.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.SensitivityAtSpecificity.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.SensitivityAtSpecificity.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.SensitivityAtSpecificity.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.SensitivityAtSpecificity.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.SensitivityAtSpecificity.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.SensitivityAtSpecificity.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.SensitivityAtSpecificity.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.SensitivityAtSpecificity.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.keras.metrics.SensitivityAtSpecificity.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.SensitivityAtSpecificity.submodules": "tf.Module.submodules", + "tf.keras.metrics.SensitivityAtSpecificity.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.SensitivityAtSpecificity.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.SensitivityAtSpecificity.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.keras.metrics.SensitivityAtSpecificity.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.SparseCategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.SparseCategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.SparseCategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.SparseCategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.SparseCategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.SparseCategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.SparseCategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.SparseCategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.SparseCategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.SparseCategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.SparseCategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.SparseCategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.SparseCategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.SparseCategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.SparseCategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.SparseCategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.SparseCategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.SparseCategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.SparseCategoricalAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.SparseCategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.SparseCategoricalAccuracy.get_config": 
"tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.SparseCategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.SparseCategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.SparseCategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.SparseCategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.SparseCategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.SparseCategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.SparseCategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.SparseCategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.SparseCategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.SparseCategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.SparseCategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.SparseCategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.SparseCategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.keras.metrics.SparseCategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.SparseCategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.SparseCategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.SparseCategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.SparseCategoricalCrossentropy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.SparseCategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.SparseCategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.SparseCategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.SparseCategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.SparseCategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.SparseCategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.SparseCategoricalCrossentropy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.SparseCategoricalCrossentropy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.SparseCategoricalCrossentropy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.SparseCategoricalCrossentropy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.SparseCategoricalCrossentropy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.SparseCategoricalCrossentropy.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.SparseCategoricalCrossentropy.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.SparseCategoricalCrossentropy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.SparseCategoricalCrossentropy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.SparseCategoricalCrossentropy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.SparseCategoricalCrossentropy.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.SparseCategoricalCrossentropy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.SparseCategoricalCrossentropy.dynamic": "tf.keras.layers.Layer.dynamic", + 
"tf.keras.metrics.SparseCategoricalCrossentropy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.SparseCategoricalCrossentropy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.SparseCategoricalCrossentropy.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.SparseCategoricalCrossentropy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.SparseCategoricalCrossentropy.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.SparseCategoricalCrossentropy.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.SparseCategoricalCrossentropy.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.SparseCategoricalCrossentropy.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.SparseCategoricalCrossentropy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.SparseCategoricalCrossentropy.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.SparseCategoricalCrossentropy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.SparseCategoricalCrossentropy.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.SparseCategoricalCrossentropy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.SparseCategoricalCrossentropy.submodules": "tf.Module.submodules", + "tf.keras.metrics.SparseCategoricalCrossentropy.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.SparseCategoricalCrossentropy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.SparseCategoricalCrossentropy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.SparseCategoricalCrossentropy.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.dtype": 
"tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.SparseTopKCategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.SpecificityAtSensitivity.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.SpecificityAtSensitivity.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.SpecificityAtSensitivity.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.SpecificityAtSensitivity.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.SpecificityAtSensitivity.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.SpecificityAtSensitivity.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.SpecificityAtSensitivity.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.SpecificityAtSensitivity.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.SpecificityAtSensitivity.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.SpecificityAtSensitivity.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.SpecificityAtSensitivity.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.SpecificityAtSensitivity.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.SpecificityAtSensitivity.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.SpecificityAtSensitivity.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.SpecificityAtSensitivity.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.SpecificityAtSensitivity.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.SpecificityAtSensitivity.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.SpecificityAtSensitivity.count_params": "tf.keras.layers.Layer.count_params", + 
"tf.keras.metrics.SpecificityAtSensitivity.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.SpecificityAtSensitivity.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.SpecificityAtSensitivity.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.SpecificityAtSensitivity.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.SpecificityAtSensitivity.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.SpecificityAtSensitivity.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.SpecificityAtSensitivity.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.SpecificityAtSensitivity.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.SpecificityAtSensitivity.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.SpecificityAtSensitivity.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.SpecificityAtSensitivity.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.SpecificityAtSensitivity.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.keras.metrics.SpecificityAtSensitivity.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.SpecificityAtSensitivity.submodules": "tf.Module.submodules", + "tf.keras.metrics.SpecificityAtSensitivity.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.SpecificityAtSensitivity.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.SpecificityAtSensitivity.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.keras.metrics.SpecificityAtSensitivity.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.SquaredHinge.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.SquaredHinge.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.SquaredHinge.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.SquaredHinge.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.SquaredHinge.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.SquaredHinge.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.SquaredHinge.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.SquaredHinge.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.SquaredHinge.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.SquaredHinge.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.SquaredHinge.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.SquaredHinge.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.SquaredHinge.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.SquaredHinge.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.SquaredHinge.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.SquaredHinge.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.SquaredHinge.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.SquaredHinge.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.SquaredHinge.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.SquaredHinge.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.SquaredHinge.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.SquaredHinge.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.SquaredHinge.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.SquaredHinge.input_spec": 
"tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.SquaredHinge.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.SquaredHinge.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.SquaredHinge.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.SquaredHinge.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.SquaredHinge.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.SquaredHinge.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.SquaredHinge.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.SquaredHinge.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.SquaredHinge.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.SquaredHinge.submodules": "tf.Module.submodules", + "tf.keras.metrics.SquaredHinge.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.SquaredHinge.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.SquaredHinge.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.SquaredHinge.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.Sum.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.Sum.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.Sum.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.Sum.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.Sum.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.Sum.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.Sum.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.Sum.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.Sum.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.Sum.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.Sum.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.Sum.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.Sum.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.Sum.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.Sum.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.Sum.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.Sum.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.Sum.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.Sum.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.Sum.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.Sum.get_config": "tf.keras.metrics.Metric.get_config", + "tf.keras.metrics.Sum.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.Sum.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.Sum.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.Sum.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.Sum.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.Sum.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.Sum.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.Sum.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.Sum.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.Sum.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.Sum.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.Sum.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.Sum.submodules": "tf.Module.submodules", + 
"tf.keras.metrics.Sum.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.Sum.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.Sum.update_state": "tf.keras.metrics.Mean.update_state", + "tf.keras.metrics.Sum.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.TopKCategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.TopKCategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.TopKCategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.TopKCategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.TopKCategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.TopKCategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.TopKCategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.TopKCategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.TopKCategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.TopKCategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.TopKCategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.TopKCategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.TopKCategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.TopKCategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.TopKCategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.TopKCategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.TopKCategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.TopKCategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.TopKCategoricalAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.TopKCategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.TopKCategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.keras.metrics.TopKCategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.TopKCategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.TopKCategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.TopKCategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.TopKCategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.TopKCategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.TopKCategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.TopKCategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.TopKCategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.TopKCategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.keras.metrics.TopKCategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.keras.metrics.TopKCategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.TopKCategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.keras.metrics.TopKCategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.TopKCategoricalAccuracy.trainable_weights": 
"tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.TopKCategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.keras.metrics.TopKCategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.TrueNegatives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.TrueNegatives.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.TrueNegatives.__ge__": "tf.keras.Model.__ge__", + "tf.keras.metrics.TrueNegatives.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.TrueNegatives.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.TrueNegatives.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.TrueNegatives.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.TrueNegatives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.TrueNegatives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.TrueNegatives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.TrueNegatives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.TrueNegatives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.TrueNegatives.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.TrueNegatives.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.TrueNegatives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.TrueNegatives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.TrueNegatives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.TrueNegatives.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.TrueNegatives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.TrueNegatives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.TrueNegatives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + "tf.keras.metrics.TrueNegatives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.TrueNegatives.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.TrueNegatives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.TrueNegatives.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.TrueNegatives.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.TrueNegatives.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.TrueNegatives.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.TrueNegatives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.TrueNegatives.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.TrueNegatives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + "tf.keras.metrics.TrueNegatives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.keras.metrics.TrueNegatives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.TrueNegatives.submodules": "tf.Module.submodules", + "tf.keras.metrics.TrueNegatives.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.TrueNegatives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.TrueNegatives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.keras.metrics.TrueNegatives.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.TruePositives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.keras.metrics.TruePositives.__eq__": "tf.keras.Model.__eq__", + "tf.keras.metrics.TruePositives.__ge__": "tf.keras.Model.__ge__", + 
"tf.keras.metrics.TruePositives.__gt__": "tf.keras.Model.__gt__", + "tf.keras.metrics.TruePositives.__le__": "tf.keras.Model.__le__", + "tf.keras.metrics.TruePositives.__lt__": "tf.keras.Model.__lt__", + "tf.keras.metrics.TruePositives.__ne__": "tf.keras.Model.__ne__", + "tf.keras.metrics.TruePositives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.keras.metrics.TruePositives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.metrics.TruePositives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.metrics.TruePositives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.metrics.TruePositives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.keras.metrics.TruePositives.build": "tf.keras.layers.Layer.build", + "tf.keras.metrics.TruePositives.call": "tf.keras.layers.Layer.call", + "tf.keras.metrics.TruePositives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.keras.metrics.TruePositives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.keras.metrics.TruePositives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.metrics.TruePositives.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.metrics.TruePositives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.keras.metrics.TruePositives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.keras.metrics.TruePositives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + "tf.keras.metrics.TruePositives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.keras.metrics.TruePositives.input": "tf.keras.layers.Layer.input", + "tf.keras.metrics.TruePositives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.keras.metrics.TruePositives.losses": "tf.keras.layers.Layer.losses", + "tf.keras.metrics.TruePositives.metrics": "tf.keras.layers.Layer.metrics", + "tf.keras.metrics.TruePositives.name": "tf.keras.layers.Layer.name", + "tf.keras.metrics.TruePositives.name_scope": "tf.Module.name_scope", + "tf.keras.metrics.TruePositives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.keras.metrics.TruePositives.output": "tf.keras.layers.Layer.output", + "tf.keras.metrics.TruePositives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + "tf.keras.metrics.TruePositives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.keras.metrics.TruePositives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.metrics.TruePositives.submodules": "tf.Module.submodules", + "tf.keras.metrics.TruePositives.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.metrics.TruePositives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.keras.metrics.TruePositives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.keras.metrics.TruePositives.weights": "tf.keras.layers.Layer.weights", + "tf.keras.metrics.binary_crossentropy": "tf.keras.losses.binary_crossentropy", + "tf.keras.metrics.categorical_crossentropy": "tf.keras.losses.categorical_crossentropy", + "tf.keras.metrics.hinge": "tf.keras.losses.hinge", + "tf.keras.metrics.kld": "tf.keras.losses.KLD", + "tf.keras.metrics.kullback_leibler_divergence": "tf.keras.losses.KLD", + "tf.keras.metrics.mae": "tf.keras.losses.MAE", + "tf.keras.metrics.mape": "tf.keras.losses.MAPE", + "tf.keras.metrics.mean_absolute_error": "tf.keras.losses.MAE", + "tf.keras.metrics.mean_absolute_percentage_error": "tf.keras.losses.MAPE", + "tf.keras.metrics.mean_squared_error": "tf.keras.losses.MSE", + 
"tf.keras.metrics.mean_squared_logarithmic_error": "tf.keras.losses.MSLE", + "tf.keras.metrics.mse": "tf.keras.losses.MSE", + "tf.keras.metrics.msle": "tf.keras.losses.MSLE", + "tf.keras.metrics.poisson": "tf.keras.losses.poisson", + "tf.keras.metrics.sparse_categorical_crossentropy": "tf.keras.losses.sparse_categorical_crossentropy", + "tf.keras.metrics.squared_hinge": "tf.keras.losses.squared_hinge", + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__le__": "tf.keras.Model.__le__", + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__ne__": "tf.keras.Model.__ne__", + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.keras.mixed_precision.experimental.Policy.__eq__": "tf.keras.Model.__eq__", + "tf.keras.mixed_precision.experimental.Policy.__ge__": "tf.keras.Model.__ge__", + "tf.keras.mixed_precision.experimental.Policy.__gt__": "tf.keras.Model.__gt__", + "tf.keras.mixed_precision.experimental.Policy.__le__": "tf.keras.Model.__le__", + "tf.keras.mixed_precision.experimental.Policy.__lt__": "tf.keras.Model.__lt__", + "tf.keras.mixed_precision.experimental.Policy.__ne__": "tf.keras.Model.__ne__", + "tf.keras.mixed_precision.experimental.Policy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.models.Model": "tf.keras.Model", + "tf.keras.models.Model.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.models.Model.__eq__": "tf.keras.Model.__eq__", + "tf.keras.models.Model.__ge__": "tf.keras.Model.__ge__", + "tf.keras.models.Model.__gt__": "tf.keras.Model.__gt__", + "tf.keras.models.Model.__init__": "tf.keras.Model.__init__", + "tf.keras.models.Model.__le__": "tf.keras.Model.__le__", + "tf.keras.models.Model.__lt__": "tf.keras.Model.__lt__", + "tf.keras.models.Model.__ne__": "tf.keras.Model.__ne__", + "tf.keras.models.Model.__new__": "tf.keras.Model.__new__", + "tf.keras.models.Model.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.models.Model.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.models.Model.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.models.Model.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.models.Model.build": "tf.keras.Model.build", + "tf.keras.models.Model.call": "tf.keras.Model.call", + "tf.keras.models.Model.compile": "tf.keras.Model.compile", + "tf.keras.models.Model.compute_mask": "tf.keras.Model.compute_mask", + "tf.keras.models.Model.compute_output_shape": "tf.keras.Model.compute_output_shape", + "tf.keras.models.Model.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.models.Model.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.models.Model.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.keras.models.Model.dtype": 
"tf.keras.layers.Layer.dtype", + "tf.keras.models.Model.dynamic": "tf.keras.Model.dynamic", + "tf.keras.models.Model.evaluate": "tf.keras.Model.evaluate", + "tf.keras.models.Model.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.keras.models.Model.fit": "tf.keras.Model.fit", + "tf.keras.models.Model.fit_generator": "tf.keras.Model.fit_generator", + "tf.keras.models.Model.get_config": "tf.keras.Model.get_config", + "tf.keras.models.Model.get_layer": "tf.keras.Model.get_layer", + "tf.keras.models.Model.get_weights": "tf.keras.Model.get_weights", + "tf.keras.models.Model.input": "tf.keras.layers.Layer.input", + "tf.keras.models.Model.input_spec": "tf.keras.Model.input_spec", + "tf.keras.models.Model.layers": "tf.keras.Model.layers", + "tf.keras.models.Model.load_weights": "tf.keras.Model.load_weights", + "tf.keras.models.Model.losses": "tf.keras.layers.Layer.losses", + "tf.keras.models.Model.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.keras.models.Model.make_test_function": "tf.keras.Model.make_test_function", + "tf.keras.models.Model.make_train_function": "tf.keras.Model.make_train_function", + "tf.keras.models.Model.metrics": "tf.keras.Model.metrics", + "tf.keras.models.Model.metrics_names": "tf.keras.Model.metrics_names", + "tf.keras.models.Model.name": "tf.keras.layers.Layer.name", + "tf.keras.models.Model.name_scope": "tf.Module.name_scope", + "tf.keras.models.Model.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.keras.models.Model.output": "tf.keras.layers.Layer.output", + "tf.keras.models.Model.predict": "tf.keras.Model.predict", + "tf.keras.models.Model.predict_generator": "tf.keras.Model.predict_generator", + "tf.keras.models.Model.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.keras.models.Model.predict_step": "tf.keras.Model.predict_step", + "tf.keras.models.Model.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.keras.models.Model.reset_states": "tf.keras.Model.reset_states", + "tf.keras.models.Model.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.keras.models.Model.save": "tf.keras.Model.save", + "tf.keras.models.Model.save_weights": "tf.keras.Model.save_weights", + "tf.keras.models.Model.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.models.Model.state_updates": "tf.keras.Model.state_updates", + "tf.keras.models.Model.stateful": "tf.keras.Model.stateful", + "tf.keras.models.Model.submodules": "tf.Module.submodules", + "tf.keras.models.Model.summary": "tf.keras.Model.summary", + "tf.keras.models.Model.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.keras.models.Model.test_step": "tf.keras.Model.test_step", + "tf.keras.models.Model.to_json": "tf.keras.Model.to_json", + "tf.keras.models.Model.to_yaml": "tf.keras.Model.to_yaml", + "tf.keras.models.Model.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.keras.models.Model.train_step": "tf.keras.Model.train_step", + "tf.keras.models.Model.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.models.Model.trainable_weights": "tf.keras.Model.trainable_weights", + "tf.keras.models.Model.weights": "tf.keras.Model.weights", + "tf.keras.models.Sequential": "tf.keras.Sequential", + "tf.keras.models.Sequential.__call__": "tf.keras.layers.Layer.__call__", + "tf.keras.models.Sequential.__eq__": "tf.keras.Model.__eq__", + "tf.keras.models.Sequential.__ge__": "tf.keras.Model.__ge__", + "tf.keras.models.Sequential.__gt__": "tf.keras.Model.__gt__", + "tf.keras.models.Sequential.__init__": "tf.keras.Sequential.__init__", + 
"tf.keras.models.Sequential.__le__": "tf.keras.Model.__le__", + "tf.keras.models.Sequential.__lt__": "tf.keras.Model.__lt__", + "tf.keras.models.Sequential.__ne__": "tf.keras.Model.__ne__", + "tf.keras.models.Sequential.__new__": "tf.keras.Model.__new__", + "tf.keras.models.Sequential.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.keras.models.Sequential.add": "tf.keras.Sequential.add", + "tf.keras.models.Sequential.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.keras.models.Sequential.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.keras.models.Sequential.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.keras.models.Sequential.build": "tf.keras.Sequential.build", + "tf.keras.models.Sequential.call": "tf.keras.Sequential.call", + "tf.keras.models.Sequential.compile": "tf.keras.Model.compile", + "tf.keras.models.Sequential.compute_mask": "tf.keras.Sequential.compute_mask", + "tf.keras.models.Sequential.compute_output_shape": "tf.keras.Sequential.compute_output_shape", + "tf.keras.models.Sequential.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.keras.models.Sequential.count_params": "tf.keras.layers.Layer.count_params", + "tf.keras.models.Sequential.distribute_strategy": "tf.keras.Model.distribute_strategy", + "tf.keras.models.Sequential.dtype": "tf.keras.layers.Layer.dtype", + "tf.keras.models.Sequential.dynamic": "tf.keras.Sequential.dynamic", + "tf.keras.models.Sequential.evaluate": "tf.keras.Model.evaluate", + "tf.keras.models.Sequential.evaluate_generator": "tf.keras.Model.evaluate_generator", + "tf.keras.models.Sequential.fit": "tf.keras.Model.fit", + "tf.keras.models.Sequential.fit_generator": "tf.keras.Model.fit_generator", + "tf.keras.models.Sequential.get_config": "tf.keras.Sequential.get_config", + "tf.keras.models.Sequential.get_layer": "tf.keras.Model.get_layer", + "tf.keras.models.Sequential.get_weights": "tf.keras.Model.get_weights", + "tf.keras.models.Sequential.input": "tf.keras.layers.Layer.input", + "tf.keras.models.Sequential.input_spec": "tf.keras.Sequential.input_spec", + "tf.keras.models.Sequential.layers": "tf.keras.Sequential.layers", + "tf.keras.models.Sequential.load_weights": "tf.keras.Model.load_weights", + "tf.keras.models.Sequential.losses": "tf.keras.layers.Layer.losses", + "tf.keras.models.Sequential.make_predict_function": "tf.keras.Model.make_predict_function", + "tf.keras.models.Sequential.make_test_function": "tf.keras.Model.make_test_function", + "tf.keras.models.Sequential.make_train_function": "tf.keras.Model.make_train_function", + "tf.keras.models.Sequential.metrics": "tf.keras.Model.metrics", + "tf.keras.models.Sequential.metrics_names": "tf.keras.Model.metrics_names", + "tf.keras.models.Sequential.name": "tf.keras.layers.Layer.name", + "tf.keras.models.Sequential.name_scope": "tf.Module.name_scope", + "tf.keras.models.Sequential.non_trainable_weights": "tf.keras.Model.non_trainable_weights", + "tf.keras.models.Sequential.output": "tf.keras.layers.Layer.output", + "tf.keras.models.Sequential.pop": "tf.keras.Sequential.pop", + "tf.keras.models.Sequential.predict": "tf.keras.Model.predict", + "tf.keras.models.Sequential.predict_classes": "tf.keras.Sequential.predict_classes", + "tf.keras.models.Sequential.predict_generator": "tf.keras.Model.predict_generator", + "tf.keras.models.Sequential.predict_on_batch": "tf.keras.Model.predict_on_batch", + "tf.keras.models.Sequential.predict_proba": "tf.keras.Sequential.predict_proba", + "tf.keras.models.Sequential.predict_step": 
"tf.keras.Model.predict_step", + "tf.keras.models.Sequential.reset_metrics": "tf.keras.Model.reset_metrics", + "tf.keras.models.Sequential.reset_states": "tf.keras.Model.reset_states", + "tf.keras.models.Sequential.run_eagerly": "tf.keras.Model.run_eagerly", + "tf.keras.models.Sequential.save": "tf.keras.Model.save", + "tf.keras.models.Sequential.save_weights": "tf.keras.Model.save_weights", + "tf.keras.models.Sequential.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.keras.models.Sequential.state_updates": "tf.keras.Model.state_updates", + "tf.keras.models.Sequential.stateful": "tf.keras.Model.stateful", + "tf.keras.models.Sequential.submodules": "tf.Module.submodules", + "tf.keras.models.Sequential.summary": "tf.keras.Model.summary", + "tf.keras.models.Sequential.test_on_batch": "tf.keras.Model.test_on_batch", + "tf.keras.models.Sequential.test_step": "tf.keras.Model.test_step", + "tf.keras.models.Sequential.to_json": "tf.keras.Model.to_json", + "tf.keras.models.Sequential.to_yaml": "tf.keras.Model.to_yaml", + "tf.keras.models.Sequential.train_on_batch": "tf.keras.Model.train_on_batch", + "tf.keras.models.Sequential.train_step": "tf.keras.Model.train_step", + "tf.keras.models.Sequential.trainable": "tf.keras.layers.Layer.trainable", + "tf.keras.models.Sequential.trainable_weights": "tf.keras.Model.trainable_weights", + "tf.keras.models.Sequential.weights": "tf.keras.Model.weights", + "tf.keras.optimizers.Adadelta.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.Adadelta.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.Adadelta.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.Adadelta.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.Adadelta.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.Adadelta.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.Adadelta.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.Adadelta.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.keras.optimizers.Adadelta.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.keras.optimizers.Adadelta.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.keras.optimizers.Adadelta.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.keras.optimizers.Adadelta.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.keras.optimizers.Adadelta.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.keras.optimizers.Adadelta.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.keras.optimizers.Adadelta.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.keras.optimizers.Adadelta.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.keras.optimizers.Adadelta.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.keras.optimizers.Adadelta.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.keras.optimizers.Adadelta.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.keras.optimizers.Adagrad.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.Adagrad.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.Adagrad.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.Adagrad.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.Adagrad.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.Adagrad.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.Adagrad.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.Adagrad.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + 
"tf.keras.optimizers.Adagrad.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.keras.optimizers.Adagrad.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.keras.optimizers.Adagrad.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.keras.optimizers.Adagrad.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.keras.optimizers.Adagrad.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.keras.optimizers.Adagrad.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.keras.optimizers.Adagrad.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.keras.optimizers.Adagrad.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.keras.optimizers.Adagrad.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.keras.optimizers.Adagrad.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.keras.optimizers.Adagrad.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.keras.optimizers.Adam.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.Adam.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.Adam.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.Adam.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.Adam.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.Adam.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.Adam.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.Adam.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.keras.optimizers.Adam.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.keras.optimizers.Adam.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.keras.optimizers.Adam.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.keras.optimizers.Adam.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.keras.optimizers.Adam.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.keras.optimizers.Adam.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.keras.optimizers.Adam.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.keras.optimizers.Adam.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.keras.optimizers.Adam.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.keras.optimizers.Adam.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.keras.optimizers.Adam.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.keras.optimizers.Adamax.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.Adamax.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.Adamax.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.Adamax.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.Adamax.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.Adamax.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.Adamax.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.Adamax.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.keras.optimizers.Adamax.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.keras.optimizers.Adamax.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.keras.optimizers.Adamax.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.keras.optimizers.Adamax.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.keras.optimizers.Adamax.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.keras.optimizers.Adamax.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + 
"tf.keras.optimizers.Adamax.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.keras.optimizers.Adamax.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.keras.optimizers.Adamax.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.keras.optimizers.Adamax.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.keras.optimizers.Adamax.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.keras.optimizers.Adamax.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.keras.optimizers.Ftrl.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.Ftrl.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.Ftrl.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.Ftrl.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.Ftrl.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.Ftrl.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.Ftrl.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.Ftrl.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.keras.optimizers.Ftrl.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.keras.optimizers.Ftrl.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.keras.optimizers.Ftrl.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.keras.optimizers.Ftrl.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.keras.optimizers.Ftrl.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.keras.optimizers.Ftrl.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.keras.optimizers.Ftrl.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.keras.optimizers.Ftrl.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.keras.optimizers.Ftrl.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.keras.optimizers.Ftrl.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.keras.optimizers.Ftrl.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.keras.optimizers.Ftrl.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.keras.optimizers.Nadam.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.Nadam.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.Nadam.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.Nadam.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.Nadam.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.Nadam.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.Nadam.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.Nadam.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.keras.optimizers.Nadam.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.keras.optimizers.Nadam.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.keras.optimizers.Nadam.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.keras.optimizers.Nadam.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.keras.optimizers.Nadam.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.keras.optimizers.Nadam.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.keras.optimizers.Nadam.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.keras.optimizers.Nadam.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.keras.optimizers.Nadam.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.keras.optimizers.Nadam.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.keras.optimizers.Nadam.variables": 
"tf.keras.optimizers.Optimizer.variables", + "tf.keras.optimizers.Nadam.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.keras.optimizers.Optimizer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.Optimizer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.Optimizer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.Optimizer.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.Optimizer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.Optimizer.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.Optimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.RMSprop.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.RMSprop.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.RMSprop.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.RMSprop.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.RMSprop.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.RMSprop.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.RMSprop.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.RMSprop.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.keras.optimizers.RMSprop.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.keras.optimizers.RMSprop.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.keras.optimizers.RMSprop.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.keras.optimizers.RMSprop.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.keras.optimizers.RMSprop.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.keras.optimizers.RMSprop.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.keras.optimizers.RMSprop.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.keras.optimizers.RMSprop.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.keras.optimizers.RMSprop.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.keras.optimizers.RMSprop.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.keras.optimizers.RMSprop.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.keras.optimizers.SGD.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.SGD.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.SGD.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.SGD.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.SGD.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.SGD.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.SGD.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.SGD.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.keras.optimizers.SGD.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.keras.optimizers.SGD.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.keras.optimizers.SGD.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.keras.optimizers.SGD.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.keras.optimizers.SGD.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.keras.optimizers.SGD.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.keras.optimizers.SGD.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.keras.optimizers.SGD.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.keras.optimizers.SGD.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.keras.optimizers.SGD.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + 
"tf.keras.optimizers.SGD.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.keras.optimizers.SGD.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.keras.optimizers.schedules.ExponentialDecay.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.schedules.ExponentialDecay.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.schedules.ExponentialDecay.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.schedules.ExponentialDecay.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.schedules.ExponentialDecay.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.schedules.ExponentialDecay.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.schedules.ExponentialDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.schedules.InverseTimeDecay.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.schedules.InverseTimeDecay.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.schedules.InverseTimeDecay.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.schedules.InverseTimeDecay.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.schedules.InverseTimeDecay.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.schedules.InverseTimeDecay.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.schedules.InverseTimeDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.schedules.LearningRateSchedule.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.schedules.LearningRateSchedule.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.schedules.LearningRateSchedule.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.schedules.LearningRateSchedule.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.optimizers.schedules.LearningRateSchedule.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.schedules.LearningRateSchedule.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.schedules.LearningRateSchedule.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.schedules.LearningRateSchedule.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.optimizers.schedules.PolynomialDecay.__eq__": "tf.keras.Model.__eq__", + "tf.keras.optimizers.schedules.PolynomialDecay.__ge__": "tf.keras.Model.__ge__", + "tf.keras.optimizers.schedules.PolynomialDecay.__gt__": "tf.keras.Model.__gt__", + "tf.keras.optimizers.schedules.PolynomialDecay.__le__": "tf.keras.Model.__le__", + "tf.keras.optimizers.schedules.PolynomialDecay.__lt__": "tf.keras.Model.__lt__", + "tf.keras.optimizers.schedules.PolynomialDecay.__ne__": "tf.keras.Model.__ne__", + "tf.keras.optimizers.schedules.PolynomialDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.preprocessing.image.DirectoryIterator.__eq__": "tf.keras.Model.__eq__", + "tf.keras.preprocessing.image.DirectoryIterator.__ge__": "tf.keras.Model.__ge__", + 
"tf.keras.preprocessing.image.DirectoryIterator.__gt__": "tf.keras.Model.__gt__", + "tf.keras.preprocessing.image.DirectoryIterator.__le__": "tf.keras.Model.__le__", + "tf.keras.preprocessing.image.DirectoryIterator.__lt__": "tf.keras.Model.__lt__", + "tf.keras.preprocessing.image.DirectoryIterator.__ne__": "tf.keras.Model.__ne__", + "tf.keras.preprocessing.image.ImageDataGenerator.__eq__": "tf.keras.Model.__eq__", + "tf.keras.preprocessing.image.ImageDataGenerator.__ge__": "tf.keras.Model.__ge__", + "tf.keras.preprocessing.image.ImageDataGenerator.__gt__": "tf.keras.Model.__gt__", + "tf.keras.preprocessing.image.ImageDataGenerator.__le__": "tf.keras.Model.__le__", + "tf.keras.preprocessing.image.ImageDataGenerator.__lt__": "tf.keras.Model.__lt__", + "tf.keras.preprocessing.image.ImageDataGenerator.__ne__": "tf.keras.Model.__ne__", + "tf.keras.preprocessing.image.ImageDataGenerator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.preprocessing.image.Iterator.__eq__": "tf.keras.Model.__eq__", + "tf.keras.preprocessing.image.Iterator.__ge__": "tf.keras.Model.__ge__", + "tf.keras.preprocessing.image.Iterator.__getitem__": "tf.keras.preprocessing.image.DirectoryIterator.__getitem__", + "tf.keras.preprocessing.image.Iterator.__gt__": "tf.keras.Model.__gt__", + "tf.keras.preprocessing.image.Iterator.__iter__": "tf.keras.preprocessing.image.DirectoryIterator.__iter__", + "tf.keras.preprocessing.image.Iterator.__le__": "tf.keras.Model.__le__", + "tf.keras.preprocessing.image.Iterator.__len__": "tf.keras.preprocessing.image.DirectoryIterator.__len__", + "tf.keras.preprocessing.image.Iterator.__lt__": "tf.keras.Model.__lt__", + "tf.keras.preprocessing.image.Iterator.__ne__": "tf.keras.Model.__ne__", + "tf.keras.preprocessing.image.Iterator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.preprocessing.image.Iterator.next": "tf.keras.preprocessing.image.DirectoryIterator.next", + "tf.keras.preprocessing.image.Iterator.on_epoch_end": "tf.keras.preprocessing.image.DirectoryIterator.on_epoch_end", + "tf.keras.preprocessing.image.Iterator.reset": "tf.keras.preprocessing.image.DirectoryIterator.reset", + "tf.keras.preprocessing.image.Iterator.white_list_formats": "tf.keras.preprocessing.image.DirectoryIterator.white_list_formats", + "tf.keras.preprocessing.image.NumpyArrayIterator.__eq__": "tf.keras.Model.__eq__", + "tf.keras.preprocessing.image.NumpyArrayIterator.__ge__": "tf.keras.Model.__ge__", + "tf.keras.preprocessing.image.NumpyArrayIterator.__getitem__": "tf.keras.preprocessing.image.DirectoryIterator.__getitem__", + "tf.keras.preprocessing.image.NumpyArrayIterator.__gt__": "tf.keras.Model.__gt__", + "tf.keras.preprocessing.image.NumpyArrayIterator.__iter__": "tf.keras.preprocessing.image.DirectoryIterator.__iter__", + "tf.keras.preprocessing.image.NumpyArrayIterator.__le__": "tf.keras.Model.__le__", + "tf.keras.preprocessing.image.NumpyArrayIterator.__len__": "tf.keras.preprocessing.image.DirectoryIterator.__len__", + "tf.keras.preprocessing.image.NumpyArrayIterator.__lt__": "tf.keras.Model.__lt__", + "tf.keras.preprocessing.image.NumpyArrayIterator.__ne__": "tf.keras.Model.__ne__", + "tf.keras.preprocessing.image.NumpyArrayIterator.next": "tf.keras.preprocessing.image.DirectoryIterator.next", + "tf.keras.preprocessing.image.NumpyArrayIterator.on_epoch_end": "tf.keras.preprocessing.image.DirectoryIterator.on_epoch_end", + "tf.keras.preprocessing.image.NumpyArrayIterator.reset": "tf.keras.preprocessing.image.DirectoryIterator.reset", + 
"tf.keras.preprocessing.image.NumpyArrayIterator.white_list_formats": "tf.keras.preprocessing.image.DirectoryIterator.white_list_formats", + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__eq__": "tf.keras.Model.__eq__", + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__ge__": "tf.keras.Model.__ge__", + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__gt__": "tf.keras.Model.__gt__", + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__iter__": "tf.keras.utils.Sequence.__iter__", + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__le__": "tf.keras.Model.__le__", + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__lt__": "tf.keras.Model.__lt__", + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__ne__": "tf.keras.Model.__ne__", + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.preprocessing.sequence.TimeseriesGenerator.on_epoch_end": "tf.keras.utils.Sequence.on_epoch_end", + "tf.keras.preprocessing.text.Tokenizer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.preprocessing.text.Tokenizer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.preprocessing.text.Tokenizer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.preprocessing.text.Tokenizer.__le__": "tf.keras.Model.__le__", + "tf.keras.preprocessing.text.Tokenizer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.preprocessing.text.Tokenizer.__ne__": "tf.keras.Model.__ne__", + "tf.keras.preprocessing.text.Tokenizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.regularizers.L1L2.__eq__": "tf.keras.Model.__eq__", + "tf.keras.regularizers.L1L2.__ge__": "tf.keras.Model.__ge__", + "tf.keras.regularizers.L1L2.__gt__": "tf.keras.Model.__gt__", + "tf.keras.regularizers.L1L2.__le__": "tf.keras.Model.__le__", + "tf.keras.regularizers.L1L2.__lt__": "tf.keras.Model.__lt__", + "tf.keras.regularizers.L1L2.__ne__": "tf.keras.Model.__ne__", + "tf.keras.regularizers.L1L2.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.regularizers.Regularizer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.regularizers.Regularizer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.regularizers.Regularizer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.regularizers.Regularizer.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.regularizers.Regularizer.__le__": "tf.keras.Model.__le__", + "tf.keras.regularizers.Regularizer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.regularizers.Regularizer.__ne__": "tf.keras.Model.__ne__", + "tf.keras.regularizers.Regularizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.utils.CustomObjectScope.__eq__": "tf.keras.Model.__eq__", + "tf.keras.utils.CustomObjectScope.__ge__": "tf.keras.Model.__ge__", + "tf.keras.utils.CustomObjectScope.__gt__": "tf.keras.Model.__gt__", + "tf.keras.utils.CustomObjectScope.__le__": "tf.keras.Model.__le__", + "tf.keras.utils.CustomObjectScope.__lt__": "tf.keras.Model.__lt__", + "tf.keras.utils.CustomObjectScope.__ne__": "tf.keras.Model.__ne__", + "tf.keras.utils.CustomObjectScope.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.utils.GeneratorEnqueuer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.utils.GeneratorEnqueuer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.utils.GeneratorEnqueuer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.utils.GeneratorEnqueuer.__le__": "tf.keras.Model.__le__", + "tf.keras.utils.GeneratorEnqueuer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.utils.GeneratorEnqueuer.__ne__": "tf.keras.Model.__ne__", + 
"tf.keras.utils.GeneratorEnqueuer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.utils.GeneratorEnqueuer.is_running": "tf.keras.utils.SequenceEnqueuer.is_running", + "tf.keras.utils.GeneratorEnqueuer.start": "tf.keras.utils.SequenceEnqueuer.start", + "tf.keras.utils.GeneratorEnqueuer.stop": "tf.keras.utils.SequenceEnqueuer.stop", + "tf.keras.utils.HDF5Matrix.__eq__": "tf.keras.Model.__eq__", + "tf.keras.utils.HDF5Matrix.__ge__": "tf.keras.Model.__ge__", + "tf.keras.utils.HDF5Matrix.__gt__": "tf.keras.Model.__gt__", + "tf.keras.utils.HDF5Matrix.__le__": "tf.keras.Model.__le__", + "tf.keras.utils.HDF5Matrix.__lt__": "tf.keras.Model.__lt__", + "tf.keras.utils.HDF5Matrix.__ne__": "tf.keras.Model.__ne__", + "tf.keras.utils.HDF5Matrix.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.utils.OrderedEnqueuer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.utils.OrderedEnqueuer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.utils.OrderedEnqueuer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.utils.OrderedEnqueuer.__le__": "tf.keras.Model.__le__", + "tf.keras.utils.OrderedEnqueuer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.utils.OrderedEnqueuer.__ne__": "tf.keras.Model.__ne__", + "tf.keras.utils.OrderedEnqueuer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.utils.OrderedEnqueuer.is_running": "tf.keras.utils.SequenceEnqueuer.is_running", + "tf.keras.utils.OrderedEnqueuer.start": "tf.keras.utils.SequenceEnqueuer.start", + "tf.keras.utils.OrderedEnqueuer.stop": "tf.keras.utils.SequenceEnqueuer.stop", + "tf.keras.utils.Progbar.__eq__": "tf.keras.Model.__eq__", + "tf.keras.utils.Progbar.__ge__": "tf.keras.Model.__ge__", + "tf.keras.utils.Progbar.__gt__": "tf.keras.Model.__gt__", + "tf.keras.utils.Progbar.__le__": "tf.keras.Model.__le__", + "tf.keras.utils.Progbar.__lt__": "tf.keras.Model.__lt__", + "tf.keras.utils.Progbar.__ne__": "tf.keras.Model.__ne__", + "tf.keras.utils.Progbar.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.utils.Sequence.__eq__": "tf.keras.Model.__eq__", + "tf.keras.utils.Sequence.__ge__": "tf.keras.Model.__ge__", + "tf.keras.utils.Sequence.__gt__": "tf.keras.Model.__gt__", + "tf.keras.utils.Sequence.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.keras.utils.Sequence.__le__": "tf.keras.Model.__le__", + "tf.keras.utils.Sequence.__lt__": "tf.keras.Model.__lt__", + "tf.keras.utils.Sequence.__ne__": "tf.keras.Model.__ne__", + "tf.keras.utils.Sequence.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.utils.SequenceEnqueuer.__eq__": "tf.keras.Model.__eq__", + "tf.keras.utils.SequenceEnqueuer.__ge__": "tf.keras.Model.__ge__", + "tf.keras.utils.SequenceEnqueuer.__gt__": "tf.keras.Model.__gt__", + "tf.keras.utils.SequenceEnqueuer.__le__": "tf.keras.Model.__le__", + "tf.keras.utils.SequenceEnqueuer.__lt__": "tf.keras.Model.__lt__", + "tf.keras.utils.SequenceEnqueuer.__ne__": "tf.keras.Model.__ne__", + "tf.keras.utils.SequenceEnqueuer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.wrappers.scikit_learn.KerasClassifier.__eq__": "tf.keras.Model.__eq__", + "tf.keras.wrappers.scikit_learn.KerasClassifier.__ge__": "tf.keras.Model.__ge__", + "tf.keras.wrappers.scikit_learn.KerasClassifier.__gt__": "tf.keras.Model.__gt__", + "tf.keras.wrappers.scikit_learn.KerasClassifier.__le__": "tf.keras.Model.__le__", + "tf.keras.wrappers.scikit_learn.KerasClassifier.__lt__": "tf.keras.Model.__lt__", + "tf.keras.wrappers.scikit_learn.KerasClassifier.__ne__": "tf.keras.Model.__ne__", + 
"tf.keras.wrappers.scikit_learn.KerasClassifier.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.wrappers.scikit_learn.KerasRegressor.__eq__": "tf.keras.Model.__eq__", + "tf.keras.wrappers.scikit_learn.KerasRegressor.__ge__": "tf.keras.Model.__ge__", + "tf.keras.wrappers.scikit_learn.KerasRegressor.__gt__": "tf.keras.Model.__gt__", + "tf.keras.wrappers.scikit_learn.KerasRegressor.__init__": "tf.keras.wrappers.scikit_learn.KerasClassifier.__init__", + "tf.keras.wrappers.scikit_learn.KerasRegressor.__le__": "tf.keras.Model.__le__", + "tf.keras.wrappers.scikit_learn.KerasRegressor.__lt__": "tf.keras.Model.__lt__", + "tf.keras.wrappers.scikit_learn.KerasRegressor.__ne__": "tf.keras.Model.__ne__", + "tf.keras.wrappers.scikit_learn.KerasRegressor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.keras.wrappers.scikit_learn.KerasRegressor.check_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.check_params", + "tf.keras.wrappers.scikit_learn.KerasRegressor.filter_sk_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.filter_sk_params", + "tf.keras.wrappers.scikit_learn.KerasRegressor.get_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.get_params", + "tf.keras.wrappers.scikit_learn.KerasRegressor.set_params": "tf.keras.wrappers.scikit_learn.KerasClassifier.set_params", + "tf.less": "tf.math.less", + "tf.less_equal": "tf.math.less_equal", + "tf.linalg.LinearOperator.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperator.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperator.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperator.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperator.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperator.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperator.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperator.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperator.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperator.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorAdjoint.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorAdjoint.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorAdjoint.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorAdjoint.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorAdjoint.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorAdjoint.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorAdjoint.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorAdjoint.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorAdjoint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorAdjoint.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorAdjoint.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorAdjoint.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorAdjoint.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorAdjoint.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorAdjoint.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorAdjoint.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorAdjoint.cholesky": "tf.linalg.LinearOperator.cholesky", + 
"tf.linalg.LinearOperatorAdjoint.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorAdjoint.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorAdjoint.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorAdjoint.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorAdjoint.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorAdjoint.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorAdjoint.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorAdjoint.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorAdjoint.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorAdjoint.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorAdjoint.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorAdjoint.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorAdjoint.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorAdjoint.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorAdjoint.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorAdjoint.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorAdjoint.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorAdjoint.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorAdjoint.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorAdjoint.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorAdjoint.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorAdjoint.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorAdjoint.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorAdjoint.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorAdjoint.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorAdjoint.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorAdjoint.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorAdjoint.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorAdjoint.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorAdjoint.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorAdjoint.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorBlockDiag.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorBlockDiag.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorBlockDiag.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorBlockDiag.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorBlockDiag.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorBlockDiag.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorBlockDiag.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorBlockDiag.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorBlockDiag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorBlockDiag.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorBlockDiag.adjoint": 
"tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorBlockDiag.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorBlockDiag.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorBlockDiag.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorBlockDiag.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorBlockDiag.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorBlockDiag.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorBlockDiag.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorBlockDiag.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorBlockDiag.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorBlockDiag.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorBlockDiag.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorBlockDiag.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorBlockDiag.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorBlockDiag.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorBlockDiag.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorBlockDiag.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorBlockDiag.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorBlockDiag.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorBlockDiag.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorBlockDiag.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorBlockDiag.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorBlockDiag.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorBlockDiag.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorBlockDiag.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorBlockDiag.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorBlockDiag.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorBlockDiag.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorBlockDiag.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorBlockDiag.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorBlockDiag.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorBlockDiag.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorBlockDiag.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorBlockDiag.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorBlockDiag.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorBlockDiag.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorBlockDiag.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorBlockDiag.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorBlockLowerTriangular.H": "tf.linalg.LinearOperator.H", + 
"tf.linalg.LinearOperatorBlockLowerTriangular.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorBlockLowerTriangular.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorBlockLowerTriangular.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorBlockLowerTriangular.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorBlockLowerTriangular.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorBlockLowerTriangular.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorBlockLowerTriangular.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorBlockLowerTriangular.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorBlockLowerTriangular.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorBlockLowerTriangular.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorBlockLowerTriangular.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorBlockLowerTriangular.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorBlockLowerTriangular.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorBlockLowerTriangular.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorBlockLowerTriangular.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorBlockLowerTriangular.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorBlockLowerTriangular.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorBlockLowerTriangular.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorBlockLowerTriangular.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorBlockLowerTriangular.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorBlockLowerTriangular.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorBlockLowerTriangular.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorBlockLowerTriangular.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorBlockLowerTriangular.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorBlockLowerTriangular.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorBlockLowerTriangular.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorBlockLowerTriangular.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorBlockLowerTriangular.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorBlockLowerTriangular.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorBlockLowerTriangular.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorBlockLowerTriangular.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorBlockLowerTriangular.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorBlockLowerTriangular.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorBlockLowerTriangular.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorBlockLowerTriangular.range_dimension": "tf.linalg.LinearOperator.range_dimension", + 
"tf.linalg.LinearOperatorBlockLowerTriangular.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorBlockLowerTriangular.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorBlockLowerTriangular.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorBlockLowerTriangular.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorBlockLowerTriangular.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorBlockLowerTriangular.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorBlockLowerTriangular.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorBlockLowerTriangular.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorBlockLowerTriangular.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorBlockLowerTriangular.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorBlockLowerTriangular.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorBlockLowerTriangular.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorCirculant.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorCirculant.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorCirculant.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorCirculant.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorCirculant.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorCirculant.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorCirculant.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorCirculant.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorCirculant.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorCirculant.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorCirculant.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorCirculant.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorCirculant.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorCirculant.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorCirculant.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorCirculant.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorCirculant.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorCirculant.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorCirculant.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorCirculant.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorCirculant.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorCirculant.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorCirculant.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorCirculant.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorCirculant.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorCirculant.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorCirculant.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + 
"tf.linalg.LinearOperatorCirculant.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorCirculant.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorCirculant.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorCirculant.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorCirculant.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorCirculant.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorCirculant.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorCirculant.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorCirculant.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorCirculant.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorCirculant.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorCirculant.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorCirculant.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorCirculant.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorCirculant.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorCirculant.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorCirculant.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorCirculant.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorCirculant.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorCirculant.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorCirculant.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorCirculant2D.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorCirculant2D.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorCirculant2D.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorCirculant2D.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorCirculant2D.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorCirculant2D.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorCirculant2D.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorCirculant2D.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorCirculant2D.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorCirculant2D.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorCirculant2D.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorCirculant2D.assert_hermitian_spectrum": "tf.linalg.LinearOperatorCirculant.assert_hermitian_spectrum", + "tf.linalg.LinearOperatorCirculant2D.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorCirculant2D.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorCirculant2D.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorCirculant2D.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorCirculant2D.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorCirculant2D.block_depth": "tf.linalg.LinearOperatorCirculant.block_depth", + "tf.linalg.LinearOperatorCirculant2D.block_shape": 
"tf.linalg.LinearOperatorCirculant.block_shape", + "tf.linalg.LinearOperatorCirculant2D.block_shape_tensor": "tf.linalg.LinearOperatorCirculant.block_shape_tensor", + "tf.linalg.LinearOperatorCirculant2D.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorCirculant2D.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorCirculant2D.convolution_kernel": "tf.linalg.LinearOperatorCirculant.convolution_kernel", + "tf.linalg.LinearOperatorCirculant2D.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorCirculant2D.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorCirculant2D.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorCirculant2D.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorCirculant2D.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorCirculant2D.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorCirculant2D.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorCirculant2D.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorCirculant2D.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorCirculant2D.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorCirculant2D.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorCirculant2D.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorCirculant2D.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorCirculant2D.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorCirculant2D.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorCirculant2D.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorCirculant2D.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorCirculant2D.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorCirculant2D.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorCirculant2D.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorCirculant2D.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorCirculant2D.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorCirculant2D.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorCirculant2D.spectrum": "tf.linalg.LinearOperatorCirculant.spectrum", + "tf.linalg.LinearOperatorCirculant2D.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorCirculant2D.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorCirculant2D.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorCirculant2D.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorCirculant2D.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorCirculant2D.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorCirculant2D.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorCirculant3D.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorCirculant3D.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorCirculant3D.__ge__": "tf.keras.Model.__ge__", + 
"tf.linalg.LinearOperatorCirculant3D.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorCirculant3D.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorCirculant3D.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorCirculant3D.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorCirculant3D.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorCirculant3D.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorCirculant3D.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorCirculant3D.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorCirculant3D.assert_hermitian_spectrum": "tf.linalg.LinearOperatorCirculant.assert_hermitian_spectrum", + "tf.linalg.LinearOperatorCirculant3D.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorCirculant3D.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorCirculant3D.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorCirculant3D.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorCirculant3D.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorCirculant3D.block_depth": "tf.linalg.LinearOperatorCirculant.block_depth", + "tf.linalg.LinearOperatorCirculant3D.block_shape": "tf.linalg.LinearOperatorCirculant.block_shape", + "tf.linalg.LinearOperatorCirculant3D.block_shape_tensor": "tf.linalg.LinearOperatorCirculant.block_shape_tensor", + "tf.linalg.LinearOperatorCirculant3D.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorCirculant3D.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorCirculant3D.convolution_kernel": "tf.linalg.LinearOperatorCirculant.convolution_kernel", + "tf.linalg.LinearOperatorCirculant3D.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorCirculant3D.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorCirculant3D.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorCirculant3D.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorCirculant3D.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorCirculant3D.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorCirculant3D.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorCirculant3D.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorCirculant3D.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorCirculant3D.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorCirculant3D.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorCirculant3D.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorCirculant3D.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorCirculant3D.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorCirculant3D.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorCirculant3D.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorCirculant3D.name_scope": "tf.Module.name_scope", + 
"tf.linalg.LinearOperatorCirculant3D.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorCirculant3D.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorCirculant3D.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorCirculant3D.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorCirculant3D.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorCirculant3D.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorCirculant3D.spectrum": "tf.linalg.LinearOperatorCirculant.spectrum", + "tf.linalg.LinearOperatorCirculant3D.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorCirculant3D.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorCirculant3D.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorCirculant3D.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorCirculant3D.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorCirculant3D.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorCirculant3D.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorComposition.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorComposition.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorComposition.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorComposition.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorComposition.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorComposition.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorComposition.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorComposition.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorComposition.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorComposition.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorComposition.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorComposition.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorComposition.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorComposition.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorComposition.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorComposition.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorComposition.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorComposition.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorComposition.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorComposition.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorComposition.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorComposition.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorComposition.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorComposition.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorComposition.graph_parents": "tf.linalg.LinearOperator.graph_parents", + 
"tf.linalg.LinearOperatorComposition.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorComposition.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorComposition.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorComposition.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorComposition.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorComposition.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorComposition.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorComposition.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorComposition.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorComposition.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorComposition.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorComposition.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorComposition.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorComposition.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorComposition.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorComposition.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorComposition.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorComposition.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorComposition.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorComposition.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorComposition.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorComposition.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorComposition.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorDiag.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorDiag.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorDiag.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorDiag.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorDiag.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorDiag.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorDiag.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorDiag.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorDiag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorDiag.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorDiag.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorDiag.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorDiag.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorDiag.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorDiag.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorDiag.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorDiag.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorDiag.cond": "tf.linalg.LinearOperator.cond", + 
"tf.linalg.LinearOperatorDiag.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorDiag.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorDiag.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorDiag.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorDiag.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorDiag.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorDiag.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorDiag.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorDiag.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorDiag.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorDiag.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorDiag.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorDiag.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorDiag.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorDiag.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorDiag.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorDiag.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorDiag.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorDiag.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorDiag.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorDiag.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorDiag.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorDiag.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorDiag.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorDiag.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorDiag.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorDiag.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorDiag.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorDiag.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorDiag.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorFullMatrix.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorFullMatrix.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorFullMatrix.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorFullMatrix.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorFullMatrix.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorFullMatrix.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorFullMatrix.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorFullMatrix.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorFullMatrix.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorFullMatrix.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorFullMatrix.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorFullMatrix.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + 
"tf.linalg.LinearOperatorFullMatrix.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorFullMatrix.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorFullMatrix.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorFullMatrix.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorFullMatrix.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorFullMatrix.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorFullMatrix.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorFullMatrix.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorFullMatrix.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorFullMatrix.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorFullMatrix.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorFullMatrix.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorFullMatrix.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorFullMatrix.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorFullMatrix.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorFullMatrix.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorFullMatrix.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorFullMatrix.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorFullMatrix.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorFullMatrix.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorFullMatrix.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorFullMatrix.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorFullMatrix.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorFullMatrix.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorFullMatrix.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorFullMatrix.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorFullMatrix.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorFullMatrix.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorFullMatrix.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorFullMatrix.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorFullMatrix.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorFullMatrix.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorFullMatrix.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorFullMatrix.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorFullMatrix.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorFullMatrix.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorHouseholder.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorHouseholder.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorHouseholder.__ge__": "tf.keras.Model.__ge__", + 
"tf.linalg.LinearOperatorHouseholder.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorHouseholder.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorHouseholder.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorHouseholder.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorHouseholder.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorHouseholder.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorHouseholder.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorHouseholder.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorHouseholder.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorHouseholder.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorHouseholder.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorHouseholder.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorHouseholder.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorHouseholder.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorHouseholder.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorHouseholder.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorHouseholder.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorHouseholder.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorHouseholder.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorHouseholder.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorHouseholder.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorHouseholder.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorHouseholder.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorHouseholder.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorHouseholder.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorHouseholder.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorHouseholder.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorHouseholder.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorHouseholder.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorHouseholder.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorHouseholder.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorHouseholder.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorHouseholder.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorHouseholder.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorHouseholder.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorHouseholder.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorHouseholder.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorHouseholder.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorHouseholder.submodules": 
"tf.Module.submodules", + "tf.linalg.LinearOperatorHouseholder.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorHouseholder.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorHouseholder.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorHouseholder.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorHouseholder.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorHouseholder.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorIdentity.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorIdentity.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorIdentity.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorIdentity.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorIdentity.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorIdentity.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorIdentity.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorIdentity.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorIdentity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorIdentity.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorIdentity.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorIdentity.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorIdentity.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorIdentity.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorIdentity.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorIdentity.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorIdentity.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorIdentity.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorIdentity.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorIdentity.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorIdentity.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorIdentity.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorIdentity.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorIdentity.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorIdentity.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorIdentity.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorIdentity.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorIdentity.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorIdentity.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorIdentity.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorIdentity.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorIdentity.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorIdentity.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorIdentity.name_scope": "tf.Module.name_scope", + 
"tf.linalg.LinearOperatorIdentity.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorIdentity.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorIdentity.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorIdentity.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorIdentity.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorIdentity.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorIdentity.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorIdentity.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorIdentity.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorIdentity.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorIdentity.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorIdentity.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorIdentity.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorInversion.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorInversion.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorInversion.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorInversion.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorInversion.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorInversion.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorInversion.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorInversion.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorInversion.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorInversion.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorInversion.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorInversion.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorInversion.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorInversion.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorInversion.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorInversion.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorInversion.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorInversion.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorInversion.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorInversion.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorInversion.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorInversion.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorInversion.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorInversion.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorInversion.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorInversion.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorInversion.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + 
"tf.linalg.LinearOperatorInversion.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorInversion.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorInversion.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorInversion.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorInversion.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorInversion.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorInversion.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorInversion.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorInversion.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorInversion.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorInversion.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorInversion.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorInversion.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorInversion.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorInversion.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorInversion.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorInversion.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorInversion.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorInversion.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorInversion.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorInversion.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorKronecker.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorKronecker.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorKronecker.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorKronecker.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorKronecker.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorKronecker.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorKronecker.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorKronecker.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorKronecker.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorKronecker.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorKronecker.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorKronecker.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorKronecker.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorKronecker.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorKronecker.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorKronecker.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorKronecker.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorKronecker.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorKronecker.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorKronecker.diag_part": 
"tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorKronecker.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorKronecker.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorKronecker.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorKronecker.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorKronecker.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorKronecker.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorKronecker.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorKronecker.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorKronecker.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorKronecker.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorKronecker.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorKronecker.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorKronecker.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorKronecker.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorKronecker.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorKronecker.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorKronecker.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorKronecker.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorKronecker.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorKronecker.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorKronecker.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorKronecker.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorKronecker.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorKronecker.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorKronecker.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorKronecker.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorKronecker.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorKronecker.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorLowRankUpdate.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorLowRankUpdate.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorLowRankUpdate.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorLowRankUpdate.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorLowRankUpdate.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorLowRankUpdate.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorLowRankUpdate.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorLowRankUpdate.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorLowRankUpdate.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorLowRankUpdate.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorLowRankUpdate.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorLowRankUpdate.assert_non_singular": 
"tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorLowRankUpdate.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorLowRankUpdate.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorLowRankUpdate.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorLowRankUpdate.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorLowRankUpdate.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorLowRankUpdate.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorLowRankUpdate.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorLowRankUpdate.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorLowRankUpdate.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorLowRankUpdate.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorLowRankUpdate.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorLowRankUpdate.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorLowRankUpdate.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorLowRankUpdate.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorLowRankUpdate.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorLowRankUpdate.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorLowRankUpdate.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorLowRankUpdate.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorLowRankUpdate.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorLowRankUpdate.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorLowRankUpdate.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorLowRankUpdate.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorLowRankUpdate.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorLowRankUpdate.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorLowRankUpdate.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorLowRankUpdate.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorLowRankUpdate.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorLowRankUpdate.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorLowRankUpdate.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorLowRankUpdate.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorLowRankUpdate.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorLowRankUpdate.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorLowRankUpdate.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorLowRankUpdate.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorLowRankUpdate.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorLowRankUpdate.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorLowerTriangular.H": "tf.linalg.LinearOperator.H", + 
"tf.linalg.LinearOperatorLowerTriangular.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorLowerTriangular.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorLowerTriangular.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorLowerTriangular.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorLowerTriangular.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorLowerTriangular.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorLowerTriangular.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorLowerTriangular.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorLowerTriangular.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorLowerTriangular.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorLowerTriangular.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorLowerTriangular.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorLowerTriangular.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorLowerTriangular.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorLowerTriangular.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorLowerTriangular.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorLowerTriangular.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorLowerTriangular.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorLowerTriangular.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorLowerTriangular.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorLowerTriangular.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorLowerTriangular.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorLowerTriangular.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorLowerTriangular.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorLowerTriangular.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorLowerTriangular.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorLowerTriangular.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorLowerTriangular.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorLowerTriangular.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorLowerTriangular.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorLowerTriangular.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorLowerTriangular.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorLowerTriangular.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorLowerTriangular.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorLowerTriangular.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorLowerTriangular.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorLowerTriangular.shape": "tf.linalg.LinearOperator.shape", + 
"tf.linalg.LinearOperatorLowerTriangular.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorLowerTriangular.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorLowerTriangular.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorLowerTriangular.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorLowerTriangular.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorLowerTriangular.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorLowerTriangular.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorLowerTriangular.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorLowerTriangular.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorLowerTriangular.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorPermutation.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorPermutation.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorPermutation.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorPermutation.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorPermutation.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorPermutation.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorPermutation.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorPermutation.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorPermutation.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorPermutation.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorPermutation.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorPermutation.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorPermutation.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorPermutation.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorPermutation.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorPermutation.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorPermutation.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorPermutation.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorPermutation.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorPermutation.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorPermutation.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorPermutation.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorPermutation.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorPermutation.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorPermutation.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorPermutation.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorPermutation.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorPermutation.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorPermutation.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + 
"tf.linalg.LinearOperatorPermutation.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorPermutation.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorPermutation.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorPermutation.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorPermutation.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorPermutation.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorPermutation.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorPermutation.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorPermutation.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorPermutation.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorPermutation.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorPermutation.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorPermutation.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorPermutation.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorPermutation.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorPermutation.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorPermutation.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorPermutation.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorPermutation.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorScaledIdentity.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorScaledIdentity.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorScaledIdentity.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorScaledIdentity.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorScaledIdentity.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorScaledIdentity.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorScaledIdentity.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorScaledIdentity.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorScaledIdentity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorScaledIdentity.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorScaledIdentity.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorScaledIdentity.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorScaledIdentity.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorScaledIdentity.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorScaledIdentity.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorScaledIdentity.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorScaledIdentity.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorScaledIdentity.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorScaledIdentity.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorScaledIdentity.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + 
"tf.linalg.LinearOperatorScaledIdentity.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorScaledIdentity.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorScaledIdentity.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorScaledIdentity.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorScaledIdentity.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorScaledIdentity.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorScaledIdentity.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorScaledIdentity.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorScaledIdentity.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorScaledIdentity.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorScaledIdentity.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorScaledIdentity.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorScaledIdentity.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorScaledIdentity.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorScaledIdentity.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorScaledIdentity.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorScaledIdentity.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorScaledIdentity.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorScaledIdentity.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorScaledIdentity.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorScaledIdentity.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorScaledIdentity.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorScaledIdentity.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorScaledIdentity.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorScaledIdentity.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorScaledIdentity.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorScaledIdentity.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorToeplitz.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorToeplitz.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorToeplitz.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorToeplitz.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorToeplitz.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorToeplitz.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorToeplitz.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorToeplitz.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorToeplitz.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorToeplitz.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorToeplitz.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorToeplitz.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + 
"tf.linalg.LinearOperatorToeplitz.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorToeplitz.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorToeplitz.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorToeplitz.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorToeplitz.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorToeplitz.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorToeplitz.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorToeplitz.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorToeplitz.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorToeplitz.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorToeplitz.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorToeplitz.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorToeplitz.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorToeplitz.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorToeplitz.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorToeplitz.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorToeplitz.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorToeplitz.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorToeplitz.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorToeplitz.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorToeplitz.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorToeplitz.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorToeplitz.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorToeplitz.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorToeplitz.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorToeplitz.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorToeplitz.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorToeplitz.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorToeplitz.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorToeplitz.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorToeplitz.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorToeplitz.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorToeplitz.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorToeplitz.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorToeplitz.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorToeplitz.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorTridiag.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorTridiag.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorTridiag.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorTridiag.__gt__": "tf.keras.Model.__gt__", + 
"tf.linalg.LinearOperatorTridiag.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorTridiag.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorTridiag.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorTridiag.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorTridiag.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorTridiag.add_to_tensor": "tf.linalg.LinearOperator.add_to_tensor", + "tf.linalg.LinearOperatorTridiag.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorTridiag.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorTridiag.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorTridiag.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorTridiag.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorTridiag.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorTridiag.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorTridiag.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorTridiag.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorTridiag.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorTridiag.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorTridiag.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorTridiag.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorTridiag.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorTridiag.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorTridiag.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorTridiag.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorTridiag.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorTridiag.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorTridiag.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorTridiag.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorTridiag.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorTridiag.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorTridiag.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorTridiag.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorTridiag.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorTridiag.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorTridiag.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorTridiag.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + "tf.linalg.LinearOperatorTridiag.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorTridiag.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorTridiag.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorTridiag.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorTridiag.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + 
"tf.linalg.LinearOperatorTridiag.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorTridiag.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorTridiag.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorTridiag.variables": "tf.Module.variables", + "tf.linalg.LinearOperatorZeros.H": "tf.linalg.LinearOperator.H", + "tf.linalg.LinearOperatorZeros.__eq__": "tf.keras.Model.__eq__", + "tf.linalg.LinearOperatorZeros.__ge__": "tf.keras.Model.__ge__", + "tf.linalg.LinearOperatorZeros.__gt__": "tf.keras.Model.__gt__", + "tf.linalg.LinearOperatorZeros.__le__": "tf.keras.Model.__le__", + "tf.linalg.LinearOperatorZeros.__lt__": "tf.keras.Model.__lt__", + "tf.linalg.LinearOperatorZeros.__matmul__": "tf.linalg.LinearOperator.__matmul__", + "tf.linalg.LinearOperatorZeros.__ne__": "tf.keras.Model.__ne__", + "tf.linalg.LinearOperatorZeros.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.linalg.LinearOperatorZeros.adjoint": "tf.linalg.LinearOperator.adjoint", + "tf.linalg.LinearOperatorZeros.assert_non_singular": "tf.linalg.LinearOperator.assert_non_singular", + "tf.linalg.LinearOperatorZeros.assert_positive_definite": "tf.linalg.LinearOperator.assert_positive_definite", + "tf.linalg.LinearOperatorZeros.assert_self_adjoint": "tf.linalg.LinearOperator.assert_self_adjoint", + "tf.linalg.LinearOperatorZeros.batch_shape": "tf.linalg.LinearOperator.batch_shape", + "tf.linalg.LinearOperatorZeros.batch_shape_tensor": "tf.linalg.LinearOperator.batch_shape_tensor", + "tf.linalg.LinearOperatorZeros.cholesky": "tf.linalg.LinearOperator.cholesky", + "tf.linalg.LinearOperatorZeros.cond": "tf.linalg.LinearOperator.cond", + "tf.linalg.LinearOperatorZeros.determinant": "tf.linalg.LinearOperator.determinant", + "tf.linalg.LinearOperatorZeros.diag_part": "tf.linalg.LinearOperator.diag_part", + "tf.linalg.LinearOperatorZeros.domain_dimension": "tf.linalg.LinearOperator.domain_dimension", + "tf.linalg.LinearOperatorZeros.domain_dimension_tensor": "tf.linalg.LinearOperator.domain_dimension_tensor", + "tf.linalg.LinearOperatorZeros.dtype": "tf.linalg.LinearOperator.dtype", + "tf.linalg.LinearOperatorZeros.eigvals": "tf.linalg.LinearOperator.eigvals", + "tf.linalg.LinearOperatorZeros.graph_parents": "tf.linalg.LinearOperator.graph_parents", + "tf.linalg.LinearOperatorZeros.inverse": "tf.linalg.LinearOperator.inverse", + "tf.linalg.LinearOperatorZeros.is_non_singular": "tf.linalg.LinearOperator.is_non_singular", + "tf.linalg.LinearOperatorZeros.is_positive_definite": "tf.linalg.LinearOperator.is_positive_definite", + "tf.linalg.LinearOperatorZeros.is_self_adjoint": "tf.linalg.LinearOperator.is_self_adjoint", + "tf.linalg.LinearOperatorZeros.is_square": "tf.linalg.LinearOperator.is_square", + "tf.linalg.LinearOperatorZeros.log_abs_determinant": "tf.linalg.LinearOperator.log_abs_determinant", + "tf.linalg.LinearOperatorZeros.matmul": "tf.linalg.LinearOperator.matmul", + "tf.linalg.LinearOperatorZeros.matvec": "tf.linalg.LinearOperator.matvec", + "tf.linalg.LinearOperatorZeros.name": "tf.linalg.LinearOperator.name", + "tf.linalg.LinearOperatorZeros.name_scope": "tf.Module.name_scope", + "tf.linalg.LinearOperatorZeros.range_dimension": "tf.linalg.LinearOperator.range_dimension", + "tf.linalg.LinearOperatorZeros.range_dimension_tensor": "tf.linalg.LinearOperator.range_dimension_tensor", + "tf.linalg.LinearOperatorZeros.shape": "tf.linalg.LinearOperator.shape", + "tf.linalg.LinearOperatorZeros.shape_tensor": "tf.linalg.LinearOperator.shape_tensor", + 
"tf.linalg.LinearOperatorZeros.solve": "tf.linalg.LinearOperator.solve", + "tf.linalg.LinearOperatorZeros.solvevec": "tf.linalg.LinearOperator.solvevec", + "tf.linalg.LinearOperatorZeros.submodules": "tf.Module.submodules", + "tf.linalg.LinearOperatorZeros.tensor_rank": "tf.linalg.LinearOperator.tensor_rank", + "tf.linalg.LinearOperatorZeros.tensor_rank_tensor": "tf.linalg.LinearOperator.tensor_rank_tensor", + "tf.linalg.LinearOperatorZeros.to_dense": "tf.linalg.LinearOperator.to_dense", + "tf.linalg.LinearOperatorZeros.trace": "tf.linalg.LinearOperator.trace", + "tf.linalg.LinearOperatorZeros.trainable_variables": "tf.Module.trainable_variables", + "tf.linalg.LinearOperatorZeros.variables": "tf.Module.variables", + "tf.linalg.einsum": "tf.einsum", + "tf.linalg.eye": "tf.eye", + "tf.linalg.l2_normalize": "tf.math.l2_normalize", + "tf.linalg.norm": "tf.norm", + "tf.linalg.tensordot": "tf.tensordot", + "tf.lite.Interpreter.__eq__": "tf.keras.Model.__eq__", + "tf.lite.Interpreter.__ge__": "tf.keras.Model.__ge__", + "tf.lite.Interpreter.__gt__": "tf.keras.Model.__gt__", + "tf.lite.Interpreter.__le__": "tf.keras.Model.__le__", + "tf.lite.Interpreter.__lt__": "tf.keras.Model.__lt__", + "tf.lite.Interpreter.__ne__": "tf.keras.Model.__ne__", + "tf.lite.Interpreter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.lite.OpsSet.name": "tf.distribute.InputReplicationMode.name", + "tf.lite.OpsSet.value": "tf.distribute.InputReplicationMode.value", + "tf.lite.Optimize.name": "tf.distribute.InputReplicationMode.name", + "tf.lite.Optimize.value": "tf.distribute.InputReplicationMode.value", + "tf.lite.RepresentativeDataset.__eq__": "tf.keras.Model.__eq__", + "tf.lite.RepresentativeDataset.__ge__": "tf.keras.Model.__ge__", + "tf.lite.RepresentativeDataset.__gt__": "tf.keras.Model.__gt__", + "tf.lite.RepresentativeDataset.__le__": "tf.keras.Model.__le__", + "tf.lite.RepresentativeDataset.__lt__": "tf.keras.Model.__lt__", + "tf.lite.RepresentativeDataset.__ne__": "tf.keras.Model.__ne__", + "tf.lite.RepresentativeDataset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.lite.TFLiteConverter.__eq__": "tf.keras.Model.__eq__", + "tf.lite.TFLiteConverter.__ge__": "tf.keras.Model.__ge__", + "tf.lite.TFLiteConverter.__gt__": "tf.keras.Model.__gt__", + "tf.lite.TFLiteConverter.__le__": "tf.keras.Model.__le__", + "tf.lite.TFLiteConverter.__lt__": "tf.keras.Model.__lt__", + "tf.lite.TFLiteConverter.__ne__": "tf.keras.Model.__ne__", + "tf.lite.TFLiteConverter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.lite.TargetSpec.__eq__": "tf.keras.Model.__eq__", + "tf.lite.TargetSpec.__ge__": "tf.keras.Model.__ge__", + "tf.lite.TargetSpec.__gt__": "tf.keras.Model.__gt__", + "tf.lite.TargetSpec.__le__": "tf.keras.Model.__le__", + "tf.lite.TargetSpec.__lt__": "tf.keras.Model.__lt__", + "tf.lite.TargetSpec.__ne__": "tf.keras.Model.__ne__", + "tf.lite.TargetSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.logical_and": "tf.math.logical_and", + "tf.logical_not": "tf.math.logical_not", + "tf.logical_or": "tf.math.logical_or", + "tf.lookup.KeyValueTensorInitializer.__eq__": "tf.keras.Model.__eq__", + "tf.lookup.KeyValueTensorInitializer.__ge__": "tf.keras.Model.__ge__", + "tf.lookup.KeyValueTensorInitializer.__gt__": "tf.keras.Model.__gt__", + "tf.lookup.KeyValueTensorInitializer.__le__": "tf.keras.Model.__le__", + "tf.lookup.KeyValueTensorInitializer.__lt__": "tf.keras.Model.__lt__", + "tf.lookup.KeyValueTensorInitializer.__ne__": "tf.keras.Model.__ne__", + 
"tf.lookup.KeyValueTensorInitializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.lookup.StaticHashTable.__eq__": "tf.keras.Model.__eq__", + "tf.lookup.StaticHashTable.__ge__": "tf.keras.Model.__ge__", + "tf.lookup.StaticHashTable.__gt__": "tf.keras.Model.__gt__", + "tf.lookup.StaticHashTable.__le__": "tf.keras.Model.__le__", + "tf.lookup.StaticHashTable.__lt__": "tf.keras.Model.__lt__", + "tf.lookup.StaticHashTable.__ne__": "tf.keras.Model.__ne__", + "tf.lookup.StaticHashTable.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.lookup.StaticVocabularyTable.__eq__": "tf.keras.Model.__eq__", + "tf.lookup.StaticVocabularyTable.__ge__": "tf.keras.Model.__ge__", + "tf.lookup.StaticVocabularyTable.__gt__": "tf.keras.Model.__gt__", + "tf.lookup.StaticVocabularyTable.__le__": "tf.keras.Model.__le__", + "tf.lookup.StaticVocabularyTable.__lt__": "tf.keras.Model.__lt__", + "tf.lookup.StaticVocabularyTable.__ne__": "tf.keras.Model.__ne__", + "tf.lookup.StaticVocabularyTable.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.lookup.StaticVocabularyTable.key_dtype": "tf.lookup.StaticHashTable.key_dtype", + "tf.lookup.StaticVocabularyTable.value_dtype": "tf.lookup.StaticHashTable.value_dtype", + "tf.lookup.TextFileIndex.__eq__": "tf.keras.Model.__eq__", + "tf.lookup.TextFileIndex.__ge__": "tf.keras.Model.__ge__", + "tf.lookup.TextFileIndex.__gt__": "tf.keras.Model.__gt__", + "tf.lookup.TextFileIndex.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.lookup.TextFileIndex.__le__": "tf.keras.Model.__le__", + "tf.lookup.TextFileIndex.__lt__": "tf.keras.Model.__lt__", + "tf.lookup.TextFileIndex.__ne__": "tf.keras.Model.__ne__", + "tf.lookup.TextFileIndex.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.lookup.TextFileInitializer.__eq__": "tf.keras.Model.__eq__", + "tf.lookup.TextFileInitializer.__ge__": "tf.keras.Model.__ge__", + "tf.lookup.TextFileInitializer.__gt__": "tf.keras.Model.__gt__", + "tf.lookup.TextFileInitializer.__le__": "tf.keras.Model.__le__", + "tf.lookup.TextFileInitializer.__lt__": "tf.keras.Model.__lt__", + "tf.lookup.TextFileInitializer.__ne__": "tf.keras.Model.__ne__", + "tf.lookup.TextFileInitializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.lookup.TextFileInitializer.key_dtype": "tf.lookup.KeyValueTensorInitializer.key_dtype", + "tf.lookup.TextFileInitializer.value_dtype": "tf.lookup.KeyValueTensorInitializer.value_dtype", + "tf.lookup.experimental.DenseHashTable.__eq__": "tf.keras.Model.__eq__", + "tf.lookup.experimental.DenseHashTable.__ge__": "tf.keras.Model.__ge__", + "tf.lookup.experimental.DenseHashTable.__gt__": "tf.keras.Model.__gt__", + "tf.lookup.experimental.DenseHashTable.__le__": "tf.keras.Model.__le__", + "tf.lookup.experimental.DenseHashTable.__lt__": "tf.keras.Model.__lt__", + "tf.lookup.experimental.DenseHashTable.__ne__": "tf.keras.Model.__ne__", + "tf.lookup.experimental.DenseHashTable.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.lookup.experimental.DenseHashTable.key_dtype": "tf.lookup.StaticHashTable.key_dtype", + "tf.lookup.experimental.DenseHashTable.resource_handle": "tf.lookup.StaticHashTable.resource_handle", + "tf.lookup.experimental.DenseHashTable.value_dtype": "tf.lookup.StaticHashTable.value_dtype", + "tf.losses": "tf.keras.losses", + "tf.losses.BinaryCrossentropy": "tf.keras.losses.BinaryCrossentropy", + "tf.losses.BinaryCrossentropy.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.BinaryCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.losses.BinaryCrossentropy.__ge__": 
"tf.keras.Model.__ge__", + "tf.losses.BinaryCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.losses.BinaryCrossentropy.__init__": "tf.keras.losses.BinaryCrossentropy.__init__", + "tf.losses.BinaryCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.losses.BinaryCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.losses.BinaryCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.losses.BinaryCrossentropy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.BinaryCrossentropy.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.BinaryCrossentropy.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.CategoricalCrossentropy": "tf.keras.losses.CategoricalCrossentropy", + "tf.losses.CategoricalCrossentropy.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.CategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.losses.CategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.losses.CategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.losses.CategoricalCrossentropy.__init__": "tf.keras.losses.CategoricalCrossentropy.__init__", + "tf.losses.CategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.losses.CategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.losses.CategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.losses.CategoricalCrossentropy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.CategoricalCrossentropy.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.CategoricalCrossentropy.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.CategoricalHinge": "tf.keras.losses.CategoricalHinge", + "tf.losses.CategoricalHinge.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.CategoricalHinge.__eq__": "tf.keras.Model.__eq__", + "tf.losses.CategoricalHinge.__ge__": "tf.keras.Model.__ge__", + "tf.losses.CategoricalHinge.__gt__": "tf.keras.Model.__gt__", + "tf.losses.CategoricalHinge.__init__": "tf.keras.losses.CategoricalHinge.__init__", + "tf.losses.CategoricalHinge.__le__": "tf.keras.Model.__le__", + "tf.losses.CategoricalHinge.__lt__": "tf.keras.Model.__lt__", + "tf.losses.CategoricalHinge.__ne__": "tf.keras.Model.__ne__", + "tf.losses.CategoricalHinge.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.CategoricalHinge.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.CategoricalHinge.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.CosineSimilarity": "tf.keras.losses.CosineSimilarity", + "tf.losses.CosineSimilarity.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.CosineSimilarity.__eq__": "tf.keras.Model.__eq__", + "tf.losses.CosineSimilarity.__ge__": "tf.keras.Model.__ge__", + "tf.losses.CosineSimilarity.__gt__": "tf.keras.Model.__gt__", + "tf.losses.CosineSimilarity.__init__": "tf.keras.losses.CosineSimilarity.__init__", + "tf.losses.CosineSimilarity.__le__": "tf.keras.Model.__le__", + "tf.losses.CosineSimilarity.__lt__": "tf.keras.Model.__lt__", + "tf.losses.CosineSimilarity.__ne__": "tf.keras.Model.__ne__", + "tf.losses.CosineSimilarity.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.CosineSimilarity.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.CosineSimilarity.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.Hinge": "tf.keras.losses.Hinge", + "tf.losses.Hinge.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.Hinge.__eq__": "tf.keras.Model.__eq__", + "tf.losses.Hinge.__ge__": "tf.keras.Model.__ge__", + 
"tf.losses.Hinge.__gt__": "tf.keras.Model.__gt__", + "tf.losses.Hinge.__init__": "tf.keras.losses.Hinge.__init__", + "tf.losses.Hinge.__le__": "tf.keras.Model.__le__", + "tf.losses.Hinge.__lt__": "tf.keras.Model.__lt__", + "tf.losses.Hinge.__ne__": "tf.keras.Model.__ne__", + "tf.losses.Hinge.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.Hinge.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.Hinge.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.Huber": "tf.keras.losses.Huber", + "tf.losses.Huber.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.Huber.__eq__": "tf.keras.Model.__eq__", + "tf.losses.Huber.__ge__": "tf.keras.Model.__ge__", + "tf.losses.Huber.__gt__": "tf.keras.Model.__gt__", + "tf.losses.Huber.__init__": "tf.keras.losses.Huber.__init__", + "tf.losses.Huber.__le__": "tf.keras.Model.__le__", + "tf.losses.Huber.__lt__": "tf.keras.Model.__lt__", + "tf.losses.Huber.__ne__": "tf.keras.Model.__ne__", + "tf.losses.Huber.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.Huber.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.Huber.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.KLD": "tf.keras.losses.KLD", + "tf.losses.KLDivergence": "tf.keras.losses.KLDivergence", + "tf.losses.KLDivergence.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.KLDivergence.__eq__": "tf.keras.Model.__eq__", + "tf.losses.KLDivergence.__ge__": "tf.keras.Model.__ge__", + "tf.losses.KLDivergence.__gt__": "tf.keras.Model.__gt__", + "tf.losses.KLDivergence.__init__": "tf.keras.losses.KLDivergence.__init__", + "tf.losses.KLDivergence.__le__": "tf.keras.Model.__le__", + "tf.losses.KLDivergence.__lt__": "tf.keras.Model.__lt__", + "tf.losses.KLDivergence.__ne__": "tf.keras.Model.__ne__", + "tf.losses.KLDivergence.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.KLDivergence.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.KLDivergence.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.LogCosh": "tf.keras.losses.LogCosh", + "tf.losses.LogCosh.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.LogCosh.__eq__": "tf.keras.Model.__eq__", + "tf.losses.LogCosh.__ge__": "tf.keras.Model.__ge__", + "tf.losses.LogCosh.__gt__": "tf.keras.Model.__gt__", + "tf.losses.LogCosh.__init__": "tf.keras.losses.LogCosh.__init__", + "tf.losses.LogCosh.__le__": "tf.keras.Model.__le__", + "tf.losses.LogCosh.__lt__": "tf.keras.Model.__lt__", + "tf.losses.LogCosh.__ne__": "tf.keras.Model.__ne__", + "tf.losses.LogCosh.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.LogCosh.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.LogCosh.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.Loss": "tf.keras.losses.Loss", + "tf.losses.Loss.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.Loss.__eq__": "tf.keras.Model.__eq__", + "tf.losses.Loss.__ge__": "tf.keras.Model.__ge__", + "tf.losses.Loss.__gt__": "tf.keras.Model.__gt__", + "tf.losses.Loss.__init__": "tf.keras.losses.Loss.__init__", + "tf.losses.Loss.__le__": "tf.keras.Model.__le__", + "tf.losses.Loss.__lt__": "tf.keras.Model.__lt__", + "tf.losses.Loss.__ne__": "tf.keras.Model.__ne__", + "tf.losses.Loss.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.Loss.call": "tf.keras.losses.Loss.call", + "tf.losses.Loss.get_config": "tf.keras.losses.Loss.get_config", + "tf.losses.MAE": "tf.keras.losses.MAE", + "tf.losses.MAPE": "tf.keras.losses.MAPE", + 
"tf.losses.MSE": "tf.keras.losses.MSE", + "tf.losses.MSLE": "tf.keras.losses.MSLE", + "tf.losses.MeanAbsoluteError": "tf.keras.losses.MeanAbsoluteError", + "tf.losses.MeanAbsoluteError.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.MeanAbsoluteError.__eq__": "tf.keras.Model.__eq__", + "tf.losses.MeanAbsoluteError.__ge__": "tf.keras.Model.__ge__", + "tf.losses.MeanAbsoluteError.__gt__": "tf.keras.Model.__gt__", + "tf.losses.MeanAbsoluteError.__init__": "tf.keras.losses.MeanAbsoluteError.__init__", + "tf.losses.MeanAbsoluteError.__le__": "tf.keras.Model.__le__", + "tf.losses.MeanAbsoluteError.__lt__": "tf.keras.Model.__lt__", + "tf.losses.MeanAbsoluteError.__ne__": "tf.keras.Model.__ne__", + "tf.losses.MeanAbsoluteError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.MeanAbsoluteError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.MeanAbsoluteError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.MeanAbsolutePercentageError": "tf.keras.losses.MeanAbsolutePercentageError", + "tf.losses.MeanAbsolutePercentageError.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.MeanAbsolutePercentageError.__eq__": "tf.keras.Model.__eq__", + "tf.losses.MeanAbsolutePercentageError.__ge__": "tf.keras.Model.__ge__", + "tf.losses.MeanAbsolutePercentageError.__gt__": "tf.keras.Model.__gt__", + "tf.losses.MeanAbsolutePercentageError.__init__": "tf.keras.losses.MeanAbsolutePercentageError.__init__", + "tf.losses.MeanAbsolutePercentageError.__le__": "tf.keras.Model.__le__", + "tf.losses.MeanAbsolutePercentageError.__lt__": "tf.keras.Model.__lt__", + "tf.losses.MeanAbsolutePercentageError.__ne__": "tf.keras.Model.__ne__", + "tf.losses.MeanAbsolutePercentageError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.MeanAbsolutePercentageError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.MeanAbsolutePercentageError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.MeanSquaredError": "tf.keras.losses.MeanSquaredError", + "tf.losses.MeanSquaredError.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.MeanSquaredError.__eq__": "tf.keras.Model.__eq__", + "tf.losses.MeanSquaredError.__ge__": "tf.keras.Model.__ge__", + "tf.losses.MeanSquaredError.__gt__": "tf.keras.Model.__gt__", + "tf.losses.MeanSquaredError.__init__": "tf.keras.losses.MeanSquaredError.__init__", + "tf.losses.MeanSquaredError.__le__": "tf.keras.Model.__le__", + "tf.losses.MeanSquaredError.__lt__": "tf.keras.Model.__lt__", + "tf.losses.MeanSquaredError.__ne__": "tf.keras.Model.__ne__", + "tf.losses.MeanSquaredError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.MeanSquaredError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.MeanSquaredError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.MeanSquaredLogarithmicError": "tf.keras.losses.MeanSquaredLogarithmicError", + "tf.losses.MeanSquaredLogarithmicError.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.MeanSquaredLogarithmicError.__eq__": "tf.keras.Model.__eq__", + "tf.losses.MeanSquaredLogarithmicError.__ge__": "tf.keras.Model.__ge__", + "tf.losses.MeanSquaredLogarithmicError.__gt__": "tf.keras.Model.__gt__", + "tf.losses.MeanSquaredLogarithmicError.__init__": "tf.keras.losses.MeanSquaredLogarithmicError.__init__", + "tf.losses.MeanSquaredLogarithmicError.__le__": "tf.keras.Model.__le__", + "tf.losses.MeanSquaredLogarithmicError.__lt__": "tf.keras.Model.__lt__", + "tf.losses.MeanSquaredLogarithmicError.__ne__": 
"tf.keras.Model.__ne__", + "tf.losses.MeanSquaredLogarithmicError.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.MeanSquaredLogarithmicError.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.MeanSquaredLogarithmicError.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.Poisson": "tf.keras.losses.Poisson", + "tf.losses.Poisson.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.Poisson.__eq__": "tf.keras.Model.__eq__", + "tf.losses.Poisson.__ge__": "tf.keras.Model.__ge__", + "tf.losses.Poisson.__gt__": "tf.keras.Model.__gt__", + "tf.losses.Poisson.__init__": "tf.keras.losses.Poisson.__init__", + "tf.losses.Poisson.__le__": "tf.keras.Model.__le__", + "tf.losses.Poisson.__lt__": "tf.keras.Model.__lt__", + "tf.losses.Poisson.__ne__": "tf.keras.Model.__ne__", + "tf.losses.Poisson.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.Poisson.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.Poisson.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.Reduction": "tf.keras.losses.Reduction", + "tf.losses.Reduction.__eq__": "tf.keras.Model.__eq__", + "tf.losses.Reduction.__ge__": "tf.keras.Model.__ge__", + "tf.losses.Reduction.__gt__": "tf.keras.Model.__gt__", + "tf.losses.Reduction.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.losses.Reduction.__le__": "tf.keras.Model.__le__", + "tf.losses.Reduction.__lt__": "tf.keras.Model.__lt__", + "tf.losses.Reduction.__ne__": "tf.keras.Model.__ne__", + "tf.losses.Reduction.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.SparseCategoricalCrossentropy": "tf.keras.losses.SparseCategoricalCrossentropy", + "tf.losses.SparseCategoricalCrossentropy.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.SparseCategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.losses.SparseCategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.losses.SparseCategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.losses.SparseCategoricalCrossentropy.__init__": "tf.keras.losses.SparseCategoricalCrossentropy.__init__", + "tf.losses.SparseCategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.losses.SparseCategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.losses.SparseCategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.losses.SparseCategoricalCrossentropy.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.SparseCategoricalCrossentropy.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.SparseCategoricalCrossentropy.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.SquaredHinge": "tf.keras.losses.SquaredHinge", + "tf.losses.SquaredHinge.__call__": "tf.keras.losses.Loss.__call__", + "tf.losses.SquaredHinge.__eq__": "tf.keras.Model.__eq__", + "tf.losses.SquaredHinge.__ge__": "tf.keras.Model.__ge__", + "tf.losses.SquaredHinge.__gt__": "tf.keras.Model.__gt__", + "tf.losses.SquaredHinge.__init__": "tf.keras.losses.SquaredHinge.__init__", + "tf.losses.SquaredHinge.__le__": "tf.keras.Model.__le__", + "tf.losses.SquaredHinge.__lt__": "tf.keras.Model.__lt__", + "tf.losses.SquaredHinge.__ne__": "tf.keras.Model.__ne__", + "tf.losses.SquaredHinge.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.losses.SquaredHinge.call": "tf.keras.losses.BinaryCrossentropy.call", + "tf.losses.SquaredHinge.get_config": "tf.keras.losses.BinaryCrossentropy.get_config", + "tf.losses.binary_crossentropy": "tf.keras.losses.binary_crossentropy", + 
"tf.losses.categorical_crossentropy": "tf.keras.losses.categorical_crossentropy", + "tf.losses.categorical_hinge": "tf.keras.losses.categorical_hinge", + "tf.losses.cosine_similarity": "tf.keras.losses.cosine_similarity", + "tf.losses.deserialize": "tf.keras.losses.deserialize", + "tf.losses.get": "tf.keras.losses.get", + "tf.losses.hinge": "tf.keras.losses.hinge", + "tf.losses.kld": "tf.keras.losses.KLD", + "tf.losses.kullback_leibler_divergence": "tf.keras.losses.KLD", + "tf.losses.logcosh": "tf.keras.losses.logcosh", + "tf.losses.mae": "tf.keras.losses.MAE", + "tf.losses.mape": "tf.keras.losses.MAPE", + "tf.losses.mean_absolute_error": "tf.keras.losses.MAE", + "tf.losses.mean_absolute_percentage_error": "tf.keras.losses.MAPE", + "tf.losses.mean_squared_error": "tf.keras.losses.MSE", + "tf.losses.mean_squared_logarithmic_error": "tf.keras.losses.MSLE", + "tf.losses.mse": "tf.keras.losses.MSE", + "tf.losses.msle": "tf.keras.losses.MSLE", + "tf.losses.poisson": "tf.keras.losses.poisson", + "tf.losses.serialize": "tf.keras.losses.serialize", + "tf.losses.sparse_categorical_crossentropy": "tf.keras.losses.sparse_categorical_crossentropy", + "tf.losses.squared_hinge": "tf.keras.losses.squared_hinge", + "tf.math.log_softmax": "tf.nn.log_softmax", + "tf.math.mod": "tf.math.floormod", + "tf.math.reduce_all": "tf.reduce_all", + "tf.math.softmax": "tf.nn.softmax", + "tf.math.softsign": "tf.nn.softsign", + "tf.matmul": "tf.linalg.matmul", + "tf.matrix_square_root": "tf.linalg.sqrtm", + "tf.maximum": "tf.math.maximum", + "tf.metrics": "tf.keras.metrics", + "tf.metrics.AUC": "tf.keras.metrics.AUC", + "tf.metrics.AUC.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.AUC.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.AUC.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.AUC.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.AUC.__init__": "tf.keras.metrics.AUC.__init__", + "tf.metrics.AUC.__le__": "tf.keras.Model.__le__", + "tf.metrics.AUC.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.AUC.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.AUC.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.AUC.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.AUC.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.AUC.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.AUC.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.AUC.build": "tf.keras.layers.Layer.build", + "tf.metrics.AUC.call": "tf.keras.layers.Layer.call", + "tf.metrics.AUC.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.AUC.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.AUC.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.AUC.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.AUC.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.AUC.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.AUC.get_config": "tf.keras.metrics.AUC.get_config", + "tf.metrics.AUC.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.AUC.input": "tf.keras.layers.Layer.input", + "tf.metrics.AUC.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.AUC.interpolate_pr_auc": "tf.keras.metrics.AUC.interpolate_pr_auc", + "tf.metrics.AUC.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.AUC.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.AUC.name": "tf.keras.layers.Layer.name", + "tf.metrics.AUC.name_scope": "tf.Module.name_scope", + 
"tf.metrics.AUC.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.AUC.output": "tf.keras.layers.Layer.output", + "tf.metrics.AUC.reset_states": "tf.keras.metrics.AUC.reset_states", + "tf.metrics.AUC.result": "tf.keras.metrics.AUC.result", + "tf.metrics.AUC.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.AUC.submodules": "tf.Module.submodules", + "tf.metrics.AUC.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.AUC.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.AUC.update_state": "tf.keras.metrics.AUC.update_state", + "tf.metrics.AUC.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.Accuracy": "tf.keras.metrics.Accuracy", + "tf.metrics.Accuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.Accuracy.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.Accuracy.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.Accuracy.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.Accuracy.__init__": "tf.keras.metrics.Accuracy.__init__", + "tf.metrics.Accuracy.__le__": "tf.keras.Model.__le__", + "tf.metrics.Accuracy.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.Accuracy.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.Accuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.Accuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.Accuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.Accuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.Accuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.Accuracy.build": "tf.keras.layers.Layer.build", + "tf.metrics.Accuracy.call": "tf.keras.layers.Layer.call", + "tf.metrics.Accuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.Accuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.Accuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.Accuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.Accuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.Accuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.Accuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.Accuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.Accuracy.input": "tf.keras.layers.Layer.input", + "tf.metrics.Accuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.Accuracy.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.Accuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.Accuracy.name": "tf.keras.layers.Layer.name", + "tf.metrics.Accuracy.name_scope": "tf.Module.name_scope", + "tf.metrics.Accuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.Accuracy.output": "tf.keras.layers.Layer.output", + "tf.metrics.Accuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.Accuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.Accuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.Accuracy.submodules": "tf.Module.submodules", + "tf.metrics.Accuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.Accuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.Accuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.Accuracy.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.BinaryAccuracy": "tf.keras.metrics.BinaryAccuracy", + 
"tf.metrics.BinaryAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.BinaryAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.BinaryAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.BinaryAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.BinaryAccuracy.__init__": "tf.keras.metrics.BinaryAccuracy.__init__", + "tf.metrics.BinaryAccuracy.__le__": "tf.keras.Model.__le__", + "tf.metrics.BinaryAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.BinaryAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.BinaryAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.BinaryAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.BinaryAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.BinaryAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.BinaryAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.BinaryAccuracy.build": "tf.keras.layers.Layer.build", + "tf.metrics.BinaryAccuracy.call": "tf.keras.layers.Layer.call", + "tf.metrics.BinaryAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.BinaryAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.BinaryAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.BinaryAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.BinaryAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.BinaryAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.BinaryAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.BinaryAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.BinaryAccuracy.input": "tf.keras.layers.Layer.input", + "tf.metrics.BinaryAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.BinaryAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.BinaryAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.BinaryAccuracy.name": "tf.keras.layers.Layer.name", + "tf.metrics.BinaryAccuracy.name_scope": "tf.Module.name_scope", + "tf.metrics.BinaryAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.BinaryAccuracy.output": "tf.keras.layers.Layer.output", + "tf.metrics.BinaryAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.BinaryAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.BinaryAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.BinaryAccuracy.submodules": "tf.Module.submodules", + "tf.metrics.BinaryAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.BinaryAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.BinaryAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.BinaryAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.BinaryCrossentropy": "tf.keras.metrics.BinaryCrossentropy", + "tf.metrics.BinaryCrossentropy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.BinaryCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.BinaryCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.BinaryCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.BinaryCrossentropy.__init__": "tf.keras.metrics.BinaryCrossentropy.__init__", + "tf.metrics.BinaryCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.metrics.BinaryCrossentropy.__lt__": "tf.keras.Model.__lt__", + 
"tf.metrics.BinaryCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.BinaryCrossentropy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.BinaryCrossentropy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.BinaryCrossentropy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.BinaryCrossentropy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.BinaryCrossentropy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.BinaryCrossentropy.build": "tf.keras.layers.Layer.build", + "tf.metrics.BinaryCrossentropy.call": "tf.keras.layers.Layer.call", + "tf.metrics.BinaryCrossentropy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.BinaryCrossentropy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.BinaryCrossentropy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.BinaryCrossentropy.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.BinaryCrossentropy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.BinaryCrossentropy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.BinaryCrossentropy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.BinaryCrossentropy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.BinaryCrossentropy.input": "tf.keras.layers.Layer.input", + "tf.metrics.BinaryCrossentropy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.BinaryCrossentropy.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.BinaryCrossentropy.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.BinaryCrossentropy.name": "tf.keras.layers.Layer.name", + "tf.metrics.BinaryCrossentropy.name_scope": "tf.Module.name_scope", + "tf.metrics.BinaryCrossentropy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.BinaryCrossentropy.output": "tf.keras.layers.Layer.output", + "tf.metrics.BinaryCrossentropy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.BinaryCrossentropy.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.BinaryCrossentropy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.BinaryCrossentropy.submodules": "tf.Module.submodules", + "tf.metrics.BinaryCrossentropy.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.BinaryCrossentropy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.BinaryCrossentropy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.BinaryCrossentropy.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.CategoricalAccuracy": "tf.keras.metrics.CategoricalAccuracy", + "tf.metrics.CategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.CategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.CategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.CategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.CategoricalAccuracy.__init__": "tf.keras.metrics.CategoricalAccuracy.__init__", + "tf.metrics.CategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.metrics.CategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.CategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.CategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.CategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.CategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + 
"tf.metrics.CategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.CategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.CategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.metrics.CategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.metrics.CategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.CategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.CategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.CategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.CategoricalAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.CategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.CategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.CategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.CategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.metrics.CategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.CategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.CategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.CategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.metrics.CategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.metrics.CategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.CategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.metrics.CategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.CategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.CategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.CategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.metrics.CategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.CategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.CategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.CategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.CategoricalCrossentropy": "tf.keras.metrics.CategoricalCrossentropy", + "tf.metrics.CategoricalCrossentropy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.CategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.CategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.CategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.CategoricalCrossentropy.__init__": "tf.keras.metrics.CategoricalCrossentropy.__init__", + "tf.metrics.CategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.metrics.CategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.CategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.CategoricalCrossentropy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.CategoricalCrossentropy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.CategoricalCrossentropy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.CategoricalCrossentropy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.CategoricalCrossentropy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.CategoricalCrossentropy.build": "tf.keras.layers.Layer.build", + 
"tf.metrics.CategoricalCrossentropy.call": "tf.keras.layers.Layer.call", + "tf.metrics.CategoricalCrossentropy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.CategoricalCrossentropy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.CategoricalCrossentropy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.CategoricalCrossentropy.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.CategoricalCrossentropy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.CategoricalCrossentropy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.CategoricalCrossentropy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.CategoricalCrossentropy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.CategoricalCrossentropy.input": "tf.keras.layers.Layer.input", + "tf.metrics.CategoricalCrossentropy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.CategoricalCrossentropy.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.CategoricalCrossentropy.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.CategoricalCrossentropy.name": "tf.keras.layers.Layer.name", + "tf.metrics.CategoricalCrossentropy.name_scope": "tf.Module.name_scope", + "tf.metrics.CategoricalCrossentropy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.CategoricalCrossentropy.output": "tf.keras.layers.Layer.output", + "tf.metrics.CategoricalCrossentropy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.CategoricalCrossentropy.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.CategoricalCrossentropy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.CategoricalCrossentropy.submodules": "tf.Module.submodules", + "tf.metrics.CategoricalCrossentropy.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.CategoricalCrossentropy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.CategoricalCrossentropy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.CategoricalCrossentropy.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.CategoricalHinge": "tf.keras.metrics.CategoricalHinge", + "tf.metrics.CategoricalHinge.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.CategoricalHinge.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.CategoricalHinge.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.CategoricalHinge.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.CategoricalHinge.__init__": "tf.keras.metrics.CategoricalHinge.__init__", + "tf.metrics.CategoricalHinge.__le__": "tf.keras.Model.__le__", + "tf.metrics.CategoricalHinge.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.CategoricalHinge.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.CategoricalHinge.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.CategoricalHinge.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.CategoricalHinge.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.CategoricalHinge.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.CategoricalHinge.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.CategoricalHinge.build": "tf.keras.layers.Layer.build", + "tf.metrics.CategoricalHinge.call": "tf.keras.layers.Layer.call", + "tf.metrics.CategoricalHinge.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.CategoricalHinge.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + 
"tf.metrics.CategoricalHinge.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.CategoricalHinge.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.CategoricalHinge.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.CategoricalHinge.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.CategoricalHinge.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.CategoricalHinge.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.CategoricalHinge.input": "tf.keras.layers.Layer.input", + "tf.metrics.CategoricalHinge.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.CategoricalHinge.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.CategoricalHinge.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.CategoricalHinge.name": "tf.keras.layers.Layer.name", + "tf.metrics.CategoricalHinge.name_scope": "tf.Module.name_scope", + "tf.metrics.CategoricalHinge.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.CategoricalHinge.output": "tf.keras.layers.Layer.output", + "tf.metrics.CategoricalHinge.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.CategoricalHinge.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.CategoricalHinge.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.CategoricalHinge.submodules": "tf.Module.submodules", + "tf.metrics.CategoricalHinge.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.CategoricalHinge.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.CategoricalHinge.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.CategoricalHinge.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.CosineSimilarity": "tf.keras.metrics.CosineSimilarity", + "tf.metrics.CosineSimilarity.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.CosineSimilarity.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.CosineSimilarity.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.CosineSimilarity.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.CosineSimilarity.__init__": "tf.keras.metrics.CosineSimilarity.__init__", + "tf.metrics.CosineSimilarity.__le__": "tf.keras.Model.__le__", + "tf.metrics.CosineSimilarity.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.CosineSimilarity.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.CosineSimilarity.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.CosineSimilarity.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.CosineSimilarity.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.CosineSimilarity.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.CosineSimilarity.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.CosineSimilarity.build": "tf.keras.layers.Layer.build", + "tf.metrics.CosineSimilarity.call": "tf.keras.layers.Layer.call", + "tf.metrics.CosineSimilarity.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.CosineSimilarity.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.CosineSimilarity.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.CosineSimilarity.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.CosineSimilarity.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.CosineSimilarity.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.CosineSimilarity.get_config": "tf.keras.metrics.Accuracy.get_config", + 
"tf.metrics.CosineSimilarity.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.CosineSimilarity.input": "tf.keras.layers.Layer.input", + "tf.metrics.CosineSimilarity.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.CosineSimilarity.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.CosineSimilarity.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.CosineSimilarity.name": "tf.keras.layers.Layer.name", + "tf.metrics.CosineSimilarity.name_scope": "tf.Module.name_scope", + "tf.metrics.CosineSimilarity.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.CosineSimilarity.output": "tf.keras.layers.Layer.output", + "tf.metrics.CosineSimilarity.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.CosineSimilarity.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.CosineSimilarity.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.CosineSimilarity.submodules": "tf.Module.submodules", + "tf.metrics.CosineSimilarity.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.CosineSimilarity.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.CosineSimilarity.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.CosineSimilarity.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.FalseNegatives": "tf.keras.metrics.FalseNegatives", + "tf.metrics.FalseNegatives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.FalseNegatives.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.FalseNegatives.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.FalseNegatives.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.FalseNegatives.__init__": "tf.keras.metrics.FalseNegatives.__init__", + "tf.metrics.FalseNegatives.__le__": "tf.keras.Model.__le__", + "tf.metrics.FalseNegatives.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.FalseNegatives.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.FalseNegatives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.FalseNegatives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.FalseNegatives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.FalseNegatives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.FalseNegatives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.FalseNegatives.build": "tf.keras.layers.Layer.build", + "tf.metrics.FalseNegatives.call": "tf.keras.layers.Layer.call", + "tf.metrics.FalseNegatives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.FalseNegatives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.FalseNegatives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.FalseNegatives.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.FalseNegatives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.FalseNegatives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.FalseNegatives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + "tf.metrics.FalseNegatives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.FalseNegatives.input": "tf.keras.layers.Layer.input", + "tf.metrics.FalseNegatives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.FalseNegatives.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.FalseNegatives.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.FalseNegatives.name": "tf.keras.layers.Layer.name", + "tf.metrics.FalseNegatives.name_scope": 
"tf.Module.name_scope", + "tf.metrics.FalseNegatives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.FalseNegatives.output": "tf.keras.layers.Layer.output", + "tf.metrics.FalseNegatives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + "tf.metrics.FalseNegatives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.metrics.FalseNegatives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.FalseNegatives.submodules": "tf.Module.submodules", + "tf.metrics.FalseNegatives.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.FalseNegatives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.FalseNegatives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.metrics.FalseNegatives.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.FalsePositives": "tf.keras.metrics.FalsePositives", + "tf.metrics.FalsePositives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.FalsePositives.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.FalsePositives.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.FalsePositives.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.FalsePositives.__init__": "tf.keras.metrics.FalsePositives.__init__", + "tf.metrics.FalsePositives.__le__": "tf.keras.Model.__le__", + "tf.metrics.FalsePositives.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.FalsePositives.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.FalsePositives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.FalsePositives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.FalsePositives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.FalsePositives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.FalsePositives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.FalsePositives.build": "tf.keras.layers.Layer.build", + "tf.metrics.FalsePositives.call": "tf.keras.layers.Layer.call", + "tf.metrics.FalsePositives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.FalsePositives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.FalsePositives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.FalsePositives.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.FalsePositives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.FalsePositives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.FalsePositives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + "tf.metrics.FalsePositives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.FalsePositives.input": "tf.keras.layers.Layer.input", + "tf.metrics.FalsePositives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.FalsePositives.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.FalsePositives.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.FalsePositives.name": "tf.keras.layers.Layer.name", + "tf.metrics.FalsePositives.name_scope": "tf.Module.name_scope", + "tf.metrics.FalsePositives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.FalsePositives.output": "tf.keras.layers.Layer.output", + "tf.metrics.FalsePositives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + "tf.metrics.FalsePositives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.metrics.FalsePositives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.FalsePositives.submodules": 
"tf.Module.submodules", + "tf.metrics.FalsePositives.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.FalsePositives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.FalsePositives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.metrics.FalsePositives.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.Hinge": "tf.keras.metrics.Hinge", + "tf.metrics.Hinge.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.Hinge.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.Hinge.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.Hinge.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.Hinge.__init__": "tf.keras.metrics.Hinge.__init__", + "tf.metrics.Hinge.__le__": "tf.keras.Model.__le__", + "tf.metrics.Hinge.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.Hinge.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.Hinge.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.Hinge.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.Hinge.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.Hinge.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.Hinge.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.Hinge.build": "tf.keras.layers.Layer.build", + "tf.metrics.Hinge.call": "tf.keras.layers.Layer.call", + "tf.metrics.Hinge.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.Hinge.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.Hinge.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.Hinge.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.Hinge.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.Hinge.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.Hinge.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.Hinge.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.Hinge.input": "tf.keras.layers.Layer.input", + "tf.metrics.Hinge.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.Hinge.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.Hinge.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.Hinge.name": "tf.keras.layers.Layer.name", + "tf.metrics.Hinge.name_scope": "tf.Module.name_scope", + "tf.metrics.Hinge.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.Hinge.output": "tf.keras.layers.Layer.output", + "tf.metrics.Hinge.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.Hinge.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.Hinge.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.Hinge.submodules": "tf.Module.submodules", + "tf.metrics.Hinge.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.Hinge.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.Hinge.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.Hinge.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.KLD": "tf.keras.losses.KLD", + "tf.metrics.KLDivergence": "tf.keras.metrics.KLDivergence", + "tf.metrics.KLDivergence.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.KLDivergence.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.KLDivergence.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.KLDivergence.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.KLDivergence.__init__": "tf.keras.metrics.KLDivergence.__init__", + "tf.metrics.KLDivergence.__le__": "tf.keras.Model.__le__", + "tf.metrics.KLDivergence.__lt__": 
"tf.keras.Model.__lt__", + "tf.metrics.KLDivergence.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.KLDivergence.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.KLDivergence.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.KLDivergence.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.KLDivergence.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.KLDivergence.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.KLDivergence.build": "tf.keras.layers.Layer.build", + "tf.metrics.KLDivergence.call": "tf.keras.layers.Layer.call", + "tf.metrics.KLDivergence.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.KLDivergence.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.KLDivergence.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.KLDivergence.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.KLDivergence.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.KLDivergence.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.KLDivergence.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.KLDivergence.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.KLDivergence.input": "tf.keras.layers.Layer.input", + "tf.metrics.KLDivergence.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.KLDivergence.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.KLDivergence.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.KLDivergence.name": "tf.keras.layers.Layer.name", + "tf.metrics.KLDivergence.name_scope": "tf.Module.name_scope", + "tf.metrics.KLDivergence.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.KLDivergence.output": "tf.keras.layers.Layer.output", + "tf.metrics.KLDivergence.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.KLDivergence.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.KLDivergence.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.KLDivergence.submodules": "tf.Module.submodules", + "tf.metrics.KLDivergence.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.KLDivergence.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.KLDivergence.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.KLDivergence.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.LogCoshError": "tf.keras.metrics.LogCoshError", + "tf.metrics.LogCoshError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.LogCoshError.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.LogCoshError.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.LogCoshError.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.LogCoshError.__init__": "tf.keras.metrics.LogCoshError.__init__", + "tf.metrics.LogCoshError.__le__": "tf.keras.Model.__le__", + "tf.metrics.LogCoshError.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.LogCoshError.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.LogCoshError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.LogCoshError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.LogCoshError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.LogCoshError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.LogCoshError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.LogCoshError.build": "tf.keras.layers.Layer.build", + "tf.metrics.LogCoshError.call": 
"tf.keras.layers.Layer.call", + "tf.metrics.LogCoshError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.LogCoshError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.LogCoshError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.LogCoshError.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.LogCoshError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.LogCoshError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.LogCoshError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.LogCoshError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.LogCoshError.input": "tf.keras.layers.Layer.input", + "tf.metrics.LogCoshError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.LogCoshError.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.LogCoshError.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.LogCoshError.name": "tf.keras.layers.Layer.name", + "tf.metrics.LogCoshError.name_scope": "tf.Module.name_scope", + "tf.metrics.LogCoshError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.LogCoshError.output": "tf.keras.layers.Layer.output", + "tf.metrics.LogCoshError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.LogCoshError.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.LogCoshError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.LogCoshError.submodules": "tf.Module.submodules", + "tf.metrics.LogCoshError.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.LogCoshError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.LogCoshError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.LogCoshError.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.MAE": "tf.keras.losses.MAE", + "tf.metrics.MAPE": "tf.keras.losses.MAPE", + "tf.metrics.MSE": "tf.keras.losses.MSE", + "tf.metrics.MSLE": "tf.keras.losses.MSLE", + "tf.metrics.Mean": "tf.keras.metrics.Mean", + "tf.metrics.Mean.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.Mean.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.Mean.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.Mean.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.Mean.__init__": "tf.keras.metrics.Mean.__init__", + "tf.metrics.Mean.__le__": "tf.keras.Model.__le__", + "tf.metrics.Mean.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.Mean.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.Mean.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.Mean.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.Mean.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.Mean.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.Mean.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.Mean.build": "tf.keras.layers.Layer.build", + "tf.metrics.Mean.call": "tf.keras.layers.Layer.call", + "tf.metrics.Mean.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.Mean.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.Mean.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.Mean.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.Mean.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.Mean.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.Mean.get_config": "tf.keras.metrics.Metric.get_config", + 
"tf.metrics.Mean.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.Mean.input": "tf.keras.layers.Layer.input", + "tf.metrics.Mean.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.Mean.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.Mean.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.Mean.name": "tf.keras.layers.Layer.name", + "tf.metrics.Mean.name_scope": "tf.Module.name_scope", + "tf.metrics.Mean.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.Mean.output": "tf.keras.layers.Layer.output", + "tf.metrics.Mean.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.Mean.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.Mean.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.Mean.submodules": "tf.Module.submodules", + "tf.metrics.Mean.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.Mean.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.Mean.update_state": "tf.keras.metrics.Mean.update_state", + "tf.metrics.Mean.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.MeanAbsoluteError": "tf.keras.metrics.MeanAbsoluteError", + "tf.metrics.MeanAbsoluteError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.MeanAbsoluteError.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.MeanAbsoluteError.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.MeanAbsoluteError.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.MeanAbsoluteError.__init__": "tf.keras.metrics.MeanAbsoluteError.__init__", + "tf.metrics.MeanAbsoluteError.__le__": "tf.keras.Model.__le__", + "tf.metrics.MeanAbsoluteError.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.MeanAbsoluteError.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.MeanAbsoluteError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.MeanAbsoluteError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.MeanAbsoluteError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.MeanAbsoluteError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.MeanAbsoluteError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.MeanAbsoluteError.build": "tf.keras.layers.Layer.build", + "tf.metrics.MeanAbsoluteError.call": "tf.keras.layers.Layer.call", + "tf.metrics.MeanAbsoluteError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.MeanAbsoluteError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.MeanAbsoluteError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.MeanAbsoluteError.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.MeanAbsoluteError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.MeanAbsoluteError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.MeanAbsoluteError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.MeanAbsoluteError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.MeanAbsoluteError.input": "tf.keras.layers.Layer.input", + "tf.metrics.MeanAbsoluteError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.MeanAbsoluteError.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.MeanAbsoluteError.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.MeanAbsoluteError.name": "tf.keras.layers.Layer.name", + "tf.metrics.MeanAbsoluteError.name_scope": "tf.Module.name_scope", + "tf.metrics.MeanAbsoluteError.non_trainable_weights": 
"tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.MeanAbsoluteError.output": "tf.keras.layers.Layer.output", + "tf.metrics.MeanAbsoluteError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.MeanAbsoluteError.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.MeanAbsoluteError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.MeanAbsoluteError.submodules": "tf.Module.submodules", + "tf.metrics.MeanAbsoluteError.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.MeanAbsoluteError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.MeanAbsoluteError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.MeanAbsoluteError.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.MeanAbsolutePercentageError": "tf.keras.metrics.MeanAbsolutePercentageError", + "tf.metrics.MeanAbsolutePercentageError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.MeanAbsolutePercentageError.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.MeanAbsolutePercentageError.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.MeanAbsolutePercentageError.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.MeanAbsolutePercentageError.__init__": "tf.keras.metrics.MeanAbsolutePercentageError.__init__", + "tf.metrics.MeanAbsolutePercentageError.__le__": "tf.keras.Model.__le__", + "tf.metrics.MeanAbsolutePercentageError.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.MeanAbsolutePercentageError.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.MeanAbsolutePercentageError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.MeanAbsolutePercentageError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.MeanAbsolutePercentageError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.MeanAbsolutePercentageError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.MeanAbsolutePercentageError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.MeanAbsolutePercentageError.build": "tf.keras.layers.Layer.build", + "tf.metrics.MeanAbsolutePercentageError.call": "tf.keras.layers.Layer.call", + "tf.metrics.MeanAbsolutePercentageError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.MeanAbsolutePercentageError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.MeanAbsolutePercentageError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.MeanAbsolutePercentageError.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.MeanAbsolutePercentageError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.MeanAbsolutePercentageError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.MeanAbsolutePercentageError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.MeanAbsolutePercentageError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.MeanAbsolutePercentageError.input": "tf.keras.layers.Layer.input", + "tf.metrics.MeanAbsolutePercentageError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.MeanAbsolutePercentageError.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.MeanAbsolutePercentageError.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.MeanAbsolutePercentageError.name": "tf.keras.layers.Layer.name", + "tf.metrics.MeanAbsolutePercentageError.name_scope": "tf.Module.name_scope", + "tf.metrics.MeanAbsolutePercentageError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + 
"tf.metrics.MeanAbsolutePercentageError.output": "tf.keras.layers.Layer.output", + "tf.metrics.MeanAbsolutePercentageError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.MeanAbsolutePercentageError.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.MeanAbsolutePercentageError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.MeanAbsolutePercentageError.submodules": "tf.Module.submodules", + "tf.metrics.MeanAbsolutePercentageError.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.MeanAbsolutePercentageError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.MeanAbsolutePercentageError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.MeanAbsolutePercentageError.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.MeanIoU": "tf.keras.metrics.MeanIoU", + "tf.metrics.MeanIoU.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.MeanIoU.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.MeanIoU.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.MeanIoU.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.MeanIoU.__init__": "tf.keras.metrics.MeanIoU.__init__", + "tf.metrics.MeanIoU.__le__": "tf.keras.Model.__le__", + "tf.metrics.MeanIoU.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.MeanIoU.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.MeanIoU.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.MeanIoU.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.MeanIoU.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.MeanIoU.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.MeanIoU.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.MeanIoU.build": "tf.keras.layers.Layer.build", + "tf.metrics.MeanIoU.call": "tf.keras.layers.Layer.call", + "tf.metrics.MeanIoU.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.MeanIoU.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.MeanIoU.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.MeanIoU.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.MeanIoU.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.MeanIoU.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.MeanIoU.get_config": "tf.keras.metrics.MeanIoU.get_config", + "tf.metrics.MeanIoU.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.MeanIoU.input": "tf.keras.layers.Layer.input", + "tf.metrics.MeanIoU.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.MeanIoU.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.MeanIoU.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.MeanIoU.name": "tf.keras.layers.Layer.name", + "tf.metrics.MeanIoU.name_scope": "tf.Module.name_scope", + "tf.metrics.MeanIoU.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.MeanIoU.output": "tf.keras.layers.Layer.output", + "tf.metrics.MeanIoU.reset_states": "tf.keras.metrics.MeanIoU.reset_states", + "tf.metrics.MeanIoU.result": "tf.keras.metrics.MeanIoU.result", + "tf.metrics.MeanIoU.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.MeanIoU.submodules": "tf.Module.submodules", + "tf.metrics.MeanIoU.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.MeanIoU.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.MeanIoU.update_state": "tf.keras.metrics.MeanIoU.update_state", + "tf.metrics.MeanIoU.weights": "tf.keras.layers.Layer.weights", + 
"tf.metrics.MeanRelativeError": "tf.keras.metrics.MeanRelativeError", + "tf.metrics.MeanRelativeError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.MeanRelativeError.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.MeanRelativeError.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.MeanRelativeError.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.MeanRelativeError.__init__": "tf.keras.metrics.MeanRelativeError.__init__", + "tf.metrics.MeanRelativeError.__le__": "tf.keras.Model.__le__", + "tf.metrics.MeanRelativeError.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.MeanRelativeError.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.MeanRelativeError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.MeanRelativeError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.MeanRelativeError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.MeanRelativeError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.MeanRelativeError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.MeanRelativeError.build": "tf.keras.layers.Layer.build", + "tf.metrics.MeanRelativeError.call": "tf.keras.layers.Layer.call", + "tf.metrics.MeanRelativeError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.MeanRelativeError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.MeanRelativeError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.MeanRelativeError.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.MeanRelativeError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.MeanRelativeError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.MeanRelativeError.get_config": "tf.keras.metrics.MeanRelativeError.get_config", + "tf.metrics.MeanRelativeError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.MeanRelativeError.input": "tf.keras.layers.Layer.input", + "tf.metrics.MeanRelativeError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.MeanRelativeError.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.MeanRelativeError.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.MeanRelativeError.name": "tf.keras.layers.Layer.name", + "tf.metrics.MeanRelativeError.name_scope": "tf.Module.name_scope", + "tf.metrics.MeanRelativeError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.MeanRelativeError.output": "tf.keras.layers.Layer.output", + "tf.metrics.MeanRelativeError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.MeanRelativeError.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.MeanRelativeError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.MeanRelativeError.submodules": "tf.Module.submodules", + "tf.metrics.MeanRelativeError.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.MeanRelativeError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.MeanRelativeError.update_state": "tf.keras.metrics.MeanRelativeError.update_state", + "tf.metrics.MeanRelativeError.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.MeanSquaredError": "tf.keras.metrics.MeanSquaredError", + "tf.metrics.MeanSquaredError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.MeanSquaredError.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.MeanSquaredError.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.MeanSquaredError.__gt__": "tf.keras.Model.__gt__", + 
"tf.metrics.MeanSquaredError.__init__": "tf.keras.metrics.MeanSquaredError.__init__", + "tf.metrics.MeanSquaredError.__le__": "tf.keras.Model.__le__", + "tf.metrics.MeanSquaredError.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.MeanSquaredError.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.MeanSquaredError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.MeanSquaredError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.MeanSquaredError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.MeanSquaredError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.MeanSquaredError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.MeanSquaredError.build": "tf.keras.layers.Layer.build", + "tf.metrics.MeanSquaredError.call": "tf.keras.layers.Layer.call", + "tf.metrics.MeanSquaredError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.MeanSquaredError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.MeanSquaredError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.MeanSquaredError.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.MeanSquaredError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.MeanSquaredError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.MeanSquaredError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.MeanSquaredError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.MeanSquaredError.input": "tf.keras.layers.Layer.input", + "tf.metrics.MeanSquaredError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.MeanSquaredError.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.MeanSquaredError.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.MeanSquaredError.name": "tf.keras.layers.Layer.name", + "tf.metrics.MeanSquaredError.name_scope": "tf.Module.name_scope", + "tf.metrics.MeanSquaredError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.MeanSquaredError.output": "tf.keras.layers.Layer.output", + "tf.metrics.MeanSquaredError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.MeanSquaredError.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.MeanSquaredError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.MeanSquaredError.submodules": "tf.Module.submodules", + "tf.metrics.MeanSquaredError.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.MeanSquaredError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.MeanSquaredError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.MeanSquaredError.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.MeanSquaredLogarithmicError": "tf.keras.metrics.MeanSquaredLogarithmicError", + "tf.metrics.MeanSquaredLogarithmicError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.MeanSquaredLogarithmicError.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.MeanSquaredLogarithmicError.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.MeanSquaredLogarithmicError.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.MeanSquaredLogarithmicError.__init__": "tf.keras.metrics.MeanSquaredLogarithmicError.__init__", + "tf.metrics.MeanSquaredLogarithmicError.__le__": "tf.keras.Model.__le__", + "tf.metrics.MeanSquaredLogarithmicError.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.MeanSquaredLogarithmicError.__ne__": "tf.keras.Model.__ne__", + 
"tf.metrics.MeanSquaredLogarithmicError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.MeanSquaredLogarithmicError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.MeanSquaredLogarithmicError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.MeanSquaredLogarithmicError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.MeanSquaredLogarithmicError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.MeanSquaredLogarithmicError.build": "tf.keras.layers.Layer.build", + "tf.metrics.MeanSquaredLogarithmicError.call": "tf.keras.layers.Layer.call", + "tf.metrics.MeanSquaredLogarithmicError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.MeanSquaredLogarithmicError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.MeanSquaredLogarithmicError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.MeanSquaredLogarithmicError.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.MeanSquaredLogarithmicError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.MeanSquaredLogarithmicError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.MeanSquaredLogarithmicError.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.MeanSquaredLogarithmicError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.MeanSquaredLogarithmicError.input": "tf.keras.layers.Layer.input", + "tf.metrics.MeanSquaredLogarithmicError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.MeanSquaredLogarithmicError.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.MeanSquaredLogarithmicError.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.MeanSquaredLogarithmicError.name": "tf.keras.layers.Layer.name", + "tf.metrics.MeanSquaredLogarithmicError.name_scope": "tf.Module.name_scope", + "tf.metrics.MeanSquaredLogarithmicError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.MeanSquaredLogarithmicError.output": "tf.keras.layers.Layer.output", + "tf.metrics.MeanSquaredLogarithmicError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.MeanSquaredLogarithmicError.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.MeanSquaredLogarithmicError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.MeanSquaredLogarithmicError.submodules": "tf.Module.submodules", + "tf.metrics.MeanSquaredLogarithmicError.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.MeanSquaredLogarithmicError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.MeanSquaredLogarithmicError.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.MeanSquaredLogarithmicError.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.MeanTensor": "tf.keras.metrics.MeanTensor", + "tf.metrics.MeanTensor.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.MeanTensor.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.MeanTensor.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.MeanTensor.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.MeanTensor.__init__": "tf.keras.metrics.MeanTensor.__init__", + "tf.metrics.MeanTensor.__le__": "tf.keras.Model.__le__", + "tf.metrics.MeanTensor.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.MeanTensor.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.MeanTensor.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.MeanTensor.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", 
+ "tf.metrics.MeanTensor.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.MeanTensor.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.MeanTensor.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.MeanTensor.build": "tf.keras.layers.Layer.build", + "tf.metrics.MeanTensor.call": "tf.keras.layers.Layer.call", + "tf.metrics.MeanTensor.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.MeanTensor.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.MeanTensor.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.MeanTensor.count": "tf.keras.metrics.MeanTensor.count", + "tf.metrics.MeanTensor.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.MeanTensor.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.MeanTensor.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.MeanTensor.get_config": "tf.keras.metrics.Metric.get_config", + "tf.metrics.MeanTensor.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.MeanTensor.input": "tf.keras.layers.Layer.input", + "tf.metrics.MeanTensor.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.MeanTensor.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.MeanTensor.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.MeanTensor.name": "tf.keras.layers.Layer.name", + "tf.metrics.MeanTensor.name_scope": "tf.Module.name_scope", + "tf.metrics.MeanTensor.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.MeanTensor.output": "tf.keras.layers.Layer.output", + "tf.metrics.MeanTensor.reset_states": "tf.keras.metrics.MeanTensor.reset_states", + "tf.metrics.MeanTensor.result": "tf.keras.metrics.MeanTensor.result", + "tf.metrics.MeanTensor.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.MeanTensor.submodules": "tf.Module.submodules", + "tf.metrics.MeanTensor.total": "tf.keras.metrics.MeanTensor.total", + "tf.metrics.MeanTensor.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.MeanTensor.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.MeanTensor.update_state": "tf.keras.metrics.MeanTensor.update_state", + "tf.metrics.MeanTensor.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.Metric": "tf.keras.metrics.Metric", + "tf.metrics.Metric.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.Metric.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.Metric.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.Metric.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.Metric.__init__": "tf.keras.metrics.Metric.__init__", + "tf.metrics.Metric.__le__": "tf.keras.Model.__le__", + "tf.metrics.Metric.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.Metric.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.Metric.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.Metric.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.Metric.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.Metric.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.Metric.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.Metric.build": "tf.keras.layers.Layer.build", + "tf.metrics.Metric.call": "tf.keras.layers.Layer.call", + "tf.metrics.Metric.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.Metric.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.Metric.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + 
"tf.metrics.Metric.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.Metric.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.Metric.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.Metric.get_config": "tf.keras.metrics.Metric.get_config", + "tf.metrics.Metric.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.Metric.input": "tf.keras.layers.Layer.input", + "tf.metrics.Metric.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.Metric.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.Metric.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.Metric.name": "tf.keras.layers.Layer.name", + "tf.metrics.Metric.name_scope": "tf.Module.name_scope", + "tf.metrics.Metric.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.Metric.output": "tf.keras.layers.Layer.output", + "tf.metrics.Metric.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.Metric.result": "tf.keras.metrics.Metric.result", + "tf.metrics.Metric.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.Metric.submodules": "tf.Module.submodules", + "tf.metrics.Metric.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.Metric.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.Metric.update_state": "tf.keras.metrics.Metric.update_state", + "tf.metrics.Metric.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.Poisson": "tf.keras.metrics.Poisson", + "tf.metrics.Poisson.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.Poisson.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.Poisson.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.Poisson.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.Poisson.__init__": "tf.keras.metrics.Poisson.__init__", + "tf.metrics.Poisson.__le__": "tf.keras.Model.__le__", + "tf.metrics.Poisson.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.Poisson.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.Poisson.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.Poisson.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.Poisson.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.Poisson.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.Poisson.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.Poisson.build": "tf.keras.layers.Layer.build", + "tf.metrics.Poisson.call": "tf.keras.layers.Layer.call", + "tf.metrics.Poisson.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.Poisson.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.Poisson.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.Poisson.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.Poisson.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.Poisson.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.Poisson.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.Poisson.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.Poisson.input": "tf.keras.layers.Layer.input", + "tf.metrics.Poisson.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.Poisson.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.Poisson.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.Poisson.name": "tf.keras.layers.Layer.name", + "tf.metrics.Poisson.name_scope": "tf.Module.name_scope", + "tf.metrics.Poisson.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + 
"tf.metrics.Poisson.output": "tf.keras.layers.Layer.output", + "tf.metrics.Poisson.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.Poisson.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.Poisson.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.Poisson.submodules": "tf.Module.submodules", + "tf.metrics.Poisson.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.Poisson.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.Poisson.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.Poisson.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.Precision": "tf.keras.metrics.Precision", + "tf.metrics.Precision.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.Precision.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.Precision.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.Precision.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.Precision.__init__": "tf.keras.metrics.Precision.__init__", + "tf.metrics.Precision.__le__": "tf.keras.Model.__le__", + "tf.metrics.Precision.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.Precision.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.Precision.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.Precision.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.Precision.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.Precision.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.Precision.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.Precision.build": "tf.keras.layers.Layer.build", + "tf.metrics.Precision.call": "tf.keras.layers.Layer.call", + "tf.metrics.Precision.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.Precision.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.Precision.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.Precision.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.Precision.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.Precision.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.Precision.get_config": "tf.keras.metrics.Precision.get_config", + "tf.metrics.Precision.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.Precision.input": "tf.keras.layers.Layer.input", + "tf.metrics.Precision.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.Precision.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.Precision.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.Precision.name": "tf.keras.layers.Layer.name", + "tf.metrics.Precision.name_scope": "tf.Module.name_scope", + "tf.metrics.Precision.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.Precision.output": "tf.keras.layers.Layer.output", + "tf.metrics.Precision.reset_states": "tf.keras.metrics.Precision.reset_states", + "tf.metrics.Precision.result": "tf.keras.metrics.Precision.result", + "tf.metrics.Precision.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.Precision.submodules": "tf.Module.submodules", + "tf.metrics.Precision.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.Precision.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.Precision.update_state": "tf.keras.metrics.Precision.update_state", + "tf.metrics.Precision.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.PrecisionAtRecall": "tf.keras.metrics.PrecisionAtRecall", + 
"tf.metrics.PrecisionAtRecall.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.PrecisionAtRecall.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.PrecisionAtRecall.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.PrecisionAtRecall.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.PrecisionAtRecall.__init__": "tf.keras.metrics.PrecisionAtRecall.__init__", + "tf.metrics.PrecisionAtRecall.__le__": "tf.keras.Model.__le__", + "tf.metrics.PrecisionAtRecall.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.PrecisionAtRecall.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.PrecisionAtRecall.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.PrecisionAtRecall.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.PrecisionAtRecall.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.PrecisionAtRecall.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.PrecisionAtRecall.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.PrecisionAtRecall.build": "tf.keras.layers.Layer.build", + "tf.metrics.PrecisionAtRecall.call": "tf.keras.layers.Layer.call", + "tf.metrics.PrecisionAtRecall.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.PrecisionAtRecall.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.PrecisionAtRecall.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.PrecisionAtRecall.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.PrecisionAtRecall.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.PrecisionAtRecall.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.PrecisionAtRecall.get_config": "tf.keras.metrics.PrecisionAtRecall.get_config", + "tf.metrics.PrecisionAtRecall.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.PrecisionAtRecall.input": "tf.keras.layers.Layer.input", + "tf.metrics.PrecisionAtRecall.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.PrecisionAtRecall.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.PrecisionAtRecall.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.PrecisionAtRecall.name": "tf.keras.layers.Layer.name", + "tf.metrics.PrecisionAtRecall.name_scope": "tf.Module.name_scope", + "tf.metrics.PrecisionAtRecall.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.PrecisionAtRecall.output": "tf.keras.layers.Layer.output", + "tf.metrics.PrecisionAtRecall.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.metrics.PrecisionAtRecall.result": "tf.keras.metrics.PrecisionAtRecall.result", + "tf.metrics.PrecisionAtRecall.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.PrecisionAtRecall.submodules": "tf.Module.submodules", + "tf.metrics.PrecisionAtRecall.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.PrecisionAtRecall.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.PrecisionAtRecall.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.metrics.PrecisionAtRecall.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.Recall": "tf.keras.metrics.Recall", + "tf.metrics.Recall.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.Recall.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.Recall.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.Recall.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.Recall.__init__": "tf.keras.metrics.Recall.__init__", + "tf.metrics.Recall.__le__": "tf.keras.Model.__le__", + 
"tf.metrics.Recall.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.Recall.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.Recall.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.Recall.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.Recall.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.Recall.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.Recall.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.Recall.build": "tf.keras.layers.Layer.build", + "tf.metrics.Recall.call": "tf.keras.layers.Layer.call", + "tf.metrics.Recall.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.Recall.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.Recall.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.Recall.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.Recall.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.Recall.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.Recall.get_config": "tf.keras.metrics.Recall.get_config", + "tf.metrics.Recall.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.Recall.input": "tf.keras.layers.Layer.input", + "tf.metrics.Recall.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.Recall.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.Recall.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.Recall.name": "tf.keras.layers.Layer.name", + "tf.metrics.Recall.name_scope": "tf.Module.name_scope", + "tf.metrics.Recall.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.Recall.output": "tf.keras.layers.Layer.output", + "tf.metrics.Recall.reset_states": "tf.keras.metrics.Recall.reset_states", + "tf.metrics.Recall.result": "tf.keras.metrics.Recall.result", + "tf.metrics.Recall.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.Recall.submodules": "tf.Module.submodules", + "tf.metrics.Recall.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.Recall.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.Recall.update_state": "tf.keras.metrics.Recall.update_state", + "tf.metrics.Recall.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.RecallAtPrecision": "tf.keras.metrics.RecallAtPrecision", + "tf.metrics.RecallAtPrecision.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.RecallAtPrecision.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.RecallAtPrecision.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.RecallAtPrecision.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.RecallAtPrecision.__init__": "tf.keras.metrics.RecallAtPrecision.__init__", + "tf.metrics.RecallAtPrecision.__le__": "tf.keras.Model.__le__", + "tf.metrics.RecallAtPrecision.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.RecallAtPrecision.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.RecallAtPrecision.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.RecallAtPrecision.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.RecallAtPrecision.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.RecallAtPrecision.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.RecallAtPrecision.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.RecallAtPrecision.build": "tf.keras.layers.Layer.build", + "tf.metrics.RecallAtPrecision.call": "tf.keras.layers.Layer.call", + "tf.metrics.RecallAtPrecision.compute_mask": 
"tf.keras.layers.Layer.compute_mask", + "tf.metrics.RecallAtPrecision.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.RecallAtPrecision.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.RecallAtPrecision.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.RecallAtPrecision.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.RecallAtPrecision.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.RecallAtPrecision.get_config": "tf.keras.metrics.RecallAtPrecision.get_config", + "tf.metrics.RecallAtPrecision.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.RecallAtPrecision.input": "tf.keras.layers.Layer.input", + "tf.metrics.RecallAtPrecision.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.RecallAtPrecision.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.RecallAtPrecision.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.RecallAtPrecision.name": "tf.keras.layers.Layer.name", + "tf.metrics.RecallAtPrecision.name_scope": "tf.Module.name_scope", + "tf.metrics.RecallAtPrecision.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.RecallAtPrecision.output": "tf.keras.layers.Layer.output", + "tf.metrics.RecallAtPrecision.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.metrics.RecallAtPrecision.result": "tf.keras.metrics.RecallAtPrecision.result", + "tf.metrics.RecallAtPrecision.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.RecallAtPrecision.submodules": "tf.Module.submodules", + "tf.metrics.RecallAtPrecision.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.RecallAtPrecision.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.RecallAtPrecision.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.metrics.RecallAtPrecision.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.RootMeanSquaredError": "tf.keras.metrics.RootMeanSquaredError", + "tf.metrics.RootMeanSquaredError.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.RootMeanSquaredError.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.RootMeanSquaredError.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.RootMeanSquaredError.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.RootMeanSquaredError.__init__": "tf.keras.metrics.RootMeanSquaredError.__init__", + "tf.metrics.RootMeanSquaredError.__le__": "tf.keras.Model.__le__", + "tf.metrics.RootMeanSquaredError.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.RootMeanSquaredError.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.RootMeanSquaredError.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.RootMeanSquaredError.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.RootMeanSquaredError.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.RootMeanSquaredError.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.RootMeanSquaredError.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.RootMeanSquaredError.build": "tf.keras.layers.Layer.build", + "tf.metrics.RootMeanSquaredError.call": "tf.keras.layers.Layer.call", + "tf.metrics.RootMeanSquaredError.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.RootMeanSquaredError.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.RootMeanSquaredError.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + 
"tf.metrics.RootMeanSquaredError.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.RootMeanSquaredError.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.RootMeanSquaredError.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.RootMeanSquaredError.get_config": "tf.keras.metrics.Metric.get_config", + "tf.metrics.RootMeanSquaredError.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.RootMeanSquaredError.input": "tf.keras.layers.Layer.input", + "tf.metrics.RootMeanSquaredError.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.RootMeanSquaredError.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.RootMeanSquaredError.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.RootMeanSquaredError.name": "tf.keras.layers.Layer.name", + "tf.metrics.RootMeanSquaredError.name_scope": "tf.Module.name_scope", + "tf.metrics.RootMeanSquaredError.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.RootMeanSquaredError.output": "tf.keras.layers.Layer.output", + "tf.metrics.RootMeanSquaredError.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.RootMeanSquaredError.result": "tf.keras.metrics.RootMeanSquaredError.result", + "tf.metrics.RootMeanSquaredError.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.RootMeanSquaredError.submodules": "tf.Module.submodules", + "tf.metrics.RootMeanSquaredError.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.RootMeanSquaredError.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.RootMeanSquaredError.update_state": "tf.keras.metrics.RootMeanSquaredError.update_state", + "tf.metrics.RootMeanSquaredError.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.SensitivityAtSpecificity": "tf.keras.metrics.SensitivityAtSpecificity", + "tf.metrics.SensitivityAtSpecificity.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.SensitivityAtSpecificity.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.SensitivityAtSpecificity.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.SensitivityAtSpecificity.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.SensitivityAtSpecificity.__init__": "tf.keras.metrics.SensitivityAtSpecificity.__init__", + "tf.metrics.SensitivityAtSpecificity.__le__": "tf.keras.Model.__le__", + "tf.metrics.SensitivityAtSpecificity.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.SensitivityAtSpecificity.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.SensitivityAtSpecificity.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.SensitivityAtSpecificity.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.SensitivityAtSpecificity.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.SensitivityAtSpecificity.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.SensitivityAtSpecificity.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.SensitivityAtSpecificity.build": "tf.keras.layers.Layer.build", + "tf.metrics.SensitivityAtSpecificity.call": "tf.keras.layers.Layer.call", + "tf.metrics.SensitivityAtSpecificity.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.SensitivityAtSpecificity.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.SensitivityAtSpecificity.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.SensitivityAtSpecificity.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.SensitivityAtSpecificity.dtype": 
"tf.keras.metrics.Metric.dtype", + "tf.metrics.SensitivityAtSpecificity.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.SensitivityAtSpecificity.get_config": "tf.keras.metrics.SensitivityAtSpecificity.get_config", + "tf.metrics.SensitivityAtSpecificity.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.SensitivityAtSpecificity.input": "tf.keras.layers.Layer.input", + "tf.metrics.SensitivityAtSpecificity.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.SensitivityAtSpecificity.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.SensitivityAtSpecificity.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.SensitivityAtSpecificity.name": "tf.keras.layers.Layer.name", + "tf.metrics.SensitivityAtSpecificity.name_scope": "tf.Module.name_scope", + "tf.metrics.SensitivityAtSpecificity.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.SensitivityAtSpecificity.output": "tf.keras.layers.Layer.output", + "tf.metrics.SensitivityAtSpecificity.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.metrics.SensitivityAtSpecificity.result": "tf.keras.metrics.SensitivityAtSpecificity.result", + "tf.metrics.SensitivityAtSpecificity.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.SensitivityAtSpecificity.submodules": "tf.Module.submodules", + "tf.metrics.SensitivityAtSpecificity.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.SensitivityAtSpecificity.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.SensitivityAtSpecificity.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.metrics.SensitivityAtSpecificity.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.SparseCategoricalAccuracy": "tf.keras.metrics.SparseCategoricalAccuracy", + "tf.metrics.SparseCategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.SparseCategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.SparseCategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.SparseCategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.SparseCategoricalAccuracy.__init__": "tf.keras.metrics.SparseCategoricalAccuracy.__init__", + "tf.metrics.SparseCategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.metrics.SparseCategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.SparseCategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.SparseCategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.SparseCategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.SparseCategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.SparseCategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.SparseCategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.SparseCategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.metrics.SparseCategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.metrics.SparseCategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.SparseCategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.SparseCategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.SparseCategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.SparseCategoricalAccuracy.dtype": 
"tf.keras.metrics.Metric.dtype", + "tf.metrics.SparseCategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.SparseCategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.SparseCategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.SparseCategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.metrics.SparseCategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.SparseCategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.SparseCategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.SparseCategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.metrics.SparseCategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.metrics.SparseCategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.SparseCategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.metrics.SparseCategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.SparseCategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.SparseCategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.SparseCategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.metrics.SparseCategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.SparseCategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.SparseCategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.SparseCategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.SparseCategoricalCrossentropy": "tf.keras.metrics.SparseCategoricalCrossentropy", + "tf.metrics.SparseCategoricalCrossentropy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.SparseCategoricalCrossentropy.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.SparseCategoricalCrossentropy.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.SparseCategoricalCrossentropy.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.SparseCategoricalCrossentropy.__init__": "tf.keras.metrics.SparseCategoricalCrossentropy.__init__", + "tf.metrics.SparseCategoricalCrossentropy.__le__": "tf.keras.Model.__le__", + "tf.metrics.SparseCategoricalCrossentropy.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.SparseCategoricalCrossentropy.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.SparseCategoricalCrossentropy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.SparseCategoricalCrossentropy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.SparseCategoricalCrossentropy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.SparseCategoricalCrossentropy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.SparseCategoricalCrossentropy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.SparseCategoricalCrossentropy.build": "tf.keras.layers.Layer.build", + "tf.metrics.SparseCategoricalCrossentropy.call": "tf.keras.layers.Layer.call", + "tf.metrics.SparseCategoricalCrossentropy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.SparseCategoricalCrossentropy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.SparseCategoricalCrossentropy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.SparseCategoricalCrossentropy.count_params": "tf.keras.layers.Layer.count_params", + 
"tf.metrics.SparseCategoricalCrossentropy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.SparseCategoricalCrossentropy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.SparseCategoricalCrossentropy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.SparseCategoricalCrossentropy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.SparseCategoricalCrossentropy.input": "tf.keras.layers.Layer.input", + "tf.metrics.SparseCategoricalCrossentropy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.SparseCategoricalCrossentropy.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.SparseCategoricalCrossentropy.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.SparseCategoricalCrossentropy.name": "tf.keras.layers.Layer.name", + "tf.metrics.SparseCategoricalCrossentropy.name_scope": "tf.Module.name_scope", + "tf.metrics.SparseCategoricalCrossentropy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.SparseCategoricalCrossentropy.output": "tf.keras.layers.Layer.output", + "tf.metrics.SparseCategoricalCrossentropy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.SparseCategoricalCrossentropy.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.SparseCategoricalCrossentropy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.SparseCategoricalCrossentropy.submodules": "tf.Module.submodules", + "tf.metrics.SparseCategoricalCrossentropy.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.SparseCategoricalCrossentropy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.SparseCategoricalCrossentropy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.SparseCategoricalCrossentropy.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.SparseTopKCategoricalAccuracy": "tf.keras.metrics.SparseTopKCategoricalAccuracy", + "tf.metrics.SparseTopKCategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.SparseTopKCategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.SparseTopKCategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.SparseTopKCategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.SparseTopKCategoricalAccuracy.__init__": "tf.keras.metrics.SparseTopKCategoricalAccuracy.__init__", + "tf.metrics.SparseTopKCategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.metrics.SparseTopKCategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.SparseTopKCategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.SparseTopKCategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.SparseTopKCategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.SparseTopKCategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.SparseTopKCategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.SparseTopKCategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.SparseTopKCategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.metrics.SparseTopKCategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.metrics.SparseTopKCategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.SparseTopKCategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.SparseTopKCategoricalAccuracy.compute_output_signature": 
"tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.SparseTopKCategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.SparseTopKCategoricalAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.SparseTopKCategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.SparseTopKCategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.SparseTopKCategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.SparseTopKCategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.metrics.SparseTopKCategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.SparseTopKCategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.SparseTopKCategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.SparseTopKCategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.metrics.SparseTopKCategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.metrics.SparseTopKCategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.SparseTopKCategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.metrics.SparseTopKCategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.SparseTopKCategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.SparseTopKCategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.SparseTopKCategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.metrics.SparseTopKCategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.SparseTopKCategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.SparseTopKCategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.SparseTopKCategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.SpecificityAtSensitivity": "tf.keras.metrics.SpecificityAtSensitivity", + "tf.metrics.SpecificityAtSensitivity.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.SpecificityAtSensitivity.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.SpecificityAtSensitivity.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.SpecificityAtSensitivity.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.SpecificityAtSensitivity.__init__": "tf.keras.metrics.SpecificityAtSensitivity.__init__", + "tf.metrics.SpecificityAtSensitivity.__le__": "tf.keras.Model.__le__", + "tf.metrics.SpecificityAtSensitivity.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.SpecificityAtSensitivity.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.SpecificityAtSensitivity.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.SpecificityAtSensitivity.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.SpecificityAtSensitivity.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.SpecificityAtSensitivity.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.SpecificityAtSensitivity.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.SpecificityAtSensitivity.build": "tf.keras.layers.Layer.build", + "tf.metrics.SpecificityAtSensitivity.call": "tf.keras.layers.Layer.call", + "tf.metrics.SpecificityAtSensitivity.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.SpecificityAtSensitivity.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + 
"tf.metrics.SpecificityAtSensitivity.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.SpecificityAtSensitivity.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.SpecificityAtSensitivity.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.SpecificityAtSensitivity.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.SpecificityAtSensitivity.get_config": "tf.keras.metrics.SpecificityAtSensitivity.get_config", + "tf.metrics.SpecificityAtSensitivity.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.SpecificityAtSensitivity.input": "tf.keras.layers.Layer.input", + "tf.metrics.SpecificityAtSensitivity.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.SpecificityAtSensitivity.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.SpecificityAtSensitivity.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.SpecificityAtSensitivity.name": "tf.keras.layers.Layer.name", + "tf.metrics.SpecificityAtSensitivity.name_scope": "tf.Module.name_scope", + "tf.metrics.SpecificityAtSensitivity.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.SpecificityAtSensitivity.output": "tf.keras.layers.Layer.output", + "tf.metrics.SpecificityAtSensitivity.reset_states": "tf.keras.metrics.PrecisionAtRecall.reset_states", + "tf.metrics.SpecificityAtSensitivity.result": "tf.keras.metrics.SpecificityAtSensitivity.result", + "tf.metrics.SpecificityAtSensitivity.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.SpecificityAtSensitivity.submodules": "tf.Module.submodules", + "tf.metrics.SpecificityAtSensitivity.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.SpecificityAtSensitivity.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.SpecificityAtSensitivity.update_state": "tf.keras.metrics.PrecisionAtRecall.update_state", + "tf.metrics.SpecificityAtSensitivity.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.SquaredHinge": "tf.keras.metrics.SquaredHinge", + "tf.metrics.SquaredHinge.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.SquaredHinge.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.SquaredHinge.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.SquaredHinge.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.SquaredHinge.__init__": "tf.keras.metrics.SquaredHinge.__init__", + "tf.metrics.SquaredHinge.__le__": "tf.keras.Model.__le__", + "tf.metrics.SquaredHinge.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.SquaredHinge.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.SquaredHinge.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.SquaredHinge.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.SquaredHinge.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.SquaredHinge.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.SquaredHinge.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.SquaredHinge.build": "tf.keras.layers.Layer.build", + "tf.metrics.SquaredHinge.call": "tf.keras.layers.Layer.call", + "tf.metrics.SquaredHinge.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.SquaredHinge.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.SquaredHinge.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.SquaredHinge.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.SquaredHinge.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.SquaredHinge.dynamic": 
"tf.keras.layers.Layer.dynamic", + "tf.metrics.SquaredHinge.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.SquaredHinge.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.SquaredHinge.input": "tf.keras.layers.Layer.input", + "tf.metrics.SquaredHinge.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.SquaredHinge.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.SquaredHinge.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.SquaredHinge.name": "tf.keras.layers.Layer.name", + "tf.metrics.SquaredHinge.name_scope": "tf.Module.name_scope", + "tf.metrics.SquaredHinge.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.SquaredHinge.output": "tf.keras.layers.Layer.output", + "tf.metrics.SquaredHinge.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.SquaredHinge.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.SquaredHinge.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.SquaredHinge.submodules": "tf.Module.submodules", + "tf.metrics.SquaredHinge.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.SquaredHinge.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.SquaredHinge.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.SquaredHinge.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.Sum": "tf.keras.metrics.Sum", + "tf.metrics.Sum.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.Sum.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.Sum.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.Sum.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.Sum.__init__": "tf.keras.metrics.Sum.__init__", + "tf.metrics.Sum.__le__": "tf.keras.Model.__le__", + "tf.metrics.Sum.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.Sum.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.Sum.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.Sum.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.Sum.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.Sum.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.Sum.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.Sum.build": "tf.keras.layers.Layer.build", + "tf.metrics.Sum.call": "tf.keras.layers.Layer.call", + "tf.metrics.Sum.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.Sum.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.Sum.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.Sum.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.Sum.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.Sum.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.Sum.get_config": "tf.keras.metrics.Metric.get_config", + "tf.metrics.Sum.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.Sum.input": "tf.keras.layers.Layer.input", + "tf.metrics.Sum.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.Sum.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.Sum.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.Sum.name": "tf.keras.layers.Layer.name", + "tf.metrics.Sum.name_scope": "tf.Module.name_scope", + "tf.metrics.Sum.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.Sum.output": "tf.keras.layers.Layer.output", + "tf.metrics.Sum.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.Sum.result": "tf.keras.metrics.Accuracy.result", + 
"tf.metrics.Sum.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.Sum.submodules": "tf.Module.submodules", + "tf.metrics.Sum.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.Sum.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.Sum.update_state": "tf.keras.metrics.Mean.update_state", + "tf.metrics.Sum.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.TopKCategoricalAccuracy": "tf.keras.metrics.TopKCategoricalAccuracy", + "tf.metrics.TopKCategoricalAccuracy.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.TopKCategoricalAccuracy.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.TopKCategoricalAccuracy.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.TopKCategoricalAccuracy.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.TopKCategoricalAccuracy.__init__": "tf.keras.metrics.TopKCategoricalAccuracy.__init__", + "tf.metrics.TopKCategoricalAccuracy.__le__": "tf.keras.Model.__le__", + "tf.metrics.TopKCategoricalAccuracy.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.TopKCategoricalAccuracy.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.TopKCategoricalAccuracy.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.TopKCategoricalAccuracy.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.TopKCategoricalAccuracy.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.TopKCategoricalAccuracy.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.TopKCategoricalAccuracy.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.TopKCategoricalAccuracy.build": "tf.keras.layers.Layer.build", + "tf.metrics.TopKCategoricalAccuracy.call": "tf.keras.layers.Layer.call", + "tf.metrics.TopKCategoricalAccuracy.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.TopKCategoricalAccuracy.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.TopKCategoricalAccuracy.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.TopKCategoricalAccuracy.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.TopKCategoricalAccuracy.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.TopKCategoricalAccuracy.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.TopKCategoricalAccuracy.get_config": "tf.keras.metrics.Accuracy.get_config", + "tf.metrics.TopKCategoricalAccuracy.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.TopKCategoricalAccuracy.input": "tf.keras.layers.Layer.input", + "tf.metrics.TopKCategoricalAccuracy.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.TopKCategoricalAccuracy.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.TopKCategoricalAccuracy.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.TopKCategoricalAccuracy.name": "tf.keras.layers.Layer.name", + "tf.metrics.TopKCategoricalAccuracy.name_scope": "tf.Module.name_scope", + "tf.metrics.TopKCategoricalAccuracy.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.TopKCategoricalAccuracy.output": "tf.keras.layers.Layer.output", + "tf.metrics.TopKCategoricalAccuracy.reset_states": "tf.keras.metrics.Metric.reset_states", + "tf.metrics.TopKCategoricalAccuracy.result": "tf.keras.metrics.Accuracy.result", + "tf.metrics.TopKCategoricalAccuracy.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.TopKCategoricalAccuracy.submodules": "tf.Module.submodules", + "tf.metrics.TopKCategoricalAccuracy.trainable": "tf.keras.layers.Layer.trainable", + 
"tf.metrics.TopKCategoricalAccuracy.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.TopKCategoricalAccuracy.update_state": "tf.keras.metrics.Accuracy.update_state", + "tf.metrics.TopKCategoricalAccuracy.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.TrueNegatives": "tf.keras.metrics.TrueNegatives", + "tf.metrics.TrueNegatives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.TrueNegatives.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.TrueNegatives.__ge__": "tf.keras.Model.__ge__", + "tf.metrics.TrueNegatives.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.TrueNegatives.__init__": "tf.keras.metrics.TrueNegatives.__init__", + "tf.metrics.TrueNegatives.__le__": "tf.keras.Model.__le__", + "tf.metrics.TrueNegatives.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.TrueNegatives.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.TrueNegatives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.TrueNegatives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.TrueNegatives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.TrueNegatives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.TrueNegatives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.TrueNegatives.build": "tf.keras.layers.Layer.build", + "tf.metrics.TrueNegatives.call": "tf.keras.layers.Layer.call", + "tf.metrics.TrueNegatives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.TrueNegatives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.TrueNegatives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.TrueNegatives.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.TrueNegatives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.TrueNegatives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.TrueNegatives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + "tf.metrics.TrueNegatives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.TrueNegatives.input": "tf.keras.layers.Layer.input", + "tf.metrics.TrueNegatives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.TrueNegatives.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.TrueNegatives.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.TrueNegatives.name": "tf.keras.layers.Layer.name", + "tf.metrics.TrueNegatives.name_scope": "tf.Module.name_scope", + "tf.metrics.TrueNegatives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.TrueNegatives.output": "tf.keras.layers.Layer.output", + "tf.metrics.TrueNegatives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + "tf.metrics.TrueNegatives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.metrics.TrueNegatives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.TrueNegatives.submodules": "tf.Module.submodules", + "tf.metrics.TrueNegatives.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.TrueNegatives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.TrueNegatives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.metrics.TrueNegatives.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.TruePositives": "tf.keras.metrics.TruePositives", + "tf.metrics.TruePositives.__call__": "tf.keras.metrics.Metric.__call__", + "tf.metrics.TruePositives.__eq__": "tf.keras.Model.__eq__", + "tf.metrics.TruePositives.__ge__": 
"tf.keras.Model.__ge__", + "tf.metrics.TruePositives.__gt__": "tf.keras.Model.__gt__", + "tf.metrics.TruePositives.__init__": "tf.keras.metrics.TruePositives.__init__", + "tf.metrics.TruePositives.__le__": "tf.keras.Model.__le__", + "tf.metrics.TruePositives.__lt__": "tf.keras.Model.__lt__", + "tf.metrics.TruePositives.__ne__": "tf.keras.Model.__ne__", + "tf.metrics.TruePositives.__new__": "tf.keras.metrics.Metric.__new__", + "tf.metrics.TruePositives.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.metrics.TruePositives.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.metrics.TruePositives.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.metrics.TruePositives.add_weight": "tf.keras.metrics.Metric.add_weight", + "tf.metrics.TruePositives.build": "tf.keras.layers.Layer.build", + "tf.metrics.TruePositives.call": "tf.keras.layers.Layer.call", + "tf.metrics.TruePositives.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.metrics.TruePositives.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.metrics.TruePositives.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.metrics.TruePositives.count_params": "tf.keras.layers.Layer.count_params", + "tf.metrics.TruePositives.dtype": "tf.keras.metrics.Metric.dtype", + "tf.metrics.TruePositives.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.metrics.TruePositives.get_config": "tf.keras.metrics.FalseNegatives.get_config", + "tf.metrics.TruePositives.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.metrics.TruePositives.input": "tf.keras.layers.Layer.input", + "tf.metrics.TruePositives.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.metrics.TruePositives.losses": "tf.keras.layers.Layer.losses", + "tf.metrics.TruePositives.metrics": "tf.keras.layers.Layer.metrics", + "tf.metrics.TruePositives.name": "tf.keras.layers.Layer.name", + "tf.metrics.TruePositives.name_scope": "tf.Module.name_scope", + "tf.metrics.TruePositives.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.metrics.TruePositives.output": "tf.keras.layers.Layer.output", + "tf.metrics.TruePositives.reset_states": "tf.keras.metrics.FalseNegatives.reset_states", + "tf.metrics.TruePositives.result": "tf.keras.metrics.FalseNegatives.result", + "tf.metrics.TruePositives.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.metrics.TruePositives.submodules": "tf.Module.submodules", + "tf.metrics.TruePositives.trainable": "tf.keras.layers.Layer.trainable", + "tf.metrics.TruePositives.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.metrics.TruePositives.update_state": "tf.keras.metrics.FalseNegatives.update_state", + "tf.metrics.TruePositives.weights": "tf.keras.layers.Layer.weights", + "tf.metrics.binary_accuracy": "tf.keras.metrics.binary_accuracy", + "tf.metrics.binary_crossentropy": "tf.keras.losses.binary_crossentropy", + "tf.metrics.categorical_accuracy": "tf.keras.metrics.categorical_accuracy", + "tf.metrics.categorical_crossentropy": "tf.keras.losses.categorical_crossentropy", + "tf.metrics.deserialize": "tf.keras.metrics.deserialize", + "tf.metrics.get": "tf.keras.metrics.get", + "tf.metrics.hinge": "tf.keras.losses.hinge", + "tf.metrics.kld": "tf.keras.losses.KLD", + "tf.metrics.kullback_leibler_divergence": "tf.keras.losses.KLD", + "tf.metrics.mae": "tf.keras.losses.MAE", + "tf.metrics.mape": "tf.keras.losses.MAPE", + "tf.metrics.mean_absolute_error": "tf.keras.losses.MAE", + "tf.metrics.mean_absolute_percentage_error": 
"tf.keras.losses.MAPE", + "tf.metrics.mean_squared_error": "tf.keras.losses.MSE", + "tf.metrics.mean_squared_logarithmic_error": "tf.keras.losses.MSLE", + "tf.metrics.mse": "tf.keras.losses.MSE", + "tf.metrics.msle": "tf.keras.losses.MSLE", + "tf.metrics.poisson": "tf.keras.losses.poisson", + "tf.metrics.serialize": "tf.keras.metrics.serialize", + "tf.metrics.sparse_categorical_accuracy": "tf.keras.metrics.sparse_categorical_accuracy", + "tf.metrics.sparse_categorical_crossentropy": "tf.keras.losses.sparse_categorical_crossentropy", + "tf.metrics.sparse_top_k_categorical_accuracy": "tf.keras.metrics.sparse_top_k_categorical_accuracy", + "tf.metrics.squared_hinge": "tf.keras.losses.squared_hinge", + "tf.metrics.top_k_categorical_accuracy": "tf.keras.metrics.top_k_categorical_accuracy", + "tf.minimum": "tf.math.minimum", + "tf.mixed_precision.experimental.DynamicLossScale.__eq__": "tf.keras.Model.__eq__", + "tf.mixed_precision.experimental.DynamicLossScale.__ge__": "tf.keras.Model.__ge__", + "tf.mixed_precision.experimental.DynamicLossScale.__gt__": "tf.keras.Model.__gt__", + "tf.mixed_precision.experimental.DynamicLossScale.__le__": "tf.keras.Model.__le__", + "tf.mixed_precision.experimental.DynamicLossScale.__lt__": "tf.keras.Model.__lt__", + "tf.mixed_precision.experimental.DynamicLossScale.__ne__": "tf.keras.Model.__ne__", + "tf.mixed_precision.experimental.DynamicLossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.mixed_precision.experimental.FixedLossScale.__eq__": "tf.keras.Model.__eq__", + "tf.mixed_precision.experimental.FixedLossScale.__ge__": "tf.keras.Model.__ge__", + "tf.mixed_precision.experimental.FixedLossScale.__gt__": "tf.keras.Model.__gt__", + "tf.mixed_precision.experimental.FixedLossScale.__le__": "tf.keras.Model.__le__", + "tf.mixed_precision.experimental.FixedLossScale.__lt__": "tf.keras.Model.__lt__", + "tf.mixed_precision.experimental.FixedLossScale.__ne__": "tf.keras.Model.__ne__", + "tf.mixed_precision.experimental.FixedLossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.mixed_precision.experimental.LossScale.__eq__": "tf.keras.Model.__eq__", + "tf.mixed_precision.experimental.LossScale.__ge__": "tf.keras.Model.__ge__", + "tf.mixed_precision.experimental.LossScale.__gt__": "tf.keras.Model.__gt__", + "tf.mixed_precision.experimental.LossScale.__le__": "tf.keras.Model.__le__", + "tf.mixed_precision.experimental.LossScale.__lt__": "tf.keras.Model.__lt__", + "tf.mixed_precision.experimental.LossScale.__ne__": "tf.keras.Model.__ne__", + "tf.mixed_precision.experimental.LossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.multiply": "tf.math.multiply", + "tf.name_scope.__eq__": "tf.keras.Model.__eq__", + "tf.name_scope.__ge__": "tf.keras.Model.__ge__", + "tf.name_scope.__gt__": "tf.keras.Model.__gt__", + "tf.name_scope.__le__": "tf.keras.Model.__le__", + "tf.name_scope.__lt__": "tf.keras.Model.__lt__", + "tf.name_scope.__ne__": "tf.keras.Model.__ne__", + "tf.name_scope.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.negative": "tf.math.negative", + "tf.nn.RNNCellDeviceWrapper.__call__": "tf.keras.layers.Layer.__call__", + "tf.nn.RNNCellDeviceWrapper.__eq__": "tf.keras.Model.__eq__", + "tf.nn.RNNCellDeviceWrapper.__ge__": "tf.keras.Model.__ge__", + "tf.nn.RNNCellDeviceWrapper.__gt__": "tf.keras.Model.__gt__", + "tf.nn.RNNCellDeviceWrapper.__le__": "tf.keras.Model.__le__", + "tf.nn.RNNCellDeviceWrapper.__lt__": "tf.keras.Model.__lt__", + "tf.nn.RNNCellDeviceWrapper.__ne__": "tf.keras.Model.__ne__", + 
"tf.nn.RNNCellDeviceWrapper.__new__": "tf.keras.Model.__new__", + "tf.nn.RNNCellDeviceWrapper.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.nn.RNNCellDeviceWrapper.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.nn.RNNCellDeviceWrapper.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.nn.RNNCellDeviceWrapper.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.nn.RNNCellDeviceWrapper.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.nn.RNNCellDeviceWrapper.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.nn.RNNCellDeviceWrapper.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.nn.RNNCellDeviceWrapper.count_params": "tf.keras.layers.Layer.count_params", + "tf.nn.RNNCellDeviceWrapper.dtype": "tf.keras.layers.Layer.dtype", + "tf.nn.RNNCellDeviceWrapper.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.nn.RNNCellDeviceWrapper.get_initial_state": "tf.keras.layers.AbstractRNNCell.get_initial_state", + "tf.nn.RNNCellDeviceWrapper.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.nn.RNNCellDeviceWrapper.input": "tf.keras.layers.Layer.input", + "tf.nn.RNNCellDeviceWrapper.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.nn.RNNCellDeviceWrapper.losses": "tf.keras.layers.Layer.losses", + "tf.nn.RNNCellDeviceWrapper.metrics": "tf.keras.layers.Layer.metrics", + "tf.nn.RNNCellDeviceWrapper.name": "tf.keras.layers.Layer.name", + "tf.nn.RNNCellDeviceWrapper.name_scope": "tf.Module.name_scope", + "tf.nn.RNNCellDeviceWrapper.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.nn.RNNCellDeviceWrapper.output": "tf.keras.layers.Layer.output", + "tf.nn.RNNCellDeviceWrapper.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.nn.RNNCellDeviceWrapper.submodules": "tf.Module.submodules", + "tf.nn.RNNCellDeviceWrapper.trainable": "tf.keras.layers.Layer.trainable", + "tf.nn.RNNCellDeviceWrapper.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.nn.RNNCellDeviceWrapper.weights": "tf.keras.layers.Layer.weights", + "tf.nn.RNNCellDropoutWrapper.__call__": "tf.keras.layers.Layer.__call__", + "tf.nn.RNNCellDropoutWrapper.__eq__": "tf.keras.Model.__eq__", + "tf.nn.RNNCellDropoutWrapper.__ge__": "tf.keras.Model.__ge__", + "tf.nn.RNNCellDropoutWrapper.__gt__": "tf.keras.Model.__gt__", + "tf.nn.RNNCellDropoutWrapper.__le__": "tf.keras.Model.__le__", + "tf.nn.RNNCellDropoutWrapper.__lt__": "tf.keras.Model.__lt__", + "tf.nn.RNNCellDropoutWrapper.__ne__": "tf.keras.Model.__ne__", + "tf.nn.RNNCellDropoutWrapper.__new__": "tf.keras.Model.__new__", + "tf.nn.RNNCellDropoutWrapper.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.nn.RNNCellDropoutWrapper.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.nn.RNNCellDropoutWrapper.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.nn.RNNCellDropoutWrapper.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.nn.RNNCellDropoutWrapper.call": "tf.nn.RNNCellDeviceWrapper.call", + "tf.nn.RNNCellDropoutWrapper.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.nn.RNNCellDropoutWrapper.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.nn.RNNCellDropoutWrapper.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.nn.RNNCellDropoutWrapper.count_params": "tf.keras.layers.Layer.count_params", + "tf.nn.RNNCellDropoutWrapper.dtype": "tf.keras.layers.Layer.dtype", + "tf.nn.RNNCellDropoutWrapper.dynamic": "tf.keras.layers.Layer.dynamic", + 
"tf.nn.RNNCellDropoutWrapper.get_initial_state": "tf.keras.layers.AbstractRNNCell.get_initial_state", + "tf.nn.RNNCellDropoutWrapper.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.nn.RNNCellDropoutWrapper.input": "tf.keras.layers.Layer.input", + "tf.nn.RNNCellDropoutWrapper.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.nn.RNNCellDropoutWrapper.losses": "tf.keras.layers.Layer.losses", + "tf.nn.RNNCellDropoutWrapper.metrics": "tf.keras.layers.Layer.metrics", + "tf.nn.RNNCellDropoutWrapper.name": "tf.keras.layers.Layer.name", + "tf.nn.RNNCellDropoutWrapper.name_scope": "tf.Module.name_scope", + "tf.nn.RNNCellDropoutWrapper.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.nn.RNNCellDropoutWrapper.output": "tf.keras.layers.Layer.output", + "tf.nn.RNNCellDropoutWrapper.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.nn.RNNCellDropoutWrapper.submodules": "tf.Module.submodules", + "tf.nn.RNNCellDropoutWrapper.trainable": "tf.keras.layers.Layer.trainable", + "tf.nn.RNNCellDropoutWrapper.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.nn.RNNCellDropoutWrapper.weights": "tf.keras.layers.Layer.weights", + "tf.nn.RNNCellResidualWrapper.__call__": "tf.keras.layers.Layer.__call__", + "tf.nn.RNNCellResidualWrapper.__eq__": "tf.keras.Model.__eq__", + "tf.nn.RNNCellResidualWrapper.__ge__": "tf.keras.Model.__ge__", + "tf.nn.RNNCellResidualWrapper.__gt__": "tf.keras.Model.__gt__", + "tf.nn.RNNCellResidualWrapper.__le__": "tf.keras.Model.__le__", + "tf.nn.RNNCellResidualWrapper.__lt__": "tf.keras.Model.__lt__", + "tf.nn.RNNCellResidualWrapper.__ne__": "tf.keras.Model.__ne__", + "tf.nn.RNNCellResidualWrapper.__new__": "tf.keras.Model.__new__", + "tf.nn.RNNCellResidualWrapper.activity_regularizer": "tf.keras.layers.Layer.activity_regularizer", + "tf.nn.RNNCellResidualWrapper.add_loss": "tf.keras.layers.Layer.add_loss", + "tf.nn.RNNCellResidualWrapper.add_metric": "tf.keras.layers.Layer.add_metric", + "tf.nn.RNNCellResidualWrapper.add_weight": "tf.keras.layers.Layer.add_weight", + "tf.nn.RNNCellResidualWrapper.build": "tf.nn.RNNCellDeviceWrapper.build", + "tf.nn.RNNCellResidualWrapper.call": "tf.nn.RNNCellDeviceWrapper.call", + "tf.nn.RNNCellResidualWrapper.compute_mask": "tf.keras.layers.Layer.compute_mask", + "tf.nn.RNNCellResidualWrapper.compute_output_shape": "tf.keras.layers.Layer.compute_output_shape", + "tf.nn.RNNCellResidualWrapper.compute_output_signature": "tf.keras.layers.Layer.compute_output_signature", + "tf.nn.RNNCellResidualWrapper.count_params": "tf.keras.layers.Layer.count_params", + "tf.nn.RNNCellResidualWrapper.dtype": "tf.keras.layers.Layer.dtype", + "tf.nn.RNNCellResidualWrapper.dynamic": "tf.keras.layers.Layer.dynamic", + "tf.nn.RNNCellResidualWrapper.get_initial_state": "tf.keras.layers.AbstractRNNCell.get_initial_state", + "tf.nn.RNNCellResidualWrapper.get_weights": "tf.keras.layers.Layer.get_weights", + "tf.nn.RNNCellResidualWrapper.input": "tf.keras.layers.Layer.input", + "tf.nn.RNNCellResidualWrapper.input_spec": "tf.keras.layers.Layer.input_spec", + "tf.nn.RNNCellResidualWrapper.losses": "tf.keras.layers.Layer.losses", + "tf.nn.RNNCellResidualWrapper.metrics": "tf.keras.layers.Layer.metrics", + "tf.nn.RNNCellResidualWrapper.name": "tf.keras.layers.Layer.name", + "tf.nn.RNNCellResidualWrapper.name_scope": "tf.Module.name_scope", + "tf.nn.RNNCellResidualWrapper.non_trainable_weights": "tf.keras.layers.Layer.non_trainable_weights", + "tf.nn.RNNCellResidualWrapper.output": "tf.keras.layers.Layer.output", + 
"tf.nn.RNNCellResidualWrapper.set_weights": "tf.keras.layers.Layer.set_weights", + "tf.nn.RNNCellResidualWrapper.submodules": "tf.Module.submodules", + "tf.nn.RNNCellResidualWrapper.trainable": "tf.keras.layers.Layer.trainable", + "tf.nn.RNNCellResidualWrapper.trainable_weights": "tf.keras.layers.Layer.trainable_weights", + "tf.nn.RNNCellResidualWrapper.weights": "tf.keras.layers.Layer.weights", + "tf.nn.all_candidate_sampler": "tf.random.all_candidate_sampler", + "tf.nn.fixed_unigram_candidate_sampler": "tf.random.fixed_unigram_candidate_sampler", + "tf.nn.in_top_k": "tf.math.in_top_k", + "tf.nn.l2_normalize": "tf.math.l2_normalize", + "tf.nn.learned_unigram_candidate_sampler": "tf.random.learned_unigram_candidate_sampler", + "tf.nn.lrn": "tf.nn.local_response_normalization", + "tf.nn.sigmoid": "tf.math.sigmoid", + "tf.nn.softplus": "tf.math.softplus", + "tf.nn.space_to_batch": "tf.space_to_batch", + "tf.nn.tanh": "tf.math.tanh", + "tf.nn.top_k": "tf.math.top_k", + "tf.nn.zero_fraction": "tf.math.zero_fraction", + "tf.not_equal": "tf.math.not_equal", + "tf.ones_initializer.__call__": "tf.keras.initializers.Ones.__call__", + "tf.ones_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.ones_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.ones_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.ones_initializer.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.ones_initializer.__le__": "tf.keras.Model.__le__", + "tf.ones_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.ones_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.ones_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.ones_initializer.get_config": "tf.keras.initializers.Initializer.get_config", + "tf.optimizers": "tf.keras.optimizers", + "tf.optimizers.Adadelta": "tf.keras.optimizers.Adadelta", + "tf.optimizers.Adadelta.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.Adadelta.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.Adadelta.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.Adadelta.__init__": "tf.keras.optimizers.Adadelta.__init__", + "tf.optimizers.Adadelta.__le__": "tf.keras.Model.__le__", + "tf.optimizers.Adadelta.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.Adadelta.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.Adadelta.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.Adadelta.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.optimizers.Adadelta.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.optimizers.Adadelta.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.optimizers.Adadelta.get_config": "tf.keras.optimizers.Adadelta.get_config", + "tf.optimizers.Adadelta.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.optimizers.Adadelta.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.optimizers.Adadelta.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.optimizers.Adadelta.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.optimizers.Adadelta.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.optimizers.Adadelta.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.optimizers.Adadelta.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.optimizers.Adadelta.set_weights": "tf.keras.optimizers.Adadelta.set_weights", + "tf.optimizers.Adadelta.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.optimizers.Adadelta.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.optimizers.Adagrad": 
"tf.keras.optimizers.Adagrad", + "tf.optimizers.Adagrad.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.Adagrad.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.Adagrad.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.Adagrad.__init__": "tf.keras.optimizers.Adagrad.__init__", + "tf.optimizers.Adagrad.__le__": "tf.keras.Model.__le__", + "tf.optimizers.Adagrad.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.Adagrad.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.Adagrad.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.Adagrad.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.optimizers.Adagrad.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.optimizers.Adagrad.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.optimizers.Adagrad.get_config": "tf.keras.optimizers.Adagrad.get_config", + "tf.optimizers.Adagrad.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.optimizers.Adagrad.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.optimizers.Adagrad.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.optimizers.Adagrad.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.optimizers.Adagrad.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.optimizers.Adagrad.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.optimizers.Adagrad.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.optimizers.Adagrad.set_weights": "tf.keras.optimizers.Adagrad.set_weights", + "tf.optimizers.Adagrad.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.optimizers.Adagrad.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.optimizers.Adam": "tf.keras.optimizers.Adam", + "tf.optimizers.Adam.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.Adam.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.Adam.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.Adam.__init__": "tf.keras.optimizers.Adam.__init__", + "tf.optimizers.Adam.__le__": "tf.keras.Model.__le__", + "tf.optimizers.Adam.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.Adam.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.Adam.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.Adam.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.optimizers.Adam.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.optimizers.Adam.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.optimizers.Adam.get_config": "tf.keras.optimizers.Adam.get_config", + "tf.optimizers.Adam.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.optimizers.Adam.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.optimizers.Adam.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.optimizers.Adam.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.optimizers.Adam.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.optimizers.Adam.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.optimizers.Adam.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.optimizers.Adam.set_weights": "tf.keras.optimizers.Adam.set_weights", + "tf.optimizers.Adam.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.optimizers.Adam.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.optimizers.Adamax": "tf.keras.optimizers.Adamax", + "tf.optimizers.Adamax.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.Adamax.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.Adamax.__gt__": 
"tf.keras.Model.__gt__", + "tf.optimizers.Adamax.__init__": "tf.keras.optimizers.Adamax.__init__", + "tf.optimizers.Adamax.__le__": "tf.keras.Model.__le__", + "tf.optimizers.Adamax.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.Adamax.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.Adamax.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.Adamax.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.optimizers.Adamax.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.optimizers.Adamax.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.optimizers.Adamax.get_config": "tf.keras.optimizers.Adamax.get_config", + "tf.optimizers.Adamax.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.optimizers.Adamax.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.optimizers.Adamax.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.optimizers.Adamax.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.optimizers.Adamax.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.optimizers.Adamax.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.optimizers.Adamax.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.optimizers.Adamax.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.optimizers.Adamax.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.optimizers.Adamax.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.optimizers.Ftrl": "tf.keras.optimizers.Ftrl", + "tf.optimizers.Ftrl.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.Ftrl.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.Ftrl.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.Ftrl.__init__": "tf.keras.optimizers.Ftrl.__init__", + "tf.optimizers.Ftrl.__le__": "tf.keras.Model.__le__", + "tf.optimizers.Ftrl.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.Ftrl.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.Ftrl.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.Ftrl.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.optimizers.Ftrl.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.optimizers.Ftrl.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.optimizers.Ftrl.get_config": "tf.keras.optimizers.Ftrl.get_config", + "tf.optimizers.Ftrl.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.optimizers.Ftrl.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.optimizers.Ftrl.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.optimizers.Ftrl.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.optimizers.Ftrl.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.optimizers.Ftrl.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.optimizers.Ftrl.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.optimizers.Ftrl.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.optimizers.Ftrl.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.optimizers.Ftrl.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.optimizers.Nadam": "tf.keras.optimizers.Nadam", + "tf.optimizers.Nadam.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.Nadam.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.Nadam.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.Nadam.__init__": "tf.keras.optimizers.Nadam.__init__", + "tf.optimizers.Nadam.__le__": "tf.keras.Model.__le__", + "tf.optimizers.Nadam.__lt__": "tf.keras.Model.__lt__", + 
"tf.optimizers.Nadam.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.Nadam.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.Nadam.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.optimizers.Nadam.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.optimizers.Nadam.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.optimizers.Nadam.get_config": "tf.keras.optimizers.Nadam.get_config", + "tf.optimizers.Nadam.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.optimizers.Nadam.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.optimizers.Nadam.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.optimizers.Nadam.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.optimizers.Nadam.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.optimizers.Nadam.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.optimizers.Nadam.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.optimizers.Nadam.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.optimizers.Nadam.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.optimizers.Nadam.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.optimizers.Optimizer": "tf.keras.optimizers.Optimizer", + "tf.optimizers.Optimizer.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.Optimizer.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.Optimizer.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.Optimizer.__init__": "tf.keras.optimizers.Optimizer.__init__", + "tf.optimizers.Optimizer.__le__": "tf.keras.Model.__le__", + "tf.optimizers.Optimizer.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.Optimizer.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.Optimizer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.Optimizer.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.optimizers.Optimizer.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.optimizers.Optimizer.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.optimizers.Optimizer.get_config": "tf.keras.optimizers.Optimizer.get_config", + "tf.optimizers.Optimizer.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.optimizers.Optimizer.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.optimizers.Optimizer.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.optimizers.Optimizer.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.optimizers.Optimizer.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.optimizers.Optimizer.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.optimizers.Optimizer.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.optimizers.Optimizer.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.optimizers.Optimizer.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.optimizers.Optimizer.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.optimizers.RMSprop": "tf.keras.optimizers.RMSprop", + "tf.optimizers.RMSprop.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.RMSprop.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.RMSprop.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.RMSprop.__init__": "tf.keras.optimizers.RMSprop.__init__", + "tf.optimizers.RMSprop.__le__": "tf.keras.Model.__le__", + "tf.optimizers.RMSprop.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.RMSprop.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.RMSprop.__new__": 
"tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.RMSprop.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.optimizers.RMSprop.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.optimizers.RMSprop.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.optimizers.RMSprop.get_config": "tf.keras.optimizers.RMSprop.get_config", + "tf.optimizers.RMSprop.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.optimizers.RMSprop.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.optimizers.RMSprop.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.optimizers.RMSprop.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.optimizers.RMSprop.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.optimizers.RMSprop.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.optimizers.RMSprop.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.optimizers.RMSprop.set_weights": "tf.keras.optimizers.RMSprop.set_weights", + "tf.optimizers.RMSprop.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.optimizers.RMSprop.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.optimizers.SGD": "tf.keras.optimizers.SGD", + "tf.optimizers.SGD.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.SGD.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.SGD.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.SGD.__init__": "tf.keras.optimizers.SGD.__init__", + "tf.optimizers.SGD.__le__": "tf.keras.Model.__le__", + "tf.optimizers.SGD.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.SGD.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.SGD.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.SGD.add_slot": "tf.keras.optimizers.Optimizer.add_slot", + "tf.optimizers.SGD.add_weight": "tf.keras.optimizers.Optimizer.add_weight", + "tf.optimizers.SGD.apply_gradients": "tf.keras.optimizers.Optimizer.apply_gradients", + "tf.optimizers.SGD.get_config": "tf.keras.optimizers.SGD.get_config", + "tf.optimizers.SGD.get_gradients": "tf.keras.optimizers.Optimizer.get_gradients", + "tf.optimizers.SGD.get_slot": "tf.keras.optimizers.Optimizer.get_slot", + "tf.optimizers.SGD.get_slot_names": "tf.keras.optimizers.Optimizer.get_slot_names", + "tf.optimizers.SGD.get_updates": "tf.keras.optimizers.Optimizer.get_updates", + "tf.optimizers.SGD.get_weights": "tf.keras.optimizers.Optimizer.get_weights", + "tf.optimizers.SGD.iterations": "tf.keras.optimizers.Optimizer.iterations", + "tf.optimizers.SGD.minimize": "tf.keras.optimizers.Optimizer.minimize", + "tf.optimizers.SGD.set_weights": "tf.keras.optimizers.Optimizer.set_weights", + "tf.optimizers.SGD.variables": "tf.keras.optimizers.Optimizer.variables", + "tf.optimizers.SGD.weights": "tf.keras.optimizers.Optimizer.weights", + "tf.optimizers.deserialize": "tf.keras.optimizers.deserialize", + "tf.optimizers.get": "tf.keras.optimizers.get", + "tf.optimizers.schedules": "tf.keras.optimizers.schedules", + "tf.optimizers.schedules.ExponentialDecay": "tf.keras.optimizers.schedules.ExponentialDecay", + "tf.optimizers.schedules.ExponentialDecay.__call__": "tf.keras.optimizers.schedules.ExponentialDecay.__call__", + "tf.optimizers.schedules.ExponentialDecay.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.schedules.ExponentialDecay.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.schedules.ExponentialDecay.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.schedules.ExponentialDecay.__init__": "tf.keras.optimizers.schedules.ExponentialDecay.__init__", + 
"tf.optimizers.schedules.ExponentialDecay.__le__": "tf.keras.Model.__le__", + "tf.optimizers.schedules.ExponentialDecay.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.schedules.ExponentialDecay.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.schedules.ExponentialDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.schedules.ExponentialDecay.get_config": "tf.keras.optimizers.schedules.ExponentialDecay.get_config", + "tf.optimizers.schedules.InverseTimeDecay": "tf.keras.optimizers.schedules.InverseTimeDecay", + "tf.optimizers.schedules.InverseTimeDecay.__call__": "tf.keras.optimizers.schedules.InverseTimeDecay.__call__", + "tf.optimizers.schedules.InverseTimeDecay.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.schedules.InverseTimeDecay.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.schedules.InverseTimeDecay.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.schedules.InverseTimeDecay.__init__": "tf.keras.optimizers.schedules.InverseTimeDecay.__init__", + "tf.optimizers.schedules.InverseTimeDecay.__le__": "tf.keras.Model.__le__", + "tf.optimizers.schedules.InverseTimeDecay.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.schedules.InverseTimeDecay.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.schedules.InverseTimeDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.schedules.InverseTimeDecay.get_config": "tf.keras.optimizers.schedules.InverseTimeDecay.get_config", + "tf.optimizers.schedules.LearningRateSchedule": "tf.keras.optimizers.schedules.LearningRateSchedule", + "tf.optimizers.schedules.LearningRateSchedule.__call__": "tf.keras.optimizers.schedules.LearningRateSchedule.__call__", + "tf.optimizers.schedules.LearningRateSchedule.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.schedules.LearningRateSchedule.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.schedules.LearningRateSchedule.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.schedules.LearningRateSchedule.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.optimizers.schedules.LearningRateSchedule.__le__": "tf.keras.Model.__le__", + "tf.optimizers.schedules.LearningRateSchedule.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.schedules.LearningRateSchedule.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.schedules.LearningRateSchedule.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.schedules.LearningRateSchedule.get_config": "tf.keras.optimizers.schedules.LearningRateSchedule.get_config", + "tf.optimizers.schedules.PiecewiseConstantDecay": "tf.keras.optimizers.schedules.PiecewiseConstantDecay", + "tf.optimizers.schedules.PiecewiseConstantDecay.__call__": "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__call__", + "tf.optimizers.schedules.PiecewiseConstantDecay.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.schedules.PiecewiseConstantDecay.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.schedules.PiecewiseConstantDecay.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.schedules.PiecewiseConstantDecay.__init__": "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__init__", + "tf.optimizers.schedules.PiecewiseConstantDecay.__le__": "tf.keras.Model.__le__", + "tf.optimizers.schedules.PiecewiseConstantDecay.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.schedules.PiecewiseConstantDecay.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.schedules.PiecewiseConstantDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.schedules.PiecewiseConstantDecay.get_config": 
"tf.keras.optimizers.schedules.PiecewiseConstantDecay.get_config", + "tf.optimizers.schedules.PolynomialDecay": "tf.keras.optimizers.schedules.PolynomialDecay", + "tf.optimizers.schedules.PolynomialDecay.__call__": "tf.keras.optimizers.schedules.PolynomialDecay.__call__", + "tf.optimizers.schedules.PolynomialDecay.__eq__": "tf.keras.Model.__eq__", + "tf.optimizers.schedules.PolynomialDecay.__ge__": "tf.keras.Model.__ge__", + "tf.optimizers.schedules.PolynomialDecay.__gt__": "tf.keras.Model.__gt__", + "tf.optimizers.schedules.PolynomialDecay.__init__": "tf.keras.optimizers.schedules.PolynomialDecay.__init__", + "tf.optimizers.schedules.PolynomialDecay.__le__": "tf.keras.Model.__le__", + "tf.optimizers.schedules.PolynomialDecay.__lt__": "tf.keras.Model.__lt__", + "tf.optimizers.schedules.PolynomialDecay.__ne__": "tf.keras.Model.__ne__", + "tf.optimizers.schedules.PolynomialDecay.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.optimizers.schedules.PolynomialDecay.get_config": "tf.keras.optimizers.schedules.PolynomialDecay.get_config", + "tf.optimizers.schedules.deserialize": "tf.keras.optimizers.schedules.deserialize", + "tf.optimizers.schedules.serialize": "tf.keras.optimizers.schedules.serialize", + "tf.optimizers.serialize": "tf.keras.optimizers.serialize", + "tf.pow": "tf.math.pow", + "tf.profiler.experimental.Profile.__eq__": "tf.keras.Model.__eq__", + "tf.profiler.experimental.Profile.__ge__": "tf.keras.Model.__ge__", + "tf.profiler.experimental.Profile.__gt__": "tf.keras.Model.__gt__", + "tf.profiler.experimental.Profile.__le__": "tf.keras.Model.__le__", + "tf.profiler.experimental.Profile.__lt__": "tf.keras.Model.__lt__", + "tf.profiler.experimental.Profile.__ne__": "tf.keras.Model.__ne__", + "tf.profiler.experimental.Profile.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.qint16": "tf.dtypes.qint16", + "tf.qint32": "tf.dtypes.qint32", + "tf.qint8": "tf.dtypes.qint8", + "tf.queue.FIFOQueue.__eq__": "tf.keras.Model.__eq__", + "tf.queue.FIFOQueue.__ge__": "tf.keras.Model.__ge__", + "tf.queue.FIFOQueue.__gt__": "tf.keras.Model.__gt__", + "tf.queue.FIFOQueue.__le__": "tf.keras.Model.__le__", + "tf.queue.FIFOQueue.__lt__": "tf.keras.Model.__lt__", + "tf.queue.FIFOQueue.__ne__": "tf.keras.Model.__ne__", + "tf.queue.FIFOQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.queue.FIFOQueue.close": "tf.queue.QueueBase.close", + "tf.queue.FIFOQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.queue.FIFOQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.queue.FIFOQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.queue.FIFOQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.queue.FIFOQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.queue.FIFOQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.queue.FIFOQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.queue.FIFOQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.queue.FIFOQueue.name": "tf.queue.QueueBase.name", + "tf.queue.FIFOQueue.names": "tf.queue.QueueBase.names", + "tf.queue.FIFOQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.queue.FIFOQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.queue.FIFOQueue.size": "tf.queue.QueueBase.size", + "tf.queue.PaddingFIFOQueue.__eq__": "tf.keras.Model.__eq__", + "tf.queue.PaddingFIFOQueue.__ge__": "tf.keras.Model.__ge__", + "tf.queue.PaddingFIFOQueue.__gt__": "tf.keras.Model.__gt__", + "tf.queue.PaddingFIFOQueue.__le__": "tf.keras.Model.__le__", + "tf.queue.PaddingFIFOQueue.__lt__": "tf.keras.Model.__lt__", + 
"tf.queue.PaddingFIFOQueue.__ne__": "tf.keras.Model.__ne__", + "tf.queue.PaddingFIFOQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.queue.PaddingFIFOQueue.close": "tf.queue.QueueBase.close", + "tf.queue.PaddingFIFOQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.queue.PaddingFIFOQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.queue.PaddingFIFOQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.queue.PaddingFIFOQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.queue.PaddingFIFOQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.queue.PaddingFIFOQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.queue.PaddingFIFOQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.queue.PaddingFIFOQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.queue.PaddingFIFOQueue.name": "tf.queue.QueueBase.name", + "tf.queue.PaddingFIFOQueue.names": "tf.queue.QueueBase.names", + "tf.queue.PaddingFIFOQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.queue.PaddingFIFOQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.queue.PaddingFIFOQueue.size": "tf.queue.QueueBase.size", + "tf.queue.PriorityQueue.__eq__": "tf.keras.Model.__eq__", + "tf.queue.PriorityQueue.__ge__": "tf.keras.Model.__ge__", + "tf.queue.PriorityQueue.__gt__": "tf.keras.Model.__gt__", + "tf.queue.PriorityQueue.__le__": "tf.keras.Model.__le__", + "tf.queue.PriorityQueue.__lt__": "tf.keras.Model.__lt__", + "tf.queue.PriorityQueue.__ne__": "tf.keras.Model.__ne__", + "tf.queue.PriorityQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.queue.PriorityQueue.close": "tf.queue.QueueBase.close", + "tf.queue.PriorityQueue.dequeue": "tf.queue.QueueBase.dequeue", + "tf.queue.PriorityQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.queue.PriorityQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.queue.PriorityQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.queue.PriorityQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.queue.PriorityQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.queue.PriorityQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.queue.PriorityQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.queue.PriorityQueue.name": "tf.queue.QueueBase.name", + "tf.queue.PriorityQueue.names": "tf.queue.QueueBase.names", + "tf.queue.PriorityQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.queue.PriorityQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.queue.PriorityQueue.size": "tf.queue.QueueBase.size", + "tf.queue.QueueBase.__eq__": "tf.keras.Model.__eq__", + "tf.queue.QueueBase.__ge__": "tf.keras.Model.__ge__", + "tf.queue.QueueBase.__gt__": "tf.keras.Model.__gt__", + "tf.queue.QueueBase.__le__": "tf.keras.Model.__le__", + "tf.queue.QueueBase.__lt__": "tf.keras.Model.__lt__", + "tf.queue.QueueBase.__ne__": "tf.keras.Model.__ne__", + "tf.queue.QueueBase.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.queue.RandomShuffleQueue.__eq__": "tf.keras.Model.__eq__", + "tf.queue.RandomShuffleQueue.__ge__": "tf.keras.Model.__ge__", + "tf.queue.RandomShuffleQueue.__gt__": "tf.keras.Model.__gt__", + "tf.queue.RandomShuffleQueue.__le__": "tf.keras.Model.__le__", + "tf.queue.RandomShuffleQueue.__lt__": "tf.keras.Model.__lt__", + "tf.queue.RandomShuffleQueue.__ne__": "tf.keras.Model.__ne__", + "tf.queue.RandomShuffleQueue.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.queue.RandomShuffleQueue.close": "tf.queue.QueueBase.close", + "tf.queue.RandomShuffleQueue.dequeue": "tf.queue.QueueBase.dequeue", + 
"tf.queue.RandomShuffleQueue.dequeue_many": "tf.queue.QueueBase.dequeue_many", + "tf.queue.RandomShuffleQueue.dequeue_up_to": "tf.queue.QueueBase.dequeue_up_to", + "tf.queue.RandomShuffleQueue.dtypes": "tf.queue.QueueBase.dtypes", + "tf.queue.RandomShuffleQueue.enqueue": "tf.queue.QueueBase.enqueue", + "tf.queue.RandomShuffleQueue.enqueue_many": "tf.queue.QueueBase.enqueue_many", + "tf.queue.RandomShuffleQueue.from_list": "tf.queue.QueueBase.from_list", + "tf.queue.RandomShuffleQueue.is_closed": "tf.queue.QueueBase.is_closed", + "tf.queue.RandomShuffleQueue.name": "tf.queue.QueueBase.name", + "tf.queue.RandomShuffleQueue.names": "tf.queue.QueueBase.names", + "tf.queue.RandomShuffleQueue.queue_ref": "tf.queue.QueueBase.queue_ref", + "tf.queue.RandomShuffleQueue.shapes": "tf.queue.QueueBase.shapes", + "tf.queue.RandomShuffleQueue.size": "tf.queue.QueueBase.size", + "tf.quint16": "tf.dtypes.quint16", + "tf.quint8": "tf.dtypes.quint8", + "tf.random.Algorithm.name": "tf.distribute.InputReplicationMode.name", + "tf.random.Algorithm.value": "tf.distribute.InputReplicationMode.value", + "tf.random.Generator.__eq__": "tf.keras.Model.__eq__", + "tf.random.Generator.__ge__": "tf.keras.Model.__ge__", + "tf.random.Generator.__gt__": "tf.keras.Model.__gt__", + "tf.random.Generator.__le__": "tf.keras.Model.__le__", + "tf.random.Generator.__lt__": "tf.keras.Model.__lt__", + "tf.random.Generator.__ne__": "tf.keras.Model.__ne__", + "tf.random.Generator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.random.experimental.Algorithm": "tf.random.Algorithm", + "tf.random.experimental.Algorithm.PHILOX": "tf.random.Algorithm.PHILOX", + "tf.random.experimental.Algorithm.THREEFRY": "tf.random.Algorithm.THREEFRY", + "tf.random.experimental.Algorithm.name": "tf.distribute.InputReplicationMode.name", + "tf.random.experimental.Algorithm.value": "tf.distribute.InputReplicationMode.value", + "tf.random.experimental.Generator": "tf.random.Generator", + "tf.random.experimental.Generator.__eq__": "tf.keras.Model.__eq__", + "tf.random.experimental.Generator.__ge__": "tf.keras.Model.__ge__", + "tf.random.experimental.Generator.__gt__": "tf.keras.Model.__gt__", + "tf.random.experimental.Generator.__init__": "tf.random.Generator.__init__", + "tf.random.experimental.Generator.__le__": "tf.keras.Model.__le__", + "tf.random.experimental.Generator.__lt__": "tf.keras.Model.__lt__", + "tf.random.experimental.Generator.__ne__": "tf.keras.Model.__ne__", + "tf.random.experimental.Generator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.random.experimental.Generator.algorithm": "tf.random.Generator.algorithm", + "tf.random.experimental.Generator.binomial": "tf.random.Generator.binomial", + "tf.random.experimental.Generator.key": "tf.random.Generator.key", + "tf.random.experimental.Generator.make_seeds": "tf.random.Generator.make_seeds", + "tf.random.experimental.Generator.normal": "tf.random.Generator.normal", + "tf.random.experimental.Generator.reset": "tf.random.Generator.reset", + "tf.random.experimental.Generator.reset_from_key_counter": "tf.random.Generator.reset_from_key_counter", + "tf.random.experimental.Generator.reset_from_seed": "tf.random.Generator.reset_from_seed", + "tf.random.experimental.Generator.skip": "tf.random.Generator.skip", + "tf.random.experimental.Generator.split": "tf.random.Generator.split", + "tf.random.experimental.Generator.state": "tf.random.Generator.state", + "tf.random.experimental.Generator.truncated_normal": "tf.random.Generator.truncated_normal", + 
"tf.random.experimental.Generator.uniform": "tf.random.Generator.uniform", + "tf.random.experimental.Generator.uniform_full_int": "tf.random.Generator.uniform_full_int", + "tf.random.experimental.create_rng_state": "tf.random.create_rng_state", + "tf.random.experimental.get_global_generator": "tf.random.get_global_generator", + "tf.random.experimental.set_global_generator": "tf.random.set_global_generator", + "tf.random_normal_initializer.__call__": "tf.keras.initializers.RandomNormal.__call__", + "tf.random_normal_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.random_normal_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.random_normal_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.random_normal_initializer.__init__": "tf.keras.initializers.RandomNormal.__init__", + "tf.random_normal_initializer.__le__": "tf.keras.Model.__le__", + "tf.random_normal_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.random_normal_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.random_normal_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.random_normal_initializer.get_config": "tf.keras.initializers.RandomNormal.get_config", + "tf.random_uniform_initializer.__call__": "tf.keras.initializers.RandomUniform.__call__", + "tf.random_uniform_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.random_uniform_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.random_uniform_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.random_uniform_initializer.__init__": "tf.keras.initializers.RandomUniform.__init__", + "tf.random_uniform_initializer.__le__": "tf.keras.Model.__le__", + "tf.random_uniform_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.random_uniform_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.random_uniform_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.random_uniform_initializer.get_config": "tf.keras.initializers.RandomUniform.get_config", + "tf.reduce_any": "tf.math.reduce_any", + "tf.reduce_logsumexp": "tf.math.reduce_logsumexp", + "tf.reduce_max": "tf.math.reduce_max", + "tf.reduce_mean": "tf.math.reduce_mean", + "tf.reduce_min": "tf.math.reduce_min", + "tf.reduce_prod": "tf.math.reduce_prod", + "tf.reduce_sum": "tf.math.reduce_sum", + "tf.resource": "tf.dtypes.resource", + "tf.round": "tf.math.round", + "tf.saturate_cast": "tf.dtypes.saturate_cast", + "tf.saved_model.Asset.__eq__": "tf.keras.Model.__eq__", + "tf.saved_model.Asset.__ge__": "tf.keras.Model.__ge__", + "tf.saved_model.Asset.__gt__": "tf.keras.Model.__gt__", + "tf.saved_model.Asset.__le__": "tf.keras.Model.__le__", + "tf.saved_model.Asset.__lt__": "tf.keras.Model.__lt__", + "tf.saved_model.Asset.__ne__": "tf.keras.Model.__ne__", + "tf.saved_model.Asset.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.saved_model.SaveOptions.__eq__": "tf.keras.Model.__eq__", + "tf.saved_model.SaveOptions.__ge__": "tf.keras.Model.__ge__", + "tf.saved_model.SaveOptions.__gt__": "tf.keras.Model.__gt__", + "tf.saved_model.SaveOptions.__le__": "tf.keras.Model.__le__", + "tf.saved_model.SaveOptions.__lt__": "tf.keras.Model.__lt__", + "tf.saved_model.SaveOptions.__ne__": "tf.keras.Model.__ne__", + "tf.saved_model.SaveOptions.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.scalar_mul": "tf.math.scalar_mul", + "tf.sigmoid": "tf.math.sigmoid", + "tf.sign": "tf.math.sign", + "tf.sin": "tf.math.sin", + "tf.sinh": "tf.math.sinh", + "tf.sparse.SparseTensor.__eq__": "tf.keras.Model.__eq__", + "tf.sparse.SparseTensor.__ge__": "tf.keras.Model.__ge__", + 
"tf.sparse.SparseTensor.__gt__": "tf.keras.Model.__gt__", + "tf.sparse.SparseTensor.__le__": "tf.keras.Model.__le__", + "tf.sparse.SparseTensor.__lt__": "tf.keras.Model.__lt__", + "tf.sparse.SparseTensor.__ne__": "tf.keras.Model.__ne__", + "tf.sparse.SparseTensor.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.sqrt": "tf.math.sqrt", + "tf.square": "tf.math.square", + "tf.string": "tf.dtypes.string", + "tf.subtract": "tf.math.subtract", + "tf.summary.SummaryWriter.__eq__": "tf.keras.Model.__eq__", + "tf.summary.SummaryWriter.__ge__": "tf.keras.Model.__ge__", + "tf.summary.SummaryWriter.__gt__": "tf.keras.Model.__gt__", + "tf.summary.SummaryWriter.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.summary.SummaryWriter.__le__": "tf.keras.Model.__le__", + "tf.summary.SummaryWriter.__lt__": "tf.keras.Model.__lt__", + "tf.summary.SummaryWriter.__ne__": "tf.keras.Model.__ne__", + "tf.summary.SummaryWriter.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.tan": "tf.math.tan", + "tf.tanh": "tf.math.tanh", + "tf.test.Benchmark.__eq__": "tf.keras.Model.__eq__", + "tf.test.Benchmark.__ge__": "tf.keras.Model.__ge__", + "tf.test.Benchmark.__gt__": "tf.keras.Model.__gt__", + "tf.test.Benchmark.__le__": "tf.keras.Model.__le__", + "tf.test.Benchmark.__lt__": "tf.keras.Model.__lt__", + "tf.test.Benchmark.__ne__": "tf.keras.Model.__ne__", + "tf.test.Benchmark.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.test.TestCase.__ge__": "tf.keras.Model.__ge__", + "tf.test.TestCase.__gt__": "tf.keras.Model.__gt__", + "tf.test.TestCase.__le__": "tf.keras.Model.__le__", + "tf.test.TestCase.__lt__": "tf.keras.Model.__lt__", + "tf.test.TestCase.__ne__": "tf.keras.Model.__ne__", + "tf.test.TestCase.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.test.TestCase.assertCountEqual": "tf.test.TestCase.assertItemsEqual", + "tf.test.TestCase.assertRaisesRegex": "tf.test.TestCase.assertRaisesRegexp", + "tf.test.TestCase.failIfAlmostEqual": "tf.test.TestCase.assertNotAlmostEquals", + "tf.test.TestCase.failIfEqual": "tf.test.TestCase.assertNotEquals", + "tf.test.TestCase.failUnless": "tf.test.TestCase.assert_", + "tf.test.TestCase.failUnlessAlmostEqual": "tf.test.TestCase.assertAlmostEquals", + "tf.test.TestCase.failUnlessEqual": "tf.test.TestCase.assertEquals", + "tf.test.TestCase.failureException.__eq__": "tf.keras.Model.__eq__", + "tf.test.TestCase.failureException.__ge__": "tf.keras.Model.__ge__", + "tf.test.TestCase.failureException.__gt__": "tf.keras.Model.__gt__", + "tf.test.TestCase.failureException.__le__": "tf.keras.Model.__le__", + "tf.test.TestCase.failureException.__lt__": "tf.keras.Model.__lt__", + "tf.test.TestCase.failureException.__ne__": "tf.keras.Model.__ne__", + "tf.test.TestCase.failureException.args": "tf.errors.AbortedError.args", + "tf.test.TestCase.failureException.with_traceback": "tf.errors.AbortedError.with_traceback", + "tf.tpu.experimental.DeviceAssignment.__eq__": "tf.keras.Model.__eq__", + "tf.tpu.experimental.DeviceAssignment.__ge__": "tf.keras.Model.__ge__", + "tf.tpu.experimental.DeviceAssignment.__gt__": "tf.keras.Model.__gt__", + "tf.tpu.experimental.DeviceAssignment.__le__": "tf.keras.Model.__le__", + "tf.tpu.experimental.DeviceAssignment.__lt__": "tf.keras.Model.__lt__", + "tf.tpu.experimental.DeviceAssignment.__ne__": "tf.keras.Model.__ne__", + "tf.tpu.experimental.DeviceAssignment.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.train.Checkpoint.__eq__": "tf.keras.Model.__eq__", + "tf.train.Checkpoint.__ge__": "tf.keras.Model.__ge__", + 
"tf.train.Checkpoint.__gt__": "tf.keras.Model.__gt__", + "tf.train.Checkpoint.__le__": "tf.keras.Model.__le__", + "tf.train.Checkpoint.__lt__": "tf.keras.Model.__lt__", + "tf.train.Checkpoint.__ne__": "tf.keras.Model.__ne__", + "tf.train.Checkpoint.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.train.CheckpointManager.__eq__": "tf.keras.Model.__eq__", + "tf.train.CheckpointManager.__ge__": "tf.keras.Model.__ge__", + "tf.train.CheckpointManager.__gt__": "tf.keras.Model.__gt__", + "tf.train.CheckpointManager.__le__": "tf.keras.Model.__le__", + "tf.train.CheckpointManager.__lt__": "tf.keras.Model.__lt__", + "tf.train.CheckpointManager.__ne__": "tf.keras.Model.__ne__", + "tf.train.CheckpointManager.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.train.ClusterDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.ClusterDef.Clear": "tf.train.BytesList.Clear", + "tf.train.ClusterDef.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.ClusterDef.ClearField": "tf.train.BytesList.ClearField", + "tf.train.ClusterDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.ClusterDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.ClusterDef.Extensions": "tf.train.BytesList.Extensions", + "tf.train.ClusterDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.ClusterDef.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.ClusterDef.HasField": "tf.train.BytesList.HasField", + "tf.train.ClusterDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.ClusterDef.ListFields": "tf.train.BytesList.ListFields", + "tf.train.ClusterDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.ClusterDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.ClusterDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.ClusterDef.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.ClusterDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.ClusterDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.ClusterDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.ClusterDef.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.ClusterDef.__eq__": "tf.train.BytesList.__eq__", + "tf.train.ClusterDef.__ge__": "tf.train.BytesList.__ge__", + "tf.train.ClusterDef.__gt__": "tf.train.BytesList.__gt__", + "tf.train.ClusterDef.__init__": "tf.train.BytesList.__init__", + "tf.train.ClusterDef.__le__": "tf.train.BytesList.__le__", + "tf.train.ClusterDef.__lt__": "tf.train.BytesList.__lt__", + "tf.train.ClusterDef.__ne__": "tf.train.BytesList.__ne__", + "tf.train.ClusterDef.__new__": "tf.train.BytesList.__new__", + "tf.train.ClusterSpec.__ge__": "tf.keras.Model.__ge__", + "tf.train.ClusterSpec.__gt__": "tf.keras.Model.__gt__", + "tf.train.ClusterSpec.__le__": "tf.keras.Model.__le__", + "tf.train.ClusterSpec.__lt__": "tf.keras.Model.__lt__", + "tf.train.ClusterSpec.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.train.ClusterSpec.__nonzero__": "tf.train.ClusterSpec.__bool__", + "tf.train.Coordinator.__eq__": "tf.keras.Model.__eq__", + "tf.train.Coordinator.__ge__": "tf.keras.Model.__ge__", + "tf.train.Coordinator.__gt__": "tf.keras.Model.__gt__", + "tf.train.Coordinator.__le__": "tf.keras.Model.__le__", + "tf.train.Coordinator.__lt__": "tf.keras.Model.__lt__", + "tf.train.Coordinator.__ne__": "tf.keras.Model.__ne__", + "tf.train.Coordinator.__new__": "tf.keras.callbacks.BaseLogger.__new__", + 
"tf.train.Example.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.Example.Clear": "tf.train.BytesList.Clear", + "tf.train.Example.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.Example.ClearField": "tf.train.BytesList.ClearField", + "tf.train.Example.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.Example.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.Example.Extensions": "tf.train.BytesList.Extensions", + "tf.train.Example.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.Example.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.Example.HasField": "tf.train.BytesList.HasField", + "tf.train.Example.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.Example.ListFields": "tf.train.BytesList.ListFields", + "tf.train.Example.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.Example.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.Example.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.Example.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.Example.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.Example.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.Example.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.Example.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.Example.__eq__": "tf.train.BytesList.__eq__", + "tf.train.Example.__ge__": "tf.train.BytesList.__ge__", + "tf.train.Example.__gt__": "tf.train.BytesList.__gt__", + "tf.train.Example.__init__": "tf.train.BytesList.__init__", + "tf.train.Example.__le__": "tf.train.BytesList.__le__", + "tf.train.Example.__lt__": "tf.train.BytesList.__lt__", + "tf.train.Example.__ne__": "tf.train.BytesList.__ne__", + "tf.train.Example.__new__": "tf.train.BytesList.__new__", + "tf.train.ExponentialMovingAverage.__eq__": "tf.keras.Model.__eq__", + "tf.train.ExponentialMovingAverage.__ge__": "tf.keras.Model.__ge__", + "tf.train.ExponentialMovingAverage.__gt__": "tf.keras.Model.__gt__", + "tf.train.ExponentialMovingAverage.__le__": "tf.keras.Model.__le__", + "tf.train.ExponentialMovingAverage.__lt__": "tf.keras.Model.__lt__", + "tf.train.ExponentialMovingAverage.__ne__": "tf.keras.Model.__ne__", + "tf.train.ExponentialMovingAverage.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.train.Feature.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.Feature.Clear": "tf.train.BytesList.Clear", + "tf.train.Feature.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.Feature.ClearField": "tf.train.BytesList.ClearField", + "tf.train.Feature.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.Feature.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.Feature.Extensions": "tf.train.BytesList.Extensions", + "tf.train.Feature.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.Feature.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.Feature.HasField": "tf.train.BytesList.HasField", + "tf.train.Feature.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.Feature.ListFields": "tf.train.BytesList.ListFields", + "tf.train.Feature.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.Feature.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.Feature.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.Feature.SerializePartialToString": 
"tf.train.BytesList.SerializePartialToString", + "tf.train.Feature.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.Feature.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.Feature.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.Feature.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.Feature.__eq__": "tf.train.BytesList.__eq__", + "tf.train.Feature.__ge__": "tf.train.BytesList.__ge__", + "tf.train.Feature.__gt__": "tf.train.BytesList.__gt__", + "tf.train.Feature.__init__": "tf.train.BytesList.__init__", + "tf.train.Feature.__le__": "tf.train.BytesList.__le__", + "tf.train.Feature.__lt__": "tf.train.BytesList.__lt__", + "tf.train.Feature.__ne__": "tf.train.BytesList.__ne__", + "tf.train.Feature.__new__": "tf.train.BytesList.__new__", + "tf.train.FeatureList.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.FeatureList.Clear": "tf.train.BytesList.Clear", + "tf.train.FeatureList.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.FeatureList.ClearField": "tf.train.BytesList.ClearField", + "tf.train.FeatureList.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.FeatureList.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.FeatureList.Extensions": "tf.train.BytesList.Extensions", + "tf.train.FeatureList.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.FeatureList.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.FeatureList.HasField": "tf.train.BytesList.HasField", + "tf.train.FeatureList.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.FeatureList.ListFields": "tf.train.BytesList.ListFields", + "tf.train.FeatureList.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.FeatureList.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.FeatureList.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.FeatureList.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.FeatureList.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.FeatureList.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.FeatureList.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.FeatureList.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.FeatureList.__eq__": "tf.train.BytesList.__eq__", + "tf.train.FeatureList.__ge__": "tf.train.BytesList.__ge__", + "tf.train.FeatureList.__gt__": "tf.train.BytesList.__gt__", + "tf.train.FeatureList.__init__": "tf.train.BytesList.__init__", + "tf.train.FeatureList.__le__": "tf.train.BytesList.__le__", + "tf.train.FeatureList.__lt__": "tf.train.BytesList.__lt__", + "tf.train.FeatureList.__ne__": "tf.train.BytesList.__ne__", + "tf.train.FeatureList.__new__": "tf.train.BytesList.__new__", + "tf.train.FeatureLists.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.FeatureLists.Clear": "tf.train.BytesList.Clear", + "tf.train.FeatureLists.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.FeatureLists.ClearField": "tf.train.BytesList.ClearField", + "tf.train.FeatureLists.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.FeatureLists.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.FeatureLists.Extensions": "tf.train.BytesList.Extensions", + "tf.train.FeatureLists.FeatureListEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.FeatureLists.FeatureListEntry.Clear": "tf.train.BytesList.Clear", + "tf.train.FeatureLists.FeatureListEntry.ClearExtension": 
"tf.train.BytesList.ClearExtension", + "tf.train.FeatureLists.FeatureListEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.train.FeatureLists.FeatureListEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.FeatureLists.FeatureListEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.FeatureLists.FeatureListEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.train.FeatureLists.FeatureListEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.FeatureLists.FeatureListEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.FeatureLists.FeatureListEntry.HasField": "tf.train.BytesList.HasField", + "tf.train.FeatureLists.FeatureListEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.FeatureLists.FeatureListEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.train.FeatureLists.FeatureListEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.FeatureLists.FeatureListEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.FeatureLists.FeatureListEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.FeatureLists.FeatureListEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.FeatureLists.FeatureListEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.FeatureLists.FeatureListEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.FeatureLists.FeatureListEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.FeatureLists.FeatureListEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.FeatureLists.FeatureListEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.train.FeatureLists.FeatureListEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.train.FeatureLists.FeatureListEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.train.FeatureLists.FeatureListEntry.__init__": "tf.train.BytesList.__init__", + "tf.train.FeatureLists.FeatureListEntry.__le__": "tf.train.BytesList.__le__", + "tf.train.FeatureLists.FeatureListEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.train.FeatureLists.FeatureListEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.train.FeatureLists.FeatureListEntry.__new__": "tf.train.BytesList.__new__", + "tf.train.FeatureLists.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.FeatureLists.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.FeatureLists.HasField": "tf.train.BytesList.HasField", + "tf.train.FeatureLists.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.FeatureLists.ListFields": "tf.train.BytesList.ListFields", + "tf.train.FeatureLists.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.FeatureLists.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.FeatureLists.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.FeatureLists.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.FeatureLists.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.FeatureLists.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.FeatureLists.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.FeatureLists.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.FeatureLists.__eq__": "tf.train.BytesList.__eq__", + "tf.train.FeatureLists.__ge__": "tf.train.BytesList.__ge__", + "tf.train.FeatureLists.__gt__": "tf.train.BytesList.__gt__", + "tf.train.FeatureLists.__init__": 
"tf.train.BytesList.__init__", + "tf.train.FeatureLists.__le__": "tf.train.BytesList.__le__", + "tf.train.FeatureLists.__lt__": "tf.train.BytesList.__lt__", + "tf.train.FeatureLists.__ne__": "tf.train.BytesList.__ne__", + "tf.train.FeatureLists.__new__": "tf.train.BytesList.__new__", + "tf.train.Features.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.Features.Clear": "tf.train.BytesList.Clear", + "tf.train.Features.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.Features.ClearField": "tf.train.BytesList.ClearField", + "tf.train.Features.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.Features.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.Features.Extensions": "tf.train.BytesList.Extensions", + "tf.train.Features.FeatureEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.Features.FeatureEntry.Clear": "tf.train.BytesList.Clear", + "tf.train.Features.FeatureEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.Features.FeatureEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.train.Features.FeatureEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.Features.FeatureEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.Features.FeatureEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.train.Features.FeatureEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.Features.FeatureEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.Features.FeatureEntry.HasField": "tf.train.BytesList.HasField", + "tf.train.Features.FeatureEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.Features.FeatureEntry.ListFields": "tf.train.BytesList.ListFields", + "tf.train.Features.FeatureEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.Features.FeatureEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.Features.FeatureEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.Features.FeatureEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.Features.FeatureEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.Features.FeatureEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.Features.FeatureEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.Features.FeatureEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.Features.FeatureEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.train.Features.FeatureEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.train.Features.FeatureEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.train.Features.FeatureEntry.__init__": "tf.train.BytesList.__init__", + "tf.train.Features.FeatureEntry.__le__": "tf.train.BytesList.__le__", + "tf.train.Features.FeatureEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.train.Features.FeatureEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.train.Features.FeatureEntry.__new__": "tf.train.BytesList.__new__", + "tf.train.Features.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.Features.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.Features.HasField": "tf.train.BytesList.HasField", + "tf.train.Features.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.Features.ListFields": "tf.train.BytesList.ListFields", + "tf.train.Features.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.Features.MergeFromString": 
"tf.train.BytesList.MergeFromString", + "tf.train.Features.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.Features.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.Features.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.Features.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.Features.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.Features.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.Features.__eq__": "tf.train.BytesList.__eq__", + "tf.train.Features.__ge__": "tf.train.BytesList.__ge__", + "tf.train.Features.__gt__": "tf.train.BytesList.__gt__", + "tf.train.Features.__init__": "tf.train.BytesList.__init__", + "tf.train.Features.__le__": "tf.train.BytesList.__le__", + "tf.train.Features.__lt__": "tf.train.BytesList.__lt__", + "tf.train.Features.__ne__": "tf.train.BytesList.__ne__", + "tf.train.Features.__new__": "tf.train.BytesList.__new__", + "tf.train.FloatList.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.FloatList.Clear": "tf.train.BytesList.Clear", + "tf.train.FloatList.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.FloatList.ClearField": "tf.train.BytesList.ClearField", + "tf.train.FloatList.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.FloatList.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.FloatList.Extensions": "tf.train.BytesList.Extensions", + "tf.train.FloatList.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.FloatList.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.FloatList.HasField": "tf.train.BytesList.HasField", + "tf.train.FloatList.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.FloatList.ListFields": "tf.train.BytesList.ListFields", + "tf.train.FloatList.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.FloatList.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.FloatList.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.FloatList.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.FloatList.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.FloatList.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.FloatList.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.FloatList.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.FloatList.__eq__": "tf.train.BytesList.__eq__", + "tf.train.FloatList.__ge__": "tf.train.BytesList.__ge__", + "tf.train.FloatList.__gt__": "tf.train.BytesList.__gt__", + "tf.train.FloatList.__init__": "tf.train.BytesList.__init__", + "tf.train.FloatList.__le__": "tf.train.BytesList.__le__", + "tf.train.FloatList.__lt__": "tf.train.BytesList.__lt__", + "tf.train.FloatList.__ne__": "tf.train.BytesList.__ne__", + "tf.train.FloatList.__new__": "tf.train.BytesList.__new__", + "tf.train.Int64List.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.Int64List.Clear": "tf.train.BytesList.Clear", + "tf.train.Int64List.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.Int64List.ClearField": "tf.train.BytesList.ClearField", + "tf.train.Int64List.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.Int64List.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.Int64List.Extensions": "tf.train.BytesList.Extensions", + "tf.train.Int64List.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.Int64List.HasExtension": 
"tf.train.BytesList.HasExtension", + "tf.train.Int64List.HasField": "tf.train.BytesList.HasField", + "tf.train.Int64List.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.Int64List.ListFields": "tf.train.BytesList.ListFields", + "tf.train.Int64List.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.Int64List.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.Int64List.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.Int64List.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.Int64List.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.Int64List.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.Int64List.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.Int64List.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.Int64List.__eq__": "tf.train.BytesList.__eq__", + "tf.train.Int64List.__ge__": "tf.train.BytesList.__ge__", + "tf.train.Int64List.__gt__": "tf.train.BytesList.__gt__", + "tf.train.Int64List.__init__": "tf.train.BytesList.__init__", + "tf.train.Int64List.__le__": "tf.train.BytesList.__le__", + "tf.train.Int64List.__lt__": "tf.train.BytesList.__lt__", + "tf.train.Int64List.__ne__": "tf.train.BytesList.__ne__", + "tf.train.Int64List.__new__": "tf.train.BytesList.__new__", + "tf.train.JobDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.JobDef.Clear": "tf.train.BytesList.Clear", + "tf.train.JobDef.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.JobDef.ClearField": "tf.train.BytesList.ClearField", + "tf.train.JobDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.JobDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.JobDef.Extensions": "tf.train.BytesList.Extensions", + "tf.train.JobDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.JobDef.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.JobDef.HasField": "tf.train.BytesList.HasField", + "tf.train.JobDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.JobDef.ListFields": "tf.train.BytesList.ListFields", + "tf.train.JobDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.JobDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.JobDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.JobDef.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.JobDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.JobDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.JobDef.TasksEntry.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.JobDef.TasksEntry.Clear": "tf.train.BytesList.Clear", + "tf.train.JobDef.TasksEntry.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.JobDef.TasksEntry.ClearField": "tf.train.BytesList.ClearField", + "tf.train.JobDef.TasksEntry.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.JobDef.TasksEntry.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.JobDef.TasksEntry.Extensions": "tf.train.BytesList.Extensions", + "tf.train.JobDef.TasksEntry.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.JobDef.TasksEntry.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.JobDef.TasksEntry.HasField": "tf.train.BytesList.HasField", + "tf.train.JobDef.TasksEntry.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.JobDef.TasksEntry.ListFields": 
"tf.train.BytesList.ListFields", + "tf.train.JobDef.TasksEntry.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.JobDef.TasksEntry.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.JobDef.TasksEntry.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.JobDef.TasksEntry.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.JobDef.TasksEntry.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.JobDef.TasksEntry.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.JobDef.TasksEntry.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.JobDef.TasksEntry.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.JobDef.TasksEntry.__eq__": "tf.train.BytesList.__eq__", + "tf.train.JobDef.TasksEntry.__ge__": "tf.train.BytesList.__ge__", + "tf.train.JobDef.TasksEntry.__gt__": "tf.train.BytesList.__gt__", + "tf.train.JobDef.TasksEntry.__init__": "tf.train.BytesList.__init__", + "tf.train.JobDef.TasksEntry.__le__": "tf.train.BytesList.__le__", + "tf.train.JobDef.TasksEntry.__lt__": "tf.train.BytesList.__lt__", + "tf.train.JobDef.TasksEntry.__ne__": "tf.train.BytesList.__ne__", + "tf.train.JobDef.TasksEntry.__new__": "tf.train.BytesList.__new__", + "tf.train.JobDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.JobDef.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.JobDef.__eq__": "tf.train.BytesList.__eq__", + "tf.train.JobDef.__ge__": "tf.train.BytesList.__ge__", + "tf.train.JobDef.__gt__": "tf.train.BytesList.__gt__", + "tf.train.JobDef.__init__": "tf.train.BytesList.__init__", + "tf.train.JobDef.__le__": "tf.train.BytesList.__le__", + "tf.train.JobDef.__lt__": "tf.train.BytesList.__lt__", + "tf.train.JobDef.__ne__": "tf.train.BytesList.__ne__", + "tf.train.JobDef.__new__": "tf.train.BytesList.__new__", + "tf.train.SequenceExample.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.SequenceExample.Clear": "tf.train.BytesList.Clear", + "tf.train.SequenceExample.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.SequenceExample.ClearField": "tf.train.BytesList.ClearField", + "tf.train.SequenceExample.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.SequenceExample.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.SequenceExample.Extensions": "tf.train.BytesList.Extensions", + "tf.train.SequenceExample.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.SequenceExample.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.SequenceExample.HasField": "tf.train.BytesList.HasField", + "tf.train.SequenceExample.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.SequenceExample.ListFields": "tf.train.BytesList.ListFields", + "tf.train.SequenceExample.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.SequenceExample.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.SequenceExample.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.SequenceExample.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.SequenceExample.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.SequenceExample.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.SequenceExample.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.SequenceExample.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.SequenceExample.__eq__": "tf.train.BytesList.__eq__", + "tf.train.SequenceExample.__ge__": 
"tf.train.BytesList.__ge__", + "tf.train.SequenceExample.__gt__": "tf.train.BytesList.__gt__", + "tf.train.SequenceExample.__init__": "tf.train.BytesList.__init__", + "tf.train.SequenceExample.__le__": "tf.train.BytesList.__le__", + "tf.train.SequenceExample.__lt__": "tf.train.BytesList.__lt__", + "tf.train.SequenceExample.__ne__": "tf.train.BytesList.__ne__", + "tf.train.SequenceExample.__new__": "tf.train.BytesList.__new__", + "tf.train.ServerDef.ByteSize": "tf.train.BytesList.ByteSize", + "tf.train.ServerDef.Clear": "tf.train.BytesList.Clear", + "tf.train.ServerDef.ClearExtension": "tf.train.BytesList.ClearExtension", + "tf.train.ServerDef.ClearField": "tf.train.BytesList.ClearField", + "tf.train.ServerDef.CopyFrom": "tf.train.BytesList.CopyFrom", + "tf.train.ServerDef.DiscardUnknownFields": "tf.train.BytesList.DiscardUnknownFields", + "tf.train.ServerDef.Extensions": "tf.train.BytesList.Extensions", + "tf.train.ServerDef.FindInitializationErrors": "tf.train.BytesList.FindInitializationErrors", + "tf.train.ServerDef.HasExtension": "tf.train.BytesList.HasExtension", + "tf.train.ServerDef.HasField": "tf.train.BytesList.HasField", + "tf.train.ServerDef.IsInitialized": "tf.train.BytesList.IsInitialized", + "tf.train.ServerDef.ListFields": "tf.train.BytesList.ListFields", + "tf.train.ServerDef.MergeFrom": "tf.train.BytesList.MergeFrom", + "tf.train.ServerDef.MergeFromString": "tf.train.BytesList.MergeFromString", + "tf.train.ServerDef.ParseFromString": "tf.train.BytesList.ParseFromString", + "tf.train.ServerDef.SerializePartialToString": "tf.train.BytesList.SerializePartialToString", + "tf.train.ServerDef.SerializeToString": "tf.train.BytesList.SerializeToString", + "tf.train.ServerDef.SetInParent": "tf.train.BytesList.SetInParent", + "tf.train.ServerDef.UnknownFields": "tf.train.BytesList.UnknownFields", + "tf.train.ServerDef.WhichOneof": "tf.train.BytesList.WhichOneof", + "tf.train.ServerDef.__eq__": "tf.train.BytesList.__eq__", + "tf.train.ServerDef.__ge__": "tf.train.BytesList.__ge__", + "tf.train.ServerDef.__gt__": "tf.train.BytesList.__gt__", + "tf.train.ServerDef.__init__": "tf.train.BytesList.__init__", + "tf.train.ServerDef.__le__": "tf.train.BytesList.__le__", + "tf.train.ServerDef.__lt__": "tf.train.BytesList.__lt__", + "tf.train.ServerDef.__ne__": "tf.train.BytesList.__ne__", + "tf.train.ServerDef.__new__": "tf.train.BytesList.__new__", + "tf.train.experimental.DynamicLossScale": "tf.mixed_precision.experimental.DynamicLossScale", + "tf.train.experimental.DynamicLossScale.__call__": "tf.mixed_precision.experimental.DynamicLossScale.__call__", + "tf.train.experimental.DynamicLossScale.__eq__": "tf.keras.Model.__eq__", + "tf.train.experimental.DynamicLossScale.__ge__": "tf.keras.Model.__ge__", + "tf.train.experimental.DynamicLossScale.__gt__": "tf.keras.Model.__gt__", + "tf.train.experimental.DynamicLossScale.__init__": "tf.mixed_precision.experimental.DynamicLossScale.__init__", + "tf.train.experimental.DynamicLossScale.__le__": "tf.keras.Model.__le__", + "tf.train.experimental.DynamicLossScale.__lt__": "tf.keras.Model.__lt__", + "tf.train.experimental.DynamicLossScale.__ne__": "tf.keras.Model.__ne__", + "tf.train.experimental.DynamicLossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.train.experimental.DynamicLossScale.get_config": "tf.mixed_precision.experimental.DynamicLossScale.get_config", + "tf.train.experimental.DynamicLossScale.increment_period": "tf.mixed_precision.experimental.DynamicLossScale.increment_period", + 
"tf.train.experimental.DynamicLossScale.initial_loss_scale": "tf.mixed_precision.experimental.DynamicLossScale.initial_loss_scale", + "tf.train.experimental.DynamicLossScale.multiplier": "tf.mixed_precision.experimental.DynamicLossScale.multiplier", + "tf.train.experimental.DynamicLossScale.update": "tf.mixed_precision.experimental.DynamicLossScale.update", + "tf.train.experimental.FixedLossScale": "tf.mixed_precision.experimental.FixedLossScale", + "tf.train.experimental.FixedLossScale.__call__": "tf.mixed_precision.experimental.FixedLossScale.__call__", + "tf.train.experimental.FixedLossScale.__eq__": "tf.keras.Model.__eq__", + "tf.train.experimental.FixedLossScale.__ge__": "tf.keras.Model.__ge__", + "tf.train.experimental.FixedLossScale.__gt__": "tf.keras.Model.__gt__", + "tf.train.experimental.FixedLossScale.__init__": "tf.mixed_precision.experimental.FixedLossScale.__init__", + "tf.train.experimental.FixedLossScale.__le__": "tf.keras.Model.__le__", + "tf.train.experimental.FixedLossScale.__lt__": "tf.keras.Model.__lt__", + "tf.train.experimental.FixedLossScale.__ne__": "tf.keras.Model.__ne__", + "tf.train.experimental.FixedLossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.train.experimental.FixedLossScale.get_config": "tf.mixed_precision.experimental.FixedLossScale.get_config", + "tf.train.experimental.FixedLossScale.update": "tf.mixed_precision.experimental.FixedLossScale.update", + "tf.train.experimental.LossScale": "tf.mixed_precision.experimental.LossScale", + "tf.train.experimental.LossScale.__call__": "tf.mixed_precision.experimental.LossScale.__call__", + "tf.train.experimental.LossScale.__eq__": "tf.keras.Model.__eq__", + "tf.train.experimental.LossScale.__ge__": "tf.keras.Model.__ge__", + "tf.train.experimental.LossScale.__gt__": "tf.keras.Model.__gt__", + "tf.train.experimental.LossScale.__init__": "tf.mixed_precision.experimental.LossScale.__init__", + "tf.train.experimental.LossScale.__le__": "tf.keras.Model.__le__", + "tf.train.experimental.LossScale.__lt__": "tf.keras.Model.__lt__", + "tf.train.experimental.LossScale.__ne__": "tf.keras.Model.__ne__", + "tf.train.experimental.LossScale.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.train.experimental.LossScale.get_config": "tf.mixed_precision.experimental.LossScale.get_config", + "tf.train.experimental.LossScale.update": "tf.mixed_precision.experimental.LossScale.update", + "tf.train.experimental.PythonState.__eq__": "tf.keras.Model.__eq__", + "tf.train.experimental.PythonState.__ge__": "tf.keras.Model.__ge__", + "tf.train.experimental.PythonState.__gt__": "tf.keras.Model.__gt__", + "tf.train.experimental.PythonState.__init__": "tf.keras.constraints.Constraint.__init__", + "tf.train.experimental.PythonState.__le__": "tf.keras.Model.__le__", + "tf.train.experimental.PythonState.__lt__": "tf.keras.Model.__lt__", + "tf.train.experimental.PythonState.__ne__": "tf.keras.Model.__ne__", + "tf.train.experimental.PythonState.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.truediv": "tf.math.truediv", + "tf.uint16": "tf.dtypes.uint16", + "tf.uint32": "tf.dtypes.uint32", + "tf.uint64": "tf.dtypes.uint64", + "tf.uint8": "tf.dtypes.uint8", + "tf.variant": "tf.dtypes.variant", + "tf.zeros_initializer.__call__": "tf.keras.initializers.Zeros.__call__", + "tf.zeros_initializer.__eq__": "tf.keras.Model.__eq__", + "tf.zeros_initializer.__ge__": "tf.keras.Model.__ge__", + "tf.zeros_initializer.__gt__": "tf.keras.Model.__gt__", + "tf.zeros_initializer.__init__": "tf.keras.constraints.Constraint.__init__", + 
"tf.zeros_initializer.__le__": "tf.keras.Model.__le__", + "tf.zeros_initializer.__lt__": "tf.keras.Model.__lt__", + "tf.zeros_initializer.__ne__": "tf.keras.Model.__ne__", + "tf.zeros_initializer.__new__": "tf.keras.callbacks.BaseLogger.__new__", + "tf.zeros_initializer.get_config": "tf.keras.initializers.Initializer.get_config" + }, + "is_fragment": { + "tf": false, + "tf.AggregationMethod": false, + "tf.AggregationMethod.ADD_N": true, + "tf.AggregationMethod.DEFAULT": true, + "tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N": true, + "tf.AggregationMethod.EXPERIMENTAL_TREE": true, + "tf.AggregationMethod.__eq__": true, + "tf.AggregationMethod.__ge__": true, + "tf.AggregationMethod.__gt__": true, + "tf.AggregationMethod.__init__": true, + "tf.AggregationMethod.__le__": true, + "tf.AggregationMethod.__lt__": true, + "tf.AggregationMethod.__ne__": true, + "tf.AggregationMethod.__new__": true, + "tf.Assert": false, + "tf.CriticalSection": false, + "tf.CriticalSection.__eq__": true, + "tf.CriticalSection.__ge__": true, + "tf.CriticalSection.__gt__": true, + "tf.CriticalSection.__init__": true, + "tf.CriticalSection.__le__": true, + "tf.CriticalSection.__lt__": true, + "tf.CriticalSection.__ne__": true, + "tf.CriticalSection.__new__": true, + "tf.CriticalSection.execute": true, + "tf.CriticalSection.name": true, + "tf.DType": false, + "tf.DType.__eq__": true, + "tf.DType.__ge__": true, + "tf.DType.__gt__": true, + "tf.DType.__init__": true, + "tf.DType.__le__": true, + "tf.DType.__lt__": true, + "tf.DType.__ne__": true, + "tf.DType.__new__": true, + "tf.DType.as_datatype_enum": true, + "tf.DType.as_numpy_dtype": true, + "tf.DType.base_dtype": true, + "tf.DType.is_bool": true, + "tf.DType.is_compatible_with": true, + "tf.DType.is_complex": true, + "tf.DType.is_floating": true, + "tf.DType.is_integer": true, + "tf.DType.is_numpy_compatible": true, + "tf.DType.is_quantized": true, + "tf.DType.is_unsigned": true, + "tf.DType.limits": true, + "tf.DType.max": true, + "tf.DType.min": true, + "tf.DType.name": true, + "tf.DType.real_dtype": true, + "tf.DType.size": true, + "tf.DeviceSpec": false, + "tf.DeviceSpec.__eq__": true, + "tf.DeviceSpec.__ge__": true, + "tf.DeviceSpec.__gt__": true, + "tf.DeviceSpec.__init__": true, + "tf.DeviceSpec.__le__": true, + "tf.DeviceSpec.__lt__": true, + "tf.DeviceSpec.__ne__": true, + "tf.DeviceSpec.__new__": true, + "tf.DeviceSpec.device_index": true, + "tf.DeviceSpec.device_type": true, + "tf.DeviceSpec.from_string": true, + "tf.DeviceSpec.job": true, + "tf.DeviceSpec.make_merged_spec": true, + "tf.DeviceSpec.parse_from_string": true, + "tf.DeviceSpec.replace": true, + "tf.DeviceSpec.replica": true, + "tf.DeviceSpec.task": true, + "tf.DeviceSpec.to_string": true, + "tf.GradientTape": false, + "tf.GradientTape.__enter__": true, + "tf.GradientTape.__eq__": true, + "tf.GradientTape.__exit__": true, + "tf.GradientTape.__ge__": true, + "tf.GradientTape.__gt__": true, + "tf.GradientTape.__init__": true, + "tf.GradientTape.__le__": true, + "tf.GradientTape.__lt__": true, + "tf.GradientTape.__ne__": true, + "tf.GradientTape.__new__": true, + "tf.GradientTape.batch_jacobian": true, + "tf.GradientTape.gradient": true, + "tf.GradientTape.jacobian": true, + "tf.GradientTape.reset": true, + "tf.GradientTape.stop_recording": true, + "tf.GradientTape.watch": true, + "tf.GradientTape.watched_variables": true, + "tf.Graph": false, + "tf.Graph.__eq__": true, + "tf.Graph.__ge__": true, + "tf.Graph.__gt__": true, + "tf.Graph.__init__": true, + "tf.Graph.__le__": true, + 
"tf.Graph.__lt__": true, + "tf.Graph.__ne__": true, + "tf.Graph.__new__": true, + "tf.Graph.add_to_collection": true, + "tf.Graph.add_to_collections": true, + "tf.Graph.as_default": true, + "tf.Graph.as_graph_def": true, + "tf.Graph.as_graph_element": true, + "tf.Graph.building_function": true, + "tf.Graph.clear_collection": true, + "tf.Graph.collections": true, + "tf.Graph.colocate_with": true, + "tf.Graph.container": true, + "tf.Graph.control_dependencies": true, + "tf.Graph.create_op": true, + "tf.Graph.device": true, + "tf.Graph.finalize": true, + "tf.Graph.finalized": true, + "tf.Graph.get_all_collection_keys": true, + "tf.Graph.get_collection": true, + "tf.Graph.get_collection_ref": true, + "tf.Graph.get_name_scope": true, + "tf.Graph.get_operation_by_name": true, + "tf.Graph.get_operations": true, + "tf.Graph.get_tensor_by_name": true, + "tf.Graph.gradient_override_map": true, + "tf.Graph.graph_def_versions": true, + "tf.Graph.is_feedable": true, + "tf.Graph.is_fetchable": true, + "tf.Graph.name_scope": true, + "tf.Graph.prevent_feeding": true, + "tf.Graph.prevent_fetching": true, + "tf.Graph.seed": true, + "tf.Graph.switch_to_thread_local": true, + "tf.Graph.unique_name": true, + "tf.Graph.version": true, + "tf.IndexedSlices": false, + "tf.IndexedSlices.__eq__": true, + "tf.IndexedSlices.__ge__": true, + "tf.IndexedSlices.__gt__": true, + "tf.IndexedSlices.__init__": true, + "tf.IndexedSlices.__le__": true, + "tf.IndexedSlices.__lt__": true, + "tf.IndexedSlices.__ne__": true, + "tf.IndexedSlices.__neg__": true, + "tf.IndexedSlices.__new__": true, + "tf.IndexedSlices.consumers": true, + "tf.IndexedSlices.dense_shape": true, + "tf.IndexedSlices.device": true, + "tf.IndexedSlices.dtype": true, + "tf.IndexedSlices.graph": true, + "tf.IndexedSlices.indices": true, + "tf.IndexedSlices.name": true, + "tf.IndexedSlices.op": true, + "tf.IndexedSlices.shape": true, + "tf.IndexedSlices.values": true, + "tf.IndexedSlicesSpec": false, + "tf.IndexedSlicesSpec.__eq__": true, + "tf.IndexedSlicesSpec.__ge__": true, + "tf.IndexedSlicesSpec.__gt__": true, + "tf.IndexedSlicesSpec.__init__": true, + "tf.IndexedSlicesSpec.__le__": true, + "tf.IndexedSlicesSpec.__lt__": true, + "tf.IndexedSlicesSpec.__ne__": true, + "tf.IndexedSlicesSpec.__new__": true, + "tf.IndexedSlicesSpec.is_compatible_with": true, + "tf.IndexedSlicesSpec.most_specific_compatible_type": true, + "tf.IndexedSlicesSpec.value_type": true, + "tf.Module": false, + "tf.Module.__eq__": true, + "tf.Module.__ge__": true, + "tf.Module.__gt__": true, + "tf.Module.__init__": true, + "tf.Module.__le__": true, + "tf.Module.__lt__": true, + "tf.Module.__ne__": true, + "tf.Module.__new__": true, + "tf.Module.name": true, + "tf.Module.name_scope": true, + "tf.Module.submodules": true, + "tf.Module.trainable_variables": true, + "tf.Module.variables": true, + "tf.Module.with_name_scope": true, + "tf.Operation": false, + "tf.Operation.__eq__": true, + "tf.Operation.__ge__": true, + "tf.Operation.__gt__": true, + "tf.Operation.__init__": true, + "tf.Operation.__le__": true, + "tf.Operation.__lt__": true, + "tf.Operation.__ne__": true, + "tf.Operation.__new__": true, + "tf.Operation.colocation_groups": true, + "tf.Operation.control_inputs": true, + "tf.Operation.device": true, + "tf.Operation.get_attr": true, + "tf.Operation.graph": true, + "tf.Operation.inputs": true, + "tf.Operation.name": true, + "tf.Operation.node_def": true, + "tf.Operation.op_def": true, + "tf.Operation.outputs": true, + "tf.Operation.run": true, + "tf.Operation.traceback": true, + 
"tf.Operation.type": true, + "tf.Operation.values": true, + "tf.OptionalSpec": false, + "tf.OptionalSpec.__eq__": true, + "tf.OptionalSpec.__ge__": true, + "tf.OptionalSpec.__gt__": true, + "tf.OptionalSpec.__init__": true, + "tf.OptionalSpec.__le__": true, + "tf.OptionalSpec.__lt__": true, + "tf.OptionalSpec.__ne__": true, + "tf.OptionalSpec.__new__": true, + "tf.OptionalSpec.from_value": true, + "tf.OptionalSpec.is_compatible_with": true, + "tf.OptionalSpec.most_specific_compatible_type": true, + "tf.OptionalSpec.value_type": true, + "tf.RaggedTensor": false, + "tf.RaggedTensor.__abs__": true, + "tf.RaggedTensor.__add__": true, + "tf.RaggedTensor.__and__": true, + "tf.RaggedTensor.__bool__": true, + "tf.RaggedTensor.__div__": true, + "tf.RaggedTensor.__eq__": true, + "tf.RaggedTensor.__floordiv__": true, + "tf.RaggedTensor.__ge__": true, + "tf.RaggedTensor.__getitem__": true, + "tf.RaggedTensor.__gt__": true, + "tf.RaggedTensor.__init__": true, + "tf.RaggedTensor.__invert__": true, + "tf.RaggedTensor.__le__": true, + "tf.RaggedTensor.__lt__": true, + "tf.RaggedTensor.__mod__": true, + "tf.RaggedTensor.__mul__": true, + "tf.RaggedTensor.__ne__": true, + "tf.RaggedTensor.__neg__": true, + "tf.RaggedTensor.__new__": true, + "tf.RaggedTensor.__nonzero__": true, + "tf.RaggedTensor.__or__": true, + "tf.RaggedTensor.__pow__": true, + "tf.RaggedTensor.__radd__": true, + "tf.RaggedTensor.__rand__": true, + "tf.RaggedTensor.__rdiv__": true, + "tf.RaggedTensor.__rfloordiv__": true, + "tf.RaggedTensor.__rmod__": true, + "tf.RaggedTensor.__rmul__": true, + "tf.RaggedTensor.__ror__": true, + "tf.RaggedTensor.__rpow__": true, + "tf.RaggedTensor.__rsub__": true, + "tf.RaggedTensor.__rtruediv__": true, + "tf.RaggedTensor.__rxor__": true, + "tf.RaggedTensor.__sub__": true, + "tf.RaggedTensor.__truediv__": true, + "tf.RaggedTensor.__xor__": true, + "tf.RaggedTensor.bounding_shape": true, + "tf.RaggedTensor.consumers": true, + "tf.RaggedTensor.dtype": true, + "tf.RaggedTensor.flat_values": true, + "tf.RaggedTensor.from_nested_row_lengths": true, + "tf.RaggedTensor.from_nested_row_splits": true, + "tf.RaggedTensor.from_nested_value_rowids": true, + "tf.RaggedTensor.from_row_lengths": true, + "tf.RaggedTensor.from_row_limits": true, + "tf.RaggedTensor.from_row_splits": true, + "tf.RaggedTensor.from_row_starts": true, + "tf.RaggedTensor.from_sparse": true, + "tf.RaggedTensor.from_tensor": true, + "tf.RaggedTensor.from_uniform_row_length": true, + "tf.RaggedTensor.from_value_rowids": true, + "tf.RaggedTensor.merge_dims": true, + "tf.RaggedTensor.nested_row_lengths": true, + "tf.RaggedTensor.nested_row_splits": true, + "tf.RaggedTensor.nested_value_rowids": true, + "tf.RaggedTensor.nrows": true, + "tf.RaggedTensor.numpy": true, + "tf.RaggedTensor.ragged_rank": true, + "tf.RaggedTensor.row_lengths": true, + "tf.RaggedTensor.row_limits": true, + "tf.RaggedTensor.row_splits": true, + "tf.RaggedTensor.row_starts": true, + "tf.RaggedTensor.shape": true, + "tf.RaggedTensor.to_list": true, + "tf.RaggedTensor.to_sparse": true, + "tf.RaggedTensor.to_tensor": true, + "tf.RaggedTensor.uniform_row_length": true, + "tf.RaggedTensor.value_rowids": true, + "tf.RaggedTensor.values": true, + "tf.RaggedTensor.with_flat_values": true, + "tf.RaggedTensor.with_row_splits_dtype": true, + "tf.RaggedTensor.with_values": true, + "tf.RaggedTensorSpec": false, + "tf.RaggedTensorSpec.__eq__": true, + "tf.RaggedTensorSpec.__ge__": true, + "tf.RaggedTensorSpec.__gt__": true, + "tf.RaggedTensorSpec.__init__": true, + 
"tf.RaggedTensorSpec.__le__": true, + "tf.RaggedTensorSpec.__lt__": true, + "tf.RaggedTensorSpec.__ne__": true, + "tf.RaggedTensorSpec.__new__": true, + "tf.RaggedTensorSpec.from_value": true, + "tf.RaggedTensorSpec.is_compatible_with": true, + "tf.RaggedTensorSpec.most_specific_compatible_type": true, + "tf.RaggedTensorSpec.value_type": true, + "tf.RegisterGradient": false, + "tf.RegisterGradient.__call__": true, + "tf.RegisterGradient.__eq__": true, + "tf.RegisterGradient.__ge__": true, + "tf.RegisterGradient.__gt__": true, + "tf.RegisterGradient.__init__": true, + "tf.RegisterGradient.__le__": true, + "tf.RegisterGradient.__lt__": true, + "tf.RegisterGradient.__ne__": true, + "tf.RegisterGradient.__new__": true, + "tf.SparseTensor": false, + "tf.SparseTensor.__div__": true, + "tf.SparseTensor.__eq__": true, + "tf.SparseTensor.__ge__": true, + "tf.SparseTensor.__gt__": true, + "tf.SparseTensor.__init__": true, + "tf.SparseTensor.__le__": true, + "tf.SparseTensor.__lt__": true, + "tf.SparseTensor.__mul__": true, + "tf.SparseTensor.__ne__": true, + "tf.SparseTensor.__new__": true, + "tf.SparseTensor.__truediv__": true, + "tf.SparseTensor.consumers": true, + "tf.SparseTensor.dense_shape": true, + "tf.SparseTensor.dtype": true, + "tf.SparseTensor.eval": true, + "tf.SparseTensor.from_value": true, + "tf.SparseTensor.get_shape": true, + "tf.SparseTensor.graph": true, + "tf.SparseTensor.indices": true, + "tf.SparseTensor.op": true, + "tf.SparseTensor.shape": true, + "tf.SparseTensor.values": true, + "tf.SparseTensorSpec": false, + "tf.SparseTensorSpec.__eq__": true, + "tf.SparseTensorSpec.__ge__": true, + "tf.SparseTensorSpec.__gt__": true, + "tf.SparseTensorSpec.__init__": true, + "tf.SparseTensorSpec.__le__": true, + "tf.SparseTensorSpec.__lt__": true, + "tf.SparseTensorSpec.__ne__": true, + "tf.SparseTensorSpec.__new__": true, + "tf.SparseTensorSpec.dtype": true, + "tf.SparseTensorSpec.from_value": true, + "tf.SparseTensorSpec.is_compatible_with": true, + "tf.SparseTensorSpec.most_specific_compatible_type": true, + "tf.SparseTensorSpec.shape": true, + "tf.SparseTensorSpec.value_type": true, + "tf.Tensor": false, + "tf.Tensor.OVERLOADABLE_OPERATORS": true, + "tf.Tensor.__abs__": true, + "tf.Tensor.__add__": true, + "tf.Tensor.__and__": true, + "tf.Tensor.__bool__": true, + "tf.Tensor.__div__": true, + "tf.Tensor.__eq__": true, + "tf.Tensor.__floordiv__": true, + "tf.Tensor.__ge__": true, + "tf.Tensor.__getitem__": true, + "tf.Tensor.__gt__": true, + "tf.Tensor.__init__": true, + "tf.Tensor.__invert__": true, + "tf.Tensor.__iter__": true, + "tf.Tensor.__le__": true, + "tf.Tensor.__len__": true, + "tf.Tensor.__lt__": true, + "tf.Tensor.__matmul__": true, + "tf.Tensor.__mod__": true, + "tf.Tensor.__mul__": true, + "tf.Tensor.__ne__": true, + "tf.Tensor.__neg__": true, + "tf.Tensor.__new__": true, + "tf.Tensor.__nonzero__": true, + "tf.Tensor.__or__": true, + "tf.Tensor.__pow__": true, + "tf.Tensor.__radd__": true, + "tf.Tensor.__rand__": true, + "tf.Tensor.__rdiv__": true, + "tf.Tensor.__rfloordiv__": true, + "tf.Tensor.__rmatmul__": true, + "tf.Tensor.__rmod__": true, + "tf.Tensor.__rmul__": true, + "tf.Tensor.__ror__": true, + "tf.Tensor.__rpow__": true, + "tf.Tensor.__rsub__": true, + "tf.Tensor.__rtruediv__": true, + "tf.Tensor.__rxor__": true, + "tf.Tensor.__sub__": true, + "tf.Tensor.__truediv__": true, + "tf.Tensor.__xor__": true, + "tf.Tensor.consumers": true, + "tf.Tensor.device": true, + "tf.Tensor.dtype": true, + "tf.Tensor.eval": true, + "tf.Tensor.experimental_ref": true, + 
"tf.Tensor.get_shape": true, + "tf.Tensor.graph": true, + "tf.Tensor.name": true, + "tf.Tensor.op": true, + "tf.Tensor.ref": true, + "tf.Tensor.set_shape": true, + "tf.Tensor.shape": true, + "tf.Tensor.value_index": true, + "tf.TensorArray": false, + "tf.TensorArray.__eq__": true, + "tf.TensorArray.__ge__": true, + "tf.TensorArray.__gt__": true, + "tf.TensorArray.__init__": true, + "tf.TensorArray.__le__": true, + "tf.TensorArray.__lt__": true, + "tf.TensorArray.__ne__": true, + "tf.TensorArray.__new__": true, + "tf.TensorArray.close": true, + "tf.TensorArray.concat": true, + "tf.TensorArray.dtype": true, + "tf.TensorArray.dynamic_size": true, + "tf.TensorArray.element_shape": true, + "tf.TensorArray.flow": true, + "tf.TensorArray.gather": true, + "tf.TensorArray.grad": true, + "tf.TensorArray.handle": true, + "tf.TensorArray.identity": true, + "tf.TensorArray.read": true, + "tf.TensorArray.scatter": true, + "tf.TensorArray.size": true, + "tf.TensorArray.split": true, + "tf.TensorArray.stack": true, + "tf.TensorArray.unstack": true, + "tf.TensorArray.write": true, + "tf.TensorArraySpec": false, + "tf.TensorArraySpec.__eq__": true, + "tf.TensorArraySpec.__ge__": true, + "tf.TensorArraySpec.__gt__": true, + "tf.TensorArraySpec.__init__": true, + "tf.TensorArraySpec.__le__": true, + "tf.TensorArraySpec.__lt__": true, + "tf.TensorArraySpec.__ne__": true, + "tf.TensorArraySpec.__new__": true, + "tf.TensorArraySpec.from_value": true, + "tf.TensorArraySpec.is_compatible_with": true, + "tf.TensorArraySpec.most_specific_compatible_type": true, + "tf.TensorArraySpec.value_type": true, + "tf.TensorShape": false, + "tf.TensorShape.__add__": true, + "tf.TensorShape.__bool__": true, + "tf.TensorShape.__concat__": true, + "tf.TensorShape.__eq__": true, + "tf.TensorShape.__ge__": true, + "tf.TensorShape.__getitem__": true, + "tf.TensorShape.__gt__": true, + "tf.TensorShape.__init__": true, + "tf.TensorShape.__iter__": true, + "tf.TensorShape.__le__": true, + "tf.TensorShape.__len__": true, + "tf.TensorShape.__lt__": true, + "tf.TensorShape.__ne__": true, + "tf.TensorShape.__new__": true, + "tf.TensorShape.__nonzero__": true, + "tf.TensorShape.__radd__": true, + "tf.TensorShape.as_list": true, + "tf.TensorShape.as_proto": true, + "tf.TensorShape.assert_has_rank": true, + "tf.TensorShape.assert_is_compatible_with": true, + "tf.TensorShape.assert_is_fully_defined": true, + "tf.TensorShape.assert_same_rank": true, + "tf.TensorShape.concatenate": true, + "tf.TensorShape.dims": true, + "tf.TensorShape.is_compatible_with": true, + "tf.TensorShape.is_fully_defined": true, + "tf.TensorShape.merge_with": true, + "tf.TensorShape.most_specific_compatible_shape": true, + "tf.TensorShape.ndims": true, + "tf.TensorShape.num_elements": true, + "tf.TensorShape.rank": true, + "tf.TensorShape.with_rank": true, + "tf.TensorShape.with_rank_at_least": true, + "tf.TensorShape.with_rank_at_most": true, + "tf.TensorSpec": false, + "tf.TensorSpec.__eq__": true, + "tf.TensorSpec.__ge__": true, + "tf.TensorSpec.__gt__": true, + "tf.TensorSpec.__init__": true, + "tf.TensorSpec.__le__": true, + "tf.TensorSpec.__lt__": true, + "tf.TensorSpec.__ne__": true, + "tf.TensorSpec.__new__": true, + "tf.TensorSpec.dtype": true, + "tf.TensorSpec.from_spec": true, + "tf.TensorSpec.from_tensor": true, + "tf.TensorSpec.is_compatible_with": true, + "tf.TensorSpec.most_specific_compatible_type": true, + "tf.TensorSpec.name": true, + "tf.TensorSpec.shape": true, + "tf.TensorSpec.value_type": true, + "tf.TypeSpec": false, + "tf.TypeSpec.__eq__": true, 
+ "tf.TypeSpec.__ge__": true, + "tf.TypeSpec.__gt__": true, + "tf.TypeSpec.__init__": true, + "tf.TypeSpec.__le__": true, + "tf.TypeSpec.__lt__": true, + "tf.TypeSpec.__ne__": true, + "tf.TypeSpec.__new__": true, + "tf.TypeSpec.is_compatible_with": true, + "tf.TypeSpec.most_specific_compatible_type": true, + "tf.TypeSpec.value_type": true, + "tf.UnconnectedGradients": false, + "tf.UnconnectedGradients.NONE": true, + "tf.UnconnectedGradients.ZERO": true, + "tf.UnconnectedGradients.name": true, + "tf.UnconnectedGradients.value": true, + "tf.Variable": false, + "tf.Variable.SaveSliceInfo": false, + "tf.Variable.SaveSliceInfo.__eq__": true, + "tf.Variable.SaveSliceInfo.__ge__": true, + "tf.Variable.SaveSliceInfo.__gt__": true, + "tf.Variable.SaveSliceInfo.__init__": true, + "tf.Variable.SaveSliceInfo.__le__": true, + "tf.Variable.SaveSliceInfo.__lt__": true, + "tf.Variable.SaveSliceInfo.__ne__": true, + "tf.Variable.SaveSliceInfo.__new__": true, + "tf.Variable.SaveSliceInfo.spec": true, + "tf.Variable.SaveSliceInfo.to_proto": true, + "tf.Variable.__abs__": true, + "tf.Variable.__add__": true, + "tf.Variable.__and__": true, + "tf.Variable.__div__": true, + "tf.Variable.__eq__": true, + "tf.Variable.__floordiv__": true, + "tf.Variable.__ge__": true, + "tf.Variable.__getitem__": true, + "tf.Variable.__gt__": true, + "tf.Variable.__init__": true, + "tf.Variable.__invert__": true, + "tf.Variable.__iter__": true, + "tf.Variable.__le__": true, + "tf.Variable.__lt__": true, + "tf.Variable.__matmul__": true, + "tf.Variable.__mod__": true, + "tf.Variable.__mul__": true, + "tf.Variable.__ne__": true, + "tf.Variable.__neg__": true, + "tf.Variable.__new__": true, + "tf.Variable.__or__": true, + "tf.Variable.__pow__": true, + "tf.Variable.__radd__": true, + "tf.Variable.__rand__": true, + "tf.Variable.__rdiv__": true, + "tf.Variable.__rfloordiv__": true, + "tf.Variable.__rmatmul__": true, + "tf.Variable.__rmod__": true, + "tf.Variable.__rmul__": true, + "tf.Variable.__ror__": true, + "tf.Variable.__rpow__": true, + "tf.Variable.__rsub__": true, + "tf.Variable.__rtruediv__": true, + "tf.Variable.__rxor__": true, + "tf.Variable.__sub__": true, + "tf.Variable.__truediv__": true, + "tf.Variable.__xor__": true, + "tf.Variable.aggregation": true, + "tf.Variable.assign": true, + "tf.Variable.assign_add": true, + "tf.Variable.assign_sub": true, + "tf.Variable.batch_scatter_update": true, + "tf.Variable.constraint": true, + "tf.Variable.count_up_to": true, + "tf.Variable.device": true, + "tf.Variable.dtype": true, + "tf.Variable.eval": true, + "tf.Variable.experimental_ref": true, + "tf.Variable.from_proto": true, + "tf.Variable.gather_nd": true, + "tf.Variable.get_shape": true, + "tf.Variable.graph": true, + "tf.Variable.initial_value": true, + "tf.Variable.initialized_value": true, + "tf.Variable.initializer": true, + "tf.Variable.load": true, + "tf.Variable.name": true, + "tf.Variable.op": true, + "tf.Variable.read_value": true, + "tf.Variable.ref": true, + "tf.Variable.scatter_add": true, + "tf.Variable.scatter_div": true, + "tf.Variable.scatter_max": true, + "tf.Variable.scatter_min": true, + "tf.Variable.scatter_mul": true, + "tf.Variable.scatter_nd_add": true, + "tf.Variable.scatter_nd_sub": true, + "tf.Variable.scatter_nd_update": true, + "tf.Variable.scatter_sub": true, + "tf.Variable.scatter_update": true, + "tf.Variable.set_shape": true, + "tf.Variable.shape": true, + "tf.Variable.sparse_read": true, + "tf.Variable.synchronization": true, + "tf.Variable.to_proto": true, + "tf.Variable.trainable": true, + 
"tf.Variable.value": true, + "tf.VariableAggregation": false, + "tf.VariableAggregation.MEAN": true, + "tf.VariableAggregation.NONE": true, + "tf.VariableAggregation.ONLY_FIRST_REPLICA": true, + "tf.VariableAggregation.SUM": true, + "tf.VariableAggregation.name": true, + "tf.VariableAggregation.value": true, + "tf.VariableSynchronization": false, + "tf.VariableSynchronization.AUTO": true, + "tf.VariableSynchronization.NONE": true, + "tf.VariableSynchronization.ON_READ": true, + "tf.VariableSynchronization.ON_WRITE": true, + "tf.VariableSynchronization.name": true, + "tf.VariableSynchronization.value": true, + "tf.__version__": true, + "tf.abs": false, + "tf.acos": false, + "tf.acosh": false, + "tf.add": false, + "tf.add_n": false, + "tf.argmax": false, + "tf.argmin": false, + "tf.argsort": false, + "tf.as_dtype": false, + "tf.as_string": false, + "tf.asin": false, + "tf.asinh": false, + "tf.assert_equal": false, + "tf.assert_greater": false, + "tf.assert_less": false, + "tf.assert_rank": false, + "tf.atan": false, + "tf.atan2": false, + "tf.atanh": false, + "tf.audio": false, + "tf.audio.decode_wav": false, + "tf.audio.encode_wav": false, + "tf.autodiff": false, + "tf.autodiff.ForwardAccumulator": false, + "tf.autodiff.ForwardAccumulator.__enter__": true, + "tf.autodiff.ForwardAccumulator.__eq__": true, + "tf.autodiff.ForwardAccumulator.__exit__": true, + "tf.autodiff.ForwardAccumulator.__ge__": true, + "tf.autodiff.ForwardAccumulator.__gt__": true, + "tf.autodiff.ForwardAccumulator.__init__": true, + "tf.autodiff.ForwardAccumulator.__le__": true, + "tf.autodiff.ForwardAccumulator.__lt__": true, + "tf.autodiff.ForwardAccumulator.__ne__": true, + "tf.autodiff.ForwardAccumulator.__new__": true, + "tf.autodiff.ForwardAccumulator.jvp": true, + "tf.autodiff.GradientTape": false, + "tf.autodiff.GradientTape.__enter__": true, + "tf.autodiff.GradientTape.__eq__": true, + "tf.autodiff.GradientTape.__exit__": true, + "tf.autodiff.GradientTape.__ge__": true, + "tf.autodiff.GradientTape.__gt__": true, + "tf.autodiff.GradientTape.__init__": true, + "tf.autodiff.GradientTape.__le__": true, + "tf.autodiff.GradientTape.__lt__": true, + "tf.autodiff.GradientTape.__ne__": true, + "tf.autodiff.GradientTape.__new__": true, + "tf.autodiff.GradientTape.batch_jacobian": true, + "tf.autodiff.GradientTape.gradient": true, + "tf.autodiff.GradientTape.jacobian": true, + "tf.autodiff.GradientTape.reset": true, + "tf.autodiff.GradientTape.stop_recording": true, + "tf.autodiff.GradientTape.watch": true, + "tf.autodiff.GradientTape.watched_variables": true, + "tf.autograph": false, + "tf.autograph.experimental": false, + "tf.autograph.experimental.Feature": false, + "tf.autograph.experimental.Feature.ALL": true, + "tf.autograph.experimental.Feature.ASSERT_STATEMENTS": true, + "tf.autograph.experimental.Feature.AUTO_CONTROL_DEPS": true, + "tf.autograph.experimental.Feature.BUILTIN_FUNCTIONS": true, + "tf.autograph.experimental.Feature.EQUALITY_OPERATORS": true, + "tf.autograph.experimental.Feature.LISTS": true, + "tf.autograph.experimental.Feature.NAME_SCOPES": true, + "tf.autograph.experimental.Feature.name": true, + "tf.autograph.experimental.Feature.value": true, + "tf.autograph.experimental.do_not_convert": false, + "tf.autograph.experimental.set_loop_options": false, + "tf.autograph.set_verbosity": false, + "tf.autograph.to_code": false, + "tf.autograph.to_graph": false, + "tf.autograph.trace": false, + "tf.batch_to_space": false, + "tf.bfloat16": true, + "tf.bitcast": false, + "tf.bitwise": false, + 
"tf.bitwise.bitwise_and": false, + "tf.bitwise.bitwise_or": false, + "tf.bitwise.bitwise_xor": false, + "tf.bitwise.invert": false, + "tf.bitwise.left_shift": false, + "tf.bitwise.right_shift": false, + "tf.bool": true, + "tf.boolean_mask": false, + "tf.broadcast_dynamic_shape": false, + "tf.broadcast_static_shape": false, + "tf.broadcast_to": false, + "tf.case": false, + "tf.cast": false, + "tf.clip_by_global_norm": false, + "tf.clip_by_norm": false, + "tf.clip_by_value": false, + "tf.compat": false, + "tf.compat.as_bytes": false, + "tf.compat.as_str": false, + "tf.compat.as_str_any": false, + "tf.compat.as_text": false, + "tf.compat.bytes_or_text_types": true, + "tf.compat.complex_types": true, + "tf.compat.dimension_at_index": false, + "tf.compat.dimension_value": false, + "tf.compat.forward_compatibility_horizon": false, + "tf.compat.forward_compatible": false, + "tf.compat.integral_types": true, + "tf.compat.path_to_str": false, + "tf.compat.real_types": true, + "tf.compat.v1": false, + "tf.compat.v1.AUTO_REUSE": true, + "tf.compat.v1.AggregationMethod": false, + "tf.compat.v1.AggregationMethod.ADD_N": true, + "tf.compat.v1.AggregationMethod.DEFAULT": true, + "tf.compat.v1.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N": true, + "tf.compat.v1.AggregationMethod.EXPERIMENTAL_TREE": true, + "tf.compat.v1.AggregationMethod.__eq__": true, + "tf.compat.v1.AggregationMethod.__ge__": true, + "tf.compat.v1.AggregationMethod.__gt__": true, + "tf.compat.v1.AggregationMethod.__init__": true, + "tf.compat.v1.AggregationMethod.__le__": true, + "tf.compat.v1.AggregationMethod.__lt__": true, + "tf.compat.v1.AggregationMethod.__ne__": true, + "tf.compat.v1.AggregationMethod.__new__": true, + "tf.compat.v1.Assert": false, + "tf.compat.v1.AttrValue": false, + "tf.compat.v1.AttrValue.ByteSize": true, + "tf.compat.v1.AttrValue.Clear": true, + "tf.compat.v1.AttrValue.ClearExtension": true, + "tf.compat.v1.AttrValue.ClearField": true, + "tf.compat.v1.AttrValue.CopyFrom": true, + "tf.compat.v1.AttrValue.DESCRIPTOR": true, + "tf.compat.v1.AttrValue.DiscardUnknownFields": true, + "tf.compat.v1.AttrValue.Extensions": true, + "tf.compat.v1.AttrValue.FindInitializationErrors": true, + "tf.compat.v1.AttrValue.FromString": true, + "tf.compat.v1.AttrValue.HasExtension": true, + "tf.compat.v1.AttrValue.HasField": true, + "tf.compat.v1.AttrValue.IsInitialized": true, + "tf.compat.v1.AttrValue.ListFields": true, + "tf.compat.v1.AttrValue.ListValue": false, + "tf.compat.v1.AttrValue.ListValue.ByteSize": true, + "tf.compat.v1.AttrValue.ListValue.Clear": true, + "tf.compat.v1.AttrValue.ListValue.ClearExtension": true, + "tf.compat.v1.AttrValue.ListValue.ClearField": true, + "tf.compat.v1.AttrValue.ListValue.CopyFrom": true, + "tf.compat.v1.AttrValue.ListValue.DESCRIPTOR": true, + "tf.compat.v1.AttrValue.ListValue.DiscardUnknownFields": true, + "tf.compat.v1.AttrValue.ListValue.Extensions": true, + "tf.compat.v1.AttrValue.ListValue.FindInitializationErrors": true, + "tf.compat.v1.AttrValue.ListValue.FromString": true, + "tf.compat.v1.AttrValue.ListValue.HasExtension": true, + "tf.compat.v1.AttrValue.ListValue.HasField": true, + "tf.compat.v1.AttrValue.ListValue.IsInitialized": true, + "tf.compat.v1.AttrValue.ListValue.ListFields": true, + "tf.compat.v1.AttrValue.ListValue.MergeFrom": true, + "tf.compat.v1.AttrValue.ListValue.MergeFromString": true, + "tf.compat.v1.AttrValue.ListValue.ParseFromString": true, + "tf.compat.v1.AttrValue.ListValue.RegisterExtension": true, + 
"tf.compat.v1.AttrValue.ListValue.SerializePartialToString": true, + "tf.compat.v1.AttrValue.ListValue.SerializeToString": true, + "tf.compat.v1.AttrValue.ListValue.SetInParent": true, + "tf.compat.v1.AttrValue.ListValue.UnknownFields": true, + "tf.compat.v1.AttrValue.ListValue.WhichOneof": true, + "tf.compat.v1.AttrValue.ListValue.__eq__": true, + "tf.compat.v1.AttrValue.ListValue.__ge__": true, + "tf.compat.v1.AttrValue.ListValue.__gt__": true, + "tf.compat.v1.AttrValue.ListValue.__init__": true, + "tf.compat.v1.AttrValue.ListValue.__le__": true, + "tf.compat.v1.AttrValue.ListValue.__lt__": true, + "tf.compat.v1.AttrValue.ListValue.__ne__": true, + "tf.compat.v1.AttrValue.ListValue.__new__": true, + "tf.compat.v1.AttrValue.ListValue.b": true, + "tf.compat.v1.AttrValue.ListValue.f": true, + "tf.compat.v1.AttrValue.ListValue.func": true, + "tf.compat.v1.AttrValue.ListValue.i": true, + "tf.compat.v1.AttrValue.ListValue.s": true, + "tf.compat.v1.AttrValue.ListValue.shape": true, + "tf.compat.v1.AttrValue.ListValue.tensor": true, + "tf.compat.v1.AttrValue.ListValue.type": true, + "tf.compat.v1.AttrValue.MergeFrom": true, + "tf.compat.v1.AttrValue.MergeFromString": true, + "tf.compat.v1.AttrValue.ParseFromString": true, + "tf.compat.v1.AttrValue.RegisterExtension": true, + "tf.compat.v1.AttrValue.SerializePartialToString": true, + "tf.compat.v1.AttrValue.SerializeToString": true, + "tf.compat.v1.AttrValue.SetInParent": true, + "tf.compat.v1.AttrValue.UnknownFields": true, + "tf.compat.v1.AttrValue.WhichOneof": true, + "tf.compat.v1.AttrValue.__eq__": true, + "tf.compat.v1.AttrValue.__ge__": true, + "tf.compat.v1.AttrValue.__gt__": true, + "tf.compat.v1.AttrValue.__init__": true, + "tf.compat.v1.AttrValue.__le__": true, + "tf.compat.v1.AttrValue.__lt__": true, + "tf.compat.v1.AttrValue.__ne__": true, + "tf.compat.v1.AttrValue.__new__": true, + "tf.compat.v1.AttrValue.b": true, + "tf.compat.v1.AttrValue.f": true, + "tf.compat.v1.AttrValue.func": true, + "tf.compat.v1.AttrValue.i": true, + "tf.compat.v1.AttrValue.list": true, + "tf.compat.v1.AttrValue.placeholder": true, + "tf.compat.v1.AttrValue.s": true, + "tf.compat.v1.AttrValue.shape": true, + "tf.compat.v1.AttrValue.tensor": true, + "tf.compat.v1.AttrValue.type": true, + "tf.compat.v1.COMPILER_VERSION": true, + "tf.compat.v1.CXX11_ABI_FLAG": true, + "tf.compat.v1.ConditionalAccumulator": false, + "tf.compat.v1.ConditionalAccumulator.__eq__": true, + "tf.compat.v1.ConditionalAccumulator.__ge__": true, + "tf.compat.v1.ConditionalAccumulator.__gt__": true, + "tf.compat.v1.ConditionalAccumulator.__init__": true, + "tf.compat.v1.ConditionalAccumulator.__le__": true, + "tf.compat.v1.ConditionalAccumulator.__lt__": true, + "tf.compat.v1.ConditionalAccumulator.__ne__": true, + "tf.compat.v1.ConditionalAccumulator.__new__": true, + "tf.compat.v1.ConditionalAccumulator.accumulator_ref": true, + "tf.compat.v1.ConditionalAccumulator.apply_grad": true, + "tf.compat.v1.ConditionalAccumulator.dtype": true, + "tf.compat.v1.ConditionalAccumulator.name": true, + "tf.compat.v1.ConditionalAccumulator.num_accumulated": true, + "tf.compat.v1.ConditionalAccumulator.set_global_step": true, + "tf.compat.v1.ConditionalAccumulator.take_grad": true, + "tf.compat.v1.ConditionalAccumulatorBase": false, + "tf.compat.v1.ConditionalAccumulatorBase.__eq__": true, + "tf.compat.v1.ConditionalAccumulatorBase.__ge__": true, + "tf.compat.v1.ConditionalAccumulatorBase.__gt__": true, + "tf.compat.v1.ConditionalAccumulatorBase.__init__": true, + 
"tf.compat.v1.ConditionalAccumulatorBase.__le__": true, + "tf.compat.v1.ConditionalAccumulatorBase.__lt__": true, + "tf.compat.v1.ConditionalAccumulatorBase.__ne__": true, + "tf.compat.v1.ConditionalAccumulatorBase.__new__": true, + "tf.compat.v1.ConditionalAccumulatorBase.accumulator_ref": true, + "tf.compat.v1.ConditionalAccumulatorBase.dtype": true, + "tf.compat.v1.ConditionalAccumulatorBase.name": true, + "tf.compat.v1.ConditionalAccumulatorBase.num_accumulated": true, + "tf.compat.v1.ConditionalAccumulatorBase.set_global_step": true, + "tf.compat.v1.ConfigProto": false, + "tf.compat.v1.ConfigProto.ByteSize": true, + "tf.compat.v1.ConfigProto.Clear": true, + "tf.compat.v1.ConfigProto.ClearExtension": true, + "tf.compat.v1.ConfigProto.ClearField": true, + "tf.compat.v1.ConfigProto.CopyFrom": true, + "tf.compat.v1.ConfigProto.DESCRIPTOR": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry": false, + "tf.compat.v1.ConfigProto.DeviceCountEntry.ByteSize": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.Clear": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.ClearExtension": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.ClearField": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.CopyFrom": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.DESCRIPTOR": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.DiscardUnknownFields": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.Extensions": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.FindInitializationErrors": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.FromString": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.HasExtension": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.HasField": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.IsInitialized": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.ListFields": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.MergeFrom": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.MergeFromString": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.ParseFromString": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.RegisterExtension": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.SerializePartialToString": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.SerializeToString": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.SetInParent": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.UnknownFields": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.WhichOneof": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.__eq__": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.__ge__": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.__gt__": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.__init__": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.__le__": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.__lt__": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.__ne__": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.__new__": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.key": true, + "tf.compat.v1.ConfigProto.DeviceCountEntry.value": true, + "tf.compat.v1.ConfigProto.DiscardUnknownFields": true, + "tf.compat.v1.ConfigProto.Experimental": false, + "tf.compat.v1.ConfigProto.Experimental.ByteSize": true, + "tf.compat.v1.ConfigProto.Experimental.Clear": true, + "tf.compat.v1.ConfigProto.Experimental.ClearExtension": true, + "tf.compat.v1.ConfigProto.Experimental.ClearField": true, + "tf.compat.v1.ConfigProto.Experimental.CopyFrom": true, + "tf.compat.v1.ConfigProto.Experimental.DESCRIPTOR": true, + 
"tf.compat.v1.ConfigProto.Experimental.DiscardUnknownFields": true, + "tf.compat.v1.ConfigProto.Experimental.Extensions": true, + "tf.compat.v1.ConfigProto.Experimental.FindInitializationErrors": true, + "tf.compat.v1.ConfigProto.Experimental.FromString": true, + "tf.compat.v1.ConfigProto.Experimental.HasExtension": true, + "tf.compat.v1.ConfigProto.Experimental.HasField": true, + "tf.compat.v1.ConfigProto.Experimental.IsInitialized": true, + "tf.compat.v1.ConfigProto.Experimental.ListFields": true, + "tf.compat.v1.ConfigProto.Experimental.MergeFrom": true, + "tf.compat.v1.ConfigProto.Experimental.MergeFromString": true, + "tf.compat.v1.ConfigProto.Experimental.ParseFromString": true, + "tf.compat.v1.ConfigProto.Experimental.RegisterExtension": true, + "tf.compat.v1.ConfigProto.Experimental.SerializePartialToString": true, + "tf.compat.v1.ConfigProto.Experimental.SerializeToString": true, + "tf.compat.v1.ConfigProto.Experimental.SetInParent": true, + "tf.compat.v1.ConfigProto.Experimental.UnknownFields": true, + "tf.compat.v1.ConfigProto.Experimental.WhichOneof": true, + "tf.compat.v1.ConfigProto.Experimental.__eq__": true, + "tf.compat.v1.ConfigProto.Experimental.__ge__": true, + "tf.compat.v1.ConfigProto.Experimental.__gt__": true, + "tf.compat.v1.ConfigProto.Experimental.__init__": true, + "tf.compat.v1.ConfigProto.Experimental.__le__": true, + "tf.compat.v1.ConfigProto.Experimental.__lt__": true, + "tf.compat.v1.ConfigProto.Experimental.__ne__": true, + "tf.compat.v1.ConfigProto.Experimental.__new__": true, + "tf.compat.v1.ConfigProto.Experimental.collective_deterministic_sequential_execution": true, + "tf.compat.v1.ConfigProto.Experimental.collective_group_leader": true, + "tf.compat.v1.ConfigProto.Experimental.collective_nccl": true, + "tf.compat.v1.ConfigProto.Experimental.disable_output_partition_graphs": true, + "tf.compat.v1.ConfigProto.Experimental.disable_thread_spinning": true, + "tf.compat.v1.ConfigProto.Experimental.enable_mlir_bridge": true, + "tf.compat.v1.ConfigProto.Experimental.executor_type": true, + "tf.compat.v1.ConfigProto.Experimental.optimize_for_static_graph": true, + "tf.compat.v1.ConfigProto.Experimental.recv_buf_max_chunk": true, + "tf.compat.v1.ConfigProto.Experimental.session_metadata": true, + "tf.compat.v1.ConfigProto.Experimental.share_cluster_devices_in_session": true, + "tf.compat.v1.ConfigProto.Experimental.share_session_state_in_clusterspec_propagation": true, + "tf.compat.v1.ConfigProto.Experimental.use_numa_affinity": true, + "tf.compat.v1.ConfigProto.Experimental.xla_fusion_autotuner_thresh": true, + "tf.compat.v1.ConfigProto.Extensions": true, + "tf.compat.v1.ConfigProto.FindInitializationErrors": true, + "tf.compat.v1.ConfigProto.FromString": true, + "tf.compat.v1.ConfigProto.HasExtension": true, + "tf.compat.v1.ConfigProto.HasField": true, + "tf.compat.v1.ConfigProto.IsInitialized": true, + "tf.compat.v1.ConfigProto.ListFields": true, + "tf.compat.v1.ConfigProto.MergeFrom": true, + "tf.compat.v1.ConfigProto.MergeFromString": true, + "tf.compat.v1.ConfigProto.ParseFromString": true, + "tf.compat.v1.ConfigProto.RegisterExtension": true, + "tf.compat.v1.ConfigProto.SerializePartialToString": true, + "tf.compat.v1.ConfigProto.SerializeToString": true, + "tf.compat.v1.ConfigProto.SetInParent": true, + "tf.compat.v1.ConfigProto.UnknownFields": true, + "tf.compat.v1.ConfigProto.WhichOneof": true, + "tf.compat.v1.ConfigProto.__eq__": true, + "tf.compat.v1.ConfigProto.__ge__": true, + "tf.compat.v1.ConfigProto.__gt__": true, + 
"tf.compat.v1.ConfigProto.__init__": true, + "tf.compat.v1.ConfigProto.__le__": true, + "tf.compat.v1.ConfigProto.__lt__": true, + "tf.compat.v1.ConfigProto.__ne__": true, + "tf.compat.v1.ConfigProto.__new__": true, + "tf.compat.v1.ConfigProto.allow_soft_placement": true, + "tf.compat.v1.ConfigProto.cluster_def": true, + "tf.compat.v1.ConfigProto.device_count": true, + "tf.compat.v1.ConfigProto.device_filters": true, + "tf.compat.v1.ConfigProto.experimental": true, + "tf.compat.v1.ConfigProto.gpu_options": true, + "tf.compat.v1.ConfigProto.graph_options": true, + "tf.compat.v1.ConfigProto.inter_op_parallelism_threads": true, + "tf.compat.v1.ConfigProto.intra_op_parallelism_threads": true, + "tf.compat.v1.ConfigProto.isolate_session_state": true, + "tf.compat.v1.ConfigProto.log_device_placement": true, + "tf.compat.v1.ConfigProto.operation_timeout_in_ms": true, + "tf.compat.v1.ConfigProto.placement_period": true, + "tf.compat.v1.ConfigProto.rpc_options": true, + "tf.compat.v1.ConfigProto.session_inter_op_thread_pool": true, + "tf.compat.v1.ConfigProto.share_cluster_devices_in_session": true, + "tf.compat.v1.ConfigProto.use_per_session_threads": true, + "tf.compat.v1.CriticalSection": false, + "tf.compat.v1.CriticalSection.__eq__": true, + "tf.compat.v1.CriticalSection.__ge__": true, + "tf.compat.v1.CriticalSection.__gt__": true, + "tf.compat.v1.CriticalSection.__init__": true, + "tf.compat.v1.CriticalSection.__le__": true, + "tf.compat.v1.CriticalSection.__lt__": true, + "tf.compat.v1.CriticalSection.__ne__": true, + "tf.compat.v1.CriticalSection.__new__": true, + "tf.compat.v1.CriticalSection.execute": true, + "tf.compat.v1.CriticalSection.name": true, + "tf.compat.v1.DType": false, + "tf.compat.v1.DType.__eq__": true, + "tf.compat.v1.DType.__ge__": true, + "tf.compat.v1.DType.__gt__": true, + "tf.compat.v1.DType.__init__": true, + "tf.compat.v1.DType.__le__": true, + "tf.compat.v1.DType.__lt__": true, + "tf.compat.v1.DType.__ne__": true, + "tf.compat.v1.DType.__new__": true, + "tf.compat.v1.DType.as_datatype_enum": true, + "tf.compat.v1.DType.as_numpy_dtype": true, + "tf.compat.v1.DType.base_dtype": true, + "tf.compat.v1.DType.is_bool": true, + "tf.compat.v1.DType.is_compatible_with": true, + "tf.compat.v1.DType.is_complex": true, + "tf.compat.v1.DType.is_floating": true, + "tf.compat.v1.DType.is_integer": true, + "tf.compat.v1.DType.is_numpy_compatible": true, + "tf.compat.v1.DType.is_quantized": true, + "tf.compat.v1.DType.is_unsigned": true, + "tf.compat.v1.DType.limits": true, + "tf.compat.v1.DType.max": true, + "tf.compat.v1.DType.min": true, + "tf.compat.v1.DType.name": true, + "tf.compat.v1.DType.real_dtype": true, + "tf.compat.v1.DType.size": true, + "tf.compat.v1.DeviceSpec": false, + "tf.compat.v1.DeviceSpec.__eq__": true, + "tf.compat.v1.DeviceSpec.__ge__": true, + "tf.compat.v1.DeviceSpec.__gt__": true, + "tf.compat.v1.DeviceSpec.__init__": true, + "tf.compat.v1.DeviceSpec.__le__": true, + "tf.compat.v1.DeviceSpec.__lt__": true, + "tf.compat.v1.DeviceSpec.__ne__": true, + "tf.compat.v1.DeviceSpec.__new__": true, + "tf.compat.v1.DeviceSpec.device_index": true, + "tf.compat.v1.DeviceSpec.device_type": true, + "tf.compat.v1.DeviceSpec.from_string": true, + "tf.compat.v1.DeviceSpec.job": true, + "tf.compat.v1.DeviceSpec.make_merged_spec": true, + "tf.compat.v1.DeviceSpec.merge_from": true, + "tf.compat.v1.DeviceSpec.parse_from_string": true, + "tf.compat.v1.DeviceSpec.replace": true, + "tf.compat.v1.DeviceSpec.replica": true, + "tf.compat.v1.DeviceSpec.task": true, + 
"tf.compat.v1.DeviceSpec.to_string": true, + "tf.compat.v1.Dimension": false, + "tf.compat.v1.Dimension.__add__": true, + "tf.compat.v1.Dimension.__div__": true, + "tf.compat.v1.Dimension.__eq__": true, + "tf.compat.v1.Dimension.__floordiv__": true, + "tf.compat.v1.Dimension.__ge__": true, + "tf.compat.v1.Dimension.__gt__": true, + "tf.compat.v1.Dimension.__init__": true, + "tf.compat.v1.Dimension.__le__": true, + "tf.compat.v1.Dimension.__lt__": true, + "tf.compat.v1.Dimension.__mod__": true, + "tf.compat.v1.Dimension.__mul__": true, + "tf.compat.v1.Dimension.__ne__": true, + "tf.compat.v1.Dimension.__new__": true, + "tf.compat.v1.Dimension.__radd__": true, + "tf.compat.v1.Dimension.__rdiv__": true, + "tf.compat.v1.Dimension.__rfloordiv__": true, + "tf.compat.v1.Dimension.__rmod__": true, + "tf.compat.v1.Dimension.__rmul__": true, + "tf.compat.v1.Dimension.__rsub__": true, + "tf.compat.v1.Dimension.__rtruediv__": true, + "tf.compat.v1.Dimension.__sub__": true, + "tf.compat.v1.Dimension.__truediv__": true, + "tf.compat.v1.Dimension.assert_is_compatible_with": true, + "tf.compat.v1.Dimension.is_compatible_with": true, + "tf.compat.v1.Dimension.merge_with": true, + "tf.compat.v1.Dimension.value": true, + "tf.compat.v1.Event": false, + "tf.compat.v1.Event.ByteSize": true, + "tf.compat.v1.Event.Clear": true, + "tf.compat.v1.Event.ClearExtension": true, + "tf.compat.v1.Event.ClearField": true, + "tf.compat.v1.Event.CopyFrom": true, + "tf.compat.v1.Event.DESCRIPTOR": true, + "tf.compat.v1.Event.DiscardUnknownFields": true, + "tf.compat.v1.Event.Extensions": true, + "tf.compat.v1.Event.FindInitializationErrors": true, + "tf.compat.v1.Event.FromString": true, + "tf.compat.v1.Event.HasExtension": true, + "tf.compat.v1.Event.HasField": true, + "tf.compat.v1.Event.IsInitialized": true, + "tf.compat.v1.Event.ListFields": true, + "tf.compat.v1.Event.MergeFrom": true, + "tf.compat.v1.Event.MergeFromString": true, + "tf.compat.v1.Event.ParseFromString": true, + "tf.compat.v1.Event.RegisterExtension": true, + "tf.compat.v1.Event.SerializePartialToString": true, + "tf.compat.v1.Event.SerializeToString": true, + "tf.compat.v1.Event.SetInParent": true, + "tf.compat.v1.Event.UnknownFields": true, + "tf.compat.v1.Event.WhichOneof": true, + "tf.compat.v1.Event.__eq__": true, + "tf.compat.v1.Event.__ge__": true, + "tf.compat.v1.Event.__gt__": true, + "tf.compat.v1.Event.__init__": true, + "tf.compat.v1.Event.__le__": true, + "tf.compat.v1.Event.__lt__": true, + "tf.compat.v1.Event.__ne__": true, + "tf.compat.v1.Event.__new__": true, + "tf.compat.v1.Event.file_version": true, + "tf.compat.v1.Event.graph_def": true, + "tf.compat.v1.Event.log_message": true, + "tf.compat.v1.Event.meta_graph_def": true, + "tf.compat.v1.Event.session_log": true, + "tf.compat.v1.Event.step": true, + "tf.compat.v1.Event.summary": true, + "tf.compat.v1.Event.tagged_run_metadata": true, + "tf.compat.v1.Event.wall_time": true, + "tf.compat.v1.FIFOQueue": false, + "tf.compat.v1.FIFOQueue.__eq__": true, + "tf.compat.v1.FIFOQueue.__ge__": true, + "tf.compat.v1.FIFOQueue.__gt__": true, + "tf.compat.v1.FIFOQueue.__init__": true, + "tf.compat.v1.FIFOQueue.__le__": true, + "tf.compat.v1.FIFOQueue.__lt__": true, + "tf.compat.v1.FIFOQueue.__ne__": true, + "tf.compat.v1.FIFOQueue.__new__": true, + "tf.compat.v1.FIFOQueue.close": true, + "tf.compat.v1.FIFOQueue.dequeue": true, + "tf.compat.v1.FIFOQueue.dequeue_many": true, + "tf.compat.v1.FIFOQueue.dequeue_up_to": true, + "tf.compat.v1.FIFOQueue.dtypes": true, + "tf.compat.v1.FIFOQueue.enqueue": 
true, + "tf.compat.v1.FIFOQueue.enqueue_many": true, + "tf.compat.v1.FIFOQueue.from_list": true, + "tf.compat.v1.FIFOQueue.is_closed": true, + "tf.compat.v1.FIFOQueue.name": true, + "tf.compat.v1.FIFOQueue.names": true, + "tf.compat.v1.FIFOQueue.queue_ref": true, + "tf.compat.v1.FIFOQueue.shapes": true, + "tf.compat.v1.FIFOQueue.size": true, + "tf.compat.v1.FixedLenFeature": false, + "tf.compat.v1.FixedLenFeature.__add__": true, + "tf.compat.v1.FixedLenFeature.__contains__": true, + "tf.compat.v1.FixedLenFeature.__eq__": true, + "tf.compat.v1.FixedLenFeature.__ge__": true, + "tf.compat.v1.FixedLenFeature.__getitem__": true, + "tf.compat.v1.FixedLenFeature.__gt__": true, + "tf.compat.v1.FixedLenFeature.__init__": true, + "tf.compat.v1.FixedLenFeature.__iter__": true, + "tf.compat.v1.FixedLenFeature.__le__": true, + "tf.compat.v1.FixedLenFeature.__len__": true, + "tf.compat.v1.FixedLenFeature.__lt__": true, + "tf.compat.v1.FixedLenFeature.__mul__": true, + "tf.compat.v1.FixedLenFeature.__ne__": true, + "tf.compat.v1.FixedLenFeature.__new__": true, + "tf.compat.v1.FixedLenFeature.__rmul__": true, + "tf.compat.v1.FixedLenFeature.count": true, + "tf.compat.v1.FixedLenFeature.default_value": true, + "tf.compat.v1.FixedLenFeature.dtype": true, + "tf.compat.v1.FixedLenFeature.index": true, + "tf.compat.v1.FixedLenFeature.shape": true, + "tf.compat.v1.FixedLenSequenceFeature": false, + "tf.compat.v1.FixedLenSequenceFeature.__add__": true, + "tf.compat.v1.FixedLenSequenceFeature.__contains__": true, + "tf.compat.v1.FixedLenSequenceFeature.__eq__": true, + "tf.compat.v1.FixedLenSequenceFeature.__ge__": true, + "tf.compat.v1.FixedLenSequenceFeature.__getitem__": true, + "tf.compat.v1.FixedLenSequenceFeature.__gt__": true, + "tf.compat.v1.FixedLenSequenceFeature.__init__": true, + "tf.compat.v1.FixedLenSequenceFeature.__iter__": true, + "tf.compat.v1.FixedLenSequenceFeature.__le__": true, + "tf.compat.v1.FixedLenSequenceFeature.__len__": true, + "tf.compat.v1.FixedLenSequenceFeature.__lt__": true, + "tf.compat.v1.FixedLenSequenceFeature.__mul__": true, + "tf.compat.v1.FixedLenSequenceFeature.__ne__": true, + "tf.compat.v1.FixedLenSequenceFeature.__new__": true, + "tf.compat.v1.FixedLenSequenceFeature.__rmul__": true, + "tf.compat.v1.FixedLenSequenceFeature.allow_missing": true, + "tf.compat.v1.FixedLenSequenceFeature.count": true, + "tf.compat.v1.FixedLenSequenceFeature.default_value": true, + "tf.compat.v1.FixedLenSequenceFeature.dtype": true, + "tf.compat.v1.FixedLenSequenceFeature.index": true, + "tf.compat.v1.FixedLenSequenceFeature.shape": true, + "tf.compat.v1.FixedLengthRecordReader": false, + "tf.compat.v1.FixedLengthRecordReader.__eq__": true, + "tf.compat.v1.FixedLengthRecordReader.__ge__": true, + "tf.compat.v1.FixedLengthRecordReader.__gt__": true, + "tf.compat.v1.FixedLengthRecordReader.__init__": true, + "tf.compat.v1.FixedLengthRecordReader.__le__": true, + "tf.compat.v1.FixedLengthRecordReader.__lt__": true, + "tf.compat.v1.FixedLengthRecordReader.__ne__": true, + "tf.compat.v1.FixedLengthRecordReader.__new__": true, + "tf.compat.v1.FixedLengthRecordReader.num_records_produced": true, + "tf.compat.v1.FixedLengthRecordReader.num_work_units_completed": true, + "tf.compat.v1.FixedLengthRecordReader.read": true, + "tf.compat.v1.FixedLengthRecordReader.read_up_to": true, + "tf.compat.v1.FixedLengthRecordReader.reader_ref": true, + "tf.compat.v1.FixedLengthRecordReader.reset": true, + "tf.compat.v1.FixedLengthRecordReader.restore_state": true, + 
"tf.compat.v1.FixedLengthRecordReader.serialize_state": true, + "tf.compat.v1.FixedLengthRecordReader.supports_serialize": true, + "tf.compat.v1.GIT_VERSION": true, + "tf.compat.v1.GPUOptions": false, + "tf.compat.v1.GPUOptions.ByteSize": true, + "tf.compat.v1.GPUOptions.Clear": true, + "tf.compat.v1.GPUOptions.ClearExtension": true, + "tf.compat.v1.GPUOptions.ClearField": true, + "tf.compat.v1.GPUOptions.CopyFrom": true, + "tf.compat.v1.GPUOptions.DESCRIPTOR": true, + "tf.compat.v1.GPUOptions.DiscardUnknownFields": true, + "tf.compat.v1.GPUOptions.Experimental": false, + "tf.compat.v1.GPUOptions.Experimental.ByteSize": true, + "tf.compat.v1.GPUOptions.Experimental.Clear": true, + "tf.compat.v1.GPUOptions.Experimental.ClearExtension": true, + "tf.compat.v1.GPUOptions.Experimental.ClearField": true, + "tf.compat.v1.GPUOptions.Experimental.CopyFrom": true, + "tf.compat.v1.GPUOptions.Experimental.DESCRIPTOR": true, + "tf.compat.v1.GPUOptions.Experimental.DiscardUnknownFields": true, + "tf.compat.v1.GPUOptions.Experimental.Extensions": true, + "tf.compat.v1.GPUOptions.Experimental.FindInitializationErrors": true, + "tf.compat.v1.GPUOptions.Experimental.FromString": true, + "tf.compat.v1.GPUOptions.Experimental.HasExtension": true, + "tf.compat.v1.GPUOptions.Experimental.HasField": true, + "tf.compat.v1.GPUOptions.Experimental.IsInitialized": true, + "tf.compat.v1.GPUOptions.Experimental.ListFields": true, + "tf.compat.v1.GPUOptions.Experimental.MergeFrom": true, + "tf.compat.v1.GPUOptions.Experimental.MergeFromString": true, + "tf.compat.v1.GPUOptions.Experimental.ParseFromString": true, + "tf.compat.v1.GPUOptions.Experimental.RegisterExtension": true, + "tf.compat.v1.GPUOptions.Experimental.SerializePartialToString": true, + "tf.compat.v1.GPUOptions.Experimental.SerializeToString": true, + "tf.compat.v1.GPUOptions.Experimental.SetInParent": true, + "tf.compat.v1.GPUOptions.Experimental.UnknownFields": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices": false, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.ByteSize": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.Clear": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.ClearExtension": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.ClearField": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.CopyFrom": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.DESCRIPTOR": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.DiscardUnknownFields": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.Extensions": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.FindInitializationErrors": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.FromString": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.HasExtension": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.HasField": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.IsInitialized": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.ListFields": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.MergeFrom": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.MergeFromString": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.ParseFromString": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.RegisterExtension": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.SerializePartialToString": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.SerializeToString": true, + 
"tf.compat.v1.GPUOptions.Experimental.VirtualDevices.SetInParent": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.UnknownFields": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.WhichOneof": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__eq__": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__ge__": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__gt__": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__init__": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__le__": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__lt__": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__ne__": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.__new__": true, + "tf.compat.v1.GPUOptions.Experimental.VirtualDevices.memory_limit_mb": true, + "tf.compat.v1.GPUOptions.Experimental.WhichOneof": true, + "tf.compat.v1.GPUOptions.Experimental.__eq__": true, + "tf.compat.v1.GPUOptions.Experimental.__ge__": true, + "tf.compat.v1.GPUOptions.Experimental.__gt__": true, + "tf.compat.v1.GPUOptions.Experimental.__init__": true, + "tf.compat.v1.GPUOptions.Experimental.__le__": true, + "tf.compat.v1.GPUOptions.Experimental.__lt__": true, + "tf.compat.v1.GPUOptions.Experimental.__ne__": true, + "tf.compat.v1.GPUOptions.Experimental.__new__": true, + "tf.compat.v1.GPUOptions.Experimental.collective_ring_order": true, + "tf.compat.v1.GPUOptions.Experimental.kernel_tracker_max_bytes": true, + "tf.compat.v1.GPUOptions.Experimental.kernel_tracker_max_interval": true, + "tf.compat.v1.GPUOptions.Experimental.kernel_tracker_max_pending": true, + "tf.compat.v1.GPUOptions.Experimental.num_dev_to_dev_copy_streams": true, + "tf.compat.v1.GPUOptions.Experimental.timestamped_allocator": true, + "tf.compat.v1.GPUOptions.Experimental.use_unified_memory": true, + "tf.compat.v1.GPUOptions.Experimental.virtual_devices": true, + "tf.compat.v1.GPUOptions.Extensions": true, + "tf.compat.v1.GPUOptions.FindInitializationErrors": true, + "tf.compat.v1.GPUOptions.FromString": true, + "tf.compat.v1.GPUOptions.HasExtension": true, + "tf.compat.v1.GPUOptions.HasField": true, + "tf.compat.v1.GPUOptions.IsInitialized": true, + "tf.compat.v1.GPUOptions.ListFields": true, + "tf.compat.v1.GPUOptions.MergeFrom": true, + "tf.compat.v1.GPUOptions.MergeFromString": true, + "tf.compat.v1.GPUOptions.ParseFromString": true, + "tf.compat.v1.GPUOptions.RegisterExtension": true, + "tf.compat.v1.GPUOptions.SerializePartialToString": true, + "tf.compat.v1.GPUOptions.SerializeToString": true, + "tf.compat.v1.GPUOptions.SetInParent": true, + "tf.compat.v1.GPUOptions.UnknownFields": true, + "tf.compat.v1.GPUOptions.WhichOneof": true, + "tf.compat.v1.GPUOptions.__eq__": true, + "tf.compat.v1.GPUOptions.__ge__": true, + "tf.compat.v1.GPUOptions.__gt__": true, + "tf.compat.v1.GPUOptions.__init__": true, + "tf.compat.v1.GPUOptions.__le__": true, + "tf.compat.v1.GPUOptions.__lt__": true, + "tf.compat.v1.GPUOptions.__ne__": true, + "tf.compat.v1.GPUOptions.__new__": true, + "tf.compat.v1.GPUOptions.allocator_type": true, + "tf.compat.v1.GPUOptions.allow_growth": true, + "tf.compat.v1.GPUOptions.deferred_deletion_bytes": true, + "tf.compat.v1.GPUOptions.experimental": true, + "tf.compat.v1.GPUOptions.force_gpu_compatible": true, + "tf.compat.v1.GPUOptions.per_process_gpu_memory_fraction": true, + "tf.compat.v1.GPUOptions.polling_active_delay_usecs": true, + "tf.compat.v1.GPUOptions.polling_inactive_delay_msecs": true, + 
"tf.compat.v1.GPUOptions.visible_device_list": true, + "tf.compat.v1.GRAPH_DEF_VERSION": true, + "tf.compat.v1.GRAPH_DEF_VERSION_MIN_CONSUMER": true, + "tf.compat.v1.GRAPH_DEF_VERSION_MIN_PRODUCER": true, + "tf.compat.v1.GradientTape": false, + "tf.compat.v1.GradientTape.__enter__": true, + "tf.compat.v1.GradientTape.__eq__": true, + "tf.compat.v1.GradientTape.__exit__": true, + "tf.compat.v1.GradientTape.__ge__": true, + "tf.compat.v1.GradientTape.__gt__": true, + "tf.compat.v1.GradientTape.__init__": true, + "tf.compat.v1.GradientTape.__le__": true, + "tf.compat.v1.GradientTape.__lt__": true, + "tf.compat.v1.GradientTape.__ne__": true, + "tf.compat.v1.GradientTape.__new__": true, + "tf.compat.v1.GradientTape.batch_jacobian": true, + "tf.compat.v1.GradientTape.gradient": true, + "tf.compat.v1.GradientTape.jacobian": true, + "tf.compat.v1.GradientTape.reset": true, + "tf.compat.v1.GradientTape.stop_recording": true, + "tf.compat.v1.GradientTape.watch": true, + "tf.compat.v1.GradientTape.watched_variables": true, + "tf.compat.v1.Graph": false, + "tf.compat.v1.Graph.__eq__": true, + "tf.compat.v1.Graph.__ge__": true, + "tf.compat.v1.Graph.__gt__": true, + "tf.compat.v1.Graph.__init__": true, + "tf.compat.v1.Graph.__le__": true, + "tf.compat.v1.Graph.__lt__": true, + "tf.compat.v1.Graph.__ne__": true, + "tf.compat.v1.Graph.__new__": true, + "tf.compat.v1.Graph.add_to_collection": true, + "tf.compat.v1.Graph.add_to_collections": true, + "tf.compat.v1.Graph.as_default": true, + "tf.compat.v1.Graph.as_graph_def": true, + "tf.compat.v1.Graph.as_graph_element": true, + "tf.compat.v1.Graph.building_function": true, + "tf.compat.v1.Graph.clear_collection": true, + "tf.compat.v1.Graph.collections": true, + "tf.compat.v1.Graph.colocate_with": true, + "tf.compat.v1.Graph.container": true, + "tf.compat.v1.Graph.control_dependencies": true, + "tf.compat.v1.Graph.create_op": true, + "tf.compat.v1.Graph.device": true, + "tf.compat.v1.Graph.finalize": true, + "tf.compat.v1.Graph.finalized": true, + "tf.compat.v1.Graph.get_all_collection_keys": true, + "tf.compat.v1.Graph.get_collection": true, + "tf.compat.v1.Graph.get_collection_ref": true, + "tf.compat.v1.Graph.get_name_scope": true, + "tf.compat.v1.Graph.get_operation_by_name": true, + "tf.compat.v1.Graph.get_operations": true, + "tf.compat.v1.Graph.get_tensor_by_name": true, + "tf.compat.v1.Graph.gradient_override_map": true, + "tf.compat.v1.Graph.graph_def_versions": true, + "tf.compat.v1.Graph.is_feedable": true, + "tf.compat.v1.Graph.is_fetchable": true, + "tf.compat.v1.Graph.name_scope": true, + "tf.compat.v1.Graph.prevent_feeding": true, + "tf.compat.v1.Graph.prevent_fetching": true, + "tf.compat.v1.Graph.seed": true, + "tf.compat.v1.Graph.switch_to_thread_local": true, + "tf.compat.v1.Graph.unique_name": true, + "tf.compat.v1.Graph.version": true, + "tf.compat.v1.GraphDef": false, + "tf.compat.v1.GraphDef.ByteSize": true, + "tf.compat.v1.GraphDef.Clear": true, + "tf.compat.v1.GraphDef.ClearExtension": true, + "tf.compat.v1.GraphDef.ClearField": true, + "tf.compat.v1.GraphDef.CopyFrom": true, + "tf.compat.v1.GraphDef.DESCRIPTOR": true, + "tf.compat.v1.GraphDef.DiscardUnknownFields": true, + "tf.compat.v1.GraphDef.Extensions": true, + "tf.compat.v1.GraphDef.FindInitializationErrors": true, + "tf.compat.v1.GraphDef.FromString": true, + "tf.compat.v1.GraphDef.HasExtension": true, + "tf.compat.v1.GraphDef.HasField": true, + "tf.compat.v1.GraphDef.IsInitialized": true, + "tf.compat.v1.GraphDef.ListFields": true, + "tf.compat.v1.GraphDef.MergeFrom": 
true, + "tf.compat.v1.GraphDef.MergeFromString": true, + "tf.compat.v1.GraphDef.ParseFromString": true, + "tf.compat.v1.GraphDef.RegisterExtension": true, + "tf.compat.v1.GraphDef.SerializePartialToString": true, + "tf.compat.v1.GraphDef.SerializeToString": true, + "tf.compat.v1.GraphDef.SetInParent": true, + "tf.compat.v1.GraphDef.UnknownFields": true, + "tf.compat.v1.GraphDef.WhichOneof": true, + "tf.compat.v1.GraphDef.__eq__": true, + "tf.compat.v1.GraphDef.__ge__": true, + "tf.compat.v1.GraphDef.__gt__": true, + "tf.compat.v1.GraphDef.__init__": true, + "tf.compat.v1.GraphDef.__le__": true, + "tf.compat.v1.GraphDef.__lt__": true, + "tf.compat.v1.GraphDef.__ne__": true, + "tf.compat.v1.GraphDef.__new__": true, + "tf.compat.v1.GraphDef.library": true, + "tf.compat.v1.GraphDef.node": true, + "tf.compat.v1.GraphDef.version": true, + "tf.compat.v1.GraphDef.versions": true, + "tf.compat.v1.GraphKeys": false, + "tf.compat.v1.GraphKeys.ACTIVATIONS": true, + "tf.compat.v1.GraphKeys.ASSET_FILEPATHS": true, + "tf.compat.v1.GraphKeys.BIASES": true, + "tf.compat.v1.GraphKeys.CONCATENATED_VARIABLES": true, + "tf.compat.v1.GraphKeys.COND_CONTEXT": true, + "tf.compat.v1.GraphKeys.EVAL_STEP": true, + "tf.compat.v1.GraphKeys.GLOBAL_STEP": true, + "tf.compat.v1.GraphKeys.GLOBAL_VARIABLES": true, + "tf.compat.v1.GraphKeys.INIT_OP": true, + "tf.compat.v1.GraphKeys.LOCAL_INIT_OP": true, + "tf.compat.v1.GraphKeys.LOCAL_RESOURCES": true, + "tf.compat.v1.GraphKeys.LOCAL_VARIABLES": true, + "tf.compat.v1.GraphKeys.LOSSES": true, + "tf.compat.v1.GraphKeys.METRIC_VARIABLES": true, + "tf.compat.v1.GraphKeys.MODEL_VARIABLES": true, + "tf.compat.v1.GraphKeys.MOVING_AVERAGE_VARIABLES": true, + "tf.compat.v1.GraphKeys.QUEUE_RUNNERS": true, + "tf.compat.v1.GraphKeys.READY_FOR_LOCAL_INIT_OP": true, + "tf.compat.v1.GraphKeys.READY_OP": true, + "tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES": true, + "tf.compat.v1.GraphKeys.RESOURCES": true, + "tf.compat.v1.GraphKeys.SAVEABLE_OBJECTS": true, + "tf.compat.v1.GraphKeys.SAVERS": true, + "tf.compat.v1.GraphKeys.SUMMARIES": true, + "tf.compat.v1.GraphKeys.SUMMARY_OP": true, + "tf.compat.v1.GraphKeys.TABLE_INITIALIZERS": true, + "tf.compat.v1.GraphKeys.TRAINABLE_RESOURCE_VARIABLES": true, + "tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES": true, + "tf.compat.v1.GraphKeys.TRAIN_OP": true, + "tf.compat.v1.GraphKeys.UPDATE_OPS": true, + "tf.compat.v1.GraphKeys.VARIABLES": true, + "tf.compat.v1.GraphKeys.WEIGHTS": true, + "tf.compat.v1.GraphKeys.WHILE_CONTEXT": true, + "tf.compat.v1.GraphKeys.__eq__": true, + "tf.compat.v1.GraphKeys.__ge__": true, + "tf.compat.v1.GraphKeys.__gt__": true, + "tf.compat.v1.GraphKeys.__init__": true, + "tf.compat.v1.GraphKeys.__le__": true, + "tf.compat.v1.GraphKeys.__lt__": true, + "tf.compat.v1.GraphKeys.__ne__": true, + "tf.compat.v1.GraphKeys.__new__": true, + "tf.compat.v1.GraphOptions": false, + "tf.compat.v1.GraphOptions.ByteSize": true, + "tf.compat.v1.GraphOptions.Clear": true, + "tf.compat.v1.GraphOptions.ClearExtension": true, + "tf.compat.v1.GraphOptions.ClearField": true, + "tf.compat.v1.GraphOptions.CopyFrom": true, + "tf.compat.v1.GraphOptions.DESCRIPTOR": true, + "tf.compat.v1.GraphOptions.DiscardUnknownFields": true, + "tf.compat.v1.GraphOptions.Extensions": true, + "tf.compat.v1.GraphOptions.FindInitializationErrors": true, + "tf.compat.v1.GraphOptions.FromString": true, + "tf.compat.v1.GraphOptions.HasExtension": true, + "tf.compat.v1.GraphOptions.HasField": true, + "tf.compat.v1.GraphOptions.IsInitialized": true, + 
"tf.compat.v1.GraphOptions.ListFields": true, + "tf.compat.v1.GraphOptions.MergeFrom": true, + "tf.compat.v1.GraphOptions.MergeFromString": true, + "tf.compat.v1.GraphOptions.ParseFromString": true, + "tf.compat.v1.GraphOptions.RegisterExtension": true, + "tf.compat.v1.GraphOptions.SerializePartialToString": true, + "tf.compat.v1.GraphOptions.SerializeToString": true, + "tf.compat.v1.GraphOptions.SetInParent": true, + "tf.compat.v1.GraphOptions.UnknownFields": true, + "tf.compat.v1.GraphOptions.WhichOneof": true, + "tf.compat.v1.GraphOptions.__eq__": true, + "tf.compat.v1.GraphOptions.__ge__": true, + "tf.compat.v1.GraphOptions.__gt__": true, + "tf.compat.v1.GraphOptions.__init__": true, + "tf.compat.v1.GraphOptions.__le__": true, + "tf.compat.v1.GraphOptions.__lt__": true, + "tf.compat.v1.GraphOptions.__ne__": true, + "tf.compat.v1.GraphOptions.__new__": true, + "tf.compat.v1.GraphOptions.build_cost_model": true, + "tf.compat.v1.GraphOptions.build_cost_model_after": true, + "tf.compat.v1.GraphOptions.enable_bfloat16_sendrecv": true, + "tf.compat.v1.GraphOptions.enable_recv_scheduling": true, + "tf.compat.v1.GraphOptions.infer_shapes": true, + "tf.compat.v1.GraphOptions.optimizer_options": true, + "tf.compat.v1.GraphOptions.place_pruned_graph": true, + "tf.compat.v1.GraphOptions.rewrite_options": true, + "tf.compat.v1.GraphOptions.timeline_step": true, + "tf.compat.v1.HistogramProto": false, + "tf.compat.v1.HistogramProto.ByteSize": true, + "tf.compat.v1.HistogramProto.Clear": true, + "tf.compat.v1.HistogramProto.ClearExtension": true, + "tf.compat.v1.HistogramProto.ClearField": true, + "tf.compat.v1.HistogramProto.CopyFrom": true, + "tf.compat.v1.HistogramProto.DESCRIPTOR": true, + "tf.compat.v1.HistogramProto.DiscardUnknownFields": true, + "tf.compat.v1.HistogramProto.Extensions": true, + "tf.compat.v1.HistogramProto.FindInitializationErrors": true, + "tf.compat.v1.HistogramProto.FromString": true, + "tf.compat.v1.HistogramProto.HasExtension": true, + "tf.compat.v1.HistogramProto.HasField": true, + "tf.compat.v1.HistogramProto.IsInitialized": true, + "tf.compat.v1.HistogramProto.ListFields": true, + "tf.compat.v1.HistogramProto.MergeFrom": true, + "tf.compat.v1.HistogramProto.MergeFromString": true, + "tf.compat.v1.HistogramProto.ParseFromString": true, + "tf.compat.v1.HistogramProto.RegisterExtension": true, + "tf.compat.v1.HistogramProto.SerializePartialToString": true, + "tf.compat.v1.HistogramProto.SerializeToString": true, + "tf.compat.v1.HistogramProto.SetInParent": true, + "tf.compat.v1.HistogramProto.UnknownFields": true, + "tf.compat.v1.HistogramProto.WhichOneof": true, + "tf.compat.v1.HistogramProto.__eq__": true, + "tf.compat.v1.HistogramProto.__ge__": true, + "tf.compat.v1.HistogramProto.__gt__": true, + "tf.compat.v1.HistogramProto.__init__": true, + "tf.compat.v1.HistogramProto.__le__": true, + "tf.compat.v1.HistogramProto.__lt__": true, + "tf.compat.v1.HistogramProto.__ne__": true, + "tf.compat.v1.HistogramProto.__new__": true, + "tf.compat.v1.HistogramProto.bucket": true, + "tf.compat.v1.HistogramProto.bucket_limit": true, + "tf.compat.v1.HistogramProto.max": true, + "tf.compat.v1.HistogramProto.min": true, + "tf.compat.v1.HistogramProto.num": true, + "tf.compat.v1.HistogramProto.sum": true, + "tf.compat.v1.HistogramProto.sum_squares": true, + "tf.compat.v1.IdentityReader": false, + "tf.compat.v1.IdentityReader.__eq__": true, + "tf.compat.v1.IdentityReader.__ge__": true, + "tf.compat.v1.IdentityReader.__gt__": true, + "tf.compat.v1.IdentityReader.__init__": true, + 
"tf.compat.v1.IdentityReader.__le__": true, + "tf.compat.v1.IdentityReader.__lt__": true, + "tf.compat.v1.IdentityReader.__ne__": true, + "tf.compat.v1.IdentityReader.__new__": true, + "tf.compat.v1.IdentityReader.num_records_produced": true, + "tf.compat.v1.IdentityReader.num_work_units_completed": true, + "tf.compat.v1.IdentityReader.read": true, + "tf.compat.v1.IdentityReader.read_up_to": true, + "tf.compat.v1.IdentityReader.reader_ref": true, + "tf.compat.v1.IdentityReader.reset": true, + "tf.compat.v1.IdentityReader.restore_state": true, + "tf.compat.v1.IdentityReader.serialize_state": true, + "tf.compat.v1.IdentityReader.supports_serialize": true, + "tf.compat.v1.IndexedSlices": false, + "tf.compat.v1.IndexedSlices.__eq__": true, + "tf.compat.v1.IndexedSlices.__ge__": true, + "tf.compat.v1.IndexedSlices.__gt__": true, + "tf.compat.v1.IndexedSlices.__init__": true, + "tf.compat.v1.IndexedSlices.__le__": true, + "tf.compat.v1.IndexedSlices.__lt__": true, + "tf.compat.v1.IndexedSlices.__ne__": true, + "tf.compat.v1.IndexedSlices.__neg__": true, + "tf.compat.v1.IndexedSlices.__new__": true, + "tf.compat.v1.IndexedSlices.consumers": true, + "tf.compat.v1.IndexedSlices.dense_shape": true, + "tf.compat.v1.IndexedSlices.device": true, + "tf.compat.v1.IndexedSlices.dtype": true, + "tf.compat.v1.IndexedSlices.graph": true, + "tf.compat.v1.IndexedSlices.indices": true, + "tf.compat.v1.IndexedSlices.name": true, + "tf.compat.v1.IndexedSlices.op": true, + "tf.compat.v1.IndexedSlices.shape": true, + "tf.compat.v1.IndexedSlices.values": true, + "tf.compat.v1.IndexedSlicesSpec": false, + "tf.compat.v1.IndexedSlicesSpec.__eq__": true, + "tf.compat.v1.IndexedSlicesSpec.__ge__": true, + "tf.compat.v1.IndexedSlicesSpec.__gt__": true, + "tf.compat.v1.IndexedSlicesSpec.__init__": true, + "tf.compat.v1.IndexedSlicesSpec.__le__": true, + "tf.compat.v1.IndexedSlicesSpec.__lt__": true, + "tf.compat.v1.IndexedSlicesSpec.__ne__": true, + "tf.compat.v1.IndexedSlicesSpec.__new__": true, + "tf.compat.v1.IndexedSlicesSpec.is_compatible_with": true, + "tf.compat.v1.IndexedSlicesSpec.most_specific_compatible_type": true, + "tf.compat.v1.IndexedSlicesSpec.value_type": true, + "tf.compat.v1.InteractiveSession": false, + "tf.compat.v1.InteractiveSession.__eq__": true, + "tf.compat.v1.InteractiveSession.__ge__": true, + "tf.compat.v1.InteractiveSession.__gt__": true, + "tf.compat.v1.InteractiveSession.__init__": true, + "tf.compat.v1.InteractiveSession.__le__": true, + "tf.compat.v1.InteractiveSession.__lt__": true, + "tf.compat.v1.InteractiveSession.__ne__": true, + "tf.compat.v1.InteractiveSession.__new__": true, + "tf.compat.v1.InteractiveSession.as_default": true, + "tf.compat.v1.InteractiveSession.close": true, + "tf.compat.v1.InteractiveSession.graph": true, + "tf.compat.v1.InteractiveSession.graph_def": true, + "tf.compat.v1.InteractiveSession.list_devices": true, + "tf.compat.v1.InteractiveSession.make_callable": true, + "tf.compat.v1.InteractiveSession.partial_run": true, + "tf.compat.v1.InteractiveSession.partial_run_setup": true, + "tf.compat.v1.InteractiveSession.run": true, + "tf.compat.v1.InteractiveSession.sess_str": true, + "tf.compat.v1.LMDBReader": false, + "tf.compat.v1.LMDBReader.__eq__": true, + "tf.compat.v1.LMDBReader.__ge__": true, + "tf.compat.v1.LMDBReader.__gt__": true, + "tf.compat.v1.LMDBReader.__init__": true, + "tf.compat.v1.LMDBReader.__le__": true, + "tf.compat.v1.LMDBReader.__lt__": true, + "tf.compat.v1.LMDBReader.__ne__": true, + "tf.compat.v1.LMDBReader.__new__": true, + 
"tf.compat.v1.LMDBReader.num_records_produced": true, + "tf.compat.v1.LMDBReader.num_work_units_completed": true, + "tf.compat.v1.LMDBReader.read": true, + "tf.compat.v1.LMDBReader.read_up_to": true, + "tf.compat.v1.LMDBReader.reader_ref": true, + "tf.compat.v1.LMDBReader.reset": true, + "tf.compat.v1.LMDBReader.restore_state": true, + "tf.compat.v1.LMDBReader.serialize_state": true, + "tf.compat.v1.LMDBReader.supports_serialize": true, + "tf.compat.v1.LogMessage": false, + "tf.compat.v1.LogMessage.ByteSize": true, + "tf.compat.v1.LogMessage.Clear": true, + "tf.compat.v1.LogMessage.ClearExtension": true, + "tf.compat.v1.LogMessage.ClearField": true, + "tf.compat.v1.LogMessage.CopyFrom": true, + "tf.compat.v1.LogMessage.DEBUGGING": true, + "tf.compat.v1.LogMessage.DESCRIPTOR": true, + "tf.compat.v1.LogMessage.DiscardUnknownFields": true, + "tf.compat.v1.LogMessage.ERROR": true, + "tf.compat.v1.LogMessage.Extensions": true, + "tf.compat.v1.LogMessage.FATAL": true, + "tf.compat.v1.LogMessage.FindInitializationErrors": true, + "tf.compat.v1.LogMessage.FromString": true, + "tf.compat.v1.LogMessage.HasExtension": true, + "tf.compat.v1.LogMessage.HasField": true, + "tf.compat.v1.LogMessage.INFO": true, + "tf.compat.v1.LogMessage.IsInitialized": true, + "tf.compat.v1.LogMessage.Level": true, + "tf.compat.v1.LogMessage.ListFields": true, + "tf.compat.v1.LogMessage.MergeFrom": true, + "tf.compat.v1.LogMessage.MergeFromString": true, + "tf.compat.v1.LogMessage.ParseFromString": true, + "tf.compat.v1.LogMessage.RegisterExtension": true, + "tf.compat.v1.LogMessage.SerializePartialToString": true, + "tf.compat.v1.LogMessage.SerializeToString": true, + "tf.compat.v1.LogMessage.SetInParent": true, + "tf.compat.v1.LogMessage.UNKNOWN": true, + "tf.compat.v1.LogMessage.UnknownFields": true, + "tf.compat.v1.LogMessage.WARN": true, + "tf.compat.v1.LogMessage.WhichOneof": true, + "tf.compat.v1.LogMessage.__eq__": true, + "tf.compat.v1.LogMessage.__ge__": true, + "tf.compat.v1.LogMessage.__gt__": true, + "tf.compat.v1.LogMessage.__init__": true, + "tf.compat.v1.LogMessage.__le__": true, + "tf.compat.v1.LogMessage.__lt__": true, + "tf.compat.v1.LogMessage.__ne__": true, + "tf.compat.v1.LogMessage.__new__": true, + "tf.compat.v1.LogMessage.level": true, + "tf.compat.v1.LogMessage.message": true, + "tf.compat.v1.MONOLITHIC_BUILD": true, + "tf.compat.v1.MetaGraphDef": false, + "tf.compat.v1.MetaGraphDef.ByteSize": true, + "tf.compat.v1.MetaGraphDef.Clear": true, + "tf.compat.v1.MetaGraphDef.ClearExtension": true, + "tf.compat.v1.MetaGraphDef.ClearField": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry": false, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.ByteSize": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.Clear": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.ClearExtension": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.ClearField": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.CopyFrom": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.DESCRIPTOR": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.DiscardUnknownFields": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.Extensions": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.FindInitializationErrors": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.FromString": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.HasExtension": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.HasField": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.IsInitialized": true, + 
"tf.compat.v1.MetaGraphDef.CollectionDefEntry.ListFields": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.MergeFrom": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.MergeFromString": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.ParseFromString": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.RegisterExtension": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.SerializePartialToString": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.SerializeToString": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.SetInParent": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.UnknownFields": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.WhichOneof": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__eq__": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__ge__": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__gt__": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__init__": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__le__": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__lt__": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__ne__": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.__new__": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.key": true, + "tf.compat.v1.MetaGraphDef.CollectionDefEntry.value": true, + "tf.compat.v1.MetaGraphDef.CopyFrom": true, + "tf.compat.v1.MetaGraphDef.DESCRIPTOR": true, + "tf.compat.v1.MetaGraphDef.DiscardUnknownFields": true, + "tf.compat.v1.MetaGraphDef.Extensions": true, + "tf.compat.v1.MetaGraphDef.FindInitializationErrors": true, + "tf.compat.v1.MetaGraphDef.FromString": true, + "tf.compat.v1.MetaGraphDef.HasExtension": true, + "tf.compat.v1.MetaGraphDef.HasField": true, + "tf.compat.v1.MetaGraphDef.IsInitialized": true, + "tf.compat.v1.MetaGraphDef.ListFields": true, + "tf.compat.v1.MetaGraphDef.MergeFrom": true, + "tf.compat.v1.MetaGraphDef.MergeFromString": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef": false, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.ByteSize": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.Clear": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.ClearExtension": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.ClearField": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.CopyFrom": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.DESCRIPTOR": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.DiscardUnknownFields": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.Extensions": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FindInitializationErrors": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FromString": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry": false, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.ByteSize": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.Clear": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.ClearExtension": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.ClearField": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.CopyFrom": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.DESCRIPTOR": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.DiscardUnknownFields": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.Extensions": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.FindInitializationErrors": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.FromString": true, + 
"tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.HasExtension": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.HasField": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.IsInitialized": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.ListFields": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.MergeFrom": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.MergeFromString": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.ParseFromString": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.RegisterExtension": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.SerializePartialToString": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.SerializeToString": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.SetInParent": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.UnknownFields": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.WhichOneof": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__eq__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__ge__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__gt__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__init__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__le__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__lt__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__ne__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.__new__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.key": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry.value": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.HasExtension": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.HasField": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.IsInitialized": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.ListFields": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.MergeFrom": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.MergeFromString": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.ParseFromString": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.RegisterExtension": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.SerializePartialToString": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.SerializeToString": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.SetInParent": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.UnknownFields": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.WhichOneof": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__eq__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__ge__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__gt__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__init__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__le__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__lt__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__ne__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.__new__": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.any_info": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.function_aliases": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.meta_graph_version": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.stripped_default_attrs": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.stripped_op_list": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.tags": true, + 
"tf.compat.v1.MetaGraphDef.MetaInfoDef.tensorflow_git_version": true, + "tf.compat.v1.MetaGraphDef.MetaInfoDef.tensorflow_version": true, + "tf.compat.v1.MetaGraphDef.ParseFromString": true, + "tf.compat.v1.MetaGraphDef.RegisterExtension": true, + "tf.compat.v1.MetaGraphDef.SerializePartialToString": true, + "tf.compat.v1.MetaGraphDef.SerializeToString": true, + "tf.compat.v1.MetaGraphDef.SetInParent": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry": false, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.ByteSize": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.Clear": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.ClearExtension": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.ClearField": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.CopyFrom": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.DESCRIPTOR": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.DiscardUnknownFields": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.Extensions": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.FindInitializationErrors": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.FromString": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.HasExtension": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.HasField": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.IsInitialized": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.ListFields": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.MergeFrom": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.MergeFromString": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.ParseFromString": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.RegisterExtension": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.SerializePartialToString": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.SerializeToString": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.SetInParent": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.UnknownFields": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.WhichOneof": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__eq__": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__ge__": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__gt__": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__init__": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__le__": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__lt__": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__ne__": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.__new__": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.key": true, + "tf.compat.v1.MetaGraphDef.SignatureDefEntry.value": true, + "tf.compat.v1.MetaGraphDef.UnknownFields": true, + "tf.compat.v1.MetaGraphDef.WhichOneof": true, + "tf.compat.v1.MetaGraphDef.__eq__": true, + "tf.compat.v1.MetaGraphDef.__ge__": true, + "tf.compat.v1.MetaGraphDef.__gt__": true, + "tf.compat.v1.MetaGraphDef.__init__": true, + "tf.compat.v1.MetaGraphDef.__le__": true, + "tf.compat.v1.MetaGraphDef.__lt__": true, + "tf.compat.v1.MetaGraphDef.__ne__": true, + "tf.compat.v1.MetaGraphDef.__new__": true, + "tf.compat.v1.MetaGraphDef.asset_file_def": true, + "tf.compat.v1.MetaGraphDef.collection_def": true, + "tf.compat.v1.MetaGraphDef.graph_def": true, + "tf.compat.v1.MetaGraphDef.meta_info_def": true, + "tf.compat.v1.MetaGraphDef.object_graph_def": true, + "tf.compat.v1.MetaGraphDef.saver_def": true, + "tf.compat.v1.MetaGraphDef.signature_def": true, + "tf.compat.v1.Module": false, + "tf.compat.v1.Module.__eq__": true, + 
"tf.compat.v1.Module.__ge__": true, + "tf.compat.v1.Module.__gt__": true, + "tf.compat.v1.Module.__init__": true, + "tf.compat.v1.Module.__le__": true, + "tf.compat.v1.Module.__lt__": true, + "tf.compat.v1.Module.__ne__": true, + "tf.compat.v1.Module.__new__": true, + "tf.compat.v1.Module.name": true, + "tf.compat.v1.Module.name_scope": true, + "tf.compat.v1.Module.submodules": true, + "tf.compat.v1.Module.trainable_variables": true, + "tf.compat.v1.Module.variables": true, + "tf.compat.v1.Module.with_name_scope": true, + "tf.compat.v1.NameAttrList": false, + "tf.compat.v1.NameAttrList.AttrEntry": false, + "tf.compat.v1.NameAttrList.AttrEntry.ByteSize": true, + "tf.compat.v1.NameAttrList.AttrEntry.Clear": true, + "tf.compat.v1.NameAttrList.AttrEntry.ClearExtension": true, + "tf.compat.v1.NameAttrList.AttrEntry.ClearField": true, + "tf.compat.v1.NameAttrList.AttrEntry.CopyFrom": true, + "tf.compat.v1.NameAttrList.AttrEntry.DESCRIPTOR": true, + "tf.compat.v1.NameAttrList.AttrEntry.DiscardUnknownFields": true, + "tf.compat.v1.NameAttrList.AttrEntry.Extensions": true, + "tf.compat.v1.NameAttrList.AttrEntry.FindInitializationErrors": true, + "tf.compat.v1.NameAttrList.AttrEntry.FromString": true, + "tf.compat.v1.NameAttrList.AttrEntry.HasExtension": true, + "tf.compat.v1.NameAttrList.AttrEntry.HasField": true, + "tf.compat.v1.NameAttrList.AttrEntry.IsInitialized": true, + "tf.compat.v1.NameAttrList.AttrEntry.ListFields": true, + "tf.compat.v1.NameAttrList.AttrEntry.MergeFrom": true, + "tf.compat.v1.NameAttrList.AttrEntry.MergeFromString": true, + "tf.compat.v1.NameAttrList.AttrEntry.ParseFromString": true, + "tf.compat.v1.NameAttrList.AttrEntry.RegisterExtension": true, + "tf.compat.v1.NameAttrList.AttrEntry.SerializePartialToString": true, + "tf.compat.v1.NameAttrList.AttrEntry.SerializeToString": true, + "tf.compat.v1.NameAttrList.AttrEntry.SetInParent": true, + "tf.compat.v1.NameAttrList.AttrEntry.UnknownFields": true, + "tf.compat.v1.NameAttrList.AttrEntry.WhichOneof": true, + "tf.compat.v1.NameAttrList.AttrEntry.__eq__": true, + "tf.compat.v1.NameAttrList.AttrEntry.__ge__": true, + "tf.compat.v1.NameAttrList.AttrEntry.__gt__": true, + "tf.compat.v1.NameAttrList.AttrEntry.__init__": true, + "tf.compat.v1.NameAttrList.AttrEntry.__le__": true, + "tf.compat.v1.NameAttrList.AttrEntry.__lt__": true, + "tf.compat.v1.NameAttrList.AttrEntry.__ne__": true, + "tf.compat.v1.NameAttrList.AttrEntry.__new__": true, + "tf.compat.v1.NameAttrList.AttrEntry.key": true, + "tf.compat.v1.NameAttrList.AttrEntry.value": true, + "tf.compat.v1.NameAttrList.ByteSize": true, + "tf.compat.v1.NameAttrList.Clear": true, + "tf.compat.v1.NameAttrList.ClearExtension": true, + "tf.compat.v1.NameAttrList.ClearField": true, + "tf.compat.v1.NameAttrList.CopyFrom": true, + "tf.compat.v1.NameAttrList.DESCRIPTOR": true, + "tf.compat.v1.NameAttrList.DiscardUnknownFields": true, + "tf.compat.v1.NameAttrList.Extensions": true, + "tf.compat.v1.NameAttrList.FindInitializationErrors": true, + "tf.compat.v1.NameAttrList.FromString": true, + "tf.compat.v1.NameAttrList.HasExtension": true, + "tf.compat.v1.NameAttrList.HasField": true, + "tf.compat.v1.NameAttrList.IsInitialized": true, + "tf.compat.v1.NameAttrList.ListFields": true, + "tf.compat.v1.NameAttrList.MergeFrom": true, + "tf.compat.v1.NameAttrList.MergeFromString": true, + "tf.compat.v1.NameAttrList.ParseFromString": true, + "tf.compat.v1.NameAttrList.RegisterExtension": true, + "tf.compat.v1.NameAttrList.SerializePartialToString": true, + 
"tf.compat.v1.NameAttrList.SerializeToString": true, + "tf.compat.v1.NameAttrList.SetInParent": true, + "tf.compat.v1.NameAttrList.UnknownFields": true, + "tf.compat.v1.NameAttrList.WhichOneof": true, + "tf.compat.v1.NameAttrList.__eq__": true, + "tf.compat.v1.NameAttrList.__ge__": true, + "tf.compat.v1.NameAttrList.__gt__": true, + "tf.compat.v1.NameAttrList.__init__": true, + "tf.compat.v1.NameAttrList.__le__": true, + "tf.compat.v1.NameAttrList.__lt__": true, + "tf.compat.v1.NameAttrList.__ne__": true, + "tf.compat.v1.NameAttrList.__new__": true, + "tf.compat.v1.NameAttrList.attr": true, + "tf.compat.v1.NameAttrList.name": true, + "tf.compat.v1.NoGradient": false, + "tf.compat.v1.NodeDef": false, + "tf.compat.v1.NodeDef.AttrEntry": false, + "tf.compat.v1.NodeDef.AttrEntry.ByteSize": true, + "tf.compat.v1.NodeDef.AttrEntry.Clear": true, + "tf.compat.v1.NodeDef.AttrEntry.ClearExtension": true, + "tf.compat.v1.NodeDef.AttrEntry.ClearField": true, + "tf.compat.v1.NodeDef.AttrEntry.CopyFrom": true, + "tf.compat.v1.NodeDef.AttrEntry.DESCRIPTOR": true, + "tf.compat.v1.NodeDef.AttrEntry.DiscardUnknownFields": true, + "tf.compat.v1.NodeDef.AttrEntry.Extensions": true, + "tf.compat.v1.NodeDef.AttrEntry.FindInitializationErrors": true, + "tf.compat.v1.NodeDef.AttrEntry.FromString": true, + "tf.compat.v1.NodeDef.AttrEntry.HasExtension": true, + "tf.compat.v1.NodeDef.AttrEntry.HasField": true, + "tf.compat.v1.NodeDef.AttrEntry.IsInitialized": true, + "tf.compat.v1.NodeDef.AttrEntry.ListFields": true, + "tf.compat.v1.NodeDef.AttrEntry.MergeFrom": true, + "tf.compat.v1.NodeDef.AttrEntry.MergeFromString": true, + "tf.compat.v1.NodeDef.AttrEntry.ParseFromString": true, + "tf.compat.v1.NodeDef.AttrEntry.RegisterExtension": true, + "tf.compat.v1.NodeDef.AttrEntry.SerializePartialToString": true, + "tf.compat.v1.NodeDef.AttrEntry.SerializeToString": true, + "tf.compat.v1.NodeDef.AttrEntry.SetInParent": true, + "tf.compat.v1.NodeDef.AttrEntry.UnknownFields": true, + "tf.compat.v1.NodeDef.AttrEntry.WhichOneof": true, + "tf.compat.v1.NodeDef.AttrEntry.__eq__": true, + "tf.compat.v1.NodeDef.AttrEntry.__ge__": true, + "tf.compat.v1.NodeDef.AttrEntry.__gt__": true, + "tf.compat.v1.NodeDef.AttrEntry.__init__": true, + "tf.compat.v1.NodeDef.AttrEntry.__le__": true, + "tf.compat.v1.NodeDef.AttrEntry.__lt__": true, + "tf.compat.v1.NodeDef.AttrEntry.__ne__": true, + "tf.compat.v1.NodeDef.AttrEntry.__new__": true, + "tf.compat.v1.NodeDef.AttrEntry.key": true, + "tf.compat.v1.NodeDef.AttrEntry.value": true, + "tf.compat.v1.NodeDef.ByteSize": true, + "tf.compat.v1.NodeDef.Clear": true, + "tf.compat.v1.NodeDef.ClearExtension": true, + "tf.compat.v1.NodeDef.ClearField": true, + "tf.compat.v1.NodeDef.CopyFrom": true, + "tf.compat.v1.NodeDef.DESCRIPTOR": true, + "tf.compat.v1.NodeDef.DiscardUnknownFields": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo": false, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.ByteSize": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.Clear": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.ClearExtension": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.ClearField": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.CopyFrom": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.DESCRIPTOR": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.DiscardUnknownFields": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.Extensions": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.FindInitializationErrors": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.FromString": true, + 
"tf.compat.v1.NodeDef.ExperimentalDebugInfo.HasExtension": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.HasField": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.IsInitialized": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.ListFields": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.MergeFrom": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.MergeFromString": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.ParseFromString": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.RegisterExtension": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.SerializePartialToString": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.SerializeToString": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.SetInParent": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.UnknownFields": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.WhichOneof": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__eq__": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__ge__": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__gt__": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__init__": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__le__": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__lt__": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__ne__": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.__new__": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.original_func_names": true, + "tf.compat.v1.NodeDef.ExperimentalDebugInfo.original_node_names": true, + "tf.compat.v1.NodeDef.Extensions": true, + "tf.compat.v1.NodeDef.FindInitializationErrors": true, + "tf.compat.v1.NodeDef.FromString": true, + "tf.compat.v1.NodeDef.HasExtension": true, + "tf.compat.v1.NodeDef.HasField": true, + "tf.compat.v1.NodeDef.IsInitialized": true, + "tf.compat.v1.NodeDef.ListFields": true, + "tf.compat.v1.NodeDef.MergeFrom": true, + "tf.compat.v1.NodeDef.MergeFromString": true, + "tf.compat.v1.NodeDef.ParseFromString": true, + "tf.compat.v1.NodeDef.RegisterExtension": true, + "tf.compat.v1.NodeDef.SerializePartialToString": true, + "tf.compat.v1.NodeDef.SerializeToString": true, + "tf.compat.v1.NodeDef.SetInParent": true, + "tf.compat.v1.NodeDef.UnknownFields": true, + "tf.compat.v1.NodeDef.WhichOneof": true, + "tf.compat.v1.NodeDef.__eq__": true, + "tf.compat.v1.NodeDef.__ge__": true, + "tf.compat.v1.NodeDef.__gt__": true, + "tf.compat.v1.NodeDef.__init__": true, + "tf.compat.v1.NodeDef.__le__": true, + "tf.compat.v1.NodeDef.__lt__": true, + "tf.compat.v1.NodeDef.__ne__": true, + "tf.compat.v1.NodeDef.__new__": true, + "tf.compat.v1.NodeDef.attr": true, + "tf.compat.v1.NodeDef.device": true, + "tf.compat.v1.NodeDef.experimental_debug_info": true, + "tf.compat.v1.NodeDef.input": true, + "tf.compat.v1.NodeDef.name": true, + "tf.compat.v1.NodeDef.op": true, + "tf.compat.v1.NotDifferentiable": false, + "tf.compat.v1.OpError": false, + "tf.compat.v1.OpError.__eq__": true, + "tf.compat.v1.OpError.__ge__": true, + "tf.compat.v1.OpError.__gt__": true, + "tf.compat.v1.OpError.__init__": true, + "tf.compat.v1.OpError.__le__": true, + "tf.compat.v1.OpError.__lt__": true, + "tf.compat.v1.OpError.__ne__": true, + "tf.compat.v1.OpError.__new__": true, + "tf.compat.v1.OpError.args": true, + "tf.compat.v1.OpError.error_code": true, + "tf.compat.v1.OpError.message": true, + "tf.compat.v1.OpError.node_def": true, + "tf.compat.v1.OpError.op": true, + "tf.compat.v1.OpError.with_traceback": true, + "tf.compat.v1.Operation": false, + "tf.compat.v1.Operation.__eq__": true, + 
"tf.compat.v1.Operation.__ge__": true, + "tf.compat.v1.Operation.__gt__": true, + "tf.compat.v1.Operation.__init__": true, + "tf.compat.v1.Operation.__le__": true, + "tf.compat.v1.Operation.__lt__": true, + "tf.compat.v1.Operation.__ne__": true, + "tf.compat.v1.Operation.__new__": true, + "tf.compat.v1.Operation.colocation_groups": true, + "tf.compat.v1.Operation.control_inputs": true, + "tf.compat.v1.Operation.device": true, + "tf.compat.v1.Operation.get_attr": true, + "tf.compat.v1.Operation.graph": true, + "tf.compat.v1.Operation.inputs": true, + "tf.compat.v1.Operation.name": true, + "tf.compat.v1.Operation.node_def": true, + "tf.compat.v1.Operation.op_def": true, + "tf.compat.v1.Operation.outputs": true, + "tf.compat.v1.Operation.run": true, + "tf.compat.v1.Operation.traceback": true, + "tf.compat.v1.Operation.type": true, + "tf.compat.v1.Operation.values": true, + "tf.compat.v1.OptimizerOptions": false, + "tf.compat.v1.OptimizerOptions.ByteSize": true, + "tf.compat.v1.OptimizerOptions.Clear": true, + "tf.compat.v1.OptimizerOptions.ClearExtension": true, + "tf.compat.v1.OptimizerOptions.ClearField": true, + "tf.compat.v1.OptimizerOptions.CopyFrom": true, + "tf.compat.v1.OptimizerOptions.DEFAULT": true, + "tf.compat.v1.OptimizerOptions.DESCRIPTOR": true, + "tf.compat.v1.OptimizerOptions.DiscardUnknownFields": true, + "tf.compat.v1.OptimizerOptions.Extensions": true, + "tf.compat.v1.OptimizerOptions.FindInitializationErrors": true, + "tf.compat.v1.OptimizerOptions.FromString": true, + "tf.compat.v1.OptimizerOptions.GlobalJitLevel": true, + "tf.compat.v1.OptimizerOptions.HasExtension": true, + "tf.compat.v1.OptimizerOptions.HasField": true, + "tf.compat.v1.OptimizerOptions.IsInitialized": true, + "tf.compat.v1.OptimizerOptions.L0": true, + "tf.compat.v1.OptimizerOptions.L1": true, + "tf.compat.v1.OptimizerOptions.Level": true, + "tf.compat.v1.OptimizerOptions.ListFields": true, + "tf.compat.v1.OptimizerOptions.MergeFrom": true, + "tf.compat.v1.OptimizerOptions.MergeFromString": true, + "tf.compat.v1.OptimizerOptions.OFF": true, + "tf.compat.v1.OptimizerOptions.ON_1": true, + "tf.compat.v1.OptimizerOptions.ON_2": true, + "tf.compat.v1.OptimizerOptions.ParseFromString": true, + "tf.compat.v1.OptimizerOptions.RegisterExtension": true, + "tf.compat.v1.OptimizerOptions.SerializePartialToString": true, + "tf.compat.v1.OptimizerOptions.SerializeToString": true, + "tf.compat.v1.OptimizerOptions.SetInParent": true, + "tf.compat.v1.OptimizerOptions.UnknownFields": true, + "tf.compat.v1.OptimizerOptions.WhichOneof": true, + "tf.compat.v1.OptimizerOptions.__eq__": true, + "tf.compat.v1.OptimizerOptions.__ge__": true, + "tf.compat.v1.OptimizerOptions.__gt__": true, + "tf.compat.v1.OptimizerOptions.__init__": true, + "tf.compat.v1.OptimizerOptions.__le__": true, + "tf.compat.v1.OptimizerOptions.__lt__": true, + "tf.compat.v1.OptimizerOptions.__ne__": true, + "tf.compat.v1.OptimizerOptions.__new__": true, + "tf.compat.v1.OptimizerOptions.do_common_subexpression_elimination": true, + "tf.compat.v1.OptimizerOptions.do_constant_folding": true, + "tf.compat.v1.OptimizerOptions.do_function_inlining": true, + "tf.compat.v1.OptimizerOptions.global_jit_level": true, + "tf.compat.v1.OptimizerOptions.max_folded_constant_in_bytes": true, + "tf.compat.v1.OptimizerOptions.opt_level": true, + "tf.compat.v1.OptionalSpec": false, + "tf.compat.v1.OptionalSpec.__eq__": true, + "tf.compat.v1.OptionalSpec.__ge__": true, + "tf.compat.v1.OptionalSpec.__gt__": true, + "tf.compat.v1.OptionalSpec.__init__": true, + 
"tf.compat.v1.OptionalSpec.__le__": true, + "tf.compat.v1.OptionalSpec.__lt__": true, + "tf.compat.v1.OptionalSpec.__ne__": true, + "tf.compat.v1.OptionalSpec.__new__": true, + "tf.compat.v1.OptionalSpec.from_value": true, + "tf.compat.v1.OptionalSpec.is_compatible_with": true, + "tf.compat.v1.OptionalSpec.most_specific_compatible_type": true, + "tf.compat.v1.OptionalSpec.value_type": true, + "tf.compat.v1.PaddingFIFOQueue": false, + "tf.compat.v1.PaddingFIFOQueue.__eq__": true, + "tf.compat.v1.PaddingFIFOQueue.__ge__": true, + "tf.compat.v1.PaddingFIFOQueue.__gt__": true, + "tf.compat.v1.PaddingFIFOQueue.__init__": true, + "tf.compat.v1.PaddingFIFOQueue.__le__": true, + "tf.compat.v1.PaddingFIFOQueue.__lt__": true, + "tf.compat.v1.PaddingFIFOQueue.__ne__": true, + "tf.compat.v1.PaddingFIFOQueue.__new__": true, + "tf.compat.v1.PaddingFIFOQueue.close": true, + "tf.compat.v1.PaddingFIFOQueue.dequeue": true, + "tf.compat.v1.PaddingFIFOQueue.dequeue_many": true, + "tf.compat.v1.PaddingFIFOQueue.dequeue_up_to": true, + "tf.compat.v1.PaddingFIFOQueue.dtypes": true, + "tf.compat.v1.PaddingFIFOQueue.enqueue": true, + "tf.compat.v1.PaddingFIFOQueue.enqueue_many": true, + "tf.compat.v1.PaddingFIFOQueue.from_list": true, + "tf.compat.v1.PaddingFIFOQueue.is_closed": true, + "tf.compat.v1.PaddingFIFOQueue.name": true, + "tf.compat.v1.PaddingFIFOQueue.names": true, + "tf.compat.v1.PaddingFIFOQueue.queue_ref": true, + "tf.compat.v1.PaddingFIFOQueue.shapes": true, + "tf.compat.v1.PaddingFIFOQueue.size": true, + "tf.compat.v1.Print": false, + "tf.compat.v1.PriorityQueue": false, + "tf.compat.v1.PriorityQueue.__eq__": true, + "tf.compat.v1.PriorityQueue.__ge__": true, + "tf.compat.v1.PriorityQueue.__gt__": true, + "tf.compat.v1.PriorityQueue.__init__": true, + "tf.compat.v1.PriorityQueue.__le__": true, + "tf.compat.v1.PriorityQueue.__lt__": true, + "tf.compat.v1.PriorityQueue.__ne__": true, + "tf.compat.v1.PriorityQueue.__new__": true, + "tf.compat.v1.PriorityQueue.close": true, + "tf.compat.v1.PriorityQueue.dequeue": true, + "tf.compat.v1.PriorityQueue.dequeue_many": true, + "tf.compat.v1.PriorityQueue.dequeue_up_to": true, + "tf.compat.v1.PriorityQueue.dtypes": true, + "tf.compat.v1.PriorityQueue.enqueue": true, + "tf.compat.v1.PriorityQueue.enqueue_many": true, + "tf.compat.v1.PriorityQueue.from_list": true, + "tf.compat.v1.PriorityQueue.is_closed": true, + "tf.compat.v1.PriorityQueue.name": true, + "tf.compat.v1.PriorityQueue.names": true, + "tf.compat.v1.PriorityQueue.queue_ref": true, + "tf.compat.v1.PriorityQueue.shapes": true, + "tf.compat.v1.PriorityQueue.size": true, + "tf.compat.v1.QUANTIZED_DTYPES": true, + "tf.compat.v1.QueueBase": false, + "tf.compat.v1.QueueBase.__eq__": true, + "tf.compat.v1.QueueBase.__ge__": true, + "tf.compat.v1.QueueBase.__gt__": true, + "tf.compat.v1.QueueBase.__init__": true, + "tf.compat.v1.QueueBase.__le__": true, + "tf.compat.v1.QueueBase.__lt__": true, + "tf.compat.v1.QueueBase.__ne__": true, + "tf.compat.v1.QueueBase.__new__": true, + "tf.compat.v1.QueueBase.close": true, + "tf.compat.v1.QueueBase.dequeue": true, + "tf.compat.v1.QueueBase.dequeue_many": true, + "tf.compat.v1.QueueBase.dequeue_up_to": true, + "tf.compat.v1.QueueBase.dtypes": true, + "tf.compat.v1.QueueBase.enqueue": true, + "tf.compat.v1.QueueBase.enqueue_many": true, + "tf.compat.v1.QueueBase.from_list": true, + "tf.compat.v1.QueueBase.is_closed": true, + "tf.compat.v1.QueueBase.name": true, + "tf.compat.v1.QueueBase.names": true, + "tf.compat.v1.QueueBase.queue_ref": true, + 
"tf.compat.v1.QueueBase.shapes": true, + "tf.compat.v1.QueueBase.size": true, + "tf.compat.v1.RaggedTensor": false, + "tf.compat.v1.RaggedTensor.__abs__": true, + "tf.compat.v1.RaggedTensor.__add__": true, + "tf.compat.v1.RaggedTensor.__and__": true, + "tf.compat.v1.RaggedTensor.__bool__": true, + "tf.compat.v1.RaggedTensor.__div__": true, + "tf.compat.v1.RaggedTensor.__eq__": true, + "tf.compat.v1.RaggedTensor.__floordiv__": true, + "tf.compat.v1.RaggedTensor.__ge__": true, + "tf.compat.v1.RaggedTensor.__getitem__": true, + "tf.compat.v1.RaggedTensor.__gt__": true, + "tf.compat.v1.RaggedTensor.__init__": true, + "tf.compat.v1.RaggedTensor.__invert__": true, + "tf.compat.v1.RaggedTensor.__le__": true, + "tf.compat.v1.RaggedTensor.__lt__": true, + "tf.compat.v1.RaggedTensor.__mod__": true, + "tf.compat.v1.RaggedTensor.__mul__": true, + "tf.compat.v1.RaggedTensor.__ne__": true, + "tf.compat.v1.RaggedTensor.__neg__": true, + "tf.compat.v1.RaggedTensor.__new__": true, + "tf.compat.v1.RaggedTensor.__nonzero__": true, + "tf.compat.v1.RaggedTensor.__or__": true, + "tf.compat.v1.RaggedTensor.__pow__": true, + "tf.compat.v1.RaggedTensor.__radd__": true, + "tf.compat.v1.RaggedTensor.__rand__": true, + "tf.compat.v1.RaggedTensor.__rdiv__": true, + "tf.compat.v1.RaggedTensor.__rfloordiv__": true, + "tf.compat.v1.RaggedTensor.__rmod__": true, + "tf.compat.v1.RaggedTensor.__rmul__": true, + "tf.compat.v1.RaggedTensor.__ror__": true, + "tf.compat.v1.RaggedTensor.__rpow__": true, + "tf.compat.v1.RaggedTensor.__rsub__": true, + "tf.compat.v1.RaggedTensor.__rtruediv__": true, + "tf.compat.v1.RaggedTensor.__rxor__": true, + "tf.compat.v1.RaggedTensor.__sub__": true, + "tf.compat.v1.RaggedTensor.__truediv__": true, + "tf.compat.v1.RaggedTensor.__xor__": true, + "tf.compat.v1.RaggedTensor.bounding_shape": true, + "tf.compat.v1.RaggedTensor.consumers": true, + "tf.compat.v1.RaggedTensor.dtype": true, + "tf.compat.v1.RaggedTensor.flat_values": true, + "tf.compat.v1.RaggedTensor.from_nested_row_lengths": true, + "tf.compat.v1.RaggedTensor.from_nested_row_splits": true, + "tf.compat.v1.RaggedTensor.from_nested_value_rowids": true, + "tf.compat.v1.RaggedTensor.from_row_lengths": true, + "tf.compat.v1.RaggedTensor.from_row_limits": true, + "tf.compat.v1.RaggedTensor.from_row_splits": true, + "tf.compat.v1.RaggedTensor.from_row_starts": true, + "tf.compat.v1.RaggedTensor.from_sparse": true, + "tf.compat.v1.RaggedTensor.from_tensor": true, + "tf.compat.v1.RaggedTensor.from_uniform_row_length": true, + "tf.compat.v1.RaggedTensor.from_value_rowids": true, + "tf.compat.v1.RaggedTensor.merge_dims": true, + "tf.compat.v1.RaggedTensor.nested_row_lengths": true, + "tf.compat.v1.RaggedTensor.nested_row_splits": true, + "tf.compat.v1.RaggedTensor.nested_value_rowids": true, + "tf.compat.v1.RaggedTensor.nrows": true, + "tf.compat.v1.RaggedTensor.numpy": true, + "tf.compat.v1.RaggedTensor.ragged_rank": true, + "tf.compat.v1.RaggedTensor.row_lengths": true, + "tf.compat.v1.RaggedTensor.row_limits": true, + "tf.compat.v1.RaggedTensor.row_splits": true, + "tf.compat.v1.RaggedTensor.row_starts": true, + "tf.compat.v1.RaggedTensor.shape": true, + "tf.compat.v1.RaggedTensor.to_list": true, + "tf.compat.v1.RaggedTensor.to_sparse": true, + "tf.compat.v1.RaggedTensor.to_tensor": true, + "tf.compat.v1.RaggedTensor.uniform_row_length": true, + "tf.compat.v1.RaggedTensor.value_rowids": true, + "tf.compat.v1.RaggedTensor.values": true, + "tf.compat.v1.RaggedTensor.with_flat_values": true, + "tf.compat.v1.RaggedTensor.with_row_splits_dtype": 
true, + "tf.compat.v1.RaggedTensor.with_values": true, + "tf.compat.v1.RaggedTensorSpec": false, + "tf.compat.v1.RaggedTensorSpec.__eq__": true, + "tf.compat.v1.RaggedTensorSpec.__ge__": true, + "tf.compat.v1.RaggedTensorSpec.__gt__": true, + "tf.compat.v1.RaggedTensorSpec.__init__": true, + "tf.compat.v1.RaggedTensorSpec.__le__": true, + "tf.compat.v1.RaggedTensorSpec.__lt__": true, + "tf.compat.v1.RaggedTensorSpec.__ne__": true, + "tf.compat.v1.RaggedTensorSpec.__new__": true, + "tf.compat.v1.RaggedTensorSpec.from_value": true, + "tf.compat.v1.RaggedTensorSpec.is_compatible_with": true, + "tf.compat.v1.RaggedTensorSpec.most_specific_compatible_type": true, + "tf.compat.v1.RaggedTensorSpec.value_type": true, + "tf.compat.v1.RandomShuffleQueue": false, + "tf.compat.v1.RandomShuffleQueue.__eq__": true, + "tf.compat.v1.RandomShuffleQueue.__ge__": true, + "tf.compat.v1.RandomShuffleQueue.__gt__": true, + "tf.compat.v1.RandomShuffleQueue.__init__": true, + "tf.compat.v1.RandomShuffleQueue.__le__": true, + "tf.compat.v1.RandomShuffleQueue.__lt__": true, + "tf.compat.v1.RandomShuffleQueue.__ne__": true, + "tf.compat.v1.RandomShuffleQueue.__new__": true, + "tf.compat.v1.RandomShuffleQueue.close": true, + "tf.compat.v1.RandomShuffleQueue.dequeue": true, + "tf.compat.v1.RandomShuffleQueue.dequeue_many": true, + "tf.compat.v1.RandomShuffleQueue.dequeue_up_to": true, + "tf.compat.v1.RandomShuffleQueue.dtypes": true, + "tf.compat.v1.RandomShuffleQueue.enqueue": true, + "tf.compat.v1.RandomShuffleQueue.enqueue_many": true, + "tf.compat.v1.RandomShuffleQueue.from_list": true, + "tf.compat.v1.RandomShuffleQueue.is_closed": true, + "tf.compat.v1.RandomShuffleQueue.name": true, + "tf.compat.v1.RandomShuffleQueue.names": true, + "tf.compat.v1.RandomShuffleQueue.queue_ref": true, + "tf.compat.v1.RandomShuffleQueue.shapes": true, + "tf.compat.v1.RandomShuffleQueue.size": true, + "tf.compat.v1.ReaderBase": false, + "tf.compat.v1.ReaderBase.__eq__": true, + "tf.compat.v1.ReaderBase.__ge__": true, + "tf.compat.v1.ReaderBase.__gt__": true, + "tf.compat.v1.ReaderBase.__init__": true, + "tf.compat.v1.ReaderBase.__le__": true, + "tf.compat.v1.ReaderBase.__lt__": true, + "tf.compat.v1.ReaderBase.__ne__": true, + "tf.compat.v1.ReaderBase.__new__": true, + "tf.compat.v1.ReaderBase.num_records_produced": true, + "tf.compat.v1.ReaderBase.num_work_units_completed": true, + "tf.compat.v1.ReaderBase.read": true, + "tf.compat.v1.ReaderBase.read_up_to": true, + "tf.compat.v1.ReaderBase.reader_ref": true, + "tf.compat.v1.ReaderBase.reset": true, + "tf.compat.v1.ReaderBase.restore_state": true, + "tf.compat.v1.ReaderBase.serialize_state": true, + "tf.compat.v1.ReaderBase.supports_serialize": true, + "tf.compat.v1.RegisterGradient": false, + "tf.compat.v1.RegisterGradient.__call__": true, + "tf.compat.v1.RegisterGradient.__eq__": true, + "tf.compat.v1.RegisterGradient.__ge__": true, + "tf.compat.v1.RegisterGradient.__gt__": true, + "tf.compat.v1.RegisterGradient.__init__": true, + "tf.compat.v1.RegisterGradient.__le__": true, + "tf.compat.v1.RegisterGradient.__lt__": true, + "tf.compat.v1.RegisterGradient.__ne__": true, + "tf.compat.v1.RegisterGradient.__new__": true, + "tf.compat.v1.RunMetadata": false, + "tf.compat.v1.RunMetadata.ByteSize": true, + "tf.compat.v1.RunMetadata.Clear": true, + "tf.compat.v1.RunMetadata.ClearExtension": true, + "tf.compat.v1.RunMetadata.ClearField": true, + "tf.compat.v1.RunMetadata.CopyFrom": true, + "tf.compat.v1.RunMetadata.DESCRIPTOR": true, + "tf.compat.v1.RunMetadata.DiscardUnknownFields": 
true, + "tf.compat.v1.RunMetadata.Extensions": true, + "tf.compat.v1.RunMetadata.FindInitializationErrors": true, + "tf.compat.v1.RunMetadata.FromString": true, + "tf.compat.v1.RunMetadata.FunctionGraphs": false, + "tf.compat.v1.RunMetadata.FunctionGraphs.ByteSize": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.Clear": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.ClearExtension": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.ClearField": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.CopyFrom": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.DESCRIPTOR": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.DiscardUnknownFields": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.Extensions": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.FindInitializationErrors": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.FromString": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.HasExtension": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.HasField": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.IsInitialized": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.ListFields": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.MergeFrom": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.MergeFromString": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.ParseFromString": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.RegisterExtension": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.SerializePartialToString": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.SerializeToString": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.SetInParent": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.UnknownFields": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.WhichOneof": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.__eq__": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.__ge__": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.__gt__": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.__init__": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.__le__": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.__lt__": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.__ne__": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.__new__": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.partition_graphs": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.post_optimization_graph": true, + "tf.compat.v1.RunMetadata.FunctionGraphs.pre_optimization_graph": true, + "tf.compat.v1.RunMetadata.HasExtension": true, + "tf.compat.v1.RunMetadata.HasField": true, + "tf.compat.v1.RunMetadata.IsInitialized": true, + "tf.compat.v1.RunMetadata.ListFields": true, + "tf.compat.v1.RunMetadata.MergeFrom": true, + "tf.compat.v1.RunMetadata.MergeFromString": true, + "tf.compat.v1.RunMetadata.ParseFromString": true, + "tf.compat.v1.RunMetadata.RegisterExtension": true, + "tf.compat.v1.RunMetadata.SerializePartialToString": true, + "tf.compat.v1.RunMetadata.SerializeToString": true, + "tf.compat.v1.RunMetadata.SetInParent": true, + "tf.compat.v1.RunMetadata.UnknownFields": true, + "tf.compat.v1.RunMetadata.WhichOneof": true, + "tf.compat.v1.RunMetadata.__eq__": true, + "tf.compat.v1.RunMetadata.__ge__": true, + "tf.compat.v1.RunMetadata.__gt__": true, + "tf.compat.v1.RunMetadata.__init__": true, + "tf.compat.v1.RunMetadata.__le__": true, + "tf.compat.v1.RunMetadata.__lt__": true, + "tf.compat.v1.RunMetadata.__ne__": true, + "tf.compat.v1.RunMetadata.__new__": true, + "tf.compat.v1.RunMetadata.cost_graph": true, + "tf.compat.v1.RunMetadata.function_graphs": true, + "tf.compat.v1.RunMetadata.partition_graphs": true, + 
"tf.compat.v1.RunMetadata.step_stats": true, + "tf.compat.v1.RunOptions": false, + "tf.compat.v1.RunOptions.ByteSize": true, + "tf.compat.v1.RunOptions.Clear": true, + "tf.compat.v1.RunOptions.ClearExtension": true, + "tf.compat.v1.RunOptions.ClearField": true, + "tf.compat.v1.RunOptions.CopyFrom": true, + "tf.compat.v1.RunOptions.DESCRIPTOR": true, + "tf.compat.v1.RunOptions.DiscardUnknownFields": true, + "tf.compat.v1.RunOptions.Experimental": false, + "tf.compat.v1.RunOptions.Experimental.ByteSize": true, + "tf.compat.v1.RunOptions.Experimental.Clear": true, + "tf.compat.v1.RunOptions.Experimental.ClearExtension": true, + "tf.compat.v1.RunOptions.Experimental.ClearField": true, + "tf.compat.v1.RunOptions.Experimental.CopyFrom": true, + "tf.compat.v1.RunOptions.Experimental.DESCRIPTOR": true, + "tf.compat.v1.RunOptions.Experimental.DiscardUnknownFields": true, + "tf.compat.v1.RunOptions.Experimental.Extensions": true, + "tf.compat.v1.RunOptions.Experimental.FindInitializationErrors": true, + "tf.compat.v1.RunOptions.Experimental.FromString": true, + "tf.compat.v1.RunOptions.Experimental.HasExtension": true, + "tf.compat.v1.RunOptions.Experimental.HasField": true, + "tf.compat.v1.RunOptions.Experimental.IsInitialized": true, + "tf.compat.v1.RunOptions.Experimental.ListFields": true, + "tf.compat.v1.RunOptions.Experimental.MergeFrom": true, + "tf.compat.v1.RunOptions.Experimental.MergeFromString": true, + "tf.compat.v1.RunOptions.Experimental.ParseFromString": true, + "tf.compat.v1.RunOptions.Experimental.RegisterExtension": true, + "tf.compat.v1.RunOptions.Experimental.SerializePartialToString": true, + "tf.compat.v1.RunOptions.Experimental.SerializeToString": true, + "tf.compat.v1.RunOptions.Experimental.SetInParent": true, + "tf.compat.v1.RunOptions.Experimental.UnknownFields": true, + "tf.compat.v1.RunOptions.Experimental.WhichOneof": true, + "tf.compat.v1.RunOptions.Experimental.__eq__": true, + "tf.compat.v1.RunOptions.Experimental.__ge__": true, + "tf.compat.v1.RunOptions.Experimental.__gt__": true, + "tf.compat.v1.RunOptions.Experimental.__init__": true, + "tf.compat.v1.RunOptions.Experimental.__le__": true, + "tf.compat.v1.RunOptions.Experimental.__lt__": true, + "tf.compat.v1.RunOptions.Experimental.__ne__": true, + "tf.compat.v1.RunOptions.Experimental.__new__": true, + "tf.compat.v1.RunOptions.Experimental.collective_graph_key": true, + "tf.compat.v1.RunOptions.Experimental.use_run_handler_pool": true, + "tf.compat.v1.RunOptions.Extensions": true, + "tf.compat.v1.RunOptions.FULL_TRACE": true, + "tf.compat.v1.RunOptions.FindInitializationErrors": true, + "tf.compat.v1.RunOptions.FromString": true, + "tf.compat.v1.RunOptions.HARDWARE_TRACE": true, + "tf.compat.v1.RunOptions.HasExtension": true, + "tf.compat.v1.RunOptions.HasField": true, + "tf.compat.v1.RunOptions.IsInitialized": true, + "tf.compat.v1.RunOptions.ListFields": true, + "tf.compat.v1.RunOptions.MergeFrom": true, + "tf.compat.v1.RunOptions.MergeFromString": true, + "tf.compat.v1.RunOptions.NO_TRACE": true, + "tf.compat.v1.RunOptions.ParseFromString": true, + "tf.compat.v1.RunOptions.RegisterExtension": true, + "tf.compat.v1.RunOptions.SOFTWARE_TRACE": true, + "tf.compat.v1.RunOptions.SerializePartialToString": true, + "tf.compat.v1.RunOptions.SerializeToString": true, + "tf.compat.v1.RunOptions.SetInParent": true, + "tf.compat.v1.RunOptions.TraceLevel": true, + "tf.compat.v1.RunOptions.UnknownFields": true, + "tf.compat.v1.RunOptions.WhichOneof": true, + "tf.compat.v1.RunOptions.__eq__": true, + 
"tf.compat.v1.RunOptions.__ge__": true, + "tf.compat.v1.RunOptions.__gt__": true, + "tf.compat.v1.RunOptions.__init__": true, + "tf.compat.v1.RunOptions.__le__": true, + "tf.compat.v1.RunOptions.__lt__": true, + "tf.compat.v1.RunOptions.__ne__": true, + "tf.compat.v1.RunOptions.__new__": true, + "tf.compat.v1.RunOptions.debug_options": true, + "tf.compat.v1.RunOptions.experimental": true, + "tf.compat.v1.RunOptions.inter_op_thread_pool": true, + "tf.compat.v1.RunOptions.output_partition_graphs": true, + "tf.compat.v1.RunOptions.report_tensor_allocations_upon_oom": true, + "tf.compat.v1.RunOptions.timeout_in_ms": true, + "tf.compat.v1.RunOptions.trace_level": true, + "tf.compat.v1.Session": false, + "tf.compat.v1.Session.__enter__": true, + "tf.compat.v1.Session.__eq__": true, + "tf.compat.v1.Session.__exit__": true, + "tf.compat.v1.Session.__ge__": true, + "tf.compat.v1.Session.__gt__": true, + "tf.compat.v1.Session.__init__": true, + "tf.compat.v1.Session.__le__": true, + "tf.compat.v1.Session.__lt__": true, + "tf.compat.v1.Session.__ne__": true, + "tf.compat.v1.Session.__new__": true, + "tf.compat.v1.Session.as_default": true, + "tf.compat.v1.Session.close": true, + "tf.compat.v1.Session.graph": true, + "tf.compat.v1.Session.graph_def": true, + "tf.compat.v1.Session.list_devices": true, + "tf.compat.v1.Session.make_callable": true, + "tf.compat.v1.Session.partial_run": true, + "tf.compat.v1.Session.partial_run_setup": true, + "tf.compat.v1.Session.reset": true, + "tf.compat.v1.Session.run": true, + "tf.compat.v1.Session.sess_str": true, + "tf.compat.v1.SessionLog": false, + "tf.compat.v1.SessionLog.ByteSize": true, + "tf.compat.v1.SessionLog.CHECKPOINT": true, + "tf.compat.v1.SessionLog.Clear": true, + "tf.compat.v1.SessionLog.ClearExtension": true, + "tf.compat.v1.SessionLog.ClearField": true, + "tf.compat.v1.SessionLog.CopyFrom": true, + "tf.compat.v1.SessionLog.DESCRIPTOR": true, + "tf.compat.v1.SessionLog.DiscardUnknownFields": true, + "tf.compat.v1.SessionLog.Extensions": true, + "tf.compat.v1.SessionLog.FindInitializationErrors": true, + "tf.compat.v1.SessionLog.FromString": true, + "tf.compat.v1.SessionLog.HasExtension": true, + "tf.compat.v1.SessionLog.HasField": true, + "tf.compat.v1.SessionLog.IsInitialized": true, + "tf.compat.v1.SessionLog.ListFields": true, + "tf.compat.v1.SessionLog.MergeFrom": true, + "tf.compat.v1.SessionLog.MergeFromString": true, + "tf.compat.v1.SessionLog.ParseFromString": true, + "tf.compat.v1.SessionLog.RegisterExtension": true, + "tf.compat.v1.SessionLog.START": true, + "tf.compat.v1.SessionLog.STATUS_UNSPECIFIED": true, + "tf.compat.v1.SessionLog.STOP": true, + "tf.compat.v1.SessionLog.SerializePartialToString": true, + "tf.compat.v1.SessionLog.SerializeToString": true, + "tf.compat.v1.SessionLog.SessionStatus": true, + "tf.compat.v1.SessionLog.SetInParent": true, + "tf.compat.v1.SessionLog.UnknownFields": true, + "tf.compat.v1.SessionLog.WhichOneof": true, + "tf.compat.v1.SessionLog.__eq__": true, + "tf.compat.v1.SessionLog.__ge__": true, + "tf.compat.v1.SessionLog.__gt__": true, + "tf.compat.v1.SessionLog.__init__": true, + "tf.compat.v1.SessionLog.__le__": true, + "tf.compat.v1.SessionLog.__lt__": true, + "tf.compat.v1.SessionLog.__ne__": true, + "tf.compat.v1.SessionLog.__new__": true, + "tf.compat.v1.SessionLog.checkpoint_path": true, + "tf.compat.v1.SessionLog.msg": true, + "tf.compat.v1.SessionLog.status": true, + "tf.compat.v1.SparseConditionalAccumulator": false, + "tf.compat.v1.SparseConditionalAccumulator.__eq__": true, + 
"tf.compat.v1.SparseConditionalAccumulator.__ge__": true, + "tf.compat.v1.SparseConditionalAccumulator.__gt__": true, + "tf.compat.v1.SparseConditionalAccumulator.__init__": true, + "tf.compat.v1.SparseConditionalAccumulator.__le__": true, + "tf.compat.v1.SparseConditionalAccumulator.__lt__": true, + "tf.compat.v1.SparseConditionalAccumulator.__ne__": true, + "tf.compat.v1.SparseConditionalAccumulator.__new__": true, + "tf.compat.v1.SparseConditionalAccumulator.accumulator_ref": true, + "tf.compat.v1.SparseConditionalAccumulator.apply_grad": true, + "tf.compat.v1.SparseConditionalAccumulator.apply_indexed_slices_grad": true, + "tf.compat.v1.SparseConditionalAccumulator.dtype": true, + "tf.compat.v1.SparseConditionalAccumulator.name": true, + "tf.compat.v1.SparseConditionalAccumulator.num_accumulated": true, + "tf.compat.v1.SparseConditionalAccumulator.set_global_step": true, + "tf.compat.v1.SparseConditionalAccumulator.take_grad": true, + "tf.compat.v1.SparseConditionalAccumulator.take_indexed_slices_grad": true, + "tf.compat.v1.SparseFeature": false, + "tf.compat.v1.SparseFeature.__add__": true, + "tf.compat.v1.SparseFeature.__contains__": true, + "tf.compat.v1.SparseFeature.__eq__": true, + "tf.compat.v1.SparseFeature.__ge__": true, + "tf.compat.v1.SparseFeature.__getitem__": true, + "tf.compat.v1.SparseFeature.__gt__": true, + "tf.compat.v1.SparseFeature.__init__": true, + "tf.compat.v1.SparseFeature.__iter__": true, + "tf.compat.v1.SparseFeature.__le__": true, + "tf.compat.v1.SparseFeature.__len__": true, + "tf.compat.v1.SparseFeature.__lt__": true, + "tf.compat.v1.SparseFeature.__mul__": true, + "tf.compat.v1.SparseFeature.__ne__": true, + "tf.compat.v1.SparseFeature.__new__": true, + "tf.compat.v1.SparseFeature.__rmul__": true, + "tf.compat.v1.SparseFeature.already_sorted": true, + "tf.compat.v1.SparseFeature.count": true, + "tf.compat.v1.SparseFeature.dtype": true, + "tf.compat.v1.SparseFeature.index": true, + "tf.compat.v1.SparseFeature.index_key": true, + "tf.compat.v1.SparseFeature.size": true, + "tf.compat.v1.SparseFeature.value_key": true, + "tf.compat.v1.SparseTensor": false, + "tf.compat.v1.SparseTensor.__div__": true, + "tf.compat.v1.SparseTensor.__eq__": true, + "tf.compat.v1.SparseTensor.__ge__": true, + "tf.compat.v1.SparseTensor.__gt__": true, + "tf.compat.v1.SparseTensor.__init__": true, + "tf.compat.v1.SparseTensor.__le__": true, + "tf.compat.v1.SparseTensor.__lt__": true, + "tf.compat.v1.SparseTensor.__mul__": true, + "tf.compat.v1.SparseTensor.__ne__": true, + "tf.compat.v1.SparseTensor.__new__": true, + "tf.compat.v1.SparseTensor.__truediv__": true, + "tf.compat.v1.SparseTensor.consumers": true, + "tf.compat.v1.SparseTensor.dense_shape": true, + "tf.compat.v1.SparseTensor.dtype": true, + "tf.compat.v1.SparseTensor.eval": true, + "tf.compat.v1.SparseTensor.from_value": true, + "tf.compat.v1.SparseTensor.get_shape": true, + "tf.compat.v1.SparseTensor.graph": true, + "tf.compat.v1.SparseTensor.indices": true, + "tf.compat.v1.SparseTensor.op": true, + "tf.compat.v1.SparseTensor.shape": true, + "tf.compat.v1.SparseTensor.values": true, + "tf.compat.v1.SparseTensorSpec": false, + "tf.compat.v1.SparseTensorSpec.__eq__": true, + "tf.compat.v1.SparseTensorSpec.__ge__": true, + "tf.compat.v1.SparseTensorSpec.__gt__": true, + "tf.compat.v1.SparseTensorSpec.__init__": true, + "tf.compat.v1.SparseTensorSpec.__le__": true, + "tf.compat.v1.SparseTensorSpec.__lt__": true, + "tf.compat.v1.SparseTensorSpec.__ne__": true, + "tf.compat.v1.SparseTensorSpec.__new__": true, + 
"tf.compat.v1.SparseTensorSpec.dtype": true, + "tf.compat.v1.SparseTensorSpec.from_value": true, + "tf.compat.v1.SparseTensorSpec.is_compatible_with": true, + "tf.compat.v1.SparseTensorSpec.most_specific_compatible_type": true, + "tf.compat.v1.SparseTensorSpec.shape": true, + "tf.compat.v1.SparseTensorSpec.value_type": true, + "tf.compat.v1.SparseTensorValue": false, + "tf.compat.v1.SparseTensorValue.__add__": true, + "tf.compat.v1.SparseTensorValue.__contains__": true, + "tf.compat.v1.SparseTensorValue.__eq__": true, + "tf.compat.v1.SparseTensorValue.__ge__": true, + "tf.compat.v1.SparseTensorValue.__getitem__": true, + "tf.compat.v1.SparseTensorValue.__gt__": true, + "tf.compat.v1.SparseTensorValue.__init__": true, + "tf.compat.v1.SparseTensorValue.__iter__": true, + "tf.compat.v1.SparseTensorValue.__le__": true, + "tf.compat.v1.SparseTensorValue.__len__": true, + "tf.compat.v1.SparseTensorValue.__lt__": true, + "tf.compat.v1.SparseTensorValue.__mul__": true, + "tf.compat.v1.SparseTensorValue.__ne__": true, + "tf.compat.v1.SparseTensorValue.__new__": true, + "tf.compat.v1.SparseTensorValue.__rmul__": true, + "tf.compat.v1.SparseTensorValue.count": true, + "tf.compat.v1.SparseTensorValue.dense_shape": true, + "tf.compat.v1.SparseTensorValue.index": true, + "tf.compat.v1.SparseTensorValue.indices": true, + "tf.compat.v1.SparseTensorValue.values": true, + "tf.compat.v1.Summary": false, + "tf.compat.v1.Summary.Audio": false, + "tf.compat.v1.Summary.Audio.ByteSize": true, + "tf.compat.v1.Summary.Audio.Clear": true, + "tf.compat.v1.Summary.Audio.ClearExtension": true, + "tf.compat.v1.Summary.Audio.ClearField": true, + "tf.compat.v1.Summary.Audio.CopyFrom": true, + "tf.compat.v1.Summary.Audio.DESCRIPTOR": true, + "tf.compat.v1.Summary.Audio.DiscardUnknownFields": true, + "tf.compat.v1.Summary.Audio.Extensions": true, + "tf.compat.v1.Summary.Audio.FindInitializationErrors": true, + "tf.compat.v1.Summary.Audio.FromString": true, + "tf.compat.v1.Summary.Audio.HasExtension": true, + "tf.compat.v1.Summary.Audio.HasField": true, + "tf.compat.v1.Summary.Audio.IsInitialized": true, + "tf.compat.v1.Summary.Audio.ListFields": true, + "tf.compat.v1.Summary.Audio.MergeFrom": true, + "tf.compat.v1.Summary.Audio.MergeFromString": true, + "tf.compat.v1.Summary.Audio.ParseFromString": true, + "tf.compat.v1.Summary.Audio.RegisterExtension": true, + "tf.compat.v1.Summary.Audio.SerializePartialToString": true, + "tf.compat.v1.Summary.Audio.SerializeToString": true, + "tf.compat.v1.Summary.Audio.SetInParent": true, + "tf.compat.v1.Summary.Audio.UnknownFields": true, + "tf.compat.v1.Summary.Audio.WhichOneof": true, + "tf.compat.v1.Summary.Audio.__eq__": true, + "tf.compat.v1.Summary.Audio.__ge__": true, + "tf.compat.v1.Summary.Audio.__gt__": true, + "tf.compat.v1.Summary.Audio.__init__": true, + "tf.compat.v1.Summary.Audio.__le__": true, + "tf.compat.v1.Summary.Audio.__lt__": true, + "tf.compat.v1.Summary.Audio.__ne__": true, + "tf.compat.v1.Summary.Audio.__new__": true, + "tf.compat.v1.Summary.Audio.content_type": true, + "tf.compat.v1.Summary.Audio.encoded_audio_string": true, + "tf.compat.v1.Summary.Audio.length_frames": true, + "tf.compat.v1.Summary.Audio.num_channels": true, + "tf.compat.v1.Summary.Audio.sample_rate": true, + "tf.compat.v1.Summary.ByteSize": true, + "tf.compat.v1.Summary.Clear": true, + "tf.compat.v1.Summary.ClearExtension": true, + "tf.compat.v1.Summary.ClearField": true, + "tf.compat.v1.Summary.CopyFrom": true, + "tf.compat.v1.Summary.DESCRIPTOR": true, + 
"tf.compat.v1.Summary.DiscardUnknownFields": true, + "tf.compat.v1.Summary.Extensions": true, + "tf.compat.v1.Summary.FindInitializationErrors": true, + "tf.compat.v1.Summary.FromString": true, + "tf.compat.v1.Summary.HasExtension": true, + "tf.compat.v1.Summary.HasField": true, + "tf.compat.v1.Summary.Image": false, + "tf.compat.v1.Summary.Image.ByteSize": true, + "tf.compat.v1.Summary.Image.Clear": true, + "tf.compat.v1.Summary.Image.ClearExtension": true, + "tf.compat.v1.Summary.Image.ClearField": true, + "tf.compat.v1.Summary.Image.CopyFrom": true, + "tf.compat.v1.Summary.Image.DESCRIPTOR": true, + "tf.compat.v1.Summary.Image.DiscardUnknownFields": true, + "tf.compat.v1.Summary.Image.Extensions": true, + "tf.compat.v1.Summary.Image.FindInitializationErrors": true, + "tf.compat.v1.Summary.Image.FromString": true, + "tf.compat.v1.Summary.Image.HasExtension": true, + "tf.compat.v1.Summary.Image.HasField": true, + "tf.compat.v1.Summary.Image.IsInitialized": true, + "tf.compat.v1.Summary.Image.ListFields": true, + "tf.compat.v1.Summary.Image.MergeFrom": true, + "tf.compat.v1.Summary.Image.MergeFromString": true, + "tf.compat.v1.Summary.Image.ParseFromString": true, + "tf.compat.v1.Summary.Image.RegisterExtension": true, + "tf.compat.v1.Summary.Image.SerializePartialToString": true, + "tf.compat.v1.Summary.Image.SerializeToString": true, + "tf.compat.v1.Summary.Image.SetInParent": true, + "tf.compat.v1.Summary.Image.UnknownFields": true, + "tf.compat.v1.Summary.Image.WhichOneof": true, + "tf.compat.v1.Summary.Image.__eq__": true, + "tf.compat.v1.Summary.Image.__ge__": true, + "tf.compat.v1.Summary.Image.__gt__": true, + "tf.compat.v1.Summary.Image.__init__": true, + "tf.compat.v1.Summary.Image.__le__": true, + "tf.compat.v1.Summary.Image.__lt__": true, + "tf.compat.v1.Summary.Image.__ne__": true, + "tf.compat.v1.Summary.Image.__new__": true, + "tf.compat.v1.Summary.Image.colorspace": true, + "tf.compat.v1.Summary.Image.encoded_image_string": true, + "tf.compat.v1.Summary.Image.height": true, + "tf.compat.v1.Summary.Image.width": true, + "tf.compat.v1.Summary.IsInitialized": true, + "tf.compat.v1.Summary.ListFields": true, + "tf.compat.v1.Summary.MergeFrom": true, + "tf.compat.v1.Summary.MergeFromString": true, + "tf.compat.v1.Summary.ParseFromString": true, + "tf.compat.v1.Summary.RegisterExtension": true, + "tf.compat.v1.Summary.SerializePartialToString": true, + "tf.compat.v1.Summary.SerializeToString": true, + "tf.compat.v1.Summary.SetInParent": true, + "tf.compat.v1.Summary.UnknownFields": true, + "tf.compat.v1.Summary.Value": false, + "tf.compat.v1.Summary.Value.ByteSize": true, + "tf.compat.v1.Summary.Value.Clear": true, + "tf.compat.v1.Summary.Value.ClearExtension": true, + "tf.compat.v1.Summary.Value.ClearField": true, + "tf.compat.v1.Summary.Value.CopyFrom": true, + "tf.compat.v1.Summary.Value.DESCRIPTOR": true, + "tf.compat.v1.Summary.Value.DiscardUnknownFields": true, + "tf.compat.v1.Summary.Value.Extensions": true, + "tf.compat.v1.Summary.Value.FindInitializationErrors": true, + "tf.compat.v1.Summary.Value.FromString": true, + "tf.compat.v1.Summary.Value.HasExtension": true, + "tf.compat.v1.Summary.Value.HasField": true, + "tf.compat.v1.Summary.Value.IsInitialized": true, + "tf.compat.v1.Summary.Value.ListFields": true, + "tf.compat.v1.Summary.Value.MergeFrom": true, + "tf.compat.v1.Summary.Value.MergeFromString": true, + "tf.compat.v1.Summary.Value.ParseFromString": true, + "tf.compat.v1.Summary.Value.RegisterExtension": true, + 
"tf.compat.v1.Summary.Value.SerializePartialToString": true, + "tf.compat.v1.Summary.Value.SerializeToString": true, + "tf.compat.v1.Summary.Value.SetInParent": true, + "tf.compat.v1.Summary.Value.UnknownFields": true, + "tf.compat.v1.Summary.Value.WhichOneof": true, + "tf.compat.v1.Summary.Value.__eq__": true, + "tf.compat.v1.Summary.Value.__ge__": true, + "tf.compat.v1.Summary.Value.__gt__": true, + "tf.compat.v1.Summary.Value.__init__": true, + "tf.compat.v1.Summary.Value.__le__": true, + "tf.compat.v1.Summary.Value.__lt__": true, + "tf.compat.v1.Summary.Value.__ne__": true, + "tf.compat.v1.Summary.Value.__new__": true, + "tf.compat.v1.Summary.Value.audio": true, + "tf.compat.v1.Summary.Value.histo": true, + "tf.compat.v1.Summary.Value.image": true, + "tf.compat.v1.Summary.Value.metadata": true, + "tf.compat.v1.Summary.Value.node_name": true, + "tf.compat.v1.Summary.Value.obsolete_old_style_histogram": true, + "tf.compat.v1.Summary.Value.simple_value": true, + "tf.compat.v1.Summary.Value.tag": true, + "tf.compat.v1.Summary.Value.tensor": true, + "tf.compat.v1.Summary.WhichOneof": true, + "tf.compat.v1.Summary.__eq__": true, + "tf.compat.v1.Summary.__ge__": true, + "tf.compat.v1.Summary.__gt__": true, + "tf.compat.v1.Summary.__init__": true, + "tf.compat.v1.Summary.__le__": true, + "tf.compat.v1.Summary.__lt__": true, + "tf.compat.v1.Summary.__ne__": true, + "tf.compat.v1.Summary.__new__": true, + "tf.compat.v1.Summary.value": true, + "tf.compat.v1.SummaryMetadata": false, + "tf.compat.v1.SummaryMetadata.ByteSize": true, + "tf.compat.v1.SummaryMetadata.Clear": true, + "tf.compat.v1.SummaryMetadata.ClearExtension": true, + "tf.compat.v1.SummaryMetadata.ClearField": true, + "tf.compat.v1.SummaryMetadata.CopyFrom": true, + "tf.compat.v1.SummaryMetadata.DESCRIPTOR": true, + "tf.compat.v1.SummaryMetadata.DiscardUnknownFields": true, + "tf.compat.v1.SummaryMetadata.Extensions": true, + "tf.compat.v1.SummaryMetadata.FindInitializationErrors": true, + "tf.compat.v1.SummaryMetadata.FromString": true, + "tf.compat.v1.SummaryMetadata.HasExtension": true, + "tf.compat.v1.SummaryMetadata.HasField": true, + "tf.compat.v1.SummaryMetadata.IsInitialized": true, + "tf.compat.v1.SummaryMetadata.ListFields": true, + "tf.compat.v1.SummaryMetadata.MergeFrom": true, + "tf.compat.v1.SummaryMetadata.MergeFromString": true, + "tf.compat.v1.SummaryMetadata.ParseFromString": true, + "tf.compat.v1.SummaryMetadata.PluginData": false, + "tf.compat.v1.SummaryMetadata.PluginData.ByteSize": true, + "tf.compat.v1.SummaryMetadata.PluginData.Clear": true, + "tf.compat.v1.SummaryMetadata.PluginData.ClearExtension": true, + "tf.compat.v1.SummaryMetadata.PluginData.ClearField": true, + "tf.compat.v1.SummaryMetadata.PluginData.CopyFrom": true, + "tf.compat.v1.SummaryMetadata.PluginData.DESCRIPTOR": true, + "tf.compat.v1.SummaryMetadata.PluginData.DiscardUnknownFields": true, + "tf.compat.v1.SummaryMetadata.PluginData.Extensions": true, + "tf.compat.v1.SummaryMetadata.PluginData.FindInitializationErrors": true, + "tf.compat.v1.SummaryMetadata.PluginData.FromString": true, + "tf.compat.v1.SummaryMetadata.PluginData.HasExtension": true, + "tf.compat.v1.SummaryMetadata.PluginData.HasField": true, + "tf.compat.v1.SummaryMetadata.PluginData.IsInitialized": true, + "tf.compat.v1.SummaryMetadata.PluginData.ListFields": true, + "tf.compat.v1.SummaryMetadata.PluginData.MergeFrom": true, + "tf.compat.v1.SummaryMetadata.PluginData.MergeFromString": true, + "tf.compat.v1.SummaryMetadata.PluginData.ParseFromString": true, + 
"tf.compat.v1.SummaryMetadata.PluginData.RegisterExtension": true, + "tf.compat.v1.SummaryMetadata.PluginData.SerializePartialToString": true, + "tf.compat.v1.SummaryMetadata.PluginData.SerializeToString": true, + "tf.compat.v1.SummaryMetadata.PluginData.SetInParent": true, + "tf.compat.v1.SummaryMetadata.PluginData.UnknownFields": true, + "tf.compat.v1.SummaryMetadata.PluginData.WhichOneof": true, + "tf.compat.v1.SummaryMetadata.PluginData.__eq__": true, + "tf.compat.v1.SummaryMetadata.PluginData.__ge__": true, + "tf.compat.v1.SummaryMetadata.PluginData.__gt__": true, + "tf.compat.v1.SummaryMetadata.PluginData.__init__": true, + "tf.compat.v1.SummaryMetadata.PluginData.__le__": true, + "tf.compat.v1.SummaryMetadata.PluginData.__lt__": true, + "tf.compat.v1.SummaryMetadata.PluginData.__ne__": true, + "tf.compat.v1.SummaryMetadata.PluginData.__new__": true, + "tf.compat.v1.SummaryMetadata.PluginData.content": true, + "tf.compat.v1.SummaryMetadata.PluginData.plugin_name": true, + "tf.compat.v1.SummaryMetadata.RegisterExtension": true, + "tf.compat.v1.SummaryMetadata.SerializePartialToString": true, + "tf.compat.v1.SummaryMetadata.SerializeToString": true, + "tf.compat.v1.SummaryMetadata.SetInParent": true, + "tf.compat.v1.SummaryMetadata.UnknownFields": true, + "tf.compat.v1.SummaryMetadata.WhichOneof": true, + "tf.compat.v1.SummaryMetadata.__eq__": true, + "tf.compat.v1.SummaryMetadata.__ge__": true, + "tf.compat.v1.SummaryMetadata.__gt__": true, + "tf.compat.v1.SummaryMetadata.__init__": true, + "tf.compat.v1.SummaryMetadata.__le__": true, + "tf.compat.v1.SummaryMetadata.__lt__": true, + "tf.compat.v1.SummaryMetadata.__ne__": true, + "tf.compat.v1.SummaryMetadata.__new__": true, + "tf.compat.v1.SummaryMetadata.data_class": true, + "tf.compat.v1.SummaryMetadata.display_name": true, + "tf.compat.v1.SummaryMetadata.plugin_data": true, + "tf.compat.v1.SummaryMetadata.summary_description": true, + "tf.compat.v1.TFRecordReader": false, + "tf.compat.v1.TFRecordReader.__eq__": true, + "tf.compat.v1.TFRecordReader.__ge__": true, + "tf.compat.v1.TFRecordReader.__gt__": true, + "tf.compat.v1.TFRecordReader.__init__": true, + "tf.compat.v1.TFRecordReader.__le__": true, + "tf.compat.v1.TFRecordReader.__lt__": true, + "tf.compat.v1.TFRecordReader.__ne__": true, + "tf.compat.v1.TFRecordReader.__new__": true, + "tf.compat.v1.TFRecordReader.num_records_produced": true, + "tf.compat.v1.TFRecordReader.num_work_units_completed": true, + "tf.compat.v1.TFRecordReader.read": true, + "tf.compat.v1.TFRecordReader.read_up_to": true, + "tf.compat.v1.TFRecordReader.reader_ref": true, + "tf.compat.v1.TFRecordReader.reset": true, + "tf.compat.v1.TFRecordReader.restore_state": true, + "tf.compat.v1.TFRecordReader.serialize_state": true, + "tf.compat.v1.TFRecordReader.supports_serialize": true, + "tf.compat.v1.Tensor": false, + "tf.compat.v1.Tensor.OVERLOADABLE_OPERATORS": true, + "tf.compat.v1.Tensor.__abs__": true, + "tf.compat.v1.Tensor.__add__": true, + "tf.compat.v1.Tensor.__and__": true, + "tf.compat.v1.Tensor.__bool__": true, + "tf.compat.v1.Tensor.__div__": true, + "tf.compat.v1.Tensor.__eq__": true, + "tf.compat.v1.Tensor.__floordiv__": true, + "tf.compat.v1.Tensor.__ge__": true, + "tf.compat.v1.Tensor.__getitem__": true, + "tf.compat.v1.Tensor.__gt__": true, + "tf.compat.v1.Tensor.__init__": true, + "tf.compat.v1.Tensor.__invert__": true, + "tf.compat.v1.Tensor.__iter__": true, + "tf.compat.v1.Tensor.__le__": true, + "tf.compat.v1.Tensor.__len__": true, + "tf.compat.v1.Tensor.__lt__": true, + 
"tf.compat.v1.Tensor.__matmul__": true, + "tf.compat.v1.Tensor.__mod__": true, + "tf.compat.v1.Tensor.__mul__": true, + "tf.compat.v1.Tensor.__ne__": true, + "tf.compat.v1.Tensor.__neg__": true, + "tf.compat.v1.Tensor.__new__": true, + "tf.compat.v1.Tensor.__nonzero__": true, + "tf.compat.v1.Tensor.__or__": true, + "tf.compat.v1.Tensor.__pow__": true, + "tf.compat.v1.Tensor.__radd__": true, + "tf.compat.v1.Tensor.__rand__": true, + "tf.compat.v1.Tensor.__rdiv__": true, + "tf.compat.v1.Tensor.__rfloordiv__": true, + "tf.compat.v1.Tensor.__rmatmul__": true, + "tf.compat.v1.Tensor.__rmod__": true, + "tf.compat.v1.Tensor.__rmul__": true, + "tf.compat.v1.Tensor.__ror__": true, + "tf.compat.v1.Tensor.__rpow__": true, + "tf.compat.v1.Tensor.__rsub__": true, + "tf.compat.v1.Tensor.__rtruediv__": true, + "tf.compat.v1.Tensor.__rxor__": true, + "tf.compat.v1.Tensor.__sub__": true, + "tf.compat.v1.Tensor.__truediv__": true, + "tf.compat.v1.Tensor.__xor__": true, + "tf.compat.v1.Tensor.consumers": true, + "tf.compat.v1.Tensor.device": true, + "tf.compat.v1.Tensor.dtype": true, + "tf.compat.v1.Tensor.eval": true, + "tf.compat.v1.Tensor.experimental_ref": true, + "tf.compat.v1.Tensor.get_shape": true, + "tf.compat.v1.Tensor.graph": true, + "tf.compat.v1.Tensor.name": true, + "tf.compat.v1.Tensor.op": true, + "tf.compat.v1.Tensor.ref": true, + "tf.compat.v1.Tensor.set_shape": true, + "tf.compat.v1.Tensor.shape": true, + "tf.compat.v1.Tensor.value_index": true, + "tf.compat.v1.TensorArray": false, + "tf.compat.v1.TensorArray.__eq__": true, + "tf.compat.v1.TensorArray.__ge__": true, + "tf.compat.v1.TensorArray.__gt__": true, + "tf.compat.v1.TensorArray.__init__": true, + "tf.compat.v1.TensorArray.__le__": true, + "tf.compat.v1.TensorArray.__lt__": true, + "tf.compat.v1.TensorArray.__ne__": true, + "tf.compat.v1.TensorArray.__new__": true, + "tf.compat.v1.TensorArray.close": true, + "tf.compat.v1.TensorArray.concat": true, + "tf.compat.v1.TensorArray.dtype": true, + "tf.compat.v1.TensorArray.dynamic_size": true, + "tf.compat.v1.TensorArray.element_shape": true, + "tf.compat.v1.TensorArray.flow": true, + "tf.compat.v1.TensorArray.gather": true, + "tf.compat.v1.TensorArray.grad": true, + "tf.compat.v1.TensorArray.handle": true, + "tf.compat.v1.TensorArray.identity": true, + "tf.compat.v1.TensorArray.read": true, + "tf.compat.v1.TensorArray.scatter": true, + "tf.compat.v1.TensorArray.size": true, + "tf.compat.v1.TensorArray.split": true, + "tf.compat.v1.TensorArray.stack": true, + "tf.compat.v1.TensorArray.unstack": true, + "tf.compat.v1.TensorArray.write": true, + "tf.compat.v1.TensorArraySpec": false, + "tf.compat.v1.TensorArraySpec.__eq__": true, + "tf.compat.v1.TensorArraySpec.__ge__": true, + "tf.compat.v1.TensorArraySpec.__gt__": true, + "tf.compat.v1.TensorArraySpec.__init__": true, + "tf.compat.v1.TensorArraySpec.__le__": true, + "tf.compat.v1.TensorArraySpec.__lt__": true, + "tf.compat.v1.TensorArraySpec.__ne__": true, + "tf.compat.v1.TensorArraySpec.__new__": true, + "tf.compat.v1.TensorArraySpec.from_value": true, + "tf.compat.v1.TensorArraySpec.is_compatible_with": true, + "tf.compat.v1.TensorArraySpec.most_specific_compatible_type": true, + "tf.compat.v1.TensorArraySpec.value_type": true, + "tf.compat.v1.TensorInfo": false, + "tf.compat.v1.TensorInfo.ByteSize": true, + "tf.compat.v1.TensorInfo.Clear": true, + "tf.compat.v1.TensorInfo.ClearExtension": true, + "tf.compat.v1.TensorInfo.ClearField": true, + "tf.compat.v1.TensorInfo.CompositeTensor": false, + 
"tf.compat.v1.TensorInfo.CompositeTensor.ByteSize": true, + "tf.compat.v1.TensorInfo.CompositeTensor.Clear": true, + "tf.compat.v1.TensorInfo.CompositeTensor.ClearExtension": true, + "tf.compat.v1.TensorInfo.CompositeTensor.ClearField": true, + "tf.compat.v1.TensorInfo.CompositeTensor.CopyFrom": true, + "tf.compat.v1.TensorInfo.CompositeTensor.DESCRIPTOR": true, + "tf.compat.v1.TensorInfo.CompositeTensor.DiscardUnknownFields": true, + "tf.compat.v1.TensorInfo.CompositeTensor.Extensions": true, + "tf.compat.v1.TensorInfo.CompositeTensor.FindInitializationErrors": true, + "tf.compat.v1.TensorInfo.CompositeTensor.FromString": true, + "tf.compat.v1.TensorInfo.CompositeTensor.HasExtension": true, + "tf.compat.v1.TensorInfo.CompositeTensor.HasField": true, + "tf.compat.v1.TensorInfo.CompositeTensor.IsInitialized": true, + "tf.compat.v1.TensorInfo.CompositeTensor.ListFields": true, + "tf.compat.v1.TensorInfo.CompositeTensor.MergeFrom": true, + "tf.compat.v1.TensorInfo.CompositeTensor.MergeFromString": true, + "tf.compat.v1.TensorInfo.CompositeTensor.ParseFromString": true, + "tf.compat.v1.TensorInfo.CompositeTensor.RegisterExtension": true, + "tf.compat.v1.TensorInfo.CompositeTensor.SerializePartialToString": true, + "tf.compat.v1.TensorInfo.CompositeTensor.SerializeToString": true, + "tf.compat.v1.TensorInfo.CompositeTensor.SetInParent": true, + "tf.compat.v1.TensorInfo.CompositeTensor.UnknownFields": true, + "tf.compat.v1.TensorInfo.CompositeTensor.WhichOneof": true, + "tf.compat.v1.TensorInfo.CompositeTensor.__eq__": true, + "tf.compat.v1.TensorInfo.CompositeTensor.__ge__": true, + "tf.compat.v1.TensorInfo.CompositeTensor.__gt__": true, + "tf.compat.v1.TensorInfo.CompositeTensor.__init__": true, + "tf.compat.v1.TensorInfo.CompositeTensor.__le__": true, + "tf.compat.v1.TensorInfo.CompositeTensor.__lt__": true, + "tf.compat.v1.TensorInfo.CompositeTensor.__ne__": true, + "tf.compat.v1.TensorInfo.CompositeTensor.__new__": true, + "tf.compat.v1.TensorInfo.CompositeTensor.components": true, + "tf.compat.v1.TensorInfo.CompositeTensor.type_spec": true, + "tf.compat.v1.TensorInfo.CooSparse": false, + "tf.compat.v1.TensorInfo.CooSparse.ByteSize": true, + "tf.compat.v1.TensorInfo.CooSparse.Clear": true, + "tf.compat.v1.TensorInfo.CooSparse.ClearExtension": true, + "tf.compat.v1.TensorInfo.CooSparse.ClearField": true, + "tf.compat.v1.TensorInfo.CooSparse.CopyFrom": true, + "tf.compat.v1.TensorInfo.CooSparse.DESCRIPTOR": true, + "tf.compat.v1.TensorInfo.CooSparse.DiscardUnknownFields": true, + "tf.compat.v1.TensorInfo.CooSparse.Extensions": true, + "tf.compat.v1.TensorInfo.CooSparse.FindInitializationErrors": true, + "tf.compat.v1.TensorInfo.CooSparse.FromString": true, + "tf.compat.v1.TensorInfo.CooSparse.HasExtension": true, + "tf.compat.v1.TensorInfo.CooSparse.HasField": true, + "tf.compat.v1.TensorInfo.CooSparse.IsInitialized": true, + "tf.compat.v1.TensorInfo.CooSparse.ListFields": true, + "tf.compat.v1.TensorInfo.CooSparse.MergeFrom": true, + "tf.compat.v1.TensorInfo.CooSparse.MergeFromString": true, + "tf.compat.v1.TensorInfo.CooSparse.ParseFromString": true, + "tf.compat.v1.TensorInfo.CooSparse.RegisterExtension": true, + "tf.compat.v1.TensorInfo.CooSparse.SerializePartialToString": true, + "tf.compat.v1.TensorInfo.CooSparse.SerializeToString": true, + "tf.compat.v1.TensorInfo.CooSparse.SetInParent": true, + "tf.compat.v1.TensorInfo.CooSparse.UnknownFields": true, + "tf.compat.v1.TensorInfo.CooSparse.WhichOneof": true, + "tf.compat.v1.TensorInfo.CooSparse.__eq__": true, + 
"tf.compat.v1.TensorInfo.CooSparse.__ge__": true, + "tf.compat.v1.TensorInfo.CooSparse.__gt__": true, + "tf.compat.v1.TensorInfo.CooSparse.__init__": true, + "tf.compat.v1.TensorInfo.CooSparse.__le__": true, + "tf.compat.v1.TensorInfo.CooSparse.__lt__": true, + "tf.compat.v1.TensorInfo.CooSparse.__ne__": true, + "tf.compat.v1.TensorInfo.CooSparse.__new__": true, + "tf.compat.v1.TensorInfo.CooSparse.dense_shape_tensor_name": true, + "tf.compat.v1.TensorInfo.CooSparse.indices_tensor_name": true, + "tf.compat.v1.TensorInfo.CooSparse.values_tensor_name": true, + "tf.compat.v1.TensorInfo.CopyFrom": true, + "tf.compat.v1.TensorInfo.DESCRIPTOR": true, + "tf.compat.v1.TensorInfo.DiscardUnknownFields": true, + "tf.compat.v1.TensorInfo.Extensions": true, + "tf.compat.v1.TensorInfo.FindInitializationErrors": true, + "tf.compat.v1.TensorInfo.FromString": true, + "tf.compat.v1.TensorInfo.HasExtension": true, + "tf.compat.v1.TensorInfo.HasField": true, + "tf.compat.v1.TensorInfo.IsInitialized": true, + "tf.compat.v1.TensorInfo.ListFields": true, + "tf.compat.v1.TensorInfo.MergeFrom": true, + "tf.compat.v1.TensorInfo.MergeFromString": true, + "tf.compat.v1.TensorInfo.ParseFromString": true, + "tf.compat.v1.TensorInfo.RegisterExtension": true, + "tf.compat.v1.TensorInfo.SerializePartialToString": true, + "tf.compat.v1.TensorInfo.SerializeToString": true, + "tf.compat.v1.TensorInfo.SetInParent": true, + "tf.compat.v1.TensorInfo.UnknownFields": true, + "tf.compat.v1.TensorInfo.WhichOneof": true, + "tf.compat.v1.TensorInfo.__eq__": true, + "tf.compat.v1.TensorInfo.__ge__": true, + "tf.compat.v1.TensorInfo.__gt__": true, + "tf.compat.v1.TensorInfo.__init__": true, + "tf.compat.v1.TensorInfo.__le__": true, + "tf.compat.v1.TensorInfo.__lt__": true, + "tf.compat.v1.TensorInfo.__ne__": true, + "tf.compat.v1.TensorInfo.__new__": true, + "tf.compat.v1.TensorInfo.composite_tensor": true, + "tf.compat.v1.TensorInfo.coo_sparse": true, + "tf.compat.v1.TensorInfo.dtype": true, + "tf.compat.v1.TensorInfo.name": true, + "tf.compat.v1.TensorInfo.tensor_shape": true, + "tf.compat.v1.TensorShape": false, + "tf.compat.v1.TensorShape.__add__": true, + "tf.compat.v1.TensorShape.__bool__": true, + "tf.compat.v1.TensorShape.__concat__": true, + "tf.compat.v1.TensorShape.__eq__": true, + "tf.compat.v1.TensorShape.__ge__": true, + "tf.compat.v1.TensorShape.__getitem__": true, + "tf.compat.v1.TensorShape.__gt__": true, + "tf.compat.v1.TensorShape.__init__": true, + "tf.compat.v1.TensorShape.__iter__": true, + "tf.compat.v1.TensorShape.__le__": true, + "tf.compat.v1.TensorShape.__len__": true, + "tf.compat.v1.TensorShape.__lt__": true, + "tf.compat.v1.TensorShape.__ne__": true, + "tf.compat.v1.TensorShape.__new__": true, + "tf.compat.v1.TensorShape.__nonzero__": true, + "tf.compat.v1.TensorShape.__radd__": true, + "tf.compat.v1.TensorShape.as_list": true, + "tf.compat.v1.TensorShape.as_proto": true, + "tf.compat.v1.TensorShape.assert_has_rank": true, + "tf.compat.v1.TensorShape.assert_is_compatible_with": true, + "tf.compat.v1.TensorShape.assert_is_fully_defined": true, + "tf.compat.v1.TensorShape.assert_same_rank": true, + "tf.compat.v1.TensorShape.concatenate": true, + "tf.compat.v1.TensorShape.dims": true, + "tf.compat.v1.TensorShape.is_compatible_with": true, + "tf.compat.v1.TensorShape.is_fully_defined": true, + "tf.compat.v1.TensorShape.merge_with": true, + "tf.compat.v1.TensorShape.most_specific_compatible_shape": true, + "tf.compat.v1.TensorShape.ndims": true, + "tf.compat.v1.TensorShape.num_elements": true, + 
"tf.compat.v1.TensorShape.rank": true, + "tf.compat.v1.TensorShape.with_rank": true, + "tf.compat.v1.TensorShape.with_rank_at_least": true, + "tf.compat.v1.TensorShape.with_rank_at_most": true, + "tf.compat.v1.TensorSpec": false, + "tf.compat.v1.TensorSpec.__eq__": true, + "tf.compat.v1.TensorSpec.__ge__": true, + "tf.compat.v1.TensorSpec.__gt__": true, + "tf.compat.v1.TensorSpec.__init__": true, + "tf.compat.v1.TensorSpec.__le__": true, + "tf.compat.v1.TensorSpec.__lt__": true, + "tf.compat.v1.TensorSpec.__ne__": true, + "tf.compat.v1.TensorSpec.__new__": true, + "tf.compat.v1.TensorSpec.dtype": true, + "tf.compat.v1.TensorSpec.from_spec": true, + "tf.compat.v1.TensorSpec.from_tensor": true, + "tf.compat.v1.TensorSpec.is_compatible_with": true, + "tf.compat.v1.TensorSpec.most_specific_compatible_type": true, + "tf.compat.v1.TensorSpec.name": true, + "tf.compat.v1.TensorSpec.shape": true, + "tf.compat.v1.TensorSpec.value_type": true, + "tf.compat.v1.TextLineReader": false, + "tf.compat.v1.TextLineReader.__eq__": true, + "tf.compat.v1.TextLineReader.__ge__": true, + "tf.compat.v1.TextLineReader.__gt__": true, + "tf.compat.v1.TextLineReader.__init__": true, + "tf.compat.v1.TextLineReader.__le__": true, + "tf.compat.v1.TextLineReader.__lt__": true, + "tf.compat.v1.TextLineReader.__ne__": true, + "tf.compat.v1.TextLineReader.__new__": true, + "tf.compat.v1.TextLineReader.num_records_produced": true, + "tf.compat.v1.TextLineReader.num_work_units_completed": true, + "tf.compat.v1.TextLineReader.read": true, + "tf.compat.v1.TextLineReader.read_up_to": true, + "tf.compat.v1.TextLineReader.reader_ref": true, + "tf.compat.v1.TextLineReader.reset": true, + "tf.compat.v1.TextLineReader.restore_state": true, + "tf.compat.v1.TextLineReader.serialize_state": true, + "tf.compat.v1.TextLineReader.supports_serialize": true, + "tf.compat.v1.TypeSpec": false, + "tf.compat.v1.TypeSpec.__eq__": true, + "tf.compat.v1.TypeSpec.__ge__": true, + "tf.compat.v1.TypeSpec.__gt__": true, + "tf.compat.v1.TypeSpec.__init__": true, + "tf.compat.v1.TypeSpec.__le__": true, + "tf.compat.v1.TypeSpec.__lt__": true, + "tf.compat.v1.TypeSpec.__ne__": true, + "tf.compat.v1.TypeSpec.__new__": true, + "tf.compat.v1.TypeSpec.is_compatible_with": true, + "tf.compat.v1.TypeSpec.most_specific_compatible_type": true, + "tf.compat.v1.TypeSpec.value_type": true, + "tf.compat.v1.UnconnectedGradients": false, + "tf.compat.v1.UnconnectedGradients.NONE": true, + "tf.compat.v1.UnconnectedGradients.ZERO": true, + "tf.compat.v1.UnconnectedGradients.name": true, + "tf.compat.v1.UnconnectedGradients.value": true, + "tf.compat.v1.VERSION": true, + "tf.compat.v1.VarLenFeature": false, + "tf.compat.v1.VarLenFeature.__add__": true, + "tf.compat.v1.VarLenFeature.__contains__": true, + "tf.compat.v1.VarLenFeature.__eq__": true, + "tf.compat.v1.VarLenFeature.__ge__": true, + "tf.compat.v1.VarLenFeature.__getitem__": true, + "tf.compat.v1.VarLenFeature.__gt__": true, + "tf.compat.v1.VarLenFeature.__init__": true, + "tf.compat.v1.VarLenFeature.__iter__": true, + "tf.compat.v1.VarLenFeature.__le__": true, + "tf.compat.v1.VarLenFeature.__len__": true, + "tf.compat.v1.VarLenFeature.__lt__": true, + "tf.compat.v1.VarLenFeature.__mul__": true, + "tf.compat.v1.VarLenFeature.__ne__": true, + "tf.compat.v1.VarLenFeature.__new__": true, + "tf.compat.v1.VarLenFeature.__rmul__": true, + "tf.compat.v1.VarLenFeature.count": true, + "tf.compat.v1.VarLenFeature.dtype": true, + "tf.compat.v1.VarLenFeature.index": true, + "tf.compat.v1.Variable": false, + 
"tf.compat.v1.Variable.SaveSliceInfo": false, + "tf.compat.v1.Variable.SaveSliceInfo.__eq__": true, + "tf.compat.v1.Variable.SaveSliceInfo.__ge__": true, + "tf.compat.v1.Variable.SaveSliceInfo.__gt__": true, + "tf.compat.v1.Variable.SaveSliceInfo.__init__": true, + "tf.compat.v1.Variable.SaveSliceInfo.__le__": true, + "tf.compat.v1.Variable.SaveSliceInfo.__lt__": true, + "tf.compat.v1.Variable.SaveSliceInfo.__ne__": true, + "tf.compat.v1.Variable.SaveSliceInfo.__new__": true, + "tf.compat.v1.Variable.SaveSliceInfo.spec": true, + "tf.compat.v1.Variable.SaveSliceInfo.to_proto": true, + "tf.compat.v1.Variable.__abs__": true, + "tf.compat.v1.Variable.__add__": true, + "tf.compat.v1.Variable.__and__": true, + "tf.compat.v1.Variable.__div__": true, + "tf.compat.v1.Variable.__eq__": true, + "tf.compat.v1.Variable.__floordiv__": true, + "tf.compat.v1.Variable.__ge__": true, + "tf.compat.v1.Variable.__getitem__": true, + "tf.compat.v1.Variable.__gt__": true, + "tf.compat.v1.Variable.__init__": true, + "tf.compat.v1.Variable.__invert__": true, + "tf.compat.v1.Variable.__iter__": true, + "tf.compat.v1.Variable.__le__": true, + "tf.compat.v1.Variable.__lt__": true, + "tf.compat.v1.Variable.__matmul__": true, + "tf.compat.v1.Variable.__mod__": true, + "tf.compat.v1.Variable.__mul__": true, + "tf.compat.v1.Variable.__ne__": true, + "tf.compat.v1.Variable.__neg__": true, + "tf.compat.v1.Variable.__new__": true, + "tf.compat.v1.Variable.__or__": true, + "tf.compat.v1.Variable.__pow__": true, + "tf.compat.v1.Variable.__radd__": true, + "tf.compat.v1.Variable.__rand__": true, + "tf.compat.v1.Variable.__rdiv__": true, + "tf.compat.v1.Variable.__rfloordiv__": true, + "tf.compat.v1.Variable.__rmatmul__": true, + "tf.compat.v1.Variable.__rmod__": true, + "tf.compat.v1.Variable.__rmul__": true, + "tf.compat.v1.Variable.__ror__": true, + "tf.compat.v1.Variable.__rpow__": true, + "tf.compat.v1.Variable.__rsub__": true, + "tf.compat.v1.Variable.__rtruediv__": true, + "tf.compat.v1.Variable.__rxor__": true, + "tf.compat.v1.Variable.__sub__": true, + "tf.compat.v1.Variable.__truediv__": true, + "tf.compat.v1.Variable.__xor__": true, + "tf.compat.v1.Variable.aggregation": true, + "tf.compat.v1.Variable.assign": true, + "tf.compat.v1.Variable.assign_add": true, + "tf.compat.v1.Variable.assign_sub": true, + "tf.compat.v1.Variable.batch_scatter_update": true, + "tf.compat.v1.Variable.constraint": true, + "tf.compat.v1.Variable.count_up_to": true, + "tf.compat.v1.Variable.device": true, + "tf.compat.v1.Variable.dtype": true, + "tf.compat.v1.Variable.eval": true, + "tf.compat.v1.Variable.experimental_ref": true, + "tf.compat.v1.Variable.from_proto": true, + "tf.compat.v1.Variable.gather_nd": true, + "tf.compat.v1.Variable.get_shape": true, + "tf.compat.v1.Variable.graph": true, + "tf.compat.v1.Variable.initial_value": true, + "tf.compat.v1.Variable.initialized_value": true, + "tf.compat.v1.Variable.initializer": true, + "tf.compat.v1.Variable.load": true, + "tf.compat.v1.Variable.name": true, + "tf.compat.v1.Variable.op": true, + "tf.compat.v1.Variable.read_value": true, + "tf.compat.v1.Variable.ref": true, + "tf.compat.v1.Variable.scatter_add": true, + "tf.compat.v1.Variable.scatter_div": true, + "tf.compat.v1.Variable.scatter_max": true, + "tf.compat.v1.Variable.scatter_min": true, + "tf.compat.v1.Variable.scatter_mul": true, + "tf.compat.v1.Variable.scatter_nd_add": true, + "tf.compat.v1.Variable.scatter_nd_sub": true, + "tf.compat.v1.Variable.scatter_nd_update": true, + "tf.compat.v1.Variable.scatter_sub": true, + 
"tf.compat.v1.Variable.scatter_update": true, + "tf.compat.v1.Variable.set_shape": true, + "tf.compat.v1.Variable.shape": true, + "tf.compat.v1.Variable.sparse_read": true, + "tf.compat.v1.Variable.synchronization": true, + "tf.compat.v1.Variable.to_proto": true, + "tf.compat.v1.Variable.trainable": true, + "tf.compat.v1.Variable.value": true, + "tf.compat.v1.VariableAggregation": false, + "tf.compat.v1.VariableAggregation.MEAN": true, + "tf.compat.v1.VariableAggregation.NONE": true, + "tf.compat.v1.VariableAggregation.ONLY_FIRST_REPLICA": true, + "tf.compat.v1.VariableAggregation.SUM": true, + "tf.compat.v1.VariableAggregation.name": true, + "tf.compat.v1.VariableAggregation.value": true, + "tf.compat.v1.VariableScope": false, + "tf.compat.v1.VariableScope.__eq__": true, + "tf.compat.v1.VariableScope.__ge__": true, + "tf.compat.v1.VariableScope.__gt__": true, + "tf.compat.v1.VariableScope.__init__": true, + "tf.compat.v1.VariableScope.__le__": true, + "tf.compat.v1.VariableScope.__lt__": true, + "tf.compat.v1.VariableScope.__ne__": true, + "tf.compat.v1.VariableScope.__new__": true, + "tf.compat.v1.VariableScope.caching_device": true, + "tf.compat.v1.VariableScope.constraint": true, + "tf.compat.v1.VariableScope.custom_getter": true, + "tf.compat.v1.VariableScope.dtype": true, + "tf.compat.v1.VariableScope.get_collection": true, + "tf.compat.v1.VariableScope.get_variable": true, + "tf.compat.v1.VariableScope.global_variables": true, + "tf.compat.v1.VariableScope.initializer": true, + "tf.compat.v1.VariableScope.local_variables": true, + "tf.compat.v1.VariableScope.name": true, + "tf.compat.v1.VariableScope.original_name_scope": true, + "tf.compat.v1.VariableScope.partitioner": true, + "tf.compat.v1.VariableScope.regularizer": true, + "tf.compat.v1.VariableScope.reuse": true, + "tf.compat.v1.VariableScope.reuse_variables": true, + "tf.compat.v1.VariableScope.set_caching_device": true, + "tf.compat.v1.VariableScope.set_custom_getter": true, + "tf.compat.v1.VariableScope.set_dtype": true, + "tf.compat.v1.VariableScope.set_initializer": true, + "tf.compat.v1.VariableScope.set_partitioner": true, + "tf.compat.v1.VariableScope.set_regularizer": true, + "tf.compat.v1.VariableScope.set_use_resource": true, + "tf.compat.v1.VariableScope.trainable_variables": true, + "tf.compat.v1.VariableScope.use_resource": true, + "tf.compat.v1.VariableSynchronization": false, + "tf.compat.v1.VariableSynchronization.AUTO": true, + "tf.compat.v1.VariableSynchronization.NONE": true, + "tf.compat.v1.VariableSynchronization.ON_READ": true, + "tf.compat.v1.VariableSynchronization.ON_WRITE": true, + "tf.compat.v1.VariableSynchronization.name": true, + "tf.compat.v1.VariableSynchronization.value": true, + "tf.compat.v1.WholeFileReader": false, + "tf.compat.v1.WholeFileReader.__eq__": true, + "tf.compat.v1.WholeFileReader.__ge__": true, + "tf.compat.v1.WholeFileReader.__gt__": true, + "tf.compat.v1.WholeFileReader.__init__": true, + "tf.compat.v1.WholeFileReader.__le__": true, + "tf.compat.v1.WholeFileReader.__lt__": true, + "tf.compat.v1.WholeFileReader.__ne__": true, + "tf.compat.v1.WholeFileReader.__new__": true, + "tf.compat.v1.WholeFileReader.num_records_produced": true, + "tf.compat.v1.WholeFileReader.num_work_units_completed": true, + "tf.compat.v1.WholeFileReader.read": true, + "tf.compat.v1.WholeFileReader.read_up_to": true, + "tf.compat.v1.WholeFileReader.reader_ref": true, + "tf.compat.v1.WholeFileReader.reset": true, + "tf.compat.v1.WholeFileReader.restore_state": true, + 
"tf.compat.v1.WholeFileReader.serialize_state": true, + "tf.compat.v1.WholeFileReader.supports_serialize": true, + "tf.compat.v1.__version__": true, + "tf.compat.v1.abs": false, + "tf.compat.v1.accumulate_n": false, + "tf.compat.v1.acos": false, + "tf.compat.v1.acosh": false, + "tf.compat.v1.add": false, + "tf.compat.v1.add_check_numerics_ops": false, + "tf.compat.v1.add_n": false, + "tf.compat.v1.add_to_collection": false, + "tf.compat.v1.add_to_collections": false, + "tf.compat.v1.all_variables": false, + "tf.compat.v1.angle": false, + "tf.compat.v1.app": false, + "tf.compat.v1.app.flags": false, + "tf.compat.v1.app.flags.ArgumentParser": false, + "tf.compat.v1.app.flags.ArgumentParser.__eq__": true, + "tf.compat.v1.app.flags.ArgumentParser.__ge__": true, + "tf.compat.v1.app.flags.ArgumentParser.__gt__": true, + "tf.compat.v1.app.flags.ArgumentParser.__init__": true, + "tf.compat.v1.app.flags.ArgumentParser.__le__": true, + "tf.compat.v1.app.flags.ArgumentParser.__lt__": true, + "tf.compat.v1.app.flags.ArgumentParser.__ne__": true, + "tf.compat.v1.app.flags.ArgumentParser.__new__": true, + "tf.compat.v1.app.flags.ArgumentParser.flag_type": true, + "tf.compat.v1.app.flags.ArgumentParser.parse": true, + "tf.compat.v1.app.flags.ArgumentParser.syntactic_help": true, + "tf.compat.v1.app.flags.ArgumentSerializer": false, + "tf.compat.v1.app.flags.ArgumentSerializer.__eq__": true, + "tf.compat.v1.app.flags.ArgumentSerializer.__ge__": true, + "tf.compat.v1.app.flags.ArgumentSerializer.__gt__": true, + "tf.compat.v1.app.flags.ArgumentSerializer.__init__": true, + "tf.compat.v1.app.flags.ArgumentSerializer.__le__": true, + "tf.compat.v1.app.flags.ArgumentSerializer.__lt__": true, + "tf.compat.v1.app.flags.ArgumentSerializer.__ne__": true, + "tf.compat.v1.app.flags.ArgumentSerializer.__new__": true, + "tf.compat.v1.app.flags.ArgumentSerializer.serialize": true, + "tf.compat.v1.app.flags.BaseListParser": false, + "tf.compat.v1.app.flags.BaseListParser.__eq__": true, + "tf.compat.v1.app.flags.BaseListParser.__ge__": true, + "tf.compat.v1.app.flags.BaseListParser.__gt__": true, + "tf.compat.v1.app.flags.BaseListParser.__init__": true, + "tf.compat.v1.app.flags.BaseListParser.__le__": true, + "tf.compat.v1.app.flags.BaseListParser.__lt__": true, + "tf.compat.v1.app.flags.BaseListParser.__ne__": true, + "tf.compat.v1.app.flags.BaseListParser.__new__": true, + "tf.compat.v1.app.flags.BaseListParser.flag_type": true, + "tf.compat.v1.app.flags.BaseListParser.parse": true, + "tf.compat.v1.app.flags.BaseListParser.syntactic_help": true, + "tf.compat.v1.app.flags.BooleanFlag": false, + "tf.compat.v1.app.flags.BooleanFlag.__eq__": true, + "tf.compat.v1.app.flags.BooleanFlag.__ge__": true, + "tf.compat.v1.app.flags.BooleanFlag.__gt__": true, + "tf.compat.v1.app.flags.BooleanFlag.__init__": true, + "tf.compat.v1.app.flags.BooleanFlag.__le__": true, + "tf.compat.v1.app.flags.BooleanFlag.__lt__": true, + "tf.compat.v1.app.flags.BooleanFlag.__ne__": true, + "tf.compat.v1.app.flags.BooleanFlag.__new__": true, + "tf.compat.v1.app.flags.BooleanFlag.flag_type": true, + "tf.compat.v1.app.flags.BooleanFlag.parse": true, + "tf.compat.v1.app.flags.BooleanFlag.serialize": true, + "tf.compat.v1.app.flags.BooleanFlag.unparse": true, + "tf.compat.v1.app.flags.BooleanFlag.value": true, + "tf.compat.v1.app.flags.BooleanParser": false, + "tf.compat.v1.app.flags.BooleanParser.__eq__": true, + "tf.compat.v1.app.flags.BooleanParser.__ge__": true, + "tf.compat.v1.app.flags.BooleanParser.__gt__": true, + 
"tf.compat.v1.app.flags.BooleanParser.__init__": true, + "tf.compat.v1.app.flags.BooleanParser.__le__": true, + "tf.compat.v1.app.flags.BooleanParser.__lt__": true, + "tf.compat.v1.app.flags.BooleanParser.__ne__": true, + "tf.compat.v1.app.flags.BooleanParser.__new__": true, + "tf.compat.v1.app.flags.BooleanParser.flag_type": true, + "tf.compat.v1.app.flags.BooleanParser.parse": true, + "tf.compat.v1.app.flags.BooleanParser.syntactic_help": true, + "tf.compat.v1.app.flags.CantOpenFlagFileError": false, + "tf.compat.v1.app.flags.CantOpenFlagFileError.__eq__": true, + "tf.compat.v1.app.flags.CantOpenFlagFileError.__ge__": true, + "tf.compat.v1.app.flags.CantOpenFlagFileError.__gt__": true, + "tf.compat.v1.app.flags.CantOpenFlagFileError.__init__": true, + "tf.compat.v1.app.flags.CantOpenFlagFileError.__le__": true, + "tf.compat.v1.app.flags.CantOpenFlagFileError.__lt__": true, + "tf.compat.v1.app.flags.CantOpenFlagFileError.__ne__": true, + "tf.compat.v1.app.flags.CantOpenFlagFileError.__new__": true, + "tf.compat.v1.app.flags.CantOpenFlagFileError.args": true, + "tf.compat.v1.app.flags.CantOpenFlagFileError.with_traceback": true, + "tf.compat.v1.app.flags.CsvListSerializer": false, + "tf.compat.v1.app.flags.CsvListSerializer.__eq__": true, + "tf.compat.v1.app.flags.CsvListSerializer.__ge__": true, + "tf.compat.v1.app.flags.CsvListSerializer.__gt__": true, + "tf.compat.v1.app.flags.CsvListSerializer.__init__": true, + "tf.compat.v1.app.flags.CsvListSerializer.__le__": true, + "tf.compat.v1.app.flags.CsvListSerializer.__lt__": true, + "tf.compat.v1.app.flags.CsvListSerializer.__ne__": true, + "tf.compat.v1.app.flags.CsvListSerializer.__new__": true, + "tf.compat.v1.app.flags.CsvListSerializer.serialize": true, + "tf.compat.v1.app.flags.DEFINE": false, + "tf.compat.v1.app.flags.DEFINE_alias": false, + "tf.compat.v1.app.flags.DEFINE_bool": false, + "tf.compat.v1.app.flags.DEFINE_boolean": false, + "tf.compat.v1.app.flags.DEFINE_enum": false, + "tf.compat.v1.app.flags.DEFINE_enum_class": false, + "tf.compat.v1.app.flags.DEFINE_flag": false, + "tf.compat.v1.app.flags.DEFINE_float": false, + "tf.compat.v1.app.flags.DEFINE_integer": false, + "tf.compat.v1.app.flags.DEFINE_list": false, + "tf.compat.v1.app.flags.DEFINE_multi": false, + "tf.compat.v1.app.flags.DEFINE_multi_enum": false, + "tf.compat.v1.app.flags.DEFINE_multi_enum_class": false, + "tf.compat.v1.app.flags.DEFINE_multi_float": false, + "tf.compat.v1.app.flags.DEFINE_multi_integer": false, + "tf.compat.v1.app.flags.DEFINE_multi_string": false, + "tf.compat.v1.app.flags.DEFINE_spaceseplist": false, + "tf.compat.v1.app.flags.DEFINE_string": false, + "tf.compat.v1.app.flags.DuplicateFlagError": false, + "tf.compat.v1.app.flags.DuplicateFlagError.__eq__": true, + "tf.compat.v1.app.flags.DuplicateFlagError.__ge__": true, + "tf.compat.v1.app.flags.DuplicateFlagError.__gt__": true, + "tf.compat.v1.app.flags.DuplicateFlagError.__init__": true, + "tf.compat.v1.app.flags.DuplicateFlagError.__le__": true, + "tf.compat.v1.app.flags.DuplicateFlagError.__lt__": true, + "tf.compat.v1.app.flags.DuplicateFlagError.__ne__": true, + "tf.compat.v1.app.flags.DuplicateFlagError.__new__": true, + "tf.compat.v1.app.flags.DuplicateFlagError.args": true, + "tf.compat.v1.app.flags.DuplicateFlagError.from_flag": true, + "tf.compat.v1.app.flags.DuplicateFlagError.with_traceback": true, + "tf.compat.v1.app.flags.EnumClassFlag": false, + "tf.compat.v1.app.flags.EnumClassFlag.__eq__": true, + "tf.compat.v1.app.flags.EnumClassFlag.__ge__": true, + 
"tf.compat.v1.app.flags.EnumClassFlag.__gt__": true, + "tf.compat.v1.app.flags.EnumClassFlag.__init__": true, + "tf.compat.v1.app.flags.EnumClassFlag.__le__": true, + "tf.compat.v1.app.flags.EnumClassFlag.__lt__": true, + "tf.compat.v1.app.flags.EnumClassFlag.__ne__": true, + "tf.compat.v1.app.flags.EnumClassFlag.__new__": true, + "tf.compat.v1.app.flags.EnumClassFlag.flag_type": true, + "tf.compat.v1.app.flags.EnumClassFlag.parse": true, + "tf.compat.v1.app.flags.EnumClassFlag.serialize": true, + "tf.compat.v1.app.flags.EnumClassFlag.unparse": true, + "tf.compat.v1.app.flags.EnumClassFlag.value": true, + "tf.compat.v1.app.flags.EnumClassParser": false, + "tf.compat.v1.app.flags.EnumClassParser.__eq__": true, + "tf.compat.v1.app.flags.EnumClassParser.__ge__": true, + "tf.compat.v1.app.flags.EnumClassParser.__gt__": true, + "tf.compat.v1.app.flags.EnumClassParser.__init__": true, + "tf.compat.v1.app.flags.EnumClassParser.__le__": true, + "tf.compat.v1.app.flags.EnumClassParser.__lt__": true, + "tf.compat.v1.app.flags.EnumClassParser.__ne__": true, + "tf.compat.v1.app.flags.EnumClassParser.__new__": true, + "tf.compat.v1.app.flags.EnumClassParser.flag_type": true, + "tf.compat.v1.app.flags.EnumClassParser.parse": true, + "tf.compat.v1.app.flags.EnumClassParser.syntactic_help": true, + "tf.compat.v1.app.flags.EnumFlag": false, + "tf.compat.v1.app.flags.EnumFlag.__eq__": true, + "tf.compat.v1.app.flags.EnumFlag.__ge__": true, + "tf.compat.v1.app.flags.EnumFlag.__gt__": true, + "tf.compat.v1.app.flags.EnumFlag.__init__": true, + "tf.compat.v1.app.flags.EnumFlag.__le__": true, + "tf.compat.v1.app.flags.EnumFlag.__lt__": true, + "tf.compat.v1.app.flags.EnumFlag.__ne__": true, + "tf.compat.v1.app.flags.EnumFlag.__new__": true, + "tf.compat.v1.app.flags.EnumFlag.flag_type": true, + "tf.compat.v1.app.flags.EnumFlag.parse": true, + "tf.compat.v1.app.flags.EnumFlag.serialize": true, + "tf.compat.v1.app.flags.EnumFlag.unparse": true, + "tf.compat.v1.app.flags.EnumFlag.value": true, + "tf.compat.v1.app.flags.EnumParser": false, + "tf.compat.v1.app.flags.EnumParser.__eq__": true, + "tf.compat.v1.app.flags.EnumParser.__ge__": true, + "tf.compat.v1.app.flags.EnumParser.__gt__": true, + "tf.compat.v1.app.flags.EnumParser.__init__": true, + "tf.compat.v1.app.flags.EnumParser.__le__": true, + "tf.compat.v1.app.flags.EnumParser.__lt__": true, + "tf.compat.v1.app.flags.EnumParser.__ne__": true, + "tf.compat.v1.app.flags.EnumParser.__new__": true, + "tf.compat.v1.app.flags.EnumParser.flag_type": true, + "tf.compat.v1.app.flags.EnumParser.parse": true, + "tf.compat.v1.app.flags.EnumParser.syntactic_help": true, + "tf.compat.v1.app.flags.Error": false, + "tf.compat.v1.app.flags.Error.__eq__": true, + "tf.compat.v1.app.flags.Error.__ge__": true, + "tf.compat.v1.app.flags.Error.__gt__": true, + "tf.compat.v1.app.flags.Error.__init__": true, + "tf.compat.v1.app.flags.Error.__le__": true, + "tf.compat.v1.app.flags.Error.__lt__": true, + "tf.compat.v1.app.flags.Error.__ne__": true, + "tf.compat.v1.app.flags.Error.__new__": true, + "tf.compat.v1.app.flags.Error.args": true, + "tf.compat.v1.app.flags.Error.with_traceback": true, + "tf.compat.v1.app.flags.FLAGS": false, + "tf.compat.v1.app.flags.Flag": false, + "tf.compat.v1.app.flags.Flag.__eq__": true, + "tf.compat.v1.app.flags.Flag.__ge__": true, + "tf.compat.v1.app.flags.Flag.__gt__": true, + "tf.compat.v1.app.flags.Flag.__init__": true, + "tf.compat.v1.app.flags.Flag.__le__": true, + "tf.compat.v1.app.flags.Flag.__lt__": true, + "tf.compat.v1.app.flags.Flag.__ne__": 
true, + "tf.compat.v1.app.flags.Flag.__new__": true, + "tf.compat.v1.app.flags.Flag.flag_type": true, + "tf.compat.v1.app.flags.Flag.parse": true, + "tf.compat.v1.app.flags.Flag.serialize": true, + "tf.compat.v1.app.flags.Flag.unparse": true, + "tf.compat.v1.app.flags.Flag.value": true, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError": false, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__eq__": true, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__ge__": true, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__gt__": true, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__init__": true, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__le__": true, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__lt__": true, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__ne__": true, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.__new__": true, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.args": true, + "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError.with_traceback": true, + "tf.compat.v1.app.flags.FlagValues": false, + "tf.compat.v1.app.flags.FlagValues.__call__": true, + "tf.compat.v1.app.flags.FlagValues.__contains__": true, + "tf.compat.v1.app.flags.FlagValues.__eq__": true, + "tf.compat.v1.app.flags.FlagValues.__ge__": true, + "tf.compat.v1.app.flags.FlagValues.__getitem__": true, + "tf.compat.v1.app.flags.FlagValues.__gt__": true, + "tf.compat.v1.app.flags.FlagValues.__init__": true, + "tf.compat.v1.app.flags.FlagValues.__iter__": true, + "tf.compat.v1.app.flags.FlagValues.__le__": true, + "tf.compat.v1.app.flags.FlagValues.__len__": true, + "tf.compat.v1.app.flags.FlagValues.__lt__": true, + "tf.compat.v1.app.flags.FlagValues.__ne__": true, + "tf.compat.v1.app.flags.FlagValues.__new__": true, + "tf.compat.v1.app.flags.FlagValues.append_flag_values": true, + "tf.compat.v1.app.flags.FlagValues.append_flags_into_file": true, + "tf.compat.v1.app.flags.FlagValues.find_module_defining_flag": true, + "tf.compat.v1.app.flags.FlagValues.find_module_id_defining_flag": true, + "tf.compat.v1.app.flags.FlagValues.flag_values_dict": true, + "tf.compat.v1.app.flags.FlagValues.flags_by_module_dict": true, + "tf.compat.v1.app.flags.FlagValues.flags_by_module_id_dict": true, + "tf.compat.v1.app.flags.FlagValues.flags_into_string": true, + "tf.compat.v1.app.flags.FlagValues.get_flag_value": true, + "tf.compat.v1.app.flags.FlagValues.get_help": true, + "tf.compat.v1.app.flags.FlagValues.get_key_flags_for_module": true, + "tf.compat.v1.app.flags.FlagValues.is_gnu_getopt": true, + "tf.compat.v1.app.flags.FlagValues.is_parsed": true, + "tf.compat.v1.app.flags.FlagValues.key_flags_by_module_dict": true, + "tf.compat.v1.app.flags.FlagValues.main_module_help": true, + "tf.compat.v1.app.flags.FlagValues.mark_as_parsed": true, + "tf.compat.v1.app.flags.FlagValues.module_help": true, + "tf.compat.v1.app.flags.FlagValues.read_flags_from_files": true, + "tf.compat.v1.app.flags.FlagValues.register_flag_by_module": true, + "tf.compat.v1.app.flags.FlagValues.register_flag_by_module_id": true, + "tf.compat.v1.app.flags.FlagValues.register_key_flag_for_module": true, + "tf.compat.v1.app.flags.FlagValues.remove_flag_values": true, + "tf.compat.v1.app.flags.FlagValues.set_default": true, + "tf.compat.v1.app.flags.FlagValues.set_gnu_getopt": true, + "tf.compat.v1.app.flags.FlagValues.unparse_flags": true, + "tf.compat.v1.app.flags.FlagValues.write_help_in_xml_format": true, + 
"tf.compat.v1.app.flags.FloatParser": false, + "tf.compat.v1.app.flags.FloatParser.__eq__": true, + "tf.compat.v1.app.flags.FloatParser.__ge__": true, + "tf.compat.v1.app.flags.FloatParser.__gt__": true, + "tf.compat.v1.app.flags.FloatParser.__init__": true, + "tf.compat.v1.app.flags.FloatParser.__le__": true, + "tf.compat.v1.app.flags.FloatParser.__lt__": true, + "tf.compat.v1.app.flags.FloatParser.__ne__": true, + "tf.compat.v1.app.flags.FloatParser.__new__": true, + "tf.compat.v1.app.flags.FloatParser.convert": true, + "tf.compat.v1.app.flags.FloatParser.flag_type": true, + "tf.compat.v1.app.flags.FloatParser.is_outside_bounds": true, + "tf.compat.v1.app.flags.FloatParser.number_article": true, + "tf.compat.v1.app.flags.FloatParser.number_name": true, + "tf.compat.v1.app.flags.FloatParser.parse": true, + "tf.compat.v1.app.flags.FloatParser.syntactic_help": true, + "tf.compat.v1.app.flags.IllegalFlagValueError": false, + "tf.compat.v1.app.flags.IllegalFlagValueError.__eq__": true, + "tf.compat.v1.app.flags.IllegalFlagValueError.__ge__": true, + "tf.compat.v1.app.flags.IllegalFlagValueError.__gt__": true, + "tf.compat.v1.app.flags.IllegalFlagValueError.__init__": true, + "tf.compat.v1.app.flags.IllegalFlagValueError.__le__": true, + "tf.compat.v1.app.flags.IllegalFlagValueError.__lt__": true, + "tf.compat.v1.app.flags.IllegalFlagValueError.__ne__": true, + "tf.compat.v1.app.flags.IllegalFlagValueError.__new__": true, + "tf.compat.v1.app.flags.IllegalFlagValueError.args": true, + "tf.compat.v1.app.flags.IllegalFlagValueError.with_traceback": true, + "tf.compat.v1.app.flags.IntegerParser": false, + "tf.compat.v1.app.flags.IntegerParser.__eq__": true, + "tf.compat.v1.app.flags.IntegerParser.__ge__": true, + "tf.compat.v1.app.flags.IntegerParser.__gt__": true, + "tf.compat.v1.app.flags.IntegerParser.__init__": true, + "tf.compat.v1.app.flags.IntegerParser.__le__": true, + "tf.compat.v1.app.flags.IntegerParser.__lt__": true, + "tf.compat.v1.app.flags.IntegerParser.__ne__": true, + "tf.compat.v1.app.flags.IntegerParser.__new__": true, + "tf.compat.v1.app.flags.IntegerParser.convert": true, + "tf.compat.v1.app.flags.IntegerParser.flag_type": true, + "tf.compat.v1.app.flags.IntegerParser.is_outside_bounds": true, + "tf.compat.v1.app.flags.IntegerParser.number_article": true, + "tf.compat.v1.app.flags.IntegerParser.number_name": true, + "tf.compat.v1.app.flags.IntegerParser.parse": true, + "tf.compat.v1.app.flags.IntegerParser.syntactic_help": true, + "tf.compat.v1.app.flags.ListParser": false, + "tf.compat.v1.app.flags.ListParser.__eq__": true, + "tf.compat.v1.app.flags.ListParser.__ge__": true, + "tf.compat.v1.app.flags.ListParser.__gt__": true, + "tf.compat.v1.app.flags.ListParser.__init__": true, + "tf.compat.v1.app.flags.ListParser.__le__": true, + "tf.compat.v1.app.flags.ListParser.__lt__": true, + "tf.compat.v1.app.flags.ListParser.__ne__": true, + "tf.compat.v1.app.flags.ListParser.__new__": true, + "tf.compat.v1.app.flags.ListParser.flag_type": true, + "tf.compat.v1.app.flags.ListParser.parse": true, + "tf.compat.v1.app.flags.ListParser.syntactic_help": true, + "tf.compat.v1.app.flags.ListSerializer": false, + "tf.compat.v1.app.flags.ListSerializer.__eq__": true, + "tf.compat.v1.app.flags.ListSerializer.__ge__": true, + "tf.compat.v1.app.flags.ListSerializer.__gt__": true, + "tf.compat.v1.app.flags.ListSerializer.__init__": true, + "tf.compat.v1.app.flags.ListSerializer.__le__": true, + "tf.compat.v1.app.flags.ListSerializer.__lt__": true, + "tf.compat.v1.app.flags.ListSerializer.__ne__": 
true, + "tf.compat.v1.app.flags.ListSerializer.__new__": true, + "tf.compat.v1.app.flags.ListSerializer.serialize": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag": false, + "tf.compat.v1.app.flags.MultiEnumClassFlag.__eq__": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.__ge__": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.__gt__": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.__init__": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.__le__": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.__lt__": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.__ne__": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.__new__": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.flag_type": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.parse": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.serialize": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.unparse": true, + "tf.compat.v1.app.flags.MultiEnumClassFlag.value": true, + "tf.compat.v1.app.flags.MultiFlag": false, + "tf.compat.v1.app.flags.MultiFlag.__eq__": true, + "tf.compat.v1.app.flags.MultiFlag.__ge__": true, + "tf.compat.v1.app.flags.MultiFlag.__gt__": true, + "tf.compat.v1.app.flags.MultiFlag.__init__": true, + "tf.compat.v1.app.flags.MultiFlag.__le__": true, + "tf.compat.v1.app.flags.MultiFlag.__lt__": true, + "tf.compat.v1.app.flags.MultiFlag.__ne__": true, + "tf.compat.v1.app.flags.MultiFlag.__new__": true, + "tf.compat.v1.app.flags.MultiFlag.flag_type": true, + "tf.compat.v1.app.flags.MultiFlag.parse": true, + "tf.compat.v1.app.flags.MultiFlag.serialize": true, + "tf.compat.v1.app.flags.MultiFlag.unparse": true, + "tf.compat.v1.app.flags.MultiFlag.value": true, + "tf.compat.v1.app.flags.UnparsedFlagAccessError": false, + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__eq__": true, + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__ge__": true, + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__gt__": true, + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__init__": true, + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__le__": true, + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__lt__": true, + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__ne__": true, + "tf.compat.v1.app.flags.UnparsedFlagAccessError.__new__": true, + "tf.compat.v1.app.flags.UnparsedFlagAccessError.args": true, + "tf.compat.v1.app.flags.UnparsedFlagAccessError.with_traceback": true, + "tf.compat.v1.app.flags.UnrecognizedFlagError": false, + "tf.compat.v1.app.flags.UnrecognizedFlagError.__eq__": true, + "tf.compat.v1.app.flags.UnrecognizedFlagError.__ge__": true, + "tf.compat.v1.app.flags.UnrecognizedFlagError.__gt__": true, + "tf.compat.v1.app.flags.UnrecognizedFlagError.__init__": true, + "tf.compat.v1.app.flags.UnrecognizedFlagError.__le__": true, + "tf.compat.v1.app.flags.UnrecognizedFlagError.__lt__": true, + "tf.compat.v1.app.flags.UnrecognizedFlagError.__ne__": true, + "tf.compat.v1.app.flags.UnrecognizedFlagError.__new__": true, + "tf.compat.v1.app.flags.UnrecognizedFlagError.args": true, + "tf.compat.v1.app.flags.UnrecognizedFlagError.with_traceback": true, + "tf.compat.v1.app.flags.ValidationError": false, + "tf.compat.v1.app.flags.ValidationError.__eq__": true, + "tf.compat.v1.app.flags.ValidationError.__ge__": true, + "tf.compat.v1.app.flags.ValidationError.__gt__": true, + "tf.compat.v1.app.flags.ValidationError.__init__": true, + "tf.compat.v1.app.flags.ValidationError.__le__": true, + "tf.compat.v1.app.flags.ValidationError.__lt__": true, + "tf.compat.v1.app.flags.ValidationError.__ne__": true, + 
"tf.compat.v1.app.flags.ValidationError.__new__": true, + "tf.compat.v1.app.flags.ValidationError.args": true, + "tf.compat.v1.app.flags.ValidationError.with_traceback": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser": false, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__eq__": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__ge__": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__gt__": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__init__": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__le__": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__lt__": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__ne__": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.__new__": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.flag_type": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.parse": true, + "tf.compat.v1.app.flags.WhitespaceSeparatedListParser.syntactic_help": true, + "tf.compat.v1.app.flags.absolute_import": true, + "tf.compat.v1.app.flags.adopt_module_key_flags": false, + "tf.compat.v1.app.flags.declare_key_flag": false, + "tf.compat.v1.app.flags.disclaim_key_flags": false, + "tf.compat.v1.app.flags.division": true, + "tf.compat.v1.app.flags.doc_to_help": false, + "tf.compat.v1.app.flags.flag_dict_to_args": false, + "tf.compat.v1.app.flags.get_help_width": false, + "tf.compat.v1.app.flags.mark_bool_flags_as_mutual_exclusive": false, + "tf.compat.v1.app.flags.mark_flag_as_required": false, + "tf.compat.v1.app.flags.mark_flags_as_mutual_exclusive": false, + "tf.compat.v1.app.flags.mark_flags_as_required": false, + "tf.compat.v1.app.flags.multi_flags_validator": false, + "tf.compat.v1.app.flags.print_function": true, + "tf.compat.v1.app.flags.register_multi_flags_validator": false, + "tf.compat.v1.app.flags.register_validator": false, + "tf.compat.v1.app.flags.text_wrap": false, + "tf.compat.v1.app.flags.tf_decorator": false, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator": false, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__call__": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__eq__": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__ge__": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__gt__": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__init__": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__le__": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__lt__": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__ne__": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.__new__": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.decorated_target": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.decorator_argspec": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.decorator_doc": true, + "tf.compat.v1.app.flags.tf_decorator.TFDecorator.decorator_name": true, + "tf.compat.v1.app.flags.tf_decorator.absolute_import": true, + "tf.compat.v1.app.flags.tf_decorator.division": true, + "tf.compat.v1.app.flags.tf_decorator.make_decorator": false, + "tf.compat.v1.app.flags.tf_decorator.print_function": true, + "tf.compat.v1.app.flags.tf_decorator.rewrap": false, + "tf.compat.v1.app.flags.tf_decorator.tf_stack": false, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter": false, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__enter__": true, + 
"tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__eq__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__exit__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__ge__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__gt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__init__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__le__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__lt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__ne__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.__new__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.get_filtered_filenames": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter.reset": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary": false, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__eq__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__ge__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__getitem__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__gt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__init__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__iter__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__le__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__len__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__lt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__ne__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.__new__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.filename": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.line": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.lineno": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary.name": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary": false, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__bool__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__contains__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__eq__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__ge__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__getitem__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__gt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__init__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__iter__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__le__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__len__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__lt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__ne__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.__new__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.append": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.count": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.extend": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.insert": true, + 
"tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.pop": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary.remove": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter": false, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__enter__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__eq__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__exit__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__ge__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__gt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__init__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__le__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__lt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__ne__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.__new__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.get_filtered_filenames": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter.reset": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper": false, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__enter__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__eq__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__exit__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__ge__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__gt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__init__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__le__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__lt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__ne__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.__new__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.get_effective_source_map": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper.reset": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform": false, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__enter__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__eq__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__exit__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__ge__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__gt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__init__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__le__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__lt__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__ne__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.__new__": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform.reset": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.absolute_import": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.division": true, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.extract_stack": false, + "tf.compat.v1.app.flags.tf_decorator.tf_stack.print_function": true, + 
"tf.compat.v1.app.flags.tf_decorator.unwrap": false, + "tf.compat.v1.app.flags.validator": false, + "tf.compat.v1.app.run": false, + "tf.compat.v1.arg_max": false, + "tf.compat.v1.arg_min": false, + "tf.compat.v1.argmax": false, + "tf.compat.v1.argmin": false, + "tf.compat.v1.argsort": false, + "tf.compat.v1.as_dtype": false, + "tf.compat.v1.as_string": false, + "tf.compat.v1.asin": false, + "tf.compat.v1.asinh": false, + "tf.compat.v1.assert_equal": false, + "tf.compat.v1.assert_greater": false, + "tf.compat.v1.assert_greater_equal": false, + "tf.compat.v1.assert_integer": false, + "tf.compat.v1.assert_less": false, + "tf.compat.v1.assert_less_equal": false, + "tf.compat.v1.assert_near": false, + "tf.compat.v1.assert_negative": false, + "tf.compat.v1.assert_non_negative": false, + "tf.compat.v1.assert_non_positive": false, + "tf.compat.v1.assert_none_equal": false, + "tf.compat.v1.assert_positive": false, + "tf.compat.v1.assert_proper_iterable": false, + "tf.compat.v1.assert_rank": false, + "tf.compat.v1.assert_rank_at_least": false, + "tf.compat.v1.assert_rank_in": false, + "tf.compat.v1.assert_same_float_dtype": false, + "tf.compat.v1.assert_scalar": false, + "tf.compat.v1.assert_type": false, + "tf.compat.v1.assert_variables_initialized": false, + "tf.compat.v1.assign": false, + "tf.compat.v1.assign_add": false, + "tf.compat.v1.assign_sub": false, + "tf.compat.v1.atan": false, + "tf.compat.v1.atan2": false, + "tf.compat.v1.atanh": false, + "tf.compat.v1.audio": false, + "tf.compat.v1.audio.decode_wav": false, + "tf.compat.v1.audio.encode_wav": false, + "tf.compat.v1.autograph": false, + "tf.compat.v1.autograph.experimental": false, + "tf.compat.v1.autograph.experimental.Feature": false, + "tf.compat.v1.autograph.experimental.Feature.ALL": true, + "tf.compat.v1.autograph.experimental.Feature.ASSERT_STATEMENTS": true, + "tf.compat.v1.autograph.experimental.Feature.AUTO_CONTROL_DEPS": true, + "tf.compat.v1.autograph.experimental.Feature.BUILTIN_FUNCTIONS": true, + "tf.compat.v1.autograph.experimental.Feature.EQUALITY_OPERATORS": true, + "tf.compat.v1.autograph.experimental.Feature.LISTS": true, + "tf.compat.v1.autograph.experimental.Feature.NAME_SCOPES": true, + "tf.compat.v1.autograph.experimental.Feature.name": true, + "tf.compat.v1.autograph.experimental.Feature.value": true, + "tf.compat.v1.autograph.experimental.do_not_convert": false, + "tf.compat.v1.autograph.experimental.set_loop_options": false, + "tf.compat.v1.autograph.set_verbosity": false, + "tf.compat.v1.autograph.to_code": false, + "tf.compat.v1.autograph.to_graph": false, + "tf.compat.v1.autograph.trace": false, + "tf.compat.v1.batch_gather": false, + "tf.compat.v1.batch_scatter_update": false, + "tf.compat.v1.batch_to_space": false, + "tf.compat.v1.batch_to_space_nd": false, + "tf.compat.v1.betainc": false, + "tf.compat.v1.bfloat16": true, + "tf.compat.v1.bincount": false, + "tf.compat.v1.bitcast": false, + "tf.compat.v1.bitwise": false, + "tf.compat.v1.bitwise.bitwise_and": false, + "tf.compat.v1.bitwise.bitwise_or": false, + "tf.compat.v1.bitwise.bitwise_xor": false, + "tf.compat.v1.bitwise.invert": false, + "tf.compat.v1.bitwise.left_shift": false, + "tf.compat.v1.bitwise.right_shift": false, + "tf.compat.v1.bool": true, + "tf.compat.v1.boolean_mask": false, + "tf.compat.v1.broadcast_dynamic_shape": false, + "tf.compat.v1.broadcast_static_shape": false, + "tf.compat.v1.broadcast_to": false, + "tf.compat.v1.case": false, + "tf.compat.v1.cast": false, + "tf.compat.v1.ceil": false, + "tf.compat.v1.check_numerics": false, 
+ "tf.compat.v1.cholesky": false, + "tf.compat.v1.cholesky_solve": false, + "tf.compat.v1.clip_by_average_norm": false, + "tf.compat.v1.clip_by_global_norm": false, + "tf.compat.v1.clip_by_norm": false, + "tf.compat.v1.clip_by_value": false, + "tf.compat.v1.colocate_with": false, + "tf.compat.v1.compat": false, + "tf.compat.v1.compat.as_bytes": false, + "tf.compat.v1.compat.as_str": false, + "tf.compat.v1.compat.as_str_any": false, + "tf.compat.v1.compat.as_text": false, + "tf.compat.v1.compat.bytes_or_text_types": true, + "tf.compat.v1.compat.complex_types": true, + "tf.compat.v1.compat.dimension_at_index": false, + "tf.compat.v1.compat.dimension_value": false, + "tf.compat.v1.compat.forward_compatibility_horizon": false, + "tf.compat.v1.compat.forward_compatible": false, + "tf.compat.v1.compat.integral_types": true, + "tf.compat.v1.compat.path_to_str": false, + "tf.compat.v1.compat.real_types": true, + "tf.compat.v1.complex": false, + "tf.compat.v1.complex128": true, + "tf.compat.v1.complex64": true, + "tf.compat.v1.concat": false, + "tf.compat.v1.cond": false, + "tf.compat.v1.config": false, + "tf.compat.v1.config.LogicalDevice": false, + "tf.compat.v1.config.LogicalDevice.__add__": true, + "tf.compat.v1.config.LogicalDevice.__contains__": true, + "tf.compat.v1.config.LogicalDevice.__eq__": true, + "tf.compat.v1.config.LogicalDevice.__ge__": true, + "tf.compat.v1.config.LogicalDevice.__getitem__": true, + "tf.compat.v1.config.LogicalDevice.__gt__": true, + "tf.compat.v1.config.LogicalDevice.__init__": true, + "tf.compat.v1.config.LogicalDevice.__iter__": true, + "tf.compat.v1.config.LogicalDevice.__le__": true, + "tf.compat.v1.config.LogicalDevice.__len__": true, + "tf.compat.v1.config.LogicalDevice.__lt__": true, + "tf.compat.v1.config.LogicalDevice.__mul__": true, + "tf.compat.v1.config.LogicalDevice.__ne__": true, + "tf.compat.v1.config.LogicalDevice.__new__": true, + "tf.compat.v1.config.LogicalDevice.__rmul__": true, + "tf.compat.v1.config.LogicalDevice.count": true, + "tf.compat.v1.config.LogicalDevice.device_type": true, + "tf.compat.v1.config.LogicalDevice.index": true, + "tf.compat.v1.config.LogicalDevice.name": true, + "tf.compat.v1.config.LogicalDeviceConfiguration": false, + "tf.compat.v1.config.LogicalDeviceConfiguration.__add__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__contains__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__eq__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__ge__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__getitem__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__gt__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__init__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__iter__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__le__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__len__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__lt__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__mul__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__ne__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__new__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.__rmul__": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.count": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.index": true, + "tf.compat.v1.config.LogicalDeviceConfiguration.memory_limit": true, + "tf.compat.v1.config.PhysicalDevice": false, + "tf.compat.v1.config.PhysicalDevice.__add__": true, + 
"tf.compat.v1.config.PhysicalDevice.__contains__": true, + "tf.compat.v1.config.PhysicalDevice.__eq__": true, + "tf.compat.v1.config.PhysicalDevice.__ge__": true, + "tf.compat.v1.config.PhysicalDevice.__getitem__": true, + "tf.compat.v1.config.PhysicalDevice.__gt__": true, + "tf.compat.v1.config.PhysicalDevice.__init__": true, + "tf.compat.v1.config.PhysicalDevice.__iter__": true, + "tf.compat.v1.config.PhysicalDevice.__le__": true, + "tf.compat.v1.config.PhysicalDevice.__len__": true, + "tf.compat.v1.config.PhysicalDevice.__lt__": true, + "tf.compat.v1.config.PhysicalDevice.__mul__": true, + "tf.compat.v1.config.PhysicalDevice.__ne__": true, + "tf.compat.v1.config.PhysicalDevice.__new__": true, + "tf.compat.v1.config.PhysicalDevice.__rmul__": true, + "tf.compat.v1.config.PhysicalDevice.count": true, + "tf.compat.v1.config.PhysicalDevice.device_type": true, + "tf.compat.v1.config.PhysicalDevice.index": true, + "tf.compat.v1.config.PhysicalDevice.name": true, + "tf.compat.v1.config.experimental": false, + "tf.compat.v1.config.experimental.ClusterDeviceFilters": false, + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__eq__": true, + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__ge__": true, + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__gt__": true, + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__init__": true, + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__le__": true, + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__lt__": true, + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__ne__": true, + "tf.compat.v1.config.experimental.ClusterDeviceFilters.__new__": true, + "tf.compat.v1.config.experimental.ClusterDeviceFilters.set_device_filters": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration": false, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__add__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__contains__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__eq__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__ge__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__getitem__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__gt__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__init__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__iter__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__le__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__len__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__lt__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__mul__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__ne__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__new__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.__rmul__": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.count": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.index": true, + "tf.compat.v1.config.experimental.VirtualDeviceConfiguration.memory_limit": true, + "tf.compat.v1.config.experimental.disable_mlir_bridge": false, + "tf.compat.v1.config.experimental.enable_mlir_bridge": false, + "tf.compat.v1.config.experimental.get_device_policy": false, + "tf.compat.v1.config.experimental.get_memory_growth": false, + 
"tf.compat.v1.config.experimental.get_synchronous_execution": false, + "tf.compat.v1.config.experimental.get_virtual_device_configuration": false, + "tf.compat.v1.config.experimental.get_visible_devices": false, + "tf.compat.v1.config.experimental.list_logical_devices": false, + "tf.compat.v1.config.experimental.list_physical_devices": false, + "tf.compat.v1.config.experimental.set_device_policy": false, + "tf.compat.v1.config.experimental.set_memory_growth": false, + "tf.compat.v1.config.experimental.set_synchronous_execution": false, + "tf.compat.v1.config.experimental.set_virtual_device_configuration": false, + "tf.compat.v1.config.experimental.set_visible_devices": false, + "tf.compat.v1.config.experimental_connect_to_cluster": false, + "tf.compat.v1.config.experimental_connect_to_host": false, + "tf.compat.v1.config.experimental_functions_run_eagerly": false, + "tf.compat.v1.config.experimental_run_functions_eagerly": false, + "tf.compat.v1.config.get_logical_device_configuration": false, + "tf.compat.v1.config.get_soft_device_placement": false, + "tf.compat.v1.config.get_visible_devices": false, + "tf.compat.v1.config.list_logical_devices": false, + "tf.compat.v1.config.list_physical_devices": false, + "tf.compat.v1.config.optimizer": false, + "tf.compat.v1.config.optimizer.get_experimental_options": false, + "tf.compat.v1.config.optimizer.get_jit": false, + "tf.compat.v1.config.optimizer.set_experimental_options": false, + "tf.compat.v1.config.optimizer.set_jit": false, + "tf.compat.v1.config.set_logical_device_configuration": false, + "tf.compat.v1.config.set_soft_device_placement": false, + "tf.compat.v1.config.set_visible_devices": false, + "tf.compat.v1.config.threading": false, + "tf.compat.v1.config.threading.get_inter_op_parallelism_threads": false, + "tf.compat.v1.config.threading.get_intra_op_parallelism_threads": false, + "tf.compat.v1.config.threading.set_inter_op_parallelism_threads": false, + "tf.compat.v1.config.threading.set_intra_op_parallelism_threads": false, + "tf.compat.v1.confusion_matrix": false, + "tf.compat.v1.conj": false, + "tf.compat.v1.constant": false, + "tf.compat.v1.constant_initializer": false, + "tf.compat.v1.constant_initializer.__call__": true, + "tf.compat.v1.constant_initializer.__eq__": true, + "tf.compat.v1.constant_initializer.__ge__": true, + "tf.compat.v1.constant_initializer.__gt__": true, + "tf.compat.v1.constant_initializer.__init__": true, + "tf.compat.v1.constant_initializer.__le__": true, + "tf.compat.v1.constant_initializer.__lt__": true, + "tf.compat.v1.constant_initializer.__ne__": true, + "tf.compat.v1.constant_initializer.__new__": true, + "tf.compat.v1.constant_initializer.from_config": true, + "tf.compat.v1.constant_initializer.get_config": true, + "tf.compat.v1.container": false, + "tf.compat.v1.control_dependencies": false, + "tf.compat.v1.control_flow_v2_enabled": false, + "tf.compat.v1.convert_to_tensor": false, + "tf.compat.v1.convert_to_tensor_or_indexed_slices": false, + "tf.compat.v1.convert_to_tensor_or_sparse_tensor": false, + "tf.compat.v1.cos": false, + "tf.compat.v1.cosh": false, + "tf.compat.v1.count_nonzero": false, + "tf.compat.v1.count_up_to": false, + "tf.compat.v1.create_partitioned_variables": false, + "tf.compat.v1.cross": false, + "tf.compat.v1.cumprod": false, + "tf.compat.v1.cumsum": false, + "tf.compat.v1.custom_gradient": false, + "tf.compat.v1.data": false, + "tf.compat.v1.data.Dataset": false, + "tf.compat.v1.data.Dataset.__eq__": true, + "tf.compat.v1.data.Dataset.__ge__": true, + 
"tf.compat.v1.data.Dataset.__gt__": true, + "tf.compat.v1.data.Dataset.__init__": true, + "tf.compat.v1.data.Dataset.__iter__": true, + "tf.compat.v1.data.Dataset.__le__": true, + "tf.compat.v1.data.Dataset.__lt__": true, + "tf.compat.v1.data.Dataset.__ne__": true, + "tf.compat.v1.data.Dataset.__new__": true, + "tf.compat.v1.data.Dataset.apply": true, + "tf.compat.v1.data.Dataset.as_numpy_iterator": true, + "tf.compat.v1.data.Dataset.batch": true, + "tf.compat.v1.data.Dataset.cache": true, + "tf.compat.v1.data.Dataset.concatenate": true, + "tf.compat.v1.data.Dataset.element_spec": true, + "tf.compat.v1.data.Dataset.enumerate": true, + "tf.compat.v1.data.Dataset.filter": true, + "tf.compat.v1.data.Dataset.filter_with_legacy_function": true, + "tf.compat.v1.data.Dataset.flat_map": true, + "tf.compat.v1.data.Dataset.from_generator": true, + "tf.compat.v1.data.Dataset.from_sparse_tensor_slices": true, + "tf.compat.v1.data.Dataset.from_tensor_slices": true, + "tf.compat.v1.data.Dataset.from_tensors": true, + "tf.compat.v1.data.Dataset.interleave": true, + "tf.compat.v1.data.Dataset.list_files": true, + "tf.compat.v1.data.Dataset.make_initializable_iterator": true, + "tf.compat.v1.data.Dataset.make_one_shot_iterator": true, + "tf.compat.v1.data.Dataset.map": true, + "tf.compat.v1.data.Dataset.map_with_legacy_function": true, + "tf.compat.v1.data.Dataset.options": true, + "tf.compat.v1.data.Dataset.output_classes": true, + "tf.compat.v1.data.Dataset.output_shapes": true, + "tf.compat.v1.data.Dataset.output_types": true, + "tf.compat.v1.data.Dataset.padded_batch": true, + "tf.compat.v1.data.Dataset.prefetch": true, + "tf.compat.v1.data.Dataset.range": true, + "tf.compat.v1.data.Dataset.reduce": true, + "tf.compat.v1.data.Dataset.repeat": true, + "tf.compat.v1.data.Dataset.shard": true, + "tf.compat.v1.data.Dataset.shuffle": true, + "tf.compat.v1.data.Dataset.skip": true, + "tf.compat.v1.data.Dataset.take": true, + "tf.compat.v1.data.Dataset.unbatch": true, + "tf.compat.v1.data.Dataset.window": true, + "tf.compat.v1.data.Dataset.with_options": true, + "tf.compat.v1.data.Dataset.zip": true, + "tf.compat.v1.data.DatasetSpec": false, + "tf.compat.v1.data.DatasetSpec.__eq__": true, + "tf.compat.v1.data.DatasetSpec.__ge__": true, + "tf.compat.v1.data.DatasetSpec.__gt__": true, + "tf.compat.v1.data.DatasetSpec.__init__": true, + "tf.compat.v1.data.DatasetSpec.__le__": true, + "tf.compat.v1.data.DatasetSpec.__lt__": true, + "tf.compat.v1.data.DatasetSpec.__ne__": true, + "tf.compat.v1.data.DatasetSpec.__new__": true, + "tf.compat.v1.data.DatasetSpec.from_value": true, + "tf.compat.v1.data.DatasetSpec.is_compatible_with": true, + "tf.compat.v1.data.DatasetSpec.most_specific_compatible_type": true, + "tf.compat.v1.data.DatasetSpec.value_type": true, + "tf.compat.v1.data.FixedLengthRecordDataset": false, + "tf.compat.v1.data.FixedLengthRecordDataset.__eq__": true, + "tf.compat.v1.data.FixedLengthRecordDataset.__ge__": true, + "tf.compat.v1.data.FixedLengthRecordDataset.__gt__": true, + "tf.compat.v1.data.FixedLengthRecordDataset.__init__": true, + "tf.compat.v1.data.FixedLengthRecordDataset.__iter__": true, + "tf.compat.v1.data.FixedLengthRecordDataset.__le__": true, + "tf.compat.v1.data.FixedLengthRecordDataset.__lt__": true, + "tf.compat.v1.data.FixedLengthRecordDataset.__ne__": true, + "tf.compat.v1.data.FixedLengthRecordDataset.__new__": true, + "tf.compat.v1.data.FixedLengthRecordDataset.apply": true, + "tf.compat.v1.data.FixedLengthRecordDataset.as_numpy_iterator": true, + 
"tf.compat.v1.data.FixedLengthRecordDataset.batch": true, + "tf.compat.v1.data.FixedLengthRecordDataset.cache": true, + "tf.compat.v1.data.FixedLengthRecordDataset.concatenate": true, + "tf.compat.v1.data.FixedLengthRecordDataset.element_spec": true, + "tf.compat.v1.data.FixedLengthRecordDataset.enumerate": true, + "tf.compat.v1.data.FixedLengthRecordDataset.filter": true, + "tf.compat.v1.data.FixedLengthRecordDataset.filter_with_legacy_function": true, + "tf.compat.v1.data.FixedLengthRecordDataset.flat_map": true, + "tf.compat.v1.data.FixedLengthRecordDataset.from_generator": true, + "tf.compat.v1.data.FixedLengthRecordDataset.from_sparse_tensor_slices": true, + "tf.compat.v1.data.FixedLengthRecordDataset.from_tensor_slices": true, + "tf.compat.v1.data.FixedLengthRecordDataset.from_tensors": true, + "tf.compat.v1.data.FixedLengthRecordDataset.interleave": true, + "tf.compat.v1.data.FixedLengthRecordDataset.list_files": true, + "tf.compat.v1.data.FixedLengthRecordDataset.make_initializable_iterator": true, + "tf.compat.v1.data.FixedLengthRecordDataset.make_one_shot_iterator": true, + "tf.compat.v1.data.FixedLengthRecordDataset.map": true, + "tf.compat.v1.data.FixedLengthRecordDataset.map_with_legacy_function": true, + "tf.compat.v1.data.FixedLengthRecordDataset.options": true, + "tf.compat.v1.data.FixedLengthRecordDataset.output_classes": true, + "tf.compat.v1.data.FixedLengthRecordDataset.output_shapes": true, + "tf.compat.v1.data.FixedLengthRecordDataset.output_types": true, + "tf.compat.v1.data.FixedLengthRecordDataset.padded_batch": true, + "tf.compat.v1.data.FixedLengthRecordDataset.prefetch": true, + "tf.compat.v1.data.FixedLengthRecordDataset.range": true, + "tf.compat.v1.data.FixedLengthRecordDataset.reduce": true, + "tf.compat.v1.data.FixedLengthRecordDataset.repeat": true, + "tf.compat.v1.data.FixedLengthRecordDataset.shard": true, + "tf.compat.v1.data.FixedLengthRecordDataset.shuffle": true, + "tf.compat.v1.data.FixedLengthRecordDataset.skip": true, + "tf.compat.v1.data.FixedLengthRecordDataset.take": true, + "tf.compat.v1.data.FixedLengthRecordDataset.unbatch": true, + "tf.compat.v1.data.FixedLengthRecordDataset.window": true, + "tf.compat.v1.data.FixedLengthRecordDataset.with_options": true, + "tf.compat.v1.data.FixedLengthRecordDataset.zip": true, + "tf.compat.v1.data.Iterator": false, + "tf.compat.v1.data.Iterator.__eq__": true, + "tf.compat.v1.data.Iterator.__ge__": true, + "tf.compat.v1.data.Iterator.__gt__": true, + "tf.compat.v1.data.Iterator.__init__": true, + "tf.compat.v1.data.Iterator.__le__": true, + "tf.compat.v1.data.Iterator.__lt__": true, + "tf.compat.v1.data.Iterator.__ne__": true, + "tf.compat.v1.data.Iterator.__new__": true, + "tf.compat.v1.data.Iterator.element_spec": true, + "tf.compat.v1.data.Iterator.from_string_handle": true, + "tf.compat.v1.data.Iterator.from_structure": true, + "tf.compat.v1.data.Iterator.get_next": true, + "tf.compat.v1.data.Iterator.initializer": true, + "tf.compat.v1.data.Iterator.make_initializer": true, + "tf.compat.v1.data.Iterator.output_classes": true, + "tf.compat.v1.data.Iterator.output_shapes": true, + "tf.compat.v1.data.Iterator.output_types": true, + "tf.compat.v1.data.Iterator.string_handle": true, + "tf.compat.v1.data.Options": false, + "tf.compat.v1.data.Options.__eq__": true, + "tf.compat.v1.data.Options.__ge__": true, + "tf.compat.v1.data.Options.__gt__": true, + "tf.compat.v1.data.Options.__init__": true, + "tf.compat.v1.data.Options.__le__": true, + "tf.compat.v1.data.Options.__lt__": true, + 
"tf.compat.v1.data.Options.__ne__": true, + "tf.compat.v1.data.Options.__new__": true, + "tf.compat.v1.data.Options.experimental_deterministic": true, + "tf.compat.v1.data.Options.experimental_distribute": true, + "tf.compat.v1.data.Options.experimental_external_state_policy": true, + "tf.compat.v1.data.Options.experimental_optimization": true, + "tf.compat.v1.data.Options.experimental_slack": true, + "tf.compat.v1.data.Options.experimental_stats": true, + "tf.compat.v1.data.Options.experimental_threading": true, + "tf.compat.v1.data.Options.merge": true, + "tf.compat.v1.data.TFRecordDataset": false, + "tf.compat.v1.data.TFRecordDataset.__eq__": true, + "tf.compat.v1.data.TFRecordDataset.__ge__": true, + "tf.compat.v1.data.TFRecordDataset.__gt__": true, + "tf.compat.v1.data.TFRecordDataset.__init__": true, + "tf.compat.v1.data.TFRecordDataset.__iter__": true, + "tf.compat.v1.data.TFRecordDataset.__le__": true, + "tf.compat.v1.data.TFRecordDataset.__lt__": true, + "tf.compat.v1.data.TFRecordDataset.__ne__": true, + "tf.compat.v1.data.TFRecordDataset.__new__": true, + "tf.compat.v1.data.TFRecordDataset.apply": true, + "tf.compat.v1.data.TFRecordDataset.as_numpy_iterator": true, + "tf.compat.v1.data.TFRecordDataset.batch": true, + "tf.compat.v1.data.TFRecordDataset.cache": true, + "tf.compat.v1.data.TFRecordDataset.concatenate": true, + "tf.compat.v1.data.TFRecordDataset.element_spec": true, + "tf.compat.v1.data.TFRecordDataset.enumerate": true, + "tf.compat.v1.data.TFRecordDataset.filter": true, + "tf.compat.v1.data.TFRecordDataset.filter_with_legacy_function": true, + "tf.compat.v1.data.TFRecordDataset.flat_map": true, + "tf.compat.v1.data.TFRecordDataset.from_generator": true, + "tf.compat.v1.data.TFRecordDataset.from_sparse_tensor_slices": true, + "tf.compat.v1.data.TFRecordDataset.from_tensor_slices": true, + "tf.compat.v1.data.TFRecordDataset.from_tensors": true, + "tf.compat.v1.data.TFRecordDataset.interleave": true, + "tf.compat.v1.data.TFRecordDataset.list_files": true, + "tf.compat.v1.data.TFRecordDataset.make_initializable_iterator": true, + "tf.compat.v1.data.TFRecordDataset.make_one_shot_iterator": true, + "tf.compat.v1.data.TFRecordDataset.map": true, + "tf.compat.v1.data.TFRecordDataset.map_with_legacy_function": true, + "tf.compat.v1.data.TFRecordDataset.options": true, + "tf.compat.v1.data.TFRecordDataset.output_classes": true, + "tf.compat.v1.data.TFRecordDataset.output_shapes": true, + "tf.compat.v1.data.TFRecordDataset.output_types": true, + "tf.compat.v1.data.TFRecordDataset.padded_batch": true, + "tf.compat.v1.data.TFRecordDataset.prefetch": true, + "tf.compat.v1.data.TFRecordDataset.range": true, + "tf.compat.v1.data.TFRecordDataset.reduce": true, + "tf.compat.v1.data.TFRecordDataset.repeat": true, + "tf.compat.v1.data.TFRecordDataset.shard": true, + "tf.compat.v1.data.TFRecordDataset.shuffle": true, + "tf.compat.v1.data.TFRecordDataset.skip": true, + "tf.compat.v1.data.TFRecordDataset.take": true, + "tf.compat.v1.data.TFRecordDataset.unbatch": true, + "tf.compat.v1.data.TFRecordDataset.window": true, + "tf.compat.v1.data.TFRecordDataset.with_options": true, + "tf.compat.v1.data.TFRecordDataset.zip": true, + "tf.compat.v1.data.TextLineDataset": false, + "tf.compat.v1.data.TextLineDataset.__eq__": true, + "tf.compat.v1.data.TextLineDataset.__ge__": true, + "tf.compat.v1.data.TextLineDataset.__gt__": true, + "tf.compat.v1.data.TextLineDataset.__init__": true, + "tf.compat.v1.data.TextLineDataset.__iter__": true, + "tf.compat.v1.data.TextLineDataset.__le__": true, + 
"tf.compat.v1.data.TextLineDataset.__lt__": true, + "tf.compat.v1.data.TextLineDataset.__ne__": true, + "tf.compat.v1.data.TextLineDataset.__new__": true, + "tf.compat.v1.data.TextLineDataset.apply": true, + "tf.compat.v1.data.TextLineDataset.as_numpy_iterator": true, + "tf.compat.v1.data.TextLineDataset.batch": true, + "tf.compat.v1.data.TextLineDataset.cache": true, + "tf.compat.v1.data.TextLineDataset.concatenate": true, + "tf.compat.v1.data.TextLineDataset.element_spec": true, + "tf.compat.v1.data.TextLineDataset.enumerate": true, + "tf.compat.v1.data.TextLineDataset.filter": true, + "tf.compat.v1.data.TextLineDataset.filter_with_legacy_function": true, + "tf.compat.v1.data.TextLineDataset.flat_map": true, + "tf.compat.v1.data.TextLineDataset.from_generator": true, + "tf.compat.v1.data.TextLineDataset.from_sparse_tensor_slices": true, + "tf.compat.v1.data.TextLineDataset.from_tensor_slices": true, + "tf.compat.v1.data.TextLineDataset.from_tensors": true, + "tf.compat.v1.data.TextLineDataset.interleave": true, + "tf.compat.v1.data.TextLineDataset.list_files": true, + "tf.compat.v1.data.TextLineDataset.make_initializable_iterator": true, + "tf.compat.v1.data.TextLineDataset.make_one_shot_iterator": true, + "tf.compat.v1.data.TextLineDataset.map": true, + "tf.compat.v1.data.TextLineDataset.map_with_legacy_function": true, + "tf.compat.v1.data.TextLineDataset.options": true, + "tf.compat.v1.data.TextLineDataset.output_classes": true, + "tf.compat.v1.data.TextLineDataset.output_shapes": true, + "tf.compat.v1.data.TextLineDataset.output_types": true, + "tf.compat.v1.data.TextLineDataset.padded_batch": true, + "tf.compat.v1.data.TextLineDataset.prefetch": true, + "tf.compat.v1.data.TextLineDataset.range": true, + "tf.compat.v1.data.TextLineDataset.reduce": true, + "tf.compat.v1.data.TextLineDataset.repeat": true, + "tf.compat.v1.data.TextLineDataset.shard": true, + "tf.compat.v1.data.TextLineDataset.shuffle": true, + "tf.compat.v1.data.TextLineDataset.skip": true, + "tf.compat.v1.data.TextLineDataset.take": true, + "tf.compat.v1.data.TextLineDataset.unbatch": true, + "tf.compat.v1.data.TextLineDataset.window": true, + "tf.compat.v1.data.TextLineDataset.with_options": true, + "tf.compat.v1.data.TextLineDataset.zip": true, + "tf.compat.v1.data.experimental": false, + "tf.compat.v1.data.experimental.AUTOTUNE": true, + "tf.compat.v1.data.experimental.AutoShardPolicy": false, + "tf.compat.v1.data.experimental.AutoShardPolicy.AUTO": true, + "tf.compat.v1.data.experimental.AutoShardPolicy.DATA": true, + "tf.compat.v1.data.experimental.AutoShardPolicy.FILE": true, + "tf.compat.v1.data.experimental.AutoShardPolicy.OFF": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook": false, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__eq__": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__ge__": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__gt__": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__init__": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__le__": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__lt__": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__ne__": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.__new__": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.after_create_session": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.after_run": true, + 
"tf.compat.v1.data.experimental.CheckpointInputPipelineHook.before_run": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.begin": true, + "tf.compat.v1.data.experimental.CheckpointInputPipelineHook.end": true, + "tf.compat.v1.data.experimental.Counter": false, + "tf.compat.v1.data.experimental.CsvDataset": false, + "tf.compat.v1.data.experimental.CsvDataset.__eq__": true, + "tf.compat.v1.data.experimental.CsvDataset.__ge__": true, + "tf.compat.v1.data.experimental.CsvDataset.__gt__": true, + "tf.compat.v1.data.experimental.CsvDataset.__init__": true, + "tf.compat.v1.data.experimental.CsvDataset.__iter__": true, + "tf.compat.v1.data.experimental.CsvDataset.__le__": true, + "tf.compat.v1.data.experimental.CsvDataset.__lt__": true, + "tf.compat.v1.data.experimental.CsvDataset.__ne__": true, + "tf.compat.v1.data.experimental.CsvDataset.__new__": true, + "tf.compat.v1.data.experimental.CsvDataset.apply": true, + "tf.compat.v1.data.experimental.CsvDataset.as_numpy_iterator": true, + "tf.compat.v1.data.experimental.CsvDataset.batch": true, + "tf.compat.v1.data.experimental.CsvDataset.cache": true, + "tf.compat.v1.data.experimental.CsvDataset.concatenate": true, + "tf.compat.v1.data.experimental.CsvDataset.element_spec": true, + "tf.compat.v1.data.experimental.CsvDataset.enumerate": true, + "tf.compat.v1.data.experimental.CsvDataset.filter": true, + "tf.compat.v1.data.experimental.CsvDataset.filter_with_legacy_function": true, + "tf.compat.v1.data.experimental.CsvDataset.flat_map": true, + "tf.compat.v1.data.experimental.CsvDataset.from_generator": true, + "tf.compat.v1.data.experimental.CsvDataset.from_sparse_tensor_slices": true, + "tf.compat.v1.data.experimental.CsvDataset.from_tensor_slices": true, + "tf.compat.v1.data.experimental.CsvDataset.from_tensors": true, + "tf.compat.v1.data.experimental.CsvDataset.interleave": true, + "tf.compat.v1.data.experimental.CsvDataset.list_files": true, + "tf.compat.v1.data.experimental.CsvDataset.make_initializable_iterator": true, + "tf.compat.v1.data.experimental.CsvDataset.make_one_shot_iterator": true, + "tf.compat.v1.data.experimental.CsvDataset.map": true, + "tf.compat.v1.data.experimental.CsvDataset.map_with_legacy_function": true, + "tf.compat.v1.data.experimental.CsvDataset.options": true, + "tf.compat.v1.data.experimental.CsvDataset.output_classes": true, + "tf.compat.v1.data.experimental.CsvDataset.output_shapes": true, + "tf.compat.v1.data.experimental.CsvDataset.output_types": true, + "tf.compat.v1.data.experimental.CsvDataset.padded_batch": true, + "tf.compat.v1.data.experimental.CsvDataset.prefetch": true, + "tf.compat.v1.data.experimental.CsvDataset.range": true, + "tf.compat.v1.data.experimental.CsvDataset.reduce": true, + "tf.compat.v1.data.experimental.CsvDataset.repeat": true, + "tf.compat.v1.data.experimental.CsvDataset.shard": true, + "tf.compat.v1.data.experimental.CsvDataset.shuffle": true, + "tf.compat.v1.data.experimental.CsvDataset.skip": true, + "tf.compat.v1.data.experimental.CsvDataset.take": true, + "tf.compat.v1.data.experimental.CsvDataset.unbatch": true, + "tf.compat.v1.data.experimental.CsvDataset.window": true, + "tf.compat.v1.data.experimental.CsvDataset.with_options": true, + "tf.compat.v1.data.experimental.CsvDataset.zip": true, + "tf.compat.v1.data.experimental.DatasetStructure": false, + "tf.compat.v1.data.experimental.DatasetStructure.__eq__": true, + "tf.compat.v1.data.experimental.DatasetStructure.__ge__": true, + "tf.compat.v1.data.experimental.DatasetStructure.__gt__": true, + 
"tf.compat.v1.data.experimental.DatasetStructure.__init__": true, + "tf.compat.v1.data.experimental.DatasetStructure.__le__": true, + "tf.compat.v1.data.experimental.DatasetStructure.__lt__": true, + "tf.compat.v1.data.experimental.DatasetStructure.__ne__": true, + "tf.compat.v1.data.experimental.DatasetStructure.__new__": true, + "tf.compat.v1.data.experimental.DatasetStructure.from_value": true, + "tf.compat.v1.data.experimental.DatasetStructure.is_compatible_with": true, + "tf.compat.v1.data.experimental.DatasetStructure.most_specific_compatible_type": true, + "tf.compat.v1.data.experimental.DatasetStructure.value_type": true, + "tf.compat.v1.data.experimental.DistributeOptions": false, + "tf.compat.v1.data.experimental.DistributeOptions.__eq__": true, + "tf.compat.v1.data.experimental.DistributeOptions.__ge__": true, + "tf.compat.v1.data.experimental.DistributeOptions.__gt__": true, + "tf.compat.v1.data.experimental.DistributeOptions.__init__": true, + "tf.compat.v1.data.experimental.DistributeOptions.__le__": true, + "tf.compat.v1.data.experimental.DistributeOptions.__lt__": true, + "tf.compat.v1.data.experimental.DistributeOptions.__ne__": true, + "tf.compat.v1.data.experimental.DistributeOptions.__new__": true, + "tf.compat.v1.data.experimental.DistributeOptions.auto_shard_policy": true, + "tf.compat.v1.data.experimental.DistributeOptions.num_devices": true, + "tf.compat.v1.data.experimental.INFINITE_CARDINALITY": true, + "tf.compat.v1.data.experimental.MapVectorizationOptions": false, + "tf.compat.v1.data.experimental.MapVectorizationOptions.__eq__": true, + "tf.compat.v1.data.experimental.MapVectorizationOptions.__ge__": true, + "tf.compat.v1.data.experimental.MapVectorizationOptions.__gt__": true, + "tf.compat.v1.data.experimental.MapVectorizationOptions.__init__": true, + "tf.compat.v1.data.experimental.MapVectorizationOptions.__le__": true, + "tf.compat.v1.data.experimental.MapVectorizationOptions.__lt__": true, + "tf.compat.v1.data.experimental.MapVectorizationOptions.__ne__": true, + "tf.compat.v1.data.experimental.MapVectorizationOptions.__new__": true, + "tf.compat.v1.data.experimental.MapVectorizationOptions.enabled": true, + "tf.compat.v1.data.experimental.MapVectorizationOptions.use_choose_fastest": true, + "tf.compat.v1.data.experimental.OptimizationOptions": false, + "tf.compat.v1.data.experimental.OptimizationOptions.__eq__": true, + "tf.compat.v1.data.experimental.OptimizationOptions.__ge__": true, + "tf.compat.v1.data.experimental.OptimizationOptions.__gt__": true, + "tf.compat.v1.data.experimental.OptimizationOptions.__init__": true, + "tf.compat.v1.data.experimental.OptimizationOptions.__le__": true, + "tf.compat.v1.data.experimental.OptimizationOptions.__lt__": true, + "tf.compat.v1.data.experimental.OptimizationOptions.__ne__": true, + "tf.compat.v1.data.experimental.OptimizationOptions.__new__": true, + "tf.compat.v1.data.experimental.OptimizationOptions.apply_default_optimizations": true, + "tf.compat.v1.data.experimental.OptimizationOptions.autotune": true, + "tf.compat.v1.data.experimental.OptimizationOptions.autotune_buffers": true, + "tf.compat.v1.data.experimental.OptimizationOptions.autotune_cpu_budget": true, + "tf.compat.v1.data.experimental.OptimizationOptions.filter_fusion": true, + "tf.compat.v1.data.experimental.OptimizationOptions.filter_with_random_uniform_fusion": true, + "tf.compat.v1.data.experimental.OptimizationOptions.hoist_random_uniform": true, + "tf.compat.v1.data.experimental.OptimizationOptions.map_and_batch_fusion": true, + 
"tf.compat.v1.data.experimental.OptimizationOptions.map_and_filter_fusion": true, + "tf.compat.v1.data.experimental.OptimizationOptions.map_fusion": true, + "tf.compat.v1.data.experimental.OptimizationOptions.map_parallelization": true, + "tf.compat.v1.data.experimental.OptimizationOptions.map_vectorization": true, + "tf.compat.v1.data.experimental.OptimizationOptions.noop_elimination": true, + "tf.compat.v1.data.experimental.OptimizationOptions.parallel_batch": true, + "tf.compat.v1.data.experimental.OptimizationOptions.shuffle_and_repeat_fusion": true, + "tf.compat.v1.data.experimental.Optional": false, + "tf.compat.v1.data.experimental.Optional.__eq__": true, + "tf.compat.v1.data.experimental.Optional.__ge__": true, + "tf.compat.v1.data.experimental.Optional.__gt__": true, + "tf.compat.v1.data.experimental.Optional.__init__": true, + "tf.compat.v1.data.experimental.Optional.__le__": true, + "tf.compat.v1.data.experimental.Optional.__lt__": true, + "tf.compat.v1.data.experimental.Optional.__ne__": true, + "tf.compat.v1.data.experimental.Optional.__new__": true, + "tf.compat.v1.data.experimental.Optional.from_value": true, + "tf.compat.v1.data.experimental.Optional.get_value": true, + "tf.compat.v1.data.experimental.Optional.has_value": true, + "tf.compat.v1.data.experimental.Optional.none_from_structure": true, + "tf.compat.v1.data.experimental.Optional.value_structure": true, + "tf.compat.v1.data.experimental.OptionalStructure": false, + "tf.compat.v1.data.experimental.OptionalStructure.__eq__": true, + "tf.compat.v1.data.experimental.OptionalStructure.__ge__": true, + "tf.compat.v1.data.experimental.OptionalStructure.__gt__": true, + "tf.compat.v1.data.experimental.OptionalStructure.__init__": true, + "tf.compat.v1.data.experimental.OptionalStructure.__le__": true, + "tf.compat.v1.data.experimental.OptionalStructure.__lt__": true, + "tf.compat.v1.data.experimental.OptionalStructure.__ne__": true, + "tf.compat.v1.data.experimental.OptionalStructure.__new__": true, + "tf.compat.v1.data.experimental.OptionalStructure.from_value": true, + "tf.compat.v1.data.experimental.OptionalStructure.is_compatible_with": true, + "tf.compat.v1.data.experimental.OptionalStructure.most_specific_compatible_type": true, + "tf.compat.v1.data.experimental.OptionalStructure.value_type": true, + "tf.compat.v1.data.experimental.RaggedTensorStructure": false, + "tf.compat.v1.data.experimental.RandomDataset": false, + "tf.compat.v1.data.experimental.RandomDataset.__eq__": true, + "tf.compat.v1.data.experimental.RandomDataset.__ge__": true, + "tf.compat.v1.data.experimental.RandomDataset.__gt__": true, + "tf.compat.v1.data.experimental.RandomDataset.__init__": true, + "tf.compat.v1.data.experimental.RandomDataset.__iter__": true, + "tf.compat.v1.data.experimental.RandomDataset.__le__": true, + "tf.compat.v1.data.experimental.RandomDataset.__lt__": true, + "tf.compat.v1.data.experimental.RandomDataset.__ne__": true, + "tf.compat.v1.data.experimental.RandomDataset.__new__": true, + "tf.compat.v1.data.experimental.RandomDataset.apply": true, + "tf.compat.v1.data.experimental.RandomDataset.as_numpy_iterator": true, + "tf.compat.v1.data.experimental.RandomDataset.batch": true, + "tf.compat.v1.data.experimental.RandomDataset.cache": true, + "tf.compat.v1.data.experimental.RandomDataset.concatenate": true, + "tf.compat.v1.data.experimental.RandomDataset.element_spec": true, + "tf.compat.v1.data.experimental.RandomDataset.enumerate": true, + "tf.compat.v1.data.experimental.RandomDataset.filter": true, + 
"tf.compat.v1.data.experimental.RandomDataset.filter_with_legacy_function": true, + "tf.compat.v1.data.experimental.RandomDataset.flat_map": true, + "tf.compat.v1.data.experimental.RandomDataset.from_generator": true, + "tf.compat.v1.data.experimental.RandomDataset.from_sparse_tensor_slices": true, + "tf.compat.v1.data.experimental.RandomDataset.from_tensor_slices": true, + "tf.compat.v1.data.experimental.RandomDataset.from_tensors": true, + "tf.compat.v1.data.experimental.RandomDataset.interleave": true, + "tf.compat.v1.data.experimental.RandomDataset.list_files": true, + "tf.compat.v1.data.experimental.RandomDataset.make_initializable_iterator": true, + "tf.compat.v1.data.experimental.RandomDataset.make_one_shot_iterator": true, + "tf.compat.v1.data.experimental.RandomDataset.map": true, + "tf.compat.v1.data.experimental.RandomDataset.map_with_legacy_function": true, + "tf.compat.v1.data.experimental.RandomDataset.options": true, + "tf.compat.v1.data.experimental.RandomDataset.output_classes": true, + "tf.compat.v1.data.experimental.RandomDataset.output_shapes": true, + "tf.compat.v1.data.experimental.RandomDataset.output_types": true, + "tf.compat.v1.data.experimental.RandomDataset.padded_batch": true, + "tf.compat.v1.data.experimental.RandomDataset.prefetch": true, + "tf.compat.v1.data.experimental.RandomDataset.range": true, + "tf.compat.v1.data.experimental.RandomDataset.reduce": true, + "tf.compat.v1.data.experimental.RandomDataset.repeat": true, + "tf.compat.v1.data.experimental.RandomDataset.shard": true, + "tf.compat.v1.data.experimental.RandomDataset.shuffle": true, + "tf.compat.v1.data.experimental.RandomDataset.skip": true, + "tf.compat.v1.data.experimental.RandomDataset.take": true, + "tf.compat.v1.data.experimental.RandomDataset.unbatch": true, + "tf.compat.v1.data.experimental.RandomDataset.window": true, + "tf.compat.v1.data.experimental.RandomDataset.with_options": true, + "tf.compat.v1.data.experimental.RandomDataset.zip": true, + "tf.compat.v1.data.experimental.Reducer": false, + "tf.compat.v1.data.experimental.Reducer.__eq__": true, + "tf.compat.v1.data.experimental.Reducer.__ge__": true, + "tf.compat.v1.data.experimental.Reducer.__gt__": true, + "tf.compat.v1.data.experimental.Reducer.__init__": true, + "tf.compat.v1.data.experimental.Reducer.__le__": true, + "tf.compat.v1.data.experimental.Reducer.__lt__": true, + "tf.compat.v1.data.experimental.Reducer.__ne__": true, + "tf.compat.v1.data.experimental.Reducer.__new__": true, + "tf.compat.v1.data.experimental.Reducer.finalize_func": true, + "tf.compat.v1.data.experimental.Reducer.init_func": true, + "tf.compat.v1.data.experimental.Reducer.reduce_func": true, + "tf.compat.v1.data.experimental.SparseTensorStructure": false, + "tf.compat.v1.data.experimental.SqlDataset": false, + "tf.compat.v1.data.experimental.SqlDataset.__eq__": true, + "tf.compat.v1.data.experimental.SqlDataset.__ge__": true, + "tf.compat.v1.data.experimental.SqlDataset.__gt__": true, + "tf.compat.v1.data.experimental.SqlDataset.__init__": true, + "tf.compat.v1.data.experimental.SqlDataset.__iter__": true, + "tf.compat.v1.data.experimental.SqlDataset.__le__": true, + "tf.compat.v1.data.experimental.SqlDataset.__lt__": true, + "tf.compat.v1.data.experimental.SqlDataset.__ne__": true, + "tf.compat.v1.data.experimental.SqlDataset.__new__": true, + "tf.compat.v1.data.experimental.SqlDataset.apply": true, + "tf.compat.v1.data.experimental.SqlDataset.as_numpy_iterator": true, + "tf.compat.v1.data.experimental.SqlDataset.batch": true, + 
"tf.compat.v1.data.experimental.SqlDataset.cache": true, + "tf.compat.v1.data.experimental.SqlDataset.concatenate": true, + "tf.compat.v1.data.experimental.SqlDataset.element_spec": true, + "tf.compat.v1.data.experimental.SqlDataset.enumerate": true, + "tf.compat.v1.data.experimental.SqlDataset.filter": true, + "tf.compat.v1.data.experimental.SqlDataset.filter_with_legacy_function": true, + "tf.compat.v1.data.experimental.SqlDataset.flat_map": true, + "tf.compat.v1.data.experimental.SqlDataset.from_generator": true, + "tf.compat.v1.data.experimental.SqlDataset.from_sparse_tensor_slices": true, + "tf.compat.v1.data.experimental.SqlDataset.from_tensor_slices": true, + "tf.compat.v1.data.experimental.SqlDataset.from_tensors": true, + "tf.compat.v1.data.experimental.SqlDataset.interleave": true, + "tf.compat.v1.data.experimental.SqlDataset.list_files": true, + "tf.compat.v1.data.experimental.SqlDataset.make_initializable_iterator": true, + "tf.compat.v1.data.experimental.SqlDataset.make_one_shot_iterator": true, + "tf.compat.v1.data.experimental.SqlDataset.map": true, + "tf.compat.v1.data.experimental.SqlDataset.map_with_legacy_function": true, + "tf.compat.v1.data.experimental.SqlDataset.options": true, + "tf.compat.v1.data.experimental.SqlDataset.output_classes": true, + "tf.compat.v1.data.experimental.SqlDataset.output_shapes": true, + "tf.compat.v1.data.experimental.SqlDataset.output_types": true, + "tf.compat.v1.data.experimental.SqlDataset.padded_batch": true, + "tf.compat.v1.data.experimental.SqlDataset.prefetch": true, + "tf.compat.v1.data.experimental.SqlDataset.range": true, + "tf.compat.v1.data.experimental.SqlDataset.reduce": true, + "tf.compat.v1.data.experimental.SqlDataset.repeat": true, + "tf.compat.v1.data.experimental.SqlDataset.shard": true, + "tf.compat.v1.data.experimental.SqlDataset.shuffle": true, + "tf.compat.v1.data.experimental.SqlDataset.skip": true, + "tf.compat.v1.data.experimental.SqlDataset.take": true, + "tf.compat.v1.data.experimental.SqlDataset.unbatch": true, + "tf.compat.v1.data.experimental.SqlDataset.window": true, + "tf.compat.v1.data.experimental.SqlDataset.with_options": true, + "tf.compat.v1.data.experimental.SqlDataset.zip": true, + "tf.compat.v1.data.experimental.StatsAggregator": false, + "tf.compat.v1.data.experimental.StatsAggregator.__eq__": true, + "tf.compat.v1.data.experimental.StatsAggregator.__ge__": true, + "tf.compat.v1.data.experimental.StatsAggregator.__gt__": true, + "tf.compat.v1.data.experimental.StatsAggregator.__init__": true, + "tf.compat.v1.data.experimental.StatsAggregator.__le__": true, + "tf.compat.v1.data.experimental.StatsAggregator.__lt__": true, + "tf.compat.v1.data.experimental.StatsAggregator.__ne__": true, + "tf.compat.v1.data.experimental.StatsAggregator.__new__": true, + "tf.compat.v1.data.experimental.StatsAggregator.get_summary": true, + "tf.compat.v1.data.experimental.StatsOptions": false, + "tf.compat.v1.data.experimental.StatsOptions.__eq__": true, + "tf.compat.v1.data.experimental.StatsOptions.__ge__": true, + "tf.compat.v1.data.experimental.StatsOptions.__gt__": true, + "tf.compat.v1.data.experimental.StatsOptions.__init__": true, + "tf.compat.v1.data.experimental.StatsOptions.__le__": true, + "tf.compat.v1.data.experimental.StatsOptions.__lt__": true, + "tf.compat.v1.data.experimental.StatsOptions.__ne__": true, + "tf.compat.v1.data.experimental.StatsOptions.__new__": true, + "tf.compat.v1.data.experimental.StatsOptions.aggregator": true, + "tf.compat.v1.data.experimental.StatsOptions.counter_prefix": true, + 
"tf.compat.v1.data.experimental.StatsOptions.latency_all_edges": true, + "tf.compat.v1.data.experimental.StatsOptions.prefix": true, + "tf.compat.v1.data.experimental.Structure": false, + "tf.compat.v1.data.experimental.Structure.__eq__": true, + "tf.compat.v1.data.experimental.Structure.__ge__": true, + "tf.compat.v1.data.experimental.Structure.__gt__": true, + "tf.compat.v1.data.experimental.Structure.__init__": true, + "tf.compat.v1.data.experimental.Structure.__le__": true, + "tf.compat.v1.data.experimental.Structure.__lt__": true, + "tf.compat.v1.data.experimental.Structure.__ne__": true, + "tf.compat.v1.data.experimental.Structure.__new__": true, + "tf.compat.v1.data.experimental.Structure.is_compatible_with": true, + "tf.compat.v1.data.experimental.Structure.most_specific_compatible_type": true, + "tf.compat.v1.data.experimental.Structure.value_type": true, + "tf.compat.v1.data.experimental.TFRecordWriter": false, + "tf.compat.v1.data.experimental.TFRecordWriter.__eq__": true, + "tf.compat.v1.data.experimental.TFRecordWriter.__ge__": true, + "tf.compat.v1.data.experimental.TFRecordWriter.__gt__": true, + "tf.compat.v1.data.experimental.TFRecordWriter.__init__": true, + "tf.compat.v1.data.experimental.TFRecordWriter.__le__": true, + "tf.compat.v1.data.experimental.TFRecordWriter.__lt__": true, + "tf.compat.v1.data.experimental.TFRecordWriter.__ne__": true, + "tf.compat.v1.data.experimental.TFRecordWriter.__new__": true, + "tf.compat.v1.data.experimental.TFRecordWriter.write": true, + "tf.compat.v1.data.experimental.TensorArrayStructure": false, + "tf.compat.v1.data.experimental.TensorStructure": false, + "tf.compat.v1.data.experimental.ThreadingOptions": false, + "tf.compat.v1.data.experimental.ThreadingOptions.__eq__": true, + "tf.compat.v1.data.experimental.ThreadingOptions.__ge__": true, + "tf.compat.v1.data.experimental.ThreadingOptions.__gt__": true, + "tf.compat.v1.data.experimental.ThreadingOptions.__init__": true, + "tf.compat.v1.data.experimental.ThreadingOptions.__le__": true, + "tf.compat.v1.data.experimental.ThreadingOptions.__lt__": true, + "tf.compat.v1.data.experimental.ThreadingOptions.__ne__": true, + "tf.compat.v1.data.experimental.ThreadingOptions.__new__": true, + "tf.compat.v1.data.experimental.ThreadingOptions.max_intra_op_parallelism": true, + "tf.compat.v1.data.experimental.ThreadingOptions.private_threadpool_size": true, + "tf.compat.v1.data.experimental.UNKNOWN_CARDINALITY": true, + "tf.compat.v1.data.experimental.assert_cardinality": false, + "tf.compat.v1.data.experimental.bucket_by_sequence_length": false, + "tf.compat.v1.data.experimental.bytes_produced_stats": false, + "tf.compat.v1.data.experimental.cardinality": false, + "tf.compat.v1.data.experimental.choose_from_datasets": false, + "tf.compat.v1.data.experimental.copy_to_device": false, + "tf.compat.v1.data.experimental.dense_to_ragged_batch": false, + "tf.compat.v1.data.experimental.dense_to_sparse_batch": false, + "tf.compat.v1.data.experimental.enumerate_dataset": false, + "tf.compat.v1.data.experimental.from_variant": false, + "tf.compat.v1.data.experimental.get_next_as_optional": false, + "tf.compat.v1.data.experimental.get_single_element": false, + "tf.compat.v1.data.experimental.get_structure": false, + "tf.compat.v1.data.experimental.group_by_reducer": false, + "tf.compat.v1.data.experimental.group_by_window": false, + "tf.compat.v1.data.experimental.ignore_errors": false, + "tf.compat.v1.data.experimental.latency_stats": false, + 
"tf.compat.v1.data.experimental.make_batched_features_dataset": false, + "tf.compat.v1.data.experimental.make_csv_dataset": false, + "tf.compat.v1.data.experimental.make_saveable_from_iterator": false, + "tf.compat.v1.data.experimental.map_and_batch": false, + "tf.compat.v1.data.experimental.map_and_batch_with_legacy_function": false, + "tf.compat.v1.data.experimental.parallel_interleave": false, + "tf.compat.v1.data.experimental.parse_example_dataset": false, + "tf.compat.v1.data.experimental.prefetch_to_device": false, + "tf.compat.v1.data.experimental.rejection_resample": false, + "tf.compat.v1.data.experimental.sample_from_datasets": false, + "tf.compat.v1.data.experimental.scan": false, + "tf.compat.v1.data.experimental.shuffle_and_repeat": false, + "tf.compat.v1.data.experimental.take_while": false, + "tf.compat.v1.data.experimental.to_variant": false, + "tf.compat.v1.data.experimental.unbatch": false, + "tf.compat.v1.data.experimental.unique": false, + "tf.compat.v1.data.get_output_classes": false, + "tf.compat.v1.data.get_output_shapes": false, + "tf.compat.v1.data.get_output_types": false, + "tf.compat.v1.data.make_initializable_iterator": false, + "tf.compat.v1.data.make_one_shot_iterator": false, + "tf.compat.v1.debugging": false, + "tf.compat.v1.debugging.Assert": false, + "tf.compat.v1.debugging.assert_all_finite": false, + "tf.compat.v1.debugging.assert_equal": false, + "tf.compat.v1.debugging.assert_greater": false, + "tf.compat.v1.debugging.assert_greater_equal": false, + "tf.compat.v1.debugging.assert_integer": false, + "tf.compat.v1.debugging.assert_less": false, + "tf.compat.v1.debugging.assert_less_equal": false, + "tf.compat.v1.debugging.assert_near": false, + "tf.compat.v1.debugging.assert_negative": false, + "tf.compat.v1.debugging.assert_non_negative": false, + "tf.compat.v1.debugging.assert_non_positive": false, + "tf.compat.v1.debugging.assert_none_equal": false, + "tf.compat.v1.debugging.assert_positive": false, + "tf.compat.v1.debugging.assert_proper_iterable": false, + "tf.compat.v1.debugging.assert_rank": false, + "tf.compat.v1.debugging.assert_rank_at_least": false, + "tf.compat.v1.debugging.assert_rank_in": false, + "tf.compat.v1.debugging.assert_same_float_dtype": false, + "tf.compat.v1.debugging.assert_scalar": false, + "tf.compat.v1.debugging.assert_shapes": false, + "tf.compat.v1.debugging.assert_type": false, + "tf.compat.v1.debugging.check_numerics": false, + "tf.compat.v1.debugging.disable_check_numerics": false, + "tf.compat.v1.debugging.enable_check_numerics": false, + "tf.compat.v1.debugging.experimental": false, + "tf.compat.v1.debugging.experimental.disable_dump_debug_info": false, + "tf.compat.v1.debugging.experimental.enable_dump_debug_info": false, + "tf.compat.v1.debugging.get_log_device_placement": false, + "tf.compat.v1.debugging.is_finite": false, + "tf.compat.v1.debugging.is_inf": false, + "tf.compat.v1.debugging.is_nan": false, + "tf.compat.v1.debugging.is_non_decreasing": false, + "tf.compat.v1.debugging.is_numeric_tensor": false, + "tf.compat.v1.debugging.is_strictly_increasing": false, + "tf.compat.v1.debugging.set_log_device_placement": false, + "tf.compat.v1.decode_base64": false, + "tf.compat.v1.decode_compressed": false, + "tf.compat.v1.decode_csv": false, + "tf.compat.v1.decode_json_example": false, + "tf.compat.v1.decode_raw": false, + "tf.compat.v1.delete_session_tensor": false, + "tf.compat.v1.depth_to_space": false, + "tf.compat.v1.dequantize": false, + "tf.compat.v1.deserialize_many_sparse": false, + "tf.compat.v1.device": 
false, + "tf.compat.v1.diag": false, + "tf.compat.v1.diag_part": false, + "tf.compat.v1.digamma": false, + "tf.compat.v1.dimension_at_index": false, + "tf.compat.v1.dimension_value": false, + "tf.compat.v1.disable_control_flow_v2": false, + "tf.compat.v1.disable_eager_execution": false, + "tf.compat.v1.disable_resource_variables": false, + "tf.compat.v1.disable_tensor_equality": false, + "tf.compat.v1.disable_v2_behavior": false, + "tf.compat.v1.disable_v2_tensorshape": false, + "tf.compat.v1.distribute": false, + "tf.compat.v1.distribute.CrossDeviceOps": false, + "tf.compat.v1.distribute.CrossDeviceOps.__eq__": true, + "tf.compat.v1.distribute.CrossDeviceOps.__ge__": true, + "tf.compat.v1.distribute.CrossDeviceOps.__gt__": true, + "tf.compat.v1.distribute.CrossDeviceOps.__init__": true, + "tf.compat.v1.distribute.CrossDeviceOps.__le__": true, + "tf.compat.v1.distribute.CrossDeviceOps.__lt__": true, + "tf.compat.v1.distribute.CrossDeviceOps.__ne__": true, + "tf.compat.v1.distribute.CrossDeviceOps.__new__": true, + "tf.compat.v1.distribute.CrossDeviceOps.batch_reduce": true, + "tf.compat.v1.distribute.CrossDeviceOps.batch_reduce_implementation": true, + "tf.compat.v1.distribute.CrossDeviceOps.broadcast": true, + "tf.compat.v1.distribute.CrossDeviceOps.broadcast_implementation": true, + "tf.compat.v1.distribute.CrossDeviceOps.reduce": true, + "tf.compat.v1.distribute.CrossDeviceOps.reduce_implementation": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce": false, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__eq__": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__ge__": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__gt__": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__init__": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__le__": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__lt__": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__ne__": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.__new__": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.batch_reduce": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.batch_reduce_implementation": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.broadcast": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.broadcast_implementation": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.reduce": true, + "tf.compat.v1.distribute.HierarchicalCopyAllReduce.reduce_implementation": true, + "tf.compat.v1.distribute.InputContext": false, + "tf.compat.v1.distribute.InputContext.__eq__": true, + "tf.compat.v1.distribute.InputContext.__ge__": true, + "tf.compat.v1.distribute.InputContext.__gt__": true, + "tf.compat.v1.distribute.InputContext.__init__": true, + "tf.compat.v1.distribute.InputContext.__le__": true, + "tf.compat.v1.distribute.InputContext.__lt__": true, + "tf.compat.v1.distribute.InputContext.__ne__": true, + "tf.compat.v1.distribute.InputContext.__new__": true, + "tf.compat.v1.distribute.InputContext.get_per_replica_batch_size": true, + "tf.compat.v1.distribute.InputContext.input_pipeline_id": true, + "tf.compat.v1.distribute.InputContext.num_input_pipelines": true, + "tf.compat.v1.distribute.InputContext.num_replicas_in_sync": true, + "tf.compat.v1.distribute.InputReplicationMode": false, + "tf.compat.v1.distribute.InputReplicationMode.PER_WORKER": true, + "tf.compat.v1.distribute.InputReplicationMode.name": true, + "tf.compat.v1.distribute.InputReplicationMode.value": true, + 
"tf.compat.v1.distribute.MirroredStrategy": false, + "tf.compat.v1.distribute.MirroredStrategy.__eq__": true, + "tf.compat.v1.distribute.MirroredStrategy.__ge__": true, + "tf.compat.v1.distribute.MirroredStrategy.__gt__": true, + "tf.compat.v1.distribute.MirroredStrategy.__init__": true, + "tf.compat.v1.distribute.MirroredStrategy.__le__": true, + "tf.compat.v1.distribute.MirroredStrategy.__lt__": true, + "tf.compat.v1.distribute.MirroredStrategy.__ne__": true, + "tf.compat.v1.distribute.MirroredStrategy.__new__": true, + "tf.compat.v1.distribute.MirroredStrategy.experimental_distribute_dataset": true, + "tf.compat.v1.distribute.MirroredStrategy.experimental_distribute_datasets_from_function": true, + "tf.compat.v1.distribute.MirroredStrategy.experimental_local_results": true, + "tf.compat.v1.distribute.MirroredStrategy.experimental_make_numpy_dataset": true, + "tf.compat.v1.distribute.MirroredStrategy.experimental_run": true, + "tf.compat.v1.distribute.MirroredStrategy.extended": true, + "tf.compat.v1.distribute.MirroredStrategy.make_dataset_iterator": true, + "tf.compat.v1.distribute.MirroredStrategy.make_input_fn_iterator": true, + "tf.compat.v1.distribute.MirroredStrategy.num_replicas_in_sync": true, + "tf.compat.v1.distribute.MirroredStrategy.reduce": true, + "tf.compat.v1.distribute.MirroredStrategy.run": true, + "tf.compat.v1.distribute.MirroredStrategy.scope": true, + "tf.compat.v1.distribute.MirroredStrategy.update_config_proto": true, + "tf.compat.v1.distribute.NcclAllReduce": false, + "tf.compat.v1.distribute.NcclAllReduce.__eq__": true, + "tf.compat.v1.distribute.NcclAllReduce.__ge__": true, + "tf.compat.v1.distribute.NcclAllReduce.__gt__": true, + "tf.compat.v1.distribute.NcclAllReduce.__init__": true, + "tf.compat.v1.distribute.NcclAllReduce.__le__": true, + "tf.compat.v1.distribute.NcclAllReduce.__lt__": true, + "tf.compat.v1.distribute.NcclAllReduce.__ne__": true, + "tf.compat.v1.distribute.NcclAllReduce.__new__": true, + "tf.compat.v1.distribute.NcclAllReduce.batch_reduce": true, + "tf.compat.v1.distribute.NcclAllReduce.batch_reduce_implementation": true, + "tf.compat.v1.distribute.NcclAllReduce.broadcast": true, + "tf.compat.v1.distribute.NcclAllReduce.broadcast_implementation": true, + "tf.compat.v1.distribute.NcclAllReduce.reduce": true, + "tf.compat.v1.distribute.NcclAllReduce.reduce_implementation": true, + "tf.compat.v1.distribute.OneDeviceStrategy": false, + "tf.compat.v1.distribute.OneDeviceStrategy.__eq__": true, + "tf.compat.v1.distribute.OneDeviceStrategy.__ge__": true, + "tf.compat.v1.distribute.OneDeviceStrategy.__gt__": true, + "tf.compat.v1.distribute.OneDeviceStrategy.__init__": true, + "tf.compat.v1.distribute.OneDeviceStrategy.__le__": true, + "tf.compat.v1.distribute.OneDeviceStrategy.__lt__": true, + "tf.compat.v1.distribute.OneDeviceStrategy.__ne__": true, + "tf.compat.v1.distribute.OneDeviceStrategy.__new__": true, + "tf.compat.v1.distribute.OneDeviceStrategy.experimental_distribute_dataset": true, + "tf.compat.v1.distribute.OneDeviceStrategy.experimental_distribute_datasets_from_function": true, + "tf.compat.v1.distribute.OneDeviceStrategy.experimental_local_results": true, + "tf.compat.v1.distribute.OneDeviceStrategy.experimental_make_numpy_dataset": true, + "tf.compat.v1.distribute.OneDeviceStrategy.experimental_run": true, + "tf.compat.v1.distribute.OneDeviceStrategy.extended": true, + "tf.compat.v1.distribute.OneDeviceStrategy.make_dataset_iterator": true, + "tf.compat.v1.distribute.OneDeviceStrategy.make_input_fn_iterator": true, + 
"tf.compat.v1.distribute.OneDeviceStrategy.num_replicas_in_sync": true, + "tf.compat.v1.distribute.OneDeviceStrategy.reduce": true, + "tf.compat.v1.distribute.OneDeviceStrategy.run": true, + "tf.compat.v1.distribute.OneDeviceStrategy.scope": true, + "tf.compat.v1.distribute.OneDeviceStrategy.update_config_proto": true, + "tf.compat.v1.distribute.ReduceOp": false, + "tf.compat.v1.distribute.ReduceOp.MEAN": true, + "tf.compat.v1.distribute.ReduceOp.SUM": true, + "tf.compat.v1.distribute.ReduceOp.name": true, + "tf.compat.v1.distribute.ReduceOp.value": true, + "tf.compat.v1.distribute.ReductionToOneDevice": false, + "tf.compat.v1.distribute.ReductionToOneDevice.__eq__": true, + "tf.compat.v1.distribute.ReductionToOneDevice.__ge__": true, + "tf.compat.v1.distribute.ReductionToOneDevice.__gt__": true, + "tf.compat.v1.distribute.ReductionToOneDevice.__init__": true, + "tf.compat.v1.distribute.ReductionToOneDevice.__le__": true, + "tf.compat.v1.distribute.ReductionToOneDevice.__lt__": true, + "tf.compat.v1.distribute.ReductionToOneDevice.__ne__": true, + "tf.compat.v1.distribute.ReductionToOneDevice.__new__": true, + "tf.compat.v1.distribute.ReductionToOneDevice.batch_reduce": true, + "tf.compat.v1.distribute.ReductionToOneDevice.batch_reduce_implementation": true, + "tf.compat.v1.distribute.ReductionToOneDevice.broadcast": true, + "tf.compat.v1.distribute.ReductionToOneDevice.broadcast_implementation": true, + "tf.compat.v1.distribute.ReductionToOneDevice.reduce": true, + "tf.compat.v1.distribute.ReductionToOneDevice.reduce_implementation": true, + "tf.compat.v1.distribute.ReplicaContext": false, + "tf.compat.v1.distribute.ReplicaContext.__enter__": true, + "tf.compat.v1.distribute.ReplicaContext.__eq__": true, + "tf.compat.v1.distribute.ReplicaContext.__exit__": true, + "tf.compat.v1.distribute.ReplicaContext.__ge__": true, + "tf.compat.v1.distribute.ReplicaContext.__gt__": true, + "tf.compat.v1.distribute.ReplicaContext.__init__": true, + "tf.compat.v1.distribute.ReplicaContext.__le__": true, + "tf.compat.v1.distribute.ReplicaContext.__lt__": true, + "tf.compat.v1.distribute.ReplicaContext.__ne__": true, + "tf.compat.v1.distribute.ReplicaContext.__new__": true, + "tf.compat.v1.distribute.ReplicaContext.all_reduce": true, + "tf.compat.v1.distribute.ReplicaContext.devices": true, + "tf.compat.v1.distribute.ReplicaContext.merge_call": true, + "tf.compat.v1.distribute.ReplicaContext.num_replicas_in_sync": true, + "tf.compat.v1.distribute.ReplicaContext.replica_id_in_sync_group": true, + "tf.compat.v1.distribute.ReplicaContext.strategy": true, + "tf.compat.v1.distribute.RunOptions": false, + "tf.compat.v1.distribute.RunOptions.__add__": true, + "tf.compat.v1.distribute.RunOptions.__contains__": true, + "tf.compat.v1.distribute.RunOptions.__eq__": true, + "tf.compat.v1.distribute.RunOptions.__ge__": true, + "tf.compat.v1.distribute.RunOptions.__getitem__": true, + "tf.compat.v1.distribute.RunOptions.__gt__": true, + "tf.compat.v1.distribute.RunOptions.__init__": true, + "tf.compat.v1.distribute.RunOptions.__iter__": true, + "tf.compat.v1.distribute.RunOptions.__le__": true, + "tf.compat.v1.distribute.RunOptions.__len__": true, + "tf.compat.v1.distribute.RunOptions.__lt__": true, + "tf.compat.v1.distribute.RunOptions.__mul__": true, + "tf.compat.v1.distribute.RunOptions.__ne__": true, + "tf.compat.v1.distribute.RunOptions.__new__": true, + "tf.compat.v1.distribute.RunOptions.__rmul__": true, + "tf.compat.v1.distribute.RunOptions.count": true, + 
"tf.compat.v1.distribute.RunOptions.experimental_bucketizing_dynamic_shape": true, + "tf.compat.v1.distribute.RunOptions.experimental_enable_dynamic_batch_size": true, + "tf.compat.v1.distribute.RunOptions.index": true, + "tf.compat.v1.distribute.Server": false, + "tf.compat.v1.distribute.Server.__eq__": true, + "tf.compat.v1.distribute.Server.__ge__": true, + "tf.compat.v1.distribute.Server.__gt__": true, + "tf.compat.v1.distribute.Server.__init__": true, + "tf.compat.v1.distribute.Server.__le__": true, + "tf.compat.v1.distribute.Server.__lt__": true, + "tf.compat.v1.distribute.Server.__ne__": true, + "tf.compat.v1.distribute.Server.__new__": true, + "tf.compat.v1.distribute.Server.create_local_server": true, + "tf.compat.v1.distribute.Server.join": true, + "tf.compat.v1.distribute.Server.server_def": true, + "tf.compat.v1.distribute.Server.start": true, + "tf.compat.v1.distribute.Server.target": true, + "tf.compat.v1.distribute.Strategy": false, + "tf.compat.v1.distribute.Strategy.__eq__": true, + "tf.compat.v1.distribute.Strategy.__ge__": true, + "tf.compat.v1.distribute.Strategy.__gt__": true, + "tf.compat.v1.distribute.Strategy.__init__": true, + "tf.compat.v1.distribute.Strategy.__le__": true, + "tf.compat.v1.distribute.Strategy.__lt__": true, + "tf.compat.v1.distribute.Strategy.__ne__": true, + "tf.compat.v1.distribute.Strategy.__new__": true, + "tf.compat.v1.distribute.Strategy.experimental_distribute_dataset": true, + "tf.compat.v1.distribute.Strategy.experimental_distribute_datasets_from_function": true, + "tf.compat.v1.distribute.Strategy.experimental_local_results": true, + "tf.compat.v1.distribute.Strategy.experimental_make_numpy_dataset": true, + "tf.compat.v1.distribute.Strategy.experimental_run": true, + "tf.compat.v1.distribute.Strategy.extended": true, + "tf.compat.v1.distribute.Strategy.make_dataset_iterator": true, + "tf.compat.v1.distribute.Strategy.make_input_fn_iterator": true, + "tf.compat.v1.distribute.Strategy.num_replicas_in_sync": true, + "tf.compat.v1.distribute.Strategy.reduce": true, + "tf.compat.v1.distribute.Strategy.run": true, + "tf.compat.v1.distribute.Strategy.scope": true, + "tf.compat.v1.distribute.Strategy.update_config_proto": true, + "tf.compat.v1.distribute.StrategyExtended": false, + "tf.compat.v1.distribute.StrategyExtended.__eq__": true, + "tf.compat.v1.distribute.StrategyExtended.__ge__": true, + "tf.compat.v1.distribute.StrategyExtended.__gt__": true, + "tf.compat.v1.distribute.StrategyExtended.__init__": true, + "tf.compat.v1.distribute.StrategyExtended.__le__": true, + "tf.compat.v1.distribute.StrategyExtended.__lt__": true, + "tf.compat.v1.distribute.StrategyExtended.__ne__": true, + "tf.compat.v1.distribute.StrategyExtended.__new__": true, + "tf.compat.v1.distribute.StrategyExtended.batch_reduce_to": true, + "tf.compat.v1.distribute.StrategyExtended.broadcast_to": true, + "tf.compat.v1.distribute.StrategyExtended.call_for_each_replica": true, + "tf.compat.v1.distribute.StrategyExtended.colocate_vars_with": true, + "tf.compat.v1.distribute.StrategyExtended.experimental_between_graph": true, + "tf.compat.v1.distribute.StrategyExtended.experimental_make_numpy_dataset": true, + "tf.compat.v1.distribute.StrategyExtended.experimental_require_static_shapes": true, + "tf.compat.v1.distribute.StrategyExtended.experimental_run_steps_on_iterator": true, + "tf.compat.v1.distribute.StrategyExtended.experimental_should_init": true, + "tf.compat.v1.distribute.StrategyExtended.non_slot_devices": true, + 
"tf.compat.v1.distribute.StrategyExtended.parameter_devices": true, + "tf.compat.v1.distribute.StrategyExtended.read_var": true, + "tf.compat.v1.distribute.StrategyExtended.reduce_to": true, + "tf.compat.v1.distribute.StrategyExtended.should_checkpoint": true, + "tf.compat.v1.distribute.StrategyExtended.should_save_summary": true, + "tf.compat.v1.distribute.StrategyExtended.update": true, + "tf.compat.v1.distribute.StrategyExtended.update_non_slot": true, + "tf.compat.v1.distribute.StrategyExtended.value_container": true, + "tf.compat.v1.distribute.StrategyExtended.variable_created_in_scope": true, + "tf.compat.v1.distribute.StrategyExtended.worker_devices": true, + "tf.compat.v1.distribute.cluster_resolver": false, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver": false, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__eq__": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__ge__": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__gt__": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__init__": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__le__": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__lt__": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__ne__": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.__new__": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.cluster_spec": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.environment": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.master": true, + "tf.compat.v1.distribute.cluster_resolver.ClusterResolver.num_accelerators": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver": false, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__eq__": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__ge__": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__gt__": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__init__": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__le__": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__lt__": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__ne__": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.__new__": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.cluster_spec": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.environment": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.master": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.num_accelerators": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.rpc_layer": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.task_id": true, + "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver.task_type": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver": false, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__eq__": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__ge__": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__gt__": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__init__": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__le__": true, + 
"tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__lt__": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__ne__": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.__new__": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.cluster_spec": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.environment": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.master": true, + "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver.num_accelerators": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver": false, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__eq__": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__ge__": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__gt__": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__init__": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__le__": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__lt__": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__ne__": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.__new__": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.cluster_spec": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.environment": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.master": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.num_accelerators": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.rpc_layer": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.task_id": true, + "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver.task_type": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver": false, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__eq__": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__ge__": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__gt__": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__init__": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__le__": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__lt__": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__ne__": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.__new__": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.cluster_spec": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.environment": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.get_task_info": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.master": true, + "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver.num_accelerators": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver": false, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__eq__": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__ge__": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__gt__": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__init__": true, + 
"tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__le__": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__lt__": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__ne__": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.__new__": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.cluster_spec": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.environment": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.master": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.num_accelerators": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.rpc_layer": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.task_id": true, + "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver.task_type": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver": false, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__enter__": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__eq__": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__exit__": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__ge__": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__gt__": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__init__": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__le__": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__lt__": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__ne__": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.__new__": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.cluster_spec": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.environment": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.get_job_name": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.get_master": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.master": true, + "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver.num_accelerators": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver": false, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__eq__": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__ge__": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__gt__": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__init__": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__le__": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__lt__": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__ne__": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.__new__": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.cluster_spec": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.environment": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.master": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.num_accelerators": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.rpc_layer": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.task_id": true, + "tf.compat.v1.distribute.cluster_resolver.UnionResolver.task_type": true, + "tf.compat.v1.distribute.experimental": false, + 
"tf.compat.v1.distribute.experimental.CentralStorageStrategy": false, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__eq__": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__ge__": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__gt__": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__init__": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__le__": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__lt__": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__ne__": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.__new__": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.experimental_distribute_dataset": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.experimental_distribute_datasets_from_function": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.experimental_local_results": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.experimental_make_numpy_dataset": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.experimental_run": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.extended": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.make_dataset_iterator": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.make_input_fn_iterator": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.num_replicas_in_sync": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.reduce": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.run": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.scope": true, + "tf.compat.v1.distribute.experimental.CentralStorageStrategy.update_config_proto": true, + "tf.compat.v1.distribute.experimental.CollectiveCommunication": false, + "tf.compat.v1.distribute.experimental.CollectiveCommunication.AUTO": true, + "tf.compat.v1.distribute.experimental.CollectiveCommunication.NCCL": true, + "tf.compat.v1.distribute.experimental.CollectiveCommunication.RING": true, + "tf.compat.v1.distribute.experimental.CollectiveCommunication.name": true, + "tf.compat.v1.distribute.experimental.CollectiveCommunication.value": true, + "tf.compat.v1.distribute.experimental.CollectiveHints": false, + "tf.compat.v1.distribute.experimental.CollectiveHints.__eq__": true, + "tf.compat.v1.distribute.experimental.CollectiveHints.__ge__": true, + "tf.compat.v1.distribute.experimental.CollectiveHints.__gt__": true, + "tf.compat.v1.distribute.experimental.CollectiveHints.__init__": true, + "tf.compat.v1.distribute.experimental.CollectiveHints.__le__": true, + "tf.compat.v1.distribute.experimental.CollectiveHints.__lt__": true, + "tf.compat.v1.distribute.experimental.CollectiveHints.__ne__": true, + "tf.compat.v1.distribute.experimental.CollectiveHints.__new__": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy": false, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__eq__": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__ge__": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__gt__": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__init__": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__le__": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__lt__": 
true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__ne__": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.__new__": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_dataset": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_datasets_from_function": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.experimental_local_results": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.experimental_make_numpy_dataset": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.experimental_run": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.extended": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.make_dataset_iterator": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.make_input_fn_iterator": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.num_replicas_in_sync": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.reduce": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.run": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.scope": true, + "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy.update_config_proto": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy": false, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__eq__": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__ge__": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__gt__": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__init__": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__le__": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__lt__": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__ne__": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.__new__": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.experimental_distribute_dataset": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.experimental_distribute_datasets_from_function": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.experimental_local_results": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.experimental_make_numpy_dataset": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.experimental_run": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.extended": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.make_dataset_iterator": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.make_input_fn_iterator": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.num_replicas_in_sync": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.reduce": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.run": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.scope": true, + "tf.compat.v1.distribute.experimental.ParameterServerStrategy.update_config_proto": true, + "tf.compat.v1.distribute.experimental.TPUStrategy": false, + "tf.compat.v1.distribute.experimental.TPUStrategy.__eq__": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.__ge__": true, + 
"tf.compat.v1.distribute.experimental.TPUStrategy.__gt__": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.__init__": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.__le__": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.__lt__": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.__ne__": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.__new__": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.experimental_distribute_dataset": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.experimental_distribute_datasets_from_function": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.experimental_local_results": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.experimental_make_numpy_dataset": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.experimental_run": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.extended": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.make_dataset_iterator": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.make_input_fn_iterator": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.num_replicas_in_sync": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.reduce": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.run": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.scope": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.steps_per_run": true, + "tf.compat.v1.distribute.experimental.TPUStrategy.update_config_proto": true, + "tf.compat.v1.distribute.experimental_set_strategy": false, + "tf.compat.v1.distribute.get_loss_reduction": false, + "tf.compat.v1.distribute.get_replica_context": false, + "tf.compat.v1.distribute.get_strategy": false, + "tf.compat.v1.distribute.has_strategy": false, + "tf.compat.v1.distribute.in_cross_replica_context": false, + "tf.compat.v1.distributions": false, + "tf.compat.v1.distributions.Bernoulli": false, + "tf.compat.v1.distributions.Bernoulli.__eq__": true, + "tf.compat.v1.distributions.Bernoulli.__ge__": true, + "tf.compat.v1.distributions.Bernoulli.__gt__": true, + "tf.compat.v1.distributions.Bernoulli.__init__": true, + "tf.compat.v1.distributions.Bernoulli.__le__": true, + "tf.compat.v1.distributions.Bernoulli.__lt__": true, + "tf.compat.v1.distributions.Bernoulli.__ne__": true, + "tf.compat.v1.distributions.Bernoulli.__new__": true, + "tf.compat.v1.distributions.Bernoulli.allow_nan_stats": true, + "tf.compat.v1.distributions.Bernoulli.batch_shape": true, + "tf.compat.v1.distributions.Bernoulli.batch_shape_tensor": true, + "tf.compat.v1.distributions.Bernoulli.cdf": true, + "tf.compat.v1.distributions.Bernoulli.copy": true, + "tf.compat.v1.distributions.Bernoulli.covariance": true, + "tf.compat.v1.distributions.Bernoulli.cross_entropy": true, + "tf.compat.v1.distributions.Bernoulli.dtype": true, + "tf.compat.v1.distributions.Bernoulli.entropy": true, + "tf.compat.v1.distributions.Bernoulli.event_shape": true, + "tf.compat.v1.distributions.Bernoulli.event_shape_tensor": true, + "tf.compat.v1.distributions.Bernoulli.is_scalar_batch": true, + "tf.compat.v1.distributions.Bernoulli.is_scalar_event": true, + "tf.compat.v1.distributions.Bernoulli.kl_divergence": true, + "tf.compat.v1.distributions.Bernoulli.log_cdf": true, + "tf.compat.v1.distributions.Bernoulli.log_prob": true, + "tf.compat.v1.distributions.Bernoulli.log_survival_function": true, + "tf.compat.v1.distributions.Bernoulli.logits": true, + "tf.compat.v1.distributions.Bernoulli.mean": true, + 
"tf.compat.v1.distributions.Bernoulli.mode": true, + "tf.compat.v1.distributions.Bernoulli.name": true, + "tf.compat.v1.distributions.Bernoulli.param_shapes": true, + "tf.compat.v1.distributions.Bernoulli.param_static_shapes": true, + "tf.compat.v1.distributions.Bernoulli.parameters": true, + "tf.compat.v1.distributions.Bernoulli.prob": true, + "tf.compat.v1.distributions.Bernoulli.probs": true, + "tf.compat.v1.distributions.Bernoulli.quantile": true, + "tf.compat.v1.distributions.Bernoulli.reparameterization_type": true, + "tf.compat.v1.distributions.Bernoulli.sample": true, + "tf.compat.v1.distributions.Bernoulli.stddev": true, + "tf.compat.v1.distributions.Bernoulli.survival_function": true, + "tf.compat.v1.distributions.Bernoulli.validate_args": true, + "tf.compat.v1.distributions.Bernoulli.variance": true, + "tf.compat.v1.distributions.Beta": false, + "tf.compat.v1.distributions.Beta.__eq__": true, + "tf.compat.v1.distributions.Beta.__ge__": true, + "tf.compat.v1.distributions.Beta.__gt__": true, + "tf.compat.v1.distributions.Beta.__init__": true, + "tf.compat.v1.distributions.Beta.__le__": true, + "tf.compat.v1.distributions.Beta.__lt__": true, + "tf.compat.v1.distributions.Beta.__ne__": true, + "tf.compat.v1.distributions.Beta.__new__": true, + "tf.compat.v1.distributions.Beta.allow_nan_stats": true, + "tf.compat.v1.distributions.Beta.batch_shape": true, + "tf.compat.v1.distributions.Beta.batch_shape_tensor": true, + "tf.compat.v1.distributions.Beta.cdf": true, + "tf.compat.v1.distributions.Beta.concentration0": true, + "tf.compat.v1.distributions.Beta.concentration1": true, + "tf.compat.v1.distributions.Beta.copy": true, + "tf.compat.v1.distributions.Beta.covariance": true, + "tf.compat.v1.distributions.Beta.cross_entropy": true, + "tf.compat.v1.distributions.Beta.dtype": true, + "tf.compat.v1.distributions.Beta.entropy": true, + "tf.compat.v1.distributions.Beta.event_shape": true, + "tf.compat.v1.distributions.Beta.event_shape_tensor": true, + "tf.compat.v1.distributions.Beta.is_scalar_batch": true, + "tf.compat.v1.distributions.Beta.is_scalar_event": true, + "tf.compat.v1.distributions.Beta.kl_divergence": true, + "tf.compat.v1.distributions.Beta.log_cdf": true, + "tf.compat.v1.distributions.Beta.log_prob": true, + "tf.compat.v1.distributions.Beta.log_survival_function": true, + "tf.compat.v1.distributions.Beta.mean": true, + "tf.compat.v1.distributions.Beta.mode": true, + "tf.compat.v1.distributions.Beta.name": true, + "tf.compat.v1.distributions.Beta.param_shapes": true, + "tf.compat.v1.distributions.Beta.param_static_shapes": true, + "tf.compat.v1.distributions.Beta.parameters": true, + "tf.compat.v1.distributions.Beta.prob": true, + "tf.compat.v1.distributions.Beta.quantile": true, + "tf.compat.v1.distributions.Beta.reparameterization_type": true, + "tf.compat.v1.distributions.Beta.sample": true, + "tf.compat.v1.distributions.Beta.stddev": true, + "tf.compat.v1.distributions.Beta.survival_function": true, + "tf.compat.v1.distributions.Beta.total_concentration": true, + "tf.compat.v1.distributions.Beta.validate_args": true, + "tf.compat.v1.distributions.Beta.variance": true, + "tf.compat.v1.distributions.Categorical": false, + "tf.compat.v1.distributions.Categorical.__eq__": true, + "tf.compat.v1.distributions.Categorical.__ge__": true, + "tf.compat.v1.distributions.Categorical.__gt__": true, + "tf.compat.v1.distributions.Categorical.__init__": true, + "tf.compat.v1.distributions.Categorical.__le__": true, + "tf.compat.v1.distributions.Categorical.__lt__": true, + 
"tf.compat.v1.distributions.Categorical.__ne__": true, + "tf.compat.v1.distributions.Categorical.__new__": true, + "tf.compat.v1.distributions.Categorical.allow_nan_stats": true, + "tf.compat.v1.distributions.Categorical.batch_shape": true, + "tf.compat.v1.distributions.Categorical.batch_shape_tensor": true, + "tf.compat.v1.distributions.Categorical.cdf": true, + "tf.compat.v1.distributions.Categorical.copy": true, + "tf.compat.v1.distributions.Categorical.covariance": true, + "tf.compat.v1.distributions.Categorical.cross_entropy": true, + "tf.compat.v1.distributions.Categorical.dtype": true, + "tf.compat.v1.distributions.Categorical.entropy": true, + "tf.compat.v1.distributions.Categorical.event_shape": true, + "tf.compat.v1.distributions.Categorical.event_shape_tensor": true, + "tf.compat.v1.distributions.Categorical.event_size": true, + "tf.compat.v1.distributions.Categorical.is_scalar_batch": true, + "tf.compat.v1.distributions.Categorical.is_scalar_event": true, + "tf.compat.v1.distributions.Categorical.kl_divergence": true, + "tf.compat.v1.distributions.Categorical.log_cdf": true, + "tf.compat.v1.distributions.Categorical.log_prob": true, + "tf.compat.v1.distributions.Categorical.log_survival_function": true, + "tf.compat.v1.distributions.Categorical.logits": true, + "tf.compat.v1.distributions.Categorical.mean": true, + "tf.compat.v1.distributions.Categorical.mode": true, + "tf.compat.v1.distributions.Categorical.name": true, + "tf.compat.v1.distributions.Categorical.param_shapes": true, + "tf.compat.v1.distributions.Categorical.param_static_shapes": true, + "tf.compat.v1.distributions.Categorical.parameters": true, + "tf.compat.v1.distributions.Categorical.prob": true, + "tf.compat.v1.distributions.Categorical.probs": true, + "tf.compat.v1.distributions.Categorical.quantile": true, + "tf.compat.v1.distributions.Categorical.reparameterization_type": true, + "tf.compat.v1.distributions.Categorical.sample": true, + "tf.compat.v1.distributions.Categorical.stddev": true, + "tf.compat.v1.distributions.Categorical.survival_function": true, + "tf.compat.v1.distributions.Categorical.validate_args": true, + "tf.compat.v1.distributions.Categorical.variance": true, + "tf.compat.v1.distributions.Dirichlet": false, + "tf.compat.v1.distributions.Dirichlet.__eq__": true, + "tf.compat.v1.distributions.Dirichlet.__ge__": true, + "tf.compat.v1.distributions.Dirichlet.__gt__": true, + "tf.compat.v1.distributions.Dirichlet.__init__": true, + "tf.compat.v1.distributions.Dirichlet.__le__": true, + "tf.compat.v1.distributions.Dirichlet.__lt__": true, + "tf.compat.v1.distributions.Dirichlet.__ne__": true, + "tf.compat.v1.distributions.Dirichlet.__new__": true, + "tf.compat.v1.distributions.Dirichlet.allow_nan_stats": true, + "tf.compat.v1.distributions.Dirichlet.batch_shape": true, + "tf.compat.v1.distributions.Dirichlet.batch_shape_tensor": true, + "tf.compat.v1.distributions.Dirichlet.cdf": true, + "tf.compat.v1.distributions.Dirichlet.concentration": true, + "tf.compat.v1.distributions.Dirichlet.copy": true, + "tf.compat.v1.distributions.Dirichlet.covariance": true, + "tf.compat.v1.distributions.Dirichlet.cross_entropy": true, + "tf.compat.v1.distributions.Dirichlet.dtype": true, + "tf.compat.v1.distributions.Dirichlet.entropy": true, + "tf.compat.v1.distributions.Dirichlet.event_shape": true, + "tf.compat.v1.distributions.Dirichlet.event_shape_tensor": true, + "tf.compat.v1.distributions.Dirichlet.is_scalar_batch": true, + "tf.compat.v1.distributions.Dirichlet.is_scalar_event": true, + 
"tf.compat.v1.distributions.Dirichlet.kl_divergence": true, + "tf.compat.v1.distributions.Dirichlet.log_cdf": true, + "tf.compat.v1.distributions.Dirichlet.log_prob": true, + "tf.compat.v1.distributions.Dirichlet.log_survival_function": true, + "tf.compat.v1.distributions.Dirichlet.mean": true, + "tf.compat.v1.distributions.Dirichlet.mode": true, + "tf.compat.v1.distributions.Dirichlet.name": true, + "tf.compat.v1.distributions.Dirichlet.param_shapes": true, + "tf.compat.v1.distributions.Dirichlet.param_static_shapes": true, + "tf.compat.v1.distributions.Dirichlet.parameters": true, + "tf.compat.v1.distributions.Dirichlet.prob": true, + "tf.compat.v1.distributions.Dirichlet.quantile": true, + "tf.compat.v1.distributions.Dirichlet.reparameterization_type": true, + "tf.compat.v1.distributions.Dirichlet.sample": true, + "tf.compat.v1.distributions.Dirichlet.stddev": true, + "tf.compat.v1.distributions.Dirichlet.survival_function": true, + "tf.compat.v1.distributions.Dirichlet.total_concentration": true, + "tf.compat.v1.distributions.Dirichlet.validate_args": true, + "tf.compat.v1.distributions.Dirichlet.variance": true, + "tf.compat.v1.distributions.DirichletMultinomial": false, + "tf.compat.v1.distributions.DirichletMultinomial.__eq__": true, + "tf.compat.v1.distributions.DirichletMultinomial.__ge__": true, + "tf.compat.v1.distributions.DirichletMultinomial.__gt__": true, + "tf.compat.v1.distributions.DirichletMultinomial.__init__": true, + "tf.compat.v1.distributions.DirichletMultinomial.__le__": true, + "tf.compat.v1.distributions.DirichletMultinomial.__lt__": true, + "tf.compat.v1.distributions.DirichletMultinomial.__ne__": true, + "tf.compat.v1.distributions.DirichletMultinomial.__new__": true, + "tf.compat.v1.distributions.DirichletMultinomial.allow_nan_stats": true, + "tf.compat.v1.distributions.DirichletMultinomial.batch_shape": true, + "tf.compat.v1.distributions.DirichletMultinomial.batch_shape_tensor": true, + "tf.compat.v1.distributions.DirichletMultinomial.cdf": true, + "tf.compat.v1.distributions.DirichletMultinomial.concentration": true, + "tf.compat.v1.distributions.DirichletMultinomial.copy": true, + "tf.compat.v1.distributions.DirichletMultinomial.covariance": true, + "tf.compat.v1.distributions.DirichletMultinomial.cross_entropy": true, + "tf.compat.v1.distributions.DirichletMultinomial.dtype": true, + "tf.compat.v1.distributions.DirichletMultinomial.entropy": true, + "tf.compat.v1.distributions.DirichletMultinomial.event_shape": true, + "tf.compat.v1.distributions.DirichletMultinomial.event_shape_tensor": true, + "tf.compat.v1.distributions.DirichletMultinomial.is_scalar_batch": true, + "tf.compat.v1.distributions.DirichletMultinomial.is_scalar_event": true, + "tf.compat.v1.distributions.DirichletMultinomial.kl_divergence": true, + "tf.compat.v1.distributions.DirichletMultinomial.log_cdf": true, + "tf.compat.v1.distributions.DirichletMultinomial.log_prob": true, + "tf.compat.v1.distributions.DirichletMultinomial.log_survival_function": true, + "tf.compat.v1.distributions.DirichletMultinomial.mean": true, + "tf.compat.v1.distributions.DirichletMultinomial.mode": true, + "tf.compat.v1.distributions.DirichletMultinomial.name": true, + "tf.compat.v1.distributions.DirichletMultinomial.param_shapes": true, + "tf.compat.v1.distributions.DirichletMultinomial.param_static_shapes": true, + "tf.compat.v1.distributions.DirichletMultinomial.parameters": true, + "tf.compat.v1.distributions.DirichletMultinomial.prob": true, + "tf.compat.v1.distributions.DirichletMultinomial.quantile": 
true, + "tf.compat.v1.distributions.DirichletMultinomial.reparameterization_type": true, + "tf.compat.v1.distributions.DirichletMultinomial.sample": true, + "tf.compat.v1.distributions.DirichletMultinomial.stddev": true, + "tf.compat.v1.distributions.DirichletMultinomial.survival_function": true, + "tf.compat.v1.distributions.DirichletMultinomial.total_concentration": true, + "tf.compat.v1.distributions.DirichletMultinomial.total_count": true, + "tf.compat.v1.distributions.DirichletMultinomial.validate_args": true, + "tf.compat.v1.distributions.DirichletMultinomial.variance": true, + "tf.compat.v1.distributions.Distribution": false, + "tf.compat.v1.distributions.Distribution.__eq__": true, + "tf.compat.v1.distributions.Distribution.__ge__": true, + "tf.compat.v1.distributions.Distribution.__gt__": true, + "tf.compat.v1.distributions.Distribution.__init__": true, + "tf.compat.v1.distributions.Distribution.__le__": true, + "tf.compat.v1.distributions.Distribution.__lt__": true, + "tf.compat.v1.distributions.Distribution.__ne__": true, + "tf.compat.v1.distributions.Distribution.__new__": true, + "tf.compat.v1.distributions.Distribution.allow_nan_stats": true, + "tf.compat.v1.distributions.Distribution.batch_shape": true, + "tf.compat.v1.distributions.Distribution.batch_shape_tensor": true, + "tf.compat.v1.distributions.Distribution.cdf": true, + "tf.compat.v1.distributions.Distribution.copy": true, + "tf.compat.v1.distributions.Distribution.covariance": true, + "tf.compat.v1.distributions.Distribution.cross_entropy": true, + "tf.compat.v1.distributions.Distribution.dtype": true, + "tf.compat.v1.distributions.Distribution.entropy": true, + "tf.compat.v1.distributions.Distribution.event_shape": true, + "tf.compat.v1.distributions.Distribution.event_shape_tensor": true, + "tf.compat.v1.distributions.Distribution.is_scalar_batch": true, + "tf.compat.v1.distributions.Distribution.is_scalar_event": true, + "tf.compat.v1.distributions.Distribution.kl_divergence": true, + "tf.compat.v1.distributions.Distribution.log_cdf": true, + "tf.compat.v1.distributions.Distribution.log_prob": true, + "tf.compat.v1.distributions.Distribution.log_survival_function": true, + "tf.compat.v1.distributions.Distribution.mean": true, + "tf.compat.v1.distributions.Distribution.mode": true, + "tf.compat.v1.distributions.Distribution.name": true, + "tf.compat.v1.distributions.Distribution.param_shapes": true, + "tf.compat.v1.distributions.Distribution.param_static_shapes": true, + "tf.compat.v1.distributions.Distribution.parameters": true, + "tf.compat.v1.distributions.Distribution.prob": true, + "tf.compat.v1.distributions.Distribution.quantile": true, + "tf.compat.v1.distributions.Distribution.reparameterization_type": true, + "tf.compat.v1.distributions.Distribution.sample": true, + "tf.compat.v1.distributions.Distribution.stddev": true, + "tf.compat.v1.distributions.Distribution.survival_function": true, + "tf.compat.v1.distributions.Distribution.validate_args": true, + "tf.compat.v1.distributions.Distribution.variance": true, + "tf.compat.v1.distributions.Exponential": false, + "tf.compat.v1.distributions.Exponential.__eq__": true, + "tf.compat.v1.distributions.Exponential.__ge__": true, + "tf.compat.v1.distributions.Exponential.__gt__": true, + "tf.compat.v1.distributions.Exponential.__init__": true, + "tf.compat.v1.distributions.Exponential.__le__": true, + "tf.compat.v1.distributions.Exponential.__lt__": true, + "tf.compat.v1.distributions.Exponential.__ne__": true, + 
"tf.compat.v1.distributions.Exponential.__new__": true, + "tf.compat.v1.distributions.Exponential.allow_nan_stats": true, + "tf.compat.v1.distributions.Exponential.batch_shape": true, + "tf.compat.v1.distributions.Exponential.batch_shape_tensor": true, + "tf.compat.v1.distributions.Exponential.cdf": true, + "tf.compat.v1.distributions.Exponential.concentration": true, + "tf.compat.v1.distributions.Exponential.copy": true, + "tf.compat.v1.distributions.Exponential.covariance": true, + "tf.compat.v1.distributions.Exponential.cross_entropy": true, + "tf.compat.v1.distributions.Exponential.dtype": true, + "tf.compat.v1.distributions.Exponential.entropy": true, + "tf.compat.v1.distributions.Exponential.event_shape": true, + "tf.compat.v1.distributions.Exponential.event_shape_tensor": true, + "tf.compat.v1.distributions.Exponential.is_scalar_batch": true, + "tf.compat.v1.distributions.Exponential.is_scalar_event": true, + "tf.compat.v1.distributions.Exponential.kl_divergence": true, + "tf.compat.v1.distributions.Exponential.log_cdf": true, + "tf.compat.v1.distributions.Exponential.log_prob": true, + "tf.compat.v1.distributions.Exponential.log_survival_function": true, + "tf.compat.v1.distributions.Exponential.mean": true, + "tf.compat.v1.distributions.Exponential.mode": true, + "tf.compat.v1.distributions.Exponential.name": true, + "tf.compat.v1.distributions.Exponential.param_shapes": true, + "tf.compat.v1.distributions.Exponential.param_static_shapes": true, + "tf.compat.v1.distributions.Exponential.parameters": true, + "tf.compat.v1.distributions.Exponential.prob": true, + "tf.compat.v1.distributions.Exponential.quantile": true, + "tf.compat.v1.distributions.Exponential.rate": true, + "tf.compat.v1.distributions.Exponential.reparameterization_type": true, + "tf.compat.v1.distributions.Exponential.sample": true, + "tf.compat.v1.distributions.Exponential.stddev": true, + "tf.compat.v1.distributions.Exponential.survival_function": true, + "tf.compat.v1.distributions.Exponential.validate_args": true, + "tf.compat.v1.distributions.Exponential.variance": true, + "tf.compat.v1.distributions.FULLY_REPARAMETERIZED": true, + "tf.compat.v1.distributions.Gamma": false, + "tf.compat.v1.distributions.Gamma.__eq__": true, + "tf.compat.v1.distributions.Gamma.__ge__": true, + "tf.compat.v1.distributions.Gamma.__gt__": true, + "tf.compat.v1.distributions.Gamma.__init__": true, + "tf.compat.v1.distributions.Gamma.__le__": true, + "tf.compat.v1.distributions.Gamma.__lt__": true, + "tf.compat.v1.distributions.Gamma.__ne__": true, + "tf.compat.v1.distributions.Gamma.__new__": true, + "tf.compat.v1.distributions.Gamma.allow_nan_stats": true, + "tf.compat.v1.distributions.Gamma.batch_shape": true, + "tf.compat.v1.distributions.Gamma.batch_shape_tensor": true, + "tf.compat.v1.distributions.Gamma.cdf": true, + "tf.compat.v1.distributions.Gamma.concentration": true, + "tf.compat.v1.distributions.Gamma.copy": true, + "tf.compat.v1.distributions.Gamma.covariance": true, + "tf.compat.v1.distributions.Gamma.cross_entropy": true, + "tf.compat.v1.distributions.Gamma.dtype": true, + "tf.compat.v1.distributions.Gamma.entropy": true, + "tf.compat.v1.distributions.Gamma.event_shape": true, + "tf.compat.v1.distributions.Gamma.event_shape_tensor": true, + "tf.compat.v1.distributions.Gamma.is_scalar_batch": true, + "tf.compat.v1.distributions.Gamma.is_scalar_event": true, + "tf.compat.v1.distributions.Gamma.kl_divergence": true, + "tf.compat.v1.distributions.Gamma.log_cdf": true, + "tf.compat.v1.distributions.Gamma.log_prob": true, 
+ "tf.compat.v1.distributions.Gamma.log_survival_function": true, + "tf.compat.v1.distributions.Gamma.mean": true, + "tf.compat.v1.distributions.Gamma.mode": true, + "tf.compat.v1.distributions.Gamma.name": true, + "tf.compat.v1.distributions.Gamma.param_shapes": true, + "tf.compat.v1.distributions.Gamma.param_static_shapes": true, + "tf.compat.v1.distributions.Gamma.parameters": true, + "tf.compat.v1.distributions.Gamma.prob": true, + "tf.compat.v1.distributions.Gamma.quantile": true, + "tf.compat.v1.distributions.Gamma.rate": true, + "tf.compat.v1.distributions.Gamma.reparameterization_type": true, + "tf.compat.v1.distributions.Gamma.sample": true, + "tf.compat.v1.distributions.Gamma.stddev": true, + "tf.compat.v1.distributions.Gamma.survival_function": true, + "tf.compat.v1.distributions.Gamma.validate_args": true, + "tf.compat.v1.distributions.Gamma.variance": true, + "tf.compat.v1.distributions.Laplace": false, + "tf.compat.v1.distributions.Laplace.__eq__": true, + "tf.compat.v1.distributions.Laplace.__ge__": true, + "tf.compat.v1.distributions.Laplace.__gt__": true, + "tf.compat.v1.distributions.Laplace.__init__": true, + "tf.compat.v1.distributions.Laplace.__le__": true, + "tf.compat.v1.distributions.Laplace.__lt__": true, + "tf.compat.v1.distributions.Laplace.__ne__": true, + "tf.compat.v1.distributions.Laplace.__new__": true, + "tf.compat.v1.distributions.Laplace.allow_nan_stats": true, + "tf.compat.v1.distributions.Laplace.batch_shape": true, + "tf.compat.v1.distributions.Laplace.batch_shape_tensor": true, + "tf.compat.v1.distributions.Laplace.cdf": true, + "tf.compat.v1.distributions.Laplace.copy": true, + "tf.compat.v1.distributions.Laplace.covariance": true, + "tf.compat.v1.distributions.Laplace.cross_entropy": true, + "tf.compat.v1.distributions.Laplace.dtype": true, + "tf.compat.v1.distributions.Laplace.entropy": true, + "tf.compat.v1.distributions.Laplace.event_shape": true, + "tf.compat.v1.distributions.Laplace.event_shape_tensor": true, + "tf.compat.v1.distributions.Laplace.is_scalar_batch": true, + "tf.compat.v1.distributions.Laplace.is_scalar_event": true, + "tf.compat.v1.distributions.Laplace.kl_divergence": true, + "tf.compat.v1.distributions.Laplace.loc": true, + "tf.compat.v1.distributions.Laplace.log_cdf": true, + "tf.compat.v1.distributions.Laplace.log_prob": true, + "tf.compat.v1.distributions.Laplace.log_survival_function": true, + "tf.compat.v1.distributions.Laplace.mean": true, + "tf.compat.v1.distributions.Laplace.mode": true, + "tf.compat.v1.distributions.Laplace.name": true, + "tf.compat.v1.distributions.Laplace.param_shapes": true, + "tf.compat.v1.distributions.Laplace.param_static_shapes": true, + "tf.compat.v1.distributions.Laplace.parameters": true, + "tf.compat.v1.distributions.Laplace.prob": true, + "tf.compat.v1.distributions.Laplace.quantile": true, + "tf.compat.v1.distributions.Laplace.reparameterization_type": true, + "tf.compat.v1.distributions.Laplace.sample": true, + "tf.compat.v1.distributions.Laplace.scale": true, + "tf.compat.v1.distributions.Laplace.stddev": true, + "tf.compat.v1.distributions.Laplace.survival_function": true, + "tf.compat.v1.distributions.Laplace.validate_args": true, + "tf.compat.v1.distributions.Laplace.variance": true, + "tf.compat.v1.distributions.Multinomial": false, + "tf.compat.v1.distributions.Multinomial.__eq__": true, + "tf.compat.v1.distributions.Multinomial.__ge__": true, + "tf.compat.v1.distributions.Multinomial.__gt__": true, + "tf.compat.v1.distributions.Multinomial.__init__": true, + 
"tf.compat.v1.distributions.Multinomial.__le__": true, + "tf.compat.v1.distributions.Multinomial.__lt__": true, + "tf.compat.v1.distributions.Multinomial.__ne__": true, + "tf.compat.v1.distributions.Multinomial.__new__": true, + "tf.compat.v1.distributions.Multinomial.allow_nan_stats": true, + "tf.compat.v1.distributions.Multinomial.batch_shape": true, + "tf.compat.v1.distributions.Multinomial.batch_shape_tensor": true, + "tf.compat.v1.distributions.Multinomial.cdf": true, + "tf.compat.v1.distributions.Multinomial.copy": true, + "tf.compat.v1.distributions.Multinomial.covariance": true, + "tf.compat.v1.distributions.Multinomial.cross_entropy": true, + "tf.compat.v1.distributions.Multinomial.dtype": true, + "tf.compat.v1.distributions.Multinomial.entropy": true, + "tf.compat.v1.distributions.Multinomial.event_shape": true, + "tf.compat.v1.distributions.Multinomial.event_shape_tensor": true, + "tf.compat.v1.distributions.Multinomial.is_scalar_batch": true, + "tf.compat.v1.distributions.Multinomial.is_scalar_event": true, + "tf.compat.v1.distributions.Multinomial.kl_divergence": true, + "tf.compat.v1.distributions.Multinomial.log_cdf": true, + "tf.compat.v1.distributions.Multinomial.log_prob": true, + "tf.compat.v1.distributions.Multinomial.log_survival_function": true, + "tf.compat.v1.distributions.Multinomial.logits": true, + "tf.compat.v1.distributions.Multinomial.mean": true, + "tf.compat.v1.distributions.Multinomial.mode": true, + "tf.compat.v1.distributions.Multinomial.name": true, + "tf.compat.v1.distributions.Multinomial.param_shapes": true, + "tf.compat.v1.distributions.Multinomial.param_static_shapes": true, + "tf.compat.v1.distributions.Multinomial.parameters": true, + "tf.compat.v1.distributions.Multinomial.prob": true, + "tf.compat.v1.distributions.Multinomial.probs": true, + "tf.compat.v1.distributions.Multinomial.quantile": true, + "tf.compat.v1.distributions.Multinomial.reparameterization_type": true, + "tf.compat.v1.distributions.Multinomial.sample": true, + "tf.compat.v1.distributions.Multinomial.stddev": true, + "tf.compat.v1.distributions.Multinomial.survival_function": true, + "tf.compat.v1.distributions.Multinomial.total_count": true, + "tf.compat.v1.distributions.Multinomial.validate_args": true, + "tf.compat.v1.distributions.Multinomial.variance": true, + "tf.compat.v1.distributions.NOT_REPARAMETERIZED": true, + "tf.compat.v1.distributions.Normal": false, + "tf.compat.v1.distributions.Normal.__eq__": true, + "tf.compat.v1.distributions.Normal.__ge__": true, + "tf.compat.v1.distributions.Normal.__gt__": true, + "tf.compat.v1.distributions.Normal.__init__": true, + "tf.compat.v1.distributions.Normal.__le__": true, + "tf.compat.v1.distributions.Normal.__lt__": true, + "tf.compat.v1.distributions.Normal.__ne__": true, + "tf.compat.v1.distributions.Normal.__new__": true, + "tf.compat.v1.distributions.Normal.allow_nan_stats": true, + "tf.compat.v1.distributions.Normal.batch_shape": true, + "tf.compat.v1.distributions.Normal.batch_shape_tensor": true, + "tf.compat.v1.distributions.Normal.cdf": true, + "tf.compat.v1.distributions.Normal.copy": true, + "tf.compat.v1.distributions.Normal.covariance": true, + "tf.compat.v1.distributions.Normal.cross_entropy": true, + "tf.compat.v1.distributions.Normal.dtype": true, + "tf.compat.v1.distributions.Normal.entropy": true, + "tf.compat.v1.distributions.Normal.event_shape": true, + "tf.compat.v1.distributions.Normal.event_shape_tensor": true, + "tf.compat.v1.distributions.Normal.is_scalar_batch": true, + 
"tf.compat.v1.distributions.Normal.is_scalar_event": true, + "tf.compat.v1.distributions.Normal.kl_divergence": true, + "tf.compat.v1.distributions.Normal.loc": true, + "tf.compat.v1.distributions.Normal.log_cdf": true, + "tf.compat.v1.distributions.Normal.log_prob": true, + "tf.compat.v1.distributions.Normal.log_survival_function": true, + "tf.compat.v1.distributions.Normal.mean": true, + "tf.compat.v1.distributions.Normal.mode": true, + "tf.compat.v1.distributions.Normal.name": true, + "tf.compat.v1.distributions.Normal.param_shapes": true, + "tf.compat.v1.distributions.Normal.param_static_shapes": true, + "tf.compat.v1.distributions.Normal.parameters": true, + "tf.compat.v1.distributions.Normal.prob": true, + "tf.compat.v1.distributions.Normal.quantile": true, + "tf.compat.v1.distributions.Normal.reparameterization_type": true, + "tf.compat.v1.distributions.Normal.sample": true, + "tf.compat.v1.distributions.Normal.scale": true, + "tf.compat.v1.distributions.Normal.stddev": true, + "tf.compat.v1.distributions.Normal.survival_function": true, + "tf.compat.v1.distributions.Normal.validate_args": true, + "tf.compat.v1.distributions.Normal.variance": true, + "tf.compat.v1.distributions.RegisterKL": false, + "tf.compat.v1.distributions.RegisterKL.__call__": true, + "tf.compat.v1.distributions.RegisterKL.__eq__": true, + "tf.compat.v1.distributions.RegisterKL.__ge__": true, + "tf.compat.v1.distributions.RegisterKL.__gt__": true, + "tf.compat.v1.distributions.RegisterKL.__init__": true, + "tf.compat.v1.distributions.RegisterKL.__le__": true, + "tf.compat.v1.distributions.RegisterKL.__lt__": true, + "tf.compat.v1.distributions.RegisterKL.__ne__": true, + "tf.compat.v1.distributions.RegisterKL.__new__": true, + "tf.compat.v1.distributions.ReparameterizationType": false, + "tf.compat.v1.distributions.ReparameterizationType.__eq__": true, + "tf.compat.v1.distributions.ReparameterizationType.__ge__": true, + "tf.compat.v1.distributions.ReparameterizationType.__gt__": true, + "tf.compat.v1.distributions.ReparameterizationType.__init__": true, + "tf.compat.v1.distributions.ReparameterizationType.__le__": true, + "tf.compat.v1.distributions.ReparameterizationType.__lt__": true, + "tf.compat.v1.distributions.ReparameterizationType.__ne__": true, + "tf.compat.v1.distributions.ReparameterizationType.__new__": true, + "tf.compat.v1.distributions.StudentT": false, + "tf.compat.v1.distributions.StudentT.__eq__": true, + "tf.compat.v1.distributions.StudentT.__ge__": true, + "tf.compat.v1.distributions.StudentT.__gt__": true, + "tf.compat.v1.distributions.StudentT.__init__": true, + "tf.compat.v1.distributions.StudentT.__le__": true, + "tf.compat.v1.distributions.StudentT.__lt__": true, + "tf.compat.v1.distributions.StudentT.__ne__": true, + "tf.compat.v1.distributions.StudentT.__new__": true, + "tf.compat.v1.distributions.StudentT.allow_nan_stats": true, + "tf.compat.v1.distributions.StudentT.batch_shape": true, + "tf.compat.v1.distributions.StudentT.batch_shape_tensor": true, + "tf.compat.v1.distributions.StudentT.cdf": true, + "tf.compat.v1.distributions.StudentT.copy": true, + "tf.compat.v1.distributions.StudentT.covariance": true, + "tf.compat.v1.distributions.StudentT.cross_entropy": true, + "tf.compat.v1.distributions.StudentT.df": true, + "tf.compat.v1.distributions.StudentT.dtype": true, + "tf.compat.v1.distributions.StudentT.entropy": true, + "tf.compat.v1.distributions.StudentT.event_shape": true, + "tf.compat.v1.distributions.StudentT.event_shape_tensor": true, + 
"tf.compat.v1.distributions.StudentT.is_scalar_batch": true, + "tf.compat.v1.distributions.StudentT.is_scalar_event": true, + "tf.compat.v1.distributions.StudentT.kl_divergence": true, + "tf.compat.v1.distributions.StudentT.loc": true, + "tf.compat.v1.distributions.StudentT.log_cdf": true, + "tf.compat.v1.distributions.StudentT.log_prob": true, + "tf.compat.v1.distributions.StudentT.log_survival_function": true, + "tf.compat.v1.distributions.StudentT.mean": true, + "tf.compat.v1.distributions.StudentT.mode": true, + "tf.compat.v1.distributions.StudentT.name": true, + "tf.compat.v1.distributions.StudentT.param_shapes": true, + "tf.compat.v1.distributions.StudentT.param_static_shapes": true, + "tf.compat.v1.distributions.StudentT.parameters": true, + "tf.compat.v1.distributions.StudentT.prob": true, + "tf.compat.v1.distributions.StudentT.quantile": true, + "tf.compat.v1.distributions.StudentT.reparameterization_type": true, + "tf.compat.v1.distributions.StudentT.sample": true, + "tf.compat.v1.distributions.StudentT.scale": true, + "tf.compat.v1.distributions.StudentT.stddev": true, + "tf.compat.v1.distributions.StudentT.survival_function": true, + "tf.compat.v1.distributions.StudentT.validate_args": true, + "tf.compat.v1.distributions.StudentT.variance": true, + "tf.compat.v1.distributions.Uniform": false, + "tf.compat.v1.distributions.Uniform.__eq__": true, + "tf.compat.v1.distributions.Uniform.__ge__": true, + "tf.compat.v1.distributions.Uniform.__gt__": true, + "tf.compat.v1.distributions.Uniform.__init__": true, + "tf.compat.v1.distributions.Uniform.__le__": true, + "tf.compat.v1.distributions.Uniform.__lt__": true, + "tf.compat.v1.distributions.Uniform.__ne__": true, + "tf.compat.v1.distributions.Uniform.__new__": true, + "tf.compat.v1.distributions.Uniform.allow_nan_stats": true, + "tf.compat.v1.distributions.Uniform.batch_shape": true, + "tf.compat.v1.distributions.Uniform.batch_shape_tensor": true, + "tf.compat.v1.distributions.Uniform.cdf": true, + "tf.compat.v1.distributions.Uniform.copy": true, + "tf.compat.v1.distributions.Uniform.covariance": true, + "tf.compat.v1.distributions.Uniform.cross_entropy": true, + "tf.compat.v1.distributions.Uniform.dtype": true, + "tf.compat.v1.distributions.Uniform.entropy": true, + "tf.compat.v1.distributions.Uniform.event_shape": true, + "tf.compat.v1.distributions.Uniform.event_shape_tensor": true, + "tf.compat.v1.distributions.Uniform.high": true, + "tf.compat.v1.distributions.Uniform.is_scalar_batch": true, + "tf.compat.v1.distributions.Uniform.is_scalar_event": true, + "tf.compat.v1.distributions.Uniform.kl_divergence": true, + "tf.compat.v1.distributions.Uniform.log_cdf": true, + "tf.compat.v1.distributions.Uniform.log_prob": true, + "tf.compat.v1.distributions.Uniform.log_survival_function": true, + "tf.compat.v1.distributions.Uniform.low": true, + "tf.compat.v1.distributions.Uniform.mean": true, + "tf.compat.v1.distributions.Uniform.mode": true, + "tf.compat.v1.distributions.Uniform.name": true, + "tf.compat.v1.distributions.Uniform.param_shapes": true, + "tf.compat.v1.distributions.Uniform.param_static_shapes": true, + "tf.compat.v1.distributions.Uniform.parameters": true, + "tf.compat.v1.distributions.Uniform.prob": true, + "tf.compat.v1.distributions.Uniform.quantile": true, + "tf.compat.v1.distributions.Uniform.range": true, + "tf.compat.v1.distributions.Uniform.reparameterization_type": true, + "tf.compat.v1.distributions.Uniform.sample": true, + "tf.compat.v1.distributions.Uniform.stddev": true, + 
"tf.compat.v1.distributions.Uniform.survival_function": true, + "tf.compat.v1.distributions.Uniform.validate_args": true, + "tf.compat.v1.distributions.Uniform.variance": true, + "tf.compat.v1.distributions.kl_divergence": false, + "tf.compat.v1.div": false, + "tf.compat.v1.div_no_nan": false, + "tf.compat.v1.divide": false, + "tf.compat.v1.double": true, + "tf.compat.v1.dtypes": false, + "tf.compat.v1.dtypes.DType": false, + "tf.compat.v1.dtypes.DType.__eq__": true, + "tf.compat.v1.dtypes.DType.__ge__": true, + "tf.compat.v1.dtypes.DType.__gt__": true, + "tf.compat.v1.dtypes.DType.__init__": true, + "tf.compat.v1.dtypes.DType.__le__": true, + "tf.compat.v1.dtypes.DType.__lt__": true, + "tf.compat.v1.dtypes.DType.__ne__": true, + "tf.compat.v1.dtypes.DType.__new__": true, + "tf.compat.v1.dtypes.DType.as_datatype_enum": true, + "tf.compat.v1.dtypes.DType.as_numpy_dtype": true, + "tf.compat.v1.dtypes.DType.base_dtype": true, + "tf.compat.v1.dtypes.DType.is_bool": true, + "tf.compat.v1.dtypes.DType.is_compatible_with": true, + "tf.compat.v1.dtypes.DType.is_complex": true, + "tf.compat.v1.dtypes.DType.is_floating": true, + "tf.compat.v1.dtypes.DType.is_integer": true, + "tf.compat.v1.dtypes.DType.is_numpy_compatible": true, + "tf.compat.v1.dtypes.DType.is_quantized": true, + "tf.compat.v1.dtypes.DType.is_unsigned": true, + "tf.compat.v1.dtypes.DType.limits": true, + "tf.compat.v1.dtypes.DType.max": true, + "tf.compat.v1.dtypes.DType.min": true, + "tf.compat.v1.dtypes.DType.name": true, + "tf.compat.v1.dtypes.DType.real_dtype": true, + "tf.compat.v1.dtypes.DType.size": true, + "tf.compat.v1.dtypes.QUANTIZED_DTYPES": true, + "tf.compat.v1.dtypes.as_dtype": false, + "tf.compat.v1.dtypes.as_string": false, + "tf.compat.v1.dtypes.bfloat16": true, + "tf.compat.v1.dtypes.bool": true, + "tf.compat.v1.dtypes.cast": false, + "tf.compat.v1.dtypes.complex": false, + "tf.compat.v1.dtypes.complex128": true, + "tf.compat.v1.dtypes.complex64": true, + "tf.compat.v1.dtypes.double": true, + "tf.compat.v1.dtypes.float16": true, + "tf.compat.v1.dtypes.float32": true, + "tf.compat.v1.dtypes.float64": true, + "tf.compat.v1.dtypes.half": true, + "tf.compat.v1.dtypes.int16": true, + "tf.compat.v1.dtypes.int32": true, + "tf.compat.v1.dtypes.int64": true, + "tf.compat.v1.dtypes.int8": true, + "tf.compat.v1.dtypes.qint16": true, + "tf.compat.v1.dtypes.qint32": true, + "tf.compat.v1.dtypes.qint8": true, + "tf.compat.v1.dtypes.quint16": true, + "tf.compat.v1.dtypes.quint8": true, + "tf.compat.v1.dtypes.resource": true, + "tf.compat.v1.dtypes.saturate_cast": false, + "tf.compat.v1.dtypes.string": true, + "tf.compat.v1.dtypes.uint16": true, + "tf.compat.v1.dtypes.uint32": true, + "tf.compat.v1.dtypes.uint64": true, + "tf.compat.v1.dtypes.uint8": true, + "tf.compat.v1.dtypes.variant": true, + "tf.compat.v1.dynamic_partition": false, + "tf.compat.v1.dynamic_stitch": false, + "tf.compat.v1.edit_distance": false, + "tf.compat.v1.einsum": false, + "tf.compat.v1.enable_control_flow_v2": false, + "tf.compat.v1.enable_eager_execution": false, + "tf.compat.v1.enable_resource_variables": false, + "tf.compat.v1.enable_tensor_equality": false, + "tf.compat.v1.enable_v2_behavior": false, + "tf.compat.v1.enable_v2_tensorshape": false, + "tf.compat.v1.encode_base64": false, + "tf.compat.v1.ensure_shape": false, + "tf.compat.v1.equal": false, + "tf.compat.v1.erf": false, + "tf.compat.v1.erfc": false, + "tf.compat.v1.errors": false, + "tf.compat.v1.errors.ABORTED": true, + "tf.compat.v1.errors.ALREADY_EXISTS": true, + 
"tf.compat.v1.errors.AbortedError": false, + "tf.compat.v1.errors.AbortedError.__eq__": true, + "tf.compat.v1.errors.AbortedError.__ge__": true, + "tf.compat.v1.errors.AbortedError.__gt__": true, + "tf.compat.v1.errors.AbortedError.__init__": true, + "tf.compat.v1.errors.AbortedError.__le__": true, + "tf.compat.v1.errors.AbortedError.__lt__": true, + "tf.compat.v1.errors.AbortedError.__ne__": true, + "tf.compat.v1.errors.AbortedError.__new__": true, + "tf.compat.v1.errors.AbortedError.args": true, + "tf.compat.v1.errors.AbortedError.error_code": true, + "tf.compat.v1.errors.AbortedError.message": true, + "tf.compat.v1.errors.AbortedError.node_def": true, + "tf.compat.v1.errors.AbortedError.op": true, + "tf.compat.v1.errors.AbortedError.with_traceback": true, + "tf.compat.v1.errors.AlreadyExistsError": false, + "tf.compat.v1.errors.AlreadyExistsError.__eq__": true, + "tf.compat.v1.errors.AlreadyExistsError.__ge__": true, + "tf.compat.v1.errors.AlreadyExistsError.__gt__": true, + "tf.compat.v1.errors.AlreadyExistsError.__init__": true, + "tf.compat.v1.errors.AlreadyExistsError.__le__": true, + "tf.compat.v1.errors.AlreadyExistsError.__lt__": true, + "tf.compat.v1.errors.AlreadyExistsError.__ne__": true, + "tf.compat.v1.errors.AlreadyExistsError.__new__": true, + "tf.compat.v1.errors.AlreadyExistsError.args": true, + "tf.compat.v1.errors.AlreadyExistsError.error_code": true, + "tf.compat.v1.errors.AlreadyExistsError.message": true, + "tf.compat.v1.errors.AlreadyExistsError.node_def": true, + "tf.compat.v1.errors.AlreadyExistsError.op": true, + "tf.compat.v1.errors.AlreadyExistsError.with_traceback": true, + "tf.compat.v1.errors.CANCELLED": true, + "tf.compat.v1.errors.CancelledError": false, + "tf.compat.v1.errors.CancelledError.__eq__": true, + "tf.compat.v1.errors.CancelledError.__ge__": true, + "tf.compat.v1.errors.CancelledError.__gt__": true, + "tf.compat.v1.errors.CancelledError.__init__": true, + "tf.compat.v1.errors.CancelledError.__le__": true, + "tf.compat.v1.errors.CancelledError.__lt__": true, + "tf.compat.v1.errors.CancelledError.__ne__": true, + "tf.compat.v1.errors.CancelledError.__new__": true, + "tf.compat.v1.errors.CancelledError.args": true, + "tf.compat.v1.errors.CancelledError.error_code": true, + "tf.compat.v1.errors.CancelledError.message": true, + "tf.compat.v1.errors.CancelledError.node_def": true, + "tf.compat.v1.errors.CancelledError.op": true, + "tf.compat.v1.errors.CancelledError.with_traceback": true, + "tf.compat.v1.errors.DATA_LOSS": true, + "tf.compat.v1.errors.DEADLINE_EXCEEDED": true, + "tf.compat.v1.errors.DataLossError": false, + "tf.compat.v1.errors.DataLossError.__eq__": true, + "tf.compat.v1.errors.DataLossError.__ge__": true, + "tf.compat.v1.errors.DataLossError.__gt__": true, + "tf.compat.v1.errors.DataLossError.__init__": true, + "tf.compat.v1.errors.DataLossError.__le__": true, + "tf.compat.v1.errors.DataLossError.__lt__": true, + "tf.compat.v1.errors.DataLossError.__ne__": true, + "tf.compat.v1.errors.DataLossError.__new__": true, + "tf.compat.v1.errors.DataLossError.args": true, + "tf.compat.v1.errors.DataLossError.error_code": true, + "tf.compat.v1.errors.DataLossError.message": true, + "tf.compat.v1.errors.DataLossError.node_def": true, + "tf.compat.v1.errors.DataLossError.op": true, + "tf.compat.v1.errors.DataLossError.with_traceback": true, + "tf.compat.v1.errors.DeadlineExceededError": false, + "tf.compat.v1.errors.DeadlineExceededError.__eq__": true, + "tf.compat.v1.errors.DeadlineExceededError.__ge__": true, + 
"tf.compat.v1.errors.DeadlineExceededError.__gt__": true, + "tf.compat.v1.errors.DeadlineExceededError.__init__": true, + "tf.compat.v1.errors.DeadlineExceededError.__le__": true, + "tf.compat.v1.errors.DeadlineExceededError.__lt__": true, + "tf.compat.v1.errors.DeadlineExceededError.__ne__": true, + "tf.compat.v1.errors.DeadlineExceededError.__new__": true, + "tf.compat.v1.errors.DeadlineExceededError.args": true, + "tf.compat.v1.errors.DeadlineExceededError.error_code": true, + "tf.compat.v1.errors.DeadlineExceededError.message": true, + "tf.compat.v1.errors.DeadlineExceededError.node_def": true, + "tf.compat.v1.errors.DeadlineExceededError.op": true, + "tf.compat.v1.errors.DeadlineExceededError.with_traceback": true, + "tf.compat.v1.errors.FAILED_PRECONDITION": true, + "tf.compat.v1.errors.FailedPreconditionError": false, + "tf.compat.v1.errors.FailedPreconditionError.__eq__": true, + "tf.compat.v1.errors.FailedPreconditionError.__ge__": true, + "tf.compat.v1.errors.FailedPreconditionError.__gt__": true, + "tf.compat.v1.errors.FailedPreconditionError.__init__": true, + "tf.compat.v1.errors.FailedPreconditionError.__le__": true, + "tf.compat.v1.errors.FailedPreconditionError.__lt__": true, + "tf.compat.v1.errors.FailedPreconditionError.__ne__": true, + "tf.compat.v1.errors.FailedPreconditionError.__new__": true, + "tf.compat.v1.errors.FailedPreconditionError.args": true, + "tf.compat.v1.errors.FailedPreconditionError.error_code": true, + "tf.compat.v1.errors.FailedPreconditionError.message": true, + "tf.compat.v1.errors.FailedPreconditionError.node_def": true, + "tf.compat.v1.errors.FailedPreconditionError.op": true, + "tf.compat.v1.errors.FailedPreconditionError.with_traceback": true, + "tf.compat.v1.errors.INTERNAL": true, + "tf.compat.v1.errors.INVALID_ARGUMENT": true, + "tf.compat.v1.errors.InternalError": false, + "tf.compat.v1.errors.InternalError.__eq__": true, + "tf.compat.v1.errors.InternalError.__ge__": true, + "tf.compat.v1.errors.InternalError.__gt__": true, + "tf.compat.v1.errors.InternalError.__init__": true, + "tf.compat.v1.errors.InternalError.__le__": true, + "tf.compat.v1.errors.InternalError.__lt__": true, + "tf.compat.v1.errors.InternalError.__ne__": true, + "tf.compat.v1.errors.InternalError.__new__": true, + "tf.compat.v1.errors.InternalError.args": true, + "tf.compat.v1.errors.InternalError.error_code": true, + "tf.compat.v1.errors.InternalError.message": true, + "tf.compat.v1.errors.InternalError.node_def": true, + "tf.compat.v1.errors.InternalError.op": true, + "tf.compat.v1.errors.InternalError.with_traceback": true, + "tf.compat.v1.errors.InvalidArgumentError": false, + "tf.compat.v1.errors.InvalidArgumentError.__eq__": true, + "tf.compat.v1.errors.InvalidArgumentError.__ge__": true, + "tf.compat.v1.errors.InvalidArgumentError.__gt__": true, + "tf.compat.v1.errors.InvalidArgumentError.__init__": true, + "tf.compat.v1.errors.InvalidArgumentError.__le__": true, + "tf.compat.v1.errors.InvalidArgumentError.__lt__": true, + "tf.compat.v1.errors.InvalidArgumentError.__ne__": true, + "tf.compat.v1.errors.InvalidArgumentError.__new__": true, + "tf.compat.v1.errors.InvalidArgumentError.args": true, + "tf.compat.v1.errors.InvalidArgumentError.error_code": true, + "tf.compat.v1.errors.InvalidArgumentError.message": true, + "tf.compat.v1.errors.InvalidArgumentError.node_def": true, + "tf.compat.v1.errors.InvalidArgumentError.op": true, + "tf.compat.v1.errors.InvalidArgumentError.with_traceback": true, + "tf.compat.v1.errors.NOT_FOUND": true, + 
"tf.compat.v1.errors.NotFoundError": false, + "tf.compat.v1.errors.NotFoundError.__eq__": true, + "tf.compat.v1.errors.NotFoundError.__ge__": true, + "tf.compat.v1.errors.NotFoundError.__gt__": true, + "tf.compat.v1.errors.NotFoundError.__init__": true, + "tf.compat.v1.errors.NotFoundError.__le__": true, + "tf.compat.v1.errors.NotFoundError.__lt__": true, + "tf.compat.v1.errors.NotFoundError.__ne__": true, + "tf.compat.v1.errors.NotFoundError.__new__": true, + "tf.compat.v1.errors.NotFoundError.args": true, + "tf.compat.v1.errors.NotFoundError.error_code": true, + "tf.compat.v1.errors.NotFoundError.message": true, + "tf.compat.v1.errors.NotFoundError.node_def": true, + "tf.compat.v1.errors.NotFoundError.op": true, + "tf.compat.v1.errors.NotFoundError.with_traceback": true, + "tf.compat.v1.errors.OK": true, + "tf.compat.v1.errors.OUT_OF_RANGE": true, + "tf.compat.v1.errors.OpError": false, + "tf.compat.v1.errors.OpError.__eq__": true, + "tf.compat.v1.errors.OpError.__ge__": true, + "tf.compat.v1.errors.OpError.__gt__": true, + "tf.compat.v1.errors.OpError.__init__": true, + "tf.compat.v1.errors.OpError.__le__": true, + "tf.compat.v1.errors.OpError.__lt__": true, + "tf.compat.v1.errors.OpError.__ne__": true, + "tf.compat.v1.errors.OpError.__new__": true, + "tf.compat.v1.errors.OpError.args": true, + "tf.compat.v1.errors.OpError.error_code": true, + "tf.compat.v1.errors.OpError.message": true, + "tf.compat.v1.errors.OpError.node_def": true, + "tf.compat.v1.errors.OpError.op": true, + "tf.compat.v1.errors.OpError.with_traceback": true, + "tf.compat.v1.errors.OutOfRangeError": false, + "tf.compat.v1.errors.OutOfRangeError.__eq__": true, + "tf.compat.v1.errors.OutOfRangeError.__ge__": true, + "tf.compat.v1.errors.OutOfRangeError.__gt__": true, + "tf.compat.v1.errors.OutOfRangeError.__init__": true, + "tf.compat.v1.errors.OutOfRangeError.__le__": true, + "tf.compat.v1.errors.OutOfRangeError.__lt__": true, + "tf.compat.v1.errors.OutOfRangeError.__ne__": true, + "tf.compat.v1.errors.OutOfRangeError.__new__": true, + "tf.compat.v1.errors.OutOfRangeError.args": true, + "tf.compat.v1.errors.OutOfRangeError.error_code": true, + "tf.compat.v1.errors.OutOfRangeError.message": true, + "tf.compat.v1.errors.OutOfRangeError.node_def": true, + "tf.compat.v1.errors.OutOfRangeError.op": true, + "tf.compat.v1.errors.OutOfRangeError.with_traceback": true, + "tf.compat.v1.errors.PERMISSION_DENIED": true, + "tf.compat.v1.errors.PermissionDeniedError": false, + "tf.compat.v1.errors.PermissionDeniedError.__eq__": true, + "tf.compat.v1.errors.PermissionDeniedError.__ge__": true, + "tf.compat.v1.errors.PermissionDeniedError.__gt__": true, + "tf.compat.v1.errors.PermissionDeniedError.__init__": true, + "tf.compat.v1.errors.PermissionDeniedError.__le__": true, + "tf.compat.v1.errors.PermissionDeniedError.__lt__": true, + "tf.compat.v1.errors.PermissionDeniedError.__ne__": true, + "tf.compat.v1.errors.PermissionDeniedError.__new__": true, + "tf.compat.v1.errors.PermissionDeniedError.args": true, + "tf.compat.v1.errors.PermissionDeniedError.error_code": true, + "tf.compat.v1.errors.PermissionDeniedError.message": true, + "tf.compat.v1.errors.PermissionDeniedError.node_def": true, + "tf.compat.v1.errors.PermissionDeniedError.op": true, + "tf.compat.v1.errors.PermissionDeniedError.with_traceback": true, + "tf.compat.v1.errors.RESOURCE_EXHAUSTED": true, + "tf.compat.v1.errors.ResourceExhaustedError": false, + "tf.compat.v1.errors.ResourceExhaustedError.__eq__": true, + "tf.compat.v1.errors.ResourceExhaustedError.__ge__": 
true, + "tf.compat.v1.errors.ResourceExhaustedError.__gt__": true, + "tf.compat.v1.errors.ResourceExhaustedError.__init__": true, + "tf.compat.v1.errors.ResourceExhaustedError.__le__": true, + "tf.compat.v1.errors.ResourceExhaustedError.__lt__": true, + "tf.compat.v1.errors.ResourceExhaustedError.__ne__": true, + "tf.compat.v1.errors.ResourceExhaustedError.__new__": true, + "tf.compat.v1.errors.ResourceExhaustedError.args": true, + "tf.compat.v1.errors.ResourceExhaustedError.error_code": true, + "tf.compat.v1.errors.ResourceExhaustedError.message": true, + "tf.compat.v1.errors.ResourceExhaustedError.node_def": true, + "tf.compat.v1.errors.ResourceExhaustedError.op": true, + "tf.compat.v1.errors.ResourceExhaustedError.with_traceback": true, + "tf.compat.v1.errors.UNAUTHENTICATED": true, + "tf.compat.v1.errors.UNAVAILABLE": true, + "tf.compat.v1.errors.UNIMPLEMENTED": true, + "tf.compat.v1.errors.UNKNOWN": true, + "tf.compat.v1.errors.UnauthenticatedError": false, + "tf.compat.v1.errors.UnauthenticatedError.__eq__": true, + "tf.compat.v1.errors.UnauthenticatedError.__ge__": true, + "tf.compat.v1.errors.UnauthenticatedError.__gt__": true, + "tf.compat.v1.errors.UnauthenticatedError.__init__": true, + "tf.compat.v1.errors.UnauthenticatedError.__le__": true, + "tf.compat.v1.errors.UnauthenticatedError.__lt__": true, + "tf.compat.v1.errors.UnauthenticatedError.__ne__": true, + "tf.compat.v1.errors.UnauthenticatedError.__new__": true, + "tf.compat.v1.errors.UnauthenticatedError.args": true, + "tf.compat.v1.errors.UnauthenticatedError.error_code": true, + "tf.compat.v1.errors.UnauthenticatedError.message": true, + "tf.compat.v1.errors.UnauthenticatedError.node_def": true, + "tf.compat.v1.errors.UnauthenticatedError.op": true, + "tf.compat.v1.errors.UnauthenticatedError.with_traceback": true, + "tf.compat.v1.errors.UnavailableError": false, + "tf.compat.v1.errors.UnavailableError.__eq__": true, + "tf.compat.v1.errors.UnavailableError.__ge__": true, + "tf.compat.v1.errors.UnavailableError.__gt__": true, + "tf.compat.v1.errors.UnavailableError.__init__": true, + "tf.compat.v1.errors.UnavailableError.__le__": true, + "tf.compat.v1.errors.UnavailableError.__lt__": true, + "tf.compat.v1.errors.UnavailableError.__ne__": true, + "tf.compat.v1.errors.UnavailableError.__new__": true, + "tf.compat.v1.errors.UnavailableError.args": true, + "tf.compat.v1.errors.UnavailableError.error_code": true, + "tf.compat.v1.errors.UnavailableError.message": true, + "tf.compat.v1.errors.UnavailableError.node_def": true, + "tf.compat.v1.errors.UnavailableError.op": true, + "tf.compat.v1.errors.UnavailableError.with_traceback": true, + "tf.compat.v1.errors.UnimplementedError": false, + "tf.compat.v1.errors.UnimplementedError.__eq__": true, + "tf.compat.v1.errors.UnimplementedError.__ge__": true, + "tf.compat.v1.errors.UnimplementedError.__gt__": true, + "tf.compat.v1.errors.UnimplementedError.__init__": true, + "tf.compat.v1.errors.UnimplementedError.__le__": true, + "tf.compat.v1.errors.UnimplementedError.__lt__": true, + "tf.compat.v1.errors.UnimplementedError.__ne__": true, + "tf.compat.v1.errors.UnimplementedError.__new__": true, + "tf.compat.v1.errors.UnimplementedError.args": true, + "tf.compat.v1.errors.UnimplementedError.error_code": true, + "tf.compat.v1.errors.UnimplementedError.message": true, + "tf.compat.v1.errors.UnimplementedError.node_def": true, + "tf.compat.v1.errors.UnimplementedError.op": true, + "tf.compat.v1.errors.UnimplementedError.with_traceback": true, + "tf.compat.v1.errors.UnknownError": false, + 
"tf.compat.v1.errors.UnknownError.__eq__": true, + "tf.compat.v1.errors.UnknownError.__ge__": true, + "tf.compat.v1.errors.UnknownError.__gt__": true, + "tf.compat.v1.errors.UnknownError.__init__": true, + "tf.compat.v1.errors.UnknownError.__le__": true, + "tf.compat.v1.errors.UnknownError.__lt__": true, + "tf.compat.v1.errors.UnknownError.__ne__": true, + "tf.compat.v1.errors.UnknownError.__new__": true, + "tf.compat.v1.errors.UnknownError.args": true, + "tf.compat.v1.errors.UnknownError.error_code": true, + "tf.compat.v1.errors.UnknownError.message": true, + "tf.compat.v1.errors.UnknownError.node_def": true, + "tf.compat.v1.errors.UnknownError.op": true, + "tf.compat.v1.errors.UnknownError.with_traceback": true, + "tf.compat.v1.errors.error_code_from_exception_type": false, + "tf.compat.v1.errors.exception_type_from_error_code": false, + "tf.compat.v1.errors.raise_exception_on_not_ok_status": false, + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__enter__": true, + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__eq__": true, + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__exit__": true, + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__ge__": true, + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__gt__": true, + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__init__": true, + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__le__": true, + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__lt__": true, + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__ne__": true, + "tf.compat.v1.errors.raise_exception_on_not_ok_status.__new__": true, + "tf.compat.v1.estimator": false, + "tf.compat.v1.estimator.BaselineClassifier": false, + "tf.compat.v1.estimator.BaselineClassifier.__eq__": true, + "tf.compat.v1.estimator.BaselineClassifier.__ge__": true, + "tf.compat.v1.estimator.BaselineClassifier.__gt__": true, + "tf.compat.v1.estimator.BaselineClassifier.__init__": true, + "tf.compat.v1.estimator.BaselineClassifier.__le__": true, + "tf.compat.v1.estimator.BaselineClassifier.__lt__": true, + "tf.compat.v1.estimator.BaselineClassifier.__ne__": true, + "tf.compat.v1.estimator.BaselineClassifier.__new__": true, + "tf.compat.v1.estimator.BaselineClassifier.config": true, + "tf.compat.v1.estimator.BaselineClassifier.eval_dir": true, + "tf.compat.v1.estimator.BaselineClassifier.evaluate": true, + "tf.compat.v1.estimator.BaselineClassifier.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.BaselineClassifier.export_saved_model": true, + "tf.compat.v1.estimator.BaselineClassifier.export_savedmodel": true, + "tf.compat.v1.estimator.BaselineClassifier.get_variable_names": true, + "tf.compat.v1.estimator.BaselineClassifier.get_variable_value": true, + "tf.compat.v1.estimator.BaselineClassifier.latest_checkpoint": true, + "tf.compat.v1.estimator.BaselineClassifier.model_dir": true, + "tf.compat.v1.estimator.BaselineClassifier.model_fn": true, + "tf.compat.v1.estimator.BaselineClassifier.params": true, + "tf.compat.v1.estimator.BaselineClassifier.predict": true, + "tf.compat.v1.estimator.BaselineClassifier.train": true, + "tf.compat.v1.estimator.BaselineEstimator": false, + "tf.compat.v1.estimator.BaselineEstimator.__eq__": true, + "tf.compat.v1.estimator.BaselineEstimator.__ge__": true, + "tf.compat.v1.estimator.BaselineEstimator.__gt__": true, + "tf.compat.v1.estimator.BaselineEstimator.__init__": true, + "tf.compat.v1.estimator.BaselineEstimator.__le__": true, + "tf.compat.v1.estimator.BaselineEstimator.__lt__": true, + 
"tf.compat.v1.estimator.BaselineEstimator.__ne__": true, + "tf.compat.v1.estimator.BaselineEstimator.__new__": true, + "tf.compat.v1.estimator.BaselineEstimator.config": true, + "tf.compat.v1.estimator.BaselineEstimator.eval_dir": true, + "tf.compat.v1.estimator.BaselineEstimator.evaluate": true, + "tf.compat.v1.estimator.BaselineEstimator.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.BaselineEstimator.export_saved_model": true, + "tf.compat.v1.estimator.BaselineEstimator.export_savedmodel": true, + "tf.compat.v1.estimator.BaselineEstimator.get_variable_names": true, + "tf.compat.v1.estimator.BaselineEstimator.get_variable_value": true, + "tf.compat.v1.estimator.BaselineEstimator.latest_checkpoint": true, + "tf.compat.v1.estimator.BaselineEstimator.model_dir": true, + "tf.compat.v1.estimator.BaselineEstimator.model_fn": true, + "tf.compat.v1.estimator.BaselineEstimator.params": true, + "tf.compat.v1.estimator.BaselineEstimator.predict": true, + "tf.compat.v1.estimator.BaselineEstimator.train": true, + "tf.compat.v1.estimator.BaselineRegressor": false, + "tf.compat.v1.estimator.BaselineRegressor.__eq__": true, + "tf.compat.v1.estimator.BaselineRegressor.__ge__": true, + "tf.compat.v1.estimator.BaselineRegressor.__gt__": true, + "tf.compat.v1.estimator.BaselineRegressor.__init__": true, + "tf.compat.v1.estimator.BaselineRegressor.__le__": true, + "tf.compat.v1.estimator.BaselineRegressor.__lt__": true, + "tf.compat.v1.estimator.BaselineRegressor.__ne__": true, + "tf.compat.v1.estimator.BaselineRegressor.__new__": true, + "tf.compat.v1.estimator.BaselineRegressor.config": true, + "tf.compat.v1.estimator.BaselineRegressor.eval_dir": true, + "tf.compat.v1.estimator.BaselineRegressor.evaluate": true, + "tf.compat.v1.estimator.BaselineRegressor.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.BaselineRegressor.export_saved_model": true, + "tf.compat.v1.estimator.BaselineRegressor.export_savedmodel": true, + "tf.compat.v1.estimator.BaselineRegressor.get_variable_names": true, + "tf.compat.v1.estimator.BaselineRegressor.get_variable_value": true, + "tf.compat.v1.estimator.BaselineRegressor.latest_checkpoint": true, + "tf.compat.v1.estimator.BaselineRegressor.model_dir": true, + "tf.compat.v1.estimator.BaselineRegressor.model_fn": true, + "tf.compat.v1.estimator.BaselineRegressor.params": true, + "tf.compat.v1.estimator.BaselineRegressor.predict": true, + "tf.compat.v1.estimator.BaselineRegressor.train": true, + "tf.compat.v1.estimator.BestExporter": false, + "tf.compat.v1.estimator.BestExporter.__eq__": true, + "tf.compat.v1.estimator.BestExporter.__ge__": true, + "tf.compat.v1.estimator.BestExporter.__gt__": true, + "tf.compat.v1.estimator.BestExporter.__init__": true, + "tf.compat.v1.estimator.BestExporter.__le__": true, + "tf.compat.v1.estimator.BestExporter.__lt__": true, + "tf.compat.v1.estimator.BestExporter.__ne__": true, + "tf.compat.v1.estimator.BestExporter.__new__": true, + "tf.compat.v1.estimator.BestExporter.export": true, + "tf.compat.v1.estimator.BestExporter.name": true, + "tf.compat.v1.estimator.BinaryClassHead": false, + "tf.compat.v1.estimator.BinaryClassHead.__eq__": true, + "tf.compat.v1.estimator.BinaryClassHead.__ge__": true, + "tf.compat.v1.estimator.BinaryClassHead.__gt__": true, + "tf.compat.v1.estimator.BinaryClassHead.__init__": true, + "tf.compat.v1.estimator.BinaryClassHead.__le__": true, + "tf.compat.v1.estimator.BinaryClassHead.__lt__": true, + "tf.compat.v1.estimator.BinaryClassHead.__ne__": true, + 
"tf.compat.v1.estimator.BinaryClassHead.__new__": true, + "tf.compat.v1.estimator.BinaryClassHead.create_estimator_spec": true, + "tf.compat.v1.estimator.BinaryClassHead.logits_dimension": true, + "tf.compat.v1.estimator.BinaryClassHead.loss": true, + "tf.compat.v1.estimator.BinaryClassHead.loss_reduction": true, + "tf.compat.v1.estimator.BinaryClassHead.metrics": true, + "tf.compat.v1.estimator.BinaryClassHead.name": true, + "tf.compat.v1.estimator.BinaryClassHead.predictions": true, + "tf.compat.v1.estimator.BinaryClassHead.update_metrics": true, + "tf.compat.v1.estimator.BoostedTreesClassifier": false, + "tf.compat.v1.estimator.BoostedTreesClassifier.__eq__": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.__ge__": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.__gt__": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.__init__": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.__le__": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.__lt__": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.__ne__": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.__new__": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.config": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.eval_dir": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.evaluate": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.experimental_feature_importances": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.experimental_predict_with_explanations": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.export_saved_model": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.export_savedmodel": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.get_variable_names": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.get_variable_value": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.latest_checkpoint": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.model_dir": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.model_fn": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.params": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.predict": true, + "tf.compat.v1.estimator.BoostedTreesClassifier.train": true, + "tf.compat.v1.estimator.BoostedTreesEstimator": false, + "tf.compat.v1.estimator.BoostedTreesEstimator.__eq__": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.__ge__": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.__gt__": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.__init__": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.__le__": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.__lt__": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.__ne__": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.__new__": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.config": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.eval_dir": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.evaluate": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.experimental_feature_importances": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.experimental_predict_with_explanations": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.export_saved_model": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.export_savedmodel": true, + 
"tf.compat.v1.estimator.BoostedTreesEstimator.get_variable_names": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.get_variable_value": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.latest_checkpoint": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.model_dir": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.model_fn": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.params": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.predict": true, + "tf.compat.v1.estimator.BoostedTreesEstimator.train": true, + "tf.compat.v1.estimator.BoostedTreesRegressor": false, + "tf.compat.v1.estimator.BoostedTreesRegressor.__eq__": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.__ge__": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.__gt__": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.__init__": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.__le__": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.__lt__": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.__ne__": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.__new__": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.config": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.eval_dir": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.evaluate": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.experimental_feature_importances": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.experimental_predict_with_explanations": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.export_saved_model": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.export_savedmodel": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.get_variable_names": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.get_variable_value": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.latest_checkpoint": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.model_dir": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.model_fn": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.params": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.predict": true, + "tf.compat.v1.estimator.BoostedTreesRegressor.train": true, + "tf.compat.v1.estimator.CheckpointSaverHook": false, + "tf.compat.v1.estimator.CheckpointSaverHook.__eq__": true, + "tf.compat.v1.estimator.CheckpointSaverHook.__ge__": true, + "tf.compat.v1.estimator.CheckpointSaverHook.__gt__": true, + "tf.compat.v1.estimator.CheckpointSaverHook.__init__": true, + "tf.compat.v1.estimator.CheckpointSaverHook.__le__": true, + "tf.compat.v1.estimator.CheckpointSaverHook.__lt__": true, + "tf.compat.v1.estimator.CheckpointSaverHook.__ne__": true, + "tf.compat.v1.estimator.CheckpointSaverHook.__new__": true, + "tf.compat.v1.estimator.CheckpointSaverHook.after_create_session": true, + "tf.compat.v1.estimator.CheckpointSaverHook.after_run": true, + "tf.compat.v1.estimator.CheckpointSaverHook.before_run": true, + "tf.compat.v1.estimator.CheckpointSaverHook.begin": true, + "tf.compat.v1.estimator.CheckpointSaverHook.end": true, + "tf.compat.v1.estimator.CheckpointSaverListener": false, + "tf.compat.v1.estimator.CheckpointSaverListener.__eq__": true, + "tf.compat.v1.estimator.CheckpointSaverListener.__ge__": true, + "tf.compat.v1.estimator.CheckpointSaverListener.__gt__": true, + "tf.compat.v1.estimator.CheckpointSaverListener.__init__": true, + "tf.compat.v1.estimator.CheckpointSaverListener.__le__": true, + 
"tf.compat.v1.estimator.CheckpointSaverListener.__lt__": true, + "tf.compat.v1.estimator.CheckpointSaverListener.__ne__": true, + "tf.compat.v1.estimator.CheckpointSaverListener.__new__": true, + "tf.compat.v1.estimator.CheckpointSaverListener.after_save": true, + "tf.compat.v1.estimator.CheckpointSaverListener.before_save": true, + "tf.compat.v1.estimator.CheckpointSaverListener.begin": true, + "tf.compat.v1.estimator.CheckpointSaverListener.end": true, + "tf.compat.v1.estimator.DNNClassifier": false, + "tf.compat.v1.estimator.DNNClassifier.__eq__": true, + "tf.compat.v1.estimator.DNNClassifier.__ge__": true, + "tf.compat.v1.estimator.DNNClassifier.__gt__": true, + "tf.compat.v1.estimator.DNNClassifier.__init__": true, + "tf.compat.v1.estimator.DNNClassifier.__le__": true, + "tf.compat.v1.estimator.DNNClassifier.__lt__": true, + "tf.compat.v1.estimator.DNNClassifier.__ne__": true, + "tf.compat.v1.estimator.DNNClassifier.__new__": true, + "tf.compat.v1.estimator.DNNClassifier.config": true, + "tf.compat.v1.estimator.DNNClassifier.eval_dir": true, + "tf.compat.v1.estimator.DNNClassifier.evaluate": true, + "tf.compat.v1.estimator.DNNClassifier.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.DNNClassifier.export_saved_model": true, + "tf.compat.v1.estimator.DNNClassifier.export_savedmodel": true, + "tf.compat.v1.estimator.DNNClassifier.get_variable_names": true, + "tf.compat.v1.estimator.DNNClassifier.get_variable_value": true, + "tf.compat.v1.estimator.DNNClassifier.latest_checkpoint": true, + "tf.compat.v1.estimator.DNNClassifier.model_dir": true, + "tf.compat.v1.estimator.DNNClassifier.model_fn": true, + "tf.compat.v1.estimator.DNNClassifier.params": true, + "tf.compat.v1.estimator.DNNClassifier.predict": true, + "tf.compat.v1.estimator.DNNClassifier.train": true, + "tf.compat.v1.estimator.DNNEstimator": false, + "tf.compat.v1.estimator.DNNEstimator.__eq__": true, + "tf.compat.v1.estimator.DNNEstimator.__ge__": true, + "tf.compat.v1.estimator.DNNEstimator.__gt__": true, + "tf.compat.v1.estimator.DNNEstimator.__init__": true, + "tf.compat.v1.estimator.DNNEstimator.__le__": true, + "tf.compat.v1.estimator.DNNEstimator.__lt__": true, + "tf.compat.v1.estimator.DNNEstimator.__ne__": true, + "tf.compat.v1.estimator.DNNEstimator.__new__": true, + "tf.compat.v1.estimator.DNNEstimator.config": true, + "tf.compat.v1.estimator.DNNEstimator.eval_dir": true, + "tf.compat.v1.estimator.DNNEstimator.evaluate": true, + "tf.compat.v1.estimator.DNNEstimator.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.DNNEstimator.export_saved_model": true, + "tf.compat.v1.estimator.DNNEstimator.export_savedmodel": true, + "tf.compat.v1.estimator.DNNEstimator.get_variable_names": true, + "tf.compat.v1.estimator.DNNEstimator.get_variable_value": true, + "tf.compat.v1.estimator.DNNEstimator.latest_checkpoint": true, + "tf.compat.v1.estimator.DNNEstimator.model_dir": true, + "tf.compat.v1.estimator.DNNEstimator.model_fn": true, + "tf.compat.v1.estimator.DNNEstimator.params": true, + "tf.compat.v1.estimator.DNNEstimator.predict": true, + "tf.compat.v1.estimator.DNNEstimator.train": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier": false, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__eq__": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__ge__": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__gt__": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__init__": true, + 
"tf.compat.v1.estimator.DNNLinearCombinedClassifier.__le__": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__lt__": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__ne__": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.__new__": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.config": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.eval_dir": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.evaluate": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.export_saved_model": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.export_savedmodel": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.get_variable_names": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.get_variable_value": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.latest_checkpoint": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.model_dir": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.model_fn": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.params": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.predict": true, + "tf.compat.v1.estimator.DNNLinearCombinedClassifier.train": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator": false, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__eq__": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__ge__": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__gt__": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__init__": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__le__": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__lt__": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__ne__": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.__new__": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.config": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.eval_dir": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.evaluate": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.export_saved_model": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.export_savedmodel": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.get_variable_names": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.get_variable_value": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.latest_checkpoint": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.model_dir": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.model_fn": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.params": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.predict": true, + "tf.compat.v1.estimator.DNNLinearCombinedEstimator.train": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor": false, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__eq__": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__ge__": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__gt__": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__init__": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__le__": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__lt__": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.__ne__": true, + 
"tf.compat.v1.estimator.DNNLinearCombinedRegressor.__new__": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.config": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.eval_dir": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.evaluate": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.export_saved_model": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.export_savedmodel": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.get_variable_names": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.get_variable_value": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.latest_checkpoint": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.model_dir": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.model_fn": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.params": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.predict": true, + "tf.compat.v1.estimator.DNNLinearCombinedRegressor.train": true, + "tf.compat.v1.estimator.DNNRegressor": false, + "tf.compat.v1.estimator.DNNRegressor.__eq__": true, + "tf.compat.v1.estimator.DNNRegressor.__ge__": true, + "tf.compat.v1.estimator.DNNRegressor.__gt__": true, + "tf.compat.v1.estimator.DNNRegressor.__init__": true, + "tf.compat.v1.estimator.DNNRegressor.__le__": true, + "tf.compat.v1.estimator.DNNRegressor.__lt__": true, + "tf.compat.v1.estimator.DNNRegressor.__ne__": true, + "tf.compat.v1.estimator.DNNRegressor.__new__": true, + "tf.compat.v1.estimator.DNNRegressor.config": true, + "tf.compat.v1.estimator.DNNRegressor.eval_dir": true, + "tf.compat.v1.estimator.DNNRegressor.evaluate": true, + "tf.compat.v1.estimator.DNNRegressor.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.DNNRegressor.export_saved_model": true, + "tf.compat.v1.estimator.DNNRegressor.export_savedmodel": true, + "tf.compat.v1.estimator.DNNRegressor.get_variable_names": true, + "tf.compat.v1.estimator.DNNRegressor.get_variable_value": true, + "tf.compat.v1.estimator.DNNRegressor.latest_checkpoint": true, + "tf.compat.v1.estimator.DNNRegressor.model_dir": true, + "tf.compat.v1.estimator.DNNRegressor.model_fn": true, + "tf.compat.v1.estimator.DNNRegressor.params": true, + "tf.compat.v1.estimator.DNNRegressor.predict": true, + "tf.compat.v1.estimator.DNNRegressor.train": true, + "tf.compat.v1.estimator.Estimator": false, + "tf.compat.v1.estimator.Estimator.__eq__": true, + "tf.compat.v1.estimator.Estimator.__ge__": true, + "tf.compat.v1.estimator.Estimator.__gt__": true, + "tf.compat.v1.estimator.Estimator.__init__": true, + "tf.compat.v1.estimator.Estimator.__le__": true, + "tf.compat.v1.estimator.Estimator.__lt__": true, + "tf.compat.v1.estimator.Estimator.__ne__": true, + "tf.compat.v1.estimator.Estimator.__new__": true, + "tf.compat.v1.estimator.Estimator.config": true, + "tf.compat.v1.estimator.Estimator.eval_dir": true, + "tf.compat.v1.estimator.Estimator.evaluate": true, + "tf.compat.v1.estimator.Estimator.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.Estimator.export_saved_model": true, + "tf.compat.v1.estimator.Estimator.export_savedmodel": true, + "tf.compat.v1.estimator.Estimator.get_variable_names": true, + "tf.compat.v1.estimator.Estimator.get_variable_value": true, + "tf.compat.v1.estimator.Estimator.latest_checkpoint": true, + "tf.compat.v1.estimator.Estimator.model_dir": true, + 
"tf.compat.v1.estimator.Estimator.model_fn": true, + "tf.compat.v1.estimator.Estimator.params": true, + "tf.compat.v1.estimator.Estimator.predict": true, + "tf.compat.v1.estimator.Estimator.train": true, + "tf.compat.v1.estimator.EstimatorSpec": false, + "tf.compat.v1.estimator.EstimatorSpec.__add__": true, + "tf.compat.v1.estimator.EstimatorSpec.__contains__": true, + "tf.compat.v1.estimator.EstimatorSpec.__eq__": true, + "tf.compat.v1.estimator.EstimatorSpec.__ge__": true, + "tf.compat.v1.estimator.EstimatorSpec.__getitem__": true, + "tf.compat.v1.estimator.EstimatorSpec.__gt__": true, + "tf.compat.v1.estimator.EstimatorSpec.__init__": true, + "tf.compat.v1.estimator.EstimatorSpec.__iter__": true, + "tf.compat.v1.estimator.EstimatorSpec.__le__": true, + "tf.compat.v1.estimator.EstimatorSpec.__len__": true, + "tf.compat.v1.estimator.EstimatorSpec.__lt__": true, + "tf.compat.v1.estimator.EstimatorSpec.__mul__": true, + "tf.compat.v1.estimator.EstimatorSpec.__ne__": true, + "tf.compat.v1.estimator.EstimatorSpec.__new__": true, + "tf.compat.v1.estimator.EstimatorSpec.__rmul__": true, + "tf.compat.v1.estimator.EstimatorSpec.count": true, + "tf.compat.v1.estimator.EstimatorSpec.eval_metric_ops": true, + "tf.compat.v1.estimator.EstimatorSpec.evaluation_hooks": true, + "tf.compat.v1.estimator.EstimatorSpec.export_outputs": true, + "tf.compat.v1.estimator.EstimatorSpec.index": true, + "tf.compat.v1.estimator.EstimatorSpec.loss": true, + "tf.compat.v1.estimator.EstimatorSpec.mode": true, + "tf.compat.v1.estimator.EstimatorSpec.prediction_hooks": true, + "tf.compat.v1.estimator.EstimatorSpec.predictions": true, + "tf.compat.v1.estimator.EstimatorSpec.scaffold": true, + "tf.compat.v1.estimator.EstimatorSpec.train_op": true, + "tf.compat.v1.estimator.EstimatorSpec.training_chief_hooks": true, + "tf.compat.v1.estimator.EstimatorSpec.training_hooks": true, + "tf.compat.v1.estimator.EvalSpec": false, + "tf.compat.v1.estimator.EvalSpec.__add__": true, + "tf.compat.v1.estimator.EvalSpec.__contains__": true, + "tf.compat.v1.estimator.EvalSpec.__eq__": true, + "tf.compat.v1.estimator.EvalSpec.__ge__": true, + "tf.compat.v1.estimator.EvalSpec.__getitem__": true, + "tf.compat.v1.estimator.EvalSpec.__gt__": true, + "tf.compat.v1.estimator.EvalSpec.__init__": true, + "tf.compat.v1.estimator.EvalSpec.__iter__": true, + "tf.compat.v1.estimator.EvalSpec.__le__": true, + "tf.compat.v1.estimator.EvalSpec.__len__": true, + "tf.compat.v1.estimator.EvalSpec.__lt__": true, + "tf.compat.v1.estimator.EvalSpec.__mul__": true, + "tf.compat.v1.estimator.EvalSpec.__ne__": true, + "tf.compat.v1.estimator.EvalSpec.__new__": true, + "tf.compat.v1.estimator.EvalSpec.__rmul__": true, + "tf.compat.v1.estimator.EvalSpec.count": true, + "tf.compat.v1.estimator.EvalSpec.exporters": true, + "tf.compat.v1.estimator.EvalSpec.hooks": true, + "tf.compat.v1.estimator.EvalSpec.index": true, + "tf.compat.v1.estimator.EvalSpec.input_fn": true, + "tf.compat.v1.estimator.EvalSpec.name": true, + "tf.compat.v1.estimator.EvalSpec.start_delay_secs": true, + "tf.compat.v1.estimator.EvalSpec.steps": true, + "tf.compat.v1.estimator.EvalSpec.throttle_secs": true, + "tf.compat.v1.estimator.Exporter": false, + "tf.compat.v1.estimator.Exporter.__eq__": true, + "tf.compat.v1.estimator.Exporter.__ge__": true, + "tf.compat.v1.estimator.Exporter.__gt__": true, + "tf.compat.v1.estimator.Exporter.__init__": true, + "tf.compat.v1.estimator.Exporter.__le__": true, + "tf.compat.v1.estimator.Exporter.__lt__": true, + "tf.compat.v1.estimator.Exporter.__ne__": true, + 
"tf.compat.v1.estimator.Exporter.__new__": true, + "tf.compat.v1.estimator.Exporter.export": true, + "tf.compat.v1.estimator.Exporter.name": true, + "tf.compat.v1.estimator.FeedFnHook": false, + "tf.compat.v1.estimator.FeedFnHook.__eq__": true, + "tf.compat.v1.estimator.FeedFnHook.__ge__": true, + "tf.compat.v1.estimator.FeedFnHook.__gt__": true, + "tf.compat.v1.estimator.FeedFnHook.__init__": true, + "tf.compat.v1.estimator.FeedFnHook.__le__": true, + "tf.compat.v1.estimator.FeedFnHook.__lt__": true, + "tf.compat.v1.estimator.FeedFnHook.__ne__": true, + "tf.compat.v1.estimator.FeedFnHook.__new__": true, + "tf.compat.v1.estimator.FeedFnHook.after_create_session": true, + "tf.compat.v1.estimator.FeedFnHook.after_run": true, + "tf.compat.v1.estimator.FeedFnHook.before_run": true, + "tf.compat.v1.estimator.FeedFnHook.begin": true, + "tf.compat.v1.estimator.FeedFnHook.end": true, + "tf.compat.v1.estimator.FinalExporter": false, + "tf.compat.v1.estimator.FinalExporter.__eq__": true, + "tf.compat.v1.estimator.FinalExporter.__ge__": true, + "tf.compat.v1.estimator.FinalExporter.__gt__": true, + "tf.compat.v1.estimator.FinalExporter.__init__": true, + "tf.compat.v1.estimator.FinalExporter.__le__": true, + "tf.compat.v1.estimator.FinalExporter.__lt__": true, + "tf.compat.v1.estimator.FinalExporter.__ne__": true, + "tf.compat.v1.estimator.FinalExporter.__new__": true, + "tf.compat.v1.estimator.FinalExporter.export": true, + "tf.compat.v1.estimator.FinalExporter.name": true, + "tf.compat.v1.estimator.FinalOpsHook": false, + "tf.compat.v1.estimator.FinalOpsHook.__eq__": true, + "tf.compat.v1.estimator.FinalOpsHook.__ge__": true, + "tf.compat.v1.estimator.FinalOpsHook.__gt__": true, + "tf.compat.v1.estimator.FinalOpsHook.__init__": true, + "tf.compat.v1.estimator.FinalOpsHook.__le__": true, + "tf.compat.v1.estimator.FinalOpsHook.__lt__": true, + "tf.compat.v1.estimator.FinalOpsHook.__ne__": true, + "tf.compat.v1.estimator.FinalOpsHook.__new__": true, + "tf.compat.v1.estimator.FinalOpsHook.after_create_session": true, + "tf.compat.v1.estimator.FinalOpsHook.after_run": true, + "tf.compat.v1.estimator.FinalOpsHook.before_run": true, + "tf.compat.v1.estimator.FinalOpsHook.begin": true, + "tf.compat.v1.estimator.FinalOpsHook.end": true, + "tf.compat.v1.estimator.FinalOpsHook.final_ops_values": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook": false, + "tf.compat.v1.estimator.GlobalStepWaiterHook.__eq__": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.__ge__": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.__gt__": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.__init__": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.__le__": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.__lt__": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.__ne__": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.__new__": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.after_create_session": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.after_run": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.before_run": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.begin": true, + "tf.compat.v1.estimator.GlobalStepWaiterHook.end": true, + "tf.compat.v1.estimator.Head": false, + "tf.compat.v1.estimator.Head.__eq__": true, + "tf.compat.v1.estimator.Head.__ge__": true, + "tf.compat.v1.estimator.Head.__gt__": true, + "tf.compat.v1.estimator.Head.__init__": true, + "tf.compat.v1.estimator.Head.__le__": true, + "tf.compat.v1.estimator.Head.__lt__": true, + 
"tf.compat.v1.estimator.Head.__ne__": true, + "tf.compat.v1.estimator.Head.__new__": true, + "tf.compat.v1.estimator.Head.create_estimator_spec": true, + "tf.compat.v1.estimator.Head.logits_dimension": true, + "tf.compat.v1.estimator.Head.loss": true, + "tf.compat.v1.estimator.Head.loss_reduction": true, + "tf.compat.v1.estimator.Head.metrics": true, + "tf.compat.v1.estimator.Head.name": true, + "tf.compat.v1.estimator.Head.predictions": true, + "tf.compat.v1.estimator.Head.update_metrics": true, + "tf.compat.v1.estimator.LatestExporter": false, + "tf.compat.v1.estimator.LatestExporter.__eq__": true, + "tf.compat.v1.estimator.LatestExporter.__ge__": true, + "tf.compat.v1.estimator.LatestExporter.__gt__": true, + "tf.compat.v1.estimator.LatestExporter.__init__": true, + "tf.compat.v1.estimator.LatestExporter.__le__": true, + "tf.compat.v1.estimator.LatestExporter.__lt__": true, + "tf.compat.v1.estimator.LatestExporter.__ne__": true, + "tf.compat.v1.estimator.LatestExporter.__new__": true, + "tf.compat.v1.estimator.LatestExporter.export": true, + "tf.compat.v1.estimator.LatestExporter.name": true, + "tf.compat.v1.estimator.LinearClassifier": false, + "tf.compat.v1.estimator.LinearClassifier.__eq__": true, + "tf.compat.v1.estimator.LinearClassifier.__ge__": true, + "tf.compat.v1.estimator.LinearClassifier.__gt__": true, + "tf.compat.v1.estimator.LinearClassifier.__init__": true, + "tf.compat.v1.estimator.LinearClassifier.__le__": true, + "tf.compat.v1.estimator.LinearClassifier.__lt__": true, + "tf.compat.v1.estimator.LinearClassifier.__ne__": true, + "tf.compat.v1.estimator.LinearClassifier.__new__": true, + "tf.compat.v1.estimator.LinearClassifier.config": true, + "tf.compat.v1.estimator.LinearClassifier.eval_dir": true, + "tf.compat.v1.estimator.LinearClassifier.evaluate": true, + "tf.compat.v1.estimator.LinearClassifier.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.LinearClassifier.export_saved_model": true, + "tf.compat.v1.estimator.LinearClassifier.export_savedmodel": true, + "tf.compat.v1.estimator.LinearClassifier.get_variable_names": true, + "tf.compat.v1.estimator.LinearClassifier.get_variable_value": true, + "tf.compat.v1.estimator.LinearClassifier.latest_checkpoint": true, + "tf.compat.v1.estimator.LinearClassifier.model_dir": true, + "tf.compat.v1.estimator.LinearClassifier.model_fn": true, + "tf.compat.v1.estimator.LinearClassifier.params": true, + "tf.compat.v1.estimator.LinearClassifier.predict": true, + "tf.compat.v1.estimator.LinearClassifier.train": true, + "tf.compat.v1.estimator.LinearEstimator": false, + "tf.compat.v1.estimator.LinearEstimator.__eq__": true, + "tf.compat.v1.estimator.LinearEstimator.__ge__": true, + "tf.compat.v1.estimator.LinearEstimator.__gt__": true, + "tf.compat.v1.estimator.LinearEstimator.__init__": true, + "tf.compat.v1.estimator.LinearEstimator.__le__": true, + "tf.compat.v1.estimator.LinearEstimator.__lt__": true, + "tf.compat.v1.estimator.LinearEstimator.__ne__": true, + "tf.compat.v1.estimator.LinearEstimator.__new__": true, + "tf.compat.v1.estimator.LinearEstimator.config": true, + "tf.compat.v1.estimator.LinearEstimator.eval_dir": true, + "tf.compat.v1.estimator.LinearEstimator.evaluate": true, + "tf.compat.v1.estimator.LinearEstimator.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.LinearEstimator.export_saved_model": true, + "tf.compat.v1.estimator.LinearEstimator.export_savedmodel": true, + "tf.compat.v1.estimator.LinearEstimator.get_variable_names": true, + 
"tf.compat.v1.estimator.LinearEstimator.get_variable_value": true, + "tf.compat.v1.estimator.LinearEstimator.latest_checkpoint": true, + "tf.compat.v1.estimator.LinearEstimator.model_dir": true, + "tf.compat.v1.estimator.LinearEstimator.model_fn": true, + "tf.compat.v1.estimator.LinearEstimator.params": true, + "tf.compat.v1.estimator.LinearEstimator.predict": true, + "tf.compat.v1.estimator.LinearEstimator.train": true, + "tf.compat.v1.estimator.LinearRegressor": false, + "tf.compat.v1.estimator.LinearRegressor.__eq__": true, + "tf.compat.v1.estimator.LinearRegressor.__ge__": true, + "tf.compat.v1.estimator.LinearRegressor.__gt__": true, + "tf.compat.v1.estimator.LinearRegressor.__init__": true, + "tf.compat.v1.estimator.LinearRegressor.__le__": true, + "tf.compat.v1.estimator.LinearRegressor.__lt__": true, + "tf.compat.v1.estimator.LinearRegressor.__ne__": true, + "tf.compat.v1.estimator.LinearRegressor.__new__": true, + "tf.compat.v1.estimator.LinearRegressor.config": true, + "tf.compat.v1.estimator.LinearRegressor.eval_dir": true, + "tf.compat.v1.estimator.LinearRegressor.evaluate": true, + "tf.compat.v1.estimator.LinearRegressor.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.LinearRegressor.export_saved_model": true, + "tf.compat.v1.estimator.LinearRegressor.export_savedmodel": true, + "tf.compat.v1.estimator.LinearRegressor.get_variable_names": true, + "tf.compat.v1.estimator.LinearRegressor.get_variable_value": true, + "tf.compat.v1.estimator.LinearRegressor.latest_checkpoint": true, + "tf.compat.v1.estimator.LinearRegressor.model_dir": true, + "tf.compat.v1.estimator.LinearRegressor.model_fn": true, + "tf.compat.v1.estimator.LinearRegressor.params": true, + "tf.compat.v1.estimator.LinearRegressor.predict": true, + "tf.compat.v1.estimator.LinearRegressor.train": true, + "tf.compat.v1.estimator.LoggingTensorHook": false, + "tf.compat.v1.estimator.LoggingTensorHook.__eq__": true, + "tf.compat.v1.estimator.LoggingTensorHook.__ge__": true, + "tf.compat.v1.estimator.LoggingTensorHook.__gt__": true, + "tf.compat.v1.estimator.LoggingTensorHook.__init__": true, + "tf.compat.v1.estimator.LoggingTensorHook.__le__": true, + "tf.compat.v1.estimator.LoggingTensorHook.__lt__": true, + "tf.compat.v1.estimator.LoggingTensorHook.__ne__": true, + "tf.compat.v1.estimator.LoggingTensorHook.__new__": true, + "tf.compat.v1.estimator.LoggingTensorHook.after_create_session": true, + "tf.compat.v1.estimator.LoggingTensorHook.after_run": true, + "tf.compat.v1.estimator.LoggingTensorHook.before_run": true, + "tf.compat.v1.estimator.LoggingTensorHook.begin": true, + "tf.compat.v1.estimator.LoggingTensorHook.end": true, + "tf.compat.v1.estimator.LogisticRegressionHead": false, + "tf.compat.v1.estimator.LogisticRegressionHead.__eq__": true, + "tf.compat.v1.estimator.LogisticRegressionHead.__ge__": true, + "tf.compat.v1.estimator.LogisticRegressionHead.__gt__": true, + "tf.compat.v1.estimator.LogisticRegressionHead.__init__": true, + "tf.compat.v1.estimator.LogisticRegressionHead.__le__": true, + "tf.compat.v1.estimator.LogisticRegressionHead.__lt__": true, + "tf.compat.v1.estimator.LogisticRegressionHead.__ne__": true, + "tf.compat.v1.estimator.LogisticRegressionHead.__new__": true, + "tf.compat.v1.estimator.LogisticRegressionHead.create_estimator_spec": true, + "tf.compat.v1.estimator.LogisticRegressionHead.logits_dimension": true, + "tf.compat.v1.estimator.LogisticRegressionHead.loss": true, + "tf.compat.v1.estimator.LogisticRegressionHead.loss_reduction": true, + 
"tf.compat.v1.estimator.LogisticRegressionHead.metrics": true, + "tf.compat.v1.estimator.LogisticRegressionHead.name": true, + "tf.compat.v1.estimator.LogisticRegressionHead.predictions": true, + "tf.compat.v1.estimator.LogisticRegressionHead.update_metrics": true, + "tf.compat.v1.estimator.ModeKeys": false, + "tf.compat.v1.estimator.ModeKeys.EVAL": true, + "tf.compat.v1.estimator.ModeKeys.PREDICT": true, + "tf.compat.v1.estimator.ModeKeys.TRAIN": true, + "tf.compat.v1.estimator.ModeKeys.__eq__": true, + "tf.compat.v1.estimator.ModeKeys.__ge__": true, + "tf.compat.v1.estimator.ModeKeys.__gt__": true, + "tf.compat.v1.estimator.ModeKeys.__init__": true, + "tf.compat.v1.estimator.ModeKeys.__le__": true, + "tf.compat.v1.estimator.ModeKeys.__lt__": true, + "tf.compat.v1.estimator.ModeKeys.__ne__": true, + "tf.compat.v1.estimator.ModeKeys.__new__": true, + "tf.compat.v1.estimator.MultiClassHead": false, + "tf.compat.v1.estimator.MultiClassHead.__eq__": true, + "tf.compat.v1.estimator.MultiClassHead.__ge__": true, + "tf.compat.v1.estimator.MultiClassHead.__gt__": true, + "tf.compat.v1.estimator.MultiClassHead.__init__": true, + "tf.compat.v1.estimator.MultiClassHead.__le__": true, + "tf.compat.v1.estimator.MultiClassHead.__lt__": true, + "tf.compat.v1.estimator.MultiClassHead.__ne__": true, + "tf.compat.v1.estimator.MultiClassHead.__new__": true, + "tf.compat.v1.estimator.MultiClassHead.create_estimator_spec": true, + "tf.compat.v1.estimator.MultiClassHead.logits_dimension": true, + "tf.compat.v1.estimator.MultiClassHead.loss": true, + "tf.compat.v1.estimator.MultiClassHead.loss_reduction": true, + "tf.compat.v1.estimator.MultiClassHead.metrics": true, + "tf.compat.v1.estimator.MultiClassHead.name": true, + "tf.compat.v1.estimator.MultiClassHead.predictions": true, + "tf.compat.v1.estimator.MultiClassHead.update_metrics": true, + "tf.compat.v1.estimator.MultiHead": false, + "tf.compat.v1.estimator.MultiHead.__eq__": true, + "tf.compat.v1.estimator.MultiHead.__ge__": true, + "tf.compat.v1.estimator.MultiHead.__gt__": true, + "tf.compat.v1.estimator.MultiHead.__init__": true, + "tf.compat.v1.estimator.MultiHead.__le__": true, + "tf.compat.v1.estimator.MultiHead.__lt__": true, + "tf.compat.v1.estimator.MultiHead.__ne__": true, + "tf.compat.v1.estimator.MultiHead.__new__": true, + "tf.compat.v1.estimator.MultiHead.create_estimator_spec": true, + "tf.compat.v1.estimator.MultiHead.logits_dimension": true, + "tf.compat.v1.estimator.MultiHead.loss": true, + "tf.compat.v1.estimator.MultiHead.loss_reduction": true, + "tf.compat.v1.estimator.MultiHead.metrics": true, + "tf.compat.v1.estimator.MultiHead.name": true, + "tf.compat.v1.estimator.MultiHead.predictions": true, + "tf.compat.v1.estimator.MultiHead.update_metrics": true, + "tf.compat.v1.estimator.MultiLabelHead": false, + "tf.compat.v1.estimator.MultiLabelHead.__eq__": true, + "tf.compat.v1.estimator.MultiLabelHead.__ge__": true, + "tf.compat.v1.estimator.MultiLabelHead.__gt__": true, + "tf.compat.v1.estimator.MultiLabelHead.__init__": true, + "tf.compat.v1.estimator.MultiLabelHead.__le__": true, + "tf.compat.v1.estimator.MultiLabelHead.__lt__": true, + "tf.compat.v1.estimator.MultiLabelHead.__ne__": true, + "tf.compat.v1.estimator.MultiLabelHead.__new__": true, + "tf.compat.v1.estimator.MultiLabelHead.create_estimator_spec": true, + "tf.compat.v1.estimator.MultiLabelHead.logits_dimension": true, + "tf.compat.v1.estimator.MultiLabelHead.loss": true, + "tf.compat.v1.estimator.MultiLabelHead.loss_reduction": true, + 
"tf.compat.v1.estimator.MultiLabelHead.metrics": true, + "tf.compat.v1.estimator.MultiLabelHead.name": true, + "tf.compat.v1.estimator.MultiLabelHead.predictions": true, + "tf.compat.v1.estimator.MultiLabelHead.update_metrics": true, + "tf.compat.v1.estimator.NanLossDuringTrainingError": false, + "tf.compat.v1.estimator.NanLossDuringTrainingError.__eq__": true, + "tf.compat.v1.estimator.NanLossDuringTrainingError.__ge__": true, + "tf.compat.v1.estimator.NanLossDuringTrainingError.__gt__": true, + "tf.compat.v1.estimator.NanLossDuringTrainingError.__init__": true, + "tf.compat.v1.estimator.NanLossDuringTrainingError.__le__": true, + "tf.compat.v1.estimator.NanLossDuringTrainingError.__lt__": true, + "tf.compat.v1.estimator.NanLossDuringTrainingError.__ne__": true, + "tf.compat.v1.estimator.NanLossDuringTrainingError.__new__": true, + "tf.compat.v1.estimator.NanLossDuringTrainingError.args": true, + "tf.compat.v1.estimator.NanLossDuringTrainingError.with_traceback": true, + "tf.compat.v1.estimator.NanTensorHook": false, + "tf.compat.v1.estimator.NanTensorHook.__eq__": true, + "tf.compat.v1.estimator.NanTensorHook.__ge__": true, + "tf.compat.v1.estimator.NanTensorHook.__gt__": true, + "tf.compat.v1.estimator.NanTensorHook.__init__": true, + "tf.compat.v1.estimator.NanTensorHook.__le__": true, + "tf.compat.v1.estimator.NanTensorHook.__lt__": true, + "tf.compat.v1.estimator.NanTensorHook.__ne__": true, + "tf.compat.v1.estimator.NanTensorHook.__new__": true, + "tf.compat.v1.estimator.NanTensorHook.after_create_session": true, + "tf.compat.v1.estimator.NanTensorHook.after_run": true, + "tf.compat.v1.estimator.NanTensorHook.before_run": true, + "tf.compat.v1.estimator.NanTensorHook.begin": true, + "tf.compat.v1.estimator.NanTensorHook.end": true, + "tf.compat.v1.estimator.PoissonRegressionHead": false, + "tf.compat.v1.estimator.PoissonRegressionHead.__eq__": true, + "tf.compat.v1.estimator.PoissonRegressionHead.__ge__": true, + "tf.compat.v1.estimator.PoissonRegressionHead.__gt__": true, + "tf.compat.v1.estimator.PoissonRegressionHead.__init__": true, + "tf.compat.v1.estimator.PoissonRegressionHead.__le__": true, + "tf.compat.v1.estimator.PoissonRegressionHead.__lt__": true, + "tf.compat.v1.estimator.PoissonRegressionHead.__ne__": true, + "tf.compat.v1.estimator.PoissonRegressionHead.__new__": true, + "tf.compat.v1.estimator.PoissonRegressionHead.create_estimator_spec": true, + "tf.compat.v1.estimator.PoissonRegressionHead.logits_dimension": true, + "tf.compat.v1.estimator.PoissonRegressionHead.loss": true, + "tf.compat.v1.estimator.PoissonRegressionHead.loss_reduction": true, + "tf.compat.v1.estimator.PoissonRegressionHead.metrics": true, + "tf.compat.v1.estimator.PoissonRegressionHead.name": true, + "tf.compat.v1.estimator.PoissonRegressionHead.predictions": true, + "tf.compat.v1.estimator.PoissonRegressionHead.update_metrics": true, + "tf.compat.v1.estimator.ProfilerHook": false, + "tf.compat.v1.estimator.ProfilerHook.__eq__": true, + "tf.compat.v1.estimator.ProfilerHook.__ge__": true, + "tf.compat.v1.estimator.ProfilerHook.__gt__": true, + "tf.compat.v1.estimator.ProfilerHook.__init__": true, + "tf.compat.v1.estimator.ProfilerHook.__le__": true, + "tf.compat.v1.estimator.ProfilerHook.__lt__": true, + "tf.compat.v1.estimator.ProfilerHook.__ne__": true, + "tf.compat.v1.estimator.ProfilerHook.__new__": true, + "tf.compat.v1.estimator.ProfilerHook.after_create_session": true, + "tf.compat.v1.estimator.ProfilerHook.after_run": true, + "tf.compat.v1.estimator.ProfilerHook.before_run": true, + 
"tf.compat.v1.estimator.ProfilerHook.begin": true, + "tf.compat.v1.estimator.ProfilerHook.end": true, + "tf.compat.v1.estimator.RegressionHead": false, + "tf.compat.v1.estimator.RegressionHead.__eq__": true, + "tf.compat.v1.estimator.RegressionHead.__ge__": true, + "tf.compat.v1.estimator.RegressionHead.__gt__": true, + "tf.compat.v1.estimator.RegressionHead.__init__": true, + "tf.compat.v1.estimator.RegressionHead.__le__": true, + "tf.compat.v1.estimator.RegressionHead.__lt__": true, + "tf.compat.v1.estimator.RegressionHead.__ne__": true, + "tf.compat.v1.estimator.RegressionHead.__new__": true, + "tf.compat.v1.estimator.RegressionHead.create_estimator_spec": true, + "tf.compat.v1.estimator.RegressionHead.logits_dimension": true, + "tf.compat.v1.estimator.RegressionHead.loss": true, + "tf.compat.v1.estimator.RegressionHead.loss_reduction": true, + "tf.compat.v1.estimator.RegressionHead.metrics": true, + "tf.compat.v1.estimator.RegressionHead.name": true, + "tf.compat.v1.estimator.RegressionHead.predictions": true, + "tf.compat.v1.estimator.RegressionHead.update_metrics": true, + "tf.compat.v1.estimator.RunConfig": false, + "tf.compat.v1.estimator.RunConfig.__eq__": true, + "tf.compat.v1.estimator.RunConfig.__ge__": true, + "tf.compat.v1.estimator.RunConfig.__gt__": true, + "tf.compat.v1.estimator.RunConfig.__init__": true, + "tf.compat.v1.estimator.RunConfig.__le__": true, + "tf.compat.v1.estimator.RunConfig.__lt__": true, + "tf.compat.v1.estimator.RunConfig.__ne__": true, + "tf.compat.v1.estimator.RunConfig.__new__": true, + "tf.compat.v1.estimator.RunConfig.cluster_spec": true, + "tf.compat.v1.estimator.RunConfig.device_fn": true, + "tf.compat.v1.estimator.RunConfig.eval_distribute": true, + "tf.compat.v1.estimator.RunConfig.evaluation_master": true, + "tf.compat.v1.estimator.RunConfig.experimental_max_worker_delay_secs": true, + "tf.compat.v1.estimator.RunConfig.global_id_in_cluster": true, + "tf.compat.v1.estimator.RunConfig.is_chief": true, + "tf.compat.v1.estimator.RunConfig.keep_checkpoint_every_n_hours": true, + "tf.compat.v1.estimator.RunConfig.keep_checkpoint_max": true, + "tf.compat.v1.estimator.RunConfig.log_step_count_steps": true, + "tf.compat.v1.estimator.RunConfig.master": true, + "tf.compat.v1.estimator.RunConfig.model_dir": true, + "tf.compat.v1.estimator.RunConfig.num_ps_replicas": true, + "tf.compat.v1.estimator.RunConfig.num_worker_replicas": true, + "tf.compat.v1.estimator.RunConfig.protocol": true, + "tf.compat.v1.estimator.RunConfig.replace": true, + "tf.compat.v1.estimator.RunConfig.save_checkpoints_secs": true, + "tf.compat.v1.estimator.RunConfig.save_checkpoints_steps": true, + "tf.compat.v1.estimator.RunConfig.save_summary_steps": true, + "tf.compat.v1.estimator.RunConfig.service": true, + "tf.compat.v1.estimator.RunConfig.session_config": true, + "tf.compat.v1.estimator.RunConfig.session_creation_timeout_secs": true, + "tf.compat.v1.estimator.RunConfig.task_id": true, + "tf.compat.v1.estimator.RunConfig.task_type": true, + "tf.compat.v1.estimator.RunConfig.tf_random_seed": true, + "tf.compat.v1.estimator.RunConfig.train_distribute": true, + "tf.compat.v1.estimator.SecondOrStepTimer": false, + "tf.compat.v1.estimator.SecondOrStepTimer.__eq__": true, + "tf.compat.v1.estimator.SecondOrStepTimer.__ge__": true, + "tf.compat.v1.estimator.SecondOrStepTimer.__gt__": true, + "tf.compat.v1.estimator.SecondOrStepTimer.__init__": true, + "tf.compat.v1.estimator.SecondOrStepTimer.__le__": true, + "tf.compat.v1.estimator.SecondOrStepTimer.__lt__": true, + 
"tf.compat.v1.estimator.SecondOrStepTimer.__ne__": true, + "tf.compat.v1.estimator.SecondOrStepTimer.__new__": true, + "tf.compat.v1.estimator.SecondOrStepTimer.last_triggered_step": true, + "tf.compat.v1.estimator.SecondOrStepTimer.reset": true, + "tf.compat.v1.estimator.SecondOrStepTimer.should_trigger_for_step": true, + "tf.compat.v1.estimator.SecondOrStepTimer.update_last_triggered_step": true, + "tf.compat.v1.estimator.SessionRunArgs": false, + "tf.compat.v1.estimator.SessionRunArgs.__add__": true, + "tf.compat.v1.estimator.SessionRunArgs.__contains__": true, + "tf.compat.v1.estimator.SessionRunArgs.__eq__": true, + "tf.compat.v1.estimator.SessionRunArgs.__ge__": true, + "tf.compat.v1.estimator.SessionRunArgs.__getitem__": true, + "tf.compat.v1.estimator.SessionRunArgs.__gt__": true, + "tf.compat.v1.estimator.SessionRunArgs.__init__": true, + "tf.compat.v1.estimator.SessionRunArgs.__iter__": true, + "tf.compat.v1.estimator.SessionRunArgs.__le__": true, + "tf.compat.v1.estimator.SessionRunArgs.__len__": true, + "tf.compat.v1.estimator.SessionRunArgs.__lt__": true, + "tf.compat.v1.estimator.SessionRunArgs.__mul__": true, + "tf.compat.v1.estimator.SessionRunArgs.__ne__": true, + "tf.compat.v1.estimator.SessionRunArgs.__new__": true, + "tf.compat.v1.estimator.SessionRunArgs.__rmul__": true, + "tf.compat.v1.estimator.SessionRunArgs.count": true, + "tf.compat.v1.estimator.SessionRunArgs.feed_dict": true, + "tf.compat.v1.estimator.SessionRunArgs.fetches": true, + "tf.compat.v1.estimator.SessionRunArgs.index": true, + "tf.compat.v1.estimator.SessionRunArgs.options": true, + "tf.compat.v1.estimator.SessionRunContext": false, + "tf.compat.v1.estimator.SessionRunContext.__eq__": true, + "tf.compat.v1.estimator.SessionRunContext.__ge__": true, + "tf.compat.v1.estimator.SessionRunContext.__gt__": true, + "tf.compat.v1.estimator.SessionRunContext.__init__": true, + "tf.compat.v1.estimator.SessionRunContext.__le__": true, + "tf.compat.v1.estimator.SessionRunContext.__lt__": true, + "tf.compat.v1.estimator.SessionRunContext.__ne__": true, + "tf.compat.v1.estimator.SessionRunContext.__new__": true, + "tf.compat.v1.estimator.SessionRunContext.original_args": true, + "tf.compat.v1.estimator.SessionRunContext.request_stop": true, + "tf.compat.v1.estimator.SessionRunContext.session": true, + "tf.compat.v1.estimator.SessionRunContext.stop_requested": true, + "tf.compat.v1.estimator.SessionRunHook": false, + "tf.compat.v1.estimator.SessionRunHook.__eq__": true, + "tf.compat.v1.estimator.SessionRunHook.__ge__": true, + "tf.compat.v1.estimator.SessionRunHook.__gt__": true, + "tf.compat.v1.estimator.SessionRunHook.__init__": true, + "tf.compat.v1.estimator.SessionRunHook.__le__": true, + "tf.compat.v1.estimator.SessionRunHook.__lt__": true, + "tf.compat.v1.estimator.SessionRunHook.__ne__": true, + "tf.compat.v1.estimator.SessionRunHook.__new__": true, + "tf.compat.v1.estimator.SessionRunHook.after_create_session": true, + "tf.compat.v1.estimator.SessionRunHook.after_run": true, + "tf.compat.v1.estimator.SessionRunHook.before_run": true, + "tf.compat.v1.estimator.SessionRunHook.begin": true, + "tf.compat.v1.estimator.SessionRunHook.end": true, + "tf.compat.v1.estimator.SessionRunValues": false, + "tf.compat.v1.estimator.SessionRunValues.__add__": true, + "tf.compat.v1.estimator.SessionRunValues.__contains__": true, + "tf.compat.v1.estimator.SessionRunValues.__eq__": true, + "tf.compat.v1.estimator.SessionRunValues.__ge__": true, + "tf.compat.v1.estimator.SessionRunValues.__getitem__": true, + 
"tf.compat.v1.estimator.SessionRunValues.__gt__": true, + "tf.compat.v1.estimator.SessionRunValues.__init__": true, + "tf.compat.v1.estimator.SessionRunValues.__iter__": true, + "tf.compat.v1.estimator.SessionRunValues.__le__": true, + "tf.compat.v1.estimator.SessionRunValues.__len__": true, + "tf.compat.v1.estimator.SessionRunValues.__lt__": true, + "tf.compat.v1.estimator.SessionRunValues.__mul__": true, + "tf.compat.v1.estimator.SessionRunValues.__ne__": true, + "tf.compat.v1.estimator.SessionRunValues.__new__": true, + "tf.compat.v1.estimator.SessionRunValues.__rmul__": true, + "tf.compat.v1.estimator.SessionRunValues.count": true, + "tf.compat.v1.estimator.SessionRunValues.index": true, + "tf.compat.v1.estimator.SessionRunValues.options": true, + "tf.compat.v1.estimator.SessionRunValues.results": true, + "tf.compat.v1.estimator.SessionRunValues.run_metadata": true, + "tf.compat.v1.estimator.StepCounterHook": false, + "tf.compat.v1.estimator.StepCounterHook.__eq__": true, + "tf.compat.v1.estimator.StepCounterHook.__ge__": true, + "tf.compat.v1.estimator.StepCounterHook.__gt__": true, + "tf.compat.v1.estimator.StepCounterHook.__init__": true, + "tf.compat.v1.estimator.StepCounterHook.__le__": true, + "tf.compat.v1.estimator.StepCounterHook.__lt__": true, + "tf.compat.v1.estimator.StepCounterHook.__ne__": true, + "tf.compat.v1.estimator.StepCounterHook.__new__": true, + "tf.compat.v1.estimator.StepCounterHook.after_create_session": true, + "tf.compat.v1.estimator.StepCounterHook.after_run": true, + "tf.compat.v1.estimator.StepCounterHook.before_run": true, + "tf.compat.v1.estimator.StepCounterHook.begin": true, + "tf.compat.v1.estimator.StepCounterHook.end": true, + "tf.compat.v1.estimator.StopAtStepHook": false, + "tf.compat.v1.estimator.StopAtStepHook.__eq__": true, + "tf.compat.v1.estimator.StopAtStepHook.__ge__": true, + "tf.compat.v1.estimator.StopAtStepHook.__gt__": true, + "tf.compat.v1.estimator.StopAtStepHook.__init__": true, + "tf.compat.v1.estimator.StopAtStepHook.__le__": true, + "tf.compat.v1.estimator.StopAtStepHook.__lt__": true, + "tf.compat.v1.estimator.StopAtStepHook.__ne__": true, + "tf.compat.v1.estimator.StopAtStepHook.__new__": true, + "tf.compat.v1.estimator.StopAtStepHook.after_create_session": true, + "tf.compat.v1.estimator.StopAtStepHook.after_run": true, + "tf.compat.v1.estimator.StopAtStepHook.before_run": true, + "tf.compat.v1.estimator.StopAtStepHook.begin": true, + "tf.compat.v1.estimator.StopAtStepHook.end": true, + "tf.compat.v1.estimator.SummarySaverHook": false, + "tf.compat.v1.estimator.SummarySaverHook.__eq__": true, + "tf.compat.v1.estimator.SummarySaverHook.__ge__": true, + "tf.compat.v1.estimator.SummarySaverHook.__gt__": true, + "tf.compat.v1.estimator.SummarySaverHook.__init__": true, + "tf.compat.v1.estimator.SummarySaverHook.__le__": true, + "tf.compat.v1.estimator.SummarySaverHook.__lt__": true, + "tf.compat.v1.estimator.SummarySaverHook.__ne__": true, + "tf.compat.v1.estimator.SummarySaverHook.__new__": true, + "tf.compat.v1.estimator.SummarySaverHook.after_create_session": true, + "tf.compat.v1.estimator.SummarySaverHook.after_run": true, + "tf.compat.v1.estimator.SummarySaverHook.before_run": true, + "tf.compat.v1.estimator.SummarySaverHook.begin": true, + "tf.compat.v1.estimator.SummarySaverHook.end": true, + "tf.compat.v1.estimator.TrainSpec": false, + "tf.compat.v1.estimator.TrainSpec.__add__": true, + "tf.compat.v1.estimator.TrainSpec.__contains__": true, + "tf.compat.v1.estimator.TrainSpec.__eq__": true, + 
"tf.compat.v1.estimator.TrainSpec.__ge__": true, + "tf.compat.v1.estimator.TrainSpec.__getitem__": true, + "tf.compat.v1.estimator.TrainSpec.__gt__": true, + "tf.compat.v1.estimator.TrainSpec.__init__": true, + "tf.compat.v1.estimator.TrainSpec.__iter__": true, + "tf.compat.v1.estimator.TrainSpec.__le__": true, + "tf.compat.v1.estimator.TrainSpec.__len__": true, + "tf.compat.v1.estimator.TrainSpec.__lt__": true, + "tf.compat.v1.estimator.TrainSpec.__mul__": true, + "tf.compat.v1.estimator.TrainSpec.__ne__": true, + "tf.compat.v1.estimator.TrainSpec.__new__": true, + "tf.compat.v1.estimator.TrainSpec.__rmul__": true, + "tf.compat.v1.estimator.TrainSpec.count": true, + "tf.compat.v1.estimator.TrainSpec.hooks": true, + "tf.compat.v1.estimator.TrainSpec.index": true, + "tf.compat.v1.estimator.TrainSpec.input_fn": true, + "tf.compat.v1.estimator.TrainSpec.max_steps": true, + "tf.compat.v1.estimator.VocabInfo": false, + "tf.compat.v1.estimator.VocabInfo.__add__": true, + "tf.compat.v1.estimator.VocabInfo.__contains__": true, + "tf.compat.v1.estimator.VocabInfo.__eq__": true, + "tf.compat.v1.estimator.VocabInfo.__ge__": true, + "tf.compat.v1.estimator.VocabInfo.__getitem__": true, + "tf.compat.v1.estimator.VocabInfo.__gt__": true, + "tf.compat.v1.estimator.VocabInfo.__init__": true, + "tf.compat.v1.estimator.VocabInfo.__iter__": true, + "tf.compat.v1.estimator.VocabInfo.__le__": true, + "tf.compat.v1.estimator.VocabInfo.__len__": true, + "tf.compat.v1.estimator.VocabInfo.__lt__": true, + "tf.compat.v1.estimator.VocabInfo.__mul__": true, + "tf.compat.v1.estimator.VocabInfo.__ne__": true, + "tf.compat.v1.estimator.VocabInfo.__new__": true, + "tf.compat.v1.estimator.VocabInfo.__rmul__": true, + "tf.compat.v1.estimator.VocabInfo.axis": true, + "tf.compat.v1.estimator.VocabInfo.backup_initializer": true, + "tf.compat.v1.estimator.VocabInfo.count": true, + "tf.compat.v1.estimator.VocabInfo.index": true, + "tf.compat.v1.estimator.VocabInfo.new_vocab": true, + "tf.compat.v1.estimator.VocabInfo.new_vocab_size": true, + "tf.compat.v1.estimator.VocabInfo.num_oov_buckets": true, + "tf.compat.v1.estimator.VocabInfo.old_vocab": true, + "tf.compat.v1.estimator.VocabInfo.old_vocab_size": true, + "tf.compat.v1.estimator.WarmStartSettings": false, + "tf.compat.v1.estimator.WarmStartSettings.__add__": true, + "tf.compat.v1.estimator.WarmStartSettings.__contains__": true, + "tf.compat.v1.estimator.WarmStartSettings.__eq__": true, + "tf.compat.v1.estimator.WarmStartSettings.__ge__": true, + "tf.compat.v1.estimator.WarmStartSettings.__getitem__": true, + "tf.compat.v1.estimator.WarmStartSettings.__gt__": true, + "tf.compat.v1.estimator.WarmStartSettings.__init__": true, + "tf.compat.v1.estimator.WarmStartSettings.__iter__": true, + "tf.compat.v1.estimator.WarmStartSettings.__le__": true, + "tf.compat.v1.estimator.WarmStartSettings.__len__": true, + "tf.compat.v1.estimator.WarmStartSettings.__lt__": true, + "tf.compat.v1.estimator.WarmStartSettings.__mul__": true, + "tf.compat.v1.estimator.WarmStartSettings.__ne__": true, + "tf.compat.v1.estimator.WarmStartSettings.__new__": true, + "tf.compat.v1.estimator.WarmStartSettings.__rmul__": true, + "tf.compat.v1.estimator.WarmStartSettings.ckpt_to_initialize_from": true, + "tf.compat.v1.estimator.WarmStartSettings.count": true, + "tf.compat.v1.estimator.WarmStartSettings.index": true, + "tf.compat.v1.estimator.WarmStartSettings.var_name_to_prev_var_name": true, + "tf.compat.v1.estimator.WarmStartSettings.var_name_to_vocab_info": true, + 
"tf.compat.v1.estimator.WarmStartSettings.vars_to_warm_start": true, + "tf.compat.v1.estimator.add_metrics": false, + "tf.compat.v1.estimator.classifier_parse_example_spec": false, + "tf.compat.v1.estimator.experimental": false, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook": false, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__eq__": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__ge__": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__gt__": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__init__": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__le__": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__lt__": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__ne__": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.__new__": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.after_create_session": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.after_run": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.before_run": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.begin": true, + "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook.end": true, + "tf.compat.v1.estimator.experimental.KMeans": false, + "tf.compat.v1.estimator.experimental.KMeans.ALL_DISTANCES": true, + "tf.compat.v1.estimator.experimental.KMeans.CLUSTER_CENTERS_VAR_NAME": true, + "tf.compat.v1.estimator.experimental.KMeans.CLUSTER_INDEX": true, + "tf.compat.v1.estimator.experimental.KMeans.COSINE_DISTANCE": true, + "tf.compat.v1.estimator.experimental.KMeans.KMEANS_PLUS_PLUS_INIT": true, + "tf.compat.v1.estimator.experimental.KMeans.RANDOM_INIT": true, + "tf.compat.v1.estimator.experimental.KMeans.SCORE": true, + "tf.compat.v1.estimator.experimental.KMeans.SQUARED_EUCLIDEAN_DISTANCE": true, + "tf.compat.v1.estimator.experimental.KMeans.__eq__": true, + "tf.compat.v1.estimator.experimental.KMeans.__ge__": true, + "tf.compat.v1.estimator.experimental.KMeans.__gt__": true, + "tf.compat.v1.estimator.experimental.KMeans.__init__": true, + "tf.compat.v1.estimator.experimental.KMeans.__le__": true, + "tf.compat.v1.estimator.experimental.KMeans.__lt__": true, + "tf.compat.v1.estimator.experimental.KMeans.__ne__": true, + "tf.compat.v1.estimator.experimental.KMeans.__new__": true, + "tf.compat.v1.estimator.experimental.KMeans.cluster_centers": true, + "tf.compat.v1.estimator.experimental.KMeans.config": true, + "tf.compat.v1.estimator.experimental.KMeans.eval_dir": true, + "tf.compat.v1.estimator.experimental.KMeans.evaluate": true, + "tf.compat.v1.estimator.experimental.KMeans.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.experimental.KMeans.export_saved_model": true, + "tf.compat.v1.estimator.experimental.KMeans.export_savedmodel": true, + "tf.compat.v1.estimator.experimental.KMeans.get_variable_names": true, + "tf.compat.v1.estimator.experimental.KMeans.get_variable_value": true, + "tf.compat.v1.estimator.experimental.KMeans.latest_checkpoint": true, + "tf.compat.v1.estimator.experimental.KMeans.model_dir": true, + "tf.compat.v1.estimator.experimental.KMeans.model_fn": true, + "tf.compat.v1.estimator.experimental.KMeans.params": true, + "tf.compat.v1.estimator.experimental.KMeans.predict": true, + "tf.compat.v1.estimator.experimental.KMeans.predict_cluster_index": true, + "tf.compat.v1.estimator.experimental.KMeans.score": true, + 
"tf.compat.v1.estimator.experimental.KMeans.train": true, + "tf.compat.v1.estimator.experimental.KMeans.transform": true, + "tf.compat.v1.estimator.experimental.LinearSDCA": false, + "tf.compat.v1.estimator.experimental.LinearSDCA.__eq__": true, + "tf.compat.v1.estimator.experimental.LinearSDCA.__ge__": true, + "tf.compat.v1.estimator.experimental.LinearSDCA.__gt__": true, + "tf.compat.v1.estimator.experimental.LinearSDCA.__init__": true, + "tf.compat.v1.estimator.experimental.LinearSDCA.__le__": true, + "tf.compat.v1.estimator.experimental.LinearSDCA.__lt__": true, + "tf.compat.v1.estimator.experimental.LinearSDCA.__ne__": true, + "tf.compat.v1.estimator.experimental.LinearSDCA.__new__": true, + "tf.compat.v1.estimator.experimental.LinearSDCA.get_train_step": true, + "tf.compat.v1.estimator.experimental.build_raw_supervised_input_receiver_fn": false, + "tf.compat.v1.estimator.experimental.call_logit_fn": false, + "tf.compat.v1.estimator.experimental.dnn_logit_fn_builder": false, + "tf.compat.v1.estimator.experimental.linear_logit_fn_builder": false, + "tf.compat.v1.estimator.experimental.make_early_stopping_hook": false, + "tf.compat.v1.estimator.experimental.make_stop_at_checkpoint_step_hook": false, + "tf.compat.v1.estimator.experimental.stop_if_higher_hook": false, + "tf.compat.v1.estimator.experimental.stop_if_lower_hook": false, + "tf.compat.v1.estimator.experimental.stop_if_no_decrease_hook": false, + "tf.compat.v1.estimator.experimental.stop_if_no_increase_hook": false, + "tf.compat.v1.estimator.export": false, + "tf.compat.v1.estimator.export.ClassificationOutput": false, + "tf.compat.v1.estimator.export.ClassificationOutput.__eq__": true, + "tf.compat.v1.estimator.export.ClassificationOutput.__ge__": true, + "tf.compat.v1.estimator.export.ClassificationOutput.__gt__": true, + "tf.compat.v1.estimator.export.ClassificationOutput.__init__": true, + "tf.compat.v1.estimator.export.ClassificationOutput.__le__": true, + "tf.compat.v1.estimator.export.ClassificationOutput.__lt__": true, + "tf.compat.v1.estimator.export.ClassificationOutput.__ne__": true, + "tf.compat.v1.estimator.export.ClassificationOutput.__new__": true, + "tf.compat.v1.estimator.export.ClassificationOutput.as_signature_def": true, + "tf.compat.v1.estimator.export.ClassificationOutput.classes": true, + "tf.compat.v1.estimator.export.ClassificationOutput.scores": true, + "tf.compat.v1.estimator.export.ExportOutput": false, + "tf.compat.v1.estimator.export.ExportOutput.__eq__": true, + "tf.compat.v1.estimator.export.ExportOutput.__ge__": true, + "tf.compat.v1.estimator.export.ExportOutput.__gt__": true, + "tf.compat.v1.estimator.export.ExportOutput.__init__": true, + "tf.compat.v1.estimator.export.ExportOutput.__le__": true, + "tf.compat.v1.estimator.export.ExportOutput.__lt__": true, + "tf.compat.v1.estimator.export.ExportOutput.__ne__": true, + "tf.compat.v1.estimator.export.ExportOutput.__new__": true, + "tf.compat.v1.estimator.export.ExportOutput.as_signature_def": true, + "tf.compat.v1.estimator.export.PredictOutput": false, + "tf.compat.v1.estimator.export.PredictOutput.__eq__": true, + "tf.compat.v1.estimator.export.PredictOutput.__ge__": true, + "tf.compat.v1.estimator.export.PredictOutput.__gt__": true, + "tf.compat.v1.estimator.export.PredictOutput.__init__": true, + "tf.compat.v1.estimator.export.PredictOutput.__le__": true, + "tf.compat.v1.estimator.export.PredictOutput.__lt__": true, + "tf.compat.v1.estimator.export.PredictOutput.__ne__": true, + "tf.compat.v1.estimator.export.PredictOutput.__new__": true, + 
"tf.compat.v1.estimator.export.PredictOutput.as_signature_def": true, + "tf.compat.v1.estimator.export.PredictOutput.outputs": true, + "tf.compat.v1.estimator.export.RegressionOutput": false, + "tf.compat.v1.estimator.export.RegressionOutput.__eq__": true, + "tf.compat.v1.estimator.export.RegressionOutput.__ge__": true, + "tf.compat.v1.estimator.export.RegressionOutput.__gt__": true, + "tf.compat.v1.estimator.export.RegressionOutput.__init__": true, + "tf.compat.v1.estimator.export.RegressionOutput.__le__": true, + "tf.compat.v1.estimator.export.RegressionOutput.__lt__": true, + "tf.compat.v1.estimator.export.RegressionOutput.__ne__": true, + "tf.compat.v1.estimator.export.RegressionOutput.__new__": true, + "tf.compat.v1.estimator.export.RegressionOutput.as_signature_def": true, + "tf.compat.v1.estimator.export.RegressionOutput.value": true, + "tf.compat.v1.estimator.export.ServingInputReceiver": false, + "tf.compat.v1.estimator.export.ServingInputReceiver.__add__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__contains__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__eq__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__ge__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__getitem__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__gt__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__init__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__iter__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__le__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__len__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__lt__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__mul__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__ne__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__new__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.__rmul__": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.count": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.features": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.index": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.receiver_tensors": true, + "tf.compat.v1.estimator.export.ServingInputReceiver.receiver_tensors_alternatives": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver": false, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__add__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__contains__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__eq__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__ge__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__getitem__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__gt__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__init__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__iter__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__le__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__len__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__lt__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__mul__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__ne__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.__new__": true, + 
"tf.compat.v1.estimator.export.TensorServingInputReceiver.__rmul__": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.count": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.features": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.index": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.receiver_tensors": true, + "tf.compat.v1.estimator.export.TensorServingInputReceiver.receiver_tensors_alternatives": true, + "tf.compat.v1.estimator.export.build_parsing_serving_input_receiver_fn": false, + "tf.compat.v1.estimator.export.build_raw_serving_input_receiver_fn": false, + "tf.compat.v1.estimator.inputs": false, + "tf.compat.v1.estimator.inputs.numpy_input_fn": false, + "tf.compat.v1.estimator.inputs.pandas_input_fn": false, + "tf.compat.v1.estimator.regressor_parse_example_spec": false, + "tf.compat.v1.estimator.tpu": false, + "tf.compat.v1.estimator.tpu.InputPipelineConfig": false, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.BROADCAST": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.PER_HOST_V1": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.PER_HOST_V2": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.PER_SHARD_V1": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.SLICED": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__eq__": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__ge__": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__gt__": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__init__": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__le__": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__lt__": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__ne__": true, + "tf.compat.v1.estimator.tpu.InputPipelineConfig.__new__": true, + "tf.compat.v1.estimator.tpu.RunConfig": false, + "tf.compat.v1.estimator.tpu.RunConfig.__eq__": true, + "tf.compat.v1.estimator.tpu.RunConfig.__ge__": true, + "tf.compat.v1.estimator.tpu.RunConfig.__gt__": true, + "tf.compat.v1.estimator.tpu.RunConfig.__init__": true, + "tf.compat.v1.estimator.tpu.RunConfig.__le__": true, + "tf.compat.v1.estimator.tpu.RunConfig.__lt__": true, + "tf.compat.v1.estimator.tpu.RunConfig.__ne__": true, + "tf.compat.v1.estimator.tpu.RunConfig.__new__": true, + "tf.compat.v1.estimator.tpu.RunConfig.cluster": true, + "tf.compat.v1.estimator.tpu.RunConfig.cluster_spec": true, + "tf.compat.v1.estimator.tpu.RunConfig.device_fn": true, + "tf.compat.v1.estimator.tpu.RunConfig.eval_distribute": true, + "tf.compat.v1.estimator.tpu.RunConfig.evaluation_master": true, + "tf.compat.v1.estimator.tpu.RunConfig.experimental_max_worker_delay_secs": true, + "tf.compat.v1.estimator.tpu.RunConfig.global_id_in_cluster": true, + "tf.compat.v1.estimator.tpu.RunConfig.is_chief": true, + "tf.compat.v1.estimator.tpu.RunConfig.keep_checkpoint_every_n_hours": true, + "tf.compat.v1.estimator.tpu.RunConfig.keep_checkpoint_max": true, + "tf.compat.v1.estimator.tpu.RunConfig.log_step_count_steps": true, + "tf.compat.v1.estimator.tpu.RunConfig.master": true, + "tf.compat.v1.estimator.tpu.RunConfig.model_dir": true, + "tf.compat.v1.estimator.tpu.RunConfig.num_ps_replicas": true, + "tf.compat.v1.estimator.tpu.RunConfig.num_worker_replicas": true, + "tf.compat.v1.estimator.tpu.RunConfig.protocol": true, + "tf.compat.v1.estimator.tpu.RunConfig.replace": true, + "tf.compat.v1.estimator.tpu.RunConfig.save_checkpoints_secs": true, + "tf.compat.v1.estimator.tpu.RunConfig.save_checkpoints_steps": true, + 
"tf.compat.v1.estimator.tpu.RunConfig.save_summary_steps": true, + "tf.compat.v1.estimator.tpu.RunConfig.service": true, + "tf.compat.v1.estimator.tpu.RunConfig.session_config": true, + "tf.compat.v1.estimator.tpu.RunConfig.session_creation_timeout_secs": true, + "tf.compat.v1.estimator.tpu.RunConfig.task_id": true, + "tf.compat.v1.estimator.tpu.RunConfig.task_type": true, + "tf.compat.v1.estimator.tpu.RunConfig.tf_random_seed": true, + "tf.compat.v1.estimator.tpu.RunConfig.tpu_config": true, + "tf.compat.v1.estimator.tpu.RunConfig.train_distribute": true, + "tf.compat.v1.estimator.tpu.TPUConfig": false, + "tf.compat.v1.estimator.tpu.TPUConfig.__add__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__contains__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__eq__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__ge__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__getitem__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__gt__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__init__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__iter__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__le__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__len__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__lt__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__mul__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__ne__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__new__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.__rmul__": true, + "tf.compat.v1.estimator.tpu.TPUConfig.count": true, + "tf.compat.v1.estimator.tpu.TPUConfig.eval_training_input_configuration": true, + "tf.compat.v1.estimator.tpu.TPUConfig.experimental_host_call_every_n_steps": true, + "tf.compat.v1.estimator.tpu.TPUConfig.index": true, + "tf.compat.v1.estimator.tpu.TPUConfig.initial_infeed_sleep_secs": true, + "tf.compat.v1.estimator.tpu.TPUConfig.input_partition_dims": true, + "tf.compat.v1.estimator.tpu.TPUConfig.iterations_per_loop": true, + "tf.compat.v1.estimator.tpu.TPUConfig.num_cores_per_replica": true, + "tf.compat.v1.estimator.tpu.TPUConfig.num_shards": true, + "tf.compat.v1.estimator.tpu.TPUConfig.per_host_input_for_training": true, + "tf.compat.v1.estimator.tpu.TPUConfig.tpu_job_name": true, + "tf.compat.v1.estimator.tpu.TPUEstimator": false, + "tf.compat.v1.estimator.tpu.TPUEstimator.__eq__": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.__ge__": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.__gt__": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.__init__": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.__le__": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.__lt__": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.__ne__": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.__new__": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.config": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.eval_dir": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.evaluate": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.experimental_export_all_saved_models": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.export_saved_model": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.export_savedmodel": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.get_variable_names": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.get_variable_value": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.latest_checkpoint": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.model_dir": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.model_fn": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.params": true, + 
"tf.compat.v1.estimator.tpu.TPUEstimator.predict": true, + "tf.compat.v1.estimator.tpu.TPUEstimator.train": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec": false, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__add__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__contains__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__eq__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__ge__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__getitem__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__gt__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__init__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__iter__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__le__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__len__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__lt__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__mul__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__ne__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__new__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.__rmul__": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.as_estimator_spec": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.count": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.eval_metrics": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.evaluation_hooks": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.export_outputs": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.host_call": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.index": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.loss": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.mode": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.prediction_hooks": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.predictions": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.scaffold_fn": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.train_op": true, + "tf.compat.v1.estimator.tpu.TPUEstimatorSpec.training_hooks": true, + "tf.compat.v1.estimator.tpu.experimental": false, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec": false, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__add__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__contains__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__eq__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__ge__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__getitem__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__gt__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__init__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__iter__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__le__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__len__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__lt__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__mul__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__ne__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__new__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.__rmul__": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.clipping_limit": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.count": true, + 
"tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.experimental_gradient_multiplier_fn": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.feature_columns": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.feature_to_config_dict": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.index": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.optimization_parameters": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.partition_strategy": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.pipeline_execution_with_tensor_core": true, + "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec.table_to_config_dict": true, + "tf.compat.v1.estimator.train_and_evaluate": false, + "tf.compat.v1.executing_eagerly": false, + "tf.compat.v1.executing_eagerly_outside_functions": false, + "tf.compat.v1.exp": false, + "tf.compat.v1.expand_dims": false, + "tf.compat.v1.experimental": false, + "tf.compat.v1.experimental.async_clear_error": false, + "tf.compat.v1.experimental.async_scope": false, + "tf.compat.v1.experimental.function_executor_type": false, + "tf.compat.v1.experimental.output_all_intermediates": false, + "tf.compat.v1.expm1": false, + "tf.compat.v1.extract_image_patches": false, + "tf.compat.v1.extract_volume_patches": false, + "tf.compat.v1.eye": false, + "tf.compat.v1.fake_quant_with_min_max_args": false, + "tf.compat.v1.fake_quant_with_min_max_args_gradient": false, + "tf.compat.v1.fake_quant_with_min_max_vars": false, + "tf.compat.v1.fake_quant_with_min_max_vars_gradient": false, + "tf.compat.v1.fake_quant_with_min_max_vars_per_channel": false, + "tf.compat.v1.fake_quant_with_min_max_vars_per_channel_gradient": false, + "tf.compat.v1.feature_column": false, + "tf.compat.v1.feature_column.bucketized_column": false, + "tf.compat.v1.feature_column.categorical_column_with_hash_bucket": false, + "tf.compat.v1.feature_column.categorical_column_with_identity": false, + "tf.compat.v1.feature_column.categorical_column_with_vocabulary_file": false, + "tf.compat.v1.feature_column.categorical_column_with_vocabulary_list": false, + "tf.compat.v1.feature_column.crossed_column": false, + "tf.compat.v1.feature_column.embedding_column": false, + "tf.compat.v1.feature_column.indicator_column": false, + "tf.compat.v1.feature_column.input_layer": false, + "tf.compat.v1.feature_column.linear_model": false, + "tf.compat.v1.feature_column.make_parse_example_spec": false, + "tf.compat.v1.feature_column.numeric_column": false, + "tf.compat.v1.feature_column.sequence_categorical_column_with_hash_bucket": false, + "tf.compat.v1.feature_column.sequence_categorical_column_with_identity": false, + "tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_file": false, + "tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_list": false, + "tf.compat.v1.feature_column.sequence_numeric_column": false, + "tf.compat.v1.feature_column.shared_embedding_columns": false, + "tf.compat.v1.feature_column.weighted_categorical_column": false, + "tf.compat.v1.fft": false, + "tf.compat.v1.fft2d": false, + "tf.compat.v1.fft3d": false, + "tf.compat.v1.fill": false, + "tf.compat.v1.fingerprint": false, + "tf.compat.v1.fixed_size_partitioner": false, + "tf.compat.v1.flags": false, + "tf.compat.v1.flags.ArgumentParser": false, + "tf.compat.v1.flags.ArgumentParser.__eq__": true, + "tf.compat.v1.flags.ArgumentParser.__ge__": true, + "tf.compat.v1.flags.ArgumentParser.__gt__": true, + 
"tf.compat.v1.flags.ArgumentParser.__init__": true, + "tf.compat.v1.flags.ArgumentParser.__le__": true, + "tf.compat.v1.flags.ArgumentParser.__lt__": true, + "tf.compat.v1.flags.ArgumentParser.__ne__": true, + "tf.compat.v1.flags.ArgumentParser.__new__": true, + "tf.compat.v1.flags.ArgumentParser.flag_type": true, + "tf.compat.v1.flags.ArgumentParser.parse": true, + "tf.compat.v1.flags.ArgumentParser.syntactic_help": true, + "tf.compat.v1.flags.ArgumentSerializer": false, + "tf.compat.v1.flags.ArgumentSerializer.__eq__": true, + "tf.compat.v1.flags.ArgumentSerializer.__ge__": true, + "tf.compat.v1.flags.ArgumentSerializer.__gt__": true, + "tf.compat.v1.flags.ArgumentSerializer.__init__": true, + "tf.compat.v1.flags.ArgumentSerializer.__le__": true, + "tf.compat.v1.flags.ArgumentSerializer.__lt__": true, + "tf.compat.v1.flags.ArgumentSerializer.__ne__": true, + "tf.compat.v1.flags.ArgumentSerializer.__new__": true, + "tf.compat.v1.flags.ArgumentSerializer.serialize": true, + "tf.compat.v1.flags.BaseListParser": false, + "tf.compat.v1.flags.BaseListParser.__eq__": true, + "tf.compat.v1.flags.BaseListParser.__ge__": true, + "tf.compat.v1.flags.BaseListParser.__gt__": true, + "tf.compat.v1.flags.BaseListParser.__init__": true, + "tf.compat.v1.flags.BaseListParser.__le__": true, + "tf.compat.v1.flags.BaseListParser.__lt__": true, + "tf.compat.v1.flags.BaseListParser.__ne__": true, + "tf.compat.v1.flags.BaseListParser.__new__": true, + "tf.compat.v1.flags.BaseListParser.flag_type": true, + "tf.compat.v1.flags.BaseListParser.parse": true, + "tf.compat.v1.flags.BaseListParser.syntactic_help": true, + "tf.compat.v1.flags.BooleanFlag": false, + "tf.compat.v1.flags.BooleanFlag.__eq__": true, + "tf.compat.v1.flags.BooleanFlag.__ge__": true, + "tf.compat.v1.flags.BooleanFlag.__gt__": true, + "tf.compat.v1.flags.BooleanFlag.__init__": true, + "tf.compat.v1.flags.BooleanFlag.__le__": true, + "tf.compat.v1.flags.BooleanFlag.__lt__": true, + "tf.compat.v1.flags.BooleanFlag.__ne__": true, + "tf.compat.v1.flags.BooleanFlag.__new__": true, + "tf.compat.v1.flags.BooleanFlag.flag_type": true, + "tf.compat.v1.flags.BooleanFlag.parse": true, + "tf.compat.v1.flags.BooleanFlag.serialize": true, + "tf.compat.v1.flags.BooleanFlag.unparse": true, + "tf.compat.v1.flags.BooleanFlag.value": true, + "tf.compat.v1.flags.BooleanParser": false, + "tf.compat.v1.flags.BooleanParser.__eq__": true, + "tf.compat.v1.flags.BooleanParser.__ge__": true, + "tf.compat.v1.flags.BooleanParser.__gt__": true, + "tf.compat.v1.flags.BooleanParser.__init__": true, + "tf.compat.v1.flags.BooleanParser.__le__": true, + "tf.compat.v1.flags.BooleanParser.__lt__": true, + "tf.compat.v1.flags.BooleanParser.__ne__": true, + "tf.compat.v1.flags.BooleanParser.__new__": true, + "tf.compat.v1.flags.BooleanParser.flag_type": true, + "tf.compat.v1.flags.BooleanParser.parse": true, + "tf.compat.v1.flags.BooleanParser.syntactic_help": true, + "tf.compat.v1.flags.CantOpenFlagFileError": false, + "tf.compat.v1.flags.CantOpenFlagFileError.__eq__": true, + "tf.compat.v1.flags.CantOpenFlagFileError.__ge__": true, + "tf.compat.v1.flags.CantOpenFlagFileError.__gt__": true, + "tf.compat.v1.flags.CantOpenFlagFileError.__init__": true, + "tf.compat.v1.flags.CantOpenFlagFileError.__le__": true, + "tf.compat.v1.flags.CantOpenFlagFileError.__lt__": true, + "tf.compat.v1.flags.CantOpenFlagFileError.__ne__": true, + "tf.compat.v1.flags.CantOpenFlagFileError.__new__": true, + "tf.compat.v1.flags.CantOpenFlagFileError.args": true, + 
"tf.compat.v1.flags.CantOpenFlagFileError.with_traceback": true, + "tf.compat.v1.flags.CsvListSerializer": false, + "tf.compat.v1.flags.CsvListSerializer.__eq__": true, + "tf.compat.v1.flags.CsvListSerializer.__ge__": true, + "tf.compat.v1.flags.CsvListSerializer.__gt__": true, + "tf.compat.v1.flags.CsvListSerializer.__init__": true, + "tf.compat.v1.flags.CsvListSerializer.__le__": true, + "tf.compat.v1.flags.CsvListSerializer.__lt__": true, + "tf.compat.v1.flags.CsvListSerializer.__ne__": true, + "tf.compat.v1.flags.CsvListSerializer.__new__": true, + "tf.compat.v1.flags.CsvListSerializer.serialize": true, + "tf.compat.v1.flags.DEFINE": false, + "tf.compat.v1.flags.DEFINE_alias": false, + "tf.compat.v1.flags.DEFINE_bool": false, + "tf.compat.v1.flags.DEFINE_boolean": false, + "tf.compat.v1.flags.DEFINE_enum": false, + "tf.compat.v1.flags.DEFINE_enum_class": false, + "tf.compat.v1.flags.DEFINE_flag": false, + "tf.compat.v1.flags.DEFINE_float": false, + "tf.compat.v1.flags.DEFINE_integer": false, + "tf.compat.v1.flags.DEFINE_list": false, + "tf.compat.v1.flags.DEFINE_multi": false, + "tf.compat.v1.flags.DEFINE_multi_enum": false, + "tf.compat.v1.flags.DEFINE_multi_enum_class": false, + "tf.compat.v1.flags.DEFINE_multi_float": false, + "tf.compat.v1.flags.DEFINE_multi_integer": false, + "tf.compat.v1.flags.DEFINE_multi_string": false, + "tf.compat.v1.flags.DEFINE_spaceseplist": false, + "tf.compat.v1.flags.DEFINE_string": false, + "tf.compat.v1.flags.DuplicateFlagError": false, + "tf.compat.v1.flags.DuplicateFlagError.__eq__": true, + "tf.compat.v1.flags.DuplicateFlagError.__ge__": true, + "tf.compat.v1.flags.DuplicateFlagError.__gt__": true, + "tf.compat.v1.flags.DuplicateFlagError.__init__": true, + "tf.compat.v1.flags.DuplicateFlagError.__le__": true, + "tf.compat.v1.flags.DuplicateFlagError.__lt__": true, + "tf.compat.v1.flags.DuplicateFlagError.__ne__": true, + "tf.compat.v1.flags.DuplicateFlagError.__new__": true, + "tf.compat.v1.flags.DuplicateFlagError.args": true, + "tf.compat.v1.flags.DuplicateFlagError.from_flag": true, + "tf.compat.v1.flags.DuplicateFlagError.with_traceback": true, + "tf.compat.v1.flags.EnumClassFlag": false, + "tf.compat.v1.flags.EnumClassFlag.__eq__": true, + "tf.compat.v1.flags.EnumClassFlag.__ge__": true, + "tf.compat.v1.flags.EnumClassFlag.__gt__": true, + "tf.compat.v1.flags.EnumClassFlag.__init__": true, + "tf.compat.v1.flags.EnumClassFlag.__le__": true, + "tf.compat.v1.flags.EnumClassFlag.__lt__": true, + "tf.compat.v1.flags.EnumClassFlag.__ne__": true, + "tf.compat.v1.flags.EnumClassFlag.__new__": true, + "tf.compat.v1.flags.EnumClassFlag.flag_type": true, + "tf.compat.v1.flags.EnumClassFlag.parse": true, + "tf.compat.v1.flags.EnumClassFlag.serialize": true, + "tf.compat.v1.flags.EnumClassFlag.unparse": true, + "tf.compat.v1.flags.EnumClassFlag.value": true, + "tf.compat.v1.flags.EnumClassParser": false, + "tf.compat.v1.flags.EnumClassParser.__eq__": true, + "tf.compat.v1.flags.EnumClassParser.__ge__": true, + "tf.compat.v1.flags.EnumClassParser.__gt__": true, + "tf.compat.v1.flags.EnumClassParser.__init__": true, + "tf.compat.v1.flags.EnumClassParser.__le__": true, + "tf.compat.v1.flags.EnumClassParser.__lt__": true, + "tf.compat.v1.flags.EnumClassParser.__ne__": true, + "tf.compat.v1.flags.EnumClassParser.__new__": true, + "tf.compat.v1.flags.EnumClassParser.flag_type": true, + "tf.compat.v1.flags.EnumClassParser.parse": true, + "tf.compat.v1.flags.EnumClassParser.syntactic_help": true, + "tf.compat.v1.flags.EnumFlag": false, + 
"tf.compat.v1.flags.EnumFlag.__eq__": true, + "tf.compat.v1.flags.EnumFlag.__ge__": true, + "tf.compat.v1.flags.EnumFlag.__gt__": true, + "tf.compat.v1.flags.EnumFlag.__init__": true, + "tf.compat.v1.flags.EnumFlag.__le__": true, + "tf.compat.v1.flags.EnumFlag.__lt__": true, + "tf.compat.v1.flags.EnumFlag.__ne__": true, + "tf.compat.v1.flags.EnumFlag.__new__": true, + "tf.compat.v1.flags.EnumFlag.flag_type": true, + "tf.compat.v1.flags.EnumFlag.parse": true, + "tf.compat.v1.flags.EnumFlag.serialize": true, + "tf.compat.v1.flags.EnumFlag.unparse": true, + "tf.compat.v1.flags.EnumFlag.value": true, + "tf.compat.v1.flags.EnumParser": false, + "tf.compat.v1.flags.EnumParser.__eq__": true, + "tf.compat.v1.flags.EnumParser.__ge__": true, + "tf.compat.v1.flags.EnumParser.__gt__": true, + "tf.compat.v1.flags.EnumParser.__init__": true, + "tf.compat.v1.flags.EnumParser.__le__": true, + "tf.compat.v1.flags.EnumParser.__lt__": true, + "tf.compat.v1.flags.EnumParser.__ne__": true, + "tf.compat.v1.flags.EnumParser.__new__": true, + "tf.compat.v1.flags.EnumParser.flag_type": true, + "tf.compat.v1.flags.EnumParser.parse": true, + "tf.compat.v1.flags.EnumParser.syntactic_help": true, + "tf.compat.v1.flags.Error": false, + "tf.compat.v1.flags.Error.__eq__": true, + "tf.compat.v1.flags.Error.__ge__": true, + "tf.compat.v1.flags.Error.__gt__": true, + "tf.compat.v1.flags.Error.__init__": true, + "tf.compat.v1.flags.Error.__le__": true, + "tf.compat.v1.flags.Error.__lt__": true, + "tf.compat.v1.flags.Error.__ne__": true, + "tf.compat.v1.flags.Error.__new__": true, + "tf.compat.v1.flags.Error.args": true, + "tf.compat.v1.flags.Error.with_traceback": true, + "tf.compat.v1.flags.FLAGS": false, + "tf.compat.v1.flags.Flag": false, + "tf.compat.v1.flags.Flag.__eq__": true, + "tf.compat.v1.flags.Flag.__ge__": true, + "tf.compat.v1.flags.Flag.__gt__": true, + "tf.compat.v1.flags.Flag.__init__": true, + "tf.compat.v1.flags.Flag.__le__": true, + "tf.compat.v1.flags.Flag.__lt__": true, + "tf.compat.v1.flags.Flag.__ne__": true, + "tf.compat.v1.flags.Flag.__new__": true, + "tf.compat.v1.flags.Flag.flag_type": true, + "tf.compat.v1.flags.Flag.parse": true, + "tf.compat.v1.flags.Flag.serialize": true, + "tf.compat.v1.flags.Flag.unparse": true, + "tf.compat.v1.flags.Flag.value": true, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError": false, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__eq__": true, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__ge__": true, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__gt__": true, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__init__": true, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__le__": true, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__lt__": true, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__ne__": true, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.__new__": true, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.args": true, + "tf.compat.v1.flags.FlagNameConflictsWithMethodError.with_traceback": true, + "tf.compat.v1.flags.FlagValues": false, + "tf.compat.v1.flags.FlagValues.__call__": true, + "tf.compat.v1.flags.FlagValues.__contains__": true, + "tf.compat.v1.flags.FlagValues.__eq__": true, + "tf.compat.v1.flags.FlagValues.__ge__": true, + "tf.compat.v1.flags.FlagValues.__getitem__": true, + "tf.compat.v1.flags.FlagValues.__gt__": true, + "tf.compat.v1.flags.FlagValues.__init__": true, + "tf.compat.v1.flags.FlagValues.__iter__": true, + "tf.compat.v1.flags.FlagValues.__le__": true, 
+ "tf.compat.v1.flags.FlagValues.__len__": true, + "tf.compat.v1.flags.FlagValues.__lt__": true, + "tf.compat.v1.flags.FlagValues.__ne__": true, + "tf.compat.v1.flags.FlagValues.__new__": true, + "tf.compat.v1.flags.FlagValues.append_flag_values": true, + "tf.compat.v1.flags.FlagValues.append_flags_into_file": true, + "tf.compat.v1.flags.FlagValues.find_module_defining_flag": true, + "tf.compat.v1.flags.FlagValues.find_module_id_defining_flag": true, + "tf.compat.v1.flags.FlagValues.flag_values_dict": true, + "tf.compat.v1.flags.FlagValues.flags_by_module_dict": true, + "tf.compat.v1.flags.FlagValues.flags_by_module_id_dict": true, + "tf.compat.v1.flags.FlagValues.flags_into_string": true, + "tf.compat.v1.flags.FlagValues.get_flag_value": true, + "tf.compat.v1.flags.FlagValues.get_help": true, + "tf.compat.v1.flags.FlagValues.get_key_flags_for_module": true, + "tf.compat.v1.flags.FlagValues.is_gnu_getopt": true, + "tf.compat.v1.flags.FlagValues.is_parsed": true, + "tf.compat.v1.flags.FlagValues.key_flags_by_module_dict": true, + "tf.compat.v1.flags.FlagValues.main_module_help": true, + "tf.compat.v1.flags.FlagValues.mark_as_parsed": true, + "tf.compat.v1.flags.FlagValues.module_help": true, + "tf.compat.v1.flags.FlagValues.read_flags_from_files": true, + "tf.compat.v1.flags.FlagValues.register_flag_by_module": true, + "tf.compat.v1.flags.FlagValues.register_flag_by_module_id": true, + "tf.compat.v1.flags.FlagValues.register_key_flag_for_module": true, + "tf.compat.v1.flags.FlagValues.remove_flag_values": true, + "tf.compat.v1.flags.FlagValues.set_default": true, + "tf.compat.v1.flags.FlagValues.set_gnu_getopt": true, + "tf.compat.v1.flags.FlagValues.unparse_flags": true, + "tf.compat.v1.flags.FlagValues.write_help_in_xml_format": true, + "tf.compat.v1.flags.FloatParser": false, + "tf.compat.v1.flags.FloatParser.__eq__": true, + "tf.compat.v1.flags.FloatParser.__ge__": true, + "tf.compat.v1.flags.FloatParser.__gt__": true, + "tf.compat.v1.flags.FloatParser.__init__": true, + "tf.compat.v1.flags.FloatParser.__le__": true, + "tf.compat.v1.flags.FloatParser.__lt__": true, + "tf.compat.v1.flags.FloatParser.__ne__": true, + "tf.compat.v1.flags.FloatParser.__new__": true, + "tf.compat.v1.flags.FloatParser.convert": true, + "tf.compat.v1.flags.FloatParser.flag_type": true, + "tf.compat.v1.flags.FloatParser.is_outside_bounds": true, + "tf.compat.v1.flags.FloatParser.number_article": true, + "tf.compat.v1.flags.FloatParser.number_name": true, + "tf.compat.v1.flags.FloatParser.parse": true, + "tf.compat.v1.flags.FloatParser.syntactic_help": true, + "tf.compat.v1.flags.IllegalFlagValueError": false, + "tf.compat.v1.flags.IllegalFlagValueError.__eq__": true, + "tf.compat.v1.flags.IllegalFlagValueError.__ge__": true, + "tf.compat.v1.flags.IllegalFlagValueError.__gt__": true, + "tf.compat.v1.flags.IllegalFlagValueError.__init__": true, + "tf.compat.v1.flags.IllegalFlagValueError.__le__": true, + "tf.compat.v1.flags.IllegalFlagValueError.__lt__": true, + "tf.compat.v1.flags.IllegalFlagValueError.__ne__": true, + "tf.compat.v1.flags.IllegalFlagValueError.__new__": true, + "tf.compat.v1.flags.IllegalFlagValueError.args": true, + "tf.compat.v1.flags.IllegalFlagValueError.with_traceback": true, + "tf.compat.v1.flags.IntegerParser": false, + "tf.compat.v1.flags.IntegerParser.__eq__": true, + "tf.compat.v1.flags.IntegerParser.__ge__": true, + "tf.compat.v1.flags.IntegerParser.__gt__": true, + "tf.compat.v1.flags.IntegerParser.__init__": true, + "tf.compat.v1.flags.IntegerParser.__le__": true, + 
"tf.compat.v1.flags.IntegerParser.__lt__": true, + "tf.compat.v1.flags.IntegerParser.__ne__": true, + "tf.compat.v1.flags.IntegerParser.__new__": true, + "tf.compat.v1.flags.IntegerParser.convert": true, + "tf.compat.v1.flags.IntegerParser.flag_type": true, + "tf.compat.v1.flags.IntegerParser.is_outside_bounds": true, + "tf.compat.v1.flags.IntegerParser.number_article": true, + "tf.compat.v1.flags.IntegerParser.number_name": true, + "tf.compat.v1.flags.IntegerParser.parse": true, + "tf.compat.v1.flags.IntegerParser.syntactic_help": true, + "tf.compat.v1.flags.ListParser": false, + "tf.compat.v1.flags.ListParser.__eq__": true, + "tf.compat.v1.flags.ListParser.__ge__": true, + "tf.compat.v1.flags.ListParser.__gt__": true, + "tf.compat.v1.flags.ListParser.__init__": true, + "tf.compat.v1.flags.ListParser.__le__": true, + "tf.compat.v1.flags.ListParser.__lt__": true, + "tf.compat.v1.flags.ListParser.__ne__": true, + "tf.compat.v1.flags.ListParser.__new__": true, + "tf.compat.v1.flags.ListParser.flag_type": true, + "tf.compat.v1.flags.ListParser.parse": true, + "tf.compat.v1.flags.ListParser.syntactic_help": true, + "tf.compat.v1.flags.ListSerializer": false, + "tf.compat.v1.flags.ListSerializer.__eq__": true, + "tf.compat.v1.flags.ListSerializer.__ge__": true, + "tf.compat.v1.flags.ListSerializer.__gt__": true, + "tf.compat.v1.flags.ListSerializer.__init__": true, + "tf.compat.v1.flags.ListSerializer.__le__": true, + "tf.compat.v1.flags.ListSerializer.__lt__": true, + "tf.compat.v1.flags.ListSerializer.__ne__": true, + "tf.compat.v1.flags.ListSerializer.__new__": true, + "tf.compat.v1.flags.ListSerializer.serialize": true, + "tf.compat.v1.flags.MultiEnumClassFlag": false, + "tf.compat.v1.flags.MultiEnumClassFlag.__eq__": true, + "tf.compat.v1.flags.MultiEnumClassFlag.__ge__": true, + "tf.compat.v1.flags.MultiEnumClassFlag.__gt__": true, + "tf.compat.v1.flags.MultiEnumClassFlag.__init__": true, + "tf.compat.v1.flags.MultiEnumClassFlag.__le__": true, + "tf.compat.v1.flags.MultiEnumClassFlag.__lt__": true, + "tf.compat.v1.flags.MultiEnumClassFlag.__ne__": true, + "tf.compat.v1.flags.MultiEnumClassFlag.__new__": true, + "tf.compat.v1.flags.MultiEnumClassFlag.flag_type": true, + "tf.compat.v1.flags.MultiEnumClassFlag.parse": true, + "tf.compat.v1.flags.MultiEnumClassFlag.serialize": true, + "tf.compat.v1.flags.MultiEnumClassFlag.unparse": true, + "tf.compat.v1.flags.MultiEnumClassFlag.value": true, + "tf.compat.v1.flags.MultiFlag": false, + "tf.compat.v1.flags.MultiFlag.__eq__": true, + "tf.compat.v1.flags.MultiFlag.__ge__": true, + "tf.compat.v1.flags.MultiFlag.__gt__": true, + "tf.compat.v1.flags.MultiFlag.__init__": true, + "tf.compat.v1.flags.MultiFlag.__le__": true, + "tf.compat.v1.flags.MultiFlag.__lt__": true, + "tf.compat.v1.flags.MultiFlag.__ne__": true, + "tf.compat.v1.flags.MultiFlag.__new__": true, + "tf.compat.v1.flags.MultiFlag.flag_type": true, + "tf.compat.v1.flags.MultiFlag.parse": true, + "tf.compat.v1.flags.MultiFlag.serialize": true, + "tf.compat.v1.flags.MultiFlag.unparse": true, + "tf.compat.v1.flags.MultiFlag.value": true, + "tf.compat.v1.flags.UnparsedFlagAccessError": false, + "tf.compat.v1.flags.UnparsedFlagAccessError.__eq__": true, + "tf.compat.v1.flags.UnparsedFlagAccessError.__ge__": true, + "tf.compat.v1.flags.UnparsedFlagAccessError.__gt__": true, + "tf.compat.v1.flags.UnparsedFlagAccessError.__init__": true, + "tf.compat.v1.flags.UnparsedFlagAccessError.__le__": true, + "tf.compat.v1.flags.UnparsedFlagAccessError.__lt__": true, + 
"tf.compat.v1.flags.UnparsedFlagAccessError.__ne__": true, + "tf.compat.v1.flags.UnparsedFlagAccessError.__new__": true, + "tf.compat.v1.flags.UnparsedFlagAccessError.args": true, + "tf.compat.v1.flags.UnparsedFlagAccessError.with_traceback": true, + "tf.compat.v1.flags.UnrecognizedFlagError": false, + "tf.compat.v1.flags.UnrecognizedFlagError.__eq__": true, + "tf.compat.v1.flags.UnrecognizedFlagError.__ge__": true, + "tf.compat.v1.flags.UnrecognizedFlagError.__gt__": true, + "tf.compat.v1.flags.UnrecognizedFlagError.__init__": true, + "tf.compat.v1.flags.UnrecognizedFlagError.__le__": true, + "tf.compat.v1.flags.UnrecognizedFlagError.__lt__": true, + "tf.compat.v1.flags.UnrecognizedFlagError.__ne__": true, + "tf.compat.v1.flags.UnrecognizedFlagError.__new__": true, + "tf.compat.v1.flags.UnrecognizedFlagError.args": true, + "tf.compat.v1.flags.UnrecognizedFlagError.with_traceback": true, + "tf.compat.v1.flags.ValidationError": false, + "tf.compat.v1.flags.ValidationError.__eq__": true, + "tf.compat.v1.flags.ValidationError.__ge__": true, + "tf.compat.v1.flags.ValidationError.__gt__": true, + "tf.compat.v1.flags.ValidationError.__init__": true, + "tf.compat.v1.flags.ValidationError.__le__": true, + "tf.compat.v1.flags.ValidationError.__lt__": true, + "tf.compat.v1.flags.ValidationError.__ne__": true, + "tf.compat.v1.flags.ValidationError.__new__": true, + "tf.compat.v1.flags.ValidationError.args": true, + "tf.compat.v1.flags.ValidationError.with_traceback": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser": false, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__eq__": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__ge__": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__gt__": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__init__": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__le__": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__lt__": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__ne__": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.__new__": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.flag_type": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.parse": true, + "tf.compat.v1.flags.WhitespaceSeparatedListParser.syntactic_help": true, + "tf.compat.v1.flags.absolute_import": true, + "tf.compat.v1.flags.adopt_module_key_flags": false, + "tf.compat.v1.flags.declare_key_flag": false, + "tf.compat.v1.flags.disclaim_key_flags": false, + "tf.compat.v1.flags.division": true, + "tf.compat.v1.flags.doc_to_help": false, + "tf.compat.v1.flags.flag_dict_to_args": false, + "tf.compat.v1.flags.get_help_width": false, + "tf.compat.v1.flags.mark_bool_flags_as_mutual_exclusive": false, + "tf.compat.v1.flags.mark_flag_as_required": false, + "tf.compat.v1.flags.mark_flags_as_mutual_exclusive": false, + "tf.compat.v1.flags.mark_flags_as_required": false, + "tf.compat.v1.flags.multi_flags_validator": false, + "tf.compat.v1.flags.print_function": true, + "tf.compat.v1.flags.register_multi_flags_validator": false, + "tf.compat.v1.flags.register_validator": false, + "tf.compat.v1.flags.text_wrap": false, + "tf.compat.v1.flags.tf_decorator": false, + "tf.compat.v1.flags.tf_decorator.TFDecorator": false, + "tf.compat.v1.flags.tf_decorator.TFDecorator.__call__": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.__eq__": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.__ge__": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.__gt__": true, + 
"tf.compat.v1.flags.tf_decorator.TFDecorator.__init__": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.__le__": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.__lt__": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.__ne__": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.__new__": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.decorated_target": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.decorator_argspec": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.decorator_doc": true, + "tf.compat.v1.flags.tf_decorator.TFDecorator.decorator_name": true, + "tf.compat.v1.flags.tf_decorator.absolute_import": true, + "tf.compat.v1.flags.tf_decorator.division": true, + "tf.compat.v1.flags.tf_decorator.make_decorator": false, + "tf.compat.v1.flags.tf_decorator.print_function": true, + "tf.compat.v1.flags.tf_decorator.rewrap": false, + "tf.compat.v1.flags.tf_decorator.tf_stack": false, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter": false, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__enter__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__eq__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__exit__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__ge__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__gt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__init__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__le__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__lt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__ne__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.__new__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.get_filtered_filenames": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter.reset": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary": false, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__eq__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__ge__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__getitem__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__gt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__init__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__iter__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__le__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__len__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__lt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__ne__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.__new__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.filename": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.line": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.lineno": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary.name": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary": false, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__bool__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__contains__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__eq__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__ge__": true, + 
"tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__getitem__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__gt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__init__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__iter__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__le__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__len__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__lt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__ne__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.__new__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.append": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.count": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.extend": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.insert": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.pop": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary.remove": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter": false, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__enter__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__eq__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__exit__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__ge__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__gt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__init__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__le__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__lt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__ne__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.__new__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.get_filtered_filenames": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter.reset": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper": false, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__enter__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__eq__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__exit__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__ge__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__gt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__init__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__le__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__lt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__ne__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.__new__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.get_effective_source_map": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper.reset": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform": false, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__enter__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__eq__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__exit__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__ge__": true, + 
"tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__gt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__init__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__le__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__lt__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__ne__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.__new__": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform.reset": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.absolute_import": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.division": true, + "tf.compat.v1.flags.tf_decorator.tf_stack.extract_stack": false, + "tf.compat.v1.flags.tf_decorator.tf_stack.print_function": true, + "tf.compat.v1.flags.tf_decorator.unwrap": false, + "tf.compat.v1.flags.validator": false, + "tf.compat.v1.float16": true, + "tf.compat.v1.float32": true, + "tf.compat.v1.float64": true, + "tf.compat.v1.floor": false, + "tf.compat.v1.floor_div": false, + "tf.compat.v1.floordiv": false, + "tf.compat.v1.floormod": false, + "tf.compat.v1.foldl": false, + "tf.compat.v1.foldr": false, + "tf.compat.v1.function": false, + "tf.compat.v1.gather": false, + "tf.compat.v1.gather_nd": false, + "tf.compat.v1.get_collection": false, + "tf.compat.v1.get_collection_ref": false, + "tf.compat.v1.get_default_graph": false, + "tf.compat.v1.get_default_session": false, + "tf.compat.v1.get_local_variable": false, + "tf.compat.v1.get_logger": false, + "tf.compat.v1.get_seed": false, + "tf.compat.v1.get_session_handle": false, + "tf.compat.v1.get_session_tensor": false, + "tf.compat.v1.get_static_value": false, + "tf.compat.v1.get_variable": false, + "tf.compat.v1.get_variable_scope": false, + "tf.compat.v1.gfile": false, + "tf.compat.v1.gfile.Copy": false, + "tf.compat.v1.gfile.DeleteRecursively": false, + "tf.compat.v1.gfile.Exists": false, + "tf.compat.v1.gfile.FastGFile": false, + "tf.compat.v1.gfile.FastGFile.__enter__": true, + "tf.compat.v1.gfile.FastGFile.__eq__": true, + "tf.compat.v1.gfile.FastGFile.__exit__": true, + "tf.compat.v1.gfile.FastGFile.__ge__": true, + "tf.compat.v1.gfile.FastGFile.__gt__": true, + "tf.compat.v1.gfile.FastGFile.__init__": true, + "tf.compat.v1.gfile.FastGFile.__iter__": true, + "tf.compat.v1.gfile.FastGFile.__le__": true, + "tf.compat.v1.gfile.FastGFile.__lt__": true, + "tf.compat.v1.gfile.FastGFile.__ne__": true, + "tf.compat.v1.gfile.FastGFile.__new__": true, + "tf.compat.v1.gfile.FastGFile.close": true, + "tf.compat.v1.gfile.FastGFile.flush": true, + "tf.compat.v1.gfile.FastGFile.mode": true, + "tf.compat.v1.gfile.FastGFile.name": true, + "tf.compat.v1.gfile.FastGFile.next": true, + "tf.compat.v1.gfile.FastGFile.read": true, + "tf.compat.v1.gfile.FastGFile.readline": true, + "tf.compat.v1.gfile.FastGFile.readlines": true, + "tf.compat.v1.gfile.FastGFile.seek": true, + "tf.compat.v1.gfile.FastGFile.seekable": true, + "tf.compat.v1.gfile.FastGFile.size": true, + "tf.compat.v1.gfile.FastGFile.tell": true, + "tf.compat.v1.gfile.FastGFile.write": true, + "tf.compat.v1.gfile.GFile": false, + "tf.compat.v1.gfile.GFile.__enter__": true, + "tf.compat.v1.gfile.GFile.__eq__": true, + "tf.compat.v1.gfile.GFile.__exit__": true, + "tf.compat.v1.gfile.GFile.__ge__": true, + "tf.compat.v1.gfile.GFile.__gt__": true, + "tf.compat.v1.gfile.GFile.__init__": true, + "tf.compat.v1.gfile.GFile.__iter__": true, + "tf.compat.v1.gfile.GFile.__le__": true, + "tf.compat.v1.gfile.GFile.__lt__": true, 
+ "tf.compat.v1.gfile.GFile.__ne__": true, + "tf.compat.v1.gfile.GFile.__new__": true, + "tf.compat.v1.gfile.GFile.close": true, + "tf.compat.v1.gfile.GFile.flush": true, + "tf.compat.v1.gfile.GFile.mode": true, + "tf.compat.v1.gfile.GFile.name": true, + "tf.compat.v1.gfile.GFile.next": true, + "tf.compat.v1.gfile.GFile.read": true, + "tf.compat.v1.gfile.GFile.readline": true, + "tf.compat.v1.gfile.GFile.readlines": true, + "tf.compat.v1.gfile.GFile.seek": true, + "tf.compat.v1.gfile.GFile.seekable": true, + "tf.compat.v1.gfile.GFile.size": true, + "tf.compat.v1.gfile.GFile.tell": true, + "tf.compat.v1.gfile.GFile.write": true, + "tf.compat.v1.gfile.Glob": false, + "tf.compat.v1.gfile.IsDirectory": false, + "tf.compat.v1.gfile.ListDirectory": false, + "tf.compat.v1.gfile.MakeDirs": false, + "tf.compat.v1.gfile.MkDir": false, + "tf.compat.v1.gfile.Open": false, + "tf.compat.v1.gfile.Open.__enter__": true, + "tf.compat.v1.gfile.Open.__eq__": true, + "tf.compat.v1.gfile.Open.__exit__": true, + "tf.compat.v1.gfile.Open.__ge__": true, + "tf.compat.v1.gfile.Open.__gt__": true, + "tf.compat.v1.gfile.Open.__init__": true, + "tf.compat.v1.gfile.Open.__iter__": true, + "tf.compat.v1.gfile.Open.__le__": true, + "tf.compat.v1.gfile.Open.__lt__": true, + "tf.compat.v1.gfile.Open.__ne__": true, + "tf.compat.v1.gfile.Open.__new__": true, + "tf.compat.v1.gfile.Open.close": true, + "tf.compat.v1.gfile.Open.flush": true, + "tf.compat.v1.gfile.Open.mode": true, + "tf.compat.v1.gfile.Open.name": true, + "tf.compat.v1.gfile.Open.next": true, + "tf.compat.v1.gfile.Open.read": true, + "tf.compat.v1.gfile.Open.readline": true, + "tf.compat.v1.gfile.Open.readlines": true, + "tf.compat.v1.gfile.Open.seek": true, + "tf.compat.v1.gfile.Open.seekable": true, + "tf.compat.v1.gfile.Open.size": true, + "tf.compat.v1.gfile.Open.tell": true, + "tf.compat.v1.gfile.Open.write": true, + "tf.compat.v1.gfile.Remove": false, + "tf.compat.v1.gfile.Rename": false, + "tf.compat.v1.gfile.Stat": false, + "tf.compat.v1.gfile.Walk": false, + "tf.compat.v1.global_norm": false, + "tf.compat.v1.global_variables": false, + "tf.compat.v1.global_variables_initializer": false, + "tf.compat.v1.glorot_normal_initializer": false, + "tf.compat.v1.glorot_normal_initializer.__call__": true, + "tf.compat.v1.glorot_normal_initializer.__eq__": true, + "tf.compat.v1.glorot_normal_initializer.__ge__": true, + "tf.compat.v1.glorot_normal_initializer.__gt__": true, + "tf.compat.v1.glorot_normal_initializer.__init__": true, + "tf.compat.v1.glorot_normal_initializer.__le__": true, + "tf.compat.v1.glorot_normal_initializer.__lt__": true, + "tf.compat.v1.glorot_normal_initializer.__ne__": true, + "tf.compat.v1.glorot_normal_initializer.__new__": true, + "tf.compat.v1.glorot_normal_initializer.from_config": true, + "tf.compat.v1.glorot_normal_initializer.get_config": true, + "tf.compat.v1.glorot_uniform_initializer": false, + "tf.compat.v1.glorot_uniform_initializer.__call__": true, + "tf.compat.v1.glorot_uniform_initializer.__eq__": true, + "tf.compat.v1.glorot_uniform_initializer.__ge__": true, + "tf.compat.v1.glorot_uniform_initializer.__gt__": true, + "tf.compat.v1.glorot_uniform_initializer.__init__": true, + "tf.compat.v1.glorot_uniform_initializer.__le__": true, + "tf.compat.v1.glorot_uniform_initializer.__lt__": true, + "tf.compat.v1.glorot_uniform_initializer.__ne__": true, + "tf.compat.v1.glorot_uniform_initializer.__new__": true, + "tf.compat.v1.glorot_uniform_initializer.from_config": true, + "tf.compat.v1.glorot_uniform_initializer.get_config": 
true, + "tf.compat.v1.grad_pass_through": false, + "tf.compat.v1.gradients": false, + "tf.compat.v1.graph_util": false, + "tf.compat.v1.graph_util.convert_variables_to_constants": false, + "tf.compat.v1.graph_util.extract_sub_graph": false, + "tf.compat.v1.graph_util.import_graph_def": false, + "tf.compat.v1.graph_util.must_run_on_cpu": false, + "tf.compat.v1.graph_util.remove_training_nodes": false, + "tf.compat.v1.graph_util.tensor_shape_from_node_def_name": false, + "tf.compat.v1.greater": false, + "tf.compat.v1.greater_equal": false, + "tf.compat.v1.group": false, + "tf.compat.v1.guarantee_const": false, + "tf.compat.v1.half": true, + "tf.compat.v1.hessians": false, + "tf.compat.v1.histogram_fixed_width": false, + "tf.compat.v1.histogram_fixed_width_bins": false, + "tf.compat.v1.identity": false, + "tf.compat.v1.identity_n": false, + "tf.compat.v1.ifft": false, + "tf.compat.v1.ifft2d": false, + "tf.compat.v1.ifft3d": false, + "tf.compat.v1.igamma": false, + "tf.compat.v1.igammac": false, + "tf.compat.v1.imag": false, + "tf.compat.v1.image": false, + "tf.compat.v1.image.ResizeMethod": false, + "tf.compat.v1.image.ResizeMethod.AREA": true, + "tf.compat.v1.image.ResizeMethod.BICUBIC": true, + "tf.compat.v1.image.ResizeMethod.BILINEAR": true, + "tf.compat.v1.image.ResizeMethod.NEAREST_NEIGHBOR": true, + "tf.compat.v1.image.ResizeMethod.__eq__": true, + "tf.compat.v1.image.ResizeMethod.__ge__": true, + "tf.compat.v1.image.ResizeMethod.__gt__": true, + "tf.compat.v1.image.ResizeMethod.__init__": true, + "tf.compat.v1.image.ResizeMethod.__le__": true, + "tf.compat.v1.image.ResizeMethod.__lt__": true, + "tf.compat.v1.image.ResizeMethod.__ne__": true, + "tf.compat.v1.image.ResizeMethod.__new__": true, + "tf.compat.v1.image.adjust_brightness": false, + "tf.compat.v1.image.adjust_contrast": false, + "tf.compat.v1.image.adjust_gamma": false, + "tf.compat.v1.image.adjust_hue": false, + "tf.compat.v1.image.adjust_jpeg_quality": false, + "tf.compat.v1.image.adjust_saturation": false, + "tf.compat.v1.image.central_crop": false, + "tf.compat.v1.image.combined_non_max_suppression": false, + "tf.compat.v1.image.convert_image_dtype": false, + "tf.compat.v1.image.crop_and_resize": false, + "tf.compat.v1.image.crop_to_bounding_box": false, + "tf.compat.v1.image.decode_and_crop_jpeg": false, + "tf.compat.v1.image.decode_bmp": false, + "tf.compat.v1.image.decode_gif": false, + "tf.compat.v1.image.decode_image": false, + "tf.compat.v1.image.decode_jpeg": false, + "tf.compat.v1.image.decode_png": false, + "tf.compat.v1.image.draw_bounding_boxes": false, + "tf.compat.v1.image.encode_jpeg": false, + "tf.compat.v1.image.encode_png": false, + "tf.compat.v1.image.extract_glimpse": false, + "tf.compat.v1.image.extract_image_patches": false, + "tf.compat.v1.image.extract_jpeg_shape": false, + "tf.compat.v1.image.extract_patches": false, + "tf.compat.v1.image.flip_left_right": false, + "tf.compat.v1.image.flip_up_down": false, + "tf.compat.v1.image.generate_bounding_box_proposals": false, + "tf.compat.v1.image.grayscale_to_rgb": false, + "tf.compat.v1.image.hsv_to_rgb": false, + "tf.compat.v1.image.image_gradients": false, + "tf.compat.v1.image.is_jpeg": false, + "tf.compat.v1.image.non_max_suppression": false, + "tf.compat.v1.image.non_max_suppression_overlaps": false, + "tf.compat.v1.image.non_max_suppression_padded": false, + "tf.compat.v1.image.non_max_suppression_with_scores": false, + "tf.compat.v1.image.pad_to_bounding_box": false, + "tf.compat.v1.image.per_image_standardization": false, + 
"tf.compat.v1.image.psnr": false, + "tf.compat.v1.image.random_brightness": false, + "tf.compat.v1.image.random_contrast": false, + "tf.compat.v1.image.random_crop": false, + "tf.compat.v1.image.random_flip_left_right": false, + "tf.compat.v1.image.random_flip_up_down": false, + "tf.compat.v1.image.random_hue": false, + "tf.compat.v1.image.random_jpeg_quality": false, + "tf.compat.v1.image.random_saturation": false, + "tf.compat.v1.image.resize": false, + "tf.compat.v1.image.resize_area": false, + "tf.compat.v1.image.resize_bicubic": false, + "tf.compat.v1.image.resize_bilinear": false, + "tf.compat.v1.image.resize_image_with_crop_or_pad": false, + "tf.compat.v1.image.resize_image_with_pad": false, + "tf.compat.v1.image.resize_images": false, + "tf.compat.v1.image.resize_nearest_neighbor": false, + "tf.compat.v1.image.resize_with_crop_or_pad": false, + "tf.compat.v1.image.rgb_to_grayscale": false, + "tf.compat.v1.image.rgb_to_hsv": false, + "tf.compat.v1.image.rgb_to_yiq": false, + "tf.compat.v1.image.rgb_to_yuv": false, + "tf.compat.v1.image.rot90": false, + "tf.compat.v1.image.sample_distorted_bounding_box": false, + "tf.compat.v1.image.sobel_edges": false, + "tf.compat.v1.image.ssim": false, + "tf.compat.v1.image.ssim_multiscale": false, + "tf.compat.v1.image.total_variation": false, + "tf.compat.v1.image.transpose": false, + "tf.compat.v1.image.transpose_image": false, + "tf.compat.v1.image.yiq_to_rgb": false, + "tf.compat.v1.image.yuv_to_rgb": false, + "tf.compat.v1.import_graph_def": false, + "tf.compat.v1.init_scope": false, + "tf.compat.v1.initialize_all_tables": false, + "tf.compat.v1.initialize_all_variables": false, + "tf.compat.v1.initialize_local_variables": false, + "tf.compat.v1.initialize_variables": false, + "tf.compat.v1.initializers": false, + "tf.compat.v1.initializers.constant": false, + "tf.compat.v1.initializers.constant.__call__": true, + "tf.compat.v1.initializers.constant.__eq__": true, + "tf.compat.v1.initializers.constant.__ge__": true, + "tf.compat.v1.initializers.constant.__gt__": true, + "tf.compat.v1.initializers.constant.__init__": true, + "tf.compat.v1.initializers.constant.__le__": true, + "tf.compat.v1.initializers.constant.__lt__": true, + "tf.compat.v1.initializers.constant.__ne__": true, + "tf.compat.v1.initializers.constant.__new__": true, + "tf.compat.v1.initializers.constant.from_config": true, + "tf.compat.v1.initializers.constant.get_config": true, + "tf.compat.v1.initializers.global_variables": false, + "tf.compat.v1.initializers.glorot_normal": false, + "tf.compat.v1.initializers.glorot_normal.__call__": true, + "tf.compat.v1.initializers.glorot_normal.__eq__": true, + "tf.compat.v1.initializers.glorot_normal.__ge__": true, + "tf.compat.v1.initializers.glorot_normal.__gt__": true, + "tf.compat.v1.initializers.glorot_normal.__init__": true, + "tf.compat.v1.initializers.glorot_normal.__le__": true, + "tf.compat.v1.initializers.glorot_normal.__lt__": true, + "tf.compat.v1.initializers.glorot_normal.__ne__": true, + "tf.compat.v1.initializers.glorot_normal.__new__": true, + "tf.compat.v1.initializers.glorot_normal.from_config": true, + "tf.compat.v1.initializers.glorot_normal.get_config": true, + "tf.compat.v1.initializers.glorot_uniform": false, + "tf.compat.v1.initializers.glorot_uniform.__call__": true, + "tf.compat.v1.initializers.glorot_uniform.__eq__": true, + "tf.compat.v1.initializers.glorot_uniform.__ge__": true, + "tf.compat.v1.initializers.glorot_uniform.__gt__": true, + "tf.compat.v1.initializers.glorot_uniform.__init__": true, + 
"tf.compat.v1.initializers.glorot_uniform.__le__": true, + "tf.compat.v1.initializers.glorot_uniform.__lt__": true, + "tf.compat.v1.initializers.glorot_uniform.__ne__": true, + "tf.compat.v1.initializers.glorot_uniform.__new__": true, + "tf.compat.v1.initializers.glorot_uniform.from_config": true, + "tf.compat.v1.initializers.glorot_uniform.get_config": true, + "tf.compat.v1.initializers.he_normal": false, + "tf.compat.v1.initializers.he_uniform": false, + "tf.compat.v1.initializers.identity": false, + "tf.compat.v1.initializers.identity.__call__": true, + "tf.compat.v1.initializers.identity.__eq__": true, + "tf.compat.v1.initializers.identity.__ge__": true, + "tf.compat.v1.initializers.identity.__gt__": true, + "tf.compat.v1.initializers.identity.__init__": true, + "tf.compat.v1.initializers.identity.__le__": true, + "tf.compat.v1.initializers.identity.__lt__": true, + "tf.compat.v1.initializers.identity.__ne__": true, + "tf.compat.v1.initializers.identity.__new__": true, + "tf.compat.v1.initializers.identity.from_config": true, + "tf.compat.v1.initializers.identity.get_config": true, + "tf.compat.v1.initializers.lecun_normal": false, + "tf.compat.v1.initializers.lecun_uniform": false, + "tf.compat.v1.initializers.local_variables": false, + "tf.compat.v1.initializers.ones": false, + "tf.compat.v1.initializers.ones.__call__": true, + "tf.compat.v1.initializers.ones.__eq__": true, + "tf.compat.v1.initializers.ones.__ge__": true, + "tf.compat.v1.initializers.ones.__gt__": true, + "tf.compat.v1.initializers.ones.__init__": true, + "tf.compat.v1.initializers.ones.__le__": true, + "tf.compat.v1.initializers.ones.__lt__": true, + "tf.compat.v1.initializers.ones.__ne__": true, + "tf.compat.v1.initializers.ones.__new__": true, + "tf.compat.v1.initializers.ones.from_config": true, + "tf.compat.v1.initializers.ones.get_config": true, + "tf.compat.v1.initializers.orthogonal": false, + "tf.compat.v1.initializers.orthogonal.__call__": true, + "tf.compat.v1.initializers.orthogonal.__eq__": true, + "tf.compat.v1.initializers.orthogonal.__ge__": true, + "tf.compat.v1.initializers.orthogonal.__gt__": true, + "tf.compat.v1.initializers.orthogonal.__init__": true, + "tf.compat.v1.initializers.orthogonal.__le__": true, + "tf.compat.v1.initializers.orthogonal.__lt__": true, + "tf.compat.v1.initializers.orthogonal.__ne__": true, + "tf.compat.v1.initializers.orthogonal.__new__": true, + "tf.compat.v1.initializers.orthogonal.from_config": true, + "tf.compat.v1.initializers.orthogonal.get_config": true, + "tf.compat.v1.initializers.random_normal": false, + "tf.compat.v1.initializers.random_normal.__call__": true, + "tf.compat.v1.initializers.random_normal.__eq__": true, + "tf.compat.v1.initializers.random_normal.__ge__": true, + "tf.compat.v1.initializers.random_normal.__gt__": true, + "tf.compat.v1.initializers.random_normal.__init__": true, + "tf.compat.v1.initializers.random_normal.__le__": true, + "tf.compat.v1.initializers.random_normal.__lt__": true, + "tf.compat.v1.initializers.random_normal.__ne__": true, + "tf.compat.v1.initializers.random_normal.__new__": true, + "tf.compat.v1.initializers.random_normal.from_config": true, + "tf.compat.v1.initializers.random_normal.get_config": true, + "tf.compat.v1.initializers.random_uniform": false, + "tf.compat.v1.initializers.random_uniform.__call__": true, + "tf.compat.v1.initializers.random_uniform.__eq__": true, + "tf.compat.v1.initializers.random_uniform.__ge__": true, + "tf.compat.v1.initializers.random_uniform.__gt__": true, + 
"tf.compat.v1.initializers.random_uniform.__init__": true, + "tf.compat.v1.initializers.random_uniform.__le__": true, + "tf.compat.v1.initializers.random_uniform.__lt__": true, + "tf.compat.v1.initializers.random_uniform.__ne__": true, + "tf.compat.v1.initializers.random_uniform.__new__": true, + "tf.compat.v1.initializers.random_uniform.from_config": true, + "tf.compat.v1.initializers.random_uniform.get_config": true, + "tf.compat.v1.initializers.tables_initializer": false, + "tf.compat.v1.initializers.truncated_normal": false, + "tf.compat.v1.initializers.truncated_normal.__call__": true, + "tf.compat.v1.initializers.truncated_normal.__eq__": true, + "tf.compat.v1.initializers.truncated_normal.__ge__": true, + "tf.compat.v1.initializers.truncated_normal.__gt__": true, + "tf.compat.v1.initializers.truncated_normal.__init__": true, + "tf.compat.v1.initializers.truncated_normal.__le__": true, + "tf.compat.v1.initializers.truncated_normal.__lt__": true, + "tf.compat.v1.initializers.truncated_normal.__ne__": true, + "tf.compat.v1.initializers.truncated_normal.__new__": true, + "tf.compat.v1.initializers.truncated_normal.from_config": true, + "tf.compat.v1.initializers.truncated_normal.get_config": true, + "tf.compat.v1.initializers.uniform_unit_scaling": false, + "tf.compat.v1.initializers.uniform_unit_scaling.__call__": true, + "tf.compat.v1.initializers.uniform_unit_scaling.__eq__": true, + "tf.compat.v1.initializers.uniform_unit_scaling.__ge__": true, + "tf.compat.v1.initializers.uniform_unit_scaling.__gt__": true, + "tf.compat.v1.initializers.uniform_unit_scaling.__init__": true, + "tf.compat.v1.initializers.uniform_unit_scaling.__le__": true, + "tf.compat.v1.initializers.uniform_unit_scaling.__lt__": true, + "tf.compat.v1.initializers.uniform_unit_scaling.__ne__": true, + "tf.compat.v1.initializers.uniform_unit_scaling.__new__": true, + "tf.compat.v1.initializers.uniform_unit_scaling.from_config": true, + "tf.compat.v1.initializers.uniform_unit_scaling.get_config": true, + "tf.compat.v1.initializers.variables": false, + "tf.compat.v1.initializers.variance_scaling": false, + "tf.compat.v1.initializers.variance_scaling.__call__": true, + "tf.compat.v1.initializers.variance_scaling.__eq__": true, + "tf.compat.v1.initializers.variance_scaling.__ge__": true, + "tf.compat.v1.initializers.variance_scaling.__gt__": true, + "tf.compat.v1.initializers.variance_scaling.__init__": true, + "tf.compat.v1.initializers.variance_scaling.__le__": true, + "tf.compat.v1.initializers.variance_scaling.__lt__": true, + "tf.compat.v1.initializers.variance_scaling.__ne__": true, + "tf.compat.v1.initializers.variance_scaling.__new__": true, + "tf.compat.v1.initializers.variance_scaling.from_config": true, + "tf.compat.v1.initializers.variance_scaling.get_config": true, + "tf.compat.v1.initializers.zeros": false, + "tf.compat.v1.initializers.zeros.__call__": true, + "tf.compat.v1.initializers.zeros.__eq__": true, + "tf.compat.v1.initializers.zeros.__ge__": true, + "tf.compat.v1.initializers.zeros.__gt__": true, + "tf.compat.v1.initializers.zeros.__init__": true, + "tf.compat.v1.initializers.zeros.__le__": true, + "tf.compat.v1.initializers.zeros.__lt__": true, + "tf.compat.v1.initializers.zeros.__ne__": true, + "tf.compat.v1.initializers.zeros.__new__": true, + "tf.compat.v1.initializers.zeros.from_config": true, + "tf.compat.v1.initializers.zeros.get_config": true, + "tf.compat.v1.int16": true, + "tf.compat.v1.int32": true, + "tf.compat.v1.int64": true, + "tf.compat.v1.int8": true, + 
"tf.compat.v1.invert_permutation": false, + "tf.compat.v1.io": false, + "tf.compat.v1.io.FixedLenFeature": false, + "tf.compat.v1.io.FixedLenFeature.__add__": true, + "tf.compat.v1.io.FixedLenFeature.__contains__": true, + "tf.compat.v1.io.FixedLenFeature.__eq__": true, + "tf.compat.v1.io.FixedLenFeature.__ge__": true, + "tf.compat.v1.io.FixedLenFeature.__getitem__": true, + "tf.compat.v1.io.FixedLenFeature.__gt__": true, + "tf.compat.v1.io.FixedLenFeature.__init__": true, + "tf.compat.v1.io.FixedLenFeature.__iter__": true, + "tf.compat.v1.io.FixedLenFeature.__le__": true, + "tf.compat.v1.io.FixedLenFeature.__len__": true, + "tf.compat.v1.io.FixedLenFeature.__lt__": true, + "tf.compat.v1.io.FixedLenFeature.__mul__": true, + "tf.compat.v1.io.FixedLenFeature.__ne__": true, + "tf.compat.v1.io.FixedLenFeature.__new__": true, + "tf.compat.v1.io.FixedLenFeature.__rmul__": true, + "tf.compat.v1.io.FixedLenFeature.count": true, + "tf.compat.v1.io.FixedLenFeature.default_value": true, + "tf.compat.v1.io.FixedLenFeature.dtype": true, + "tf.compat.v1.io.FixedLenFeature.index": true, + "tf.compat.v1.io.FixedLenFeature.shape": true, + "tf.compat.v1.io.FixedLenSequenceFeature": false, + "tf.compat.v1.io.FixedLenSequenceFeature.__add__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__contains__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__eq__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__ge__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__getitem__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__gt__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__init__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__iter__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__le__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__len__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__lt__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__mul__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__ne__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__new__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.__rmul__": true, + "tf.compat.v1.io.FixedLenSequenceFeature.allow_missing": true, + "tf.compat.v1.io.FixedLenSequenceFeature.count": true, + "tf.compat.v1.io.FixedLenSequenceFeature.default_value": true, + "tf.compat.v1.io.FixedLenSequenceFeature.dtype": true, + "tf.compat.v1.io.FixedLenSequenceFeature.index": true, + "tf.compat.v1.io.FixedLenSequenceFeature.shape": true, + "tf.compat.v1.io.PaddingFIFOQueue": false, + "tf.compat.v1.io.PaddingFIFOQueue.__eq__": true, + "tf.compat.v1.io.PaddingFIFOQueue.__ge__": true, + "tf.compat.v1.io.PaddingFIFOQueue.__gt__": true, + "tf.compat.v1.io.PaddingFIFOQueue.__init__": true, + "tf.compat.v1.io.PaddingFIFOQueue.__le__": true, + "tf.compat.v1.io.PaddingFIFOQueue.__lt__": true, + "tf.compat.v1.io.PaddingFIFOQueue.__ne__": true, + "tf.compat.v1.io.PaddingFIFOQueue.__new__": true, + "tf.compat.v1.io.PaddingFIFOQueue.close": true, + "tf.compat.v1.io.PaddingFIFOQueue.dequeue": true, + "tf.compat.v1.io.PaddingFIFOQueue.dequeue_many": true, + "tf.compat.v1.io.PaddingFIFOQueue.dequeue_up_to": true, + "tf.compat.v1.io.PaddingFIFOQueue.dtypes": true, + "tf.compat.v1.io.PaddingFIFOQueue.enqueue": true, + "tf.compat.v1.io.PaddingFIFOQueue.enqueue_many": true, + "tf.compat.v1.io.PaddingFIFOQueue.from_list": true, + "tf.compat.v1.io.PaddingFIFOQueue.is_closed": true, + "tf.compat.v1.io.PaddingFIFOQueue.name": true, + "tf.compat.v1.io.PaddingFIFOQueue.names": true, + "tf.compat.v1.io.PaddingFIFOQueue.queue_ref": true, + 
"tf.compat.v1.io.PaddingFIFOQueue.shapes": true, + "tf.compat.v1.io.PaddingFIFOQueue.size": true, + "tf.compat.v1.io.PriorityQueue": false, + "tf.compat.v1.io.PriorityQueue.__eq__": true, + "tf.compat.v1.io.PriorityQueue.__ge__": true, + "tf.compat.v1.io.PriorityQueue.__gt__": true, + "tf.compat.v1.io.PriorityQueue.__init__": true, + "tf.compat.v1.io.PriorityQueue.__le__": true, + "tf.compat.v1.io.PriorityQueue.__lt__": true, + "tf.compat.v1.io.PriorityQueue.__ne__": true, + "tf.compat.v1.io.PriorityQueue.__new__": true, + "tf.compat.v1.io.PriorityQueue.close": true, + "tf.compat.v1.io.PriorityQueue.dequeue": true, + "tf.compat.v1.io.PriorityQueue.dequeue_many": true, + "tf.compat.v1.io.PriorityQueue.dequeue_up_to": true, + "tf.compat.v1.io.PriorityQueue.dtypes": true, + "tf.compat.v1.io.PriorityQueue.enqueue": true, + "tf.compat.v1.io.PriorityQueue.enqueue_many": true, + "tf.compat.v1.io.PriorityQueue.from_list": true, + "tf.compat.v1.io.PriorityQueue.is_closed": true, + "tf.compat.v1.io.PriorityQueue.name": true, + "tf.compat.v1.io.PriorityQueue.names": true, + "tf.compat.v1.io.PriorityQueue.queue_ref": true, + "tf.compat.v1.io.PriorityQueue.shapes": true, + "tf.compat.v1.io.PriorityQueue.size": true, + "tf.compat.v1.io.QueueBase": false, + "tf.compat.v1.io.QueueBase.__eq__": true, + "tf.compat.v1.io.QueueBase.__ge__": true, + "tf.compat.v1.io.QueueBase.__gt__": true, + "tf.compat.v1.io.QueueBase.__init__": true, + "tf.compat.v1.io.QueueBase.__le__": true, + "tf.compat.v1.io.QueueBase.__lt__": true, + "tf.compat.v1.io.QueueBase.__ne__": true, + "tf.compat.v1.io.QueueBase.__new__": true, + "tf.compat.v1.io.QueueBase.close": true, + "tf.compat.v1.io.QueueBase.dequeue": true, + "tf.compat.v1.io.QueueBase.dequeue_many": true, + "tf.compat.v1.io.QueueBase.dequeue_up_to": true, + "tf.compat.v1.io.QueueBase.dtypes": true, + "tf.compat.v1.io.QueueBase.enqueue": true, + "tf.compat.v1.io.QueueBase.enqueue_many": true, + "tf.compat.v1.io.QueueBase.from_list": true, + "tf.compat.v1.io.QueueBase.is_closed": true, + "tf.compat.v1.io.QueueBase.name": true, + "tf.compat.v1.io.QueueBase.names": true, + "tf.compat.v1.io.QueueBase.queue_ref": true, + "tf.compat.v1.io.QueueBase.shapes": true, + "tf.compat.v1.io.QueueBase.size": true, + "tf.compat.v1.io.RaggedFeature": false, + "tf.compat.v1.io.RaggedFeature.RowLengths": false, + "tf.compat.v1.io.RaggedFeature.RowLengths.__add__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__contains__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__eq__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__ge__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__getitem__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__gt__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__init__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__iter__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__le__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__len__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__lt__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__mul__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__ne__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__new__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.__rmul__": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.count": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.index": true, + "tf.compat.v1.io.RaggedFeature.RowLengths.key": true, + "tf.compat.v1.io.RaggedFeature.RowLimits": false, + "tf.compat.v1.io.RaggedFeature.RowLimits.__add__": true, + 
"tf.compat.v1.io.RaggedFeature.RowLimits.__contains__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__eq__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__ge__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__getitem__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__gt__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__init__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__iter__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__le__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__len__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__lt__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__mul__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__ne__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__new__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.__rmul__": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.count": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.index": true, + "tf.compat.v1.io.RaggedFeature.RowLimits.key": true, + "tf.compat.v1.io.RaggedFeature.RowSplits": false, + "tf.compat.v1.io.RaggedFeature.RowSplits.__add__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__contains__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__eq__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__ge__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__getitem__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__gt__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__init__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__iter__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__le__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__len__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__lt__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__mul__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__ne__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__new__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.__rmul__": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.count": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.index": true, + "tf.compat.v1.io.RaggedFeature.RowSplits.key": true, + "tf.compat.v1.io.RaggedFeature.RowStarts": false, + "tf.compat.v1.io.RaggedFeature.RowStarts.__add__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__contains__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__eq__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__ge__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__getitem__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__gt__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__init__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__iter__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__le__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__len__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__lt__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__mul__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__ne__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__new__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.__rmul__": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.count": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.index": true, + "tf.compat.v1.io.RaggedFeature.RowStarts.key": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength": false, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__add__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__contains__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__eq__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__ge__": true, + 
"tf.compat.v1.io.RaggedFeature.UniformRowLength.__getitem__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__gt__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__init__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__iter__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__le__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__len__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__lt__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__mul__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__ne__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__new__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.__rmul__": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.count": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.index": true, + "tf.compat.v1.io.RaggedFeature.UniformRowLength.length": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds": false, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__add__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__contains__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__eq__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__ge__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__getitem__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__gt__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__init__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__iter__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__le__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__len__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__lt__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__mul__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__ne__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__new__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.__rmul__": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.count": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.index": true, + "tf.compat.v1.io.RaggedFeature.ValueRowIds.key": true, + "tf.compat.v1.io.RaggedFeature.__add__": true, + "tf.compat.v1.io.RaggedFeature.__contains__": true, + "tf.compat.v1.io.RaggedFeature.__eq__": true, + "tf.compat.v1.io.RaggedFeature.__ge__": true, + "tf.compat.v1.io.RaggedFeature.__getitem__": true, + "tf.compat.v1.io.RaggedFeature.__gt__": true, + "tf.compat.v1.io.RaggedFeature.__init__": true, + "tf.compat.v1.io.RaggedFeature.__iter__": true, + "tf.compat.v1.io.RaggedFeature.__le__": true, + "tf.compat.v1.io.RaggedFeature.__len__": true, + "tf.compat.v1.io.RaggedFeature.__lt__": true, + "tf.compat.v1.io.RaggedFeature.__mul__": true, + "tf.compat.v1.io.RaggedFeature.__ne__": true, + "tf.compat.v1.io.RaggedFeature.__new__": true, + "tf.compat.v1.io.RaggedFeature.__rmul__": true, + "tf.compat.v1.io.RaggedFeature.count": true, + "tf.compat.v1.io.RaggedFeature.dtype": true, + "tf.compat.v1.io.RaggedFeature.index": true, + "tf.compat.v1.io.RaggedFeature.partitions": true, + "tf.compat.v1.io.RaggedFeature.row_splits_dtype": true, + "tf.compat.v1.io.RaggedFeature.validate": true, + "tf.compat.v1.io.RaggedFeature.value_key": true, + "tf.compat.v1.io.RandomShuffleQueue": false, + "tf.compat.v1.io.RandomShuffleQueue.__eq__": true, + "tf.compat.v1.io.RandomShuffleQueue.__ge__": true, + "tf.compat.v1.io.RandomShuffleQueue.__gt__": true, + "tf.compat.v1.io.RandomShuffleQueue.__init__": true, + "tf.compat.v1.io.RandomShuffleQueue.__le__": true, + "tf.compat.v1.io.RandomShuffleQueue.__lt__": true, + 
"tf.compat.v1.io.RandomShuffleQueue.__ne__": true, + "tf.compat.v1.io.RandomShuffleQueue.__new__": true, + "tf.compat.v1.io.RandomShuffleQueue.close": true, + "tf.compat.v1.io.RandomShuffleQueue.dequeue": true, + "tf.compat.v1.io.RandomShuffleQueue.dequeue_many": true, + "tf.compat.v1.io.RandomShuffleQueue.dequeue_up_to": true, + "tf.compat.v1.io.RandomShuffleQueue.dtypes": true, + "tf.compat.v1.io.RandomShuffleQueue.enqueue": true, + "tf.compat.v1.io.RandomShuffleQueue.enqueue_many": true, + "tf.compat.v1.io.RandomShuffleQueue.from_list": true, + "tf.compat.v1.io.RandomShuffleQueue.is_closed": true, + "tf.compat.v1.io.RandomShuffleQueue.name": true, + "tf.compat.v1.io.RandomShuffleQueue.names": true, + "tf.compat.v1.io.RandomShuffleQueue.queue_ref": true, + "tf.compat.v1.io.RandomShuffleQueue.shapes": true, + "tf.compat.v1.io.RandomShuffleQueue.size": true, + "tf.compat.v1.io.SparseFeature": false, + "tf.compat.v1.io.SparseFeature.__add__": true, + "tf.compat.v1.io.SparseFeature.__contains__": true, + "tf.compat.v1.io.SparseFeature.__eq__": true, + "tf.compat.v1.io.SparseFeature.__ge__": true, + "tf.compat.v1.io.SparseFeature.__getitem__": true, + "tf.compat.v1.io.SparseFeature.__gt__": true, + "tf.compat.v1.io.SparseFeature.__init__": true, + "tf.compat.v1.io.SparseFeature.__iter__": true, + "tf.compat.v1.io.SparseFeature.__le__": true, + "tf.compat.v1.io.SparseFeature.__len__": true, + "tf.compat.v1.io.SparseFeature.__lt__": true, + "tf.compat.v1.io.SparseFeature.__mul__": true, + "tf.compat.v1.io.SparseFeature.__ne__": true, + "tf.compat.v1.io.SparseFeature.__new__": true, + "tf.compat.v1.io.SparseFeature.__rmul__": true, + "tf.compat.v1.io.SparseFeature.already_sorted": true, + "tf.compat.v1.io.SparseFeature.count": true, + "tf.compat.v1.io.SparseFeature.dtype": true, + "tf.compat.v1.io.SparseFeature.index": true, + "tf.compat.v1.io.SparseFeature.index_key": true, + "tf.compat.v1.io.SparseFeature.size": true, + "tf.compat.v1.io.SparseFeature.value_key": true, + "tf.compat.v1.io.TFRecordCompressionType": false, + "tf.compat.v1.io.TFRecordCompressionType.GZIP": true, + "tf.compat.v1.io.TFRecordCompressionType.NONE": true, + "tf.compat.v1.io.TFRecordCompressionType.ZLIB": true, + "tf.compat.v1.io.TFRecordCompressionType.__eq__": true, + "tf.compat.v1.io.TFRecordCompressionType.__ge__": true, + "tf.compat.v1.io.TFRecordCompressionType.__gt__": true, + "tf.compat.v1.io.TFRecordCompressionType.__init__": true, + "tf.compat.v1.io.TFRecordCompressionType.__le__": true, + "tf.compat.v1.io.TFRecordCompressionType.__lt__": true, + "tf.compat.v1.io.TFRecordCompressionType.__ne__": true, + "tf.compat.v1.io.TFRecordCompressionType.__new__": true, + "tf.compat.v1.io.TFRecordOptions": false, + "tf.compat.v1.io.TFRecordOptions.__eq__": true, + "tf.compat.v1.io.TFRecordOptions.__ge__": true, + "tf.compat.v1.io.TFRecordOptions.__gt__": true, + "tf.compat.v1.io.TFRecordOptions.__init__": true, + "tf.compat.v1.io.TFRecordOptions.__le__": true, + "tf.compat.v1.io.TFRecordOptions.__lt__": true, + "tf.compat.v1.io.TFRecordOptions.__ne__": true, + "tf.compat.v1.io.TFRecordOptions.__new__": true, + "tf.compat.v1.io.TFRecordOptions.compression_type_map": true, + "tf.compat.v1.io.TFRecordOptions.get_compression_type_string": true, + "tf.compat.v1.io.TFRecordWriter": false, + "tf.compat.v1.io.TFRecordWriter.__enter__": true, + "tf.compat.v1.io.TFRecordWriter.__eq__": true, + "tf.compat.v1.io.TFRecordWriter.__exit__": true, + "tf.compat.v1.io.TFRecordWriter.__ge__": true, + "tf.compat.v1.io.TFRecordWriter.__gt__": 
true, + "tf.compat.v1.io.TFRecordWriter.__init__": true, + "tf.compat.v1.io.TFRecordWriter.__le__": true, + "tf.compat.v1.io.TFRecordWriter.__lt__": true, + "tf.compat.v1.io.TFRecordWriter.__ne__": true, + "tf.compat.v1.io.TFRecordWriter.__new__": true, + "tf.compat.v1.io.TFRecordWriter.close": true, + "tf.compat.v1.io.TFRecordWriter.flush": true, + "tf.compat.v1.io.TFRecordWriter.write": true, + "tf.compat.v1.io.VarLenFeature": false, + "tf.compat.v1.io.VarLenFeature.__add__": true, + "tf.compat.v1.io.VarLenFeature.__contains__": true, + "tf.compat.v1.io.VarLenFeature.__eq__": true, + "tf.compat.v1.io.VarLenFeature.__ge__": true, + "tf.compat.v1.io.VarLenFeature.__getitem__": true, + "tf.compat.v1.io.VarLenFeature.__gt__": true, + "tf.compat.v1.io.VarLenFeature.__init__": true, + "tf.compat.v1.io.VarLenFeature.__iter__": true, + "tf.compat.v1.io.VarLenFeature.__le__": true, + "tf.compat.v1.io.VarLenFeature.__len__": true, + "tf.compat.v1.io.VarLenFeature.__lt__": true, + "tf.compat.v1.io.VarLenFeature.__mul__": true, + "tf.compat.v1.io.VarLenFeature.__ne__": true, + "tf.compat.v1.io.VarLenFeature.__new__": true, + "tf.compat.v1.io.VarLenFeature.__rmul__": true, + "tf.compat.v1.io.VarLenFeature.count": true, + "tf.compat.v1.io.VarLenFeature.dtype": true, + "tf.compat.v1.io.VarLenFeature.index": true, + "tf.compat.v1.io.decode_and_crop_jpeg": false, + "tf.compat.v1.io.decode_base64": false, + "tf.compat.v1.io.decode_bmp": false, + "tf.compat.v1.io.decode_compressed": false, + "tf.compat.v1.io.decode_csv": false, + "tf.compat.v1.io.decode_gif": false, + "tf.compat.v1.io.decode_image": false, + "tf.compat.v1.io.decode_jpeg": false, + "tf.compat.v1.io.decode_json_example": false, + "tf.compat.v1.io.decode_png": false, + "tf.compat.v1.io.decode_proto": false, + "tf.compat.v1.io.decode_raw": false, + "tf.compat.v1.io.deserialize_many_sparse": false, + "tf.compat.v1.io.encode_base64": false, + "tf.compat.v1.io.encode_jpeg": false, + "tf.compat.v1.io.encode_proto": false, + "tf.compat.v1.io.extract_jpeg_shape": false, + "tf.compat.v1.io.gfile": false, + "tf.compat.v1.io.gfile.GFile": false, + "tf.compat.v1.io.gfile.GFile.__enter__": true, + "tf.compat.v1.io.gfile.GFile.__eq__": true, + "tf.compat.v1.io.gfile.GFile.__exit__": true, + "tf.compat.v1.io.gfile.GFile.__ge__": true, + "tf.compat.v1.io.gfile.GFile.__gt__": true, + "tf.compat.v1.io.gfile.GFile.__init__": true, + "tf.compat.v1.io.gfile.GFile.__iter__": true, + "tf.compat.v1.io.gfile.GFile.__le__": true, + "tf.compat.v1.io.gfile.GFile.__lt__": true, + "tf.compat.v1.io.gfile.GFile.__ne__": true, + "tf.compat.v1.io.gfile.GFile.__new__": true, + "tf.compat.v1.io.gfile.GFile.close": true, + "tf.compat.v1.io.gfile.GFile.flush": true, + "tf.compat.v1.io.gfile.GFile.mode": true, + "tf.compat.v1.io.gfile.GFile.name": true, + "tf.compat.v1.io.gfile.GFile.next": true, + "tf.compat.v1.io.gfile.GFile.read": true, + "tf.compat.v1.io.gfile.GFile.readline": true, + "tf.compat.v1.io.gfile.GFile.readlines": true, + "tf.compat.v1.io.gfile.GFile.seek": true, + "tf.compat.v1.io.gfile.GFile.seekable": true, + "tf.compat.v1.io.gfile.GFile.size": true, + "tf.compat.v1.io.gfile.GFile.tell": true, + "tf.compat.v1.io.gfile.GFile.write": true, + "tf.compat.v1.io.gfile.copy": false, + "tf.compat.v1.io.gfile.exists": false, + "tf.compat.v1.io.gfile.glob": false, + "tf.compat.v1.io.gfile.isdir": false, + "tf.compat.v1.io.gfile.listdir": false, + "tf.compat.v1.io.gfile.makedirs": false, + "tf.compat.v1.io.gfile.mkdir": false, + "tf.compat.v1.io.gfile.remove": false, + 
"tf.compat.v1.io.gfile.rename": false, + "tf.compat.v1.io.gfile.rmtree": false, + "tf.compat.v1.io.gfile.stat": false, + "tf.compat.v1.io.gfile.walk": false, + "tf.compat.v1.io.is_jpeg": false, + "tf.compat.v1.io.match_filenames_once": false, + "tf.compat.v1.io.matching_files": false, + "tf.compat.v1.io.parse_example": false, + "tf.compat.v1.io.parse_sequence_example": false, + "tf.compat.v1.io.parse_single_example": false, + "tf.compat.v1.io.parse_single_sequence_example": false, + "tf.compat.v1.io.parse_tensor": false, + "tf.compat.v1.io.read_file": false, + "tf.compat.v1.io.serialize_many_sparse": false, + "tf.compat.v1.io.serialize_sparse": false, + "tf.compat.v1.io.serialize_tensor": false, + "tf.compat.v1.io.tf_record_iterator": false, + "tf.compat.v1.io.write_file": false, + "tf.compat.v1.io.write_graph": false, + "tf.compat.v1.is_finite": false, + "tf.compat.v1.is_inf": false, + "tf.compat.v1.is_nan": false, + "tf.compat.v1.is_non_decreasing": false, + "tf.compat.v1.is_numeric_tensor": false, + "tf.compat.v1.is_strictly_increasing": false, + "tf.compat.v1.is_tensor": false, + "tf.compat.v1.is_variable_initialized": false, + "tf.compat.v1.keras": false, + "tf.compat.v1.keras.Input": false, + "tf.compat.v1.keras.Model": false, + "tf.compat.v1.keras.Model.__call__": true, + "tf.compat.v1.keras.Model.__eq__": true, + "tf.compat.v1.keras.Model.__ge__": true, + "tf.compat.v1.keras.Model.__gt__": true, + "tf.compat.v1.keras.Model.__init__": true, + "tf.compat.v1.keras.Model.__le__": true, + "tf.compat.v1.keras.Model.__lt__": true, + "tf.compat.v1.keras.Model.__ne__": true, + "tf.compat.v1.keras.Model.__new__": true, + "tf.compat.v1.keras.Model.activity_regularizer": true, + "tf.compat.v1.keras.Model.add_loss": true, + "tf.compat.v1.keras.Model.add_metric": true, + "tf.compat.v1.keras.Model.add_weight": true, + "tf.compat.v1.keras.Model.build": true, + "tf.compat.v1.keras.Model.call": true, + "tf.compat.v1.keras.Model.compile": true, + "tf.compat.v1.keras.Model.compute_mask": true, + "tf.compat.v1.keras.Model.compute_output_shape": true, + "tf.compat.v1.keras.Model.compute_output_signature": true, + "tf.compat.v1.keras.Model.count_params": true, + "tf.compat.v1.keras.Model.distribute_strategy": true, + "tf.compat.v1.keras.Model.dtype": true, + "tf.compat.v1.keras.Model.dynamic": true, + "tf.compat.v1.keras.Model.evaluate": true, + "tf.compat.v1.keras.Model.evaluate_generator": true, + "tf.compat.v1.keras.Model.fit": true, + "tf.compat.v1.keras.Model.fit_generator": true, + "tf.compat.v1.keras.Model.from_config": true, + "tf.compat.v1.keras.Model.get_config": true, + "tf.compat.v1.keras.Model.get_layer": true, + "tf.compat.v1.keras.Model.get_weights": true, + "tf.compat.v1.keras.Model.input": true, + "tf.compat.v1.keras.Model.input_spec": true, + "tf.compat.v1.keras.Model.layers": true, + "tf.compat.v1.keras.Model.load_weights": true, + "tf.compat.v1.keras.Model.losses": true, + "tf.compat.v1.keras.Model.make_predict_function": true, + "tf.compat.v1.keras.Model.make_test_function": true, + "tf.compat.v1.keras.Model.make_train_function": true, + "tf.compat.v1.keras.Model.metrics": true, + "tf.compat.v1.keras.Model.metrics_names": true, + "tf.compat.v1.keras.Model.name": true, + "tf.compat.v1.keras.Model.name_scope": true, + "tf.compat.v1.keras.Model.non_trainable_weights": true, + "tf.compat.v1.keras.Model.output": true, + "tf.compat.v1.keras.Model.predict": true, + "tf.compat.v1.keras.Model.predict_generator": true, + "tf.compat.v1.keras.Model.predict_on_batch": true, + 
"tf.compat.v1.keras.Model.predict_step": true, + "tf.compat.v1.keras.Model.reset_metrics": true, + "tf.compat.v1.keras.Model.reset_states": true, + "tf.compat.v1.keras.Model.run_eagerly": true, + "tf.compat.v1.keras.Model.save": true, + "tf.compat.v1.keras.Model.save_weights": true, + "tf.compat.v1.keras.Model.set_weights": true, + "tf.compat.v1.keras.Model.state_updates": true, + "tf.compat.v1.keras.Model.stateful": true, + "tf.compat.v1.keras.Model.submodules": true, + "tf.compat.v1.keras.Model.summary": true, + "tf.compat.v1.keras.Model.test_on_batch": true, + "tf.compat.v1.keras.Model.test_step": true, + "tf.compat.v1.keras.Model.to_json": true, + "tf.compat.v1.keras.Model.to_yaml": true, + "tf.compat.v1.keras.Model.train_on_batch": true, + "tf.compat.v1.keras.Model.train_step": true, + "tf.compat.v1.keras.Model.trainable": true, + "tf.compat.v1.keras.Model.trainable_weights": true, + "tf.compat.v1.keras.Model.weights": true, + "tf.compat.v1.keras.Model.with_name_scope": true, + "tf.compat.v1.keras.Sequential": false, + "tf.compat.v1.keras.Sequential.__call__": true, + "tf.compat.v1.keras.Sequential.__eq__": true, + "tf.compat.v1.keras.Sequential.__ge__": true, + "tf.compat.v1.keras.Sequential.__gt__": true, + "tf.compat.v1.keras.Sequential.__init__": true, + "tf.compat.v1.keras.Sequential.__le__": true, + "tf.compat.v1.keras.Sequential.__lt__": true, + "tf.compat.v1.keras.Sequential.__ne__": true, + "tf.compat.v1.keras.Sequential.__new__": true, + "tf.compat.v1.keras.Sequential.activity_regularizer": true, + "tf.compat.v1.keras.Sequential.add": true, + "tf.compat.v1.keras.Sequential.add_loss": true, + "tf.compat.v1.keras.Sequential.add_metric": true, + "tf.compat.v1.keras.Sequential.add_weight": true, + "tf.compat.v1.keras.Sequential.build": true, + "tf.compat.v1.keras.Sequential.call": true, + "tf.compat.v1.keras.Sequential.compile": true, + "tf.compat.v1.keras.Sequential.compute_mask": true, + "tf.compat.v1.keras.Sequential.compute_output_shape": true, + "tf.compat.v1.keras.Sequential.compute_output_signature": true, + "tf.compat.v1.keras.Sequential.count_params": true, + "tf.compat.v1.keras.Sequential.distribute_strategy": true, + "tf.compat.v1.keras.Sequential.dtype": true, + "tf.compat.v1.keras.Sequential.dynamic": true, + "tf.compat.v1.keras.Sequential.evaluate": true, + "tf.compat.v1.keras.Sequential.evaluate_generator": true, + "tf.compat.v1.keras.Sequential.fit": true, + "tf.compat.v1.keras.Sequential.fit_generator": true, + "tf.compat.v1.keras.Sequential.from_config": true, + "tf.compat.v1.keras.Sequential.get_config": true, + "tf.compat.v1.keras.Sequential.get_layer": true, + "tf.compat.v1.keras.Sequential.get_weights": true, + "tf.compat.v1.keras.Sequential.input": true, + "tf.compat.v1.keras.Sequential.input_spec": true, + "tf.compat.v1.keras.Sequential.layers": true, + "tf.compat.v1.keras.Sequential.load_weights": true, + "tf.compat.v1.keras.Sequential.losses": true, + "tf.compat.v1.keras.Sequential.make_predict_function": true, + "tf.compat.v1.keras.Sequential.make_test_function": true, + "tf.compat.v1.keras.Sequential.make_train_function": true, + "tf.compat.v1.keras.Sequential.metrics": true, + "tf.compat.v1.keras.Sequential.metrics_names": true, + "tf.compat.v1.keras.Sequential.name": true, + "tf.compat.v1.keras.Sequential.name_scope": true, + "tf.compat.v1.keras.Sequential.non_trainable_weights": true, + "tf.compat.v1.keras.Sequential.output": true, + "tf.compat.v1.keras.Sequential.pop": true, + "tf.compat.v1.keras.Sequential.predict": true, + 
"tf.compat.v1.keras.Sequential.predict_classes": true, + "tf.compat.v1.keras.Sequential.predict_generator": true, + "tf.compat.v1.keras.Sequential.predict_on_batch": true, + "tf.compat.v1.keras.Sequential.predict_proba": true, + "tf.compat.v1.keras.Sequential.predict_step": true, + "tf.compat.v1.keras.Sequential.reset_metrics": true, + "tf.compat.v1.keras.Sequential.reset_states": true, + "tf.compat.v1.keras.Sequential.run_eagerly": true, + "tf.compat.v1.keras.Sequential.save": true, + "tf.compat.v1.keras.Sequential.save_weights": true, + "tf.compat.v1.keras.Sequential.set_weights": true, + "tf.compat.v1.keras.Sequential.state_updates": true, + "tf.compat.v1.keras.Sequential.stateful": true, + "tf.compat.v1.keras.Sequential.submodules": true, + "tf.compat.v1.keras.Sequential.summary": true, + "tf.compat.v1.keras.Sequential.test_on_batch": true, + "tf.compat.v1.keras.Sequential.test_step": true, + "tf.compat.v1.keras.Sequential.to_json": true, + "tf.compat.v1.keras.Sequential.to_yaml": true, + "tf.compat.v1.keras.Sequential.train_on_batch": true, + "tf.compat.v1.keras.Sequential.train_step": true, + "tf.compat.v1.keras.Sequential.trainable": true, + "tf.compat.v1.keras.Sequential.trainable_weights": true, + "tf.compat.v1.keras.Sequential.weights": true, + "tf.compat.v1.keras.Sequential.with_name_scope": true, + "tf.compat.v1.keras.activations": false, + "tf.compat.v1.keras.activations.deserialize": false, + "tf.compat.v1.keras.activations.elu": false, + "tf.compat.v1.keras.activations.exponential": false, + "tf.compat.v1.keras.activations.get": false, + "tf.compat.v1.keras.activations.hard_sigmoid": false, + "tf.compat.v1.keras.activations.linear": false, + "tf.compat.v1.keras.activations.relu": false, + "tf.compat.v1.keras.activations.selu": false, + "tf.compat.v1.keras.activations.serialize": false, + "tf.compat.v1.keras.activations.sigmoid": false, + "tf.compat.v1.keras.activations.softmax": false, + "tf.compat.v1.keras.activations.softplus": false, + "tf.compat.v1.keras.activations.softsign": false, + "tf.compat.v1.keras.activations.swish": false, + "tf.compat.v1.keras.activations.tanh": false, + "tf.compat.v1.keras.applications": false, + "tf.compat.v1.keras.applications.DenseNet121": false, + "tf.compat.v1.keras.applications.DenseNet169": false, + "tf.compat.v1.keras.applications.DenseNet201": false, + "tf.compat.v1.keras.applications.InceptionResNetV2": false, + "tf.compat.v1.keras.applications.InceptionV3": false, + "tf.compat.v1.keras.applications.MobileNet": false, + "tf.compat.v1.keras.applications.MobileNetV2": false, + "tf.compat.v1.keras.applications.NASNetLarge": false, + "tf.compat.v1.keras.applications.NASNetMobile": false, + "tf.compat.v1.keras.applications.ResNet101": false, + "tf.compat.v1.keras.applications.ResNet101V2": false, + "tf.compat.v1.keras.applications.ResNet152": false, + "tf.compat.v1.keras.applications.ResNet152V2": false, + "tf.compat.v1.keras.applications.ResNet50": false, + "tf.compat.v1.keras.applications.ResNet50V2": false, + "tf.compat.v1.keras.applications.VGG16": false, + "tf.compat.v1.keras.applications.VGG19": false, + "tf.compat.v1.keras.applications.Xception": false, + "tf.compat.v1.keras.applications.densenet": false, + "tf.compat.v1.keras.applications.densenet.DenseNet121": false, + "tf.compat.v1.keras.applications.densenet.DenseNet169": false, + "tf.compat.v1.keras.applications.densenet.DenseNet201": false, + "tf.compat.v1.keras.applications.densenet.decode_predictions": false, + "tf.compat.v1.keras.applications.densenet.preprocess_input": 
false, + "tf.compat.v1.keras.applications.imagenet_utils": false, + "tf.compat.v1.keras.applications.imagenet_utils.decode_predictions": false, + "tf.compat.v1.keras.applications.imagenet_utils.preprocess_input": false, + "tf.compat.v1.keras.applications.inception_resnet_v2": false, + "tf.compat.v1.keras.applications.inception_resnet_v2.InceptionResNetV2": false, + "tf.compat.v1.keras.applications.inception_resnet_v2.decode_predictions": false, + "tf.compat.v1.keras.applications.inception_resnet_v2.preprocess_input": false, + "tf.compat.v1.keras.applications.inception_v3": false, + "tf.compat.v1.keras.applications.inception_v3.InceptionV3": false, + "tf.compat.v1.keras.applications.inception_v3.decode_predictions": false, + "tf.compat.v1.keras.applications.inception_v3.preprocess_input": false, + "tf.compat.v1.keras.applications.mobilenet": false, + "tf.compat.v1.keras.applications.mobilenet.MobileNet": false, + "tf.compat.v1.keras.applications.mobilenet.decode_predictions": false, + "tf.compat.v1.keras.applications.mobilenet.preprocess_input": false, + "tf.compat.v1.keras.applications.mobilenet_v2": false, + "tf.compat.v1.keras.applications.mobilenet_v2.MobileNetV2": false, + "tf.compat.v1.keras.applications.mobilenet_v2.decode_predictions": false, + "tf.compat.v1.keras.applications.mobilenet_v2.preprocess_input": false, + "tf.compat.v1.keras.applications.nasnet": false, + "tf.compat.v1.keras.applications.nasnet.NASNetLarge": false, + "tf.compat.v1.keras.applications.nasnet.NASNetMobile": false, + "tf.compat.v1.keras.applications.nasnet.decode_predictions": false, + "tf.compat.v1.keras.applications.nasnet.preprocess_input": false, + "tf.compat.v1.keras.applications.resnet": false, + "tf.compat.v1.keras.applications.resnet.ResNet101": false, + "tf.compat.v1.keras.applications.resnet.ResNet152": false, + "tf.compat.v1.keras.applications.resnet.ResNet50": false, + "tf.compat.v1.keras.applications.resnet.decode_predictions": false, + "tf.compat.v1.keras.applications.resnet.preprocess_input": false, + "tf.compat.v1.keras.applications.resnet50": false, + "tf.compat.v1.keras.applications.resnet50.ResNet50": false, + "tf.compat.v1.keras.applications.resnet50.decode_predictions": false, + "tf.compat.v1.keras.applications.resnet50.preprocess_input": false, + "tf.compat.v1.keras.applications.resnet_v2": false, + "tf.compat.v1.keras.applications.resnet_v2.ResNet101V2": false, + "tf.compat.v1.keras.applications.resnet_v2.ResNet152V2": false, + "tf.compat.v1.keras.applications.resnet_v2.ResNet50V2": false, + "tf.compat.v1.keras.applications.resnet_v2.decode_predictions": false, + "tf.compat.v1.keras.applications.resnet_v2.preprocess_input": false, + "tf.compat.v1.keras.applications.vgg16": false, + "tf.compat.v1.keras.applications.vgg16.VGG16": false, + "tf.compat.v1.keras.applications.vgg16.decode_predictions": false, + "tf.compat.v1.keras.applications.vgg16.preprocess_input": false, + "tf.compat.v1.keras.applications.vgg19": false, + "tf.compat.v1.keras.applications.vgg19.VGG19": false, + "tf.compat.v1.keras.applications.vgg19.decode_predictions": false, + "tf.compat.v1.keras.applications.vgg19.preprocess_input": false, + "tf.compat.v1.keras.applications.xception": false, + "tf.compat.v1.keras.applications.xception.Xception": false, + "tf.compat.v1.keras.applications.xception.decode_predictions": false, + "tf.compat.v1.keras.applications.xception.preprocess_input": false, + "tf.compat.v1.keras.backend": false, + "tf.compat.v1.keras.backend.abs": false, + "tf.compat.v1.keras.backend.all": false, + 
"tf.compat.v1.keras.backend.any": false, + "tf.compat.v1.keras.backend.arange": false, + "tf.compat.v1.keras.backend.argmax": false, + "tf.compat.v1.keras.backend.argmin": false, + "tf.compat.v1.keras.backend.backend": false, + "tf.compat.v1.keras.backend.batch_dot": false, + "tf.compat.v1.keras.backend.batch_flatten": false, + "tf.compat.v1.keras.backend.batch_get_value": false, + "tf.compat.v1.keras.backend.batch_normalization": false, + "tf.compat.v1.keras.backend.batch_set_value": false, + "tf.compat.v1.keras.backend.bias_add": false, + "tf.compat.v1.keras.backend.binary_crossentropy": false, + "tf.compat.v1.keras.backend.cast": false, + "tf.compat.v1.keras.backend.cast_to_floatx": false, + "tf.compat.v1.keras.backend.categorical_crossentropy": false, + "tf.compat.v1.keras.backend.clear_session": false, + "tf.compat.v1.keras.backend.clip": false, + "tf.compat.v1.keras.backend.concatenate": false, + "tf.compat.v1.keras.backend.constant": false, + "tf.compat.v1.keras.backend.conv1d": false, + "tf.compat.v1.keras.backend.conv2d": false, + "tf.compat.v1.keras.backend.conv2d_transpose": false, + "tf.compat.v1.keras.backend.conv3d": false, + "tf.compat.v1.keras.backend.cos": false, + "tf.compat.v1.keras.backend.count_params": false, + "tf.compat.v1.keras.backend.ctc_batch_cost": false, + "tf.compat.v1.keras.backend.ctc_decode": false, + "tf.compat.v1.keras.backend.ctc_label_dense_to_sparse": false, + "tf.compat.v1.keras.backend.cumprod": false, + "tf.compat.v1.keras.backend.cumsum": false, + "tf.compat.v1.keras.backend.depthwise_conv2d": false, + "tf.compat.v1.keras.backend.dot": false, + "tf.compat.v1.keras.backend.dropout": false, + "tf.compat.v1.keras.backend.dtype": false, + "tf.compat.v1.keras.backend.elu": false, + "tf.compat.v1.keras.backend.epsilon": false, + "tf.compat.v1.keras.backend.equal": false, + "tf.compat.v1.keras.backend.eval": false, + "tf.compat.v1.keras.backend.exp": false, + "tf.compat.v1.keras.backend.expand_dims": false, + "tf.compat.v1.keras.backend.eye": false, + "tf.compat.v1.keras.backend.flatten": false, + "tf.compat.v1.keras.backend.floatx": false, + "tf.compat.v1.keras.backend.foldl": false, + "tf.compat.v1.keras.backend.foldr": false, + "tf.compat.v1.keras.backend.function": false, + "tf.compat.v1.keras.backend.gather": false, + "tf.compat.v1.keras.backend.get_session": false, + "tf.compat.v1.keras.backend.get_uid": false, + "tf.compat.v1.keras.backend.get_value": false, + "tf.compat.v1.keras.backend.gradients": false, + "tf.compat.v1.keras.backend.greater": false, + "tf.compat.v1.keras.backend.greater_equal": false, + "tf.compat.v1.keras.backend.hard_sigmoid": false, + "tf.compat.v1.keras.backend.image_data_format": false, + "tf.compat.v1.keras.backend.in_test_phase": false, + "tf.compat.v1.keras.backend.in_top_k": false, + "tf.compat.v1.keras.backend.in_train_phase": false, + "tf.compat.v1.keras.backend.int_shape": false, + "tf.compat.v1.keras.backend.is_keras_tensor": false, + "tf.compat.v1.keras.backend.is_sparse": false, + "tf.compat.v1.keras.backend.l2_normalize": false, + "tf.compat.v1.keras.backend.learning_phase": false, + "tf.compat.v1.keras.backend.learning_phase_scope": false, + "tf.compat.v1.keras.backend.less": false, + "tf.compat.v1.keras.backend.less_equal": false, + "tf.compat.v1.keras.backend.local_conv1d": false, + "tf.compat.v1.keras.backend.local_conv2d": false, + "tf.compat.v1.keras.backend.log": false, + "tf.compat.v1.keras.backend.manual_variable_initialization": false, + "tf.compat.v1.keras.backend.map_fn": false, + 
"tf.compat.v1.keras.backend.max": false, + "tf.compat.v1.keras.backend.maximum": false, + "tf.compat.v1.keras.backend.mean": false, + "tf.compat.v1.keras.backend.min": false, + "tf.compat.v1.keras.backend.minimum": false, + "tf.compat.v1.keras.backend.moving_average_update": false, + "tf.compat.v1.keras.backend.name_scope": false, + "tf.compat.v1.keras.backend.name_scope.__enter__": true, + "tf.compat.v1.keras.backend.name_scope.__eq__": true, + "tf.compat.v1.keras.backend.name_scope.__exit__": true, + "tf.compat.v1.keras.backend.name_scope.__ge__": true, + "tf.compat.v1.keras.backend.name_scope.__gt__": true, + "tf.compat.v1.keras.backend.name_scope.__init__": true, + "tf.compat.v1.keras.backend.name_scope.__le__": true, + "tf.compat.v1.keras.backend.name_scope.__lt__": true, + "tf.compat.v1.keras.backend.name_scope.__ne__": true, + "tf.compat.v1.keras.backend.name_scope.__new__": true, + "tf.compat.v1.keras.backend.name_scope.name": true, + "tf.compat.v1.keras.backend.ndim": false, + "tf.compat.v1.keras.backend.normalize_batch_in_training": false, + "tf.compat.v1.keras.backend.not_equal": false, + "tf.compat.v1.keras.backend.one_hot": false, + "tf.compat.v1.keras.backend.ones": false, + "tf.compat.v1.keras.backend.ones_like": false, + "tf.compat.v1.keras.backend.permute_dimensions": false, + "tf.compat.v1.keras.backend.placeholder": false, + "tf.compat.v1.keras.backend.pool2d": false, + "tf.compat.v1.keras.backend.pool3d": false, + "tf.compat.v1.keras.backend.pow": false, + "tf.compat.v1.keras.backend.print_tensor": false, + "tf.compat.v1.keras.backend.prod": false, + "tf.compat.v1.keras.backend.random_binomial": false, + "tf.compat.v1.keras.backend.random_normal": false, + "tf.compat.v1.keras.backend.random_normal_variable": false, + "tf.compat.v1.keras.backend.random_uniform": false, + "tf.compat.v1.keras.backend.random_uniform_variable": false, + "tf.compat.v1.keras.backend.relu": false, + "tf.compat.v1.keras.backend.repeat": false, + "tf.compat.v1.keras.backend.repeat_elements": false, + "tf.compat.v1.keras.backend.reset_uids": false, + "tf.compat.v1.keras.backend.reshape": false, + "tf.compat.v1.keras.backend.resize_images": false, + "tf.compat.v1.keras.backend.resize_volumes": false, + "tf.compat.v1.keras.backend.reverse": false, + "tf.compat.v1.keras.backend.rnn": false, + "tf.compat.v1.keras.backend.round": false, + "tf.compat.v1.keras.backend.separable_conv2d": false, + "tf.compat.v1.keras.backend.set_epsilon": false, + "tf.compat.v1.keras.backend.set_floatx": false, + "tf.compat.v1.keras.backend.set_image_data_format": false, + "tf.compat.v1.keras.backend.set_learning_phase": false, + "tf.compat.v1.keras.backend.set_session": false, + "tf.compat.v1.keras.backend.set_value": false, + "tf.compat.v1.keras.backend.shape": false, + "tf.compat.v1.keras.backend.sigmoid": false, + "tf.compat.v1.keras.backend.sign": false, + "tf.compat.v1.keras.backend.sin": false, + "tf.compat.v1.keras.backend.softmax": false, + "tf.compat.v1.keras.backend.softplus": false, + "tf.compat.v1.keras.backend.softsign": false, + "tf.compat.v1.keras.backend.sparse_categorical_crossentropy": false, + "tf.compat.v1.keras.backend.spatial_2d_padding": false, + "tf.compat.v1.keras.backend.spatial_3d_padding": false, + "tf.compat.v1.keras.backend.sqrt": false, + "tf.compat.v1.keras.backend.square": false, + "tf.compat.v1.keras.backend.squeeze": false, + "tf.compat.v1.keras.backend.stack": false, + "tf.compat.v1.keras.backend.std": false, + "tf.compat.v1.keras.backend.stop_gradient": false, + 
"tf.compat.v1.keras.backend.sum": false, + "tf.compat.v1.keras.backend.switch": false, + "tf.compat.v1.keras.backend.tanh": false, + "tf.compat.v1.keras.backend.temporal_padding": false, + "tf.compat.v1.keras.backend.tile": false, + "tf.compat.v1.keras.backend.to_dense": false, + "tf.compat.v1.keras.backend.transpose": false, + "tf.compat.v1.keras.backend.truncated_normal": false, + "tf.compat.v1.keras.backend.update": false, + "tf.compat.v1.keras.backend.update_add": false, + "tf.compat.v1.keras.backend.update_sub": false, + "tf.compat.v1.keras.backend.var": false, + "tf.compat.v1.keras.backend.variable": false, + "tf.compat.v1.keras.backend.zeros": false, + "tf.compat.v1.keras.backend.zeros_like": false, + "tf.compat.v1.keras.callbacks": false, + "tf.compat.v1.keras.callbacks.BaseLogger": false, + "tf.compat.v1.keras.callbacks.BaseLogger.__eq__": true, + "tf.compat.v1.keras.callbacks.BaseLogger.__ge__": true, + "tf.compat.v1.keras.callbacks.BaseLogger.__gt__": true, + "tf.compat.v1.keras.callbacks.BaseLogger.__init__": true, + "tf.compat.v1.keras.callbacks.BaseLogger.__le__": true, + "tf.compat.v1.keras.callbacks.BaseLogger.__lt__": true, + "tf.compat.v1.keras.callbacks.BaseLogger.__ne__": true, + "tf.compat.v1.keras.callbacks.BaseLogger.__new__": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_batch_end": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_predict_end": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_test_begin": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_test_end": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_train_begin": true, + "tf.compat.v1.keras.callbacks.BaseLogger.on_train_end": true, + "tf.compat.v1.keras.callbacks.BaseLogger.set_model": true, + "tf.compat.v1.keras.callbacks.BaseLogger.set_params": true, + "tf.compat.v1.keras.callbacks.CSVLogger": false, + "tf.compat.v1.keras.callbacks.CSVLogger.__eq__": true, + "tf.compat.v1.keras.callbacks.CSVLogger.__ge__": true, + "tf.compat.v1.keras.callbacks.CSVLogger.__gt__": true, + "tf.compat.v1.keras.callbacks.CSVLogger.__init__": true, + "tf.compat.v1.keras.callbacks.CSVLogger.__le__": true, + "tf.compat.v1.keras.callbacks.CSVLogger.__lt__": true, + "tf.compat.v1.keras.callbacks.CSVLogger.__ne__": true, + "tf.compat.v1.keras.callbacks.CSVLogger.__new__": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_batch_end": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_predict_end": true, + 
"tf.compat.v1.keras.callbacks.CSVLogger.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_test_begin": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_test_end": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_train_begin": true, + "tf.compat.v1.keras.callbacks.CSVLogger.on_train_end": true, + "tf.compat.v1.keras.callbacks.CSVLogger.set_model": true, + "tf.compat.v1.keras.callbacks.CSVLogger.set_params": true, + "tf.compat.v1.keras.callbacks.Callback": false, + "tf.compat.v1.keras.callbacks.Callback.__eq__": true, + "tf.compat.v1.keras.callbacks.Callback.__ge__": true, + "tf.compat.v1.keras.callbacks.Callback.__gt__": true, + "tf.compat.v1.keras.callbacks.Callback.__init__": true, + "tf.compat.v1.keras.callbacks.Callback.__le__": true, + "tf.compat.v1.keras.callbacks.Callback.__lt__": true, + "tf.compat.v1.keras.callbacks.Callback.__ne__": true, + "tf.compat.v1.keras.callbacks.Callback.__new__": true, + "tf.compat.v1.keras.callbacks.Callback.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.Callback.on_batch_end": true, + "tf.compat.v1.keras.callbacks.Callback.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.Callback.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.Callback.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.Callback.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.Callback.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.Callback.on_predict_end": true, + "tf.compat.v1.keras.callbacks.Callback.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.Callback.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.Callback.on_test_begin": true, + "tf.compat.v1.keras.callbacks.Callback.on_test_end": true, + "tf.compat.v1.keras.callbacks.Callback.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.Callback.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.Callback.on_train_begin": true, + "tf.compat.v1.keras.callbacks.Callback.on_train_end": true, + "tf.compat.v1.keras.callbacks.Callback.set_model": true, + "tf.compat.v1.keras.callbacks.Callback.set_params": true, + "tf.compat.v1.keras.callbacks.EarlyStopping": false, + "tf.compat.v1.keras.callbacks.EarlyStopping.__eq__": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.__ge__": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.__gt__": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.__init__": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.__le__": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.__lt__": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.__ne__": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.__new__": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.get_monitor_value": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_batch_end": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_predict_end": true, + 
"tf.compat.v1.keras.callbacks.EarlyStopping.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_test_begin": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_test_end": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_train_begin": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.on_train_end": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.set_model": true, + "tf.compat.v1.keras.callbacks.EarlyStopping.set_params": true, + "tf.compat.v1.keras.callbacks.History": false, + "tf.compat.v1.keras.callbacks.History.__eq__": true, + "tf.compat.v1.keras.callbacks.History.__ge__": true, + "tf.compat.v1.keras.callbacks.History.__gt__": true, + "tf.compat.v1.keras.callbacks.History.__init__": true, + "tf.compat.v1.keras.callbacks.History.__le__": true, + "tf.compat.v1.keras.callbacks.History.__lt__": true, + "tf.compat.v1.keras.callbacks.History.__ne__": true, + "tf.compat.v1.keras.callbacks.History.__new__": true, + "tf.compat.v1.keras.callbacks.History.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.History.on_batch_end": true, + "tf.compat.v1.keras.callbacks.History.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.History.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.History.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.History.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.History.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.History.on_predict_end": true, + "tf.compat.v1.keras.callbacks.History.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.History.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.History.on_test_begin": true, + "tf.compat.v1.keras.callbacks.History.on_test_end": true, + "tf.compat.v1.keras.callbacks.History.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.History.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.History.on_train_begin": true, + "tf.compat.v1.keras.callbacks.History.on_train_end": true, + "tf.compat.v1.keras.callbacks.History.set_model": true, + "tf.compat.v1.keras.callbacks.History.set_params": true, + "tf.compat.v1.keras.callbacks.LambdaCallback": false, + "tf.compat.v1.keras.callbacks.LambdaCallback.__eq__": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.__ge__": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.__gt__": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.__init__": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.__le__": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.__lt__": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.__ne__": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.__new__": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_batch_end": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_predict_end": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_test_batch_begin": true, + 
"tf.compat.v1.keras.callbacks.LambdaCallback.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_test_begin": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_test_end": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_train_begin": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.on_train_end": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.set_model": true, + "tf.compat.v1.keras.callbacks.LambdaCallback.set_params": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler": false, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__eq__": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__ge__": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__gt__": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__init__": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__le__": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__lt__": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__ne__": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.__new__": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_batch_end": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_predict_end": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_test_begin": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_test_end": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_train_begin": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.on_train_end": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.set_model": true, + "tf.compat.v1.keras.callbacks.LearningRateScheduler.set_params": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint": false, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__eq__": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__ge__": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__gt__": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__init__": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__le__": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__lt__": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__ne__": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.__new__": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_batch_end": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_epoch_end": true, + 
"tf.compat.v1.keras.callbacks.ModelCheckpoint.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_predict_end": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_test_begin": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_test_end": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_train_begin": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.on_train_end": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.set_model": true, + "tf.compat.v1.keras.callbacks.ModelCheckpoint.set_params": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger": false, + "tf.compat.v1.keras.callbacks.ProgbarLogger.__eq__": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.__ge__": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.__gt__": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.__init__": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.__le__": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.__lt__": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.__ne__": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.__new__": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_batch_end": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_predict_end": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_test_begin": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_test_end": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_train_begin": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.on_train_end": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.set_model": true, + "tf.compat.v1.keras.callbacks.ProgbarLogger.set_params": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau": false, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__eq__": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__ge__": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__gt__": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__init__": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__le__": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__lt__": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__ne__": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.__new__": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.in_cooldown": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_batch_begin": true, + 
"tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_batch_end": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_predict_end": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_test_begin": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_test_end": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_train_begin": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.on_train_end": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.set_model": true, + "tf.compat.v1.keras.callbacks.ReduceLROnPlateau.set_params": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor": false, + "tf.compat.v1.keras.callbacks.RemoteMonitor.__eq__": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.__ge__": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.__gt__": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.__init__": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.__le__": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.__lt__": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.__ne__": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.__new__": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_batch_end": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_predict_end": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_test_begin": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_test_end": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_train_begin": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.on_train_end": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.set_model": true, + "tf.compat.v1.keras.callbacks.RemoteMonitor.set_params": true, + "tf.compat.v1.keras.callbacks.TensorBoard": false, + "tf.compat.v1.keras.callbacks.TensorBoard.__eq__": true, + "tf.compat.v1.keras.callbacks.TensorBoard.__ge__": true, + "tf.compat.v1.keras.callbacks.TensorBoard.__gt__": true, + "tf.compat.v1.keras.callbacks.TensorBoard.__init__": true, + "tf.compat.v1.keras.callbacks.TensorBoard.__le__": true, + "tf.compat.v1.keras.callbacks.TensorBoard.__lt__": true, + "tf.compat.v1.keras.callbacks.TensorBoard.__ne__": true, + 
"tf.compat.v1.keras.callbacks.TensorBoard.__new__": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_batch_end": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_predict_end": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_test_begin": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_test_end": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_train_begin": true, + "tf.compat.v1.keras.callbacks.TensorBoard.on_train_end": true, + "tf.compat.v1.keras.callbacks.TensorBoard.set_model": true, + "tf.compat.v1.keras.callbacks.TensorBoard.set_params": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN": false, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__eq__": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__ge__": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__gt__": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__init__": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__le__": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__lt__": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__ne__": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.__new__": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_batch_begin": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_batch_end": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_epoch_begin": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_epoch_end": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_predict_batch_begin": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_predict_batch_end": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_predict_begin": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_predict_end": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_test_batch_begin": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_test_batch_end": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_test_begin": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_test_end": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_train_batch_begin": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_train_batch_end": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_train_begin": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.on_train_end": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.set_model": true, + "tf.compat.v1.keras.callbacks.TerminateOnNaN.set_params": true, + "tf.compat.v1.keras.constraints": false, + "tf.compat.v1.keras.constraints.Constraint": false, + "tf.compat.v1.keras.constraints.Constraint.__call__": true, + "tf.compat.v1.keras.constraints.Constraint.__eq__": true, + "tf.compat.v1.keras.constraints.Constraint.__ge__": true, + "tf.compat.v1.keras.constraints.Constraint.__gt__": true, + "tf.compat.v1.keras.constraints.Constraint.__init__": true, + 
"tf.compat.v1.keras.constraints.Constraint.__le__": true, + "tf.compat.v1.keras.constraints.Constraint.__lt__": true, + "tf.compat.v1.keras.constraints.Constraint.__ne__": true, + "tf.compat.v1.keras.constraints.Constraint.__new__": true, + "tf.compat.v1.keras.constraints.Constraint.get_config": true, + "tf.compat.v1.keras.constraints.MaxNorm": false, + "tf.compat.v1.keras.constraints.MaxNorm.__call__": true, + "tf.compat.v1.keras.constraints.MaxNorm.__eq__": true, + "tf.compat.v1.keras.constraints.MaxNorm.__ge__": true, + "tf.compat.v1.keras.constraints.MaxNorm.__gt__": true, + "tf.compat.v1.keras.constraints.MaxNorm.__init__": true, + "tf.compat.v1.keras.constraints.MaxNorm.__le__": true, + "tf.compat.v1.keras.constraints.MaxNorm.__lt__": true, + "tf.compat.v1.keras.constraints.MaxNorm.__ne__": true, + "tf.compat.v1.keras.constraints.MaxNorm.__new__": true, + "tf.compat.v1.keras.constraints.MaxNorm.get_config": true, + "tf.compat.v1.keras.constraints.MinMaxNorm": false, + "tf.compat.v1.keras.constraints.MinMaxNorm.__call__": true, + "tf.compat.v1.keras.constraints.MinMaxNorm.__eq__": true, + "tf.compat.v1.keras.constraints.MinMaxNorm.__ge__": true, + "tf.compat.v1.keras.constraints.MinMaxNorm.__gt__": true, + "tf.compat.v1.keras.constraints.MinMaxNorm.__init__": true, + "tf.compat.v1.keras.constraints.MinMaxNorm.__le__": true, + "tf.compat.v1.keras.constraints.MinMaxNorm.__lt__": true, + "tf.compat.v1.keras.constraints.MinMaxNorm.__ne__": true, + "tf.compat.v1.keras.constraints.MinMaxNorm.__new__": true, + "tf.compat.v1.keras.constraints.MinMaxNorm.get_config": true, + "tf.compat.v1.keras.constraints.NonNeg": false, + "tf.compat.v1.keras.constraints.NonNeg.__call__": true, + "tf.compat.v1.keras.constraints.NonNeg.__eq__": true, + "tf.compat.v1.keras.constraints.NonNeg.__ge__": true, + "tf.compat.v1.keras.constraints.NonNeg.__gt__": true, + "tf.compat.v1.keras.constraints.NonNeg.__init__": true, + "tf.compat.v1.keras.constraints.NonNeg.__le__": true, + "tf.compat.v1.keras.constraints.NonNeg.__lt__": true, + "tf.compat.v1.keras.constraints.NonNeg.__ne__": true, + "tf.compat.v1.keras.constraints.NonNeg.__new__": true, + "tf.compat.v1.keras.constraints.NonNeg.get_config": true, + "tf.compat.v1.keras.constraints.RadialConstraint": false, + "tf.compat.v1.keras.constraints.RadialConstraint.__call__": true, + "tf.compat.v1.keras.constraints.RadialConstraint.__eq__": true, + "tf.compat.v1.keras.constraints.RadialConstraint.__ge__": true, + "tf.compat.v1.keras.constraints.RadialConstraint.__gt__": true, + "tf.compat.v1.keras.constraints.RadialConstraint.__init__": true, + "tf.compat.v1.keras.constraints.RadialConstraint.__le__": true, + "tf.compat.v1.keras.constraints.RadialConstraint.__lt__": true, + "tf.compat.v1.keras.constraints.RadialConstraint.__ne__": true, + "tf.compat.v1.keras.constraints.RadialConstraint.__new__": true, + "tf.compat.v1.keras.constraints.RadialConstraint.get_config": true, + "tf.compat.v1.keras.constraints.UnitNorm": false, + "tf.compat.v1.keras.constraints.UnitNorm.__call__": true, + "tf.compat.v1.keras.constraints.UnitNorm.__eq__": true, + "tf.compat.v1.keras.constraints.UnitNorm.__ge__": true, + "tf.compat.v1.keras.constraints.UnitNorm.__gt__": true, + "tf.compat.v1.keras.constraints.UnitNorm.__init__": true, + "tf.compat.v1.keras.constraints.UnitNorm.__le__": true, + "tf.compat.v1.keras.constraints.UnitNorm.__lt__": true, + "tf.compat.v1.keras.constraints.UnitNorm.__ne__": true, + "tf.compat.v1.keras.constraints.UnitNorm.__new__": true, + 
"tf.compat.v1.keras.constraints.UnitNorm.get_config": true, + "tf.compat.v1.keras.constraints.deserialize": false, + "tf.compat.v1.keras.constraints.get": false, + "tf.compat.v1.keras.constraints.max_norm": false, + "tf.compat.v1.keras.constraints.max_norm.__call__": true, + "tf.compat.v1.keras.constraints.max_norm.__eq__": true, + "tf.compat.v1.keras.constraints.max_norm.__ge__": true, + "tf.compat.v1.keras.constraints.max_norm.__gt__": true, + "tf.compat.v1.keras.constraints.max_norm.__init__": true, + "tf.compat.v1.keras.constraints.max_norm.__le__": true, + "tf.compat.v1.keras.constraints.max_norm.__lt__": true, + "tf.compat.v1.keras.constraints.max_norm.__ne__": true, + "tf.compat.v1.keras.constraints.max_norm.__new__": true, + "tf.compat.v1.keras.constraints.max_norm.get_config": true, + "tf.compat.v1.keras.constraints.min_max_norm": false, + "tf.compat.v1.keras.constraints.min_max_norm.__call__": true, + "tf.compat.v1.keras.constraints.min_max_norm.__eq__": true, + "tf.compat.v1.keras.constraints.min_max_norm.__ge__": true, + "tf.compat.v1.keras.constraints.min_max_norm.__gt__": true, + "tf.compat.v1.keras.constraints.min_max_norm.__init__": true, + "tf.compat.v1.keras.constraints.min_max_norm.__le__": true, + "tf.compat.v1.keras.constraints.min_max_norm.__lt__": true, + "tf.compat.v1.keras.constraints.min_max_norm.__ne__": true, + "tf.compat.v1.keras.constraints.min_max_norm.__new__": true, + "tf.compat.v1.keras.constraints.min_max_norm.get_config": true, + "tf.compat.v1.keras.constraints.non_neg": false, + "tf.compat.v1.keras.constraints.non_neg.__call__": true, + "tf.compat.v1.keras.constraints.non_neg.__eq__": true, + "tf.compat.v1.keras.constraints.non_neg.__ge__": true, + "tf.compat.v1.keras.constraints.non_neg.__gt__": true, + "tf.compat.v1.keras.constraints.non_neg.__init__": true, + "tf.compat.v1.keras.constraints.non_neg.__le__": true, + "tf.compat.v1.keras.constraints.non_neg.__lt__": true, + "tf.compat.v1.keras.constraints.non_neg.__ne__": true, + "tf.compat.v1.keras.constraints.non_neg.__new__": true, + "tf.compat.v1.keras.constraints.non_neg.get_config": true, + "tf.compat.v1.keras.constraints.radial_constraint": false, + "tf.compat.v1.keras.constraints.radial_constraint.__call__": true, + "tf.compat.v1.keras.constraints.radial_constraint.__eq__": true, + "tf.compat.v1.keras.constraints.radial_constraint.__ge__": true, + "tf.compat.v1.keras.constraints.radial_constraint.__gt__": true, + "tf.compat.v1.keras.constraints.radial_constraint.__init__": true, + "tf.compat.v1.keras.constraints.radial_constraint.__le__": true, + "tf.compat.v1.keras.constraints.radial_constraint.__lt__": true, + "tf.compat.v1.keras.constraints.radial_constraint.__ne__": true, + "tf.compat.v1.keras.constraints.radial_constraint.__new__": true, + "tf.compat.v1.keras.constraints.radial_constraint.get_config": true, + "tf.compat.v1.keras.constraints.serialize": false, + "tf.compat.v1.keras.constraints.unit_norm": false, + "tf.compat.v1.keras.constraints.unit_norm.__call__": true, + "tf.compat.v1.keras.constraints.unit_norm.__eq__": true, + "tf.compat.v1.keras.constraints.unit_norm.__ge__": true, + "tf.compat.v1.keras.constraints.unit_norm.__gt__": true, + "tf.compat.v1.keras.constraints.unit_norm.__init__": true, + "tf.compat.v1.keras.constraints.unit_norm.__le__": true, + "tf.compat.v1.keras.constraints.unit_norm.__lt__": true, + "tf.compat.v1.keras.constraints.unit_norm.__ne__": true, + "tf.compat.v1.keras.constraints.unit_norm.__new__": true, + 
"tf.compat.v1.keras.constraints.unit_norm.get_config": true, + "tf.compat.v1.keras.datasets": false, + "tf.compat.v1.keras.datasets.boston_housing": false, + "tf.compat.v1.keras.datasets.boston_housing.load_data": false, + "tf.compat.v1.keras.datasets.cifar10": false, + "tf.compat.v1.keras.datasets.cifar10.load_data": false, + "tf.compat.v1.keras.datasets.cifar100": false, + "tf.compat.v1.keras.datasets.cifar100.load_data": false, + "tf.compat.v1.keras.datasets.fashion_mnist": false, + "tf.compat.v1.keras.datasets.fashion_mnist.load_data": false, + "tf.compat.v1.keras.datasets.imdb": false, + "tf.compat.v1.keras.datasets.imdb.get_word_index": false, + "tf.compat.v1.keras.datasets.imdb.load_data": false, + "tf.compat.v1.keras.datasets.mnist": false, + "tf.compat.v1.keras.datasets.mnist.load_data": false, + "tf.compat.v1.keras.datasets.reuters": false, + "tf.compat.v1.keras.datasets.reuters.get_word_index": false, + "tf.compat.v1.keras.datasets.reuters.load_data": false, + "tf.compat.v1.keras.estimator": false, + "tf.compat.v1.keras.estimator.model_to_estimator": false, + "tf.compat.v1.keras.experimental": false, + "tf.compat.v1.keras.experimental.CosineDecay": false, + "tf.compat.v1.keras.experimental.CosineDecay.__call__": true, + "tf.compat.v1.keras.experimental.CosineDecay.__eq__": true, + "tf.compat.v1.keras.experimental.CosineDecay.__ge__": true, + "tf.compat.v1.keras.experimental.CosineDecay.__gt__": true, + "tf.compat.v1.keras.experimental.CosineDecay.__init__": true, + "tf.compat.v1.keras.experimental.CosineDecay.__le__": true, + "tf.compat.v1.keras.experimental.CosineDecay.__lt__": true, + "tf.compat.v1.keras.experimental.CosineDecay.__ne__": true, + "tf.compat.v1.keras.experimental.CosineDecay.__new__": true, + "tf.compat.v1.keras.experimental.CosineDecay.from_config": true, + "tf.compat.v1.keras.experimental.CosineDecay.get_config": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts": false, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__call__": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__eq__": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__ge__": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__gt__": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__init__": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__le__": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__lt__": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__ne__": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.__new__": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.from_config": true, + "tf.compat.v1.keras.experimental.CosineDecayRestarts.get_config": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay": false, + "tf.compat.v1.keras.experimental.LinearCosineDecay.__call__": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay.__eq__": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay.__ge__": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay.__gt__": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay.__init__": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay.__le__": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay.__lt__": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay.__ne__": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay.__new__": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay.from_config": true, + "tf.compat.v1.keras.experimental.LinearCosineDecay.get_config": true, + 
"tf.compat.v1.keras.experimental.LinearModel": false, + "tf.compat.v1.keras.experimental.LinearModel.__call__": true, + "tf.compat.v1.keras.experimental.LinearModel.__eq__": true, + "tf.compat.v1.keras.experimental.LinearModel.__ge__": true, + "tf.compat.v1.keras.experimental.LinearModel.__gt__": true, + "tf.compat.v1.keras.experimental.LinearModel.__init__": true, + "tf.compat.v1.keras.experimental.LinearModel.__le__": true, + "tf.compat.v1.keras.experimental.LinearModel.__lt__": true, + "tf.compat.v1.keras.experimental.LinearModel.__ne__": true, + "tf.compat.v1.keras.experimental.LinearModel.__new__": true, + "tf.compat.v1.keras.experimental.LinearModel.activity_regularizer": true, + "tf.compat.v1.keras.experimental.LinearModel.add_loss": true, + "tf.compat.v1.keras.experimental.LinearModel.add_metric": true, + "tf.compat.v1.keras.experimental.LinearModel.add_weight": true, + "tf.compat.v1.keras.experimental.LinearModel.build": true, + "tf.compat.v1.keras.experimental.LinearModel.call": true, + "tf.compat.v1.keras.experimental.LinearModel.compile": true, + "tf.compat.v1.keras.experimental.LinearModel.compute_mask": true, + "tf.compat.v1.keras.experimental.LinearModel.compute_output_shape": true, + "tf.compat.v1.keras.experimental.LinearModel.compute_output_signature": true, + "tf.compat.v1.keras.experimental.LinearModel.count_params": true, + "tf.compat.v1.keras.experimental.LinearModel.distribute_strategy": true, + "tf.compat.v1.keras.experimental.LinearModel.dtype": true, + "tf.compat.v1.keras.experimental.LinearModel.dynamic": true, + "tf.compat.v1.keras.experimental.LinearModel.evaluate": true, + "tf.compat.v1.keras.experimental.LinearModel.evaluate_generator": true, + "tf.compat.v1.keras.experimental.LinearModel.fit": true, + "tf.compat.v1.keras.experimental.LinearModel.fit_generator": true, + "tf.compat.v1.keras.experimental.LinearModel.from_config": true, + "tf.compat.v1.keras.experimental.LinearModel.get_config": true, + "tf.compat.v1.keras.experimental.LinearModel.get_layer": true, + "tf.compat.v1.keras.experimental.LinearModel.get_weights": true, + "tf.compat.v1.keras.experimental.LinearModel.input": true, + "tf.compat.v1.keras.experimental.LinearModel.input_spec": true, + "tf.compat.v1.keras.experimental.LinearModel.layers": true, + "tf.compat.v1.keras.experimental.LinearModel.load_weights": true, + "tf.compat.v1.keras.experimental.LinearModel.losses": true, + "tf.compat.v1.keras.experimental.LinearModel.make_predict_function": true, + "tf.compat.v1.keras.experimental.LinearModel.make_test_function": true, + "tf.compat.v1.keras.experimental.LinearModel.make_train_function": true, + "tf.compat.v1.keras.experimental.LinearModel.metrics": true, + "tf.compat.v1.keras.experimental.LinearModel.metrics_names": true, + "tf.compat.v1.keras.experimental.LinearModel.name": true, + "tf.compat.v1.keras.experimental.LinearModel.name_scope": true, + "tf.compat.v1.keras.experimental.LinearModel.non_trainable_weights": true, + "tf.compat.v1.keras.experimental.LinearModel.output": true, + "tf.compat.v1.keras.experimental.LinearModel.predict": true, + "tf.compat.v1.keras.experimental.LinearModel.predict_generator": true, + "tf.compat.v1.keras.experimental.LinearModel.predict_on_batch": true, + "tf.compat.v1.keras.experimental.LinearModel.predict_step": true, + "tf.compat.v1.keras.experimental.LinearModel.reset_metrics": true, + "tf.compat.v1.keras.experimental.LinearModel.reset_states": true, + "tf.compat.v1.keras.experimental.LinearModel.run_eagerly": true, + 
"tf.compat.v1.keras.experimental.LinearModel.save": true, + "tf.compat.v1.keras.experimental.LinearModel.save_weights": true, + "tf.compat.v1.keras.experimental.LinearModel.set_weights": true, + "tf.compat.v1.keras.experimental.LinearModel.state_updates": true, + "tf.compat.v1.keras.experimental.LinearModel.stateful": true, + "tf.compat.v1.keras.experimental.LinearModel.submodules": true, + "tf.compat.v1.keras.experimental.LinearModel.summary": true, + "tf.compat.v1.keras.experimental.LinearModel.test_on_batch": true, + "tf.compat.v1.keras.experimental.LinearModel.test_step": true, + "tf.compat.v1.keras.experimental.LinearModel.to_json": true, + "tf.compat.v1.keras.experimental.LinearModel.to_yaml": true, + "tf.compat.v1.keras.experimental.LinearModel.train_on_batch": true, + "tf.compat.v1.keras.experimental.LinearModel.train_step": true, + "tf.compat.v1.keras.experimental.LinearModel.trainable": true, + "tf.compat.v1.keras.experimental.LinearModel.trainable_weights": true, + "tf.compat.v1.keras.experimental.LinearModel.weights": true, + "tf.compat.v1.keras.experimental.LinearModel.with_name_scope": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay": false, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__call__": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__eq__": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__ge__": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__gt__": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__init__": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__le__": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__lt__": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__ne__": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.__new__": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.from_config": true, + "tf.compat.v1.keras.experimental.NoisyLinearCosineDecay.get_config": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell": false, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__call__": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__eq__": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__ge__": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__gt__": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__init__": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__le__": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__lt__": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__ne__": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.__new__": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.activity_regularizer": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.add_loss": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.add_metric": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.add_weight": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.build": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.call": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.compute_mask": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.compute_output_shape": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.compute_output_signature": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.count_params": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.dtype": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.dynamic": true, + 
"tf.compat.v1.keras.experimental.PeepholeLSTMCell.from_config": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.get_config": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.get_dropout_mask_for_cell": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.get_initial_state": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.get_recurrent_dropout_mask_for_cell": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.get_weights": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.input": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.input_spec": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.losses": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.metrics": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.name": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.name_scope": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.non_trainable_weights": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.output": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.reset_dropout_mask": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.reset_recurrent_dropout_mask": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.set_weights": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.submodules": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.trainable": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.trainable_weights": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.weights": true, + "tf.compat.v1.keras.experimental.PeepholeLSTMCell.with_name_scope": true, + "tf.compat.v1.keras.experimental.SequenceFeatures": false, + "tf.compat.v1.keras.experimental.SequenceFeatures.__call__": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.__eq__": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.__ge__": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.__gt__": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.__init__": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.__le__": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.__lt__": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.__ne__": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.__new__": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.activity_regularizer": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.add_loss": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.add_metric": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.add_weight": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.build": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.call": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.compute_mask": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.compute_output_shape": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.compute_output_signature": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.count_params": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.dtype": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.dynamic": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.from_config": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.get_config": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.get_weights": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.input": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.input_spec": true, + 
"tf.compat.v1.keras.experimental.SequenceFeatures.losses": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.metrics": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.name": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.name_scope": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.non_trainable_weights": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.output": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.set_weights": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.submodules": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.trainable": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.trainable_weights": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.weights": true, + "tf.compat.v1.keras.experimental.SequenceFeatures.with_name_scope": true, + "tf.compat.v1.keras.experimental.WideDeepModel": false, + "tf.compat.v1.keras.experimental.WideDeepModel.__call__": true, + "tf.compat.v1.keras.experimental.WideDeepModel.__eq__": true, + "tf.compat.v1.keras.experimental.WideDeepModel.__ge__": true, + "tf.compat.v1.keras.experimental.WideDeepModel.__gt__": true, + "tf.compat.v1.keras.experimental.WideDeepModel.__init__": true, + "tf.compat.v1.keras.experimental.WideDeepModel.__le__": true, + "tf.compat.v1.keras.experimental.WideDeepModel.__lt__": true, + "tf.compat.v1.keras.experimental.WideDeepModel.__ne__": true, + "tf.compat.v1.keras.experimental.WideDeepModel.__new__": true, + "tf.compat.v1.keras.experimental.WideDeepModel.activity_regularizer": true, + "tf.compat.v1.keras.experimental.WideDeepModel.add_loss": true, + "tf.compat.v1.keras.experimental.WideDeepModel.add_metric": true, + "tf.compat.v1.keras.experimental.WideDeepModel.add_weight": true, + "tf.compat.v1.keras.experimental.WideDeepModel.build": true, + "tf.compat.v1.keras.experimental.WideDeepModel.call": true, + "tf.compat.v1.keras.experimental.WideDeepModel.compile": true, + "tf.compat.v1.keras.experimental.WideDeepModel.compute_mask": true, + "tf.compat.v1.keras.experimental.WideDeepModel.compute_output_shape": true, + "tf.compat.v1.keras.experimental.WideDeepModel.compute_output_signature": true, + "tf.compat.v1.keras.experimental.WideDeepModel.count_params": true, + "tf.compat.v1.keras.experimental.WideDeepModel.distribute_strategy": true, + "tf.compat.v1.keras.experimental.WideDeepModel.dtype": true, + "tf.compat.v1.keras.experimental.WideDeepModel.dynamic": true, + "tf.compat.v1.keras.experimental.WideDeepModel.evaluate": true, + "tf.compat.v1.keras.experimental.WideDeepModel.evaluate_generator": true, + "tf.compat.v1.keras.experimental.WideDeepModel.fit": true, + "tf.compat.v1.keras.experimental.WideDeepModel.fit_generator": true, + "tf.compat.v1.keras.experimental.WideDeepModel.from_config": true, + "tf.compat.v1.keras.experimental.WideDeepModel.get_config": true, + "tf.compat.v1.keras.experimental.WideDeepModel.get_layer": true, + "tf.compat.v1.keras.experimental.WideDeepModel.get_weights": true, + "tf.compat.v1.keras.experimental.WideDeepModel.input": true, + "tf.compat.v1.keras.experimental.WideDeepModel.input_spec": true, + "tf.compat.v1.keras.experimental.WideDeepModel.layers": true, + "tf.compat.v1.keras.experimental.WideDeepModel.load_weights": true, + "tf.compat.v1.keras.experimental.WideDeepModel.losses": true, + "tf.compat.v1.keras.experimental.WideDeepModel.make_predict_function": true, + "tf.compat.v1.keras.experimental.WideDeepModel.make_test_function": true, + 
"tf.compat.v1.keras.experimental.WideDeepModel.make_train_function": true, + "tf.compat.v1.keras.experimental.WideDeepModel.metrics": true, + "tf.compat.v1.keras.experimental.WideDeepModel.metrics_names": true, + "tf.compat.v1.keras.experimental.WideDeepModel.name": true, + "tf.compat.v1.keras.experimental.WideDeepModel.name_scope": true, + "tf.compat.v1.keras.experimental.WideDeepModel.non_trainable_weights": true, + "tf.compat.v1.keras.experimental.WideDeepModel.output": true, + "tf.compat.v1.keras.experimental.WideDeepModel.predict": true, + "tf.compat.v1.keras.experimental.WideDeepModel.predict_generator": true, + "tf.compat.v1.keras.experimental.WideDeepModel.predict_on_batch": true, + "tf.compat.v1.keras.experimental.WideDeepModel.predict_step": true, + "tf.compat.v1.keras.experimental.WideDeepModel.reset_metrics": true, + "tf.compat.v1.keras.experimental.WideDeepModel.reset_states": true, + "tf.compat.v1.keras.experimental.WideDeepModel.run_eagerly": true, + "tf.compat.v1.keras.experimental.WideDeepModel.save": true, + "tf.compat.v1.keras.experimental.WideDeepModel.save_weights": true, + "tf.compat.v1.keras.experimental.WideDeepModel.set_weights": true, + "tf.compat.v1.keras.experimental.WideDeepModel.state_updates": true, + "tf.compat.v1.keras.experimental.WideDeepModel.stateful": true, + "tf.compat.v1.keras.experimental.WideDeepModel.submodules": true, + "tf.compat.v1.keras.experimental.WideDeepModel.summary": true, + "tf.compat.v1.keras.experimental.WideDeepModel.test_on_batch": true, + "tf.compat.v1.keras.experimental.WideDeepModel.test_step": true, + "tf.compat.v1.keras.experimental.WideDeepModel.to_json": true, + "tf.compat.v1.keras.experimental.WideDeepModel.to_yaml": true, + "tf.compat.v1.keras.experimental.WideDeepModel.train_on_batch": true, + "tf.compat.v1.keras.experimental.WideDeepModel.train_step": true, + "tf.compat.v1.keras.experimental.WideDeepModel.trainable": true, + "tf.compat.v1.keras.experimental.WideDeepModel.trainable_weights": true, + "tf.compat.v1.keras.experimental.WideDeepModel.weights": true, + "tf.compat.v1.keras.experimental.WideDeepModel.with_name_scope": true, + "tf.compat.v1.keras.experimental.export_saved_model": false, + "tf.compat.v1.keras.experimental.load_from_saved_model": false, + "tf.compat.v1.keras.experimental.terminate_keras_multiprocessing_pools": false, + "tf.compat.v1.keras.initializers": false, + "tf.compat.v1.keras.initializers.Constant": false, + "tf.compat.v1.keras.initializers.Constant.__call__": true, + "tf.compat.v1.keras.initializers.Constant.__eq__": true, + "tf.compat.v1.keras.initializers.Constant.__ge__": true, + "tf.compat.v1.keras.initializers.Constant.__gt__": true, + "tf.compat.v1.keras.initializers.Constant.__init__": true, + "tf.compat.v1.keras.initializers.Constant.__le__": true, + "tf.compat.v1.keras.initializers.Constant.__lt__": true, + "tf.compat.v1.keras.initializers.Constant.__ne__": true, + "tf.compat.v1.keras.initializers.Constant.__new__": true, + "tf.compat.v1.keras.initializers.Constant.from_config": true, + "tf.compat.v1.keras.initializers.Constant.get_config": true, + "tf.compat.v1.keras.initializers.Identity": false, + "tf.compat.v1.keras.initializers.Identity.__call__": true, + "tf.compat.v1.keras.initializers.Identity.__eq__": true, + "tf.compat.v1.keras.initializers.Identity.__ge__": true, + "tf.compat.v1.keras.initializers.Identity.__gt__": true, + "tf.compat.v1.keras.initializers.Identity.__init__": true, + "tf.compat.v1.keras.initializers.Identity.__le__": true, + 
"tf.compat.v1.keras.initializers.Identity.__lt__": true, + "tf.compat.v1.keras.initializers.Identity.__ne__": true, + "tf.compat.v1.keras.initializers.Identity.__new__": true, + "tf.compat.v1.keras.initializers.Identity.from_config": true, + "tf.compat.v1.keras.initializers.Identity.get_config": true, + "tf.compat.v1.keras.initializers.Initializer": false, + "tf.compat.v1.keras.initializers.Initializer.__call__": true, + "tf.compat.v1.keras.initializers.Initializer.__eq__": true, + "tf.compat.v1.keras.initializers.Initializer.__ge__": true, + "tf.compat.v1.keras.initializers.Initializer.__gt__": true, + "tf.compat.v1.keras.initializers.Initializer.__init__": true, + "tf.compat.v1.keras.initializers.Initializer.__le__": true, + "tf.compat.v1.keras.initializers.Initializer.__lt__": true, + "tf.compat.v1.keras.initializers.Initializer.__ne__": true, + "tf.compat.v1.keras.initializers.Initializer.__new__": true, + "tf.compat.v1.keras.initializers.Initializer.from_config": true, + "tf.compat.v1.keras.initializers.Initializer.get_config": true, + "tf.compat.v1.keras.initializers.Ones": false, + "tf.compat.v1.keras.initializers.Ones.__call__": true, + "tf.compat.v1.keras.initializers.Ones.__eq__": true, + "tf.compat.v1.keras.initializers.Ones.__ge__": true, + "tf.compat.v1.keras.initializers.Ones.__gt__": true, + "tf.compat.v1.keras.initializers.Ones.__init__": true, + "tf.compat.v1.keras.initializers.Ones.__le__": true, + "tf.compat.v1.keras.initializers.Ones.__lt__": true, + "tf.compat.v1.keras.initializers.Ones.__ne__": true, + "tf.compat.v1.keras.initializers.Ones.__new__": true, + "tf.compat.v1.keras.initializers.Ones.from_config": true, + "tf.compat.v1.keras.initializers.Ones.get_config": true, + "tf.compat.v1.keras.initializers.Orthogonal": false, + "tf.compat.v1.keras.initializers.Orthogonal.__call__": true, + "tf.compat.v1.keras.initializers.Orthogonal.__eq__": true, + "tf.compat.v1.keras.initializers.Orthogonal.__ge__": true, + "tf.compat.v1.keras.initializers.Orthogonal.__gt__": true, + "tf.compat.v1.keras.initializers.Orthogonal.__init__": true, + "tf.compat.v1.keras.initializers.Orthogonal.__le__": true, + "tf.compat.v1.keras.initializers.Orthogonal.__lt__": true, + "tf.compat.v1.keras.initializers.Orthogonal.__ne__": true, + "tf.compat.v1.keras.initializers.Orthogonal.__new__": true, + "tf.compat.v1.keras.initializers.Orthogonal.from_config": true, + "tf.compat.v1.keras.initializers.Orthogonal.get_config": true, + "tf.compat.v1.keras.initializers.RandomNormal": false, + "tf.compat.v1.keras.initializers.RandomNormal.__call__": true, + "tf.compat.v1.keras.initializers.RandomNormal.__eq__": true, + "tf.compat.v1.keras.initializers.RandomNormal.__ge__": true, + "tf.compat.v1.keras.initializers.RandomNormal.__gt__": true, + "tf.compat.v1.keras.initializers.RandomNormal.__init__": true, + "tf.compat.v1.keras.initializers.RandomNormal.__le__": true, + "tf.compat.v1.keras.initializers.RandomNormal.__lt__": true, + "tf.compat.v1.keras.initializers.RandomNormal.__ne__": true, + "tf.compat.v1.keras.initializers.RandomNormal.__new__": true, + "tf.compat.v1.keras.initializers.RandomNormal.from_config": true, + "tf.compat.v1.keras.initializers.RandomNormal.get_config": true, + "tf.compat.v1.keras.initializers.RandomUniform": false, + "tf.compat.v1.keras.initializers.RandomUniform.__call__": true, + "tf.compat.v1.keras.initializers.RandomUniform.__eq__": true, + "tf.compat.v1.keras.initializers.RandomUniform.__ge__": true, + "tf.compat.v1.keras.initializers.RandomUniform.__gt__": true, + 
"tf.compat.v1.keras.initializers.RandomUniform.__init__": true, + "tf.compat.v1.keras.initializers.RandomUniform.__le__": true, + "tf.compat.v1.keras.initializers.RandomUniform.__lt__": true, + "tf.compat.v1.keras.initializers.RandomUniform.__ne__": true, + "tf.compat.v1.keras.initializers.RandomUniform.__new__": true, + "tf.compat.v1.keras.initializers.RandomUniform.from_config": true, + "tf.compat.v1.keras.initializers.RandomUniform.get_config": true, + "tf.compat.v1.keras.initializers.TruncatedNormal": false, + "tf.compat.v1.keras.initializers.TruncatedNormal.__call__": true, + "tf.compat.v1.keras.initializers.TruncatedNormal.__eq__": true, + "tf.compat.v1.keras.initializers.TruncatedNormal.__ge__": true, + "tf.compat.v1.keras.initializers.TruncatedNormal.__gt__": true, + "tf.compat.v1.keras.initializers.TruncatedNormal.__init__": true, + "tf.compat.v1.keras.initializers.TruncatedNormal.__le__": true, + "tf.compat.v1.keras.initializers.TruncatedNormal.__lt__": true, + "tf.compat.v1.keras.initializers.TruncatedNormal.__ne__": true, + "tf.compat.v1.keras.initializers.TruncatedNormal.__new__": true, + "tf.compat.v1.keras.initializers.TruncatedNormal.from_config": true, + "tf.compat.v1.keras.initializers.TruncatedNormal.get_config": true, + "tf.compat.v1.keras.initializers.VarianceScaling": false, + "tf.compat.v1.keras.initializers.VarianceScaling.__call__": true, + "tf.compat.v1.keras.initializers.VarianceScaling.__eq__": true, + "tf.compat.v1.keras.initializers.VarianceScaling.__ge__": true, + "tf.compat.v1.keras.initializers.VarianceScaling.__gt__": true, + "tf.compat.v1.keras.initializers.VarianceScaling.__init__": true, + "tf.compat.v1.keras.initializers.VarianceScaling.__le__": true, + "tf.compat.v1.keras.initializers.VarianceScaling.__lt__": true, + "tf.compat.v1.keras.initializers.VarianceScaling.__ne__": true, + "tf.compat.v1.keras.initializers.VarianceScaling.__new__": true, + "tf.compat.v1.keras.initializers.VarianceScaling.from_config": true, + "tf.compat.v1.keras.initializers.VarianceScaling.get_config": true, + "tf.compat.v1.keras.initializers.Zeros": false, + "tf.compat.v1.keras.initializers.Zeros.__call__": true, + "tf.compat.v1.keras.initializers.Zeros.__eq__": true, + "tf.compat.v1.keras.initializers.Zeros.__ge__": true, + "tf.compat.v1.keras.initializers.Zeros.__gt__": true, + "tf.compat.v1.keras.initializers.Zeros.__init__": true, + "tf.compat.v1.keras.initializers.Zeros.__le__": true, + "tf.compat.v1.keras.initializers.Zeros.__lt__": true, + "tf.compat.v1.keras.initializers.Zeros.__ne__": true, + "tf.compat.v1.keras.initializers.Zeros.__new__": true, + "tf.compat.v1.keras.initializers.Zeros.from_config": true, + "tf.compat.v1.keras.initializers.Zeros.get_config": true, + "tf.compat.v1.keras.initializers.constant": false, + "tf.compat.v1.keras.initializers.constant.__call__": true, + "tf.compat.v1.keras.initializers.constant.__eq__": true, + "tf.compat.v1.keras.initializers.constant.__ge__": true, + "tf.compat.v1.keras.initializers.constant.__gt__": true, + "tf.compat.v1.keras.initializers.constant.__init__": true, + "tf.compat.v1.keras.initializers.constant.__le__": true, + "tf.compat.v1.keras.initializers.constant.__lt__": true, + "tf.compat.v1.keras.initializers.constant.__ne__": true, + "tf.compat.v1.keras.initializers.constant.__new__": true, + "tf.compat.v1.keras.initializers.constant.from_config": true, + "tf.compat.v1.keras.initializers.constant.get_config": true, + "tf.compat.v1.keras.initializers.deserialize": false, + "tf.compat.v1.keras.initializers.get": 
false, + "tf.compat.v1.keras.initializers.glorot_normal": false, + "tf.compat.v1.keras.initializers.glorot_normal.__call__": true, + "tf.compat.v1.keras.initializers.glorot_normal.__eq__": true, + "tf.compat.v1.keras.initializers.glorot_normal.__ge__": true, + "tf.compat.v1.keras.initializers.glorot_normal.__gt__": true, + "tf.compat.v1.keras.initializers.glorot_normal.__init__": true, + "tf.compat.v1.keras.initializers.glorot_normal.__le__": true, + "tf.compat.v1.keras.initializers.glorot_normal.__lt__": true, + "tf.compat.v1.keras.initializers.glorot_normal.__ne__": true, + "tf.compat.v1.keras.initializers.glorot_normal.__new__": true, + "tf.compat.v1.keras.initializers.glorot_normal.from_config": true, + "tf.compat.v1.keras.initializers.glorot_normal.get_config": true, + "tf.compat.v1.keras.initializers.glorot_uniform": false, + "tf.compat.v1.keras.initializers.glorot_uniform.__call__": true, + "tf.compat.v1.keras.initializers.glorot_uniform.__eq__": true, + "tf.compat.v1.keras.initializers.glorot_uniform.__ge__": true, + "tf.compat.v1.keras.initializers.glorot_uniform.__gt__": true, + "tf.compat.v1.keras.initializers.glorot_uniform.__init__": true, + "tf.compat.v1.keras.initializers.glorot_uniform.__le__": true, + "tf.compat.v1.keras.initializers.glorot_uniform.__lt__": true, + "tf.compat.v1.keras.initializers.glorot_uniform.__ne__": true, + "tf.compat.v1.keras.initializers.glorot_uniform.__new__": true, + "tf.compat.v1.keras.initializers.glorot_uniform.from_config": true, + "tf.compat.v1.keras.initializers.glorot_uniform.get_config": true, + "tf.compat.v1.keras.initializers.he_normal": false, + "tf.compat.v1.keras.initializers.he_uniform": false, + "tf.compat.v1.keras.initializers.identity": false, + "tf.compat.v1.keras.initializers.identity.__call__": true, + "tf.compat.v1.keras.initializers.identity.__eq__": true, + "tf.compat.v1.keras.initializers.identity.__ge__": true, + "tf.compat.v1.keras.initializers.identity.__gt__": true, + "tf.compat.v1.keras.initializers.identity.__init__": true, + "tf.compat.v1.keras.initializers.identity.__le__": true, + "tf.compat.v1.keras.initializers.identity.__lt__": true, + "tf.compat.v1.keras.initializers.identity.__ne__": true, + "tf.compat.v1.keras.initializers.identity.__new__": true, + "tf.compat.v1.keras.initializers.identity.from_config": true, + "tf.compat.v1.keras.initializers.identity.get_config": true, + "tf.compat.v1.keras.initializers.lecun_normal": false, + "tf.compat.v1.keras.initializers.lecun_uniform": false, + "tf.compat.v1.keras.initializers.normal": false, + "tf.compat.v1.keras.initializers.normal.__call__": true, + "tf.compat.v1.keras.initializers.normal.__eq__": true, + "tf.compat.v1.keras.initializers.normal.__ge__": true, + "tf.compat.v1.keras.initializers.normal.__gt__": true, + "tf.compat.v1.keras.initializers.normal.__init__": true, + "tf.compat.v1.keras.initializers.normal.__le__": true, + "tf.compat.v1.keras.initializers.normal.__lt__": true, + "tf.compat.v1.keras.initializers.normal.__ne__": true, + "tf.compat.v1.keras.initializers.normal.__new__": true, + "tf.compat.v1.keras.initializers.normal.from_config": true, + "tf.compat.v1.keras.initializers.normal.get_config": true, + "tf.compat.v1.keras.initializers.ones": false, + "tf.compat.v1.keras.initializers.ones.__call__": true, + "tf.compat.v1.keras.initializers.ones.__eq__": true, + "tf.compat.v1.keras.initializers.ones.__ge__": true, + "tf.compat.v1.keras.initializers.ones.__gt__": true, + "tf.compat.v1.keras.initializers.ones.__init__": true, + 
"tf.compat.v1.keras.initializers.ones.__le__": true, + "tf.compat.v1.keras.initializers.ones.__lt__": true, + "tf.compat.v1.keras.initializers.ones.__ne__": true, + "tf.compat.v1.keras.initializers.ones.__new__": true, + "tf.compat.v1.keras.initializers.ones.from_config": true, + "tf.compat.v1.keras.initializers.ones.get_config": true, + "tf.compat.v1.keras.initializers.orthogonal": false, + "tf.compat.v1.keras.initializers.orthogonal.__call__": true, + "tf.compat.v1.keras.initializers.orthogonal.__eq__": true, + "tf.compat.v1.keras.initializers.orthogonal.__ge__": true, + "tf.compat.v1.keras.initializers.orthogonal.__gt__": true, + "tf.compat.v1.keras.initializers.orthogonal.__init__": true, + "tf.compat.v1.keras.initializers.orthogonal.__le__": true, + "tf.compat.v1.keras.initializers.orthogonal.__lt__": true, + "tf.compat.v1.keras.initializers.orthogonal.__ne__": true, + "tf.compat.v1.keras.initializers.orthogonal.__new__": true, + "tf.compat.v1.keras.initializers.orthogonal.from_config": true, + "tf.compat.v1.keras.initializers.orthogonal.get_config": true, + "tf.compat.v1.keras.initializers.random_normal": false, + "tf.compat.v1.keras.initializers.random_normal.__call__": true, + "tf.compat.v1.keras.initializers.random_normal.__eq__": true, + "tf.compat.v1.keras.initializers.random_normal.__ge__": true, + "tf.compat.v1.keras.initializers.random_normal.__gt__": true, + "tf.compat.v1.keras.initializers.random_normal.__init__": true, + "tf.compat.v1.keras.initializers.random_normal.__le__": true, + "tf.compat.v1.keras.initializers.random_normal.__lt__": true, + "tf.compat.v1.keras.initializers.random_normal.__ne__": true, + "tf.compat.v1.keras.initializers.random_normal.__new__": true, + "tf.compat.v1.keras.initializers.random_normal.from_config": true, + "tf.compat.v1.keras.initializers.random_normal.get_config": true, + "tf.compat.v1.keras.initializers.random_uniform": false, + "tf.compat.v1.keras.initializers.random_uniform.__call__": true, + "tf.compat.v1.keras.initializers.random_uniform.__eq__": true, + "tf.compat.v1.keras.initializers.random_uniform.__ge__": true, + "tf.compat.v1.keras.initializers.random_uniform.__gt__": true, + "tf.compat.v1.keras.initializers.random_uniform.__init__": true, + "tf.compat.v1.keras.initializers.random_uniform.__le__": true, + "tf.compat.v1.keras.initializers.random_uniform.__lt__": true, + "tf.compat.v1.keras.initializers.random_uniform.__ne__": true, + "tf.compat.v1.keras.initializers.random_uniform.__new__": true, + "tf.compat.v1.keras.initializers.random_uniform.from_config": true, + "tf.compat.v1.keras.initializers.random_uniform.get_config": true, + "tf.compat.v1.keras.initializers.serialize": false, + "tf.compat.v1.keras.initializers.truncated_normal": false, + "tf.compat.v1.keras.initializers.truncated_normal.__call__": true, + "tf.compat.v1.keras.initializers.truncated_normal.__eq__": true, + "tf.compat.v1.keras.initializers.truncated_normal.__ge__": true, + "tf.compat.v1.keras.initializers.truncated_normal.__gt__": true, + "tf.compat.v1.keras.initializers.truncated_normal.__init__": true, + "tf.compat.v1.keras.initializers.truncated_normal.__le__": true, + "tf.compat.v1.keras.initializers.truncated_normal.__lt__": true, + "tf.compat.v1.keras.initializers.truncated_normal.__ne__": true, + "tf.compat.v1.keras.initializers.truncated_normal.__new__": true, + "tf.compat.v1.keras.initializers.truncated_normal.from_config": true, + "tf.compat.v1.keras.initializers.truncated_normal.get_config": true, + "tf.compat.v1.keras.initializers.uniform": 
false, + "tf.compat.v1.keras.initializers.uniform.__call__": true, + "tf.compat.v1.keras.initializers.uniform.__eq__": true, + "tf.compat.v1.keras.initializers.uniform.__ge__": true, + "tf.compat.v1.keras.initializers.uniform.__gt__": true, + "tf.compat.v1.keras.initializers.uniform.__init__": true, + "tf.compat.v1.keras.initializers.uniform.__le__": true, + "tf.compat.v1.keras.initializers.uniform.__lt__": true, + "tf.compat.v1.keras.initializers.uniform.__ne__": true, + "tf.compat.v1.keras.initializers.uniform.__new__": true, + "tf.compat.v1.keras.initializers.uniform.from_config": true, + "tf.compat.v1.keras.initializers.uniform.get_config": true, + "tf.compat.v1.keras.initializers.zeros": false, + "tf.compat.v1.keras.initializers.zeros.__call__": true, + "tf.compat.v1.keras.initializers.zeros.__eq__": true, + "tf.compat.v1.keras.initializers.zeros.__ge__": true, + "tf.compat.v1.keras.initializers.zeros.__gt__": true, + "tf.compat.v1.keras.initializers.zeros.__init__": true, + "tf.compat.v1.keras.initializers.zeros.__le__": true, + "tf.compat.v1.keras.initializers.zeros.__lt__": true, + "tf.compat.v1.keras.initializers.zeros.__ne__": true, + "tf.compat.v1.keras.initializers.zeros.__new__": true, + "tf.compat.v1.keras.initializers.zeros.from_config": true, + "tf.compat.v1.keras.initializers.zeros.get_config": true, + "tf.compat.v1.keras.layers": false, + "tf.compat.v1.keras.layers.AbstractRNNCell": false, + "tf.compat.v1.keras.layers.AbstractRNNCell.__call__": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.__eq__": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.__ge__": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.__gt__": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.__init__": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.__le__": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.__lt__": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.__ne__": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.__new__": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.activity_regularizer": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.add_loss": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.add_metric": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.add_weight": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.build": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.call": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.compute_mask": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.compute_output_shape": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.compute_output_signature": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.count_params": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.dtype": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.dynamic": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.from_config": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.get_config": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.get_initial_state": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.get_weights": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.input": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.input_spec": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.losses": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.metrics": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.name": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.name_scope": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.non_trainable_weights": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.output": true, + 
"tf.compat.v1.keras.layers.AbstractRNNCell.output_size": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.set_weights": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.state_size": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.submodules": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.trainable": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.trainable_weights": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.weights": true, + "tf.compat.v1.keras.layers.AbstractRNNCell.with_name_scope": true, + "tf.compat.v1.keras.layers.Activation": false, + "tf.compat.v1.keras.layers.Activation.__call__": true, + "tf.compat.v1.keras.layers.Activation.__eq__": true, + "tf.compat.v1.keras.layers.Activation.__ge__": true, + "tf.compat.v1.keras.layers.Activation.__gt__": true, + "tf.compat.v1.keras.layers.Activation.__init__": true, + "tf.compat.v1.keras.layers.Activation.__le__": true, + "tf.compat.v1.keras.layers.Activation.__lt__": true, + "tf.compat.v1.keras.layers.Activation.__ne__": true, + "tf.compat.v1.keras.layers.Activation.__new__": true, + "tf.compat.v1.keras.layers.Activation.activity_regularizer": true, + "tf.compat.v1.keras.layers.Activation.add_loss": true, + "tf.compat.v1.keras.layers.Activation.add_metric": true, + "tf.compat.v1.keras.layers.Activation.add_weight": true, + "tf.compat.v1.keras.layers.Activation.build": true, + "tf.compat.v1.keras.layers.Activation.call": true, + "tf.compat.v1.keras.layers.Activation.compute_mask": true, + "tf.compat.v1.keras.layers.Activation.compute_output_shape": true, + "tf.compat.v1.keras.layers.Activation.compute_output_signature": true, + "tf.compat.v1.keras.layers.Activation.count_params": true, + "tf.compat.v1.keras.layers.Activation.dtype": true, + "tf.compat.v1.keras.layers.Activation.dynamic": true, + "tf.compat.v1.keras.layers.Activation.from_config": true, + "tf.compat.v1.keras.layers.Activation.get_config": true, + "tf.compat.v1.keras.layers.Activation.get_weights": true, + "tf.compat.v1.keras.layers.Activation.input": true, + "tf.compat.v1.keras.layers.Activation.input_spec": true, + "tf.compat.v1.keras.layers.Activation.losses": true, + "tf.compat.v1.keras.layers.Activation.metrics": true, + "tf.compat.v1.keras.layers.Activation.name": true, + "tf.compat.v1.keras.layers.Activation.name_scope": true, + "tf.compat.v1.keras.layers.Activation.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Activation.output": true, + "tf.compat.v1.keras.layers.Activation.set_weights": true, + "tf.compat.v1.keras.layers.Activation.submodules": true, + "tf.compat.v1.keras.layers.Activation.trainable": true, + "tf.compat.v1.keras.layers.Activation.trainable_weights": true, + "tf.compat.v1.keras.layers.Activation.weights": true, + "tf.compat.v1.keras.layers.Activation.with_name_scope": true, + "tf.compat.v1.keras.layers.ActivityRegularization": false, + "tf.compat.v1.keras.layers.ActivityRegularization.__call__": true, + "tf.compat.v1.keras.layers.ActivityRegularization.__eq__": true, + "tf.compat.v1.keras.layers.ActivityRegularization.__ge__": true, + "tf.compat.v1.keras.layers.ActivityRegularization.__gt__": true, + "tf.compat.v1.keras.layers.ActivityRegularization.__init__": true, + "tf.compat.v1.keras.layers.ActivityRegularization.__le__": true, + "tf.compat.v1.keras.layers.ActivityRegularization.__lt__": true, + "tf.compat.v1.keras.layers.ActivityRegularization.__ne__": true, + "tf.compat.v1.keras.layers.ActivityRegularization.__new__": true, + "tf.compat.v1.keras.layers.ActivityRegularization.activity_regularizer": true, + 
"tf.compat.v1.keras.layers.ActivityRegularization.add_loss": true, + "tf.compat.v1.keras.layers.ActivityRegularization.add_metric": true, + "tf.compat.v1.keras.layers.ActivityRegularization.add_weight": true, + "tf.compat.v1.keras.layers.ActivityRegularization.build": true, + "tf.compat.v1.keras.layers.ActivityRegularization.call": true, + "tf.compat.v1.keras.layers.ActivityRegularization.compute_mask": true, + "tf.compat.v1.keras.layers.ActivityRegularization.compute_output_shape": true, + "tf.compat.v1.keras.layers.ActivityRegularization.compute_output_signature": true, + "tf.compat.v1.keras.layers.ActivityRegularization.count_params": true, + "tf.compat.v1.keras.layers.ActivityRegularization.dtype": true, + "tf.compat.v1.keras.layers.ActivityRegularization.dynamic": true, + "tf.compat.v1.keras.layers.ActivityRegularization.from_config": true, + "tf.compat.v1.keras.layers.ActivityRegularization.get_config": true, + "tf.compat.v1.keras.layers.ActivityRegularization.get_weights": true, + "tf.compat.v1.keras.layers.ActivityRegularization.input": true, + "tf.compat.v1.keras.layers.ActivityRegularization.input_spec": true, + "tf.compat.v1.keras.layers.ActivityRegularization.losses": true, + "tf.compat.v1.keras.layers.ActivityRegularization.metrics": true, + "tf.compat.v1.keras.layers.ActivityRegularization.name": true, + "tf.compat.v1.keras.layers.ActivityRegularization.name_scope": true, + "tf.compat.v1.keras.layers.ActivityRegularization.non_trainable_weights": true, + "tf.compat.v1.keras.layers.ActivityRegularization.output": true, + "tf.compat.v1.keras.layers.ActivityRegularization.set_weights": true, + "tf.compat.v1.keras.layers.ActivityRegularization.submodules": true, + "tf.compat.v1.keras.layers.ActivityRegularization.trainable": true, + "tf.compat.v1.keras.layers.ActivityRegularization.trainable_weights": true, + "tf.compat.v1.keras.layers.ActivityRegularization.weights": true, + "tf.compat.v1.keras.layers.ActivityRegularization.with_name_scope": true, + "tf.compat.v1.keras.layers.Add": false, + "tf.compat.v1.keras.layers.Add.__call__": true, + "tf.compat.v1.keras.layers.Add.__eq__": true, + "tf.compat.v1.keras.layers.Add.__ge__": true, + "tf.compat.v1.keras.layers.Add.__gt__": true, + "tf.compat.v1.keras.layers.Add.__init__": true, + "tf.compat.v1.keras.layers.Add.__le__": true, + "tf.compat.v1.keras.layers.Add.__lt__": true, + "tf.compat.v1.keras.layers.Add.__ne__": true, + "tf.compat.v1.keras.layers.Add.__new__": true, + "tf.compat.v1.keras.layers.Add.activity_regularizer": true, + "tf.compat.v1.keras.layers.Add.add_loss": true, + "tf.compat.v1.keras.layers.Add.add_metric": true, + "tf.compat.v1.keras.layers.Add.add_weight": true, + "tf.compat.v1.keras.layers.Add.build": true, + "tf.compat.v1.keras.layers.Add.call": true, + "tf.compat.v1.keras.layers.Add.compute_mask": true, + "tf.compat.v1.keras.layers.Add.compute_output_shape": true, + "tf.compat.v1.keras.layers.Add.compute_output_signature": true, + "tf.compat.v1.keras.layers.Add.count_params": true, + "tf.compat.v1.keras.layers.Add.dtype": true, + "tf.compat.v1.keras.layers.Add.dynamic": true, + "tf.compat.v1.keras.layers.Add.from_config": true, + "tf.compat.v1.keras.layers.Add.get_config": true, + "tf.compat.v1.keras.layers.Add.get_weights": true, + "tf.compat.v1.keras.layers.Add.input": true, + "tf.compat.v1.keras.layers.Add.input_spec": true, + "tf.compat.v1.keras.layers.Add.losses": true, + "tf.compat.v1.keras.layers.Add.metrics": true, + "tf.compat.v1.keras.layers.Add.name": true, + 
"tf.compat.v1.keras.layers.Add.name_scope": true, + "tf.compat.v1.keras.layers.Add.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Add.output": true, + "tf.compat.v1.keras.layers.Add.set_weights": true, + "tf.compat.v1.keras.layers.Add.submodules": true, + "tf.compat.v1.keras.layers.Add.trainable": true, + "tf.compat.v1.keras.layers.Add.trainable_weights": true, + "tf.compat.v1.keras.layers.Add.weights": true, + "tf.compat.v1.keras.layers.Add.with_name_scope": true, + "tf.compat.v1.keras.layers.AdditiveAttention": false, + "tf.compat.v1.keras.layers.AdditiveAttention.__call__": true, + "tf.compat.v1.keras.layers.AdditiveAttention.__eq__": true, + "tf.compat.v1.keras.layers.AdditiveAttention.__ge__": true, + "tf.compat.v1.keras.layers.AdditiveAttention.__gt__": true, + "tf.compat.v1.keras.layers.AdditiveAttention.__init__": true, + "tf.compat.v1.keras.layers.AdditiveAttention.__le__": true, + "tf.compat.v1.keras.layers.AdditiveAttention.__lt__": true, + "tf.compat.v1.keras.layers.AdditiveAttention.__ne__": true, + "tf.compat.v1.keras.layers.AdditiveAttention.__new__": true, + "tf.compat.v1.keras.layers.AdditiveAttention.activity_regularizer": true, + "tf.compat.v1.keras.layers.AdditiveAttention.add_loss": true, + "tf.compat.v1.keras.layers.AdditiveAttention.add_metric": true, + "tf.compat.v1.keras.layers.AdditiveAttention.add_weight": true, + "tf.compat.v1.keras.layers.AdditiveAttention.build": true, + "tf.compat.v1.keras.layers.AdditiveAttention.call": true, + "tf.compat.v1.keras.layers.AdditiveAttention.compute_mask": true, + "tf.compat.v1.keras.layers.AdditiveAttention.compute_output_shape": true, + "tf.compat.v1.keras.layers.AdditiveAttention.compute_output_signature": true, + "tf.compat.v1.keras.layers.AdditiveAttention.count_params": true, + "tf.compat.v1.keras.layers.AdditiveAttention.dtype": true, + "tf.compat.v1.keras.layers.AdditiveAttention.dynamic": true, + "tf.compat.v1.keras.layers.AdditiveAttention.from_config": true, + "tf.compat.v1.keras.layers.AdditiveAttention.get_config": true, + "tf.compat.v1.keras.layers.AdditiveAttention.get_weights": true, + "tf.compat.v1.keras.layers.AdditiveAttention.input": true, + "tf.compat.v1.keras.layers.AdditiveAttention.input_spec": true, + "tf.compat.v1.keras.layers.AdditiveAttention.losses": true, + "tf.compat.v1.keras.layers.AdditiveAttention.metrics": true, + "tf.compat.v1.keras.layers.AdditiveAttention.name": true, + "tf.compat.v1.keras.layers.AdditiveAttention.name_scope": true, + "tf.compat.v1.keras.layers.AdditiveAttention.non_trainable_weights": true, + "tf.compat.v1.keras.layers.AdditiveAttention.output": true, + "tf.compat.v1.keras.layers.AdditiveAttention.set_weights": true, + "tf.compat.v1.keras.layers.AdditiveAttention.submodules": true, + "tf.compat.v1.keras.layers.AdditiveAttention.trainable": true, + "tf.compat.v1.keras.layers.AdditiveAttention.trainable_weights": true, + "tf.compat.v1.keras.layers.AdditiveAttention.weights": true, + "tf.compat.v1.keras.layers.AdditiveAttention.with_name_scope": true, + "tf.compat.v1.keras.layers.AlphaDropout": false, + "tf.compat.v1.keras.layers.AlphaDropout.__call__": true, + "tf.compat.v1.keras.layers.AlphaDropout.__eq__": true, + "tf.compat.v1.keras.layers.AlphaDropout.__ge__": true, + "tf.compat.v1.keras.layers.AlphaDropout.__gt__": true, + "tf.compat.v1.keras.layers.AlphaDropout.__init__": true, + "tf.compat.v1.keras.layers.AlphaDropout.__le__": true, + "tf.compat.v1.keras.layers.AlphaDropout.__lt__": true, + "tf.compat.v1.keras.layers.AlphaDropout.__ne__": true, + 
"tf.compat.v1.keras.layers.AlphaDropout.__new__": true, + "tf.compat.v1.keras.layers.AlphaDropout.activity_regularizer": true, + "tf.compat.v1.keras.layers.AlphaDropout.add_loss": true, + "tf.compat.v1.keras.layers.AlphaDropout.add_metric": true, + "tf.compat.v1.keras.layers.AlphaDropout.add_weight": true, + "tf.compat.v1.keras.layers.AlphaDropout.build": true, + "tf.compat.v1.keras.layers.AlphaDropout.call": true, + "tf.compat.v1.keras.layers.AlphaDropout.compute_mask": true, + "tf.compat.v1.keras.layers.AlphaDropout.compute_output_shape": true, + "tf.compat.v1.keras.layers.AlphaDropout.compute_output_signature": true, + "tf.compat.v1.keras.layers.AlphaDropout.count_params": true, + "tf.compat.v1.keras.layers.AlphaDropout.dtype": true, + "tf.compat.v1.keras.layers.AlphaDropout.dynamic": true, + "tf.compat.v1.keras.layers.AlphaDropout.from_config": true, + "tf.compat.v1.keras.layers.AlphaDropout.get_config": true, + "tf.compat.v1.keras.layers.AlphaDropout.get_weights": true, + "tf.compat.v1.keras.layers.AlphaDropout.input": true, + "tf.compat.v1.keras.layers.AlphaDropout.input_spec": true, + "tf.compat.v1.keras.layers.AlphaDropout.losses": true, + "tf.compat.v1.keras.layers.AlphaDropout.metrics": true, + "tf.compat.v1.keras.layers.AlphaDropout.name": true, + "tf.compat.v1.keras.layers.AlphaDropout.name_scope": true, + "tf.compat.v1.keras.layers.AlphaDropout.non_trainable_weights": true, + "tf.compat.v1.keras.layers.AlphaDropout.output": true, + "tf.compat.v1.keras.layers.AlphaDropout.set_weights": true, + "tf.compat.v1.keras.layers.AlphaDropout.submodules": true, + "tf.compat.v1.keras.layers.AlphaDropout.trainable": true, + "tf.compat.v1.keras.layers.AlphaDropout.trainable_weights": true, + "tf.compat.v1.keras.layers.AlphaDropout.weights": true, + "tf.compat.v1.keras.layers.AlphaDropout.with_name_scope": true, + "tf.compat.v1.keras.layers.Attention": false, + "tf.compat.v1.keras.layers.Attention.__call__": true, + "tf.compat.v1.keras.layers.Attention.__eq__": true, + "tf.compat.v1.keras.layers.Attention.__ge__": true, + "tf.compat.v1.keras.layers.Attention.__gt__": true, + "tf.compat.v1.keras.layers.Attention.__init__": true, + "tf.compat.v1.keras.layers.Attention.__le__": true, + "tf.compat.v1.keras.layers.Attention.__lt__": true, + "tf.compat.v1.keras.layers.Attention.__ne__": true, + "tf.compat.v1.keras.layers.Attention.__new__": true, + "tf.compat.v1.keras.layers.Attention.activity_regularizer": true, + "tf.compat.v1.keras.layers.Attention.add_loss": true, + "tf.compat.v1.keras.layers.Attention.add_metric": true, + "tf.compat.v1.keras.layers.Attention.add_weight": true, + "tf.compat.v1.keras.layers.Attention.build": true, + "tf.compat.v1.keras.layers.Attention.call": true, + "tf.compat.v1.keras.layers.Attention.compute_mask": true, + "tf.compat.v1.keras.layers.Attention.compute_output_shape": true, + "tf.compat.v1.keras.layers.Attention.compute_output_signature": true, + "tf.compat.v1.keras.layers.Attention.count_params": true, + "tf.compat.v1.keras.layers.Attention.dtype": true, + "tf.compat.v1.keras.layers.Attention.dynamic": true, + "tf.compat.v1.keras.layers.Attention.from_config": true, + "tf.compat.v1.keras.layers.Attention.get_config": true, + "tf.compat.v1.keras.layers.Attention.get_weights": true, + "tf.compat.v1.keras.layers.Attention.input": true, + "tf.compat.v1.keras.layers.Attention.input_spec": true, + "tf.compat.v1.keras.layers.Attention.losses": true, + "tf.compat.v1.keras.layers.Attention.metrics": true, + "tf.compat.v1.keras.layers.Attention.name": true, + 
"tf.compat.v1.keras.layers.Attention.name_scope": true, + "tf.compat.v1.keras.layers.Attention.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Attention.output": true, + "tf.compat.v1.keras.layers.Attention.set_weights": true, + "tf.compat.v1.keras.layers.Attention.submodules": true, + "tf.compat.v1.keras.layers.Attention.trainable": true, + "tf.compat.v1.keras.layers.Attention.trainable_weights": true, + "tf.compat.v1.keras.layers.Attention.weights": true, + "tf.compat.v1.keras.layers.Attention.with_name_scope": true, + "tf.compat.v1.keras.layers.Average": false, + "tf.compat.v1.keras.layers.Average.__call__": true, + "tf.compat.v1.keras.layers.Average.__eq__": true, + "tf.compat.v1.keras.layers.Average.__ge__": true, + "tf.compat.v1.keras.layers.Average.__gt__": true, + "tf.compat.v1.keras.layers.Average.__init__": true, + "tf.compat.v1.keras.layers.Average.__le__": true, + "tf.compat.v1.keras.layers.Average.__lt__": true, + "tf.compat.v1.keras.layers.Average.__ne__": true, + "tf.compat.v1.keras.layers.Average.__new__": true, + "tf.compat.v1.keras.layers.Average.activity_regularizer": true, + "tf.compat.v1.keras.layers.Average.add_loss": true, + "tf.compat.v1.keras.layers.Average.add_metric": true, + "tf.compat.v1.keras.layers.Average.add_weight": true, + "tf.compat.v1.keras.layers.Average.build": true, + "tf.compat.v1.keras.layers.Average.call": true, + "tf.compat.v1.keras.layers.Average.compute_mask": true, + "tf.compat.v1.keras.layers.Average.compute_output_shape": true, + "tf.compat.v1.keras.layers.Average.compute_output_signature": true, + "tf.compat.v1.keras.layers.Average.count_params": true, + "tf.compat.v1.keras.layers.Average.dtype": true, + "tf.compat.v1.keras.layers.Average.dynamic": true, + "tf.compat.v1.keras.layers.Average.from_config": true, + "tf.compat.v1.keras.layers.Average.get_config": true, + "tf.compat.v1.keras.layers.Average.get_weights": true, + "tf.compat.v1.keras.layers.Average.input": true, + "tf.compat.v1.keras.layers.Average.input_spec": true, + "tf.compat.v1.keras.layers.Average.losses": true, + "tf.compat.v1.keras.layers.Average.metrics": true, + "tf.compat.v1.keras.layers.Average.name": true, + "tf.compat.v1.keras.layers.Average.name_scope": true, + "tf.compat.v1.keras.layers.Average.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Average.output": true, + "tf.compat.v1.keras.layers.Average.set_weights": true, + "tf.compat.v1.keras.layers.Average.submodules": true, + "tf.compat.v1.keras.layers.Average.trainable": true, + "tf.compat.v1.keras.layers.Average.trainable_weights": true, + "tf.compat.v1.keras.layers.Average.weights": true, + "tf.compat.v1.keras.layers.Average.with_name_scope": true, + "tf.compat.v1.keras.layers.AveragePooling1D": false, + "tf.compat.v1.keras.layers.AveragePooling1D.__call__": true, + "tf.compat.v1.keras.layers.AveragePooling1D.__eq__": true, + "tf.compat.v1.keras.layers.AveragePooling1D.__ge__": true, + "tf.compat.v1.keras.layers.AveragePooling1D.__gt__": true, + "tf.compat.v1.keras.layers.AveragePooling1D.__init__": true, + "tf.compat.v1.keras.layers.AveragePooling1D.__le__": true, + "tf.compat.v1.keras.layers.AveragePooling1D.__lt__": true, + "tf.compat.v1.keras.layers.AveragePooling1D.__ne__": true, + "tf.compat.v1.keras.layers.AveragePooling1D.__new__": true, + "tf.compat.v1.keras.layers.AveragePooling1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.AveragePooling1D.add_loss": true, + "tf.compat.v1.keras.layers.AveragePooling1D.add_metric": true, + 
"tf.compat.v1.keras.layers.AveragePooling1D.add_weight": true, + "tf.compat.v1.keras.layers.AveragePooling1D.build": true, + "tf.compat.v1.keras.layers.AveragePooling1D.call": true, + "tf.compat.v1.keras.layers.AveragePooling1D.compute_mask": true, + "tf.compat.v1.keras.layers.AveragePooling1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.AveragePooling1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.AveragePooling1D.count_params": true, + "tf.compat.v1.keras.layers.AveragePooling1D.dtype": true, + "tf.compat.v1.keras.layers.AveragePooling1D.dynamic": true, + "tf.compat.v1.keras.layers.AveragePooling1D.from_config": true, + "tf.compat.v1.keras.layers.AveragePooling1D.get_config": true, + "tf.compat.v1.keras.layers.AveragePooling1D.get_weights": true, + "tf.compat.v1.keras.layers.AveragePooling1D.input": true, + "tf.compat.v1.keras.layers.AveragePooling1D.input_spec": true, + "tf.compat.v1.keras.layers.AveragePooling1D.losses": true, + "tf.compat.v1.keras.layers.AveragePooling1D.metrics": true, + "tf.compat.v1.keras.layers.AveragePooling1D.name": true, + "tf.compat.v1.keras.layers.AveragePooling1D.name_scope": true, + "tf.compat.v1.keras.layers.AveragePooling1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.AveragePooling1D.output": true, + "tf.compat.v1.keras.layers.AveragePooling1D.set_weights": true, + "tf.compat.v1.keras.layers.AveragePooling1D.submodules": true, + "tf.compat.v1.keras.layers.AveragePooling1D.trainable": true, + "tf.compat.v1.keras.layers.AveragePooling1D.trainable_weights": true, + "tf.compat.v1.keras.layers.AveragePooling1D.weights": true, + "tf.compat.v1.keras.layers.AveragePooling1D.with_name_scope": true, + "tf.compat.v1.keras.layers.AveragePooling2D": false, + "tf.compat.v1.keras.layers.AveragePooling2D.__call__": true, + "tf.compat.v1.keras.layers.AveragePooling2D.__eq__": true, + "tf.compat.v1.keras.layers.AveragePooling2D.__ge__": true, + "tf.compat.v1.keras.layers.AveragePooling2D.__gt__": true, + "tf.compat.v1.keras.layers.AveragePooling2D.__init__": true, + "tf.compat.v1.keras.layers.AveragePooling2D.__le__": true, + "tf.compat.v1.keras.layers.AveragePooling2D.__lt__": true, + "tf.compat.v1.keras.layers.AveragePooling2D.__ne__": true, + "tf.compat.v1.keras.layers.AveragePooling2D.__new__": true, + "tf.compat.v1.keras.layers.AveragePooling2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.AveragePooling2D.add_loss": true, + "tf.compat.v1.keras.layers.AveragePooling2D.add_metric": true, + "tf.compat.v1.keras.layers.AveragePooling2D.add_weight": true, + "tf.compat.v1.keras.layers.AveragePooling2D.build": true, + "tf.compat.v1.keras.layers.AveragePooling2D.call": true, + "tf.compat.v1.keras.layers.AveragePooling2D.compute_mask": true, + "tf.compat.v1.keras.layers.AveragePooling2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.AveragePooling2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.AveragePooling2D.count_params": true, + "tf.compat.v1.keras.layers.AveragePooling2D.dtype": true, + "tf.compat.v1.keras.layers.AveragePooling2D.dynamic": true, + "tf.compat.v1.keras.layers.AveragePooling2D.from_config": true, + "tf.compat.v1.keras.layers.AveragePooling2D.get_config": true, + "tf.compat.v1.keras.layers.AveragePooling2D.get_weights": true, + "tf.compat.v1.keras.layers.AveragePooling2D.input": true, + "tf.compat.v1.keras.layers.AveragePooling2D.input_spec": true, + "tf.compat.v1.keras.layers.AveragePooling2D.losses": true, + "tf.compat.v1.keras.layers.AveragePooling2D.metrics": true, + 
"tf.compat.v1.keras.layers.AveragePooling2D.name": true, + "tf.compat.v1.keras.layers.AveragePooling2D.name_scope": true, + "tf.compat.v1.keras.layers.AveragePooling2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.AveragePooling2D.output": true, + "tf.compat.v1.keras.layers.AveragePooling2D.set_weights": true, + "tf.compat.v1.keras.layers.AveragePooling2D.submodules": true, + "tf.compat.v1.keras.layers.AveragePooling2D.trainable": true, + "tf.compat.v1.keras.layers.AveragePooling2D.trainable_weights": true, + "tf.compat.v1.keras.layers.AveragePooling2D.weights": true, + "tf.compat.v1.keras.layers.AveragePooling2D.with_name_scope": true, + "tf.compat.v1.keras.layers.AveragePooling3D": false, + "tf.compat.v1.keras.layers.AveragePooling3D.__call__": true, + "tf.compat.v1.keras.layers.AveragePooling3D.__eq__": true, + "tf.compat.v1.keras.layers.AveragePooling3D.__ge__": true, + "tf.compat.v1.keras.layers.AveragePooling3D.__gt__": true, + "tf.compat.v1.keras.layers.AveragePooling3D.__init__": true, + "tf.compat.v1.keras.layers.AveragePooling3D.__le__": true, + "tf.compat.v1.keras.layers.AveragePooling3D.__lt__": true, + "tf.compat.v1.keras.layers.AveragePooling3D.__ne__": true, + "tf.compat.v1.keras.layers.AveragePooling3D.__new__": true, + "tf.compat.v1.keras.layers.AveragePooling3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.AveragePooling3D.add_loss": true, + "tf.compat.v1.keras.layers.AveragePooling3D.add_metric": true, + "tf.compat.v1.keras.layers.AveragePooling3D.add_weight": true, + "tf.compat.v1.keras.layers.AveragePooling3D.build": true, + "tf.compat.v1.keras.layers.AveragePooling3D.call": true, + "tf.compat.v1.keras.layers.AveragePooling3D.compute_mask": true, + "tf.compat.v1.keras.layers.AveragePooling3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.AveragePooling3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.AveragePooling3D.count_params": true, + "tf.compat.v1.keras.layers.AveragePooling3D.dtype": true, + "tf.compat.v1.keras.layers.AveragePooling3D.dynamic": true, + "tf.compat.v1.keras.layers.AveragePooling3D.from_config": true, + "tf.compat.v1.keras.layers.AveragePooling3D.get_config": true, + "tf.compat.v1.keras.layers.AveragePooling3D.get_weights": true, + "tf.compat.v1.keras.layers.AveragePooling3D.input": true, + "tf.compat.v1.keras.layers.AveragePooling3D.input_spec": true, + "tf.compat.v1.keras.layers.AveragePooling3D.losses": true, + "tf.compat.v1.keras.layers.AveragePooling3D.metrics": true, + "tf.compat.v1.keras.layers.AveragePooling3D.name": true, + "tf.compat.v1.keras.layers.AveragePooling3D.name_scope": true, + "tf.compat.v1.keras.layers.AveragePooling3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.AveragePooling3D.output": true, + "tf.compat.v1.keras.layers.AveragePooling3D.set_weights": true, + "tf.compat.v1.keras.layers.AveragePooling3D.submodules": true, + "tf.compat.v1.keras.layers.AveragePooling3D.trainable": true, + "tf.compat.v1.keras.layers.AveragePooling3D.trainable_weights": true, + "tf.compat.v1.keras.layers.AveragePooling3D.weights": true, + "tf.compat.v1.keras.layers.AveragePooling3D.with_name_scope": true, + "tf.compat.v1.keras.layers.AvgPool1D": false, + "tf.compat.v1.keras.layers.AvgPool1D.__call__": true, + "tf.compat.v1.keras.layers.AvgPool1D.__eq__": true, + "tf.compat.v1.keras.layers.AvgPool1D.__ge__": true, + "tf.compat.v1.keras.layers.AvgPool1D.__gt__": true, + "tf.compat.v1.keras.layers.AvgPool1D.__init__": true, + "tf.compat.v1.keras.layers.AvgPool1D.__le__": true, + 
"tf.compat.v1.keras.layers.AvgPool1D.__lt__": true, + "tf.compat.v1.keras.layers.AvgPool1D.__ne__": true, + "tf.compat.v1.keras.layers.AvgPool1D.__new__": true, + "tf.compat.v1.keras.layers.AvgPool1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.AvgPool1D.add_loss": true, + "tf.compat.v1.keras.layers.AvgPool1D.add_metric": true, + "tf.compat.v1.keras.layers.AvgPool1D.add_weight": true, + "tf.compat.v1.keras.layers.AvgPool1D.build": true, + "tf.compat.v1.keras.layers.AvgPool1D.call": true, + "tf.compat.v1.keras.layers.AvgPool1D.compute_mask": true, + "tf.compat.v1.keras.layers.AvgPool1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.AvgPool1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.AvgPool1D.count_params": true, + "tf.compat.v1.keras.layers.AvgPool1D.dtype": true, + "tf.compat.v1.keras.layers.AvgPool1D.dynamic": true, + "tf.compat.v1.keras.layers.AvgPool1D.from_config": true, + "tf.compat.v1.keras.layers.AvgPool1D.get_config": true, + "tf.compat.v1.keras.layers.AvgPool1D.get_weights": true, + "tf.compat.v1.keras.layers.AvgPool1D.input": true, + "tf.compat.v1.keras.layers.AvgPool1D.input_spec": true, + "tf.compat.v1.keras.layers.AvgPool1D.losses": true, + "tf.compat.v1.keras.layers.AvgPool1D.metrics": true, + "tf.compat.v1.keras.layers.AvgPool1D.name": true, + "tf.compat.v1.keras.layers.AvgPool1D.name_scope": true, + "tf.compat.v1.keras.layers.AvgPool1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.AvgPool1D.output": true, + "tf.compat.v1.keras.layers.AvgPool1D.set_weights": true, + "tf.compat.v1.keras.layers.AvgPool1D.submodules": true, + "tf.compat.v1.keras.layers.AvgPool1D.trainable": true, + "tf.compat.v1.keras.layers.AvgPool1D.trainable_weights": true, + "tf.compat.v1.keras.layers.AvgPool1D.weights": true, + "tf.compat.v1.keras.layers.AvgPool1D.with_name_scope": true, + "tf.compat.v1.keras.layers.AvgPool2D": false, + "tf.compat.v1.keras.layers.AvgPool2D.__call__": true, + "tf.compat.v1.keras.layers.AvgPool2D.__eq__": true, + "tf.compat.v1.keras.layers.AvgPool2D.__ge__": true, + "tf.compat.v1.keras.layers.AvgPool2D.__gt__": true, + "tf.compat.v1.keras.layers.AvgPool2D.__init__": true, + "tf.compat.v1.keras.layers.AvgPool2D.__le__": true, + "tf.compat.v1.keras.layers.AvgPool2D.__lt__": true, + "tf.compat.v1.keras.layers.AvgPool2D.__ne__": true, + "tf.compat.v1.keras.layers.AvgPool2D.__new__": true, + "tf.compat.v1.keras.layers.AvgPool2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.AvgPool2D.add_loss": true, + "tf.compat.v1.keras.layers.AvgPool2D.add_metric": true, + "tf.compat.v1.keras.layers.AvgPool2D.add_weight": true, + "tf.compat.v1.keras.layers.AvgPool2D.build": true, + "tf.compat.v1.keras.layers.AvgPool2D.call": true, + "tf.compat.v1.keras.layers.AvgPool2D.compute_mask": true, + "tf.compat.v1.keras.layers.AvgPool2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.AvgPool2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.AvgPool2D.count_params": true, + "tf.compat.v1.keras.layers.AvgPool2D.dtype": true, + "tf.compat.v1.keras.layers.AvgPool2D.dynamic": true, + "tf.compat.v1.keras.layers.AvgPool2D.from_config": true, + "tf.compat.v1.keras.layers.AvgPool2D.get_config": true, + "tf.compat.v1.keras.layers.AvgPool2D.get_weights": true, + "tf.compat.v1.keras.layers.AvgPool2D.input": true, + "tf.compat.v1.keras.layers.AvgPool2D.input_spec": true, + "tf.compat.v1.keras.layers.AvgPool2D.losses": true, + "tf.compat.v1.keras.layers.AvgPool2D.metrics": true, + "tf.compat.v1.keras.layers.AvgPool2D.name": 
true, + "tf.compat.v1.keras.layers.AvgPool2D.name_scope": true, + "tf.compat.v1.keras.layers.AvgPool2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.AvgPool2D.output": true, + "tf.compat.v1.keras.layers.AvgPool2D.set_weights": true, + "tf.compat.v1.keras.layers.AvgPool2D.submodules": true, + "tf.compat.v1.keras.layers.AvgPool2D.trainable": true, + "tf.compat.v1.keras.layers.AvgPool2D.trainable_weights": true, + "tf.compat.v1.keras.layers.AvgPool2D.weights": true, + "tf.compat.v1.keras.layers.AvgPool2D.with_name_scope": true, + "tf.compat.v1.keras.layers.AvgPool3D": false, + "tf.compat.v1.keras.layers.AvgPool3D.__call__": true, + "tf.compat.v1.keras.layers.AvgPool3D.__eq__": true, + "tf.compat.v1.keras.layers.AvgPool3D.__ge__": true, + "tf.compat.v1.keras.layers.AvgPool3D.__gt__": true, + "tf.compat.v1.keras.layers.AvgPool3D.__init__": true, + "tf.compat.v1.keras.layers.AvgPool3D.__le__": true, + "tf.compat.v1.keras.layers.AvgPool3D.__lt__": true, + "tf.compat.v1.keras.layers.AvgPool3D.__ne__": true, + "tf.compat.v1.keras.layers.AvgPool3D.__new__": true, + "tf.compat.v1.keras.layers.AvgPool3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.AvgPool3D.add_loss": true, + "tf.compat.v1.keras.layers.AvgPool3D.add_metric": true, + "tf.compat.v1.keras.layers.AvgPool3D.add_weight": true, + "tf.compat.v1.keras.layers.AvgPool3D.build": true, + "tf.compat.v1.keras.layers.AvgPool3D.call": true, + "tf.compat.v1.keras.layers.AvgPool3D.compute_mask": true, + "tf.compat.v1.keras.layers.AvgPool3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.AvgPool3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.AvgPool3D.count_params": true, + "tf.compat.v1.keras.layers.AvgPool3D.dtype": true, + "tf.compat.v1.keras.layers.AvgPool3D.dynamic": true, + "tf.compat.v1.keras.layers.AvgPool3D.from_config": true, + "tf.compat.v1.keras.layers.AvgPool3D.get_config": true, + "tf.compat.v1.keras.layers.AvgPool3D.get_weights": true, + "tf.compat.v1.keras.layers.AvgPool3D.input": true, + "tf.compat.v1.keras.layers.AvgPool3D.input_spec": true, + "tf.compat.v1.keras.layers.AvgPool3D.losses": true, + "tf.compat.v1.keras.layers.AvgPool3D.metrics": true, + "tf.compat.v1.keras.layers.AvgPool3D.name": true, + "tf.compat.v1.keras.layers.AvgPool3D.name_scope": true, + "tf.compat.v1.keras.layers.AvgPool3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.AvgPool3D.output": true, + "tf.compat.v1.keras.layers.AvgPool3D.set_weights": true, + "tf.compat.v1.keras.layers.AvgPool3D.submodules": true, + "tf.compat.v1.keras.layers.AvgPool3D.trainable": true, + "tf.compat.v1.keras.layers.AvgPool3D.trainable_weights": true, + "tf.compat.v1.keras.layers.AvgPool3D.weights": true, + "tf.compat.v1.keras.layers.AvgPool3D.with_name_scope": true, + "tf.compat.v1.keras.layers.BatchNormalization": false, + "tf.compat.v1.keras.layers.BatchNormalization.__call__": true, + "tf.compat.v1.keras.layers.BatchNormalization.__eq__": true, + "tf.compat.v1.keras.layers.BatchNormalization.__ge__": true, + "tf.compat.v1.keras.layers.BatchNormalization.__gt__": true, + "tf.compat.v1.keras.layers.BatchNormalization.__init__": true, + "tf.compat.v1.keras.layers.BatchNormalization.__le__": true, + "tf.compat.v1.keras.layers.BatchNormalization.__lt__": true, + "tf.compat.v1.keras.layers.BatchNormalization.__ne__": true, + "tf.compat.v1.keras.layers.BatchNormalization.__new__": true, + "tf.compat.v1.keras.layers.BatchNormalization.activity_regularizer": true, + "tf.compat.v1.keras.layers.BatchNormalization.add_loss": true, + 
"tf.compat.v1.keras.layers.BatchNormalization.add_metric": true, + "tf.compat.v1.keras.layers.BatchNormalization.add_weight": true, + "tf.compat.v1.keras.layers.BatchNormalization.build": true, + "tf.compat.v1.keras.layers.BatchNormalization.call": true, + "tf.compat.v1.keras.layers.BatchNormalization.compute_mask": true, + "tf.compat.v1.keras.layers.BatchNormalization.compute_output_shape": true, + "tf.compat.v1.keras.layers.BatchNormalization.compute_output_signature": true, + "tf.compat.v1.keras.layers.BatchNormalization.count_params": true, + "tf.compat.v1.keras.layers.BatchNormalization.dtype": true, + "tf.compat.v1.keras.layers.BatchNormalization.dynamic": true, + "tf.compat.v1.keras.layers.BatchNormalization.from_config": true, + "tf.compat.v1.keras.layers.BatchNormalization.get_config": true, + "tf.compat.v1.keras.layers.BatchNormalization.get_weights": true, + "tf.compat.v1.keras.layers.BatchNormalization.input": true, + "tf.compat.v1.keras.layers.BatchNormalization.input_spec": true, + "tf.compat.v1.keras.layers.BatchNormalization.losses": true, + "tf.compat.v1.keras.layers.BatchNormalization.metrics": true, + "tf.compat.v1.keras.layers.BatchNormalization.name": true, + "tf.compat.v1.keras.layers.BatchNormalization.name_scope": true, + "tf.compat.v1.keras.layers.BatchNormalization.non_trainable_weights": true, + "tf.compat.v1.keras.layers.BatchNormalization.output": true, + "tf.compat.v1.keras.layers.BatchNormalization.set_weights": true, + "tf.compat.v1.keras.layers.BatchNormalization.submodules": true, + "tf.compat.v1.keras.layers.BatchNormalization.trainable": true, + "tf.compat.v1.keras.layers.BatchNormalization.trainable_weights": true, + "tf.compat.v1.keras.layers.BatchNormalization.weights": true, + "tf.compat.v1.keras.layers.BatchNormalization.with_name_scope": true, + "tf.compat.v1.keras.layers.Bidirectional": false, + "tf.compat.v1.keras.layers.Bidirectional.__call__": true, + "tf.compat.v1.keras.layers.Bidirectional.__eq__": true, + "tf.compat.v1.keras.layers.Bidirectional.__ge__": true, + "tf.compat.v1.keras.layers.Bidirectional.__gt__": true, + "tf.compat.v1.keras.layers.Bidirectional.__init__": true, + "tf.compat.v1.keras.layers.Bidirectional.__le__": true, + "tf.compat.v1.keras.layers.Bidirectional.__lt__": true, + "tf.compat.v1.keras.layers.Bidirectional.__ne__": true, + "tf.compat.v1.keras.layers.Bidirectional.__new__": true, + "tf.compat.v1.keras.layers.Bidirectional.activity_regularizer": true, + "tf.compat.v1.keras.layers.Bidirectional.add_loss": true, + "tf.compat.v1.keras.layers.Bidirectional.add_metric": true, + "tf.compat.v1.keras.layers.Bidirectional.add_weight": true, + "tf.compat.v1.keras.layers.Bidirectional.build": true, + "tf.compat.v1.keras.layers.Bidirectional.call": true, + "tf.compat.v1.keras.layers.Bidirectional.compute_mask": true, + "tf.compat.v1.keras.layers.Bidirectional.compute_output_shape": true, + "tf.compat.v1.keras.layers.Bidirectional.compute_output_signature": true, + "tf.compat.v1.keras.layers.Bidirectional.constraints": true, + "tf.compat.v1.keras.layers.Bidirectional.count_params": true, + "tf.compat.v1.keras.layers.Bidirectional.dtype": true, + "tf.compat.v1.keras.layers.Bidirectional.dynamic": true, + "tf.compat.v1.keras.layers.Bidirectional.from_config": true, + "tf.compat.v1.keras.layers.Bidirectional.get_config": true, + "tf.compat.v1.keras.layers.Bidirectional.get_weights": true, + "tf.compat.v1.keras.layers.Bidirectional.input": true, + "tf.compat.v1.keras.layers.Bidirectional.input_spec": true, + 
"tf.compat.v1.keras.layers.Bidirectional.losses": true, + "tf.compat.v1.keras.layers.Bidirectional.metrics": true, + "tf.compat.v1.keras.layers.Bidirectional.name": true, + "tf.compat.v1.keras.layers.Bidirectional.name_scope": true, + "tf.compat.v1.keras.layers.Bidirectional.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Bidirectional.output": true, + "tf.compat.v1.keras.layers.Bidirectional.reset_states": true, + "tf.compat.v1.keras.layers.Bidirectional.set_weights": true, + "tf.compat.v1.keras.layers.Bidirectional.submodules": true, + "tf.compat.v1.keras.layers.Bidirectional.trainable": true, + "tf.compat.v1.keras.layers.Bidirectional.trainable_weights": true, + "tf.compat.v1.keras.layers.Bidirectional.weights": true, + "tf.compat.v1.keras.layers.Bidirectional.with_name_scope": true, + "tf.compat.v1.keras.layers.Concatenate": false, + "tf.compat.v1.keras.layers.Concatenate.__call__": true, + "tf.compat.v1.keras.layers.Concatenate.__eq__": true, + "tf.compat.v1.keras.layers.Concatenate.__ge__": true, + "tf.compat.v1.keras.layers.Concatenate.__gt__": true, + "tf.compat.v1.keras.layers.Concatenate.__init__": true, + "tf.compat.v1.keras.layers.Concatenate.__le__": true, + "tf.compat.v1.keras.layers.Concatenate.__lt__": true, + "tf.compat.v1.keras.layers.Concatenate.__ne__": true, + "tf.compat.v1.keras.layers.Concatenate.__new__": true, + "tf.compat.v1.keras.layers.Concatenate.activity_regularizer": true, + "tf.compat.v1.keras.layers.Concatenate.add_loss": true, + "tf.compat.v1.keras.layers.Concatenate.add_metric": true, + "tf.compat.v1.keras.layers.Concatenate.add_weight": true, + "tf.compat.v1.keras.layers.Concatenate.build": true, + "tf.compat.v1.keras.layers.Concatenate.call": true, + "tf.compat.v1.keras.layers.Concatenate.compute_mask": true, + "tf.compat.v1.keras.layers.Concatenate.compute_output_shape": true, + "tf.compat.v1.keras.layers.Concatenate.compute_output_signature": true, + "tf.compat.v1.keras.layers.Concatenate.count_params": true, + "tf.compat.v1.keras.layers.Concatenate.dtype": true, + "tf.compat.v1.keras.layers.Concatenate.dynamic": true, + "tf.compat.v1.keras.layers.Concatenate.from_config": true, + "tf.compat.v1.keras.layers.Concatenate.get_config": true, + "tf.compat.v1.keras.layers.Concatenate.get_weights": true, + "tf.compat.v1.keras.layers.Concatenate.input": true, + "tf.compat.v1.keras.layers.Concatenate.input_spec": true, + "tf.compat.v1.keras.layers.Concatenate.losses": true, + "tf.compat.v1.keras.layers.Concatenate.metrics": true, + "tf.compat.v1.keras.layers.Concatenate.name": true, + "tf.compat.v1.keras.layers.Concatenate.name_scope": true, + "tf.compat.v1.keras.layers.Concatenate.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Concatenate.output": true, + "tf.compat.v1.keras.layers.Concatenate.set_weights": true, + "tf.compat.v1.keras.layers.Concatenate.submodules": true, + "tf.compat.v1.keras.layers.Concatenate.trainable": true, + "tf.compat.v1.keras.layers.Concatenate.trainable_weights": true, + "tf.compat.v1.keras.layers.Concatenate.weights": true, + "tf.compat.v1.keras.layers.Concatenate.with_name_scope": true, + "tf.compat.v1.keras.layers.Conv1D": false, + "tf.compat.v1.keras.layers.Conv1D.__call__": true, + "tf.compat.v1.keras.layers.Conv1D.__eq__": true, + "tf.compat.v1.keras.layers.Conv1D.__ge__": true, + "tf.compat.v1.keras.layers.Conv1D.__gt__": true, + "tf.compat.v1.keras.layers.Conv1D.__init__": true, + "tf.compat.v1.keras.layers.Conv1D.__le__": true, + "tf.compat.v1.keras.layers.Conv1D.__lt__": true, + 
"tf.compat.v1.keras.layers.Conv1D.__ne__": true, + "tf.compat.v1.keras.layers.Conv1D.__new__": true, + "tf.compat.v1.keras.layers.Conv1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.Conv1D.add_loss": true, + "tf.compat.v1.keras.layers.Conv1D.add_metric": true, + "tf.compat.v1.keras.layers.Conv1D.add_weight": true, + "tf.compat.v1.keras.layers.Conv1D.build": true, + "tf.compat.v1.keras.layers.Conv1D.call": true, + "tf.compat.v1.keras.layers.Conv1D.compute_mask": true, + "tf.compat.v1.keras.layers.Conv1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.Conv1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.Conv1D.count_params": true, + "tf.compat.v1.keras.layers.Conv1D.dtype": true, + "tf.compat.v1.keras.layers.Conv1D.dynamic": true, + "tf.compat.v1.keras.layers.Conv1D.from_config": true, + "tf.compat.v1.keras.layers.Conv1D.get_config": true, + "tf.compat.v1.keras.layers.Conv1D.get_weights": true, + "tf.compat.v1.keras.layers.Conv1D.input": true, + "tf.compat.v1.keras.layers.Conv1D.input_spec": true, + "tf.compat.v1.keras.layers.Conv1D.losses": true, + "tf.compat.v1.keras.layers.Conv1D.metrics": true, + "tf.compat.v1.keras.layers.Conv1D.name": true, + "tf.compat.v1.keras.layers.Conv1D.name_scope": true, + "tf.compat.v1.keras.layers.Conv1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Conv1D.output": true, + "tf.compat.v1.keras.layers.Conv1D.set_weights": true, + "tf.compat.v1.keras.layers.Conv1D.submodules": true, + "tf.compat.v1.keras.layers.Conv1D.trainable": true, + "tf.compat.v1.keras.layers.Conv1D.trainable_weights": true, + "tf.compat.v1.keras.layers.Conv1D.weights": true, + "tf.compat.v1.keras.layers.Conv1D.with_name_scope": true, + "tf.compat.v1.keras.layers.Conv2D": false, + "tf.compat.v1.keras.layers.Conv2D.__call__": true, + "tf.compat.v1.keras.layers.Conv2D.__eq__": true, + "tf.compat.v1.keras.layers.Conv2D.__ge__": true, + "tf.compat.v1.keras.layers.Conv2D.__gt__": true, + "tf.compat.v1.keras.layers.Conv2D.__init__": true, + "tf.compat.v1.keras.layers.Conv2D.__le__": true, + "tf.compat.v1.keras.layers.Conv2D.__lt__": true, + "tf.compat.v1.keras.layers.Conv2D.__ne__": true, + "tf.compat.v1.keras.layers.Conv2D.__new__": true, + "tf.compat.v1.keras.layers.Conv2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.Conv2D.add_loss": true, + "tf.compat.v1.keras.layers.Conv2D.add_metric": true, + "tf.compat.v1.keras.layers.Conv2D.add_weight": true, + "tf.compat.v1.keras.layers.Conv2D.build": true, + "tf.compat.v1.keras.layers.Conv2D.call": true, + "tf.compat.v1.keras.layers.Conv2D.compute_mask": true, + "tf.compat.v1.keras.layers.Conv2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.Conv2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.Conv2D.count_params": true, + "tf.compat.v1.keras.layers.Conv2D.dtype": true, + "tf.compat.v1.keras.layers.Conv2D.dynamic": true, + "tf.compat.v1.keras.layers.Conv2D.from_config": true, + "tf.compat.v1.keras.layers.Conv2D.get_config": true, + "tf.compat.v1.keras.layers.Conv2D.get_weights": true, + "tf.compat.v1.keras.layers.Conv2D.input": true, + "tf.compat.v1.keras.layers.Conv2D.input_spec": true, + "tf.compat.v1.keras.layers.Conv2D.losses": true, + "tf.compat.v1.keras.layers.Conv2D.metrics": true, + "tf.compat.v1.keras.layers.Conv2D.name": true, + "tf.compat.v1.keras.layers.Conv2D.name_scope": true, + "tf.compat.v1.keras.layers.Conv2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Conv2D.output": true, + "tf.compat.v1.keras.layers.Conv2D.set_weights": true, + 
"tf.compat.v1.keras.layers.Conv2D.submodules": true, + "tf.compat.v1.keras.layers.Conv2D.trainable": true, + "tf.compat.v1.keras.layers.Conv2D.trainable_weights": true, + "tf.compat.v1.keras.layers.Conv2D.weights": true, + "tf.compat.v1.keras.layers.Conv2D.with_name_scope": true, + "tf.compat.v1.keras.layers.Conv2DTranspose": false, + "tf.compat.v1.keras.layers.Conv2DTranspose.__call__": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.__eq__": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.__ge__": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.__gt__": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.__init__": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.__le__": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.__lt__": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.__ne__": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.__new__": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.activity_regularizer": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.add_loss": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.add_metric": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.add_weight": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.build": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.call": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.compute_mask": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.compute_output_shape": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.compute_output_signature": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.count_params": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.dtype": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.dynamic": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.from_config": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.get_config": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.get_weights": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.input": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.input_spec": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.losses": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.metrics": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.name": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.name_scope": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.output": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.set_weights": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.submodules": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.trainable": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.trainable_weights": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.weights": true, + "tf.compat.v1.keras.layers.Conv2DTranspose.with_name_scope": true, + "tf.compat.v1.keras.layers.Conv3D": false, + "tf.compat.v1.keras.layers.Conv3D.__call__": true, + "tf.compat.v1.keras.layers.Conv3D.__eq__": true, + "tf.compat.v1.keras.layers.Conv3D.__ge__": true, + "tf.compat.v1.keras.layers.Conv3D.__gt__": true, + "tf.compat.v1.keras.layers.Conv3D.__init__": true, + "tf.compat.v1.keras.layers.Conv3D.__le__": true, + "tf.compat.v1.keras.layers.Conv3D.__lt__": true, + "tf.compat.v1.keras.layers.Conv3D.__ne__": true, + "tf.compat.v1.keras.layers.Conv3D.__new__": true, + "tf.compat.v1.keras.layers.Conv3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.Conv3D.add_loss": true, + "tf.compat.v1.keras.layers.Conv3D.add_metric": true, + "tf.compat.v1.keras.layers.Conv3D.add_weight": true, + "tf.compat.v1.keras.layers.Conv3D.build": true, + 
"tf.compat.v1.keras.layers.Conv3D.call": true, + "tf.compat.v1.keras.layers.Conv3D.compute_mask": true, + "tf.compat.v1.keras.layers.Conv3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.Conv3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.Conv3D.count_params": true, + "tf.compat.v1.keras.layers.Conv3D.dtype": true, + "tf.compat.v1.keras.layers.Conv3D.dynamic": true, + "tf.compat.v1.keras.layers.Conv3D.from_config": true, + "tf.compat.v1.keras.layers.Conv3D.get_config": true, + "tf.compat.v1.keras.layers.Conv3D.get_weights": true, + "tf.compat.v1.keras.layers.Conv3D.input": true, + "tf.compat.v1.keras.layers.Conv3D.input_spec": true, + "tf.compat.v1.keras.layers.Conv3D.losses": true, + "tf.compat.v1.keras.layers.Conv3D.metrics": true, + "tf.compat.v1.keras.layers.Conv3D.name": true, + "tf.compat.v1.keras.layers.Conv3D.name_scope": true, + "tf.compat.v1.keras.layers.Conv3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Conv3D.output": true, + "tf.compat.v1.keras.layers.Conv3D.set_weights": true, + "tf.compat.v1.keras.layers.Conv3D.submodules": true, + "tf.compat.v1.keras.layers.Conv3D.trainable": true, + "tf.compat.v1.keras.layers.Conv3D.trainable_weights": true, + "tf.compat.v1.keras.layers.Conv3D.weights": true, + "tf.compat.v1.keras.layers.Conv3D.with_name_scope": true, + "tf.compat.v1.keras.layers.Conv3DTranspose": false, + "tf.compat.v1.keras.layers.Conv3DTranspose.__call__": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.__eq__": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.__ge__": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.__gt__": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.__init__": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.__le__": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.__lt__": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.__ne__": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.__new__": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.activity_regularizer": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.add_loss": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.add_metric": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.add_weight": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.build": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.call": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.compute_mask": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.compute_output_shape": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.compute_output_signature": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.count_params": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.dtype": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.dynamic": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.from_config": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.get_config": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.get_weights": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.input": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.input_spec": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.losses": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.metrics": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.name": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.name_scope": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.output": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.set_weights": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.submodules": true, + 
"tf.compat.v1.keras.layers.Conv3DTranspose.trainable": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.trainable_weights": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.weights": true, + "tf.compat.v1.keras.layers.Conv3DTranspose.with_name_scope": true, + "tf.compat.v1.keras.layers.ConvLSTM2D": false, + "tf.compat.v1.keras.layers.ConvLSTM2D.__call__": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.__eq__": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.__ge__": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.__gt__": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.__init__": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.__le__": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.__lt__": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.__ne__": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.__new__": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.activation": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.add_loss": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.add_metric": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.add_weight": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.bias_constraint": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.bias_initializer": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.bias_regularizer": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.build": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.call": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.compute_mask": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.count_params": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.data_format": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.dilation_rate": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.dropout": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.dtype": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.dynamic": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.filters": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.from_config": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.get_config": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.get_initial_state": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.get_weights": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.input": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.input_spec": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.kernel_constraint": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.kernel_initializer": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.kernel_regularizer": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.kernel_size": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.losses": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.metrics": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.name": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.name_scope": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.output": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.padding": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.recurrent_activation": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.recurrent_constraint": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.recurrent_dropout": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.recurrent_initializer": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.recurrent_regularizer": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.reset_states": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.set_weights": true, + 
"tf.compat.v1.keras.layers.ConvLSTM2D.states": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.strides": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.submodules": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.trainable": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.trainable_weights": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.unit_forget_bias": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.use_bias": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.weights": true, + "tf.compat.v1.keras.layers.ConvLSTM2D.with_name_scope": true, + "tf.compat.v1.keras.layers.Convolution1D": false, + "tf.compat.v1.keras.layers.Convolution1D.__call__": true, + "tf.compat.v1.keras.layers.Convolution1D.__eq__": true, + "tf.compat.v1.keras.layers.Convolution1D.__ge__": true, + "tf.compat.v1.keras.layers.Convolution1D.__gt__": true, + "tf.compat.v1.keras.layers.Convolution1D.__init__": true, + "tf.compat.v1.keras.layers.Convolution1D.__le__": true, + "tf.compat.v1.keras.layers.Convolution1D.__lt__": true, + "tf.compat.v1.keras.layers.Convolution1D.__ne__": true, + "tf.compat.v1.keras.layers.Convolution1D.__new__": true, + "tf.compat.v1.keras.layers.Convolution1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.Convolution1D.add_loss": true, + "tf.compat.v1.keras.layers.Convolution1D.add_metric": true, + "tf.compat.v1.keras.layers.Convolution1D.add_weight": true, + "tf.compat.v1.keras.layers.Convolution1D.build": true, + "tf.compat.v1.keras.layers.Convolution1D.call": true, + "tf.compat.v1.keras.layers.Convolution1D.compute_mask": true, + "tf.compat.v1.keras.layers.Convolution1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.Convolution1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.Convolution1D.count_params": true, + "tf.compat.v1.keras.layers.Convolution1D.dtype": true, + "tf.compat.v1.keras.layers.Convolution1D.dynamic": true, + "tf.compat.v1.keras.layers.Convolution1D.from_config": true, + "tf.compat.v1.keras.layers.Convolution1D.get_config": true, + "tf.compat.v1.keras.layers.Convolution1D.get_weights": true, + "tf.compat.v1.keras.layers.Convolution1D.input": true, + "tf.compat.v1.keras.layers.Convolution1D.input_spec": true, + "tf.compat.v1.keras.layers.Convolution1D.losses": true, + "tf.compat.v1.keras.layers.Convolution1D.metrics": true, + "tf.compat.v1.keras.layers.Convolution1D.name": true, + "tf.compat.v1.keras.layers.Convolution1D.name_scope": true, + "tf.compat.v1.keras.layers.Convolution1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Convolution1D.output": true, + "tf.compat.v1.keras.layers.Convolution1D.set_weights": true, + "tf.compat.v1.keras.layers.Convolution1D.submodules": true, + "tf.compat.v1.keras.layers.Convolution1D.trainable": true, + "tf.compat.v1.keras.layers.Convolution1D.trainable_weights": true, + "tf.compat.v1.keras.layers.Convolution1D.weights": true, + "tf.compat.v1.keras.layers.Convolution1D.with_name_scope": true, + "tf.compat.v1.keras.layers.Convolution2D": false, + "tf.compat.v1.keras.layers.Convolution2D.__call__": true, + "tf.compat.v1.keras.layers.Convolution2D.__eq__": true, + "tf.compat.v1.keras.layers.Convolution2D.__ge__": true, + "tf.compat.v1.keras.layers.Convolution2D.__gt__": true, + "tf.compat.v1.keras.layers.Convolution2D.__init__": true, + "tf.compat.v1.keras.layers.Convolution2D.__le__": true, + "tf.compat.v1.keras.layers.Convolution2D.__lt__": true, + "tf.compat.v1.keras.layers.Convolution2D.__ne__": true, + "tf.compat.v1.keras.layers.Convolution2D.__new__": true, + 
"tf.compat.v1.keras.layers.Convolution2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.Convolution2D.add_loss": true, + "tf.compat.v1.keras.layers.Convolution2D.add_metric": true, + "tf.compat.v1.keras.layers.Convolution2D.add_weight": true, + "tf.compat.v1.keras.layers.Convolution2D.build": true, + "tf.compat.v1.keras.layers.Convolution2D.call": true, + "tf.compat.v1.keras.layers.Convolution2D.compute_mask": true, + "tf.compat.v1.keras.layers.Convolution2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.Convolution2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.Convolution2D.count_params": true, + "tf.compat.v1.keras.layers.Convolution2D.dtype": true, + "tf.compat.v1.keras.layers.Convolution2D.dynamic": true, + "tf.compat.v1.keras.layers.Convolution2D.from_config": true, + "tf.compat.v1.keras.layers.Convolution2D.get_config": true, + "tf.compat.v1.keras.layers.Convolution2D.get_weights": true, + "tf.compat.v1.keras.layers.Convolution2D.input": true, + "tf.compat.v1.keras.layers.Convolution2D.input_spec": true, + "tf.compat.v1.keras.layers.Convolution2D.losses": true, + "tf.compat.v1.keras.layers.Convolution2D.metrics": true, + "tf.compat.v1.keras.layers.Convolution2D.name": true, + "tf.compat.v1.keras.layers.Convolution2D.name_scope": true, + "tf.compat.v1.keras.layers.Convolution2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Convolution2D.output": true, + "tf.compat.v1.keras.layers.Convolution2D.set_weights": true, + "tf.compat.v1.keras.layers.Convolution2D.submodules": true, + "tf.compat.v1.keras.layers.Convolution2D.trainable": true, + "tf.compat.v1.keras.layers.Convolution2D.trainable_weights": true, + "tf.compat.v1.keras.layers.Convolution2D.weights": true, + "tf.compat.v1.keras.layers.Convolution2D.with_name_scope": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose": false, + "tf.compat.v1.keras.layers.Convolution2DTranspose.__call__": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.__eq__": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.__ge__": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.__gt__": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.__init__": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.__le__": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.__lt__": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.__ne__": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.__new__": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.activity_regularizer": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.add_loss": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.add_metric": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.add_weight": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.build": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.call": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.compute_mask": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.compute_output_shape": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.compute_output_signature": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.count_params": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.dtype": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.dynamic": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.from_config": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.get_config": true, + 
"tf.compat.v1.keras.layers.Convolution2DTranspose.get_weights": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.input": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.input_spec": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.losses": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.metrics": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.name": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.name_scope": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.output": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.set_weights": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.submodules": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.trainable": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.trainable_weights": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.weights": true, + "tf.compat.v1.keras.layers.Convolution2DTranspose.with_name_scope": true, + "tf.compat.v1.keras.layers.Convolution3D": false, + "tf.compat.v1.keras.layers.Convolution3D.__call__": true, + "tf.compat.v1.keras.layers.Convolution3D.__eq__": true, + "tf.compat.v1.keras.layers.Convolution3D.__ge__": true, + "tf.compat.v1.keras.layers.Convolution3D.__gt__": true, + "tf.compat.v1.keras.layers.Convolution3D.__init__": true, + "tf.compat.v1.keras.layers.Convolution3D.__le__": true, + "tf.compat.v1.keras.layers.Convolution3D.__lt__": true, + "tf.compat.v1.keras.layers.Convolution3D.__ne__": true, + "tf.compat.v1.keras.layers.Convolution3D.__new__": true, + "tf.compat.v1.keras.layers.Convolution3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.Convolution3D.add_loss": true, + "tf.compat.v1.keras.layers.Convolution3D.add_metric": true, + "tf.compat.v1.keras.layers.Convolution3D.add_weight": true, + "tf.compat.v1.keras.layers.Convolution3D.build": true, + "tf.compat.v1.keras.layers.Convolution3D.call": true, + "tf.compat.v1.keras.layers.Convolution3D.compute_mask": true, + "tf.compat.v1.keras.layers.Convolution3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.Convolution3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.Convolution3D.count_params": true, + "tf.compat.v1.keras.layers.Convolution3D.dtype": true, + "tf.compat.v1.keras.layers.Convolution3D.dynamic": true, + "tf.compat.v1.keras.layers.Convolution3D.from_config": true, + "tf.compat.v1.keras.layers.Convolution3D.get_config": true, + "tf.compat.v1.keras.layers.Convolution3D.get_weights": true, + "tf.compat.v1.keras.layers.Convolution3D.input": true, + "tf.compat.v1.keras.layers.Convolution3D.input_spec": true, + "tf.compat.v1.keras.layers.Convolution3D.losses": true, + "tf.compat.v1.keras.layers.Convolution3D.metrics": true, + "tf.compat.v1.keras.layers.Convolution3D.name": true, + "tf.compat.v1.keras.layers.Convolution3D.name_scope": true, + "tf.compat.v1.keras.layers.Convolution3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Convolution3D.output": true, + "tf.compat.v1.keras.layers.Convolution3D.set_weights": true, + "tf.compat.v1.keras.layers.Convolution3D.submodules": true, + "tf.compat.v1.keras.layers.Convolution3D.trainable": true, + "tf.compat.v1.keras.layers.Convolution3D.trainable_weights": true, + "tf.compat.v1.keras.layers.Convolution3D.weights": true, + "tf.compat.v1.keras.layers.Convolution3D.with_name_scope": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose": false, + 
"tf.compat.v1.keras.layers.Convolution3DTranspose.__call__": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.__eq__": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.__ge__": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.__gt__": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.__init__": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.__le__": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.__lt__": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.__ne__": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.__new__": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.activity_regularizer": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.add_loss": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.add_metric": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.add_weight": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.build": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.call": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.compute_mask": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.compute_output_shape": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.compute_output_signature": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.count_params": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.dtype": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.dynamic": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.from_config": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.get_config": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.get_weights": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.input": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.input_spec": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.losses": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.metrics": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.name": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.name_scope": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.output": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.set_weights": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.submodules": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.trainable": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.trainable_weights": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.weights": true, + "tf.compat.v1.keras.layers.Convolution3DTranspose.with_name_scope": true, + "tf.compat.v1.keras.layers.Cropping1D": false, + "tf.compat.v1.keras.layers.Cropping1D.__call__": true, + "tf.compat.v1.keras.layers.Cropping1D.__eq__": true, + "tf.compat.v1.keras.layers.Cropping1D.__ge__": true, + "tf.compat.v1.keras.layers.Cropping1D.__gt__": true, + "tf.compat.v1.keras.layers.Cropping1D.__init__": true, + "tf.compat.v1.keras.layers.Cropping1D.__le__": true, + "tf.compat.v1.keras.layers.Cropping1D.__lt__": true, + "tf.compat.v1.keras.layers.Cropping1D.__ne__": true, + "tf.compat.v1.keras.layers.Cropping1D.__new__": true, + "tf.compat.v1.keras.layers.Cropping1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.Cropping1D.add_loss": true, + "tf.compat.v1.keras.layers.Cropping1D.add_metric": true, + "tf.compat.v1.keras.layers.Cropping1D.add_weight": true, + "tf.compat.v1.keras.layers.Cropping1D.build": true, + 
"tf.compat.v1.keras.layers.Cropping1D.call": true, + "tf.compat.v1.keras.layers.Cropping1D.compute_mask": true, + "tf.compat.v1.keras.layers.Cropping1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.Cropping1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.Cropping1D.count_params": true, + "tf.compat.v1.keras.layers.Cropping1D.dtype": true, + "tf.compat.v1.keras.layers.Cropping1D.dynamic": true, + "tf.compat.v1.keras.layers.Cropping1D.from_config": true, + "tf.compat.v1.keras.layers.Cropping1D.get_config": true, + "tf.compat.v1.keras.layers.Cropping1D.get_weights": true, + "tf.compat.v1.keras.layers.Cropping1D.input": true, + "tf.compat.v1.keras.layers.Cropping1D.input_spec": true, + "tf.compat.v1.keras.layers.Cropping1D.losses": true, + "tf.compat.v1.keras.layers.Cropping1D.metrics": true, + "tf.compat.v1.keras.layers.Cropping1D.name": true, + "tf.compat.v1.keras.layers.Cropping1D.name_scope": true, + "tf.compat.v1.keras.layers.Cropping1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Cropping1D.output": true, + "tf.compat.v1.keras.layers.Cropping1D.set_weights": true, + "tf.compat.v1.keras.layers.Cropping1D.submodules": true, + "tf.compat.v1.keras.layers.Cropping1D.trainable": true, + "tf.compat.v1.keras.layers.Cropping1D.trainable_weights": true, + "tf.compat.v1.keras.layers.Cropping1D.weights": true, + "tf.compat.v1.keras.layers.Cropping1D.with_name_scope": true, + "tf.compat.v1.keras.layers.Cropping2D": false, + "tf.compat.v1.keras.layers.Cropping2D.__call__": true, + "tf.compat.v1.keras.layers.Cropping2D.__eq__": true, + "tf.compat.v1.keras.layers.Cropping2D.__ge__": true, + "tf.compat.v1.keras.layers.Cropping2D.__gt__": true, + "tf.compat.v1.keras.layers.Cropping2D.__init__": true, + "tf.compat.v1.keras.layers.Cropping2D.__le__": true, + "tf.compat.v1.keras.layers.Cropping2D.__lt__": true, + "tf.compat.v1.keras.layers.Cropping2D.__ne__": true, + "tf.compat.v1.keras.layers.Cropping2D.__new__": true, + "tf.compat.v1.keras.layers.Cropping2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.Cropping2D.add_loss": true, + "tf.compat.v1.keras.layers.Cropping2D.add_metric": true, + "tf.compat.v1.keras.layers.Cropping2D.add_weight": true, + "tf.compat.v1.keras.layers.Cropping2D.build": true, + "tf.compat.v1.keras.layers.Cropping2D.call": true, + "tf.compat.v1.keras.layers.Cropping2D.compute_mask": true, + "tf.compat.v1.keras.layers.Cropping2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.Cropping2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.Cropping2D.count_params": true, + "tf.compat.v1.keras.layers.Cropping2D.dtype": true, + "tf.compat.v1.keras.layers.Cropping2D.dynamic": true, + "tf.compat.v1.keras.layers.Cropping2D.from_config": true, + "tf.compat.v1.keras.layers.Cropping2D.get_config": true, + "tf.compat.v1.keras.layers.Cropping2D.get_weights": true, + "tf.compat.v1.keras.layers.Cropping2D.input": true, + "tf.compat.v1.keras.layers.Cropping2D.input_spec": true, + "tf.compat.v1.keras.layers.Cropping2D.losses": true, + "tf.compat.v1.keras.layers.Cropping2D.metrics": true, + "tf.compat.v1.keras.layers.Cropping2D.name": true, + "tf.compat.v1.keras.layers.Cropping2D.name_scope": true, + "tf.compat.v1.keras.layers.Cropping2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Cropping2D.output": true, + "tf.compat.v1.keras.layers.Cropping2D.set_weights": true, + "tf.compat.v1.keras.layers.Cropping2D.submodules": true, + "tf.compat.v1.keras.layers.Cropping2D.trainable": true, + 
"tf.compat.v1.keras.layers.Cropping2D.trainable_weights": true, + "tf.compat.v1.keras.layers.Cropping2D.weights": true, + "tf.compat.v1.keras.layers.Cropping2D.with_name_scope": true, + "tf.compat.v1.keras.layers.Cropping3D": false, + "tf.compat.v1.keras.layers.Cropping3D.__call__": true, + "tf.compat.v1.keras.layers.Cropping3D.__eq__": true, + "tf.compat.v1.keras.layers.Cropping3D.__ge__": true, + "tf.compat.v1.keras.layers.Cropping3D.__gt__": true, + "tf.compat.v1.keras.layers.Cropping3D.__init__": true, + "tf.compat.v1.keras.layers.Cropping3D.__le__": true, + "tf.compat.v1.keras.layers.Cropping3D.__lt__": true, + "tf.compat.v1.keras.layers.Cropping3D.__ne__": true, + "tf.compat.v1.keras.layers.Cropping3D.__new__": true, + "tf.compat.v1.keras.layers.Cropping3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.Cropping3D.add_loss": true, + "tf.compat.v1.keras.layers.Cropping3D.add_metric": true, + "tf.compat.v1.keras.layers.Cropping3D.add_weight": true, + "tf.compat.v1.keras.layers.Cropping3D.build": true, + "tf.compat.v1.keras.layers.Cropping3D.call": true, + "tf.compat.v1.keras.layers.Cropping3D.compute_mask": true, + "tf.compat.v1.keras.layers.Cropping3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.Cropping3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.Cropping3D.count_params": true, + "tf.compat.v1.keras.layers.Cropping3D.dtype": true, + "tf.compat.v1.keras.layers.Cropping3D.dynamic": true, + "tf.compat.v1.keras.layers.Cropping3D.from_config": true, + "tf.compat.v1.keras.layers.Cropping3D.get_config": true, + "tf.compat.v1.keras.layers.Cropping3D.get_weights": true, + "tf.compat.v1.keras.layers.Cropping3D.input": true, + "tf.compat.v1.keras.layers.Cropping3D.input_spec": true, + "tf.compat.v1.keras.layers.Cropping3D.losses": true, + "tf.compat.v1.keras.layers.Cropping3D.metrics": true, + "tf.compat.v1.keras.layers.Cropping3D.name": true, + "tf.compat.v1.keras.layers.Cropping3D.name_scope": true, + "tf.compat.v1.keras.layers.Cropping3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Cropping3D.output": true, + "tf.compat.v1.keras.layers.Cropping3D.set_weights": true, + "tf.compat.v1.keras.layers.Cropping3D.submodules": true, + "tf.compat.v1.keras.layers.Cropping3D.trainable": true, + "tf.compat.v1.keras.layers.Cropping3D.trainable_weights": true, + "tf.compat.v1.keras.layers.Cropping3D.weights": true, + "tf.compat.v1.keras.layers.Cropping3D.with_name_scope": true, + "tf.compat.v1.keras.layers.CuDNNGRU": false, + "tf.compat.v1.keras.layers.CuDNNGRU.__call__": true, + "tf.compat.v1.keras.layers.CuDNNGRU.__eq__": true, + "tf.compat.v1.keras.layers.CuDNNGRU.__ge__": true, + "tf.compat.v1.keras.layers.CuDNNGRU.__gt__": true, + "tf.compat.v1.keras.layers.CuDNNGRU.__init__": true, + "tf.compat.v1.keras.layers.CuDNNGRU.__le__": true, + "tf.compat.v1.keras.layers.CuDNNGRU.__lt__": true, + "tf.compat.v1.keras.layers.CuDNNGRU.__ne__": true, + "tf.compat.v1.keras.layers.CuDNNGRU.__new__": true, + "tf.compat.v1.keras.layers.CuDNNGRU.activity_regularizer": true, + "tf.compat.v1.keras.layers.CuDNNGRU.add_loss": true, + "tf.compat.v1.keras.layers.CuDNNGRU.add_metric": true, + "tf.compat.v1.keras.layers.CuDNNGRU.add_weight": true, + "tf.compat.v1.keras.layers.CuDNNGRU.build": true, + "tf.compat.v1.keras.layers.CuDNNGRU.call": true, + "tf.compat.v1.keras.layers.CuDNNGRU.cell": true, + "tf.compat.v1.keras.layers.CuDNNGRU.compute_mask": true, + "tf.compat.v1.keras.layers.CuDNNGRU.compute_output_shape": true, + 
"tf.compat.v1.keras.layers.CuDNNGRU.compute_output_signature": true, + "tf.compat.v1.keras.layers.CuDNNGRU.count_params": true, + "tf.compat.v1.keras.layers.CuDNNGRU.dtype": true, + "tf.compat.v1.keras.layers.CuDNNGRU.dynamic": true, + "tf.compat.v1.keras.layers.CuDNNGRU.from_config": true, + "tf.compat.v1.keras.layers.CuDNNGRU.get_config": true, + "tf.compat.v1.keras.layers.CuDNNGRU.get_losses_for": true, + "tf.compat.v1.keras.layers.CuDNNGRU.get_weights": true, + "tf.compat.v1.keras.layers.CuDNNGRU.input": true, + "tf.compat.v1.keras.layers.CuDNNGRU.input_spec": true, + "tf.compat.v1.keras.layers.CuDNNGRU.losses": true, + "tf.compat.v1.keras.layers.CuDNNGRU.metrics": true, + "tf.compat.v1.keras.layers.CuDNNGRU.name": true, + "tf.compat.v1.keras.layers.CuDNNGRU.name_scope": true, + "tf.compat.v1.keras.layers.CuDNNGRU.non_trainable_weights": true, + "tf.compat.v1.keras.layers.CuDNNGRU.output": true, + "tf.compat.v1.keras.layers.CuDNNGRU.reset_states": true, + "tf.compat.v1.keras.layers.CuDNNGRU.set_weights": true, + "tf.compat.v1.keras.layers.CuDNNGRU.states": true, + "tf.compat.v1.keras.layers.CuDNNGRU.submodules": true, + "tf.compat.v1.keras.layers.CuDNNGRU.trainable": true, + "tf.compat.v1.keras.layers.CuDNNGRU.trainable_weights": true, + "tf.compat.v1.keras.layers.CuDNNGRU.weights": true, + "tf.compat.v1.keras.layers.CuDNNGRU.with_name_scope": true, + "tf.compat.v1.keras.layers.CuDNNLSTM": false, + "tf.compat.v1.keras.layers.CuDNNLSTM.__call__": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.__eq__": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.__ge__": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.__gt__": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.__init__": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.__le__": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.__lt__": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.__ne__": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.__new__": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.activity_regularizer": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.add_loss": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.add_metric": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.add_weight": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.build": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.call": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.cell": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.compute_mask": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.compute_output_shape": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.compute_output_signature": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.count_params": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.dtype": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.dynamic": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.from_config": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.get_config": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.get_losses_for": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.get_weights": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.input": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.input_spec": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.losses": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.metrics": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.name": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.name_scope": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.non_trainable_weights": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.output": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.reset_states": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.set_weights": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.states": true, + 
"tf.compat.v1.keras.layers.CuDNNLSTM.submodules": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.trainable": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.trainable_weights": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.weights": true, + "tf.compat.v1.keras.layers.CuDNNLSTM.with_name_scope": true, + "tf.compat.v1.keras.layers.Dense": false, + "tf.compat.v1.keras.layers.Dense.__call__": true, + "tf.compat.v1.keras.layers.Dense.__eq__": true, + "tf.compat.v1.keras.layers.Dense.__ge__": true, + "tf.compat.v1.keras.layers.Dense.__gt__": true, + "tf.compat.v1.keras.layers.Dense.__init__": true, + "tf.compat.v1.keras.layers.Dense.__le__": true, + "tf.compat.v1.keras.layers.Dense.__lt__": true, + "tf.compat.v1.keras.layers.Dense.__ne__": true, + "tf.compat.v1.keras.layers.Dense.__new__": true, + "tf.compat.v1.keras.layers.Dense.activity_regularizer": true, + "tf.compat.v1.keras.layers.Dense.add_loss": true, + "tf.compat.v1.keras.layers.Dense.add_metric": true, + "tf.compat.v1.keras.layers.Dense.add_weight": true, + "tf.compat.v1.keras.layers.Dense.build": true, + "tf.compat.v1.keras.layers.Dense.call": true, + "tf.compat.v1.keras.layers.Dense.compute_mask": true, + "tf.compat.v1.keras.layers.Dense.compute_output_shape": true, + "tf.compat.v1.keras.layers.Dense.compute_output_signature": true, + "tf.compat.v1.keras.layers.Dense.count_params": true, + "tf.compat.v1.keras.layers.Dense.dtype": true, + "tf.compat.v1.keras.layers.Dense.dynamic": true, + "tf.compat.v1.keras.layers.Dense.from_config": true, + "tf.compat.v1.keras.layers.Dense.get_config": true, + "tf.compat.v1.keras.layers.Dense.get_weights": true, + "tf.compat.v1.keras.layers.Dense.input": true, + "tf.compat.v1.keras.layers.Dense.input_spec": true, + "tf.compat.v1.keras.layers.Dense.losses": true, + "tf.compat.v1.keras.layers.Dense.metrics": true, + "tf.compat.v1.keras.layers.Dense.name": true, + "tf.compat.v1.keras.layers.Dense.name_scope": true, + "tf.compat.v1.keras.layers.Dense.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Dense.output": true, + "tf.compat.v1.keras.layers.Dense.set_weights": true, + "tf.compat.v1.keras.layers.Dense.submodules": true, + "tf.compat.v1.keras.layers.Dense.trainable": true, + "tf.compat.v1.keras.layers.Dense.trainable_weights": true, + "tf.compat.v1.keras.layers.Dense.weights": true, + "tf.compat.v1.keras.layers.Dense.with_name_scope": true, + "tf.compat.v1.keras.layers.DenseFeatures": false, + "tf.compat.v1.keras.layers.DenseFeatures.__call__": true, + "tf.compat.v1.keras.layers.DenseFeatures.__eq__": true, + "tf.compat.v1.keras.layers.DenseFeatures.__ge__": true, + "tf.compat.v1.keras.layers.DenseFeatures.__gt__": true, + "tf.compat.v1.keras.layers.DenseFeatures.__init__": true, + "tf.compat.v1.keras.layers.DenseFeatures.__le__": true, + "tf.compat.v1.keras.layers.DenseFeatures.__lt__": true, + "tf.compat.v1.keras.layers.DenseFeatures.__ne__": true, + "tf.compat.v1.keras.layers.DenseFeatures.__new__": true, + "tf.compat.v1.keras.layers.DenseFeatures.activity_regularizer": true, + "tf.compat.v1.keras.layers.DenseFeatures.add_loss": true, + "tf.compat.v1.keras.layers.DenseFeatures.add_metric": true, + "tf.compat.v1.keras.layers.DenseFeatures.add_weight": true, + "tf.compat.v1.keras.layers.DenseFeatures.build": true, + "tf.compat.v1.keras.layers.DenseFeatures.call": true, + "tf.compat.v1.keras.layers.DenseFeatures.compute_mask": true, + "tf.compat.v1.keras.layers.DenseFeatures.compute_output_shape": true, + "tf.compat.v1.keras.layers.DenseFeatures.compute_output_signature": true, + 
"tf.compat.v1.keras.layers.DenseFeatures.count_params": true, + "tf.compat.v1.keras.layers.DenseFeatures.dtype": true, + "tf.compat.v1.keras.layers.DenseFeatures.dynamic": true, + "tf.compat.v1.keras.layers.DenseFeatures.from_config": true, + "tf.compat.v1.keras.layers.DenseFeatures.get_config": true, + "tf.compat.v1.keras.layers.DenseFeatures.get_weights": true, + "tf.compat.v1.keras.layers.DenseFeatures.input": true, + "tf.compat.v1.keras.layers.DenseFeatures.input_spec": true, + "tf.compat.v1.keras.layers.DenseFeatures.losses": true, + "tf.compat.v1.keras.layers.DenseFeatures.metrics": true, + "tf.compat.v1.keras.layers.DenseFeatures.name": true, + "tf.compat.v1.keras.layers.DenseFeatures.name_scope": true, + "tf.compat.v1.keras.layers.DenseFeatures.non_trainable_weights": true, + "tf.compat.v1.keras.layers.DenseFeatures.output": true, + "tf.compat.v1.keras.layers.DenseFeatures.set_weights": true, + "tf.compat.v1.keras.layers.DenseFeatures.submodules": true, + "tf.compat.v1.keras.layers.DenseFeatures.trainable": true, + "tf.compat.v1.keras.layers.DenseFeatures.trainable_weights": true, + "tf.compat.v1.keras.layers.DenseFeatures.weights": true, + "tf.compat.v1.keras.layers.DenseFeatures.with_name_scope": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D": false, + "tf.compat.v1.keras.layers.DepthwiseConv2D.__call__": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.__eq__": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.__ge__": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.__gt__": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.__init__": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.__le__": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.__lt__": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.__ne__": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.__new__": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.add_loss": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.add_metric": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.add_weight": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.build": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.call": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.compute_mask": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.count_params": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.dtype": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.dynamic": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.from_config": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.get_config": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.get_weights": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.input": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.input_spec": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.losses": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.metrics": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.name": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.name_scope": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.output": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.set_weights": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.submodules": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.trainable": true, + 
"tf.compat.v1.keras.layers.DepthwiseConv2D.trainable_weights": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.weights": true, + "tf.compat.v1.keras.layers.DepthwiseConv2D.with_name_scope": true, + "tf.compat.v1.keras.layers.Dot": false, + "tf.compat.v1.keras.layers.Dot.__call__": true, + "tf.compat.v1.keras.layers.Dot.__eq__": true, + "tf.compat.v1.keras.layers.Dot.__ge__": true, + "tf.compat.v1.keras.layers.Dot.__gt__": true, + "tf.compat.v1.keras.layers.Dot.__init__": true, + "tf.compat.v1.keras.layers.Dot.__le__": true, + "tf.compat.v1.keras.layers.Dot.__lt__": true, + "tf.compat.v1.keras.layers.Dot.__ne__": true, + "tf.compat.v1.keras.layers.Dot.__new__": true, + "tf.compat.v1.keras.layers.Dot.activity_regularizer": true, + "tf.compat.v1.keras.layers.Dot.add_loss": true, + "tf.compat.v1.keras.layers.Dot.add_metric": true, + "tf.compat.v1.keras.layers.Dot.add_weight": true, + "tf.compat.v1.keras.layers.Dot.build": true, + "tf.compat.v1.keras.layers.Dot.call": true, + "tf.compat.v1.keras.layers.Dot.compute_mask": true, + "tf.compat.v1.keras.layers.Dot.compute_output_shape": true, + "tf.compat.v1.keras.layers.Dot.compute_output_signature": true, + "tf.compat.v1.keras.layers.Dot.count_params": true, + "tf.compat.v1.keras.layers.Dot.dtype": true, + "tf.compat.v1.keras.layers.Dot.dynamic": true, + "tf.compat.v1.keras.layers.Dot.from_config": true, + "tf.compat.v1.keras.layers.Dot.get_config": true, + "tf.compat.v1.keras.layers.Dot.get_weights": true, + "tf.compat.v1.keras.layers.Dot.input": true, + "tf.compat.v1.keras.layers.Dot.input_spec": true, + "tf.compat.v1.keras.layers.Dot.losses": true, + "tf.compat.v1.keras.layers.Dot.metrics": true, + "tf.compat.v1.keras.layers.Dot.name": true, + "tf.compat.v1.keras.layers.Dot.name_scope": true, + "tf.compat.v1.keras.layers.Dot.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Dot.output": true, + "tf.compat.v1.keras.layers.Dot.set_weights": true, + "tf.compat.v1.keras.layers.Dot.submodules": true, + "tf.compat.v1.keras.layers.Dot.trainable": true, + "tf.compat.v1.keras.layers.Dot.trainable_weights": true, + "tf.compat.v1.keras.layers.Dot.weights": true, + "tf.compat.v1.keras.layers.Dot.with_name_scope": true, + "tf.compat.v1.keras.layers.Dropout": false, + "tf.compat.v1.keras.layers.Dropout.__call__": true, + "tf.compat.v1.keras.layers.Dropout.__eq__": true, + "tf.compat.v1.keras.layers.Dropout.__ge__": true, + "tf.compat.v1.keras.layers.Dropout.__gt__": true, + "tf.compat.v1.keras.layers.Dropout.__init__": true, + "tf.compat.v1.keras.layers.Dropout.__le__": true, + "tf.compat.v1.keras.layers.Dropout.__lt__": true, + "tf.compat.v1.keras.layers.Dropout.__ne__": true, + "tf.compat.v1.keras.layers.Dropout.__new__": true, + "tf.compat.v1.keras.layers.Dropout.activity_regularizer": true, + "tf.compat.v1.keras.layers.Dropout.add_loss": true, + "tf.compat.v1.keras.layers.Dropout.add_metric": true, + "tf.compat.v1.keras.layers.Dropout.add_weight": true, + "tf.compat.v1.keras.layers.Dropout.build": true, + "tf.compat.v1.keras.layers.Dropout.call": true, + "tf.compat.v1.keras.layers.Dropout.compute_mask": true, + "tf.compat.v1.keras.layers.Dropout.compute_output_shape": true, + "tf.compat.v1.keras.layers.Dropout.compute_output_signature": true, + "tf.compat.v1.keras.layers.Dropout.count_params": true, + "tf.compat.v1.keras.layers.Dropout.dtype": true, + "tf.compat.v1.keras.layers.Dropout.dynamic": true, + "tf.compat.v1.keras.layers.Dropout.from_config": true, + "tf.compat.v1.keras.layers.Dropout.get_config": true, + 
"tf.compat.v1.keras.layers.Dropout.get_weights": true, + "tf.compat.v1.keras.layers.Dropout.input": true, + "tf.compat.v1.keras.layers.Dropout.input_spec": true, + "tf.compat.v1.keras.layers.Dropout.losses": true, + "tf.compat.v1.keras.layers.Dropout.metrics": true, + "tf.compat.v1.keras.layers.Dropout.name": true, + "tf.compat.v1.keras.layers.Dropout.name_scope": true, + "tf.compat.v1.keras.layers.Dropout.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Dropout.output": true, + "tf.compat.v1.keras.layers.Dropout.set_weights": true, + "tf.compat.v1.keras.layers.Dropout.submodules": true, + "tf.compat.v1.keras.layers.Dropout.trainable": true, + "tf.compat.v1.keras.layers.Dropout.trainable_weights": true, + "tf.compat.v1.keras.layers.Dropout.weights": true, + "tf.compat.v1.keras.layers.Dropout.with_name_scope": true, + "tf.compat.v1.keras.layers.ELU": false, + "tf.compat.v1.keras.layers.ELU.__call__": true, + "tf.compat.v1.keras.layers.ELU.__eq__": true, + "tf.compat.v1.keras.layers.ELU.__ge__": true, + "tf.compat.v1.keras.layers.ELU.__gt__": true, + "tf.compat.v1.keras.layers.ELU.__init__": true, + "tf.compat.v1.keras.layers.ELU.__le__": true, + "tf.compat.v1.keras.layers.ELU.__lt__": true, + "tf.compat.v1.keras.layers.ELU.__ne__": true, + "tf.compat.v1.keras.layers.ELU.__new__": true, + "tf.compat.v1.keras.layers.ELU.activity_regularizer": true, + "tf.compat.v1.keras.layers.ELU.add_loss": true, + "tf.compat.v1.keras.layers.ELU.add_metric": true, + "tf.compat.v1.keras.layers.ELU.add_weight": true, + "tf.compat.v1.keras.layers.ELU.build": true, + "tf.compat.v1.keras.layers.ELU.call": true, + "tf.compat.v1.keras.layers.ELU.compute_mask": true, + "tf.compat.v1.keras.layers.ELU.compute_output_shape": true, + "tf.compat.v1.keras.layers.ELU.compute_output_signature": true, + "tf.compat.v1.keras.layers.ELU.count_params": true, + "tf.compat.v1.keras.layers.ELU.dtype": true, + "tf.compat.v1.keras.layers.ELU.dynamic": true, + "tf.compat.v1.keras.layers.ELU.from_config": true, + "tf.compat.v1.keras.layers.ELU.get_config": true, + "tf.compat.v1.keras.layers.ELU.get_weights": true, + "tf.compat.v1.keras.layers.ELU.input": true, + "tf.compat.v1.keras.layers.ELU.input_spec": true, + "tf.compat.v1.keras.layers.ELU.losses": true, + "tf.compat.v1.keras.layers.ELU.metrics": true, + "tf.compat.v1.keras.layers.ELU.name": true, + "tf.compat.v1.keras.layers.ELU.name_scope": true, + "tf.compat.v1.keras.layers.ELU.non_trainable_weights": true, + "tf.compat.v1.keras.layers.ELU.output": true, + "tf.compat.v1.keras.layers.ELU.set_weights": true, + "tf.compat.v1.keras.layers.ELU.submodules": true, + "tf.compat.v1.keras.layers.ELU.trainable": true, + "tf.compat.v1.keras.layers.ELU.trainable_weights": true, + "tf.compat.v1.keras.layers.ELU.weights": true, + "tf.compat.v1.keras.layers.ELU.with_name_scope": true, + "tf.compat.v1.keras.layers.Embedding": false, + "tf.compat.v1.keras.layers.Embedding.__call__": true, + "tf.compat.v1.keras.layers.Embedding.__eq__": true, + "tf.compat.v1.keras.layers.Embedding.__ge__": true, + "tf.compat.v1.keras.layers.Embedding.__gt__": true, + "tf.compat.v1.keras.layers.Embedding.__init__": true, + "tf.compat.v1.keras.layers.Embedding.__le__": true, + "tf.compat.v1.keras.layers.Embedding.__lt__": true, + "tf.compat.v1.keras.layers.Embedding.__ne__": true, + "tf.compat.v1.keras.layers.Embedding.__new__": true, + "tf.compat.v1.keras.layers.Embedding.activity_regularizer": true, + "tf.compat.v1.keras.layers.Embedding.add_loss": true, + 
"tf.compat.v1.keras.layers.Embedding.add_metric": true, + "tf.compat.v1.keras.layers.Embedding.add_weight": true, + "tf.compat.v1.keras.layers.Embedding.build": true, + "tf.compat.v1.keras.layers.Embedding.call": true, + "tf.compat.v1.keras.layers.Embedding.compute_mask": true, + "tf.compat.v1.keras.layers.Embedding.compute_output_shape": true, + "tf.compat.v1.keras.layers.Embedding.compute_output_signature": true, + "tf.compat.v1.keras.layers.Embedding.count_params": true, + "tf.compat.v1.keras.layers.Embedding.dtype": true, + "tf.compat.v1.keras.layers.Embedding.dynamic": true, + "tf.compat.v1.keras.layers.Embedding.from_config": true, + "tf.compat.v1.keras.layers.Embedding.get_config": true, + "tf.compat.v1.keras.layers.Embedding.get_weights": true, + "tf.compat.v1.keras.layers.Embedding.input": true, + "tf.compat.v1.keras.layers.Embedding.input_spec": true, + "tf.compat.v1.keras.layers.Embedding.losses": true, + "tf.compat.v1.keras.layers.Embedding.metrics": true, + "tf.compat.v1.keras.layers.Embedding.name": true, + "tf.compat.v1.keras.layers.Embedding.name_scope": true, + "tf.compat.v1.keras.layers.Embedding.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Embedding.output": true, + "tf.compat.v1.keras.layers.Embedding.set_weights": true, + "tf.compat.v1.keras.layers.Embedding.submodules": true, + "tf.compat.v1.keras.layers.Embedding.trainable": true, + "tf.compat.v1.keras.layers.Embedding.trainable_weights": true, + "tf.compat.v1.keras.layers.Embedding.weights": true, + "tf.compat.v1.keras.layers.Embedding.with_name_scope": true, + "tf.compat.v1.keras.layers.Flatten": false, + "tf.compat.v1.keras.layers.Flatten.__call__": true, + "tf.compat.v1.keras.layers.Flatten.__eq__": true, + "tf.compat.v1.keras.layers.Flatten.__ge__": true, + "tf.compat.v1.keras.layers.Flatten.__gt__": true, + "tf.compat.v1.keras.layers.Flatten.__init__": true, + "tf.compat.v1.keras.layers.Flatten.__le__": true, + "tf.compat.v1.keras.layers.Flatten.__lt__": true, + "tf.compat.v1.keras.layers.Flatten.__ne__": true, + "tf.compat.v1.keras.layers.Flatten.__new__": true, + "tf.compat.v1.keras.layers.Flatten.activity_regularizer": true, + "tf.compat.v1.keras.layers.Flatten.add_loss": true, + "tf.compat.v1.keras.layers.Flatten.add_metric": true, + "tf.compat.v1.keras.layers.Flatten.add_weight": true, + "tf.compat.v1.keras.layers.Flatten.build": true, + "tf.compat.v1.keras.layers.Flatten.call": true, + "tf.compat.v1.keras.layers.Flatten.compute_mask": true, + "tf.compat.v1.keras.layers.Flatten.compute_output_shape": true, + "tf.compat.v1.keras.layers.Flatten.compute_output_signature": true, + "tf.compat.v1.keras.layers.Flatten.count_params": true, + "tf.compat.v1.keras.layers.Flatten.dtype": true, + "tf.compat.v1.keras.layers.Flatten.dynamic": true, + "tf.compat.v1.keras.layers.Flatten.from_config": true, + "tf.compat.v1.keras.layers.Flatten.get_config": true, + "tf.compat.v1.keras.layers.Flatten.get_weights": true, + "tf.compat.v1.keras.layers.Flatten.input": true, + "tf.compat.v1.keras.layers.Flatten.input_spec": true, + "tf.compat.v1.keras.layers.Flatten.losses": true, + "tf.compat.v1.keras.layers.Flatten.metrics": true, + "tf.compat.v1.keras.layers.Flatten.name": true, + "tf.compat.v1.keras.layers.Flatten.name_scope": true, + "tf.compat.v1.keras.layers.Flatten.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Flatten.output": true, + "tf.compat.v1.keras.layers.Flatten.set_weights": true, + "tf.compat.v1.keras.layers.Flatten.submodules": true, + "tf.compat.v1.keras.layers.Flatten.trainable": 
true, + "tf.compat.v1.keras.layers.Flatten.trainable_weights": true, + "tf.compat.v1.keras.layers.Flatten.weights": true, + "tf.compat.v1.keras.layers.Flatten.with_name_scope": true, + "tf.compat.v1.keras.layers.GRU": false, + "tf.compat.v1.keras.layers.GRU.__call__": true, + "tf.compat.v1.keras.layers.GRU.__eq__": true, + "tf.compat.v1.keras.layers.GRU.__ge__": true, + "tf.compat.v1.keras.layers.GRU.__gt__": true, + "tf.compat.v1.keras.layers.GRU.__init__": true, + "tf.compat.v1.keras.layers.GRU.__le__": true, + "tf.compat.v1.keras.layers.GRU.__lt__": true, + "tf.compat.v1.keras.layers.GRU.__ne__": true, + "tf.compat.v1.keras.layers.GRU.__new__": true, + "tf.compat.v1.keras.layers.GRU.activation": true, + "tf.compat.v1.keras.layers.GRU.activity_regularizer": true, + "tf.compat.v1.keras.layers.GRU.add_loss": true, + "tf.compat.v1.keras.layers.GRU.add_metric": true, + "tf.compat.v1.keras.layers.GRU.add_weight": true, + "tf.compat.v1.keras.layers.GRU.bias_constraint": true, + "tf.compat.v1.keras.layers.GRU.bias_initializer": true, + "tf.compat.v1.keras.layers.GRU.bias_regularizer": true, + "tf.compat.v1.keras.layers.GRU.build": true, + "tf.compat.v1.keras.layers.GRU.call": true, + "tf.compat.v1.keras.layers.GRU.compute_mask": true, + "tf.compat.v1.keras.layers.GRU.compute_output_shape": true, + "tf.compat.v1.keras.layers.GRU.compute_output_signature": true, + "tf.compat.v1.keras.layers.GRU.count_params": true, + "tf.compat.v1.keras.layers.GRU.dropout": true, + "tf.compat.v1.keras.layers.GRU.dtype": true, + "tf.compat.v1.keras.layers.GRU.dynamic": true, + "tf.compat.v1.keras.layers.GRU.from_config": true, + "tf.compat.v1.keras.layers.GRU.get_config": true, + "tf.compat.v1.keras.layers.GRU.get_weights": true, + "tf.compat.v1.keras.layers.GRU.implementation": true, + "tf.compat.v1.keras.layers.GRU.input": true, + "tf.compat.v1.keras.layers.GRU.input_spec": true, + "tf.compat.v1.keras.layers.GRU.kernel_constraint": true, + "tf.compat.v1.keras.layers.GRU.kernel_initializer": true, + "tf.compat.v1.keras.layers.GRU.kernel_regularizer": true, + "tf.compat.v1.keras.layers.GRU.losses": true, + "tf.compat.v1.keras.layers.GRU.metrics": true, + "tf.compat.v1.keras.layers.GRU.name": true, + "tf.compat.v1.keras.layers.GRU.name_scope": true, + "tf.compat.v1.keras.layers.GRU.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GRU.output": true, + "tf.compat.v1.keras.layers.GRU.recurrent_activation": true, + "tf.compat.v1.keras.layers.GRU.recurrent_constraint": true, + "tf.compat.v1.keras.layers.GRU.recurrent_dropout": true, + "tf.compat.v1.keras.layers.GRU.recurrent_initializer": true, + "tf.compat.v1.keras.layers.GRU.recurrent_regularizer": true, + "tf.compat.v1.keras.layers.GRU.reset_after": true, + "tf.compat.v1.keras.layers.GRU.reset_states": true, + "tf.compat.v1.keras.layers.GRU.set_weights": true, + "tf.compat.v1.keras.layers.GRU.states": true, + "tf.compat.v1.keras.layers.GRU.submodules": true, + "tf.compat.v1.keras.layers.GRU.trainable": true, + "tf.compat.v1.keras.layers.GRU.trainable_weights": true, + "tf.compat.v1.keras.layers.GRU.units": true, + "tf.compat.v1.keras.layers.GRU.use_bias": true, + "tf.compat.v1.keras.layers.GRU.weights": true, + "tf.compat.v1.keras.layers.GRU.with_name_scope": true, + "tf.compat.v1.keras.layers.GRUCell": false, + "tf.compat.v1.keras.layers.GRUCell.__call__": true, + "tf.compat.v1.keras.layers.GRUCell.__eq__": true, + "tf.compat.v1.keras.layers.GRUCell.__ge__": true, + "tf.compat.v1.keras.layers.GRUCell.__gt__": true, + 
"tf.compat.v1.keras.layers.GRUCell.__init__": true, + "tf.compat.v1.keras.layers.GRUCell.__le__": true, + "tf.compat.v1.keras.layers.GRUCell.__lt__": true, + "tf.compat.v1.keras.layers.GRUCell.__ne__": true, + "tf.compat.v1.keras.layers.GRUCell.__new__": true, + "tf.compat.v1.keras.layers.GRUCell.activity_regularizer": true, + "tf.compat.v1.keras.layers.GRUCell.add_loss": true, + "tf.compat.v1.keras.layers.GRUCell.add_metric": true, + "tf.compat.v1.keras.layers.GRUCell.add_weight": true, + "tf.compat.v1.keras.layers.GRUCell.build": true, + "tf.compat.v1.keras.layers.GRUCell.call": true, + "tf.compat.v1.keras.layers.GRUCell.compute_mask": true, + "tf.compat.v1.keras.layers.GRUCell.compute_output_shape": true, + "tf.compat.v1.keras.layers.GRUCell.compute_output_signature": true, + "tf.compat.v1.keras.layers.GRUCell.count_params": true, + "tf.compat.v1.keras.layers.GRUCell.dtype": true, + "tf.compat.v1.keras.layers.GRUCell.dynamic": true, + "tf.compat.v1.keras.layers.GRUCell.from_config": true, + "tf.compat.v1.keras.layers.GRUCell.get_config": true, + "tf.compat.v1.keras.layers.GRUCell.get_dropout_mask_for_cell": true, + "tf.compat.v1.keras.layers.GRUCell.get_initial_state": true, + "tf.compat.v1.keras.layers.GRUCell.get_recurrent_dropout_mask_for_cell": true, + "tf.compat.v1.keras.layers.GRUCell.get_weights": true, + "tf.compat.v1.keras.layers.GRUCell.input": true, + "tf.compat.v1.keras.layers.GRUCell.input_spec": true, + "tf.compat.v1.keras.layers.GRUCell.losses": true, + "tf.compat.v1.keras.layers.GRUCell.metrics": true, + "tf.compat.v1.keras.layers.GRUCell.name": true, + "tf.compat.v1.keras.layers.GRUCell.name_scope": true, + "tf.compat.v1.keras.layers.GRUCell.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GRUCell.output": true, + "tf.compat.v1.keras.layers.GRUCell.reset_dropout_mask": true, + "tf.compat.v1.keras.layers.GRUCell.reset_recurrent_dropout_mask": true, + "tf.compat.v1.keras.layers.GRUCell.set_weights": true, + "tf.compat.v1.keras.layers.GRUCell.submodules": true, + "tf.compat.v1.keras.layers.GRUCell.trainable": true, + "tf.compat.v1.keras.layers.GRUCell.trainable_weights": true, + "tf.compat.v1.keras.layers.GRUCell.weights": true, + "tf.compat.v1.keras.layers.GRUCell.with_name_scope": true, + "tf.compat.v1.keras.layers.GaussianDropout": false, + "tf.compat.v1.keras.layers.GaussianDropout.__call__": true, + "tf.compat.v1.keras.layers.GaussianDropout.__eq__": true, + "tf.compat.v1.keras.layers.GaussianDropout.__ge__": true, + "tf.compat.v1.keras.layers.GaussianDropout.__gt__": true, + "tf.compat.v1.keras.layers.GaussianDropout.__init__": true, + "tf.compat.v1.keras.layers.GaussianDropout.__le__": true, + "tf.compat.v1.keras.layers.GaussianDropout.__lt__": true, + "tf.compat.v1.keras.layers.GaussianDropout.__ne__": true, + "tf.compat.v1.keras.layers.GaussianDropout.__new__": true, + "tf.compat.v1.keras.layers.GaussianDropout.activity_regularizer": true, + "tf.compat.v1.keras.layers.GaussianDropout.add_loss": true, + "tf.compat.v1.keras.layers.GaussianDropout.add_metric": true, + "tf.compat.v1.keras.layers.GaussianDropout.add_weight": true, + "tf.compat.v1.keras.layers.GaussianDropout.build": true, + "tf.compat.v1.keras.layers.GaussianDropout.call": true, + "tf.compat.v1.keras.layers.GaussianDropout.compute_mask": true, + "tf.compat.v1.keras.layers.GaussianDropout.compute_output_shape": true, + "tf.compat.v1.keras.layers.GaussianDropout.compute_output_signature": true, + "tf.compat.v1.keras.layers.GaussianDropout.count_params": true, + 
"tf.compat.v1.keras.layers.GaussianDropout.dtype": true, + "tf.compat.v1.keras.layers.GaussianDropout.dynamic": true, + "tf.compat.v1.keras.layers.GaussianDropout.from_config": true, + "tf.compat.v1.keras.layers.GaussianDropout.get_config": true, + "tf.compat.v1.keras.layers.GaussianDropout.get_weights": true, + "tf.compat.v1.keras.layers.GaussianDropout.input": true, + "tf.compat.v1.keras.layers.GaussianDropout.input_spec": true, + "tf.compat.v1.keras.layers.GaussianDropout.losses": true, + "tf.compat.v1.keras.layers.GaussianDropout.metrics": true, + "tf.compat.v1.keras.layers.GaussianDropout.name": true, + "tf.compat.v1.keras.layers.GaussianDropout.name_scope": true, + "tf.compat.v1.keras.layers.GaussianDropout.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GaussianDropout.output": true, + "tf.compat.v1.keras.layers.GaussianDropout.set_weights": true, + "tf.compat.v1.keras.layers.GaussianDropout.submodules": true, + "tf.compat.v1.keras.layers.GaussianDropout.trainable": true, + "tf.compat.v1.keras.layers.GaussianDropout.trainable_weights": true, + "tf.compat.v1.keras.layers.GaussianDropout.weights": true, + "tf.compat.v1.keras.layers.GaussianDropout.with_name_scope": true, + "tf.compat.v1.keras.layers.GaussianNoise": false, + "tf.compat.v1.keras.layers.GaussianNoise.__call__": true, + "tf.compat.v1.keras.layers.GaussianNoise.__eq__": true, + "tf.compat.v1.keras.layers.GaussianNoise.__ge__": true, + "tf.compat.v1.keras.layers.GaussianNoise.__gt__": true, + "tf.compat.v1.keras.layers.GaussianNoise.__init__": true, + "tf.compat.v1.keras.layers.GaussianNoise.__le__": true, + "tf.compat.v1.keras.layers.GaussianNoise.__lt__": true, + "tf.compat.v1.keras.layers.GaussianNoise.__ne__": true, + "tf.compat.v1.keras.layers.GaussianNoise.__new__": true, + "tf.compat.v1.keras.layers.GaussianNoise.activity_regularizer": true, + "tf.compat.v1.keras.layers.GaussianNoise.add_loss": true, + "tf.compat.v1.keras.layers.GaussianNoise.add_metric": true, + "tf.compat.v1.keras.layers.GaussianNoise.add_weight": true, + "tf.compat.v1.keras.layers.GaussianNoise.build": true, + "tf.compat.v1.keras.layers.GaussianNoise.call": true, + "tf.compat.v1.keras.layers.GaussianNoise.compute_mask": true, + "tf.compat.v1.keras.layers.GaussianNoise.compute_output_shape": true, + "tf.compat.v1.keras.layers.GaussianNoise.compute_output_signature": true, + "tf.compat.v1.keras.layers.GaussianNoise.count_params": true, + "tf.compat.v1.keras.layers.GaussianNoise.dtype": true, + "tf.compat.v1.keras.layers.GaussianNoise.dynamic": true, + "tf.compat.v1.keras.layers.GaussianNoise.from_config": true, + "tf.compat.v1.keras.layers.GaussianNoise.get_config": true, + "tf.compat.v1.keras.layers.GaussianNoise.get_weights": true, + "tf.compat.v1.keras.layers.GaussianNoise.input": true, + "tf.compat.v1.keras.layers.GaussianNoise.input_spec": true, + "tf.compat.v1.keras.layers.GaussianNoise.losses": true, + "tf.compat.v1.keras.layers.GaussianNoise.metrics": true, + "tf.compat.v1.keras.layers.GaussianNoise.name": true, + "tf.compat.v1.keras.layers.GaussianNoise.name_scope": true, + "tf.compat.v1.keras.layers.GaussianNoise.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GaussianNoise.output": true, + "tf.compat.v1.keras.layers.GaussianNoise.set_weights": true, + "tf.compat.v1.keras.layers.GaussianNoise.submodules": true, + "tf.compat.v1.keras.layers.GaussianNoise.trainable": true, + "tf.compat.v1.keras.layers.GaussianNoise.trainable_weights": true, + "tf.compat.v1.keras.layers.GaussianNoise.weights": true, + 
"tf.compat.v1.keras.layers.GaussianNoise.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D": false, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__call__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__init__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__le__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.__new__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.build": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.call": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.count_params": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.dtype": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.from_config": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.get_config": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.input": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.losses": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.metrics": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.name": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.output": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.submodules": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.trainable": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling1D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D": false, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__call__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__init__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__le__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.__new__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.activity_regularizer": 
true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.build": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.call": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.count_params": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.dtype": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.from_config": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.get_config": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.input": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.losses": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.metrics": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.name": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.output": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.submodules": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.trainable": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling2D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D": false, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__call__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__init__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__le__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.__new__": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.build": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.call": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.count_params": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.dtype": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.dynamic": true, + 
"tf.compat.v1.keras.layers.GlobalAveragePooling3D.from_config": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.get_config": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.input": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.losses": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.metrics": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.name": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.output": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.submodules": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.trainable": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.weights": true, + "tf.compat.v1.keras.layers.GlobalAveragePooling3D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D": false, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__call__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__init__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__le__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.__new__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.build": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.call": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.count_params": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.dtype": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.from_config": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.get_config": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.input": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.losses": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.metrics": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.name": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.output": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.submodules": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.trainable": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.trainable_weights": true, + 
"tf.compat.v1.keras.layers.GlobalAvgPool1D.weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool1D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D": false, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__call__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__init__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__le__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.__new__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.build": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.call": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.count_params": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.dtype": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.from_config": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.get_config": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.input": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.losses": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.metrics": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.name": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.output": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.submodules": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.trainable": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool2D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D": false, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__call__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__init__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__le__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.__new__": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.build": true, + 
"tf.compat.v1.keras.layers.GlobalAvgPool3D.call": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.count_params": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.dtype": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.from_config": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.get_config": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.input": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.losses": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.metrics": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.name": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.output": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.submodules": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.trainable": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.weights": true, + "tf.compat.v1.keras.layers.GlobalAvgPool3D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D": false, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__call__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__init__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__le__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.__new__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.build": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.call": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.count_params": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.dtype": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.from_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.get_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.input": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.losses": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.metrics": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.name": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.name_scope": true, + 
"tf.compat.v1.keras.layers.GlobalMaxPool1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.output": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.submodules": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.trainable": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool1D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D": false, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__call__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__init__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__le__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.__new__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.build": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.call": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.count_params": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.dtype": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.from_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.get_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.input": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.losses": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.metrics": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.name": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.output": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.submodules": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.trainable": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool2D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D": false, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__call__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__init__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__le__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.__ne__": true, + 
"tf.compat.v1.keras.layers.GlobalMaxPool3D.__new__": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.build": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.call": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.count_params": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.dtype": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.from_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.get_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.input": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.losses": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.metrics": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.name": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.output": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.submodules": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.trainable": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPool3D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D": false, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__call__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__init__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__le__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.__new__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.build": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.call": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.count_params": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.dtype": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.from_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.get_config": true, + 
"tf.compat.v1.keras.layers.GlobalMaxPooling1D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.input": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.losses": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.metrics": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.name": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.output": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.submodules": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.trainable": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling1D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D": false, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__call__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__init__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__le__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.__new__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.build": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.call": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.count_params": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.dtype": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.from_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.get_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.input": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.losses": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.metrics": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.name": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.output": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.submodules": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.trainable": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling2D.weights": true, + 
"tf.compat.v1.keras.layers.GlobalMaxPooling2D.with_name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D": false, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__call__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__eq__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__ge__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__gt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__init__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__le__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__lt__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__ne__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.__new__": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.add_loss": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.add_metric": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.add_weight": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.build": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.call": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.compute_mask": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.count_params": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.dtype": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.dynamic": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.from_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.get_config": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.get_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.input": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.input_spec": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.losses": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.metrics": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.name": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.name_scope": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.output": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.set_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.submodules": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.trainable": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.trainable_weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.weights": true, + "tf.compat.v1.keras.layers.GlobalMaxPooling3D.with_name_scope": true, + "tf.compat.v1.keras.layers.Input": false, + "tf.compat.v1.keras.layers.InputLayer": false, + "tf.compat.v1.keras.layers.InputLayer.__call__": true, + "tf.compat.v1.keras.layers.InputLayer.__eq__": true, + "tf.compat.v1.keras.layers.InputLayer.__ge__": true, + "tf.compat.v1.keras.layers.InputLayer.__gt__": true, + "tf.compat.v1.keras.layers.InputLayer.__init__": true, + "tf.compat.v1.keras.layers.InputLayer.__le__": true, + "tf.compat.v1.keras.layers.InputLayer.__lt__": true, + "tf.compat.v1.keras.layers.InputLayer.__ne__": true, + "tf.compat.v1.keras.layers.InputLayer.__new__": true, + "tf.compat.v1.keras.layers.InputLayer.activity_regularizer": true, + "tf.compat.v1.keras.layers.InputLayer.add_loss": true, + "tf.compat.v1.keras.layers.InputLayer.add_metric": true, + "tf.compat.v1.keras.layers.InputLayer.add_weight": true, + "tf.compat.v1.keras.layers.InputLayer.build": true, + 
"tf.compat.v1.keras.layers.InputLayer.call": true, + "tf.compat.v1.keras.layers.InputLayer.compute_mask": true, + "tf.compat.v1.keras.layers.InputLayer.compute_output_shape": true, + "tf.compat.v1.keras.layers.InputLayer.compute_output_signature": true, + "tf.compat.v1.keras.layers.InputLayer.count_params": true, + "tf.compat.v1.keras.layers.InputLayer.dtype": true, + "tf.compat.v1.keras.layers.InputLayer.dynamic": true, + "tf.compat.v1.keras.layers.InputLayer.from_config": true, + "tf.compat.v1.keras.layers.InputLayer.get_config": true, + "tf.compat.v1.keras.layers.InputLayer.get_weights": true, + "tf.compat.v1.keras.layers.InputLayer.input": true, + "tf.compat.v1.keras.layers.InputLayer.input_spec": true, + "tf.compat.v1.keras.layers.InputLayer.losses": true, + "tf.compat.v1.keras.layers.InputLayer.metrics": true, + "tf.compat.v1.keras.layers.InputLayer.name": true, + "tf.compat.v1.keras.layers.InputLayer.name_scope": true, + "tf.compat.v1.keras.layers.InputLayer.non_trainable_weights": true, + "tf.compat.v1.keras.layers.InputLayer.output": true, + "tf.compat.v1.keras.layers.InputLayer.set_weights": true, + "tf.compat.v1.keras.layers.InputLayer.submodules": true, + "tf.compat.v1.keras.layers.InputLayer.trainable": true, + "tf.compat.v1.keras.layers.InputLayer.trainable_weights": true, + "tf.compat.v1.keras.layers.InputLayer.weights": true, + "tf.compat.v1.keras.layers.InputLayer.with_name_scope": true, + "tf.compat.v1.keras.layers.InputSpec": false, + "tf.compat.v1.keras.layers.InputSpec.__eq__": true, + "tf.compat.v1.keras.layers.InputSpec.__ge__": true, + "tf.compat.v1.keras.layers.InputSpec.__gt__": true, + "tf.compat.v1.keras.layers.InputSpec.__init__": true, + "tf.compat.v1.keras.layers.InputSpec.__le__": true, + "tf.compat.v1.keras.layers.InputSpec.__lt__": true, + "tf.compat.v1.keras.layers.InputSpec.__ne__": true, + "tf.compat.v1.keras.layers.InputSpec.__new__": true, + "tf.compat.v1.keras.layers.InputSpec.from_config": true, + "tf.compat.v1.keras.layers.InputSpec.get_config": true, + "tf.compat.v1.keras.layers.LSTM": false, + "tf.compat.v1.keras.layers.LSTM.__call__": true, + "tf.compat.v1.keras.layers.LSTM.__eq__": true, + "tf.compat.v1.keras.layers.LSTM.__ge__": true, + "tf.compat.v1.keras.layers.LSTM.__gt__": true, + "tf.compat.v1.keras.layers.LSTM.__init__": true, + "tf.compat.v1.keras.layers.LSTM.__le__": true, + "tf.compat.v1.keras.layers.LSTM.__lt__": true, + "tf.compat.v1.keras.layers.LSTM.__ne__": true, + "tf.compat.v1.keras.layers.LSTM.__new__": true, + "tf.compat.v1.keras.layers.LSTM.activation": true, + "tf.compat.v1.keras.layers.LSTM.activity_regularizer": true, + "tf.compat.v1.keras.layers.LSTM.add_loss": true, + "tf.compat.v1.keras.layers.LSTM.add_metric": true, + "tf.compat.v1.keras.layers.LSTM.add_weight": true, + "tf.compat.v1.keras.layers.LSTM.bias_constraint": true, + "tf.compat.v1.keras.layers.LSTM.bias_initializer": true, + "tf.compat.v1.keras.layers.LSTM.bias_regularizer": true, + "tf.compat.v1.keras.layers.LSTM.build": true, + "tf.compat.v1.keras.layers.LSTM.call": true, + "tf.compat.v1.keras.layers.LSTM.compute_mask": true, + "tf.compat.v1.keras.layers.LSTM.compute_output_shape": true, + "tf.compat.v1.keras.layers.LSTM.compute_output_signature": true, + "tf.compat.v1.keras.layers.LSTM.count_params": true, + "tf.compat.v1.keras.layers.LSTM.dropout": true, + "tf.compat.v1.keras.layers.LSTM.dtype": true, + "tf.compat.v1.keras.layers.LSTM.dynamic": true, + "tf.compat.v1.keras.layers.LSTM.from_config": true, + "tf.compat.v1.keras.layers.LSTM.get_config": 
true, + "tf.compat.v1.keras.layers.LSTM.get_weights": true, + "tf.compat.v1.keras.layers.LSTM.implementation": true, + "tf.compat.v1.keras.layers.LSTM.input": true, + "tf.compat.v1.keras.layers.LSTM.input_spec": true, + "tf.compat.v1.keras.layers.LSTM.kernel_constraint": true, + "tf.compat.v1.keras.layers.LSTM.kernel_initializer": true, + "tf.compat.v1.keras.layers.LSTM.kernel_regularizer": true, + "tf.compat.v1.keras.layers.LSTM.losses": true, + "tf.compat.v1.keras.layers.LSTM.metrics": true, + "tf.compat.v1.keras.layers.LSTM.name": true, + "tf.compat.v1.keras.layers.LSTM.name_scope": true, + "tf.compat.v1.keras.layers.LSTM.non_trainable_weights": true, + "tf.compat.v1.keras.layers.LSTM.output": true, + "tf.compat.v1.keras.layers.LSTM.recurrent_activation": true, + "tf.compat.v1.keras.layers.LSTM.recurrent_constraint": true, + "tf.compat.v1.keras.layers.LSTM.recurrent_dropout": true, + "tf.compat.v1.keras.layers.LSTM.recurrent_initializer": true, + "tf.compat.v1.keras.layers.LSTM.recurrent_regularizer": true, + "tf.compat.v1.keras.layers.LSTM.reset_states": true, + "tf.compat.v1.keras.layers.LSTM.set_weights": true, + "tf.compat.v1.keras.layers.LSTM.states": true, + "tf.compat.v1.keras.layers.LSTM.submodules": true, + "tf.compat.v1.keras.layers.LSTM.trainable": true, + "tf.compat.v1.keras.layers.LSTM.trainable_weights": true, + "tf.compat.v1.keras.layers.LSTM.unit_forget_bias": true, + "tf.compat.v1.keras.layers.LSTM.units": true, + "tf.compat.v1.keras.layers.LSTM.use_bias": true, + "tf.compat.v1.keras.layers.LSTM.weights": true, + "tf.compat.v1.keras.layers.LSTM.with_name_scope": true, + "tf.compat.v1.keras.layers.LSTMCell": false, + "tf.compat.v1.keras.layers.LSTMCell.__call__": true, + "tf.compat.v1.keras.layers.LSTMCell.__eq__": true, + "tf.compat.v1.keras.layers.LSTMCell.__ge__": true, + "tf.compat.v1.keras.layers.LSTMCell.__gt__": true, + "tf.compat.v1.keras.layers.LSTMCell.__init__": true, + "tf.compat.v1.keras.layers.LSTMCell.__le__": true, + "tf.compat.v1.keras.layers.LSTMCell.__lt__": true, + "tf.compat.v1.keras.layers.LSTMCell.__ne__": true, + "tf.compat.v1.keras.layers.LSTMCell.__new__": true, + "tf.compat.v1.keras.layers.LSTMCell.activity_regularizer": true, + "tf.compat.v1.keras.layers.LSTMCell.add_loss": true, + "tf.compat.v1.keras.layers.LSTMCell.add_metric": true, + "tf.compat.v1.keras.layers.LSTMCell.add_weight": true, + "tf.compat.v1.keras.layers.LSTMCell.build": true, + "tf.compat.v1.keras.layers.LSTMCell.call": true, + "tf.compat.v1.keras.layers.LSTMCell.compute_mask": true, + "tf.compat.v1.keras.layers.LSTMCell.compute_output_shape": true, + "tf.compat.v1.keras.layers.LSTMCell.compute_output_signature": true, + "tf.compat.v1.keras.layers.LSTMCell.count_params": true, + "tf.compat.v1.keras.layers.LSTMCell.dtype": true, + "tf.compat.v1.keras.layers.LSTMCell.dynamic": true, + "tf.compat.v1.keras.layers.LSTMCell.from_config": true, + "tf.compat.v1.keras.layers.LSTMCell.get_config": true, + "tf.compat.v1.keras.layers.LSTMCell.get_dropout_mask_for_cell": true, + "tf.compat.v1.keras.layers.LSTMCell.get_initial_state": true, + "tf.compat.v1.keras.layers.LSTMCell.get_recurrent_dropout_mask_for_cell": true, + "tf.compat.v1.keras.layers.LSTMCell.get_weights": true, + "tf.compat.v1.keras.layers.LSTMCell.input": true, + "tf.compat.v1.keras.layers.LSTMCell.input_spec": true, + "tf.compat.v1.keras.layers.LSTMCell.losses": true, + "tf.compat.v1.keras.layers.LSTMCell.metrics": true, + "tf.compat.v1.keras.layers.LSTMCell.name": true, + "tf.compat.v1.keras.layers.LSTMCell.name_scope": 
true, + "tf.compat.v1.keras.layers.LSTMCell.non_trainable_weights": true, + "tf.compat.v1.keras.layers.LSTMCell.output": true, + "tf.compat.v1.keras.layers.LSTMCell.reset_dropout_mask": true, + "tf.compat.v1.keras.layers.LSTMCell.reset_recurrent_dropout_mask": true, + "tf.compat.v1.keras.layers.LSTMCell.set_weights": true, + "tf.compat.v1.keras.layers.LSTMCell.submodules": true, + "tf.compat.v1.keras.layers.LSTMCell.trainable": true, + "tf.compat.v1.keras.layers.LSTMCell.trainable_weights": true, + "tf.compat.v1.keras.layers.LSTMCell.weights": true, + "tf.compat.v1.keras.layers.LSTMCell.with_name_scope": true, + "tf.compat.v1.keras.layers.Lambda": false, + "tf.compat.v1.keras.layers.Lambda.__call__": true, + "tf.compat.v1.keras.layers.Lambda.__eq__": true, + "tf.compat.v1.keras.layers.Lambda.__ge__": true, + "tf.compat.v1.keras.layers.Lambda.__gt__": true, + "tf.compat.v1.keras.layers.Lambda.__init__": true, + "tf.compat.v1.keras.layers.Lambda.__le__": true, + "tf.compat.v1.keras.layers.Lambda.__lt__": true, + "tf.compat.v1.keras.layers.Lambda.__ne__": true, + "tf.compat.v1.keras.layers.Lambda.__new__": true, + "tf.compat.v1.keras.layers.Lambda.activity_regularizer": true, + "tf.compat.v1.keras.layers.Lambda.add_loss": true, + "tf.compat.v1.keras.layers.Lambda.add_metric": true, + "tf.compat.v1.keras.layers.Lambda.add_weight": true, + "tf.compat.v1.keras.layers.Lambda.build": true, + "tf.compat.v1.keras.layers.Lambda.call": true, + "tf.compat.v1.keras.layers.Lambda.compute_mask": true, + "tf.compat.v1.keras.layers.Lambda.compute_output_shape": true, + "tf.compat.v1.keras.layers.Lambda.compute_output_signature": true, + "tf.compat.v1.keras.layers.Lambda.count_params": true, + "tf.compat.v1.keras.layers.Lambda.dtype": true, + "tf.compat.v1.keras.layers.Lambda.dynamic": true, + "tf.compat.v1.keras.layers.Lambda.from_config": true, + "tf.compat.v1.keras.layers.Lambda.get_config": true, + "tf.compat.v1.keras.layers.Lambda.get_weights": true, + "tf.compat.v1.keras.layers.Lambda.input": true, + "tf.compat.v1.keras.layers.Lambda.input_spec": true, + "tf.compat.v1.keras.layers.Lambda.losses": true, + "tf.compat.v1.keras.layers.Lambda.metrics": true, + "tf.compat.v1.keras.layers.Lambda.name": true, + "tf.compat.v1.keras.layers.Lambda.name_scope": true, + "tf.compat.v1.keras.layers.Lambda.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Lambda.output": true, + "tf.compat.v1.keras.layers.Lambda.set_weights": true, + "tf.compat.v1.keras.layers.Lambda.submodules": true, + "tf.compat.v1.keras.layers.Lambda.trainable": true, + "tf.compat.v1.keras.layers.Lambda.trainable_weights": true, + "tf.compat.v1.keras.layers.Lambda.weights": true, + "tf.compat.v1.keras.layers.Lambda.with_name_scope": true, + "tf.compat.v1.keras.layers.Layer": false, + "tf.compat.v1.keras.layers.Layer.__call__": true, + "tf.compat.v1.keras.layers.Layer.__eq__": true, + "tf.compat.v1.keras.layers.Layer.__ge__": true, + "tf.compat.v1.keras.layers.Layer.__gt__": true, + "tf.compat.v1.keras.layers.Layer.__init__": true, + "tf.compat.v1.keras.layers.Layer.__le__": true, + "tf.compat.v1.keras.layers.Layer.__lt__": true, + "tf.compat.v1.keras.layers.Layer.__ne__": true, + "tf.compat.v1.keras.layers.Layer.__new__": true, + "tf.compat.v1.keras.layers.Layer.activity_regularizer": true, + "tf.compat.v1.keras.layers.Layer.add_loss": true, + "tf.compat.v1.keras.layers.Layer.add_metric": true, + "tf.compat.v1.keras.layers.Layer.add_weight": true, + "tf.compat.v1.keras.layers.Layer.build": true, + "tf.compat.v1.keras.layers.Layer.call": 
true, + "tf.compat.v1.keras.layers.Layer.compute_mask": true, + "tf.compat.v1.keras.layers.Layer.compute_output_shape": true, + "tf.compat.v1.keras.layers.Layer.compute_output_signature": true, + "tf.compat.v1.keras.layers.Layer.count_params": true, + "tf.compat.v1.keras.layers.Layer.dtype": true, + "tf.compat.v1.keras.layers.Layer.dynamic": true, + "tf.compat.v1.keras.layers.Layer.from_config": true, + "tf.compat.v1.keras.layers.Layer.get_config": true, + "tf.compat.v1.keras.layers.Layer.get_weights": true, + "tf.compat.v1.keras.layers.Layer.input": true, + "tf.compat.v1.keras.layers.Layer.input_spec": true, + "tf.compat.v1.keras.layers.Layer.losses": true, + "tf.compat.v1.keras.layers.Layer.metrics": true, + "tf.compat.v1.keras.layers.Layer.name": true, + "tf.compat.v1.keras.layers.Layer.name_scope": true, + "tf.compat.v1.keras.layers.Layer.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Layer.output": true, + "tf.compat.v1.keras.layers.Layer.set_weights": true, + "tf.compat.v1.keras.layers.Layer.submodules": true, + "tf.compat.v1.keras.layers.Layer.trainable": true, + "tf.compat.v1.keras.layers.Layer.trainable_weights": true, + "tf.compat.v1.keras.layers.Layer.weights": true, + "tf.compat.v1.keras.layers.Layer.with_name_scope": true, + "tf.compat.v1.keras.layers.LayerNormalization": false, + "tf.compat.v1.keras.layers.LayerNormalization.__call__": true, + "tf.compat.v1.keras.layers.LayerNormalization.__eq__": true, + "tf.compat.v1.keras.layers.LayerNormalization.__ge__": true, + "tf.compat.v1.keras.layers.LayerNormalization.__gt__": true, + "tf.compat.v1.keras.layers.LayerNormalization.__init__": true, + "tf.compat.v1.keras.layers.LayerNormalization.__le__": true, + "tf.compat.v1.keras.layers.LayerNormalization.__lt__": true, + "tf.compat.v1.keras.layers.LayerNormalization.__ne__": true, + "tf.compat.v1.keras.layers.LayerNormalization.__new__": true, + "tf.compat.v1.keras.layers.LayerNormalization.activity_regularizer": true, + "tf.compat.v1.keras.layers.LayerNormalization.add_loss": true, + "tf.compat.v1.keras.layers.LayerNormalization.add_metric": true, + "tf.compat.v1.keras.layers.LayerNormalization.add_weight": true, + "tf.compat.v1.keras.layers.LayerNormalization.build": true, + "tf.compat.v1.keras.layers.LayerNormalization.call": true, + "tf.compat.v1.keras.layers.LayerNormalization.compute_mask": true, + "tf.compat.v1.keras.layers.LayerNormalization.compute_output_shape": true, + "tf.compat.v1.keras.layers.LayerNormalization.compute_output_signature": true, + "tf.compat.v1.keras.layers.LayerNormalization.count_params": true, + "tf.compat.v1.keras.layers.LayerNormalization.dtype": true, + "tf.compat.v1.keras.layers.LayerNormalization.dynamic": true, + "tf.compat.v1.keras.layers.LayerNormalization.from_config": true, + "tf.compat.v1.keras.layers.LayerNormalization.get_config": true, + "tf.compat.v1.keras.layers.LayerNormalization.get_weights": true, + "tf.compat.v1.keras.layers.LayerNormalization.input": true, + "tf.compat.v1.keras.layers.LayerNormalization.input_spec": true, + "tf.compat.v1.keras.layers.LayerNormalization.losses": true, + "tf.compat.v1.keras.layers.LayerNormalization.metrics": true, + "tf.compat.v1.keras.layers.LayerNormalization.name": true, + "tf.compat.v1.keras.layers.LayerNormalization.name_scope": true, + "tf.compat.v1.keras.layers.LayerNormalization.non_trainable_weights": true, + "tf.compat.v1.keras.layers.LayerNormalization.output": true, + "tf.compat.v1.keras.layers.LayerNormalization.set_weights": true, + 
"tf.compat.v1.keras.layers.LayerNormalization.submodules": true, + "tf.compat.v1.keras.layers.LayerNormalization.trainable": true, + "tf.compat.v1.keras.layers.LayerNormalization.trainable_weights": true, + "tf.compat.v1.keras.layers.LayerNormalization.weights": true, + "tf.compat.v1.keras.layers.LayerNormalization.with_name_scope": true, + "tf.compat.v1.keras.layers.LeakyReLU": false, + "tf.compat.v1.keras.layers.LeakyReLU.__call__": true, + "tf.compat.v1.keras.layers.LeakyReLU.__eq__": true, + "tf.compat.v1.keras.layers.LeakyReLU.__ge__": true, + "tf.compat.v1.keras.layers.LeakyReLU.__gt__": true, + "tf.compat.v1.keras.layers.LeakyReLU.__init__": true, + "tf.compat.v1.keras.layers.LeakyReLU.__le__": true, + "tf.compat.v1.keras.layers.LeakyReLU.__lt__": true, + "tf.compat.v1.keras.layers.LeakyReLU.__ne__": true, + "tf.compat.v1.keras.layers.LeakyReLU.__new__": true, + "tf.compat.v1.keras.layers.LeakyReLU.activity_regularizer": true, + "tf.compat.v1.keras.layers.LeakyReLU.add_loss": true, + "tf.compat.v1.keras.layers.LeakyReLU.add_metric": true, + "tf.compat.v1.keras.layers.LeakyReLU.add_weight": true, + "tf.compat.v1.keras.layers.LeakyReLU.build": true, + "tf.compat.v1.keras.layers.LeakyReLU.call": true, + "tf.compat.v1.keras.layers.LeakyReLU.compute_mask": true, + "tf.compat.v1.keras.layers.LeakyReLU.compute_output_shape": true, + "tf.compat.v1.keras.layers.LeakyReLU.compute_output_signature": true, + "tf.compat.v1.keras.layers.LeakyReLU.count_params": true, + "tf.compat.v1.keras.layers.LeakyReLU.dtype": true, + "tf.compat.v1.keras.layers.LeakyReLU.dynamic": true, + "tf.compat.v1.keras.layers.LeakyReLU.from_config": true, + "tf.compat.v1.keras.layers.LeakyReLU.get_config": true, + "tf.compat.v1.keras.layers.LeakyReLU.get_weights": true, + "tf.compat.v1.keras.layers.LeakyReLU.input": true, + "tf.compat.v1.keras.layers.LeakyReLU.input_spec": true, + "tf.compat.v1.keras.layers.LeakyReLU.losses": true, + "tf.compat.v1.keras.layers.LeakyReLU.metrics": true, + "tf.compat.v1.keras.layers.LeakyReLU.name": true, + "tf.compat.v1.keras.layers.LeakyReLU.name_scope": true, + "tf.compat.v1.keras.layers.LeakyReLU.non_trainable_weights": true, + "tf.compat.v1.keras.layers.LeakyReLU.output": true, + "tf.compat.v1.keras.layers.LeakyReLU.set_weights": true, + "tf.compat.v1.keras.layers.LeakyReLU.submodules": true, + "tf.compat.v1.keras.layers.LeakyReLU.trainable": true, + "tf.compat.v1.keras.layers.LeakyReLU.trainable_weights": true, + "tf.compat.v1.keras.layers.LeakyReLU.weights": true, + "tf.compat.v1.keras.layers.LeakyReLU.with_name_scope": true, + "tf.compat.v1.keras.layers.LocallyConnected1D": false, + "tf.compat.v1.keras.layers.LocallyConnected1D.__call__": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.__eq__": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.__ge__": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.__gt__": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.__init__": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.__le__": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.__lt__": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.__ne__": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.__new__": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.add_loss": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.add_metric": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.add_weight": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.build": true, + 
"tf.compat.v1.keras.layers.LocallyConnected1D.call": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.compute_mask": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.count_params": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.dtype": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.dynamic": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.from_config": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.get_config": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.get_weights": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.input": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.input_spec": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.losses": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.metrics": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.name": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.name_scope": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.output": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.set_weights": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.submodules": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.trainable": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.trainable_weights": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.weights": true, + "tf.compat.v1.keras.layers.LocallyConnected1D.with_name_scope": true, + "tf.compat.v1.keras.layers.LocallyConnected2D": false, + "tf.compat.v1.keras.layers.LocallyConnected2D.__call__": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.__eq__": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.__ge__": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.__gt__": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.__init__": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.__le__": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.__lt__": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.__ne__": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.__new__": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.add_loss": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.add_metric": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.add_weight": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.build": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.call": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.compute_mask": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.count_params": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.dtype": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.dynamic": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.from_config": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.get_config": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.get_weights": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.input": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.input_spec": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.losses": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.metrics": true, + 
"tf.compat.v1.keras.layers.LocallyConnected2D.name": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.name_scope": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.output": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.set_weights": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.submodules": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.trainable": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.trainable_weights": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.weights": true, + "tf.compat.v1.keras.layers.LocallyConnected2D.with_name_scope": true, + "tf.compat.v1.keras.layers.Masking": false, + "tf.compat.v1.keras.layers.Masking.__call__": true, + "tf.compat.v1.keras.layers.Masking.__eq__": true, + "tf.compat.v1.keras.layers.Masking.__ge__": true, + "tf.compat.v1.keras.layers.Masking.__gt__": true, + "tf.compat.v1.keras.layers.Masking.__init__": true, + "tf.compat.v1.keras.layers.Masking.__le__": true, + "tf.compat.v1.keras.layers.Masking.__lt__": true, + "tf.compat.v1.keras.layers.Masking.__ne__": true, + "tf.compat.v1.keras.layers.Masking.__new__": true, + "tf.compat.v1.keras.layers.Masking.activity_regularizer": true, + "tf.compat.v1.keras.layers.Masking.add_loss": true, + "tf.compat.v1.keras.layers.Masking.add_metric": true, + "tf.compat.v1.keras.layers.Masking.add_weight": true, + "tf.compat.v1.keras.layers.Masking.build": true, + "tf.compat.v1.keras.layers.Masking.call": true, + "tf.compat.v1.keras.layers.Masking.compute_mask": true, + "tf.compat.v1.keras.layers.Masking.compute_output_shape": true, + "tf.compat.v1.keras.layers.Masking.compute_output_signature": true, + "tf.compat.v1.keras.layers.Masking.count_params": true, + "tf.compat.v1.keras.layers.Masking.dtype": true, + "tf.compat.v1.keras.layers.Masking.dynamic": true, + "tf.compat.v1.keras.layers.Masking.from_config": true, + "tf.compat.v1.keras.layers.Masking.get_config": true, + "tf.compat.v1.keras.layers.Masking.get_weights": true, + "tf.compat.v1.keras.layers.Masking.input": true, + "tf.compat.v1.keras.layers.Masking.input_spec": true, + "tf.compat.v1.keras.layers.Masking.losses": true, + "tf.compat.v1.keras.layers.Masking.metrics": true, + "tf.compat.v1.keras.layers.Masking.name": true, + "tf.compat.v1.keras.layers.Masking.name_scope": true, + "tf.compat.v1.keras.layers.Masking.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Masking.output": true, + "tf.compat.v1.keras.layers.Masking.set_weights": true, + "tf.compat.v1.keras.layers.Masking.submodules": true, + "tf.compat.v1.keras.layers.Masking.trainable": true, + "tf.compat.v1.keras.layers.Masking.trainable_weights": true, + "tf.compat.v1.keras.layers.Masking.weights": true, + "tf.compat.v1.keras.layers.Masking.with_name_scope": true, + "tf.compat.v1.keras.layers.MaxPool1D": false, + "tf.compat.v1.keras.layers.MaxPool1D.__call__": true, + "tf.compat.v1.keras.layers.MaxPool1D.__eq__": true, + "tf.compat.v1.keras.layers.MaxPool1D.__ge__": true, + "tf.compat.v1.keras.layers.MaxPool1D.__gt__": true, + "tf.compat.v1.keras.layers.MaxPool1D.__init__": true, + "tf.compat.v1.keras.layers.MaxPool1D.__le__": true, + "tf.compat.v1.keras.layers.MaxPool1D.__lt__": true, + "tf.compat.v1.keras.layers.MaxPool1D.__ne__": true, + "tf.compat.v1.keras.layers.MaxPool1D.__new__": true, + "tf.compat.v1.keras.layers.MaxPool1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.MaxPool1D.add_loss": true, + "tf.compat.v1.keras.layers.MaxPool1D.add_metric": 
true, + "tf.compat.v1.keras.layers.MaxPool1D.add_weight": true, + "tf.compat.v1.keras.layers.MaxPool1D.build": true, + "tf.compat.v1.keras.layers.MaxPool1D.call": true, + "tf.compat.v1.keras.layers.MaxPool1D.compute_mask": true, + "tf.compat.v1.keras.layers.MaxPool1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.MaxPool1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.MaxPool1D.count_params": true, + "tf.compat.v1.keras.layers.MaxPool1D.dtype": true, + "tf.compat.v1.keras.layers.MaxPool1D.dynamic": true, + "tf.compat.v1.keras.layers.MaxPool1D.from_config": true, + "tf.compat.v1.keras.layers.MaxPool1D.get_config": true, + "tf.compat.v1.keras.layers.MaxPool1D.get_weights": true, + "tf.compat.v1.keras.layers.MaxPool1D.input": true, + "tf.compat.v1.keras.layers.MaxPool1D.input_spec": true, + "tf.compat.v1.keras.layers.MaxPool1D.losses": true, + "tf.compat.v1.keras.layers.MaxPool1D.metrics": true, + "tf.compat.v1.keras.layers.MaxPool1D.name": true, + "tf.compat.v1.keras.layers.MaxPool1D.name_scope": true, + "tf.compat.v1.keras.layers.MaxPool1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPool1D.output": true, + "tf.compat.v1.keras.layers.MaxPool1D.set_weights": true, + "tf.compat.v1.keras.layers.MaxPool1D.submodules": true, + "tf.compat.v1.keras.layers.MaxPool1D.trainable": true, + "tf.compat.v1.keras.layers.MaxPool1D.trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPool1D.weights": true, + "tf.compat.v1.keras.layers.MaxPool1D.with_name_scope": true, + "tf.compat.v1.keras.layers.MaxPool2D": false, + "tf.compat.v1.keras.layers.MaxPool2D.__call__": true, + "tf.compat.v1.keras.layers.MaxPool2D.__eq__": true, + "tf.compat.v1.keras.layers.MaxPool2D.__ge__": true, + "tf.compat.v1.keras.layers.MaxPool2D.__gt__": true, + "tf.compat.v1.keras.layers.MaxPool2D.__init__": true, + "tf.compat.v1.keras.layers.MaxPool2D.__le__": true, + "tf.compat.v1.keras.layers.MaxPool2D.__lt__": true, + "tf.compat.v1.keras.layers.MaxPool2D.__ne__": true, + "tf.compat.v1.keras.layers.MaxPool2D.__new__": true, + "tf.compat.v1.keras.layers.MaxPool2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.MaxPool2D.add_loss": true, + "tf.compat.v1.keras.layers.MaxPool2D.add_metric": true, + "tf.compat.v1.keras.layers.MaxPool2D.add_weight": true, + "tf.compat.v1.keras.layers.MaxPool2D.build": true, + "tf.compat.v1.keras.layers.MaxPool2D.call": true, + "tf.compat.v1.keras.layers.MaxPool2D.compute_mask": true, + "tf.compat.v1.keras.layers.MaxPool2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.MaxPool2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.MaxPool2D.count_params": true, + "tf.compat.v1.keras.layers.MaxPool2D.dtype": true, + "tf.compat.v1.keras.layers.MaxPool2D.dynamic": true, + "tf.compat.v1.keras.layers.MaxPool2D.from_config": true, + "tf.compat.v1.keras.layers.MaxPool2D.get_config": true, + "tf.compat.v1.keras.layers.MaxPool2D.get_weights": true, + "tf.compat.v1.keras.layers.MaxPool2D.input": true, + "tf.compat.v1.keras.layers.MaxPool2D.input_spec": true, + "tf.compat.v1.keras.layers.MaxPool2D.losses": true, + "tf.compat.v1.keras.layers.MaxPool2D.metrics": true, + "tf.compat.v1.keras.layers.MaxPool2D.name": true, + "tf.compat.v1.keras.layers.MaxPool2D.name_scope": true, + "tf.compat.v1.keras.layers.MaxPool2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPool2D.output": true, + "tf.compat.v1.keras.layers.MaxPool2D.set_weights": true, + "tf.compat.v1.keras.layers.MaxPool2D.submodules": true, + 
"tf.compat.v1.keras.layers.MaxPool2D.trainable": true, + "tf.compat.v1.keras.layers.MaxPool2D.trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPool2D.weights": true, + "tf.compat.v1.keras.layers.MaxPool2D.with_name_scope": true, + "tf.compat.v1.keras.layers.MaxPool3D": false, + "tf.compat.v1.keras.layers.MaxPool3D.__call__": true, + "tf.compat.v1.keras.layers.MaxPool3D.__eq__": true, + "tf.compat.v1.keras.layers.MaxPool3D.__ge__": true, + "tf.compat.v1.keras.layers.MaxPool3D.__gt__": true, + "tf.compat.v1.keras.layers.MaxPool3D.__init__": true, + "tf.compat.v1.keras.layers.MaxPool3D.__le__": true, + "tf.compat.v1.keras.layers.MaxPool3D.__lt__": true, + "tf.compat.v1.keras.layers.MaxPool3D.__ne__": true, + "tf.compat.v1.keras.layers.MaxPool3D.__new__": true, + "tf.compat.v1.keras.layers.MaxPool3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.MaxPool3D.add_loss": true, + "tf.compat.v1.keras.layers.MaxPool3D.add_metric": true, + "tf.compat.v1.keras.layers.MaxPool3D.add_weight": true, + "tf.compat.v1.keras.layers.MaxPool3D.build": true, + "tf.compat.v1.keras.layers.MaxPool3D.call": true, + "tf.compat.v1.keras.layers.MaxPool3D.compute_mask": true, + "tf.compat.v1.keras.layers.MaxPool3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.MaxPool3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.MaxPool3D.count_params": true, + "tf.compat.v1.keras.layers.MaxPool3D.dtype": true, + "tf.compat.v1.keras.layers.MaxPool3D.dynamic": true, + "tf.compat.v1.keras.layers.MaxPool3D.from_config": true, + "tf.compat.v1.keras.layers.MaxPool3D.get_config": true, + "tf.compat.v1.keras.layers.MaxPool3D.get_weights": true, + "tf.compat.v1.keras.layers.MaxPool3D.input": true, + "tf.compat.v1.keras.layers.MaxPool3D.input_spec": true, + "tf.compat.v1.keras.layers.MaxPool3D.losses": true, + "tf.compat.v1.keras.layers.MaxPool3D.metrics": true, + "tf.compat.v1.keras.layers.MaxPool3D.name": true, + "tf.compat.v1.keras.layers.MaxPool3D.name_scope": true, + "tf.compat.v1.keras.layers.MaxPool3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPool3D.output": true, + "tf.compat.v1.keras.layers.MaxPool3D.set_weights": true, + "tf.compat.v1.keras.layers.MaxPool3D.submodules": true, + "tf.compat.v1.keras.layers.MaxPool3D.trainable": true, + "tf.compat.v1.keras.layers.MaxPool3D.trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPool3D.weights": true, + "tf.compat.v1.keras.layers.MaxPool3D.with_name_scope": true, + "tf.compat.v1.keras.layers.MaxPooling1D": false, + "tf.compat.v1.keras.layers.MaxPooling1D.__call__": true, + "tf.compat.v1.keras.layers.MaxPooling1D.__eq__": true, + "tf.compat.v1.keras.layers.MaxPooling1D.__ge__": true, + "tf.compat.v1.keras.layers.MaxPooling1D.__gt__": true, + "tf.compat.v1.keras.layers.MaxPooling1D.__init__": true, + "tf.compat.v1.keras.layers.MaxPooling1D.__le__": true, + "tf.compat.v1.keras.layers.MaxPooling1D.__lt__": true, + "tf.compat.v1.keras.layers.MaxPooling1D.__ne__": true, + "tf.compat.v1.keras.layers.MaxPooling1D.__new__": true, + "tf.compat.v1.keras.layers.MaxPooling1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.MaxPooling1D.add_loss": true, + "tf.compat.v1.keras.layers.MaxPooling1D.add_metric": true, + "tf.compat.v1.keras.layers.MaxPooling1D.add_weight": true, + "tf.compat.v1.keras.layers.MaxPooling1D.build": true, + "tf.compat.v1.keras.layers.MaxPooling1D.call": true, + "tf.compat.v1.keras.layers.MaxPooling1D.compute_mask": true, + "tf.compat.v1.keras.layers.MaxPooling1D.compute_output_shape": true, + 
"tf.compat.v1.keras.layers.MaxPooling1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.MaxPooling1D.count_params": true, + "tf.compat.v1.keras.layers.MaxPooling1D.dtype": true, + "tf.compat.v1.keras.layers.MaxPooling1D.dynamic": true, + "tf.compat.v1.keras.layers.MaxPooling1D.from_config": true, + "tf.compat.v1.keras.layers.MaxPooling1D.get_config": true, + "tf.compat.v1.keras.layers.MaxPooling1D.get_weights": true, + "tf.compat.v1.keras.layers.MaxPooling1D.input": true, + "tf.compat.v1.keras.layers.MaxPooling1D.input_spec": true, + "tf.compat.v1.keras.layers.MaxPooling1D.losses": true, + "tf.compat.v1.keras.layers.MaxPooling1D.metrics": true, + "tf.compat.v1.keras.layers.MaxPooling1D.name": true, + "tf.compat.v1.keras.layers.MaxPooling1D.name_scope": true, + "tf.compat.v1.keras.layers.MaxPooling1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPooling1D.output": true, + "tf.compat.v1.keras.layers.MaxPooling1D.set_weights": true, + "tf.compat.v1.keras.layers.MaxPooling1D.submodules": true, + "tf.compat.v1.keras.layers.MaxPooling1D.trainable": true, + "tf.compat.v1.keras.layers.MaxPooling1D.trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPooling1D.weights": true, + "tf.compat.v1.keras.layers.MaxPooling1D.with_name_scope": true, + "tf.compat.v1.keras.layers.MaxPooling2D": false, + "tf.compat.v1.keras.layers.MaxPooling2D.__call__": true, + "tf.compat.v1.keras.layers.MaxPooling2D.__eq__": true, + "tf.compat.v1.keras.layers.MaxPooling2D.__ge__": true, + "tf.compat.v1.keras.layers.MaxPooling2D.__gt__": true, + "tf.compat.v1.keras.layers.MaxPooling2D.__init__": true, + "tf.compat.v1.keras.layers.MaxPooling2D.__le__": true, + "tf.compat.v1.keras.layers.MaxPooling2D.__lt__": true, + "tf.compat.v1.keras.layers.MaxPooling2D.__ne__": true, + "tf.compat.v1.keras.layers.MaxPooling2D.__new__": true, + "tf.compat.v1.keras.layers.MaxPooling2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.MaxPooling2D.add_loss": true, + "tf.compat.v1.keras.layers.MaxPooling2D.add_metric": true, + "tf.compat.v1.keras.layers.MaxPooling2D.add_weight": true, + "tf.compat.v1.keras.layers.MaxPooling2D.build": true, + "tf.compat.v1.keras.layers.MaxPooling2D.call": true, + "tf.compat.v1.keras.layers.MaxPooling2D.compute_mask": true, + "tf.compat.v1.keras.layers.MaxPooling2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.MaxPooling2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.MaxPooling2D.count_params": true, + "tf.compat.v1.keras.layers.MaxPooling2D.dtype": true, + "tf.compat.v1.keras.layers.MaxPooling2D.dynamic": true, + "tf.compat.v1.keras.layers.MaxPooling2D.from_config": true, + "tf.compat.v1.keras.layers.MaxPooling2D.get_config": true, + "tf.compat.v1.keras.layers.MaxPooling2D.get_weights": true, + "tf.compat.v1.keras.layers.MaxPooling2D.input": true, + "tf.compat.v1.keras.layers.MaxPooling2D.input_spec": true, + "tf.compat.v1.keras.layers.MaxPooling2D.losses": true, + "tf.compat.v1.keras.layers.MaxPooling2D.metrics": true, + "tf.compat.v1.keras.layers.MaxPooling2D.name": true, + "tf.compat.v1.keras.layers.MaxPooling2D.name_scope": true, + "tf.compat.v1.keras.layers.MaxPooling2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPooling2D.output": true, + "tf.compat.v1.keras.layers.MaxPooling2D.set_weights": true, + "tf.compat.v1.keras.layers.MaxPooling2D.submodules": true, + "tf.compat.v1.keras.layers.MaxPooling2D.trainable": true, + "tf.compat.v1.keras.layers.MaxPooling2D.trainable_weights": true, + 
"tf.compat.v1.keras.layers.MaxPooling2D.weights": true, + "tf.compat.v1.keras.layers.MaxPooling2D.with_name_scope": true, + "tf.compat.v1.keras.layers.MaxPooling3D": false, + "tf.compat.v1.keras.layers.MaxPooling3D.__call__": true, + "tf.compat.v1.keras.layers.MaxPooling3D.__eq__": true, + "tf.compat.v1.keras.layers.MaxPooling3D.__ge__": true, + "tf.compat.v1.keras.layers.MaxPooling3D.__gt__": true, + "tf.compat.v1.keras.layers.MaxPooling3D.__init__": true, + "tf.compat.v1.keras.layers.MaxPooling3D.__le__": true, + "tf.compat.v1.keras.layers.MaxPooling3D.__lt__": true, + "tf.compat.v1.keras.layers.MaxPooling3D.__ne__": true, + "tf.compat.v1.keras.layers.MaxPooling3D.__new__": true, + "tf.compat.v1.keras.layers.MaxPooling3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.MaxPooling3D.add_loss": true, + "tf.compat.v1.keras.layers.MaxPooling3D.add_metric": true, + "tf.compat.v1.keras.layers.MaxPooling3D.add_weight": true, + "tf.compat.v1.keras.layers.MaxPooling3D.build": true, + "tf.compat.v1.keras.layers.MaxPooling3D.call": true, + "tf.compat.v1.keras.layers.MaxPooling3D.compute_mask": true, + "tf.compat.v1.keras.layers.MaxPooling3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.MaxPooling3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.MaxPooling3D.count_params": true, + "tf.compat.v1.keras.layers.MaxPooling3D.dtype": true, + "tf.compat.v1.keras.layers.MaxPooling3D.dynamic": true, + "tf.compat.v1.keras.layers.MaxPooling3D.from_config": true, + "tf.compat.v1.keras.layers.MaxPooling3D.get_config": true, + "tf.compat.v1.keras.layers.MaxPooling3D.get_weights": true, + "tf.compat.v1.keras.layers.MaxPooling3D.input": true, + "tf.compat.v1.keras.layers.MaxPooling3D.input_spec": true, + "tf.compat.v1.keras.layers.MaxPooling3D.losses": true, + "tf.compat.v1.keras.layers.MaxPooling3D.metrics": true, + "tf.compat.v1.keras.layers.MaxPooling3D.name": true, + "tf.compat.v1.keras.layers.MaxPooling3D.name_scope": true, + "tf.compat.v1.keras.layers.MaxPooling3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPooling3D.output": true, + "tf.compat.v1.keras.layers.MaxPooling3D.set_weights": true, + "tf.compat.v1.keras.layers.MaxPooling3D.submodules": true, + "tf.compat.v1.keras.layers.MaxPooling3D.trainable": true, + "tf.compat.v1.keras.layers.MaxPooling3D.trainable_weights": true, + "tf.compat.v1.keras.layers.MaxPooling3D.weights": true, + "tf.compat.v1.keras.layers.MaxPooling3D.with_name_scope": true, + "tf.compat.v1.keras.layers.Maximum": false, + "tf.compat.v1.keras.layers.Maximum.__call__": true, + "tf.compat.v1.keras.layers.Maximum.__eq__": true, + "tf.compat.v1.keras.layers.Maximum.__ge__": true, + "tf.compat.v1.keras.layers.Maximum.__gt__": true, + "tf.compat.v1.keras.layers.Maximum.__init__": true, + "tf.compat.v1.keras.layers.Maximum.__le__": true, + "tf.compat.v1.keras.layers.Maximum.__lt__": true, + "tf.compat.v1.keras.layers.Maximum.__ne__": true, + "tf.compat.v1.keras.layers.Maximum.__new__": true, + "tf.compat.v1.keras.layers.Maximum.activity_regularizer": true, + "tf.compat.v1.keras.layers.Maximum.add_loss": true, + "tf.compat.v1.keras.layers.Maximum.add_metric": true, + "tf.compat.v1.keras.layers.Maximum.add_weight": true, + "tf.compat.v1.keras.layers.Maximum.build": true, + "tf.compat.v1.keras.layers.Maximum.call": true, + "tf.compat.v1.keras.layers.Maximum.compute_mask": true, + "tf.compat.v1.keras.layers.Maximum.compute_output_shape": true, + "tf.compat.v1.keras.layers.Maximum.compute_output_signature": true, + 
"tf.compat.v1.keras.layers.Maximum.count_params": true, + "tf.compat.v1.keras.layers.Maximum.dtype": true, + "tf.compat.v1.keras.layers.Maximum.dynamic": true, + "tf.compat.v1.keras.layers.Maximum.from_config": true, + "tf.compat.v1.keras.layers.Maximum.get_config": true, + "tf.compat.v1.keras.layers.Maximum.get_weights": true, + "tf.compat.v1.keras.layers.Maximum.input": true, + "tf.compat.v1.keras.layers.Maximum.input_spec": true, + "tf.compat.v1.keras.layers.Maximum.losses": true, + "tf.compat.v1.keras.layers.Maximum.metrics": true, + "tf.compat.v1.keras.layers.Maximum.name": true, + "tf.compat.v1.keras.layers.Maximum.name_scope": true, + "tf.compat.v1.keras.layers.Maximum.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Maximum.output": true, + "tf.compat.v1.keras.layers.Maximum.set_weights": true, + "tf.compat.v1.keras.layers.Maximum.submodules": true, + "tf.compat.v1.keras.layers.Maximum.trainable": true, + "tf.compat.v1.keras.layers.Maximum.trainable_weights": true, + "tf.compat.v1.keras.layers.Maximum.weights": true, + "tf.compat.v1.keras.layers.Maximum.with_name_scope": true, + "tf.compat.v1.keras.layers.Minimum": false, + "tf.compat.v1.keras.layers.Minimum.__call__": true, + "tf.compat.v1.keras.layers.Minimum.__eq__": true, + "tf.compat.v1.keras.layers.Minimum.__ge__": true, + "tf.compat.v1.keras.layers.Minimum.__gt__": true, + "tf.compat.v1.keras.layers.Minimum.__init__": true, + "tf.compat.v1.keras.layers.Minimum.__le__": true, + "tf.compat.v1.keras.layers.Minimum.__lt__": true, + "tf.compat.v1.keras.layers.Minimum.__ne__": true, + "tf.compat.v1.keras.layers.Minimum.__new__": true, + "tf.compat.v1.keras.layers.Minimum.activity_regularizer": true, + "tf.compat.v1.keras.layers.Minimum.add_loss": true, + "tf.compat.v1.keras.layers.Minimum.add_metric": true, + "tf.compat.v1.keras.layers.Minimum.add_weight": true, + "tf.compat.v1.keras.layers.Minimum.build": true, + "tf.compat.v1.keras.layers.Minimum.call": true, + "tf.compat.v1.keras.layers.Minimum.compute_mask": true, + "tf.compat.v1.keras.layers.Minimum.compute_output_shape": true, + "tf.compat.v1.keras.layers.Minimum.compute_output_signature": true, + "tf.compat.v1.keras.layers.Minimum.count_params": true, + "tf.compat.v1.keras.layers.Minimum.dtype": true, + "tf.compat.v1.keras.layers.Minimum.dynamic": true, + "tf.compat.v1.keras.layers.Minimum.from_config": true, + "tf.compat.v1.keras.layers.Minimum.get_config": true, + "tf.compat.v1.keras.layers.Minimum.get_weights": true, + "tf.compat.v1.keras.layers.Minimum.input": true, + "tf.compat.v1.keras.layers.Minimum.input_spec": true, + "tf.compat.v1.keras.layers.Minimum.losses": true, + "tf.compat.v1.keras.layers.Minimum.metrics": true, + "tf.compat.v1.keras.layers.Minimum.name": true, + "tf.compat.v1.keras.layers.Minimum.name_scope": true, + "tf.compat.v1.keras.layers.Minimum.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Minimum.output": true, + "tf.compat.v1.keras.layers.Minimum.set_weights": true, + "tf.compat.v1.keras.layers.Minimum.submodules": true, + "tf.compat.v1.keras.layers.Minimum.trainable": true, + "tf.compat.v1.keras.layers.Minimum.trainable_weights": true, + "tf.compat.v1.keras.layers.Minimum.weights": true, + "tf.compat.v1.keras.layers.Minimum.with_name_scope": true, + "tf.compat.v1.keras.layers.Multiply": false, + "tf.compat.v1.keras.layers.Multiply.__call__": true, + "tf.compat.v1.keras.layers.Multiply.__eq__": true, + "tf.compat.v1.keras.layers.Multiply.__ge__": true, + "tf.compat.v1.keras.layers.Multiply.__gt__": true, + 
"tf.compat.v1.keras.layers.Multiply.__init__": true, + "tf.compat.v1.keras.layers.Multiply.__le__": true, + "tf.compat.v1.keras.layers.Multiply.__lt__": true, + "tf.compat.v1.keras.layers.Multiply.__ne__": true, + "tf.compat.v1.keras.layers.Multiply.__new__": true, + "tf.compat.v1.keras.layers.Multiply.activity_regularizer": true, + "tf.compat.v1.keras.layers.Multiply.add_loss": true, + "tf.compat.v1.keras.layers.Multiply.add_metric": true, + "tf.compat.v1.keras.layers.Multiply.add_weight": true, + "tf.compat.v1.keras.layers.Multiply.build": true, + "tf.compat.v1.keras.layers.Multiply.call": true, + "tf.compat.v1.keras.layers.Multiply.compute_mask": true, + "tf.compat.v1.keras.layers.Multiply.compute_output_shape": true, + "tf.compat.v1.keras.layers.Multiply.compute_output_signature": true, + "tf.compat.v1.keras.layers.Multiply.count_params": true, + "tf.compat.v1.keras.layers.Multiply.dtype": true, + "tf.compat.v1.keras.layers.Multiply.dynamic": true, + "tf.compat.v1.keras.layers.Multiply.from_config": true, + "tf.compat.v1.keras.layers.Multiply.get_config": true, + "tf.compat.v1.keras.layers.Multiply.get_weights": true, + "tf.compat.v1.keras.layers.Multiply.input": true, + "tf.compat.v1.keras.layers.Multiply.input_spec": true, + "tf.compat.v1.keras.layers.Multiply.losses": true, + "tf.compat.v1.keras.layers.Multiply.metrics": true, + "tf.compat.v1.keras.layers.Multiply.name": true, + "tf.compat.v1.keras.layers.Multiply.name_scope": true, + "tf.compat.v1.keras.layers.Multiply.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Multiply.output": true, + "tf.compat.v1.keras.layers.Multiply.set_weights": true, + "tf.compat.v1.keras.layers.Multiply.submodules": true, + "tf.compat.v1.keras.layers.Multiply.trainable": true, + "tf.compat.v1.keras.layers.Multiply.trainable_weights": true, + "tf.compat.v1.keras.layers.Multiply.weights": true, + "tf.compat.v1.keras.layers.Multiply.with_name_scope": true, + "tf.compat.v1.keras.layers.PReLU": false, + "tf.compat.v1.keras.layers.PReLU.__call__": true, + "tf.compat.v1.keras.layers.PReLU.__eq__": true, + "tf.compat.v1.keras.layers.PReLU.__ge__": true, + "tf.compat.v1.keras.layers.PReLU.__gt__": true, + "tf.compat.v1.keras.layers.PReLU.__init__": true, + "tf.compat.v1.keras.layers.PReLU.__le__": true, + "tf.compat.v1.keras.layers.PReLU.__lt__": true, + "tf.compat.v1.keras.layers.PReLU.__ne__": true, + "tf.compat.v1.keras.layers.PReLU.__new__": true, + "tf.compat.v1.keras.layers.PReLU.activity_regularizer": true, + "tf.compat.v1.keras.layers.PReLU.add_loss": true, + "tf.compat.v1.keras.layers.PReLU.add_metric": true, + "tf.compat.v1.keras.layers.PReLU.add_weight": true, + "tf.compat.v1.keras.layers.PReLU.build": true, + "tf.compat.v1.keras.layers.PReLU.call": true, + "tf.compat.v1.keras.layers.PReLU.compute_mask": true, + "tf.compat.v1.keras.layers.PReLU.compute_output_shape": true, + "tf.compat.v1.keras.layers.PReLU.compute_output_signature": true, + "tf.compat.v1.keras.layers.PReLU.count_params": true, + "tf.compat.v1.keras.layers.PReLU.dtype": true, + "tf.compat.v1.keras.layers.PReLU.dynamic": true, + "tf.compat.v1.keras.layers.PReLU.from_config": true, + "tf.compat.v1.keras.layers.PReLU.get_config": true, + "tf.compat.v1.keras.layers.PReLU.get_weights": true, + "tf.compat.v1.keras.layers.PReLU.input": true, + "tf.compat.v1.keras.layers.PReLU.input_spec": true, + "tf.compat.v1.keras.layers.PReLU.losses": true, + "tf.compat.v1.keras.layers.PReLU.metrics": true, + "tf.compat.v1.keras.layers.PReLU.name": true, + 
"tf.compat.v1.keras.layers.PReLU.name_scope": true, + "tf.compat.v1.keras.layers.PReLU.non_trainable_weights": true, + "tf.compat.v1.keras.layers.PReLU.output": true, + "tf.compat.v1.keras.layers.PReLU.set_weights": true, + "tf.compat.v1.keras.layers.PReLU.submodules": true, + "tf.compat.v1.keras.layers.PReLU.trainable": true, + "tf.compat.v1.keras.layers.PReLU.trainable_weights": true, + "tf.compat.v1.keras.layers.PReLU.weights": true, + "tf.compat.v1.keras.layers.PReLU.with_name_scope": true, + "tf.compat.v1.keras.layers.Permute": false, + "tf.compat.v1.keras.layers.Permute.__call__": true, + "tf.compat.v1.keras.layers.Permute.__eq__": true, + "tf.compat.v1.keras.layers.Permute.__ge__": true, + "tf.compat.v1.keras.layers.Permute.__gt__": true, + "tf.compat.v1.keras.layers.Permute.__init__": true, + "tf.compat.v1.keras.layers.Permute.__le__": true, + "tf.compat.v1.keras.layers.Permute.__lt__": true, + "tf.compat.v1.keras.layers.Permute.__ne__": true, + "tf.compat.v1.keras.layers.Permute.__new__": true, + "tf.compat.v1.keras.layers.Permute.activity_regularizer": true, + "tf.compat.v1.keras.layers.Permute.add_loss": true, + "tf.compat.v1.keras.layers.Permute.add_metric": true, + "tf.compat.v1.keras.layers.Permute.add_weight": true, + "tf.compat.v1.keras.layers.Permute.build": true, + "tf.compat.v1.keras.layers.Permute.call": true, + "tf.compat.v1.keras.layers.Permute.compute_mask": true, + "tf.compat.v1.keras.layers.Permute.compute_output_shape": true, + "tf.compat.v1.keras.layers.Permute.compute_output_signature": true, + "tf.compat.v1.keras.layers.Permute.count_params": true, + "tf.compat.v1.keras.layers.Permute.dtype": true, + "tf.compat.v1.keras.layers.Permute.dynamic": true, + "tf.compat.v1.keras.layers.Permute.from_config": true, + "tf.compat.v1.keras.layers.Permute.get_config": true, + "tf.compat.v1.keras.layers.Permute.get_weights": true, + "tf.compat.v1.keras.layers.Permute.input": true, + "tf.compat.v1.keras.layers.Permute.input_spec": true, + "tf.compat.v1.keras.layers.Permute.losses": true, + "tf.compat.v1.keras.layers.Permute.metrics": true, + "tf.compat.v1.keras.layers.Permute.name": true, + "tf.compat.v1.keras.layers.Permute.name_scope": true, + "tf.compat.v1.keras.layers.Permute.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Permute.output": true, + "tf.compat.v1.keras.layers.Permute.set_weights": true, + "tf.compat.v1.keras.layers.Permute.submodules": true, + "tf.compat.v1.keras.layers.Permute.trainable": true, + "tf.compat.v1.keras.layers.Permute.trainable_weights": true, + "tf.compat.v1.keras.layers.Permute.weights": true, + "tf.compat.v1.keras.layers.Permute.with_name_scope": true, + "tf.compat.v1.keras.layers.RNN": false, + "tf.compat.v1.keras.layers.RNN.__call__": true, + "tf.compat.v1.keras.layers.RNN.__eq__": true, + "tf.compat.v1.keras.layers.RNN.__ge__": true, + "tf.compat.v1.keras.layers.RNN.__gt__": true, + "tf.compat.v1.keras.layers.RNN.__init__": true, + "tf.compat.v1.keras.layers.RNN.__le__": true, + "tf.compat.v1.keras.layers.RNN.__lt__": true, + "tf.compat.v1.keras.layers.RNN.__ne__": true, + "tf.compat.v1.keras.layers.RNN.__new__": true, + "tf.compat.v1.keras.layers.RNN.activity_regularizer": true, + "tf.compat.v1.keras.layers.RNN.add_loss": true, + "tf.compat.v1.keras.layers.RNN.add_metric": true, + "tf.compat.v1.keras.layers.RNN.add_weight": true, + "tf.compat.v1.keras.layers.RNN.build": true, + "tf.compat.v1.keras.layers.RNN.call": true, + "tf.compat.v1.keras.layers.RNN.compute_mask": true, + 
"tf.compat.v1.keras.layers.RNN.compute_output_shape": true, + "tf.compat.v1.keras.layers.RNN.compute_output_signature": true, + "tf.compat.v1.keras.layers.RNN.count_params": true, + "tf.compat.v1.keras.layers.RNN.dtype": true, + "tf.compat.v1.keras.layers.RNN.dynamic": true, + "tf.compat.v1.keras.layers.RNN.from_config": true, + "tf.compat.v1.keras.layers.RNN.get_config": true, + "tf.compat.v1.keras.layers.RNN.get_weights": true, + "tf.compat.v1.keras.layers.RNN.input": true, + "tf.compat.v1.keras.layers.RNN.input_spec": true, + "tf.compat.v1.keras.layers.RNN.losses": true, + "tf.compat.v1.keras.layers.RNN.metrics": true, + "tf.compat.v1.keras.layers.RNN.name": true, + "tf.compat.v1.keras.layers.RNN.name_scope": true, + "tf.compat.v1.keras.layers.RNN.non_trainable_weights": true, + "tf.compat.v1.keras.layers.RNN.output": true, + "tf.compat.v1.keras.layers.RNN.reset_states": true, + "tf.compat.v1.keras.layers.RNN.set_weights": true, + "tf.compat.v1.keras.layers.RNN.states": true, + "tf.compat.v1.keras.layers.RNN.submodules": true, + "tf.compat.v1.keras.layers.RNN.trainable": true, + "tf.compat.v1.keras.layers.RNN.trainable_weights": true, + "tf.compat.v1.keras.layers.RNN.weights": true, + "tf.compat.v1.keras.layers.RNN.with_name_scope": true, + "tf.compat.v1.keras.layers.ReLU": false, + "tf.compat.v1.keras.layers.ReLU.__call__": true, + "tf.compat.v1.keras.layers.ReLU.__eq__": true, + "tf.compat.v1.keras.layers.ReLU.__ge__": true, + "tf.compat.v1.keras.layers.ReLU.__gt__": true, + "tf.compat.v1.keras.layers.ReLU.__init__": true, + "tf.compat.v1.keras.layers.ReLU.__le__": true, + "tf.compat.v1.keras.layers.ReLU.__lt__": true, + "tf.compat.v1.keras.layers.ReLU.__ne__": true, + "tf.compat.v1.keras.layers.ReLU.__new__": true, + "tf.compat.v1.keras.layers.ReLU.activity_regularizer": true, + "tf.compat.v1.keras.layers.ReLU.add_loss": true, + "tf.compat.v1.keras.layers.ReLU.add_metric": true, + "tf.compat.v1.keras.layers.ReLU.add_weight": true, + "tf.compat.v1.keras.layers.ReLU.build": true, + "tf.compat.v1.keras.layers.ReLU.call": true, + "tf.compat.v1.keras.layers.ReLU.compute_mask": true, + "tf.compat.v1.keras.layers.ReLU.compute_output_shape": true, + "tf.compat.v1.keras.layers.ReLU.compute_output_signature": true, + "tf.compat.v1.keras.layers.ReLU.count_params": true, + "tf.compat.v1.keras.layers.ReLU.dtype": true, + "tf.compat.v1.keras.layers.ReLU.dynamic": true, + "tf.compat.v1.keras.layers.ReLU.from_config": true, + "tf.compat.v1.keras.layers.ReLU.get_config": true, + "tf.compat.v1.keras.layers.ReLU.get_weights": true, + "tf.compat.v1.keras.layers.ReLU.input": true, + "tf.compat.v1.keras.layers.ReLU.input_spec": true, + "tf.compat.v1.keras.layers.ReLU.losses": true, + "tf.compat.v1.keras.layers.ReLU.metrics": true, + "tf.compat.v1.keras.layers.ReLU.name": true, + "tf.compat.v1.keras.layers.ReLU.name_scope": true, + "tf.compat.v1.keras.layers.ReLU.non_trainable_weights": true, + "tf.compat.v1.keras.layers.ReLU.output": true, + "tf.compat.v1.keras.layers.ReLU.set_weights": true, + "tf.compat.v1.keras.layers.ReLU.submodules": true, + "tf.compat.v1.keras.layers.ReLU.trainable": true, + "tf.compat.v1.keras.layers.ReLU.trainable_weights": true, + "tf.compat.v1.keras.layers.ReLU.weights": true, + "tf.compat.v1.keras.layers.ReLU.with_name_scope": true, + "tf.compat.v1.keras.layers.RepeatVector": false, + "tf.compat.v1.keras.layers.RepeatVector.__call__": true, + "tf.compat.v1.keras.layers.RepeatVector.__eq__": true, + "tf.compat.v1.keras.layers.RepeatVector.__ge__": true, + 
"tf.compat.v1.keras.layers.RepeatVector.__gt__": true, + "tf.compat.v1.keras.layers.RepeatVector.__init__": true, + "tf.compat.v1.keras.layers.RepeatVector.__le__": true, + "tf.compat.v1.keras.layers.RepeatVector.__lt__": true, + "tf.compat.v1.keras.layers.RepeatVector.__ne__": true, + "tf.compat.v1.keras.layers.RepeatVector.__new__": true, + "tf.compat.v1.keras.layers.RepeatVector.activity_regularizer": true, + "tf.compat.v1.keras.layers.RepeatVector.add_loss": true, + "tf.compat.v1.keras.layers.RepeatVector.add_metric": true, + "tf.compat.v1.keras.layers.RepeatVector.add_weight": true, + "tf.compat.v1.keras.layers.RepeatVector.build": true, + "tf.compat.v1.keras.layers.RepeatVector.call": true, + "tf.compat.v1.keras.layers.RepeatVector.compute_mask": true, + "tf.compat.v1.keras.layers.RepeatVector.compute_output_shape": true, + "tf.compat.v1.keras.layers.RepeatVector.compute_output_signature": true, + "tf.compat.v1.keras.layers.RepeatVector.count_params": true, + "tf.compat.v1.keras.layers.RepeatVector.dtype": true, + "tf.compat.v1.keras.layers.RepeatVector.dynamic": true, + "tf.compat.v1.keras.layers.RepeatVector.from_config": true, + "tf.compat.v1.keras.layers.RepeatVector.get_config": true, + "tf.compat.v1.keras.layers.RepeatVector.get_weights": true, + "tf.compat.v1.keras.layers.RepeatVector.input": true, + "tf.compat.v1.keras.layers.RepeatVector.input_spec": true, + "tf.compat.v1.keras.layers.RepeatVector.losses": true, + "tf.compat.v1.keras.layers.RepeatVector.metrics": true, + "tf.compat.v1.keras.layers.RepeatVector.name": true, + "tf.compat.v1.keras.layers.RepeatVector.name_scope": true, + "tf.compat.v1.keras.layers.RepeatVector.non_trainable_weights": true, + "tf.compat.v1.keras.layers.RepeatVector.output": true, + "tf.compat.v1.keras.layers.RepeatVector.set_weights": true, + "tf.compat.v1.keras.layers.RepeatVector.submodules": true, + "tf.compat.v1.keras.layers.RepeatVector.trainable": true, + "tf.compat.v1.keras.layers.RepeatVector.trainable_weights": true, + "tf.compat.v1.keras.layers.RepeatVector.weights": true, + "tf.compat.v1.keras.layers.RepeatVector.with_name_scope": true, + "tf.compat.v1.keras.layers.Reshape": false, + "tf.compat.v1.keras.layers.Reshape.__call__": true, + "tf.compat.v1.keras.layers.Reshape.__eq__": true, + "tf.compat.v1.keras.layers.Reshape.__ge__": true, + "tf.compat.v1.keras.layers.Reshape.__gt__": true, + "tf.compat.v1.keras.layers.Reshape.__init__": true, + "tf.compat.v1.keras.layers.Reshape.__le__": true, + "tf.compat.v1.keras.layers.Reshape.__lt__": true, + "tf.compat.v1.keras.layers.Reshape.__ne__": true, + "tf.compat.v1.keras.layers.Reshape.__new__": true, + "tf.compat.v1.keras.layers.Reshape.activity_regularizer": true, + "tf.compat.v1.keras.layers.Reshape.add_loss": true, + "tf.compat.v1.keras.layers.Reshape.add_metric": true, + "tf.compat.v1.keras.layers.Reshape.add_weight": true, + "tf.compat.v1.keras.layers.Reshape.build": true, + "tf.compat.v1.keras.layers.Reshape.call": true, + "tf.compat.v1.keras.layers.Reshape.compute_mask": true, + "tf.compat.v1.keras.layers.Reshape.compute_output_shape": true, + "tf.compat.v1.keras.layers.Reshape.compute_output_signature": true, + "tf.compat.v1.keras.layers.Reshape.count_params": true, + "tf.compat.v1.keras.layers.Reshape.dtype": true, + "tf.compat.v1.keras.layers.Reshape.dynamic": true, + "tf.compat.v1.keras.layers.Reshape.from_config": true, + "tf.compat.v1.keras.layers.Reshape.get_config": true, + "tf.compat.v1.keras.layers.Reshape.get_weights": true, + "tf.compat.v1.keras.layers.Reshape.input": 
true, + "tf.compat.v1.keras.layers.Reshape.input_spec": true, + "tf.compat.v1.keras.layers.Reshape.losses": true, + "tf.compat.v1.keras.layers.Reshape.metrics": true, + "tf.compat.v1.keras.layers.Reshape.name": true, + "tf.compat.v1.keras.layers.Reshape.name_scope": true, + "tf.compat.v1.keras.layers.Reshape.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Reshape.output": true, + "tf.compat.v1.keras.layers.Reshape.set_weights": true, + "tf.compat.v1.keras.layers.Reshape.submodules": true, + "tf.compat.v1.keras.layers.Reshape.trainable": true, + "tf.compat.v1.keras.layers.Reshape.trainable_weights": true, + "tf.compat.v1.keras.layers.Reshape.weights": true, + "tf.compat.v1.keras.layers.Reshape.with_name_scope": true, + "tf.compat.v1.keras.layers.SeparableConv1D": false, + "tf.compat.v1.keras.layers.SeparableConv1D.__call__": true, + "tf.compat.v1.keras.layers.SeparableConv1D.__eq__": true, + "tf.compat.v1.keras.layers.SeparableConv1D.__ge__": true, + "tf.compat.v1.keras.layers.SeparableConv1D.__gt__": true, + "tf.compat.v1.keras.layers.SeparableConv1D.__init__": true, + "tf.compat.v1.keras.layers.SeparableConv1D.__le__": true, + "tf.compat.v1.keras.layers.SeparableConv1D.__lt__": true, + "tf.compat.v1.keras.layers.SeparableConv1D.__ne__": true, + "tf.compat.v1.keras.layers.SeparableConv1D.__new__": true, + "tf.compat.v1.keras.layers.SeparableConv1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.SeparableConv1D.add_loss": true, + "tf.compat.v1.keras.layers.SeparableConv1D.add_metric": true, + "tf.compat.v1.keras.layers.SeparableConv1D.add_weight": true, + "tf.compat.v1.keras.layers.SeparableConv1D.build": true, + "tf.compat.v1.keras.layers.SeparableConv1D.call": true, + "tf.compat.v1.keras.layers.SeparableConv1D.compute_mask": true, + "tf.compat.v1.keras.layers.SeparableConv1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.SeparableConv1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.SeparableConv1D.count_params": true, + "tf.compat.v1.keras.layers.SeparableConv1D.dtype": true, + "tf.compat.v1.keras.layers.SeparableConv1D.dynamic": true, + "tf.compat.v1.keras.layers.SeparableConv1D.from_config": true, + "tf.compat.v1.keras.layers.SeparableConv1D.get_config": true, + "tf.compat.v1.keras.layers.SeparableConv1D.get_weights": true, + "tf.compat.v1.keras.layers.SeparableConv1D.input": true, + "tf.compat.v1.keras.layers.SeparableConv1D.input_spec": true, + "tf.compat.v1.keras.layers.SeparableConv1D.losses": true, + "tf.compat.v1.keras.layers.SeparableConv1D.metrics": true, + "tf.compat.v1.keras.layers.SeparableConv1D.name": true, + "tf.compat.v1.keras.layers.SeparableConv1D.name_scope": true, + "tf.compat.v1.keras.layers.SeparableConv1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.SeparableConv1D.output": true, + "tf.compat.v1.keras.layers.SeparableConv1D.set_weights": true, + "tf.compat.v1.keras.layers.SeparableConv1D.submodules": true, + "tf.compat.v1.keras.layers.SeparableConv1D.trainable": true, + "tf.compat.v1.keras.layers.SeparableConv1D.trainable_weights": true, + "tf.compat.v1.keras.layers.SeparableConv1D.weights": true, + "tf.compat.v1.keras.layers.SeparableConv1D.with_name_scope": true, + "tf.compat.v1.keras.layers.SeparableConv2D": false, + "tf.compat.v1.keras.layers.SeparableConv2D.__call__": true, + "tf.compat.v1.keras.layers.SeparableConv2D.__eq__": true, + "tf.compat.v1.keras.layers.SeparableConv2D.__ge__": true, + "tf.compat.v1.keras.layers.SeparableConv2D.__gt__": true, + 
"tf.compat.v1.keras.layers.SeparableConv2D.__init__": true, + "tf.compat.v1.keras.layers.SeparableConv2D.__le__": true, + "tf.compat.v1.keras.layers.SeparableConv2D.__lt__": true, + "tf.compat.v1.keras.layers.SeparableConv2D.__ne__": true, + "tf.compat.v1.keras.layers.SeparableConv2D.__new__": true, + "tf.compat.v1.keras.layers.SeparableConv2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.SeparableConv2D.add_loss": true, + "tf.compat.v1.keras.layers.SeparableConv2D.add_metric": true, + "tf.compat.v1.keras.layers.SeparableConv2D.add_weight": true, + "tf.compat.v1.keras.layers.SeparableConv2D.build": true, + "tf.compat.v1.keras.layers.SeparableConv2D.call": true, + "tf.compat.v1.keras.layers.SeparableConv2D.compute_mask": true, + "tf.compat.v1.keras.layers.SeparableConv2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.SeparableConv2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.SeparableConv2D.count_params": true, + "tf.compat.v1.keras.layers.SeparableConv2D.dtype": true, + "tf.compat.v1.keras.layers.SeparableConv2D.dynamic": true, + "tf.compat.v1.keras.layers.SeparableConv2D.from_config": true, + "tf.compat.v1.keras.layers.SeparableConv2D.get_config": true, + "tf.compat.v1.keras.layers.SeparableConv2D.get_weights": true, + "tf.compat.v1.keras.layers.SeparableConv2D.input": true, + "tf.compat.v1.keras.layers.SeparableConv2D.input_spec": true, + "tf.compat.v1.keras.layers.SeparableConv2D.losses": true, + "tf.compat.v1.keras.layers.SeparableConv2D.metrics": true, + "tf.compat.v1.keras.layers.SeparableConv2D.name": true, + "tf.compat.v1.keras.layers.SeparableConv2D.name_scope": true, + "tf.compat.v1.keras.layers.SeparableConv2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.SeparableConv2D.output": true, + "tf.compat.v1.keras.layers.SeparableConv2D.set_weights": true, + "tf.compat.v1.keras.layers.SeparableConv2D.submodules": true, + "tf.compat.v1.keras.layers.SeparableConv2D.trainable": true, + "tf.compat.v1.keras.layers.SeparableConv2D.trainable_weights": true, + "tf.compat.v1.keras.layers.SeparableConv2D.weights": true, + "tf.compat.v1.keras.layers.SeparableConv2D.with_name_scope": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D": false, + "tf.compat.v1.keras.layers.SeparableConvolution1D.__call__": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.__eq__": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.__ge__": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.__gt__": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.__init__": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.__le__": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.__lt__": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.__ne__": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.__new__": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.add_loss": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.add_metric": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.add_weight": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.build": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.call": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.compute_mask": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.compute_output_signature": true, + 
"tf.compat.v1.keras.layers.SeparableConvolution1D.count_params": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.dtype": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.dynamic": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.from_config": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.get_config": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.get_weights": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.input": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.input_spec": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.losses": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.metrics": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.name": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.name_scope": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.output": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.set_weights": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.submodules": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.trainable": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.trainable_weights": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.weights": true, + "tf.compat.v1.keras.layers.SeparableConvolution1D.with_name_scope": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D": false, + "tf.compat.v1.keras.layers.SeparableConvolution2D.__call__": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.__eq__": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.__ge__": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.__gt__": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.__init__": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.__le__": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.__lt__": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.__ne__": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.__new__": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.add_loss": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.add_metric": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.add_weight": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.build": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.call": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.compute_mask": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.count_params": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.dtype": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.dynamic": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.from_config": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.get_config": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.get_weights": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.input": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.input_spec": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.losses": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.metrics": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.name": true, + 
"tf.compat.v1.keras.layers.SeparableConvolution2D.name_scope": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.output": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.set_weights": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.submodules": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.trainable": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.trainable_weights": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.weights": true, + "tf.compat.v1.keras.layers.SeparableConvolution2D.with_name_scope": true, + "tf.compat.v1.keras.layers.SimpleRNN": false, + "tf.compat.v1.keras.layers.SimpleRNN.__call__": true, + "tf.compat.v1.keras.layers.SimpleRNN.__eq__": true, + "tf.compat.v1.keras.layers.SimpleRNN.__ge__": true, + "tf.compat.v1.keras.layers.SimpleRNN.__gt__": true, + "tf.compat.v1.keras.layers.SimpleRNN.__init__": true, + "tf.compat.v1.keras.layers.SimpleRNN.__le__": true, + "tf.compat.v1.keras.layers.SimpleRNN.__lt__": true, + "tf.compat.v1.keras.layers.SimpleRNN.__ne__": true, + "tf.compat.v1.keras.layers.SimpleRNN.__new__": true, + "tf.compat.v1.keras.layers.SimpleRNN.activation": true, + "tf.compat.v1.keras.layers.SimpleRNN.activity_regularizer": true, + "tf.compat.v1.keras.layers.SimpleRNN.add_loss": true, + "tf.compat.v1.keras.layers.SimpleRNN.add_metric": true, + "tf.compat.v1.keras.layers.SimpleRNN.add_weight": true, + "tf.compat.v1.keras.layers.SimpleRNN.bias_constraint": true, + "tf.compat.v1.keras.layers.SimpleRNN.bias_initializer": true, + "tf.compat.v1.keras.layers.SimpleRNN.bias_regularizer": true, + "tf.compat.v1.keras.layers.SimpleRNN.build": true, + "tf.compat.v1.keras.layers.SimpleRNN.call": true, + "tf.compat.v1.keras.layers.SimpleRNN.compute_mask": true, + "tf.compat.v1.keras.layers.SimpleRNN.compute_output_shape": true, + "tf.compat.v1.keras.layers.SimpleRNN.compute_output_signature": true, + "tf.compat.v1.keras.layers.SimpleRNN.count_params": true, + "tf.compat.v1.keras.layers.SimpleRNN.dropout": true, + "tf.compat.v1.keras.layers.SimpleRNN.dtype": true, + "tf.compat.v1.keras.layers.SimpleRNN.dynamic": true, + "tf.compat.v1.keras.layers.SimpleRNN.from_config": true, + "tf.compat.v1.keras.layers.SimpleRNN.get_config": true, + "tf.compat.v1.keras.layers.SimpleRNN.get_weights": true, + "tf.compat.v1.keras.layers.SimpleRNN.input": true, + "tf.compat.v1.keras.layers.SimpleRNN.input_spec": true, + "tf.compat.v1.keras.layers.SimpleRNN.kernel_constraint": true, + "tf.compat.v1.keras.layers.SimpleRNN.kernel_initializer": true, + "tf.compat.v1.keras.layers.SimpleRNN.kernel_regularizer": true, + "tf.compat.v1.keras.layers.SimpleRNN.losses": true, + "tf.compat.v1.keras.layers.SimpleRNN.metrics": true, + "tf.compat.v1.keras.layers.SimpleRNN.name": true, + "tf.compat.v1.keras.layers.SimpleRNN.name_scope": true, + "tf.compat.v1.keras.layers.SimpleRNN.non_trainable_weights": true, + "tf.compat.v1.keras.layers.SimpleRNN.output": true, + "tf.compat.v1.keras.layers.SimpleRNN.recurrent_constraint": true, + "tf.compat.v1.keras.layers.SimpleRNN.recurrent_dropout": true, + "tf.compat.v1.keras.layers.SimpleRNN.recurrent_initializer": true, + "tf.compat.v1.keras.layers.SimpleRNN.recurrent_regularizer": true, + "tf.compat.v1.keras.layers.SimpleRNN.reset_states": true, + "tf.compat.v1.keras.layers.SimpleRNN.set_weights": true, + "tf.compat.v1.keras.layers.SimpleRNN.states": true, + "tf.compat.v1.keras.layers.SimpleRNN.submodules": true, + 
"tf.compat.v1.keras.layers.SimpleRNN.trainable": true, + "tf.compat.v1.keras.layers.SimpleRNN.trainable_weights": true, + "tf.compat.v1.keras.layers.SimpleRNN.units": true, + "tf.compat.v1.keras.layers.SimpleRNN.use_bias": true, + "tf.compat.v1.keras.layers.SimpleRNN.weights": true, + "tf.compat.v1.keras.layers.SimpleRNN.with_name_scope": true, + "tf.compat.v1.keras.layers.SimpleRNNCell": false, + "tf.compat.v1.keras.layers.SimpleRNNCell.__call__": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.__eq__": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.__ge__": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.__gt__": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.__init__": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.__le__": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.__lt__": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.__ne__": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.__new__": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.activity_regularizer": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.add_loss": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.add_metric": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.add_weight": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.build": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.call": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.compute_mask": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.compute_output_shape": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.compute_output_signature": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.count_params": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.dtype": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.dynamic": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.from_config": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.get_config": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.get_dropout_mask_for_cell": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.get_initial_state": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.get_recurrent_dropout_mask_for_cell": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.get_weights": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.input": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.input_spec": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.losses": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.metrics": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.name": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.name_scope": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.non_trainable_weights": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.output": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.reset_dropout_mask": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.reset_recurrent_dropout_mask": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.set_weights": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.submodules": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.trainable": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.trainable_weights": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.weights": true, + "tf.compat.v1.keras.layers.SimpleRNNCell.with_name_scope": true, + "tf.compat.v1.keras.layers.Softmax": false, + "tf.compat.v1.keras.layers.Softmax.__call__": true, + "tf.compat.v1.keras.layers.Softmax.__eq__": true, + "tf.compat.v1.keras.layers.Softmax.__ge__": true, + "tf.compat.v1.keras.layers.Softmax.__gt__": true, + "tf.compat.v1.keras.layers.Softmax.__init__": true, + "tf.compat.v1.keras.layers.Softmax.__le__": true, + "tf.compat.v1.keras.layers.Softmax.__lt__": true, + 
"tf.compat.v1.keras.layers.Softmax.__ne__": true, + "tf.compat.v1.keras.layers.Softmax.__new__": true, + "tf.compat.v1.keras.layers.Softmax.activity_regularizer": true, + "tf.compat.v1.keras.layers.Softmax.add_loss": true, + "tf.compat.v1.keras.layers.Softmax.add_metric": true, + "tf.compat.v1.keras.layers.Softmax.add_weight": true, + "tf.compat.v1.keras.layers.Softmax.build": true, + "tf.compat.v1.keras.layers.Softmax.call": true, + "tf.compat.v1.keras.layers.Softmax.compute_mask": true, + "tf.compat.v1.keras.layers.Softmax.compute_output_shape": true, + "tf.compat.v1.keras.layers.Softmax.compute_output_signature": true, + "tf.compat.v1.keras.layers.Softmax.count_params": true, + "tf.compat.v1.keras.layers.Softmax.dtype": true, + "tf.compat.v1.keras.layers.Softmax.dynamic": true, + "tf.compat.v1.keras.layers.Softmax.from_config": true, + "tf.compat.v1.keras.layers.Softmax.get_config": true, + "tf.compat.v1.keras.layers.Softmax.get_weights": true, + "tf.compat.v1.keras.layers.Softmax.input": true, + "tf.compat.v1.keras.layers.Softmax.input_spec": true, + "tf.compat.v1.keras.layers.Softmax.losses": true, + "tf.compat.v1.keras.layers.Softmax.metrics": true, + "tf.compat.v1.keras.layers.Softmax.name": true, + "tf.compat.v1.keras.layers.Softmax.name_scope": true, + "tf.compat.v1.keras.layers.Softmax.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Softmax.output": true, + "tf.compat.v1.keras.layers.Softmax.set_weights": true, + "tf.compat.v1.keras.layers.Softmax.submodules": true, + "tf.compat.v1.keras.layers.Softmax.trainable": true, + "tf.compat.v1.keras.layers.Softmax.trainable_weights": true, + "tf.compat.v1.keras.layers.Softmax.weights": true, + "tf.compat.v1.keras.layers.Softmax.with_name_scope": true, + "tf.compat.v1.keras.layers.SpatialDropout1D": false, + "tf.compat.v1.keras.layers.SpatialDropout1D.__call__": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.__eq__": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.__ge__": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.__gt__": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.__init__": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.__le__": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.__lt__": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.__ne__": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.__new__": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.add_loss": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.add_metric": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.add_weight": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.build": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.call": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.compute_mask": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.count_params": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.dtype": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.dynamic": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.from_config": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.get_config": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.get_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.input": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.input_spec": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.losses": true, + 
"tf.compat.v1.keras.layers.SpatialDropout1D.metrics": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.name": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.name_scope": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.output": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.set_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.submodules": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.trainable": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.trainable_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.weights": true, + "tf.compat.v1.keras.layers.SpatialDropout1D.with_name_scope": true, + "tf.compat.v1.keras.layers.SpatialDropout2D": false, + "tf.compat.v1.keras.layers.SpatialDropout2D.__call__": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.__eq__": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.__ge__": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.__gt__": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.__init__": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.__le__": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.__lt__": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.__ne__": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.__new__": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.add_loss": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.add_metric": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.add_weight": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.build": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.call": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.compute_mask": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.count_params": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.dtype": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.dynamic": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.from_config": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.get_config": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.get_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.input": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.input_spec": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.losses": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.metrics": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.name": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.name_scope": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.output": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.set_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.submodules": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.trainable": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.trainable_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.weights": true, + "tf.compat.v1.keras.layers.SpatialDropout2D.with_name_scope": true, + "tf.compat.v1.keras.layers.SpatialDropout3D": false, + "tf.compat.v1.keras.layers.SpatialDropout3D.__call__": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.__eq__": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.__ge__": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.__gt__": true, + 
"tf.compat.v1.keras.layers.SpatialDropout3D.__init__": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.__le__": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.__lt__": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.__ne__": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.__new__": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.add_loss": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.add_metric": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.add_weight": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.build": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.call": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.compute_mask": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.count_params": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.dtype": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.dynamic": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.from_config": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.get_config": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.get_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.input": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.input_spec": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.losses": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.metrics": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.name": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.name_scope": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.output": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.set_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.submodules": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.trainable": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.trainable_weights": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.weights": true, + "tf.compat.v1.keras.layers.SpatialDropout3D.with_name_scope": true, + "tf.compat.v1.keras.layers.StackedRNNCells": false, + "tf.compat.v1.keras.layers.StackedRNNCells.__call__": true, + "tf.compat.v1.keras.layers.StackedRNNCells.__eq__": true, + "tf.compat.v1.keras.layers.StackedRNNCells.__ge__": true, + "tf.compat.v1.keras.layers.StackedRNNCells.__gt__": true, + "tf.compat.v1.keras.layers.StackedRNNCells.__init__": true, + "tf.compat.v1.keras.layers.StackedRNNCells.__le__": true, + "tf.compat.v1.keras.layers.StackedRNNCells.__lt__": true, + "tf.compat.v1.keras.layers.StackedRNNCells.__ne__": true, + "tf.compat.v1.keras.layers.StackedRNNCells.__new__": true, + "tf.compat.v1.keras.layers.StackedRNNCells.activity_regularizer": true, + "tf.compat.v1.keras.layers.StackedRNNCells.add_loss": true, + "tf.compat.v1.keras.layers.StackedRNNCells.add_metric": true, + "tf.compat.v1.keras.layers.StackedRNNCells.add_weight": true, + "tf.compat.v1.keras.layers.StackedRNNCells.build": true, + "tf.compat.v1.keras.layers.StackedRNNCells.call": true, + "tf.compat.v1.keras.layers.StackedRNNCells.compute_mask": true, + "tf.compat.v1.keras.layers.StackedRNNCells.compute_output_shape": true, + "tf.compat.v1.keras.layers.StackedRNNCells.compute_output_signature": true, + "tf.compat.v1.keras.layers.StackedRNNCells.count_params": true, + "tf.compat.v1.keras.layers.StackedRNNCells.dtype": true, + 
"tf.compat.v1.keras.layers.StackedRNNCells.dynamic": true, + "tf.compat.v1.keras.layers.StackedRNNCells.from_config": true, + "tf.compat.v1.keras.layers.StackedRNNCells.get_config": true, + "tf.compat.v1.keras.layers.StackedRNNCells.get_initial_state": true, + "tf.compat.v1.keras.layers.StackedRNNCells.get_weights": true, + "tf.compat.v1.keras.layers.StackedRNNCells.input": true, + "tf.compat.v1.keras.layers.StackedRNNCells.input_spec": true, + "tf.compat.v1.keras.layers.StackedRNNCells.losses": true, + "tf.compat.v1.keras.layers.StackedRNNCells.metrics": true, + "tf.compat.v1.keras.layers.StackedRNNCells.name": true, + "tf.compat.v1.keras.layers.StackedRNNCells.name_scope": true, + "tf.compat.v1.keras.layers.StackedRNNCells.non_trainable_weights": true, + "tf.compat.v1.keras.layers.StackedRNNCells.output": true, + "tf.compat.v1.keras.layers.StackedRNNCells.output_size": true, + "tf.compat.v1.keras.layers.StackedRNNCells.set_weights": true, + "tf.compat.v1.keras.layers.StackedRNNCells.state_size": true, + "tf.compat.v1.keras.layers.StackedRNNCells.submodules": true, + "tf.compat.v1.keras.layers.StackedRNNCells.trainable": true, + "tf.compat.v1.keras.layers.StackedRNNCells.trainable_weights": true, + "tf.compat.v1.keras.layers.StackedRNNCells.weights": true, + "tf.compat.v1.keras.layers.StackedRNNCells.with_name_scope": true, + "tf.compat.v1.keras.layers.Subtract": false, + "tf.compat.v1.keras.layers.Subtract.__call__": true, + "tf.compat.v1.keras.layers.Subtract.__eq__": true, + "tf.compat.v1.keras.layers.Subtract.__ge__": true, + "tf.compat.v1.keras.layers.Subtract.__gt__": true, + "tf.compat.v1.keras.layers.Subtract.__init__": true, + "tf.compat.v1.keras.layers.Subtract.__le__": true, + "tf.compat.v1.keras.layers.Subtract.__lt__": true, + "tf.compat.v1.keras.layers.Subtract.__ne__": true, + "tf.compat.v1.keras.layers.Subtract.__new__": true, + "tf.compat.v1.keras.layers.Subtract.activity_regularizer": true, + "tf.compat.v1.keras.layers.Subtract.add_loss": true, + "tf.compat.v1.keras.layers.Subtract.add_metric": true, + "tf.compat.v1.keras.layers.Subtract.add_weight": true, + "tf.compat.v1.keras.layers.Subtract.build": true, + "tf.compat.v1.keras.layers.Subtract.call": true, + "tf.compat.v1.keras.layers.Subtract.compute_mask": true, + "tf.compat.v1.keras.layers.Subtract.compute_output_shape": true, + "tf.compat.v1.keras.layers.Subtract.compute_output_signature": true, + "tf.compat.v1.keras.layers.Subtract.count_params": true, + "tf.compat.v1.keras.layers.Subtract.dtype": true, + "tf.compat.v1.keras.layers.Subtract.dynamic": true, + "tf.compat.v1.keras.layers.Subtract.from_config": true, + "tf.compat.v1.keras.layers.Subtract.get_config": true, + "tf.compat.v1.keras.layers.Subtract.get_weights": true, + "tf.compat.v1.keras.layers.Subtract.input": true, + "tf.compat.v1.keras.layers.Subtract.input_spec": true, + "tf.compat.v1.keras.layers.Subtract.losses": true, + "tf.compat.v1.keras.layers.Subtract.metrics": true, + "tf.compat.v1.keras.layers.Subtract.name": true, + "tf.compat.v1.keras.layers.Subtract.name_scope": true, + "tf.compat.v1.keras.layers.Subtract.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Subtract.output": true, + "tf.compat.v1.keras.layers.Subtract.set_weights": true, + "tf.compat.v1.keras.layers.Subtract.submodules": true, + "tf.compat.v1.keras.layers.Subtract.trainable": true, + "tf.compat.v1.keras.layers.Subtract.trainable_weights": true, + "tf.compat.v1.keras.layers.Subtract.weights": true, + "tf.compat.v1.keras.layers.Subtract.with_name_scope": true, + 
"tf.compat.v1.keras.layers.ThresholdedReLU": false, + "tf.compat.v1.keras.layers.ThresholdedReLU.__call__": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.__eq__": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.__ge__": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.__gt__": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.__init__": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.__le__": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.__lt__": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.__ne__": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.__new__": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.activity_regularizer": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.add_loss": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.add_metric": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.add_weight": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.build": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.call": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.compute_mask": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.compute_output_shape": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.compute_output_signature": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.count_params": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.dtype": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.dynamic": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.from_config": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.get_config": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.get_weights": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.input": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.input_spec": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.losses": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.metrics": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.name": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.name_scope": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.non_trainable_weights": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.output": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.set_weights": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.submodules": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.trainable": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.trainable_weights": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.weights": true, + "tf.compat.v1.keras.layers.ThresholdedReLU.with_name_scope": true, + "tf.compat.v1.keras.layers.TimeDistributed": false, + "tf.compat.v1.keras.layers.TimeDistributed.__call__": true, + "tf.compat.v1.keras.layers.TimeDistributed.__eq__": true, + "tf.compat.v1.keras.layers.TimeDistributed.__ge__": true, + "tf.compat.v1.keras.layers.TimeDistributed.__gt__": true, + "tf.compat.v1.keras.layers.TimeDistributed.__init__": true, + "tf.compat.v1.keras.layers.TimeDistributed.__le__": true, + "tf.compat.v1.keras.layers.TimeDistributed.__lt__": true, + "tf.compat.v1.keras.layers.TimeDistributed.__ne__": true, + "tf.compat.v1.keras.layers.TimeDistributed.__new__": true, + "tf.compat.v1.keras.layers.TimeDistributed.activity_regularizer": true, + "tf.compat.v1.keras.layers.TimeDistributed.add_loss": true, + "tf.compat.v1.keras.layers.TimeDistributed.add_metric": true, + "tf.compat.v1.keras.layers.TimeDistributed.add_weight": true, + "tf.compat.v1.keras.layers.TimeDistributed.build": true, + "tf.compat.v1.keras.layers.TimeDistributed.call": true, + "tf.compat.v1.keras.layers.TimeDistributed.compute_mask": true, + 
"tf.compat.v1.keras.layers.TimeDistributed.compute_output_shape": true, + "tf.compat.v1.keras.layers.TimeDistributed.compute_output_signature": true, + "tf.compat.v1.keras.layers.TimeDistributed.count_params": true, + "tf.compat.v1.keras.layers.TimeDistributed.dtype": true, + "tf.compat.v1.keras.layers.TimeDistributed.dynamic": true, + "tf.compat.v1.keras.layers.TimeDistributed.from_config": true, + "tf.compat.v1.keras.layers.TimeDistributed.get_config": true, + "tf.compat.v1.keras.layers.TimeDistributed.get_weights": true, + "tf.compat.v1.keras.layers.TimeDistributed.input": true, + "tf.compat.v1.keras.layers.TimeDistributed.input_spec": true, + "tf.compat.v1.keras.layers.TimeDistributed.losses": true, + "tf.compat.v1.keras.layers.TimeDistributed.metrics": true, + "tf.compat.v1.keras.layers.TimeDistributed.name": true, + "tf.compat.v1.keras.layers.TimeDistributed.name_scope": true, + "tf.compat.v1.keras.layers.TimeDistributed.non_trainable_weights": true, + "tf.compat.v1.keras.layers.TimeDistributed.output": true, + "tf.compat.v1.keras.layers.TimeDistributed.set_weights": true, + "tf.compat.v1.keras.layers.TimeDistributed.submodules": true, + "tf.compat.v1.keras.layers.TimeDistributed.trainable": true, + "tf.compat.v1.keras.layers.TimeDistributed.trainable_weights": true, + "tf.compat.v1.keras.layers.TimeDistributed.weights": true, + "tf.compat.v1.keras.layers.TimeDistributed.with_name_scope": true, + "tf.compat.v1.keras.layers.UpSampling1D": false, + "tf.compat.v1.keras.layers.UpSampling1D.__call__": true, + "tf.compat.v1.keras.layers.UpSampling1D.__eq__": true, + "tf.compat.v1.keras.layers.UpSampling1D.__ge__": true, + "tf.compat.v1.keras.layers.UpSampling1D.__gt__": true, + "tf.compat.v1.keras.layers.UpSampling1D.__init__": true, + "tf.compat.v1.keras.layers.UpSampling1D.__le__": true, + "tf.compat.v1.keras.layers.UpSampling1D.__lt__": true, + "tf.compat.v1.keras.layers.UpSampling1D.__ne__": true, + "tf.compat.v1.keras.layers.UpSampling1D.__new__": true, + "tf.compat.v1.keras.layers.UpSampling1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.UpSampling1D.add_loss": true, + "tf.compat.v1.keras.layers.UpSampling1D.add_metric": true, + "tf.compat.v1.keras.layers.UpSampling1D.add_weight": true, + "tf.compat.v1.keras.layers.UpSampling1D.build": true, + "tf.compat.v1.keras.layers.UpSampling1D.call": true, + "tf.compat.v1.keras.layers.UpSampling1D.compute_mask": true, + "tf.compat.v1.keras.layers.UpSampling1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.UpSampling1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.UpSampling1D.count_params": true, + "tf.compat.v1.keras.layers.UpSampling1D.dtype": true, + "tf.compat.v1.keras.layers.UpSampling1D.dynamic": true, + "tf.compat.v1.keras.layers.UpSampling1D.from_config": true, + "tf.compat.v1.keras.layers.UpSampling1D.get_config": true, + "tf.compat.v1.keras.layers.UpSampling1D.get_weights": true, + "tf.compat.v1.keras.layers.UpSampling1D.input": true, + "tf.compat.v1.keras.layers.UpSampling1D.input_spec": true, + "tf.compat.v1.keras.layers.UpSampling1D.losses": true, + "tf.compat.v1.keras.layers.UpSampling1D.metrics": true, + "tf.compat.v1.keras.layers.UpSampling1D.name": true, + "tf.compat.v1.keras.layers.UpSampling1D.name_scope": true, + "tf.compat.v1.keras.layers.UpSampling1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.UpSampling1D.output": true, + "tf.compat.v1.keras.layers.UpSampling1D.set_weights": true, + "tf.compat.v1.keras.layers.UpSampling1D.submodules": true, + 
"tf.compat.v1.keras.layers.UpSampling1D.trainable": true, + "tf.compat.v1.keras.layers.UpSampling1D.trainable_weights": true, + "tf.compat.v1.keras.layers.UpSampling1D.weights": true, + "tf.compat.v1.keras.layers.UpSampling1D.with_name_scope": true, + "tf.compat.v1.keras.layers.UpSampling2D": false, + "tf.compat.v1.keras.layers.UpSampling2D.__call__": true, + "tf.compat.v1.keras.layers.UpSampling2D.__eq__": true, + "tf.compat.v1.keras.layers.UpSampling2D.__ge__": true, + "tf.compat.v1.keras.layers.UpSampling2D.__gt__": true, + "tf.compat.v1.keras.layers.UpSampling2D.__init__": true, + "tf.compat.v1.keras.layers.UpSampling2D.__le__": true, + "tf.compat.v1.keras.layers.UpSampling2D.__lt__": true, + "tf.compat.v1.keras.layers.UpSampling2D.__ne__": true, + "tf.compat.v1.keras.layers.UpSampling2D.__new__": true, + "tf.compat.v1.keras.layers.UpSampling2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.UpSampling2D.add_loss": true, + "tf.compat.v1.keras.layers.UpSampling2D.add_metric": true, + "tf.compat.v1.keras.layers.UpSampling2D.add_weight": true, + "tf.compat.v1.keras.layers.UpSampling2D.build": true, + "tf.compat.v1.keras.layers.UpSampling2D.call": true, + "tf.compat.v1.keras.layers.UpSampling2D.compute_mask": true, + "tf.compat.v1.keras.layers.UpSampling2D.compute_output_shape": true, + "tf.compat.v1.keras.layers.UpSampling2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.UpSampling2D.count_params": true, + "tf.compat.v1.keras.layers.UpSampling2D.dtype": true, + "tf.compat.v1.keras.layers.UpSampling2D.dynamic": true, + "tf.compat.v1.keras.layers.UpSampling2D.from_config": true, + "tf.compat.v1.keras.layers.UpSampling2D.get_config": true, + "tf.compat.v1.keras.layers.UpSampling2D.get_weights": true, + "tf.compat.v1.keras.layers.UpSampling2D.input": true, + "tf.compat.v1.keras.layers.UpSampling2D.input_spec": true, + "tf.compat.v1.keras.layers.UpSampling2D.losses": true, + "tf.compat.v1.keras.layers.UpSampling2D.metrics": true, + "tf.compat.v1.keras.layers.UpSampling2D.name": true, + "tf.compat.v1.keras.layers.UpSampling2D.name_scope": true, + "tf.compat.v1.keras.layers.UpSampling2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.UpSampling2D.output": true, + "tf.compat.v1.keras.layers.UpSampling2D.set_weights": true, + "tf.compat.v1.keras.layers.UpSampling2D.submodules": true, + "tf.compat.v1.keras.layers.UpSampling2D.trainable": true, + "tf.compat.v1.keras.layers.UpSampling2D.trainable_weights": true, + "tf.compat.v1.keras.layers.UpSampling2D.weights": true, + "tf.compat.v1.keras.layers.UpSampling2D.with_name_scope": true, + "tf.compat.v1.keras.layers.UpSampling3D": false, + "tf.compat.v1.keras.layers.UpSampling3D.__call__": true, + "tf.compat.v1.keras.layers.UpSampling3D.__eq__": true, + "tf.compat.v1.keras.layers.UpSampling3D.__ge__": true, + "tf.compat.v1.keras.layers.UpSampling3D.__gt__": true, + "tf.compat.v1.keras.layers.UpSampling3D.__init__": true, + "tf.compat.v1.keras.layers.UpSampling3D.__le__": true, + "tf.compat.v1.keras.layers.UpSampling3D.__lt__": true, + "tf.compat.v1.keras.layers.UpSampling3D.__ne__": true, + "tf.compat.v1.keras.layers.UpSampling3D.__new__": true, + "tf.compat.v1.keras.layers.UpSampling3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.UpSampling3D.add_loss": true, + "tf.compat.v1.keras.layers.UpSampling3D.add_metric": true, + "tf.compat.v1.keras.layers.UpSampling3D.add_weight": true, + "tf.compat.v1.keras.layers.UpSampling3D.build": true, + "tf.compat.v1.keras.layers.UpSampling3D.call": true, + 
"tf.compat.v1.keras.layers.UpSampling3D.compute_mask": true, + "tf.compat.v1.keras.layers.UpSampling3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.UpSampling3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.UpSampling3D.count_params": true, + "tf.compat.v1.keras.layers.UpSampling3D.dtype": true, + "tf.compat.v1.keras.layers.UpSampling3D.dynamic": true, + "tf.compat.v1.keras.layers.UpSampling3D.from_config": true, + "tf.compat.v1.keras.layers.UpSampling3D.get_config": true, + "tf.compat.v1.keras.layers.UpSampling3D.get_weights": true, + "tf.compat.v1.keras.layers.UpSampling3D.input": true, + "tf.compat.v1.keras.layers.UpSampling3D.input_spec": true, + "tf.compat.v1.keras.layers.UpSampling3D.losses": true, + "tf.compat.v1.keras.layers.UpSampling3D.metrics": true, + "tf.compat.v1.keras.layers.UpSampling3D.name": true, + "tf.compat.v1.keras.layers.UpSampling3D.name_scope": true, + "tf.compat.v1.keras.layers.UpSampling3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.UpSampling3D.output": true, + "tf.compat.v1.keras.layers.UpSampling3D.set_weights": true, + "tf.compat.v1.keras.layers.UpSampling3D.submodules": true, + "tf.compat.v1.keras.layers.UpSampling3D.trainable": true, + "tf.compat.v1.keras.layers.UpSampling3D.trainable_weights": true, + "tf.compat.v1.keras.layers.UpSampling3D.weights": true, + "tf.compat.v1.keras.layers.UpSampling3D.with_name_scope": true, + "tf.compat.v1.keras.layers.Wrapper": false, + "tf.compat.v1.keras.layers.Wrapper.__call__": true, + "tf.compat.v1.keras.layers.Wrapper.__eq__": true, + "tf.compat.v1.keras.layers.Wrapper.__ge__": true, + "tf.compat.v1.keras.layers.Wrapper.__gt__": true, + "tf.compat.v1.keras.layers.Wrapper.__init__": true, + "tf.compat.v1.keras.layers.Wrapper.__le__": true, + "tf.compat.v1.keras.layers.Wrapper.__lt__": true, + "tf.compat.v1.keras.layers.Wrapper.__ne__": true, + "tf.compat.v1.keras.layers.Wrapper.__new__": true, + "tf.compat.v1.keras.layers.Wrapper.activity_regularizer": true, + "tf.compat.v1.keras.layers.Wrapper.add_loss": true, + "tf.compat.v1.keras.layers.Wrapper.add_metric": true, + "tf.compat.v1.keras.layers.Wrapper.add_weight": true, + "tf.compat.v1.keras.layers.Wrapper.build": true, + "tf.compat.v1.keras.layers.Wrapper.call": true, + "tf.compat.v1.keras.layers.Wrapper.compute_mask": true, + "tf.compat.v1.keras.layers.Wrapper.compute_output_shape": true, + "tf.compat.v1.keras.layers.Wrapper.compute_output_signature": true, + "tf.compat.v1.keras.layers.Wrapper.count_params": true, + "tf.compat.v1.keras.layers.Wrapper.dtype": true, + "tf.compat.v1.keras.layers.Wrapper.dynamic": true, + "tf.compat.v1.keras.layers.Wrapper.from_config": true, + "tf.compat.v1.keras.layers.Wrapper.get_config": true, + "tf.compat.v1.keras.layers.Wrapper.get_weights": true, + "tf.compat.v1.keras.layers.Wrapper.input": true, + "tf.compat.v1.keras.layers.Wrapper.input_spec": true, + "tf.compat.v1.keras.layers.Wrapper.losses": true, + "tf.compat.v1.keras.layers.Wrapper.metrics": true, + "tf.compat.v1.keras.layers.Wrapper.name": true, + "tf.compat.v1.keras.layers.Wrapper.name_scope": true, + "tf.compat.v1.keras.layers.Wrapper.non_trainable_weights": true, + "tf.compat.v1.keras.layers.Wrapper.output": true, + "tf.compat.v1.keras.layers.Wrapper.set_weights": true, + "tf.compat.v1.keras.layers.Wrapper.submodules": true, + "tf.compat.v1.keras.layers.Wrapper.trainable": true, + "tf.compat.v1.keras.layers.Wrapper.trainable_weights": true, + "tf.compat.v1.keras.layers.Wrapper.weights": true, + 
"tf.compat.v1.keras.layers.Wrapper.with_name_scope": true, + "tf.compat.v1.keras.layers.ZeroPadding1D": false, + "tf.compat.v1.keras.layers.ZeroPadding1D.__call__": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.__eq__": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.__ge__": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.__gt__": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.__init__": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.__le__": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.__lt__": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.__ne__": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.__new__": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.activity_regularizer": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.add_loss": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.add_metric": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.add_weight": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.build": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.call": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.compute_mask": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.compute_output_shape": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.compute_output_signature": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.count_params": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.dtype": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.dynamic": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.from_config": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.get_config": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.get_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.input": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.input_spec": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.losses": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.metrics": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.name": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.name_scope": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.output": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.set_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.submodules": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.trainable": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.trainable_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.weights": true, + "tf.compat.v1.keras.layers.ZeroPadding1D.with_name_scope": true, + "tf.compat.v1.keras.layers.ZeroPadding2D": false, + "tf.compat.v1.keras.layers.ZeroPadding2D.__call__": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.__eq__": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.__ge__": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.__gt__": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.__init__": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.__le__": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.__lt__": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.__ne__": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.__new__": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.activity_regularizer": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.add_loss": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.add_metric": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.add_weight": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.build": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.call": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.compute_mask": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.compute_output_shape": true, + 
"tf.compat.v1.keras.layers.ZeroPadding2D.compute_output_signature": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.count_params": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.dtype": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.dynamic": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.from_config": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.get_config": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.get_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.input": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.input_spec": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.losses": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.metrics": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.name": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.name_scope": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.output": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.set_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.submodules": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.trainable": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.trainable_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.weights": true, + "tf.compat.v1.keras.layers.ZeroPadding2D.with_name_scope": true, + "tf.compat.v1.keras.layers.ZeroPadding3D": false, + "tf.compat.v1.keras.layers.ZeroPadding3D.__call__": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.__eq__": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.__ge__": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.__gt__": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.__init__": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.__le__": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.__lt__": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.__ne__": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.__new__": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.activity_regularizer": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.add_loss": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.add_metric": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.add_weight": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.build": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.call": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.compute_mask": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.compute_output_shape": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.compute_output_signature": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.count_params": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.dtype": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.dynamic": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.from_config": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.get_config": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.get_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.input": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.input_spec": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.losses": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.metrics": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.name": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.name_scope": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.non_trainable_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.output": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.set_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.submodules": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.trainable": true, + 
"tf.compat.v1.keras.layers.ZeroPadding3D.trainable_weights": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.weights": true, + "tf.compat.v1.keras.layers.ZeroPadding3D.with_name_scope": true, + "tf.compat.v1.keras.layers.add": false, + "tf.compat.v1.keras.layers.average": false, + "tf.compat.v1.keras.layers.concatenate": false, + "tf.compat.v1.keras.layers.deserialize": false, + "tf.compat.v1.keras.layers.dot": false, + "tf.compat.v1.keras.layers.experimental": false, + "tf.compat.v1.keras.layers.experimental.preprocessing": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.submodules": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.adapt": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.submodules": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.adapt": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.output": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.submodules": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.output": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.submodules": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.submodules": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.submodules": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.trainable_weights": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.submodules": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.weights": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.submodules": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.weights": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.submodules": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.trainable_weights": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.submodules": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth.with_name_scope": true, 
+ "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.submodules": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__eq__": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__init__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.submodules": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing.with_name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization": false, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__call__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__eq__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__ge__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__gt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__init__": true, + 
"tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__le__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__lt__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__ne__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.__new__": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.activity_regularizer": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.adapt": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.add_loss": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.add_metric": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.add_weight": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.build": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.call": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.compute_mask": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.compute_output_shape": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.compute_output_signature": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.count_params": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.dtype": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.dynamic": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.from_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.get_config": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.get_vocabulary": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.get_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.input": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.input_spec": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.losses": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.metrics": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.name": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.name_scope": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.non_trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.output": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.set_vocabulary": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.set_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.submodules": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.trainable": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.trainable_weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.weights": true, + "tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization.with_name_scope": true, + "tf.compat.v1.keras.layers.maximum": false, + "tf.compat.v1.keras.layers.minimum": false, + "tf.compat.v1.keras.layers.multiply": false, + "tf.compat.v1.keras.layers.serialize": false, + 
"tf.compat.v1.keras.layers.subtract": false, + "tf.compat.v1.keras.losses": false, + "tf.compat.v1.keras.losses.BinaryCrossentropy": false, + "tf.compat.v1.keras.losses.BinaryCrossentropy.__call__": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.__eq__": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.__ge__": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.__gt__": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.__init__": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.__le__": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.__lt__": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.__ne__": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.__new__": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.call": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.from_config": true, + "tf.compat.v1.keras.losses.BinaryCrossentropy.get_config": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy": false, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__call__": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__eq__": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__ge__": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__gt__": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__init__": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__le__": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__lt__": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__ne__": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.__new__": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.call": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.from_config": true, + "tf.compat.v1.keras.losses.CategoricalCrossentropy.get_config": true, + "tf.compat.v1.keras.losses.CategoricalHinge": false, + "tf.compat.v1.keras.losses.CategoricalHinge.__call__": true, + "tf.compat.v1.keras.losses.CategoricalHinge.__eq__": true, + "tf.compat.v1.keras.losses.CategoricalHinge.__ge__": true, + "tf.compat.v1.keras.losses.CategoricalHinge.__gt__": true, + "tf.compat.v1.keras.losses.CategoricalHinge.__init__": true, + "tf.compat.v1.keras.losses.CategoricalHinge.__le__": true, + "tf.compat.v1.keras.losses.CategoricalHinge.__lt__": true, + "tf.compat.v1.keras.losses.CategoricalHinge.__ne__": true, + "tf.compat.v1.keras.losses.CategoricalHinge.__new__": true, + "tf.compat.v1.keras.losses.CategoricalHinge.call": true, + "tf.compat.v1.keras.losses.CategoricalHinge.from_config": true, + "tf.compat.v1.keras.losses.CategoricalHinge.get_config": true, + "tf.compat.v1.keras.losses.CosineSimilarity": false, + "tf.compat.v1.keras.losses.CosineSimilarity.__call__": true, + "tf.compat.v1.keras.losses.CosineSimilarity.__eq__": true, + "tf.compat.v1.keras.losses.CosineSimilarity.__ge__": true, + "tf.compat.v1.keras.losses.CosineSimilarity.__gt__": true, + "tf.compat.v1.keras.losses.CosineSimilarity.__init__": true, + "tf.compat.v1.keras.losses.CosineSimilarity.__le__": true, + "tf.compat.v1.keras.losses.CosineSimilarity.__lt__": true, + "tf.compat.v1.keras.losses.CosineSimilarity.__ne__": true, + "tf.compat.v1.keras.losses.CosineSimilarity.__new__": true, + "tf.compat.v1.keras.losses.CosineSimilarity.call": true, + "tf.compat.v1.keras.losses.CosineSimilarity.from_config": true, + "tf.compat.v1.keras.losses.CosineSimilarity.get_config": true, + "tf.compat.v1.keras.losses.Hinge": false, + "tf.compat.v1.keras.losses.Hinge.__call__": true, + "tf.compat.v1.keras.losses.Hinge.__eq__": true, + 
"tf.compat.v1.keras.losses.Hinge.__ge__": true, + "tf.compat.v1.keras.losses.Hinge.__gt__": true, + "tf.compat.v1.keras.losses.Hinge.__init__": true, + "tf.compat.v1.keras.losses.Hinge.__le__": true, + "tf.compat.v1.keras.losses.Hinge.__lt__": true, + "tf.compat.v1.keras.losses.Hinge.__ne__": true, + "tf.compat.v1.keras.losses.Hinge.__new__": true, + "tf.compat.v1.keras.losses.Hinge.call": true, + "tf.compat.v1.keras.losses.Hinge.from_config": true, + "tf.compat.v1.keras.losses.Hinge.get_config": true, + "tf.compat.v1.keras.losses.Huber": false, + "tf.compat.v1.keras.losses.Huber.__call__": true, + "tf.compat.v1.keras.losses.Huber.__eq__": true, + "tf.compat.v1.keras.losses.Huber.__ge__": true, + "tf.compat.v1.keras.losses.Huber.__gt__": true, + "tf.compat.v1.keras.losses.Huber.__init__": true, + "tf.compat.v1.keras.losses.Huber.__le__": true, + "tf.compat.v1.keras.losses.Huber.__lt__": true, + "tf.compat.v1.keras.losses.Huber.__ne__": true, + "tf.compat.v1.keras.losses.Huber.__new__": true, + "tf.compat.v1.keras.losses.Huber.call": true, + "tf.compat.v1.keras.losses.Huber.from_config": true, + "tf.compat.v1.keras.losses.Huber.get_config": true, + "tf.compat.v1.keras.losses.KLD": false, + "tf.compat.v1.keras.losses.KLDivergence": false, + "tf.compat.v1.keras.losses.KLDivergence.__call__": true, + "tf.compat.v1.keras.losses.KLDivergence.__eq__": true, + "tf.compat.v1.keras.losses.KLDivergence.__ge__": true, + "tf.compat.v1.keras.losses.KLDivergence.__gt__": true, + "tf.compat.v1.keras.losses.KLDivergence.__init__": true, + "tf.compat.v1.keras.losses.KLDivergence.__le__": true, + "tf.compat.v1.keras.losses.KLDivergence.__lt__": true, + "tf.compat.v1.keras.losses.KLDivergence.__ne__": true, + "tf.compat.v1.keras.losses.KLDivergence.__new__": true, + "tf.compat.v1.keras.losses.KLDivergence.call": true, + "tf.compat.v1.keras.losses.KLDivergence.from_config": true, + "tf.compat.v1.keras.losses.KLDivergence.get_config": true, + "tf.compat.v1.keras.losses.LogCosh": false, + "tf.compat.v1.keras.losses.LogCosh.__call__": true, + "tf.compat.v1.keras.losses.LogCosh.__eq__": true, + "tf.compat.v1.keras.losses.LogCosh.__ge__": true, + "tf.compat.v1.keras.losses.LogCosh.__gt__": true, + "tf.compat.v1.keras.losses.LogCosh.__init__": true, + "tf.compat.v1.keras.losses.LogCosh.__le__": true, + "tf.compat.v1.keras.losses.LogCosh.__lt__": true, + "tf.compat.v1.keras.losses.LogCosh.__ne__": true, + "tf.compat.v1.keras.losses.LogCosh.__new__": true, + "tf.compat.v1.keras.losses.LogCosh.call": true, + "tf.compat.v1.keras.losses.LogCosh.from_config": true, + "tf.compat.v1.keras.losses.LogCosh.get_config": true, + "tf.compat.v1.keras.losses.Loss": false, + "tf.compat.v1.keras.losses.Loss.__call__": true, + "tf.compat.v1.keras.losses.Loss.__eq__": true, + "tf.compat.v1.keras.losses.Loss.__ge__": true, + "tf.compat.v1.keras.losses.Loss.__gt__": true, + "tf.compat.v1.keras.losses.Loss.__init__": true, + "tf.compat.v1.keras.losses.Loss.__le__": true, + "tf.compat.v1.keras.losses.Loss.__lt__": true, + "tf.compat.v1.keras.losses.Loss.__ne__": true, + "tf.compat.v1.keras.losses.Loss.__new__": true, + "tf.compat.v1.keras.losses.Loss.call": true, + "tf.compat.v1.keras.losses.Loss.from_config": true, + "tf.compat.v1.keras.losses.Loss.get_config": true, + "tf.compat.v1.keras.losses.MAE": false, + "tf.compat.v1.keras.losses.MAPE": false, + "tf.compat.v1.keras.losses.MSE": false, + "tf.compat.v1.keras.losses.MSLE": false, + "tf.compat.v1.keras.losses.MeanAbsoluteError": false, + 
"tf.compat.v1.keras.losses.MeanAbsoluteError.__call__": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.__eq__": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.__ge__": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.__gt__": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.__init__": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.__le__": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.__lt__": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.__ne__": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.__new__": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.call": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.from_config": true, + "tf.compat.v1.keras.losses.MeanAbsoluteError.get_config": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError": false, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__call__": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__eq__": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__ge__": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__gt__": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__init__": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__le__": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__lt__": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__ne__": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.__new__": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.call": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.from_config": true, + "tf.compat.v1.keras.losses.MeanAbsolutePercentageError.get_config": true, + "tf.compat.v1.keras.losses.MeanSquaredError": false, + "tf.compat.v1.keras.losses.MeanSquaredError.__call__": true, + "tf.compat.v1.keras.losses.MeanSquaredError.__eq__": true, + "tf.compat.v1.keras.losses.MeanSquaredError.__ge__": true, + "tf.compat.v1.keras.losses.MeanSquaredError.__gt__": true, + "tf.compat.v1.keras.losses.MeanSquaredError.__init__": true, + "tf.compat.v1.keras.losses.MeanSquaredError.__le__": true, + "tf.compat.v1.keras.losses.MeanSquaredError.__lt__": true, + "tf.compat.v1.keras.losses.MeanSquaredError.__ne__": true, + "tf.compat.v1.keras.losses.MeanSquaredError.__new__": true, + "tf.compat.v1.keras.losses.MeanSquaredError.call": true, + "tf.compat.v1.keras.losses.MeanSquaredError.from_config": true, + "tf.compat.v1.keras.losses.MeanSquaredError.get_config": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError": false, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__call__": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__eq__": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__ge__": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__gt__": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__init__": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__le__": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__lt__": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__ne__": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.__new__": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.call": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.from_config": true, + "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError.get_config": true, + "tf.compat.v1.keras.losses.Poisson": false, + "tf.compat.v1.keras.losses.Poisson.__call__": true, + 
"tf.compat.v1.keras.losses.Poisson.__eq__": true, + "tf.compat.v1.keras.losses.Poisson.__ge__": true, + "tf.compat.v1.keras.losses.Poisson.__gt__": true, + "tf.compat.v1.keras.losses.Poisson.__init__": true, + "tf.compat.v1.keras.losses.Poisson.__le__": true, + "tf.compat.v1.keras.losses.Poisson.__lt__": true, + "tf.compat.v1.keras.losses.Poisson.__ne__": true, + "tf.compat.v1.keras.losses.Poisson.__new__": true, + "tf.compat.v1.keras.losses.Poisson.call": true, + "tf.compat.v1.keras.losses.Poisson.from_config": true, + "tf.compat.v1.keras.losses.Poisson.get_config": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy": false, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__call__": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__eq__": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__ge__": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__gt__": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__init__": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__le__": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__lt__": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__ne__": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.__new__": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.call": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.from_config": true, + "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy.get_config": true, + "tf.compat.v1.keras.losses.SquaredHinge": false, + "tf.compat.v1.keras.losses.SquaredHinge.__call__": true, + "tf.compat.v1.keras.losses.SquaredHinge.__eq__": true, + "tf.compat.v1.keras.losses.SquaredHinge.__ge__": true, + "tf.compat.v1.keras.losses.SquaredHinge.__gt__": true, + "tf.compat.v1.keras.losses.SquaredHinge.__init__": true, + "tf.compat.v1.keras.losses.SquaredHinge.__le__": true, + "tf.compat.v1.keras.losses.SquaredHinge.__lt__": true, + "tf.compat.v1.keras.losses.SquaredHinge.__ne__": true, + "tf.compat.v1.keras.losses.SquaredHinge.__new__": true, + "tf.compat.v1.keras.losses.SquaredHinge.call": true, + "tf.compat.v1.keras.losses.SquaredHinge.from_config": true, + "tf.compat.v1.keras.losses.SquaredHinge.get_config": true, + "tf.compat.v1.keras.losses.binary_crossentropy": false, + "tf.compat.v1.keras.losses.categorical_crossentropy": false, + "tf.compat.v1.keras.losses.categorical_hinge": false, + "tf.compat.v1.keras.losses.cosine": false, + "tf.compat.v1.keras.losses.cosine_proximity": false, + "tf.compat.v1.keras.losses.cosine_similarity": false, + "tf.compat.v1.keras.losses.deserialize": false, + "tf.compat.v1.keras.losses.get": false, + "tf.compat.v1.keras.losses.hinge": false, + "tf.compat.v1.keras.losses.kld": false, + "tf.compat.v1.keras.losses.kullback_leibler_divergence": false, + "tf.compat.v1.keras.losses.logcosh": false, + "tf.compat.v1.keras.losses.mae": false, + "tf.compat.v1.keras.losses.mape": false, + "tf.compat.v1.keras.losses.mean_absolute_error": false, + "tf.compat.v1.keras.losses.mean_absolute_percentage_error": false, + "tf.compat.v1.keras.losses.mean_squared_error": false, + "tf.compat.v1.keras.losses.mean_squared_logarithmic_error": false, + "tf.compat.v1.keras.losses.mse": false, + "tf.compat.v1.keras.losses.msle": false, + "tf.compat.v1.keras.losses.poisson": false, + "tf.compat.v1.keras.losses.serialize": false, + "tf.compat.v1.keras.losses.sparse_categorical_crossentropy": false, + "tf.compat.v1.keras.losses.squared_hinge": 
false, + "tf.compat.v1.keras.metrics": false, + "tf.compat.v1.keras.metrics.AUC": false, + "tf.compat.v1.keras.metrics.AUC.__call__": true, + "tf.compat.v1.keras.metrics.AUC.__eq__": true, + "tf.compat.v1.keras.metrics.AUC.__ge__": true, + "tf.compat.v1.keras.metrics.AUC.__gt__": true, + "tf.compat.v1.keras.metrics.AUC.__init__": true, + "tf.compat.v1.keras.metrics.AUC.__le__": true, + "tf.compat.v1.keras.metrics.AUC.__lt__": true, + "tf.compat.v1.keras.metrics.AUC.__ne__": true, + "tf.compat.v1.keras.metrics.AUC.__new__": true, + "tf.compat.v1.keras.metrics.AUC.activity_regularizer": true, + "tf.compat.v1.keras.metrics.AUC.add_loss": true, + "tf.compat.v1.keras.metrics.AUC.add_metric": true, + "tf.compat.v1.keras.metrics.AUC.add_weight": true, + "tf.compat.v1.keras.metrics.AUC.build": true, + "tf.compat.v1.keras.metrics.AUC.call": true, + "tf.compat.v1.keras.metrics.AUC.compute_mask": true, + "tf.compat.v1.keras.metrics.AUC.compute_output_shape": true, + "tf.compat.v1.keras.metrics.AUC.compute_output_signature": true, + "tf.compat.v1.keras.metrics.AUC.count_params": true, + "tf.compat.v1.keras.metrics.AUC.dtype": true, + "tf.compat.v1.keras.metrics.AUC.dynamic": true, + "tf.compat.v1.keras.metrics.AUC.from_config": true, + "tf.compat.v1.keras.metrics.AUC.get_config": true, + "tf.compat.v1.keras.metrics.AUC.get_weights": true, + "tf.compat.v1.keras.metrics.AUC.input": true, + "tf.compat.v1.keras.metrics.AUC.input_spec": true, + "tf.compat.v1.keras.metrics.AUC.interpolate_pr_auc": true, + "tf.compat.v1.keras.metrics.AUC.losses": true, + "tf.compat.v1.keras.metrics.AUC.metrics": true, + "tf.compat.v1.keras.metrics.AUC.name": true, + "tf.compat.v1.keras.metrics.AUC.name_scope": true, + "tf.compat.v1.keras.metrics.AUC.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.AUC.output": true, + "tf.compat.v1.keras.metrics.AUC.reset_states": true, + "tf.compat.v1.keras.metrics.AUC.result": true, + "tf.compat.v1.keras.metrics.AUC.set_weights": true, + "tf.compat.v1.keras.metrics.AUC.submodules": true, + "tf.compat.v1.keras.metrics.AUC.trainable": true, + "tf.compat.v1.keras.metrics.AUC.trainable_weights": true, + "tf.compat.v1.keras.metrics.AUC.update_state": true, + "tf.compat.v1.keras.metrics.AUC.weights": true, + "tf.compat.v1.keras.metrics.AUC.with_name_scope": true, + "tf.compat.v1.keras.metrics.Accuracy": false, + "tf.compat.v1.keras.metrics.Accuracy.__call__": true, + "tf.compat.v1.keras.metrics.Accuracy.__eq__": true, + "tf.compat.v1.keras.metrics.Accuracy.__ge__": true, + "tf.compat.v1.keras.metrics.Accuracy.__gt__": true, + "tf.compat.v1.keras.metrics.Accuracy.__init__": true, + "tf.compat.v1.keras.metrics.Accuracy.__le__": true, + "tf.compat.v1.keras.metrics.Accuracy.__lt__": true, + "tf.compat.v1.keras.metrics.Accuracy.__ne__": true, + "tf.compat.v1.keras.metrics.Accuracy.__new__": true, + "tf.compat.v1.keras.metrics.Accuracy.activity_regularizer": true, + "tf.compat.v1.keras.metrics.Accuracy.add_loss": true, + "tf.compat.v1.keras.metrics.Accuracy.add_metric": true, + "tf.compat.v1.keras.metrics.Accuracy.add_weight": true, + "tf.compat.v1.keras.metrics.Accuracy.build": true, + "tf.compat.v1.keras.metrics.Accuracy.call": true, + "tf.compat.v1.keras.metrics.Accuracy.compute_mask": true, + "tf.compat.v1.keras.metrics.Accuracy.compute_output_shape": true, + "tf.compat.v1.keras.metrics.Accuracy.compute_output_signature": true, + "tf.compat.v1.keras.metrics.Accuracy.count_params": true, + "tf.compat.v1.keras.metrics.Accuracy.dtype": true, + "tf.compat.v1.keras.metrics.Accuracy.dynamic": 
true, + "tf.compat.v1.keras.metrics.Accuracy.from_config": true, + "tf.compat.v1.keras.metrics.Accuracy.get_config": true, + "tf.compat.v1.keras.metrics.Accuracy.get_weights": true, + "tf.compat.v1.keras.metrics.Accuracy.input": true, + "tf.compat.v1.keras.metrics.Accuracy.input_spec": true, + "tf.compat.v1.keras.metrics.Accuracy.losses": true, + "tf.compat.v1.keras.metrics.Accuracy.metrics": true, + "tf.compat.v1.keras.metrics.Accuracy.name": true, + "tf.compat.v1.keras.metrics.Accuracy.name_scope": true, + "tf.compat.v1.keras.metrics.Accuracy.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.Accuracy.output": true, + "tf.compat.v1.keras.metrics.Accuracy.reset_states": true, + "tf.compat.v1.keras.metrics.Accuracy.result": true, + "tf.compat.v1.keras.metrics.Accuracy.set_weights": true, + "tf.compat.v1.keras.metrics.Accuracy.submodules": true, + "tf.compat.v1.keras.metrics.Accuracy.trainable": true, + "tf.compat.v1.keras.metrics.Accuracy.trainable_weights": true, + "tf.compat.v1.keras.metrics.Accuracy.update_state": true, + "tf.compat.v1.keras.metrics.Accuracy.weights": true, + "tf.compat.v1.keras.metrics.Accuracy.with_name_scope": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy": false, + "tf.compat.v1.keras.metrics.BinaryAccuracy.__call__": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.__eq__": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.__ge__": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.__gt__": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.__init__": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.__le__": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.__lt__": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.__ne__": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.__new__": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.activity_regularizer": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.add_loss": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.add_metric": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.add_weight": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.build": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.call": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.compute_mask": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.compute_output_shape": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.compute_output_signature": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.count_params": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.dtype": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.dynamic": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.from_config": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.get_config": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.get_weights": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.input": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.input_spec": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.losses": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.metrics": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.name": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.name_scope": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.output": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.reset_states": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.result": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.set_weights": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.submodules": true, + 
"tf.compat.v1.keras.metrics.BinaryAccuracy.trainable": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.trainable_weights": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.update_state": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.weights": true, + "tf.compat.v1.keras.metrics.BinaryAccuracy.with_name_scope": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy": false, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__call__": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__eq__": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__ge__": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__gt__": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__init__": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__le__": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__lt__": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__ne__": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.__new__": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.activity_regularizer": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.add_loss": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.add_metric": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.add_weight": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.build": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.call": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.compute_mask": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.compute_output_shape": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.compute_output_signature": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.count_params": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.dtype": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.dynamic": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.from_config": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.get_config": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.get_weights": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.input": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.input_spec": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.losses": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.metrics": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.name": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.name_scope": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.output": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.reset_states": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.result": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.set_weights": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.submodules": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.trainable": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.trainable_weights": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.update_state": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.weights": true, + "tf.compat.v1.keras.metrics.BinaryCrossentropy.with_name_scope": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy": false, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__call__": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__eq__": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__ge__": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__gt__": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__init__": true, + 
"tf.compat.v1.keras.metrics.CategoricalAccuracy.__le__": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__lt__": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__ne__": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.__new__": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.activity_regularizer": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.add_loss": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.add_metric": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.add_weight": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.build": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.call": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.compute_mask": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.compute_output_shape": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.compute_output_signature": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.count_params": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.dtype": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.dynamic": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.from_config": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.get_config": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.get_weights": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.input": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.input_spec": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.losses": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.metrics": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.name": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.name_scope": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.output": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.reset_states": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.result": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.set_weights": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.submodules": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.trainable": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.trainable_weights": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.update_state": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.weights": true, + "tf.compat.v1.keras.metrics.CategoricalAccuracy.with_name_scope": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy": false, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__call__": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__eq__": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__ge__": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__gt__": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__init__": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__le__": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__lt__": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__ne__": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.__new__": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.activity_regularizer": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.add_loss": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.add_metric": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.add_weight": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.build": true, + 
"tf.compat.v1.keras.metrics.CategoricalCrossentropy.call": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.compute_mask": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.compute_output_shape": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.compute_output_signature": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.count_params": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.dtype": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.dynamic": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.from_config": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.get_config": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.get_weights": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.input": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.input_spec": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.losses": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.metrics": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.name": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.name_scope": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.output": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.reset_states": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.result": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.set_weights": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.submodules": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.trainable": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.trainable_weights": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.update_state": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.weights": true, + "tf.compat.v1.keras.metrics.CategoricalCrossentropy.with_name_scope": true, + "tf.compat.v1.keras.metrics.CategoricalHinge": false, + "tf.compat.v1.keras.metrics.CategoricalHinge.__call__": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.__eq__": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.__ge__": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.__gt__": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.__init__": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.__le__": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.__lt__": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.__ne__": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.__new__": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.activity_regularizer": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.add_loss": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.add_metric": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.add_weight": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.build": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.call": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.compute_mask": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.compute_output_shape": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.compute_output_signature": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.count_params": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.dtype": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.dynamic": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.from_config": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.get_config": true, + 
"tf.compat.v1.keras.metrics.CategoricalHinge.get_weights": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.input": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.input_spec": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.losses": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.metrics": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.name": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.name_scope": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.output": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.reset_states": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.result": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.set_weights": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.submodules": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.trainable": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.trainable_weights": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.update_state": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.weights": true, + "tf.compat.v1.keras.metrics.CategoricalHinge.with_name_scope": true, + "tf.compat.v1.keras.metrics.CosineSimilarity": false, + "tf.compat.v1.keras.metrics.CosineSimilarity.__call__": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.__eq__": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.__ge__": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.__gt__": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.__init__": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.__le__": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.__lt__": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.__ne__": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.__new__": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.activity_regularizer": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.add_loss": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.add_metric": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.add_weight": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.build": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.call": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.compute_mask": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.compute_output_shape": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.compute_output_signature": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.count_params": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.dtype": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.dynamic": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.from_config": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.get_config": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.get_weights": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.input": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.input_spec": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.losses": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.metrics": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.name": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.name_scope": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.output": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.reset_states": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.result": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.set_weights": true, + 
"tf.compat.v1.keras.metrics.CosineSimilarity.submodules": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.trainable": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.trainable_weights": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.update_state": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.weights": true, + "tf.compat.v1.keras.metrics.CosineSimilarity.with_name_scope": true, + "tf.compat.v1.keras.metrics.FalseNegatives": false, + "tf.compat.v1.keras.metrics.FalseNegatives.__call__": true, + "tf.compat.v1.keras.metrics.FalseNegatives.__eq__": true, + "tf.compat.v1.keras.metrics.FalseNegatives.__ge__": true, + "tf.compat.v1.keras.metrics.FalseNegatives.__gt__": true, + "tf.compat.v1.keras.metrics.FalseNegatives.__init__": true, + "tf.compat.v1.keras.metrics.FalseNegatives.__le__": true, + "tf.compat.v1.keras.metrics.FalseNegatives.__lt__": true, + "tf.compat.v1.keras.metrics.FalseNegatives.__ne__": true, + "tf.compat.v1.keras.metrics.FalseNegatives.__new__": true, + "tf.compat.v1.keras.metrics.FalseNegatives.activity_regularizer": true, + "tf.compat.v1.keras.metrics.FalseNegatives.add_loss": true, + "tf.compat.v1.keras.metrics.FalseNegatives.add_metric": true, + "tf.compat.v1.keras.metrics.FalseNegatives.add_weight": true, + "tf.compat.v1.keras.metrics.FalseNegatives.build": true, + "tf.compat.v1.keras.metrics.FalseNegatives.call": true, + "tf.compat.v1.keras.metrics.FalseNegatives.compute_mask": true, + "tf.compat.v1.keras.metrics.FalseNegatives.compute_output_shape": true, + "tf.compat.v1.keras.metrics.FalseNegatives.compute_output_signature": true, + "tf.compat.v1.keras.metrics.FalseNegatives.count_params": true, + "tf.compat.v1.keras.metrics.FalseNegatives.dtype": true, + "tf.compat.v1.keras.metrics.FalseNegatives.dynamic": true, + "tf.compat.v1.keras.metrics.FalseNegatives.from_config": true, + "tf.compat.v1.keras.metrics.FalseNegatives.get_config": true, + "tf.compat.v1.keras.metrics.FalseNegatives.get_weights": true, + "tf.compat.v1.keras.metrics.FalseNegatives.input": true, + "tf.compat.v1.keras.metrics.FalseNegatives.input_spec": true, + "tf.compat.v1.keras.metrics.FalseNegatives.losses": true, + "tf.compat.v1.keras.metrics.FalseNegatives.metrics": true, + "tf.compat.v1.keras.metrics.FalseNegatives.name": true, + "tf.compat.v1.keras.metrics.FalseNegatives.name_scope": true, + "tf.compat.v1.keras.metrics.FalseNegatives.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.FalseNegatives.output": true, + "tf.compat.v1.keras.metrics.FalseNegatives.reset_states": true, + "tf.compat.v1.keras.metrics.FalseNegatives.result": true, + "tf.compat.v1.keras.metrics.FalseNegatives.set_weights": true, + "tf.compat.v1.keras.metrics.FalseNegatives.submodules": true, + "tf.compat.v1.keras.metrics.FalseNegatives.trainable": true, + "tf.compat.v1.keras.metrics.FalseNegatives.trainable_weights": true, + "tf.compat.v1.keras.metrics.FalseNegatives.update_state": true, + "tf.compat.v1.keras.metrics.FalseNegatives.weights": true, + "tf.compat.v1.keras.metrics.FalseNegatives.with_name_scope": true, + "tf.compat.v1.keras.metrics.FalsePositives": false, + "tf.compat.v1.keras.metrics.FalsePositives.__call__": true, + "tf.compat.v1.keras.metrics.FalsePositives.__eq__": true, + "tf.compat.v1.keras.metrics.FalsePositives.__ge__": true, + "tf.compat.v1.keras.metrics.FalsePositives.__gt__": true, + "tf.compat.v1.keras.metrics.FalsePositives.__init__": true, + "tf.compat.v1.keras.metrics.FalsePositives.__le__": true, + "tf.compat.v1.keras.metrics.FalsePositives.__lt__": true, + 
"tf.compat.v1.keras.metrics.FalsePositives.__ne__": true, + "tf.compat.v1.keras.metrics.FalsePositives.__new__": true, + "tf.compat.v1.keras.metrics.FalsePositives.activity_regularizer": true, + "tf.compat.v1.keras.metrics.FalsePositives.add_loss": true, + "tf.compat.v1.keras.metrics.FalsePositives.add_metric": true, + "tf.compat.v1.keras.metrics.FalsePositives.add_weight": true, + "tf.compat.v1.keras.metrics.FalsePositives.build": true, + "tf.compat.v1.keras.metrics.FalsePositives.call": true, + "tf.compat.v1.keras.metrics.FalsePositives.compute_mask": true, + "tf.compat.v1.keras.metrics.FalsePositives.compute_output_shape": true, + "tf.compat.v1.keras.metrics.FalsePositives.compute_output_signature": true, + "tf.compat.v1.keras.metrics.FalsePositives.count_params": true, + "tf.compat.v1.keras.metrics.FalsePositives.dtype": true, + "tf.compat.v1.keras.metrics.FalsePositives.dynamic": true, + "tf.compat.v1.keras.metrics.FalsePositives.from_config": true, + "tf.compat.v1.keras.metrics.FalsePositives.get_config": true, + "tf.compat.v1.keras.metrics.FalsePositives.get_weights": true, + "tf.compat.v1.keras.metrics.FalsePositives.input": true, + "tf.compat.v1.keras.metrics.FalsePositives.input_spec": true, + "tf.compat.v1.keras.metrics.FalsePositives.losses": true, + "tf.compat.v1.keras.metrics.FalsePositives.metrics": true, + "tf.compat.v1.keras.metrics.FalsePositives.name": true, + "tf.compat.v1.keras.metrics.FalsePositives.name_scope": true, + "tf.compat.v1.keras.metrics.FalsePositives.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.FalsePositives.output": true, + "tf.compat.v1.keras.metrics.FalsePositives.reset_states": true, + "tf.compat.v1.keras.metrics.FalsePositives.result": true, + "tf.compat.v1.keras.metrics.FalsePositives.set_weights": true, + "tf.compat.v1.keras.metrics.FalsePositives.submodules": true, + "tf.compat.v1.keras.metrics.FalsePositives.trainable": true, + "tf.compat.v1.keras.metrics.FalsePositives.trainable_weights": true, + "tf.compat.v1.keras.metrics.FalsePositives.update_state": true, + "tf.compat.v1.keras.metrics.FalsePositives.weights": true, + "tf.compat.v1.keras.metrics.FalsePositives.with_name_scope": true, + "tf.compat.v1.keras.metrics.Hinge": false, + "tf.compat.v1.keras.metrics.Hinge.__call__": true, + "tf.compat.v1.keras.metrics.Hinge.__eq__": true, + "tf.compat.v1.keras.metrics.Hinge.__ge__": true, + "tf.compat.v1.keras.metrics.Hinge.__gt__": true, + "tf.compat.v1.keras.metrics.Hinge.__init__": true, + "tf.compat.v1.keras.metrics.Hinge.__le__": true, + "tf.compat.v1.keras.metrics.Hinge.__lt__": true, + "tf.compat.v1.keras.metrics.Hinge.__ne__": true, + "tf.compat.v1.keras.metrics.Hinge.__new__": true, + "tf.compat.v1.keras.metrics.Hinge.activity_regularizer": true, + "tf.compat.v1.keras.metrics.Hinge.add_loss": true, + "tf.compat.v1.keras.metrics.Hinge.add_metric": true, + "tf.compat.v1.keras.metrics.Hinge.add_weight": true, + "tf.compat.v1.keras.metrics.Hinge.build": true, + "tf.compat.v1.keras.metrics.Hinge.call": true, + "tf.compat.v1.keras.metrics.Hinge.compute_mask": true, + "tf.compat.v1.keras.metrics.Hinge.compute_output_shape": true, + "tf.compat.v1.keras.metrics.Hinge.compute_output_signature": true, + "tf.compat.v1.keras.metrics.Hinge.count_params": true, + "tf.compat.v1.keras.metrics.Hinge.dtype": true, + "tf.compat.v1.keras.metrics.Hinge.dynamic": true, + "tf.compat.v1.keras.metrics.Hinge.from_config": true, + "tf.compat.v1.keras.metrics.Hinge.get_config": true, + "tf.compat.v1.keras.metrics.Hinge.get_weights": true, + 
"tf.compat.v1.keras.metrics.Hinge.input": true, + "tf.compat.v1.keras.metrics.Hinge.input_spec": true, + "tf.compat.v1.keras.metrics.Hinge.losses": true, + "tf.compat.v1.keras.metrics.Hinge.metrics": true, + "tf.compat.v1.keras.metrics.Hinge.name": true, + "tf.compat.v1.keras.metrics.Hinge.name_scope": true, + "tf.compat.v1.keras.metrics.Hinge.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.Hinge.output": true, + "tf.compat.v1.keras.metrics.Hinge.reset_states": true, + "tf.compat.v1.keras.metrics.Hinge.result": true, + "tf.compat.v1.keras.metrics.Hinge.set_weights": true, + "tf.compat.v1.keras.metrics.Hinge.submodules": true, + "tf.compat.v1.keras.metrics.Hinge.trainable": true, + "tf.compat.v1.keras.metrics.Hinge.trainable_weights": true, + "tf.compat.v1.keras.metrics.Hinge.update_state": true, + "tf.compat.v1.keras.metrics.Hinge.weights": true, + "tf.compat.v1.keras.metrics.Hinge.with_name_scope": true, + "tf.compat.v1.keras.metrics.KLD": false, + "tf.compat.v1.keras.metrics.KLDivergence": false, + "tf.compat.v1.keras.metrics.KLDivergence.__call__": true, + "tf.compat.v1.keras.metrics.KLDivergence.__eq__": true, + "tf.compat.v1.keras.metrics.KLDivergence.__ge__": true, + "tf.compat.v1.keras.metrics.KLDivergence.__gt__": true, + "tf.compat.v1.keras.metrics.KLDivergence.__init__": true, + "tf.compat.v1.keras.metrics.KLDivergence.__le__": true, + "tf.compat.v1.keras.metrics.KLDivergence.__lt__": true, + "tf.compat.v1.keras.metrics.KLDivergence.__ne__": true, + "tf.compat.v1.keras.metrics.KLDivergence.__new__": true, + "tf.compat.v1.keras.metrics.KLDivergence.activity_regularizer": true, + "tf.compat.v1.keras.metrics.KLDivergence.add_loss": true, + "tf.compat.v1.keras.metrics.KLDivergence.add_metric": true, + "tf.compat.v1.keras.metrics.KLDivergence.add_weight": true, + "tf.compat.v1.keras.metrics.KLDivergence.build": true, + "tf.compat.v1.keras.metrics.KLDivergence.call": true, + "tf.compat.v1.keras.metrics.KLDivergence.compute_mask": true, + "tf.compat.v1.keras.metrics.KLDivergence.compute_output_shape": true, + "tf.compat.v1.keras.metrics.KLDivergence.compute_output_signature": true, + "tf.compat.v1.keras.metrics.KLDivergence.count_params": true, + "tf.compat.v1.keras.metrics.KLDivergence.dtype": true, + "tf.compat.v1.keras.metrics.KLDivergence.dynamic": true, + "tf.compat.v1.keras.metrics.KLDivergence.from_config": true, + "tf.compat.v1.keras.metrics.KLDivergence.get_config": true, + "tf.compat.v1.keras.metrics.KLDivergence.get_weights": true, + "tf.compat.v1.keras.metrics.KLDivergence.input": true, + "tf.compat.v1.keras.metrics.KLDivergence.input_spec": true, + "tf.compat.v1.keras.metrics.KLDivergence.losses": true, + "tf.compat.v1.keras.metrics.KLDivergence.metrics": true, + "tf.compat.v1.keras.metrics.KLDivergence.name": true, + "tf.compat.v1.keras.metrics.KLDivergence.name_scope": true, + "tf.compat.v1.keras.metrics.KLDivergence.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.KLDivergence.output": true, + "tf.compat.v1.keras.metrics.KLDivergence.reset_states": true, + "tf.compat.v1.keras.metrics.KLDivergence.result": true, + "tf.compat.v1.keras.metrics.KLDivergence.set_weights": true, + "tf.compat.v1.keras.metrics.KLDivergence.submodules": true, + "tf.compat.v1.keras.metrics.KLDivergence.trainable": true, + "tf.compat.v1.keras.metrics.KLDivergence.trainable_weights": true, + "tf.compat.v1.keras.metrics.KLDivergence.update_state": true, + "tf.compat.v1.keras.metrics.KLDivergence.weights": true, + "tf.compat.v1.keras.metrics.KLDivergence.with_name_scope": true, 
+ "tf.compat.v1.keras.metrics.LogCoshError": false, + "tf.compat.v1.keras.metrics.LogCoshError.__call__": true, + "tf.compat.v1.keras.metrics.LogCoshError.__eq__": true, + "tf.compat.v1.keras.metrics.LogCoshError.__ge__": true, + "tf.compat.v1.keras.metrics.LogCoshError.__gt__": true, + "tf.compat.v1.keras.metrics.LogCoshError.__init__": true, + "tf.compat.v1.keras.metrics.LogCoshError.__le__": true, + "tf.compat.v1.keras.metrics.LogCoshError.__lt__": true, + "tf.compat.v1.keras.metrics.LogCoshError.__ne__": true, + "tf.compat.v1.keras.metrics.LogCoshError.__new__": true, + "tf.compat.v1.keras.metrics.LogCoshError.activity_regularizer": true, + "tf.compat.v1.keras.metrics.LogCoshError.add_loss": true, + "tf.compat.v1.keras.metrics.LogCoshError.add_metric": true, + "tf.compat.v1.keras.metrics.LogCoshError.add_weight": true, + "tf.compat.v1.keras.metrics.LogCoshError.build": true, + "tf.compat.v1.keras.metrics.LogCoshError.call": true, + "tf.compat.v1.keras.metrics.LogCoshError.compute_mask": true, + "tf.compat.v1.keras.metrics.LogCoshError.compute_output_shape": true, + "tf.compat.v1.keras.metrics.LogCoshError.compute_output_signature": true, + "tf.compat.v1.keras.metrics.LogCoshError.count_params": true, + "tf.compat.v1.keras.metrics.LogCoshError.dtype": true, + "tf.compat.v1.keras.metrics.LogCoshError.dynamic": true, + "tf.compat.v1.keras.metrics.LogCoshError.from_config": true, + "tf.compat.v1.keras.metrics.LogCoshError.get_config": true, + "tf.compat.v1.keras.metrics.LogCoshError.get_weights": true, + "tf.compat.v1.keras.metrics.LogCoshError.input": true, + "tf.compat.v1.keras.metrics.LogCoshError.input_spec": true, + "tf.compat.v1.keras.metrics.LogCoshError.losses": true, + "tf.compat.v1.keras.metrics.LogCoshError.metrics": true, + "tf.compat.v1.keras.metrics.LogCoshError.name": true, + "tf.compat.v1.keras.metrics.LogCoshError.name_scope": true, + "tf.compat.v1.keras.metrics.LogCoshError.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.LogCoshError.output": true, + "tf.compat.v1.keras.metrics.LogCoshError.reset_states": true, + "tf.compat.v1.keras.metrics.LogCoshError.result": true, + "tf.compat.v1.keras.metrics.LogCoshError.set_weights": true, + "tf.compat.v1.keras.metrics.LogCoshError.submodules": true, + "tf.compat.v1.keras.metrics.LogCoshError.trainable": true, + "tf.compat.v1.keras.metrics.LogCoshError.trainable_weights": true, + "tf.compat.v1.keras.metrics.LogCoshError.update_state": true, + "tf.compat.v1.keras.metrics.LogCoshError.weights": true, + "tf.compat.v1.keras.metrics.LogCoshError.with_name_scope": true, + "tf.compat.v1.keras.metrics.MAE": false, + "tf.compat.v1.keras.metrics.MAPE": false, + "tf.compat.v1.keras.metrics.MSE": false, + "tf.compat.v1.keras.metrics.MSLE": false, + "tf.compat.v1.keras.metrics.Mean": false, + "tf.compat.v1.keras.metrics.Mean.__call__": true, + "tf.compat.v1.keras.metrics.Mean.__eq__": true, + "tf.compat.v1.keras.metrics.Mean.__ge__": true, + "tf.compat.v1.keras.metrics.Mean.__gt__": true, + "tf.compat.v1.keras.metrics.Mean.__init__": true, + "tf.compat.v1.keras.metrics.Mean.__le__": true, + "tf.compat.v1.keras.metrics.Mean.__lt__": true, + "tf.compat.v1.keras.metrics.Mean.__ne__": true, + "tf.compat.v1.keras.metrics.Mean.__new__": true, + "tf.compat.v1.keras.metrics.Mean.activity_regularizer": true, + "tf.compat.v1.keras.metrics.Mean.add_loss": true, + "tf.compat.v1.keras.metrics.Mean.add_metric": true, + "tf.compat.v1.keras.metrics.Mean.add_weight": true, + "tf.compat.v1.keras.metrics.Mean.build": true, + 
"tf.compat.v1.keras.metrics.Mean.call": true, + "tf.compat.v1.keras.metrics.Mean.compute_mask": true, + "tf.compat.v1.keras.metrics.Mean.compute_output_shape": true, + "tf.compat.v1.keras.metrics.Mean.compute_output_signature": true, + "tf.compat.v1.keras.metrics.Mean.count_params": true, + "tf.compat.v1.keras.metrics.Mean.dtype": true, + "tf.compat.v1.keras.metrics.Mean.dynamic": true, + "tf.compat.v1.keras.metrics.Mean.from_config": true, + "tf.compat.v1.keras.metrics.Mean.get_config": true, + "tf.compat.v1.keras.metrics.Mean.get_weights": true, + "tf.compat.v1.keras.metrics.Mean.input": true, + "tf.compat.v1.keras.metrics.Mean.input_spec": true, + "tf.compat.v1.keras.metrics.Mean.losses": true, + "tf.compat.v1.keras.metrics.Mean.metrics": true, + "tf.compat.v1.keras.metrics.Mean.name": true, + "tf.compat.v1.keras.metrics.Mean.name_scope": true, + "tf.compat.v1.keras.metrics.Mean.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.Mean.output": true, + "tf.compat.v1.keras.metrics.Mean.reset_states": true, + "tf.compat.v1.keras.metrics.Mean.result": true, + "tf.compat.v1.keras.metrics.Mean.set_weights": true, + "tf.compat.v1.keras.metrics.Mean.submodules": true, + "tf.compat.v1.keras.metrics.Mean.trainable": true, + "tf.compat.v1.keras.metrics.Mean.trainable_weights": true, + "tf.compat.v1.keras.metrics.Mean.update_state": true, + "tf.compat.v1.keras.metrics.Mean.weights": true, + "tf.compat.v1.keras.metrics.Mean.with_name_scope": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError": false, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__call__": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__eq__": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__ge__": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__gt__": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__init__": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__le__": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__lt__": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__ne__": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.__new__": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.activity_regularizer": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.add_loss": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.add_metric": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.add_weight": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.build": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.call": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.compute_mask": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.compute_output_shape": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.compute_output_signature": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.count_params": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.dtype": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.dynamic": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.from_config": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.get_config": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.get_weights": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.input": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.input_spec": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.losses": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.metrics": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.name": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.name_scope": true, + 
"tf.compat.v1.keras.metrics.MeanAbsoluteError.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.output": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.reset_states": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.result": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.set_weights": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.submodules": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.trainable": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.update_state": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.weights": true, + "tf.compat.v1.keras.metrics.MeanAbsoluteError.with_name_scope": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError": false, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__call__": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__eq__": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__ge__": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__gt__": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__init__": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__le__": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__lt__": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__ne__": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.__new__": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.activity_regularizer": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.add_loss": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.add_metric": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.add_weight": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.build": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.call": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.compute_mask": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.compute_output_shape": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.compute_output_signature": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.count_params": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.dtype": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.dynamic": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.from_config": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.get_config": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.get_weights": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.input": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.input_spec": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.losses": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.metrics": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.name": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.name_scope": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.output": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.reset_states": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.result": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.set_weights": true, + 
"tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.submodules": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.trainable": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.update_state": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.weights": true, + "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError.with_name_scope": true, + "tf.compat.v1.keras.metrics.MeanIoU": false, + "tf.compat.v1.keras.metrics.MeanIoU.__call__": true, + "tf.compat.v1.keras.metrics.MeanIoU.__eq__": true, + "tf.compat.v1.keras.metrics.MeanIoU.__ge__": true, + "tf.compat.v1.keras.metrics.MeanIoU.__gt__": true, + "tf.compat.v1.keras.metrics.MeanIoU.__init__": true, + "tf.compat.v1.keras.metrics.MeanIoU.__le__": true, + "tf.compat.v1.keras.metrics.MeanIoU.__lt__": true, + "tf.compat.v1.keras.metrics.MeanIoU.__ne__": true, + "tf.compat.v1.keras.metrics.MeanIoU.__new__": true, + "tf.compat.v1.keras.metrics.MeanIoU.activity_regularizer": true, + "tf.compat.v1.keras.metrics.MeanIoU.add_loss": true, + "tf.compat.v1.keras.metrics.MeanIoU.add_metric": true, + "tf.compat.v1.keras.metrics.MeanIoU.add_weight": true, + "tf.compat.v1.keras.metrics.MeanIoU.build": true, + "tf.compat.v1.keras.metrics.MeanIoU.call": true, + "tf.compat.v1.keras.metrics.MeanIoU.compute_mask": true, + "tf.compat.v1.keras.metrics.MeanIoU.compute_output_shape": true, + "tf.compat.v1.keras.metrics.MeanIoU.compute_output_signature": true, + "tf.compat.v1.keras.metrics.MeanIoU.count_params": true, + "tf.compat.v1.keras.metrics.MeanIoU.dtype": true, + "tf.compat.v1.keras.metrics.MeanIoU.dynamic": true, + "tf.compat.v1.keras.metrics.MeanIoU.from_config": true, + "tf.compat.v1.keras.metrics.MeanIoU.get_config": true, + "tf.compat.v1.keras.metrics.MeanIoU.get_weights": true, + "tf.compat.v1.keras.metrics.MeanIoU.input": true, + "tf.compat.v1.keras.metrics.MeanIoU.input_spec": true, + "tf.compat.v1.keras.metrics.MeanIoU.losses": true, + "tf.compat.v1.keras.metrics.MeanIoU.metrics": true, + "tf.compat.v1.keras.metrics.MeanIoU.name": true, + "tf.compat.v1.keras.metrics.MeanIoU.name_scope": true, + "tf.compat.v1.keras.metrics.MeanIoU.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanIoU.output": true, + "tf.compat.v1.keras.metrics.MeanIoU.reset_states": true, + "tf.compat.v1.keras.metrics.MeanIoU.result": true, + "tf.compat.v1.keras.metrics.MeanIoU.set_weights": true, + "tf.compat.v1.keras.metrics.MeanIoU.submodules": true, + "tf.compat.v1.keras.metrics.MeanIoU.trainable": true, + "tf.compat.v1.keras.metrics.MeanIoU.trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanIoU.update_state": true, + "tf.compat.v1.keras.metrics.MeanIoU.weights": true, + "tf.compat.v1.keras.metrics.MeanIoU.with_name_scope": true, + "tf.compat.v1.keras.metrics.MeanRelativeError": false, + "tf.compat.v1.keras.metrics.MeanRelativeError.__call__": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.__eq__": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.__ge__": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.__gt__": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.__init__": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.__le__": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.__lt__": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.__ne__": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.__new__": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.activity_regularizer": true, + 
"tf.compat.v1.keras.metrics.MeanRelativeError.add_loss": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.add_metric": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.add_weight": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.build": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.call": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.compute_mask": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.compute_output_shape": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.compute_output_signature": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.count_params": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.dtype": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.dynamic": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.from_config": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.get_config": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.get_weights": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.input": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.input_spec": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.losses": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.metrics": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.name": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.name_scope": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.output": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.reset_states": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.result": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.set_weights": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.submodules": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.trainable": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.update_state": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.weights": true, + "tf.compat.v1.keras.metrics.MeanRelativeError.with_name_scope": true, + "tf.compat.v1.keras.metrics.MeanSquaredError": false, + "tf.compat.v1.keras.metrics.MeanSquaredError.__call__": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.__eq__": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.__ge__": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.__gt__": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.__init__": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.__le__": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.__lt__": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.__ne__": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.__new__": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.activity_regularizer": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.add_loss": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.add_metric": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.add_weight": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.build": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.call": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.compute_mask": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.compute_output_shape": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.compute_output_signature": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.count_params": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.dtype": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.dynamic": true, + 
"tf.compat.v1.keras.metrics.MeanSquaredError.from_config": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.get_config": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.get_weights": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.input": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.input_spec": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.losses": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.metrics": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.name": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.name_scope": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.output": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.reset_states": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.result": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.set_weights": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.submodules": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.trainable": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.update_state": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.weights": true, + "tf.compat.v1.keras.metrics.MeanSquaredError.with_name_scope": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError": false, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__call__": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__eq__": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__ge__": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__gt__": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__init__": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__le__": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__lt__": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__ne__": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.__new__": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.activity_regularizer": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.add_loss": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.add_metric": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.add_weight": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.build": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.call": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.compute_mask": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.compute_output_shape": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.compute_output_signature": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.count_params": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.dtype": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.dynamic": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.from_config": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.get_config": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.get_weights": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.input": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.input_spec": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.losses": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.metrics": true, + 
"tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.name": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.name_scope": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.output": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.reset_states": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.result": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.set_weights": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.submodules": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.trainable": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.update_state": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.weights": true, + "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError.with_name_scope": true, + "tf.compat.v1.keras.metrics.MeanTensor": false, + "tf.compat.v1.keras.metrics.MeanTensor.__call__": true, + "tf.compat.v1.keras.metrics.MeanTensor.__eq__": true, + "tf.compat.v1.keras.metrics.MeanTensor.__ge__": true, + "tf.compat.v1.keras.metrics.MeanTensor.__gt__": true, + "tf.compat.v1.keras.metrics.MeanTensor.__init__": true, + "tf.compat.v1.keras.metrics.MeanTensor.__le__": true, + "tf.compat.v1.keras.metrics.MeanTensor.__lt__": true, + "tf.compat.v1.keras.metrics.MeanTensor.__ne__": true, + "tf.compat.v1.keras.metrics.MeanTensor.__new__": true, + "tf.compat.v1.keras.metrics.MeanTensor.activity_regularizer": true, + "tf.compat.v1.keras.metrics.MeanTensor.add_loss": true, + "tf.compat.v1.keras.metrics.MeanTensor.add_metric": true, + "tf.compat.v1.keras.metrics.MeanTensor.add_weight": true, + "tf.compat.v1.keras.metrics.MeanTensor.build": true, + "tf.compat.v1.keras.metrics.MeanTensor.call": true, + "tf.compat.v1.keras.metrics.MeanTensor.compute_mask": true, + "tf.compat.v1.keras.metrics.MeanTensor.compute_output_shape": true, + "tf.compat.v1.keras.metrics.MeanTensor.compute_output_signature": true, + "tf.compat.v1.keras.metrics.MeanTensor.count": true, + "tf.compat.v1.keras.metrics.MeanTensor.count_params": true, + "tf.compat.v1.keras.metrics.MeanTensor.dtype": true, + "tf.compat.v1.keras.metrics.MeanTensor.dynamic": true, + "tf.compat.v1.keras.metrics.MeanTensor.from_config": true, + "tf.compat.v1.keras.metrics.MeanTensor.get_config": true, + "tf.compat.v1.keras.metrics.MeanTensor.get_weights": true, + "tf.compat.v1.keras.metrics.MeanTensor.input": true, + "tf.compat.v1.keras.metrics.MeanTensor.input_spec": true, + "tf.compat.v1.keras.metrics.MeanTensor.losses": true, + "tf.compat.v1.keras.metrics.MeanTensor.metrics": true, + "tf.compat.v1.keras.metrics.MeanTensor.name": true, + "tf.compat.v1.keras.metrics.MeanTensor.name_scope": true, + "tf.compat.v1.keras.metrics.MeanTensor.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanTensor.output": true, + "tf.compat.v1.keras.metrics.MeanTensor.reset_states": true, + "tf.compat.v1.keras.metrics.MeanTensor.result": true, + "tf.compat.v1.keras.metrics.MeanTensor.set_weights": true, + "tf.compat.v1.keras.metrics.MeanTensor.submodules": true, + "tf.compat.v1.keras.metrics.MeanTensor.total": true, + "tf.compat.v1.keras.metrics.MeanTensor.trainable": true, + "tf.compat.v1.keras.metrics.MeanTensor.trainable_weights": true, + "tf.compat.v1.keras.metrics.MeanTensor.update_state": true, + "tf.compat.v1.keras.metrics.MeanTensor.weights": true, + 
"tf.compat.v1.keras.metrics.MeanTensor.with_name_scope": true, + "tf.compat.v1.keras.metrics.Metric": false, + "tf.compat.v1.keras.metrics.Metric.__call__": true, + "tf.compat.v1.keras.metrics.Metric.__eq__": true, + "tf.compat.v1.keras.metrics.Metric.__ge__": true, + "tf.compat.v1.keras.metrics.Metric.__gt__": true, + "tf.compat.v1.keras.metrics.Metric.__init__": true, + "tf.compat.v1.keras.metrics.Metric.__le__": true, + "tf.compat.v1.keras.metrics.Metric.__lt__": true, + "tf.compat.v1.keras.metrics.Metric.__ne__": true, + "tf.compat.v1.keras.metrics.Metric.__new__": true, + "tf.compat.v1.keras.metrics.Metric.activity_regularizer": true, + "tf.compat.v1.keras.metrics.Metric.add_loss": true, + "tf.compat.v1.keras.metrics.Metric.add_metric": true, + "tf.compat.v1.keras.metrics.Metric.add_weight": true, + "tf.compat.v1.keras.metrics.Metric.build": true, + "tf.compat.v1.keras.metrics.Metric.call": true, + "tf.compat.v1.keras.metrics.Metric.compute_mask": true, + "tf.compat.v1.keras.metrics.Metric.compute_output_shape": true, + "tf.compat.v1.keras.metrics.Metric.compute_output_signature": true, + "tf.compat.v1.keras.metrics.Metric.count_params": true, + "tf.compat.v1.keras.metrics.Metric.dtype": true, + "tf.compat.v1.keras.metrics.Metric.dynamic": true, + "tf.compat.v1.keras.metrics.Metric.from_config": true, + "tf.compat.v1.keras.metrics.Metric.get_config": true, + "tf.compat.v1.keras.metrics.Metric.get_weights": true, + "tf.compat.v1.keras.metrics.Metric.input": true, + "tf.compat.v1.keras.metrics.Metric.input_spec": true, + "tf.compat.v1.keras.metrics.Metric.losses": true, + "tf.compat.v1.keras.metrics.Metric.metrics": true, + "tf.compat.v1.keras.metrics.Metric.name": true, + "tf.compat.v1.keras.metrics.Metric.name_scope": true, + "tf.compat.v1.keras.metrics.Metric.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.Metric.output": true, + "tf.compat.v1.keras.metrics.Metric.reset_states": true, + "tf.compat.v1.keras.metrics.Metric.result": true, + "tf.compat.v1.keras.metrics.Metric.set_weights": true, + "tf.compat.v1.keras.metrics.Metric.submodules": true, + "tf.compat.v1.keras.metrics.Metric.trainable": true, + "tf.compat.v1.keras.metrics.Metric.trainable_weights": true, + "tf.compat.v1.keras.metrics.Metric.update_state": true, + "tf.compat.v1.keras.metrics.Metric.weights": true, + "tf.compat.v1.keras.metrics.Metric.with_name_scope": true, + "tf.compat.v1.keras.metrics.Poisson": false, + "tf.compat.v1.keras.metrics.Poisson.__call__": true, + "tf.compat.v1.keras.metrics.Poisson.__eq__": true, + "tf.compat.v1.keras.metrics.Poisson.__ge__": true, + "tf.compat.v1.keras.metrics.Poisson.__gt__": true, + "tf.compat.v1.keras.metrics.Poisson.__init__": true, + "tf.compat.v1.keras.metrics.Poisson.__le__": true, + "tf.compat.v1.keras.metrics.Poisson.__lt__": true, + "tf.compat.v1.keras.metrics.Poisson.__ne__": true, + "tf.compat.v1.keras.metrics.Poisson.__new__": true, + "tf.compat.v1.keras.metrics.Poisson.activity_regularizer": true, + "tf.compat.v1.keras.metrics.Poisson.add_loss": true, + "tf.compat.v1.keras.metrics.Poisson.add_metric": true, + "tf.compat.v1.keras.metrics.Poisson.add_weight": true, + "tf.compat.v1.keras.metrics.Poisson.build": true, + "tf.compat.v1.keras.metrics.Poisson.call": true, + "tf.compat.v1.keras.metrics.Poisson.compute_mask": true, + "tf.compat.v1.keras.metrics.Poisson.compute_output_shape": true, + "tf.compat.v1.keras.metrics.Poisson.compute_output_signature": true, + "tf.compat.v1.keras.metrics.Poisson.count_params": true, + 
"tf.compat.v1.keras.metrics.Poisson.dtype": true, + "tf.compat.v1.keras.metrics.Poisson.dynamic": true, + "tf.compat.v1.keras.metrics.Poisson.from_config": true, + "tf.compat.v1.keras.metrics.Poisson.get_config": true, + "tf.compat.v1.keras.metrics.Poisson.get_weights": true, + "tf.compat.v1.keras.metrics.Poisson.input": true, + "tf.compat.v1.keras.metrics.Poisson.input_spec": true, + "tf.compat.v1.keras.metrics.Poisson.losses": true, + "tf.compat.v1.keras.metrics.Poisson.metrics": true, + "tf.compat.v1.keras.metrics.Poisson.name": true, + "tf.compat.v1.keras.metrics.Poisson.name_scope": true, + "tf.compat.v1.keras.metrics.Poisson.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.Poisson.output": true, + "tf.compat.v1.keras.metrics.Poisson.reset_states": true, + "tf.compat.v1.keras.metrics.Poisson.result": true, + "tf.compat.v1.keras.metrics.Poisson.set_weights": true, + "tf.compat.v1.keras.metrics.Poisson.submodules": true, + "tf.compat.v1.keras.metrics.Poisson.trainable": true, + "tf.compat.v1.keras.metrics.Poisson.trainable_weights": true, + "tf.compat.v1.keras.metrics.Poisson.update_state": true, + "tf.compat.v1.keras.metrics.Poisson.weights": true, + "tf.compat.v1.keras.metrics.Poisson.with_name_scope": true, + "tf.compat.v1.keras.metrics.Precision": false, + "tf.compat.v1.keras.metrics.Precision.__call__": true, + "tf.compat.v1.keras.metrics.Precision.__eq__": true, + "tf.compat.v1.keras.metrics.Precision.__ge__": true, + "tf.compat.v1.keras.metrics.Precision.__gt__": true, + "tf.compat.v1.keras.metrics.Precision.__init__": true, + "tf.compat.v1.keras.metrics.Precision.__le__": true, + "tf.compat.v1.keras.metrics.Precision.__lt__": true, + "tf.compat.v1.keras.metrics.Precision.__ne__": true, + "tf.compat.v1.keras.metrics.Precision.__new__": true, + "tf.compat.v1.keras.metrics.Precision.activity_regularizer": true, + "tf.compat.v1.keras.metrics.Precision.add_loss": true, + "tf.compat.v1.keras.metrics.Precision.add_metric": true, + "tf.compat.v1.keras.metrics.Precision.add_weight": true, + "tf.compat.v1.keras.metrics.Precision.build": true, + "tf.compat.v1.keras.metrics.Precision.call": true, + "tf.compat.v1.keras.metrics.Precision.compute_mask": true, + "tf.compat.v1.keras.metrics.Precision.compute_output_shape": true, + "tf.compat.v1.keras.metrics.Precision.compute_output_signature": true, + "tf.compat.v1.keras.metrics.Precision.count_params": true, + "tf.compat.v1.keras.metrics.Precision.dtype": true, + "tf.compat.v1.keras.metrics.Precision.dynamic": true, + "tf.compat.v1.keras.metrics.Precision.from_config": true, + "tf.compat.v1.keras.metrics.Precision.get_config": true, + "tf.compat.v1.keras.metrics.Precision.get_weights": true, + "tf.compat.v1.keras.metrics.Precision.input": true, + "tf.compat.v1.keras.metrics.Precision.input_spec": true, + "tf.compat.v1.keras.metrics.Precision.losses": true, + "tf.compat.v1.keras.metrics.Precision.metrics": true, + "tf.compat.v1.keras.metrics.Precision.name": true, + "tf.compat.v1.keras.metrics.Precision.name_scope": true, + "tf.compat.v1.keras.metrics.Precision.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.Precision.output": true, + "tf.compat.v1.keras.metrics.Precision.reset_states": true, + "tf.compat.v1.keras.metrics.Precision.result": true, + "tf.compat.v1.keras.metrics.Precision.set_weights": true, + "tf.compat.v1.keras.metrics.Precision.submodules": true, + "tf.compat.v1.keras.metrics.Precision.trainable": true, + "tf.compat.v1.keras.metrics.Precision.trainable_weights": true, + 
"tf.compat.v1.keras.metrics.Precision.update_state": true, + "tf.compat.v1.keras.metrics.Precision.weights": true, + "tf.compat.v1.keras.metrics.Precision.with_name_scope": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall": false, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__call__": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__eq__": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__ge__": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__gt__": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__init__": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__le__": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__lt__": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__ne__": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.__new__": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.activity_regularizer": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.add_loss": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.add_metric": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.add_weight": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.build": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.call": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.compute_mask": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.compute_output_shape": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.compute_output_signature": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.count_params": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.dtype": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.dynamic": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.from_config": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.get_config": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.get_weights": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.input": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.input_spec": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.losses": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.metrics": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.name": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.name_scope": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.output": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.reset_states": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.result": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.set_weights": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.submodules": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.trainable": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.trainable_weights": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.update_state": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.weights": true, + "tf.compat.v1.keras.metrics.PrecisionAtRecall.with_name_scope": true, + "tf.compat.v1.keras.metrics.Recall": false, + "tf.compat.v1.keras.metrics.Recall.__call__": true, + "tf.compat.v1.keras.metrics.Recall.__eq__": true, + "tf.compat.v1.keras.metrics.Recall.__ge__": true, + "tf.compat.v1.keras.metrics.Recall.__gt__": true, + "tf.compat.v1.keras.metrics.Recall.__init__": true, + "tf.compat.v1.keras.metrics.Recall.__le__": true, + "tf.compat.v1.keras.metrics.Recall.__lt__": true, + "tf.compat.v1.keras.metrics.Recall.__ne__": true, + "tf.compat.v1.keras.metrics.Recall.__new__": true, + "tf.compat.v1.keras.metrics.Recall.activity_regularizer": true, 
+ "tf.compat.v1.keras.metrics.Recall.add_loss": true, + "tf.compat.v1.keras.metrics.Recall.add_metric": true, + "tf.compat.v1.keras.metrics.Recall.add_weight": true, + "tf.compat.v1.keras.metrics.Recall.build": true, + "tf.compat.v1.keras.metrics.Recall.call": true, + "tf.compat.v1.keras.metrics.Recall.compute_mask": true, + "tf.compat.v1.keras.metrics.Recall.compute_output_shape": true, + "tf.compat.v1.keras.metrics.Recall.compute_output_signature": true, + "tf.compat.v1.keras.metrics.Recall.count_params": true, + "tf.compat.v1.keras.metrics.Recall.dtype": true, + "tf.compat.v1.keras.metrics.Recall.dynamic": true, + "tf.compat.v1.keras.metrics.Recall.from_config": true, + "tf.compat.v1.keras.metrics.Recall.get_config": true, + "tf.compat.v1.keras.metrics.Recall.get_weights": true, + "tf.compat.v1.keras.metrics.Recall.input": true, + "tf.compat.v1.keras.metrics.Recall.input_spec": true, + "tf.compat.v1.keras.metrics.Recall.losses": true, + "tf.compat.v1.keras.metrics.Recall.metrics": true, + "tf.compat.v1.keras.metrics.Recall.name": true, + "tf.compat.v1.keras.metrics.Recall.name_scope": true, + "tf.compat.v1.keras.metrics.Recall.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.Recall.output": true, + "tf.compat.v1.keras.metrics.Recall.reset_states": true, + "tf.compat.v1.keras.metrics.Recall.result": true, + "tf.compat.v1.keras.metrics.Recall.set_weights": true, + "tf.compat.v1.keras.metrics.Recall.submodules": true, + "tf.compat.v1.keras.metrics.Recall.trainable": true, + "tf.compat.v1.keras.metrics.Recall.trainable_weights": true, + "tf.compat.v1.keras.metrics.Recall.update_state": true, + "tf.compat.v1.keras.metrics.Recall.weights": true, + "tf.compat.v1.keras.metrics.Recall.with_name_scope": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision": false, + "tf.compat.v1.keras.metrics.RecallAtPrecision.__call__": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.__eq__": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.__ge__": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.__gt__": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.__init__": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.__le__": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.__lt__": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.__ne__": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.__new__": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.activity_regularizer": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.add_loss": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.add_metric": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.add_weight": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.build": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.call": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.compute_mask": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.compute_output_shape": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.compute_output_signature": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.count_params": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.dtype": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.dynamic": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.from_config": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.get_config": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.get_weights": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.input": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.input_spec": true, + 
"tf.compat.v1.keras.metrics.RecallAtPrecision.losses": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.metrics": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.name": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.name_scope": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.output": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.reset_states": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.result": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.set_weights": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.submodules": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.trainable": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.trainable_weights": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.update_state": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.weights": true, + "tf.compat.v1.keras.metrics.RecallAtPrecision.with_name_scope": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError": false, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__call__": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__eq__": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__ge__": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__gt__": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__init__": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__le__": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__lt__": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__ne__": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.__new__": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.activity_regularizer": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.add_loss": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.add_metric": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.add_weight": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.build": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.call": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.compute_mask": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.compute_output_shape": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.compute_output_signature": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.count_params": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.dtype": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.dynamic": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.from_config": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.get_config": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.get_weights": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.input": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.input_spec": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.losses": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.metrics": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.name": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.name_scope": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.output": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.reset_states": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.result": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.set_weights": true, + 
"tf.compat.v1.keras.metrics.RootMeanSquaredError.submodules": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.trainable": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.trainable_weights": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.update_state": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.weights": true, + "tf.compat.v1.keras.metrics.RootMeanSquaredError.with_name_scope": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity": false, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__call__": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__eq__": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__ge__": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__gt__": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__init__": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__le__": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__lt__": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__ne__": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.__new__": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.activity_regularizer": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.add_loss": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.add_metric": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.add_weight": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.build": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.call": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.compute_mask": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.compute_output_shape": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.compute_output_signature": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.count_params": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.dtype": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.dynamic": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.from_config": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.get_config": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.get_weights": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.input": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.input_spec": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.losses": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.metrics": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.name": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.name_scope": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.output": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.reset_states": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.result": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.set_weights": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.submodules": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.trainable": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.trainable_weights": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.update_state": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.weights": true, + "tf.compat.v1.keras.metrics.SensitivityAtSpecificity.with_name_scope": true, + 
"tf.compat.v1.keras.metrics.SparseCategoricalAccuracy": false, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__call__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__eq__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__ge__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__gt__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__init__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__le__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__lt__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__ne__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.__new__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.activity_regularizer": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.add_loss": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.add_metric": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.add_weight": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.build": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.call": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.compute_mask": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.compute_output_shape": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.compute_output_signature": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.count_params": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.dtype": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.dynamic": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.from_config": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.get_config": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.get_weights": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.input": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.input_spec": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.losses": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.metrics": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.name": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.name_scope": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.output": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.reset_states": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.result": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.set_weights": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.submodules": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.trainable": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.trainable_weights": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.update_state": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.weights": true, + "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy.with_name_scope": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy": false, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__call__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__eq__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__ge__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__gt__": true, + 
"tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__init__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__le__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__lt__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__ne__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.__new__": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.activity_regularizer": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.add_loss": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.add_metric": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.add_weight": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.build": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.call": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.compute_mask": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.compute_output_shape": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.compute_output_signature": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.count_params": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.dtype": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.dynamic": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.from_config": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.get_config": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.get_weights": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.input": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.input_spec": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.losses": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.metrics": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.name": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.name_scope": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.output": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.reset_states": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.result": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.set_weights": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.submodules": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.trainable": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.trainable_weights": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.update_state": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.weights": true, + "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy.with_name_scope": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy": false, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__call__": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__eq__": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__ge__": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__gt__": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__init__": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__le__": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__lt__": true, + 
"tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__ne__": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.__new__": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.activity_regularizer": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.add_loss": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.add_metric": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.add_weight": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.build": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.call": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.compute_mask": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.compute_output_shape": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.compute_output_signature": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.count_params": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.dtype": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.dynamic": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.from_config": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.get_config": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.get_weights": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.input": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.input_spec": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.losses": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.metrics": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.name": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.name_scope": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.output": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.reset_states": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.result": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.set_weights": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.submodules": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.trainable": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.trainable_weights": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.update_state": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.weights": true, + "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy.with_name_scope": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity": false, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__call__": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__eq__": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__ge__": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__gt__": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__init__": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__le__": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__lt__": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__ne__": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.__new__": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.activity_regularizer": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.add_loss": true, + 
"tf.compat.v1.keras.metrics.SpecificityAtSensitivity.add_metric": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.add_weight": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.build": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.call": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.compute_mask": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.compute_output_shape": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.compute_output_signature": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.count_params": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.dtype": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.dynamic": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.from_config": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.get_config": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.get_weights": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.input": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.input_spec": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.losses": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.metrics": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.name": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.name_scope": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.output": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.reset_states": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.result": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.set_weights": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.submodules": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.trainable": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.trainable_weights": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.update_state": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.weights": true, + "tf.compat.v1.keras.metrics.SpecificityAtSensitivity.with_name_scope": true, + "tf.compat.v1.keras.metrics.SquaredHinge": false, + "tf.compat.v1.keras.metrics.SquaredHinge.__call__": true, + "tf.compat.v1.keras.metrics.SquaredHinge.__eq__": true, + "tf.compat.v1.keras.metrics.SquaredHinge.__ge__": true, + "tf.compat.v1.keras.metrics.SquaredHinge.__gt__": true, + "tf.compat.v1.keras.metrics.SquaredHinge.__init__": true, + "tf.compat.v1.keras.metrics.SquaredHinge.__le__": true, + "tf.compat.v1.keras.metrics.SquaredHinge.__lt__": true, + "tf.compat.v1.keras.metrics.SquaredHinge.__ne__": true, + "tf.compat.v1.keras.metrics.SquaredHinge.__new__": true, + "tf.compat.v1.keras.metrics.SquaredHinge.activity_regularizer": true, + "tf.compat.v1.keras.metrics.SquaredHinge.add_loss": true, + "tf.compat.v1.keras.metrics.SquaredHinge.add_metric": true, + "tf.compat.v1.keras.metrics.SquaredHinge.add_weight": true, + "tf.compat.v1.keras.metrics.SquaredHinge.build": true, + "tf.compat.v1.keras.metrics.SquaredHinge.call": true, + "tf.compat.v1.keras.metrics.SquaredHinge.compute_mask": true, + "tf.compat.v1.keras.metrics.SquaredHinge.compute_output_shape": true, + "tf.compat.v1.keras.metrics.SquaredHinge.compute_output_signature": true, + "tf.compat.v1.keras.metrics.SquaredHinge.count_params": true, + "tf.compat.v1.keras.metrics.SquaredHinge.dtype": true, + 
"tf.compat.v1.keras.metrics.SquaredHinge.dynamic": true, + "tf.compat.v1.keras.metrics.SquaredHinge.from_config": true, + "tf.compat.v1.keras.metrics.SquaredHinge.get_config": true, + "tf.compat.v1.keras.metrics.SquaredHinge.get_weights": true, + "tf.compat.v1.keras.metrics.SquaredHinge.input": true, + "tf.compat.v1.keras.metrics.SquaredHinge.input_spec": true, + "tf.compat.v1.keras.metrics.SquaredHinge.losses": true, + "tf.compat.v1.keras.metrics.SquaredHinge.metrics": true, + "tf.compat.v1.keras.metrics.SquaredHinge.name": true, + "tf.compat.v1.keras.metrics.SquaredHinge.name_scope": true, + "tf.compat.v1.keras.metrics.SquaredHinge.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.SquaredHinge.output": true, + "tf.compat.v1.keras.metrics.SquaredHinge.reset_states": true, + "tf.compat.v1.keras.metrics.SquaredHinge.result": true, + "tf.compat.v1.keras.metrics.SquaredHinge.set_weights": true, + "tf.compat.v1.keras.metrics.SquaredHinge.submodules": true, + "tf.compat.v1.keras.metrics.SquaredHinge.trainable": true, + "tf.compat.v1.keras.metrics.SquaredHinge.trainable_weights": true, + "tf.compat.v1.keras.metrics.SquaredHinge.update_state": true, + "tf.compat.v1.keras.metrics.SquaredHinge.weights": true, + "tf.compat.v1.keras.metrics.SquaredHinge.with_name_scope": true, + "tf.compat.v1.keras.metrics.Sum": false, + "tf.compat.v1.keras.metrics.Sum.__call__": true, + "tf.compat.v1.keras.metrics.Sum.__eq__": true, + "tf.compat.v1.keras.metrics.Sum.__ge__": true, + "tf.compat.v1.keras.metrics.Sum.__gt__": true, + "tf.compat.v1.keras.metrics.Sum.__init__": true, + "tf.compat.v1.keras.metrics.Sum.__le__": true, + "tf.compat.v1.keras.metrics.Sum.__lt__": true, + "tf.compat.v1.keras.metrics.Sum.__ne__": true, + "tf.compat.v1.keras.metrics.Sum.__new__": true, + "tf.compat.v1.keras.metrics.Sum.activity_regularizer": true, + "tf.compat.v1.keras.metrics.Sum.add_loss": true, + "tf.compat.v1.keras.metrics.Sum.add_metric": true, + "tf.compat.v1.keras.metrics.Sum.add_weight": true, + "tf.compat.v1.keras.metrics.Sum.build": true, + "tf.compat.v1.keras.metrics.Sum.call": true, + "tf.compat.v1.keras.metrics.Sum.compute_mask": true, + "tf.compat.v1.keras.metrics.Sum.compute_output_shape": true, + "tf.compat.v1.keras.metrics.Sum.compute_output_signature": true, + "tf.compat.v1.keras.metrics.Sum.count_params": true, + "tf.compat.v1.keras.metrics.Sum.dtype": true, + "tf.compat.v1.keras.metrics.Sum.dynamic": true, + "tf.compat.v1.keras.metrics.Sum.from_config": true, + "tf.compat.v1.keras.metrics.Sum.get_config": true, + "tf.compat.v1.keras.metrics.Sum.get_weights": true, + "tf.compat.v1.keras.metrics.Sum.input": true, + "tf.compat.v1.keras.metrics.Sum.input_spec": true, + "tf.compat.v1.keras.metrics.Sum.losses": true, + "tf.compat.v1.keras.metrics.Sum.metrics": true, + "tf.compat.v1.keras.metrics.Sum.name": true, + "tf.compat.v1.keras.metrics.Sum.name_scope": true, + "tf.compat.v1.keras.metrics.Sum.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.Sum.output": true, + "tf.compat.v1.keras.metrics.Sum.reset_states": true, + "tf.compat.v1.keras.metrics.Sum.result": true, + "tf.compat.v1.keras.metrics.Sum.set_weights": true, + "tf.compat.v1.keras.metrics.Sum.submodules": true, + "tf.compat.v1.keras.metrics.Sum.trainable": true, + "tf.compat.v1.keras.metrics.Sum.trainable_weights": true, + "tf.compat.v1.keras.metrics.Sum.update_state": true, + "tf.compat.v1.keras.metrics.Sum.weights": true, + "tf.compat.v1.keras.metrics.Sum.with_name_scope": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy": 
false, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__call__": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__eq__": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__ge__": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__gt__": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__init__": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__le__": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__lt__": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__ne__": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.__new__": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.activity_regularizer": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.add_loss": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.add_metric": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.add_weight": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.build": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.call": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.compute_mask": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.compute_output_shape": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.compute_output_signature": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.count_params": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.dtype": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.dynamic": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.from_config": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.get_config": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.get_weights": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.input": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.input_spec": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.losses": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.metrics": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.name": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.name_scope": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.output": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.reset_states": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.result": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.set_weights": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.submodules": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.trainable": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.trainable_weights": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.update_state": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.weights": true, + "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy.with_name_scope": true, + "tf.compat.v1.keras.metrics.TrueNegatives": false, + "tf.compat.v1.keras.metrics.TrueNegatives.__call__": true, + "tf.compat.v1.keras.metrics.TrueNegatives.__eq__": true, + "tf.compat.v1.keras.metrics.TrueNegatives.__ge__": true, + "tf.compat.v1.keras.metrics.TrueNegatives.__gt__": true, + "tf.compat.v1.keras.metrics.TrueNegatives.__init__": true, + "tf.compat.v1.keras.metrics.TrueNegatives.__le__": true, + "tf.compat.v1.keras.metrics.TrueNegatives.__lt__": true, + "tf.compat.v1.keras.metrics.TrueNegatives.__ne__": true, + 
"tf.compat.v1.keras.metrics.TrueNegatives.__new__": true, + "tf.compat.v1.keras.metrics.TrueNegatives.activity_regularizer": true, + "tf.compat.v1.keras.metrics.TrueNegatives.add_loss": true, + "tf.compat.v1.keras.metrics.TrueNegatives.add_metric": true, + "tf.compat.v1.keras.metrics.TrueNegatives.add_weight": true, + "tf.compat.v1.keras.metrics.TrueNegatives.build": true, + "tf.compat.v1.keras.metrics.TrueNegatives.call": true, + "tf.compat.v1.keras.metrics.TrueNegatives.compute_mask": true, + "tf.compat.v1.keras.metrics.TrueNegatives.compute_output_shape": true, + "tf.compat.v1.keras.metrics.TrueNegatives.compute_output_signature": true, + "tf.compat.v1.keras.metrics.TrueNegatives.count_params": true, + "tf.compat.v1.keras.metrics.TrueNegatives.dtype": true, + "tf.compat.v1.keras.metrics.TrueNegatives.dynamic": true, + "tf.compat.v1.keras.metrics.TrueNegatives.from_config": true, + "tf.compat.v1.keras.metrics.TrueNegatives.get_config": true, + "tf.compat.v1.keras.metrics.TrueNegatives.get_weights": true, + "tf.compat.v1.keras.metrics.TrueNegatives.input": true, + "tf.compat.v1.keras.metrics.TrueNegatives.input_spec": true, + "tf.compat.v1.keras.metrics.TrueNegatives.losses": true, + "tf.compat.v1.keras.metrics.TrueNegatives.metrics": true, + "tf.compat.v1.keras.metrics.TrueNegatives.name": true, + "tf.compat.v1.keras.metrics.TrueNegatives.name_scope": true, + "tf.compat.v1.keras.metrics.TrueNegatives.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.TrueNegatives.output": true, + "tf.compat.v1.keras.metrics.TrueNegatives.reset_states": true, + "tf.compat.v1.keras.metrics.TrueNegatives.result": true, + "tf.compat.v1.keras.metrics.TrueNegatives.set_weights": true, + "tf.compat.v1.keras.metrics.TrueNegatives.submodules": true, + "tf.compat.v1.keras.metrics.TrueNegatives.trainable": true, + "tf.compat.v1.keras.metrics.TrueNegatives.trainable_weights": true, + "tf.compat.v1.keras.metrics.TrueNegatives.update_state": true, + "tf.compat.v1.keras.metrics.TrueNegatives.weights": true, + "tf.compat.v1.keras.metrics.TrueNegatives.with_name_scope": true, + "tf.compat.v1.keras.metrics.TruePositives": false, + "tf.compat.v1.keras.metrics.TruePositives.__call__": true, + "tf.compat.v1.keras.metrics.TruePositives.__eq__": true, + "tf.compat.v1.keras.metrics.TruePositives.__ge__": true, + "tf.compat.v1.keras.metrics.TruePositives.__gt__": true, + "tf.compat.v1.keras.metrics.TruePositives.__init__": true, + "tf.compat.v1.keras.metrics.TruePositives.__le__": true, + "tf.compat.v1.keras.metrics.TruePositives.__lt__": true, + "tf.compat.v1.keras.metrics.TruePositives.__ne__": true, + "tf.compat.v1.keras.metrics.TruePositives.__new__": true, + "tf.compat.v1.keras.metrics.TruePositives.activity_regularizer": true, + "tf.compat.v1.keras.metrics.TruePositives.add_loss": true, + "tf.compat.v1.keras.metrics.TruePositives.add_metric": true, + "tf.compat.v1.keras.metrics.TruePositives.add_weight": true, + "tf.compat.v1.keras.metrics.TruePositives.build": true, + "tf.compat.v1.keras.metrics.TruePositives.call": true, + "tf.compat.v1.keras.metrics.TruePositives.compute_mask": true, + "tf.compat.v1.keras.metrics.TruePositives.compute_output_shape": true, + "tf.compat.v1.keras.metrics.TruePositives.compute_output_signature": true, + "tf.compat.v1.keras.metrics.TruePositives.count_params": true, + "tf.compat.v1.keras.metrics.TruePositives.dtype": true, + "tf.compat.v1.keras.metrics.TruePositives.dynamic": true, + "tf.compat.v1.keras.metrics.TruePositives.from_config": true, + 
"tf.compat.v1.keras.metrics.TruePositives.get_config": true, + "tf.compat.v1.keras.metrics.TruePositives.get_weights": true, + "tf.compat.v1.keras.metrics.TruePositives.input": true, + "tf.compat.v1.keras.metrics.TruePositives.input_spec": true, + "tf.compat.v1.keras.metrics.TruePositives.losses": true, + "tf.compat.v1.keras.metrics.TruePositives.metrics": true, + "tf.compat.v1.keras.metrics.TruePositives.name": true, + "tf.compat.v1.keras.metrics.TruePositives.name_scope": true, + "tf.compat.v1.keras.metrics.TruePositives.non_trainable_weights": true, + "tf.compat.v1.keras.metrics.TruePositives.output": true, + "tf.compat.v1.keras.metrics.TruePositives.reset_states": true, + "tf.compat.v1.keras.metrics.TruePositives.result": true, + "tf.compat.v1.keras.metrics.TruePositives.set_weights": true, + "tf.compat.v1.keras.metrics.TruePositives.submodules": true, + "tf.compat.v1.keras.metrics.TruePositives.trainable": true, + "tf.compat.v1.keras.metrics.TruePositives.trainable_weights": true, + "tf.compat.v1.keras.metrics.TruePositives.update_state": true, + "tf.compat.v1.keras.metrics.TruePositives.weights": true, + "tf.compat.v1.keras.metrics.TruePositives.with_name_scope": true, + "tf.compat.v1.keras.metrics.binary_accuracy": false, + "tf.compat.v1.keras.metrics.binary_crossentropy": false, + "tf.compat.v1.keras.metrics.categorical_accuracy": false, + "tf.compat.v1.keras.metrics.categorical_crossentropy": false, + "tf.compat.v1.keras.metrics.cosine": false, + "tf.compat.v1.keras.metrics.cosine_proximity": false, + "tf.compat.v1.keras.metrics.deserialize": false, + "tf.compat.v1.keras.metrics.get": false, + "tf.compat.v1.keras.metrics.hinge": false, + "tf.compat.v1.keras.metrics.kld": false, + "tf.compat.v1.keras.metrics.kullback_leibler_divergence": false, + "tf.compat.v1.keras.metrics.mae": false, + "tf.compat.v1.keras.metrics.mape": false, + "tf.compat.v1.keras.metrics.mean_absolute_error": false, + "tf.compat.v1.keras.metrics.mean_absolute_percentage_error": false, + "tf.compat.v1.keras.metrics.mean_squared_error": false, + "tf.compat.v1.keras.metrics.mean_squared_logarithmic_error": false, + "tf.compat.v1.keras.metrics.mse": false, + "tf.compat.v1.keras.metrics.msle": false, + "tf.compat.v1.keras.metrics.poisson": false, + "tf.compat.v1.keras.metrics.serialize": false, + "tf.compat.v1.keras.metrics.sparse_categorical_accuracy": false, + "tf.compat.v1.keras.metrics.sparse_categorical_crossentropy": false, + "tf.compat.v1.keras.metrics.sparse_top_k_categorical_accuracy": false, + "tf.compat.v1.keras.metrics.squared_hinge": false, + "tf.compat.v1.keras.metrics.top_k_categorical_accuracy": false, + "tf.compat.v1.keras.mixed_precision": false, + "tf.compat.v1.keras.mixed_precision.experimental": false, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer": false, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__eq__": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__ge__": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__gt__": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__init__": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__le__": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__lt__": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__ne__": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.__new__": true, + 
"tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.add_slot": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.add_weight": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.apply_gradients": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.from_config": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_config": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_gradients": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_scaled_loss": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_slot": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_slot_names": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_unscaled_gradients": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_updates": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.get_weights": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.iterations": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.learning_rate": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.loss_scale": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.lr": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.minimize": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.set_weights": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.variables": true, + "tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer.weights": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy": false, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__eq__": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__ge__": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__gt__": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__init__": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__le__": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__lt__": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__ne__": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.__new__": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.compute_dtype": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.from_config": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.get_config": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.loss_scale": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.name": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.should_cast_variables": true, + "tf.compat.v1.keras.mixed_precision.experimental.Policy.variable_dtype": true, + "tf.compat.v1.keras.mixed_precision.experimental.get_layer_policy": false, + "tf.compat.v1.keras.mixed_precision.experimental.global_policy": false, + "tf.compat.v1.keras.mixed_precision.experimental.set_policy": false, + "tf.compat.v1.keras.models": false, + "tf.compat.v1.keras.models.Model": false, + "tf.compat.v1.keras.models.Model.__call__": true, + "tf.compat.v1.keras.models.Model.__eq__": true, + "tf.compat.v1.keras.models.Model.__ge__": true, + "tf.compat.v1.keras.models.Model.__gt__": true, + "tf.compat.v1.keras.models.Model.__init__": 
true, + "tf.compat.v1.keras.models.Model.__le__": true, + "tf.compat.v1.keras.models.Model.__lt__": true, + "tf.compat.v1.keras.models.Model.__ne__": true, + "tf.compat.v1.keras.models.Model.__new__": true, + "tf.compat.v1.keras.models.Model.activity_regularizer": true, + "tf.compat.v1.keras.models.Model.add_loss": true, + "tf.compat.v1.keras.models.Model.add_metric": true, + "tf.compat.v1.keras.models.Model.add_weight": true, + "tf.compat.v1.keras.models.Model.build": true, + "tf.compat.v1.keras.models.Model.call": true, + "tf.compat.v1.keras.models.Model.compile": true, + "tf.compat.v1.keras.models.Model.compute_mask": true, + "tf.compat.v1.keras.models.Model.compute_output_shape": true, + "tf.compat.v1.keras.models.Model.compute_output_signature": true, + "tf.compat.v1.keras.models.Model.count_params": true, + "tf.compat.v1.keras.models.Model.distribute_strategy": true, + "tf.compat.v1.keras.models.Model.dtype": true, + "tf.compat.v1.keras.models.Model.dynamic": true, + "tf.compat.v1.keras.models.Model.evaluate": true, + "tf.compat.v1.keras.models.Model.evaluate_generator": true, + "tf.compat.v1.keras.models.Model.fit": true, + "tf.compat.v1.keras.models.Model.fit_generator": true, + "tf.compat.v1.keras.models.Model.from_config": true, + "tf.compat.v1.keras.models.Model.get_config": true, + "tf.compat.v1.keras.models.Model.get_layer": true, + "tf.compat.v1.keras.models.Model.get_weights": true, + "tf.compat.v1.keras.models.Model.input": true, + "tf.compat.v1.keras.models.Model.input_spec": true, + "tf.compat.v1.keras.models.Model.layers": true, + "tf.compat.v1.keras.models.Model.load_weights": true, + "tf.compat.v1.keras.models.Model.losses": true, + "tf.compat.v1.keras.models.Model.make_predict_function": true, + "tf.compat.v1.keras.models.Model.make_test_function": true, + "tf.compat.v1.keras.models.Model.make_train_function": true, + "tf.compat.v1.keras.models.Model.metrics": true, + "tf.compat.v1.keras.models.Model.metrics_names": true, + "tf.compat.v1.keras.models.Model.name": true, + "tf.compat.v1.keras.models.Model.name_scope": true, + "tf.compat.v1.keras.models.Model.non_trainable_weights": true, + "tf.compat.v1.keras.models.Model.output": true, + "tf.compat.v1.keras.models.Model.predict": true, + "tf.compat.v1.keras.models.Model.predict_generator": true, + "tf.compat.v1.keras.models.Model.predict_on_batch": true, + "tf.compat.v1.keras.models.Model.predict_step": true, + "tf.compat.v1.keras.models.Model.reset_metrics": true, + "tf.compat.v1.keras.models.Model.reset_states": true, + "tf.compat.v1.keras.models.Model.run_eagerly": true, + "tf.compat.v1.keras.models.Model.save": true, + "tf.compat.v1.keras.models.Model.save_weights": true, + "tf.compat.v1.keras.models.Model.set_weights": true, + "tf.compat.v1.keras.models.Model.state_updates": true, + "tf.compat.v1.keras.models.Model.stateful": true, + "tf.compat.v1.keras.models.Model.submodules": true, + "tf.compat.v1.keras.models.Model.summary": true, + "tf.compat.v1.keras.models.Model.test_on_batch": true, + "tf.compat.v1.keras.models.Model.test_step": true, + "tf.compat.v1.keras.models.Model.to_json": true, + "tf.compat.v1.keras.models.Model.to_yaml": true, + "tf.compat.v1.keras.models.Model.train_on_batch": true, + "tf.compat.v1.keras.models.Model.train_step": true, + "tf.compat.v1.keras.models.Model.trainable": true, + "tf.compat.v1.keras.models.Model.trainable_weights": true, + "tf.compat.v1.keras.models.Model.weights": true, + "tf.compat.v1.keras.models.Model.with_name_scope": true, + "tf.compat.v1.keras.models.Sequential": 
false, + "tf.compat.v1.keras.models.Sequential.__call__": true, + "tf.compat.v1.keras.models.Sequential.__eq__": true, + "tf.compat.v1.keras.models.Sequential.__ge__": true, + "tf.compat.v1.keras.models.Sequential.__gt__": true, + "tf.compat.v1.keras.models.Sequential.__init__": true, + "tf.compat.v1.keras.models.Sequential.__le__": true, + "tf.compat.v1.keras.models.Sequential.__lt__": true, + "tf.compat.v1.keras.models.Sequential.__ne__": true, + "tf.compat.v1.keras.models.Sequential.__new__": true, + "tf.compat.v1.keras.models.Sequential.activity_regularizer": true, + "tf.compat.v1.keras.models.Sequential.add": true, + "tf.compat.v1.keras.models.Sequential.add_loss": true, + "tf.compat.v1.keras.models.Sequential.add_metric": true, + "tf.compat.v1.keras.models.Sequential.add_weight": true, + "tf.compat.v1.keras.models.Sequential.build": true, + "tf.compat.v1.keras.models.Sequential.call": true, + "tf.compat.v1.keras.models.Sequential.compile": true, + "tf.compat.v1.keras.models.Sequential.compute_mask": true, + "tf.compat.v1.keras.models.Sequential.compute_output_shape": true, + "tf.compat.v1.keras.models.Sequential.compute_output_signature": true, + "tf.compat.v1.keras.models.Sequential.count_params": true, + "tf.compat.v1.keras.models.Sequential.distribute_strategy": true, + "tf.compat.v1.keras.models.Sequential.dtype": true, + "tf.compat.v1.keras.models.Sequential.dynamic": true, + "tf.compat.v1.keras.models.Sequential.evaluate": true, + "tf.compat.v1.keras.models.Sequential.evaluate_generator": true, + "tf.compat.v1.keras.models.Sequential.fit": true, + "tf.compat.v1.keras.models.Sequential.fit_generator": true, + "tf.compat.v1.keras.models.Sequential.from_config": true, + "tf.compat.v1.keras.models.Sequential.get_config": true, + "tf.compat.v1.keras.models.Sequential.get_layer": true, + "tf.compat.v1.keras.models.Sequential.get_weights": true, + "tf.compat.v1.keras.models.Sequential.input": true, + "tf.compat.v1.keras.models.Sequential.input_spec": true, + "tf.compat.v1.keras.models.Sequential.layers": true, + "tf.compat.v1.keras.models.Sequential.load_weights": true, + "tf.compat.v1.keras.models.Sequential.losses": true, + "tf.compat.v1.keras.models.Sequential.make_predict_function": true, + "tf.compat.v1.keras.models.Sequential.make_test_function": true, + "tf.compat.v1.keras.models.Sequential.make_train_function": true, + "tf.compat.v1.keras.models.Sequential.metrics": true, + "tf.compat.v1.keras.models.Sequential.metrics_names": true, + "tf.compat.v1.keras.models.Sequential.name": true, + "tf.compat.v1.keras.models.Sequential.name_scope": true, + "tf.compat.v1.keras.models.Sequential.non_trainable_weights": true, + "tf.compat.v1.keras.models.Sequential.output": true, + "tf.compat.v1.keras.models.Sequential.pop": true, + "tf.compat.v1.keras.models.Sequential.predict": true, + "tf.compat.v1.keras.models.Sequential.predict_classes": true, + "tf.compat.v1.keras.models.Sequential.predict_generator": true, + "tf.compat.v1.keras.models.Sequential.predict_on_batch": true, + "tf.compat.v1.keras.models.Sequential.predict_proba": true, + "tf.compat.v1.keras.models.Sequential.predict_step": true, + "tf.compat.v1.keras.models.Sequential.reset_metrics": true, + "tf.compat.v1.keras.models.Sequential.reset_states": true, + "tf.compat.v1.keras.models.Sequential.run_eagerly": true, + "tf.compat.v1.keras.models.Sequential.save": true, + "tf.compat.v1.keras.models.Sequential.save_weights": true, + "tf.compat.v1.keras.models.Sequential.set_weights": true, + 
"tf.compat.v1.keras.models.Sequential.state_updates": true, + "tf.compat.v1.keras.models.Sequential.stateful": true, + "tf.compat.v1.keras.models.Sequential.submodules": true, + "tf.compat.v1.keras.models.Sequential.summary": true, + "tf.compat.v1.keras.models.Sequential.test_on_batch": true, + "tf.compat.v1.keras.models.Sequential.test_step": true, + "tf.compat.v1.keras.models.Sequential.to_json": true, + "tf.compat.v1.keras.models.Sequential.to_yaml": true, + "tf.compat.v1.keras.models.Sequential.train_on_batch": true, + "tf.compat.v1.keras.models.Sequential.train_step": true, + "tf.compat.v1.keras.models.Sequential.trainable": true, + "tf.compat.v1.keras.models.Sequential.trainable_weights": true, + "tf.compat.v1.keras.models.Sequential.weights": true, + "tf.compat.v1.keras.models.Sequential.with_name_scope": true, + "tf.compat.v1.keras.models.clone_model": false, + "tf.compat.v1.keras.models.load_model": false, + "tf.compat.v1.keras.models.model_from_config": false, + "tf.compat.v1.keras.models.model_from_json": false, + "tf.compat.v1.keras.models.model_from_yaml": false, + "tf.compat.v1.keras.models.save_model": false, + "tf.compat.v1.keras.optimizers": false, + "tf.compat.v1.keras.optimizers.Adadelta": false, + "tf.compat.v1.keras.optimizers.Adadelta.__eq__": true, + "tf.compat.v1.keras.optimizers.Adadelta.__ge__": true, + "tf.compat.v1.keras.optimizers.Adadelta.__gt__": true, + "tf.compat.v1.keras.optimizers.Adadelta.__init__": true, + "tf.compat.v1.keras.optimizers.Adadelta.__le__": true, + "tf.compat.v1.keras.optimizers.Adadelta.__lt__": true, + "tf.compat.v1.keras.optimizers.Adadelta.__ne__": true, + "tf.compat.v1.keras.optimizers.Adadelta.__new__": true, + "tf.compat.v1.keras.optimizers.Adadelta.add_slot": true, + "tf.compat.v1.keras.optimizers.Adadelta.add_weight": true, + "tf.compat.v1.keras.optimizers.Adadelta.apply_gradients": true, + "tf.compat.v1.keras.optimizers.Adadelta.from_config": true, + "tf.compat.v1.keras.optimizers.Adadelta.get_config": true, + "tf.compat.v1.keras.optimizers.Adadelta.get_gradients": true, + "tf.compat.v1.keras.optimizers.Adadelta.get_slot": true, + "tf.compat.v1.keras.optimizers.Adadelta.get_slot_names": true, + "tf.compat.v1.keras.optimizers.Adadelta.get_updates": true, + "tf.compat.v1.keras.optimizers.Adadelta.get_weights": true, + "tf.compat.v1.keras.optimizers.Adadelta.iterations": true, + "tf.compat.v1.keras.optimizers.Adadelta.minimize": true, + "tf.compat.v1.keras.optimizers.Adadelta.set_weights": true, + "tf.compat.v1.keras.optimizers.Adadelta.variables": true, + "tf.compat.v1.keras.optimizers.Adadelta.weights": true, + "tf.compat.v1.keras.optimizers.Adagrad": false, + "tf.compat.v1.keras.optimizers.Adagrad.__eq__": true, + "tf.compat.v1.keras.optimizers.Adagrad.__ge__": true, + "tf.compat.v1.keras.optimizers.Adagrad.__gt__": true, + "tf.compat.v1.keras.optimizers.Adagrad.__init__": true, + "tf.compat.v1.keras.optimizers.Adagrad.__le__": true, + "tf.compat.v1.keras.optimizers.Adagrad.__lt__": true, + "tf.compat.v1.keras.optimizers.Adagrad.__ne__": true, + "tf.compat.v1.keras.optimizers.Adagrad.__new__": true, + "tf.compat.v1.keras.optimizers.Adagrad.add_slot": true, + "tf.compat.v1.keras.optimizers.Adagrad.add_weight": true, + "tf.compat.v1.keras.optimizers.Adagrad.apply_gradients": true, + "tf.compat.v1.keras.optimizers.Adagrad.from_config": true, + "tf.compat.v1.keras.optimizers.Adagrad.get_config": true, + "tf.compat.v1.keras.optimizers.Adagrad.get_gradients": true, + "tf.compat.v1.keras.optimizers.Adagrad.get_slot": true, + 
"tf.compat.v1.keras.optimizers.Adagrad.get_slot_names": true, + "tf.compat.v1.keras.optimizers.Adagrad.get_updates": true, + "tf.compat.v1.keras.optimizers.Adagrad.get_weights": true, + "tf.compat.v1.keras.optimizers.Adagrad.iterations": true, + "tf.compat.v1.keras.optimizers.Adagrad.minimize": true, + "tf.compat.v1.keras.optimizers.Adagrad.set_weights": true, + "tf.compat.v1.keras.optimizers.Adagrad.variables": true, + "tf.compat.v1.keras.optimizers.Adagrad.weights": true, + "tf.compat.v1.keras.optimizers.Adam": false, + "tf.compat.v1.keras.optimizers.Adam.__eq__": true, + "tf.compat.v1.keras.optimizers.Adam.__ge__": true, + "tf.compat.v1.keras.optimizers.Adam.__gt__": true, + "tf.compat.v1.keras.optimizers.Adam.__init__": true, + "tf.compat.v1.keras.optimizers.Adam.__le__": true, + "tf.compat.v1.keras.optimizers.Adam.__lt__": true, + "tf.compat.v1.keras.optimizers.Adam.__ne__": true, + "tf.compat.v1.keras.optimizers.Adam.__new__": true, + "tf.compat.v1.keras.optimizers.Adam.add_slot": true, + "tf.compat.v1.keras.optimizers.Adam.add_weight": true, + "tf.compat.v1.keras.optimizers.Adam.apply_gradients": true, + "tf.compat.v1.keras.optimizers.Adam.from_config": true, + "tf.compat.v1.keras.optimizers.Adam.get_config": true, + "tf.compat.v1.keras.optimizers.Adam.get_gradients": true, + "tf.compat.v1.keras.optimizers.Adam.get_slot": true, + "tf.compat.v1.keras.optimizers.Adam.get_slot_names": true, + "tf.compat.v1.keras.optimizers.Adam.get_updates": true, + "tf.compat.v1.keras.optimizers.Adam.get_weights": true, + "tf.compat.v1.keras.optimizers.Adam.iterations": true, + "tf.compat.v1.keras.optimizers.Adam.minimize": true, + "tf.compat.v1.keras.optimizers.Adam.set_weights": true, + "tf.compat.v1.keras.optimizers.Adam.variables": true, + "tf.compat.v1.keras.optimizers.Adam.weights": true, + "tf.compat.v1.keras.optimizers.Adamax": false, + "tf.compat.v1.keras.optimizers.Adamax.__eq__": true, + "tf.compat.v1.keras.optimizers.Adamax.__ge__": true, + "tf.compat.v1.keras.optimizers.Adamax.__gt__": true, + "tf.compat.v1.keras.optimizers.Adamax.__init__": true, + "tf.compat.v1.keras.optimizers.Adamax.__le__": true, + "tf.compat.v1.keras.optimizers.Adamax.__lt__": true, + "tf.compat.v1.keras.optimizers.Adamax.__ne__": true, + "tf.compat.v1.keras.optimizers.Adamax.__new__": true, + "tf.compat.v1.keras.optimizers.Adamax.add_slot": true, + "tf.compat.v1.keras.optimizers.Adamax.add_weight": true, + "tf.compat.v1.keras.optimizers.Adamax.apply_gradients": true, + "tf.compat.v1.keras.optimizers.Adamax.from_config": true, + "tf.compat.v1.keras.optimizers.Adamax.get_config": true, + "tf.compat.v1.keras.optimizers.Adamax.get_gradients": true, + "tf.compat.v1.keras.optimizers.Adamax.get_slot": true, + "tf.compat.v1.keras.optimizers.Adamax.get_slot_names": true, + "tf.compat.v1.keras.optimizers.Adamax.get_updates": true, + "tf.compat.v1.keras.optimizers.Adamax.get_weights": true, + "tf.compat.v1.keras.optimizers.Adamax.iterations": true, + "tf.compat.v1.keras.optimizers.Adamax.minimize": true, + "tf.compat.v1.keras.optimizers.Adamax.set_weights": true, + "tf.compat.v1.keras.optimizers.Adamax.variables": true, + "tf.compat.v1.keras.optimizers.Adamax.weights": true, + "tf.compat.v1.keras.optimizers.Ftrl": false, + "tf.compat.v1.keras.optimizers.Ftrl.__eq__": true, + "tf.compat.v1.keras.optimizers.Ftrl.__ge__": true, + "tf.compat.v1.keras.optimizers.Ftrl.__gt__": true, + "tf.compat.v1.keras.optimizers.Ftrl.__init__": true, + "tf.compat.v1.keras.optimizers.Ftrl.__le__": true, + 
"tf.compat.v1.keras.optimizers.Ftrl.__lt__": true, + "tf.compat.v1.keras.optimizers.Ftrl.__ne__": true, + "tf.compat.v1.keras.optimizers.Ftrl.__new__": true, + "tf.compat.v1.keras.optimizers.Ftrl.add_slot": true, + "tf.compat.v1.keras.optimizers.Ftrl.add_weight": true, + "tf.compat.v1.keras.optimizers.Ftrl.apply_gradients": true, + "tf.compat.v1.keras.optimizers.Ftrl.from_config": true, + "tf.compat.v1.keras.optimizers.Ftrl.get_config": true, + "tf.compat.v1.keras.optimizers.Ftrl.get_gradients": true, + "tf.compat.v1.keras.optimizers.Ftrl.get_slot": true, + "tf.compat.v1.keras.optimizers.Ftrl.get_slot_names": true, + "tf.compat.v1.keras.optimizers.Ftrl.get_updates": true, + "tf.compat.v1.keras.optimizers.Ftrl.get_weights": true, + "tf.compat.v1.keras.optimizers.Ftrl.iterations": true, + "tf.compat.v1.keras.optimizers.Ftrl.minimize": true, + "tf.compat.v1.keras.optimizers.Ftrl.set_weights": true, + "tf.compat.v1.keras.optimizers.Ftrl.variables": true, + "tf.compat.v1.keras.optimizers.Ftrl.weights": true, + "tf.compat.v1.keras.optimizers.Nadam": false, + "tf.compat.v1.keras.optimizers.Nadam.__eq__": true, + "tf.compat.v1.keras.optimizers.Nadam.__ge__": true, + "tf.compat.v1.keras.optimizers.Nadam.__gt__": true, + "tf.compat.v1.keras.optimizers.Nadam.__init__": true, + "tf.compat.v1.keras.optimizers.Nadam.__le__": true, + "tf.compat.v1.keras.optimizers.Nadam.__lt__": true, + "tf.compat.v1.keras.optimizers.Nadam.__ne__": true, + "tf.compat.v1.keras.optimizers.Nadam.__new__": true, + "tf.compat.v1.keras.optimizers.Nadam.add_slot": true, + "tf.compat.v1.keras.optimizers.Nadam.add_weight": true, + "tf.compat.v1.keras.optimizers.Nadam.apply_gradients": true, + "tf.compat.v1.keras.optimizers.Nadam.from_config": true, + "tf.compat.v1.keras.optimizers.Nadam.get_config": true, + "tf.compat.v1.keras.optimizers.Nadam.get_gradients": true, + "tf.compat.v1.keras.optimizers.Nadam.get_slot": true, + "tf.compat.v1.keras.optimizers.Nadam.get_slot_names": true, + "tf.compat.v1.keras.optimizers.Nadam.get_updates": true, + "tf.compat.v1.keras.optimizers.Nadam.get_weights": true, + "tf.compat.v1.keras.optimizers.Nadam.iterations": true, + "tf.compat.v1.keras.optimizers.Nadam.minimize": true, + "tf.compat.v1.keras.optimizers.Nadam.set_weights": true, + "tf.compat.v1.keras.optimizers.Nadam.variables": true, + "tf.compat.v1.keras.optimizers.Nadam.weights": true, + "tf.compat.v1.keras.optimizers.Optimizer": false, + "tf.compat.v1.keras.optimizers.Optimizer.__eq__": true, + "tf.compat.v1.keras.optimizers.Optimizer.__ge__": true, + "tf.compat.v1.keras.optimizers.Optimizer.__gt__": true, + "tf.compat.v1.keras.optimizers.Optimizer.__init__": true, + "tf.compat.v1.keras.optimizers.Optimizer.__le__": true, + "tf.compat.v1.keras.optimizers.Optimizer.__lt__": true, + "tf.compat.v1.keras.optimizers.Optimizer.__ne__": true, + "tf.compat.v1.keras.optimizers.Optimizer.__new__": true, + "tf.compat.v1.keras.optimizers.Optimizer.add_slot": true, + "tf.compat.v1.keras.optimizers.Optimizer.add_weight": true, + "tf.compat.v1.keras.optimizers.Optimizer.apply_gradients": true, + "tf.compat.v1.keras.optimizers.Optimizer.from_config": true, + "tf.compat.v1.keras.optimizers.Optimizer.get_config": true, + "tf.compat.v1.keras.optimizers.Optimizer.get_gradients": true, + "tf.compat.v1.keras.optimizers.Optimizer.get_slot": true, + "tf.compat.v1.keras.optimizers.Optimizer.get_slot_names": true, + "tf.compat.v1.keras.optimizers.Optimizer.get_updates": true, + "tf.compat.v1.keras.optimizers.Optimizer.get_weights": true, + 
"tf.compat.v1.keras.optimizers.Optimizer.iterations": true, + "tf.compat.v1.keras.optimizers.Optimizer.minimize": true, + "tf.compat.v1.keras.optimizers.Optimizer.set_weights": true, + "tf.compat.v1.keras.optimizers.Optimizer.variables": true, + "tf.compat.v1.keras.optimizers.Optimizer.weights": true, + "tf.compat.v1.keras.optimizers.RMSprop": false, + "tf.compat.v1.keras.optimizers.RMSprop.__eq__": true, + "tf.compat.v1.keras.optimizers.RMSprop.__ge__": true, + "tf.compat.v1.keras.optimizers.RMSprop.__gt__": true, + "tf.compat.v1.keras.optimizers.RMSprop.__init__": true, + "tf.compat.v1.keras.optimizers.RMSprop.__le__": true, + "tf.compat.v1.keras.optimizers.RMSprop.__lt__": true, + "tf.compat.v1.keras.optimizers.RMSprop.__ne__": true, + "tf.compat.v1.keras.optimizers.RMSprop.__new__": true, + "tf.compat.v1.keras.optimizers.RMSprop.add_slot": true, + "tf.compat.v1.keras.optimizers.RMSprop.add_weight": true, + "tf.compat.v1.keras.optimizers.RMSprop.apply_gradients": true, + "tf.compat.v1.keras.optimizers.RMSprop.from_config": true, + "tf.compat.v1.keras.optimizers.RMSprop.get_config": true, + "tf.compat.v1.keras.optimizers.RMSprop.get_gradients": true, + "tf.compat.v1.keras.optimizers.RMSprop.get_slot": true, + "tf.compat.v1.keras.optimizers.RMSprop.get_slot_names": true, + "tf.compat.v1.keras.optimizers.RMSprop.get_updates": true, + "tf.compat.v1.keras.optimizers.RMSprop.get_weights": true, + "tf.compat.v1.keras.optimizers.RMSprop.iterations": true, + "tf.compat.v1.keras.optimizers.RMSprop.minimize": true, + "tf.compat.v1.keras.optimizers.RMSprop.set_weights": true, + "tf.compat.v1.keras.optimizers.RMSprop.variables": true, + "tf.compat.v1.keras.optimizers.RMSprop.weights": true, + "tf.compat.v1.keras.optimizers.SGD": false, + "tf.compat.v1.keras.optimizers.SGD.__eq__": true, + "tf.compat.v1.keras.optimizers.SGD.__ge__": true, + "tf.compat.v1.keras.optimizers.SGD.__gt__": true, + "tf.compat.v1.keras.optimizers.SGD.__init__": true, + "tf.compat.v1.keras.optimizers.SGD.__le__": true, + "tf.compat.v1.keras.optimizers.SGD.__lt__": true, + "tf.compat.v1.keras.optimizers.SGD.__ne__": true, + "tf.compat.v1.keras.optimizers.SGD.__new__": true, + "tf.compat.v1.keras.optimizers.SGD.add_slot": true, + "tf.compat.v1.keras.optimizers.SGD.add_weight": true, + "tf.compat.v1.keras.optimizers.SGD.apply_gradients": true, + "tf.compat.v1.keras.optimizers.SGD.from_config": true, + "tf.compat.v1.keras.optimizers.SGD.get_config": true, + "tf.compat.v1.keras.optimizers.SGD.get_gradients": true, + "tf.compat.v1.keras.optimizers.SGD.get_slot": true, + "tf.compat.v1.keras.optimizers.SGD.get_slot_names": true, + "tf.compat.v1.keras.optimizers.SGD.get_updates": true, + "tf.compat.v1.keras.optimizers.SGD.get_weights": true, + "tf.compat.v1.keras.optimizers.SGD.iterations": true, + "tf.compat.v1.keras.optimizers.SGD.minimize": true, + "tf.compat.v1.keras.optimizers.SGD.set_weights": true, + "tf.compat.v1.keras.optimizers.SGD.variables": true, + "tf.compat.v1.keras.optimizers.SGD.weights": true, + "tf.compat.v1.keras.optimizers.deserialize": false, + "tf.compat.v1.keras.optimizers.get": false, + "tf.compat.v1.keras.optimizers.schedules": false, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay": false, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__call__": true, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__eq__": true, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__ge__": true, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__gt__": true, + 
"tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__init__": true, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__le__": true, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__lt__": true, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__ne__": true, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.__new__": true, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.from_config": true, + "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay.get_config": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay": false, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__call__": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__eq__": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__ge__": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__gt__": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__init__": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__le__": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__lt__": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__ne__": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.__new__": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.from_config": true, + "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay.get_config": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule": false, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__call__": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__eq__": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__ge__": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__gt__": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__init__": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__le__": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__lt__": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__ne__": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.__new__": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.from_config": true, + "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule.get_config": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay": false, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__call__": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__eq__": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__ge__": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__gt__": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__init__": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__le__": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__lt__": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__ne__": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.__new__": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.from_config": true, + "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay.get_config": true, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay": false, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__call__": true, + 
"tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__eq__": true, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__ge__": true, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__gt__": true, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__init__": true, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__le__": true, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__lt__": true, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__ne__": true, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.__new__": true, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.from_config": true, + "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay.get_config": true, + "tf.compat.v1.keras.optimizers.schedules.deserialize": false, + "tf.compat.v1.keras.optimizers.schedules.serialize": false, + "tf.compat.v1.keras.optimizers.serialize": false, + "tf.compat.v1.keras.preprocessing": false, + "tf.compat.v1.keras.preprocessing.image": false, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator": false, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__eq__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__ge__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__getitem__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__gt__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__init__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__iter__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__le__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__len__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__lt__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__ne__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.__new__": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.allowed_class_modes": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.filepaths": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.labels": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.next": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.on_epoch_end": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.reset": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.sample_weight": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.set_processing_attrs": true, + "tf.compat.v1.keras.preprocessing.image.DirectoryIterator.white_list_formats": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator": false, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__eq__": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__ge__": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__gt__": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__init__": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__le__": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__lt__": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__ne__": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.__new__": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.apply_transform": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.fit": true, + 
"tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.flow": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.flow_from_dataframe": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.flow_from_directory": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.get_random_transform": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.random_transform": true, + "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator.standardize": true, + "tf.compat.v1.keras.preprocessing.image.Iterator": false, + "tf.compat.v1.keras.preprocessing.image.Iterator.__eq__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.__ge__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.__getitem__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.__gt__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.__init__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.__iter__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.__le__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.__len__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.__lt__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.__ne__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.__new__": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.next": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.on_epoch_end": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.reset": true, + "tf.compat.v1.keras.preprocessing.image.Iterator.white_list_formats": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator": false, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__eq__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__ge__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__getitem__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__gt__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__init__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__iter__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__le__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__len__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__lt__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__ne__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.__new__": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.next": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.on_epoch_end": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.reset": true, + "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator.white_list_formats": true, + "tf.compat.v1.keras.preprocessing.image.apply_affine_transform": false, + "tf.compat.v1.keras.preprocessing.image.apply_brightness_shift": false, + "tf.compat.v1.keras.preprocessing.image.apply_channel_shift": false, + "tf.compat.v1.keras.preprocessing.image.array_to_img": false, + "tf.compat.v1.keras.preprocessing.image.img_to_array": false, + "tf.compat.v1.keras.preprocessing.image.load_img": false, + "tf.compat.v1.keras.preprocessing.image.random_brightness": false, + "tf.compat.v1.keras.preprocessing.image.random_channel_shift": false, + "tf.compat.v1.keras.preprocessing.image.random_rotation": false, + "tf.compat.v1.keras.preprocessing.image.random_shear": false, + "tf.compat.v1.keras.preprocessing.image.random_shift": 
false, + "tf.compat.v1.keras.preprocessing.image.random_zoom": false, + "tf.compat.v1.keras.preprocessing.image.save_img": false, + "tf.compat.v1.keras.preprocessing.sequence": false, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator": false, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__eq__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__ge__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__getitem__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__gt__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__init__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__iter__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__le__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__len__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__lt__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__ne__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.__new__": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.get_config": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.on_epoch_end": true, + "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator.to_json": true, + "tf.compat.v1.keras.preprocessing.sequence.make_sampling_table": false, + "tf.compat.v1.keras.preprocessing.sequence.pad_sequences": false, + "tf.compat.v1.keras.preprocessing.sequence.skipgrams": false, + "tf.compat.v1.keras.preprocessing.text": false, + "tf.compat.v1.keras.preprocessing.text.Tokenizer": false, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__eq__": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__ge__": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__gt__": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__init__": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__le__": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__lt__": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__ne__": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.__new__": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.fit_on_sequences": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.fit_on_texts": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.get_config": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.sequences_to_matrix": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.sequences_to_texts": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.sequences_to_texts_generator": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.texts_to_matrix": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.texts_to_sequences": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.texts_to_sequences_generator": true, + "tf.compat.v1.keras.preprocessing.text.Tokenizer.to_json": true, + "tf.compat.v1.keras.preprocessing.text.hashing_trick": false, + "tf.compat.v1.keras.preprocessing.text.one_hot": false, + "tf.compat.v1.keras.preprocessing.text.text_to_word_sequence": false, + "tf.compat.v1.keras.preprocessing.text.tokenizer_from_json": false, + "tf.compat.v1.keras.regularizers": false, + "tf.compat.v1.keras.regularizers.L1L2": false, + "tf.compat.v1.keras.regularizers.L1L2.__call__": true, + "tf.compat.v1.keras.regularizers.L1L2.__eq__": true, + "tf.compat.v1.keras.regularizers.L1L2.__ge__": true, + 
"tf.compat.v1.keras.regularizers.L1L2.__gt__": true, + "tf.compat.v1.keras.regularizers.L1L2.__init__": true, + "tf.compat.v1.keras.regularizers.L1L2.__le__": true, + "tf.compat.v1.keras.regularizers.L1L2.__lt__": true, + "tf.compat.v1.keras.regularizers.L1L2.__ne__": true, + "tf.compat.v1.keras.regularizers.L1L2.__new__": true, + "tf.compat.v1.keras.regularizers.L1L2.from_config": true, + "tf.compat.v1.keras.regularizers.L1L2.get_config": true, + "tf.compat.v1.keras.regularizers.Regularizer": false, + "tf.compat.v1.keras.regularizers.Regularizer.__call__": true, + "tf.compat.v1.keras.regularizers.Regularizer.__eq__": true, + "tf.compat.v1.keras.regularizers.Regularizer.__ge__": true, + "tf.compat.v1.keras.regularizers.Regularizer.__gt__": true, + "tf.compat.v1.keras.regularizers.Regularizer.__init__": true, + "tf.compat.v1.keras.regularizers.Regularizer.__le__": true, + "tf.compat.v1.keras.regularizers.Regularizer.__lt__": true, + "tf.compat.v1.keras.regularizers.Regularizer.__ne__": true, + "tf.compat.v1.keras.regularizers.Regularizer.__new__": true, + "tf.compat.v1.keras.regularizers.Regularizer.from_config": true, + "tf.compat.v1.keras.regularizers.Regularizer.get_config": true, + "tf.compat.v1.keras.regularizers.deserialize": false, + "tf.compat.v1.keras.regularizers.get": false, + "tf.compat.v1.keras.regularizers.l1": false, + "tf.compat.v1.keras.regularizers.l1_l2": false, + "tf.compat.v1.keras.regularizers.l2": false, + "tf.compat.v1.keras.regularizers.serialize": false, + "tf.compat.v1.keras.utils": false, + "tf.compat.v1.keras.utils.CustomObjectScope": false, + "tf.compat.v1.keras.utils.CustomObjectScope.__enter__": true, + "tf.compat.v1.keras.utils.CustomObjectScope.__eq__": true, + "tf.compat.v1.keras.utils.CustomObjectScope.__exit__": true, + "tf.compat.v1.keras.utils.CustomObjectScope.__ge__": true, + "tf.compat.v1.keras.utils.CustomObjectScope.__gt__": true, + "tf.compat.v1.keras.utils.CustomObjectScope.__init__": true, + "tf.compat.v1.keras.utils.CustomObjectScope.__le__": true, + "tf.compat.v1.keras.utils.CustomObjectScope.__lt__": true, + "tf.compat.v1.keras.utils.CustomObjectScope.__ne__": true, + "tf.compat.v1.keras.utils.CustomObjectScope.__new__": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer": false, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__eq__": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__ge__": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__gt__": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__init__": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__le__": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__lt__": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__ne__": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.__new__": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.get": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.is_running": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.start": true, + "tf.compat.v1.keras.utils.GeneratorEnqueuer.stop": true, + "tf.compat.v1.keras.utils.HDF5Matrix": false, + "tf.compat.v1.keras.utils.HDF5Matrix.__eq__": true, + "tf.compat.v1.keras.utils.HDF5Matrix.__ge__": true, + "tf.compat.v1.keras.utils.HDF5Matrix.__getitem__": true, + "tf.compat.v1.keras.utils.HDF5Matrix.__gt__": true, + "tf.compat.v1.keras.utils.HDF5Matrix.__init__": true, + "tf.compat.v1.keras.utils.HDF5Matrix.__le__": true, + "tf.compat.v1.keras.utils.HDF5Matrix.__len__": true, + "tf.compat.v1.keras.utils.HDF5Matrix.__lt__": true, + "tf.compat.v1.keras.utils.HDF5Matrix.__ne__": true, + 
"tf.compat.v1.keras.utils.HDF5Matrix.__new__": true, + "tf.compat.v1.keras.utils.HDF5Matrix.dtype": true, + "tf.compat.v1.keras.utils.HDF5Matrix.ndim": true, + "tf.compat.v1.keras.utils.HDF5Matrix.refs": true, + "tf.compat.v1.keras.utils.HDF5Matrix.shape": true, + "tf.compat.v1.keras.utils.HDF5Matrix.size": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer": false, + "tf.compat.v1.keras.utils.OrderedEnqueuer.__eq__": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.__ge__": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.__gt__": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.__init__": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.__le__": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.__lt__": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.__ne__": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.__new__": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.get": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.is_running": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.start": true, + "tf.compat.v1.keras.utils.OrderedEnqueuer.stop": true, + "tf.compat.v1.keras.utils.Progbar": false, + "tf.compat.v1.keras.utils.Progbar.__eq__": true, + "tf.compat.v1.keras.utils.Progbar.__ge__": true, + "tf.compat.v1.keras.utils.Progbar.__gt__": true, + "tf.compat.v1.keras.utils.Progbar.__init__": true, + "tf.compat.v1.keras.utils.Progbar.__le__": true, + "tf.compat.v1.keras.utils.Progbar.__lt__": true, + "tf.compat.v1.keras.utils.Progbar.__ne__": true, + "tf.compat.v1.keras.utils.Progbar.__new__": true, + "tf.compat.v1.keras.utils.Progbar.add": true, + "tf.compat.v1.keras.utils.Progbar.update": true, + "tf.compat.v1.keras.utils.Sequence": false, + "tf.compat.v1.keras.utils.Sequence.__eq__": true, + "tf.compat.v1.keras.utils.Sequence.__ge__": true, + "tf.compat.v1.keras.utils.Sequence.__getitem__": true, + "tf.compat.v1.keras.utils.Sequence.__gt__": true, + "tf.compat.v1.keras.utils.Sequence.__init__": true, + "tf.compat.v1.keras.utils.Sequence.__iter__": true, + "tf.compat.v1.keras.utils.Sequence.__le__": true, + "tf.compat.v1.keras.utils.Sequence.__len__": true, + "tf.compat.v1.keras.utils.Sequence.__lt__": true, + "tf.compat.v1.keras.utils.Sequence.__ne__": true, + "tf.compat.v1.keras.utils.Sequence.__new__": true, + "tf.compat.v1.keras.utils.Sequence.on_epoch_end": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer": false, + "tf.compat.v1.keras.utils.SequenceEnqueuer.__eq__": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.__ge__": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.__gt__": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.__init__": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.__le__": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.__lt__": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.__ne__": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.__new__": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.get": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.is_running": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.start": true, + "tf.compat.v1.keras.utils.SequenceEnqueuer.stop": true, + "tf.compat.v1.keras.utils.convert_all_kernels_in_model": false, + "tf.compat.v1.keras.utils.custom_object_scope": false, + "tf.compat.v1.keras.utils.deserialize_keras_object": false, + "tf.compat.v1.keras.utils.get_custom_objects": false, + "tf.compat.v1.keras.utils.get_file": false, + "tf.compat.v1.keras.utils.get_registered_name": false, + "tf.compat.v1.keras.utils.get_registered_object": false, + "tf.compat.v1.keras.utils.get_source_inputs": false, + 
"tf.compat.v1.keras.utils.model_to_dot": false, + "tf.compat.v1.keras.utils.multi_gpu_model": false, + "tf.compat.v1.keras.utils.normalize": false, + "tf.compat.v1.keras.utils.plot_model": false, + "tf.compat.v1.keras.utils.register_keras_serializable": false, + "tf.compat.v1.keras.utils.serialize_keras_object": false, + "tf.compat.v1.keras.utils.to_categorical": false, + "tf.compat.v1.keras.wrappers": false, + "tf.compat.v1.keras.wrappers.scikit_learn": false, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier": false, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__eq__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__ge__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__gt__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__init__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__le__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__lt__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__ne__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.__new__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.check_params": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.filter_sk_params": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.fit": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.get_params": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.predict": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.predict_proba": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.score": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier.set_params": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor": false, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__eq__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__ge__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__gt__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__init__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__le__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__lt__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__ne__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.__new__": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.check_params": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.filter_sk_params": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.fit": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.get_params": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.predict": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.score": true, + "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor.set_params": true, + "tf.compat.v1.layers": false, + "tf.compat.v1.layers.AveragePooling1D": false, + "tf.compat.v1.layers.AveragePooling1D.__call__": true, + "tf.compat.v1.layers.AveragePooling1D.__eq__": true, + "tf.compat.v1.layers.AveragePooling1D.__ge__": true, + "tf.compat.v1.layers.AveragePooling1D.__gt__": true, + "tf.compat.v1.layers.AveragePooling1D.__init__": true, + "tf.compat.v1.layers.AveragePooling1D.__le__": true, + "tf.compat.v1.layers.AveragePooling1D.__lt__": true, + "tf.compat.v1.layers.AveragePooling1D.__ne__": true, + "tf.compat.v1.layers.AveragePooling1D.__new__": true, + 
"tf.compat.v1.layers.AveragePooling1D.activity_regularizer": true, + "tf.compat.v1.layers.AveragePooling1D.add_loss": true, + "tf.compat.v1.layers.AveragePooling1D.add_metric": true, + "tf.compat.v1.layers.AveragePooling1D.add_weight": true, + "tf.compat.v1.layers.AveragePooling1D.build": true, + "tf.compat.v1.layers.AveragePooling1D.call": true, + "tf.compat.v1.layers.AveragePooling1D.compute_mask": true, + "tf.compat.v1.layers.AveragePooling1D.compute_output_shape": true, + "tf.compat.v1.layers.AveragePooling1D.compute_output_signature": true, + "tf.compat.v1.layers.AveragePooling1D.count_params": true, + "tf.compat.v1.layers.AveragePooling1D.dtype": true, + "tf.compat.v1.layers.AveragePooling1D.dynamic": true, + "tf.compat.v1.layers.AveragePooling1D.from_config": true, + "tf.compat.v1.layers.AveragePooling1D.get_config": true, + "tf.compat.v1.layers.AveragePooling1D.get_weights": true, + "tf.compat.v1.layers.AveragePooling1D.graph": true, + "tf.compat.v1.layers.AveragePooling1D.input": true, + "tf.compat.v1.layers.AveragePooling1D.input_spec": true, + "tf.compat.v1.layers.AveragePooling1D.losses": true, + "tf.compat.v1.layers.AveragePooling1D.metrics": true, + "tf.compat.v1.layers.AveragePooling1D.name": true, + "tf.compat.v1.layers.AveragePooling1D.name_scope": true, + "tf.compat.v1.layers.AveragePooling1D.non_trainable_weights": true, + "tf.compat.v1.layers.AveragePooling1D.output": true, + "tf.compat.v1.layers.AveragePooling1D.scope_name": true, + "tf.compat.v1.layers.AveragePooling1D.set_weights": true, + "tf.compat.v1.layers.AveragePooling1D.submodules": true, + "tf.compat.v1.layers.AveragePooling1D.trainable": true, + "tf.compat.v1.layers.AveragePooling1D.trainable_weights": true, + "tf.compat.v1.layers.AveragePooling1D.weights": true, + "tf.compat.v1.layers.AveragePooling1D.with_name_scope": true, + "tf.compat.v1.layers.AveragePooling2D": false, + "tf.compat.v1.layers.AveragePooling2D.__call__": true, + "tf.compat.v1.layers.AveragePooling2D.__eq__": true, + "tf.compat.v1.layers.AveragePooling2D.__ge__": true, + "tf.compat.v1.layers.AveragePooling2D.__gt__": true, + "tf.compat.v1.layers.AveragePooling2D.__init__": true, + "tf.compat.v1.layers.AveragePooling2D.__le__": true, + "tf.compat.v1.layers.AveragePooling2D.__lt__": true, + "tf.compat.v1.layers.AveragePooling2D.__ne__": true, + "tf.compat.v1.layers.AveragePooling2D.__new__": true, + "tf.compat.v1.layers.AveragePooling2D.activity_regularizer": true, + "tf.compat.v1.layers.AveragePooling2D.add_loss": true, + "tf.compat.v1.layers.AveragePooling2D.add_metric": true, + "tf.compat.v1.layers.AveragePooling2D.add_weight": true, + "tf.compat.v1.layers.AveragePooling2D.build": true, + "tf.compat.v1.layers.AveragePooling2D.call": true, + "tf.compat.v1.layers.AveragePooling2D.compute_mask": true, + "tf.compat.v1.layers.AveragePooling2D.compute_output_shape": true, + "tf.compat.v1.layers.AveragePooling2D.compute_output_signature": true, + "tf.compat.v1.layers.AveragePooling2D.count_params": true, + "tf.compat.v1.layers.AveragePooling2D.dtype": true, + "tf.compat.v1.layers.AveragePooling2D.dynamic": true, + "tf.compat.v1.layers.AveragePooling2D.from_config": true, + "tf.compat.v1.layers.AveragePooling2D.get_config": true, + "tf.compat.v1.layers.AveragePooling2D.get_weights": true, + "tf.compat.v1.layers.AveragePooling2D.graph": true, + "tf.compat.v1.layers.AveragePooling2D.input": true, + "tf.compat.v1.layers.AveragePooling2D.input_spec": true, + "tf.compat.v1.layers.AveragePooling2D.losses": true, + 
"tf.compat.v1.layers.AveragePooling2D.metrics": true, + "tf.compat.v1.layers.AveragePooling2D.name": true, + "tf.compat.v1.layers.AveragePooling2D.name_scope": true, + "tf.compat.v1.layers.AveragePooling2D.non_trainable_weights": true, + "tf.compat.v1.layers.AveragePooling2D.output": true, + "tf.compat.v1.layers.AveragePooling2D.scope_name": true, + "tf.compat.v1.layers.AveragePooling2D.set_weights": true, + "tf.compat.v1.layers.AveragePooling2D.submodules": true, + "tf.compat.v1.layers.AveragePooling2D.trainable": true, + "tf.compat.v1.layers.AveragePooling2D.trainable_weights": true, + "tf.compat.v1.layers.AveragePooling2D.weights": true, + "tf.compat.v1.layers.AveragePooling2D.with_name_scope": true, + "tf.compat.v1.layers.AveragePooling3D": false, + "tf.compat.v1.layers.AveragePooling3D.__call__": true, + "tf.compat.v1.layers.AveragePooling3D.__eq__": true, + "tf.compat.v1.layers.AveragePooling3D.__ge__": true, + "tf.compat.v1.layers.AveragePooling3D.__gt__": true, + "tf.compat.v1.layers.AveragePooling3D.__init__": true, + "tf.compat.v1.layers.AveragePooling3D.__le__": true, + "tf.compat.v1.layers.AveragePooling3D.__lt__": true, + "tf.compat.v1.layers.AveragePooling3D.__ne__": true, + "tf.compat.v1.layers.AveragePooling3D.__new__": true, + "tf.compat.v1.layers.AveragePooling3D.activity_regularizer": true, + "tf.compat.v1.layers.AveragePooling3D.add_loss": true, + "tf.compat.v1.layers.AveragePooling3D.add_metric": true, + "tf.compat.v1.layers.AveragePooling3D.add_weight": true, + "tf.compat.v1.layers.AveragePooling3D.build": true, + "tf.compat.v1.layers.AveragePooling3D.call": true, + "tf.compat.v1.layers.AveragePooling3D.compute_mask": true, + "tf.compat.v1.layers.AveragePooling3D.compute_output_shape": true, + "tf.compat.v1.layers.AveragePooling3D.compute_output_signature": true, + "tf.compat.v1.layers.AveragePooling3D.count_params": true, + "tf.compat.v1.layers.AveragePooling3D.dtype": true, + "tf.compat.v1.layers.AveragePooling3D.dynamic": true, + "tf.compat.v1.layers.AveragePooling3D.from_config": true, + "tf.compat.v1.layers.AveragePooling3D.get_config": true, + "tf.compat.v1.layers.AveragePooling3D.get_weights": true, + "tf.compat.v1.layers.AveragePooling3D.graph": true, + "tf.compat.v1.layers.AveragePooling3D.input": true, + "tf.compat.v1.layers.AveragePooling3D.input_spec": true, + "tf.compat.v1.layers.AveragePooling3D.losses": true, + "tf.compat.v1.layers.AveragePooling3D.metrics": true, + "tf.compat.v1.layers.AveragePooling3D.name": true, + "tf.compat.v1.layers.AveragePooling3D.name_scope": true, + "tf.compat.v1.layers.AveragePooling3D.non_trainable_weights": true, + "tf.compat.v1.layers.AveragePooling3D.output": true, + "tf.compat.v1.layers.AveragePooling3D.scope_name": true, + "tf.compat.v1.layers.AveragePooling3D.set_weights": true, + "tf.compat.v1.layers.AveragePooling3D.submodules": true, + "tf.compat.v1.layers.AveragePooling3D.trainable": true, + "tf.compat.v1.layers.AveragePooling3D.trainable_weights": true, + "tf.compat.v1.layers.AveragePooling3D.weights": true, + "tf.compat.v1.layers.AveragePooling3D.with_name_scope": true, + "tf.compat.v1.layers.BatchNormalization": false, + "tf.compat.v1.layers.BatchNormalization.__call__": true, + "tf.compat.v1.layers.BatchNormalization.__eq__": true, + "tf.compat.v1.layers.BatchNormalization.__ge__": true, + "tf.compat.v1.layers.BatchNormalization.__gt__": true, + "tf.compat.v1.layers.BatchNormalization.__init__": true, + "tf.compat.v1.layers.BatchNormalization.__le__": true, + "tf.compat.v1.layers.BatchNormalization.__lt__": 
true, + "tf.compat.v1.layers.BatchNormalization.__ne__": true, + "tf.compat.v1.layers.BatchNormalization.__new__": true, + "tf.compat.v1.layers.BatchNormalization.activity_regularizer": true, + "tf.compat.v1.layers.BatchNormalization.add_loss": true, + "tf.compat.v1.layers.BatchNormalization.add_metric": true, + "tf.compat.v1.layers.BatchNormalization.add_weight": true, + "tf.compat.v1.layers.BatchNormalization.build": true, + "tf.compat.v1.layers.BatchNormalization.call": true, + "tf.compat.v1.layers.BatchNormalization.compute_mask": true, + "tf.compat.v1.layers.BatchNormalization.compute_output_shape": true, + "tf.compat.v1.layers.BatchNormalization.compute_output_signature": true, + "tf.compat.v1.layers.BatchNormalization.count_params": true, + "tf.compat.v1.layers.BatchNormalization.dtype": true, + "tf.compat.v1.layers.BatchNormalization.dynamic": true, + "tf.compat.v1.layers.BatchNormalization.from_config": true, + "tf.compat.v1.layers.BatchNormalization.get_config": true, + "tf.compat.v1.layers.BatchNormalization.get_weights": true, + "tf.compat.v1.layers.BatchNormalization.graph": true, + "tf.compat.v1.layers.BatchNormalization.input": true, + "tf.compat.v1.layers.BatchNormalization.input_spec": true, + "tf.compat.v1.layers.BatchNormalization.losses": true, + "tf.compat.v1.layers.BatchNormalization.metrics": true, + "tf.compat.v1.layers.BatchNormalization.name": true, + "tf.compat.v1.layers.BatchNormalization.name_scope": true, + "tf.compat.v1.layers.BatchNormalization.non_trainable_weights": true, + "tf.compat.v1.layers.BatchNormalization.output": true, + "tf.compat.v1.layers.BatchNormalization.scope_name": true, + "tf.compat.v1.layers.BatchNormalization.set_weights": true, + "tf.compat.v1.layers.BatchNormalization.submodules": true, + "tf.compat.v1.layers.BatchNormalization.trainable": true, + "tf.compat.v1.layers.BatchNormalization.trainable_weights": true, + "tf.compat.v1.layers.BatchNormalization.weights": true, + "tf.compat.v1.layers.BatchNormalization.with_name_scope": true, + "tf.compat.v1.layers.Conv1D": false, + "tf.compat.v1.layers.Conv1D.__call__": true, + "tf.compat.v1.layers.Conv1D.__eq__": true, + "tf.compat.v1.layers.Conv1D.__ge__": true, + "tf.compat.v1.layers.Conv1D.__gt__": true, + "tf.compat.v1.layers.Conv1D.__init__": true, + "tf.compat.v1.layers.Conv1D.__le__": true, + "tf.compat.v1.layers.Conv1D.__lt__": true, + "tf.compat.v1.layers.Conv1D.__ne__": true, + "tf.compat.v1.layers.Conv1D.__new__": true, + "tf.compat.v1.layers.Conv1D.activity_regularizer": true, + "tf.compat.v1.layers.Conv1D.add_loss": true, + "tf.compat.v1.layers.Conv1D.add_metric": true, + "tf.compat.v1.layers.Conv1D.add_weight": true, + "tf.compat.v1.layers.Conv1D.build": true, + "tf.compat.v1.layers.Conv1D.call": true, + "tf.compat.v1.layers.Conv1D.compute_mask": true, + "tf.compat.v1.layers.Conv1D.compute_output_shape": true, + "tf.compat.v1.layers.Conv1D.compute_output_signature": true, + "tf.compat.v1.layers.Conv1D.count_params": true, + "tf.compat.v1.layers.Conv1D.dtype": true, + "tf.compat.v1.layers.Conv1D.dynamic": true, + "tf.compat.v1.layers.Conv1D.from_config": true, + "tf.compat.v1.layers.Conv1D.get_config": true, + "tf.compat.v1.layers.Conv1D.get_weights": true, + "tf.compat.v1.layers.Conv1D.graph": true, + "tf.compat.v1.layers.Conv1D.input": true, + "tf.compat.v1.layers.Conv1D.input_spec": true, + "tf.compat.v1.layers.Conv1D.losses": true, + "tf.compat.v1.layers.Conv1D.metrics": true, + "tf.compat.v1.layers.Conv1D.name": true, + "tf.compat.v1.layers.Conv1D.name_scope": true, + 
"tf.compat.v1.layers.Conv1D.non_trainable_weights": true, + "tf.compat.v1.layers.Conv1D.output": true, + "tf.compat.v1.layers.Conv1D.scope_name": true, + "tf.compat.v1.layers.Conv1D.set_weights": true, + "tf.compat.v1.layers.Conv1D.submodules": true, + "tf.compat.v1.layers.Conv1D.trainable": true, + "tf.compat.v1.layers.Conv1D.trainable_weights": true, + "tf.compat.v1.layers.Conv1D.weights": true, + "tf.compat.v1.layers.Conv1D.with_name_scope": true, + "tf.compat.v1.layers.Conv2D": false, + "tf.compat.v1.layers.Conv2D.__call__": true, + "tf.compat.v1.layers.Conv2D.__eq__": true, + "tf.compat.v1.layers.Conv2D.__ge__": true, + "tf.compat.v1.layers.Conv2D.__gt__": true, + "tf.compat.v1.layers.Conv2D.__init__": true, + "tf.compat.v1.layers.Conv2D.__le__": true, + "tf.compat.v1.layers.Conv2D.__lt__": true, + "tf.compat.v1.layers.Conv2D.__ne__": true, + "tf.compat.v1.layers.Conv2D.__new__": true, + "tf.compat.v1.layers.Conv2D.activity_regularizer": true, + "tf.compat.v1.layers.Conv2D.add_loss": true, + "tf.compat.v1.layers.Conv2D.add_metric": true, + "tf.compat.v1.layers.Conv2D.add_weight": true, + "tf.compat.v1.layers.Conv2D.build": true, + "tf.compat.v1.layers.Conv2D.call": true, + "tf.compat.v1.layers.Conv2D.compute_mask": true, + "tf.compat.v1.layers.Conv2D.compute_output_shape": true, + "tf.compat.v1.layers.Conv2D.compute_output_signature": true, + "tf.compat.v1.layers.Conv2D.count_params": true, + "tf.compat.v1.layers.Conv2D.dtype": true, + "tf.compat.v1.layers.Conv2D.dynamic": true, + "tf.compat.v1.layers.Conv2D.from_config": true, + "tf.compat.v1.layers.Conv2D.get_config": true, + "tf.compat.v1.layers.Conv2D.get_weights": true, + "tf.compat.v1.layers.Conv2D.graph": true, + "tf.compat.v1.layers.Conv2D.input": true, + "tf.compat.v1.layers.Conv2D.input_spec": true, + "tf.compat.v1.layers.Conv2D.losses": true, + "tf.compat.v1.layers.Conv2D.metrics": true, + "tf.compat.v1.layers.Conv2D.name": true, + "tf.compat.v1.layers.Conv2D.name_scope": true, + "tf.compat.v1.layers.Conv2D.non_trainable_weights": true, + "tf.compat.v1.layers.Conv2D.output": true, + "tf.compat.v1.layers.Conv2D.scope_name": true, + "tf.compat.v1.layers.Conv2D.set_weights": true, + "tf.compat.v1.layers.Conv2D.submodules": true, + "tf.compat.v1.layers.Conv2D.trainable": true, + "tf.compat.v1.layers.Conv2D.trainable_weights": true, + "tf.compat.v1.layers.Conv2D.weights": true, + "tf.compat.v1.layers.Conv2D.with_name_scope": true, + "tf.compat.v1.layers.Conv2DTranspose": false, + "tf.compat.v1.layers.Conv2DTranspose.__call__": true, + "tf.compat.v1.layers.Conv2DTranspose.__eq__": true, + "tf.compat.v1.layers.Conv2DTranspose.__ge__": true, + "tf.compat.v1.layers.Conv2DTranspose.__gt__": true, + "tf.compat.v1.layers.Conv2DTranspose.__init__": true, + "tf.compat.v1.layers.Conv2DTranspose.__le__": true, + "tf.compat.v1.layers.Conv2DTranspose.__lt__": true, + "tf.compat.v1.layers.Conv2DTranspose.__ne__": true, + "tf.compat.v1.layers.Conv2DTranspose.__new__": true, + "tf.compat.v1.layers.Conv2DTranspose.activity_regularizer": true, + "tf.compat.v1.layers.Conv2DTranspose.add_loss": true, + "tf.compat.v1.layers.Conv2DTranspose.add_metric": true, + "tf.compat.v1.layers.Conv2DTranspose.add_weight": true, + "tf.compat.v1.layers.Conv2DTranspose.build": true, + "tf.compat.v1.layers.Conv2DTranspose.call": true, + "tf.compat.v1.layers.Conv2DTranspose.compute_mask": true, + "tf.compat.v1.layers.Conv2DTranspose.compute_output_shape": true, + "tf.compat.v1.layers.Conv2DTranspose.compute_output_signature": true, + 
"tf.compat.v1.layers.Conv2DTranspose.count_params": true, + "tf.compat.v1.layers.Conv2DTranspose.dtype": true, + "tf.compat.v1.layers.Conv2DTranspose.dynamic": true, + "tf.compat.v1.layers.Conv2DTranspose.from_config": true, + "tf.compat.v1.layers.Conv2DTranspose.get_config": true, + "tf.compat.v1.layers.Conv2DTranspose.get_weights": true, + "tf.compat.v1.layers.Conv2DTranspose.graph": true, + "tf.compat.v1.layers.Conv2DTranspose.input": true, + "tf.compat.v1.layers.Conv2DTranspose.input_spec": true, + "tf.compat.v1.layers.Conv2DTranspose.losses": true, + "tf.compat.v1.layers.Conv2DTranspose.metrics": true, + "tf.compat.v1.layers.Conv2DTranspose.name": true, + "tf.compat.v1.layers.Conv2DTranspose.name_scope": true, + "tf.compat.v1.layers.Conv2DTranspose.non_trainable_weights": true, + "tf.compat.v1.layers.Conv2DTranspose.output": true, + "tf.compat.v1.layers.Conv2DTranspose.scope_name": true, + "tf.compat.v1.layers.Conv2DTranspose.set_weights": true, + "tf.compat.v1.layers.Conv2DTranspose.submodules": true, + "tf.compat.v1.layers.Conv2DTranspose.trainable": true, + "tf.compat.v1.layers.Conv2DTranspose.trainable_weights": true, + "tf.compat.v1.layers.Conv2DTranspose.weights": true, + "tf.compat.v1.layers.Conv2DTranspose.with_name_scope": true, + "tf.compat.v1.layers.Conv3D": false, + "tf.compat.v1.layers.Conv3D.__call__": true, + "tf.compat.v1.layers.Conv3D.__eq__": true, + "tf.compat.v1.layers.Conv3D.__ge__": true, + "tf.compat.v1.layers.Conv3D.__gt__": true, + "tf.compat.v1.layers.Conv3D.__init__": true, + "tf.compat.v1.layers.Conv3D.__le__": true, + "tf.compat.v1.layers.Conv3D.__lt__": true, + "tf.compat.v1.layers.Conv3D.__ne__": true, + "tf.compat.v1.layers.Conv3D.__new__": true, + "tf.compat.v1.layers.Conv3D.activity_regularizer": true, + "tf.compat.v1.layers.Conv3D.add_loss": true, + "tf.compat.v1.layers.Conv3D.add_metric": true, + "tf.compat.v1.layers.Conv3D.add_weight": true, + "tf.compat.v1.layers.Conv3D.build": true, + "tf.compat.v1.layers.Conv3D.call": true, + "tf.compat.v1.layers.Conv3D.compute_mask": true, + "tf.compat.v1.layers.Conv3D.compute_output_shape": true, + "tf.compat.v1.layers.Conv3D.compute_output_signature": true, + "tf.compat.v1.layers.Conv3D.count_params": true, + "tf.compat.v1.layers.Conv3D.dtype": true, + "tf.compat.v1.layers.Conv3D.dynamic": true, + "tf.compat.v1.layers.Conv3D.from_config": true, + "tf.compat.v1.layers.Conv3D.get_config": true, + "tf.compat.v1.layers.Conv3D.get_weights": true, + "tf.compat.v1.layers.Conv3D.graph": true, + "tf.compat.v1.layers.Conv3D.input": true, + "tf.compat.v1.layers.Conv3D.input_spec": true, + "tf.compat.v1.layers.Conv3D.losses": true, + "tf.compat.v1.layers.Conv3D.metrics": true, + "tf.compat.v1.layers.Conv3D.name": true, + "tf.compat.v1.layers.Conv3D.name_scope": true, + "tf.compat.v1.layers.Conv3D.non_trainable_weights": true, + "tf.compat.v1.layers.Conv3D.output": true, + "tf.compat.v1.layers.Conv3D.scope_name": true, + "tf.compat.v1.layers.Conv3D.set_weights": true, + "tf.compat.v1.layers.Conv3D.submodules": true, + "tf.compat.v1.layers.Conv3D.trainable": true, + "tf.compat.v1.layers.Conv3D.trainable_weights": true, + "tf.compat.v1.layers.Conv3D.weights": true, + "tf.compat.v1.layers.Conv3D.with_name_scope": true, + "tf.compat.v1.layers.Conv3DTranspose": false, + "tf.compat.v1.layers.Conv3DTranspose.__call__": true, + "tf.compat.v1.layers.Conv3DTranspose.__eq__": true, + "tf.compat.v1.layers.Conv3DTranspose.__ge__": true, + "tf.compat.v1.layers.Conv3DTranspose.__gt__": true, + 
"tf.compat.v1.layers.Conv3DTranspose.__init__": true, + "tf.compat.v1.layers.Conv3DTranspose.__le__": true, + "tf.compat.v1.layers.Conv3DTranspose.__lt__": true, + "tf.compat.v1.layers.Conv3DTranspose.__ne__": true, + "tf.compat.v1.layers.Conv3DTranspose.__new__": true, + "tf.compat.v1.layers.Conv3DTranspose.activity_regularizer": true, + "tf.compat.v1.layers.Conv3DTranspose.add_loss": true, + "tf.compat.v1.layers.Conv3DTranspose.add_metric": true, + "tf.compat.v1.layers.Conv3DTranspose.add_weight": true, + "tf.compat.v1.layers.Conv3DTranspose.build": true, + "tf.compat.v1.layers.Conv3DTranspose.call": true, + "tf.compat.v1.layers.Conv3DTranspose.compute_mask": true, + "tf.compat.v1.layers.Conv3DTranspose.compute_output_shape": true, + "tf.compat.v1.layers.Conv3DTranspose.compute_output_signature": true, + "tf.compat.v1.layers.Conv3DTranspose.count_params": true, + "tf.compat.v1.layers.Conv3DTranspose.dtype": true, + "tf.compat.v1.layers.Conv3DTranspose.dynamic": true, + "tf.compat.v1.layers.Conv3DTranspose.from_config": true, + "tf.compat.v1.layers.Conv3DTranspose.get_config": true, + "tf.compat.v1.layers.Conv3DTranspose.get_weights": true, + "tf.compat.v1.layers.Conv3DTranspose.graph": true, + "tf.compat.v1.layers.Conv3DTranspose.input": true, + "tf.compat.v1.layers.Conv3DTranspose.input_spec": true, + "tf.compat.v1.layers.Conv3DTranspose.losses": true, + "tf.compat.v1.layers.Conv3DTranspose.metrics": true, + "tf.compat.v1.layers.Conv3DTranspose.name": true, + "tf.compat.v1.layers.Conv3DTranspose.name_scope": true, + "tf.compat.v1.layers.Conv3DTranspose.non_trainable_weights": true, + "tf.compat.v1.layers.Conv3DTranspose.output": true, + "tf.compat.v1.layers.Conv3DTranspose.scope_name": true, + "tf.compat.v1.layers.Conv3DTranspose.set_weights": true, + "tf.compat.v1.layers.Conv3DTranspose.submodules": true, + "tf.compat.v1.layers.Conv3DTranspose.trainable": true, + "tf.compat.v1.layers.Conv3DTranspose.trainable_weights": true, + "tf.compat.v1.layers.Conv3DTranspose.weights": true, + "tf.compat.v1.layers.Conv3DTranspose.with_name_scope": true, + "tf.compat.v1.layers.Dense": false, + "tf.compat.v1.layers.Dense.__call__": true, + "tf.compat.v1.layers.Dense.__eq__": true, + "tf.compat.v1.layers.Dense.__ge__": true, + "tf.compat.v1.layers.Dense.__gt__": true, + "tf.compat.v1.layers.Dense.__init__": true, + "tf.compat.v1.layers.Dense.__le__": true, + "tf.compat.v1.layers.Dense.__lt__": true, + "tf.compat.v1.layers.Dense.__ne__": true, + "tf.compat.v1.layers.Dense.__new__": true, + "tf.compat.v1.layers.Dense.activity_regularizer": true, + "tf.compat.v1.layers.Dense.add_loss": true, + "tf.compat.v1.layers.Dense.add_metric": true, + "tf.compat.v1.layers.Dense.add_weight": true, + "tf.compat.v1.layers.Dense.build": true, + "tf.compat.v1.layers.Dense.call": true, + "tf.compat.v1.layers.Dense.compute_mask": true, + "tf.compat.v1.layers.Dense.compute_output_shape": true, + "tf.compat.v1.layers.Dense.compute_output_signature": true, + "tf.compat.v1.layers.Dense.count_params": true, + "tf.compat.v1.layers.Dense.dtype": true, + "tf.compat.v1.layers.Dense.dynamic": true, + "tf.compat.v1.layers.Dense.from_config": true, + "tf.compat.v1.layers.Dense.get_config": true, + "tf.compat.v1.layers.Dense.get_weights": true, + "tf.compat.v1.layers.Dense.graph": true, + "tf.compat.v1.layers.Dense.input": true, + "tf.compat.v1.layers.Dense.input_spec": true, + "tf.compat.v1.layers.Dense.losses": true, + "tf.compat.v1.layers.Dense.metrics": true, + "tf.compat.v1.layers.Dense.name": true, + 
"tf.compat.v1.layers.Dense.name_scope": true, + "tf.compat.v1.layers.Dense.non_trainable_weights": true, + "tf.compat.v1.layers.Dense.output": true, + "tf.compat.v1.layers.Dense.scope_name": true, + "tf.compat.v1.layers.Dense.set_weights": true, + "tf.compat.v1.layers.Dense.submodules": true, + "tf.compat.v1.layers.Dense.trainable": true, + "tf.compat.v1.layers.Dense.trainable_weights": true, + "tf.compat.v1.layers.Dense.weights": true, + "tf.compat.v1.layers.Dense.with_name_scope": true, + "tf.compat.v1.layers.Dropout": false, + "tf.compat.v1.layers.Dropout.__call__": true, + "tf.compat.v1.layers.Dropout.__eq__": true, + "tf.compat.v1.layers.Dropout.__ge__": true, + "tf.compat.v1.layers.Dropout.__gt__": true, + "tf.compat.v1.layers.Dropout.__init__": true, + "tf.compat.v1.layers.Dropout.__le__": true, + "tf.compat.v1.layers.Dropout.__lt__": true, + "tf.compat.v1.layers.Dropout.__ne__": true, + "tf.compat.v1.layers.Dropout.__new__": true, + "tf.compat.v1.layers.Dropout.activity_regularizer": true, + "tf.compat.v1.layers.Dropout.add_loss": true, + "tf.compat.v1.layers.Dropout.add_metric": true, + "tf.compat.v1.layers.Dropout.add_weight": true, + "tf.compat.v1.layers.Dropout.build": true, + "tf.compat.v1.layers.Dropout.call": true, + "tf.compat.v1.layers.Dropout.compute_mask": true, + "tf.compat.v1.layers.Dropout.compute_output_shape": true, + "tf.compat.v1.layers.Dropout.compute_output_signature": true, + "tf.compat.v1.layers.Dropout.count_params": true, + "tf.compat.v1.layers.Dropout.dtype": true, + "tf.compat.v1.layers.Dropout.dynamic": true, + "tf.compat.v1.layers.Dropout.from_config": true, + "tf.compat.v1.layers.Dropout.get_config": true, + "tf.compat.v1.layers.Dropout.get_weights": true, + "tf.compat.v1.layers.Dropout.graph": true, + "tf.compat.v1.layers.Dropout.input": true, + "tf.compat.v1.layers.Dropout.input_spec": true, + "tf.compat.v1.layers.Dropout.losses": true, + "tf.compat.v1.layers.Dropout.metrics": true, + "tf.compat.v1.layers.Dropout.name": true, + "tf.compat.v1.layers.Dropout.name_scope": true, + "tf.compat.v1.layers.Dropout.non_trainable_weights": true, + "tf.compat.v1.layers.Dropout.output": true, + "tf.compat.v1.layers.Dropout.scope_name": true, + "tf.compat.v1.layers.Dropout.set_weights": true, + "tf.compat.v1.layers.Dropout.submodules": true, + "tf.compat.v1.layers.Dropout.trainable": true, + "tf.compat.v1.layers.Dropout.trainable_weights": true, + "tf.compat.v1.layers.Dropout.weights": true, + "tf.compat.v1.layers.Dropout.with_name_scope": true, + "tf.compat.v1.layers.Flatten": false, + "tf.compat.v1.layers.Flatten.__call__": true, + "tf.compat.v1.layers.Flatten.__eq__": true, + "tf.compat.v1.layers.Flatten.__ge__": true, + "tf.compat.v1.layers.Flatten.__gt__": true, + "tf.compat.v1.layers.Flatten.__init__": true, + "tf.compat.v1.layers.Flatten.__le__": true, + "tf.compat.v1.layers.Flatten.__lt__": true, + "tf.compat.v1.layers.Flatten.__ne__": true, + "tf.compat.v1.layers.Flatten.__new__": true, + "tf.compat.v1.layers.Flatten.activity_regularizer": true, + "tf.compat.v1.layers.Flatten.add_loss": true, + "tf.compat.v1.layers.Flatten.add_metric": true, + "tf.compat.v1.layers.Flatten.add_weight": true, + "tf.compat.v1.layers.Flatten.build": true, + "tf.compat.v1.layers.Flatten.call": true, + "tf.compat.v1.layers.Flatten.compute_mask": true, + "tf.compat.v1.layers.Flatten.compute_output_shape": true, + "tf.compat.v1.layers.Flatten.compute_output_signature": true, + "tf.compat.v1.layers.Flatten.count_params": true, + "tf.compat.v1.layers.Flatten.dtype": true, + 
"tf.compat.v1.layers.Flatten.dynamic": true, + "tf.compat.v1.layers.Flatten.from_config": true, + "tf.compat.v1.layers.Flatten.get_config": true, + "tf.compat.v1.layers.Flatten.get_weights": true, + "tf.compat.v1.layers.Flatten.graph": true, + "tf.compat.v1.layers.Flatten.input": true, + "tf.compat.v1.layers.Flatten.input_spec": true, + "tf.compat.v1.layers.Flatten.losses": true, + "tf.compat.v1.layers.Flatten.metrics": true, + "tf.compat.v1.layers.Flatten.name": true, + "tf.compat.v1.layers.Flatten.name_scope": true, + "tf.compat.v1.layers.Flatten.non_trainable_weights": true, + "tf.compat.v1.layers.Flatten.output": true, + "tf.compat.v1.layers.Flatten.scope_name": true, + "tf.compat.v1.layers.Flatten.set_weights": true, + "tf.compat.v1.layers.Flatten.submodules": true, + "tf.compat.v1.layers.Flatten.trainable": true, + "tf.compat.v1.layers.Flatten.trainable_weights": true, + "tf.compat.v1.layers.Flatten.weights": true, + "tf.compat.v1.layers.Flatten.with_name_scope": true, + "tf.compat.v1.layers.InputSpec": false, + "tf.compat.v1.layers.InputSpec.__eq__": true, + "tf.compat.v1.layers.InputSpec.__ge__": true, + "tf.compat.v1.layers.InputSpec.__gt__": true, + "tf.compat.v1.layers.InputSpec.__init__": true, + "tf.compat.v1.layers.InputSpec.__le__": true, + "tf.compat.v1.layers.InputSpec.__lt__": true, + "tf.compat.v1.layers.InputSpec.__ne__": true, + "tf.compat.v1.layers.InputSpec.__new__": true, + "tf.compat.v1.layers.InputSpec.from_config": true, + "tf.compat.v1.layers.InputSpec.get_config": true, + "tf.compat.v1.layers.Layer": false, + "tf.compat.v1.layers.Layer.__call__": true, + "tf.compat.v1.layers.Layer.__eq__": true, + "tf.compat.v1.layers.Layer.__ge__": true, + "tf.compat.v1.layers.Layer.__gt__": true, + "tf.compat.v1.layers.Layer.__init__": true, + "tf.compat.v1.layers.Layer.__le__": true, + "tf.compat.v1.layers.Layer.__lt__": true, + "tf.compat.v1.layers.Layer.__ne__": true, + "tf.compat.v1.layers.Layer.__new__": true, + "tf.compat.v1.layers.Layer.activity_regularizer": true, + "tf.compat.v1.layers.Layer.add_loss": true, + "tf.compat.v1.layers.Layer.add_metric": true, + "tf.compat.v1.layers.Layer.add_weight": true, + "tf.compat.v1.layers.Layer.build": true, + "tf.compat.v1.layers.Layer.call": true, + "tf.compat.v1.layers.Layer.compute_mask": true, + "tf.compat.v1.layers.Layer.compute_output_shape": true, + "tf.compat.v1.layers.Layer.compute_output_signature": true, + "tf.compat.v1.layers.Layer.count_params": true, + "tf.compat.v1.layers.Layer.dtype": true, + "tf.compat.v1.layers.Layer.dynamic": true, + "tf.compat.v1.layers.Layer.from_config": true, + "tf.compat.v1.layers.Layer.get_config": true, + "tf.compat.v1.layers.Layer.get_weights": true, + "tf.compat.v1.layers.Layer.graph": true, + "tf.compat.v1.layers.Layer.input": true, + "tf.compat.v1.layers.Layer.input_spec": true, + "tf.compat.v1.layers.Layer.losses": true, + "tf.compat.v1.layers.Layer.metrics": true, + "tf.compat.v1.layers.Layer.name": true, + "tf.compat.v1.layers.Layer.name_scope": true, + "tf.compat.v1.layers.Layer.non_trainable_weights": true, + "tf.compat.v1.layers.Layer.output": true, + "tf.compat.v1.layers.Layer.scope_name": true, + "tf.compat.v1.layers.Layer.set_weights": true, + "tf.compat.v1.layers.Layer.submodules": true, + "tf.compat.v1.layers.Layer.trainable": true, + "tf.compat.v1.layers.Layer.trainable_weights": true, + "tf.compat.v1.layers.Layer.weights": true, + "tf.compat.v1.layers.Layer.with_name_scope": true, + "tf.compat.v1.layers.MaxPooling1D": false, + "tf.compat.v1.layers.MaxPooling1D.__call__": 
true, + "tf.compat.v1.layers.MaxPooling1D.__eq__": true, + "tf.compat.v1.layers.MaxPooling1D.__ge__": true, + "tf.compat.v1.layers.MaxPooling1D.__gt__": true, + "tf.compat.v1.layers.MaxPooling1D.__init__": true, + "tf.compat.v1.layers.MaxPooling1D.__le__": true, + "tf.compat.v1.layers.MaxPooling1D.__lt__": true, + "tf.compat.v1.layers.MaxPooling1D.__ne__": true, + "tf.compat.v1.layers.MaxPooling1D.__new__": true, + "tf.compat.v1.layers.MaxPooling1D.activity_regularizer": true, + "tf.compat.v1.layers.MaxPooling1D.add_loss": true, + "tf.compat.v1.layers.MaxPooling1D.add_metric": true, + "tf.compat.v1.layers.MaxPooling1D.add_weight": true, + "tf.compat.v1.layers.MaxPooling1D.build": true, + "tf.compat.v1.layers.MaxPooling1D.call": true, + "tf.compat.v1.layers.MaxPooling1D.compute_mask": true, + "tf.compat.v1.layers.MaxPooling1D.compute_output_shape": true, + "tf.compat.v1.layers.MaxPooling1D.compute_output_signature": true, + "tf.compat.v1.layers.MaxPooling1D.count_params": true, + "tf.compat.v1.layers.MaxPooling1D.dtype": true, + "tf.compat.v1.layers.MaxPooling1D.dynamic": true, + "tf.compat.v1.layers.MaxPooling1D.from_config": true, + "tf.compat.v1.layers.MaxPooling1D.get_config": true, + "tf.compat.v1.layers.MaxPooling1D.get_weights": true, + "tf.compat.v1.layers.MaxPooling1D.graph": true, + "tf.compat.v1.layers.MaxPooling1D.input": true, + "tf.compat.v1.layers.MaxPooling1D.input_spec": true, + "tf.compat.v1.layers.MaxPooling1D.losses": true, + "tf.compat.v1.layers.MaxPooling1D.metrics": true, + "tf.compat.v1.layers.MaxPooling1D.name": true, + "tf.compat.v1.layers.MaxPooling1D.name_scope": true, + "tf.compat.v1.layers.MaxPooling1D.non_trainable_weights": true, + "tf.compat.v1.layers.MaxPooling1D.output": true, + "tf.compat.v1.layers.MaxPooling1D.scope_name": true, + "tf.compat.v1.layers.MaxPooling1D.set_weights": true, + "tf.compat.v1.layers.MaxPooling1D.submodules": true, + "tf.compat.v1.layers.MaxPooling1D.trainable": true, + "tf.compat.v1.layers.MaxPooling1D.trainable_weights": true, + "tf.compat.v1.layers.MaxPooling1D.weights": true, + "tf.compat.v1.layers.MaxPooling1D.with_name_scope": true, + "tf.compat.v1.layers.MaxPooling2D": false, + "tf.compat.v1.layers.MaxPooling2D.__call__": true, + "tf.compat.v1.layers.MaxPooling2D.__eq__": true, + "tf.compat.v1.layers.MaxPooling2D.__ge__": true, + "tf.compat.v1.layers.MaxPooling2D.__gt__": true, + "tf.compat.v1.layers.MaxPooling2D.__init__": true, + "tf.compat.v1.layers.MaxPooling2D.__le__": true, + "tf.compat.v1.layers.MaxPooling2D.__lt__": true, + "tf.compat.v1.layers.MaxPooling2D.__ne__": true, + "tf.compat.v1.layers.MaxPooling2D.__new__": true, + "tf.compat.v1.layers.MaxPooling2D.activity_regularizer": true, + "tf.compat.v1.layers.MaxPooling2D.add_loss": true, + "tf.compat.v1.layers.MaxPooling2D.add_metric": true, + "tf.compat.v1.layers.MaxPooling2D.add_weight": true, + "tf.compat.v1.layers.MaxPooling2D.build": true, + "tf.compat.v1.layers.MaxPooling2D.call": true, + "tf.compat.v1.layers.MaxPooling2D.compute_mask": true, + "tf.compat.v1.layers.MaxPooling2D.compute_output_shape": true, + "tf.compat.v1.layers.MaxPooling2D.compute_output_signature": true, + "tf.compat.v1.layers.MaxPooling2D.count_params": true, + "tf.compat.v1.layers.MaxPooling2D.dtype": true, + "tf.compat.v1.layers.MaxPooling2D.dynamic": true, + "tf.compat.v1.layers.MaxPooling2D.from_config": true, + "tf.compat.v1.layers.MaxPooling2D.get_config": true, + "tf.compat.v1.layers.MaxPooling2D.get_weights": true, + "tf.compat.v1.layers.MaxPooling2D.graph": true, + 
"tf.compat.v1.layers.MaxPooling2D.input": true, + "tf.compat.v1.layers.MaxPooling2D.input_spec": true, + "tf.compat.v1.layers.MaxPooling2D.losses": true, + "tf.compat.v1.layers.MaxPooling2D.metrics": true, + "tf.compat.v1.layers.MaxPooling2D.name": true, + "tf.compat.v1.layers.MaxPooling2D.name_scope": true, + "tf.compat.v1.layers.MaxPooling2D.non_trainable_weights": true, + "tf.compat.v1.layers.MaxPooling2D.output": true, + "tf.compat.v1.layers.MaxPooling2D.scope_name": true, + "tf.compat.v1.layers.MaxPooling2D.set_weights": true, + "tf.compat.v1.layers.MaxPooling2D.submodules": true, + "tf.compat.v1.layers.MaxPooling2D.trainable": true, + "tf.compat.v1.layers.MaxPooling2D.trainable_weights": true, + "tf.compat.v1.layers.MaxPooling2D.weights": true, + "tf.compat.v1.layers.MaxPooling2D.with_name_scope": true, + "tf.compat.v1.layers.MaxPooling3D": false, + "tf.compat.v1.layers.MaxPooling3D.__call__": true, + "tf.compat.v1.layers.MaxPooling3D.__eq__": true, + "tf.compat.v1.layers.MaxPooling3D.__ge__": true, + "tf.compat.v1.layers.MaxPooling3D.__gt__": true, + "tf.compat.v1.layers.MaxPooling3D.__init__": true, + "tf.compat.v1.layers.MaxPooling3D.__le__": true, + "tf.compat.v1.layers.MaxPooling3D.__lt__": true, + "tf.compat.v1.layers.MaxPooling3D.__ne__": true, + "tf.compat.v1.layers.MaxPooling3D.__new__": true, + "tf.compat.v1.layers.MaxPooling3D.activity_regularizer": true, + "tf.compat.v1.layers.MaxPooling3D.add_loss": true, + "tf.compat.v1.layers.MaxPooling3D.add_metric": true, + "tf.compat.v1.layers.MaxPooling3D.add_weight": true, + "tf.compat.v1.layers.MaxPooling3D.build": true, + "tf.compat.v1.layers.MaxPooling3D.call": true, + "tf.compat.v1.layers.MaxPooling3D.compute_mask": true, + "tf.compat.v1.layers.MaxPooling3D.compute_output_shape": true, + "tf.compat.v1.layers.MaxPooling3D.compute_output_signature": true, + "tf.compat.v1.layers.MaxPooling3D.count_params": true, + "tf.compat.v1.layers.MaxPooling3D.dtype": true, + "tf.compat.v1.layers.MaxPooling3D.dynamic": true, + "tf.compat.v1.layers.MaxPooling3D.from_config": true, + "tf.compat.v1.layers.MaxPooling3D.get_config": true, + "tf.compat.v1.layers.MaxPooling3D.get_weights": true, + "tf.compat.v1.layers.MaxPooling3D.graph": true, + "tf.compat.v1.layers.MaxPooling3D.input": true, + "tf.compat.v1.layers.MaxPooling3D.input_spec": true, + "tf.compat.v1.layers.MaxPooling3D.losses": true, + "tf.compat.v1.layers.MaxPooling3D.metrics": true, + "tf.compat.v1.layers.MaxPooling3D.name": true, + "tf.compat.v1.layers.MaxPooling3D.name_scope": true, + "tf.compat.v1.layers.MaxPooling3D.non_trainable_weights": true, + "tf.compat.v1.layers.MaxPooling3D.output": true, + "tf.compat.v1.layers.MaxPooling3D.scope_name": true, + "tf.compat.v1.layers.MaxPooling3D.set_weights": true, + "tf.compat.v1.layers.MaxPooling3D.submodules": true, + "tf.compat.v1.layers.MaxPooling3D.trainable": true, + "tf.compat.v1.layers.MaxPooling3D.trainable_weights": true, + "tf.compat.v1.layers.MaxPooling3D.weights": true, + "tf.compat.v1.layers.MaxPooling3D.with_name_scope": true, + "tf.compat.v1.layers.SeparableConv1D": false, + "tf.compat.v1.layers.SeparableConv1D.__call__": true, + "tf.compat.v1.layers.SeparableConv1D.__eq__": true, + "tf.compat.v1.layers.SeparableConv1D.__ge__": true, + "tf.compat.v1.layers.SeparableConv1D.__gt__": true, + "tf.compat.v1.layers.SeparableConv1D.__init__": true, + "tf.compat.v1.layers.SeparableConv1D.__le__": true, + "tf.compat.v1.layers.SeparableConv1D.__lt__": true, + "tf.compat.v1.layers.SeparableConv1D.__ne__": true, + 
"tf.compat.v1.layers.SeparableConv1D.__new__": true, + "tf.compat.v1.layers.SeparableConv1D.activity_regularizer": true, + "tf.compat.v1.layers.SeparableConv1D.add_loss": true, + "tf.compat.v1.layers.SeparableConv1D.add_metric": true, + "tf.compat.v1.layers.SeparableConv1D.add_weight": true, + "tf.compat.v1.layers.SeparableConv1D.build": true, + "tf.compat.v1.layers.SeparableConv1D.call": true, + "tf.compat.v1.layers.SeparableConv1D.compute_mask": true, + "tf.compat.v1.layers.SeparableConv1D.compute_output_shape": true, + "tf.compat.v1.layers.SeparableConv1D.compute_output_signature": true, + "tf.compat.v1.layers.SeparableConv1D.count_params": true, + "tf.compat.v1.layers.SeparableConv1D.dtype": true, + "tf.compat.v1.layers.SeparableConv1D.dynamic": true, + "tf.compat.v1.layers.SeparableConv1D.from_config": true, + "tf.compat.v1.layers.SeparableConv1D.get_config": true, + "tf.compat.v1.layers.SeparableConv1D.get_weights": true, + "tf.compat.v1.layers.SeparableConv1D.graph": true, + "tf.compat.v1.layers.SeparableConv1D.input": true, + "tf.compat.v1.layers.SeparableConv1D.input_spec": true, + "tf.compat.v1.layers.SeparableConv1D.losses": true, + "tf.compat.v1.layers.SeparableConv1D.metrics": true, + "tf.compat.v1.layers.SeparableConv1D.name": true, + "tf.compat.v1.layers.SeparableConv1D.name_scope": true, + "tf.compat.v1.layers.SeparableConv1D.non_trainable_weights": true, + "tf.compat.v1.layers.SeparableConv1D.output": true, + "tf.compat.v1.layers.SeparableConv1D.scope_name": true, + "tf.compat.v1.layers.SeparableConv1D.set_weights": true, + "tf.compat.v1.layers.SeparableConv1D.submodules": true, + "tf.compat.v1.layers.SeparableConv1D.trainable": true, + "tf.compat.v1.layers.SeparableConv1D.trainable_weights": true, + "tf.compat.v1.layers.SeparableConv1D.weights": true, + "tf.compat.v1.layers.SeparableConv1D.with_name_scope": true, + "tf.compat.v1.layers.SeparableConv2D": false, + "tf.compat.v1.layers.SeparableConv2D.__call__": true, + "tf.compat.v1.layers.SeparableConv2D.__eq__": true, + "tf.compat.v1.layers.SeparableConv2D.__ge__": true, + "tf.compat.v1.layers.SeparableConv2D.__gt__": true, + "tf.compat.v1.layers.SeparableConv2D.__init__": true, + "tf.compat.v1.layers.SeparableConv2D.__le__": true, + "tf.compat.v1.layers.SeparableConv2D.__lt__": true, + "tf.compat.v1.layers.SeparableConv2D.__ne__": true, + "tf.compat.v1.layers.SeparableConv2D.__new__": true, + "tf.compat.v1.layers.SeparableConv2D.activity_regularizer": true, + "tf.compat.v1.layers.SeparableConv2D.add_loss": true, + "tf.compat.v1.layers.SeparableConv2D.add_metric": true, + "tf.compat.v1.layers.SeparableConv2D.add_weight": true, + "tf.compat.v1.layers.SeparableConv2D.build": true, + "tf.compat.v1.layers.SeparableConv2D.call": true, + "tf.compat.v1.layers.SeparableConv2D.compute_mask": true, + "tf.compat.v1.layers.SeparableConv2D.compute_output_shape": true, + "tf.compat.v1.layers.SeparableConv2D.compute_output_signature": true, + "tf.compat.v1.layers.SeparableConv2D.count_params": true, + "tf.compat.v1.layers.SeparableConv2D.dtype": true, + "tf.compat.v1.layers.SeparableConv2D.dynamic": true, + "tf.compat.v1.layers.SeparableConv2D.from_config": true, + "tf.compat.v1.layers.SeparableConv2D.get_config": true, + "tf.compat.v1.layers.SeparableConv2D.get_weights": true, + "tf.compat.v1.layers.SeparableConv2D.graph": true, + "tf.compat.v1.layers.SeparableConv2D.input": true, + "tf.compat.v1.layers.SeparableConv2D.input_spec": true, + "tf.compat.v1.layers.SeparableConv2D.losses": true, + 
"tf.compat.v1.layers.SeparableConv2D.metrics": true, + "tf.compat.v1.layers.SeparableConv2D.name": true, + "tf.compat.v1.layers.SeparableConv2D.name_scope": true, + "tf.compat.v1.layers.SeparableConv2D.non_trainable_weights": true, + "tf.compat.v1.layers.SeparableConv2D.output": true, + "tf.compat.v1.layers.SeparableConv2D.scope_name": true, + "tf.compat.v1.layers.SeparableConv2D.set_weights": true, + "tf.compat.v1.layers.SeparableConv2D.submodules": true, + "tf.compat.v1.layers.SeparableConv2D.trainable": true, + "tf.compat.v1.layers.SeparableConv2D.trainable_weights": true, + "tf.compat.v1.layers.SeparableConv2D.weights": true, + "tf.compat.v1.layers.SeparableConv2D.with_name_scope": true, + "tf.compat.v1.layers.average_pooling1d": false, + "tf.compat.v1.layers.average_pooling2d": false, + "tf.compat.v1.layers.average_pooling3d": false, + "tf.compat.v1.layers.batch_normalization": false, + "tf.compat.v1.layers.conv1d": false, + "tf.compat.v1.layers.conv2d": false, + "tf.compat.v1.layers.conv2d_transpose": false, + "tf.compat.v1.layers.conv3d": false, + "tf.compat.v1.layers.conv3d_transpose": false, + "tf.compat.v1.layers.dense": false, + "tf.compat.v1.layers.dropout": false, + "tf.compat.v1.layers.experimental": false, + "tf.compat.v1.layers.experimental.keras_style_scope": false, + "tf.compat.v1.layers.experimental.set_keras_style": false, + "tf.compat.v1.layers.flatten": false, + "tf.compat.v1.layers.max_pooling1d": false, + "tf.compat.v1.layers.max_pooling2d": false, + "tf.compat.v1.layers.max_pooling3d": false, + "tf.compat.v1.layers.separable_conv1d": false, + "tf.compat.v1.layers.separable_conv2d": false, + "tf.compat.v1.lbeta": false, + "tf.compat.v1.less": false, + "tf.compat.v1.less_equal": false, + "tf.compat.v1.lgamma": false, + "tf.compat.v1.lin_space": false, + "tf.compat.v1.linalg": false, + "tf.compat.v1.linalg.LinearOperator": false, + "tf.compat.v1.linalg.LinearOperator.H": true, + "tf.compat.v1.linalg.LinearOperator.__eq__": true, + "tf.compat.v1.linalg.LinearOperator.__ge__": true, + "tf.compat.v1.linalg.LinearOperator.__gt__": true, + "tf.compat.v1.linalg.LinearOperator.__init__": true, + "tf.compat.v1.linalg.LinearOperator.__le__": true, + "tf.compat.v1.linalg.LinearOperator.__lt__": true, + "tf.compat.v1.linalg.LinearOperator.__matmul__": true, + "tf.compat.v1.linalg.LinearOperator.__ne__": true, + "tf.compat.v1.linalg.LinearOperator.__new__": true, + "tf.compat.v1.linalg.LinearOperator.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperator.adjoint": true, + "tf.compat.v1.linalg.LinearOperator.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperator.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperator.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperator.batch_shape": true, + "tf.compat.v1.linalg.LinearOperator.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperator.cholesky": true, + "tf.compat.v1.linalg.LinearOperator.cond": true, + "tf.compat.v1.linalg.LinearOperator.determinant": true, + "tf.compat.v1.linalg.LinearOperator.diag_part": true, + "tf.compat.v1.linalg.LinearOperator.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperator.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperator.dtype": true, + "tf.compat.v1.linalg.LinearOperator.eigvals": true, + "tf.compat.v1.linalg.LinearOperator.graph_parents": true, + "tf.compat.v1.linalg.LinearOperator.inverse": true, + "tf.compat.v1.linalg.LinearOperator.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperator.is_positive_definite": 
true, + "tf.compat.v1.linalg.LinearOperator.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperator.is_square": true, + "tf.compat.v1.linalg.LinearOperator.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperator.matmul": true, + "tf.compat.v1.linalg.LinearOperator.matvec": true, + "tf.compat.v1.linalg.LinearOperator.name": true, + "tf.compat.v1.linalg.LinearOperator.name_scope": true, + "tf.compat.v1.linalg.LinearOperator.range_dimension": true, + "tf.compat.v1.linalg.LinearOperator.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperator.shape": true, + "tf.compat.v1.linalg.LinearOperator.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperator.solve": true, + "tf.compat.v1.linalg.LinearOperator.solvevec": true, + "tf.compat.v1.linalg.LinearOperator.submodules": true, + "tf.compat.v1.linalg.LinearOperator.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperator.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperator.to_dense": true, + "tf.compat.v1.linalg.LinearOperator.trace": true, + "tf.compat.v1.linalg.LinearOperator.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperator.variables": true, + "tf.compat.v1.linalg.LinearOperator.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint": false, + "tf.compat.v1.linalg.LinearOperatorAdjoint.H": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.__init__": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.__le__": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.__new__": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.cond": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.determinant": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.dtype": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.inverse": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.is_square": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.matmul": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.matvec": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.name": true, + 
"tf.compat.v1.linalg.LinearOperatorAdjoint.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.operator": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.shape": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.solve": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.submodules": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.trace": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.variables": true, + "tf.compat.v1.linalg.LinearOperatorAdjoint.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag": false, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.H": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__init__": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__le__": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.__new__": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.cond": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.determinant": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.dtype": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.inverse": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.is_square": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.matmul": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.matvec": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.name": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.operators": 
true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.shape": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.solve": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.submodules": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.trace": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.variables": true, + "tf.compat.v1.linalg.LinearOperatorBlockDiag.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular": false, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.H": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__init__": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__le__": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.__new__": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.cond": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.determinant": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.dtype": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.inverse": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.is_square": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.log_abs_determinant": true, + 
"tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.matmul": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.matvec": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.name": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.operators": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.shape": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.solve": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.submodules": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.trace": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.variables": true, + "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorCirculant": false, + "tf.compat.v1.linalg.LinearOperatorCirculant.H": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.__init__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.__le__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.__new__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.assert_hermitian_spectrum": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.block_depth": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.block_shape": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.block_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.cond": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.convolution_kernel": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.determinant": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.dtype": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.eigvals": true, + 
"tf.compat.v1.linalg.LinearOperatorCirculant.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.inverse": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.is_square": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.matmul": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.matvec": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.name": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.shape": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.solve": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.spectrum": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.submodules": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.trace": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.variables": true, + "tf.compat.v1.linalg.LinearOperatorCirculant.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D": false, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.H": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__init__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__le__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.__new__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.assert_hermitian_spectrum": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.block_depth": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.block_shape": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.block_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.cond": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.convolution_kernel": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.determinant": true, + 
"tf.compat.v1.linalg.LinearOperatorCirculant2D.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.dtype": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.inverse": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.is_square": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.matmul": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.matvec": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.name": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.shape": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.solve": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.spectrum": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.submodules": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.trace": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.variables": true, + "tf.compat.v1.linalg.LinearOperatorCirculant2D.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D": false, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.H": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__init__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__le__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.__new__": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.assert_hermitian_spectrum": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.block_depth": true, + 
"tf.compat.v1.linalg.LinearOperatorCirculant3D.block_shape": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.block_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.cond": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.convolution_kernel": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.determinant": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.dtype": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.inverse": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.is_square": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.matmul": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.matvec": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.name": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.shape": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.solve": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.spectrum": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.submodules": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.trace": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.variables": true, + "tf.compat.v1.linalg.LinearOperatorCirculant3D.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorComposition": false, + "tf.compat.v1.linalg.LinearOperatorComposition.H": true, + "tf.compat.v1.linalg.LinearOperatorComposition.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorComposition.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorComposition.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorComposition.__init__": true, + "tf.compat.v1.linalg.LinearOperatorComposition.__le__": true, + "tf.compat.v1.linalg.LinearOperatorComposition.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorComposition.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorComposition.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorComposition.__new__": true, + "tf.compat.v1.linalg.LinearOperatorComposition.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorComposition.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorComposition.assert_non_singular": true, + 
"tf.compat.v1.linalg.LinearOperatorComposition.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorComposition.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorComposition.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorComposition.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorComposition.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorComposition.cond": true, + "tf.compat.v1.linalg.LinearOperatorComposition.determinant": true, + "tf.compat.v1.linalg.LinearOperatorComposition.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorComposition.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorComposition.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorComposition.dtype": true, + "tf.compat.v1.linalg.LinearOperatorComposition.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorComposition.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorComposition.inverse": true, + "tf.compat.v1.linalg.LinearOperatorComposition.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorComposition.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorComposition.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorComposition.is_square": true, + "tf.compat.v1.linalg.LinearOperatorComposition.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorComposition.matmul": true, + "tf.compat.v1.linalg.LinearOperatorComposition.matvec": true, + "tf.compat.v1.linalg.LinearOperatorComposition.name": true, + "tf.compat.v1.linalg.LinearOperatorComposition.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorComposition.operators": true, + "tf.compat.v1.linalg.LinearOperatorComposition.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorComposition.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorComposition.shape": true, + "tf.compat.v1.linalg.LinearOperatorComposition.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorComposition.solve": true, + "tf.compat.v1.linalg.LinearOperatorComposition.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorComposition.submodules": true, + "tf.compat.v1.linalg.LinearOperatorComposition.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorComposition.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorComposition.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorComposition.trace": true, + "tf.compat.v1.linalg.LinearOperatorComposition.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorComposition.variables": true, + "tf.compat.v1.linalg.LinearOperatorComposition.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorDiag": false, + "tf.compat.v1.linalg.LinearOperatorDiag.H": true, + "tf.compat.v1.linalg.LinearOperatorDiag.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorDiag.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorDiag.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorDiag.__init__": true, + "tf.compat.v1.linalg.LinearOperatorDiag.__le__": true, + "tf.compat.v1.linalg.LinearOperatorDiag.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorDiag.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorDiag.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorDiag.__new__": true, + "tf.compat.v1.linalg.LinearOperatorDiag.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorDiag.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorDiag.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorDiag.assert_positive_definite": true, + 
"tf.compat.v1.linalg.LinearOperatorDiag.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorDiag.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorDiag.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorDiag.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorDiag.cond": true, + "tf.compat.v1.linalg.LinearOperatorDiag.determinant": true, + "tf.compat.v1.linalg.LinearOperatorDiag.diag": true, + "tf.compat.v1.linalg.LinearOperatorDiag.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorDiag.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorDiag.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorDiag.dtype": true, + "tf.compat.v1.linalg.LinearOperatorDiag.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorDiag.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorDiag.inverse": true, + "tf.compat.v1.linalg.LinearOperatorDiag.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorDiag.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorDiag.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorDiag.is_square": true, + "tf.compat.v1.linalg.LinearOperatorDiag.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorDiag.matmul": true, + "tf.compat.v1.linalg.LinearOperatorDiag.matvec": true, + "tf.compat.v1.linalg.LinearOperatorDiag.name": true, + "tf.compat.v1.linalg.LinearOperatorDiag.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorDiag.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorDiag.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorDiag.shape": true, + "tf.compat.v1.linalg.LinearOperatorDiag.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorDiag.solve": true, + "tf.compat.v1.linalg.LinearOperatorDiag.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorDiag.submodules": true, + "tf.compat.v1.linalg.LinearOperatorDiag.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorDiag.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorDiag.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorDiag.trace": true, + "tf.compat.v1.linalg.LinearOperatorDiag.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorDiag.variables": true, + "tf.compat.v1.linalg.LinearOperatorDiag.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix": false, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.H": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__init__": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__le__": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.__new__": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.batch_shape_tensor": true, + 
"tf.compat.v1.linalg.LinearOperatorFullMatrix.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.cond": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.determinant": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.dtype": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.inverse": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.is_square": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.matmul": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.matvec": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.name": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.shape": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.solve": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.submodules": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.trace": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.variables": true, + "tf.compat.v1.linalg.LinearOperatorFullMatrix.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder": false, + "tf.compat.v1.linalg.LinearOperatorHouseholder.H": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.__init__": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.__le__": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.__new__": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.cholesky": true, + 
"tf.compat.v1.linalg.LinearOperatorHouseholder.cond": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.determinant": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.dtype": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.inverse": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.is_square": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.matmul": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.matvec": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.name": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.reflection_axis": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.shape": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.solve": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.submodules": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.trace": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.variables": true, + "tf.compat.v1.linalg.LinearOperatorHouseholder.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorIdentity": false, + "tf.compat.v1.linalg.LinearOperatorIdentity.H": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.__init__": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.__le__": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.__new__": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.cholesky": true, + 
"tf.compat.v1.linalg.LinearOperatorIdentity.cond": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.determinant": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.dtype": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.inverse": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.is_square": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.matmul": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.matvec": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.name": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.shape": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.solve": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.submodules": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.trace": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.variables": true, + "tf.compat.v1.linalg.LinearOperatorIdentity.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorInversion": false, + "tf.compat.v1.linalg.LinearOperatorInversion.H": true, + "tf.compat.v1.linalg.LinearOperatorInversion.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorInversion.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorInversion.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorInversion.__init__": true, + "tf.compat.v1.linalg.LinearOperatorInversion.__le__": true, + "tf.compat.v1.linalg.LinearOperatorInversion.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorInversion.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorInversion.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorInversion.__new__": true, + "tf.compat.v1.linalg.LinearOperatorInversion.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorInversion.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorInversion.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorInversion.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorInversion.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorInversion.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorInversion.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorInversion.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorInversion.cond": true, + "tf.compat.v1.linalg.LinearOperatorInversion.determinant": true, + "tf.compat.v1.linalg.LinearOperatorInversion.diag_part": 
true, + "tf.compat.v1.linalg.LinearOperatorInversion.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorInversion.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorInversion.dtype": true, + "tf.compat.v1.linalg.LinearOperatorInversion.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorInversion.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorInversion.inverse": true, + "tf.compat.v1.linalg.LinearOperatorInversion.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorInversion.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorInversion.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorInversion.is_square": true, + "tf.compat.v1.linalg.LinearOperatorInversion.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorInversion.matmul": true, + "tf.compat.v1.linalg.LinearOperatorInversion.matvec": true, + "tf.compat.v1.linalg.LinearOperatorInversion.name": true, + "tf.compat.v1.linalg.LinearOperatorInversion.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorInversion.operator": true, + "tf.compat.v1.linalg.LinearOperatorInversion.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorInversion.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorInversion.shape": true, + "tf.compat.v1.linalg.LinearOperatorInversion.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorInversion.solve": true, + "tf.compat.v1.linalg.LinearOperatorInversion.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorInversion.submodules": true, + "tf.compat.v1.linalg.LinearOperatorInversion.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorInversion.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorInversion.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorInversion.trace": true, + "tf.compat.v1.linalg.LinearOperatorInversion.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorInversion.variables": true, + "tf.compat.v1.linalg.LinearOperatorInversion.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorKronecker": false, + "tf.compat.v1.linalg.LinearOperatorKronecker.H": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.__init__": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.__le__": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.__new__": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.cond": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.determinant": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.domain_dimension": true, + 
"tf.compat.v1.linalg.LinearOperatorKronecker.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.dtype": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.inverse": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.is_square": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.matmul": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.matvec": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.name": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.operators": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.shape": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.solve": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.submodules": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.trace": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.variables": true, + "tf.compat.v1.linalg.LinearOperatorKronecker.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate": false, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.H": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__init__": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__le__": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.__new__": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.base_operator": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.cond": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.determinant": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.diag_operator": true, 
+ "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.diag_update": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.dtype": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.inverse": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.is_diag_update_positive": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.is_square": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.matmul": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.matvec": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.name": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.shape": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.solve": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.submodules": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.trace": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.u": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.v": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.variables": true, + "tf.compat.v1.linalg.LinearOperatorLowRankUpdate.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular": false, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.H": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__init__": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__le__": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.__new__": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.assert_self_adjoint": 
true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.cond": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.determinant": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.dtype": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.inverse": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.is_square": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.matmul": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.matvec": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.name": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.shape": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.solve": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.submodules": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.trace": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.variables": true, + "tf.compat.v1.linalg.LinearOperatorLowerTriangular.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorPermutation": false, + "tf.compat.v1.linalg.LinearOperatorPermutation.H": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.__init__": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.__le__": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.__new__": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.assert_non_singular": true, + 
"tf.compat.v1.linalg.LinearOperatorPermutation.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.cond": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.determinant": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.dtype": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.inverse": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.is_square": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.matmul": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.matvec": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.name": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.perm": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.shape": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.solve": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.submodules": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.trace": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.variables": true, + "tf.compat.v1.linalg.LinearOperatorPermutation.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity": false, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.H": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__init__": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__le__": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.__new__": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.adjoint": true, + 
"tf.compat.v1.linalg.LinearOperatorScaledIdentity.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.cond": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.determinant": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.dtype": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.inverse": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.is_square": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.matmul": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.matvec": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.multiplier": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.name": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.shape": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.solve": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.submodules": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.trace": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.variables": true, + "tf.compat.v1.linalg.LinearOperatorScaledIdentity.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz": false, + "tf.compat.v1.linalg.LinearOperatorToeplitz.H": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.__init__": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.__le__": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.__new__": true, + 
"tf.compat.v1.linalg.LinearOperatorToeplitz.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.col": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.cond": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.determinant": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.dtype": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.inverse": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.is_square": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.matmul": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.matvec": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.name": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.row": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.shape": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.solve": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.submodules": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.trace": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.variables": true, + "tf.compat.v1.linalg.LinearOperatorToeplitz.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorTridiag": false, + "tf.compat.v1.linalg.LinearOperatorTridiag.H": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.__init__": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.__le__": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.__new__": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.add_to_tensor": true, + 
"tf.compat.v1.linalg.LinearOperatorTridiag.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.assert_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.cond": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.determinant": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.diagonals": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.diagonals_format": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.dtype": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.inverse": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.is_square": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.matmul": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.matvec": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.name": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.shape": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.solve": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.submodules": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.trace": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.variables": true, + "tf.compat.v1.linalg.LinearOperatorTridiag.with_name_scope": true, + "tf.compat.v1.linalg.LinearOperatorZeros": false, + "tf.compat.v1.linalg.LinearOperatorZeros.H": true, + "tf.compat.v1.linalg.LinearOperatorZeros.__eq__": true, + "tf.compat.v1.linalg.LinearOperatorZeros.__ge__": true, + "tf.compat.v1.linalg.LinearOperatorZeros.__gt__": true, + "tf.compat.v1.linalg.LinearOperatorZeros.__init__": true, + "tf.compat.v1.linalg.LinearOperatorZeros.__le__": true, + "tf.compat.v1.linalg.LinearOperatorZeros.__lt__": true, + "tf.compat.v1.linalg.LinearOperatorZeros.__matmul__": true, + "tf.compat.v1.linalg.LinearOperatorZeros.__ne__": true, + "tf.compat.v1.linalg.LinearOperatorZeros.__new__": true, + "tf.compat.v1.linalg.LinearOperatorZeros.add_to_tensor": true, + "tf.compat.v1.linalg.LinearOperatorZeros.adjoint": true, + "tf.compat.v1.linalg.LinearOperatorZeros.assert_non_singular": true, + 
"tf.compat.v1.linalg.LinearOperatorZeros.assert_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorZeros.assert_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorZeros.batch_shape": true, + "tf.compat.v1.linalg.LinearOperatorZeros.batch_shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorZeros.cholesky": true, + "tf.compat.v1.linalg.LinearOperatorZeros.cond": true, + "tf.compat.v1.linalg.LinearOperatorZeros.determinant": true, + "tf.compat.v1.linalg.LinearOperatorZeros.diag_part": true, + "tf.compat.v1.linalg.LinearOperatorZeros.domain_dimension": true, + "tf.compat.v1.linalg.LinearOperatorZeros.domain_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorZeros.dtype": true, + "tf.compat.v1.linalg.LinearOperatorZeros.eigvals": true, + "tf.compat.v1.linalg.LinearOperatorZeros.graph_parents": true, + "tf.compat.v1.linalg.LinearOperatorZeros.inverse": true, + "tf.compat.v1.linalg.LinearOperatorZeros.is_non_singular": true, + "tf.compat.v1.linalg.LinearOperatorZeros.is_positive_definite": true, + "tf.compat.v1.linalg.LinearOperatorZeros.is_self_adjoint": true, + "tf.compat.v1.linalg.LinearOperatorZeros.is_square": true, + "tf.compat.v1.linalg.LinearOperatorZeros.log_abs_determinant": true, + "tf.compat.v1.linalg.LinearOperatorZeros.matmul": true, + "tf.compat.v1.linalg.LinearOperatorZeros.matvec": true, + "tf.compat.v1.linalg.LinearOperatorZeros.name": true, + "tf.compat.v1.linalg.LinearOperatorZeros.name_scope": true, + "tf.compat.v1.linalg.LinearOperatorZeros.range_dimension": true, + "tf.compat.v1.linalg.LinearOperatorZeros.range_dimension_tensor": true, + "tf.compat.v1.linalg.LinearOperatorZeros.shape": true, + "tf.compat.v1.linalg.LinearOperatorZeros.shape_tensor": true, + "tf.compat.v1.linalg.LinearOperatorZeros.solve": true, + "tf.compat.v1.linalg.LinearOperatorZeros.solvevec": true, + "tf.compat.v1.linalg.LinearOperatorZeros.submodules": true, + "tf.compat.v1.linalg.LinearOperatorZeros.tensor_rank": true, + "tf.compat.v1.linalg.LinearOperatorZeros.tensor_rank_tensor": true, + "tf.compat.v1.linalg.LinearOperatorZeros.to_dense": true, + "tf.compat.v1.linalg.LinearOperatorZeros.trace": true, + "tf.compat.v1.linalg.LinearOperatorZeros.trainable_variables": true, + "tf.compat.v1.linalg.LinearOperatorZeros.variables": true, + "tf.compat.v1.linalg.LinearOperatorZeros.with_name_scope": true, + "tf.compat.v1.linalg.adjoint": false, + "tf.compat.v1.linalg.band_part": false, + "tf.compat.v1.linalg.cholesky": false, + "tf.compat.v1.linalg.cholesky_solve": false, + "tf.compat.v1.linalg.cross": false, + "tf.compat.v1.linalg.det": false, + "tf.compat.v1.linalg.diag": false, + "tf.compat.v1.linalg.diag_part": false, + "tf.compat.v1.linalg.eigh": false, + "tf.compat.v1.linalg.eigvalsh": false, + "tf.compat.v1.linalg.einsum": false, + "tf.compat.v1.linalg.experimental": false, + "tf.compat.v1.linalg.experimental.conjugate_gradient": false, + "tf.compat.v1.linalg.expm": false, + "tf.compat.v1.linalg.eye": false, + "tf.compat.v1.linalg.global_norm": false, + "tf.compat.v1.linalg.inv": false, + "tf.compat.v1.linalg.l2_normalize": false, + "tf.compat.v1.linalg.logdet": false, + "tf.compat.v1.linalg.logm": false, + "tf.compat.v1.linalg.lstsq": false, + "tf.compat.v1.linalg.lu": false, + "tf.compat.v1.linalg.lu_matrix_inverse": false, + "tf.compat.v1.linalg.lu_reconstruct": false, + "tf.compat.v1.linalg.lu_solve": false, + "tf.compat.v1.linalg.matmul": false, + "tf.compat.v1.linalg.matrix_rank": false, + "tf.compat.v1.linalg.matrix_transpose": false, + 
"tf.compat.v1.linalg.matvec": false, + "tf.compat.v1.linalg.norm": false, + "tf.compat.v1.linalg.normalize": false, + "tf.compat.v1.linalg.pinv": false, + "tf.compat.v1.linalg.qr": false, + "tf.compat.v1.linalg.set_diag": false, + "tf.compat.v1.linalg.slogdet": false, + "tf.compat.v1.linalg.solve": false, + "tf.compat.v1.linalg.sqrtm": false, + "tf.compat.v1.linalg.svd": false, + "tf.compat.v1.linalg.tensor_diag": false, + "tf.compat.v1.linalg.tensor_diag_part": false, + "tf.compat.v1.linalg.tensordot": false, + "tf.compat.v1.linalg.trace": false, + "tf.compat.v1.linalg.transpose": false, + "tf.compat.v1.linalg.triangular_solve": false, + "tf.compat.v1.linalg.tridiagonal_matmul": false, + "tf.compat.v1.linalg.tridiagonal_solve": false, + "tf.compat.v1.linspace": false, + "tf.compat.v1.lite": false, + "tf.compat.v1.lite.Interpreter": false, + "tf.compat.v1.lite.Interpreter.__eq__": true, + "tf.compat.v1.lite.Interpreter.__ge__": true, + "tf.compat.v1.lite.Interpreter.__gt__": true, + "tf.compat.v1.lite.Interpreter.__init__": true, + "tf.compat.v1.lite.Interpreter.__le__": true, + "tf.compat.v1.lite.Interpreter.__lt__": true, + "tf.compat.v1.lite.Interpreter.__ne__": true, + "tf.compat.v1.lite.Interpreter.__new__": true, + "tf.compat.v1.lite.Interpreter.allocate_tensors": true, + "tf.compat.v1.lite.Interpreter.get_input_details": true, + "tf.compat.v1.lite.Interpreter.get_output_details": true, + "tf.compat.v1.lite.Interpreter.get_tensor": true, + "tf.compat.v1.lite.Interpreter.get_tensor_details": true, + "tf.compat.v1.lite.Interpreter.invoke": true, + "tf.compat.v1.lite.Interpreter.reset_all_variables": true, + "tf.compat.v1.lite.Interpreter.resize_tensor_input": true, + "tf.compat.v1.lite.Interpreter.set_tensor": true, + "tf.compat.v1.lite.Interpreter.tensor": true, + "tf.compat.v1.lite.OpHint": false, + "tf.compat.v1.lite.OpHint.AGGREGATE_FIRST": true, + "tf.compat.v1.lite.OpHint.AGGREGATE_LAST": true, + "tf.compat.v1.lite.OpHint.AGGREGATE_STACK": true, + "tf.compat.v1.lite.OpHint.CHILDREN_INPUTS_MAPPINGS": true, + "tf.compat.v1.lite.OpHint.FUNCTION_AGGREGATE_ATTR": true, + "tf.compat.v1.lite.OpHint.FUNCTION_INPUT_INDEX_ATTR": true, + "tf.compat.v1.lite.OpHint.FUNCTION_LEVEL_ATTR": true, + "tf.compat.v1.lite.OpHint.FUNCTION_NAME_ATTR": true, + "tf.compat.v1.lite.OpHint.FUNCTION_OUTPUT_INDEX_ATTR": true, + "tf.compat.v1.lite.OpHint.FUNCTION_SORT_INDEX_ATTR": true, + "tf.compat.v1.lite.OpHint.FUNCTION_UUID_ATTR": true, + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker": false, + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__eq__": true, + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__ge__": true, + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__gt__": true, + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__init__": true, + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__le__": true, + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__lt__": true, + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__ne__": true, + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.__new__": true, + "tf.compat.v1.lite.OpHint.OpHintArgumentTracker.add": true, + "tf.compat.v1.lite.OpHint.TFLITE_INPUT_INDICES": true, + "tf.compat.v1.lite.OpHint.__eq__": true, + "tf.compat.v1.lite.OpHint.__ge__": true, + "tf.compat.v1.lite.OpHint.__gt__": true, + "tf.compat.v1.lite.OpHint.__init__": true, + "tf.compat.v1.lite.OpHint.__le__": true, + "tf.compat.v1.lite.OpHint.__lt__": true, + "tf.compat.v1.lite.OpHint.__ne__": true, + "tf.compat.v1.lite.OpHint.__new__": true, + "tf.compat.v1.lite.OpHint.add_input": 
true, + "tf.compat.v1.lite.OpHint.add_inputs": true, + "tf.compat.v1.lite.OpHint.add_output": true, + "tf.compat.v1.lite.OpHint.add_outputs": true, + "tf.compat.v1.lite.OpsSet": false, + "tf.compat.v1.lite.OpsSet.SELECT_TF_OPS": true, + "tf.compat.v1.lite.OpsSet.TFLITE_BUILTINS": true, + "tf.compat.v1.lite.OpsSet.TFLITE_BUILTINS_INT8": true, + "tf.compat.v1.lite.OpsSet.name": true, + "tf.compat.v1.lite.OpsSet.value": true, + "tf.compat.v1.lite.Optimize": false, + "tf.compat.v1.lite.Optimize.DEFAULT": true, + "tf.compat.v1.lite.Optimize.OPTIMIZE_FOR_LATENCY": true, + "tf.compat.v1.lite.Optimize.OPTIMIZE_FOR_SIZE": true, + "tf.compat.v1.lite.Optimize.name": true, + "tf.compat.v1.lite.Optimize.value": true, + "tf.compat.v1.lite.RepresentativeDataset": false, + "tf.compat.v1.lite.RepresentativeDataset.__eq__": true, + "tf.compat.v1.lite.RepresentativeDataset.__ge__": true, + "tf.compat.v1.lite.RepresentativeDataset.__gt__": true, + "tf.compat.v1.lite.RepresentativeDataset.__init__": true, + "tf.compat.v1.lite.RepresentativeDataset.__le__": true, + "tf.compat.v1.lite.RepresentativeDataset.__lt__": true, + "tf.compat.v1.lite.RepresentativeDataset.__ne__": true, + "tf.compat.v1.lite.RepresentativeDataset.__new__": true, + "tf.compat.v1.lite.TFLiteConverter": false, + "tf.compat.v1.lite.TFLiteConverter.__eq__": true, + "tf.compat.v1.lite.TFLiteConverter.__ge__": true, + "tf.compat.v1.lite.TFLiteConverter.__gt__": true, + "tf.compat.v1.lite.TFLiteConverter.__init__": true, + "tf.compat.v1.lite.TFLiteConverter.__le__": true, + "tf.compat.v1.lite.TFLiteConverter.__lt__": true, + "tf.compat.v1.lite.TFLiteConverter.__ne__": true, + "tf.compat.v1.lite.TFLiteConverter.__new__": true, + "tf.compat.v1.lite.TFLiteConverter.convert": true, + "tf.compat.v1.lite.TFLiteConverter.from_frozen_graph": true, + "tf.compat.v1.lite.TFLiteConverter.from_keras_model_file": true, + "tf.compat.v1.lite.TFLiteConverter.from_saved_model": true, + "tf.compat.v1.lite.TFLiteConverter.from_session": true, + "tf.compat.v1.lite.TFLiteConverter.get_input_arrays": true, + "tf.compat.v1.lite.TargetSpec": false, + "tf.compat.v1.lite.TargetSpec.__eq__": true, + "tf.compat.v1.lite.TargetSpec.__ge__": true, + "tf.compat.v1.lite.TargetSpec.__gt__": true, + "tf.compat.v1.lite.TargetSpec.__init__": true, + "tf.compat.v1.lite.TargetSpec.__le__": true, + "tf.compat.v1.lite.TargetSpec.__lt__": true, + "tf.compat.v1.lite.TargetSpec.__ne__": true, + "tf.compat.v1.lite.TargetSpec.__new__": true, + "tf.compat.v1.lite.TocoConverter": false, + "tf.compat.v1.lite.TocoConverter.__eq__": true, + "tf.compat.v1.lite.TocoConverter.__ge__": true, + "tf.compat.v1.lite.TocoConverter.__gt__": true, + "tf.compat.v1.lite.TocoConverter.__init__": true, + "tf.compat.v1.lite.TocoConverter.__le__": true, + "tf.compat.v1.lite.TocoConverter.__lt__": true, + "tf.compat.v1.lite.TocoConverter.__ne__": true, + "tf.compat.v1.lite.TocoConverter.__new__": true, + "tf.compat.v1.lite.TocoConverter.from_frozen_graph": true, + "tf.compat.v1.lite.TocoConverter.from_keras_model_file": true, + "tf.compat.v1.lite.TocoConverter.from_saved_model": true, + "tf.compat.v1.lite.TocoConverter.from_session": true, + "tf.compat.v1.lite.constants": false, + "tf.compat.v1.lite.constants.FLOAT": true, + "tf.compat.v1.lite.constants.FLOAT16": true, + "tf.compat.v1.lite.constants.GRAPHVIZ_DOT": true, + "tf.compat.v1.lite.constants.INT32": true, + "tf.compat.v1.lite.constants.INT64": true, + "tf.compat.v1.lite.constants.INT8": true, + "tf.compat.v1.lite.constants.QUANTIZED_UINT8": true, + 
"tf.compat.v1.lite.constants.STRING": true, + "tf.compat.v1.lite.constants.TFLITE": true, + "tf.compat.v1.lite.experimental": false, + "tf.compat.v1.lite.experimental.convert_op_hints_to_stubs": false, + "tf.compat.v1.lite.experimental.get_potentially_supported_ops": false, + "tf.compat.v1.lite.experimental.load_delegate": false, + "tf.compat.v1.lite.experimental.nn": false, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell": false, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__call__": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__eq__": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__ge__": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__gt__": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__init__": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__le__": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__lt__": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__ne__": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.__new__": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.activity_regularizer": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.add_loss": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.add_metric": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.add_weight": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.build": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.call": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.compute_mask": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.compute_output_shape": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.compute_output_signature": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.count_params": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.dtype": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.dynamic": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.from_config": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.get_config": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.get_initial_state": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.get_weights": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.graph": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.input": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.input_spec": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.losses": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.metrics": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.name": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.name_scope": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.non_trainable_weights": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.output": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.output_size": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.scope_name": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.set_weights": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.state_size": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.submodules": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.trainable": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.trainable_weights": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.weights": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.with_name_scope": true, + "tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell.zero_state": true, + 
"tf.compat.v1.lite.experimental.nn.TfLiteRNNCell": false, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__call__": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__eq__": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__ge__": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__gt__": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__init__": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__le__": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__lt__": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__ne__": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.__new__": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.activity_regularizer": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.add_loss": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.add_metric": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.add_weight": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.build": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.call": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.compute_mask": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.compute_output_shape": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.compute_output_signature": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.count_params": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.dtype": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.dynamic": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.from_config": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.get_config": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.get_initial_state": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.get_weights": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.graph": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.input": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.input_spec": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.losses": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.metrics": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.name": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.name_scope": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.non_trainable_weights": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.output": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.output_size": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.scope_name": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.set_weights": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.state_size": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.submodules": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.trainable": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.trainable_weights": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.weights": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.with_name_scope": true, + "tf.compat.v1.lite.experimental.nn.TfLiteRNNCell.zero_state": true, + "tf.compat.v1.lite.experimental.nn.dynamic_rnn": false, + "tf.compat.v1.lite.toco_convert": false, + "tf.compat.v1.load_file_system_library": false, + "tf.compat.v1.load_library": false, + "tf.compat.v1.load_op_library": false, + "tf.compat.v1.local_variables": false, + "tf.compat.v1.local_variables_initializer": false, + "tf.compat.v1.log": false, + "tf.compat.v1.log1p": false, + "tf.compat.v1.log_sigmoid": false, + 
"tf.compat.v1.logging": false, + "tf.compat.v1.logging.DEBUG": true, + "tf.compat.v1.logging.ERROR": true, + "tf.compat.v1.logging.FATAL": true, + "tf.compat.v1.logging.INFO": true, + "tf.compat.v1.logging.TaskLevelStatusMessage": false, + "tf.compat.v1.logging.WARN": true, + "tf.compat.v1.logging.debug": false, + "tf.compat.v1.logging.error": false, + "tf.compat.v1.logging.fatal": false, + "tf.compat.v1.logging.flush": false, + "tf.compat.v1.logging.get_verbosity": false, + "tf.compat.v1.logging.info": false, + "tf.compat.v1.logging.log": false, + "tf.compat.v1.logging.log_every_n": false, + "tf.compat.v1.logging.log_first_n": false, + "tf.compat.v1.logging.log_if": false, + "tf.compat.v1.logging.set_verbosity": false, + "tf.compat.v1.logging.vlog": false, + "tf.compat.v1.logging.warn": false, + "tf.compat.v1.logging.warning": false, + "tf.compat.v1.logical_and": false, + "tf.compat.v1.logical_not": false, + "tf.compat.v1.logical_or": false, + "tf.compat.v1.logical_xor": false, + "tf.compat.v1.lookup": false, + "tf.compat.v1.lookup.KeyValueTensorInitializer": false, + "tf.compat.v1.lookup.KeyValueTensorInitializer.__eq__": true, + "tf.compat.v1.lookup.KeyValueTensorInitializer.__ge__": true, + "tf.compat.v1.lookup.KeyValueTensorInitializer.__gt__": true, + "tf.compat.v1.lookup.KeyValueTensorInitializer.__init__": true, + "tf.compat.v1.lookup.KeyValueTensorInitializer.__le__": true, + "tf.compat.v1.lookup.KeyValueTensorInitializer.__lt__": true, + "tf.compat.v1.lookup.KeyValueTensorInitializer.__ne__": true, + "tf.compat.v1.lookup.KeyValueTensorInitializer.__new__": true, + "tf.compat.v1.lookup.KeyValueTensorInitializer.initialize": true, + "tf.compat.v1.lookup.KeyValueTensorInitializer.key_dtype": true, + "tf.compat.v1.lookup.KeyValueTensorInitializer.value_dtype": true, + "tf.compat.v1.lookup.StaticHashTable": false, + "tf.compat.v1.lookup.StaticHashTable.__eq__": true, + "tf.compat.v1.lookup.StaticHashTable.__ge__": true, + "tf.compat.v1.lookup.StaticHashTable.__gt__": true, + "tf.compat.v1.lookup.StaticHashTable.__init__": true, + "tf.compat.v1.lookup.StaticHashTable.__le__": true, + "tf.compat.v1.lookup.StaticHashTable.__lt__": true, + "tf.compat.v1.lookup.StaticHashTable.__ne__": true, + "tf.compat.v1.lookup.StaticHashTable.__new__": true, + "tf.compat.v1.lookup.StaticHashTable.default_value": true, + "tf.compat.v1.lookup.StaticHashTable.export": true, + "tf.compat.v1.lookup.StaticHashTable.initializer": true, + "tf.compat.v1.lookup.StaticHashTable.key_dtype": true, + "tf.compat.v1.lookup.StaticHashTable.lookup": true, + "tf.compat.v1.lookup.StaticHashTable.name": true, + "tf.compat.v1.lookup.StaticHashTable.resource_handle": true, + "tf.compat.v1.lookup.StaticHashTable.size": true, + "tf.compat.v1.lookup.StaticHashTable.value_dtype": true, + "tf.compat.v1.lookup.StaticVocabularyTable": false, + "tf.compat.v1.lookup.StaticVocabularyTable.__eq__": true, + "tf.compat.v1.lookup.StaticVocabularyTable.__ge__": true, + "tf.compat.v1.lookup.StaticVocabularyTable.__gt__": true, + "tf.compat.v1.lookup.StaticVocabularyTable.__init__": true, + "tf.compat.v1.lookup.StaticVocabularyTable.__le__": true, + "tf.compat.v1.lookup.StaticVocabularyTable.__lt__": true, + "tf.compat.v1.lookup.StaticVocabularyTable.__ne__": true, + "tf.compat.v1.lookup.StaticVocabularyTable.__new__": true, + "tf.compat.v1.lookup.StaticVocabularyTable.initializer": true, + "tf.compat.v1.lookup.StaticVocabularyTable.key_dtype": true, + "tf.compat.v1.lookup.StaticVocabularyTable.lookup": true, + 
"tf.compat.v1.lookup.StaticVocabularyTable.name": true, + "tf.compat.v1.lookup.StaticVocabularyTable.resource_handle": true, + "tf.compat.v1.lookup.StaticVocabularyTable.size": true, + "tf.compat.v1.lookup.StaticVocabularyTable.value_dtype": true, + "tf.compat.v1.lookup.TextFileIndex": false, + "tf.compat.v1.lookup.TextFileIndex.LINE_NUMBER": true, + "tf.compat.v1.lookup.TextFileIndex.WHOLE_LINE": true, + "tf.compat.v1.lookup.TextFileIndex.__eq__": true, + "tf.compat.v1.lookup.TextFileIndex.__ge__": true, + "tf.compat.v1.lookup.TextFileIndex.__gt__": true, + "tf.compat.v1.lookup.TextFileIndex.__init__": true, + "tf.compat.v1.lookup.TextFileIndex.__le__": true, + "tf.compat.v1.lookup.TextFileIndex.__lt__": true, + "tf.compat.v1.lookup.TextFileIndex.__ne__": true, + "tf.compat.v1.lookup.TextFileIndex.__new__": true, + "tf.compat.v1.lookup.TextFileInitializer": false, + "tf.compat.v1.lookup.TextFileInitializer.__eq__": true, + "tf.compat.v1.lookup.TextFileInitializer.__ge__": true, + "tf.compat.v1.lookup.TextFileInitializer.__gt__": true, + "tf.compat.v1.lookup.TextFileInitializer.__init__": true, + "tf.compat.v1.lookup.TextFileInitializer.__le__": true, + "tf.compat.v1.lookup.TextFileInitializer.__lt__": true, + "tf.compat.v1.lookup.TextFileInitializer.__ne__": true, + "tf.compat.v1.lookup.TextFileInitializer.__new__": true, + "tf.compat.v1.lookup.TextFileInitializer.initialize": true, + "tf.compat.v1.lookup.TextFileInitializer.key_dtype": true, + "tf.compat.v1.lookup.TextFileInitializer.value_dtype": true, + "tf.compat.v1.lookup.experimental": false, + "tf.compat.v1.lookup.experimental.DenseHashTable": false, + "tf.compat.v1.lookup.experimental.DenseHashTable.__eq__": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.__ge__": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.__gt__": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.__init__": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.__le__": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.__lt__": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.__ne__": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.__new__": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.erase": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.export": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.insert": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.insert_or_assign": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.key_dtype": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.lookup": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.name": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.remove": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.resource_handle": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.size": true, + "tf.compat.v1.lookup.experimental.DenseHashTable.value_dtype": true, + "tf.compat.v1.losses": false, + "tf.compat.v1.losses.Reduction": false, + "tf.compat.v1.losses.Reduction.MEAN": true, + "tf.compat.v1.losses.Reduction.NONE": true, + "tf.compat.v1.losses.Reduction.SUM": true, + "tf.compat.v1.losses.Reduction.SUM_BY_NONZERO_WEIGHTS": true, + "tf.compat.v1.losses.Reduction.SUM_OVER_BATCH_SIZE": true, + "tf.compat.v1.losses.Reduction.SUM_OVER_NONZERO_WEIGHTS": true, + "tf.compat.v1.losses.Reduction.__eq__": true, + "tf.compat.v1.losses.Reduction.__ge__": true, + "tf.compat.v1.losses.Reduction.__gt__": true, + "tf.compat.v1.losses.Reduction.__init__": true, + "tf.compat.v1.losses.Reduction.__le__": 
true, + "tf.compat.v1.losses.Reduction.__lt__": true, + "tf.compat.v1.losses.Reduction.__ne__": true, + "tf.compat.v1.losses.Reduction.__new__": true, + "tf.compat.v1.losses.Reduction.all": true, + "tf.compat.v1.losses.Reduction.validate": true, + "tf.compat.v1.losses.absolute_difference": false, + "tf.compat.v1.losses.add_loss": false, + "tf.compat.v1.losses.compute_weighted_loss": false, + "tf.compat.v1.losses.cosine_distance": false, + "tf.compat.v1.losses.get_losses": false, + "tf.compat.v1.losses.get_regularization_loss": false, + "tf.compat.v1.losses.get_regularization_losses": false, + "tf.compat.v1.losses.get_total_loss": false, + "tf.compat.v1.losses.hinge_loss": false, + "tf.compat.v1.losses.huber_loss": false, + "tf.compat.v1.losses.log_loss": false, + "tf.compat.v1.losses.mean_pairwise_squared_error": false, + "tf.compat.v1.losses.mean_squared_error": false, + "tf.compat.v1.losses.sigmoid_cross_entropy": false, + "tf.compat.v1.losses.softmax_cross_entropy": false, + "tf.compat.v1.losses.sparse_softmax_cross_entropy": false, + "tf.compat.v1.make_ndarray": false, + "tf.compat.v1.make_template": false, + "tf.compat.v1.make_tensor_proto": false, + "tf.compat.v1.manip": false, + "tf.compat.v1.manip.batch_to_space_nd": false, + "tf.compat.v1.manip.gather_nd": false, + "tf.compat.v1.manip.reshape": false, + "tf.compat.v1.manip.reverse": false, + "tf.compat.v1.manip.roll": false, + "tf.compat.v1.manip.scatter_nd": false, + "tf.compat.v1.manip.space_to_batch_nd": false, + "tf.compat.v1.manip.tile": false, + "tf.compat.v1.map_fn": false, + "tf.compat.v1.matching_files": false, + "tf.compat.v1.math": false, + "tf.compat.v1.math.abs": false, + "tf.compat.v1.math.accumulate_n": false, + "tf.compat.v1.math.acos": false, + "tf.compat.v1.math.acosh": false, + "tf.compat.v1.math.add": false, + "tf.compat.v1.math.add_n": false, + "tf.compat.v1.math.angle": false, + "tf.compat.v1.math.argmax": false, + "tf.compat.v1.math.argmin": false, + "tf.compat.v1.math.asin": false, + "tf.compat.v1.math.asinh": false, + "tf.compat.v1.math.atan": false, + "tf.compat.v1.math.atan2": false, + "tf.compat.v1.math.atanh": false, + "tf.compat.v1.math.bessel_i0": false, + "tf.compat.v1.math.bessel_i0e": false, + "tf.compat.v1.math.bessel_i1": false, + "tf.compat.v1.math.bessel_i1e": false, + "tf.compat.v1.math.betainc": false, + "tf.compat.v1.math.bincount": false, + "tf.compat.v1.math.ceil": false, + "tf.compat.v1.math.confusion_matrix": false, + "tf.compat.v1.math.conj": false, + "tf.compat.v1.math.cos": false, + "tf.compat.v1.math.cosh": false, + "tf.compat.v1.math.count_nonzero": false, + "tf.compat.v1.math.cumprod": false, + "tf.compat.v1.math.cumsum": false, + "tf.compat.v1.math.cumulative_logsumexp": false, + "tf.compat.v1.math.digamma": false, + "tf.compat.v1.math.divide": false, + "tf.compat.v1.math.divide_no_nan": false, + "tf.compat.v1.math.equal": false, + "tf.compat.v1.math.erf": false, + "tf.compat.v1.math.erfc": false, + "tf.compat.v1.math.erfinv": false, + "tf.compat.v1.math.exp": false, + "tf.compat.v1.math.expm1": false, + "tf.compat.v1.math.floor": false, + "tf.compat.v1.math.floordiv": false, + "tf.compat.v1.math.floormod": false, + "tf.compat.v1.math.greater": false, + "tf.compat.v1.math.greater_equal": false, + "tf.compat.v1.math.igamma": false, + "tf.compat.v1.math.igammac": false, + "tf.compat.v1.math.imag": false, + "tf.compat.v1.math.in_top_k": false, + "tf.compat.v1.math.invert_permutation": false, + "tf.compat.v1.math.is_finite": false, + "tf.compat.v1.math.is_inf": false, + 
"tf.compat.v1.math.is_nan": false, + "tf.compat.v1.math.is_non_decreasing": false, + "tf.compat.v1.math.is_strictly_increasing": false, + "tf.compat.v1.math.l2_normalize": false, + "tf.compat.v1.math.lbeta": false, + "tf.compat.v1.math.less": false, + "tf.compat.v1.math.less_equal": false, + "tf.compat.v1.math.lgamma": false, + "tf.compat.v1.math.log": false, + "tf.compat.v1.math.log1p": false, + "tf.compat.v1.math.log_sigmoid": false, + "tf.compat.v1.math.log_softmax": false, + "tf.compat.v1.math.logical_and": false, + "tf.compat.v1.math.logical_not": false, + "tf.compat.v1.math.logical_or": false, + "tf.compat.v1.math.logical_xor": false, + "tf.compat.v1.math.maximum": false, + "tf.compat.v1.math.minimum": false, + "tf.compat.v1.math.mod": false, + "tf.compat.v1.math.multiply": false, + "tf.compat.v1.math.multiply_no_nan": false, + "tf.compat.v1.math.ndtri": false, + "tf.compat.v1.math.negative": false, + "tf.compat.v1.math.nextafter": false, + "tf.compat.v1.math.not_equal": false, + "tf.compat.v1.math.polygamma": false, + "tf.compat.v1.math.polyval": false, + "tf.compat.v1.math.pow": false, + "tf.compat.v1.math.real": false, + "tf.compat.v1.math.reciprocal": false, + "tf.compat.v1.math.reciprocal_no_nan": false, + "tf.compat.v1.math.reduce_all": false, + "tf.compat.v1.math.reduce_any": false, + "tf.compat.v1.math.reduce_euclidean_norm": false, + "tf.compat.v1.math.reduce_logsumexp": false, + "tf.compat.v1.math.reduce_max": false, + "tf.compat.v1.math.reduce_mean": false, + "tf.compat.v1.math.reduce_min": false, + "tf.compat.v1.math.reduce_prod": false, + "tf.compat.v1.math.reduce_std": false, + "tf.compat.v1.math.reduce_sum": false, + "tf.compat.v1.math.reduce_variance": false, + "tf.compat.v1.math.rint": false, + "tf.compat.v1.math.round": false, + "tf.compat.v1.math.rsqrt": false, + "tf.compat.v1.math.scalar_mul": false, + "tf.compat.v1.math.segment_max": false, + "tf.compat.v1.math.segment_mean": false, + "tf.compat.v1.math.segment_min": false, + "tf.compat.v1.math.segment_prod": false, + "tf.compat.v1.math.segment_sum": false, + "tf.compat.v1.math.sigmoid": false, + "tf.compat.v1.math.sign": false, + "tf.compat.v1.math.sin": false, + "tf.compat.v1.math.sinh": false, + "tf.compat.v1.math.sobol_sample": false, + "tf.compat.v1.math.softmax": false, + "tf.compat.v1.math.softplus": false, + "tf.compat.v1.math.softsign": false, + "tf.compat.v1.math.special": false, + "tf.compat.v1.math.special.dawsn": false, + "tf.compat.v1.math.special.expint": false, + "tf.compat.v1.math.special.fresnel_cos": false, + "tf.compat.v1.math.special.fresnel_sin": false, + "tf.compat.v1.math.special.spence": false, + "tf.compat.v1.math.sqrt": false, + "tf.compat.v1.math.square": false, + "tf.compat.v1.math.squared_difference": false, + "tf.compat.v1.math.subtract": false, + "tf.compat.v1.math.tan": false, + "tf.compat.v1.math.tanh": false, + "tf.compat.v1.math.top_k": false, + "tf.compat.v1.math.truediv": false, + "tf.compat.v1.math.unsorted_segment_max": false, + "tf.compat.v1.math.unsorted_segment_mean": false, + "tf.compat.v1.math.unsorted_segment_min": false, + "tf.compat.v1.math.unsorted_segment_prod": false, + "tf.compat.v1.math.unsorted_segment_sqrt_n": false, + "tf.compat.v1.math.unsorted_segment_sum": false, + "tf.compat.v1.math.xdivy": false, + "tf.compat.v1.math.xlog1py": false, + "tf.compat.v1.math.xlogy": false, + "tf.compat.v1.math.zero_fraction": false, + "tf.compat.v1.math.zeta": false, + "tf.compat.v1.matmul": false, + "tf.compat.v1.matrix_band_part": false, + 
"tf.compat.v1.matrix_determinant": false, + "tf.compat.v1.matrix_diag": false, + "tf.compat.v1.matrix_diag_part": false, + "tf.compat.v1.matrix_inverse": false, + "tf.compat.v1.matrix_set_diag": false, + "tf.compat.v1.matrix_solve": false, + "tf.compat.v1.matrix_solve_ls": false, + "tf.compat.v1.matrix_square_root": false, + "tf.compat.v1.matrix_transpose": false, + "tf.compat.v1.matrix_triangular_solve": false, + "tf.compat.v1.maximum": false, + "tf.compat.v1.meshgrid": false, + "tf.compat.v1.metrics": false, + "tf.compat.v1.metrics.accuracy": false, + "tf.compat.v1.metrics.auc": false, + "tf.compat.v1.metrics.average_precision_at_k": false, + "tf.compat.v1.metrics.false_negatives": false, + "tf.compat.v1.metrics.false_negatives_at_thresholds": false, + "tf.compat.v1.metrics.false_positives": false, + "tf.compat.v1.metrics.false_positives_at_thresholds": false, + "tf.compat.v1.metrics.mean": false, + "tf.compat.v1.metrics.mean_absolute_error": false, + "tf.compat.v1.metrics.mean_cosine_distance": false, + "tf.compat.v1.metrics.mean_iou": false, + "tf.compat.v1.metrics.mean_per_class_accuracy": false, + "tf.compat.v1.metrics.mean_relative_error": false, + "tf.compat.v1.metrics.mean_squared_error": false, + "tf.compat.v1.metrics.mean_tensor": false, + "tf.compat.v1.metrics.percentage_below": false, + "tf.compat.v1.metrics.precision": false, + "tf.compat.v1.metrics.precision_at_k": false, + "tf.compat.v1.metrics.precision_at_thresholds": false, + "tf.compat.v1.metrics.precision_at_top_k": false, + "tf.compat.v1.metrics.recall": false, + "tf.compat.v1.metrics.recall_at_k": false, + "tf.compat.v1.metrics.recall_at_thresholds": false, + "tf.compat.v1.metrics.recall_at_top_k": false, + "tf.compat.v1.metrics.root_mean_squared_error": false, + "tf.compat.v1.metrics.sensitivity_at_specificity": false, + "tf.compat.v1.metrics.sparse_average_precision_at_k": false, + "tf.compat.v1.metrics.sparse_precision_at_k": false, + "tf.compat.v1.metrics.specificity_at_sensitivity": false, + "tf.compat.v1.metrics.true_negatives": false, + "tf.compat.v1.metrics.true_negatives_at_thresholds": false, + "tf.compat.v1.metrics.true_positives": false, + "tf.compat.v1.metrics.true_positives_at_thresholds": false, + "tf.compat.v1.min_max_variable_partitioner": false, + "tf.compat.v1.minimum": false, + "tf.compat.v1.mixed_precision": false, + "tf.compat.v1.mixed_precision.experimental": false, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale": false, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__call__": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__eq__": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__ge__": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__gt__": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__init__": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__le__": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__lt__": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__ne__": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.__new__": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.from_config": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.get_config": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.increment_period": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.initial_loss_scale": true, + 
"tf.compat.v1.mixed_precision.experimental.DynamicLossScale.multiplier": true, + "tf.compat.v1.mixed_precision.experimental.DynamicLossScale.update": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale": false, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__call__": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__eq__": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__ge__": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__gt__": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__init__": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__le__": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__lt__": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__ne__": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.__new__": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.from_config": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.get_config": true, + "tf.compat.v1.mixed_precision.experimental.FixedLossScale.update": true, + "tf.compat.v1.mixed_precision.experimental.LossScale": false, + "tf.compat.v1.mixed_precision.experimental.LossScale.__call__": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.__eq__": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.__ge__": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.__gt__": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.__init__": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.__le__": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.__lt__": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.__ne__": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.__new__": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.from_config": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.get_config": true, + "tf.compat.v1.mixed_precision.experimental.LossScale.update": true, + "tf.compat.v1.mlir": false, + "tf.compat.v1.mlir.experimental": false, + "tf.compat.v1.mlir.experimental.convert_graph_def": false, + "tf.compat.v1.mod": false, + "tf.compat.v1.model_variables": false, + "tf.compat.v1.moving_average_variables": false, + "tf.compat.v1.multinomial": false, + "tf.compat.v1.multiply": false, + "tf.compat.v1.name_scope": false, + "tf.compat.v1.name_scope.__enter__": true, + "tf.compat.v1.name_scope.__eq__": true, + "tf.compat.v1.name_scope.__exit__": true, + "tf.compat.v1.name_scope.__ge__": true, + "tf.compat.v1.name_scope.__gt__": true, + "tf.compat.v1.name_scope.__init__": true, + "tf.compat.v1.name_scope.__le__": true, + "tf.compat.v1.name_scope.__lt__": true, + "tf.compat.v1.name_scope.__ne__": true, + "tf.compat.v1.name_scope.__new__": true, + "tf.compat.v1.name_scope.name": true, + "tf.compat.v1.negative": false, + "tf.compat.v1.nest": false, + "tf.compat.v1.nest.assert_same_structure": false, + "tf.compat.v1.nest.flatten": false, + "tf.compat.v1.nest.is_nested": false, + "tf.compat.v1.nest.map_structure": false, + "tf.compat.v1.nest.pack_sequence_as": false, + "tf.compat.v1.newaxis": true, + "tf.compat.v1.nn": false, + "tf.compat.v1.nn.all_candidate_sampler": false, + "tf.compat.v1.nn.atrous_conv2d": false, + "tf.compat.v1.nn.atrous_conv2d_transpose": false, + "tf.compat.v1.nn.avg_pool": false, + "tf.compat.v1.nn.avg_pool1d": false, + "tf.compat.v1.nn.avg_pool2d": false, + "tf.compat.v1.nn.avg_pool3d": false, + 
"tf.compat.v1.nn.avg_pool_v2": false, + "tf.compat.v1.nn.batch_norm_with_global_normalization": false, + "tf.compat.v1.nn.batch_normalization": false, + "tf.compat.v1.nn.bias_add": false, + "tf.compat.v1.nn.bidirectional_dynamic_rnn": false, + "tf.compat.v1.nn.collapse_repeated": false, + "tf.compat.v1.nn.compute_accidental_hits": false, + "tf.compat.v1.nn.compute_average_loss": false, + "tf.compat.v1.nn.conv1d": false, + "tf.compat.v1.nn.conv1d_transpose": false, + "tf.compat.v1.nn.conv2d": false, + "tf.compat.v1.nn.conv2d_backprop_filter": false, + "tf.compat.v1.nn.conv2d_backprop_input": false, + "tf.compat.v1.nn.conv2d_transpose": false, + "tf.compat.v1.nn.conv3d": false, + "tf.compat.v1.nn.conv3d_backprop_filter": false, + "tf.compat.v1.nn.conv3d_backprop_filter_v2": false, + "tf.compat.v1.nn.conv3d_transpose": false, + "tf.compat.v1.nn.conv_transpose": false, + "tf.compat.v1.nn.convolution": false, + "tf.compat.v1.nn.crelu": false, + "tf.compat.v1.nn.ctc_beam_search_decoder": false, + "tf.compat.v1.nn.ctc_beam_search_decoder_v2": false, + "tf.compat.v1.nn.ctc_greedy_decoder": false, + "tf.compat.v1.nn.ctc_loss": false, + "tf.compat.v1.nn.ctc_loss_v2": false, + "tf.compat.v1.nn.ctc_unique_labels": false, + "tf.compat.v1.nn.depth_to_space": false, + "tf.compat.v1.nn.depthwise_conv2d": false, + "tf.compat.v1.nn.depthwise_conv2d_backprop_filter": false, + "tf.compat.v1.nn.depthwise_conv2d_backprop_input": false, + "tf.compat.v1.nn.depthwise_conv2d_native": false, + "tf.compat.v1.nn.depthwise_conv2d_native_backprop_filter": false, + "tf.compat.v1.nn.depthwise_conv2d_native_backprop_input": false, + "tf.compat.v1.nn.dilation2d": false, + "tf.compat.v1.nn.dropout": false, + "tf.compat.v1.nn.dynamic_rnn": false, + "tf.compat.v1.nn.elu": false, + "tf.compat.v1.nn.embedding_lookup": false, + "tf.compat.v1.nn.embedding_lookup_sparse": false, + "tf.compat.v1.nn.erosion2d": false, + "tf.compat.v1.nn.fixed_unigram_candidate_sampler": false, + "tf.compat.v1.nn.fractional_avg_pool": false, + "tf.compat.v1.nn.fractional_max_pool": false, + "tf.compat.v1.nn.fused_batch_norm": false, + "tf.compat.v1.nn.in_top_k": false, + "tf.compat.v1.nn.l2_loss": false, + "tf.compat.v1.nn.l2_normalize": false, + "tf.compat.v1.nn.leaky_relu": false, + "tf.compat.v1.nn.learned_unigram_candidate_sampler": false, + "tf.compat.v1.nn.local_response_normalization": false, + "tf.compat.v1.nn.log_poisson_loss": false, + "tf.compat.v1.nn.log_softmax": false, + "tf.compat.v1.nn.log_uniform_candidate_sampler": false, + "tf.compat.v1.nn.lrn": false, + "tf.compat.v1.nn.max_pool": false, + "tf.compat.v1.nn.max_pool1d": false, + "tf.compat.v1.nn.max_pool2d": false, + "tf.compat.v1.nn.max_pool3d": false, + "tf.compat.v1.nn.max_pool_v2": false, + "tf.compat.v1.nn.max_pool_with_argmax": false, + "tf.compat.v1.nn.moments": false, + "tf.compat.v1.nn.nce_loss": false, + "tf.compat.v1.nn.normalize_moments": false, + "tf.compat.v1.nn.pool": false, + "tf.compat.v1.nn.quantized_avg_pool": false, + "tf.compat.v1.nn.quantized_conv2d": false, + "tf.compat.v1.nn.quantized_max_pool": false, + "tf.compat.v1.nn.quantized_relu_x": false, + "tf.compat.v1.nn.raw_rnn": false, + "tf.compat.v1.nn.relu": false, + "tf.compat.v1.nn.relu6": false, + "tf.compat.v1.nn.relu_layer": false, + "tf.compat.v1.nn.rnn_cell": false, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell": false, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__call__": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__eq__": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__ge__": true, + 
"tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__gt__": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__init__": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__le__": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__lt__": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__ne__": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.__new__": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.activity_regularizer": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.add_loss": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.add_metric": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.add_weight": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.build": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.call": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.compute_mask": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.compute_output_shape": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.compute_output_signature": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.count_params": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.dtype": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.dynamic": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.from_config": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.get_config": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.get_initial_state": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.get_weights": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.graph": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.input": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.input_spec": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.losses": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.metrics": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.name": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.name_scope": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.non_trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.output": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.output_size": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.scope_name": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.set_weights": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.state_size": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.submodules": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.trainable": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.weights": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.with_name_scope": true, + "tf.compat.v1.nn.rnn_cell.BasicLSTMCell.zero_state": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell": false, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__call__": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__eq__": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__ge__": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__gt__": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__init__": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__le__": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__lt__": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__ne__": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.__new__": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.activity_regularizer": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.add_loss": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.add_metric": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.add_weight": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.build": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.call": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.compute_mask": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.compute_output_shape": true, + 
"tf.compat.v1.nn.rnn_cell.BasicRNNCell.compute_output_signature": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.count_params": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.dtype": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.dynamic": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.from_config": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.get_config": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.get_initial_state": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.get_weights": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.graph": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.input": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.input_spec": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.losses": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.metrics": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.name": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.name_scope": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.non_trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.output": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.output_size": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.scope_name": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.set_weights": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.state_size": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.submodules": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.trainable": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.weights": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.with_name_scope": true, + "tf.compat.v1.nn.rnn_cell.BasicRNNCell.zero_state": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper": false, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__call__": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__eq__": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__ge__": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__gt__": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__init__": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__le__": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__lt__": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__ne__": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.__new__": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.activity_regularizer": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.add_loss": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.add_metric": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.add_weight": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.build": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.call": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.compute_mask": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.compute_output_shape": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.compute_output_signature": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.count_params": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.dtype": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.dynamic": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.from_config": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.get_config": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.get_initial_state": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.get_weights": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.graph": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.input": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.input_spec": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.losses": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.metrics": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.name": true, + 
"tf.compat.v1.nn.rnn_cell.DeviceWrapper.name_scope": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.non_trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.output": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.output_size": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.scope_name": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.set_weights": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.state_size": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.submodules": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.trainable": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.weights": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.with_name_scope": true, + "tf.compat.v1.nn.rnn_cell.DeviceWrapper.zero_state": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper": false, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__call__": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__eq__": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__ge__": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__gt__": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__init__": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__le__": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__lt__": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__ne__": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.__new__": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.activity_regularizer": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.add_loss": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.add_metric": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.add_weight": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.build": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.call": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.compute_mask": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.compute_output_shape": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.compute_output_signature": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.count_params": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.dtype": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.dynamic": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.from_config": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.get_config": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.get_initial_state": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.get_weights": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.graph": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.input": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.input_spec": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.losses": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.metrics": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.name": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.name_scope": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.non_trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.output": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.output_size": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.scope_name": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.set_weights": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.state_size": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.submodules": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.trainable": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.weights": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.with_name_scope": true, + 
"tf.compat.v1.nn.rnn_cell.DropoutWrapper.wrapped_cell": true, + "tf.compat.v1.nn.rnn_cell.DropoutWrapper.zero_state": true, + "tf.compat.v1.nn.rnn_cell.GRUCell": false, + "tf.compat.v1.nn.rnn_cell.GRUCell.__call__": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.__eq__": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.__ge__": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.__gt__": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.__init__": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.__le__": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.__lt__": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.__ne__": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.__new__": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.activity_regularizer": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.add_loss": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.add_metric": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.add_weight": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.build": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.call": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.compute_mask": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.compute_output_shape": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.compute_output_signature": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.count_params": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.dtype": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.dynamic": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.from_config": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.get_config": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.get_initial_state": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.get_weights": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.graph": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.input": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.input_spec": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.losses": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.metrics": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.name": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.name_scope": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.non_trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.output": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.output_size": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.scope_name": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.set_weights": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.state_size": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.submodules": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.trainable": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.weights": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.with_name_scope": true, + "tf.compat.v1.nn.rnn_cell.GRUCell.zero_state": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell": false, + "tf.compat.v1.nn.rnn_cell.LSTMCell.__call__": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.__eq__": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.__ge__": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.__gt__": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.__init__": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.__le__": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.__lt__": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.__ne__": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.__new__": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.activity_regularizer": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.add_loss": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.add_metric": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.add_weight": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.build": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.call": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.compute_mask": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.compute_output_shape": true, 
+ "tf.compat.v1.nn.rnn_cell.LSTMCell.compute_output_signature": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.count_params": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.dtype": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.dynamic": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.from_config": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.get_config": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.get_initial_state": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.get_weights": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.graph": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.input": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.input_spec": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.losses": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.metrics": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.name": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.name_scope": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.non_trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.output": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.output_size": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.scope_name": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.set_weights": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.state_size": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.submodules": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.trainable": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.weights": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.with_name_scope": true, + "tf.compat.v1.nn.rnn_cell.LSTMCell.zero_state": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple": false, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__add__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__contains__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__eq__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__ge__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__getitem__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__gt__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__init__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__iter__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__le__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__len__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__lt__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__mul__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__ne__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__new__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.__rmul__": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.c": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.count": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.dtype": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.h": true, + "tf.compat.v1.nn.rnn_cell.LSTMStateTuple.index": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell": false, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__call__": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__eq__": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__ge__": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__gt__": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__init__": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__le__": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__lt__": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__ne__": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.__new__": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.activity_regularizer": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.add_loss": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.add_metric": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.add_weight": true, + 
"tf.compat.v1.nn.rnn_cell.MultiRNNCell.build": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.call": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.compute_mask": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.compute_output_shape": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.compute_output_signature": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.count_params": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.dtype": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.dynamic": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.from_config": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.get_config": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.get_initial_state": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.get_weights": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.graph": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.input": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.input_spec": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.losses": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.metrics": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.name": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.name_scope": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.non_trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.output": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.output_size": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.scope_name": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.set_weights": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.state_size": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.submodules": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.trainable": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.weights": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.with_name_scope": true, + "tf.compat.v1.nn.rnn_cell.MultiRNNCell.zero_state": true, + "tf.compat.v1.nn.rnn_cell.RNNCell": false, + "tf.compat.v1.nn.rnn_cell.RNNCell.__call__": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.__eq__": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.__ge__": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.__gt__": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.__init__": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.__le__": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.__lt__": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.__ne__": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.__new__": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.activity_regularizer": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.add_loss": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.add_metric": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.add_weight": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.build": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.call": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.compute_mask": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.compute_output_shape": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.compute_output_signature": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.count_params": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.dtype": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.dynamic": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.from_config": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.get_config": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.get_initial_state": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.get_weights": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.graph": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.input": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.input_spec": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.losses": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.metrics": true, + 
"tf.compat.v1.nn.rnn_cell.RNNCell.name": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.name_scope": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.non_trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.output": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.output_size": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.scope_name": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.set_weights": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.state_size": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.submodules": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.trainable": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.weights": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.with_name_scope": true, + "tf.compat.v1.nn.rnn_cell.RNNCell.zero_state": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper": false, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__call__": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__eq__": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__ge__": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__gt__": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__init__": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__le__": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__lt__": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__ne__": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.__new__": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.activity_regularizer": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.add_loss": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.add_metric": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.add_weight": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.build": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.call": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.compute_mask": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.compute_output_shape": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.compute_output_signature": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.count_params": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.dtype": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.dynamic": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.from_config": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.get_config": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.get_initial_state": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.get_weights": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.graph": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.input": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.input_spec": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.losses": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.metrics": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.name": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.name_scope": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.non_trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.output": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.output_size": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.scope_name": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.set_weights": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.state_size": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.submodules": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.trainable": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.trainable_weights": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.weights": true, + "tf.compat.v1.nn.rnn_cell.ResidualWrapper.with_name_scope": true, + 
"tf.compat.v1.nn.rnn_cell.ResidualWrapper.zero_state": true, + "tf.compat.v1.nn.safe_embedding_lookup_sparse": false, + "tf.compat.v1.nn.sampled_softmax_loss": false, + "tf.compat.v1.nn.scale_regularization_loss": false, + "tf.compat.v1.nn.selu": false, + "tf.compat.v1.nn.separable_conv2d": false, + "tf.compat.v1.nn.sigmoid": false, + "tf.compat.v1.nn.sigmoid_cross_entropy_with_logits": false, + "tf.compat.v1.nn.softmax": false, + "tf.compat.v1.nn.softmax_cross_entropy_with_logits": false, + "tf.compat.v1.nn.softmax_cross_entropy_with_logits_v2": false, + "tf.compat.v1.nn.softplus": false, + "tf.compat.v1.nn.softsign": false, + "tf.compat.v1.nn.space_to_batch": false, + "tf.compat.v1.nn.space_to_depth": false, + "tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits": false, + "tf.compat.v1.nn.static_bidirectional_rnn": false, + "tf.compat.v1.nn.static_rnn": false, + "tf.compat.v1.nn.static_state_saving_rnn": false, + "tf.compat.v1.nn.sufficient_statistics": false, + "tf.compat.v1.nn.swish": false, + "tf.compat.v1.nn.tanh": false, + "tf.compat.v1.nn.top_k": false, + "tf.compat.v1.nn.uniform_candidate_sampler": false, + "tf.compat.v1.nn.weighted_cross_entropy_with_logits": false, + "tf.compat.v1.nn.weighted_moments": false, + "tf.compat.v1.nn.with_space_to_batch": false, + "tf.compat.v1.nn.xw_plus_b": false, + "tf.compat.v1.nn.zero_fraction": false, + "tf.compat.v1.no_gradient": false, + "tf.compat.v1.no_op": false, + "tf.compat.v1.no_regularizer": false, + "tf.compat.v1.nondifferentiable_batch_function": false, + "tf.compat.v1.norm": false, + "tf.compat.v1.not_equal": false, + "tf.compat.v1.numpy_function": false, + "tf.compat.v1.one_hot": false, + "tf.compat.v1.ones": false, + "tf.compat.v1.ones_initializer": false, + "tf.compat.v1.ones_initializer.__call__": true, + "tf.compat.v1.ones_initializer.__eq__": true, + "tf.compat.v1.ones_initializer.__ge__": true, + "tf.compat.v1.ones_initializer.__gt__": true, + "tf.compat.v1.ones_initializer.__init__": true, + "tf.compat.v1.ones_initializer.__le__": true, + "tf.compat.v1.ones_initializer.__lt__": true, + "tf.compat.v1.ones_initializer.__ne__": true, + "tf.compat.v1.ones_initializer.__new__": true, + "tf.compat.v1.ones_initializer.from_config": true, + "tf.compat.v1.ones_initializer.get_config": true, + "tf.compat.v1.ones_like": false, + "tf.compat.v1.op_scope": false, + "tf.compat.v1.orthogonal_initializer": false, + "tf.compat.v1.orthogonal_initializer.__call__": true, + "tf.compat.v1.orthogonal_initializer.__eq__": true, + "tf.compat.v1.orthogonal_initializer.__ge__": true, + "tf.compat.v1.orthogonal_initializer.__gt__": true, + "tf.compat.v1.orthogonal_initializer.__init__": true, + "tf.compat.v1.orthogonal_initializer.__le__": true, + "tf.compat.v1.orthogonal_initializer.__lt__": true, + "tf.compat.v1.orthogonal_initializer.__ne__": true, + "tf.compat.v1.orthogonal_initializer.__new__": true, + "tf.compat.v1.orthogonal_initializer.from_config": true, + "tf.compat.v1.orthogonal_initializer.get_config": true, + "tf.compat.v1.pad": false, + "tf.compat.v1.parallel_stack": false, + "tf.compat.v1.parse_example": false, + "tf.compat.v1.parse_single_example": false, + "tf.compat.v1.parse_single_sequence_example": false, + "tf.compat.v1.parse_tensor": false, + "tf.compat.v1.placeholder": false, + "tf.compat.v1.placeholder_with_default": false, + "tf.compat.v1.polygamma": false, + "tf.compat.v1.pow": false, + "tf.compat.v1.print": false, + "tf.compat.v1.profiler": false, + "tf.compat.v1.profiler.AdviceProto": false, + 
"tf.compat.v1.profiler.AdviceProto.ByteSize": true, + "tf.compat.v1.profiler.AdviceProto.Checker": false, + "tf.compat.v1.profiler.AdviceProto.Checker.ByteSize": true, + "tf.compat.v1.profiler.AdviceProto.Checker.Clear": true, + "tf.compat.v1.profiler.AdviceProto.Checker.ClearExtension": true, + "tf.compat.v1.profiler.AdviceProto.Checker.ClearField": true, + "tf.compat.v1.profiler.AdviceProto.Checker.CopyFrom": true, + "tf.compat.v1.profiler.AdviceProto.Checker.DESCRIPTOR": true, + "tf.compat.v1.profiler.AdviceProto.Checker.DiscardUnknownFields": true, + "tf.compat.v1.profiler.AdviceProto.Checker.Extensions": true, + "tf.compat.v1.profiler.AdviceProto.Checker.FindInitializationErrors": true, + "tf.compat.v1.profiler.AdviceProto.Checker.FromString": true, + "tf.compat.v1.profiler.AdviceProto.Checker.HasExtension": true, + "tf.compat.v1.profiler.AdviceProto.Checker.HasField": true, + "tf.compat.v1.profiler.AdviceProto.Checker.IsInitialized": true, + "tf.compat.v1.profiler.AdviceProto.Checker.ListFields": true, + "tf.compat.v1.profiler.AdviceProto.Checker.MergeFrom": true, + "tf.compat.v1.profiler.AdviceProto.Checker.MergeFromString": true, + "tf.compat.v1.profiler.AdviceProto.Checker.ParseFromString": true, + "tf.compat.v1.profiler.AdviceProto.Checker.RegisterExtension": true, + "tf.compat.v1.profiler.AdviceProto.Checker.SerializePartialToString": true, + "tf.compat.v1.profiler.AdviceProto.Checker.SerializeToString": true, + "tf.compat.v1.profiler.AdviceProto.Checker.SetInParent": true, + "tf.compat.v1.profiler.AdviceProto.Checker.UnknownFields": true, + "tf.compat.v1.profiler.AdviceProto.Checker.WhichOneof": true, + "tf.compat.v1.profiler.AdviceProto.Checker.__eq__": true, + "tf.compat.v1.profiler.AdviceProto.Checker.__ge__": true, + "tf.compat.v1.profiler.AdviceProto.Checker.__gt__": true, + "tf.compat.v1.profiler.AdviceProto.Checker.__init__": true, + "tf.compat.v1.profiler.AdviceProto.Checker.__le__": true, + "tf.compat.v1.profiler.AdviceProto.Checker.__lt__": true, + "tf.compat.v1.profiler.AdviceProto.Checker.__ne__": true, + "tf.compat.v1.profiler.AdviceProto.Checker.__new__": true, + "tf.compat.v1.profiler.AdviceProto.Checker.reports": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry": false, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.ByteSize": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.Clear": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.ClearExtension": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.ClearField": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.CopyFrom": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.DESCRIPTOR": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.DiscardUnknownFields": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.Extensions": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.FindInitializationErrors": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.FromString": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.HasExtension": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.HasField": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.IsInitialized": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.ListFields": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.MergeFrom": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.MergeFromString": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.ParseFromString": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.RegisterExtension": true, + 
"tf.compat.v1.profiler.AdviceProto.CheckersEntry.SerializePartialToString": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.SerializeToString": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.SetInParent": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.UnknownFields": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.WhichOneof": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__eq__": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__ge__": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__gt__": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__init__": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__le__": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__lt__": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__ne__": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.__new__": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.key": true, + "tf.compat.v1.profiler.AdviceProto.CheckersEntry.value": true, + "tf.compat.v1.profiler.AdviceProto.Clear": true, + "tf.compat.v1.profiler.AdviceProto.ClearExtension": true, + "tf.compat.v1.profiler.AdviceProto.ClearField": true, + "tf.compat.v1.profiler.AdviceProto.CopyFrom": true, + "tf.compat.v1.profiler.AdviceProto.DESCRIPTOR": true, + "tf.compat.v1.profiler.AdviceProto.DiscardUnknownFields": true, + "tf.compat.v1.profiler.AdviceProto.Extensions": true, + "tf.compat.v1.profiler.AdviceProto.FindInitializationErrors": true, + "tf.compat.v1.profiler.AdviceProto.FromString": true, + "tf.compat.v1.profiler.AdviceProto.HasExtension": true, + "tf.compat.v1.profiler.AdviceProto.HasField": true, + "tf.compat.v1.profiler.AdviceProto.IsInitialized": true, + "tf.compat.v1.profiler.AdviceProto.ListFields": true, + "tf.compat.v1.profiler.AdviceProto.MergeFrom": true, + "tf.compat.v1.profiler.AdviceProto.MergeFromString": true, + "tf.compat.v1.profiler.AdviceProto.ParseFromString": true, + "tf.compat.v1.profiler.AdviceProto.RegisterExtension": true, + "tf.compat.v1.profiler.AdviceProto.SerializePartialToString": true, + "tf.compat.v1.profiler.AdviceProto.SerializeToString": true, + "tf.compat.v1.profiler.AdviceProto.SetInParent": true, + "tf.compat.v1.profiler.AdviceProto.UnknownFields": true, + "tf.compat.v1.profiler.AdviceProto.WhichOneof": true, + "tf.compat.v1.profiler.AdviceProto.__eq__": true, + "tf.compat.v1.profiler.AdviceProto.__ge__": true, + "tf.compat.v1.profiler.AdviceProto.__gt__": true, + "tf.compat.v1.profiler.AdviceProto.__init__": true, + "tf.compat.v1.profiler.AdviceProto.__le__": true, + "tf.compat.v1.profiler.AdviceProto.__lt__": true, + "tf.compat.v1.profiler.AdviceProto.__ne__": true, + "tf.compat.v1.profiler.AdviceProto.__new__": true, + "tf.compat.v1.profiler.AdviceProto.checkers": true, + "tf.compat.v1.profiler.GraphNodeProto": false, + "tf.compat.v1.profiler.GraphNodeProto.ByteSize": true, + "tf.compat.v1.profiler.GraphNodeProto.Clear": true, + "tf.compat.v1.profiler.GraphNodeProto.ClearExtension": true, + "tf.compat.v1.profiler.GraphNodeProto.ClearField": true, + "tf.compat.v1.profiler.GraphNodeProto.CopyFrom": true, + "tf.compat.v1.profiler.GraphNodeProto.DESCRIPTOR": true, + "tf.compat.v1.profiler.GraphNodeProto.DiscardUnknownFields": true, + "tf.compat.v1.profiler.GraphNodeProto.Extensions": true, + "tf.compat.v1.profiler.GraphNodeProto.FindInitializationErrors": true, + "tf.compat.v1.profiler.GraphNodeProto.FromString": true, + "tf.compat.v1.profiler.GraphNodeProto.HasExtension": true, + 
"tf.compat.v1.profiler.GraphNodeProto.HasField": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry": false, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.ByteSize": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.Clear": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.ClearExtension": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.ClearField": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.CopyFrom": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.DESCRIPTOR": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.DiscardUnknownFields": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.Extensions": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.FindInitializationErrors": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.FromString": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.HasExtension": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.HasField": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.IsInitialized": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.ListFields": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.MergeFrom": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.MergeFromString": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.ParseFromString": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.RegisterExtension": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.SerializePartialToString": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.SerializeToString": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.SetInParent": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.UnknownFields": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.WhichOneof": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__eq__": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__ge__": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__gt__": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__init__": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__le__": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__lt__": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__ne__": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.__new__": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.key": true, + "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry.value": true, + "tf.compat.v1.profiler.GraphNodeProto.IsInitialized": true, + "tf.compat.v1.profiler.GraphNodeProto.ListFields": true, + "tf.compat.v1.profiler.GraphNodeProto.MergeFrom": true, + "tf.compat.v1.profiler.GraphNodeProto.MergeFromString": true, + "tf.compat.v1.profiler.GraphNodeProto.ParseFromString": true, + "tf.compat.v1.profiler.GraphNodeProto.RegisterExtension": true, + "tf.compat.v1.profiler.GraphNodeProto.SerializePartialToString": true, + "tf.compat.v1.profiler.GraphNodeProto.SerializeToString": true, + "tf.compat.v1.profiler.GraphNodeProto.SetInParent": true, + "tf.compat.v1.profiler.GraphNodeProto.UnknownFields": true, + "tf.compat.v1.profiler.GraphNodeProto.WhichOneof": true, + "tf.compat.v1.profiler.GraphNodeProto.__eq__": true, + "tf.compat.v1.profiler.GraphNodeProto.__ge__": true, + "tf.compat.v1.profiler.GraphNodeProto.__gt__": true, + 
"tf.compat.v1.profiler.GraphNodeProto.__init__": true, + "tf.compat.v1.profiler.GraphNodeProto.__le__": true, + "tf.compat.v1.profiler.GraphNodeProto.__lt__": true, + "tf.compat.v1.profiler.GraphNodeProto.__ne__": true, + "tf.compat.v1.profiler.GraphNodeProto.__new__": true, + "tf.compat.v1.profiler.GraphNodeProto.accelerator_exec_micros": true, + "tf.compat.v1.profiler.GraphNodeProto.children": true, + "tf.compat.v1.profiler.GraphNodeProto.cpu_exec_micros": true, + "tf.compat.v1.profiler.GraphNodeProto.devices": true, + "tf.compat.v1.profiler.GraphNodeProto.exec_micros": true, + "tf.compat.v1.profiler.GraphNodeProto.float_ops": true, + "tf.compat.v1.profiler.GraphNodeProto.input_shapes": true, + "tf.compat.v1.profiler.GraphNodeProto.name": true, + "tf.compat.v1.profiler.GraphNodeProto.output_bytes": true, + "tf.compat.v1.profiler.GraphNodeProto.parameters": true, + "tf.compat.v1.profiler.GraphNodeProto.peak_bytes": true, + "tf.compat.v1.profiler.GraphNodeProto.requested_bytes": true, + "tf.compat.v1.profiler.GraphNodeProto.residual_bytes": true, + "tf.compat.v1.profiler.GraphNodeProto.run_count": true, + "tf.compat.v1.profiler.GraphNodeProto.shapes": true, + "tf.compat.v1.profiler.GraphNodeProto.tensor_value": true, + "tf.compat.v1.profiler.GraphNodeProto.total_accelerator_exec_micros": true, + "tf.compat.v1.profiler.GraphNodeProto.total_cpu_exec_micros": true, + "tf.compat.v1.profiler.GraphNodeProto.total_definition_count": true, + "tf.compat.v1.profiler.GraphNodeProto.total_exec_micros": true, + "tf.compat.v1.profiler.GraphNodeProto.total_float_ops": true, + "tf.compat.v1.profiler.GraphNodeProto.total_output_bytes": true, + "tf.compat.v1.profiler.GraphNodeProto.total_parameters": true, + "tf.compat.v1.profiler.GraphNodeProto.total_peak_bytes": true, + "tf.compat.v1.profiler.GraphNodeProto.total_requested_bytes": true, + "tf.compat.v1.profiler.GraphNodeProto.total_residual_bytes": true, + "tf.compat.v1.profiler.GraphNodeProto.total_run_count": true, + "tf.compat.v1.profiler.MultiGraphNodeProto": false, + "tf.compat.v1.profiler.MultiGraphNodeProto.ByteSize": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.Clear": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.ClearExtension": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.ClearField": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.CopyFrom": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.DESCRIPTOR": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.DiscardUnknownFields": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.Extensions": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.FindInitializationErrors": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.FromString": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.HasExtension": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.HasField": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.IsInitialized": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.ListFields": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.MergeFrom": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.MergeFromString": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.ParseFromString": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.RegisterExtension": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.SerializePartialToString": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.SerializeToString": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.SetInParent": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.UnknownFields": true, + 
"tf.compat.v1.profiler.MultiGraphNodeProto.WhichOneof": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.__eq__": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.__ge__": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.__gt__": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.__init__": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.__le__": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.__lt__": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.__ne__": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.__new__": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.accelerator_exec_micros": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.children": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.cpu_exec_micros": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.exec_micros": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.float_ops": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.graph_nodes": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.name": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.output_bytes": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.parameters": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.peak_bytes": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.requested_bytes": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.residual_bytes": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.total_accelerator_exec_micros": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.total_cpu_exec_micros": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.total_exec_micros": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.total_float_ops": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.total_output_bytes": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.total_parameters": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.total_peak_bytes": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.total_requested_bytes": true, + "tf.compat.v1.profiler.MultiGraphNodeProto.total_residual_bytes": true, + "tf.compat.v1.profiler.OpLogProto": false, + "tf.compat.v1.profiler.OpLogProto.ByteSize": true, + "tf.compat.v1.profiler.OpLogProto.Clear": true, + "tf.compat.v1.profiler.OpLogProto.ClearExtension": true, + "tf.compat.v1.profiler.OpLogProto.ClearField": true, + "tf.compat.v1.profiler.OpLogProto.CopyFrom": true, + "tf.compat.v1.profiler.OpLogProto.DESCRIPTOR": true, + "tf.compat.v1.profiler.OpLogProto.DiscardUnknownFields": true, + "tf.compat.v1.profiler.OpLogProto.Extensions": true, + "tf.compat.v1.profiler.OpLogProto.FindInitializationErrors": true, + "tf.compat.v1.profiler.OpLogProto.FromString": true, + "tf.compat.v1.profiler.OpLogProto.HasExtension": true, + "tf.compat.v1.profiler.OpLogProto.HasField": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry": false, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.ByteSize": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.Clear": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.ClearExtension": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.ClearField": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.CopyFrom": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.DESCRIPTOR": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.DiscardUnknownFields": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.Extensions": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.FindInitializationErrors": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.FromString": true, + 
"tf.compat.v1.profiler.OpLogProto.IdToStringEntry.HasExtension": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.HasField": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.IsInitialized": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.ListFields": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.MergeFrom": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.MergeFromString": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.ParseFromString": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.RegisterExtension": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.SerializePartialToString": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.SerializeToString": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.SetInParent": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.UnknownFields": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.WhichOneof": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__eq__": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__ge__": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__gt__": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__init__": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__le__": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__lt__": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__ne__": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.__new__": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.key": true, + "tf.compat.v1.profiler.OpLogProto.IdToStringEntry.value": true, + "tf.compat.v1.profiler.OpLogProto.IsInitialized": true, + "tf.compat.v1.profiler.OpLogProto.ListFields": true, + "tf.compat.v1.profiler.OpLogProto.MergeFrom": true, + "tf.compat.v1.profiler.OpLogProto.MergeFromString": true, + "tf.compat.v1.profiler.OpLogProto.ParseFromString": true, + "tf.compat.v1.profiler.OpLogProto.RegisterExtension": true, + "tf.compat.v1.profiler.OpLogProto.SerializePartialToString": true, + "tf.compat.v1.profiler.OpLogProto.SerializeToString": true, + "tf.compat.v1.profiler.OpLogProto.SetInParent": true, + "tf.compat.v1.profiler.OpLogProto.UnknownFields": true, + "tf.compat.v1.profiler.OpLogProto.WhichOneof": true, + "tf.compat.v1.profiler.OpLogProto.__eq__": true, + "tf.compat.v1.profiler.OpLogProto.__ge__": true, + "tf.compat.v1.profiler.OpLogProto.__gt__": true, + "tf.compat.v1.profiler.OpLogProto.__init__": true, + "tf.compat.v1.profiler.OpLogProto.__le__": true, + "tf.compat.v1.profiler.OpLogProto.__lt__": true, + "tf.compat.v1.profiler.OpLogProto.__ne__": true, + "tf.compat.v1.profiler.OpLogProto.__new__": true, + "tf.compat.v1.profiler.OpLogProto.id_to_string": true, + "tf.compat.v1.profiler.OpLogProto.log_entries": true, + "tf.compat.v1.profiler.ProfileOptionBuilder": false, + "tf.compat.v1.profiler.ProfileOptionBuilder.__eq__": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.__ge__": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.__gt__": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.__init__": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.__le__": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.__lt__": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.__ne__": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.__new__": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.account_displayed_op_only": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.build": true, + 
"tf.compat.v1.profiler.ProfileOptionBuilder.float_operation": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.order_by": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.select": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.time_and_memory": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.trainable_variables_parameter": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_accounted_types": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_empty_output": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_file_output": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_max_depth": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_min_execution_time": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_min_float_operations": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_min_memory": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_min_occurrence": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_min_parameters": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_node_names": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_pprof_output": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_stdout_output": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_step": true, + "tf.compat.v1.profiler.ProfileOptionBuilder.with_timeline_output": true, + "tf.compat.v1.profiler.Profiler": false, + "tf.compat.v1.profiler.Profiler.__eq__": true, + "tf.compat.v1.profiler.Profiler.__ge__": true, + "tf.compat.v1.profiler.Profiler.__gt__": true, + "tf.compat.v1.profiler.Profiler.__init__": true, + "tf.compat.v1.profiler.Profiler.__le__": true, + "tf.compat.v1.profiler.Profiler.__lt__": true, + "tf.compat.v1.profiler.Profiler.__ne__": true, + "tf.compat.v1.profiler.Profiler.__new__": true, + "tf.compat.v1.profiler.Profiler.add_step": true, + "tf.compat.v1.profiler.Profiler.advise": true, + "tf.compat.v1.profiler.Profiler.profile_graph": true, + "tf.compat.v1.profiler.Profiler.profile_name_scope": true, + "tf.compat.v1.profiler.Profiler.profile_operations": true, + "tf.compat.v1.profiler.Profiler.profile_python": true, + "tf.compat.v1.profiler.Profiler.serialize_to_string": true, + "tf.compat.v1.profiler.advise": false, + "tf.compat.v1.profiler.profile": false, + "tf.compat.v1.profiler.write_op_log": false, + "tf.compat.v1.py_func": false, + "tf.compat.v1.py_function": false, + "tf.compat.v1.python_io": false, + "tf.compat.v1.python_io.TFRecordCompressionType": false, + "tf.compat.v1.python_io.TFRecordCompressionType.GZIP": true, + "tf.compat.v1.python_io.TFRecordCompressionType.NONE": true, + "tf.compat.v1.python_io.TFRecordCompressionType.ZLIB": true, + "tf.compat.v1.python_io.TFRecordCompressionType.__eq__": true, + "tf.compat.v1.python_io.TFRecordCompressionType.__ge__": true, + "tf.compat.v1.python_io.TFRecordCompressionType.__gt__": true, + "tf.compat.v1.python_io.TFRecordCompressionType.__init__": true, + "tf.compat.v1.python_io.TFRecordCompressionType.__le__": true, + "tf.compat.v1.python_io.TFRecordCompressionType.__lt__": true, + "tf.compat.v1.python_io.TFRecordCompressionType.__ne__": true, + "tf.compat.v1.python_io.TFRecordCompressionType.__new__": true, + "tf.compat.v1.python_io.TFRecordOptions": false, + "tf.compat.v1.python_io.TFRecordOptions.__eq__": true, + "tf.compat.v1.python_io.TFRecordOptions.__ge__": true, + "tf.compat.v1.python_io.TFRecordOptions.__gt__": true, + "tf.compat.v1.python_io.TFRecordOptions.__init__": true, + "tf.compat.v1.python_io.TFRecordOptions.__le__": true, + 
"tf.compat.v1.python_io.TFRecordOptions.__lt__": true, + "tf.compat.v1.python_io.TFRecordOptions.__ne__": true, + "tf.compat.v1.python_io.TFRecordOptions.__new__": true, + "tf.compat.v1.python_io.TFRecordOptions.compression_type_map": true, + "tf.compat.v1.python_io.TFRecordOptions.get_compression_type_string": true, + "tf.compat.v1.python_io.TFRecordWriter": false, + "tf.compat.v1.python_io.TFRecordWriter.__enter__": true, + "tf.compat.v1.python_io.TFRecordWriter.__eq__": true, + "tf.compat.v1.python_io.TFRecordWriter.__exit__": true, + "tf.compat.v1.python_io.TFRecordWriter.__ge__": true, + "tf.compat.v1.python_io.TFRecordWriter.__gt__": true, + "tf.compat.v1.python_io.TFRecordWriter.__init__": true, + "tf.compat.v1.python_io.TFRecordWriter.__le__": true, + "tf.compat.v1.python_io.TFRecordWriter.__lt__": true, + "tf.compat.v1.python_io.TFRecordWriter.__ne__": true, + "tf.compat.v1.python_io.TFRecordWriter.__new__": true, + "tf.compat.v1.python_io.TFRecordWriter.close": true, + "tf.compat.v1.python_io.TFRecordWriter.flush": true, + "tf.compat.v1.python_io.TFRecordWriter.write": true, + "tf.compat.v1.python_io.tf_record_iterator": false, + "tf.compat.v1.qint16": true, + "tf.compat.v1.qint32": true, + "tf.compat.v1.qint8": true, + "tf.compat.v1.qr": false, + "tf.compat.v1.quantization": false, + "tf.compat.v1.quantization.dequantize": false, + "tf.compat.v1.quantization.fake_quant_with_min_max_args": false, + "tf.compat.v1.quantization.fake_quant_with_min_max_args_gradient": false, + "tf.compat.v1.quantization.fake_quant_with_min_max_vars": false, + "tf.compat.v1.quantization.fake_quant_with_min_max_vars_gradient": false, + "tf.compat.v1.quantization.fake_quant_with_min_max_vars_per_channel": false, + "tf.compat.v1.quantization.fake_quant_with_min_max_vars_per_channel_gradient": false, + "tf.compat.v1.quantization.quantize": false, + "tf.compat.v1.quantization.quantize_and_dequantize": false, + "tf.compat.v1.quantization.quantized_concat": false, + "tf.compat.v1.quantize": false, + "tf.compat.v1.quantize_v2": false, + "tf.compat.v1.quantized_concat": false, + "tf.compat.v1.queue": false, + "tf.compat.v1.queue.FIFOQueue": false, + "tf.compat.v1.queue.FIFOQueue.__eq__": true, + "tf.compat.v1.queue.FIFOQueue.__ge__": true, + "tf.compat.v1.queue.FIFOQueue.__gt__": true, + "tf.compat.v1.queue.FIFOQueue.__init__": true, + "tf.compat.v1.queue.FIFOQueue.__le__": true, + "tf.compat.v1.queue.FIFOQueue.__lt__": true, + "tf.compat.v1.queue.FIFOQueue.__ne__": true, + "tf.compat.v1.queue.FIFOQueue.__new__": true, + "tf.compat.v1.queue.FIFOQueue.close": true, + "tf.compat.v1.queue.FIFOQueue.dequeue": true, + "tf.compat.v1.queue.FIFOQueue.dequeue_many": true, + "tf.compat.v1.queue.FIFOQueue.dequeue_up_to": true, + "tf.compat.v1.queue.FIFOQueue.dtypes": true, + "tf.compat.v1.queue.FIFOQueue.enqueue": true, + "tf.compat.v1.queue.FIFOQueue.enqueue_many": true, + "tf.compat.v1.queue.FIFOQueue.from_list": true, + "tf.compat.v1.queue.FIFOQueue.is_closed": true, + "tf.compat.v1.queue.FIFOQueue.name": true, + "tf.compat.v1.queue.FIFOQueue.names": true, + "tf.compat.v1.queue.FIFOQueue.queue_ref": true, + "tf.compat.v1.queue.FIFOQueue.shapes": true, + "tf.compat.v1.queue.FIFOQueue.size": true, + "tf.compat.v1.queue.PaddingFIFOQueue": false, + "tf.compat.v1.queue.PaddingFIFOQueue.__eq__": true, + "tf.compat.v1.queue.PaddingFIFOQueue.__ge__": true, + "tf.compat.v1.queue.PaddingFIFOQueue.__gt__": true, + "tf.compat.v1.queue.PaddingFIFOQueue.__init__": true, + "tf.compat.v1.queue.PaddingFIFOQueue.__le__": true, + 
"tf.compat.v1.queue.PaddingFIFOQueue.__lt__": true, + "tf.compat.v1.queue.PaddingFIFOQueue.__ne__": true, + "tf.compat.v1.queue.PaddingFIFOQueue.__new__": true, + "tf.compat.v1.queue.PaddingFIFOQueue.close": true, + "tf.compat.v1.queue.PaddingFIFOQueue.dequeue": true, + "tf.compat.v1.queue.PaddingFIFOQueue.dequeue_many": true, + "tf.compat.v1.queue.PaddingFIFOQueue.dequeue_up_to": true, + "tf.compat.v1.queue.PaddingFIFOQueue.dtypes": true, + "tf.compat.v1.queue.PaddingFIFOQueue.enqueue": true, + "tf.compat.v1.queue.PaddingFIFOQueue.enqueue_many": true, + "tf.compat.v1.queue.PaddingFIFOQueue.from_list": true, + "tf.compat.v1.queue.PaddingFIFOQueue.is_closed": true, + "tf.compat.v1.queue.PaddingFIFOQueue.name": true, + "tf.compat.v1.queue.PaddingFIFOQueue.names": true, + "tf.compat.v1.queue.PaddingFIFOQueue.queue_ref": true, + "tf.compat.v1.queue.PaddingFIFOQueue.shapes": true, + "tf.compat.v1.queue.PaddingFIFOQueue.size": true, + "tf.compat.v1.queue.PriorityQueue": false, + "tf.compat.v1.queue.PriorityQueue.__eq__": true, + "tf.compat.v1.queue.PriorityQueue.__ge__": true, + "tf.compat.v1.queue.PriorityQueue.__gt__": true, + "tf.compat.v1.queue.PriorityQueue.__init__": true, + "tf.compat.v1.queue.PriorityQueue.__le__": true, + "tf.compat.v1.queue.PriorityQueue.__lt__": true, + "tf.compat.v1.queue.PriorityQueue.__ne__": true, + "tf.compat.v1.queue.PriorityQueue.__new__": true, + "tf.compat.v1.queue.PriorityQueue.close": true, + "tf.compat.v1.queue.PriorityQueue.dequeue": true, + "tf.compat.v1.queue.PriorityQueue.dequeue_many": true, + "tf.compat.v1.queue.PriorityQueue.dequeue_up_to": true, + "tf.compat.v1.queue.PriorityQueue.dtypes": true, + "tf.compat.v1.queue.PriorityQueue.enqueue": true, + "tf.compat.v1.queue.PriorityQueue.enqueue_many": true, + "tf.compat.v1.queue.PriorityQueue.from_list": true, + "tf.compat.v1.queue.PriorityQueue.is_closed": true, + "tf.compat.v1.queue.PriorityQueue.name": true, + "tf.compat.v1.queue.PriorityQueue.names": true, + "tf.compat.v1.queue.PriorityQueue.queue_ref": true, + "tf.compat.v1.queue.PriorityQueue.shapes": true, + "tf.compat.v1.queue.PriorityQueue.size": true, + "tf.compat.v1.queue.QueueBase": false, + "tf.compat.v1.queue.QueueBase.__eq__": true, + "tf.compat.v1.queue.QueueBase.__ge__": true, + "tf.compat.v1.queue.QueueBase.__gt__": true, + "tf.compat.v1.queue.QueueBase.__init__": true, + "tf.compat.v1.queue.QueueBase.__le__": true, + "tf.compat.v1.queue.QueueBase.__lt__": true, + "tf.compat.v1.queue.QueueBase.__ne__": true, + "tf.compat.v1.queue.QueueBase.__new__": true, + "tf.compat.v1.queue.QueueBase.close": true, + "tf.compat.v1.queue.QueueBase.dequeue": true, + "tf.compat.v1.queue.QueueBase.dequeue_many": true, + "tf.compat.v1.queue.QueueBase.dequeue_up_to": true, + "tf.compat.v1.queue.QueueBase.dtypes": true, + "tf.compat.v1.queue.QueueBase.enqueue": true, + "tf.compat.v1.queue.QueueBase.enqueue_many": true, + "tf.compat.v1.queue.QueueBase.from_list": true, + "tf.compat.v1.queue.QueueBase.is_closed": true, + "tf.compat.v1.queue.QueueBase.name": true, + "tf.compat.v1.queue.QueueBase.names": true, + "tf.compat.v1.queue.QueueBase.queue_ref": true, + "tf.compat.v1.queue.QueueBase.shapes": true, + "tf.compat.v1.queue.QueueBase.size": true, + "tf.compat.v1.queue.RandomShuffleQueue": false, + "tf.compat.v1.queue.RandomShuffleQueue.__eq__": true, + "tf.compat.v1.queue.RandomShuffleQueue.__ge__": true, + "tf.compat.v1.queue.RandomShuffleQueue.__gt__": true, + "tf.compat.v1.queue.RandomShuffleQueue.__init__": true, + 
"tf.compat.v1.queue.RandomShuffleQueue.__le__": true, + "tf.compat.v1.queue.RandomShuffleQueue.__lt__": true, + "tf.compat.v1.queue.RandomShuffleQueue.__ne__": true, + "tf.compat.v1.queue.RandomShuffleQueue.__new__": true, + "tf.compat.v1.queue.RandomShuffleQueue.close": true, + "tf.compat.v1.queue.RandomShuffleQueue.dequeue": true, + "tf.compat.v1.queue.RandomShuffleQueue.dequeue_many": true, + "tf.compat.v1.queue.RandomShuffleQueue.dequeue_up_to": true, + "tf.compat.v1.queue.RandomShuffleQueue.dtypes": true, + "tf.compat.v1.queue.RandomShuffleQueue.enqueue": true, + "tf.compat.v1.queue.RandomShuffleQueue.enqueue_many": true, + "tf.compat.v1.queue.RandomShuffleQueue.from_list": true, + "tf.compat.v1.queue.RandomShuffleQueue.is_closed": true, + "tf.compat.v1.queue.RandomShuffleQueue.name": true, + "tf.compat.v1.queue.RandomShuffleQueue.names": true, + "tf.compat.v1.queue.RandomShuffleQueue.queue_ref": true, + "tf.compat.v1.queue.RandomShuffleQueue.shapes": true, + "tf.compat.v1.queue.RandomShuffleQueue.size": true, + "tf.compat.v1.quint16": true, + "tf.compat.v1.quint8": true, + "tf.compat.v1.ragged": false, + "tf.compat.v1.ragged.RaggedTensorValue": false, + "tf.compat.v1.ragged.RaggedTensorValue.__eq__": true, + "tf.compat.v1.ragged.RaggedTensorValue.__ge__": true, + "tf.compat.v1.ragged.RaggedTensorValue.__gt__": true, + "tf.compat.v1.ragged.RaggedTensorValue.__init__": true, + "tf.compat.v1.ragged.RaggedTensorValue.__le__": true, + "tf.compat.v1.ragged.RaggedTensorValue.__lt__": true, + "tf.compat.v1.ragged.RaggedTensorValue.__ne__": true, + "tf.compat.v1.ragged.RaggedTensorValue.__new__": true, + "tf.compat.v1.ragged.RaggedTensorValue.dtype": true, + "tf.compat.v1.ragged.RaggedTensorValue.flat_values": true, + "tf.compat.v1.ragged.RaggedTensorValue.nested_row_splits": true, + "tf.compat.v1.ragged.RaggedTensorValue.ragged_rank": true, + "tf.compat.v1.ragged.RaggedTensorValue.row_splits": true, + "tf.compat.v1.ragged.RaggedTensorValue.shape": true, + "tf.compat.v1.ragged.RaggedTensorValue.to_list": true, + "tf.compat.v1.ragged.RaggedTensorValue.values": true, + "tf.compat.v1.ragged.boolean_mask": false, + "tf.compat.v1.ragged.constant": false, + "tf.compat.v1.ragged.constant_value": false, + "tf.compat.v1.ragged.map_flat_values": false, + "tf.compat.v1.ragged.placeholder": false, + "tf.compat.v1.ragged.range": false, + "tf.compat.v1.ragged.row_splits_to_segment_ids": false, + "tf.compat.v1.ragged.segment_ids_to_row_splits": false, + "tf.compat.v1.ragged.stack": false, + "tf.compat.v1.ragged.stack_dynamic_partitions": false, + "tf.compat.v1.random": false, + "tf.compat.v1.random.Algorithm": false, + "tf.compat.v1.random.Algorithm.PHILOX": true, + "tf.compat.v1.random.Algorithm.THREEFRY": true, + "tf.compat.v1.random.Algorithm.name": true, + "tf.compat.v1.random.Algorithm.value": true, + "tf.compat.v1.random.Generator": false, + "tf.compat.v1.random.Generator.__eq__": true, + "tf.compat.v1.random.Generator.__ge__": true, + "tf.compat.v1.random.Generator.__gt__": true, + "tf.compat.v1.random.Generator.__init__": true, + "tf.compat.v1.random.Generator.__le__": true, + "tf.compat.v1.random.Generator.__lt__": true, + "tf.compat.v1.random.Generator.__ne__": true, + "tf.compat.v1.random.Generator.__new__": true, + "tf.compat.v1.random.Generator.algorithm": true, + "tf.compat.v1.random.Generator.binomial": true, + "tf.compat.v1.random.Generator.from_key_counter": true, + "tf.compat.v1.random.Generator.from_non_deterministic_state": true, + "tf.compat.v1.random.Generator.from_seed": true, + 
"tf.compat.v1.random.Generator.from_state": true, + "tf.compat.v1.random.Generator.key": true, + "tf.compat.v1.random.Generator.make_seeds": true, + "tf.compat.v1.random.Generator.normal": true, + "tf.compat.v1.random.Generator.reset": true, + "tf.compat.v1.random.Generator.reset_from_key_counter": true, + "tf.compat.v1.random.Generator.reset_from_seed": true, + "tf.compat.v1.random.Generator.skip": true, + "tf.compat.v1.random.Generator.split": true, + "tf.compat.v1.random.Generator.state": true, + "tf.compat.v1.random.Generator.truncated_normal": true, + "tf.compat.v1.random.Generator.uniform": true, + "tf.compat.v1.random.Generator.uniform_full_int": true, + "tf.compat.v1.random.all_candidate_sampler": false, + "tf.compat.v1.random.categorical": false, + "tf.compat.v1.random.create_rng_state": false, + "tf.compat.v1.random.experimental": false, + "tf.compat.v1.random.experimental.Algorithm": false, + "tf.compat.v1.random.experimental.Algorithm.PHILOX": true, + "tf.compat.v1.random.experimental.Algorithm.THREEFRY": true, + "tf.compat.v1.random.experimental.Algorithm.name": true, + "tf.compat.v1.random.experimental.Algorithm.value": true, + "tf.compat.v1.random.experimental.Generator": false, + "tf.compat.v1.random.experimental.Generator.__eq__": true, + "tf.compat.v1.random.experimental.Generator.__ge__": true, + "tf.compat.v1.random.experimental.Generator.__gt__": true, + "tf.compat.v1.random.experimental.Generator.__init__": true, + "tf.compat.v1.random.experimental.Generator.__le__": true, + "tf.compat.v1.random.experimental.Generator.__lt__": true, + "tf.compat.v1.random.experimental.Generator.__ne__": true, + "tf.compat.v1.random.experimental.Generator.__new__": true, + "tf.compat.v1.random.experimental.Generator.algorithm": true, + "tf.compat.v1.random.experimental.Generator.binomial": true, + "tf.compat.v1.random.experimental.Generator.from_key_counter": true, + "tf.compat.v1.random.experimental.Generator.from_non_deterministic_state": true, + "tf.compat.v1.random.experimental.Generator.from_seed": true, + "tf.compat.v1.random.experimental.Generator.from_state": true, + "tf.compat.v1.random.experimental.Generator.key": true, + "tf.compat.v1.random.experimental.Generator.make_seeds": true, + "tf.compat.v1.random.experimental.Generator.normal": true, + "tf.compat.v1.random.experimental.Generator.reset": true, + "tf.compat.v1.random.experimental.Generator.reset_from_key_counter": true, + "tf.compat.v1.random.experimental.Generator.reset_from_seed": true, + "tf.compat.v1.random.experimental.Generator.skip": true, + "tf.compat.v1.random.experimental.Generator.split": true, + "tf.compat.v1.random.experimental.Generator.state": true, + "tf.compat.v1.random.experimental.Generator.truncated_normal": true, + "tf.compat.v1.random.experimental.Generator.uniform": true, + "tf.compat.v1.random.experimental.Generator.uniform_full_int": true, + "tf.compat.v1.random.experimental.create_rng_state": false, + "tf.compat.v1.random.experimental.get_global_generator": false, + "tf.compat.v1.random.experimental.set_global_generator": false, + "tf.compat.v1.random.fixed_unigram_candidate_sampler": false, + "tf.compat.v1.random.gamma": false, + "tf.compat.v1.random.get_global_generator": false, + "tf.compat.v1.random.get_seed": false, + "tf.compat.v1.random.learned_unigram_candidate_sampler": false, + "tf.compat.v1.random.log_uniform_candidate_sampler": false, + "tf.compat.v1.random.multinomial": false, + "tf.compat.v1.random.normal": false, + "tf.compat.v1.random.poisson": false, + 
"tf.compat.v1.random.set_global_generator": false, + "tf.compat.v1.random.set_random_seed": false, + "tf.compat.v1.random.shuffle": false, + "tf.compat.v1.random.stateless_binomial": false, + "tf.compat.v1.random.stateless_categorical": false, + "tf.compat.v1.random.stateless_gamma": false, + "tf.compat.v1.random.stateless_multinomial": false, + "tf.compat.v1.random.stateless_normal": false, + "tf.compat.v1.random.stateless_poisson": false, + "tf.compat.v1.random.stateless_truncated_normal": false, + "tf.compat.v1.random.stateless_uniform": false, + "tf.compat.v1.random.truncated_normal": false, + "tf.compat.v1.random.uniform": false, + "tf.compat.v1.random.uniform_candidate_sampler": false, + "tf.compat.v1.random_crop": false, + "tf.compat.v1.random_gamma": false, + "tf.compat.v1.random_normal": false, + "tf.compat.v1.random_normal_initializer": false, + "tf.compat.v1.random_normal_initializer.__call__": true, + "tf.compat.v1.random_normal_initializer.__eq__": true, + "tf.compat.v1.random_normal_initializer.__ge__": true, + "tf.compat.v1.random_normal_initializer.__gt__": true, + "tf.compat.v1.random_normal_initializer.__init__": true, + "tf.compat.v1.random_normal_initializer.__le__": true, + "tf.compat.v1.random_normal_initializer.__lt__": true, + "tf.compat.v1.random_normal_initializer.__ne__": true, + "tf.compat.v1.random_normal_initializer.__new__": true, + "tf.compat.v1.random_normal_initializer.from_config": true, + "tf.compat.v1.random_normal_initializer.get_config": true, + "tf.compat.v1.random_poisson": false, + "tf.compat.v1.random_shuffle": false, + "tf.compat.v1.random_uniform": false, + "tf.compat.v1.random_uniform_initializer": false, + "tf.compat.v1.random_uniform_initializer.__call__": true, + "tf.compat.v1.random_uniform_initializer.__eq__": true, + "tf.compat.v1.random_uniform_initializer.__ge__": true, + "tf.compat.v1.random_uniform_initializer.__gt__": true, + "tf.compat.v1.random_uniform_initializer.__init__": true, + "tf.compat.v1.random_uniform_initializer.__le__": true, + "tf.compat.v1.random_uniform_initializer.__lt__": true, + "tf.compat.v1.random_uniform_initializer.__ne__": true, + "tf.compat.v1.random_uniform_initializer.__new__": true, + "tf.compat.v1.random_uniform_initializer.from_config": true, + "tf.compat.v1.random_uniform_initializer.get_config": true, + "tf.compat.v1.range": false, + "tf.compat.v1.rank": false, + "tf.compat.v1.raw_ops": false, + "tf.compat.v1.raw_ops.Abort": false, + "tf.compat.v1.raw_ops.Abs": false, + "tf.compat.v1.raw_ops.AccumulateNV2": false, + "tf.compat.v1.raw_ops.AccumulatorApplyGradient": false, + "tf.compat.v1.raw_ops.AccumulatorNumAccumulated": false, + "tf.compat.v1.raw_ops.AccumulatorSetGlobalStep": false, + "tf.compat.v1.raw_ops.AccumulatorTakeGradient": false, + "tf.compat.v1.raw_ops.Acos": false, + "tf.compat.v1.raw_ops.Acosh": false, + "tf.compat.v1.raw_ops.Add": false, + "tf.compat.v1.raw_ops.AddManySparseToTensorsMap": false, + "tf.compat.v1.raw_ops.AddN": false, + "tf.compat.v1.raw_ops.AddSparseToTensorsMap": false, + "tf.compat.v1.raw_ops.AddV2": false, + "tf.compat.v1.raw_ops.AdjustContrast": false, + "tf.compat.v1.raw_ops.AdjustContrastv2": false, + "tf.compat.v1.raw_ops.AdjustHue": false, + "tf.compat.v1.raw_ops.AdjustSaturation": false, + "tf.compat.v1.raw_ops.All": false, + "tf.compat.v1.raw_ops.AllCandidateSampler": false, + "tf.compat.v1.raw_ops.AllToAll": false, + "tf.compat.v1.raw_ops.Angle": false, + "tf.compat.v1.raw_ops.AnonymousIterator": false, + "tf.compat.v1.raw_ops.AnonymousIteratorV2": false, + 
"tf.compat.v1.raw_ops.AnonymousMemoryCache": false, + "tf.compat.v1.raw_ops.AnonymousMultiDeviceIterator": false, + "tf.compat.v1.raw_ops.AnonymousRandomSeedGenerator": false, + "tf.compat.v1.raw_ops.Any": false, + "tf.compat.v1.raw_ops.ApplyAdaMax": false, + "tf.compat.v1.raw_ops.ApplyAdadelta": false, + "tf.compat.v1.raw_ops.ApplyAdagrad": false, + "tf.compat.v1.raw_ops.ApplyAdagradDA": false, + "tf.compat.v1.raw_ops.ApplyAdagradV2": false, + "tf.compat.v1.raw_ops.ApplyAdam": false, + "tf.compat.v1.raw_ops.ApplyAddSign": false, + "tf.compat.v1.raw_ops.ApplyCenteredRMSProp": false, + "tf.compat.v1.raw_ops.ApplyFtrl": false, + "tf.compat.v1.raw_ops.ApplyFtrlV2": false, + "tf.compat.v1.raw_ops.ApplyGradientDescent": false, + "tf.compat.v1.raw_ops.ApplyMomentum": false, + "tf.compat.v1.raw_ops.ApplyPowerSign": false, + "tf.compat.v1.raw_ops.ApplyProximalAdagrad": false, + "tf.compat.v1.raw_ops.ApplyProximalGradientDescent": false, + "tf.compat.v1.raw_ops.ApplyRMSProp": false, + "tf.compat.v1.raw_ops.ApproximateEqual": false, + "tf.compat.v1.raw_ops.ArgMax": false, + "tf.compat.v1.raw_ops.ArgMin": false, + "tf.compat.v1.raw_ops.AsString": false, + "tf.compat.v1.raw_ops.Asin": false, + "tf.compat.v1.raw_ops.Asinh": false, + "tf.compat.v1.raw_ops.Assert": false, + "tf.compat.v1.raw_ops.AssertCardinalityDataset": false, + "tf.compat.v1.raw_ops.AssertNextDataset": false, + "tf.compat.v1.raw_ops.Assign": false, + "tf.compat.v1.raw_ops.AssignAdd": false, + "tf.compat.v1.raw_ops.AssignAddVariableOp": false, + "tf.compat.v1.raw_ops.AssignSub": false, + "tf.compat.v1.raw_ops.AssignSubVariableOp": false, + "tf.compat.v1.raw_ops.AssignVariableOp": false, + "tf.compat.v1.raw_ops.Atan": false, + "tf.compat.v1.raw_ops.Atan2": false, + "tf.compat.v1.raw_ops.Atanh": false, + "tf.compat.v1.raw_ops.AudioSpectrogram": false, + "tf.compat.v1.raw_ops.AudioSummary": false, + "tf.compat.v1.raw_ops.AudioSummaryV2": false, + "tf.compat.v1.raw_ops.AutoShardDataset": false, + "tf.compat.v1.raw_ops.AvgPool": false, + "tf.compat.v1.raw_ops.AvgPool3D": false, + "tf.compat.v1.raw_ops.AvgPool3DGrad": false, + "tf.compat.v1.raw_ops.AvgPoolGrad": false, + "tf.compat.v1.raw_ops.Barrier": false, + "tf.compat.v1.raw_ops.BarrierClose": false, + "tf.compat.v1.raw_ops.BarrierIncompleteSize": false, + "tf.compat.v1.raw_ops.BarrierInsertMany": false, + "tf.compat.v1.raw_ops.BarrierReadySize": false, + "tf.compat.v1.raw_ops.BarrierTakeMany": false, + "tf.compat.v1.raw_ops.Batch": false, + "tf.compat.v1.raw_ops.BatchCholesky": false, + "tf.compat.v1.raw_ops.BatchCholeskyGrad": false, + "tf.compat.v1.raw_ops.BatchDataset": false, + "tf.compat.v1.raw_ops.BatchDatasetV2": false, + "tf.compat.v1.raw_ops.BatchFFT": false, + "tf.compat.v1.raw_ops.BatchFFT2D": false, + "tf.compat.v1.raw_ops.BatchFFT3D": false, + "tf.compat.v1.raw_ops.BatchFunction": false, + "tf.compat.v1.raw_ops.BatchIFFT": false, + "tf.compat.v1.raw_ops.BatchIFFT2D": false, + "tf.compat.v1.raw_ops.BatchIFFT3D": false, + "tf.compat.v1.raw_ops.BatchMatMul": false, + "tf.compat.v1.raw_ops.BatchMatMulV2": false, + "tf.compat.v1.raw_ops.BatchMatrixBandPart": false, + "tf.compat.v1.raw_ops.BatchMatrixDeterminant": false, + "tf.compat.v1.raw_ops.BatchMatrixDiag": false, + "tf.compat.v1.raw_ops.BatchMatrixDiagPart": false, + "tf.compat.v1.raw_ops.BatchMatrixInverse": false, + "tf.compat.v1.raw_ops.BatchMatrixSetDiag": false, + "tf.compat.v1.raw_ops.BatchMatrixSolve": false, + "tf.compat.v1.raw_ops.BatchMatrixSolveLs": false, + "tf.compat.v1.raw_ops.BatchMatrixTriangularSolve": 
false, + "tf.compat.v1.raw_ops.BatchNormWithGlobalNormalization": false, + "tf.compat.v1.raw_ops.BatchNormWithGlobalNormalizationGrad": false, + "tf.compat.v1.raw_ops.BatchSelfAdjointEig": false, + "tf.compat.v1.raw_ops.BatchSelfAdjointEigV2": false, + "tf.compat.v1.raw_ops.BatchSvd": false, + "tf.compat.v1.raw_ops.BatchToSpace": false, + "tf.compat.v1.raw_ops.BatchToSpaceND": false, + "tf.compat.v1.raw_ops.BesselI0e": false, + "tf.compat.v1.raw_ops.BesselI1e": false, + "tf.compat.v1.raw_ops.Betainc": false, + "tf.compat.v1.raw_ops.BiasAdd": false, + "tf.compat.v1.raw_ops.BiasAddGrad": false, + "tf.compat.v1.raw_ops.BiasAddV1": false, + "tf.compat.v1.raw_ops.Bincount": false, + "tf.compat.v1.raw_ops.Bitcast": false, + "tf.compat.v1.raw_ops.BitwiseAnd": false, + "tf.compat.v1.raw_ops.BitwiseOr": false, + "tf.compat.v1.raw_ops.BitwiseXor": false, + "tf.compat.v1.raw_ops.BlockLSTM": false, + "tf.compat.v1.raw_ops.BlockLSTMGrad": false, + "tf.compat.v1.raw_ops.BlockLSTMGradV2": false, + "tf.compat.v1.raw_ops.BlockLSTMV2": false, + "tf.compat.v1.raw_ops.BoostedTreesAggregateStats": false, + "tf.compat.v1.raw_ops.BoostedTreesBucketize": false, + "tf.compat.v1.raw_ops.BoostedTreesCalculateBestFeatureSplit": false, + "tf.compat.v1.raw_ops.BoostedTreesCalculateBestFeatureSplitV2": false, + "tf.compat.v1.raw_ops.BoostedTreesCalculateBestGainsPerFeature": false, + "tf.compat.v1.raw_ops.BoostedTreesCenterBias": false, + "tf.compat.v1.raw_ops.BoostedTreesCreateEnsemble": false, + "tf.compat.v1.raw_ops.BoostedTreesCreateQuantileStreamResource": false, + "tf.compat.v1.raw_ops.BoostedTreesDeserializeEnsemble": false, + "tf.compat.v1.raw_ops.BoostedTreesEnsembleResourceHandleOp": false, + "tf.compat.v1.raw_ops.BoostedTreesExampleDebugOutputs": false, + "tf.compat.v1.raw_ops.BoostedTreesFlushQuantileSummaries": false, + "tf.compat.v1.raw_ops.BoostedTreesGetEnsembleStates": false, + "tf.compat.v1.raw_ops.BoostedTreesMakeQuantileSummaries": false, + "tf.compat.v1.raw_ops.BoostedTreesMakeStatsSummary": false, + "tf.compat.v1.raw_ops.BoostedTreesPredict": false, + "tf.compat.v1.raw_ops.BoostedTreesQuantileStreamResourceAddSummaries": false, + "tf.compat.v1.raw_ops.BoostedTreesQuantileStreamResourceDeserialize": false, + "tf.compat.v1.raw_ops.BoostedTreesQuantileStreamResourceFlush": false, + "tf.compat.v1.raw_ops.BoostedTreesQuantileStreamResourceGetBucketBoundaries": false, + "tf.compat.v1.raw_ops.BoostedTreesQuantileStreamResourceHandleOp": false, + "tf.compat.v1.raw_ops.BoostedTreesSerializeEnsemble": false, + "tf.compat.v1.raw_ops.BoostedTreesSparseAggregateStats": false, + "tf.compat.v1.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit": false, + "tf.compat.v1.raw_ops.BoostedTreesTrainingPredict": false, + "tf.compat.v1.raw_ops.BoostedTreesUpdateEnsemble": false, + "tf.compat.v1.raw_ops.BoostedTreesUpdateEnsembleV2": false, + "tf.compat.v1.raw_ops.BroadcastArgs": false, + "tf.compat.v1.raw_ops.BroadcastGradientArgs": false, + "tf.compat.v1.raw_ops.BroadcastTo": false, + "tf.compat.v1.raw_ops.Bucketize": false, + "tf.compat.v1.raw_ops.BytesProducedStatsDataset": false, + "tf.compat.v1.raw_ops.CSRSparseMatrixComponents": false, + "tf.compat.v1.raw_ops.CSRSparseMatrixToDense": false, + "tf.compat.v1.raw_ops.CSRSparseMatrixToSparseTensor": false, + "tf.compat.v1.raw_ops.CSVDataset": false, + "tf.compat.v1.raw_ops.CTCBeamSearchDecoder": false, + "tf.compat.v1.raw_ops.CTCGreedyDecoder": false, + "tf.compat.v1.raw_ops.CTCLoss": false, + "tf.compat.v1.raw_ops.CTCLossV2": false, + 
"tf.compat.v1.raw_ops.CacheDataset": false, + "tf.compat.v1.raw_ops.CacheDatasetV2": false, + "tf.compat.v1.raw_ops.Case": false, + "tf.compat.v1.raw_ops.Cast": false, + "tf.compat.v1.raw_ops.Ceil": false, + "tf.compat.v1.raw_ops.CheckNumerics": false, + "tf.compat.v1.raw_ops.CheckNumericsV2": false, + "tf.compat.v1.raw_ops.Cholesky": false, + "tf.compat.v1.raw_ops.CholeskyGrad": false, + "tf.compat.v1.raw_ops.ChooseFastestBranchDataset": false, + "tf.compat.v1.raw_ops.ChooseFastestDataset": false, + "tf.compat.v1.raw_ops.ClipByValue": false, + "tf.compat.v1.raw_ops.CloseSummaryWriter": false, + "tf.compat.v1.raw_ops.CollectiveBcastRecv": false, + "tf.compat.v1.raw_ops.CollectiveBcastSend": false, + "tf.compat.v1.raw_ops.CollectiveGather": false, + "tf.compat.v1.raw_ops.CollectivePermute": false, + "tf.compat.v1.raw_ops.CollectiveReduce": false, + "tf.compat.v1.raw_ops.CombinedNonMaxSuppression": false, + "tf.compat.v1.raw_ops.CompareAndBitpack": false, + "tf.compat.v1.raw_ops.Complex": false, + "tf.compat.v1.raw_ops.ComplexAbs": false, + "tf.compat.v1.raw_ops.ComputeAccidentalHits": false, + "tf.compat.v1.raw_ops.Concat": false, + "tf.compat.v1.raw_ops.ConcatOffset": false, + "tf.compat.v1.raw_ops.ConcatV2": false, + "tf.compat.v1.raw_ops.ConcatenateDataset": false, + "tf.compat.v1.raw_ops.ConditionalAccumulator": false, + "tf.compat.v1.raw_ops.ConfigureDistributedTPU": false, + "tf.compat.v1.raw_ops.ConfigureTPUEmbedding": false, + "tf.compat.v1.raw_ops.Conj": false, + "tf.compat.v1.raw_ops.ConjugateTranspose": false, + "tf.compat.v1.raw_ops.Const": false, + "tf.compat.v1.raw_ops.ConsumeMutexLock": false, + "tf.compat.v1.raw_ops.ControlTrigger": false, + "tf.compat.v1.raw_ops.Conv2D": false, + "tf.compat.v1.raw_ops.Conv2DBackpropFilter": false, + "tf.compat.v1.raw_ops.Conv2DBackpropInput": false, + "tf.compat.v1.raw_ops.Conv3D": false, + "tf.compat.v1.raw_ops.Conv3DBackpropFilter": false, + "tf.compat.v1.raw_ops.Conv3DBackpropFilterV2": false, + "tf.compat.v1.raw_ops.Conv3DBackpropInput": false, + "tf.compat.v1.raw_ops.Conv3DBackpropInputV2": false, + "tf.compat.v1.raw_ops.Copy": false, + "tf.compat.v1.raw_ops.CopyHost": false, + "tf.compat.v1.raw_ops.Cos": false, + "tf.compat.v1.raw_ops.Cosh": false, + "tf.compat.v1.raw_ops.CountUpTo": false, + "tf.compat.v1.raw_ops.CreateSummaryDbWriter": false, + "tf.compat.v1.raw_ops.CreateSummaryFileWriter": false, + "tf.compat.v1.raw_ops.CropAndResize": false, + "tf.compat.v1.raw_ops.CropAndResizeGradBoxes": false, + "tf.compat.v1.raw_ops.CropAndResizeGradImage": false, + "tf.compat.v1.raw_ops.Cross": false, + "tf.compat.v1.raw_ops.CrossReplicaSum": false, + "tf.compat.v1.raw_ops.CudnnRNN": false, + "tf.compat.v1.raw_ops.CudnnRNNBackprop": false, + "tf.compat.v1.raw_ops.CudnnRNNBackpropV2": false, + "tf.compat.v1.raw_ops.CudnnRNNBackpropV3": false, + "tf.compat.v1.raw_ops.CudnnRNNCanonicalToParams": false, + "tf.compat.v1.raw_ops.CudnnRNNCanonicalToParamsV2": false, + "tf.compat.v1.raw_ops.CudnnRNNParamsSize": false, + "tf.compat.v1.raw_ops.CudnnRNNParamsToCanonical": false, + "tf.compat.v1.raw_ops.CudnnRNNParamsToCanonicalV2": false, + "tf.compat.v1.raw_ops.CudnnRNNV2": false, + "tf.compat.v1.raw_ops.CudnnRNNV3": false, + "tf.compat.v1.raw_ops.Cumprod": false, + "tf.compat.v1.raw_ops.Cumsum": false, + "tf.compat.v1.raw_ops.CumulativeLogsumexp": false, + "tf.compat.v1.raw_ops.DataFormatDimMap": false, + "tf.compat.v1.raw_ops.DataFormatVecPermute": false, + "tf.compat.v1.raw_ops.DatasetCardinality": false, + "tf.compat.v1.raw_ops.DatasetFromGraph": 
false, + "tf.compat.v1.raw_ops.DatasetToGraph": false, + "tf.compat.v1.raw_ops.DatasetToGraphV2": false, + "tf.compat.v1.raw_ops.DatasetToSingleElement": false, + "tf.compat.v1.raw_ops.DatasetToTFRecord": false, + "tf.compat.v1.raw_ops.Dawsn": false, + "tf.compat.v1.raw_ops.DebugGradientIdentity": false, + "tf.compat.v1.raw_ops.DebugGradientRefIdentity": false, + "tf.compat.v1.raw_ops.DebugIdentity": false, + "tf.compat.v1.raw_ops.DebugIdentityV2": false, + "tf.compat.v1.raw_ops.DebugNanCount": false, + "tf.compat.v1.raw_ops.DebugNumericSummary": false, + "tf.compat.v1.raw_ops.DebugNumericSummaryV2": false, + "tf.compat.v1.raw_ops.DecodeAndCropJpeg": false, + "tf.compat.v1.raw_ops.DecodeBase64": false, + "tf.compat.v1.raw_ops.DecodeBmp": false, + "tf.compat.v1.raw_ops.DecodeCSV": false, + "tf.compat.v1.raw_ops.DecodeCompressed": false, + "tf.compat.v1.raw_ops.DecodeGif": false, + "tf.compat.v1.raw_ops.DecodeJSONExample": false, + "tf.compat.v1.raw_ops.DecodeJpeg": false, + "tf.compat.v1.raw_ops.DecodePaddedRaw": false, + "tf.compat.v1.raw_ops.DecodePng": false, + "tf.compat.v1.raw_ops.DecodeProtoV2": false, + "tf.compat.v1.raw_ops.DecodeRaw": false, + "tf.compat.v1.raw_ops.DecodeWav": false, + "tf.compat.v1.raw_ops.DeepCopy": false, + "tf.compat.v1.raw_ops.DeleteIterator": false, + "tf.compat.v1.raw_ops.DeleteMemoryCache": false, + "tf.compat.v1.raw_ops.DeleteMultiDeviceIterator": false, + "tf.compat.v1.raw_ops.DeleteRandomSeedGenerator": false, + "tf.compat.v1.raw_ops.DeleteSessionTensor": false, + "tf.compat.v1.raw_ops.DenseToCSRSparseMatrix": false, + "tf.compat.v1.raw_ops.DenseToDenseSetOperation": false, + "tf.compat.v1.raw_ops.DenseToSparseBatchDataset": false, + "tf.compat.v1.raw_ops.DenseToSparseSetOperation": false, + "tf.compat.v1.raw_ops.DepthToSpace": false, + "tf.compat.v1.raw_ops.DepthwiseConv2dNative": false, + "tf.compat.v1.raw_ops.DepthwiseConv2dNativeBackpropFilter": false, + "tf.compat.v1.raw_ops.DepthwiseConv2dNativeBackpropInput": false, + "tf.compat.v1.raw_ops.Dequantize": false, + "tf.compat.v1.raw_ops.DeserializeIterator": false, + "tf.compat.v1.raw_ops.DeserializeManySparse": false, + "tf.compat.v1.raw_ops.DeserializeSparse": false, + "tf.compat.v1.raw_ops.DestroyResourceOp": false, + "tf.compat.v1.raw_ops.DestroyTemporaryVariable": false, + "tf.compat.v1.raw_ops.Diag": false, + "tf.compat.v1.raw_ops.DiagPart": false, + "tf.compat.v1.raw_ops.Digamma": false, + "tf.compat.v1.raw_ops.Dilation2D": false, + "tf.compat.v1.raw_ops.Dilation2DBackpropFilter": false, + "tf.compat.v1.raw_ops.Dilation2DBackpropInput": false, + "tf.compat.v1.raw_ops.DirectedInterleaveDataset": false, + "tf.compat.v1.raw_ops.Div": false, + "tf.compat.v1.raw_ops.DivNoNan": false, + "tf.compat.v1.raw_ops.DrawBoundingBoxes": false, + "tf.compat.v1.raw_ops.DrawBoundingBoxesV2": false, + "tf.compat.v1.raw_ops.DummyMemoryCache": false, + "tf.compat.v1.raw_ops.DynamicPartition": false, + "tf.compat.v1.raw_ops.DynamicStitch": false, + "tf.compat.v1.raw_ops.EagerPyFunc": false, + "tf.compat.v1.raw_ops.EditDistance": false, + "tf.compat.v1.raw_ops.Eig": false, + "tf.compat.v1.raw_ops.Einsum": false, + "tf.compat.v1.raw_ops.Elu": false, + "tf.compat.v1.raw_ops.EluGrad": false, + "tf.compat.v1.raw_ops.Empty": false, + "tf.compat.v1.raw_ops.EmptyTensorList": false, + "tf.compat.v1.raw_ops.EncodeBase64": false, + "tf.compat.v1.raw_ops.EncodeJpeg": false, + "tf.compat.v1.raw_ops.EncodeJpegVariableQuality": false, + "tf.compat.v1.raw_ops.EncodePng": false, + "tf.compat.v1.raw_ops.EncodeProto": false, + 
"tf.compat.v1.raw_ops.EncodeWav": false, + "tf.compat.v1.raw_ops.EnqueueTPUEmbeddingIntegerBatch": false, + "tf.compat.v1.raw_ops.EnqueueTPUEmbeddingSparseBatch": false, + "tf.compat.v1.raw_ops.EnqueueTPUEmbeddingSparseTensorBatch": false, + "tf.compat.v1.raw_ops.EnsureShape": false, + "tf.compat.v1.raw_ops.Enter": false, + "tf.compat.v1.raw_ops.Equal": false, + "tf.compat.v1.raw_ops.Erf": false, + "tf.compat.v1.raw_ops.Erfc": false, + "tf.compat.v1.raw_ops.Erfinv": false, + "tf.compat.v1.raw_ops.EuclideanNorm": false, + "tf.compat.v1.raw_ops.Exit": false, + "tf.compat.v1.raw_ops.Exp": false, + "tf.compat.v1.raw_ops.ExpandDims": false, + "tf.compat.v1.raw_ops.ExperimentalAssertNextDataset": false, + "tf.compat.v1.raw_ops.ExperimentalAutoShardDataset": false, + "tf.compat.v1.raw_ops.ExperimentalBytesProducedStatsDataset": false, + "tf.compat.v1.raw_ops.ExperimentalCSVDataset": false, + "tf.compat.v1.raw_ops.ExperimentalChooseFastestDataset": false, + "tf.compat.v1.raw_ops.ExperimentalDatasetCardinality": false, + "tf.compat.v1.raw_ops.ExperimentalDatasetToTFRecord": false, + "tf.compat.v1.raw_ops.ExperimentalDenseToSparseBatchDataset": false, + "tf.compat.v1.raw_ops.ExperimentalDirectedInterleaveDataset": false, + "tf.compat.v1.raw_ops.ExperimentalGroupByReducerDataset": false, + "tf.compat.v1.raw_ops.ExperimentalGroupByWindowDataset": false, + "tf.compat.v1.raw_ops.ExperimentalIgnoreErrorsDataset": false, + "tf.compat.v1.raw_ops.ExperimentalIteratorGetDevice": false, + "tf.compat.v1.raw_ops.ExperimentalLMDBDataset": false, + "tf.compat.v1.raw_ops.ExperimentalLatencyStatsDataset": false, + "tf.compat.v1.raw_ops.ExperimentalMapAndBatchDataset": false, + "tf.compat.v1.raw_ops.ExperimentalMapDataset": false, + "tf.compat.v1.raw_ops.ExperimentalMatchingFilesDataset": false, + "tf.compat.v1.raw_ops.ExperimentalMaxIntraOpParallelismDataset": false, + "tf.compat.v1.raw_ops.ExperimentalNonSerializableDataset": false, + "tf.compat.v1.raw_ops.ExperimentalParallelInterleaveDataset": false, + "tf.compat.v1.raw_ops.ExperimentalParseExampleDataset": false, + "tf.compat.v1.raw_ops.ExperimentalPrivateThreadPoolDataset": false, + "tf.compat.v1.raw_ops.ExperimentalRandomDataset": false, + "tf.compat.v1.raw_ops.ExperimentalRebatchDataset": false, + "tf.compat.v1.raw_ops.ExperimentalScanDataset": false, + "tf.compat.v1.raw_ops.ExperimentalSetStatsAggregatorDataset": false, + "tf.compat.v1.raw_ops.ExperimentalSleepDataset": false, + "tf.compat.v1.raw_ops.ExperimentalSlidingWindowDataset": false, + "tf.compat.v1.raw_ops.ExperimentalSqlDataset": false, + "tf.compat.v1.raw_ops.ExperimentalStatsAggregatorHandle": false, + "tf.compat.v1.raw_ops.ExperimentalStatsAggregatorSummary": false, + "tf.compat.v1.raw_ops.ExperimentalTakeWhileDataset": false, + "tf.compat.v1.raw_ops.ExperimentalThreadPoolDataset": false, + "tf.compat.v1.raw_ops.ExperimentalThreadPoolHandle": false, + "tf.compat.v1.raw_ops.ExperimentalUnbatchDataset": false, + "tf.compat.v1.raw_ops.ExperimentalUniqueDataset": false, + "tf.compat.v1.raw_ops.Expint": false, + "tf.compat.v1.raw_ops.Expm1": false, + "tf.compat.v1.raw_ops.ExtractGlimpse": false, + "tf.compat.v1.raw_ops.ExtractImagePatches": false, + "tf.compat.v1.raw_ops.ExtractJpegShape": false, + "tf.compat.v1.raw_ops.ExtractVolumePatches": false, + "tf.compat.v1.raw_ops.FFT": false, + "tf.compat.v1.raw_ops.FFT2D": false, + "tf.compat.v1.raw_ops.FFT3D": false, + "tf.compat.v1.raw_ops.FIFOQueue": false, + "tf.compat.v1.raw_ops.FIFOQueueV2": false, + "tf.compat.v1.raw_ops.Fact": false, + 
"tf.compat.v1.raw_ops.FakeParam": false, + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxArgs": false, + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxArgsGradient": false, + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxVars": false, + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxVarsGradient": false, + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxVarsPerChannel": false, + "tf.compat.v1.raw_ops.FakeQuantWithMinMaxVarsPerChannelGradient": false, + "tf.compat.v1.raw_ops.FakeQueue": false, + "tf.compat.v1.raw_ops.Fill": false, + "tf.compat.v1.raw_ops.FilterByLastComponentDataset": false, + "tf.compat.v1.raw_ops.FilterDataset": false, + "tf.compat.v1.raw_ops.Fingerprint": false, + "tf.compat.v1.raw_ops.FixedLengthRecordDataset": false, + "tf.compat.v1.raw_ops.FixedLengthRecordDatasetV2": false, + "tf.compat.v1.raw_ops.FixedLengthRecordReader": false, + "tf.compat.v1.raw_ops.FixedLengthRecordReaderV2": false, + "tf.compat.v1.raw_ops.FixedUnigramCandidateSampler": false, + "tf.compat.v1.raw_ops.FlatMapDataset": false, + "tf.compat.v1.raw_ops.Floor": false, + "tf.compat.v1.raw_ops.FloorDiv": false, + "tf.compat.v1.raw_ops.FloorMod": false, + "tf.compat.v1.raw_ops.FlushSummaryWriter": false, + "tf.compat.v1.raw_ops.For": false, + "tf.compat.v1.raw_ops.FractionalAvgPool": false, + "tf.compat.v1.raw_ops.FractionalAvgPoolGrad": false, + "tf.compat.v1.raw_ops.FractionalMaxPool": false, + "tf.compat.v1.raw_ops.FractionalMaxPoolGrad": false, + "tf.compat.v1.raw_ops.FresnelCos": false, + "tf.compat.v1.raw_ops.FresnelSin": false, + "tf.compat.v1.raw_ops.FusedBatchNorm": false, + "tf.compat.v1.raw_ops.FusedBatchNormGrad": false, + "tf.compat.v1.raw_ops.FusedBatchNormGradV2": false, + "tf.compat.v1.raw_ops.FusedBatchNormGradV3": false, + "tf.compat.v1.raw_ops.FusedBatchNormV2": false, + "tf.compat.v1.raw_ops.FusedBatchNormV3": false, + "tf.compat.v1.raw_ops.FusedPadConv2D": false, + "tf.compat.v1.raw_ops.FusedResizeAndPadConv2D": false, + "tf.compat.v1.raw_ops.GRUBlockCell": false, + "tf.compat.v1.raw_ops.GRUBlockCellGrad": false, + "tf.compat.v1.raw_ops.Gather": false, + "tf.compat.v1.raw_ops.GatherNd": false, + "tf.compat.v1.raw_ops.GatherV2": false, + "tf.compat.v1.raw_ops.GenerateBoundingBoxProposals": false, + "tf.compat.v1.raw_ops.GenerateVocabRemapping": false, + "tf.compat.v1.raw_ops.GeneratorDataset": false, + "tf.compat.v1.raw_ops.GetSessionHandle": false, + "tf.compat.v1.raw_ops.GetSessionHandleV2": false, + "tf.compat.v1.raw_ops.GetSessionTensor": false, + "tf.compat.v1.raw_ops.Greater": false, + "tf.compat.v1.raw_ops.GreaterEqual": false, + "tf.compat.v1.raw_ops.GroupByReducerDataset": false, + "tf.compat.v1.raw_ops.GroupByWindowDataset": false, + "tf.compat.v1.raw_ops.GuaranteeConst": false, + "tf.compat.v1.raw_ops.HSVToRGB": false, + "tf.compat.v1.raw_ops.HashTable": false, + "tf.compat.v1.raw_ops.HashTableV2": false, + "tf.compat.v1.raw_ops.HistogramFixedWidth": false, + "tf.compat.v1.raw_ops.HistogramSummary": false, + "tf.compat.v1.raw_ops.IFFT": false, + "tf.compat.v1.raw_ops.IFFT2D": false, + "tf.compat.v1.raw_ops.IFFT3D": false, + "tf.compat.v1.raw_ops.IRFFT": false, + "tf.compat.v1.raw_ops.IRFFT2D": false, + "tf.compat.v1.raw_ops.IRFFT3D": false, + "tf.compat.v1.raw_ops.Identity": false, + "tf.compat.v1.raw_ops.IdentityN": false, + "tf.compat.v1.raw_ops.IdentityReader": false, + "tf.compat.v1.raw_ops.IdentityReaderV2": false, + "tf.compat.v1.raw_ops.If": false, + "tf.compat.v1.raw_ops.Igamma": false, + "tf.compat.v1.raw_ops.IgammaGradA": false, + "tf.compat.v1.raw_ops.Igammac": false, + 
"tf.compat.v1.raw_ops.IgnoreErrorsDataset": false, + "tf.compat.v1.raw_ops.Imag": false, + "tf.compat.v1.raw_ops.ImageProjectiveTransformV2": false, + "tf.compat.v1.raw_ops.ImageSummary": false, + "tf.compat.v1.raw_ops.ImmutableConst": false, + "tf.compat.v1.raw_ops.ImportEvent": false, + "tf.compat.v1.raw_ops.InTopK": false, + "tf.compat.v1.raw_ops.InTopKV2": false, + "tf.compat.v1.raw_ops.InfeedDequeue": false, + "tf.compat.v1.raw_ops.InfeedDequeueTuple": false, + "tf.compat.v1.raw_ops.InfeedEnqueue": false, + "tf.compat.v1.raw_ops.InfeedEnqueuePrelinearizedBuffer": false, + "tf.compat.v1.raw_ops.InfeedEnqueueTuple": false, + "tf.compat.v1.raw_ops.InitializeTable": false, + "tf.compat.v1.raw_ops.InitializeTableFromTextFile": false, + "tf.compat.v1.raw_ops.InitializeTableFromTextFileV2": false, + "tf.compat.v1.raw_ops.InitializeTableV2": false, + "tf.compat.v1.raw_ops.InplaceAdd": false, + "tf.compat.v1.raw_ops.InplaceSub": false, + "tf.compat.v1.raw_ops.InplaceUpdate": false, + "tf.compat.v1.raw_ops.InterleaveDataset": false, + "tf.compat.v1.raw_ops.Inv": false, + "tf.compat.v1.raw_ops.InvGrad": false, + "tf.compat.v1.raw_ops.Invert": false, + "tf.compat.v1.raw_ops.InvertPermutation": false, + "tf.compat.v1.raw_ops.IsBoostedTreesEnsembleInitialized": false, + "tf.compat.v1.raw_ops.IsBoostedTreesQuantileStreamResourceInitialized": false, + "tf.compat.v1.raw_ops.IsFinite": false, + "tf.compat.v1.raw_ops.IsInf": false, + "tf.compat.v1.raw_ops.IsNan": false, + "tf.compat.v1.raw_ops.IsVariableInitialized": false, + "tf.compat.v1.raw_ops.Iterator": false, + "tf.compat.v1.raw_ops.IteratorFromStringHandle": false, + "tf.compat.v1.raw_ops.IteratorFromStringHandleV2": false, + "tf.compat.v1.raw_ops.IteratorGetDevice": false, + "tf.compat.v1.raw_ops.IteratorGetNext": false, + "tf.compat.v1.raw_ops.IteratorGetNextAsOptional": false, + "tf.compat.v1.raw_ops.IteratorGetNextSync": false, + "tf.compat.v1.raw_ops.IteratorToStringHandle": false, + "tf.compat.v1.raw_ops.IteratorV2": false, + "tf.compat.v1.raw_ops.L2Loss": false, + "tf.compat.v1.raw_ops.LMDBDataset": false, + "tf.compat.v1.raw_ops.LMDBReader": false, + "tf.compat.v1.raw_ops.LRN": false, + "tf.compat.v1.raw_ops.LRNGrad": false, + "tf.compat.v1.raw_ops.LSTMBlockCell": false, + "tf.compat.v1.raw_ops.LSTMBlockCellGrad": false, + "tf.compat.v1.raw_ops.LatencyStatsDataset": false, + "tf.compat.v1.raw_ops.LeakyRelu": false, + "tf.compat.v1.raw_ops.LeakyReluGrad": false, + "tf.compat.v1.raw_ops.LearnedUnigramCandidateSampler": false, + "tf.compat.v1.raw_ops.LeftShift": false, + "tf.compat.v1.raw_ops.LegacyParallelInterleaveDatasetV2": false, + "tf.compat.v1.raw_ops.Less": false, + "tf.compat.v1.raw_ops.LessEqual": false, + "tf.compat.v1.raw_ops.Lgamma": false, + "tf.compat.v1.raw_ops.LinSpace": false, + "tf.compat.v1.raw_ops.ListDiff": false, + "tf.compat.v1.raw_ops.LoadAndRemapMatrix": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingADAMParameters": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingADAMParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingAdadeltaParameters": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingAdadeltaParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingAdagradParameters": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingAdagradParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingCenteredRMSPropParameters": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingFTRLParameters": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingFTRLParametersGradAccumDebug": false, + 
"tf.compat.v1.raw_ops.LoadTPUEmbeddingMDLAdagradLightParameters": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingMomentumParameters": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingMomentumParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingProximalAdagradParameters": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingRMSPropParameters": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingRMSPropParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.LoadTPUEmbeddingStochasticGradientDescentParameters": false, + "tf.compat.v1.raw_ops.Log": false, + "tf.compat.v1.raw_ops.Log1p": false, + "tf.compat.v1.raw_ops.LogMatrixDeterminant": false, + "tf.compat.v1.raw_ops.LogSoftmax": false, + "tf.compat.v1.raw_ops.LogUniformCandidateSampler": false, + "tf.compat.v1.raw_ops.LogicalAnd": false, + "tf.compat.v1.raw_ops.LogicalNot": false, + "tf.compat.v1.raw_ops.LogicalOr": false, + "tf.compat.v1.raw_ops.LookupTableExport": false, + "tf.compat.v1.raw_ops.LookupTableExportV2": false, + "tf.compat.v1.raw_ops.LookupTableFind": false, + "tf.compat.v1.raw_ops.LookupTableFindV2": false, + "tf.compat.v1.raw_ops.LookupTableImport": false, + "tf.compat.v1.raw_ops.LookupTableImportV2": false, + "tf.compat.v1.raw_ops.LookupTableInsert": false, + "tf.compat.v1.raw_ops.LookupTableInsertV2": false, + "tf.compat.v1.raw_ops.LookupTableRemoveV2": false, + "tf.compat.v1.raw_ops.LookupTableSize": false, + "tf.compat.v1.raw_ops.LookupTableSizeV2": false, + "tf.compat.v1.raw_ops.LoopCond": false, + "tf.compat.v1.raw_ops.LowerBound": false, + "tf.compat.v1.raw_ops.Lu": false, + "tf.compat.v1.raw_ops.MakeIterator": false, + "tf.compat.v1.raw_ops.MapAndBatchDataset": false, + "tf.compat.v1.raw_ops.MapClear": false, + "tf.compat.v1.raw_ops.MapDataset": false, + "tf.compat.v1.raw_ops.MapDefun": false, + "tf.compat.v1.raw_ops.MapIncompleteSize": false, + "tf.compat.v1.raw_ops.MapPeek": false, + "tf.compat.v1.raw_ops.MapSize": false, + "tf.compat.v1.raw_ops.MapStage": false, + "tf.compat.v1.raw_ops.MapUnstage": false, + "tf.compat.v1.raw_ops.MapUnstageNoKey": false, + "tf.compat.v1.raw_ops.MatMul": false, + "tf.compat.v1.raw_ops.MatchingFiles": false, + "tf.compat.v1.raw_ops.MatchingFilesDataset": false, + "tf.compat.v1.raw_ops.MatrixBandPart": false, + "tf.compat.v1.raw_ops.MatrixDeterminant": false, + "tf.compat.v1.raw_ops.MatrixDiag": false, + "tf.compat.v1.raw_ops.MatrixDiagPart": false, + "tf.compat.v1.raw_ops.MatrixDiagPartV2": false, + "tf.compat.v1.raw_ops.MatrixDiagPartV3": false, + "tf.compat.v1.raw_ops.MatrixDiagV2": false, + "tf.compat.v1.raw_ops.MatrixDiagV3": false, + "tf.compat.v1.raw_ops.MatrixExponential": false, + "tf.compat.v1.raw_ops.MatrixInverse": false, + "tf.compat.v1.raw_ops.MatrixLogarithm": false, + "tf.compat.v1.raw_ops.MatrixSetDiag": false, + "tf.compat.v1.raw_ops.MatrixSetDiagV2": false, + "tf.compat.v1.raw_ops.MatrixSetDiagV3": false, + "tf.compat.v1.raw_ops.MatrixSolve": false, + "tf.compat.v1.raw_ops.MatrixSolveLs": false, + "tf.compat.v1.raw_ops.MatrixSquareRoot": false, + "tf.compat.v1.raw_ops.MatrixTriangularSolve": false, + "tf.compat.v1.raw_ops.Max": false, + "tf.compat.v1.raw_ops.MaxIntraOpParallelismDataset": false, + "tf.compat.v1.raw_ops.MaxPool": false, + "tf.compat.v1.raw_ops.MaxPool3D": false, + "tf.compat.v1.raw_ops.MaxPool3DGrad": false, + "tf.compat.v1.raw_ops.MaxPool3DGradGrad": false, + "tf.compat.v1.raw_ops.MaxPoolGrad": false, + "tf.compat.v1.raw_ops.MaxPoolGradGrad": false, 
+ "tf.compat.v1.raw_ops.MaxPoolGradGradV2": false, + "tf.compat.v1.raw_ops.MaxPoolGradGradWithArgmax": false, + "tf.compat.v1.raw_ops.MaxPoolGradV2": false, + "tf.compat.v1.raw_ops.MaxPoolGradWithArgmax": false, + "tf.compat.v1.raw_ops.MaxPoolV2": false, + "tf.compat.v1.raw_ops.MaxPoolWithArgmax": false, + "tf.compat.v1.raw_ops.Maximum": false, + "tf.compat.v1.raw_ops.Mean": false, + "tf.compat.v1.raw_ops.Merge": false, + "tf.compat.v1.raw_ops.MergeSummary": false, + "tf.compat.v1.raw_ops.MergeV2Checkpoints": false, + "tf.compat.v1.raw_ops.Mfcc": false, + "tf.compat.v1.raw_ops.Min": false, + "tf.compat.v1.raw_ops.Minimum": false, + "tf.compat.v1.raw_ops.MirrorPad": false, + "tf.compat.v1.raw_ops.MirrorPadGrad": false, + "tf.compat.v1.raw_ops.Mod": false, + "tf.compat.v1.raw_ops.ModelDataset": false, + "tf.compat.v1.raw_ops.Mul": false, + "tf.compat.v1.raw_ops.MulNoNan": false, + "tf.compat.v1.raw_ops.MultiDeviceIterator": false, + "tf.compat.v1.raw_ops.MultiDeviceIteratorFromStringHandle": false, + "tf.compat.v1.raw_ops.MultiDeviceIteratorGetNextFromShard": false, + "tf.compat.v1.raw_ops.MultiDeviceIteratorInit": false, + "tf.compat.v1.raw_ops.MultiDeviceIteratorToStringHandle": false, + "tf.compat.v1.raw_ops.Multinomial": false, + "tf.compat.v1.raw_ops.MutableDenseHashTable": false, + "tf.compat.v1.raw_ops.MutableDenseHashTableV2": false, + "tf.compat.v1.raw_ops.MutableHashTable": false, + "tf.compat.v1.raw_ops.MutableHashTableOfTensors": false, + "tf.compat.v1.raw_ops.MutableHashTableOfTensorsV2": false, + "tf.compat.v1.raw_ops.MutableHashTableV2": false, + "tf.compat.v1.raw_ops.MutexLock": false, + "tf.compat.v1.raw_ops.MutexV2": false, + "tf.compat.v1.raw_ops.NcclAllReduce": false, + "tf.compat.v1.raw_ops.NcclBroadcast": false, + "tf.compat.v1.raw_ops.NcclReduce": false, + "tf.compat.v1.raw_ops.Ndtri": false, + "tf.compat.v1.raw_ops.Neg": false, + "tf.compat.v1.raw_ops.NextAfter": false, + "tf.compat.v1.raw_ops.NextIteration": false, + "tf.compat.v1.raw_ops.NoOp": false, + "tf.compat.v1.raw_ops.NonDeterministicInts": false, + "tf.compat.v1.raw_ops.NonMaxSuppression": false, + "tf.compat.v1.raw_ops.NonMaxSuppressionV2": false, + "tf.compat.v1.raw_ops.NonMaxSuppressionV3": false, + "tf.compat.v1.raw_ops.NonMaxSuppressionV4": false, + "tf.compat.v1.raw_ops.NonMaxSuppressionV5": false, + "tf.compat.v1.raw_ops.NonMaxSuppressionWithOverlaps": false, + "tf.compat.v1.raw_ops.NonSerializableDataset": false, + "tf.compat.v1.raw_ops.NotEqual": false, + "tf.compat.v1.raw_ops.NthElement": false, + "tf.compat.v1.raw_ops.OneHot": false, + "tf.compat.v1.raw_ops.OneShotIterator": false, + "tf.compat.v1.raw_ops.OnesLike": false, + "tf.compat.v1.raw_ops.OptimizeDataset": false, + "tf.compat.v1.raw_ops.OptionalFromValue": false, + "tf.compat.v1.raw_ops.OptionalGetValue": false, + "tf.compat.v1.raw_ops.OptionalHasValue": false, + "tf.compat.v1.raw_ops.OptionalNone": false, + "tf.compat.v1.raw_ops.OrderedMapClear": false, + "tf.compat.v1.raw_ops.OrderedMapIncompleteSize": false, + "tf.compat.v1.raw_ops.OrderedMapPeek": false, + "tf.compat.v1.raw_ops.OrderedMapSize": false, + "tf.compat.v1.raw_ops.OrderedMapStage": false, + "tf.compat.v1.raw_ops.OrderedMapUnstage": false, + "tf.compat.v1.raw_ops.OrderedMapUnstageNoKey": false, + "tf.compat.v1.raw_ops.OutfeedDequeue": false, + "tf.compat.v1.raw_ops.OutfeedDequeueTuple": false, + "tf.compat.v1.raw_ops.OutfeedEnqueue": false, + "tf.compat.v1.raw_ops.OutfeedEnqueueTuple": false, + "tf.compat.v1.raw_ops.Pack": false, + "tf.compat.v1.raw_ops.Pad": false, + 
"tf.compat.v1.raw_ops.PadV2": false, + "tf.compat.v1.raw_ops.PaddedBatchDataset": false, + "tf.compat.v1.raw_ops.PaddedBatchDatasetV2": false, + "tf.compat.v1.raw_ops.PaddingFIFOQueue": false, + "tf.compat.v1.raw_ops.PaddingFIFOQueueV2": false, + "tf.compat.v1.raw_ops.ParallelConcat": false, + "tf.compat.v1.raw_ops.ParallelDynamicStitch": false, + "tf.compat.v1.raw_ops.ParallelInterleaveDataset": false, + "tf.compat.v1.raw_ops.ParallelInterleaveDatasetV2": false, + "tf.compat.v1.raw_ops.ParallelInterleaveDatasetV3": false, + "tf.compat.v1.raw_ops.ParallelInterleaveDatasetV4": false, + "tf.compat.v1.raw_ops.ParallelMapDataset": false, + "tf.compat.v1.raw_ops.ParallelMapDatasetV2": false, + "tf.compat.v1.raw_ops.ParameterizedTruncatedNormal": false, + "tf.compat.v1.raw_ops.ParseExample": false, + "tf.compat.v1.raw_ops.ParseExampleDataset": false, + "tf.compat.v1.raw_ops.ParseExampleDatasetV2": false, + "tf.compat.v1.raw_ops.ParseExampleV2": false, + "tf.compat.v1.raw_ops.ParseSequenceExample": false, + "tf.compat.v1.raw_ops.ParseSequenceExampleV2": false, + "tf.compat.v1.raw_ops.ParseSingleExample": false, + "tf.compat.v1.raw_ops.ParseSingleSequenceExample": false, + "tf.compat.v1.raw_ops.ParseTensor": false, + "tf.compat.v1.raw_ops.PartitionedCall": false, + "tf.compat.v1.raw_ops.Placeholder": false, + "tf.compat.v1.raw_ops.PlaceholderV2": false, + "tf.compat.v1.raw_ops.PlaceholderWithDefault": false, + "tf.compat.v1.raw_ops.Polygamma": false, + "tf.compat.v1.raw_ops.PopulationCount": false, + "tf.compat.v1.raw_ops.Pow": false, + "tf.compat.v1.raw_ops.PrefetchDataset": false, + "tf.compat.v1.raw_ops.Prelinearize": false, + "tf.compat.v1.raw_ops.PrelinearizeTuple": false, + "tf.compat.v1.raw_ops.PreventGradient": false, + "tf.compat.v1.raw_ops.Print": false, + "tf.compat.v1.raw_ops.PrintV2": false, + "tf.compat.v1.raw_ops.PriorityQueue": false, + "tf.compat.v1.raw_ops.PriorityQueueV2": false, + "tf.compat.v1.raw_ops.PrivateThreadPoolDataset": false, + "tf.compat.v1.raw_ops.Prod": false, + "tf.compat.v1.raw_ops.PyFunc": false, + "tf.compat.v1.raw_ops.PyFuncStateless": false, + "tf.compat.v1.raw_ops.Qr": false, + "tf.compat.v1.raw_ops.QuantizeAndDequantize": false, + "tf.compat.v1.raw_ops.QuantizeAndDequantizeV2": false, + "tf.compat.v1.raw_ops.QuantizeAndDequantizeV3": false, + "tf.compat.v1.raw_ops.QuantizeDownAndShrinkRange": false, + "tf.compat.v1.raw_ops.QuantizeV2": false, + "tf.compat.v1.raw_ops.QuantizedAdd": false, + "tf.compat.v1.raw_ops.QuantizedAvgPool": false, + "tf.compat.v1.raw_ops.QuantizedBatchNormWithGlobalNormalization": false, + "tf.compat.v1.raw_ops.QuantizedBiasAdd": false, + "tf.compat.v1.raw_ops.QuantizedConcat": false, + "tf.compat.v1.raw_ops.QuantizedConv2D": false, + "tf.compat.v1.raw_ops.QuantizedConv2DAndRelu": false, + "tf.compat.v1.raw_ops.QuantizedConv2DAndReluAndRequantize": false, + "tf.compat.v1.raw_ops.QuantizedConv2DAndRequantize": false, + "tf.compat.v1.raw_ops.QuantizedConv2DPerChannel": false, + "tf.compat.v1.raw_ops.QuantizedConv2DWithBias": false, + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasAndRelu": false, + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasAndReluAndRequantize": false, + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasAndRequantize": false, + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasSignedSumAndReluAndRequantize": false, + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasSumAndRelu": false, + "tf.compat.v1.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize": false, + "tf.compat.v1.raw_ops.QuantizedDepthwiseConv2D": false, + 
"tf.compat.v1.raw_ops.QuantizedDepthwiseConv2DWithBias": false, + "tf.compat.v1.raw_ops.QuantizedDepthwiseConv2DWithBiasAndRelu": false, + "tf.compat.v1.raw_ops.QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize": false, + "tf.compat.v1.raw_ops.QuantizedInstanceNorm": false, + "tf.compat.v1.raw_ops.QuantizedMatMul": false, + "tf.compat.v1.raw_ops.QuantizedMatMulWithBias": false, + "tf.compat.v1.raw_ops.QuantizedMatMulWithBiasAndDequantize": false, + "tf.compat.v1.raw_ops.QuantizedMatMulWithBiasAndRelu": false, + "tf.compat.v1.raw_ops.QuantizedMatMulWithBiasAndReluAndRequantize": false, + "tf.compat.v1.raw_ops.QuantizedMatMulWithBiasAndRequantize": false, + "tf.compat.v1.raw_ops.QuantizedMaxPool": false, + "tf.compat.v1.raw_ops.QuantizedMul": false, + "tf.compat.v1.raw_ops.QuantizedRelu": false, + "tf.compat.v1.raw_ops.QuantizedRelu6": false, + "tf.compat.v1.raw_ops.QuantizedReluX": false, + "tf.compat.v1.raw_ops.QuantizedReshape": false, + "tf.compat.v1.raw_ops.QuantizedResizeBilinear": false, + "tf.compat.v1.raw_ops.QueueClose": false, + "tf.compat.v1.raw_ops.QueueCloseV2": false, + "tf.compat.v1.raw_ops.QueueDequeue": false, + "tf.compat.v1.raw_ops.QueueDequeueMany": false, + "tf.compat.v1.raw_ops.QueueDequeueManyV2": false, + "tf.compat.v1.raw_ops.QueueDequeueUpTo": false, + "tf.compat.v1.raw_ops.QueueDequeueUpToV2": false, + "tf.compat.v1.raw_ops.QueueDequeueV2": false, + "tf.compat.v1.raw_ops.QueueEnqueue": false, + "tf.compat.v1.raw_ops.QueueEnqueueMany": false, + "tf.compat.v1.raw_ops.QueueEnqueueManyV2": false, + "tf.compat.v1.raw_ops.QueueEnqueueV2": false, + "tf.compat.v1.raw_ops.QueueIsClosed": false, + "tf.compat.v1.raw_ops.QueueIsClosedV2": false, + "tf.compat.v1.raw_ops.QueueSize": false, + "tf.compat.v1.raw_ops.QueueSizeV2": false, + "tf.compat.v1.raw_ops.RFFT": false, + "tf.compat.v1.raw_ops.RFFT2D": false, + "tf.compat.v1.raw_ops.RFFT3D": false, + "tf.compat.v1.raw_ops.RGBToHSV": false, + "tf.compat.v1.raw_ops.RaggedGather": false, + "tf.compat.v1.raw_ops.RaggedRange": false, + "tf.compat.v1.raw_ops.RaggedTensorFromVariant": false, + "tf.compat.v1.raw_ops.RaggedTensorToSparse": false, + "tf.compat.v1.raw_ops.RaggedTensorToTensor": false, + "tf.compat.v1.raw_ops.RaggedTensorToVariant": false, + "tf.compat.v1.raw_ops.RandomCrop": false, + "tf.compat.v1.raw_ops.RandomDataset": false, + "tf.compat.v1.raw_ops.RandomGamma": false, + "tf.compat.v1.raw_ops.RandomGammaGrad": false, + "tf.compat.v1.raw_ops.RandomPoisson": false, + "tf.compat.v1.raw_ops.RandomPoissonV2": false, + "tf.compat.v1.raw_ops.RandomShuffle": false, + "tf.compat.v1.raw_ops.RandomShuffleQueue": false, + "tf.compat.v1.raw_ops.RandomShuffleQueueV2": false, + "tf.compat.v1.raw_ops.RandomStandardNormal": false, + "tf.compat.v1.raw_ops.RandomUniform": false, + "tf.compat.v1.raw_ops.RandomUniformInt": false, + "tf.compat.v1.raw_ops.Range": false, + "tf.compat.v1.raw_ops.RangeDataset": false, + "tf.compat.v1.raw_ops.Rank": false, + "tf.compat.v1.raw_ops.ReadFile": false, + "tf.compat.v1.raw_ops.ReadVariableOp": false, + "tf.compat.v1.raw_ops.ReaderNumRecordsProduced": false, + "tf.compat.v1.raw_ops.ReaderNumRecordsProducedV2": false, + "tf.compat.v1.raw_ops.ReaderNumWorkUnitsCompleted": false, + "tf.compat.v1.raw_ops.ReaderNumWorkUnitsCompletedV2": false, + "tf.compat.v1.raw_ops.ReaderRead": false, + "tf.compat.v1.raw_ops.ReaderReadUpTo": false, + "tf.compat.v1.raw_ops.ReaderReadUpToV2": false, + "tf.compat.v1.raw_ops.ReaderReadV2": false, + "tf.compat.v1.raw_ops.ReaderReset": false, + 
"tf.compat.v1.raw_ops.ReaderResetV2": false, + "tf.compat.v1.raw_ops.ReaderRestoreState": false, + "tf.compat.v1.raw_ops.ReaderRestoreStateV2": false, + "tf.compat.v1.raw_ops.ReaderSerializeState": false, + "tf.compat.v1.raw_ops.ReaderSerializeStateV2": false, + "tf.compat.v1.raw_ops.Real": false, + "tf.compat.v1.raw_ops.RealDiv": false, + "tf.compat.v1.raw_ops.RebatchDataset": false, + "tf.compat.v1.raw_ops.Reciprocal": false, + "tf.compat.v1.raw_ops.ReciprocalGrad": false, + "tf.compat.v1.raw_ops.RecordInput": false, + "tf.compat.v1.raw_ops.Recv": false, + "tf.compat.v1.raw_ops.RecvTPUEmbeddingActivations": false, + "tf.compat.v1.raw_ops.ReduceDataset": false, + "tf.compat.v1.raw_ops.ReduceJoin": false, + "tf.compat.v1.raw_ops.RefEnter": false, + "tf.compat.v1.raw_ops.RefExit": false, + "tf.compat.v1.raw_ops.RefIdentity": false, + "tf.compat.v1.raw_ops.RefMerge": false, + "tf.compat.v1.raw_ops.RefNextIteration": false, + "tf.compat.v1.raw_ops.RefSelect": false, + "tf.compat.v1.raw_ops.RefSwitch": false, + "tf.compat.v1.raw_ops.RegexFullMatch": false, + "tf.compat.v1.raw_ops.RegexReplace": false, + "tf.compat.v1.raw_ops.Relu": false, + "tf.compat.v1.raw_ops.Relu6": false, + "tf.compat.v1.raw_ops.Relu6Grad": false, + "tf.compat.v1.raw_ops.ReluGrad": false, + "tf.compat.v1.raw_ops.RemoteCall": false, + "tf.compat.v1.raw_ops.RepeatDataset": false, + "tf.compat.v1.raw_ops.RequantizationRange": false, + "tf.compat.v1.raw_ops.RequantizationRangePerChannel": false, + "tf.compat.v1.raw_ops.Requantize": false, + "tf.compat.v1.raw_ops.RequantizePerChannel": false, + "tf.compat.v1.raw_ops.Reshape": false, + "tf.compat.v1.raw_ops.ResizeArea": false, + "tf.compat.v1.raw_ops.ResizeBicubic": false, + "tf.compat.v1.raw_ops.ResizeBicubicGrad": false, + "tf.compat.v1.raw_ops.ResizeBilinear": false, + "tf.compat.v1.raw_ops.ResizeBilinearGrad": false, + "tf.compat.v1.raw_ops.ResizeNearestNeighbor": false, + "tf.compat.v1.raw_ops.ResizeNearestNeighborGrad": false, + "tf.compat.v1.raw_ops.ResourceAccumulatorApplyGradient": false, + "tf.compat.v1.raw_ops.ResourceAccumulatorNumAccumulated": false, + "tf.compat.v1.raw_ops.ResourceAccumulatorSetGlobalStep": false, + "tf.compat.v1.raw_ops.ResourceAccumulatorTakeGradient": false, + "tf.compat.v1.raw_ops.ResourceApplyAdaMax": false, + "tf.compat.v1.raw_ops.ResourceApplyAdadelta": false, + "tf.compat.v1.raw_ops.ResourceApplyAdagrad": false, + "tf.compat.v1.raw_ops.ResourceApplyAdagradDA": false, + "tf.compat.v1.raw_ops.ResourceApplyAdagradV2": false, + "tf.compat.v1.raw_ops.ResourceApplyAdam": false, + "tf.compat.v1.raw_ops.ResourceApplyAdamWithAmsgrad": false, + "tf.compat.v1.raw_ops.ResourceApplyAddSign": false, + "tf.compat.v1.raw_ops.ResourceApplyCenteredRMSProp": false, + "tf.compat.v1.raw_ops.ResourceApplyFtrl": false, + "tf.compat.v1.raw_ops.ResourceApplyFtrlV2": false, + "tf.compat.v1.raw_ops.ResourceApplyGradientDescent": false, + "tf.compat.v1.raw_ops.ResourceApplyKerasMomentum": false, + "tf.compat.v1.raw_ops.ResourceApplyMomentum": false, + "tf.compat.v1.raw_ops.ResourceApplyPowerSign": false, + "tf.compat.v1.raw_ops.ResourceApplyProximalAdagrad": false, + "tf.compat.v1.raw_ops.ResourceApplyProximalGradientDescent": false, + "tf.compat.v1.raw_ops.ResourceApplyRMSProp": false, + "tf.compat.v1.raw_ops.ResourceConditionalAccumulator": false, + "tf.compat.v1.raw_ops.ResourceCountUpTo": false, + "tf.compat.v1.raw_ops.ResourceGather": false, + "tf.compat.v1.raw_ops.ResourceGatherNd": false, + "tf.compat.v1.raw_ops.ResourceScatterAdd": false, + 
"tf.compat.v1.raw_ops.ResourceScatterDiv": false, + "tf.compat.v1.raw_ops.ResourceScatterMax": false, + "tf.compat.v1.raw_ops.ResourceScatterMin": false, + "tf.compat.v1.raw_ops.ResourceScatterMul": false, + "tf.compat.v1.raw_ops.ResourceScatterNdAdd": false, + "tf.compat.v1.raw_ops.ResourceScatterNdSub": false, + "tf.compat.v1.raw_ops.ResourceScatterNdUpdate": false, + "tf.compat.v1.raw_ops.ResourceScatterSub": false, + "tf.compat.v1.raw_ops.ResourceScatterUpdate": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyAdadelta": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyAdagrad": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyAdagradDA": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyAdagradV2": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyCenteredRMSProp": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyFtrl": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyFtrlV2": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyKerasMomentum": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyMomentum": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyProximalAdagrad": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyProximalGradientDescent": false, + "tf.compat.v1.raw_ops.ResourceSparseApplyRMSProp": false, + "tf.compat.v1.raw_ops.ResourceStridedSliceAssign": false, + "tf.compat.v1.raw_ops.Restore": false, + "tf.compat.v1.raw_ops.RestoreSlice": false, + "tf.compat.v1.raw_ops.RestoreV2": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingADAMParameters": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingADAMParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdadeltaParameters": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdagradParameters": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingFTRLParameters": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingMDLAdagradLightParameters": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingMomentumParameters": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingMomentumParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingRMSPropParameters": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug": false, + "tf.compat.v1.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParameters": false, + "tf.compat.v1.raw_ops.Reverse": false, + "tf.compat.v1.raw_ops.ReverseSequence": false, + "tf.compat.v1.raw_ops.ReverseV2": false, + "tf.compat.v1.raw_ops.RightShift": false, + "tf.compat.v1.raw_ops.Rint": false, + "tf.compat.v1.raw_ops.RngSkip": false, + "tf.compat.v1.raw_ops.Roll": false, + "tf.compat.v1.raw_ops.Round": false, + "tf.compat.v1.raw_ops.Rsqrt": false, + "tf.compat.v1.raw_ops.RsqrtGrad": false, + "tf.compat.v1.raw_ops.SampleDistortedBoundingBox": false, + "tf.compat.v1.raw_ops.SampleDistortedBoundingBoxV2": false, + "tf.compat.v1.raw_ops.SamplingDataset": false, + "tf.compat.v1.raw_ops.Save": false, + "tf.compat.v1.raw_ops.SaveSlices": false, + "tf.compat.v1.raw_ops.SaveV2": false, + "tf.compat.v1.raw_ops.ScalarSummary": false, + 
"tf.compat.v1.raw_ops.ScaleAndTranslate": false, + "tf.compat.v1.raw_ops.ScaleAndTranslateGrad": false, + "tf.compat.v1.raw_ops.ScanDataset": false, + "tf.compat.v1.raw_ops.ScatterAdd": false, + "tf.compat.v1.raw_ops.ScatterDiv": false, + "tf.compat.v1.raw_ops.ScatterMax": false, + "tf.compat.v1.raw_ops.ScatterMin": false, + "tf.compat.v1.raw_ops.ScatterMul": false, + "tf.compat.v1.raw_ops.ScatterNd": false, + "tf.compat.v1.raw_ops.ScatterNdAdd": false, + "tf.compat.v1.raw_ops.ScatterNdNonAliasingAdd": false, + "tf.compat.v1.raw_ops.ScatterNdSub": false, + "tf.compat.v1.raw_ops.ScatterNdUpdate": false, + "tf.compat.v1.raw_ops.ScatterSub": false, + "tf.compat.v1.raw_ops.ScatterUpdate": false, + "tf.compat.v1.raw_ops.SdcaFprint": false, + "tf.compat.v1.raw_ops.SdcaOptimizer": false, + "tf.compat.v1.raw_ops.SdcaOptimizerV2": false, + "tf.compat.v1.raw_ops.SdcaShrinkL1": false, + "tf.compat.v1.raw_ops.SegmentMax": false, + "tf.compat.v1.raw_ops.SegmentMean": false, + "tf.compat.v1.raw_ops.SegmentMin": false, + "tf.compat.v1.raw_ops.SegmentProd": false, + "tf.compat.v1.raw_ops.SegmentSum": false, + "tf.compat.v1.raw_ops.Select": false, + "tf.compat.v1.raw_ops.SelectV2": false, + "tf.compat.v1.raw_ops.SelfAdjointEig": false, + "tf.compat.v1.raw_ops.SelfAdjointEigV2": false, + "tf.compat.v1.raw_ops.Selu": false, + "tf.compat.v1.raw_ops.SeluGrad": false, + "tf.compat.v1.raw_ops.Send": false, + "tf.compat.v1.raw_ops.SendTPUEmbeddingGradients": false, + "tf.compat.v1.raw_ops.SerializeIterator": false, + "tf.compat.v1.raw_ops.SerializeManySparse": false, + "tf.compat.v1.raw_ops.SerializeSparse": false, + "tf.compat.v1.raw_ops.SerializeTensor": false, + "tf.compat.v1.raw_ops.SetSize": false, + "tf.compat.v1.raw_ops.SetStatsAggregatorDataset": false, + "tf.compat.v1.raw_ops.Shape": false, + "tf.compat.v1.raw_ops.ShapeN": false, + "tf.compat.v1.raw_ops.ShardDataset": false, + "tf.compat.v1.raw_ops.ShardedFilename": false, + "tf.compat.v1.raw_ops.ShardedFilespec": false, + "tf.compat.v1.raw_ops.ShuffleAndRepeatDataset": false, + "tf.compat.v1.raw_ops.ShuffleDataset": false, + "tf.compat.v1.raw_ops.ShuffleDatasetV2": false, + "tf.compat.v1.raw_ops.ShutdownDistributedTPU": false, + "tf.compat.v1.raw_ops.Sigmoid": false, + "tf.compat.v1.raw_ops.SigmoidGrad": false, + "tf.compat.v1.raw_ops.Sign": false, + "tf.compat.v1.raw_ops.Sin": false, + "tf.compat.v1.raw_ops.Sinh": false, + "tf.compat.v1.raw_ops.Size": false, + "tf.compat.v1.raw_ops.SkipDataset": false, + "tf.compat.v1.raw_ops.SleepDataset": false, + "tf.compat.v1.raw_ops.Slice": false, + "tf.compat.v1.raw_ops.SlidingWindowDataset": false, + "tf.compat.v1.raw_ops.Snapshot": false, + "tf.compat.v1.raw_ops.SnapshotDataset": false, + "tf.compat.v1.raw_ops.SobolSample": false, + "tf.compat.v1.raw_ops.Softmax": false, + "tf.compat.v1.raw_ops.SoftmaxCrossEntropyWithLogits": false, + "tf.compat.v1.raw_ops.Softplus": false, + "tf.compat.v1.raw_ops.SoftplusGrad": false, + "tf.compat.v1.raw_ops.Softsign": false, + "tf.compat.v1.raw_ops.SoftsignGrad": false, + "tf.compat.v1.raw_ops.SpaceToBatch": false, + "tf.compat.v1.raw_ops.SpaceToBatchND": false, + "tf.compat.v1.raw_ops.SpaceToDepth": false, + "tf.compat.v1.raw_ops.SparseAccumulatorApplyGradient": false, + "tf.compat.v1.raw_ops.SparseAccumulatorTakeGradient": false, + "tf.compat.v1.raw_ops.SparseAdd": false, + "tf.compat.v1.raw_ops.SparseAddGrad": false, + "tf.compat.v1.raw_ops.SparseApplyAdadelta": false, + "tf.compat.v1.raw_ops.SparseApplyAdagrad": false, + "tf.compat.v1.raw_ops.SparseApplyAdagradDA": false, 
+ "tf.compat.v1.raw_ops.SparseApplyAdagradV2": false, + "tf.compat.v1.raw_ops.SparseApplyCenteredRMSProp": false, + "tf.compat.v1.raw_ops.SparseApplyFtrl": false, + "tf.compat.v1.raw_ops.SparseApplyFtrlV2": false, + "tf.compat.v1.raw_ops.SparseApplyMomentum": false, + "tf.compat.v1.raw_ops.SparseApplyProximalAdagrad": false, + "tf.compat.v1.raw_ops.SparseApplyProximalGradientDescent": false, + "tf.compat.v1.raw_ops.SparseApplyRMSProp": false, + "tf.compat.v1.raw_ops.SparseConcat": false, + "tf.compat.v1.raw_ops.SparseConditionalAccumulator": false, + "tf.compat.v1.raw_ops.SparseCross": false, + "tf.compat.v1.raw_ops.SparseDenseCwiseAdd": false, + "tf.compat.v1.raw_ops.SparseDenseCwiseDiv": false, + "tf.compat.v1.raw_ops.SparseDenseCwiseMul": false, + "tf.compat.v1.raw_ops.SparseFillEmptyRows": false, + "tf.compat.v1.raw_ops.SparseFillEmptyRowsGrad": false, + "tf.compat.v1.raw_ops.SparseMatMul": false, + "tf.compat.v1.raw_ops.SparseMatrixAdd": false, + "tf.compat.v1.raw_ops.SparseMatrixMatMul": false, + "tf.compat.v1.raw_ops.SparseMatrixMul": false, + "tf.compat.v1.raw_ops.SparseMatrixNNZ": false, + "tf.compat.v1.raw_ops.SparseMatrixOrderingAMD": false, + "tf.compat.v1.raw_ops.SparseMatrixSoftmax": false, + "tf.compat.v1.raw_ops.SparseMatrixSoftmaxGrad": false, + "tf.compat.v1.raw_ops.SparseMatrixSparseCholesky": false, + "tf.compat.v1.raw_ops.SparseMatrixSparseMatMul": false, + "tf.compat.v1.raw_ops.SparseMatrixTranspose": false, + "tf.compat.v1.raw_ops.SparseMatrixZeros": false, + "tf.compat.v1.raw_ops.SparseReduceMax": false, + "tf.compat.v1.raw_ops.SparseReduceMaxSparse": false, + "tf.compat.v1.raw_ops.SparseReduceSum": false, + "tf.compat.v1.raw_ops.SparseReduceSumSparse": false, + "tf.compat.v1.raw_ops.SparseReorder": false, + "tf.compat.v1.raw_ops.SparseReshape": false, + "tf.compat.v1.raw_ops.SparseSegmentMean": false, + "tf.compat.v1.raw_ops.SparseSegmentMeanGrad": false, + "tf.compat.v1.raw_ops.SparseSegmentMeanWithNumSegments": false, + "tf.compat.v1.raw_ops.SparseSegmentSqrtN": false, + "tf.compat.v1.raw_ops.SparseSegmentSqrtNGrad": false, + "tf.compat.v1.raw_ops.SparseSegmentSqrtNWithNumSegments": false, + "tf.compat.v1.raw_ops.SparseSegmentSum": false, + "tf.compat.v1.raw_ops.SparseSegmentSumWithNumSegments": false, + "tf.compat.v1.raw_ops.SparseSlice": false, + "tf.compat.v1.raw_ops.SparseSliceGrad": false, + "tf.compat.v1.raw_ops.SparseSoftmax": false, + "tf.compat.v1.raw_ops.SparseSoftmaxCrossEntropyWithLogits": false, + "tf.compat.v1.raw_ops.SparseSparseMaximum": false, + "tf.compat.v1.raw_ops.SparseSparseMinimum": false, + "tf.compat.v1.raw_ops.SparseSplit": false, + "tf.compat.v1.raw_ops.SparseTensorDenseAdd": false, + "tf.compat.v1.raw_ops.SparseTensorDenseMatMul": false, + "tf.compat.v1.raw_ops.SparseTensorSliceDataset": false, + "tf.compat.v1.raw_ops.SparseTensorToCSRSparseMatrix": false, + "tf.compat.v1.raw_ops.SparseToDense": false, + "tf.compat.v1.raw_ops.SparseToSparseSetOperation": false, + "tf.compat.v1.raw_ops.Spence": false, + "tf.compat.v1.raw_ops.Split": false, + "tf.compat.v1.raw_ops.SplitV": false, + "tf.compat.v1.raw_ops.SqlDataset": false, + "tf.compat.v1.raw_ops.Sqrt": false, + "tf.compat.v1.raw_ops.SqrtGrad": false, + "tf.compat.v1.raw_ops.Square": false, + "tf.compat.v1.raw_ops.SquaredDifference": false, + "tf.compat.v1.raw_ops.Squeeze": false, + "tf.compat.v1.raw_ops.Stack": false, + "tf.compat.v1.raw_ops.StackClose": false, + "tf.compat.v1.raw_ops.StackCloseV2": false, + "tf.compat.v1.raw_ops.StackPop": false, + "tf.compat.v1.raw_ops.StackPopV2": 
false, + "tf.compat.v1.raw_ops.StackPush": false, + "tf.compat.v1.raw_ops.StackPushV2": false, + "tf.compat.v1.raw_ops.StackV2": false, + "tf.compat.v1.raw_ops.Stage": false, + "tf.compat.v1.raw_ops.StageClear": false, + "tf.compat.v1.raw_ops.StagePeek": false, + "tf.compat.v1.raw_ops.StageSize": false, + "tf.compat.v1.raw_ops.StatefulPartitionedCall": false, + "tf.compat.v1.raw_ops.StatefulRandomBinomial": false, + "tf.compat.v1.raw_ops.StatefulStandardNormal": false, + "tf.compat.v1.raw_ops.StatefulStandardNormalV2": false, + "tf.compat.v1.raw_ops.StatefulTruncatedNormal": false, + "tf.compat.v1.raw_ops.StatefulUniform": false, + "tf.compat.v1.raw_ops.StatefulUniformFullInt": false, + "tf.compat.v1.raw_ops.StatefulUniformInt": false, + "tf.compat.v1.raw_ops.StatelessIf": false, + "tf.compat.v1.raw_ops.StatelessMultinomial": false, + "tf.compat.v1.raw_ops.StatelessRandomBinomial": false, + "tf.compat.v1.raw_ops.StatelessRandomGammaV2": false, + "tf.compat.v1.raw_ops.StatelessRandomNormal": false, + "tf.compat.v1.raw_ops.StatelessRandomPoisson": false, + "tf.compat.v1.raw_ops.StatelessRandomUniform": false, + "tf.compat.v1.raw_ops.StatelessRandomUniformFullInt": false, + "tf.compat.v1.raw_ops.StatelessRandomUniformInt": false, + "tf.compat.v1.raw_ops.StatelessTruncatedNormal": false, + "tf.compat.v1.raw_ops.StatelessWhile": false, + "tf.compat.v1.raw_ops.StaticRegexFullMatch": false, + "tf.compat.v1.raw_ops.StaticRegexReplace": false, + "tf.compat.v1.raw_ops.StatsAggregatorHandle": false, + "tf.compat.v1.raw_ops.StatsAggregatorHandleV2": false, + "tf.compat.v1.raw_ops.StatsAggregatorSetSummaryWriter": false, + "tf.compat.v1.raw_ops.StatsAggregatorSummary": false, + "tf.compat.v1.raw_ops.StopGradient": false, + "tf.compat.v1.raw_ops.StridedSlice": false, + "tf.compat.v1.raw_ops.StridedSliceAssign": false, + "tf.compat.v1.raw_ops.StridedSliceGrad": false, + "tf.compat.v1.raw_ops.StringFormat": false, + "tf.compat.v1.raw_ops.StringJoin": false, + "tf.compat.v1.raw_ops.StringLength": false, + "tf.compat.v1.raw_ops.StringLower": false, + "tf.compat.v1.raw_ops.StringNGrams": false, + "tf.compat.v1.raw_ops.StringSplit": false, + "tf.compat.v1.raw_ops.StringSplitV2": false, + "tf.compat.v1.raw_ops.StringStrip": false, + "tf.compat.v1.raw_ops.StringToHashBucket": false, + "tf.compat.v1.raw_ops.StringToHashBucketFast": false, + "tf.compat.v1.raw_ops.StringToHashBucketStrong": false, + "tf.compat.v1.raw_ops.StringToNumber": false, + "tf.compat.v1.raw_ops.StringUpper": false, + "tf.compat.v1.raw_ops.Sub": false, + "tf.compat.v1.raw_ops.Substr": false, + "tf.compat.v1.raw_ops.Sum": false, + "tf.compat.v1.raw_ops.SummaryWriter": false, + "tf.compat.v1.raw_ops.Svd": false, + "tf.compat.v1.raw_ops.Switch": false, + "tf.compat.v1.raw_ops.SymbolicGradient": false, + "tf.compat.v1.raw_ops.TFRecordDataset": false, + "tf.compat.v1.raw_ops.TFRecordReader": false, + "tf.compat.v1.raw_ops.TFRecordReaderV2": false, + "tf.compat.v1.raw_ops.TPUCompilationResult": false, + "tf.compat.v1.raw_ops.TPUEmbeddingActivations": false, + "tf.compat.v1.raw_ops.TPUOrdinalSelector": false, + "tf.compat.v1.raw_ops.TPUPartitionedCall": false, + "tf.compat.v1.raw_ops.TPUReplicateMetadata": false, + "tf.compat.v1.raw_ops.TPUReplicatedInput": false, + "tf.compat.v1.raw_ops.TPUReplicatedOutput": false, + "tf.compat.v1.raw_ops.TakeDataset": false, + "tf.compat.v1.raw_ops.TakeManySparseFromTensorsMap": false, + "tf.compat.v1.raw_ops.TakeWhileDataset": false, + "tf.compat.v1.raw_ops.Tan": false, + "tf.compat.v1.raw_ops.Tanh": false, + 
"tf.compat.v1.raw_ops.TanhGrad": false, + "tf.compat.v1.raw_ops.TemporaryVariable": false, + "tf.compat.v1.raw_ops.TensorArray": false, + "tf.compat.v1.raw_ops.TensorArrayClose": false, + "tf.compat.v1.raw_ops.TensorArrayCloseV2": false, + "tf.compat.v1.raw_ops.TensorArrayCloseV3": false, + "tf.compat.v1.raw_ops.TensorArrayConcat": false, + "tf.compat.v1.raw_ops.TensorArrayConcatV2": false, + "tf.compat.v1.raw_ops.TensorArrayConcatV3": false, + "tf.compat.v1.raw_ops.TensorArrayGather": false, + "tf.compat.v1.raw_ops.TensorArrayGatherV2": false, + "tf.compat.v1.raw_ops.TensorArrayGatherV3": false, + "tf.compat.v1.raw_ops.TensorArrayGrad": false, + "tf.compat.v1.raw_ops.TensorArrayGradV2": false, + "tf.compat.v1.raw_ops.TensorArrayGradV3": false, + "tf.compat.v1.raw_ops.TensorArrayGradWithShape": false, + "tf.compat.v1.raw_ops.TensorArrayPack": false, + "tf.compat.v1.raw_ops.TensorArrayRead": false, + "tf.compat.v1.raw_ops.TensorArrayReadV2": false, + "tf.compat.v1.raw_ops.TensorArrayReadV3": false, + "tf.compat.v1.raw_ops.TensorArrayScatter": false, + "tf.compat.v1.raw_ops.TensorArrayScatterV2": false, + "tf.compat.v1.raw_ops.TensorArrayScatterV3": false, + "tf.compat.v1.raw_ops.TensorArraySize": false, + "tf.compat.v1.raw_ops.TensorArraySizeV2": false, + "tf.compat.v1.raw_ops.TensorArraySizeV3": false, + "tf.compat.v1.raw_ops.TensorArraySplit": false, + "tf.compat.v1.raw_ops.TensorArraySplitV2": false, + "tf.compat.v1.raw_ops.TensorArraySplitV3": false, + "tf.compat.v1.raw_ops.TensorArrayUnpack": false, + "tf.compat.v1.raw_ops.TensorArrayV2": false, + "tf.compat.v1.raw_ops.TensorArrayV3": false, + "tf.compat.v1.raw_ops.TensorArrayWrite": false, + "tf.compat.v1.raw_ops.TensorArrayWriteV2": false, + "tf.compat.v1.raw_ops.TensorArrayWriteV3": false, + "tf.compat.v1.raw_ops.TensorDataset": false, + "tf.compat.v1.raw_ops.TensorListConcat": false, + "tf.compat.v1.raw_ops.TensorListConcatLists": false, + "tf.compat.v1.raw_ops.TensorListConcatV2": false, + "tf.compat.v1.raw_ops.TensorListElementShape": false, + "tf.compat.v1.raw_ops.TensorListFromTensor": false, + "tf.compat.v1.raw_ops.TensorListGather": false, + "tf.compat.v1.raw_ops.TensorListGetItem": false, + "tf.compat.v1.raw_ops.TensorListLength": false, + "tf.compat.v1.raw_ops.TensorListPopBack": false, + "tf.compat.v1.raw_ops.TensorListPushBack": false, + "tf.compat.v1.raw_ops.TensorListPushBackBatch": false, + "tf.compat.v1.raw_ops.TensorListReserve": false, + "tf.compat.v1.raw_ops.TensorListResize": false, + "tf.compat.v1.raw_ops.TensorListScatter": false, + "tf.compat.v1.raw_ops.TensorListScatterIntoExistingList": false, + "tf.compat.v1.raw_ops.TensorListScatterV2": false, + "tf.compat.v1.raw_ops.TensorListSetItem": false, + "tf.compat.v1.raw_ops.TensorListSplit": false, + "tf.compat.v1.raw_ops.TensorListStack": false, + "tf.compat.v1.raw_ops.TensorScatterAdd": false, + "tf.compat.v1.raw_ops.TensorScatterSub": false, + "tf.compat.v1.raw_ops.TensorScatterUpdate": false, + "tf.compat.v1.raw_ops.TensorSliceDataset": false, + "tf.compat.v1.raw_ops.TensorStridedSliceUpdate": false, + "tf.compat.v1.raw_ops.TensorSummary": false, + "tf.compat.v1.raw_ops.TensorSummaryV2": false, + "tf.compat.v1.raw_ops.TextLineDataset": false, + "tf.compat.v1.raw_ops.TextLineReader": false, + "tf.compat.v1.raw_ops.TextLineReaderV2": false, + "tf.compat.v1.raw_ops.ThreadPoolDataset": false, + "tf.compat.v1.raw_ops.ThreadPoolHandle": false, + "tf.compat.v1.raw_ops.ThreadUnsafeUnigramCandidateSampler": false, + "tf.compat.v1.raw_ops.Tile": false, + 
"tf.compat.v1.raw_ops.TileGrad": false, + "tf.compat.v1.raw_ops.Timestamp": false, + "tf.compat.v1.raw_ops.ToBool": false, + "tf.compat.v1.raw_ops.TopK": false, + "tf.compat.v1.raw_ops.TopKV2": false, + "tf.compat.v1.raw_ops.Transpose": false, + "tf.compat.v1.raw_ops.TridiagonalMatMul": false, + "tf.compat.v1.raw_ops.TridiagonalSolve": false, + "tf.compat.v1.raw_ops.TruncateDiv": false, + "tf.compat.v1.raw_ops.TruncateMod": false, + "tf.compat.v1.raw_ops.TruncatedNormal": false, + "tf.compat.v1.raw_ops.Unbatch": false, + "tf.compat.v1.raw_ops.UnbatchDataset": false, + "tf.compat.v1.raw_ops.UnbatchGrad": false, + "tf.compat.v1.raw_ops.UnicodeDecode": false, + "tf.compat.v1.raw_ops.UnicodeDecodeWithOffsets": false, + "tf.compat.v1.raw_ops.UnicodeEncode": false, + "tf.compat.v1.raw_ops.UnicodeScript": false, + "tf.compat.v1.raw_ops.UnicodeTranscode": false, + "tf.compat.v1.raw_ops.UniformCandidateSampler": false, + "tf.compat.v1.raw_ops.Unique": false, + "tf.compat.v1.raw_ops.UniqueDataset": false, + "tf.compat.v1.raw_ops.UniqueV2": false, + "tf.compat.v1.raw_ops.UniqueWithCounts": false, + "tf.compat.v1.raw_ops.UniqueWithCountsV2": false, + "tf.compat.v1.raw_ops.Unpack": false, + "tf.compat.v1.raw_ops.UnravelIndex": false, + "tf.compat.v1.raw_ops.UnsortedSegmentJoin": false, + "tf.compat.v1.raw_ops.UnsortedSegmentMax": false, + "tf.compat.v1.raw_ops.UnsortedSegmentMin": false, + "tf.compat.v1.raw_ops.UnsortedSegmentProd": false, + "tf.compat.v1.raw_ops.UnsortedSegmentSum": false, + "tf.compat.v1.raw_ops.Unstage": false, + "tf.compat.v1.raw_ops.UnwrapDatasetVariant": false, + "tf.compat.v1.raw_ops.UpperBound": false, + "tf.compat.v1.raw_ops.VarHandleOp": false, + "tf.compat.v1.raw_ops.VarIsInitializedOp": false, + "tf.compat.v1.raw_ops.Variable": false, + "tf.compat.v1.raw_ops.VariableShape": false, + "tf.compat.v1.raw_ops.VariableV2": false, + "tf.compat.v1.raw_ops.Where": false, + "tf.compat.v1.raw_ops.While": false, + "tf.compat.v1.raw_ops.WholeFileReader": false, + "tf.compat.v1.raw_ops.WholeFileReaderV2": false, + "tf.compat.v1.raw_ops.WindowDataset": false, + "tf.compat.v1.raw_ops.WorkerHeartbeat": false, + "tf.compat.v1.raw_ops.WrapDatasetVariant": false, + "tf.compat.v1.raw_ops.WriteAudioSummary": false, + "tf.compat.v1.raw_ops.WriteFile": false, + "tf.compat.v1.raw_ops.WriteGraphSummary": false, + "tf.compat.v1.raw_ops.WriteHistogramSummary": false, + "tf.compat.v1.raw_ops.WriteImageSummary": false, + "tf.compat.v1.raw_ops.WriteRawProtoSummary": false, + "tf.compat.v1.raw_ops.WriteScalarSummary": false, + "tf.compat.v1.raw_ops.WriteSummary": false, + "tf.compat.v1.raw_ops.Xdivy": false, + "tf.compat.v1.raw_ops.Xlog1py": false, + "tf.compat.v1.raw_ops.Xlogy": false, + "tf.compat.v1.raw_ops.ZerosLike": false, + "tf.compat.v1.raw_ops.Zeta": false, + "tf.compat.v1.raw_ops.ZipDataset": false, + "tf.compat.v1.read_file": false, + "tf.compat.v1.real": false, + "tf.compat.v1.realdiv": false, + "tf.compat.v1.reciprocal": false, + "tf.compat.v1.recompute_grad": false, + "tf.compat.v1.reduce_all": false, + "tf.compat.v1.reduce_any": false, + "tf.compat.v1.reduce_join": false, + "tf.compat.v1.reduce_logsumexp": false, + "tf.compat.v1.reduce_max": false, + "tf.compat.v1.reduce_mean": false, + "tf.compat.v1.reduce_min": false, + "tf.compat.v1.reduce_prod": false, + "tf.compat.v1.reduce_sum": false, + "tf.compat.v1.regex_replace": false, + "tf.compat.v1.register_tensor_conversion_function": false, + "tf.compat.v1.repeat": false, + "tf.compat.v1.report_uninitialized_variables": false, + 
"tf.compat.v1.required_space_to_batch_paddings": false, + "tf.compat.v1.reset_default_graph": false, + "tf.compat.v1.reshape": false, + "tf.compat.v1.resource": true, + "tf.compat.v1.resource_loader": false, + "tf.compat.v1.resource_loader.get_data_files_path": false, + "tf.compat.v1.resource_loader.get_path_to_datafile": false, + "tf.compat.v1.resource_loader.get_root_dir_with_all_resources": false, + "tf.compat.v1.resource_loader.load_resource": false, + "tf.compat.v1.resource_loader.readahead_file_path": false, + "tf.compat.v1.resource_variables_enabled": false, + "tf.compat.v1.reverse": false, + "tf.compat.v1.reverse_sequence": false, + "tf.compat.v1.reverse_v2": false, + "tf.compat.v1.rint": false, + "tf.compat.v1.roll": false, + "tf.compat.v1.round": false, + "tf.compat.v1.rsqrt": false, + "tf.compat.v1.saturate_cast": false, + "tf.compat.v1.saved_model": false, + "tf.compat.v1.saved_model.ASSETS_DIRECTORY": true, + "tf.compat.v1.saved_model.ASSETS_KEY": true, + "tf.compat.v1.saved_model.Asset": false, + "tf.compat.v1.saved_model.Asset.__eq__": true, + "tf.compat.v1.saved_model.Asset.__ge__": true, + "tf.compat.v1.saved_model.Asset.__gt__": true, + "tf.compat.v1.saved_model.Asset.__init__": true, + "tf.compat.v1.saved_model.Asset.__le__": true, + "tf.compat.v1.saved_model.Asset.__lt__": true, + "tf.compat.v1.saved_model.Asset.__ne__": true, + "tf.compat.v1.saved_model.Asset.__new__": true, + "tf.compat.v1.saved_model.Asset.asset_path": true, + "tf.compat.v1.saved_model.Builder": false, + "tf.compat.v1.saved_model.Builder.__eq__": true, + "tf.compat.v1.saved_model.Builder.__ge__": true, + "tf.compat.v1.saved_model.Builder.__gt__": true, + "tf.compat.v1.saved_model.Builder.__init__": true, + "tf.compat.v1.saved_model.Builder.__le__": true, + "tf.compat.v1.saved_model.Builder.__lt__": true, + "tf.compat.v1.saved_model.Builder.__ne__": true, + "tf.compat.v1.saved_model.Builder.__new__": true, + "tf.compat.v1.saved_model.Builder.add_meta_graph": true, + "tf.compat.v1.saved_model.Builder.add_meta_graph_and_variables": true, + "tf.compat.v1.saved_model.Builder.save": true, + "tf.compat.v1.saved_model.CLASSIFY_INPUTS": true, + "tf.compat.v1.saved_model.CLASSIFY_METHOD_NAME": true, + "tf.compat.v1.saved_model.CLASSIFY_OUTPUT_CLASSES": true, + "tf.compat.v1.saved_model.CLASSIFY_OUTPUT_SCORES": true, + "tf.compat.v1.saved_model.DEBUG_DIRECTORY": true, + "tf.compat.v1.saved_model.DEBUG_INFO_FILENAME_PB": true, + "tf.compat.v1.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY": true, + "tf.compat.v1.saved_model.GPU": true, + "tf.compat.v1.saved_model.LEGACY_INIT_OP_KEY": true, + "tf.compat.v1.saved_model.MAIN_OP_KEY": true, + "tf.compat.v1.saved_model.PREDICT_INPUTS": true, + "tf.compat.v1.saved_model.PREDICT_METHOD_NAME": true, + "tf.compat.v1.saved_model.PREDICT_OUTPUTS": true, + "tf.compat.v1.saved_model.REGRESS_INPUTS": true, + "tf.compat.v1.saved_model.REGRESS_METHOD_NAME": true, + "tf.compat.v1.saved_model.REGRESS_OUTPUTS": true, + "tf.compat.v1.saved_model.SAVED_MODEL_FILENAME_PB": true, + "tf.compat.v1.saved_model.SAVED_MODEL_FILENAME_PBTXT": true, + "tf.compat.v1.saved_model.SAVED_MODEL_SCHEMA_VERSION": true, + "tf.compat.v1.saved_model.SERVING": true, + "tf.compat.v1.saved_model.SaveOptions": false, + "tf.compat.v1.saved_model.SaveOptions.__eq__": true, + "tf.compat.v1.saved_model.SaveOptions.__ge__": true, + "tf.compat.v1.saved_model.SaveOptions.__gt__": true, + "tf.compat.v1.saved_model.SaveOptions.__init__": true, + "tf.compat.v1.saved_model.SaveOptions.__le__": true, + 
"tf.compat.v1.saved_model.SaveOptions.__lt__": true, + "tf.compat.v1.saved_model.SaveOptions.__ne__": true, + "tf.compat.v1.saved_model.SaveOptions.__new__": true, + "tf.compat.v1.saved_model.SaveOptions.function_aliases": true, + "tf.compat.v1.saved_model.SaveOptions.namespace_whitelist": true, + "tf.compat.v1.saved_model.SaveOptions.save_debug_info": true, + "tf.compat.v1.saved_model.TPU": true, + "tf.compat.v1.saved_model.TRAINING": true, + "tf.compat.v1.saved_model.VARIABLES_DIRECTORY": true, + "tf.compat.v1.saved_model.VARIABLES_FILENAME": true, + "tf.compat.v1.saved_model.build_signature_def": false, + "tf.compat.v1.saved_model.build_tensor_info": false, + "tf.compat.v1.saved_model.builder": false, + "tf.compat.v1.saved_model.builder.SavedModelBuilder": false, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__eq__": true, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__ge__": true, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__gt__": true, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__init__": true, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__le__": true, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__lt__": true, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__ne__": true, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.__new__": true, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.add_meta_graph": true, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables": true, + "tf.compat.v1.saved_model.builder.SavedModelBuilder.save": true, + "tf.compat.v1.saved_model.classification_signature_def": false, + "tf.compat.v1.saved_model.constants": false, + "tf.compat.v1.saved_model.constants.ASSETS_DIRECTORY": true, + "tf.compat.v1.saved_model.constants.ASSETS_KEY": true, + "tf.compat.v1.saved_model.constants.DEBUG_DIRECTORY": true, + "tf.compat.v1.saved_model.constants.DEBUG_INFO_FILENAME_PB": true, + "tf.compat.v1.saved_model.constants.LEGACY_INIT_OP_KEY": true, + "tf.compat.v1.saved_model.constants.MAIN_OP_KEY": true, + "tf.compat.v1.saved_model.constants.SAVED_MODEL_FILENAME_PB": true, + "tf.compat.v1.saved_model.constants.SAVED_MODEL_FILENAME_PBTXT": true, + "tf.compat.v1.saved_model.constants.SAVED_MODEL_SCHEMA_VERSION": true, + "tf.compat.v1.saved_model.constants.VARIABLES_DIRECTORY": true, + "tf.compat.v1.saved_model.constants.VARIABLES_FILENAME": true, + "tf.compat.v1.saved_model.contains_saved_model": false, + "tf.compat.v1.saved_model.experimental": false, + "tf.compat.v1.saved_model.experimental.save": false, + "tf.compat.v1.saved_model.get_tensor_from_tensor_info": false, + "tf.compat.v1.saved_model.is_valid_signature": false, + "tf.compat.v1.saved_model.load": false, + "tf.compat.v1.saved_model.load_v2": false, + "tf.compat.v1.saved_model.loader": false, + "tf.compat.v1.saved_model.loader.load": false, + "tf.compat.v1.saved_model.loader.maybe_saved_model_directory": false, + "tf.compat.v1.saved_model.main_op": false, + "tf.compat.v1.saved_model.main_op.main_op": false, + "tf.compat.v1.saved_model.main_op.main_op_with_restore": false, + "tf.compat.v1.saved_model.main_op_with_restore": false, + "tf.compat.v1.saved_model.maybe_saved_model_directory": false, + "tf.compat.v1.saved_model.predict_signature_def": false, + "tf.compat.v1.saved_model.regression_signature_def": false, + "tf.compat.v1.saved_model.save": false, + "tf.compat.v1.saved_model.signature_constants": false, + "tf.compat.v1.saved_model.signature_constants.CLASSIFY_INPUTS": true, + 
"tf.compat.v1.saved_model.signature_constants.CLASSIFY_METHOD_NAME": true, + "tf.compat.v1.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES": true, + "tf.compat.v1.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES": true, + "tf.compat.v1.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY": true, + "tf.compat.v1.saved_model.signature_constants.PREDICT_INPUTS": true, + "tf.compat.v1.saved_model.signature_constants.PREDICT_METHOD_NAME": true, + "tf.compat.v1.saved_model.signature_constants.PREDICT_OUTPUTS": true, + "tf.compat.v1.saved_model.signature_constants.REGRESS_INPUTS": true, + "tf.compat.v1.saved_model.signature_constants.REGRESS_METHOD_NAME": true, + "tf.compat.v1.saved_model.signature_constants.REGRESS_OUTPUTS": true, + "tf.compat.v1.saved_model.signature_def_utils": false, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater": false, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__eq__": true, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__ge__": true, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__gt__": true, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__init__": true, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__le__": true, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__lt__": true, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__ne__": true, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.__new__": true, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.replace_method_name": true, + "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater.save": true, + "tf.compat.v1.saved_model.signature_def_utils.build_signature_def": false, + "tf.compat.v1.saved_model.signature_def_utils.classification_signature_def": false, + "tf.compat.v1.saved_model.signature_def_utils.is_valid_signature": false, + "tf.compat.v1.saved_model.signature_def_utils.predict_signature_def": false, + "tf.compat.v1.saved_model.signature_def_utils.regression_signature_def": false, + "tf.compat.v1.saved_model.simple_save": false, + "tf.compat.v1.saved_model.tag_constants": false, + "tf.compat.v1.saved_model.tag_constants.GPU": true, + "tf.compat.v1.saved_model.tag_constants.SERVING": true, + "tf.compat.v1.saved_model.tag_constants.TPU": true, + "tf.compat.v1.saved_model.tag_constants.TRAINING": true, + "tf.compat.v1.saved_model.utils": false, + "tf.compat.v1.saved_model.utils.build_tensor_info": false, + "tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info": false, + "tf.compat.v1.scalar_mul": false, + "tf.compat.v1.scan": false, + "tf.compat.v1.scatter_add": false, + "tf.compat.v1.scatter_div": false, + "tf.compat.v1.scatter_max": false, + "tf.compat.v1.scatter_min": false, + "tf.compat.v1.scatter_mul": false, + "tf.compat.v1.scatter_nd": false, + "tf.compat.v1.scatter_nd_add": false, + "tf.compat.v1.scatter_nd_sub": false, + "tf.compat.v1.scatter_nd_update": false, + "tf.compat.v1.scatter_sub": false, + "tf.compat.v1.scatter_update": false, + "tf.compat.v1.searchsorted": false, + "tf.compat.v1.segment_max": false, + "tf.compat.v1.segment_mean": false, + "tf.compat.v1.segment_min": false, + "tf.compat.v1.segment_prod": false, + "tf.compat.v1.segment_sum": false, + "tf.compat.v1.self_adjoint_eig": false, + "tf.compat.v1.self_adjoint_eigvals": false, + "tf.compat.v1.sequence_mask": false, + "tf.compat.v1.serialize_many_sparse": false, + "tf.compat.v1.serialize_sparse": false, + 
"tf.compat.v1.serialize_tensor": false, + "tf.compat.v1.set_random_seed": false, + "tf.compat.v1.setdiff1d": false, + "tf.compat.v1.sets": false, + "tf.compat.v1.sets.difference": false, + "tf.compat.v1.sets.intersection": false, + "tf.compat.v1.sets.set_difference": false, + "tf.compat.v1.sets.set_intersection": false, + "tf.compat.v1.sets.set_size": false, + "tf.compat.v1.sets.set_union": false, + "tf.compat.v1.sets.size": false, + "tf.compat.v1.sets.union": false, + "tf.compat.v1.shape": false, + "tf.compat.v1.shape_n": false, + "tf.compat.v1.sigmoid": false, + "tf.compat.v1.sign": false, + "tf.compat.v1.signal": false, + "tf.compat.v1.signal.dct": false, + "tf.compat.v1.signal.fft": false, + "tf.compat.v1.signal.fft2d": false, + "tf.compat.v1.signal.fft3d": false, + "tf.compat.v1.signal.fftshift": false, + "tf.compat.v1.signal.frame": false, + "tf.compat.v1.signal.hamming_window": false, + "tf.compat.v1.signal.hann_window": false, + "tf.compat.v1.signal.idct": false, + "tf.compat.v1.signal.ifft": false, + "tf.compat.v1.signal.ifft2d": false, + "tf.compat.v1.signal.ifft3d": false, + "tf.compat.v1.signal.ifftshift": false, + "tf.compat.v1.signal.inverse_mdct": false, + "tf.compat.v1.signal.inverse_stft": false, + "tf.compat.v1.signal.inverse_stft_window_fn": false, + "tf.compat.v1.signal.irfft": false, + "tf.compat.v1.signal.irfft2d": false, + "tf.compat.v1.signal.irfft3d": false, + "tf.compat.v1.signal.kaiser_bessel_derived_window": false, + "tf.compat.v1.signal.kaiser_window": false, + "tf.compat.v1.signal.linear_to_mel_weight_matrix": false, + "tf.compat.v1.signal.mdct": false, + "tf.compat.v1.signal.mfccs_from_log_mel_spectrograms": false, + "tf.compat.v1.signal.overlap_and_add": false, + "tf.compat.v1.signal.rfft": false, + "tf.compat.v1.signal.rfft2d": false, + "tf.compat.v1.signal.rfft3d": false, + "tf.compat.v1.signal.stft": false, + "tf.compat.v1.signal.vorbis_window": false, + "tf.compat.v1.sin": false, + "tf.compat.v1.sinh": false, + "tf.compat.v1.size": false, + "tf.compat.v1.slice": false, + "tf.compat.v1.sort": false, + "tf.compat.v1.space_to_batch": false, + "tf.compat.v1.space_to_batch_nd": false, + "tf.compat.v1.space_to_depth": false, + "tf.compat.v1.sparse": false, + "tf.compat.v1.sparse.SparseConditionalAccumulator": false, + "tf.compat.v1.sparse.SparseConditionalAccumulator.__eq__": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.__ge__": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.__gt__": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.__init__": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.__le__": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.__lt__": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.__ne__": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.__new__": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.accumulator_ref": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.apply_grad": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.apply_indexed_slices_grad": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.dtype": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.name": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.num_accumulated": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.set_global_step": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.take_grad": true, + "tf.compat.v1.sparse.SparseConditionalAccumulator.take_indexed_slices_grad": true, + "tf.compat.v1.sparse.SparseTensor": false, + 
"tf.compat.v1.sparse.SparseTensor.__div__": true, + "tf.compat.v1.sparse.SparseTensor.__eq__": true, + "tf.compat.v1.sparse.SparseTensor.__ge__": true, + "tf.compat.v1.sparse.SparseTensor.__gt__": true, + "tf.compat.v1.sparse.SparseTensor.__init__": true, + "tf.compat.v1.sparse.SparseTensor.__le__": true, + "tf.compat.v1.sparse.SparseTensor.__lt__": true, + "tf.compat.v1.sparse.SparseTensor.__mul__": true, + "tf.compat.v1.sparse.SparseTensor.__ne__": true, + "tf.compat.v1.sparse.SparseTensor.__new__": true, + "tf.compat.v1.sparse.SparseTensor.__truediv__": true, + "tf.compat.v1.sparse.SparseTensor.consumers": true, + "tf.compat.v1.sparse.SparseTensor.dense_shape": true, + "tf.compat.v1.sparse.SparseTensor.dtype": true, + "tf.compat.v1.sparse.SparseTensor.eval": true, + "tf.compat.v1.sparse.SparseTensor.from_value": true, + "tf.compat.v1.sparse.SparseTensor.get_shape": true, + "tf.compat.v1.sparse.SparseTensor.graph": true, + "tf.compat.v1.sparse.SparseTensor.indices": true, + "tf.compat.v1.sparse.SparseTensor.op": true, + "tf.compat.v1.sparse.SparseTensor.shape": true, + "tf.compat.v1.sparse.SparseTensor.values": true, + "tf.compat.v1.sparse.add": false, + "tf.compat.v1.sparse.concat": false, + "tf.compat.v1.sparse.cross": false, + "tf.compat.v1.sparse.cross_hashed": false, + "tf.compat.v1.sparse.expand_dims": false, + "tf.compat.v1.sparse.eye": false, + "tf.compat.v1.sparse.fill_empty_rows": false, + "tf.compat.v1.sparse.from_dense": false, + "tf.compat.v1.sparse.mask": false, + "tf.compat.v1.sparse.matmul": false, + "tf.compat.v1.sparse.maximum": false, + "tf.compat.v1.sparse.merge": false, + "tf.compat.v1.sparse.minimum": false, + "tf.compat.v1.sparse.placeholder": false, + "tf.compat.v1.sparse.reduce_max": false, + "tf.compat.v1.sparse.reduce_max_sparse": false, + "tf.compat.v1.sparse.reduce_sum": false, + "tf.compat.v1.sparse.reduce_sum_sparse": false, + "tf.compat.v1.sparse.reorder": false, + "tf.compat.v1.sparse.reset_shape": false, + "tf.compat.v1.sparse.reshape": false, + "tf.compat.v1.sparse.retain": false, + "tf.compat.v1.sparse.segment_mean": false, + "tf.compat.v1.sparse.segment_sqrt_n": false, + "tf.compat.v1.sparse.segment_sum": false, + "tf.compat.v1.sparse.slice": false, + "tf.compat.v1.sparse.softmax": false, + "tf.compat.v1.sparse.sparse_dense_matmul": false, + "tf.compat.v1.sparse.split": false, + "tf.compat.v1.sparse.to_dense": false, + "tf.compat.v1.sparse.to_indicator": false, + "tf.compat.v1.sparse.transpose": false, + "tf.compat.v1.sparse_add": false, + "tf.compat.v1.sparse_concat": false, + "tf.compat.v1.sparse_fill_empty_rows": false, + "tf.compat.v1.sparse_mask": false, + "tf.compat.v1.sparse_matmul": false, + "tf.compat.v1.sparse_maximum": false, + "tf.compat.v1.sparse_merge": false, + "tf.compat.v1.sparse_minimum": false, + "tf.compat.v1.sparse_placeholder": false, + "tf.compat.v1.sparse_reduce_max": false, + "tf.compat.v1.sparse_reduce_max_sparse": false, + "tf.compat.v1.sparse_reduce_sum": false, + "tf.compat.v1.sparse_reduce_sum_sparse": false, + "tf.compat.v1.sparse_reorder": false, + "tf.compat.v1.sparse_reset_shape": false, + "tf.compat.v1.sparse_reshape": false, + "tf.compat.v1.sparse_retain": false, + "tf.compat.v1.sparse_segment_mean": false, + "tf.compat.v1.sparse_segment_sqrt_n": false, + "tf.compat.v1.sparse_segment_sum": false, + "tf.compat.v1.sparse_slice": false, + "tf.compat.v1.sparse_softmax": false, + "tf.compat.v1.sparse_split": false, + "tf.compat.v1.sparse_tensor_dense_matmul": false, + "tf.compat.v1.sparse_tensor_to_dense": false, + 
"tf.compat.v1.sparse_to_dense": false, + "tf.compat.v1.sparse_to_indicator": false, + "tf.compat.v1.sparse_transpose": false, + "tf.compat.v1.spectral": false, + "tf.compat.v1.spectral.dct": false, + "tf.compat.v1.spectral.fft": false, + "tf.compat.v1.spectral.fft2d": false, + "tf.compat.v1.spectral.fft3d": false, + "tf.compat.v1.spectral.idct": false, + "tf.compat.v1.spectral.ifft": false, + "tf.compat.v1.spectral.ifft2d": false, + "tf.compat.v1.spectral.ifft3d": false, + "tf.compat.v1.spectral.irfft": false, + "tf.compat.v1.spectral.irfft2d": false, + "tf.compat.v1.spectral.irfft3d": false, + "tf.compat.v1.spectral.rfft": false, + "tf.compat.v1.spectral.rfft2d": false, + "tf.compat.v1.spectral.rfft3d": false, + "tf.compat.v1.split": false, + "tf.compat.v1.sqrt": false, + "tf.compat.v1.square": false, + "tf.compat.v1.squared_difference": false, + "tf.compat.v1.squeeze": false, + "tf.compat.v1.stack": false, + "tf.compat.v1.stop_gradient": false, + "tf.compat.v1.strided_slice": false, + "tf.compat.v1.string": true, + "tf.compat.v1.string_join": false, + "tf.compat.v1.string_split": false, + "tf.compat.v1.string_strip": false, + "tf.compat.v1.string_to_hash_bucket": false, + "tf.compat.v1.string_to_hash_bucket_fast": false, + "tf.compat.v1.string_to_hash_bucket_strong": false, + "tf.compat.v1.string_to_number": false, + "tf.compat.v1.strings": false, + "tf.compat.v1.strings.as_string": false, + "tf.compat.v1.strings.bytes_split": false, + "tf.compat.v1.strings.format": false, + "tf.compat.v1.strings.join": false, + "tf.compat.v1.strings.length": false, + "tf.compat.v1.strings.lower": false, + "tf.compat.v1.strings.ngrams": false, + "tf.compat.v1.strings.reduce_join": false, + "tf.compat.v1.strings.regex_full_match": false, + "tf.compat.v1.strings.regex_replace": false, + "tf.compat.v1.strings.split": false, + "tf.compat.v1.strings.strip": false, + "tf.compat.v1.strings.substr": false, + "tf.compat.v1.strings.to_hash_bucket": false, + "tf.compat.v1.strings.to_hash_bucket_fast": false, + "tf.compat.v1.strings.to_hash_bucket_strong": false, + "tf.compat.v1.strings.to_number": false, + "tf.compat.v1.strings.unicode_decode": false, + "tf.compat.v1.strings.unicode_decode_with_offsets": false, + "tf.compat.v1.strings.unicode_encode": false, + "tf.compat.v1.strings.unicode_script": false, + "tf.compat.v1.strings.unicode_split": false, + "tf.compat.v1.strings.unicode_split_with_offsets": false, + "tf.compat.v1.strings.unicode_transcode": false, + "tf.compat.v1.strings.unsorted_segment_join": false, + "tf.compat.v1.strings.upper": false, + "tf.compat.v1.substr": false, + "tf.compat.v1.subtract": false, + "tf.compat.v1.summary": false, + "tf.compat.v1.summary.Event": false, + "tf.compat.v1.summary.Event.ByteSize": true, + "tf.compat.v1.summary.Event.Clear": true, + "tf.compat.v1.summary.Event.ClearExtension": true, + "tf.compat.v1.summary.Event.ClearField": true, + "tf.compat.v1.summary.Event.CopyFrom": true, + "tf.compat.v1.summary.Event.DESCRIPTOR": true, + "tf.compat.v1.summary.Event.DiscardUnknownFields": true, + "tf.compat.v1.summary.Event.Extensions": true, + "tf.compat.v1.summary.Event.FindInitializationErrors": true, + "tf.compat.v1.summary.Event.FromString": true, + "tf.compat.v1.summary.Event.HasExtension": true, + "tf.compat.v1.summary.Event.HasField": true, + "tf.compat.v1.summary.Event.IsInitialized": true, + "tf.compat.v1.summary.Event.ListFields": true, + "tf.compat.v1.summary.Event.MergeFrom": true, + "tf.compat.v1.summary.Event.MergeFromString": true, + 
"tf.compat.v1.summary.Event.ParseFromString": true, + "tf.compat.v1.summary.Event.RegisterExtension": true, + "tf.compat.v1.summary.Event.SerializePartialToString": true, + "tf.compat.v1.summary.Event.SerializeToString": true, + "tf.compat.v1.summary.Event.SetInParent": true, + "tf.compat.v1.summary.Event.UnknownFields": true, + "tf.compat.v1.summary.Event.WhichOneof": true, + "tf.compat.v1.summary.Event.__eq__": true, + "tf.compat.v1.summary.Event.__ge__": true, + "tf.compat.v1.summary.Event.__gt__": true, + "tf.compat.v1.summary.Event.__init__": true, + "tf.compat.v1.summary.Event.__le__": true, + "tf.compat.v1.summary.Event.__lt__": true, + "tf.compat.v1.summary.Event.__ne__": true, + "tf.compat.v1.summary.Event.__new__": true, + "tf.compat.v1.summary.Event.file_version": true, + "tf.compat.v1.summary.Event.graph_def": true, + "tf.compat.v1.summary.Event.log_message": true, + "tf.compat.v1.summary.Event.meta_graph_def": true, + "tf.compat.v1.summary.Event.session_log": true, + "tf.compat.v1.summary.Event.step": true, + "tf.compat.v1.summary.Event.summary": true, + "tf.compat.v1.summary.Event.tagged_run_metadata": true, + "tf.compat.v1.summary.Event.wall_time": true, + "tf.compat.v1.summary.FileWriter": false, + "tf.compat.v1.summary.FileWriter.__enter__": true, + "tf.compat.v1.summary.FileWriter.__eq__": true, + "tf.compat.v1.summary.FileWriter.__exit__": true, + "tf.compat.v1.summary.FileWriter.__ge__": true, + "tf.compat.v1.summary.FileWriter.__gt__": true, + "tf.compat.v1.summary.FileWriter.__init__": true, + "tf.compat.v1.summary.FileWriter.__le__": true, + "tf.compat.v1.summary.FileWriter.__lt__": true, + "tf.compat.v1.summary.FileWriter.__ne__": true, + "tf.compat.v1.summary.FileWriter.__new__": true, + "tf.compat.v1.summary.FileWriter.add_event": true, + "tf.compat.v1.summary.FileWriter.add_graph": true, + "tf.compat.v1.summary.FileWriter.add_meta_graph": true, + "tf.compat.v1.summary.FileWriter.add_run_metadata": true, + "tf.compat.v1.summary.FileWriter.add_session_log": true, + "tf.compat.v1.summary.FileWriter.add_summary": true, + "tf.compat.v1.summary.FileWriter.close": true, + "tf.compat.v1.summary.FileWriter.flush": true, + "tf.compat.v1.summary.FileWriter.get_logdir": true, + "tf.compat.v1.summary.FileWriter.reopen": true, + "tf.compat.v1.summary.FileWriterCache": false, + "tf.compat.v1.summary.FileWriterCache.__eq__": true, + "tf.compat.v1.summary.FileWriterCache.__ge__": true, + "tf.compat.v1.summary.FileWriterCache.__gt__": true, + "tf.compat.v1.summary.FileWriterCache.__init__": true, + "tf.compat.v1.summary.FileWriterCache.__le__": true, + "tf.compat.v1.summary.FileWriterCache.__lt__": true, + "tf.compat.v1.summary.FileWriterCache.__ne__": true, + "tf.compat.v1.summary.FileWriterCache.__new__": true, + "tf.compat.v1.summary.FileWriterCache.clear": true, + "tf.compat.v1.summary.FileWriterCache.get": true, + "tf.compat.v1.summary.SessionLog": false, + "tf.compat.v1.summary.SessionLog.ByteSize": true, + "tf.compat.v1.summary.SessionLog.CHECKPOINT": true, + "tf.compat.v1.summary.SessionLog.Clear": true, + "tf.compat.v1.summary.SessionLog.ClearExtension": true, + "tf.compat.v1.summary.SessionLog.ClearField": true, + "tf.compat.v1.summary.SessionLog.CopyFrom": true, + "tf.compat.v1.summary.SessionLog.DESCRIPTOR": true, + "tf.compat.v1.summary.SessionLog.DiscardUnknownFields": true, + "tf.compat.v1.summary.SessionLog.Extensions": true, + "tf.compat.v1.summary.SessionLog.FindInitializationErrors": true, + "tf.compat.v1.summary.SessionLog.FromString": true, + 
"tf.compat.v1.summary.SessionLog.HasExtension": true, + "tf.compat.v1.summary.SessionLog.HasField": true, + "tf.compat.v1.summary.SessionLog.IsInitialized": true, + "tf.compat.v1.summary.SessionLog.ListFields": true, + "tf.compat.v1.summary.SessionLog.MergeFrom": true, + "tf.compat.v1.summary.SessionLog.MergeFromString": true, + "tf.compat.v1.summary.SessionLog.ParseFromString": true, + "tf.compat.v1.summary.SessionLog.RegisterExtension": true, + "tf.compat.v1.summary.SessionLog.START": true, + "tf.compat.v1.summary.SessionLog.STATUS_UNSPECIFIED": true, + "tf.compat.v1.summary.SessionLog.STOP": true, + "tf.compat.v1.summary.SessionLog.SerializePartialToString": true, + "tf.compat.v1.summary.SessionLog.SerializeToString": true, + "tf.compat.v1.summary.SessionLog.SessionStatus": true, + "tf.compat.v1.summary.SessionLog.SetInParent": true, + "tf.compat.v1.summary.SessionLog.UnknownFields": true, + "tf.compat.v1.summary.SessionLog.WhichOneof": true, + "tf.compat.v1.summary.SessionLog.__eq__": true, + "tf.compat.v1.summary.SessionLog.__ge__": true, + "tf.compat.v1.summary.SessionLog.__gt__": true, + "tf.compat.v1.summary.SessionLog.__init__": true, + "tf.compat.v1.summary.SessionLog.__le__": true, + "tf.compat.v1.summary.SessionLog.__lt__": true, + "tf.compat.v1.summary.SessionLog.__ne__": true, + "tf.compat.v1.summary.SessionLog.__new__": true, + "tf.compat.v1.summary.SessionLog.checkpoint_path": true, + "tf.compat.v1.summary.SessionLog.msg": true, + "tf.compat.v1.summary.SessionLog.status": true, + "tf.compat.v1.summary.Summary": false, + "tf.compat.v1.summary.Summary.Audio": false, + "tf.compat.v1.summary.Summary.Audio.ByteSize": true, + "tf.compat.v1.summary.Summary.Audio.Clear": true, + "tf.compat.v1.summary.Summary.Audio.ClearExtension": true, + "tf.compat.v1.summary.Summary.Audio.ClearField": true, + "tf.compat.v1.summary.Summary.Audio.CopyFrom": true, + "tf.compat.v1.summary.Summary.Audio.DESCRIPTOR": true, + "tf.compat.v1.summary.Summary.Audio.DiscardUnknownFields": true, + "tf.compat.v1.summary.Summary.Audio.Extensions": true, + "tf.compat.v1.summary.Summary.Audio.FindInitializationErrors": true, + "tf.compat.v1.summary.Summary.Audio.FromString": true, + "tf.compat.v1.summary.Summary.Audio.HasExtension": true, + "tf.compat.v1.summary.Summary.Audio.HasField": true, + "tf.compat.v1.summary.Summary.Audio.IsInitialized": true, + "tf.compat.v1.summary.Summary.Audio.ListFields": true, + "tf.compat.v1.summary.Summary.Audio.MergeFrom": true, + "tf.compat.v1.summary.Summary.Audio.MergeFromString": true, + "tf.compat.v1.summary.Summary.Audio.ParseFromString": true, + "tf.compat.v1.summary.Summary.Audio.RegisterExtension": true, + "tf.compat.v1.summary.Summary.Audio.SerializePartialToString": true, + "tf.compat.v1.summary.Summary.Audio.SerializeToString": true, + "tf.compat.v1.summary.Summary.Audio.SetInParent": true, + "tf.compat.v1.summary.Summary.Audio.UnknownFields": true, + "tf.compat.v1.summary.Summary.Audio.WhichOneof": true, + "tf.compat.v1.summary.Summary.Audio.__eq__": true, + "tf.compat.v1.summary.Summary.Audio.__ge__": true, + "tf.compat.v1.summary.Summary.Audio.__gt__": true, + "tf.compat.v1.summary.Summary.Audio.__init__": true, + "tf.compat.v1.summary.Summary.Audio.__le__": true, + "tf.compat.v1.summary.Summary.Audio.__lt__": true, + "tf.compat.v1.summary.Summary.Audio.__ne__": true, + "tf.compat.v1.summary.Summary.Audio.__new__": true, + "tf.compat.v1.summary.Summary.Audio.content_type": true, + "tf.compat.v1.summary.Summary.Audio.encoded_audio_string": true, + 
"tf.compat.v1.summary.Summary.Audio.length_frames": true, + "tf.compat.v1.summary.Summary.Audio.num_channels": true, + "tf.compat.v1.summary.Summary.Audio.sample_rate": true, + "tf.compat.v1.summary.Summary.ByteSize": true, + "tf.compat.v1.summary.Summary.Clear": true, + "tf.compat.v1.summary.Summary.ClearExtension": true, + "tf.compat.v1.summary.Summary.ClearField": true, + "tf.compat.v1.summary.Summary.CopyFrom": true, + "tf.compat.v1.summary.Summary.DESCRIPTOR": true, + "tf.compat.v1.summary.Summary.DiscardUnknownFields": true, + "tf.compat.v1.summary.Summary.Extensions": true, + "tf.compat.v1.summary.Summary.FindInitializationErrors": true, + "tf.compat.v1.summary.Summary.FromString": true, + "tf.compat.v1.summary.Summary.HasExtension": true, + "tf.compat.v1.summary.Summary.HasField": true, + "tf.compat.v1.summary.Summary.Image": false, + "tf.compat.v1.summary.Summary.Image.ByteSize": true, + "tf.compat.v1.summary.Summary.Image.Clear": true, + "tf.compat.v1.summary.Summary.Image.ClearExtension": true, + "tf.compat.v1.summary.Summary.Image.ClearField": true, + "tf.compat.v1.summary.Summary.Image.CopyFrom": true, + "tf.compat.v1.summary.Summary.Image.DESCRIPTOR": true, + "tf.compat.v1.summary.Summary.Image.DiscardUnknownFields": true, + "tf.compat.v1.summary.Summary.Image.Extensions": true, + "tf.compat.v1.summary.Summary.Image.FindInitializationErrors": true, + "tf.compat.v1.summary.Summary.Image.FromString": true, + "tf.compat.v1.summary.Summary.Image.HasExtension": true, + "tf.compat.v1.summary.Summary.Image.HasField": true, + "tf.compat.v1.summary.Summary.Image.IsInitialized": true, + "tf.compat.v1.summary.Summary.Image.ListFields": true, + "tf.compat.v1.summary.Summary.Image.MergeFrom": true, + "tf.compat.v1.summary.Summary.Image.MergeFromString": true, + "tf.compat.v1.summary.Summary.Image.ParseFromString": true, + "tf.compat.v1.summary.Summary.Image.RegisterExtension": true, + "tf.compat.v1.summary.Summary.Image.SerializePartialToString": true, + "tf.compat.v1.summary.Summary.Image.SerializeToString": true, + "tf.compat.v1.summary.Summary.Image.SetInParent": true, + "tf.compat.v1.summary.Summary.Image.UnknownFields": true, + "tf.compat.v1.summary.Summary.Image.WhichOneof": true, + "tf.compat.v1.summary.Summary.Image.__eq__": true, + "tf.compat.v1.summary.Summary.Image.__ge__": true, + "tf.compat.v1.summary.Summary.Image.__gt__": true, + "tf.compat.v1.summary.Summary.Image.__init__": true, + "tf.compat.v1.summary.Summary.Image.__le__": true, + "tf.compat.v1.summary.Summary.Image.__lt__": true, + "tf.compat.v1.summary.Summary.Image.__ne__": true, + "tf.compat.v1.summary.Summary.Image.__new__": true, + "tf.compat.v1.summary.Summary.Image.colorspace": true, + "tf.compat.v1.summary.Summary.Image.encoded_image_string": true, + "tf.compat.v1.summary.Summary.Image.height": true, + "tf.compat.v1.summary.Summary.Image.width": true, + "tf.compat.v1.summary.Summary.IsInitialized": true, + "tf.compat.v1.summary.Summary.ListFields": true, + "tf.compat.v1.summary.Summary.MergeFrom": true, + "tf.compat.v1.summary.Summary.MergeFromString": true, + "tf.compat.v1.summary.Summary.ParseFromString": true, + "tf.compat.v1.summary.Summary.RegisterExtension": true, + "tf.compat.v1.summary.Summary.SerializePartialToString": true, + "tf.compat.v1.summary.Summary.SerializeToString": true, + "tf.compat.v1.summary.Summary.SetInParent": true, + "tf.compat.v1.summary.Summary.UnknownFields": true, + "tf.compat.v1.summary.Summary.Value": false, + "tf.compat.v1.summary.Summary.Value.ByteSize": true, + 
"tf.compat.v1.summary.Summary.Value.Clear": true, + "tf.compat.v1.summary.Summary.Value.ClearExtension": true, + "tf.compat.v1.summary.Summary.Value.ClearField": true, + "tf.compat.v1.summary.Summary.Value.CopyFrom": true, + "tf.compat.v1.summary.Summary.Value.DESCRIPTOR": true, + "tf.compat.v1.summary.Summary.Value.DiscardUnknownFields": true, + "tf.compat.v1.summary.Summary.Value.Extensions": true, + "tf.compat.v1.summary.Summary.Value.FindInitializationErrors": true, + "tf.compat.v1.summary.Summary.Value.FromString": true, + "tf.compat.v1.summary.Summary.Value.HasExtension": true, + "tf.compat.v1.summary.Summary.Value.HasField": true, + "tf.compat.v1.summary.Summary.Value.IsInitialized": true, + "tf.compat.v1.summary.Summary.Value.ListFields": true, + "tf.compat.v1.summary.Summary.Value.MergeFrom": true, + "tf.compat.v1.summary.Summary.Value.MergeFromString": true, + "tf.compat.v1.summary.Summary.Value.ParseFromString": true, + "tf.compat.v1.summary.Summary.Value.RegisterExtension": true, + "tf.compat.v1.summary.Summary.Value.SerializePartialToString": true, + "tf.compat.v1.summary.Summary.Value.SerializeToString": true, + "tf.compat.v1.summary.Summary.Value.SetInParent": true, + "tf.compat.v1.summary.Summary.Value.UnknownFields": true, + "tf.compat.v1.summary.Summary.Value.WhichOneof": true, + "tf.compat.v1.summary.Summary.Value.__eq__": true, + "tf.compat.v1.summary.Summary.Value.__ge__": true, + "tf.compat.v1.summary.Summary.Value.__gt__": true, + "tf.compat.v1.summary.Summary.Value.__init__": true, + "tf.compat.v1.summary.Summary.Value.__le__": true, + "tf.compat.v1.summary.Summary.Value.__lt__": true, + "tf.compat.v1.summary.Summary.Value.__ne__": true, + "tf.compat.v1.summary.Summary.Value.__new__": true, + "tf.compat.v1.summary.Summary.Value.audio": true, + "tf.compat.v1.summary.Summary.Value.histo": true, + "tf.compat.v1.summary.Summary.Value.image": true, + "tf.compat.v1.summary.Summary.Value.metadata": true, + "tf.compat.v1.summary.Summary.Value.node_name": true, + "tf.compat.v1.summary.Summary.Value.obsolete_old_style_histogram": true, + "tf.compat.v1.summary.Summary.Value.simple_value": true, + "tf.compat.v1.summary.Summary.Value.tag": true, + "tf.compat.v1.summary.Summary.Value.tensor": true, + "tf.compat.v1.summary.Summary.WhichOneof": true, + "tf.compat.v1.summary.Summary.__eq__": true, + "tf.compat.v1.summary.Summary.__ge__": true, + "tf.compat.v1.summary.Summary.__gt__": true, + "tf.compat.v1.summary.Summary.__init__": true, + "tf.compat.v1.summary.Summary.__le__": true, + "tf.compat.v1.summary.Summary.__lt__": true, + "tf.compat.v1.summary.Summary.__ne__": true, + "tf.compat.v1.summary.Summary.__new__": true, + "tf.compat.v1.summary.Summary.value": true, + "tf.compat.v1.summary.SummaryDescription": false, + "tf.compat.v1.summary.SummaryDescription.ByteSize": true, + "tf.compat.v1.summary.SummaryDescription.Clear": true, + "tf.compat.v1.summary.SummaryDescription.ClearExtension": true, + "tf.compat.v1.summary.SummaryDescription.ClearField": true, + "tf.compat.v1.summary.SummaryDescription.CopyFrom": true, + "tf.compat.v1.summary.SummaryDescription.DESCRIPTOR": true, + "tf.compat.v1.summary.SummaryDescription.DiscardUnknownFields": true, + "tf.compat.v1.summary.SummaryDescription.Extensions": true, + "tf.compat.v1.summary.SummaryDescription.FindInitializationErrors": true, + "tf.compat.v1.summary.SummaryDescription.FromString": true, + "tf.compat.v1.summary.SummaryDescription.HasExtension": true, + "tf.compat.v1.summary.SummaryDescription.HasField": true, + 
"tf.compat.v1.summary.SummaryDescription.IsInitialized": true, + "tf.compat.v1.summary.SummaryDescription.ListFields": true, + "tf.compat.v1.summary.SummaryDescription.MergeFrom": true, + "tf.compat.v1.summary.SummaryDescription.MergeFromString": true, + "tf.compat.v1.summary.SummaryDescription.ParseFromString": true, + "tf.compat.v1.summary.SummaryDescription.RegisterExtension": true, + "tf.compat.v1.summary.SummaryDescription.SerializePartialToString": true, + "tf.compat.v1.summary.SummaryDescription.SerializeToString": true, + "tf.compat.v1.summary.SummaryDescription.SetInParent": true, + "tf.compat.v1.summary.SummaryDescription.UnknownFields": true, + "tf.compat.v1.summary.SummaryDescription.WhichOneof": true, + "tf.compat.v1.summary.SummaryDescription.__eq__": true, + "tf.compat.v1.summary.SummaryDescription.__ge__": true, + "tf.compat.v1.summary.SummaryDescription.__gt__": true, + "tf.compat.v1.summary.SummaryDescription.__init__": true, + "tf.compat.v1.summary.SummaryDescription.__le__": true, + "tf.compat.v1.summary.SummaryDescription.__lt__": true, + "tf.compat.v1.summary.SummaryDescription.__ne__": true, + "tf.compat.v1.summary.SummaryDescription.__new__": true, + "tf.compat.v1.summary.SummaryDescription.type_hint": true, + "tf.compat.v1.summary.TaggedRunMetadata": false, + "tf.compat.v1.summary.TaggedRunMetadata.ByteSize": true, + "tf.compat.v1.summary.TaggedRunMetadata.Clear": true, + "tf.compat.v1.summary.TaggedRunMetadata.ClearExtension": true, + "tf.compat.v1.summary.TaggedRunMetadata.ClearField": true, + "tf.compat.v1.summary.TaggedRunMetadata.CopyFrom": true, + "tf.compat.v1.summary.TaggedRunMetadata.DESCRIPTOR": true, + "tf.compat.v1.summary.TaggedRunMetadata.DiscardUnknownFields": true, + "tf.compat.v1.summary.TaggedRunMetadata.Extensions": true, + "tf.compat.v1.summary.TaggedRunMetadata.FindInitializationErrors": true, + "tf.compat.v1.summary.TaggedRunMetadata.FromString": true, + "tf.compat.v1.summary.TaggedRunMetadata.HasExtension": true, + "tf.compat.v1.summary.TaggedRunMetadata.HasField": true, + "tf.compat.v1.summary.TaggedRunMetadata.IsInitialized": true, + "tf.compat.v1.summary.TaggedRunMetadata.ListFields": true, + "tf.compat.v1.summary.TaggedRunMetadata.MergeFrom": true, + "tf.compat.v1.summary.TaggedRunMetadata.MergeFromString": true, + "tf.compat.v1.summary.TaggedRunMetadata.ParseFromString": true, + "tf.compat.v1.summary.TaggedRunMetadata.RegisterExtension": true, + "tf.compat.v1.summary.TaggedRunMetadata.SerializePartialToString": true, + "tf.compat.v1.summary.TaggedRunMetadata.SerializeToString": true, + "tf.compat.v1.summary.TaggedRunMetadata.SetInParent": true, + "tf.compat.v1.summary.TaggedRunMetadata.UnknownFields": true, + "tf.compat.v1.summary.TaggedRunMetadata.WhichOneof": true, + "tf.compat.v1.summary.TaggedRunMetadata.__eq__": true, + "tf.compat.v1.summary.TaggedRunMetadata.__ge__": true, + "tf.compat.v1.summary.TaggedRunMetadata.__gt__": true, + "tf.compat.v1.summary.TaggedRunMetadata.__init__": true, + "tf.compat.v1.summary.TaggedRunMetadata.__le__": true, + "tf.compat.v1.summary.TaggedRunMetadata.__lt__": true, + "tf.compat.v1.summary.TaggedRunMetadata.__ne__": true, + "tf.compat.v1.summary.TaggedRunMetadata.__new__": true, + "tf.compat.v1.summary.TaggedRunMetadata.run_metadata": true, + "tf.compat.v1.summary.TaggedRunMetadata.tag": true, + "tf.compat.v1.summary.all_v2_summary_ops": false, + "tf.compat.v1.summary.audio": false, + "tf.compat.v1.summary.get_summary_description": false, + "tf.compat.v1.summary.histogram": false, + 
"tf.compat.v1.summary.image": false, + "tf.compat.v1.summary.initialize": false, + "tf.compat.v1.summary.merge": false, + "tf.compat.v1.summary.merge_all": false, + "tf.compat.v1.summary.scalar": false, + "tf.compat.v1.summary.tensor_summary": false, + "tf.compat.v1.summary.text": false, + "tf.compat.v1.svd": false, + "tf.compat.v1.switch_case": false, + "tf.compat.v1.sysconfig": false, + "tf.compat.v1.sysconfig.CXX11_ABI_FLAG": true, + "tf.compat.v1.sysconfig.MONOLITHIC_BUILD": true, + "tf.compat.v1.sysconfig.get_compile_flags": false, + "tf.compat.v1.sysconfig.get_include": false, + "tf.compat.v1.sysconfig.get_lib": false, + "tf.compat.v1.sysconfig.get_link_flags": false, + "tf.compat.v1.tables_initializer": false, + "tf.compat.v1.tan": false, + "tf.compat.v1.tanh": false, + "tf.compat.v1.tensor_scatter_add": false, + "tf.compat.v1.tensor_scatter_nd_add": false, + "tf.compat.v1.tensor_scatter_nd_sub": false, + "tf.compat.v1.tensor_scatter_nd_update": false, + "tf.compat.v1.tensor_scatter_sub": false, + "tf.compat.v1.tensor_scatter_update": false, + "tf.compat.v1.tensordot": false, + "tf.compat.v1.test": false, + "tf.compat.v1.test.Benchmark": false, + "tf.compat.v1.test.Benchmark.__eq__": true, + "tf.compat.v1.test.Benchmark.__ge__": true, + "tf.compat.v1.test.Benchmark.__gt__": true, + "tf.compat.v1.test.Benchmark.__init__": true, + "tf.compat.v1.test.Benchmark.__le__": true, + "tf.compat.v1.test.Benchmark.__lt__": true, + "tf.compat.v1.test.Benchmark.__ne__": true, + "tf.compat.v1.test.Benchmark.__new__": true, + "tf.compat.v1.test.Benchmark.evaluate": true, + "tf.compat.v1.test.Benchmark.is_abstract": true, + "tf.compat.v1.test.Benchmark.report_benchmark": true, + "tf.compat.v1.test.Benchmark.run_op_benchmark": true, + "tf.compat.v1.test.StubOutForTesting": false, + "tf.compat.v1.test.StubOutForTesting.CleanUp": true, + "tf.compat.v1.test.StubOutForTesting.Set": true, + "tf.compat.v1.test.StubOutForTesting.SmartSet": true, + "tf.compat.v1.test.StubOutForTesting.SmartUnsetAll": true, + "tf.compat.v1.test.StubOutForTesting.UnsetAll": true, + "tf.compat.v1.test.StubOutForTesting.__enter__": true, + "tf.compat.v1.test.StubOutForTesting.__eq__": true, + "tf.compat.v1.test.StubOutForTesting.__exit__": true, + "tf.compat.v1.test.StubOutForTesting.__ge__": true, + "tf.compat.v1.test.StubOutForTesting.__gt__": true, + "tf.compat.v1.test.StubOutForTesting.__init__": true, + "tf.compat.v1.test.StubOutForTesting.__le__": true, + "tf.compat.v1.test.StubOutForTesting.__lt__": true, + "tf.compat.v1.test.StubOutForTesting.__ne__": true, + "tf.compat.v1.test.StubOutForTesting.__new__": true, + "tf.compat.v1.test.TestCase": false, + "tf.compat.v1.test.TestCase.__call__": true, + "tf.compat.v1.test.TestCase.__eq__": true, + "tf.compat.v1.test.TestCase.__ge__": true, + "tf.compat.v1.test.TestCase.__gt__": true, + "tf.compat.v1.test.TestCase.__init__": true, + "tf.compat.v1.test.TestCase.__le__": true, + "tf.compat.v1.test.TestCase.__lt__": true, + "tf.compat.v1.test.TestCase.__ne__": true, + "tf.compat.v1.test.TestCase.__new__": true, + "tf.compat.v1.test.TestCase.addClassCleanup": true, + "tf.compat.v1.test.TestCase.addCleanup": true, + "tf.compat.v1.test.TestCase.addTypeEqualityFunc": true, + "tf.compat.v1.test.TestCase.assertAllClose": true, + "tf.compat.v1.test.TestCase.assertAllCloseAccordingToType": true, + "tf.compat.v1.test.TestCase.assertAllEqual": true, + "tf.compat.v1.test.TestCase.assertAllGreater": true, + "tf.compat.v1.test.TestCase.assertAllGreaterEqual": true, + 
"tf.compat.v1.test.TestCase.assertAllInRange": true, + "tf.compat.v1.test.TestCase.assertAllInSet": true, + "tf.compat.v1.test.TestCase.assertAllLess": true, + "tf.compat.v1.test.TestCase.assertAllLessEqual": true, + "tf.compat.v1.test.TestCase.assertAlmostEqual": true, + "tf.compat.v1.test.TestCase.assertAlmostEquals": true, + "tf.compat.v1.test.TestCase.assertArrayNear": true, + "tf.compat.v1.test.TestCase.assertBetween": true, + "tf.compat.v1.test.TestCase.assertCommandFails": true, + "tf.compat.v1.test.TestCase.assertCommandSucceeds": true, + "tf.compat.v1.test.TestCase.assertContainsExactSubsequence": true, + "tf.compat.v1.test.TestCase.assertContainsInOrder": true, + "tf.compat.v1.test.TestCase.assertContainsSubsequence": true, + "tf.compat.v1.test.TestCase.assertContainsSubset": true, + "tf.compat.v1.test.TestCase.assertCountEqual": true, + "tf.compat.v1.test.TestCase.assertDTypeEqual": true, + "tf.compat.v1.test.TestCase.assertDeviceEqual": true, + "tf.compat.v1.test.TestCase.assertDictContainsSubset": true, + "tf.compat.v1.test.TestCase.assertDictEqual": true, + "tf.compat.v1.test.TestCase.assertEmpty": true, + "tf.compat.v1.test.TestCase.assertEndsWith": true, + "tf.compat.v1.test.TestCase.assertEqual": true, + "tf.compat.v1.test.TestCase.assertEquals": true, + "tf.compat.v1.test.TestCase.assertFalse": true, + "tf.compat.v1.test.TestCase.assertGreater": true, + "tf.compat.v1.test.TestCase.assertGreaterEqual": true, + "tf.compat.v1.test.TestCase.assertIn": true, + "tf.compat.v1.test.TestCase.assertIs": true, + "tf.compat.v1.test.TestCase.assertIsInstance": true, + "tf.compat.v1.test.TestCase.assertIsNone": true, + "tf.compat.v1.test.TestCase.assertIsNot": true, + "tf.compat.v1.test.TestCase.assertIsNotNone": true, + "tf.compat.v1.test.TestCase.assertItemsEqual": true, + "tf.compat.v1.test.TestCase.assertJsonEqual": true, + "tf.compat.v1.test.TestCase.assertLen": true, + "tf.compat.v1.test.TestCase.assertLess": true, + "tf.compat.v1.test.TestCase.assertLessEqual": true, + "tf.compat.v1.test.TestCase.assertListEqual": true, + "tf.compat.v1.test.TestCase.assertLogs": true, + "tf.compat.v1.test.TestCase.assertMultiLineEqual": true, + "tf.compat.v1.test.TestCase.assertNDArrayNear": true, + "tf.compat.v1.test.TestCase.assertNear": true, + "tf.compat.v1.test.TestCase.assertNoCommonElements": true, + "tf.compat.v1.test.TestCase.assertNotAllClose": true, + "tf.compat.v1.test.TestCase.assertNotAllEqual": true, + "tf.compat.v1.test.TestCase.assertNotAlmostEqual": true, + "tf.compat.v1.test.TestCase.assertNotAlmostEquals": true, + "tf.compat.v1.test.TestCase.assertNotEmpty": true, + "tf.compat.v1.test.TestCase.assertNotEndsWith": true, + "tf.compat.v1.test.TestCase.assertNotEqual": true, + "tf.compat.v1.test.TestCase.assertNotEquals": true, + "tf.compat.v1.test.TestCase.assertNotIn": true, + "tf.compat.v1.test.TestCase.assertNotIsInstance": true, + "tf.compat.v1.test.TestCase.assertNotRegex": true, + "tf.compat.v1.test.TestCase.assertNotRegexpMatches": true, + "tf.compat.v1.test.TestCase.assertNotStartsWith": true, + "tf.compat.v1.test.TestCase.assertProtoEquals": true, + "tf.compat.v1.test.TestCase.assertProtoEqualsVersion": true, + "tf.compat.v1.test.TestCase.assertRaises": true, + "tf.compat.v1.test.TestCase.assertRaisesOpError": true, + "tf.compat.v1.test.TestCase.assertRaisesRegex": true, + "tf.compat.v1.test.TestCase.assertRaisesRegexp": true, + "tf.compat.v1.test.TestCase.assertRaisesWithLiteralMatch": true, + "tf.compat.v1.test.TestCase.assertRaisesWithPredicateMatch": true, + 
"tf.compat.v1.test.TestCase.assertRegex": true, + "tf.compat.v1.test.TestCase.assertRegexMatch": true, + "tf.compat.v1.test.TestCase.assertRegexpMatches": true, + "tf.compat.v1.test.TestCase.assertSameElements": true, + "tf.compat.v1.test.TestCase.assertSameStructure": true, + "tf.compat.v1.test.TestCase.assertSequenceAlmostEqual": true, + "tf.compat.v1.test.TestCase.assertSequenceEqual": true, + "tf.compat.v1.test.TestCase.assertSequenceStartsWith": true, + "tf.compat.v1.test.TestCase.assertSetEqual": true, + "tf.compat.v1.test.TestCase.assertShapeEqual": true, + "tf.compat.v1.test.TestCase.assertStartsWith": true, + "tf.compat.v1.test.TestCase.assertTotallyOrdered": true, + "tf.compat.v1.test.TestCase.assertTrue": true, + "tf.compat.v1.test.TestCase.assertTupleEqual": true, + "tf.compat.v1.test.TestCase.assertUrlEqual": true, + "tf.compat.v1.test.TestCase.assertWarns": true, + "tf.compat.v1.test.TestCase.assertWarnsRegex": true, + "tf.compat.v1.test.TestCase.assert_": true, + "tf.compat.v1.test.TestCase.cached_session": true, + "tf.compat.v1.test.TestCase.captureWritesToStream": true, + "tf.compat.v1.test.TestCase.checkedThread": true, + "tf.compat.v1.test.TestCase.countTestCases": true, + "tf.compat.v1.test.TestCase.create_tempdir": true, + "tf.compat.v1.test.TestCase.create_tempfile": true, + "tf.compat.v1.test.TestCase.debug": true, + "tf.compat.v1.test.TestCase.defaultTestResult": true, + "tf.compat.v1.test.TestCase.doClassCleanups": true, + "tf.compat.v1.test.TestCase.doCleanups": true, + "tf.compat.v1.test.TestCase.enter_context": true, + "tf.compat.v1.test.TestCase.evaluate": true, + "tf.compat.v1.test.TestCase.fail": true, + "tf.compat.v1.test.TestCase.failIf": true, + "tf.compat.v1.test.TestCase.failIfAlmostEqual": true, + "tf.compat.v1.test.TestCase.failIfEqual": true, + "tf.compat.v1.test.TestCase.failUnless": true, + "tf.compat.v1.test.TestCase.failUnlessAlmostEqual": true, + "tf.compat.v1.test.TestCase.failUnlessEqual": true, + "tf.compat.v1.test.TestCase.failUnlessRaises": true, + "tf.compat.v1.test.TestCase.failureException": false, + "tf.compat.v1.test.TestCase.failureException.__eq__": true, + "tf.compat.v1.test.TestCase.failureException.__ge__": true, + "tf.compat.v1.test.TestCase.failureException.__gt__": true, + "tf.compat.v1.test.TestCase.failureException.__init__": true, + "tf.compat.v1.test.TestCase.failureException.__le__": true, + "tf.compat.v1.test.TestCase.failureException.__lt__": true, + "tf.compat.v1.test.TestCase.failureException.__ne__": true, + "tf.compat.v1.test.TestCase.failureException.__new__": true, + "tf.compat.v1.test.TestCase.failureException.args": true, + "tf.compat.v1.test.TestCase.failureException.with_traceback": true, + "tf.compat.v1.test.TestCase.get_temp_dir": true, + "tf.compat.v1.test.TestCase.id": true, + "tf.compat.v1.test.TestCase.longMessage": true, + "tf.compat.v1.test.TestCase.maxDiff": true, + "tf.compat.v1.test.TestCase.run": true, + "tf.compat.v1.test.TestCase.session": true, + "tf.compat.v1.test.TestCase.setUp": true, + "tf.compat.v1.test.TestCase.setUpClass": true, + "tf.compat.v1.test.TestCase.shortDescription": true, + "tf.compat.v1.test.TestCase.skipTest": true, + "tf.compat.v1.test.TestCase.subTest": true, + "tf.compat.v1.test.TestCase.tearDown": true, + "tf.compat.v1.test.TestCase.tearDownClass": true, + "tf.compat.v1.test.TestCase.tempfile_cleanup": true, + "tf.compat.v1.test.TestCase.test_session": true, + "tf.compat.v1.test.assert_equal_graph_def": false, + "tf.compat.v1.test.benchmark_config": false, + 
"tf.compat.v1.test.compute_gradient": false, + "tf.compat.v1.test.compute_gradient_error": false, + "tf.compat.v1.test.create_local_cluster": false, + "tf.compat.v1.test.get_temp_dir": false, + "tf.compat.v1.test.gpu_device_name": false, + "tf.compat.v1.test.is_built_with_cuda": false, + "tf.compat.v1.test.is_built_with_gpu_support": false, + "tf.compat.v1.test.is_built_with_rocm": false, + "tf.compat.v1.test.is_built_with_xla": false, + "tf.compat.v1.test.is_gpu_available": false, + "tf.compat.v1.test.main": false, + "tf.compat.v1.test.test_src_dir_path": false, + "tf.compat.v1.tile": false, + "tf.compat.v1.timestamp": false, + "tf.compat.v1.to_bfloat16": false, + "tf.compat.v1.to_complex128": false, + "tf.compat.v1.to_complex64": false, + "tf.compat.v1.to_double": false, + "tf.compat.v1.to_float": false, + "tf.compat.v1.to_int32": false, + "tf.compat.v1.to_int64": false, + "tf.compat.v1.tpu": false, + "tf.compat.v1.tpu.CrossShardOptimizer": false, + "tf.compat.v1.tpu.CrossShardOptimizer.GATE_GRAPH": true, + "tf.compat.v1.tpu.CrossShardOptimizer.GATE_NONE": true, + "tf.compat.v1.tpu.CrossShardOptimizer.GATE_OP": true, + "tf.compat.v1.tpu.CrossShardOptimizer.__eq__": true, + "tf.compat.v1.tpu.CrossShardOptimizer.__ge__": true, + "tf.compat.v1.tpu.CrossShardOptimizer.__gt__": true, + "tf.compat.v1.tpu.CrossShardOptimizer.__init__": true, + "tf.compat.v1.tpu.CrossShardOptimizer.__le__": true, + "tf.compat.v1.tpu.CrossShardOptimizer.__lt__": true, + "tf.compat.v1.tpu.CrossShardOptimizer.__ne__": true, + "tf.compat.v1.tpu.CrossShardOptimizer.__new__": true, + "tf.compat.v1.tpu.CrossShardOptimizer.apply_gradients": true, + "tf.compat.v1.tpu.CrossShardOptimizer.compute_gradients": true, + "tf.compat.v1.tpu.CrossShardOptimizer.get_name": true, + "tf.compat.v1.tpu.CrossShardOptimizer.get_slot": true, + "tf.compat.v1.tpu.CrossShardOptimizer.get_slot_names": true, + "tf.compat.v1.tpu.CrossShardOptimizer.minimize": true, + "tf.compat.v1.tpu.CrossShardOptimizer.variables": true, + "tf.compat.v1.tpu.PaddingSpec": false, + "tf.compat.v1.tpu.PaddingSpec.AUTO": true, + "tf.compat.v1.tpu.PaddingSpec.POWER_OF_TWO": true, + "tf.compat.v1.tpu.batch_parallel": false, + "tf.compat.v1.tpu.bfloat16_scope": false, + "tf.compat.v1.tpu.core": false, + "tf.compat.v1.tpu.cross_replica_sum": false, + "tf.compat.v1.tpu.experimental": false, + "tf.compat.v1.tpu.experimental.AdagradParameters": false, + "tf.compat.v1.tpu.experimental.AdagradParameters.__eq__": true, + "tf.compat.v1.tpu.experimental.AdagradParameters.__ge__": true, + "tf.compat.v1.tpu.experimental.AdagradParameters.__gt__": true, + "tf.compat.v1.tpu.experimental.AdagradParameters.__init__": true, + "tf.compat.v1.tpu.experimental.AdagradParameters.__le__": true, + "tf.compat.v1.tpu.experimental.AdagradParameters.__lt__": true, + "tf.compat.v1.tpu.experimental.AdagradParameters.__ne__": true, + "tf.compat.v1.tpu.experimental.AdagradParameters.__new__": true, + "tf.compat.v1.tpu.experimental.AdamParameters": false, + "tf.compat.v1.tpu.experimental.AdamParameters.__eq__": true, + "tf.compat.v1.tpu.experimental.AdamParameters.__ge__": true, + "tf.compat.v1.tpu.experimental.AdamParameters.__gt__": true, + "tf.compat.v1.tpu.experimental.AdamParameters.__init__": true, + "tf.compat.v1.tpu.experimental.AdamParameters.__le__": true, + "tf.compat.v1.tpu.experimental.AdamParameters.__lt__": true, + "tf.compat.v1.tpu.experimental.AdamParameters.__ne__": true, + "tf.compat.v1.tpu.experimental.AdamParameters.__new__": true, + 
"tf.compat.v1.tpu.experimental.DeviceAssignment": false, + "tf.compat.v1.tpu.experimental.DeviceAssignment.__eq__": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.__ge__": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.__gt__": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.__init__": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.__le__": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.__lt__": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.__ne__": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.__new__": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.build": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.coordinates": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.core_assignment": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.host_device": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.lookup_replicas": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.num_cores_per_replica": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.num_replicas": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.topology": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.tpu_device": true, + "tf.compat.v1.tpu.experimental.DeviceAssignment.tpu_ordinal": true, + "tf.compat.v1.tpu.experimental.FtrlParameters": false, + "tf.compat.v1.tpu.experimental.FtrlParameters.__eq__": true, + "tf.compat.v1.tpu.experimental.FtrlParameters.__ge__": true, + "tf.compat.v1.tpu.experimental.FtrlParameters.__gt__": true, + "tf.compat.v1.tpu.experimental.FtrlParameters.__init__": true, + "tf.compat.v1.tpu.experimental.FtrlParameters.__le__": true, + "tf.compat.v1.tpu.experimental.FtrlParameters.__lt__": true, + "tf.compat.v1.tpu.experimental.FtrlParameters.__ne__": true, + "tf.compat.v1.tpu.experimental.FtrlParameters.__new__": true, + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters": false, + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__eq__": true, + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__ge__": true, + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__gt__": true, + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__init__": true, + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__le__": true, + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__lt__": true, + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__ne__": true, + "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters.__new__": true, + "tf.compat.v1.tpu.experimental.embedding_column": false, + "tf.compat.v1.tpu.experimental.initialize_tpu_system": false, + "tf.compat.v1.tpu.experimental.shared_embedding_columns": false, + "tf.compat.v1.tpu.experimental.shutdown_tpu_system": false, + "tf.compat.v1.tpu.initialize_system": false, + "tf.compat.v1.tpu.outside_compilation": false, + "tf.compat.v1.tpu.replicate": false, + "tf.compat.v1.tpu.rewrite": false, + "tf.compat.v1.tpu.shard": false, + "tf.compat.v1.tpu.shutdown_system": false, + "tf.compat.v1.trace": false, + "tf.compat.v1.train": false, + "tf.compat.v1.train.AdadeltaOptimizer": false, + "tf.compat.v1.train.AdadeltaOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.AdadeltaOptimizer.GATE_NONE": true, + "tf.compat.v1.train.AdadeltaOptimizer.GATE_OP": true, + "tf.compat.v1.train.AdadeltaOptimizer.__eq__": true, + "tf.compat.v1.train.AdadeltaOptimizer.__ge__": true, + "tf.compat.v1.train.AdadeltaOptimizer.__gt__": 
true, + "tf.compat.v1.train.AdadeltaOptimizer.__init__": true, + "tf.compat.v1.train.AdadeltaOptimizer.__le__": true, + "tf.compat.v1.train.AdadeltaOptimizer.__lt__": true, + "tf.compat.v1.train.AdadeltaOptimizer.__ne__": true, + "tf.compat.v1.train.AdadeltaOptimizer.__new__": true, + "tf.compat.v1.train.AdadeltaOptimizer.apply_gradients": true, + "tf.compat.v1.train.AdadeltaOptimizer.compute_gradients": true, + "tf.compat.v1.train.AdadeltaOptimizer.get_name": true, + "tf.compat.v1.train.AdadeltaOptimizer.get_slot": true, + "tf.compat.v1.train.AdadeltaOptimizer.get_slot_names": true, + "tf.compat.v1.train.AdadeltaOptimizer.minimize": true, + "tf.compat.v1.train.AdadeltaOptimizer.variables": true, + "tf.compat.v1.train.AdagradDAOptimizer": false, + "tf.compat.v1.train.AdagradDAOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.AdagradDAOptimizer.GATE_NONE": true, + "tf.compat.v1.train.AdagradDAOptimizer.GATE_OP": true, + "tf.compat.v1.train.AdagradDAOptimizer.__eq__": true, + "tf.compat.v1.train.AdagradDAOptimizer.__ge__": true, + "tf.compat.v1.train.AdagradDAOptimizer.__gt__": true, + "tf.compat.v1.train.AdagradDAOptimizer.__init__": true, + "tf.compat.v1.train.AdagradDAOptimizer.__le__": true, + "tf.compat.v1.train.AdagradDAOptimizer.__lt__": true, + "tf.compat.v1.train.AdagradDAOptimizer.__ne__": true, + "tf.compat.v1.train.AdagradDAOptimizer.__new__": true, + "tf.compat.v1.train.AdagradDAOptimizer.apply_gradients": true, + "tf.compat.v1.train.AdagradDAOptimizer.compute_gradients": true, + "tf.compat.v1.train.AdagradDAOptimizer.get_name": true, + "tf.compat.v1.train.AdagradDAOptimizer.get_slot": true, + "tf.compat.v1.train.AdagradDAOptimizer.get_slot_names": true, + "tf.compat.v1.train.AdagradDAOptimizer.minimize": true, + "tf.compat.v1.train.AdagradDAOptimizer.variables": true, + "tf.compat.v1.train.AdagradOptimizer": false, + "tf.compat.v1.train.AdagradOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.AdagradOptimizer.GATE_NONE": true, + "tf.compat.v1.train.AdagradOptimizer.GATE_OP": true, + "tf.compat.v1.train.AdagradOptimizer.__eq__": true, + "tf.compat.v1.train.AdagradOptimizer.__ge__": true, + "tf.compat.v1.train.AdagradOptimizer.__gt__": true, + "tf.compat.v1.train.AdagradOptimizer.__init__": true, + "tf.compat.v1.train.AdagradOptimizer.__le__": true, + "tf.compat.v1.train.AdagradOptimizer.__lt__": true, + "tf.compat.v1.train.AdagradOptimizer.__ne__": true, + "tf.compat.v1.train.AdagradOptimizer.__new__": true, + "tf.compat.v1.train.AdagradOptimizer.apply_gradients": true, + "tf.compat.v1.train.AdagradOptimizer.compute_gradients": true, + "tf.compat.v1.train.AdagradOptimizer.get_name": true, + "tf.compat.v1.train.AdagradOptimizer.get_slot": true, + "tf.compat.v1.train.AdagradOptimizer.get_slot_names": true, + "tf.compat.v1.train.AdagradOptimizer.minimize": true, + "tf.compat.v1.train.AdagradOptimizer.variables": true, + "tf.compat.v1.train.AdamOptimizer": false, + "tf.compat.v1.train.AdamOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.AdamOptimizer.GATE_NONE": true, + "tf.compat.v1.train.AdamOptimizer.GATE_OP": true, + "tf.compat.v1.train.AdamOptimizer.__eq__": true, + "tf.compat.v1.train.AdamOptimizer.__ge__": true, + "tf.compat.v1.train.AdamOptimizer.__gt__": true, + "tf.compat.v1.train.AdamOptimizer.__init__": true, + "tf.compat.v1.train.AdamOptimizer.__le__": true, + "tf.compat.v1.train.AdamOptimizer.__lt__": true, + "tf.compat.v1.train.AdamOptimizer.__ne__": true, + "tf.compat.v1.train.AdamOptimizer.__new__": true, + "tf.compat.v1.train.AdamOptimizer.apply_gradients": 
true, + "tf.compat.v1.train.AdamOptimizer.compute_gradients": true, + "tf.compat.v1.train.AdamOptimizer.get_name": true, + "tf.compat.v1.train.AdamOptimizer.get_slot": true, + "tf.compat.v1.train.AdamOptimizer.get_slot_names": true, + "tf.compat.v1.train.AdamOptimizer.minimize": true, + "tf.compat.v1.train.AdamOptimizer.variables": true, + "tf.compat.v1.train.BytesList": false, + "tf.compat.v1.train.BytesList.ByteSize": true, + "tf.compat.v1.train.BytesList.Clear": true, + "tf.compat.v1.train.BytesList.ClearExtension": true, + "tf.compat.v1.train.BytesList.ClearField": true, + "tf.compat.v1.train.BytesList.CopyFrom": true, + "tf.compat.v1.train.BytesList.DESCRIPTOR": true, + "tf.compat.v1.train.BytesList.DiscardUnknownFields": true, + "tf.compat.v1.train.BytesList.Extensions": true, + "tf.compat.v1.train.BytesList.FindInitializationErrors": true, + "tf.compat.v1.train.BytesList.FromString": true, + "tf.compat.v1.train.BytesList.HasExtension": true, + "tf.compat.v1.train.BytesList.HasField": true, + "tf.compat.v1.train.BytesList.IsInitialized": true, + "tf.compat.v1.train.BytesList.ListFields": true, + "tf.compat.v1.train.BytesList.MergeFrom": true, + "tf.compat.v1.train.BytesList.MergeFromString": true, + "tf.compat.v1.train.BytesList.ParseFromString": true, + "tf.compat.v1.train.BytesList.RegisterExtension": true, + "tf.compat.v1.train.BytesList.SerializePartialToString": true, + "tf.compat.v1.train.BytesList.SerializeToString": true, + "tf.compat.v1.train.BytesList.SetInParent": true, + "tf.compat.v1.train.BytesList.UnknownFields": true, + "tf.compat.v1.train.BytesList.WhichOneof": true, + "tf.compat.v1.train.BytesList.__eq__": true, + "tf.compat.v1.train.BytesList.__ge__": true, + "tf.compat.v1.train.BytesList.__gt__": true, + "tf.compat.v1.train.BytesList.__init__": true, + "tf.compat.v1.train.BytesList.__le__": true, + "tf.compat.v1.train.BytesList.__lt__": true, + "tf.compat.v1.train.BytesList.__ne__": true, + "tf.compat.v1.train.BytesList.__new__": true, + "tf.compat.v1.train.BytesList.value": true, + "tf.compat.v1.train.Checkpoint": false, + "tf.compat.v1.train.Checkpoint.__eq__": true, + "tf.compat.v1.train.Checkpoint.__ge__": true, + "tf.compat.v1.train.Checkpoint.__gt__": true, + "tf.compat.v1.train.Checkpoint.__init__": true, + "tf.compat.v1.train.Checkpoint.__le__": true, + "tf.compat.v1.train.Checkpoint.__lt__": true, + "tf.compat.v1.train.Checkpoint.__ne__": true, + "tf.compat.v1.train.Checkpoint.__new__": true, + "tf.compat.v1.train.Checkpoint.restore": true, + "tf.compat.v1.train.Checkpoint.save": true, + "tf.compat.v1.train.Checkpoint.save_counter": true, + "tf.compat.v1.train.Checkpoint.write": true, + "tf.compat.v1.train.CheckpointManager": false, + "tf.compat.v1.train.CheckpointManager.__eq__": true, + "tf.compat.v1.train.CheckpointManager.__ge__": true, + "tf.compat.v1.train.CheckpointManager.__gt__": true, + "tf.compat.v1.train.CheckpointManager.__init__": true, + "tf.compat.v1.train.CheckpointManager.__le__": true, + "tf.compat.v1.train.CheckpointManager.__lt__": true, + "tf.compat.v1.train.CheckpointManager.__ne__": true, + "tf.compat.v1.train.CheckpointManager.__new__": true, + "tf.compat.v1.train.CheckpointManager.checkpoint": true, + "tf.compat.v1.train.CheckpointManager.checkpoint_interval": true, + "tf.compat.v1.train.CheckpointManager.checkpoints": true, + "tf.compat.v1.train.CheckpointManager.directory": true, + "tf.compat.v1.train.CheckpointManager.latest_checkpoint": true, + "tf.compat.v1.train.CheckpointManager.restore_or_initialize": true, + 
"tf.compat.v1.train.CheckpointManager.save": true, + "tf.compat.v1.train.CheckpointSaverHook": false, + "tf.compat.v1.train.CheckpointSaverHook.__eq__": true, + "tf.compat.v1.train.CheckpointSaverHook.__ge__": true, + "tf.compat.v1.train.CheckpointSaverHook.__gt__": true, + "tf.compat.v1.train.CheckpointSaverHook.__init__": true, + "tf.compat.v1.train.CheckpointSaverHook.__le__": true, + "tf.compat.v1.train.CheckpointSaverHook.__lt__": true, + "tf.compat.v1.train.CheckpointSaverHook.__ne__": true, + "tf.compat.v1.train.CheckpointSaverHook.__new__": true, + "tf.compat.v1.train.CheckpointSaverHook.after_create_session": true, + "tf.compat.v1.train.CheckpointSaverHook.after_run": true, + "tf.compat.v1.train.CheckpointSaverHook.before_run": true, + "tf.compat.v1.train.CheckpointSaverHook.begin": true, + "tf.compat.v1.train.CheckpointSaverHook.end": true, + "tf.compat.v1.train.CheckpointSaverListener": false, + "tf.compat.v1.train.CheckpointSaverListener.__eq__": true, + "tf.compat.v1.train.CheckpointSaverListener.__ge__": true, + "tf.compat.v1.train.CheckpointSaverListener.__gt__": true, + "tf.compat.v1.train.CheckpointSaverListener.__init__": true, + "tf.compat.v1.train.CheckpointSaverListener.__le__": true, + "tf.compat.v1.train.CheckpointSaverListener.__lt__": true, + "tf.compat.v1.train.CheckpointSaverListener.__ne__": true, + "tf.compat.v1.train.CheckpointSaverListener.__new__": true, + "tf.compat.v1.train.CheckpointSaverListener.after_save": true, + "tf.compat.v1.train.CheckpointSaverListener.before_save": true, + "tf.compat.v1.train.CheckpointSaverListener.begin": true, + "tf.compat.v1.train.CheckpointSaverListener.end": true, + "tf.compat.v1.train.ChiefSessionCreator": false, + "tf.compat.v1.train.ChiefSessionCreator.__eq__": true, + "tf.compat.v1.train.ChiefSessionCreator.__ge__": true, + "tf.compat.v1.train.ChiefSessionCreator.__gt__": true, + "tf.compat.v1.train.ChiefSessionCreator.__init__": true, + "tf.compat.v1.train.ChiefSessionCreator.__le__": true, + "tf.compat.v1.train.ChiefSessionCreator.__lt__": true, + "tf.compat.v1.train.ChiefSessionCreator.__ne__": true, + "tf.compat.v1.train.ChiefSessionCreator.__new__": true, + "tf.compat.v1.train.ChiefSessionCreator.create_session": true, + "tf.compat.v1.train.ClusterDef": false, + "tf.compat.v1.train.ClusterDef.ByteSize": true, + "tf.compat.v1.train.ClusterDef.Clear": true, + "tf.compat.v1.train.ClusterDef.ClearExtension": true, + "tf.compat.v1.train.ClusterDef.ClearField": true, + "tf.compat.v1.train.ClusterDef.CopyFrom": true, + "tf.compat.v1.train.ClusterDef.DESCRIPTOR": true, + "tf.compat.v1.train.ClusterDef.DiscardUnknownFields": true, + "tf.compat.v1.train.ClusterDef.Extensions": true, + "tf.compat.v1.train.ClusterDef.FindInitializationErrors": true, + "tf.compat.v1.train.ClusterDef.FromString": true, + "tf.compat.v1.train.ClusterDef.HasExtension": true, + "tf.compat.v1.train.ClusterDef.HasField": true, + "tf.compat.v1.train.ClusterDef.IsInitialized": true, + "tf.compat.v1.train.ClusterDef.ListFields": true, + "tf.compat.v1.train.ClusterDef.MergeFrom": true, + "tf.compat.v1.train.ClusterDef.MergeFromString": true, + "tf.compat.v1.train.ClusterDef.ParseFromString": true, + "tf.compat.v1.train.ClusterDef.RegisterExtension": true, + "tf.compat.v1.train.ClusterDef.SerializePartialToString": true, + "tf.compat.v1.train.ClusterDef.SerializeToString": true, + "tf.compat.v1.train.ClusterDef.SetInParent": true, + "tf.compat.v1.train.ClusterDef.UnknownFields": true, + "tf.compat.v1.train.ClusterDef.WhichOneof": true, + 
"tf.compat.v1.train.ClusterDef.__eq__": true, + "tf.compat.v1.train.ClusterDef.__ge__": true, + "tf.compat.v1.train.ClusterDef.__gt__": true, + "tf.compat.v1.train.ClusterDef.__init__": true, + "tf.compat.v1.train.ClusterDef.__le__": true, + "tf.compat.v1.train.ClusterDef.__lt__": true, + "tf.compat.v1.train.ClusterDef.__ne__": true, + "tf.compat.v1.train.ClusterDef.__new__": true, + "tf.compat.v1.train.ClusterDef.job": true, + "tf.compat.v1.train.ClusterSpec": false, + "tf.compat.v1.train.ClusterSpec.__bool__": true, + "tf.compat.v1.train.ClusterSpec.__eq__": true, + "tf.compat.v1.train.ClusterSpec.__ge__": true, + "tf.compat.v1.train.ClusterSpec.__gt__": true, + "tf.compat.v1.train.ClusterSpec.__init__": true, + "tf.compat.v1.train.ClusterSpec.__le__": true, + "tf.compat.v1.train.ClusterSpec.__lt__": true, + "tf.compat.v1.train.ClusterSpec.__ne__": true, + "tf.compat.v1.train.ClusterSpec.__new__": true, + "tf.compat.v1.train.ClusterSpec.__nonzero__": true, + "tf.compat.v1.train.ClusterSpec.as_cluster_def": true, + "tf.compat.v1.train.ClusterSpec.as_dict": true, + "tf.compat.v1.train.ClusterSpec.job_tasks": true, + "tf.compat.v1.train.ClusterSpec.jobs": true, + "tf.compat.v1.train.ClusterSpec.num_tasks": true, + "tf.compat.v1.train.ClusterSpec.task_address": true, + "tf.compat.v1.train.ClusterSpec.task_indices": true, + "tf.compat.v1.train.Coordinator": false, + "tf.compat.v1.train.Coordinator.__eq__": true, + "tf.compat.v1.train.Coordinator.__ge__": true, + "tf.compat.v1.train.Coordinator.__gt__": true, + "tf.compat.v1.train.Coordinator.__init__": true, + "tf.compat.v1.train.Coordinator.__le__": true, + "tf.compat.v1.train.Coordinator.__lt__": true, + "tf.compat.v1.train.Coordinator.__ne__": true, + "tf.compat.v1.train.Coordinator.__new__": true, + "tf.compat.v1.train.Coordinator.clear_stop": true, + "tf.compat.v1.train.Coordinator.join": true, + "tf.compat.v1.train.Coordinator.joined": true, + "tf.compat.v1.train.Coordinator.raise_requested_exception": true, + "tf.compat.v1.train.Coordinator.register_thread": true, + "tf.compat.v1.train.Coordinator.request_stop": true, + "tf.compat.v1.train.Coordinator.should_stop": true, + "tf.compat.v1.train.Coordinator.stop_on_exception": true, + "tf.compat.v1.train.Coordinator.wait_for_stop": true, + "tf.compat.v1.train.Example": false, + "tf.compat.v1.train.Example.ByteSize": true, + "tf.compat.v1.train.Example.Clear": true, + "tf.compat.v1.train.Example.ClearExtension": true, + "tf.compat.v1.train.Example.ClearField": true, + "tf.compat.v1.train.Example.CopyFrom": true, + "tf.compat.v1.train.Example.DESCRIPTOR": true, + "tf.compat.v1.train.Example.DiscardUnknownFields": true, + "tf.compat.v1.train.Example.Extensions": true, + "tf.compat.v1.train.Example.FindInitializationErrors": true, + "tf.compat.v1.train.Example.FromString": true, + "tf.compat.v1.train.Example.HasExtension": true, + "tf.compat.v1.train.Example.HasField": true, + "tf.compat.v1.train.Example.IsInitialized": true, + "tf.compat.v1.train.Example.ListFields": true, + "tf.compat.v1.train.Example.MergeFrom": true, + "tf.compat.v1.train.Example.MergeFromString": true, + "tf.compat.v1.train.Example.ParseFromString": true, + "tf.compat.v1.train.Example.RegisterExtension": true, + "tf.compat.v1.train.Example.SerializePartialToString": true, + "tf.compat.v1.train.Example.SerializeToString": true, + "tf.compat.v1.train.Example.SetInParent": true, + "tf.compat.v1.train.Example.UnknownFields": true, + "tf.compat.v1.train.Example.WhichOneof": true, + "tf.compat.v1.train.Example.__eq__": true, + 
"tf.compat.v1.train.Example.__ge__": true, + "tf.compat.v1.train.Example.__gt__": true, + "tf.compat.v1.train.Example.__init__": true, + "tf.compat.v1.train.Example.__le__": true, + "tf.compat.v1.train.Example.__lt__": true, + "tf.compat.v1.train.Example.__ne__": true, + "tf.compat.v1.train.Example.__new__": true, + "tf.compat.v1.train.Example.features": true, + "tf.compat.v1.train.ExponentialMovingAverage": false, + "tf.compat.v1.train.ExponentialMovingAverage.__eq__": true, + "tf.compat.v1.train.ExponentialMovingAverage.__ge__": true, + "tf.compat.v1.train.ExponentialMovingAverage.__gt__": true, + "tf.compat.v1.train.ExponentialMovingAverage.__init__": true, + "tf.compat.v1.train.ExponentialMovingAverage.__le__": true, + "tf.compat.v1.train.ExponentialMovingAverage.__lt__": true, + "tf.compat.v1.train.ExponentialMovingAverage.__ne__": true, + "tf.compat.v1.train.ExponentialMovingAverage.__new__": true, + "tf.compat.v1.train.ExponentialMovingAverage.apply": true, + "tf.compat.v1.train.ExponentialMovingAverage.average": true, + "tf.compat.v1.train.ExponentialMovingAverage.average_name": true, + "tf.compat.v1.train.ExponentialMovingAverage.name": true, + "tf.compat.v1.train.ExponentialMovingAverage.variables_to_restore": true, + "tf.compat.v1.train.Feature": false, + "tf.compat.v1.train.Feature.ByteSize": true, + "tf.compat.v1.train.Feature.Clear": true, + "tf.compat.v1.train.Feature.ClearExtension": true, + "tf.compat.v1.train.Feature.ClearField": true, + "tf.compat.v1.train.Feature.CopyFrom": true, + "tf.compat.v1.train.Feature.DESCRIPTOR": true, + "tf.compat.v1.train.Feature.DiscardUnknownFields": true, + "tf.compat.v1.train.Feature.Extensions": true, + "tf.compat.v1.train.Feature.FindInitializationErrors": true, + "tf.compat.v1.train.Feature.FromString": true, + "tf.compat.v1.train.Feature.HasExtension": true, + "tf.compat.v1.train.Feature.HasField": true, + "tf.compat.v1.train.Feature.IsInitialized": true, + "tf.compat.v1.train.Feature.ListFields": true, + "tf.compat.v1.train.Feature.MergeFrom": true, + "tf.compat.v1.train.Feature.MergeFromString": true, + "tf.compat.v1.train.Feature.ParseFromString": true, + "tf.compat.v1.train.Feature.RegisterExtension": true, + "tf.compat.v1.train.Feature.SerializePartialToString": true, + "tf.compat.v1.train.Feature.SerializeToString": true, + "tf.compat.v1.train.Feature.SetInParent": true, + "tf.compat.v1.train.Feature.UnknownFields": true, + "tf.compat.v1.train.Feature.WhichOneof": true, + "tf.compat.v1.train.Feature.__eq__": true, + "tf.compat.v1.train.Feature.__ge__": true, + "tf.compat.v1.train.Feature.__gt__": true, + "tf.compat.v1.train.Feature.__init__": true, + "tf.compat.v1.train.Feature.__le__": true, + "tf.compat.v1.train.Feature.__lt__": true, + "tf.compat.v1.train.Feature.__ne__": true, + "tf.compat.v1.train.Feature.__new__": true, + "tf.compat.v1.train.Feature.bytes_list": true, + "tf.compat.v1.train.Feature.float_list": true, + "tf.compat.v1.train.Feature.int64_list": true, + "tf.compat.v1.train.FeatureList": false, + "tf.compat.v1.train.FeatureList.ByteSize": true, + "tf.compat.v1.train.FeatureList.Clear": true, + "tf.compat.v1.train.FeatureList.ClearExtension": true, + "tf.compat.v1.train.FeatureList.ClearField": true, + "tf.compat.v1.train.FeatureList.CopyFrom": true, + "tf.compat.v1.train.FeatureList.DESCRIPTOR": true, + "tf.compat.v1.train.FeatureList.DiscardUnknownFields": true, + "tf.compat.v1.train.FeatureList.Extensions": true, + "tf.compat.v1.train.FeatureList.FindInitializationErrors": true, + 
"tf.compat.v1.train.FeatureList.FromString": true, + "tf.compat.v1.train.FeatureList.HasExtension": true, + "tf.compat.v1.train.FeatureList.HasField": true, + "tf.compat.v1.train.FeatureList.IsInitialized": true, + "tf.compat.v1.train.FeatureList.ListFields": true, + "tf.compat.v1.train.FeatureList.MergeFrom": true, + "tf.compat.v1.train.FeatureList.MergeFromString": true, + "tf.compat.v1.train.FeatureList.ParseFromString": true, + "tf.compat.v1.train.FeatureList.RegisterExtension": true, + "tf.compat.v1.train.FeatureList.SerializePartialToString": true, + "tf.compat.v1.train.FeatureList.SerializeToString": true, + "tf.compat.v1.train.FeatureList.SetInParent": true, + "tf.compat.v1.train.FeatureList.UnknownFields": true, + "tf.compat.v1.train.FeatureList.WhichOneof": true, + "tf.compat.v1.train.FeatureList.__eq__": true, + "tf.compat.v1.train.FeatureList.__ge__": true, + "tf.compat.v1.train.FeatureList.__gt__": true, + "tf.compat.v1.train.FeatureList.__init__": true, + "tf.compat.v1.train.FeatureList.__le__": true, + "tf.compat.v1.train.FeatureList.__lt__": true, + "tf.compat.v1.train.FeatureList.__ne__": true, + "tf.compat.v1.train.FeatureList.__new__": true, + "tf.compat.v1.train.FeatureList.feature": true, + "tf.compat.v1.train.FeatureLists": false, + "tf.compat.v1.train.FeatureLists.ByteSize": true, + "tf.compat.v1.train.FeatureLists.Clear": true, + "tf.compat.v1.train.FeatureLists.ClearExtension": true, + "tf.compat.v1.train.FeatureLists.ClearField": true, + "tf.compat.v1.train.FeatureLists.CopyFrom": true, + "tf.compat.v1.train.FeatureLists.DESCRIPTOR": true, + "tf.compat.v1.train.FeatureLists.DiscardUnknownFields": true, + "tf.compat.v1.train.FeatureLists.Extensions": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry": false, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.ByteSize": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.Clear": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.ClearExtension": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.ClearField": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.CopyFrom": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.DESCRIPTOR": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.DiscardUnknownFields": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.Extensions": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.FindInitializationErrors": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.FromString": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.HasExtension": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.HasField": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.IsInitialized": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.ListFields": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.MergeFrom": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.MergeFromString": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.ParseFromString": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.RegisterExtension": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.SerializePartialToString": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.SerializeToString": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.SetInParent": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.UnknownFields": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.WhichOneof": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__eq__": true, + 
"tf.compat.v1.train.FeatureLists.FeatureListEntry.__ge__": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__gt__": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__init__": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__le__": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__lt__": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__ne__": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.__new__": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.key": true, + "tf.compat.v1.train.FeatureLists.FeatureListEntry.value": true, + "tf.compat.v1.train.FeatureLists.FindInitializationErrors": true, + "tf.compat.v1.train.FeatureLists.FromString": true, + "tf.compat.v1.train.FeatureLists.HasExtension": true, + "tf.compat.v1.train.FeatureLists.HasField": true, + "tf.compat.v1.train.FeatureLists.IsInitialized": true, + "tf.compat.v1.train.FeatureLists.ListFields": true, + "tf.compat.v1.train.FeatureLists.MergeFrom": true, + "tf.compat.v1.train.FeatureLists.MergeFromString": true, + "tf.compat.v1.train.FeatureLists.ParseFromString": true, + "tf.compat.v1.train.FeatureLists.RegisterExtension": true, + "tf.compat.v1.train.FeatureLists.SerializePartialToString": true, + "tf.compat.v1.train.FeatureLists.SerializeToString": true, + "tf.compat.v1.train.FeatureLists.SetInParent": true, + "tf.compat.v1.train.FeatureLists.UnknownFields": true, + "tf.compat.v1.train.FeatureLists.WhichOneof": true, + "tf.compat.v1.train.FeatureLists.__eq__": true, + "tf.compat.v1.train.FeatureLists.__ge__": true, + "tf.compat.v1.train.FeatureLists.__gt__": true, + "tf.compat.v1.train.FeatureLists.__init__": true, + "tf.compat.v1.train.FeatureLists.__le__": true, + "tf.compat.v1.train.FeatureLists.__lt__": true, + "tf.compat.v1.train.FeatureLists.__ne__": true, + "tf.compat.v1.train.FeatureLists.__new__": true, + "tf.compat.v1.train.FeatureLists.feature_list": true, + "tf.compat.v1.train.Features": false, + "tf.compat.v1.train.Features.ByteSize": true, + "tf.compat.v1.train.Features.Clear": true, + "tf.compat.v1.train.Features.ClearExtension": true, + "tf.compat.v1.train.Features.ClearField": true, + "tf.compat.v1.train.Features.CopyFrom": true, + "tf.compat.v1.train.Features.DESCRIPTOR": true, + "tf.compat.v1.train.Features.DiscardUnknownFields": true, + "tf.compat.v1.train.Features.Extensions": true, + "tf.compat.v1.train.Features.FeatureEntry": false, + "tf.compat.v1.train.Features.FeatureEntry.ByteSize": true, + "tf.compat.v1.train.Features.FeatureEntry.Clear": true, + "tf.compat.v1.train.Features.FeatureEntry.ClearExtension": true, + "tf.compat.v1.train.Features.FeatureEntry.ClearField": true, + "tf.compat.v1.train.Features.FeatureEntry.CopyFrom": true, + "tf.compat.v1.train.Features.FeatureEntry.DESCRIPTOR": true, + "tf.compat.v1.train.Features.FeatureEntry.DiscardUnknownFields": true, + "tf.compat.v1.train.Features.FeatureEntry.Extensions": true, + "tf.compat.v1.train.Features.FeatureEntry.FindInitializationErrors": true, + "tf.compat.v1.train.Features.FeatureEntry.FromString": true, + "tf.compat.v1.train.Features.FeatureEntry.HasExtension": true, + "tf.compat.v1.train.Features.FeatureEntry.HasField": true, + "tf.compat.v1.train.Features.FeatureEntry.IsInitialized": true, + "tf.compat.v1.train.Features.FeatureEntry.ListFields": true, + "tf.compat.v1.train.Features.FeatureEntry.MergeFrom": true, + "tf.compat.v1.train.Features.FeatureEntry.MergeFromString": true, + "tf.compat.v1.train.Features.FeatureEntry.ParseFromString": true, + 
"tf.compat.v1.train.Features.FeatureEntry.RegisterExtension": true, + "tf.compat.v1.train.Features.FeatureEntry.SerializePartialToString": true, + "tf.compat.v1.train.Features.FeatureEntry.SerializeToString": true, + "tf.compat.v1.train.Features.FeatureEntry.SetInParent": true, + "tf.compat.v1.train.Features.FeatureEntry.UnknownFields": true, + "tf.compat.v1.train.Features.FeatureEntry.WhichOneof": true, + "tf.compat.v1.train.Features.FeatureEntry.__eq__": true, + "tf.compat.v1.train.Features.FeatureEntry.__ge__": true, + "tf.compat.v1.train.Features.FeatureEntry.__gt__": true, + "tf.compat.v1.train.Features.FeatureEntry.__init__": true, + "tf.compat.v1.train.Features.FeatureEntry.__le__": true, + "tf.compat.v1.train.Features.FeatureEntry.__lt__": true, + "tf.compat.v1.train.Features.FeatureEntry.__ne__": true, + "tf.compat.v1.train.Features.FeatureEntry.__new__": true, + "tf.compat.v1.train.Features.FeatureEntry.key": true, + "tf.compat.v1.train.Features.FeatureEntry.value": true, + "tf.compat.v1.train.Features.FindInitializationErrors": true, + "tf.compat.v1.train.Features.FromString": true, + "tf.compat.v1.train.Features.HasExtension": true, + "tf.compat.v1.train.Features.HasField": true, + "tf.compat.v1.train.Features.IsInitialized": true, + "tf.compat.v1.train.Features.ListFields": true, + "tf.compat.v1.train.Features.MergeFrom": true, + "tf.compat.v1.train.Features.MergeFromString": true, + "tf.compat.v1.train.Features.ParseFromString": true, + "tf.compat.v1.train.Features.RegisterExtension": true, + "tf.compat.v1.train.Features.SerializePartialToString": true, + "tf.compat.v1.train.Features.SerializeToString": true, + "tf.compat.v1.train.Features.SetInParent": true, + "tf.compat.v1.train.Features.UnknownFields": true, + "tf.compat.v1.train.Features.WhichOneof": true, + "tf.compat.v1.train.Features.__eq__": true, + "tf.compat.v1.train.Features.__ge__": true, + "tf.compat.v1.train.Features.__gt__": true, + "tf.compat.v1.train.Features.__init__": true, + "tf.compat.v1.train.Features.__le__": true, + "tf.compat.v1.train.Features.__lt__": true, + "tf.compat.v1.train.Features.__ne__": true, + "tf.compat.v1.train.Features.__new__": true, + "tf.compat.v1.train.Features.feature": true, + "tf.compat.v1.train.FeedFnHook": false, + "tf.compat.v1.train.FeedFnHook.__eq__": true, + "tf.compat.v1.train.FeedFnHook.__ge__": true, + "tf.compat.v1.train.FeedFnHook.__gt__": true, + "tf.compat.v1.train.FeedFnHook.__init__": true, + "tf.compat.v1.train.FeedFnHook.__le__": true, + "tf.compat.v1.train.FeedFnHook.__lt__": true, + "tf.compat.v1.train.FeedFnHook.__ne__": true, + "tf.compat.v1.train.FeedFnHook.__new__": true, + "tf.compat.v1.train.FeedFnHook.after_create_session": true, + "tf.compat.v1.train.FeedFnHook.after_run": true, + "tf.compat.v1.train.FeedFnHook.before_run": true, + "tf.compat.v1.train.FeedFnHook.begin": true, + "tf.compat.v1.train.FeedFnHook.end": true, + "tf.compat.v1.train.FinalOpsHook": false, + "tf.compat.v1.train.FinalOpsHook.__eq__": true, + "tf.compat.v1.train.FinalOpsHook.__ge__": true, + "tf.compat.v1.train.FinalOpsHook.__gt__": true, + "tf.compat.v1.train.FinalOpsHook.__init__": true, + "tf.compat.v1.train.FinalOpsHook.__le__": true, + "tf.compat.v1.train.FinalOpsHook.__lt__": true, + "tf.compat.v1.train.FinalOpsHook.__ne__": true, + "tf.compat.v1.train.FinalOpsHook.__new__": true, + "tf.compat.v1.train.FinalOpsHook.after_create_session": true, + "tf.compat.v1.train.FinalOpsHook.after_run": true, + "tf.compat.v1.train.FinalOpsHook.before_run": true, + 
"tf.compat.v1.train.FinalOpsHook.begin": true, + "tf.compat.v1.train.FinalOpsHook.end": true, + "tf.compat.v1.train.FinalOpsHook.final_ops_values": true, + "tf.compat.v1.train.FloatList": false, + "tf.compat.v1.train.FloatList.ByteSize": true, + "tf.compat.v1.train.FloatList.Clear": true, + "tf.compat.v1.train.FloatList.ClearExtension": true, + "tf.compat.v1.train.FloatList.ClearField": true, + "tf.compat.v1.train.FloatList.CopyFrom": true, + "tf.compat.v1.train.FloatList.DESCRIPTOR": true, + "tf.compat.v1.train.FloatList.DiscardUnknownFields": true, + "tf.compat.v1.train.FloatList.Extensions": true, + "tf.compat.v1.train.FloatList.FindInitializationErrors": true, + "tf.compat.v1.train.FloatList.FromString": true, + "tf.compat.v1.train.FloatList.HasExtension": true, + "tf.compat.v1.train.FloatList.HasField": true, + "tf.compat.v1.train.FloatList.IsInitialized": true, + "tf.compat.v1.train.FloatList.ListFields": true, + "tf.compat.v1.train.FloatList.MergeFrom": true, + "tf.compat.v1.train.FloatList.MergeFromString": true, + "tf.compat.v1.train.FloatList.ParseFromString": true, + "tf.compat.v1.train.FloatList.RegisterExtension": true, + "tf.compat.v1.train.FloatList.SerializePartialToString": true, + "tf.compat.v1.train.FloatList.SerializeToString": true, + "tf.compat.v1.train.FloatList.SetInParent": true, + "tf.compat.v1.train.FloatList.UnknownFields": true, + "tf.compat.v1.train.FloatList.WhichOneof": true, + "tf.compat.v1.train.FloatList.__eq__": true, + "tf.compat.v1.train.FloatList.__ge__": true, + "tf.compat.v1.train.FloatList.__gt__": true, + "tf.compat.v1.train.FloatList.__init__": true, + "tf.compat.v1.train.FloatList.__le__": true, + "tf.compat.v1.train.FloatList.__lt__": true, + "tf.compat.v1.train.FloatList.__ne__": true, + "tf.compat.v1.train.FloatList.__new__": true, + "tf.compat.v1.train.FloatList.value": true, + "tf.compat.v1.train.FtrlOptimizer": false, + "tf.compat.v1.train.FtrlOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.FtrlOptimizer.GATE_NONE": true, + "tf.compat.v1.train.FtrlOptimizer.GATE_OP": true, + "tf.compat.v1.train.FtrlOptimizer.__eq__": true, + "tf.compat.v1.train.FtrlOptimizer.__ge__": true, + "tf.compat.v1.train.FtrlOptimizer.__gt__": true, + "tf.compat.v1.train.FtrlOptimizer.__init__": true, + "tf.compat.v1.train.FtrlOptimizer.__le__": true, + "tf.compat.v1.train.FtrlOptimizer.__lt__": true, + "tf.compat.v1.train.FtrlOptimizer.__ne__": true, + "tf.compat.v1.train.FtrlOptimizer.__new__": true, + "tf.compat.v1.train.FtrlOptimizer.apply_gradients": true, + "tf.compat.v1.train.FtrlOptimizer.compute_gradients": true, + "tf.compat.v1.train.FtrlOptimizer.get_name": true, + "tf.compat.v1.train.FtrlOptimizer.get_slot": true, + "tf.compat.v1.train.FtrlOptimizer.get_slot_names": true, + "tf.compat.v1.train.FtrlOptimizer.minimize": true, + "tf.compat.v1.train.FtrlOptimizer.variables": true, + "tf.compat.v1.train.GlobalStepWaiterHook": false, + "tf.compat.v1.train.GlobalStepWaiterHook.__eq__": true, + "tf.compat.v1.train.GlobalStepWaiterHook.__ge__": true, + "tf.compat.v1.train.GlobalStepWaiterHook.__gt__": true, + "tf.compat.v1.train.GlobalStepWaiterHook.__init__": true, + "tf.compat.v1.train.GlobalStepWaiterHook.__le__": true, + "tf.compat.v1.train.GlobalStepWaiterHook.__lt__": true, + "tf.compat.v1.train.GlobalStepWaiterHook.__ne__": true, + "tf.compat.v1.train.GlobalStepWaiterHook.__new__": true, + "tf.compat.v1.train.GlobalStepWaiterHook.after_create_session": true, + "tf.compat.v1.train.GlobalStepWaiterHook.after_run": true, + 
"tf.compat.v1.train.GlobalStepWaiterHook.before_run": true, + "tf.compat.v1.train.GlobalStepWaiterHook.begin": true, + "tf.compat.v1.train.GlobalStepWaiterHook.end": true, + "tf.compat.v1.train.GradientDescentOptimizer": false, + "tf.compat.v1.train.GradientDescentOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.GradientDescentOptimizer.GATE_NONE": true, + "tf.compat.v1.train.GradientDescentOptimizer.GATE_OP": true, + "tf.compat.v1.train.GradientDescentOptimizer.__eq__": true, + "tf.compat.v1.train.GradientDescentOptimizer.__ge__": true, + "tf.compat.v1.train.GradientDescentOptimizer.__gt__": true, + "tf.compat.v1.train.GradientDescentOptimizer.__init__": true, + "tf.compat.v1.train.GradientDescentOptimizer.__le__": true, + "tf.compat.v1.train.GradientDescentOptimizer.__lt__": true, + "tf.compat.v1.train.GradientDescentOptimizer.__ne__": true, + "tf.compat.v1.train.GradientDescentOptimizer.__new__": true, + "tf.compat.v1.train.GradientDescentOptimizer.apply_gradients": true, + "tf.compat.v1.train.GradientDescentOptimizer.compute_gradients": true, + "tf.compat.v1.train.GradientDescentOptimizer.get_name": true, + "tf.compat.v1.train.GradientDescentOptimizer.get_slot": true, + "tf.compat.v1.train.GradientDescentOptimizer.get_slot_names": true, + "tf.compat.v1.train.GradientDescentOptimizer.minimize": true, + "tf.compat.v1.train.GradientDescentOptimizer.variables": true, + "tf.compat.v1.train.Int64List": false, + "tf.compat.v1.train.Int64List.ByteSize": true, + "tf.compat.v1.train.Int64List.Clear": true, + "tf.compat.v1.train.Int64List.ClearExtension": true, + "tf.compat.v1.train.Int64List.ClearField": true, + "tf.compat.v1.train.Int64List.CopyFrom": true, + "tf.compat.v1.train.Int64List.DESCRIPTOR": true, + "tf.compat.v1.train.Int64List.DiscardUnknownFields": true, + "tf.compat.v1.train.Int64List.Extensions": true, + "tf.compat.v1.train.Int64List.FindInitializationErrors": true, + "tf.compat.v1.train.Int64List.FromString": true, + "tf.compat.v1.train.Int64List.HasExtension": true, + "tf.compat.v1.train.Int64List.HasField": true, + "tf.compat.v1.train.Int64List.IsInitialized": true, + "tf.compat.v1.train.Int64List.ListFields": true, + "tf.compat.v1.train.Int64List.MergeFrom": true, + "tf.compat.v1.train.Int64List.MergeFromString": true, + "tf.compat.v1.train.Int64List.ParseFromString": true, + "tf.compat.v1.train.Int64List.RegisterExtension": true, + "tf.compat.v1.train.Int64List.SerializePartialToString": true, + "tf.compat.v1.train.Int64List.SerializeToString": true, + "tf.compat.v1.train.Int64List.SetInParent": true, + "tf.compat.v1.train.Int64List.UnknownFields": true, + "tf.compat.v1.train.Int64List.WhichOneof": true, + "tf.compat.v1.train.Int64List.__eq__": true, + "tf.compat.v1.train.Int64List.__ge__": true, + "tf.compat.v1.train.Int64List.__gt__": true, + "tf.compat.v1.train.Int64List.__init__": true, + "tf.compat.v1.train.Int64List.__le__": true, + "tf.compat.v1.train.Int64List.__lt__": true, + "tf.compat.v1.train.Int64List.__ne__": true, + "tf.compat.v1.train.Int64List.__new__": true, + "tf.compat.v1.train.Int64List.value": true, + "tf.compat.v1.train.JobDef": false, + "tf.compat.v1.train.JobDef.ByteSize": true, + "tf.compat.v1.train.JobDef.Clear": true, + "tf.compat.v1.train.JobDef.ClearExtension": true, + "tf.compat.v1.train.JobDef.ClearField": true, + "tf.compat.v1.train.JobDef.CopyFrom": true, + "tf.compat.v1.train.JobDef.DESCRIPTOR": true, + "tf.compat.v1.train.JobDef.DiscardUnknownFields": true, + "tf.compat.v1.train.JobDef.Extensions": true, + 
"tf.compat.v1.train.JobDef.FindInitializationErrors": true, + "tf.compat.v1.train.JobDef.FromString": true, + "tf.compat.v1.train.JobDef.HasExtension": true, + "tf.compat.v1.train.JobDef.HasField": true, + "tf.compat.v1.train.JobDef.IsInitialized": true, + "tf.compat.v1.train.JobDef.ListFields": true, + "tf.compat.v1.train.JobDef.MergeFrom": true, + "tf.compat.v1.train.JobDef.MergeFromString": true, + "tf.compat.v1.train.JobDef.ParseFromString": true, + "tf.compat.v1.train.JobDef.RegisterExtension": true, + "tf.compat.v1.train.JobDef.SerializePartialToString": true, + "tf.compat.v1.train.JobDef.SerializeToString": true, + "tf.compat.v1.train.JobDef.SetInParent": true, + "tf.compat.v1.train.JobDef.TasksEntry": false, + "tf.compat.v1.train.JobDef.TasksEntry.ByteSize": true, + "tf.compat.v1.train.JobDef.TasksEntry.Clear": true, + "tf.compat.v1.train.JobDef.TasksEntry.ClearExtension": true, + "tf.compat.v1.train.JobDef.TasksEntry.ClearField": true, + "tf.compat.v1.train.JobDef.TasksEntry.CopyFrom": true, + "tf.compat.v1.train.JobDef.TasksEntry.DESCRIPTOR": true, + "tf.compat.v1.train.JobDef.TasksEntry.DiscardUnknownFields": true, + "tf.compat.v1.train.JobDef.TasksEntry.Extensions": true, + "tf.compat.v1.train.JobDef.TasksEntry.FindInitializationErrors": true, + "tf.compat.v1.train.JobDef.TasksEntry.FromString": true, + "tf.compat.v1.train.JobDef.TasksEntry.HasExtension": true, + "tf.compat.v1.train.JobDef.TasksEntry.HasField": true, + "tf.compat.v1.train.JobDef.TasksEntry.IsInitialized": true, + "tf.compat.v1.train.JobDef.TasksEntry.ListFields": true, + "tf.compat.v1.train.JobDef.TasksEntry.MergeFrom": true, + "tf.compat.v1.train.JobDef.TasksEntry.MergeFromString": true, + "tf.compat.v1.train.JobDef.TasksEntry.ParseFromString": true, + "tf.compat.v1.train.JobDef.TasksEntry.RegisterExtension": true, + "tf.compat.v1.train.JobDef.TasksEntry.SerializePartialToString": true, + "tf.compat.v1.train.JobDef.TasksEntry.SerializeToString": true, + "tf.compat.v1.train.JobDef.TasksEntry.SetInParent": true, + "tf.compat.v1.train.JobDef.TasksEntry.UnknownFields": true, + "tf.compat.v1.train.JobDef.TasksEntry.WhichOneof": true, + "tf.compat.v1.train.JobDef.TasksEntry.__eq__": true, + "tf.compat.v1.train.JobDef.TasksEntry.__ge__": true, + "tf.compat.v1.train.JobDef.TasksEntry.__gt__": true, + "tf.compat.v1.train.JobDef.TasksEntry.__init__": true, + "tf.compat.v1.train.JobDef.TasksEntry.__le__": true, + "tf.compat.v1.train.JobDef.TasksEntry.__lt__": true, + "tf.compat.v1.train.JobDef.TasksEntry.__ne__": true, + "tf.compat.v1.train.JobDef.TasksEntry.__new__": true, + "tf.compat.v1.train.JobDef.TasksEntry.key": true, + "tf.compat.v1.train.JobDef.TasksEntry.value": true, + "tf.compat.v1.train.JobDef.UnknownFields": true, + "tf.compat.v1.train.JobDef.WhichOneof": true, + "tf.compat.v1.train.JobDef.__eq__": true, + "tf.compat.v1.train.JobDef.__ge__": true, + "tf.compat.v1.train.JobDef.__gt__": true, + "tf.compat.v1.train.JobDef.__init__": true, + "tf.compat.v1.train.JobDef.__le__": true, + "tf.compat.v1.train.JobDef.__lt__": true, + "tf.compat.v1.train.JobDef.__ne__": true, + "tf.compat.v1.train.JobDef.__new__": true, + "tf.compat.v1.train.JobDef.name": true, + "tf.compat.v1.train.JobDef.tasks": true, + "tf.compat.v1.train.LoggingTensorHook": false, + "tf.compat.v1.train.LoggingTensorHook.__eq__": true, + "tf.compat.v1.train.LoggingTensorHook.__ge__": true, + "tf.compat.v1.train.LoggingTensorHook.__gt__": true, + "tf.compat.v1.train.LoggingTensorHook.__init__": true, + "tf.compat.v1.train.LoggingTensorHook.__le__": 
true, + "tf.compat.v1.train.LoggingTensorHook.__lt__": true, + "tf.compat.v1.train.LoggingTensorHook.__ne__": true, + "tf.compat.v1.train.LoggingTensorHook.__new__": true, + "tf.compat.v1.train.LoggingTensorHook.after_create_session": true, + "tf.compat.v1.train.LoggingTensorHook.after_run": true, + "tf.compat.v1.train.LoggingTensorHook.before_run": true, + "tf.compat.v1.train.LoggingTensorHook.begin": true, + "tf.compat.v1.train.LoggingTensorHook.end": true, + "tf.compat.v1.train.LooperThread": false, + "tf.compat.v1.train.LooperThread.__eq__": true, + "tf.compat.v1.train.LooperThread.__ge__": true, + "tf.compat.v1.train.LooperThread.__gt__": true, + "tf.compat.v1.train.LooperThread.__init__": true, + "tf.compat.v1.train.LooperThread.__le__": true, + "tf.compat.v1.train.LooperThread.__lt__": true, + "tf.compat.v1.train.LooperThread.__ne__": true, + "tf.compat.v1.train.LooperThread.__new__": true, + "tf.compat.v1.train.LooperThread.daemon": true, + "tf.compat.v1.train.LooperThread.getName": true, + "tf.compat.v1.train.LooperThread.ident": true, + "tf.compat.v1.train.LooperThread.isAlive": true, + "tf.compat.v1.train.LooperThread.isDaemon": true, + "tf.compat.v1.train.LooperThread.is_alive": true, + "tf.compat.v1.train.LooperThread.join": true, + "tf.compat.v1.train.LooperThread.loop": true, + "tf.compat.v1.train.LooperThread.name": true, + "tf.compat.v1.train.LooperThread.native_id": true, + "tf.compat.v1.train.LooperThread.run": true, + "tf.compat.v1.train.LooperThread.run_loop": true, + "tf.compat.v1.train.LooperThread.setDaemon": true, + "tf.compat.v1.train.LooperThread.setName": true, + "tf.compat.v1.train.LooperThread.start": true, + "tf.compat.v1.train.LooperThread.start_loop": true, + "tf.compat.v1.train.LooperThread.stop_loop": true, + "tf.compat.v1.train.MomentumOptimizer": false, + "tf.compat.v1.train.MomentumOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.MomentumOptimizer.GATE_NONE": true, + "tf.compat.v1.train.MomentumOptimizer.GATE_OP": true, + "tf.compat.v1.train.MomentumOptimizer.__eq__": true, + "tf.compat.v1.train.MomentumOptimizer.__ge__": true, + "tf.compat.v1.train.MomentumOptimizer.__gt__": true, + "tf.compat.v1.train.MomentumOptimizer.__init__": true, + "tf.compat.v1.train.MomentumOptimizer.__le__": true, + "tf.compat.v1.train.MomentumOptimizer.__lt__": true, + "tf.compat.v1.train.MomentumOptimizer.__ne__": true, + "tf.compat.v1.train.MomentumOptimizer.__new__": true, + "tf.compat.v1.train.MomentumOptimizer.apply_gradients": true, + "tf.compat.v1.train.MomentumOptimizer.compute_gradients": true, + "tf.compat.v1.train.MomentumOptimizer.get_name": true, + "tf.compat.v1.train.MomentumOptimizer.get_slot": true, + "tf.compat.v1.train.MomentumOptimizer.get_slot_names": true, + "tf.compat.v1.train.MomentumOptimizer.minimize": true, + "tf.compat.v1.train.MomentumOptimizer.variables": true, + "tf.compat.v1.train.MonitoredSession": false, + "tf.compat.v1.train.MonitoredSession.StepContext": false, + "tf.compat.v1.train.MonitoredSession.StepContext.__eq__": true, + "tf.compat.v1.train.MonitoredSession.StepContext.__ge__": true, + "tf.compat.v1.train.MonitoredSession.StepContext.__gt__": true, + "tf.compat.v1.train.MonitoredSession.StepContext.__init__": true, + "tf.compat.v1.train.MonitoredSession.StepContext.__le__": true, + "tf.compat.v1.train.MonitoredSession.StepContext.__lt__": true, + "tf.compat.v1.train.MonitoredSession.StepContext.__ne__": true, + "tf.compat.v1.train.MonitoredSession.StepContext.__new__": true, + 
"tf.compat.v1.train.MonitoredSession.StepContext.request_stop": true, + "tf.compat.v1.train.MonitoredSession.StepContext.run_with_hooks": true, + "tf.compat.v1.train.MonitoredSession.StepContext.session": true, + "tf.compat.v1.train.MonitoredSession.__enter__": true, + "tf.compat.v1.train.MonitoredSession.__eq__": true, + "tf.compat.v1.train.MonitoredSession.__exit__": true, + "tf.compat.v1.train.MonitoredSession.__ge__": true, + "tf.compat.v1.train.MonitoredSession.__gt__": true, + "tf.compat.v1.train.MonitoredSession.__init__": true, + "tf.compat.v1.train.MonitoredSession.__le__": true, + "tf.compat.v1.train.MonitoredSession.__lt__": true, + "tf.compat.v1.train.MonitoredSession.__ne__": true, + "tf.compat.v1.train.MonitoredSession.__new__": true, + "tf.compat.v1.train.MonitoredSession.close": true, + "tf.compat.v1.train.MonitoredSession.graph": true, + "tf.compat.v1.train.MonitoredSession.run": true, + "tf.compat.v1.train.MonitoredSession.run_step_fn": true, + "tf.compat.v1.train.MonitoredSession.should_stop": true, + "tf.compat.v1.train.MonitoredTrainingSession": false, + "tf.compat.v1.train.NanLossDuringTrainingError": false, + "tf.compat.v1.train.NanLossDuringTrainingError.__eq__": true, + "tf.compat.v1.train.NanLossDuringTrainingError.__ge__": true, + "tf.compat.v1.train.NanLossDuringTrainingError.__gt__": true, + "tf.compat.v1.train.NanLossDuringTrainingError.__init__": true, + "tf.compat.v1.train.NanLossDuringTrainingError.__le__": true, + "tf.compat.v1.train.NanLossDuringTrainingError.__lt__": true, + "tf.compat.v1.train.NanLossDuringTrainingError.__ne__": true, + "tf.compat.v1.train.NanLossDuringTrainingError.__new__": true, + "tf.compat.v1.train.NanLossDuringTrainingError.args": true, + "tf.compat.v1.train.NanLossDuringTrainingError.with_traceback": true, + "tf.compat.v1.train.NanTensorHook": false, + "tf.compat.v1.train.NanTensorHook.__eq__": true, + "tf.compat.v1.train.NanTensorHook.__ge__": true, + "tf.compat.v1.train.NanTensorHook.__gt__": true, + "tf.compat.v1.train.NanTensorHook.__init__": true, + "tf.compat.v1.train.NanTensorHook.__le__": true, + "tf.compat.v1.train.NanTensorHook.__lt__": true, + "tf.compat.v1.train.NanTensorHook.__ne__": true, + "tf.compat.v1.train.NanTensorHook.__new__": true, + "tf.compat.v1.train.NanTensorHook.after_create_session": true, + "tf.compat.v1.train.NanTensorHook.after_run": true, + "tf.compat.v1.train.NanTensorHook.before_run": true, + "tf.compat.v1.train.NanTensorHook.begin": true, + "tf.compat.v1.train.NanTensorHook.end": true, + "tf.compat.v1.train.NewCheckpointReader": false, + "tf.compat.v1.train.Optimizer": false, + "tf.compat.v1.train.Optimizer.GATE_GRAPH": true, + "tf.compat.v1.train.Optimizer.GATE_NONE": true, + "tf.compat.v1.train.Optimizer.GATE_OP": true, + "tf.compat.v1.train.Optimizer.__eq__": true, + "tf.compat.v1.train.Optimizer.__ge__": true, + "tf.compat.v1.train.Optimizer.__gt__": true, + "tf.compat.v1.train.Optimizer.__init__": true, + "tf.compat.v1.train.Optimizer.__le__": true, + "tf.compat.v1.train.Optimizer.__lt__": true, + "tf.compat.v1.train.Optimizer.__ne__": true, + "tf.compat.v1.train.Optimizer.__new__": true, + "tf.compat.v1.train.Optimizer.apply_gradients": true, + "tf.compat.v1.train.Optimizer.compute_gradients": true, + "tf.compat.v1.train.Optimizer.get_name": true, + "tf.compat.v1.train.Optimizer.get_slot": true, + "tf.compat.v1.train.Optimizer.get_slot_names": true, + "tf.compat.v1.train.Optimizer.minimize": true, + "tf.compat.v1.train.Optimizer.variables": true, + "tf.compat.v1.train.ProfilerHook": false, 
+ "tf.compat.v1.train.ProfilerHook.__eq__": true, + "tf.compat.v1.train.ProfilerHook.__ge__": true, + "tf.compat.v1.train.ProfilerHook.__gt__": true, + "tf.compat.v1.train.ProfilerHook.__init__": true, + "tf.compat.v1.train.ProfilerHook.__le__": true, + "tf.compat.v1.train.ProfilerHook.__lt__": true, + "tf.compat.v1.train.ProfilerHook.__ne__": true, + "tf.compat.v1.train.ProfilerHook.__new__": true, + "tf.compat.v1.train.ProfilerHook.after_create_session": true, + "tf.compat.v1.train.ProfilerHook.after_run": true, + "tf.compat.v1.train.ProfilerHook.before_run": true, + "tf.compat.v1.train.ProfilerHook.begin": true, + "tf.compat.v1.train.ProfilerHook.end": true, + "tf.compat.v1.train.ProximalAdagradOptimizer": false, + "tf.compat.v1.train.ProximalAdagradOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.GATE_NONE": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.GATE_OP": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.__eq__": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.__ge__": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.__gt__": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.__init__": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.__le__": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.__lt__": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.__ne__": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.__new__": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.apply_gradients": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.compute_gradients": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.get_name": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.get_slot": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.get_slot_names": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.minimize": true, + "tf.compat.v1.train.ProximalAdagradOptimizer.variables": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer": false, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.GATE_NONE": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.GATE_OP": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__eq__": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__ge__": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__gt__": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__init__": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__le__": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__lt__": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__ne__": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.__new__": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.apply_gradients": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.compute_gradients": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.get_name": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.get_slot": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.get_slot_names": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.minimize": true, + "tf.compat.v1.train.ProximalGradientDescentOptimizer.variables": true, + "tf.compat.v1.train.QueueRunner": false, + "tf.compat.v1.train.QueueRunner.__eq__": true, + "tf.compat.v1.train.QueueRunner.__ge__": true, + "tf.compat.v1.train.QueueRunner.__gt__": true, + "tf.compat.v1.train.QueueRunner.__init__": true, + "tf.compat.v1.train.QueueRunner.__le__": true, + 
"tf.compat.v1.train.QueueRunner.__lt__": true, + "tf.compat.v1.train.QueueRunner.__ne__": true, + "tf.compat.v1.train.QueueRunner.__new__": true, + "tf.compat.v1.train.QueueRunner.cancel_op": true, + "tf.compat.v1.train.QueueRunner.close_op": true, + "tf.compat.v1.train.QueueRunner.create_threads": true, + "tf.compat.v1.train.QueueRunner.enqueue_ops": true, + "tf.compat.v1.train.QueueRunner.exceptions_raised": true, + "tf.compat.v1.train.QueueRunner.from_proto": true, + "tf.compat.v1.train.QueueRunner.name": true, + "tf.compat.v1.train.QueueRunner.queue": true, + "tf.compat.v1.train.QueueRunner.queue_closed_exception_types": true, + "tf.compat.v1.train.QueueRunner.to_proto": true, + "tf.compat.v1.train.RMSPropOptimizer": false, + "tf.compat.v1.train.RMSPropOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.RMSPropOptimizer.GATE_NONE": true, + "tf.compat.v1.train.RMSPropOptimizer.GATE_OP": true, + "tf.compat.v1.train.RMSPropOptimizer.__eq__": true, + "tf.compat.v1.train.RMSPropOptimizer.__ge__": true, + "tf.compat.v1.train.RMSPropOptimizer.__gt__": true, + "tf.compat.v1.train.RMSPropOptimizer.__init__": true, + "tf.compat.v1.train.RMSPropOptimizer.__le__": true, + "tf.compat.v1.train.RMSPropOptimizer.__lt__": true, + "tf.compat.v1.train.RMSPropOptimizer.__ne__": true, + "tf.compat.v1.train.RMSPropOptimizer.__new__": true, + "tf.compat.v1.train.RMSPropOptimizer.apply_gradients": true, + "tf.compat.v1.train.RMSPropOptimizer.compute_gradients": true, + "tf.compat.v1.train.RMSPropOptimizer.get_name": true, + "tf.compat.v1.train.RMSPropOptimizer.get_slot": true, + "tf.compat.v1.train.RMSPropOptimizer.get_slot_names": true, + "tf.compat.v1.train.RMSPropOptimizer.minimize": true, + "tf.compat.v1.train.RMSPropOptimizer.variables": true, + "tf.compat.v1.train.Saver": false, + "tf.compat.v1.train.Saver.__eq__": true, + "tf.compat.v1.train.Saver.__ge__": true, + "tf.compat.v1.train.Saver.__gt__": true, + "tf.compat.v1.train.Saver.__init__": true, + "tf.compat.v1.train.Saver.__le__": true, + "tf.compat.v1.train.Saver.__lt__": true, + "tf.compat.v1.train.Saver.__ne__": true, + "tf.compat.v1.train.Saver.__new__": true, + "tf.compat.v1.train.Saver.as_saver_def": true, + "tf.compat.v1.train.Saver.build": true, + "tf.compat.v1.train.Saver.export_meta_graph": true, + "tf.compat.v1.train.Saver.from_proto": true, + "tf.compat.v1.train.Saver.last_checkpoints": true, + "tf.compat.v1.train.Saver.recover_last_checkpoints": true, + "tf.compat.v1.train.Saver.restore": true, + "tf.compat.v1.train.Saver.save": true, + "tf.compat.v1.train.Saver.set_last_checkpoints": true, + "tf.compat.v1.train.Saver.set_last_checkpoints_with_time": true, + "tf.compat.v1.train.Saver.to_proto": true, + "tf.compat.v1.train.SaverDef": false, + "tf.compat.v1.train.SaverDef.ByteSize": true, + "tf.compat.v1.train.SaverDef.CheckpointFormatVersion": true, + "tf.compat.v1.train.SaverDef.Clear": true, + "tf.compat.v1.train.SaverDef.ClearExtension": true, + "tf.compat.v1.train.SaverDef.ClearField": true, + "tf.compat.v1.train.SaverDef.CopyFrom": true, + "tf.compat.v1.train.SaverDef.DESCRIPTOR": true, + "tf.compat.v1.train.SaverDef.DiscardUnknownFields": true, + "tf.compat.v1.train.SaverDef.Extensions": true, + "tf.compat.v1.train.SaverDef.FindInitializationErrors": true, + "tf.compat.v1.train.SaverDef.FromString": true, + "tf.compat.v1.train.SaverDef.HasExtension": true, + "tf.compat.v1.train.SaverDef.HasField": true, + "tf.compat.v1.train.SaverDef.IsInitialized": true, + "tf.compat.v1.train.SaverDef.LEGACY": true, + 
"tf.compat.v1.train.SaverDef.ListFields": true, + "tf.compat.v1.train.SaverDef.MergeFrom": true, + "tf.compat.v1.train.SaverDef.MergeFromString": true, + "tf.compat.v1.train.SaverDef.ParseFromString": true, + "tf.compat.v1.train.SaverDef.RegisterExtension": true, + "tf.compat.v1.train.SaverDef.SerializePartialToString": true, + "tf.compat.v1.train.SaverDef.SerializeToString": true, + "tf.compat.v1.train.SaverDef.SetInParent": true, + "tf.compat.v1.train.SaverDef.UnknownFields": true, + "tf.compat.v1.train.SaverDef.V1": true, + "tf.compat.v1.train.SaverDef.V2": true, + "tf.compat.v1.train.SaverDef.WhichOneof": true, + "tf.compat.v1.train.SaverDef.__eq__": true, + "tf.compat.v1.train.SaverDef.__ge__": true, + "tf.compat.v1.train.SaverDef.__gt__": true, + "tf.compat.v1.train.SaverDef.__init__": true, + "tf.compat.v1.train.SaverDef.__le__": true, + "tf.compat.v1.train.SaverDef.__lt__": true, + "tf.compat.v1.train.SaverDef.__ne__": true, + "tf.compat.v1.train.SaverDef.__new__": true, + "tf.compat.v1.train.SaverDef.filename_tensor_name": true, + "tf.compat.v1.train.SaverDef.keep_checkpoint_every_n_hours": true, + "tf.compat.v1.train.SaverDef.max_to_keep": true, + "tf.compat.v1.train.SaverDef.restore_op_name": true, + "tf.compat.v1.train.SaverDef.save_tensor_name": true, + "tf.compat.v1.train.SaverDef.sharded": true, + "tf.compat.v1.train.SaverDef.version": true, + "tf.compat.v1.train.Scaffold": false, + "tf.compat.v1.train.Scaffold.__eq__": true, + "tf.compat.v1.train.Scaffold.__ge__": true, + "tf.compat.v1.train.Scaffold.__gt__": true, + "tf.compat.v1.train.Scaffold.__init__": true, + "tf.compat.v1.train.Scaffold.__le__": true, + "tf.compat.v1.train.Scaffold.__lt__": true, + "tf.compat.v1.train.Scaffold.__ne__": true, + "tf.compat.v1.train.Scaffold.__new__": true, + "tf.compat.v1.train.Scaffold.default_local_init_op": true, + "tf.compat.v1.train.Scaffold.finalize": true, + "tf.compat.v1.train.Scaffold.get_or_default": true, + "tf.compat.v1.train.Scaffold.init_feed_dict": true, + "tf.compat.v1.train.Scaffold.init_fn": true, + "tf.compat.v1.train.Scaffold.init_op": true, + "tf.compat.v1.train.Scaffold.local_init_feed_dict": true, + "tf.compat.v1.train.Scaffold.local_init_op": true, + "tf.compat.v1.train.Scaffold.ready_for_local_init_op": true, + "tf.compat.v1.train.Scaffold.ready_op": true, + "tf.compat.v1.train.Scaffold.saver": true, + "tf.compat.v1.train.Scaffold.summary_op": true, + "tf.compat.v1.train.SecondOrStepTimer": false, + "tf.compat.v1.train.SecondOrStepTimer.__eq__": true, + "tf.compat.v1.train.SecondOrStepTimer.__ge__": true, + "tf.compat.v1.train.SecondOrStepTimer.__gt__": true, + "tf.compat.v1.train.SecondOrStepTimer.__init__": true, + "tf.compat.v1.train.SecondOrStepTimer.__le__": true, + "tf.compat.v1.train.SecondOrStepTimer.__lt__": true, + "tf.compat.v1.train.SecondOrStepTimer.__ne__": true, + "tf.compat.v1.train.SecondOrStepTimer.__new__": true, + "tf.compat.v1.train.SecondOrStepTimer.last_triggered_step": true, + "tf.compat.v1.train.SecondOrStepTimer.reset": true, + "tf.compat.v1.train.SecondOrStepTimer.should_trigger_for_step": true, + "tf.compat.v1.train.SecondOrStepTimer.update_last_triggered_step": true, + "tf.compat.v1.train.SequenceExample": false, + "tf.compat.v1.train.SequenceExample.ByteSize": true, + "tf.compat.v1.train.SequenceExample.Clear": true, + "tf.compat.v1.train.SequenceExample.ClearExtension": true, + "tf.compat.v1.train.SequenceExample.ClearField": true, + "tf.compat.v1.train.SequenceExample.CopyFrom": true, + 
"tf.compat.v1.train.SequenceExample.DESCRIPTOR": true, + "tf.compat.v1.train.SequenceExample.DiscardUnknownFields": true, + "tf.compat.v1.train.SequenceExample.Extensions": true, + "tf.compat.v1.train.SequenceExample.FindInitializationErrors": true, + "tf.compat.v1.train.SequenceExample.FromString": true, + "tf.compat.v1.train.SequenceExample.HasExtension": true, + "tf.compat.v1.train.SequenceExample.HasField": true, + "tf.compat.v1.train.SequenceExample.IsInitialized": true, + "tf.compat.v1.train.SequenceExample.ListFields": true, + "tf.compat.v1.train.SequenceExample.MergeFrom": true, + "tf.compat.v1.train.SequenceExample.MergeFromString": true, + "tf.compat.v1.train.SequenceExample.ParseFromString": true, + "tf.compat.v1.train.SequenceExample.RegisterExtension": true, + "tf.compat.v1.train.SequenceExample.SerializePartialToString": true, + "tf.compat.v1.train.SequenceExample.SerializeToString": true, + "tf.compat.v1.train.SequenceExample.SetInParent": true, + "tf.compat.v1.train.SequenceExample.UnknownFields": true, + "tf.compat.v1.train.SequenceExample.WhichOneof": true, + "tf.compat.v1.train.SequenceExample.__eq__": true, + "tf.compat.v1.train.SequenceExample.__ge__": true, + "tf.compat.v1.train.SequenceExample.__gt__": true, + "tf.compat.v1.train.SequenceExample.__init__": true, + "tf.compat.v1.train.SequenceExample.__le__": true, + "tf.compat.v1.train.SequenceExample.__lt__": true, + "tf.compat.v1.train.SequenceExample.__ne__": true, + "tf.compat.v1.train.SequenceExample.__new__": true, + "tf.compat.v1.train.SequenceExample.context": true, + "tf.compat.v1.train.SequenceExample.feature_lists": true, + "tf.compat.v1.train.Server": false, + "tf.compat.v1.train.Server.__eq__": true, + "tf.compat.v1.train.Server.__ge__": true, + "tf.compat.v1.train.Server.__gt__": true, + "tf.compat.v1.train.Server.__init__": true, + "tf.compat.v1.train.Server.__le__": true, + "tf.compat.v1.train.Server.__lt__": true, + "tf.compat.v1.train.Server.__ne__": true, + "tf.compat.v1.train.Server.__new__": true, + "tf.compat.v1.train.Server.create_local_server": true, + "tf.compat.v1.train.Server.join": true, + "tf.compat.v1.train.Server.server_def": true, + "tf.compat.v1.train.Server.start": true, + "tf.compat.v1.train.Server.target": true, + "tf.compat.v1.train.ServerDef": false, + "tf.compat.v1.train.ServerDef.ByteSize": true, + "tf.compat.v1.train.ServerDef.Clear": true, + "tf.compat.v1.train.ServerDef.ClearExtension": true, + "tf.compat.v1.train.ServerDef.ClearField": true, + "tf.compat.v1.train.ServerDef.CopyFrom": true, + "tf.compat.v1.train.ServerDef.DESCRIPTOR": true, + "tf.compat.v1.train.ServerDef.DiscardUnknownFields": true, + "tf.compat.v1.train.ServerDef.Extensions": true, + "tf.compat.v1.train.ServerDef.FindInitializationErrors": true, + "tf.compat.v1.train.ServerDef.FromString": true, + "tf.compat.v1.train.ServerDef.HasExtension": true, + "tf.compat.v1.train.ServerDef.HasField": true, + "tf.compat.v1.train.ServerDef.IsInitialized": true, + "tf.compat.v1.train.ServerDef.ListFields": true, + "tf.compat.v1.train.ServerDef.MergeFrom": true, + "tf.compat.v1.train.ServerDef.MergeFromString": true, + "tf.compat.v1.train.ServerDef.ParseFromString": true, + "tf.compat.v1.train.ServerDef.RegisterExtension": true, + "tf.compat.v1.train.ServerDef.SerializePartialToString": true, + "tf.compat.v1.train.ServerDef.SerializeToString": true, + "tf.compat.v1.train.ServerDef.SetInParent": true, + "tf.compat.v1.train.ServerDef.UnknownFields": true, + "tf.compat.v1.train.ServerDef.WhichOneof": true, + 
"tf.compat.v1.train.ServerDef.__eq__": true, + "tf.compat.v1.train.ServerDef.__ge__": true, + "tf.compat.v1.train.ServerDef.__gt__": true, + "tf.compat.v1.train.ServerDef.__init__": true, + "tf.compat.v1.train.ServerDef.__le__": true, + "tf.compat.v1.train.ServerDef.__lt__": true, + "tf.compat.v1.train.ServerDef.__ne__": true, + "tf.compat.v1.train.ServerDef.__new__": true, + "tf.compat.v1.train.ServerDef.cluster": true, + "tf.compat.v1.train.ServerDef.cluster_device_filters": true, + "tf.compat.v1.train.ServerDef.default_session_config": true, + "tf.compat.v1.train.ServerDef.job_name": true, + "tf.compat.v1.train.ServerDef.port": true, + "tf.compat.v1.train.ServerDef.protocol": true, + "tf.compat.v1.train.ServerDef.task_index": true, + "tf.compat.v1.train.SessionCreator": false, + "tf.compat.v1.train.SessionCreator.__eq__": true, + "tf.compat.v1.train.SessionCreator.__ge__": true, + "tf.compat.v1.train.SessionCreator.__gt__": true, + "tf.compat.v1.train.SessionCreator.__init__": true, + "tf.compat.v1.train.SessionCreator.__le__": true, + "tf.compat.v1.train.SessionCreator.__lt__": true, + "tf.compat.v1.train.SessionCreator.__ne__": true, + "tf.compat.v1.train.SessionCreator.__new__": true, + "tf.compat.v1.train.SessionCreator.create_session": true, + "tf.compat.v1.train.SessionManager": false, + "tf.compat.v1.train.SessionManager.__eq__": true, + "tf.compat.v1.train.SessionManager.__ge__": true, + "tf.compat.v1.train.SessionManager.__gt__": true, + "tf.compat.v1.train.SessionManager.__init__": true, + "tf.compat.v1.train.SessionManager.__le__": true, + "tf.compat.v1.train.SessionManager.__lt__": true, + "tf.compat.v1.train.SessionManager.__ne__": true, + "tf.compat.v1.train.SessionManager.__new__": true, + "tf.compat.v1.train.SessionManager.prepare_session": true, + "tf.compat.v1.train.SessionManager.recover_session": true, + "tf.compat.v1.train.SessionManager.wait_for_session": true, + "tf.compat.v1.train.SessionRunArgs": false, + "tf.compat.v1.train.SessionRunArgs.__add__": true, + "tf.compat.v1.train.SessionRunArgs.__contains__": true, + "tf.compat.v1.train.SessionRunArgs.__eq__": true, + "tf.compat.v1.train.SessionRunArgs.__ge__": true, + "tf.compat.v1.train.SessionRunArgs.__getitem__": true, + "tf.compat.v1.train.SessionRunArgs.__gt__": true, + "tf.compat.v1.train.SessionRunArgs.__init__": true, + "tf.compat.v1.train.SessionRunArgs.__iter__": true, + "tf.compat.v1.train.SessionRunArgs.__le__": true, + "tf.compat.v1.train.SessionRunArgs.__len__": true, + "tf.compat.v1.train.SessionRunArgs.__lt__": true, + "tf.compat.v1.train.SessionRunArgs.__mul__": true, + "tf.compat.v1.train.SessionRunArgs.__ne__": true, + "tf.compat.v1.train.SessionRunArgs.__new__": true, + "tf.compat.v1.train.SessionRunArgs.__rmul__": true, + "tf.compat.v1.train.SessionRunArgs.count": true, + "tf.compat.v1.train.SessionRunArgs.feed_dict": true, + "tf.compat.v1.train.SessionRunArgs.fetches": true, + "tf.compat.v1.train.SessionRunArgs.index": true, + "tf.compat.v1.train.SessionRunArgs.options": true, + "tf.compat.v1.train.SessionRunContext": false, + "tf.compat.v1.train.SessionRunContext.__eq__": true, + "tf.compat.v1.train.SessionRunContext.__ge__": true, + "tf.compat.v1.train.SessionRunContext.__gt__": true, + "tf.compat.v1.train.SessionRunContext.__init__": true, + "tf.compat.v1.train.SessionRunContext.__le__": true, + "tf.compat.v1.train.SessionRunContext.__lt__": true, + "tf.compat.v1.train.SessionRunContext.__ne__": true, + "tf.compat.v1.train.SessionRunContext.__new__": true, + 
"tf.compat.v1.train.SessionRunContext.original_args": true, + "tf.compat.v1.train.SessionRunContext.request_stop": true, + "tf.compat.v1.train.SessionRunContext.session": true, + "tf.compat.v1.train.SessionRunContext.stop_requested": true, + "tf.compat.v1.train.SessionRunHook": false, + "tf.compat.v1.train.SessionRunHook.__eq__": true, + "tf.compat.v1.train.SessionRunHook.__ge__": true, + "tf.compat.v1.train.SessionRunHook.__gt__": true, + "tf.compat.v1.train.SessionRunHook.__init__": true, + "tf.compat.v1.train.SessionRunHook.__le__": true, + "tf.compat.v1.train.SessionRunHook.__lt__": true, + "tf.compat.v1.train.SessionRunHook.__ne__": true, + "tf.compat.v1.train.SessionRunHook.__new__": true, + "tf.compat.v1.train.SessionRunHook.after_create_session": true, + "tf.compat.v1.train.SessionRunHook.after_run": true, + "tf.compat.v1.train.SessionRunHook.before_run": true, + "tf.compat.v1.train.SessionRunHook.begin": true, + "tf.compat.v1.train.SessionRunHook.end": true, + "tf.compat.v1.train.SessionRunValues": false, + "tf.compat.v1.train.SessionRunValues.__add__": true, + "tf.compat.v1.train.SessionRunValues.__contains__": true, + "tf.compat.v1.train.SessionRunValues.__eq__": true, + "tf.compat.v1.train.SessionRunValues.__ge__": true, + "tf.compat.v1.train.SessionRunValues.__getitem__": true, + "tf.compat.v1.train.SessionRunValues.__gt__": true, + "tf.compat.v1.train.SessionRunValues.__init__": true, + "tf.compat.v1.train.SessionRunValues.__iter__": true, + "tf.compat.v1.train.SessionRunValues.__le__": true, + "tf.compat.v1.train.SessionRunValues.__len__": true, + "tf.compat.v1.train.SessionRunValues.__lt__": true, + "tf.compat.v1.train.SessionRunValues.__mul__": true, + "tf.compat.v1.train.SessionRunValues.__ne__": true, + "tf.compat.v1.train.SessionRunValues.__new__": true, + "tf.compat.v1.train.SessionRunValues.__rmul__": true, + "tf.compat.v1.train.SessionRunValues.count": true, + "tf.compat.v1.train.SessionRunValues.index": true, + "tf.compat.v1.train.SessionRunValues.options": true, + "tf.compat.v1.train.SessionRunValues.results": true, + "tf.compat.v1.train.SessionRunValues.run_metadata": true, + "tf.compat.v1.train.SingularMonitoredSession": false, + "tf.compat.v1.train.SingularMonitoredSession.StepContext": false, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__eq__": true, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__ge__": true, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__gt__": true, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__init__": true, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__le__": true, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__lt__": true, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__ne__": true, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.__new__": true, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.request_stop": true, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.run_with_hooks": true, + "tf.compat.v1.train.SingularMonitoredSession.StepContext.session": true, + "tf.compat.v1.train.SingularMonitoredSession.__enter__": true, + "tf.compat.v1.train.SingularMonitoredSession.__eq__": true, + "tf.compat.v1.train.SingularMonitoredSession.__exit__": true, + "tf.compat.v1.train.SingularMonitoredSession.__ge__": true, + "tf.compat.v1.train.SingularMonitoredSession.__gt__": true, + "tf.compat.v1.train.SingularMonitoredSession.__init__": true, + "tf.compat.v1.train.SingularMonitoredSession.__le__": true, + 
"tf.compat.v1.train.SingularMonitoredSession.__lt__": true, + "tf.compat.v1.train.SingularMonitoredSession.__ne__": true, + "tf.compat.v1.train.SingularMonitoredSession.__new__": true, + "tf.compat.v1.train.SingularMonitoredSession.close": true, + "tf.compat.v1.train.SingularMonitoredSession.graph": true, + "tf.compat.v1.train.SingularMonitoredSession.raw_session": true, + "tf.compat.v1.train.SingularMonitoredSession.run": true, + "tf.compat.v1.train.SingularMonitoredSession.run_step_fn": true, + "tf.compat.v1.train.SingularMonitoredSession.should_stop": true, + "tf.compat.v1.train.StepCounterHook": false, + "tf.compat.v1.train.StepCounterHook.__eq__": true, + "tf.compat.v1.train.StepCounterHook.__ge__": true, + "tf.compat.v1.train.StepCounterHook.__gt__": true, + "tf.compat.v1.train.StepCounterHook.__init__": true, + "tf.compat.v1.train.StepCounterHook.__le__": true, + "tf.compat.v1.train.StepCounterHook.__lt__": true, + "tf.compat.v1.train.StepCounterHook.__ne__": true, + "tf.compat.v1.train.StepCounterHook.__new__": true, + "tf.compat.v1.train.StepCounterHook.after_create_session": true, + "tf.compat.v1.train.StepCounterHook.after_run": true, + "tf.compat.v1.train.StepCounterHook.before_run": true, + "tf.compat.v1.train.StepCounterHook.begin": true, + "tf.compat.v1.train.StepCounterHook.end": true, + "tf.compat.v1.train.StopAtStepHook": false, + "tf.compat.v1.train.StopAtStepHook.__eq__": true, + "tf.compat.v1.train.StopAtStepHook.__ge__": true, + "tf.compat.v1.train.StopAtStepHook.__gt__": true, + "tf.compat.v1.train.StopAtStepHook.__init__": true, + "tf.compat.v1.train.StopAtStepHook.__le__": true, + "tf.compat.v1.train.StopAtStepHook.__lt__": true, + "tf.compat.v1.train.StopAtStepHook.__ne__": true, + "tf.compat.v1.train.StopAtStepHook.__new__": true, + "tf.compat.v1.train.StopAtStepHook.after_create_session": true, + "tf.compat.v1.train.StopAtStepHook.after_run": true, + "tf.compat.v1.train.StopAtStepHook.before_run": true, + "tf.compat.v1.train.StopAtStepHook.begin": true, + "tf.compat.v1.train.StopAtStepHook.end": true, + "tf.compat.v1.train.SummarySaverHook": false, + "tf.compat.v1.train.SummarySaverHook.__eq__": true, + "tf.compat.v1.train.SummarySaverHook.__ge__": true, + "tf.compat.v1.train.SummarySaverHook.__gt__": true, + "tf.compat.v1.train.SummarySaverHook.__init__": true, + "tf.compat.v1.train.SummarySaverHook.__le__": true, + "tf.compat.v1.train.SummarySaverHook.__lt__": true, + "tf.compat.v1.train.SummarySaverHook.__ne__": true, + "tf.compat.v1.train.SummarySaverHook.__new__": true, + "tf.compat.v1.train.SummarySaverHook.after_create_session": true, + "tf.compat.v1.train.SummarySaverHook.after_run": true, + "tf.compat.v1.train.SummarySaverHook.before_run": true, + "tf.compat.v1.train.SummarySaverHook.begin": true, + "tf.compat.v1.train.SummarySaverHook.end": true, + "tf.compat.v1.train.Supervisor": false, + "tf.compat.v1.train.Supervisor.Loop": true, + "tf.compat.v1.train.Supervisor.PrepareSession": true, + "tf.compat.v1.train.Supervisor.RequestStop": true, + "tf.compat.v1.train.Supervisor.ShouldStop": true, + "tf.compat.v1.train.Supervisor.StartQueueRunners": true, + "tf.compat.v1.train.Supervisor.StartStandardServices": true, + "tf.compat.v1.train.Supervisor.Stop": true, + "tf.compat.v1.train.Supervisor.StopOnException": true, + "tf.compat.v1.train.Supervisor.SummaryComputed": true, + "tf.compat.v1.train.Supervisor.USE_DEFAULT": true, + "tf.compat.v1.train.Supervisor.WaitForStop": true, + "tf.compat.v1.train.Supervisor.__eq__": true, + 
"tf.compat.v1.train.Supervisor.__ge__": true, + "tf.compat.v1.train.Supervisor.__gt__": true, + "tf.compat.v1.train.Supervisor.__init__": true, + "tf.compat.v1.train.Supervisor.__le__": true, + "tf.compat.v1.train.Supervisor.__lt__": true, + "tf.compat.v1.train.Supervisor.__ne__": true, + "tf.compat.v1.train.Supervisor.__new__": true, + "tf.compat.v1.train.Supervisor.coord": true, + "tf.compat.v1.train.Supervisor.global_step": true, + "tf.compat.v1.train.Supervisor.init_feed_dict": true, + "tf.compat.v1.train.Supervisor.init_op": true, + "tf.compat.v1.train.Supervisor.is_chief": true, + "tf.compat.v1.train.Supervisor.loop": true, + "tf.compat.v1.train.Supervisor.managed_session": true, + "tf.compat.v1.train.Supervisor.prepare_or_wait_for_session": true, + "tf.compat.v1.train.Supervisor.ready_for_local_init_op": true, + "tf.compat.v1.train.Supervisor.ready_op": true, + "tf.compat.v1.train.Supervisor.request_stop": true, + "tf.compat.v1.train.Supervisor.save_model_secs": true, + "tf.compat.v1.train.Supervisor.save_path": true, + "tf.compat.v1.train.Supervisor.save_summaries_secs": true, + "tf.compat.v1.train.Supervisor.saver": true, + "tf.compat.v1.train.Supervisor.session_manager": true, + "tf.compat.v1.train.Supervisor.should_stop": true, + "tf.compat.v1.train.Supervisor.start_queue_runners": true, + "tf.compat.v1.train.Supervisor.start_standard_services": true, + "tf.compat.v1.train.Supervisor.stop": true, + "tf.compat.v1.train.Supervisor.stop_on_exception": true, + "tf.compat.v1.train.Supervisor.summary_computed": true, + "tf.compat.v1.train.Supervisor.summary_op": true, + "tf.compat.v1.train.Supervisor.summary_writer": true, + "tf.compat.v1.train.Supervisor.wait_for_stop": true, + "tf.compat.v1.train.SyncReplicasOptimizer": false, + "tf.compat.v1.train.SyncReplicasOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.SyncReplicasOptimizer.GATE_NONE": true, + "tf.compat.v1.train.SyncReplicasOptimizer.GATE_OP": true, + "tf.compat.v1.train.SyncReplicasOptimizer.__eq__": true, + "tf.compat.v1.train.SyncReplicasOptimizer.__ge__": true, + "tf.compat.v1.train.SyncReplicasOptimizer.__gt__": true, + "tf.compat.v1.train.SyncReplicasOptimizer.__init__": true, + "tf.compat.v1.train.SyncReplicasOptimizer.__le__": true, + "tf.compat.v1.train.SyncReplicasOptimizer.__lt__": true, + "tf.compat.v1.train.SyncReplicasOptimizer.__ne__": true, + "tf.compat.v1.train.SyncReplicasOptimizer.__new__": true, + "tf.compat.v1.train.SyncReplicasOptimizer.apply_gradients": true, + "tf.compat.v1.train.SyncReplicasOptimizer.compute_gradients": true, + "tf.compat.v1.train.SyncReplicasOptimizer.get_chief_queue_runner": true, + "tf.compat.v1.train.SyncReplicasOptimizer.get_init_tokens_op": true, + "tf.compat.v1.train.SyncReplicasOptimizer.get_name": true, + "tf.compat.v1.train.SyncReplicasOptimizer.get_slot": true, + "tf.compat.v1.train.SyncReplicasOptimizer.get_slot_names": true, + "tf.compat.v1.train.SyncReplicasOptimizer.make_session_run_hook": true, + "tf.compat.v1.train.SyncReplicasOptimizer.minimize": true, + "tf.compat.v1.train.SyncReplicasOptimizer.variables": true, + "tf.compat.v1.train.VocabInfo": false, + "tf.compat.v1.train.VocabInfo.__add__": true, + "tf.compat.v1.train.VocabInfo.__contains__": true, + "tf.compat.v1.train.VocabInfo.__eq__": true, + "tf.compat.v1.train.VocabInfo.__ge__": true, + "tf.compat.v1.train.VocabInfo.__getitem__": true, + "tf.compat.v1.train.VocabInfo.__gt__": true, + "tf.compat.v1.train.VocabInfo.__init__": true, + "tf.compat.v1.train.VocabInfo.__iter__": true, + 
"tf.compat.v1.train.VocabInfo.__le__": true, + "tf.compat.v1.train.VocabInfo.__len__": true, + "tf.compat.v1.train.VocabInfo.__lt__": true, + "tf.compat.v1.train.VocabInfo.__mul__": true, + "tf.compat.v1.train.VocabInfo.__ne__": true, + "tf.compat.v1.train.VocabInfo.__new__": true, + "tf.compat.v1.train.VocabInfo.__rmul__": true, + "tf.compat.v1.train.VocabInfo.axis": true, + "tf.compat.v1.train.VocabInfo.backup_initializer": true, + "tf.compat.v1.train.VocabInfo.count": true, + "tf.compat.v1.train.VocabInfo.index": true, + "tf.compat.v1.train.VocabInfo.new_vocab": true, + "tf.compat.v1.train.VocabInfo.new_vocab_size": true, + "tf.compat.v1.train.VocabInfo.num_oov_buckets": true, + "tf.compat.v1.train.VocabInfo.old_vocab": true, + "tf.compat.v1.train.VocabInfo.old_vocab_size": true, + "tf.compat.v1.train.WorkerSessionCreator": false, + "tf.compat.v1.train.WorkerSessionCreator.__eq__": true, + "tf.compat.v1.train.WorkerSessionCreator.__ge__": true, + "tf.compat.v1.train.WorkerSessionCreator.__gt__": true, + "tf.compat.v1.train.WorkerSessionCreator.__init__": true, + "tf.compat.v1.train.WorkerSessionCreator.__le__": true, + "tf.compat.v1.train.WorkerSessionCreator.__lt__": true, + "tf.compat.v1.train.WorkerSessionCreator.__ne__": true, + "tf.compat.v1.train.WorkerSessionCreator.__new__": true, + "tf.compat.v1.train.WorkerSessionCreator.create_session": true, + "tf.compat.v1.train.add_queue_runner": false, + "tf.compat.v1.train.assert_global_step": false, + "tf.compat.v1.train.basic_train_loop": false, + "tf.compat.v1.train.batch": false, + "tf.compat.v1.train.batch_join": false, + "tf.compat.v1.train.checkpoint_exists": false, + "tf.compat.v1.train.checkpoints_iterator": false, + "tf.compat.v1.train.cosine_decay": false, + "tf.compat.v1.train.cosine_decay_restarts": false, + "tf.compat.v1.train.create_global_step": false, + "tf.compat.v1.train.do_quantize_training_on_graphdef": false, + "tf.compat.v1.train.experimental": false, + "tf.compat.v1.train.experimental.DynamicLossScale": false, + "tf.compat.v1.train.experimental.DynamicLossScale.__call__": true, + "tf.compat.v1.train.experimental.DynamicLossScale.__eq__": true, + "tf.compat.v1.train.experimental.DynamicLossScale.__ge__": true, + "tf.compat.v1.train.experimental.DynamicLossScale.__gt__": true, + "tf.compat.v1.train.experimental.DynamicLossScale.__init__": true, + "tf.compat.v1.train.experimental.DynamicLossScale.__le__": true, + "tf.compat.v1.train.experimental.DynamicLossScale.__lt__": true, + "tf.compat.v1.train.experimental.DynamicLossScale.__ne__": true, + "tf.compat.v1.train.experimental.DynamicLossScale.__new__": true, + "tf.compat.v1.train.experimental.DynamicLossScale.from_config": true, + "tf.compat.v1.train.experimental.DynamicLossScale.get_config": true, + "tf.compat.v1.train.experimental.DynamicLossScale.increment_period": true, + "tf.compat.v1.train.experimental.DynamicLossScale.initial_loss_scale": true, + "tf.compat.v1.train.experimental.DynamicLossScale.multiplier": true, + "tf.compat.v1.train.experimental.DynamicLossScale.update": true, + "tf.compat.v1.train.experimental.FixedLossScale": false, + "tf.compat.v1.train.experimental.FixedLossScale.__call__": true, + "tf.compat.v1.train.experimental.FixedLossScale.__eq__": true, + "tf.compat.v1.train.experimental.FixedLossScale.__ge__": true, + "tf.compat.v1.train.experimental.FixedLossScale.__gt__": true, + "tf.compat.v1.train.experimental.FixedLossScale.__init__": true, + "tf.compat.v1.train.experimental.FixedLossScale.__le__": true, + 
"tf.compat.v1.train.experimental.FixedLossScale.__lt__": true, + "tf.compat.v1.train.experimental.FixedLossScale.__ne__": true, + "tf.compat.v1.train.experimental.FixedLossScale.__new__": true, + "tf.compat.v1.train.experimental.FixedLossScale.from_config": true, + "tf.compat.v1.train.experimental.FixedLossScale.get_config": true, + "tf.compat.v1.train.experimental.FixedLossScale.update": true, + "tf.compat.v1.train.experimental.LossScale": false, + "tf.compat.v1.train.experimental.LossScale.__call__": true, + "tf.compat.v1.train.experimental.LossScale.__eq__": true, + "tf.compat.v1.train.experimental.LossScale.__ge__": true, + "tf.compat.v1.train.experimental.LossScale.__gt__": true, + "tf.compat.v1.train.experimental.LossScale.__init__": true, + "tf.compat.v1.train.experimental.LossScale.__le__": true, + "tf.compat.v1.train.experimental.LossScale.__lt__": true, + "tf.compat.v1.train.experimental.LossScale.__ne__": true, + "tf.compat.v1.train.experimental.LossScale.__new__": true, + "tf.compat.v1.train.experimental.LossScale.from_config": true, + "tf.compat.v1.train.experimental.LossScale.get_config": true, + "tf.compat.v1.train.experimental.LossScale.update": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer": false, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.GATE_GRAPH": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.GATE_NONE": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.GATE_OP": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__eq__": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__ge__": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__gt__": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__init__": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__le__": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__lt__": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__ne__": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.__new__": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.apply_gradients": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.compute_gradients": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.get_name": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.get_slot": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.get_slot_names": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.minimize": true, + "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer.variables": true, + "tf.compat.v1.train.experimental.PythonState": false, + "tf.compat.v1.train.experimental.PythonState.__eq__": true, + "tf.compat.v1.train.experimental.PythonState.__ge__": true, + "tf.compat.v1.train.experimental.PythonState.__gt__": true, + "tf.compat.v1.train.experimental.PythonState.__init__": true, + "tf.compat.v1.train.experimental.PythonState.__le__": true, + "tf.compat.v1.train.experimental.PythonState.__lt__": true, + "tf.compat.v1.train.experimental.PythonState.__ne__": true, + "tf.compat.v1.train.experimental.PythonState.__new__": true, + "tf.compat.v1.train.experimental.PythonState.deserialize": true, + "tf.compat.v1.train.experimental.PythonState.serialize": true, + 
"tf.compat.v1.train.experimental.disable_mixed_precision_graph_rewrite": false, + "tf.compat.v1.train.experimental.enable_mixed_precision_graph_rewrite": false, + "tf.compat.v1.train.exponential_decay": false, + "tf.compat.v1.train.export_meta_graph": false, + "tf.compat.v1.train.generate_checkpoint_state_proto": false, + "tf.compat.v1.train.get_checkpoint_mtimes": false, + "tf.compat.v1.train.get_checkpoint_state": false, + "tf.compat.v1.train.get_global_step": false, + "tf.compat.v1.train.get_or_create_global_step": false, + "tf.compat.v1.train.global_step": false, + "tf.compat.v1.train.import_meta_graph": false, + "tf.compat.v1.train.init_from_checkpoint": false, + "tf.compat.v1.train.input_producer": false, + "tf.compat.v1.train.inverse_time_decay": false, + "tf.compat.v1.train.latest_checkpoint": false, + "tf.compat.v1.train.limit_epochs": false, + "tf.compat.v1.train.linear_cosine_decay": false, + "tf.compat.v1.train.list_variables": false, + "tf.compat.v1.train.load_checkpoint": false, + "tf.compat.v1.train.load_variable": false, + "tf.compat.v1.train.match_filenames_once": false, + "tf.compat.v1.train.maybe_batch": false, + "tf.compat.v1.train.maybe_batch_join": false, + "tf.compat.v1.train.maybe_shuffle_batch": false, + "tf.compat.v1.train.maybe_shuffle_batch_join": false, + "tf.compat.v1.train.natural_exp_decay": false, + "tf.compat.v1.train.noisy_linear_cosine_decay": false, + "tf.compat.v1.train.piecewise_constant": false, + "tf.compat.v1.train.piecewise_constant_decay": false, + "tf.compat.v1.train.polynomial_decay": false, + "tf.compat.v1.train.queue_runner": false, + "tf.compat.v1.train.queue_runner.QueueRunner": false, + "tf.compat.v1.train.queue_runner.QueueRunner.__eq__": true, + "tf.compat.v1.train.queue_runner.QueueRunner.__ge__": true, + "tf.compat.v1.train.queue_runner.QueueRunner.__gt__": true, + "tf.compat.v1.train.queue_runner.QueueRunner.__init__": true, + "tf.compat.v1.train.queue_runner.QueueRunner.__le__": true, + "tf.compat.v1.train.queue_runner.QueueRunner.__lt__": true, + "tf.compat.v1.train.queue_runner.QueueRunner.__ne__": true, + "tf.compat.v1.train.queue_runner.QueueRunner.__new__": true, + "tf.compat.v1.train.queue_runner.QueueRunner.cancel_op": true, + "tf.compat.v1.train.queue_runner.QueueRunner.close_op": true, + "tf.compat.v1.train.queue_runner.QueueRunner.create_threads": true, + "tf.compat.v1.train.queue_runner.QueueRunner.enqueue_ops": true, + "tf.compat.v1.train.queue_runner.QueueRunner.exceptions_raised": true, + "tf.compat.v1.train.queue_runner.QueueRunner.from_proto": true, + "tf.compat.v1.train.queue_runner.QueueRunner.name": true, + "tf.compat.v1.train.queue_runner.QueueRunner.queue": true, + "tf.compat.v1.train.queue_runner.QueueRunner.queue_closed_exception_types": true, + "tf.compat.v1.train.queue_runner.QueueRunner.to_proto": true, + "tf.compat.v1.train.queue_runner.add_queue_runner": false, + "tf.compat.v1.train.queue_runner.start_queue_runners": false, + "tf.compat.v1.train.range_input_producer": false, + "tf.compat.v1.train.remove_checkpoint": false, + "tf.compat.v1.train.replica_device_setter": false, + "tf.compat.v1.train.sdca_fprint": false, + "tf.compat.v1.train.sdca_optimizer": false, + "tf.compat.v1.train.sdca_shrink_l1": false, + "tf.compat.v1.train.shuffle_batch": false, + "tf.compat.v1.train.shuffle_batch_join": false, + "tf.compat.v1.train.slice_input_producer": false, + "tf.compat.v1.train.start_queue_runners": false, + "tf.compat.v1.train.string_input_producer": false, + "tf.compat.v1.train.summary_iterator": false, + 
"tf.compat.v1.train.update_checkpoint_state": false, + "tf.compat.v1.train.warm_start": false, + "tf.compat.v1.train.write_graph": false, + "tf.compat.v1.trainable_variables": false, + "tf.compat.v1.transpose": false, + "tf.compat.v1.truediv": false, + "tf.compat.v1.truncated_normal": false, + "tf.compat.v1.truncated_normal_initializer": false, + "tf.compat.v1.truncated_normal_initializer.__call__": true, + "tf.compat.v1.truncated_normal_initializer.__eq__": true, + "tf.compat.v1.truncated_normal_initializer.__ge__": true, + "tf.compat.v1.truncated_normal_initializer.__gt__": true, + "tf.compat.v1.truncated_normal_initializer.__init__": true, + "tf.compat.v1.truncated_normal_initializer.__le__": true, + "tf.compat.v1.truncated_normal_initializer.__lt__": true, + "tf.compat.v1.truncated_normal_initializer.__ne__": true, + "tf.compat.v1.truncated_normal_initializer.__new__": true, + "tf.compat.v1.truncated_normal_initializer.from_config": true, + "tf.compat.v1.truncated_normal_initializer.get_config": true, + "tf.compat.v1.truncatediv": false, + "tf.compat.v1.truncatemod": false, + "tf.compat.v1.tuple": false, + "tf.compat.v1.uint16": true, + "tf.compat.v1.uint32": true, + "tf.compat.v1.uint64": true, + "tf.compat.v1.uint8": true, + "tf.compat.v1.uniform_unit_scaling_initializer": false, + "tf.compat.v1.uniform_unit_scaling_initializer.__call__": true, + "tf.compat.v1.uniform_unit_scaling_initializer.__eq__": true, + "tf.compat.v1.uniform_unit_scaling_initializer.__ge__": true, + "tf.compat.v1.uniform_unit_scaling_initializer.__gt__": true, + "tf.compat.v1.uniform_unit_scaling_initializer.__init__": true, + "tf.compat.v1.uniform_unit_scaling_initializer.__le__": true, + "tf.compat.v1.uniform_unit_scaling_initializer.__lt__": true, + "tf.compat.v1.uniform_unit_scaling_initializer.__ne__": true, + "tf.compat.v1.uniform_unit_scaling_initializer.__new__": true, + "tf.compat.v1.uniform_unit_scaling_initializer.from_config": true, + "tf.compat.v1.uniform_unit_scaling_initializer.get_config": true, + "tf.compat.v1.unique": false, + "tf.compat.v1.unique_with_counts": false, + "tf.compat.v1.unravel_index": false, + "tf.compat.v1.unsorted_segment_max": false, + "tf.compat.v1.unsorted_segment_mean": false, + "tf.compat.v1.unsorted_segment_min": false, + "tf.compat.v1.unsorted_segment_prod": false, + "tf.compat.v1.unsorted_segment_sqrt_n": false, + "tf.compat.v1.unsorted_segment_sum": false, + "tf.compat.v1.unstack": false, + "tf.compat.v1.user_ops": false, + "tf.compat.v1.user_ops.my_fact": false, + "tf.compat.v1.variable_axis_size_partitioner": false, + "tf.compat.v1.variable_creator_scope": false, + "tf.compat.v1.variable_op_scope": false, + "tf.compat.v1.variable_scope": false, + "tf.compat.v1.variable_scope.__enter__": true, + "tf.compat.v1.variable_scope.__eq__": true, + "tf.compat.v1.variable_scope.__exit__": true, + "tf.compat.v1.variable_scope.__ge__": true, + "tf.compat.v1.variable_scope.__gt__": true, + "tf.compat.v1.variable_scope.__init__": true, + "tf.compat.v1.variable_scope.__le__": true, + "tf.compat.v1.variable_scope.__lt__": true, + "tf.compat.v1.variable_scope.__ne__": true, + "tf.compat.v1.variable_scope.__new__": true, + "tf.compat.v1.variables_initializer": false, + "tf.compat.v1.variance_scaling_initializer": false, + "tf.compat.v1.variance_scaling_initializer.__call__": true, + "tf.compat.v1.variance_scaling_initializer.__eq__": true, + "tf.compat.v1.variance_scaling_initializer.__ge__": true, + "tf.compat.v1.variance_scaling_initializer.__gt__": true, + 
"tf.compat.v1.variance_scaling_initializer.__init__": true, + "tf.compat.v1.variance_scaling_initializer.__le__": true, + "tf.compat.v1.variance_scaling_initializer.__lt__": true, + "tf.compat.v1.variance_scaling_initializer.__ne__": true, + "tf.compat.v1.variance_scaling_initializer.__new__": true, + "tf.compat.v1.variance_scaling_initializer.from_config": true, + "tf.compat.v1.variance_scaling_initializer.get_config": true, + "tf.compat.v1.variant": true, + "tf.compat.v1.vectorized_map": false, + "tf.compat.v1.verify_tensor_all_finite": false, + "tf.compat.v1.version": false, + "tf.compat.v1.version.COMPILER_VERSION": true, + "tf.compat.v1.version.GIT_VERSION": true, + "tf.compat.v1.version.GRAPH_DEF_VERSION": true, + "tf.compat.v1.version.GRAPH_DEF_VERSION_MIN_CONSUMER": true, + "tf.compat.v1.version.GRAPH_DEF_VERSION_MIN_PRODUCER": true, + "tf.compat.v1.version.VERSION": true, + "tf.compat.v1.where": false, + "tf.compat.v1.where_v2": false, + "tf.compat.v1.while_loop": false, + "tf.compat.v1.wrap_function": false, + "tf.compat.v1.write_file": false, + "tf.compat.v1.xla": false, + "tf.compat.v1.xla.experimental": false, + "tf.compat.v1.xla.experimental.compile": false, + "tf.compat.v1.xla.experimental.jit_scope": false, + "tf.compat.v1.zeros": false, + "tf.compat.v1.zeros_initializer": false, + "tf.compat.v1.zeros_initializer.__call__": true, + "tf.compat.v1.zeros_initializer.__eq__": true, + "tf.compat.v1.zeros_initializer.__ge__": true, + "tf.compat.v1.zeros_initializer.__gt__": true, + "tf.compat.v1.zeros_initializer.__init__": true, + "tf.compat.v1.zeros_initializer.__le__": true, + "tf.compat.v1.zeros_initializer.__lt__": true, + "tf.compat.v1.zeros_initializer.__ne__": true, + "tf.compat.v1.zeros_initializer.__new__": true, + "tf.compat.v1.zeros_initializer.from_config": true, + "tf.compat.v1.zeros_initializer.get_config": true, + "tf.compat.v1.zeros_like": false, + "tf.compat.v1.zeta": false, + "tf.complex": false, + "tf.complex128": true, + "tf.complex64": true, + "tf.concat": false, + "tf.cond": false, + "tf.config": false, + "tf.config.LogicalDevice": false, + "tf.config.LogicalDevice.__add__": true, + "tf.config.LogicalDevice.__contains__": true, + "tf.config.LogicalDevice.__eq__": true, + "tf.config.LogicalDevice.__ge__": true, + "tf.config.LogicalDevice.__getitem__": true, + "tf.config.LogicalDevice.__gt__": true, + "tf.config.LogicalDevice.__init__": true, + "tf.config.LogicalDevice.__iter__": true, + "tf.config.LogicalDevice.__le__": true, + "tf.config.LogicalDevice.__len__": true, + "tf.config.LogicalDevice.__lt__": true, + "tf.config.LogicalDevice.__mul__": true, + "tf.config.LogicalDevice.__ne__": true, + "tf.config.LogicalDevice.__new__": true, + "tf.config.LogicalDevice.__rmul__": true, + "tf.config.LogicalDevice.count": true, + "tf.config.LogicalDevice.device_type": true, + "tf.config.LogicalDevice.index": true, + "tf.config.LogicalDevice.name": true, + "tf.config.LogicalDeviceConfiguration": false, + "tf.config.LogicalDeviceConfiguration.__add__": true, + "tf.config.LogicalDeviceConfiguration.__contains__": true, + "tf.config.LogicalDeviceConfiguration.__eq__": true, + "tf.config.LogicalDeviceConfiguration.__ge__": true, + "tf.config.LogicalDeviceConfiguration.__getitem__": true, + "tf.config.LogicalDeviceConfiguration.__gt__": true, + "tf.config.LogicalDeviceConfiguration.__init__": true, + "tf.config.LogicalDeviceConfiguration.__iter__": true, + "tf.config.LogicalDeviceConfiguration.__le__": true, + "tf.config.LogicalDeviceConfiguration.__len__": true, + 
"tf.config.LogicalDeviceConfiguration.__lt__": true, + "tf.config.LogicalDeviceConfiguration.__mul__": true, + "tf.config.LogicalDeviceConfiguration.__ne__": true, + "tf.config.LogicalDeviceConfiguration.__new__": true, + "tf.config.LogicalDeviceConfiguration.__rmul__": true, + "tf.config.LogicalDeviceConfiguration.count": true, + "tf.config.LogicalDeviceConfiguration.index": true, + "tf.config.LogicalDeviceConfiguration.memory_limit": true, + "tf.config.PhysicalDevice": false, + "tf.config.PhysicalDevice.__add__": true, + "tf.config.PhysicalDevice.__contains__": true, + "tf.config.PhysicalDevice.__eq__": true, + "tf.config.PhysicalDevice.__ge__": true, + "tf.config.PhysicalDevice.__getitem__": true, + "tf.config.PhysicalDevice.__gt__": true, + "tf.config.PhysicalDevice.__init__": true, + "tf.config.PhysicalDevice.__iter__": true, + "tf.config.PhysicalDevice.__le__": true, + "tf.config.PhysicalDevice.__len__": true, + "tf.config.PhysicalDevice.__lt__": true, + "tf.config.PhysicalDevice.__mul__": true, + "tf.config.PhysicalDevice.__ne__": true, + "tf.config.PhysicalDevice.__new__": true, + "tf.config.PhysicalDevice.__rmul__": true, + "tf.config.PhysicalDevice.count": true, + "tf.config.PhysicalDevice.device_type": true, + "tf.config.PhysicalDevice.index": true, + "tf.config.PhysicalDevice.name": true, + "tf.config.experimental": false, + "tf.config.experimental.ClusterDeviceFilters": false, + "tf.config.experimental.ClusterDeviceFilters.__eq__": true, + "tf.config.experimental.ClusterDeviceFilters.__ge__": true, + "tf.config.experimental.ClusterDeviceFilters.__gt__": true, + "tf.config.experimental.ClusterDeviceFilters.__init__": true, + "tf.config.experimental.ClusterDeviceFilters.__le__": true, + "tf.config.experimental.ClusterDeviceFilters.__lt__": true, + "tf.config.experimental.ClusterDeviceFilters.__ne__": true, + "tf.config.experimental.ClusterDeviceFilters.__new__": true, + "tf.config.experimental.ClusterDeviceFilters.set_device_filters": true, + "tf.config.experimental.VirtualDeviceConfiguration": false, + "tf.config.experimental.VirtualDeviceConfiguration.__add__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__contains__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__eq__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__ge__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__getitem__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__gt__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__init__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__iter__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__le__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__len__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__lt__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__mul__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__ne__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__new__": true, + "tf.config.experimental.VirtualDeviceConfiguration.__rmul__": true, + "tf.config.experimental.VirtualDeviceConfiguration.count": true, + "tf.config.experimental.VirtualDeviceConfiguration.index": true, + "tf.config.experimental.VirtualDeviceConfiguration.memory_limit": true, + "tf.config.experimental.disable_mlir_bridge": false, + "tf.config.experimental.enable_mlir_bridge": false, + "tf.config.experimental.get_device_policy": false, + "tf.config.experimental.get_memory_growth": false, + 
"tf.config.experimental.get_synchronous_execution": false, + "tf.config.experimental.get_virtual_device_configuration": false, + "tf.config.experimental.get_visible_devices": false, + "tf.config.experimental.list_logical_devices": false, + "tf.config.experimental.list_physical_devices": false, + "tf.config.experimental.set_device_policy": false, + "tf.config.experimental.set_memory_growth": false, + "tf.config.experimental.set_synchronous_execution": false, + "tf.config.experimental.set_virtual_device_configuration": false, + "tf.config.experimental.set_visible_devices": false, + "tf.config.experimental_connect_to_cluster": false, + "tf.config.experimental_connect_to_host": false, + "tf.config.experimental_functions_run_eagerly": false, + "tf.config.experimental_run_functions_eagerly": false, + "tf.config.get_logical_device_configuration": false, + "tf.config.get_soft_device_placement": false, + "tf.config.get_visible_devices": false, + "tf.config.list_logical_devices": false, + "tf.config.list_physical_devices": false, + "tf.config.optimizer": false, + "tf.config.optimizer.get_experimental_options": false, + "tf.config.optimizer.get_jit": false, + "tf.config.optimizer.set_experimental_options": false, + "tf.config.optimizer.set_jit": false, + "tf.config.set_logical_device_configuration": false, + "tf.config.set_soft_device_placement": false, + "tf.config.set_visible_devices": false, + "tf.config.threading": false, + "tf.config.threading.get_inter_op_parallelism_threads": false, + "tf.config.threading.get_intra_op_parallelism_threads": false, + "tf.config.threading.set_inter_op_parallelism_threads": false, + "tf.config.threading.set_intra_op_parallelism_threads": false, + "tf.constant": false, + "tf.constant_initializer": false, + "tf.constant_initializer.__call__": true, + "tf.constant_initializer.__eq__": true, + "tf.constant_initializer.__ge__": true, + "tf.constant_initializer.__gt__": true, + "tf.constant_initializer.__init__": true, + "tf.constant_initializer.__le__": true, + "tf.constant_initializer.__lt__": true, + "tf.constant_initializer.__ne__": true, + "tf.constant_initializer.__new__": true, + "tf.constant_initializer.from_config": true, + "tf.constant_initializer.get_config": true, + "tf.control_dependencies": false, + "tf.convert_to_tensor": false, + "tf.cos": false, + "tf.cosh": false, + "tf.cumsum": false, + "tf.custom_gradient": false, + "tf.data": false, + "tf.data.Dataset": false, + "tf.data.Dataset.__eq__": true, + "tf.data.Dataset.__ge__": true, + "tf.data.Dataset.__gt__": true, + "tf.data.Dataset.__init__": true, + "tf.data.Dataset.__iter__": true, + "tf.data.Dataset.__le__": true, + "tf.data.Dataset.__lt__": true, + "tf.data.Dataset.__ne__": true, + "tf.data.Dataset.__new__": true, + "tf.data.Dataset.apply": true, + "tf.data.Dataset.as_numpy_iterator": true, + "tf.data.Dataset.batch": true, + "tf.data.Dataset.cache": true, + "tf.data.Dataset.concatenate": true, + "tf.data.Dataset.element_spec": true, + "tf.data.Dataset.enumerate": true, + "tf.data.Dataset.filter": true, + "tf.data.Dataset.flat_map": true, + "tf.data.Dataset.from_generator": true, + "tf.data.Dataset.from_tensor_slices": true, + "tf.data.Dataset.from_tensors": true, + "tf.data.Dataset.interleave": true, + "tf.data.Dataset.list_files": true, + "tf.data.Dataset.map": true, + "tf.data.Dataset.options": true, + "tf.data.Dataset.padded_batch": true, + "tf.data.Dataset.prefetch": true, + "tf.data.Dataset.range": true, + "tf.data.Dataset.reduce": true, + "tf.data.Dataset.repeat": true, + 
"tf.data.Dataset.shard": true, + "tf.data.Dataset.shuffle": true, + "tf.data.Dataset.skip": true, + "tf.data.Dataset.take": true, + "tf.data.Dataset.unbatch": true, + "tf.data.Dataset.window": true, + "tf.data.Dataset.with_options": true, + "tf.data.Dataset.zip": true, + "tf.data.DatasetSpec": false, + "tf.data.DatasetSpec.__eq__": true, + "tf.data.DatasetSpec.__ge__": true, + "tf.data.DatasetSpec.__gt__": true, + "tf.data.DatasetSpec.__init__": true, + "tf.data.DatasetSpec.__le__": true, + "tf.data.DatasetSpec.__lt__": true, + "tf.data.DatasetSpec.__ne__": true, + "tf.data.DatasetSpec.__new__": true, + "tf.data.DatasetSpec.from_value": true, + "tf.data.DatasetSpec.is_compatible_with": true, + "tf.data.DatasetSpec.most_specific_compatible_type": true, + "tf.data.DatasetSpec.value_type": true, + "tf.data.FixedLengthRecordDataset": false, + "tf.data.FixedLengthRecordDataset.__eq__": true, + "tf.data.FixedLengthRecordDataset.__ge__": true, + "tf.data.FixedLengthRecordDataset.__gt__": true, + "tf.data.FixedLengthRecordDataset.__init__": true, + "tf.data.FixedLengthRecordDataset.__iter__": true, + "tf.data.FixedLengthRecordDataset.__le__": true, + "tf.data.FixedLengthRecordDataset.__lt__": true, + "tf.data.FixedLengthRecordDataset.__ne__": true, + "tf.data.FixedLengthRecordDataset.__new__": true, + "tf.data.FixedLengthRecordDataset.apply": true, + "tf.data.FixedLengthRecordDataset.as_numpy_iterator": true, + "tf.data.FixedLengthRecordDataset.batch": true, + "tf.data.FixedLengthRecordDataset.cache": true, + "tf.data.FixedLengthRecordDataset.concatenate": true, + "tf.data.FixedLengthRecordDataset.element_spec": true, + "tf.data.FixedLengthRecordDataset.enumerate": true, + "tf.data.FixedLengthRecordDataset.filter": true, + "tf.data.FixedLengthRecordDataset.flat_map": true, + "tf.data.FixedLengthRecordDataset.from_generator": true, + "tf.data.FixedLengthRecordDataset.from_tensor_slices": true, + "tf.data.FixedLengthRecordDataset.from_tensors": true, + "tf.data.FixedLengthRecordDataset.interleave": true, + "tf.data.FixedLengthRecordDataset.list_files": true, + "tf.data.FixedLengthRecordDataset.map": true, + "tf.data.FixedLengthRecordDataset.options": true, + "tf.data.FixedLengthRecordDataset.padded_batch": true, + "tf.data.FixedLengthRecordDataset.prefetch": true, + "tf.data.FixedLengthRecordDataset.range": true, + "tf.data.FixedLengthRecordDataset.reduce": true, + "tf.data.FixedLengthRecordDataset.repeat": true, + "tf.data.FixedLengthRecordDataset.shard": true, + "tf.data.FixedLengthRecordDataset.shuffle": true, + "tf.data.FixedLengthRecordDataset.skip": true, + "tf.data.FixedLengthRecordDataset.take": true, + "tf.data.FixedLengthRecordDataset.unbatch": true, + "tf.data.FixedLengthRecordDataset.window": true, + "tf.data.FixedLengthRecordDataset.with_options": true, + "tf.data.FixedLengthRecordDataset.zip": true, + "tf.data.Options": false, + "tf.data.Options.__eq__": true, + "tf.data.Options.__ge__": true, + "tf.data.Options.__gt__": true, + "tf.data.Options.__init__": true, + "tf.data.Options.__le__": true, + "tf.data.Options.__lt__": true, + "tf.data.Options.__ne__": true, + "tf.data.Options.__new__": true, + "tf.data.Options.experimental_deterministic": true, + "tf.data.Options.experimental_distribute": true, + "tf.data.Options.experimental_external_state_policy": true, + "tf.data.Options.experimental_optimization": true, + "tf.data.Options.experimental_slack": true, + "tf.data.Options.experimental_stats": true, + "tf.data.Options.experimental_threading": true, + "tf.data.Options.merge": true, + 
"tf.data.TFRecordDataset": false, + "tf.data.TFRecordDataset.__eq__": true, + "tf.data.TFRecordDataset.__ge__": true, + "tf.data.TFRecordDataset.__gt__": true, + "tf.data.TFRecordDataset.__init__": true, + "tf.data.TFRecordDataset.__iter__": true, + "tf.data.TFRecordDataset.__le__": true, + "tf.data.TFRecordDataset.__lt__": true, + "tf.data.TFRecordDataset.__ne__": true, + "tf.data.TFRecordDataset.__new__": true, + "tf.data.TFRecordDataset.apply": true, + "tf.data.TFRecordDataset.as_numpy_iterator": true, + "tf.data.TFRecordDataset.batch": true, + "tf.data.TFRecordDataset.cache": true, + "tf.data.TFRecordDataset.concatenate": true, + "tf.data.TFRecordDataset.element_spec": true, + "tf.data.TFRecordDataset.enumerate": true, + "tf.data.TFRecordDataset.filter": true, + "tf.data.TFRecordDataset.flat_map": true, + "tf.data.TFRecordDataset.from_generator": true, + "tf.data.TFRecordDataset.from_tensor_slices": true, + "tf.data.TFRecordDataset.from_tensors": true, + "tf.data.TFRecordDataset.interleave": true, + "tf.data.TFRecordDataset.list_files": true, + "tf.data.TFRecordDataset.map": true, + "tf.data.TFRecordDataset.options": true, + "tf.data.TFRecordDataset.padded_batch": true, + "tf.data.TFRecordDataset.prefetch": true, + "tf.data.TFRecordDataset.range": true, + "tf.data.TFRecordDataset.reduce": true, + "tf.data.TFRecordDataset.repeat": true, + "tf.data.TFRecordDataset.shard": true, + "tf.data.TFRecordDataset.shuffle": true, + "tf.data.TFRecordDataset.skip": true, + "tf.data.TFRecordDataset.take": true, + "tf.data.TFRecordDataset.unbatch": true, + "tf.data.TFRecordDataset.window": true, + "tf.data.TFRecordDataset.with_options": true, + "tf.data.TFRecordDataset.zip": true, + "tf.data.TextLineDataset": false, + "tf.data.TextLineDataset.__eq__": true, + "tf.data.TextLineDataset.__ge__": true, + "tf.data.TextLineDataset.__gt__": true, + "tf.data.TextLineDataset.__init__": true, + "tf.data.TextLineDataset.__iter__": true, + "tf.data.TextLineDataset.__le__": true, + "tf.data.TextLineDataset.__lt__": true, + "tf.data.TextLineDataset.__ne__": true, + "tf.data.TextLineDataset.__new__": true, + "tf.data.TextLineDataset.apply": true, + "tf.data.TextLineDataset.as_numpy_iterator": true, + "tf.data.TextLineDataset.batch": true, + "tf.data.TextLineDataset.cache": true, + "tf.data.TextLineDataset.concatenate": true, + "tf.data.TextLineDataset.element_spec": true, + "tf.data.TextLineDataset.enumerate": true, + "tf.data.TextLineDataset.filter": true, + "tf.data.TextLineDataset.flat_map": true, + "tf.data.TextLineDataset.from_generator": true, + "tf.data.TextLineDataset.from_tensor_slices": true, + "tf.data.TextLineDataset.from_tensors": true, + "tf.data.TextLineDataset.interleave": true, + "tf.data.TextLineDataset.list_files": true, + "tf.data.TextLineDataset.map": true, + "tf.data.TextLineDataset.options": true, + "tf.data.TextLineDataset.padded_batch": true, + "tf.data.TextLineDataset.prefetch": true, + "tf.data.TextLineDataset.range": true, + "tf.data.TextLineDataset.reduce": true, + "tf.data.TextLineDataset.repeat": true, + "tf.data.TextLineDataset.shard": true, + "tf.data.TextLineDataset.shuffle": true, + "tf.data.TextLineDataset.skip": true, + "tf.data.TextLineDataset.take": true, + "tf.data.TextLineDataset.unbatch": true, + "tf.data.TextLineDataset.window": true, + "tf.data.TextLineDataset.with_options": true, + "tf.data.TextLineDataset.zip": true, + "tf.data.experimental": false, + "tf.data.experimental.AUTOTUNE": true, + "tf.data.experimental.AutoShardPolicy": false, + 
"tf.data.experimental.AutoShardPolicy.AUTO": true, + "tf.data.experimental.AutoShardPolicy.DATA": true, + "tf.data.experimental.AutoShardPolicy.FILE": true, + "tf.data.experimental.AutoShardPolicy.OFF": true, + "tf.data.experimental.CheckpointInputPipelineHook": false, + "tf.data.experimental.CheckpointInputPipelineHook.__eq__": true, + "tf.data.experimental.CheckpointInputPipelineHook.__ge__": true, + "tf.data.experimental.CheckpointInputPipelineHook.__gt__": true, + "tf.data.experimental.CheckpointInputPipelineHook.__init__": true, + "tf.data.experimental.CheckpointInputPipelineHook.__le__": true, + "tf.data.experimental.CheckpointInputPipelineHook.__lt__": true, + "tf.data.experimental.CheckpointInputPipelineHook.__ne__": true, + "tf.data.experimental.CheckpointInputPipelineHook.__new__": true, + "tf.data.experimental.CheckpointInputPipelineHook.after_create_session": true, + "tf.data.experimental.CheckpointInputPipelineHook.after_run": true, + "tf.data.experimental.CheckpointInputPipelineHook.before_run": true, + "tf.data.experimental.CheckpointInputPipelineHook.begin": true, + "tf.data.experimental.CheckpointInputPipelineHook.end": true, + "tf.data.experimental.Counter": false, + "tf.data.experimental.CsvDataset": false, + "tf.data.experimental.CsvDataset.__eq__": true, + "tf.data.experimental.CsvDataset.__ge__": true, + "tf.data.experimental.CsvDataset.__gt__": true, + "tf.data.experimental.CsvDataset.__init__": true, + "tf.data.experimental.CsvDataset.__iter__": true, + "tf.data.experimental.CsvDataset.__le__": true, + "tf.data.experimental.CsvDataset.__lt__": true, + "tf.data.experimental.CsvDataset.__ne__": true, + "tf.data.experimental.CsvDataset.__new__": true, + "tf.data.experimental.CsvDataset.apply": true, + "tf.data.experimental.CsvDataset.as_numpy_iterator": true, + "tf.data.experimental.CsvDataset.batch": true, + "tf.data.experimental.CsvDataset.cache": true, + "tf.data.experimental.CsvDataset.concatenate": true, + "tf.data.experimental.CsvDataset.element_spec": true, + "tf.data.experimental.CsvDataset.enumerate": true, + "tf.data.experimental.CsvDataset.filter": true, + "tf.data.experimental.CsvDataset.flat_map": true, + "tf.data.experimental.CsvDataset.from_generator": true, + "tf.data.experimental.CsvDataset.from_tensor_slices": true, + "tf.data.experimental.CsvDataset.from_tensors": true, + "tf.data.experimental.CsvDataset.interleave": true, + "tf.data.experimental.CsvDataset.list_files": true, + "tf.data.experimental.CsvDataset.map": true, + "tf.data.experimental.CsvDataset.options": true, + "tf.data.experimental.CsvDataset.padded_batch": true, + "tf.data.experimental.CsvDataset.prefetch": true, + "tf.data.experimental.CsvDataset.range": true, + "tf.data.experimental.CsvDataset.reduce": true, + "tf.data.experimental.CsvDataset.repeat": true, + "tf.data.experimental.CsvDataset.shard": true, + "tf.data.experimental.CsvDataset.shuffle": true, + "tf.data.experimental.CsvDataset.skip": true, + "tf.data.experimental.CsvDataset.take": true, + "tf.data.experimental.CsvDataset.unbatch": true, + "tf.data.experimental.CsvDataset.window": true, + "tf.data.experimental.CsvDataset.with_options": true, + "tf.data.experimental.CsvDataset.zip": true, + "tf.data.experimental.DistributeOptions": false, + "tf.data.experimental.DistributeOptions.__eq__": true, + "tf.data.experimental.DistributeOptions.__ge__": true, + "tf.data.experimental.DistributeOptions.__gt__": true, + "tf.data.experimental.DistributeOptions.__init__": true, + "tf.data.experimental.DistributeOptions.__le__": true, + 
"tf.data.experimental.DistributeOptions.__lt__": true, + "tf.data.experimental.DistributeOptions.__ne__": true, + "tf.data.experimental.DistributeOptions.__new__": true, + "tf.data.experimental.DistributeOptions.auto_shard_policy": true, + "tf.data.experimental.DistributeOptions.num_devices": true, + "tf.data.experimental.INFINITE_CARDINALITY": true, + "tf.data.experimental.MapVectorizationOptions": false, + "tf.data.experimental.MapVectorizationOptions.__eq__": true, + "tf.data.experimental.MapVectorizationOptions.__ge__": true, + "tf.data.experimental.MapVectorizationOptions.__gt__": true, + "tf.data.experimental.MapVectorizationOptions.__init__": true, + "tf.data.experimental.MapVectorizationOptions.__le__": true, + "tf.data.experimental.MapVectorizationOptions.__lt__": true, + "tf.data.experimental.MapVectorizationOptions.__ne__": true, + "tf.data.experimental.MapVectorizationOptions.__new__": true, + "tf.data.experimental.MapVectorizationOptions.enabled": true, + "tf.data.experimental.MapVectorizationOptions.use_choose_fastest": true, + "tf.data.experimental.OptimizationOptions": false, + "tf.data.experimental.OptimizationOptions.__eq__": true, + "tf.data.experimental.OptimizationOptions.__ge__": true, + "tf.data.experimental.OptimizationOptions.__gt__": true, + "tf.data.experimental.OptimizationOptions.__init__": true, + "tf.data.experimental.OptimizationOptions.__le__": true, + "tf.data.experimental.OptimizationOptions.__lt__": true, + "tf.data.experimental.OptimizationOptions.__ne__": true, + "tf.data.experimental.OptimizationOptions.__new__": true, + "tf.data.experimental.OptimizationOptions.apply_default_optimizations": true, + "tf.data.experimental.OptimizationOptions.autotune": true, + "tf.data.experimental.OptimizationOptions.autotune_buffers": true, + "tf.data.experimental.OptimizationOptions.autotune_cpu_budget": true, + "tf.data.experimental.OptimizationOptions.filter_fusion": true, + "tf.data.experimental.OptimizationOptions.filter_with_random_uniform_fusion": true, + "tf.data.experimental.OptimizationOptions.hoist_random_uniform": true, + "tf.data.experimental.OptimizationOptions.map_and_batch_fusion": true, + "tf.data.experimental.OptimizationOptions.map_and_filter_fusion": true, + "tf.data.experimental.OptimizationOptions.map_fusion": true, + "tf.data.experimental.OptimizationOptions.map_parallelization": true, + "tf.data.experimental.OptimizationOptions.map_vectorization": true, + "tf.data.experimental.OptimizationOptions.noop_elimination": true, + "tf.data.experimental.OptimizationOptions.parallel_batch": true, + "tf.data.experimental.OptimizationOptions.shuffle_and_repeat_fusion": true, + "tf.data.experimental.Optional": false, + "tf.data.experimental.Optional.__eq__": true, + "tf.data.experimental.Optional.__ge__": true, + "tf.data.experimental.Optional.__gt__": true, + "tf.data.experimental.Optional.__init__": true, + "tf.data.experimental.Optional.__le__": true, + "tf.data.experimental.Optional.__lt__": true, + "tf.data.experimental.Optional.__ne__": true, + "tf.data.experimental.Optional.__new__": true, + "tf.data.experimental.Optional.from_value": true, + "tf.data.experimental.Optional.get_value": true, + "tf.data.experimental.Optional.has_value": true, + "tf.data.experimental.Optional.none_from_structure": true, + "tf.data.experimental.Optional.value_structure": true, + "tf.data.experimental.RandomDataset": false, + "tf.data.experimental.RandomDataset.__eq__": true, + "tf.data.experimental.RandomDataset.__ge__": true, + 
"tf.data.experimental.RandomDataset.__gt__": true, + "tf.data.experimental.RandomDataset.__init__": true, + "tf.data.experimental.RandomDataset.__iter__": true, + "tf.data.experimental.RandomDataset.__le__": true, + "tf.data.experimental.RandomDataset.__lt__": true, + "tf.data.experimental.RandomDataset.__ne__": true, + "tf.data.experimental.RandomDataset.__new__": true, + "tf.data.experimental.RandomDataset.apply": true, + "tf.data.experimental.RandomDataset.as_numpy_iterator": true, + "tf.data.experimental.RandomDataset.batch": true, + "tf.data.experimental.RandomDataset.cache": true, + "tf.data.experimental.RandomDataset.concatenate": true, + "tf.data.experimental.RandomDataset.element_spec": true, + "tf.data.experimental.RandomDataset.enumerate": true, + "tf.data.experimental.RandomDataset.filter": true, + "tf.data.experimental.RandomDataset.flat_map": true, + "tf.data.experimental.RandomDataset.from_generator": true, + "tf.data.experimental.RandomDataset.from_tensor_slices": true, + "tf.data.experimental.RandomDataset.from_tensors": true, + "tf.data.experimental.RandomDataset.interleave": true, + "tf.data.experimental.RandomDataset.list_files": true, + "tf.data.experimental.RandomDataset.map": true, + "tf.data.experimental.RandomDataset.options": true, + "tf.data.experimental.RandomDataset.padded_batch": true, + "tf.data.experimental.RandomDataset.prefetch": true, + "tf.data.experimental.RandomDataset.range": true, + "tf.data.experimental.RandomDataset.reduce": true, + "tf.data.experimental.RandomDataset.repeat": true, + "tf.data.experimental.RandomDataset.shard": true, + "tf.data.experimental.RandomDataset.shuffle": true, + "tf.data.experimental.RandomDataset.skip": true, + "tf.data.experimental.RandomDataset.take": true, + "tf.data.experimental.RandomDataset.unbatch": true, + "tf.data.experimental.RandomDataset.window": true, + "tf.data.experimental.RandomDataset.with_options": true, + "tf.data.experimental.RandomDataset.zip": true, + "tf.data.experimental.Reducer": false, + "tf.data.experimental.Reducer.__eq__": true, + "tf.data.experimental.Reducer.__ge__": true, + "tf.data.experimental.Reducer.__gt__": true, + "tf.data.experimental.Reducer.__init__": true, + "tf.data.experimental.Reducer.__le__": true, + "tf.data.experimental.Reducer.__lt__": true, + "tf.data.experimental.Reducer.__ne__": true, + "tf.data.experimental.Reducer.__new__": true, + "tf.data.experimental.Reducer.finalize_func": true, + "tf.data.experimental.Reducer.init_func": true, + "tf.data.experimental.Reducer.reduce_func": true, + "tf.data.experimental.SqlDataset": false, + "tf.data.experimental.SqlDataset.__eq__": true, + "tf.data.experimental.SqlDataset.__ge__": true, + "tf.data.experimental.SqlDataset.__gt__": true, + "tf.data.experimental.SqlDataset.__init__": true, + "tf.data.experimental.SqlDataset.__iter__": true, + "tf.data.experimental.SqlDataset.__le__": true, + "tf.data.experimental.SqlDataset.__lt__": true, + "tf.data.experimental.SqlDataset.__ne__": true, + "tf.data.experimental.SqlDataset.__new__": true, + "tf.data.experimental.SqlDataset.apply": true, + "tf.data.experimental.SqlDataset.as_numpy_iterator": true, + "tf.data.experimental.SqlDataset.batch": true, + "tf.data.experimental.SqlDataset.cache": true, + "tf.data.experimental.SqlDataset.concatenate": true, + "tf.data.experimental.SqlDataset.element_spec": true, + "tf.data.experimental.SqlDataset.enumerate": true, + "tf.data.experimental.SqlDataset.filter": true, + "tf.data.experimental.SqlDataset.flat_map": true, + 
"tf.data.experimental.SqlDataset.from_generator": true, + "tf.data.experimental.SqlDataset.from_tensor_slices": true, + "tf.data.experimental.SqlDataset.from_tensors": true, + "tf.data.experimental.SqlDataset.interleave": true, + "tf.data.experimental.SqlDataset.list_files": true, + "tf.data.experimental.SqlDataset.map": true, + "tf.data.experimental.SqlDataset.options": true, + "tf.data.experimental.SqlDataset.padded_batch": true, + "tf.data.experimental.SqlDataset.prefetch": true, + "tf.data.experimental.SqlDataset.range": true, + "tf.data.experimental.SqlDataset.reduce": true, + "tf.data.experimental.SqlDataset.repeat": true, + "tf.data.experimental.SqlDataset.shard": true, + "tf.data.experimental.SqlDataset.shuffle": true, + "tf.data.experimental.SqlDataset.skip": true, + "tf.data.experimental.SqlDataset.take": true, + "tf.data.experimental.SqlDataset.unbatch": true, + "tf.data.experimental.SqlDataset.window": true, + "tf.data.experimental.SqlDataset.with_options": true, + "tf.data.experimental.SqlDataset.zip": true, + "tf.data.experimental.StatsAggregator": false, + "tf.data.experimental.StatsAggregator.__eq__": true, + "tf.data.experimental.StatsAggregator.__ge__": true, + "tf.data.experimental.StatsAggregator.__gt__": true, + "tf.data.experimental.StatsAggregator.__init__": true, + "tf.data.experimental.StatsAggregator.__le__": true, + "tf.data.experimental.StatsAggregator.__lt__": true, + "tf.data.experimental.StatsAggregator.__ne__": true, + "tf.data.experimental.StatsAggregator.__new__": true, + "tf.data.experimental.StatsOptions": false, + "tf.data.experimental.StatsOptions.__eq__": true, + "tf.data.experimental.StatsOptions.__ge__": true, + "tf.data.experimental.StatsOptions.__gt__": true, + "tf.data.experimental.StatsOptions.__init__": true, + "tf.data.experimental.StatsOptions.__le__": true, + "tf.data.experimental.StatsOptions.__lt__": true, + "tf.data.experimental.StatsOptions.__ne__": true, + "tf.data.experimental.StatsOptions.__new__": true, + "tf.data.experimental.StatsOptions.aggregator": true, + "tf.data.experimental.StatsOptions.counter_prefix": true, + "tf.data.experimental.StatsOptions.latency_all_edges": true, + "tf.data.experimental.StatsOptions.prefix": true, + "tf.data.experimental.TFRecordWriter": false, + "tf.data.experimental.TFRecordWriter.__eq__": true, + "tf.data.experimental.TFRecordWriter.__ge__": true, + "tf.data.experimental.TFRecordWriter.__gt__": true, + "tf.data.experimental.TFRecordWriter.__init__": true, + "tf.data.experimental.TFRecordWriter.__le__": true, + "tf.data.experimental.TFRecordWriter.__lt__": true, + "tf.data.experimental.TFRecordWriter.__ne__": true, + "tf.data.experimental.TFRecordWriter.__new__": true, + "tf.data.experimental.TFRecordWriter.write": true, + "tf.data.experimental.ThreadingOptions": false, + "tf.data.experimental.ThreadingOptions.__eq__": true, + "tf.data.experimental.ThreadingOptions.__ge__": true, + "tf.data.experimental.ThreadingOptions.__gt__": true, + "tf.data.experimental.ThreadingOptions.__init__": true, + "tf.data.experimental.ThreadingOptions.__le__": true, + "tf.data.experimental.ThreadingOptions.__lt__": true, + "tf.data.experimental.ThreadingOptions.__ne__": true, + "tf.data.experimental.ThreadingOptions.__new__": true, + "tf.data.experimental.ThreadingOptions.max_intra_op_parallelism": true, + "tf.data.experimental.ThreadingOptions.private_threadpool_size": true, + "tf.data.experimental.UNKNOWN_CARDINALITY": true, + "tf.data.experimental.assert_cardinality": false, + 
"tf.data.experimental.bucket_by_sequence_length": false, + "tf.data.experimental.bytes_produced_stats": false, + "tf.data.experimental.cardinality": false, + "tf.data.experimental.choose_from_datasets": false, + "tf.data.experimental.copy_to_device": false, + "tf.data.experimental.dense_to_ragged_batch": false, + "tf.data.experimental.dense_to_sparse_batch": false, + "tf.data.experimental.enumerate_dataset": false, + "tf.data.experimental.from_variant": false, + "tf.data.experimental.get_next_as_optional": false, + "tf.data.experimental.get_single_element": false, + "tf.data.experimental.get_structure": false, + "tf.data.experimental.group_by_reducer": false, + "tf.data.experimental.group_by_window": false, + "tf.data.experimental.ignore_errors": false, + "tf.data.experimental.latency_stats": false, + "tf.data.experimental.make_batched_features_dataset": false, + "tf.data.experimental.make_csv_dataset": false, + "tf.data.experimental.make_saveable_from_iterator": false, + "tf.data.experimental.map_and_batch": false, + "tf.data.experimental.parallel_interleave": false, + "tf.data.experimental.parse_example_dataset": false, + "tf.data.experimental.prefetch_to_device": false, + "tf.data.experimental.rejection_resample": false, + "tf.data.experimental.sample_from_datasets": false, + "tf.data.experimental.scan": false, + "tf.data.experimental.shuffle_and_repeat": false, + "tf.data.experimental.take_while": false, + "tf.data.experimental.to_variant": false, + "tf.data.experimental.unbatch": false, + "tf.data.experimental.unique": false, + "tf.debugging": false, + "tf.debugging.Assert": false, + "tf.debugging.assert_all_finite": false, + "tf.debugging.assert_equal": false, + "tf.debugging.assert_greater": false, + "tf.debugging.assert_greater_equal": false, + "tf.debugging.assert_integer": false, + "tf.debugging.assert_less": false, + "tf.debugging.assert_less_equal": false, + "tf.debugging.assert_near": false, + "tf.debugging.assert_negative": false, + "tf.debugging.assert_non_negative": false, + "tf.debugging.assert_non_positive": false, + "tf.debugging.assert_none_equal": false, + "tf.debugging.assert_positive": false, + "tf.debugging.assert_proper_iterable": false, + "tf.debugging.assert_rank": false, + "tf.debugging.assert_rank_at_least": false, + "tf.debugging.assert_rank_in": false, + "tf.debugging.assert_same_float_dtype": false, + "tf.debugging.assert_scalar": false, + "tf.debugging.assert_shapes": false, + "tf.debugging.assert_type": false, + "tf.debugging.check_numerics": false, + "tf.debugging.disable_check_numerics": false, + "tf.debugging.enable_check_numerics": false, + "tf.debugging.experimental": false, + "tf.debugging.experimental.disable_dump_debug_info": false, + "tf.debugging.experimental.enable_dump_debug_info": false, + "tf.debugging.get_log_device_placement": false, + "tf.debugging.is_numeric_tensor": false, + "tf.debugging.set_log_device_placement": false, + "tf.device": false, + "tf.distribute": false, + "tf.distribute.CrossDeviceOps": false, + "tf.distribute.CrossDeviceOps.__eq__": true, + "tf.distribute.CrossDeviceOps.__ge__": true, + "tf.distribute.CrossDeviceOps.__gt__": true, + "tf.distribute.CrossDeviceOps.__init__": true, + "tf.distribute.CrossDeviceOps.__le__": true, + "tf.distribute.CrossDeviceOps.__lt__": true, + "tf.distribute.CrossDeviceOps.__ne__": true, + "tf.distribute.CrossDeviceOps.__new__": true, + "tf.distribute.CrossDeviceOps.batch_reduce": true, + "tf.distribute.CrossDeviceOps.batch_reduce_implementation": true, + 
"tf.distribute.CrossDeviceOps.broadcast": true, + "tf.distribute.CrossDeviceOps.broadcast_implementation": true, + "tf.distribute.CrossDeviceOps.reduce": true, + "tf.distribute.CrossDeviceOps.reduce_implementation": true, + "tf.distribute.DistributedValues": false, + "tf.distribute.DistributedValues.__eq__": true, + "tf.distribute.DistributedValues.__ge__": true, + "tf.distribute.DistributedValues.__gt__": true, + "tf.distribute.DistributedValues.__init__": true, + "tf.distribute.DistributedValues.__le__": true, + "tf.distribute.DistributedValues.__lt__": true, + "tf.distribute.DistributedValues.__ne__": true, + "tf.distribute.DistributedValues.__new__": true, + "tf.distribute.HierarchicalCopyAllReduce": false, + "tf.distribute.HierarchicalCopyAllReduce.__eq__": true, + "tf.distribute.HierarchicalCopyAllReduce.__ge__": true, + "tf.distribute.HierarchicalCopyAllReduce.__gt__": true, + "tf.distribute.HierarchicalCopyAllReduce.__init__": true, + "tf.distribute.HierarchicalCopyAllReduce.__le__": true, + "tf.distribute.HierarchicalCopyAllReduce.__lt__": true, + "tf.distribute.HierarchicalCopyAllReduce.__ne__": true, + "tf.distribute.HierarchicalCopyAllReduce.__new__": true, + "tf.distribute.HierarchicalCopyAllReduce.batch_reduce": true, + "tf.distribute.HierarchicalCopyAllReduce.batch_reduce_implementation": true, + "tf.distribute.HierarchicalCopyAllReduce.broadcast": true, + "tf.distribute.HierarchicalCopyAllReduce.broadcast_implementation": true, + "tf.distribute.HierarchicalCopyAllReduce.reduce": true, + "tf.distribute.HierarchicalCopyAllReduce.reduce_implementation": true, + "tf.distribute.InputContext": false, + "tf.distribute.InputContext.__eq__": true, + "tf.distribute.InputContext.__ge__": true, + "tf.distribute.InputContext.__gt__": true, + "tf.distribute.InputContext.__init__": true, + "tf.distribute.InputContext.__le__": true, + "tf.distribute.InputContext.__lt__": true, + "tf.distribute.InputContext.__ne__": true, + "tf.distribute.InputContext.__new__": true, + "tf.distribute.InputContext.get_per_replica_batch_size": true, + "tf.distribute.InputContext.input_pipeline_id": true, + "tf.distribute.InputContext.num_input_pipelines": true, + "tf.distribute.InputContext.num_replicas_in_sync": true, + "tf.distribute.InputReplicationMode": false, + "tf.distribute.InputReplicationMode.PER_WORKER": true, + "tf.distribute.InputReplicationMode.name": true, + "tf.distribute.InputReplicationMode.value": true, + "tf.distribute.MirroredStrategy": false, + "tf.distribute.MirroredStrategy.__eq__": true, + "tf.distribute.MirroredStrategy.__ge__": true, + "tf.distribute.MirroredStrategy.__gt__": true, + "tf.distribute.MirroredStrategy.__init__": true, + "tf.distribute.MirroredStrategy.__le__": true, + "tf.distribute.MirroredStrategy.__lt__": true, + "tf.distribute.MirroredStrategy.__ne__": true, + "tf.distribute.MirroredStrategy.__new__": true, + "tf.distribute.MirroredStrategy.experimental_assign_to_logical_device": true, + "tf.distribute.MirroredStrategy.experimental_distribute_dataset": true, + "tf.distribute.MirroredStrategy.experimental_distribute_datasets_from_function": true, + "tf.distribute.MirroredStrategy.experimental_distribute_values_from_function": true, + "tf.distribute.MirroredStrategy.experimental_local_results": true, + "tf.distribute.MirroredStrategy.experimental_make_numpy_dataset": true, + "tf.distribute.MirroredStrategy.experimental_replicate_to_logical_devices": true, + "tf.distribute.MirroredStrategy.experimental_split_to_logical_devices": true, + 
"tf.distribute.MirroredStrategy.extended": true, + "tf.distribute.MirroredStrategy.num_replicas_in_sync": true, + "tf.distribute.MirroredStrategy.reduce": true, + "tf.distribute.MirroredStrategy.run": true, + "tf.distribute.MirroredStrategy.scope": true, + "tf.distribute.NcclAllReduce": false, + "tf.distribute.NcclAllReduce.__eq__": true, + "tf.distribute.NcclAllReduce.__ge__": true, + "tf.distribute.NcclAllReduce.__gt__": true, + "tf.distribute.NcclAllReduce.__init__": true, + "tf.distribute.NcclAllReduce.__le__": true, + "tf.distribute.NcclAllReduce.__lt__": true, + "tf.distribute.NcclAllReduce.__ne__": true, + "tf.distribute.NcclAllReduce.__new__": true, + "tf.distribute.NcclAllReduce.batch_reduce": true, + "tf.distribute.NcclAllReduce.batch_reduce_implementation": true, + "tf.distribute.NcclAllReduce.broadcast": true, + "tf.distribute.NcclAllReduce.broadcast_implementation": true, + "tf.distribute.NcclAllReduce.reduce": true, + "tf.distribute.NcclAllReduce.reduce_implementation": true, + "tf.distribute.OneDeviceStrategy": false, + "tf.distribute.OneDeviceStrategy.__eq__": true, + "tf.distribute.OneDeviceStrategy.__ge__": true, + "tf.distribute.OneDeviceStrategy.__gt__": true, + "tf.distribute.OneDeviceStrategy.__init__": true, + "tf.distribute.OneDeviceStrategy.__le__": true, + "tf.distribute.OneDeviceStrategy.__lt__": true, + "tf.distribute.OneDeviceStrategy.__ne__": true, + "tf.distribute.OneDeviceStrategy.__new__": true, + "tf.distribute.OneDeviceStrategy.experimental_assign_to_logical_device": true, + "tf.distribute.OneDeviceStrategy.experimental_distribute_dataset": true, + "tf.distribute.OneDeviceStrategy.experimental_distribute_datasets_from_function": true, + "tf.distribute.OneDeviceStrategy.experimental_distribute_values_from_function": true, + "tf.distribute.OneDeviceStrategy.experimental_local_results": true, + "tf.distribute.OneDeviceStrategy.experimental_make_numpy_dataset": true, + "tf.distribute.OneDeviceStrategy.experimental_replicate_to_logical_devices": true, + "tf.distribute.OneDeviceStrategy.experimental_split_to_logical_devices": true, + "tf.distribute.OneDeviceStrategy.extended": true, + "tf.distribute.OneDeviceStrategy.num_replicas_in_sync": true, + "tf.distribute.OneDeviceStrategy.reduce": true, + "tf.distribute.OneDeviceStrategy.run": true, + "tf.distribute.OneDeviceStrategy.scope": true, + "tf.distribute.ReduceOp": false, + "tf.distribute.ReduceOp.MEAN": true, + "tf.distribute.ReduceOp.SUM": true, + "tf.distribute.ReduceOp.name": true, + "tf.distribute.ReduceOp.value": true, + "tf.distribute.ReductionToOneDevice": false, + "tf.distribute.ReductionToOneDevice.__eq__": true, + "tf.distribute.ReductionToOneDevice.__ge__": true, + "tf.distribute.ReductionToOneDevice.__gt__": true, + "tf.distribute.ReductionToOneDevice.__init__": true, + "tf.distribute.ReductionToOneDevice.__le__": true, + "tf.distribute.ReductionToOneDevice.__lt__": true, + "tf.distribute.ReductionToOneDevice.__ne__": true, + "tf.distribute.ReductionToOneDevice.__new__": true, + "tf.distribute.ReductionToOneDevice.batch_reduce": true, + "tf.distribute.ReductionToOneDevice.batch_reduce_implementation": true, + "tf.distribute.ReductionToOneDevice.broadcast": true, + "tf.distribute.ReductionToOneDevice.broadcast_implementation": true, + "tf.distribute.ReductionToOneDevice.reduce": true, + "tf.distribute.ReductionToOneDevice.reduce_implementation": true, + "tf.distribute.ReplicaContext": false, + "tf.distribute.ReplicaContext.__enter__": true, + "tf.distribute.ReplicaContext.__eq__": true, + 
"tf.distribute.ReplicaContext.__exit__": true, + "tf.distribute.ReplicaContext.__ge__": true, + "tf.distribute.ReplicaContext.__gt__": true, + "tf.distribute.ReplicaContext.__init__": true, + "tf.distribute.ReplicaContext.__le__": true, + "tf.distribute.ReplicaContext.__lt__": true, + "tf.distribute.ReplicaContext.__ne__": true, + "tf.distribute.ReplicaContext.__new__": true, + "tf.distribute.ReplicaContext.all_reduce": true, + "tf.distribute.ReplicaContext.devices": true, + "tf.distribute.ReplicaContext.merge_call": true, + "tf.distribute.ReplicaContext.num_replicas_in_sync": true, + "tf.distribute.ReplicaContext.replica_id_in_sync_group": true, + "tf.distribute.ReplicaContext.strategy": true, + "tf.distribute.RunOptions": false, + "tf.distribute.RunOptions.__add__": true, + "tf.distribute.RunOptions.__contains__": true, + "tf.distribute.RunOptions.__eq__": true, + "tf.distribute.RunOptions.__ge__": true, + "tf.distribute.RunOptions.__getitem__": true, + "tf.distribute.RunOptions.__gt__": true, + "tf.distribute.RunOptions.__init__": true, + "tf.distribute.RunOptions.__iter__": true, + "tf.distribute.RunOptions.__le__": true, + "tf.distribute.RunOptions.__len__": true, + "tf.distribute.RunOptions.__lt__": true, + "tf.distribute.RunOptions.__mul__": true, + "tf.distribute.RunOptions.__ne__": true, + "tf.distribute.RunOptions.__new__": true, + "tf.distribute.RunOptions.__rmul__": true, + "tf.distribute.RunOptions.count": true, + "tf.distribute.RunOptions.experimental_bucketizing_dynamic_shape": true, + "tf.distribute.RunOptions.experimental_enable_dynamic_batch_size": true, + "tf.distribute.RunOptions.index": true, + "tf.distribute.Server": false, + "tf.distribute.Server.__eq__": true, + "tf.distribute.Server.__ge__": true, + "tf.distribute.Server.__gt__": true, + "tf.distribute.Server.__init__": true, + "tf.distribute.Server.__le__": true, + "tf.distribute.Server.__lt__": true, + "tf.distribute.Server.__ne__": true, + "tf.distribute.Server.__new__": true, + "tf.distribute.Server.create_local_server": true, + "tf.distribute.Server.join": true, + "tf.distribute.Server.server_def": true, + "tf.distribute.Server.start": true, + "tf.distribute.Server.target": true, + "tf.distribute.Strategy": false, + "tf.distribute.Strategy.__eq__": true, + "tf.distribute.Strategy.__ge__": true, + "tf.distribute.Strategy.__gt__": true, + "tf.distribute.Strategy.__init__": true, + "tf.distribute.Strategy.__le__": true, + "tf.distribute.Strategy.__lt__": true, + "tf.distribute.Strategy.__ne__": true, + "tf.distribute.Strategy.__new__": true, + "tf.distribute.Strategy.experimental_assign_to_logical_device": true, + "tf.distribute.Strategy.experimental_distribute_dataset": true, + "tf.distribute.Strategy.experimental_distribute_datasets_from_function": true, + "tf.distribute.Strategy.experimental_distribute_values_from_function": true, + "tf.distribute.Strategy.experimental_local_results": true, + "tf.distribute.Strategy.experimental_make_numpy_dataset": true, + "tf.distribute.Strategy.experimental_replicate_to_logical_devices": true, + "tf.distribute.Strategy.experimental_split_to_logical_devices": true, + "tf.distribute.Strategy.extended": true, + "tf.distribute.Strategy.num_replicas_in_sync": true, + "tf.distribute.Strategy.reduce": true, + "tf.distribute.Strategy.run": true, + "tf.distribute.Strategy.scope": true, + "tf.distribute.StrategyExtended": false, + "tf.distribute.StrategyExtended.__eq__": true, + "tf.distribute.StrategyExtended.__ge__": true, + "tf.distribute.StrategyExtended.__gt__": true, + 
"tf.distribute.StrategyExtended.__init__": true, + "tf.distribute.StrategyExtended.__le__": true, + "tf.distribute.StrategyExtended.__lt__": true, + "tf.distribute.StrategyExtended.__ne__": true, + "tf.distribute.StrategyExtended.__new__": true, + "tf.distribute.StrategyExtended.batch_reduce_to": true, + "tf.distribute.StrategyExtended.colocate_vars_with": true, + "tf.distribute.StrategyExtended.experimental_require_static_shapes": true, + "tf.distribute.StrategyExtended.non_slot_devices": true, + "tf.distribute.StrategyExtended.parameter_devices": true, + "tf.distribute.StrategyExtended.reduce_to": true, + "tf.distribute.StrategyExtended.update": true, + "tf.distribute.StrategyExtended.update_non_slot": true, + "tf.distribute.StrategyExtended.value_container": true, + "tf.distribute.StrategyExtended.variable_created_in_scope": true, + "tf.distribute.StrategyExtended.worker_devices": true, + "tf.distribute.cluster_resolver": false, + "tf.distribute.cluster_resolver.ClusterResolver": false, + "tf.distribute.cluster_resolver.ClusterResolver.__eq__": true, + "tf.distribute.cluster_resolver.ClusterResolver.__ge__": true, + "tf.distribute.cluster_resolver.ClusterResolver.__gt__": true, + "tf.distribute.cluster_resolver.ClusterResolver.__init__": true, + "tf.distribute.cluster_resolver.ClusterResolver.__le__": true, + "tf.distribute.cluster_resolver.ClusterResolver.__lt__": true, + "tf.distribute.cluster_resolver.ClusterResolver.__ne__": true, + "tf.distribute.cluster_resolver.ClusterResolver.__new__": true, + "tf.distribute.cluster_resolver.ClusterResolver.cluster_spec": true, + "tf.distribute.cluster_resolver.ClusterResolver.environment": true, + "tf.distribute.cluster_resolver.ClusterResolver.master": true, + "tf.distribute.cluster_resolver.ClusterResolver.num_accelerators": true, + "tf.distribute.cluster_resolver.GCEClusterResolver": false, + "tf.distribute.cluster_resolver.GCEClusterResolver.__eq__": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.__ge__": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.__gt__": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.__init__": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.__le__": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.__lt__": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.__ne__": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.__new__": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.cluster_spec": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.environment": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.master": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.num_accelerators": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.rpc_layer": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.task_id": true, + "tf.distribute.cluster_resolver.GCEClusterResolver.task_type": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver": false, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__eq__": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__ge__": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__gt__": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__init__": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__le__": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__lt__": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.__ne__": true, + 
"tf.distribute.cluster_resolver.KubernetesClusterResolver.__new__": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.cluster_spec": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.environment": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.master": true, + "tf.distribute.cluster_resolver.KubernetesClusterResolver.num_accelerators": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver": false, + "tf.distribute.cluster_resolver.SimpleClusterResolver.__eq__": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.__ge__": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.__gt__": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.__init__": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.__le__": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.__lt__": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.__ne__": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.__new__": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.cluster_spec": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.environment": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.master": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.num_accelerators": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.rpc_layer": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.task_id": true, + "tf.distribute.cluster_resolver.SimpleClusterResolver.task_type": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver": false, + "tf.distribute.cluster_resolver.SlurmClusterResolver.__eq__": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.__ge__": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.__gt__": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.__init__": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.__le__": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.__lt__": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.__ne__": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.__new__": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.cluster_spec": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.environment": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.get_task_info": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.master": true, + "tf.distribute.cluster_resolver.SlurmClusterResolver.num_accelerators": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver": false, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__eq__": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__ge__": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__gt__": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__init__": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__le__": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__lt__": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__ne__": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.__new__": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.cluster_spec": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.environment": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.master": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.num_accelerators": true, + 
"tf.distribute.cluster_resolver.TFConfigClusterResolver.rpc_layer": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.task_id": true, + "tf.distribute.cluster_resolver.TFConfigClusterResolver.task_type": true, + "tf.distribute.cluster_resolver.TPUClusterResolver": false, + "tf.distribute.cluster_resolver.TPUClusterResolver.__enter__": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.__eq__": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.__exit__": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.__ge__": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.__gt__": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.__init__": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.__le__": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.__lt__": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.__ne__": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.__new__": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.cluster_spec": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.environment": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.get_job_name": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.get_master": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.master": true, + "tf.distribute.cluster_resolver.TPUClusterResolver.num_accelerators": true, + "tf.distribute.cluster_resolver.UnionResolver": false, + "tf.distribute.cluster_resolver.UnionResolver.__eq__": true, + "tf.distribute.cluster_resolver.UnionResolver.__ge__": true, + "tf.distribute.cluster_resolver.UnionResolver.__gt__": true, + "tf.distribute.cluster_resolver.UnionResolver.__init__": true, + "tf.distribute.cluster_resolver.UnionResolver.__le__": true, + "tf.distribute.cluster_resolver.UnionResolver.__lt__": true, + "tf.distribute.cluster_resolver.UnionResolver.__ne__": true, + "tf.distribute.cluster_resolver.UnionResolver.__new__": true, + "tf.distribute.cluster_resolver.UnionResolver.cluster_spec": true, + "tf.distribute.cluster_resolver.UnionResolver.environment": true, + "tf.distribute.cluster_resolver.UnionResolver.master": true, + "tf.distribute.cluster_resolver.UnionResolver.num_accelerators": true, + "tf.distribute.cluster_resolver.UnionResolver.rpc_layer": true, + "tf.distribute.cluster_resolver.UnionResolver.task_id": true, + "tf.distribute.cluster_resolver.UnionResolver.task_type": true, + "tf.distribute.experimental": false, + "tf.distribute.experimental.CentralStorageStrategy": false, + "tf.distribute.experimental.CentralStorageStrategy.__eq__": true, + "tf.distribute.experimental.CentralStorageStrategy.__ge__": true, + "tf.distribute.experimental.CentralStorageStrategy.__gt__": true, + "tf.distribute.experimental.CentralStorageStrategy.__init__": true, + "tf.distribute.experimental.CentralStorageStrategy.__le__": true, + "tf.distribute.experimental.CentralStorageStrategy.__lt__": true, + "tf.distribute.experimental.CentralStorageStrategy.__ne__": true, + "tf.distribute.experimental.CentralStorageStrategy.__new__": true, + "tf.distribute.experimental.CentralStorageStrategy.experimental_assign_to_logical_device": true, + "tf.distribute.experimental.CentralStorageStrategy.experimental_distribute_dataset": true, + "tf.distribute.experimental.CentralStorageStrategy.experimental_distribute_datasets_from_function": true, + "tf.distribute.experimental.CentralStorageStrategy.experimental_distribute_values_from_function": true, + 
"tf.distribute.experimental.CentralStorageStrategy.experimental_local_results": true, + "tf.distribute.experimental.CentralStorageStrategy.experimental_make_numpy_dataset": true, + "tf.distribute.experimental.CentralStorageStrategy.experimental_replicate_to_logical_devices": true, + "tf.distribute.experimental.CentralStorageStrategy.experimental_split_to_logical_devices": true, + "tf.distribute.experimental.CentralStorageStrategy.extended": true, + "tf.distribute.experimental.CentralStorageStrategy.num_replicas_in_sync": true, + "tf.distribute.experimental.CentralStorageStrategy.reduce": true, + "tf.distribute.experimental.CentralStorageStrategy.run": true, + "tf.distribute.experimental.CentralStorageStrategy.scope": true, + "tf.distribute.experimental.CollectiveCommunication": false, + "tf.distribute.experimental.CollectiveCommunication.AUTO": true, + "tf.distribute.experimental.CollectiveCommunication.NCCL": true, + "tf.distribute.experimental.CollectiveCommunication.RING": true, + "tf.distribute.experimental.CollectiveCommunication.name": true, + "tf.distribute.experimental.CollectiveCommunication.value": true, + "tf.distribute.experimental.CollectiveHints": false, + "tf.distribute.experimental.CollectiveHints.__eq__": true, + "tf.distribute.experimental.CollectiveHints.__ge__": true, + "tf.distribute.experimental.CollectiveHints.__gt__": true, + "tf.distribute.experimental.CollectiveHints.__init__": true, + "tf.distribute.experimental.CollectiveHints.__le__": true, + "tf.distribute.experimental.CollectiveHints.__lt__": true, + "tf.distribute.experimental.CollectiveHints.__ne__": true, + "tf.distribute.experimental.CollectiveHints.__new__": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy": false, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__eq__": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__ge__": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__gt__": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__init__": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__le__": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__lt__": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__ne__": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.__new__": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_assign_to_logical_device": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_dataset": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_datasets_from_function": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_values_from_function": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_local_results": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_make_numpy_dataset": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_replicate_to_logical_devices": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_split_to_logical_devices": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.extended": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.num_replicas_in_sync": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.reduce": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.run": true, + "tf.distribute.experimental.MultiWorkerMirroredStrategy.scope": true, + 
"tf.distribute.experimental.ParameterServerStrategy": false, + "tf.distribute.experimental.ParameterServerStrategy.__eq__": true, + "tf.distribute.experimental.ParameterServerStrategy.__ge__": true, + "tf.distribute.experimental.ParameterServerStrategy.__gt__": true, + "tf.distribute.experimental.ParameterServerStrategy.__init__": true, + "tf.distribute.experimental.ParameterServerStrategy.__le__": true, + "tf.distribute.experimental.ParameterServerStrategy.__lt__": true, + "tf.distribute.experimental.ParameterServerStrategy.__ne__": true, + "tf.distribute.experimental.ParameterServerStrategy.__new__": true, + "tf.distribute.experimental.ParameterServerStrategy.experimental_assign_to_logical_device": true, + "tf.distribute.experimental.ParameterServerStrategy.experimental_distribute_dataset": true, + "tf.distribute.experimental.ParameterServerStrategy.experimental_distribute_datasets_from_function": true, + "tf.distribute.experimental.ParameterServerStrategy.experimental_distribute_values_from_function": true, + "tf.distribute.experimental.ParameterServerStrategy.experimental_local_results": true, + "tf.distribute.experimental.ParameterServerStrategy.experimental_make_numpy_dataset": true, + "tf.distribute.experimental.ParameterServerStrategy.experimental_replicate_to_logical_devices": true, + "tf.distribute.experimental.ParameterServerStrategy.experimental_split_to_logical_devices": true, + "tf.distribute.experimental.ParameterServerStrategy.extended": true, + "tf.distribute.experimental.ParameterServerStrategy.num_replicas_in_sync": true, + "tf.distribute.experimental.ParameterServerStrategy.reduce": true, + "tf.distribute.experimental.ParameterServerStrategy.run": true, + "tf.distribute.experimental.ParameterServerStrategy.scope": true, + "tf.distribute.experimental.TPUStrategy": false, + "tf.distribute.experimental.TPUStrategy.__eq__": true, + "tf.distribute.experimental.TPUStrategy.__ge__": true, + "tf.distribute.experimental.TPUStrategy.__gt__": true, + "tf.distribute.experimental.TPUStrategy.__init__": true, + "tf.distribute.experimental.TPUStrategy.__le__": true, + "tf.distribute.experimental.TPUStrategy.__lt__": true, + "tf.distribute.experimental.TPUStrategy.__ne__": true, + "tf.distribute.experimental.TPUStrategy.__new__": true, + "tf.distribute.experimental.TPUStrategy.experimental_assign_to_logical_device": true, + "tf.distribute.experimental.TPUStrategy.experimental_distribute_dataset": true, + "tf.distribute.experimental.TPUStrategy.experimental_distribute_datasets_from_function": true, + "tf.distribute.experimental.TPUStrategy.experimental_distribute_values_from_function": true, + "tf.distribute.experimental.TPUStrategy.experimental_local_results": true, + "tf.distribute.experimental.TPUStrategy.experimental_make_numpy_dataset": true, + "tf.distribute.experimental.TPUStrategy.experimental_replicate_to_logical_devices": true, + "tf.distribute.experimental.TPUStrategy.experimental_split_to_logical_devices": true, + "tf.distribute.experimental.TPUStrategy.extended": true, + "tf.distribute.experimental.TPUStrategy.num_replicas_in_sync": true, + "tf.distribute.experimental.TPUStrategy.reduce": true, + "tf.distribute.experimental.TPUStrategy.run": true, + "tf.distribute.experimental.TPUStrategy.scope": true, + "tf.distribute.experimental.ValueContext": false, + "tf.distribute.experimental.ValueContext.__eq__": true, + "tf.distribute.experimental.ValueContext.__ge__": true, + "tf.distribute.experimental.ValueContext.__gt__": true, + 
"tf.distribute.experimental.ValueContext.__init__": true, + "tf.distribute.experimental.ValueContext.__le__": true, + "tf.distribute.experimental.ValueContext.__lt__": true, + "tf.distribute.experimental.ValueContext.__ne__": true, + "tf.distribute.experimental.ValueContext.__new__": true, + "tf.distribute.experimental.ValueContext.num_replicas_in_sync": true, + "tf.distribute.experimental.ValueContext.replica_id_in_sync_group": true, + "tf.distribute.experimental_set_strategy": false, + "tf.distribute.get_replica_context": false, + "tf.distribute.get_strategy": false, + "tf.distribute.has_strategy": false, + "tf.distribute.in_cross_replica_context": false, + "tf.divide": false, + "tf.double": true, + "tf.dtypes": false, + "tf.dtypes.DType": false, + "tf.dtypes.DType.__eq__": true, + "tf.dtypes.DType.__ge__": true, + "tf.dtypes.DType.__gt__": true, + "tf.dtypes.DType.__init__": true, + "tf.dtypes.DType.__le__": true, + "tf.dtypes.DType.__lt__": true, + "tf.dtypes.DType.__ne__": true, + "tf.dtypes.DType.__new__": true, + "tf.dtypes.DType.as_datatype_enum": true, + "tf.dtypes.DType.as_numpy_dtype": true, + "tf.dtypes.DType.base_dtype": true, + "tf.dtypes.DType.is_bool": true, + "tf.dtypes.DType.is_compatible_with": true, + "tf.dtypes.DType.is_complex": true, + "tf.dtypes.DType.is_floating": true, + "tf.dtypes.DType.is_integer": true, + "tf.dtypes.DType.is_numpy_compatible": true, + "tf.dtypes.DType.is_quantized": true, + "tf.dtypes.DType.is_unsigned": true, + "tf.dtypes.DType.limits": true, + "tf.dtypes.DType.max": true, + "tf.dtypes.DType.min": true, + "tf.dtypes.DType.name": true, + "tf.dtypes.DType.real_dtype": true, + "tf.dtypes.DType.size": true, + "tf.dtypes.QUANTIZED_DTYPES": true, + "tf.dtypes.as_dtype": false, + "tf.dtypes.bfloat16": true, + "tf.dtypes.bool": true, + "tf.dtypes.cast": false, + "tf.dtypes.complex": false, + "tf.dtypes.complex128": true, + "tf.dtypes.complex64": true, + "tf.dtypes.double": true, + "tf.dtypes.float16": true, + "tf.dtypes.float32": true, + "tf.dtypes.float64": true, + "tf.dtypes.half": true, + "tf.dtypes.int16": true, + "tf.dtypes.int32": true, + "tf.dtypes.int64": true, + "tf.dtypes.int8": true, + "tf.dtypes.qint16": true, + "tf.dtypes.qint32": true, + "tf.dtypes.qint8": true, + "tf.dtypes.quint16": true, + "tf.dtypes.quint8": true, + "tf.dtypes.resource": true, + "tf.dtypes.saturate_cast": false, + "tf.dtypes.string": true, + "tf.dtypes.uint16": true, + "tf.dtypes.uint32": true, + "tf.dtypes.uint64": true, + "tf.dtypes.uint8": true, + "tf.dtypes.variant": true, + "tf.dynamic_partition": false, + "tf.dynamic_stitch": false, + "tf.edit_distance": false, + "tf.eig": false, + "tf.eigvals": false, + "tf.einsum": false, + "tf.ensure_shape": false, + "tf.equal": false, + "tf.errors": false, + "tf.errors.ABORTED": true, + "tf.errors.ALREADY_EXISTS": true, + "tf.errors.AbortedError": false, + "tf.errors.AbortedError.__eq__": true, + "tf.errors.AbortedError.__ge__": true, + "tf.errors.AbortedError.__gt__": true, + "tf.errors.AbortedError.__init__": true, + "tf.errors.AbortedError.__le__": true, + "tf.errors.AbortedError.__lt__": true, + "tf.errors.AbortedError.__ne__": true, + "tf.errors.AbortedError.__new__": true, + "tf.errors.AbortedError.args": true, + "tf.errors.AbortedError.error_code": true, + "tf.errors.AbortedError.message": true, + "tf.errors.AbortedError.node_def": true, + "tf.errors.AbortedError.op": true, + "tf.errors.AbortedError.with_traceback": true, + "tf.errors.AlreadyExistsError": false, + "tf.errors.AlreadyExistsError.__eq__": true, + 
"tf.errors.AlreadyExistsError.__ge__": true, + "tf.errors.AlreadyExistsError.__gt__": true, + "tf.errors.AlreadyExistsError.__init__": true, + "tf.errors.AlreadyExistsError.__le__": true, + "tf.errors.AlreadyExistsError.__lt__": true, + "tf.errors.AlreadyExistsError.__ne__": true, + "tf.errors.AlreadyExistsError.__new__": true, + "tf.errors.AlreadyExistsError.args": true, + "tf.errors.AlreadyExistsError.error_code": true, + "tf.errors.AlreadyExistsError.message": true, + "tf.errors.AlreadyExistsError.node_def": true, + "tf.errors.AlreadyExistsError.op": true, + "tf.errors.AlreadyExistsError.with_traceback": true, + "tf.errors.CANCELLED": true, + "tf.errors.CancelledError": false, + "tf.errors.CancelledError.__eq__": true, + "tf.errors.CancelledError.__ge__": true, + "tf.errors.CancelledError.__gt__": true, + "tf.errors.CancelledError.__init__": true, + "tf.errors.CancelledError.__le__": true, + "tf.errors.CancelledError.__lt__": true, + "tf.errors.CancelledError.__ne__": true, + "tf.errors.CancelledError.__new__": true, + "tf.errors.CancelledError.args": true, + "tf.errors.CancelledError.error_code": true, + "tf.errors.CancelledError.message": true, + "tf.errors.CancelledError.node_def": true, + "tf.errors.CancelledError.op": true, + "tf.errors.CancelledError.with_traceback": true, + "tf.errors.DATA_LOSS": true, + "tf.errors.DEADLINE_EXCEEDED": true, + "tf.errors.DataLossError": false, + "tf.errors.DataLossError.__eq__": true, + "tf.errors.DataLossError.__ge__": true, + "tf.errors.DataLossError.__gt__": true, + "tf.errors.DataLossError.__init__": true, + "tf.errors.DataLossError.__le__": true, + "tf.errors.DataLossError.__lt__": true, + "tf.errors.DataLossError.__ne__": true, + "tf.errors.DataLossError.__new__": true, + "tf.errors.DataLossError.args": true, + "tf.errors.DataLossError.error_code": true, + "tf.errors.DataLossError.message": true, + "tf.errors.DataLossError.node_def": true, + "tf.errors.DataLossError.op": true, + "tf.errors.DataLossError.with_traceback": true, + "tf.errors.DeadlineExceededError": false, + "tf.errors.DeadlineExceededError.__eq__": true, + "tf.errors.DeadlineExceededError.__ge__": true, + "tf.errors.DeadlineExceededError.__gt__": true, + "tf.errors.DeadlineExceededError.__init__": true, + "tf.errors.DeadlineExceededError.__le__": true, + "tf.errors.DeadlineExceededError.__lt__": true, + "tf.errors.DeadlineExceededError.__ne__": true, + "tf.errors.DeadlineExceededError.__new__": true, + "tf.errors.DeadlineExceededError.args": true, + "tf.errors.DeadlineExceededError.error_code": true, + "tf.errors.DeadlineExceededError.message": true, + "tf.errors.DeadlineExceededError.node_def": true, + "tf.errors.DeadlineExceededError.op": true, + "tf.errors.DeadlineExceededError.with_traceback": true, + "tf.errors.FAILED_PRECONDITION": true, + "tf.errors.FailedPreconditionError": false, + "tf.errors.FailedPreconditionError.__eq__": true, + "tf.errors.FailedPreconditionError.__ge__": true, + "tf.errors.FailedPreconditionError.__gt__": true, + "tf.errors.FailedPreconditionError.__init__": true, + "tf.errors.FailedPreconditionError.__le__": true, + "tf.errors.FailedPreconditionError.__lt__": true, + "tf.errors.FailedPreconditionError.__ne__": true, + "tf.errors.FailedPreconditionError.__new__": true, + "tf.errors.FailedPreconditionError.args": true, + "tf.errors.FailedPreconditionError.error_code": true, + "tf.errors.FailedPreconditionError.message": true, + "tf.errors.FailedPreconditionError.node_def": true, + "tf.errors.FailedPreconditionError.op": true, + 
"tf.errors.FailedPreconditionError.with_traceback": true, + "tf.errors.INTERNAL": true, + "tf.errors.INVALID_ARGUMENT": true, + "tf.errors.InternalError": false, + "tf.errors.InternalError.__eq__": true, + "tf.errors.InternalError.__ge__": true, + "tf.errors.InternalError.__gt__": true, + "tf.errors.InternalError.__init__": true, + "tf.errors.InternalError.__le__": true, + "tf.errors.InternalError.__lt__": true, + "tf.errors.InternalError.__ne__": true, + "tf.errors.InternalError.__new__": true, + "tf.errors.InternalError.args": true, + "tf.errors.InternalError.error_code": true, + "tf.errors.InternalError.message": true, + "tf.errors.InternalError.node_def": true, + "tf.errors.InternalError.op": true, + "tf.errors.InternalError.with_traceback": true, + "tf.errors.InvalidArgumentError": false, + "tf.errors.InvalidArgumentError.__eq__": true, + "tf.errors.InvalidArgumentError.__ge__": true, + "tf.errors.InvalidArgumentError.__gt__": true, + "tf.errors.InvalidArgumentError.__init__": true, + "tf.errors.InvalidArgumentError.__le__": true, + "tf.errors.InvalidArgumentError.__lt__": true, + "tf.errors.InvalidArgumentError.__ne__": true, + "tf.errors.InvalidArgumentError.__new__": true, + "tf.errors.InvalidArgumentError.args": true, + "tf.errors.InvalidArgumentError.error_code": true, + "tf.errors.InvalidArgumentError.message": true, + "tf.errors.InvalidArgumentError.node_def": true, + "tf.errors.InvalidArgumentError.op": true, + "tf.errors.InvalidArgumentError.with_traceback": true, + "tf.errors.NOT_FOUND": true, + "tf.errors.NotFoundError": false, + "tf.errors.NotFoundError.__eq__": true, + "tf.errors.NotFoundError.__ge__": true, + "tf.errors.NotFoundError.__gt__": true, + "tf.errors.NotFoundError.__init__": true, + "tf.errors.NotFoundError.__le__": true, + "tf.errors.NotFoundError.__lt__": true, + "tf.errors.NotFoundError.__ne__": true, + "tf.errors.NotFoundError.__new__": true, + "tf.errors.NotFoundError.args": true, + "tf.errors.NotFoundError.error_code": true, + "tf.errors.NotFoundError.message": true, + "tf.errors.NotFoundError.node_def": true, + "tf.errors.NotFoundError.op": true, + "tf.errors.NotFoundError.with_traceback": true, + "tf.errors.OK": true, + "tf.errors.OUT_OF_RANGE": true, + "tf.errors.OpError": false, + "tf.errors.OpError.__eq__": true, + "tf.errors.OpError.__ge__": true, + "tf.errors.OpError.__gt__": true, + "tf.errors.OpError.__init__": true, + "tf.errors.OpError.__le__": true, + "tf.errors.OpError.__lt__": true, + "tf.errors.OpError.__ne__": true, + "tf.errors.OpError.__new__": true, + "tf.errors.OpError.args": true, + "tf.errors.OpError.error_code": true, + "tf.errors.OpError.message": true, + "tf.errors.OpError.node_def": true, + "tf.errors.OpError.op": true, + "tf.errors.OpError.with_traceback": true, + "tf.errors.OutOfRangeError": false, + "tf.errors.OutOfRangeError.__eq__": true, + "tf.errors.OutOfRangeError.__ge__": true, + "tf.errors.OutOfRangeError.__gt__": true, + "tf.errors.OutOfRangeError.__init__": true, + "tf.errors.OutOfRangeError.__le__": true, + "tf.errors.OutOfRangeError.__lt__": true, + "tf.errors.OutOfRangeError.__ne__": true, + "tf.errors.OutOfRangeError.__new__": true, + "tf.errors.OutOfRangeError.args": true, + "tf.errors.OutOfRangeError.error_code": true, + "tf.errors.OutOfRangeError.message": true, + "tf.errors.OutOfRangeError.node_def": true, + "tf.errors.OutOfRangeError.op": true, + "tf.errors.OutOfRangeError.with_traceback": true, + "tf.errors.PERMISSION_DENIED": true, + "tf.errors.PermissionDeniedError": false, + 
"tf.errors.PermissionDeniedError.__eq__": true, + "tf.errors.PermissionDeniedError.__ge__": true, + "tf.errors.PermissionDeniedError.__gt__": true, + "tf.errors.PermissionDeniedError.__init__": true, + "tf.errors.PermissionDeniedError.__le__": true, + "tf.errors.PermissionDeniedError.__lt__": true, + "tf.errors.PermissionDeniedError.__ne__": true, + "tf.errors.PermissionDeniedError.__new__": true, + "tf.errors.PermissionDeniedError.args": true, + "tf.errors.PermissionDeniedError.error_code": true, + "tf.errors.PermissionDeniedError.message": true, + "tf.errors.PermissionDeniedError.node_def": true, + "tf.errors.PermissionDeniedError.op": true, + "tf.errors.PermissionDeniedError.with_traceback": true, + "tf.errors.RESOURCE_EXHAUSTED": true, + "tf.errors.ResourceExhaustedError": false, + "tf.errors.ResourceExhaustedError.__eq__": true, + "tf.errors.ResourceExhaustedError.__ge__": true, + "tf.errors.ResourceExhaustedError.__gt__": true, + "tf.errors.ResourceExhaustedError.__init__": true, + "tf.errors.ResourceExhaustedError.__le__": true, + "tf.errors.ResourceExhaustedError.__lt__": true, + "tf.errors.ResourceExhaustedError.__ne__": true, + "tf.errors.ResourceExhaustedError.__new__": true, + "tf.errors.ResourceExhaustedError.args": true, + "tf.errors.ResourceExhaustedError.error_code": true, + "tf.errors.ResourceExhaustedError.message": true, + "tf.errors.ResourceExhaustedError.node_def": true, + "tf.errors.ResourceExhaustedError.op": true, + "tf.errors.ResourceExhaustedError.with_traceback": true, + "tf.errors.UNAUTHENTICATED": true, + "tf.errors.UNAVAILABLE": true, + "tf.errors.UNIMPLEMENTED": true, + "tf.errors.UNKNOWN": true, + "tf.errors.UnauthenticatedError": false, + "tf.errors.UnauthenticatedError.__eq__": true, + "tf.errors.UnauthenticatedError.__ge__": true, + "tf.errors.UnauthenticatedError.__gt__": true, + "tf.errors.UnauthenticatedError.__init__": true, + "tf.errors.UnauthenticatedError.__le__": true, + "tf.errors.UnauthenticatedError.__lt__": true, + "tf.errors.UnauthenticatedError.__ne__": true, + "tf.errors.UnauthenticatedError.__new__": true, + "tf.errors.UnauthenticatedError.args": true, + "tf.errors.UnauthenticatedError.error_code": true, + "tf.errors.UnauthenticatedError.message": true, + "tf.errors.UnauthenticatedError.node_def": true, + "tf.errors.UnauthenticatedError.op": true, + "tf.errors.UnauthenticatedError.with_traceback": true, + "tf.errors.UnavailableError": false, + "tf.errors.UnavailableError.__eq__": true, + "tf.errors.UnavailableError.__ge__": true, + "tf.errors.UnavailableError.__gt__": true, + "tf.errors.UnavailableError.__init__": true, + "tf.errors.UnavailableError.__le__": true, + "tf.errors.UnavailableError.__lt__": true, + "tf.errors.UnavailableError.__ne__": true, + "tf.errors.UnavailableError.__new__": true, + "tf.errors.UnavailableError.args": true, + "tf.errors.UnavailableError.error_code": true, + "tf.errors.UnavailableError.message": true, + "tf.errors.UnavailableError.node_def": true, + "tf.errors.UnavailableError.op": true, + "tf.errors.UnavailableError.with_traceback": true, + "tf.errors.UnimplementedError": false, + "tf.errors.UnimplementedError.__eq__": true, + "tf.errors.UnimplementedError.__ge__": true, + "tf.errors.UnimplementedError.__gt__": true, + "tf.errors.UnimplementedError.__init__": true, + "tf.errors.UnimplementedError.__le__": true, + "tf.errors.UnimplementedError.__lt__": true, + "tf.errors.UnimplementedError.__ne__": true, + "tf.errors.UnimplementedError.__new__": true, + "tf.errors.UnimplementedError.args": true, + 
"tf.errors.UnimplementedError.error_code": true, + "tf.errors.UnimplementedError.message": true, + "tf.errors.UnimplementedError.node_def": true, + "tf.errors.UnimplementedError.op": true, + "tf.errors.UnimplementedError.with_traceback": true, + "tf.errors.UnknownError": false, + "tf.errors.UnknownError.__eq__": true, + "tf.errors.UnknownError.__ge__": true, + "tf.errors.UnknownError.__gt__": true, + "tf.errors.UnknownError.__init__": true, + "tf.errors.UnknownError.__le__": true, + "tf.errors.UnknownError.__lt__": true, + "tf.errors.UnknownError.__ne__": true, + "tf.errors.UnknownError.__new__": true, + "tf.errors.UnknownError.args": true, + "tf.errors.UnknownError.error_code": true, + "tf.errors.UnknownError.message": true, + "tf.errors.UnknownError.node_def": true, + "tf.errors.UnknownError.op": true, + "tf.errors.UnknownError.with_traceback": true, + "tf.estimator": false, + "tf.estimator.BaselineClassifier": false, + "tf.estimator.BaselineClassifier.__eq__": true, + "tf.estimator.BaselineClassifier.__ge__": true, + "tf.estimator.BaselineClassifier.__gt__": true, + "tf.estimator.BaselineClassifier.__init__": true, + "tf.estimator.BaselineClassifier.__le__": true, + "tf.estimator.BaselineClassifier.__lt__": true, + "tf.estimator.BaselineClassifier.__ne__": true, + "tf.estimator.BaselineClassifier.__new__": true, + "tf.estimator.BaselineClassifier.config": true, + "tf.estimator.BaselineClassifier.eval_dir": true, + "tf.estimator.BaselineClassifier.evaluate": true, + "tf.estimator.BaselineClassifier.experimental_export_all_saved_models": true, + "tf.estimator.BaselineClassifier.export_saved_model": true, + "tf.estimator.BaselineClassifier.export_savedmodel": true, + "tf.estimator.BaselineClassifier.get_variable_names": true, + "tf.estimator.BaselineClassifier.get_variable_value": true, + "tf.estimator.BaselineClassifier.latest_checkpoint": true, + "tf.estimator.BaselineClassifier.model_dir": true, + "tf.estimator.BaselineClassifier.model_fn": true, + "tf.estimator.BaselineClassifier.params": true, + "tf.estimator.BaselineClassifier.predict": true, + "tf.estimator.BaselineClassifier.train": true, + "tf.estimator.BaselineEstimator": false, + "tf.estimator.BaselineEstimator.__eq__": true, + "tf.estimator.BaselineEstimator.__ge__": true, + "tf.estimator.BaselineEstimator.__gt__": true, + "tf.estimator.BaselineEstimator.__init__": true, + "tf.estimator.BaselineEstimator.__le__": true, + "tf.estimator.BaselineEstimator.__lt__": true, + "tf.estimator.BaselineEstimator.__ne__": true, + "tf.estimator.BaselineEstimator.__new__": true, + "tf.estimator.BaselineEstimator.config": true, + "tf.estimator.BaselineEstimator.eval_dir": true, + "tf.estimator.BaselineEstimator.evaluate": true, + "tf.estimator.BaselineEstimator.experimental_export_all_saved_models": true, + "tf.estimator.BaselineEstimator.export_saved_model": true, + "tf.estimator.BaselineEstimator.export_savedmodel": true, + "tf.estimator.BaselineEstimator.get_variable_names": true, + "tf.estimator.BaselineEstimator.get_variable_value": true, + "tf.estimator.BaselineEstimator.latest_checkpoint": true, + "tf.estimator.BaselineEstimator.model_dir": true, + "tf.estimator.BaselineEstimator.model_fn": true, + "tf.estimator.BaselineEstimator.params": true, + "tf.estimator.BaselineEstimator.predict": true, + "tf.estimator.BaselineEstimator.train": true, + "tf.estimator.BaselineRegressor": false, + "tf.estimator.BaselineRegressor.__eq__": true, + "tf.estimator.BaselineRegressor.__ge__": true, + "tf.estimator.BaselineRegressor.__gt__": true, + 
"tf.estimator.BaselineRegressor.__init__": true, + "tf.estimator.BaselineRegressor.__le__": true, + "tf.estimator.BaselineRegressor.__lt__": true, + "tf.estimator.BaselineRegressor.__ne__": true, + "tf.estimator.BaselineRegressor.__new__": true, + "tf.estimator.BaselineRegressor.config": true, + "tf.estimator.BaselineRegressor.eval_dir": true, + "tf.estimator.BaselineRegressor.evaluate": true, + "tf.estimator.BaselineRegressor.experimental_export_all_saved_models": true, + "tf.estimator.BaselineRegressor.export_saved_model": true, + "tf.estimator.BaselineRegressor.export_savedmodel": true, + "tf.estimator.BaselineRegressor.get_variable_names": true, + "tf.estimator.BaselineRegressor.get_variable_value": true, + "tf.estimator.BaselineRegressor.latest_checkpoint": true, + "tf.estimator.BaselineRegressor.model_dir": true, + "tf.estimator.BaselineRegressor.model_fn": true, + "tf.estimator.BaselineRegressor.params": true, + "tf.estimator.BaselineRegressor.predict": true, + "tf.estimator.BaselineRegressor.train": true, + "tf.estimator.BestExporter": false, + "tf.estimator.BestExporter.__eq__": true, + "tf.estimator.BestExporter.__ge__": true, + "tf.estimator.BestExporter.__gt__": true, + "tf.estimator.BestExporter.__init__": true, + "tf.estimator.BestExporter.__le__": true, + "tf.estimator.BestExporter.__lt__": true, + "tf.estimator.BestExporter.__ne__": true, + "tf.estimator.BestExporter.__new__": true, + "tf.estimator.BestExporter.export": true, + "tf.estimator.BestExporter.name": true, + "tf.estimator.BinaryClassHead": false, + "tf.estimator.BinaryClassHead.__eq__": true, + "tf.estimator.BinaryClassHead.__ge__": true, + "tf.estimator.BinaryClassHead.__gt__": true, + "tf.estimator.BinaryClassHead.__init__": true, + "tf.estimator.BinaryClassHead.__le__": true, + "tf.estimator.BinaryClassHead.__lt__": true, + "tf.estimator.BinaryClassHead.__ne__": true, + "tf.estimator.BinaryClassHead.__new__": true, + "tf.estimator.BinaryClassHead.create_estimator_spec": true, + "tf.estimator.BinaryClassHead.logits_dimension": true, + "tf.estimator.BinaryClassHead.loss": true, + "tf.estimator.BinaryClassHead.loss_reduction": true, + "tf.estimator.BinaryClassHead.metrics": true, + "tf.estimator.BinaryClassHead.name": true, + "tf.estimator.BinaryClassHead.predictions": true, + "tf.estimator.BinaryClassHead.update_metrics": true, + "tf.estimator.BoostedTreesClassifier": false, + "tf.estimator.BoostedTreesClassifier.__eq__": true, + "tf.estimator.BoostedTreesClassifier.__ge__": true, + "tf.estimator.BoostedTreesClassifier.__gt__": true, + "tf.estimator.BoostedTreesClassifier.__init__": true, + "tf.estimator.BoostedTreesClassifier.__le__": true, + "tf.estimator.BoostedTreesClassifier.__lt__": true, + "tf.estimator.BoostedTreesClassifier.__ne__": true, + "tf.estimator.BoostedTreesClassifier.__new__": true, + "tf.estimator.BoostedTreesClassifier.config": true, + "tf.estimator.BoostedTreesClassifier.eval_dir": true, + "tf.estimator.BoostedTreesClassifier.evaluate": true, + "tf.estimator.BoostedTreesClassifier.experimental_export_all_saved_models": true, + "tf.estimator.BoostedTreesClassifier.experimental_feature_importances": true, + "tf.estimator.BoostedTreesClassifier.experimental_predict_with_explanations": true, + "tf.estimator.BoostedTreesClassifier.export_saved_model": true, + "tf.estimator.BoostedTreesClassifier.export_savedmodel": true, + "tf.estimator.BoostedTreesClassifier.get_variable_names": true, + "tf.estimator.BoostedTreesClassifier.get_variable_value": true, + 
"tf.estimator.BoostedTreesClassifier.latest_checkpoint": true, + "tf.estimator.BoostedTreesClassifier.model_dir": true, + "tf.estimator.BoostedTreesClassifier.model_fn": true, + "tf.estimator.BoostedTreesClassifier.params": true, + "tf.estimator.BoostedTreesClassifier.predict": true, + "tf.estimator.BoostedTreesClassifier.train": true, + "tf.estimator.BoostedTreesEstimator": false, + "tf.estimator.BoostedTreesEstimator.__eq__": true, + "tf.estimator.BoostedTreesEstimator.__ge__": true, + "tf.estimator.BoostedTreesEstimator.__gt__": true, + "tf.estimator.BoostedTreesEstimator.__init__": true, + "tf.estimator.BoostedTreesEstimator.__le__": true, + "tf.estimator.BoostedTreesEstimator.__lt__": true, + "tf.estimator.BoostedTreesEstimator.__ne__": true, + "tf.estimator.BoostedTreesEstimator.__new__": true, + "tf.estimator.BoostedTreesEstimator.config": true, + "tf.estimator.BoostedTreesEstimator.eval_dir": true, + "tf.estimator.BoostedTreesEstimator.evaluate": true, + "tf.estimator.BoostedTreesEstimator.experimental_export_all_saved_models": true, + "tf.estimator.BoostedTreesEstimator.experimental_feature_importances": true, + "tf.estimator.BoostedTreesEstimator.experimental_predict_with_explanations": true, + "tf.estimator.BoostedTreesEstimator.export_saved_model": true, + "tf.estimator.BoostedTreesEstimator.export_savedmodel": true, + "tf.estimator.BoostedTreesEstimator.get_variable_names": true, + "tf.estimator.BoostedTreesEstimator.get_variable_value": true, + "tf.estimator.BoostedTreesEstimator.latest_checkpoint": true, + "tf.estimator.BoostedTreesEstimator.model_dir": true, + "tf.estimator.BoostedTreesEstimator.model_fn": true, + "tf.estimator.BoostedTreesEstimator.params": true, + "tf.estimator.BoostedTreesEstimator.predict": true, + "tf.estimator.BoostedTreesEstimator.train": true, + "tf.estimator.BoostedTreesRegressor": false, + "tf.estimator.BoostedTreesRegressor.__eq__": true, + "tf.estimator.BoostedTreesRegressor.__ge__": true, + "tf.estimator.BoostedTreesRegressor.__gt__": true, + "tf.estimator.BoostedTreesRegressor.__init__": true, + "tf.estimator.BoostedTreesRegressor.__le__": true, + "tf.estimator.BoostedTreesRegressor.__lt__": true, + "tf.estimator.BoostedTreesRegressor.__ne__": true, + "tf.estimator.BoostedTreesRegressor.__new__": true, + "tf.estimator.BoostedTreesRegressor.config": true, + "tf.estimator.BoostedTreesRegressor.eval_dir": true, + "tf.estimator.BoostedTreesRegressor.evaluate": true, + "tf.estimator.BoostedTreesRegressor.experimental_export_all_saved_models": true, + "tf.estimator.BoostedTreesRegressor.experimental_feature_importances": true, + "tf.estimator.BoostedTreesRegressor.experimental_predict_with_explanations": true, + "tf.estimator.BoostedTreesRegressor.export_saved_model": true, + "tf.estimator.BoostedTreesRegressor.export_savedmodel": true, + "tf.estimator.BoostedTreesRegressor.get_variable_names": true, + "tf.estimator.BoostedTreesRegressor.get_variable_value": true, + "tf.estimator.BoostedTreesRegressor.latest_checkpoint": true, + "tf.estimator.BoostedTreesRegressor.model_dir": true, + "tf.estimator.BoostedTreesRegressor.model_fn": true, + "tf.estimator.BoostedTreesRegressor.params": true, + "tf.estimator.BoostedTreesRegressor.predict": true, + "tf.estimator.BoostedTreesRegressor.train": true, + "tf.estimator.CheckpointSaverHook": false, + "tf.estimator.CheckpointSaverHook.__eq__": true, + "tf.estimator.CheckpointSaverHook.__ge__": true, + "tf.estimator.CheckpointSaverHook.__gt__": true, + "tf.estimator.CheckpointSaverHook.__init__": true, + 
"tf.estimator.CheckpointSaverHook.__le__": true, + "tf.estimator.CheckpointSaverHook.__lt__": true, + "tf.estimator.CheckpointSaverHook.__ne__": true, + "tf.estimator.CheckpointSaverHook.__new__": true, + "tf.estimator.CheckpointSaverHook.after_create_session": true, + "tf.estimator.CheckpointSaverHook.after_run": true, + "tf.estimator.CheckpointSaverHook.before_run": true, + "tf.estimator.CheckpointSaverHook.begin": true, + "tf.estimator.CheckpointSaverHook.end": true, + "tf.estimator.CheckpointSaverListener": false, + "tf.estimator.CheckpointSaverListener.__eq__": true, + "tf.estimator.CheckpointSaverListener.__ge__": true, + "tf.estimator.CheckpointSaverListener.__gt__": true, + "tf.estimator.CheckpointSaverListener.__init__": true, + "tf.estimator.CheckpointSaverListener.__le__": true, + "tf.estimator.CheckpointSaverListener.__lt__": true, + "tf.estimator.CheckpointSaverListener.__ne__": true, + "tf.estimator.CheckpointSaverListener.__new__": true, + "tf.estimator.CheckpointSaverListener.after_save": true, + "tf.estimator.CheckpointSaverListener.before_save": true, + "tf.estimator.CheckpointSaverListener.begin": true, + "tf.estimator.CheckpointSaverListener.end": true, + "tf.estimator.DNNClassifier": false, + "tf.estimator.DNNClassifier.__eq__": true, + "tf.estimator.DNNClassifier.__ge__": true, + "tf.estimator.DNNClassifier.__gt__": true, + "tf.estimator.DNNClassifier.__init__": true, + "tf.estimator.DNNClassifier.__le__": true, + "tf.estimator.DNNClassifier.__lt__": true, + "tf.estimator.DNNClassifier.__ne__": true, + "tf.estimator.DNNClassifier.__new__": true, + "tf.estimator.DNNClassifier.config": true, + "tf.estimator.DNNClassifier.eval_dir": true, + "tf.estimator.DNNClassifier.evaluate": true, + "tf.estimator.DNNClassifier.experimental_export_all_saved_models": true, + "tf.estimator.DNNClassifier.export_saved_model": true, + "tf.estimator.DNNClassifier.export_savedmodel": true, + "tf.estimator.DNNClassifier.get_variable_names": true, + "tf.estimator.DNNClassifier.get_variable_value": true, + "tf.estimator.DNNClassifier.latest_checkpoint": true, + "tf.estimator.DNNClassifier.model_dir": true, + "tf.estimator.DNNClassifier.model_fn": true, + "tf.estimator.DNNClassifier.params": true, + "tf.estimator.DNNClassifier.predict": true, + "tf.estimator.DNNClassifier.train": true, + "tf.estimator.DNNEstimator": false, + "tf.estimator.DNNEstimator.__eq__": true, + "tf.estimator.DNNEstimator.__ge__": true, + "tf.estimator.DNNEstimator.__gt__": true, + "tf.estimator.DNNEstimator.__init__": true, + "tf.estimator.DNNEstimator.__le__": true, + "tf.estimator.DNNEstimator.__lt__": true, + "tf.estimator.DNNEstimator.__ne__": true, + "tf.estimator.DNNEstimator.__new__": true, + "tf.estimator.DNNEstimator.config": true, + "tf.estimator.DNNEstimator.eval_dir": true, + "tf.estimator.DNNEstimator.evaluate": true, + "tf.estimator.DNNEstimator.experimental_export_all_saved_models": true, + "tf.estimator.DNNEstimator.export_saved_model": true, + "tf.estimator.DNNEstimator.export_savedmodel": true, + "tf.estimator.DNNEstimator.get_variable_names": true, + "tf.estimator.DNNEstimator.get_variable_value": true, + "tf.estimator.DNNEstimator.latest_checkpoint": true, + "tf.estimator.DNNEstimator.model_dir": true, + "tf.estimator.DNNEstimator.model_fn": true, + "tf.estimator.DNNEstimator.params": true, + "tf.estimator.DNNEstimator.predict": true, + "tf.estimator.DNNEstimator.train": true, + "tf.estimator.DNNLinearCombinedClassifier": false, + "tf.estimator.DNNLinearCombinedClassifier.__eq__": true, + 
"tf.estimator.DNNLinearCombinedClassifier.__ge__": true, + "tf.estimator.DNNLinearCombinedClassifier.__gt__": true, + "tf.estimator.DNNLinearCombinedClassifier.__init__": true, + "tf.estimator.DNNLinearCombinedClassifier.__le__": true, + "tf.estimator.DNNLinearCombinedClassifier.__lt__": true, + "tf.estimator.DNNLinearCombinedClassifier.__ne__": true, + "tf.estimator.DNNLinearCombinedClassifier.__new__": true, + "tf.estimator.DNNLinearCombinedClassifier.config": true, + "tf.estimator.DNNLinearCombinedClassifier.eval_dir": true, + "tf.estimator.DNNLinearCombinedClassifier.evaluate": true, + "tf.estimator.DNNLinearCombinedClassifier.experimental_export_all_saved_models": true, + "tf.estimator.DNNLinearCombinedClassifier.export_saved_model": true, + "tf.estimator.DNNLinearCombinedClassifier.export_savedmodel": true, + "tf.estimator.DNNLinearCombinedClassifier.get_variable_names": true, + "tf.estimator.DNNLinearCombinedClassifier.get_variable_value": true, + "tf.estimator.DNNLinearCombinedClassifier.latest_checkpoint": true, + "tf.estimator.DNNLinearCombinedClassifier.model_dir": true, + "tf.estimator.DNNLinearCombinedClassifier.model_fn": true, + "tf.estimator.DNNLinearCombinedClassifier.params": true, + "tf.estimator.DNNLinearCombinedClassifier.predict": true, + "tf.estimator.DNNLinearCombinedClassifier.train": true, + "tf.estimator.DNNLinearCombinedEstimator": false, + "tf.estimator.DNNLinearCombinedEstimator.__eq__": true, + "tf.estimator.DNNLinearCombinedEstimator.__ge__": true, + "tf.estimator.DNNLinearCombinedEstimator.__gt__": true, + "tf.estimator.DNNLinearCombinedEstimator.__init__": true, + "tf.estimator.DNNLinearCombinedEstimator.__le__": true, + "tf.estimator.DNNLinearCombinedEstimator.__lt__": true, + "tf.estimator.DNNLinearCombinedEstimator.__ne__": true, + "tf.estimator.DNNLinearCombinedEstimator.__new__": true, + "tf.estimator.DNNLinearCombinedEstimator.config": true, + "tf.estimator.DNNLinearCombinedEstimator.eval_dir": true, + "tf.estimator.DNNLinearCombinedEstimator.evaluate": true, + "tf.estimator.DNNLinearCombinedEstimator.experimental_export_all_saved_models": true, + "tf.estimator.DNNLinearCombinedEstimator.export_saved_model": true, + "tf.estimator.DNNLinearCombinedEstimator.export_savedmodel": true, + "tf.estimator.DNNLinearCombinedEstimator.get_variable_names": true, + "tf.estimator.DNNLinearCombinedEstimator.get_variable_value": true, + "tf.estimator.DNNLinearCombinedEstimator.latest_checkpoint": true, + "tf.estimator.DNNLinearCombinedEstimator.model_dir": true, + "tf.estimator.DNNLinearCombinedEstimator.model_fn": true, + "tf.estimator.DNNLinearCombinedEstimator.params": true, + "tf.estimator.DNNLinearCombinedEstimator.predict": true, + "tf.estimator.DNNLinearCombinedEstimator.train": true, + "tf.estimator.DNNLinearCombinedRegressor": false, + "tf.estimator.DNNLinearCombinedRegressor.__eq__": true, + "tf.estimator.DNNLinearCombinedRegressor.__ge__": true, + "tf.estimator.DNNLinearCombinedRegressor.__gt__": true, + "tf.estimator.DNNLinearCombinedRegressor.__init__": true, + "tf.estimator.DNNLinearCombinedRegressor.__le__": true, + "tf.estimator.DNNLinearCombinedRegressor.__lt__": true, + "tf.estimator.DNNLinearCombinedRegressor.__ne__": true, + "tf.estimator.DNNLinearCombinedRegressor.__new__": true, + "tf.estimator.DNNLinearCombinedRegressor.config": true, + "tf.estimator.DNNLinearCombinedRegressor.eval_dir": true, + "tf.estimator.DNNLinearCombinedRegressor.evaluate": true, + "tf.estimator.DNNLinearCombinedRegressor.experimental_export_all_saved_models": true, + 
"tf.estimator.DNNLinearCombinedRegressor.export_saved_model": true, + "tf.estimator.DNNLinearCombinedRegressor.export_savedmodel": true, + "tf.estimator.DNNLinearCombinedRegressor.get_variable_names": true, + "tf.estimator.DNNLinearCombinedRegressor.get_variable_value": true, + "tf.estimator.DNNLinearCombinedRegressor.latest_checkpoint": true, + "tf.estimator.DNNLinearCombinedRegressor.model_dir": true, + "tf.estimator.DNNLinearCombinedRegressor.model_fn": true, + "tf.estimator.DNNLinearCombinedRegressor.params": true, + "tf.estimator.DNNLinearCombinedRegressor.predict": true, + "tf.estimator.DNNLinearCombinedRegressor.train": true, + "tf.estimator.DNNRegressor": false, + "tf.estimator.DNNRegressor.__eq__": true, + "tf.estimator.DNNRegressor.__ge__": true, + "tf.estimator.DNNRegressor.__gt__": true, + "tf.estimator.DNNRegressor.__init__": true, + "tf.estimator.DNNRegressor.__le__": true, + "tf.estimator.DNNRegressor.__lt__": true, + "tf.estimator.DNNRegressor.__ne__": true, + "tf.estimator.DNNRegressor.__new__": true, + "tf.estimator.DNNRegressor.config": true, + "tf.estimator.DNNRegressor.eval_dir": true, + "tf.estimator.DNNRegressor.evaluate": true, + "tf.estimator.DNNRegressor.experimental_export_all_saved_models": true, + "tf.estimator.DNNRegressor.export_saved_model": true, + "tf.estimator.DNNRegressor.export_savedmodel": true, + "tf.estimator.DNNRegressor.get_variable_names": true, + "tf.estimator.DNNRegressor.get_variable_value": true, + "tf.estimator.DNNRegressor.latest_checkpoint": true, + "tf.estimator.DNNRegressor.model_dir": true, + "tf.estimator.DNNRegressor.model_fn": true, + "tf.estimator.DNNRegressor.params": true, + "tf.estimator.DNNRegressor.predict": true, + "tf.estimator.DNNRegressor.train": true, + "tf.estimator.Estimator": false, + "tf.estimator.Estimator.__eq__": true, + "tf.estimator.Estimator.__ge__": true, + "tf.estimator.Estimator.__gt__": true, + "tf.estimator.Estimator.__init__": true, + "tf.estimator.Estimator.__le__": true, + "tf.estimator.Estimator.__lt__": true, + "tf.estimator.Estimator.__ne__": true, + "tf.estimator.Estimator.__new__": true, + "tf.estimator.Estimator.config": true, + "tf.estimator.Estimator.eval_dir": true, + "tf.estimator.Estimator.evaluate": true, + "tf.estimator.Estimator.experimental_export_all_saved_models": true, + "tf.estimator.Estimator.export_saved_model": true, + "tf.estimator.Estimator.export_savedmodel": true, + "tf.estimator.Estimator.get_variable_names": true, + "tf.estimator.Estimator.get_variable_value": true, + "tf.estimator.Estimator.latest_checkpoint": true, + "tf.estimator.Estimator.model_dir": true, + "tf.estimator.Estimator.model_fn": true, + "tf.estimator.Estimator.params": true, + "tf.estimator.Estimator.predict": true, + "tf.estimator.Estimator.train": true, + "tf.estimator.EstimatorSpec": false, + "tf.estimator.EstimatorSpec.__add__": true, + "tf.estimator.EstimatorSpec.__contains__": true, + "tf.estimator.EstimatorSpec.__eq__": true, + "tf.estimator.EstimatorSpec.__ge__": true, + "tf.estimator.EstimatorSpec.__getitem__": true, + "tf.estimator.EstimatorSpec.__gt__": true, + "tf.estimator.EstimatorSpec.__init__": true, + "tf.estimator.EstimatorSpec.__iter__": true, + "tf.estimator.EstimatorSpec.__le__": true, + "tf.estimator.EstimatorSpec.__len__": true, + "tf.estimator.EstimatorSpec.__lt__": true, + "tf.estimator.EstimatorSpec.__mul__": true, + "tf.estimator.EstimatorSpec.__ne__": true, + "tf.estimator.EstimatorSpec.__new__": true, + "tf.estimator.EstimatorSpec.__rmul__": true, + 
"tf.estimator.EstimatorSpec.count": true, + "tf.estimator.EstimatorSpec.eval_metric_ops": true, + "tf.estimator.EstimatorSpec.evaluation_hooks": true, + "tf.estimator.EstimatorSpec.export_outputs": true, + "tf.estimator.EstimatorSpec.index": true, + "tf.estimator.EstimatorSpec.loss": true, + "tf.estimator.EstimatorSpec.mode": true, + "tf.estimator.EstimatorSpec.prediction_hooks": true, + "tf.estimator.EstimatorSpec.predictions": true, + "tf.estimator.EstimatorSpec.scaffold": true, + "tf.estimator.EstimatorSpec.train_op": true, + "tf.estimator.EstimatorSpec.training_chief_hooks": true, + "tf.estimator.EstimatorSpec.training_hooks": true, + "tf.estimator.EvalSpec": false, + "tf.estimator.EvalSpec.__add__": true, + "tf.estimator.EvalSpec.__contains__": true, + "tf.estimator.EvalSpec.__eq__": true, + "tf.estimator.EvalSpec.__ge__": true, + "tf.estimator.EvalSpec.__getitem__": true, + "tf.estimator.EvalSpec.__gt__": true, + "tf.estimator.EvalSpec.__init__": true, + "tf.estimator.EvalSpec.__iter__": true, + "tf.estimator.EvalSpec.__le__": true, + "tf.estimator.EvalSpec.__len__": true, + "tf.estimator.EvalSpec.__lt__": true, + "tf.estimator.EvalSpec.__mul__": true, + "tf.estimator.EvalSpec.__ne__": true, + "tf.estimator.EvalSpec.__new__": true, + "tf.estimator.EvalSpec.__rmul__": true, + "tf.estimator.EvalSpec.count": true, + "tf.estimator.EvalSpec.exporters": true, + "tf.estimator.EvalSpec.hooks": true, + "tf.estimator.EvalSpec.index": true, + "tf.estimator.EvalSpec.input_fn": true, + "tf.estimator.EvalSpec.name": true, + "tf.estimator.EvalSpec.start_delay_secs": true, + "tf.estimator.EvalSpec.steps": true, + "tf.estimator.EvalSpec.throttle_secs": true, + "tf.estimator.Exporter": false, + "tf.estimator.Exporter.__eq__": true, + "tf.estimator.Exporter.__ge__": true, + "tf.estimator.Exporter.__gt__": true, + "tf.estimator.Exporter.__init__": true, + "tf.estimator.Exporter.__le__": true, + "tf.estimator.Exporter.__lt__": true, + "tf.estimator.Exporter.__ne__": true, + "tf.estimator.Exporter.__new__": true, + "tf.estimator.Exporter.export": true, + "tf.estimator.Exporter.name": true, + "tf.estimator.FeedFnHook": false, + "tf.estimator.FeedFnHook.__eq__": true, + "tf.estimator.FeedFnHook.__ge__": true, + "tf.estimator.FeedFnHook.__gt__": true, + "tf.estimator.FeedFnHook.__init__": true, + "tf.estimator.FeedFnHook.__le__": true, + "tf.estimator.FeedFnHook.__lt__": true, + "tf.estimator.FeedFnHook.__ne__": true, + "tf.estimator.FeedFnHook.__new__": true, + "tf.estimator.FeedFnHook.after_create_session": true, + "tf.estimator.FeedFnHook.after_run": true, + "tf.estimator.FeedFnHook.before_run": true, + "tf.estimator.FeedFnHook.begin": true, + "tf.estimator.FeedFnHook.end": true, + "tf.estimator.FinalExporter": false, + "tf.estimator.FinalExporter.__eq__": true, + "tf.estimator.FinalExporter.__ge__": true, + "tf.estimator.FinalExporter.__gt__": true, + "tf.estimator.FinalExporter.__init__": true, + "tf.estimator.FinalExporter.__le__": true, + "tf.estimator.FinalExporter.__lt__": true, + "tf.estimator.FinalExporter.__ne__": true, + "tf.estimator.FinalExporter.__new__": true, + "tf.estimator.FinalExporter.export": true, + "tf.estimator.FinalExporter.name": true, + "tf.estimator.FinalOpsHook": false, + "tf.estimator.FinalOpsHook.__eq__": true, + "tf.estimator.FinalOpsHook.__ge__": true, + "tf.estimator.FinalOpsHook.__gt__": true, + "tf.estimator.FinalOpsHook.__init__": true, + "tf.estimator.FinalOpsHook.__le__": true, + "tf.estimator.FinalOpsHook.__lt__": true, + "tf.estimator.FinalOpsHook.__ne__": true, + 
"tf.estimator.FinalOpsHook.__new__": true, + "tf.estimator.FinalOpsHook.after_create_session": true, + "tf.estimator.FinalOpsHook.after_run": true, + "tf.estimator.FinalOpsHook.before_run": true, + "tf.estimator.FinalOpsHook.begin": true, + "tf.estimator.FinalOpsHook.end": true, + "tf.estimator.FinalOpsHook.final_ops_values": true, + "tf.estimator.GlobalStepWaiterHook": false, + "tf.estimator.GlobalStepWaiterHook.__eq__": true, + "tf.estimator.GlobalStepWaiterHook.__ge__": true, + "tf.estimator.GlobalStepWaiterHook.__gt__": true, + "tf.estimator.GlobalStepWaiterHook.__init__": true, + "tf.estimator.GlobalStepWaiterHook.__le__": true, + "tf.estimator.GlobalStepWaiterHook.__lt__": true, + "tf.estimator.GlobalStepWaiterHook.__ne__": true, + "tf.estimator.GlobalStepWaiterHook.__new__": true, + "tf.estimator.GlobalStepWaiterHook.after_create_session": true, + "tf.estimator.GlobalStepWaiterHook.after_run": true, + "tf.estimator.GlobalStepWaiterHook.before_run": true, + "tf.estimator.GlobalStepWaiterHook.begin": true, + "tf.estimator.GlobalStepWaiterHook.end": true, + "tf.estimator.Head": false, + "tf.estimator.Head.__eq__": true, + "tf.estimator.Head.__ge__": true, + "tf.estimator.Head.__gt__": true, + "tf.estimator.Head.__init__": true, + "tf.estimator.Head.__le__": true, + "tf.estimator.Head.__lt__": true, + "tf.estimator.Head.__ne__": true, + "tf.estimator.Head.__new__": true, + "tf.estimator.Head.create_estimator_spec": true, + "tf.estimator.Head.logits_dimension": true, + "tf.estimator.Head.loss": true, + "tf.estimator.Head.loss_reduction": true, + "tf.estimator.Head.metrics": true, + "tf.estimator.Head.name": true, + "tf.estimator.Head.predictions": true, + "tf.estimator.Head.update_metrics": true, + "tf.estimator.LatestExporter": false, + "tf.estimator.LatestExporter.__eq__": true, + "tf.estimator.LatestExporter.__ge__": true, + "tf.estimator.LatestExporter.__gt__": true, + "tf.estimator.LatestExporter.__init__": true, + "tf.estimator.LatestExporter.__le__": true, + "tf.estimator.LatestExporter.__lt__": true, + "tf.estimator.LatestExporter.__ne__": true, + "tf.estimator.LatestExporter.__new__": true, + "tf.estimator.LatestExporter.export": true, + "tf.estimator.LatestExporter.name": true, + "tf.estimator.LinearClassifier": false, + "tf.estimator.LinearClassifier.__eq__": true, + "tf.estimator.LinearClassifier.__ge__": true, + "tf.estimator.LinearClassifier.__gt__": true, + "tf.estimator.LinearClassifier.__init__": true, + "tf.estimator.LinearClassifier.__le__": true, + "tf.estimator.LinearClassifier.__lt__": true, + "tf.estimator.LinearClassifier.__ne__": true, + "tf.estimator.LinearClassifier.__new__": true, + "tf.estimator.LinearClassifier.config": true, + "tf.estimator.LinearClassifier.eval_dir": true, + "tf.estimator.LinearClassifier.evaluate": true, + "tf.estimator.LinearClassifier.experimental_export_all_saved_models": true, + "tf.estimator.LinearClassifier.export_saved_model": true, + "tf.estimator.LinearClassifier.export_savedmodel": true, + "tf.estimator.LinearClassifier.get_variable_names": true, + "tf.estimator.LinearClassifier.get_variable_value": true, + "tf.estimator.LinearClassifier.latest_checkpoint": true, + "tf.estimator.LinearClassifier.model_dir": true, + "tf.estimator.LinearClassifier.model_fn": true, + "tf.estimator.LinearClassifier.params": true, + "tf.estimator.LinearClassifier.predict": true, + "tf.estimator.LinearClassifier.train": true, + "tf.estimator.LinearEstimator": false, + "tf.estimator.LinearEstimator.__eq__": true, + 
"tf.estimator.LinearEstimator.__ge__": true, + "tf.estimator.LinearEstimator.__gt__": true, + "tf.estimator.LinearEstimator.__init__": true, + "tf.estimator.LinearEstimator.__le__": true, + "tf.estimator.LinearEstimator.__lt__": true, + "tf.estimator.LinearEstimator.__ne__": true, + "tf.estimator.LinearEstimator.__new__": true, + "tf.estimator.LinearEstimator.config": true, + "tf.estimator.LinearEstimator.eval_dir": true, + "tf.estimator.LinearEstimator.evaluate": true, + "tf.estimator.LinearEstimator.experimental_export_all_saved_models": true, + "tf.estimator.LinearEstimator.export_saved_model": true, + "tf.estimator.LinearEstimator.export_savedmodel": true, + "tf.estimator.LinearEstimator.get_variable_names": true, + "tf.estimator.LinearEstimator.get_variable_value": true, + "tf.estimator.LinearEstimator.latest_checkpoint": true, + "tf.estimator.LinearEstimator.model_dir": true, + "tf.estimator.LinearEstimator.model_fn": true, + "tf.estimator.LinearEstimator.params": true, + "tf.estimator.LinearEstimator.predict": true, + "tf.estimator.LinearEstimator.train": true, + "tf.estimator.LinearRegressor": false, + "tf.estimator.LinearRegressor.__eq__": true, + "tf.estimator.LinearRegressor.__ge__": true, + "tf.estimator.LinearRegressor.__gt__": true, + "tf.estimator.LinearRegressor.__init__": true, + "tf.estimator.LinearRegressor.__le__": true, + "tf.estimator.LinearRegressor.__lt__": true, + "tf.estimator.LinearRegressor.__ne__": true, + "tf.estimator.LinearRegressor.__new__": true, + "tf.estimator.LinearRegressor.config": true, + "tf.estimator.LinearRegressor.eval_dir": true, + "tf.estimator.LinearRegressor.evaluate": true, + "tf.estimator.LinearRegressor.experimental_export_all_saved_models": true, + "tf.estimator.LinearRegressor.export_saved_model": true, + "tf.estimator.LinearRegressor.export_savedmodel": true, + "tf.estimator.LinearRegressor.get_variable_names": true, + "tf.estimator.LinearRegressor.get_variable_value": true, + "tf.estimator.LinearRegressor.latest_checkpoint": true, + "tf.estimator.LinearRegressor.model_dir": true, + "tf.estimator.LinearRegressor.model_fn": true, + "tf.estimator.LinearRegressor.params": true, + "tf.estimator.LinearRegressor.predict": true, + "tf.estimator.LinearRegressor.train": true, + "tf.estimator.LoggingTensorHook": false, + "tf.estimator.LoggingTensorHook.__eq__": true, + "tf.estimator.LoggingTensorHook.__ge__": true, + "tf.estimator.LoggingTensorHook.__gt__": true, + "tf.estimator.LoggingTensorHook.__init__": true, + "tf.estimator.LoggingTensorHook.__le__": true, + "tf.estimator.LoggingTensorHook.__lt__": true, + "tf.estimator.LoggingTensorHook.__ne__": true, + "tf.estimator.LoggingTensorHook.__new__": true, + "tf.estimator.LoggingTensorHook.after_create_session": true, + "tf.estimator.LoggingTensorHook.after_run": true, + "tf.estimator.LoggingTensorHook.before_run": true, + "tf.estimator.LoggingTensorHook.begin": true, + "tf.estimator.LoggingTensorHook.end": true, + "tf.estimator.LogisticRegressionHead": false, + "tf.estimator.LogisticRegressionHead.__eq__": true, + "tf.estimator.LogisticRegressionHead.__ge__": true, + "tf.estimator.LogisticRegressionHead.__gt__": true, + "tf.estimator.LogisticRegressionHead.__init__": true, + "tf.estimator.LogisticRegressionHead.__le__": true, + "tf.estimator.LogisticRegressionHead.__lt__": true, + "tf.estimator.LogisticRegressionHead.__ne__": true, + "tf.estimator.LogisticRegressionHead.__new__": true, + "tf.estimator.LogisticRegressionHead.create_estimator_spec": true, + 
"tf.estimator.LogisticRegressionHead.logits_dimension": true, + "tf.estimator.LogisticRegressionHead.loss": true, + "tf.estimator.LogisticRegressionHead.loss_reduction": true, + "tf.estimator.LogisticRegressionHead.metrics": true, + "tf.estimator.LogisticRegressionHead.name": true, + "tf.estimator.LogisticRegressionHead.predictions": true, + "tf.estimator.LogisticRegressionHead.update_metrics": true, + "tf.estimator.ModeKeys": false, + "tf.estimator.ModeKeys.EVAL": true, + "tf.estimator.ModeKeys.PREDICT": true, + "tf.estimator.ModeKeys.TRAIN": true, + "tf.estimator.ModeKeys.__eq__": true, + "tf.estimator.ModeKeys.__ge__": true, + "tf.estimator.ModeKeys.__gt__": true, + "tf.estimator.ModeKeys.__init__": true, + "tf.estimator.ModeKeys.__le__": true, + "tf.estimator.ModeKeys.__lt__": true, + "tf.estimator.ModeKeys.__ne__": true, + "tf.estimator.ModeKeys.__new__": true, + "tf.estimator.MultiClassHead": false, + "tf.estimator.MultiClassHead.__eq__": true, + "tf.estimator.MultiClassHead.__ge__": true, + "tf.estimator.MultiClassHead.__gt__": true, + "tf.estimator.MultiClassHead.__init__": true, + "tf.estimator.MultiClassHead.__le__": true, + "tf.estimator.MultiClassHead.__lt__": true, + "tf.estimator.MultiClassHead.__ne__": true, + "tf.estimator.MultiClassHead.__new__": true, + "tf.estimator.MultiClassHead.create_estimator_spec": true, + "tf.estimator.MultiClassHead.logits_dimension": true, + "tf.estimator.MultiClassHead.loss": true, + "tf.estimator.MultiClassHead.loss_reduction": true, + "tf.estimator.MultiClassHead.metrics": true, + "tf.estimator.MultiClassHead.name": true, + "tf.estimator.MultiClassHead.predictions": true, + "tf.estimator.MultiClassHead.update_metrics": true, + "tf.estimator.MultiHead": false, + "tf.estimator.MultiHead.__eq__": true, + "tf.estimator.MultiHead.__ge__": true, + "tf.estimator.MultiHead.__gt__": true, + "tf.estimator.MultiHead.__init__": true, + "tf.estimator.MultiHead.__le__": true, + "tf.estimator.MultiHead.__lt__": true, + "tf.estimator.MultiHead.__ne__": true, + "tf.estimator.MultiHead.__new__": true, + "tf.estimator.MultiHead.create_estimator_spec": true, + "tf.estimator.MultiHead.logits_dimension": true, + "tf.estimator.MultiHead.loss": true, + "tf.estimator.MultiHead.loss_reduction": true, + "tf.estimator.MultiHead.metrics": true, + "tf.estimator.MultiHead.name": true, + "tf.estimator.MultiHead.predictions": true, + "tf.estimator.MultiHead.update_metrics": true, + "tf.estimator.MultiLabelHead": false, + "tf.estimator.MultiLabelHead.__eq__": true, + "tf.estimator.MultiLabelHead.__ge__": true, + "tf.estimator.MultiLabelHead.__gt__": true, + "tf.estimator.MultiLabelHead.__init__": true, + "tf.estimator.MultiLabelHead.__le__": true, + "tf.estimator.MultiLabelHead.__lt__": true, + "tf.estimator.MultiLabelHead.__ne__": true, + "tf.estimator.MultiLabelHead.__new__": true, + "tf.estimator.MultiLabelHead.create_estimator_spec": true, + "tf.estimator.MultiLabelHead.logits_dimension": true, + "tf.estimator.MultiLabelHead.loss": true, + "tf.estimator.MultiLabelHead.loss_reduction": true, + "tf.estimator.MultiLabelHead.metrics": true, + "tf.estimator.MultiLabelHead.name": true, + "tf.estimator.MultiLabelHead.predictions": true, + "tf.estimator.MultiLabelHead.update_metrics": true, + "tf.estimator.NanLossDuringTrainingError": false, + "tf.estimator.NanLossDuringTrainingError.__eq__": true, + "tf.estimator.NanLossDuringTrainingError.__ge__": true, + "tf.estimator.NanLossDuringTrainingError.__gt__": true, + "tf.estimator.NanLossDuringTrainingError.__init__": true, + 
"tf.estimator.NanLossDuringTrainingError.__le__": true, + "tf.estimator.NanLossDuringTrainingError.__lt__": true, + "tf.estimator.NanLossDuringTrainingError.__ne__": true, + "tf.estimator.NanLossDuringTrainingError.__new__": true, + "tf.estimator.NanLossDuringTrainingError.args": true, + "tf.estimator.NanLossDuringTrainingError.with_traceback": true, + "tf.estimator.NanTensorHook": false, + "tf.estimator.NanTensorHook.__eq__": true, + "tf.estimator.NanTensorHook.__ge__": true, + "tf.estimator.NanTensorHook.__gt__": true, + "tf.estimator.NanTensorHook.__init__": true, + "tf.estimator.NanTensorHook.__le__": true, + "tf.estimator.NanTensorHook.__lt__": true, + "tf.estimator.NanTensorHook.__ne__": true, + "tf.estimator.NanTensorHook.__new__": true, + "tf.estimator.NanTensorHook.after_create_session": true, + "tf.estimator.NanTensorHook.after_run": true, + "tf.estimator.NanTensorHook.before_run": true, + "tf.estimator.NanTensorHook.begin": true, + "tf.estimator.NanTensorHook.end": true, + "tf.estimator.PoissonRegressionHead": false, + "tf.estimator.PoissonRegressionHead.__eq__": true, + "tf.estimator.PoissonRegressionHead.__ge__": true, + "tf.estimator.PoissonRegressionHead.__gt__": true, + "tf.estimator.PoissonRegressionHead.__init__": true, + "tf.estimator.PoissonRegressionHead.__le__": true, + "tf.estimator.PoissonRegressionHead.__lt__": true, + "tf.estimator.PoissonRegressionHead.__ne__": true, + "tf.estimator.PoissonRegressionHead.__new__": true, + "tf.estimator.PoissonRegressionHead.create_estimator_spec": true, + "tf.estimator.PoissonRegressionHead.logits_dimension": true, + "tf.estimator.PoissonRegressionHead.loss": true, + "tf.estimator.PoissonRegressionHead.loss_reduction": true, + "tf.estimator.PoissonRegressionHead.metrics": true, + "tf.estimator.PoissonRegressionHead.name": true, + "tf.estimator.PoissonRegressionHead.predictions": true, + "tf.estimator.PoissonRegressionHead.update_metrics": true, + "tf.estimator.ProfilerHook": false, + "tf.estimator.ProfilerHook.__eq__": true, + "tf.estimator.ProfilerHook.__ge__": true, + "tf.estimator.ProfilerHook.__gt__": true, + "tf.estimator.ProfilerHook.__init__": true, + "tf.estimator.ProfilerHook.__le__": true, + "tf.estimator.ProfilerHook.__lt__": true, + "tf.estimator.ProfilerHook.__ne__": true, + "tf.estimator.ProfilerHook.__new__": true, + "tf.estimator.ProfilerHook.after_create_session": true, + "tf.estimator.ProfilerHook.after_run": true, + "tf.estimator.ProfilerHook.before_run": true, + "tf.estimator.ProfilerHook.begin": true, + "tf.estimator.ProfilerHook.end": true, + "tf.estimator.RegressionHead": false, + "tf.estimator.RegressionHead.__eq__": true, + "tf.estimator.RegressionHead.__ge__": true, + "tf.estimator.RegressionHead.__gt__": true, + "tf.estimator.RegressionHead.__init__": true, + "tf.estimator.RegressionHead.__le__": true, + "tf.estimator.RegressionHead.__lt__": true, + "tf.estimator.RegressionHead.__ne__": true, + "tf.estimator.RegressionHead.__new__": true, + "tf.estimator.RegressionHead.create_estimator_spec": true, + "tf.estimator.RegressionHead.logits_dimension": true, + "tf.estimator.RegressionHead.loss": true, + "tf.estimator.RegressionHead.loss_reduction": true, + "tf.estimator.RegressionHead.metrics": true, + "tf.estimator.RegressionHead.name": true, + "tf.estimator.RegressionHead.predictions": true, + "tf.estimator.RegressionHead.update_metrics": true, + "tf.estimator.RunConfig": false, + "tf.estimator.RunConfig.__eq__": true, + "tf.estimator.RunConfig.__ge__": true, + "tf.estimator.RunConfig.__gt__": true, + 
"tf.estimator.RunConfig.__init__": true, + "tf.estimator.RunConfig.__le__": true, + "tf.estimator.RunConfig.__lt__": true, + "tf.estimator.RunConfig.__ne__": true, + "tf.estimator.RunConfig.__new__": true, + "tf.estimator.RunConfig.cluster_spec": true, + "tf.estimator.RunConfig.device_fn": true, + "tf.estimator.RunConfig.eval_distribute": true, + "tf.estimator.RunConfig.evaluation_master": true, + "tf.estimator.RunConfig.experimental_max_worker_delay_secs": true, + "tf.estimator.RunConfig.global_id_in_cluster": true, + "tf.estimator.RunConfig.is_chief": true, + "tf.estimator.RunConfig.keep_checkpoint_every_n_hours": true, + "tf.estimator.RunConfig.keep_checkpoint_max": true, + "tf.estimator.RunConfig.log_step_count_steps": true, + "tf.estimator.RunConfig.master": true, + "tf.estimator.RunConfig.model_dir": true, + "tf.estimator.RunConfig.num_ps_replicas": true, + "tf.estimator.RunConfig.num_worker_replicas": true, + "tf.estimator.RunConfig.protocol": true, + "tf.estimator.RunConfig.replace": true, + "tf.estimator.RunConfig.save_checkpoints_secs": true, + "tf.estimator.RunConfig.save_checkpoints_steps": true, + "tf.estimator.RunConfig.save_summary_steps": true, + "tf.estimator.RunConfig.service": true, + "tf.estimator.RunConfig.session_config": true, + "tf.estimator.RunConfig.session_creation_timeout_secs": true, + "tf.estimator.RunConfig.task_id": true, + "tf.estimator.RunConfig.task_type": true, + "tf.estimator.RunConfig.tf_random_seed": true, + "tf.estimator.RunConfig.train_distribute": true, + "tf.estimator.SecondOrStepTimer": false, + "tf.estimator.SecondOrStepTimer.__eq__": true, + "tf.estimator.SecondOrStepTimer.__ge__": true, + "tf.estimator.SecondOrStepTimer.__gt__": true, + "tf.estimator.SecondOrStepTimer.__init__": true, + "tf.estimator.SecondOrStepTimer.__le__": true, + "tf.estimator.SecondOrStepTimer.__lt__": true, + "tf.estimator.SecondOrStepTimer.__ne__": true, + "tf.estimator.SecondOrStepTimer.__new__": true, + "tf.estimator.SecondOrStepTimer.last_triggered_step": true, + "tf.estimator.SecondOrStepTimer.reset": true, + "tf.estimator.SecondOrStepTimer.should_trigger_for_step": true, + "tf.estimator.SecondOrStepTimer.update_last_triggered_step": true, + "tf.estimator.SessionRunArgs": false, + "tf.estimator.SessionRunArgs.__add__": true, + "tf.estimator.SessionRunArgs.__contains__": true, + "tf.estimator.SessionRunArgs.__eq__": true, + "tf.estimator.SessionRunArgs.__ge__": true, + "tf.estimator.SessionRunArgs.__getitem__": true, + "tf.estimator.SessionRunArgs.__gt__": true, + "tf.estimator.SessionRunArgs.__init__": true, + "tf.estimator.SessionRunArgs.__iter__": true, + "tf.estimator.SessionRunArgs.__le__": true, + "tf.estimator.SessionRunArgs.__len__": true, + "tf.estimator.SessionRunArgs.__lt__": true, + "tf.estimator.SessionRunArgs.__mul__": true, + "tf.estimator.SessionRunArgs.__ne__": true, + "tf.estimator.SessionRunArgs.__new__": true, + "tf.estimator.SessionRunArgs.__rmul__": true, + "tf.estimator.SessionRunArgs.count": true, + "tf.estimator.SessionRunArgs.feed_dict": true, + "tf.estimator.SessionRunArgs.fetches": true, + "tf.estimator.SessionRunArgs.index": true, + "tf.estimator.SessionRunArgs.options": true, + "tf.estimator.SessionRunContext": false, + "tf.estimator.SessionRunContext.__eq__": true, + "tf.estimator.SessionRunContext.__ge__": true, + "tf.estimator.SessionRunContext.__gt__": true, + "tf.estimator.SessionRunContext.__init__": true, + "tf.estimator.SessionRunContext.__le__": true, + "tf.estimator.SessionRunContext.__lt__": true, + 
"tf.estimator.SessionRunContext.__ne__": true, + "tf.estimator.SessionRunContext.__new__": true, + "tf.estimator.SessionRunContext.original_args": true, + "tf.estimator.SessionRunContext.request_stop": true, + "tf.estimator.SessionRunContext.session": true, + "tf.estimator.SessionRunContext.stop_requested": true, + "tf.estimator.SessionRunHook": false, + "tf.estimator.SessionRunHook.__eq__": true, + "tf.estimator.SessionRunHook.__ge__": true, + "tf.estimator.SessionRunHook.__gt__": true, + "tf.estimator.SessionRunHook.__init__": true, + "tf.estimator.SessionRunHook.__le__": true, + "tf.estimator.SessionRunHook.__lt__": true, + "tf.estimator.SessionRunHook.__ne__": true, + "tf.estimator.SessionRunHook.__new__": true, + "tf.estimator.SessionRunHook.after_create_session": true, + "tf.estimator.SessionRunHook.after_run": true, + "tf.estimator.SessionRunHook.before_run": true, + "tf.estimator.SessionRunHook.begin": true, + "tf.estimator.SessionRunHook.end": true, + "tf.estimator.SessionRunValues": false, + "tf.estimator.SessionRunValues.__add__": true, + "tf.estimator.SessionRunValues.__contains__": true, + "tf.estimator.SessionRunValues.__eq__": true, + "tf.estimator.SessionRunValues.__ge__": true, + "tf.estimator.SessionRunValues.__getitem__": true, + "tf.estimator.SessionRunValues.__gt__": true, + "tf.estimator.SessionRunValues.__init__": true, + "tf.estimator.SessionRunValues.__iter__": true, + "tf.estimator.SessionRunValues.__le__": true, + "tf.estimator.SessionRunValues.__len__": true, + "tf.estimator.SessionRunValues.__lt__": true, + "tf.estimator.SessionRunValues.__mul__": true, + "tf.estimator.SessionRunValues.__ne__": true, + "tf.estimator.SessionRunValues.__new__": true, + "tf.estimator.SessionRunValues.__rmul__": true, + "tf.estimator.SessionRunValues.count": true, + "tf.estimator.SessionRunValues.index": true, + "tf.estimator.SessionRunValues.options": true, + "tf.estimator.SessionRunValues.results": true, + "tf.estimator.SessionRunValues.run_metadata": true, + "tf.estimator.StepCounterHook": false, + "tf.estimator.StepCounterHook.__eq__": true, + "tf.estimator.StepCounterHook.__ge__": true, + "tf.estimator.StepCounterHook.__gt__": true, + "tf.estimator.StepCounterHook.__init__": true, + "tf.estimator.StepCounterHook.__le__": true, + "tf.estimator.StepCounterHook.__lt__": true, + "tf.estimator.StepCounterHook.__ne__": true, + "tf.estimator.StepCounterHook.__new__": true, + "tf.estimator.StepCounterHook.after_create_session": true, + "tf.estimator.StepCounterHook.after_run": true, + "tf.estimator.StepCounterHook.before_run": true, + "tf.estimator.StepCounterHook.begin": true, + "tf.estimator.StepCounterHook.end": true, + "tf.estimator.StopAtStepHook": false, + "tf.estimator.StopAtStepHook.__eq__": true, + "tf.estimator.StopAtStepHook.__ge__": true, + "tf.estimator.StopAtStepHook.__gt__": true, + "tf.estimator.StopAtStepHook.__init__": true, + "tf.estimator.StopAtStepHook.__le__": true, + "tf.estimator.StopAtStepHook.__lt__": true, + "tf.estimator.StopAtStepHook.__ne__": true, + "tf.estimator.StopAtStepHook.__new__": true, + "tf.estimator.StopAtStepHook.after_create_session": true, + "tf.estimator.StopAtStepHook.after_run": true, + "tf.estimator.StopAtStepHook.before_run": true, + "tf.estimator.StopAtStepHook.begin": true, + "tf.estimator.StopAtStepHook.end": true, + "tf.estimator.SummarySaverHook": false, + "tf.estimator.SummarySaverHook.__eq__": true, + "tf.estimator.SummarySaverHook.__ge__": true, + "tf.estimator.SummarySaverHook.__gt__": true, + 
"tf.estimator.SummarySaverHook.__init__": true, + "tf.estimator.SummarySaverHook.__le__": true, + "tf.estimator.SummarySaverHook.__lt__": true, + "tf.estimator.SummarySaverHook.__ne__": true, + "tf.estimator.SummarySaverHook.__new__": true, + "tf.estimator.SummarySaverHook.after_create_session": true, + "tf.estimator.SummarySaverHook.after_run": true, + "tf.estimator.SummarySaverHook.before_run": true, + "tf.estimator.SummarySaverHook.begin": true, + "tf.estimator.SummarySaverHook.end": true, + "tf.estimator.TrainSpec": false, + "tf.estimator.TrainSpec.__add__": true, + "tf.estimator.TrainSpec.__contains__": true, + "tf.estimator.TrainSpec.__eq__": true, + "tf.estimator.TrainSpec.__ge__": true, + "tf.estimator.TrainSpec.__getitem__": true, + "tf.estimator.TrainSpec.__gt__": true, + "tf.estimator.TrainSpec.__init__": true, + "tf.estimator.TrainSpec.__iter__": true, + "tf.estimator.TrainSpec.__le__": true, + "tf.estimator.TrainSpec.__len__": true, + "tf.estimator.TrainSpec.__lt__": true, + "tf.estimator.TrainSpec.__mul__": true, + "tf.estimator.TrainSpec.__ne__": true, + "tf.estimator.TrainSpec.__new__": true, + "tf.estimator.TrainSpec.__rmul__": true, + "tf.estimator.TrainSpec.count": true, + "tf.estimator.TrainSpec.hooks": true, + "tf.estimator.TrainSpec.index": true, + "tf.estimator.TrainSpec.input_fn": true, + "tf.estimator.TrainSpec.max_steps": true, + "tf.estimator.VocabInfo": false, + "tf.estimator.VocabInfo.__add__": true, + "tf.estimator.VocabInfo.__contains__": true, + "tf.estimator.VocabInfo.__eq__": true, + "tf.estimator.VocabInfo.__ge__": true, + "tf.estimator.VocabInfo.__getitem__": true, + "tf.estimator.VocabInfo.__gt__": true, + "tf.estimator.VocabInfo.__init__": true, + "tf.estimator.VocabInfo.__iter__": true, + "tf.estimator.VocabInfo.__le__": true, + "tf.estimator.VocabInfo.__len__": true, + "tf.estimator.VocabInfo.__lt__": true, + "tf.estimator.VocabInfo.__mul__": true, + "tf.estimator.VocabInfo.__ne__": true, + "tf.estimator.VocabInfo.__new__": true, + "tf.estimator.VocabInfo.__rmul__": true, + "tf.estimator.VocabInfo.axis": true, + "tf.estimator.VocabInfo.backup_initializer": true, + "tf.estimator.VocabInfo.count": true, + "tf.estimator.VocabInfo.index": true, + "tf.estimator.VocabInfo.new_vocab": true, + "tf.estimator.VocabInfo.new_vocab_size": true, + "tf.estimator.VocabInfo.num_oov_buckets": true, + "tf.estimator.VocabInfo.old_vocab": true, + "tf.estimator.VocabInfo.old_vocab_size": true, + "tf.estimator.WarmStartSettings": false, + "tf.estimator.WarmStartSettings.__add__": true, + "tf.estimator.WarmStartSettings.__contains__": true, + "tf.estimator.WarmStartSettings.__eq__": true, + "tf.estimator.WarmStartSettings.__ge__": true, + "tf.estimator.WarmStartSettings.__getitem__": true, + "tf.estimator.WarmStartSettings.__gt__": true, + "tf.estimator.WarmStartSettings.__init__": true, + "tf.estimator.WarmStartSettings.__iter__": true, + "tf.estimator.WarmStartSettings.__le__": true, + "tf.estimator.WarmStartSettings.__len__": true, + "tf.estimator.WarmStartSettings.__lt__": true, + "tf.estimator.WarmStartSettings.__mul__": true, + "tf.estimator.WarmStartSettings.__ne__": true, + "tf.estimator.WarmStartSettings.__new__": true, + "tf.estimator.WarmStartSettings.__rmul__": true, + "tf.estimator.WarmStartSettings.ckpt_to_initialize_from": true, + "tf.estimator.WarmStartSettings.count": true, + "tf.estimator.WarmStartSettings.index": true, + "tf.estimator.WarmStartSettings.var_name_to_prev_var_name": true, + "tf.estimator.WarmStartSettings.var_name_to_vocab_info": true, + 
"tf.estimator.WarmStartSettings.vars_to_warm_start": true, + "tf.estimator.add_metrics": false, + "tf.estimator.classifier_parse_example_spec": false, + "tf.estimator.experimental": false, + "tf.estimator.experimental.InMemoryEvaluatorHook": false, + "tf.estimator.experimental.InMemoryEvaluatorHook.__eq__": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.__ge__": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.__gt__": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.__init__": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.__le__": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.__lt__": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.__ne__": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.__new__": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.after_create_session": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.after_run": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.before_run": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.begin": true, + "tf.estimator.experimental.InMemoryEvaluatorHook.end": true, + "tf.estimator.experimental.LinearSDCA": false, + "tf.estimator.experimental.LinearSDCA.__eq__": true, + "tf.estimator.experimental.LinearSDCA.__ge__": true, + "tf.estimator.experimental.LinearSDCA.__gt__": true, + "tf.estimator.experimental.LinearSDCA.__init__": true, + "tf.estimator.experimental.LinearSDCA.__le__": true, + "tf.estimator.experimental.LinearSDCA.__lt__": true, + "tf.estimator.experimental.LinearSDCA.__ne__": true, + "tf.estimator.experimental.LinearSDCA.__new__": true, + "tf.estimator.experimental.LinearSDCA.get_train_step": true, + "tf.estimator.experimental.RNNClassifier": false, + "tf.estimator.experimental.RNNClassifier.__eq__": true, + "tf.estimator.experimental.RNNClassifier.__ge__": true, + "tf.estimator.experimental.RNNClassifier.__gt__": true, + "tf.estimator.experimental.RNNClassifier.__init__": true, + "tf.estimator.experimental.RNNClassifier.__le__": true, + "tf.estimator.experimental.RNNClassifier.__lt__": true, + "tf.estimator.experimental.RNNClassifier.__ne__": true, + "tf.estimator.experimental.RNNClassifier.__new__": true, + "tf.estimator.experimental.RNNClassifier.config": true, + "tf.estimator.experimental.RNNClassifier.eval_dir": true, + "tf.estimator.experimental.RNNClassifier.evaluate": true, + "tf.estimator.experimental.RNNClassifier.experimental_export_all_saved_models": true, + "tf.estimator.experimental.RNNClassifier.export_saved_model": true, + "tf.estimator.experimental.RNNClassifier.export_savedmodel": true, + "tf.estimator.experimental.RNNClassifier.get_variable_names": true, + "tf.estimator.experimental.RNNClassifier.get_variable_value": true, + "tf.estimator.experimental.RNNClassifier.latest_checkpoint": true, + "tf.estimator.experimental.RNNClassifier.model_dir": true, + "tf.estimator.experimental.RNNClassifier.model_fn": true, + "tf.estimator.experimental.RNNClassifier.params": true, + "tf.estimator.experimental.RNNClassifier.predict": true, + "tf.estimator.experimental.RNNClassifier.train": true, + "tf.estimator.experimental.RNNEstimator": false, + "tf.estimator.experimental.RNNEstimator.__eq__": true, + "tf.estimator.experimental.RNNEstimator.__ge__": true, + "tf.estimator.experimental.RNNEstimator.__gt__": true, + "tf.estimator.experimental.RNNEstimator.__init__": true, + "tf.estimator.experimental.RNNEstimator.__le__": true, + "tf.estimator.experimental.RNNEstimator.__lt__": true, + 
"tf.estimator.experimental.RNNEstimator.__ne__": true, + "tf.estimator.experimental.RNNEstimator.__new__": true, + "tf.estimator.experimental.RNNEstimator.config": true, + "tf.estimator.experimental.RNNEstimator.eval_dir": true, + "tf.estimator.experimental.RNNEstimator.evaluate": true, + "tf.estimator.experimental.RNNEstimator.experimental_export_all_saved_models": true, + "tf.estimator.experimental.RNNEstimator.export_saved_model": true, + "tf.estimator.experimental.RNNEstimator.export_savedmodel": true, + "tf.estimator.experimental.RNNEstimator.get_variable_names": true, + "tf.estimator.experimental.RNNEstimator.get_variable_value": true, + "tf.estimator.experimental.RNNEstimator.latest_checkpoint": true, + "tf.estimator.experimental.RNNEstimator.model_dir": true, + "tf.estimator.experimental.RNNEstimator.model_fn": true, + "tf.estimator.experimental.RNNEstimator.params": true, + "tf.estimator.experimental.RNNEstimator.predict": true, + "tf.estimator.experimental.RNNEstimator.train": true, + "tf.estimator.experimental.build_raw_supervised_input_receiver_fn": false, + "tf.estimator.experimental.call_logit_fn": false, + "tf.estimator.experimental.make_early_stopping_hook": false, + "tf.estimator.experimental.make_stop_at_checkpoint_step_hook": false, + "tf.estimator.experimental.stop_if_higher_hook": false, + "tf.estimator.experimental.stop_if_lower_hook": false, + "tf.estimator.experimental.stop_if_no_decrease_hook": false, + "tf.estimator.experimental.stop_if_no_increase_hook": false, + "tf.estimator.export": false, + "tf.estimator.export.ClassificationOutput": false, + "tf.estimator.export.ClassificationOutput.__eq__": true, + "tf.estimator.export.ClassificationOutput.__ge__": true, + "tf.estimator.export.ClassificationOutput.__gt__": true, + "tf.estimator.export.ClassificationOutput.__init__": true, + "tf.estimator.export.ClassificationOutput.__le__": true, + "tf.estimator.export.ClassificationOutput.__lt__": true, + "tf.estimator.export.ClassificationOutput.__ne__": true, + "tf.estimator.export.ClassificationOutput.__new__": true, + "tf.estimator.export.ClassificationOutput.as_signature_def": true, + "tf.estimator.export.ClassificationOutput.classes": true, + "tf.estimator.export.ClassificationOutput.scores": true, + "tf.estimator.export.ExportOutput": false, + "tf.estimator.export.ExportOutput.__eq__": true, + "tf.estimator.export.ExportOutput.__ge__": true, + "tf.estimator.export.ExportOutput.__gt__": true, + "tf.estimator.export.ExportOutput.__init__": true, + "tf.estimator.export.ExportOutput.__le__": true, + "tf.estimator.export.ExportOutput.__lt__": true, + "tf.estimator.export.ExportOutput.__ne__": true, + "tf.estimator.export.ExportOutput.__new__": true, + "tf.estimator.export.ExportOutput.as_signature_def": true, + "tf.estimator.export.PredictOutput": false, + "tf.estimator.export.PredictOutput.__eq__": true, + "tf.estimator.export.PredictOutput.__ge__": true, + "tf.estimator.export.PredictOutput.__gt__": true, + "tf.estimator.export.PredictOutput.__init__": true, + "tf.estimator.export.PredictOutput.__le__": true, + "tf.estimator.export.PredictOutput.__lt__": true, + "tf.estimator.export.PredictOutput.__ne__": true, + "tf.estimator.export.PredictOutput.__new__": true, + "tf.estimator.export.PredictOutput.as_signature_def": true, + "tf.estimator.export.PredictOutput.outputs": true, + "tf.estimator.export.RegressionOutput": false, + "tf.estimator.export.RegressionOutput.__eq__": true, + "tf.estimator.export.RegressionOutput.__ge__": true, + 
"tf.estimator.export.RegressionOutput.__gt__": true, + "tf.estimator.export.RegressionOutput.__init__": true, + "tf.estimator.export.RegressionOutput.__le__": true, + "tf.estimator.export.RegressionOutput.__lt__": true, + "tf.estimator.export.RegressionOutput.__ne__": true, + "tf.estimator.export.RegressionOutput.__new__": true, + "tf.estimator.export.RegressionOutput.as_signature_def": true, + "tf.estimator.export.RegressionOutput.value": true, + "tf.estimator.export.ServingInputReceiver": false, + "tf.estimator.export.ServingInputReceiver.__add__": true, + "tf.estimator.export.ServingInputReceiver.__contains__": true, + "tf.estimator.export.ServingInputReceiver.__eq__": true, + "tf.estimator.export.ServingInputReceiver.__ge__": true, + "tf.estimator.export.ServingInputReceiver.__getitem__": true, + "tf.estimator.export.ServingInputReceiver.__gt__": true, + "tf.estimator.export.ServingInputReceiver.__init__": true, + "tf.estimator.export.ServingInputReceiver.__iter__": true, + "tf.estimator.export.ServingInputReceiver.__le__": true, + "tf.estimator.export.ServingInputReceiver.__len__": true, + "tf.estimator.export.ServingInputReceiver.__lt__": true, + "tf.estimator.export.ServingInputReceiver.__mul__": true, + "tf.estimator.export.ServingInputReceiver.__ne__": true, + "tf.estimator.export.ServingInputReceiver.__new__": true, + "tf.estimator.export.ServingInputReceiver.__rmul__": true, + "tf.estimator.export.ServingInputReceiver.count": true, + "tf.estimator.export.ServingInputReceiver.features": true, + "tf.estimator.export.ServingInputReceiver.index": true, + "tf.estimator.export.ServingInputReceiver.receiver_tensors": true, + "tf.estimator.export.ServingInputReceiver.receiver_tensors_alternatives": true, + "tf.estimator.export.TensorServingInputReceiver": false, + "tf.estimator.export.TensorServingInputReceiver.__add__": true, + "tf.estimator.export.TensorServingInputReceiver.__contains__": true, + "tf.estimator.export.TensorServingInputReceiver.__eq__": true, + "tf.estimator.export.TensorServingInputReceiver.__ge__": true, + "tf.estimator.export.TensorServingInputReceiver.__getitem__": true, + "tf.estimator.export.TensorServingInputReceiver.__gt__": true, + "tf.estimator.export.TensorServingInputReceiver.__init__": true, + "tf.estimator.export.TensorServingInputReceiver.__iter__": true, + "tf.estimator.export.TensorServingInputReceiver.__le__": true, + "tf.estimator.export.TensorServingInputReceiver.__len__": true, + "tf.estimator.export.TensorServingInputReceiver.__lt__": true, + "tf.estimator.export.TensorServingInputReceiver.__mul__": true, + "tf.estimator.export.TensorServingInputReceiver.__ne__": true, + "tf.estimator.export.TensorServingInputReceiver.__new__": true, + "tf.estimator.export.TensorServingInputReceiver.__rmul__": true, + "tf.estimator.export.TensorServingInputReceiver.count": true, + "tf.estimator.export.TensorServingInputReceiver.features": true, + "tf.estimator.export.TensorServingInputReceiver.index": true, + "tf.estimator.export.TensorServingInputReceiver.receiver_tensors": true, + "tf.estimator.export.TensorServingInputReceiver.receiver_tensors_alternatives": true, + "tf.estimator.export.build_parsing_serving_input_receiver_fn": false, + "tf.estimator.export.build_raw_serving_input_receiver_fn": false, + "tf.estimator.regressor_parse_example_spec": false, + "tf.estimator.train_and_evaluate": false, + "tf.executing_eagerly": false, + "tf.exp": false, + "tf.expand_dims": false, + "tf.experimental": false, + "tf.experimental.async_clear_error": false, + 
"tf.experimental.async_scope": false, + "tf.experimental.dlpack": false, + "tf.experimental.dlpack.from_dlpack": false, + "tf.experimental.dlpack.to_dlpack": false, + "tf.experimental.function_executor_type": false, + "tf.experimental.tensorrt": false, + "tf.experimental.tensorrt.ConversionParams": false, + "tf.experimental.tensorrt.ConversionParams.__add__": true, + "tf.experimental.tensorrt.ConversionParams.__contains__": true, + "tf.experimental.tensorrt.ConversionParams.__eq__": true, + "tf.experimental.tensorrt.ConversionParams.__ge__": true, + "tf.experimental.tensorrt.ConversionParams.__getitem__": true, + "tf.experimental.tensorrt.ConversionParams.__gt__": true, + "tf.experimental.tensorrt.ConversionParams.__init__": true, + "tf.experimental.tensorrt.ConversionParams.__iter__": true, + "tf.experimental.tensorrt.ConversionParams.__le__": true, + "tf.experimental.tensorrt.ConversionParams.__len__": true, + "tf.experimental.tensorrt.ConversionParams.__lt__": true, + "tf.experimental.tensorrt.ConversionParams.__mul__": true, + "tf.experimental.tensorrt.ConversionParams.__ne__": true, + "tf.experimental.tensorrt.ConversionParams.__new__": true, + "tf.experimental.tensorrt.ConversionParams.__rmul__": true, + "tf.experimental.tensorrt.ConversionParams.allow_build_at_runtime": true, + "tf.experimental.tensorrt.ConversionParams.count": true, + "tf.experimental.tensorrt.ConversionParams.index": true, + "tf.experimental.tensorrt.ConversionParams.is_dynamic_op": true, + "tf.experimental.tensorrt.ConversionParams.max_batch_size": true, + "tf.experimental.tensorrt.ConversionParams.max_workspace_size_bytes": true, + "tf.experimental.tensorrt.ConversionParams.maximum_cached_engines": true, + "tf.experimental.tensorrt.ConversionParams.minimum_segment_size": true, + "tf.experimental.tensorrt.ConversionParams.precision_mode": true, + "tf.experimental.tensorrt.ConversionParams.rewriter_config_template": true, + "tf.experimental.tensorrt.ConversionParams.use_calibration": true, + "tf.experimental.tensorrt.Converter": false, + "tf.experimental.tensorrt.Converter.__eq__": true, + "tf.experimental.tensorrt.Converter.__ge__": true, + "tf.experimental.tensorrt.Converter.__gt__": true, + "tf.experimental.tensorrt.Converter.__init__": true, + "tf.experimental.tensorrt.Converter.__le__": true, + "tf.experimental.tensorrt.Converter.__lt__": true, + "tf.experimental.tensorrt.Converter.__ne__": true, + "tf.experimental.tensorrt.Converter.__new__": true, + "tf.experimental.tensorrt.Converter.build": true, + "tf.experimental.tensorrt.Converter.convert": true, + "tf.experimental.tensorrt.Converter.save": true, + "tf.extract_volume_patches": false, + "tf.eye": false, + "tf.feature_column": false, + "tf.feature_column.bucketized_column": false, + "tf.feature_column.categorical_column_with_hash_bucket": false, + "tf.feature_column.categorical_column_with_identity": false, + "tf.feature_column.categorical_column_with_vocabulary_file": false, + "tf.feature_column.categorical_column_with_vocabulary_list": false, + "tf.feature_column.crossed_column": false, + "tf.feature_column.embedding_column": false, + "tf.feature_column.indicator_column": false, + "tf.feature_column.make_parse_example_spec": false, + "tf.feature_column.numeric_column": false, + "tf.feature_column.sequence_categorical_column_with_hash_bucket": false, + "tf.feature_column.sequence_categorical_column_with_identity": false, + "tf.feature_column.sequence_categorical_column_with_vocabulary_file": false, + 
"tf.feature_column.sequence_categorical_column_with_vocabulary_list": false, + "tf.feature_column.sequence_numeric_column": false, + "tf.feature_column.shared_embeddings": false, + "tf.feature_column.weighted_categorical_column": false, + "tf.fill": false, + "tf.fingerprint": false, + "tf.float16": true, + "tf.float32": true, + "tf.float64": true, + "tf.floor": false, + "tf.foldl": false, + "tf.foldr": false, + "tf.function": false, + "tf.gather": false, + "tf.gather_nd": false, + "tf.get_logger": false, + "tf.get_static_value": false, + "tf.grad_pass_through": false, + "tf.gradients": false, + "tf.graph_util": false, + "tf.graph_util.import_graph_def": false, + "tf.greater": false, + "tf.greater_equal": false, + "tf.group": false, + "tf.guarantee_const": false, + "tf.half": true, + "tf.hessians": false, + "tf.histogram_fixed_width": false, + "tf.histogram_fixed_width_bins": false, + "tf.identity": false, + "tf.identity_n": false, + "tf.image": false, + "tf.image.ResizeMethod": false, + "tf.image.ResizeMethod.AREA": true, + "tf.image.ResizeMethod.BICUBIC": true, + "tf.image.ResizeMethod.BILINEAR": true, + "tf.image.ResizeMethod.GAUSSIAN": true, + "tf.image.ResizeMethod.LANCZOS3": true, + "tf.image.ResizeMethod.LANCZOS5": true, + "tf.image.ResizeMethod.MITCHELLCUBIC": true, + "tf.image.ResizeMethod.NEAREST_NEIGHBOR": true, + "tf.image.ResizeMethod.__eq__": true, + "tf.image.ResizeMethod.__ge__": true, + "tf.image.ResizeMethod.__gt__": true, + "tf.image.ResizeMethod.__init__": true, + "tf.image.ResizeMethod.__le__": true, + "tf.image.ResizeMethod.__lt__": true, + "tf.image.ResizeMethod.__ne__": true, + "tf.image.ResizeMethod.__new__": true, + "tf.image.adjust_brightness": false, + "tf.image.adjust_contrast": false, + "tf.image.adjust_gamma": false, + "tf.image.adjust_hue": false, + "tf.image.adjust_jpeg_quality": false, + "tf.image.adjust_saturation": false, + "tf.image.central_crop": false, + "tf.image.combined_non_max_suppression": false, + "tf.image.convert_image_dtype": false, + "tf.image.crop_and_resize": false, + "tf.image.crop_to_bounding_box": false, + "tf.image.decode_and_crop_jpeg": false, + "tf.image.decode_bmp": false, + "tf.image.decode_gif": false, + "tf.image.decode_image": false, + "tf.image.decode_jpeg": false, + "tf.image.decode_png": false, + "tf.image.draw_bounding_boxes": false, + "tf.image.encode_jpeg": false, + "tf.image.encode_png": false, + "tf.image.extract_glimpse": false, + "tf.image.extract_jpeg_shape": false, + "tf.image.extract_patches": false, + "tf.image.flip_left_right": false, + "tf.image.flip_up_down": false, + "tf.image.generate_bounding_box_proposals": false, + "tf.image.grayscale_to_rgb": false, + "tf.image.hsv_to_rgb": false, + "tf.image.image_gradients": false, + "tf.image.is_jpeg": false, + "tf.image.non_max_suppression": false, + "tf.image.non_max_suppression_overlaps": false, + "tf.image.non_max_suppression_padded": false, + "tf.image.non_max_suppression_with_scores": false, + "tf.image.pad_to_bounding_box": false, + "tf.image.per_image_standardization": false, + "tf.image.psnr": false, + "tf.image.random_brightness": false, + "tf.image.random_contrast": false, + "tf.image.random_crop": false, + "tf.image.random_flip_left_right": false, + "tf.image.random_flip_up_down": false, + "tf.image.random_hue": false, + "tf.image.random_jpeg_quality": false, + "tf.image.random_saturation": false, + "tf.image.resize": false, + "tf.image.resize_with_crop_or_pad": false, + "tf.image.resize_with_pad": false, + "tf.image.rgb_to_grayscale": false, + 
"tf.image.rgb_to_hsv": false, + "tf.image.rgb_to_yiq": false, + "tf.image.rgb_to_yuv": false, + "tf.image.rot90": false, + "tf.image.sample_distorted_bounding_box": false, + "tf.image.sobel_edges": false, + "tf.image.ssim": false, + "tf.image.ssim_multiscale": false, + "tf.image.total_variation": false, + "tf.image.transpose": false, + "tf.image.yiq_to_rgb": false, + "tf.image.yuv_to_rgb": false, + "tf.import_graph_def": false, + "tf.init_scope": false, + "tf.initializers": false, + "tf.initializers.Constant": false, + "tf.initializers.Constant.__call__": true, + "tf.initializers.Constant.__eq__": true, + "tf.initializers.Constant.__ge__": true, + "tf.initializers.Constant.__gt__": true, + "tf.initializers.Constant.__init__": true, + "tf.initializers.Constant.__le__": true, + "tf.initializers.Constant.__lt__": true, + "tf.initializers.Constant.__ne__": true, + "tf.initializers.Constant.__new__": true, + "tf.initializers.Constant.from_config": true, + "tf.initializers.Constant.get_config": true, + "tf.initializers.GlorotNormal": false, + "tf.initializers.GlorotNormal.__call__": true, + "tf.initializers.GlorotNormal.__eq__": true, + "tf.initializers.GlorotNormal.__ge__": true, + "tf.initializers.GlorotNormal.__gt__": true, + "tf.initializers.GlorotNormal.__init__": true, + "tf.initializers.GlorotNormal.__le__": true, + "tf.initializers.GlorotNormal.__lt__": true, + "tf.initializers.GlorotNormal.__ne__": true, + "tf.initializers.GlorotNormal.__new__": true, + "tf.initializers.GlorotNormal.from_config": true, + "tf.initializers.GlorotNormal.get_config": true, + "tf.initializers.GlorotUniform": false, + "tf.initializers.GlorotUniform.__call__": true, + "tf.initializers.GlorotUniform.__eq__": true, + "tf.initializers.GlorotUniform.__ge__": true, + "tf.initializers.GlorotUniform.__gt__": true, + "tf.initializers.GlorotUniform.__init__": true, + "tf.initializers.GlorotUniform.__le__": true, + "tf.initializers.GlorotUniform.__lt__": true, + "tf.initializers.GlorotUniform.__ne__": true, + "tf.initializers.GlorotUniform.__new__": true, + "tf.initializers.GlorotUniform.from_config": true, + "tf.initializers.GlorotUniform.get_config": true, + "tf.initializers.Identity": false, + "tf.initializers.Identity.__call__": true, + "tf.initializers.Identity.__eq__": true, + "tf.initializers.Identity.__ge__": true, + "tf.initializers.Identity.__gt__": true, + "tf.initializers.Identity.__init__": true, + "tf.initializers.Identity.__le__": true, + "tf.initializers.Identity.__lt__": true, + "tf.initializers.Identity.__ne__": true, + "tf.initializers.Identity.__new__": true, + "tf.initializers.Identity.from_config": true, + "tf.initializers.Identity.get_config": true, + "tf.initializers.Initializer": false, + "tf.initializers.Initializer.__call__": true, + "tf.initializers.Initializer.__eq__": true, + "tf.initializers.Initializer.__ge__": true, + "tf.initializers.Initializer.__gt__": true, + "tf.initializers.Initializer.__init__": true, + "tf.initializers.Initializer.__le__": true, + "tf.initializers.Initializer.__lt__": true, + "tf.initializers.Initializer.__ne__": true, + "tf.initializers.Initializer.__new__": true, + "tf.initializers.Initializer.from_config": true, + "tf.initializers.Initializer.get_config": true, + "tf.initializers.Ones": false, + "tf.initializers.Ones.__call__": true, + "tf.initializers.Ones.__eq__": true, + "tf.initializers.Ones.__ge__": true, + "tf.initializers.Ones.__gt__": true, + "tf.initializers.Ones.__init__": true, + "tf.initializers.Ones.__le__": true, + "tf.initializers.Ones.__lt__": 
true, + "tf.initializers.Ones.__ne__": true, + "tf.initializers.Ones.__new__": true, + "tf.initializers.Ones.from_config": true, + "tf.initializers.Ones.get_config": true, + "tf.initializers.Orthogonal": false, + "tf.initializers.Orthogonal.__call__": true, + "tf.initializers.Orthogonal.__eq__": true, + "tf.initializers.Orthogonal.__ge__": true, + "tf.initializers.Orthogonal.__gt__": true, + "tf.initializers.Orthogonal.__init__": true, + "tf.initializers.Orthogonal.__le__": true, + "tf.initializers.Orthogonal.__lt__": true, + "tf.initializers.Orthogonal.__ne__": true, + "tf.initializers.Orthogonal.__new__": true, + "tf.initializers.Orthogonal.from_config": true, + "tf.initializers.Orthogonal.get_config": true, + "tf.initializers.RandomNormal": false, + "tf.initializers.RandomNormal.__call__": true, + "tf.initializers.RandomNormal.__eq__": true, + "tf.initializers.RandomNormal.__ge__": true, + "tf.initializers.RandomNormal.__gt__": true, + "tf.initializers.RandomNormal.__init__": true, + "tf.initializers.RandomNormal.__le__": true, + "tf.initializers.RandomNormal.__lt__": true, + "tf.initializers.RandomNormal.__ne__": true, + "tf.initializers.RandomNormal.__new__": true, + "tf.initializers.RandomNormal.from_config": true, + "tf.initializers.RandomNormal.get_config": true, + "tf.initializers.RandomUniform": false, + "tf.initializers.RandomUniform.__call__": true, + "tf.initializers.RandomUniform.__eq__": true, + "tf.initializers.RandomUniform.__ge__": true, + "tf.initializers.RandomUniform.__gt__": true, + "tf.initializers.RandomUniform.__init__": true, + "tf.initializers.RandomUniform.__le__": true, + "tf.initializers.RandomUniform.__lt__": true, + "tf.initializers.RandomUniform.__ne__": true, + "tf.initializers.RandomUniform.__new__": true, + "tf.initializers.RandomUniform.from_config": true, + "tf.initializers.RandomUniform.get_config": true, + "tf.initializers.TruncatedNormal": false, + "tf.initializers.TruncatedNormal.__call__": true, + "tf.initializers.TruncatedNormal.__eq__": true, + "tf.initializers.TruncatedNormal.__ge__": true, + "tf.initializers.TruncatedNormal.__gt__": true, + "tf.initializers.TruncatedNormal.__init__": true, + "tf.initializers.TruncatedNormal.__le__": true, + "tf.initializers.TruncatedNormal.__lt__": true, + "tf.initializers.TruncatedNormal.__ne__": true, + "tf.initializers.TruncatedNormal.__new__": true, + "tf.initializers.TruncatedNormal.from_config": true, + "tf.initializers.TruncatedNormal.get_config": true, + "tf.initializers.VarianceScaling": false, + "tf.initializers.VarianceScaling.__call__": true, + "tf.initializers.VarianceScaling.__eq__": true, + "tf.initializers.VarianceScaling.__ge__": true, + "tf.initializers.VarianceScaling.__gt__": true, + "tf.initializers.VarianceScaling.__init__": true, + "tf.initializers.VarianceScaling.__le__": true, + "tf.initializers.VarianceScaling.__lt__": true, + "tf.initializers.VarianceScaling.__ne__": true, + "tf.initializers.VarianceScaling.__new__": true, + "tf.initializers.VarianceScaling.from_config": true, + "tf.initializers.VarianceScaling.get_config": true, + "tf.initializers.Zeros": false, + "tf.initializers.Zeros.__call__": true, + "tf.initializers.Zeros.__eq__": true, + "tf.initializers.Zeros.__ge__": true, + "tf.initializers.Zeros.__gt__": true, + "tf.initializers.Zeros.__init__": true, + "tf.initializers.Zeros.__le__": true, + "tf.initializers.Zeros.__lt__": true, + "tf.initializers.Zeros.__ne__": true, + "tf.initializers.Zeros.__new__": true, + "tf.initializers.Zeros.from_config": true, + 
"tf.initializers.Zeros.get_config": true, + "tf.initializers.constant": false, + "tf.initializers.constant.__call__": true, + "tf.initializers.constant.__eq__": true, + "tf.initializers.constant.__ge__": true, + "tf.initializers.constant.__gt__": true, + "tf.initializers.constant.__init__": true, + "tf.initializers.constant.__le__": true, + "tf.initializers.constant.__lt__": true, + "tf.initializers.constant.__ne__": true, + "tf.initializers.constant.__new__": true, + "tf.initializers.constant.from_config": true, + "tf.initializers.constant.get_config": true, + "tf.initializers.deserialize": false, + "tf.initializers.get": false, + "tf.initializers.glorot_normal": false, + "tf.initializers.glorot_normal.__call__": true, + "tf.initializers.glorot_normal.__eq__": true, + "tf.initializers.glorot_normal.__ge__": true, + "tf.initializers.glorot_normal.__gt__": true, + "tf.initializers.glorot_normal.__init__": true, + "tf.initializers.glorot_normal.__le__": true, + "tf.initializers.glorot_normal.__lt__": true, + "tf.initializers.glorot_normal.__ne__": true, + "tf.initializers.glorot_normal.__new__": true, + "tf.initializers.glorot_normal.from_config": true, + "tf.initializers.glorot_normal.get_config": true, + "tf.initializers.glorot_uniform": false, + "tf.initializers.glorot_uniform.__call__": true, + "tf.initializers.glorot_uniform.__eq__": true, + "tf.initializers.glorot_uniform.__ge__": true, + "tf.initializers.glorot_uniform.__gt__": true, + "tf.initializers.glorot_uniform.__init__": true, + "tf.initializers.glorot_uniform.__le__": true, + "tf.initializers.glorot_uniform.__lt__": true, + "tf.initializers.glorot_uniform.__ne__": true, + "tf.initializers.glorot_uniform.__new__": true, + "tf.initializers.glorot_uniform.from_config": true, + "tf.initializers.glorot_uniform.get_config": true, + "tf.initializers.he_normal": false, + "tf.initializers.he_uniform": false, + "tf.initializers.identity": false, + "tf.initializers.identity.__call__": true, + "tf.initializers.identity.__eq__": true, + "tf.initializers.identity.__ge__": true, + "tf.initializers.identity.__gt__": true, + "tf.initializers.identity.__init__": true, + "tf.initializers.identity.__le__": true, + "tf.initializers.identity.__lt__": true, + "tf.initializers.identity.__ne__": true, + "tf.initializers.identity.__new__": true, + "tf.initializers.identity.from_config": true, + "tf.initializers.identity.get_config": true, + "tf.initializers.lecun_normal": false, + "tf.initializers.lecun_uniform": false, + "tf.initializers.ones": false, + "tf.initializers.ones.__call__": true, + "tf.initializers.ones.__eq__": true, + "tf.initializers.ones.__ge__": true, + "tf.initializers.ones.__gt__": true, + "tf.initializers.ones.__init__": true, + "tf.initializers.ones.__le__": true, + "tf.initializers.ones.__lt__": true, + "tf.initializers.ones.__ne__": true, + "tf.initializers.ones.__new__": true, + "tf.initializers.ones.from_config": true, + "tf.initializers.ones.get_config": true, + "tf.initializers.orthogonal": false, + "tf.initializers.orthogonal.__call__": true, + "tf.initializers.orthogonal.__eq__": true, + "tf.initializers.orthogonal.__ge__": true, + "tf.initializers.orthogonal.__gt__": true, + "tf.initializers.orthogonal.__init__": true, + "tf.initializers.orthogonal.__le__": true, + "tf.initializers.orthogonal.__lt__": true, + "tf.initializers.orthogonal.__ne__": true, + "tf.initializers.orthogonal.__new__": true, + "tf.initializers.orthogonal.from_config": true, + "tf.initializers.orthogonal.get_config": true, + 
"tf.initializers.serialize": false, + "tf.initializers.zeros": false, + "tf.initializers.zeros.__call__": true, + "tf.initializers.zeros.__eq__": true, + "tf.initializers.zeros.__ge__": true, + "tf.initializers.zeros.__gt__": true, + "tf.initializers.zeros.__init__": true, + "tf.initializers.zeros.__le__": true, + "tf.initializers.zeros.__lt__": true, + "tf.initializers.zeros.__ne__": true, + "tf.initializers.zeros.__new__": true, + "tf.initializers.zeros.from_config": true, + "tf.initializers.zeros.get_config": true, + "tf.int16": true, + "tf.int32": true, + "tf.int64": true, + "tf.int8": true, + "tf.io": false, + "tf.io.FixedLenFeature": false, + "tf.io.FixedLenFeature.__add__": true, + "tf.io.FixedLenFeature.__contains__": true, + "tf.io.FixedLenFeature.__eq__": true, + "tf.io.FixedLenFeature.__ge__": true, + "tf.io.FixedLenFeature.__getitem__": true, + "tf.io.FixedLenFeature.__gt__": true, + "tf.io.FixedLenFeature.__init__": true, + "tf.io.FixedLenFeature.__iter__": true, + "tf.io.FixedLenFeature.__le__": true, + "tf.io.FixedLenFeature.__len__": true, + "tf.io.FixedLenFeature.__lt__": true, + "tf.io.FixedLenFeature.__mul__": true, + "tf.io.FixedLenFeature.__ne__": true, + "tf.io.FixedLenFeature.__new__": true, + "tf.io.FixedLenFeature.__rmul__": true, + "tf.io.FixedLenFeature.count": true, + "tf.io.FixedLenFeature.default_value": true, + "tf.io.FixedLenFeature.dtype": true, + "tf.io.FixedLenFeature.index": true, + "tf.io.FixedLenFeature.shape": true, + "tf.io.FixedLenSequenceFeature": false, + "tf.io.FixedLenSequenceFeature.__add__": true, + "tf.io.FixedLenSequenceFeature.__contains__": true, + "tf.io.FixedLenSequenceFeature.__eq__": true, + "tf.io.FixedLenSequenceFeature.__ge__": true, + "tf.io.FixedLenSequenceFeature.__getitem__": true, + "tf.io.FixedLenSequenceFeature.__gt__": true, + "tf.io.FixedLenSequenceFeature.__init__": true, + "tf.io.FixedLenSequenceFeature.__iter__": true, + "tf.io.FixedLenSequenceFeature.__le__": true, + "tf.io.FixedLenSequenceFeature.__len__": true, + "tf.io.FixedLenSequenceFeature.__lt__": true, + "tf.io.FixedLenSequenceFeature.__mul__": true, + "tf.io.FixedLenSequenceFeature.__ne__": true, + "tf.io.FixedLenSequenceFeature.__new__": true, + "tf.io.FixedLenSequenceFeature.__rmul__": true, + "tf.io.FixedLenSequenceFeature.allow_missing": true, + "tf.io.FixedLenSequenceFeature.count": true, + "tf.io.FixedLenSequenceFeature.default_value": true, + "tf.io.FixedLenSequenceFeature.dtype": true, + "tf.io.FixedLenSequenceFeature.index": true, + "tf.io.FixedLenSequenceFeature.shape": true, + "tf.io.RaggedFeature": false, + "tf.io.RaggedFeature.RowLengths": false, + "tf.io.RaggedFeature.RowLengths.__add__": true, + "tf.io.RaggedFeature.RowLengths.__contains__": true, + "tf.io.RaggedFeature.RowLengths.__eq__": true, + "tf.io.RaggedFeature.RowLengths.__ge__": true, + "tf.io.RaggedFeature.RowLengths.__getitem__": true, + "tf.io.RaggedFeature.RowLengths.__gt__": true, + "tf.io.RaggedFeature.RowLengths.__init__": true, + "tf.io.RaggedFeature.RowLengths.__iter__": true, + "tf.io.RaggedFeature.RowLengths.__le__": true, + "tf.io.RaggedFeature.RowLengths.__len__": true, + "tf.io.RaggedFeature.RowLengths.__lt__": true, + "tf.io.RaggedFeature.RowLengths.__mul__": true, + "tf.io.RaggedFeature.RowLengths.__ne__": true, + "tf.io.RaggedFeature.RowLengths.__new__": true, + "tf.io.RaggedFeature.RowLengths.__rmul__": true, + "tf.io.RaggedFeature.RowLengths.count": true, + "tf.io.RaggedFeature.RowLengths.index": true, + "tf.io.RaggedFeature.RowLengths.key": true, + 
"tf.io.RaggedFeature.RowLimits": false, + "tf.io.RaggedFeature.RowLimits.__add__": true, + "tf.io.RaggedFeature.RowLimits.__contains__": true, + "tf.io.RaggedFeature.RowLimits.__eq__": true, + "tf.io.RaggedFeature.RowLimits.__ge__": true, + "tf.io.RaggedFeature.RowLimits.__getitem__": true, + "tf.io.RaggedFeature.RowLimits.__gt__": true, + "tf.io.RaggedFeature.RowLimits.__init__": true, + "tf.io.RaggedFeature.RowLimits.__iter__": true, + "tf.io.RaggedFeature.RowLimits.__le__": true, + "tf.io.RaggedFeature.RowLimits.__len__": true, + "tf.io.RaggedFeature.RowLimits.__lt__": true, + "tf.io.RaggedFeature.RowLimits.__mul__": true, + "tf.io.RaggedFeature.RowLimits.__ne__": true, + "tf.io.RaggedFeature.RowLimits.__new__": true, + "tf.io.RaggedFeature.RowLimits.__rmul__": true, + "tf.io.RaggedFeature.RowLimits.count": true, + "tf.io.RaggedFeature.RowLimits.index": true, + "tf.io.RaggedFeature.RowLimits.key": true, + "tf.io.RaggedFeature.RowSplits": false, + "tf.io.RaggedFeature.RowSplits.__add__": true, + "tf.io.RaggedFeature.RowSplits.__contains__": true, + "tf.io.RaggedFeature.RowSplits.__eq__": true, + "tf.io.RaggedFeature.RowSplits.__ge__": true, + "tf.io.RaggedFeature.RowSplits.__getitem__": true, + "tf.io.RaggedFeature.RowSplits.__gt__": true, + "tf.io.RaggedFeature.RowSplits.__init__": true, + "tf.io.RaggedFeature.RowSplits.__iter__": true, + "tf.io.RaggedFeature.RowSplits.__le__": true, + "tf.io.RaggedFeature.RowSplits.__len__": true, + "tf.io.RaggedFeature.RowSplits.__lt__": true, + "tf.io.RaggedFeature.RowSplits.__mul__": true, + "tf.io.RaggedFeature.RowSplits.__ne__": true, + "tf.io.RaggedFeature.RowSplits.__new__": true, + "tf.io.RaggedFeature.RowSplits.__rmul__": true, + "tf.io.RaggedFeature.RowSplits.count": true, + "tf.io.RaggedFeature.RowSplits.index": true, + "tf.io.RaggedFeature.RowSplits.key": true, + "tf.io.RaggedFeature.RowStarts": false, + "tf.io.RaggedFeature.RowStarts.__add__": true, + "tf.io.RaggedFeature.RowStarts.__contains__": true, + "tf.io.RaggedFeature.RowStarts.__eq__": true, + "tf.io.RaggedFeature.RowStarts.__ge__": true, + "tf.io.RaggedFeature.RowStarts.__getitem__": true, + "tf.io.RaggedFeature.RowStarts.__gt__": true, + "tf.io.RaggedFeature.RowStarts.__init__": true, + "tf.io.RaggedFeature.RowStarts.__iter__": true, + "tf.io.RaggedFeature.RowStarts.__le__": true, + "tf.io.RaggedFeature.RowStarts.__len__": true, + "tf.io.RaggedFeature.RowStarts.__lt__": true, + "tf.io.RaggedFeature.RowStarts.__mul__": true, + "tf.io.RaggedFeature.RowStarts.__ne__": true, + "tf.io.RaggedFeature.RowStarts.__new__": true, + "tf.io.RaggedFeature.RowStarts.__rmul__": true, + "tf.io.RaggedFeature.RowStarts.count": true, + "tf.io.RaggedFeature.RowStarts.index": true, + "tf.io.RaggedFeature.RowStarts.key": true, + "tf.io.RaggedFeature.UniformRowLength": false, + "tf.io.RaggedFeature.UniformRowLength.__add__": true, + "tf.io.RaggedFeature.UniformRowLength.__contains__": true, + "tf.io.RaggedFeature.UniformRowLength.__eq__": true, + "tf.io.RaggedFeature.UniformRowLength.__ge__": true, + "tf.io.RaggedFeature.UniformRowLength.__getitem__": true, + "tf.io.RaggedFeature.UniformRowLength.__gt__": true, + "tf.io.RaggedFeature.UniformRowLength.__init__": true, + "tf.io.RaggedFeature.UniformRowLength.__iter__": true, + "tf.io.RaggedFeature.UniformRowLength.__le__": true, + "tf.io.RaggedFeature.UniformRowLength.__len__": true, + "tf.io.RaggedFeature.UniformRowLength.__lt__": true, + "tf.io.RaggedFeature.UniformRowLength.__mul__": true, + "tf.io.RaggedFeature.UniformRowLength.__ne__": true, + 
"tf.io.RaggedFeature.UniformRowLength.__new__": true, + "tf.io.RaggedFeature.UniformRowLength.__rmul__": true, + "tf.io.RaggedFeature.UniformRowLength.count": true, + "tf.io.RaggedFeature.UniformRowLength.index": true, + "tf.io.RaggedFeature.UniformRowLength.length": true, + "tf.io.RaggedFeature.ValueRowIds": false, + "tf.io.RaggedFeature.ValueRowIds.__add__": true, + "tf.io.RaggedFeature.ValueRowIds.__contains__": true, + "tf.io.RaggedFeature.ValueRowIds.__eq__": true, + "tf.io.RaggedFeature.ValueRowIds.__ge__": true, + "tf.io.RaggedFeature.ValueRowIds.__getitem__": true, + "tf.io.RaggedFeature.ValueRowIds.__gt__": true, + "tf.io.RaggedFeature.ValueRowIds.__init__": true, + "tf.io.RaggedFeature.ValueRowIds.__iter__": true, + "tf.io.RaggedFeature.ValueRowIds.__le__": true, + "tf.io.RaggedFeature.ValueRowIds.__len__": true, + "tf.io.RaggedFeature.ValueRowIds.__lt__": true, + "tf.io.RaggedFeature.ValueRowIds.__mul__": true, + "tf.io.RaggedFeature.ValueRowIds.__ne__": true, + "tf.io.RaggedFeature.ValueRowIds.__new__": true, + "tf.io.RaggedFeature.ValueRowIds.__rmul__": true, + "tf.io.RaggedFeature.ValueRowIds.count": true, + "tf.io.RaggedFeature.ValueRowIds.index": true, + "tf.io.RaggedFeature.ValueRowIds.key": true, + "tf.io.RaggedFeature.__add__": true, + "tf.io.RaggedFeature.__contains__": true, + "tf.io.RaggedFeature.__eq__": true, + "tf.io.RaggedFeature.__ge__": true, + "tf.io.RaggedFeature.__getitem__": true, + "tf.io.RaggedFeature.__gt__": true, + "tf.io.RaggedFeature.__init__": true, + "tf.io.RaggedFeature.__iter__": true, + "tf.io.RaggedFeature.__le__": true, + "tf.io.RaggedFeature.__len__": true, + "tf.io.RaggedFeature.__lt__": true, + "tf.io.RaggedFeature.__mul__": true, + "tf.io.RaggedFeature.__ne__": true, + "tf.io.RaggedFeature.__new__": true, + "tf.io.RaggedFeature.__rmul__": true, + "tf.io.RaggedFeature.count": true, + "tf.io.RaggedFeature.dtype": true, + "tf.io.RaggedFeature.index": true, + "tf.io.RaggedFeature.partitions": true, + "tf.io.RaggedFeature.row_splits_dtype": true, + "tf.io.RaggedFeature.validate": true, + "tf.io.RaggedFeature.value_key": true, + "tf.io.SparseFeature": false, + "tf.io.SparseFeature.__add__": true, + "tf.io.SparseFeature.__contains__": true, + "tf.io.SparseFeature.__eq__": true, + "tf.io.SparseFeature.__ge__": true, + "tf.io.SparseFeature.__getitem__": true, + "tf.io.SparseFeature.__gt__": true, + "tf.io.SparseFeature.__init__": true, + "tf.io.SparseFeature.__iter__": true, + "tf.io.SparseFeature.__le__": true, + "tf.io.SparseFeature.__len__": true, + "tf.io.SparseFeature.__lt__": true, + "tf.io.SparseFeature.__mul__": true, + "tf.io.SparseFeature.__ne__": true, + "tf.io.SparseFeature.__new__": true, + "tf.io.SparseFeature.__rmul__": true, + "tf.io.SparseFeature.already_sorted": true, + "tf.io.SparseFeature.count": true, + "tf.io.SparseFeature.dtype": true, + "tf.io.SparseFeature.index": true, + "tf.io.SparseFeature.index_key": true, + "tf.io.SparseFeature.size": true, + "tf.io.SparseFeature.value_key": true, + "tf.io.TFRecordOptions": false, + "tf.io.TFRecordOptions.__eq__": true, + "tf.io.TFRecordOptions.__ge__": true, + "tf.io.TFRecordOptions.__gt__": true, + "tf.io.TFRecordOptions.__init__": true, + "tf.io.TFRecordOptions.__le__": true, + "tf.io.TFRecordOptions.__lt__": true, + "tf.io.TFRecordOptions.__ne__": true, + "tf.io.TFRecordOptions.__new__": true, + "tf.io.TFRecordOptions.compression_type_map": true, + "tf.io.TFRecordOptions.get_compression_type_string": true, + "tf.io.TFRecordWriter": false, + "tf.io.TFRecordWriter.__enter__": true, + 
"tf.io.TFRecordWriter.__eq__": true, + "tf.io.TFRecordWriter.__exit__": true, + "tf.io.TFRecordWriter.__ge__": true, + "tf.io.TFRecordWriter.__gt__": true, + "tf.io.TFRecordWriter.__init__": true, + "tf.io.TFRecordWriter.__le__": true, + "tf.io.TFRecordWriter.__lt__": true, + "tf.io.TFRecordWriter.__ne__": true, + "tf.io.TFRecordWriter.__new__": true, + "tf.io.TFRecordWriter.close": true, + "tf.io.TFRecordWriter.flush": true, + "tf.io.TFRecordWriter.write": true, + "tf.io.VarLenFeature": false, + "tf.io.VarLenFeature.__add__": true, + "tf.io.VarLenFeature.__contains__": true, + "tf.io.VarLenFeature.__eq__": true, + "tf.io.VarLenFeature.__ge__": true, + "tf.io.VarLenFeature.__getitem__": true, + "tf.io.VarLenFeature.__gt__": true, + "tf.io.VarLenFeature.__init__": true, + "tf.io.VarLenFeature.__iter__": true, + "tf.io.VarLenFeature.__le__": true, + "tf.io.VarLenFeature.__len__": true, + "tf.io.VarLenFeature.__lt__": true, + "tf.io.VarLenFeature.__mul__": true, + "tf.io.VarLenFeature.__ne__": true, + "tf.io.VarLenFeature.__new__": true, + "tf.io.VarLenFeature.__rmul__": true, + "tf.io.VarLenFeature.count": true, + "tf.io.VarLenFeature.dtype": true, + "tf.io.VarLenFeature.index": true, + "tf.io.decode_and_crop_jpeg": false, + "tf.io.decode_base64": false, + "tf.io.decode_bmp": false, + "tf.io.decode_compressed": false, + "tf.io.decode_csv": false, + "tf.io.decode_gif": false, + "tf.io.decode_image": false, + "tf.io.decode_jpeg": false, + "tf.io.decode_json_example": false, + "tf.io.decode_png": false, + "tf.io.decode_proto": false, + "tf.io.decode_raw": false, + "tf.io.deserialize_many_sparse": false, + "tf.io.encode_base64": false, + "tf.io.encode_jpeg": false, + "tf.io.encode_proto": false, + "tf.io.extract_jpeg_shape": false, + "tf.io.gfile": false, + "tf.io.gfile.GFile": false, + "tf.io.gfile.GFile.__enter__": true, + "tf.io.gfile.GFile.__eq__": true, + "tf.io.gfile.GFile.__exit__": true, + "tf.io.gfile.GFile.__ge__": true, + "tf.io.gfile.GFile.__gt__": true, + "tf.io.gfile.GFile.__init__": true, + "tf.io.gfile.GFile.__iter__": true, + "tf.io.gfile.GFile.__le__": true, + "tf.io.gfile.GFile.__lt__": true, + "tf.io.gfile.GFile.__ne__": true, + "tf.io.gfile.GFile.__new__": true, + "tf.io.gfile.GFile.close": true, + "tf.io.gfile.GFile.flush": true, + "tf.io.gfile.GFile.mode": true, + "tf.io.gfile.GFile.name": true, + "tf.io.gfile.GFile.next": true, + "tf.io.gfile.GFile.read": true, + "tf.io.gfile.GFile.readline": true, + "tf.io.gfile.GFile.readlines": true, + "tf.io.gfile.GFile.seek": true, + "tf.io.gfile.GFile.seekable": true, + "tf.io.gfile.GFile.size": true, + "tf.io.gfile.GFile.tell": true, + "tf.io.gfile.GFile.write": true, + "tf.io.gfile.copy": false, + "tf.io.gfile.exists": false, + "tf.io.gfile.glob": false, + "tf.io.gfile.isdir": false, + "tf.io.gfile.listdir": false, + "tf.io.gfile.makedirs": false, + "tf.io.gfile.mkdir": false, + "tf.io.gfile.remove": false, + "tf.io.gfile.rename": false, + "tf.io.gfile.rmtree": false, + "tf.io.gfile.stat": false, + "tf.io.gfile.walk": false, + "tf.io.is_jpeg": false, + "tf.io.match_filenames_once": false, + "tf.io.matching_files": false, + "tf.io.parse_example": false, + "tf.io.parse_sequence_example": false, + "tf.io.parse_single_example": false, + "tf.io.parse_single_sequence_example": false, + "tf.io.parse_tensor": false, + "tf.io.read_file": false, + "tf.io.serialize_many_sparse": false, + "tf.io.serialize_sparse": false, + "tf.io.serialize_tensor": false, + "tf.io.write_file": false, + "tf.io.write_graph": false, + "tf.is_tensor": false, + 
"tf.keras": false, + "tf.keras.Input": false, + "tf.keras.Model": false, + "tf.keras.Model.__call__": true, + "tf.keras.Model.__eq__": true, + "tf.keras.Model.__ge__": true, + "tf.keras.Model.__gt__": true, + "tf.keras.Model.__init__": true, + "tf.keras.Model.__le__": true, + "tf.keras.Model.__lt__": true, + "tf.keras.Model.__ne__": true, + "tf.keras.Model.__new__": true, + "tf.keras.Model.activity_regularizer": true, + "tf.keras.Model.add_loss": true, + "tf.keras.Model.add_metric": true, + "tf.keras.Model.add_weight": true, + "tf.keras.Model.build": true, + "tf.keras.Model.call": true, + "tf.keras.Model.compile": true, + "tf.keras.Model.compute_mask": true, + "tf.keras.Model.compute_output_shape": true, + "tf.keras.Model.compute_output_signature": true, + "tf.keras.Model.count_params": true, + "tf.keras.Model.distribute_strategy": true, + "tf.keras.Model.dtype": true, + "tf.keras.Model.dynamic": true, + "tf.keras.Model.evaluate": true, + "tf.keras.Model.evaluate_generator": true, + "tf.keras.Model.fit": true, + "tf.keras.Model.fit_generator": true, + "tf.keras.Model.from_config": true, + "tf.keras.Model.get_config": true, + "tf.keras.Model.get_layer": true, + "tf.keras.Model.get_weights": true, + "tf.keras.Model.input": true, + "tf.keras.Model.input_spec": true, + "tf.keras.Model.layers": true, + "tf.keras.Model.load_weights": true, + "tf.keras.Model.losses": true, + "tf.keras.Model.make_predict_function": true, + "tf.keras.Model.make_test_function": true, + "tf.keras.Model.make_train_function": true, + "tf.keras.Model.metrics": true, + "tf.keras.Model.metrics_names": true, + "tf.keras.Model.name": true, + "tf.keras.Model.name_scope": true, + "tf.keras.Model.non_trainable_weights": true, + "tf.keras.Model.output": true, + "tf.keras.Model.predict": true, + "tf.keras.Model.predict_generator": true, + "tf.keras.Model.predict_on_batch": true, + "tf.keras.Model.predict_step": true, + "tf.keras.Model.reset_metrics": true, + "tf.keras.Model.reset_states": true, + "tf.keras.Model.run_eagerly": true, + "tf.keras.Model.save": true, + "tf.keras.Model.save_weights": true, + "tf.keras.Model.set_weights": true, + "tf.keras.Model.state_updates": true, + "tf.keras.Model.stateful": true, + "tf.keras.Model.submodules": true, + "tf.keras.Model.summary": true, + "tf.keras.Model.test_on_batch": true, + "tf.keras.Model.test_step": true, + "tf.keras.Model.to_json": true, + "tf.keras.Model.to_yaml": true, + "tf.keras.Model.train_on_batch": true, + "tf.keras.Model.train_step": true, + "tf.keras.Model.trainable": true, + "tf.keras.Model.trainable_weights": true, + "tf.keras.Model.weights": true, + "tf.keras.Model.with_name_scope": true, + "tf.keras.Sequential": false, + "tf.keras.Sequential.__call__": true, + "tf.keras.Sequential.__eq__": true, + "tf.keras.Sequential.__ge__": true, + "tf.keras.Sequential.__gt__": true, + "tf.keras.Sequential.__init__": true, + "tf.keras.Sequential.__le__": true, + "tf.keras.Sequential.__lt__": true, + "tf.keras.Sequential.__ne__": true, + "tf.keras.Sequential.__new__": true, + "tf.keras.Sequential.activity_regularizer": true, + "tf.keras.Sequential.add": true, + "tf.keras.Sequential.add_loss": true, + "tf.keras.Sequential.add_metric": true, + "tf.keras.Sequential.add_weight": true, + "tf.keras.Sequential.build": true, + "tf.keras.Sequential.call": true, + "tf.keras.Sequential.compile": true, + "tf.keras.Sequential.compute_mask": true, + "tf.keras.Sequential.compute_output_shape": true, + "tf.keras.Sequential.compute_output_signature": true, + "tf.keras.Sequential.count_params": 
true, + "tf.keras.Sequential.distribute_strategy": true, + "tf.keras.Sequential.dtype": true, + "tf.keras.Sequential.dynamic": true, + "tf.keras.Sequential.evaluate": true, + "tf.keras.Sequential.evaluate_generator": true, + "tf.keras.Sequential.fit": true, + "tf.keras.Sequential.fit_generator": true, + "tf.keras.Sequential.from_config": true, + "tf.keras.Sequential.get_config": true, + "tf.keras.Sequential.get_layer": true, + "tf.keras.Sequential.get_weights": true, + "tf.keras.Sequential.input": true, + "tf.keras.Sequential.input_spec": true, + "tf.keras.Sequential.layers": true, + "tf.keras.Sequential.load_weights": true, + "tf.keras.Sequential.losses": true, + "tf.keras.Sequential.make_predict_function": true, + "tf.keras.Sequential.make_test_function": true, + "tf.keras.Sequential.make_train_function": true, + "tf.keras.Sequential.metrics": true, + "tf.keras.Sequential.metrics_names": true, + "tf.keras.Sequential.name": true, + "tf.keras.Sequential.name_scope": true, + "tf.keras.Sequential.non_trainable_weights": true, + "tf.keras.Sequential.output": true, + "tf.keras.Sequential.pop": true, + "tf.keras.Sequential.predict": true, + "tf.keras.Sequential.predict_classes": true, + "tf.keras.Sequential.predict_generator": true, + "tf.keras.Sequential.predict_on_batch": true, + "tf.keras.Sequential.predict_proba": true, + "tf.keras.Sequential.predict_step": true, + "tf.keras.Sequential.reset_metrics": true, + "tf.keras.Sequential.reset_states": true, + "tf.keras.Sequential.run_eagerly": true, + "tf.keras.Sequential.save": true, + "tf.keras.Sequential.save_weights": true, + "tf.keras.Sequential.set_weights": true, + "tf.keras.Sequential.state_updates": true, + "tf.keras.Sequential.stateful": true, + "tf.keras.Sequential.submodules": true, + "tf.keras.Sequential.summary": true, + "tf.keras.Sequential.test_on_batch": true, + "tf.keras.Sequential.test_step": true, + "tf.keras.Sequential.to_json": true, + "tf.keras.Sequential.to_yaml": true, + "tf.keras.Sequential.train_on_batch": true, + "tf.keras.Sequential.train_step": true, + "tf.keras.Sequential.trainable": true, + "tf.keras.Sequential.trainable_weights": true, + "tf.keras.Sequential.weights": true, + "tf.keras.Sequential.with_name_scope": true, + "tf.keras.__version__": true, + "tf.keras.activations": false, + "tf.keras.activations.deserialize": false, + "tf.keras.activations.elu": false, + "tf.keras.activations.exponential": false, + "tf.keras.activations.get": false, + "tf.keras.activations.hard_sigmoid": false, + "tf.keras.activations.linear": false, + "tf.keras.activations.relu": false, + "tf.keras.activations.selu": false, + "tf.keras.activations.serialize": false, + "tf.keras.activations.sigmoid": false, + "tf.keras.activations.softmax": false, + "tf.keras.activations.softplus": false, + "tf.keras.activations.softsign": false, + "tf.keras.activations.swish": false, + "tf.keras.activations.tanh": false, + "tf.keras.applications": false, + "tf.keras.applications.DenseNet121": false, + "tf.keras.applications.DenseNet169": false, + "tf.keras.applications.DenseNet201": false, + "tf.keras.applications.InceptionResNetV2": false, + "tf.keras.applications.InceptionV3": false, + "tf.keras.applications.MobileNet": false, + "tf.keras.applications.MobileNetV2": false, + "tf.keras.applications.NASNetLarge": false, + "tf.keras.applications.NASNetMobile": false, + "tf.keras.applications.ResNet101": false, + "tf.keras.applications.ResNet101V2": false, + "tf.keras.applications.ResNet152": false, + "tf.keras.applications.ResNet152V2": false, + 
"tf.keras.applications.ResNet50": false, + "tf.keras.applications.ResNet50V2": false, + "tf.keras.applications.VGG16": false, + "tf.keras.applications.VGG19": false, + "tf.keras.applications.Xception": false, + "tf.keras.applications.densenet": false, + "tf.keras.applications.densenet.DenseNet121": false, + "tf.keras.applications.densenet.DenseNet169": false, + "tf.keras.applications.densenet.DenseNet201": false, + "tf.keras.applications.densenet.decode_predictions": false, + "tf.keras.applications.densenet.preprocess_input": false, + "tf.keras.applications.imagenet_utils": false, + "tf.keras.applications.imagenet_utils.decode_predictions": false, + "tf.keras.applications.imagenet_utils.preprocess_input": false, + "tf.keras.applications.inception_resnet_v2": false, + "tf.keras.applications.inception_resnet_v2.InceptionResNetV2": false, + "tf.keras.applications.inception_resnet_v2.decode_predictions": false, + "tf.keras.applications.inception_resnet_v2.preprocess_input": false, + "tf.keras.applications.inception_v3": false, + "tf.keras.applications.inception_v3.InceptionV3": false, + "tf.keras.applications.inception_v3.decode_predictions": false, + "tf.keras.applications.inception_v3.preprocess_input": false, + "tf.keras.applications.mobilenet": false, + "tf.keras.applications.mobilenet.MobileNet": false, + "tf.keras.applications.mobilenet.decode_predictions": false, + "tf.keras.applications.mobilenet.preprocess_input": false, + "tf.keras.applications.mobilenet_v2": false, + "tf.keras.applications.mobilenet_v2.MobileNetV2": false, + "tf.keras.applications.mobilenet_v2.decode_predictions": false, + "tf.keras.applications.mobilenet_v2.preprocess_input": false, + "tf.keras.applications.nasnet": false, + "tf.keras.applications.nasnet.NASNetLarge": false, + "tf.keras.applications.nasnet.NASNetMobile": false, + "tf.keras.applications.nasnet.decode_predictions": false, + "tf.keras.applications.nasnet.preprocess_input": false, + "tf.keras.applications.resnet": false, + "tf.keras.applications.resnet.ResNet101": false, + "tf.keras.applications.resnet.ResNet152": false, + "tf.keras.applications.resnet.ResNet50": false, + "tf.keras.applications.resnet.decode_predictions": false, + "tf.keras.applications.resnet.preprocess_input": false, + "tf.keras.applications.resnet50": false, + "tf.keras.applications.resnet50.ResNet50": false, + "tf.keras.applications.resnet50.decode_predictions": false, + "tf.keras.applications.resnet50.preprocess_input": false, + "tf.keras.applications.resnet_v2": false, + "tf.keras.applications.resnet_v2.ResNet101V2": false, + "tf.keras.applications.resnet_v2.ResNet152V2": false, + "tf.keras.applications.resnet_v2.ResNet50V2": false, + "tf.keras.applications.resnet_v2.decode_predictions": false, + "tf.keras.applications.resnet_v2.preprocess_input": false, + "tf.keras.applications.vgg16": false, + "tf.keras.applications.vgg16.VGG16": false, + "tf.keras.applications.vgg16.decode_predictions": false, + "tf.keras.applications.vgg16.preprocess_input": false, + "tf.keras.applications.vgg19": false, + "tf.keras.applications.vgg19.VGG19": false, + "tf.keras.applications.vgg19.decode_predictions": false, + "tf.keras.applications.vgg19.preprocess_input": false, + "tf.keras.applications.xception": false, + "tf.keras.applications.xception.Xception": false, + "tf.keras.applications.xception.decode_predictions": false, + "tf.keras.applications.xception.preprocess_input": false, + "tf.keras.backend": false, + "tf.keras.backend.abs": false, + "tf.keras.backend.all": false, + 
"tf.keras.backend.any": false, + "tf.keras.backend.arange": false, + "tf.keras.backend.argmax": false, + "tf.keras.backend.argmin": false, + "tf.keras.backend.backend": false, + "tf.keras.backend.batch_dot": false, + "tf.keras.backend.batch_flatten": false, + "tf.keras.backend.batch_get_value": false, + "tf.keras.backend.batch_normalization": false, + "tf.keras.backend.batch_set_value": false, + "tf.keras.backend.bias_add": false, + "tf.keras.backend.binary_crossentropy": false, + "tf.keras.backend.cast": false, + "tf.keras.backend.cast_to_floatx": false, + "tf.keras.backend.categorical_crossentropy": false, + "tf.keras.backend.clear_session": false, + "tf.keras.backend.clip": false, + "tf.keras.backend.concatenate": false, + "tf.keras.backend.constant": false, + "tf.keras.backend.conv1d": false, + "tf.keras.backend.conv2d": false, + "tf.keras.backend.conv2d_transpose": false, + "tf.keras.backend.conv3d": false, + "tf.keras.backend.cos": false, + "tf.keras.backend.count_params": false, + "tf.keras.backend.ctc_batch_cost": false, + "tf.keras.backend.ctc_decode": false, + "tf.keras.backend.ctc_label_dense_to_sparse": false, + "tf.keras.backend.cumprod": false, + "tf.keras.backend.cumsum": false, + "tf.keras.backend.depthwise_conv2d": false, + "tf.keras.backend.dot": false, + "tf.keras.backend.dropout": false, + "tf.keras.backend.dtype": false, + "tf.keras.backend.elu": false, + "tf.keras.backend.epsilon": false, + "tf.keras.backend.equal": false, + "tf.keras.backend.eval": false, + "tf.keras.backend.exp": false, + "tf.keras.backend.expand_dims": false, + "tf.keras.backend.eye": false, + "tf.keras.backend.flatten": false, + "tf.keras.backend.floatx": false, + "tf.keras.backend.foldl": false, + "tf.keras.backend.foldr": false, + "tf.keras.backend.function": false, + "tf.keras.backend.gather": false, + "tf.keras.backend.get_uid": false, + "tf.keras.backend.get_value": false, + "tf.keras.backend.gradients": false, + "tf.keras.backend.greater": false, + "tf.keras.backend.greater_equal": false, + "tf.keras.backend.hard_sigmoid": false, + "tf.keras.backend.image_data_format": false, + "tf.keras.backend.in_test_phase": false, + "tf.keras.backend.in_top_k": false, + "tf.keras.backend.in_train_phase": false, + "tf.keras.backend.int_shape": false, + "tf.keras.backend.is_keras_tensor": false, + "tf.keras.backend.is_sparse": false, + "tf.keras.backend.l2_normalize": false, + "tf.keras.backend.learning_phase": false, + "tf.keras.backend.learning_phase_scope": false, + "tf.keras.backend.less": false, + "tf.keras.backend.less_equal": false, + "tf.keras.backend.local_conv1d": false, + "tf.keras.backend.local_conv2d": false, + "tf.keras.backend.log": false, + "tf.keras.backend.manual_variable_initialization": false, + "tf.keras.backend.map_fn": false, + "tf.keras.backend.max": false, + "tf.keras.backend.maximum": false, + "tf.keras.backend.mean": false, + "tf.keras.backend.min": false, + "tf.keras.backend.minimum": false, + "tf.keras.backend.moving_average_update": false, + "tf.keras.backend.name_scope": false, + "tf.keras.backend.ndim": false, + "tf.keras.backend.normalize_batch_in_training": false, + "tf.keras.backend.not_equal": false, + "tf.keras.backend.one_hot": false, + "tf.keras.backend.ones": false, + "tf.keras.backend.ones_like": false, + "tf.keras.backend.permute_dimensions": false, + "tf.keras.backend.placeholder": false, + "tf.keras.backend.pool2d": false, + "tf.keras.backend.pool3d": false, + "tf.keras.backend.pow": false, + "tf.keras.backend.print_tensor": false, + "tf.keras.backend.prod": 
false, + "tf.keras.backend.random_binomial": false, + "tf.keras.backend.random_normal": false, + "tf.keras.backend.random_normal_variable": false, + "tf.keras.backend.random_uniform": false, + "tf.keras.backend.random_uniform_variable": false, + "tf.keras.backend.relu": false, + "tf.keras.backend.repeat": false, + "tf.keras.backend.repeat_elements": false, + "tf.keras.backend.reset_uids": false, + "tf.keras.backend.reshape": false, + "tf.keras.backend.resize_images": false, + "tf.keras.backend.resize_volumes": false, + "tf.keras.backend.reverse": false, + "tf.keras.backend.rnn": false, + "tf.keras.backend.round": false, + "tf.keras.backend.separable_conv2d": false, + "tf.keras.backend.set_epsilon": false, + "tf.keras.backend.set_floatx": false, + "tf.keras.backend.set_image_data_format": false, + "tf.keras.backend.set_learning_phase": false, + "tf.keras.backend.set_value": false, + "tf.keras.backend.shape": false, + "tf.keras.backend.sigmoid": false, + "tf.keras.backend.sign": false, + "tf.keras.backend.sin": false, + "tf.keras.backend.softmax": false, + "tf.keras.backend.softplus": false, + "tf.keras.backend.softsign": false, + "tf.keras.backend.sparse_categorical_crossentropy": false, + "tf.keras.backend.spatial_2d_padding": false, + "tf.keras.backend.spatial_3d_padding": false, + "tf.keras.backend.sqrt": false, + "tf.keras.backend.square": false, + "tf.keras.backend.squeeze": false, + "tf.keras.backend.stack": false, + "tf.keras.backend.std": false, + "tf.keras.backend.stop_gradient": false, + "tf.keras.backend.sum": false, + "tf.keras.backend.switch": false, + "tf.keras.backend.tanh": false, + "tf.keras.backend.temporal_padding": false, + "tf.keras.backend.tile": false, + "tf.keras.backend.to_dense": false, + "tf.keras.backend.transpose": false, + "tf.keras.backend.truncated_normal": false, + "tf.keras.backend.update": false, + "tf.keras.backend.update_add": false, + "tf.keras.backend.update_sub": false, + "tf.keras.backend.var": false, + "tf.keras.backend.variable": false, + "tf.keras.backend.zeros": false, + "tf.keras.backend.zeros_like": false, + "tf.keras.callbacks": false, + "tf.keras.callbacks.BaseLogger": false, + "tf.keras.callbacks.BaseLogger.__eq__": true, + "tf.keras.callbacks.BaseLogger.__ge__": true, + "tf.keras.callbacks.BaseLogger.__gt__": true, + "tf.keras.callbacks.BaseLogger.__init__": true, + "tf.keras.callbacks.BaseLogger.__le__": true, + "tf.keras.callbacks.BaseLogger.__lt__": true, + "tf.keras.callbacks.BaseLogger.__ne__": true, + "tf.keras.callbacks.BaseLogger.__new__": true, + "tf.keras.callbacks.BaseLogger.on_batch_begin": true, + "tf.keras.callbacks.BaseLogger.on_batch_end": true, + "tf.keras.callbacks.BaseLogger.on_epoch_begin": true, + "tf.keras.callbacks.BaseLogger.on_epoch_end": true, + "tf.keras.callbacks.BaseLogger.on_predict_batch_begin": true, + "tf.keras.callbacks.BaseLogger.on_predict_batch_end": true, + "tf.keras.callbacks.BaseLogger.on_predict_begin": true, + "tf.keras.callbacks.BaseLogger.on_predict_end": true, + "tf.keras.callbacks.BaseLogger.on_test_batch_begin": true, + "tf.keras.callbacks.BaseLogger.on_test_batch_end": true, + "tf.keras.callbacks.BaseLogger.on_test_begin": true, + "tf.keras.callbacks.BaseLogger.on_test_end": true, + "tf.keras.callbacks.BaseLogger.on_train_batch_begin": true, + "tf.keras.callbacks.BaseLogger.on_train_batch_end": true, + "tf.keras.callbacks.BaseLogger.on_train_begin": true, + "tf.keras.callbacks.BaseLogger.on_train_end": true, + "tf.keras.callbacks.BaseLogger.set_model": true, + 
"tf.keras.callbacks.BaseLogger.set_params": true, + "tf.keras.callbacks.CSVLogger": false, + "tf.keras.callbacks.CSVLogger.__eq__": true, + "tf.keras.callbacks.CSVLogger.__ge__": true, + "tf.keras.callbacks.CSVLogger.__gt__": true, + "tf.keras.callbacks.CSVLogger.__init__": true, + "tf.keras.callbacks.CSVLogger.__le__": true, + "tf.keras.callbacks.CSVLogger.__lt__": true, + "tf.keras.callbacks.CSVLogger.__ne__": true, + "tf.keras.callbacks.CSVLogger.__new__": true, + "tf.keras.callbacks.CSVLogger.on_batch_begin": true, + "tf.keras.callbacks.CSVLogger.on_batch_end": true, + "tf.keras.callbacks.CSVLogger.on_epoch_begin": true, + "tf.keras.callbacks.CSVLogger.on_epoch_end": true, + "tf.keras.callbacks.CSVLogger.on_predict_batch_begin": true, + "tf.keras.callbacks.CSVLogger.on_predict_batch_end": true, + "tf.keras.callbacks.CSVLogger.on_predict_begin": true, + "tf.keras.callbacks.CSVLogger.on_predict_end": true, + "tf.keras.callbacks.CSVLogger.on_test_batch_begin": true, + "tf.keras.callbacks.CSVLogger.on_test_batch_end": true, + "tf.keras.callbacks.CSVLogger.on_test_begin": true, + "tf.keras.callbacks.CSVLogger.on_test_end": true, + "tf.keras.callbacks.CSVLogger.on_train_batch_begin": true, + "tf.keras.callbacks.CSVLogger.on_train_batch_end": true, + "tf.keras.callbacks.CSVLogger.on_train_begin": true, + "tf.keras.callbacks.CSVLogger.on_train_end": true, + "tf.keras.callbacks.CSVLogger.set_model": true, + "tf.keras.callbacks.CSVLogger.set_params": true, + "tf.keras.callbacks.Callback": false, + "tf.keras.callbacks.Callback.__eq__": true, + "tf.keras.callbacks.Callback.__ge__": true, + "tf.keras.callbacks.Callback.__gt__": true, + "tf.keras.callbacks.Callback.__init__": true, + "tf.keras.callbacks.Callback.__le__": true, + "tf.keras.callbacks.Callback.__lt__": true, + "tf.keras.callbacks.Callback.__ne__": true, + "tf.keras.callbacks.Callback.__new__": true, + "tf.keras.callbacks.Callback.on_batch_begin": true, + "tf.keras.callbacks.Callback.on_batch_end": true, + "tf.keras.callbacks.Callback.on_epoch_begin": true, + "tf.keras.callbacks.Callback.on_epoch_end": true, + "tf.keras.callbacks.Callback.on_predict_batch_begin": true, + "tf.keras.callbacks.Callback.on_predict_batch_end": true, + "tf.keras.callbacks.Callback.on_predict_begin": true, + "tf.keras.callbacks.Callback.on_predict_end": true, + "tf.keras.callbacks.Callback.on_test_batch_begin": true, + "tf.keras.callbacks.Callback.on_test_batch_end": true, + "tf.keras.callbacks.Callback.on_test_begin": true, + "tf.keras.callbacks.Callback.on_test_end": true, + "tf.keras.callbacks.Callback.on_train_batch_begin": true, + "tf.keras.callbacks.Callback.on_train_batch_end": true, + "tf.keras.callbacks.Callback.on_train_begin": true, + "tf.keras.callbacks.Callback.on_train_end": true, + "tf.keras.callbacks.Callback.set_model": true, + "tf.keras.callbacks.Callback.set_params": true, + "tf.keras.callbacks.EarlyStopping": false, + "tf.keras.callbacks.EarlyStopping.__eq__": true, + "tf.keras.callbacks.EarlyStopping.__ge__": true, + "tf.keras.callbacks.EarlyStopping.__gt__": true, + "tf.keras.callbacks.EarlyStopping.__init__": true, + "tf.keras.callbacks.EarlyStopping.__le__": true, + "tf.keras.callbacks.EarlyStopping.__lt__": true, + "tf.keras.callbacks.EarlyStopping.__ne__": true, + "tf.keras.callbacks.EarlyStopping.__new__": true, + "tf.keras.callbacks.EarlyStopping.get_monitor_value": true, + "tf.keras.callbacks.EarlyStopping.on_batch_begin": true, + "tf.keras.callbacks.EarlyStopping.on_batch_end": true, + 
"tf.keras.callbacks.EarlyStopping.on_epoch_begin": true, + "tf.keras.callbacks.EarlyStopping.on_epoch_end": true, + "tf.keras.callbacks.EarlyStopping.on_predict_batch_begin": true, + "tf.keras.callbacks.EarlyStopping.on_predict_batch_end": true, + "tf.keras.callbacks.EarlyStopping.on_predict_begin": true, + "tf.keras.callbacks.EarlyStopping.on_predict_end": true, + "tf.keras.callbacks.EarlyStopping.on_test_batch_begin": true, + "tf.keras.callbacks.EarlyStopping.on_test_batch_end": true, + "tf.keras.callbacks.EarlyStopping.on_test_begin": true, + "tf.keras.callbacks.EarlyStopping.on_test_end": true, + "tf.keras.callbacks.EarlyStopping.on_train_batch_begin": true, + "tf.keras.callbacks.EarlyStopping.on_train_batch_end": true, + "tf.keras.callbacks.EarlyStopping.on_train_begin": true, + "tf.keras.callbacks.EarlyStopping.on_train_end": true, + "tf.keras.callbacks.EarlyStopping.set_model": true, + "tf.keras.callbacks.EarlyStopping.set_params": true, + "tf.keras.callbacks.History": false, + "tf.keras.callbacks.History.__eq__": true, + "tf.keras.callbacks.History.__ge__": true, + "tf.keras.callbacks.History.__gt__": true, + "tf.keras.callbacks.History.__init__": true, + "tf.keras.callbacks.History.__le__": true, + "tf.keras.callbacks.History.__lt__": true, + "tf.keras.callbacks.History.__ne__": true, + "tf.keras.callbacks.History.__new__": true, + "tf.keras.callbacks.History.on_batch_begin": true, + "tf.keras.callbacks.History.on_batch_end": true, + "tf.keras.callbacks.History.on_epoch_begin": true, + "tf.keras.callbacks.History.on_epoch_end": true, + "tf.keras.callbacks.History.on_predict_batch_begin": true, + "tf.keras.callbacks.History.on_predict_batch_end": true, + "tf.keras.callbacks.History.on_predict_begin": true, + "tf.keras.callbacks.History.on_predict_end": true, + "tf.keras.callbacks.History.on_test_batch_begin": true, + "tf.keras.callbacks.History.on_test_batch_end": true, + "tf.keras.callbacks.History.on_test_begin": true, + "tf.keras.callbacks.History.on_test_end": true, + "tf.keras.callbacks.History.on_train_batch_begin": true, + "tf.keras.callbacks.History.on_train_batch_end": true, + "tf.keras.callbacks.History.on_train_begin": true, + "tf.keras.callbacks.History.on_train_end": true, + "tf.keras.callbacks.History.set_model": true, + "tf.keras.callbacks.History.set_params": true, + "tf.keras.callbacks.LambdaCallback": false, + "tf.keras.callbacks.LambdaCallback.__eq__": true, + "tf.keras.callbacks.LambdaCallback.__ge__": true, + "tf.keras.callbacks.LambdaCallback.__gt__": true, + "tf.keras.callbacks.LambdaCallback.__init__": true, + "tf.keras.callbacks.LambdaCallback.__le__": true, + "tf.keras.callbacks.LambdaCallback.__lt__": true, + "tf.keras.callbacks.LambdaCallback.__ne__": true, + "tf.keras.callbacks.LambdaCallback.__new__": true, + "tf.keras.callbacks.LambdaCallback.on_batch_begin": true, + "tf.keras.callbacks.LambdaCallback.on_batch_end": true, + "tf.keras.callbacks.LambdaCallback.on_epoch_begin": true, + "tf.keras.callbacks.LambdaCallback.on_epoch_end": true, + "tf.keras.callbacks.LambdaCallback.on_predict_batch_begin": true, + "tf.keras.callbacks.LambdaCallback.on_predict_batch_end": true, + "tf.keras.callbacks.LambdaCallback.on_predict_begin": true, + "tf.keras.callbacks.LambdaCallback.on_predict_end": true, + "tf.keras.callbacks.LambdaCallback.on_test_batch_begin": true, + "tf.keras.callbacks.LambdaCallback.on_test_batch_end": true, + "tf.keras.callbacks.LambdaCallback.on_test_begin": true, + "tf.keras.callbacks.LambdaCallback.on_test_end": true, + 
"tf.keras.callbacks.LambdaCallback.on_train_batch_begin": true, + "tf.keras.callbacks.LambdaCallback.on_train_batch_end": true, + "tf.keras.callbacks.LambdaCallback.on_train_begin": true, + "tf.keras.callbacks.LambdaCallback.on_train_end": true, + "tf.keras.callbacks.LambdaCallback.set_model": true, + "tf.keras.callbacks.LambdaCallback.set_params": true, + "tf.keras.callbacks.LearningRateScheduler": false, + "tf.keras.callbacks.LearningRateScheduler.__eq__": true, + "tf.keras.callbacks.LearningRateScheduler.__ge__": true, + "tf.keras.callbacks.LearningRateScheduler.__gt__": true, + "tf.keras.callbacks.LearningRateScheduler.__init__": true, + "tf.keras.callbacks.LearningRateScheduler.__le__": true, + "tf.keras.callbacks.LearningRateScheduler.__lt__": true, + "tf.keras.callbacks.LearningRateScheduler.__ne__": true, + "tf.keras.callbacks.LearningRateScheduler.__new__": true, + "tf.keras.callbacks.LearningRateScheduler.on_batch_begin": true, + "tf.keras.callbacks.LearningRateScheduler.on_batch_end": true, + "tf.keras.callbacks.LearningRateScheduler.on_epoch_begin": true, + "tf.keras.callbacks.LearningRateScheduler.on_epoch_end": true, + "tf.keras.callbacks.LearningRateScheduler.on_predict_batch_begin": true, + "tf.keras.callbacks.LearningRateScheduler.on_predict_batch_end": true, + "tf.keras.callbacks.LearningRateScheduler.on_predict_begin": true, + "tf.keras.callbacks.LearningRateScheduler.on_predict_end": true, + "tf.keras.callbacks.LearningRateScheduler.on_test_batch_begin": true, + "tf.keras.callbacks.LearningRateScheduler.on_test_batch_end": true, + "tf.keras.callbacks.LearningRateScheduler.on_test_begin": true, + "tf.keras.callbacks.LearningRateScheduler.on_test_end": true, + "tf.keras.callbacks.LearningRateScheduler.on_train_batch_begin": true, + "tf.keras.callbacks.LearningRateScheduler.on_train_batch_end": true, + "tf.keras.callbacks.LearningRateScheduler.on_train_begin": true, + "tf.keras.callbacks.LearningRateScheduler.on_train_end": true, + "tf.keras.callbacks.LearningRateScheduler.set_model": true, + "tf.keras.callbacks.LearningRateScheduler.set_params": true, + "tf.keras.callbacks.ModelCheckpoint": false, + "tf.keras.callbacks.ModelCheckpoint.__eq__": true, + "tf.keras.callbacks.ModelCheckpoint.__ge__": true, + "tf.keras.callbacks.ModelCheckpoint.__gt__": true, + "tf.keras.callbacks.ModelCheckpoint.__init__": true, + "tf.keras.callbacks.ModelCheckpoint.__le__": true, + "tf.keras.callbacks.ModelCheckpoint.__lt__": true, + "tf.keras.callbacks.ModelCheckpoint.__ne__": true, + "tf.keras.callbacks.ModelCheckpoint.__new__": true, + "tf.keras.callbacks.ModelCheckpoint.on_batch_begin": true, + "tf.keras.callbacks.ModelCheckpoint.on_batch_end": true, + "tf.keras.callbacks.ModelCheckpoint.on_epoch_begin": true, + "tf.keras.callbacks.ModelCheckpoint.on_epoch_end": true, + "tf.keras.callbacks.ModelCheckpoint.on_predict_batch_begin": true, + "tf.keras.callbacks.ModelCheckpoint.on_predict_batch_end": true, + "tf.keras.callbacks.ModelCheckpoint.on_predict_begin": true, + "tf.keras.callbacks.ModelCheckpoint.on_predict_end": true, + "tf.keras.callbacks.ModelCheckpoint.on_test_batch_begin": true, + "tf.keras.callbacks.ModelCheckpoint.on_test_batch_end": true, + "tf.keras.callbacks.ModelCheckpoint.on_test_begin": true, + "tf.keras.callbacks.ModelCheckpoint.on_test_end": true, + "tf.keras.callbacks.ModelCheckpoint.on_train_batch_begin": true, + "tf.keras.callbacks.ModelCheckpoint.on_train_batch_end": true, + "tf.keras.callbacks.ModelCheckpoint.on_train_begin": true, + 
"tf.keras.callbacks.ModelCheckpoint.on_train_end": true, + "tf.keras.callbacks.ModelCheckpoint.set_model": true, + "tf.keras.callbacks.ModelCheckpoint.set_params": true, + "tf.keras.callbacks.ProgbarLogger": false, + "tf.keras.callbacks.ProgbarLogger.__eq__": true, + "tf.keras.callbacks.ProgbarLogger.__ge__": true, + "tf.keras.callbacks.ProgbarLogger.__gt__": true, + "tf.keras.callbacks.ProgbarLogger.__init__": true, + "tf.keras.callbacks.ProgbarLogger.__le__": true, + "tf.keras.callbacks.ProgbarLogger.__lt__": true, + "tf.keras.callbacks.ProgbarLogger.__ne__": true, + "tf.keras.callbacks.ProgbarLogger.__new__": true, + "tf.keras.callbacks.ProgbarLogger.on_batch_begin": true, + "tf.keras.callbacks.ProgbarLogger.on_batch_end": true, + "tf.keras.callbacks.ProgbarLogger.on_epoch_begin": true, + "tf.keras.callbacks.ProgbarLogger.on_epoch_end": true, + "tf.keras.callbacks.ProgbarLogger.on_predict_batch_begin": true, + "tf.keras.callbacks.ProgbarLogger.on_predict_batch_end": true, + "tf.keras.callbacks.ProgbarLogger.on_predict_begin": true, + "tf.keras.callbacks.ProgbarLogger.on_predict_end": true, + "tf.keras.callbacks.ProgbarLogger.on_test_batch_begin": true, + "tf.keras.callbacks.ProgbarLogger.on_test_batch_end": true, + "tf.keras.callbacks.ProgbarLogger.on_test_begin": true, + "tf.keras.callbacks.ProgbarLogger.on_test_end": true, + "tf.keras.callbacks.ProgbarLogger.on_train_batch_begin": true, + "tf.keras.callbacks.ProgbarLogger.on_train_batch_end": true, + "tf.keras.callbacks.ProgbarLogger.on_train_begin": true, + "tf.keras.callbacks.ProgbarLogger.on_train_end": true, + "tf.keras.callbacks.ProgbarLogger.set_model": true, + "tf.keras.callbacks.ProgbarLogger.set_params": true, + "tf.keras.callbacks.ReduceLROnPlateau": false, + "tf.keras.callbacks.ReduceLROnPlateau.__eq__": true, + "tf.keras.callbacks.ReduceLROnPlateau.__ge__": true, + "tf.keras.callbacks.ReduceLROnPlateau.__gt__": true, + "tf.keras.callbacks.ReduceLROnPlateau.__init__": true, + "tf.keras.callbacks.ReduceLROnPlateau.__le__": true, + "tf.keras.callbacks.ReduceLROnPlateau.__lt__": true, + "tf.keras.callbacks.ReduceLROnPlateau.__ne__": true, + "tf.keras.callbacks.ReduceLROnPlateau.__new__": true, + "tf.keras.callbacks.ReduceLROnPlateau.in_cooldown": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_batch_begin": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_batch_end": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_epoch_begin": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_epoch_end": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_predict_batch_begin": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_predict_batch_end": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_predict_begin": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_predict_end": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_test_batch_begin": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_test_batch_end": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_test_begin": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_test_end": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_train_batch_begin": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_train_batch_end": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_train_begin": true, + "tf.keras.callbacks.ReduceLROnPlateau.on_train_end": true, + "tf.keras.callbacks.ReduceLROnPlateau.set_model": true, + "tf.keras.callbacks.ReduceLROnPlateau.set_params": true, + "tf.keras.callbacks.RemoteMonitor": false, + "tf.keras.callbacks.RemoteMonitor.__eq__": true, + "tf.keras.callbacks.RemoteMonitor.__ge__": true, + 
"tf.keras.callbacks.RemoteMonitor.__gt__": true, + "tf.keras.callbacks.RemoteMonitor.__init__": true, + "tf.keras.callbacks.RemoteMonitor.__le__": true, + "tf.keras.callbacks.RemoteMonitor.__lt__": true, + "tf.keras.callbacks.RemoteMonitor.__ne__": true, + "tf.keras.callbacks.RemoteMonitor.__new__": true, + "tf.keras.callbacks.RemoteMonitor.on_batch_begin": true, + "tf.keras.callbacks.RemoteMonitor.on_batch_end": true, + "tf.keras.callbacks.RemoteMonitor.on_epoch_begin": true, + "tf.keras.callbacks.RemoteMonitor.on_epoch_end": true, + "tf.keras.callbacks.RemoteMonitor.on_predict_batch_begin": true, + "tf.keras.callbacks.RemoteMonitor.on_predict_batch_end": true, + "tf.keras.callbacks.RemoteMonitor.on_predict_begin": true, + "tf.keras.callbacks.RemoteMonitor.on_predict_end": true, + "tf.keras.callbacks.RemoteMonitor.on_test_batch_begin": true, + "tf.keras.callbacks.RemoteMonitor.on_test_batch_end": true, + "tf.keras.callbacks.RemoteMonitor.on_test_begin": true, + "tf.keras.callbacks.RemoteMonitor.on_test_end": true, + "tf.keras.callbacks.RemoteMonitor.on_train_batch_begin": true, + "tf.keras.callbacks.RemoteMonitor.on_train_batch_end": true, + "tf.keras.callbacks.RemoteMonitor.on_train_begin": true, + "tf.keras.callbacks.RemoteMonitor.on_train_end": true, + "tf.keras.callbacks.RemoteMonitor.set_model": true, + "tf.keras.callbacks.RemoteMonitor.set_params": true, + "tf.keras.callbacks.TensorBoard": false, + "tf.keras.callbacks.TensorBoard.__eq__": true, + "tf.keras.callbacks.TensorBoard.__ge__": true, + "tf.keras.callbacks.TensorBoard.__gt__": true, + "tf.keras.callbacks.TensorBoard.__init__": true, + "tf.keras.callbacks.TensorBoard.__le__": true, + "tf.keras.callbacks.TensorBoard.__lt__": true, + "tf.keras.callbacks.TensorBoard.__ne__": true, + "tf.keras.callbacks.TensorBoard.__new__": true, + "tf.keras.callbacks.TensorBoard.on_batch_begin": true, + "tf.keras.callbacks.TensorBoard.on_batch_end": true, + "tf.keras.callbacks.TensorBoard.on_epoch_begin": true, + "tf.keras.callbacks.TensorBoard.on_epoch_end": true, + "tf.keras.callbacks.TensorBoard.on_predict_batch_begin": true, + "tf.keras.callbacks.TensorBoard.on_predict_batch_end": true, + "tf.keras.callbacks.TensorBoard.on_predict_begin": true, + "tf.keras.callbacks.TensorBoard.on_predict_end": true, + "tf.keras.callbacks.TensorBoard.on_test_batch_begin": true, + "tf.keras.callbacks.TensorBoard.on_test_batch_end": true, + "tf.keras.callbacks.TensorBoard.on_test_begin": true, + "tf.keras.callbacks.TensorBoard.on_test_end": true, + "tf.keras.callbacks.TensorBoard.on_train_batch_begin": true, + "tf.keras.callbacks.TensorBoard.on_train_batch_end": true, + "tf.keras.callbacks.TensorBoard.on_train_begin": true, + "tf.keras.callbacks.TensorBoard.on_train_end": true, + "tf.keras.callbacks.TensorBoard.set_model": true, + "tf.keras.callbacks.TensorBoard.set_params": true, + "tf.keras.callbacks.TerminateOnNaN": false, + "tf.keras.callbacks.TerminateOnNaN.__eq__": true, + "tf.keras.callbacks.TerminateOnNaN.__ge__": true, + "tf.keras.callbacks.TerminateOnNaN.__gt__": true, + "tf.keras.callbacks.TerminateOnNaN.__init__": true, + "tf.keras.callbacks.TerminateOnNaN.__le__": true, + "tf.keras.callbacks.TerminateOnNaN.__lt__": true, + "tf.keras.callbacks.TerminateOnNaN.__ne__": true, + "tf.keras.callbacks.TerminateOnNaN.__new__": true, + "tf.keras.callbacks.TerminateOnNaN.on_batch_begin": true, + "tf.keras.callbacks.TerminateOnNaN.on_batch_end": true, + "tf.keras.callbacks.TerminateOnNaN.on_epoch_begin": true, + 
"tf.keras.callbacks.TerminateOnNaN.on_epoch_end": true, + "tf.keras.callbacks.TerminateOnNaN.on_predict_batch_begin": true, + "tf.keras.callbacks.TerminateOnNaN.on_predict_batch_end": true, + "tf.keras.callbacks.TerminateOnNaN.on_predict_begin": true, + "tf.keras.callbacks.TerminateOnNaN.on_predict_end": true, + "tf.keras.callbacks.TerminateOnNaN.on_test_batch_begin": true, + "tf.keras.callbacks.TerminateOnNaN.on_test_batch_end": true, + "tf.keras.callbacks.TerminateOnNaN.on_test_begin": true, + "tf.keras.callbacks.TerminateOnNaN.on_test_end": true, + "tf.keras.callbacks.TerminateOnNaN.on_train_batch_begin": true, + "tf.keras.callbacks.TerminateOnNaN.on_train_batch_end": true, + "tf.keras.callbacks.TerminateOnNaN.on_train_begin": true, + "tf.keras.callbacks.TerminateOnNaN.on_train_end": true, + "tf.keras.callbacks.TerminateOnNaN.set_model": true, + "tf.keras.callbacks.TerminateOnNaN.set_params": true, + "tf.keras.constraints": false, + "tf.keras.constraints.Constraint": false, + "tf.keras.constraints.Constraint.__call__": true, + "tf.keras.constraints.Constraint.__eq__": true, + "tf.keras.constraints.Constraint.__ge__": true, + "tf.keras.constraints.Constraint.__gt__": true, + "tf.keras.constraints.Constraint.__init__": true, + "tf.keras.constraints.Constraint.__le__": true, + "tf.keras.constraints.Constraint.__lt__": true, + "tf.keras.constraints.Constraint.__ne__": true, + "tf.keras.constraints.Constraint.__new__": true, + "tf.keras.constraints.Constraint.get_config": true, + "tf.keras.constraints.MaxNorm": false, + "tf.keras.constraints.MaxNorm.__call__": true, + "tf.keras.constraints.MaxNorm.__eq__": true, + "tf.keras.constraints.MaxNorm.__ge__": true, + "tf.keras.constraints.MaxNorm.__gt__": true, + "tf.keras.constraints.MaxNorm.__init__": true, + "tf.keras.constraints.MaxNorm.__le__": true, + "tf.keras.constraints.MaxNorm.__lt__": true, + "tf.keras.constraints.MaxNorm.__ne__": true, + "tf.keras.constraints.MaxNorm.__new__": true, + "tf.keras.constraints.MaxNorm.get_config": true, + "tf.keras.constraints.MinMaxNorm": false, + "tf.keras.constraints.MinMaxNorm.__call__": true, + "tf.keras.constraints.MinMaxNorm.__eq__": true, + "tf.keras.constraints.MinMaxNorm.__ge__": true, + "tf.keras.constraints.MinMaxNorm.__gt__": true, + "tf.keras.constraints.MinMaxNorm.__init__": true, + "tf.keras.constraints.MinMaxNorm.__le__": true, + "tf.keras.constraints.MinMaxNorm.__lt__": true, + "tf.keras.constraints.MinMaxNorm.__ne__": true, + "tf.keras.constraints.MinMaxNorm.__new__": true, + "tf.keras.constraints.MinMaxNorm.get_config": true, + "tf.keras.constraints.NonNeg": false, + "tf.keras.constraints.NonNeg.__call__": true, + "tf.keras.constraints.NonNeg.__eq__": true, + "tf.keras.constraints.NonNeg.__ge__": true, + "tf.keras.constraints.NonNeg.__gt__": true, + "tf.keras.constraints.NonNeg.__init__": true, + "tf.keras.constraints.NonNeg.__le__": true, + "tf.keras.constraints.NonNeg.__lt__": true, + "tf.keras.constraints.NonNeg.__ne__": true, + "tf.keras.constraints.NonNeg.__new__": true, + "tf.keras.constraints.NonNeg.get_config": true, + "tf.keras.constraints.RadialConstraint": false, + "tf.keras.constraints.RadialConstraint.__call__": true, + "tf.keras.constraints.RadialConstraint.__eq__": true, + "tf.keras.constraints.RadialConstraint.__ge__": true, + "tf.keras.constraints.RadialConstraint.__gt__": true, + "tf.keras.constraints.RadialConstraint.__init__": true, + "tf.keras.constraints.RadialConstraint.__le__": true, + "tf.keras.constraints.RadialConstraint.__lt__": true, + 
"tf.keras.constraints.RadialConstraint.__ne__": true, + "tf.keras.constraints.RadialConstraint.__new__": true, + "tf.keras.constraints.RadialConstraint.get_config": true, + "tf.keras.constraints.UnitNorm": false, + "tf.keras.constraints.UnitNorm.__call__": true, + "tf.keras.constraints.UnitNorm.__eq__": true, + "tf.keras.constraints.UnitNorm.__ge__": true, + "tf.keras.constraints.UnitNorm.__gt__": true, + "tf.keras.constraints.UnitNorm.__init__": true, + "tf.keras.constraints.UnitNorm.__le__": true, + "tf.keras.constraints.UnitNorm.__lt__": true, + "tf.keras.constraints.UnitNorm.__ne__": true, + "tf.keras.constraints.UnitNorm.__new__": true, + "tf.keras.constraints.UnitNorm.get_config": true, + "tf.keras.constraints.deserialize": false, + "tf.keras.constraints.get": false, + "tf.keras.constraints.max_norm": false, + "tf.keras.constraints.max_norm.__call__": true, + "tf.keras.constraints.max_norm.__eq__": true, + "tf.keras.constraints.max_norm.__ge__": true, + "tf.keras.constraints.max_norm.__gt__": true, + "tf.keras.constraints.max_norm.__init__": true, + "tf.keras.constraints.max_norm.__le__": true, + "tf.keras.constraints.max_norm.__lt__": true, + "tf.keras.constraints.max_norm.__ne__": true, + "tf.keras.constraints.max_norm.__new__": true, + "tf.keras.constraints.max_norm.get_config": true, + "tf.keras.constraints.min_max_norm": false, + "tf.keras.constraints.min_max_norm.__call__": true, + "tf.keras.constraints.min_max_norm.__eq__": true, + "tf.keras.constraints.min_max_norm.__ge__": true, + "tf.keras.constraints.min_max_norm.__gt__": true, + "tf.keras.constraints.min_max_norm.__init__": true, + "tf.keras.constraints.min_max_norm.__le__": true, + "tf.keras.constraints.min_max_norm.__lt__": true, + "tf.keras.constraints.min_max_norm.__ne__": true, + "tf.keras.constraints.min_max_norm.__new__": true, + "tf.keras.constraints.min_max_norm.get_config": true, + "tf.keras.constraints.non_neg": false, + "tf.keras.constraints.non_neg.__call__": true, + "tf.keras.constraints.non_neg.__eq__": true, + "tf.keras.constraints.non_neg.__ge__": true, + "tf.keras.constraints.non_neg.__gt__": true, + "tf.keras.constraints.non_neg.__init__": true, + "tf.keras.constraints.non_neg.__le__": true, + "tf.keras.constraints.non_neg.__lt__": true, + "tf.keras.constraints.non_neg.__ne__": true, + "tf.keras.constraints.non_neg.__new__": true, + "tf.keras.constraints.non_neg.get_config": true, + "tf.keras.constraints.radial_constraint": false, + "tf.keras.constraints.radial_constraint.__call__": true, + "tf.keras.constraints.radial_constraint.__eq__": true, + "tf.keras.constraints.radial_constraint.__ge__": true, + "tf.keras.constraints.radial_constraint.__gt__": true, + "tf.keras.constraints.radial_constraint.__init__": true, + "tf.keras.constraints.radial_constraint.__le__": true, + "tf.keras.constraints.radial_constraint.__lt__": true, + "tf.keras.constraints.radial_constraint.__ne__": true, + "tf.keras.constraints.radial_constraint.__new__": true, + "tf.keras.constraints.radial_constraint.get_config": true, + "tf.keras.constraints.serialize": false, + "tf.keras.constraints.unit_norm": false, + "tf.keras.constraints.unit_norm.__call__": true, + "tf.keras.constraints.unit_norm.__eq__": true, + "tf.keras.constraints.unit_norm.__ge__": true, + "tf.keras.constraints.unit_norm.__gt__": true, + "tf.keras.constraints.unit_norm.__init__": true, + "tf.keras.constraints.unit_norm.__le__": true, + "tf.keras.constraints.unit_norm.__lt__": true, + "tf.keras.constraints.unit_norm.__ne__": true, + 
"tf.keras.constraints.unit_norm.__new__": true, + "tf.keras.constraints.unit_norm.get_config": true, + "tf.keras.datasets": false, + "tf.keras.datasets.boston_housing": false, + "tf.keras.datasets.boston_housing.load_data": false, + "tf.keras.datasets.cifar10": false, + "tf.keras.datasets.cifar10.load_data": false, + "tf.keras.datasets.cifar100": false, + "tf.keras.datasets.cifar100.load_data": false, + "tf.keras.datasets.fashion_mnist": false, + "tf.keras.datasets.fashion_mnist.load_data": false, + "tf.keras.datasets.imdb": false, + "tf.keras.datasets.imdb.get_word_index": false, + "tf.keras.datasets.imdb.load_data": false, + "tf.keras.datasets.mnist": false, + "tf.keras.datasets.mnist.load_data": false, + "tf.keras.datasets.reuters": false, + "tf.keras.datasets.reuters.get_word_index": false, + "tf.keras.datasets.reuters.load_data": false, + "tf.keras.estimator": false, + "tf.keras.estimator.model_to_estimator": false, + "tf.keras.experimental": false, + "tf.keras.experimental.CosineDecay": false, + "tf.keras.experimental.CosineDecay.__call__": true, + "tf.keras.experimental.CosineDecay.__eq__": true, + "tf.keras.experimental.CosineDecay.__ge__": true, + "tf.keras.experimental.CosineDecay.__gt__": true, + "tf.keras.experimental.CosineDecay.__init__": true, + "tf.keras.experimental.CosineDecay.__le__": true, + "tf.keras.experimental.CosineDecay.__lt__": true, + "tf.keras.experimental.CosineDecay.__ne__": true, + "tf.keras.experimental.CosineDecay.__new__": true, + "tf.keras.experimental.CosineDecay.from_config": true, + "tf.keras.experimental.CosineDecay.get_config": true, + "tf.keras.experimental.CosineDecayRestarts": false, + "tf.keras.experimental.CosineDecayRestarts.__call__": true, + "tf.keras.experimental.CosineDecayRestarts.__eq__": true, + "tf.keras.experimental.CosineDecayRestarts.__ge__": true, + "tf.keras.experimental.CosineDecayRestarts.__gt__": true, + "tf.keras.experimental.CosineDecayRestarts.__init__": true, + "tf.keras.experimental.CosineDecayRestarts.__le__": true, + "tf.keras.experimental.CosineDecayRestarts.__lt__": true, + "tf.keras.experimental.CosineDecayRestarts.__ne__": true, + "tf.keras.experimental.CosineDecayRestarts.__new__": true, + "tf.keras.experimental.CosineDecayRestarts.from_config": true, + "tf.keras.experimental.CosineDecayRestarts.get_config": true, + "tf.keras.experimental.LinearCosineDecay": false, + "tf.keras.experimental.LinearCosineDecay.__call__": true, + "tf.keras.experimental.LinearCosineDecay.__eq__": true, + "tf.keras.experimental.LinearCosineDecay.__ge__": true, + "tf.keras.experimental.LinearCosineDecay.__gt__": true, + "tf.keras.experimental.LinearCosineDecay.__init__": true, + "tf.keras.experimental.LinearCosineDecay.__le__": true, + "tf.keras.experimental.LinearCosineDecay.__lt__": true, + "tf.keras.experimental.LinearCosineDecay.__ne__": true, + "tf.keras.experimental.LinearCosineDecay.__new__": true, + "tf.keras.experimental.LinearCosineDecay.from_config": true, + "tf.keras.experimental.LinearCosineDecay.get_config": true, + "tf.keras.experimental.LinearModel": false, + "tf.keras.experimental.LinearModel.__call__": true, + "tf.keras.experimental.LinearModel.__eq__": true, + "tf.keras.experimental.LinearModel.__ge__": true, + "tf.keras.experimental.LinearModel.__gt__": true, + "tf.keras.experimental.LinearModel.__init__": true, + "tf.keras.experimental.LinearModel.__le__": true, + "tf.keras.experimental.LinearModel.__lt__": true, + "tf.keras.experimental.LinearModel.__ne__": true, + "tf.keras.experimental.LinearModel.__new__": true, + 
"tf.keras.experimental.LinearModel.activity_regularizer": true, + "tf.keras.experimental.LinearModel.add_loss": true, + "tf.keras.experimental.LinearModel.add_metric": true, + "tf.keras.experimental.LinearModel.add_weight": true, + "tf.keras.experimental.LinearModel.build": true, + "tf.keras.experimental.LinearModel.call": true, + "tf.keras.experimental.LinearModel.compile": true, + "tf.keras.experimental.LinearModel.compute_mask": true, + "tf.keras.experimental.LinearModel.compute_output_shape": true, + "tf.keras.experimental.LinearModel.compute_output_signature": true, + "tf.keras.experimental.LinearModel.count_params": true, + "tf.keras.experimental.LinearModel.distribute_strategy": true, + "tf.keras.experimental.LinearModel.dtype": true, + "tf.keras.experimental.LinearModel.dynamic": true, + "tf.keras.experimental.LinearModel.evaluate": true, + "tf.keras.experimental.LinearModel.evaluate_generator": true, + "tf.keras.experimental.LinearModel.fit": true, + "tf.keras.experimental.LinearModel.fit_generator": true, + "tf.keras.experimental.LinearModel.from_config": true, + "tf.keras.experimental.LinearModel.get_config": true, + "tf.keras.experimental.LinearModel.get_layer": true, + "tf.keras.experimental.LinearModel.get_weights": true, + "tf.keras.experimental.LinearModel.input": true, + "tf.keras.experimental.LinearModel.input_spec": true, + "tf.keras.experimental.LinearModel.layers": true, + "tf.keras.experimental.LinearModel.load_weights": true, + "tf.keras.experimental.LinearModel.losses": true, + "tf.keras.experimental.LinearModel.make_predict_function": true, + "tf.keras.experimental.LinearModel.make_test_function": true, + "tf.keras.experimental.LinearModel.make_train_function": true, + "tf.keras.experimental.LinearModel.metrics": true, + "tf.keras.experimental.LinearModel.metrics_names": true, + "tf.keras.experimental.LinearModel.name": true, + "tf.keras.experimental.LinearModel.name_scope": true, + "tf.keras.experimental.LinearModel.non_trainable_weights": true, + "tf.keras.experimental.LinearModel.output": true, + "tf.keras.experimental.LinearModel.predict": true, + "tf.keras.experimental.LinearModel.predict_generator": true, + "tf.keras.experimental.LinearModel.predict_on_batch": true, + "tf.keras.experimental.LinearModel.predict_step": true, + "tf.keras.experimental.LinearModel.reset_metrics": true, + "tf.keras.experimental.LinearModel.reset_states": true, + "tf.keras.experimental.LinearModel.run_eagerly": true, + "tf.keras.experimental.LinearModel.save": true, + "tf.keras.experimental.LinearModel.save_weights": true, + "tf.keras.experimental.LinearModel.set_weights": true, + "tf.keras.experimental.LinearModel.state_updates": true, + "tf.keras.experimental.LinearModel.stateful": true, + "tf.keras.experimental.LinearModel.submodules": true, + "tf.keras.experimental.LinearModel.summary": true, + "tf.keras.experimental.LinearModel.test_on_batch": true, + "tf.keras.experimental.LinearModel.test_step": true, + "tf.keras.experimental.LinearModel.to_json": true, + "tf.keras.experimental.LinearModel.to_yaml": true, + "tf.keras.experimental.LinearModel.train_on_batch": true, + "tf.keras.experimental.LinearModel.train_step": true, + "tf.keras.experimental.LinearModel.trainable": true, + "tf.keras.experimental.LinearModel.trainable_weights": true, + "tf.keras.experimental.LinearModel.weights": true, + "tf.keras.experimental.LinearModel.with_name_scope": true, + "tf.keras.experimental.NoisyLinearCosineDecay": false, + "tf.keras.experimental.NoisyLinearCosineDecay.__call__": true, + 
"tf.keras.experimental.NoisyLinearCosineDecay.__eq__": true, + "tf.keras.experimental.NoisyLinearCosineDecay.__ge__": true, + "tf.keras.experimental.NoisyLinearCosineDecay.__gt__": true, + "tf.keras.experimental.NoisyLinearCosineDecay.__init__": true, + "tf.keras.experimental.NoisyLinearCosineDecay.__le__": true, + "tf.keras.experimental.NoisyLinearCosineDecay.__lt__": true, + "tf.keras.experimental.NoisyLinearCosineDecay.__ne__": true, + "tf.keras.experimental.NoisyLinearCosineDecay.__new__": true, + "tf.keras.experimental.NoisyLinearCosineDecay.from_config": true, + "tf.keras.experimental.NoisyLinearCosineDecay.get_config": true, + "tf.keras.experimental.PeepholeLSTMCell": false, + "tf.keras.experimental.PeepholeLSTMCell.__call__": true, + "tf.keras.experimental.PeepholeLSTMCell.__eq__": true, + "tf.keras.experimental.PeepholeLSTMCell.__ge__": true, + "tf.keras.experimental.PeepholeLSTMCell.__gt__": true, + "tf.keras.experimental.PeepholeLSTMCell.__init__": true, + "tf.keras.experimental.PeepholeLSTMCell.__le__": true, + "tf.keras.experimental.PeepholeLSTMCell.__lt__": true, + "tf.keras.experimental.PeepholeLSTMCell.__ne__": true, + "tf.keras.experimental.PeepholeLSTMCell.__new__": true, + "tf.keras.experimental.PeepholeLSTMCell.activity_regularizer": true, + "tf.keras.experimental.PeepholeLSTMCell.add_loss": true, + "tf.keras.experimental.PeepholeLSTMCell.add_metric": true, + "tf.keras.experimental.PeepholeLSTMCell.add_weight": true, + "tf.keras.experimental.PeepholeLSTMCell.build": true, + "tf.keras.experimental.PeepholeLSTMCell.call": true, + "tf.keras.experimental.PeepholeLSTMCell.compute_mask": true, + "tf.keras.experimental.PeepholeLSTMCell.compute_output_shape": true, + "tf.keras.experimental.PeepholeLSTMCell.compute_output_signature": true, + "tf.keras.experimental.PeepholeLSTMCell.count_params": true, + "tf.keras.experimental.PeepholeLSTMCell.dtype": true, + "tf.keras.experimental.PeepholeLSTMCell.dynamic": true, + "tf.keras.experimental.PeepholeLSTMCell.from_config": true, + "tf.keras.experimental.PeepholeLSTMCell.get_config": true, + "tf.keras.experimental.PeepholeLSTMCell.get_dropout_mask_for_cell": true, + "tf.keras.experimental.PeepholeLSTMCell.get_initial_state": true, + "tf.keras.experimental.PeepholeLSTMCell.get_recurrent_dropout_mask_for_cell": true, + "tf.keras.experimental.PeepholeLSTMCell.get_weights": true, + "tf.keras.experimental.PeepholeLSTMCell.input": true, + "tf.keras.experimental.PeepholeLSTMCell.input_spec": true, + "tf.keras.experimental.PeepholeLSTMCell.losses": true, + "tf.keras.experimental.PeepholeLSTMCell.metrics": true, + "tf.keras.experimental.PeepholeLSTMCell.name": true, + "tf.keras.experimental.PeepholeLSTMCell.name_scope": true, + "tf.keras.experimental.PeepholeLSTMCell.non_trainable_weights": true, + "tf.keras.experimental.PeepholeLSTMCell.output": true, + "tf.keras.experimental.PeepholeLSTMCell.reset_dropout_mask": true, + "tf.keras.experimental.PeepholeLSTMCell.reset_recurrent_dropout_mask": true, + "tf.keras.experimental.PeepholeLSTMCell.set_weights": true, + "tf.keras.experimental.PeepholeLSTMCell.submodules": true, + "tf.keras.experimental.PeepholeLSTMCell.trainable": true, + "tf.keras.experimental.PeepholeLSTMCell.trainable_weights": true, + "tf.keras.experimental.PeepholeLSTMCell.weights": true, + "tf.keras.experimental.PeepholeLSTMCell.with_name_scope": true, + "tf.keras.experimental.SequenceFeatures": false, + "tf.keras.experimental.SequenceFeatures.__call__": true, + "tf.keras.experimental.SequenceFeatures.__eq__": true, + 
"tf.keras.experimental.SequenceFeatures.__ge__": true, + "tf.keras.experimental.SequenceFeatures.__gt__": true, + "tf.keras.experimental.SequenceFeatures.__init__": true, + "tf.keras.experimental.SequenceFeatures.__le__": true, + "tf.keras.experimental.SequenceFeatures.__lt__": true, + "tf.keras.experimental.SequenceFeatures.__ne__": true, + "tf.keras.experimental.SequenceFeatures.__new__": true, + "tf.keras.experimental.SequenceFeatures.activity_regularizer": true, + "tf.keras.experimental.SequenceFeatures.add_loss": true, + "tf.keras.experimental.SequenceFeatures.add_metric": true, + "tf.keras.experimental.SequenceFeatures.add_weight": true, + "tf.keras.experimental.SequenceFeatures.build": true, + "tf.keras.experimental.SequenceFeatures.call": true, + "tf.keras.experimental.SequenceFeatures.compute_mask": true, + "tf.keras.experimental.SequenceFeatures.compute_output_shape": true, + "tf.keras.experimental.SequenceFeatures.compute_output_signature": true, + "tf.keras.experimental.SequenceFeatures.count_params": true, + "tf.keras.experimental.SequenceFeatures.dtype": true, + "tf.keras.experimental.SequenceFeatures.dynamic": true, + "tf.keras.experimental.SequenceFeatures.from_config": true, + "tf.keras.experimental.SequenceFeatures.get_config": true, + "tf.keras.experimental.SequenceFeatures.get_weights": true, + "tf.keras.experimental.SequenceFeatures.input": true, + "tf.keras.experimental.SequenceFeatures.input_spec": true, + "tf.keras.experimental.SequenceFeatures.losses": true, + "tf.keras.experimental.SequenceFeatures.metrics": true, + "tf.keras.experimental.SequenceFeatures.name": true, + "tf.keras.experimental.SequenceFeatures.name_scope": true, + "tf.keras.experimental.SequenceFeatures.non_trainable_weights": true, + "tf.keras.experimental.SequenceFeatures.output": true, + "tf.keras.experimental.SequenceFeatures.set_weights": true, + "tf.keras.experimental.SequenceFeatures.submodules": true, + "tf.keras.experimental.SequenceFeatures.trainable": true, + "tf.keras.experimental.SequenceFeatures.trainable_weights": true, + "tf.keras.experimental.SequenceFeatures.weights": true, + "tf.keras.experimental.SequenceFeatures.with_name_scope": true, + "tf.keras.experimental.WideDeepModel": false, + "tf.keras.experimental.WideDeepModel.__call__": true, + "tf.keras.experimental.WideDeepModel.__eq__": true, + "tf.keras.experimental.WideDeepModel.__ge__": true, + "tf.keras.experimental.WideDeepModel.__gt__": true, + "tf.keras.experimental.WideDeepModel.__init__": true, + "tf.keras.experimental.WideDeepModel.__le__": true, + "tf.keras.experimental.WideDeepModel.__lt__": true, + "tf.keras.experimental.WideDeepModel.__ne__": true, + "tf.keras.experimental.WideDeepModel.__new__": true, + "tf.keras.experimental.WideDeepModel.activity_regularizer": true, + "tf.keras.experimental.WideDeepModel.add_loss": true, + "tf.keras.experimental.WideDeepModel.add_metric": true, + "tf.keras.experimental.WideDeepModel.add_weight": true, + "tf.keras.experimental.WideDeepModel.build": true, + "tf.keras.experimental.WideDeepModel.call": true, + "tf.keras.experimental.WideDeepModel.compile": true, + "tf.keras.experimental.WideDeepModel.compute_mask": true, + "tf.keras.experimental.WideDeepModel.compute_output_shape": true, + "tf.keras.experimental.WideDeepModel.compute_output_signature": true, + "tf.keras.experimental.WideDeepModel.count_params": true, + "tf.keras.experimental.WideDeepModel.distribute_strategy": true, + "tf.keras.experimental.WideDeepModel.dtype": true, + "tf.keras.experimental.WideDeepModel.dynamic": 
true, + "tf.keras.experimental.WideDeepModel.evaluate": true, + "tf.keras.experimental.WideDeepModel.evaluate_generator": true, + "tf.keras.experimental.WideDeepModel.fit": true, + "tf.keras.experimental.WideDeepModel.fit_generator": true, + "tf.keras.experimental.WideDeepModel.from_config": true, + "tf.keras.experimental.WideDeepModel.get_config": true, + "tf.keras.experimental.WideDeepModel.get_layer": true, + "tf.keras.experimental.WideDeepModel.get_weights": true, + "tf.keras.experimental.WideDeepModel.input": true, + "tf.keras.experimental.WideDeepModel.input_spec": true, + "tf.keras.experimental.WideDeepModel.layers": true, + "tf.keras.experimental.WideDeepModel.load_weights": true, + "tf.keras.experimental.WideDeepModel.losses": true, + "tf.keras.experimental.WideDeepModel.make_predict_function": true, + "tf.keras.experimental.WideDeepModel.make_test_function": true, + "tf.keras.experimental.WideDeepModel.make_train_function": true, + "tf.keras.experimental.WideDeepModel.metrics": true, + "tf.keras.experimental.WideDeepModel.metrics_names": true, + "tf.keras.experimental.WideDeepModel.name": true, + "tf.keras.experimental.WideDeepModel.name_scope": true, + "tf.keras.experimental.WideDeepModel.non_trainable_weights": true, + "tf.keras.experimental.WideDeepModel.output": true, + "tf.keras.experimental.WideDeepModel.predict": true, + "tf.keras.experimental.WideDeepModel.predict_generator": true, + "tf.keras.experimental.WideDeepModel.predict_on_batch": true, + "tf.keras.experimental.WideDeepModel.predict_step": true, + "tf.keras.experimental.WideDeepModel.reset_metrics": true, + "tf.keras.experimental.WideDeepModel.reset_states": true, + "tf.keras.experimental.WideDeepModel.run_eagerly": true, + "tf.keras.experimental.WideDeepModel.save": true, + "tf.keras.experimental.WideDeepModel.save_weights": true, + "tf.keras.experimental.WideDeepModel.set_weights": true, + "tf.keras.experimental.WideDeepModel.state_updates": true, + "tf.keras.experimental.WideDeepModel.stateful": true, + "tf.keras.experimental.WideDeepModel.submodules": true, + "tf.keras.experimental.WideDeepModel.summary": true, + "tf.keras.experimental.WideDeepModel.test_on_batch": true, + "tf.keras.experimental.WideDeepModel.test_step": true, + "tf.keras.experimental.WideDeepModel.to_json": true, + "tf.keras.experimental.WideDeepModel.to_yaml": true, + "tf.keras.experimental.WideDeepModel.train_on_batch": true, + "tf.keras.experimental.WideDeepModel.train_step": true, + "tf.keras.experimental.WideDeepModel.trainable": true, + "tf.keras.experimental.WideDeepModel.trainable_weights": true, + "tf.keras.experimental.WideDeepModel.weights": true, + "tf.keras.experimental.WideDeepModel.with_name_scope": true, + "tf.keras.experimental.terminate_keras_multiprocessing_pools": false, + "tf.keras.initializers": false, + "tf.keras.initializers.Constant": false, + "tf.keras.initializers.Constant.__call__": true, + "tf.keras.initializers.Constant.__eq__": true, + "tf.keras.initializers.Constant.__ge__": true, + "tf.keras.initializers.Constant.__gt__": true, + "tf.keras.initializers.Constant.__init__": true, + "tf.keras.initializers.Constant.__le__": true, + "tf.keras.initializers.Constant.__lt__": true, + "tf.keras.initializers.Constant.__ne__": true, + "tf.keras.initializers.Constant.__new__": true, + "tf.keras.initializers.Constant.from_config": true, + "tf.keras.initializers.Constant.get_config": true, + "tf.keras.initializers.GlorotNormal": false, + "tf.keras.initializers.GlorotNormal.__call__": true, + 
"tf.keras.initializers.GlorotNormal.__eq__": true, + "tf.keras.initializers.GlorotNormal.__ge__": true, + "tf.keras.initializers.GlorotNormal.__gt__": true, + "tf.keras.initializers.GlorotNormal.__init__": true, + "tf.keras.initializers.GlorotNormal.__le__": true, + "tf.keras.initializers.GlorotNormal.__lt__": true, + "tf.keras.initializers.GlorotNormal.__ne__": true, + "tf.keras.initializers.GlorotNormal.__new__": true, + "tf.keras.initializers.GlorotNormal.from_config": true, + "tf.keras.initializers.GlorotNormal.get_config": true, + "tf.keras.initializers.GlorotUniform": false, + "tf.keras.initializers.GlorotUniform.__call__": true, + "tf.keras.initializers.GlorotUniform.__eq__": true, + "tf.keras.initializers.GlorotUniform.__ge__": true, + "tf.keras.initializers.GlorotUniform.__gt__": true, + "tf.keras.initializers.GlorotUniform.__init__": true, + "tf.keras.initializers.GlorotUniform.__le__": true, + "tf.keras.initializers.GlorotUniform.__lt__": true, + "tf.keras.initializers.GlorotUniform.__ne__": true, + "tf.keras.initializers.GlorotUniform.__new__": true, + "tf.keras.initializers.GlorotUniform.from_config": true, + "tf.keras.initializers.GlorotUniform.get_config": true, + "tf.keras.initializers.Identity": false, + "tf.keras.initializers.Identity.__call__": true, + "tf.keras.initializers.Identity.__eq__": true, + "tf.keras.initializers.Identity.__ge__": true, + "tf.keras.initializers.Identity.__gt__": true, + "tf.keras.initializers.Identity.__init__": true, + "tf.keras.initializers.Identity.__le__": true, + "tf.keras.initializers.Identity.__lt__": true, + "tf.keras.initializers.Identity.__ne__": true, + "tf.keras.initializers.Identity.__new__": true, + "tf.keras.initializers.Identity.from_config": true, + "tf.keras.initializers.Identity.get_config": true, + "tf.keras.initializers.Initializer": false, + "tf.keras.initializers.Initializer.__call__": true, + "tf.keras.initializers.Initializer.__eq__": true, + "tf.keras.initializers.Initializer.__ge__": true, + "tf.keras.initializers.Initializer.__gt__": true, + "tf.keras.initializers.Initializer.__init__": true, + "tf.keras.initializers.Initializer.__le__": true, + "tf.keras.initializers.Initializer.__lt__": true, + "tf.keras.initializers.Initializer.__ne__": true, + "tf.keras.initializers.Initializer.__new__": true, + "tf.keras.initializers.Initializer.from_config": true, + "tf.keras.initializers.Initializer.get_config": true, + "tf.keras.initializers.Ones": false, + "tf.keras.initializers.Ones.__call__": true, + "tf.keras.initializers.Ones.__eq__": true, + "tf.keras.initializers.Ones.__ge__": true, + "tf.keras.initializers.Ones.__gt__": true, + "tf.keras.initializers.Ones.__init__": true, + "tf.keras.initializers.Ones.__le__": true, + "tf.keras.initializers.Ones.__lt__": true, + "tf.keras.initializers.Ones.__ne__": true, + "tf.keras.initializers.Ones.__new__": true, + "tf.keras.initializers.Ones.from_config": true, + "tf.keras.initializers.Ones.get_config": true, + "tf.keras.initializers.Orthogonal": false, + "tf.keras.initializers.Orthogonal.__call__": true, + "tf.keras.initializers.Orthogonal.__eq__": true, + "tf.keras.initializers.Orthogonal.__ge__": true, + "tf.keras.initializers.Orthogonal.__gt__": true, + "tf.keras.initializers.Orthogonal.__init__": true, + "tf.keras.initializers.Orthogonal.__le__": true, + "tf.keras.initializers.Orthogonal.__lt__": true, + "tf.keras.initializers.Orthogonal.__ne__": true, + "tf.keras.initializers.Orthogonal.__new__": true, + "tf.keras.initializers.Orthogonal.from_config": true, + 
"tf.keras.initializers.Orthogonal.get_config": true, + "tf.keras.initializers.RandomNormal": false, + "tf.keras.initializers.RandomNormal.__call__": true, + "tf.keras.initializers.RandomNormal.__eq__": true, + "tf.keras.initializers.RandomNormal.__ge__": true, + "tf.keras.initializers.RandomNormal.__gt__": true, + "tf.keras.initializers.RandomNormal.__init__": true, + "tf.keras.initializers.RandomNormal.__le__": true, + "tf.keras.initializers.RandomNormal.__lt__": true, + "tf.keras.initializers.RandomNormal.__ne__": true, + "tf.keras.initializers.RandomNormal.__new__": true, + "tf.keras.initializers.RandomNormal.from_config": true, + "tf.keras.initializers.RandomNormal.get_config": true, + "tf.keras.initializers.RandomUniform": false, + "tf.keras.initializers.RandomUniform.__call__": true, + "tf.keras.initializers.RandomUniform.__eq__": true, + "tf.keras.initializers.RandomUniform.__ge__": true, + "tf.keras.initializers.RandomUniform.__gt__": true, + "tf.keras.initializers.RandomUniform.__init__": true, + "tf.keras.initializers.RandomUniform.__le__": true, + "tf.keras.initializers.RandomUniform.__lt__": true, + "tf.keras.initializers.RandomUniform.__ne__": true, + "tf.keras.initializers.RandomUniform.__new__": true, + "tf.keras.initializers.RandomUniform.from_config": true, + "tf.keras.initializers.RandomUniform.get_config": true, + "tf.keras.initializers.TruncatedNormal": false, + "tf.keras.initializers.TruncatedNormal.__call__": true, + "tf.keras.initializers.TruncatedNormal.__eq__": true, + "tf.keras.initializers.TruncatedNormal.__ge__": true, + "tf.keras.initializers.TruncatedNormal.__gt__": true, + "tf.keras.initializers.TruncatedNormal.__init__": true, + "tf.keras.initializers.TruncatedNormal.__le__": true, + "tf.keras.initializers.TruncatedNormal.__lt__": true, + "tf.keras.initializers.TruncatedNormal.__ne__": true, + "tf.keras.initializers.TruncatedNormal.__new__": true, + "tf.keras.initializers.TruncatedNormal.from_config": true, + "tf.keras.initializers.TruncatedNormal.get_config": true, + "tf.keras.initializers.VarianceScaling": false, + "tf.keras.initializers.VarianceScaling.__call__": true, + "tf.keras.initializers.VarianceScaling.__eq__": true, + "tf.keras.initializers.VarianceScaling.__ge__": true, + "tf.keras.initializers.VarianceScaling.__gt__": true, + "tf.keras.initializers.VarianceScaling.__init__": true, + "tf.keras.initializers.VarianceScaling.__le__": true, + "tf.keras.initializers.VarianceScaling.__lt__": true, + "tf.keras.initializers.VarianceScaling.__ne__": true, + "tf.keras.initializers.VarianceScaling.__new__": true, + "tf.keras.initializers.VarianceScaling.from_config": true, + "tf.keras.initializers.VarianceScaling.get_config": true, + "tf.keras.initializers.Zeros": false, + "tf.keras.initializers.Zeros.__call__": true, + "tf.keras.initializers.Zeros.__eq__": true, + "tf.keras.initializers.Zeros.__ge__": true, + "tf.keras.initializers.Zeros.__gt__": true, + "tf.keras.initializers.Zeros.__init__": true, + "tf.keras.initializers.Zeros.__le__": true, + "tf.keras.initializers.Zeros.__lt__": true, + "tf.keras.initializers.Zeros.__ne__": true, + "tf.keras.initializers.Zeros.__new__": true, + "tf.keras.initializers.Zeros.from_config": true, + "tf.keras.initializers.Zeros.get_config": true, + "tf.keras.initializers.constant": false, + "tf.keras.initializers.constant.__call__": true, + "tf.keras.initializers.constant.__eq__": true, + "tf.keras.initializers.constant.__ge__": true, + "tf.keras.initializers.constant.__gt__": true, + 
"tf.keras.initializers.constant.__init__": true, + "tf.keras.initializers.constant.__le__": true, + "tf.keras.initializers.constant.__lt__": true, + "tf.keras.initializers.constant.__ne__": true, + "tf.keras.initializers.constant.__new__": true, + "tf.keras.initializers.constant.from_config": true, + "tf.keras.initializers.constant.get_config": true, + "tf.keras.initializers.deserialize": false, + "tf.keras.initializers.get": false, + "tf.keras.initializers.glorot_normal": false, + "tf.keras.initializers.glorot_normal.__call__": true, + "tf.keras.initializers.glorot_normal.__eq__": true, + "tf.keras.initializers.glorot_normal.__ge__": true, + "tf.keras.initializers.glorot_normal.__gt__": true, + "tf.keras.initializers.glorot_normal.__init__": true, + "tf.keras.initializers.glorot_normal.__le__": true, + "tf.keras.initializers.glorot_normal.__lt__": true, + "tf.keras.initializers.glorot_normal.__ne__": true, + "tf.keras.initializers.glorot_normal.__new__": true, + "tf.keras.initializers.glorot_normal.from_config": true, + "tf.keras.initializers.glorot_normal.get_config": true, + "tf.keras.initializers.glorot_uniform": false, + "tf.keras.initializers.glorot_uniform.__call__": true, + "tf.keras.initializers.glorot_uniform.__eq__": true, + "tf.keras.initializers.glorot_uniform.__ge__": true, + "tf.keras.initializers.glorot_uniform.__gt__": true, + "tf.keras.initializers.glorot_uniform.__init__": true, + "tf.keras.initializers.glorot_uniform.__le__": true, + "tf.keras.initializers.glorot_uniform.__lt__": true, + "tf.keras.initializers.glorot_uniform.__ne__": true, + "tf.keras.initializers.glorot_uniform.__new__": true, + "tf.keras.initializers.glorot_uniform.from_config": true, + "tf.keras.initializers.glorot_uniform.get_config": true, + "tf.keras.initializers.he_normal": false, + "tf.keras.initializers.he_uniform": false, + "tf.keras.initializers.identity": false, + "tf.keras.initializers.identity.__call__": true, + "tf.keras.initializers.identity.__eq__": true, + "tf.keras.initializers.identity.__ge__": true, + "tf.keras.initializers.identity.__gt__": true, + "tf.keras.initializers.identity.__init__": true, + "tf.keras.initializers.identity.__le__": true, + "tf.keras.initializers.identity.__lt__": true, + "tf.keras.initializers.identity.__ne__": true, + "tf.keras.initializers.identity.__new__": true, + "tf.keras.initializers.identity.from_config": true, + "tf.keras.initializers.identity.get_config": true, + "tf.keras.initializers.lecun_normal": false, + "tf.keras.initializers.lecun_uniform": false, + "tf.keras.initializers.ones": false, + "tf.keras.initializers.ones.__call__": true, + "tf.keras.initializers.ones.__eq__": true, + "tf.keras.initializers.ones.__ge__": true, + "tf.keras.initializers.ones.__gt__": true, + "tf.keras.initializers.ones.__init__": true, + "tf.keras.initializers.ones.__le__": true, + "tf.keras.initializers.ones.__lt__": true, + "tf.keras.initializers.ones.__ne__": true, + "tf.keras.initializers.ones.__new__": true, + "tf.keras.initializers.ones.from_config": true, + "tf.keras.initializers.ones.get_config": true, + "tf.keras.initializers.orthogonal": false, + "tf.keras.initializers.orthogonal.__call__": true, + "tf.keras.initializers.orthogonal.__eq__": true, + "tf.keras.initializers.orthogonal.__ge__": true, + "tf.keras.initializers.orthogonal.__gt__": true, + "tf.keras.initializers.orthogonal.__init__": true, + "tf.keras.initializers.orthogonal.__le__": true, + "tf.keras.initializers.orthogonal.__lt__": true, + "tf.keras.initializers.orthogonal.__ne__": true, + 
"tf.keras.initializers.orthogonal.__new__": true, + "tf.keras.initializers.orthogonal.from_config": true, + "tf.keras.initializers.orthogonal.get_config": true, + "tf.keras.initializers.serialize": false, + "tf.keras.initializers.zeros": false, + "tf.keras.initializers.zeros.__call__": true, + "tf.keras.initializers.zeros.__eq__": true, + "tf.keras.initializers.zeros.__ge__": true, + "tf.keras.initializers.zeros.__gt__": true, + "tf.keras.initializers.zeros.__init__": true, + "tf.keras.initializers.zeros.__le__": true, + "tf.keras.initializers.zeros.__lt__": true, + "tf.keras.initializers.zeros.__ne__": true, + "tf.keras.initializers.zeros.__new__": true, + "tf.keras.initializers.zeros.from_config": true, + "tf.keras.initializers.zeros.get_config": true, + "tf.keras.layers": false, + "tf.keras.layers.AbstractRNNCell": false, + "tf.keras.layers.AbstractRNNCell.__call__": true, + "tf.keras.layers.AbstractRNNCell.__eq__": true, + "tf.keras.layers.AbstractRNNCell.__ge__": true, + "tf.keras.layers.AbstractRNNCell.__gt__": true, + "tf.keras.layers.AbstractRNNCell.__init__": true, + "tf.keras.layers.AbstractRNNCell.__le__": true, + "tf.keras.layers.AbstractRNNCell.__lt__": true, + "tf.keras.layers.AbstractRNNCell.__ne__": true, + "tf.keras.layers.AbstractRNNCell.__new__": true, + "tf.keras.layers.AbstractRNNCell.activity_regularizer": true, + "tf.keras.layers.AbstractRNNCell.add_loss": true, + "tf.keras.layers.AbstractRNNCell.add_metric": true, + "tf.keras.layers.AbstractRNNCell.add_weight": true, + "tf.keras.layers.AbstractRNNCell.build": true, + "tf.keras.layers.AbstractRNNCell.call": true, + "tf.keras.layers.AbstractRNNCell.compute_mask": true, + "tf.keras.layers.AbstractRNNCell.compute_output_shape": true, + "tf.keras.layers.AbstractRNNCell.compute_output_signature": true, + "tf.keras.layers.AbstractRNNCell.count_params": true, + "tf.keras.layers.AbstractRNNCell.dtype": true, + "tf.keras.layers.AbstractRNNCell.dynamic": true, + "tf.keras.layers.AbstractRNNCell.from_config": true, + "tf.keras.layers.AbstractRNNCell.get_config": true, + "tf.keras.layers.AbstractRNNCell.get_initial_state": true, + "tf.keras.layers.AbstractRNNCell.get_weights": true, + "tf.keras.layers.AbstractRNNCell.input": true, + "tf.keras.layers.AbstractRNNCell.input_spec": true, + "tf.keras.layers.AbstractRNNCell.losses": true, + "tf.keras.layers.AbstractRNNCell.metrics": true, + "tf.keras.layers.AbstractRNNCell.name": true, + "tf.keras.layers.AbstractRNNCell.name_scope": true, + "tf.keras.layers.AbstractRNNCell.non_trainable_weights": true, + "tf.keras.layers.AbstractRNNCell.output": true, + "tf.keras.layers.AbstractRNNCell.output_size": true, + "tf.keras.layers.AbstractRNNCell.set_weights": true, + "tf.keras.layers.AbstractRNNCell.state_size": true, + "tf.keras.layers.AbstractRNNCell.submodules": true, + "tf.keras.layers.AbstractRNNCell.trainable": true, + "tf.keras.layers.AbstractRNNCell.trainable_weights": true, + "tf.keras.layers.AbstractRNNCell.weights": true, + "tf.keras.layers.AbstractRNNCell.with_name_scope": true, + "tf.keras.layers.Activation": false, + "tf.keras.layers.Activation.__call__": true, + "tf.keras.layers.Activation.__eq__": true, + "tf.keras.layers.Activation.__ge__": true, + "tf.keras.layers.Activation.__gt__": true, + "tf.keras.layers.Activation.__init__": true, + "tf.keras.layers.Activation.__le__": true, + "tf.keras.layers.Activation.__lt__": true, + "tf.keras.layers.Activation.__ne__": true, + "tf.keras.layers.Activation.__new__": true, + "tf.keras.layers.Activation.activity_regularizer": true, + 
"tf.keras.layers.Activation.add_loss": true, + "tf.keras.layers.Activation.add_metric": true, + "tf.keras.layers.Activation.add_weight": true, + "tf.keras.layers.Activation.build": true, + "tf.keras.layers.Activation.call": true, + "tf.keras.layers.Activation.compute_mask": true, + "tf.keras.layers.Activation.compute_output_shape": true, + "tf.keras.layers.Activation.compute_output_signature": true, + "tf.keras.layers.Activation.count_params": true, + "tf.keras.layers.Activation.dtype": true, + "tf.keras.layers.Activation.dynamic": true, + "tf.keras.layers.Activation.from_config": true, + "tf.keras.layers.Activation.get_config": true, + "tf.keras.layers.Activation.get_weights": true, + "tf.keras.layers.Activation.input": true, + "tf.keras.layers.Activation.input_spec": true, + "tf.keras.layers.Activation.losses": true, + "tf.keras.layers.Activation.metrics": true, + "tf.keras.layers.Activation.name": true, + "tf.keras.layers.Activation.name_scope": true, + "tf.keras.layers.Activation.non_trainable_weights": true, + "tf.keras.layers.Activation.output": true, + "tf.keras.layers.Activation.set_weights": true, + "tf.keras.layers.Activation.submodules": true, + "tf.keras.layers.Activation.trainable": true, + "tf.keras.layers.Activation.trainable_weights": true, + "tf.keras.layers.Activation.weights": true, + "tf.keras.layers.Activation.with_name_scope": true, + "tf.keras.layers.ActivityRegularization": false, + "tf.keras.layers.ActivityRegularization.__call__": true, + "tf.keras.layers.ActivityRegularization.__eq__": true, + "tf.keras.layers.ActivityRegularization.__ge__": true, + "tf.keras.layers.ActivityRegularization.__gt__": true, + "tf.keras.layers.ActivityRegularization.__init__": true, + "tf.keras.layers.ActivityRegularization.__le__": true, + "tf.keras.layers.ActivityRegularization.__lt__": true, + "tf.keras.layers.ActivityRegularization.__ne__": true, + "tf.keras.layers.ActivityRegularization.__new__": true, + "tf.keras.layers.ActivityRegularization.activity_regularizer": true, + "tf.keras.layers.ActivityRegularization.add_loss": true, + "tf.keras.layers.ActivityRegularization.add_metric": true, + "tf.keras.layers.ActivityRegularization.add_weight": true, + "tf.keras.layers.ActivityRegularization.build": true, + "tf.keras.layers.ActivityRegularization.call": true, + "tf.keras.layers.ActivityRegularization.compute_mask": true, + "tf.keras.layers.ActivityRegularization.compute_output_shape": true, + "tf.keras.layers.ActivityRegularization.compute_output_signature": true, + "tf.keras.layers.ActivityRegularization.count_params": true, + "tf.keras.layers.ActivityRegularization.dtype": true, + "tf.keras.layers.ActivityRegularization.dynamic": true, + "tf.keras.layers.ActivityRegularization.from_config": true, + "tf.keras.layers.ActivityRegularization.get_config": true, + "tf.keras.layers.ActivityRegularization.get_weights": true, + "tf.keras.layers.ActivityRegularization.input": true, + "tf.keras.layers.ActivityRegularization.input_spec": true, + "tf.keras.layers.ActivityRegularization.losses": true, + "tf.keras.layers.ActivityRegularization.metrics": true, + "tf.keras.layers.ActivityRegularization.name": true, + "tf.keras.layers.ActivityRegularization.name_scope": true, + "tf.keras.layers.ActivityRegularization.non_trainable_weights": true, + "tf.keras.layers.ActivityRegularization.output": true, + "tf.keras.layers.ActivityRegularization.set_weights": true, + "tf.keras.layers.ActivityRegularization.submodules": true, + "tf.keras.layers.ActivityRegularization.trainable": true, + 
"tf.keras.layers.ActivityRegularization.trainable_weights": true, + "tf.keras.layers.ActivityRegularization.weights": true, + "tf.keras.layers.ActivityRegularization.with_name_scope": true, + "tf.keras.layers.Add": false, + "tf.keras.layers.Add.__call__": true, + "tf.keras.layers.Add.__eq__": true, + "tf.keras.layers.Add.__ge__": true, + "tf.keras.layers.Add.__gt__": true, + "tf.keras.layers.Add.__init__": true, + "tf.keras.layers.Add.__le__": true, + "tf.keras.layers.Add.__lt__": true, + "tf.keras.layers.Add.__ne__": true, + "tf.keras.layers.Add.__new__": true, + "tf.keras.layers.Add.activity_regularizer": true, + "tf.keras.layers.Add.add_loss": true, + "tf.keras.layers.Add.add_metric": true, + "tf.keras.layers.Add.add_weight": true, + "tf.keras.layers.Add.build": true, + "tf.keras.layers.Add.call": true, + "tf.keras.layers.Add.compute_mask": true, + "tf.keras.layers.Add.compute_output_shape": true, + "tf.keras.layers.Add.compute_output_signature": true, + "tf.keras.layers.Add.count_params": true, + "tf.keras.layers.Add.dtype": true, + "tf.keras.layers.Add.dynamic": true, + "tf.keras.layers.Add.from_config": true, + "tf.keras.layers.Add.get_config": true, + "tf.keras.layers.Add.get_weights": true, + "tf.keras.layers.Add.input": true, + "tf.keras.layers.Add.input_spec": true, + "tf.keras.layers.Add.losses": true, + "tf.keras.layers.Add.metrics": true, + "tf.keras.layers.Add.name": true, + "tf.keras.layers.Add.name_scope": true, + "tf.keras.layers.Add.non_trainable_weights": true, + "tf.keras.layers.Add.output": true, + "tf.keras.layers.Add.set_weights": true, + "tf.keras.layers.Add.submodules": true, + "tf.keras.layers.Add.trainable": true, + "tf.keras.layers.Add.trainable_weights": true, + "tf.keras.layers.Add.weights": true, + "tf.keras.layers.Add.with_name_scope": true, + "tf.keras.layers.AdditiveAttention": false, + "tf.keras.layers.AdditiveAttention.__call__": true, + "tf.keras.layers.AdditiveAttention.__eq__": true, + "tf.keras.layers.AdditiveAttention.__ge__": true, + "tf.keras.layers.AdditiveAttention.__gt__": true, + "tf.keras.layers.AdditiveAttention.__init__": true, + "tf.keras.layers.AdditiveAttention.__le__": true, + "tf.keras.layers.AdditiveAttention.__lt__": true, + "tf.keras.layers.AdditiveAttention.__ne__": true, + "tf.keras.layers.AdditiveAttention.__new__": true, + "tf.keras.layers.AdditiveAttention.activity_regularizer": true, + "tf.keras.layers.AdditiveAttention.add_loss": true, + "tf.keras.layers.AdditiveAttention.add_metric": true, + "tf.keras.layers.AdditiveAttention.add_weight": true, + "tf.keras.layers.AdditiveAttention.build": true, + "tf.keras.layers.AdditiveAttention.call": true, + "tf.keras.layers.AdditiveAttention.compute_mask": true, + "tf.keras.layers.AdditiveAttention.compute_output_shape": true, + "tf.keras.layers.AdditiveAttention.compute_output_signature": true, + "tf.keras.layers.AdditiveAttention.count_params": true, + "tf.keras.layers.AdditiveAttention.dtype": true, + "tf.keras.layers.AdditiveAttention.dynamic": true, + "tf.keras.layers.AdditiveAttention.from_config": true, + "tf.keras.layers.AdditiveAttention.get_config": true, + "tf.keras.layers.AdditiveAttention.get_weights": true, + "tf.keras.layers.AdditiveAttention.input": true, + "tf.keras.layers.AdditiveAttention.input_spec": true, + "tf.keras.layers.AdditiveAttention.losses": true, + "tf.keras.layers.AdditiveAttention.metrics": true, + "tf.keras.layers.AdditiveAttention.name": true, + "tf.keras.layers.AdditiveAttention.name_scope": true, + 
"tf.keras.layers.AdditiveAttention.non_trainable_weights": true, + "tf.keras.layers.AdditiveAttention.output": true, + "tf.keras.layers.AdditiveAttention.set_weights": true, + "tf.keras.layers.AdditiveAttention.submodules": true, + "tf.keras.layers.AdditiveAttention.trainable": true, + "tf.keras.layers.AdditiveAttention.trainable_weights": true, + "tf.keras.layers.AdditiveAttention.weights": true, + "tf.keras.layers.AdditiveAttention.with_name_scope": true, + "tf.keras.layers.AlphaDropout": false, + "tf.keras.layers.AlphaDropout.__call__": true, + "tf.keras.layers.AlphaDropout.__eq__": true, + "tf.keras.layers.AlphaDropout.__ge__": true, + "tf.keras.layers.AlphaDropout.__gt__": true, + "tf.keras.layers.AlphaDropout.__init__": true, + "tf.keras.layers.AlphaDropout.__le__": true, + "tf.keras.layers.AlphaDropout.__lt__": true, + "tf.keras.layers.AlphaDropout.__ne__": true, + "tf.keras.layers.AlphaDropout.__new__": true, + "tf.keras.layers.AlphaDropout.activity_regularizer": true, + "tf.keras.layers.AlphaDropout.add_loss": true, + "tf.keras.layers.AlphaDropout.add_metric": true, + "tf.keras.layers.AlphaDropout.add_weight": true, + "tf.keras.layers.AlphaDropout.build": true, + "tf.keras.layers.AlphaDropout.call": true, + "tf.keras.layers.AlphaDropout.compute_mask": true, + "tf.keras.layers.AlphaDropout.compute_output_shape": true, + "tf.keras.layers.AlphaDropout.compute_output_signature": true, + "tf.keras.layers.AlphaDropout.count_params": true, + "tf.keras.layers.AlphaDropout.dtype": true, + "tf.keras.layers.AlphaDropout.dynamic": true, + "tf.keras.layers.AlphaDropout.from_config": true, + "tf.keras.layers.AlphaDropout.get_config": true, + "tf.keras.layers.AlphaDropout.get_weights": true, + "tf.keras.layers.AlphaDropout.input": true, + "tf.keras.layers.AlphaDropout.input_spec": true, + "tf.keras.layers.AlphaDropout.losses": true, + "tf.keras.layers.AlphaDropout.metrics": true, + "tf.keras.layers.AlphaDropout.name": true, + "tf.keras.layers.AlphaDropout.name_scope": true, + "tf.keras.layers.AlphaDropout.non_trainable_weights": true, + "tf.keras.layers.AlphaDropout.output": true, + "tf.keras.layers.AlphaDropout.set_weights": true, + "tf.keras.layers.AlphaDropout.submodules": true, + "tf.keras.layers.AlphaDropout.trainable": true, + "tf.keras.layers.AlphaDropout.trainable_weights": true, + "tf.keras.layers.AlphaDropout.weights": true, + "tf.keras.layers.AlphaDropout.with_name_scope": true, + "tf.keras.layers.Attention": false, + "tf.keras.layers.Attention.__call__": true, + "tf.keras.layers.Attention.__eq__": true, + "tf.keras.layers.Attention.__ge__": true, + "tf.keras.layers.Attention.__gt__": true, + "tf.keras.layers.Attention.__init__": true, + "tf.keras.layers.Attention.__le__": true, + "tf.keras.layers.Attention.__lt__": true, + "tf.keras.layers.Attention.__ne__": true, + "tf.keras.layers.Attention.__new__": true, + "tf.keras.layers.Attention.activity_regularizer": true, + "tf.keras.layers.Attention.add_loss": true, + "tf.keras.layers.Attention.add_metric": true, + "tf.keras.layers.Attention.add_weight": true, + "tf.keras.layers.Attention.build": true, + "tf.keras.layers.Attention.call": true, + "tf.keras.layers.Attention.compute_mask": true, + "tf.keras.layers.Attention.compute_output_shape": true, + "tf.keras.layers.Attention.compute_output_signature": true, + "tf.keras.layers.Attention.count_params": true, + "tf.keras.layers.Attention.dtype": true, + "tf.keras.layers.Attention.dynamic": true, + "tf.keras.layers.Attention.from_config": true, + "tf.keras.layers.Attention.get_config": 
true, + "tf.keras.layers.Attention.get_weights": true, + "tf.keras.layers.Attention.input": true, + "tf.keras.layers.Attention.input_spec": true, + "tf.keras.layers.Attention.losses": true, + "tf.keras.layers.Attention.metrics": true, + "tf.keras.layers.Attention.name": true, + "tf.keras.layers.Attention.name_scope": true, + "tf.keras.layers.Attention.non_trainable_weights": true, + "tf.keras.layers.Attention.output": true, + "tf.keras.layers.Attention.set_weights": true, + "tf.keras.layers.Attention.submodules": true, + "tf.keras.layers.Attention.trainable": true, + "tf.keras.layers.Attention.trainable_weights": true, + "tf.keras.layers.Attention.weights": true, + "tf.keras.layers.Attention.with_name_scope": true, + "tf.keras.layers.Average": false, + "tf.keras.layers.Average.__call__": true, + "tf.keras.layers.Average.__eq__": true, + "tf.keras.layers.Average.__ge__": true, + "tf.keras.layers.Average.__gt__": true, + "tf.keras.layers.Average.__init__": true, + "tf.keras.layers.Average.__le__": true, + "tf.keras.layers.Average.__lt__": true, + "tf.keras.layers.Average.__ne__": true, + "tf.keras.layers.Average.__new__": true, + "tf.keras.layers.Average.activity_regularizer": true, + "tf.keras.layers.Average.add_loss": true, + "tf.keras.layers.Average.add_metric": true, + "tf.keras.layers.Average.add_weight": true, + "tf.keras.layers.Average.build": true, + "tf.keras.layers.Average.call": true, + "tf.keras.layers.Average.compute_mask": true, + "tf.keras.layers.Average.compute_output_shape": true, + "tf.keras.layers.Average.compute_output_signature": true, + "tf.keras.layers.Average.count_params": true, + "tf.keras.layers.Average.dtype": true, + "tf.keras.layers.Average.dynamic": true, + "tf.keras.layers.Average.from_config": true, + "tf.keras.layers.Average.get_config": true, + "tf.keras.layers.Average.get_weights": true, + "tf.keras.layers.Average.input": true, + "tf.keras.layers.Average.input_spec": true, + "tf.keras.layers.Average.losses": true, + "tf.keras.layers.Average.metrics": true, + "tf.keras.layers.Average.name": true, + "tf.keras.layers.Average.name_scope": true, + "tf.keras.layers.Average.non_trainable_weights": true, + "tf.keras.layers.Average.output": true, + "tf.keras.layers.Average.set_weights": true, + "tf.keras.layers.Average.submodules": true, + "tf.keras.layers.Average.trainable": true, + "tf.keras.layers.Average.trainable_weights": true, + "tf.keras.layers.Average.weights": true, + "tf.keras.layers.Average.with_name_scope": true, + "tf.keras.layers.AveragePooling1D": false, + "tf.keras.layers.AveragePooling1D.__call__": true, + "tf.keras.layers.AveragePooling1D.__eq__": true, + "tf.keras.layers.AveragePooling1D.__ge__": true, + "tf.keras.layers.AveragePooling1D.__gt__": true, + "tf.keras.layers.AveragePooling1D.__init__": true, + "tf.keras.layers.AveragePooling1D.__le__": true, + "tf.keras.layers.AveragePooling1D.__lt__": true, + "tf.keras.layers.AveragePooling1D.__ne__": true, + "tf.keras.layers.AveragePooling1D.__new__": true, + "tf.keras.layers.AveragePooling1D.activity_regularizer": true, + "tf.keras.layers.AveragePooling1D.add_loss": true, + "tf.keras.layers.AveragePooling1D.add_metric": true, + "tf.keras.layers.AveragePooling1D.add_weight": true, + "tf.keras.layers.AveragePooling1D.build": true, + "tf.keras.layers.AveragePooling1D.call": true, + "tf.keras.layers.AveragePooling1D.compute_mask": true, + "tf.keras.layers.AveragePooling1D.compute_output_shape": true, + "tf.keras.layers.AveragePooling1D.compute_output_signature": true, + 
"tf.keras.layers.AveragePooling1D.count_params": true, + "tf.keras.layers.AveragePooling1D.dtype": true, + "tf.keras.layers.AveragePooling1D.dynamic": true, + "tf.keras.layers.AveragePooling1D.from_config": true, + "tf.keras.layers.AveragePooling1D.get_config": true, + "tf.keras.layers.AveragePooling1D.get_weights": true, + "tf.keras.layers.AveragePooling1D.input": true, + "tf.keras.layers.AveragePooling1D.input_spec": true, + "tf.keras.layers.AveragePooling1D.losses": true, + "tf.keras.layers.AveragePooling1D.metrics": true, + "tf.keras.layers.AveragePooling1D.name": true, + "tf.keras.layers.AveragePooling1D.name_scope": true, + "tf.keras.layers.AveragePooling1D.non_trainable_weights": true, + "tf.keras.layers.AveragePooling1D.output": true, + "tf.keras.layers.AveragePooling1D.set_weights": true, + "tf.keras.layers.AveragePooling1D.submodules": true, + "tf.keras.layers.AveragePooling1D.trainable": true, + "tf.keras.layers.AveragePooling1D.trainable_weights": true, + "tf.keras.layers.AveragePooling1D.weights": true, + "tf.keras.layers.AveragePooling1D.with_name_scope": true, + "tf.keras.layers.AveragePooling2D": false, + "tf.keras.layers.AveragePooling2D.__call__": true, + "tf.keras.layers.AveragePooling2D.__eq__": true, + "tf.keras.layers.AveragePooling2D.__ge__": true, + "tf.keras.layers.AveragePooling2D.__gt__": true, + "tf.keras.layers.AveragePooling2D.__init__": true, + "tf.keras.layers.AveragePooling2D.__le__": true, + "tf.keras.layers.AveragePooling2D.__lt__": true, + "tf.keras.layers.AveragePooling2D.__ne__": true, + "tf.keras.layers.AveragePooling2D.__new__": true, + "tf.keras.layers.AveragePooling2D.activity_regularizer": true, + "tf.keras.layers.AveragePooling2D.add_loss": true, + "tf.keras.layers.AveragePooling2D.add_metric": true, + "tf.keras.layers.AveragePooling2D.add_weight": true, + "tf.keras.layers.AveragePooling2D.build": true, + "tf.keras.layers.AveragePooling2D.call": true, + "tf.keras.layers.AveragePooling2D.compute_mask": true, + "tf.keras.layers.AveragePooling2D.compute_output_shape": true, + "tf.keras.layers.AveragePooling2D.compute_output_signature": true, + "tf.keras.layers.AveragePooling2D.count_params": true, + "tf.keras.layers.AveragePooling2D.dtype": true, + "tf.keras.layers.AveragePooling2D.dynamic": true, + "tf.keras.layers.AveragePooling2D.from_config": true, + "tf.keras.layers.AveragePooling2D.get_config": true, + "tf.keras.layers.AveragePooling2D.get_weights": true, + "tf.keras.layers.AveragePooling2D.input": true, + "tf.keras.layers.AveragePooling2D.input_spec": true, + "tf.keras.layers.AveragePooling2D.losses": true, + "tf.keras.layers.AveragePooling2D.metrics": true, + "tf.keras.layers.AveragePooling2D.name": true, + "tf.keras.layers.AveragePooling2D.name_scope": true, + "tf.keras.layers.AveragePooling2D.non_trainable_weights": true, + "tf.keras.layers.AveragePooling2D.output": true, + "tf.keras.layers.AveragePooling2D.set_weights": true, + "tf.keras.layers.AveragePooling2D.submodules": true, + "tf.keras.layers.AveragePooling2D.trainable": true, + "tf.keras.layers.AveragePooling2D.trainable_weights": true, + "tf.keras.layers.AveragePooling2D.weights": true, + "tf.keras.layers.AveragePooling2D.with_name_scope": true, + "tf.keras.layers.AveragePooling3D": false, + "tf.keras.layers.AveragePooling3D.__call__": true, + "tf.keras.layers.AveragePooling3D.__eq__": true, + "tf.keras.layers.AveragePooling3D.__ge__": true, + "tf.keras.layers.AveragePooling3D.__gt__": true, + "tf.keras.layers.AveragePooling3D.__init__": true, + 
"tf.keras.layers.AveragePooling3D.__le__": true, + "tf.keras.layers.AveragePooling3D.__lt__": true, + "tf.keras.layers.AveragePooling3D.__ne__": true, + "tf.keras.layers.AveragePooling3D.__new__": true, + "tf.keras.layers.AveragePooling3D.activity_regularizer": true, + "tf.keras.layers.AveragePooling3D.add_loss": true, + "tf.keras.layers.AveragePooling3D.add_metric": true, + "tf.keras.layers.AveragePooling3D.add_weight": true, + "tf.keras.layers.AveragePooling3D.build": true, + "tf.keras.layers.AveragePooling3D.call": true, + "tf.keras.layers.AveragePooling3D.compute_mask": true, + "tf.keras.layers.AveragePooling3D.compute_output_shape": true, + "tf.keras.layers.AveragePooling3D.compute_output_signature": true, + "tf.keras.layers.AveragePooling3D.count_params": true, + "tf.keras.layers.AveragePooling3D.dtype": true, + "tf.keras.layers.AveragePooling3D.dynamic": true, + "tf.keras.layers.AveragePooling3D.from_config": true, + "tf.keras.layers.AveragePooling3D.get_config": true, + "tf.keras.layers.AveragePooling3D.get_weights": true, + "tf.keras.layers.AveragePooling3D.input": true, + "tf.keras.layers.AveragePooling3D.input_spec": true, + "tf.keras.layers.AveragePooling3D.losses": true, + "tf.keras.layers.AveragePooling3D.metrics": true, + "tf.keras.layers.AveragePooling3D.name": true, + "tf.keras.layers.AveragePooling3D.name_scope": true, + "tf.keras.layers.AveragePooling3D.non_trainable_weights": true, + "tf.keras.layers.AveragePooling3D.output": true, + "tf.keras.layers.AveragePooling3D.set_weights": true, + "tf.keras.layers.AveragePooling3D.submodules": true, + "tf.keras.layers.AveragePooling3D.trainable": true, + "tf.keras.layers.AveragePooling3D.trainable_weights": true, + "tf.keras.layers.AveragePooling3D.weights": true, + "tf.keras.layers.AveragePooling3D.with_name_scope": true, + "tf.keras.layers.AvgPool1D": false, + "tf.keras.layers.AvgPool1D.__call__": true, + "tf.keras.layers.AvgPool1D.__eq__": true, + "tf.keras.layers.AvgPool1D.__ge__": true, + "tf.keras.layers.AvgPool1D.__gt__": true, + "tf.keras.layers.AvgPool1D.__init__": true, + "tf.keras.layers.AvgPool1D.__le__": true, + "tf.keras.layers.AvgPool1D.__lt__": true, + "tf.keras.layers.AvgPool1D.__ne__": true, + "tf.keras.layers.AvgPool1D.__new__": true, + "tf.keras.layers.AvgPool1D.activity_regularizer": true, + "tf.keras.layers.AvgPool1D.add_loss": true, + "tf.keras.layers.AvgPool1D.add_metric": true, + "tf.keras.layers.AvgPool1D.add_weight": true, + "tf.keras.layers.AvgPool1D.build": true, + "tf.keras.layers.AvgPool1D.call": true, + "tf.keras.layers.AvgPool1D.compute_mask": true, + "tf.keras.layers.AvgPool1D.compute_output_shape": true, + "tf.keras.layers.AvgPool1D.compute_output_signature": true, + "tf.keras.layers.AvgPool1D.count_params": true, + "tf.keras.layers.AvgPool1D.dtype": true, + "tf.keras.layers.AvgPool1D.dynamic": true, + "tf.keras.layers.AvgPool1D.from_config": true, + "tf.keras.layers.AvgPool1D.get_config": true, + "tf.keras.layers.AvgPool1D.get_weights": true, + "tf.keras.layers.AvgPool1D.input": true, + "tf.keras.layers.AvgPool1D.input_spec": true, + "tf.keras.layers.AvgPool1D.losses": true, + "tf.keras.layers.AvgPool1D.metrics": true, + "tf.keras.layers.AvgPool1D.name": true, + "tf.keras.layers.AvgPool1D.name_scope": true, + "tf.keras.layers.AvgPool1D.non_trainable_weights": true, + "tf.keras.layers.AvgPool1D.output": true, + "tf.keras.layers.AvgPool1D.set_weights": true, + "tf.keras.layers.AvgPool1D.submodules": true, + "tf.keras.layers.AvgPool1D.trainable": true, + 
"tf.keras.layers.AvgPool1D.trainable_weights": true, + "tf.keras.layers.AvgPool1D.weights": true, + "tf.keras.layers.AvgPool1D.with_name_scope": true, + "tf.keras.layers.AvgPool2D": false, + "tf.keras.layers.AvgPool2D.__call__": true, + "tf.keras.layers.AvgPool2D.__eq__": true, + "tf.keras.layers.AvgPool2D.__ge__": true, + "tf.keras.layers.AvgPool2D.__gt__": true, + "tf.keras.layers.AvgPool2D.__init__": true, + "tf.keras.layers.AvgPool2D.__le__": true, + "tf.keras.layers.AvgPool2D.__lt__": true, + "tf.keras.layers.AvgPool2D.__ne__": true, + "tf.keras.layers.AvgPool2D.__new__": true, + "tf.keras.layers.AvgPool2D.activity_regularizer": true, + "tf.keras.layers.AvgPool2D.add_loss": true, + "tf.keras.layers.AvgPool2D.add_metric": true, + "tf.keras.layers.AvgPool2D.add_weight": true, + "tf.keras.layers.AvgPool2D.build": true, + "tf.keras.layers.AvgPool2D.call": true, + "tf.keras.layers.AvgPool2D.compute_mask": true, + "tf.keras.layers.AvgPool2D.compute_output_shape": true, + "tf.keras.layers.AvgPool2D.compute_output_signature": true, + "tf.keras.layers.AvgPool2D.count_params": true, + "tf.keras.layers.AvgPool2D.dtype": true, + "tf.keras.layers.AvgPool2D.dynamic": true, + "tf.keras.layers.AvgPool2D.from_config": true, + "tf.keras.layers.AvgPool2D.get_config": true, + "tf.keras.layers.AvgPool2D.get_weights": true, + "tf.keras.layers.AvgPool2D.input": true, + "tf.keras.layers.AvgPool2D.input_spec": true, + "tf.keras.layers.AvgPool2D.losses": true, + "tf.keras.layers.AvgPool2D.metrics": true, + "tf.keras.layers.AvgPool2D.name": true, + "tf.keras.layers.AvgPool2D.name_scope": true, + "tf.keras.layers.AvgPool2D.non_trainable_weights": true, + "tf.keras.layers.AvgPool2D.output": true, + "tf.keras.layers.AvgPool2D.set_weights": true, + "tf.keras.layers.AvgPool2D.submodules": true, + "tf.keras.layers.AvgPool2D.trainable": true, + "tf.keras.layers.AvgPool2D.trainable_weights": true, + "tf.keras.layers.AvgPool2D.weights": true, + "tf.keras.layers.AvgPool2D.with_name_scope": true, + "tf.keras.layers.AvgPool3D": false, + "tf.keras.layers.AvgPool3D.__call__": true, + "tf.keras.layers.AvgPool3D.__eq__": true, + "tf.keras.layers.AvgPool3D.__ge__": true, + "tf.keras.layers.AvgPool3D.__gt__": true, + "tf.keras.layers.AvgPool3D.__init__": true, + "tf.keras.layers.AvgPool3D.__le__": true, + "tf.keras.layers.AvgPool3D.__lt__": true, + "tf.keras.layers.AvgPool3D.__ne__": true, + "tf.keras.layers.AvgPool3D.__new__": true, + "tf.keras.layers.AvgPool3D.activity_regularizer": true, + "tf.keras.layers.AvgPool3D.add_loss": true, + "tf.keras.layers.AvgPool3D.add_metric": true, + "tf.keras.layers.AvgPool3D.add_weight": true, + "tf.keras.layers.AvgPool3D.build": true, + "tf.keras.layers.AvgPool3D.call": true, + "tf.keras.layers.AvgPool3D.compute_mask": true, + "tf.keras.layers.AvgPool3D.compute_output_shape": true, + "tf.keras.layers.AvgPool3D.compute_output_signature": true, + "tf.keras.layers.AvgPool3D.count_params": true, + "tf.keras.layers.AvgPool3D.dtype": true, + "tf.keras.layers.AvgPool3D.dynamic": true, + "tf.keras.layers.AvgPool3D.from_config": true, + "tf.keras.layers.AvgPool3D.get_config": true, + "tf.keras.layers.AvgPool3D.get_weights": true, + "tf.keras.layers.AvgPool3D.input": true, + "tf.keras.layers.AvgPool3D.input_spec": true, + "tf.keras.layers.AvgPool3D.losses": true, + "tf.keras.layers.AvgPool3D.metrics": true, + "tf.keras.layers.AvgPool3D.name": true, + "tf.keras.layers.AvgPool3D.name_scope": true, + "tf.keras.layers.AvgPool3D.non_trainable_weights": true, + "tf.keras.layers.AvgPool3D.output": true, + 
"tf.keras.layers.AvgPool3D.set_weights": true, + "tf.keras.layers.AvgPool3D.submodules": true, + "tf.keras.layers.AvgPool3D.trainable": true, + "tf.keras.layers.AvgPool3D.trainable_weights": true, + "tf.keras.layers.AvgPool3D.weights": true, + "tf.keras.layers.AvgPool3D.with_name_scope": true, + "tf.keras.layers.BatchNormalization": false, + "tf.keras.layers.BatchNormalization.__call__": true, + "tf.keras.layers.BatchNormalization.__eq__": true, + "tf.keras.layers.BatchNormalization.__ge__": true, + "tf.keras.layers.BatchNormalization.__gt__": true, + "tf.keras.layers.BatchNormalization.__init__": true, + "tf.keras.layers.BatchNormalization.__le__": true, + "tf.keras.layers.BatchNormalization.__lt__": true, + "tf.keras.layers.BatchNormalization.__ne__": true, + "tf.keras.layers.BatchNormalization.__new__": true, + "tf.keras.layers.BatchNormalization.activity_regularizer": true, + "tf.keras.layers.BatchNormalization.add_loss": true, + "tf.keras.layers.BatchNormalization.add_metric": true, + "tf.keras.layers.BatchNormalization.add_weight": true, + "tf.keras.layers.BatchNormalization.build": true, + "tf.keras.layers.BatchNormalization.call": true, + "tf.keras.layers.BatchNormalization.compute_mask": true, + "tf.keras.layers.BatchNormalization.compute_output_shape": true, + "tf.keras.layers.BatchNormalization.compute_output_signature": true, + "tf.keras.layers.BatchNormalization.count_params": true, + "tf.keras.layers.BatchNormalization.dtype": true, + "tf.keras.layers.BatchNormalization.dynamic": true, + "tf.keras.layers.BatchNormalization.from_config": true, + "tf.keras.layers.BatchNormalization.get_config": true, + "tf.keras.layers.BatchNormalization.get_weights": true, + "tf.keras.layers.BatchNormalization.input": true, + "tf.keras.layers.BatchNormalization.input_spec": true, + "tf.keras.layers.BatchNormalization.losses": true, + "tf.keras.layers.BatchNormalization.metrics": true, + "tf.keras.layers.BatchNormalization.name": true, + "tf.keras.layers.BatchNormalization.name_scope": true, + "tf.keras.layers.BatchNormalization.non_trainable_weights": true, + "tf.keras.layers.BatchNormalization.output": true, + "tf.keras.layers.BatchNormalization.set_weights": true, + "tf.keras.layers.BatchNormalization.submodules": true, + "tf.keras.layers.BatchNormalization.trainable": true, + "tf.keras.layers.BatchNormalization.trainable_weights": true, + "tf.keras.layers.BatchNormalization.weights": true, + "tf.keras.layers.BatchNormalization.with_name_scope": true, + "tf.keras.layers.Bidirectional": false, + "tf.keras.layers.Bidirectional.__call__": true, + "tf.keras.layers.Bidirectional.__eq__": true, + "tf.keras.layers.Bidirectional.__ge__": true, + "tf.keras.layers.Bidirectional.__gt__": true, + "tf.keras.layers.Bidirectional.__init__": true, + "tf.keras.layers.Bidirectional.__le__": true, + "tf.keras.layers.Bidirectional.__lt__": true, + "tf.keras.layers.Bidirectional.__ne__": true, + "tf.keras.layers.Bidirectional.__new__": true, + "tf.keras.layers.Bidirectional.activity_regularizer": true, + "tf.keras.layers.Bidirectional.add_loss": true, + "tf.keras.layers.Bidirectional.add_metric": true, + "tf.keras.layers.Bidirectional.add_weight": true, + "tf.keras.layers.Bidirectional.build": true, + "tf.keras.layers.Bidirectional.call": true, + "tf.keras.layers.Bidirectional.compute_mask": true, + "tf.keras.layers.Bidirectional.compute_output_shape": true, + "tf.keras.layers.Bidirectional.compute_output_signature": true, + "tf.keras.layers.Bidirectional.constraints": true, + 
"tf.keras.layers.Bidirectional.count_params": true, + "tf.keras.layers.Bidirectional.dtype": true, + "tf.keras.layers.Bidirectional.dynamic": true, + "tf.keras.layers.Bidirectional.from_config": true, + "tf.keras.layers.Bidirectional.get_config": true, + "tf.keras.layers.Bidirectional.get_weights": true, + "tf.keras.layers.Bidirectional.input": true, + "tf.keras.layers.Bidirectional.input_spec": true, + "tf.keras.layers.Bidirectional.losses": true, + "tf.keras.layers.Bidirectional.metrics": true, + "tf.keras.layers.Bidirectional.name": true, + "tf.keras.layers.Bidirectional.name_scope": true, + "tf.keras.layers.Bidirectional.non_trainable_weights": true, + "tf.keras.layers.Bidirectional.output": true, + "tf.keras.layers.Bidirectional.reset_states": true, + "tf.keras.layers.Bidirectional.set_weights": true, + "tf.keras.layers.Bidirectional.submodules": true, + "tf.keras.layers.Bidirectional.trainable": true, + "tf.keras.layers.Bidirectional.trainable_weights": true, + "tf.keras.layers.Bidirectional.weights": true, + "tf.keras.layers.Bidirectional.with_name_scope": true, + "tf.keras.layers.Concatenate": false, + "tf.keras.layers.Concatenate.__call__": true, + "tf.keras.layers.Concatenate.__eq__": true, + "tf.keras.layers.Concatenate.__ge__": true, + "tf.keras.layers.Concatenate.__gt__": true, + "tf.keras.layers.Concatenate.__init__": true, + "tf.keras.layers.Concatenate.__le__": true, + "tf.keras.layers.Concatenate.__lt__": true, + "tf.keras.layers.Concatenate.__ne__": true, + "tf.keras.layers.Concatenate.__new__": true, + "tf.keras.layers.Concatenate.activity_regularizer": true, + "tf.keras.layers.Concatenate.add_loss": true, + "tf.keras.layers.Concatenate.add_metric": true, + "tf.keras.layers.Concatenate.add_weight": true, + "tf.keras.layers.Concatenate.build": true, + "tf.keras.layers.Concatenate.call": true, + "tf.keras.layers.Concatenate.compute_mask": true, + "tf.keras.layers.Concatenate.compute_output_shape": true, + "tf.keras.layers.Concatenate.compute_output_signature": true, + "tf.keras.layers.Concatenate.count_params": true, + "tf.keras.layers.Concatenate.dtype": true, + "tf.keras.layers.Concatenate.dynamic": true, + "tf.keras.layers.Concatenate.from_config": true, + "tf.keras.layers.Concatenate.get_config": true, + "tf.keras.layers.Concatenate.get_weights": true, + "tf.keras.layers.Concatenate.input": true, + "tf.keras.layers.Concatenate.input_spec": true, + "tf.keras.layers.Concatenate.losses": true, + "tf.keras.layers.Concatenate.metrics": true, + "tf.keras.layers.Concatenate.name": true, + "tf.keras.layers.Concatenate.name_scope": true, + "tf.keras.layers.Concatenate.non_trainable_weights": true, + "tf.keras.layers.Concatenate.output": true, + "tf.keras.layers.Concatenate.set_weights": true, + "tf.keras.layers.Concatenate.submodules": true, + "tf.keras.layers.Concatenate.trainable": true, + "tf.keras.layers.Concatenate.trainable_weights": true, + "tf.keras.layers.Concatenate.weights": true, + "tf.keras.layers.Concatenate.with_name_scope": true, + "tf.keras.layers.Conv1D": false, + "tf.keras.layers.Conv1D.__call__": true, + "tf.keras.layers.Conv1D.__eq__": true, + "tf.keras.layers.Conv1D.__ge__": true, + "tf.keras.layers.Conv1D.__gt__": true, + "tf.keras.layers.Conv1D.__init__": true, + "tf.keras.layers.Conv1D.__le__": true, + "tf.keras.layers.Conv1D.__lt__": true, + "tf.keras.layers.Conv1D.__ne__": true, + "tf.keras.layers.Conv1D.__new__": true, + "tf.keras.layers.Conv1D.activity_regularizer": true, + "tf.keras.layers.Conv1D.add_loss": true, + 
"tf.keras.layers.Conv1D.add_metric": true, + "tf.keras.layers.Conv1D.add_weight": true, + "tf.keras.layers.Conv1D.build": true, + "tf.keras.layers.Conv1D.call": true, + "tf.keras.layers.Conv1D.compute_mask": true, + "tf.keras.layers.Conv1D.compute_output_shape": true, + "tf.keras.layers.Conv1D.compute_output_signature": true, + "tf.keras.layers.Conv1D.count_params": true, + "tf.keras.layers.Conv1D.dtype": true, + "tf.keras.layers.Conv1D.dynamic": true, + "tf.keras.layers.Conv1D.from_config": true, + "tf.keras.layers.Conv1D.get_config": true, + "tf.keras.layers.Conv1D.get_weights": true, + "tf.keras.layers.Conv1D.input": true, + "tf.keras.layers.Conv1D.input_spec": true, + "tf.keras.layers.Conv1D.losses": true, + "tf.keras.layers.Conv1D.metrics": true, + "tf.keras.layers.Conv1D.name": true, + "tf.keras.layers.Conv1D.name_scope": true, + "tf.keras.layers.Conv1D.non_trainable_weights": true, + "tf.keras.layers.Conv1D.output": true, + "tf.keras.layers.Conv1D.set_weights": true, + "tf.keras.layers.Conv1D.submodules": true, + "tf.keras.layers.Conv1D.trainable": true, + "tf.keras.layers.Conv1D.trainable_weights": true, + "tf.keras.layers.Conv1D.weights": true, + "tf.keras.layers.Conv1D.with_name_scope": true, + "tf.keras.layers.Conv2D": false, + "tf.keras.layers.Conv2D.__call__": true, + "tf.keras.layers.Conv2D.__eq__": true, + "tf.keras.layers.Conv2D.__ge__": true, + "tf.keras.layers.Conv2D.__gt__": true, + "tf.keras.layers.Conv2D.__init__": true, + "tf.keras.layers.Conv2D.__le__": true, + "tf.keras.layers.Conv2D.__lt__": true, + "tf.keras.layers.Conv2D.__ne__": true, + "tf.keras.layers.Conv2D.__new__": true, + "tf.keras.layers.Conv2D.activity_regularizer": true, + "tf.keras.layers.Conv2D.add_loss": true, + "tf.keras.layers.Conv2D.add_metric": true, + "tf.keras.layers.Conv2D.add_weight": true, + "tf.keras.layers.Conv2D.build": true, + "tf.keras.layers.Conv2D.call": true, + "tf.keras.layers.Conv2D.compute_mask": true, + "tf.keras.layers.Conv2D.compute_output_shape": true, + "tf.keras.layers.Conv2D.compute_output_signature": true, + "tf.keras.layers.Conv2D.count_params": true, + "tf.keras.layers.Conv2D.dtype": true, + "tf.keras.layers.Conv2D.dynamic": true, + "tf.keras.layers.Conv2D.from_config": true, + "tf.keras.layers.Conv2D.get_config": true, + "tf.keras.layers.Conv2D.get_weights": true, + "tf.keras.layers.Conv2D.input": true, + "tf.keras.layers.Conv2D.input_spec": true, + "tf.keras.layers.Conv2D.losses": true, + "tf.keras.layers.Conv2D.metrics": true, + "tf.keras.layers.Conv2D.name": true, + "tf.keras.layers.Conv2D.name_scope": true, + "tf.keras.layers.Conv2D.non_trainable_weights": true, + "tf.keras.layers.Conv2D.output": true, + "tf.keras.layers.Conv2D.set_weights": true, + "tf.keras.layers.Conv2D.submodules": true, + "tf.keras.layers.Conv2D.trainable": true, + "tf.keras.layers.Conv2D.trainable_weights": true, + "tf.keras.layers.Conv2D.weights": true, + "tf.keras.layers.Conv2D.with_name_scope": true, + "tf.keras.layers.Conv2DTranspose": false, + "tf.keras.layers.Conv2DTranspose.__call__": true, + "tf.keras.layers.Conv2DTranspose.__eq__": true, + "tf.keras.layers.Conv2DTranspose.__ge__": true, + "tf.keras.layers.Conv2DTranspose.__gt__": true, + "tf.keras.layers.Conv2DTranspose.__init__": true, + "tf.keras.layers.Conv2DTranspose.__le__": true, + "tf.keras.layers.Conv2DTranspose.__lt__": true, + "tf.keras.layers.Conv2DTranspose.__ne__": true, + "tf.keras.layers.Conv2DTranspose.__new__": true, + "tf.keras.layers.Conv2DTranspose.activity_regularizer": true, + 
"tf.keras.layers.Conv2DTranspose.add_loss": true, + "tf.keras.layers.Conv2DTranspose.add_metric": true, + "tf.keras.layers.Conv2DTranspose.add_weight": true, + "tf.keras.layers.Conv2DTranspose.build": true, + "tf.keras.layers.Conv2DTranspose.call": true, + "tf.keras.layers.Conv2DTranspose.compute_mask": true, + "tf.keras.layers.Conv2DTranspose.compute_output_shape": true, + "tf.keras.layers.Conv2DTranspose.compute_output_signature": true, + "tf.keras.layers.Conv2DTranspose.count_params": true, + "tf.keras.layers.Conv2DTranspose.dtype": true, + "tf.keras.layers.Conv2DTranspose.dynamic": true, + "tf.keras.layers.Conv2DTranspose.from_config": true, + "tf.keras.layers.Conv2DTranspose.get_config": true, + "tf.keras.layers.Conv2DTranspose.get_weights": true, + "tf.keras.layers.Conv2DTranspose.input": true, + "tf.keras.layers.Conv2DTranspose.input_spec": true, + "tf.keras.layers.Conv2DTranspose.losses": true, + "tf.keras.layers.Conv2DTranspose.metrics": true, + "tf.keras.layers.Conv2DTranspose.name": true, + "tf.keras.layers.Conv2DTranspose.name_scope": true, + "tf.keras.layers.Conv2DTranspose.non_trainable_weights": true, + "tf.keras.layers.Conv2DTranspose.output": true, + "tf.keras.layers.Conv2DTranspose.set_weights": true, + "tf.keras.layers.Conv2DTranspose.submodules": true, + "tf.keras.layers.Conv2DTranspose.trainable": true, + "tf.keras.layers.Conv2DTranspose.trainable_weights": true, + "tf.keras.layers.Conv2DTranspose.weights": true, + "tf.keras.layers.Conv2DTranspose.with_name_scope": true, + "tf.keras.layers.Conv3D": false, + "tf.keras.layers.Conv3D.__call__": true, + "tf.keras.layers.Conv3D.__eq__": true, + "tf.keras.layers.Conv3D.__ge__": true, + "tf.keras.layers.Conv3D.__gt__": true, + "tf.keras.layers.Conv3D.__init__": true, + "tf.keras.layers.Conv3D.__le__": true, + "tf.keras.layers.Conv3D.__lt__": true, + "tf.keras.layers.Conv3D.__ne__": true, + "tf.keras.layers.Conv3D.__new__": true, + "tf.keras.layers.Conv3D.activity_regularizer": true, + "tf.keras.layers.Conv3D.add_loss": true, + "tf.keras.layers.Conv3D.add_metric": true, + "tf.keras.layers.Conv3D.add_weight": true, + "tf.keras.layers.Conv3D.build": true, + "tf.keras.layers.Conv3D.call": true, + "tf.keras.layers.Conv3D.compute_mask": true, + "tf.keras.layers.Conv3D.compute_output_shape": true, + "tf.keras.layers.Conv3D.compute_output_signature": true, + "tf.keras.layers.Conv3D.count_params": true, + "tf.keras.layers.Conv3D.dtype": true, + "tf.keras.layers.Conv3D.dynamic": true, + "tf.keras.layers.Conv3D.from_config": true, + "tf.keras.layers.Conv3D.get_config": true, + "tf.keras.layers.Conv3D.get_weights": true, + "tf.keras.layers.Conv3D.input": true, + "tf.keras.layers.Conv3D.input_spec": true, + "tf.keras.layers.Conv3D.losses": true, + "tf.keras.layers.Conv3D.metrics": true, + "tf.keras.layers.Conv3D.name": true, + "tf.keras.layers.Conv3D.name_scope": true, + "tf.keras.layers.Conv3D.non_trainable_weights": true, + "tf.keras.layers.Conv3D.output": true, + "tf.keras.layers.Conv3D.set_weights": true, + "tf.keras.layers.Conv3D.submodules": true, + "tf.keras.layers.Conv3D.trainable": true, + "tf.keras.layers.Conv3D.trainable_weights": true, + "tf.keras.layers.Conv3D.weights": true, + "tf.keras.layers.Conv3D.with_name_scope": true, + "tf.keras.layers.Conv3DTranspose": false, + "tf.keras.layers.Conv3DTranspose.__call__": true, + "tf.keras.layers.Conv3DTranspose.__eq__": true, + "tf.keras.layers.Conv3DTranspose.__ge__": true, + "tf.keras.layers.Conv3DTranspose.__gt__": true, + "tf.keras.layers.Conv3DTranspose.__init__": true, + 
"tf.keras.layers.Conv3DTranspose.__le__": true, + "tf.keras.layers.Conv3DTranspose.__lt__": true, + "tf.keras.layers.Conv3DTranspose.__ne__": true, + "tf.keras.layers.Conv3DTranspose.__new__": true, + "tf.keras.layers.Conv3DTranspose.activity_regularizer": true, + "tf.keras.layers.Conv3DTranspose.add_loss": true, + "tf.keras.layers.Conv3DTranspose.add_metric": true, + "tf.keras.layers.Conv3DTranspose.add_weight": true, + "tf.keras.layers.Conv3DTranspose.build": true, + "tf.keras.layers.Conv3DTranspose.call": true, + "tf.keras.layers.Conv3DTranspose.compute_mask": true, + "tf.keras.layers.Conv3DTranspose.compute_output_shape": true, + "tf.keras.layers.Conv3DTranspose.compute_output_signature": true, + "tf.keras.layers.Conv3DTranspose.count_params": true, + "tf.keras.layers.Conv3DTranspose.dtype": true, + "tf.keras.layers.Conv3DTranspose.dynamic": true, + "tf.keras.layers.Conv3DTranspose.from_config": true, + "tf.keras.layers.Conv3DTranspose.get_config": true, + "tf.keras.layers.Conv3DTranspose.get_weights": true, + "tf.keras.layers.Conv3DTranspose.input": true, + "tf.keras.layers.Conv3DTranspose.input_spec": true, + "tf.keras.layers.Conv3DTranspose.losses": true, + "tf.keras.layers.Conv3DTranspose.metrics": true, + "tf.keras.layers.Conv3DTranspose.name": true, + "tf.keras.layers.Conv3DTranspose.name_scope": true, + "tf.keras.layers.Conv3DTranspose.non_trainable_weights": true, + "tf.keras.layers.Conv3DTranspose.output": true, + "tf.keras.layers.Conv3DTranspose.set_weights": true, + "tf.keras.layers.Conv3DTranspose.submodules": true, + "tf.keras.layers.Conv3DTranspose.trainable": true, + "tf.keras.layers.Conv3DTranspose.trainable_weights": true, + "tf.keras.layers.Conv3DTranspose.weights": true, + "tf.keras.layers.Conv3DTranspose.with_name_scope": true, + "tf.keras.layers.ConvLSTM2D": false, + "tf.keras.layers.ConvLSTM2D.__call__": true, + "tf.keras.layers.ConvLSTM2D.__eq__": true, + "tf.keras.layers.ConvLSTM2D.__ge__": true, + "tf.keras.layers.ConvLSTM2D.__gt__": true, + "tf.keras.layers.ConvLSTM2D.__init__": true, + "tf.keras.layers.ConvLSTM2D.__le__": true, + "tf.keras.layers.ConvLSTM2D.__lt__": true, + "tf.keras.layers.ConvLSTM2D.__ne__": true, + "tf.keras.layers.ConvLSTM2D.__new__": true, + "tf.keras.layers.ConvLSTM2D.activation": true, + "tf.keras.layers.ConvLSTM2D.activity_regularizer": true, + "tf.keras.layers.ConvLSTM2D.add_loss": true, + "tf.keras.layers.ConvLSTM2D.add_metric": true, + "tf.keras.layers.ConvLSTM2D.add_weight": true, + "tf.keras.layers.ConvLSTM2D.bias_constraint": true, + "tf.keras.layers.ConvLSTM2D.bias_initializer": true, + "tf.keras.layers.ConvLSTM2D.bias_regularizer": true, + "tf.keras.layers.ConvLSTM2D.build": true, + "tf.keras.layers.ConvLSTM2D.call": true, + "tf.keras.layers.ConvLSTM2D.compute_mask": true, + "tf.keras.layers.ConvLSTM2D.compute_output_shape": true, + "tf.keras.layers.ConvLSTM2D.compute_output_signature": true, + "tf.keras.layers.ConvLSTM2D.count_params": true, + "tf.keras.layers.ConvLSTM2D.data_format": true, + "tf.keras.layers.ConvLSTM2D.dilation_rate": true, + "tf.keras.layers.ConvLSTM2D.dropout": true, + "tf.keras.layers.ConvLSTM2D.dtype": true, + "tf.keras.layers.ConvLSTM2D.dynamic": true, + "tf.keras.layers.ConvLSTM2D.filters": true, + "tf.keras.layers.ConvLSTM2D.from_config": true, + "tf.keras.layers.ConvLSTM2D.get_config": true, + "tf.keras.layers.ConvLSTM2D.get_initial_state": true, + "tf.keras.layers.ConvLSTM2D.get_weights": true, + "tf.keras.layers.ConvLSTM2D.input": true, + "tf.keras.layers.ConvLSTM2D.input_spec": true, + 
"tf.keras.layers.ConvLSTM2D.kernel_constraint": true, + "tf.keras.layers.ConvLSTM2D.kernel_initializer": true, + "tf.keras.layers.ConvLSTM2D.kernel_regularizer": true, + "tf.keras.layers.ConvLSTM2D.kernel_size": true, + "tf.keras.layers.ConvLSTM2D.losses": true, + "tf.keras.layers.ConvLSTM2D.metrics": true, + "tf.keras.layers.ConvLSTM2D.name": true, + "tf.keras.layers.ConvLSTM2D.name_scope": true, + "tf.keras.layers.ConvLSTM2D.non_trainable_weights": true, + "tf.keras.layers.ConvLSTM2D.output": true, + "tf.keras.layers.ConvLSTM2D.padding": true, + "tf.keras.layers.ConvLSTM2D.recurrent_activation": true, + "tf.keras.layers.ConvLSTM2D.recurrent_constraint": true, + "tf.keras.layers.ConvLSTM2D.recurrent_dropout": true, + "tf.keras.layers.ConvLSTM2D.recurrent_initializer": true, + "tf.keras.layers.ConvLSTM2D.recurrent_regularizer": true, + "tf.keras.layers.ConvLSTM2D.reset_states": true, + "tf.keras.layers.ConvLSTM2D.set_weights": true, + "tf.keras.layers.ConvLSTM2D.states": true, + "tf.keras.layers.ConvLSTM2D.strides": true, + "tf.keras.layers.ConvLSTM2D.submodules": true, + "tf.keras.layers.ConvLSTM2D.trainable": true, + "tf.keras.layers.ConvLSTM2D.trainable_weights": true, + "tf.keras.layers.ConvLSTM2D.unit_forget_bias": true, + "tf.keras.layers.ConvLSTM2D.use_bias": true, + "tf.keras.layers.ConvLSTM2D.weights": true, + "tf.keras.layers.ConvLSTM2D.with_name_scope": true, + "tf.keras.layers.Convolution1D": false, + "tf.keras.layers.Convolution1D.__call__": true, + "tf.keras.layers.Convolution1D.__eq__": true, + "tf.keras.layers.Convolution1D.__ge__": true, + "tf.keras.layers.Convolution1D.__gt__": true, + "tf.keras.layers.Convolution1D.__init__": true, + "tf.keras.layers.Convolution1D.__le__": true, + "tf.keras.layers.Convolution1D.__lt__": true, + "tf.keras.layers.Convolution1D.__ne__": true, + "tf.keras.layers.Convolution1D.__new__": true, + "tf.keras.layers.Convolution1D.activity_regularizer": true, + "tf.keras.layers.Convolution1D.add_loss": true, + "tf.keras.layers.Convolution1D.add_metric": true, + "tf.keras.layers.Convolution1D.add_weight": true, + "tf.keras.layers.Convolution1D.build": true, + "tf.keras.layers.Convolution1D.call": true, + "tf.keras.layers.Convolution1D.compute_mask": true, + "tf.keras.layers.Convolution1D.compute_output_shape": true, + "tf.keras.layers.Convolution1D.compute_output_signature": true, + "tf.keras.layers.Convolution1D.count_params": true, + "tf.keras.layers.Convolution1D.dtype": true, + "tf.keras.layers.Convolution1D.dynamic": true, + "tf.keras.layers.Convolution1D.from_config": true, + "tf.keras.layers.Convolution1D.get_config": true, + "tf.keras.layers.Convolution1D.get_weights": true, + "tf.keras.layers.Convolution1D.input": true, + "tf.keras.layers.Convolution1D.input_spec": true, + "tf.keras.layers.Convolution1D.losses": true, + "tf.keras.layers.Convolution1D.metrics": true, + "tf.keras.layers.Convolution1D.name": true, + "tf.keras.layers.Convolution1D.name_scope": true, + "tf.keras.layers.Convolution1D.non_trainable_weights": true, + "tf.keras.layers.Convolution1D.output": true, + "tf.keras.layers.Convolution1D.set_weights": true, + "tf.keras.layers.Convolution1D.submodules": true, + "tf.keras.layers.Convolution1D.trainable": true, + "tf.keras.layers.Convolution1D.trainable_weights": true, + "tf.keras.layers.Convolution1D.weights": true, + "tf.keras.layers.Convolution1D.with_name_scope": true, + "tf.keras.layers.Convolution2D": false, + "tf.keras.layers.Convolution2D.__call__": true, + "tf.keras.layers.Convolution2D.__eq__": true, + 
"tf.keras.layers.Convolution2D.__ge__": true, + "tf.keras.layers.Convolution2D.__gt__": true, + "tf.keras.layers.Convolution2D.__init__": true, + "tf.keras.layers.Convolution2D.__le__": true, + "tf.keras.layers.Convolution2D.__lt__": true, + "tf.keras.layers.Convolution2D.__ne__": true, + "tf.keras.layers.Convolution2D.__new__": true, + "tf.keras.layers.Convolution2D.activity_regularizer": true, + "tf.keras.layers.Convolution2D.add_loss": true, + "tf.keras.layers.Convolution2D.add_metric": true, + "tf.keras.layers.Convolution2D.add_weight": true, + "tf.keras.layers.Convolution2D.build": true, + "tf.keras.layers.Convolution2D.call": true, + "tf.keras.layers.Convolution2D.compute_mask": true, + "tf.keras.layers.Convolution2D.compute_output_shape": true, + "tf.keras.layers.Convolution2D.compute_output_signature": true, + "tf.keras.layers.Convolution2D.count_params": true, + "tf.keras.layers.Convolution2D.dtype": true, + "tf.keras.layers.Convolution2D.dynamic": true, + "tf.keras.layers.Convolution2D.from_config": true, + "tf.keras.layers.Convolution2D.get_config": true, + "tf.keras.layers.Convolution2D.get_weights": true, + "tf.keras.layers.Convolution2D.input": true, + "tf.keras.layers.Convolution2D.input_spec": true, + "tf.keras.layers.Convolution2D.losses": true, + "tf.keras.layers.Convolution2D.metrics": true, + "tf.keras.layers.Convolution2D.name": true, + "tf.keras.layers.Convolution2D.name_scope": true, + "tf.keras.layers.Convolution2D.non_trainable_weights": true, + "tf.keras.layers.Convolution2D.output": true, + "tf.keras.layers.Convolution2D.set_weights": true, + "tf.keras.layers.Convolution2D.submodules": true, + "tf.keras.layers.Convolution2D.trainable": true, + "tf.keras.layers.Convolution2D.trainable_weights": true, + "tf.keras.layers.Convolution2D.weights": true, + "tf.keras.layers.Convolution2D.with_name_scope": true, + "tf.keras.layers.Convolution2DTranspose": false, + "tf.keras.layers.Convolution2DTranspose.__call__": true, + "tf.keras.layers.Convolution2DTranspose.__eq__": true, + "tf.keras.layers.Convolution2DTranspose.__ge__": true, + "tf.keras.layers.Convolution2DTranspose.__gt__": true, + "tf.keras.layers.Convolution2DTranspose.__init__": true, + "tf.keras.layers.Convolution2DTranspose.__le__": true, + "tf.keras.layers.Convolution2DTranspose.__lt__": true, + "tf.keras.layers.Convolution2DTranspose.__ne__": true, + "tf.keras.layers.Convolution2DTranspose.__new__": true, + "tf.keras.layers.Convolution2DTranspose.activity_regularizer": true, + "tf.keras.layers.Convolution2DTranspose.add_loss": true, + "tf.keras.layers.Convolution2DTranspose.add_metric": true, + "tf.keras.layers.Convolution2DTranspose.add_weight": true, + "tf.keras.layers.Convolution2DTranspose.build": true, + "tf.keras.layers.Convolution2DTranspose.call": true, + "tf.keras.layers.Convolution2DTranspose.compute_mask": true, + "tf.keras.layers.Convolution2DTranspose.compute_output_shape": true, + "tf.keras.layers.Convolution2DTranspose.compute_output_signature": true, + "tf.keras.layers.Convolution2DTranspose.count_params": true, + "tf.keras.layers.Convolution2DTranspose.dtype": true, + "tf.keras.layers.Convolution2DTranspose.dynamic": true, + "tf.keras.layers.Convolution2DTranspose.from_config": true, + "tf.keras.layers.Convolution2DTranspose.get_config": true, + "tf.keras.layers.Convolution2DTranspose.get_weights": true, + "tf.keras.layers.Convolution2DTranspose.input": true, + "tf.keras.layers.Convolution2DTranspose.input_spec": true, + "tf.keras.layers.Convolution2DTranspose.losses": true, + 
"tf.keras.layers.Convolution2DTranspose.metrics": true, + "tf.keras.layers.Convolution2DTranspose.name": true, + "tf.keras.layers.Convolution2DTranspose.name_scope": true, + "tf.keras.layers.Convolution2DTranspose.non_trainable_weights": true, + "tf.keras.layers.Convolution2DTranspose.output": true, + "tf.keras.layers.Convolution2DTranspose.set_weights": true, + "tf.keras.layers.Convolution2DTranspose.submodules": true, + "tf.keras.layers.Convolution2DTranspose.trainable": true, + "tf.keras.layers.Convolution2DTranspose.trainable_weights": true, + "tf.keras.layers.Convolution2DTranspose.weights": true, + "tf.keras.layers.Convolution2DTranspose.with_name_scope": true, + "tf.keras.layers.Convolution3D": false, + "tf.keras.layers.Convolution3D.__call__": true, + "tf.keras.layers.Convolution3D.__eq__": true, + "tf.keras.layers.Convolution3D.__ge__": true, + "tf.keras.layers.Convolution3D.__gt__": true, + "tf.keras.layers.Convolution3D.__init__": true, + "tf.keras.layers.Convolution3D.__le__": true, + "tf.keras.layers.Convolution3D.__lt__": true, + "tf.keras.layers.Convolution3D.__ne__": true, + "tf.keras.layers.Convolution3D.__new__": true, + "tf.keras.layers.Convolution3D.activity_regularizer": true, + "tf.keras.layers.Convolution3D.add_loss": true, + "tf.keras.layers.Convolution3D.add_metric": true, + "tf.keras.layers.Convolution3D.add_weight": true, + "tf.keras.layers.Convolution3D.build": true, + "tf.keras.layers.Convolution3D.call": true, + "tf.keras.layers.Convolution3D.compute_mask": true, + "tf.keras.layers.Convolution3D.compute_output_shape": true, + "tf.keras.layers.Convolution3D.compute_output_signature": true, + "tf.keras.layers.Convolution3D.count_params": true, + "tf.keras.layers.Convolution3D.dtype": true, + "tf.keras.layers.Convolution3D.dynamic": true, + "tf.keras.layers.Convolution3D.from_config": true, + "tf.keras.layers.Convolution3D.get_config": true, + "tf.keras.layers.Convolution3D.get_weights": true, + "tf.keras.layers.Convolution3D.input": true, + "tf.keras.layers.Convolution3D.input_spec": true, + "tf.keras.layers.Convolution3D.losses": true, + "tf.keras.layers.Convolution3D.metrics": true, + "tf.keras.layers.Convolution3D.name": true, + "tf.keras.layers.Convolution3D.name_scope": true, + "tf.keras.layers.Convolution3D.non_trainable_weights": true, + "tf.keras.layers.Convolution3D.output": true, + "tf.keras.layers.Convolution3D.set_weights": true, + "tf.keras.layers.Convolution3D.submodules": true, + "tf.keras.layers.Convolution3D.trainable": true, + "tf.keras.layers.Convolution3D.trainable_weights": true, + "tf.keras.layers.Convolution3D.weights": true, + "tf.keras.layers.Convolution3D.with_name_scope": true, + "tf.keras.layers.Convolution3DTranspose": false, + "tf.keras.layers.Convolution3DTranspose.__call__": true, + "tf.keras.layers.Convolution3DTranspose.__eq__": true, + "tf.keras.layers.Convolution3DTranspose.__ge__": true, + "tf.keras.layers.Convolution3DTranspose.__gt__": true, + "tf.keras.layers.Convolution3DTranspose.__init__": true, + "tf.keras.layers.Convolution3DTranspose.__le__": true, + "tf.keras.layers.Convolution3DTranspose.__lt__": true, + "tf.keras.layers.Convolution3DTranspose.__ne__": true, + "tf.keras.layers.Convolution3DTranspose.__new__": true, + "tf.keras.layers.Convolution3DTranspose.activity_regularizer": true, + "tf.keras.layers.Convolution3DTranspose.add_loss": true, + "tf.keras.layers.Convolution3DTranspose.add_metric": true, + "tf.keras.layers.Convolution3DTranspose.add_weight": true, + "tf.keras.layers.Convolution3DTranspose.build": 
true, + "tf.keras.layers.Convolution3DTranspose.call": true, + "tf.keras.layers.Convolution3DTranspose.compute_mask": true, + "tf.keras.layers.Convolution3DTranspose.compute_output_shape": true, + "tf.keras.layers.Convolution3DTranspose.compute_output_signature": true, + "tf.keras.layers.Convolution3DTranspose.count_params": true, + "tf.keras.layers.Convolution3DTranspose.dtype": true, + "tf.keras.layers.Convolution3DTranspose.dynamic": true, + "tf.keras.layers.Convolution3DTranspose.from_config": true, + "tf.keras.layers.Convolution3DTranspose.get_config": true, + "tf.keras.layers.Convolution3DTranspose.get_weights": true, + "tf.keras.layers.Convolution3DTranspose.input": true, + "tf.keras.layers.Convolution3DTranspose.input_spec": true, + "tf.keras.layers.Convolution3DTranspose.losses": true, + "tf.keras.layers.Convolution3DTranspose.metrics": true, + "tf.keras.layers.Convolution3DTranspose.name": true, + "tf.keras.layers.Convolution3DTranspose.name_scope": true, + "tf.keras.layers.Convolution3DTranspose.non_trainable_weights": true, + "tf.keras.layers.Convolution3DTranspose.output": true, + "tf.keras.layers.Convolution3DTranspose.set_weights": true, + "tf.keras.layers.Convolution3DTranspose.submodules": true, + "tf.keras.layers.Convolution3DTranspose.trainable": true, + "tf.keras.layers.Convolution3DTranspose.trainable_weights": true, + "tf.keras.layers.Convolution3DTranspose.weights": true, + "tf.keras.layers.Convolution3DTranspose.with_name_scope": true, + "tf.keras.layers.Cropping1D": false, + "tf.keras.layers.Cropping1D.__call__": true, + "tf.keras.layers.Cropping1D.__eq__": true, + "tf.keras.layers.Cropping1D.__ge__": true, + "tf.keras.layers.Cropping1D.__gt__": true, + "tf.keras.layers.Cropping1D.__init__": true, + "tf.keras.layers.Cropping1D.__le__": true, + "tf.keras.layers.Cropping1D.__lt__": true, + "tf.keras.layers.Cropping1D.__ne__": true, + "tf.keras.layers.Cropping1D.__new__": true, + "tf.keras.layers.Cropping1D.activity_regularizer": true, + "tf.keras.layers.Cropping1D.add_loss": true, + "tf.keras.layers.Cropping1D.add_metric": true, + "tf.keras.layers.Cropping1D.add_weight": true, + "tf.keras.layers.Cropping1D.build": true, + "tf.keras.layers.Cropping1D.call": true, + "tf.keras.layers.Cropping1D.compute_mask": true, + "tf.keras.layers.Cropping1D.compute_output_shape": true, + "tf.keras.layers.Cropping1D.compute_output_signature": true, + "tf.keras.layers.Cropping1D.count_params": true, + "tf.keras.layers.Cropping1D.dtype": true, + "tf.keras.layers.Cropping1D.dynamic": true, + "tf.keras.layers.Cropping1D.from_config": true, + "tf.keras.layers.Cropping1D.get_config": true, + "tf.keras.layers.Cropping1D.get_weights": true, + "tf.keras.layers.Cropping1D.input": true, + "tf.keras.layers.Cropping1D.input_spec": true, + "tf.keras.layers.Cropping1D.losses": true, + "tf.keras.layers.Cropping1D.metrics": true, + "tf.keras.layers.Cropping1D.name": true, + "tf.keras.layers.Cropping1D.name_scope": true, + "tf.keras.layers.Cropping1D.non_trainable_weights": true, + "tf.keras.layers.Cropping1D.output": true, + "tf.keras.layers.Cropping1D.set_weights": true, + "tf.keras.layers.Cropping1D.submodules": true, + "tf.keras.layers.Cropping1D.trainable": true, + "tf.keras.layers.Cropping1D.trainable_weights": true, + "tf.keras.layers.Cropping1D.weights": true, + "tf.keras.layers.Cropping1D.with_name_scope": true, + "tf.keras.layers.Cropping2D": false, + "tf.keras.layers.Cropping2D.__call__": true, + "tf.keras.layers.Cropping2D.__eq__": true, + "tf.keras.layers.Cropping2D.__ge__": true, + 
"tf.keras.layers.Cropping2D.__gt__": true, + "tf.keras.layers.Cropping2D.__init__": true, + "tf.keras.layers.Cropping2D.__le__": true, + "tf.keras.layers.Cropping2D.__lt__": true, + "tf.keras.layers.Cropping2D.__ne__": true, + "tf.keras.layers.Cropping2D.__new__": true, + "tf.keras.layers.Cropping2D.activity_regularizer": true, + "tf.keras.layers.Cropping2D.add_loss": true, + "tf.keras.layers.Cropping2D.add_metric": true, + "tf.keras.layers.Cropping2D.add_weight": true, + "tf.keras.layers.Cropping2D.build": true, + "tf.keras.layers.Cropping2D.call": true, + "tf.keras.layers.Cropping2D.compute_mask": true, + "tf.keras.layers.Cropping2D.compute_output_shape": true, + "tf.keras.layers.Cropping2D.compute_output_signature": true, + "tf.keras.layers.Cropping2D.count_params": true, + "tf.keras.layers.Cropping2D.dtype": true, + "tf.keras.layers.Cropping2D.dynamic": true, + "tf.keras.layers.Cropping2D.from_config": true, + "tf.keras.layers.Cropping2D.get_config": true, + "tf.keras.layers.Cropping2D.get_weights": true, + "tf.keras.layers.Cropping2D.input": true, + "tf.keras.layers.Cropping2D.input_spec": true, + "tf.keras.layers.Cropping2D.losses": true, + "tf.keras.layers.Cropping2D.metrics": true, + "tf.keras.layers.Cropping2D.name": true, + "tf.keras.layers.Cropping2D.name_scope": true, + "tf.keras.layers.Cropping2D.non_trainable_weights": true, + "tf.keras.layers.Cropping2D.output": true, + "tf.keras.layers.Cropping2D.set_weights": true, + "tf.keras.layers.Cropping2D.submodules": true, + "tf.keras.layers.Cropping2D.trainable": true, + "tf.keras.layers.Cropping2D.trainable_weights": true, + "tf.keras.layers.Cropping2D.weights": true, + "tf.keras.layers.Cropping2D.with_name_scope": true, + "tf.keras.layers.Cropping3D": false, + "tf.keras.layers.Cropping3D.__call__": true, + "tf.keras.layers.Cropping3D.__eq__": true, + "tf.keras.layers.Cropping3D.__ge__": true, + "tf.keras.layers.Cropping3D.__gt__": true, + "tf.keras.layers.Cropping3D.__init__": true, + "tf.keras.layers.Cropping3D.__le__": true, + "tf.keras.layers.Cropping3D.__lt__": true, + "tf.keras.layers.Cropping3D.__ne__": true, + "tf.keras.layers.Cropping3D.__new__": true, + "tf.keras.layers.Cropping3D.activity_regularizer": true, + "tf.keras.layers.Cropping3D.add_loss": true, + "tf.keras.layers.Cropping3D.add_metric": true, + "tf.keras.layers.Cropping3D.add_weight": true, + "tf.keras.layers.Cropping3D.build": true, + "tf.keras.layers.Cropping3D.call": true, + "tf.keras.layers.Cropping3D.compute_mask": true, + "tf.keras.layers.Cropping3D.compute_output_shape": true, + "tf.keras.layers.Cropping3D.compute_output_signature": true, + "tf.keras.layers.Cropping3D.count_params": true, + "tf.keras.layers.Cropping3D.dtype": true, + "tf.keras.layers.Cropping3D.dynamic": true, + "tf.keras.layers.Cropping3D.from_config": true, + "tf.keras.layers.Cropping3D.get_config": true, + "tf.keras.layers.Cropping3D.get_weights": true, + "tf.keras.layers.Cropping3D.input": true, + "tf.keras.layers.Cropping3D.input_spec": true, + "tf.keras.layers.Cropping3D.losses": true, + "tf.keras.layers.Cropping3D.metrics": true, + "tf.keras.layers.Cropping3D.name": true, + "tf.keras.layers.Cropping3D.name_scope": true, + "tf.keras.layers.Cropping3D.non_trainable_weights": true, + "tf.keras.layers.Cropping3D.output": true, + "tf.keras.layers.Cropping3D.set_weights": true, + "tf.keras.layers.Cropping3D.submodules": true, + "tf.keras.layers.Cropping3D.trainable": true, + "tf.keras.layers.Cropping3D.trainable_weights": true, + "tf.keras.layers.Cropping3D.weights": true, + 
"tf.keras.layers.Cropping3D.with_name_scope": true, + "tf.keras.layers.Dense": false, + "tf.keras.layers.Dense.__call__": true, + "tf.keras.layers.Dense.__eq__": true, + "tf.keras.layers.Dense.__ge__": true, + "tf.keras.layers.Dense.__gt__": true, + "tf.keras.layers.Dense.__init__": true, + "tf.keras.layers.Dense.__le__": true, + "tf.keras.layers.Dense.__lt__": true, + "tf.keras.layers.Dense.__ne__": true, + "tf.keras.layers.Dense.__new__": true, + "tf.keras.layers.Dense.activity_regularizer": true, + "tf.keras.layers.Dense.add_loss": true, + "tf.keras.layers.Dense.add_metric": true, + "tf.keras.layers.Dense.add_weight": true, + "tf.keras.layers.Dense.build": true, + "tf.keras.layers.Dense.call": true, + "tf.keras.layers.Dense.compute_mask": true, + "tf.keras.layers.Dense.compute_output_shape": true, + "tf.keras.layers.Dense.compute_output_signature": true, + "tf.keras.layers.Dense.count_params": true, + "tf.keras.layers.Dense.dtype": true, + "tf.keras.layers.Dense.dynamic": true, + "tf.keras.layers.Dense.from_config": true, + "tf.keras.layers.Dense.get_config": true, + "tf.keras.layers.Dense.get_weights": true, + "tf.keras.layers.Dense.input": true, + "tf.keras.layers.Dense.input_spec": true, + "tf.keras.layers.Dense.losses": true, + "tf.keras.layers.Dense.metrics": true, + "tf.keras.layers.Dense.name": true, + "tf.keras.layers.Dense.name_scope": true, + "tf.keras.layers.Dense.non_trainable_weights": true, + "tf.keras.layers.Dense.output": true, + "tf.keras.layers.Dense.set_weights": true, + "tf.keras.layers.Dense.submodules": true, + "tf.keras.layers.Dense.trainable": true, + "tf.keras.layers.Dense.trainable_weights": true, + "tf.keras.layers.Dense.weights": true, + "tf.keras.layers.Dense.with_name_scope": true, + "tf.keras.layers.DenseFeatures": false, + "tf.keras.layers.DenseFeatures.__call__": true, + "tf.keras.layers.DenseFeatures.__eq__": true, + "tf.keras.layers.DenseFeatures.__ge__": true, + "tf.keras.layers.DenseFeatures.__gt__": true, + "tf.keras.layers.DenseFeatures.__init__": true, + "tf.keras.layers.DenseFeatures.__le__": true, + "tf.keras.layers.DenseFeatures.__lt__": true, + "tf.keras.layers.DenseFeatures.__ne__": true, + "tf.keras.layers.DenseFeatures.__new__": true, + "tf.keras.layers.DenseFeatures.activity_regularizer": true, + "tf.keras.layers.DenseFeatures.add_loss": true, + "tf.keras.layers.DenseFeatures.add_metric": true, + "tf.keras.layers.DenseFeatures.add_weight": true, + "tf.keras.layers.DenseFeatures.build": true, + "tf.keras.layers.DenseFeatures.call": true, + "tf.keras.layers.DenseFeatures.compute_mask": true, + "tf.keras.layers.DenseFeatures.compute_output_shape": true, + "tf.keras.layers.DenseFeatures.compute_output_signature": true, + "tf.keras.layers.DenseFeatures.count_params": true, + "tf.keras.layers.DenseFeatures.dtype": true, + "tf.keras.layers.DenseFeatures.dynamic": true, + "tf.keras.layers.DenseFeatures.from_config": true, + "tf.keras.layers.DenseFeatures.get_config": true, + "tf.keras.layers.DenseFeatures.get_weights": true, + "tf.keras.layers.DenseFeatures.input": true, + "tf.keras.layers.DenseFeatures.input_spec": true, + "tf.keras.layers.DenseFeatures.losses": true, + "tf.keras.layers.DenseFeatures.metrics": true, + "tf.keras.layers.DenseFeatures.name": true, + "tf.keras.layers.DenseFeatures.name_scope": true, + "tf.keras.layers.DenseFeatures.non_trainable_weights": true, + "tf.keras.layers.DenseFeatures.output": true, + "tf.keras.layers.DenseFeatures.set_weights": true, + "tf.keras.layers.DenseFeatures.submodules": true, + 
"tf.keras.layers.DenseFeatures.trainable": true, + "tf.keras.layers.DenseFeatures.trainable_weights": true, + "tf.keras.layers.DenseFeatures.weights": true, + "tf.keras.layers.DenseFeatures.with_name_scope": true, + "tf.keras.layers.DepthwiseConv2D": false, + "tf.keras.layers.DepthwiseConv2D.__call__": true, + "tf.keras.layers.DepthwiseConv2D.__eq__": true, + "tf.keras.layers.DepthwiseConv2D.__ge__": true, + "tf.keras.layers.DepthwiseConv2D.__gt__": true, + "tf.keras.layers.DepthwiseConv2D.__init__": true, + "tf.keras.layers.DepthwiseConv2D.__le__": true, + "tf.keras.layers.DepthwiseConv2D.__lt__": true, + "tf.keras.layers.DepthwiseConv2D.__ne__": true, + "tf.keras.layers.DepthwiseConv2D.__new__": true, + "tf.keras.layers.DepthwiseConv2D.activity_regularizer": true, + "tf.keras.layers.DepthwiseConv2D.add_loss": true, + "tf.keras.layers.DepthwiseConv2D.add_metric": true, + "tf.keras.layers.DepthwiseConv2D.add_weight": true, + "tf.keras.layers.DepthwiseConv2D.build": true, + "tf.keras.layers.DepthwiseConv2D.call": true, + "tf.keras.layers.DepthwiseConv2D.compute_mask": true, + "tf.keras.layers.DepthwiseConv2D.compute_output_shape": true, + "tf.keras.layers.DepthwiseConv2D.compute_output_signature": true, + "tf.keras.layers.DepthwiseConv2D.count_params": true, + "tf.keras.layers.DepthwiseConv2D.dtype": true, + "tf.keras.layers.DepthwiseConv2D.dynamic": true, + "tf.keras.layers.DepthwiseConv2D.from_config": true, + "tf.keras.layers.DepthwiseConv2D.get_config": true, + "tf.keras.layers.DepthwiseConv2D.get_weights": true, + "tf.keras.layers.DepthwiseConv2D.input": true, + "tf.keras.layers.DepthwiseConv2D.input_spec": true, + "tf.keras.layers.DepthwiseConv2D.losses": true, + "tf.keras.layers.DepthwiseConv2D.metrics": true, + "tf.keras.layers.DepthwiseConv2D.name": true, + "tf.keras.layers.DepthwiseConv2D.name_scope": true, + "tf.keras.layers.DepthwiseConv2D.non_trainable_weights": true, + "tf.keras.layers.DepthwiseConv2D.output": true, + "tf.keras.layers.DepthwiseConv2D.set_weights": true, + "tf.keras.layers.DepthwiseConv2D.submodules": true, + "tf.keras.layers.DepthwiseConv2D.trainable": true, + "tf.keras.layers.DepthwiseConv2D.trainable_weights": true, + "tf.keras.layers.DepthwiseConv2D.weights": true, + "tf.keras.layers.DepthwiseConv2D.with_name_scope": true, + "tf.keras.layers.Dot": false, + "tf.keras.layers.Dot.__call__": true, + "tf.keras.layers.Dot.__eq__": true, + "tf.keras.layers.Dot.__ge__": true, + "tf.keras.layers.Dot.__gt__": true, + "tf.keras.layers.Dot.__init__": true, + "tf.keras.layers.Dot.__le__": true, + "tf.keras.layers.Dot.__lt__": true, + "tf.keras.layers.Dot.__ne__": true, + "tf.keras.layers.Dot.__new__": true, + "tf.keras.layers.Dot.activity_regularizer": true, + "tf.keras.layers.Dot.add_loss": true, + "tf.keras.layers.Dot.add_metric": true, + "tf.keras.layers.Dot.add_weight": true, + "tf.keras.layers.Dot.build": true, + "tf.keras.layers.Dot.call": true, + "tf.keras.layers.Dot.compute_mask": true, + "tf.keras.layers.Dot.compute_output_shape": true, + "tf.keras.layers.Dot.compute_output_signature": true, + "tf.keras.layers.Dot.count_params": true, + "tf.keras.layers.Dot.dtype": true, + "tf.keras.layers.Dot.dynamic": true, + "tf.keras.layers.Dot.from_config": true, + "tf.keras.layers.Dot.get_config": true, + "tf.keras.layers.Dot.get_weights": true, + "tf.keras.layers.Dot.input": true, + "tf.keras.layers.Dot.input_spec": true, + "tf.keras.layers.Dot.losses": true, + "tf.keras.layers.Dot.metrics": true, + "tf.keras.layers.Dot.name": true, + "tf.keras.layers.Dot.name_scope": 
true, + "tf.keras.layers.Dot.non_trainable_weights": true, + "tf.keras.layers.Dot.output": true, + "tf.keras.layers.Dot.set_weights": true, + "tf.keras.layers.Dot.submodules": true, + "tf.keras.layers.Dot.trainable": true, + "tf.keras.layers.Dot.trainable_weights": true, + "tf.keras.layers.Dot.weights": true, + "tf.keras.layers.Dot.with_name_scope": true, + "tf.keras.layers.Dropout": false, + "tf.keras.layers.Dropout.__call__": true, + "tf.keras.layers.Dropout.__eq__": true, + "tf.keras.layers.Dropout.__ge__": true, + "tf.keras.layers.Dropout.__gt__": true, + "tf.keras.layers.Dropout.__init__": true, + "tf.keras.layers.Dropout.__le__": true, + "tf.keras.layers.Dropout.__lt__": true, + "tf.keras.layers.Dropout.__ne__": true, + "tf.keras.layers.Dropout.__new__": true, + "tf.keras.layers.Dropout.activity_regularizer": true, + "tf.keras.layers.Dropout.add_loss": true, + "tf.keras.layers.Dropout.add_metric": true, + "tf.keras.layers.Dropout.add_weight": true, + "tf.keras.layers.Dropout.build": true, + "tf.keras.layers.Dropout.call": true, + "tf.keras.layers.Dropout.compute_mask": true, + "tf.keras.layers.Dropout.compute_output_shape": true, + "tf.keras.layers.Dropout.compute_output_signature": true, + "tf.keras.layers.Dropout.count_params": true, + "tf.keras.layers.Dropout.dtype": true, + "tf.keras.layers.Dropout.dynamic": true, + "tf.keras.layers.Dropout.from_config": true, + "tf.keras.layers.Dropout.get_config": true, + "tf.keras.layers.Dropout.get_weights": true, + "tf.keras.layers.Dropout.input": true, + "tf.keras.layers.Dropout.input_spec": true, + "tf.keras.layers.Dropout.losses": true, + "tf.keras.layers.Dropout.metrics": true, + "tf.keras.layers.Dropout.name": true, + "tf.keras.layers.Dropout.name_scope": true, + "tf.keras.layers.Dropout.non_trainable_weights": true, + "tf.keras.layers.Dropout.output": true, + "tf.keras.layers.Dropout.set_weights": true, + "tf.keras.layers.Dropout.submodules": true, + "tf.keras.layers.Dropout.trainable": true, + "tf.keras.layers.Dropout.trainable_weights": true, + "tf.keras.layers.Dropout.weights": true, + "tf.keras.layers.Dropout.with_name_scope": true, + "tf.keras.layers.ELU": false, + "tf.keras.layers.ELU.__call__": true, + "tf.keras.layers.ELU.__eq__": true, + "tf.keras.layers.ELU.__ge__": true, + "tf.keras.layers.ELU.__gt__": true, + "tf.keras.layers.ELU.__init__": true, + "tf.keras.layers.ELU.__le__": true, + "tf.keras.layers.ELU.__lt__": true, + "tf.keras.layers.ELU.__ne__": true, + "tf.keras.layers.ELU.__new__": true, + "tf.keras.layers.ELU.activity_regularizer": true, + "tf.keras.layers.ELU.add_loss": true, + "tf.keras.layers.ELU.add_metric": true, + "tf.keras.layers.ELU.add_weight": true, + "tf.keras.layers.ELU.build": true, + "tf.keras.layers.ELU.call": true, + "tf.keras.layers.ELU.compute_mask": true, + "tf.keras.layers.ELU.compute_output_shape": true, + "tf.keras.layers.ELU.compute_output_signature": true, + "tf.keras.layers.ELU.count_params": true, + "tf.keras.layers.ELU.dtype": true, + "tf.keras.layers.ELU.dynamic": true, + "tf.keras.layers.ELU.from_config": true, + "tf.keras.layers.ELU.get_config": true, + "tf.keras.layers.ELU.get_weights": true, + "tf.keras.layers.ELU.input": true, + "tf.keras.layers.ELU.input_spec": true, + "tf.keras.layers.ELU.losses": true, + "tf.keras.layers.ELU.metrics": true, + "tf.keras.layers.ELU.name": true, + "tf.keras.layers.ELU.name_scope": true, + "tf.keras.layers.ELU.non_trainable_weights": true, + "tf.keras.layers.ELU.output": true, + "tf.keras.layers.ELU.set_weights": true, + 
"tf.keras.layers.ELU.submodules": true, + "tf.keras.layers.ELU.trainable": true, + "tf.keras.layers.ELU.trainable_weights": true, + "tf.keras.layers.ELU.weights": true, + "tf.keras.layers.ELU.with_name_scope": true, + "tf.keras.layers.Embedding": false, + "tf.keras.layers.Embedding.__call__": true, + "tf.keras.layers.Embedding.__eq__": true, + "tf.keras.layers.Embedding.__ge__": true, + "tf.keras.layers.Embedding.__gt__": true, + "tf.keras.layers.Embedding.__init__": true, + "tf.keras.layers.Embedding.__le__": true, + "tf.keras.layers.Embedding.__lt__": true, + "tf.keras.layers.Embedding.__ne__": true, + "tf.keras.layers.Embedding.__new__": true, + "tf.keras.layers.Embedding.activity_regularizer": true, + "tf.keras.layers.Embedding.add_loss": true, + "tf.keras.layers.Embedding.add_metric": true, + "tf.keras.layers.Embedding.add_weight": true, + "tf.keras.layers.Embedding.build": true, + "tf.keras.layers.Embedding.call": true, + "tf.keras.layers.Embedding.compute_mask": true, + "tf.keras.layers.Embedding.compute_output_shape": true, + "tf.keras.layers.Embedding.compute_output_signature": true, + "tf.keras.layers.Embedding.count_params": true, + "tf.keras.layers.Embedding.dtype": true, + "tf.keras.layers.Embedding.dynamic": true, + "tf.keras.layers.Embedding.from_config": true, + "tf.keras.layers.Embedding.get_config": true, + "tf.keras.layers.Embedding.get_weights": true, + "tf.keras.layers.Embedding.input": true, + "tf.keras.layers.Embedding.input_spec": true, + "tf.keras.layers.Embedding.losses": true, + "tf.keras.layers.Embedding.metrics": true, + "tf.keras.layers.Embedding.name": true, + "tf.keras.layers.Embedding.name_scope": true, + "tf.keras.layers.Embedding.non_trainable_weights": true, + "tf.keras.layers.Embedding.output": true, + "tf.keras.layers.Embedding.set_weights": true, + "tf.keras.layers.Embedding.submodules": true, + "tf.keras.layers.Embedding.trainable": true, + "tf.keras.layers.Embedding.trainable_weights": true, + "tf.keras.layers.Embedding.weights": true, + "tf.keras.layers.Embedding.with_name_scope": true, + "tf.keras.layers.Flatten": false, + "tf.keras.layers.Flatten.__call__": true, + "tf.keras.layers.Flatten.__eq__": true, + "tf.keras.layers.Flatten.__ge__": true, + "tf.keras.layers.Flatten.__gt__": true, + "tf.keras.layers.Flatten.__init__": true, + "tf.keras.layers.Flatten.__le__": true, + "tf.keras.layers.Flatten.__lt__": true, + "tf.keras.layers.Flatten.__ne__": true, + "tf.keras.layers.Flatten.__new__": true, + "tf.keras.layers.Flatten.activity_regularizer": true, + "tf.keras.layers.Flatten.add_loss": true, + "tf.keras.layers.Flatten.add_metric": true, + "tf.keras.layers.Flatten.add_weight": true, + "tf.keras.layers.Flatten.build": true, + "tf.keras.layers.Flatten.call": true, + "tf.keras.layers.Flatten.compute_mask": true, + "tf.keras.layers.Flatten.compute_output_shape": true, + "tf.keras.layers.Flatten.compute_output_signature": true, + "tf.keras.layers.Flatten.count_params": true, + "tf.keras.layers.Flatten.dtype": true, + "tf.keras.layers.Flatten.dynamic": true, + "tf.keras.layers.Flatten.from_config": true, + "tf.keras.layers.Flatten.get_config": true, + "tf.keras.layers.Flatten.get_weights": true, + "tf.keras.layers.Flatten.input": true, + "tf.keras.layers.Flatten.input_spec": true, + "tf.keras.layers.Flatten.losses": true, + "tf.keras.layers.Flatten.metrics": true, + "tf.keras.layers.Flatten.name": true, + "tf.keras.layers.Flatten.name_scope": true, + "tf.keras.layers.Flatten.non_trainable_weights": true, + "tf.keras.layers.Flatten.output": true, + 
"tf.keras.layers.Flatten.set_weights": true, + "tf.keras.layers.Flatten.submodules": true, + "tf.keras.layers.Flatten.trainable": true, + "tf.keras.layers.Flatten.trainable_weights": true, + "tf.keras.layers.Flatten.weights": true, + "tf.keras.layers.Flatten.with_name_scope": true, + "tf.keras.layers.GRU": false, + "tf.keras.layers.GRU.__call__": true, + "tf.keras.layers.GRU.__eq__": true, + "tf.keras.layers.GRU.__ge__": true, + "tf.keras.layers.GRU.__gt__": true, + "tf.keras.layers.GRU.__init__": true, + "tf.keras.layers.GRU.__le__": true, + "tf.keras.layers.GRU.__lt__": true, + "tf.keras.layers.GRU.__ne__": true, + "tf.keras.layers.GRU.__new__": true, + "tf.keras.layers.GRU.activation": true, + "tf.keras.layers.GRU.activity_regularizer": true, + "tf.keras.layers.GRU.add_loss": true, + "tf.keras.layers.GRU.add_metric": true, + "tf.keras.layers.GRU.add_weight": true, + "tf.keras.layers.GRU.bias_constraint": true, + "tf.keras.layers.GRU.bias_initializer": true, + "tf.keras.layers.GRU.bias_regularizer": true, + "tf.keras.layers.GRU.build": true, + "tf.keras.layers.GRU.call": true, + "tf.keras.layers.GRU.compute_mask": true, + "tf.keras.layers.GRU.compute_output_shape": true, + "tf.keras.layers.GRU.compute_output_signature": true, + "tf.keras.layers.GRU.count_params": true, + "tf.keras.layers.GRU.dropout": true, + "tf.keras.layers.GRU.dtype": true, + "tf.keras.layers.GRU.dynamic": true, + "tf.keras.layers.GRU.from_config": true, + "tf.keras.layers.GRU.get_config": true, + "tf.keras.layers.GRU.get_dropout_mask_for_cell": true, + "tf.keras.layers.GRU.get_recurrent_dropout_mask_for_cell": true, + "tf.keras.layers.GRU.get_weights": true, + "tf.keras.layers.GRU.implementation": true, + "tf.keras.layers.GRU.input": true, + "tf.keras.layers.GRU.input_spec": true, + "tf.keras.layers.GRU.kernel_constraint": true, + "tf.keras.layers.GRU.kernel_initializer": true, + "tf.keras.layers.GRU.kernel_regularizer": true, + "tf.keras.layers.GRU.losses": true, + "tf.keras.layers.GRU.metrics": true, + "tf.keras.layers.GRU.name": true, + "tf.keras.layers.GRU.name_scope": true, + "tf.keras.layers.GRU.non_trainable_weights": true, + "tf.keras.layers.GRU.output": true, + "tf.keras.layers.GRU.recurrent_activation": true, + "tf.keras.layers.GRU.recurrent_constraint": true, + "tf.keras.layers.GRU.recurrent_dropout": true, + "tf.keras.layers.GRU.recurrent_initializer": true, + "tf.keras.layers.GRU.recurrent_regularizer": true, + "tf.keras.layers.GRU.reset_after": true, + "tf.keras.layers.GRU.reset_dropout_mask": true, + "tf.keras.layers.GRU.reset_recurrent_dropout_mask": true, + "tf.keras.layers.GRU.reset_states": true, + "tf.keras.layers.GRU.set_weights": true, + "tf.keras.layers.GRU.states": true, + "tf.keras.layers.GRU.submodules": true, + "tf.keras.layers.GRU.trainable": true, + "tf.keras.layers.GRU.trainable_weights": true, + "tf.keras.layers.GRU.units": true, + "tf.keras.layers.GRU.use_bias": true, + "tf.keras.layers.GRU.weights": true, + "tf.keras.layers.GRU.with_name_scope": true, + "tf.keras.layers.GRUCell": false, + "tf.keras.layers.GRUCell.__call__": true, + "tf.keras.layers.GRUCell.__eq__": true, + "tf.keras.layers.GRUCell.__ge__": true, + "tf.keras.layers.GRUCell.__gt__": true, + "tf.keras.layers.GRUCell.__init__": true, + "tf.keras.layers.GRUCell.__le__": true, + "tf.keras.layers.GRUCell.__lt__": true, + "tf.keras.layers.GRUCell.__ne__": true, + "tf.keras.layers.GRUCell.__new__": true, + "tf.keras.layers.GRUCell.activity_regularizer": true, + "tf.keras.layers.GRUCell.add_loss": true, + 
"tf.keras.layers.GRUCell.add_metric": true, + "tf.keras.layers.GRUCell.add_weight": true, + "tf.keras.layers.GRUCell.build": true, + "tf.keras.layers.GRUCell.call": true, + "tf.keras.layers.GRUCell.compute_mask": true, + "tf.keras.layers.GRUCell.compute_output_shape": true, + "tf.keras.layers.GRUCell.compute_output_signature": true, + "tf.keras.layers.GRUCell.count_params": true, + "tf.keras.layers.GRUCell.dtype": true, + "tf.keras.layers.GRUCell.dynamic": true, + "tf.keras.layers.GRUCell.from_config": true, + "tf.keras.layers.GRUCell.get_config": true, + "tf.keras.layers.GRUCell.get_dropout_mask_for_cell": true, + "tf.keras.layers.GRUCell.get_initial_state": true, + "tf.keras.layers.GRUCell.get_recurrent_dropout_mask_for_cell": true, + "tf.keras.layers.GRUCell.get_weights": true, + "tf.keras.layers.GRUCell.input": true, + "tf.keras.layers.GRUCell.input_spec": true, + "tf.keras.layers.GRUCell.losses": true, + "tf.keras.layers.GRUCell.metrics": true, + "tf.keras.layers.GRUCell.name": true, + "tf.keras.layers.GRUCell.name_scope": true, + "tf.keras.layers.GRUCell.non_trainable_weights": true, + "tf.keras.layers.GRUCell.output": true, + "tf.keras.layers.GRUCell.reset_dropout_mask": true, + "tf.keras.layers.GRUCell.reset_recurrent_dropout_mask": true, + "tf.keras.layers.GRUCell.set_weights": true, + "tf.keras.layers.GRUCell.submodules": true, + "tf.keras.layers.GRUCell.trainable": true, + "tf.keras.layers.GRUCell.trainable_weights": true, + "tf.keras.layers.GRUCell.weights": true, + "tf.keras.layers.GRUCell.with_name_scope": true, + "tf.keras.layers.GaussianDropout": false, + "tf.keras.layers.GaussianDropout.__call__": true, + "tf.keras.layers.GaussianDropout.__eq__": true, + "tf.keras.layers.GaussianDropout.__ge__": true, + "tf.keras.layers.GaussianDropout.__gt__": true, + "tf.keras.layers.GaussianDropout.__init__": true, + "tf.keras.layers.GaussianDropout.__le__": true, + "tf.keras.layers.GaussianDropout.__lt__": true, + "tf.keras.layers.GaussianDropout.__ne__": true, + "tf.keras.layers.GaussianDropout.__new__": true, + "tf.keras.layers.GaussianDropout.activity_regularizer": true, + "tf.keras.layers.GaussianDropout.add_loss": true, + "tf.keras.layers.GaussianDropout.add_metric": true, + "tf.keras.layers.GaussianDropout.add_weight": true, + "tf.keras.layers.GaussianDropout.build": true, + "tf.keras.layers.GaussianDropout.call": true, + "tf.keras.layers.GaussianDropout.compute_mask": true, + "tf.keras.layers.GaussianDropout.compute_output_shape": true, + "tf.keras.layers.GaussianDropout.compute_output_signature": true, + "tf.keras.layers.GaussianDropout.count_params": true, + "tf.keras.layers.GaussianDropout.dtype": true, + "tf.keras.layers.GaussianDropout.dynamic": true, + "tf.keras.layers.GaussianDropout.from_config": true, + "tf.keras.layers.GaussianDropout.get_config": true, + "tf.keras.layers.GaussianDropout.get_weights": true, + "tf.keras.layers.GaussianDropout.input": true, + "tf.keras.layers.GaussianDropout.input_spec": true, + "tf.keras.layers.GaussianDropout.losses": true, + "tf.keras.layers.GaussianDropout.metrics": true, + "tf.keras.layers.GaussianDropout.name": true, + "tf.keras.layers.GaussianDropout.name_scope": true, + "tf.keras.layers.GaussianDropout.non_trainable_weights": true, + "tf.keras.layers.GaussianDropout.output": true, + "tf.keras.layers.GaussianDropout.set_weights": true, + "tf.keras.layers.GaussianDropout.submodules": true, + "tf.keras.layers.GaussianDropout.trainable": true, + "tf.keras.layers.GaussianDropout.trainable_weights": true, + 
"tf.keras.layers.GaussianDropout.weights": true, + "tf.keras.layers.GaussianDropout.with_name_scope": true, + "tf.keras.layers.GaussianNoise": false, + "tf.keras.layers.GaussianNoise.__call__": true, + "tf.keras.layers.GaussianNoise.__eq__": true, + "tf.keras.layers.GaussianNoise.__ge__": true, + "tf.keras.layers.GaussianNoise.__gt__": true, + "tf.keras.layers.GaussianNoise.__init__": true, + "tf.keras.layers.GaussianNoise.__le__": true, + "tf.keras.layers.GaussianNoise.__lt__": true, + "tf.keras.layers.GaussianNoise.__ne__": true, + "tf.keras.layers.GaussianNoise.__new__": true, + "tf.keras.layers.GaussianNoise.activity_regularizer": true, + "tf.keras.layers.GaussianNoise.add_loss": true, + "tf.keras.layers.GaussianNoise.add_metric": true, + "tf.keras.layers.GaussianNoise.add_weight": true, + "tf.keras.layers.GaussianNoise.build": true, + "tf.keras.layers.GaussianNoise.call": true, + "tf.keras.layers.GaussianNoise.compute_mask": true, + "tf.keras.layers.GaussianNoise.compute_output_shape": true, + "tf.keras.layers.GaussianNoise.compute_output_signature": true, + "tf.keras.layers.GaussianNoise.count_params": true, + "tf.keras.layers.GaussianNoise.dtype": true, + "tf.keras.layers.GaussianNoise.dynamic": true, + "tf.keras.layers.GaussianNoise.from_config": true, + "tf.keras.layers.GaussianNoise.get_config": true, + "tf.keras.layers.GaussianNoise.get_weights": true, + "tf.keras.layers.GaussianNoise.input": true, + "tf.keras.layers.GaussianNoise.input_spec": true, + "tf.keras.layers.GaussianNoise.losses": true, + "tf.keras.layers.GaussianNoise.metrics": true, + "tf.keras.layers.GaussianNoise.name": true, + "tf.keras.layers.GaussianNoise.name_scope": true, + "tf.keras.layers.GaussianNoise.non_trainable_weights": true, + "tf.keras.layers.GaussianNoise.output": true, + "tf.keras.layers.GaussianNoise.set_weights": true, + "tf.keras.layers.GaussianNoise.submodules": true, + "tf.keras.layers.GaussianNoise.trainable": true, + "tf.keras.layers.GaussianNoise.trainable_weights": true, + "tf.keras.layers.GaussianNoise.weights": true, + "tf.keras.layers.GaussianNoise.with_name_scope": true, + "tf.keras.layers.GlobalAveragePooling1D": false, + "tf.keras.layers.GlobalAveragePooling1D.__call__": true, + "tf.keras.layers.GlobalAveragePooling1D.__eq__": true, + "tf.keras.layers.GlobalAveragePooling1D.__ge__": true, + "tf.keras.layers.GlobalAveragePooling1D.__gt__": true, + "tf.keras.layers.GlobalAveragePooling1D.__init__": true, + "tf.keras.layers.GlobalAveragePooling1D.__le__": true, + "tf.keras.layers.GlobalAveragePooling1D.__lt__": true, + "tf.keras.layers.GlobalAveragePooling1D.__ne__": true, + "tf.keras.layers.GlobalAveragePooling1D.__new__": true, + "tf.keras.layers.GlobalAveragePooling1D.activity_regularizer": true, + "tf.keras.layers.GlobalAveragePooling1D.add_loss": true, + "tf.keras.layers.GlobalAveragePooling1D.add_metric": true, + "tf.keras.layers.GlobalAveragePooling1D.add_weight": true, + "tf.keras.layers.GlobalAveragePooling1D.build": true, + "tf.keras.layers.GlobalAveragePooling1D.call": true, + "tf.keras.layers.GlobalAveragePooling1D.compute_mask": true, + "tf.keras.layers.GlobalAveragePooling1D.compute_output_shape": true, + "tf.keras.layers.GlobalAveragePooling1D.compute_output_signature": true, + "tf.keras.layers.GlobalAveragePooling1D.count_params": true, + "tf.keras.layers.GlobalAveragePooling1D.dtype": true, + "tf.keras.layers.GlobalAveragePooling1D.dynamic": true, + "tf.keras.layers.GlobalAveragePooling1D.from_config": true, + "tf.keras.layers.GlobalAveragePooling1D.get_config": true, + 
"tf.keras.layers.GlobalAveragePooling1D.get_weights": true, + "tf.keras.layers.GlobalAveragePooling1D.input": true, + "tf.keras.layers.GlobalAveragePooling1D.input_spec": true, + "tf.keras.layers.GlobalAveragePooling1D.losses": true, + "tf.keras.layers.GlobalAveragePooling1D.metrics": true, + "tf.keras.layers.GlobalAveragePooling1D.name": true, + "tf.keras.layers.GlobalAveragePooling1D.name_scope": true, + "tf.keras.layers.GlobalAveragePooling1D.non_trainable_weights": true, + "tf.keras.layers.GlobalAveragePooling1D.output": true, + "tf.keras.layers.GlobalAveragePooling1D.set_weights": true, + "tf.keras.layers.GlobalAveragePooling1D.submodules": true, + "tf.keras.layers.GlobalAveragePooling1D.trainable": true, + "tf.keras.layers.GlobalAveragePooling1D.trainable_weights": true, + "tf.keras.layers.GlobalAveragePooling1D.weights": true, + "tf.keras.layers.GlobalAveragePooling1D.with_name_scope": true, + "tf.keras.layers.GlobalAveragePooling2D": false, + "tf.keras.layers.GlobalAveragePooling2D.__call__": true, + "tf.keras.layers.GlobalAveragePooling2D.__eq__": true, + "tf.keras.layers.GlobalAveragePooling2D.__ge__": true, + "tf.keras.layers.GlobalAveragePooling2D.__gt__": true, + "tf.keras.layers.GlobalAveragePooling2D.__init__": true, + "tf.keras.layers.GlobalAveragePooling2D.__le__": true, + "tf.keras.layers.GlobalAveragePooling2D.__lt__": true, + "tf.keras.layers.GlobalAveragePooling2D.__ne__": true, + "tf.keras.layers.GlobalAveragePooling2D.__new__": true, + "tf.keras.layers.GlobalAveragePooling2D.activity_regularizer": true, + "tf.keras.layers.GlobalAveragePooling2D.add_loss": true, + "tf.keras.layers.GlobalAveragePooling2D.add_metric": true, + "tf.keras.layers.GlobalAveragePooling2D.add_weight": true, + "tf.keras.layers.GlobalAveragePooling2D.build": true, + "tf.keras.layers.GlobalAveragePooling2D.call": true, + "tf.keras.layers.GlobalAveragePooling2D.compute_mask": true, + "tf.keras.layers.GlobalAveragePooling2D.compute_output_shape": true, + "tf.keras.layers.GlobalAveragePooling2D.compute_output_signature": true, + "tf.keras.layers.GlobalAveragePooling2D.count_params": true, + "tf.keras.layers.GlobalAveragePooling2D.dtype": true, + "tf.keras.layers.GlobalAveragePooling2D.dynamic": true, + "tf.keras.layers.GlobalAveragePooling2D.from_config": true, + "tf.keras.layers.GlobalAveragePooling2D.get_config": true, + "tf.keras.layers.GlobalAveragePooling2D.get_weights": true, + "tf.keras.layers.GlobalAveragePooling2D.input": true, + "tf.keras.layers.GlobalAveragePooling2D.input_spec": true, + "tf.keras.layers.GlobalAveragePooling2D.losses": true, + "tf.keras.layers.GlobalAveragePooling2D.metrics": true, + "tf.keras.layers.GlobalAveragePooling2D.name": true, + "tf.keras.layers.GlobalAveragePooling2D.name_scope": true, + "tf.keras.layers.GlobalAveragePooling2D.non_trainable_weights": true, + "tf.keras.layers.GlobalAveragePooling2D.output": true, + "tf.keras.layers.GlobalAveragePooling2D.set_weights": true, + "tf.keras.layers.GlobalAveragePooling2D.submodules": true, + "tf.keras.layers.GlobalAveragePooling2D.trainable": true, + "tf.keras.layers.GlobalAveragePooling2D.trainable_weights": true, + "tf.keras.layers.GlobalAveragePooling2D.weights": true, + "tf.keras.layers.GlobalAveragePooling2D.with_name_scope": true, + "tf.keras.layers.GlobalAveragePooling3D": false, + "tf.keras.layers.GlobalAveragePooling3D.__call__": true, + "tf.keras.layers.GlobalAveragePooling3D.__eq__": true, + "tf.keras.layers.GlobalAveragePooling3D.__ge__": true, + "tf.keras.layers.GlobalAveragePooling3D.__gt__": true, + 
"tf.keras.layers.GlobalAveragePooling3D.__init__": true, + "tf.keras.layers.GlobalAveragePooling3D.__le__": true, + "tf.keras.layers.GlobalAveragePooling3D.__lt__": true, + "tf.keras.layers.GlobalAveragePooling3D.__ne__": true, + "tf.keras.layers.GlobalAveragePooling3D.__new__": true, + "tf.keras.layers.GlobalAveragePooling3D.activity_regularizer": true, + "tf.keras.layers.GlobalAveragePooling3D.add_loss": true, + "tf.keras.layers.GlobalAveragePooling3D.add_metric": true, + "tf.keras.layers.GlobalAveragePooling3D.add_weight": true, + "tf.keras.layers.GlobalAveragePooling3D.build": true, + "tf.keras.layers.GlobalAveragePooling3D.call": true, + "tf.keras.layers.GlobalAveragePooling3D.compute_mask": true, + "tf.keras.layers.GlobalAveragePooling3D.compute_output_shape": true, + "tf.keras.layers.GlobalAveragePooling3D.compute_output_signature": true, + "tf.keras.layers.GlobalAveragePooling3D.count_params": true, + "tf.keras.layers.GlobalAveragePooling3D.dtype": true, + "tf.keras.layers.GlobalAveragePooling3D.dynamic": true, + "tf.keras.layers.GlobalAveragePooling3D.from_config": true, + "tf.keras.layers.GlobalAveragePooling3D.get_config": true, + "tf.keras.layers.GlobalAveragePooling3D.get_weights": true, + "tf.keras.layers.GlobalAveragePooling3D.input": true, + "tf.keras.layers.GlobalAveragePooling3D.input_spec": true, + "tf.keras.layers.GlobalAveragePooling3D.losses": true, + "tf.keras.layers.GlobalAveragePooling3D.metrics": true, + "tf.keras.layers.GlobalAveragePooling3D.name": true, + "tf.keras.layers.GlobalAveragePooling3D.name_scope": true, + "tf.keras.layers.GlobalAveragePooling3D.non_trainable_weights": true, + "tf.keras.layers.GlobalAveragePooling3D.output": true, + "tf.keras.layers.GlobalAveragePooling3D.set_weights": true, + "tf.keras.layers.GlobalAveragePooling3D.submodules": true, + "tf.keras.layers.GlobalAveragePooling3D.trainable": true, + "tf.keras.layers.GlobalAveragePooling3D.trainable_weights": true, + "tf.keras.layers.GlobalAveragePooling3D.weights": true, + "tf.keras.layers.GlobalAveragePooling3D.with_name_scope": true, + "tf.keras.layers.GlobalAvgPool1D": false, + "tf.keras.layers.GlobalAvgPool1D.__call__": true, + "tf.keras.layers.GlobalAvgPool1D.__eq__": true, + "tf.keras.layers.GlobalAvgPool1D.__ge__": true, + "tf.keras.layers.GlobalAvgPool1D.__gt__": true, + "tf.keras.layers.GlobalAvgPool1D.__init__": true, + "tf.keras.layers.GlobalAvgPool1D.__le__": true, + "tf.keras.layers.GlobalAvgPool1D.__lt__": true, + "tf.keras.layers.GlobalAvgPool1D.__ne__": true, + "tf.keras.layers.GlobalAvgPool1D.__new__": true, + "tf.keras.layers.GlobalAvgPool1D.activity_regularizer": true, + "tf.keras.layers.GlobalAvgPool1D.add_loss": true, + "tf.keras.layers.GlobalAvgPool1D.add_metric": true, + "tf.keras.layers.GlobalAvgPool1D.add_weight": true, + "tf.keras.layers.GlobalAvgPool1D.build": true, + "tf.keras.layers.GlobalAvgPool1D.call": true, + "tf.keras.layers.GlobalAvgPool1D.compute_mask": true, + "tf.keras.layers.GlobalAvgPool1D.compute_output_shape": true, + "tf.keras.layers.GlobalAvgPool1D.compute_output_signature": true, + "tf.keras.layers.GlobalAvgPool1D.count_params": true, + "tf.keras.layers.GlobalAvgPool1D.dtype": true, + "tf.keras.layers.GlobalAvgPool1D.dynamic": true, + "tf.keras.layers.GlobalAvgPool1D.from_config": true, + "tf.keras.layers.GlobalAvgPool1D.get_config": true, + "tf.keras.layers.GlobalAvgPool1D.get_weights": true, + "tf.keras.layers.GlobalAvgPool1D.input": true, + "tf.keras.layers.GlobalAvgPool1D.input_spec": true, + "tf.keras.layers.GlobalAvgPool1D.losses": true, + 
"tf.keras.layers.GlobalAvgPool1D.metrics": true, + "tf.keras.layers.GlobalAvgPool1D.name": true, + "tf.keras.layers.GlobalAvgPool1D.name_scope": true, + "tf.keras.layers.GlobalAvgPool1D.non_trainable_weights": true, + "tf.keras.layers.GlobalAvgPool1D.output": true, + "tf.keras.layers.GlobalAvgPool1D.set_weights": true, + "tf.keras.layers.GlobalAvgPool1D.submodules": true, + "tf.keras.layers.GlobalAvgPool1D.trainable": true, + "tf.keras.layers.GlobalAvgPool1D.trainable_weights": true, + "tf.keras.layers.GlobalAvgPool1D.weights": true, + "tf.keras.layers.GlobalAvgPool1D.with_name_scope": true, + "tf.keras.layers.GlobalAvgPool2D": false, + "tf.keras.layers.GlobalAvgPool2D.__call__": true, + "tf.keras.layers.GlobalAvgPool2D.__eq__": true, + "tf.keras.layers.GlobalAvgPool2D.__ge__": true, + "tf.keras.layers.GlobalAvgPool2D.__gt__": true, + "tf.keras.layers.GlobalAvgPool2D.__init__": true, + "tf.keras.layers.GlobalAvgPool2D.__le__": true, + "tf.keras.layers.GlobalAvgPool2D.__lt__": true, + "tf.keras.layers.GlobalAvgPool2D.__ne__": true, + "tf.keras.layers.GlobalAvgPool2D.__new__": true, + "tf.keras.layers.GlobalAvgPool2D.activity_regularizer": true, + "tf.keras.layers.GlobalAvgPool2D.add_loss": true, + "tf.keras.layers.GlobalAvgPool2D.add_metric": true, + "tf.keras.layers.GlobalAvgPool2D.add_weight": true, + "tf.keras.layers.GlobalAvgPool2D.build": true, + "tf.keras.layers.GlobalAvgPool2D.call": true, + "tf.keras.layers.GlobalAvgPool2D.compute_mask": true, + "tf.keras.layers.GlobalAvgPool2D.compute_output_shape": true, + "tf.keras.layers.GlobalAvgPool2D.compute_output_signature": true, + "tf.keras.layers.GlobalAvgPool2D.count_params": true, + "tf.keras.layers.GlobalAvgPool2D.dtype": true, + "tf.keras.layers.GlobalAvgPool2D.dynamic": true, + "tf.keras.layers.GlobalAvgPool2D.from_config": true, + "tf.keras.layers.GlobalAvgPool2D.get_config": true, + "tf.keras.layers.GlobalAvgPool2D.get_weights": true, + "tf.keras.layers.GlobalAvgPool2D.input": true, + "tf.keras.layers.GlobalAvgPool2D.input_spec": true, + "tf.keras.layers.GlobalAvgPool2D.losses": true, + "tf.keras.layers.GlobalAvgPool2D.metrics": true, + "tf.keras.layers.GlobalAvgPool2D.name": true, + "tf.keras.layers.GlobalAvgPool2D.name_scope": true, + "tf.keras.layers.GlobalAvgPool2D.non_trainable_weights": true, + "tf.keras.layers.GlobalAvgPool2D.output": true, + "tf.keras.layers.GlobalAvgPool2D.set_weights": true, + "tf.keras.layers.GlobalAvgPool2D.submodules": true, + "tf.keras.layers.GlobalAvgPool2D.trainable": true, + "tf.keras.layers.GlobalAvgPool2D.trainable_weights": true, + "tf.keras.layers.GlobalAvgPool2D.weights": true, + "tf.keras.layers.GlobalAvgPool2D.with_name_scope": true, + "tf.keras.layers.GlobalAvgPool3D": false, + "tf.keras.layers.GlobalAvgPool3D.__call__": true, + "tf.keras.layers.GlobalAvgPool3D.__eq__": true, + "tf.keras.layers.GlobalAvgPool3D.__ge__": true, + "tf.keras.layers.GlobalAvgPool3D.__gt__": true, + "tf.keras.layers.GlobalAvgPool3D.__init__": true, + "tf.keras.layers.GlobalAvgPool3D.__le__": true, + "tf.keras.layers.GlobalAvgPool3D.__lt__": true, + "tf.keras.layers.GlobalAvgPool3D.__ne__": true, + "tf.keras.layers.GlobalAvgPool3D.__new__": true, + "tf.keras.layers.GlobalAvgPool3D.activity_regularizer": true, + "tf.keras.layers.GlobalAvgPool3D.add_loss": true, + "tf.keras.layers.GlobalAvgPool3D.add_metric": true, + "tf.keras.layers.GlobalAvgPool3D.add_weight": true, + "tf.keras.layers.GlobalAvgPool3D.build": true, + "tf.keras.layers.GlobalAvgPool3D.call": true, + "tf.keras.layers.GlobalAvgPool3D.compute_mask": 
true, + "tf.keras.layers.GlobalAvgPool3D.compute_output_shape": true, + "tf.keras.layers.GlobalAvgPool3D.compute_output_signature": true, + "tf.keras.layers.GlobalAvgPool3D.count_params": true, + "tf.keras.layers.GlobalAvgPool3D.dtype": true, + "tf.keras.layers.GlobalAvgPool3D.dynamic": true, + "tf.keras.layers.GlobalAvgPool3D.from_config": true, + "tf.keras.layers.GlobalAvgPool3D.get_config": true, + "tf.keras.layers.GlobalAvgPool3D.get_weights": true, + "tf.keras.layers.GlobalAvgPool3D.input": true, + "tf.keras.layers.GlobalAvgPool3D.input_spec": true, + "tf.keras.layers.GlobalAvgPool3D.losses": true, + "tf.keras.layers.GlobalAvgPool3D.metrics": true, + "tf.keras.layers.GlobalAvgPool3D.name": true, + "tf.keras.layers.GlobalAvgPool3D.name_scope": true, + "tf.keras.layers.GlobalAvgPool3D.non_trainable_weights": true, + "tf.keras.layers.GlobalAvgPool3D.output": true, + "tf.keras.layers.GlobalAvgPool3D.set_weights": true, + "tf.keras.layers.GlobalAvgPool3D.submodules": true, + "tf.keras.layers.GlobalAvgPool3D.trainable": true, + "tf.keras.layers.GlobalAvgPool3D.trainable_weights": true, + "tf.keras.layers.GlobalAvgPool3D.weights": true, + "tf.keras.layers.GlobalAvgPool3D.with_name_scope": true, + "tf.keras.layers.GlobalMaxPool1D": false, + "tf.keras.layers.GlobalMaxPool1D.__call__": true, + "tf.keras.layers.GlobalMaxPool1D.__eq__": true, + "tf.keras.layers.GlobalMaxPool1D.__ge__": true, + "tf.keras.layers.GlobalMaxPool1D.__gt__": true, + "tf.keras.layers.GlobalMaxPool1D.__init__": true, + "tf.keras.layers.GlobalMaxPool1D.__le__": true, + "tf.keras.layers.GlobalMaxPool1D.__lt__": true, + "tf.keras.layers.GlobalMaxPool1D.__ne__": true, + "tf.keras.layers.GlobalMaxPool1D.__new__": true, + "tf.keras.layers.GlobalMaxPool1D.activity_regularizer": true, + "tf.keras.layers.GlobalMaxPool1D.add_loss": true, + "tf.keras.layers.GlobalMaxPool1D.add_metric": true, + "tf.keras.layers.GlobalMaxPool1D.add_weight": true, + "tf.keras.layers.GlobalMaxPool1D.build": true, + "tf.keras.layers.GlobalMaxPool1D.call": true, + "tf.keras.layers.GlobalMaxPool1D.compute_mask": true, + "tf.keras.layers.GlobalMaxPool1D.compute_output_shape": true, + "tf.keras.layers.GlobalMaxPool1D.compute_output_signature": true, + "tf.keras.layers.GlobalMaxPool1D.count_params": true, + "tf.keras.layers.GlobalMaxPool1D.dtype": true, + "tf.keras.layers.GlobalMaxPool1D.dynamic": true, + "tf.keras.layers.GlobalMaxPool1D.from_config": true, + "tf.keras.layers.GlobalMaxPool1D.get_config": true, + "tf.keras.layers.GlobalMaxPool1D.get_weights": true, + "tf.keras.layers.GlobalMaxPool1D.input": true, + "tf.keras.layers.GlobalMaxPool1D.input_spec": true, + "tf.keras.layers.GlobalMaxPool1D.losses": true, + "tf.keras.layers.GlobalMaxPool1D.metrics": true, + "tf.keras.layers.GlobalMaxPool1D.name": true, + "tf.keras.layers.GlobalMaxPool1D.name_scope": true, + "tf.keras.layers.GlobalMaxPool1D.non_trainable_weights": true, + "tf.keras.layers.GlobalMaxPool1D.output": true, + "tf.keras.layers.GlobalMaxPool1D.set_weights": true, + "tf.keras.layers.GlobalMaxPool1D.submodules": true, + "tf.keras.layers.GlobalMaxPool1D.trainable": true, + "tf.keras.layers.GlobalMaxPool1D.trainable_weights": true, + "tf.keras.layers.GlobalMaxPool1D.weights": true, + "tf.keras.layers.GlobalMaxPool1D.with_name_scope": true, + "tf.keras.layers.GlobalMaxPool2D": false, + "tf.keras.layers.GlobalMaxPool2D.__call__": true, + "tf.keras.layers.GlobalMaxPool2D.__eq__": true, + "tf.keras.layers.GlobalMaxPool2D.__ge__": true, + "tf.keras.layers.GlobalMaxPool2D.__gt__": true, + 
"tf.keras.layers.GlobalMaxPool2D.__init__": true, + "tf.keras.layers.GlobalMaxPool2D.__le__": true, + "tf.keras.layers.GlobalMaxPool2D.__lt__": true, + "tf.keras.layers.GlobalMaxPool2D.__ne__": true, + "tf.keras.layers.GlobalMaxPool2D.__new__": true, + "tf.keras.layers.GlobalMaxPool2D.activity_regularizer": true, + "tf.keras.layers.GlobalMaxPool2D.add_loss": true, + "tf.keras.layers.GlobalMaxPool2D.add_metric": true, + "tf.keras.layers.GlobalMaxPool2D.add_weight": true, + "tf.keras.layers.GlobalMaxPool2D.build": true, + "tf.keras.layers.GlobalMaxPool2D.call": true, + "tf.keras.layers.GlobalMaxPool2D.compute_mask": true, + "tf.keras.layers.GlobalMaxPool2D.compute_output_shape": true, + "tf.keras.layers.GlobalMaxPool2D.compute_output_signature": true, + "tf.keras.layers.GlobalMaxPool2D.count_params": true, + "tf.keras.layers.GlobalMaxPool2D.dtype": true, + "tf.keras.layers.GlobalMaxPool2D.dynamic": true, + "tf.keras.layers.GlobalMaxPool2D.from_config": true, + "tf.keras.layers.GlobalMaxPool2D.get_config": true, + "tf.keras.layers.GlobalMaxPool2D.get_weights": true, + "tf.keras.layers.GlobalMaxPool2D.input": true, + "tf.keras.layers.GlobalMaxPool2D.input_spec": true, + "tf.keras.layers.GlobalMaxPool2D.losses": true, + "tf.keras.layers.GlobalMaxPool2D.metrics": true, + "tf.keras.layers.GlobalMaxPool2D.name": true, + "tf.keras.layers.GlobalMaxPool2D.name_scope": true, + "tf.keras.layers.GlobalMaxPool2D.non_trainable_weights": true, + "tf.keras.layers.GlobalMaxPool2D.output": true, + "tf.keras.layers.GlobalMaxPool2D.set_weights": true, + "tf.keras.layers.GlobalMaxPool2D.submodules": true, + "tf.keras.layers.GlobalMaxPool2D.trainable": true, + "tf.keras.layers.GlobalMaxPool2D.trainable_weights": true, + "tf.keras.layers.GlobalMaxPool2D.weights": true, + "tf.keras.layers.GlobalMaxPool2D.with_name_scope": true, + "tf.keras.layers.GlobalMaxPool3D": false, + "tf.keras.layers.GlobalMaxPool3D.__call__": true, + "tf.keras.layers.GlobalMaxPool3D.__eq__": true, + "tf.keras.layers.GlobalMaxPool3D.__ge__": true, + "tf.keras.layers.GlobalMaxPool3D.__gt__": true, + "tf.keras.layers.GlobalMaxPool3D.__init__": true, + "tf.keras.layers.GlobalMaxPool3D.__le__": true, + "tf.keras.layers.GlobalMaxPool3D.__lt__": true, + "tf.keras.layers.GlobalMaxPool3D.__ne__": true, + "tf.keras.layers.GlobalMaxPool3D.__new__": true, + "tf.keras.layers.GlobalMaxPool3D.activity_regularizer": true, + "tf.keras.layers.GlobalMaxPool3D.add_loss": true, + "tf.keras.layers.GlobalMaxPool3D.add_metric": true, + "tf.keras.layers.GlobalMaxPool3D.add_weight": true, + "tf.keras.layers.GlobalMaxPool3D.build": true, + "tf.keras.layers.GlobalMaxPool3D.call": true, + "tf.keras.layers.GlobalMaxPool3D.compute_mask": true, + "tf.keras.layers.GlobalMaxPool3D.compute_output_shape": true, + "tf.keras.layers.GlobalMaxPool3D.compute_output_signature": true, + "tf.keras.layers.GlobalMaxPool3D.count_params": true, + "tf.keras.layers.GlobalMaxPool3D.dtype": true, + "tf.keras.layers.GlobalMaxPool3D.dynamic": true, + "tf.keras.layers.GlobalMaxPool3D.from_config": true, + "tf.keras.layers.GlobalMaxPool3D.get_config": true, + "tf.keras.layers.GlobalMaxPool3D.get_weights": true, + "tf.keras.layers.GlobalMaxPool3D.input": true, + "tf.keras.layers.GlobalMaxPool3D.input_spec": true, + "tf.keras.layers.GlobalMaxPool3D.losses": true, + "tf.keras.layers.GlobalMaxPool3D.metrics": true, + "tf.keras.layers.GlobalMaxPool3D.name": true, + "tf.keras.layers.GlobalMaxPool3D.name_scope": true, + "tf.keras.layers.GlobalMaxPool3D.non_trainable_weights": true, + 
"tf.keras.layers.GlobalMaxPool3D.output": true, + "tf.keras.layers.GlobalMaxPool3D.set_weights": true, + "tf.keras.layers.GlobalMaxPool3D.submodules": true, + "tf.keras.layers.GlobalMaxPool3D.trainable": true, + "tf.keras.layers.GlobalMaxPool3D.trainable_weights": true, + "tf.keras.layers.GlobalMaxPool3D.weights": true, + "tf.keras.layers.GlobalMaxPool3D.with_name_scope": true, + "tf.keras.layers.GlobalMaxPooling1D": false, + "tf.keras.layers.GlobalMaxPooling1D.__call__": true, + "tf.keras.layers.GlobalMaxPooling1D.__eq__": true, + "tf.keras.layers.GlobalMaxPooling1D.__ge__": true, + "tf.keras.layers.GlobalMaxPooling1D.__gt__": true, + "tf.keras.layers.GlobalMaxPooling1D.__init__": true, + "tf.keras.layers.GlobalMaxPooling1D.__le__": true, + "tf.keras.layers.GlobalMaxPooling1D.__lt__": true, + "tf.keras.layers.GlobalMaxPooling1D.__ne__": true, + "tf.keras.layers.GlobalMaxPooling1D.__new__": true, + "tf.keras.layers.GlobalMaxPooling1D.activity_regularizer": true, + "tf.keras.layers.GlobalMaxPooling1D.add_loss": true, + "tf.keras.layers.GlobalMaxPooling1D.add_metric": true, + "tf.keras.layers.GlobalMaxPooling1D.add_weight": true, + "tf.keras.layers.GlobalMaxPooling1D.build": true, + "tf.keras.layers.GlobalMaxPooling1D.call": true, + "tf.keras.layers.GlobalMaxPooling1D.compute_mask": true, + "tf.keras.layers.GlobalMaxPooling1D.compute_output_shape": true, + "tf.keras.layers.GlobalMaxPooling1D.compute_output_signature": true, + "tf.keras.layers.GlobalMaxPooling1D.count_params": true, + "tf.keras.layers.GlobalMaxPooling1D.dtype": true, + "tf.keras.layers.GlobalMaxPooling1D.dynamic": true, + "tf.keras.layers.GlobalMaxPooling1D.from_config": true, + "tf.keras.layers.GlobalMaxPooling1D.get_config": true, + "tf.keras.layers.GlobalMaxPooling1D.get_weights": true, + "tf.keras.layers.GlobalMaxPooling1D.input": true, + "tf.keras.layers.GlobalMaxPooling1D.input_spec": true, + "tf.keras.layers.GlobalMaxPooling1D.losses": true, + "tf.keras.layers.GlobalMaxPooling1D.metrics": true, + "tf.keras.layers.GlobalMaxPooling1D.name": true, + "tf.keras.layers.GlobalMaxPooling1D.name_scope": true, + "tf.keras.layers.GlobalMaxPooling1D.non_trainable_weights": true, + "tf.keras.layers.GlobalMaxPooling1D.output": true, + "tf.keras.layers.GlobalMaxPooling1D.set_weights": true, + "tf.keras.layers.GlobalMaxPooling1D.submodules": true, + "tf.keras.layers.GlobalMaxPooling1D.trainable": true, + "tf.keras.layers.GlobalMaxPooling1D.trainable_weights": true, + "tf.keras.layers.GlobalMaxPooling1D.weights": true, + "tf.keras.layers.GlobalMaxPooling1D.with_name_scope": true, + "tf.keras.layers.GlobalMaxPooling2D": false, + "tf.keras.layers.GlobalMaxPooling2D.__call__": true, + "tf.keras.layers.GlobalMaxPooling2D.__eq__": true, + "tf.keras.layers.GlobalMaxPooling2D.__ge__": true, + "tf.keras.layers.GlobalMaxPooling2D.__gt__": true, + "tf.keras.layers.GlobalMaxPooling2D.__init__": true, + "tf.keras.layers.GlobalMaxPooling2D.__le__": true, + "tf.keras.layers.GlobalMaxPooling2D.__lt__": true, + "tf.keras.layers.GlobalMaxPooling2D.__ne__": true, + "tf.keras.layers.GlobalMaxPooling2D.__new__": true, + "tf.keras.layers.GlobalMaxPooling2D.activity_regularizer": true, + "tf.keras.layers.GlobalMaxPooling2D.add_loss": true, + "tf.keras.layers.GlobalMaxPooling2D.add_metric": true, + "tf.keras.layers.GlobalMaxPooling2D.add_weight": true, + "tf.keras.layers.GlobalMaxPooling2D.build": true, + "tf.keras.layers.GlobalMaxPooling2D.call": true, + "tf.keras.layers.GlobalMaxPooling2D.compute_mask": true, + 
"tf.keras.layers.GlobalMaxPooling2D.compute_output_shape": true, + "tf.keras.layers.GlobalMaxPooling2D.compute_output_signature": true, + "tf.keras.layers.GlobalMaxPooling2D.count_params": true, + "tf.keras.layers.GlobalMaxPooling2D.dtype": true, + "tf.keras.layers.GlobalMaxPooling2D.dynamic": true, + "tf.keras.layers.GlobalMaxPooling2D.from_config": true, + "tf.keras.layers.GlobalMaxPooling2D.get_config": true, + "tf.keras.layers.GlobalMaxPooling2D.get_weights": true, + "tf.keras.layers.GlobalMaxPooling2D.input": true, + "tf.keras.layers.GlobalMaxPooling2D.input_spec": true, + "tf.keras.layers.GlobalMaxPooling2D.losses": true, + "tf.keras.layers.GlobalMaxPooling2D.metrics": true, + "tf.keras.layers.GlobalMaxPooling2D.name": true, + "tf.keras.layers.GlobalMaxPooling2D.name_scope": true, + "tf.keras.layers.GlobalMaxPooling2D.non_trainable_weights": true, + "tf.keras.layers.GlobalMaxPooling2D.output": true, + "tf.keras.layers.GlobalMaxPooling2D.set_weights": true, + "tf.keras.layers.GlobalMaxPooling2D.submodules": true, + "tf.keras.layers.GlobalMaxPooling2D.trainable": true, + "tf.keras.layers.GlobalMaxPooling2D.trainable_weights": true, + "tf.keras.layers.GlobalMaxPooling2D.weights": true, + "tf.keras.layers.GlobalMaxPooling2D.with_name_scope": true, + "tf.keras.layers.GlobalMaxPooling3D": false, + "tf.keras.layers.GlobalMaxPooling3D.__call__": true, + "tf.keras.layers.GlobalMaxPooling3D.__eq__": true, + "tf.keras.layers.GlobalMaxPooling3D.__ge__": true, + "tf.keras.layers.GlobalMaxPooling3D.__gt__": true, + "tf.keras.layers.GlobalMaxPooling3D.__init__": true, + "tf.keras.layers.GlobalMaxPooling3D.__le__": true, + "tf.keras.layers.GlobalMaxPooling3D.__lt__": true, + "tf.keras.layers.GlobalMaxPooling3D.__ne__": true, + "tf.keras.layers.GlobalMaxPooling3D.__new__": true, + "tf.keras.layers.GlobalMaxPooling3D.activity_regularizer": true, + "tf.keras.layers.GlobalMaxPooling3D.add_loss": true, + "tf.keras.layers.GlobalMaxPooling3D.add_metric": true, + "tf.keras.layers.GlobalMaxPooling3D.add_weight": true, + "tf.keras.layers.GlobalMaxPooling3D.build": true, + "tf.keras.layers.GlobalMaxPooling3D.call": true, + "tf.keras.layers.GlobalMaxPooling3D.compute_mask": true, + "tf.keras.layers.GlobalMaxPooling3D.compute_output_shape": true, + "tf.keras.layers.GlobalMaxPooling3D.compute_output_signature": true, + "tf.keras.layers.GlobalMaxPooling3D.count_params": true, + "tf.keras.layers.GlobalMaxPooling3D.dtype": true, + "tf.keras.layers.GlobalMaxPooling3D.dynamic": true, + "tf.keras.layers.GlobalMaxPooling3D.from_config": true, + "tf.keras.layers.GlobalMaxPooling3D.get_config": true, + "tf.keras.layers.GlobalMaxPooling3D.get_weights": true, + "tf.keras.layers.GlobalMaxPooling3D.input": true, + "tf.keras.layers.GlobalMaxPooling3D.input_spec": true, + "tf.keras.layers.GlobalMaxPooling3D.losses": true, + "tf.keras.layers.GlobalMaxPooling3D.metrics": true, + "tf.keras.layers.GlobalMaxPooling3D.name": true, + "tf.keras.layers.GlobalMaxPooling3D.name_scope": true, + "tf.keras.layers.GlobalMaxPooling3D.non_trainable_weights": true, + "tf.keras.layers.GlobalMaxPooling3D.output": true, + "tf.keras.layers.GlobalMaxPooling3D.set_weights": true, + "tf.keras.layers.GlobalMaxPooling3D.submodules": true, + "tf.keras.layers.GlobalMaxPooling3D.trainable": true, + "tf.keras.layers.GlobalMaxPooling3D.trainable_weights": true, + "tf.keras.layers.GlobalMaxPooling3D.weights": true, + "tf.keras.layers.GlobalMaxPooling3D.with_name_scope": true, + "tf.keras.layers.Input": false, + "tf.keras.layers.InputLayer": false, + 
"tf.keras.layers.InputLayer.__call__": true, + "tf.keras.layers.InputLayer.__eq__": true, + "tf.keras.layers.InputLayer.__ge__": true, + "tf.keras.layers.InputLayer.__gt__": true, + "tf.keras.layers.InputLayer.__init__": true, + "tf.keras.layers.InputLayer.__le__": true, + "tf.keras.layers.InputLayer.__lt__": true, + "tf.keras.layers.InputLayer.__ne__": true, + "tf.keras.layers.InputLayer.__new__": true, + "tf.keras.layers.InputLayer.activity_regularizer": true, + "tf.keras.layers.InputLayer.add_loss": true, + "tf.keras.layers.InputLayer.add_metric": true, + "tf.keras.layers.InputLayer.add_weight": true, + "tf.keras.layers.InputLayer.build": true, + "tf.keras.layers.InputLayer.call": true, + "tf.keras.layers.InputLayer.compute_mask": true, + "tf.keras.layers.InputLayer.compute_output_shape": true, + "tf.keras.layers.InputLayer.compute_output_signature": true, + "tf.keras.layers.InputLayer.count_params": true, + "tf.keras.layers.InputLayer.dtype": true, + "tf.keras.layers.InputLayer.dynamic": true, + "tf.keras.layers.InputLayer.from_config": true, + "tf.keras.layers.InputLayer.get_config": true, + "tf.keras.layers.InputLayer.get_weights": true, + "tf.keras.layers.InputLayer.input": true, + "tf.keras.layers.InputLayer.input_spec": true, + "tf.keras.layers.InputLayer.losses": true, + "tf.keras.layers.InputLayer.metrics": true, + "tf.keras.layers.InputLayer.name": true, + "tf.keras.layers.InputLayer.name_scope": true, + "tf.keras.layers.InputLayer.non_trainable_weights": true, + "tf.keras.layers.InputLayer.output": true, + "tf.keras.layers.InputLayer.set_weights": true, + "tf.keras.layers.InputLayer.submodules": true, + "tf.keras.layers.InputLayer.trainable": true, + "tf.keras.layers.InputLayer.trainable_weights": true, + "tf.keras.layers.InputLayer.weights": true, + "tf.keras.layers.InputLayer.with_name_scope": true, + "tf.keras.layers.InputSpec": false, + "tf.keras.layers.InputSpec.__eq__": true, + "tf.keras.layers.InputSpec.__ge__": true, + "tf.keras.layers.InputSpec.__gt__": true, + "tf.keras.layers.InputSpec.__init__": true, + "tf.keras.layers.InputSpec.__le__": true, + "tf.keras.layers.InputSpec.__lt__": true, + "tf.keras.layers.InputSpec.__ne__": true, + "tf.keras.layers.InputSpec.__new__": true, + "tf.keras.layers.InputSpec.from_config": true, + "tf.keras.layers.InputSpec.get_config": true, + "tf.keras.layers.LSTM": false, + "tf.keras.layers.LSTM.__call__": true, + "tf.keras.layers.LSTM.__eq__": true, + "tf.keras.layers.LSTM.__ge__": true, + "tf.keras.layers.LSTM.__gt__": true, + "tf.keras.layers.LSTM.__init__": true, + "tf.keras.layers.LSTM.__le__": true, + "tf.keras.layers.LSTM.__lt__": true, + "tf.keras.layers.LSTM.__ne__": true, + "tf.keras.layers.LSTM.__new__": true, + "tf.keras.layers.LSTM.activation": true, + "tf.keras.layers.LSTM.activity_regularizer": true, + "tf.keras.layers.LSTM.add_loss": true, + "tf.keras.layers.LSTM.add_metric": true, + "tf.keras.layers.LSTM.add_weight": true, + "tf.keras.layers.LSTM.bias_constraint": true, + "tf.keras.layers.LSTM.bias_initializer": true, + "tf.keras.layers.LSTM.bias_regularizer": true, + "tf.keras.layers.LSTM.build": true, + "tf.keras.layers.LSTM.call": true, + "tf.keras.layers.LSTM.compute_mask": true, + "tf.keras.layers.LSTM.compute_output_shape": true, + "tf.keras.layers.LSTM.compute_output_signature": true, + "tf.keras.layers.LSTM.count_params": true, + "tf.keras.layers.LSTM.dropout": true, + "tf.keras.layers.LSTM.dtype": true, + "tf.keras.layers.LSTM.dynamic": true, + "tf.keras.layers.LSTM.from_config": true, + 
"tf.keras.layers.LSTM.get_config": true, + "tf.keras.layers.LSTM.get_dropout_mask_for_cell": true, + "tf.keras.layers.LSTM.get_recurrent_dropout_mask_for_cell": true, + "tf.keras.layers.LSTM.get_weights": true, + "tf.keras.layers.LSTM.implementation": true, + "tf.keras.layers.LSTM.input": true, + "tf.keras.layers.LSTM.input_spec": true, + "tf.keras.layers.LSTM.kernel_constraint": true, + "tf.keras.layers.LSTM.kernel_initializer": true, + "tf.keras.layers.LSTM.kernel_regularizer": true, + "tf.keras.layers.LSTM.losses": true, + "tf.keras.layers.LSTM.metrics": true, + "tf.keras.layers.LSTM.name": true, + "tf.keras.layers.LSTM.name_scope": true, + "tf.keras.layers.LSTM.non_trainable_weights": true, + "tf.keras.layers.LSTM.output": true, + "tf.keras.layers.LSTM.recurrent_activation": true, + "tf.keras.layers.LSTM.recurrent_constraint": true, + "tf.keras.layers.LSTM.recurrent_dropout": true, + "tf.keras.layers.LSTM.recurrent_initializer": true, + "tf.keras.layers.LSTM.recurrent_regularizer": true, + "tf.keras.layers.LSTM.reset_dropout_mask": true, + "tf.keras.layers.LSTM.reset_recurrent_dropout_mask": true, + "tf.keras.layers.LSTM.reset_states": true, + "tf.keras.layers.LSTM.set_weights": true, + "tf.keras.layers.LSTM.states": true, + "tf.keras.layers.LSTM.submodules": true, + "tf.keras.layers.LSTM.trainable": true, + "tf.keras.layers.LSTM.trainable_weights": true, + "tf.keras.layers.LSTM.unit_forget_bias": true, + "tf.keras.layers.LSTM.units": true, + "tf.keras.layers.LSTM.use_bias": true, + "tf.keras.layers.LSTM.weights": true, + "tf.keras.layers.LSTM.with_name_scope": true, + "tf.keras.layers.LSTMCell": false, + "tf.keras.layers.LSTMCell.__call__": true, + "tf.keras.layers.LSTMCell.__eq__": true, + "tf.keras.layers.LSTMCell.__ge__": true, + "tf.keras.layers.LSTMCell.__gt__": true, + "tf.keras.layers.LSTMCell.__init__": true, + "tf.keras.layers.LSTMCell.__le__": true, + "tf.keras.layers.LSTMCell.__lt__": true, + "tf.keras.layers.LSTMCell.__ne__": true, + "tf.keras.layers.LSTMCell.__new__": true, + "tf.keras.layers.LSTMCell.activity_regularizer": true, + "tf.keras.layers.LSTMCell.add_loss": true, + "tf.keras.layers.LSTMCell.add_metric": true, + "tf.keras.layers.LSTMCell.add_weight": true, + "tf.keras.layers.LSTMCell.build": true, + "tf.keras.layers.LSTMCell.call": true, + "tf.keras.layers.LSTMCell.compute_mask": true, + "tf.keras.layers.LSTMCell.compute_output_shape": true, + "tf.keras.layers.LSTMCell.compute_output_signature": true, + "tf.keras.layers.LSTMCell.count_params": true, + "tf.keras.layers.LSTMCell.dtype": true, + "tf.keras.layers.LSTMCell.dynamic": true, + "tf.keras.layers.LSTMCell.from_config": true, + "tf.keras.layers.LSTMCell.get_config": true, + "tf.keras.layers.LSTMCell.get_dropout_mask_for_cell": true, + "tf.keras.layers.LSTMCell.get_initial_state": true, + "tf.keras.layers.LSTMCell.get_recurrent_dropout_mask_for_cell": true, + "tf.keras.layers.LSTMCell.get_weights": true, + "tf.keras.layers.LSTMCell.input": true, + "tf.keras.layers.LSTMCell.input_spec": true, + "tf.keras.layers.LSTMCell.losses": true, + "tf.keras.layers.LSTMCell.metrics": true, + "tf.keras.layers.LSTMCell.name": true, + "tf.keras.layers.LSTMCell.name_scope": true, + "tf.keras.layers.LSTMCell.non_trainable_weights": true, + "tf.keras.layers.LSTMCell.output": true, + "tf.keras.layers.LSTMCell.reset_dropout_mask": true, + "tf.keras.layers.LSTMCell.reset_recurrent_dropout_mask": true, + "tf.keras.layers.LSTMCell.set_weights": true, + "tf.keras.layers.LSTMCell.submodules": true, + 
"tf.keras.layers.LSTMCell.trainable": true, + "tf.keras.layers.LSTMCell.trainable_weights": true, + "tf.keras.layers.LSTMCell.weights": true, + "tf.keras.layers.LSTMCell.with_name_scope": true, + "tf.keras.layers.Lambda": false, + "tf.keras.layers.Lambda.__call__": true, + "tf.keras.layers.Lambda.__eq__": true, + "tf.keras.layers.Lambda.__ge__": true, + "tf.keras.layers.Lambda.__gt__": true, + "tf.keras.layers.Lambda.__init__": true, + "tf.keras.layers.Lambda.__le__": true, + "tf.keras.layers.Lambda.__lt__": true, + "tf.keras.layers.Lambda.__ne__": true, + "tf.keras.layers.Lambda.__new__": true, + "tf.keras.layers.Lambda.activity_regularizer": true, + "tf.keras.layers.Lambda.add_loss": true, + "tf.keras.layers.Lambda.add_metric": true, + "tf.keras.layers.Lambda.add_weight": true, + "tf.keras.layers.Lambda.build": true, + "tf.keras.layers.Lambda.call": true, + "tf.keras.layers.Lambda.compute_mask": true, + "tf.keras.layers.Lambda.compute_output_shape": true, + "tf.keras.layers.Lambda.compute_output_signature": true, + "tf.keras.layers.Lambda.count_params": true, + "tf.keras.layers.Lambda.dtype": true, + "tf.keras.layers.Lambda.dynamic": true, + "tf.keras.layers.Lambda.from_config": true, + "tf.keras.layers.Lambda.get_config": true, + "tf.keras.layers.Lambda.get_weights": true, + "tf.keras.layers.Lambda.input": true, + "tf.keras.layers.Lambda.input_spec": true, + "tf.keras.layers.Lambda.losses": true, + "tf.keras.layers.Lambda.metrics": true, + "tf.keras.layers.Lambda.name": true, + "tf.keras.layers.Lambda.name_scope": true, + "tf.keras.layers.Lambda.non_trainable_weights": true, + "tf.keras.layers.Lambda.output": true, + "tf.keras.layers.Lambda.set_weights": true, + "tf.keras.layers.Lambda.submodules": true, + "tf.keras.layers.Lambda.trainable": true, + "tf.keras.layers.Lambda.trainable_weights": true, + "tf.keras.layers.Lambda.weights": true, + "tf.keras.layers.Lambda.with_name_scope": true, + "tf.keras.layers.Layer": false, + "tf.keras.layers.Layer.__call__": true, + "tf.keras.layers.Layer.__eq__": true, + "tf.keras.layers.Layer.__ge__": true, + "tf.keras.layers.Layer.__gt__": true, + "tf.keras.layers.Layer.__init__": true, + "tf.keras.layers.Layer.__le__": true, + "tf.keras.layers.Layer.__lt__": true, + "tf.keras.layers.Layer.__ne__": true, + "tf.keras.layers.Layer.__new__": true, + "tf.keras.layers.Layer.activity_regularizer": true, + "tf.keras.layers.Layer.add_loss": true, + "tf.keras.layers.Layer.add_metric": true, + "tf.keras.layers.Layer.add_weight": true, + "tf.keras.layers.Layer.build": true, + "tf.keras.layers.Layer.call": true, + "tf.keras.layers.Layer.compute_mask": true, + "tf.keras.layers.Layer.compute_output_shape": true, + "tf.keras.layers.Layer.compute_output_signature": true, + "tf.keras.layers.Layer.count_params": true, + "tf.keras.layers.Layer.dtype": true, + "tf.keras.layers.Layer.dynamic": true, + "tf.keras.layers.Layer.from_config": true, + "tf.keras.layers.Layer.get_config": true, + "tf.keras.layers.Layer.get_weights": true, + "tf.keras.layers.Layer.input": true, + "tf.keras.layers.Layer.input_spec": true, + "tf.keras.layers.Layer.losses": true, + "tf.keras.layers.Layer.metrics": true, + "tf.keras.layers.Layer.name": true, + "tf.keras.layers.Layer.name_scope": true, + "tf.keras.layers.Layer.non_trainable_weights": true, + "tf.keras.layers.Layer.output": true, + "tf.keras.layers.Layer.set_weights": true, + "tf.keras.layers.Layer.submodules": true, + "tf.keras.layers.Layer.trainable": true, + "tf.keras.layers.Layer.trainable_weights": true, + 
"tf.keras.layers.Layer.weights": true, + "tf.keras.layers.Layer.with_name_scope": true, + "tf.keras.layers.LayerNormalization": false, + "tf.keras.layers.LayerNormalization.__call__": true, + "tf.keras.layers.LayerNormalization.__eq__": true, + "tf.keras.layers.LayerNormalization.__ge__": true, + "tf.keras.layers.LayerNormalization.__gt__": true, + "tf.keras.layers.LayerNormalization.__init__": true, + "tf.keras.layers.LayerNormalization.__le__": true, + "tf.keras.layers.LayerNormalization.__lt__": true, + "tf.keras.layers.LayerNormalization.__ne__": true, + "tf.keras.layers.LayerNormalization.__new__": true, + "tf.keras.layers.LayerNormalization.activity_regularizer": true, + "tf.keras.layers.LayerNormalization.add_loss": true, + "tf.keras.layers.LayerNormalization.add_metric": true, + "tf.keras.layers.LayerNormalization.add_weight": true, + "tf.keras.layers.LayerNormalization.build": true, + "tf.keras.layers.LayerNormalization.call": true, + "tf.keras.layers.LayerNormalization.compute_mask": true, + "tf.keras.layers.LayerNormalization.compute_output_shape": true, + "tf.keras.layers.LayerNormalization.compute_output_signature": true, + "tf.keras.layers.LayerNormalization.count_params": true, + "tf.keras.layers.LayerNormalization.dtype": true, + "tf.keras.layers.LayerNormalization.dynamic": true, + "tf.keras.layers.LayerNormalization.from_config": true, + "tf.keras.layers.LayerNormalization.get_config": true, + "tf.keras.layers.LayerNormalization.get_weights": true, + "tf.keras.layers.LayerNormalization.input": true, + "tf.keras.layers.LayerNormalization.input_spec": true, + "tf.keras.layers.LayerNormalization.losses": true, + "tf.keras.layers.LayerNormalization.metrics": true, + "tf.keras.layers.LayerNormalization.name": true, + "tf.keras.layers.LayerNormalization.name_scope": true, + "tf.keras.layers.LayerNormalization.non_trainable_weights": true, + "tf.keras.layers.LayerNormalization.output": true, + "tf.keras.layers.LayerNormalization.set_weights": true, + "tf.keras.layers.LayerNormalization.submodules": true, + "tf.keras.layers.LayerNormalization.trainable": true, + "tf.keras.layers.LayerNormalization.trainable_weights": true, + "tf.keras.layers.LayerNormalization.weights": true, + "tf.keras.layers.LayerNormalization.with_name_scope": true, + "tf.keras.layers.LeakyReLU": false, + "tf.keras.layers.LeakyReLU.__call__": true, + "tf.keras.layers.LeakyReLU.__eq__": true, + "tf.keras.layers.LeakyReLU.__ge__": true, + "tf.keras.layers.LeakyReLU.__gt__": true, + "tf.keras.layers.LeakyReLU.__init__": true, + "tf.keras.layers.LeakyReLU.__le__": true, + "tf.keras.layers.LeakyReLU.__lt__": true, + "tf.keras.layers.LeakyReLU.__ne__": true, + "tf.keras.layers.LeakyReLU.__new__": true, + "tf.keras.layers.LeakyReLU.activity_regularizer": true, + "tf.keras.layers.LeakyReLU.add_loss": true, + "tf.keras.layers.LeakyReLU.add_metric": true, + "tf.keras.layers.LeakyReLU.add_weight": true, + "tf.keras.layers.LeakyReLU.build": true, + "tf.keras.layers.LeakyReLU.call": true, + "tf.keras.layers.LeakyReLU.compute_mask": true, + "tf.keras.layers.LeakyReLU.compute_output_shape": true, + "tf.keras.layers.LeakyReLU.compute_output_signature": true, + "tf.keras.layers.LeakyReLU.count_params": true, + "tf.keras.layers.LeakyReLU.dtype": true, + "tf.keras.layers.LeakyReLU.dynamic": true, + "tf.keras.layers.LeakyReLU.from_config": true, + "tf.keras.layers.LeakyReLU.get_config": true, + "tf.keras.layers.LeakyReLU.get_weights": true, + "tf.keras.layers.LeakyReLU.input": true, + "tf.keras.layers.LeakyReLU.input_spec": 
true, + "tf.keras.layers.LeakyReLU.losses": true, + "tf.keras.layers.LeakyReLU.metrics": true, + "tf.keras.layers.LeakyReLU.name": true, + "tf.keras.layers.LeakyReLU.name_scope": true, + "tf.keras.layers.LeakyReLU.non_trainable_weights": true, + "tf.keras.layers.LeakyReLU.output": true, + "tf.keras.layers.LeakyReLU.set_weights": true, + "tf.keras.layers.LeakyReLU.submodules": true, + "tf.keras.layers.LeakyReLU.trainable": true, + "tf.keras.layers.LeakyReLU.trainable_weights": true, + "tf.keras.layers.LeakyReLU.weights": true, + "tf.keras.layers.LeakyReLU.with_name_scope": true, + "tf.keras.layers.LocallyConnected1D": false, + "tf.keras.layers.LocallyConnected1D.__call__": true, + "tf.keras.layers.LocallyConnected1D.__eq__": true, + "tf.keras.layers.LocallyConnected1D.__ge__": true, + "tf.keras.layers.LocallyConnected1D.__gt__": true, + "tf.keras.layers.LocallyConnected1D.__init__": true, + "tf.keras.layers.LocallyConnected1D.__le__": true, + "tf.keras.layers.LocallyConnected1D.__lt__": true, + "tf.keras.layers.LocallyConnected1D.__ne__": true, + "tf.keras.layers.LocallyConnected1D.__new__": true, + "tf.keras.layers.LocallyConnected1D.activity_regularizer": true, + "tf.keras.layers.LocallyConnected1D.add_loss": true, + "tf.keras.layers.LocallyConnected1D.add_metric": true, + "tf.keras.layers.LocallyConnected1D.add_weight": true, + "tf.keras.layers.LocallyConnected1D.build": true, + "tf.keras.layers.LocallyConnected1D.call": true, + "tf.keras.layers.LocallyConnected1D.compute_mask": true, + "tf.keras.layers.LocallyConnected1D.compute_output_shape": true, + "tf.keras.layers.LocallyConnected1D.compute_output_signature": true, + "tf.keras.layers.LocallyConnected1D.count_params": true, + "tf.keras.layers.LocallyConnected1D.dtype": true, + "tf.keras.layers.LocallyConnected1D.dynamic": true, + "tf.keras.layers.LocallyConnected1D.from_config": true, + "tf.keras.layers.LocallyConnected1D.get_config": true, + "tf.keras.layers.LocallyConnected1D.get_weights": true, + "tf.keras.layers.LocallyConnected1D.input": true, + "tf.keras.layers.LocallyConnected1D.input_spec": true, + "tf.keras.layers.LocallyConnected1D.losses": true, + "tf.keras.layers.LocallyConnected1D.metrics": true, + "tf.keras.layers.LocallyConnected1D.name": true, + "tf.keras.layers.LocallyConnected1D.name_scope": true, + "tf.keras.layers.LocallyConnected1D.non_trainable_weights": true, + "tf.keras.layers.LocallyConnected1D.output": true, + "tf.keras.layers.LocallyConnected1D.set_weights": true, + "tf.keras.layers.LocallyConnected1D.submodules": true, + "tf.keras.layers.LocallyConnected1D.trainable": true, + "tf.keras.layers.LocallyConnected1D.trainable_weights": true, + "tf.keras.layers.LocallyConnected1D.weights": true, + "tf.keras.layers.LocallyConnected1D.with_name_scope": true, + "tf.keras.layers.LocallyConnected2D": false, + "tf.keras.layers.LocallyConnected2D.__call__": true, + "tf.keras.layers.LocallyConnected2D.__eq__": true, + "tf.keras.layers.LocallyConnected2D.__ge__": true, + "tf.keras.layers.LocallyConnected2D.__gt__": true, + "tf.keras.layers.LocallyConnected2D.__init__": true, + "tf.keras.layers.LocallyConnected2D.__le__": true, + "tf.keras.layers.LocallyConnected2D.__lt__": true, + "tf.keras.layers.LocallyConnected2D.__ne__": true, + "tf.keras.layers.LocallyConnected2D.__new__": true, + "tf.keras.layers.LocallyConnected2D.activity_regularizer": true, + "tf.keras.layers.LocallyConnected2D.add_loss": true, + "tf.keras.layers.LocallyConnected2D.add_metric": true, + "tf.keras.layers.LocallyConnected2D.add_weight": true, + 
"tf.keras.layers.LocallyConnected2D.build": true, + "tf.keras.layers.LocallyConnected2D.call": true, + "tf.keras.layers.LocallyConnected2D.compute_mask": true, + "tf.keras.layers.LocallyConnected2D.compute_output_shape": true, + "tf.keras.layers.LocallyConnected2D.compute_output_signature": true, + "tf.keras.layers.LocallyConnected2D.count_params": true, + "tf.keras.layers.LocallyConnected2D.dtype": true, + "tf.keras.layers.LocallyConnected2D.dynamic": true, + "tf.keras.layers.LocallyConnected2D.from_config": true, + "tf.keras.layers.LocallyConnected2D.get_config": true, + "tf.keras.layers.LocallyConnected2D.get_weights": true, + "tf.keras.layers.LocallyConnected2D.input": true, + "tf.keras.layers.LocallyConnected2D.input_spec": true, + "tf.keras.layers.LocallyConnected2D.losses": true, + "tf.keras.layers.LocallyConnected2D.metrics": true, + "tf.keras.layers.LocallyConnected2D.name": true, + "tf.keras.layers.LocallyConnected2D.name_scope": true, + "tf.keras.layers.LocallyConnected2D.non_trainable_weights": true, + "tf.keras.layers.LocallyConnected2D.output": true, + "tf.keras.layers.LocallyConnected2D.set_weights": true, + "tf.keras.layers.LocallyConnected2D.submodules": true, + "tf.keras.layers.LocallyConnected2D.trainable": true, + "tf.keras.layers.LocallyConnected2D.trainable_weights": true, + "tf.keras.layers.LocallyConnected2D.weights": true, + "tf.keras.layers.LocallyConnected2D.with_name_scope": true, + "tf.keras.layers.Masking": false, + "tf.keras.layers.Masking.__call__": true, + "tf.keras.layers.Masking.__eq__": true, + "tf.keras.layers.Masking.__ge__": true, + "tf.keras.layers.Masking.__gt__": true, + "tf.keras.layers.Masking.__init__": true, + "tf.keras.layers.Masking.__le__": true, + "tf.keras.layers.Masking.__lt__": true, + "tf.keras.layers.Masking.__ne__": true, + "tf.keras.layers.Masking.__new__": true, + "tf.keras.layers.Masking.activity_regularizer": true, + "tf.keras.layers.Masking.add_loss": true, + "tf.keras.layers.Masking.add_metric": true, + "tf.keras.layers.Masking.add_weight": true, + "tf.keras.layers.Masking.build": true, + "tf.keras.layers.Masking.call": true, + "tf.keras.layers.Masking.compute_mask": true, + "tf.keras.layers.Masking.compute_output_shape": true, + "tf.keras.layers.Masking.compute_output_signature": true, + "tf.keras.layers.Masking.count_params": true, + "tf.keras.layers.Masking.dtype": true, + "tf.keras.layers.Masking.dynamic": true, + "tf.keras.layers.Masking.from_config": true, + "tf.keras.layers.Masking.get_config": true, + "tf.keras.layers.Masking.get_weights": true, + "tf.keras.layers.Masking.input": true, + "tf.keras.layers.Masking.input_spec": true, + "tf.keras.layers.Masking.losses": true, + "tf.keras.layers.Masking.metrics": true, + "tf.keras.layers.Masking.name": true, + "tf.keras.layers.Masking.name_scope": true, + "tf.keras.layers.Masking.non_trainable_weights": true, + "tf.keras.layers.Masking.output": true, + "tf.keras.layers.Masking.set_weights": true, + "tf.keras.layers.Masking.submodules": true, + "tf.keras.layers.Masking.trainable": true, + "tf.keras.layers.Masking.trainable_weights": true, + "tf.keras.layers.Masking.weights": true, + "tf.keras.layers.Masking.with_name_scope": true, + "tf.keras.layers.MaxPool1D": false, + "tf.keras.layers.MaxPool1D.__call__": true, + "tf.keras.layers.MaxPool1D.__eq__": true, + "tf.keras.layers.MaxPool1D.__ge__": true, + "tf.keras.layers.MaxPool1D.__gt__": true, + "tf.keras.layers.MaxPool1D.__init__": true, + "tf.keras.layers.MaxPool1D.__le__": true, + "tf.keras.layers.MaxPool1D.__lt__": true, + 
"tf.keras.layers.MaxPool1D.__ne__": true, + "tf.keras.layers.MaxPool1D.__new__": true, + "tf.keras.layers.MaxPool1D.activity_regularizer": true, + "tf.keras.layers.MaxPool1D.add_loss": true, + "tf.keras.layers.MaxPool1D.add_metric": true, + "tf.keras.layers.MaxPool1D.add_weight": true, + "tf.keras.layers.MaxPool1D.build": true, + "tf.keras.layers.MaxPool1D.call": true, + "tf.keras.layers.MaxPool1D.compute_mask": true, + "tf.keras.layers.MaxPool1D.compute_output_shape": true, + "tf.keras.layers.MaxPool1D.compute_output_signature": true, + "tf.keras.layers.MaxPool1D.count_params": true, + "tf.keras.layers.MaxPool1D.dtype": true, + "tf.keras.layers.MaxPool1D.dynamic": true, + "tf.keras.layers.MaxPool1D.from_config": true, + "tf.keras.layers.MaxPool1D.get_config": true, + "tf.keras.layers.MaxPool1D.get_weights": true, + "tf.keras.layers.MaxPool1D.input": true, + "tf.keras.layers.MaxPool1D.input_spec": true, + "tf.keras.layers.MaxPool1D.losses": true, + "tf.keras.layers.MaxPool1D.metrics": true, + "tf.keras.layers.MaxPool1D.name": true, + "tf.keras.layers.MaxPool1D.name_scope": true, + "tf.keras.layers.MaxPool1D.non_trainable_weights": true, + "tf.keras.layers.MaxPool1D.output": true, + "tf.keras.layers.MaxPool1D.set_weights": true, + "tf.keras.layers.MaxPool1D.submodules": true, + "tf.keras.layers.MaxPool1D.trainable": true, + "tf.keras.layers.MaxPool1D.trainable_weights": true, + "tf.keras.layers.MaxPool1D.weights": true, + "tf.keras.layers.MaxPool1D.with_name_scope": true, + "tf.keras.layers.MaxPool2D": false, + "tf.keras.layers.MaxPool2D.__call__": true, + "tf.keras.layers.MaxPool2D.__eq__": true, + "tf.keras.layers.MaxPool2D.__ge__": true, + "tf.keras.layers.MaxPool2D.__gt__": true, + "tf.keras.layers.MaxPool2D.__init__": true, + "tf.keras.layers.MaxPool2D.__le__": true, + "tf.keras.layers.MaxPool2D.__lt__": true, + "tf.keras.layers.MaxPool2D.__ne__": true, + "tf.keras.layers.MaxPool2D.__new__": true, + "tf.keras.layers.MaxPool2D.activity_regularizer": true, + "tf.keras.layers.MaxPool2D.add_loss": true, + "tf.keras.layers.MaxPool2D.add_metric": true, + "tf.keras.layers.MaxPool2D.add_weight": true, + "tf.keras.layers.MaxPool2D.build": true, + "tf.keras.layers.MaxPool2D.call": true, + "tf.keras.layers.MaxPool2D.compute_mask": true, + "tf.keras.layers.MaxPool2D.compute_output_shape": true, + "tf.keras.layers.MaxPool2D.compute_output_signature": true, + "tf.keras.layers.MaxPool2D.count_params": true, + "tf.keras.layers.MaxPool2D.dtype": true, + "tf.keras.layers.MaxPool2D.dynamic": true, + "tf.keras.layers.MaxPool2D.from_config": true, + "tf.keras.layers.MaxPool2D.get_config": true, + "tf.keras.layers.MaxPool2D.get_weights": true, + "tf.keras.layers.MaxPool2D.input": true, + "tf.keras.layers.MaxPool2D.input_spec": true, + "tf.keras.layers.MaxPool2D.losses": true, + "tf.keras.layers.MaxPool2D.metrics": true, + "tf.keras.layers.MaxPool2D.name": true, + "tf.keras.layers.MaxPool2D.name_scope": true, + "tf.keras.layers.MaxPool2D.non_trainable_weights": true, + "tf.keras.layers.MaxPool2D.output": true, + "tf.keras.layers.MaxPool2D.set_weights": true, + "tf.keras.layers.MaxPool2D.submodules": true, + "tf.keras.layers.MaxPool2D.trainable": true, + "tf.keras.layers.MaxPool2D.trainable_weights": true, + "tf.keras.layers.MaxPool2D.weights": true, + "tf.keras.layers.MaxPool2D.with_name_scope": true, + "tf.keras.layers.MaxPool3D": false, + "tf.keras.layers.MaxPool3D.__call__": true, + "tf.keras.layers.MaxPool3D.__eq__": true, + "tf.keras.layers.MaxPool3D.__ge__": true, + "tf.keras.layers.MaxPool3D.__gt__": 
true, + "tf.keras.layers.MaxPool3D.__init__": true, + "tf.keras.layers.MaxPool3D.__le__": true, + "tf.keras.layers.MaxPool3D.__lt__": true, + "tf.keras.layers.MaxPool3D.__ne__": true, + "tf.keras.layers.MaxPool3D.__new__": true, + "tf.keras.layers.MaxPool3D.activity_regularizer": true, + "tf.keras.layers.MaxPool3D.add_loss": true, + "tf.keras.layers.MaxPool3D.add_metric": true, + "tf.keras.layers.MaxPool3D.add_weight": true, + "tf.keras.layers.MaxPool3D.build": true, + "tf.keras.layers.MaxPool3D.call": true, + "tf.keras.layers.MaxPool3D.compute_mask": true, + "tf.keras.layers.MaxPool3D.compute_output_shape": true, + "tf.keras.layers.MaxPool3D.compute_output_signature": true, + "tf.keras.layers.MaxPool3D.count_params": true, + "tf.keras.layers.MaxPool3D.dtype": true, + "tf.keras.layers.MaxPool3D.dynamic": true, + "tf.keras.layers.MaxPool3D.from_config": true, + "tf.keras.layers.MaxPool3D.get_config": true, + "tf.keras.layers.MaxPool3D.get_weights": true, + "tf.keras.layers.MaxPool3D.input": true, + "tf.keras.layers.MaxPool3D.input_spec": true, + "tf.keras.layers.MaxPool3D.losses": true, + "tf.keras.layers.MaxPool3D.metrics": true, + "tf.keras.layers.MaxPool3D.name": true, + "tf.keras.layers.MaxPool3D.name_scope": true, + "tf.keras.layers.MaxPool3D.non_trainable_weights": true, + "tf.keras.layers.MaxPool3D.output": true, + "tf.keras.layers.MaxPool3D.set_weights": true, + "tf.keras.layers.MaxPool3D.submodules": true, + "tf.keras.layers.MaxPool3D.trainable": true, + "tf.keras.layers.MaxPool3D.trainable_weights": true, + "tf.keras.layers.MaxPool3D.weights": true, + "tf.keras.layers.MaxPool3D.with_name_scope": true, + "tf.keras.layers.MaxPooling1D": false, + "tf.keras.layers.MaxPooling1D.__call__": true, + "tf.keras.layers.MaxPooling1D.__eq__": true, + "tf.keras.layers.MaxPooling1D.__ge__": true, + "tf.keras.layers.MaxPooling1D.__gt__": true, + "tf.keras.layers.MaxPooling1D.__init__": true, + "tf.keras.layers.MaxPooling1D.__le__": true, + "tf.keras.layers.MaxPooling1D.__lt__": true, + "tf.keras.layers.MaxPooling1D.__ne__": true, + "tf.keras.layers.MaxPooling1D.__new__": true, + "tf.keras.layers.MaxPooling1D.activity_regularizer": true, + "tf.keras.layers.MaxPooling1D.add_loss": true, + "tf.keras.layers.MaxPooling1D.add_metric": true, + "tf.keras.layers.MaxPooling1D.add_weight": true, + "tf.keras.layers.MaxPooling1D.build": true, + "tf.keras.layers.MaxPooling1D.call": true, + "tf.keras.layers.MaxPooling1D.compute_mask": true, + "tf.keras.layers.MaxPooling1D.compute_output_shape": true, + "tf.keras.layers.MaxPooling1D.compute_output_signature": true, + "tf.keras.layers.MaxPooling1D.count_params": true, + "tf.keras.layers.MaxPooling1D.dtype": true, + "tf.keras.layers.MaxPooling1D.dynamic": true, + "tf.keras.layers.MaxPooling1D.from_config": true, + "tf.keras.layers.MaxPooling1D.get_config": true, + "tf.keras.layers.MaxPooling1D.get_weights": true, + "tf.keras.layers.MaxPooling1D.input": true, + "tf.keras.layers.MaxPooling1D.input_spec": true, + "tf.keras.layers.MaxPooling1D.losses": true, + "tf.keras.layers.MaxPooling1D.metrics": true, + "tf.keras.layers.MaxPooling1D.name": true, + "tf.keras.layers.MaxPooling1D.name_scope": true, + "tf.keras.layers.MaxPooling1D.non_trainable_weights": true, + "tf.keras.layers.MaxPooling1D.output": true, + "tf.keras.layers.MaxPooling1D.set_weights": true, + "tf.keras.layers.MaxPooling1D.submodules": true, + "tf.keras.layers.MaxPooling1D.trainable": true, + "tf.keras.layers.MaxPooling1D.trainable_weights": true, + "tf.keras.layers.MaxPooling1D.weights": true, + 
"tf.keras.layers.MaxPooling1D.with_name_scope": true, + "tf.keras.layers.MaxPooling2D": false, + "tf.keras.layers.MaxPooling2D.__call__": true, + "tf.keras.layers.MaxPooling2D.__eq__": true, + "tf.keras.layers.MaxPooling2D.__ge__": true, + "tf.keras.layers.MaxPooling2D.__gt__": true, + "tf.keras.layers.MaxPooling2D.__init__": true, + "tf.keras.layers.MaxPooling2D.__le__": true, + "tf.keras.layers.MaxPooling2D.__lt__": true, + "tf.keras.layers.MaxPooling2D.__ne__": true, + "tf.keras.layers.MaxPooling2D.__new__": true, + "tf.keras.layers.MaxPooling2D.activity_regularizer": true, + "tf.keras.layers.MaxPooling2D.add_loss": true, + "tf.keras.layers.MaxPooling2D.add_metric": true, + "tf.keras.layers.MaxPooling2D.add_weight": true, + "tf.keras.layers.MaxPooling2D.build": true, + "tf.keras.layers.MaxPooling2D.call": true, + "tf.keras.layers.MaxPooling2D.compute_mask": true, + "tf.keras.layers.MaxPooling2D.compute_output_shape": true, + "tf.keras.layers.MaxPooling2D.compute_output_signature": true, + "tf.keras.layers.MaxPooling2D.count_params": true, + "tf.keras.layers.MaxPooling2D.dtype": true, + "tf.keras.layers.MaxPooling2D.dynamic": true, + "tf.keras.layers.MaxPooling2D.from_config": true, + "tf.keras.layers.MaxPooling2D.get_config": true, + "tf.keras.layers.MaxPooling2D.get_weights": true, + "tf.keras.layers.MaxPooling2D.input": true, + "tf.keras.layers.MaxPooling2D.input_spec": true, + "tf.keras.layers.MaxPooling2D.losses": true, + "tf.keras.layers.MaxPooling2D.metrics": true, + "tf.keras.layers.MaxPooling2D.name": true, + "tf.keras.layers.MaxPooling2D.name_scope": true, + "tf.keras.layers.MaxPooling2D.non_trainable_weights": true, + "tf.keras.layers.MaxPooling2D.output": true, + "tf.keras.layers.MaxPooling2D.set_weights": true, + "tf.keras.layers.MaxPooling2D.submodules": true, + "tf.keras.layers.MaxPooling2D.trainable": true, + "tf.keras.layers.MaxPooling2D.trainable_weights": true, + "tf.keras.layers.MaxPooling2D.weights": true, + "tf.keras.layers.MaxPooling2D.with_name_scope": true, + "tf.keras.layers.MaxPooling3D": false, + "tf.keras.layers.MaxPooling3D.__call__": true, + "tf.keras.layers.MaxPooling3D.__eq__": true, + "tf.keras.layers.MaxPooling3D.__ge__": true, + "tf.keras.layers.MaxPooling3D.__gt__": true, + "tf.keras.layers.MaxPooling3D.__init__": true, + "tf.keras.layers.MaxPooling3D.__le__": true, + "tf.keras.layers.MaxPooling3D.__lt__": true, + "tf.keras.layers.MaxPooling3D.__ne__": true, + "tf.keras.layers.MaxPooling3D.__new__": true, + "tf.keras.layers.MaxPooling3D.activity_regularizer": true, + "tf.keras.layers.MaxPooling3D.add_loss": true, + "tf.keras.layers.MaxPooling3D.add_metric": true, + "tf.keras.layers.MaxPooling3D.add_weight": true, + "tf.keras.layers.MaxPooling3D.build": true, + "tf.keras.layers.MaxPooling3D.call": true, + "tf.keras.layers.MaxPooling3D.compute_mask": true, + "tf.keras.layers.MaxPooling3D.compute_output_shape": true, + "tf.keras.layers.MaxPooling3D.compute_output_signature": true, + "tf.keras.layers.MaxPooling3D.count_params": true, + "tf.keras.layers.MaxPooling3D.dtype": true, + "tf.keras.layers.MaxPooling3D.dynamic": true, + "tf.keras.layers.MaxPooling3D.from_config": true, + "tf.keras.layers.MaxPooling3D.get_config": true, + "tf.keras.layers.MaxPooling3D.get_weights": true, + "tf.keras.layers.MaxPooling3D.input": true, + "tf.keras.layers.MaxPooling3D.input_spec": true, + "tf.keras.layers.MaxPooling3D.losses": true, + "tf.keras.layers.MaxPooling3D.metrics": true, + "tf.keras.layers.MaxPooling3D.name": true, + "tf.keras.layers.MaxPooling3D.name_scope": 
true, + "tf.keras.layers.MaxPooling3D.non_trainable_weights": true, + "tf.keras.layers.MaxPooling3D.output": true, + "tf.keras.layers.MaxPooling3D.set_weights": true, + "tf.keras.layers.MaxPooling3D.submodules": true, + "tf.keras.layers.MaxPooling3D.trainable": true, + "tf.keras.layers.MaxPooling3D.trainable_weights": true, + "tf.keras.layers.MaxPooling3D.weights": true, + "tf.keras.layers.MaxPooling3D.with_name_scope": true, + "tf.keras.layers.Maximum": false, + "tf.keras.layers.Maximum.__call__": true, + "tf.keras.layers.Maximum.__eq__": true, + "tf.keras.layers.Maximum.__ge__": true, + "tf.keras.layers.Maximum.__gt__": true, + "tf.keras.layers.Maximum.__init__": true, + "tf.keras.layers.Maximum.__le__": true, + "tf.keras.layers.Maximum.__lt__": true, + "tf.keras.layers.Maximum.__ne__": true, + "tf.keras.layers.Maximum.__new__": true, + "tf.keras.layers.Maximum.activity_regularizer": true, + "tf.keras.layers.Maximum.add_loss": true, + "tf.keras.layers.Maximum.add_metric": true, + "tf.keras.layers.Maximum.add_weight": true, + "tf.keras.layers.Maximum.build": true, + "tf.keras.layers.Maximum.call": true, + "tf.keras.layers.Maximum.compute_mask": true, + "tf.keras.layers.Maximum.compute_output_shape": true, + "tf.keras.layers.Maximum.compute_output_signature": true, + "tf.keras.layers.Maximum.count_params": true, + "tf.keras.layers.Maximum.dtype": true, + "tf.keras.layers.Maximum.dynamic": true, + "tf.keras.layers.Maximum.from_config": true, + "tf.keras.layers.Maximum.get_config": true, + "tf.keras.layers.Maximum.get_weights": true, + "tf.keras.layers.Maximum.input": true, + "tf.keras.layers.Maximum.input_spec": true, + "tf.keras.layers.Maximum.losses": true, + "tf.keras.layers.Maximum.metrics": true, + "tf.keras.layers.Maximum.name": true, + "tf.keras.layers.Maximum.name_scope": true, + "tf.keras.layers.Maximum.non_trainable_weights": true, + "tf.keras.layers.Maximum.output": true, + "tf.keras.layers.Maximum.set_weights": true, + "tf.keras.layers.Maximum.submodules": true, + "tf.keras.layers.Maximum.trainable": true, + "tf.keras.layers.Maximum.trainable_weights": true, + "tf.keras.layers.Maximum.weights": true, + "tf.keras.layers.Maximum.with_name_scope": true, + "tf.keras.layers.Minimum": false, + "tf.keras.layers.Minimum.__call__": true, + "tf.keras.layers.Minimum.__eq__": true, + "tf.keras.layers.Minimum.__ge__": true, + "tf.keras.layers.Minimum.__gt__": true, + "tf.keras.layers.Minimum.__init__": true, + "tf.keras.layers.Minimum.__le__": true, + "tf.keras.layers.Minimum.__lt__": true, + "tf.keras.layers.Minimum.__ne__": true, + "tf.keras.layers.Minimum.__new__": true, + "tf.keras.layers.Minimum.activity_regularizer": true, + "tf.keras.layers.Minimum.add_loss": true, + "tf.keras.layers.Minimum.add_metric": true, + "tf.keras.layers.Minimum.add_weight": true, + "tf.keras.layers.Minimum.build": true, + "tf.keras.layers.Minimum.call": true, + "tf.keras.layers.Minimum.compute_mask": true, + "tf.keras.layers.Minimum.compute_output_shape": true, + "tf.keras.layers.Minimum.compute_output_signature": true, + "tf.keras.layers.Minimum.count_params": true, + "tf.keras.layers.Minimum.dtype": true, + "tf.keras.layers.Minimum.dynamic": true, + "tf.keras.layers.Minimum.from_config": true, + "tf.keras.layers.Minimum.get_config": true, + "tf.keras.layers.Minimum.get_weights": true, + "tf.keras.layers.Minimum.input": true, + "tf.keras.layers.Minimum.input_spec": true, + "tf.keras.layers.Minimum.losses": true, + "tf.keras.layers.Minimum.metrics": true, + "tf.keras.layers.Minimum.name": true, + 
"tf.keras.layers.Minimum.name_scope": true, + "tf.keras.layers.Minimum.non_trainable_weights": true, + "tf.keras.layers.Minimum.output": true, + "tf.keras.layers.Minimum.set_weights": true, + "tf.keras.layers.Minimum.submodules": true, + "tf.keras.layers.Minimum.trainable": true, + "tf.keras.layers.Minimum.trainable_weights": true, + "tf.keras.layers.Minimum.weights": true, + "tf.keras.layers.Minimum.with_name_scope": true, + "tf.keras.layers.Multiply": false, + "tf.keras.layers.Multiply.__call__": true, + "tf.keras.layers.Multiply.__eq__": true, + "tf.keras.layers.Multiply.__ge__": true, + "tf.keras.layers.Multiply.__gt__": true, + "tf.keras.layers.Multiply.__init__": true, + "tf.keras.layers.Multiply.__le__": true, + "tf.keras.layers.Multiply.__lt__": true, + "tf.keras.layers.Multiply.__ne__": true, + "tf.keras.layers.Multiply.__new__": true, + "tf.keras.layers.Multiply.activity_regularizer": true, + "tf.keras.layers.Multiply.add_loss": true, + "tf.keras.layers.Multiply.add_metric": true, + "tf.keras.layers.Multiply.add_weight": true, + "tf.keras.layers.Multiply.build": true, + "tf.keras.layers.Multiply.call": true, + "tf.keras.layers.Multiply.compute_mask": true, + "tf.keras.layers.Multiply.compute_output_shape": true, + "tf.keras.layers.Multiply.compute_output_signature": true, + "tf.keras.layers.Multiply.count_params": true, + "tf.keras.layers.Multiply.dtype": true, + "tf.keras.layers.Multiply.dynamic": true, + "tf.keras.layers.Multiply.from_config": true, + "tf.keras.layers.Multiply.get_config": true, + "tf.keras.layers.Multiply.get_weights": true, + "tf.keras.layers.Multiply.input": true, + "tf.keras.layers.Multiply.input_spec": true, + "tf.keras.layers.Multiply.losses": true, + "tf.keras.layers.Multiply.metrics": true, + "tf.keras.layers.Multiply.name": true, + "tf.keras.layers.Multiply.name_scope": true, + "tf.keras.layers.Multiply.non_trainable_weights": true, + "tf.keras.layers.Multiply.output": true, + "tf.keras.layers.Multiply.set_weights": true, + "tf.keras.layers.Multiply.submodules": true, + "tf.keras.layers.Multiply.trainable": true, + "tf.keras.layers.Multiply.trainable_weights": true, + "tf.keras.layers.Multiply.weights": true, + "tf.keras.layers.Multiply.with_name_scope": true, + "tf.keras.layers.PReLU": false, + "tf.keras.layers.PReLU.__call__": true, + "tf.keras.layers.PReLU.__eq__": true, + "tf.keras.layers.PReLU.__ge__": true, + "tf.keras.layers.PReLU.__gt__": true, + "tf.keras.layers.PReLU.__init__": true, + "tf.keras.layers.PReLU.__le__": true, + "tf.keras.layers.PReLU.__lt__": true, + "tf.keras.layers.PReLU.__ne__": true, + "tf.keras.layers.PReLU.__new__": true, + "tf.keras.layers.PReLU.activity_regularizer": true, + "tf.keras.layers.PReLU.add_loss": true, + "tf.keras.layers.PReLU.add_metric": true, + "tf.keras.layers.PReLU.add_weight": true, + "tf.keras.layers.PReLU.build": true, + "tf.keras.layers.PReLU.call": true, + "tf.keras.layers.PReLU.compute_mask": true, + "tf.keras.layers.PReLU.compute_output_shape": true, + "tf.keras.layers.PReLU.compute_output_signature": true, + "tf.keras.layers.PReLU.count_params": true, + "tf.keras.layers.PReLU.dtype": true, + "tf.keras.layers.PReLU.dynamic": true, + "tf.keras.layers.PReLU.from_config": true, + "tf.keras.layers.PReLU.get_config": true, + "tf.keras.layers.PReLU.get_weights": true, + "tf.keras.layers.PReLU.input": true, + "tf.keras.layers.PReLU.input_spec": true, + "tf.keras.layers.PReLU.losses": true, + "tf.keras.layers.PReLU.metrics": true, + "tf.keras.layers.PReLU.name": true, + "tf.keras.layers.PReLU.name_scope": 
true, + "tf.keras.layers.PReLU.non_trainable_weights": true, + "tf.keras.layers.PReLU.output": true, + "tf.keras.layers.PReLU.set_weights": true, + "tf.keras.layers.PReLU.submodules": true, + "tf.keras.layers.PReLU.trainable": true, + "tf.keras.layers.PReLU.trainable_weights": true, + "tf.keras.layers.PReLU.weights": true, + "tf.keras.layers.PReLU.with_name_scope": true, + "tf.keras.layers.Permute": false, + "tf.keras.layers.Permute.__call__": true, + "tf.keras.layers.Permute.__eq__": true, + "tf.keras.layers.Permute.__ge__": true, + "tf.keras.layers.Permute.__gt__": true, + "tf.keras.layers.Permute.__init__": true, + "tf.keras.layers.Permute.__le__": true, + "tf.keras.layers.Permute.__lt__": true, + "tf.keras.layers.Permute.__ne__": true, + "tf.keras.layers.Permute.__new__": true, + "tf.keras.layers.Permute.activity_regularizer": true, + "tf.keras.layers.Permute.add_loss": true, + "tf.keras.layers.Permute.add_metric": true, + "tf.keras.layers.Permute.add_weight": true, + "tf.keras.layers.Permute.build": true, + "tf.keras.layers.Permute.call": true, + "tf.keras.layers.Permute.compute_mask": true, + "tf.keras.layers.Permute.compute_output_shape": true, + "tf.keras.layers.Permute.compute_output_signature": true, + "tf.keras.layers.Permute.count_params": true, + "tf.keras.layers.Permute.dtype": true, + "tf.keras.layers.Permute.dynamic": true, + "tf.keras.layers.Permute.from_config": true, + "tf.keras.layers.Permute.get_config": true, + "tf.keras.layers.Permute.get_weights": true, + "tf.keras.layers.Permute.input": true, + "tf.keras.layers.Permute.input_spec": true, + "tf.keras.layers.Permute.losses": true, + "tf.keras.layers.Permute.metrics": true, + "tf.keras.layers.Permute.name": true, + "tf.keras.layers.Permute.name_scope": true, + "tf.keras.layers.Permute.non_trainable_weights": true, + "tf.keras.layers.Permute.output": true, + "tf.keras.layers.Permute.set_weights": true, + "tf.keras.layers.Permute.submodules": true, + "tf.keras.layers.Permute.trainable": true, + "tf.keras.layers.Permute.trainable_weights": true, + "tf.keras.layers.Permute.weights": true, + "tf.keras.layers.Permute.with_name_scope": true, + "tf.keras.layers.RNN": false, + "tf.keras.layers.RNN.__call__": true, + "tf.keras.layers.RNN.__eq__": true, + "tf.keras.layers.RNN.__ge__": true, + "tf.keras.layers.RNN.__gt__": true, + "tf.keras.layers.RNN.__init__": true, + "tf.keras.layers.RNN.__le__": true, + "tf.keras.layers.RNN.__lt__": true, + "tf.keras.layers.RNN.__ne__": true, + "tf.keras.layers.RNN.__new__": true, + "tf.keras.layers.RNN.activity_regularizer": true, + "tf.keras.layers.RNN.add_loss": true, + "tf.keras.layers.RNN.add_metric": true, + "tf.keras.layers.RNN.add_weight": true, + "tf.keras.layers.RNN.build": true, + "tf.keras.layers.RNN.call": true, + "tf.keras.layers.RNN.compute_mask": true, + "tf.keras.layers.RNN.compute_output_shape": true, + "tf.keras.layers.RNN.compute_output_signature": true, + "tf.keras.layers.RNN.count_params": true, + "tf.keras.layers.RNN.dtype": true, + "tf.keras.layers.RNN.dynamic": true, + "tf.keras.layers.RNN.from_config": true, + "tf.keras.layers.RNN.get_config": true, + "tf.keras.layers.RNN.get_weights": true, + "tf.keras.layers.RNN.input": true, + "tf.keras.layers.RNN.input_spec": true, + "tf.keras.layers.RNN.losses": true, + "tf.keras.layers.RNN.metrics": true, + "tf.keras.layers.RNN.name": true, + "tf.keras.layers.RNN.name_scope": true, + "tf.keras.layers.RNN.non_trainable_weights": true, + "tf.keras.layers.RNN.output": true, + "tf.keras.layers.RNN.reset_states": true, + 
"tf.keras.layers.RNN.set_weights": true, + "tf.keras.layers.RNN.states": true, + "tf.keras.layers.RNN.submodules": true, + "tf.keras.layers.RNN.trainable": true, + "tf.keras.layers.RNN.trainable_weights": true, + "tf.keras.layers.RNN.weights": true, + "tf.keras.layers.RNN.with_name_scope": true, + "tf.keras.layers.ReLU": false, + "tf.keras.layers.ReLU.__call__": true, + "tf.keras.layers.ReLU.__eq__": true, + "tf.keras.layers.ReLU.__ge__": true, + "tf.keras.layers.ReLU.__gt__": true, + "tf.keras.layers.ReLU.__init__": true, + "tf.keras.layers.ReLU.__le__": true, + "tf.keras.layers.ReLU.__lt__": true, + "tf.keras.layers.ReLU.__ne__": true, + "tf.keras.layers.ReLU.__new__": true, + "tf.keras.layers.ReLU.activity_regularizer": true, + "tf.keras.layers.ReLU.add_loss": true, + "tf.keras.layers.ReLU.add_metric": true, + "tf.keras.layers.ReLU.add_weight": true, + "tf.keras.layers.ReLU.build": true, + "tf.keras.layers.ReLU.call": true, + "tf.keras.layers.ReLU.compute_mask": true, + "tf.keras.layers.ReLU.compute_output_shape": true, + "tf.keras.layers.ReLU.compute_output_signature": true, + "tf.keras.layers.ReLU.count_params": true, + "tf.keras.layers.ReLU.dtype": true, + "tf.keras.layers.ReLU.dynamic": true, + "tf.keras.layers.ReLU.from_config": true, + "tf.keras.layers.ReLU.get_config": true, + "tf.keras.layers.ReLU.get_weights": true, + "tf.keras.layers.ReLU.input": true, + "tf.keras.layers.ReLU.input_spec": true, + "tf.keras.layers.ReLU.losses": true, + "tf.keras.layers.ReLU.metrics": true, + "tf.keras.layers.ReLU.name": true, + "tf.keras.layers.ReLU.name_scope": true, + "tf.keras.layers.ReLU.non_trainable_weights": true, + "tf.keras.layers.ReLU.output": true, + "tf.keras.layers.ReLU.set_weights": true, + "tf.keras.layers.ReLU.submodules": true, + "tf.keras.layers.ReLU.trainable": true, + "tf.keras.layers.ReLU.trainable_weights": true, + "tf.keras.layers.ReLU.weights": true, + "tf.keras.layers.ReLU.with_name_scope": true, + "tf.keras.layers.RepeatVector": false, + "tf.keras.layers.RepeatVector.__call__": true, + "tf.keras.layers.RepeatVector.__eq__": true, + "tf.keras.layers.RepeatVector.__ge__": true, + "tf.keras.layers.RepeatVector.__gt__": true, + "tf.keras.layers.RepeatVector.__init__": true, + "tf.keras.layers.RepeatVector.__le__": true, + "tf.keras.layers.RepeatVector.__lt__": true, + "tf.keras.layers.RepeatVector.__ne__": true, + "tf.keras.layers.RepeatVector.__new__": true, + "tf.keras.layers.RepeatVector.activity_regularizer": true, + "tf.keras.layers.RepeatVector.add_loss": true, + "tf.keras.layers.RepeatVector.add_metric": true, + "tf.keras.layers.RepeatVector.add_weight": true, + "tf.keras.layers.RepeatVector.build": true, + "tf.keras.layers.RepeatVector.call": true, + "tf.keras.layers.RepeatVector.compute_mask": true, + "tf.keras.layers.RepeatVector.compute_output_shape": true, + "tf.keras.layers.RepeatVector.compute_output_signature": true, + "tf.keras.layers.RepeatVector.count_params": true, + "tf.keras.layers.RepeatVector.dtype": true, + "tf.keras.layers.RepeatVector.dynamic": true, + "tf.keras.layers.RepeatVector.from_config": true, + "tf.keras.layers.RepeatVector.get_config": true, + "tf.keras.layers.RepeatVector.get_weights": true, + "tf.keras.layers.RepeatVector.input": true, + "tf.keras.layers.RepeatVector.input_spec": true, + "tf.keras.layers.RepeatVector.losses": true, + "tf.keras.layers.RepeatVector.metrics": true, + "tf.keras.layers.RepeatVector.name": true, + "tf.keras.layers.RepeatVector.name_scope": true, + "tf.keras.layers.RepeatVector.non_trainable_weights": true, + 
"tf.keras.layers.RepeatVector.output": true, + "tf.keras.layers.RepeatVector.set_weights": true, + "tf.keras.layers.RepeatVector.submodules": true, + "tf.keras.layers.RepeatVector.trainable": true, + "tf.keras.layers.RepeatVector.trainable_weights": true, + "tf.keras.layers.RepeatVector.weights": true, + "tf.keras.layers.RepeatVector.with_name_scope": true, + "tf.keras.layers.Reshape": false, + "tf.keras.layers.Reshape.__call__": true, + "tf.keras.layers.Reshape.__eq__": true, + "tf.keras.layers.Reshape.__ge__": true, + "tf.keras.layers.Reshape.__gt__": true, + "tf.keras.layers.Reshape.__init__": true, + "tf.keras.layers.Reshape.__le__": true, + "tf.keras.layers.Reshape.__lt__": true, + "tf.keras.layers.Reshape.__ne__": true, + "tf.keras.layers.Reshape.__new__": true, + "tf.keras.layers.Reshape.activity_regularizer": true, + "tf.keras.layers.Reshape.add_loss": true, + "tf.keras.layers.Reshape.add_metric": true, + "tf.keras.layers.Reshape.add_weight": true, + "tf.keras.layers.Reshape.build": true, + "tf.keras.layers.Reshape.call": true, + "tf.keras.layers.Reshape.compute_mask": true, + "tf.keras.layers.Reshape.compute_output_shape": true, + "tf.keras.layers.Reshape.compute_output_signature": true, + "tf.keras.layers.Reshape.count_params": true, + "tf.keras.layers.Reshape.dtype": true, + "tf.keras.layers.Reshape.dynamic": true, + "tf.keras.layers.Reshape.from_config": true, + "tf.keras.layers.Reshape.get_config": true, + "tf.keras.layers.Reshape.get_weights": true, + "tf.keras.layers.Reshape.input": true, + "tf.keras.layers.Reshape.input_spec": true, + "tf.keras.layers.Reshape.losses": true, + "tf.keras.layers.Reshape.metrics": true, + "tf.keras.layers.Reshape.name": true, + "tf.keras.layers.Reshape.name_scope": true, + "tf.keras.layers.Reshape.non_trainable_weights": true, + "tf.keras.layers.Reshape.output": true, + "tf.keras.layers.Reshape.set_weights": true, + "tf.keras.layers.Reshape.submodules": true, + "tf.keras.layers.Reshape.trainable": true, + "tf.keras.layers.Reshape.trainable_weights": true, + "tf.keras.layers.Reshape.weights": true, + "tf.keras.layers.Reshape.with_name_scope": true, + "tf.keras.layers.SeparableConv1D": false, + "tf.keras.layers.SeparableConv1D.__call__": true, + "tf.keras.layers.SeparableConv1D.__eq__": true, + "tf.keras.layers.SeparableConv1D.__ge__": true, + "tf.keras.layers.SeparableConv1D.__gt__": true, + "tf.keras.layers.SeparableConv1D.__init__": true, + "tf.keras.layers.SeparableConv1D.__le__": true, + "tf.keras.layers.SeparableConv1D.__lt__": true, + "tf.keras.layers.SeparableConv1D.__ne__": true, + "tf.keras.layers.SeparableConv1D.__new__": true, + "tf.keras.layers.SeparableConv1D.activity_regularizer": true, + "tf.keras.layers.SeparableConv1D.add_loss": true, + "tf.keras.layers.SeparableConv1D.add_metric": true, + "tf.keras.layers.SeparableConv1D.add_weight": true, + "tf.keras.layers.SeparableConv1D.build": true, + "tf.keras.layers.SeparableConv1D.call": true, + "tf.keras.layers.SeparableConv1D.compute_mask": true, + "tf.keras.layers.SeparableConv1D.compute_output_shape": true, + "tf.keras.layers.SeparableConv1D.compute_output_signature": true, + "tf.keras.layers.SeparableConv1D.count_params": true, + "tf.keras.layers.SeparableConv1D.dtype": true, + "tf.keras.layers.SeparableConv1D.dynamic": true, + "tf.keras.layers.SeparableConv1D.from_config": true, + "tf.keras.layers.SeparableConv1D.get_config": true, + "tf.keras.layers.SeparableConv1D.get_weights": true, + "tf.keras.layers.SeparableConv1D.input": true, + "tf.keras.layers.SeparableConv1D.input_spec": 
true, + "tf.keras.layers.SeparableConv1D.losses": true, + "tf.keras.layers.SeparableConv1D.metrics": true, + "tf.keras.layers.SeparableConv1D.name": true, + "tf.keras.layers.SeparableConv1D.name_scope": true, + "tf.keras.layers.SeparableConv1D.non_trainable_weights": true, + "tf.keras.layers.SeparableConv1D.output": true, + "tf.keras.layers.SeparableConv1D.set_weights": true, + "tf.keras.layers.SeparableConv1D.submodules": true, + "tf.keras.layers.SeparableConv1D.trainable": true, + "tf.keras.layers.SeparableConv1D.trainable_weights": true, + "tf.keras.layers.SeparableConv1D.weights": true, + "tf.keras.layers.SeparableConv1D.with_name_scope": true, + "tf.keras.layers.SeparableConv2D": false, + "tf.keras.layers.SeparableConv2D.__call__": true, + "tf.keras.layers.SeparableConv2D.__eq__": true, + "tf.keras.layers.SeparableConv2D.__ge__": true, + "tf.keras.layers.SeparableConv2D.__gt__": true, + "tf.keras.layers.SeparableConv2D.__init__": true, + "tf.keras.layers.SeparableConv2D.__le__": true, + "tf.keras.layers.SeparableConv2D.__lt__": true, + "tf.keras.layers.SeparableConv2D.__ne__": true, + "tf.keras.layers.SeparableConv2D.__new__": true, + "tf.keras.layers.SeparableConv2D.activity_regularizer": true, + "tf.keras.layers.SeparableConv2D.add_loss": true, + "tf.keras.layers.SeparableConv2D.add_metric": true, + "tf.keras.layers.SeparableConv2D.add_weight": true, + "tf.keras.layers.SeparableConv2D.build": true, + "tf.keras.layers.SeparableConv2D.call": true, + "tf.keras.layers.SeparableConv2D.compute_mask": true, + "tf.keras.layers.SeparableConv2D.compute_output_shape": true, + "tf.keras.layers.SeparableConv2D.compute_output_signature": true, + "tf.keras.layers.SeparableConv2D.count_params": true, + "tf.keras.layers.SeparableConv2D.dtype": true, + "tf.keras.layers.SeparableConv2D.dynamic": true, + "tf.keras.layers.SeparableConv2D.from_config": true, + "tf.keras.layers.SeparableConv2D.get_config": true, + "tf.keras.layers.SeparableConv2D.get_weights": true, + "tf.keras.layers.SeparableConv2D.input": true, + "tf.keras.layers.SeparableConv2D.input_spec": true, + "tf.keras.layers.SeparableConv2D.losses": true, + "tf.keras.layers.SeparableConv2D.metrics": true, + "tf.keras.layers.SeparableConv2D.name": true, + "tf.keras.layers.SeparableConv2D.name_scope": true, + "tf.keras.layers.SeparableConv2D.non_trainable_weights": true, + "tf.keras.layers.SeparableConv2D.output": true, + "tf.keras.layers.SeparableConv2D.set_weights": true, + "tf.keras.layers.SeparableConv2D.submodules": true, + "tf.keras.layers.SeparableConv2D.trainable": true, + "tf.keras.layers.SeparableConv2D.trainable_weights": true, + "tf.keras.layers.SeparableConv2D.weights": true, + "tf.keras.layers.SeparableConv2D.with_name_scope": true, + "tf.keras.layers.SeparableConvolution1D": false, + "tf.keras.layers.SeparableConvolution1D.__call__": true, + "tf.keras.layers.SeparableConvolution1D.__eq__": true, + "tf.keras.layers.SeparableConvolution1D.__ge__": true, + "tf.keras.layers.SeparableConvolution1D.__gt__": true, + "tf.keras.layers.SeparableConvolution1D.__init__": true, + "tf.keras.layers.SeparableConvolution1D.__le__": true, + "tf.keras.layers.SeparableConvolution1D.__lt__": true, + "tf.keras.layers.SeparableConvolution1D.__ne__": true, + "tf.keras.layers.SeparableConvolution1D.__new__": true, + "tf.keras.layers.SeparableConvolution1D.activity_regularizer": true, + "tf.keras.layers.SeparableConvolution1D.add_loss": true, + "tf.keras.layers.SeparableConvolution1D.add_metric": true, + "tf.keras.layers.SeparableConvolution1D.add_weight": 
true, + "tf.keras.layers.SeparableConvolution1D.build": true, + "tf.keras.layers.SeparableConvolution1D.call": true, + "tf.keras.layers.SeparableConvolution1D.compute_mask": true, + "tf.keras.layers.SeparableConvolution1D.compute_output_shape": true, + "tf.keras.layers.SeparableConvolution1D.compute_output_signature": true, + "tf.keras.layers.SeparableConvolution1D.count_params": true, + "tf.keras.layers.SeparableConvolution1D.dtype": true, + "tf.keras.layers.SeparableConvolution1D.dynamic": true, + "tf.keras.layers.SeparableConvolution1D.from_config": true, + "tf.keras.layers.SeparableConvolution1D.get_config": true, + "tf.keras.layers.SeparableConvolution1D.get_weights": true, + "tf.keras.layers.SeparableConvolution1D.input": true, + "tf.keras.layers.SeparableConvolution1D.input_spec": true, + "tf.keras.layers.SeparableConvolution1D.losses": true, + "tf.keras.layers.SeparableConvolution1D.metrics": true, + "tf.keras.layers.SeparableConvolution1D.name": true, + "tf.keras.layers.SeparableConvolution1D.name_scope": true, + "tf.keras.layers.SeparableConvolution1D.non_trainable_weights": true, + "tf.keras.layers.SeparableConvolution1D.output": true, + "tf.keras.layers.SeparableConvolution1D.set_weights": true, + "tf.keras.layers.SeparableConvolution1D.submodules": true, + "tf.keras.layers.SeparableConvolution1D.trainable": true, + "tf.keras.layers.SeparableConvolution1D.trainable_weights": true, + "tf.keras.layers.SeparableConvolution1D.weights": true, + "tf.keras.layers.SeparableConvolution1D.with_name_scope": true, + "tf.keras.layers.SeparableConvolution2D": false, + "tf.keras.layers.SeparableConvolution2D.__call__": true, + "tf.keras.layers.SeparableConvolution2D.__eq__": true, + "tf.keras.layers.SeparableConvolution2D.__ge__": true, + "tf.keras.layers.SeparableConvolution2D.__gt__": true, + "tf.keras.layers.SeparableConvolution2D.__init__": true, + "tf.keras.layers.SeparableConvolution2D.__le__": true, + "tf.keras.layers.SeparableConvolution2D.__lt__": true, + "tf.keras.layers.SeparableConvolution2D.__ne__": true, + "tf.keras.layers.SeparableConvolution2D.__new__": true, + "tf.keras.layers.SeparableConvolution2D.activity_regularizer": true, + "tf.keras.layers.SeparableConvolution2D.add_loss": true, + "tf.keras.layers.SeparableConvolution2D.add_metric": true, + "tf.keras.layers.SeparableConvolution2D.add_weight": true, + "tf.keras.layers.SeparableConvolution2D.build": true, + "tf.keras.layers.SeparableConvolution2D.call": true, + "tf.keras.layers.SeparableConvolution2D.compute_mask": true, + "tf.keras.layers.SeparableConvolution2D.compute_output_shape": true, + "tf.keras.layers.SeparableConvolution2D.compute_output_signature": true, + "tf.keras.layers.SeparableConvolution2D.count_params": true, + "tf.keras.layers.SeparableConvolution2D.dtype": true, + "tf.keras.layers.SeparableConvolution2D.dynamic": true, + "tf.keras.layers.SeparableConvolution2D.from_config": true, + "tf.keras.layers.SeparableConvolution2D.get_config": true, + "tf.keras.layers.SeparableConvolution2D.get_weights": true, + "tf.keras.layers.SeparableConvolution2D.input": true, + "tf.keras.layers.SeparableConvolution2D.input_spec": true, + "tf.keras.layers.SeparableConvolution2D.losses": true, + "tf.keras.layers.SeparableConvolution2D.metrics": true, + "tf.keras.layers.SeparableConvolution2D.name": true, + "tf.keras.layers.SeparableConvolution2D.name_scope": true, + "tf.keras.layers.SeparableConvolution2D.non_trainable_weights": true, + "tf.keras.layers.SeparableConvolution2D.output": true, + 
"tf.keras.layers.SeparableConvolution2D.set_weights": true, + "tf.keras.layers.SeparableConvolution2D.submodules": true, + "tf.keras.layers.SeparableConvolution2D.trainable": true, + "tf.keras.layers.SeparableConvolution2D.trainable_weights": true, + "tf.keras.layers.SeparableConvolution2D.weights": true, + "tf.keras.layers.SeparableConvolution2D.with_name_scope": true, + "tf.keras.layers.SimpleRNN": false, + "tf.keras.layers.SimpleRNN.__call__": true, + "tf.keras.layers.SimpleRNN.__eq__": true, + "tf.keras.layers.SimpleRNN.__ge__": true, + "tf.keras.layers.SimpleRNN.__gt__": true, + "tf.keras.layers.SimpleRNN.__init__": true, + "tf.keras.layers.SimpleRNN.__le__": true, + "tf.keras.layers.SimpleRNN.__lt__": true, + "tf.keras.layers.SimpleRNN.__ne__": true, + "tf.keras.layers.SimpleRNN.__new__": true, + "tf.keras.layers.SimpleRNN.activation": true, + "tf.keras.layers.SimpleRNN.activity_regularizer": true, + "tf.keras.layers.SimpleRNN.add_loss": true, + "tf.keras.layers.SimpleRNN.add_metric": true, + "tf.keras.layers.SimpleRNN.add_weight": true, + "tf.keras.layers.SimpleRNN.bias_constraint": true, + "tf.keras.layers.SimpleRNN.bias_initializer": true, + "tf.keras.layers.SimpleRNN.bias_regularizer": true, + "tf.keras.layers.SimpleRNN.build": true, + "tf.keras.layers.SimpleRNN.call": true, + "tf.keras.layers.SimpleRNN.compute_mask": true, + "tf.keras.layers.SimpleRNN.compute_output_shape": true, + "tf.keras.layers.SimpleRNN.compute_output_signature": true, + "tf.keras.layers.SimpleRNN.count_params": true, + "tf.keras.layers.SimpleRNN.dropout": true, + "tf.keras.layers.SimpleRNN.dtype": true, + "tf.keras.layers.SimpleRNN.dynamic": true, + "tf.keras.layers.SimpleRNN.from_config": true, + "tf.keras.layers.SimpleRNN.get_config": true, + "tf.keras.layers.SimpleRNN.get_weights": true, + "tf.keras.layers.SimpleRNN.input": true, + "tf.keras.layers.SimpleRNN.input_spec": true, + "tf.keras.layers.SimpleRNN.kernel_constraint": true, + "tf.keras.layers.SimpleRNN.kernel_initializer": true, + "tf.keras.layers.SimpleRNN.kernel_regularizer": true, + "tf.keras.layers.SimpleRNN.losses": true, + "tf.keras.layers.SimpleRNN.metrics": true, + "tf.keras.layers.SimpleRNN.name": true, + "tf.keras.layers.SimpleRNN.name_scope": true, + "tf.keras.layers.SimpleRNN.non_trainable_weights": true, + "tf.keras.layers.SimpleRNN.output": true, + "tf.keras.layers.SimpleRNN.recurrent_constraint": true, + "tf.keras.layers.SimpleRNN.recurrent_dropout": true, + "tf.keras.layers.SimpleRNN.recurrent_initializer": true, + "tf.keras.layers.SimpleRNN.recurrent_regularizer": true, + "tf.keras.layers.SimpleRNN.reset_states": true, + "tf.keras.layers.SimpleRNN.set_weights": true, + "tf.keras.layers.SimpleRNN.states": true, + "tf.keras.layers.SimpleRNN.submodules": true, + "tf.keras.layers.SimpleRNN.trainable": true, + "tf.keras.layers.SimpleRNN.trainable_weights": true, + "tf.keras.layers.SimpleRNN.units": true, + "tf.keras.layers.SimpleRNN.use_bias": true, + "tf.keras.layers.SimpleRNN.weights": true, + "tf.keras.layers.SimpleRNN.with_name_scope": true, + "tf.keras.layers.SimpleRNNCell": false, + "tf.keras.layers.SimpleRNNCell.__call__": true, + "tf.keras.layers.SimpleRNNCell.__eq__": true, + "tf.keras.layers.SimpleRNNCell.__ge__": true, + "tf.keras.layers.SimpleRNNCell.__gt__": true, + "tf.keras.layers.SimpleRNNCell.__init__": true, + "tf.keras.layers.SimpleRNNCell.__le__": true, + "tf.keras.layers.SimpleRNNCell.__lt__": true, + "tf.keras.layers.SimpleRNNCell.__ne__": true, + "tf.keras.layers.SimpleRNNCell.__new__": true, + 
"tf.keras.layers.SimpleRNNCell.activity_regularizer": true, + "tf.keras.layers.SimpleRNNCell.add_loss": true, + "tf.keras.layers.SimpleRNNCell.add_metric": true, + "tf.keras.layers.SimpleRNNCell.add_weight": true, + "tf.keras.layers.SimpleRNNCell.build": true, + "tf.keras.layers.SimpleRNNCell.call": true, + "tf.keras.layers.SimpleRNNCell.compute_mask": true, + "tf.keras.layers.SimpleRNNCell.compute_output_shape": true, + "tf.keras.layers.SimpleRNNCell.compute_output_signature": true, + "tf.keras.layers.SimpleRNNCell.count_params": true, + "tf.keras.layers.SimpleRNNCell.dtype": true, + "tf.keras.layers.SimpleRNNCell.dynamic": true, + "tf.keras.layers.SimpleRNNCell.from_config": true, + "tf.keras.layers.SimpleRNNCell.get_config": true, + "tf.keras.layers.SimpleRNNCell.get_dropout_mask_for_cell": true, + "tf.keras.layers.SimpleRNNCell.get_initial_state": true, + "tf.keras.layers.SimpleRNNCell.get_recurrent_dropout_mask_for_cell": true, + "tf.keras.layers.SimpleRNNCell.get_weights": true, + "tf.keras.layers.SimpleRNNCell.input": true, + "tf.keras.layers.SimpleRNNCell.input_spec": true, + "tf.keras.layers.SimpleRNNCell.losses": true, + "tf.keras.layers.SimpleRNNCell.metrics": true, + "tf.keras.layers.SimpleRNNCell.name": true, + "tf.keras.layers.SimpleRNNCell.name_scope": true, + "tf.keras.layers.SimpleRNNCell.non_trainable_weights": true, + "tf.keras.layers.SimpleRNNCell.output": true, + "tf.keras.layers.SimpleRNNCell.reset_dropout_mask": true, + "tf.keras.layers.SimpleRNNCell.reset_recurrent_dropout_mask": true, + "tf.keras.layers.SimpleRNNCell.set_weights": true, + "tf.keras.layers.SimpleRNNCell.submodules": true, + "tf.keras.layers.SimpleRNNCell.trainable": true, + "tf.keras.layers.SimpleRNNCell.trainable_weights": true, + "tf.keras.layers.SimpleRNNCell.weights": true, + "tf.keras.layers.SimpleRNNCell.with_name_scope": true, + "tf.keras.layers.Softmax": false, + "tf.keras.layers.Softmax.__call__": true, + "tf.keras.layers.Softmax.__eq__": true, + "tf.keras.layers.Softmax.__ge__": true, + "tf.keras.layers.Softmax.__gt__": true, + "tf.keras.layers.Softmax.__init__": true, + "tf.keras.layers.Softmax.__le__": true, + "tf.keras.layers.Softmax.__lt__": true, + "tf.keras.layers.Softmax.__ne__": true, + "tf.keras.layers.Softmax.__new__": true, + "tf.keras.layers.Softmax.activity_regularizer": true, + "tf.keras.layers.Softmax.add_loss": true, + "tf.keras.layers.Softmax.add_metric": true, + "tf.keras.layers.Softmax.add_weight": true, + "tf.keras.layers.Softmax.build": true, + "tf.keras.layers.Softmax.call": true, + "tf.keras.layers.Softmax.compute_mask": true, + "tf.keras.layers.Softmax.compute_output_shape": true, + "tf.keras.layers.Softmax.compute_output_signature": true, + "tf.keras.layers.Softmax.count_params": true, + "tf.keras.layers.Softmax.dtype": true, + "tf.keras.layers.Softmax.dynamic": true, + "tf.keras.layers.Softmax.from_config": true, + "tf.keras.layers.Softmax.get_config": true, + "tf.keras.layers.Softmax.get_weights": true, + "tf.keras.layers.Softmax.input": true, + "tf.keras.layers.Softmax.input_spec": true, + "tf.keras.layers.Softmax.losses": true, + "tf.keras.layers.Softmax.metrics": true, + "tf.keras.layers.Softmax.name": true, + "tf.keras.layers.Softmax.name_scope": true, + "tf.keras.layers.Softmax.non_trainable_weights": true, + "tf.keras.layers.Softmax.output": true, + "tf.keras.layers.Softmax.set_weights": true, + "tf.keras.layers.Softmax.submodules": true, + "tf.keras.layers.Softmax.trainable": true, + "tf.keras.layers.Softmax.trainable_weights": true, + 
"tf.keras.layers.Softmax.weights": true, + "tf.keras.layers.Softmax.with_name_scope": true, + "tf.keras.layers.SpatialDropout1D": false, + "tf.keras.layers.SpatialDropout1D.__call__": true, + "tf.keras.layers.SpatialDropout1D.__eq__": true, + "tf.keras.layers.SpatialDropout1D.__ge__": true, + "tf.keras.layers.SpatialDropout1D.__gt__": true, + "tf.keras.layers.SpatialDropout1D.__init__": true, + "tf.keras.layers.SpatialDropout1D.__le__": true, + "tf.keras.layers.SpatialDropout1D.__lt__": true, + "tf.keras.layers.SpatialDropout1D.__ne__": true, + "tf.keras.layers.SpatialDropout1D.__new__": true, + "tf.keras.layers.SpatialDropout1D.activity_regularizer": true, + "tf.keras.layers.SpatialDropout1D.add_loss": true, + "tf.keras.layers.SpatialDropout1D.add_metric": true, + "tf.keras.layers.SpatialDropout1D.add_weight": true, + "tf.keras.layers.SpatialDropout1D.build": true, + "tf.keras.layers.SpatialDropout1D.call": true, + "tf.keras.layers.SpatialDropout1D.compute_mask": true, + "tf.keras.layers.SpatialDropout1D.compute_output_shape": true, + "tf.keras.layers.SpatialDropout1D.compute_output_signature": true, + "tf.keras.layers.SpatialDropout1D.count_params": true, + "tf.keras.layers.SpatialDropout1D.dtype": true, + "tf.keras.layers.SpatialDropout1D.dynamic": true, + "tf.keras.layers.SpatialDropout1D.from_config": true, + "tf.keras.layers.SpatialDropout1D.get_config": true, + "tf.keras.layers.SpatialDropout1D.get_weights": true, + "tf.keras.layers.SpatialDropout1D.input": true, + "tf.keras.layers.SpatialDropout1D.input_spec": true, + "tf.keras.layers.SpatialDropout1D.losses": true, + "tf.keras.layers.SpatialDropout1D.metrics": true, + "tf.keras.layers.SpatialDropout1D.name": true, + "tf.keras.layers.SpatialDropout1D.name_scope": true, + "tf.keras.layers.SpatialDropout1D.non_trainable_weights": true, + "tf.keras.layers.SpatialDropout1D.output": true, + "tf.keras.layers.SpatialDropout1D.set_weights": true, + "tf.keras.layers.SpatialDropout1D.submodules": true, + "tf.keras.layers.SpatialDropout1D.trainable": true, + "tf.keras.layers.SpatialDropout1D.trainable_weights": true, + "tf.keras.layers.SpatialDropout1D.weights": true, + "tf.keras.layers.SpatialDropout1D.with_name_scope": true, + "tf.keras.layers.SpatialDropout2D": false, + "tf.keras.layers.SpatialDropout2D.__call__": true, + "tf.keras.layers.SpatialDropout2D.__eq__": true, + "tf.keras.layers.SpatialDropout2D.__ge__": true, + "tf.keras.layers.SpatialDropout2D.__gt__": true, + "tf.keras.layers.SpatialDropout2D.__init__": true, + "tf.keras.layers.SpatialDropout2D.__le__": true, + "tf.keras.layers.SpatialDropout2D.__lt__": true, + "tf.keras.layers.SpatialDropout2D.__ne__": true, + "tf.keras.layers.SpatialDropout2D.__new__": true, + "tf.keras.layers.SpatialDropout2D.activity_regularizer": true, + "tf.keras.layers.SpatialDropout2D.add_loss": true, + "tf.keras.layers.SpatialDropout2D.add_metric": true, + "tf.keras.layers.SpatialDropout2D.add_weight": true, + "tf.keras.layers.SpatialDropout2D.build": true, + "tf.keras.layers.SpatialDropout2D.call": true, + "tf.keras.layers.SpatialDropout2D.compute_mask": true, + "tf.keras.layers.SpatialDropout2D.compute_output_shape": true, + "tf.keras.layers.SpatialDropout2D.compute_output_signature": true, + "tf.keras.layers.SpatialDropout2D.count_params": true, + "tf.keras.layers.SpatialDropout2D.dtype": true, + "tf.keras.layers.SpatialDropout2D.dynamic": true, + "tf.keras.layers.SpatialDropout2D.from_config": true, + "tf.keras.layers.SpatialDropout2D.get_config": true, + 
"tf.keras.layers.SpatialDropout2D.get_weights": true, + "tf.keras.layers.SpatialDropout2D.input": true, + "tf.keras.layers.SpatialDropout2D.input_spec": true, + "tf.keras.layers.SpatialDropout2D.losses": true, + "tf.keras.layers.SpatialDropout2D.metrics": true, + "tf.keras.layers.SpatialDropout2D.name": true, + "tf.keras.layers.SpatialDropout2D.name_scope": true, + "tf.keras.layers.SpatialDropout2D.non_trainable_weights": true, + "tf.keras.layers.SpatialDropout2D.output": true, + "tf.keras.layers.SpatialDropout2D.set_weights": true, + "tf.keras.layers.SpatialDropout2D.submodules": true, + "tf.keras.layers.SpatialDropout2D.trainable": true, + "tf.keras.layers.SpatialDropout2D.trainable_weights": true, + "tf.keras.layers.SpatialDropout2D.weights": true, + "tf.keras.layers.SpatialDropout2D.with_name_scope": true, + "tf.keras.layers.SpatialDropout3D": false, + "tf.keras.layers.SpatialDropout3D.__call__": true, + "tf.keras.layers.SpatialDropout3D.__eq__": true, + "tf.keras.layers.SpatialDropout3D.__ge__": true, + "tf.keras.layers.SpatialDropout3D.__gt__": true, + "tf.keras.layers.SpatialDropout3D.__init__": true, + "tf.keras.layers.SpatialDropout3D.__le__": true, + "tf.keras.layers.SpatialDropout3D.__lt__": true, + "tf.keras.layers.SpatialDropout3D.__ne__": true, + "tf.keras.layers.SpatialDropout3D.__new__": true, + "tf.keras.layers.SpatialDropout3D.activity_regularizer": true, + "tf.keras.layers.SpatialDropout3D.add_loss": true, + "tf.keras.layers.SpatialDropout3D.add_metric": true, + "tf.keras.layers.SpatialDropout3D.add_weight": true, + "tf.keras.layers.SpatialDropout3D.build": true, + "tf.keras.layers.SpatialDropout3D.call": true, + "tf.keras.layers.SpatialDropout3D.compute_mask": true, + "tf.keras.layers.SpatialDropout3D.compute_output_shape": true, + "tf.keras.layers.SpatialDropout3D.compute_output_signature": true, + "tf.keras.layers.SpatialDropout3D.count_params": true, + "tf.keras.layers.SpatialDropout3D.dtype": true, + "tf.keras.layers.SpatialDropout3D.dynamic": true, + "tf.keras.layers.SpatialDropout3D.from_config": true, + "tf.keras.layers.SpatialDropout3D.get_config": true, + "tf.keras.layers.SpatialDropout3D.get_weights": true, + "tf.keras.layers.SpatialDropout3D.input": true, + "tf.keras.layers.SpatialDropout3D.input_spec": true, + "tf.keras.layers.SpatialDropout3D.losses": true, + "tf.keras.layers.SpatialDropout3D.metrics": true, + "tf.keras.layers.SpatialDropout3D.name": true, + "tf.keras.layers.SpatialDropout3D.name_scope": true, + "tf.keras.layers.SpatialDropout3D.non_trainable_weights": true, + "tf.keras.layers.SpatialDropout3D.output": true, + "tf.keras.layers.SpatialDropout3D.set_weights": true, + "tf.keras.layers.SpatialDropout3D.submodules": true, + "tf.keras.layers.SpatialDropout3D.trainable": true, + "tf.keras.layers.SpatialDropout3D.trainable_weights": true, + "tf.keras.layers.SpatialDropout3D.weights": true, + "tf.keras.layers.SpatialDropout3D.with_name_scope": true, + "tf.keras.layers.StackedRNNCells": false, + "tf.keras.layers.StackedRNNCells.__call__": true, + "tf.keras.layers.StackedRNNCells.__eq__": true, + "tf.keras.layers.StackedRNNCells.__ge__": true, + "tf.keras.layers.StackedRNNCells.__gt__": true, + "tf.keras.layers.StackedRNNCells.__init__": true, + "tf.keras.layers.StackedRNNCells.__le__": true, + "tf.keras.layers.StackedRNNCells.__lt__": true, + "tf.keras.layers.StackedRNNCells.__ne__": true, + "tf.keras.layers.StackedRNNCells.__new__": true, + "tf.keras.layers.StackedRNNCells.activity_regularizer": true, + "tf.keras.layers.StackedRNNCells.add_loss": 
true, + "tf.keras.layers.StackedRNNCells.add_metric": true, + "tf.keras.layers.StackedRNNCells.add_weight": true, + "tf.keras.layers.StackedRNNCells.build": true, + "tf.keras.layers.StackedRNNCells.call": true, + "tf.keras.layers.StackedRNNCells.compute_mask": true, + "tf.keras.layers.StackedRNNCells.compute_output_shape": true, + "tf.keras.layers.StackedRNNCells.compute_output_signature": true, + "tf.keras.layers.StackedRNNCells.count_params": true, + "tf.keras.layers.StackedRNNCells.dtype": true, + "tf.keras.layers.StackedRNNCells.dynamic": true, + "tf.keras.layers.StackedRNNCells.from_config": true, + "tf.keras.layers.StackedRNNCells.get_config": true, + "tf.keras.layers.StackedRNNCells.get_initial_state": true, + "tf.keras.layers.StackedRNNCells.get_weights": true, + "tf.keras.layers.StackedRNNCells.input": true, + "tf.keras.layers.StackedRNNCells.input_spec": true, + "tf.keras.layers.StackedRNNCells.losses": true, + "tf.keras.layers.StackedRNNCells.metrics": true, + "tf.keras.layers.StackedRNNCells.name": true, + "tf.keras.layers.StackedRNNCells.name_scope": true, + "tf.keras.layers.StackedRNNCells.non_trainable_weights": true, + "tf.keras.layers.StackedRNNCells.output": true, + "tf.keras.layers.StackedRNNCells.output_size": true, + "tf.keras.layers.StackedRNNCells.set_weights": true, + "tf.keras.layers.StackedRNNCells.state_size": true, + "tf.keras.layers.StackedRNNCells.submodules": true, + "tf.keras.layers.StackedRNNCells.trainable": true, + "tf.keras.layers.StackedRNNCells.trainable_weights": true, + "tf.keras.layers.StackedRNNCells.weights": true, + "tf.keras.layers.StackedRNNCells.with_name_scope": true, + "tf.keras.layers.Subtract": false, + "tf.keras.layers.Subtract.__call__": true, + "tf.keras.layers.Subtract.__eq__": true, + "tf.keras.layers.Subtract.__ge__": true, + "tf.keras.layers.Subtract.__gt__": true, + "tf.keras.layers.Subtract.__init__": true, + "tf.keras.layers.Subtract.__le__": true, + "tf.keras.layers.Subtract.__lt__": true, + "tf.keras.layers.Subtract.__ne__": true, + "tf.keras.layers.Subtract.__new__": true, + "tf.keras.layers.Subtract.activity_regularizer": true, + "tf.keras.layers.Subtract.add_loss": true, + "tf.keras.layers.Subtract.add_metric": true, + "tf.keras.layers.Subtract.add_weight": true, + "tf.keras.layers.Subtract.build": true, + "tf.keras.layers.Subtract.call": true, + "tf.keras.layers.Subtract.compute_mask": true, + "tf.keras.layers.Subtract.compute_output_shape": true, + "tf.keras.layers.Subtract.compute_output_signature": true, + "tf.keras.layers.Subtract.count_params": true, + "tf.keras.layers.Subtract.dtype": true, + "tf.keras.layers.Subtract.dynamic": true, + "tf.keras.layers.Subtract.from_config": true, + "tf.keras.layers.Subtract.get_config": true, + "tf.keras.layers.Subtract.get_weights": true, + "tf.keras.layers.Subtract.input": true, + "tf.keras.layers.Subtract.input_spec": true, + "tf.keras.layers.Subtract.losses": true, + "tf.keras.layers.Subtract.metrics": true, + "tf.keras.layers.Subtract.name": true, + "tf.keras.layers.Subtract.name_scope": true, + "tf.keras.layers.Subtract.non_trainable_weights": true, + "tf.keras.layers.Subtract.output": true, + "tf.keras.layers.Subtract.set_weights": true, + "tf.keras.layers.Subtract.submodules": true, + "tf.keras.layers.Subtract.trainable": true, + "tf.keras.layers.Subtract.trainable_weights": true, + "tf.keras.layers.Subtract.weights": true, + "tf.keras.layers.Subtract.with_name_scope": true, + "tf.keras.layers.ThresholdedReLU": false, + "tf.keras.layers.ThresholdedReLU.__call__": true, + 
"tf.keras.layers.ThresholdedReLU.__eq__": true, + "tf.keras.layers.ThresholdedReLU.__ge__": true, + "tf.keras.layers.ThresholdedReLU.__gt__": true, + "tf.keras.layers.ThresholdedReLU.__init__": true, + "tf.keras.layers.ThresholdedReLU.__le__": true, + "tf.keras.layers.ThresholdedReLU.__lt__": true, + "tf.keras.layers.ThresholdedReLU.__ne__": true, + "tf.keras.layers.ThresholdedReLU.__new__": true, + "tf.keras.layers.ThresholdedReLU.activity_regularizer": true, + "tf.keras.layers.ThresholdedReLU.add_loss": true, + "tf.keras.layers.ThresholdedReLU.add_metric": true, + "tf.keras.layers.ThresholdedReLU.add_weight": true, + "tf.keras.layers.ThresholdedReLU.build": true, + "tf.keras.layers.ThresholdedReLU.call": true, + "tf.keras.layers.ThresholdedReLU.compute_mask": true, + "tf.keras.layers.ThresholdedReLU.compute_output_shape": true, + "tf.keras.layers.ThresholdedReLU.compute_output_signature": true, + "tf.keras.layers.ThresholdedReLU.count_params": true, + "tf.keras.layers.ThresholdedReLU.dtype": true, + "tf.keras.layers.ThresholdedReLU.dynamic": true, + "tf.keras.layers.ThresholdedReLU.from_config": true, + "tf.keras.layers.ThresholdedReLU.get_config": true, + "tf.keras.layers.ThresholdedReLU.get_weights": true, + "tf.keras.layers.ThresholdedReLU.input": true, + "tf.keras.layers.ThresholdedReLU.input_spec": true, + "tf.keras.layers.ThresholdedReLU.losses": true, + "tf.keras.layers.ThresholdedReLU.metrics": true, + "tf.keras.layers.ThresholdedReLU.name": true, + "tf.keras.layers.ThresholdedReLU.name_scope": true, + "tf.keras.layers.ThresholdedReLU.non_trainable_weights": true, + "tf.keras.layers.ThresholdedReLU.output": true, + "tf.keras.layers.ThresholdedReLU.set_weights": true, + "tf.keras.layers.ThresholdedReLU.submodules": true, + "tf.keras.layers.ThresholdedReLU.trainable": true, + "tf.keras.layers.ThresholdedReLU.trainable_weights": true, + "tf.keras.layers.ThresholdedReLU.weights": true, + "tf.keras.layers.ThresholdedReLU.with_name_scope": true, + "tf.keras.layers.TimeDistributed": false, + "tf.keras.layers.TimeDistributed.__call__": true, + "tf.keras.layers.TimeDistributed.__eq__": true, + "tf.keras.layers.TimeDistributed.__ge__": true, + "tf.keras.layers.TimeDistributed.__gt__": true, + "tf.keras.layers.TimeDistributed.__init__": true, + "tf.keras.layers.TimeDistributed.__le__": true, + "tf.keras.layers.TimeDistributed.__lt__": true, + "tf.keras.layers.TimeDistributed.__ne__": true, + "tf.keras.layers.TimeDistributed.__new__": true, + "tf.keras.layers.TimeDistributed.activity_regularizer": true, + "tf.keras.layers.TimeDistributed.add_loss": true, + "tf.keras.layers.TimeDistributed.add_metric": true, + "tf.keras.layers.TimeDistributed.add_weight": true, + "tf.keras.layers.TimeDistributed.build": true, + "tf.keras.layers.TimeDistributed.call": true, + "tf.keras.layers.TimeDistributed.compute_mask": true, + "tf.keras.layers.TimeDistributed.compute_output_shape": true, + "tf.keras.layers.TimeDistributed.compute_output_signature": true, + "tf.keras.layers.TimeDistributed.count_params": true, + "tf.keras.layers.TimeDistributed.dtype": true, + "tf.keras.layers.TimeDistributed.dynamic": true, + "tf.keras.layers.TimeDistributed.from_config": true, + "tf.keras.layers.TimeDistributed.get_config": true, + "tf.keras.layers.TimeDistributed.get_weights": true, + "tf.keras.layers.TimeDistributed.input": true, + "tf.keras.layers.TimeDistributed.input_spec": true, + "tf.keras.layers.TimeDistributed.losses": true, + "tf.keras.layers.TimeDistributed.metrics": true, + 
"tf.keras.layers.TimeDistributed.name": true, + "tf.keras.layers.TimeDistributed.name_scope": true, + "tf.keras.layers.TimeDistributed.non_trainable_weights": true, + "tf.keras.layers.TimeDistributed.output": true, + "tf.keras.layers.TimeDistributed.set_weights": true, + "tf.keras.layers.TimeDistributed.submodules": true, + "tf.keras.layers.TimeDistributed.trainable": true, + "tf.keras.layers.TimeDistributed.trainable_weights": true, + "tf.keras.layers.TimeDistributed.weights": true, + "tf.keras.layers.TimeDistributed.with_name_scope": true, + "tf.keras.layers.UpSampling1D": false, + "tf.keras.layers.UpSampling1D.__call__": true, + "tf.keras.layers.UpSampling1D.__eq__": true, + "tf.keras.layers.UpSampling1D.__ge__": true, + "tf.keras.layers.UpSampling1D.__gt__": true, + "tf.keras.layers.UpSampling1D.__init__": true, + "tf.keras.layers.UpSampling1D.__le__": true, + "tf.keras.layers.UpSampling1D.__lt__": true, + "tf.keras.layers.UpSampling1D.__ne__": true, + "tf.keras.layers.UpSampling1D.__new__": true, + "tf.keras.layers.UpSampling1D.activity_regularizer": true, + "tf.keras.layers.UpSampling1D.add_loss": true, + "tf.keras.layers.UpSampling1D.add_metric": true, + "tf.keras.layers.UpSampling1D.add_weight": true, + "tf.keras.layers.UpSampling1D.build": true, + "tf.keras.layers.UpSampling1D.call": true, + "tf.keras.layers.UpSampling1D.compute_mask": true, + "tf.keras.layers.UpSampling1D.compute_output_shape": true, + "tf.keras.layers.UpSampling1D.compute_output_signature": true, + "tf.keras.layers.UpSampling1D.count_params": true, + "tf.keras.layers.UpSampling1D.dtype": true, + "tf.keras.layers.UpSampling1D.dynamic": true, + "tf.keras.layers.UpSampling1D.from_config": true, + "tf.keras.layers.UpSampling1D.get_config": true, + "tf.keras.layers.UpSampling1D.get_weights": true, + "tf.keras.layers.UpSampling1D.input": true, + "tf.keras.layers.UpSampling1D.input_spec": true, + "tf.keras.layers.UpSampling1D.losses": true, + "tf.keras.layers.UpSampling1D.metrics": true, + "tf.keras.layers.UpSampling1D.name": true, + "tf.keras.layers.UpSampling1D.name_scope": true, + "tf.keras.layers.UpSampling1D.non_trainable_weights": true, + "tf.keras.layers.UpSampling1D.output": true, + "tf.keras.layers.UpSampling1D.set_weights": true, + "tf.keras.layers.UpSampling1D.submodules": true, + "tf.keras.layers.UpSampling1D.trainable": true, + "tf.keras.layers.UpSampling1D.trainable_weights": true, + "tf.keras.layers.UpSampling1D.weights": true, + "tf.keras.layers.UpSampling1D.with_name_scope": true, + "tf.keras.layers.UpSampling2D": false, + "tf.keras.layers.UpSampling2D.__call__": true, + "tf.keras.layers.UpSampling2D.__eq__": true, + "tf.keras.layers.UpSampling2D.__ge__": true, + "tf.keras.layers.UpSampling2D.__gt__": true, + "tf.keras.layers.UpSampling2D.__init__": true, + "tf.keras.layers.UpSampling2D.__le__": true, + "tf.keras.layers.UpSampling2D.__lt__": true, + "tf.keras.layers.UpSampling2D.__ne__": true, + "tf.keras.layers.UpSampling2D.__new__": true, + "tf.keras.layers.UpSampling2D.activity_regularizer": true, + "tf.keras.layers.UpSampling2D.add_loss": true, + "tf.keras.layers.UpSampling2D.add_metric": true, + "tf.keras.layers.UpSampling2D.add_weight": true, + "tf.keras.layers.UpSampling2D.build": true, + "tf.keras.layers.UpSampling2D.call": true, + "tf.keras.layers.UpSampling2D.compute_mask": true, + "tf.keras.layers.UpSampling2D.compute_output_shape": true, + "tf.keras.layers.UpSampling2D.compute_output_signature": true, + "tf.keras.layers.UpSampling2D.count_params": true, + 
"tf.keras.layers.UpSampling2D.dtype": true, + "tf.keras.layers.UpSampling2D.dynamic": true, + "tf.keras.layers.UpSampling2D.from_config": true, + "tf.keras.layers.UpSampling2D.get_config": true, + "tf.keras.layers.UpSampling2D.get_weights": true, + "tf.keras.layers.UpSampling2D.input": true, + "tf.keras.layers.UpSampling2D.input_spec": true, + "tf.keras.layers.UpSampling2D.losses": true, + "tf.keras.layers.UpSampling2D.metrics": true, + "tf.keras.layers.UpSampling2D.name": true, + "tf.keras.layers.UpSampling2D.name_scope": true, + "tf.keras.layers.UpSampling2D.non_trainable_weights": true, + "tf.keras.layers.UpSampling2D.output": true, + "tf.keras.layers.UpSampling2D.set_weights": true, + "tf.keras.layers.UpSampling2D.submodules": true, + "tf.keras.layers.UpSampling2D.trainable": true, + "tf.keras.layers.UpSampling2D.trainable_weights": true, + "tf.keras.layers.UpSampling2D.weights": true, + "tf.keras.layers.UpSampling2D.with_name_scope": true, + "tf.keras.layers.UpSampling3D": false, + "tf.keras.layers.UpSampling3D.__call__": true, + "tf.keras.layers.UpSampling3D.__eq__": true, + "tf.keras.layers.UpSampling3D.__ge__": true, + "tf.keras.layers.UpSampling3D.__gt__": true, + "tf.keras.layers.UpSampling3D.__init__": true, + "tf.keras.layers.UpSampling3D.__le__": true, + "tf.keras.layers.UpSampling3D.__lt__": true, + "tf.keras.layers.UpSampling3D.__ne__": true, + "tf.keras.layers.UpSampling3D.__new__": true, + "tf.keras.layers.UpSampling3D.activity_regularizer": true, + "tf.keras.layers.UpSampling3D.add_loss": true, + "tf.keras.layers.UpSampling3D.add_metric": true, + "tf.keras.layers.UpSampling3D.add_weight": true, + "tf.keras.layers.UpSampling3D.build": true, + "tf.keras.layers.UpSampling3D.call": true, + "tf.keras.layers.UpSampling3D.compute_mask": true, + "tf.keras.layers.UpSampling3D.compute_output_shape": true, + "tf.keras.layers.UpSampling3D.compute_output_signature": true, + "tf.keras.layers.UpSampling3D.count_params": true, + "tf.keras.layers.UpSampling3D.dtype": true, + "tf.keras.layers.UpSampling3D.dynamic": true, + "tf.keras.layers.UpSampling3D.from_config": true, + "tf.keras.layers.UpSampling3D.get_config": true, + "tf.keras.layers.UpSampling3D.get_weights": true, + "tf.keras.layers.UpSampling3D.input": true, + "tf.keras.layers.UpSampling3D.input_spec": true, + "tf.keras.layers.UpSampling3D.losses": true, + "tf.keras.layers.UpSampling3D.metrics": true, + "tf.keras.layers.UpSampling3D.name": true, + "tf.keras.layers.UpSampling3D.name_scope": true, + "tf.keras.layers.UpSampling3D.non_trainable_weights": true, + "tf.keras.layers.UpSampling3D.output": true, + "tf.keras.layers.UpSampling3D.set_weights": true, + "tf.keras.layers.UpSampling3D.submodules": true, + "tf.keras.layers.UpSampling3D.trainable": true, + "tf.keras.layers.UpSampling3D.trainable_weights": true, + "tf.keras.layers.UpSampling3D.weights": true, + "tf.keras.layers.UpSampling3D.with_name_scope": true, + "tf.keras.layers.Wrapper": false, + "tf.keras.layers.Wrapper.__call__": true, + "tf.keras.layers.Wrapper.__eq__": true, + "tf.keras.layers.Wrapper.__ge__": true, + "tf.keras.layers.Wrapper.__gt__": true, + "tf.keras.layers.Wrapper.__init__": true, + "tf.keras.layers.Wrapper.__le__": true, + "tf.keras.layers.Wrapper.__lt__": true, + "tf.keras.layers.Wrapper.__ne__": true, + "tf.keras.layers.Wrapper.__new__": true, + "tf.keras.layers.Wrapper.activity_regularizer": true, + "tf.keras.layers.Wrapper.add_loss": true, + "tf.keras.layers.Wrapper.add_metric": true, + "tf.keras.layers.Wrapper.add_weight": true, + 
"tf.keras.layers.Wrapper.build": true, + "tf.keras.layers.Wrapper.call": true, + "tf.keras.layers.Wrapper.compute_mask": true, + "tf.keras.layers.Wrapper.compute_output_shape": true, + "tf.keras.layers.Wrapper.compute_output_signature": true, + "tf.keras.layers.Wrapper.count_params": true, + "tf.keras.layers.Wrapper.dtype": true, + "tf.keras.layers.Wrapper.dynamic": true, + "tf.keras.layers.Wrapper.from_config": true, + "tf.keras.layers.Wrapper.get_config": true, + "tf.keras.layers.Wrapper.get_weights": true, + "tf.keras.layers.Wrapper.input": true, + "tf.keras.layers.Wrapper.input_spec": true, + "tf.keras.layers.Wrapper.losses": true, + "tf.keras.layers.Wrapper.metrics": true, + "tf.keras.layers.Wrapper.name": true, + "tf.keras.layers.Wrapper.name_scope": true, + "tf.keras.layers.Wrapper.non_trainable_weights": true, + "tf.keras.layers.Wrapper.output": true, + "tf.keras.layers.Wrapper.set_weights": true, + "tf.keras.layers.Wrapper.submodules": true, + "tf.keras.layers.Wrapper.trainable": true, + "tf.keras.layers.Wrapper.trainable_weights": true, + "tf.keras.layers.Wrapper.weights": true, + "tf.keras.layers.Wrapper.with_name_scope": true, + "tf.keras.layers.ZeroPadding1D": false, + "tf.keras.layers.ZeroPadding1D.__call__": true, + "tf.keras.layers.ZeroPadding1D.__eq__": true, + "tf.keras.layers.ZeroPadding1D.__ge__": true, + "tf.keras.layers.ZeroPadding1D.__gt__": true, + "tf.keras.layers.ZeroPadding1D.__init__": true, + "tf.keras.layers.ZeroPadding1D.__le__": true, + "tf.keras.layers.ZeroPadding1D.__lt__": true, + "tf.keras.layers.ZeroPadding1D.__ne__": true, + "tf.keras.layers.ZeroPadding1D.__new__": true, + "tf.keras.layers.ZeroPadding1D.activity_regularizer": true, + "tf.keras.layers.ZeroPadding1D.add_loss": true, + "tf.keras.layers.ZeroPadding1D.add_metric": true, + "tf.keras.layers.ZeroPadding1D.add_weight": true, + "tf.keras.layers.ZeroPadding1D.build": true, + "tf.keras.layers.ZeroPadding1D.call": true, + "tf.keras.layers.ZeroPadding1D.compute_mask": true, + "tf.keras.layers.ZeroPadding1D.compute_output_shape": true, + "tf.keras.layers.ZeroPadding1D.compute_output_signature": true, + "tf.keras.layers.ZeroPadding1D.count_params": true, + "tf.keras.layers.ZeroPadding1D.dtype": true, + "tf.keras.layers.ZeroPadding1D.dynamic": true, + "tf.keras.layers.ZeroPadding1D.from_config": true, + "tf.keras.layers.ZeroPadding1D.get_config": true, + "tf.keras.layers.ZeroPadding1D.get_weights": true, + "tf.keras.layers.ZeroPadding1D.input": true, + "tf.keras.layers.ZeroPadding1D.input_spec": true, + "tf.keras.layers.ZeroPadding1D.losses": true, + "tf.keras.layers.ZeroPadding1D.metrics": true, + "tf.keras.layers.ZeroPadding1D.name": true, + "tf.keras.layers.ZeroPadding1D.name_scope": true, + "tf.keras.layers.ZeroPadding1D.non_trainable_weights": true, + "tf.keras.layers.ZeroPadding1D.output": true, + "tf.keras.layers.ZeroPadding1D.set_weights": true, + "tf.keras.layers.ZeroPadding1D.submodules": true, + "tf.keras.layers.ZeroPadding1D.trainable": true, + "tf.keras.layers.ZeroPadding1D.trainable_weights": true, + "tf.keras.layers.ZeroPadding1D.weights": true, + "tf.keras.layers.ZeroPadding1D.with_name_scope": true, + "tf.keras.layers.ZeroPadding2D": false, + "tf.keras.layers.ZeroPadding2D.__call__": true, + "tf.keras.layers.ZeroPadding2D.__eq__": true, + "tf.keras.layers.ZeroPadding2D.__ge__": true, + "tf.keras.layers.ZeroPadding2D.__gt__": true, + "tf.keras.layers.ZeroPadding2D.__init__": true, + "tf.keras.layers.ZeroPadding2D.__le__": true, + "tf.keras.layers.ZeroPadding2D.__lt__": true, + 
"tf.keras.layers.ZeroPadding2D.__ne__": true, + "tf.keras.layers.ZeroPadding2D.__new__": true, + "tf.keras.layers.ZeroPadding2D.activity_regularizer": true, + "tf.keras.layers.ZeroPadding2D.add_loss": true, + "tf.keras.layers.ZeroPadding2D.add_metric": true, + "tf.keras.layers.ZeroPadding2D.add_weight": true, + "tf.keras.layers.ZeroPadding2D.build": true, + "tf.keras.layers.ZeroPadding2D.call": true, + "tf.keras.layers.ZeroPadding2D.compute_mask": true, + "tf.keras.layers.ZeroPadding2D.compute_output_shape": true, + "tf.keras.layers.ZeroPadding2D.compute_output_signature": true, + "tf.keras.layers.ZeroPadding2D.count_params": true, + "tf.keras.layers.ZeroPadding2D.dtype": true, + "tf.keras.layers.ZeroPadding2D.dynamic": true, + "tf.keras.layers.ZeroPadding2D.from_config": true, + "tf.keras.layers.ZeroPadding2D.get_config": true, + "tf.keras.layers.ZeroPadding2D.get_weights": true, + "tf.keras.layers.ZeroPadding2D.input": true, + "tf.keras.layers.ZeroPadding2D.input_spec": true, + "tf.keras.layers.ZeroPadding2D.losses": true, + "tf.keras.layers.ZeroPadding2D.metrics": true, + "tf.keras.layers.ZeroPadding2D.name": true, + "tf.keras.layers.ZeroPadding2D.name_scope": true, + "tf.keras.layers.ZeroPadding2D.non_trainable_weights": true, + "tf.keras.layers.ZeroPadding2D.output": true, + "tf.keras.layers.ZeroPadding2D.set_weights": true, + "tf.keras.layers.ZeroPadding2D.submodules": true, + "tf.keras.layers.ZeroPadding2D.trainable": true, + "tf.keras.layers.ZeroPadding2D.trainable_weights": true, + "tf.keras.layers.ZeroPadding2D.weights": true, + "tf.keras.layers.ZeroPadding2D.with_name_scope": true, + "tf.keras.layers.ZeroPadding3D": false, + "tf.keras.layers.ZeroPadding3D.__call__": true, + "tf.keras.layers.ZeroPadding3D.__eq__": true, + "tf.keras.layers.ZeroPadding3D.__ge__": true, + "tf.keras.layers.ZeroPadding3D.__gt__": true, + "tf.keras.layers.ZeroPadding3D.__init__": true, + "tf.keras.layers.ZeroPadding3D.__le__": true, + "tf.keras.layers.ZeroPadding3D.__lt__": true, + "tf.keras.layers.ZeroPadding3D.__ne__": true, + "tf.keras.layers.ZeroPadding3D.__new__": true, + "tf.keras.layers.ZeroPadding3D.activity_regularizer": true, + "tf.keras.layers.ZeroPadding3D.add_loss": true, + "tf.keras.layers.ZeroPadding3D.add_metric": true, + "tf.keras.layers.ZeroPadding3D.add_weight": true, + "tf.keras.layers.ZeroPadding3D.build": true, + "tf.keras.layers.ZeroPadding3D.call": true, + "tf.keras.layers.ZeroPadding3D.compute_mask": true, + "tf.keras.layers.ZeroPadding3D.compute_output_shape": true, + "tf.keras.layers.ZeroPadding3D.compute_output_signature": true, + "tf.keras.layers.ZeroPadding3D.count_params": true, + "tf.keras.layers.ZeroPadding3D.dtype": true, + "tf.keras.layers.ZeroPadding3D.dynamic": true, + "tf.keras.layers.ZeroPadding3D.from_config": true, + "tf.keras.layers.ZeroPadding3D.get_config": true, + "tf.keras.layers.ZeroPadding3D.get_weights": true, + "tf.keras.layers.ZeroPadding3D.input": true, + "tf.keras.layers.ZeroPadding3D.input_spec": true, + "tf.keras.layers.ZeroPadding3D.losses": true, + "tf.keras.layers.ZeroPadding3D.metrics": true, + "tf.keras.layers.ZeroPadding3D.name": true, + "tf.keras.layers.ZeroPadding3D.name_scope": true, + "tf.keras.layers.ZeroPadding3D.non_trainable_weights": true, + "tf.keras.layers.ZeroPadding3D.output": true, + "tf.keras.layers.ZeroPadding3D.set_weights": true, + "tf.keras.layers.ZeroPadding3D.submodules": true, + "tf.keras.layers.ZeroPadding3D.trainable": true, + "tf.keras.layers.ZeroPadding3D.trainable_weights": true, + 
"tf.keras.layers.ZeroPadding3D.weights": true, + "tf.keras.layers.ZeroPadding3D.with_name_scope": true, + "tf.keras.layers.add": false, + "tf.keras.layers.average": false, + "tf.keras.layers.concatenate": false, + "tf.keras.layers.deserialize": false, + "tf.keras.layers.dot": false, + "tf.keras.layers.experimental": false, + "tf.keras.layers.experimental.SyncBatchNormalization": false, + "tf.keras.layers.experimental.SyncBatchNormalization.__call__": true, + "tf.keras.layers.experimental.SyncBatchNormalization.__eq__": true, + "tf.keras.layers.experimental.SyncBatchNormalization.__ge__": true, + "tf.keras.layers.experimental.SyncBatchNormalization.__gt__": true, + "tf.keras.layers.experimental.SyncBatchNormalization.__init__": true, + "tf.keras.layers.experimental.SyncBatchNormalization.__le__": true, + "tf.keras.layers.experimental.SyncBatchNormalization.__lt__": true, + "tf.keras.layers.experimental.SyncBatchNormalization.__ne__": true, + "tf.keras.layers.experimental.SyncBatchNormalization.__new__": true, + "tf.keras.layers.experimental.SyncBatchNormalization.activity_regularizer": true, + "tf.keras.layers.experimental.SyncBatchNormalization.add_loss": true, + "tf.keras.layers.experimental.SyncBatchNormalization.add_metric": true, + "tf.keras.layers.experimental.SyncBatchNormalization.add_weight": true, + "tf.keras.layers.experimental.SyncBatchNormalization.build": true, + "tf.keras.layers.experimental.SyncBatchNormalization.call": true, + "tf.keras.layers.experimental.SyncBatchNormalization.compute_mask": true, + "tf.keras.layers.experimental.SyncBatchNormalization.compute_output_shape": true, + "tf.keras.layers.experimental.SyncBatchNormalization.compute_output_signature": true, + "tf.keras.layers.experimental.SyncBatchNormalization.count_params": true, + "tf.keras.layers.experimental.SyncBatchNormalization.dtype": true, + "tf.keras.layers.experimental.SyncBatchNormalization.dynamic": true, + "tf.keras.layers.experimental.SyncBatchNormalization.from_config": true, + "tf.keras.layers.experimental.SyncBatchNormalization.get_config": true, + "tf.keras.layers.experimental.SyncBatchNormalization.get_weights": true, + "tf.keras.layers.experimental.SyncBatchNormalization.input": true, + "tf.keras.layers.experimental.SyncBatchNormalization.input_spec": true, + "tf.keras.layers.experimental.SyncBatchNormalization.losses": true, + "tf.keras.layers.experimental.SyncBatchNormalization.metrics": true, + "tf.keras.layers.experimental.SyncBatchNormalization.name": true, + "tf.keras.layers.experimental.SyncBatchNormalization.name_scope": true, + "tf.keras.layers.experimental.SyncBatchNormalization.non_trainable_weights": true, + "tf.keras.layers.experimental.SyncBatchNormalization.output": true, + "tf.keras.layers.experimental.SyncBatchNormalization.set_weights": true, + "tf.keras.layers.experimental.SyncBatchNormalization.submodules": true, + "tf.keras.layers.experimental.SyncBatchNormalization.trainable": true, + "tf.keras.layers.experimental.SyncBatchNormalization.trainable_weights": true, + "tf.keras.layers.experimental.SyncBatchNormalization.weights": true, + "tf.keras.layers.experimental.SyncBatchNormalization.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing": false, + "tf.keras.layers.experimental.preprocessing.CenterCrop": false, + "tf.keras.layers.experimental.preprocessing.CenterCrop.__call__": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.__eq__": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.__ge__": true, + 
"tf.keras.layers.experimental.preprocessing.CenterCrop.__gt__": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.__init__": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.__le__": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.__lt__": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.__ne__": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.__new__": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.add_loss": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.add_metric": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.add_weight": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.build": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.call": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.count_params": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.dtype": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.dynamic": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.from_config": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.get_config": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.get_weights": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.input": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.input_spec": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.losses": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.metrics": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.name": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.name_scope": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.output": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.set_weights": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.submodules": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.trainable": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.weights": true, + "tf.keras.layers.experimental.preprocessing.CenterCrop.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.Normalization": false, + "tf.keras.layers.experimental.preprocessing.Normalization.__call__": true, + "tf.keras.layers.experimental.preprocessing.Normalization.__eq__": true, + "tf.keras.layers.experimental.preprocessing.Normalization.__ge__": true, + "tf.keras.layers.experimental.preprocessing.Normalization.__gt__": true, + "tf.keras.layers.experimental.preprocessing.Normalization.__init__": true, + "tf.keras.layers.experimental.preprocessing.Normalization.__le__": true, + "tf.keras.layers.experimental.preprocessing.Normalization.__lt__": true, + "tf.keras.layers.experimental.preprocessing.Normalization.__ne__": true, + "tf.keras.layers.experimental.preprocessing.Normalization.__new__": true, + "tf.keras.layers.experimental.preprocessing.Normalization.activity_regularizer": true, + 
"tf.keras.layers.experimental.preprocessing.Normalization.adapt": true, + "tf.keras.layers.experimental.preprocessing.Normalization.add_loss": true, + "tf.keras.layers.experimental.preprocessing.Normalization.add_metric": true, + "tf.keras.layers.experimental.preprocessing.Normalization.add_weight": true, + "tf.keras.layers.experimental.preprocessing.Normalization.build": true, + "tf.keras.layers.experimental.preprocessing.Normalization.call": true, + "tf.keras.layers.experimental.preprocessing.Normalization.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.Normalization.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.Normalization.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.Normalization.count_params": true, + "tf.keras.layers.experimental.preprocessing.Normalization.dtype": true, + "tf.keras.layers.experimental.preprocessing.Normalization.dynamic": true, + "tf.keras.layers.experimental.preprocessing.Normalization.from_config": true, + "tf.keras.layers.experimental.preprocessing.Normalization.get_config": true, + "tf.keras.layers.experimental.preprocessing.Normalization.get_weights": true, + "tf.keras.layers.experimental.preprocessing.Normalization.input": true, + "tf.keras.layers.experimental.preprocessing.Normalization.input_spec": true, + "tf.keras.layers.experimental.preprocessing.Normalization.losses": true, + "tf.keras.layers.experimental.preprocessing.Normalization.metrics": true, + "tf.keras.layers.experimental.preprocessing.Normalization.name": true, + "tf.keras.layers.experimental.preprocessing.Normalization.name_scope": true, + "tf.keras.layers.experimental.preprocessing.Normalization.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.Normalization.output": true, + "tf.keras.layers.experimental.preprocessing.Normalization.set_weights": true, + "tf.keras.layers.experimental.preprocessing.Normalization.submodules": true, + "tf.keras.layers.experimental.preprocessing.Normalization.trainable": true, + "tf.keras.layers.experimental.preprocessing.Normalization.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.Normalization.weights": true, + "tf.keras.layers.experimental.preprocessing.Normalization.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer": false, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__call__": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__eq__": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__ge__": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__gt__": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__init__": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__le__": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__lt__": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__ne__": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.__new__": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.adapt": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.add_loss": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.add_metric": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.add_weight": true, + 
"tf.keras.layers.experimental.preprocessing.PreprocessingLayer.build": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.call": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.count_params": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.dtype": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.dynamic": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.from_config": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.get_config": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.get_weights": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.input": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.input_spec": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.losses": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.metrics": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.name": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.name_scope": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.output": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.set_weights": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.submodules": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.trainable": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.weights": true, + "tf.keras.layers.experimental.preprocessing.PreprocessingLayer.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast": false, + "tf.keras.layers.experimental.preprocessing.RandomContrast.__call__": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.__eq__": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.__ge__": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.__gt__": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.__init__": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.__le__": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.__lt__": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.__ne__": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.__new__": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.add_loss": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.add_metric": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.add_weight": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.build": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.call": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.compute_output_shape": true, + 
"tf.keras.layers.experimental.preprocessing.RandomContrast.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.count_params": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.dtype": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.dynamic": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.from_config": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.get_config": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.get_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.input": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.input_spec": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.losses": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.metrics": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.name": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.output": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.set_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.submodules": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.trainable": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.weights": true, + "tf.keras.layers.experimental.preprocessing.RandomContrast.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop": false, + "tf.keras.layers.experimental.preprocessing.RandomCrop.__call__": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.__eq__": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.__ge__": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.__gt__": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.__init__": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.__le__": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.__lt__": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.__ne__": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.__new__": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.add_loss": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.add_metric": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.add_weight": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.build": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.call": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.count_params": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.dtype": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.dynamic": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.from_config": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.get_config": true, + 
"tf.keras.layers.experimental.preprocessing.RandomCrop.get_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.input": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.input_spec": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.losses": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.metrics": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.name": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.output": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.set_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.submodules": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.trainable": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.weights": true, + "tf.keras.layers.experimental.preprocessing.RandomCrop.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip": false, + "tf.keras.layers.experimental.preprocessing.RandomFlip.__call__": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.__eq__": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.__ge__": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.__gt__": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.__init__": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.__le__": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.__lt__": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.__ne__": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.__new__": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.add_loss": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.add_metric": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.add_weight": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.build": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.call": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.count_params": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.dtype": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.dynamic": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.from_config": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.get_config": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.get_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.input": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.input_spec": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.losses": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.metrics": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.name": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.non_trainable_weights": true, + 
"tf.keras.layers.experimental.preprocessing.RandomFlip.output": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.set_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.submodules": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.trainable": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.weights": true, + "tf.keras.layers.experimental.preprocessing.RandomFlip.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight": false, + "tf.keras.layers.experimental.preprocessing.RandomHeight.__call__": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.__eq__": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.__ge__": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.__gt__": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.__init__": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.__le__": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.__lt__": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.__ne__": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.__new__": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.add_loss": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.add_metric": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.add_weight": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.build": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.call": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.count_params": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.dtype": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.dynamic": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.from_config": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.get_config": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.get_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.input": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.input_spec": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.losses": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.metrics": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.name": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.output": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.set_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.submodules": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.trainable": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.weights": true, + "tf.keras.layers.experimental.preprocessing.RandomHeight.with_name_scope": true, + 
"tf.keras.layers.experimental.preprocessing.RandomRotation": false, + "tf.keras.layers.experimental.preprocessing.RandomRotation.__call__": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.__eq__": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.__ge__": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.__gt__": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.__init__": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.__le__": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.__lt__": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.__ne__": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.__new__": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.add_loss": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.add_metric": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.add_weight": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.build": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.call": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.count_params": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.dtype": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.dynamic": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.from_config": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.get_config": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.get_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.input": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.input_spec": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.losses": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.metrics": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.name": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.output": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.set_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.submodules": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.trainable": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.weights": true, + "tf.keras.layers.experimental.preprocessing.RandomRotation.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation": false, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__call__": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__eq__": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__ge__": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__gt__": true, + 
"tf.keras.layers.experimental.preprocessing.RandomTranslation.__init__": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__le__": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__lt__": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__ne__": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.__new__": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.add_loss": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.add_metric": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.add_weight": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.build": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.call": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.count_params": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.dtype": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.dynamic": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.from_config": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.get_config": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.get_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.input": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.input_spec": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.losses": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.metrics": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.name": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.output": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.set_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.submodules": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.trainable": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.weights": true, + "tf.keras.layers.experimental.preprocessing.RandomTranslation.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth": false, + "tf.keras.layers.experimental.preprocessing.RandomWidth.__call__": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.__eq__": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.__ge__": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.__gt__": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.__init__": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.__le__": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.__lt__": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.__ne__": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.__new__": true, + 
"tf.keras.layers.experimental.preprocessing.RandomWidth.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.add_loss": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.add_metric": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.add_weight": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.build": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.call": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.count_params": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.dtype": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.dynamic": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.from_config": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.get_config": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.get_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.input": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.input_spec": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.losses": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.metrics": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.name": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.name_scope": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.output": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.set_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.submodules": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.trainable": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.weights": true, + "tf.keras.layers.experimental.preprocessing.RandomWidth.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.Rescaling": false, + "tf.keras.layers.experimental.preprocessing.Rescaling.__call__": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.__eq__": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.__ge__": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.__gt__": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.__init__": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.__le__": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.__lt__": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.__ne__": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.__new__": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.add_loss": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.add_metric": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.add_weight": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.build": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.call": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.compute_mask": true, + 
"tf.keras.layers.experimental.preprocessing.Rescaling.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.count_params": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.dtype": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.dynamic": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.from_config": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.get_config": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.get_weights": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.input": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.input_spec": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.losses": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.metrics": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.name": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.name_scope": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.output": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.set_weights": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.submodules": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.trainable": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.weights": true, + "tf.keras.layers.experimental.preprocessing.Rescaling.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.Resizing": false, + "tf.keras.layers.experimental.preprocessing.Resizing.__call__": true, + "tf.keras.layers.experimental.preprocessing.Resizing.__eq__": true, + "tf.keras.layers.experimental.preprocessing.Resizing.__ge__": true, + "tf.keras.layers.experimental.preprocessing.Resizing.__gt__": true, + "tf.keras.layers.experimental.preprocessing.Resizing.__init__": true, + "tf.keras.layers.experimental.preprocessing.Resizing.__le__": true, + "tf.keras.layers.experimental.preprocessing.Resizing.__lt__": true, + "tf.keras.layers.experimental.preprocessing.Resizing.__ne__": true, + "tf.keras.layers.experimental.preprocessing.Resizing.__new__": true, + "tf.keras.layers.experimental.preprocessing.Resizing.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.Resizing.add_loss": true, + "tf.keras.layers.experimental.preprocessing.Resizing.add_metric": true, + "tf.keras.layers.experimental.preprocessing.Resizing.add_weight": true, + "tf.keras.layers.experimental.preprocessing.Resizing.build": true, + "tf.keras.layers.experimental.preprocessing.Resizing.call": true, + "tf.keras.layers.experimental.preprocessing.Resizing.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.Resizing.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.Resizing.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.Resizing.count_params": true, + "tf.keras.layers.experimental.preprocessing.Resizing.dtype": true, + "tf.keras.layers.experimental.preprocessing.Resizing.dynamic": true, + "tf.keras.layers.experimental.preprocessing.Resizing.from_config": true, + "tf.keras.layers.experimental.preprocessing.Resizing.get_config": true, + "tf.keras.layers.experimental.preprocessing.Resizing.get_weights": true, + 
"tf.keras.layers.experimental.preprocessing.Resizing.input": true, + "tf.keras.layers.experimental.preprocessing.Resizing.input_spec": true, + "tf.keras.layers.experimental.preprocessing.Resizing.losses": true, + "tf.keras.layers.experimental.preprocessing.Resizing.metrics": true, + "tf.keras.layers.experimental.preprocessing.Resizing.name": true, + "tf.keras.layers.experimental.preprocessing.Resizing.name_scope": true, + "tf.keras.layers.experimental.preprocessing.Resizing.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.Resizing.output": true, + "tf.keras.layers.experimental.preprocessing.Resizing.set_weights": true, + "tf.keras.layers.experimental.preprocessing.Resizing.submodules": true, + "tf.keras.layers.experimental.preprocessing.Resizing.trainable": true, + "tf.keras.layers.experimental.preprocessing.Resizing.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.Resizing.weights": true, + "tf.keras.layers.experimental.preprocessing.Resizing.with_name_scope": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization": false, + "tf.keras.layers.experimental.preprocessing.TextVectorization.__call__": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.__eq__": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.__ge__": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.__gt__": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.__init__": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.__le__": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.__lt__": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.__ne__": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.__new__": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.activity_regularizer": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.adapt": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.add_loss": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.add_metric": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.add_weight": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.build": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.call": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.compute_mask": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.compute_output_shape": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.compute_output_signature": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.count_params": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.dtype": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.dynamic": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.from_config": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.get_config": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.get_vocabulary": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.get_weights": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.input": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.input_spec": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.losses": true, + 
"tf.keras.layers.experimental.preprocessing.TextVectorization.metrics": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.name": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.name_scope": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.non_trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.output": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.set_vocabulary": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.set_weights": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.submodules": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.trainable": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.trainable_weights": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.weights": true, + "tf.keras.layers.experimental.preprocessing.TextVectorization.with_name_scope": true, + "tf.keras.layers.maximum": false, + "tf.keras.layers.minimum": false, + "tf.keras.layers.multiply": false, + "tf.keras.layers.serialize": false, + "tf.keras.layers.subtract": false, + "tf.keras.losses": false, + "tf.keras.losses.BinaryCrossentropy": false, + "tf.keras.losses.BinaryCrossentropy.__call__": true, + "tf.keras.losses.BinaryCrossentropy.__eq__": true, + "tf.keras.losses.BinaryCrossentropy.__ge__": true, + "tf.keras.losses.BinaryCrossentropy.__gt__": true, + "tf.keras.losses.BinaryCrossentropy.__init__": true, + "tf.keras.losses.BinaryCrossentropy.__le__": true, + "tf.keras.losses.BinaryCrossentropy.__lt__": true, + "tf.keras.losses.BinaryCrossentropy.__ne__": true, + "tf.keras.losses.BinaryCrossentropy.__new__": true, + "tf.keras.losses.BinaryCrossentropy.call": true, + "tf.keras.losses.BinaryCrossentropy.from_config": true, + "tf.keras.losses.BinaryCrossentropy.get_config": true, + "tf.keras.losses.CategoricalCrossentropy": false, + "tf.keras.losses.CategoricalCrossentropy.__call__": true, + "tf.keras.losses.CategoricalCrossentropy.__eq__": true, + "tf.keras.losses.CategoricalCrossentropy.__ge__": true, + "tf.keras.losses.CategoricalCrossentropy.__gt__": true, + "tf.keras.losses.CategoricalCrossentropy.__init__": true, + "tf.keras.losses.CategoricalCrossentropy.__le__": true, + "tf.keras.losses.CategoricalCrossentropy.__lt__": true, + "tf.keras.losses.CategoricalCrossentropy.__ne__": true, + "tf.keras.losses.CategoricalCrossentropy.__new__": true, + "tf.keras.losses.CategoricalCrossentropy.call": true, + "tf.keras.losses.CategoricalCrossentropy.from_config": true, + "tf.keras.losses.CategoricalCrossentropy.get_config": true, + "tf.keras.losses.CategoricalHinge": false, + "tf.keras.losses.CategoricalHinge.__call__": true, + "tf.keras.losses.CategoricalHinge.__eq__": true, + "tf.keras.losses.CategoricalHinge.__ge__": true, + "tf.keras.losses.CategoricalHinge.__gt__": true, + "tf.keras.losses.CategoricalHinge.__init__": true, + "tf.keras.losses.CategoricalHinge.__le__": true, + "tf.keras.losses.CategoricalHinge.__lt__": true, + "tf.keras.losses.CategoricalHinge.__ne__": true, + "tf.keras.losses.CategoricalHinge.__new__": true, + "tf.keras.losses.CategoricalHinge.call": true, + "tf.keras.losses.CategoricalHinge.from_config": true, + "tf.keras.losses.CategoricalHinge.get_config": true, + "tf.keras.losses.CosineSimilarity": false, + "tf.keras.losses.CosineSimilarity.__call__": true, + "tf.keras.losses.CosineSimilarity.__eq__": true, + "tf.keras.losses.CosineSimilarity.__ge__": true, + 
"tf.keras.losses.CosineSimilarity.__gt__": true, + "tf.keras.losses.CosineSimilarity.__init__": true, + "tf.keras.losses.CosineSimilarity.__le__": true, + "tf.keras.losses.CosineSimilarity.__lt__": true, + "tf.keras.losses.CosineSimilarity.__ne__": true, + "tf.keras.losses.CosineSimilarity.__new__": true, + "tf.keras.losses.CosineSimilarity.call": true, + "tf.keras.losses.CosineSimilarity.from_config": true, + "tf.keras.losses.CosineSimilarity.get_config": true, + "tf.keras.losses.Hinge": false, + "tf.keras.losses.Hinge.__call__": true, + "tf.keras.losses.Hinge.__eq__": true, + "tf.keras.losses.Hinge.__ge__": true, + "tf.keras.losses.Hinge.__gt__": true, + "tf.keras.losses.Hinge.__init__": true, + "tf.keras.losses.Hinge.__le__": true, + "tf.keras.losses.Hinge.__lt__": true, + "tf.keras.losses.Hinge.__ne__": true, + "tf.keras.losses.Hinge.__new__": true, + "tf.keras.losses.Hinge.call": true, + "tf.keras.losses.Hinge.from_config": true, + "tf.keras.losses.Hinge.get_config": true, + "tf.keras.losses.Huber": false, + "tf.keras.losses.Huber.__call__": true, + "tf.keras.losses.Huber.__eq__": true, + "tf.keras.losses.Huber.__ge__": true, + "tf.keras.losses.Huber.__gt__": true, + "tf.keras.losses.Huber.__init__": true, + "tf.keras.losses.Huber.__le__": true, + "tf.keras.losses.Huber.__lt__": true, + "tf.keras.losses.Huber.__ne__": true, + "tf.keras.losses.Huber.__new__": true, + "tf.keras.losses.Huber.call": true, + "tf.keras.losses.Huber.from_config": true, + "tf.keras.losses.Huber.get_config": true, + "tf.keras.losses.KLD": false, + "tf.keras.losses.KLDivergence": false, + "tf.keras.losses.KLDivergence.__call__": true, + "tf.keras.losses.KLDivergence.__eq__": true, + "tf.keras.losses.KLDivergence.__ge__": true, + "tf.keras.losses.KLDivergence.__gt__": true, + "tf.keras.losses.KLDivergence.__init__": true, + "tf.keras.losses.KLDivergence.__le__": true, + "tf.keras.losses.KLDivergence.__lt__": true, + "tf.keras.losses.KLDivergence.__ne__": true, + "tf.keras.losses.KLDivergence.__new__": true, + "tf.keras.losses.KLDivergence.call": true, + "tf.keras.losses.KLDivergence.from_config": true, + "tf.keras.losses.KLDivergence.get_config": true, + "tf.keras.losses.LogCosh": false, + "tf.keras.losses.LogCosh.__call__": true, + "tf.keras.losses.LogCosh.__eq__": true, + "tf.keras.losses.LogCosh.__ge__": true, + "tf.keras.losses.LogCosh.__gt__": true, + "tf.keras.losses.LogCosh.__init__": true, + "tf.keras.losses.LogCosh.__le__": true, + "tf.keras.losses.LogCosh.__lt__": true, + "tf.keras.losses.LogCosh.__ne__": true, + "tf.keras.losses.LogCosh.__new__": true, + "tf.keras.losses.LogCosh.call": true, + "tf.keras.losses.LogCosh.from_config": true, + "tf.keras.losses.LogCosh.get_config": true, + "tf.keras.losses.Loss": false, + "tf.keras.losses.Loss.__call__": true, + "tf.keras.losses.Loss.__eq__": true, + "tf.keras.losses.Loss.__ge__": true, + "tf.keras.losses.Loss.__gt__": true, + "tf.keras.losses.Loss.__init__": true, + "tf.keras.losses.Loss.__le__": true, + "tf.keras.losses.Loss.__lt__": true, + "tf.keras.losses.Loss.__ne__": true, + "tf.keras.losses.Loss.__new__": true, + "tf.keras.losses.Loss.call": true, + "tf.keras.losses.Loss.from_config": true, + "tf.keras.losses.Loss.get_config": true, + "tf.keras.losses.MAE": false, + "tf.keras.losses.MAPE": false, + "tf.keras.losses.MSE": false, + "tf.keras.losses.MSLE": false, + "tf.keras.losses.MeanAbsoluteError": false, + "tf.keras.losses.MeanAbsoluteError.__call__": true, + "tf.keras.losses.MeanAbsoluteError.__eq__": true, + 
"tf.keras.losses.MeanAbsoluteError.__ge__": true, + "tf.keras.losses.MeanAbsoluteError.__gt__": true, + "tf.keras.losses.MeanAbsoluteError.__init__": true, + "tf.keras.losses.MeanAbsoluteError.__le__": true, + "tf.keras.losses.MeanAbsoluteError.__lt__": true, + "tf.keras.losses.MeanAbsoluteError.__ne__": true, + "tf.keras.losses.MeanAbsoluteError.__new__": true, + "tf.keras.losses.MeanAbsoluteError.call": true, + "tf.keras.losses.MeanAbsoluteError.from_config": true, + "tf.keras.losses.MeanAbsoluteError.get_config": true, + "tf.keras.losses.MeanAbsolutePercentageError": false, + "tf.keras.losses.MeanAbsolutePercentageError.__call__": true, + "tf.keras.losses.MeanAbsolutePercentageError.__eq__": true, + "tf.keras.losses.MeanAbsolutePercentageError.__ge__": true, + "tf.keras.losses.MeanAbsolutePercentageError.__gt__": true, + "tf.keras.losses.MeanAbsolutePercentageError.__init__": true, + "tf.keras.losses.MeanAbsolutePercentageError.__le__": true, + "tf.keras.losses.MeanAbsolutePercentageError.__lt__": true, + "tf.keras.losses.MeanAbsolutePercentageError.__ne__": true, + "tf.keras.losses.MeanAbsolutePercentageError.__new__": true, + "tf.keras.losses.MeanAbsolutePercentageError.call": true, + "tf.keras.losses.MeanAbsolutePercentageError.from_config": true, + "tf.keras.losses.MeanAbsolutePercentageError.get_config": true, + "tf.keras.losses.MeanSquaredError": false, + "tf.keras.losses.MeanSquaredError.__call__": true, + "tf.keras.losses.MeanSquaredError.__eq__": true, + "tf.keras.losses.MeanSquaredError.__ge__": true, + "tf.keras.losses.MeanSquaredError.__gt__": true, + "tf.keras.losses.MeanSquaredError.__init__": true, + "tf.keras.losses.MeanSquaredError.__le__": true, + "tf.keras.losses.MeanSquaredError.__lt__": true, + "tf.keras.losses.MeanSquaredError.__ne__": true, + "tf.keras.losses.MeanSquaredError.__new__": true, + "tf.keras.losses.MeanSquaredError.call": true, + "tf.keras.losses.MeanSquaredError.from_config": true, + "tf.keras.losses.MeanSquaredError.get_config": true, + "tf.keras.losses.MeanSquaredLogarithmicError": false, + "tf.keras.losses.MeanSquaredLogarithmicError.__call__": true, + "tf.keras.losses.MeanSquaredLogarithmicError.__eq__": true, + "tf.keras.losses.MeanSquaredLogarithmicError.__ge__": true, + "tf.keras.losses.MeanSquaredLogarithmicError.__gt__": true, + "tf.keras.losses.MeanSquaredLogarithmicError.__init__": true, + "tf.keras.losses.MeanSquaredLogarithmicError.__le__": true, + "tf.keras.losses.MeanSquaredLogarithmicError.__lt__": true, + "tf.keras.losses.MeanSquaredLogarithmicError.__ne__": true, + "tf.keras.losses.MeanSquaredLogarithmicError.__new__": true, + "tf.keras.losses.MeanSquaredLogarithmicError.call": true, + "tf.keras.losses.MeanSquaredLogarithmicError.from_config": true, + "tf.keras.losses.MeanSquaredLogarithmicError.get_config": true, + "tf.keras.losses.Poisson": false, + "tf.keras.losses.Poisson.__call__": true, + "tf.keras.losses.Poisson.__eq__": true, + "tf.keras.losses.Poisson.__ge__": true, + "tf.keras.losses.Poisson.__gt__": true, + "tf.keras.losses.Poisson.__init__": true, + "tf.keras.losses.Poisson.__le__": true, + "tf.keras.losses.Poisson.__lt__": true, + "tf.keras.losses.Poisson.__ne__": true, + "tf.keras.losses.Poisson.__new__": true, + "tf.keras.losses.Poisson.call": true, + "tf.keras.losses.Poisson.from_config": true, + "tf.keras.losses.Poisson.get_config": true, + "tf.keras.losses.Reduction": false, + "tf.keras.losses.Reduction.AUTO": true, + "tf.keras.losses.Reduction.NONE": true, + "tf.keras.losses.Reduction.SUM": true, + 
"tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE": true, + "tf.keras.losses.Reduction.__eq__": true, + "tf.keras.losses.Reduction.__ge__": true, + "tf.keras.losses.Reduction.__gt__": true, + "tf.keras.losses.Reduction.__init__": true, + "tf.keras.losses.Reduction.__le__": true, + "tf.keras.losses.Reduction.__lt__": true, + "tf.keras.losses.Reduction.__ne__": true, + "tf.keras.losses.Reduction.__new__": true, + "tf.keras.losses.Reduction.all": true, + "tf.keras.losses.Reduction.validate": true, + "tf.keras.losses.SparseCategoricalCrossentropy": false, + "tf.keras.losses.SparseCategoricalCrossentropy.__call__": true, + "tf.keras.losses.SparseCategoricalCrossentropy.__eq__": true, + "tf.keras.losses.SparseCategoricalCrossentropy.__ge__": true, + "tf.keras.losses.SparseCategoricalCrossentropy.__gt__": true, + "tf.keras.losses.SparseCategoricalCrossentropy.__init__": true, + "tf.keras.losses.SparseCategoricalCrossentropy.__le__": true, + "tf.keras.losses.SparseCategoricalCrossentropy.__lt__": true, + "tf.keras.losses.SparseCategoricalCrossentropy.__ne__": true, + "tf.keras.losses.SparseCategoricalCrossentropy.__new__": true, + "tf.keras.losses.SparseCategoricalCrossentropy.call": true, + "tf.keras.losses.SparseCategoricalCrossentropy.from_config": true, + "tf.keras.losses.SparseCategoricalCrossentropy.get_config": true, + "tf.keras.losses.SquaredHinge": false, + "tf.keras.losses.SquaredHinge.__call__": true, + "tf.keras.losses.SquaredHinge.__eq__": true, + "tf.keras.losses.SquaredHinge.__ge__": true, + "tf.keras.losses.SquaredHinge.__gt__": true, + "tf.keras.losses.SquaredHinge.__init__": true, + "tf.keras.losses.SquaredHinge.__le__": true, + "tf.keras.losses.SquaredHinge.__lt__": true, + "tf.keras.losses.SquaredHinge.__ne__": true, + "tf.keras.losses.SquaredHinge.__new__": true, + "tf.keras.losses.SquaredHinge.call": true, + "tf.keras.losses.SquaredHinge.from_config": true, + "tf.keras.losses.SquaredHinge.get_config": true, + "tf.keras.losses.binary_crossentropy": false, + "tf.keras.losses.categorical_crossentropy": false, + "tf.keras.losses.categorical_hinge": false, + "tf.keras.losses.cosine_similarity": false, + "tf.keras.losses.deserialize": false, + "tf.keras.losses.get": false, + "tf.keras.losses.hinge": false, + "tf.keras.losses.kld": false, + "tf.keras.losses.kullback_leibler_divergence": false, + "tf.keras.losses.logcosh": false, + "tf.keras.losses.mae": false, + "tf.keras.losses.mape": false, + "tf.keras.losses.mean_absolute_error": false, + "tf.keras.losses.mean_absolute_percentage_error": false, + "tf.keras.losses.mean_squared_error": false, + "tf.keras.losses.mean_squared_logarithmic_error": false, + "tf.keras.losses.mse": false, + "tf.keras.losses.msle": false, + "tf.keras.losses.poisson": false, + "tf.keras.losses.serialize": false, + "tf.keras.losses.sparse_categorical_crossentropy": false, + "tf.keras.losses.squared_hinge": false, + "tf.keras.metrics": false, + "tf.keras.metrics.AUC": false, + "tf.keras.metrics.AUC.__call__": true, + "tf.keras.metrics.AUC.__eq__": true, + "tf.keras.metrics.AUC.__ge__": true, + "tf.keras.metrics.AUC.__gt__": true, + "tf.keras.metrics.AUC.__init__": true, + "tf.keras.metrics.AUC.__le__": true, + "tf.keras.metrics.AUC.__lt__": true, + "tf.keras.metrics.AUC.__ne__": true, + "tf.keras.metrics.AUC.__new__": true, + "tf.keras.metrics.AUC.activity_regularizer": true, + "tf.keras.metrics.AUC.add_loss": true, + "tf.keras.metrics.AUC.add_metric": true, + "tf.keras.metrics.AUC.add_weight": true, + "tf.keras.metrics.AUC.build": true, + 
"tf.keras.metrics.AUC.call": true, + "tf.keras.metrics.AUC.compute_mask": true, + "tf.keras.metrics.AUC.compute_output_shape": true, + "tf.keras.metrics.AUC.compute_output_signature": true, + "tf.keras.metrics.AUC.count_params": true, + "tf.keras.metrics.AUC.dtype": true, + "tf.keras.metrics.AUC.dynamic": true, + "tf.keras.metrics.AUC.from_config": true, + "tf.keras.metrics.AUC.get_config": true, + "tf.keras.metrics.AUC.get_weights": true, + "tf.keras.metrics.AUC.input": true, + "tf.keras.metrics.AUC.input_spec": true, + "tf.keras.metrics.AUC.interpolate_pr_auc": true, + "tf.keras.metrics.AUC.losses": true, + "tf.keras.metrics.AUC.metrics": true, + "tf.keras.metrics.AUC.name": true, + "tf.keras.metrics.AUC.name_scope": true, + "tf.keras.metrics.AUC.non_trainable_weights": true, + "tf.keras.metrics.AUC.output": true, + "tf.keras.metrics.AUC.reset_states": true, + "tf.keras.metrics.AUC.result": true, + "tf.keras.metrics.AUC.set_weights": true, + "tf.keras.metrics.AUC.submodules": true, + "tf.keras.metrics.AUC.trainable": true, + "tf.keras.metrics.AUC.trainable_weights": true, + "tf.keras.metrics.AUC.update_state": true, + "tf.keras.metrics.AUC.weights": true, + "tf.keras.metrics.AUC.with_name_scope": true, + "tf.keras.metrics.Accuracy": false, + "tf.keras.metrics.Accuracy.__call__": true, + "tf.keras.metrics.Accuracy.__eq__": true, + "tf.keras.metrics.Accuracy.__ge__": true, + "tf.keras.metrics.Accuracy.__gt__": true, + "tf.keras.metrics.Accuracy.__init__": true, + "tf.keras.metrics.Accuracy.__le__": true, + "tf.keras.metrics.Accuracy.__lt__": true, + "tf.keras.metrics.Accuracy.__ne__": true, + "tf.keras.metrics.Accuracy.__new__": true, + "tf.keras.metrics.Accuracy.activity_regularizer": true, + "tf.keras.metrics.Accuracy.add_loss": true, + "tf.keras.metrics.Accuracy.add_metric": true, + "tf.keras.metrics.Accuracy.add_weight": true, + "tf.keras.metrics.Accuracy.build": true, + "tf.keras.metrics.Accuracy.call": true, + "tf.keras.metrics.Accuracy.compute_mask": true, + "tf.keras.metrics.Accuracy.compute_output_shape": true, + "tf.keras.metrics.Accuracy.compute_output_signature": true, + "tf.keras.metrics.Accuracy.count_params": true, + "tf.keras.metrics.Accuracy.dtype": true, + "tf.keras.metrics.Accuracy.dynamic": true, + "tf.keras.metrics.Accuracy.from_config": true, + "tf.keras.metrics.Accuracy.get_config": true, + "tf.keras.metrics.Accuracy.get_weights": true, + "tf.keras.metrics.Accuracy.input": true, + "tf.keras.metrics.Accuracy.input_spec": true, + "tf.keras.metrics.Accuracy.losses": true, + "tf.keras.metrics.Accuracy.metrics": true, + "tf.keras.metrics.Accuracy.name": true, + "tf.keras.metrics.Accuracy.name_scope": true, + "tf.keras.metrics.Accuracy.non_trainable_weights": true, + "tf.keras.metrics.Accuracy.output": true, + "tf.keras.metrics.Accuracy.reset_states": true, + "tf.keras.metrics.Accuracy.result": true, + "tf.keras.metrics.Accuracy.set_weights": true, + "tf.keras.metrics.Accuracy.submodules": true, + "tf.keras.metrics.Accuracy.trainable": true, + "tf.keras.metrics.Accuracy.trainable_weights": true, + "tf.keras.metrics.Accuracy.update_state": true, + "tf.keras.metrics.Accuracy.weights": true, + "tf.keras.metrics.Accuracy.with_name_scope": true, + "tf.keras.metrics.BinaryAccuracy": false, + "tf.keras.metrics.BinaryAccuracy.__call__": true, + "tf.keras.metrics.BinaryAccuracy.__eq__": true, + "tf.keras.metrics.BinaryAccuracy.__ge__": true, + "tf.keras.metrics.BinaryAccuracy.__gt__": true, + "tf.keras.metrics.BinaryAccuracy.__init__": true, + 
"tf.keras.metrics.BinaryAccuracy.__le__": true, + "tf.keras.metrics.BinaryAccuracy.__lt__": true, + "tf.keras.metrics.BinaryAccuracy.__ne__": true, + "tf.keras.metrics.BinaryAccuracy.__new__": true, + "tf.keras.metrics.BinaryAccuracy.activity_regularizer": true, + "tf.keras.metrics.BinaryAccuracy.add_loss": true, + "tf.keras.metrics.BinaryAccuracy.add_metric": true, + "tf.keras.metrics.BinaryAccuracy.add_weight": true, + "tf.keras.metrics.BinaryAccuracy.build": true, + "tf.keras.metrics.BinaryAccuracy.call": true, + "tf.keras.metrics.BinaryAccuracy.compute_mask": true, + "tf.keras.metrics.BinaryAccuracy.compute_output_shape": true, + "tf.keras.metrics.BinaryAccuracy.compute_output_signature": true, + "tf.keras.metrics.BinaryAccuracy.count_params": true, + "tf.keras.metrics.BinaryAccuracy.dtype": true, + "tf.keras.metrics.BinaryAccuracy.dynamic": true, + "tf.keras.metrics.BinaryAccuracy.from_config": true, + "tf.keras.metrics.BinaryAccuracy.get_config": true, + "tf.keras.metrics.BinaryAccuracy.get_weights": true, + "tf.keras.metrics.BinaryAccuracy.input": true, + "tf.keras.metrics.BinaryAccuracy.input_spec": true, + "tf.keras.metrics.BinaryAccuracy.losses": true, + "tf.keras.metrics.BinaryAccuracy.metrics": true, + "tf.keras.metrics.BinaryAccuracy.name": true, + "tf.keras.metrics.BinaryAccuracy.name_scope": true, + "tf.keras.metrics.BinaryAccuracy.non_trainable_weights": true, + "tf.keras.metrics.BinaryAccuracy.output": true, + "tf.keras.metrics.BinaryAccuracy.reset_states": true, + "tf.keras.metrics.BinaryAccuracy.result": true, + "tf.keras.metrics.BinaryAccuracy.set_weights": true, + "tf.keras.metrics.BinaryAccuracy.submodules": true, + "tf.keras.metrics.BinaryAccuracy.trainable": true, + "tf.keras.metrics.BinaryAccuracy.trainable_weights": true, + "tf.keras.metrics.BinaryAccuracy.update_state": true, + "tf.keras.metrics.BinaryAccuracy.weights": true, + "tf.keras.metrics.BinaryAccuracy.with_name_scope": true, + "tf.keras.metrics.BinaryCrossentropy": false, + "tf.keras.metrics.BinaryCrossentropy.__call__": true, + "tf.keras.metrics.BinaryCrossentropy.__eq__": true, + "tf.keras.metrics.BinaryCrossentropy.__ge__": true, + "tf.keras.metrics.BinaryCrossentropy.__gt__": true, + "tf.keras.metrics.BinaryCrossentropy.__init__": true, + "tf.keras.metrics.BinaryCrossentropy.__le__": true, + "tf.keras.metrics.BinaryCrossentropy.__lt__": true, + "tf.keras.metrics.BinaryCrossentropy.__ne__": true, + "tf.keras.metrics.BinaryCrossentropy.__new__": true, + "tf.keras.metrics.BinaryCrossentropy.activity_regularizer": true, + "tf.keras.metrics.BinaryCrossentropy.add_loss": true, + "tf.keras.metrics.BinaryCrossentropy.add_metric": true, + "tf.keras.metrics.BinaryCrossentropy.add_weight": true, + "tf.keras.metrics.BinaryCrossentropy.build": true, + "tf.keras.metrics.BinaryCrossentropy.call": true, + "tf.keras.metrics.BinaryCrossentropy.compute_mask": true, + "tf.keras.metrics.BinaryCrossentropy.compute_output_shape": true, + "tf.keras.metrics.BinaryCrossentropy.compute_output_signature": true, + "tf.keras.metrics.BinaryCrossentropy.count_params": true, + "tf.keras.metrics.BinaryCrossentropy.dtype": true, + "tf.keras.metrics.BinaryCrossentropy.dynamic": true, + "tf.keras.metrics.BinaryCrossentropy.from_config": true, + "tf.keras.metrics.BinaryCrossentropy.get_config": true, + "tf.keras.metrics.BinaryCrossentropy.get_weights": true, + "tf.keras.metrics.BinaryCrossentropy.input": true, + "tf.keras.metrics.BinaryCrossentropy.input_spec": true, + "tf.keras.metrics.BinaryCrossentropy.losses": true, + 
"tf.keras.metrics.BinaryCrossentropy.metrics": true, + "tf.keras.metrics.BinaryCrossentropy.name": true, + "tf.keras.metrics.BinaryCrossentropy.name_scope": true, + "tf.keras.metrics.BinaryCrossentropy.non_trainable_weights": true, + "tf.keras.metrics.BinaryCrossentropy.output": true, + "tf.keras.metrics.BinaryCrossentropy.reset_states": true, + "tf.keras.metrics.BinaryCrossentropy.result": true, + "tf.keras.metrics.BinaryCrossentropy.set_weights": true, + "tf.keras.metrics.BinaryCrossentropy.submodules": true, + "tf.keras.metrics.BinaryCrossentropy.trainable": true, + "tf.keras.metrics.BinaryCrossentropy.trainable_weights": true, + "tf.keras.metrics.BinaryCrossentropy.update_state": true, + "tf.keras.metrics.BinaryCrossentropy.weights": true, + "tf.keras.metrics.BinaryCrossentropy.with_name_scope": true, + "tf.keras.metrics.CategoricalAccuracy": false, + "tf.keras.metrics.CategoricalAccuracy.__call__": true, + "tf.keras.metrics.CategoricalAccuracy.__eq__": true, + "tf.keras.metrics.CategoricalAccuracy.__ge__": true, + "tf.keras.metrics.CategoricalAccuracy.__gt__": true, + "tf.keras.metrics.CategoricalAccuracy.__init__": true, + "tf.keras.metrics.CategoricalAccuracy.__le__": true, + "tf.keras.metrics.CategoricalAccuracy.__lt__": true, + "tf.keras.metrics.CategoricalAccuracy.__ne__": true, + "tf.keras.metrics.CategoricalAccuracy.__new__": true, + "tf.keras.metrics.CategoricalAccuracy.activity_regularizer": true, + "tf.keras.metrics.CategoricalAccuracy.add_loss": true, + "tf.keras.metrics.CategoricalAccuracy.add_metric": true, + "tf.keras.metrics.CategoricalAccuracy.add_weight": true, + "tf.keras.metrics.CategoricalAccuracy.build": true, + "tf.keras.metrics.CategoricalAccuracy.call": true, + "tf.keras.metrics.CategoricalAccuracy.compute_mask": true, + "tf.keras.metrics.CategoricalAccuracy.compute_output_shape": true, + "tf.keras.metrics.CategoricalAccuracy.compute_output_signature": true, + "tf.keras.metrics.CategoricalAccuracy.count_params": true, + "tf.keras.metrics.CategoricalAccuracy.dtype": true, + "tf.keras.metrics.CategoricalAccuracy.dynamic": true, + "tf.keras.metrics.CategoricalAccuracy.from_config": true, + "tf.keras.metrics.CategoricalAccuracy.get_config": true, + "tf.keras.metrics.CategoricalAccuracy.get_weights": true, + "tf.keras.metrics.CategoricalAccuracy.input": true, + "tf.keras.metrics.CategoricalAccuracy.input_spec": true, + "tf.keras.metrics.CategoricalAccuracy.losses": true, + "tf.keras.metrics.CategoricalAccuracy.metrics": true, + "tf.keras.metrics.CategoricalAccuracy.name": true, + "tf.keras.metrics.CategoricalAccuracy.name_scope": true, + "tf.keras.metrics.CategoricalAccuracy.non_trainable_weights": true, + "tf.keras.metrics.CategoricalAccuracy.output": true, + "tf.keras.metrics.CategoricalAccuracy.reset_states": true, + "tf.keras.metrics.CategoricalAccuracy.result": true, + "tf.keras.metrics.CategoricalAccuracy.set_weights": true, + "tf.keras.metrics.CategoricalAccuracy.submodules": true, + "tf.keras.metrics.CategoricalAccuracy.trainable": true, + "tf.keras.metrics.CategoricalAccuracy.trainable_weights": true, + "tf.keras.metrics.CategoricalAccuracy.update_state": true, + "tf.keras.metrics.CategoricalAccuracy.weights": true, + "tf.keras.metrics.CategoricalAccuracy.with_name_scope": true, + "tf.keras.metrics.CategoricalCrossentropy": false, + "tf.keras.metrics.CategoricalCrossentropy.__call__": true, + "tf.keras.metrics.CategoricalCrossentropy.__eq__": true, + "tf.keras.metrics.CategoricalCrossentropy.__ge__": true, + 
"tf.keras.metrics.CategoricalCrossentropy.__gt__": true, + "tf.keras.metrics.CategoricalCrossentropy.__init__": true, + "tf.keras.metrics.CategoricalCrossentropy.__le__": true, + "tf.keras.metrics.CategoricalCrossentropy.__lt__": true, + "tf.keras.metrics.CategoricalCrossentropy.__ne__": true, + "tf.keras.metrics.CategoricalCrossentropy.__new__": true, + "tf.keras.metrics.CategoricalCrossentropy.activity_regularizer": true, + "tf.keras.metrics.CategoricalCrossentropy.add_loss": true, + "tf.keras.metrics.CategoricalCrossentropy.add_metric": true, + "tf.keras.metrics.CategoricalCrossentropy.add_weight": true, + "tf.keras.metrics.CategoricalCrossentropy.build": true, + "tf.keras.metrics.CategoricalCrossentropy.call": true, + "tf.keras.metrics.CategoricalCrossentropy.compute_mask": true, + "tf.keras.metrics.CategoricalCrossentropy.compute_output_shape": true, + "tf.keras.metrics.CategoricalCrossentropy.compute_output_signature": true, + "tf.keras.metrics.CategoricalCrossentropy.count_params": true, + "tf.keras.metrics.CategoricalCrossentropy.dtype": true, + "tf.keras.metrics.CategoricalCrossentropy.dynamic": true, + "tf.keras.metrics.CategoricalCrossentropy.from_config": true, + "tf.keras.metrics.CategoricalCrossentropy.get_config": true, + "tf.keras.metrics.CategoricalCrossentropy.get_weights": true, + "tf.keras.metrics.CategoricalCrossentropy.input": true, + "tf.keras.metrics.CategoricalCrossentropy.input_spec": true, + "tf.keras.metrics.CategoricalCrossentropy.losses": true, + "tf.keras.metrics.CategoricalCrossentropy.metrics": true, + "tf.keras.metrics.CategoricalCrossentropy.name": true, + "tf.keras.metrics.CategoricalCrossentropy.name_scope": true, + "tf.keras.metrics.CategoricalCrossentropy.non_trainable_weights": true, + "tf.keras.metrics.CategoricalCrossentropy.output": true, + "tf.keras.metrics.CategoricalCrossentropy.reset_states": true, + "tf.keras.metrics.CategoricalCrossentropy.result": true, + "tf.keras.metrics.CategoricalCrossentropy.set_weights": true, + "tf.keras.metrics.CategoricalCrossentropy.submodules": true, + "tf.keras.metrics.CategoricalCrossentropy.trainable": true, + "tf.keras.metrics.CategoricalCrossentropy.trainable_weights": true, + "tf.keras.metrics.CategoricalCrossentropy.update_state": true, + "tf.keras.metrics.CategoricalCrossentropy.weights": true, + "tf.keras.metrics.CategoricalCrossentropy.with_name_scope": true, + "tf.keras.metrics.CategoricalHinge": false, + "tf.keras.metrics.CategoricalHinge.__call__": true, + "tf.keras.metrics.CategoricalHinge.__eq__": true, + "tf.keras.metrics.CategoricalHinge.__ge__": true, + "tf.keras.metrics.CategoricalHinge.__gt__": true, + "tf.keras.metrics.CategoricalHinge.__init__": true, + "tf.keras.metrics.CategoricalHinge.__le__": true, + "tf.keras.metrics.CategoricalHinge.__lt__": true, + "tf.keras.metrics.CategoricalHinge.__ne__": true, + "tf.keras.metrics.CategoricalHinge.__new__": true, + "tf.keras.metrics.CategoricalHinge.activity_regularizer": true, + "tf.keras.metrics.CategoricalHinge.add_loss": true, + "tf.keras.metrics.CategoricalHinge.add_metric": true, + "tf.keras.metrics.CategoricalHinge.add_weight": true, + "tf.keras.metrics.CategoricalHinge.build": true, + "tf.keras.metrics.CategoricalHinge.call": true, + "tf.keras.metrics.CategoricalHinge.compute_mask": true, + "tf.keras.metrics.CategoricalHinge.compute_output_shape": true, + "tf.keras.metrics.CategoricalHinge.compute_output_signature": true, + "tf.keras.metrics.CategoricalHinge.count_params": true, + "tf.keras.metrics.CategoricalHinge.dtype": true, + 
"tf.keras.metrics.CategoricalHinge.dynamic": true, + "tf.keras.metrics.CategoricalHinge.from_config": true, + "tf.keras.metrics.CategoricalHinge.get_config": true, + "tf.keras.metrics.CategoricalHinge.get_weights": true, + "tf.keras.metrics.CategoricalHinge.input": true, + "tf.keras.metrics.CategoricalHinge.input_spec": true, + "tf.keras.metrics.CategoricalHinge.losses": true, + "tf.keras.metrics.CategoricalHinge.metrics": true, + "tf.keras.metrics.CategoricalHinge.name": true, + "tf.keras.metrics.CategoricalHinge.name_scope": true, + "tf.keras.metrics.CategoricalHinge.non_trainable_weights": true, + "tf.keras.metrics.CategoricalHinge.output": true, + "tf.keras.metrics.CategoricalHinge.reset_states": true, + "tf.keras.metrics.CategoricalHinge.result": true, + "tf.keras.metrics.CategoricalHinge.set_weights": true, + "tf.keras.metrics.CategoricalHinge.submodules": true, + "tf.keras.metrics.CategoricalHinge.trainable": true, + "tf.keras.metrics.CategoricalHinge.trainable_weights": true, + "tf.keras.metrics.CategoricalHinge.update_state": true, + "tf.keras.metrics.CategoricalHinge.weights": true, + "tf.keras.metrics.CategoricalHinge.with_name_scope": true, + "tf.keras.metrics.CosineSimilarity": false, + "tf.keras.metrics.CosineSimilarity.__call__": true, + "tf.keras.metrics.CosineSimilarity.__eq__": true, + "tf.keras.metrics.CosineSimilarity.__ge__": true, + "tf.keras.metrics.CosineSimilarity.__gt__": true, + "tf.keras.metrics.CosineSimilarity.__init__": true, + "tf.keras.metrics.CosineSimilarity.__le__": true, + "tf.keras.metrics.CosineSimilarity.__lt__": true, + "tf.keras.metrics.CosineSimilarity.__ne__": true, + "tf.keras.metrics.CosineSimilarity.__new__": true, + "tf.keras.metrics.CosineSimilarity.activity_regularizer": true, + "tf.keras.metrics.CosineSimilarity.add_loss": true, + "tf.keras.metrics.CosineSimilarity.add_metric": true, + "tf.keras.metrics.CosineSimilarity.add_weight": true, + "tf.keras.metrics.CosineSimilarity.build": true, + "tf.keras.metrics.CosineSimilarity.call": true, + "tf.keras.metrics.CosineSimilarity.compute_mask": true, + "tf.keras.metrics.CosineSimilarity.compute_output_shape": true, + "tf.keras.metrics.CosineSimilarity.compute_output_signature": true, + "tf.keras.metrics.CosineSimilarity.count_params": true, + "tf.keras.metrics.CosineSimilarity.dtype": true, + "tf.keras.metrics.CosineSimilarity.dynamic": true, + "tf.keras.metrics.CosineSimilarity.from_config": true, + "tf.keras.metrics.CosineSimilarity.get_config": true, + "tf.keras.metrics.CosineSimilarity.get_weights": true, + "tf.keras.metrics.CosineSimilarity.input": true, + "tf.keras.metrics.CosineSimilarity.input_spec": true, + "tf.keras.metrics.CosineSimilarity.losses": true, + "tf.keras.metrics.CosineSimilarity.metrics": true, + "tf.keras.metrics.CosineSimilarity.name": true, + "tf.keras.metrics.CosineSimilarity.name_scope": true, + "tf.keras.metrics.CosineSimilarity.non_trainable_weights": true, + "tf.keras.metrics.CosineSimilarity.output": true, + "tf.keras.metrics.CosineSimilarity.reset_states": true, + "tf.keras.metrics.CosineSimilarity.result": true, + "tf.keras.metrics.CosineSimilarity.set_weights": true, + "tf.keras.metrics.CosineSimilarity.submodules": true, + "tf.keras.metrics.CosineSimilarity.trainable": true, + "tf.keras.metrics.CosineSimilarity.trainable_weights": true, + "tf.keras.metrics.CosineSimilarity.update_state": true, + "tf.keras.metrics.CosineSimilarity.weights": true, + "tf.keras.metrics.CosineSimilarity.with_name_scope": true, + "tf.keras.metrics.FalseNegatives": false, + 
"tf.keras.metrics.FalseNegatives.__call__": true, + "tf.keras.metrics.FalseNegatives.__eq__": true, + "tf.keras.metrics.FalseNegatives.__ge__": true, + "tf.keras.metrics.FalseNegatives.__gt__": true, + "tf.keras.metrics.FalseNegatives.__init__": true, + "tf.keras.metrics.FalseNegatives.__le__": true, + "tf.keras.metrics.FalseNegatives.__lt__": true, + "tf.keras.metrics.FalseNegatives.__ne__": true, + "tf.keras.metrics.FalseNegatives.__new__": true, + "tf.keras.metrics.FalseNegatives.activity_regularizer": true, + "tf.keras.metrics.FalseNegatives.add_loss": true, + "tf.keras.metrics.FalseNegatives.add_metric": true, + "tf.keras.metrics.FalseNegatives.add_weight": true, + "tf.keras.metrics.FalseNegatives.build": true, + "tf.keras.metrics.FalseNegatives.call": true, + "tf.keras.metrics.FalseNegatives.compute_mask": true, + "tf.keras.metrics.FalseNegatives.compute_output_shape": true, + "tf.keras.metrics.FalseNegatives.compute_output_signature": true, + "tf.keras.metrics.FalseNegatives.count_params": true, + "tf.keras.metrics.FalseNegatives.dtype": true, + "tf.keras.metrics.FalseNegatives.dynamic": true, + "tf.keras.metrics.FalseNegatives.from_config": true, + "tf.keras.metrics.FalseNegatives.get_config": true, + "tf.keras.metrics.FalseNegatives.get_weights": true, + "tf.keras.metrics.FalseNegatives.input": true, + "tf.keras.metrics.FalseNegatives.input_spec": true, + "tf.keras.metrics.FalseNegatives.losses": true, + "tf.keras.metrics.FalseNegatives.metrics": true, + "tf.keras.metrics.FalseNegatives.name": true, + "tf.keras.metrics.FalseNegatives.name_scope": true, + "tf.keras.metrics.FalseNegatives.non_trainable_weights": true, + "tf.keras.metrics.FalseNegatives.output": true, + "tf.keras.metrics.FalseNegatives.reset_states": true, + "tf.keras.metrics.FalseNegatives.result": true, + "tf.keras.metrics.FalseNegatives.set_weights": true, + "tf.keras.metrics.FalseNegatives.submodules": true, + "tf.keras.metrics.FalseNegatives.trainable": true, + "tf.keras.metrics.FalseNegatives.trainable_weights": true, + "tf.keras.metrics.FalseNegatives.update_state": true, + "tf.keras.metrics.FalseNegatives.weights": true, + "tf.keras.metrics.FalseNegatives.with_name_scope": true, + "tf.keras.metrics.FalsePositives": false, + "tf.keras.metrics.FalsePositives.__call__": true, + "tf.keras.metrics.FalsePositives.__eq__": true, + "tf.keras.metrics.FalsePositives.__ge__": true, + "tf.keras.metrics.FalsePositives.__gt__": true, + "tf.keras.metrics.FalsePositives.__init__": true, + "tf.keras.metrics.FalsePositives.__le__": true, + "tf.keras.metrics.FalsePositives.__lt__": true, + "tf.keras.metrics.FalsePositives.__ne__": true, + "tf.keras.metrics.FalsePositives.__new__": true, + "tf.keras.metrics.FalsePositives.activity_regularizer": true, + "tf.keras.metrics.FalsePositives.add_loss": true, + "tf.keras.metrics.FalsePositives.add_metric": true, + "tf.keras.metrics.FalsePositives.add_weight": true, + "tf.keras.metrics.FalsePositives.build": true, + "tf.keras.metrics.FalsePositives.call": true, + "tf.keras.metrics.FalsePositives.compute_mask": true, + "tf.keras.metrics.FalsePositives.compute_output_shape": true, + "tf.keras.metrics.FalsePositives.compute_output_signature": true, + "tf.keras.metrics.FalsePositives.count_params": true, + "tf.keras.metrics.FalsePositives.dtype": true, + "tf.keras.metrics.FalsePositives.dynamic": true, + "tf.keras.metrics.FalsePositives.from_config": true, + "tf.keras.metrics.FalsePositives.get_config": true, + "tf.keras.metrics.FalsePositives.get_weights": true, + 
"tf.keras.metrics.FalsePositives.input": true, + "tf.keras.metrics.FalsePositives.input_spec": true, + "tf.keras.metrics.FalsePositives.losses": true, + "tf.keras.metrics.FalsePositives.metrics": true, + "tf.keras.metrics.FalsePositives.name": true, + "tf.keras.metrics.FalsePositives.name_scope": true, + "tf.keras.metrics.FalsePositives.non_trainable_weights": true, + "tf.keras.metrics.FalsePositives.output": true, + "tf.keras.metrics.FalsePositives.reset_states": true, + "tf.keras.metrics.FalsePositives.result": true, + "tf.keras.metrics.FalsePositives.set_weights": true, + "tf.keras.metrics.FalsePositives.submodules": true, + "tf.keras.metrics.FalsePositives.trainable": true, + "tf.keras.metrics.FalsePositives.trainable_weights": true, + "tf.keras.metrics.FalsePositives.update_state": true, + "tf.keras.metrics.FalsePositives.weights": true, + "tf.keras.metrics.FalsePositives.with_name_scope": true, + "tf.keras.metrics.Hinge": false, + "tf.keras.metrics.Hinge.__call__": true, + "tf.keras.metrics.Hinge.__eq__": true, + "tf.keras.metrics.Hinge.__ge__": true, + "tf.keras.metrics.Hinge.__gt__": true, + "tf.keras.metrics.Hinge.__init__": true, + "tf.keras.metrics.Hinge.__le__": true, + "tf.keras.metrics.Hinge.__lt__": true, + "tf.keras.metrics.Hinge.__ne__": true, + "tf.keras.metrics.Hinge.__new__": true, + "tf.keras.metrics.Hinge.activity_regularizer": true, + "tf.keras.metrics.Hinge.add_loss": true, + "tf.keras.metrics.Hinge.add_metric": true, + "tf.keras.metrics.Hinge.add_weight": true, + "tf.keras.metrics.Hinge.build": true, + "tf.keras.metrics.Hinge.call": true, + "tf.keras.metrics.Hinge.compute_mask": true, + "tf.keras.metrics.Hinge.compute_output_shape": true, + "tf.keras.metrics.Hinge.compute_output_signature": true, + "tf.keras.metrics.Hinge.count_params": true, + "tf.keras.metrics.Hinge.dtype": true, + "tf.keras.metrics.Hinge.dynamic": true, + "tf.keras.metrics.Hinge.from_config": true, + "tf.keras.metrics.Hinge.get_config": true, + "tf.keras.metrics.Hinge.get_weights": true, + "tf.keras.metrics.Hinge.input": true, + "tf.keras.metrics.Hinge.input_spec": true, + "tf.keras.metrics.Hinge.losses": true, + "tf.keras.metrics.Hinge.metrics": true, + "tf.keras.metrics.Hinge.name": true, + "tf.keras.metrics.Hinge.name_scope": true, + "tf.keras.metrics.Hinge.non_trainable_weights": true, + "tf.keras.metrics.Hinge.output": true, + "tf.keras.metrics.Hinge.reset_states": true, + "tf.keras.metrics.Hinge.result": true, + "tf.keras.metrics.Hinge.set_weights": true, + "tf.keras.metrics.Hinge.submodules": true, + "tf.keras.metrics.Hinge.trainable": true, + "tf.keras.metrics.Hinge.trainable_weights": true, + "tf.keras.metrics.Hinge.update_state": true, + "tf.keras.metrics.Hinge.weights": true, + "tf.keras.metrics.Hinge.with_name_scope": true, + "tf.keras.metrics.KLD": false, + "tf.keras.metrics.KLDivergence": false, + "tf.keras.metrics.KLDivergence.__call__": true, + "tf.keras.metrics.KLDivergence.__eq__": true, + "tf.keras.metrics.KLDivergence.__ge__": true, + "tf.keras.metrics.KLDivergence.__gt__": true, + "tf.keras.metrics.KLDivergence.__init__": true, + "tf.keras.metrics.KLDivergence.__le__": true, + "tf.keras.metrics.KLDivergence.__lt__": true, + "tf.keras.metrics.KLDivergence.__ne__": true, + "tf.keras.metrics.KLDivergence.__new__": true, + "tf.keras.metrics.KLDivergence.activity_regularizer": true, + "tf.keras.metrics.KLDivergence.add_loss": true, + "tf.keras.metrics.KLDivergence.add_metric": true, + "tf.keras.metrics.KLDivergence.add_weight": true, + "tf.keras.metrics.KLDivergence.build": true, 
+ "tf.keras.metrics.KLDivergence.call": true, + "tf.keras.metrics.KLDivergence.compute_mask": true, + "tf.keras.metrics.KLDivergence.compute_output_shape": true, + "tf.keras.metrics.KLDivergence.compute_output_signature": true, + "tf.keras.metrics.KLDivergence.count_params": true, + "tf.keras.metrics.KLDivergence.dtype": true, + "tf.keras.metrics.KLDivergence.dynamic": true, + "tf.keras.metrics.KLDivergence.from_config": true, + "tf.keras.metrics.KLDivergence.get_config": true, + "tf.keras.metrics.KLDivergence.get_weights": true, + "tf.keras.metrics.KLDivergence.input": true, + "tf.keras.metrics.KLDivergence.input_spec": true, + "tf.keras.metrics.KLDivergence.losses": true, + "tf.keras.metrics.KLDivergence.metrics": true, + "tf.keras.metrics.KLDivergence.name": true, + "tf.keras.metrics.KLDivergence.name_scope": true, + "tf.keras.metrics.KLDivergence.non_trainable_weights": true, + "tf.keras.metrics.KLDivergence.output": true, + "tf.keras.metrics.KLDivergence.reset_states": true, + "tf.keras.metrics.KLDivergence.result": true, + "tf.keras.metrics.KLDivergence.set_weights": true, + "tf.keras.metrics.KLDivergence.submodules": true, + "tf.keras.metrics.KLDivergence.trainable": true, + "tf.keras.metrics.KLDivergence.trainable_weights": true, + "tf.keras.metrics.KLDivergence.update_state": true, + "tf.keras.metrics.KLDivergence.weights": true, + "tf.keras.metrics.KLDivergence.with_name_scope": true, + "tf.keras.metrics.LogCoshError": false, + "tf.keras.metrics.LogCoshError.__call__": true, + "tf.keras.metrics.LogCoshError.__eq__": true, + "tf.keras.metrics.LogCoshError.__ge__": true, + "tf.keras.metrics.LogCoshError.__gt__": true, + "tf.keras.metrics.LogCoshError.__init__": true, + "tf.keras.metrics.LogCoshError.__le__": true, + "tf.keras.metrics.LogCoshError.__lt__": true, + "tf.keras.metrics.LogCoshError.__ne__": true, + "tf.keras.metrics.LogCoshError.__new__": true, + "tf.keras.metrics.LogCoshError.activity_regularizer": true, + "tf.keras.metrics.LogCoshError.add_loss": true, + "tf.keras.metrics.LogCoshError.add_metric": true, + "tf.keras.metrics.LogCoshError.add_weight": true, + "tf.keras.metrics.LogCoshError.build": true, + "tf.keras.metrics.LogCoshError.call": true, + "tf.keras.metrics.LogCoshError.compute_mask": true, + "tf.keras.metrics.LogCoshError.compute_output_shape": true, + "tf.keras.metrics.LogCoshError.compute_output_signature": true, + "tf.keras.metrics.LogCoshError.count_params": true, + "tf.keras.metrics.LogCoshError.dtype": true, + "tf.keras.metrics.LogCoshError.dynamic": true, + "tf.keras.metrics.LogCoshError.from_config": true, + "tf.keras.metrics.LogCoshError.get_config": true, + "tf.keras.metrics.LogCoshError.get_weights": true, + "tf.keras.metrics.LogCoshError.input": true, + "tf.keras.metrics.LogCoshError.input_spec": true, + "tf.keras.metrics.LogCoshError.losses": true, + "tf.keras.metrics.LogCoshError.metrics": true, + "tf.keras.metrics.LogCoshError.name": true, + "tf.keras.metrics.LogCoshError.name_scope": true, + "tf.keras.metrics.LogCoshError.non_trainable_weights": true, + "tf.keras.metrics.LogCoshError.output": true, + "tf.keras.metrics.LogCoshError.reset_states": true, + "tf.keras.metrics.LogCoshError.result": true, + "tf.keras.metrics.LogCoshError.set_weights": true, + "tf.keras.metrics.LogCoshError.submodules": true, + "tf.keras.metrics.LogCoshError.trainable": true, + "tf.keras.metrics.LogCoshError.trainable_weights": true, + "tf.keras.metrics.LogCoshError.update_state": true, + "tf.keras.metrics.LogCoshError.weights": true, + 
"tf.keras.metrics.LogCoshError.with_name_scope": true, + "tf.keras.metrics.MAE": false, + "tf.keras.metrics.MAPE": false, + "tf.keras.metrics.MSE": false, + "tf.keras.metrics.MSLE": false, + "tf.keras.metrics.Mean": false, + "tf.keras.metrics.Mean.__call__": true, + "tf.keras.metrics.Mean.__eq__": true, + "tf.keras.metrics.Mean.__ge__": true, + "tf.keras.metrics.Mean.__gt__": true, + "tf.keras.metrics.Mean.__init__": true, + "tf.keras.metrics.Mean.__le__": true, + "tf.keras.metrics.Mean.__lt__": true, + "tf.keras.metrics.Mean.__ne__": true, + "tf.keras.metrics.Mean.__new__": true, + "tf.keras.metrics.Mean.activity_regularizer": true, + "tf.keras.metrics.Mean.add_loss": true, + "tf.keras.metrics.Mean.add_metric": true, + "tf.keras.metrics.Mean.add_weight": true, + "tf.keras.metrics.Mean.build": true, + "tf.keras.metrics.Mean.call": true, + "tf.keras.metrics.Mean.compute_mask": true, + "tf.keras.metrics.Mean.compute_output_shape": true, + "tf.keras.metrics.Mean.compute_output_signature": true, + "tf.keras.metrics.Mean.count_params": true, + "tf.keras.metrics.Mean.dtype": true, + "tf.keras.metrics.Mean.dynamic": true, + "tf.keras.metrics.Mean.from_config": true, + "tf.keras.metrics.Mean.get_config": true, + "tf.keras.metrics.Mean.get_weights": true, + "tf.keras.metrics.Mean.input": true, + "tf.keras.metrics.Mean.input_spec": true, + "tf.keras.metrics.Mean.losses": true, + "tf.keras.metrics.Mean.metrics": true, + "tf.keras.metrics.Mean.name": true, + "tf.keras.metrics.Mean.name_scope": true, + "tf.keras.metrics.Mean.non_trainable_weights": true, + "tf.keras.metrics.Mean.output": true, + "tf.keras.metrics.Mean.reset_states": true, + "tf.keras.metrics.Mean.result": true, + "tf.keras.metrics.Mean.set_weights": true, + "tf.keras.metrics.Mean.submodules": true, + "tf.keras.metrics.Mean.trainable": true, + "tf.keras.metrics.Mean.trainable_weights": true, + "tf.keras.metrics.Mean.update_state": true, + "tf.keras.metrics.Mean.weights": true, + "tf.keras.metrics.Mean.with_name_scope": true, + "tf.keras.metrics.MeanAbsoluteError": false, + "tf.keras.metrics.MeanAbsoluteError.__call__": true, + "tf.keras.metrics.MeanAbsoluteError.__eq__": true, + "tf.keras.metrics.MeanAbsoluteError.__ge__": true, + "tf.keras.metrics.MeanAbsoluteError.__gt__": true, + "tf.keras.metrics.MeanAbsoluteError.__init__": true, + "tf.keras.metrics.MeanAbsoluteError.__le__": true, + "tf.keras.metrics.MeanAbsoluteError.__lt__": true, + "tf.keras.metrics.MeanAbsoluteError.__ne__": true, + "tf.keras.metrics.MeanAbsoluteError.__new__": true, + "tf.keras.metrics.MeanAbsoluteError.activity_regularizer": true, + "tf.keras.metrics.MeanAbsoluteError.add_loss": true, + "tf.keras.metrics.MeanAbsoluteError.add_metric": true, + "tf.keras.metrics.MeanAbsoluteError.add_weight": true, + "tf.keras.metrics.MeanAbsoluteError.build": true, + "tf.keras.metrics.MeanAbsoluteError.call": true, + "tf.keras.metrics.MeanAbsoluteError.compute_mask": true, + "tf.keras.metrics.MeanAbsoluteError.compute_output_shape": true, + "tf.keras.metrics.MeanAbsoluteError.compute_output_signature": true, + "tf.keras.metrics.MeanAbsoluteError.count_params": true, + "tf.keras.metrics.MeanAbsoluteError.dtype": true, + "tf.keras.metrics.MeanAbsoluteError.dynamic": true, + "tf.keras.metrics.MeanAbsoluteError.from_config": true, + "tf.keras.metrics.MeanAbsoluteError.get_config": true, + "tf.keras.metrics.MeanAbsoluteError.get_weights": true, + "tf.keras.metrics.MeanAbsoluteError.input": true, + "tf.keras.metrics.MeanAbsoluteError.input_spec": true, + 
"tf.keras.metrics.MeanAbsoluteError.losses": true, + "tf.keras.metrics.MeanAbsoluteError.metrics": true, + "tf.keras.metrics.MeanAbsoluteError.name": true, + "tf.keras.metrics.MeanAbsoluteError.name_scope": true, + "tf.keras.metrics.MeanAbsoluteError.non_trainable_weights": true, + "tf.keras.metrics.MeanAbsoluteError.output": true, + "tf.keras.metrics.MeanAbsoluteError.reset_states": true, + "tf.keras.metrics.MeanAbsoluteError.result": true, + "tf.keras.metrics.MeanAbsoluteError.set_weights": true, + "tf.keras.metrics.MeanAbsoluteError.submodules": true, + "tf.keras.metrics.MeanAbsoluteError.trainable": true, + "tf.keras.metrics.MeanAbsoluteError.trainable_weights": true, + "tf.keras.metrics.MeanAbsoluteError.update_state": true, + "tf.keras.metrics.MeanAbsoluteError.weights": true, + "tf.keras.metrics.MeanAbsoluteError.with_name_scope": true, + "tf.keras.metrics.MeanAbsolutePercentageError": false, + "tf.keras.metrics.MeanAbsolutePercentageError.__call__": true, + "tf.keras.metrics.MeanAbsolutePercentageError.__eq__": true, + "tf.keras.metrics.MeanAbsolutePercentageError.__ge__": true, + "tf.keras.metrics.MeanAbsolutePercentageError.__gt__": true, + "tf.keras.metrics.MeanAbsolutePercentageError.__init__": true, + "tf.keras.metrics.MeanAbsolutePercentageError.__le__": true, + "tf.keras.metrics.MeanAbsolutePercentageError.__lt__": true, + "tf.keras.metrics.MeanAbsolutePercentageError.__ne__": true, + "tf.keras.metrics.MeanAbsolutePercentageError.__new__": true, + "tf.keras.metrics.MeanAbsolutePercentageError.activity_regularizer": true, + "tf.keras.metrics.MeanAbsolutePercentageError.add_loss": true, + "tf.keras.metrics.MeanAbsolutePercentageError.add_metric": true, + "tf.keras.metrics.MeanAbsolutePercentageError.add_weight": true, + "tf.keras.metrics.MeanAbsolutePercentageError.build": true, + "tf.keras.metrics.MeanAbsolutePercentageError.call": true, + "tf.keras.metrics.MeanAbsolutePercentageError.compute_mask": true, + "tf.keras.metrics.MeanAbsolutePercentageError.compute_output_shape": true, + "tf.keras.metrics.MeanAbsolutePercentageError.compute_output_signature": true, + "tf.keras.metrics.MeanAbsolutePercentageError.count_params": true, + "tf.keras.metrics.MeanAbsolutePercentageError.dtype": true, + "tf.keras.metrics.MeanAbsolutePercentageError.dynamic": true, + "tf.keras.metrics.MeanAbsolutePercentageError.from_config": true, + "tf.keras.metrics.MeanAbsolutePercentageError.get_config": true, + "tf.keras.metrics.MeanAbsolutePercentageError.get_weights": true, + "tf.keras.metrics.MeanAbsolutePercentageError.input": true, + "tf.keras.metrics.MeanAbsolutePercentageError.input_spec": true, + "tf.keras.metrics.MeanAbsolutePercentageError.losses": true, + "tf.keras.metrics.MeanAbsolutePercentageError.metrics": true, + "tf.keras.metrics.MeanAbsolutePercentageError.name": true, + "tf.keras.metrics.MeanAbsolutePercentageError.name_scope": true, + "tf.keras.metrics.MeanAbsolutePercentageError.non_trainable_weights": true, + "tf.keras.metrics.MeanAbsolutePercentageError.output": true, + "tf.keras.metrics.MeanAbsolutePercentageError.reset_states": true, + "tf.keras.metrics.MeanAbsolutePercentageError.result": true, + "tf.keras.metrics.MeanAbsolutePercentageError.set_weights": true, + "tf.keras.metrics.MeanAbsolutePercentageError.submodules": true, + "tf.keras.metrics.MeanAbsolutePercentageError.trainable": true, + "tf.keras.metrics.MeanAbsolutePercentageError.trainable_weights": true, + "tf.keras.metrics.MeanAbsolutePercentageError.update_state": true, + 
"tf.keras.metrics.MeanAbsolutePercentageError.weights": true, + "tf.keras.metrics.MeanAbsolutePercentageError.with_name_scope": true, + "tf.keras.metrics.MeanIoU": false, + "tf.keras.metrics.MeanIoU.__call__": true, + "tf.keras.metrics.MeanIoU.__eq__": true, + "tf.keras.metrics.MeanIoU.__ge__": true, + "tf.keras.metrics.MeanIoU.__gt__": true, + "tf.keras.metrics.MeanIoU.__init__": true, + "tf.keras.metrics.MeanIoU.__le__": true, + "tf.keras.metrics.MeanIoU.__lt__": true, + "tf.keras.metrics.MeanIoU.__ne__": true, + "tf.keras.metrics.MeanIoU.__new__": true, + "tf.keras.metrics.MeanIoU.activity_regularizer": true, + "tf.keras.metrics.MeanIoU.add_loss": true, + "tf.keras.metrics.MeanIoU.add_metric": true, + "tf.keras.metrics.MeanIoU.add_weight": true, + "tf.keras.metrics.MeanIoU.build": true, + "tf.keras.metrics.MeanIoU.call": true, + "tf.keras.metrics.MeanIoU.compute_mask": true, + "tf.keras.metrics.MeanIoU.compute_output_shape": true, + "tf.keras.metrics.MeanIoU.compute_output_signature": true, + "tf.keras.metrics.MeanIoU.count_params": true, + "tf.keras.metrics.MeanIoU.dtype": true, + "tf.keras.metrics.MeanIoU.dynamic": true, + "tf.keras.metrics.MeanIoU.from_config": true, + "tf.keras.metrics.MeanIoU.get_config": true, + "tf.keras.metrics.MeanIoU.get_weights": true, + "tf.keras.metrics.MeanIoU.input": true, + "tf.keras.metrics.MeanIoU.input_spec": true, + "tf.keras.metrics.MeanIoU.losses": true, + "tf.keras.metrics.MeanIoU.metrics": true, + "tf.keras.metrics.MeanIoU.name": true, + "tf.keras.metrics.MeanIoU.name_scope": true, + "tf.keras.metrics.MeanIoU.non_trainable_weights": true, + "tf.keras.metrics.MeanIoU.output": true, + "tf.keras.metrics.MeanIoU.reset_states": true, + "tf.keras.metrics.MeanIoU.result": true, + "tf.keras.metrics.MeanIoU.set_weights": true, + "tf.keras.metrics.MeanIoU.submodules": true, + "tf.keras.metrics.MeanIoU.trainable": true, + "tf.keras.metrics.MeanIoU.trainable_weights": true, + "tf.keras.metrics.MeanIoU.update_state": true, + "tf.keras.metrics.MeanIoU.weights": true, + "tf.keras.metrics.MeanIoU.with_name_scope": true, + "tf.keras.metrics.MeanRelativeError": false, + "tf.keras.metrics.MeanRelativeError.__call__": true, + "tf.keras.metrics.MeanRelativeError.__eq__": true, + "tf.keras.metrics.MeanRelativeError.__ge__": true, + "tf.keras.metrics.MeanRelativeError.__gt__": true, + "tf.keras.metrics.MeanRelativeError.__init__": true, + "tf.keras.metrics.MeanRelativeError.__le__": true, + "tf.keras.metrics.MeanRelativeError.__lt__": true, + "tf.keras.metrics.MeanRelativeError.__ne__": true, + "tf.keras.metrics.MeanRelativeError.__new__": true, + "tf.keras.metrics.MeanRelativeError.activity_regularizer": true, + "tf.keras.metrics.MeanRelativeError.add_loss": true, + "tf.keras.metrics.MeanRelativeError.add_metric": true, + "tf.keras.metrics.MeanRelativeError.add_weight": true, + "tf.keras.metrics.MeanRelativeError.build": true, + "tf.keras.metrics.MeanRelativeError.call": true, + "tf.keras.metrics.MeanRelativeError.compute_mask": true, + "tf.keras.metrics.MeanRelativeError.compute_output_shape": true, + "tf.keras.metrics.MeanRelativeError.compute_output_signature": true, + "tf.keras.metrics.MeanRelativeError.count_params": true, + "tf.keras.metrics.MeanRelativeError.dtype": true, + "tf.keras.metrics.MeanRelativeError.dynamic": true, + "tf.keras.metrics.MeanRelativeError.from_config": true, + "tf.keras.metrics.MeanRelativeError.get_config": true, + "tf.keras.metrics.MeanRelativeError.get_weights": true, + "tf.keras.metrics.MeanRelativeError.input": true, + 
"tf.keras.metrics.MeanRelativeError.input_spec": true, + "tf.keras.metrics.MeanRelativeError.losses": true, + "tf.keras.metrics.MeanRelativeError.metrics": true, + "tf.keras.metrics.MeanRelativeError.name": true, + "tf.keras.metrics.MeanRelativeError.name_scope": true, + "tf.keras.metrics.MeanRelativeError.non_trainable_weights": true, + "tf.keras.metrics.MeanRelativeError.output": true, + "tf.keras.metrics.MeanRelativeError.reset_states": true, + "tf.keras.metrics.MeanRelativeError.result": true, + "tf.keras.metrics.MeanRelativeError.set_weights": true, + "tf.keras.metrics.MeanRelativeError.submodules": true, + "tf.keras.metrics.MeanRelativeError.trainable": true, + "tf.keras.metrics.MeanRelativeError.trainable_weights": true, + "tf.keras.metrics.MeanRelativeError.update_state": true, + "tf.keras.metrics.MeanRelativeError.weights": true, + "tf.keras.metrics.MeanRelativeError.with_name_scope": true, + "tf.keras.metrics.MeanSquaredError": false, + "tf.keras.metrics.MeanSquaredError.__call__": true, + "tf.keras.metrics.MeanSquaredError.__eq__": true, + "tf.keras.metrics.MeanSquaredError.__ge__": true, + "tf.keras.metrics.MeanSquaredError.__gt__": true, + "tf.keras.metrics.MeanSquaredError.__init__": true, + "tf.keras.metrics.MeanSquaredError.__le__": true, + "tf.keras.metrics.MeanSquaredError.__lt__": true, + "tf.keras.metrics.MeanSquaredError.__ne__": true, + "tf.keras.metrics.MeanSquaredError.__new__": true, + "tf.keras.metrics.MeanSquaredError.activity_regularizer": true, + "tf.keras.metrics.MeanSquaredError.add_loss": true, + "tf.keras.metrics.MeanSquaredError.add_metric": true, + "tf.keras.metrics.MeanSquaredError.add_weight": true, + "tf.keras.metrics.MeanSquaredError.build": true, + "tf.keras.metrics.MeanSquaredError.call": true, + "tf.keras.metrics.MeanSquaredError.compute_mask": true, + "tf.keras.metrics.MeanSquaredError.compute_output_shape": true, + "tf.keras.metrics.MeanSquaredError.compute_output_signature": true, + "tf.keras.metrics.MeanSquaredError.count_params": true, + "tf.keras.metrics.MeanSquaredError.dtype": true, + "tf.keras.metrics.MeanSquaredError.dynamic": true, + "tf.keras.metrics.MeanSquaredError.from_config": true, + "tf.keras.metrics.MeanSquaredError.get_config": true, + "tf.keras.metrics.MeanSquaredError.get_weights": true, + "tf.keras.metrics.MeanSquaredError.input": true, + "tf.keras.metrics.MeanSquaredError.input_spec": true, + "tf.keras.metrics.MeanSquaredError.losses": true, + "tf.keras.metrics.MeanSquaredError.metrics": true, + "tf.keras.metrics.MeanSquaredError.name": true, + "tf.keras.metrics.MeanSquaredError.name_scope": true, + "tf.keras.metrics.MeanSquaredError.non_trainable_weights": true, + "tf.keras.metrics.MeanSquaredError.output": true, + "tf.keras.metrics.MeanSquaredError.reset_states": true, + "tf.keras.metrics.MeanSquaredError.result": true, + "tf.keras.metrics.MeanSquaredError.set_weights": true, + "tf.keras.metrics.MeanSquaredError.submodules": true, + "tf.keras.metrics.MeanSquaredError.trainable": true, + "tf.keras.metrics.MeanSquaredError.trainable_weights": true, + "tf.keras.metrics.MeanSquaredError.update_state": true, + "tf.keras.metrics.MeanSquaredError.weights": true, + "tf.keras.metrics.MeanSquaredError.with_name_scope": true, + "tf.keras.metrics.MeanSquaredLogarithmicError": false, + "tf.keras.metrics.MeanSquaredLogarithmicError.__call__": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.__eq__": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.__ge__": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.__gt__": true, + 
"tf.keras.metrics.MeanSquaredLogarithmicError.__init__": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.__le__": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.__lt__": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.__ne__": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.__new__": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.activity_regularizer": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.add_loss": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.add_metric": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.add_weight": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.build": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.call": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.compute_mask": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.compute_output_shape": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.compute_output_signature": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.count_params": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.dtype": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.dynamic": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.from_config": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.get_config": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.get_weights": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.input": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.input_spec": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.losses": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.metrics": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.name": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.name_scope": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.non_trainable_weights": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.output": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.reset_states": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.result": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.set_weights": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.submodules": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.trainable": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.trainable_weights": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.update_state": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.weights": true, + "tf.keras.metrics.MeanSquaredLogarithmicError.with_name_scope": true, + "tf.keras.metrics.MeanTensor": false, + "tf.keras.metrics.MeanTensor.__call__": true, + "tf.keras.metrics.MeanTensor.__eq__": true, + "tf.keras.metrics.MeanTensor.__ge__": true, + "tf.keras.metrics.MeanTensor.__gt__": true, + "tf.keras.metrics.MeanTensor.__init__": true, + "tf.keras.metrics.MeanTensor.__le__": true, + "tf.keras.metrics.MeanTensor.__lt__": true, + "tf.keras.metrics.MeanTensor.__ne__": true, + "tf.keras.metrics.MeanTensor.__new__": true, + "tf.keras.metrics.MeanTensor.activity_regularizer": true, + "tf.keras.metrics.MeanTensor.add_loss": true, + "tf.keras.metrics.MeanTensor.add_metric": true, + "tf.keras.metrics.MeanTensor.add_weight": true, + "tf.keras.metrics.MeanTensor.build": true, + "tf.keras.metrics.MeanTensor.call": true, + "tf.keras.metrics.MeanTensor.compute_mask": true, + "tf.keras.metrics.MeanTensor.compute_output_shape": true, + "tf.keras.metrics.MeanTensor.compute_output_signature": true, + "tf.keras.metrics.MeanTensor.count": true, + "tf.keras.metrics.MeanTensor.count_params": true, + "tf.keras.metrics.MeanTensor.dtype": true, + 
"tf.keras.metrics.MeanTensor.dynamic": true, + "tf.keras.metrics.MeanTensor.from_config": true, + "tf.keras.metrics.MeanTensor.get_config": true, + "tf.keras.metrics.MeanTensor.get_weights": true, + "tf.keras.metrics.MeanTensor.input": true, + "tf.keras.metrics.MeanTensor.input_spec": true, + "tf.keras.metrics.MeanTensor.losses": true, + "tf.keras.metrics.MeanTensor.metrics": true, + "tf.keras.metrics.MeanTensor.name": true, + "tf.keras.metrics.MeanTensor.name_scope": true, + "tf.keras.metrics.MeanTensor.non_trainable_weights": true, + "tf.keras.metrics.MeanTensor.output": true, + "tf.keras.metrics.MeanTensor.reset_states": true, + "tf.keras.metrics.MeanTensor.result": true, + "tf.keras.metrics.MeanTensor.set_weights": true, + "tf.keras.metrics.MeanTensor.submodules": true, + "tf.keras.metrics.MeanTensor.total": true, + "tf.keras.metrics.MeanTensor.trainable": true, + "tf.keras.metrics.MeanTensor.trainable_weights": true, + "tf.keras.metrics.MeanTensor.update_state": true, + "tf.keras.metrics.MeanTensor.weights": true, + "tf.keras.metrics.MeanTensor.with_name_scope": true, + "tf.keras.metrics.Metric": false, + "tf.keras.metrics.Metric.__call__": true, + "tf.keras.metrics.Metric.__eq__": true, + "tf.keras.metrics.Metric.__ge__": true, + "tf.keras.metrics.Metric.__gt__": true, + "tf.keras.metrics.Metric.__init__": true, + "tf.keras.metrics.Metric.__le__": true, + "tf.keras.metrics.Metric.__lt__": true, + "tf.keras.metrics.Metric.__ne__": true, + "tf.keras.metrics.Metric.__new__": true, + "tf.keras.metrics.Metric.activity_regularizer": true, + "tf.keras.metrics.Metric.add_loss": true, + "tf.keras.metrics.Metric.add_metric": true, + "tf.keras.metrics.Metric.add_weight": true, + "tf.keras.metrics.Metric.build": true, + "tf.keras.metrics.Metric.call": true, + "tf.keras.metrics.Metric.compute_mask": true, + "tf.keras.metrics.Metric.compute_output_shape": true, + "tf.keras.metrics.Metric.compute_output_signature": true, + "tf.keras.metrics.Metric.count_params": true, + "tf.keras.metrics.Metric.dtype": true, + "tf.keras.metrics.Metric.dynamic": true, + "tf.keras.metrics.Metric.from_config": true, + "tf.keras.metrics.Metric.get_config": true, + "tf.keras.metrics.Metric.get_weights": true, + "tf.keras.metrics.Metric.input": true, + "tf.keras.metrics.Metric.input_spec": true, + "tf.keras.metrics.Metric.losses": true, + "tf.keras.metrics.Metric.metrics": true, + "tf.keras.metrics.Metric.name": true, + "tf.keras.metrics.Metric.name_scope": true, + "tf.keras.metrics.Metric.non_trainable_weights": true, + "tf.keras.metrics.Metric.output": true, + "tf.keras.metrics.Metric.reset_states": true, + "tf.keras.metrics.Metric.result": true, + "tf.keras.metrics.Metric.set_weights": true, + "tf.keras.metrics.Metric.submodules": true, + "tf.keras.metrics.Metric.trainable": true, + "tf.keras.metrics.Metric.trainable_weights": true, + "tf.keras.metrics.Metric.update_state": true, + "tf.keras.metrics.Metric.weights": true, + "tf.keras.metrics.Metric.with_name_scope": true, + "tf.keras.metrics.Poisson": false, + "tf.keras.metrics.Poisson.__call__": true, + "tf.keras.metrics.Poisson.__eq__": true, + "tf.keras.metrics.Poisson.__ge__": true, + "tf.keras.metrics.Poisson.__gt__": true, + "tf.keras.metrics.Poisson.__init__": true, + "tf.keras.metrics.Poisson.__le__": true, + "tf.keras.metrics.Poisson.__lt__": true, + "tf.keras.metrics.Poisson.__ne__": true, + "tf.keras.metrics.Poisson.__new__": true, + "tf.keras.metrics.Poisson.activity_regularizer": true, + "tf.keras.metrics.Poisson.add_loss": true, + 
"tf.keras.metrics.Poisson.add_metric": true, + "tf.keras.metrics.Poisson.add_weight": true, + "tf.keras.metrics.Poisson.build": true, + "tf.keras.metrics.Poisson.call": true, + "tf.keras.metrics.Poisson.compute_mask": true, + "tf.keras.metrics.Poisson.compute_output_shape": true, + "tf.keras.metrics.Poisson.compute_output_signature": true, + "tf.keras.metrics.Poisson.count_params": true, + "tf.keras.metrics.Poisson.dtype": true, + "tf.keras.metrics.Poisson.dynamic": true, + "tf.keras.metrics.Poisson.from_config": true, + "tf.keras.metrics.Poisson.get_config": true, + "tf.keras.metrics.Poisson.get_weights": true, + "tf.keras.metrics.Poisson.input": true, + "tf.keras.metrics.Poisson.input_spec": true, + "tf.keras.metrics.Poisson.losses": true, + "tf.keras.metrics.Poisson.metrics": true, + "tf.keras.metrics.Poisson.name": true, + "tf.keras.metrics.Poisson.name_scope": true, + "tf.keras.metrics.Poisson.non_trainable_weights": true, + "tf.keras.metrics.Poisson.output": true, + "tf.keras.metrics.Poisson.reset_states": true, + "tf.keras.metrics.Poisson.result": true, + "tf.keras.metrics.Poisson.set_weights": true, + "tf.keras.metrics.Poisson.submodules": true, + "tf.keras.metrics.Poisson.trainable": true, + "tf.keras.metrics.Poisson.trainable_weights": true, + "tf.keras.metrics.Poisson.update_state": true, + "tf.keras.metrics.Poisson.weights": true, + "tf.keras.metrics.Poisson.with_name_scope": true, + "tf.keras.metrics.Precision": false, + "tf.keras.metrics.Precision.__call__": true, + "tf.keras.metrics.Precision.__eq__": true, + "tf.keras.metrics.Precision.__ge__": true, + "tf.keras.metrics.Precision.__gt__": true, + "tf.keras.metrics.Precision.__init__": true, + "tf.keras.metrics.Precision.__le__": true, + "tf.keras.metrics.Precision.__lt__": true, + "tf.keras.metrics.Precision.__ne__": true, + "tf.keras.metrics.Precision.__new__": true, + "tf.keras.metrics.Precision.activity_regularizer": true, + "tf.keras.metrics.Precision.add_loss": true, + "tf.keras.metrics.Precision.add_metric": true, + "tf.keras.metrics.Precision.add_weight": true, + "tf.keras.metrics.Precision.build": true, + "tf.keras.metrics.Precision.call": true, + "tf.keras.metrics.Precision.compute_mask": true, + "tf.keras.metrics.Precision.compute_output_shape": true, + "tf.keras.metrics.Precision.compute_output_signature": true, + "tf.keras.metrics.Precision.count_params": true, + "tf.keras.metrics.Precision.dtype": true, + "tf.keras.metrics.Precision.dynamic": true, + "tf.keras.metrics.Precision.from_config": true, + "tf.keras.metrics.Precision.get_config": true, + "tf.keras.metrics.Precision.get_weights": true, + "tf.keras.metrics.Precision.input": true, + "tf.keras.metrics.Precision.input_spec": true, + "tf.keras.metrics.Precision.losses": true, + "tf.keras.metrics.Precision.metrics": true, + "tf.keras.metrics.Precision.name": true, + "tf.keras.metrics.Precision.name_scope": true, + "tf.keras.metrics.Precision.non_trainable_weights": true, + "tf.keras.metrics.Precision.output": true, + "tf.keras.metrics.Precision.reset_states": true, + "tf.keras.metrics.Precision.result": true, + "tf.keras.metrics.Precision.set_weights": true, + "tf.keras.metrics.Precision.submodules": true, + "tf.keras.metrics.Precision.trainable": true, + "tf.keras.metrics.Precision.trainable_weights": true, + "tf.keras.metrics.Precision.update_state": true, + "tf.keras.metrics.Precision.weights": true, + "tf.keras.metrics.Precision.with_name_scope": true, + "tf.keras.metrics.PrecisionAtRecall": false, + "tf.keras.metrics.PrecisionAtRecall.__call__": true, + 
"tf.keras.metrics.PrecisionAtRecall.__eq__": true, + "tf.keras.metrics.PrecisionAtRecall.__ge__": true, + "tf.keras.metrics.PrecisionAtRecall.__gt__": true, + "tf.keras.metrics.PrecisionAtRecall.__init__": true, + "tf.keras.metrics.PrecisionAtRecall.__le__": true, + "tf.keras.metrics.PrecisionAtRecall.__lt__": true, + "tf.keras.metrics.PrecisionAtRecall.__ne__": true, + "tf.keras.metrics.PrecisionAtRecall.__new__": true, + "tf.keras.metrics.PrecisionAtRecall.activity_regularizer": true, + "tf.keras.metrics.PrecisionAtRecall.add_loss": true, + "tf.keras.metrics.PrecisionAtRecall.add_metric": true, + "tf.keras.metrics.PrecisionAtRecall.add_weight": true, + "tf.keras.metrics.PrecisionAtRecall.build": true, + "tf.keras.metrics.PrecisionAtRecall.call": true, + "tf.keras.metrics.PrecisionAtRecall.compute_mask": true, + "tf.keras.metrics.PrecisionAtRecall.compute_output_shape": true, + "tf.keras.metrics.PrecisionAtRecall.compute_output_signature": true, + "tf.keras.metrics.PrecisionAtRecall.count_params": true, + "tf.keras.metrics.PrecisionAtRecall.dtype": true, + "tf.keras.metrics.PrecisionAtRecall.dynamic": true, + "tf.keras.metrics.PrecisionAtRecall.from_config": true, + "tf.keras.metrics.PrecisionAtRecall.get_config": true, + "tf.keras.metrics.PrecisionAtRecall.get_weights": true, + "tf.keras.metrics.PrecisionAtRecall.input": true, + "tf.keras.metrics.PrecisionAtRecall.input_spec": true, + "tf.keras.metrics.PrecisionAtRecall.losses": true, + "tf.keras.metrics.PrecisionAtRecall.metrics": true, + "tf.keras.metrics.PrecisionAtRecall.name": true, + "tf.keras.metrics.PrecisionAtRecall.name_scope": true, + "tf.keras.metrics.PrecisionAtRecall.non_trainable_weights": true, + "tf.keras.metrics.PrecisionAtRecall.output": true, + "tf.keras.metrics.PrecisionAtRecall.reset_states": true, + "tf.keras.metrics.PrecisionAtRecall.result": true, + "tf.keras.metrics.PrecisionAtRecall.set_weights": true, + "tf.keras.metrics.PrecisionAtRecall.submodules": true, + "tf.keras.metrics.PrecisionAtRecall.trainable": true, + "tf.keras.metrics.PrecisionAtRecall.trainable_weights": true, + "tf.keras.metrics.PrecisionAtRecall.update_state": true, + "tf.keras.metrics.PrecisionAtRecall.weights": true, + "tf.keras.metrics.PrecisionAtRecall.with_name_scope": true, + "tf.keras.metrics.Recall": false, + "tf.keras.metrics.Recall.__call__": true, + "tf.keras.metrics.Recall.__eq__": true, + "tf.keras.metrics.Recall.__ge__": true, + "tf.keras.metrics.Recall.__gt__": true, + "tf.keras.metrics.Recall.__init__": true, + "tf.keras.metrics.Recall.__le__": true, + "tf.keras.metrics.Recall.__lt__": true, + "tf.keras.metrics.Recall.__ne__": true, + "tf.keras.metrics.Recall.__new__": true, + "tf.keras.metrics.Recall.activity_regularizer": true, + "tf.keras.metrics.Recall.add_loss": true, + "tf.keras.metrics.Recall.add_metric": true, + "tf.keras.metrics.Recall.add_weight": true, + "tf.keras.metrics.Recall.build": true, + "tf.keras.metrics.Recall.call": true, + "tf.keras.metrics.Recall.compute_mask": true, + "tf.keras.metrics.Recall.compute_output_shape": true, + "tf.keras.metrics.Recall.compute_output_signature": true, + "tf.keras.metrics.Recall.count_params": true, + "tf.keras.metrics.Recall.dtype": true, + "tf.keras.metrics.Recall.dynamic": true, + "tf.keras.metrics.Recall.from_config": true, + "tf.keras.metrics.Recall.get_config": true, + "tf.keras.metrics.Recall.get_weights": true, + "tf.keras.metrics.Recall.input": true, + "tf.keras.metrics.Recall.input_spec": true, + "tf.keras.metrics.Recall.losses": true, + 
"tf.keras.metrics.Recall.metrics": true, + "tf.keras.metrics.Recall.name": true, + "tf.keras.metrics.Recall.name_scope": true, + "tf.keras.metrics.Recall.non_trainable_weights": true, + "tf.keras.metrics.Recall.output": true, + "tf.keras.metrics.Recall.reset_states": true, + "tf.keras.metrics.Recall.result": true, + "tf.keras.metrics.Recall.set_weights": true, + "tf.keras.metrics.Recall.submodules": true, + "tf.keras.metrics.Recall.trainable": true, + "tf.keras.metrics.Recall.trainable_weights": true, + "tf.keras.metrics.Recall.update_state": true, + "tf.keras.metrics.Recall.weights": true, + "tf.keras.metrics.Recall.with_name_scope": true, + "tf.keras.metrics.RecallAtPrecision": false, + "tf.keras.metrics.RecallAtPrecision.__call__": true, + "tf.keras.metrics.RecallAtPrecision.__eq__": true, + "tf.keras.metrics.RecallAtPrecision.__ge__": true, + "tf.keras.metrics.RecallAtPrecision.__gt__": true, + "tf.keras.metrics.RecallAtPrecision.__init__": true, + "tf.keras.metrics.RecallAtPrecision.__le__": true, + "tf.keras.metrics.RecallAtPrecision.__lt__": true, + "tf.keras.metrics.RecallAtPrecision.__ne__": true, + "tf.keras.metrics.RecallAtPrecision.__new__": true, + "tf.keras.metrics.RecallAtPrecision.activity_regularizer": true, + "tf.keras.metrics.RecallAtPrecision.add_loss": true, + "tf.keras.metrics.RecallAtPrecision.add_metric": true, + "tf.keras.metrics.RecallAtPrecision.add_weight": true, + "tf.keras.metrics.RecallAtPrecision.build": true, + "tf.keras.metrics.RecallAtPrecision.call": true, + "tf.keras.metrics.RecallAtPrecision.compute_mask": true, + "tf.keras.metrics.RecallAtPrecision.compute_output_shape": true, + "tf.keras.metrics.RecallAtPrecision.compute_output_signature": true, + "tf.keras.metrics.RecallAtPrecision.count_params": true, + "tf.keras.metrics.RecallAtPrecision.dtype": true, + "tf.keras.metrics.RecallAtPrecision.dynamic": true, + "tf.keras.metrics.RecallAtPrecision.from_config": true, + "tf.keras.metrics.RecallAtPrecision.get_config": true, + "tf.keras.metrics.RecallAtPrecision.get_weights": true, + "tf.keras.metrics.RecallAtPrecision.input": true, + "tf.keras.metrics.RecallAtPrecision.input_spec": true, + "tf.keras.metrics.RecallAtPrecision.losses": true, + "tf.keras.metrics.RecallAtPrecision.metrics": true, + "tf.keras.metrics.RecallAtPrecision.name": true, + "tf.keras.metrics.RecallAtPrecision.name_scope": true, + "tf.keras.metrics.RecallAtPrecision.non_trainable_weights": true, + "tf.keras.metrics.RecallAtPrecision.output": true, + "tf.keras.metrics.RecallAtPrecision.reset_states": true, + "tf.keras.metrics.RecallAtPrecision.result": true, + "tf.keras.metrics.RecallAtPrecision.set_weights": true, + "tf.keras.metrics.RecallAtPrecision.submodules": true, + "tf.keras.metrics.RecallAtPrecision.trainable": true, + "tf.keras.metrics.RecallAtPrecision.trainable_weights": true, + "tf.keras.metrics.RecallAtPrecision.update_state": true, + "tf.keras.metrics.RecallAtPrecision.weights": true, + "tf.keras.metrics.RecallAtPrecision.with_name_scope": true, + "tf.keras.metrics.RootMeanSquaredError": false, + "tf.keras.metrics.RootMeanSquaredError.__call__": true, + "tf.keras.metrics.RootMeanSquaredError.__eq__": true, + "tf.keras.metrics.RootMeanSquaredError.__ge__": true, + "tf.keras.metrics.RootMeanSquaredError.__gt__": true, + "tf.keras.metrics.RootMeanSquaredError.__init__": true, + "tf.keras.metrics.RootMeanSquaredError.__le__": true, + "tf.keras.metrics.RootMeanSquaredError.__lt__": true, + "tf.keras.metrics.RootMeanSquaredError.__ne__": true, + 
"tf.keras.metrics.RootMeanSquaredError.__new__": true, + "tf.keras.metrics.RootMeanSquaredError.activity_regularizer": true, + "tf.keras.metrics.RootMeanSquaredError.add_loss": true, + "tf.keras.metrics.RootMeanSquaredError.add_metric": true, + "tf.keras.metrics.RootMeanSquaredError.add_weight": true, + "tf.keras.metrics.RootMeanSquaredError.build": true, + "tf.keras.metrics.RootMeanSquaredError.call": true, + "tf.keras.metrics.RootMeanSquaredError.compute_mask": true, + "tf.keras.metrics.RootMeanSquaredError.compute_output_shape": true, + "tf.keras.metrics.RootMeanSquaredError.compute_output_signature": true, + "tf.keras.metrics.RootMeanSquaredError.count_params": true, + "tf.keras.metrics.RootMeanSquaredError.dtype": true, + "tf.keras.metrics.RootMeanSquaredError.dynamic": true, + "tf.keras.metrics.RootMeanSquaredError.from_config": true, + "tf.keras.metrics.RootMeanSquaredError.get_config": true, + "tf.keras.metrics.RootMeanSquaredError.get_weights": true, + "tf.keras.metrics.RootMeanSquaredError.input": true, + "tf.keras.metrics.RootMeanSquaredError.input_spec": true, + "tf.keras.metrics.RootMeanSquaredError.losses": true, + "tf.keras.metrics.RootMeanSquaredError.metrics": true, + "tf.keras.metrics.RootMeanSquaredError.name": true, + "tf.keras.metrics.RootMeanSquaredError.name_scope": true, + "tf.keras.metrics.RootMeanSquaredError.non_trainable_weights": true, + "tf.keras.metrics.RootMeanSquaredError.output": true, + "tf.keras.metrics.RootMeanSquaredError.reset_states": true, + "tf.keras.metrics.RootMeanSquaredError.result": true, + "tf.keras.metrics.RootMeanSquaredError.set_weights": true, + "tf.keras.metrics.RootMeanSquaredError.submodules": true, + "tf.keras.metrics.RootMeanSquaredError.trainable": true, + "tf.keras.metrics.RootMeanSquaredError.trainable_weights": true, + "tf.keras.metrics.RootMeanSquaredError.update_state": true, + "tf.keras.metrics.RootMeanSquaredError.weights": true, + "tf.keras.metrics.RootMeanSquaredError.with_name_scope": true, + "tf.keras.metrics.SensitivityAtSpecificity": false, + "tf.keras.metrics.SensitivityAtSpecificity.__call__": true, + "tf.keras.metrics.SensitivityAtSpecificity.__eq__": true, + "tf.keras.metrics.SensitivityAtSpecificity.__ge__": true, + "tf.keras.metrics.SensitivityAtSpecificity.__gt__": true, + "tf.keras.metrics.SensitivityAtSpecificity.__init__": true, + "tf.keras.metrics.SensitivityAtSpecificity.__le__": true, + "tf.keras.metrics.SensitivityAtSpecificity.__lt__": true, + "tf.keras.metrics.SensitivityAtSpecificity.__ne__": true, + "tf.keras.metrics.SensitivityAtSpecificity.__new__": true, + "tf.keras.metrics.SensitivityAtSpecificity.activity_regularizer": true, + "tf.keras.metrics.SensitivityAtSpecificity.add_loss": true, + "tf.keras.metrics.SensitivityAtSpecificity.add_metric": true, + "tf.keras.metrics.SensitivityAtSpecificity.add_weight": true, + "tf.keras.metrics.SensitivityAtSpecificity.build": true, + "tf.keras.metrics.SensitivityAtSpecificity.call": true, + "tf.keras.metrics.SensitivityAtSpecificity.compute_mask": true, + "tf.keras.metrics.SensitivityAtSpecificity.compute_output_shape": true, + "tf.keras.metrics.SensitivityAtSpecificity.compute_output_signature": true, + "tf.keras.metrics.SensitivityAtSpecificity.count_params": true, + "tf.keras.metrics.SensitivityAtSpecificity.dtype": true, + "tf.keras.metrics.SensitivityAtSpecificity.dynamic": true, + "tf.keras.metrics.SensitivityAtSpecificity.from_config": true, + "tf.keras.metrics.SensitivityAtSpecificity.get_config": true, + 
"tf.keras.metrics.SensitivityAtSpecificity.get_weights": true, + "tf.keras.metrics.SensitivityAtSpecificity.input": true, + "tf.keras.metrics.SensitivityAtSpecificity.input_spec": true, + "tf.keras.metrics.SensitivityAtSpecificity.losses": true, + "tf.keras.metrics.SensitivityAtSpecificity.metrics": true, + "tf.keras.metrics.SensitivityAtSpecificity.name": true, + "tf.keras.metrics.SensitivityAtSpecificity.name_scope": true, + "tf.keras.metrics.SensitivityAtSpecificity.non_trainable_weights": true, + "tf.keras.metrics.SensitivityAtSpecificity.output": true, + "tf.keras.metrics.SensitivityAtSpecificity.reset_states": true, + "tf.keras.metrics.SensitivityAtSpecificity.result": true, + "tf.keras.metrics.SensitivityAtSpecificity.set_weights": true, + "tf.keras.metrics.SensitivityAtSpecificity.submodules": true, + "tf.keras.metrics.SensitivityAtSpecificity.trainable": true, + "tf.keras.metrics.SensitivityAtSpecificity.trainable_weights": true, + "tf.keras.metrics.SensitivityAtSpecificity.update_state": true, + "tf.keras.metrics.SensitivityAtSpecificity.weights": true, + "tf.keras.metrics.SensitivityAtSpecificity.with_name_scope": true, + "tf.keras.metrics.SparseCategoricalAccuracy": false, + "tf.keras.metrics.SparseCategoricalAccuracy.__call__": true, + "tf.keras.metrics.SparseCategoricalAccuracy.__eq__": true, + "tf.keras.metrics.SparseCategoricalAccuracy.__ge__": true, + "tf.keras.metrics.SparseCategoricalAccuracy.__gt__": true, + "tf.keras.metrics.SparseCategoricalAccuracy.__init__": true, + "tf.keras.metrics.SparseCategoricalAccuracy.__le__": true, + "tf.keras.metrics.SparseCategoricalAccuracy.__lt__": true, + "tf.keras.metrics.SparseCategoricalAccuracy.__ne__": true, + "tf.keras.metrics.SparseCategoricalAccuracy.__new__": true, + "tf.keras.metrics.SparseCategoricalAccuracy.activity_regularizer": true, + "tf.keras.metrics.SparseCategoricalAccuracy.add_loss": true, + "tf.keras.metrics.SparseCategoricalAccuracy.add_metric": true, + "tf.keras.metrics.SparseCategoricalAccuracy.add_weight": true, + "tf.keras.metrics.SparseCategoricalAccuracy.build": true, + "tf.keras.metrics.SparseCategoricalAccuracy.call": true, + "tf.keras.metrics.SparseCategoricalAccuracy.compute_mask": true, + "tf.keras.metrics.SparseCategoricalAccuracy.compute_output_shape": true, + "tf.keras.metrics.SparseCategoricalAccuracy.compute_output_signature": true, + "tf.keras.metrics.SparseCategoricalAccuracy.count_params": true, + "tf.keras.metrics.SparseCategoricalAccuracy.dtype": true, + "tf.keras.metrics.SparseCategoricalAccuracy.dynamic": true, + "tf.keras.metrics.SparseCategoricalAccuracy.from_config": true, + "tf.keras.metrics.SparseCategoricalAccuracy.get_config": true, + "tf.keras.metrics.SparseCategoricalAccuracy.get_weights": true, + "tf.keras.metrics.SparseCategoricalAccuracy.input": true, + "tf.keras.metrics.SparseCategoricalAccuracy.input_spec": true, + "tf.keras.metrics.SparseCategoricalAccuracy.losses": true, + "tf.keras.metrics.SparseCategoricalAccuracy.metrics": true, + "tf.keras.metrics.SparseCategoricalAccuracy.name": true, + "tf.keras.metrics.SparseCategoricalAccuracy.name_scope": true, + "tf.keras.metrics.SparseCategoricalAccuracy.non_trainable_weights": true, + "tf.keras.metrics.SparseCategoricalAccuracy.output": true, + "tf.keras.metrics.SparseCategoricalAccuracy.reset_states": true, + "tf.keras.metrics.SparseCategoricalAccuracy.result": true, + "tf.keras.metrics.SparseCategoricalAccuracy.set_weights": true, + "tf.keras.metrics.SparseCategoricalAccuracy.submodules": true, + 
"tf.keras.metrics.SparseCategoricalAccuracy.trainable": true, + "tf.keras.metrics.SparseCategoricalAccuracy.trainable_weights": true, + "tf.keras.metrics.SparseCategoricalAccuracy.update_state": true, + "tf.keras.metrics.SparseCategoricalAccuracy.weights": true, + "tf.keras.metrics.SparseCategoricalAccuracy.with_name_scope": true, + "tf.keras.metrics.SparseCategoricalCrossentropy": false, + "tf.keras.metrics.SparseCategoricalCrossentropy.__call__": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.__eq__": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.__ge__": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.__gt__": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.__init__": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.__le__": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.__lt__": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.__ne__": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.__new__": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.activity_regularizer": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.add_loss": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.add_metric": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.add_weight": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.build": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.call": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.compute_mask": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.compute_output_shape": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.compute_output_signature": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.count_params": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.dtype": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.dynamic": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.from_config": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.get_config": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.get_weights": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.input": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.input_spec": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.losses": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.metrics": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.name": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.name_scope": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.non_trainable_weights": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.output": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.reset_states": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.result": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.set_weights": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.submodules": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.trainable": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.trainable_weights": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.update_state": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.weights": true, + "tf.keras.metrics.SparseCategoricalCrossentropy.with_name_scope": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy": false, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__call__": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__eq__": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__ge__": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__gt__": true, + 
"tf.keras.metrics.SparseTopKCategoricalAccuracy.__init__": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__le__": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__lt__": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__ne__": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.__new__": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.activity_regularizer": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.add_loss": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.add_metric": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.add_weight": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.build": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.call": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.compute_mask": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.compute_output_shape": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.compute_output_signature": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.count_params": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.dtype": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.dynamic": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.from_config": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.get_config": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.get_weights": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.input": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.input_spec": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.losses": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.metrics": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.name": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.name_scope": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.non_trainable_weights": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.output": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.reset_states": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.result": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.set_weights": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.submodules": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.trainable": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.trainable_weights": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.update_state": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.weights": true, + "tf.keras.metrics.SparseTopKCategoricalAccuracy.with_name_scope": true, + "tf.keras.metrics.SpecificityAtSensitivity": false, + "tf.keras.metrics.SpecificityAtSensitivity.__call__": true, + "tf.keras.metrics.SpecificityAtSensitivity.__eq__": true, + "tf.keras.metrics.SpecificityAtSensitivity.__ge__": true, + "tf.keras.metrics.SpecificityAtSensitivity.__gt__": true, + "tf.keras.metrics.SpecificityAtSensitivity.__init__": true, + "tf.keras.metrics.SpecificityAtSensitivity.__le__": true, + "tf.keras.metrics.SpecificityAtSensitivity.__lt__": true, + "tf.keras.metrics.SpecificityAtSensitivity.__ne__": true, + "tf.keras.metrics.SpecificityAtSensitivity.__new__": true, + "tf.keras.metrics.SpecificityAtSensitivity.activity_regularizer": true, + "tf.keras.metrics.SpecificityAtSensitivity.add_loss": true, + "tf.keras.metrics.SpecificityAtSensitivity.add_metric": true, + "tf.keras.metrics.SpecificityAtSensitivity.add_weight": true, + "tf.keras.metrics.SpecificityAtSensitivity.build": true, + "tf.keras.metrics.SpecificityAtSensitivity.call": true, + 
"tf.keras.metrics.SpecificityAtSensitivity.compute_mask": true, + "tf.keras.metrics.SpecificityAtSensitivity.compute_output_shape": true, + "tf.keras.metrics.SpecificityAtSensitivity.compute_output_signature": true, + "tf.keras.metrics.SpecificityAtSensitivity.count_params": true, + "tf.keras.metrics.SpecificityAtSensitivity.dtype": true, + "tf.keras.metrics.SpecificityAtSensitivity.dynamic": true, + "tf.keras.metrics.SpecificityAtSensitivity.from_config": true, + "tf.keras.metrics.SpecificityAtSensitivity.get_config": true, + "tf.keras.metrics.SpecificityAtSensitivity.get_weights": true, + "tf.keras.metrics.SpecificityAtSensitivity.input": true, + "tf.keras.metrics.SpecificityAtSensitivity.input_spec": true, + "tf.keras.metrics.SpecificityAtSensitivity.losses": true, + "tf.keras.metrics.SpecificityAtSensitivity.metrics": true, + "tf.keras.metrics.SpecificityAtSensitivity.name": true, + "tf.keras.metrics.SpecificityAtSensitivity.name_scope": true, + "tf.keras.metrics.SpecificityAtSensitivity.non_trainable_weights": true, + "tf.keras.metrics.SpecificityAtSensitivity.output": true, + "tf.keras.metrics.SpecificityAtSensitivity.reset_states": true, + "tf.keras.metrics.SpecificityAtSensitivity.result": true, + "tf.keras.metrics.SpecificityAtSensitivity.set_weights": true, + "tf.keras.metrics.SpecificityAtSensitivity.submodules": true, + "tf.keras.metrics.SpecificityAtSensitivity.trainable": true, + "tf.keras.metrics.SpecificityAtSensitivity.trainable_weights": true, + "tf.keras.metrics.SpecificityAtSensitivity.update_state": true, + "tf.keras.metrics.SpecificityAtSensitivity.weights": true, + "tf.keras.metrics.SpecificityAtSensitivity.with_name_scope": true, + "tf.keras.metrics.SquaredHinge": false, + "tf.keras.metrics.SquaredHinge.__call__": true, + "tf.keras.metrics.SquaredHinge.__eq__": true, + "tf.keras.metrics.SquaredHinge.__ge__": true, + "tf.keras.metrics.SquaredHinge.__gt__": true, + "tf.keras.metrics.SquaredHinge.__init__": true, + "tf.keras.metrics.SquaredHinge.__le__": true, + "tf.keras.metrics.SquaredHinge.__lt__": true, + "tf.keras.metrics.SquaredHinge.__ne__": true, + "tf.keras.metrics.SquaredHinge.__new__": true, + "tf.keras.metrics.SquaredHinge.activity_regularizer": true, + "tf.keras.metrics.SquaredHinge.add_loss": true, + "tf.keras.metrics.SquaredHinge.add_metric": true, + "tf.keras.metrics.SquaredHinge.add_weight": true, + "tf.keras.metrics.SquaredHinge.build": true, + "tf.keras.metrics.SquaredHinge.call": true, + "tf.keras.metrics.SquaredHinge.compute_mask": true, + "tf.keras.metrics.SquaredHinge.compute_output_shape": true, + "tf.keras.metrics.SquaredHinge.compute_output_signature": true, + "tf.keras.metrics.SquaredHinge.count_params": true, + "tf.keras.metrics.SquaredHinge.dtype": true, + "tf.keras.metrics.SquaredHinge.dynamic": true, + "tf.keras.metrics.SquaredHinge.from_config": true, + "tf.keras.metrics.SquaredHinge.get_config": true, + "tf.keras.metrics.SquaredHinge.get_weights": true, + "tf.keras.metrics.SquaredHinge.input": true, + "tf.keras.metrics.SquaredHinge.input_spec": true, + "tf.keras.metrics.SquaredHinge.losses": true, + "tf.keras.metrics.SquaredHinge.metrics": true, + "tf.keras.metrics.SquaredHinge.name": true, + "tf.keras.metrics.SquaredHinge.name_scope": true, + "tf.keras.metrics.SquaredHinge.non_trainable_weights": true, + "tf.keras.metrics.SquaredHinge.output": true, + "tf.keras.metrics.SquaredHinge.reset_states": true, + "tf.keras.metrics.SquaredHinge.result": true, + "tf.keras.metrics.SquaredHinge.set_weights": true, + 
"tf.keras.metrics.SquaredHinge.submodules": true, + "tf.keras.metrics.SquaredHinge.trainable": true, + "tf.keras.metrics.SquaredHinge.trainable_weights": true, + "tf.keras.metrics.SquaredHinge.update_state": true, + "tf.keras.metrics.SquaredHinge.weights": true, + "tf.keras.metrics.SquaredHinge.with_name_scope": true, + "tf.keras.metrics.Sum": false, + "tf.keras.metrics.Sum.__call__": true, + "tf.keras.metrics.Sum.__eq__": true, + "tf.keras.metrics.Sum.__ge__": true, + "tf.keras.metrics.Sum.__gt__": true, + "tf.keras.metrics.Sum.__init__": true, + "tf.keras.metrics.Sum.__le__": true, + "tf.keras.metrics.Sum.__lt__": true, + "tf.keras.metrics.Sum.__ne__": true, + "tf.keras.metrics.Sum.__new__": true, + "tf.keras.metrics.Sum.activity_regularizer": true, + "tf.keras.metrics.Sum.add_loss": true, + "tf.keras.metrics.Sum.add_metric": true, + "tf.keras.metrics.Sum.add_weight": true, + "tf.keras.metrics.Sum.build": true, + "tf.keras.metrics.Sum.call": true, + "tf.keras.metrics.Sum.compute_mask": true, + "tf.keras.metrics.Sum.compute_output_shape": true, + "tf.keras.metrics.Sum.compute_output_signature": true, + "tf.keras.metrics.Sum.count_params": true, + "tf.keras.metrics.Sum.dtype": true, + "tf.keras.metrics.Sum.dynamic": true, + "tf.keras.metrics.Sum.from_config": true, + "tf.keras.metrics.Sum.get_config": true, + "tf.keras.metrics.Sum.get_weights": true, + "tf.keras.metrics.Sum.input": true, + "tf.keras.metrics.Sum.input_spec": true, + "tf.keras.metrics.Sum.losses": true, + "tf.keras.metrics.Sum.metrics": true, + "tf.keras.metrics.Sum.name": true, + "tf.keras.metrics.Sum.name_scope": true, + "tf.keras.metrics.Sum.non_trainable_weights": true, + "tf.keras.metrics.Sum.output": true, + "tf.keras.metrics.Sum.reset_states": true, + "tf.keras.metrics.Sum.result": true, + "tf.keras.metrics.Sum.set_weights": true, + "tf.keras.metrics.Sum.submodules": true, + "tf.keras.metrics.Sum.trainable": true, + "tf.keras.metrics.Sum.trainable_weights": true, + "tf.keras.metrics.Sum.update_state": true, + "tf.keras.metrics.Sum.weights": true, + "tf.keras.metrics.Sum.with_name_scope": true, + "tf.keras.metrics.TopKCategoricalAccuracy": false, + "tf.keras.metrics.TopKCategoricalAccuracy.__call__": true, + "tf.keras.metrics.TopKCategoricalAccuracy.__eq__": true, + "tf.keras.metrics.TopKCategoricalAccuracy.__ge__": true, + "tf.keras.metrics.TopKCategoricalAccuracy.__gt__": true, + "tf.keras.metrics.TopKCategoricalAccuracy.__init__": true, + "tf.keras.metrics.TopKCategoricalAccuracy.__le__": true, + "tf.keras.metrics.TopKCategoricalAccuracy.__lt__": true, + "tf.keras.metrics.TopKCategoricalAccuracy.__ne__": true, + "tf.keras.metrics.TopKCategoricalAccuracy.__new__": true, + "tf.keras.metrics.TopKCategoricalAccuracy.activity_regularizer": true, + "tf.keras.metrics.TopKCategoricalAccuracy.add_loss": true, + "tf.keras.metrics.TopKCategoricalAccuracy.add_metric": true, + "tf.keras.metrics.TopKCategoricalAccuracy.add_weight": true, + "tf.keras.metrics.TopKCategoricalAccuracy.build": true, + "tf.keras.metrics.TopKCategoricalAccuracy.call": true, + "tf.keras.metrics.TopKCategoricalAccuracy.compute_mask": true, + "tf.keras.metrics.TopKCategoricalAccuracy.compute_output_shape": true, + "tf.keras.metrics.TopKCategoricalAccuracy.compute_output_signature": true, + "tf.keras.metrics.TopKCategoricalAccuracy.count_params": true, + "tf.keras.metrics.TopKCategoricalAccuracy.dtype": true, + "tf.keras.metrics.TopKCategoricalAccuracy.dynamic": true, + "tf.keras.metrics.TopKCategoricalAccuracy.from_config": true, + 
"tf.keras.metrics.TopKCategoricalAccuracy.get_config": true, + "tf.keras.metrics.TopKCategoricalAccuracy.get_weights": true, + "tf.keras.metrics.TopKCategoricalAccuracy.input": true, + "tf.keras.metrics.TopKCategoricalAccuracy.input_spec": true, + "tf.keras.metrics.TopKCategoricalAccuracy.losses": true, + "tf.keras.metrics.TopKCategoricalAccuracy.metrics": true, + "tf.keras.metrics.TopKCategoricalAccuracy.name": true, + "tf.keras.metrics.TopKCategoricalAccuracy.name_scope": true, + "tf.keras.metrics.TopKCategoricalAccuracy.non_trainable_weights": true, + "tf.keras.metrics.TopKCategoricalAccuracy.output": true, + "tf.keras.metrics.TopKCategoricalAccuracy.reset_states": true, + "tf.keras.metrics.TopKCategoricalAccuracy.result": true, + "tf.keras.metrics.TopKCategoricalAccuracy.set_weights": true, + "tf.keras.metrics.TopKCategoricalAccuracy.submodules": true, + "tf.keras.metrics.TopKCategoricalAccuracy.trainable": true, + "tf.keras.metrics.TopKCategoricalAccuracy.trainable_weights": true, + "tf.keras.metrics.TopKCategoricalAccuracy.update_state": true, + "tf.keras.metrics.TopKCategoricalAccuracy.weights": true, + "tf.keras.metrics.TopKCategoricalAccuracy.with_name_scope": true, + "tf.keras.metrics.TrueNegatives": false, + "tf.keras.metrics.TrueNegatives.__call__": true, + "tf.keras.metrics.TrueNegatives.__eq__": true, + "tf.keras.metrics.TrueNegatives.__ge__": true, + "tf.keras.metrics.TrueNegatives.__gt__": true, + "tf.keras.metrics.TrueNegatives.__init__": true, + "tf.keras.metrics.TrueNegatives.__le__": true, + "tf.keras.metrics.TrueNegatives.__lt__": true, + "tf.keras.metrics.TrueNegatives.__ne__": true, + "tf.keras.metrics.TrueNegatives.__new__": true, + "tf.keras.metrics.TrueNegatives.activity_regularizer": true, + "tf.keras.metrics.TrueNegatives.add_loss": true, + "tf.keras.metrics.TrueNegatives.add_metric": true, + "tf.keras.metrics.TrueNegatives.add_weight": true, + "tf.keras.metrics.TrueNegatives.build": true, + "tf.keras.metrics.TrueNegatives.call": true, + "tf.keras.metrics.TrueNegatives.compute_mask": true, + "tf.keras.metrics.TrueNegatives.compute_output_shape": true, + "tf.keras.metrics.TrueNegatives.compute_output_signature": true, + "tf.keras.metrics.TrueNegatives.count_params": true, + "tf.keras.metrics.TrueNegatives.dtype": true, + "tf.keras.metrics.TrueNegatives.dynamic": true, + "tf.keras.metrics.TrueNegatives.from_config": true, + "tf.keras.metrics.TrueNegatives.get_config": true, + "tf.keras.metrics.TrueNegatives.get_weights": true, + "tf.keras.metrics.TrueNegatives.input": true, + "tf.keras.metrics.TrueNegatives.input_spec": true, + "tf.keras.metrics.TrueNegatives.losses": true, + "tf.keras.metrics.TrueNegatives.metrics": true, + "tf.keras.metrics.TrueNegatives.name": true, + "tf.keras.metrics.TrueNegatives.name_scope": true, + "tf.keras.metrics.TrueNegatives.non_trainable_weights": true, + "tf.keras.metrics.TrueNegatives.output": true, + "tf.keras.metrics.TrueNegatives.reset_states": true, + "tf.keras.metrics.TrueNegatives.result": true, + "tf.keras.metrics.TrueNegatives.set_weights": true, + "tf.keras.metrics.TrueNegatives.submodules": true, + "tf.keras.metrics.TrueNegatives.trainable": true, + "tf.keras.metrics.TrueNegatives.trainable_weights": true, + "tf.keras.metrics.TrueNegatives.update_state": true, + "tf.keras.metrics.TrueNegatives.weights": true, + "tf.keras.metrics.TrueNegatives.with_name_scope": true, + "tf.keras.metrics.TruePositives": false, + "tf.keras.metrics.TruePositives.__call__": true, + "tf.keras.metrics.TruePositives.__eq__": true, + 
"tf.keras.metrics.TruePositives.__ge__": true, + "tf.keras.metrics.TruePositives.__gt__": true, + "tf.keras.metrics.TruePositives.__init__": true, + "tf.keras.metrics.TruePositives.__le__": true, + "tf.keras.metrics.TruePositives.__lt__": true, + "tf.keras.metrics.TruePositives.__ne__": true, + "tf.keras.metrics.TruePositives.__new__": true, + "tf.keras.metrics.TruePositives.activity_regularizer": true, + "tf.keras.metrics.TruePositives.add_loss": true, + "tf.keras.metrics.TruePositives.add_metric": true, + "tf.keras.metrics.TruePositives.add_weight": true, + "tf.keras.metrics.TruePositives.build": true, + "tf.keras.metrics.TruePositives.call": true, + "tf.keras.metrics.TruePositives.compute_mask": true, + "tf.keras.metrics.TruePositives.compute_output_shape": true, + "tf.keras.metrics.TruePositives.compute_output_signature": true, + "tf.keras.metrics.TruePositives.count_params": true, + "tf.keras.metrics.TruePositives.dtype": true, + "tf.keras.metrics.TruePositives.dynamic": true, + "tf.keras.metrics.TruePositives.from_config": true, + "tf.keras.metrics.TruePositives.get_config": true, + "tf.keras.metrics.TruePositives.get_weights": true, + "tf.keras.metrics.TruePositives.input": true, + "tf.keras.metrics.TruePositives.input_spec": true, + "tf.keras.metrics.TruePositives.losses": true, + "tf.keras.metrics.TruePositives.metrics": true, + "tf.keras.metrics.TruePositives.name": true, + "tf.keras.metrics.TruePositives.name_scope": true, + "tf.keras.metrics.TruePositives.non_trainable_weights": true, + "tf.keras.metrics.TruePositives.output": true, + "tf.keras.metrics.TruePositives.reset_states": true, + "tf.keras.metrics.TruePositives.result": true, + "tf.keras.metrics.TruePositives.set_weights": true, + "tf.keras.metrics.TruePositives.submodules": true, + "tf.keras.metrics.TruePositives.trainable": true, + "tf.keras.metrics.TruePositives.trainable_weights": true, + "tf.keras.metrics.TruePositives.update_state": true, + "tf.keras.metrics.TruePositives.weights": true, + "tf.keras.metrics.TruePositives.with_name_scope": true, + "tf.keras.metrics.binary_accuracy": false, + "tf.keras.metrics.binary_crossentropy": false, + "tf.keras.metrics.categorical_accuracy": false, + "tf.keras.metrics.categorical_crossentropy": false, + "tf.keras.metrics.deserialize": false, + "tf.keras.metrics.get": false, + "tf.keras.metrics.hinge": false, + "tf.keras.metrics.kld": false, + "tf.keras.metrics.kullback_leibler_divergence": false, + "tf.keras.metrics.mae": false, + "tf.keras.metrics.mape": false, + "tf.keras.metrics.mean_absolute_error": false, + "tf.keras.metrics.mean_absolute_percentage_error": false, + "tf.keras.metrics.mean_squared_error": false, + "tf.keras.metrics.mean_squared_logarithmic_error": false, + "tf.keras.metrics.mse": false, + "tf.keras.metrics.msle": false, + "tf.keras.metrics.poisson": false, + "tf.keras.metrics.serialize": false, + "tf.keras.metrics.sparse_categorical_accuracy": false, + "tf.keras.metrics.sparse_categorical_crossentropy": false, + "tf.keras.metrics.sparse_top_k_categorical_accuracy": false, + "tf.keras.metrics.squared_hinge": false, + "tf.keras.metrics.top_k_categorical_accuracy": false, + "tf.keras.mixed_precision": false, + "tf.keras.mixed_precision.experimental": false, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer": false, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__eq__": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__ge__": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__gt__": true, + 
"tf.keras.mixed_precision.experimental.LossScaleOptimizer.__init__": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__le__": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__lt__": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__ne__": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.__new__": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.add_slot": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.add_weight": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.apply_gradients": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.from_config": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_config": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_gradients": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_scaled_loss": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_slot": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_slot_names": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_unscaled_gradients": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_updates": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.get_weights": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.iterations": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.learning_rate": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.loss_scale": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.lr": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.minimize": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.set_weights": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.variables": true, + "tf.keras.mixed_precision.experimental.LossScaleOptimizer.weights": true, + "tf.keras.mixed_precision.experimental.Policy": false, + "tf.keras.mixed_precision.experimental.Policy.__eq__": true, + "tf.keras.mixed_precision.experimental.Policy.__ge__": true, + "tf.keras.mixed_precision.experimental.Policy.__gt__": true, + "tf.keras.mixed_precision.experimental.Policy.__init__": true, + "tf.keras.mixed_precision.experimental.Policy.__le__": true, + "tf.keras.mixed_precision.experimental.Policy.__lt__": true, + "tf.keras.mixed_precision.experimental.Policy.__ne__": true, + "tf.keras.mixed_precision.experimental.Policy.__new__": true, + "tf.keras.mixed_precision.experimental.Policy.compute_dtype": true, + "tf.keras.mixed_precision.experimental.Policy.from_config": true, + "tf.keras.mixed_precision.experimental.Policy.get_config": true, + "tf.keras.mixed_precision.experimental.Policy.loss_scale": true, + "tf.keras.mixed_precision.experimental.Policy.name": true, + "tf.keras.mixed_precision.experimental.Policy.should_cast_variables": true, + "tf.keras.mixed_precision.experimental.Policy.variable_dtype": true, + "tf.keras.mixed_precision.experimental.get_layer_policy": false, + "tf.keras.mixed_precision.experimental.global_policy": false, + "tf.keras.mixed_precision.experimental.set_policy": false, + "tf.keras.models": false, + "tf.keras.models.Model": false, + "tf.keras.models.Model.__call__": true, + "tf.keras.models.Model.__eq__": true, + "tf.keras.models.Model.__ge__": true, + "tf.keras.models.Model.__gt__": true, + "tf.keras.models.Model.__init__": true, + "tf.keras.models.Model.__le__": true, + "tf.keras.models.Model.__lt__": 
true, + "tf.keras.models.Model.__ne__": true, + "tf.keras.models.Model.__new__": true, + "tf.keras.models.Model.activity_regularizer": true, + "tf.keras.models.Model.add_loss": true, + "tf.keras.models.Model.add_metric": true, + "tf.keras.models.Model.add_weight": true, + "tf.keras.models.Model.build": true, + "tf.keras.models.Model.call": true, + "tf.keras.models.Model.compile": true, + "tf.keras.models.Model.compute_mask": true, + "tf.keras.models.Model.compute_output_shape": true, + "tf.keras.models.Model.compute_output_signature": true, + "tf.keras.models.Model.count_params": true, + "tf.keras.models.Model.distribute_strategy": true, + "tf.keras.models.Model.dtype": true, + "tf.keras.models.Model.dynamic": true, + "tf.keras.models.Model.evaluate": true, + "tf.keras.models.Model.evaluate_generator": true, + "tf.keras.models.Model.fit": true, + "tf.keras.models.Model.fit_generator": true, + "tf.keras.models.Model.from_config": true, + "tf.keras.models.Model.get_config": true, + "tf.keras.models.Model.get_layer": true, + "tf.keras.models.Model.get_weights": true, + "tf.keras.models.Model.input": true, + "tf.keras.models.Model.input_spec": true, + "tf.keras.models.Model.layers": true, + "tf.keras.models.Model.load_weights": true, + "tf.keras.models.Model.losses": true, + "tf.keras.models.Model.make_predict_function": true, + "tf.keras.models.Model.make_test_function": true, + "tf.keras.models.Model.make_train_function": true, + "tf.keras.models.Model.metrics": true, + "tf.keras.models.Model.metrics_names": true, + "tf.keras.models.Model.name": true, + "tf.keras.models.Model.name_scope": true, + "tf.keras.models.Model.non_trainable_weights": true, + "tf.keras.models.Model.output": true, + "tf.keras.models.Model.predict": true, + "tf.keras.models.Model.predict_generator": true, + "tf.keras.models.Model.predict_on_batch": true, + "tf.keras.models.Model.predict_step": true, + "tf.keras.models.Model.reset_metrics": true, + "tf.keras.models.Model.reset_states": true, + "tf.keras.models.Model.run_eagerly": true, + "tf.keras.models.Model.save": true, + "tf.keras.models.Model.save_weights": true, + "tf.keras.models.Model.set_weights": true, + "tf.keras.models.Model.state_updates": true, + "tf.keras.models.Model.stateful": true, + "tf.keras.models.Model.submodules": true, + "tf.keras.models.Model.summary": true, + "tf.keras.models.Model.test_on_batch": true, + "tf.keras.models.Model.test_step": true, + "tf.keras.models.Model.to_json": true, + "tf.keras.models.Model.to_yaml": true, + "tf.keras.models.Model.train_on_batch": true, + "tf.keras.models.Model.train_step": true, + "tf.keras.models.Model.trainable": true, + "tf.keras.models.Model.trainable_weights": true, + "tf.keras.models.Model.weights": true, + "tf.keras.models.Model.with_name_scope": true, + "tf.keras.models.Sequential": false, + "tf.keras.models.Sequential.__call__": true, + "tf.keras.models.Sequential.__eq__": true, + "tf.keras.models.Sequential.__ge__": true, + "tf.keras.models.Sequential.__gt__": true, + "tf.keras.models.Sequential.__init__": true, + "tf.keras.models.Sequential.__le__": true, + "tf.keras.models.Sequential.__lt__": true, + "tf.keras.models.Sequential.__ne__": true, + "tf.keras.models.Sequential.__new__": true, + "tf.keras.models.Sequential.activity_regularizer": true, + "tf.keras.models.Sequential.add": true, + "tf.keras.models.Sequential.add_loss": true, + "tf.keras.models.Sequential.add_metric": true, + "tf.keras.models.Sequential.add_weight": true, + "tf.keras.models.Sequential.build": true, + 
"tf.keras.models.Sequential.call": true, + "tf.keras.models.Sequential.compile": true, + "tf.keras.models.Sequential.compute_mask": true, + "tf.keras.models.Sequential.compute_output_shape": true, + "tf.keras.models.Sequential.compute_output_signature": true, + "tf.keras.models.Sequential.count_params": true, + "tf.keras.models.Sequential.distribute_strategy": true, + "tf.keras.models.Sequential.dtype": true, + "tf.keras.models.Sequential.dynamic": true, + "tf.keras.models.Sequential.evaluate": true, + "tf.keras.models.Sequential.evaluate_generator": true, + "tf.keras.models.Sequential.fit": true, + "tf.keras.models.Sequential.fit_generator": true, + "tf.keras.models.Sequential.from_config": true, + "tf.keras.models.Sequential.get_config": true, + "tf.keras.models.Sequential.get_layer": true, + "tf.keras.models.Sequential.get_weights": true, + "tf.keras.models.Sequential.input": true, + "tf.keras.models.Sequential.input_spec": true, + "tf.keras.models.Sequential.layers": true, + "tf.keras.models.Sequential.load_weights": true, + "tf.keras.models.Sequential.losses": true, + "tf.keras.models.Sequential.make_predict_function": true, + "tf.keras.models.Sequential.make_test_function": true, + "tf.keras.models.Sequential.make_train_function": true, + "tf.keras.models.Sequential.metrics": true, + "tf.keras.models.Sequential.metrics_names": true, + "tf.keras.models.Sequential.name": true, + "tf.keras.models.Sequential.name_scope": true, + "tf.keras.models.Sequential.non_trainable_weights": true, + "tf.keras.models.Sequential.output": true, + "tf.keras.models.Sequential.pop": true, + "tf.keras.models.Sequential.predict": true, + "tf.keras.models.Sequential.predict_classes": true, + "tf.keras.models.Sequential.predict_generator": true, + "tf.keras.models.Sequential.predict_on_batch": true, + "tf.keras.models.Sequential.predict_proba": true, + "tf.keras.models.Sequential.predict_step": true, + "tf.keras.models.Sequential.reset_metrics": true, + "tf.keras.models.Sequential.reset_states": true, + "tf.keras.models.Sequential.run_eagerly": true, + "tf.keras.models.Sequential.save": true, + "tf.keras.models.Sequential.save_weights": true, + "tf.keras.models.Sequential.set_weights": true, + "tf.keras.models.Sequential.state_updates": true, + "tf.keras.models.Sequential.stateful": true, + "tf.keras.models.Sequential.submodules": true, + "tf.keras.models.Sequential.summary": true, + "tf.keras.models.Sequential.test_on_batch": true, + "tf.keras.models.Sequential.test_step": true, + "tf.keras.models.Sequential.to_json": true, + "tf.keras.models.Sequential.to_yaml": true, + "tf.keras.models.Sequential.train_on_batch": true, + "tf.keras.models.Sequential.train_step": true, + "tf.keras.models.Sequential.trainable": true, + "tf.keras.models.Sequential.trainable_weights": true, + "tf.keras.models.Sequential.weights": true, + "tf.keras.models.Sequential.with_name_scope": true, + "tf.keras.models.clone_model": false, + "tf.keras.models.load_model": false, + "tf.keras.models.model_from_config": false, + "tf.keras.models.model_from_json": false, + "tf.keras.models.model_from_yaml": false, + "tf.keras.models.save_model": false, + "tf.keras.optimizers": false, + "tf.keras.optimizers.Adadelta": false, + "tf.keras.optimizers.Adadelta.__eq__": true, + "tf.keras.optimizers.Adadelta.__ge__": true, + "tf.keras.optimizers.Adadelta.__gt__": true, + "tf.keras.optimizers.Adadelta.__init__": true, + "tf.keras.optimizers.Adadelta.__le__": true, + "tf.keras.optimizers.Adadelta.__lt__": true, + "tf.keras.optimizers.Adadelta.__ne__": 
true, + "tf.keras.optimizers.Adadelta.__new__": true, + "tf.keras.optimizers.Adadelta.add_slot": true, + "tf.keras.optimizers.Adadelta.add_weight": true, + "tf.keras.optimizers.Adadelta.apply_gradients": true, + "tf.keras.optimizers.Adadelta.from_config": true, + "tf.keras.optimizers.Adadelta.get_config": true, + "tf.keras.optimizers.Adadelta.get_gradients": true, + "tf.keras.optimizers.Adadelta.get_slot": true, + "tf.keras.optimizers.Adadelta.get_slot_names": true, + "tf.keras.optimizers.Adadelta.get_updates": true, + "tf.keras.optimizers.Adadelta.get_weights": true, + "tf.keras.optimizers.Adadelta.iterations": true, + "tf.keras.optimizers.Adadelta.minimize": true, + "tf.keras.optimizers.Adadelta.set_weights": true, + "tf.keras.optimizers.Adadelta.variables": true, + "tf.keras.optimizers.Adadelta.weights": true, + "tf.keras.optimizers.Adagrad": false, + "tf.keras.optimizers.Adagrad.__eq__": true, + "tf.keras.optimizers.Adagrad.__ge__": true, + "tf.keras.optimizers.Adagrad.__gt__": true, + "tf.keras.optimizers.Adagrad.__init__": true, + "tf.keras.optimizers.Adagrad.__le__": true, + "tf.keras.optimizers.Adagrad.__lt__": true, + "tf.keras.optimizers.Adagrad.__ne__": true, + "tf.keras.optimizers.Adagrad.__new__": true, + "tf.keras.optimizers.Adagrad.add_slot": true, + "tf.keras.optimizers.Adagrad.add_weight": true, + "tf.keras.optimizers.Adagrad.apply_gradients": true, + "tf.keras.optimizers.Adagrad.from_config": true, + "tf.keras.optimizers.Adagrad.get_config": true, + "tf.keras.optimizers.Adagrad.get_gradients": true, + "tf.keras.optimizers.Adagrad.get_slot": true, + "tf.keras.optimizers.Adagrad.get_slot_names": true, + "tf.keras.optimizers.Adagrad.get_updates": true, + "tf.keras.optimizers.Adagrad.get_weights": true, + "tf.keras.optimizers.Adagrad.iterations": true, + "tf.keras.optimizers.Adagrad.minimize": true, + "tf.keras.optimizers.Adagrad.set_weights": true, + "tf.keras.optimizers.Adagrad.variables": true, + "tf.keras.optimizers.Adagrad.weights": true, + "tf.keras.optimizers.Adam": false, + "tf.keras.optimizers.Adam.__eq__": true, + "tf.keras.optimizers.Adam.__ge__": true, + "tf.keras.optimizers.Adam.__gt__": true, + "tf.keras.optimizers.Adam.__init__": true, + "tf.keras.optimizers.Adam.__le__": true, + "tf.keras.optimizers.Adam.__lt__": true, + "tf.keras.optimizers.Adam.__ne__": true, + "tf.keras.optimizers.Adam.__new__": true, + "tf.keras.optimizers.Adam.add_slot": true, + "tf.keras.optimizers.Adam.add_weight": true, + "tf.keras.optimizers.Adam.apply_gradients": true, + "tf.keras.optimizers.Adam.from_config": true, + "tf.keras.optimizers.Adam.get_config": true, + "tf.keras.optimizers.Adam.get_gradients": true, + "tf.keras.optimizers.Adam.get_slot": true, + "tf.keras.optimizers.Adam.get_slot_names": true, + "tf.keras.optimizers.Adam.get_updates": true, + "tf.keras.optimizers.Adam.get_weights": true, + "tf.keras.optimizers.Adam.iterations": true, + "tf.keras.optimizers.Adam.minimize": true, + "tf.keras.optimizers.Adam.set_weights": true, + "tf.keras.optimizers.Adam.variables": true, + "tf.keras.optimizers.Adam.weights": true, + "tf.keras.optimizers.Adamax": false, + "tf.keras.optimizers.Adamax.__eq__": true, + "tf.keras.optimizers.Adamax.__ge__": true, + "tf.keras.optimizers.Adamax.__gt__": true, + "tf.keras.optimizers.Adamax.__init__": true, + "tf.keras.optimizers.Adamax.__le__": true, + "tf.keras.optimizers.Adamax.__lt__": true, + "tf.keras.optimizers.Adamax.__ne__": true, + "tf.keras.optimizers.Adamax.__new__": true, + "tf.keras.optimizers.Adamax.add_slot": true, + 
"tf.keras.optimizers.Adamax.add_weight": true, + "tf.keras.optimizers.Adamax.apply_gradients": true, + "tf.keras.optimizers.Adamax.from_config": true, + "tf.keras.optimizers.Adamax.get_config": true, + "tf.keras.optimizers.Adamax.get_gradients": true, + "tf.keras.optimizers.Adamax.get_slot": true, + "tf.keras.optimizers.Adamax.get_slot_names": true, + "tf.keras.optimizers.Adamax.get_updates": true, + "tf.keras.optimizers.Adamax.get_weights": true, + "tf.keras.optimizers.Adamax.iterations": true, + "tf.keras.optimizers.Adamax.minimize": true, + "tf.keras.optimizers.Adamax.set_weights": true, + "tf.keras.optimizers.Adamax.variables": true, + "tf.keras.optimizers.Adamax.weights": true, + "tf.keras.optimizers.Ftrl": false, + "tf.keras.optimizers.Ftrl.__eq__": true, + "tf.keras.optimizers.Ftrl.__ge__": true, + "tf.keras.optimizers.Ftrl.__gt__": true, + "tf.keras.optimizers.Ftrl.__init__": true, + "tf.keras.optimizers.Ftrl.__le__": true, + "tf.keras.optimizers.Ftrl.__lt__": true, + "tf.keras.optimizers.Ftrl.__ne__": true, + "tf.keras.optimizers.Ftrl.__new__": true, + "tf.keras.optimizers.Ftrl.add_slot": true, + "tf.keras.optimizers.Ftrl.add_weight": true, + "tf.keras.optimizers.Ftrl.apply_gradients": true, + "tf.keras.optimizers.Ftrl.from_config": true, + "tf.keras.optimizers.Ftrl.get_config": true, + "tf.keras.optimizers.Ftrl.get_gradients": true, + "tf.keras.optimizers.Ftrl.get_slot": true, + "tf.keras.optimizers.Ftrl.get_slot_names": true, + "tf.keras.optimizers.Ftrl.get_updates": true, + "tf.keras.optimizers.Ftrl.get_weights": true, + "tf.keras.optimizers.Ftrl.iterations": true, + "tf.keras.optimizers.Ftrl.minimize": true, + "tf.keras.optimizers.Ftrl.set_weights": true, + "tf.keras.optimizers.Ftrl.variables": true, + "tf.keras.optimizers.Ftrl.weights": true, + "tf.keras.optimizers.Nadam": false, + "tf.keras.optimizers.Nadam.__eq__": true, + "tf.keras.optimizers.Nadam.__ge__": true, + "tf.keras.optimizers.Nadam.__gt__": true, + "tf.keras.optimizers.Nadam.__init__": true, + "tf.keras.optimizers.Nadam.__le__": true, + "tf.keras.optimizers.Nadam.__lt__": true, + "tf.keras.optimizers.Nadam.__ne__": true, + "tf.keras.optimizers.Nadam.__new__": true, + "tf.keras.optimizers.Nadam.add_slot": true, + "tf.keras.optimizers.Nadam.add_weight": true, + "tf.keras.optimizers.Nadam.apply_gradients": true, + "tf.keras.optimizers.Nadam.from_config": true, + "tf.keras.optimizers.Nadam.get_config": true, + "tf.keras.optimizers.Nadam.get_gradients": true, + "tf.keras.optimizers.Nadam.get_slot": true, + "tf.keras.optimizers.Nadam.get_slot_names": true, + "tf.keras.optimizers.Nadam.get_updates": true, + "tf.keras.optimizers.Nadam.get_weights": true, + "tf.keras.optimizers.Nadam.iterations": true, + "tf.keras.optimizers.Nadam.minimize": true, + "tf.keras.optimizers.Nadam.set_weights": true, + "tf.keras.optimizers.Nadam.variables": true, + "tf.keras.optimizers.Nadam.weights": true, + "tf.keras.optimizers.Optimizer": false, + "tf.keras.optimizers.Optimizer.__eq__": true, + "tf.keras.optimizers.Optimizer.__ge__": true, + "tf.keras.optimizers.Optimizer.__gt__": true, + "tf.keras.optimizers.Optimizer.__init__": true, + "tf.keras.optimizers.Optimizer.__le__": true, + "tf.keras.optimizers.Optimizer.__lt__": true, + "tf.keras.optimizers.Optimizer.__ne__": true, + "tf.keras.optimizers.Optimizer.__new__": true, + "tf.keras.optimizers.Optimizer.add_slot": true, + "tf.keras.optimizers.Optimizer.add_weight": true, + "tf.keras.optimizers.Optimizer.apply_gradients": true, + "tf.keras.optimizers.Optimizer.from_config": true, + 
"tf.keras.optimizers.Optimizer.get_config": true, + "tf.keras.optimizers.Optimizer.get_gradients": true, + "tf.keras.optimizers.Optimizer.get_slot": true, + "tf.keras.optimizers.Optimizer.get_slot_names": true, + "tf.keras.optimizers.Optimizer.get_updates": true, + "tf.keras.optimizers.Optimizer.get_weights": true, + "tf.keras.optimizers.Optimizer.iterations": true, + "tf.keras.optimizers.Optimizer.minimize": true, + "tf.keras.optimizers.Optimizer.set_weights": true, + "tf.keras.optimizers.Optimizer.variables": true, + "tf.keras.optimizers.Optimizer.weights": true, + "tf.keras.optimizers.RMSprop": false, + "tf.keras.optimizers.RMSprop.__eq__": true, + "tf.keras.optimizers.RMSprop.__ge__": true, + "tf.keras.optimizers.RMSprop.__gt__": true, + "tf.keras.optimizers.RMSprop.__init__": true, + "tf.keras.optimizers.RMSprop.__le__": true, + "tf.keras.optimizers.RMSprop.__lt__": true, + "tf.keras.optimizers.RMSprop.__ne__": true, + "tf.keras.optimizers.RMSprop.__new__": true, + "tf.keras.optimizers.RMSprop.add_slot": true, + "tf.keras.optimizers.RMSprop.add_weight": true, + "tf.keras.optimizers.RMSprop.apply_gradients": true, + "tf.keras.optimizers.RMSprop.from_config": true, + "tf.keras.optimizers.RMSprop.get_config": true, + "tf.keras.optimizers.RMSprop.get_gradients": true, + "tf.keras.optimizers.RMSprop.get_slot": true, + "tf.keras.optimizers.RMSprop.get_slot_names": true, + "tf.keras.optimizers.RMSprop.get_updates": true, + "tf.keras.optimizers.RMSprop.get_weights": true, + "tf.keras.optimizers.RMSprop.iterations": true, + "tf.keras.optimizers.RMSprop.minimize": true, + "tf.keras.optimizers.RMSprop.set_weights": true, + "tf.keras.optimizers.RMSprop.variables": true, + "tf.keras.optimizers.RMSprop.weights": true, + "tf.keras.optimizers.SGD": false, + "tf.keras.optimizers.SGD.__eq__": true, + "tf.keras.optimizers.SGD.__ge__": true, + "tf.keras.optimizers.SGD.__gt__": true, + "tf.keras.optimizers.SGD.__init__": true, + "tf.keras.optimizers.SGD.__le__": true, + "tf.keras.optimizers.SGD.__lt__": true, + "tf.keras.optimizers.SGD.__ne__": true, + "tf.keras.optimizers.SGD.__new__": true, + "tf.keras.optimizers.SGD.add_slot": true, + "tf.keras.optimizers.SGD.add_weight": true, + "tf.keras.optimizers.SGD.apply_gradients": true, + "tf.keras.optimizers.SGD.from_config": true, + "tf.keras.optimizers.SGD.get_config": true, + "tf.keras.optimizers.SGD.get_gradients": true, + "tf.keras.optimizers.SGD.get_slot": true, + "tf.keras.optimizers.SGD.get_slot_names": true, + "tf.keras.optimizers.SGD.get_updates": true, + "tf.keras.optimizers.SGD.get_weights": true, + "tf.keras.optimizers.SGD.iterations": true, + "tf.keras.optimizers.SGD.minimize": true, + "tf.keras.optimizers.SGD.set_weights": true, + "tf.keras.optimizers.SGD.variables": true, + "tf.keras.optimizers.SGD.weights": true, + "tf.keras.optimizers.deserialize": false, + "tf.keras.optimizers.get": false, + "tf.keras.optimizers.schedules": false, + "tf.keras.optimizers.schedules.ExponentialDecay": false, + "tf.keras.optimizers.schedules.ExponentialDecay.__call__": true, + "tf.keras.optimizers.schedules.ExponentialDecay.__eq__": true, + "tf.keras.optimizers.schedules.ExponentialDecay.__ge__": true, + "tf.keras.optimizers.schedules.ExponentialDecay.__gt__": true, + "tf.keras.optimizers.schedules.ExponentialDecay.__init__": true, + "tf.keras.optimizers.schedules.ExponentialDecay.__le__": true, + "tf.keras.optimizers.schedules.ExponentialDecay.__lt__": true, + "tf.keras.optimizers.schedules.ExponentialDecay.__ne__": true, + 
"tf.keras.optimizers.schedules.ExponentialDecay.__new__": true, + "tf.keras.optimizers.schedules.ExponentialDecay.from_config": true, + "tf.keras.optimizers.schedules.ExponentialDecay.get_config": true, + "tf.keras.optimizers.schedules.InverseTimeDecay": false, + "tf.keras.optimizers.schedules.InverseTimeDecay.__call__": true, + "tf.keras.optimizers.schedules.InverseTimeDecay.__eq__": true, + "tf.keras.optimizers.schedules.InverseTimeDecay.__ge__": true, + "tf.keras.optimizers.schedules.InverseTimeDecay.__gt__": true, + "tf.keras.optimizers.schedules.InverseTimeDecay.__init__": true, + "tf.keras.optimizers.schedules.InverseTimeDecay.__le__": true, + "tf.keras.optimizers.schedules.InverseTimeDecay.__lt__": true, + "tf.keras.optimizers.schedules.InverseTimeDecay.__ne__": true, + "tf.keras.optimizers.schedules.InverseTimeDecay.__new__": true, + "tf.keras.optimizers.schedules.InverseTimeDecay.from_config": true, + "tf.keras.optimizers.schedules.InverseTimeDecay.get_config": true, + "tf.keras.optimizers.schedules.LearningRateSchedule": false, + "tf.keras.optimizers.schedules.LearningRateSchedule.__call__": true, + "tf.keras.optimizers.schedules.LearningRateSchedule.__eq__": true, + "tf.keras.optimizers.schedules.LearningRateSchedule.__ge__": true, + "tf.keras.optimizers.schedules.LearningRateSchedule.__gt__": true, + "tf.keras.optimizers.schedules.LearningRateSchedule.__init__": true, + "tf.keras.optimizers.schedules.LearningRateSchedule.__le__": true, + "tf.keras.optimizers.schedules.LearningRateSchedule.__lt__": true, + "tf.keras.optimizers.schedules.LearningRateSchedule.__ne__": true, + "tf.keras.optimizers.schedules.LearningRateSchedule.__new__": true, + "tf.keras.optimizers.schedules.LearningRateSchedule.from_config": true, + "tf.keras.optimizers.schedules.LearningRateSchedule.get_config": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay": false, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__call__": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__eq__": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__ge__": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__gt__": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__init__": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__le__": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__lt__": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__ne__": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.__new__": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.from_config": true, + "tf.keras.optimizers.schedules.PiecewiseConstantDecay.get_config": true, + "tf.keras.optimizers.schedules.PolynomialDecay": false, + "tf.keras.optimizers.schedules.PolynomialDecay.__call__": true, + "tf.keras.optimizers.schedules.PolynomialDecay.__eq__": true, + "tf.keras.optimizers.schedules.PolynomialDecay.__ge__": true, + "tf.keras.optimizers.schedules.PolynomialDecay.__gt__": true, + "tf.keras.optimizers.schedules.PolynomialDecay.__init__": true, + "tf.keras.optimizers.schedules.PolynomialDecay.__le__": true, + "tf.keras.optimizers.schedules.PolynomialDecay.__lt__": true, + "tf.keras.optimizers.schedules.PolynomialDecay.__ne__": true, + "tf.keras.optimizers.schedules.PolynomialDecay.__new__": true, + "tf.keras.optimizers.schedules.PolynomialDecay.from_config": true, + "tf.keras.optimizers.schedules.PolynomialDecay.get_config": true, + "tf.keras.optimizers.schedules.deserialize": false, + 
"tf.keras.optimizers.schedules.serialize": false, + "tf.keras.optimizers.serialize": false, + "tf.keras.preprocessing": false, + "tf.keras.preprocessing.image": false, + "tf.keras.preprocessing.image.DirectoryIterator": false, + "tf.keras.preprocessing.image.DirectoryIterator.__eq__": true, + "tf.keras.preprocessing.image.DirectoryIterator.__ge__": true, + "tf.keras.preprocessing.image.DirectoryIterator.__getitem__": true, + "tf.keras.preprocessing.image.DirectoryIterator.__gt__": true, + "tf.keras.preprocessing.image.DirectoryIterator.__init__": true, + "tf.keras.preprocessing.image.DirectoryIterator.__iter__": true, + "tf.keras.preprocessing.image.DirectoryIterator.__le__": true, + "tf.keras.preprocessing.image.DirectoryIterator.__len__": true, + "tf.keras.preprocessing.image.DirectoryIterator.__lt__": true, + "tf.keras.preprocessing.image.DirectoryIterator.__ne__": true, + "tf.keras.preprocessing.image.DirectoryIterator.__new__": true, + "tf.keras.preprocessing.image.DirectoryIterator.allowed_class_modes": true, + "tf.keras.preprocessing.image.DirectoryIterator.filepaths": true, + "tf.keras.preprocessing.image.DirectoryIterator.labels": true, + "tf.keras.preprocessing.image.DirectoryIterator.next": true, + "tf.keras.preprocessing.image.DirectoryIterator.on_epoch_end": true, + "tf.keras.preprocessing.image.DirectoryIterator.reset": true, + "tf.keras.preprocessing.image.DirectoryIterator.sample_weight": true, + "tf.keras.preprocessing.image.DirectoryIterator.set_processing_attrs": true, + "tf.keras.preprocessing.image.DirectoryIterator.white_list_formats": true, + "tf.keras.preprocessing.image.ImageDataGenerator": false, + "tf.keras.preprocessing.image.ImageDataGenerator.__eq__": true, + "tf.keras.preprocessing.image.ImageDataGenerator.__ge__": true, + "tf.keras.preprocessing.image.ImageDataGenerator.__gt__": true, + "tf.keras.preprocessing.image.ImageDataGenerator.__init__": true, + "tf.keras.preprocessing.image.ImageDataGenerator.__le__": true, + "tf.keras.preprocessing.image.ImageDataGenerator.__lt__": true, + "tf.keras.preprocessing.image.ImageDataGenerator.__ne__": true, + "tf.keras.preprocessing.image.ImageDataGenerator.__new__": true, + "tf.keras.preprocessing.image.ImageDataGenerator.apply_transform": true, + "tf.keras.preprocessing.image.ImageDataGenerator.fit": true, + "tf.keras.preprocessing.image.ImageDataGenerator.flow": true, + "tf.keras.preprocessing.image.ImageDataGenerator.flow_from_dataframe": true, + "tf.keras.preprocessing.image.ImageDataGenerator.flow_from_directory": true, + "tf.keras.preprocessing.image.ImageDataGenerator.get_random_transform": true, + "tf.keras.preprocessing.image.ImageDataGenerator.random_transform": true, + "tf.keras.preprocessing.image.ImageDataGenerator.standardize": true, + "tf.keras.preprocessing.image.Iterator": false, + "tf.keras.preprocessing.image.Iterator.__eq__": true, + "tf.keras.preprocessing.image.Iterator.__ge__": true, + "tf.keras.preprocessing.image.Iterator.__getitem__": true, + "tf.keras.preprocessing.image.Iterator.__gt__": true, + "tf.keras.preprocessing.image.Iterator.__init__": true, + "tf.keras.preprocessing.image.Iterator.__iter__": true, + "tf.keras.preprocessing.image.Iterator.__le__": true, + "tf.keras.preprocessing.image.Iterator.__len__": true, + "tf.keras.preprocessing.image.Iterator.__lt__": true, + "tf.keras.preprocessing.image.Iterator.__ne__": true, + "tf.keras.preprocessing.image.Iterator.__new__": true, + "tf.keras.preprocessing.image.Iterator.next": true, + "tf.keras.preprocessing.image.Iterator.on_epoch_end": 
true, + "tf.keras.preprocessing.image.Iterator.reset": true, + "tf.keras.preprocessing.image.Iterator.white_list_formats": true, + "tf.keras.preprocessing.image.NumpyArrayIterator": false, + "tf.keras.preprocessing.image.NumpyArrayIterator.__eq__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.__ge__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.__getitem__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.__gt__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.__init__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.__iter__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.__le__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.__len__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.__lt__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.__ne__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.__new__": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.next": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.on_epoch_end": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.reset": true, + "tf.keras.preprocessing.image.NumpyArrayIterator.white_list_formats": true, + "tf.keras.preprocessing.image.apply_affine_transform": false, + "tf.keras.preprocessing.image.apply_brightness_shift": false, + "tf.keras.preprocessing.image.apply_channel_shift": false, + "tf.keras.preprocessing.image.array_to_img": false, + "tf.keras.preprocessing.image.img_to_array": false, + "tf.keras.preprocessing.image.load_img": false, + "tf.keras.preprocessing.image.random_brightness": false, + "tf.keras.preprocessing.image.random_channel_shift": false, + "tf.keras.preprocessing.image.random_rotation": false, + "tf.keras.preprocessing.image.random_shear": false, + "tf.keras.preprocessing.image.random_shift": false, + "tf.keras.preprocessing.image.random_zoom": false, + "tf.keras.preprocessing.image.save_img": false, + "tf.keras.preprocessing.sequence": false, + "tf.keras.preprocessing.sequence.TimeseriesGenerator": false, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__eq__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__ge__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__getitem__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__gt__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__init__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__iter__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__le__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__len__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__lt__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__ne__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.__new__": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.get_config": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.on_epoch_end": true, + "tf.keras.preprocessing.sequence.TimeseriesGenerator.to_json": true, + "tf.keras.preprocessing.sequence.make_sampling_table": false, + "tf.keras.preprocessing.sequence.pad_sequences": false, + "tf.keras.preprocessing.sequence.skipgrams": false, + "tf.keras.preprocessing.text": false, + "tf.keras.preprocessing.text.Tokenizer": false, + "tf.keras.preprocessing.text.Tokenizer.__eq__": true, + "tf.keras.preprocessing.text.Tokenizer.__ge__": true, + "tf.keras.preprocessing.text.Tokenizer.__gt__": true, + "tf.keras.preprocessing.text.Tokenizer.__init__": true, + 
"tf.keras.preprocessing.text.Tokenizer.__le__": true, + "tf.keras.preprocessing.text.Tokenizer.__lt__": true, + "tf.keras.preprocessing.text.Tokenizer.__ne__": true, + "tf.keras.preprocessing.text.Tokenizer.__new__": true, + "tf.keras.preprocessing.text.Tokenizer.fit_on_sequences": true, + "tf.keras.preprocessing.text.Tokenizer.fit_on_texts": true, + "tf.keras.preprocessing.text.Tokenizer.get_config": true, + "tf.keras.preprocessing.text.Tokenizer.sequences_to_matrix": true, + "tf.keras.preprocessing.text.Tokenizer.sequences_to_texts": true, + "tf.keras.preprocessing.text.Tokenizer.sequences_to_texts_generator": true, + "tf.keras.preprocessing.text.Tokenizer.texts_to_matrix": true, + "tf.keras.preprocessing.text.Tokenizer.texts_to_sequences": true, + "tf.keras.preprocessing.text.Tokenizer.texts_to_sequences_generator": true, + "tf.keras.preprocessing.text.Tokenizer.to_json": true, + "tf.keras.preprocessing.text.hashing_trick": false, + "tf.keras.preprocessing.text.one_hot": false, + "tf.keras.preprocessing.text.text_to_word_sequence": false, + "tf.keras.preprocessing.text.tokenizer_from_json": false, + "tf.keras.regularizers": false, + "tf.keras.regularizers.L1L2": false, + "tf.keras.regularizers.L1L2.__call__": true, + "tf.keras.regularizers.L1L2.__eq__": true, + "tf.keras.regularizers.L1L2.__ge__": true, + "tf.keras.regularizers.L1L2.__gt__": true, + "tf.keras.regularizers.L1L2.__init__": true, + "tf.keras.regularizers.L1L2.__le__": true, + "tf.keras.regularizers.L1L2.__lt__": true, + "tf.keras.regularizers.L1L2.__ne__": true, + "tf.keras.regularizers.L1L2.__new__": true, + "tf.keras.regularizers.L1L2.from_config": true, + "tf.keras.regularizers.L1L2.get_config": true, + "tf.keras.regularizers.Regularizer": false, + "tf.keras.regularizers.Regularizer.__call__": true, + "tf.keras.regularizers.Regularizer.__eq__": true, + "tf.keras.regularizers.Regularizer.__ge__": true, + "tf.keras.regularizers.Regularizer.__gt__": true, + "tf.keras.regularizers.Regularizer.__init__": true, + "tf.keras.regularizers.Regularizer.__le__": true, + "tf.keras.regularizers.Regularizer.__lt__": true, + "tf.keras.regularizers.Regularizer.__ne__": true, + "tf.keras.regularizers.Regularizer.__new__": true, + "tf.keras.regularizers.Regularizer.from_config": true, + "tf.keras.regularizers.Regularizer.get_config": true, + "tf.keras.regularizers.deserialize": false, + "tf.keras.regularizers.get": false, + "tf.keras.regularizers.l1": false, + "tf.keras.regularizers.l1_l2": false, + "tf.keras.regularizers.l2": false, + "tf.keras.regularizers.serialize": false, + "tf.keras.utils": false, + "tf.keras.utils.CustomObjectScope": false, + "tf.keras.utils.CustomObjectScope.__enter__": true, + "tf.keras.utils.CustomObjectScope.__eq__": true, + "tf.keras.utils.CustomObjectScope.__exit__": true, + "tf.keras.utils.CustomObjectScope.__ge__": true, + "tf.keras.utils.CustomObjectScope.__gt__": true, + "tf.keras.utils.CustomObjectScope.__init__": true, + "tf.keras.utils.CustomObjectScope.__le__": true, + "tf.keras.utils.CustomObjectScope.__lt__": true, + "tf.keras.utils.CustomObjectScope.__ne__": true, + "tf.keras.utils.CustomObjectScope.__new__": true, + "tf.keras.utils.GeneratorEnqueuer": false, + "tf.keras.utils.GeneratorEnqueuer.__eq__": true, + "tf.keras.utils.GeneratorEnqueuer.__ge__": true, + "tf.keras.utils.GeneratorEnqueuer.__gt__": true, + "tf.keras.utils.GeneratorEnqueuer.__init__": true, + "tf.keras.utils.GeneratorEnqueuer.__le__": true, + "tf.keras.utils.GeneratorEnqueuer.__lt__": true, + 
"tf.keras.utils.GeneratorEnqueuer.__ne__": true, + "tf.keras.utils.GeneratorEnqueuer.__new__": true, + "tf.keras.utils.GeneratorEnqueuer.get": true, + "tf.keras.utils.GeneratorEnqueuer.is_running": true, + "tf.keras.utils.GeneratorEnqueuer.start": true, + "tf.keras.utils.GeneratorEnqueuer.stop": true, + "tf.keras.utils.HDF5Matrix": false, + "tf.keras.utils.HDF5Matrix.__eq__": true, + "tf.keras.utils.HDF5Matrix.__ge__": true, + "tf.keras.utils.HDF5Matrix.__getitem__": true, + "tf.keras.utils.HDF5Matrix.__gt__": true, + "tf.keras.utils.HDF5Matrix.__init__": true, + "tf.keras.utils.HDF5Matrix.__le__": true, + "tf.keras.utils.HDF5Matrix.__len__": true, + "tf.keras.utils.HDF5Matrix.__lt__": true, + "tf.keras.utils.HDF5Matrix.__ne__": true, + "tf.keras.utils.HDF5Matrix.__new__": true, + "tf.keras.utils.HDF5Matrix.dtype": true, + "tf.keras.utils.HDF5Matrix.ndim": true, + "tf.keras.utils.HDF5Matrix.refs": true, + "tf.keras.utils.HDF5Matrix.shape": true, + "tf.keras.utils.HDF5Matrix.size": true, + "tf.keras.utils.OrderedEnqueuer": false, + "tf.keras.utils.OrderedEnqueuer.__eq__": true, + "tf.keras.utils.OrderedEnqueuer.__ge__": true, + "tf.keras.utils.OrderedEnqueuer.__gt__": true, + "tf.keras.utils.OrderedEnqueuer.__init__": true, + "tf.keras.utils.OrderedEnqueuer.__le__": true, + "tf.keras.utils.OrderedEnqueuer.__lt__": true, + "tf.keras.utils.OrderedEnqueuer.__ne__": true, + "tf.keras.utils.OrderedEnqueuer.__new__": true, + "tf.keras.utils.OrderedEnqueuer.get": true, + "tf.keras.utils.OrderedEnqueuer.is_running": true, + "tf.keras.utils.OrderedEnqueuer.start": true, + "tf.keras.utils.OrderedEnqueuer.stop": true, + "tf.keras.utils.Progbar": false, + "tf.keras.utils.Progbar.__eq__": true, + "tf.keras.utils.Progbar.__ge__": true, + "tf.keras.utils.Progbar.__gt__": true, + "tf.keras.utils.Progbar.__init__": true, + "tf.keras.utils.Progbar.__le__": true, + "tf.keras.utils.Progbar.__lt__": true, + "tf.keras.utils.Progbar.__ne__": true, + "tf.keras.utils.Progbar.__new__": true, + "tf.keras.utils.Progbar.add": true, + "tf.keras.utils.Progbar.update": true, + "tf.keras.utils.Sequence": false, + "tf.keras.utils.Sequence.__eq__": true, + "tf.keras.utils.Sequence.__ge__": true, + "tf.keras.utils.Sequence.__getitem__": true, + "tf.keras.utils.Sequence.__gt__": true, + "tf.keras.utils.Sequence.__init__": true, + "tf.keras.utils.Sequence.__iter__": true, + "tf.keras.utils.Sequence.__le__": true, + "tf.keras.utils.Sequence.__len__": true, + "tf.keras.utils.Sequence.__lt__": true, + "tf.keras.utils.Sequence.__ne__": true, + "tf.keras.utils.Sequence.__new__": true, + "tf.keras.utils.Sequence.on_epoch_end": true, + "tf.keras.utils.SequenceEnqueuer": false, + "tf.keras.utils.SequenceEnqueuer.__eq__": true, + "tf.keras.utils.SequenceEnqueuer.__ge__": true, + "tf.keras.utils.SequenceEnqueuer.__gt__": true, + "tf.keras.utils.SequenceEnqueuer.__init__": true, + "tf.keras.utils.SequenceEnqueuer.__le__": true, + "tf.keras.utils.SequenceEnqueuer.__lt__": true, + "tf.keras.utils.SequenceEnqueuer.__ne__": true, + "tf.keras.utils.SequenceEnqueuer.__new__": true, + "tf.keras.utils.SequenceEnqueuer.get": true, + "tf.keras.utils.SequenceEnqueuer.is_running": true, + "tf.keras.utils.SequenceEnqueuer.start": true, + "tf.keras.utils.SequenceEnqueuer.stop": true, + "tf.keras.utils.convert_all_kernels_in_model": false, + "tf.keras.utils.custom_object_scope": false, + "tf.keras.utils.deserialize_keras_object": false, + "tf.keras.utils.get_custom_objects": false, + "tf.keras.utils.get_file": false, + 
"tf.keras.utils.get_registered_name": false, + "tf.keras.utils.get_registered_object": false, + "tf.keras.utils.get_source_inputs": false, + "tf.keras.utils.model_to_dot": false, + "tf.keras.utils.multi_gpu_model": false, + "tf.keras.utils.normalize": false, + "tf.keras.utils.plot_model": false, + "tf.keras.utils.register_keras_serializable": false, + "tf.keras.utils.serialize_keras_object": false, + "tf.keras.utils.to_categorical": false, + "tf.keras.wrappers": false, + "tf.keras.wrappers.scikit_learn": false, + "tf.keras.wrappers.scikit_learn.KerasClassifier": false, + "tf.keras.wrappers.scikit_learn.KerasClassifier.__eq__": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.__ge__": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.__gt__": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.__init__": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.__le__": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.__lt__": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.__ne__": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.__new__": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.check_params": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.filter_sk_params": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.fit": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.get_params": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.predict": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.predict_proba": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.score": true, + "tf.keras.wrappers.scikit_learn.KerasClassifier.set_params": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor": false, + "tf.keras.wrappers.scikit_learn.KerasRegressor.__eq__": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.__ge__": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.__gt__": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.__init__": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.__le__": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.__lt__": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.__ne__": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.__new__": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.check_params": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.filter_sk_params": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.fit": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.get_params": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.predict": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.score": true, + "tf.keras.wrappers.scikit_learn.KerasRegressor.set_params": true, + "tf.less": false, + "tf.less_equal": false, + "tf.linalg": false, + "tf.linalg.LinearOperator": false, + "tf.linalg.LinearOperator.H": true, + "tf.linalg.LinearOperator.__eq__": true, + "tf.linalg.LinearOperator.__ge__": true, + "tf.linalg.LinearOperator.__gt__": true, + "tf.linalg.LinearOperator.__init__": true, + "tf.linalg.LinearOperator.__le__": true, + "tf.linalg.LinearOperator.__lt__": true, + "tf.linalg.LinearOperator.__matmul__": true, + "tf.linalg.LinearOperator.__ne__": true, + "tf.linalg.LinearOperator.__new__": true, + "tf.linalg.LinearOperator.add_to_tensor": true, + "tf.linalg.LinearOperator.adjoint": true, + "tf.linalg.LinearOperator.assert_non_singular": true, + "tf.linalg.LinearOperator.assert_positive_definite": true, + "tf.linalg.LinearOperator.assert_self_adjoint": true, + "tf.linalg.LinearOperator.batch_shape": true, + 
"tf.linalg.LinearOperator.batch_shape_tensor": true, + "tf.linalg.LinearOperator.cholesky": true, + "tf.linalg.LinearOperator.cond": true, + "tf.linalg.LinearOperator.determinant": true, + "tf.linalg.LinearOperator.diag_part": true, + "tf.linalg.LinearOperator.domain_dimension": true, + "tf.linalg.LinearOperator.domain_dimension_tensor": true, + "tf.linalg.LinearOperator.dtype": true, + "tf.linalg.LinearOperator.eigvals": true, + "tf.linalg.LinearOperator.graph_parents": true, + "tf.linalg.LinearOperator.inverse": true, + "tf.linalg.LinearOperator.is_non_singular": true, + "tf.linalg.LinearOperator.is_positive_definite": true, + "tf.linalg.LinearOperator.is_self_adjoint": true, + "tf.linalg.LinearOperator.is_square": true, + "tf.linalg.LinearOperator.log_abs_determinant": true, + "tf.linalg.LinearOperator.matmul": true, + "tf.linalg.LinearOperator.matvec": true, + "tf.linalg.LinearOperator.name": true, + "tf.linalg.LinearOperator.name_scope": true, + "tf.linalg.LinearOperator.range_dimension": true, + "tf.linalg.LinearOperator.range_dimension_tensor": true, + "tf.linalg.LinearOperator.shape": true, + "tf.linalg.LinearOperator.shape_tensor": true, + "tf.linalg.LinearOperator.solve": true, + "tf.linalg.LinearOperator.solvevec": true, + "tf.linalg.LinearOperator.submodules": true, + "tf.linalg.LinearOperator.tensor_rank": true, + "tf.linalg.LinearOperator.tensor_rank_tensor": true, + "tf.linalg.LinearOperator.to_dense": true, + "tf.linalg.LinearOperator.trace": true, + "tf.linalg.LinearOperator.trainable_variables": true, + "tf.linalg.LinearOperator.variables": true, + "tf.linalg.LinearOperator.with_name_scope": true, + "tf.linalg.LinearOperatorAdjoint": false, + "tf.linalg.LinearOperatorAdjoint.H": true, + "tf.linalg.LinearOperatorAdjoint.__eq__": true, + "tf.linalg.LinearOperatorAdjoint.__ge__": true, + "tf.linalg.LinearOperatorAdjoint.__gt__": true, + "tf.linalg.LinearOperatorAdjoint.__init__": true, + "tf.linalg.LinearOperatorAdjoint.__le__": true, + "tf.linalg.LinearOperatorAdjoint.__lt__": true, + "tf.linalg.LinearOperatorAdjoint.__matmul__": true, + "tf.linalg.LinearOperatorAdjoint.__ne__": true, + "tf.linalg.LinearOperatorAdjoint.__new__": true, + "tf.linalg.LinearOperatorAdjoint.add_to_tensor": true, + "tf.linalg.LinearOperatorAdjoint.adjoint": true, + "tf.linalg.LinearOperatorAdjoint.assert_non_singular": true, + "tf.linalg.LinearOperatorAdjoint.assert_positive_definite": true, + "tf.linalg.LinearOperatorAdjoint.assert_self_adjoint": true, + "tf.linalg.LinearOperatorAdjoint.batch_shape": true, + "tf.linalg.LinearOperatorAdjoint.batch_shape_tensor": true, + "tf.linalg.LinearOperatorAdjoint.cholesky": true, + "tf.linalg.LinearOperatorAdjoint.cond": true, + "tf.linalg.LinearOperatorAdjoint.determinant": true, + "tf.linalg.LinearOperatorAdjoint.diag_part": true, + "tf.linalg.LinearOperatorAdjoint.domain_dimension": true, + "tf.linalg.LinearOperatorAdjoint.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorAdjoint.dtype": true, + "tf.linalg.LinearOperatorAdjoint.eigvals": true, + "tf.linalg.LinearOperatorAdjoint.graph_parents": true, + "tf.linalg.LinearOperatorAdjoint.inverse": true, + "tf.linalg.LinearOperatorAdjoint.is_non_singular": true, + "tf.linalg.LinearOperatorAdjoint.is_positive_definite": true, + "tf.linalg.LinearOperatorAdjoint.is_self_adjoint": true, + "tf.linalg.LinearOperatorAdjoint.is_square": true, + "tf.linalg.LinearOperatorAdjoint.log_abs_determinant": true, + "tf.linalg.LinearOperatorAdjoint.matmul": true, + "tf.linalg.LinearOperatorAdjoint.matvec": true, + 
"tf.linalg.LinearOperatorAdjoint.name": true, + "tf.linalg.LinearOperatorAdjoint.name_scope": true, + "tf.linalg.LinearOperatorAdjoint.operator": true, + "tf.linalg.LinearOperatorAdjoint.range_dimension": true, + "tf.linalg.LinearOperatorAdjoint.range_dimension_tensor": true, + "tf.linalg.LinearOperatorAdjoint.shape": true, + "tf.linalg.LinearOperatorAdjoint.shape_tensor": true, + "tf.linalg.LinearOperatorAdjoint.solve": true, + "tf.linalg.LinearOperatorAdjoint.solvevec": true, + "tf.linalg.LinearOperatorAdjoint.submodules": true, + "tf.linalg.LinearOperatorAdjoint.tensor_rank": true, + "tf.linalg.LinearOperatorAdjoint.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorAdjoint.to_dense": true, + "tf.linalg.LinearOperatorAdjoint.trace": true, + "tf.linalg.LinearOperatorAdjoint.trainable_variables": true, + "tf.linalg.LinearOperatorAdjoint.variables": true, + "tf.linalg.LinearOperatorAdjoint.with_name_scope": true, + "tf.linalg.LinearOperatorBlockDiag": false, + "tf.linalg.LinearOperatorBlockDiag.H": true, + "tf.linalg.LinearOperatorBlockDiag.__eq__": true, + "tf.linalg.LinearOperatorBlockDiag.__ge__": true, + "tf.linalg.LinearOperatorBlockDiag.__gt__": true, + "tf.linalg.LinearOperatorBlockDiag.__init__": true, + "tf.linalg.LinearOperatorBlockDiag.__le__": true, + "tf.linalg.LinearOperatorBlockDiag.__lt__": true, + "tf.linalg.LinearOperatorBlockDiag.__matmul__": true, + "tf.linalg.LinearOperatorBlockDiag.__ne__": true, + "tf.linalg.LinearOperatorBlockDiag.__new__": true, + "tf.linalg.LinearOperatorBlockDiag.add_to_tensor": true, + "tf.linalg.LinearOperatorBlockDiag.adjoint": true, + "tf.linalg.LinearOperatorBlockDiag.assert_non_singular": true, + "tf.linalg.LinearOperatorBlockDiag.assert_positive_definite": true, + "tf.linalg.LinearOperatorBlockDiag.assert_self_adjoint": true, + "tf.linalg.LinearOperatorBlockDiag.batch_shape": true, + "tf.linalg.LinearOperatorBlockDiag.batch_shape_tensor": true, + "tf.linalg.LinearOperatorBlockDiag.cholesky": true, + "tf.linalg.LinearOperatorBlockDiag.cond": true, + "tf.linalg.LinearOperatorBlockDiag.determinant": true, + "tf.linalg.LinearOperatorBlockDiag.diag_part": true, + "tf.linalg.LinearOperatorBlockDiag.domain_dimension": true, + "tf.linalg.LinearOperatorBlockDiag.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorBlockDiag.dtype": true, + "tf.linalg.LinearOperatorBlockDiag.eigvals": true, + "tf.linalg.LinearOperatorBlockDiag.graph_parents": true, + "tf.linalg.LinearOperatorBlockDiag.inverse": true, + "tf.linalg.LinearOperatorBlockDiag.is_non_singular": true, + "tf.linalg.LinearOperatorBlockDiag.is_positive_definite": true, + "tf.linalg.LinearOperatorBlockDiag.is_self_adjoint": true, + "tf.linalg.LinearOperatorBlockDiag.is_square": true, + "tf.linalg.LinearOperatorBlockDiag.log_abs_determinant": true, + "tf.linalg.LinearOperatorBlockDiag.matmul": true, + "tf.linalg.LinearOperatorBlockDiag.matvec": true, + "tf.linalg.LinearOperatorBlockDiag.name": true, + "tf.linalg.LinearOperatorBlockDiag.name_scope": true, + "tf.linalg.LinearOperatorBlockDiag.operators": true, + "tf.linalg.LinearOperatorBlockDiag.range_dimension": true, + "tf.linalg.LinearOperatorBlockDiag.range_dimension_tensor": true, + "tf.linalg.LinearOperatorBlockDiag.shape": true, + "tf.linalg.LinearOperatorBlockDiag.shape_tensor": true, + "tf.linalg.LinearOperatorBlockDiag.solve": true, + "tf.linalg.LinearOperatorBlockDiag.solvevec": true, + "tf.linalg.LinearOperatorBlockDiag.submodules": true, + "tf.linalg.LinearOperatorBlockDiag.tensor_rank": true, + 
"tf.linalg.LinearOperatorBlockDiag.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorBlockDiag.to_dense": true, + "tf.linalg.LinearOperatorBlockDiag.trace": true, + "tf.linalg.LinearOperatorBlockDiag.trainable_variables": true, + "tf.linalg.LinearOperatorBlockDiag.variables": true, + "tf.linalg.LinearOperatorBlockDiag.with_name_scope": true, + "tf.linalg.LinearOperatorBlockLowerTriangular": false, + "tf.linalg.LinearOperatorBlockLowerTriangular.H": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.__eq__": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.__ge__": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.__gt__": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.__init__": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.__le__": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.__lt__": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.__matmul__": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.__ne__": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.__new__": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.add_to_tensor": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.adjoint": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.assert_non_singular": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.assert_positive_definite": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.assert_self_adjoint": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.batch_shape": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.batch_shape_tensor": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.cholesky": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.cond": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.determinant": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.diag_part": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.domain_dimension": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.dtype": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.eigvals": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.graph_parents": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.inverse": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.is_non_singular": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.is_positive_definite": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.is_self_adjoint": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.is_square": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.log_abs_determinant": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.matmul": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.matvec": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.name": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.name_scope": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.operators": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.range_dimension": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.range_dimension_tensor": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.shape": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.shape_tensor": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.solve": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.solvevec": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.submodules": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.tensor_rank": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.tensor_rank_tensor": true, + 
"tf.linalg.LinearOperatorBlockLowerTriangular.to_dense": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.trace": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.trainable_variables": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.variables": true, + "tf.linalg.LinearOperatorBlockLowerTriangular.with_name_scope": true, + "tf.linalg.LinearOperatorCirculant": false, + "tf.linalg.LinearOperatorCirculant.H": true, + "tf.linalg.LinearOperatorCirculant.__eq__": true, + "tf.linalg.LinearOperatorCirculant.__ge__": true, + "tf.linalg.LinearOperatorCirculant.__gt__": true, + "tf.linalg.LinearOperatorCirculant.__init__": true, + "tf.linalg.LinearOperatorCirculant.__le__": true, + "tf.linalg.LinearOperatorCirculant.__lt__": true, + "tf.linalg.LinearOperatorCirculant.__matmul__": true, + "tf.linalg.LinearOperatorCirculant.__ne__": true, + "tf.linalg.LinearOperatorCirculant.__new__": true, + "tf.linalg.LinearOperatorCirculant.add_to_tensor": true, + "tf.linalg.LinearOperatorCirculant.adjoint": true, + "tf.linalg.LinearOperatorCirculant.assert_hermitian_spectrum": true, + "tf.linalg.LinearOperatorCirculant.assert_non_singular": true, + "tf.linalg.LinearOperatorCirculant.assert_positive_definite": true, + "tf.linalg.LinearOperatorCirculant.assert_self_adjoint": true, + "tf.linalg.LinearOperatorCirculant.batch_shape": true, + "tf.linalg.LinearOperatorCirculant.batch_shape_tensor": true, + "tf.linalg.LinearOperatorCirculant.block_depth": true, + "tf.linalg.LinearOperatorCirculant.block_shape": true, + "tf.linalg.LinearOperatorCirculant.block_shape_tensor": true, + "tf.linalg.LinearOperatorCirculant.cholesky": true, + "tf.linalg.LinearOperatorCirculant.cond": true, + "tf.linalg.LinearOperatorCirculant.convolution_kernel": true, + "tf.linalg.LinearOperatorCirculant.determinant": true, + "tf.linalg.LinearOperatorCirculant.diag_part": true, + "tf.linalg.LinearOperatorCirculant.domain_dimension": true, + "tf.linalg.LinearOperatorCirculant.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorCirculant.dtype": true, + "tf.linalg.LinearOperatorCirculant.eigvals": true, + "tf.linalg.LinearOperatorCirculant.graph_parents": true, + "tf.linalg.LinearOperatorCirculant.inverse": true, + "tf.linalg.LinearOperatorCirculant.is_non_singular": true, + "tf.linalg.LinearOperatorCirculant.is_positive_definite": true, + "tf.linalg.LinearOperatorCirculant.is_self_adjoint": true, + "tf.linalg.LinearOperatorCirculant.is_square": true, + "tf.linalg.LinearOperatorCirculant.log_abs_determinant": true, + "tf.linalg.LinearOperatorCirculant.matmul": true, + "tf.linalg.LinearOperatorCirculant.matvec": true, + "tf.linalg.LinearOperatorCirculant.name": true, + "tf.linalg.LinearOperatorCirculant.name_scope": true, + "tf.linalg.LinearOperatorCirculant.range_dimension": true, + "tf.linalg.LinearOperatorCirculant.range_dimension_tensor": true, + "tf.linalg.LinearOperatorCirculant.shape": true, + "tf.linalg.LinearOperatorCirculant.shape_tensor": true, + "tf.linalg.LinearOperatorCirculant.solve": true, + "tf.linalg.LinearOperatorCirculant.solvevec": true, + "tf.linalg.LinearOperatorCirculant.spectrum": true, + "tf.linalg.LinearOperatorCirculant.submodules": true, + "tf.linalg.LinearOperatorCirculant.tensor_rank": true, + "tf.linalg.LinearOperatorCirculant.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorCirculant.to_dense": true, + "tf.linalg.LinearOperatorCirculant.trace": true, + "tf.linalg.LinearOperatorCirculant.trainable_variables": true, + "tf.linalg.LinearOperatorCirculant.variables": true, + 
"tf.linalg.LinearOperatorCirculant.with_name_scope": true, + "tf.linalg.LinearOperatorCirculant2D": false, + "tf.linalg.LinearOperatorCirculant2D.H": true, + "tf.linalg.LinearOperatorCirculant2D.__eq__": true, + "tf.linalg.LinearOperatorCirculant2D.__ge__": true, + "tf.linalg.LinearOperatorCirculant2D.__gt__": true, + "tf.linalg.LinearOperatorCirculant2D.__init__": true, + "tf.linalg.LinearOperatorCirculant2D.__le__": true, + "tf.linalg.LinearOperatorCirculant2D.__lt__": true, + "tf.linalg.LinearOperatorCirculant2D.__matmul__": true, + "tf.linalg.LinearOperatorCirculant2D.__ne__": true, + "tf.linalg.LinearOperatorCirculant2D.__new__": true, + "tf.linalg.LinearOperatorCirculant2D.add_to_tensor": true, + "tf.linalg.LinearOperatorCirculant2D.adjoint": true, + "tf.linalg.LinearOperatorCirculant2D.assert_hermitian_spectrum": true, + "tf.linalg.LinearOperatorCirculant2D.assert_non_singular": true, + "tf.linalg.LinearOperatorCirculant2D.assert_positive_definite": true, + "tf.linalg.LinearOperatorCirculant2D.assert_self_adjoint": true, + "tf.linalg.LinearOperatorCirculant2D.batch_shape": true, + "tf.linalg.LinearOperatorCirculant2D.batch_shape_tensor": true, + "tf.linalg.LinearOperatorCirculant2D.block_depth": true, + "tf.linalg.LinearOperatorCirculant2D.block_shape": true, + "tf.linalg.LinearOperatorCirculant2D.block_shape_tensor": true, + "tf.linalg.LinearOperatorCirculant2D.cholesky": true, + "tf.linalg.LinearOperatorCirculant2D.cond": true, + "tf.linalg.LinearOperatorCirculant2D.convolution_kernel": true, + "tf.linalg.LinearOperatorCirculant2D.determinant": true, + "tf.linalg.LinearOperatorCirculant2D.diag_part": true, + "tf.linalg.LinearOperatorCirculant2D.domain_dimension": true, + "tf.linalg.LinearOperatorCirculant2D.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorCirculant2D.dtype": true, + "tf.linalg.LinearOperatorCirculant2D.eigvals": true, + "tf.linalg.LinearOperatorCirculant2D.graph_parents": true, + "tf.linalg.LinearOperatorCirculant2D.inverse": true, + "tf.linalg.LinearOperatorCirculant2D.is_non_singular": true, + "tf.linalg.LinearOperatorCirculant2D.is_positive_definite": true, + "tf.linalg.LinearOperatorCirculant2D.is_self_adjoint": true, + "tf.linalg.LinearOperatorCirculant2D.is_square": true, + "tf.linalg.LinearOperatorCirculant2D.log_abs_determinant": true, + "tf.linalg.LinearOperatorCirculant2D.matmul": true, + "tf.linalg.LinearOperatorCirculant2D.matvec": true, + "tf.linalg.LinearOperatorCirculant2D.name": true, + "tf.linalg.LinearOperatorCirculant2D.name_scope": true, + "tf.linalg.LinearOperatorCirculant2D.range_dimension": true, + "tf.linalg.LinearOperatorCirculant2D.range_dimension_tensor": true, + "tf.linalg.LinearOperatorCirculant2D.shape": true, + "tf.linalg.LinearOperatorCirculant2D.shape_tensor": true, + "tf.linalg.LinearOperatorCirculant2D.solve": true, + "tf.linalg.LinearOperatorCirculant2D.solvevec": true, + "tf.linalg.LinearOperatorCirculant2D.spectrum": true, + "tf.linalg.LinearOperatorCirculant2D.submodules": true, + "tf.linalg.LinearOperatorCirculant2D.tensor_rank": true, + "tf.linalg.LinearOperatorCirculant2D.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorCirculant2D.to_dense": true, + "tf.linalg.LinearOperatorCirculant2D.trace": true, + "tf.linalg.LinearOperatorCirculant2D.trainable_variables": true, + "tf.linalg.LinearOperatorCirculant2D.variables": true, + "tf.linalg.LinearOperatorCirculant2D.with_name_scope": true, + "tf.linalg.LinearOperatorCirculant3D": false, + "tf.linalg.LinearOperatorCirculant3D.H": true, + 
"tf.linalg.LinearOperatorCirculant3D.__eq__": true, + "tf.linalg.LinearOperatorCirculant3D.__ge__": true, + "tf.linalg.LinearOperatorCirculant3D.__gt__": true, + "tf.linalg.LinearOperatorCirculant3D.__init__": true, + "tf.linalg.LinearOperatorCirculant3D.__le__": true, + "tf.linalg.LinearOperatorCirculant3D.__lt__": true, + "tf.linalg.LinearOperatorCirculant3D.__matmul__": true, + "tf.linalg.LinearOperatorCirculant3D.__ne__": true, + "tf.linalg.LinearOperatorCirculant3D.__new__": true, + "tf.linalg.LinearOperatorCirculant3D.add_to_tensor": true, + "tf.linalg.LinearOperatorCirculant3D.adjoint": true, + "tf.linalg.LinearOperatorCirculant3D.assert_hermitian_spectrum": true, + "tf.linalg.LinearOperatorCirculant3D.assert_non_singular": true, + "tf.linalg.LinearOperatorCirculant3D.assert_positive_definite": true, + "tf.linalg.LinearOperatorCirculant3D.assert_self_adjoint": true, + "tf.linalg.LinearOperatorCirculant3D.batch_shape": true, + "tf.linalg.LinearOperatorCirculant3D.batch_shape_tensor": true, + "tf.linalg.LinearOperatorCirculant3D.block_depth": true, + "tf.linalg.LinearOperatorCirculant3D.block_shape": true, + "tf.linalg.LinearOperatorCirculant3D.block_shape_tensor": true, + "tf.linalg.LinearOperatorCirculant3D.cholesky": true, + "tf.linalg.LinearOperatorCirculant3D.cond": true, + "tf.linalg.LinearOperatorCirculant3D.convolution_kernel": true, + "tf.linalg.LinearOperatorCirculant3D.determinant": true, + "tf.linalg.LinearOperatorCirculant3D.diag_part": true, + "tf.linalg.LinearOperatorCirculant3D.domain_dimension": true, + "tf.linalg.LinearOperatorCirculant3D.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorCirculant3D.dtype": true, + "tf.linalg.LinearOperatorCirculant3D.eigvals": true, + "tf.linalg.LinearOperatorCirculant3D.graph_parents": true, + "tf.linalg.LinearOperatorCirculant3D.inverse": true, + "tf.linalg.LinearOperatorCirculant3D.is_non_singular": true, + "tf.linalg.LinearOperatorCirculant3D.is_positive_definite": true, + "tf.linalg.LinearOperatorCirculant3D.is_self_adjoint": true, + "tf.linalg.LinearOperatorCirculant3D.is_square": true, + "tf.linalg.LinearOperatorCirculant3D.log_abs_determinant": true, + "tf.linalg.LinearOperatorCirculant3D.matmul": true, + "tf.linalg.LinearOperatorCirculant3D.matvec": true, + "tf.linalg.LinearOperatorCirculant3D.name": true, + "tf.linalg.LinearOperatorCirculant3D.name_scope": true, + "tf.linalg.LinearOperatorCirculant3D.range_dimension": true, + "tf.linalg.LinearOperatorCirculant3D.range_dimension_tensor": true, + "tf.linalg.LinearOperatorCirculant3D.shape": true, + "tf.linalg.LinearOperatorCirculant3D.shape_tensor": true, + "tf.linalg.LinearOperatorCirculant3D.solve": true, + "tf.linalg.LinearOperatorCirculant3D.solvevec": true, + "tf.linalg.LinearOperatorCirculant3D.spectrum": true, + "tf.linalg.LinearOperatorCirculant3D.submodules": true, + "tf.linalg.LinearOperatorCirculant3D.tensor_rank": true, + "tf.linalg.LinearOperatorCirculant3D.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorCirculant3D.to_dense": true, + "tf.linalg.LinearOperatorCirculant3D.trace": true, + "tf.linalg.LinearOperatorCirculant3D.trainable_variables": true, + "tf.linalg.LinearOperatorCirculant3D.variables": true, + "tf.linalg.LinearOperatorCirculant3D.with_name_scope": true, + "tf.linalg.LinearOperatorComposition": false, + "tf.linalg.LinearOperatorComposition.H": true, + "tf.linalg.LinearOperatorComposition.__eq__": true, + "tf.linalg.LinearOperatorComposition.__ge__": true, + "tf.linalg.LinearOperatorComposition.__gt__": true, + 
"tf.linalg.LinearOperatorComposition.__init__": true, + "tf.linalg.LinearOperatorComposition.__le__": true, + "tf.linalg.LinearOperatorComposition.__lt__": true, + "tf.linalg.LinearOperatorComposition.__matmul__": true, + "tf.linalg.LinearOperatorComposition.__ne__": true, + "tf.linalg.LinearOperatorComposition.__new__": true, + "tf.linalg.LinearOperatorComposition.add_to_tensor": true, + "tf.linalg.LinearOperatorComposition.adjoint": true, + "tf.linalg.LinearOperatorComposition.assert_non_singular": true, + "tf.linalg.LinearOperatorComposition.assert_positive_definite": true, + "tf.linalg.LinearOperatorComposition.assert_self_adjoint": true, + "tf.linalg.LinearOperatorComposition.batch_shape": true, + "tf.linalg.LinearOperatorComposition.batch_shape_tensor": true, + "tf.linalg.LinearOperatorComposition.cholesky": true, + "tf.linalg.LinearOperatorComposition.cond": true, + "tf.linalg.LinearOperatorComposition.determinant": true, + "tf.linalg.LinearOperatorComposition.diag_part": true, + "tf.linalg.LinearOperatorComposition.domain_dimension": true, + "tf.linalg.LinearOperatorComposition.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorComposition.dtype": true, + "tf.linalg.LinearOperatorComposition.eigvals": true, + "tf.linalg.LinearOperatorComposition.graph_parents": true, + "tf.linalg.LinearOperatorComposition.inverse": true, + "tf.linalg.LinearOperatorComposition.is_non_singular": true, + "tf.linalg.LinearOperatorComposition.is_positive_definite": true, + "tf.linalg.LinearOperatorComposition.is_self_adjoint": true, + "tf.linalg.LinearOperatorComposition.is_square": true, + "tf.linalg.LinearOperatorComposition.log_abs_determinant": true, + "tf.linalg.LinearOperatorComposition.matmul": true, + "tf.linalg.LinearOperatorComposition.matvec": true, + "tf.linalg.LinearOperatorComposition.name": true, + "tf.linalg.LinearOperatorComposition.name_scope": true, + "tf.linalg.LinearOperatorComposition.operators": true, + "tf.linalg.LinearOperatorComposition.range_dimension": true, + "tf.linalg.LinearOperatorComposition.range_dimension_tensor": true, + "tf.linalg.LinearOperatorComposition.shape": true, + "tf.linalg.LinearOperatorComposition.shape_tensor": true, + "tf.linalg.LinearOperatorComposition.solve": true, + "tf.linalg.LinearOperatorComposition.solvevec": true, + "tf.linalg.LinearOperatorComposition.submodules": true, + "tf.linalg.LinearOperatorComposition.tensor_rank": true, + "tf.linalg.LinearOperatorComposition.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorComposition.to_dense": true, + "tf.linalg.LinearOperatorComposition.trace": true, + "tf.linalg.LinearOperatorComposition.trainable_variables": true, + "tf.linalg.LinearOperatorComposition.variables": true, + "tf.linalg.LinearOperatorComposition.with_name_scope": true, + "tf.linalg.LinearOperatorDiag": false, + "tf.linalg.LinearOperatorDiag.H": true, + "tf.linalg.LinearOperatorDiag.__eq__": true, + "tf.linalg.LinearOperatorDiag.__ge__": true, + "tf.linalg.LinearOperatorDiag.__gt__": true, + "tf.linalg.LinearOperatorDiag.__init__": true, + "tf.linalg.LinearOperatorDiag.__le__": true, + "tf.linalg.LinearOperatorDiag.__lt__": true, + "tf.linalg.LinearOperatorDiag.__matmul__": true, + "tf.linalg.LinearOperatorDiag.__ne__": true, + "tf.linalg.LinearOperatorDiag.__new__": true, + "tf.linalg.LinearOperatorDiag.add_to_tensor": true, + "tf.linalg.LinearOperatorDiag.adjoint": true, + "tf.linalg.LinearOperatorDiag.assert_non_singular": true, + "tf.linalg.LinearOperatorDiag.assert_positive_definite": true, + 
"tf.linalg.LinearOperatorDiag.assert_self_adjoint": true, + "tf.linalg.LinearOperatorDiag.batch_shape": true, + "tf.linalg.LinearOperatorDiag.batch_shape_tensor": true, + "tf.linalg.LinearOperatorDiag.cholesky": true, + "tf.linalg.LinearOperatorDiag.cond": true, + "tf.linalg.LinearOperatorDiag.determinant": true, + "tf.linalg.LinearOperatorDiag.diag": true, + "tf.linalg.LinearOperatorDiag.diag_part": true, + "tf.linalg.LinearOperatorDiag.domain_dimension": true, + "tf.linalg.LinearOperatorDiag.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorDiag.dtype": true, + "tf.linalg.LinearOperatorDiag.eigvals": true, + "tf.linalg.LinearOperatorDiag.graph_parents": true, + "tf.linalg.LinearOperatorDiag.inverse": true, + "tf.linalg.LinearOperatorDiag.is_non_singular": true, + "tf.linalg.LinearOperatorDiag.is_positive_definite": true, + "tf.linalg.LinearOperatorDiag.is_self_adjoint": true, + "tf.linalg.LinearOperatorDiag.is_square": true, + "tf.linalg.LinearOperatorDiag.log_abs_determinant": true, + "tf.linalg.LinearOperatorDiag.matmul": true, + "tf.linalg.LinearOperatorDiag.matvec": true, + "tf.linalg.LinearOperatorDiag.name": true, + "tf.linalg.LinearOperatorDiag.name_scope": true, + "tf.linalg.LinearOperatorDiag.range_dimension": true, + "tf.linalg.LinearOperatorDiag.range_dimension_tensor": true, + "tf.linalg.LinearOperatorDiag.shape": true, + "tf.linalg.LinearOperatorDiag.shape_tensor": true, + "tf.linalg.LinearOperatorDiag.solve": true, + "tf.linalg.LinearOperatorDiag.solvevec": true, + "tf.linalg.LinearOperatorDiag.submodules": true, + "tf.linalg.LinearOperatorDiag.tensor_rank": true, + "tf.linalg.LinearOperatorDiag.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorDiag.to_dense": true, + "tf.linalg.LinearOperatorDiag.trace": true, + "tf.linalg.LinearOperatorDiag.trainable_variables": true, + "tf.linalg.LinearOperatorDiag.variables": true, + "tf.linalg.LinearOperatorDiag.with_name_scope": true, + "tf.linalg.LinearOperatorFullMatrix": false, + "tf.linalg.LinearOperatorFullMatrix.H": true, + "tf.linalg.LinearOperatorFullMatrix.__eq__": true, + "tf.linalg.LinearOperatorFullMatrix.__ge__": true, + "tf.linalg.LinearOperatorFullMatrix.__gt__": true, + "tf.linalg.LinearOperatorFullMatrix.__init__": true, + "tf.linalg.LinearOperatorFullMatrix.__le__": true, + "tf.linalg.LinearOperatorFullMatrix.__lt__": true, + "tf.linalg.LinearOperatorFullMatrix.__matmul__": true, + "tf.linalg.LinearOperatorFullMatrix.__ne__": true, + "tf.linalg.LinearOperatorFullMatrix.__new__": true, + "tf.linalg.LinearOperatorFullMatrix.add_to_tensor": true, + "tf.linalg.LinearOperatorFullMatrix.adjoint": true, + "tf.linalg.LinearOperatorFullMatrix.assert_non_singular": true, + "tf.linalg.LinearOperatorFullMatrix.assert_positive_definite": true, + "tf.linalg.LinearOperatorFullMatrix.assert_self_adjoint": true, + "tf.linalg.LinearOperatorFullMatrix.batch_shape": true, + "tf.linalg.LinearOperatorFullMatrix.batch_shape_tensor": true, + "tf.linalg.LinearOperatorFullMatrix.cholesky": true, + "tf.linalg.LinearOperatorFullMatrix.cond": true, + "tf.linalg.LinearOperatorFullMatrix.determinant": true, + "tf.linalg.LinearOperatorFullMatrix.diag_part": true, + "tf.linalg.LinearOperatorFullMatrix.domain_dimension": true, + "tf.linalg.LinearOperatorFullMatrix.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorFullMatrix.dtype": true, + "tf.linalg.LinearOperatorFullMatrix.eigvals": true, + "tf.linalg.LinearOperatorFullMatrix.graph_parents": true, + "tf.linalg.LinearOperatorFullMatrix.inverse": true, + 
"tf.linalg.LinearOperatorFullMatrix.is_non_singular": true, + "tf.linalg.LinearOperatorFullMatrix.is_positive_definite": true, + "tf.linalg.LinearOperatorFullMatrix.is_self_adjoint": true, + "tf.linalg.LinearOperatorFullMatrix.is_square": true, + "tf.linalg.LinearOperatorFullMatrix.log_abs_determinant": true, + "tf.linalg.LinearOperatorFullMatrix.matmul": true, + "tf.linalg.LinearOperatorFullMatrix.matvec": true, + "tf.linalg.LinearOperatorFullMatrix.name": true, + "tf.linalg.LinearOperatorFullMatrix.name_scope": true, + "tf.linalg.LinearOperatorFullMatrix.range_dimension": true, + "tf.linalg.LinearOperatorFullMatrix.range_dimension_tensor": true, + "tf.linalg.LinearOperatorFullMatrix.shape": true, + "tf.linalg.LinearOperatorFullMatrix.shape_tensor": true, + "tf.linalg.LinearOperatorFullMatrix.solve": true, + "tf.linalg.LinearOperatorFullMatrix.solvevec": true, + "tf.linalg.LinearOperatorFullMatrix.submodules": true, + "tf.linalg.LinearOperatorFullMatrix.tensor_rank": true, + "tf.linalg.LinearOperatorFullMatrix.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorFullMatrix.to_dense": true, + "tf.linalg.LinearOperatorFullMatrix.trace": true, + "tf.linalg.LinearOperatorFullMatrix.trainable_variables": true, + "tf.linalg.LinearOperatorFullMatrix.variables": true, + "tf.linalg.LinearOperatorFullMatrix.with_name_scope": true, + "tf.linalg.LinearOperatorHouseholder": false, + "tf.linalg.LinearOperatorHouseholder.H": true, + "tf.linalg.LinearOperatorHouseholder.__eq__": true, + "tf.linalg.LinearOperatorHouseholder.__ge__": true, + "tf.linalg.LinearOperatorHouseholder.__gt__": true, + "tf.linalg.LinearOperatorHouseholder.__init__": true, + "tf.linalg.LinearOperatorHouseholder.__le__": true, + "tf.linalg.LinearOperatorHouseholder.__lt__": true, + "tf.linalg.LinearOperatorHouseholder.__matmul__": true, + "tf.linalg.LinearOperatorHouseholder.__ne__": true, + "tf.linalg.LinearOperatorHouseholder.__new__": true, + "tf.linalg.LinearOperatorHouseholder.add_to_tensor": true, + "tf.linalg.LinearOperatorHouseholder.adjoint": true, + "tf.linalg.LinearOperatorHouseholder.assert_non_singular": true, + "tf.linalg.LinearOperatorHouseholder.assert_positive_definite": true, + "tf.linalg.LinearOperatorHouseholder.assert_self_adjoint": true, + "tf.linalg.LinearOperatorHouseholder.batch_shape": true, + "tf.linalg.LinearOperatorHouseholder.batch_shape_tensor": true, + "tf.linalg.LinearOperatorHouseholder.cholesky": true, + "tf.linalg.LinearOperatorHouseholder.cond": true, + "tf.linalg.LinearOperatorHouseholder.determinant": true, + "tf.linalg.LinearOperatorHouseholder.diag_part": true, + "tf.linalg.LinearOperatorHouseholder.domain_dimension": true, + "tf.linalg.LinearOperatorHouseholder.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorHouseholder.dtype": true, + "tf.linalg.LinearOperatorHouseholder.eigvals": true, + "tf.linalg.LinearOperatorHouseholder.graph_parents": true, + "tf.linalg.LinearOperatorHouseholder.inverse": true, + "tf.linalg.LinearOperatorHouseholder.is_non_singular": true, + "tf.linalg.LinearOperatorHouseholder.is_positive_definite": true, + "tf.linalg.LinearOperatorHouseholder.is_self_adjoint": true, + "tf.linalg.LinearOperatorHouseholder.is_square": true, + "tf.linalg.LinearOperatorHouseholder.log_abs_determinant": true, + "tf.linalg.LinearOperatorHouseholder.matmul": true, + "tf.linalg.LinearOperatorHouseholder.matvec": true, + "tf.linalg.LinearOperatorHouseholder.name": true, + "tf.linalg.LinearOperatorHouseholder.name_scope": true, + 
"tf.linalg.LinearOperatorHouseholder.range_dimension": true, + "tf.linalg.LinearOperatorHouseholder.range_dimension_tensor": true, + "tf.linalg.LinearOperatorHouseholder.reflection_axis": true, + "tf.linalg.LinearOperatorHouseholder.shape": true, + "tf.linalg.LinearOperatorHouseholder.shape_tensor": true, + "tf.linalg.LinearOperatorHouseholder.solve": true, + "tf.linalg.LinearOperatorHouseholder.solvevec": true, + "tf.linalg.LinearOperatorHouseholder.submodules": true, + "tf.linalg.LinearOperatorHouseholder.tensor_rank": true, + "tf.linalg.LinearOperatorHouseholder.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorHouseholder.to_dense": true, + "tf.linalg.LinearOperatorHouseholder.trace": true, + "tf.linalg.LinearOperatorHouseholder.trainable_variables": true, + "tf.linalg.LinearOperatorHouseholder.variables": true, + "tf.linalg.LinearOperatorHouseholder.with_name_scope": true, + "tf.linalg.LinearOperatorIdentity": false, + "tf.linalg.LinearOperatorIdentity.H": true, + "tf.linalg.LinearOperatorIdentity.__eq__": true, + "tf.linalg.LinearOperatorIdentity.__ge__": true, + "tf.linalg.LinearOperatorIdentity.__gt__": true, + "tf.linalg.LinearOperatorIdentity.__init__": true, + "tf.linalg.LinearOperatorIdentity.__le__": true, + "tf.linalg.LinearOperatorIdentity.__lt__": true, + "tf.linalg.LinearOperatorIdentity.__matmul__": true, + "tf.linalg.LinearOperatorIdentity.__ne__": true, + "tf.linalg.LinearOperatorIdentity.__new__": true, + "tf.linalg.LinearOperatorIdentity.add_to_tensor": true, + "tf.linalg.LinearOperatorIdentity.adjoint": true, + "tf.linalg.LinearOperatorIdentity.assert_non_singular": true, + "tf.linalg.LinearOperatorIdentity.assert_positive_definite": true, + "tf.linalg.LinearOperatorIdentity.assert_self_adjoint": true, + "tf.linalg.LinearOperatorIdentity.batch_shape": true, + "tf.linalg.LinearOperatorIdentity.batch_shape_tensor": true, + "tf.linalg.LinearOperatorIdentity.cholesky": true, + "tf.linalg.LinearOperatorIdentity.cond": true, + "tf.linalg.LinearOperatorIdentity.determinant": true, + "tf.linalg.LinearOperatorIdentity.diag_part": true, + "tf.linalg.LinearOperatorIdentity.domain_dimension": true, + "tf.linalg.LinearOperatorIdentity.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorIdentity.dtype": true, + "tf.linalg.LinearOperatorIdentity.eigvals": true, + "tf.linalg.LinearOperatorIdentity.graph_parents": true, + "tf.linalg.LinearOperatorIdentity.inverse": true, + "tf.linalg.LinearOperatorIdentity.is_non_singular": true, + "tf.linalg.LinearOperatorIdentity.is_positive_definite": true, + "tf.linalg.LinearOperatorIdentity.is_self_adjoint": true, + "tf.linalg.LinearOperatorIdentity.is_square": true, + "tf.linalg.LinearOperatorIdentity.log_abs_determinant": true, + "tf.linalg.LinearOperatorIdentity.matmul": true, + "tf.linalg.LinearOperatorIdentity.matvec": true, + "tf.linalg.LinearOperatorIdentity.name": true, + "tf.linalg.LinearOperatorIdentity.name_scope": true, + "tf.linalg.LinearOperatorIdentity.range_dimension": true, + "tf.linalg.LinearOperatorIdentity.range_dimension_tensor": true, + "tf.linalg.LinearOperatorIdentity.shape": true, + "tf.linalg.LinearOperatorIdentity.shape_tensor": true, + "tf.linalg.LinearOperatorIdentity.solve": true, + "tf.linalg.LinearOperatorIdentity.solvevec": true, + "tf.linalg.LinearOperatorIdentity.submodules": true, + "tf.linalg.LinearOperatorIdentity.tensor_rank": true, + "tf.linalg.LinearOperatorIdentity.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorIdentity.to_dense": true, + "tf.linalg.LinearOperatorIdentity.trace": true, 
+ "tf.linalg.LinearOperatorIdentity.trainable_variables": true, + "tf.linalg.LinearOperatorIdentity.variables": true, + "tf.linalg.LinearOperatorIdentity.with_name_scope": true, + "tf.linalg.LinearOperatorInversion": false, + "tf.linalg.LinearOperatorInversion.H": true, + "tf.linalg.LinearOperatorInversion.__eq__": true, + "tf.linalg.LinearOperatorInversion.__ge__": true, + "tf.linalg.LinearOperatorInversion.__gt__": true, + "tf.linalg.LinearOperatorInversion.__init__": true, + "tf.linalg.LinearOperatorInversion.__le__": true, + "tf.linalg.LinearOperatorInversion.__lt__": true, + "tf.linalg.LinearOperatorInversion.__matmul__": true, + "tf.linalg.LinearOperatorInversion.__ne__": true, + "tf.linalg.LinearOperatorInversion.__new__": true, + "tf.linalg.LinearOperatorInversion.add_to_tensor": true, + "tf.linalg.LinearOperatorInversion.adjoint": true, + "tf.linalg.LinearOperatorInversion.assert_non_singular": true, + "tf.linalg.LinearOperatorInversion.assert_positive_definite": true, + "tf.linalg.LinearOperatorInversion.assert_self_adjoint": true, + "tf.linalg.LinearOperatorInversion.batch_shape": true, + "tf.linalg.LinearOperatorInversion.batch_shape_tensor": true, + "tf.linalg.LinearOperatorInversion.cholesky": true, + "tf.linalg.LinearOperatorInversion.cond": true, + "tf.linalg.LinearOperatorInversion.determinant": true, + "tf.linalg.LinearOperatorInversion.diag_part": true, + "tf.linalg.LinearOperatorInversion.domain_dimension": true, + "tf.linalg.LinearOperatorInversion.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorInversion.dtype": true, + "tf.linalg.LinearOperatorInversion.eigvals": true, + "tf.linalg.LinearOperatorInversion.graph_parents": true, + "tf.linalg.LinearOperatorInversion.inverse": true, + "tf.linalg.LinearOperatorInversion.is_non_singular": true, + "tf.linalg.LinearOperatorInversion.is_positive_definite": true, + "tf.linalg.LinearOperatorInversion.is_self_adjoint": true, + "tf.linalg.LinearOperatorInversion.is_square": true, + "tf.linalg.LinearOperatorInversion.log_abs_determinant": true, + "tf.linalg.LinearOperatorInversion.matmul": true, + "tf.linalg.LinearOperatorInversion.matvec": true, + "tf.linalg.LinearOperatorInversion.name": true, + "tf.linalg.LinearOperatorInversion.name_scope": true, + "tf.linalg.LinearOperatorInversion.operator": true, + "tf.linalg.LinearOperatorInversion.range_dimension": true, + "tf.linalg.LinearOperatorInversion.range_dimension_tensor": true, + "tf.linalg.LinearOperatorInversion.shape": true, + "tf.linalg.LinearOperatorInversion.shape_tensor": true, + "tf.linalg.LinearOperatorInversion.solve": true, + "tf.linalg.LinearOperatorInversion.solvevec": true, + "tf.linalg.LinearOperatorInversion.submodules": true, + "tf.linalg.LinearOperatorInversion.tensor_rank": true, + "tf.linalg.LinearOperatorInversion.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorInversion.to_dense": true, + "tf.linalg.LinearOperatorInversion.trace": true, + "tf.linalg.LinearOperatorInversion.trainable_variables": true, + "tf.linalg.LinearOperatorInversion.variables": true, + "tf.linalg.LinearOperatorInversion.with_name_scope": true, + "tf.linalg.LinearOperatorKronecker": false, + "tf.linalg.LinearOperatorKronecker.H": true, + "tf.linalg.LinearOperatorKronecker.__eq__": true, + "tf.linalg.LinearOperatorKronecker.__ge__": true, + "tf.linalg.LinearOperatorKronecker.__gt__": true, + "tf.linalg.LinearOperatorKronecker.__init__": true, + "tf.linalg.LinearOperatorKronecker.__le__": true, + "tf.linalg.LinearOperatorKronecker.__lt__": true, + 
"tf.linalg.LinearOperatorKronecker.__matmul__": true, + "tf.linalg.LinearOperatorKronecker.__ne__": true, + "tf.linalg.LinearOperatorKronecker.__new__": true, + "tf.linalg.LinearOperatorKronecker.add_to_tensor": true, + "tf.linalg.LinearOperatorKronecker.adjoint": true, + "tf.linalg.LinearOperatorKronecker.assert_non_singular": true, + "tf.linalg.LinearOperatorKronecker.assert_positive_definite": true, + "tf.linalg.LinearOperatorKronecker.assert_self_adjoint": true, + "tf.linalg.LinearOperatorKronecker.batch_shape": true, + "tf.linalg.LinearOperatorKronecker.batch_shape_tensor": true, + "tf.linalg.LinearOperatorKronecker.cholesky": true, + "tf.linalg.LinearOperatorKronecker.cond": true, + "tf.linalg.LinearOperatorKronecker.determinant": true, + "tf.linalg.LinearOperatorKronecker.diag_part": true, + "tf.linalg.LinearOperatorKronecker.domain_dimension": true, + "tf.linalg.LinearOperatorKronecker.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorKronecker.dtype": true, + "tf.linalg.LinearOperatorKronecker.eigvals": true, + "tf.linalg.LinearOperatorKronecker.graph_parents": true, + "tf.linalg.LinearOperatorKronecker.inverse": true, + "tf.linalg.LinearOperatorKronecker.is_non_singular": true, + "tf.linalg.LinearOperatorKronecker.is_positive_definite": true, + "tf.linalg.LinearOperatorKronecker.is_self_adjoint": true, + "tf.linalg.LinearOperatorKronecker.is_square": true, + "tf.linalg.LinearOperatorKronecker.log_abs_determinant": true, + "tf.linalg.LinearOperatorKronecker.matmul": true, + "tf.linalg.LinearOperatorKronecker.matvec": true, + "tf.linalg.LinearOperatorKronecker.name": true, + "tf.linalg.LinearOperatorKronecker.name_scope": true, + "tf.linalg.LinearOperatorKronecker.operators": true, + "tf.linalg.LinearOperatorKronecker.range_dimension": true, + "tf.linalg.LinearOperatorKronecker.range_dimension_tensor": true, + "tf.linalg.LinearOperatorKronecker.shape": true, + "tf.linalg.LinearOperatorKronecker.shape_tensor": true, + "tf.linalg.LinearOperatorKronecker.solve": true, + "tf.linalg.LinearOperatorKronecker.solvevec": true, + "tf.linalg.LinearOperatorKronecker.submodules": true, + "tf.linalg.LinearOperatorKronecker.tensor_rank": true, + "tf.linalg.LinearOperatorKronecker.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorKronecker.to_dense": true, + "tf.linalg.LinearOperatorKronecker.trace": true, + "tf.linalg.LinearOperatorKronecker.trainable_variables": true, + "tf.linalg.LinearOperatorKronecker.variables": true, + "tf.linalg.LinearOperatorKronecker.with_name_scope": true, + "tf.linalg.LinearOperatorLowRankUpdate": false, + "tf.linalg.LinearOperatorLowRankUpdate.H": true, + "tf.linalg.LinearOperatorLowRankUpdate.__eq__": true, + "tf.linalg.LinearOperatorLowRankUpdate.__ge__": true, + "tf.linalg.LinearOperatorLowRankUpdate.__gt__": true, + "tf.linalg.LinearOperatorLowRankUpdate.__init__": true, + "tf.linalg.LinearOperatorLowRankUpdate.__le__": true, + "tf.linalg.LinearOperatorLowRankUpdate.__lt__": true, + "tf.linalg.LinearOperatorLowRankUpdate.__matmul__": true, + "tf.linalg.LinearOperatorLowRankUpdate.__ne__": true, + "tf.linalg.LinearOperatorLowRankUpdate.__new__": true, + "tf.linalg.LinearOperatorLowRankUpdate.add_to_tensor": true, + "tf.linalg.LinearOperatorLowRankUpdate.adjoint": true, + "tf.linalg.LinearOperatorLowRankUpdate.assert_non_singular": true, + "tf.linalg.LinearOperatorLowRankUpdate.assert_positive_definite": true, + "tf.linalg.LinearOperatorLowRankUpdate.assert_self_adjoint": true, + "tf.linalg.LinearOperatorLowRankUpdate.base_operator": true, + 
"tf.linalg.LinearOperatorLowRankUpdate.batch_shape": true, + "tf.linalg.LinearOperatorLowRankUpdate.batch_shape_tensor": true, + "tf.linalg.LinearOperatorLowRankUpdate.cholesky": true, + "tf.linalg.LinearOperatorLowRankUpdate.cond": true, + "tf.linalg.LinearOperatorLowRankUpdate.determinant": true, + "tf.linalg.LinearOperatorLowRankUpdate.diag_operator": true, + "tf.linalg.LinearOperatorLowRankUpdate.diag_part": true, + "tf.linalg.LinearOperatorLowRankUpdate.diag_update": true, + "tf.linalg.LinearOperatorLowRankUpdate.domain_dimension": true, + "tf.linalg.LinearOperatorLowRankUpdate.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorLowRankUpdate.dtype": true, + "tf.linalg.LinearOperatorLowRankUpdate.eigvals": true, + "tf.linalg.LinearOperatorLowRankUpdate.graph_parents": true, + "tf.linalg.LinearOperatorLowRankUpdate.inverse": true, + "tf.linalg.LinearOperatorLowRankUpdate.is_diag_update_positive": true, + "tf.linalg.LinearOperatorLowRankUpdate.is_non_singular": true, + "tf.linalg.LinearOperatorLowRankUpdate.is_positive_definite": true, + "tf.linalg.LinearOperatorLowRankUpdate.is_self_adjoint": true, + "tf.linalg.LinearOperatorLowRankUpdate.is_square": true, + "tf.linalg.LinearOperatorLowRankUpdate.log_abs_determinant": true, + "tf.linalg.LinearOperatorLowRankUpdate.matmul": true, + "tf.linalg.LinearOperatorLowRankUpdate.matvec": true, + "tf.linalg.LinearOperatorLowRankUpdate.name": true, + "tf.linalg.LinearOperatorLowRankUpdate.name_scope": true, + "tf.linalg.LinearOperatorLowRankUpdate.range_dimension": true, + "tf.linalg.LinearOperatorLowRankUpdate.range_dimension_tensor": true, + "tf.linalg.LinearOperatorLowRankUpdate.shape": true, + "tf.linalg.LinearOperatorLowRankUpdate.shape_tensor": true, + "tf.linalg.LinearOperatorLowRankUpdate.solve": true, + "tf.linalg.LinearOperatorLowRankUpdate.solvevec": true, + "tf.linalg.LinearOperatorLowRankUpdate.submodules": true, + "tf.linalg.LinearOperatorLowRankUpdate.tensor_rank": true, + "tf.linalg.LinearOperatorLowRankUpdate.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorLowRankUpdate.to_dense": true, + "tf.linalg.LinearOperatorLowRankUpdate.trace": true, + "tf.linalg.LinearOperatorLowRankUpdate.trainable_variables": true, + "tf.linalg.LinearOperatorLowRankUpdate.u": true, + "tf.linalg.LinearOperatorLowRankUpdate.v": true, + "tf.linalg.LinearOperatorLowRankUpdate.variables": true, + "tf.linalg.LinearOperatorLowRankUpdate.with_name_scope": true, + "tf.linalg.LinearOperatorLowerTriangular": false, + "tf.linalg.LinearOperatorLowerTriangular.H": true, + "tf.linalg.LinearOperatorLowerTriangular.__eq__": true, + "tf.linalg.LinearOperatorLowerTriangular.__ge__": true, + "tf.linalg.LinearOperatorLowerTriangular.__gt__": true, + "tf.linalg.LinearOperatorLowerTriangular.__init__": true, + "tf.linalg.LinearOperatorLowerTriangular.__le__": true, + "tf.linalg.LinearOperatorLowerTriangular.__lt__": true, + "tf.linalg.LinearOperatorLowerTriangular.__matmul__": true, + "tf.linalg.LinearOperatorLowerTriangular.__ne__": true, + "tf.linalg.LinearOperatorLowerTriangular.__new__": true, + "tf.linalg.LinearOperatorLowerTriangular.add_to_tensor": true, + "tf.linalg.LinearOperatorLowerTriangular.adjoint": true, + "tf.linalg.LinearOperatorLowerTriangular.assert_non_singular": true, + "tf.linalg.LinearOperatorLowerTriangular.assert_positive_definite": true, + "tf.linalg.LinearOperatorLowerTriangular.assert_self_adjoint": true, + "tf.linalg.LinearOperatorLowerTriangular.batch_shape": true, + "tf.linalg.LinearOperatorLowerTriangular.batch_shape_tensor": true, 
+ "tf.linalg.LinearOperatorLowerTriangular.cholesky": true, + "tf.linalg.LinearOperatorLowerTriangular.cond": true, + "tf.linalg.LinearOperatorLowerTriangular.determinant": true, + "tf.linalg.LinearOperatorLowerTriangular.diag_part": true, + "tf.linalg.LinearOperatorLowerTriangular.domain_dimension": true, + "tf.linalg.LinearOperatorLowerTriangular.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorLowerTriangular.dtype": true, + "tf.linalg.LinearOperatorLowerTriangular.eigvals": true, + "tf.linalg.LinearOperatorLowerTriangular.graph_parents": true, + "tf.linalg.LinearOperatorLowerTriangular.inverse": true, + "tf.linalg.LinearOperatorLowerTriangular.is_non_singular": true, + "tf.linalg.LinearOperatorLowerTriangular.is_positive_definite": true, + "tf.linalg.LinearOperatorLowerTriangular.is_self_adjoint": true, + "tf.linalg.LinearOperatorLowerTriangular.is_square": true, + "tf.linalg.LinearOperatorLowerTriangular.log_abs_determinant": true, + "tf.linalg.LinearOperatorLowerTriangular.matmul": true, + "tf.linalg.LinearOperatorLowerTriangular.matvec": true, + "tf.linalg.LinearOperatorLowerTriangular.name": true, + "tf.linalg.LinearOperatorLowerTriangular.name_scope": true, + "tf.linalg.LinearOperatorLowerTriangular.range_dimension": true, + "tf.linalg.LinearOperatorLowerTriangular.range_dimension_tensor": true, + "tf.linalg.LinearOperatorLowerTriangular.shape": true, + "tf.linalg.LinearOperatorLowerTriangular.shape_tensor": true, + "tf.linalg.LinearOperatorLowerTriangular.solve": true, + "tf.linalg.LinearOperatorLowerTriangular.solvevec": true, + "tf.linalg.LinearOperatorLowerTriangular.submodules": true, + "tf.linalg.LinearOperatorLowerTriangular.tensor_rank": true, + "tf.linalg.LinearOperatorLowerTriangular.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorLowerTriangular.to_dense": true, + "tf.linalg.LinearOperatorLowerTriangular.trace": true, + "tf.linalg.LinearOperatorLowerTriangular.trainable_variables": true, + "tf.linalg.LinearOperatorLowerTriangular.variables": true, + "tf.linalg.LinearOperatorLowerTriangular.with_name_scope": true, + "tf.linalg.LinearOperatorPermutation": false, + "tf.linalg.LinearOperatorPermutation.H": true, + "tf.linalg.LinearOperatorPermutation.__eq__": true, + "tf.linalg.LinearOperatorPermutation.__ge__": true, + "tf.linalg.LinearOperatorPermutation.__gt__": true, + "tf.linalg.LinearOperatorPermutation.__init__": true, + "tf.linalg.LinearOperatorPermutation.__le__": true, + "tf.linalg.LinearOperatorPermutation.__lt__": true, + "tf.linalg.LinearOperatorPermutation.__matmul__": true, + "tf.linalg.LinearOperatorPermutation.__ne__": true, + "tf.linalg.LinearOperatorPermutation.__new__": true, + "tf.linalg.LinearOperatorPermutation.add_to_tensor": true, + "tf.linalg.LinearOperatorPermutation.adjoint": true, + "tf.linalg.LinearOperatorPermutation.assert_non_singular": true, + "tf.linalg.LinearOperatorPermutation.assert_positive_definite": true, + "tf.linalg.LinearOperatorPermutation.assert_self_adjoint": true, + "tf.linalg.LinearOperatorPermutation.batch_shape": true, + "tf.linalg.LinearOperatorPermutation.batch_shape_tensor": true, + "tf.linalg.LinearOperatorPermutation.cholesky": true, + "tf.linalg.LinearOperatorPermutation.cond": true, + "tf.linalg.LinearOperatorPermutation.determinant": true, + "tf.linalg.LinearOperatorPermutation.diag_part": true, + "tf.linalg.LinearOperatorPermutation.domain_dimension": true, + "tf.linalg.LinearOperatorPermutation.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorPermutation.dtype": true, + 
"tf.linalg.LinearOperatorPermutation.eigvals": true, + "tf.linalg.LinearOperatorPermutation.graph_parents": true, + "tf.linalg.LinearOperatorPermutation.inverse": true, + "tf.linalg.LinearOperatorPermutation.is_non_singular": true, + "tf.linalg.LinearOperatorPermutation.is_positive_definite": true, + "tf.linalg.LinearOperatorPermutation.is_self_adjoint": true, + "tf.linalg.LinearOperatorPermutation.is_square": true, + "tf.linalg.LinearOperatorPermutation.log_abs_determinant": true, + "tf.linalg.LinearOperatorPermutation.matmul": true, + "tf.linalg.LinearOperatorPermutation.matvec": true, + "tf.linalg.LinearOperatorPermutation.name": true, + "tf.linalg.LinearOperatorPermutation.name_scope": true, + "tf.linalg.LinearOperatorPermutation.perm": true, + "tf.linalg.LinearOperatorPermutation.range_dimension": true, + "tf.linalg.LinearOperatorPermutation.range_dimension_tensor": true, + "tf.linalg.LinearOperatorPermutation.shape": true, + "tf.linalg.LinearOperatorPermutation.shape_tensor": true, + "tf.linalg.LinearOperatorPermutation.solve": true, + "tf.linalg.LinearOperatorPermutation.solvevec": true, + "tf.linalg.LinearOperatorPermutation.submodules": true, + "tf.linalg.LinearOperatorPermutation.tensor_rank": true, + "tf.linalg.LinearOperatorPermutation.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorPermutation.to_dense": true, + "tf.linalg.LinearOperatorPermutation.trace": true, + "tf.linalg.LinearOperatorPermutation.trainable_variables": true, + "tf.linalg.LinearOperatorPermutation.variables": true, + "tf.linalg.LinearOperatorPermutation.with_name_scope": true, + "tf.linalg.LinearOperatorScaledIdentity": false, + "tf.linalg.LinearOperatorScaledIdentity.H": true, + "tf.linalg.LinearOperatorScaledIdentity.__eq__": true, + "tf.linalg.LinearOperatorScaledIdentity.__ge__": true, + "tf.linalg.LinearOperatorScaledIdentity.__gt__": true, + "tf.linalg.LinearOperatorScaledIdentity.__init__": true, + "tf.linalg.LinearOperatorScaledIdentity.__le__": true, + "tf.linalg.LinearOperatorScaledIdentity.__lt__": true, + "tf.linalg.LinearOperatorScaledIdentity.__matmul__": true, + "tf.linalg.LinearOperatorScaledIdentity.__ne__": true, + "tf.linalg.LinearOperatorScaledIdentity.__new__": true, + "tf.linalg.LinearOperatorScaledIdentity.add_to_tensor": true, + "tf.linalg.LinearOperatorScaledIdentity.adjoint": true, + "tf.linalg.LinearOperatorScaledIdentity.assert_non_singular": true, + "tf.linalg.LinearOperatorScaledIdentity.assert_positive_definite": true, + "tf.linalg.LinearOperatorScaledIdentity.assert_self_adjoint": true, + "tf.linalg.LinearOperatorScaledIdentity.batch_shape": true, + "tf.linalg.LinearOperatorScaledIdentity.batch_shape_tensor": true, + "tf.linalg.LinearOperatorScaledIdentity.cholesky": true, + "tf.linalg.LinearOperatorScaledIdentity.cond": true, + "tf.linalg.LinearOperatorScaledIdentity.determinant": true, + "tf.linalg.LinearOperatorScaledIdentity.diag_part": true, + "tf.linalg.LinearOperatorScaledIdentity.domain_dimension": true, + "tf.linalg.LinearOperatorScaledIdentity.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorScaledIdentity.dtype": true, + "tf.linalg.LinearOperatorScaledIdentity.eigvals": true, + "tf.linalg.LinearOperatorScaledIdentity.graph_parents": true, + "tf.linalg.LinearOperatorScaledIdentity.inverse": true, + "tf.linalg.LinearOperatorScaledIdentity.is_non_singular": true, + "tf.linalg.LinearOperatorScaledIdentity.is_positive_definite": true, + "tf.linalg.LinearOperatorScaledIdentity.is_self_adjoint": true, + "tf.linalg.LinearOperatorScaledIdentity.is_square": 
true, + "tf.linalg.LinearOperatorScaledIdentity.log_abs_determinant": true, + "tf.linalg.LinearOperatorScaledIdentity.matmul": true, + "tf.linalg.LinearOperatorScaledIdentity.matvec": true, + "tf.linalg.LinearOperatorScaledIdentity.multiplier": true, + "tf.linalg.LinearOperatorScaledIdentity.name": true, + "tf.linalg.LinearOperatorScaledIdentity.name_scope": true, + "tf.linalg.LinearOperatorScaledIdentity.range_dimension": true, + "tf.linalg.LinearOperatorScaledIdentity.range_dimension_tensor": true, + "tf.linalg.LinearOperatorScaledIdentity.shape": true, + "tf.linalg.LinearOperatorScaledIdentity.shape_tensor": true, + "tf.linalg.LinearOperatorScaledIdentity.solve": true, + "tf.linalg.LinearOperatorScaledIdentity.solvevec": true, + "tf.linalg.LinearOperatorScaledIdentity.submodules": true, + "tf.linalg.LinearOperatorScaledIdentity.tensor_rank": true, + "tf.linalg.LinearOperatorScaledIdentity.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorScaledIdentity.to_dense": true, + "tf.linalg.LinearOperatorScaledIdentity.trace": true, + "tf.linalg.LinearOperatorScaledIdentity.trainable_variables": true, + "tf.linalg.LinearOperatorScaledIdentity.variables": true, + "tf.linalg.LinearOperatorScaledIdentity.with_name_scope": true, + "tf.linalg.LinearOperatorToeplitz": false, + "tf.linalg.LinearOperatorToeplitz.H": true, + "tf.linalg.LinearOperatorToeplitz.__eq__": true, + "tf.linalg.LinearOperatorToeplitz.__ge__": true, + "tf.linalg.LinearOperatorToeplitz.__gt__": true, + "tf.linalg.LinearOperatorToeplitz.__init__": true, + "tf.linalg.LinearOperatorToeplitz.__le__": true, + "tf.linalg.LinearOperatorToeplitz.__lt__": true, + "tf.linalg.LinearOperatorToeplitz.__matmul__": true, + "tf.linalg.LinearOperatorToeplitz.__ne__": true, + "tf.linalg.LinearOperatorToeplitz.__new__": true, + "tf.linalg.LinearOperatorToeplitz.add_to_tensor": true, + "tf.linalg.LinearOperatorToeplitz.adjoint": true, + "tf.linalg.LinearOperatorToeplitz.assert_non_singular": true, + "tf.linalg.LinearOperatorToeplitz.assert_positive_definite": true, + "tf.linalg.LinearOperatorToeplitz.assert_self_adjoint": true, + "tf.linalg.LinearOperatorToeplitz.batch_shape": true, + "tf.linalg.LinearOperatorToeplitz.batch_shape_tensor": true, + "tf.linalg.LinearOperatorToeplitz.cholesky": true, + "tf.linalg.LinearOperatorToeplitz.col": true, + "tf.linalg.LinearOperatorToeplitz.cond": true, + "tf.linalg.LinearOperatorToeplitz.determinant": true, + "tf.linalg.LinearOperatorToeplitz.diag_part": true, + "tf.linalg.LinearOperatorToeplitz.domain_dimension": true, + "tf.linalg.LinearOperatorToeplitz.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorToeplitz.dtype": true, + "tf.linalg.LinearOperatorToeplitz.eigvals": true, + "tf.linalg.LinearOperatorToeplitz.graph_parents": true, + "tf.linalg.LinearOperatorToeplitz.inverse": true, + "tf.linalg.LinearOperatorToeplitz.is_non_singular": true, + "tf.linalg.LinearOperatorToeplitz.is_positive_definite": true, + "tf.linalg.LinearOperatorToeplitz.is_self_adjoint": true, + "tf.linalg.LinearOperatorToeplitz.is_square": true, + "tf.linalg.LinearOperatorToeplitz.log_abs_determinant": true, + "tf.linalg.LinearOperatorToeplitz.matmul": true, + "tf.linalg.LinearOperatorToeplitz.matvec": true, + "tf.linalg.LinearOperatorToeplitz.name": true, + "tf.linalg.LinearOperatorToeplitz.name_scope": true, + "tf.linalg.LinearOperatorToeplitz.range_dimension": true, + "tf.linalg.LinearOperatorToeplitz.range_dimension_tensor": true, + "tf.linalg.LinearOperatorToeplitz.row": true, + "tf.linalg.LinearOperatorToeplitz.shape": 
true, + "tf.linalg.LinearOperatorToeplitz.shape_tensor": true, + "tf.linalg.LinearOperatorToeplitz.solve": true, + "tf.linalg.LinearOperatorToeplitz.solvevec": true, + "tf.linalg.LinearOperatorToeplitz.submodules": true, + "tf.linalg.LinearOperatorToeplitz.tensor_rank": true, + "tf.linalg.LinearOperatorToeplitz.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorToeplitz.to_dense": true, + "tf.linalg.LinearOperatorToeplitz.trace": true, + "tf.linalg.LinearOperatorToeplitz.trainable_variables": true, + "tf.linalg.LinearOperatorToeplitz.variables": true, + "tf.linalg.LinearOperatorToeplitz.with_name_scope": true, + "tf.linalg.LinearOperatorTridiag": false, + "tf.linalg.LinearOperatorTridiag.H": true, + "tf.linalg.LinearOperatorTridiag.__eq__": true, + "tf.linalg.LinearOperatorTridiag.__ge__": true, + "tf.linalg.LinearOperatorTridiag.__gt__": true, + "tf.linalg.LinearOperatorTridiag.__init__": true, + "tf.linalg.LinearOperatorTridiag.__le__": true, + "tf.linalg.LinearOperatorTridiag.__lt__": true, + "tf.linalg.LinearOperatorTridiag.__matmul__": true, + "tf.linalg.LinearOperatorTridiag.__ne__": true, + "tf.linalg.LinearOperatorTridiag.__new__": true, + "tf.linalg.LinearOperatorTridiag.add_to_tensor": true, + "tf.linalg.LinearOperatorTridiag.adjoint": true, + "tf.linalg.LinearOperatorTridiag.assert_non_singular": true, + "tf.linalg.LinearOperatorTridiag.assert_positive_definite": true, + "tf.linalg.LinearOperatorTridiag.assert_self_adjoint": true, + "tf.linalg.LinearOperatorTridiag.batch_shape": true, + "tf.linalg.LinearOperatorTridiag.batch_shape_tensor": true, + "tf.linalg.LinearOperatorTridiag.cholesky": true, + "tf.linalg.LinearOperatorTridiag.cond": true, + "tf.linalg.LinearOperatorTridiag.determinant": true, + "tf.linalg.LinearOperatorTridiag.diag_part": true, + "tf.linalg.LinearOperatorTridiag.diagonals": true, + "tf.linalg.LinearOperatorTridiag.diagonals_format": true, + "tf.linalg.LinearOperatorTridiag.domain_dimension": true, + "tf.linalg.LinearOperatorTridiag.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorTridiag.dtype": true, + "tf.linalg.LinearOperatorTridiag.eigvals": true, + "tf.linalg.LinearOperatorTridiag.graph_parents": true, + "tf.linalg.LinearOperatorTridiag.inverse": true, + "tf.linalg.LinearOperatorTridiag.is_non_singular": true, + "tf.linalg.LinearOperatorTridiag.is_positive_definite": true, + "tf.linalg.LinearOperatorTridiag.is_self_adjoint": true, + "tf.linalg.LinearOperatorTridiag.is_square": true, + "tf.linalg.LinearOperatorTridiag.log_abs_determinant": true, + "tf.linalg.LinearOperatorTridiag.matmul": true, + "tf.linalg.LinearOperatorTridiag.matvec": true, + "tf.linalg.LinearOperatorTridiag.name": true, + "tf.linalg.LinearOperatorTridiag.name_scope": true, + "tf.linalg.LinearOperatorTridiag.range_dimension": true, + "tf.linalg.LinearOperatorTridiag.range_dimension_tensor": true, + "tf.linalg.LinearOperatorTridiag.shape": true, + "tf.linalg.LinearOperatorTridiag.shape_tensor": true, + "tf.linalg.LinearOperatorTridiag.solve": true, + "tf.linalg.LinearOperatorTridiag.solvevec": true, + "tf.linalg.LinearOperatorTridiag.submodules": true, + "tf.linalg.LinearOperatorTridiag.tensor_rank": true, + "tf.linalg.LinearOperatorTridiag.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorTridiag.to_dense": true, + "tf.linalg.LinearOperatorTridiag.trace": true, + "tf.linalg.LinearOperatorTridiag.trainable_variables": true, + "tf.linalg.LinearOperatorTridiag.variables": true, + "tf.linalg.LinearOperatorTridiag.with_name_scope": true, + "tf.linalg.LinearOperatorZeros": 
false, + "tf.linalg.LinearOperatorZeros.H": true, + "tf.linalg.LinearOperatorZeros.__eq__": true, + "tf.linalg.LinearOperatorZeros.__ge__": true, + "tf.linalg.LinearOperatorZeros.__gt__": true, + "tf.linalg.LinearOperatorZeros.__init__": true, + "tf.linalg.LinearOperatorZeros.__le__": true, + "tf.linalg.LinearOperatorZeros.__lt__": true, + "tf.linalg.LinearOperatorZeros.__matmul__": true, + "tf.linalg.LinearOperatorZeros.__ne__": true, + "tf.linalg.LinearOperatorZeros.__new__": true, + "tf.linalg.LinearOperatorZeros.add_to_tensor": true, + "tf.linalg.LinearOperatorZeros.adjoint": true, + "tf.linalg.LinearOperatorZeros.assert_non_singular": true, + "tf.linalg.LinearOperatorZeros.assert_positive_definite": true, + "tf.linalg.LinearOperatorZeros.assert_self_adjoint": true, + "tf.linalg.LinearOperatorZeros.batch_shape": true, + "tf.linalg.LinearOperatorZeros.batch_shape_tensor": true, + "tf.linalg.LinearOperatorZeros.cholesky": true, + "tf.linalg.LinearOperatorZeros.cond": true, + "tf.linalg.LinearOperatorZeros.determinant": true, + "tf.linalg.LinearOperatorZeros.diag_part": true, + "tf.linalg.LinearOperatorZeros.domain_dimension": true, + "tf.linalg.LinearOperatorZeros.domain_dimension_tensor": true, + "tf.linalg.LinearOperatorZeros.dtype": true, + "tf.linalg.LinearOperatorZeros.eigvals": true, + "tf.linalg.LinearOperatorZeros.graph_parents": true, + "tf.linalg.LinearOperatorZeros.inverse": true, + "tf.linalg.LinearOperatorZeros.is_non_singular": true, + "tf.linalg.LinearOperatorZeros.is_positive_definite": true, + "tf.linalg.LinearOperatorZeros.is_self_adjoint": true, + "tf.linalg.LinearOperatorZeros.is_square": true, + "tf.linalg.LinearOperatorZeros.log_abs_determinant": true, + "tf.linalg.LinearOperatorZeros.matmul": true, + "tf.linalg.LinearOperatorZeros.matvec": true, + "tf.linalg.LinearOperatorZeros.name": true, + "tf.linalg.LinearOperatorZeros.name_scope": true, + "tf.linalg.LinearOperatorZeros.range_dimension": true, + "tf.linalg.LinearOperatorZeros.range_dimension_tensor": true, + "tf.linalg.LinearOperatorZeros.shape": true, + "tf.linalg.LinearOperatorZeros.shape_tensor": true, + "tf.linalg.LinearOperatorZeros.solve": true, + "tf.linalg.LinearOperatorZeros.solvevec": true, + "tf.linalg.LinearOperatorZeros.submodules": true, + "tf.linalg.LinearOperatorZeros.tensor_rank": true, + "tf.linalg.LinearOperatorZeros.tensor_rank_tensor": true, + "tf.linalg.LinearOperatorZeros.to_dense": true, + "tf.linalg.LinearOperatorZeros.trace": true, + "tf.linalg.LinearOperatorZeros.trainable_variables": true, + "tf.linalg.LinearOperatorZeros.variables": true, + "tf.linalg.LinearOperatorZeros.with_name_scope": true, + "tf.linalg.adjoint": false, + "tf.linalg.band_part": false, + "tf.linalg.cholesky": false, + "tf.linalg.cholesky_solve": false, + "tf.linalg.cross": false, + "tf.linalg.det": false, + "tf.linalg.diag": false, + "tf.linalg.diag_part": false, + "tf.linalg.eig": false, + "tf.linalg.eigh": false, + "tf.linalg.eigvals": false, + "tf.linalg.eigvalsh": false, + "tf.linalg.einsum": false, + "tf.linalg.experimental": false, + "tf.linalg.experimental.conjugate_gradient": false, + "tf.linalg.expm": false, + "tf.linalg.eye": false, + "tf.linalg.global_norm": false, + "tf.linalg.inv": false, + "tf.linalg.l2_normalize": false, + "tf.linalg.logdet": false, + "tf.linalg.logm": false, + "tf.linalg.lstsq": false, + "tf.linalg.lu": false, + "tf.linalg.lu_matrix_inverse": false, + "tf.linalg.lu_reconstruct": false, + "tf.linalg.lu_solve": false, + "tf.linalg.matmul": false, + "tf.linalg.matrix_rank": false, + 
"tf.linalg.matrix_transpose": false, + "tf.linalg.matvec": false, + "tf.linalg.norm": false, + "tf.linalg.normalize": false, + "tf.linalg.pinv": false, + "tf.linalg.qr": false, + "tf.linalg.set_diag": false, + "tf.linalg.slogdet": false, + "tf.linalg.solve": false, + "tf.linalg.sqrtm": false, + "tf.linalg.svd": false, + "tf.linalg.tensor_diag": false, + "tf.linalg.tensor_diag_part": false, + "tf.linalg.tensordot": false, + "tf.linalg.trace": false, + "tf.linalg.triangular_solve": false, + "tf.linalg.tridiagonal_matmul": false, + "tf.linalg.tridiagonal_solve": false, + "tf.linspace": false, + "tf.lite": false, + "tf.lite.Interpreter": false, + "tf.lite.Interpreter.__eq__": true, + "tf.lite.Interpreter.__ge__": true, + "tf.lite.Interpreter.__gt__": true, + "tf.lite.Interpreter.__init__": true, + "tf.lite.Interpreter.__le__": true, + "tf.lite.Interpreter.__lt__": true, + "tf.lite.Interpreter.__ne__": true, + "tf.lite.Interpreter.__new__": true, + "tf.lite.Interpreter.allocate_tensors": true, + "tf.lite.Interpreter.get_input_details": true, + "tf.lite.Interpreter.get_output_details": true, + "tf.lite.Interpreter.get_tensor": true, + "tf.lite.Interpreter.get_tensor_details": true, + "tf.lite.Interpreter.invoke": true, + "tf.lite.Interpreter.reset_all_variables": true, + "tf.lite.Interpreter.resize_tensor_input": true, + "tf.lite.Interpreter.set_tensor": true, + "tf.lite.Interpreter.tensor": true, + "tf.lite.OpsSet": false, + "tf.lite.OpsSet.SELECT_TF_OPS": true, + "tf.lite.OpsSet.TFLITE_BUILTINS": true, + "tf.lite.OpsSet.TFLITE_BUILTINS_INT8": true, + "tf.lite.OpsSet.name": true, + "tf.lite.OpsSet.value": true, + "tf.lite.Optimize": false, + "tf.lite.Optimize.DEFAULT": true, + "tf.lite.Optimize.OPTIMIZE_FOR_LATENCY": true, + "tf.lite.Optimize.OPTIMIZE_FOR_SIZE": true, + "tf.lite.Optimize.name": true, + "tf.lite.Optimize.value": true, + "tf.lite.RepresentativeDataset": false, + "tf.lite.RepresentativeDataset.__eq__": true, + "tf.lite.RepresentativeDataset.__ge__": true, + "tf.lite.RepresentativeDataset.__gt__": true, + "tf.lite.RepresentativeDataset.__init__": true, + "tf.lite.RepresentativeDataset.__le__": true, + "tf.lite.RepresentativeDataset.__lt__": true, + "tf.lite.RepresentativeDataset.__ne__": true, + "tf.lite.RepresentativeDataset.__new__": true, + "tf.lite.TFLiteConverter": false, + "tf.lite.TFLiteConverter.__eq__": true, + "tf.lite.TFLiteConverter.__ge__": true, + "tf.lite.TFLiteConverter.__gt__": true, + "tf.lite.TFLiteConverter.__init__": true, + "tf.lite.TFLiteConverter.__le__": true, + "tf.lite.TFLiteConverter.__lt__": true, + "tf.lite.TFLiteConverter.__ne__": true, + "tf.lite.TFLiteConverter.__new__": true, + "tf.lite.TFLiteConverter.convert": true, + "tf.lite.TFLiteConverter.from_concrete_functions": true, + "tf.lite.TFLiteConverter.from_keras_model": true, + "tf.lite.TFLiteConverter.from_saved_model": true, + "tf.lite.TargetSpec": false, + "tf.lite.TargetSpec.__eq__": true, + "tf.lite.TargetSpec.__ge__": true, + "tf.lite.TargetSpec.__gt__": true, + "tf.lite.TargetSpec.__init__": true, + "tf.lite.TargetSpec.__le__": true, + "tf.lite.TargetSpec.__lt__": true, + "tf.lite.TargetSpec.__ne__": true, + "tf.lite.TargetSpec.__new__": true, + "tf.lite.experimental": false, + "tf.lite.experimental.load_delegate": false, + "tf.load_library": false, + "tf.load_op_library": false, + "tf.logical_and": false, + "tf.logical_not": false, + "tf.logical_or": false, + "tf.lookup": false, + "tf.lookup.KeyValueTensorInitializer": false, + "tf.lookup.KeyValueTensorInitializer.__eq__": true, + 
"tf.lookup.KeyValueTensorInitializer.__ge__": true, + "tf.lookup.KeyValueTensorInitializer.__gt__": true, + "tf.lookup.KeyValueTensorInitializer.__init__": true, + "tf.lookup.KeyValueTensorInitializer.__le__": true, + "tf.lookup.KeyValueTensorInitializer.__lt__": true, + "tf.lookup.KeyValueTensorInitializer.__ne__": true, + "tf.lookup.KeyValueTensorInitializer.__new__": true, + "tf.lookup.KeyValueTensorInitializer.initialize": true, + "tf.lookup.KeyValueTensorInitializer.key_dtype": true, + "tf.lookup.KeyValueTensorInitializer.value_dtype": true, + "tf.lookup.StaticHashTable": false, + "tf.lookup.StaticHashTable.__eq__": true, + "tf.lookup.StaticHashTable.__ge__": true, + "tf.lookup.StaticHashTable.__gt__": true, + "tf.lookup.StaticHashTable.__init__": true, + "tf.lookup.StaticHashTable.__le__": true, + "tf.lookup.StaticHashTable.__lt__": true, + "tf.lookup.StaticHashTable.__ne__": true, + "tf.lookup.StaticHashTable.__new__": true, + "tf.lookup.StaticHashTable.default_value": true, + "tf.lookup.StaticHashTable.export": true, + "tf.lookup.StaticHashTable.key_dtype": true, + "tf.lookup.StaticHashTable.lookup": true, + "tf.lookup.StaticHashTable.name": true, + "tf.lookup.StaticHashTable.resource_handle": true, + "tf.lookup.StaticHashTable.size": true, + "tf.lookup.StaticHashTable.value_dtype": true, + "tf.lookup.StaticVocabularyTable": false, + "tf.lookup.StaticVocabularyTable.__eq__": true, + "tf.lookup.StaticVocabularyTable.__ge__": true, + "tf.lookup.StaticVocabularyTable.__gt__": true, + "tf.lookup.StaticVocabularyTable.__init__": true, + "tf.lookup.StaticVocabularyTable.__le__": true, + "tf.lookup.StaticVocabularyTable.__lt__": true, + "tf.lookup.StaticVocabularyTable.__ne__": true, + "tf.lookup.StaticVocabularyTable.__new__": true, + "tf.lookup.StaticVocabularyTable.key_dtype": true, + "tf.lookup.StaticVocabularyTable.lookup": true, + "tf.lookup.StaticVocabularyTable.name": true, + "tf.lookup.StaticVocabularyTable.resource_handle": true, + "tf.lookup.StaticVocabularyTable.size": true, + "tf.lookup.StaticVocabularyTable.value_dtype": true, + "tf.lookup.TextFileIndex": false, + "tf.lookup.TextFileIndex.LINE_NUMBER": true, + "tf.lookup.TextFileIndex.WHOLE_LINE": true, + "tf.lookup.TextFileIndex.__eq__": true, + "tf.lookup.TextFileIndex.__ge__": true, + "tf.lookup.TextFileIndex.__gt__": true, + "tf.lookup.TextFileIndex.__init__": true, + "tf.lookup.TextFileIndex.__le__": true, + "tf.lookup.TextFileIndex.__lt__": true, + "tf.lookup.TextFileIndex.__ne__": true, + "tf.lookup.TextFileIndex.__new__": true, + "tf.lookup.TextFileInitializer": false, + "tf.lookup.TextFileInitializer.__eq__": true, + "tf.lookup.TextFileInitializer.__ge__": true, + "tf.lookup.TextFileInitializer.__gt__": true, + "tf.lookup.TextFileInitializer.__init__": true, + "tf.lookup.TextFileInitializer.__le__": true, + "tf.lookup.TextFileInitializer.__lt__": true, + "tf.lookup.TextFileInitializer.__ne__": true, + "tf.lookup.TextFileInitializer.__new__": true, + "tf.lookup.TextFileInitializer.initialize": true, + "tf.lookup.TextFileInitializer.key_dtype": true, + "tf.lookup.TextFileInitializer.value_dtype": true, + "tf.lookup.experimental": false, + "tf.lookup.experimental.DenseHashTable": false, + "tf.lookup.experimental.DenseHashTable.__eq__": true, + "tf.lookup.experimental.DenseHashTable.__ge__": true, + "tf.lookup.experimental.DenseHashTable.__gt__": true, + "tf.lookup.experimental.DenseHashTable.__init__": true, + "tf.lookup.experimental.DenseHashTable.__le__": true, + "tf.lookup.experimental.DenseHashTable.__lt__": true, 
+ "tf.lookup.experimental.DenseHashTable.__ne__": true, + "tf.lookup.experimental.DenseHashTable.__new__": true, + "tf.lookup.experimental.DenseHashTable.erase": true, + "tf.lookup.experimental.DenseHashTable.export": true, + "tf.lookup.experimental.DenseHashTable.insert": true, + "tf.lookup.experimental.DenseHashTable.insert_or_assign": true, + "tf.lookup.experimental.DenseHashTable.key_dtype": true, + "tf.lookup.experimental.DenseHashTable.lookup": true, + "tf.lookup.experimental.DenseHashTable.name": true, + "tf.lookup.experimental.DenseHashTable.remove": true, + "tf.lookup.experimental.DenseHashTable.resource_handle": true, + "tf.lookup.experimental.DenseHashTable.size": true, + "tf.lookup.experimental.DenseHashTable.value_dtype": true, + "tf.losses": false, + "tf.losses.BinaryCrossentropy": false, + "tf.losses.BinaryCrossentropy.__call__": true, + "tf.losses.BinaryCrossentropy.__eq__": true, + "tf.losses.BinaryCrossentropy.__ge__": true, + "tf.losses.BinaryCrossentropy.__gt__": true, + "tf.losses.BinaryCrossentropy.__init__": true, + "tf.losses.BinaryCrossentropy.__le__": true, + "tf.losses.BinaryCrossentropy.__lt__": true, + "tf.losses.BinaryCrossentropy.__ne__": true, + "tf.losses.BinaryCrossentropy.__new__": true, + "tf.losses.BinaryCrossentropy.call": true, + "tf.losses.BinaryCrossentropy.from_config": true, + "tf.losses.BinaryCrossentropy.get_config": true, + "tf.losses.CategoricalCrossentropy": false, + "tf.losses.CategoricalCrossentropy.__call__": true, + "tf.losses.CategoricalCrossentropy.__eq__": true, + "tf.losses.CategoricalCrossentropy.__ge__": true, + "tf.losses.CategoricalCrossentropy.__gt__": true, + "tf.losses.CategoricalCrossentropy.__init__": true, + "tf.losses.CategoricalCrossentropy.__le__": true, + "tf.losses.CategoricalCrossentropy.__lt__": true, + "tf.losses.CategoricalCrossentropy.__ne__": true, + "tf.losses.CategoricalCrossentropy.__new__": true, + "tf.losses.CategoricalCrossentropy.call": true, + "tf.losses.CategoricalCrossentropy.from_config": true, + "tf.losses.CategoricalCrossentropy.get_config": true, + "tf.losses.CategoricalHinge": false, + "tf.losses.CategoricalHinge.__call__": true, + "tf.losses.CategoricalHinge.__eq__": true, + "tf.losses.CategoricalHinge.__ge__": true, + "tf.losses.CategoricalHinge.__gt__": true, + "tf.losses.CategoricalHinge.__init__": true, + "tf.losses.CategoricalHinge.__le__": true, + "tf.losses.CategoricalHinge.__lt__": true, + "tf.losses.CategoricalHinge.__ne__": true, + "tf.losses.CategoricalHinge.__new__": true, + "tf.losses.CategoricalHinge.call": true, + "tf.losses.CategoricalHinge.from_config": true, + "tf.losses.CategoricalHinge.get_config": true, + "tf.losses.CosineSimilarity": false, + "tf.losses.CosineSimilarity.__call__": true, + "tf.losses.CosineSimilarity.__eq__": true, + "tf.losses.CosineSimilarity.__ge__": true, + "tf.losses.CosineSimilarity.__gt__": true, + "tf.losses.CosineSimilarity.__init__": true, + "tf.losses.CosineSimilarity.__le__": true, + "tf.losses.CosineSimilarity.__lt__": true, + "tf.losses.CosineSimilarity.__ne__": true, + "tf.losses.CosineSimilarity.__new__": true, + "tf.losses.CosineSimilarity.call": true, + "tf.losses.CosineSimilarity.from_config": true, + "tf.losses.CosineSimilarity.get_config": true, + "tf.losses.Hinge": false, + "tf.losses.Hinge.__call__": true, + "tf.losses.Hinge.__eq__": true, + "tf.losses.Hinge.__ge__": true, + "tf.losses.Hinge.__gt__": true, + "tf.losses.Hinge.__init__": true, + "tf.losses.Hinge.__le__": true, + "tf.losses.Hinge.__lt__": true, + "tf.losses.Hinge.__ne__": 
true, + "tf.losses.Hinge.__new__": true, + "tf.losses.Hinge.call": true, + "tf.losses.Hinge.from_config": true, + "tf.losses.Hinge.get_config": true, + "tf.losses.Huber": false, + "tf.losses.Huber.__call__": true, + "tf.losses.Huber.__eq__": true, + "tf.losses.Huber.__ge__": true, + "tf.losses.Huber.__gt__": true, + "tf.losses.Huber.__init__": true, + "tf.losses.Huber.__le__": true, + "tf.losses.Huber.__lt__": true, + "tf.losses.Huber.__ne__": true, + "tf.losses.Huber.__new__": true, + "tf.losses.Huber.call": true, + "tf.losses.Huber.from_config": true, + "tf.losses.Huber.get_config": true, + "tf.losses.KLD": false, + "tf.losses.KLDivergence": false, + "tf.losses.KLDivergence.__call__": true, + "tf.losses.KLDivergence.__eq__": true, + "tf.losses.KLDivergence.__ge__": true, + "tf.losses.KLDivergence.__gt__": true, + "tf.losses.KLDivergence.__init__": true, + "tf.losses.KLDivergence.__le__": true, + "tf.losses.KLDivergence.__lt__": true, + "tf.losses.KLDivergence.__ne__": true, + "tf.losses.KLDivergence.__new__": true, + "tf.losses.KLDivergence.call": true, + "tf.losses.KLDivergence.from_config": true, + "tf.losses.KLDivergence.get_config": true, + "tf.losses.LogCosh": false, + "tf.losses.LogCosh.__call__": true, + "tf.losses.LogCosh.__eq__": true, + "tf.losses.LogCosh.__ge__": true, + "tf.losses.LogCosh.__gt__": true, + "tf.losses.LogCosh.__init__": true, + "tf.losses.LogCosh.__le__": true, + "tf.losses.LogCosh.__lt__": true, + "tf.losses.LogCosh.__ne__": true, + "tf.losses.LogCosh.__new__": true, + "tf.losses.LogCosh.call": true, + "tf.losses.LogCosh.from_config": true, + "tf.losses.LogCosh.get_config": true, + "tf.losses.Loss": false, + "tf.losses.Loss.__call__": true, + "tf.losses.Loss.__eq__": true, + "tf.losses.Loss.__ge__": true, + "tf.losses.Loss.__gt__": true, + "tf.losses.Loss.__init__": true, + "tf.losses.Loss.__le__": true, + "tf.losses.Loss.__lt__": true, + "tf.losses.Loss.__ne__": true, + "tf.losses.Loss.__new__": true, + "tf.losses.Loss.call": true, + "tf.losses.Loss.from_config": true, + "tf.losses.Loss.get_config": true, + "tf.losses.MAE": false, + "tf.losses.MAPE": false, + "tf.losses.MSE": false, + "tf.losses.MSLE": false, + "tf.losses.MeanAbsoluteError": false, + "tf.losses.MeanAbsoluteError.__call__": true, + "tf.losses.MeanAbsoluteError.__eq__": true, + "tf.losses.MeanAbsoluteError.__ge__": true, + "tf.losses.MeanAbsoluteError.__gt__": true, + "tf.losses.MeanAbsoluteError.__init__": true, + "tf.losses.MeanAbsoluteError.__le__": true, + "tf.losses.MeanAbsoluteError.__lt__": true, + "tf.losses.MeanAbsoluteError.__ne__": true, + "tf.losses.MeanAbsoluteError.__new__": true, + "tf.losses.MeanAbsoluteError.call": true, + "tf.losses.MeanAbsoluteError.from_config": true, + "tf.losses.MeanAbsoluteError.get_config": true, + "tf.losses.MeanAbsolutePercentageError": false, + "tf.losses.MeanAbsolutePercentageError.__call__": true, + "tf.losses.MeanAbsolutePercentageError.__eq__": true, + "tf.losses.MeanAbsolutePercentageError.__ge__": true, + "tf.losses.MeanAbsolutePercentageError.__gt__": true, + "tf.losses.MeanAbsolutePercentageError.__init__": true, + "tf.losses.MeanAbsolutePercentageError.__le__": true, + "tf.losses.MeanAbsolutePercentageError.__lt__": true, + "tf.losses.MeanAbsolutePercentageError.__ne__": true, + "tf.losses.MeanAbsolutePercentageError.__new__": true, + "tf.losses.MeanAbsolutePercentageError.call": true, + "tf.losses.MeanAbsolutePercentageError.from_config": true, + "tf.losses.MeanAbsolutePercentageError.get_config": true, + "tf.losses.MeanSquaredError": false, 
+ "tf.losses.MeanSquaredError.__call__": true, + "tf.losses.MeanSquaredError.__eq__": true, + "tf.losses.MeanSquaredError.__ge__": true, + "tf.losses.MeanSquaredError.__gt__": true, + "tf.losses.MeanSquaredError.__init__": true, + "tf.losses.MeanSquaredError.__le__": true, + "tf.losses.MeanSquaredError.__lt__": true, + "tf.losses.MeanSquaredError.__ne__": true, + "tf.losses.MeanSquaredError.__new__": true, + "tf.losses.MeanSquaredError.call": true, + "tf.losses.MeanSquaredError.from_config": true, + "tf.losses.MeanSquaredError.get_config": true, + "tf.losses.MeanSquaredLogarithmicError": false, + "tf.losses.MeanSquaredLogarithmicError.__call__": true, + "tf.losses.MeanSquaredLogarithmicError.__eq__": true, + "tf.losses.MeanSquaredLogarithmicError.__ge__": true, + "tf.losses.MeanSquaredLogarithmicError.__gt__": true, + "tf.losses.MeanSquaredLogarithmicError.__init__": true, + "tf.losses.MeanSquaredLogarithmicError.__le__": true, + "tf.losses.MeanSquaredLogarithmicError.__lt__": true, + "tf.losses.MeanSquaredLogarithmicError.__ne__": true, + "tf.losses.MeanSquaredLogarithmicError.__new__": true, + "tf.losses.MeanSquaredLogarithmicError.call": true, + "tf.losses.MeanSquaredLogarithmicError.from_config": true, + "tf.losses.MeanSquaredLogarithmicError.get_config": true, + "tf.losses.Poisson": false, + "tf.losses.Poisson.__call__": true, + "tf.losses.Poisson.__eq__": true, + "tf.losses.Poisson.__ge__": true, + "tf.losses.Poisson.__gt__": true, + "tf.losses.Poisson.__init__": true, + "tf.losses.Poisson.__le__": true, + "tf.losses.Poisson.__lt__": true, + "tf.losses.Poisson.__ne__": true, + "tf.losses.Poisson.__new__": true, + "tf.losses.Poisson.call": true, + "tf.losses.Poisson.from_config": true, + "tf.losses.Poisson.get_config": true, + "tf.losses.Reduction": false, + "tf.losses.Reduction.AUTO": true, + "tf.losses.Reduction.NONE": true, + "tf.losses.Reduction.SUM": true, + "tf.losses.Reduction.SUM_OVER_BATCH_SIZE": true, + "tf.losses.Reduction.__eq__": true, + "tf.losses.Reduction.__ge__": true, + "tf.losses.Reduction.__gt__": true, + "tf.losses.Reduction.__init__": true, + "tf.losses.Reduction.__le__": true, + "tf.losses.Reduction.__lt__": true, + "tf.losses.Reduction.__ne__": true, + "tf.losses.Reduction.__new__": true, + "tf.losses.Reduction.all": true, + "tf.losses.Reduction.validate": true, + "tf.losses.SparseCategoricalCrossentropy": false, + "tf.losses.SparseCategoricalCrossentropy.__call__": true, + "tf.losses.SparseCategoricalCrossentropy.__eq__": true, + "tf.losses.SparseCategoricalCrossentropy.__ge__": true, + "tf.losses.SparseCategoricalCrossentropy.__gt__": true, + "tf.losses.SparseCategoricalCrossentropy.__init__": true, + "tf.losses.SparseCategoricalCrossentropy.__le__": true, + "tf.losses.SparseCategoricalCrossentropy.__lt__": true, + "tf.losses.SparseCategoricalCrossentropy.__ne__": true, + "tf.losses.SparseCategoricalCrossentropy.__new__": true, + "tf.losses.SparseCategoricalCrossentropy.call": true, + "tf.losses.SparseCategoricalCrossentropy.from_config": true, + "tf.losses.SparseCategoricalCrossentropy.get_config": true, + "tf.losses.SquaredHinge": false, + "tf.losses.SquaredHinge.__call__": true, + "tf.losses.SquaredHinge.__eq__": true, + "tf.losses.SquaredHinge.__ge__": true, + "tf.losses.SquaredHinge.__gt__": true, + "tf.losses.SquaredHinge.__init__": true, + "tf.losses.SquaredHinge.__le__": true, + "tf.losses.SquaredHinge.__lt__": true, + "tf.losses.SquaredHinge.__ne__": true, + "tf.losses.SquaredHinge.__new__": true, + "tf.losses.SquaredHinge.call": true, + 
"tf.losses.SquaredHinge.from_config": true, + "tf.losses.SquaredHinge.get_config": true, + "tf.losses.binary_crossentropy": false, + "tf.losses.categorical_crossentropy": false, + "tf.losses.categorical_hinge": false, + "tf.losses.cosine_similarity": false, + "tf.losses.deserialize": false, + "tf.losses.get": false, + "tf.losses.hinge": false, + "tf.losses.kld": false, + "tf.losses.kullback_leibler_divergence": false, + "tf.losses.logcosh": false, + "tf.losses.mae": false, + "tf.losses.mape": false, + "tf.losses.mean_absolute_error": false, + "tf.losses.mean_absolute_percentage_error": false, + "tf.losses.mean_squared_error": false, + "tf.losses.mean_squared_logarithmic_error": false, + "tf.losses.mse": false, + "tf.losses.msle": false, + "tf.losses.poisson": false, + "tf.losses.serialize": false, + "tf.losses.sparse_categorical_crossentropy": false, + "tf.losses.squared_hinge": false, + "tf.make_ndarray": false, + "tf.make_tensor_proto": false, + "tf.map_fn": false, + "tf.math": false, + "tf.math.abs": false, + "tf.math.accumulate_n": false, + "tf.math.acos": false, + "tf.math.acosh": false, + "tf.math.add": false, + "tf.math.add_n": false, + "tf.math.angle": false, + "tf.math.argmax": false, + "tf.math.argmin": false, + "tf.math.asin": false, + "tf.math.asinh": false, + "tf.math.atan": false, + "tf.math.atan2": false, + "tf.math.atanh": false, + "tf.math.bessel_i0": false, + "tf.math.bessel_i0e": false, + "tf.math.bessel_i1": false, + "tf.math.bessel_i1e": false, + "tf.math.betainc": false, + "tf.math.bincount": false, + "tf.math.ceil": false, + "tf.math.confusion_matrix": false, + "tf.math.conj": false, + "tf.math.cos": false, + "tf.math.cosh": false, + "tf.math.count_nonzero": false, + "tf.math.cumprod": false, + "tf.math.cumsum": false, + "tf.math.cumulative_logsumexp": false, + "tf.math.digamma": false, + "tf.math.divide": false, + "tf.math.divide_no_nan": false, + "tf.math.equal": false, + "tf.math.erf": false, + "tf.math.erfc": false, + "tf.math.erfinv": false, + "tf.math.exp": false, + "tf.math.expm1": false, + "tf.math.floor": false, + "tf.math.floordiv": false, + "tf.math.floormod": false, + "tf.math.greater": false, + "tf.math.greater_equal": false, + "tf.math.igamma": false, + "tf.math.igammac": false, + "tf.math.imag": false, + "tf.math.in_top_k": false, + "tf.math.invert_permutation": false, + "tf.math.is_finite": false, + "tf.math.is_inf": false, + "tf.math.is_nan": false, + "tf.math.is_non_decreasing": false, + "tf.math.is_strictly_increasing": false, + "tf.math.l2_normalize": false, + "tf.math.lbeta": false, + "tf.math.less": false, + "tf.math.less_equal": false, + "tf.math.lgamma": false, + "tf.math.log": false, + "tf.math.log1p": false, + "tf.math.log_sigmoid": false, + "tf.math.log_softmax": false, + "tf.math.logical_and": false, + "tf.math.logical_not": false, + "tf.math.logical_or": false, + "tf.math.logical_xor": false, + "tf.math.maximum": false, + "tf.math.minimum": false, + "tf.math.mod": false, + "tf.math.multiply": false, + "tf.math.multiply_no_nan": false, + "tf.math.ndtri": false, + "tf.math.negative": false, + "tf.math.nextafter": false, + "tf.math.not_equal": false, + "tf.math.polygamma": false, + "tf.math.polyval": false, + "tf.math.pow": false, + "tf.math.real": false, + "tf.math.reciprocal": false, + "tf.math.reciprocal_no_nan": false, + "tf.math.reduce_all": false, + "tf.math.reduce_any": false, + "tf.math.reduce_euclidean_norm": false, + "tf.math.reduce_logsumexp": false, + "tf.math.reduce_max": false, + "tf.math.reduce_mean": false, + 
"tf.math.reduce_min": false, + "tf.math.reduce_prod": false, + "tf.math.reduce_std": false, + "tf.math.reduce_sum": false, + "tf.math.reduce_variance": false, + "tf.math.rint": false, + "tf.math.round": false, + "tf.math.rsqrt": false, + "tf.math.scalar_mul": false, + "tf.math.segment_max": false, + "tf.math.segment_mean": false, + "tf.math.segment_min": false, + "tf.math.segment_prod": false, + "tf.math.segment_sum": false, + "tf.math.sigmoid": false, + "tf.math.sign": false, + "tf.math.sin": false, + "tf.math.sinh": false, + "tf.math.sobol_sample": false, + "tf.math.softmax": false, + "tf.math.softplus": false, + "tf.math.softsign": false, + "tf.math.special": false, + "tf.math.special.dawsn": false, + "tf.math.special.expint": false, + "tf.math.special.fresnel_cos": false, + "tf.math.special.fresnel_sin": false, + "tf.math.special.spence": false, + "tf.math.sqrt": false, + "tf.math.square": false, + "tf.math.squared_difference": false, + "tf.math.subtract": false, + "tf.math.tan": false, + "tf.math.tanh": false, + "tf.math.top_k": false, + "tf.math.truediv": false, + "tf.math.unsorted_segment_max": false, + "tf.math.unsorted_segment_mean": false, + "tf.math.unsorted_segment_min": false, + "tf.math.unsorted_segment_prod": false, + "tf.math.unsorted_segment_sqrt_n": false, + "tf.math.unsorted_segment_sum": false, + "tf.math.xdivy": false, + "tf.math.xlog1py": false, + "tf.math.xlogy": false, + "tf.math.zero_fraction": false, + "tf.math.zeta": false, + "tf.matmul": false, + "tf.matrix_square_root": false, + "tf.maximum": false, + "tf.meshgrid": false, + "tf.metrics": false, + "tf.metrics.AUC": false, + "tf.metrics.AUC.__call__": true, + "tf.metrics.AUC.__eq__": true, + "tf.metrics.AUC.__ge__": true, + "tf.metrics.AUC.__gt__": true, + "tf.metrics.AUC.__init__": true, + "tf.metrics.AUC.__le__": true, + "tf.metrics.AUC.__lt__": true, + "tf.metrics.AUC.__ne__": true, + "tf.metrics.AUC.__new__": true, + "tf.metrics.AUC.activity_regularizer": true, + "tf.metrics.AUC.add_loss": true, + "tf.metrics.AUC.add_metric": true, + "tf.metrics.AUC.add_weight": true, + "tf.metrics.AUC.build": true, + "tf.metrics.AUC.call": true, + "tf.metrics.AUC.compute_mask": true, + "tf.metrics.AUC.compute_output_shape": true, + "tf.metrics.AUC.compute_output_signature": true, + "tf.metrics.AUC.count_params": true, + "tf.metrics.AUC.dtype": true, + "tf.metrics.AUC.dynamic": true, + "tf.metrics.AUC.from_config": true, + "tf.metrics.AUC.get_config": true, + "tf.metrics.AUC.get_weights": true, + "tf.metrics.AUC.input": true, + "tf.metrics.AUC.input_spec": true, + "tf.metrics.AUC.interpolate_pr_auc": true, + "tf.metrics.AUC.losses": true, + "tf.metrics.AUC.metrics": true, + "tf.metrics.AUC.name": true, + "tf.metrics.AUC.name_scope": true, + "tf.metrics.AUC.non_trainable_weights": true, + "tf.metrics.AUC.output": true, + "tf.metrics.AUC.reset_states": true, + "tf.metrics.AUC.result": true, + "tf.metrics.AUC.set_weights": true, + "tf.metrics.AUC.submodules": true, + "tf.metrics.AUC.trainable": true, + "tf.metrics.AUC.trainable_weights": true, + "tf.metrics.AUC.update_state": true, + "tf.metrics.AUC.weights": true, + "tf.metrics.AUC.with_name_scope": true, + "tf.metrics.Accuracy": false, + "tf.metrics.Accuracy.__call__": true, + "tf.metrics.Accuracy.__eq__": true, + "tf.metrics.Accuracy.__ge__": true, + "tf.metrics.Accuracy.__gt__": true, + "tf.metrics.Accuracy.__init__": true, + "tf.metrics.Accuracy.__le__": true, + "tf.metrics.Accuracy.__lt__": true, + "tf.metrics.Accuracy.__ne__": true, + "tf.metrics.Accuracy.__new__": true, 
+ "tf.metrics.Accuracy.activity_regularizer": true, + "tf.metrics.Accuracy.add_loss": true, + "tf.metrics.Accuracy.add_metric": true, + "tf.metrics.Accuracy.add_weight": true, + "tf.metrics.Accuracy.build": true, + "tf.metrics.Accuracy.call": true, + "tf.metrics.Accuracy.compute_mask": true, + "tf.metrics.Accuracy.compute_output_shape": true, + "tf.metrics.Accuracy.compute_output_signature": true, + "tf.metrics.Accuracy.count_params": true, + "tf.metrics.Accuracy.dtype": true, + "tf.metrics.Accuracy.dynamic": true, + "tf.metrics.Accuracy.from_config": true, + "tf.metrics.Accuracy.get_config": true, + "tf.metrics.Accuracy.get_weights": true, + "tf.metrics.Accuracy.input": true, + "tf.metrics.Accuracy.input_spec": true, + "tf.metrics.Accuracy.losses": true, + "tf.metrics.Accuracy.metrics": true, + "tf.metrics.Accuracy.name": true, + "tf.metrics.Accuracy.name_scope": true, + "tf.metrics.Accuracy.non_trainable_weights": true, + "tf.metrics.Accuracy.output": true, + "tf.metrics.Accuracy.reset_states": true, + "tf.metrics.Accuracy.result": true, + "tf.metrics.Accuracy.set_weights": true, + "tf.metrics.Accuracy.submodules": true, + "tf.metrics.Accuracy.trainable": true, + "tf.metrics.Accuracy.trainable_weights": true, + "tf.metrics.Accuracy.update_state": true, + "tf.metrics.Accuracy.weights": true, + "tf.metrics.Accuracy.with_name_scope": true, + "tf.metrics.BinaryAccuracy": false, + "tf.metrics.BinaryAccuracy.__call__": true, + "tf.metrics.BinaryAccuracy.__eq__": true, + "tf.metrics.BinaryAccuracy.__ge__": true, + "tf.metrics.BinaryAccuracy.__gt__": true, + "tf.metrics.BinaryAccuracy.__init__": true, + "tf.metrics.BinaryAccuracy.__le__": true, + "tf.metrics.BinaryAccuracy.__lt__": true, + "tf.metrics.BinaryAccuracy.__ne__": true, + "tf.metrics.BinaryAccuracy.__new__": true, + "tf.metrics.BinaryAccuracy.activity_regularizer": true, + "tf.metrics.BinaryAccuracy.add_loss": true, + "tf.metrics.BinaryAccuracy.add_metric": true, + "tf.metrics.BinaryAccuracy.add_weight": true, + "tf.metrics.BinaryAccuracy.build": true, + "tf.metrics.BinaryAccuracy.call": true, + "tf.metrics.BinaryAccuracy.compute_mask": true, + "tf.metrics.BinaryAccuracy.compute_output_shape": true, + "tf.metrics.BinaryAccuracy.compute_output_signature": true, + "tf.metrics.BinaryAccuracy.count_params": true, + "tf.metrics.BinaryAccuracy.dtype": true, + "tf.metrics.BinaryAccuracy.dynamic": true, + "tf.metrics.BinaryAccuracy.from_config": true, + "tf.metrics.BinaryAccuracy.get_config": true, + "tf.metrics.BinaryAccuracy.get_weights": true, + "tf.metrics.BinaryAccuracy.input": true, + "tf.metrics.BinaryAccuracy.input_spec": true, + "tf.metrics.BinaryAccuracy.losses": true, + "tf.metrics.BinaryAccuracy.metrics": true, + "tf.metrics.BinaryAccuracy.name": true, + "tf.metrics.BinaryAccuracy.name_scope": true, + "tf.metrics.BinaryAccuracy.non_trainable_weights": true, + "tf.metrics.BinaryAccuracy.output": true, + "tf.metrics.BinaryAccuracy.reset_states": true, + "tf.metrics.BinaryAccuracy.result": true, + "tf.metrics.BinaryAccuracy.set_weights": true, + "tf.metrics.BinaryAccuracy.submodules": true, + "tf.metrics.BinaryAccuracy.trainable": true, + "tf.metrics.BinaryAccuracy.trainable_weights": true, + "tf.metrics.BinaryAccuracy.update_state": true, + "tf.metrics.BinaryAccuracy.weights": true, + "tf.metrics.BinaryAccuracy.with_name_scope": true, + "tf.metrics.BinaryCrossentropy": false, + "tf.metrics.BinaryCrossentropy.__call__": true, + "tf.metrics.BinaryCrossentropy.__eq__": true, + "tf.metrics.BinaryCrossentropy.__ge__": true, + 
"tf.metrics.BinaryCrossentropy.__gt__": true, + "tf.metrics.BinaryCrossentropy.__init__": true, + "tf.metrics.BinaryCrossentropy.__le__": true, + "tf.metrics.BinaryCrossentropy.__lt__": true, + "tf.metrics.BinaryCrossentropy.__ne__": true, + "tf.metrics.BinaryCrossentropy.__new__": true, + "tf.metrics.BinaryCrossentropy.activity_regularizer": true, + "tf.metrics.BinaryCrossentropy.add_loss": true, + "tf.metrics.BinaryCrossentropy.add_metric": true, + "tf.metrics.BinaryCrossentropy.add_weight": true, + "tf.metrics.BinaryCrossentropy.build": true, + "tf.metrics.BinaryCrossentropy.call": true, + "tf.metrics.BinaryCrossentropy.compute_mask": true, + "tf.metrics.BinaryCrossentropy.compute_output_shape": true, + "tf.metrics.BinaryCrossentropy.compute_output_signature": true, + "tf.metrics.BinaryCrossentropy.count_params": true, + "tf.metrics.BinaryCrossentropy.dtype": true, + "tf.metrics.BinaryCrossentropy.dynamic": true, + "tf.metrics.BinaryCrossentropy.from_config": true, + "tf.metrics.BinaryCrossentropy.get_config": true, + "tf.metrics.BinaryCrossentropy.get_weights": true, + "tf.metrics.BinaryCrossentropy.input": true, + "tf.metrics.BinaryCrossentropy.input_spec": true, + "tf.metrics.BinaryCrossentropy.losses": true, + "tf.metrics.BinaryCrossentropy.metrics": true, + "tf.metrics.BinaryCrossentropy.name": true, + "tf.metrics.BinaryCrossentropy.name_scope": true, + "tf.metrics.BinaryCrossentropy.non_trainable_weights": true, + "tf.metrics.BinaryCrossentropy.output": true, + "tf.metrics.BinaryCrossentropy.reset_states": true, + "tf.metrics.BinaryCrossentropy.result": true, + "tf.metrics.BinaryCrossentropy.set_weights": true, + "tf.metrics.BinaryCrossentropy.submodules": true, + "tf.metrics.BinaryCrossentropy.trainable": true, + "tf.metrics.BinaryCrossentropy.trainable_weights": true, + "tf.metrics.BinaryCrossentropy.update_state": true, + "tf.metrics.BinaryCrossentropy.weights": true, + "tf.metrics.BinaryCrossentropy.with_name_scope": true, + "tf.metrics.CategoricalAccuracy": false, + "tf.metrics.CategoricalAccuracy.__call__": true, + "tf.metrics.CategoricalAccuracy.__eq__": true, + "tf.metrics.CategoricalAccuracy.__ge__": true, + "tf.metrics.CategoricalAccuracy.__gt__": true, + "tf.metrics.CategoricalAccuracy.__init__": true, + "tf.metrics.CategoricalAccuracy.__le__": true, + "tf.metrics.CategoricalAccuracy.__lt__": true, + "tf.metrics.CategoricalAccuracy.__ne__": true, + "tf.metrics.CategoricalAccuracy.__new__": true, + "tf.metrics.CategoricalAccuracy.activity_regularizer": true, + "tf.metrics.CategoricalAccuracy.add_loss": true, + "tf.metrics.CategoricalAccuracy.add_metric": true, + "tf.metrics.CategoricalAccuracy.add_weight": true, + "tf.metrics.CategoricalAccuracy.build": true, + "tf.metrics.CategoricalAccuracy.call": true, + "tf.metrics.CategoricalAccuracy.compute_mask": true, + "tf.metrics.CategoricalAccuracy.compute_output_shape": true, + "tf.metrics.CategoricalAccuracy.compute_output_signature": true, + "tf.metrics.CategoricalAccuracy.count_params": true, + "tf.metrics.CategoricalAccuracy.dtype": true, + "tf.metrics.CategoricalAccuracy.dynamic": true, + "tf.metrics.CategoricalAccuracy.from_config": true, + "tf.metrics.CategoricalAccuracy.get_config": true, + "tf.metrics.CategoricalAccuracy.get_weights": true, + "tf.metrics.CategoricalAccuracy.input": true, + "tf.metrics.CategoricalAccuracy.input_spec": true, + "tf.metrics.CategoricalAccuracy.losses": true, + "tf.metrics.CategoricalAccuracy.metrics": true, + "tf.metrics.CategoricalAccuracy.name": true, + 
"tf.metrics.CategoricalAccuracy.name_scope": true, + "tf.metrics.CategoricalAccuracy.non_trainable_weights": true, + "tf.metrics.CategoricalAccuracy.output": true, + "tf.metrics.CategoricalAccuracy.reset_states": true, + "tf.metrics.CategoricalAccuracy.result": true, + "tf.metrics.CategoricalAccuracy.set_weights": true, + "tf.metrics.CategoricalAccuracy.submodules": true, + "tf.metrics.CategoricalAccuracy.trainable": true, + "tf.metrics.CategoricalAccuracy.trainable_weights": true, + "tf.metrics.CategoricalAccuracy.update_state": true, + "tf.metrics.CategoricalAccuracy.weights": true, + "tf.metrics.CategoricalAccuracy.with_name_scope": true, + "tf.metrics.CategoricalCrossentropy": false, + "tf.metrics.CategoricalCrossentropy.__call__": true, + "tf.metrics.CategoricalCrossentropy.__eq__": true, + "tf.metrics.CategoricalCrossentropy.__ge__": true, + "tf.metrics.CategoricalCrossentropy.__gt__": true, + "tf.metrics.CategoricalCrossentropy.__init__": true, + "tf.metrics.CategoricalCrossentropy.__le__": true, + "tf.metrics.CategoricalCrossentropy.__lt__": true, + "tf.metrics.CategoricalCrossentropy.__ne__": true, + "tf.metrics.CategoricalCrossentropy.__new__": true, + "tf.metrics.CategoricalCrossentropy.activity_regularizer": true, + "tf.metrics.CategoricalCrossentropy.add_loss": true, + "tf.metrics.CategoricalCrossentropy.add_metric": true, + "tf.metrics.CategoricalCrossentropy.add_weight": true, + "tf.metrics.CategoricalCrossentropy.build": true, + "tf.metrics.CategoricalCrossentropy.call": true, + "tf.metrics.CategoricalCrossentropy.compute_mask": true, + "tf.metrics.CategoricalCrossentropy.compute_output_shape": true, + "tf.metrics.CategoricalCrossentropy.compute_output_signature": true, + "tf.metrics.CategoricalCrossentropy.count_params": true, + "tf.metrics.CategoricalCrossentropy.dtype": true, + "tf.metrics.CategoricalCrossentropy.dynamic": true, + "tf.metrics.CategoricalCrossentropy.from_config": true, + "tf.metrics.CategoricalCrossentropy.get_config": true, + "tf.metrics.CategoricalCrossentropy.get_weights": true, + "tf.metrics.CategoricalCrossentropy.input": true, + "tf.metrics.CategoricalCrossentropy.input_spec": true, + "tf.metrics.CategoricalCrossentropy.losses": true, + "tf.metrics.CategoricalCrossentropy.metrics": true, + "tf.metrics.CategoricalCrossentropy.name": true, + "tf.metrics.CategoricalCrossentropy.name_scope": true, + "tf.metrics.CategoricalCrossentropy.non_trainable_weights": true, + "tf.metrics.CategoricalCrossentropy.output": true, + "tf.metrics.CategoricalCrossentropy.reset_states": true, + "tf.metrics.CategoricalCrossentropy.result": true, + "tf.metrics.CategoricalCrossentropy.set_weights": true, + "tf.metrics.CategoricalCrossentropy.submodules": true, + "tf.metrics.CategoricalCrossentropy.trainable": true, + "tf.metrics.CategoricalCrossentropy.trainable_weights": true, + "tf.metrics.CategoricalCrossentropy.update_state": true, + "tf.metrics.CategoricalCrossentropy.weights": true, + "tf.metrics.CategoricalCrossentropy.with_name_scope": true, + "tf.metrics.CategoricalHinge": false, + "tf.metrics.CategoricalHinge.__call__": true, + "tf.metrics.CategoricalHinge.__eq__": true, + "tf.metrics.CategoricalHinge.__ge__": true, + "tf.metrics.CategoricalHinge.__gt__": true, + "tf.metrics.CategoricalHinge.__init__": true, + "tf.metrics.CategoricalHinge.__le__": true, + "tf.metrics.CategoricalHinge.__lt__": true, + "tf.metrics.CategoricalHinge.__ne__": true, + "tf.metrics.CategoricalHinge.__new__": true, + "tf.metrics.CategoricalHinge.activity_regularizer": true, + 
"tf.metrics.CategoricalHinge.add_loss": true, + "tf.metrics.CategoricalHinge.add_metric": true, + "tf.metrics.CategoricalHinge.add_weight": true, + "tf.metrics.CategoricalHinge.build": true, + "tf.metrics.CategoricalHinge.call": true, + "tf.metrics.CategoricalHinge.compute_mask": true, + "tf.metrics.CategoricalHinge.compute_output_shape": true, + "tf.metrics.CategoricalHinge.compute_output_signature": true, + "tf.metrics.CategoricalHinge.count_params": true, + "tf.metrics.CategoricalHinge.dtype": true, + "tf.metrics.CategoricalHinge.dynamic": true, + "tf.metrics.CategoricalHinge.from_config": true, + "tf.metrics.CategoricalHinge.get_config": true, + "tf.metrics.CategoricalHinge.get_weights": true, + "tf.metrics.CategoricalHinge.input": true, + "tf.metrics.CategoricalHinge.input_spec": true, + "tf.metrics.CategoricalHinge.losses": true, + "tf.metrics.CategoricalHinge.metrics": true, + "tf.metrics.CategoricalHinge.name": true, + "tf.metrics.CategoricalHinge.name_scope": true, + "tf.metrics.CategoricalHinge.non_trainable_weights": true, + "tf.metrics.CategoricalHinge.output": true, + "tf.metrics.CategoricalHinge.reset_states": true, + "tf.metrics.CategoricalHinge.result": true, + "tf.metrics.CategoricalHinge.set_weights": true, + "tf.metrics.CategoricalHinge.submodules": true, + "tf.metrics.CategoricalHinge.trainable": true, + "tf.metrics.CategoricalHinge.trainable_weights": true, + "tf.metrics.CategoricalHinge.update_state": true, + "tf.metrics.CategoricalHinge.weights": true, + "tf.metrics.CategoricalHinge.with_name_scope": true, + "tf.metrics.CosineSimilarity": false, + "tf.metrics.CosineSimilarity.__call__": true, + "tf.metrics.CosineSimilarity.__eq__": true, + "tf.metrics.CosineSimilarity.__ge__": true, + "tf.metrics.CosineSimilarity.__gt__": true, + "tf.metrics.CosineSimilarity.__init__": true, + "tf.metrics.CosineSimilarity.__le__": true, + "tf.metrics.CosineSimilarity.__lt__": true, + "tf.metrics.CosineSimilarity.__ne__": true, + "tf.metrics.CosineSimilarity.__new__": true, + "tf.metrics.CosineSimilarity.activity_regularizer": true, + "tf.metrics.CosineSimilarity.add_loss": true, + "tf.metrics.CosineSimilarity.add_metric": true, + "tf.metrics.CosineSimilarity.add_weight": true, + "tf.metrics.CosineSimilarity.build": true, + "tf.metrics.CosineSimilarity.call": true, + "tf.metrics.CosineSimilarity.compute_mask": true, + "tf.metrics.CosineSimilarity.compute_output_shape": true, + "tf.metrics.CosineSimilarity.compute_output_signature": true, + "tf.metrics.CosineSimilarity.count_params": true, + "tf.metrics.CosineSimilarity.dtype": true, + "tf.metrics.CosineSimilarity.dynamic": true, + "tf.metrics.CosineSimilarity.from_config": true, + "tf.metrics.CosineSimilarity.get_config": true, + "tf.metrics.CosineSimilarity.get_weights": true, + "tf.metrics.CosineSimilarity.input": true, + "tf.metrics.CosineSimilarity.input_spec": true, + "tf.metrics.CosineSimilarity.losses": true, + "tf.metrics.CosineSimilarity.metrics": true, + "tf.metrics.CosineSimilarity.name": true, + "tf.metrics.CosineSimilarity.name_scope": true, + "tf.metrics.CosineSimilarity.non_trainable_weights": true, + "tf.metrics.CosineSimilarity.output": true, + "tf.metrics.CosineSimilarity.reset_states": true, + "tf.metrics.CosineSimilarity.result": true, + "tf.metrics.CosineSimilarity.set_weights": true, + "tf.metrics.CosineSimilarity.submodules": true, + "tf.metrics.CosineSimilarity.trainable": true, + "tf.metrics.CosineSimilarity.trainable_weights": true, + "tf.metrics.CosineSimilarity.update_state": true, + 
"tf.metrics.CosineSimilarity.weights": true, + "tf.metrics.CosineSimilarity.with_name_scope": true, + "tf.metrics.FalseNegatives": false, + "tf.metrics.FalseNegatives.__call__": true, + "tf.metrics.FalseNegatives.__eq__": true, + "tf.metrics.FalseNegatives.__ge__": true, + "tf.metrics.FalseNegatives.__gt__": true, + "tf.metrics.FalseNegatives.__init__": true, + "tf.metrics.FalseNegatives.__le__": true, + "tf.metrics.FalseNegatives.__lt__": true, + "tf.metrics.FalseNegatives.__ne__": true, + "tf.metrics.FalseNegatives.__new__": true, + "tf.metrics.FalseNegatives.activity_regularizer": true, + "tf.metrics.FalseNegatives.add_loss": true, + "tf.metrics.FalseNegatives.add_metric": true, + "tf.metrics.FalseNegatives.add_weight": true, + "tf.metrics.FalseNegatives.build": true, + "tf.metrics.FalseNegatives.call": true, + "tf.metrics.FalseNegatives.compute_mask": true, + "tf.metrics.FalseNegatives.compute_output_shape": true, + "tf.metrics.FalseNegatives.compute_output_signature": true, + "tf.metrics.FalseNegatives.count_params": true, + "tf.metrics.FalseNegatives.dtype": true, + "tf.metrics.FalseNegatives.dynamic": true, + "tf.metrics.FalseNegatives.from_config": true, + "tf.metrics.FalseNegatives.get_config": true, + "tf.metrics.FalseNegatives.get_weights": true, + "tf.metrics.FalseNegatives.input": true, + "tf.metrics.FalseNegatives.input_spec": true, + "tf.metrics.FalseNegatives.losses": true, + "tf.metrics.FalseNegatives.metrics": true, + "tf.metrics.FalseNegatives.name": true, + "tf.metrics.FalseNegatives.name_scope": true, + "tf.metrics.FalseNegatives.non_trainable_weights": true, + "tf.metrics.FalseNegatives.output": true, + "tf.metrics.FalseNegatives.reset_states": true, + "tf.metrics.FalseNegatives.result": true, + "tf.metrics.FalseNegatives.set_weights": true, + "tf.metrics.FalseNegatives.submodules": true, + "tf.metrics.FalseNegatives.trainable": true, + "tf.metrics.FalseNegatives.trainable_weights": true, + "tf.metrics.FalseNegatives.update_state": true, + "tf.metrics.FalseNegatives.weights": true, + "tf.metrics.FalseNegatives.with_name_scope": true, + "tf.metrics.FalsePositives": false, + "tf.metrics.FalsePositives.__call__": true, + "tf.metrics.FalsePositives.__eq__": true, + "tf.metrics.FalsePositives.__ge__": true, + "tf.metrics.FalsePositives.__gt__": true, + "tf.metrics.FalsePositives.__init__": true, + "tf.metrics.FalsePositives.__le__": true, + "tf.metrics.FalsePositives.__lt__": true, + "tf.metrics.FalsePositives.__ne__": true, + "tf.metrics.FalsePositives.__new__": true, + "tf.metrics.FalsePositives.activity_regularizer": true, + "tf.metrics.FalsePositives.add_loss": true, + "tf.metrics.FalsePositives.add_metric": true, + "tf.metrics.FalsePositives.add_weight": true, + "tf.metrics.FalsePositives.build": true, + "tf.metrics.FalsePositives.call": true, + "tf.metrics.FalsePositives.compute_mask": true, + "tf.metrics.FalsePositives.compute_output_shape": true, + "tf.metrics.FalsePositives.compute_output_signature": true, + "tf.metrics.FalsePositives.count_params": true, + "tf.metrics.FalsePositives.dtype": true, + "tf.metrics.FalsePositives.dynamic": true, + "tf.metrics.FalsePositives.from_config": true, + "tf.metrics.FalsePositives.get_config": true, + "tf.metrics.FalsePositives.get_weights": true, + "tf.metrics.FalsePositives.input": true, + "tf.metrics.FalsePositives.input_spec": true, + "tf.metrics.FalsePositives.losses": true, + "tf.metrics.FalsePositives.metrics": true, + "tf.metrics.FalsePositives.name": true, + "tf.metrics.FalsePositives.name_scope": true, + 
"tf.metrics.FalsePositives.non_trainable_weights": true, + "tf.metrics.FalsePositives.output": true, + "tf.metrics.FalsePositives.reset_states": true, + "tf.metrics.FalsePositives.result": true, + "tf.metrics.FalsePositives.set_weights": true, + "tf.metrics.FalsePositives.submodules": true, + "tf.metrics.FalsePositives.trainable": true, + "tf.metrics.FalsePositives.trainable_weights": true, + "tf.metrics.FalsePositives.update_state": true, + "tf.metrics.FalsePositives.weights": true, + "tf.metrics.FalsePositives.with_name_scope": true, + "tf.metrics.Hinge": false, + "tf.metrics.Hinge.__call__": true, + "tf.metrics.Hinge.__eq__": true, + "tf.metrics.Hinge.__ge__": true, + "tf.metrics.Hinge.__gt__": true, + "tf.metrics.Hinge.__init__": true, + "tf.metrics.Hinge.__le__": true, + "tf.metrics.Hinge.__lt__": true, + "tf.metrics.Hinge.__ne__": true, + "tf.metrics.Hinge.__new__": true, + "tf.metrics.Hinge.activity_regularizer": true, + "tf.metrics.Hinge.add_loss": true, + "tf.metrics.Hinge.add_metric": true, + "tf.metrics.Hinge.add_weight": true, + "tf.metrics.Hinge.build": true, + "tf.metrics.Hinge.call": true, + "tf.metrics.Hinge.compute_mask": true, + "tf.metrics.Hinge.compute_output_shape": true, + "tf.metrics.Hinge.compute_output_signature": true, + "tf.metrics.Hinge.count_params": true, + "tf.metrics.Hinge.dtype": true, + "tf.metrics.Hinge.dynamic": true, + "tf.metrics.Hinge.from_config": true, + "tf.metrics.Hinge.get_config": true, + "tf.metrics.Hinge.get_weights": true, + "tf.metrics.Hinge.input": true, + "tf.metrics.Hinge.input_spec": true, + "tf.metrics.Hinge.losses": true, + "tf.metrics.Hinge.metrics": true, + "tf.metrics.Hinge.name": true, + "tf.metrics.Hinge.name_scope": true, + "tf.metrics.Hinge.non_trainable_weights": true, + "tf.metrics.Hinge.output": true, + "tf.metrics.Hinge.reset_states": true, + "tf.metrics.Hinge.result": true, + "tf.metrics.Hinge.set_weights": true, + "tf.metrics.Hinge.submodules": true, + "tf.metrics.Hinge.trainable": true, + "tf.metrics.Hinge.trainable_weights": true, + "tf.metrics.Hinge.update_state": true, + "tf.metrics.Hinge.weights": true, + "tf.metrics.Hinge.with_name_scope": true, + "tf.metrics.KLD": false, + "tf.metrics.KLDivergence": false, + "tf.metrics.KLDivergence.__call__": true, + "tf.metrics.KLDivergence.__eq__": true, + "tf.metrics.KLDivergence.__ge__": true, + "tf.metrics.KLDivergence.__gt__": true, + "tf.metrics.KLDivergence.__init__": true, + "tf.metrics.KLDivergence.__le__": true, + "tf.metrics.KLDivergence.__lt__": true, + "tf.metrics.KLDivergence.__ne__": true, + "tf.metrics.KLDivergence.__new__": true, + "tf.metrics.KLDivergence.activity_regularizer": true, + "tf.metrics.KLDivergence.add_loss": true, + "tf.metrics.KLDivergence.add_metric": true, + "tf.metrics.KLDivergence.add_weight": true, + "tf.metrics.KLDivergence.build": true, + "tf.metrics.KLDivergence.call": true, + "tf.metrics.KLDivergence.compute_mask": true, + "tf.metrics.KLDivergence.compute_output_shape": true, + "tf.metrics.KLDivergence.compute_output_signature": true, + "tf.metrics.KLDivergence.count_params": true, + "tf.metrics.KLDivergence.dtype": true, + "tf.metrics.KLDivergence.dynamic": true, + "tf.metrics.KLDivergence.from_config": true, + "tf.metrics.KLDivergence.get_config": true, + "tf.metrics.KLDivergence.get_weights": true, + "tf.metrics.KLDivergence.input": true, + "tf.metrics.KLDivergence.input_spec": true, + "tf.metrics.KLDivergence.losses": true, + "tf.metrics.KLDivergence.metrics": true, + "tf.metrics.KLDivergence.name": true, + 
"tf.metrics.KLDivergence.name_scope": true, + "tf.metrics.KLDivergence.non_trainable_weights": true, + "tf.metrics.KLDivergence.output": true, + "tf.metrics.KLDivergence.reset_states": true, + "tf.metrics.KLDivergence.result": true, + "tf.metrics.KLDivergence.set_weights": true, + "tf.metrics.KLDivergence.submodules": true, + "tf.metrics.KLDivergence.trainable": true, + "tf.metrics.KLDivergence.trainable_weights": true, + "tf.metrics.KLDivergence.update_state": true, + "tf.metrics.KLDivergence.weights": true, + "tf.metrics.KLDivergence.with_name_scope": true, + "tf.metrics.LogCoshError": false, + "tf.metrics.LogCoshError.__call__": true, + "tf.metrics.LogCoshError.__eq__": true, + "tf.metrics.LogCoshError.__ge__": true, + "tf.metrics.LogCoshError.__gt__": true, + "tf.metrics.LogCoshError.__init__": true, + "tf.metrics.LogCoshError.__le__": true, + "tf.metrics.LogCoshError.__lt__": true, + "tf.metrics.LogCoshError.__ne__": true, + "tf.metrics.LogCoshError.__new__": true, + "tf.metrics.LogCoshError.activity_regularizer": true, + "tf.metrics.LogCoshError.add_loss": true, + "tf.metrics.LogCoshError.add_metric": true, + "tf.metrics.LogCoshError.add_weight": true, + "tf.metrics.LogCoshError.build": true, + "tf.metrics.LogCoshError.call": true, + "tf.metrics.LogCoshError.compute_mask": true, + "tf.metrics.LogCoshError.compute_output_shape": true, + "tf.metrics.LogCoshError.compute_output_signature": true, + "tf.metrics.LogCoshError.count_params": true, + "tf.metrics.LogCoshError.dtype": true, + "tf.metrics.LogCoshError.dynamic": true, + "tf.metrics.LogCoshError.from_config": true, + "tf.metrics.LogCoshError.get_config": true, + "tf.metrics.LogCoshError.get_weights": true, + "tf.metrics.LogCoshError.input": true, + "tf.metrics.LogCoshError.input_spec": true, + "tf.metrics.LogCoshError.losses": true, + "tf.metrics.LogCoshError.metrics": true, + "tf.metrics.LogCoshError.name": true, + "tf.metrics.LogCoshError.name_scope": true, + "tf.metrics.LogCoshError.non_trainable_weights": true, + "tf.metrics.LogCoshError.output": true, + "tf.metrics.LogCoshError.reset_states": true, + "tf.metrics.LogCoshError.result": true, + "tf.metrics.LogCoshError.set_weights": true, + "tf.metrics.LogCoshError.submodules": true, + "tf.metrics.LogCoshError.trainable": true, + "tf.metrics.LogCoshError.trainable_weights": true, + "tf.metrics.LogCoshError.update_state": true, + "tf.metrics.LogCoshError.weights": true, + "tf.metrics.LogCoshError.with_name_scope": true, + "tf.metrics.MAE": false, + "tf.metrics.MAPE": false, + "tf.metrics.MSE": false, + "tf.metrics.MSLE": false, + "tf.metrics.Mean": false, + "tf.metrics.Mean.__call__": true, + "tf.metrics.Mean.__eq__": true, + "tf.metrics.Mean.__ge__": true, + "tf.metrics.Mean.__gt__": true, + "tf.metrics.Mean.__init__": true, + "tf.metrics.Mean.__le__": true, + "tf.metrics.Mean.__lt__": true, + "tf.metrics.Mean.__ne__": true, + "tf.metrics.Mean.__new__": true, + "tf.metrics.Mean.activity_regularizer": true, + "tf.metrics.Mean.add_loss": true, + "tf.metrics.Mean.add_metric": true, + "tf.metrics.Mean.add_weight": true, + "tf.metrics.Mean.build": true, + "tf.metrics.Mean.call": true, + "tf.metrics.Mean.compute_mask": true, + "tf.metrics.Mean.compute_output_shape": true, + "tf.metrics.Mean.compute_output_signature": true, + "tf.metrics.Mean.count_params": true, + "tf.metrics.Mean.dtype": true, + "tf.metrics.Mean.dynamic": true, + "tf.metrics.Mean.from_config": true, + "tf.metrics.Mean.get_config": true, + "tf.metrics.Mean.get_weights": true, + "tf.metrics.Mean.input": true, + 
"tf.metrics.Mean.input_spec": true, + "tf.metrics.Mean.losses": true, + "tf.metrics.Mean.metrics": true, + "tf.metrics.Mean.name": true, + "tf.metrics.Mean.name_scope": true, + "tf.metrics.Mean.non_trainable_weights": true, + "tf.metrics.Mean.output": true, + "tf.metrics.Mean.reset_states": true, + "tf.metrics.Mean.result": true, + "tf.metrics.Mean.set_weights": true, + "tf.metrics.Mean.submodules": true, + "tf.metrics.Mean.trainable": true, + "tf.metrics.Mean.trainable_weights": true, + "tf.metrics.Mean.update_state": true, + "tf.metrics.Mean.weights": true, + "tf.metrics.Mean.with_name_scope": true, + "tf.metrics.MeanAbsoluteError": false, + "tf.metrics.MeanAbsoluteError.__call__": true, + "tf.metrics.MeanAbsoluteError.__eq__": true, + "tf.metrics.MeanAbsoluteError.__ge__": true, + "tf.metrics.MeanAbsoluteError.__gt__": true, + "tf.metrics.MeanAbsoluteError.__init__": true, + "tf.metrics.MeanAbsoluteError.__le__": true, + "tf.metrics.MeanAbsoluteError.__lt__": true, + "tf.metrics.MeanAbsoluteError.__ne__": true, + "tf.metrics.MeanAbsoluteError.__new__": true, + "tf.metrics.MeanAbsoluteError.activity_regularizer": true, + "tf.metrics.MeanAbsoluteError.add_loss": true, + "tf.metrics.MeanAbsoluteError.add_metric": true, + "tf.metrics.MeanAbsoluteError.add_weight": true, + "tf.metrics.MeanAbsoluteError.build": true, + "tf.metrics.MeanAbsoluteError.call": true, + "tf.metrics.MeanAbsoluteError.compute_mask": true, + "tf.metrics.MeanAbsoluteError.compute_output_shape": true, + "tf.metrics.MeanAbsoluteError.compute_output_signature": true, + "tf.metrics.MeanAbsoluteError.count_params": true, + "tf.metrics.MeanAbsoluteError.dtype": true, + "tf.metrics.MeanAbsoluteError.dynamic": true, + "tf.metrics.MeanAbsoluteError.from_config": true, + "tf.metrics.MeanAbsoluteError.get_config": true, + "tf.metrics.MeanAbsoluteError.get_weights": true, + "tf.metrics.MeanAbsoluteError.input": true, + "tf.metrics.MeanAbsoluteError.input_spec": true, + "tf.metrics.MeanAbsoluteError.losses": true, + "tf.metrics.MeanAbsoluteError.metrics": true, + "tf.metrics.MeanAbsoluteError.name": true, + "tf.metrics.MeanAbsoluteError.name_scope": true, + "tf.metrics.MeanAbsoluteError.non_trainable_weights": true, + "tf.metrics.MeanAbsoluteError.output": true, + "tf.metrics.MeanAbsoluteError.reset_states": true, + "tf.metrics.MeanAbsoluteError.result": true, + "tf.metrics.MeanAbsoluteError.set_weights": true, + "tf.metrics.MeanAbsoluteError.submodules": true, + "tf.metrics.MeanAbsoluteError.trainable": true, + "tf.metrics.MeanAbsoluteError.trainable_weights": true, + "tf.metrics.MeanAbsoluteError.update_state": true, + "tf.metrics.MeanAbsoluteError.weights": true, + "tf.metrics.MeanAbsoluteError.with_name_scope": true, + "tf.metrics.MeanAbsolutePercentageError": false, + "tf.metrics.MeanAbsolutePercentageError.__call__": true, + "tf.metrics.MeanAbsolutePercentageError.__eq__": true, + "tf.metrics.MeanAbsolutePercentageError.__ge__": true, + "tf.metrics.MeanAbsolutePercentageError.__gt__": true, + "tf.metrics.MeanAbsolutePercentageError.__init__": true, + "tf.metrics.MeanAbsolutePercentageError.__le__": true, + "tf.metrics.MeanAbsolutePercentageError.__lt__": true, + "tf.metrics.MeanAbsolutePercentageError.__ne__": true, + "tf.metrics.MeanAbsolutePercentageError.__new__": true, + "tf.metrics.MeanAbsolutePercentageError.activity_regularizer": true, + "tf.metrics.MeanAbsolutePercentageError.add_loss": true, + "tf.metrics.MeanAbsolutePercentageError.add_metric": true, + "tf.metrics.MeanAbsolutePercentageError.add_weight": true, + 
"tf.metrics.MeanAbsolutePercentageError.build": true, + "tf.metrics.MeanAbsolutePercentageError.call": true, + "tf.metrics.MeanAbsolutePercentageError.compute_mask": true, + "tf.metrics.MeanAbsolutePercentageError.compute_output_shape": true, + "tf.metrics.MeanAbsolutePercentageError.compute_output_signature": true, + "tf.metrics.MeanAbsolutePercentageError.count_params": true, + "tf.metrics.MeanAbsolutePercentageError.dtype": true, + "tf.metrics.MeanAbsolutePercentageError.dynamic": true, + "tf.metrics.MeanAbsolutePercentageError.from_config": true, + "tf.metrics.MeanAbsolutePercentageError.get_config": true, + "tf.metrics.MeanAbsolutePercentageError.get_weights": true, + "tf.metrics.MeanAbsolutePercentageError.input": true, + "tf.metrics.MeanAbsolutePercentageError.input_spec": true, + "tf.metrics.MeanAbsolutePercentageError.losses": true, + "tf.metrics.MeanAbsolutePercentageError.metrics": true, + "tf.metrics.MeanAbsolutePercentageError.name": true, + "tf.metrics.MeanAbsolutePercentageError.name_scope": true, + "tf.metrics.MeanAbsolutePercentageError.non_trainable_weights": true, + "tf.metrics.MeanAbsolutePercentageError.output": true, + "tf.metrics.MeanAbsolutePercentageError.reset_states": true, + "tf.metrics.MeanAbsolutePercentageError.result": true, + "tf.metrics.MeanAbsolutePercentageError.set_weights": true, + "tf.metrics.MeanAbsolutePercentageError.submodules": true, + "tf.metrics.MeanAbsolutePercentageError.trainable": true, + "tf.metrics.MeanAbsolutePercentageError.trainable_weights": true, + "tf.metrics.MeanAbsolutePercentageError.update_state": true, + "tf.metrics.MeanAbsolutePercentageError.weights": true, + "tf.metrics.MeanAbsolutePercentageError.with_name_scope": true, + "tf.metrics.MeanIoU": false, + "tf.metrics.MeanIoU.__call__": true, + "tf.metrics.MeanIoU.__eq__": true, + "tf.metrics.MeanIoU.__ge__": true, + "tf.metrics.MeanIoU.__gt__": true, + "tf.metrics.MeanIoU.__init__": true, + "tf.metrics.MeanIoU.__le__": true, + "tf.metrics.MeanIoU.__lt__": true, + "tf.metrics.MeanIoU.__ne__": true, + "tf.metrics.MeanIoU.__new__": true, + "tf.metrics.MeanIoU.activity_regularizer": true, + "tf.metrics.MeanIoU.add_loss": true, + "tf.metrics.MeanIoU.add_metric": true, + "tf.metrics.MeanIoU.add_weight": true, + "tf.metrics.MeanIoU.build": true, + "tf.metrics.MeanIoU.call": true, + "tf.metrics.MeanIoU.compute_mask": true, + "tf.metrics.MeanIoU.compute_output_shape": true, + "tf.metrics.MeanIoU.compute_output_signature": true, + "tf.metrics.MeanIoU.count_params": true, + "tf.metrics.MeanIoU.dtype": true, + "tf.metrics.MeanIoU.dynamic": true, + "tf.metrics.MeanIoU.from_config": true, + "tf.metrics.MeanIoU.get_config": true, + "tf.metrics.MeanIoU.get_weights": true, + "tf.metrics.MeanIoU.input": true, + "tf.metrics.MeanIoU.input_spec": true, + "tf.metrics.MeanIoU.losses": true, + "tf.metrics.MeanIoU.metrics": true, + "tf.metrics.MeanIoU.name": true, + "tf.metrics.MeanIoU.name_scope": true, + "tf.metrics.MeanIoU.non_trainable_weights": true, + "tf.metrics.MeanIoU.output": true, + "tf.metrics.MeanIoU.reset_states": true, + "tf.metrics.MeanIoU.result": true, + "tf.metrics.MeanIoU.set_weights": true, + "tf.metrics.MeanIoU.submodules": true, + "tf.metrics.MeanIoU.trainable": true, + "tf.metrics.MeanIoU.trainable_weights": true, + "tf.metrics.MeanIoU.update_state": true, + "tf.metrics.MeanIoU.weights": true, + "tf.metrics.MeanIoU.with_name_scope": true, + "tf.metrics.MeanRelativeError": false, + "tf.metrics.MeanRelativeError.__call__": true, + "tf.metrics.MeanRelativeError.__eq__": true, + 
"tf.metrics.MeanRelativeError.__ge__": true, + "tf.metrics.MeanRelativeError.__gt__": true, + "tf.metrics.MeanRelativeError.__init__": true, + "tf.metrics.MeanRelativeError.__le__": true, + "tf.metrics.MeanRelativeError.__lt__": true, + "tf.metrics.MeanRelativeError.__ne__": true, + "tf.metrics.MeanRelativeError.__new__": true, + "tf.metrics.MeanRelativeError.activity_regularizer": true, + "tf.metrics.MeanRelativeError.add_loss": true, + "tf.metrics.MeanRelativeError.add_metric": true, + "tf.metrics.MeanRelativeError.add_weight": true, + "tf.metrics.MeanRelativeError.build": true, + "tf.metrics.MeanRelativeError.call": true, + "tf.metrics.MeanRelativeError.compute_mask": true, + "tf.metrics.MeanRelativeError.compute_output_shape": true, + "tf.metrics.MeanRelativeError.compute_output_signature": true, + "tf.metrics.MeanRelativeError.count_params": true, + "tf.metrics.MeanRelativeError.dtype": true, + "tf.metrics.MeanRelativeError.dynamic": true, + "tf.metrics.MeanRelativeError.from_config": true, + "tf.metrics.MeanRelativeError.get_config": true, + "tf.metrics.MeanRelativeError.get_weights": true, + "tf.metrics.MeanRelativeError.input": true, + "tf.metrics.MeanRelativeError.input_spec": true, + "tf.metrics.MeanRelativeError.losses": true, + "tf.metrics.MeanRelativeError.metrics": true, + "tf.metrics.MeanRelativeError.name": true, + "tf.metrics.MeanRelativeError.name_scope": true, + "tf.metrics.MeanRelativeError.non_trainable_weights": true, + "tf.metrics.MeanRelativeError.output": true, + "tf.metrics.MeanRelativeError.reset_states": true, + "tf.metrics.MeanRelativeError.result": true, + "tf.metrics.MeanRelativeError.set_weights": true, + "tf.metrics.MeanRelativeError.submodules": true, + "tf.metrics.MeanRelativeError.trainable": true, + "tf.metrics.MeanRelativeError.trainable_weights": true, + "tf.metrics.MeanRelativeError.update_state": true, + "tf.metrics.MeanRelativeError.weights": true, + "tf.metrics.MeanRelativeError.with_name_scope": true, + "tf.metrics.MeanSquaredError": false, + "tf.metrics.MeanSquaredError.__call__": true, + "tf.metrics.MeanSquaredError.__eq__": true, + "tf.metrics.MeanSquaredError.__ge__": true, + "tf.metrics.MeanSquaredError.__gt__": true, + "tf.metrics.MeanSquaredError.__init__": true, + "tf.metrics.MeanSquaredError.__le__": true, + "tf.metrics.MeanSquaredError.__lt__": true, + "tf.metrics.MeanSquaredError.__ne__": true, + "tf.metrics.MeanSquaredError.__new__": true, + "tf.metrics.MeanSquaredError.activity_regularizer": true, + "tf.metrics.MeanSquaredError.add_loss": true, + "tf.metrics.MeanSquaredError.add_metric": true, + "tf.metrics.MeanSquaredError.add_weight": true, + "tf.metrics.MeanSquaredError.build": true, + "tf.metrics.MeanSquaredError.call": true, + "tf.metrics.MeanSquaredError.compute_mask": true, + "tf.metrics.MeanSquaredError.compute_output_shape": true, + "tf.metrics.MeanSquaredError.compute_output_signature": true, + "tf.metrics.MeanSquaredError.count_params": true, + "tf.metrics.MeanSquaredError.dtype": true, + "tf.metrics.MeanSquaredError.dynamic": true, + "tf.metrics.MeanSquaredError.from_config": true, + "tf.metrics.MeanSquaredError.get_config": true, + "tf.metrics.MeanSquaredError.get_weights": true, + "tf.metrics.MeanSquaredError.input": true, + "tf.metrics.MeanSquaredError.input_spec": true, + "tf.metrics.MeanSquaredError.losses": true, + "tf.metrics.MeanSquaredError.metrics": true, + "tf.metrics.MeanSquaredError.name": true, + "tf.metrics.MeanSquaredError.name_scope": true, + "tf.metrics.MeanSquaredError.non_trainable_weights": true, + 
"tf.metrics.MeanSquaredError.output": true, + "tf.metrics.MeanSquaredError.reset_states": true, + "tf.metrics.MeanSquaredError.result": true, + "tf.metrics.MeanSquaredError.set_weights": true, + "tf.metrics.MeanSquaredError.submodules": true, + "tf.metrics.MeanSquaredError.trainable": true, + "tf.metrics.MeanSquaredError.trainable_weights": true, + "tf.metrics.MeanSquaredError.update_state": true, + "tf.metrics.MeanSquaredError.weights": true, + "tf.metrics.MeanSquaredError.with_name_scope": true, + "tf.metrics.MeanSquaredLogarithmicError": false, + "tf.metrics.MeanSquaredLogarithmicError.__call__": true, + "tf.metrics.MeanSquaredLogarithmicError.__eq__": true, + "tf.metrics.MeanSquaredLogarithmicError.__ge__": true, + "tf.metrics.MeanSquaredLogarithmicError.__gt__": true, + "tf.metrics.MeanSquaredLogarithmicError.__init__": true, + "tf.metrics.MeanSquaredLogarithmicError.__le__": true, + "tf.metrics.MeanSquaredLogarithmicError.__lt__": true, + "tf.metrics.MeanSquaredLogarithmicError.__ne__": true, + "tf.metrics.MeanSquaredLogarithmicError.__new__": true, + "tf.metrics.MeanSquaredLogarithmicError.activity_regularizer": true, + "tf.metrics.MeanSquaredLogarithmicError.add_loss": true, + "tf.metrics.MeanSquaredLogarithmicError.add_metric": true, + "tf.metrics.MeanSquaredLogarithmicError.add_weight": true, + "tf.metrics.MeanSquaredLogarithmicError.build": true, + "tf.metrics.MeanSquaredLogarithmicError.call": true, + "tf.metrics.MeanSquaredLogarithmicError.compute_mask": true, + "tf.metrics.MeanSquaredLogarithmicError.compute_output_shape": true, + "tf.metrics.MeanSquaredLogarithmicError.compute_output_signature": true, + "tf.metrics.MeanSquaredLogarithmicError.count_params": true, + "tf.metrics.MeanSquaredLogarithmicError.dtype": true, + "tf.metrics.MeanSquaredLogarithmicError.dynamic": true, + "tf.metrics.MeanSquaredLogarithmicError.from_config": true, + "tf.metrics.MeanSquaredLogarithmicError.get_config": true, + "tf.metrics.MeanSquaredLogarithmicError.get_weights": true, + "tf.metrics.MeanSquaredLogarithmicError.input": true, + "tf.metrics.MeanSquaredLogarithmicError.input_spec": true, + "tf.metrics.MeanSquaredLogarithmicError.losses": true, + "tf.metrics.MeanSquaredLogarithmicError.metrics": true, + "tf.metrics.MeanSquaredLogarithmicError.name": true, + "tf.metrics.MeanSquaredLogarithmicError.name_scope": true, + "tf.metrics.MeanSquaredLogarithmicError.non_trainable_weights": true, + "tf.metrics.MeanSquaredLogarithmicError.output": true, + "tf.metrics.MeanSquaredLogarithmicError.reset_states": true, + "tf.metrics.MeanSquaredLogarithmicError.result": true, + "tf.metrics.MeanSquaredLogarithmicError.set_weights": true, + "tf.metrics.MeanSquaredLogarithmicError.submodules": true, + "tf.metrics.MeanSquaredLogarithmicError.trainable": true, + "tf.metrics.MeanSquaredLogarithmicError.trainable_weights": true, + "tf.metrics.MeanSquaredLogarithmicError.update_state": true, + "tf.metrics.MeanSquaredLogarithmicError.weights": true, + "tf.metrics.MeanSquaredLogarithmicError.with_name_scope": true, + "tf.metrics.MeanTensor": false, + "tf.metrics.MeanTensor.__call__": true, + "tf.metrics.MeanTensor.__eq__": true, + "tf.metrics.MeanTensor.__ge__": true, + "tf.metrics.MeanTensor.__gt__": true, + "tf.metrics.MeanTensor.__init__": true, + "tf.metrics.MeanTensor.__le__": true, + "tf.metrics.MeanTensor.__lt__": true, + "tf.metrics.MeanTensor.__ne__": true, + "tf.metrics.MeanTensor.__new__": true, + "tf.metrics.MeanTensor.activity_regularizer": true, + "tf.metrics.MeanTensor.add_loss": true, + 
"tf.metrics.MeanTensor.add_metric": true, + "tf.metrics.MeanTensor.add_weight": true, + "tf.metrics.MeanTensor.build": true, + "tf.metrics.MeanTensor.call": true, + "tf.metrics.MeanTensor.compute_mask": true, + "tf.metrics.MeanTensor.compute_output_shape": true, + "tf.metrics.MeanTensor.compute_output_signature": true, + "tf.metrics.MeanTensor.count": true, + "tf.metrics.MeanTensor.count_params": true, + "tf.metrics.MeanTensor.dtype": true, + "tf.metrics.MeanTensor.dynamic": true, + "tf.metrics.MeanTensor.from_config": true, + "tf.metrics.MeanTensor.get_config": true, + "tf.metrics.MeanTensor.get_weights": true, + "tf.metrics.MeanTensor.input": true, + "tf.metrics.MeanTensor.input_spec": true, + "tf.metrics.MeanTensor.losses": true, + "tf.metrics.MeanTensor.metrics": true, + "tf.metrics.MeanTensor.name": true, + "tf.metrics.MeanTensor.name_scope": true, + "tf.metrics.MeanTensor.non_trainable_weights": true, + "tf.metrics.MeanTensor.output": true, + "tf.metrics.MeanTensor.reset_states": true, + "tf.metrics.MeanTensor.result": true, + "tf.metrics.MeanTensor.set_weights": true, + "tf.metrics.MeanTensor.submodules": true, + "tf.metrics.MeanTensor.total": true, + "tf.metrics.MeanTensor.trainable": true, + "tf.metrics.MeanTensor.trainable_weights": true, + "tf.metrics.MeanTensor.update_state": true, + "tf.metrics.MeanTensor.weights": true, + "tf.metrics.MeanTensor.with_name_scope": true, + "tf.metrics.Metric": false, + "tf.metrics.Metric.__call__": true, + "tf.metrics.Metric.__eq__": true, + "tf.metrics.Metric.__ge__": true, + "tf.metrics.Metric.__gt__": true, + "tf.metrics.Metric.__init__": true, + "tf.metrics.Metric.__le__": true, + "tf.metrics.Metric.__lt__": true, + "tf.metrics.Metric.__ne__": true, + "tf.metrics.Metric.__new__": true, + "tf.metrics.Metric.activity_regularizer": true, + "tf.metrics.Metric.add_loss": true, + "tf.metrics.Metric.add_metric": true, + "tf.metrics.Metric.add_weight": true, + "tf.metrics.Metric.build": true, + "tf.metrics.Metric.call": true, + "tf.metrics.Metric.compute_mask": true, + "tf.metrics.Metric.compute_output_shape": true, + "tf.metrics.Metric.compute_output_signature": true, + "tf.metrics.Metric.count_params": true, + "tf.metrics.Metric.dtype": true, + "tf.metrics.Metric.dynamic": true, + "tf.metrics.Metric.from_config": true, + "tf.metrics.Metric.get_config": true, + "tf.metrics.Metric.get_weights": true, + "tf.metrics.Metric.input": true, + "tf.metrics.Metric.input_spec": true, + "tf.metrics.Metric.losses": true, + "tf.metrics.Metric.metrics": true, + "tf.metrics.Metric.name": true, + "tf.metrics.Metric.name_scope": true, + "tf.metrics.Metric.non_trainable_weights": true, + "tf.metrics.Metric.output": true, + "tf.metrics.Metric.reset_states": true, + "tf.metrics.Metric.result": true, + "tf.metrics.Metric.set_weights": true, + "tf.metrics.Metric.submodules": true, + "tf.metrics.Metric.trainable": true, + "tf.metrics.Metric.trainable_weights": true, + "tf.metrics.Metric.update_state": true, + "tf.metrics.Metric.weights": true, + "tf.metrics.Metric.with_name_scope": true, + "tf.metrics.Poisson": false, + "tf.metrics.Poisson.__call__": true, + "tf.metrics.Poisson.__eq__": true, + "tf.metrics.Poisson.__ge__": true, + "tf.metrics.Poisson.__gt__": true, + "tf.metrics.Poisson.__init__": true, + "tf.metrics.Poisson.__le__": true, + "tf.metrics.Poisson.__lt__": true, + "tf.metrics.Poisson.__ne__": true, + "tf.metrics.Poisson.__new__": true, + "tf.metrics.Poisson.activity_regularizer": true, + "tf.metrics.Poisson.add_loss": true, + "tf.metrics.Poisson.add_metric": 
true, + "tf.metrics.Poisson.add_weight": true, + "tf.metrics.Poisson.build": true, + "tf.metrics.Poisson.call": true, + "tf.metrics.Poisson.compute_mask": true, + "tf.metrics.Poisson.compute_output_shape": true, + "tf.metrics.Poisson.compute_output_signature": true, + "tf.metrics.Poisson.count_params": true, + "tf.metrics.Poisson.dtype": true, + "tf.metrics.Poisson.dynamic": true, + "tf.metrics.Poisson.from_config": true, + "tf.metrics.Poisson.get_config": true, + "tf.metrics.Poisson.get_weights": true, + "tf.metrics.Poisson.input": true, + "tf.metrics.Poisson.input_spec": true, + "tf.metrics.Poisson.losses": true, + "tf.metrics.Poisson.metrics": true, + "tf.metrics.Poisson.name": true, + "tf.metrics.Poisson.name_scope": true, + "tf.metrics.Poisson.non_trainable_weights": true, + "tf.metrics.Poisson.output": true, + "tf.metrics.Poisson.reset_states": true, + "tf.metrics.Poisson.result": true, + "tf.metrics.Poisson.set_weights": true, + "tf.metrics.Poisson.submodules": true, + "tf.metrics.Poisson.trainable": true, + "tf.metrics.Poisson.trainable_weights": true, + "tf.metrics.Poisson.update_state": true, + "tf.metrics.Poisson.weights": true, + "tf.metrics.Poisson.with_name_scope": true, + "tf.metrics.Precision": false, + "tf.metrics.Precision.__call__": true, + "tf.metrics.Precision.__eq__": true, + "tf.metrics.Precision.__ge__": true, + "tf.metrics.Precision.__gt__": true, + "tf.metrics.Precision.__init__": true, + "tf.metrics.Precision.__le__": true, + "tf.metrics.Precision.__lt__": true, + "tf.metrics.Precision.__ne__": true, + "tf.metrics.Precision.__new__": true, + "tf.metrics.Precision.activity_regularizer": true, + "tf.metrics.Precision.add_loss": true, + "tf.metrics.Precision.add_metric": true, + "tf.metrics.Precision.add_weight": true, + "tf.metrics.Precision.build": true, + "tf.metrics.Precision.call": true, + "tf.metrics.Precision.compute_mask": true, + "tf.metrics.Precision.compute_output_shape": true, + "tf.metrics.Precision.compute_output_signature": true, + "tf.metrics.Precision.count_params": true, + "tf.metrics.Precision.dtype": true, + "tf.metrics.Precision.dynamic": true, + "tf.metrics.Precision.from_config": true, + "tf.metrics.Precision.get_config": true, + "tf.metrics.Precision.get_weights": true, + "tf.metrics.Precision.input": true, + "tf.metrics.Precision.input_spec": true, + "tf.metrics.Precision.losses": true, + "tf.metrics.Precision.metrics": true, + "tf.metrics.Precision.name": true, + "tf.metrics.Precision.name_scope": true, + "tf.metrics.Precision.non_trainable_weights": true, + "tf.metrics.Precision.output": true, + "tf.metrics.Precision.reset_states": true, + "tf.metrics.Precision.result": true, + "tf.metrics.Precision.set_weights": true, + "tf.metrics.Precision.submodules": true, + "tf.metrics.Precision.trainable": true, + "tf.metrics.Precision.trainable_weights": true, + "tf.metrics.Precision.update_state": true, + "tf.metrics.Precision.weights": true, + "tf.metrics.Precision.with_name_scope": true, + "tf.metrics.PrecisionAtRecall": false, + "tf.metrics.PrecisionAtRecall.__call__": true, + "tf.metrics.PrecisionAtRecall.__eq__": true, + "tf.metrics.PrecisionAtRecall.__ge__": true, + "tf.metrics.PrecisionAtRecall.__gt__": true, + "tf.metrics.PrecisionAtRecall.__init__": true, + "tf.metrics.PrecisionAtRecall.__le__": true, + "tf.metrics.PrecisionAtRecall.__lt__": true, + "tf.metrics.PrecisionAtRecall.__ne__": true, + "tf.metrics.PrecisionAtRecall.__new__": true, + "tf.metrics.PrecisionAtRecall.activity_regularizer": true, + 
"tf.metrics.PrecisionAtRecall.add_loss": true, + "tf.metrics.PrecisionAtRecall.add_metric": true, + "tf.metrics.PrecisionAtRecall.add_weight": true, + "tf.metrics.PrecisionAtRecall.build": true, + "tf.metrics.PrecisionAtRecall.call": true, + "tf.metrics.PrecisionAtRecall.compute_mask": true, + "tf.metrics.PrecisionAtRecall.compute_output_shape": true, + "tf.metrics.PrecisionAtRecall.compute_output_signature": true, + "tf.metrics.PrecisionAtRecall.count_params": true, + "tf.metrics.PrecisionAtRecall.dtype": true, + "tf.metrics.PrecisionAtRecall.dynamic": true, + "tf.metrics.PrecisionAtRecall.from_config": true, + "tf.metrics.PrecisionAtRecall.get_config": true, + "tf.metrics.PrecisionAtRecall.get_weights": true, + "tf.metrics.PrecisionAtRecall.input": true, + "tf.metrics.PrecisionAtRecall.input_spec": true, + "tf.metrics.PrecisionAtRecall.losses": true, + "tf.metrics.PrecisionAtRecall.metrics": true, + "tf.metrics.PrecisionAtRecall.name": true, + "tf.metrics.PrecisionAtRecall.name_scope": true, + "tf.metrics.PrecisionAtRecall.non_trainable_weights": true, + "tf.metrics.PrecisionAtRecall.output": true, + "tf.metrics.PrecisionAtRecall.reset_states": true, + "tf.metrics.PrecisionAtRecall.result": true, + "tf.metrics.PrecisionAtRecall.set_weights": true, + "tf.metrics.PrecisionAtRecall.submodules": true, + "tf.metrics.PrecisionAtRecall.trainable": true, + "tf.metrics.PrecisionAtRecall.trainable_weights": true, + "tf.metrics.PrecisionAtRecall.update_state": true, + "tf.metrics.PrecisionAtRecall.weights": true, + "tf.metrics.PrecisionAtRecall.with_name_scope": true, + "tf.metrics.Recall": false, + "tf.metrics.Recall.__call__": true, + "tf.metrics.Recall.__eq__": true, + "tf.metrics.Recall.__ge__": true, + "tf.metrics.Recall.__gt__": true, + "tf.metrics.Recall.__init__": true, + "tf.metrics.Recall.__le__": true, + "tf.metrics.Recall.__lt__": true, + "tf.metrics.Recall.__ne__": true, + "tf.metrics.Recall.__new__": true, + "tf.metrics.Recall.activity_regularizer": true, + "tf.metrics.Recall.add_loss": true, + "tf.metrics.Recall.add_metric": true, + "tf.metrics.Recall.add_weight": true, + "tf.metrics.Recall.build": true, + "tf.metrics.Recall.call": true, + "tf.metrics.Recall.compute_mask": true, + "tf.metrics.Recall.compute_output_shape": true, + "tf.metrics.Recall.compute_output_signature": true, + "tf.metrics.Recall.count_params": true, + "tf.metrics.Recall.dtype": true, + "tf.metrics.Recall.dynamic": true, + "tf.metrics.Recall.from_config": true, + "tf.metrics.Recall.get_config": true, + "tf.metrics.Recall.get_weights": true, + "tf.metrics.Recall.input": true, + "tf.metrics.Recall.input_spec": true, + "tf.metrics.Recall.losses": true, + "tf.metrics.Recall.metrics": true, + "tf.metrics.Recall.name": true, + "tf.metrics.Recall.name_scope": true, + "tf.metrics.Recall.non_trainable_weights": true, + "tf.metrics.Recall.output": true, + "tf.metrics.Recall.reset_states": true, + "tf.metrics.Recall.result": true, + "tf.metrics.Recall.set_weights": true, + "tf.metrics.Recall.submodules": true, + "tf.metrics.Recall.trainable": true, + "tf.metrics.Recall.trainable_weights": true, + "tf.metrics.Recall.update_state": true, + "tf.metrics.Recall.weights": true, + "tf.metrics.Recall.with_name_scope": true, + "tf.metrics.RecallAtPrecision": false, + "tf.metrics.RecallAtPrecision.__call__": true, + "tf.metrics.RecallAtPrecision.__eq__": true, + "tf.metrics.RecallAtPrecision.__ge__": true, + "tf.metrics.RecallAtPrecision.__gt__": true, + "tf.metrics.RecallAtPrecision.__init__": true, + 
"tf.metrics.RecallAtPrecision.__le__": true, + "tf.metrics.RecallAtPrecision.__lt__": true, + "tf.metrics.RecallAtPrecision.__ne__": true, + "tf.metrics.RecallAtPrecision.__new__": true, + "tf.metrics.RecallAtPrecision.activity_regularizer": true, + "tf.metrics.RecallAtPrecision.add_loss": true, + "tf.metrics.RecallAtPrecision.add_metric": true, + "tf.metrics.RecallAtPrecision.add_weight": true, + "tf.metrics.RecallAtPrecision.build": true, + "tf.metrics.RecallAtPrecision.call": true, + "tf.metrics.RecallAtPrecision.compute_mask": true, + "tf.metrics.RecallAtPrecision.compute_output_shape": true, + "tf.metrics.RecallAtPrecision.compute_output_signature": true, + "tf.metrics.RecallAtPrecision.count_params": true, + "tf.metrics.RecallAtPrecision.dtype": true, + "tf.metrics.RecallAtPrecision.dynamic": true, + "tf.metrics.RecallAtPrecision.from_config": true, + "tf.metrics.RecallAtPrecision.get_config": true, + "tf.metrics.RecallAtPrecision.get_weights": true, + "tf.metrics.RecallAtPrecision.input": true, + "tf.metrics.RecallAtPrecision.input_spec": true, + "tf.metrics.RecallAtPrecision.losses": true, + "tf.metrics.RecallAtPrecision.metrics": true, + "tf.metrics.RecallAtPrecision.name": true, + "tf.metrics.RecallAtPrecision.name_scope": true, + "tf.metrics.RecallAtPrecision.non_trainable_weights": true, + "tf.metrics.RecallAtPrecision.output": true, + "tf.metrics.RecallAtPrecision.reset_states": true, + "tf.metrics.RecallAtPrecision.result": true, + "tf.metrics.RecallAtPrecision.set_weights": true, + "tf.metrics.RecallAtPrecision.submodules": true, + "tf.metrics.RecallAtPrecision.trainable": true, + "tf.metrics.RecallAtPrecision.trainable_weights": true, + "tf.metrics.RecallAtPrecision.update_state": true, + "tf.metrics.RecallAtPrecision.weights": true, + "tf.metrics.RecallAtPrecision.with_name_scope": true, + "tf.metrics.RootMeanSquaredError": false, + "tf.metrics.RootMeanSquaredError.__call__": true, + "tf.metrics.RootMeanSquaredError.__eq__": true, + "tf.metrics.RootMeanSquaredError.__ge__": true, + "tf.metrics.RootMeanSquaredError.__gt__": true, + "tf.metrics.RootMeanSquaredError.__init__": true, + "tf.metrics.RootMeanSquaredError.__le__": true, + "tf.metrics.RootMeanSquaredError.__lt__": true, + "tf.metrics.RootMeanSquaredError.__ne__": true, + "tf.metrics.RootMeanSquaredError.__new__": true, + "tf.metrics.RootMeanSquaredError.activity_regularizer": true, + "tf.metrics.RootMeanSquaredError.add_loss": true, + "tf.metrics.RootMeanSquaredError.add_metric": true, + "tf.metrics.RootMeanSquaredError.add_weight": true, + "tf.metrics.RootMeanSquaredError.build": true, + "tf.metrics.RootMeanSquaredError.call": true, + "tf.metrics.RootMeanSquaredError.compute_mask": true, + "tf.metrics.RootMeanSquaredError.compute_output_shape": true, + "tf.metrics.RootMeanSquaredError.compute_output_signature": true, + "tf.metrics.RootMeanSquaredError.count_params": true, + "tf.metrics.RootMeanSquaredError.dtype": true, + "tf.metrics.RootMeanSquaredError.dynamic": true, + "tf.metrics.RootMeanSquaredError.from_config": true, + "tf.metrics.RootMeanSquaredError.get_config": true, + "tf.metrics.RootMeanSquaredError.get_weights": true, + "tf.metrics.RootMeanSquaredError.input": true, + "tf.metrics.RootMeanSquaredError.input_spec": true, + "tf.metrics.RootMeanSquaredError.losses": true, + "tf.metrics.RootMeanSquaredError.metrics": true, + "tf.metrics.RootMeanSquaredError.name": true, + "tf.metrics.RootMeanSquaredError.name_scope": true, + "tf.metrics.RootMeanSquaredError.non_trainable_weights": true, + 
"tf.metrics.RootMeanSquaredError.output": true, + "tf.metrics.RootMeanSquaredError.reset_states": true, + "tf.metrics.RootMeanSquaredError.result": true, + "tf.metrics.RootMeanSquaredError.set_weights": true, + "tf.metrics.RootMeanSquaredError.submodules": true, + "tf.metrics.RootMeanSquaredError.trainable": true, + "tf.metrics.RootMeanSquaredError.trainable_weights": true, + "tf.metrics.RootMeanSquaredError.update_state": true, + "tf.metrics.RootMeanSquaredError.weights": true, + "tf.metrics.RootMeanSquaredError.with_name_scope": true, + "tf.metrics.SensitivityAtSpecificity": false, + "tf.metrics.SensitivityAtSpecificity.__call__": true, + "tf.metrics.SensitivityAtSpecificity.__eq__": true, + "tf.metrics.SensitivityAtSpecificity.__ge__": true, + "tf.metrics.SensitivityAtSpecificity.__gt__": true, + "tf.metrics.SensitivityAtSpecificity.__init__": true, + "tf.metrics.SensitivityAtSpecificity.__le__": true, + "tf.metrics.SensitivityAtSpecificity.__lt__": true, + "tf.metrics.SensitivityAtSpecificity.__ne__": true, + "tf.metrics.SensitivityAtSpecificity.__new__": true, + "tf.metrics.SensitivityAtSpecificity.activity_regularizer": true, + "tf.metrics.SensitivityAtSpecificity.add_loss": true, + "tf.metrics.SensitivityAtSpecificity.add_metric": true, + "tf.metrics.SensitivityAtSpecificity.add_weight": true, + "tf.metrics.SensitivityAtSpecificity.build": true, + "tf.metrics.SensitivityAtSpecificity.call": true, + "tf.metrics.SensitivityAtSpecificity.compute_mask": true, + "tf.metrics.SensitivityAtSpecificity.compute_output_shape": true, + "tf.metrics.SensitivityAtSpecificity.compute_output_signature": true, + "tf.metrics.SensitivityAtSpecificity.count_params": true, + "tf.metrics.SensitivityAtSpecificity.dtype": true, + "tf.metrics.SensitivityAtSpecificity.dynamic": true, + "tf.metrics.SensitivityAtSpecificity.from_config": true, + "tf.metrics.SensitivityAtSpecificity.get_config": true, + "tf.metrics.SensitivityAtSpecificity.get_weights": true, + "tf.metrics.SensitivityAtSpecificity.input": true, + "tf.metrics.SensitivityAtSpecificity.input_spec": true, + "tf.metrics.SensitivityAtSpecificity.losses": true, + "tf.metrics.SensitivityAtSpecificity.metrics": true, + "tf.metrics.SensitivityAtSpecificity.name": true, + "tf.metrics.SensitivityAtSpecificity.name_scope": true, + "tf.metrics.SensitivityAtSpecificity.non_trainable_weights": true, + "tf.metrics.SensitivityAtSpecificity.output": true, + "tf.metrics.SensitivityAtSpecificity.reset_states": true, + "tf.metrics.SensitivityAtSpecificity.result": true, + "tf.metrics.SensitivityAtSpecificity.set_weights": true, + "tf.metrics.SensitivityAtSpecificity.submodules": true, + "tf.metrics.SensitivityAtSpecificity.trainable": true, + "tf.metrics.SensitivityAtSpecificity.trainable_weights": true, + "tf.metrics.SensitivityAtSpecificity.update_state": true, + "tf.metrics.SensitivityAtSpecificity.weights": true, + "tf.metrics.SensitivityAtSpecificity.with_name_scope": true, + "tf.metrics.SparseCategoricalAccuracy": false, + "tf.metrics.SparseCategoricalAccuracy.__call__": true, + "tf.metrics.SparseCategoricalAccuracy.__eq__": true, + "tf.metrics.SparseCategoricalAccuracy.__ge__": true, + "tf.metrics.SparseCategoricalAccuracy.__gt__": true, + "tf.metrics.SparseCategoricalAccuracy.__init__": true, + "tf.metrics.SparseCategoricalAccuracy.__le__": true, + "tf.metrics.SparseCategoricalAccuracy.__lt__": true, + "tf.metrics.SparseCategoricalAccuracy.__ne__": true, + "tf.metrics.SparseCategoricalAccuracy.__new__": true, + 
"tf.metrics.SparseCategoricalAccuracy.activity_regularizer": true, + "tf.metrics.SparseCategoricalAccuracy.add_loss": true, + "tf.metrics.SparseCategoricalAccuracy.add_metric": true, + "tf.metrics.SparseCategoricalAccuracy.add_weight": true, + "tf.metrics.SparseCategoricalAccuracy.build": true, + "tf.metrics.SparseCategoricalAccuracy.call": true, + "tf.metrics.SparseCategoricalAccuracy.compute_mask": true, + "tf.metrics.SparseCategoricalAccuracy.compute_output_shape": true, + "tf.metrics.SparseCategoricalAccuracy.compute_output_signature": true, + "tf.metrics.SparseCategoricalAccuracy.count_params": true, + "tf.metrics.SparseCategoricalAccuracy.dtype": true, + "tf.metrics.SparseCategoricalAccuracy.dynamic": true, + "tf.metrics.SparseCategoricalAccuracy.from_config": true, + "tf.metrics.SparseCategoricalAccuracy.get_config": true, + "tf.metrics.SparseCategoricalAccuracy.get_weights": true, + "tf.metrics.SparseCategoricalAccuracy.input": true, + "tf.metrics.SparseCategoricalAccuracy.input_spec": true, + "tf.metrics.SparseCategoricalAccuracy.losses": true, + "tf.metrics.SparseCategoricalAccuracy.metrics": true, + "tf.metrics.SparseCategoricalAccuracy.name": true, + "tf.metrics.SparseCategoricalAccuracy.name_scope": true, + "tf.metrics.SparseCategoricalAccuracy.non_trainable_weights": true, + "tf.metrics.SparseCategoricalAccuracy.output": true, + "tf.metrics.SparseCategoricalAccuracy.reset_states": true, + "tf.metrics.SparseCategoricalAccuracy.result": true, + "tf.metrics.SparseCategoricalAccuracy.set_weights": true, + "tf.metrics.SparseCategoricalAccuracy.submodules": true, + "tf.metrics.SparseCategoricalAccuracy.trainable": true, + "tf.metrics.SparseCategoricalAccuracy.trainable_weights": true, + "tf.metrics.SparseCategoricalAccuracy.update_state": true, + "tf.metrics.SparseCategoricalAccuracy.weights": true, + "tf.metrics.SparseCategoricalAccuracy.with_name_scope": true, + "tf.metrics.SparseCategoricalCrossentropy": false, + "tf.metrics.SparseCategoricalCrossentropy.__call__": true, + "tf.metrics.SparseCategoricalCrossentropy.__eq__": true, + "tf.metrics.SparseCategoricalCrossentropy.__ge__": true, + "tf.metrics.SparseCategoricalCrossentropy.__gt__": true, + "tf.metrics.SparseCategoricalCrossentropy.__init__": true, + "tf.metrics.SparseCategoricalCrossentropy.__le__": true, + "tf.metrics.SparseCategoricalCrossentropy.__lt__": true, + "tf.metrics.SparseCategoricalCrossentropy.__ne__": true, + "tf.metrics.SparseCategoricalCrossentropy.__new__": true, + "tf.metrics.SparseCategoricalCrossentropy.activity_regularizer": true, + "tf.metrics.SparseCategoricalCrossentropy.add_loss": true, + "tf.metrics.SparseCategoricalCrossentropy.add_metric": true, + "tf.metrics.SparseCategoricalCrossentropy.add_weight": true, + "tf.metrics.SparseCategoricalCrossentropy.build": true, + "tf.metrics.SparseCategoricalCrossentropy.call": true, + "tf.metrics.SparseCategoricalCrossentropy.compute_mask": true, + "tf.metrics.SparseCategoricalCrossentropy.compute_output_shape": true, + "tf.metrics.SparseCategoricalCrossentropy.compute_output_signature": true, + "tf.metrics.SparseCategoricalCrossentropy.count_params": true, + "tf.metrics.SparseCategoricalCrossentropy.dtype": true, + "tf.metrics.SparseCategoricalCrossentropy.dynamic": true, + "tf.metrics.SparseCategoricalCrossentropy.from_config": true, + "tf.metrics.SparseCategoricalCrossentropy.get_config": true, + "tf.metrics.SparseCategoricalCrossentropy.get_weights": true, + "tf.metrics.SparseCategoricalCrossentropy.input": true, + 
"tf.metrics.SparseCategoricalCrossentropy.input_spec": true, + "tf.metrics.SparseCategoricalCrossentropy.losses": true, + "tf.metrics.SparseCategoricalCrossentropy.metrics": true, + "tf.metrics.SparseCategoricalCrossentropy.name": true, + "tf.metrics.SparseCategoricalCrossentropy.name_scope": true, + "tf.metrics.SparseCategoricalCrossentropy.non_trainable_weights": true, + "tf.metrics.SparseCategoricalCrossentropy.output": true, + "tf.metrics.SparseCategoricalCrossentropy.reset_states": true, + "tf.metrics.SparseCategoricalCrossentropy.result": true, + "tf.metrics.SparseCategoricalCrossentropy.set_weights": true, + "tf.metrics.SparseCategoricalCrossentropy.submodules": true, + "tf.metrics.SparseCategoricalCrossentropy.trainable": true, + "tf.metrics.SparseCategoricalCrossentropy.trainable_weights": true, + "tf.metrics.SparseCategoricalCrossentropy.update_state": true, + "tf.metrics.SparseCategoricalCrossentropy.weights": true, + "tf.metrics.SparseCategoricalCrossentropy.with_name_scope": true, + "tf.metrics.SparseTopKCategoricalAccuracy": false, + "tf.metrics.SparseTopKCategoricalAccuracy.__call__": true, + "tf.metrics.SparseTopKCategoricalAccuracy.__eq__": true, + "tf.metrics.SparseTopKCategoricalAccuracy.__ge__": true, + "tf.metrics.SparseTopKCategoricalAccuracy.__gt__": true, + "tf.metrics.SparseTopKCategoricalAccuracy.__init__": true, + "tf.metrics.SparseTopKCategoricalAccuracy.__le__": true, + "tf.metrics.SparseTopKCategoricalAccuracy.__lt__": true, + "tf.metrics.SparseTopKCategoricalAccuracy.__ne__": true, + "tf.metrics.SparseTopKCategoricalAccuracy.__new__": true, + "tf.metrics.SparseTopKCategoricalAccuracy.activity_regularizer": true, + "tf.metrics.SparseTopKCategoricalAccuracy.add_loss": true, + "tf.metrics.SparseTopKCategoricalAccuracy.add_metric": true, + "tf.metrics.SparseTopKCategoricalAccuracy.add_weight": true, + "tf.metrics.SparseTopKCategoricalAccuracy.build": true, + "tf.metrics.SparseTopKCategoricalAccuracy.call": true, + "tf.metrics.SparseTopKCategoricalAccuracy.compute_mask": true, + "tf.metrics.SparseTopKCategoricalAccuracy.compute_output_shape": true, + "tf.metrics.SparseTopKCategoricalAccuracy.compute_output_signature": true, + "tf.metrics.SparseTopKCategoricalAccuracy.count_params": true, + "tf.metrics.SparseTopKCategoricalAccuracy.dtype": true, + "tf.metrics.SparseTopKCategoricalAccuracy.dynamic": true, + "tf.metrics.SparseTopKCategoricalAccuracy.from_config": true, + "tf.metrics.SparseTopKCategoricalAccuracy.get_config": true, + "tf.metrics.SparseTopKCategoricalAccuracy.get_weights": true, + "tf.metrics.SparseTopKCategoricalAccuracy.input": true, + "tf.metrics.SparseTopKCategoricalAccuracy.input_spec": true, + "tf.metrics.SparseTopKCategoricalAccuracy.losses": true, + "tf.metrics.SparseTopKCategoricalAccuracy.metrics": true, + "tf.metrics.SparseTopKCategoricalAccuracy.name": true, + "tf.metrics.SparseTopKCategoricalAccuracy.name_scope": true, + "tf.metrics.SparseTopKCategoricalAccuracy.non_trainable_weights": true, + "tf.metrics.SparseTopKCategoricalAccuracy.output": true, + "tf.metrics.SparseTopKCategoricalAccuracy.reset_states": true, + "tf.metrics.SparseTopKCategoricalAccuracy.result": true, + "tf.metrics.SparseTopKCategoricalAccuracy.set_weights": true, + "tf.metrics.SparseTopKCategoricalAccuracy.submodules": true, + "tf.metrics.SparseTopKCategoricalAccuracy.trainable": true, + "tf.metrics.SparseTopKCategoricalAccuracy.trainable_weights": true, + "tf.metrics.SparseTopKCategoricalAccuracy.update_state": true, + 
"tf.metrics.SparseTopKCategoricalAccuracy.weights": true, + "tf.metrics.SparseTopKCategoricalAccuracy.with_name_scope": true, + "tf.metrics.SpecificityAtSensitivity": false, + "tf.metrics.SpecificityAtSensitivity.__call__": true, + "tf.metrics.SpecificityAtSensitivity.__eq__": true, + "tf.metrics.SpecificityAtSensitivity.__ge__": true, + "tf.metrics.SpecificityAtSensitivity.__gt__": true, + "tf.metrics.SpecificityAtSensitivity.__init__": true, + "tf.metrics.SpecificityAtSensitivity.__le__": true, + "tf.metrics.SpecificityAtSensitivity.__lt__": true, + "tf.metrics.SpecificityAtSensitivity.__ne__": true, + "tf.metrics.SpecificityAtSensitivity.__new__": true, + "tf.metrics.SpecificityAtSensitivity.activity_regularizer": true, + "tf.metrics.SpecificityAtSensitivity.add_loss": true, + "tf.metrics.SpecificityAtSensitivity.add_metric": true, + "tf.metrics.SpecificityAtSensitivity.add_weight": true, + "tf.metrics.SpecificityAtSensitivity.build": true, + "tf.metrics.SpecificityAtSensitivity.call": true, + "tf.metrics.SpecificityAtSensitivity.compute_mask": true, + "tf.metrics.SpecificityAtSensitivity.compute_output_shape": true, + "tf.metrics.SpecificityAtSensitivity.compute_output_signature": true, + "tf.metrics.SpecificityAtSensitivity.count_params": true, + "tf.metrics.SpecificityAtSensitivity.dtype": true, + "tf.metrics.SpecificityAtSensitivity.dynamic": true, + "tf.metrics.SpecificityAtSensitivity.from_config": true, + "tf.metrics.SpecificityAtSensitivity.get_config": true, + "tf.metrics.SpecificityAtSensitivity.get_weights": true, + "tf.metrics.SpecificityAtSensitivity.input": true, + "tf.metrics.SpecificityAtSensitivity.input_spec": true, + "tf.metrics.SpecificityAtSensitivity.losses": true, + "tf.metrics.SpecificityAtSensitivity.metrics": true, + "tf.metrics.SpecificityAtSensitivity.name": true, + "tf.metrics.SpecificityAtSensitivity.name_scope": true, + "tf.metrics.SpecificityAtSensitivity.non_trainable_weights": true, + "tf.metrics.SpecificityAtSensitivity.output": true, + "tf.metrics.SpecificityAtSensitivity.reset_states": true, + "tf.metrics.SpecificityAtSensitivity.result": true, + "tf.metrics.SpecificityAtSensitivity.set_weights": true, + "tf.metrics.SpecificityAtSensitivity.submodules": true, + "tf.metrics.SpecificityAtSensitivity.trainable": true, + "tf.metrics.SpecificityAtSensitivity.trainable_weights": true, + "tf.metrics.SpecificityAtSensitivity.update_state": true, + "tf.metrics.SpecificityAtSensitivity.weights": true, + "tf.metrics.SpecificityAtSensitivity.with_name_scope": true, + "tf.metrics.SquaredHinge": false, + "tf.metrics.SquaredHinge.__call__": true, + "tf.metrics.SquaredHinge.__eq__": true, + "tf.metrics.SquaredHinge.__ge__": true, + "tf.metrics.SquaredHinge.__gt__": true, + "tf.metrics.SquaredHinge.__init__": true, + "tf.metrics.SquaredHinge.__le__": true, + "tf.metrics.SquaredHinge.__lt__": true, + "tf.metrics.SquaredHinge.__ne__": true, + "tf.metrics.SquaredHinge.__new__": true, + "tf.metrics.SquaredHinge.activity_regularizer": true, + "tf.metrics.SquaredHinge.add_loss": true, + "tf.metrics.SquaredHinge.add_metric": true, + "tf.metrics.SquaredHinge.add_weight": true, + "tf.metrics.SquaredHinge.build": true, + "tf.metrics.SquaredHinge.call": true, + "tf.metrics.SquaredHinge.compute_mask": true, + "tf.metrics.SquaredHinge.compute_output_shape": true, + "tf.metrics.SquaredHinge.compute_output_signature": true, + "tf.metrics.SquaredHinge.count_params": true, + "tf.metrics.SquaredHinge.dtype": true, + "tf.metrics.SquaredHinge.dynamic": true, + 
"tf.metrics.SquaredHinge.from_config": true, + "tf.metrics.SquaredHinge.get_config": true, + "tf.metrics.SquaredHinge.get_weights": true, + "tf.metrics.SquaredHinge.input": true, + "tf.metrics.SquaredHinge.input_spec": true, + "tf.metrics.SquaredHinge.losses": true, + "tf.metrics.SquaredHinge.metrics": true, + "tf.metrics.SquaredHinge.name": true, + "tf.metrics.SquaredHinge.name_scope": true, + "tf.metrics.SquaredHinge.non_trainable_weights": true, + "tf.metrics.SquaredHinge.output": true, + "tf.metrics.SquaredHinge.reset_states": true, + "tf.metrics.SquaredHinge.result": true, + "tf.metrics.SquaredHinge.set_weights": true, + "tf.metrics.SquaredHinge.submodules": true, + "tf.metrics.SquaredHinge.trainable": true, + "tf.metrics.SquaredHinge.trainable_weights": true, + "tf.metrics.SquaredHinge.update_state": true, + "tf.metrics.SquaredHinge.weights": true, + "tf.metrics.SquaredHinge.with_name_scope": true, + "tf.metrics.Sum": false, + "tf.metrics.Sum.__call__": true, + "tf.metrics.Sum.__eq__": true, + "tf.metrics.Sum.__ge__": true, + "tf.metrics.Sum.__gt__": true, + "tf.metrics.Sum.__init__": true, + "tf.metrics.Sum.__le__": true, + "tf.metrics.Sum.__lt__": true, + "tf.metrics.Sum.__ne__": true, + "tf.metrics.Sum.__new__": true, + "tf.metrics.Sum.activity_regularizer": true, + "tf.metrics.Sum.add_loss": true, + "tf.metrics.Sum.add_metric": true, + "tf.metrics.Sum.add_weight": true, + "tf.metrics.Sum.build": true, + "tf.metrics.Sum.call": true, + "tf.metrics.Sum.compute_mask": true, + "tf.metrics.Sum.compute_output_shape": true, + "tf.metrics.Sum.compute_output_signature": true, + "tf.metrics.Sum.count_params": true, + "tf.metrics.Sum.dtype": true, + "tf.metrics.Sum.dynamic": true, + "tf.metrics.Sum.from_config": true, + "tf.metrics.Sum.get_config": true, + "tf.metrics.Sum.get_weights": true, + "tf.metrics.Sum.input": true, + "tf.metrics.Sum.input_spec": true, + "tf.metrics.Sum.losses": true, + "tf.metrics.Sum.metrics": true, + "tf.metrics.Sum.name": true, + "tf.metrics.Sum.name_scope": true, + "tf.metrics.Sum.non_trainable_weights": true, + "tf.metrics.Sum.output": true, + "tf.metrics.Sum.reset_states": true, + "tf.metrics.Sum.result": true, + "tf.metrics.Sum.set_weights": true, + "tf.metrics.Sum.submodules": true, + "tf.metrics.Sum.trainable": true, + "tf.metrics.Sum.trainable_weights": true, + "tf.metrics.Sum.update_state": true, + "tf.metrics.Sum.weights": true, + "tf.metrics.Sum.with_name_scope": true, + "tf.metrics.TopKCategoricalAccuracy": false, + "tf.metrics.TopKCategoricalAccuracy.__call__": true, + "tf.metrics.TopKCategoricalAccuracy.__eq__": true, + "tf.metrics.TopKCategoricalAccuracy.__ge__": true, + "tf.metrics.TopKCategoricalAccuracy.__gt__": true, + "tf.metrics.TopKCategoricalAccuracy.__init__": true, + "tf.metrics.TopKCategoricalAccuracy.__le__": true, + "tf.metrics.TopKCategoricalAccuracy.__lt__": true, + "tf.metrics.TopKCategoricalAccuracy.__ne__": true, + "tf.metrics.TopKCategoricalAccuracy.__new__": true, + "tf.metrics.TopKCategoricalAccuracy.activity_regularizer": true, + "tf.metrics.TopKCategoricalAccuracy.add_loss": true, + "tf.metrics.TopKCategoricalAccuracy.add_metric": true, + "tf.metrics.TopKCategoricalAccuracy.add_weight": true, + "tf.metrics.TopKCategoricalAccuracy.build": true, + "tf.metrics.TopKCategoricalAccuracy.call": true, + "tf.metrics.TopKCategoricalAccuracy.compute_mask": true, + "tf.metrics.TopKCategoricalAccuracy.compute_output_shape": true, + "tf.metrics.TopKCategoricalAccuracy.compute_output_signature": true, + 
"tf.metrics.TopKCategoricalAccuracy.count_params": true, + "tf.metrics.TopKCategoricalAccuracy.dtype": true, + "tf.metrics.TopKCategoricalAccuracy.dynamic": true, + "tf.metrics.TopKCategoricalAccuracy.from_config": true, + "tf.metrics.TopKCategoricalAccuracy.get_config": true, + "tf.metrics.TopKCategoricalAccuracy.get_weights": true, + "tf.metrics.TopKCategoricalAccuracy.input": true, + "tf.metrics.TopKCategoricalAccuracy.input_spec": true, + "tf.metrics.TopKCategoricalAccuracy.losses": true, + "tf.metrics.TopKCategoricalAccuracy.metrics": true, + "tf.metrics.TopKCategoricalAccuracy.name": true, + "tf.metrics.TopKCategoricalAccuracy.name_scope": true, + "tf.metrics.TopKCategoricalAccuracy.non_trainable_weights": true, + "tf.metrics.TopKCategoricalAccuracy.output": true, + "tf.metrics.TopKCategoricalAccuracy.reset_states": true, + "tf.metrics.TopKCategoricalAccuracy.result": true, + "tf.metrics.TopKCategoricalAccuracy.set_weights": true, + "tf.metrics.TopKCategoricalAccuracy.submodules": true, + "tf.metrics.TopKCategoricalAccuracy.trainable": true, + "tf.metrics.TopKCategoricalAccuracy.trainable_weights": true, + "tf.metrics.TopKCategoricalAccuracy.update_state": true, + "tf.metrics.TopKCategoricalAccuracy.weights": true, + "tf.metrics.TopKCategoricalAccuracy.with_name_scope": true, + "tf.metrics.TrueNegatives": false, + "tf.metrics.TrueNegatives.__call__": true, + "tf.metrics.TrueNegatives.__eq__": true, + "tf.metrics.TrueNegatives.__ge__": true, + "tf.metrics.TrueNegatives.__gt__": true, + "tf.metrics.TrueNegatives.__init__": true, + "tf.metrics.TrueNegatives.__le__": true, + "tf.metrics.TrueNegatives.__lt__": true, + "tf.metrics.TrueNegatives.__ne__": true, + "tf.metrics.TrueNegatives.__new__": true, + "tf.metrics.TrueNegatives.activity_regularizer": true, + "tf.metrics.TrueNegatives.add_loss": true, + "tf.metrics.TrueNegatives.add_metric": true, + "tf.metrics.TrueNegatives.add_weight": true, + "tf.metrics.TrueNegatives.build": true, + "tf.metrics.TrueNegatives.call": true, + "tf.metrics.TrueNegatives.compute_mask": true, + "tf.metrics.TrueNegatives.compute_output_shape": true, + "tf.metrics.TrueNegatives.compute_output_signature": true, + "tf.metrics.TrueNegatives.count_params": true, + "tf.metrics.TrueNegatives.dtype": true, + "tf.metrics.TrueNegatives.dynamic": true, + "tf.metrics.TrueNegatives.from_config": true, + "tf.metrics.TrueNegatives.get_config": true, + "tf.metrics.TrueNegatives.get_weights": true, + "tf.metrics.TrueNegatives.input": true, + "tf.metrics.TrueNegatives.input_spec": true, + "tf.metrics.TrueNegatives.losses": true, + "tf.metrics.TrueNegatives.metrics": true, + "tf.metrics.TrueNegatives.name": true, + "tf.metrics.TrueNegatives.name_scope": true, + "tf.metrics.TrueNegatives.non_trainable_weights": true, + "tf.metrics.TrueNegatives.output": true, + "tf.metrics.TrueNegatives.reset_states": true, + "tf.metrics.TrueNegatives.result": true, + "tf.metrics.TrueNegatives.set_weights": true, + "tf.metrics.TrueNegatives.submodules": true, + "tf.metrics.TrueNegatives.trainable": true, + "tf.metrics.TrueNegatives.trainable_weights": true, + "tf.metrics.TrueNegatives.update_state": true, + "tf.metrics.TrueNegatives.weights": true, + "tf.metrics.TrueNegatives.with_name_scope": true, + "tf.metrics.TruePositives": false, + "tf.metrics.TruePositives.__call__": true, + "tf.metrics.TruePositives.__eq__": true, + "tf.metrics.TruePositives.__ge__": true, + "tf.metrics.TruePositives.__gt__": true, + "tf.metrics.TruePositives.__init__": true, + "tf.metrics.TruePositives.__le__": true, + 
"tf.metrics.TruePositives.__lt__": true, + "tf.metrics.TruePositives.__ne__": true, + "tf.metrics.TruePositives.__new__": true, + "tf.metrics.TruePositives.activity_regularizer": true, + "tf.metrics.TruePositives.add_loss": true, + "tf.metrics.TruePositives.add_metric": true, + "tf.metrics.TruePositives.add_weight": true, + "tf.metrics.TruePositives.build": true, + "tf.metrics.TruePositives.call": true, + "tf.metrics.TruePositives.compute_mask": true, + "tf.metrics.TruePositives.compute_output_shape": true, + "tf.metrics.TruePositives.compute_output_signature": true, + "tf.metrics.TruePositives.count_params": true, + "tf.metrics.TruePositives.dtype": true, + "tf.metrics.TruePositives.dynamic": true, + "tf.metrics.TruePositives.from_config": true, + "tf.metrics.TruePositives.get_config": true, + "tf.metrics.TruePositives.get_weights": true, + "tf.metrics.TruePositives.input": true, + "tf.metrics.TruePositives.input_spec": true, + "tf.metrics.TruePositives.losses": true, + "tf.metrics.TruePositives.metrics": true, + "tf.metrics.TruePositives.name": true, + "tf.metrics.TruePositives.name_scope": true, + "tf.metrics.TruePositives.non_trainable_weights": true, + "tf.metrics.TruePositives.output": true, + "tf.metrics.TruePositives.reset_states": true, + "tf.metrics.TruePositives.result": true, + "tf.metrics.TruePositives.set_weights": true, + "tf.metrics.TruePositives.submodules": true, + "tf.metrics.TruePositives.trainable": true, + "tf.metrics.TruePositives.trainable_weights": true, + "tf.metrics.TruePositives.update_state": true, + "tf.metrics.TruePositives.weights": true, + "tf.metrics.TruePositives.with_name_scope": true, + "tf.metrics.binary_accuracy": false, + "tf.metrics.binary_crossentropy": false, + "tf.metrics.categorical_accuracy": false, + "tf.metrics.categorical_crossentropy": false, + "tf.metrics.deserialize": false, + "tf.metrics.get": false, + "tf.metrics.hinge": false, + "tf.metrics.kld": false, + "tf.metrics.kullback_leibler_divergence": false, + "tf.metrics.mae": false, + "tf.metrics.mape": false, + "tf.metrics.mean_absolute_error": false, + "tf.metrics.mean_absolute_percentage_error": false, + "tf.metrics.mean_squared_error": false, + "tf.metrics.mean_squared_logarithmic_error": false, + "tf.metrics.mse": false, + "tf.metrics.msle": false, + "tf.metrics.poisson": false, + "tf.metrics.serialize": false, + "tf.metrics.sparse_categorical_accuracy": false, + "tf.metrics.sparse_categorical_crossentropy": false, + "tf.metrics.sparse_top_k_categorical_accuracy": false, + "tf.metrics.squared_hinge": false, + "tf.metrics.top_k_categorical_accuracy": false, + "tf.minimum": false, + "tf.mixed_precision": false, + "tf.mixed_precision.experimental": false, + "tf.mixed_precision.experimental.DynamicLossScale": false, + "tf.mixed_precision.experimental.DynamicLossScale.__call__": true, + "tf.mixed_precision.experimental.DynamicLossScale.__eq__": true, + "tf.mixed_precision.experimental.DynamicLossScale.__ge__": true, + "tf.mixed_precision.experimental.DynamicLossScale.__gt__": true, + "tf.mixed_precision.experimental.DynamicLossScale.__init__": true, + "tf.mixed_precision.experimental.DynamicLossScale.__le__": true, + "tf.mixed_precision.experimental.DynamicLossScale.__lt__": true, + "tf.mixed_precision.experimental.DynamicLossScale.__ne__": true, + "tf.mixed_precision.experimental.DynamicLossScale.__new__": true, + "tf.mixed_precision.experimental.DynamicLossScale.from_config": true, + "tf.mixed_precision.experimental.DynamicLossScale.get_config": true, + 
"tf.mixed_precision.experimental.DynamicLossScale.increment_period": true, + "tf.mixed_precision.experimental.DynamicLossScale.initial_loss_scale": true, + "tf.mixed_precision.experimental.DynamicLossScale.multiplier": true, + "tf.mixed_precision.experimental.DynamicLossScale.update": true, + "tf.mixed_precision.experimental.FixedLossScale": false, + "tf.mixed_precision.experimental.FixedLossScale.__call__": true, + "tf.mixed_precision.experimental.FixedLossScale.__eq__": true, + "tf.mixed_precision.experimental.FixedLossScale.__ge__": true, + "tf.mixed_precision.experimental.FixedLossScale.__gt__": true, + "tf.mixed_precision.experimental.FixedLossScale.__init__": true, + "tf.mixed_precision.experimental.FixedLossScale.__le__": true, + "tf.mixed_precision.experimental.FixedLossScale.__lt__": true, + "tf.mixed_precision.experimental.FixedLossScale.__ne__": true, + "tf.mixed_precision.experimental.FixedLossScale.__new__": true, + "tf.mixed_precision.experimental.FixedLossScale.from_config": true, + "tf.mixed_precision.experimental.FixedLossScale.get_config": true, + "tf.mixed_precision.experimental.FixedLossScale.update": true, + "tf.mixed_precision.experimental.LossScale": false, + "tf.mixed_precision.experimental.LossScale.__call__": true, + "tf.mixed_precision.experimental.LossScale.__eq__": true, + "tf.mixed_precision.experimental.LossScale.__ge__": true, + "tf.mixed_precision.experimental.LossScale.__gt__": true, + "tf.mixed_precision.experimental.LossScale.__init__": true, + "tf.mixed_precision.experimental.LossScale.__le__": true, + "tf.mixed_precision.experimental.LossScale.__lt__": true, + "tf.mixed_precision.experimental.LossScale.__ne__": true, + "tf.mixed_precision.experimental.LossScale.__new__": true, + "tf.mixed_precision.experimental.LossScale.from_config": true, + "tf.mixed_precision.experimental.LossScale.get_config": true, + "tf.mixed_precision.experimental.LossScale.update": true, + "tf.mlir": false, + "tf.mlir.experimental": false, + "tf.mlir.experimental.convert_graph_def": false, + "tf.multiply": false, + "tf.name_scope": false, + "tf.name_scope.__enter__": true, + "tf.name_scope.__eq__": true, + "tf.name_scope.__exit__": true, + "tf.name_scope.__ge__": true, + "tf.name_scope.__gt__": true, + "tf.name_scope.__init__": true, + "tf.name_scope.__le__": true, + "tf.name_scope.__lt__": true, + "tf.name_scope.__ne__": true, + "tf.name_scope.__new__": true, + "tf.name_scope.name": true, + "tf.negative": false, + "tf.nest": false, + "tf.nest.assert_same_structure": false, + "tf.nest.flatten": false, + "tf.nest.is_nested": false, + "tf.nest.map_structure": false, + "tf.nest.pack_sequence_as": false, + "tf.newaxis": true, + "tf.nn": false, + "tf.nn.RNNCellDeviceWrapper": false, + "tf.nn.RNNCellDeviceWrapper.__call__": true, + "tf.nn.RNNCellDeviceWrapper.__eq__": true, + "tf.nn.RNNCellDeviceWrapper.__ge__": true, + "tf.nn.RNNCellDeviceWrapper.__gt__": true, + "tf.nn.RNNCellDeviceWrapper.__init__": true, + "tf.nn.RNNCellDeviceWrapper.__le__": true, + "tf.nn.RNNCellDeviceWrapper.__lt__": true, + "tf.nn.RNNCellDeviceWrapper.__ne__": true, + "tf.nn.RNNCellDeviceWrapper.__new__": true, + "tf.nn.RNNCellDeviceWrapper.activity_regularizer": true, + "tf.nn.RNNCellDeviceWrapper.add_loss": true, + "tf.nn.RNNCellDeviceWrapper.add_metric": true, + "tf.nn.RNNCellDeviceWrapper.add_weight": true, + "tf.nn.RNNCellDeviceWrapper.build": true, + "tf.nn.RNNCellDeviceWrapper.call": true, + "tf.nn.RNNCellDeviceWrapper.compute_mask": true, + "tf.nn.RNNCellDeviceWrapper.compute_output_shape": true, + 
"tf.nn.RNNCellDeviceWrapper.compute_output_signature": true, + "tf.nn.RNNCellDeviceWrapper.count_params": true, + "tf.nn.RNNCellDeviceWrapper.dtype": true, + "tf.nn.RNNCellDeviceWrapper.dynamic": true, + "tf.nn.RNNCellDeviceWrapper.from_config": true, + "tf.nn.RNNCellDeviceWrapper.get_config": true, + "tf.nn.RNNCellDeviceWrapper.get_initial_state": true, + "tf.nn.RNNCellDeviceWrapper.get_weights": true, + "tf.nn.RNNCellDeviceWrapper.input": true, + "tf.nn.RNNCellDeviceWrapper.input_spec": true, + "tf.nn.RNNCellDeviceWrapper.losses": true, + "tf.nn.RNNCellDeviceWrapper.metrics": true, + "tf.nn.RNNCellDeviceWrapper.name": true, + "tf.nn.RNNCellDeviceWrapper.name_scope": true, + "tf.nn.RNNCellDeviceWrapper.non_trainable_weights": true, + "tf.nn.RNNCellDeviceWrapper.output": true, + "tf.nn.RNNCellDeviceWrapper.output_size": true, + "tf.nn.RNNCellDeviceWrapper.set_weights": true, + "tf.nn.RNNCellDeviceWrapper.state_size": true, + "tf.nn.RNNCellDeviceWrapper.submodules": true, + "tf.nn.RNNCellDeviceWrapper.trainable": true, + "tf.nn.RNNCellDeviceWrapper.trainable_weights": true, + "tf.nn.RNNCellDeviceWrapper.weights": true, + "tf.nn.RNNCellDeviceWrapper.with_name_scope": true, + "tf.nn.RNNCellDeviceWrapper.zero_state": true, + "tf.nn.RNNCellDropoutWrapper": false, + "tf.nn.RNNCellDropoutWrapper.__call__": true, + "tf.nn.RNNCellDropoutWrapper.__eq__": true, + "tf.nn.RNNCellDropoutWrapper.__ge__": true, + "tf.nn.RNNCellDropoutWrapper.__gt__": true, + "tf.nn.RNNCellDropoutWrapper.__init__": true, + "tf.nn.RNNCellDropoutWrapper.__le__": true, + "tf.nn.RNNCellDropoutWrapper.__lt__": true, + "tf.nn.RNNCellDropoutWrapper.__ne__": true, + "tf.nn.RNNCellDropoutWrapper.__new__": true, + "tf.nn.RNNCellDropoutWrapper.activity_regularizer": true, + "tf.nn.RNNCellDropoutWrapper.add_loss": true, + "tf.nn.RNNCellDropoutWrapper.add_metric": true, + "tf.nn.RNNCellDropoutWrapper.add_weight": true, + "tf.nn.RNNCellDropoutWrapper.build": true, + "tf.nn.RNNCellDropoutWrapper.call": true, + "tf.nn.RNNCellDropoutWrapper.compute_mask": true, + "tf.nn.RNNCellDropoutWrapper.compute_output_shape": true, + "tf.nn.RNNCellDropoutWrapper.compute_output_signature": true, + "tf.nn.RNNCellDropoutWrapper.count_params": true, + "tf.nn.RNNCellDropoutWrapper.dtype": true, + "tf.nn.RNNCellDropoutWrapper.dynamic": true, + "tf.nn.RNNCellDropoutWrapper.from_config": true, + "tf.nn.RNNCellDropoutWrapper.get_config": true, + "tf.nn.RNNCellDropoutWrapper.get_initial_state": true, + "tf.nn.RNNCellDropoutWrapper.get_weights": true, + "tf.nn.RNNCellDropoutWrapper.input": true, + "tf.nn.RNNCellDropoutWrapper.input_spec": true, + "tf.nn.RNNCellDropoutWrapper.losses": true, + "tf.nn.RNNCellDropoutWrapper.metrics": true, + "tf.nn.RNNCellDropoutWrapper.name": true, + "tf.nn.RNNCellDropoutWrapper.name_scope": true, + "tf.nn.RNNCellDropoutWrapper.non_trainable_weights": true, + "tf.nn.RNNCellDropoutWrapper.output": true, + "tf.nn.RNNCellDropoutWrapper.output_size": true, + "tf.nn.RNNCellDropoutWrapper.set_weights": true, + "tf.nn.RNNCellDropoutWrapper.state_size": true, + "tf.nn.RNNCellDropoutWrapper.submodules": true, + "tf.nn.RNNCellDropoutWrapper.trainable": true, + "tf.nn.RNNCellDropoutWrapper.trainable_weights": true, + "tf.nn.RNNCellDropoutWrapper.weights": true, + "tf.nn.RNNCellDropoutWrapper.with_name_scope": true, + "tf.nn.RNNCellDropoutWrapper.wrapped_cell": true, + "tf.nn.RNNCellDropoutWrapper.zero_state": true, + "tf.nn.RNNCellResidualWrapper": false, + "tf.nn.RNNCellResidualWrapper.__call__": true, + 
"tf.nn.RNNCellResidualWrapper.__eq__": true, + "tf.nn.RNNCellResidualWrapper.__ge__": true, + "tf.nn.RNNCellResidualWrapper.__gt__": true, + "tf.nn.RNNCellResidualWrapper.__init__": true, + "tf.nn.RNNCellResidualWrapper.__le__": true, + "tf.nn.RNNCellResidualWrapper.__lt__": true, + "tf.nn.RNNCellResidualWrapper.__ne__": true, + "tf.nn.RNNCellResidualWrapper.__new__": true, + "tf.nn.RNNCellResidualWrapper.activity_regularizer": true, + "tf.nn.RNNCellResidualWrapper.add_loss": true, + "tf.nn.RNNCellResidualWrapper.add_metric": true, + "tf.nn.RNNCellResidualWrapper.add_weight": true, + "tf.nn.RNNCellResidualWrapper.build": true, + "tf.nn.RNNCellResidualWrapper.call": true, + "tf.nn.RNNCellResidualWrapper.compute_mask": true, + "tf.nn.RNNCellResidualWrapper.compute_output_shape": true, + "tf.nn.RNNCellResidualWrapper.compute_output_signature": true, + "tf.nn.RNNCellResidualWrapper.count_params": true, + "tf.nn.RNNCellResidualWrapper.dtype": true, + "tf.nn.RNNCellResidualWrapper.dynamic": true, + "tf.nn.RNNCellResidualWrapper.from_config": true, + "tf.nn.RNNCellResidualWrapper.get_config": true, + "tf.nn.RNNCellResidualWrapper.get_initial_state": true, + "tf.nn.RNNCellResidualWrapper.get_weights": true, + "tf.nn.RNNCellResidualWrapper.input": true, + "tf.nn.RNNCellResidualWrapper.input_spec": true, + "tf.nn.RNNCellResidualWrapper.losses": true, + "tf.nn.RNNCellResidualWrapper.metrics": true, + "tf.nn.RNNCellResidualWrapper.name": true, + "tf.nn.RNNCellResidualWrapper.name_scope": true, + "tf.nn.RNNCellResidualWrapper.non_trainable_weights": true, + "tf.nn.RNNCellResidualWrapper.output": true, + "tf.nn.RNNCellResidualWrapper.output_size": true, + "tf.nn.RNNCellResidualWrapper.set_weights": true, + "tf.nn.RNNCellResidualWrapper.state_size": true, + "tf.nn.RNNCellResidualWrapper.submodules": true, + "tf.nn.RNNCellResidualWrapper.trainable": true, + "tf.nn.RNNCellResidualWrapper.trainable_weights": true, + "tf.nn.RNNCellResidualWrapper.weights": true, + "tf.nn.RNNCellResidualWrapper.with_name_scope": true, + "tf.nn.RNNCellResidualWrapper.zero_state": true, + "tf.nn.all_candidate_sampler": false, + "tf.nn.atrous_conv2d": false, + "tf.nn.atrous_conv2d_transpose": false, + "tf.nn.avg_pool": false, + "tf.nn.avg_pool1d": false, + "tf.nn.avg_pool2d": false, + "tf.nn.avg_pool3d": false, + "tf.nn.batch_norm_with_global_normalization": false, + "tf.nn.batch_normalization": false, + "tf.nn.bias_add": false, + "tf.nn.collapse_repeated": false, + "tf.nn.compute_accidental_hits": false, + "tf.nn.compute_average_loss": false, + "tf.nn.conv1d": false, + "tf.nn.conv1d_transpose": false, + "tf.nn.conv2d": false, + "tf.nn.conv2d_transpose": false, + "tf.nn.conv3d": false, + "tf.nn.conv3d_transpose": false, + "tf.nn.conv_transpose": false, + "tf.nn.convolution": false, + "tf.nn.crelu": false, + "tf.nn.ctc_beam_search_decoder": false, + "tf.nn.ctc_greedy_decoder": false, + "tf.nn.ctc_loss": false, + "tf.nn.ctc_unique_labels": false, + "tf.nn.depth_to_space": false, + "tf.nn.depthwise_conv2d": false, + "tf.nn.depthwise_conv2d_backprop_filter": false, + "tf.nn.depthwise_conv2d_backprop_input": false, + "tf.nn.dilation2d": false, + "tf.nn.dropout": false, + "tf.nn.elu": false, + "tf.nn.embedding_lookup": false, + "tf.nn.embedding_lookup_sparse": false, + "tf.nn.erosion2d": false, + "tf.nn.fixed_unigram_candidate_sampler": false, + "tf.nn.fractional_avg_pool": false, + "tf.nn.fractional_max_pool": false, + "tf.nn.in_top_k": false, + "tf.nn.l2_loss": false, + "tf.nn.l2_normalize": false, + "tf.nn.leaky_relu": false, + 
"tf.nn.learned_unigram_candidate_sampler": false, + "tf.nn.local_response_normalization": false, + "tf.nn.log_poisson_loss": false, + "tf.nn.log_softmax": false, + "tf.nn.lrn": false, + "tf.nn.max_pool": false, + "tf.nn.max_pool1d": false, + "tf.nn.max_pool2d": false, + "tf.nn.max_pool3d": false, + "tf.nn.max_pool_with_argmax": false, + "tf.nn.moments": false, + "tf.nn.nce_loss": false, + "tf.nn.normalize_moments": false, + "tf.nn.pool": false, + "tf.nn.relu": false, + "tf.nn.relu6": false, + "tf.nn.safe_embedding_lookup_sparse": false, + "tf.nn.sampled_softmax_loss": false, + "tf.nn.scale_regularization_loss": false, + "tf.nn.selu": false, + "tf.nn.separable_conv2d": false, + "tf.nn.sigmoid": false, + "tf.nn.sigmoid_cross_entropy_with_logits": false, + "tf.nn.softmax": false, + "tf.nn.softmax_cross_entropy_with_logits": false, + "tf.nn.softplus": false, + "tf.nn.softsign": false, + "tf.nn.space_to_batch": false, + "tf.nn.space_to_depth": false, + "tf.nn.sparse_softmax_cross_entropy_with_logits": false, + "tf.nn.sufficient_statistics": false, + "tf.nn.swish": false, + "tf.nn.tanh": false, + "tf.nn.top_k": false, + "tf.nn.weighted_cross_entropy_with_logits": false, + "tf.nn.weighted_moments": false, + "tf.nn.with_space_to_batch": false, + "tf.nn.zero_fraction": false, + "tf.no_gradient": false, + "tf.no_op": false, + "tf.nondifferentiable_batch_function": false, + "tf.norm": false, + "tf.not_equal": false, + "tf.numpy_function": false, + "tf.one_hot": false, + "tf.ones": false, + "tf.ones_initializer": false, + "tf.ones_initializer.__call__": true, + "tf.ones_initializer.__eq__": true, + "tf.ones_initializer.__ge__": true, + "tf.ones_initializer.__gt__": true, + "tf.ones_initializer.__init__": true, + "tf.ones_initializer.__le__": true, + "tf.ones_initializer.__lt__": true, + "tf.ones_initializer.__ne__": true, + "tf.ones_initializer.__new__": true, + "tf.ones_initializer.from_config": true, + "tf.ones_initializer.get_config": true, + "tf.ones_like": false, + "tf.optimizers": false, + "tf.optimizers.Adadelta": false, + "tf.optimizers.Adadelta.__eq__": true, + "tf.optimizers.Adadelta.__ge__": true, + "tf.optimizers.Adadelta.__gt__": true, + "tf.optimizers.Adadelta.__init__": true, + "tf.optimizers.Adadelta.__le__": true, + "tf.optimizers.Adadelta.__lt__": true, + "tf.optimizers.Adadelta.__ne__": true, + "tf.optimizers.Adadelta.__new__": true, + "tf.optimizers.Adadelta.add_slot": true, + "tf.optimizers.Adadelta.add_weight": true, + "tf.optimizers.Adadelta.apply_gradients": true, + "tf.optimizers.Adadelta.from_config": true, + "tf.optimizers.Adadelta.get_config": true, + "tf.optimizers.Adadelta.get_gradients": true, + "tf.optimizers.Adadelta.get_slot": true, + "tf.optimizers.Adadelta.get_slot_names": true, + "tf.optimizers.Adadelta.get_updates": true, + "tf.optimizers.Adadelta.get_weights": true, + "tf.optimizers.Adadelta.iterations": true, + "tf.optimizers.Adadelta.minimize": true, + "tf.optimizers.Adadelta.set_weights": true, + "tf.optimizers.Adadelta.variables": true, + "tf.optimizers.Adadelta.weights": true, + "tf.optimizers.Adagrad": false, + "tf.optimizers.Adagrad.__eq__": true, + "tf.optimizers.Adagrad.__ge__": true, + "tf.optimizers.Adagrad.__gt__": true, + "tf.optimizers.Adagrad.__init__": true, + "tf.optimizers.Adagrad.__le__": true, + "tf.optimizers.Adagrad.__lt__": true, + "tf.optimizers.Adagrad.__ne__": true, + "tf.optimizers.Adagrad.__new__": true, + "tf.optimizers.Adagrad.add_slot": true, + "tf.optimizers.Adagrad.add_weight": true, + "tf.optimizers.Adagrad.apply_gradients": 
true, + "tf.optimizers.Adagrad.from_config": true, + "tf.optimizers.Adagrad.get_config": true, + "tf.optimizers.Adagrad.get_gradients": true, + "tf.optimizers.Adagrad.get_slot": true, + "tf.optimizers.Adagrad.get_slot_names": true, + "tf.optimizers.Adagrad.get_updates": true, + "tf.optimizers.Adagrad.get_weights": true, + "tf.optimizers.Adagrad.iterations": true, + "tf.optimizers.Adagrad.minimize": true, + "tf.optimizers.Adagrad.set_weights": true, + "tf.optimizers.Adagrad.variables": true, + "tf.optimizers.Adagrad.weights": true, + "tf.optimizers.Adam": false, + "tf.optimizers.Adam.__eq__": true, + "tf.optimizers.Adam.__ge__": true, + "tf.optimizers.Adam.__gt__": true, + "tf.optimizers.Adam.__init__": true, + "tf.optimizers.Adam.__le__": true, + "tf.optimizers.Adam.__lt__": true, + "tf.optimizers.Adam.__ne__": true, + "tf.optimizers.Adam.__new__": true, + "tf.optimizers.Adam.add_slot": true, + "tf.optimizers.Adam.add_weight": true, + "tf.optimizers.Adam.apply_gradients": true, + "tf.optimizers.Adam.from_config": true, + "tf.optimizers.Adam.get_config": true, + "tf.optimizers.Adam.get_gradients": true, + "tf.optimizers.Adam.get_slot": true, + "tf.optimizers.Adam.get_slot_names": true, + "tf.optimizers.Adam.get_updates": true, + "tf.optimizers.Adam.get_weights": true, + "tf.optimizers.Adam.iterations": true, + "tf.optimizers.Adam.minimize": true, + "tf.optimizers.Adam.set_weights": true, + "tf.optimizers.Adam.variables": true, + "tf.optimizers.Adam.weights": true, + "tf.optimizers.Adamax": false, + "tf.optimizers.Adamax.__eq__": true, + "tf.optimizers.Adamax.__ge__": true, + "tf.optimizers.Adamax.__gt__": true, + "tf.optimizers.Adamax.__init__": true, + "tf.optimizers.Adamax.__le__": true, + "tf.optimizers.Adamax.__lt__": true, + "tf.optimizers.Adamax.__ne__": true, + "tf.optimizers.Adamax.__new__": true, + "tf.optimizers.Adamax.add_slot": true, + "tf.optimizers.Adamax.add_weight": true, + "tf.optimizers.Adamax.apply_gradients": true, + "tf.optimizers.Adamax.from_config": true, + "tf.optimizers.Adamax.get_config": true, + "tf.optimizers.Adamax.get_gradients": true, + "tf.optimizers.Adamax.get_slot": true, + "tf.optimizers.Adamax.get_slot_names": true, + "tf.optimizers.Adamax.get_updates": true, + "tf.optimizers.Adamax.get_weights": true, + "tf.optimizers.Adamax.iterations": true, + "tf.optimizers.Adamax.minimize": true, + "tf.optimizers.Adamax.set_weights": true, + "tf.optimizers.Adamax.variables": true, + "tf.optimizers.Adamax.weights": true, + "tf.optimizers.Ftrl": false, + "tf.optimizers.Ftrl.__eq__": true, + "tf.optimizers.Ftrl.__ge__": true, + "tf.optimizers.Ftrl.__gt__": true, + "tf.optimizers.Ftrl.__init__": true, + "tf.optimizers.Ftrl.__le__": true, + "tf.optimizers.Ftrl.__lt__": true, + "tf.optimizers.Ftrl.__ne__": true, + "tf.optimizers.Ftrl.__new__": true, + "tf.optimizers.Ftrl.add_slot": true, + "tf.optimizers.Ftrl.add_weight": true, + "tf.optimizers.Ftrl.apply_gradients": true, + "tf.optimizers.Ftrl.from_config": true, + "tf.optimizers.Ftrl.get_config": true, + "tf.optimizers.Ftrl.get_gradients": true, + "tf.optimizers.Ftrl.get_slot": true, + "tf.optimizers.Ftrl.get_slot_names": true, + "tf.optimizers.Ftrl.get_updates": true, + "tf.optimizers.Ftrl.get_weights": true, + "tf.optimizers.Ftrl.iterations": true, + "tf.optimizers.Ftrl.minimize": true, + "tf.optimizers.Ftrl.set_weights": true, + "tf.optimizers.Ftrl.variables": true, + "tf.optimizers.Ftrl.weights": true, + "tf.optimizers.Nadam": false, + "tf.optimizers.Nadam.__eq__": true, + "tf.optimizers.Nadam.__ge__": true, + 
"tf.optimizers.Nadam.__gt__": true, + "tf.optimizers.Nadam.__init__": true, + "tf.optimizers.Nadam.__le__": true, + "tf.optimizers.Nadam.__lt__": true, + "tf.optimizers.Nadam.__ne__": true, + "tf.optimizers.Nadam.__new__": true, + "tf.optimizers.Nadam.add_slot": true, + "tf.optimizers.Nadam.add_weight": true, + "tf.optimizers.Nadam.apply_gradients": true, + "tf.optimizers.Nadam.from_config": true, + "tf.optimizers.Nadam.get_config": true, + "tf.optimizers.Nadam.get_gradients": true, + "tf.optimizers.Nadam.get_slot": true, + "tf.optimizers.Nadam.get_slot_names": true, + "tf.optimizers.Nadam.get_updates": true, + "tf.optimizers.Nadam.get_weights": true, + "tf.optimizers.Nadam.iterations": true, + "tf.optimizers.Nadam.minimize": true, + "tf.optimizers.Nadam.set_weights": true, + "tf.optimizers.Nadam.variables": true, + "tf.optimizers.Nadam.weights": true, + "tf.optimizers.Optimizer": false, + "tf.optimizers.Optimizer.__eq__": true, + "tf.optimizers.Optimizer.__ge__": true, + "tf.optimizers.Optimizer.__gt__": true, + "tf.optimizers.Optimizer.__init__": true, + "tf.optimizers.Optimizer.__le__": true, + "tf.optimizers.Optimizer.__lt__": true, + "tf.optimizers.Optimizer.__ne__": true, + "tf.optimizers.Optimizer.__new__": true, + "tf.optimizers.Optimizer.add_slot": true, + "tf.optimizers.Optimizer.add_weight": true, + "tf.optimizers.Optimizer.apply_gradients": true, + "tf.optimizers.Optimizer.from_config": true, + "tf.optimizers.Optimizer.get_config": true, + "tf.optimizers.Optimizer.get_gradients": true, + "tf.optimizers.Optimizer.get_slot": true, + "tf.optimizers.Optimizer.get_slot_names": true, + "tf.optimizers.Optimizer.get_updates": true, + "tf.optimizers.Optimizer.get_weights": true, + "tf.optimizers.Optimizer.iterations": true, + "tf.optimizers.Optimizer.minimize": true, + "tf.optimizers.Optimizer.set_weights": true, + "tf.optimizers.Optimizer.variables": true, + "tf.optimizers.Optimizer.weights": true, + "tf.optimizers.RMSprop": false, + "tf.optimizers.RMSprop.__eq__": true, + "tf.optimizers.RMSprop.__ge__": true, + "tf.optimizers.RMSprop.__gt__": true, + "tf.optimizers.RMSprop.__init__": true, + "tf.optimizers.RMSprop.__le__": true, + "tf.optimizers.RMSprop.__lt__": true, + "tf.optimizers.RMSprop.__ne__": true, + "tf.optimizers.RMSprop.__new__": true, + "tf.optimizers.RMSprop.add_slot": true, + "tf.optimizers.RMSprop.add_weight": true, + "tf.optimizers.RMSprop.apply_gradients": true, + "tf.optimizers.RMSprop.from_config": true, + "tf.optimizers.RMSprop.get_config": true, + "tf.optimizers.RMSprop.get_gradients": true, + "tf.optimizers.RMSprop.get_slot": true, + "tf.optimizers.RMSprop.get_slot_names": true, + "tf.optimizers.RMSprop.get_updates": true, + "tf.optimizers.RMSprop.get_weights": true, + "tf.optimizers.RMSprop.iterations": true, + "tf.optimizers.RMSprop.minimize": true, + "tf.optimizers.RMSprop.set_weights": true, + "tf.optimizers.RMSprop.variables": true, + "tf.optimizers.RMSprop.weights": true, + "tf.optimizers.SGD": false, + "tf.optimizers.SGD.__eq__": true, + "tf.optimizers.SGD.__ge__": true, + "tf.optimizers.SGD.__gt__": true, + "tf.optimizers.SGD.__init__": true, + "tf.optimizers.SGD.__le__": true, + "tf.optimizers.SGD.__lt__": true, + "tf.optimizers.SGD.__ne__": true, + "tf.optimizers.SGD.__new__": true, + "tf.optimizers.SGD.add_slot": true, + "tf.optimizers.SGD.add_weight": true, + "tf.optimizers.SGD.apply_gradients": true, + "tf.optimizers.SGD.from_config": true, + "tf.optimizers.SGD.get_config": true, + "tf.optimizers.SGD.get_gradients": true, + 
"tf.optimizers.SGD.get_slot": true, + "tf.optimizers.SGD.get_slot_names": true, + "tf.optimizers.SGD.get_updates": true, + "tf.optimizers.SGD.get_weights": true, + "tf.optimizers.SGD.iterations": true, + "tf.optimizers.SGD.minimize": true, + "tf.optimizers.SGD.set_weights": true, + "tf.optimizers.SGD.variables": true, + "tf.optimizers.SGD.weights": true, + "tf.optimizers.deserialize": false, + "tf.optimizers.get": false, + "tf.optimizers.schedules": false, + "tf.optimizers.schedules.ExponentialDecay": false, + "tf.optimizers.schedules.ExponentialDecay.__call__": true, + "tf.optimizers.schedules.ExponentialDecay.__eq__": true, + "tf.optimizers.schedules.ExponentialDecay.__ge__": true, + "tf.optimizers.schedules.ExponentialDecay.__gt__": true, + "tf.optimizers.schedules.ExponentialDecay.__init__": true, + "tf.optimizers.schedules.ExponentialDecay.__le__": true, + "tf.optimizers.schedules.ExponentialDecay.__lt__": true, + "tf.optimizers.schedules.ExponentialDecay.__ne__": true, + "tf.optimizers.schedules.ExponentialDecay.__new__": true, + "tf.optimizers.schedules.ExponentialDecay.from_config": true, + "tf.optimizers.schedules.ExponentialDecay.get_config": true, + "tf.optimizers.schedules.InverseTimeDecay": false, + "tf.optimizers.schedules.InverseTimeDecay.__call__": true, + "tf.optimizers.schedules.InverseTimeDecay.__eq__": true, + "tf.optimizers.schedules.InverseTimeDecay.__ge__": true, + "tf.optimizers.schedules.InverseTimeDecay.__gt__": true, + "tf.optimizers.schedules.InverseTimeDecay.__init__": true, + "tf.optimizers.schedules.InverseTimeDecay.__le__": true, + "tf.optimizers.schedules.InverseTimeDecay.__lt__": true, + "tf.optimizers.schedules.InverseTimeDecay.__ne__": true, + "tf.optimizers.schedules.InverseTimeDecay.__new__": true, + "tf.optimizers.schedules.InverseTimeDecay.from_config": true, + "tf.optimizers.schedules.InverseTimeDecay.get_config": true, + "tf.optimizers.schedules.LearningRateSchedule": false, + "tf.optimizers.schedules.LearningRateSchedule.__call__": true, + "tf.optimizers.schedules.LearningRateSchedule.__eq__": true, + "tf.optimizers.schedules.LearningRateSchedule.__ge__": true, + "tf.optimizers.schedules.LearningRateSchedule.__gt__": true, + "tf.optimizers.schedules.LearningRateSchedule.__init__": true, + "tf.optimizers.schedules.LearningRateSchedule.__le__": true, + "tf.optimizers.schedules.LearningRateSchedule.__lt__": true, + "tf.optimizers.schedules.LearningRateSchedule.__ne__": true, + "tf.optimizers.schedules.LearningRateSchedule.__new__": true, + "tf.optimizers.schedules.LearningRateSchedule.from_config": true, + "tf.optimizers.schedules.LearningRateSchedule.get_config": true, + "tf.optimizers.schedules.PiecewiseConstantDecay": false, + "tf.optimizers.schedules.PiecewiseConstantDecay.__call__": true, + "tf.optimizers.schedules.PiecewiseConstantDecay.__eq__": true, + "tf.optimizers.schedules.PiecewiseConstantDecay.__ge__": true, + "tf.optimizers.schedules.PiecewiseConstantDecay.__gt__": true, + "tf.optimizers.schedules.PiecewiseConstantDecay.__init__": true, + "tf.optimizers.schedules.PiecewiseConstantDecay.__le__": true, + "tf.optimizers.schedules.PiecewiseConstantDecay.__lt__": true, + "tf.optimizers.schedules.PiecewiseConstantDecay.__ne__": true, + "tf.optimizers.schedules.PiecewiseConstantDecay.__new__": true, + "tf.optimizers.schedules.PiecewiseConstantDecay.from_config": true, + "tf.optimizers.schedules.PiecewiseConstantDecay.get_config": true, + "tf.optimizers.schedules.PolynomialDecay": false, + "tf.optimizers.schedules.PolynomialDecay.__call__": 
true, + "tf.optimizers.schedules.PolynomialDecay.__eq__": true, + "tf.optimizers.schedules.PolynomialDecay.__ge__": true, + "tf.optimizers.schedules.PolynomialDecay.__gt__": true, + "tf.optimizers.schedules.PolynomialDecay.__init__": true, + "tf.optimizers.schedules.PolynomialDecay.__le__": true, + "tf.optimizers.schedules.PolynomialDecay.__lt__": true, + "tf.optimizers.schedules.PolynomialDecay.__ne__": true, + "tf.optimizers.schedules.PolynomialDecay.__new__": true, + "tf.optimizers.schedules.PolynomialDecay.from_config": true, + "tf.optimizers.schedules.PolynomialDecay.get_config": true, + "tf.optimizers.schedules.deserialize": false, + "tf.optimizers.schedules.serialize": false, + "tf.optimizers.serialize": false, + "tf.pad": false, + "tf.parallel_stack": false, + "tf.pow": false, + "tf.print": false, + "tf.profiler": false, + "tf.profiler.experimental": false, + "tf.profiler.experimental.Profile": false, + "tf.profiler.experimental.Profile.__enter__": true, + "tf.profiler.experimental.Profile.__eq__": true, + "tf.profiler.experimental.Profile.__exit__": true, + "tf.profiler.experimental.Profile.__ge__": true, + "tf.profiler.experimental.Profile.__gt__": true, + "tf.profiler.experimental.Profile.__init__": true, + "tf.profiler.experimental.Profile.__le__": true, + "tf.profiler.experimental.Profile.__lt__": true, + "tf.profiler.experimental.Profile.__ne__": true, + "tf.profiler.experimental.Profile.__new__": true, + "tf.profiler.experimental.client": false, + "tf.profiler.experimental.client.monitor": false, + "tf.profiler.experimental.client.trace": false, + "tf.profiler.experimental.server": false, + "tf.profiler.experimental.server.start": false, + "tf.profiler.experimental.start": false, + "tf.profiler.experimental.stop": false, + "tf.py_function": false, + "tf.qint16": true, + "tf.qint32": true, + "tf.qint8": true, + "tf.quantization": false, + "tf.quantization.dequantize": false, + "tf.quantization.fake_quant_with_min_max_args": false, + "tf.quantization.fake_quant_with_min_max_args_gradient": false, + "tf.quantization.fake_quant_with_min_max_vars": false, + "tf.quantization.fake_quant_with_min_max_vars_gradient": false, + "tf.quantization.fake_quant_with_min_max_vars_per_channel": false, + "tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient": false, + "tf.quantization.quantize": false, + "tf.quantization.quantize_and_dequantize": false, + "tf.quantization.quantized_concat": false, + "tf.queue": false, + "tf.queue.FIFOQueue": false, + "tf.queue.FIFOQueue.__eq__": true, + "tf.queue.FIFOQueue.__ge__": true, + "tf.queue.FIFOQueue.__gt__": true, + "tf.queue.FIFOQueue.__init__": true, + "tf.queue.FIFOQueue.__le__": true, + "tf.queue.FIFOQueue.__lt__": true, + "tf.queue.FIFOQueue.__ne__": true, + "tf.queue.FIFOQueue.__new__": true, + "tf.queue.FIFOQueue.close": true, + "tf.queue.FIFOQueue.dequeue": true, + "tf.queue.FIFOQueue.dequeue_many": true, + "tf.queue.FIFOQueue.dequeue_up_to": true, + "tf.queue.FIFOQueue.dtypes": true, + "tf.queue.FIFOQueue.enqueue": true, + "tf.queue.FIFOQueue.enqueue_many": true, + "tf.queue.FIFOQueue.from_list": true, + "tf.queue.FIFOQueue.is_closed": true, + "tf.queue.FIFOQueue.name": true, + "tf.queue.FIFOQueue.names": true, + "tf.queue.FIFOQueue.queue_ref": true, + "tf.queue.FIFOQueue.shapes": true, + "tf.queue.FIFOQueue.size": true, + "tf.queue.PaddingFIFOQueue": false, + "tf.queue.PaddingFIFOQueue.__eq__": true, + "tf.queue.PaddingFIFOQueue.__ge__": true, + "tf.queue.PaddingFIFOQueue.__gt__": true, + "tf.queue.PaddingFIFOQueue.__init__": 
true, + "tf.queue.PaddingFIFOQueue.__le__": true, + "tf.queue.PaddingFIFOQueue.__lt__": true, + "tf.queue.PaddingFIFOQueue.__ne__": true, + "tf.queue.PaddingFIFOQueue.__new__": true, + "tf.queue.PaddingFIFOQueue.close": true, + "tf.queue.PaddingFIFOQueue.dequeue": true, + "tf.queue.PaddingFIFOQueue.dequeue_many": true, + "tf.queue.PaddingFIFOQueue.dequeue_up_to": true, + "tf.queue.PaddingFIFOQueue.dtypes": true, + "tf.queue.PaddingFIFOQueue.enqueue": true, + "tf.queue.PaddingFIFOQueue.enqueue_many": true, + "tf.queue.PaddingFIFOQueue.from_list": true, + "tf.queue.PaddingFIFOQueue.is_closed": true, + "tf.queue.PaddingFIFOQueue.name": true, + "tf.queue.PaddingFIFOQueue.names": true, + "tf.queue.PaddingFIFOQueue.queue_ref": true, + "tf.queue.PaddingFIFOQueue.shapes": true, + "tf.queue.PaddingFIFOQueue.size": true, + "tf.queue.PriorityQueue": false, + "tf.queue.PriorityQueue.__eq__": true, + "tf.queue.PriorityQueue.__ge__": true, + "tf.queue.PriorityQueue.__gt__": true, + "tf.queue.PriorityQueue.__init__": true, + "tf.queue.PriorityQueue.__le__": true, + "tf.queue.PriorityQueue.__lt__": true, + "tf.queue.PriorityQueue.__ne__": true, + "tf.queue.PriorityQueue.__new__": true, + "tf.queue.PriorityQueue.close": true, + "tf.queue.PriorityQueue.dequeue": true, + "tf.queue.PriorityQueue.dequeue_many": true, + "tf.queue.PriorityQueue.dequeue_up_to": true, + "tf.queue.PriorityQueue.dtypes": true, + "tf.queue.PriorityQueue.enqueue": true, + "tf.queue.PriorityQueue.enqueue_many": true, + "tf.queue.PriorityQueue.from_list": true, + "tf.queue.PriorityQueue.is_closed": true, + "tf.queue.PriorityQueue.name": true, + "tf.queue.PriorityQueue.names": true, + "tf.queue.PriorityQueue.queue_ref": true, + "tf.queue.PriorityQueue.shapes": true, + "tf.queue.PriorityQueue.size": true, + "tf.queue.QueueBase": false, + "tf.queue.QueueBase.__eq__": true, + "tf.queue.QueueBase.__ge__": true, + "tf.queue.QueueBase.__gt__": true, + "tf.queue.QueueBase.__init__": true, + "tf.queue.QueueBase.__le__": true, + "tf.queue.QueueBase.__lt__": true, + "tf.queue.QueueBase.__ne__": true, + "tf.queue.QueueBase.__new__": true, + "tf.queue.QueueBase.close": true, + "tf.queue.QueueBase.dequeue": true, + "tf.queue.QueueBase.dequeue_many": true, + "tf.queue.QueueBase.dequeue_up_to": true, + "tf.queue.QueueBase.dtypes": true, + "tf.queue.QueueBase.enqueue": true, + "tf.queue.QueueBase.enqueue_many": true, + "tf.queue.QueueBase.from_list": true, + "tf.queue.QueueBase.is_closed": true, + "tf.queue.QueueBase.name": true, + "tf.queue.QueueBase.names": true, + "tf.queue.QueueBase.queue_ref": true, + "tf.queue.QueueBase.shapes": true, + "tf.queue.QueueBase.size": true, + "tf.queue.RandomShuffleQueue": false, + "tf.queue.RandomShuffleQueue.__eq__": true, + "tf.queue.RandomShuffleQueue.__ge__": true, + "tf.queue.RandomShuffleQueue.__gt__": true, + "tf.queue.RandomShuffleQueue.__init__": true, + "tf.queue.RandomShuffleQueue.__le__": true, + "tf.queue.RandomShuffleQueue.__lt__": true, + "tf.queue.RandomShuffleQueue.__ne__": true, + "tf.queue.RandomShuffleQueue.__new__": true, + "tf.queue.RandomShuffleQueue.close": true, + "tf.queue.RandomShuffleQueue.dequeue": true, + "tf.queue.RandomShuffleQueue.dequeue_many": true, + "tf.queue.RandomShuffleQueue.dequeue_up_to": true, + "tf.queue.RandomShuffleQueue.dtypes": true, + "tf.queue.RandomShuffleQueue.enqueue": true, + "tf.queue.RandomShuffleQueue.enqueue_many": true, + "tf.queue.RandomShuffleQueue.from_list": true, + "tf.queue.RandomShuffleQueue.is_closed": true, + "tf.queue.RandomShuffleQueue.name": true, 
+ "tf.queue.RandomShuffleQueue.names": true, + "tf.queue.RandomShuffleQueue.queue_ref": true, + "tf.queue.RandomShuffleQueue.shapes": true, + "tf.queue.RandomShuffleQueue.size": true, + "tf.quint16": true, + "tf.quint8": true, + "tf.ragged": false, + "tf.ragged.boolean_mask": false, + "tf.ragged.constant": false, + "tf.ragged.map_flat_values": false, + "tf.ragged.range": false, + "tf.ragged.row_splits_to_segment_ids": false, + "tf.ragged.segment_ids_to_row_splits": false, + "tf.ragged.stack": false, + "tf.ragged.stack_dynamic_partitions": false, + "tf.random": false, + "tf.random.Algorithm": false, + "tf.random.Algorithm.PHILOX": true, + "tf.random.Algorithm.THREEFRY": true, + "tf.random.Algorithm.name": true, + "tf.random.Algorithm.value": true, + "tf.random.Generator": false, + "tf.random.Generator.__eq__": true, + "tf.random.Generator.__ge__": true, + "tf.random.Generator.__gt__": true, + "tf.random.Generator.__init__": true, + "tf.random.Generator.__le__": true, + "tf.random.Generator.__lt__": true, + "tf.random.Generator.__ne__": true, + "tf.random.Generator.__new__": true, + "tf.random.Generator.algorithm": true, + "tf.random.Generator.binomial": true, + "tf.random.Generator.from_key_counter": true, + "tf.random.Generator.from_non_deterministic_state": true, + "tf.random.Generator.from_seed": true, + "tf.random.Generator.from_state": true, + "tf.random.Generator.key": true, + "tf.random.Generator.make_seeds": true, + "tf.random.Generator.normal": true, + "tf.random.Generator.reset": true, + "tf.random.Generator.reset_from_key_counter": true, + "tf.random.Generator.reset_from_seed": true, + "tf.random.Generator.skip": true, + "tf.random.Generator.split": true, + "tf.random.Generator.state": true, + "tf.random.Generator.truncated_normal": true, + "tf.random.Generator.uniform": true, + "tf.random.Generator.uniform_full_int": true, + "tf.random.all_candidate_sampler": false, + "tf.random.categorical": false, + "tf.random.create_rng_state": false, + "tf.random.experimental": false, + "tf.random.experimental.Algorithm": false, + "tf.random.experimental.Algorithm.PHILOX": true, + "tf.random.experimental.Algorithm.THREEFRY": true, + "tf.random.experimental.Algorithm.name": true, + "tf.random.experimental.Algorithm.value": true, + "tf.random.experimental.Generator": false, + "tf.random.experimental.Generator.__eq__": true, + "tf.random.experimental.Generator.__ge__": true, + "tf.random.experimental.Generator.__gt__": true, + "tf.random.experimental.Generator.__init__": true, + "tf.random.experimental.Generator.__le__": true, + "tf.random.experimental.Generator.__lt__": true, + "tf.random.experimental.Generator.__ne__": true, + "tf.random.experimental.Generator.__new__": true, + "tf.random.experimental.Generator.algorithm": true, + "tf.random.experimental.Generator.binomial": true, + "tf.random.experimental.Generator.from_key_counter": true, + "tf.random.experimental.Generator.from_non_deterministic_state": true, + "tf.random.experimental.Generator.from_seed": true, + "tf.random.experimental.Generator.from_state": true, + "tf.random.experimental.Generator.key": true, + "tf.random.experimental.Generator.make_seeds": true, + "tf.random.experimental.Generator.normal": true, + "tf.random.experimental.Generator.reset": true, + "tf.random.experimental.Generator.reset_from_key_counter": true, + "tf.random.experimental.Generator.reset_from_seed": true, + "tf.random.experimental.Generator.skip": true, + "tf.random.experimental.Generator.split": true, + "tf.random.experimental.Generator.state": true, + 
"tf.random.experimental.Generator.truncated_normal": true, + "tf.random.experimental.Generator.uniform": true, + "tf.random.experimental.Generator.uniform_full_int": true, + "tf.random.experimental.create_rng_state": false, + "tf.random.experimental.get_global_generator": false, + "tf.random.experimental.set_global_generator": false, + "tf.random.fixed_unigram_candidate_sampler": false, + "tf.random.gamma": false, + "tf.random.get_global_generator": false, + "tf.random.learned_unigram_candidate_sampler": false, + "tf.random.log_uniform_candidate_sampler": false, + "tf.random.normal": false, + "tf.random.poisson": false, + "tf.random.set_global_generator": false, + "tf.random.set_seed": false, + "tf.random.shuffle": false, + "tf.random.stateless_binomial": false, + "tf.random.stateless_categorical": false, + "tf.random.stateless_gamma": false, + "tf.random.stateless_normal": false, + "tf.random.stateless_poisson": false, + "tf.random.stateless_truncated_normal": false, + "tf.random.stateless_uniform": false, + "tf.random.truncated_normal": false, + "tf.random.uniform": false, + "tf.random.uniform_candidate_sampler": false, + "tf.random_normal_initializer": false, + "tf.random_normal_initializer.__call__": true, + "tf.random_normal_initializer.__eq__": true, + "tf.random_normal_initializer.__ge__": true, + "tf.random_normal_initializer.__gt__": true, + "tf.random_normal_initializer.__init__": true, + "tf.random_normal_initializer.__le__": true, + "tf.random_normal_initializer.__lt__": true, + "tf.random_normal_initializer.__ne__": true, + "tf.random_normal_initializer.__new__": true, + "tf.random_normal_initializer.from_config": true, + "tf.random_normal_initializer.get_config": true, + "tf.random_uniform_initializer": false, + "tf.random_uniform_initializer.__call__": true, + "tf.random_uniform_initializer.__eq__": true, + "tf.random_uniform_initializer.__ge__": true, + "tf.random_uniform_initializer.__gt__": true, + "tf.random_uniform_initializer.__init__": true, + "tf.random_uniform_initializer.__le__": true, + "tf.random_uniform_initializer.__lt__": true, + "tf.random_uniform_initializer.__ne__": true, + "tf.random_uniform_initializer.__new__": true, + "tf.random_uniform_initializer.from_config": true, + "tf.random_uniform_initializer.get_config": true, + "tf.range": false, + "tf.rank": false, + "tf.raw_ops": false, + "tf.raw_ops.Abort": false, + "tf.raw_ops.Abs": false, + "tf.raw_ops.AccumulateNV2": false, + "tf.raw_ops.AccumulatorApplyGradient": false, + "tf.raw_ops.AccumulatorNumAccumulated": false, + "tf.raw_ops.AccumulatorSetGlobalStep": false, + "tf.raw_ops.AccumulatorTakeGradient": false, + "tf.raw_ops.Acos": false, + "tf.raw_ops.Acosh": false, + "tf.raw_ops.Add": false, + "tf.raw_ops.AddManySparseToTensorsMap": false, + "tf.raw_ops.AddN": false, + "tf.raw_ops.AddSparseToTensorsMap": false, + "tf.raw_ops.AddV2": false, + "tf.raw_ops.AdjustContrast": false, + "tf.raw_ops.AdjustContrastv2": false, + "tf.raw_ops.AdjustHue": false, + "tf.raw_ops.AdjustSaturation": false, + "tf.raw_ops.All": false, + "tf.raw_ops.AllCandidateSampler": false, + "tf.raw_ops.AllToAll": false, + "tf.raw_ops.Angle": false, + "tf.raw_ops.AnonymousIterator": false, + "tf.raw_ops.AnonymousIteratorV2": false, + "tf.raw_ops.AnonymousMemoryCache": false, + "tf.raw_ops.AnonymousMultiDeviceIterator": false, + "tf.raw_ops.AnonymousRandomSeedGenerator": false, + "tf.raw_ops.Any": false, + "tf.raw_ops.ApplyAdaMax": false, + "tf.raw_ops.ApplyAdadelta": false, + "tf.raw_ops.ApplyAdagrad": false, + 
"tf.raw_ops.ApplyAdagradDA": false, + "tf.raw_ops.ApplyAdagradV2": false, + "tf.raw_ops.ApplyAdam": false, + "tf.raw_ops.ApplyAddSign": false, + "tf.raw_ops.ApplyCenteredRMSProp": false, + "tf.raw_ops.ApplyFtrl": false, + "tf.raw_ops.ApplyFtrlV2": false, + "tf.raw_ops.ApplyGradientDescent": false, + "tf.raw_ops.ApplyMomentum": false, + "tf.raw_ops.ApplyPowerSign": false, + "tf.raw_ops.ApplyProximalAdagrad": false, + "tf.raw_ops.ApplyProximalGradientDescent": false, + "tf.raw_ops.ApplyRMSProp": false, + "tf.raw_ops.ApproximateEqual": false, + "tf.raw_ops.ArgMax": false, + "tf.raw_ops.ArgMin": false, + "tf.raw_ops.AsString": false, + "tf.raw_ops.Asin": false, + "tf.raw_ops.Asinh": false, + "tf.raw_ops.Assert": false, + "tf.raw_ops.AssertCardinalityDataset": false, + "tf.raw_ops.AssertNextDataset": false, + "tf.raw_ops.Assign": false, + "tf.raw_ops.AssignAdd": false, + "tf.raw_ops.AssignAddVariableOp": false, + "tf.raw_ops.AssignSub": false, + "tf.raw_ops.AssignSubVariableOp": false, + "tf.raw_ops.AssignVariableOp": false, + "tf.raw_ops.Atan": false, + "tf.raw_ops.Atan2": false, + "tf.raw_ops.Atanh": false, + "tf.raw_ops.AudioSpectrogram": false, + "tf.raw_ops.AudioSummary": false, + "tf.raw_ops.AudioSummaryV2": false, + "tf.raw_ops.AutoShardDataset": false, + "tf.raw_ops.AvgPool": false, + "tf.raw_ops.AvgPool3D": false, + "tf.raw_ops.AvgPool3DGrad": false, + "tf.raw_ops.AvgPoolGrad": false, + "tf.raw_ops.Barrier": false, + "tf.raw_ops.BarrierClose": false, + "tf.raw_ops.BarrierIncompleteSize": false, + "tf.raw_ops.BarrierInsertMany": false, + "tf.raw_ops.BarrierReadySize": false, + "tf.raw_ops.BarrierTakeMany": false, + "tf.raw_ops.Batch": false, + "tf.raw_ops.BatchCholesky": false, + "tf.raw_ops.BatchCholeskyGrad": false, + "tf.raw_ops.BatchDataset": false, + "tf.raw_ops.BatchDatasetV2": false, + "tf.raw_ops.BatchFFT": false, + "tf.raw_ops.BatchFFT2D": false, + "tf.raw_ops.BatchFFT3D": false, + "tf.raw_ops.BatchFunction": false, + "tf.raw_ops.BatchIFFT": false, + "tf.raw_ops.BatchIFFT2D": false, + "tf.raw_ops.BatchIFFT3D": false, + "tf.raw_ops.BatchMatMul": false, + "tf.raw_ops.BatchMatMulV2": false, + "tf.raw_ops.BatchMatrixBandPart": false, + "tf.raw_ops.BatchMatrixDeterminant": false, + "tf.raw_ops.BatchMatrixDiag": false, + "tf.raw_ops.BatchMatrixDiagPart": false, + "tf.raw_ops.BatchMatrixInverse": false, + "tf.raw_ops.BatchMatrixSetDiag": false, + "tf.raw_ops.BatchMatrixSolve": false, + "tf.raw_ops.BatchMatrixSolveLs": false, + "tf.raw_ops.BatchMatrixTriangularSolve": false, + "tf.raw_ops.BatchNormWithGlobalNormalization": false, + "tf.raw_ops.BatchNormWithGlobalNormalizationGrad": false, + "tf.raw_ops.BatchSelfAdjointEig": false, + "tf.raw_ops.BatchSelfAdjointEigV2": false, + "tf.raw_ops.BatchSvd": false, + "tf.raw_ops.BatchToSpace": false, + "tf.raw_ops.BatchToSpaceND": false, + "tf.raw_ops.BesselI0e": false, + "tf.raw_ops.BesselI1e": false, + "tf.raw_ops.Betainc": false, + "tf.raw_ops.BiasAdd": false, + "tf.raw_ops.BiasAddGrad": false, + "tf.raw_ops.BiasAddV1": false, + "tf.raw_ops.Bincount": false, + "tf.raw_ops.Bitcast": false, + "tf.raw_ops.BitwiseAnd": false, + "tf.raw_ops.BitwiseOr": false, + "tf.raw_ops.BitwiseXor": false, + "tf.raw_ops.BlockLSTM": false, + "tf.raw_ops.BlockLSTMGrad": false, + "tf.raw_ops.BlockLSTMGradV2": false, + "tf.raw_ops.BlockLSTMV2": false, + "tf.raw_ops.BoostedTreesAggregateStats": false, + "tf.raw_ops.BoostedTreesBucketize": false, + "tf.raw_ops.BoostedTreesCalculateBestFeatureSplit": false, + "tf.raw_ops.BoostedTreesCalculateBestFeatureSplitV2": 
false, + "tf.raw_ops.BoostedTreesCalculateBestGainsPerFeature": false, + "tf.raw_ops.BoostedTreesCenterBias": false, + "tf.raw_ops.BoostedTreesCreateEnsemble": false, + "tf.raw_ops.BoostedTreesCreateQuantileStreamResource": false, + "tf.raw_ops.BoostedTreesDeserializeEnsemble": false, + "tf.raw_ops.BoostedTreesEnsembleResourceHandleOp": false, + "tf.raw_ops.BoostedTreesExampleDebugOutputs": false, + "tf.raw_ops.BoostedTreesFlushQuantileSummaries": false, + "tf.raw_ops.BoostedTreesGetEnsembleStates": false, + "tf.raw_ops.BoostedTreesMakeQuantileSummaries": false, + "tf.raw_ops.BoostedTreesMakeStatsSummary": false, + "tf.raw_ops.BoostedTreesPredict": false, + "tf.raw_ops.BoostedTreesQuantileStreamResourceAddSummaries": false, + "tf.raw_ops.BoostedTreesQuantileStreamResourceDeserialize": false, + "tf.raw_ops.BoostedTreesQuantileStreamResourceFlush": false, + "tf.raw_ops.BoostedTreesQuantileStreamResourceGetBucketBoundaries": false, + "tf.raw_ops.BoostedTreesQuantileStreamResourceHandleOp": false, + "tf.raw_ops.BoostedTreesSerializeEnsemble": false, + "tf.raw_ops.BoostedTreesSparseAggregateStats": false, + "tf.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit": false, + "tf.raw_ops.BoostedTreesTrainingPredict": false, + "tf.raw_ops.BoostedTreesUpdateEnsemble": false, + "tf.raw_ops.BoostedTreesUpdateEnsembleV2": false, + "tf.raw_ops.BroadcastArgs": false, + "tf.raw_ops.BroadcastGradientArgs": false, + "tf.raw_ops.BroadcastTo": false, + "tf.raw_ops.Bucketize": false, + "tf.raw_ops.BytesProducedStatsDataset": false, + "tf.raw_ops.CSRSparseMatrixComponents": false, + "tf.raw_ops.CSRSparseMatrixToDense": false, + "tf.raw_ops.CSRSparseMatrixToSparseTensor": false, + "tf.raw_ops.CSVDataset": false, + "tf.raw_ops.CTCBeamSearchDecoder": false, + "tf.raw_ops.CTCGreedyDecoder": false, + "tf.raw_ops.CTCLoss": false, + "tf.raw_ops.CTCLossV2": false, + "tf.raw_ops.CacheDataset": false, + "tf.raw_ops.CacheDatasetV2": false, + "tf.raw_ops.Case": false, + "tf.raw_ops.Cast": false, + "tf.raw_ops.Ceil": false, + "tf.raw_ops.CheckNumerics": false, + "tf.raw_ops.CheckNumericsV2": false, + "tf.raw_ops.Cholesky": false, + "tf.raw_ops.CholeskyGrad": false, + "tf.raw_ops.ChooseFastestBranchDataset": false, + "tf.raw_ops.ChooseFastestDataset": false, + "tf.raw_ops.ClipByValue": false, + "tf.raw_ops.CloseSummaryWriter": false, + "tf.raw_ops.CollectiveBcastRecv": false, + "tf.raw_ops.CollectiveBcastSend": false, + "tf.raw_ops.CollectiveGather": false, + "tf.raw_ops.CollectivePermute": false, + "tf.raw_ops.CollectiveReduce": false, + "tf.raw_ops.CombinedNonMaxSuppression": false, + "tf.raw_ops.CompareAndBitpack": false, + "tf.raw_ops.Complex": false, + "tf.raw_ops.ComplexAbs": false, + "tf.raw_ops.ComputeAccidentalHits": false, + "tf.raw_ops.Concat": false, + "tf.raw_ops.ConcatOffset": false, + "tf.raw_ops.ConcatV2": false, + "tf.raw_ops.ConcatenateDataset": false, + "tf.raw_ops.ConditionalAccumulator": false, + "tf.raw_ops.ConfigureDistributedTPU": false, + "tf.raw_ops.ConfigureTPUEmbedding": false, + "tf.raw_ops.Conj": false, + "tf.raw_ops.ConjugateTranspose": false, + "tf.raw_ops.Const": false, + "tf.raw_ops.ConsumeMutexLock": false, + "tf.raw_ops.ControlTrigger": false, + "tf.raw_ops.Conv2D": false, + "tf.raw_ops.Conv2DBackpropFilter": false, + "tf.raw_ops.Conv2DBackpropInput": false, + "tf.raw_ops.Conv3D": false, + "tf.raw_ops.Conv3DBackpropFilter": false, + "tf.raw_ops.Conv3DBackpropFilterV2": false, + "tf.raw_ops.Conv3DBackpropInput": false, + "tf.raw_ops.Conv3DBackpropInputV2": false, + "tf.raw_ops.Copy": false, 
+ "tf.raw_ops.CopyHost": false, + "tf.raw_ops.Cos": false, + "tf.raw_ops.Cosh": false, + "tf.raw_ops.CountUpTo": false, + "tf.raw_ops.CreateSummaryDbWriter": false, + "tf.raw_ops.CreateSummaryFileWriter": false, + "tf.raw_ops.CropAndResize": false, + "tf.raw_ops.CropAndResizeGradBoxes": false, + "tf.raw_ops.CropAndResizeGradImage": false, + "tf.raw_ops.Cross": false, + "tf.raw_ops.CrossReplicaSum": false, + "tf.raw_ops.CudnnRNN": false, + "tf.raw_ops.CudnnRNNBackprop": false, + "tf.raw_ops.CudnnRNNBackpropV2": false, + "tf.raw_ops.CudnnRNNBackpropV3": false, + "tf.raw_ops.CudnnRNNCanonicalToParams": false, + "tf.raw_ops.CudnnRNNCanonicalToParamsV2": false, + "tf.raw_ops.CudnnRNNParamsSize": false, + "tf.raw_ops.CudnnRNNParamsToCanonical": false, + "tf.raw_ops.CudnnRNNParamsToCanonicalV2": false, + "tf.raw_ops.CudnnRNNV2": false, + "tf.raw_ops.CudnnRNNV3": false, + "tf.raw_ops.Cumprod": false, + "tf.raw_ops.Cumsum": false, + "tf.raw_ops.CumulativeLogsumexp": false, + "tf.raw_ops.DataFormatDimMap": false, + "tf.raw_ops.DataFormatVecPermute": false, + "tf.raw_ops.DatasetCardinality": false, + "tf.raw_ops.DatasetFromGraph": false, + "tf.raw_ops.DatasetToGraph": false, + "tf.raw_ops.DatasetToGraphV2": false, + "tf.raw_ops.DatasetToSingleElement": false, + "tf.raw_ops.DatasetToTFRecord": false, + "tf.raw_ops.Dawsn": false, + "tf.raw_ops.DebugGradientIdentity": false, + "tf.raw_ops.DebugGradientRefIdentity": false, + "tf.raw_ops.DebugIdentity": false, + "tf.raw_ops.DebugIdentityV2": false, + "tf.raw_ops.DebugNanCount": false, + "tf.raw_ops.DebugNumericSummary": false, + "tf.raw_ops.DebugNumericSummaryV2": false, + "tf.raw_ops.DecodeAndCropJpeg": false, + "tf.raw_ops.DecodeBase64": false, + "tf.raw_ops.DecodeBmp": false, + "tf.raw_ops.DecodeCSV": false, + "tf.raw_ops.DecodeCompressed": false, + "tf.raw_ops.DecodeGif": false, + "tf.raw_ops.DecodeJSONExample": false, + "tf.raw_ops.DecodeJpeg": false, + "tf.raw_ops.DecodePaddedRaw": false, + "tf.raw_ops.DecodePng": false, + "tf.raw_ops.DecodeProtoV2": false, + "tf.raw_ops.DecodeRaw": false, + "tf.raw_ops.DecodeWav": false, + "tf.raw_ops.DeepCopy": false, + "tf.raw_ops.DeleteIterator": false, + "tf.raw_ops.DeleteMemoryCache": false, + "tf.raw_ops.DeleteMultiDeviceIterator": false, + "tf.raw_ops.DeleteRandomSeedGenerator": false, + "tf.raw_ops.DeleteSessionTensor": false, + "tf.raw_ops.DenseToCSRSparseMatrix": false, + "tf.raw_ops.DenseToDenseSetOperation": false, + "tf.raw_ops.DenseToSparseBatchDataset": false, + "tf.raw_ops.DenseToSparseSetOperation": false, + "tf.raw_ops.DepthToSpace": false, + "tf.raw_ops.DepthwiseConv2dNative": false, + "tf.raw_ops.DepthwiseConv2dNativeBackpropFilter": false, + "tf.raw_ops.DepthwiseConv2dNativeBackpropInput": false, + "tf.raw_ops.Dequantize": false, + "tf.raw_ops.DeserializeIterator": false, + "tf.raw_ops.DeserializeManySparse": false, + "tf.raw_ops.DeserializeSparse": false, + "tf.raw_ops.DestroyResourceOp": false, + "tf.raw_ops.DestroyTemporaryVariable": false, + "tf.raw_ops.Diag": false, + "tf.raw_ops.DiagPart": false, + "tf.raw_ops.Digamma": false, + "tf.raw_ops.Dilation2D": false, + "tf.raw_ops.Dilation2DBackpropFilter": false, + "tf.raw_ops.Dilation2DBackpropInput": false, + "tf.raw_ops.DirectedInterleaveDataset": false, + "tf.raw_ops.Div": false, + "tf.raw_ops.DivNoNan": false, + "tf.raw_ops.DrawBoundingBoxes": false, + "tf.raw_ops.DrawBoundingBoxesV2": false, + "tf.raw_ops.DummyMemoryCache": false, + "tf.raw_ops.DynamicPartition": false, + "tf.raw_ops.DynamicStitch": false, + "tf.raw_ops.EagerPyFunc": 
false, + "tf.raw_ops.EditDistance": false, + "tf.raw_ops.Eig": false, + "tf.raw_ops.Einsum": false, + "tf.raw_ops.Elu": false, + "tf.raw_ops.EluGrad": false, + "tf.raw_ops.Empty": false, + "tf.raw_ops.EmptyTensorList": false, + "tf.raw_ops.EncodeBase64": false, + "tf.raw_ops.EncodeJpeg": false, + "tf.raw_ops.EncodeJpegVariableQuality": false, + "tf.raw_ops.EncodePng": false, + "tf.raw_ops.EncodeProto": false, + "tf.raw_ops.EncodeWav": false, + "tf.raw_ops.EnqueueTPUEmbeddingIntegerBatch": false, + "tf.raw_ops.EnqueueTPUEmbeddingSparseBatch": false, + "tf.raw_ops.EnqueueTPUEmbeddingSparseTensorBatch": false, + "tf.raw_ops.EnsureShape": false, + "tf.raw_ops.Enter": false, + "tf.raw_ops.Equal": false, + "tf.raw_ops.Erf": false, + "tf.raw_ops.Erfc": false, + "tf.raw_ops.Erfinv": false, + "tf.raw_ops.EuclideanNorm": false, + "tf.raw_ops.Exit": false, + "tf.raw_ops.Exp": false, + "tf.raw_ops.ExpandDims": false, + "tf.raw_ops.ExperimentalAssertNextDataset": false, + "tf.raw_ops.ExperimentalAutoShardDataset": false, + "tf.raw_ops.ExperimentalBytesProducedStatsDataset": false, + "tf.raw_ops.ExperimentalCSVDataset": false, + "tf.raw_ops.ExperimentalChooseFastestDataset": false, + "tf.raw_ops.ExperimentalDatasetCardinality": false, + "tf.raw_ops.ExperimentalDatasetToTFRecord": false, + "tf.raw_ops.ExperimentalDenseToSparseBatchDataset": false, + "tf.raw_ops.ExperimentalDirectedInterleaveDataset": false, + "tf.raw_ops.ExperimentalGroupByReducerDataset": false, + "tf.raw_ops.ExperimentalGroupByWindowDataset": false, + "tf.raw_ops.ExperimentalIgnoreErrorsDataset": false, + "tf.raw_ops.ExperimentalIteratorGetDevice": false, + "tf.raw_ops.ExperimentalLMDBDataset": false, + "tf.raw_ops.ExperimentalLatencyStatsDataset": false, + "tf.raw_ops.ExperimentalMapAndBatchDataset": false, + "tf.raw_ops.ExperimentalMapDataset": false, + "tf.raw_ops.ExperimentalMatchingFilesDataset": false, + "tf.raw_ops.ExperimentalMaxIntraOpParallelismDataset": false, + "tf.raw_ops.ExperimentalNonSerializableDataset": false, + "tf.raw_ops.ExperimentalParallelInterleaveDataset": false, + "tf.raw_ops.ExperimentalParseExampleDataset": false, + "tf.raw_ops.ExperimentalPrivateThreadPoolDataset": false, + "tf.raw_ops.ExperimentalRandomDataset": false, + "tf.raw_ops.ExperimentalRebatchDataset": false, + "tf.raw_ops.ExperimentalScanDataset": false, + "tf.raw_ops.ExperimentalSetStatsAggregatorDataset": false, + "tf.raw_ops.ExperimentalSleepDataset": false, + "tf.raw_ops.ExperimentalSlidingWindowDataset": false, + "tf.raw_ops.ExperimentalSqlDataset": false, + "tf.raw_ops.ExperimentalStatsAggregatorHandle": false, + "tf.raw_ops.ExperimentalStatsAggregatorSummary": false, + "tf.raw_ops.ExperimentalTakeWhileDataset": false, + "tf.raw_ops.ExperimentalThreadPoolDataset": false, + "tf.raw_ops.ExperimentalThreadPoolHandle": false, + "tf.raw_ops.ExperimentalUnbatchDataset": false, + "tf.raw_ops.ExperimentalUniqueDataset": false, + "tf.raw_ops.Expint": false, + "tf.raw_ops.Expm1": false, + "tf.raw_ops.ExtractGlimpse": false, + "tf.raw_ops.ExtractImagePatches": false, + "tf.raw_ops.ExtractJpegShape": false, + "tf.raw_ops.ExtractVolumePatches": false, + "tf.raw_ops.FFT": false, + "tf.raw_ops.FFT2D": false, + "tf.raw_ops.FFT3D": false, + "tf.raw_ops.FIFOQueue": false, + "tf.raw_ops.FIFOQueueV2": false, + "tf.raw_ops.Fact": false, + "tf.raw_ops.FakeParam": false, + "tf.raw_ops.FakeQuantWithMinMaxArgs": false, + "tf.raw_ops.FakeQuantWithMinMaxArgsGradient": false, + "tf.raw_ops.FakeQuantWithMinMaxVars": false, + 
"tf.raw_ops.FakeQuantWithMinMaxVarsGradient": false, + "tf.raw_ops.FakeQuantWithMinMaxVarsPerChannel": false, + "tf.raw_ops.FakeQuantWithMinMaxVarsPerChannelGradient": false, + "tf.raw_ops.FakeQueue": false, + "tf.raw_ops.Fill": false, + "tf.raw_ops.FilterByLastComponentDataset": false, + "tf.raw_ops.FilterDataset": false, + "tf.raw_ops.Fingerprint": false, + "tf.raw_ops.FixedLengthRecordDataset": false, + "tf.raw_ops.FixedLengthRecordDatasetV2": false, + "tf.raw_ops.FixedLengthRecordReader": false, + "tf.raw_ops.FixedLengthRecordReaderV2": false, + "tf.raw_ops.FixedUnigramCandidateSampler": false, + "tf.raw_ops.FlatMapDataset": false, + "tf.raw_ops.Floor": false, + "tf.raw_ops.FloorDiv": false, + "tf.raw_ops.FloorMod": false, + "tf.raw_ops.FlushSummaryWriter": false, + "tf.raw_ops.For": false, + "tf.raw_ops.FractionalAvgPool": false, + "tf.raw_ops.FractionalAvgPoolGrad": false, + "tf.raw_ops.FractionalMaxPool": false, + "tf.raw_ops.FractionalMaxPoolGrad": false, + "tf.raw_ops.FresnelCos": false, + "tf.raw_ops.FresnelSin": false, + "tf.raw_ops.FusedBatchNorm": false, + "tf.raw_ops.FusedBatchNormGrad": false, + "tf.raw_ops.FusedBatchNormGradV2": false, + "tf.raw_ops.FusedBatchNormGradV3": false, + "tf.raw_ops.FusedBatchNormV2": false, + "tf.raw_ops.FusedBatchNormV3": false, + "tf.raw_ops.FusedPadConv2D": false, + "tf.raw_ops.FusedResizeAndPadConv2D": false, + "tf.raw_ops.GRUBlockCell": false, + "tf.raw_ops.GRUBlockCellGrad": false, + "tf.raw_ops.Gather": false, + "tf.raw_ops.GatherNd": false, + "tf.raw_ops.GatherV2": false, + "tf.raw_ops.GenerateBoundingBoxProposals": false, + "tf.raw_ops.GenerateVocabRemapping": false, + "tf.raw_ops.GeneratorDataset": false, + "tf.raw_ops.GetSessionHandle": false, + "tf.raw_ops.GetSessionHandleV2": false, + "tf.raw_ops.GetSessionTensor": false, + "tf.raw_ops.Greater": false, + "tf.raw_ops.GreaterEqual": false, + "tf.raw_ops.GroupByReducerDataset": false, + "tf.raw_ops.GroupByWindowDataset": false, + "tf.raw_ops.GuaranteeConst": false, + "tf.raw_ops.HSVToRGB": false, + "tf.raw_ops.HashTable": false, + "tf.raw_ops.HashTableV2": false, + "tf.raw_ops.HistogramFixedWidth": false, + "tf.raw_ops.HistogramSummary": false, + "tf.raw_ops.IFFT": false, + "tf.raw_ops.IFFT2D": false, + "tf.raw_ops.IFFT3D": false, + "tf.raw_ops.IRFFT": false, + "tf.raw_ops.IRFFT2D": false, + "tf.raw_ops.IRFFT3D": false, + "tf.raw_ops.Identity": false, + "tf.raw_ops.IdentityN": false, + "tf.raw_ops.IdentityReader": false, + "tf.raw_ops.IdentityReaderV2": false, + "tf.raw_ops.If": false, + "tf.raw_ops.Igamma": false, + "tf.raw_ops.IgammaGradA": false, + "tf.raw_ops.Igammac": false, + "tf.raw_ops.IgnoreErrorsDataset": false, + "tf.raw_ops.Imag": false, + "tf.raw_ops.ImageProjectiveTransformV2": false, + "tf.raw_ops.ImageSummary": false, + "tf.raw_ops.ImmutableConst": false, + "tf.raw_ops.ImportEvent": false, + "tf.raw_ops.InTopK": false, + "tf.raw_ops.InTopKV2": false, + "tf.raw_ops.InfeedDequeue": false, + "tf.raw_ops.InfeedDequeueTuple": false, + "tf.raw_ops.InfeedEnqueue": false, + "tf.raw_ops.InfeedEnqueuePrelinearizedBuffer": false, + "tf.raw_ops.InfeedEnqueueTuple": false, + "tf.raw_ops.InitializeTable": false, + "tf.raw_ops.InitializeTableFromTextFile": false, + "tf.raw_ops.InitializeTableFromTextFileV2": false, + "tf.raw_ops.InitializeTableV2": false, + "tf.raw_ops.InplaceAdd": false, + "tf.raw_ops.InplaceSub": false, + "tf.raw_ops.InplaceUpdate": false, + "tf.raw_ops.InterleaveDataset": false, + "tf.raw_ops.Inv": false, + "tf.raw_ops.InvGrad": false, + "tf.raw_ops.Invert": false, 
+ "tf.raw_ops.InvertPermutation": false, + "tf.raw_ops.IsBoostedTreesEnsembleInitialized": false, + "tf.raw_ops.IsBoostedTreesQuantileStreamResourceInitialized": false, + "tf.raw_ops.IsFinite": false, + "tf.raw_ops.IsInf": false, + "tf.raw_ops.IsNan": false, + "tf.raw_ops.IsVariableInitialized": false, + "tf.raw_ops.Iterator": false, + "tf.raw_ops.IteratorFromStringHandle": false, + "tf.raw_ops.IteratorFromStringHandleV2": false, + "tf.raw_ops.IteratorGetDevice": false, + "tf.raw_ops.IteratorGetNext": false, + "tf.raw_ops.IteratorGetNextAsOptional": false, + "tf.raw_ops.IteratorGetNextSync": false, + "tf.raw_ops.IteratorToStringHandle": false, + "tf.raw_ops.IteratorV2": false, + "tf.raw_ops.L2Loss": false, + "tf.raw_ops.LMDBDataset": false, + "tf.raw_ops.LMDBReader": false, + "tf.raw_ops.LRN": false, + "tf.raw_ops.LRNGrad": false, + "tf.raw_ops.LSTMBlockCell": false, + "tf.raw_ops.LSTMBlockCellGrad": false, + "tf.raw_ops.LatencyStatsDataset": false, + "tf.raw_ops.LeakyRelu": false, + "tf.raw_ops.LeakyReluGrad": false, + "tf.raw_ops.LearnedUnigramCandidateSampler": false, + "tf.raw_ops.LeftShift": false, + "tf.raw_ops.LegacyParallelInterleaveDatasetV2": false, + "tf.raw_ops.Less": false, + "tf.raw_ops.LessEqual": false, + "tf.raw_ops.Lgamma": false, + "tf.raw_ops.LinSpace": false, + "tf.raw_ops.ListDiff": false, + "tf.raw_ops.LoadAndRemapMatrix": false, + "tf.raw_ops.LoadTPUEmbeddingADAMParameters": false, + "tf.raw_ops.LoadTPUEmbeddingADAMParametersGradAccumDebug": false, + "tf.raw_ops.LoadTPUEmbeddingAdadeltaParameters": false, + "tf.raw_ops.LoadTPUEmbeddingAdadeltaParametersGradAccumDebug": false, + "tf.raw_ops.LoadTPUEmbeddingAdagradParameters": false, + "tf.raw_ops.LoadTPUEmbeddingAdagradParametersGradAccumDebug": false, + "tf.raw_ops.LoadTPUEmbeddingCenteredRMSPropParameters": false, + "tf.raw_ops.LoadTPUEmbeddingFTRLParameters": false, + "tf.raw_ops.LoadTPUEmbeddingFTRLParametersGradAccumDebug": false, + "tf.raw_ops.LoadTPUEmbeddingMDLAdagradLightParameters": false, + "tf.raw_ops.LoadTPUEmbeddingMomentumParameters": false, + "tf.raw_ops.LoadTPUEmbeddingMomentumParametersGradAccumDebug": false, + "tf.raw_ops.LoadTPUEmbeddingProximalAdagradParameters": false, + "tf.raw_ops.LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug": false, + "tf.raw_ops.LoadTPUEmbeddingRMSPropParameters": false, + "tf.raw_ops.LoadTPUEmbeddingRMSPropParametersGradAccumDebug": false, + "tf.raw_ops.LoadTPUEmbeddingStochasticGradientDescentParameters": false, + "tf.raw_ops.Log": false, + "tf.raw_ops.Log1p": false, + "tf.raw_ops.LogMatrixDeterminant": false, + "tf.raw_ops.LogSoftmax": false, + "tf.raw_ops.LogUniformCandidateSampler": false, + "tf.raw_ops.LogicalAnd": false, + "tf.raw_ops.LogicalNot": false, + "tf.raw_ops.LogicalOr": false, + "tf.raw_ops.LookupTableExport": false, + "tf.raw_ops.LookupTableExportV2": false, + "tf.raw_ops.LookupTableFind": false, + "tf.raw_ops.LookupTableFindV2": false, + "tf.raw_ops.LookupTableImport": false, + "tf.raw_ops.LookupTableImportV2": false, + "tf.raw_ops.LookupTableInsert": false, + "tf.raw_ops.LookupTableInsertV2": false, + "tf.raw_ops.LookupTableRemoveV2": false, + "tf.raw_ops.LookupTableSize": false, + "tf.raw_ops.LookupTableSizeV2": false, + "tf.raw_ops.LoopCond": false, + "tf.raw_ops.LowerBound": false, + "tf.raw_ops.Lu": false, + "tf.raw_ops.MakeIterator": false, + "tf.raw_ops.MapAndBatchDataset": false, + "tf.raw_ops.MapClear": false, + "tf.raw_ops.MapDataset": false, + "tf.raw_ops.MapDefun": false, + "tf.raw_ops.MapIncompleteSize": false, + 
"tf.raw_ops.MapPeek": false, + "tf.raw_ops.MapSize": false, + "tf.raw_ops.MapStage": false, + "tf.raw_ops.MapUnstage": false, + "tf.raw_ops.MapUnstageNoKey": false, + "tf.raw_ops.MatMul": false, + "tf.raw_ops.MatchingFiles": false, + "tf.raw_ops.MatchingFilesDataset": false, + "tf.raw_ops.MatrixBandPart": false, + "tf.raw_ops.MatrixDeterminant": false, + "tf.raw_ops.MatrixDiag": false, + "tf.raw_ops.MatrixDiagPart": false, + "tf.raw_ops.MatrixDiagPartV2": false, + "tf.raw_ops.MatrixDiagPartV3": false, + "tf.raw_ops.MatrixDiagV2": false, + "tf.raw_ops.MatrixDiagV3": false, + "tf.raw_ops.MatrixExponential": false, + "tf.raw_ops.MatrixInverse": false, + "tf.raw_ops.MatrixLogarithm": false, + "tf.raw_ops.MatrixSetDiag": false, + "tf.raw_ops.MatrixSetDiagV2": false, + "tf.raw_ops.MatrixSetDiagV3": false, + "tf.raw_ops.MatrixSolve": false, + "tf.raw_ops.MatrixSolveLs": false, + "tf.raw_ops.MatrixSquareRoot": false, + "tf.raw_ops.MatrixTriangularSolve": false, + "tf.raw_ops.Max": false, + "tf.raw_ops.MaxIntraOpParallelismDataset": false, + "tf.raw_ops.MaxPool": false, + "tf.raw_ops.MaxPool3D": false, + "tf.raw_ops.MaxPool3DGrad": false, + "tf.raw_ops.MaxPool3DGradGrad": false, + "tf.raw_ops.MaxPoolGrad": false, + "tf.raw_ops.MaxPoolGradGrad": false, + "tf.raw_ops.MaxPoolGradGradV2": false, + "tf.raw_ops.MaxPoolGradGradWithArgmax": false, + "tf.raw_ops.MaxPoolGradV2": false, + "tf.raw_ops.MaxPoolGradWithArgmax": false, + "tf.raw_ops.MaxPoolV2": false, + "tf.raw_ops.MaxPoolWithArgmax": false, + "tf.raw_ops.Maximum": false, + "tf.raw_ops.Mean": false, + "tf.raw_ops.Merge": false, + "tf.raw_ops.MergeSummary": false, + "tf.raw_ops.MergeV2Checkpoints": false, + "tf.raw_ops.Mfcc": false, + "tf.raw_ops.Min": false, + "tf.raw_ops.Minimum": false, + "tf.raw_ops.MirrorPad": false, + "tf.raw_ops.MirrorPadGrad": false, + "tf.raw_ops.Mod": false, + "tf.raw_ops.ModelDataset": false, + "tf.raw_ops.Mul": false, + "tf.raw_ops.MulNoNan": false, + "tf.raw_ops.MultiDeviceIterator": false, + "tf.raw_ops.MultiDeviceIteratorFromStringHandle": false, + "tf.raw_ops.MultiDeviceIteratorGetNextFromShard": false, + "tf.raw_ops.MultiDeviceIteratorInit": false, + "tf.raw_ops.MultiDeviceIteratorToStringHandle": false, + "tf.raw_ops.Multinomial": false, + "tf.raw_ops.MutableDenseHashTable": false, + "tf.raw_ops.MutableDenseHashTableV2": false, + "tf.raw_ops.MutableHashTable": false, + "tf.raw_ops.MutableHashTableOfTensors": false, + "tf.raw_ops.MutableHashTableOfTensorsV2": false, + "tf.raw_ops.MutableHashTableV2": false, + "tf.raw_ops.MutexLock": false, + "tf.raw_ops.MutexV2": false, + "tf.raw_ops.NcclAllReduce": false, + "tf.raw_ops.NcclBroadcast": false, + "tf.raw_ops.NcclReduce": false, + "tf.raw_ops.Ndtri": false, + "tf.raw_ops.Neg": false, + "tf.raw_ops.NextAfter": false, + "tf.raw_ops.NextIteration": false, + "tf.raw_ops.NoOp": false, + "tf.raw_ops.NonDeterministicInts": false, + "tf.raw_ops.NonMaxSuppression": false, + "tf.raw_ops.NonMaxSuppressionV2": false, + "tf.raw_ops.NonMaxSuppressionV3": false, + "tf.raw_ops.NonMaxSuppressionV4": false, + "tf.raw_ops.NonMaxSuppressionV5": false, + "tf.raw_ops.NonMaxSuppressionWithOverlaps": false, + "tf.raw_ops.NonSerializableDataset": false, + "tf.raw_ops.NotEqual": false, + "tf.raw_ops.NthElement": false, + "tf.raw_ops.OneHot": false, + "tf.raw_ops.OneShotIterator": false, + "tf.raw_ops.OnesLike": false, + "tf.raw_ops.OptimizeDataset": false, + "tf.raw_ops.OptionalFromValue": false, + "tf.raw_ops.OptionalGetValue": false, + "tf.raw_ops.OptionalHasValue": false, + 
"tf.raw_ops.OptionalNone": false, + "tf.raw_ops.OrderedMapClear": false, + "tf.raw_ops.OrderedMapIncompleteSize": false, + "tf.raw_ops.OrderedMapPeek": false, + "tf.raw_ops.OrderedMapSize": false, + "tf.raw_ops.OrderedMapStage": false, + "tf.raw_ops.OrderedMapUnstage": false, + "tf.raw_ops.OrderedMapUnstageNoKey": false, + "tf.raw_ops.OutfeedDequeue": false, + "tf.raw_ops.OutfeedDequeueTuple": false, + "tf.raw_ops.OutfeedEnqueue": false, + "tf.raw_ops.OutfeedEnqueueTuple": false, + "tf.raw_ops.Pack": false, + "tf.raw_ops.Pad": false, + "tf.raw_ops.PadV2": false, + "tf.raw_ops.PaddedBatchDataset": false, + "tf.raw_ops.PaddedBatchDatasetV2": false, + "tf.raw_ops.PaddingFIFOQueue": false, + "tf.raw_ops.PaddingFIFOQueueV2": false, + "tf.raw_ops.ParallelConcat": false, + "tf.raw_ops.ParallelDynamicStitch": false, + "tf.raw_ops.ParallelInterleaveDataset": false, + "tf.raw_ops.ParallelInterleaveDatasetV2": false, + "tf.raw_ops.ParallelInterleaveDatasetV3": false, + "tf.raw_ops.ParallelInterleaveDatasetV4": false, + "tf.raw_ops.ParallelMapDataset": false, + "tf.raw_ops.ParallelMapDatasetV2": false, + "tf.raw_ops.ParameterizedTruncatedNormal": false, + "tf.raw_ops.ParseExample": false, + "tf.raw_ops.ParseExampleDataset": false, + "tf.raw_ops.ParseExampleDatasetV2": false, + "tf.raw_ops.ParseExampleV2": false, + "tf.raw_ops.ParseSequenceExample": false, + "tf.raw_ops.ParseSequenceExampleV2": false, + "tf.raw_ops.ParseSingleExample": false, + "tf.raw_ops.ParseSingleSequenceExample": false, + "tf.raw_ops.ParseTensor": false, + "tf.raw_ops.PartitionedCall": false, + "tf.raw_ops.Placeholder": false, + "tf.raw_ops.PlaceholderV2": false, + "tf.raw_ops.PlaceholderWithDefault": false, + "tf.raw_ops.Polygamma": false, + "tf.raw_ops.PopulationCount": false, + "tf.raw_ops.Pow": false, + "tf.raw_ops.PrefetchDataset": false, + "tf.raw_ops.Prelinearize": false, + "tf.raw_ops.PrelinearizeTuple": false, + "tf.raw_ops.PreventGradient": false, + "tf.raw_ops.Print": false, + "tf.raw_ops.PrintV2": false, + "tf.raw_ops.PriorityQueue": false, + "tf.raw_ops.PriorityQueueV2": false, + "tf.raw_ops.PrivateThreadPoolDataset": false, + "tf.raw_ops.Prod": false, + "tf.raw_ops.PyFunc": false, + "tf.raw_ops.PyFuncStateless": false, + "tf.raw_ops.Qr": false, + "tf.raw_ops.QuantizeAndDequantize": false, + "tf.raw_ops.QuantizeAndDequantizeV2": false, + "tf.raw_ops.QuantizeAndDequantizeV3": false, + "tf.raw_ops.QuantizeDownAndShrinkRange": false, + "tf.raw_ops.QuantizeV2": false, + "tf.raw_ops.QuantizedAdd": false, + "tf.raw_ops.QuantizedAvgPool": false, + "tf.raw_ops.QuantizedBatchNormWithGlobalNormalization": false, + "tf.raw_ops.QuantizedBiasAdd": false, + "tf.raw_ops.QuantizedConcat": false, + "tf.raw_ops.QuantizedConv2D": false, + "tf.raw_ops.QuantizedConv2DAndRelu": false, + "tf.raw_ops.QuantizedConv2DAndReluAndRequantize": false, + "tf.raw_ops.QuantizedConv2DAndRequantize": false, + "tf.raw_ops.QuantizedConv2DPerChannel": false, + "tf.raw_ops.QuantizedConv2DWithBias": false, + "tf.raw_ops.QuantizedConv2DWithBiasAndRelu": false, + "tf.raw_ops.QuantizedConv2DWithBiasAndReluAndRequantize": false, + "tf.raw_ops.QuantizedConv2DWithBiasAndRequantize": false, + "tf.raw_ops.QuantizedConv2DWithBiasSignedSumAndReluAndRequantize": false, + "tf.raw_ops.QuantizedConv2DWithBiasSumAndRelu": false, + "tf.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize": false, + "tf.raw_ops.QuantizedDepthwiseConv2D": false, + "tf.raw_ops.QuantizedDepthwiseConv2DWithBias": false, + "tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndRelu": false, + 
"tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize": false, + "tf.raw_ops.QuantizedInstanceNorm": false, + "tf.raw_ops.QuantizedMatMul": false, + "tf.raw_ops.QuantizedMatMulWithBias": false, + "tf.raw_ops.QuantizedMatMulWithBiasAndDequantize": false, + "tf.raw_ops.QuantizedMatMulWithBiasAndRelu": false, + "tf.raw_ops.QuantizedMatMulWithBiasAndReluAndRequantize": false, + "tf.raw_ops.QuantizedMatMulWithBiasAndRequantize": false, + "tf.raw_ops.QuantizedMaxPool": false, + "tf.raw_ops.QuantizedMul": false, + "tf.raw_ops.QuantizedRelu": false, + "tf.raw_ops.QuantizedRelu6": false, + "tf.raw_ops.QuantizedReluX": false, + "tf.raw_ops.QuantizedReshape": false, + "tf.raw_ops.QuantizedResizeBilinear": false, + "tf.raw_ops.QueueClose": false, + "tf.raw_ops.QueueCloseV2": false, + "tf.raw_ops.QueueDequeue": false, + "tf.raw_ops.QueueDequeueMany": false, + "tf.raw_ops.QueueDequeueManyV2": false, + "tf.raw_ops.QueueDequeueUpTo": false, + "tf.raw_ops.QueueDequeueUpToV2": false, + "tf.raw_ops.QueueDequeueV2": false, + "tf.raw_ops.QueueEnqueue": false, + "tf.raw_ops.QueueEnqueueMany": false, + "tf.raw_ops.QueueEnqueueManyV2": false, + "tf.raw_ops.QueueEnqueueV2": false, + "tf.raw_ops.QueueIsClosed": false, + "tf.raw_ops.QueueIsClosedV2": false, + "tf.raw_ops.QueueSize": false, + "tf.raw_ops.QueueSizeV2": false, + "tf.raw_ops.RFFT": false, + "tf.raw_ops.RFFT2D": false, + "tf.raw_ops.RFFT3D": false, + "tf.raw_ops.RGBToHSV": false, + "tf.raw_ops.RaggedGather": false, + "tf.raw_ops.RaggedRange": false, + "tf.raw_ops.RaggedTensorFromVariant": false, + "tf.raw_ops.RaggedTensorToSparse": false, + "tf.raw_ops.RaggedTensorToTensor": false, + "tf.raw_ops.RaggedTensorToVariant": false, + "tf.raw_ops.RandomCrop": false, + "tf.raw_ops.RandomDataset": false, + "tf.raw_ops.RandomGamma": false, + "tf.raw_ops.RandomGammaGrad": false, + "tf.raw_ops.RandomPoisson": false, + "tf.raw_ops.RandomPoissonV2": false, + "tf.raw_ops.RandomShuffle": false, + "tf.raw_ops.RandomShuffleQueue": false, + "tf.raw_ops.RandomShuffleQueueV2": false, + "tf.raw_ops.RandomStandardNormal": false, + "tf.raw_ops.RandomUniform": false, + "tf.raw_ops.RandomUniformInt": false, + "tf.raw_ops.Range": false, + "tf.raw_ops.RangeDataset": false, + "tf.raw_ops.Rank": false, + "tf.raw_ops.ReadFile": false, + "tf.raw_ops.ReadVariableOp": false, + "tf.raw_ops.ReaderNumRecordsProduced": false, + "tf.raw_ops.ReaderNumRecordsProducedV2": false, + "tf.raw_ops.ReaderNumWorkUnitsCompleted": false, + "tf.raw_ops.ReaderNumWorkUnitsCompletedV2": false, + "tf.raw_ops.ReaderRead": false, + "tf.raw_ops.ReaderReadUpTo": false, + "tf.raw_ops.ReaderReadUpToV2": false, + "tf.raw_ops.ReaderReadV2": false, + "tf.raw_ops.ReaderReset": false, + "tf.raw_ops.ReaderResetV2": false, + "tf.raw_ops.ReaderRestoreState": false, + "tf.raw_ops.ReaderRestoreStateV2": false, + "tf.raw_ops.ReaderSerializeState": false, + "tf.raw_ops.ReaderSerializeStateV2": false, + "tf.raw_ops.Real": false, + "tf.raw_ops.RealDiv": false, + "tf.raw_ops.RebatchDataset": false, + "tf.raw_ops.Reciprocal": false, + "tf.raw_ops.ReciprocalGrad": false, + "tf.raw_ops.RecordInput": false, + "tf.raw_ops.Recv": false, + "tf.raw_ops.RecvTPUEmbeddingActivations": false, + "tf.raw_ops.ReduceDataset": false, + "tf.raw_ops.ReduceJoin": false, + "tf.raw_ops.RefEnter": false, + "tf.raw_ops.RefExit": false, + "tf.raw_ops.RefIdentity": false, + "tf.raw_ops.RefMerge": false, + "tf.raw_ops.RefNextIteration": false, + "tf.raw_ops.RefSelect": false, + "tf.raw_ops.RefSwitch": false, + "tf.raw_ops.RegexFullMatch": false, 
+ "tf.raw_ops.RegexReplace": false, + "tf.raw_ops.Relu": false, + "tf.raw_ops.Relu6": false, + "tf.raw_ops.Relu6Grad": false, + "tf.raw_ops.ReluGrad": false, + "tf.raw_ops.RemoteCall": false, + "tf.raw_ops.RepeatDataset": false, + "tf.raw_ops.RequantizationRange": false, + "tf.raw_ops.RequantizationRangePerChannel": false, + "tf.raw_ops.Requantize": false, + "tf.raw_ops.RequantizePerChannel": false, + "tf.raw_ops.Reshape": false, + "tf.raw_ops.ResizeArea": false, + "tf.raw_ops.ResizeBicubic": false, + "tf.raw_ops.ResizeBicubicGrad": false, + "tf.raw_ops.ResizeBilinear": false, + "tf.raw_ops.ResizeBilinearGrad": false, + "tf.raw_ops.ResizeNearestNeighbor": false, + "tf.raw_ops.ResizeNearestNeighborGrad": false, + "tf.raw_ops.ResourceAccumulatorApplyGradient": false, + "tf.raw_ops.ResourceAccumulatorNumAccumulated": false, + "tf.raw_ops.ResourceAccumulatorSetGlobalStep": false, + "tf.raw_ops.ResourceAccumulatorTakeGradient": false, + "tf.raw_ops.ResourceApplyAdaMax": false, + "tf.raw_ops.ResourceApplyAdadelta": false, + "tf.raw_ops.ResourceApplyAdagrad": false, + "tf.raw_ops.ResourceApplyAdagradDA": false, + "tf.raw_ops.ResourceApplyAdagradV2": false, + "tf.raw_ops.ResourceApplyAdam": false, + "tf.raw_ops.ResourceApplyAdamWithAmsgrad": false, + "tf.raw_ops.ResourceApplyAddSign": false, + "tf.raw_ops.ResourceApplyCenteredRMSProp": false, + "tf.raw_ops.ResourceApplyFtrl": false, + "tf.raw_ops.ResourceApplyFtrlV2": false, + "tf.raw_ops.ResourceApplyGradientDescent": false, + "tf.raw_ops.ResourceApplyKerasMomentum": false, + "tf.raw_ops.ResourceApplyMomentum": false, + "tf.raw_ops.ResourceApplyPowerSign": false, + "tf.raw_ops.ResourceApplyProximalAdagrad": false, + "tf.raw_ops.ResourceApplyProximalGradientDescent": false, + "tf.raw_ops.ResourceApplyRMSProp": false, + "tf.raw_ops.ResourceConditionalAccumulator": false, + "tf.raw_ops.ResourceCountUpTo": false, + "tf.raw_ops.ResourceGather": false, + "tf.raw_ops.ResourceGatherNd": false, + "tf.raw_ops.ResourceScatterAdd": false, + "tf.raw_ops.ResourceScatterDiv": false, + "tf.raw_ops.ResourceScatterMax": false, + "tf.raw_ops.ResourceScatterMin": false, + "tf.raw_ops.ResourceScatterMul": false, + "tf.raw_ops.ResourceScatterNdAdd": false, + "tf.raw_ops.ResourceScatterNdSub": false, + "tf.raw_ops.ResourceScatterNdUpdate": false, + "tf.raw_ops.ResourceScatterSub": false, + "tf.raw_ops.ResourceScatterUpdate": false, + "tf.raw_ops.ResourceSparseApplyAdadelta": false, + "tf.raw_ops.ResourceSparseApplyAdagrad": false, + "tf.raw_ops.ResourceSparseApplyAdagradDA": false, + "tf.raw_ops.ResourceSparseApplyAdagradV2": false, + "tf.raw_ops.ResourceSparseApplyCenteredRMSProp": false, + "tf.raw_ops.ResourceSparseApplyFtrl": false, + "tf.raw_ops.ResourceSparseApplyFtrlV2": false, + "tf.raw_ops.ResourceSparseApplyKerasMomentum": false, + "tf.raw_ops.ResourceSparseApplyMomentum": false, + "tf.raw_ops.ResourceSparseApplyProximalAdagrad": false, + "tf.raw_ops.ResourceSparseApplyProximalGradientDescent": false, + "tf.raw_ops.ResourceSparseApplyRMSProp": false, + "tf.raw_ops.ResourceStridedSliceAssign": false, + "tf.raw_ops.Restore": false, + "tf.raw_ops.RestoreSlice": false, + "tf.raw_ops.RestoreV2": false, + "tf.raw_ops.RetrieveTPUEmbeddingADAMParameters": false, + "tf.raw_ops.RetrieveTPUEmbeddingADAMParametersGradAccumDebug": false, + "tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParameters": false, + "tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug": false, + "tf.raw_ops.RetrieveTPUEmbeddingAdagradParameters": false, + 
"tf.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug": false, + "tf.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters": false, + "tf.raw_ops.RetrieveTPUEmbeddingFTRLParameters": false, + "tf.raw_ops.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug": false, + "tf.raw_ops.RetrieveTPUEmbeddingMDLAdagradLightParameters": false, + "tf.raw_ops.RetrieveTPUEmbeddingMomentumParameters": false, + "tf.raw_ops.RetrieveTPUEmbeddingMomentumParametersGradAccumDebug": false, + "tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters": false, + "tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug": false, + "tf.raw_ops.RetrieveTPUEmbeddingRMSPropParameters": false, + "tf.raw_ops.RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug": false, + "tf.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParameters": false, + "tf.raw_ops.Reverse": false, + "tf.raw_ops.ReverseSequence": false, + "tf.raw_ops.ReverseV2": false, + "tf.raw_ops.RightShift": false, + "tf.raw_ops.Rint": false, + "tf.raw_ops.RngSkip": false, + "tf.raw_ops.Roll": false, + "tf.raw_ops.Round": false, + "tf.raw_ops.Rsqrt": false, + "tf.raw_ops.RsqrtGrad": false, + "tf.raw_ops.SampleDistortedBoundingBox": false, + "tf.raw_ops.SampleDistortedBoundingBoxV2": false, + "tf.raw_ops.SamplingDataset": false, + "tf.raw_ops.Save": false, + "tf.raw_ops.SaveSlices": false, + "tf.raw_ops.SaveV2": false, + "tf.raw_ops.ScalarSummary": false, + "tf.raw_ops.ScaleAndTranslate": false, + "tf.raw_ops.ScaleAndTranslateGrad": false, + "tf.raw_ops.ScanDataset": false, + "tf.raw_ops.ScatterAdd": false, + "tf.raw_ops.ScatterDiv": false, + "tf.raw_ops.ScatterMax": false, + "tf.raw_ops.ScatterMin": false, + "tf.raw_ops.ScatterMul": false, + "tf.raw_ops.ScatterNd": false, + "tf.raw_ops.ScatterNdAdd": false, + "tf.raw_ops.ScatterNdNonAliasingAdd": false, + "tf.raw_ops.ScatterNdSub": false, + "tf.raw_ops.ScatterNdUpdate": false, + "tf.raw_ops.ScatterSub": false, + "tf.raw_ops.ScatterUpdate": false, + "tf.raw_ops.SdcaFprint": false, + "tf.raw_ops.SdcaOptimizer": false, + "tf.raw_ops.SdcaOptimizerV2": false, + "tf.raw_ops.SdcaShrinkL1": false, + "tf.raw_ops.SegmentMax": false, + "tf.raw_ops.SegmentMean": false, + "tf.raw_ops.SegmentMin": false, + "tf.raw_ops.SegmentProd": false, + "tf.raw_ops.SegmentSum": false, + "tf.raw_ops.Select": false, + "tf.raw_ops.SelectV2": false, + "tf.raw_ops.SelfAdjointEig": false, + "tf.raw_ops.SelfAdjointEigV2": false, + "tf.raw_ops.Selu": false, + "tf.raw_ops.SeluGrad": false, + "tf.raw_ops.Send": false, + "tf.raw_ops.SendTPUEmbeddingGradients": false, + "tf.raw_ops.SerializeIterator": false, + "tf.raw_ops.SerializeManySparse": false, + "tf.raw_ops.SerializeSparse": false, + "tf.raw_ops.SerializeTensor": false, + "tf.raw_ops.SetSize": false, + "tf.raw_ops.SetStatsAggregatorDataset": false, + "tf.raw_ops.Shape": false, + "tf.raw_ops.ShapeN": false, + "tf.raw_ops.ShardDataset": false, + "tf.raw_ops.ShardedFilename": false, + "tf.raw_ops.ShardedFilespec": false, + "tf.raw_ops.ShuffleAndRepeatDataset": false, + "tf.raw_ops.ShuffleDataset": false, + "tf.raw_ops.ShuffleDatasetV2": false, + "tf.raw_ops.ShutdownDistributedTPU": false, + "tf.raw_ops.Sigmoid": false, + "tf.raw_ops.SigmoidGrad": false, + "tf.raw_ops.Sign": false, + "tf.raw_ops.Sin": false, + "tf.raw_ops.Sinh": false, + "tf.raw_ops.Size": false, + "tf.raw_ops.SkipDataset": false, + "tf.raw_ops.SleepDataset": false, + "tf.raw_ops.Slice": false, + "tf.raw_ops.SlidingWindowDataset": false, + "tf.raw_ops.Snapshot": false, + "tf.raw_ops.SnapshotDataset": 
false, + "tf.raw_ops.SobolSample": false, + "tf.raw_ops.Softmax": false, + "tf.raw_ops.SoftmaxCrossEntropyWithLogits": false, + "tf.raw_ops.Softplus": false, + "tf.raw_ops.SoftplusGrad": false, + "tf.raw_ops.Softsign": false, + "tf.raw_ops.SoftsignGrad": false, + "tf.raw_ops.SpaceToBatch": false, + "tf.raw_ops.SpaceToBatchND": false, + "tf.raw_ops.SpaceToDepth": false, + "tf.raw_ops.SparseAccumulatorApplyGradient": false, + "tf.raw_ops.SparseAccumulatorTakeGradient": false, + "tf.raw_ops.SparseAdd": false, + "tf.raw_ops.SparseAddGrad": false, + "tf.raw_ops.SparseApplyAdadelta": false, + "tf.raw_ops.SparseApplyAdagrad": false, + "tf.raw_ops.SparseApplyAdagradDA": false, + "tf.raw_ops.SparseApplyAdagradV2": false, + "tf.raw_ops.SparseApplyCenteredRMSProp": false, + "tf.raw_ops.SparseApplyFtrl": false, + "tf.raw_ops.SparseApplyFtrlV2": false, + "tf.raw_ops.SparseApplyMomentum": false, + "tf.raw_ops.SparseApplyProximalAdagrad": false, + "tf.raw_ops.SparseApplyProximalGradientDescent": false, + "tf.raw_ops.SparseApplyRMSProp": false, + "tf.raw_ops.SparseConcat": false, + "tf.raw_ops.SparseConditionalAccumulator": false, + "tf.raw_ops.SparseCross": false, + "tf.raw_ops.SparseDenseCwiseAdd": false, + "tf.raw_ops.SparseDenseCwiseDiv": false, + "tf.raw_ops.SparseDenseCwiseMul": false, + "tf.raw_ops.SparseFillEmptyRows": false, + "tf.raw_ops.SparseFillEmptyRowsGrad": false, + "tf.raw_ops.SparseMatMul": false, + "tf.raw_ops.SparseMatrixAdd": false, + "tf.raw_ops.SparseMatrixMatMul": false, + "tf.raw_ops.SparseMatrixMul": false, + "tf.raw_ops.SparseMatrixNNZ": false, + "tf.raw_ops.SparseMatrixOrderingAMD": false, + "tf.raw_ops.SparseMatrixSoftmax": false, + "tf.raw_ops.SparseMatrixSoftmaxGrad": false, + "tf.raw_ops.SparseMatrixSparseCholesky": false, + "tf.raw_ops.SparseMatrixSparseMatMul": false, + "tf.raw_ops.SparseMatrixTranspose": false, + "tf.raw_ops.SparseMatrixZeros": false, + "tf.raw_ops.SparseReduceMax": false, + "tf.raw_ops.SparseReduceMaxSparse": false, + "tf.raw_ops.SparseReduceSum": false, + "tf.raw_ops.SparseReduceSumSparse": false, + "tf.raw_ops.SparseReorder": false, + "tf.raw_ops.SparseReshape": false, + "tf.raw_ops.SparseSegmentMean": false, + "tf.raw_ops.SparseSegmentMeanGrad": false, + "tf.raw_ops.SparseSegmentMeanWithNumSegments": false, + "tf.raw_ops.SparseSegmentSqrtN": false, + "tf.raw_ops.SparseSegmentSqrtNGrad": false, + "tf.raw_ops.SparseSegmentSqrtNWithNumSegments": false, + "tf.raw_ops.SparseSegmentSum": false, + "tf.raw_ops.SparseSegmentSumWithNumSegments": false, + "tf.raw_ops.SparseSlice": false, + "tf.raw_ops.SparseSliceGrad": false, + "tf.raw_ops.SparseSoftmax": false, + "tf.raw_ops.SparseSoftmaxCrossEntropyWithLogits": false, + "tf.raw_ops.SparseSparseMaximum": false, + "tf.raw_ops.SparseSparseMinimum": false, + "tf.raw_ops.SparseSplit": false, + "tf.raw_ops.SparseTensorDenseAdd": false, + "tf.raw_ops.SparseTensorDenseMatMul": false, + "tf.raw_ops.SparseTensorSliceDataset": false, + "tf.raw_ops.SparseTensorToCSRSparseMatrix": false, + "tf.raw_ops.SparseToDense": false, + "tf.raw_ops.SparseToSparseSetOperation": false, + "tf.raw_ops.Spence": false, + "tf.raw_ops.Split": false, + "tf.raw_ops.SplitV": false, + "tf.raw_ops.SqlDataset": false, + "tf.raw_ops.Sqrt": false, + "tf.raw_ops.SqrtGrad": false, + "tf.raw_ops.Square": false, + "tf.raw_ops.SquaredDifference": false, + "tf.raw_ops.Squeeze": false, + "tf.raw_ops.Stack": false, + "tf.raw_ops.StackClose": false, + "tf.raw_ops.StackCloseV2": false, + "tf.raw_ops.StackPop": false, + "tf.raw_ops.StackPopV2": false, + 
"tf.raw_ops.StackPush": false, + "tf.raw_ops.StackPushV2": false, + "tf.raw_ops.StackV2": false, + "tf.raw_ops.Stage": false, + "tf.raw_ops.StageClear": false, + "tf.raw_ops.StagePeek": false, + "tf.raw_ops.StageSize": false, + "tf.raw_ops.StatefulPartitionedCall": false, + "tf.raw_ops.StatefulRandomBinomial": false, + "tf.raw_ops.StatefulStandardNormal": false, + "tf.raw_ops.StatefulStandardNormalV2": false, + "tf.raw_ops.StatefulTruncatedNormal": false, + "tf.raw_ops.StatefulUniform": false, + "tf.raw_ops.StatefulUniformFullInt": false, + "tf.raw_ops.StatefulUniformInt": false, + "tf.raw_ops.StatelessIf": false, + "tf.raw_ops.StatelessMultinomial": false, + "tf.raw_ops.StatelessRandomBinomial": false, + "tf.raw_ops.StatelessRandomGammaV2": false, + "tf.raw_ops.StatelessRandomNormal": false, + "tf.raw_ops.StatelessRandomPoisson": false, + "tf.raw_ops.StatelessRandomUniform": false, + "tf.raw_ops.StatelessRandomUniformFullInt": false, + "tf.raw_ops.StatelessRandomUniformInt": false, + "tf.raw_ops.StatelessTruncatedNormal": false, + "tf.raw_ops.StatelessWhile": false, + "tf.raw_ops.StaticRegexFullMatch": false, + "tf.raw_ops.StaticRegexReplace": false, + "tf.raw_ops.StatsAggregatorHandle": false, + "tf.raw_ops.StatsAggregatorHandleV2": false, + "tf.raw_ops.StatsAggregatorSetSummaryWriter": false, + "tf.raw_ops.StatsAggregatorSummary": false, + "tf.raw_ops.StopGradient": false, + "tf.raw_ops.StridedSlice": false, + "tf.raw_ops.StridedSliceAssign": false, + "tf.raw_ops.StridedSliceGrad": false, + "tf.raw_ops.StringFormat": false, + "tf.raw_ops.StringJoin": false, + "tf.raw_ops.StringLength": false, + "tf.raw_ops.StringLower": false, + "tf.raw_ops.StringNGrams": false, + "tf.raw_ops.StringSplit": false, + "tf.raw_ops.StringSplitV2": false, + "tf.raw_ops.StringStrip": false, + "tf.raw_ops.StringToHashBucket": false, + "tf.raw_ops.StringToHashBucketFast": false, + "tf.raw_ops.StringToHashBucketStrong": false, + "tf.raw_ops.StringToNumber": false, + "tf.raw_ops.StringUpper": false, + "tf.raw_ops.Sub": false, + "tf.raw_ops.Substr": false, + "tf.raw_ops.Sum": false, + "tf.raw_ops.SummaryWriter": false, + "tf.raw_ops.Svd": false, + "tf.raw_ops.Switch": false, + "tf.raw_ops.SymbolicGradient": false, + "tf.raw_ops.TFRecordDataset": false, + "tf.raw_ops.TFRecordReader": false, + "tf.raw_ops.TFRecordReaderV2": false, + "tf.raw_ops.TPUCompilationResult": false, + "tf.raw_ops.TPUEmbeddingActivations": false, + "tf.raw_ops.TPUOrdinalSelector": false, + "tf.raw_ops.TPUPartitionedCall": false, + "tf.raw_ops.TPUReplicateMetadata": false, + "tf.raw_ops.TPUReplicatedInput": false, + "tf.raw_ops.TPUReplicatedOutput": false, + "tf.raw_ops.TakeDataset": false, + "tf.raw_ops.TakeManySparseFromTensorsMap": false, + "tf.raw_ops.TakeWhileDataset": false, + "tf.raw_ops.Tan": false, + "tf.raw_ops.Tanh": false, + "tf.raw_ops.TanhGrad": false, + "tf.raw_ops.TemporaryVariable": false, + "tf.raw_ops.TensorArray": false, + "tf.raw_ops.TensorArrayClose": false, + "tf.raw_ops.TensorArrayCloseV2": false, + "tf.raw_ops.TensorArrayCloseV3": false, + "tf.raw_ops.TensorArrayConcat": false, + "tf.raw_ops.TensorArrayConcatV2": false, + "tf.raw_ops.TensorArrayConcatV3": false, + "tf.raw_ops.TensorArrayGather": false, + "tf.raw_ops.TensorArrayGatherV2": false, + "tf.raw_ops.TensorArrayGatherV3": false, + "tf.raw_ops.TensorArrayGrad": false, + "tf.raw_ops.TensorArrayGradV2": false, + "tf.raw_ops.TensorArrayGradV3": false, + "tf.raw_ops.TensorArrayGradWithShape": false, + "tf.raw_ops.TensorArrayPack": false, + 
"tf.raw_ops.TensorArrayRead": false, + "tf.raw_ops.TensorArrayReadV2": false, + "tf.raw_ops.TensorArrayReadV3": false, + "tf.raw_ops.TensorArrayScatter": false, + "tf.raw_ops.TensorArrayScatterV2": false, + "tf.raw_ops.TensorArrayScatterV3": false, + "tf.raw_ops.TensorArraySize": false, + "tf.raw_ops.TensorArraySizeV2": false, + "tf.raw_ops.TensorArraySizeV3": false, + "tf.raw_ops.TensorArraySplit": false, + "tf.raw_ops.TensorArraySplitV2": false, + "tf.raw_ops.TensorArraySplitV3": false, + "tf.raw_ops.TensorArrayUnpack": false, + "tf.raw_ops.TensorArrayV2": false, + "tf.raw_ops.TensorArrayV3": false, + "tf.raw_ops.TensorArrayWrite": false, + "tf.raw_ops.TensorArrayWriteV2": false, + "tf.raw_ops.TensorArrayWriteV3": false, + "tf.raw_ops.TensorDataset": false, + "tf.raw_ops.TensorListConcat": false, + "tf.raw_ops.TensorListConcatLists": false, + "tf.raw_ops.TensorListConcatV2": false, + "tf.raw_ops.TensorListElementShape": false, + "tf.raw_ops.TensorListFromTensor": false, + "tf.raw_ops.TensorListGather": false, + "tf.raw_ops.TensorListGetItem": false, + "tf.raw_ops.TensorListLength": false, + "tf.raw_ops.TensorListPopBack": false, + "tf.raw_ops.TensorListPushBack": false, + "tf.raw_ops.TensorListPushBackBatch": false, + "tf.raw_ops.TensorListReserve": false, + "tf.raw_ops.TensorListResize": false, + "tf.raw_ops.TensorListScatter": false, + "tf.raw_ops.TensorListScatterIntoExistingList": false, + "tf.raw_ops.TensorListScatterV2": false, + "tf.raw_ops.TensorListSetItem": false, + "tf.raw_ops.TensorListSplit": false, + "tf.raw_ops.TensorListStack": false, + "tf.raw_ops.TensorScatterAdd": false, + "tf.raw_ops.TensorScatterSub": false, + "tf.raw_ops.TensorScatterUpdate": false, + "tf.raw_ops.TensorSliceDataset": false, + "tf.raw_ops.TensorStridedSliceUpdate": false, + "tf.raw_ops.TensorSummary": false, + "tf.raw_ops.TensorSummaryV2": false, + "tf.raw_ops.TextLineDataset": false, + "tf.raw_ops.TextLineReader": false, + "tf.raw_ops.TextLineReaderV2": false, + "tf.raw_ops.ThreadPoolDataset": false, + "tf.raw_ops.ThreadPoolHandle": false, + "tf.raw_ops.ThreadUnsafeUnigramCandidateSampler": false, + "tf.raw_ops.Tile": false, + "tf.raw_ops.TileGrad": false, + "tf.raw_ops.Timestamp": false, + "tf.raw_ops.ToBool": false, + "tf.raw_ops.TopK": false, + "tf.raw_ops.TopKV2": false, + "tf.raw_ops.Transpose": false, + "tf.raw_ops.TridiagonalMatMul": false, + "tf.raw_ops.TridiagonalSolve": false, + "tf.raw_ops.TruncateDiv": false, + "tf.raw_ops.TruncateMod": false, + "tf.raw_ops.TruncatedNormal": false, + "tf.raw_ops.Unbatch": false, + "tf.raw_ops.UnbatchDataset": false, + "tf.raw_ops.UnbatchGrad": false, + "tf.raw_ops.UnicodeDecode": false, + "tf.raw_ops.UnicodeDecodeWithOffsets": false, + "tf.raw_ops.UnicodeEncode": false, + "tf.raw_ops.UnicodeScript": false, + "tf.raw_ops.UnicodeTranscode": false, + "tf.raw_ops.UniformCandidateSampler": false, + "tf.raw_ops.Unique": false, + "tf.raw_ops.UniqueDataset": false, + "tf.raw_ops.UniqueV2": false, + "tf.raw_ops.UniqueWithCounts": false, + "tf.raw_ops.UniqueWithCountsV2": false, + "tf.raw_ops.Unpack": false, + "tf.raw_ops.UnravelIndex": false, + "tf.raw_ops.UnsortedSegmentJoin": false, + "tf.raw_ops.UnsortedSegmentMax": false, + "tf.raw_ops.UnsortedSegmentMin": false, + "tf.raw_ops.UnsortedSegmentProd": false, + "tf.raw_ops.UnsortedSegmentSum": false, + "tf.raw_ops.Unstage": false, + "tf.raw_ops.UnwrapDatasetVariant": false, + "tf.raw_ops.UpperBound": false, + "tf.raw_ops.VarHandleOp": false, + "tf.raw_ops.VarIsInitializedOp": false, + "tf.raw_ops.Variable": 
false, + "tf.raw_ops.VariableShape": false, + "tf.raw_ops.VariableV2": false, + "tf.raw_ops.Where": false, + "tf.raw_ops.While": false, + "tf.raw_ops.WholeFileReader": false, + "tf.raw_ops.WholeFileReaderV2": false, + "tf.raw_ops.WindowDataset": false, + "tf.raw_ops.WorkerHeartbeat": false, + "tf.raw_ops.WrapDatasetVariant": false, + "tf.raw_ops.WriteAudioSummary": false, + "tf.raw_ops.WriteFile": false, + "tf.raw_ops.WriteGraphSummary": false, + "tf.raw_ops.WriteHistogramSummary": false, + "tf.raw_ops.WriteImageSummary": false, + "tf.raw_ops.WriteRawProtoSummary": false, + "tf.raw_ops.WriteScalarSummary": false, + "tf.raw_ops.WriteSummary": false, + "tf.raw_ops.Xdivy": false, + "tf.raw_ops.Xlog1py": false, + "tf.raw_ops.Xlogy": false, + "tf.raw_ops.ZerosLike": false, + "tf.raw_ops.Zeta": false, + "tf.raw_ops.ZipDataset": false, + "tf.realdiv": false, + "tf.recompute_grad": false, + "tf.reduce_all": false, + "tf.reduce_any": false, + "tf.reduce_logsumexp": false, + "tf.reduce_max": false, + "tf.reduce_mean": false, + "tf.reduce_min": false, + "tf.reduce_prod": false, + "tf.reduce_sum": false, + "tf.register_tensor_conversion_function": false, + "tf.repeat": false, + "tf.required_space_to_batch_paddings": false, + "tf.reshape": false, + "tf.resource": true, + "tf.reverse": false, + "tf.reverse_sequence": false, + "tf.roll": false, + "tf.round": false, + "tf.saturate_cast": false, + "tf.saved_model": false, + "tf.saved_model.ASSETS_DIRECTORY": true, + "tf.saved_model.ASSETS_KEY": true, + "tf.saved_model.Asset": false, + "tf.saved_model.Asset.__eq__": true, + "tf.saved_model.Asset.__ge__": true, + "tf.saved_model.Asset.__gt__": true, + "tf.saved_model.Asset.__init__": true, + "tf.saved_model.Asset.__le__": true, + "tf.saved_model.Asset.__lt__": true, + "tf.saved_model.Asset.__ne__": true, + "tf.saved_model.Asset.__new__": true, + "tf.saved_model.Asset.asset_path": true, + "tf.saved_model.CLASSIFY_INPUTS": true, + "tf.saved_model.CLASSIFY_METHOD_NAME": true, + "tf.saved_model.CLASSIFY_OUTPUT_CLASSES": true, + "tf.saved_model.CLASSIFY_OUTPUT_SCORES": true, + "tf.saved_model.DEBUG_DIRECTORY": true, + "tf.saved_model.DEBUG_INFO_FILENAME_PB": true, + "tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY": true, + "tf.saved_model.GPU": true, + "tf.saved_model.PREDICT_INPUTS": true, + "tf.saved_model.PREDICT_METHOD_NAME": true, + "tf.saved_model.PREDICT_OUTPUTS": true, + "tf.saved_model.REGRESS_INPUTS": true, + "tf.saved_model.REGRESS_METHOD_NAME": true, + "tf.saved_model.REGRESS_OUTPUTS": true, + "tf.saved_model.SAVED_MODEL_FILENAME_PB": true, + "tf.saved_model.SAVED_MODEL_FILENAME_PBTXT": true, + "tf.saved_model.SAVED_MODEL_SCHEMA_VERSION": true, + "tf.saved_model.SERVING": true, + "tf.saved_model.SaveOptions": false, + "tf.saved_model.SaveOptions.__eq__": true, + "tf.saved_model.SaveOptions.__ge__": true, + "tf.saved_model.SaveOptions.__gt__": true, + "tf.saved_model.SaveOptions.__init__": true, + "tf.saved_model.SaveOptions.__le__": true, + "tf.saved_model.SaveOptions.__lt__": true, + "tf.saved_model.SaveOptions.__ne__": true, + "tf.saved_model.SaveOptions.__new__": true, + "tf.saved_model.SaveOptions.function_aliases": true, + "tf.saved_model.SaveOptions.namespace_whitelist": true, + "tf.saved_model.SaveOptions.save_debug_info": true, + "tf.saved_model.TPU": true, + "tf.saved_model.TRAINING": true, + "tf.saved_model.VARIABLES_DIRECTORY": true, + "tf.saved_model.VARIABLES_FILENAME": true, + "tf.saved_model.contains_saved_model": false, + "tf.saved_model.load": false, + "tf.saved_model.save": false, 
+ "tf.scalar_mul": false, + "tf.scan": false, + "tf.scatter_nd": false, + "tf.searchsorted": false, + "tf.sequence_mask": false, + "tf.sets": false, + "tf.sets.difference": false, + "tf.sets.intersection": false, + "tf.sets.size": false, + "tf.sets.union": false, + "tf.shape": false, + "tf.shape_n": false, + "tf.sigmoid": false, + "tf.sign": false, + "tf.signal": false, + "tf.signal.dct": false, + "tf.signal.fft": false, + "tf.signal.fft2d": false, + "tf.signal.fft3d": false, + "tf.signal.fftshift": false, + "tf.signal.frame": false, + "tf.signal.hamming_window": false, + "tf.signal.hann_window": false, + "tf.signal.idct": false, + "tf.signal.ifft": false, + "tf.signal.ifft2d": false, + "tf.signal.ifft3d": false, + "tf.signal.ifftshift": false, + "tf.signal.inverse_mdct": false, + "tf.signal.inverse_stft": false, + "tf.signal.inverse_stft_window_fn": false, + "tf.signal.irfft": false, + "tf.signal.irfft2d": false, + "tf.signal.irfft3d": false, + "tf.signal.kaiser_bessel_derived_window": false, + "tf.signal.kaiser_window": false, + "tf.signal.linear_to_mel_weight_matrix": false, + "tf.signal.mdct": false, + "tf.signal.mfccs_from_log_mel_spectrograms": false, + "tf.signal.overlap_and_add": false, + "tf.signal.rfft": false, + "tf.signal.rfft2d": false, + "tf.signal.rfft3d": false, + "tf.signal.stft": false, + "tf.signal.vorbis_window": false, + "tf.sin": false, + "tf.sinh": false, + "tf.size": false, + "tf.slice": false, + "tf.sort": false, + "tf.space_to_batch": false, + "tf.space_to_batch_nd": false, + "tf.sparse": false, + "tf.sparse.SparseTensor": false, + "tf.sparse.SparseTensor.__div__": true, + "tf.sparse.SparseTensor.__eq__": true, + "tf.sparse.SparseTensor.__ge__": true, + "tf.sparse.SparseTensor.__gt__": true, + "tf.sparse.SparseTensor.__init__": true, + "tf.sparse.SparseTensor.__le__": true, + "tf.sparse.SparseTensor.__lt__": true, + "tf.sparse.SparseTensor.__mul__": true, + "tf.sparse.SparseTensor.__ne__": true, + "tf.sparse.SparseTensor.__new__": true, + "tf.sparse.SparseTensor.__truediv__": true, + "tf.sparse.SparseTensor.consumers": true, + "tf.sparse.SparseTensor.dense_shape": true, + "tf.sparse.SparseTensor.dtype": true, + "tf.sparse.SparseTensor.eval": true, + "tf.sparse.SparseTensor.from_value": true, + "tf.sparse.SparseTensor.get_shape": true, + "tf.sparse.SparseTensor.graph": true, + "tf.sparse.SparseTensor.indices": true, + "tf.sparse.SparseTensor.op": true, + "tf.sparse.SparseTensor.shape": true, + "tf.sparse.SparseTensor.values": true, + "tf.sparse.add": false, + "tf.sparse.concat": false, + "tf.sparse.cross": false, + "tf.sparse.cross_hashed": false, + "tf.sparse.expand_dims": false, + "tf.sparse.eye": false, + "tf.sparse.fill_empty_rows": false, + "tf.sparse.from_dense": false, + "tf.sparse.mask": false, + "tf.sparse.maximum": false, + "tf.sparse.minimum": false, + "tf.sparse.reduce_max": false, + "tf.sparse.reduce_sum": false, + "tf.sparse.reorder": false, + "tf.sparse.reset_shape": false, + "tf.sparse.reshape": false, + "tf.sparse.retain": false, + "tf.sparse.segment_mean": false, + "tf.sparse.segment_sqrt_n": false, + "tf.sparse.segment_sum": false, + "tf.sparse.slice": false, + "tf.sparse.softmax": false, + "tf.sparse.sparse_dense_matmul": false, + "tf.sparse.split": false, + "tf.sparse.to_dense": false, + "tf.sparse.to_indicator": false, + "tf.sparse.transpose": false, + "tf.split": false, + "tf.sqrt": false, + "tf.square": false, + "tf.squeeze": false, + "tf.stack": false, + "tf.stop_gradient": false, + "tf.strided_slice": false, + "tf.string": true, + 
"tf.strings": false, + "tf.strings.as_string": false, + "tf.strings.bytes_split": false, + "tf.strings.format": false, + "tf.strings.join": false, + "tf.strings.length": false, + "tf.strings.lower": false, + "tf.strings.ngrams": false, + "tf.strings.reduce_join": false, + "tf.strings.regex_full_match": false, + "tf.strings.regex_replace": false, + "tf.strings.split": false, + "tf.strings.strip": false, + "tf.strings.substr": false, + "tf.strings.to_hash_bucket": false, + "tf.strings.to_hash_bucket_fast": false, + "tf.strings.to_hash_bucket_strong": false, + "tf.strings.to_number": false, + "tf.strings.unicode_decode": false, + "tf.strings.unicode_decode_with_offsets": false, + "tf.strings.unicode_encode": false, + "tf.strings.unicode_script": false, + "tf.strings.unicode_split": false, + "tf.strings.unicode_split_with_offsets": false, + "tf.strings.unicode_transcode": false, + "tf.strings.unsorted_segment_join": false, + "tf.strings.upper": false, + "tf.subtract": false, + "tf.summary": false, + "tf.summary.SummaryWriter": false, + "tf.summary.SummaryWriter.__eq__": true, + "tf.summary.SummaryWriter.__ge__": true, + "tf.summary.SummaryWriter.__gt__": true, + "tf.summary.SummaryWriter.__init__": true, + "tf.summary.SummaryWriter.__le__": true, + "tf.summary.SummaryWriter.__lt__": true, + "tf.summary.SummaryWriter.__ne__": true, + "tf.summary.SummaryWriter.__new__": true, + "tf.summary.SummaryWriter.as_default": true, + "tf.summary.SummaryWriter.close": true, + "tf.summary.SummaryWriter.flush": true, + "tf.summary.SummaryWriter.init": true, + "tf.summary.SummaryWriter.set_as_default": true, + "tf.summary.audio": false, + "tf.summary.create_file_writer": false, + "tf.summary.create_noop_writer": false, + "tf.summary.experimental": false, + "tf.summary.experimental.get_step": false, + "tf.summary.experimental.set_step": false, + "tf.summary.experimental.summary_scope": false, + "tf.summary.experimental.write_raw_pb": false, + "tf.summary.flush": false, + "tf.summary.histogram": false, + "tf.summary.image": false, + "tf.summary.record_if": false, + "tf.summary.scalar": false, + "tf.summary.text": false, + "tf.summary.trace_export": false, + "tf.summary.trace_off": false, + "tf.summary.trace_on": false, + "tf.summary.write": false, + "tf.switch_case": false, + "tf.sysconfig": false, + "tf.sysconfig.CXX11_ABI_FLAG": true, + "tf.sysconfig.MONOLITHIC_BUILD": true, + "tf.sysconfig.get_compile_flags": false, + "tf.sysconfig.get_include": false, + "tf.sysconfig.get_lib": false, + "tf.sysconfig.get_link_flags": false, + "tf.tan": false, + "tf.tanh": false, + "tf.tensor_scatter_nd_add": false, + "tf.tensor_scatter_nd_sub": false, + "tf.tensor_scatter_nd_update": false, + "tf.tensordot": false, + "tf.test": false, + "tf.test.Benchmark": false, + "tf.test.Benchmark.__eq__": true, + "tf.test.Benchmark.__ge__": true, + "tf.test.Benchmark.__gt__": true, + "tf.test.Benchmark.__init__": true, + "tf.test.Benchmark.__le__": true, + "tf.test.Benchmark.__lt__": true, + "tf.test.Benchmark.__ne__": true, + "tf.test.Benchmark.__new__": true, + "tf.test.Benchmark.evaluate": true, + "tf.test.Benchmark.is_abstract": true, + "tf.test.Benchmark.report_benchmark": true, + "tf.test.Benchmark.run_op_benchmark": true, + "tf.test.TestCase": false, + "tf.test.TestCase.__call__": true, + "tf.test.TestCase.__eq__": true, + "tf.test.TestCase.__ge__": true, + "tf.test.TestCase.__gt__": true, + "tf.test.TestCase.__init__": true, + "tf.test.TestCase.__le__": true, + "tf.test.TestCase.__lt__": true, + "tf.test.TestCase.__ne__": true, 
+ "tf.test.TestCase.__new__": true, + "tf.test.TestCase.addClassCleanup": true, + "tf.test.TestCase.addCleanup": true, + "tf.test.TestCase.addTypeEqualityFunc": true, + "tf.test.TestCase.assertAllClose": true, + "tf.test.TestCase.assertAllCloseAccordingToType": true, + "tf.test.TestCase.assertAllEqual": true, + "tf.test.TestCase.assertAllGreater": true, + "tf.test.TestCase.assertAllGreaterEqual": true, + "tf.test.TestCase.assertAllInRange": true, + "tf.test.TestCase.assertAllInSet": true, + "tf.test.TestCase.assertAllLess": true, + "tf.test.TestCase.assertAllLessEqual": true, + "tf.test.TestCase.assertAlmostEqual": true, + "tf.test.TestCase.assertAlmostEquals": true, + "tf.test.TestCase.assertArrayNear": true, + "tf.test.TestCase.assertBetween": true, + "tf.test.TestCase.assertCommandFails": true, + "tf.test.TestCase.assertCommandSucceeds": true, + "tf.test.TestCase.assertContainsExactSubsequence": true, + "tf.test.TestCase.assertContainsInOrder": true, + "tf.test.TestCase.assertContainsSubsequence": true, + "tf.test.TestCase.assertContainsSubset": true, + "tf.test.TestCase.assertCountEqual": true, + "tf.test.TestCase.assertDTypeEqual": true, + "tf.test.TestCase.assertDeviceEqual": true, + "tf.test.TestCase.assertDictContainsSubset": true, + "tf.test.TestCase.assertDictEqual": true, + "tf.test.TestCase.assertEmpty": true, + "tf.test.TestCase.assertEndsWith": true, + "tf.test.TestCase.assertEqual": true, + "tf.test.TestCase.assertEquals": true, + "tf.test.TestCase.assertFalse": true, + "tf.test.TestCase.assertGreater": true, + "tf.test.TestCase.assertGreaterEqual": true, + "tf.test.TestCase.assertIn": true, + "tf.test.TestCase.assertIs": true, + "tf.test.TestCase.assertIsInstance": true, + "tf.test.TestCase.assertIsNone": true, + "tf.test.TestCase.assertIsNot": true, + "tf.test.TestCase.assertIsNotNone": true, + "tf.test.TestCase.assertItemsEqual": true, + "tf.test.TestCase.assertJsonEqual": true, + "tf.test.TestCase.assertLen": true, + "tf.test.TestCase.assertLess": true, + "tf.test.TestCase.assertLessEqual": true, + "tf.test.TestCase.assertListEqual": true, + "tf.test.TestCase.assertLogs": true, + "tf.test.TestCase.assertMultiLineEqual": true, + "tf.test.TestCase.assertNDArrayNear": true, + "tf.test.TestCase.assertNear": true, + "tf.test.TestCase.assertNoCommonElements": true, + "tf.test.TestCase.assertNotAllClose": true, + "tf.test.TestCase.assertNotAllEqual": true, + "tf.test.TestCase.assertNotAlmostEqual": true, + "tf.test.TestCase.assertNotAlmostEquals": true, + "tf.test.TestCase.assertNotEmpty": true, + "tf.test.TestCase.assertNotEndsWith": true, + "tf.test.TestCase.assertNotEqual": true, + "tf.test.TestCase.assertNotEquals": true, + "tf.test.TestCase.assertNotIn": true, + "tf.test.TestCase.assertNotIsInstance": true, + "tf.test.TestCase.assertNotRegex": true, + "tf.test.TestCase.assertNotRegexpMatches": true, + "tf.test.TestCase.assertNotStartsWith": true, + "tf.test.TestCase.assertProtoEquals": true, + "tf.test.TestCase.assertProtoEqualsVersion": true, + "tf.test.TestCase.assertRaises": true, + "tf.test.TestCase.assertRaisesOpError": true, + "tf.test.TestCase.assertRaisesRegex": true, + "tf.test.TestCase.assertRaisesRegexp": true, + "tf.test.TestCase.assertRaisesWithLiteralMatch": true, + "tf.test.TestCase.assertRaisesWithPredicateMatch": true, + "tf.test.TestCase.assertRegex": true, + "tf.test.TestCase.assertRegexMatch": true, + "tf.test.TestCase.assertRegexpMatches": true, + "tf.test.TestCase.assertSameElements": true, + "tf.test.TestCase.assertSameStructure": true, + 
"tf.test.TestCase.assertSequenceAlmostEqual": true, + "tf.test.TestCase.assertSequenceEqual": true, + "tf.test.TestCase.assertSequenceStartsWith": true, + "tf.test.TestCase.assertSetEqual": true, + "tf.test.TestCase.assertShapeEqual": true, + "tf.test.TestCase.assertStartsWith": true, + "tf.test.TestCase.assertTotallyOrdered": true, + "tf.test.TestCase.assertTrue": true, + "tf.test.TestCase.assertTupleEqual": true, + "tf.test.TestCase.assertUrlEqual": true, + "tf.test.TestCase.assertWarns": true, + "tf.test.TestCase.assertWarnsRegex": true, + "tf.test.TestCase.assert_": true, + "tf.test.TestCase.cached_session": true, + "tf.test.TestCase.captureWritesToStream": true, + "tf.test.TestCase.checkedThread": true, + "tf.test.TestCase.countTestCases": true, + "tf.test.TestCase.create_tempdir": true, + "tf.test.TestCase.create_tempfile": true, + "tf.test.TestCase.debug": true, + "tf.test.TestCase.defaultTestResult": true, + "tf.test.TestCase.doClassCleanups": true, + "tf.test.TestCase.doCleanups": true, + "tf.test.TestCase.enter_context": true, + "tf.test.TestCase.evaluate": true, + "tf.test.TestCase.fail": true, + "tf.test.TestCase.failIf": true, + "tf.test.TestCase.failIfAlmostEqual": true, + "tf.test.TestCase.failIfEqual": true, + "tf.test.TestCase.failUnless": true, + "tf.test.TestCase.failUnlessAlmostEqual": true, + "tf.test.TestCase.failUnlessEqual": true, + "tf.test.TestCase.failUnlessRaises": true, + "tf.test.TestCase.failureException": false, + "tf.test.TestCase.failureException.__eq__": true, + "tf.test.TestCase.failureException.__ge__": true, + "tf.test.TestCase.failureException.__gt__": true, + "tf.test.TestCase.failureException.__init__": true, + "tf.test.TestCase.failureException.__le__": true, + "tf.test.TestCase.failureException.__lt__": true, + "tf.test.TestCase.failureException.__ne__": true, + "tf.test.TestCase.failureException.__new__": true, + "tf.test.TestCase.failureException.args": true, + "tf.test.TestCase.failureException.with_traceback": true, + "tf.test.TestCase.get_temp_dir": true, + "tf.test.TestCase.id": true, + "tf.test.TestCase.longMessage": true, + "tf.test.TestCase.maxDiff": true, + "tf.test.TestCase.run": true, + "tf.test.TestCase.session": true, + "tf.test.TestCase.setUp": true, + "tf.test.TestCase.setUpClass": true, + "tf.test.TestCase.shortDescription": true, + "tf.test.TestCase.skipTest": true, + "tf.test.TestCase.subTest": true, + "tf.test.TestCase.tearDown": true, + "tf.test.TestCase.tearDownClass": true, + "tf.test.TestCase.tempfile_cleanup": true, + "tf.test.TestCase.test_session": true, + "tf.test.assert_equal_graph_def": false, + "tf.test.benchmark_config": false, + "tf.test.compute_gradient": false, + "tf.test.create_local_cluster": false, + "tf.test.gpu_device_name": false, + "tf.test.is_built_with_cuda": false, + "tf.test.is_built_with_gpu_support": false, + "tf.test.is_built_with_rocm": false, + "tf.test.is_built_with_xla": false, + "tf.test.is_gpu_available": false, + "tf.test.main": false, + "tf.tile": false, + "tf.timestamp": false, + "tf.tpu": false, + "tf.tpu.experimental": false, + "tf.tpu.experimental.DeviceAssignment": false, + "tf.tpu.experimental.DeviceAssignment.__eq__": true, + "tf.tpu.experimental.DeviceAssignment.__ge__": true, + "tf.tpu.experimental.DeviceAssignment.__gt__": true, + "tf.tpu.experimental.DeviceAssignment.__init__": true, + "tf.tpu.experimental.DeviceAssignment.__le__": true, + "tf.tpu.experimental.DeviceAssignment.__lt__": true, + "tf.tpu.experimental.DeviceAssignment.__ne__": true, + 
"tf.tpu.experimental.DeviceAssignment.__new__": true, + "tf.tpu.experimental.DeviceAssignment.build": true, + "tf.tpu.experimental.DeviceAssignment.coordinates": true, + "tf.tpu.experimental.DeviceAssignment.core_assignment": true, + "tf.tpu.experimental.DeviceAssignment.host_device": true, + "tf.tpu.experimental.DeviceAssignment.lookup_replicas": true, + "tf.tpu.experimental.DeviceAssignment.num_cores_per_replica": true, + "tf.tpu.experimental.DeviceAssignment.num_replicas": true, + "tf.tpu.experimental.DeviceAssignment.topology": true, + "tf.tpu.experimental.DeviceAssignment.tpu_device": true, + "tf.tpu.experimental.DeviceAssignment.tpu_ordinal": true, + "tf.tpu.experimental.initialize_tpu_system": false, + "tf.tpu.experimental.shutdown_tpu_system": false, + "tf.train": false, + "tf.train.BytesList": false, + "tf.train.BytesList.ByteSize": true, + "tf.train.BytesList.Clear": true, + "tf.train.BytesList.ClearExtension": true, + "tf.train.BytesList.ClearField": true, + "tf.train.BytesList.CopyFrom": true, + "tf.train.BytesList.DESCRIPTOR": true, + "tf.train.BytesList.DiscardUnknownFields": true, + "tf.train.BytesList.Extensions": true, + "tf.train.BytesList.FindInitializationErrors": true, + "tf.train.BytesList.FromString": true, + "tf.train.BytesList.HasExtension": true, + "tf.train.BytesList.HasField": true, + "tf.train.BytesList.IsInitialized": true, + "tf.train.BytesList.ListFields": true, + "tf.train.BytesList.MergeFrom": true, + "tf.train.BytesList.MergeFromString": true, + "tf.train.BytesList.ParseFromString": true, + "tf.train.BytesList.RegisterExtension": true, + "tf.train.BytesList.SerializePartialToString": true, + "tf.train.BytesList.SerializeToString": true, + "tf.train.BytesList.SetInParent": true, + "tf.train.BytesList.UnknownFields": true, + "tf.train.BytesList.WhichOneof": true, + "tf.train.BytesList.__eq__": true, + "tf.train.BytesList.__ge__": true, + "tf.train.BytesList.__gt__": true, + "tf.train.BytesList.__init__": true, + "tf.train.BytesList.__le__": true, + "tf.train.BytesList.__lt__": true, + "tf.train.BytesList.__ne__": true, + "tf.train.BytesList.__new__": true, + "tf.train.BytesList.value": true, + "tf.train.Checkpoint": false, + "tf.train.Checkpoint.__eq__": true, + "tf.train.Checkpoint.__ge__": true, + "tf.train.Checkpoint.__gt__": true, + "tf.train.Checkpoint.__init__": true, + "tf.train.Checkpoint.__le__": true, + "tf.train.Checkpoint.__lt__": true, + "tf.train.Checkpoint.__ne__": true, + "tf.train.Checkpoint.__new__": true, + "tf.train.Checkpoint.restore": true, + "tf.train.Checkpoint.save": true, + "tf.train.Checkpoint.save_counter": true, + "tf.train.Checkpoint.write": true, + "tf.train.CheckpointManager": false, + "tf.train.CheckpointManager.__eq__": true, + "tf.train.CheckpointManager.__ge__": true, + "tf.train.CheckpointManager.__gt__": true, + "tf.train.CheckpointManager.__init__": true, + "tf.train.CheckpointManager.__le__": true, + "tf.train.CheckpointManager.__lt__": true, + "tf.train.CheckpointManager.__ne__": true, + "tf.train.CheckpointManager.__new__": true, + "tf.train.CheckpointManager.checkpoint": true, + "tf.train.CheckpointManager.checkpoint_interval": true, + "tf.train.CheckpointManager.checkpoints": true, + "tf.train.CheckpointManager.directory": true, + "tf.train.CheckpointManager.latest_checkpoint": true, + "tf.train.CheckpointManager.restore_or_initialize": true, + "tf.train.CheckpointManager.save": true, + "tf.train.ClusterDef": false, + "tf.train.ClusterDef.ByteSize": true, + "tf.train.ClusterDef.Clear": true, + 
"tf.train.ClusterDef.ClearExtension": true, + "tf.train.ClusterDef.ClearField": true, + "tf.train.ClusterDef.CopyFrom": true, + "tf.train.ClusterDef.DESCRIPTOR": true, + "tf.train.ClusterDef.DiscardUnknownFields": true, + "tf.train.ClusterDef.Extensions": true, + "tf.train.ClusterDef.FindInitializationErrors": true, + "tf.train.ClusterDef.FromString": true, + "tf.train.ClusterDef.HasExtension": true, + "tf.train.ClusterDef.HasField": true, + "tf.train.ClusterDef.IsInitialized": true, + "tf.train.ClusterDef.ListFields": true, + "tf.train.ClusterDef.MergeFrom": true, + "tf.train.ClusterDef.MergeFromString": true, + "tf.train.ClusterDef.ParseFromString": true, + "tf.train.ClusterDef.RegisterExtension": true, + "tf.train.ClusterDef.SerializePartialToString": true, + "tf.train.ClusterDef.SerializeToString": true, + "tf.train.ClusterDef.SetInParent": true, + "tf.train.ClusterDef.UnknownFields": true, + "tf.train.ClusterDef.WhichOneof": true, + "tf.train.ClusterDef.__eq__": true, + "tf.train.ClusterDef.__ge__": true, + "tf.train.ClusterDef.__gt__": true, + "tf.train.ClusterDef.__init__": true, + "tf.train.ClusterDef.__le__": true, + "tf.train.ClusterDef.__lt__": true, + "tf.train.ClusterDef.__ne__": true, + "tf.train.ClusterDef.__new__": true, + "tf.train.ClusterDef.job": true, + "tf.train.ClusterSpec": false, + "tf.train.ClusterSpec.__bool__": true, + "tf.train.ClusterSpec.__eq__": true, + "tf.train.ClusterSpec.__ge__": true, + "tf.train.ClusterSpec.__gt__": true, + "tf.train.ClusterSpec.__init__": true, + "tf.train.ClusterSpec.__le__": true, + "tf.train.ClusterSpec.__lt__": true, + "tf.train.ClusterSpec.__ne__": true, + "tf.train.ClusterSpec.__new__": true, + "tf.train.ClusterSpec.__nonzero__": true, + "tf.train.ClusterSpec.as_cluster_def": true, + "tf.train.ClusterSpec.as_dict": true, + "tf.train.ClusterSpec.job_tasks": true, + "tf.train.ClusterSpec.jobs": true, + "tf.train.ClusterSpec.num_tasks": true, + "tf.train.ClusterSpec.task_address": true, + "tf.train.ClusterSpec.task_indices": true, + "tf.train.Coordinator": false, + "tf.train.Coordinator.__eq__": true, + "tf.train.Coordinator.__ge__": true, + "tf.train.Coordinator.__gt__": true, + "tf.train.Coordinator.__init__": true, + "tf.train.Coordinator.__le__": true, + "tf.train.Coordinator.__lt__": true, + "tf.train.Coordinator.__ne__": true, + "tf.train.Coordinator.__new__": true, + "tf.train.Coordinator.clear_stop": true, + "tf.train.Coordinator.join": true, + "tf.train.Coordinator.joined": true, + "tf.train.Coordinator.raise_requested_exception": true, + "tf.train.Coordinator.register_thread": true, + "tf.train.Coordinator.request_stop": true, + "tf.train.Coordinator.should_stop": true, + "tf.train.Coordinator.stop_on_exception": true, + "tf.train.Coordinator.wait_for_stop": true, + "tf.train.Example": false, + "tf.train.Example.ByteSize": true, + "tf.train.Example.Clear": true, + "tf.train.Example.ClearExtension": true, + "tf.train.Example.ClearField": true, + "tf.train.Example.CopyFrom": true, + "tf.train.Example.DESCRIPTOR": true, + "tf.train.Example.DiscardUnknownFields": true, + "tf.train.Example.Extensions": true, + "tf.train.Example.FindInitializationErrors": true, + "tf.train.Example.FromString": true, + "tf.train.Example.HasExtension": true, + "tf.train.Example.HasField": true, + "tf.train.Example.IsInitialized": true, + "tf.train.Example.ListFields": true, + "tf.train.Example.MergeFrom": true, + "tf.train.Example.MergeFromString": true, + "tf.train.Example.ParseFromString": true, + "tf.train.Example.RegisterExtension": true, + 
"tf.train.Example.SerializePartialToString": true, + "tf.train.Example.SerializeToString": true, + "tf.train.Example.SetInParent": true, + "tf.train.Example.UnknownFields": true, + "tf.train.Example.WhichOneof": true, + "tf.train.Example.__eq__": true, + "tf.train.Example.__ge__": true, + "tf.train.Example.__gt__": true, + "tf.train.Example.__init__": true, + "tf.train.Example.__le__": true, + "tf.train.Example.__lt__": true, + "tf.train.Example.__ne__": true, + "tf.train.Example.__new__": true, + "tf.train.Example.features": true, + "tf.train.ExponentialMovingAverage": false, + "tf.train.ExponentialMovingAverage.__eq__": true, + "tf.train.ExponentialMovingAverage.__ge__": true, + "tf.train.ExponentialMovingAverage.__gt__": true, + "tf.train.ExponentialMovingAverage.__init__": true, + "tf.train.ExponentialMovingAverage.__le__": true, + "tf.train.ExponentialMovingAverage.__lt__": true, + "tf.train.ExponentialMovingAverage.__ne__": true, + "tf.train.ExponentialMovingAverage.__new__": true, + "tf.train.ExponentialMovingAverage.apply": true, + "tf.train.ExponentialMovingAverage.average": true, + "tf.train.ExponentialMovingAverage.average_name": true, + "tf.train.ExponentialMovingAverage.name": true, + "tf.train.ExponentialMovingAverage.variables_to_restore": true, + "tf.train.Feature": false, + "tf.train.Feature.ByteSize": true, + "tf.train.Feature.Clear": true, + "tf.train.Feature.ClearExtension": true, + "tf.train.Feature.ClearField": true, + "tf.train.Feature.CopyFrom": true, + "tf.train.Feature.DESCRIPTOR": true, + "tf.train.Feature.DiscardUnknownFields": true, + "tf.train.Feature.Extensions": true, + "tf.train.Feature.FindInitializationErrors": true, + "tf.train.Feature.FromString": true, + "tf.train.Feature.HasExtension": true, + "tf.train.Feature.HasField": true, + "tf.train.Feature.IsInitialized": true, + "tf.train.Feature.ListFields": true, + "tf.train.Feature.MergeFrom": true, + "tf.train.Feature.MergeFromString": true, + "tf.train.Feature.ParseFromString": true, + "tf.train.Feature.RegisterExtension": true, + "tf.train.Feature.SerializePartialToString": true, + "tf.train.Feature.SerializeToString": true, + "tf.train.Feature.SetInParent": true, + "tf.train.Feature.UnknownFields": true, + "tf.train.Feature.WhichOneof": true, + "tf.train.Feature.__eq__": true, + "tf.train.Feature.__ge__": true, + "tf.train.Feature.__gt__": true, + "tf.train.Feature.__init__": true, + "tf.train.Feature.__le__": true, + "tf.train.Feature.__lt__": true, + "tf.train.Feature.__ne__": true, + "tf.train.Feature.__new__": true, + "tf.train.Feature.bytes_list": true, + "tf.train.Feature.float_list": true, + "tf.train.Feature.int64_list": true, + "tf.train.FeatureList": false, + "tf.train.FeatureList.ByteSize": true, + "tf.train.FeatureList.Clear": true, + "tf.train.FeatureList.ClearExtension": true, + "tf.train.FeatureList.ClearField": true, + "tf.train.FeatureList.CopyFrom": true, + "tf.train.FeatureList.DESCRIPTOR": true, + "tf.train.FeatureList.DiscardUnknownFields": true, + "tf.train.FeatureList.Extensions": true, + "tf.train.FeatureList.FindInitializationErrors": true, + "tf.train.FeatureList.FromString": true, + "tf.train.FeatureList.HasExtension": true, + "tf.train.FeatureList.HasField": true, + "tf.train.FeatureList.IsInitialized": true, + "tf.train.FeatureList.ListFields": true, + "tf.train.FeatureList.MergeFrom": true, + "tf.train.FeatureList.MergeFromString": true, + "tf.train.FeatureList.ParseFromString": true, + "tf.train.FeatureList.RegisterExtension": true, + 
"tf.train.FeatureList.SerializePartialToString": true, + "tf.train.FeatureList.SerializeToString": true, + "tf.train.FeatureList.SetInParent": true, + "tf.train.FeatureList.UnknownFields": true, + "tf.train.FeatureList.WhichOneof": true, + "tf.train.FeatureList.__eq__": true, + "tf.train.FeatureList.__ge__": true, + "tf.train.FeatureList.__gt__": true, + "tf.train.FeatureList.__init__": true, + "tf.train.FeatureList.__le__": true, + "tf.train.FeatureList.__lt__": true, + "tf.train.FeatureList.__ne__": true, + "tf.train.FeatureList.__new__": true, + "tf.train.FeatureList.feature": true, + "tf.train.FeatureLists": false, + "tf.train.FeatureLists.ByteSize": true, + "tf.train.FeatureLists.Clear": true, + "tf.train.FeatureLists.ClearExtension": true, + "tf.train.FeatureLists.ClearField": true, + "tf.train.FeatureLists.CopyFrom": true, + "tf.train.FeatureLists.DESCRIPTOR": true, + "tf.train.FeatureLists.DiscardUnknownFields": true, + "tf.train.FeatureLists.Extensions": true, + "tf.train.FeatureLists.FeatureListEntry": false, + "tf.train.FeatureLists.FeatureListEntry.ByteSize": true, + "tf.train.FeatureLists.FeatureListEntry.Clear": true, + "tf.train.FeatureLists.FeatureListEntry.ClearExtension": true, + "tf.train.FeatureLists.FeatureListEntry.ClearField": true, + "tf.train.FeatureLists.FeatureListEntry.CopyFrom": true, + "tf.train.FeatureLists.FeatureListEntry.DESCRIPTOR": true, + "tf.train.FeatureLists.FeatureListEntry.DiscardUnknownFields": true, + "tf.train.FeatureLists.FeatureListEntry.Extensions": true, + "tf.train.FeatureLists.FeatureListEntry.FindInitializationErrors": true, + "tf.train.FeatureLists.FeatureListEntry.FromString": true, + "tf.train.FeatureLists.FeatureListEntry.HasExtension": true, + "tf.train.FeatureLists.FeatureListEntry.HasField": true, + "tf.train.FeatureLists.FeatureListEntry.IsInitialized": true, + "tf.train.FeatureLists.FeatureListEntry.ListFields": true, + "tf.train.FeatureLists.FeatureListEntry.MergeFrom": true, + "tf.train.FeatureLists.FeatureListEntry.MergeFromString": true, + "tf.train.FeatureLists.FeatureListEntry.ParseFromString": true, + "tf.train.FeatureLists.FeatureListEntry.RegisterExtension": true, + "tf.train.FeatureLists.FeatureListEntry.SerializePartialToString": true, + "tf.train.FeatureLists.FeatureListEntry.SerializeToString": true, + "tf.train.FeatureLists.FeatureListEntry.SetInParent": true, + "tf.train.FeatureLists.FeatureListEntry.UnknownFields": true, + "tf.train.FeatureLists.FeatureListEntry.WhichOneof": true, + "tf.train.FeatureLists.FeatureListEntry.__eq__": true, + "tf.train.FeatureLists.FeatureListEntry.__ge__": true, + "tf.train.FeatureLists.FeatureListEntry.__gt__": true, + "tf.train.FeatureLists.FeatureListEntry.__init__": true, + "tf.train.FeatureLists.FeatureListEntry.__le__": true, + "tf.train.FeatureLists.FeatureListEntry.__lt__": true, + "tf.train.FeatureLists.FeatureListEntry.__ne__": true, + "tf.train.FeatureLists.FeatureListEntry.__new__": true, + "tf.train.FeatureLists.FeatureListEntry.key": true, + "tf.train.FeatureLists.FeatureListEntry.value": true, + "tf.train.FeatureLists.FindInitializationErrors": true, + "tf.train.FeatureLists.FromString": true, + "tf.train.FeatureLists.HasExtension": true, + "tf.train.FeatureLists.HasField": true, + "tf.train.FeatureLists.IsInitialized": true, + "tf.train.FeatureLists.ListFields": true, + "tf.train.FeatureLists.MergeFrom": true, + "tf.train.FeatureLists.MergeFromString": true, + "tf.train.FeatureLists.ParseFromString": true, + "tf.train.FeatureLists.RegisterExtension": true, + 
"tf.train.FeatureLists.SerializePartialToString": true, + "tf.train.FeatureLists.SerializeToString": true, + "tf.train.FeatureLists.SetInParent": true, + "tf.train.FeatureLists.UnknownFields": true, + "tf.train.FeatureLists.WhichOneof": true, + "tf.train.FeatureLists.__eq__": true, + "tf.train.FeatureLists.__ge__": true, + "tf.train.FeatureLists.__gt__": true, + "tf.train.FeatureLists.__init__": true, + "tf.train.FeatureLists.__le__": true, + "tf.train.FeatureLists.__lt__": true, + "tf.train.FeatureLists.__ne__": true, + "tf.train.FeatureLists.__new__": true, + "tf.train.FeatureLists.feature_list": true, + "tf.train.Features": false, + "tf.train.Features.ByteSize": true, + "tf.train.Features.Clear": true, + "tf.train.Features.ClearExtension": true, + "tf.train.Features.ClearField": true, + "tf.train.Features.CopyFrom": true, + "tf.train.Features.DESCRIPTOR": true, + "tf.train.Features.DiscardUnknownFields": true, + "tf.train.Features.Extensions": true, + "tf.train.Features.FeatureEntry": false, + "tf.train.Features.FeatureEntry.ByteSize": true, + "tf.train.Features.FeatureEntry.Clear": true, + "tf.train.Features.FeatureEntry.ClearExtension": true, + "tf.train.Features.FeatureEntry.ClearField": true, + "tf.train.Features.FeatureEntry.CopyFrom": true, + "tf.train.Features.FeatureEntry.DESCRIPTOR": true, + "tf.train.Features.FeatureEntry.DiscardUnknownFields": true, + "tf.train.Features.FeatureEntry.Extensions": true, + "tf.train.Features.FeatureEntry.FindInitializationErrors": true, + "tf.train.Features.FeatureEntry.FromString": true, + "tf.train.Features.FeatureEntry.HasExtension": true, + "tf.train.Features.FeatureEntry.HasField": true, + "tf.train.Features.FeatureEntry.IsInitialized": true, + "tf.train.Features.FeatureEntry.ListFields": true, + "tf.train.Features.FeatureEntry.MergeFrom": true, + "tf.train.Features.FeatureEntry.MergeFromString": true, + "tf.train.Features.FeatureEntry.ParseFromString": true, + "tf.train.Features.FeatureEntry.RegisterExtension": true, + "tf.train.Features.FeatureEntry.SerializePartialToString": true, + "tf.train.Features.FeatureEntry.SerializeToString": true, + "tf.train.Features.FeatureEntry.SetInParent": true, + "tf.train.Features.FeatureEntry.UnknownFields": true, + "tf.train.Features.FeatureEntry.WhichOneof": true, + "tf.train.Features.FeatureEntry.__eq__": true, + "tf.train.Features.FeatureEntry.__ge__": true, + "tf.train.Features.FeatureEntry.__gt__": true, + "tf.train.Features.FeatureEntry.__init__": true, + "tf.train.Features.FeatureEntry.__le__": true, + "tf.train.Features.FeatureEntry.__lt__": true, + "tf.train.Features.FeatureEntry.__ne__": true, + "tf.train.Features.FeatureEntry.__new__": true, + "tf.train.Features.FeatureEntry.key": true, + "tf.train.Features.FeatureEntry.value": true, + "tf.train.Features.FindInitializationErrors": true, + "tf.train.Features.FromString": true, + "tf.train.Features.HasExtension": true, + "tf.train.Features.HasField": true, + "tf.train.Features.IsInitialized": true, + "tf.train.Features.ListFields": true, + "tf.train.Features.MergeFrom": true, + "tf.train.Features.MergeFromString": true, + "tf.train.Features.ParseFromString": true, + "tf.train.Features.RegisterExtension": true, + "tf.train.Features.SerializePartialToString": true, + "tf.train.Features.SerializeToString": true, + "tf.train.Features.SetInParent": true, + "tf.train.Features.UnknownFields": true, + "tf.train.Features.WhichOneof": true, + "tf.train.Features.__eq__": true, + "tf.train.Features.__ge__": true, + "tf.train.Features.__gt__": true, + 
"tf.train.Features.__init__": true, + "tf.train.Features.__le__": true, + "tf.train.Features.__lt__": true, + "tf.train.Features.__ne__": true, + "tf.train.Features.__new__": true, + "tf.train.Features.feature": true, + "tf.train.FloatList": false, + "tf.train.FloatList.ByteSize": true, + "tf.train.FloatList.Clear": true, + "tf.train.FloatList.ClearExtension": true, + "tf.train.FloatList.ClearField": true, + "tf.train.FloatList.CopyFrom": true, + "tf.train.FloatList.DESCRIPTOR": true, + "tf.train.FloatList.DiscardUnknownFields": true, + "tf.train.FloatList.Extensions": true, + "tf.train.FloatList.FindInitializationErrors": true, + "tf.train.FloatList.FromString": true, + "tf.train.FloatList.HasExtension": true, + "tf.train.FloatList.HasField": true, + "tf.train.FloatList.IsInitialized": true, + "tf.train.FloatList.ListFields": true, + "tf.train.FloatList.MergeFrom": true, + "tf.train.FloatList.MergeFromString": true, + "tf.train.FloatList.ParseFromString": true, + "tf.train.FloatList.RegisterExtension": true, + "tf.train.FloatList.SerializePartialToString": true, + "tf.train.FloatList.SerializeToString": true, + "tf.train.FloatList.SetInParent": true, + "tf.train.FloatList.UnknownFields": true, + "tf.train.FloatList.WhichOneof": true, + "tf.train.FloatList.__eq__": true, + "tf.train.FloatList.__ge__": true, + "tf.train.FloatList.__gt__": true, + "tf.train.FloatList.__init__": true, + "tf.train.FloatList.__le__": true, + "tf.train.FloatList.__lt__": true, + "tf.train.FloatList.__ne__": true, + "tf.train.FloatList.__new__": true, + "tf.train.FloatList.value": true, + "tf.train.Int64List": false, + "tf.train.Int64List.ByteSize": true, + "tf.train.Int64List.Clear": true, + "tf.train.Int64List.ClearExtension": true, + "tf.train.Int64List.ClearField": true, + "tf.train.Int64List.CopyFrom": true, + "tf.train.Int64List.DESCRIPTOR": true, + "tf.train.Int64List.DiscardUnknownFields": true, + "tf.train.Int64List.Extensions": true, + "tf.train.Int64List.FindInitializationErrors": true, + "tf.train.Int64List.FromString": true, + "tf.train.Int64List.HasExtension": true, + "tf.train.Int64List.HasField": true, + "tf.train.Int64List.IsInitialized": true, + "tf.train.Int64List.ListFields": true, + "tf.train.Int64List.MergeFrom": true, + "tf.train.Int64List.MergeFromString": true, + "tf.train.Int64List.ParseFromString": true, + "tf.train.Int64List.RegisterExtension": true, + "tf.train.Int64List.SerializePartialToString": true, + "tf.train.Int64List.SerializeToString": true, + "tf.train.Int64List.SetInParent": true, + "tf.train.Int64List.UnknownFields": true, + "tf.train.Int64List.WhichOneof": true, + "tf.train.Int64List.__eq__": true, + "tf.train.Int64List.__ge__": true, + "tf.train.Int64List.__gt__": true, + "tf.train.Int64List.__init__": true, + "tf.train.Int64List.__le__": true, + "tf.train.Int64List.__lt__": true, + "tf.train.Int64List.__ne__": true, + "tf.train.Int64List.__new__": true, + "tf.train.Int64List.value": true, + "tf.train.JobDef": false, + "tf.train.JobDef.ByteSize": true, + "tf.train.JobDef.Clear": true, + "tf.train.JobDef.ClearExtension": true, + "tf.train.JobDef.ClearField": true, + "tf.train.JobDef.CopyFrom": true, + "tf.train.JobDef.DESCRIPTOR": true, + "tf.train.JobDef.DiscardUnknownFields": true, + "tf.train.JobDef.Extensions": true, + "tf.train.JobDef.FindInitializationErrors": true, + "tf.train.JobDef.FromString": true, + "tf.train.JobDef.HasExtension": true, + "tf.train.JobDef.HasField": true, + "tf.train.JobDef.IsInitialized": true, + "tf.train.JobDef.ListFields": true, + 
"tf.train.JobDef.MergeFrom": true, + "tf.train.JobDef.MergeFromString": true, + "tf.train.JobDef.ParseFromString": true, + "tf.train.JobDef.RegisterExtension": true, + "tf.train.JobDef.SerializePartialToString": true, + "tf.train.JobDef.SerializeToString": true, + "tf.train.JobDef.SetInParent": true, + "tf.train.JobDef.TasksEntry": false, + "tf.train.JobDef.TasksEntry.ByteSize": true, + "tf.train.JobDef.TasksEntry.Clear": true, + "tf.train.JobDef.TasksEntry.ClearExtension": true, + "tf.train.JobDef.TasksEntry.ClearField": true, + "tf.train.JobDef.TasksEntry.CopyFrom": true, + "tf.train.JobDef.TasksEntry.DESCRIPTOR": true, + "tf.train.JobDef.TasksEntry.DiscardUnknownFields": true, + "tf.train.JobDef.TasksEntry.Extensions": true, + "tf.train.JobDef.TasksEntry.FindInitializationErrors": true, + "tf.train.JobDef.TasksEntry.FromString": true, + "tf.train.JobDef.TasksEntry.HasExtension": true, + "tf.train.JobDef.TasksEntry.HasField": true, + "tf.train.JobDef.TasksEntry.IsInitialized": true, + "tf.train.JobDef.TasksEntry.ListFields": true, + "tf.train.JobDef.TasksEntry.MergeFrom": true, + "tf.train.JobDef.TasksEntry.MergeFromString": true, + "tf.train.JobDef.TasksEntry.ParseFromString": true, + "tf.train.JobDef.TasksEntry.RegisterExtension": true, + "tf.train.JobDef.TasksEntry.SerializePartialToString": true, + "tf.train.JobDef.TasksEntry.SerializeToString": true, + "tf.train.JobDef.TasksEntry.SetInParent": true, + "tf.train.JobDef.TasksEntry.UnknownFields": true, + "tf.train.JobDef.TasksEntry.WhichOneof": true, + "tf.train.JobDef.TasksEntry.__eq__": true, + "tf.train.JobDef.TasksEntry.__ge__": true, + "tf.train.JobDef.TasksEntry.__gt__": true, + "tf.train.JobDef.TasksEntry.__init__": true, + "tf.train.JobDef.TasksEntry.__le__": true, + "tf.train.JobDef.TasksEntry.__lt__": true, + "tf.train.JobDef.TasksEntry.__ne__": true, + "tf.train.JobDef.TasksEntry.__new__": true, + "tf.train.JobDef.TasksEntry.key": true, + "tf.train.JobDef.TasksEntry.value": true, + "tf.train.JobDef.UnknownFields": true, + "tf.train.JobDef.WhichOneof": true, + "tf.train.JobDef.__eq__": true, + "tf.train.JobDef.__ge__": true, + "tf.train.JobDef.__gt__": true, + "tf.train.JobDef.__init__": true, + "tf.train.JobDef.__le__": true, + "tf.train.JobDef.__lt__": true, + "tf.train.JobDef.__ne__": true, + "tf.train.JobDef.__new__": true, + "tf.train.JobDef.name": true, + "tf.train.JobDef.tasks": true, + "tf.train.SequenceExample": false, + "tf.train.SequenceExample.ByteSize": true, + "tf.train.SequenceExample.Clear": true, + "tf.train.SequenceExample.ClearExtension": true, + "tf.train.SequenceExample.ClearField": true, + "tf.train.SequenceExample.CopyFrom": true, + "tf.train.SequenceExample.DESCRIPTOR": true, + "tf.train.SequenceExample.DiscardUnknownFields": true, + "tf.train.SequenceExample.Extensions": true, + "tf.train.SequenceExample.FindInitializationErrors": true, + "tf.train.SequenceExample.FromString": true, + "tf.train.SequenceExample.HasExtension": true, + "tf.train.SequenceExample.HasField": true, + "tf.train.SequenceExample.IsInitialized": true, + "tf.train.SequenceExample.ListFields": true, + "tf.train.SequenceExample.MergeFrom": true, + "tf.train.SequenceExample.MergeFromString": true, + "tf.train.SequenceExample.ParseFromString": true, + "tf.train.SequenceExample.RegisterExtension": true, + "tf.train.SequenceExample.SerializePartialToString": true, + "tf.train.SequenceExample.SerializeToString": true, + "tf.train.SequenceExample.SetInParent": true, + "tf.train.SequenceExample.UnknownFields": true, + 
"tf.train.SequenceExample.WhichOneof": true, + "tf.train.SequenceExample.__eq__": true, + "tf.train.SequenceExample.__ge__": true, + "tf.train.SequenceExample.__gt__": true, + "tf.train.SequenceExample.__init__": true, + "tf.train.SequenceExample.__le__": true, + "tf.train.SequenceExample.__lt__": true, + "tf.train.SequenceExample.__ne__": true, + "tf.train.SequenceExample.__new__": true, + "tf.train.SequenceExample.context": true, + "tf.train.SequenceExample.feature_lists": true, + "tf.train.ServerDef": false, + "tf.train.ServerDef.ByteSize": true, + "tf.train.ServerDef.Clear": true, + "tf.train.ServerDef.ClearExtension": true, + "tf.train.ServerDef.ClearField": true, + "tf.train.ServerDef.CopyFrom": true, + "tf.train.ServerDef.DESCRIPTOR": true, + "tf.train.ServerDef.DiscardUnknownFields": true, + "tf.train.ServerDef.Extensions": true, + "tf.train.ServerDef.FindInitializationErrors": true, + "tf.train.ServerDef.FromString": true, + "tf.train.ServerDef.HasExtension": true, + "tf.train.ServerDef.HasField": true, + "tf.train.ServerDef.IsInitialized": true, + "tf.train.ServerDef.ListFields": true, + "tf.train.ServerDef.MergeFrom": true, + "tf.train.ServerDef.MergeFromString": true, + "tf.train.ServerDef.ParseFromString": true, + "tf.train.ServerDef.RegisterExtension": true, + "tf.train.ServerDef.SerializePartialToString": true, + "tf.train.ServerDef.SerializeToString": true, + "tf.train.ServerDef.SetInParent": true, + "tf.train.ServerDef.UnknownFields": true, + "tf.train.ServerDef.WhichOneof": true, + "tf.train.ServerDef.__eq__": true, + "tf.train.ServerDef.__ge__": true, + "tf.train.ServerDef.__gt__": true, + "tf.train.ServerDef.__init__": true, + "tf.train.ServerDef.__le__": true, + "tf.train.ServerDef.__lt__": true, + "tf.train.ServerDef.__ne__": true, + "tf.train.ServerDef.__new__": true, + "tf.train.ServerDef.cluster": true, + "tf.train.ServerDef.cluster_device_filters": true, + "tf.train.ServerDef.default_session_config": true, + "tf.train.ServerDef.job_name": true, + "tf.train.ServerDef.port": true, + "tf.train.ServerDef.protocol": true, + "tf.train.ServerDef.task_index": true, + "tf.train.checkpoints_iterator": false, + "tf.train.experimental": false, + "tf.train.experimental.DynamicLossScale": false, + "tf.train.experimental.DynamicLossScale.__call__": true, + "tf.train.experimental.DynamicLossScale.__eq__": true, + "tf.train.experimental.DynamicLossScale.__ge__": true, + "tf.train.experimental.DynamicLossScale.__gt__": true, + "tf.train.experimental.DynamicLossScale.__init__": true, + "tf.train.experimental.DynamicLossScale.__le__": true, + "tf.train.experimental.DynamicLossScale.__lt__": true, + "tf.train.experimental.DynamicLossScale.__ne__": true, + "tf.train.experimental.DynamicLossScale.__new__": true, + "tf.train.experimental.DynamicLossScale.from_config": true, + "tf.train.experimental.DynamicLossScale.get_config": true, + "tf.train.experimental.DynamicLossScale.increment_period": true, + "tf.train.experimental.DynamicLossScale.initial_loss_scale": true, + "tf.train.experimental.DynamicLossScale.multiplier": true, + "tf.train.experimental.DynamicLossScale.update": true, + "tf.train.experimental.FixedLossScale": false, + "tf.train.experimental.FixedLossScale.__call__": true, + "tf.train.experimental.FixedLossScale.__eq__": true, + "tf.train.experimental.FixedLossScale.__ge__": true, + "tf.train.experimental.FixedLossScale.__gt__": true, + "tf.train.experimental.FixedLossScale.__init__": true, + "tf.train.experimental.FixedLossScale.__le__": true, + 
"tf.train.experimental.FixedLossScale.__lt__": true, + "tf.train.experimental.FixedLossScale.__ne__": true, + "tf.train.experimental.FixedLossScale.__new__": true, + "tf.train.experimental.FixedLossScale.from_config": true, + "tf.train.experimental.FixedLossScale.get_config": true, + "tf.train.experimental.FixedLossScale.update": true, + "tf.train.experimental.LossScale": false, + "tf.train.experimental.LossScale.__call__": true, + "tf.train.experimental.LossScale.__eq__": true, + "tf.train.experimental.LossScale.__ge__": true, + "tf.train.experimental.LossScale.__gt__": true, + "tf.train.experimental.LossScale.__init__": true, + "tf.train.experimental.LossScale.__le__": true, + "tf.train.experimental.LossScale.__lt__": true, + "tf.train.experimental.LossScale.__ne__": true, + "tf.train.experimental.LossScale.__new__": true, + "tf.train.experimental.LossScale.from_config": true, + "tf.train.experimental.LossScale.get_config": true, + "tf.train.experimental.LossScale.update": true, + "tf.train.experimental.PythonState": false, + "tf.train.experimental.PythonState.__eq__": true, + "tf.train.experimental.PythonState.__ge__": true, + "tf.train.experimental.PythonState.__gt__": true, + "tf.train.experimental.PythonState.__init__": true, + "tf.train.experimental.PythonState.__le__": true, + "tf.train.experimental.PythonState.__lt__": true, + "tf.train.experimental.PythonState.__ne__": true, + "tf.train.experimental.PythonState.__new__": true, + "tf.train.experimental.PythonState.deserialize": true, + "tf.train.experimental.PythonState.serialize": true, + "tf.train.experimental.disable_mixed_precision_graph_rewrite": false, + "tf.train.experimental.enable_mixed_precision_graph_rewrite": false, + "tf.train.get_checkpoint_state": false, + "tf.train.latest_checkpoint": false, + "tf.train.list_variables": false, + "tf.train.load_checkpoint": false, + "tf.train.load_variable": false, + "tf.transpose": false, + "tf.truediv": false, + "tf.truncatediv": false, + "tf.truncatemod": false, + "tf.tuple": false, + "tf.uint16": true, + "tf.uint32": true, + "tf.uint64": true, + "tf.uint8": true, + "tf.unique": false, + "tf.unique_with_counts": false, + "tf.unravel_index": false, + "tf.unstack": false, + "tf.variable_creator_scope": false, + "tf.variant": true, + "tf.vectorized_map": false, + "tf.version": false, + "tf.version.COMPILER_VERSION": true, + "tf.version.GIT_VERSION": true, + "tf.version.GRAPH_DEF_VERSION": true, + "tf.version.GRAPH_DEF_VERSION_MIN_CONSUMER": true, + "tf.version.GRAPH_DEF_VERSION_MIN_PRODUCER": true, + "tf.version.VERSION": true, + "tf.where": false, + "tf.while_loop": false, + "tf.xla": false, + "tf.xla.experimental": false, + "tf.xla.experimental.compile": false, + "tf.xla.experimental.jit_scope": false, + "tf.zeros": false, + "tf.zeros_initializer": false, + "tf.zeros_initializer.__call__": true, + "tf.zeros_initializer.__eq__": true, + "tf.zeros_initializer.__ge__": true, + "tf.zeros_initializer.__gt__": true, + "tf.zeros_initializer.__init__": true, + "tf.zeros_initializer.__le__": true, + "tf.zeros_initializer.__lt__": true, + "tf.zeros_initializer.__ne__": true, + "tf.zeros_initializer.__new__": true, + "tf.zeros_initializer.from_config": true, + "tf.zeros_initializer.get_config": true, + "tf.zeros_like": false + }, + "py_module_names": [ + "tf" + ], + "site_link": null +} diff --git a/site/en/api_docs/python/tf/_redirects.yaml b/site/en/api_docs/python/tf/_redirects.yaml new file mode 100644 index 00000000000..22dc9ffb2a6 --- /dev/null +++ 
b/site/en/api_docs/python/tf/_redirects.yaml @@ -0,0 +1,7231 @@ +redirects: +- from: /tf/Assert + to: /tf/debugging/Assert +- from: /tf/DType + to: /tf/dtypes/DType +- from: /tf/RaggedTensor/__abs__ + to: /tf/math/abs +- from: /tf/RaggedTensor/__add__ + to: /tf/math/add +- from: /tf/RaggedTensor/__and__ + to: /tf/math/logical_and +- from: /tf/RaggedTensor/__floordiv__ + to: /tf/math/floordiv +- from: /tf/RaggedTensor/__ge__ + to: /tf/math/greater_equal +- from: /tf/RaggedTensor/__gt__ + to: /tf/math/greater +- from: /tf/RaggedTensor/__invert__ + to: /tf/math/logical_not +- from: /tf/RaggedTensor/__le__ + to: /tf/math/less_equal +- from: /tf/RaggedTensor/__lt__ + to: /tf/math/less +- from: /tf/RaggedTensor/__mod__ + to: /tf/math/floormod +- from: /tf/RaggedTensor/__mul__ + to: /tf/math/multiply +- from: /tf/RaggedTensor/__neg__ + to: /tf/math/negative +- from: /tf/RaggedTensor/__or__ + to: /tf/math/logical_or +- from: /tf/RaggedTensor/__pow__ + to: /tf/math/pow +- from: /tf/RaggedTensor/__sub__ + to: /tf/math/subtract +- from: /tf/RaggedTensor/__truediv__ + to: /tf/math/truediv +- from: /tf/RaggedTensor/__xor__ + to: /tf/math/logical_xor +- from: /tf/SparseTensor + to: /tf/sparse/SparseTensor +- from: /tf/Tensor/__abs__ + to: /tf/math/abs +- from: /tf/Tensor/__ge__ + to: /tf/math/greater_equal +- from: /tf/Tensor/__gt__ + to: /tf/math/greater +- from: /tf/Tensor/__invert__ + to: /tf/math/logical_not +- from: /tf/Tensor/__le__ + to: /tf/math/less_equal +- from: /tf/Tensor/__lt__ + to: /tf/math/less +- from: /tf/Tensor/__neg__ + to: /tf/math/negative +- from: /tf/abs + to: /tf/math/abs +- from: /tf/acos + to: /tf/math/acos +- from: /tf/acosh + to: /tf/math/acosh +- from: /tf/add + to: /tf/math/add +- from: /tf/add_n + to: /tf/math/add_n +- from: /tf/argmax + to: /tf/math/argmax +- from: /tf/argmin + to: /tf/math/argmin +- from: /tf/as_dtype + to: /tf/dtypes/as_dtype +- from: /tf/as_string + to: /tf/strings/as_string +- from: /tf/asin + to: /tf/math/asin +- from: /tf/asinh + to: /tf/math/asinh +- from: /tf/assert_equal + to: /tf/debugging/assert_equal +- from: /tf/assert_greater + to: /tf/debugging/assert_greater +- from: /tf/assert_less + to: /tf/debugging/assert_less +- from: /tf/assert_rank + to: /tf/debugging/assert_rank +- from: /tf/atan + to: /tf/math/atan +- from: /tf/atan2 + to: /tf/math/atan2 +- from: /tf/atanh + to: /tf/math/atanh +- from: /tf/autodiff/GradientTape + to: /tf/GradientTape +- from: /tf/compat/v1/AggregationMethod + to: /tf/AggregationMethod +- from: /tf/compat/v1/Assert + to: /tf/debugging/Assert +- from: /tf/compat/v1/CriticalSection + to: /tf/CriticalSection +- from: /tf/compat/v1/DType + to: /tf/dtypes/DType +- from: /tf/compat/v1/FIFOQueue + to: /tf/queue/FIFOQueue +- from: /tf/compat/v1/FixedLenFeature + to: /tf/io/FixedLenFeature +- from: /tf/compat/v1/FixedLenSequenceFeature + to: /tf/io/FixedLenSequenceFeature +- from: /tf/compat/v1/GradientTape + to: /tf/GradientTape +- from: /tf/compat/v1/Graph + to: /tf/Graph +- from: /tf/compat/v1/IndexedSlices + to: /tf/IndexedSlices +- from: /tf/compat/v1/IndexedSlicesSpec + to: /tf/IndexedSlicesSpec +- from: /tf/compat/v1/Module + to: /tf/Module +- from: /tf/compat/v1/NoGradient + to: /tf/no_gradient +- from: /tf/compat/v1/NotDifferentiable + to: /tf/no_gradient +- from: /tf/compat/v1/OpError + to: /tf/errors/OpError +- from: /tf/compat/v1/Operation + to: /tf/Operation +- from: /tf/compat/v1/OptionalSpec + to: /tf/OptionalSpec +- from: /tf/compat/v1/PaddingFIFOQueue + to: /tf/queue/PaddingFIFOQueue +- from: 
/tf/compat/v1/PriorityQueue + to: /tf/queue/PriorityQueue +- from: /tf/compat/v1/QueueBase + to: /tf/queue/QueueBase +- from: /tf/compat/v1/RaggedTensor + to: /tf/RaggedTensor +- from: /tf/compat/v1/RaggedTensor/__abs__ + to: /tf/math/abs +- from: /tf/compat/v1/RaggedTensor/__add__ + to: /tf/math/add +- from: /tf/compat/v1/RaggedTensor/__and__ + to: /tf/math/logical_and +- from: /tf/compat/v1/RaggedTensor/__floordiv__ + to: /tf/math/floordiv +- from: /tf/compat/v1/RaggedTensor/__ge__ + to: /tf/math/greater_equal +- from: /tf/compat/v1/RaggedTensor/__gt__ + to: /tf/math/greater +- from: /tf/compat/v1/RaggedTensor/__invert__ + to: /tf/math/logical_not +- from: /tf/compat/v1/RaggedTensor/__le__ + to: /tf/math/less_equal +- from: /tf/compat/v1/RaggedTensor/__lt__ + to: /tf/math/less +- from: /tf/compat/v1/RaggedTensor/__mod__ + to: /tf/math/floormod +- from: /tf/compat/v1/RaggedTensor/__mul__ + to: /tf/math/multiply +- from: /tf/compat/v1/RaggedTensor/__neg__ + to: /tf/math/negative +- from: /tf/compat/v1/RaggedTensor/__or__ + to: /tf/math/logical_or +- from: /tf/compat/v1/RaggedTensor/__pow__ + to: /tf/math/pow +- from: /tf/compat/v1/RaggedTensor/__sub__ + to: /tf/math/subtract +- from: /tf/compat/v1/RaggedTensor/__truediv__ + to: /tf/math/truediv +- from: /tf/compat/v1/RaggedTensor/__xor__ + to: /tf/math/logical_xor +- from: /tf/compat/v1/RaggedTensorSpec + to: /tf/RaggedTensorSpec +- from: /tf/compat/v1/RandomShuffleQueue + to: /tf/queue/RandomShuffleQueue +- from: /tf/compat/v1/RegisterGradient + to: /tf/RegisterGradient +- from: /tf/compat/v1/SparseFeature + to: /tf/io/SparseFeature +- from: /tf/compat/v1/SparseTensor + to: /tf/sparse/SparseTensor +- from: /tf/compat/v1/SparseTensorSpec + to: /tf/SparseTensorSpec +- from: /tf/compat/v1/Tensor + to: /tf/Tensor +- from: /tf/compat/v1/Tensor/__abs__ + to: /tf/math/abs +- from: /tf/compat/v1/Tensor/__ge__ + to: /tf/math/greater_equal +- from: /tf/compat/v1/Tensor/__gt__ + to: /tf/math/greater +- from: /tf/compat/v1/Tensor/__invert__ + to: /tf/math/logical_not +- from: /tf/compat/v1/Tensor/__le__ + to: /tf/math/less_equal +- from: /tf/compat/v1/Tensor/__lt__ + to: /tf/math/less +- from: /tf/compat/v1/Tensor/__neg__ + to: /tf/math/negative +- from: /tf/compat/v1/TensorArray + to: /tf/TensorArray +- from: /tf/compat/v1/TensorArraySpec + to: /tf/TensorArraySpec +- from: /tf/compat/v1/TensorShape + to: /tf/TensorShape +- from: /tf/compat/v1/TensorSpec + to: /tf/TensorSpec +- from: /tf/compat/v1/TypeSpec + to: /tf/TypeSpec +- from: /tf/compat/v1/UnconnectedGradients + to: /tf/UnconnectedGradients +- from: /tf/compat/v1/VarLenFeature + to: /tf/io/VarLenFeature +- from: /tf/compat/v1/Variable/SaveSliceInfo + to: /tf/Variable/SaveSliceInfo +- from: /tf/compat/v1/VariableSynchronization + to: /tf/VariableSynchronization +- from: /tf/compat/v1/abs + to: /tf/math/abs +- from: /tf/compat/v1/accumulate_n + to: /tf/math/accumulate_n +- from: /tf/compat/v1/acos + to: /tf/math/acos +- from: /tf/compat/v1/acosh + to: /tf/math/acosh +- from: /tf/compat/v1/add + to: /tf/math/add +- from: /tf/compat/v1/add_n + to: /tf/math/add_n +- from: /tf/compat/v1/angle + to: /tf/math/angle +- from: /tf/compat/v1/app/flags + to: /tf/compat/v1/flags +- from: /tf/compat/v1/app/flags/ArgumentParser + to: /tf/compat/v1/flags/ArgumentParser +- from: /tf/compat/v1/app/flags/ArgumentSerializer + to: /tf/compat/v1/flags/ArgumentSerializer +- from: /tf/compat/v1/app/flags/BaseListParser + to: /tf/compat/v1/flags/BaseListParser +- from: /tf/compat/v1/app/flags/BooleanFlag + to: 
/tf/compat/v1/flags/BooleanFlag +- from: /tf/compat/v1/app/flags/BooleanParser + to: /tf/compat/v1/flags/BooleanParser +- from: /tf/compat/v1/app/flags/CantOpenFlagFileError + to: /tf/compat/v1/flags/CantOpenFlagFileError +- from: /tf/compat/v1/app/flags/CsvListSerializer + to: /tf/compat/v1/flags/CsvListSerializer +- from: /tf/compat/v1/app/flags/DEFINE + to: /tf/compat/v1/flags/DEFINE +- from: /tf/compat/v1/app/flags/DEFINE_alias + to: /tf/compat/v1/flags/DEFINE_alias +- from: /tf/compat/v1/app/flags/DEFINE_bool + to: /tf/compat/v1/flags/DEFINE_bool +- from: /tf/compat/v1/app/flags/DEFINE_boolean + to: /tf/compat/v1/flags/DEFINE_bool +- from: /tf/compat/v1/app/flags/DEFINE_enum + to: /tf/compat/v1/flags/DEFINE_enum +- from: /tf/compat/v1/app/flags/DEFINE_enum_class + to: /tf/compat/v1/flags/DEFINE_enum_class +- from: /tf/compat/v1/app/flags/DEFINE_flag + to: /tf/compat/v1/flags/DEFINE_flag +- from: /tf/compat/v1/app/flags/DEFINE_float + to: /tf/compat/v1/flags/DEFINE_float +- from: /tf/compat/v1/app/flags/DEFINE_integer + to: /tf/compat/v1/flags/DEFINE_integer +- from: /tf/compat/v1/app/flags/DEFINE_list + to: /tf/compat/v1/flags/DEFINE_list +- from: /tf/compat/v1/app/flags/DEFINE_multi + to: /tf/compat/v1/flags/DEFINE_multi +- from: /tf/compat/v1/app/flags/DEFINE_multi_enum + to: /tf/compat/v1/flags/DEFINE_multi_enum +- from: /tf/compat/v1/app/flags/DEFINE_multi_enum_class + to: /tf/compat/v1/flags/DEFINE_multi_enum_class +- from: /tf/compat/v1/app/flags/DEFINE_multi_float + to: /tf/compat/v1/flags/DEFINE_multi_float +- from: /tf/compat/v1/app/flags/DEFINE_multi_integer + to: /tf/compat/v1/flags/DEFINE_multi_integer +- from: /tf/compat/v1/app/flags/DEFINE_multi_string + to: /tf/compat/v1/flags/DEFINE_multi_string +- from: /tf/compat/v1/app/flags/DEFINE_spaceseplist + to: /tf/compat/v1/flags/DEFINE_spaceseplist +- from: /tf/compat/v1/app/flags/DEFINE_string + to: /tf/compat/v1/flags/DEFINE_string +- from: /tf/compat/v1/app/flags/DuplicateFlagError + to: /tf/compat/v1/flags/DuplicateFlagError +- from: /tf/compat/v1/app/flags/EnumClassFlag + to: /tf/compat/v1/flags/EnumClassFlag +- from: /tf/compat/v1/app/flags/EnumClassParser + to: /tf/compat/v1/flags/EnumClassParser +- from: /tf/compat/v1/app/flags/EnumFlag + to: /tf/compat/v1/flags/EnumFlag +- from: /tf/compat/v1/app/flags/EnumParser + to: /tf/compat/v1/flags/EnumParser +- from: /tf/compat/v1/app/flags/Error + to: /tf/compat/v1/flags/Error +- from: /tf/compat/v1/app/flags/FLAGS + to: /tf/compat/v1/flags/FLAGS +- from: /tf/compat/v1/app/flags/Flag + to: /tf/compat/v1/flags/Flag +- from: /tf/compat/v1/app/flags/FlagNameConflictsWithMethodError + to: /tf/compat/v1/flags/FlagNameConflictsWithMethodError +- from: /tf/compat/v1/app/flags/FlagValues + to: /tf/compat/v1/flags/FlagValues +- from: /tf/compat/v1/app/flags/FloatParser + to: /tf/compat/v1/flags/FloatParser +- from: /tf/compat/v1/app/flags/IllegalFlagValueError + to: /tf/compat/v1/flags/IllegalFlagValueError +- from: /tf/compat/v1/app/flags/IntegerParser + to: /tf/compat/v1/flags/IntegerParser +- from: /tf/compat/v1/app/flags/ListParser + to: /tf/compat/v1/flags/ListParser +- from: /tf/compat/v1/app/flags/ListSerializer + to: /tf/compat/v1/flags/ListSerializer +- from: /tf/compat/v1/app/flags/MultiEnumClassFlag + to: /tf/compat/v1/flags/MultiEnumClassFlag +- from: /tf/compat/v1/app/flags/MultiFlag + to: /tf/compat/v1/flags/MultiFlag +- from: /tf/compat/v1/app/flags/UnparsedFlagAccessError + to: /tf/compat/v1/flags/UnparsedFlagAccessError +- from: 
/tf/compat/v1/app/flags/UnrecognizedFlagError + to: /tf/compat/v1/flags/UnrecognizedFlagError +- from: /tf/compat/v1/app/flags/ValidationError + to: /tf/compat/v1/flags/ValidationError +- from: /tf/compat/v1/app/flags/WhitespaceSeparatedListParser + to: /tf/compat/v1/flags/WhitespaceSeparatedListParser +- from: /tf/compat/v1/app/flags/adopt_module_key_flags + to: /tf/compat/v1/flags/adopt_module_key_flags +- from: /tf/compat/v1/app/flags/declare_key_flag + to: /tf/compat/v1/flags/declare_key_flag +- from: /tf/compat/v1/app/flags/disclaim_key_flags + to: /tf/compat/v1/flags/disclaim_key_flags +- from: /tf/compat/v1/app/flags/doc_to_help + to: /tf/compat/v1/flags/doc_to_help +- from: /tf/compat/v1/app/flags/flag_dict_to_args + to: /tf/compat/v1/flags/flag_dict_to_args +- from: /tf/compat/v1/app/flags/get_help_width + to: /tf/compat/v1/flags/get_help_width +- from: /tf/compat/v1/app/flags/mark_bool_flags_as_mutual_exclusive + to: /tf/compat/v1/flags/mark_bool_flags_as_mutual_exclusive +- from: /tf/compat/v1/app/flags/mark_flag_as_required + to: /tf/compat/v1/flags/mark_flag_as_required +- from: /tf/compat/v1/app/flags/mark_flags_as_mutual_exclusive + to: /tf/compat/v1/flags/mark_flags_as_mutual_exclusive +- from: /tf/compat/v1/app/flags/mark_flags_as_required + to: /tf/compat/v1/flags/mark_flags_as_required +- from: /tf/compat/v1/app/flags/multi_flags_validator + to: /tf/compat/v1/flags/multi_flags_validator +- from: /tf/compat/v1/app/flags/register_multi_flags_validator + to: /tf/compat/v1/flags/register_multi_flags_validator +- from: /tf/compat/v1/app/flags/register_validator + to: /tf/compat/v1/flags/register_validator +- from: /tf/compat/v1/app/flags/text_wrap + to: /tf/compat/v1/flags/text_wrap +- from: /tf/compat/v1/app/flags/tf_decorator + to: /tf/compat/v1/flags/tf_decorator +- from: /tf/compat/v1/app/flags/tf_decorator/TFDecorator + to: /tf/compat/v1/flags/tf_decorator/TFDecorator +- from: /tf/compat/v1/app/flags/tf_decorator/make_decorator + to: /tf/compat/v1/flags/tf_decorator/make_decorator +- from: /tf/compat/v1/app/flags/tf_decorator/rewrap + to: /tf/compat/v1/flags/tf_decorator/rewrap +- from: /tf/compat/v1/app/flags/tf_decorator/tf_stack + to: /tf/compat/v1/flags/tf_decorator/tf_stack +- from: /tf/compat/v1/app/flags/tf_decorator/tf_stack/CurrentModuleFilter + to: /tf/compat/v1/flags/tf_decorator/tf_stack/CurrentModuleFilter +- from: /tf/compat/v1/app/flags/tf_decorator/tf_stack/FrameSummary + to: /tf/compat/v1/flags/tf_decorator/tf_stack/FrameSummary +- from: /tf/compat/v1/app/flags/tf_decorator/tf_stack/StackSummary + to: /tf/compat/v1/flags/tf_decorator/tf_stack/StackSummary +- from: /tf/compat/v1/app/flags/tf_decorator/tf_stack/StackTraceFilter + to: /tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceFilter +- from: /tf/compat/v1/app/flags/tf_decorator/tf_stack/StackTraceMapper + to: /tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceMapper +- from: /tf/compat/v1/app/flags/tf_decorator/tf_stack/StackTraceTransform + to: /tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceTransform +- from: /tf/compat/v1/app/flags/tf_decorator/tf_stack/extract_stack + to: /tf/compat/v1/flags/tf_decorator/tf_stack/extract_stack +- from: /tf/compat/v1/app/flags/tf_decorator/unwrap + to: /tf/compat/v1/flags/tf_decorator/unwrap +- from: /tf/compat/v1/app/flags/validator + to: /tf/compat/v1/flags/validator +- from: /tf/compat/v1/argsort + to: /tf/argsort +- from: /tf/compat/v1/as_dtype + to: /tf/dtypes/as_dtype +- from: /tf/compat/v1/as_string + to: /tf/strings/as_string +- from: 
/tf/compat/v1/asin + to: /tf/math/asin +- from: /tf/compat/v1/asinh + to: /tf/math/asinh +- from: /tf/compat/v1/assert_proper_iterable + to: /tf/debugging/assert_proper_iterable +- from: /tf/compat/v1/assert_same_float_dtype + to: /tf/debugging/assert_same_float_dtype +- from: /tf/compat/v1/atan + to: /tf/math/atan +- from: /tf/compat/v1/atan2 + to: /tf/math/atan2 +- from: /tf/compat/v1/atanh + to: /tf/math/atanh +- from: /tf/compat/v1/audio/decode_wav + to: /tf/audio/decode_wav +- from: /tf/compat/v1/audio/encode_wav + to: /tf/audio/encode_wav +- from: /tf/compat/v1/autograph/experimental/Feature + to: /tf/autograph/experimental/Feature +- from: /tf/compat/v1/autograph/experimental/do_not_convert + to: /tf/autograph/experimental/do_not_convert +- from: /tf/compat/v1/autograph/experimental/set_loop_options + to: /tf/autograph/experimental/set_loop_options +- from: /tf/compat/v1/autograph/set_verbosity + to: /tf/autograph/set_verbosity +- from: /tf/compat/v1/autograph/trace + to: /tf/autograph/trace +- from: /tf/compat/v1/betainc + to: /tf/math/betainc +- from: /tf/compat/v1/bitcast + to: /tf/bitcast +- from: /tf/compat/v1/bitwise/bitwise_and + to: /tf/bitwise/bitwise_and +- from: /tf/compat/v1/bitwise/bitwise_or + to: /tf/bitwise/bitwise_or +- from: /tf/compat/v1/bitwise/bitwise_xor + to: /tf/bitwise/bitwise_xor +- from: /tf/compat/v1/bitwise/invert + to: /tf/bitwise/invert +- from: /tf/compat/v1/bitwise/left_shift + to: /tf/bitwise/left_shift +- from: /tf/compat/v1/bitwise/right_shift + to: /tf/bitwise/right_shift +- from: /tf/compat/v1/broadcast_dynamic_shape + to: /tf/broadcast_dynamic_shape +- from: /tf/compat/v1/broadcast_static_shape + to: /tf/broadcast_static_shape +- from: /tf/compat/v1/broadcast_to + to: /tf/broadcast_to +- from: /tf/compat/v1/cast + to: /tf/cast +- from: /tf/compat/v1/ceil + to: /tf/math/ceil +- from: /tf/compat/v1/check_numerics + to: /tf/debugging/check_numerics +- from: /tf/compat/v1/cholesky + to: /tf/linalg/cholesky +- from: /tf/compat/v1/cholesky_solve + to: /tf/linalg/cholesky_solve +- from: /tf/compat/v1/clip_by_global_norm + to: /tf/clip_by_global_norm +- from: /tf/compat/v1/clip_by_norm + to: /tf/clip_by_norm +- from: /tf/compat/v1/clip_by_value + to: /tf/clip_by_value +- from: /tf/compat/v1/compat/as_bytes + to: /tf/compat/as_bytes +- from: /tf/compat/v1/compat/as_str + to: /tf/compat/as_str +- from: /tf/compat/v1/compat/as_str_any + to: /tf/compat/as_str_any +- from: /tf/compat/v1/compat/as_text + to: /tf/compat/as_text +- from: /tf/compat/v1/compat/dimension_at_index + to: /tf/compat/dimension_at_index +- from: /tf/compat/v1/compat/dimension_value + to: /tf/compat/dimension_value +- from: /tf/compat/v1/compat/forward_compatibility_horizon + to: /tf/compat/forward_compatibility_horizon +- from: /tf/compat/v1/compat/forward_compatible + to: /tf/compat/forward_compatible +- from: /tf/compat/v1/compat/path_to_str + to: /tf/compat/path_to_str +- from: /tf/compat/v1/complex + to: /tf/dtypes/complex +- from: /tf/compat/v1/concat + to: /tf/concat +- from: /tf/compat/v1/config/LogicalDevice + to: /tf/config/LogicalDevice +- from: /tf/compat/v1/config/LogicalDeviceConfiguration + to: /tf/config/LogicalDeviceConfiguration +- from: /tf/compat/v1/config/PhysicalDevice + to: /tf/config/PhysicalDevice +- from: /tf/compat/v1/config/experimental/ClusterDeviceFilters + to: /tf/config/experimental/ClusterDeviceFilters +- from: /tf/compat/v1/config/experimental/VirtualDeviceConfiguration + to: /tf/config/LogicalDeviceConfiguration +- from: 
/tf/compat/v1/config/experimental/disable_mlir_bridge + to: /tf/config/experimental/disable_mlir_bridge +- from: /tf/compat/v1/config/experimental/enable_mlir_bridge + to: /tf/config/experimental/enable_mlir_bridge +- from: /tf/compat/v1/config/experimental/get_device_policy + to: /tf/config/experimental/get_device_policy +- from: /tf/compat/v1/config/experimental/get_memory_growth + to: /tf/config/experimental/get_memory_growth +- from: /tf/compat/v1/config/experimental/get_synchronous_execution + to: /tf/config/experimental/get_synchronous_execution +- from: /tf/compat/v1/config/experimental/get_virtual_device_configuration + to: /tf/config/get_logical_device_configuration +- from: /tf/compat/v1/config/experimental/get_visible_devices + to: /tf/config/get_visible_devices +- from: /tf/compat/v1/config/experimental/list_logical_devices + to: /tf/config/list_logical_devices +- from: /tf/compat/v1/config/experimental/list_physical_devices + to: /tf/config/list_physical_devices +- from: /tf/compat/v1/config/experimental/set_device_policy + to: /tf/config/experimental/set_device_policy +- from: /tf/compat/v1/config/experimental/set_memory_growth + to: /tf/config/experimental/set_memory_growth +- from: /tf/compat/v1/config/experimental/set_synchronous_execution + to: /tf/config/experimental/set_synchronous_execution +- from: /tf/compat/v1/config/experimental/set_virtual_device_configuration + to: /tf/config/set_logical_device_configuration +- from: /tf/compat/v1/config/experimental/set_visible_devices + to: /tf/config/set_visible_devices +- from: /tf/compat/v1/config/experimental_connect_to_cluster + to: /tf/config/experimental_connect_to_cluster +- from: /tf/compat/v1/config/experimental_connect_to_host + to: /tf/config/experimental_connect_to_host +- from: /tf/compat/v1/config/experimental_functions_run_eagerly + to: /tf/config/experimental_functions_run_eagerly +- from: /tf/compat/v1/config/experimental_run_functions_eagerly + to: /tf/config/experimental_run_functions_eagerly +- from: /tf/compat/v1/config/get_logical_device_configuration + to: /tf/config/get_logical_device_configuration +- from: /tf/compat/v1/config/get_soft_device_placement + to: /tf/config/get_soft_device_placement +- from: /tf/compat/v1/config/get_visible_devices + to: /tf/config/get_visible_devices +- from: /tf/compat/v1/config/list_logical_devices + to: /tf/config/list_logical_devices +- from: /tf/compat/v1/config/list_physical_devices + to: /tf/config/list_physical_devices +- from: /tf/compat/v1/config/optimizer/get_experimental_options + to: /tf/config/optimizer/get_experimental_options +- from: /tf/compat/v1/config/optimizer/get_jit + to: /tf/config/optimizer/get_jit +- from: /tf/compat/v1/config/optimizer/set_experimental_options + to: /tf/config/optimizer/set_experimental_options +- from: /tf/compat/v1/config/optimizer/set_jit + to: /tf/config/optimizer/set_jit +- from: /tf/compat/v1/config/set_logical_device_configuration + to: /tf/config/set_logical_device_configuration +- from: /tf/compat/v1/config/set_soft_device_placement + to: /tf/config/set_soft_device_placement +- from: /tf/compat/v1/config/set_visible_devices + to: /tf/config/set_visible_devices +- from: /tf/compat/v1/config/threading/get_inter_op_parallelism_threads + to: /tf/config/threading/get_inter_op_parallelism_threads +- from: /tf/compat/v1/config/threading/get_intra_op_parallelism_threads + to: /tf/config/threading/get_intra_op_parallelism_threads +- from: /tf/compat/v1/config/threading/set_inter_op_parallelism_threads + to: 
/tf/config/threading/set_inter_op_parallelism_threads +- from: /tf/compat/v1/config/threading/set_intra_op_parallelism_threads + to: /tf/config/threading/set_intra_op_parallelism_threads +- from: /tf/compat/v1/conj + to: /tf/math/conj +- from: /tf/compat/v1/constant_initializer + to: /tf/compat/v1/keras/initializers/Constant +- from: /tf/compat/v1/control_dependencies + to: /tf/control_dependencies +- from: /tf/compat/v1/cos + to: /tf/math/cos +- from: /tf/compat/v1/cosh + to: /tf/math/cosh +- from: /tf/compat/v1/cross + to: /tf/linalg/cross +- from: /tf/compat/v1/cumprod + to: /tf/math/cumprod +- from: /tf/compat/v1/cumsum + to: /tf/math/cumsum +- from: /tf/compat/v1/custom_gradient + to: /tf/custom_gradient +- from: /tf/compat/v1/data/DatasetSpec + to: /tf/data/DatasetSpec +- from: /tf/compat/v1/data/Options + to: /tf/data/Options +- from: /tf/compat/v1/data/experimental/AutoShardPolicy + to: /tf/data/experimental/AutoShardPolicy +- from: /tf/compat/v1/data/experimental/CheckpointInputPipelineHook + to: /tf/data/experimental/CheckpointInputPipelineHook +- from: /tf/compat/v1/data/experimental/DatasetStructure + to: /tf/data/DatasetSpec +- from: /tf/compat/v1/data/experimental/DistributeOptions + to: /tf/data/experimental/DistributeOptions +- from: /tf/compat/v1/data/experimental/MapVectorizationOptions + to: /tf/data/experimental/MapVectorizationOptions +- from: /tf/compat/v1/data/experimental/OptimizationOptions + to: /tf/data/experimental/OptimizationOptions +- from: /tf/compat/v1/data/experimental/Optional + to: /tf/data/experimental/Optional +- from: /tf/compat/v1/data/experimental/OptionalStructure + to: /tf/OptionalSpec +- from: /tf/compat/v1/data/experimental/Reducer + to: /tf/data/experimental/Reducer +- from: /tf/compat/v1/data/experimental/StatsOptions + to: /tf/data/experimental/StatsOptions +- from: /tf/compat/v1/data/experimental/Structure + to: /tf/TypeSpec +- from: /tf/compat/v1/data/experimental/TFRecordWriter + to: /tf/data/experimental/TFRecordWriter +- from: /tf/compat/v1/data/experimental/ThreadingOptions + to: /tf/data/experimental/ThreadingOptions +- from: /tf/compat/v1/data/experimental/assert_cardinality + to: /tf/data/experimental/assert_cardinality +- from: /tf/compat/v1/data/experimental/bucket_by_sequence_length + to: /tf/data/experimental/bucket_by_sequence_length +- from: /tf/compat/v1/data/experimental/bytes_produced_stats + to: /tf/data/experimental/bytes_produced_stats +- from: /tf/compat/v1/data/experimental/cardinality + to: /tf/data/experimental/cardinality +- from: /tf/compat/v1/data/experimental/copy_to_device + to: /tf/data/experimental/copy_to_device +- from: /tf/compat/v1/data/experimental/dense_to_ragged_batch + to: /tf/data/experimental/dense_to_ragged_batch +- from: /tf/compat/v1/data/experimental/dense_to_sparse_batch + to: /tf/data/experimental/dense_to_sparse_batch +- from: /tf/compat/v1/data/experimental/enumerate_dataset + to: /tf/data/experimental/enumerate_dataset +- from: /tf/compat/v1/data/experimental/from_variant + to: /tf/data/experimental/from_variant +- from: /tf/compat/v1/data/experimental/get_next_as_optional + to: /tf/data/experimental/get_next_as_optional +- from: /tf/compat/v1/data/experimental/get_single_element + to: /tf/data/experimental/get_single_element +- from: /tf/compat/v1/data/experimental/get_structure + to: /tf/data/experimental/get_structure +- from: /tf/compat/v1/data/experimental/group_by_reducer + to: /tf/data/experimental/group_by_reducer +- from: /tf/compat/v1/data/experimental/group_by_window + to: 
/tf/data/experimental/group_by_window +- from: /tf/compat/v1/data/experimental/ignore_errors + to: /tf/data/experimental/ignore_errors +- from: /tf/compat/v1/data/experimental/latency_stats + to: /tf/data/experimental/latency_stats +- from: /tf/compat/v1/data/experimental/make_saveable_from_iterator + to: /tf/data/experimental/make_saveable_from_iterator +- from: /tf/compat/v1/data/experimental/map_and_batch + to: /tf/data/experimental/map_and_batch +- from: /tf/compat/v1/data/experimental/parallel_interleave + to: /tf/data/experimental/parallel_interleave +- from: /tf/compat/v1/data/experimental/parse_example_dataset + to: /tf/data/experimental/parse_example_dataset +- from: /tf/compat/v1/data/experimental/prefetch_to_device + to: /tf/data/experimental/prefetch_to_device +- from: /tf/compat/v1/data/experimental/rejection_resample + to: /tf/data/experimental/rejection_resample +- from: /tf/compat/v1/data/experimental/scan + to: /tf/data/experimental/scan +- from: /tf/compat/v1/data/experimental/shuffle_and_repeat + to: /tf/data/experimental/shuffle_and_repeat +- from: /tf/compat/v1/data/experimental/take_while + to: /tf/data/experimental/take_while +- from: /tf/compat/v1/data/experimental/to_variant + to: /tf/data/experimental/to_variant +- from: /tf/compat/v1/data/experimental/unbatch + to: /tf/data/experimental/unbatch +- from: /tf/compat/v1/data/experimental/unique + to: /tf/data/experimental/unique +- from: /tf/compat/v1/debugging/Assert + to: /tf/debugging/Assert +- from: /tf/compat/v1/debugging/assert_all_finite + to: /tf/compat/v1/verify_tensor_all_finite +- from: /tf/compat/v1/debugging/assert_equal + to: /tf/compat/v1/assert_equal +- from: /tf/compat/v1/debugging/assert_greater + to: /tf/compat/v1/assert_greater +- from: /tf/compat/v1/debugging/assert_greater_equal + to: /tf/compat/v1/assert_greater_equal +- from: /tf/compat/v1/debugging/assert_integer + to: /tf/compat/v1/assert_integer +- from: /tf/compat/v1/debugging/assert_less + to: /tf/compat/v1/assert_less +- from: /tf/compat/v1/debugging/assert_less_equal + to: /tf/compat/v1/assert_less_equal +- from: /tf/compat/v1/debugging/assert_near + to: /tf/compat/v1/assert_near +- from: /tf/compat/v1/debugging/assert_negative + to: /tf/compat/v1/assert_negative +- from: /tf/compat/v1/debugging/assert_non_negative + to: /tf/compat/v1/assert_non_negative +- from: /tf/compat/v1/debugging/assert_non_positive + to: /tf/compat/v1/assert_non_positive +- from: /tf/compat/v1/debugging/assert_none_equal + to: /tf/compat/v1/assert_none_equal +- from: /tf/compat/v1/debugging/assert_positive + to: /tf/compat/v1/assert_positive +- from: /tf/compat/v1/debugging/assert_proper_iterable + to: /tf/debugging/assert_proper_iterable +- from: /tf/compat/v1/debugging/assert_rank + to: /tf/compat/v1/assert_rank +- from: /tf/compat/v1/debugging/assert_rank_at_least + to: /tf/compat/v1/assert_rank_at_least +- from: /tf/compat/v1/debugging/assert_rank_in + to: /tf/compat/v1/assert_rank_in +- from: /tf/compat/v1/debugging/assert_same_float_dtype + to: /tf/debugging/assert_same_float_dtype +- from: /tf/compat/v1/debugging/assert_scalar + to: /tf/compat/v1/assert_scalar +- from: /tf/compat/v1/debugging/assert_type + to: /tf/compat/v1/assert_type +- from: /tf/compat/v1/debugging/check_numerics + to: /tf/debugging/check_numerics +- from: /tf/compat/v1/debugging/disable_check_numerics + to: /tf/debugging/disable_check_numerics +- from: /tf/compat/v1/debugging/enable_check_numerics + to: /tf/debugging/enable_check_numerics +- from: 
/tf/compat/v1/debugging/experimental/disable_dump_debug_info + to: /tf/debugging/experimental/disable_dump_debug_info +- from: /tf/compat/v1/debugging/experimental/enable_dump_debug_info + to: /tf/debugging/experimental/enable_dump_debug_info +- from: /tf/compat/v1/debugging/get_log_device_placement + to: /tf/debugging/get_log_device_placement +- from: /tf/compat/v1/debugging/is_finite + to: /tf/math/is_finite +- from: /tf/compat/v1/debugging/is_inf + to: /tf/math/is_inf +- from: /tf/compat/v1/debugging/is_nan + to: /tf/math/is_nan +- from: /tf/compat/v1/debugging/is_non_decreasing + to: /tf/math/is_non_decreasing +- from: /tf/compat/v1/debugging/is_numeric_tensor + to: /tf/debugging/is_numeric_tensor +- from: /tf/compat/v1/debugging/is_strictly_increasing + to: /tf/math/is_strictly_increasing +- from: /tf/compat/v1/debugging/set_log_device_placement + to: /tf/debugging/set_log_device_placement +- from: /tf/compat/v1/decode_base64 + to: /tf/io/decode_base64 +- from: /tf/compat/v1/decode_compressed + to: /tf/io/decode_compressed +- from: /tf/compat/v1/decode_json_example + to: /tf/io/decode_json_example +- from: /tf/compat/v1/dequantize + to: /tf/quantization/dequantize +- from: /tf/compat/v1/deserialize_many_sparse + to: /tf/io/deserialize_many_sparse +- from: /tf/compat/v1/diag + to: /tf/linalg/tensor_diag +- from: /tf/compat/v1/diag_part + to: /tf/linalg/tensor_diag_part +- from: /tf/compat/v1/digamma + to: /tf/math/digamma +- from: /tf/compat/v1/dimension_at_index + to: /tf/compat/dimension_at_index +- from: /tf/compat/v1/dimension_value + to: /tf/compat/dimension_value +- from: /tf/compat/v1/distribute/CrossDeviceOps + to: /tf/distribute/CrossDeviceOps +- from: /tf/compat/v1/distribute/HierarchicalCopyAllReduce + to: /tf/distribute/HierarchicalCopyAllReduce +- from: /tf/compat/v1/distribute/InputContext + to: /tf/distribute/InputContext +- from: /tf/compat/v1/distribute/InputReplicationMode + to: /tf/distribute/InputReplicationMode +- from: /tf/compat/v1/distribute/NcclAllReduce + to: /tf/distribute/NcclAllReduce +- from: /tf/compat/v1/distribute/ReduceOp + to: /tf/distribute/ReduceOp +- from: /tf/compat/v1/distribute/ReductionToOneDevice + to: /tf/distribute/ReductionToOneDevice +- from: /tf/compat/v1/distribute/ReplicaContext + to: /tf/distribute/ReplicaContext +- from: /tf/compat/v1/distribute/RunOptions + to: /tf/distribute/RunOptions +- from: /tf/compat/v1/distribute/Server + to: /tf/distribute/Server +- from: /tf/compat/v1/distribute/cluster_resolver/ClusterResolver + to: /tf/distribute/cluster_resolver/ClusterResolver +- from: /tf/compat/v1/distribute/cluster_resolver/GCEClusterResolver + to: /tf/distribute/cluster_resolver/GCEClusterResolver +- from: /tf/compat/v1/distribute/cluster_resolver/KubernetesClusterResolver + to: /tf/distribute/cluster_resolver/KubernetesClusterResolver +- from: /tf/compat/v1/distribute/cluster_resolver/SimpleClusterResolver + to: /tf/distribute/cluster_resolver/SimpleClusterResolver +- from: /tf/compat/v1/distribute/cluster_resolver/SlurmClusterResolver + to: /tf/distribute/cluster_resolver/SlurmClusterResolver +- from: /tf/compat/v1/distribute/cluster_resolver/TFConfigClusterResolver + to: /tf/distribute/cluster_resolver/TFConfigClusterResolver +- from: /tf/compat/v1/distribute/cluster_resolver/TPUClusterResolver + to: /tf/distribute/cluster_resolver/TPUClusterResolver +- from: /tf/compat/v1/distribute/cluster_resolver/UnionResolver + to: /tf/distribute/cluster_resolver/UnionResolver +- from: 
/tf/compat/v1/distribute/experimental/CollectiveCommunication + to: /tf/distribute/experimental/CollectiveCommunication +- from: /tf/compat/v1/distribute/experimental/CollectiveHints + to: /tf/distribute/experimental/CollectiveHints +- from: /tf/compat/v1/distribute/experimental_set_strategy + to: /tf/distribute/experimental_set_strategy +- from: /tf/compat/v1/distribute/get_replica_context + to: /tf/distribute/get_replica_context +- from: /tf/compat/v1/distribute/get_strategy + to: /tf/distribute/get_strategy +- from: /tf/compat/v1/distribute/has_strategy + to: /tf/distribute/has_strategy +- from: /tf/compat/v1/distribute/in_cross_replica_context + to: /tf/distribute/in_cross_replica_context +- from: /tf/compat/v1/div_no_nan + to: /tf/math/divide_no_nan +- from: /tf/compat/v1/divide + to: /tf/math/divide +- from: /tf/compat/v1/dtypes/DType + to: /tf/dtypes/DType +- from: /tf/compat/v1/dtypes/as_dtype + to: /tf/dtypes/as_dtype +- from: /tf/compat/v1/dtypes/as_string + to: /tf/strings/as_string +- from: /tf/compat/v1/dtypes/cast + to: /tf/cast +- from: /tf/compat/v1/dtypes/complex + to: /tf/dtypes/complex +- from: /tf/compat/v1/dtypes/saturate_cast + to: /tf/dtypes/saturate_cast +- from: /tf/compat/v1/dynamic_partition + to: /tf/dynamic_partition +- from: /tf/compat/v1/dynamic_stitch + to: /tf/dynamic_stitch +- from: /tf/compat/v1/edit_distance + to: /tf/edit_distance +- from: /tf/compat/v1/einsum + to: /tf/einsum +- from: /tf/compat/v1/encode_base64 + to: /tf/io/encode_base64 +- from: /tf/compat/v1/ensure_shape + to: /tf/ensure_shape +- from: /tf/compat/v1/equal + to: /tf/math/equal +- from: /tf/compat/v1/erf + to: /tf/math/erf +- from: /tf/compat/v1/erfc + to: /tf/math/erfc +- from: /tf/compat/v1/errors/AbortedError + to: /tf/errors/AbortedError +- from: /tf/compat/v1/errors/AlreadyExistsError + to: /tf/errors/AlreadyExistsError +- from: /tf/compat/v1/errors/CancelledError + to: /tf/errors/CancelledError +- from: /tf/compat/v1/errors/DataLossError + to: /tf/errors/DataLossError +- from: /tf/compat/v1/errors/DeadlineExceededError + to: /tf/errors/DeadlineExceededError +- from: /tf/compat/v1/errors/FailedPreconditionError + to: /tf/errors/FailedPreconditionError +- from: /tf/compat/v1/errors/InternalError + to: /tf/errors/InternalError +- from: /tf/compat/v1/errors/InvalidArgumentError + to: /tf/errors/InvalidArgumentError +- from: /tf/compat/v1/errors/NotFoundError + to: /tf/errors/NotFoundError +- from: /tf/compat/v1/errors/OpError + to: /tf/errors/OpError +- from: /tf/compat/v1/errors/OutOfRangeError + to: /tf/errors/OutOfRangeError +- from: /tf/compat/v1/errors/PermissionDeniedError + to: /tf/errors/PermissionDeniedError +- from: /tf/compat/v1/errors/ResourceExhaustedError + to: /tf/errors/ResourceExhaustedError +- from: /tf/compat/v1/errors/UnauthenticatedError + to: /tf/errors/UnauthenticatedError +- from: /tf/compat/v1/errors/UnavailableError + to: /tf/errors/UnavailableError +- from: /tf/compat/v1/errors/UnimplementedError + to: /tf/errors/UnimplementedError +- from: /tf/compat/v1/errors/UnknownError + to: /tf/errors/UnknownError +- from: /tf/compat/v1/estimator/BestExporter + to: /tf/estimator/BestExporter +- from: /tf/compat/v1/estimator/BinaryClassHead + to: /tf/estimator/BinaryClassHead +- from: /tf/compat/v1/estimator/BoostedTreesClassifier + to: /tf/estimator/BoostedTreesClassifier +- from: /tf/compat/v1/estimator/BoostedTreesEstimator + to: /tf/estimator/BoostedTreesEstimator +- from: /tf/compat/v1/estimator/BoostedTreesRegressor + to: /tf/estimator/BoostedTreesRegressor +- 
from: /tf/compat/v1/estimator/CheckpointSaverHook + to: /tf/estimator/CheckpointSaverHook +- from: /tf/compat/v1/estimator/CheckpointSaverListener + to: /tf/estimator/CheckpointSaverListener +- from: /tf/compat/v1/estimator/EstimatorSpec + to: /tf/estimator/EstimatorSpec +- from: /tf/compat/v1/estimator/EvalSpec + to: /tf/estimator/EvalSpec +- from: /tf/compat/v1/estimator/Exporter + to: /tf/estimator/Exporter +- from: /tf/compat/v1/estimator/FeedFnHook + to: /tf/estimator/FeedFnHook +- from: /tf/compat/v1/estimator/FinalExporter + to: /tf/estimator/FinalExporter +- from: /tf/compat/v1/estimator/FinalOpsHook + to: /tf/estimator/FinalOpsHook +- from: /tf/compat/v1/estimator/GlobalStepWaiterHook + to: /tf/estimator/GlobalStepWaiterHook +- from: /tf/compat/v1/estimator/Head + to: /tf/estimator/Head +- from: /tf/compat/v1/estimator/LatestExporter + to: /tf/estimator/LatestExporter +- from: /tf/compat/v1/estimator/LoggingTensorHook + to: /tf/estimator/LoggingTensorHook +- from: /tf/compat/v1/estimator/LogisticRegressionHead + to: /tf/estimator/LogisticRegressionHead +- from: /tf/compat/v1/estimator/ModeKeys + to: /tf/estimator/ModeKeys +- from: /tf/compat/v1/estimator/MultiClassHead + to: /tf/estimator/MultiClassHead +- from: /tf/compat/v1/estimator/MultiHead + to: /tf/estimator/MultiHead +- from: /tf/compat/v1/estimator/MultiLabelHead + to: /tf/estimator/MultiLabelHead +- from: /tf/compat/v1/estimator/NanLossDuringTrainingError + to: /tf/estimator/NanLossDuringTrainingError +- from: /tf/compat/v1/estimator/NanTensorHook + to: /tf/estimator/NanTensorHook +- from: /tf/compat/v1/estimator/PoissonRegressionHead + to: /tf/estimator/PoissonRegressionHead +- from: /tf/compat/v1/estimator/ProfilerHook + to: /tf/estimator/ProfilerHook +- from: /tf/compat/v1/estimator/RegressionHead + to: /tf/estimator/RegressionHead +- from: /tf/compat/v1/estimator/RunConfig + to: /tf/estimator/RunConfig +- from: /tf/compat/v1/estimator/SecondOrStepTimer + to: /tf/estimator/SecondOrStepTimer +- from: /tf/compat/v1/estimator/SessionRunArgs + to: /tf/estimator/SessionRunArgs +- from: /tf/compat/v1/estimator/SessionRunContext + to: /tf/estimator/SessionRunContext +- from: /tf/compat/v1/estimator/SessionRunHook + to: /tf/estimator/SessionRunHook +- from: /tf/compat/v1/estimator/SessionRunValues + to: /tf/estimator/SessionRunValues +- from: /tf/compat/v1/estimator/StepCounterHook + to: /tf/estimator/StepCounterHook +- from: /tf/compat/v1/estimator/StopAtStepHook + to: /tf/estimator/StopAtStepHook +- from: /tf/compat/v1/estimator/SummarySaverHook + to: /tf/estimator/SummarySaverHook +- from: /tf/compat/v1/estimator/TrainSpec + to: /tf/estimator/TrainSpec +- from: /tf/compat/v1/estimator/VocabInfo + to: /tf/estimator/VocabInfo +- from: /tf/compat/v1/estimator/WarmStartSettings + to: /tf/estimator/WarmStartSettings +- from: /tf/compat/v1/estimator/add_metrics + to: /tf/estimator/add_metrics +- from: /tf/compat/v1/estimator/experimental/InMemoryEvaluatorHook + to: /tf/estimator/experimental/InMemoryEvaluatorHook +- from: /tf/compat/v1/estimator/experimental/LinearSDCA + to: /tf/estimator/experimental/LinearSDCA +- from: /tf/compat/v1/estimator/experimental/build_raw_supervised_input_receiver_fn + to: /tf/estimator/experimental/build_raw_supervised_input_receiver_fn +- from: /tf/compat/v1/estimator/experimental/call_logit_fn + to: /tf/estimator/experimental/call_logit_fn +- from: /tf/compat/v1/estimator/experimental/make_early_stopping_hook + to: /tf/estimator/experimental/make_early_stopping_hook +- from: 
/tf/compat/v1/estimator/experimental/make_stop_at_checkpoint_step_hook + to: /tf/estimator/experimental/make_stop_at_checkpoint_step_hook +- from: /tf/compat/v1/estimator/experimental/stop_if_higher_hook + to: /tf/estimator/experimental/stop_if_higher_hook +- from: /tf/compat/v1/estimator/experimental/stop_if_lower_hook + to: /tf/estimator/experimental/stop_if_lower_hook +- from: /tf/compat/v1/estimator/experimental/stop_if_no_decrease_hook + to: /tf/estimator/experimental/stop_if_no_decrease_hook +- from: /tf/compat/v1/estimator/experimental/stop_if_no_increase_hook + to: /tf/estimator/experimental/stop_if_no_increase_hook +- from: /tf/compat/v1/estimator/export/ClassificationOutput + to: /tf/estimator/export/ClassificationOutput +- from: /tf/compat/v1/estimator/export/ExportOutput + to: /tf/estimator/export/ExportOutput +- from: /tf/compat/v1/estimator/export/PredictOutput + to: /tf/estimator/export/PredictOutput +- from: /tf/compat/v1/estimator/export/RegressionOutput + to: /tf/estimator/export/RegressionOutput +- from: /tf/compat/v1/estimator/export/ServingInputReceiver + to: /tf/estimator/export/ServingInputReceiver +- from: /tf/compat/v1/estimator/export/TensorServingInputReceiver + to: /tf/estimator/export/TensorServingInputReceiver +- from: /tf/compat/v1/estimator/export/build_parsing_serving_input_receiver_fn + to: /tf/estimator/export/build_parsing_serving_input_receiver_fn +- from: /tf/compat/v1/estimator/export/build_raw_serving_input_receiver_fn + to: /tf/estimator/export/build_raw_serving_input_receiver_fn +- from: /tf/compat/v1/estimator/train_and_evaluate + to: /tf/estimator/train_and_evaluate +- from: /tf/compat/v1/exp + to: /tf/math/exp +- from: /tf/compat/v1/experimental/async_clear_error + to: /tf/experimental/async_clear_error +- from: /tf/compat/v1/experimental/async_scope + to: /tf/experimental/async_scope +- from: /tf/compat/v1/experimental/function_executor_type + to: /tf/experimental/function_executor_type +- from: /tf/compat/v1/expm1 + to: /tf/math/expm1 +- from: /tf/compat/v1/extract_volume_patches + to: /tf/extract_volume_patches +- from: /tf/compat/v1/eye + to: /tf/eye +- from: /tf/compat/v1/fake_quant_with_min_max_args + to: /tf/quantization/fake_quant_with_min_max_args +- from: /tf/compat/v1/fake_quant_with_min_max_args_gradient + to: /tf/quantization/fake_quant_with_min_max_args_gradient +- from: /tf/compat/v1/fake_quant_with_min_max_vars + to: /tf/quantization/fake_quant_with_min_max_vars +- from: /tf/compat/v1/fake_quant_with_min_max_vars_gradient + to: /tf/quantization/fake_quant_with_min_max_vars_gradient +- from: /tf/compat/v1/fake_quant_with_min_max_vars_per_channel + to: /tf/quantization/fake_quant_with_min_max_vars_per_channel +- from: /tf/compat/v1/fake_quant_with_min_max_vars_per_channel_gradient + to: /tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient +- from: /tf/compat/v1/feature_column/bucketized_column + to: /tf/feature_column/bucketized_column +- from: /tf/compat/v1/feature_column/categorical_column_with_hash_bucket + to: /tf/feature_column/categorical_column_with_hash_bucket +- from: /tf/compat/v1/feature_column/categorical_column_with_identity + to: /tf/feature_column/categorical_column_with_identity +- from: /tf/compat/v1/feature_column/categorical_column_with_vocabulary_list + to: /tf/feature_column/categorical_column_with_vocabulary_list +- from: /tf/compat/v1/feature_column/crossed_column + to: /tf/feature_column/crossed_column +- from: /tf/compat/v1/feature_column/embedding_column + to: 
/tf/feature_column/embedding_column +- from: /tf/compat/v1/feature_column/indicator_column + to: /tf/feature_column/indicator_column +- from: /tf/compat/v1/feature_column/numeric_column + to: /tf/feature_column/numeric_column +- from: /tf/compat/v1/feature_column/sequence_categorical_column_with_hash_bucket + to: /tf/feature_column/sequence_categorical_column_with_hash_bucket +- from: /tf/compat/v1/feature_column/sequence_categorical_column_with_identity + to: /tf/feature_column/sequence_categorical_column_with_identity +- from: /tf/compat/v1/feature_column/sequence_categorical_column_with_vocabulary_file + to: /tf/feature_column/sequence_categorical_column_with_vocabulary_file +- from: /tf/compat/v1/feature_column/sequence_categorical_column_with_vocabulary_list + to: /tf/feature_column/sequence_categorical_column_with_vocabulary_list +- from: /tf/compat/v1/feature_column/sequence_numeric_column + to: /tf/feature_column/sequence_numeric_column +- from: /tf/compat/v1/feature_column/weighted_categorical_column + to: /tf/feature_column/weighted_categorical_column +- from: /tf/compat/v1/fft + to: /tf/signal/fft +- from: /tf/compat/v1/fft2d + to: /tf/signal/fft2d +- from: /tf/compat/v1/fft3d + to: /tf/signal/fft3d +- from: /tf/compat/v1/fill + to: /tf/fill +- from: /tf/compat/v1/fingerprint + to: /tf/fingerprint +- from: /tf/compat/v1/flags/DEFINE_boolean + to: /tf/compat/v1/flags/DEFINE_bool +- from: /tf/compat/v1/floor + to: /tf/math/floor +- from: /tf/compat/v1/floordiv + to: /tf/math/floordiv +- from: /tf/compat/v1/floormod + to: /tf/math/floormod +- from: /tf/compat/v1/function + to: /tf/function +- from: /tf/compat/v1/get_logger + to: /tf/get_logger +- from: /tf/compat/v1/get_static_value + to: /tf/get_static_value +- from: /tf/compat/v1/gfile/GFile + to: /tf/io/gfile/GFile +- from: /tf/compat/v1/gfile/Open + to: /tf/io/gfile/GFile +- from: /tf/compat/v1/global_norm + to: /tf/linalg/global_norm +- from: /tf/compat/v1/glorot_normal_initializer + to: /tf/compat/v1/keras/initializers/glorot_normal +- from: /tf/compat/v1/glorot_uniform_initializer + to: /tf/compat/v1/keras/initializers/glorot_uniform +- from: /tf/compat/v1/grad_pass_through + to: /tf/grad_pass_through +- from: /tf/compat/v1/graph_util/import_graph_def + to: /tf/graph_util/import_graph_def +- from: /tf/compat/v1/greater + to: /tf/math/greater +- from: /tf/compat/v1/greater_equal + to: /tf/math/greater_equal +- from: /tf/compat/v1/group + to: /tf/group +- from: /tf/compat/v1/guarantee_const + to: /tf/guarantee_const +- from: /tf/compat/v1/histogram_fixed_width + to: /tf/histogram_fixed_width +- from: /tf/compat/v1/histogram_fixed_width_bins + to: /tf/histogram_fixed_width_bins +- from: /tf/compat/v1/identity + to: /tf/identity +- from: /tf/compat/v1/identity_n + to: /tf/identity_n +- from: /tf/compat/v1/ifft + to: /tf/signal/ifft +- from: /tf/compat/v1/ifft2d + to: /tf/signal/ifft2d +- from: /tf/compat/v1/ifft3d + to: /tf/signal/ifft3d +- from: /tf/compat/v1/igamma + to: /tf/math/igamma +- from: /tf/compat/v1/igammac + to: /tf/math/igammac +- from: /tf/compat/v1/imag + to: /tf/math/imag +- from: /tf/compat/v1/image/adjust_brightness + to: /tf/image/adjust_brightness +- from: /tf/compat/v1/image/adjust_contrast + to: /tf/image/adjust_contrast +- from: /tf/compat/v1/image/adjust_gamma + to: /tf/image/adjust_gamma +- from: /tf/compat/v1/image/adjust_hue + to: /tf/image/adjust_hue +- from: /tf/compat/v1/image/adjust_jpeg_quality + to: /tf/image/adjust_jpeg_quality +- from: /tf/compat/v1/image/adjust_saturation + to: 
/tf/image/adjust_saturation +- from: /tf/compat/v1/image/central_crop + to: /tf/image/central_crop +- from: /tf/compat/v1/image/combined_non_max_suppression + to: /tf/image/combined_non_max_suppression +- from: /tf/compat/v1/image/convert_image_dtype + to: /tf/image/convert_image_dtype +- from: /tf/compat/v1/image/crop_to_bounding_box + to: /tf/image/crop_to_bounding_box +- from: /tf/compat/v1/image/decode_and_crop_jpeg + to: /tf/io/decode_and_crop_jpeg +- from: /tf/compat/v1/image/decode_bmp + to: /tf/io/decode_bmp +- from: /tf/compat/v1/image/decode_gif + to: /tf/io/decode_gif +- from: /tf/compat/v1/image/decode_image + to: /tf/io/decode_image +- from: /tf/compat/v1/image/decode_jpeg + to: /tf/io/decode_jpeg +- from: /tf/compat/v1/image/decode_png + to: /tf/io/decode_png +- from: /tf/compat/v1/image/encode_jpeg + to: /tf/io/encode_jpeg +- from: /tf/compat/v1/image/encode_png + to: /tf/image/encode_png +- from: /tf/compat/v1/image/extract_image_patches + to: /tf/compat/v1/extract_image_patches +- from: /tf/compat/v1/image/extract_jpeg_shape + to: /tf/io/extract_jpeg_shape +- from: /tf/compat/v1/image/extract_patches + to: /tf/image/extract_patches +- from: /tf/compat/v1/image/flip_left_right + to: /tf/image/flip_left_right +- from: /tf/compat/v1/image/flip_up_down + to: /tf/image/flip_up_down +- from: /tf/compat/v1/image/generate_bounding_box_proposals + to: /tf/image/generate_bounding_box_proposals +- from: /tf/compat/v1/image/grayscale_to_rgb + to: /tf/image/grayscale_to_rgb +- from: /tf/compat/v1/image/hsv_to_rgb + to: /tf/image/hsv_to_rgb +- from: /tf/compat/v1/image/image_gradients + to: /tf/image/image_gradients +- from: /tf/compat/v1/image/is_jpeg + to: /tf/io/is_jpeg +- from: /tf/compat/v1/image/non_max_suppression + to: /tf/image/non_max_suppression +- from: /tf/compat/v1/image/non_max_suppression_overlaps + to: /tf/image/non_max_suppression_overlaps +- from: /tf/compat/v1/image/non_max_suppression_padded + to: /tf/image/non_max_suppression_padded +- from: /tf/compat/v1/image/non_max_suppression_with_scores + to: /tf/image/non_max_suppression_with_scores +- from: /tf/compat/v1/image/pad_to_bounding_box + to: /tf/image/pad_to_bounding_box +- from: /tf/compat/v1/image/per_image_standardization + to: /tf/image/per_image_standardization +- from: /tf/compat/v1/image/psnr + to: /tf/image/psnr +- from: /tf/compat/v1/image/random_brightness + to: /tf/image/random_brightness +- from: /tf/compat/v1/image/random_contrast + to: /tf/image/random_contrast +- from: /tf/compat/v1/image/random_crop + to: /tf/image/random_crop +- from: /tf/compat/v1/image/random_flip_left_right + to: /tf/image/random_flip_left_right +- from: /tf/compat/v1/image/random_flip_up_down + to: /tf/image/random_flip_up_down +- from: /tf/compat/v1/image/random_hue + to: /tf/image/random_hue +- from: /tf/compat/v1/image/random_jpeg_quality + to: /tf/image/random_jpeg_quality +- from: /tf/compat/v1/image/random_saturation + to: /tf/image/random_saturation +- from: /tf/compat/v1/image/resize_image_with_crop_or_pad + to: /tf/image/resize_with_crop_or_pad +- from: /tf/compat/v1/image/resize_images + to: /tf/compat/v1/image/resize +- from: /tf/compat/v1/image/resize_with_crop_or_pad + to: /tf/image/resize_with_crop_or_pad +- from: /tf/compat/v1/image/rgb_to_grayscale + to: /tf/image/rgb_to_grayscale +- from: /tf/compat/v1/image/rgb_to_hsv + to: /tf/image/rgb_to_hsv +- from: /tf/compat/v1/image/rgb_to_yiq + to: /tf/image/rgb_to_yiq +- from: /tf/compat/v1/image/rgb_to_yuv + to: /tf/image/rgb_to_yuv +- from: 
/tf/compat/v1/image/rot90 + to: /tf/image/rot90 +- from: /tf/compat/v1/image/sobel_edges + to: /tf/image/sobel_edges +- from: /tf/compat/v1/image/ssim + to: /tf/image/ssim +- from: /tf/compat/v1/image/ssim_multiscale + to: /tf/image/ssim_multiscale +- from: /tf/compat/v1/image/total_variation + to: /tf/image/total_variation +- from: /tf/compat/v1/image/transpose + to: /tf/image/transpose +- from: /tf/compat/v1/image/transpose_image + to: /tf/image/transpose +- from: /tf/compat/v1/image/yiq_to_rgb + to: /tf/image/yiq_to_rgb +- from: /tf/compat/v1/image/yuv_to_rgb + to: /tf/image/yuv_to_rgb +- from: /tf/compat/v1/import_graph_def + to: /tf/graph_util/import_graph_def +- from: /tf/compat/v1/init_scope + to: /tf/init_scope +- from: /tf/compat/v1/initializers/constant + to: /tf/compat/v1/keras/initializers/Constant +- from: /tf/compat/v1/initializers/global_variables + to: /tf/compat/v1/global_variables_initializer +- from: /tf/compat/v1/initializers/glorot_normal + to: /tf/compat/v1/keras/initializers/glorot_normal +- from: /tf/compat/v1/initializers/glorot_uniform + to: /tf/compat/v1/keras/initializers/glorot_uniform +- from: /tf/compat/v1/initializers/he_normal + to: /tf/compat/v1/keras/initializers/he_normal +- from: /tf/compat/v1/initializers/he_uniform + to: /tf/compat/v1/keras/initializers/he_uniform +- from: /tf/compat/v1/initializers/identity + to: /tf/compat/v1/keras/initializers/Identity +- from: /tf/compat/v1/initializers/lecun_normal + to: /tf/compat/v1/keras/initializers/lecun_normal +- from: /tf/compat/v1/initializers/lecun_uniform + to: /tf/compat/v1/keras/initializers/lecun_uniform +- from: /tf/compat/v1/initializers/local_variables + to: /tf/compat/v1/local_variables_initializer +- from: /tf/compat/v1/initializers/ones + to: /tf/compat/v1/keras/initializers/Ones +- from: /tf/compat/v1/initializers/orthogonal + to: /tf/compat/v1/keras/initializers/Orthogonal +- from: /tf/compat/v1/initializers/random_normal + to: /tf/compat/v1/random_normal_initializer +- from: /tf/compat/v1/initializers/random_uniform + to: /tf/compat/v1/random_uniform_initializer +- from: /tf/compat/v1/initializers/tables_initializer + to: /tf/compat/v1/tables_initializer +- from: /tf/compat/v1/initializers/truncated_normal + to: /tf/compat/v1/truncated_normal_initializer +- from: /tf/compat/v1/initializers/uniform_unit_scaling + to: /tf/compat/v1/uniform_unit_scaling_initializer +- from: /tf/compat/v1/initializers/variables + to: /tf/compat/v1/variables_initializer +- from: /tf/compat/v1/initializers/variance_scaling + to: /tf/compat/v1/keras/initializers/VarianceScaling +- from: /tf/compat/v1/initializers/zeros + to: /tf/compat/v1/keras/initializers/Zeros +- from: /tf/compat/v1/invert_permutation + to: /tf/math/invert_permutation +- from: /tf/compat/v1/io/FixedLenFeature + to: /tf/io/FixedLenFeature +- from: /tf/compat/v1/io/FixedLenSequenceFeature + to: /tf/io/FixedLenSequenceFeature +- from: /tf/compat/v1/io/PaddingFIFOQueue + to: /tf/queue/PaddingFIFOQueue +- from: /tf/compat/v1/io/PriorityQueue + to: /tf/queue/PriorityQueue +- from: /tf/compat/v1/io/QueueBase + to: /tf/queue/QueueBase +- from: /tf/compat/v1/io/RaggedFeature + to: /tf/io/RaggedFeature +- from: /tf/compat/v1/io/RaggedFeature/RowLengths + to: /tf/io/RaggedFeature/RowLengths +- from: /tf/compat/v1/io/RaggedFeature/RowLimits + to: /tf/io/RaggedFeature/RowLimits +- from: /tf/compat/v1/io/RaggedFeature/RowSplits + to: /tf/io/RaggedFeature/RowSplits +- from: /tf/compat/v1/io/RaggedFeature/RowStarts + to: /tf/io/RaggedFeature/RowStarts +- from: 
/tf/compat/v1/io/RaggedFeature/UniformRowLength + to: /tf/io/RaggedFeature/UniformRowLength +- from: /tf/compat/v1/io/RaggedFeature/ValueRowIds + to: /tf/io/RaggedFeature/ValueRowIds +- from: /tf/compat/v1/io/RandomShuffleQueue + to: /tf/queue/RandomShuffleQueue +- from: /tf/compat/v1/io/SparseFeature + to: /tf/io/SparseFeature +- from: /tf/compat/v1/io/TFRecordOptions + to: /tf/io/TFRecordOptions +- from: /tf/compat/v1/io/TFRecordWriter + to: /tf/io/TFRecordWriter +- from: /tf/compat/v1/io/VarLenFeature + to: /tf/io/VarLenFeature +- from: /tf/compat/v1/io/decode_and_crop_jpeg + to: /tf/io/decode_and_crop_jpeg +- from: /tf/compat/v1/io/decode_base64 + to: /tf/io/decode_base64 +- from: /tf/compat/v1/io/decode_bmp + to: /tf/io/decode_bmp +- from: /tf/compat/v1/io/decode_compressed + to: /tf/io/decode_compressed +- from: /tf/compat/v1/io/decode_csv + to: /tf/compat/v1/decode_csv +- from: /tf/compat/v1/io/decode_gif + to: /tf/io/decode_gif +- from: /tf/compat/v1/io/decode_image + to: /tf/io/decode_image +- from: /tf/compat/v1/io/decode_jpeg + to: /tf/io/decode_jpeg +- from: /tf/compat/v1/io/decode_json_example + to: /tf/io/decode_json_example +- from: /tf/compat/v1/io/decode_png + to: /tf/io/decode_png +- from: /tf/compat/v1/io/decode_proto + to: /tf/io/decode_proto +- from: /tf/compat/v1/io/decode_raw + to: /tf/compat/v1/decode_raw +- from: /tf/compat/v1/io/deserialize_many_sparse + to: /tf/io/deserialize_many_sparse +- from: /tf/compat/v1/io/encode_base64 + to: /tf/io/encode_base64 +- from: /tf/compat/v1/io/encode_jpeg + to: /tf/io/encode_jpeg +- from: /tf/compat/v1/io/encode_proto + to: /tf/io/encode_proto +- from: /tf/compat/v1/io/extract_jpeg_shape + to: /tf/io/extract_jpeg_shape +- from: /tf/compat/v1/io/gfile/GFile + to: /tf/io/gfile/GFile +- from: /tf/compat/v1/io/gfile/copy + to: /tf/io/gfile/copy +- from: /tf/compat/v1/io/gfile/exists + to: /tf/io/gfile/exists +- from: /tf/compat/v1/io/gfile/glob + to: /tf/io/gfile/glob +- from: /tf/compat/v1/io/gfile/isdir + to: /tf/io/gfile/isdir +- from: /tf/compat/v1/io/gfile/listdir + to: /tf/io/gfile/listdir +- from: /tf/compat/v1/io/gfile/makedirs + to: /tf/io/gfile/makedirs +- from: /tf/compat/v1/io/gfile/mkdir + to: /tf/io/gfile/mkdir +- from: /tf/compat/v1/io/gfile/remove + to: /tf/io/gfile/remove +- from: /tf/compat/v1/io/gfile/rename + to: /tf/io/gfile/rename +- from: /tf/compat/v1/io/gfile/rmtree + to: /tf/io/gfile/rmtree +- from: /tf/compat/v1/io/gfile/stat + to: /tf/io/gfile/stat +- from: /tf/compat/v1/io/gfile/walk + to: /tf/io/gfile/walk +- from: /tf/compat/v1/io/is_jpeg + to: /tf/io/is_jpeg +- from: /tf/compat/v1/io/match_filenames_once + to: /tf/io/match_filenames_once +- from: /tf/compat/v1/io/matching_files + to: /tf/io/matching_files +- from: /tf/compat/v1/io/parse_example + to: /tf/compat/v1/parse_example +- from: /tf/compat/v1/io/parse_sequence_example + to: /tf/io/parse_sequence_example +- from: /tf/compat/v1/io/parse_single_example + to: /tf/compat/v1/parse_single_example +- from: /tf/compat/v1/io/parse_single_sequence_example + to: /tf/io/parse_single_sequence_example +- from: /tf/compat/v1/io/parse_tensor + to: /tf/io/parse_tensor +- from: /tf/compat/v1/io/read_file + to: /tf/io/read_file +- from: /tf/compat/v1/io/serialize_many_sparse + to: /tf/compat/v1/serialize_many_sparse +- from: /tf/compat/v1/io/serialize_sparse + to: /tf/compat/v1/serialize_sparse +- from: /tf/compat/v1/io/serialize_tensor + to: /tf/io/serialize_tensor +- from: /tf/compat/v1/io/write_file + to: /tf/io/write_file +- from: 
/tf/compat/v1/io/write_graph + to: /tf/io/write_graph +- from: /tf/compat/v1/is_finite + to: /tf/math/is_finite +- from: /tf/compat/v1/is_inf + to: /tf/math/is_inf +- from: /tf/compat/v1/is_nan + to: /tf/math/is_nan +- from: /tf/compat/v1/is_non_decreasing + to: /tf/math/is_non_decreasing +- from: /tf/compat/v1/is_numeric_tensor + to: /tf/debugging/is_numeric_tensor +- from: /tf/compat/v1/is_strictly_increasing + to: /tf/math/is_strictly_increasing +- from: /tf/compat/v1/is_tensor + to: /tf/is_tensor +- from: /tf/compat/v1/keras/Input + to: /tf/keras/Input +- from: /tf/compat/v1/keras/Model + to: /tf/keras/Model +- from: /tf/compat/v1/keras/Sequential + to: /tf/keras/Sequential +- from: /tf/compat/v1/keras/activations/deserialize + to: /tf/keras/activations/deserialize +- from: /tf/compat/v1/keras/activations/elu + to: /tf/keras/activations/elu +- from: /tf/compat/v1/keras/activations/exponential + to: /tf/keras/activations/exponential +- from: /tf/compat/v1/keras/activations/get + to: /tf/keras/activations/get +- from: /tf/compat/v1/keras/activations/hard_sigmoid + to: /tf/keras/activations/hard_sigmoid +- from: /tf/compat/v1/keras/activations/linear + to: /tf/keras/activations/linear +- from: /tf/compat/v1/keras/activations/relu + to: /tf/keras/activations/relu +- from: /tf/compat/v1/keras/activations/selu + to: /tf/keras/activations/selu +- from: /tf/compat/v1/keras/activations/serialize + to: /tf/keras/activations/serialize +- from: /tf/compat/v1/keras/activations/sigmoid + to: /tf/keras/activations/sigmoid +- from: /tf/compat/v1/keras/activations/softmax + to: /tf/keras/activations/softmax +- from: /tf/compat/v1/keras/activations/softplus + to: /tf/keras/activations/softplus +- from: /tf/compat/v1/keras/activations/softsign + to: /tf/keras/activations/softsign +- from: /tf/compat/v1/keras/activations/swish + to: /tf/keras/activations/swish +- from: /tf/compat/v1/keras/activations/tanh + to: /tf/keras/activations/tanh +- from: /tf/compat/v1/keras/applications/DenseNet121 + to: /tf/keras/applications/DenseNet121 +- from: /tf/compat/v1/keras/applications/DenseNet169 + to: /tf/keras/applications/DenseNet169 +- from: /tf/compat/v1/keras/applications/DenseNet201 + to: /tf/keras/applications/DenseNet201 +- from: /tf/compat/v1/keras/applications/InceptionResNetV2 + to: /tf/keras/applications/InceptionResNetV2 +- from: /tf/compat/v1/keras/applications/InceptionV3 + to: /tf/keras/applications/InceptionV3 +- from: /tf/compat/v1/keras/applications/MobileNet + to: /tf/keras/applications/MobileNet +- from: /tf/compat/v1/keras/applications/MobileNetV2 + to: /tf/keras/applications/MobileNetV2 +- from: /tf/compat/v1/keras/applications/NASNetLarge + to: /tf/keras/applications/NASNetLarge +- from: /tf/compat/v1/keras/applications/NASNetMobile + to: /tf/keras/applications/NASNetMobile +- from: /tf/compat/v1/keras/applications/ResNet101 + to: /tf/keras/applications/ResNet101 +- from: /tf/compat/v1/keras/applications/ResNet101V2 + to: /tf/keras/applications/ResNet101V2 +- from: /tf/compat/v1/keras/applications/ResNet152 + to: /tf/keras/applications/ResNet152 +- from: /tf/compat/v1/keras/applications/ResNet152V2 + to: /tf/keras/applications/ResNet152V2 +- from: /tf/compat/v1/keras/applications/ResNet50 + to: /tf/keras/applications/ResNet50 +- from: /tf/compat/v1/keras/applications/ResNet50V2 + to: /tf/keras/applications/ResNet50V2 +- from: /tf/compat/v1/keras/applications/VGG16 + to: /tf/keras/applications/VGG16 +- from: /tf/compat/v1/keras/applications/VGG19 + to: /tf/keras/applications/VGG19 +- from: 
/tf/compat/v1/keras/applications/Xception + to: /tf/keras/applications/Xception +- from: /tf/compat/v1/keras/applications/densenet/DenseNet121 + to: /tf/keras/applications/DenseNet121 +- from: /tf/compat/v1/keras/applications/densenet/DenseNet169 + to: /tf/keras/applications/DenseNet169 +- from: /tf/compat/v1/keras/applications/densenet/DenseNet201 + to: /tf/keras/applications/DenseNet201 +- from: /tf/compat/v1/keras/applications/densenet/decode_predictions + to: /tf/keras/applications/densenet/decode_predictions +- from: /tf/compat/v1/keras/applications/densenet/preprocess_input + to: /tf/keras/applications/densenet/preprocess_input +- from: /tf/compat/v1/keras/applications/imagenet_utils/decode_predictions + to: /tf/keras/applications/imagenet_utils/decode_predictions +- from: /tf/compat/v1/keras/applications/imagenet_utils/preprocess_input + to: /tf/keras/applications/imagenet_utils/preprocess_input +- from: /tf/compat/v1/keras/applications/inception_resnet_v2/InceptionResNetV2 + to: /tf/keras/applications/InceptionResNetV2 +- from: /tf/compat/v1/keras/applications/inception_resnet_v2/decode_predictions + to: /tf/keras/applications/inception_resnet_v2/decode_predictions +- from: /tf/compat/v1/keras/applications/inception_resnet_v2/preprocess_input + to: /tf/keras/applications/inception_resnet_v2/preprocess_input +- from: /tf/compat/v1/keras/applications/inception_v3/InceptionV3 + to: /tf/keras/applications/InceptionV3 +- from: /tf/compat/v1/keras/applications/inception_v3/decode_predictions + to: /tf/keras/applications/inception_v3/decode_predictions +- from: /tf/compat/v1/keras/applications/inception_v3/preprocess_input + to: /tf/keras/applications/inception_v3/preprocess_input +- from: /tf/compat/v1/keras/applications/mobilenet/MobileNet + to: /tf/keras/applications/MobileNet +- from: /tf/compat/v1/keras/applications/mobilenet/decode_predictions + to: /tf/keras/applications/mobilenet/decode_predictions +- from: /tf/compat/v1/keras/applications/mobilenet/preprocess_input + to: /tf/keras/applications/mobilenet/preprocess_input +- from: /tf/compat/v1/keras/applications/mobilenet_v2/MobileNetV2 + to: /tf/keras/applications/MobileNetV2 +- from: /tf/compat/v1/keras/applications/mobilenet_v2/decode_predictions + to: /tf/keras/applications/mobilenet_v2/decode_predictions +- from: /tf/compat/v1/keras/applications/mobilenet_v2/preprocess_input + to: /tf/keras/applications/mobilenet_v2/preprocess_input +- from: /tf/compat/v1/keras/applications/nasnet/NASNetLarge + to: /tf/keras/applications/NASNetLarge +- from: /tf/compat/v1/keras/applications/nasnet/NASNetMobile + to: /tf/keras/applications/NASNetMobile +- from: /tf/compat/v1/keras/applications/nasnet/decode_predictions + to: /tf/keras/applications/nasnet/decode_predictions +- from: /tf/compat/v1/keras/applications/nasnet/preprocess_input + to: /tf/keras/applications/nasnet/preprocess_input +- from: /tf/compat/v1/keras/applications/resnet/ResNet101 + to: /tf/keras/applications/ResNet101 +- from: /tf/compat/v1/keras/applications/resnet/ResNet152 + to: /tf/keras/applications/ResNet152 +- from: /tf/compat/v1/keras/applications/resnet/ResNet50 + to: /tf/keras/applications/ResNet50 +- from: /tf/compat/v1/keras/applications/resnet/decode_predictions + to: /tf/keras/applications/resnet/decode_predictions +- from: /tf/compat/v1/keras/applications/resnet/preprocess_input + to: /tf/keras/applications/resnet/preprocess_input +- from: /tf/compat/v1/keras/applications/resnet50/ResNet50 + to: /tf/keras/applications/ResNet50 +- from: 
/tf/compat/v1/keras/applications/resnet50/decode_predictions + to: /tf/keras/applications/resnet/decode_predictions +- from: /tf/compat/v1/keras/applications/resnet50/preprocess_input + to: /tf/keras/applications/resnet/preprocess_input +- from: /tf/compat/v1/keras/applications/resnet_v2/ResNet101V2 + to: /tf/keras/applications/ResNet101V2 +- from: /tf/compat/v1/keras/applications/resnet_v2/ResNet152V2 + to: /tf/keras/applications/ResNet152V2 +- from: /tf/compat/v1/keras/applications/resnet_v2/ResNet50V2 + to: /tf/keras/applications/ResNet50V2 +- from: /tf/compat/v1/keras/applications/resnet_v2/decode_predictions + to: /tf/keras/applications/resnet_v2/decode_predictions +- from: /tf/compat/v1/keras/applications/resnet_v2/preprocess_input + to: /tf/keras/applications/resnet_v2/preprocess_input +- from: /tf/compat/v1/keras/applications/vgg16/VGG16 + to: /tf/keras/applications/VGG16 +- from: /tf/compat/v1/keras/applications/vgg16/decode_predictions + to: /tf/keras/applications/vgg16/decode_predictions +- from: /tf/compat/v1/keras/applications/vgg16/preprocess_input + to: /tf/keras/applications/vgg16/preprocess_input +- from: /tf/compat/v1/keras/applications/vgg19/VGG19 + to: /tf/keras/applications/VGG19 +- from: /tf/compat/v1/keras/applications/vgg19/decode_predictions + to: /tf/keras/applications/vgg19/decode_predictions +- from: /tf/compat/v1/keras/applications/vgg19/preprocess_input + to: /tf/keras/applications/vgg19/preprocess_input +- from: /tf/compat/v1/keras/applications/xception/Xception + to: /tf/keras/applications/Xception +- from: /tf/compat/v1/keras/applications/xception/decode_predictions + to: /tf/keras/applications/xception/decode_predictions +- from: /tf/compat/v1/keras/applications/xception/preprocess_input + to: /tf/keras/applications/xception/preprocess_input +- from: /tf/compat/v1/keras/backend/abs + to: /tf/keras/backend/abs +- from: /tf/compat/v1/keras/backend/all + to: /tf/keras/backend/all +- from: /tf/compat/v1/keras/backend/any + to: /tf/keras/backend/any +- from: /tf/compat/v1/keras/backend/arange + to: /tf/keras/backend/arange +- from: /tf/compat/v1/keras/backend/argmax + to: /tf/keras/backend/argmax +- from: /tf/compat/v1/keras/backend/argmin + to: /tf/keras/backend/argmin +- from: /tf/compat/v1/keras/backend/backend + to: /tf/keras/backend/backend +- from: /tf/compat/v1/keras/backend/batch_dot + to: /tf/keras/backend/batch_dot +- from: /tf/compat/v1/keras/backend/batch_flatten + to: /tf/keras/backend/batch_flatten +- from: /tf/compat/v1/keras/backend/batch_get_value + to: /tf/keras/backend/batch_get_value +- from: /tf/compat/v1/keras/backend/batch_normalization + to: /tf/keras/backend/batch_normalization +- from: /tf/compat/v1/keras/backend/batch_set_value + to: /tf/keras/backend/batch_set_value +- from: /tf/compat/v1/keras/backend/bias_add + to: /tf/keras/backend/bias_add +- from: /tf/compat/v1/keras/backend/binary_crossentropy + to: /tf/keras/backend/binary_crossentropy +- from: /tf/compat/v1/keras/backend/cast + to: /tf/keras/backend/cast +- from: /tf/compat/v1/keras/backend/cast_to_floatx + to: /tf/keras/backend/cast_to_floatx +- from: /tf/compat/v1/keras/backend/categorical_crossentropy + to: /tf/keras/backend/categorical_crossentropy +- from: /tf/compat/v1/keras/backend/clear_session + to: /tf/keras/backend/clear_session +- from: /tf/compat/v1/keras/backend/clip + to: /tf/keras/backend/clip +- from: /tf/compat/v1/keras/backend/concatenate + to: /tf/keras/backend/concatenate +- from: /tf/compat/v1/keras/backend/constant + to: /tf/keras/backend/constant +- 
from: /tf/compat/v1/keras/backend/conv1d + to: /tf/keras/backend/conv1d +- from: /tf/compat/v1/keras/backend/conv2d + to: /tf/keras/backend/conv2d +- from: /tf/compat/v1/keras/backend/conv2d_transpose + to: /tf/keras/backend/conv2d_transpose +- from: /tf/compat/v1/keras/backend/conv3d + to: /tf/keras/backend/conv3d +- from: /tf/compat/v1/keras/backend/cos + to: /tf/keras/backend/cos +- from: /tf/compat/v1/keras/backend/count_params + to: /tf/keras/backend/count_params +- from: /tf/compat/v1/keras/backend/ctc_batch_cost + to: /tf/keras/backend/ctc_batch_cost +- from: /tf/compat/v1/keras/backend/ctc_decode + to: /tf/keras/backend/ctc_decode +- from: /tf/compat/v1/keras/backend/ctc_label_dense_to_sparse + to: /tf/keras/backend/ctc_label_dense_to_sparse +- from: /tf/compat/v1/keras/backend/cumprod + to: /tf/keras/backend/cumprod +- from: /tf/compat/v1/keras/backend/cumsum + to: /tf/keras/backend/cumsum +- from: /tf/compat/v1/keras/backend/depthwise_conv2d + to: /tf/keras/backend/depthwise_conv2d +- from: /tf/compat/v1/keras/backend/dot + to: /tf/keras/backend/dot +- from: /tf/compat/v1/keras/backend/dropout + to: /tf/keras/backend/dropout +- from: /tf/compat/v1/keras/backend/dtype + to: /tf/keras/backend/dtype +- from: /tf/compat/v1/keras/backend/elu + to: /tf/keras/backend/elu +- from: /tf/compat/v1/keras/backend/epsilon + to: /tf/keras/backend/epsilon +- from: /tf/compat/v1/keras/backend/equal + to: /tf/keras/backend/equal +- from: /tf/compat/v1/keras/backend/eval + to: /tf/keras/backend/eval +- from: /tf/compat/v1/keras/backend/exp + to: /tf/keras/backend/exp +- from: /tf/compat/v1/keras/backend/expand_dims + to: /tf/keras/backend/expand_dims +- from: /tf/compat/v1/keras/backend/eye + to: /tf/keras/backend/eye +- from: /tf/compat/v1/keras/backend/flatten + to: /tf/keras/backend/flatten +- from: /tf/compat/v1/keras/backend/floatx + to: /tf/keras/backend/floatx +- from: /tf/compat/v1/keras/backend/foldl + to: /tf/keras/backend/foldl +- from: /tf/compat/v1/keras/backend/foldr + to: /tf/keras/backend/foldr +- from: /tf/compat/v1/keras/backend/function + to: /tf/keras/backend/function +- from: /tf/compat/v1/keras/backend/gather + to: /tf/keras/backend/gather +- from: /tf/compat/v1/keras/backend/get_uid + to: /tf/keras/backend/get_uid +- from: /tf/compat/v1/keras/backend/get_value + to: /tf/keras/backend/get_value +- from: /tf/compat/v1/keras/backend/gradients + to: /tf/keras/backend/gradients +- from: /tf/compat/v1/keras/backend/greater + to: /tf/keras/backend/greater +- from: /tf/compat/v1/keras/backend/greater_equal + to: /tf/keras/backend/greater_equal +- from: /tf/compat/v1/keras/backend/hard_sigmoid + to: /tf/keras/backend/hard_sigmoid +- from: /tf/compat/v1/keras/backend/image_data_format + to: /tf/keras/backend/image_data_format +- from: /tf/compat/v1/keras/backend/in_test_phase + to: /tf/keras/backend/in_test_phase +- from: /tf/compat/v1/keras/backend/in_top_k + to: /tf/keras/backend/in_top_k +- from: /tf/compat/v1/keras/backend/in_train_phase + to: /tf/keras/backend/in_train_phase +- from: /tf/compat/v1/keras/backend/int_shape + to: /tf/keras/backend/int_shape +- from: /tf/compat/v1/keras/backend/is_keras_tensor + to: /tf/keras/backend/is_keras_tensor +- from: /tf/compat/v1/keras/backend/is_sparse + to: /tf/keras/backend/is_sparse +- from: /tf/compat/v1/keras/backend/l2_normalize + to: /tf/keras/backend/l2_normalize +- from: /tf/compat/v1/keras/backend/learning_phase + to: /tf/keras/backend/learning_phase +- from: /tf/compat/v1/keras/backend/learning_phase_scope + to: 
/tf/keras/backend/learning_phase_scope +- from: /tf/compat/v1/keras/backend/less + to: /tf/keras/backend/less +- from: /tf/compat/v1/keras/backend/less_equal + to: /tf/keras/backend/less_equal +- from: /tf/compat/v1/keras/backend/local_conv1d + to: /tf/keras/backend/local_conv1d +- from: /tf/compat/v1/keras/backend/local_conv2d + to: /tf/keras/backend/local_conv2d +- from: /tf/compat/v1/keras/backend/log + to: /tf/keras/backend/log +- from: /tf/compat/v1/keras/backend/manual_variable_initialization + to: /tf/keras/backend/manual_variable_initialization +- from: /tf/compat/v1/keras/backend/map_fn + to: /tf/keras/backend/map_fn +- from: /tf/compat/v1/keras/backend/max + to: /tf/keras/backend/max +- from: /tf/compat/v1/keras/backend/maximum + to: /tf/keras/backend/maximum +- from: /tf/compat/v1/keras/backend/mean + to: /tf/keras/backend/mean +- from: /tf/compat/v1/keras/backend/min + to: /tf/keras/backend/min +- from: /tf/compat/v1/keras/backend/minimum + to: /tf/keras/backend/minimum +- from: /tf/compat/v1/keras/backend/moving_average_update + to: /tf/keras/backend/moving_average_update +- from: /tf/compat/v1/keras/backend/ndim + to: /tf/keras/backend/ndim +- from: /tf/compat/v1/keras/backend/normalize_batch_in_training + to: /tf/keras/backend/normalize_batch_in_training +- from: /tf/compat/v1/keras/backend/not_equal + to: /tf/keras/backend/not_equal +- from: /tf/compat/v1/keras/backend/one_hot + to: /tf/keras/backend/one_hot +- from: /tf/compat/v1/keras/backend/ones + to: /tf/keras/backend/ones +- from: /tf/compat/v1/keras/backend/ones_like + to: /tf/keras/backend/ones_like +- from: /tf/compat/v1/keras/backend/permute_dimensions + to: /tf/keras/backend/permute_dimensions +- from: /tf/compat/v1/keras/backend/placeholder + to: /tf/keras/backend/placeholder +- from: /tf/compat/v1/keras/backend/pool2d + to: /tf/keras/backend/pool2d +- from: /tf/compat/v1/keras/backend/pool3d + to: /tf/keras/backend/pool3d +- from: /tf/compat/v1/keras/backend/pow + to: /tf/keras/backend/pow +- from: /tf/compat/v1/keras/backend/print_tensor + to: /tf/keras/backend/print_tensor +- from: /tf/compat/v1/keras/backend/prod + to: /tf/keras/backend/prod +- from: /tf/compat/v1/keras/backend/random_binomial + to: /tf/keras/backend/random_binomial +- from: /tf/compat/v1/keras/backend/random_normal + to: /tf/keras/backend/random_normal +- from: /tf/compat/v1/keras/backend/random_normal_variable + to: /tf/keras/backend/random_normal_variable +- from: /tf/compat/v1/keras/backend/random_uniform + to: /tf/keras/backend/random_uniform +- from: /tf/compat/v1/keras/backend/random_uniform_variable + to: /tf/keras/backend/random_uniform_variable +- from: /tf/compat/v1/keras/backend/relu + to: /tf/keras/backend/relu +- from: /tf/compat/v1/keras/backend/repeat + to: /tf/keras/backend/repeat +- from: /tf/compat/v1/keras/backend/repeat_elements + to: /tf/keras/backend/repeat_elements +- from: /tf/compat/v1/keras/backend/reset_uids + to: /tf/keras/backend/reset_uids +- from: /tf/compat/v1/keras/backend/reshape + to: /tf/keras/backend/reshape +- from: /tf/compat/v1/keras/backend/resize_images + to: /tf/keras/backend/resize_images +- from: /tf/compat/v1/keras/backend/resize_volumes + to: /tf/keras/backend/resize_volumes +- from: /tf/compat/v1/keras/backend/reverse + to: /tf/keras/backend/reverse +- from: /tf/compat/v1/keras/backend/rnn + to: /tf/keras/backend/rnn +- from: /tf/compat/v1/keras/backend/round + to: /tf/keras/backend/round +- from: /tf/compat/v1/keras/backend/separable_conv2d + to: /tf/keras/backend/separable_conv2d +- from: 
/tf/compat/v1/keras/backend/set_epsilon + to: /tf/keras/backend/set_epsilon +- from: /tf/compat/v1/keras/backend/set_floatx + to: /tf/keras/backend/set_floatx +- from: /tf/compat/v1/keras/backend/set_image_data_format + to: /tf/keras/backend/set_image_data_format +- from: /tf/compat/v1/keras/backend/set_learning_phase + to: /tf/keras/backend/set_learning_phase +- from: /tf/compat/v1/keras/backend/set_value + to: /tf/keras/backend/set_value +- from: /tf/compat/v1/keras/backend/shape + to: /tf/keras/backend/shape +- from: /tf/compat/v1/keras/backend/sigmoid + to: /tf/keras/backend/sigmoid +- from: /tf/compat/v1/keras/backend/sign + to: /tf/keras/backend/sign +- from: /tf/compat/v1/keras/backend/sin + to: /tf/keras/backend/sin +- from: /tf/compat/v1/keras/backend/softmax + to: /tf/keras/backend/softmax +- from: /tf/compat/v1/keras/backend/softplus + to: /tf/keras/backend/softplus +- from: /tf/compat/v1/keras/backend/softsign + to: /tf/keras/backend/softsign +- from: /tf/compat/v1/keras/backend/sparse_categorical_crossentropy + to: /tf/keras/backend/sparse_categorical_crossentropy +- from: /tf/compat/v1/keras/backend/spatial_2d_padding + to: /tf/keras/backend/spatial_2d_padding +- from: /tf/compat/v1/keras/backend/spatial_3d_padding + to: /tf/keras/backend/spatial_3d_padding +- from: /tf/compat/v1/keras/backend/sqrt + to: /tf/keras/backend/sqrt +- from: /tf/compat/v1/keras/backend/square + to: /tf/keras/backend/square +- from: /tf/compat/v1/keras/backend/squeeze + to: /tf/keras/backend/squeeze +- from: /tf/compat/v1/keras/backend/stack + to: /tf/keras/backend/stack +- from: /tf/compat/v1/keras/backend/std + to: /tf/keras/backend/std +- from: /tf/compat/v1/keras/backend/stop_gradient + to: /tf/keras/backend/stop_gradient +- from: /tf/compat/v1/keras/backend/sum + to: /tf/keras/backend/sum +- from: /tf/compat/v1/keras/backend/switch + to: /tf/keras/backend/switch +- from: /tf/compat/v1/keras/backend/tanh + to: /tf/keras/backend/tanh +- from: /tf/compat/v1/keras/backend/temporal_padding + to: /tf/keras/backend/temporal_padding +- from: /tf/compat/v1/keras/backend/tile + to: /tf/keras/backend/tile +- from: /tf/compat/v1/keras/backend/to_dense + to: /tf/keras/backend/to_dense +- from: /tf/compat/v1/keras/backend/transpose + to: /tf/keras/backend/transpose +- from: /tf/compat/v1/keras/backend/truncated_normal + to: /tf/keras/backend/truncated_normal +- from: /tf/compat/v1/keras/backend/update + to: /tf/keras/backend/update +- from: /tf/compat/v1/keras/backend/update_add + to: /tf/keras/backend/update_add +- from: /tf/compat/v1/keras/backend/update_sub + to: /tf/keras/backend/update_sub +- from: /tf/compat/v1/keras/backend/var + to: /tf/keras/backend/var +- from: /tf/compat/v1/keras/backend/variable + to: /tf/keras/backend/variable +- from: /tf/compat/v1/keras/backend/zeros + to: /tf/keras/backend/zeros +- from: /tf/compat/v1/keras/backend/zeros_like + to: /tf/keras/backend/zeros_like +- from: /tf/compat/v1/keras/callbacks/BaseLogger + to: /tf/keras/callbacks/BaseLogger +- from: /tf/compat/v1/keras/callbacks/CSVLogger + to: /tf/keras/callbacks/CSVLogger +- from: /tf/compat/v1/keras/callbacks/Callback + to: /tf/keras/callbacks/Callback +- from: /tf/compat/v1/keras/callbacks/EarlyStopping + to: /tf/keras/callbacks/EarlyStopping +- from: /tf/compat/v1/keras/callbacks/History + to: /tf/keras/callbacks/History +- from: /tf/compat/v1/keras/callbacks/LambdaCallback + to: /tf/keras/callbacks/LambdaCallback +- from: /tf/compat/v1/keras/callbacks/LearningRateScheduler + to: 
/tf/keras/callbacks/LearningRateScheduler +- from: /tf/compat/v1/keras/callbacks/ModelCheckpoint + to: /tf/keras/callbacks/ModelCheckpoint +- from: /tf/compat/v1/keras/callbacks/ProgbarLogger + to: /tf/keras/callbacks/ProgbarLogger +- from: /tf/compat/v1/keras/callbacks/ReduceLROnPlateau + to: /tf/keras/callbacks/ReduceLROnPlateau +- from: /tf/compat/v1/keras/callbacks/RemoteMonitor + to: /tf/keras/callbacks/RemoteMonitor +- from: /tf/compat/v1/keras/callbacks/TerminateOnNaN + to: /tf/keras/callbacks/TerminateOnNaN +- from: /tf/compat/v1/keras/constraints/Constraint + to: /tf/keras/constraints/Constraint +- from: /tf/compat/v1/keras/constraints/MaxNorm + to: /tf/keras/constraints/MaxNorm +- from: /tf/compat/v1/keras/constraints/MinMaxNorm + to: /tf/keras/constraints/MinMaxNorm +- from: /tf/compat/v1/keras/constraints/NonNeg + to: /tf/keras/constraints/NonNeg +- from: /tf/compat/v1/keras/constraints/RadialConstraint + to: /tf/keras/constraints/RadialConstraint +- from: /tf/compat/v1/keras/constraints/UnitNorm + to: /tf/keras/constraints/UnitNorm +- from: /tf/compat/v1/keras/constraints/deserialize + to: /tf/keras/constraints/deserialize +- from: /tf/compat/v1/keras/constraints/get + to: /tf/keras/constraints/get +- from: /tf/compat/v1/keras/constraints/max_norm + to: /tf/keras/constraints/MaxNorm +- from: /tf/compat/v1/keras/constraints/min_max_norm + to: /tf/keras/constraints/MinMaxNorm +- from: /tf/compat/v1/keras/constraints/non_neg + to: /tf/keras/constraints/NonNeg +- from: /tf/compat/v1/keras/constraints/radial_constraint + to: /tf/keras/constraints/RadialConstraint +- from: /tf/compat/v1/keras/constraints/serialize + to: /tf/keras/constraints/serialize +- from: /tf/compat/v1/keras/constraints/unit_norm + to: /tf/keras/constraints/UnitNorm +- from: /tf/compat/v1/keras/datasets/boston_housing/load_data + to: /tf/keras/datasets/boston_housing/load_data +- from: /tf/compat/v1/keras/datasets/cifar10/load_data + to: /tf/keras/datasets/cifar10/load_data +- from: /tf/compat/v1/keras/datasets/cifar100/load_data + to: /tf/keras/datasets/cifar100/load_data +- from: /tf/compat/v1/keras/datasets/fashion_mnist/load_data + to: /tf/keras/datasets/fashion_mnist/load_data +- from: /tf/compat/v1/keras/datasets/imdb/get_word_index + to: /tf/keras/datasets/imdb/get_word_index +- from: /tf/compat/v1/keras/datasets/imdb/load_data + to: /tf/keras/datasets/imdb/load_data +- from: /tf/compat/v1/keras/datasets/mnist/load_data + to: /tf/keras/datasets/mnist/load_data +- from: /tf/compat/v1/keras/datasets/reuters/get_word_index + to: /tf/keras/datasets/reuters/get_word_index +- from: /tf/compat/v1/keras/datasets/reuters/load_data + to: /tf/keras/datasets/reuters/load_data +- from: /tf/compat/v1/keras/experimental/CosineDecay + to: /tf/keras/experimental/CosineDecay +- from: /tf/compat/v1/keras/experimental/CosineDecayRestarts + to: /tf/keras/experimental/CosineDecayRestarts +- from: /tf/compat/v1/keras/experimental/LinearCosineDecay + to: /tf/keras/experimental/LinearCosineDecay +- from: /tf/compat/v1/keras/experimental/LinearModel + to: /tf/keras/experimental/LinearModel +- from: /tf/compat/v1/keras/experimental/NoisyLinearCosineDecay + to: /tf/keras/experimental/NoisyLinearCosineDecay +- from: /tf/compat/v1/keras/experimental/PeepholeLSTMCell + to: /tf/keras/experimental/PeepholeLSTMCell +- from: /tf/compat/v1/keras/experimental/SequenceFeatures + to: /tf/keras/experimental/SequenceFeatures +- from: /tf/compat/v1/keras/experimental/WideDeepModel + to: /tf/keras/experimental/WideDeepModel +- from: 
/tf/compat/v1/keras/experimental/terminate_keras_multiprocessing_pools + to: /tf/keras/experimental/terminate_keras_multiprocessing_pools +- from: /tf/compat/v1/keras/initializers/constant + to: /tf/compat/v1/keras/initializers/Constant +- from: /tf/compat/v1/keras/initializers/deserialize + to: /tf/keras/initializers/deserialize +- from: /tf/compat/v1/keras/initializers/get + to: /tf/keras/initializers/get +- from: /tf/compat/v1/keras/initializers/identity + to: /tf/compat/v1/keras/initializers/Identity +- from: /tf/compat/v1/keras/initializers/normal + to: /tf/compat/v1/keras/initializers/RandomNormal +- from: /tf/compat/v1/keras/initializers/ones + to: /tf/compat/v1/keras/initializers/Ones +- from: /tf/compat/v1/keras/initializers/orthogonal + to: /tf/compat/v1/keras/initializers/Orthogonal +- from: /tf/compat/v1/keras/initializers/random_normal + to: /tf/compat/v1/keras/initializers/RandomNormal +- from: /tf/compat/v1/keras/initializers/random_uniform + to: /tf/compat/v1/keras/initializers/RandomUniform +- from: /tf/compat/v1/keras/initializers/serialize + to: /tf/keras/initializers/serialize +- from: /tf/compat/v1/keras/initializers/truncated_normal + to: /tf/compat/v1/keras/initializers/TruncatedNormal +- from: /tf/compat/v1/keras/initializers/uniform + to: /tf/compat/v1/keras/initializers/RandomUniform +- from: /tf/compat/v1/keras/initializers/zeros + to: /tf/compat/v1/keras/initializers/Zeros +- from: /tf/compat/v1/keras/layers/AbstractRNNCell + to: /tf/keras/layers/AbstractRNNCell +- from: /tf/compat/v1/keras/layers/Activation + to: /tf/keras/layers/Activation +- from: /tf/compat/v1/keras/layers/ActivityRegularization + to: /tf/keras/layers/ActivityRegularization +- from: /tf/compat/v1/keras/layers/Add + to: /tf/keras/layers/Add +- from: /tf/compat/v1/keras/layers/AdditiveAttention + to: /tf/keras/layers/AdditiveAttention +- from: /tf/compat/v1/keras/layers/AlphaDropout + to: /tf/keras/layers/AlphaDropout +- from: /tf/compat/v1/keras/layers/Attention + to: /tf/keras/layers/Attention +- from: /tf/compat/v1/keras/layers/Average + to: /tf/keras/layers/Average +- from: /tf/compat/v1/keras/layers/AveragePooling1D + to: /tf/keras/layers/AveragePooling1D +- from: /tf/compat/v1/keras/layers/AveragePooling2D + to: /tf/keras/layers/AveragePooling2D +- from: /tf/compat/v1/keras/layers/AveragePooling3D + to: /tf/keras/layers/AveragePooling3D +- from: /tf/compat/v1/keras/layers/AvgPool1D + to: /tf/keras/layers/AveragePooling1D +- from: /tf/compat/v1/keras/layers/AvgPool2D + to: /tf/keras/layers/AveragePooling2D +- from: /tf/compat/v1/keras/layers/AvgPool3D + to: /tf/keras/layers/AveragePooling3D +- from: /tf/compat/v1/keras/layers/Bidirectional + to: /tf/keras/layers/Bidirectional +- from: /tf/compat/v1/keras/layers/Concatenate + to: /tf/keras/layers/Concatenate +- from: /tf/compat/v1/keras/layers/Conv1D + to: /tf/keras/layers/Conv1D +- from: /tf/compat/v1/keras/layers/Conv2D + to: /tf/keras/layers/Conv2D +- from: /tf/compat/v1/keras/layers/Conv2DTranspose + to: /tf/keras/layers/Conv2DTranspose +- from: /tf/compat/v1/keras/layers/Conv3D + to: /tf/keras/layers/Conv3D +- from: /tf/compat/v1/keras/layers/Conv3DTranspose + to: /tf/keras/layers/Conv3DTranspose +- from: /tf/compat/v1/keras/layers/ConvLSTM2D + to: /tf/keras/layers/ConvLSTM2D +- from: /tf/compat/v1/keras/layers/Convolution1D + to: /tf/keras/layers/Conv1D +- from: /tf/compat/v1/keras/layers/Convolution2D + to: /tf/keras/layers/Conv2D +- from: /tf/compat/v1/keras/layers/Convolution2DTranspose + to: /tf/keras/layers/Conv2DTranspose +- 
from: /tf/compat/v1/keras/layers/Convolution3D + to: /tf/keras/layers/Conv3D +- from: /tf/compat/v1/keras/layers/Convolution3DTranspose + to: /tf/keras/layers/Conv3DTranspose +- from: /tf/compat/v1/keras/layers/Cropping1D + to: /tf/keras/layers/Cropping1D +- from: /tf/compat/v1/keras/layers/Cropping2D + to: /tf/keras/layers/Cropping2D +- from: /tf/compat/v1/keras/layers/Cropping3D + to: /tf/keras/layers/Cropping3D +- from: /tf/compat/v1/keras/layers/Dense + to: /tf/keras/layers/Dense +- from: /tf/compat/v1/keras/layers/DepthwiseConv2D + to: /tf/keras/layers/DepthwiseConv2D +- from: /tf/compat/v1/keras/layers/Dot + to: /tf/keras/layers/Dot +- from: /tf/compat/v1/keras/layers/Dropout + to: /tf/keras/layers/Dropout +- from: /tf/compat/v1/keras/layers/ELU + to: /tf/keras/layers/ELU +- from: /tf/compat/v1/keras/layers/Embedding + to: /tf/keras/layers/Embedding +- from: /tf/compat/v1/keras/layers/Flatten + to: /tf/keras/layers/Flatten +- from: /tf/compat/v1/keras/layers/GaussianDropout + to: /tf/keras/layers/GaussianDropout +- from: /tf/compat/v1/keras/layers/GaussianNoise + to: /tf/keras/layers/GaussianNoise +- from: /tf/compat/v1/keras/layers/GlobalAveragePooling1D + to: /tf/keras/layers/GlobalAveragePooling1D +- from: /tf/compat/v1/keras/layers/GlobalAveragePooling2D + to: /tf/keras/layers/GlobalAveragePooling2D +- from: /tf/compat/v1/keras/layers/GlobalAveragePooling3D + to: /tf/keras/layers/GlobalAveragePooling3D +- from: /tf/compat/v1/keras/layers/GlobalAvgPool1D + to: /tf/keras/layers/GlobalAveragePooling1D +- from: /tf/compat/v1/keras/layers/GlobalAvgPool2D + to: /tf/keras/layers/GlobalAveragePooling2D +- from: /tf/compat/v1/keras/layers/GlobalAvgPool3D + to: /tf/keras/layers/GlobalAveragePooling3D +- from: /tf/compat/v1/keras/layers/GlobalMaxPool1D + to: /tf/keras/layers/GlobalMaxPool1D +- from: /tf/compat/v1/keras/layers/GlobalMaxPool2D + to: /tf/keras/layers/GlobalMaxPool2D +- from: /tf/compat/v1/keras/layers/GlobalMaxPool3D + to: /tf/keras/layers/GlobalMaxPool3D +- from: /tf/compat/v1/keras/layers/GlobalMaxPooling1D + to: /tf/keras/layers/GlobalMaxPool1D +- from: /tf/compat/v1/keras/layers/GlobalMaxPooling2D + to: /tf/keras/layers/GlobalMaxPool2D +- from: /tf/compat/v1/keras/layers/GlobalMaxPooling3D + to: /tf/keras/layers/GlobalMaxPool3D +- from: /tf/compat/v1/keras/layers/Input + to: /tf/keras/Input +- from: /tf/compat/v1/keras/layers/InputLayer + to: /tf/keras/layers/InputLayer +- from: /tf/compat/v1/keras/layers/InputSpec + to: /tf/keras/layers/InputSpec +- from: /tf/compat/v1/keras/layers/Lambda + to: /tf/keras/layers/Lambda +- from: /tf/compat/v1/keras/layers/Layer + to: /tf/keras/layers/Layer +- from: /tf/compat/v1/keras/layers/LayerNormalization + to: /tf/keras/layers/LayerNormalization +- from: /tf/compat/v1/keras/layers/LeakyReLU + to: /tf/keras/layers/LeakyReLU +- from: /tf/compat/v1/keras/layers/LocallyConnected1D + to: /tf/keras/layers/LocallyConnected1D +- from: /tf/compat/v1/keras/layers/LocallyConnected2D + to: /tf/keras/layers/LocallyConnected2D +- from: /tf/compat/v1/keras/layers/Masking + to: /tf/keras/layers/Masking +- from: /tf/compat/v1/keras/layers/MaxPool1D + to: /tf/keras/layers/MaxPool1D +- from: /tf/compat/v1/keras/layers/MaxPool2D + to: /tf/keras/layers/MaxPool2D +- from: /tf/compat/v1/keras/layers/MaxPool3D + to: /tf/keras/layers/MaxPool3D +- from: /tf/compat/v1/keras/layers/MaxPooling1D + to: /tf/keras/layers/MaxPool1D +- from: /tf/compat/v1/keras/layers/MaxPooling2D + to: /tf/keras/layers/MaxPool2D +- from: /tf/compat/v1/keras/layers/MaxPooling3D + to: 
/tf/keras/layers/MaxPool3D +- from: /tf/compat/v1/keras/layers/Maximum + to: /tf/keras/layers/Maximum +- from: /tf/compat/v1/keras/layers/Minimum + to: /tf/keras/layers/Minimum +- from: /tf/compat/v1/keras/layers/Multiply + to: /tf/keras/layers/Multiply +- from: /tf/compat/v1/keras/layers/PReLU + to: /tf/keras/layers/PReLU +- from: /tf/compat/v1/keras/layers/Permute + to: /tf/keras/layers/Permute +- from: /tf/compat/v1/keras/layers/RNN + to: /tf/keras/layers/RNN +- from: /tf/compat/v1/keras/layers/ReLU + to: /tf/keras/layers/ReLU +- from: /tf/compat/v1/keras/layers/RepeatVector + to: /tf/keras/layers/RepeatVector +- from: /tf/compat/v1/keras/layers/Reshape + to: /tf/keras/layers/Reshape +- from: /tf/compat/v1/keras/layers/SeparableConv1D + to: /tf/keras/layers/SeparableConv1D +- from: /tf/compat/v1/keras/layers/SeparableConv2D + to: /tf/keras/layers/SeparableConv2D +- from: /tf/compat/v1/keras/layers/SeparableConvolution1D + to: /tf/keras/layers/SeparableConv1D +- from: /tf/compat/v1/keras/layers/SeparableConvolution2D + to: /tf/keras/layers/SeparableConv2D +- from: /tf/compat/v1/keras/layers/SimpleRNN + to: /tf/keras/layers/SimpleRNN +- from: /tf/compat/v1/keras/layers/SimpleRNNCell + to: /tf/keras/layers/SimpleRNNCell +- from: /tf/compat/v1/keras/layers/Softmax + to: /tf/keras/layers/Softmax +- from: /tf/compat/v1/keras/layers/SpatialDropout1D + to: /tf/keras/layers/SpatialDropout1D +- from: /tf/compat/v1/keras/layers/SpatialDropout2D + to: /tf/keras/layers/SpatialDropout2D +- from: /tf/compat/v1/keras/layers/SpatialDropout3D + to: /tf/keras/layers/SpatialDropout3D +- from: /tf/compat/v1/keras/layers/StackedRNNCells + to: /tf/keras/layers/StackedRNNCells +- from: /tf/compat/v1/keras/layers/Subtract + to: /tf/keras/layers/Subtract +- from: /tf/compat/v1/keras/layers/ThresholdedReLU + to: /tf/keras/layers/ThresholdedReLU +- from: /tf/compat/v1/keras/layers/TimeDistributed + to: /tf/keras/layers/TimeDistributed +- from: /tf/compat/v1/keras/layers/UpSampling1D + to: /tf/keras/layers/UpSampling1D +- from: /tf/compat/v1/keras/layers/UpSampling2D + to: /tf/keras/layers/UpSampling2D +- from: /tf/compat/v1/keras/layers/UpSampling3D + to: /tf/keras/layers/UpSampling3D +- from: /tf/compat/v1/keras/layers/Wrapper + to: /tf/keras/layers/Wrapper +- from: /tf/compat/v1/keras/layers/ZeroPadding1D + to: /tf/keras/layers/ZeroPadding1D +- from: /tf/compat/v1/keras/layers/ZeroPadding2D + to: /tf/keras/layers/ZeroPadding2D +- from: /tf/compat/v1/keras/layers/ZeroPadding3D + to: /tf/keras/layers/ZeroPadding3D +- from: /tf/compat/v1/keras/layers/add + to: /tf/keras/layers/add +- from: /tf/compat/v1/keras/layers/average + to: /tf/keras/layers/average +- from: /tf/compat/v1/keras/layers/concatenate + to: /tf/keras/layers/concatenate +- from: /tf/compat/v1/keras/layers/deserialize + to: /tf/keras/layers/deserialize +- from: /tf/compat/v1/keras/layers/dot + to: /tf/keras/layers/dot +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/CenterCrop + to: /tf/keras/layers/experimental/preprocessing/CenterCrop +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/PreprocessingLayer + to: /tf/keras/layers/experimental/preprocessing/PreprocessingLayer +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/RandomContrast + to: /tf/keras/layers/experimental/preprocessing/RandomContrast +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/RandomCrop + to: /tf/keras/layers/experimental/preprocessing/RandomCrop +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/RandomFlip + to: 
/tf/keras/layers/experimental/preprocessing/RandomFlip +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/RandomHeight + to: /tf/keras/layers/experimental/preprocessing/RandomHeight +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/RandomRotation + to: /tf/keras/layers/experimental/preprocessing/RandomRotation +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/RandomTranslation + to: /tf/keras/layers/experimental/preprocessing/RandomTranslation +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/RandomWidth + to: /tf/keras/layers/experimental/preprocessing/RandomWidth +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/Rescaling + to: /tf/keras/layers/experimental/preprocessing/Rescaling +- from: /tf/compat/v1/keras/layers/experimental/preprocessing/Resizing + to: /tf/keras/layers/experimental/preprocessing/Resizing +- from: /tf/compat/v1/keras/layers/maximum + to: /tf/keras/layers/maximum +- from: /tf/compat/v1/keras/layers/minimum + to: /tf/keras/layers/minimum +- from: /tf/compat/v1/keras/layers/multiply + to: /tf/keras/layers/multiply +- from: /tf/compat/v1/keras/layers/serialize + to: /tf/keras/layers/serialize +- from: /tf/compat/v1/keras/layers/subtract + to: /tf/keras/layers/subtract +- from: /tf/compat/v1/keras/losses/BinaryCrossentropy + to: /tf/keras/losses/BinaryCrossentropy +- from: /tf/compat/v1/keras/losses/CategoricalCrossentropy + to: /tf/keras/losses/CategoricalCrossentropy +- from: /tf/compat/v1/keras/losses/CategoricalHinge + to: /tf/keras/losses/CategoricalHinge +- from: /tf/compat/v1/keras/losses/CosineSimilarity + to: /tf/keras/losses/CosineSimilarity +- from: /tf/compat/v1/keras/losses/Hinge + to: /tf/keras/losses/Hinge +- from: /tf/compat/v1/keras/losses/Huber + to: /tf/keras/losses/Huber +- from: /tf/compat/v1/keras/losses/KLD + to: /tf/keras/losses/KLD +- from: /tf/compat/v1/keras/losses/KLDivergence + to: /tf/keras/losses/KLDivergence +- from: /tf/compat/v1/keras/losses/LogCosh + to: /tf/keras/losses/LogCosh +- from: /tf/compat/v1/keras/losses/Loss + to: /tf/keras/losses/Loss +- from: /tf/compat/v1/keras/losses/MAE + to: /tf/keras/losses/MAE +- from: /tf/compat/v1/keras/losses/MAPE + to: /tf/keras/losses/MAPE +- from: /tf/compat/v1/keras/losses/MSE + to: /tf/keras/losses/MSE +- from: /tf/compat/v1/keras/losses/MSLE + to: /tf/keras/losses/MSLE +- from: /tf/compat/v1/keras/losses/MeanAbsoluteError + to: /tf/keras/losses/MeanAbsoluteError +- from: /tf/compat/v1/keras/losses/MeanAbsolutePercentageError + to: /tf/keras/losses/MeanAbsolutePercentageError +- from: /tf/compat/v1/keras/losses/MeanSquaredError + to: /tf/keras/losses/MeanSquaredError +- from: /tf/compat/v1/keras/losses/MeanSquaredLogarithmicError + to: /tf/keras/losses/MeanSquaredLogarithmicError +- from: /tf/compat/v1/keras/losses/Poisson + to: /tf/keras/losses/Poisson +- from: /tf/compat/v1/keras/losses/SparseCategoricalCrossentropy + to: /tf/keras/losses/SparseCategoricalCrossentropy +- from: /tf/compat/v1/keras/losses/SquaredHinge + to: /tf/keras/losses/SquaredHinge +- from: /tf/compat/v1/keras/losses/binary_crossentropy + to: /tf/keras/losses/binary_crossentropy +- from: /tf/compat/v1/keras/losses/categorical_crossentropy + to: /tf/keras/losses/categorical_crossentropy +- from: /tf/compat/v1/keras/losses/categorical_hinge + to: /tf/keras/losses/categorical_hinge +- from: /tf/compat/v1/keras/losses/cosine + to: /tf/keras/losses/cosine_similarity +- from: /tf/compat/v1/keras/losses/cosine_proximity + to: /tf/keras/losses/cosine_similarity +- 
from: /tf/compat/v1/keras/losses/cosine_similarity + to: /tf/keras/losses/cosine_similarity +- from: /tf/compat/v1/keras/losses/deserialize + to: /tf/keras/losses/deserialize +- from: /tf/compat/v1/keras/losses/get + to: /tf/keras/losses/get +- from: /tf/compat/v1/keras/losses/hinge + to: /tf/keras/losses/hinge +- from: /tf/compat/v1/keras/losses/kld + to: /tf/keras/losses/KLD +- from: /tf/compat/v1/keras/losses/kullback_leibler_divergence + to: /tf/keras/losses/KLD +- from: /tf/compat/v1/keras/losses/logcosh + to: /tf/keras/losses/logcosh +- from: /tf/compat/v1/keras/losses/mae + to: /tf/keras/losses/MAE +- from: /tf/compat/v1/keras/losses/mape + to: /tf/keras/losses/MAPE +- from: /tf/compat/v1/keras/losses/mean_absolute_error + to: /tf/keras/losses/MAE +- from: /tf/compat/v1/keras/losses/mean_absolute_percentage_error + to: /tf/keras/losses/MAPE +- from: /tf/compat/v1/keras/losses/mean_squared_error + to: /tf/keras/losses/MSE +- from: /tf/compat/v1/keras/losses/mean_squared_logarithmic_error + to: /tf/keras/losses/MSLE +- from: /tf/compat/v1/keras/losses/mse + to: /tf/keras/losses/MSE +- from: /tf/compat/v1/keras/losses/msle + to: /tf/keras/losses/MSLE +- from: /tf/compat/v1/keras/losses/poisson + to: /tf/keras/losses/poisson +- from: /tf/compat/v1/keras/losses/serialize + to: /tf/keras/losses/serialize +- from: /tf/compat/v1/keras/losses/sparse_categorical_crossentropy + to: /tf/keras/losses/sparse_categorical_crossentropy +- from: /tf/compat/v1/keras/losses/squared_hinge + to: /tf/keras/losses/squared_hinge +- from: /tf/compat/v1/keras/metrics/AUC + to: /tf/keras/metrics/AUC +- from: /tf/compat/v1/keras/metrics/Accuracy + to: /tf/keras/metrics/Accuracy +- from: /tf/compat/v1/keras/metrics/BinaryAccuracy + to: /tf/keras/metrics/BinaryAccuracy +- from: /tf/compat/v1/keras/metrics/BinaryCrossentropy + to: /tf/keras/metrics/BinaryCrossentropy +- from: /tf/compat/v1/keras/metrics/CategoricalAccuracy + to: /tf/keras/metrics/CategoricalAccuracy +- from: /tf/compat/v1/keras/metrics/CategoricalCrossentropy + to: /tf/keras/metrics/CategoricalCrossentropy +- from: /tf/compat/v1/keras/metrics/CategoricalHinge + to: /tf/keras/metrics/CategoricalHinge +- from: /tf/compat/v1/keras/metrics/CosineSimilarity + to: /tf/keras/metrics/CosineSimilarity +- from: /tf/compat/v1/keras/metrics/FalseNegatives + to: /tf/keras/metrics/FalseNegatives +- from: /tf/compat/v1/keras/metrics/FalsePositives + to: /tf/keras/metrics/FalsePositives +- from: /tf/compat/v1/keras/metrics/Hinge + to: /tf/keras/metrics/Hinge +- from: /tf/compat/v1/keras/metrics/KLD + to: /tf/keras/losses/KLD +- from: /tf/compat/v1/keras/metrics/KLDivergence + to: /tf/keras/metrics/KLDivergence +- from: /tf/compat/v1/keras/metrics/LogCoshError + to: /tf/keras/metrics/LogCoshError +- from: /tf/compat/v1/keras/metrics/MAE + to: /tf/keras/losses/MAE +- from: /tf/compat/v1/keras/metrics/MAPE + to: /tf/keras/losses/MAPE +- from: /tf/compat/v1/keras/metrics/MSE + to: /tf/keras/losses/MSE +- from: /tf/compat/v1/keras/metrics/MSLE + to: /tf/keras/losses/MSLE +- from: /tf/compat/v1/keras/metrics/Mean + to: /tf/keras/metrics/Mean +- from: /tf/compat/v1/keras/metrics/MeanAbsoluteError + to: /tf/keras/metrics/MeanAbsoluteError +- from: /tf/compat/v1/keras/metrics/MeanAbsolutePercentageError + to: /tf/keras/metrics/MeanAbsolutePercentageError +- from: /tf/compat/v1/keras/metrics/MeanIoU + to: /tf/keras/metrics/MeanIoU +- from: /tf/compat/v1/keras/metrics/MeanRelativeError + to: /tf/keras/metrics/MeanRelativeError +- from: 
/tf/compat/v1/keras/metrics/MeanSquaredError + to: /tf/keras/metrics/MeanSquaredError +- from: /tf/compat/v1/keras/metrics/MeanSquaredLogarithmicError + to: /tf/keras/metrics/MeanSquaredLogarithmicError +- from: /tf/compat/v1/keras/metrics/MeanTensor + to: /tf/keras/metrics/MeanTensor +- from: /tf/compat/v1/keras/metrics/Metric + to: /tf/keras/metrics/Metric +- from: /tf/compat/v1/keras/metrics/Poisson + to: /tf/keras/metrics/Poisson +- from: /tf/compat/v1/keras/metrics/Precision + to: /tf/keras/metrics/Precision +- from: /tf/compat/v1/keras/metrics/PrecisionAtRecall + to: /tf/keras/metrics/PrecisionAtRecall +- from: /tf/compat/v1/keras/metrics/Recall + to: /tf/keras/metrics/Recall +- from: /tf/compat/v1/keras/metrics/RecallAtPrecision + to: /tf/keras/metrics/RecallAtPrecision +- from: /tf/compat/v1/keras/metrics/RootMeanSquaredError + to: /tf/keras/metrics/RootMeanSquaredError +- from: /tf/compat/v1/keras/metrics/SensitivityAtSpecificity + to: /tf/keras/metrics/SensitivityAtSpecificity +- from: /tf/compat/v1/keras/metrics/SparseCategoricalAccuracy + to: /tf/keras/metrics/SparseCategoricalAccuracy +- from: /tf/compat/v1/keras/metrics/SparseCategoricalCrossentropy + to: /tf/keras/metrics/SparseCategoricalCrossentropy +- from: /tf/compat/v1/keras/metrics/SparseTopKCategoricalAccuracy + to: /tf/keras/metrics/SparseTopKCategoricalAccuracy +- from: /tf/compat/v1/keras/metrics/SpecificityAtSensitivity + to: /tf/keras/metrics/SpecificityAtSensitivity +- from: /tf/compat/v1/keras/metrics/SquaredHinge + to: /tf/keras/metrics/SquaredHinge +- from: /tf/compat/v1/keras/metrics/Sum + to: /tf/keras/metrics/Sum +- from: /tf/compat/v1/keras/metrics/TopKCategoricalAccuracy + to: /tf/keras/metrics/TopKCategoricalAccuracy +- from: /tf/compat/v1/keras/metrics/TrueNegatives + to: /tf/keras/metrics/TrueNegatives +- from: /tf/compat/v1/keras/metrics/TruePositives + to: /tf/keras/metrics/TruePositives +- from: /tf/compat/v1/keras/metrics/binary_accuracy + to: /tf/keras/metrics/binary_accuracy +- from: /tf/compat/v1/keras/metrics/binary_crossentropy + to: /tf/keras/losses/binary_crossentropy +- from: /tf/compat/v1/keras/metrics/categorical_accuracy + to: /tf/keras/metrics/categorical_accuracy +- from: /tf/compat/v1/keras/metrics/categorical_crossentropy + to: /tf/keras/losses/categorical_crossentropy +- from: /tf/compat/v1/keras/metrics/cosine + to: /tf/keras/losses/cosine_similarity +- from: /tf/compat/v1/keras/metrics/cosine_proximity + to: /tf/keras/losses/cosine_similarity +- from: /tf/compat/v1/keras/metrics/deserialize + to: /tf/keras/metrics/deserialize +- from: /tf/compat/v1/keras/metrics/get + to: /tf/keras/metrics/get +- from: /tf/compat/v1/keras/metrics/hinge + to: /tf/keras/losses/hinge +- from: /tf/compat/v1/keras/metrics/kld + to: /tf/keras/losses/KLD +- from: /tf/compat/v1/keras/metrics/kullback_leibler_divergence + to: /tf/keras/losses/KLD +- from: /tf/compat/v1/keras/metrics/mae + to: /tf/keras/losses/MAE +- from: /tf/compat/v1/keras/metrics/mape + to: /tf/keras/losses/MAPE +- from: /tf/compat/v1/keras/metrics/mean_absolute_error + to: /tf/keras/losses/MAE +- from: /tf/compat/v1/keras/metrics/mean_absolute_percentage_error + to: /tf/keras/losses/MAPE +- from: /tf/compat/v1/keras/metrics/mean_squared_error + to: /tf/keras/losses/MSE +- from: /tf/compat/v1/keras/metrics/mean_squared_logarithmic_error + to: /tf/keras/losses/MSLE +- from: /tf/compat/v1/keras/metrics/mse + to: /tf/keras/losses/MSE +- from: /tf/compat/v1/keras/metrics/msle + to: /tf/keras/losses/MSLE +- from: 
/tf/compat/v1/keras/metrics/poisson + to: /tf/keras/losses/poisson +- from: /tf/compat/v1/keras/metrics/serialize + to: /tf/keras/metrics/serialize +- from: /tf/compat/v1/keras/metrics/sparse_categorical_accuracy + to: /tf/keras/metrics/sparse_categorical_accuracy +- from: /tf/compat/v1/keras/metrics/sparse_categorical_crossentropy + to: /tf/keras/losses/sparse_categorical_crossentropy +- from: /tf/compat/v1/keras/metrics/sparse_top_k_categorical_accuracy + to: /tf/keras/metrics/sparse_top_k_categorical_accuracy +- from: /tf/compat/v1/keras/metrics/squared_hinge + to: /tf/keras/losses/squared_hinge +- from: /tf/compat/v1/keras/metrics/top_k_categorical_accuracy + to: /tf/keras/metrics/top_k_categorical_accuracy +- from: /tf/compat/v1/keras/mixed_precision/experimental/LossScaleOptimizer + to: /tf/keras/mixed_precision/experimental/LossScaleOptimizer +- from: /tf/compat/v1/keras/mixed_precision/experimental/Policy + to: /tf/keras/mixed_precision/experimental/Policy +- from: /tf/compat/v1/keras/mixed_precision/experimental/get_layer_policy + to: /tf/keras/mixed_precision/experimental/get_layer_policy +- from: /tf/compat/v1/keras/mixed_precision/experimental/global_policy + to: /tf/keras/mixed_precision/experimental/global_policy +- from: /tf/compat/v1/keras/mixed_precision/experimental/set_policy + to: /tf/keras/mixed_precision/experimental/set_policy +- from: /tf/compat/v1/keras/models/Model + to: /tf/keras/Model +- from: /tf/compat/v1/keras/models/Sequential + to: /tf/keras/Sequential +- from: /tf/compat/v1/keras/models/clone_model + to: /tf/keras/models/clone_model +- from: /tf/compat/v1/keras/models/load_model + to: /tf/keras/models/load_model +- from: /tf/compat/v1/keras/models/model_from_config + to: /tf/keras/models/model_from_config +- from: /tf/compat/v1/keras/models/model_from_json + to: /tf/keras/models/model_from_json +- from: /tf/compat/v1/keras/models/model_from_yaml + to: /tf/keras/models/model_from_yaml +- from: /tf/compat/v1/keras/models/save_model + to: /tf/keras/models/save_model +- from: /tf/compat/v1/keras/optimizers/Adadelta + to: /tf/keras/optimizers/Adadelta +- from: /tf/compat/v1/keras/optimizers/Adagrad + to: /tf/keras/optimizers/Adagrad +- from: /tf/compat/v1/keras/optimizers/Adam + to: /tf/keras/optimizers/Adam +- from: /tf/compat/v1/keras/optimizers/Adamax + to: /tf/keras/optimizers/Adamax +- from: /tf/compat/v1/keras/optimizers/Ftrl + to: /tf/keras/optimizers/Ftrl +- from: /tf/compat/v1/keras/optimizers/Nadam + to: /tf/keras/optimizers/Nadam +- from: /tf/compat/v1/keras/optimizers/Optimizer + to: /tf/keras/optimizers/Optimizer +- from: /tf/compat/v1/keras/optimizers/RMSprop + to: /tf/keras/optimizers/RMSprop +- from: /tf/compat/v1/keras/optimizers/SGD + to: /tf/keras/optimizers/SGD +- from: /tf/compat/v1/keras/optimizers/deserialize + to: /tf/keras/optimizers/deserialize +- from: /tf/compat/v1/keras/optimizers/get + to: /tf/keras/optimizers/get +- from: /tf/compat/v1/keras/optimizers/schedules/ExponentialDecay + to: /tf/keras/optimizers/schedules/ExponentialDecay +- from: /tf/compat/v1/keras/optimizers/schedules/InverseTimeDecay + to: /tf/keras/optimizers/schedules/InverseTimeDecay +- from: /tf/compat/v1/keras/optimizers/schedules/LearningRateSchedule + to: /tf/keras/optimizers/schedules/LearningRateSchedule +- from: /tf/compat/v1/keras/optimizers/schedules/PiecewiseConstantDecay + to: /tf/keras/optimizers/schedules/PiecewiseConstantDecay +- from: /tf/compat/v1/keras/optimizers/schedules/PolynomialDecay + to: /tf/keras/optimizers/schedules/PolynomialDecay +- 
from: /tf/compat/v1/keras/optimizers/schedules/deserialize + to: /tf/keras/optimizers/schedules/deserialize +- from: /tf/compat/v1/keras/optimizers/schedules/serialize + to: /tf/keras/optimizers/schedules/serialize +- from: /tf/compat/v1/keras/optimizers/serialize + to: /tf/keras/optimizers/serialize +- from: /tf/compat/v1/keras/preprocessing/image/DirectoryIterator + to: /tf/keras/preprocessing/image/DirectoryIterator +- from: /tf/compat/v1/keras/preprocessing/image/ImageDataGenerator + to: /tf/keras/preprocessing/image/ImageDataGenerator +- from: /tf/compat/v1/keras/preprocessing/image/Iterator + to: /tf/keras/preprocessing/image/Iterator +- from: /tf/compat/v1/keras/preprocessing/image/NumpyArrayIterator + to: /tf/keras/preprocessing/image/NumpyArrayIterator +- from: /tf/compat/v1/keras/preprocessing/image/apply_affine_transform + to: /tf/keras/preprocessing/image/apply_affine_transform +- from: /tf/compat/v1/keras/preprocessing/image/apply_brightness_shift + to: /tf/keras/preprocessing/image/apply_brightness_shift +- from: /tf/compat/v1/keras/preprocessing/image/apply_channel_shift + to: /tf/keras/preprocessing/image/apply_channel_shift +- from: /tf/compat/v1/keras/preprocessing/image/array_to_img + to: /tf/keras/preprocessing/image/array_to_img +- from: /tf/compat/v1/keras/preprocessing/image/img_to_array + to: /tf/keras/preprocessing/image/img_to_array +- from: /tf/compat/v1/keras/preprocessing/image/load_img + to: /tf/keras/preprocessing/image/load_img +- from: /tf/compat/v1/keras/preprocessing/image/random_brightness + to: /tf/keras/preprocessing/image/random_brightness +- from: /tf/compat/v1/keras/preprocessing/image/random_channel_shift + to: /tf/keras/preprocessing/image/random_channel_shift +- from: /tf/compat/v1/keras/preprocessing/image/random_rotation + to: /tf/keras/preprocessing/image/random_rotation +- from: /tf/compat/v1/keras/preprocessing/image/random_shear + to: /tf/keras/preprocessing/image/random_shear +- from: /tf/compat/v1/keras/preprocessing/image/random_shift + to: /tf/keras/preprocessing/image/random_shift +- from: /tf/compat/v1/keras/preprocessing/image/random_zoom + to: /tf/keras/preprocessing/image/random_zoom +- from: /tf/compat/v1/keras/preprocessing/image/save_img + to: /tf/keras/preprocessing/image/save_img +- from: /tf/compat/v1/keras/preprocessing/sequence/TimeseriesGenerator + to: /tf/keras/preprocessing/sequence/TimeseriesGenerator +- from: /tf/compat/v1/keras/preprocessing/sequence/make_sampling_table + to: /tf/keras/preprocessing/sequence/make_sampling_table +- from: /tf/compat/v1/keras/preprocessing/sequence/pad_sequences + to: /tf/keras/preprocessing/sequence/pad_sequences +- from: /tf/compat/v1/keras/preprocessing/sequence/skipgrams + to: /tf/keras/preprocessing/sequence/skipgrams +- from: /tf/compat/v1/keras/preprocessing/text/Tokenizer + to: /tf/keras/preprocessing/text/Tokenizer +- from: /tf/compat/v1/keras/preprocessing/text/hashing_trick + to: /tf/keras/preprocessing/text/hashing_trick +- from: /tf/compat/v1/keras/preprocessing/text/one_hot + to: /tf/keras/preprocessing/text/one_hot +- from: /tf/compat/v1/keras/preprocessing/text/text_to_word_sequence + to: /tf/keras/preprocessing/text/text_to_word_sequence +- from: /tf/compat/v1/keras/preprocessing/text/tokenizer_from_json + to: /tf/keras/preprocessing/text/tokenizer_from_json +- from: /tf/compat/v1/keras/regularizers/L1L2 + to: /tf/keras/regularizers/L1L2 +- from: /tf/compat/v1/keras/regularizers/Regularizer + to: /tf/keras/regularizers/Regularizer +- from: 
/tf/compat/v1/keras/regularizers/deserialize + to: /tf/keras/regularizers/deserialize +- from: /tf/compat/v1/keras/regularizers/get + to: /tf/keras/regularizers/get +- from: /tf/compat/v1/keras/regularizers/l1 + to: /tf/keras/regularizers/l1 +- from: /tf/compat/v1/keras/regularizers/l1_l2 + to: /tf/keras/regularizers/l1_l2 +- from: /tf/compat/v1/keras/regularizers/l2 + to: /tf/keras/regularizers/l2 +- from: /tf/compat/v1/keras/regularizers/serialize + to: /tf/keras/regularizers/serialize +- from: /tf/compat/v1/keras/utils/CustomObjectScope + to: /tf/keras/utils/CustomObjectScope +- from: /tf/compat/v1/keras/utils/GeneratorEnqueuer + to: /tf/keras/utils/GeneratorEnqueuer +- from: /tf/compat/v1/keras/utils/HDF5Matrix + to: /tf/keras/utils/HDF5Matrix +- from: /tf/compat/v1/keras/utils/OrderedEnqueuer + to: /tf/keras/utils/OrderedEnqueuer +- from: /tf/compat/v1/keras/utils/Progbar + to: /tf/keras/utils/Progbar +- from: /tf/compat/v1/keras/utils/Sequence + to: /tf/keras/utils/Sequence +- from: /tf/compat/v1/keras/utils/SequenceEnqueuer + to: /tf/keras/utils/SequenceEnqueuer +- from: /tf/compat/v1/keras/utils/convert_all_kernels_in_model + to: /tf/keras/utils/convert_all_kernels_in_model +- from: /tf/compat/v1/keras/utils/custom_object_scope + to: /tf/keras/utils/custom_object_scope +- from: /tf/compat/v1/keras/utils/deserialize_keras_object + to: /tf/keras/utils/deserialize_keras_object +- from: /tf/compat/v1/keras/utils/get_custom_objects + to: /tf/keras/utils/get_custom_objects +- from: /tf/compat/v1/keras/utils/get_file + to: /tf/keras/utils/get_file +- from: /tf/compat/v1/keras/utils/get_registered_name + to: /tf/keras/utils/get_registered_name +- from: /tf/compat/v1/keras/utils/get_registered_object + to: /tf/keras/utils/get_registered_object +- from: /tf/compat/v1/keras/utils/get_source_inputs + to: /tf/keras/utils/get_source_inputs +- from: /tf/compat/v1/keras/utils/model_to_dot + to: /tf/keras/utils/model_to_dot +- from: /tf/compat/v1/keras/utils/multi_gpu_model + to: /tf/keras/utils/multi_gpu_model +- from: /tf/compat/v1/keras/utils/normalize + to: /tf/keras/utils/normalize +- from: /tf/compat/v1/keras/utils/plot_model + to: /tf/keras/utils/plot_model +- from: /tf/compat/v1/keras/utils/register_keras_serializable + to: /tf/keras/utils/register_keras_serializable +- from: /tf/compat/v1/keras/utils/serialize_keras_object + to: /tf/keras/utils/serialize_keras_object +- from: /tf/compat/v1/keras/utils/to_categorical + to: /tf/keras/utils/to_categorical +- from: /tf/compat/v1/keras/wrappers/scikit_learn/KerasClassifier + to: /tf/keras/wrappers/scikit_learn/KerasClassifier +- from: /tf/compat/v1/keras/wrappers/scikit_learn/KerasRegressor + to: /tf/keras/wrappers/scikit_learn/KerasRegressor +- from: /tf/compat/v1/layers/InputSpec + to: /tf/keras/layers/InputSpec +- from: /tf/compat/v1/lbeta + to: /tf/math/lbeta +- from: /tf/compat/v1/less + to: /tf/math/less +- from: /tf/compat/v1/less_equal + to: /tf/math/less_equal +- from: /tf/compat/v1/lgamma + to: /tf/math/lgamma +- from: /tf/compat/v1/lin_space + to: /tf/linspace +- from: /tf/compat/v1/linalg/LinearOperator + to: /tf/linalg/LinearOperator +- from: /tf/compat/v1/linalg/LinearOperatorAdjoint + to: /tf/linalg/LinearOperatorAdjoint +- from: /tf/compat/v1/linalg/LinearOperatorBlockDiag + to: /tf/linalg/LinearOperatorBlockDiag +- from: /tf/compat/v1/linalg/LinearOperatorBlockLowerTriangular + to: /tf/linalg/LinearOperatorBlockLowerTriangular +- from: /tf/compat/v1/linalg/LinearOperatorCirculant + to: /tf/linalg/LinearOperatorCirculant +- 
from: /tf/compat/v1/linalg/LinearOperatorCirculant2D + to: /tf/linalg/LinearOperatorCirculant2D +- from: /tf/compat/v1/linalg/LinearOperatorCirculant3D + to: /tf/linalg/LinearOperatorCirculant3D +- from: /tf/compat/v1/linalg/LinearOperatorComposition + to: /tf/linalg/LinearOperatorComposition +- from: /tf/compat/v1/linalg/LinearOperatorDiag + to: /tf/linalg/LinearOperatorDiag +- from: /tf/compat/v1/linalg/LinearOperatorFullMatrix + to: /tf/linalg/LinearOperatorFullMatrix +- from: /tf/compat/v1/linalg/LinearOperatorHouseholder + to: /tf/linalg/LinearOperatorHouseholder +- from: /tf/compat/v1/linalg/LinearOperatorIdentity + to: /tf/linalg/LinearOperatorIdentity +- from: /tf/compat/v1/linalg/LinearOperatorInversion + to: /tf/linalg/LinearOperatorInversion +- from: /tf/compat/v1/linalg/LinearOperatorKronecker + to: /tf/linalg/LinearOperatorKronecker +- from: /tf/compat/v1/linalg/LinearOperatorLowRankUpdate + to: /tf/linalg/LinearOperatorLowRankUpdate +- from: /tf/compat/v1/linalg/LinearOperatorLowerTriangular + to: /tf/linalg/LinearOperatorLowerTriangular +- from: /tf/compat/v1/linalg/LinearOperatorPermutation + to: /tf/linalg/LinearOperatorPermutation +- from: /tf/compat/v1/linalg/LinearOperatorScaledIdentity + to: /tf/linalg/LinearOperatorScaledIdentity +- from: /tf/compat/v1/linalg/LinearOperatorToeplitz + to: /tf/linalg/LinearOperatorToeplitz +- from: /tf/compat/v1/linalg/LinearOperatorTridiag + to: /tf/linalg/LinearOperatorTridiag +- from: /tf/compat/v1/linalg/LinearOperatorZeros + to: /tf/linalg/LinearOperatorZeros +- from: /tf/compat/v1/linalg/adjoint + to: /tf/linalg/adjoint +- from: /tf/compat/v1/linalg/band_part + to: /tf/linalg/band_part +- from: /tf/compat/v1/linalg/cholesky + to: /tf/linalg/cholesky +- from: /tf/compat/v1/linalg/cholesky_solve + to: /tf/linalg/cholesky_solve +- from: /tf/compat/v1/linalg/cross + to: /tf/linalg/cross +- from: /tf/compat/v1/linalg/det + to: /tf/linalg/det +- from: /tf/compat/v1/linalg/diag + to: /tf/linalg/diag +- from: /tf/compat/v1/linalg/diag_part + to: /tf/linalg/diag_part +- from: /tf/compat/v1/linalg/eigh + to: /tf/linalg/eigh +- from: /tf/compat/v1/linalg/eigvalsh + to: /tf/linalg/eigvalsh +- from: /tf/compat/v1/linalg/einsum + to: /tf/einsum +- from: /tf/compat/v1/linalg/experimental/conjugate_gradient + to: /tf/linalg/experimental/conjugate_gradient +- from: /tf/compat/v1/linalg/expm + to: /tf/linalg/expm +- from: /tf/compat/v1/linalg/eye + to: /tf/eye +- from: /tf/compat/v1/linalg/global_norm + to: /tf/linalg/global_norm +- from: /tf/compat/v1/linalg/inv + to: /tf/linalg/inv +- from: /tf/compat/v1/linalg/logdet + to: /tf/linalg/logdet +- from: /tf/compat/v1/linalg/logm + to: /tf/linalg/logm +- from: /tf/compat/v1/linalg/lstsq + to: /tf/linalg/lstsq +- from: /tf/compat/v1/linalg/lu + to: /tf/linalg/lu +- from: /tf/compat/v1/linalg/lu_matrix_inverse + to: /tf/linalg/lu_matrix_inverse +- from: /tf/compat/v1/linalg/lu_reconstruct + to: /tf/linalg/lu_reconstruct +- from: /tf/compat/v1/linalg/lu_solve + to: /tf/linalg/lu_solve +- from: /tf/compat/v1/linalg/matmul + to: /tf/linalg/matmul +- from: /tf/compat/v1/linalg/matrix_rank + to: /tf/linalg/matrix_rank +- from: /tf/compat/v1/linalg/matrix_transpose + to: /tf/linalg/matrix_transpose +- from: /tf/compat/v1/linalg/matvec + to: /tf/linalg/matvec +- from: /tf/compat/v1/linalg/norm + to: /tf/compat/v1/norm +- from: /tf/compat/v1/linalg/normalize + to: /tf/linalg/normalize +- from: /tf/compat/v1/linalg/pinv + to: /tf/linalg/pinv +- from: /tf/compat/v1/linalg/qr + to: /tf/linalg/qr +- from: 
/tf/compat/v1/linalg/set_diag + to: /tf/linalg/set_diag +- from: /tf/compat/v1/linalg/slogdet + to: /tf/linalg/slogdet +- from: /tf/compat/v1/linalg/solve + to: /tf/linalg/solve +- from: /tf/compat/v1/linalg/sqrtm + to: /tf/linalg/sqrtm +- from: /tf/compat/v1/linalg/svd + to: /tf/linalg/svd +- from: /tf/compat/v1/linalg/tensor_diag + to: /tf/linalg/tensor_diag +- from: /tf/compat/v1/linalg/tensor_diag_part + to: /tf/linalg/tensor_diag_part +- from: /tf/compat/v1/linalg/tensordot + to: /tf/tensordot +- from: /tf/compat/v1/linalg/trace + to: /tf/linalg/trace +- from: /tf/compat/v1/linalg/transpose + to: /tf/linalg/matrix_transpose +- from: /tf/compat/v1/linalg/triangular_solve + to: /tf/linalg/triangular_solve +- from: /tf/compat/v1/linalg/tridiagonal_matmul + to: /tf/linalg/tridiagonal_matmul +- from: /tf/compat/v1/linalg/tridiagonal_solve + to: /tf/linalg/tridiagonal_solve +- from: /tf/compat/v1/linspace + to: /tf/linspace +- from: /tf/compat/v1/lite/Interpreter + to: /tf/lite/Interpreter +- from: /tf/compat/v1/lite/OpsSet + to: /tf/lite/OpsSet +- from: /tf/compat/v1/lite/Optimize + to: /tf/lite/Optimize +- from: /tf/compat/v1/lite/RepresentativeDataset + to: /tf/lite/RepresentativeDataset +- from: /tf/compat/v1/lite/TargetSpec + to: /tf/lite/TargetSpec +- from: /tf/compat/v1/lite/experimental/load_delegate + to: /tf/lite/experimental/load_delegate +- from: /tf/compat/v1/load_library + to: /tf/load_library +- from: /tf/compat/v1/load_op_library + to: /tf/load_op_library +- from: /tf/compat/v1/log + to: /tf/math/log +- from: /tf/compat/v1/log1p + to: /tf/math/log1p +- from: /tf/compat/v1/log_sigmoid + to: /tf/math/log_sigmoid +- from: /tf/compat/v1/logical_and + to: /tf/math/logical_and +- from: /tf/compat/v1/logical_not + to: /tf/math/logical_not +- from: /tf/compat/v1/logical_or + to: /tf/math/logical_or +- from: /tf/compat/v1/logical_xor + to: /tf/math/logical_xor +- from: /tf/compat/v1/lookup/KeyValueTensorInitializer + to: /tf/lookup/KeyValueTensorInitializer +- from: /tf/compat/v1/lookup/TextFileIndex + to: /tf/lookup/TextFileIndex +- from: /tf/compat/v1/lookup/TextFileInitializer + to: /tf/lookup/TextFileInitializer +- from: /tf/compat/v1/lookup/experimental/DenseHashTable + to: /tf/lookup/experimental/DenseHashTable +- from: /tf/compat/v1/make_ndarray + to: /tf/make_ndarray +- from: /tf/compat/v1/make_tensor_proto + to: /tf/make_tensor_proto +- from: /tf/compat/v1/manip/batch_to_space_nd + to: /tf/compat/v1/batch_to_space_nd +- from: /tf/compat/v1/manip/gather_nd + to: /tf/compat/v1/gather_nd +- from: /tf/compat/v1/manip/reshape + to: /tf/reshape +- from: /tf/compat/v1/manip/reverse + to: /tf/reverse +- from: /tf/compat/v1/manip/roll + to: /tf/roll +- from: /tf/compat/v1/manip/scatter_nd + to: /tf/scatter_nd +- from: /tf/compat/v1/manip/space_to_batch_nd + to: /tf/space_to_batch_nd +- from: /tf/compat/v1/manip/tile + to: /tf/tile +- from: /tf/compat/v1/matching_files + to: /tf/io/matching_files +- from: /tf/compat/v1/math/abs + to: /tf/math/abs +- from: /tf/compat/v1/math/accumulate_n + to: /tf/math/accumulate_n +- from: /tf/compat/v1/math/acos + to: /tf/math/acos +- from: /tf/compat/v1/math/acosh + to: /tf/math/acosh +- from: /tf/compat/v1/math/add + to: /tf/math/add +- from: /tf/compat/v1/math/add_n + to: /tf/math/add_n +- from: /tf/compat/v1/math/angle + to: /tf/math/angle +- from: /tf/compat/v1/math/argmax + to: /tf/compat/v1/argmax +- from: /tf/compat/v1/math/argmin + to: /tf/compat/v1/argmin +- from: /tf/compat/v1/math/asin + to: /tf/math/asin +- from: /tf/compat/v1/math/asinh 
+ to: /tf/math/asinh +- from: /tf/compat/v1/math/atan + to: /tf/math/atan +- from: /tf/compat/v1/math/atan2 + to: /tf/math/atan2 +- from: /tf/compat/v1/math/atanh + to: /tf/math/atanh +- from: /tf/compat/v1/math/bessel_i0 + to: /tf/math/bessel_i0 +- from: /tf/compat/v1/math/bessel_i0e + to: /tf/math/bessel_i0e +- from: /tf/compat/v1/math/bessel_i1 + to: /tf/math/bessel_i1 +- from: /tf/compat/v1/math/bessel_i1e + to: /tf/math/bessel_i1e +- from: /tf/compat/v1/math/betainc + to: /tf/math/betainc +- from: /tf/compat/v1/math/bincount + to: /tf/compat/v1/bincount +- from: /tf/compat/v1/math/ceil + to: /tf/math/ceil +- from: /tf/compat/v1/math/confusion_matrix + to: /tf/compat/v1/confusion_matrix +- from: /tf/compat/v1/math/conj + to: /tf/math/conj +- from: /tf/compat/v1/math/cos + to: /tf/math/cos +- from: /tf/compat/v1/math/cosh + to: /tf/math/cosh +- from: /tf/compat/v1/math/count_nonzero + to: /tf/compat/v1/count_nonzero +- from: /tf/compat/v1/math/cumprod + to: /tf/math/cumprod +- from: /tf/compat/v1/math/cumsum + to: /tf/math/cumsum +- from: /tf/compat/v1/math/cumulative_logsumexp + to: /tf/math/cumulative_logsumexp +- from: /tf/compat/v1/math/digamma + to: /tf/math/digamma +- from: /tf/compat/v1/math/divide + to: /tf/math/divide +- from: /tf/compat/v1/math/divide_no_nan + to: /tf/math/divide_no_nan +- from: /tf/compat/v1/math/equal + to: /tf/math/equal +- from: /tf/compat/v1/math/erf + to: /tf/math/erf +- from: /tf/compat/v1/math/erfc + to: /tf/math/erfc +- from: /tf/compat/v1/math/erfinv + to: /tf/math/erfinv +- from: /tf/compat/v1/math/exp + to: /tf/math/exp +- from: /tf/compat/v1/math/expm1 + to: /tf/math/expm1 +- from: /tf/compat/v1/math/floor + to: /tf/math/floor +- from: /tf/compat/v1/math/floordiv + to: /tf/math/floordiv +- from: /tf/compat/v1/math/floormod + to: /tf/math/floormod +- from: /tf/compat/v1/math/greater + to: /tf/math/greater +- from: /tf/compat/v1/math/greater_equal + to: /tf/math/greater_equal +- from: /tf/compat/v1/math/igamma + to: /tf/math/igamma +- from: /tf/compat/v1/math/igammac + to: /tf/math/igammac +- from: /tf/compat/v1/math/imag + to: /tf/math/imag +- from: /tf/compat/v1/math/invert_permutation + to: /tf/math/invert_permutation +- from: /tf/compat/v1/math/is_finite + to: /tf/math/is_finite +- from: /tf/compat/v1/math/is_inf + to: /tf/math/is_inf +- from: /tf/compat/v1/math/is_nan + to: /tf/math/is_nan +- from: /tf/compat/v1/math/is_non_decreasing + to: /tf/math/is_non_decreasing +- from: /tf/compat/v1/math/is_strictly_increasing + to: /tf/math/is_strictly_increasing +- from: /tf/compat/v1/math/l2_normalize + to: /tf/compat/v1/linalg/l2_normalize +- from: /tf/compat/v1/math/lbeta + to: /tf/math/lbeta +- from: /tf/compat/v1/math/less + to: /tf/math/less +- from: /tf/compat/v1/math/less_equal + to: /tf/math/less_equal +- from: /tf/compat/v1/math/lgamma + to: /tf/math/lgamma +- from: /tf/compat/v1/math/log + to: /tf/math/log +- from: /tf/compat/v1/math/log1p + to: /tf/math/log1p +- from: /tf/compat/v1/math/log_sigmoid + to: /tf/math/log_sigmoid +- from: /tf/compat/v1/math/logical_and + to: /tf/math/logical_and +- from: /tf/compat/v1/math/logical_not + to: /tf/math/logical_not +- from: /tf/compat/v1/math/logical_or + to: /tf/math/logical_or +- from: /tf/compat/v1/math/logical_xor + to: /tf/math/logical_xor +- from: /tf/compat/v1/math/maximum + to: /tf/math/maximum +- from: /tf/compat/v1/math/minimum + to: /tf/math/minimum +- from: /tf/compat/v1/math/mod + to: /tf/math/floormod +- from: /tf/compat/v1/math/multiply + to: /tf/math/multiply +- from: 
/tf/compat/v1/math/multiply_no_nan + to: /tf/math/multiply_no_nan +- from: /tf/compat/v1/math/ndtri + to: /tf/math/ndtri +- from: /tf/compat/v1/math/negative + to: /tf/math/negative +- from: /tf/compat/v1/math/nextafter + to: /tf/math/nextafter +- from: /tf/compat/v1/math/not_equal + to: /tf/math/not_equal +- from: /tf/compat/v1/math/polygamma + to: /tf/math/polygamma +- from: /tf/compat/v1/math/polyval + to: /tf/math/polyval +- from: /tf/compat/v1/math/pow + to: /tf/math/pow +- from: /tf/compat/v1/math/real + to: /tf/math/real +- from: /tf/compat/v1/math/reciprocal + to: /tf/math/reciprocal +- from: /tf/compat/v1/math/reciprocal_no_nan + to: /tf/math/reciprocal_no_nan +- from: /tf/compat/v1/math/reduce_all + to: /tf/compat/v1/reduce_all +- from: /tf/compat/v1/math/reduce_any + to: /tf/compat/v1/reduce_any +- from: /tf/compat/v1/math/reduce_euclidean_norm + to: /tf/math/reduce_euclidean_norm +- from: /tf/compat/v1/math/reduce_logsumexp + to: /tf/compat/v1/reduce_logsumexp +- from: /tf/compat/v1/math/reduce_max + to: /tf/compat/v1/reduce_max +- from: /tf/compat/v1/math/reduce_mean + to: /tf/compat/v1/reduce_mean +- from: /tf/compat/v1/math/reduce_min + to: /tf/compat/v1/reduce_min +- from: /tf/compat/v1/math/reduce_prod + to: /tf/compat/v1/reduce_prod +- from: /tf/compat/v1/math/reduce_std + to: /tf/math/reduce_std +- from: /tf/compat/v1/math/reduce_sum + to: /tf/compat/v1/reduce_sum +- from: /tf/compat/v1/math/reduce_variance + to: /tf/math/reduce_variance +- from: /tf/compat/v1/math/rint + to: /tf/math/rint +- from: /tf/compat/v1/math/round + to: /tf/math/round +- from: /tf/compat/v1/math/rsqrt + to: /tf/math/rsqrt +- from: /tf/compat/v1/math/scalar_mul + to: /tf/compat/v1/scalar_mul +- from: /tf/compat/v1/math/segment_max + to: /tf/math/segment_max +- from: /tf/compat/v1/math/segment_mean + to: /tf/math/segment_mean +- from: /tf/compat/v1/math/segment_min + to: /tf/math/segment_min +- from: /tf/compat/v1/math/segment_prod + to: /tf/math/segment_prod +- from: /tf/compat/v1/math/segment_sum + to: /tf/math/segment_sum +- from: /tf/compat/v1/math/sigmoid + to: /tf/math/sigmoid +- from: /tf/compat/v1/math/sign + to: /tf/math/sign +- from: /tf/compat/v1/math/sin + to: /tf/math/sin +- from: /tf/compat/v1/math/sinh + to: /tf/math/sinh +- from: /tf/compat/v1/math/sobol_sample + to: /tf/math/sobol_sample +- from: /tf/compat/v1/math/softplus + to: /tf/math/softplus +- from: /tf/compat/v1/math/softsign + to: /tf/nn/softsign +- from: /tf/compat/v1/math/special/dawsn + to: /tf/math/special/dawsn +- from: /tf/compat/v1/math/special/expint + to: /tf/math/special/expint +- from: /tf/compat/v1/math/special/fresnel_cos + to: /tf/math/special/fresnel_cos +- from: /tf/compat/v1/math/special/fresnel_sin + to: /tf/math/special/fresnel_sin +- from: /tf/compat/v1/math/special/spence + to: /tf/math/special/spence +- from: /tf/compat/v1/math/sqrt + to: /tf/math/sqrt +- from: /tf/compat/v1/math/square + to: /tf/math/square +- from: /tf/compat/v1/math/squared_difference + to: /tf/math/squared_difference +- from: /tf/compat/v1/math/subtract + to: /tf/math/subtract +- from: /tf/compat/v1/math/tan + to: /tf/math/tan +- from: /tf/compat/v1/math/tanh + to: /tf/math/tanh +- from: /tf/compat/v1/math/top_k + to: /tf/math/top_k +- from: /tf/compat/v1/math/truediv + to: /tf/math/truediv +- from: /tf/compat/v1/math/unsorted_segment_max + to: /tf/math/unsorted_segment_max +- from: /tf/compat/v1/math/unsorted_segment_mean + to: /tf/math/unsorted_segment_mean +- from: /tf/compat/v1/math/unsorted_segment_min + to: 
/tf/math/unsorted_segment_min +- from: /tf/compat/v1/math/unsorted_segment_prod + to: /tf/math/unsorted_segment_prod +- from: /tf/compat/v1/math/unsorted_segment_sqrt_n + to: /tf/math/unsorted_segment_sqrt_n +- from: /tf/compat/v1/math/unsorted_segment_sum + to: /tf/math/unsorted_segment_sum +- from: /tf/compat/v1/math/xdivy + to: /tf/math/xdivy +- from: /tf/compat/v1/math/xlog1py + to: /tf/math/xlog1py +- from: /tf/compat/v1/math/xlogy + to: /tf/math/xlogy +- from: /tf/compat/v1/math/zero_fraction + to: /tf/math/zero_fraction +- from: /tf/compat/v1/math/zeta + to: /tf/math/zeta +- from: /tf/compat/v1/matmul + to: /tf/linalg/matmul +- from: /tf/compat/v1/matrix_band_part + to: /tf/linalg/band_part +- from: /tf/compat/v1/matrix_determinant + to: /tf/linalg/det +- from: /tf/compat/v1/matrix_diag + to: /tf/linalg/diag +- from: /tf/compat/v1/matrix_diag_part + to: /tf/linalg/diag_part +- from: /tf/compat/v1/matrix_inverse + to: /tf/linalg/inv +- from: /tf/compat/v1/matrix_set_diag + to: /tf/linalg/set_diag +- from: /tf/compat/v1/matrix_solve + to: /tf/linalg/solve +- from: /tf/compat/v1/matrix_solve_ls + to: /tf/linalg/lstsq +- from: /tf/compat/v1/matrix_square_root + to: /tf/linalg/sqrtm +- from: /tf/compat/v1/matrix_transpose + to: /tf/linalg/matrix_transpose +- from: /tf/compat/v1/matrix_triangular_solve + to: /tf/linalg/triangular_solve +- from: /tf/compat/v1/maximum + to: /tf/math/maximum +- from: /tf/compat/v1/meshgrid + to: /tf/meshgrid +- from: /tf/compat/v1/minimum + to: /tf/math/minimum +- from: /tf/compat/v1/mixed_precision/experimental/DynamicLossScale + to: /tf/mixed_precision/experimental/DynamicLossScale +- from: /tf/compat/v1/mixed_precision/experimental/FixedLossScale + to: /tf/mixed_precision/experimental/FixedLossScale +- from: /tf/compat/v1/mixed_precision/experimental/LossScale + to: /tf/mixed_precision/experimental/LossScale +- from: /tf/compat/v1/mlir/experimental/convert_graph_def + to: /tf/mlir/experimental/convert_graph_def +- from: /tf/compat/v1/mod + to: /tf/math/floormod +- from: /tf/compat/v1/multiply + to: /tf/math/multiply +- from: /tf/compat/v1/name_scope + to: /tf/compat/v1/keras/backend/name_scope +- from: /tf/compat/v1/negative + to: /tf/math/negative +- from: /tf/compat/v1/nest/assert_same_structure + to: /tf/nest/assert_same_structure +- from: /tf/compat/v1/nest/flatten + to: /tf/nest/flatten +- from: /tf/compat/v1/nest/is_nested + to: /tf/nest/is_nested +- from: /tf/compat/v1/nest/map_structure + to: /tf/nest/map_structure +- from: /tf/compat/v1/nest/pack_sequence_as + to: /tf/nest/pack_sequence_as +- from: /tf/compat/v1/nn/all_candidate_sampler + to: /tf/random/all_candidate_sampler +- from: /tf/compat/v1/nn/atrous_conv2d + to: /tf/nn/atrous_conv2d +- from: /tf/compat/v1/nn/atrous_conv2d_transpose + to: /tf/nn/atrous_conv2d_transpose +- from: /tf/compat/v1/nn/avg_pool1d + to: /tf/nn/avg_pool1d +- from: /tf/compat/v1/nn/avg_pool2d + to: /tf/compat/v1/nn/avg_pool +- from: /tf/compat/v1/nn/avg_pool3d + to: /tf/nn/avg_pool3d +- from: /tf/compat/v1/nn/avg_pool_v2 + to: /tf/nn/avg_pool +- from: /tf/compat/v1/nn/batch_normalization + to: /tf/nn/batch_normalization +- from: /tf/compat/v1/nn/bias_add + to: /tf/nn/bias_add +- from: /tf/compat/v1/nn/collapse_repeated + to: /tf/nn/collapse_repeated +- from: /tf/compat/v1/nn/compute_accidental_hits + to: /tf/nn/compute_accidental_hits +- from: /tf/compat/v1/nn/compute_average_loss + to: /tf/nn/compute_average_loss +- from: /tf/compat/v1/nn/conv1d_transpose + to: /tf/nn/conv1d_transpose +- from: 
/tf/compat/v1/nn/conv3d_backprop_filter_v2 + to: /tf/compat/v1/nn/conv3d_backprop_filter +- from: /tf/compat/v1/nn/conv_transpose + to: /tf/nn/conv_transpose +- from: /tf/compat/v1/nn/ctc_beam_search_decoder_v2 + to: /tf/nn/ctc_beam_search_decoder +- from: /tf/compat/v1/nn/ctc_greedy_decoder + to: /tf/nn/ctc_greedy_decoder +- from: /tf/compat/v1/nn/ctc_unique_labels + to: /tf/nn/ctc_unique_labels +- from: /tf/compat/v1/nn/depth_to_space + to: /tf/compat/v1/depth_to_space +- from: /tf/compat/v1/nn/depthwise_conv2d_backprop_filter + to: /tf/nn/depthwise_conv2d_backprop_filter +- from: /tf/compat/v1/nn/depthwise_conv2d_backprop_input + to: /tf/nn/depthwise_conv2d_backprop_input +- from: /tf/compat/v1/nn/depthwise_conv2d_native_backprop_filter + to: /tf/nn/depthwise_conv2d_backprop_filter +- from: /tf/compat/v1/nn/depthwise_conv2d_native_backprop_input + to: /tf/nn/depthwise_conv2d_backprop_input +- from: /tf/compat/v1/nn/elu + to: /tf/nn/elu +- from: /tf/compat/v1/nn/fixed_unigram_candidate_sampler + to: /tf/random/fixed_unigram_candidate_sampler +- from: /tf/compat/v1/nn/in_top_k + to: /tf/compat/v1/math/in_top_k +- from: /tf/compat/v1/nn/l2_loss + to: /tf/nn/l2_loss +- from: /tf/compat/v1/nn/l2_normalize + to: /tf/compat/v1/linalg/l2_normalize +- from: /tf/compat/v1/nn/leaky_relu + to: /tf/nn/leaky_relu +- from: /tf/compat/v1/nn/learned_unigram_candidate_sampler + to: /tf/random/learned_unigram_candidate_sampler +- from: /tf/compat/v1/nn/local_response_normalization + to: /tf/nn/local_response_normalization +- from: /tf/compat/v1/nn/log_poisson_loss + to: /tf/nn/log_poisson_loss +- from: /tf/compat/v1/nn/log_softmax + to: /tf/compat/v1/math/log_softmax +- from: /tf/compat/v1/nn/log_uniform_candidate_sampler + to: /tf/random/log_uniform_candidate_sampler +- from: /tf/compat/v1/nn/lrn + to: /tf/nn/local_response_normalization +- from: /tf/compat/v1/nn/max_pool1d + to: /tf/nn/max_pool1d +- from: /tf/compat/v1/nn/max_pool2d + to: /tf/nn/max_pool2d +- from: /tf/compat/v1/nn/max_pool3d + to: /tf/nn/max_pool3d +- from: /tf/compat/v1/nn/max_pool_v2 + to: /tf/nn/max_pool +- from: /tf/compat/v1/nn/normalize_moments + to: /tf/nn/normalize_moments +- from: /tf/compat/v1/nn/relu + to: /tf/nn/relu +- from: /tf/compat/v1/nn/relu6 + to: /tf/nn/relu6 +- from: /tf/compat/v1/nn/scale_regularization_loss + to: /tf/nn/scale_regularization_loss +- from: /tf/compat/v1/nn/selu + to: /tf/nn/selu +- from: /tf/compat/v1/nn/sigmoid + to: /tf/math/sigmoid +- from: /tf/compat/v1/nn/softmax + to: /tf/compat/v1/math/softmax +- from: /tf/compat/v1/nn/softplus + to: /tf/math/softplus +- from: /tf/compat/v1/nn/softsign + to: /tf/nn/softsign +- from: /tf/compat/v1/nn/space_to_batch + to: /tf/compat/v1/space_to_batch +- from: /tf/compat/v1/nn/space_to_depth + to: /tf/compat/v1/space_to_depth +- from: /tf/compat/v1/nn/swish + to: /tf/nn/swish +- from: /tf/compat/v1/nn/tanh + to: /tf/math/tanh +- from: /tf/compat/v1/nn/top_k + to: /tf/math/top_k +- from: /tf/compat/v1/nn/uniform_candidate_sampler + to: /tf/random/uniform_candidate_sampler +- from: /tf/compat/v1/nn/with_space_to_batch + to: /tf/nn/with_space_to_batch +- from: /tf/compat/v1/nn/zero_fraction + to: /tf/math/zero_fraction +- from: /tf/compat/v1/no_gradient + to: /tf/no_gradient +- from: /tf/compat/v1/no_op + to: /tf/no_op +- from: /tf/compat/v1/nondifferentiable_batch_function + to: /tf/nondifferentiable_batch_function +- from: /tf/compat/v1/not_equal + to: /tf/math/not_equal +- from: /tf/compat/v1/numpy_function + to: /tf/numpy_function +- from: 
/tf/compat/v1/one_hot + to: /tf/one_hot +- from: /tf/compat/v1/ones + to: /tf/ones +- from: /tf/compat/v1/ones_initializer + to: /tf/compat/v1/keras/initializers/Ones +- from: /tf/compat/v1/orthogonal_initializer + to: /tf/compat/v1/keras/initializers/Orthogonal +- from: /tf/compat/v1/parallel_stack + to: /tf/parallel_stack +- from: /tf/compat/v1/parse_single_sequence_example + to: /tf/io/parse_single_sequence_example +- from: /tf/compat/v1/parse_tensor + to: /tf/io/parse_tensor +- from: /tf/compat/v1/polygamma + to: /tf/math/polygamma +- from: /tf/compat/v1/pow + to: /tf/math/pow +- from: /tf/compat/v1/print + to: /tf/print +- from: /tf/compat/v1/py_function + to: /tf/py_function +- from: /tf/compat/v1/python_io/TFRecordCompressionType + to: /tf/compat/v1/io/TFRecordCompressionType +- from: /tf/compat/v1/python_io/TFRecordOptions + to: /tf/io/TFRecordOptions +- from: /tf/compat/v1/python_io/TFRecordWriter + to: /tf/io/TFRecordWriter +- from: /tf/compat/v1/python_io/tf_record_iterator + to: /tf/compat/v1/io/tf_record_iterator +- from: /tf/compat/v1/qr + to: /tf/linalg/qr +- from: /tf/compat/v1/quantization/dequantize + to: /tf/quantization/dequantize +- from: /tf/compat/v1/quantization/fake_quant_with_min_max_args + to: /tf/quantization/fake_quant_with_min_max_args +- from: /tf/compat/v1/quantization/fake_quant_with_min_max_args_gradient + to: /tf/quantization/fake_quant_with_min_max_args_gradient +- from: /tf/compat/v1/quantization/fake_quant_with_min_max_vars + to: /tf/quantization/fake_quant_with_min_max_vars +- from: /tf/compat/v1/quantization/fake_quant_with_min_max_vars_gradient + to: /tf/quantization/fake_quant_with_min_max_vars_gradient +- from: /tf/compat/v1/quantization/fake_quant_with_min_max_vars_per_channel + to: /tf/quantization/fake_quant_with_min_max_vars_per_channel +- from: /tf/compat/v1/quantization/fake_quant_with_min_max_vars_per_channel_gradient + to: /tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient +- from: /tf/compat/v1/quantization/quantize + to: /tf/quantization/quantize +- from: /tf/compat/v1/quantization/quantize_and_dequantize + to: /tf/quantization/quantize_and_dequantize +- from: /tf/compat/v1/quantization/quantized_concat + to: /tf/quantization/quantized_concat +- from: /tf/compat/v1/quantize + to: /tf/quantization/quantize +- from: /tf/compat/v1/quantized_concat + to: /tf/quantization/quantized_concat +- from: /tf/compat/v1/queue/FIFOQueue + to: /tf/queue/FIFOQueue +- from: /tf/compat/v1/queue/PaddingFIFOQueue + to: /tf/queue/PaddingFIFOQueue +- from: /tf/compat/v1/queue/PriorityQueue + to: /tf/queue/PriorityQueue +- from: /tf/compat/v1/queue/QueueBase + to: /tf/queue/QueueBase +- from: /tf/compat/v1/queue/RandomShuffleQueue + to: /tf/queue/RandomShuffleQueue +- from: /tf/compat/v1/ragged/boolean_mask + to: /tf/ragged/boolean_mask +- from: /tf/compat/v1/ragged/constant + to: /tf/ragged/constant +- from: /tf/compat/v1/ragged/map_flat_values + to: /tf/ragged/map_flat_values +- from: /tf/compat/v1/ragged/range + to: /tf/ragged/range +- from: /tf/compat/v1/ragged/row_splits_to_segment_ids + to: /tf/ragged/row_splits_to_segment_ids +- from: /tf/compat/v1/ragged/segment_ids_to_row_splits + to: /tf/ragged/segment_ids_to_row_splits +- from: /tf/compat/v1/ragged/stack + to: /tf/ragged/stack +- from: /tf/compat/v1/ragged/stack_dynamic_partitions + to: /tf/ragged/stack_dynamic_partitions +- from: /tf/compat/v1/random/Algorithm + to: /tf/random/Algorithm +- from: /tf/compat/v1/random/Generator + to: /tf/random/Generator +- from: 
/tf/compat/v1/random/all_candidate_sampler + to: /tf/random/all_candidate_sampler +- from: /tf/compat/v1/random/categorical + to: /tf/random/categorical +- from: /tf/compat/v1/random/create_rng_state + to: /tf/random/create_rng_state +- from: /tf/compat/v1/random/experimental/Algorithm + to: /tf/random/Algorithm +- from: /tf/compat/v1/random/experimental/Generator + to: /tf/random/Generator +- from: /tf/compat/v1/random/experimental/create_rng_state + to: /tf/random/create_rng_state +- from: /tf/compat/v1/random/experimental/get_global_generator + to: /tf/random/get_global_generator +- from: /tf/compat/v1/random/experimental/set_global_generator + to: /tf/random/set_global_generator +- from: /tf/compat/v1/random/fixed_unigram_candidate_sampler + to: /tf/random/fixed_unigram_candidate_sampler +- from: /tf/compat/v1/random/gamma + to: /tf/random/gamma +- from: /tf/compat/v1/random/get_global_generator + to: /tf/random/get_global_generator +- from: /tf/compat/v1/random/get_seed + to: /tf/compat/v1/get_seed +- from: /tf/compat/v1/random/learned_unigram_candidate_sampler + to: /tf/random/learned_unigram_candidate_sampler +- from: /tf/compat/v1/random/log_uniform_candidate_sampler + to: /tf/random/log_uniform_candidate_sampler +- from: /tf/compat/v1/random/multinomial + to: /tf/compat/v1/multinomial +- from: /tf/compat/v1/random/normal + to: /tf/random/normal +- from: /tf/compat/v1/random/poisson + to: /tf/compat/v1/random_poisson +- from: /tf/compat/v1/random/set_global_generator + to: /tf/random/set_global_generator +- from: /tf/compat/v1/random/set_random_seed + to: /tf/compat/v1/set_random_seed +- from: /tf/compat/v1/random/shuffle + to: /tf/random/shuffle +- from: /tf/compat/v1/random/stateless_binomial + to: /tf/random/stateless_binomial +- from: /tf/compat/v1/random/stateless_categorical + to: /tf/random/stateless_categorical +- from: /tf/compat/v1/random/stateless_gamma + to: /tf/random/stateless_gamma +- from: /tf/compat/v1/random/stateless_normal + to: /tf/random/stateless_normal +- from: /tf/compat/v1/random/stateless_poisson + to: /tf/random/stateless_poisson +- from: /tf/compat/v1/random/stateless_truncated_normal + to: /tf/random/stateless_truncated_normal +- from: /tf/compat/v1/random/stateless_uniform + to: /tf/random/stateless_uniform +- from: /tf/compat/v1/random/truncated_normal + to: /tf/random/truncated_normal +- from: /tf/compat/v1/random/uniform + to: /tf/random/uniform +- from: /tf/compat/v1/random/uniform_candidate_sampler + to: /tf/random/uniform_candidate_sampler +- from: /tf/compat/v1/random_crop + to: /tf/image/random_crop +- from: /tf/compat/v1/random_gamma + to: /tf/random/gamma +- from: /tf/compat/v1/random_normal + to: /tf/random/normal +- from: /tf/compat/v1/random_shuffle + to: /tf/random/shuffle +- from: /tf/compat/v1/random_uniform + to: /tf/random/uniform +- from: /tf/compat/v1/range + to: /tf/range +- from: /tf/compat/v1/rank + to: /tf/rank +- from: /tf/compat/v1/raw_ops/Abort + to: /tf/raw_ops/Abort +- from: /tf/compat/v1/raw_ops/Abs + to: /tf/raw_ops/Abs +- from: /tf/compat/v1/raw_ops/AccumulateNV2 + to: /tf/raw_ops/AccumulateNV2 +- from: /tf/compat/v1/raw_ops/AccumulatorApplyGradient + to: /tf/raw_ops/AccumulatorApplyGradient +- from: /tf/compat/v1/raw_ops/AccumulatorNumAccumulated + to: /tf/raw_ops/AccumulatorNumAccumulated +- from: /tf/compat/v1/raw_ops/AccumulatorSetGlobalStep + to: /tf/raw_ops/AccumulatorSetGlobalStep +- from: /tf/compat/v1/raw_ops/AccumulatorTakeGradient + to: /tf/raw_ops/AccumulatorTakeGradient +- from: /tf/compat/v1/raw_ops/Acos + 
to: /tf/raw_ops/Acos +- from: /tf/compat/v1/raw_ops/Acosh + to: /tf/raw_ops/Acosh +- from: /tf/compat/v1/raw_ops/Add + to: /tf/raw_ops/Add +- from: /tf/compat/v1/raw_ops/AddManySparseToTensorsMap + to: /tf/raw_ops/AddManySparseToTensorsMap +- from: /tf/compat/v1/raw_ops/AddN + to: /tf/raw_ops/AddN +- from: /tf/compat/v1/raw_ops/AddSparseToTensorsMap + to: /tf/raw_ops/AddSparseToTensorsMap +- from: /tf/compat/v1/raw_ops/AddV2 + to: /tf/raw_ops/AddV2 +- from: /tf/compat/v1/raw_ops/AdjustContrast + to: /tf/raw_ops/AdjustContrast +- from: /tf/compat/v1/raw_ops/AdjustContrastv2 + to: /tf/raw_ops/AdjustContrastv2 +- from: /tf/compat/v1/raw_ops/AdjustHue + to: /tf/raw_ops/AdjustHue +- from: /tf/compat/v1/raw_ops/AdjustSaturation + to: /tf/raw_ops/AdjustSaturation +- from: /tf/compat/v1/raw_ops/All + to: /tf/raw_ops/All +- from: /tf/compat/v1/raw_ops/AllCandidateSampler + to: /tf/raw_ops/AllCandidateSampler +- from: /tf/compat/v1/raw_ops/AllToAll + to: /tf/raw_ops/AllToAll +- from: /tf/compat/v1/raw_ops/Angle + to: /tf/raw_ops/Angle +- from: /tf/compat/v1/raw_ops/AnonymousIterator + to: /tf/raw_ops/AnonymousIterator +- from: /tf/compat/v1/raw_ops/AnonymousIteratorV2 + to: /tf/raw_ops/AnonymousIteratorV2 +- from: /tf/compat/v1/raw_ops/AnonymousMemoryCache + to: /tf/raw_ops/AnonymousMemoryCache +- from: /tf/compat/v1/raw_ops/AnonymousMultiDeviceIterator + to: /tf/raw_ops/AnonymousMultiDeviceIterator +- from: /tf/compat/v1/raw_ops/AnonymousRandomSeedGenerator + to: /tf/raw_ops/AnonymousRandomSeedGenerator +- from: /tf/compat/v1/raw_ops/Any + to: /tf/raw_ops/Any +- from: /tf/compat/v1/raw_ops/ApplyAdaMax + to: /tf/raw_ops/ApplyAdaMax +- from: /tf/compat/v1/raw_ops/ApplyAdadelta + to: /tf/raw_ops/ApplyAdadelta +- from: /tf/compat/v1/raw_ops/ApplyAdagrad + to: /tf/raw_ops/ApplyAdagrad +- from: /tf/compat/v1/raw_ops/ApplyAdagradDA + to: /tf/raw_ops/ApplyAdagradDA +- from: /tf/compat/v1/raw_ops/ApplyAdagradV2 + to: /tf/raw_ops/ApplyAdagradV2 +- from: /tf/compat/v1/raw_ops/ApplyAdam + to: /tf/raw_ops/ApplyAdam +- from: /tf/compat/v1/raw_ops/ApplyAddSign + to: /tf/raw_ops/ApplyAddSign +- from: /tf/compat/v1/raw_ops/ApplyCenteredRMSProp + to: /tf/raw_ops/ApplyCenteredRMSProp +- from: /tf/compat/v1/raw_ops/ApplyFtrl + to: /tf/raw_ops/ApplyFtrl +- from: /tf/compat/v1/raw_ops/ApplyFtrlV2 + to: /tf/raw_ops/ApplyFtrlV2 +- from: /tf/compat/v1/raw_ops/ApplyGradientDescent + to: /tf/raw_ops/ApplyGradientDescent +- from: /tf/compat/v1/raw_ops/ApplyMomentum + to: /tf/raw_ops/ApplyMomentum +- from: /tf/compat/v1/raw_ops/ApplyPowerSign + to: /tf/raw_ops/ApplyPowerSign +- from: /tf/compat/v1/raw_ops/ApplyProximalAdagrad + to: /tf/raw_ops/ApplyProximalAdagrad +- from: /tf/compat/v1/raw_ops/ApplyProximalGradientDescent + to: /tf/raw_ops/ApplyProximalGradientDescent +- from: /tf/compat/v1/raw_ops/ApplyRMSProp + to: /tf/raw_ops/ApplyRMSProp +- from: /tf/compat/v1/raw_ops/ApproximateEqual + to: /tf/raw_ops/ApproximateEqual +- from: /tf/compat/v1/raw_ops/ArgMax + to: /tf/raw_ops/ArgMax +- from: /tf/compat/v1/raw_ops/ArgMin + to: /tf/raw_ops/ArgMin +- from: /tf/compat/v1/raw_ops/AsString + to: /tf/raw_ops/AsString +- from: /tf/compat/v1/raw_ops/Asin + to: /tf/raw_ops/Asin +- from: /tf/compat/v1/raw_ops/Asinh + to: /tf/raw_ops/Asinh +- from: /tf/compat/v1/raw_ops/Assert + to: /tf/raw_ops/Assert +- from: /tf/compat/v1/raw_ops/AssertCardinalityDataset + to: /tf/raw_ops/AssertCardinalityDataset +- from: /tf/compat/v1/raw_ops/AssertNextDataset + to: /tf/raw_ops/AssertNextDataset +- from: /tf/compat/v1/raw_ops/Assign + to: 
/tf/raw_ops/Assign +- from: /tf/compat/v1/raw_ops/AssignAdd + to: /tf/raw_ops/AssignAdd +- from: /tf/compat/v1/raw_ops/AssignAddVariableOp + to: /tf/raw_ops/AssignAddVariableOp +- from: /tf/compat/v1/raw_ops/AssignSub + to: /tf/raw_ops/AssignSub +- from: /tf/compat/v1/raw_ops/AssignSubVariableOp + to: /tf/raw_ops/AssignSubVariableOp +- from: /tf/compat/v1/raw_ops/AssignVariableOp + to: /tf/raw_ops/AssignVariableOp +- from: /tf/compat/v1/raw_ops/Atan + to: /tf/raw_ops/Atan +- from: /tf/compat/v1/raw_ops/Atan2 + to: /tf/raw_ops/Atan2 +- from: /tf/compat/v1/raw_ops/Atanh + to: /tf/raw_ops/Atanh +- from: /tf/compat/v1/raw_ops/AudioSpectrogram + to: /tf/raw_ops/AudioSpectrogram +- from: /tf/compat/v1/raw_ops/AudioSummary + to: /tf/raw_ops/AudioSummary +- from: /tf/compat/v1/raw_ops/AudioSummaryV2 + to: /tf/raw_ops/AudioSummaryV2 +- from: /tf/compat/v1/raw_ops/AutoShardDataset + to: /tf/raw_ops/AutoShardDataset +- from: /tf/compat/v1/raw_ops/AvgPool + to: /tf/raw_ops/AvgPool +- from: /tf/compat/v1/raw_ops/AvgPool3D + to: /tf/raw_ops/AvgPool3D +- from: /tf/compat/v1/raw_ops/AvgPool3DGrad + to: /tf/raw_ops/AvgPool3DGrad +- from: /tf/compat/v1/raw_ops/AvgPoolGrad + to: /tf/raw_ops/AvgPoolGrad +- from: /tf/compat/v1/raw_ops/Barrier + to: /tf/raw_ops/Barrier +- from: /tf/compat/v1/raw_ops/BarrierClose + to: /tf/raw_ops/BarrierClose +- from: /tf/compat/v1/raw_ops/BarrierIncompleteSize + to: /tf/raw_ops/BarrierIncompleteSize +- from: /tf/compat/v1/raw_ops/BarrierInsertMany + to: /tf/raw_ops/BarrierInsertMany +- from: /tf/compat/v1/raw_ops/BarrierReadySize + to: /tf/raw_ops/BarrierReadySize +- from: /tf/compat/v1/raw_ops/BarrierTakeMany + to: /tf/raw_ops/BarrierTakeMany +- from: /tf/compat/v1/raw_ops/Batch + to: /tf/raw_ops/Batch +- from: /tf/compat/v1/raw_ops/BatchCholesky + to: /tf/raw_ops/BatchCholesky +- from: /tf/compat/v1/raw_ops/BatchCholeskyGrad + to: /tf/raw_ops/BatchCholeskyGrad +- from: /tf/compat/v1/raw_ops/BatchDataset + to: /tf/raw_ops/BatchDataset +- from: /tf/compat/v1/raw_ops/BatchDatasetV2 + to: /tf/raw_ops/BatchDatasetV2 +- from: /tf/compat/v1/raw_ops/BatchFFT + to: /tf/raw_ops/BatchFFT +- from: /tf/compat/v1/raw_ops/BatchFFT2D + to: /tf/raw_ops/BatchFFT2D +- from: /tf/compat/v1/raw_ops/BatchFFT3D + to: /tf/raw_ops/BatchFFT3D +- from: /tf/compat/v1/raw_ops/BatchFunction + to: /tf/raw_ops/BatchFunction +- from: /tf/compat/v1/raw_ops/BatchIFFT + to: /tf/raw_ops/BatchIFFT +- from: /tf/compat/v1/raw_ops/BatchIFFT2D + to: /tf/raw_ops/BatchIFFT2D +- from: /tf/compat/v1/raw_ops/BatchIFFT3D + to: /tf/raw_ops/BatchIFFT3D +- from: /tf/compat/v1/raw_ops/BatchMatMul + to: /tf/raw_ops/BatchMatMul +- from: /tf/compat/v1/raw_ops/BatchMatMulV2 + to: /tf/raw_ops/BatchMatMulV2 +- from: /tf/compat/v1/raw_ops/BatchMatrixBandPart + to: /tf/raw_ops/BatchMatrixBandPart +- from: /tf/compat/v1/raw_ops/BatchMatrixDeterminant + to: /tf/raw_ops/BatchMatrixDeterminant +- from: /tf/compat/v1/raw_ops/BatchMatrixDiag + to: /tf/raw_ops/BatchMatrixDiag +- from: /tf/compat/v1/raw_ops/BatchMatrixDiagPart + to: /tf/raw_ops/BatchMatrixDiagPart +- from: /tf/compat/v1/raw_ops/BatchMatrixInverse + to: /tf/raw_ops/BatchMatrixInverse +- from: /tf/compat/v1/raw_ops/BatchMatrixSetDiag + to: /tf/raw_ops/BatchMatrixSetDiag +- from: /tf/compat/v1/raw_ops/BatchMatrixSolve + to: /tf/raw_ops/BatchMatrixSolve +- from: /tf/compat/v1/raw_ops/BatchMatrixSolveLs + to: /tf/raw_ops/BatchMatrixSolveLs +- from: /tf/compat/v1/raw_ops/BatchMatrixTriangularSolve + to: /tf/raw_ops/BatchMatrixTriangularSolve +- from: 
/tf/compat/v1/raw_ops/BatchNormWithGlobalNormalization + to: /tf/raw_ops/BatchNormWithGlobalNormalization +- from: /tf/compat/v1/raw_ops/BatchNormWithGlobalNormalizationGrad + to: /tf/raw_ops/BatchNormWithGlobalNormalizationGrad +- from: /tf/compat/v1/raw_ops/BatchSelfAdjointEig + to: /tf/raw_ops/BatchSelfAdjointEig +- from: /tf/compat/v1/raw_ops/BatchSelfAdjointEigV2 + to: /tf/raw_ops/BatchSelfAdjointEigV2 +- from: /tf/compat/v1/raw_ops/BatchSvd + to: /tf/raw_ops/BatchSvd +- from: /tf/compat/v1/raw_ops/BatchToSpace + to: /tf/raw_ops/BatchToSpace +- from: /tf/compat/v1/raw_ops/BatchToSpaceND + to: /tf/raw_ops/BatchToSpaceND +- from: /tf/compat/v1/raw_ops/BesselI0e + to: /tf/raw_ops/BesselI0e +- from: /tf/compat/v1/raw_ops/BesselI1e + to: /tf/raw_ops/BesselI1e +- from: /tf/compat/v1/raw_ops/Betainc + to: /tf/raw_ops/Betainc +- from: /tf/compat/v1/raw_ops/BiasAdd + to: /tf/raw_ops/BiasAdd +- from: /tf/compat/v1/raw_ops/BiasAddGrad + to: /tf/raw_ops/BiasAddGrad +- from: /tf/compat/v1/raw_ops/BiasAddV1 + to: /tf/raw_ops/BiasAddV1 +- from: /tf/compat/v1/raw_ops/Bincount + to: /tf/raw_ops/Bincount +- from: /tf/compat/v1/raw_ops/Bitcast + to: /tf/raw_ops/Bitcast +- from: /tf/compat/v1/raw_ops/BitwiseAnd + to: /tf/raw_ops/BitwiseAnd +- from: /tf/compat/v1/raw_ops/BitwiseOr + to: /tf/raw_ops/BitwiseOr +- from: /tf/compat/v1/raw_ops/BitwiseXor + to: /tf/raw_ops/BitwiseXor +- from: /tf/compat/v1/raw_ops/BlockLSTM + to: /tf/raw_ops/BlockLSTM +- from: /tf/compat/v1/raw_ops/BlockLSTMGrad + to: /tf/raw_ops/BlockLSTMGrad +- from: /tf/compat/v1/raw_ops/BlockLSTMGradV2 + to: /tf/raw_ops/BlockLSTMGradV2 +- from: /tf/compat/v1/raw_ops/BlockLSTMV2 + to: /tf/raw_ops/BlockLSTMV2 +- from: /tf/compat/v1/raw_ops/BoostedTreesAggregateStats + to: /tf/raw_ops/BoostedTreesAggregateStats +- from: /tf/compat/v1/raw_ops/BoostedTreesBucketize + to: /tf/raw_ops/BoostedTreesBucketize +- from: /tf/compat/v1/raw_ops/BoostedTreesCalculateBestFeatureSplit + to: /tf/raw_ops/BoostedTreesCalculateBestFeatureSplit +- from: /tf/compat/v1/raw_ops/BoostedTreesCalculateBestFeatureSplitV2 + to: /tf/raw_ops/BoostedTreesCalculateBestFeatureSplitV2 +- from: /tf/compat/v1/raw_ops/BoostedTreesCalculateBestGainsPerFeature + to: /tf/raw_ops/BoostedTreesCalculateBestGainsPerFeature +- from: /tf/compat/v1/raw_ops/BoostedTreesCenterBias + to: /tf/raw_ops/BoostedTreesCenterBias +- from: /tf/compat/v1/raw_ops/BoostedTreesCreateEnsemble + to: /tf/raw_ops/BoostedTreesCreateEnsemble +- from: /tf/compat/v1/raw_ops/BoostedTreesCreateQuantileStreamResource + to: /tf/raw_ops/BoostedTreesCreateQuantileStreamResource +- from: /tf/compat/v1/raw_ops/BoostedTreesDeserializeEnsemble + to: /tf/raw_ops/BoostedTreesDeserializeEnsemble +- from: /tf/compat/v1/raw_ops/BoostedTreesEnsembleResourceHandleOp + to: /tf/raw_ops/BoostedTreesEnsembleResourceHandleOp +- from: /tf/compat/v1/raw_ops/BoostedTreesExampleDebugOutputs + to: /tf/raw_ops/BoostedTreesExampleDebugOutputs +- from: /tf/compat/v1/raw_ops/BoostedTreesFlushQuantileSummaries + to: /tf/raw_ops/BoostedTreesFlushQuantileSummaries +- from: /tf/compat/v1/raw_ops/BoostedTreesGetEnsembleStates + to: /tf/raw_ops/BoostedTreesGetEnsembleStates +- from: /tf/compat/v1/raw_ops/BoostedTreesMakeQuantileSummaries + to: /tf/raw_ops/BoostedTreesMakeQuantileSummaries +- from: /tf/compat/v1/raw_ops/BoostedTreesMakeStatsSummary + to: /tf/raw_ops/BoostedTreesMakeStatsSummary +- from: /tf/compat/v1/raw_ops/BoostedTreesPredict + to: /tf/raw_ops/BoostedTreesPredict +- from: 
/tf/compat/v1/raw_ops/BoostedTreesQuantileStreamResourceAddSummaries + to: /tf/raw_ops/BoostedTreesQuantileStreamResourceAddSummaries +- from: /tf/compat/v1/raw_ops/BoostedTreesQuantileStreamResourceDeserialize + to: /tf/raw_ops/BoostedTreesQuantileStreamResourceDeserialize +- from: /tf/compat/v1/raw_ops/BoostedTreesQuantileStreamResourceFlush + to: /tf/raw_ops/BoostedTreesQuantileStreamResourceFlush +- from: /tf/compat/v1/raw_ops/BoostedTreesQuantileStreamResourceGetBucketBoundaries + to: /tf/raw_ops/BoostedTreesQuantileStreamResourceGetBucketBoundaries +- from: /tf/compat/v1/raw_ops/BoostedTreesQuantileStreamResourceHandleOp + to: /tf/raw_ops/BoostedTreesQuantileStreamResourceHandleOp +- from: /tf/compat/v1/raw_ops/BoostedTreesSerializeEnsemble + to: /tf/raw_ops/BoostedTreesSerializeEnsemble +- from: /tf/compat/v1/raw_ops/BoostedTreesSparseAggregateStats + to: /tf/raw_ops/BoostedTreesSparseAggregateStats +- from: /tf/compat/v1/raw_ops/BoostedTreesSparseCalculateBestFeatureSplit + to: /tf/raw_ops/BoostedTreesSparseCalculateBestFeatureSplit +- from: /tf/compat/v1/raw_ops/BoostedTreesTrainingPredict + to: /tf/raw_ops/BoostedTreesTrainingPredict +- from: /tf/compat/v1/raw_ops/BoostedTreesUpdateEnsemble + to: /tf/raw_ops/BoostedTreesUpdateEnsemble +- from: /tf/compat/v1/raw_ops/BoostedTreesUpdateEnsembleV2 + to: /tf/raw_ops/BoostedTreesUpdateEnsembleV2 +- from: /tf/compat/v1/raw_ops/BroadcastArgs + to: /tf/raw_ops/BroadcastArgs +- from: /tf/compat/v1/raw_ops/BroadcastGradientArgs + to: /tf/raw_ops/BroadcastGradientArgs +- from: /tf/compat/v1/raw_ops/BroadcastTo + to: /tf/raw_ops/BroadcastTo +- from: /tf/compat/v1/raw_ops/Bucketize + to: /tf/raw_ops/Bucketize +- from: /tf/compat/v1/raw_ops/BytesProducedStatsDataset + to: /tf/raw_ops/BytesProducedStatsDataset +- from: /tf/compat/v1/raw_ops/CSRSparseMatrixComponents + to: /tf/raw_ops/CSRSparseMatrixComponents +- from: /tf/compat/v1/raw_ops/CSRSparseMatrixToDense + to: /tf/raw_ops/CSRSparseMatrixToDense +- from: /tf/compat/v1/raw_ops/CSRSparseMatrixToSparseTensor + to: /tf/raw_ops/CSRSparseMatrixToSparseTensor +- from: /tf/compat/v1/raw_ops/CSVDataset + to: /tf/raw_ops/CSVDataset +- from: /tf/compat/v1/raw_ops/CTCBeamSearchDecoder + to: /tf/raw_ops/CTCBeamSearchDecoder +- from: /tf/compat/v1/raw_ops/CTCGreedyDecoder + to: /tf/raw_ops/CTCGreedyDecoder +- from: /tf/compat/v1/raw_ops/CTCLoss + to: /tf/raw_ops/CTCLoss +- from: /tf/compat/v1/raw_ops/CTCLossV2 + to: /tf/raw_ops/CTCLossV2 +- from: /tf/compat/v1/raw_ops/CacheDataset + to: /tf/raw_ops/CacheDataset +- from: /tf/compat/v1/raw_ops/CacheDatasetV2 + to: /tf/raw_ops/CacheDatasetV2 +- from: /tf/compat/v1/raw_ops/Case + to: /tf/raw_ops/Case +- from: /tf/compat/v1/raw_ops/Cast + to: /tf/raw_ops/Cast +- from: /tf/compat/v1/raw_ops/Ceil + to: /tf/raw_ops/Ceil +- from: /tf/compat/v1/raw_ops/CheckNumerics + to: /tf/raw_ops/CheckNumerics +- from: /tf/compat/v1/raw_ops/CheckNumericsV2 + to: /tf/raw_ops/CheckNumericsV2 +- from: /tf/compat/v1/raw_ops/Cholesky + to: /tf/raw_ops/Cholesky +- from: /tf/compat/v1/raw_ops/CholeskyGrad + to: /tf/raw_ops/CholeskyGrad +- from: /tf/compat/v1/raw_ops/ChooseFastestBranchDataset + to: /tf/raw_ops/ChooseFastestBranchDataset +- from: /tf/compat/v1/raw_ops/ChooseFastestDataset + to: /tf/raw_ops/ChooseFastestDataset +- from: /tf/compat/v1/raw_ops/ClipByValue + to: /tf/raw_ops/ClipByValue +- from: /tf/compat/v1/raw_ops/CloseSummaryWriter + to: /tf/raw_ops/CloseSummaryWriter +- from: /tf/compat/v1/raw_ops/CollectiveBcastRecv + to: /tf/raw_ops/CollectiveBcastRecv +- from: 
/tf/compat/v1/raw_ops/CollectiveBcastSend + to: /tf/raw_ops/CollectiveBcastSend +- from: /tf/compat/v1/raw_ops/CollectiveGather + to: /tf/raw_ops/CollectiveGather +- from: /tf/compat/v1/raw_ops/CollectivePermute + to: /tf/raw_ops/CollectivePermute +- from: /tf/compat/v1/raw_ops/CollectiveReduce + to: /tf/raw_ops/CollectiveReduce +- from: /tf/compat/v1/raw_ops/CombinedNonMaxSuppression + to: /tf/raw_ops/CombinedNonMaxSuppression +- from: /tf/compat/v1/raw_ops/CompareAndBitpack + to: /tf/raw_ops/CompareAndBitpack +- from: /tf/compat/v1/raw_ops/Complex + to: /tf/raw_ops/Complex +- from: /tf/compat/v1/raw_ops/ComplexAbs + to: /tf/raw_ops/ComplexAbs +- from: /tf/compat/v1/raw_ops/ComputeAccidentalHits + to: /tf/raw_ops/ComputeAccidentalHits +- from: /tf/compat/v1/raw_ops/Concat + to: /tf/raw_ops/Concat +- from: /tf/compat/v1/raw_ops/ConcatOffset + to: /tf/raw_ops/ConcatOffset +- from: /tf/compat/v1/raw_ops/ConcatV2 + to: /tf/raw_ops/ConcatV2 +- from: /tf/compat/v1/raw_ops/ConcatenateDataset + to: /tf/raw_ops/ConcatenateDataset +- from: /tf/compat/v1/raw_ops/ConditionalAccumulator + to: /tf/raw_ops/ConditionalAccumulator +- from: /tf/compat/v1/raw_ops/ConfigureDistributedTPU + to: /tf/raw_ops/ConfigureDistributedTPU +- from: /tf/compat/v1/raw_ops/ConfigureTPUEmbedding + to: /tf/raw_ops/ConfigureTPUEmbedding +- from: /tf/compat/v1/raw_ops/Conj + to: /tf/raw_ops/Conj +- from: /tf/compat/v1/raw_ops/ConjugateTranspose + to: /tf/raw_ops/ConjugateTranspose +- from: /tf/compat/v1/raw_ops/Const + to: /tf/raw_ops/Const +- from: /tf/compat/v1/raw_ops/ConsumeMutexLock + to: /tf/raw_ops/ConsumeMutexLock +- from: /tf/compat/v1/raw_ops/ControlTrigger + to: /tf/raw_ops/ControlTrigger +- from: /tf/compat/v1/raw_ops/Conv2D + to: /tf/raw_ops/Conv2D +- from: /tf/compat/v1/raw_ops/Conv2DBackpropFilter + to: /tf/raw_ops/Conv2DBackpropFilter +- from: /tf/compat/v1/raw_ops/Conv2DBackpropInput + to: /tf/raw_ops/Conv2DBackpropInput +- from: /tf/compat/v1/raw_ops/Conv3D + to: /tf/raw_ops/Conv3D +- from: /tf/compat/v1/raw_ops/Conv3DBackpropFilter + to: /tf/raw_ops/Conv3DBackpropFilter +- from: /tf/compat/v1/raw_ops/Conv3DBackpropFilterV2 + to: /tf/raw_ops/Conv3DBackpropFilterV2 +- from: /tf/compat/v1/raw_ops/Conv3DBackpropInput + to: /tf/raw_ops/Conv3DBackpropInput +- from: /tf/compat/v1/raw_ops/Conv3DBackpropInputV2 + to: /tf/raw_ops/Conv3DBackpropInputV2 +- from: /tf/compat/v1/raw_ops/Copy + to: /tf/raw_ops/Copy +- from: /tf/compat/v1/raw_ops/CopyHost + to: /tf/raw_ops/CopyHost +- from: /tf/compat/v1/raw_ops/Cos + to: /tf/raw_ops/Cos +- from: /tf/compat/v1/raw_ops/Cosh + to: /tf/raw_ops/Cosh +- from: /tf/compat/v1/raw_ops/CountUpTo + to: /tf/raw_ops/CountUpTo +- from: /tf/compat/v1/raw_ops/CreateSummaryDbWriter + to: /tf/raw_ops/CreateSummaryDbWriter +- from: /tf/compat/v1/raw_ops/CreateSummaryFileWriter + to: /tf/raw_ops/CreateSummaryFileWriter +- from: /tf/compat/v1/raw_ops/CropAndResize + to: /tf/raw_ops/CropAndResize +- from: /tf/compat/v1/raw_ops/CropAndResizeGradBoxes + to: /tf/raw_ops/CropAndResizeGradBoxes +- from: /tf/compat/v1/raw_ops/CropAndResizeGradImage + to: /tf/raw_ops/CropAndResizeGradImage +- from: /tf/compat/v1/raw_ops/Cross + to: /tf/raw_ops/Cross +- from: /tf/compat/v1/raw_ops/CrossReplicaSum + to: /tf/raw_ops/CrossReplicaSum +- from: /tf/compat/v1/raw_ops/CudnnRNN + to: /tf/raw_ops/CudnnRNN +- from: /tf/compat/v1/raw_ops/CudnnRNNBackprop + to: /tf/raw_ops/CudnnRNNBackprop +- from: /tf/compat/v1/raw_ops/CudnnRNNBackpropV2 + to: /tf/raw_ops/CudnnRNNBackpropV2 +- from: 
/tf/compat/v1/raw_ops/CudnnRNNBackpropV3 + to: /tf/raw_ops/CudnnRNNBackpropV3 +- from: /tf/compat/v1/raw_ops/CudnnRNNCanonicalToParams + to: /tf/raw_ops/CudnnRNNCanonicalToParams +- from: /tf/compat/v1/raw_ops/CudnnRNNCanonicalToParamsV2 + to: /tf/raw_ops/CudnnRNNCanonicalToParamsV2 +- from: /tf/compat/v1/raw_ops/CudnnRNNParamsSize + to: /tf/raw_ops/CudnnRNNParamsSize +- from: /tf/compat/v1/raw_ops/CudnnRNNParamsToCanonical + to: /tf/raw_ops/CudnnRNNParamsToCanonical +- from: /tf/compat/v1/raw_ops/CudnnRNNParamsToCanonicalV2 + to: /tf/raw_ops/CudnnRNNParamsToCanonicalV2 +- from: /tf/compat/v1/raw_ops/CudnnRNNV2 + to: /tf/raw_ops/CudnnRNNV2 +- from: /tf/compat/v1/raw_ops/CudnnRNNV3 + to: /tf/raw_ops/CudnnRNNV3 +- from: /tf/compat/v1/raw_ops/Cumprod + to: /tf/raw_ops/Cumprod +- from: /tf/compat/v1/raw_ops/Cumsum + to: /tf/raw_ops/Cumsum +- from: /tf/compat/v1/raw_ops/CumulativeLogsumexp + to: /tf/raw_ops/CumulativeLogsumexp +- from: /tf/compat/v1/raw_ops/DataFormatDimMap + to: /tf/raw_ops/DataFormatDimMap +- from: /tf/compat/v1/raw_ops/DataFormatVecPermute + to: /tf/raw_ops/DataFormatVecPermute +- from: /tf/compat/v1/raw_ops/DatasetCardinality + to: /tf/raw_ops/DatasetCardinality +- from: /tf/compat/v1/raw_ops/DatasetFromGraph + to: /tf/raw_ops/DatasetFromGraph +- from: /tf/compat/v1/raw_ops/DatasetToGraph + to: /tf/raw_ops/DatasetToGraph +- from: /tf/compat/v1/raw_ops/DatasetToGraphV2 + to: /tf/raw_ops/DatasetToGraphV2 +- from: /tf/compat/v1/raw_ops/DatasetToSingleElement + to: /tf/raw_ops/DatasetToSingleElement +- from: /tf/compat/v1/raw_ops/DatasetToTFRecord + to: /tf/raw_ops/DatasetToTFRecord +- from: /tf/compat/v1/raw_ops/Dawsn + to: /tf/raw_ops/Dawsn +- from: /tf/compat/v1/raw_ops/DebugGradientIdentity + to: /tf/raw_ops/DebugGradientIdentity +- from: /tf/compat/v1/raw_ops/DebugGradientRefIdentity + to: /tf/raw_ops/DebugGradientRefIdentity +- from: /tf/compat/v1/raw_ops/DebugIdentity + to: /tf/raw_ops/DebugIdentity +- from: /tf/compat/v1/raw_ops/DebugIdentityV2 + to: /tf/raw_ops/DebugIdentityV2 +- from: /tf/compat/v1/raw_ops/DebugNanCount + to: /tf/raw_ops/DebugNanCount +- from: /tf/compat/v1/raw_ops/DebugNumericSummary + to: /tf/raw_ops/DebugNumericSummary +- from: /tf/compat/v1/raw_ops/DebugNumericSummaryV2 + to: /tf/raw_ops/DebugNumericSummaryV2 +- from: /tf/compat/v1/raw_ops/DecodeAndCropJpeg + to: /tf/raw_ops/DecodeAndCropJpeg +- from: /tf/compat/v1/raw_ops/DecodeBase64 + to: /tf/raw_ops/DecodeBase64 +- from: /tf/compat/v1/raw_ops/DecodeBmp + to: /tf/raw_ops/DecodeBmp +- from: /tf/compat/v1/raw_ops/DecodeCSV + to: /tf/raw_ops/DecodeCSV +- from: /tf/compat/v1/raw_ops/DecodeCompressed + to: /tf/raw_ops/DecodeCompressed +- from: /tf/compat/v1/raw_ops/DecodeGif + to: /tf/raw_ops/DecodeGif +- from: /tf/compat/v1/raw_ops/DecodeJSONExample + to: /tf/raw_ops/DecodeJSONExample +- from: /tf/compat/v1/raw_ops/DecodeJpeg + to: /tf/raw_ops/DecodeJpeg +- from: /tf/compat/v1/raw_ops/DecodePaddedRaw + to: /tf/raw_ops/DecodePaddedRaw +- from: /tf/compat/v1/raw_ops/DecodePng + to: /tf/raw_ops/DecodePng +- from: /tf/compat/v1/raw_ops/DecodeProtoV2 + to: /tf/raw_ops/DecodeProtoV2 +- from: /tf/compat/v1/raw_ops/DecodeRaw + to: /tf/raw_ops/DecodeRaw +- from: /tf/compat/v1/raw_ops/DecodeWav + to: /tf/raw_ops/DecodeWav +- from: /tf/compat/v1/raw_ops/DeepCopy + to: /tf/raw_ops/DeepCopy +- from: /tf/compat/v1/raw_ops/DeleteIterator + to: /tf/raw_ops/DeleteIterator +- from: /tf/compat/v1/raw_ops/DeleteMemoryCache + to: /tf/raw_ops/DeleteMemoryCache +- from: /tf/compat/v1/raw_ops/DeleteMultiDeviceIterator + 
to: /tf/raw_ops/DeleteMultiDeviceIterator +- from: /tf/compat/v1/raw_ops/DeleteRandomSeedGenerator + to: /tf/raw_ops/DeleteRandomSeedGenerator +- from: /tf/compat/v1/raw_ops/DeleteSessionTensor + to: /tf/raw_ops/DeleteSessionTensor +- from: /tf/compat/v1/raw_ops/DenseToCSRSparseMatrix + to: /tf/raw_ops/DenseToCSRSparseMatrix +- from: /tf/compat/v1/raw_ops/DenseToDenseSetOperation + to: /tf/raw_ops/DenseToDenseSetOperation +- from: /tf/compat/v1/raw_ops/DenseToSparseBatchDataset + to: /tf/raw_ops/DenseToSparseBatchDataset +- from: /tf/compat/v1/raw_ops/DenseToSparseSetOperation + to: /tf/raw_ops/DenseToSparseSetOperation +- from: /tf/compat/v1/raw_ops/DepthToSpace + to: /tf/raw_ops/DepthToSpace +- from: /tf/compat/v1/raw_ops/DepthwiseConv2dNative + to: /tf/raw_ops/DepthwiseConv2dNative +- from: /tf/compat/v1/raw_ops/DepthwiseConv2dNativeBackpropFilter + to: /tf/raw_ops/DepthwiseConv2dNativeBackpropFilter +- from: /tf/compat/v1/raw_ops/DepthwiseConv2dNativeBackpropInput + to: /tf/raw_ops/DepthwiseConv2dNativeBackpropInput +- from: /tf/compat/v1/raw_ops/Dequantize + to: /tf/raw_ops/Dequantize +- from: /tf/compat/v1/raw_ops/DeserializeIterator + to: /tf/raw_ops/DeserializeIterator +- from: /tf/compat/v1/raw_ops/DeserializeManySparse + to: /tf/raw_ops/DeserializeManySparse +- from: /tf/compat/v1/raw_ops/DeserializeSparse + to: /tf/raw_ops/DeserializeSparse +- from: /tf/compat/v1/raw_ops/DestroyResourceOp + to: /tf/raw_ops/DestroyResourceOp +- from: /tf/compat/v1/raw_ops/DestroyTemporaryVariable + to: /tf/raw_ops/DestroyTemporaryVariable +- from: /tf/compat/v1/raw_ops/Diag + to: /tf/raw_ops/Diag +- from: /tf/compat/v1/raw_ops/DiagPart + to: /tf/raw_ops/DiagPart +- from: /tf/compat/v1/raw_ops/Digamma + to: /tf/raw_ops/Digamma +- from: /tf/compat/v1/raw_ops/Dilation2D + to: /tf/raw_ops/Dilation2D +- from: /tf/compat/v1/raw_ops/Dilation2DBackpropFilter + to: /tf/raw_ops/Dilation2DBackpropFilter +- from: /tf/compat/v1/raw_ops/Dilation2DBackpropInput + to: /tf/raw_ops/Dilation2DBackpropInput +- from: /tf/compat/v1/raw_ops/DirectedInterleaveDataset + to: /tf/raw_ops/DirectedInterleaveDataset +- from: /tf/compat/v1/raw_ops/Div + to: /tf/raw_ops/Div +- from: /tf/compat/v1/raw_ops/DivNoNan + to: /tf/raw_ops/DivNoNan +- from: /tf/compat/v1/raw_ops/DrawBoundingBoxes + to: /tf/raw_ops/DrawBoundingBoxes +- from: /tf/compat/v1/raw_ops/DrawBoundingBoxesV2 + to: /tf/raw_ops/DrawBoundingBoxesV2 +- from: /tf/compat/v1/raw_ops/DummyMemoryCache + to: /tf/raw_ops/DummyMemoryCache +- from: /tf/compat/v1/raw_ops/DynamicPartition + to: /tf/raw_ops/DynamicPartition +- from: /tf/compat/v1/raw_ops/DynamicStitch + to: /tf/raw_ops/DynamicStitch +- from: /tf/compat/v1/raw_ops/EagerPyFunc + to: /tf/raw_ops/EagerPyFunc +- from: /tf/compat/v1/raw_ops/EditDistance + to: /tf/raw_ops/EditDistance +- from: /tf/compat/v1/raw_ops/Eig + to: /tf/raw_ops/Eig +- from: /tf/compat/v1/raw_ops/Einsum + to: /tf/raw_ops/Einsum +- from: /tf/compat/v1/raw_ops/Elu + to: /tf/raw_ops/Elu +- from: /tf/compat/v1/raw_ops/EluGrad + to: /tf/raw_ops/EluGrad +- from: /tf/compat/v1/raw_ops/Empty + to: /tf/raw_ops/Empty +- from: /tf/compat/v1/raw_ops/EmptyTensorList + to: /tf/raw_ops/EmptyTensorList +- from: /tf/compat/v1/raw_ops/EncodeBase64 + to: /tf/raw_ops/EncodeBase64 +- from: /tf/compat/v1/raw_ops/EncodeJpeg + to: /tf/raw_ops/EncodeJpeg +- from: /tf/compat/v1/raw_ops/EncodeJpegVariableQuality + to: /tf/raw_ops/EncodeJpegVariableQuality +- from: /tf/compat/v1/raw_ops/EncodePng + to: /tf/raw_ops/EncodePng +- from: /tf/compat/v1/raw_ops/EncodeProto + to: 
/tf/raw_ops/EncodeProto +- from: /tf/compat/v1/raw_ops/EncodeWav + to: /tf/raw_ops/EncodeWav +- from: /tf/compat/v1/raw_ops/EnqueueTPUEmbeddingIntegerBatch + to: /tf/raw_ops/EnqueueTPUEmbeddingIntegerBatch +- from: /tf/compat/v1/raw_ops/EnqueueTPUEmbeddingSparseBatch + to: /tf/raw_ops/EnqueueTPUEmbeddingSparseBatch +- from: /tf/compat/v1/raw_ops/EnqueueTPUEmbeddingSparseTensorBatch + to: /tf/raw_ops/EnqueueTPUEmbeddingSparseTensorBatch +- from: /tf/compat/v1/raw_ops/EnsureShape + to: /tf/raw_ops/EnsureShape +- from: /tf/compat/v1/raw_ops/Enter + to: /tf/raw_ops/Enter +- from: /tf/compat/v1/raw_ops/Equal + to: /tf/raw_ops/Equal +- from: /tf/compat/v1/raw_ops/Erf + to: /tf/raw_ops/Erf +- from: /tf/compat/v1/raw_ops/Erfc + to: /tf/raw_ops/Erfc +- from: /tf/compat/v1/raw_ops/Erfinv + to: /tf/raw_ops/Erfinv +- from: /tf/compat/v1/raw_ops/EuclideanNorm + to: /tf/raw_ops/EuclideanNorm +- from: /tf/compat/v1/raw_ops/Exit + to: /tf/raw_ops/Exit +- from: /tf/compat/v1/raw_ops/Exp + to: /tf/raw_ops/Exp +- from: /tf/compat/v1/raw_ops/ExpandDims + to: /tf/raw_ops/ExpandDims +- from: /tf/compat/v1/raw_ops/ExperimentalAssertNextDataset + to: /tf/raw_ops/ExperimentalAssertNextDataset +- from: /tf/compat/v1/raw_ops/ExperimentalAutoShardDataset + to: /tf/raw_ops/ExperimentalAutoShardDataset +- from: /tf/compat/v1/raw_ops/ExperimentalBytesProducedStatsDataset + to: /tf/raw_ops/ExperimentalBytesProducedStatsDataset +- from: /tf/compat/v1/raw_ops/ExperimentalCSVDataset + to: /tf/raw_ops/ExperimentalCSVDataset +- from: /tf/compat/v1/raw_ops/ExperimentalChooseFastestDataset + to: /tf/raw_ops/ExperimentalChooseFastestDataset +- from: /tf/compat/v1/raw_ops/ExperimentalDatasetCardinality + to: /tf/raw_ops/ExperimentalDatasetCardinality +- from: /tf/compat/v1/raw_ops/ExperimentalDatasetToTFRecord + to: /tf/raw_ops/ExperimentalDatasetToTFRecord +- from: /tf/compat/v1/raw_ops/ExperimentalDenseToSparseBatchDataset + to: /tf/raw_ops/ExperimentalDenseToSparseBatchDataset +- from: /tf/compat/v1/raw_ops/ExperimentalDirectedInterleaveDataset + to: /tf/raw_ops/ExperimentalDirectedInterleaveDataset +- from: /tf/compat/v1/raw_ops/ExperimentalGroupByReducerDataset + to: /tf/raw_ops/ExperimentalGroupByReducerDataset +- from: /tf/compat/v1/raw_ops/ExperimentalGroupByWindowDataset + to: /tf/raw_ops/ExperimentalGroupByWindowDataset +- from: /tf/compat/v1/raw_ops/ExperimentalIgnoreErrorsDataset + to: /tf/raw_ops/ExperimentalIgnoreErrorsDataset +- from: /tf/compat/v1/raw_ops/ExperimentalIteratorGetDevice + to: /tf/raw_ops/ExperimentalIteratorGetDevice +- from: /tf/compat/v1/raw_ops/ExperimentalLMDBDataset + to: /tf/raw_ops/ExperimentalLMDBDataset +- from: /tf/compat/v1/raw_ops/ExperimentalLatencyStatsDataset + to: /tf/raw_ops/ExperimentalLatencyStatsDataset +- from: /tf/compat/v1/raw_ops/ExperimentalMapAndBatchDataset + to: /tf/raw_ops/ExperimentalMapAndBatchDataset +- from: /tf/compat/v1/raw_ops/ExperimentalMapDataset + to: /tf/raw_ops/ExperimentalMapDataset +- from: /tf/compat/v1/raw_ops/ExperimentalMatchingFilesDataset + to: /tf/raw_ops/ExperimentalMatchingFilesDataset +- from: /tf/compat/v1/raw_ops/ExperimentalMaxIntraOpParallelismDataset + to: /tf/raw_ops/ExperimentalMaxIntraOpParallelismDataset +- from: /tf/compat/v1/raw_ops/ExperimentalNonSerializableDataset + to: /tf/raw_ops/ExperimentalNonSerializableDataset +- from: /tf/compat/v1/raw_ops/ExperimentalParallelInterleaveDataset + to: /tf/raw_ops/ExperimentalParallelInterleaveDataset +- from: /tf/compat/v1/raw_ops/ExperimentalParseExampleDataset + to: 
/tf/raw_ops/ExperimentalParseExampleDataset +- from: /tf/compat/v1/raw_ops/ExperimentalPrivateThreadPoolDataset + to: /tf/raw_ops/ExperimentalPrivateThreadPoolDataset +- from: /tf/compat/v1/raw_ops/ExperimentalRandomDataset + to: /tf/raw_ops/ExperimentalRandomDataset +- from: /tf/compat/v1/raw_ops/ExperimentalRebatchDataset + to: /tf/raw_ops/ExperimentalRebatchDataset +- from: /tf/compat/v1/raw_ops/ExperimentalScanDataset + to: /tf/raw_ops/ExperimentalScanDataset +- from: /tf/compat/v1/raw_ops/ExperimentalSetStatsAggregatorDataset + to: /tf/raw_ops/ExperimentalSetStatsAggregatorDataset +- from: /tf/compat/v1/raw_ops/ExperimentalSleepDataset + to: /tf/raw_ops/ExperimentalSleepDataset +- from: /tf/compat/v1/raw_ops/ExperimentalSlidingWindowDataset + to: /tf/raw_ops/ExperimentalSlidingWindowDataset +- from: /tf/compat/v1/raw_ops/ExperimentalSqlDataset + to: /tf/raw_ops/ExperimentalSqlDataset +- from: /tf/compat/v1/raw_ops/ExperimentalStatsAggregatorHandle + to: /tf/raw_ops/ExperimentalStatsAggregatorHandle +- from: /tf/compat/v1/raw_ops/ExperimentalStatsAggregatorSummary + to: /tf/raw_ops/ExperimentalStatsAggregatorSummary +- from: /tf/compat/v1/raw_ops/ExperimentalTakeWhileDataset + to: /tf/raw_ops/ExperimentalTakeWhileDataset +- from: /tf/compat/v1/raw_ops/ExperimentalThreadPoolDataset + to: /tf/raw_ops/ExperimentalThreadPoolDataset +- from: /tf/compat/v1/raw_ops/ExperimentalThreadPoolHandle + to: /tf/raw_ops/ExperimentalThreadPoolHandle +- from: /tf/compat/v1/raw_ops/ExperimentalUnbatchDataset + to: /tf/raw_ops/ExperimentalUnbatchDataset +- from: /tf/compat/v1/raw_ops/ExperimentalUniqueDataset + to: /tf/raw_ops/ExperimentalUniqueDataset +- from: /tf/compat/v1/raw_ops/Expint + to: /tf/raw_ops/Expint +- from: /tf/compat/v1/raw_ops/Expm1 + to: /tf/raw_ops/Expm1 +- from: /tf/compat/v1/raw_ops/ExtractGlimpse + to: /tf/raw_ops/ExtractGlimpse +- from: /tf/compat/v1/raw_ops/ExtractImagePatches + to: /tf/raw_ops/ExtractImagePatches +- from: /tf/compat/v1/raw_ops/ExtractJpegShape + to: /tf/raw_ops/ExtractJpegShape +- from: /tf/compat/v1/raw_ops/ExtractVolumePatches + to: /tf/raw_ops/ExtractVolumePatches +- from: /tf/compat/v1/raw_ops/FFT + to: /tf/raw_ops/FFT +- from: /tf/compat/v1/raw_ops/FFT2D + to: /tf/raw_ops/FFT2D +- from: /tf/compat/v1/raw_ops/FFT3D + to: /tf/raw_ops/FFT3D +- from: /tf/compat/v1/raw_ops/FIFOQueue + to: /tf/raw_ops/FIFOQueue +- from: /tf/compat/v1/raw_ops/FIFOQueueV2 + to: /tf/raw_ops/FIFOQueueV2 +- from: /tf/compat/v1/raw_ops/Fact + to: /tf/raw_ops/Fact +- from: /tf/compat/v1/raw_ops/FakeParam + to: /tf/raw_ops/FakeParam +- from: /tf/compat/v1/raw_ops/FakeQuantWithMinMaxArgs + to: /tf/raw_ops/FakeQuantWithMinMaxArgs +- from: /tf/compat/v1/raw_ops/FakeQuantWithMinMaxArgsGradient + to: /tf/raw_ops/FakeQuantWithMinMaxArgsGradient +- from: /tf/compat/v1/raw_ops/FakeQuantWithMinMaxVars + to: /tf/raw_ops/FakeQuantWithMinMaxVars +- from: /tf/compat/v1/raw_ops/FakeQuantWithMinMaxVarsGradient + to: /tf/raw_ops/FakeQuantWithMinMaxVarsGradient +- from: /tf/compat/v1/raw_ops/FakeQuantWithMinMaxVarsPerChannel + to: /tf/raw_ops/FakeQuantWithMinMaxVarsPerChannel +- from: /tf/compat/v1/raw_ops/FakeQuantWithMinMaxVarsPerChannelGradient + to: /tf/raw_ops/FakeQuantWithMinMaxVarsPerChannelGradient +- from: /tf/compat/v1/raw_ops/FakeQueue + to: /tf/raw_ops/FakeQueue +- from: /tf/compat/v1/raw_ops/Fill + to: /tf/raw_ops/Fill +- from: /tf/compat/v1/raw_ops/FilterByLastComponentDataset + to: /tf/raw_ops/FilterByLastComponentDataset +- from: /tf/compat/v1/raw_ops/FilterDataset + to: 
/tf/raw_ops/FilterDataset +- from: /tf/compat/v1/raw_ops/Fingerprint + to: /tf/raw_ops/Fingerprint +- from: /tf/compat/v1/raw_ops/FixedLengthRecordDataset + to: /tf/raw_ops/FixedLengthRecordDataset +- from: /tf/compat/v1/raw_ops/FixedLengthRecordDatasetV2 + to: /tf/raw_ops/FixedLengthRecordDatasetV2 +- from: /tf/compat/v1/raw_ops/FixedLengthRecordReader + to: /tf/raw_ops/FixedLengthRecordReader +- from: /tf/compat/v1/raw_ops/FixedLengthRecordReaderV2 + to: /tf/raw_ops/FixedLengthRecordReaderV2 +- from: /tf/compat/v1/raw_ops/FixedUnigramCandidateSampler + to: /tf/raw_ops/FixedUnigramCandidateSampler +- from: /tf/compat/v1/raw_ops/FlatMapDataset + to: /tf/raw_ops/FlatMapDataset +- from: /tf/compat/v1/raw_ops/Floor + to: /tf/raw_ops/Floor +- from: /tf/compat/v1/raw_ops/FloorDiv + to: /tf/raw_ops/FloorDiv +- from: /tf/compat/v1/raw_ops/FloorMod + to: /tf/raw_ops/FloorMod +- from: /tf/compat/v1/raw_ops/FlushSummaryWriter + to: /tf/raw_ops/FlushSummaryWriter +- from: /tf/compat/v1/raw_ops/For + to: /tf/raw_ops/For +- from: /tf/compat/v1/raw_ops/FractionalAvgPool + to: /tf/raw_ops/FractionalAvgPool +- from: /tf/compat/v1/raw_ops/FractionalAvgPoolGrad + to: /tf/raw_ops/FractionalAvgPoolGrad +- from: /tf/compat/v1/raw_ops/FractionalMaxPool + to: /tf/raw_ops/FractionalMaxPool +- from: /tf/compat/v1/raw_ops/FractionalMaxPoolGrad + to: /tf/raw_ops/FractionalMaxPoolGrad +- from: /tf/compat/v1/raw_ops/FresnelCos + to: /tf/raw_ops/FresnelCos +- from: /tf/compat/v1/raw_ops/FresnelSin + to: /tf/raw_ops/FresnelSin +- from: /tf/compat/v1/raw_ops/FusedBatchNorm + to: /tf/raw_ops/FusedBatchNorm +- from: /tf/compat/v1/raw_ops/FusedBatchNormGrad + to: /tf/raw_ops/FusedBatchNormGrad +- from: /tf/compat/v1/raw_ops/FusedBatchNormGradV2 + to: /tf/raw_ops/FusedBatchNormGradV2 +- from: /tf/compat/v1/raw_ops/FusedBatchNormGradV3 + to: /tf/raw_ops/FusedBatchNormGradV3 +- from: /tf/compat/v1/raw_ops/FusedBatchNormV2 + to: /tf/raw_ops/FusedBatchNormV2 +- from: /tf/compat/v1/raw_ops/FusedBatchNormV3 + to: /tf/raw_ops/FusedBatchNormV3 +- from: /tf/compat/v1/raw_ops/FusedPadConv2D + to: /tf/raw_ops/FusedPadConv2D +- from: /tf/compat/v1/raw_ops/FusedResizeAndPadConv2D + to: /tf/raw_ops/FusedResizeAndPadConv2D +- from: /tf/compat/v1/raw_ops/GRUBlockCell + to: /tf/raw_ops/GRUBlockCell +- from: /tf/compat/v1/raw_ops/GRUBlockCellGrad + to: /tf/raw_ops/GRUBlockCellGrad +- from: /tf/compat/v1/raw_ops/Gather + to: /tf/raw_ops/Gather +- from: /tf/compat/v1/raw_ops/GatherNd + to: /tf/raw_ops/GatherNd +- from: /tf/compat/v1/raw_ops/GatherV2 + to: /tf/raw_ops/GatherV2 +- from: /tf/compat/v1/raw_ops/GenerateBoundingBoxProposals + to: /tf/raw_ops/GenerateBoundingBoxProposals +- from: /tf/compat/v1/raw_ops/GenerateVocabRemapping + to: /tf/raw_ops/GenerateVocabRemapping +- from: /tf/compat/v1/raw_ops/GeneratorDataset + to: /tf/raw_ops/GeneratorDataset +- from: /tf/compat/v1/raw_ops/GetSessionHandle + to: /tf/raw_ops/GetSessionHandle +- from: /tf/compat/v1/raw_ops/GetSessionHandleV2 + to: /tf/raw_ops/GetSessionHandleV2 +- from: /tf/compat/v1/raw_ops/GetSessionTensor + to: /tf/raw_ops/GetSessionTensor +- from: /tf/compat/v1/raw_ops/Greater + to: /tf/raw_ops/Greater +- from: /tf/compat/v1/raw_ops/GreaterEqual + to: /tf/raw_ops/GreaterEqual +- from: /tf/compat/v1/raw_ops/GroupByReducerDataset + to: /tf/raw_ops/GroupByReducerDataset +- from: /tf/compat/v1/raw_ops/GroupByWindowDataset + to: /tf/raw_ops/GroupByWindowDataset +- from: /tf/compat/v1/raw_ops/GuaranteeConst + to: /tf/raw_ops/GuaranteeConst +- from: /tf/compat/v1/raw_ops/HSVToRGB + to: 
/tf/raw_ops/HSVToRGB +- from: /tf/compat/v1/raw_ops/HashTable + to: /tf/raw_ops/HashTable +- from: /tf/compat/v1/raw_ops/HashTableV2 + to: /tf/raw_ops/HashTableV2 +- from: /tf/compat/v1/raw_ops/HistogramFixedWidth + to: /tf/raw_ops/HistogramFixedWidth +- from: /tf/compat/v1/raw_ops/HistogramSummary + to: /tf/raw_ops/HistogramSummary +- from: /tf/compat/v1/raw_ops/IFFT + to: /tf/raw_ops/IFFT +- from: /tf/compat/v1/raw_ops/IFFT2D + to: /tf/raw_ops/IFFT2D +- from: /tf/compat/v1/raw_ops/IFFT3D + to: /tf/raw_ops/IFFT3D +- from: /tf/compat/v1/raw_ops/IRFFT + to: /tf/raw_ops/IRFFT +- from: /tf/compat/v1/raw_ops/IRFFT2D + to: /tf/raw_ops/IRFFT2D +- from: /tf/compat/v1/raw_ops/IRFFT3D + to: /tf/raw_ops/IRFFT3D +- from: /tf/compat/v1/raw_ops/Identity + to: /tf/raw_ops/Identity +- from: /tf/compat/v1/raw_ops/IdentityN + to: /tf/raw_ops/IdentityN +- from: /tf/compat/v1/raw_ops/IdentityReader + to: /tf/raw_ops/IdentityReader +- from: /tf/compat/v1/raw_ops/IdentityReaderV2 + to: /tf/raw_ops/IdentityReaderV2 +- from: /tf/compat/v1/raw_ops/If + to: /tf/raw_ops/If +- from: /tf/compat/v1/raw_ops/Igamma + to: /tf/raw_ops/Igamma +- from: /tf/compat/v1/raw_ops/IgammaGradA + to: /tf/raw_ops/IgammaGradA +- from: /tf/compat/v1/raw_ops/Igammac + to: /tf/raw_ops/Igammac +- from: /tf/compat/v1/raw_ops/IgnoreErrorsDataset + to: /tf/raw_ops/IgnoreErrorsDataset +- from: /tf/compat/v1/raw_ops/Imag + to: /tf/raw_ops/Imag +- from: /tf/compat/v1/raw_ops/ImageProjectiveTransformV2 + to: /tf/raw_ops/ImageProjectiveTransformV2 +- from: /tf/compat/v1/raw_ops/ImageSummary + to: /tf/raw_ops/ImageSummary +- from: /tf/compat/v1/raw_ops/ImmutableConst + to: /tf/raw_ops/ImmutableConst +- from: /tf/compat/v1/raw_ops/ImportEvent + to: /tf/raw_ops/ImportEvent +- from: /tf/compat/v1/raw_ops/InTopK + to: /tf/raw_ops/InTopK +- from: /tf/compat/v1/raw_ops/InTopKV2 + to: /tf/raw_ops/InTopKV2 +- from: /tf/compat/v1/raw_ops/InfeedDequeue + to: /tf/raw_ops/InfeedDequeue +- from: /tf/compat/v1/raw_ops/InfeedDequeueTuple + to: /tf/raw_ops/InfeedDequeueTuple +- from: /tf/compat/v1/raw_ops/InfeedEnqueue + to: /tf/raw_ops/InfeedEnqueue +- from: /tf/compat/v1/raw_ops/InfeedEnqueuePrelinearizedBuffer + to: /tf/raw_ops/InfeedEnqueuePrelinearizedBuffer +- from: /tf/compat/v1/raw_ops/InfeedEnqueueTuple + to: /tf/raw_ops/InfeedEnqueueTuple +- from: /tf/compat/v1/raw_ops/InitializeTable + to: /tf/raw_ops/InitializeTable +- from: /tf/compat/v1/raw_ops/InitializeTableFromTextFile + to: /tf/raw_ops/InitializeTableFromTextFile +- from: /tf/compat/v1/raw_ops/InitializeTableFromTextFileV2 + to: /tf/raw_ops/InitializeTableFromTextFileV2 +- from: /tf/compat/v1/raw_ops/InitializeTableV2 + to: /tf/raw_ops/InitializeTableV2 +- from: /tf/compat/v1/raw_ops/InplaceAdd + to: /tf/raw_ops/InplaceAdd +- from: /tf/compat/v1/raw_ops/InplaceSub + to: /tf/raw_ops/InplaceSub +- from: /tf/compat/v1/raw_ops/InplaceUpdate + to: /tf/raw_ops/InplaceUpdate +- from: /tf/compat/v1/raw_ops/InterleaveDataset + to: /tf/raw_ops/InterleaveDataset +- from: /tf/compat/v1/raw_ops/Inv + to: /tf/raw_ops/Inv +- from: /tf/compat/v1/raw_ops/InvGrad + to: /tf/raw_ops/InvGrad +- from: /tf/compat/v1/raw_ops/Invert + to: /tf/raw_ops/Invert +- from: /tf/compat/v1/raw_ops/InvertPermutation + to: /tf/raw_ops/InvertPermutation +- from: /tf/compat/v1/raw_ops/IsBoostedTreesEnsembleInitialized + to: /tf/raw_ops/IsBoostedTreesEnsembleInitialized +- from: /tf/compat/v1/raw_ops/IsBoostedTreesQuantileStreamResourceInitialized + to: /tf/raw_ops/IsBoostedTreesQuantileStreamResourceInitialized +- from: 
/tf/compat/v1/raw_ops/IsFinite + to: /tf/raw_ops/IsFinite +- from: /tf/compat/v1/raw_ops/IsInf + to: /tf/raw_ops/IsInf +- from: /tf/compat/v1/raw_ops/IsNan + to: /tf/raw_ops/IsNan +- from: /tf/compat/v1/raw_ops/IsVariableInitialized + to: /tf/raw_ops/IsVariableInitialized +- from: /tf/compat/v1/raw_ops/Iterator + to: /tf/raw_ops/Iterator +- from: /tf/compat/v1/raw_ops/IteratorFromStringHandle + to: /tf/raw_ops/IteratorFromStringHandle +- from: /tf/compat/v1/raw_ops/IteratorFromStringHandleV2 + to: /tf/raw_ops/IteratorFromStringHandleV2 +- from: /tf/compat/v1/raw_ops/IteratorGetDevice + to: /tf/raw_ops/IteratorGetDevice +- from: /tf/compat/v1/raw_ops/IteratorGetNext + to: /tf/raw_ops/IteratorGetNext +- from: /tf/compat/v1/raw_ops/IteratorGetNextAsOptional + to: /tf/raw_ops/IteratorGetNextAsOptional +- from: /tf/compat/v1/raw_ops/IteratorGetNextSync + to: /tf/raw_ops/IteratorGetNextSync +- from: /tf/compat/v1/raw_ops/IteratorToStringHandle + to: /tf/raw_ops/IteratorToStringHandle +- from: /tf/compat/v1/raw_ops/IteratorV2 + to: /tf/raw_ops/IteratorV2 +- from: /tf/compat/v1/raw_ops/L2Loss + to: /tf/raw_ops/L2Loss +- from: /tf/compat/v1/raw_ops/LMDBDataset + to: /tf/raw_ops/LMDBDataset +- from: /tf/compat/v1/raw_ops/LMDBReader + to: /tf/raw_ops/LMDBReader +- from: /tf/compat/v1/raw_ops/LRN + to: /tf/raw_ops/LRN +- from: /tf/compat/v1/raw_ops/LRNGrad + to: /tf/raw_ops/LRNGrad +- from: /tf/compat/v1/raw_ops/LSTMBlockCell + to: /tf/raw_ops/LSTMBlockCell +- from: /tf/compat/v1/raw_ops/LSTMBlockCellGrad + to: /tf/raw_ops/LSTMBlockCellGrad +- from: /tf/compat/v1/raw_ops/LatencyStatsDataset + to: /tf/raw_ops/LatencyStatsDataset +- from: /tf/compat/v1/raw_ops/LeakyRelu + to: /tf/raw_ops/LeakyRelu +- from: /tf/compat/v1/raw_ops/LeakyReluGrad + to: /tf/raw_ops/LeakyReluGrad +- from: /tf/compat/v1/raw_ops/LearnedUnigramCandidateSampler + to: /tf/raw_ops/LearnedUnigramCandidateSampler +- from: /tf/compat/v1/raw_ops/LeftShift + to: /tf/raw_ops/LeftShift +- from: /tf/compat/v1/raw_ops/LegacyParallelInterleaveDatasetV2 + to: /tf/raw_ops/LegacyParallelInterleaveDatasetV2 +- from: /tf/compat/v1/raw_ops/Less + to: /tf/raw_ops/Less +- from: /tf/compat/v1/raw_ops/LessEqual + to: /tf/raw_ops/LessEqual +- from: /tf/compat/v1/raw_ops/Lgamma + to: /tf/raw_ops/Lgamma +- from: /tf/compat/v1/raw_ops/LinSpace + to: /tf/raw_ops/LinSpace +- from: /tf/compat/v1/raw_ops/ListDiff + to: /tf/raw_ops/ListDiff +- from: /tf/compat/v1/raw_ops/LoadAndRemapMatrix + to: /tf/raw_ops/LoadAndRemapMatrix +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingADAMParameters + to: /tf/raw_ops/LoadTPUEmbeddingADAMParameters +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingADAMParametersGradAccumDebug + to: /tf/raw_ops/LoadTPUEmbeddingADAMParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingAdadeltaParameters + to: /tf/raw_ops/LoadTPUEmbeddingAdadeltaParameters +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingAdadeltaParametersGradAccumDebug + to: /tf/raw_ops/LoadTPUEmbeddingAdadeltaParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingAdagradParameters + to: /tf/raw_ops/LoadTPUEmbeddingAdagradParameters +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingAdagradParametersGradAccumDebug + to: /tf/raw_ops/LoadTPUEmbeddingAdagradParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingCenteredRMSPropParameters + to: /tf/raw_ops/LoadTPUEmbeddingCenteredRMSPropParameters +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingFTRLParameters + to: /tf/raw_ops/LoadTPUEmbeddingFTRLParameters +- from: 
/tf/compat/v1/raw_ops/LoadTPUEmbeddingFTRLParametersGradAccumDebug + to: /tf/raw_ops/LoadTPUEmbeddingFTRLParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingMDLAdagradLightParameters + to: /tf/raw_ops/LoadTPUEmbeddingMDLAdagradLightParameters +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingMomentumParameters + to: /tf/raw_ops/LoadTPUEmbeddingMomentumParameters +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingMomentumParametersGradAccumDebug + to: /tf/raw_ops/LoadTPUEmbeddingMomentumParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingProximalAdagradParameters + to: /tf/raw_ops/LoadTPUEmbeddingProximalAdagradParameters +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug + to: /tf/raw_ops/LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingRMSPropParameters + to: /tf/raw_ops/LoadTPUEmbeddingRMSPropParameters +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingRMSPropParametersGradAccumDebug + to: /tf/raw_ops/LoadTPUEmbeddingRMSPropParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/LoadTPUEmbeddingStochasticGradientDescentParameters + to: /tf/raw_ops/LoadTPUEmbeddingStochasticGradientDescentParameters +- from: /tf/compat/v1/raw_ops/Log + to: /tf/raw_ops/Log +- from: /tf/compat/v1/raw_ops/Log1p + to: /tf/raw_ops/Log1p +- from: /tf/compat/v1/raw_ops/LogMatrixDeterminant + to: /tf/raw_ops/LogMatrixDeterminant +- from: /tf/compat/v1/raw_ops/LogSoftmax + to: /tf/raw_ops/LogSoftmax +- from: /tf/compat/v1/raw_ops/LogUniformCandidateSampler + to: /tf/raw_ops/LogUniformCandidateSampler +- from: /tf/compat/v1/raw_ops/LogicalAnd + to: /tf/raw_ops/LogicalAnd +- from: /tf/compat/v1/raw_ops/LogicalNot + to: /tf/raw_ops/LogicalNot +- from: /tf/compat/v1/raw_ops/LogicalOr + to: /tf/raw_ops/LogicalOr +- from: /tf/compat/v1/raw_ops/LookupTableExport + to: /tf/raw_ops/LookupTableExport +- from: /tf/compat/v1/raw_ops/LookupTableExportV2 + to: /tf/raw_ops/LookupTableExportV2 +- from: /tf/compat/v1/raw_ops/LookupTableFind + to: /tf/raw_ops/LookupTableFind +- from: /tf/compat/v1/raw_ops/LookupTableFindV2 + to: /tf/raw_ops/LookupTableFindV2 +- from: /tf/compat/v1/raw_ops/LookupTableImport + to: /tf/raw_ops/LookupTableImport +- from: /tf/compat/v1/raw_ops/LookupTableImportV2 + to: /tf/raw_ops/LookupTableImportV2 +- from: /tf/compat/v1/raw_ops/LookupTableInsert + to: /tf/raw_ops/LookupTableInsert +- from: /tf/compat/v1/raw_ops/LookupTableInsertV2 + to: /tf/raw_ops/LookupTableInsertV2 +- from: /tf/compat/v1/raw_ops/LookupTableRemoveV2 + to: /tf/raw_ops/LookupTableRemoveV2 +- from: /tf/compat/v1/raw_ops/LookupTableSize + to: /tf/raw_ops/LookupTableSize +- from: /tf/compat/v1/raw_ops/LookupTableSizeV2 + to: /tf/raw_ops/LookupTableSizeV2 +- from: /tf/compat/v1/raw_ops/LoopCond + to: /tf/raw_ops/LoopCond +- from: /tf/compat/v1/raw_ops/LowerBound + to: /tf/raw_ops/LowerBound +- from: /tf/compat/v1/raw_ops/Lu + to: /tf/raw_ops/Lu +- from: /tf/compat/v1/raw_ops/MakeIterator + to: /tf/raw_ops/MakeIterator +- from: /tf/compat/v1/raw_ops/MapAndBatchDataset + to: /tf/raw_ops/MapAndBatchDataset +- from: /tf/compat/v1/raw_ops/MapClear + to: /tf/raw_ops/MapClear +- from: /tf/compat/v1/raw_ops/MapDataset + to: /tf/raw_ops/MapDataset +- from: /tf/compat/v1/raw_ops/MapDefun + to: /tf/raw_ops/MapDefun +- from: /tf/compat/v1/raw_ops/MapIncompleteSize + to: /tf/raw_ops/MapIncompleteSize +- from: /tf/compat/v1/raw_ops/MapPeek + to: /tf/raw_ops/MapPeek +- from: /tf/compat/v1/raw_ops/MapSize + to: /tf/raw_ops/MapSize +- 
from: /tf/compat/v1/raw_ops/MapStage + to: /tf/raw_ops/MapStage +- from: /tf/compat/v1/raw_ops/MapUnstage + to: /tf/raw_ops/MapUnstage +- from: /tf/compat/v1/raw_ops/MapUnstageNoKey + to: /tf/raw_ops/MapUnstageNoKey +- from: /tf/compat/v1/raw_ops/MatMul + to: /tf/raw_ops/MatMul +- from: /tf/compat/v1/raw_ops/MatchingFiles + to: /tf/raw_ops/MatchingFiles +- from: /tf/compat/v1/raw_ops/MatchingFilesDataset + to: /tf/raw_ops/MatchingFilesDataset +- from: /tf/compat/v1/raw_ops/MatrixBandPart + to: /tf/raw_ops/MatrixBandPart +- from: /tf/compat/v1/raw_ops/MatrixDeterminant + to: /tf/raw_ops/MatrixDeterminant +- from: /tf/compat/v1/raw_ops/MatrixDiag + to: /tf/raw_ops/MatrixDiag +- from: /tf/compat/v1/raw_ops/MatrixDiagPart + to: /tf/raw_ops/MatrixDiagPart +- from: /tf/compat/v1/raw_ops/MatrixDiagPartV2 + to: /tf/raw_ops/MatrixDiagPartV2 +- from: /tf/compat/v1/raw_ops/MatrixDiagPartV3 + to: /tf/raw_ops/MatrixDiagPartV3 +- from: /tf/compat/v1/raw_ops/MatrixDiagV2 + to: /tf/raw_ops/MatrixDiagV2 +- from: /tf/compat/v1/raw_ops/MatrixDiagV3 + to: /tf/raw_ops/MatrixDiagV3 +- from: /tf/compat/v1/raw_ops/MatrixExponential + to: /tf/raw_ops/MatrixExponential +- from: /tf/compat/v1/raw_ops/MatrixInverse + to: /tf/raw_ops/MatrixInverse +- from: /tf/compat/v1/raw_ops/MatrixLogarithm + to: /tf/raw_ops/MatrixLogarithm +- from: /tf/compat/v1/raw_ops/MatrixSetDiag + to: /tf/raw_ops/MatrixSetDiag +- from: /tf/compat/v1/raw_ops/MatrixSetDiagV2 + to: /tf/raw_ops/MatrixSetDiagV2 +- from: /tf/compat/v1/raw_ops/MatrixSetDiagV3 + to: /tf/raw_ops/MatrixSetDiagV3 +- from: /tf/compat/v1/raw_ops/MatrixSolve + to: /tf/raw_ops/MatrixSolve +- from: /tf/compat/v1/raw_ops/MatrixSolveLs + to: /tf/raw_ops/MatrixSolveLs +- from: /tf/compat/v1/raw_ops/MatrixSquareRoot + to: /tf/raw_ops/MatrixSquareRoot +- from: /tf/compat/v1/raw_ops/MatrixTriangularSolve + to: /tf/raw_ops/MatrixTriangularSolve +- from: /tf/compat/v1/raw_ops/Max + to: /tf/raw_ops/Max +- from: /tf/compat/v1/raw_ops/MaxIntraOpParallelismDataset + to: /tf/raw_ops/MaxIntraOpParallelismDataset +- from: /tf/compat/v1/raw_ops/MaxPool + to: /tf/raw_ops/MaxPool +- from: /tf/compat/v1/raw_ops/MaxPool3D + to: /tf/raw_ops/MaxPool3D +- from: /tf/compat/v1/raw_ops/MaxPool3DGrad + to: /tf/raw_ops/MaxPool3DGrad +- from: /tf/compat/v1/raw_ops/MaxPool3DGradGrad + to: /tf/raw_ops/MaxPool3DGradGrad +- from: /tf/compat/v1/raw_ops/MaxPoolGrad + to: /tf/raw_ops/MaxPoolGrad +- from: /tf/compat/v1/raw_ops/MaxPoolGradGrad + to: /tf/raw_ops/MaxPoolGradGrad +- from: /tf/compat/v1/raw_ops/MaxPoolGradGradV2 + to: /tf/raw_ops/MaxPoolGradGradV2 +- from: /tf/compat/v1/raw_ops/MaxPoolGradGradWithArgmax + to: /tf/raw_ops/MaxPoolGradGradWithArgmax +- from: /tf/compat/v1/raw_ops/MaxPoolGradV2 + to: /tf/raw_ops/MaxPoolGradV2 +- from: /tf/compat/v1/raw_ops/MaxPoolGradWithArgmax + to: /tf/raw_ops/MaxPoolGradWithArgmax +- from: /tf/compat/v1/raw_ops/MaxPoolV2 + to: /tf/raw_ops/MaxPoolV2 +- from: /tf/compat/v1/raw_ops/MaxPoolWithArgmax + to: /tf/raw_ops/MaxPoolWithArgmax +- from: /tf/compat/v1/raw_ops/Maximum + to: /tf/raw_ops/Maximum +- from: /tf/compat/v1/raw_ops/Mean + to: /tf/raw_ops/Mean +- from: /tf/compat/v1/raw_ops/Merge + to: /tf/raw_ops/Merge +- from: /tf/compat/v1/raw_ops/MergeSummary + to: /tf/raw_ops/MergeSummary +- from: /tf/compat/v1/raw_ops/MergeV2Checkpoints + to: /tf/raw_ops/MergeV2Checkpoints +- from: /tf/compat/v1/raw_ops/Mfcc + to: /tf/raw_ops/Mfcc +- from: /tf/compat/v1/raw_ops/Min + to: /tf/raw_ops/Min +- from: /tf/compat/v1/raw_ops/Minimum + to: /tf/raw_ops/Minimum +- from: 
/tf/compat/v1/raw_ops/MirrorPad + to: /tf/raw_ops/MirrorPad +- from: /tf/compat/v1/raw_ops/MirrorPadGrad + to: /tf/raw_ops/MirrorPadGrad +- from: /tf/compat/v1/raw_ops/Mod + to: /tf/raw_ops/Mod +- from: /tf/compat/v1/raw_ops/ModelDataset + to: /tf/raw_ops/ModelDataset +- from: /tf/compat/v1/raw_ops/Mul + to: /tf/raw_ops/Mul +- from: /tf/compat/v1/raw_ops/MulNoNan + to: /tf/raw_ops/MulNoNan +- from: /tf/compat/v1/raw_ops/MultiDeviceIterator + to: /tf/raw_ops/MultiDeviceIterator +- from: /tf/compat/v1/raw_ops/MultiDeviceIteratorFromStringHandle + to: /tf/raw_ops/MultiDeviceIteratorFromStringHandle +- from: /tf/compat/v1/raw_ops/MultiDeviceIteratorGetNextFromShard + to: /tf/raw_ops/MultiDeviceIteratorGetNextFromShard +- from: /tf/compat/v1/raw_ops/MultiDeviceIteratorInit + to: /tf/raw_ops/MultiDeviceIteratorInit +- from: /tf/compat/v1/raw_ops/MultiDeviceIteratorToStringHandle + to: /tf/raw_ops/MultiDeviceIteratorToStringHandle +- from: /tf/compat/v1/raw_ops/Multinomial + to: /tf/raw_ops/Multinomial +- from: /tf/compat/v1/raw_ops/MutableDenseHashTable + to: /tf/raw_ops/MutableDenseHashTable +- from: /tf/compat/v1/raw_ops/MutableDenseHashTableV2 + to: /tf/raw_ops/MutableDenseHashTableV2 +- from: /tf/compat/v1/raw_ops/MutableHashTable + to: /tf/raw_ops/MutableHashTable +- from: /tf/compat/v1/raw_ops/MutableHashTableOfTensors + to: /tf/raw_ops/MutableHashTableOfTensors +- from: /tf/compat/v1/raw_ops/MutableHashTableOfTensorsV2 + to: /tf/raw_ops/MutableHashTableOfTensorsV2 +- from: /tf/compat/v1/raw_ops/MutableHashTableV2 + to: /tf/raw_ops/MutableHashTableV2 +- from: /tf/compat/v1/raw_ops/MutexLock + to: /tf/raw_ops/MutexLock +- from: /tf/compat/v1/raw_ops/MutexV2 + to: /tf/raw_ops/MutexV2 +- from: /tf/compat/v1/raw_ops/NcclAllReduce + to: /tf/raw_ops/NcclAllReduce +- from: /tf/compat/v1/raw_ops/NcclBroadcast + to: /tf/raw_ops/NcclBroadcast +- from: /tf/compat/v1/raw_ops/NcclReduce + to: /tf/raw_ops/NcclReduce +- from: /tf/compat/v1/raw_ops/Ndtri + to: /tf/raw_ops/Ndtri +- from: /tf/compat/v1/raw_ops/Neg + to: /tf/raw_ops/Neg +- from: /tf/compat/v1/raw_ops/NextAfter + to: /tf/raw_ops/NextAfter +- from: /tf/compat/v1/raw_ops/NextIteration + to: /tf/raw_ops/NextIteration +- from: /tf/compat/v1/raw_ops/NoOp + to: /tf/raw_ops/NoOp +- from: /tf/compat/v1/raw_ops/NonDeterministicInts + to: /tf/raw_ops/NonDeterministicInts +- from: /tf/compat/v1/raw_ops/NonMaxSuppression + to: /tf/raw_ops/NonMaxSuppression +- from: /tf/compat/v1/raw_ops/NonMaxSuppressionV2 + to: /tf/raw_ops/NonMaxSuppressionV2 +- from: /tf/compat/v1/raw_ops/NonMaxSuppressionV3 + to: /tf/raw_ops/NonMaxSuppressionV3 +- from: /tf/compat/v1/raw_ops/NonMaxSuppressionV4 + to: /tf/raw_ops/NonMaxSuppressionV4 +- from: /tf/compat/v1/raw_ops/NonMaxSuppressionV5 + to: /tf/raw_ops/NonMaxSuppressionV5 +- from: /tf/compat/v1/raw_ops/NonMaxSuppressionWithOverlaps + to: /tf/raw_ops/NonMaxSuppressionWithOverlaps +- from: /tf/compat/v1/raw_ops/NonSerializableDataset + to: /tf/raw_ops/NonSerializableDataset +- from: /tf/compat/v1/raw_ops/NotEqual + to: /tf/raw_ops/NotEqual +- from: /tf/compat/v1/raw_ops/NthElement + to: /tf/raw_ops/NthElement +- from: /tf/compat/v1/raw_ops/OneHot + to: /tf/raw_ops/OneHot +- from: /tf/compat/v1/raw_ops/OneShotIterator + to: /tf/raw_ops/OneShotIterator +- from: /tf/compat/v1/raw_ops/OnesLike + to: /tf/raw_ops/OnesLike +- from: /tf/compat/v1/raw_ops/OptimizeDataset + to: /tf/raw_ops/OptimizeDataset +- from: /tf/compat/v1/raw_ops/OptionalFromValue + to: /tf/raw_ops/OptionalFromValue +- from: 
/tf/compat/v1/raw_ops/OptionalGetValue + to: /tf/raw_ops/OptionalGetValue +- from: /tf/compat/v1/raw_ops/OptionalHasValue + to: /tf/raw_ops/OptionalHasValue +- from: /tf/compat/v1/raw_ops/OptionalNone + to: /tf/raw_ops/OptionalNone +- from: /tf/compat/v1/raw_ops/OrderedMapClear + to: /tf/raw_ops/OrderedMapClear +- from: /tf/compat/v1/raw_ops/OrderedMapIncompleteSize + to: /tf/raw_ops/OrderedMapIncompleteSize +- from: /tf/compat/v1/raw_ops/OrderedMapPeek + to: /tf/raw_ops/OrderedMapPeek +- from: /tf/compat/v1/raw_ops/OrderedMapSize + to: /tf/raw_ops/OrderedMapSize +- from: /tf/compat/v1/raw_ops/OrderedMapStage + to: /tf/raw_ops/OrderedMapStage +- from: /tf/compat/v1/raw_ops/OrderedMapUnstage + to: /tf/raw_ops/OrderedMapUnstage +- from: /tf/compat/v1/raw_ops/OrderedMapUnstageNoKey + to: /tf/raw_ops/OrderedMapUnstageNoKey +- from: /tf/compat/v1/raw_ops/OutfeedDequeue + to: /tf/raw_ops/OutfeedDequeue +- from: /tf/compat/v1/raw_ops/OutfeedDequeueTuple + to: /tf/raw_ops/OutfeedDequeueTuple +- from: /tf/compat/v1/raw_ops/OutfeedEnqueue + to: /tf/raw_ops/OutfeedEnqueue +- from: /tf/compat/v1/raw_ops/OutfeedEnqueueTuple + to: /tf/raw_ops/OutfeedEnqueueTuple +- from: /tf/compat/v1/raw_ops/Pack + to: /tf/raw_ops/Pack +- from: /tf/compat/v1/raw_ops/Pad + to: /tf/raw_ops/Pad +- from: /tf/compat/v1/raw_ops/PadV2 + to: /tf/raw_ops/PadV2 +- from: /tf/compat/v1/raw_ops/PaddedBatchDataset + to: /tf/raw_ops/PaddedBatchDataset +- from: /tf/compat/v1/raw_ops/PaddedBatchDatasetV2 + to: /tf/raw_ops/PaddedBatchDatasetV2 +- from: /tf/compat/v1/raw_ops/PaddingFIFOQueue + to: /tf/raw_ops/PaddingFIFOQueue +- from: /tf/compat/v1/raw_ops/PaddingFIFOQueueV2 + to: /tf/raw_ops/PaddingFIFOQueueV2 +- from: /tf/compat/v1/raw_ops/ParallelConcat + to: /tf/raw_ops/ParallelConcat +- from: /tf/compat/v1/raw_ops/ParallelDynamicStitch + to: /tf/raw_ops/ParallelDynamicStitch +- from: /tf/compat/v1/raw_ops/ParallelInterleaveDataset + to: /tf/raw_ops/ParallelInterleaveDataset +- from: /tf/compat/v1/raw_ops/ParallelInterleaveDatasetV2 + to: /tf/raw_ops/ParallelInterleaveDatasetV2 +- from: /tf/compat/v1/raw_ops/ParallelInterleaveDatasetV3 + to: /tf/raw_ops/ParallelInterleaveDatasetV3 +- from: /tf/compat/v1/raw_ops/ParallelInterleaveDatasetV4 + to: /tf/raw_ops/ParallelInterleaveDatasetV4 +- from: /tf/compat/v1/raw_ops/ParallelMapDataset + to: /tf/raw_ops/ParallelMapDataset +- from: /tf/compat/v1/raw_ops/ParallelMapDatasetV2 + to: /tf/raw_ops/ParallelMapDatasetV2 +- from: /tf/compat/v1/raw_ops/ParameterizedTruncatedNormal + to: /tf/raw_ops/ParameterizedTruncatedNormal +- from: /tf/compat/v1/raw_ops/ParseExample + to: /tf/raw_ops/ParseExample +- from: /tf/compat/v1/raw_ops/ParseExampleDataset + to: /tf/raw_ops/ParseExampleDataset +- from: /tf/compat/v1/raw_ops/ParseExampleDatasetV2 + to: /tf/raw_ops/ParseExampleDatasetV2 +- from: /tf/compat/v1/raw_ops/ParseExampleV2 + to: /tf/raw_ops/ParseExampleV2 +- from: /tf/compat/v1/raw_ops/ParseSequenceExample + to: /tf/raw_ops/ParseSequenceExample +- from: /tf/compat/v1/raw_ops/ParseSequenceExampleV2 + to: /tf/raw_ops/ParseSequenceExampleV2 +- from: /tf/compat/v1/raw_ops/ParseSingleExample + to: /tf/raw_ops/ParseSingleExample +- from: /tf/compat/v1/raw_ops/ParseSingleSequenceExample + to: /tf/raw_ops/ParseSingleSequenceExample +- from: /tf/compat/v1/raw_ops/ParseTensor + to: /tf/raw_ops/ParseTensor +- from: /tf/compat/v1/raw_ops/PartitionedCall + to: /tf/raw_ops/PartitionedCall +- from: /tf/compat/v1/raw_ops/Placeholder + to: /tf/raw_ops/Placeholder +- from: /tf/compat/v1/raw_ops/PlaceholderV2 + to: 
/tf/raw_ops/PlaceholderV2 +- from: /tf/compat/v1/raw_ops/PlaceholderWithDefault + to: /tf/raw_ops/PlaceholderWithDefault +- from: /tf/compat/v1/raw_ops/Polygamma + to: /tf/raw_ops/Polygamma +- from: /tf/compat/v1/raw_ops/PopulationCount + to: /tf/raw_ops/PopulationCount +- from: /tf/compat/v1/raw_ops/Pow + to: /tf/raw_ops/Pow +- from: /tf/compat/v1/raw_ops/PrefetchDataset + to: /tf/raw_ops/PrefetchDataset +- from: /tf/compat/v1/raw_ops/Prelinearize + to: /tf/raw_ops/Prelinearize +- from: /tf/compat/v1/raw_ops/PrelinearizeTuple + to: /tf/raw_ops/PrelinearizeTuple +- from: /tf/compat/v1/raw_ops/PreventGradient + to: /tf/raw_ops/PreventGradient +- from: /tf/compat/v1/raw_ops/Print + to: /tf/raw_ops/Print +- from: /tf/compat/v1/raw_ops/PrintV2 + to: /tf/raw_ops/PrintV2 +- from: /tf/compat/v1/raw_ops/PriorityQueue + to: /tf/raw_ops/PriorityQueue +- from: /tf/compat/v1/raw_ops/PriorityQueueV2 + to: /tf/raw_ops/PriorityQueueV2 +- from: /tf/compat/v1/raw_ops/PrivateThreadPoolDataset + to: /tf/raw_ops/PrivateThreadPoolDataset +- from: /tf/compat/v1/raw_ops/Prod + to: /tf/raw_ops/Prod +- from: /tf/compat/v1/raw_ops/PyFunc + to: /tf/raw_ops/PyFunc +- from: /tf/compat/v1/raw_ops/PyFuncStateless + to: /tf/raw_ops/PyFuncStateless +- from: /tf/compat/v1/raw_ops/Qr + to: /tf/raw_ops/Qr +- from: /tf/compat/v1/raw_ops/QuantizeAndDequantize + to: /tf/raw_ops/QuantizeAndDequantize +- from: /tf/compat/v1/raw_ops/QuantizeAndDequantizeV2 + to: /tf/raw_ops/QuantizeAndDequantizeV2 +- from: /tf/compat/v1/raw_ops/QuantizeAndDequantizeV3 + to: /tf/raw_ops/QuantizeAndDequantizeV3 +- from: /tf/compat/v1/raw_ops/QuantizeDownAndShrinkRange + to: /tf/raw_ops/QuantizeDownAndShrinkRange +- from: /tf/compat/v1/raw_ops/QuantizeV2 + to: /tf/raw_ops/QuantizeV2 +- from: /tf/compat/v1/raw_ops/QuantizedAdd + to: /tf/raw_ops/QuantizedAdd +- from: /tf/compat/v1/raw_ops/QuantizedAvgPool + to: /tf/raw_ops/QuantizedAvgPool +- from: /tf/compat/v1/raw_ops/QuantizedBatchNormWithGlobalNormalization + to: /tf/raw_ops/QuantizedBatchNormWithGlobalNormalization +- from: /tf/compat/v1/raw_ops/QuantizedBiasAdd + to: /tf/raw_ops/QuantizedBiasAdd +- from: /tf/compat/v1/raw_ops/QuantizedConcat + to: /tf/raw_ops/QuantizedConcat +- from: /tf/compat/v1/raw_ops/QuantizedConv2D + to: /tf/raw_ops/QuantizedConv2D +- from: /tf/compat/v1/raw_ops/QuantizedConv2DAndRelu + to: /tf/raw_ops/QuantizedConv2DAndRelu +- from: /tf/compat/v1/raw_ops/QuantizedConv2DAndReluAndRequantize + to: /tf/raw_ops/QuantizedConv2DAndReluAndRequantize +- from: /tf/compat/v1/raw_ops/QuantizedConv2DAndRequantize + to: /tf/raw_ops/QuantizedConv2DAndRequantize +- from: /tf/compat/v1/raw_ops/QuantizedConv2DPerChannel + to: /tf/raw_ops/QuantizedConv2DPerChannel +- from: /tf/compat/v1/raw_ops/QuantizedConv2DWithBias + to: /tf/raw_ops/QuantizedConv2DWithBias +- from: /tf/compat/v1/raw_ops/QuantizedConv2DWithBiasAndRelu + to: /tf/raw_ops/QuantizedConv2DWithBiasAndRelu +- from: /tf/compat/v1/raw_ops/QuantizedConv2DWithBiasAndReluAndRequantize + to: /tf/raw_ops/QuantizedConv2DWithBiasAndReluAndRequantize +- from: /tf/compat/v1/raw_ops/QuantizedConv2DWithBiasAndRequantize + to: /tf/raw_ops/QuantizedConv2DWithBiasAndRequantize +- from: /tf/compat/v1/raw_ops/QuantizedConv2DWithBiasSignedSumAndReluAndRequantize + to: /tf/raw_ops/QuantizedConv2DWithBiasSignedSumAndReluAndRequantize +- from: /tf/compat/v1/raw_ops/QuantizedConv2DWithBiasSumAndRelu + to: /tf/raw_ops/QuantizedConv2DWithBiasSumAndRelu +- from: /tf/compat/v1/raw_ops/QuantizedConv2DWithBiasSumAndReluAndRequantize + to: 
/tf/raw_ops/QuantizedConv2DWithBiasSumAndReluAndRequantize +- from: /tf/compat/v1/raw_ops/QuantizedDepthwiseConv2D + to: /tf/raw_ops/QuantizedDepthwiseConv2D +- from: /tf/compat/v1/raw_ops/QuantizedDepthwiseConv2DWithBias + to: /tf/raw_ops/QuantizedDepthwiseConv2DWithBias +- from: /tf/compat/v1/raw_ops/QuantizedDepthwiseConv2DWithBiasAndRelu + to: /tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndRelu +- from: /tf/compat/v1/raw_ops/QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize + to: /tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize +- from: /tf/compat/v1/raw_ops/QuantizedInstanceNorm + to: /tf/raw_ops/QuantizedInstanceNorm +- from: /tf/compat/v1/raw_ops/QuantizedMatMul + to: /tf/raw_ops/QuantizedMatMul +- from: /tf/compat/v1/raw_ops/QuantizedMatMulWithBias + to: /tf/raw_ops/QuantizedMatMulWithBias +- from: /tf/compat/v1/raw_ops/QuantizedMatMulWithBiasAndDequantize + to: /tf/raw_ops/QuantizedMatMulWithBiasAndDequantize +- from: /tf/compat/v1/raw_ops/QuantizedMatMulWithBiasAndRelu + to: /tf/raw_ops/QuantizedMatMulWithBiasAndRelu +- from: /tf/compat/v1/raw_ops/QuantizedMatMulWithBiasAndReluAndRequantize + to: /tf/raw_ops/QuantizedMatMulWithBiasAndReluAndRequantize +- from: /tf/compat/v1/raw_ops/QuantizedMatMulWithBiasAndRequantize + to: /tf/raw_ops/QuantizedMatMulWithBiasAndRequantize +- from: /tf/compat/v1/raw_ops/QuantizedMaxPool + to: /tf/raw_ops/QuantizedMaxPool +- from: /tf/compat/v1/raw_ops/QuantizedMul + to: /tf/raw_ops/QuantizedMul +- from: /tf/compat/v1/raw_ops/QuantizedRelu + to: /tf/raw_ops/QuantizedRelu +- from: /tf/compat/v1/raw_ops/QuantizedRelu6 + to: /tf/raw_ops/QuantizedRelu6 +- from: /tf/compat/v1/raw_ops/QuantizedReluX + to: /tf/raw_ops/QuantizedReluX +- from: /tf/compat/v1/raw_ops/QuantizedReshape + to: /tf/raw_ops/QuantizedReshape +- from: /tf/compat/v1/raw_ops/QuantizedResizeBilinear + to: /tf/raw_ops/QuantizedResizeBilinear +- from: /tf/compat/v1/raw_ops/QueueClose + to: /tf/raw_ops/QueueClose +- from: /tf/compat/v1/raw_ops/QueueCloseV2 + to: /tf/raw_ops/QueueCloseV2 +- from: /tf/compat/v1/raw_ops/QueueDequeue + to: /tf/raw_ops/QueueDequeue +- from: /tf/compat/v1/raw_ops/QueueDequeueMany + to: /tf/raw_ops/QueueDequeueMany +- from: /tf/compat/v1/raw_ops/QueueDequeueManyV2 + to: /tf/raw_ops/QueueDequeueManyV2 +- from: /tf/compat/v1/raw_ops/QueueDequeueUpTo + to: /tf/raw_ops/QueueDequeueUpTo +- from: /tf/compat/v1/raw_ops/QueueDequeueUpToV2 + to: /tf/raw_ops/QueueDequeueUpToV2 +- from: /tf/compat/v1/raw_ops/QueueDequeueV2 + to: /tf/raw_ops/QueueDequeueV2 +- from: /tf/compat/v1/raw_ops/QueueEnqueue + to: /tf/raw_ops/QueueEnqueue +- from: /tf/compat/v1/raw_ops/QueueEnqueueMany + to: /tf/raw_ops/QueueEnqueueMany +- from: /tf/compat/v1/raw_ops/QueueEnqueueManyV2 + to: /tf/raw_ops/QueueEnqueueManyV2 +- from: /tf/compat/v1/raw_ops/QueueEnqueueV2 + to: /tf/raw_ops/QueueEnqueueV2 +- from: /tf/compat/v1/raw_ops/QueueIsClosed + to: /tf/raw_ops/QueueIsClosed +- from: /tf/compat/v1/raw_ops/QueueIsClosedV2 + to: /tf/raw_ops/QueueIsClosedV2 +- from: /tf/compat/v1/raw_ops/QueueSize + to: /tf/raw_ops/QueueSize +- from: /tf/compat/v1/raw_ops/QueueSizeV2 + to: /tf/raw_ops/QueueSizeV2 +- from: /tf/compat/v1/raw_ops/RFFT + to: /tf/raw_ops/RFFT +- from: /tf/compat/v1/raw_ops/RFFT2D + to: /tf/raw_ops/RFFT2D +- from: /tf/compat/v1/raw_ops/RFFT3D + to: /tf/raw_ops/RFFT3D +- from: /tf/compat/v1/raw_ops/RGBToHSV + to: /tf/raw_ops/RGBToHSV +- from: /tf/compat/v1/raw_ops/RaggedGather + to: /tf/raw_ops/RaggedGather +- from: /tf/compat/v1/raw_ops/RaggedRange + to: 
/tf/raw_ops/RaggedRange +- from: /tf/compat/v1/raw_ops/RaggedTensorFromVariant + to: /tf/raw_ops/RaggedTensorFromVariant +- from: /tf/compat/v1/raw_ops/RaggedTensorToSparse + to: /tf/raw_ops/RaggedTensorToSparse +- from: /tf/compat/v1/raw_ops/RaggedTensorToTensor + to: /tf/raw_ops/RaggedTensorToTensor +- from: /tf/compat/v1/raw_ops/RaggedTensorToVariant + to: /tf/raw_ops/RaggedTensorToVariant +- from: /tf/compat/v1/raw_ops/RandomCrop + to: /tf/raw_ops/RandomCrop +- from: /tf/compat/v1/raw_ops/RandomDataset + to: /tf/raw_ops/RandomDataset +- from: /tf/compat/v1/raw_ops/RandomGamma + to: /tf/raw_ops/RandomGamma +- from: /tf/compat/v1/raw_ops/RandomGammaGrad + to: /tf/raw_ops/RandomGammaGrad +- from: /tf/compat/v1/raw_ops/RandomPoisson + to: /tf/raw_ops/RandomPoisson +- from: /tf/compat/v1/raw_ops/RandomPoissonV2 + to: /tf/raw_ops/RandomPoissonV2 +- from: /tf/compat/v1/raw_ops/RandomShuffle + to: /tf/raw_ops/RandomShuffle +- from: /tf/compat/v1/raw_ops/RandomShuffleQueue + to: /tf/raw_ops/RandomShuffleQueue +- from: /tf/compat/v1/raw_ops/RandomShuffleQueueV2 + to: /tf/raw_ops/RandomShuffleQueueV2 +- from: /tf/compat/v1/raw_ops/RandomStandardNormal + to: /tf/raw_ops/RandomStandardNormal +- from: /tf/compat/v1/raw_ops/RandomUniform + to: /tf/raw_ops/RandomUniform +- from: /tf/compat/v1/raw_ops/RandomUniformInt + to: /tf/raw_ops/RandomUniformInt +- from: /tf/compat/v1/raw_ops/Range + to: /tf/raw_ops/Range +- from: /tf/compat/v1/raw_ops/RangeDataset + to: /tf/raw_ops/RangeDataset +- from: /tf/compat/v1/raw_ops/Rank + to: /tf/raw_ops/Rank +- from: /tf/compat/v1/raw_ops/ReadFile + to: /tf/raw_ops/ReadFile +- from: /tf/compat/v1/raw_ops/ReadVariableOp + to: /tf/raw_ops/ReadVariableOp +- from: /tf/compat/v1/raw_ops/ReaderNumRecordsProduced + to: /tf/raw_ops/ReaderNumRecordsProduced +- from: /tf/compat/v1/raw_ops/ReaderNumRecordsProducedV2 + to: /tf/raw_ops/ReaderNumRecordsProducedV2 +- from: /tf/compat/v1/raw_ops/ReaderNumWorkUnitsCompleted + to: /tf/raw_ops/ReaderNumWorkUnitsCompleted +- from: /tf/compat/v1/raw_ops/ReaderNumWorkUnitsCompletedV2 + to: /tf/raw_ops/ReaderNumWorkUnitsCompletedV2 +- from: /tf/compat/v1/raw_ops/ReaderRead + to: /tf/raw_ops/ReaderRead +- from: /tf/compat/v1/raw_ops/ReaderReadUpTo + to: /tf/raw_ops/ReaderReadUpTo +- from: /tf/compat/v1/raw_ops/ReaderReadUpToV2 + to: /tf/raw_ops/ReaderReadUpToV2 +- from: /tf/compat/v1/raw_ops/ReaderReadV2 + to: /tf/raw_ops/ReaderReadV2 +- from: /tf/compat/v1/raw_ops/ReaderReset + to: /tf/raw_ops/ReaderReset +- from: /tf/compat/v1/raw_ops/ReaderResetV2 + to: /tf/raw_ops/ReaderResetV2 +- from: /tf/compat/v1/raw_ops/ReaderRestoreState + to: /tf/raw_ops/ReaderRestoreState +- from: /tf/compat/v1/raw_ops/ReaderRestoreStateV2 + to: /tf/raw_ops/ReaderRestoreStateV2 +- from: /tf/compat/v1/raw_ops/ReaderSerializeState + to: /tf/raw_ops/ReaderSerializeState +- from: /tf/compat/v1/raw_ops/ReaderSerializeStateV2 + to: /tf/raw_ops/ReaderSerializeStateV2 +- from: /tf/compat/v1/raw_ops/Real + to: /tf/raw_ops/Real +- from: /tf/compat/v1/raw_ops/RealDiv + to: /tf/raw_ops/RealDiv +- from: /tf/compat/v1/raw_ops/RebatchDataset + to: /tf/raw_ops/RebatchDataset +- from: /tf/compat/v1/raw_ops/Reciprocal + to: /tf/raw_ops/Reciprocal +- from: /tf/compat/v1/raw_ops/ReciprocalGrad + to: /tf/raw_ops/ReciprocalGrad +- from: /tf/compat/v1/raw_ops/RecordInput + to: /tf/raw_ops/RecordInput +- from: /tf/compat/v1/raw_ops/Recv + to: /tf/raw_ops/Recv +- from: /tf/compat/v1/raw_ops/RecvTPUEmbeddingActivations + to: /tf/raw_ops/RecvTPUEmbeddingActivations +- from: 
/tf/compat/v1/raw_ops/ReduceDataset + to: /tf/raw_ops/ReduceDataset +- from: /tf/compat/v1/raw_ops/ReduceJoin + to: /tf/raw_ops/ReduceJoin +- from: /tf/compat/v1/raw_ops/RefEnter + to: /tf/raw_ops/RefEnter +- from: /tf/compat/v1/raw_ops/RefExit + to: /tf/raw_ops/RefExit +- from: /tf/compat/v1/raw_ops/RefIdentity + to: /tf/raw_ops/RefIdentity +- from: /tf/compat/v1/raw_ops/RefMerge + to: /tf/raw_ops/RefMerge +- from: /tf/compat/v1/raw_ops/RefNextIteration + to: /tf/raw_ops/RefNextIteration +- from: /tf/compat/v1/raw_ops/RefSelect + to: /tf/raw_ops/RefSelect +- from: /tf/compat/v1/raw_ops/RefSwitch + to: /tf/raw_ops/RefSwitch +- from: /tf/compat/v1/raw_ops/RegexFullMatch + to: /tf/raw_ops/RegexFullMatch +- from: /tf/compat/v1/raw_ops/RegexReplace + to: /tf/raw_ops/RegexReplace +- from: /tf/compat/v1/raw_ops/Relu + to: /tf/raw_ops/Relu +- from: /tf/compat/v1/raw_ops/Relu6 + to: /tf/raw_ops/Relu6 +- from: /tf/compat/v1/raw_ops/Relu6Grad + to: /tf/raw_ops/Relu6Grad +- from: /tf/compat/v1/raw_ops/ReluGrad + to: /tf/raw_ops/ReluGrad +- from: /tf/compat/v1/raw_ops/RemoteCall + to: /tf/raw_ops/RemoteCall +- from: /tf/compat/v1/raw_ops/RepeatDataset + to: /tf/raw_ops/RepeatDataset +- from: /tf/compat/v1/raw_ops/RequantizationRange + to: /tf/raw_ops/RequantizationRange +- from: /tf/compat/v1/raw_ops/RequantizationRangePerChannel + to: /tf/raw_ops/RequantizationRangePerChannel +- from: /tf/compat/v1/raw_ops/Requantize + to: /tf/raw_ops/Requantize +- from: /tf/compat/v1/raw_ops/RequantizePerChannel + to: /tf/raw_ops/RequantizePerChannel +- from: /tf/compat/v1/raw_ops/Reshape + to: /tf/raw_ops/Reshape +- from: /tf/compat/v1/raw_ops/ResizeArea + to: /tf/raw_ops/ResizeArea +- from: /tf/compat/v1/raw_ops/ResizeBicubic + to: /tf/raw_ops/ResizeBicubic +- from: /tf/compat/v1/raw_ops/ResizeBicubicGrad + to: /tf/raw_ops/ResizeBicubicGrad +- from: /tf/compat/v1/raw_ops/ResizeBilinear + to: /tf/raw_ops/ResizeBilinear +- from: /tf/compat/v1/raw_ops/ResizeBilinearGrad + to: /tf/raw_ops/ResizeBilinearGrad +- from: /tf/compat/v1/raw_ops/ResizeNearestNeighbor + to: /tf/raw_ops/ResizeNearestNeighbor +- from: /tf/compat/v1/raw_ops/ResizeNearestNeighborGrad + to: /tf/raw_ops/ResizeNearestNeighborGrad +- from: /tf/compat/v1/raw_ops/ResourceAccumulatorApplyGradient + to: /tf/raw_ops/ResourceAccumulatorApplyGradient +- from: /tf/compat/v1/raw_ops/ResourceAccumulatorNumAccumulated + to: /tf/raw_ops/ResourceAccumulatorNumAccumulated +- from: /tf/compat/v1/raw_ops/ResourceAccumulatorSetGlobalStep + to: /tf/raw_ops/ResourceAccumulatorSetGlobalStep +- from: /tf/compat/v1/raw_ops/ResourceAccumulatorTakeGradient + to: /tf/raw_ops/ResourceAccumulatorTakeGradient +- from: /tf/compat/v1/raw_ops/ResourceApplyAdaMax + to: /tf/raw_ops/ResourceApplyAdaMax +- from: /tf/compat/v1/raw_ops/ResourceApplyAdadelta + to: /tf/raw_ops/ResourceApplyAdadelta +- from: /tf/compat/v1/raw_ops/ResourceApplyAdagrad + to: /tf/raw_ops/ResourceApplyAdagrad +- from: /tf/compat/v1/raw_ops/ResourceApplyAdagradDA + to: /tf/raw_ops/ResourceApplyAdagradDA +- from: /tf/compat/v1/raw_ops/ResourceApplyAdagradV2 + to: /tf/raw_ops/ResourceApplyAdagradV2 +- from: /tf/compat/v1/raw_ops/ResourceApplyAdam + to: /tf/raw_ops/ResourceApplyAdam +- from: /tf/compat/v1/raw_ops/ResourceApplyAdamWithAmsgrad + to: /tf/raw_ops/ResourceApplyAdamWithAmsgrad +- from: /tf/compat/v1/raw_ops/ResourceApplyAddSign + to: /tf/raw_ops/ResourceApplyAddSign +- from: /tf/compat/v1/raw_ops/ResourceApplyCenteredRMSProp + to: /tf/raw_ops/ResourceApplyCenteredRMSProp +- from: 
/tf/compat/v1/raw_ops/ResourceApplyFtrl + to: /tf/raw_ops/ResourceApplyFtrl +- from: /tf/compat/v1/raw_ops/ResourceApplyFtrlV2 + to: /tf/raw_ops/ResourceApplyFtrlV2 +- from: /tf/compat/v1/raw_ops/ResourceApplyGradientDescent + to: /tf/raw_ops/ResourceApplyGradientDescent +- from: /tf/compat/v1/raw_ops/ResourceApplyKerasMomentum + to: /tf/raw_ops/ResourceApplyKerasMomentum +- from: /tf/compat/v1/raw_ops/ResourceApplyMomentum + to: /tf/raw_ops/ResourceApplyMomentum +- from: /tf/compat/v1/raw_ops/ResourceApplyPowerSign + to: /tf/raw_ops/ResourceApplyPowerSign +- from: /tf/compat/v1/raw_ops/ResourceApplyProximalAdagrad + to: /tf/raw_ops/ResourceApplyProximalAdagrad +- from: /tf/compat/v1/raw_ops/ResourceApplyProximalGradientDescent + to: /tf/raw_ops/ResourceApplyProximalGradientDescent +- from: /tf/compat/v1/raw_ops/ResourceApplyRMSProp + to: /tf/raw_ops/ResourceApplyRMSProp +- from: /tf/compat/v1/raw_ops/ResourceConditionalAccumulator + to: /tf/raw_ops/ResourceConditionalAccumulator +- from: /tf/compat/v1/raw_ops/ResourceCountUpTo + to: /tf/raw_ops/ResourceCountUpTo +- from: /tf/compat/v1/raw_ops/ResourceGather + to: /tf/raw_ops/ResourceGather +- from: /tf/compat/v1/raw_ops/ResourceGatherNd + to: /tf/raw_ops/ResourceGatherNd +- from: /tf/compat/v1/raw_ops/ResourceScatterAdd + to: /tf/raw_ops/ResourceScatterAdd +- from: /tf/compat/v1/raw_ops/ResourceScatterDiv + to: /tf/raw_ops/ResourceScatterDiv +- from: /tf/compat/v1/raw_ops/ResourceScatterMax + to: /tf/raw_ops/ResourceScatterMax +- from: /tf/compat/v1/raw_ops/ResourceScatterMin + to: /tf/raw_ops/ResourceScatterMin +- from: /tf/compat/v1/raw_ops/ResourceScatterMul + to: /tf/raw_ops/ResourceScatterMul +- from: /tf/compat/v1/raw_ops/ResourceScatterNdAdd + to: /tf/raw_ops/ResourceScatterNdAdd +- from: /tf/compat/v1/raw_ops/ResourceScatterNdSub + to: /tf/raw_ops/ResourceScatterNdSub +- from: /tf/compat/v1/raw_ops/ResourceScatterNdUpdate + to: /tf/raw_ops/ResourceScatterNdUpdate +- from: /tf/compat/v1/raw_ops/ResourceScatterSub + to: /tf/raw_ops/ResourceScatterSub +- from: /tf/compat/v1/raw_ops/ResourceScatterUpdate + to: /tf/raw_ops/ResourceScatterUpdate +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyAdadelta + to: /tf/raw_ops/ResourceSparseApplyAdadelta +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyAdagrad + to: /tf/raw_ops/ResourceSparseApplyAdagrad +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyAdagradDA + to: /tf/raw_ops/ResourceSparseApplyAdagradDA +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyAdagradV2 + to: /tf/raw_ops/ResourceSparseApplyAdagradV2 +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyCenteredRMSProp + to: /tf/raw_ops/ResourceSparseApplyCenteredRMSProp +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyFtrl + to: /tf/raw_ops/ResourceSparseApplyFtrl +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyFtrlV2 + to: /tf/raw_ops/ResourceSparseApplyFtrlV2 +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyKerasMomentum + to: /tf/raw_ops/ResourceSparseApplyKerasMomentum +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyMomentum + to: /tf/raw_ops/ResourceSparseApplyMomentum +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyProximalAdagrad + to: /tf/raw_ops/ResourceSparseApplyProximalAdagrad +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyProximalGradientDescent + to: /tf/raw_ops/ResourceSparseApplyProximalGradientDescent +- from: /tf/compat/v1/raw_ops/ResourceSparseApplyRMSProp + to: /tf/raw_ops/ResourceSparseApplyRMSProp +- from: /tf/compat/v1/raw_ops/ResourceStridedSliceAssign + to: /tf/raw_ops/ResourceStridedSliceAssign 
+- from: /tf/compat/v1/raw_ops/Restore + to: /tf/raw_ops/Restore +- from: /tf/compat/v1/raw_ops/RestoreSlice + to: /tf/raw_ops/RestoreSlice +- from: /tf/compat/v1/raw_ops/RestoreV2 + to: /tf/raw_ops/RestoreV2 +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingADAMParameters + to: /tf/raw_ops/RetrieveTPUEmbeddingADAMParameters +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingADAMParametersGradAccumDebug + to: /tf/raw_ops/RetrieveTPUEmbeddingADAMParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingAdadeltaParameters + to: /tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParameters +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug + to: /tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingAdagradParameters + to: /tf/raw_ops/RetrieveTPUEmbeddingAdagradParameters +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingAdagradParametersGradAccumDebug + to: /tf/raw_ops/RetrieveTPUEmbeddingAdagradParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingCenteredRMSPropParameters + to: /tf/raw_ops/RetrieveTPUEmbeddingCenteredRMSPropParameters +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingFTRLParameters + to: /tf/raw_ops/RetrieveTPUEmbeddingFTRLParameters +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingFTRLParametersGradAccumDebug + to: /tf/raw_ops/RetrieveTPUEmbeddingFTRLParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingMDLAdagradLightParameters + to: /tf/raw_ops/RetrieveTPUEmbeddingMDLAdagradLightParameters +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingMomentumParameters + to: /tf/raw_ops/RetrieveTPUEmbeddingMomentumParameters +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingMomentumParametersGradAccumDebug + to: /tf/raw_ops/RetrieveTPUEmbeddingMomentumParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingProximalAdagradParameters + to: /tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParameters +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug + to: /tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingRMSPropParameters + to: /tf/raw_ops/RetrieveTPUEmbeddingRMSPropParameters +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug + to: /tf/raw_ops/RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug +- from: /tf/compat/v1/raw_ops/RetrieveTPUEmbeddingStochasticGradientDescentParameters + to: /tf/raw_ops/RetrieveTPUEmbeddingStochasticGradientDescentParameters +- from: /tf/compat/v1/raw_ops/Reverse + to: /tf/raw_ops/Reverse +- from: /tf/compat/v1/raw_ops/ReverseSequence + to: /tf/raw_ops/ReverseSequence +- from: /tf/compat/v1/raw_ops/ReverseV2 + to: /tf/raw_ops/ReverseV2 +- from: /tf/compat/v1/raw_ops/RightShift + to: /tf/raw_ops/RightShift +- from: /tf/compat/v1/raw_ops/Rint + to: /tf/raw_ops/Rint +- from: /tf/compat/v1/raw_ops/RngSkip + to: /tf/raw_ops/RngSkip +- from: /tf/compat/v1/raw_ops/Roll + to: /tf/raw_ops/Roll +- from: /tf/compat/v1/raw_ops/Round + to: /tf/raw_ops/Round +- from: /tf/compat/v1/raw_ops/Rsqrt + to: /tf/raw_ops/Rsqrt +- from: /tf/compat/v1/raw_ops/RsqrtGrad + to: /tf/raw_ops/RsqrtGrad +- from: /tf/compat/v1/raw_ops/SampleDistortedBoundingBox + to: /tf/raw_ops/SampleDistortedBoundingBox +- from: /tf/compat/v1/raw_ops/SampleDistortedBoundingBoxV2 + to: /tf/raw_ops/SampleDistortedBoundingBoxV2 +- from: /tf/compat/v1/raw_ops/SamplingDataset + to: 
/tf/raw_ops/SamplingDataset +- from: /tf/compat/v1/raw_ops/Save + to: /tf/raw_ops/Save +- from: /tf/compat/v1/raw_ops/SaveSlices + to: /tf/raw_ops/SaveSlices +- from: /tf/compat/v1/raw_ops/SaveV2 + to: /tf/raw_ops/SaveV2 +- from: /tf/compat/v1/raw_ops/ScalarSummary + to: /tf/raw_ops/ScalarSummary +- from: /tf/compat/v1/raw_ops/ScaleAndTranslate + to: /tf/raw_ops/ScaleAndTranslate +- from: /tf/compat/v1/raw_ops/ScaleAndTranslateGrad + to: /tf/raw_ops/ScaleAndTranslateGrad +- from: /tf/compat/v1/raw_ops/ScanDataset + to: /tf/raw_ops/ScanDataset +- from: /tf/compat/v1/raw_ops/ScatterAdd + to: /tf/raw_ops/ScatterAdd +- from: /tf/compat/v1/raw_ops/ScatterDiv + to: /tf/raw_ops/ScatterDiv +- from: /tf/compat/v1/raw_ops/ScatterMax + to: /tf/raw_ops/ScatterMax +- from: /tf/compat/v1/raw_ops/ScatterMin + to: /tf/raw_ops/ScatterMin +- from: /tf/compat/v1/raw_ops/ScatterMul + to: /tf/raw_ops/ScatterMul +- from: /tf/compat/v1/raw_ops/ScatterNd + to: /tf/raw_ops/ScatterNd +- from: /tf/compat/v1/raw_ops/ScatterNdAdd + to: /tf/raw_ops/ScatterNdAdd +- from: /tf/compat/v1/raw_ops/ScatterNdNonAliasingAdd + to: /tf/raw_ops/ScatterNdNonAliasingAdd +- from: /tf/compat/v1/raw_ops/ScatterNdSub + to: /tf/raw_ops/ScatterNdSub +- from: /tf/compat/v1/raw_ops/ScatterNdUpdate + to: /tf/raw_ops/ScatterNdUpdate +- from: /tf/compat/v1/raw_ops/ScatterSub + to: /tf/raw_ops/ScatterSub +- from: /tf/compat/v1/raw_ops/ScatterUpdate + to: /tf/raw_ops/ScatterUpdate +- from: /tf/compat/v1/raw_ops/SdcaFprint + to: /tf/raw_ops/SdcaFprint +- from: /tf/compat/v1/raw_ops/SdcaOptimizer + to: /tf/raw_ops/SdcaOptimizer +- from: /tf/compat/v1/raw_ops/SdcaOptimizerV2 + to: /tf/raw_ops/SdcaOptimizerV2 +- from: /tf/compat/v1/raw_ops/SdcaShrinkL1 + to: /tf/raw_ops/SdcaShrinkL1 +- from: /tf/compat/v1/raw_ops/SegmentMax + to: /tf/raw_ops/SegmentMax +- from: /tf/compat/v1/raw_ops/SegmentMean + to: /tf/raw_ops/SegmentMean +- from: /tf/compat/v1/raw_ops/SegmentMin + to: /tf/raw_ops/SegmentMin +- from: /tf/compat/v1/raw_ops/SegmentProd + to: /tf/raw_ops/SegmentProd +- from: /tf/compat/v1/raw_ops/SegmentSum + to: /tf/raw_ops/SegmentSum +- from: /tf/compat/v1/raw_ops/Select + to: /tf/raw_ops/Select +- from: /tf/compat/v1/raw_ops/SelectV2 + to: /tf/raw_ops/SelectV2 +- from: /tf/compat/v1/raw_ops/SelfAdjointEig + to: /tf/raw_ops/SelfAdjointEig +- from: /tf/compat/v1/raw_ops/SelfAdjointEigV2 + to: /tf/raw_ops/SelfAdjointEigV2 +- from: /tf/compat/v1/raw_ops/Selu + to: /tf/raw_ops/Selu +- from: /tf/compat/v1/raw_ops/SeluGrad + to: /tf/raw_ops/SeluGrad +- from: /tf/compat/v1/raw_ops/Send + to: /tf/raw_ops/Send +- from: /tf/compat/v1/raw_ops/SendTPUEmbeddingGradients + to: /tf/raw_ops/SendTPUEmbeddingGradients +- from: /tf/compat/v1/raw_ops/SerializeIterator + to: /tf/raw_ops/SerializeIterator +- from: /tf/compat/v1/raw_ops/SerializeManySparse + to: /tf/raw_ops/SerializeManySparse +- from: /tf/compat/v1/raw_ops/SerializeSparse + to: /tf/raw_ops/SerializeSparse +- from: /tf/compat/v1/raw_ops/SerializeTensor + to: /tf/raw_ops/SerializeTensor +- from: /tf/compat/v1/raw_ops/SetSize + to: /tf/raw_ops/SetSize +- from: /tf/compat/v1/raw_ops/SetStatsAggregatorDataset + to: /tf/raw_ops/SetStatsAggregatorDataset +- from: /tf/compat/v1/raw_ops/Shape + to: /tf/raw_ops/Shape +- from: /tf/compat/v1/raw_ops/ShapeN + to: /tf/raw_ops/ShapeN +- from: /tf/compat/v1/raw_ops/ShardDataset + to: /tf/raw_ops/ShardDataset +- from: /tf/compat/v1/raw_ops/ShardedFilename + to: /tf/raw_ops/ShardedFilename +- from: /tf/compat/v1/raw_ops/ShardedFilespec + to: /tf/raw_ops/ShardedFilespec 
+- from: /tf/compat/v1/raw_ops/ShuffleAndRepeatDataset + to: /tf/raw_ops/ShuffleAndRepeatDataset +- from: /tf/compat/v1/raw_ops/ShuffleDataset + to: /tf/raw_ops/ShuffleDataset +- from: /tf/compat/v1/raw_ops/ShuffleDatasetV2 + to: /tf/raw_ops/ShuffleDatasetV2 +- from: /tf/compat/v1/raw_ops/ShutdownDistributedTPU + to: /tf/raw_ops/ShutdownDistributedTPU +- from: /tf/compat/v1/raw_ops/Sigmoid + to: /tf/raw_ops/Sigmoid +- from: /tf/compat/v1/raw_ops/SigmoidGrad + to: /tf/raw_ops/SigmoidGrad +- from: /tf/compat/v1/raw_ops/Sign + to: /tf/raw_ops/Sign +- from: /tf/compat/v1/raw_ops/Sin + to: /tf/raw_ops/Sin +- from: /tf/compat/v1/raw_ops/Sinh + to: /tf/raw_ops/Sinh +- from: /tf/compat/v1/raw_ops/Size + to: /tf/raw_ops/Size +- from: /tf/compat/v1/raw_ops/SkipDataset + to: /tf/raw_ops/SkipDataset +- from: /tf/compat/v1/raw_ops/SleepDataset + to: /tf/raw_ops/SleepDataset +- from: /tf/compat/v1/raw_ops/Slice + to: /tf/raw_ops/Slice +- from: /tf/compat/v1/raw_ops/SlidingWindowDataset + to: /tf/raw_ops/SlidingWindowDataset +- from: /tf/compat/v1/raw_ops/Snapshot + to: /tf/raw_ops/Snapshot +- from: /tf/compat/v1/raw_ops/SnapshotDataset + to: /tf/raw_ops/SnapshotDataset +- from: /tf/compat/v1/raw_ops/SobolSample + to: /tf/raw_ops/SobolSample +- from: /tf/compat/v1/raw_ops/Softmax + to: /tf/raw_ops/Softmax +- from: /tf/compat/v1/raw_ops/SoftmaxCrossEntropyWithLogits + to: /tf/raw_ops/SoftmaxCrossEntropyWithLogits +- from: /tf/compat/v1/raw_ops/Softplus + to: /tf/raw_ops/Softplus +- from: /tf/compat/v1/raw_ops/SoftplusGrad + to: /tf/raw_ops/SoftplusGrad +- from: /tf/compat/v1/raw_ops/Softsign + to: /tf/raw_ops/Softsign +- from: /tf/compat/v1/raw_ops/SoftsignGrad + to: /tf/raw_ops/SoftsignGrad +- from: /tf/compat/v1/raw_ops/SpaceToBatch + to: /tf/raw_ops/SpaceToBatch +- from: /tf/compat/v1/raw_ops/SpaceToBatchND + to: /tf/raw_ops/SpaceToBatchND +- from: /tf/compat/v1/raw_ops/SpaceToDepth + to: /tf/raw_ops/SpaceToDepth +- from: /tf/compat/v1/raw_ops/SparseAccumulatorApplyGradient + to: /tf/raw_ops/SparseAccumulatorApplyGradient +- from: /tf/compat/v1/raw_ops/SparseAccumulatorTakeGradient + to: /tf/raw_ops/SparseAccumulatorTakeGradient +- from: /tf/compat/v1/raw_ops/SparseAdd + to: /tf/raw_ops/SparseAdd +- from: /tf/compat/v1/raw_ops/SparseAddGrad + to: /tf/raw_ops/SparseAddGrad +- from: /tf/compat/v1/raw_ops/SparseApplyAdadelta + to: /tf/raw_ops/SparseApplyAdadelta +- from: /tf/compat/v1/raw_ops/SparseApplyAdagrad + to: /tf/raw_ops/SparseApplyAdagrad +- from: /tf/compat/v1/raw_ops/SparseApplyAdagradDA + to: /tf/raw_ops/SparseApplyAdagradDA +- from: /tf/compat/v1/raw_ops/SparseApplyAdagradV2 + to: /tf/raw_ops/SparseApplyAdagradV2 +- from: /tf/compat/v1/raw_ops/SparseApplyCenteredRMSProp + to: /tf/raw_ops/SparseApplyCenteredRMSProp +- from: /tf/compat/v1/raw_ops/SparseApplyFtrl + to: /tf/raw_ops/SparseApplyFtrl +- from: /tf/compat/v1/raw_ops/SparseApplyFtrlV2 + to: /tf/raw_ops/SparseApplyFtrlV2 +- from: /tf/compat/v1/raw_ops/SparseApplyMomentum + to: /tf/raw_ops/SparseApplyMomentum +- from: /tf/compat/v1/raw_ops/SparseApplyProximalAdagrad + to: /tf/raw_ops/SparseApplyProximalAdagrad +- from: /tf/compat/v1/raw_ops/SparseApplyProximalGradientDescent + to: /tf/raw_ops/SparseApplyProximalGradientDescent +- from: /tf/compat/v1/raw_ops/SparseApplyRMSProp + to: /tf/raw_ops/SparseApplyRMSProp +- from: /tf/compat/v1/raw_ops/SparseConcat + to: /tf/raw_ops/SparseConcat +- from: /tf/compat/v1/raw_ops/SparseConditionalAccumulator + to: /tf/raw_ops/SparseConditionalAccumulator +- from: /tf/compat/v1/raw_ops/SparseCross + 
to: /tf/raw_ops/SparseCross +- from: /tf/compat/v1/raw_ops/SparseDenseCwiseAdd + to: /tf/raw_ops/SparseDenseCwiseAdd +- from: /tf/compat/v1/raw_ops/SparseDenseCwiseDiv + to: /tf/raw_ops/SparseDenseCwiseDiv +- from: /tf/compat/v1/raw_ops/SparseDenseCwiseMul + to: /tf/raw_ops/SparseDenseCwiseMul +- from: /tf/compat/v1/raw_ops/SparseFillEmptyRows + to: /tf/raw_ops/SparseFillEmptyRows +- from: /tf/compat/v1/raw_ops/SparseFillEmptyRowsGrad + to: /tf/raw_ops/SparseFillEmptyRowsGrad +- from: /tf/compat/v1/raw_ops/SparseMatMul + to: /tf/raw_ops/SparseMatMul +- from: /tf/compat/v1/raw_ops/SparseMatrixAdd + to: /tf/raw_ops/SparseMatrixAdd +- from: /tf/compat/v1/raw_ops/SparseMatrixMatMul + to: /tf/raw_ops/SparseMatrixMatMul +- from: /tf/compat/v1/raw_ops/SparseMatrixMul + to: /tf/raw_ops/SparseMatrixMul +- from: /tf/compat/v1/raw_ops/SparseMatrixNNZ + to: /tf/raw_ops/SparseMatrixNNZ +- from: /tf/compat/v1/raw_ops/SparseMatrixOrderingAMD + to: /tf/raw_ops/SparseMatrixOrderingAMD +- from: /tf/compat/v1/raw_ops/SparseMatrixSoftmax + to: /tf/raw_ops/SparseMatrixSoftmax +- from: /tf/compat/v1/raw_ops/SparseMatrixSoftmaxGrad + to: /tf/raw_ops/SparseMatrixSoftmaxGrad +- from: /tf/compat/v1/raw_ops/SparseMatrixSparseCholesky + to: /tf/raw_ops/SparseMatrixSparseCholesky +- from: /tf/compat/v1/raw_ops/SparseMatrixSparseMatMul + to: /tf/raw_ops/SparseMatrixSparseMatMul +- from: /tf/compat/v1/raw_ops/SparseMatrixTranspose + to: /tf/raw_ops/SparseMatrixTranspose +- from: /tf/compat/v1/raw_ops/SparseMatrixZeros + to: /tf/raw_ops/SparseMatrixZeros +- from: /tf/compat/v1/raw_ops/SparseReduceMax + to: /tf/raw_ops/SparseReduceMax +- from: /tf/compat/v1/raw_ops/SparseReduceMaxSparse + to: /tf/raw_ops/SparseReduceMaxSparse +- from: /tf/compat/v1/raw_ops/SparseReduceSum + to: /tf/raw_ops/SparseReduceSum +- from: /tf/compat/v1/raw_ops/SparseReduceSumSparse + to: /tf/raw_ops/SparseReduceSumSparse +- from: /tf/compat/v1/raw_ops/SparseReorder + to: /tf/raw_ops/SparseReorder +- from: /tf/compat/v1/raw_ops/SparseReshape + to: /tf/raw_ops/SparseReshape +- from: /tf/compat/v1/raw_ops/SparseSegmentMean + to: /tf/raw_ops/SparseSegmentMean +- from: /tf/compat/v1/raw_ops/SparseSegmentMeanGrad + to: /tf/raw_ops/SparseSegmentMeanGrad +- from: /tf/compat/v1/raw_ops/SparseSegmentMeanWithNumSegments + to: /tf/raw_ops/SparseSegmentMeanWithNumSegments +- from: /tf/compat/v1/raw_ops/SparseSegmentSqrtN + to: /tf/raw_ops/SparseSegmentSqrtN +- from: /tf/compat/v1/raw_ops/SparseSegmentSqrtNGrad + to: /tf/raw_ops/SparseSegmentSqrtNGrad +- from: /tf/compat/v1/raw_ops/SparseSegmentSqrtNWithNumSegments + to: /tf/raw_ops/SparseSegmentSqrtNWithNumSegments +- from: /tf/compat/v1/raw_ops/SparseSegmentSum + to: /tf/raw_ops/SparseSegmentSum +- from: /tf/compat/v1/raw_ops/SparseSegmentSumWithNumSegments + to: /tf/raw_ops/SparseSegmentSumWithNumSegments +- from: /tf/compat/v1/raw_ops/SparseSlice + to: /tf/raw_ops/SparseSlice +- from: /tf/compat/v1/raw_ops/SparseSliceGrad + to: /tf/raw_ops/SparseSliceGrad +- from: /tf/compat/v1/raw_ops/SparseSoftmax + to: /tf/raw_ops/SparseSoftmax +- from: /tf/compat/v1/raw_ops/SparseSoftmaxCrossEntropyWithLogits + to: /tf/raw_ops/SparseSoftmaxCrossEntropyWithLogits +- from: /tf/compat/v1/raw_ops/SparseSparseMaximum + to: /tf/raw_ops/SparseSparseMaximum +- from: /tf/compat/v1/raw_ops/SparseSparseMinimum + to: /tf/raw_ops/SparseSparseMinimum +- from: /tf/compat/v1/raw_ops/SparseSplit + to: /tf/raw_ops/SparseSplit +- from: /tf/compat/v1/raw_ops/SparseTensorDenseAdd + to: /tf/raw_ops/SparseTensorDenseAdd +- from: 
/tf/compat/v1/raw_ops/SparseTensorDenseMatMul + to: /tf/raw_ops/SparseTensorDenseMatMul +- from: /tf/compat/v1/raw_ops/SparseTensorSliceDataset + to: /tf/raw_ops/SparseTensorSliceDataset +- from: /tf/compat/v1/raw_ops/SparseTensorToCSRSparseMatrix + to: /tf/raw_ops/SparseTensorToCSRSparseMatrix +- from: /tf/compat/v1/raw_ops/SparseToDense + to: /tf/raw_ops/SparseToDense +- from: /tf/compat/v1/raw_ops/SparseToSparseSetOperation + to: /tf/raw_ops/SparseToSparseSetOperation +- from: /tf/compat/v1/raw_ops/Spence + to: /tf/raw_ops/Spence +- from: /tf/compat/v1/raw_ops/Split + to: /tf/raw_ops/Split +- from: /tf/compat/v1/raw_ops/SplitV + to: /tf/raw_ops/SplitV +- from: /tf/compat/v1/raw_ops/SqlDataset + to: /tf/raw_ops/SqlDataset +- from: /tf/compat/v1/raw_ops/Sqrt + to: /tf/raw_ops/Sqrt +- from: /tf/compat/v1/raw_ops/SqrtGrad + to: /tf/raw_ops/SqrtGrad +- from: /tf/compat/v1/raw_ops/Square + to: /tf/raw_ops/Square +- from: /tf/compat/v1/raw_ops/SquaredDifference + to: /tf/raw_ops/SquaredDifference +- from: /tf/compat/v1/raw_ops/Squeeze + to: /tf/raw_ops/Squeeze +- from: /tf/compat/v1/raw_ops/Stack + to: /tf/raw_ops/Stack +- from: /tf/compat/v1/raw_ops/StackClose + to: /tf/raw_ops/StackClose +- from: /tf/compat/v1/raw_ops/StackCloseV2 + to: /tf/raw_ops/StackCloseV2 +- from: /tf/compat/v1/raw_ops/StackPop + to: /tf/raw_ops/StackPop +- from: /tf/compat/v1/raw_ops/StackPopV2 + to: /tf/raw_ops/StackPopV2 +- from: /tf/compat/v1/raw_ops/StackPush + to: /tf/raw_ops/StackPush +- from: /tf/compat/v1/raw_ops/StackPushV2 + to: /tf/raw_ops/StackPushV2 +- from: /tf/compat/v1/raw_ops/StackV2 + to: /tf/raw_ops/StackV2 +- from: /tf/compat/v1/raw_ops/Stage + to: /tf/raw_ops/Stage +- from: /tf/compat/v1/raw_ops/StageClear + to: /tf/raw_ops/StageClear +- from: /tf/compat/v1/raw_ops/StagePeek + to: /tf/raw_ops/StagePeek +- from: /tf/compat/v1/raw_ops/StageSize + to: /tf/raw_ops/StageSize +- from: /tf/compat/v1/raw_ops/StatefulPartitionedCall + to: /tf/raw_ops/StatefulPartitionedCall +- from: /tf/compat/v1/raw_ops/StatefulRandomBinomial + to: /tf/raw_ops/StatefulRandomBinomial +- from: /tf/compat/v1/raw_ops/StatefulStandardNormal + to: /tf/raw_ops/StatefulStandardNormal +- from: /tf/compat/v1/raw_ops/StatefulStandardNormalV2 + to: /tf/raw_ops/StatefulStandardNormalV2 +- from: /tf/compat/v1/raw_ops/StatefulTruncatedNormal + to: /tf/raw_ops/StatefulTruncatedNormal +- from: /tf/compat/v1/raw_ops/StatefulUniform + to: /tf/raw_ops/StatefulUniform +- from: /tf/compat/v1/raw_ops/StatefulUniformFullInt + to: /tf/raw_ops/StatefulUniformFullInt +- from: /tf/compat/v1/raw_ops/StatefulUniformInt + to: /tf/raw_ops/StatefulUniformInt +- from: /tf/compat/v1/raw_ops/StatelessIf + to: /tf/raw_ops/StatelessIf +- from: /tf/compat/v1/raw_ops/StatelessMultinomial + to: /tf/raw_ops/StatelessMultinomial +- from: /tf/compat/v1/raw_ops/StatelessRandomBinomial + to: /tf/raw_ops/StatelessRandomBinomial +- from: /tf/compat/v1/raw_ops/StatelessRandomGammaV2 + to: /tf/raw_ops/StatelessRandomGammaV2 +- from: /tf/compat/v1/raw_ops/StatelessRandomNormal + to: /tf/raw_ops/StatelessRandomNormal +- from: /tf/compat/v1/raw_ops/StatelessRandomPoisson + to: /tf/raw_ops/StatelessRandomPoisson +- from: /tf/compat/v1/raw_ops/StatelessRandomUniform + to: /tf/raw_ops/StatelessRandomUniform +- from: /tf/compat/v1/raw_ops/StatelessRandomUniformFullInt + to: /tf/raw_ops/StatelessRandomUniformFullInt +- from: /tf/compat/v1/raw_ops/StatelessRandomUniformInt + to: /tf/raw_ops/StatelessRandomUniformInt +- from: /tf/compat/v1/raw_ops/StatelessTruncatedNormal + to: 
/tf/raw_ops/StatelessTruncatedNormal +- from: /tf/compat/v1/raw_ops/StatelessWhile + to: /tf/raw_ops/StatelessWhile +- from: /tf/compat/v1/raw_ops/StaticRegexFullMatch + to: /tf/raw_ops/StaticRegexFullMatch +- from: /tf/compat/v1/raw_ops/StaticRegexReplace + to: /tf/raw_ops/StaticRegexReplace +- from: /tf/compat/v1/raw_ops/StatsAggregatorHandle + to: /tf/raw_ops/StatsAggregatorHandle +- from: /tf/compat/v1/raw_ops/StatsAggregatorHandleV2 + to: /tf/raw_ops/StatsAggregatorHandleV2 +- from: /tf/compat/v1/raw_ops/StatsAggregatorSetSummaryWriter + to: /tf/raw_ops/StatsAggregatorSetSummaryWriter +- from: /tf/compat/v1/raw_ops/StatsAggregatorSummary + to: /tf/raw_ops/StatsAggregatorSummary +- from: /tf/compat/v1/raw_ops/StopGradient + to: /tf/raw_ops/StopGradient +- from: /tf/compat/v1/raw_ops/StridedSlice + to: /tf/raw_ops/StridedSlice +- from: /tf/compat/v1/raw_ops/StridedSliceAssign + to: /tf/raw_ops/StridedSliceAssign +- from: /tf/compat/v1/raw_ops/StridedSliceGrad + to: /tf/raw_ops/StridedSliceGrad +- from: /tf/compat/v1/raw_ops/StringFormat + to: /tf/raw_ops/StringFormat +- from: /tf/compat/v1/raw_ops/StringJoin + to: /tf/raw_ops/StringJoin +- from: /tf/compat/v1/raw_ops/StringLength + to: /tf/raw_ops/StringLength +- from: /tf/compat/v1/raw_ops/StringLower + to: /tf/raw_ops/StringLower +- from: /tf/compat/v1/raw_ops/StringNGrams + to: /tf/raw_ops/StringNGrams +- from: /tf/compat/v1/raw_ops/StringSplit + to: /tf/raw_ops/StringSplit +- from: /tf/compat/v1/raw_ops/StringSplitV2 + to: /tf/raw_ops/StringSplitV2 +- from: /tf/compat/v1/raw_ops/StringStrip + to: /tf/raw_ops/StringStrip +- from: /tf/compat/v1/raw_ops/StringToHashBucket + to: /tf/raw_ops/StringToHashBucket +- from: /tf/compat/v1/raw_ops/StringToHashBucketFast + to: /tf/raw_ops/StringToHashBucketFast +- from: /tf/compat/v1/raw_ops/StringToHashBucketStrong + to: /tf/raw_ops/StringToHashBucketStrong +- from: /tf/compat/v1/raw_ops/StringToNumber + to: /tf/raw_ops/StringToNumber +- from: /tf/compat/v1/raw_ops/StringUpper + to: /tf/raw_ops/StringUpper +- from: /tf/compat/v1/raw_ops/Sub + to: /tf/raw_ops/Sub +- from: /tf/compat/v1/raw_ops/Substr + to: /tf/raw_ops/Substr +- from: /tf/compat/v1/raw_ops/Sum + to: /tf/raw_ops/Sum +- from: /tf/compat/v1/raw_ops/SummaryWriter + to: /tf/raw_ops/SummaryWriter +- from: /tf/compat/v1/raw_ops/Svd + to: /tf/raw_ops/Svd +- from: /tf/compat/v1/raw_ops/Switch + to: /tf/raw_ops/Switch +- from: /tf/compat/v1/raw_ops/SymbolicGradient + to: /tf/raw_ops/SymbolicGradient +- from: /tf/compat/v1/raw_ops/TFRecordDataset + to: /tf/raw_ops/TFRecordDataset +- from: /tf/compat/v1/raw_ops/TFRecordReader + to: /tf/raw_ops/TFRecordReader +- from: /tf/compat/v1/raw_ops/TFRecordReaderV2 + to: /tf/raw_ops/TFRecordReaderV2 +- from: /tf/compat/v1/raw_ops/TPUCompilationResult + to: /tf/raw_ops/TPUCompilationResult +- from: /tf/compat/v1/raw_ops/TPUEmbeddingActivations + to: /tf/raw_ops/TPUEmbeddingActivations +- from: /tf/compat/v1/raw_ops/TPUOrdinalSelector + to: /tf/raw_ops/TPUOrdinalSelector +- from: /tf/compat/v1/raw_ops/TPUPartitionedCall + to: /tf/raw_ops/TPUPartitionedCall +- from: /tf/compat/v1/raw_ops/TPUReplicateMetadata + to: /tf/raw_ops/TPUReplicateMetadata +- from: /tf/compat/v1/raw_ops/TPUReplicatedInput + to: /tf/raw_ops/TPUReplicatedInput +- from: /tf/compat/v1/raw_ops/TPUReplicatedOutput + to: /tf/raw_ops/TPUReplicatedOutput +- from: /tf/compat/v1/raw_ops/TakeDataset + to: /tf/raw_ops/TakeDataset +- from: /tf/compat/v1/raw_ops/TakeManySparseFromTensorsMap + to: /tf/raw_ops/TakeManySparseFromTensorsMap +- from: 
/tf/compat/v1/raw_ops/TakeWhileDataset + to: /tf/raw_ops/TakeWhileDataset +- from: /tf/compat/v1/raw_ops/Tan + to: /tf/raw_ops/Tan +- from: /tf/compat/v1/raw_ops/Tanh + to: /tf/raw_ops/Tanh +- from: /tf/compat/v1/raw_ops/TanhGrad + to: /tf/raw_ops/TanhGrad +- from: /tf/compat/v1/raw_ops/TemporaryVariable + to: /tf/raw_ops/TemporaryVariable +- from: /tf/compat/v1/raw_ops/TensorArray + to: /tf/raw_ops/TensorArray +- from: /tf/compat/v1/raw_ops/TensorArrayClose + to: /tf/raw_ops/TensorArrayClose +- from: /tf/compat/v1/raw_ops/TensorArrayCloseV2 + to: /tf/raw_ops/TensorArrayCloseV2 +- from: /tf/compat/v1/raw_ops/TensorArrayCloseV3 + to: /tf/raw_ops/TensorArrayCloseV3 +- from: /tf/compat/v1/raw_ops/TensorArrayConcat + to: /tf/raw_ops/TensorArrayConcat +- from: /tf/compat/v1/raw_ops/TensorArrayConcatV2 + to: /tf/raw_ops/TensorArrayConcatV2 +- from: /tf/compat/v1/raw_ops/TensorArrayConcatV3 + to: /tf/raw_ops/TensorArrayConcatV3 +- from: /tf/compat/v1/raw_ops/TensorArrayGather + to: /tf/raw_ops/TensorArrayGather +- from: /tf/compat/v1/raw_ops/TensorArrayGatherV2 + to: /tf/raw_ops/TensorArrayGatherV2 +- from: /tf/compat/v1/raw_ops/TensorArrayGatherV3 + to: /tf/raw_ops/TensorArrayGatherV3 +- from: /tf/compat/v1/raw_ops/TensorArrayGrad + to: /tf/raw_ops/TensorArrayGrad +- from: /tf/compat/v1/raw_ops/TensorArrayGradV2 + to: /tf/raw_ops/TensorArrayGradV2 +- from: /tf/compat/v1/raw_ops/TensorArrayGradV3 + to: /tf/raw_ops/TensorArrayGradV3 +- from: /tf/compat/v1/raw_ops/TensorArrayGradWithShape + to: /tf/raw_ops/TensorArrayGradWithShape +- from: /tf/compat/v1/raw_ops/TensorArrayPack + to: /tf/raw_ops/TensorArrayPack +- from: /tf/compat/v1/raw_ops/TensorArrayRead + to: /tf/raw_ops/TensorArrayRead +- from: /tf/compat/v1/raw_ops/TensorArrayReadV2 + to: /tf/raw_ops/TensorArrayReadV2 +- from: /tf/compat/v1/raw_ops/TensorArrayReadV3 + to: /tf/raw_ops/TensorArrayReadV3 +- from: /tf/compat/v1/raw_ops/TensorArrayScatter + to: /tf/raw_ops/TensorArrayScatter +- from: /tf/compat/v1/raw_ops/TensorArrayScatterV2 + to: /tf/raw_ops/TensorArrayScatterV2 +- from: /tf/compat/v1/raw_ops/TensorArrayScatterV3 + to: /tf/raw_ops/TensorArrayScatterV3 +- from: /tf/compat/v1/raw_ops/TensorArraySize + to: /tf/raw_ops/TensorArraySize +- from: /tf/compat/v1/raw_ops/TensorArraySizeV2 + to: /tf/raw_ops/TensorArraySizeV2 +- from: /tf/compat/v1/raw_ops/TensorArraySizeV3 + to: /tf/raw_ops/TensorArraySizeV3 +- from: /tf/compat/v1/raw_ops/TensorArraySplit + to: /tf/raw_ops/TensorArraySplit +- from: /tf/compat/v1/raw_ops/TensorArraySplitV2 + to: /tf/raw_ops/TensorArraySplitV2 +- from: /tf/compat/v1/raw_ops/TensorArraySplitV3 + to: /tf/raw_ops/TensorArraySplitV3 +- from: /tf/compat/v1/raw_ops/TensorArrayUnpack + to: /tf/raw_ops/TensorArrayUnpack +- from: /tf/compat/v1/raw_ops/TensorArrayV2 + to: /tf/raw_ops/TensorArrayV2 +- from: /tf/compat/v1/raw_ops/TensorArrayV3 + to: /tf/raw_ops/TensorArrayV3 +- from: /tf/compat/v1/raw_ops/TensorArrayWrite + to: /tf/raw_ops/TensorArrayWrite +- from: /tf/compat/v1/raw_ops/TensorArrayWriteV2 + to: /tf/raw_ops/TensorArrayWriteV2 +- from: /tf/compat/v1/raw_ops/TensorArrayWriteV3 + to: /tf/raw_ops/TensorArrayWriteV3 +- from: /tf/compat/v1/raw_ops/TensorDataset + to: /tf/raw_ops/TensorDataset +- from: /tf/compat/v1/raw_ops/TensorListConcat + to: /tf/raw_ops/TensorListConcat +- from: /tf/compat/v1/raw_ops/TensorListConcatLists + to: /tf/raw_ops/TensorListConcatLists +- from: /tf/compat/v1/raw_ops/TensorListConcatV2 + to: /tf/raw_ops/TensorListConcatV2 +- from: /tf/compat/v1/raw_ops/TensorListElementShape + to: 
/tf/raw_ops/TensorListElementShape +- from: /tf/compat/v1/raw_ops/TensorListFromTensor + to: /tf/raw_ops/TensorListFromTensor +- from: /tf/compat/v1/raw_ops/TensorListGather + to: /tf/raw_ops/TensorListGather +- from: /tf/compat/v1/raw_ops/TensorListGetItem + to: /tf/raw_ops/TensorListGetItem +- from: /tf/compat/v1/raw_ops/TensorListLength + to: /tf/raw_ops/TensorListLength +- from: /tf/compat/v1/raw_ops/TensorListPopBack + to: /tf/raw_ops/TensorListPopBack +- from: /tf/compat/v1/raw_ops/TensorListPushBack + to: /tf/raw_ops/TensorListPushBack +- from: /tf/compat/v1/raw_ops/TensorListPushBackBatch + to: /tf/raw_ops/TensorListPushBackBatch +- from: /tf/compat/v1/raw_ops/TensorListReserve + to: /tf/raw_ops/TensorListReserve +- from: /tf/compat/v1/raw_ops/TensorListResize + to: /tf/raw_ops/TensorListResize +- from: /tf/compat/v1/raw_ops/TensorListScatter + to: /tf/raw_ops/TensorListScatter +- from: /tf/compat/v1/raw_ops/TensorListScatterIntoExistingList + to: /tf/raw_ops/TensorListScatterIntoExistingList +- from: /tf/compat/v1/raw_ops/TensorListScatterV2 + to: /tf/raw_ops/TensorListScatterV2 +- from: /tf/compat/v1/raw_ops/TensorListSetItem + to: /tf/raw_ops/TensorListSetItem +- from: /tf/compat/v1/raw_ops/TensorListSplit + to: /tf/raw_ops/TensorListSplit +- from: /tf/compat/v1/raw_ops/TensorListStack + to: /tf/raw_ops/TensorListStack +- from: /tf/compat/v1/raw_ops/TensorScatterAdd + to: /tf/raw_ops/TensorScatterAdd +- from: /tf/compat/v1/raw_ops/TensorScatterSub + to: /tf/raw_ops/TensorScatterSub +- from: /tf/compat/v1/raw_ops/TensorScatterUpdate + to: /tf/raw_ops/TensorScatterUpdate +- from: /tf/compat/v1/raw_ops/TensorSliceDataset + to: /tf/raw_ops/TensorSliceDataset +- from: /tf/compat/v1/raw_ops/TensorStridedSliceUpdate + to: /tf/raw_ops/TensorStridedSliceUpdate +- from: /tf/compat/v1/raw_ops/TensorSummary + to: /tf/raw_ops/TensorSummary +- from: /tf/compat/v1/raw_ops/TensorSummaryV2 + to: /tf/raw_ops/TensorSummaryV2 +- from: /tf/compat/v1/raw_ops/TextLineDataset + to: /tf/raw_ops/TextLineDataset +- from: /tf/compat/v1/raw_ops/TextLineReader + to: /tf/raw_ops/TextLineReader +- from: /tf/compat/v1/raw_ops/TextLineReaderV2 + to: /tf/raw_ops/TextLineReaderV2 +- from: /tf/compat/v1/raw_ops/ThreadPoolDataset + to: /tf/raw_ops/ThreadPoolDataset +- from: /tf/compat/v1/raw_ops/ThreadPoolHandle + to: /tf/raw_ops/ThreadPoolHandle +- from: /tf/compat/v1/raw_ops/ThreadUnsafeUnigramCandidateSampler + to: /tf/raw_ops/ThreadUnsafeUnigramCandidateSampler +- from: /tf/compat/v1/raw_ops/Tile + to: /tf/raw_ops/Tile +- from: /tf/compat/v1/raw_ops/TileGrad + to: /tf/raw_ops/TileGrad +- from: /tf/compat/v1/raw_ops/Timestamp + to: /tf/raw_ops/Timestamp +- from: /tf/compat/v1/raw_ops/ToBool + to: /tf/raw_ops/ToBool +- from: /tf/compat/v1/raw_ops/TopK + to: /tf/raw_ops/TopK +- from: /tf/compat/v1/raw_ops/TopKV2 + to: /tf/raw_ops/TopKV2 +- from: /tf/compat/v1/raw_ops/Transpose + to: /tf/raw_ops/Transpose +- from: /tf/compat/v1/raw_ops/TridiagonalMatMul + to: /tf/raw_ops/TridiagonalMatMul +- from: /tf/compat/v1/raw_ops/TridiagonalSolve + to: /tf/raw_ops/TridiagonalSolve +- from: /tf/compat/v1/raw_ops/TruncateDiv + to: /tf/raw_ops/TruncateDiv +- from: /tf/compat/v1/raw_ops/TruncateMod + to: /tf/raw_ops/TruncateMod +- from: /tf/compat/v1/raw_ops/TruncatedNormal + to: /tf/raw_ops/TruncatedNormal +- from: /tf/compat/v1/raw_ops/Unbatch + to: /tf/raw_ops/Unbatch +- from: /tf/compat/v1/raw_ops/UnbatchDataset + to: /tf/raw_ops/UnbatchDataset +- from: /tf/compat/v1/raw_ops/UnbatchGrad + to: /tf/raw_ops/UnbatchGrad +- from: 
/tf/compat/v1/raw_ops/UnicodeDecode + to: /tf/raw_ops/UnicodeDecode +- from: /tf/compat/v1/raw_ops/UnicodeDecodeWithOffsets + to: /tf/raw_ops/UnicodeDecodeWithOffsets +- from: /tf/compat/v1/raw_ops/UnicodeEncode + to: /tf/raw_ops/UnicodeEncode +- from: /tf/compat/v1/raw_ops/UnicodeScript + to: /tf/raw_ops/UnicodeScript +- from: /tf/compat/v1/raw_ops/UnicodeTranscode + to: /tf/raw_ops/UnicodeTranscode +- from: /tf/compat/v1/raw_ops/UniformCandidateSampler + to: /tf/raw_ops/UniformCandidateSampler +- from: /tf/compat/v1/raw_ops/Unique + to: /tf/raw_ops/Unique +- from: /tf/compat/v1/raw_ops/UniqueDataset + to: /tf/raw_ops/UniqueDataset +- from: /tf/compat/v1/raw_ops/UniqueV2 + to: /tf/raw_ops/UniqueV2 +- from: /tf/compat/v1/raw_ops/UniqueWithCounts + to: /tf/raw_ops/UniqueWithCounts +- from: /tf/compat/v1/raw_ops/UniqueWithCountsV2 + to: /tf/raw_ops/UniqueWithCountsV2 +- from: /tf/compat/v1/raw_ops/Unpack + to: /tf/raw_ops/Unpack +- from: /tf/compat/v1/raw_ops/UnravelIndex + to: /tf/raw_ops/UnravelIndex +- from: /tf/compat/v1/raw_ops/UnsortedSegmentJoin + to: /tf/raw_ops/UnsortedSegmentJoin +- from: /tf/compat/v1/raw_ops/UnsortedSegmentMax + to: /tf/raw_ops/UnsortedSegmentMax +- from: /tf/compat/v1/raw_ops/UnsortedSegmentMin + to: /tf/raw_ops/UnsortedSegmentMin +- from: /tf/compat/v1/raw_ops/UnsortedSegmentProd + to: /tf/raw_ops/UnsortedSegmentProd +- from: /tf/compat/v1/raw_ops/UnsortedSegmentSum + to: /tf/raw_ops/UnsortedSegmentSum +- from: /tf/compat/v1/raw_ops/Unstage + to: /tf/raw_ops/Unstage +- from: /tf/compat/v1/raw_ops/UnwrapDatasetVariant + to: /tf/raw_ops/UnwrapDatasetVariant +- from: /tf/compat/v1/raw_ops/UpperBound + to: /tf/raw_ops/UpperBound +- from: /tf/compat/v1/raw_ops/VarHandleOp + to: /tf/raw_ops/VarHandleOp +- from: /tf/compat/v1/raw_ops/VarIsInitializedOp + to: /tf/raw_ops/VarIsInitializedOp +- from: /tf/compat/v1/raw_ops/Variable + to: /tf/raw_ops/Variable +- from: /tf/compat/v1/raw_ops/VariableShape + to: /tf/raw_ops/VariableShape +- from: /tf/compat/v1/raw_ops/VariableV2 + to: /tf/raw_ops/VariableV2 +- from: /tf/compat/v1/raw_ops/Where + to: /tf/raw_ops/Where +- from: /tf/compat/v1/raw_ops/While + to: /tf/raw_ops/While +- from: /tf/compat/v1/raw_ops/WholeFileReader + to: /tf/raw_ops/WholeFileReader +- from: /tf/compat/v1/raw_ops/WholeFileReaderV2 + to: /tf/raw_ops/WholeFileReaderV2 +- from: /tf/compat/v1/raw_ops/WindowDataset + to: /tf/raw_ops/WindowDataset +- from: /tf/compat/v1/raw_ops/WorkerHeartbeat + to: /tf/raw_ops/WorkerHeartbeat +- from: /tf/compat/v1/raw_ops/WrapDatasetVariant + to: /tf/raw_ops/WrapDatasetVariant +- from: /tf/compat/v1/raw_ops/WriteAudioSummary + to: /tf/raw_ops/WriteAudioSummary +- from: /tf/compat/v1/raw_ops/WriteFile + to: /tf/raw_ops/WriteFile +- from: /tf/compat/v1/raw_ops/WriteGraphSummary + to: /tf/raw_ops/WriteGraphSummary +- from: /tf/compat/v1/raw_ops/WriteHistogramSummary + to: /tf/raw_ops/WriteHistogramSummary +- from: /tf/compat/v1/raw_ops/WriteImageSummary + to: /tf/raw_ops/WriteImageSummary +- from: /tf/compat/v1/raw_ops/WriteRawProtoSummary + to: /tf/raw_ops/WriteRawProtoSummary +- from: /tf/compat/v1/raw_ops/WriteScalarSummary + to: /tf/raw_ops/WriteScalarSummary +- from: /tf/compat/v1/raw_ops/WriteSummary + to: /tf/raw_ops/WriteSummary +- from: /tf/compat/v1/raw_ops/Xdivy + to: /tf/raw_ops/Xdivy +- from: /tf/compat/v1/raw_ops/Xlog1py + to: /tf/raw_ops/Xlog1py +- from: /tf/compat/v1/raw_ops/Xlogy + to: /tf/raw_ops/Xlogy +- from: /tf/compat/v1/raw_ops/ZerosLike + to: /tf/raw_ops/ZerosLike +- from: /tf/compat/v1/raw_ops/Zeta + 
to: /tf/raw_ops/Zeta +- from: /tf/compat/v1/raw_ops/ZipDataset + to: /tf/raw_ops/ZipDataset +- from: /tf/compat/v1/read_file + to: /tf/io/read_file +- from: /tf/compat/v1/real + to: /tf/math/real +- from: /tf/compat/v1/realdiv + to: /tf/realdiv +- from: /tf/compat/v1/reciprocal + to: /tf/math/reciprocal +- from: /tf/compat/v1/recompute_grad + to: /tf/recompute_grad +- from: /tf/compat/v1/regex_replace + to: /tf/strings/regex_replace +- from: /tf/compat/v1/register_tensor_conversion_function + to: /tf/register_tensor_conversion_function +- from: /tf/compat/v1/repeat + to: /tf/repeat +- from: /tf/compat/v1/required_space_to_batch_paddings + to: /tf/required_space_to_batch_paddings +- from: /tf/compat/v1/reshape + to: /tf/reshape +- from: /tf/compat/v1/reverse + to: /tf/reverse +- from: /tf/compat/v1/reverse_v2 + to: /tf/reverse +- from: /tf/compat/v1/rint + to: /tf/math/rint +- from: /tf/compat/v1/roll + to: /tf/roll +- from: /tf/compat/v1/round + to: /tf/math/round +- from: /tf/compat/v1/rsqrt + to: /tf/math/rsqrt +- from: /tf/compat/v1/saturate_cast + to: /tf/dtypes/saturate_cast +- from: /tf/compat/v1/saved_model/Asset + to: /tf/saved_model/Asset +- from: /tf/compat/v1/saved_model/SaveOptions + to: /tf/saved_model/SaveOptions +- from: /tf/compat/v1/saved_model/builder/SavedModelBuilder + to: /tf/compat/v1/saved_model/Builder +- from: /tf/compat/v1/saved_model/experimental/save + to: /tf/saved_model/save +- from: /tf/compat/v1/saved_model/load_v2 + to: /tf/saved_model/load +- from: /tf/compat/v1/saved_model/loader/load + to: /tf/compat/v1/saved_model/load +- from: /tf/compat/v1/saved_model/loader/maybe_saved_model_directory + to: /tf/compat/v1/saved_model/contains_saved_model +- from: /tf/compat/v1/saved_model/main_op/main_op_with_restore + to: /tf/compat/v1/saved_model/main_op_with_restore +- from: /tf/compat/v1/saved_model/maybe_saved_model_directory + to: /tf/compat/v1/saved_model/contains_saved_model +- from: /tf/compat/v1/saved_model/save + to: /tf/saved_model/save +- from: /tf/compat/v1/saved_model/signature_def_utils/build_signature_def + to: /tf/compat/v1/saved_model/build_signature_def +- from: /tf/compat/v1/saved_model/signature_def_utils/classification_signature_def + to: /tf/compat/v1/saved_model/classification_signature_def +- from: /tf/compat/v1/saved_model/signature_def_utils/is_valid_signature + to: /tf/compat/v1/saved_model/is_valid_signature +- from: /tf/compat/v1/saved_model/signature_def_utils/predict_signature_def + to: /tf/compat/v1/saved_model/predict_signature_def +- from: /tf/compat/v1/saved_model/signature_def_utils/regression_signature_def + to: /tf/compat/v1/saved_model/regression_signature_def +- from: /tf/compat/v1/saved_model/utils/build_tensor_info + to: /tf/compat/v1/saved_model/build_tensor_info +- from: /tf/compat/v1/saved_model/utils/get_tensor_from_tensor_info + to: /tf/compat/v1/saved_model/get_tensor_from_tensor_info +- from: /tf/compat/v1/scatter_nd + to: /tf/scatter_nd +- from: /tf/compat/v1/searchsorted + to: /tf/searchsorted +- from: /tf/compat/v1/segment_max + to: /tf/math/segment_max +- from: /tf/compat/v1/segment_mean + to: /tf/math/segment_mean +- from: /tf/compat/v1/segment_min + to: /tf/math/segment_min +- from: /tf/compat/v1/segment_prod + to: /tf/math/segment_prod +- from: /tf/compat/v1/segment_sum + to: /tf/math/segment_sum +- from: /tf/compat/v1/self_adjoint_eig + to: /tf/linalg/eigh +- from: /tf/compat/v1/self_adjoint_eigvals + to: /tf/linalg/eigvalsh +- from: /tf/compat/v1/sequence_mask + to: /tf/sequence_mask +- from: 
/tf/compat/v1/serialize_tensor + to: /tf/io/serialize_tensor +- from: /tf/compat/v1/sets/difference + to: /tf/sets/difference +- from: /tf/compat/v1/sets/intersection + to: /tf/sets/intersection +- from: /tf/compat/v1/sets/set_difference + to: /tf/sets/difference +- from: /tf/compat/v1/sets/set_intersection + to: /tf/sets/intersection +- from: /tf/compat/v1/sets/set_size + to: /tf/sets/size +- from: /tf/compat/v1/sets/set_union + to: /tf/sets/union +- from: /tf/compat/v1/sets/size + to: /tf/sets/size +- from: /tf/compat/v1/sets/union + to: /tf/sets/union +- from: /tf/compat/v1/shape_n + to: /tf/shape_n +- from: /tf/compat/v1/sigmoid + to: /tf/math/sigmoid +- from: /tf/compat/v1/sign + to: /tf/math/sign +- from: /tf/compat/v1/signal/dct + to: /tf/signal/dct +- from: /tf/compat/v1/signal/fft + to: /tf/signal/fft +- from: /tf/compat/v1/signal/fft2d + to: /tf/signal/fft2d +- from: /tf/compat/v1/signal/fft3d + to: /tf/signal/fft3d +- from: /tf/compat/v1/signal/fftshift + to: /tf/signal/fftshift +- from: /tf/compat/v1/signal/frame + to: /tf/signal/frame +- from: /tf/compat/v1/signal/hamming_window + to: /tf/signal/hamming_window +- from: /tf/compat/v1/signal/hann_window + to: /tf/signal/hann_window +- from: /tf/compat/v1/signal/idct + to: /tf/signal/idct +- from: /tf/compat/v1/signal/ifft + to: /tf/signal/ifft +- from: /tf/compat/v1/signal/ifft2d + to: /tf/signal/ifft2d +- from: /tf/compat/v1/signal/ifft3d + to: /tf/signal/ifft3d +- from: /tf/compat/v1/signal/ifftshift + to: /tf/signal/ifftshift +- from: /tf/compat/v1/signal/inverse_mdct + to: /tf/signal/inverse_mdct +- from: /tf/compat/v1/signal/inverse_stft + to: /tf/signal/inverse_stft +- from: /tf/compat/v1/signal/inverse_stft_window_fn + to: /tf/signal/inverse_stft_window_fn +- from: /tf/compat/v1/signal/irfft + to: /tf/signal/irfft +- from: /tf/compat/v1/signal/irfft2d + to: /tf/signal/irfft2d +- from: /tf/compat/v1/signal/irfft3d + to: /tf/signal/irfft3d +- from: /tf/compat/v1/signal/kaiser_bessel_derived_window + to: /tf/signal/kaiser_bessel_derived_window +- from: /tf/compat/v1/signal/kaiser_window + to: /tf/signal/kaiser_window +- from: /tf/compat/v1/signal/linear_to_mel_weight_matrix + to: /tf/signal/linear_to_mel_weight_matrix +- from: /tf/compat/v1/signal/mdct + to: /tf/signal/mdct +- from: /tf/compat/v1/signal/mfccs_from_log_mel_spectrograms + to: /tf/signal/mfccs_from_log_mel_spectrograms +- from: /tf/compat/v1/signal/overlap_and_add + to: /tf/signal/overlap_and_add +- from: /tf/compat/v1/signal/rfft + to: /tf/signal/rfft +- from: /tf/compat/v1/signal/rfft2d + to: /tf/signal/rfft2d +- from: /tf/compat/v1/signal/rfft3d + to: /tf/signal/rfft3d +- from: /tf/compat/v1/signal/stft + to: /tf/signal/stft +- from: /tf/compat/v1/signal/vorbis_window + to: /tf/signal/vorbis_window +- from: /tf/compat/v1/sin + to: /tf/math/sin +- from: /tf/compat/v1/sinh + to: /tf/math/sinh +- from: /tf/compat/v1/slice + to: /tf/slice +- from: /tf/compat/v1/sort + to: /tf/sort +- from: /tf/compat/v1/space_to_batch_nd + to: /tf/space_to_batch_nd +- from: /tf/compat/v1/sparse/SparseConditionalAccumulator + to: /tf/compat/v1/SparseConditionalAccumulator +- from: /tf/compat/v1/sparse/SparseTensor + to: /tf/sparse/SparseTensor +- from: /tf/compat/v1/sparse/add + to: /tf/compat/v1/sparse_add +- from: /tf/compat/v1/sparse/concat + to: /tf/compat/v1/sparse_concat +- from: /tf/compat/v1/sparse/cross + to: /tf/sparse/cross +- from: /tf/compat/v1/sparse/cross_hashed + to: /tf/sparse/cross_hashed +- from: /tf/compat/v1/sparse/expand_dims + to: /tf/sparse/expand_dims +- 
from: /tf/compat/v1/sparse/eye + to: /tf/sparse/eye +- from: /tf/compat/v1/sparse/fill_empty_rows + to: /tf/sparse/fill_empty_rows +- from: /tf/compat/v1/sparse/from_dense + to: /tf/sparse/from_dense +- from: /tf/compat/v1/sparse/mask + to: /tf/sparse/mask +- from: /tf/compat/v1/sparse/matmul + to: /tf/sparse/sparse_dense_matmul +- from: /tf/compat/v1/sparse/maximum + to: /tf/sparse/maximum +- from: /tf/compat/v1/sparse/merge + to: /tf/compat/v1/sparse_merge +- from: /tf/compat/v1/sparse/minimum + to: /tf/sparse/minimum +- from: /tf/compat/v1/sparse/placeholder + to: /tf/compat/v1/sparse_placeholder +- from: /tf/compat/v1/sparse/reduce_max + to: /tf/compat/v1/sparse_reduce_max +- from: /tf/compat/v1/sparse/reduce_max_sparse + to: /tf/compat/v1/sparse_reduce_max_sparse +- from: /tf/compat/v1/sparse/reduce_sum + to: /tf/compat/v1/sparse_reduce_sum +- from: /tf/compat/v1/sparse/reduce_sum_sparse + to: /tf/compat/v1/sparse_reduce_sum_sparse +- from: /tf/compat/v1/sparse/reorder + to: /tf/sparse/reorder +- from: /tf/compat/v1/sparse/reset_shape + to: /tf/sparse/reset_shape +- from: /tf/compat/v1/sparse/reshape + to: /tf/sparse/reshape +- from: /tf/compat/v1/sparse/retain + to: /tf/sparse/retain +- from: /tf/compat/v1/sparse/segment_mean + to: /tf/compat/v1/sparse_segment_mean +- from: /tf/compat/v1/sparse/segment_sqrt_n + to: /tf/compat/v1/sparse_segment_sqrt_n +- from: /tf/compat/v1/sparse/segment_sum + to: /tf/compat/v1/sparse_segment_sum +- from: /tf/compat/v1/sparse/slice + to: /tf/sparse/slice +- from: /tf/compat/v1/sparse/softmax + to: /tf/sparse/softmax +- from: /tf/compat/v1/sparse/sparse_dense_matmul + to: /tf/sparse/sparse_dense_matmul +- from: /tf/compat/v1/sparse/split + to: /tf/compat/v1/sparse_split +- from: /tf/compat/v1/sparse/to_dense + to: /tf/sparse/to_dense +- from: /tf/compat/v1/sparse/to_indicator + to: /tf/sparse/to_indicator +- from: /tf/compat/v1/sparse/transpose + to: /tf/sparse/transpose +- from: /tf/compat/v1/sparse_fill_empty_rows + to: /tf/sparse/fill_empty_rows +- from: /tf/compat/v1/sparse_mask + to: /tf/sparse/mask +- from: /tf/compat/v1/sparse_maximum + to: /tf/sparse/maximum +- from: /tf/compat/v1/sparse_minimum + to: /tf/sparse/minimum +- from: /tf/compat/v1/sparse_reorder + to: /tf/sparse/reorder +- from: /tf/compat/v1/sparse_reset_shape + to: /tf/sparse/reset_shape +- from: /tf/compat/v1/sparse_reshape + to: /tf/sparse/reshape +- from: /tf/compat/v1/sparse_retain + to: /tf/sparse/retain +- from: /tf/compat/v1/sparse_slice + to: /tf/sparse/slice +- from: /tf/compat/v1/sparse_softmax + to: /tf/sparse/softmax +- from: /tf/compat/v1/sparse_tensor_dense_matmul + to: /tf/sparse/sparse_dense_matmul +- from: /tf/compat/v1/sparse_tensor_to_dense + to: /tf/sparse/to_dense +- from: /tf/compat/v1/sparse_to_indicator + to: /tf/sparse/to_indicator +- from: /tf/compat/v1/sparse_transpose + to: /tf/sparse/transpose +- from: /tf/compat/v1/spectral/dct + to: /tf/signal/dct +- from: /tf/compat/v1/spectral/fft + to: /tf/signal/fft +- from: /tf/compat/v1/spectral/fft2d + to: /tf/signal/fft2d +- from: /tf/compat/v1/spectral/fft3d + to: /tf/signal/fft3d +- from: /tf/compat/v1/spectral/idct + to: /tf/signal/idct +- from: /tf/compat/v1/spectral/ifft + to: /tf/signal/ifft +- from: /tf/compat/v1/spectral/ifft2d + to: /tf/signal/ifft2d +- from: /tf/compat/v1/spectral/ifft3d + to: /tf/signal/ifft3d +- from: /tf/compat/v1/spectral/irfft + to: /tf/signal/irfft +- from: /tf/compat/v1/spectral/irfft2d + to: /tf/signal/irfft2d +- from: /tf/compat/v1/spectral/irfft3d + to: /tf/signal/irfft3d 
+- from: /tf/compat/v1/spectral/rfft + to: /tf/signal/rfft +- from: /tf/compat/v1/spectral/rfft2d + to: /tf/signal/rfft2d +- from: /tf/compat/v1/spectral/rfft3d + to: /tf/signal/rfft3d +- from: /tf/compat/v1/split + to: /tf/split +- from: /tf/compat/v1/sqrt + to: /tf/math/sqrt +- from: /tf/compat/v1/square + to: /tf/math/square +- from: /tf/compat/v1/squared_difference + to: /tf/math/squared_difference +- from: /tf/compat/v1/stack + to: /tf/stack +- from: /tf/compat/v1/stop_gradient + to: /tf/stop_gradient +- from: /tf/compat/v1/strided_slice + to: /tf/strided_slice +- from: /tf/compat/v1/string_join + to: /tf/strings/join +- from: /tf/compat/v1/string_strip + to: /tf/strings/strip +- from: /tf/compat/v1/string_to_hash_bucket_fast + to: /tf/strings/to_hash_bucket_fast +- from: /tf/compat/v1/string_to_hash_bucket_strong + to: /tf/strings/to_hash_bucket_strong +- from: /tf/compat/v1/strings/as_string + to: /tf/strings/as_string +- from: /tf/compat/v1/strings/bytes_split + to: /tf/strings/bytes_split +- from: /tf/compat/v1/strings/format + to: /tf/strings/format +- from: /tf/compat/v1/strings/join + to: /tf/strings/join +- from: /tf/compat/v1/strings/lower + to: /tf/strings/lower +- from: /tf/compat/v1/strings/ngrams + to: /tf/strings/ngrams +- from: /tf/compat/v1/strings/reduce_join + to: /tf/compat/v1/reduce_join +- from: /tf/compat/v1/strings/regex_full_match + to: /tf/strings/regex_full_match +- from: /tf/compat/v1/strings/regex_replace + to: /tf/strings/regex_replace +- from: /tf/compat/v1/strings/strip + to: /tf/strings/strip +- from: /tf/compat/v1/strings/to_hash_bucket + to: /tf/compat/v1/string_to_hash_bucket +- from: /tf/compat/v1/strings/to_hash_bucket_fast + to: /tf/strings/to_hash_bucket_fast +- from: /tf/compat/v1/strings/to_hash_bucket_strong + to: /tf/strings/to_hash_bucket_strong +- from: /tf/compat/v1/strings/to_number + to: /tf/compat/v1/string_to_number +- from: /tf/compat/v1/strings/unicode_decode + to: /tf/strings/unicode_decode +- from: /tf/compat/v1/strings/unicode_decode_with_offsets + to: /tf/strings/unicode_decode_with_offsets +- from: /tf/compat/v1/strings/unicode_encode + to: /tf/strings/unicode_encode +- from: /tf/compat/v1/strings/unicode_script + to: /tf/strings/unicode_script +- from: /tf/compat/v1/strings/unicode_split + to: /tf/strings/unicode_split +- from: /tf/compat/v1/strings/unicode_split_with_offsets + to: /tf/strings/unicode_split_with_offsets +- from: /tf/compat/v1/strings/unicode_transcode + to: /tf/strings/unicode_transcode +- from: /tf/compat/v1/strings/unsorted_segment_join + to: /tf/strings/unsorted_segment_join +- from: /tf/compat/v1/strings/upper + to: /tf/strings/upper +- from: /tf/compat/v1/subtract + to: /tf/math/subtract +- from: /tf/compat/v1/summary/Event + to: /tf/compat/v1/Event +- from: /tf/compat/v1/summary/SessionLog + to: /tf/compat/v1/SessionLog +- from: /tf/compat/v1/summary/Summary + to: /tf/compat/v1/Summary +- from: /tf/compat/v1/summary/Summary/Audio + to: /tf/compat/v1/Summary/Audio +- from: /tf/compat/v1/summary/Summary/Image + to: /tf/compat/v1/Summary/Image +- from: /tf/compat/v1/summary/Summary/Value + to: /tf/compat/v1/Summary/Value +- from: /tf/compat/v1/svd + to: /tf/linalg/svd +- from: /tf/compat/v1/switch_case + to: /tf/switch_case +- from: /tf/compat/v1/sysconfig/get_compile_flags + to: /tf/sysconfig/get_compile_flags +- from: /tf/compat/v1/sysconfig/get_include + to: /tf/sysconfig/get_include +- from: /tf/compat/v1/sysconfig/get_lib + to: /tf/sysconfig/get_lib +- from: /tf/compat/v1/sysconfig/get_link_flags + to: 
/tf/sysconfig/get_link_flags +- from: /tf/compat/v1/tan + to: /tf/math/tan +- from: /tf/compat/v1/tanh + to: /tf/math/tanh +- from: /tf/compat/v1/tensor_scatter_add + to: /tf/tensor_scatter_nd_add +- from: /tf/compat/v1/tensor_scatter_nd_add + to: /tf/tensor_scatter_nd_add +- from: /tf/compat/v1/tensor_scatter_nd_sub + to: /tf/tensor_scatter_nd_sub +- from: /tf/compat/v1/tensor_scatter_nd_update + to: /tf/tensor_scatter_nd_update +- from: /tf/compat/v1/tensor_scatter_sub + to: /tf/tensor_scatter_nd_sub +- from: /tf/compat/v1/tensor_scatter_update + to: /tf/tensor_scatter_nd_update +- from: /tf/compat/v1/tensordot + to: /tf/tensordot +- from: /tf/compat/v1/test/Benchmark + to: /tf/test/Benchmark +- from: /tf/compat/v1/test/TestCase + to: /tf/test/TestCase +- from: /tf/compat/v1/test/TestCase/failureException + to: /tf/test/TestCase/failureException +- from: /tf/compat/v1/test/benchmark_config + to: /tf/test/benchmark_config +- from: /tf/compat/v1/test/create_local_cluster + to: /tf/test/create_local_cluster +- from: /tf/compat/v1/test/gpu_device_name + to: /tf/test/gpu_device_name +- from: /tf/compat/v1/test/is_built_with_cuda + to: /tf/test/is_built_with_cuda +- from: /tf/compat/v1/test/is_built_with_gpu_support + to: /tf/test/is_built_with_gpu_support +- from: /tf/compat/v1/test/is_built_with_rocm + to: /tf/test/is_built_with_rocm +- from: /tf/compat/v1/test/is_built_with_xla + to: /tf/test/is_built_with_xla +- from: /tf/compat/v1/test/is_gpu_available + to: /tf/test/is_gpu_available +- from: /tf/compat/v1/test/main + to: /tf/test/main +- from: /tf/compat/v1/tile + to: /tf/tile +- from: /tf/compat/v1/timestamp + to: /tf/timestamp +- from: /tf/compat/v1/tpu/experimental/DeviceAssignment + to: /tf/tpu/experimental/DeviceAssignment +- from: /tf/compat/v1/tpu/experimental/initialize_tpu_system + to: /tf/tpu/experimental/initialize_tpu_system +- from: /tf/compat/v1/tpu/experimental/shutdown_tpu_system + to: /tf/tpu/experimental/shutdown_tpu_system +- from: /tf/compat/v1/trace + to: /tf/linalg/trace +- from: /tf/compat/v1/train/BytesList + to: /tf/train/BytesList +- from: /tf/compat/v1/train/CheckpointManager + to: /tf/train/CheckpointManager +- from: /tf/compat/v1/train/CheckpointSaverHook + to: /tf/estimator/CheckpointSaverHook +- from: /tf/compat/v1/train/CheckpointSaverListener + to: /tf/estimator/CheckpointSaverListener +- from: /tf/compat/v1/train/ClusterDef + to: /tf/train/ClusterDef +- from: /tf/compat/v1/train/ClusterSpec + to: /tf/train/ClusterSpec +- from: /tf/compat/v1/train/Coordinator + to: /tf/train/Coordinator +- from: /tf/compat/v1/train/Example + to: /tf/train/Example +- from: /tf/compat/v1/train/ExponentialMovingAverage + to: /tf/train/ExponentialMovingAverage +- from: /tf/compat/v1/train/Feature + to: /tf/train/Feature +- from: /tf/compat/v1/train/FeatureList + to: /tf/train/FeatureList +- from: /tf/compat/v1/train/FeatureLists + to: /tf/train/FeatureLists +- from: /tf/compat/v1/train/FeatureLists/FeatureListEntry + to: /tf/train/FeatureLists/FeatureListEntry +- from: /tf/compat/v1/train/Features + to: /tf/train/Features +- from: /tf/compat/v1/train/Features/FeatureEntry + to: /tf/train/Features/FeatureEntry +- from: /tf/compat/v1/train/FeedFnHook + to: /tf/estimator/FeedFnHook +- from: /tf/compat/v1/train/FinalOpsHook + to: /tf/estimator/FinalOpsHook +- from: /tf/compat/v1/train/FloatList + to: /tf/train/FloatList +- from: /tf/compat/v1/train/GlobalStepWaiterHook + to: /tf/estimator/GlobalStepWaiterHook +- from: /tf/compat/v1/train/Int64List + to: /tf/train/Int64List +- 
from: /tf/compat/v1/train/JobDef + to: /tf/train/JobDef +- from: /tf/compat/v1/train/JobDef/TasksEntry + to: /tf/train/JobDef/TasksEntry +- from: /tf/compat/v1/train/LoggingTensorHook + to: /tf/estimator/LoggingTensorHook +- from: /tf/compat/v1/train/NanLossDuringTrainingError + to: /tf/estimator/NanLossDuringTrainingError +- from: /tf/compat/v1/train/NanTensorHook + to: /tf/estimator/NanTensorHook +- from: /tf/compat/v1/train/ProfilerHook + to: /tf/estimator/ProfilerHook +- from: /tf/compat/v1/train/SecondOrStepTimer + to: /tf/estimator/SecondOrStepTimer +- from: /tf/compat/v1/train/SequenceExample + to: /tf/train/SequenceExample +- from: /tf/compat/v1/train/Server + to: /tf/distribute/Server +- from: /tf/compat/v1/train/ServerDef + to: /tf/train/ServerDef +- from: /tf/compat/v1/train/SessionRunArgs + to: /tf/estimator/SessionRunArgs +- from: /tf/compat/v1/train/SessionRunContext + to: /tf/estimator/SessionRunContext +- from: /tf/compat/v1/train/SessionRunHook + to: /tf/estimator/SessionRunHook +- from: /tf/compat/v1/train/SessionRunValues + to: /tf/estimator/SessionRunValues +- from: /tf/compat/v1/train/SingularMonitoredSession/StepContext + to: /tf/compat/v1/train/MonitoredSession/StepContext +- from: /tf/compat/v1/train/StepCounterHook + to: /tf/estimator/StepCounterHook +- from: /tf/compat/v1/train/StopAtStepHook + to: /tf/estimator/StopAtStepHook +- from: /tf/compat/v1/train/SummarySaverHook + to: /tf/estimator/SummarySaverHook +- from: /tf/compat/v1/train/VocabInfo + to: /tf/estimator/VocabInfo +- from: /tf/compat/v1/train/checkpoints_iterator + to: /tf/train/checkpoints_iterator +- from: /tf/compat/v1/train/experimental/DynamicLossScale + to: /tf/mixed_precision/experimental/DynamicLossScale +- from: /tf/compat/v1/train/experimental/FixedLossScale + to: /tf/mixed_precision/experimental/FixedLossScale +- from: /tf/compat/v1/train/experimental/LossScale + to: /tf/mixed_precision/experimental/LossScale +- from: /tf/compat/v1/train/experimental/PythonState + to: /tf/train/experimental/PythonState +- from: /tf/compat/v1/train/get_checkpoint_state + to: /tf/train/get_checkpoint_state +- from: /tf/compat/v1/train/latest_checkpoint + to: /tf/train/latest_checkpoint +- from: /tf/compat/v1/train/list_variables + to: /tf/train/list_variables +- from: /tf/compat/v1/train/load_checkpoint + to: /tf/train/load_checkpoint +- from: /tf/compat/v1/train/load_variable + to: /tf/train/load_variable +- from: /tf/compat/v1/train/match_filenames_once + to: /tf/io/match_filenames_once +- from: /tf/compat/v1/train/piecewise_constant_decay + to: /tf/compat/v1/train/piecewise_constant +- from: /tf/compat/v1/train/queue_runner/QueueRunner + to: /tf/compat/v1/train/QueueRunner +- from: /tf/compat/v1/train/queue_runner/add_queue_runner + to: /tf/compat/v1/train/add_queue_runner +- from: /tf/compat/v1/train/queue_runner/start_queue_runners + to: /tf/compat/v1/train/start_queue_runners +- from: /tf/compat/v1/train/write_graph + to: /tf/io/write_graph +- from: /tf/compat/v1/truediv + to: /tf/math/truediv +- from: /tf/compat/v1/truncated_normal + to: /tf/random/truncated_normal +- from: /tf/compat/v1/truncatediv + to: /tf/truncatediv +- from: /tf/compat/v1/truncatemod + to: /tf/truncatemod +- from: /tf/compat/v1/unique + to: /tf/unique +- from: /tf/compat/v1/unique_with_counts + to: /tf/unique_with_counts +- from: /tf/compat/v1/unravel_index + to: /tf/unravel_index +- from: /tf/compat/v1/unsorted_segment_max + to: /tf/math/unsorted_segment_max +- from: /tf/compat/v1/unsorted_segment_mean + to: 
/tf/math/unsorted_segment_mean +- from: /tf/compat/v1/unsorted_segment_min + to: /tf/math/unsorted_segment_min +- from: /tf/compat/v1/unsorted_segment_prod + to: /tf/math/unsorted_segment_prod +- from: /tf/compat/v1/unsorted_segment_sqrt_n + to: /tf/math/unsorted_segment_sqrt_n +- from: /tf/compat/v1/unsorted_segment_sum + to: /tf/math/unsorted_segment_sum +- from: /tf/compat/v1/unstack + to: /tf/unstack +- from: /tf/compat/v1/variance_scaling_initializer + to: /tf/compat/v1/keras/initializers/VarianceScaling +- from: /tf/compat/v1/vectorized_map + to: /tf/vectorized_map +- from: /tf/compat/v1/where_v2 + to: /tf/where +- from: /tf/compat/v1/write_file + to: /tf/io/write_file +- from: /tf/compat/v1/xla/experimental/compile + to: /tf/xla/experimental/compile +- from: /tf/compat/v1/xla/experimental/jit_scope + to: /tf/xla/experimental/jit_scope +- from: /tf/compat/v1/zeros + to: /tf/zeros +- from: /tf/compat/v1/zeros_initializer + to: /tf/compat/v1/keras/initializers/Zeros +- from: /tf/compat/v1/zeta + to: /tf/math/zeta +- from: /tf/complex + to: /tf/dtypes/complex +- from: /tf/config/experimental/VirtualDeviceConfiguration + to: /tf/config/LogicalDeviceConfiguration +- from: /tf/config/experimental/get_virtual_device_configuration + to: /tf/config/get_logical_device_configuration +- from: /tf/config/experimental/get_visible_devices + to: /tf/config/get_visible_devices +- from: /tf/config/experimental/list_logical_devices + to: /tf/config/list_logical_devices +- from: /tf/config/experimental/list_physical_devices + to: /tf/config/list_physical_devices +- from: /tf/config/experimental/set_virtual_device_configuration + to: /tf/config/set_logical_device_configuration +- from: /tf/config/experimental/set_visible_devices + to: /tf/config/set_visible_devices +- from: /tf/cos + to: /tf/math/cos +- from: /tf/cosh + to: /tf/math/cosh +- from: /tf/cumsum + to: /tf/math/cumsum +- from: /tf/divide + to: /tf/math/divide +- from: /tf/dtypes/cast + to: /tf/cast +- from: /tf/eig + to: /tf/linalg/eig +- from: /tf/eigvals + to: /tf/linalg/eigvals +- from: /tf/equal + to: /tf/math/equal +- from: /tf/exp + to: /tf/math/exp +- from: /tf/floor + to: /tf/math/floor +- from: /tf/greater + to: /tf/math/greater +- from: /tf/greater_equal + to: /tf/math/greater_equal +- from: /tf/image/decode_and_crop_jpeg + to: /tf/io/decode_and_crop_jpeg +- from: /tf/image/decode_bmp + to: /tf/io/decode_bmp +- from: /tf/image/decode_gif + to: /tf/io/decode_gif +- from: /tf/image/decode_image + to: /tf/io/decode_image +- from: /tf/image/decode_jpeg + to: /tf/io/decode_jpeg +- from: /tf/image/decode_png + to: /tf/io/decode_png +- from: /tf/image/encode_jpeg + to: /tf/io/encode_jpeg +- from: /tf/image/extract_jpeg_shape + to: /tf/io/extract_jpeg_shape +- from: /tf/image/is_jpeg + to: /tf/io/is_jpeg +- from: /tf/import_graph_def + to: /tf/graph_util/import_graph_def +- from: /tf/initializers + to: /tf/keras/initializers +- from: /tf/initializers/Constant + to: /tf/constant_initializer +- from: /tf/initializers/GlorotNormal + to: /tf/keras/initializers/GlorotNormal +- from: /tf/initializers/GlorotUniform + to: /tf/keras/initializers/GlorotUniform +- from: /tf/initializers/Identity + to: /tf/keras/initializers/Identity +- from: /tf/initializers/Initializer + to: /tf/keras/initializers/Initializer +- from: /tf/initializers/Ones + to: /tf/ones_initializer +- from: /tf/initializers/Orthogonal + to: /tf/keras/initializers/Orthogonal +- from: /tf/initializers/RandomNormal + to: /tf/random_normal_initializer +- from: 
/tf/initializers/RandomUniform + to: /tf/random_uniform_initializer +- from: /tf/initializers/TruncatedNormal + to: /tf/keras/initializers/TruncatedNormal +- from: /tf/initializers/VarianceScaling + to: /tf/keras/initializers/VarianceScaling +- from: /tf/initializers/Zeros + to: /tf/zeros_initializer +- from: /tf/initializers/constant + to: /tf/constant_initializer +- from: /tf/initializers/deserialize + to: /tf/keras/initializers/deserialize +- from: /tf/initializers/get + to: /tf/keras/initializers/get +- from: /tf/initializers/glorot_normal + to: /tf/keras/initializers/GlorotNormal +- from: /tf/initializers/glorot_uniform + to: /tf/keras/initializers/GlorotUniform +- from: /tf/initializers/he_normal + to: /tf/keras/initializers/he_normal +- from: /tf/initializers/he_uniform + to: /tf/keras/initializers/he_uniform +- from: /tf/initializers/identity + to: /tf/keras/initializers/Identity +- from: /tf/initializers/lecun_normal + to: /tf/keras/initializers/lecun_normal +- from: /tf/initializers/lecun_uniform + to: /tf/keras/initializers/lecun_uniform +- from: /tf/initializers/ones + to: /tf/ones_initializer +- from: /tf/initializers/orthogonal + to: /tf/keras/initializers/Orthogonal +- from: /tf/initializers/serialize + to: /tf/keras/initializers/serialize +- from: /tf/initializers/zeros + to: /tf/zeros_initializer +- from: /tf/keras/applications/densenet/DenseNet121 + to: /tf/keras/applications/DenseNet121 +- from: /tf/keras/applications/densenet/DenseNet169 + to: /tf/keras/applications/DenseNet169 +- from: /tf/keras/applications/densenet/DenseNet201 + to: /tf/keras/applications/DenseNet201 +- from: /tf/keras/applications/inception_resnet_v2/InceptionResNetV2 + to: /tf/keras/applications/InceptionResNetV2 +- from: /tf/keras/applications/inception_v3/InceptionV3 + to: /tf/keras/applications/InceptionV3 +- from: /tf/keras/applications/mobilenet/MobileNet + to: /tf/keras/applications/MobileNet +- from: /tf/keras/applications/mobilenet_v2/MobileNetV2 + to: /tf/keras/applications/MobileNetV2 +- from: /tf/keras/applications/nasnet/NASNetLarge + to: /tf/keras/applications/NASNetLarge +- from: /tf/keras/applications/nasnet/NASNetMobile + to: /tf/keras/applications/NASNetMobile +- from: /tf/keras/applications/resnet/ResNet101 + to: /tf/keras/applications/ResNet101 +- from: /tf/keras/applications/resnet/ResNet152 + to: /tf/keras/applications/ResNet152 +- from: /tf/keras/applications/resnet/ResNet50 + to: /tf/keras/applications/ResNet50 +- from: /tf/keras/applications/resnet50/ResNet50 + to: /tf/keras/applications/ResNet50 +- from: /tf/keras/applications/resnet50/decode_predictions + to: /tf/keras/applications/resnet/decode_predictions +- from: /tf/keras/applications/resnet50/preprocess_input + to: /tf/keras/applications/resnet/preprocess_input +- from: /tf/keras/applications/resnet_v2/ResNet101V2 + to: /tf/keras/applications/ResNet101V2 +- from: /tf/keras/applications/resnet_v2/ResNet152V2 + to: /tf/keras/applications/ResNet152V2 +- from: /tf/keras/applications/resnet_v2/ResNet50V2 + to: /tf/keras/applications/ResNet50V2 +- from: /tf/keras/applications/vgg16/VGG16 + to: /tf/keras/applications/VGG16 +- from: /tf/keras/applications/vgg19/VGG19 + to: /tf/keras/applications/VGG19 +- from: /tf/keras/applications/xception/Xception + to: /tf/keras/applications/Xception +- from: /tf/keras/constraints/max_norm + to: /tf/keras/constraints/MaxNorm +- from: /tf/keras/constraints/min_max_norm + to: /tf/keras/constraints/MinMaxNorm +- from: /tf/keras/constraints/non_neg + to: /tf/keras/constraints/NonNeg +- from: 
/tf/keras/constraints/radial_constraint + to: /tf/keras/constraints/RadialConstraint +- from: /tf/keras/constraints/unit_norm + to: /tf/keras/constraints/UnitNorm +- from: /tf/keras/initializers/Constant + to: /tf/constant_initializer +- from: /tf/keras/initializers/Ones + to: /tf/ones_initializer +- from: /tf/keras/initializers/RandomNormal + to: /tf/random_normal_initializer +- from: /tf/keras/initializers/RandomUniform + to: /tf/random_uniform_initializer +- from: /tf/keras/initializers/Zeros + to: /tf/zeros_initializer +- from: /tf/keras/initializers/constant + to: /tf/constant_initializer +- from: /tf/keras/initializers/glorot_normal + to: /tf/keras/initializers/GlorotNormal +- from: /tf/keras/initializers/glorot_uniform + to: /tf/keras/initializers/GlorotUniform +- from: /tf/keras/initializers/identity + to: /tf/keras/initializers/Identity +- from: /tf/keras/initializers/ones + to: /tf/ones_initializer +- from: /tf/keras/initializers/orthogonal + to: /tf/keras/initializers/Orthogonal +- from: /tf/keras/initializers/zeros + to: /tf/zeros_initializer +- from: /tf/keras/layers/AvgPool1D + to: /tf/keras/layers/AveragePooling1D +- from: /tf/keras/layers/AvgPool2D + to: /tf/keras/layers/AveragePooling2D +- from: /tf/keras/layers/AvgPool3D + to: /tf/keras/layers/AveragePooling3D +- from: /tf/keras/layers/Convolution1D + to: /tf/keras/layers/Conv1D +- from: /tf/keras/layers/Convolution2D + to: /tf/keras/layers/Conv2D +- from: /tf/keras/layers/Convolution2DTranspose + to: /tf/keras/layers/Conv2DTranspose +- from: /tf/keras/layers/Convolution3D + to: /tf/keras/layers/Conv3D +- from: /tf/keras/layers/Convolution3DTranspose + to: /tf/keras/layers/Conv3DTranspose +- from: /tf/keras/layers/GlobalAvgPool1D + to: /tf/keras/layers/GlobalAveragePooling1D +- from: /tf/keras/layers/GlobalAvgPool2D + to: /tf/keras/layers/GlobalAveragePooling2D +- from: /tf/keras/layers/GlobalAvgPool3D + to: /tf/keras/layers/GlobalAveragePooling3D +- from: /tf/keras/layers/GlobalMaxPooling1D + to: /tf/keras/layers/GlobalMaxPool1D +- from: /tf/keras/layers/GlobalMaxPooling2D + to: /tf/keras/layers/GlobalMaxPool2D +- from: /tf/keras/layers/GlobalMaxPooling3D + to: /tf/keras/layers/GlobalMaxPool3D +- from: /tf/keras/layers/Input + to: /tf/keras/Input +- from: /tf/keras/layers/MaxPooling1D + to: /tf/keras/layers/MaxPool1D +- from: /tf/keras/layers/MaxPooling2D + to: /tf/keras/layers/MaxPool2D +- from: /tf/keras/layers/MaxPooling3D + to: /tf/keras/layers/MaxPool3D +- from: /tf/keras/layers/SeparableConvolution1D + to: /tf/keras/layers/SeparableConv1D +- from: /tf/keras/layers/SeparableConvolution2D + to: /tf/keras/layers/SeparableConv2D +- from: /tf/keras/losses/kld + to: /tf/keras/losses/KLD +- from: /tf/keras/losses/kullback_leibler_divergence + to: /tf/keras/losses/KLD +- from: /tf/keras/losses/mae + to: /tf/keras/losses/MAE +- from: /tf/keras/losses/mape + to: /tf/keras/losses/MAPE +- from: /tf/keras/losses/mean_absolute_error + to: /tf/keras/losses/MAE +- from: /tf/keras/losses/mean_absolute_percentage_error + to: /tf/keras/losses/MAPE +- from: /tf/keras/losses/mean_squared_error + to: /tf/keras/losses/MSE +- from: /tf/keras/losses/mean_squared_logarithmic_error + to: /tf/keras/losses/MSLE +- from: /tf/keras/losses/mse + to: /tf/keras/losses/MSE +- from: /tf/keras/losses/msle + to: /tf/keras/losses/MSLE +- from: /tf/keras/metrics/KLD + to: /tf/keras/losses/KLD +- from: /tf/keras/metrics/MAE + to: /tf/keras/losses/MAE +- from: /tf/keras/metrics/MAPE + to: /tf/keras/losses/MAPE +- from: /tf/keras/metrics/MSE + to: 
/tf/keras/losses/MSE +- from: /tf/keras/metrics/MSLE + to: /tf/keras/losses/MSLE +- from: /tf/keras/metrics/binary_crossentropy + to: /tf/keras/losses/binary_crossentropy +- from: /tf/keras/metrics/categorical_crossentropy + to: /tf/keras/losses/categorical_crossentropy +- from: /tf/keras/metrics/hinge + to: /tf/keras/losses/hinge +- from: /tf/keras/metrics/kld + to: /tf/keras/losses/KLD +- from: /tf/keras/metrics/kullback_leibler_divergence + to: /tf/keras/losses/KLD +- from: /tf/keras/metrics/mae + to: /tf/keras/losses/MAE +- from: /tf/keras/metrics/mape + to: /tf/keras/losses/MAPE +- from: /tf/keras/metrics/mean_absolute_error + to: /tf/keras/losses/MAE +- from: /tf/keras/metrics/mean_absolute_percentage_error + to: /tf/keras/losses/MAPE +- from: /tf/keras/metrics/mean_squared_error + to: /tf/keras/losses/MSE +- from: /tf/keras/metrics/mean_squared_logarithmic_error + to: /tf/keras/losses/MSLE +- from: /tf/keras/metrics/mse + to: /tf/keras/losses/MSE +- from: /tf/keras/metrics/msle + to: /tf/keras/losses/MSLE +- from: /tf/keras/metrics/poisson + to: /tf/keras/losses/poisson +- from: /tf/keras/metrics/sparse_categorical_crossentropy + to: /tf/keras/losses/sparse_categorical_crossentropy +- from: /tf/keras/metrics/squared_hinge + to: /tf/keras/losses/squared_hinge +- from: /tf/keras/models/Model + to: /tf/keras/Model +- from: /tf/keras/models/Sequential + to: /tf/keras/Sequential +- from: /tf/less + to: /tf/math/less +- from: /tf/less_equal + to: /tf/math/less_equal +- from: /tf/linalg/einsum + to: /tf/einsum +- from: /tf/linalg/eye + to: /tf/eye +- from: /tf/linalg/l2_normalize + to: /tf/math/l2_normalize +- from: /tf/linalg/norm + to: /tf/norm +- from: /tf/linalg/tensordot + to: /tf/tensordot +- from: /tf/logical_and + to: /tf/math/logical_and +- from: /tf/logical_not + to: /tf/math/logical_not +- from: /tf/logical_or + to: /tf/math/logical_or +- from: /tf/losses + to: /tf/keras/losses +- from: /tf/losses/BinaryCrossentropy + to: /tf/keras/losses/BinaryCrossentropy +- from: /tf/losses/CategoricalCrossentropy + to: /tf/keras/losses/CategoricalCrossentropy +- from: /tf/losses/CategoricalHinge + to: /tf/keras/losses/CategoricalHinge +- from: /tf/losses/CosineSimilarity + to: /tf/keras/losses/CosineSimilarity +- from: /tf/losses/Hinge + to: /tf/keras/losses/Hinge +- from: /tf/losses/Huber + to: /tf/keras/losses/Huber +- from: /tf/losses/KLD + to: /tf/keras/losses/KLD +- from: /tf/losses/KLDivergence + to: /tf/keras/losses/KLDivergence +- from: /tf/losses/LogCosh + to: /tf/keras/losses/LogCosh +- from: /tf/losses/Loss + to: /tf/keras/losses/Loss +- from: /tf/losses/MAE + to: /tf/keras/losses/MAE +- from: /tf/losses/MAPE + to: /tf/keras/losses/MAPE +- from: /tf/losses/MSE + to: /tf/keras/losses/MSE +- from: /tf/losses/MSLE + to: /tf/keras/losses/MSLE +- from: /tf/losses/MeanAbsoluteError + to: /tf/keras/losses/MeanAbsoluteError +- from: /tf/losses/MeanAbsolutePercentageError + to: /tf/keras/losses/MeanAbsolutePercentageError +- from: /tf/losses/MeanSquaredError + to: /tf/keras/losses/MeanSquaredError +- from: /tf/losses/MeanSquaredLogarithmicError + to: /tf/keras/losses/MeanSquaredLogarithmicError +- from: /tf/losses/Poisson + to: /tf/keras/losses/Poisson +- from: /tf/losses/Reduction + to: /tf/keras/losses/Reduction +- from: /tf/losses/SparseCategoricalCrossentropy + to: /tf/keras/losses/SparseCategoricalCrossentropy +- from: /tf/losses/SquaredHinge + to: /tf/keras/losses/SquaredHinge +- from: /tf/losses/binary_crossentropy + to: /tf/keras/losses/binary_crossentropy +- from: 
/tf/losses/categorical_crossentropy + to: /tf/keras/losses/categorical_crossentropy +- from: /tf/losses/categorical_hinge + to: /tf/keras/losses/categorical_hinge +- from: /tf/losses/cosine_similarity + to: /tf/keras/losses/cosine_similarity +- from: /tf/losses/deserialize + to: /tf/keras/losses/deserialize +- from: /tf/losses/get + to: /tf/keras/losses/get +- from: /tf/losses/hinge + to: /tf/keras/losses/hinge +- from: /tf/losses/kld + to: /tf/keras/losses/KLD +- from: /tf/losses/kullback_leibler_divergence + to: /tf/keras/losses/KLD +- from: /tf/losses/logcosh + to: /tf/keras/losses/logcosh +- from: /tf/losses/mae + to: /tf/keras/losses/MAE +- from: /tf/losses/mape + to: /tf/keras/losses/MAPE +- from: /tf/losses/mean_absolute_error + to: /tf/keras/losses/MAE +- from: /tf/losses/mean_absolute_percentage_error + to: /tf/keras/losses/MAPE +- from: /tf/losses/mean_squared_error + to: /tf/keras/losses/MSE +- from: /tf/losses/mean_squared_logarithmic_error + to: /tf/keras/losses/MSLE +- from: /tf/losses/mse + to: /tf/keras/losses/MSE +- from: /tf/losses/msle + to: /tf/keras/losses/MSLE +- from: /tf/losses/poisson + to: /tf/keras/losses/poisson +- from: /tf/losses/serialize + to: /tf/keras/losses/serialize +- from: /tf/losses/sparse_categorical_crossentropy + to: /tf/keras/losses/sparse_categorical_crossentropy +- from: /tf/losses/squared_hinge + to: /tf/keras/losses/squared_hinge +- from: /tf/math/log_softmax + to: /tf/nn/log_softmax +- from: /tf/math/mod + to: /tf/math/floormod +- from: /tf/math/reduce_all + to: /tf/reduce_all +- from: /tf/math/softmax + to: /tf/nn/softmax +- from: /tf/math/softsign + to: /tf/nn/softsign +- from: /tf/matmul + to: /tf/linalg/matmul +- from: /tf/matrix_square_root + to: /tf/linalg/sqrtm +- from: /tf/maximum + to: /tf/math/maximum +- from: /tf/metrics + to: /tf/keras/metrics +- from: /tf/metrics/AUC + to: /tf/keras/metrics/AUC +- from: /tf/metrics/Accuracy + to: /tf/keras/metrics/Accuracy +- from: /tf/metrics/BinaryAccuracy + to: /tf/keras/metrics/BinaryAccuracy +- from: /tf/metrics/BinaryCrossentropy + to: /tf/keras/metrics/BinaryCrossentropy +- from: /tf/metrics/CategoricalAccuracy + to: /tf/keras/metrics/CategoricalAccuracy +- from: /tf/metrics/CategoricalCrossentropy + to: /tf/keras/metrics/CategoricalCrossentropy +- from: /tf/metrics/CategoricalHinge + to: /tf/keras/metrics/CategoricalHinge +- from: /tf/metrics/CosineSimilarity + to: /tf/keras/metrics/CosineSimilarity +- from: /tf/metrics/FalseNegatives + to: /tf/keras/metrics/FalseNegatives +- from: /tf/metrics/FalsePositives + to: /tf/keras/metrics/FalsePositives +- from: /tf/metrics/Hinge + to: /tf/keras/metrics/Hinge +- from: /tf/metrics/KLD + to: /tf/keras/losses/KLD +- from: /tf/metrics/KLDivergence + to: /tf/keras/metrics/KLDivergence +- from: /tf/metrics/LogCoshError + to: /tf/keras/metrics/LogCoshError +- from: /tf/metrics/MAE + to: /tf/keras/losses/MAE +- from: /tf/metrics/MAPE + to: /tf/keras/losses/MAPE +- from: /tf/metrics/MSE + to: /tf/keras/losses/MSE +- from: /tf/metrics/MSLE + to: /tf/keras/losses/MSLE +- from: /tf/metrics/Mean + to: /tf/keras/metrics/Mean +- from: /tf/metrics/MeanAbsoluteError + to: /tf/keras/metrics/MeanAbsoluteError +- from: /tf/metrics/MeanAbsolutePercentageError + to: /tf/keras/metrics/MeanAbsolutePercentageError +- from: /tf/metrics/MeanIoU + to: /tf/keras/metrics/MeanIoU +- from: /tf/metrics/MeanRelativeError + to: /tf/keras/metrics/MeanRelativeError +- from: /tf/metrics/MeanSquaredError + to: /tf/keras/metrics/MeanSquaredError +- from: 
/tf/metrics/MeanSquaredLogarithmicError + to: /tf/keras/metrics/MeanSquaredLogarithmicError +- from: /tf/metrics/MeanTensor + to: /tf/keras/metrics/MeanTensor +- from: /tf/metrics/Metric + to: /tf/keras/metrics/Metric +- from: /tf/metrics/Poisson + to: /tf/keras/metrics/Poisson +- from: /tf/metrics/Precision + to: /tf/keras/metrics/Precision +- from: /tf/metrics/PrecisionAtRecall + to: /tf/keras/metrics/PrecisionAtRecall +- from: /tf/metrics/Recall + to: /tf/keras/metrics/Recall +- from: /tf/metrics/RecallAtPrecision + to: /tf/keras/metrics/RecallAtPrecision +- from: /tf/metrics/RootMeanSquaredError + to: /tf/keras/metrics/RootMeanSquaredError +- from: /tf/metrics/SensitivityAtSpecificity + to: /tf/keras/metrics/SensitivityAtSpecificity +- from: /tf/metrics/SparseCategoricalAccuracy + to: /tf/keras/metrics/SparseCategoricalAccuracy +- from: /tf/metrics/SparseCategoricalCrossentropy + to: /tf/keras/metrics/SparseCategoricalCrossentropy +- from: /tf/metrics/SparseTopKCategoricalAccuracy + to: /tf/keras/metrics/SparseTopKCategoricalAccuracy +- from: /tf/metrics/SpecificityAtSensitivity + to: /tf/keras/metrics/SpecificityAtSensitivity +- from: /tf/metrics/SquaredHinge + to: /tf/keras/metrics/SquaredHinge +- from: /tf/metrics/Sum + to: /tf/keras/metrics/Sum +- from: /tf/metrics/TopKCategoricalAccuracy + to: /tf/keras/metrics/TopKCategoricalAccuracy +- from: /tf/metrics/TrueNegatives + to: /tf/keras/metrics/TrueNegatives +- from: /tf/metrics/TruePositives + to: /tf/keras/metrics/TruePositives +- from: /tf/metrics/binary_accuracy + to: /tf/keras/metrics/binary_accuracy +- from: /tf/metrics/binary_crossentropy + to: /tf/keras/losses/binary_crossentropy +- from: /tf/metrics/categorical_accuracy + to: /tf/keras/metrics/categorical_accuracy +- from: /tf/metrics/categorical_crossentropy + to: /tf/keras/losses/categorical_crossentropy +- from: /tf/metrics/deserialize + to: /tf/keras/metrics/deserialize +- from: /tf/metrics/get + to: /tf/keras/metrics/get +- from: /tf/metrics/hinge + to: /tf/keras/losses/hinge +- from: /tf/metrics/kld + to: /tf/keras/losses/KLD +- from: /tf/metrics/kullback_leibler_divergence + to: /tf/keras/losses/KLD +- from: /tf/metrics/mae + to: /tf/keras/losses/MAE +- from: /tf/metrics/mape + to: /tf/keras/losses/MAPE +- from: /tf/metrics/mean_absolute_error + to: /tf/keras/losses/MAE +- from: /tf/metrics/mean_absolute_percentage_error + to: /tf/keras/losses/MAPE +- from: /tf/metrics/mean_squared_error + to: /tf/keras/losses/MSE +- from: /tf/metrics/mean_squared_logarithmic_error + to: /tf/keras/losses/MSLE +- from: /tf/metrics/mse + to: /tf/keras/losses/MSE +- from: /tf/metrics/msle + to: /tf/keras/losses/MSLE +- from: /tf/metrics/poisson + to: /tf/keras/losses/poisson +- from: /tf/metrics/serialize + to: /tf/keras/metrics/serialize +- from: /tf/metrics/sparse_categorical_accuracy + to: /tf/keras/metrics/sparse_categorical_accuracy +- from: /tf/metrics/sparse_categorical_crossentropy + to: /tf/keras/losses/sparse_categorical_crossentropy +- from: /tf/metrics/sparse_top_k_categorical_accuracy + to: /tf/keras/metrics/sparse_top_k_categorical_accuracy +- from: /tf/metrics/squared_hinge + to: /tf/keras/losses/squared_hinge +- from: /tf/metrics/top_k_categorical_accuracy + to: /tf/keras/metrics/top_k_categorical_accuracy +- from: /tf/minimum + to: /tf/math/minimum +- from: /tf/multiply + to: /tf/math/multiply +- from: /tf/negative + to: /tf/math/negative +- from: /tf/nn/all_candidate_sampler + to: /tf/random/all_candidate_sampler +- from: /tf/nn/fixed_unigram_candidate_sampler + to: 
/tf/random/fixed_unigram_candidate_sampler +- from: /tf/nn/in_top_k + to: /tf/math/in_top_k +- from: /tf/nn/l2_normalize + to: /tf/math/l2_normalize +- from: /tf/nn/learned_unigram_candidate_sampler + to: /tf/random/learned_unigram_candidate_sampler +- from: /tf/nn/lrn + to: /tf/nn/local_response_normalization +- from: /tf/nn/sigmoid + to: /tf/math/sigmoid +- from: /tf/nn/softplus + to: /tf/math/softplus +- from: /tf/nn/space_to_batch + to: /tf/space_to_batch +- from: /tf/nn/tanh + to: /tf/math/tanh +- from: /tf/nn/top_k + to: /tf/math/top_k +- from: /tf/nn/zero_fraction + to: /tf/math/zero_fraction +- from: /tf/not_equal + to: /tf/math/not_equal +- from: /tf/optimizers + to: /tf/keras/optimizers +- from: /tf/optimizers/Adadelta + to: /tf/keras/optimizers/Adadelta +- from: /tf/optimizers/Adagrad + to: /tf/keras/optimizers/Adagrad +- from: /tf/optimizers/Adam + to: /tf/keras/optimizers/Adam +- from: /tf/optimizers/Adamax + to: /tf/keras/optimizers/Adamax +- from: /tf/optimizers/Ftrl + to: /tf/keras/optimizers/Ftrl +- from: /tf/optimizers/Nadam + to: /tf/keras/optimizers/Nadam +- from: /tf/optimizers/Optimizer + to: /tf/keras/optimizers/Optimizer +- from: /tf/optimizers/RMSprop + to: /tf/keras/optimizers/RMSprop +- from: /tf/optimizers/SGD + to: /tf/keras/optimizers/SGD +- from: /tf/optimizers/deserialize + to: /tf/keras/optimizers/deserialize +- from: /tf/optimizers/get + to: /tf/keras/optimizers/get +- from: /tf/optimizers/schedules + to: /tf/keras/optimizers/schedules +- from: /tf/optimizers/schedules/ExponentialDecay + to: /tf/keras/optimizers/schedules/ExponentialDecay +- from: /tf/optimizers/schedules/InverseTimeDecay + to: /tf/keras/optimizers/schedules/InverseTimeDecay +- from: /tf/optimizers/schedules/LearningRateSchedule + to: /tf/keras/optimizers/schedules/LearningRateSchedule +- from: /tf/optimizers/schedules/PiecewiseConstantDecay + to: /tf/keras/optimizers/schedules/PiecewiseConstantDecay +- from: /tf/optimizers/schedules/PolynomialDecay + to: /tf/keras/optimizers/schedules/PolynomialDecay +- from: /tf/optimizers/schedules/deserialize + to: /tf/keras/optimizers/schedules/deserialize +- from: /tf/optimizers/schedules/serialize + to: /tf/keras/optimizers/schedules/serialize +- from: /tf/optimizers/serialize + to: /tf/keras/optimizers/serialize +- from: /tf/pow + to: /tf/math/pow +- from: /tf/random/experimental/Algorithm + to: /tf/random/Algorithm +- from: /tf/random/experimental/Generator + to: /tf/random/Generator +- from: /tf/random/experimental/create_rng_state + to: /tf/random/create_rng_state +- from: /tf/random/experimental/get_global_generator + to: /tf/random/get_global_generator +- from: /tf/random/experimental/set_global_generator + to: /tf/random/set_global_generator +- from: /tf/reduce_any + to: /tf/math/reduce_any +- from: /tf/reduce_logsumexp + to: /tf/math/reduce_logsumexp +- from: /tf/reduce_max + to: /tf/math/reduce_max +- from: /tf/reduce_mean + to: /tf/math/reduce_mean +- from: /tf/reduce_min + to: /tf/math/reduce_min +- from: /tf/reduce_prod + to: /tf/math/reduce_prod +- from: /tf/reduce_sum + to: /tf/math/reduce_sum +- from: /tf/round + to: /tf/math/round +- from: /tf/saturate_cast + to: /tf/dtypes/saturate_cast +- from: /tf/scalar_mul + to: /tf/math/scalar_mul +- from: /tf/sigmoid + to: /tf/math/sigmoid +- from: /tf/sign + to: /tf/math/sign +- from: /tf/sin + to: /tf/math/sin +- from: /tf/sinh + to: /tf/math/sinh +- from: /tf/sqrt + to: /tf/math/sqrt +- from: /tf/square + to: /tf/math/square +- from: /tf/subtract + to: /tf/math/subtract +- from: /tf/tan + 
to: /tf/math/tan +- from: /tf/tanh + to: /tf/math/tanh +- from: /tf/train/experimental/DynamicLossScale + to: /tf/mixed_precision/experimental/DynamicLossScale +- from: /tf/train/experimental/FixedLossScale + to: /tf/mixed_precision/experimental/FixedLossScale +- from: /tf/train/experimental/LossScale + to: /tf/mixed_precision/experimental/LossScale +- from: /tf/truediv + to: /tf/math/truediv +- from: /tf_overview + to: /tf diff --git a/site/en/api_docs/python/tf/_toc.yaml b/site/en/api_docs/python/tf/_toc.yaml new file mode 100644 index 00000000000..a508f461df1 --- /dev/null +++ b/site/en/api_docs/python/tf/_toc.yaml @@ -0,0 +1,8530 @@ +toc: +- title: tf + section: + - title: Overview + path: /tf_overview + - title: AggregationMethod + path: /tf/AggregationMethod + - title: CriticalSection + path: /tf/CriticalSection + - title: DeviceSpec + path: /tf/DeviceSpec + - title: GradientTape + path: /tf/GradientTape + - title: Graph + path: /tf/Graph + - title: IndexedSlices + path: /tf/IndexedSlices + - title: IndexedSlicesSpec + path: /tf/IndexedSlicesSpec + - title: Module + path: /tf/Module + - title: Operation + path: /tf/Operation + - title: OptionalSpec + path: /tf/OptionalSpec + - title: RaggedTensor + path: /tf/RaggedTensor + - title: RaggedTensorSpec + path: /tf/RaggedTensorSpec + - title: RegisterGradient + path: /tf/RegisterGradient + - title: SparseTensorSpec + path: /tf/SparseTensorSpec + - title: Tensor + path: /tf/Tensor + - title: TensorArray + path: /tf/TensorArray + - title: TensorArraySpec + path: /tf/TensorArraySpec + - title: TensorShape + path: /tf/TensorShape + - title: TensorSpec + path: /tf/TensorSpec + - title: TypeSpec + path: /tf/TypeSpec + - title: UnconnectedGradients + path: /tf/UnconnectedGradients + - title: Variable + path: /tf/Variable + - title: Variable.SaveSliceInfo + path: /tf/Variable/SaveSliceInfo + - title: VariableAggregation + path: /tf/VariableAggregation + - title: VariableSynchronization + path: /tf/VariableSynchronization + - title: argsort + path: /tf/argsort + - title: batch_to_space + path: /tf/batch_to_space + - title: bitcast + path: /tf/bitcast + - title: boolean_mask + path: /tf/boolean_mask + - title: broadcast_dynamic_shape + path: /tf/broadcast_dynamic_shape + - title: broadcast_static_shape + path: /tf/broadcast_static_shape + - title: broadcast_to + path: /tf/broadcast_to + - title: case + path: /tf/case + - title: cast + path: /tf/cast + - title: clip_by_global_norm + path: /tf/clip_by_global_norm + - title: clip_by_norm + path: /tf/clip_by_norm + - title: clip_by_value + path: /tf/clip_by_value + - title: concat + path: /tf/concat + - title: cond + path: /tf/cond + - title: constant + path: /tf/constant + - title: constant_initializer + path: /tf/constant_initializer + - title: control_dependencies + path: /tf/control_dependencies + - title: convert_to_tensor + path: /tf/convert_to_tensor + - title: custom_gradient + path: /tf/custom_gradient + - title: device + path: /tf/device + - title: dynamic_partition + path: /tf/dynamic_partition + - title: dynamic_stitch + path: /tf/dynamic_stitch + - title: edit_distance + path: /tf/edit_distance + - title: einsum + path: /tf/einsum + - title: ensure_shape + path: /tf/ensure_shape + - title: executing_eagerly + path: /tf/executing_eagerly + - title: expand_dims + path: /tf/expand_dims + - title: extract_volume_patches + path: /tf/extract_volume_patches + - title: eye + path: /tf/eye + - title: fill + path: /tf/fill + - title: fingerprint + path: /tf/fingerprint + - title: foldl + path: 
/tf/foldl + - title: foldr + path: /tf/foldr + - title: function + path: /tf/function + - title: gather + path: /tf/gather + - title: gather_nd + path: /tf/gather_nd + - title: get_logger + path: /tf/get_logger + - title: get_static_value + path: /tf/get_static_value + - title: grad_pass_through + path: /tf/grad_pass_through + - title: gradients + path: /tf/gradients + - title: group + path: /tf/group + - title: guarantee_const + path: /tf/guarantee_const + - title: hessians + path: /tf/hessians + - title: histogram_fixed_width + path: /tf/histogram_fixed_width + - title: histogram_fixed_width_bins + path: /tf/histogram_fixed_width_bins + - title: identity + path: /tf/identity + - title: identity_n + path: /tf/identity_n + - title: init_scope + path: /tf/init_scope + - title: is_tensor + path: /tf/is_tensor + - title: linspace + path: /tf/linspace + - title: load_library + path: /tf/load_library + - title: load_op_library + path: /tf/load_op_library + - title: make_ndarray + path: /tf/make_ndarray + - title: make_tensor_proto + path: /tf/make_tensor_proto + - title: map_fn + path: /tf/map_fn + - title: meshgrid + path: /tf/meshgrid + - title: name_scope + path: /tf/name_scope + - title: no_gradient + path: /tf/no_gradient + - title: no_op + path: /tf/no_op + - title: nondifferentiable_batch_function + path: /tf/nondifferentiable_batch_function + - title: norm + path: /tf/norm + - title: numpy_function + path: /tf/numpy_function + - title: one_hot + path: /tf/one_hot + - title: ones + path: /tf/ones + - title: ones_initializer + path: /tf/ones_initializer + - title: ones_like + path: /tf/ones_like + - title: pad + path: /tf/pad + - title: parallel_stack + path: /tf/parallel_stack + - title: print + path: /tf/print + - title: py_function + path: /tf/py_function + - title: random_normal_initializer + path: /tf/random_normal_initializer + - title: random_uniform_initializer + path: /tf/random_uniform_initializer + - title: range + path: /tf/range + - title: rank + path: /tf/rank + - title: realdiv + path: /tf/realdiv + - title: recompute_grad + path: /tf/recompute_grad + - title: reduce_all + path: /tf/reduce_all + - title: register_tensor_conversion_function + path: /tf/register_tensor_conversion_function + - title: repeat + path: /tf/repeat + - title: required_space_to_batch_paddings + path: /tf/required_space_to_batch_paddings + - title: reshape + path: /tf/reshape + - title: reverse + path: /tf/reverse + - title: reverse_sequence + path: /tf/reverse_sequence + - title: roll + path: /tf/roll + - title: scan + path: /tf/scan + - title: scatter_nd + path: /tf/scatter_nd + - title: searchsorted + path: /tf/searchsorted + - title: sequence_mask + path: /tf/sequence_mask + - title: shape + path: /tf/shape + - title: shape_n + path: /tf/shape_n + - title: size + path: /tf/size + - title: slice + path: /tf/slice + - title: sort + path: /tf/sort + - title: space_to_batch + path: /tf/space_to_batch + - title: space_to_batch_nd + path: /tf/space_to_batch_nd + - title: split + path: /tf/split + - title: squeeze + path: /tf/squeeze + - title: stack + path: /tf/stack + - title: stop_gradient + path: /tf/stop_gradient + - title: strided_slice + path: /tf/strided_slice + - title: switch_case + path: /tf/switch_case + - title: tensor_scatter_nd_add + path: /tf/tensor_scatter_nd_add + - title: tensor_scatter_nd_sub + path: /tf/tensor_scatter_nd_sub + - title: tensor_scatter_nd_update + path: /tf/tensor_scatter_nd_update + - title: tensordot + path: /tf/tensordot + - title: tile + path: /tf/tile + - title: 
timestamp + path: /tf/timestamp + - title: transpose + path: /tf/transpose + - title: truncatediv + path: /tf/truncatediv + - title: truncatemod + path: /tf/truncatemod + - title: tuple + path: /tf/tuple + - title: unique + path: /tf/unique + - title: unique_with_counts + path: /tf/unique_with_counts + - title: unravel_index + path: /tf/unravel_index + - title: unstack + path: /tf/unstack + - title: variable_creator_scope + path: /tf/variable_creator_scope + - title: vectorized_map + path: /tf/vectorized_map + - title: where + path: /tf/where + - title: while_loop + path: /tf/while_loop + - title: zeros + path: /tf/zeros + - title: zeros_initializer + path: /tf/zeros_initializer + - title: zeros_like + path: /tf/zeros_like +- title: tf.audio + section: + - title: Overview + path: /tf/audio + - title: decode_wav + path: /tf/audio/decode_wav + - title: encode_wav + path: /tf/audio/encode_wav +- title: tf.autodiff + section: + - title: Overview + path: /tf/autodiff + - title: ForwardAccumulator + path: /tf/autodiff/ForwardAccumulator +- title: tf.autograph + section: + - title: Overview + path: /tf/autograph + - title: set_verbosity + path: /tf/autograph/set_verbosity + - title: to_code + path: /tf/autograph/to_code + - title: to_graph + path: /tf/autograph/to_graph + - title: trace + path: /tf/autograph/trace + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/autograph/experimental + - title: Feature + path: /tf/autograph/experimental/Feature + - title: do_not_convert + path: /tf/autograph/experimental/do_not_convert + - title: set_loop_options + path: /tf/autograph/experimental/set_loop_options +- title: tf.bitwise + section: + - title: Overview + path: /tf/bitwise + - title: bitwise_and + path: /tf/bitwise/bitwise_and + - title: bitwise_or + path: /tf/bitwise/bitwise_or + - title: bitwise_xor + path: /tf/bitwise/bitwise_xor + - title: invert + path: /tf/bitwise/invert + - title: left_shift + path: /tf/bitwise/left_shift + - title: right_shift + path: /tf/bitwise/right_shift +- title: tf.compat + section: + - title: Overview + path: /tf/compat + - title: as_bytes + path: /tf/compat/as_bytes + - title: as_str + path: /tf/compat/as_str + - title: as_str_any + path: /tf/compat/as_str_any + - title: as_text + path: /tf/compat/as_text + - title: dimension_at_index + path: /tf/compat/dimension_at_index + - title: dimension_value + path: /tf/compat/dimension_value + - title: forward_compatibility_horizon + path: /tf/compat/forward_compatibility_horizon + - title: forward_compatible + path: /tf/compat/forward_compatible + - title: path_to_str + path: /tf/compat/path_to_str + - title: v1 + section: + - title: Overview + path: /tf/compat/v1 + - title: AttrValue + path: /tf/compat/v1/AttrValue + - title: AttrValue.ListValue + path: /tf/compat/v1/AttrValue/ListValue + - title: ConditionalAccumulator + path: /tf/compat/v1/ConditionalAccumulator + - title: ConditionalAccumulatorBase + path: /tf/compat/v1/ConditionalAccumulatorBase + - title: ConfigProto + path: /tf/compat/v1/ConfigProto + - title: ConfigProto.DeviceCountEntry + path: /tf/compat/v1/ConfigProto/DeviceCountEntry + - title: ConfigProto.Experimental + path: /tf/compat/v1/ConfigProto/Experimental + - title: DeviceSpec + path: /tf/compat/v1/DeviceSpec + - title: Dimension + path: /tf/compat/v1/Dimension + - title: Event + path: /tf/compat/v1/Event + - title: FixedLengthRecordReader + path: /tf/compat/v1/FixedLengthRecordReader + - title: GPUOptions + path: /tf/compat/v1/GPUOptions + - title: 
GPUOptions.Experimental + path: /tf/compat/v1/GPUOptions/Experimental + - title: GPUOptions.Experimental.VirtualDevices + path: /tf/compat/v1/GPUOptions/Experimental/VirtualDevices + - title: GraphDef + path: /tf/compat/v1/GraphDef + - title: GraphKeys + path: /tf/compat/v1/GraphKeys + - title: GraphOptions + path: /tf/compat/v1/GraphOptions + - title: HistogramProto + path: /tf/compat/v1/HistogramProto + - title: IdentityReader + path: /tf/compat/v1/IdentityReader + - title: InteractiveSession + path: /tf/compat/v1/InteractiveSession + - title: LMDBReader + path: /tf/compat/v1/LMDBReader + - title: LogMessage + path: /tf/compat/v1/LogMessage + - title: MetaGraphDef + path: /tf/compat/v1/MetaGraphDef + - title: MetaGraphDef.CollectionDefEntry + path: /tf/compat/v1/MetaGraphDef/CollectionDefEntry + - title: MetaGraphDef.MetaInfoDef + path: /tf/compat/v1/MetaGraphDef/MetaInfoDef + - title: MetaGraphDef.MetaInfoDef.FunctionAliasesEntry + path: /tf/compat/v1/MetaGraphDef/MetaInfoDef/FunctionAliasesEntry + - title: MetaGraphDef.SignatureDefEntry + path: /tf/compat/v1/MetaGraphDef/SignatureDefEntry + - title: NameAttrList + path: /tf/compat/v1/NameAttrList + - title: NameAttrList.AttrEntry + path: /tf/compat/v1/NameAttrList/AttrEntry + - title: NodeDef + path: /tf/compat/v1/NodeDef + - title: NodeDef.AttrEntry + path: /tf/compat/v1/NodeDef/AttrEntry + - title: NodeDef.ExperimentalDebugInfo + path: /tf/compat/v1/NodeDef/ExperimentalDebugInfo + - title: OptimizerOptions + path: /tf/compat/v1/OptimizerOptions + - title: Print + status: deprecated + path: /tf/compat/v1/Print + - title: ReaderBase + path: /tf/compat/v1/ReaderBase + - title: RunMetadata + path: /tf/compat/v1/RunMetadata + - title: RunMetadata.FunctionGraphs + path: /tf/compat/v1/RunMetadata/FunctionGraphs + - title: RunOptions + path: /tf/compat/v1/RunOptions + - title: RunOptions.Experimental + path: /tf/compat/v1/RunOptions/Experimental + - title: Session + path: /tf/compat/v1/Session + - title: SessionLog + path: /tf/compat/v1/SessionLog + - title: SparseConditionalAccumulator + path: /tf/compat/v1/SparseConditionalAccumulator + - title: SparseTensorValue + path: /tf/compat/v1/SparseTensorValue + - title: Summary + path: /tf/compat/v1/Summary + - title: Summary.Audio + path: /tf/compat/v1/Summary/Audio + - title: Summary.Image + path: /tf/compat/v1/Summary/Image + - title: Summary.Value + path: /tf/compat/v1/Summary/Value + - title: SummaryMetadata + path: /tf/compat/v1/SummaryMetadata + - title: SummaryMetadata.PluginData + path: /tf/compat/v1/SummaryMetadata/PluginData + - title: TFRecordReader + path: /tf/compat/v1/TFRecordReader + - title: TensorInfo + path: /tf/compat/v1/TensorInfo + - title: TensorInfo.CompositeTensor + path: /tf/compat/v1/TensorInfo/CompositeTensor + - title: TensorInfo.CooSparse + path: /tf/compat/v1/TensorInfo/CooSparse + - title: TextLineReader + path: /tf/compat/v1/TextLineReader + - title: Variable + path: /tf/compat/v1/Variable + - title: VariableAggregation + path: /tf/compat/v1/VariableAggregation + - title: VariableScope + path: /tf/compat/v1/VariableScope + - title: WholeFileReader + path: /tf/compat/v1/WholeFileReader + - title: add_check_numerics_ops + path: /tf/compat/v1/add_check_numerics_ops + - title: add_to_collection + path: /tf/compat/v1/add_to_collection + - title: add_to_collections + path: /tf/compat/v1/add_to_collections + - title: all_variables + status: deprecated + path: /tf/compat/v1/all_variables + - title: arg_max + path: /tf/compat/v1/arg_max + - title: arg_min + path: 
/tf/compat/v1/arg_min + - title: argmax + path: /tf/compat/v1/argmax + - title: argmin + path: /tf/compat/v1/argmin + - title: assert_equal + path: /tf/compat/v1/assert_equal + - title: assert_greater + path: /tf/compat/v1/assert_greater + - title: assert_greater_equal + path: /tf/compat/v1/assert_greater_equal + - title: assert_integer + path: /tf/compat/v1/assert_integer + - title: assert_less + path: /tf/compat/v1/assert_less + - title: assert_less_equal + path: /tf/compat/v1/assert_less_equal + - title: assert_near + path: /tf/compat/v1/assert_near + - title: assert_negative + path: /tf/compat/v1/assert_negative + - title: assert_non_negative + path: /tf/compat/v1/assert_non_negative + - title: assert_non_positive + path: /tf/compat/v1/assert_non_positive + - title: assert_none_equal + path: /tf/compat/v1/assert_none_equal + - title: assert_positive + path: /tf/compat/v1/assert_positive + - title: assert_rank + path: /tf/compat/v1/assert_rank + - title: assert_rank_at_least + path: /tf/compat/v1/assert_rank_at_least + - title: assert_rank_in + path: /tf/compat/v1/assert_rank_in + - title: assert_scalar + path: /tf/compat/v1/assert_scalar + - title: assert_type + path: /tf/compat/v1/assert_type + - title: assert_variables_initialized + path: /tf/compat/v1/assert_variables_initialized + - title: assign + path: /tf/compat/v1/assign + - title: assign_add + path: /tf/compat/v1/assign_add + - title: assign_sub + path: /tf/compat/v1/assign_sub + - title: batch_gather + status: deprecated + path: /tf/compat/v1/batch_gather + - title: batch_scatter_update + status: deprecated + path: /tf/compat/v1/batch_scatter_update + - title: batch_to_space + path: /tf/compat/v1/batch_to_space + - title: batch_to_space_nd + path: /tf/compat/v1/batch_to_space_nd + - title: bincount + path: /tf/compat/v1/bincount + - title: boolean_mask + path: /tf/compat/v1/boolean_mask + - title: case + path: /tf/compat/v1/case + - title: clip_by_average_norm + status: deprecated + path: /tf/compat/v1/clip_by_average_norm + - title: colocate_with + status: deprecated + path: /tf/compat/v1/colocate_with + - title: cond + path: /tf/compat/v1/cond + - title: confusion_matrix + path: /tf/compat/v1/confusion_matrix + - title: constant + path: /tf/compat/v1/constant + - title: container + path: /tf/compat/v1/container + - title: control_flow_v2_enabled + path: /tf/compat/v1/control_flow_v2_enabled + - title: convert_to_tensor + path: /tf/compat/v1/convert_to_tensor + - title: convert_to_tensor_or_indexed_slices + path: /tf/compat/v1/convert_to_tensor_or_indexed_slices + - title: convert_to_tensor_or_sparse_tensor + path: /tf/compat/v1/convert_to_tensor_or_sparse_tensor + - title: count_nonzero + path: /tf/compat/v1/count_nonzero + - title: count_up_to + status: deprecated + path: /tf/compat/v1/count_up_to + - title: create_partitioned_variables + status: deprecated + path: /tf/compat/v1/create_partitioned_variables + - title: decode_csv + path: /tf/compat/v1/decode_csv + - title: decode_raw + path: /tf/compat/v1/decode_raw + - title: delete_session_tensor + path: /tf/compat/v1/delete_session_tensor + - title: depth_to_space + path: /tf/compat/v1/depth_to_space + - title: device + path: /tf/compat/v1/device + - title: disable_control_flow_v2 + path: /tf/compat/v1/disable_control_flow_v2 + - title: disable_eager_execution + path: /tf/compat/v1/disable_eager_execution + - title: disable_resource_variables + status: deprecated + path: /tf/compat/v1/disable_resource_variables + - title: disable_tensor_equality + path: 
/tf/compat/v1/disable_tensor_equality + - title: disable_v2_behavior + path: /tf/compat/v1/disable_v2_behavior + - title: disable_v2_tensorshape + path: /tf/compat/v1/disable_v2_tensorshape + - title: enable_control_flow_v2 + path: /tf/compat/v1/enable_control_flow_v2 + - title: enable_eager_execution + path: /tf/compat/v1/enable_eager_execution + - title: enable_resource_variables + path: /tf/compat/v1/enable_resource_variables + - title: enable_tensor_equality + path: /tf/compat/v1/enable_tensor_equality + - title: enable_v2_behavior + path: /tf/compat/v1/enable_v2_behavior + - title: enable_v2_tensorshape + path: /tf/compat/v1/enable_v2_tensorshape + - title: executing_eagerly + path: /tf/compat/v1/executing_eagerly + - title: executing_eagerly_outside_functions + path: /tf/compat/v1/executing_eagerly_outside_functions + - title: expand_dims + path: /tf/compat/v1/expand_dims + - title: extract_image_patches + path: /tf/compat/v1/extract_image_patches + - title: fixed_size_partitioner + path: /tf/compat/v1/fixed_size_partitioner + - title: floor_div + path: /tf/compat/v1/floor_div + - title: foldl + path: /tf/compat/v1/foldl + - title: foldr + path: /tf/compat/v1/foldr + - title: gather + path: /tf/compat/v1/gather + - title: gather_nd + path: /tf/compat/v1/gather_nd + - title: get_collection + path: /tf/compat/v1/get_collection + - title: get_collection_ref + path: /tf/compat/v1/get_collection_ref + - title: get_default_graph + path: /tf/compat/v1/get_default_graph + - title: get_default_session + path: /tf/compat/v1/get_default_session + - title: get_local_variable + path: /tf/compat/v1/get_local_variable + - title: get_seed + path: /tf/compat/v1/get_seed + - title: get_session_handle + path: /tf/compat/v1/get_session_handle + - title: get_session_tensor + path: /tf/compat/v1/get_session_tensor + - title: get_variable + path: /tf/compat/v1/get_variable + - title: get_variable_scope + path: /tf/compat/v1/get_variable_scope + - title: global_variables + path: /tf/compat/v1/global_variables + - title: global_variables_initializer + path: /tf/compat/v1/global_variables_initializer + - title: gradients + path: /tf/compat/v1/gradients + - title: hessians + path: /tf/compat/v1/hessians + - title: initialize_all_tables + status: deprecated + path: /tf/compat/v1/initialize_all_tables + - title: initialize_all_variables + status: deprecated + path: /tf/compat/v1/initialize_all_variables + - title: initialize_local_variables + status: deprecated + path: /tf/compat/v1/initialize_local_variables + - title: initialize_variables + status: deprecated + path: /tf/compat/v1/initialize_variables + - title: is_variable_initialized + path: /tf/compat/v1/is_variable_initialized + - title: load_file_system_library + status: deprecated + path: /tf/compat/v1/load_file_system_library + - title: local_variables + path: /tf/compat/v1/local_variables + - title: local_variables_initializer + path: /tf/compat/v1/local_variables_initializer + - title: make_template + path: /tf/compat/v1/make_template + - title: map_fn + path: /tf/compat/v1/map_fn + - title: min_max_variable_partitioner + path: /tf/compat/v1/min_max_variable_partitioner + - title: model_variables + path: /tf/compat/v1/model_variables + - title: moving_average_variables + path: /tf/compat/v1/moving_average_variables + - title: multinomial + status: deprecated + path: /tf/compat/v1/multinomial + - title: no_regularizer + path: /tf/compat/v1/no_regularizer + - title: norm + path: /tf/compat/v1/norm + - title: ones_like + path: /tf/compat/v1/ones_like + - 
title: op_scope + path: /tf/compat/v1/op_scope + - title: pad + path: /tf/compat/v1/pad + - title: parse_example + path: /tf/compat/v1/parse_example + - title: parse_single_example + path: /tf/compat/v1/parse_single_example + - title: placeholder + path: /tf/compat/v1/placeholder + - title: placeholder_with_default + path: /tf/compat/v1/placeholder_with_default + - title: py_func + path: /tf/compat/v1/py_func + - title: quantize_v2 + path: /tf/compat/v1/quantize_v2 + - title: random_normal_initializer + path: /tf/compat/v1/random_normal_initializer + - title: random_poisson + path: /tf/compat/v1/random_poisson + - title: random_uniform_initializer + path: /tf/compat/v1/random_uniform_initializer + - title: reduce_all + path: /tf/compat/v1/reduce_all + - title: reduce_any + path: /tf/compat/v1/reduce_any + - title: reduce_join + path: /tf/compat/v1/reduce_join + - title: reduce_logsumexp + path: /tf/compat/v1/reduce_logsumexp + - title: reduce_max + path: /tf/compat/v1/reduce_max + - title: reduce_mean + path: /tf/compat/v1/reduce_mean + - title: reduce_min + path: /tf/compat/v1/reduce_min + - title: reduce_prod + path: /tf/compat/v1/reduce_prod + - title: reduce_sum + path: /tf/compat/v1/reduce_sum + - title: report_uninitialized_variables + path: /tf/compat/v1/report_uninitialized_variables + - title: reset_default_graph + path: /tf/compat/v1/reset_default_graph + - title: resource_variables_enabled + path: /tf/compat/v1/resource_variables_enabled + - title: reverse_sequence + path: /tf/compat/v1/reverse_sequence + - title: scalar_mul + path: /tf/compat/v1/scalar_mul + - title: scan + path: /tf/compat/v1/scan + - title: scatter_add + path: /tf/compat/v1/scatter_add + - title: scatter_div + path: /tf/compat/v1/scatter_div + - title: scatter_max + path: /tf/compat/v1/scatter_max + - title: scatter_min + path: /tf/compat/v1/scatter_min + - title: scatter_mul + path: /tf/compat/v1/scatter_mul + - title: scatter_nd_add + path: /tf/compat/v1/scatter_nd_add + - title: scatter_nd_sub + path: /tf/compat/v1/scatter_nd_sub + - title: scatter_nd_update + path: /tf/compat/v1/scatter_nd_update + - title: scatter_sub + path: /tf/compat/v1/scatter_sub + - title: scatter_update + path: /tf/compat/v1/scatter_update + - title: serialize_many_sparse + path: /tf/compat/v1/serialize_many_sparse + - title: serialize_sparse + path: /tf/compat/v1/serialize_sparse + - title: set_random_seed + path: /tf/compat/v1/set_random_seed + - title: setdiff1d + path: /tf/compat/v1/setdiff1d + - title: shape + path: /tf/compat/v1/shape + - title: size + path: /tf/compat/v1/size + - title: space_to_batch + path: /tf/compat/v1/space_to_batch + - title: space_to_depth + path: /tf/compat/v1/space_to_depth + - title: sparse_add + path: /tf/compat/v1/sparse_add + - title: sparse_concat + path: /tf/compat/v1/sparse_concat + - title: sparse_matmul + path: /tf/compat/v1/sparse_matmul + - title: sparse_merge + status: deprecated + path: /tf/compat/v1/sparse_merge + - title: sparse_placeholder + path: /tf/compat/v1/sparse_placeholder + - title: sparse_reduce_max + path: /tf/compat/v1/sparse_reduce_max + - title: sparse_reduce_max_sparse + path: /tf/compat/v1/sparse_reduce_max_sparse + - title: sparse_reduce_sum + path: /tf/compat/v1/sparse_reduce_sum + - title: sparse_reduce_sum_sparse + path: /tf/compat/v1/sparse_reduce_sum_sparse + - title: sparse_segment_mean + path: /tf/compat/v1/sparse_segment_mean + - title: sparse_segment_sqrt_n + path: /tf/compat/v1/sparse_segment_sqrt_n + - title: sparse_segment_sum + path: 
/tf/compat/v1/sparse_segment_sum + - title: sparse_split + path: /tf/compat/v1/sparse_split + - title: sparse_to_dense + status: deprecated + path: /tf/compat/v1/sparse_to_dense + - title: squeeze + path: /tf/compat/v1/squeeze + - title: string_split + path: /tf/compat/v1/string_split + - title: string_to_hash_bucket + path: /tf/compat/v1/string_to_hash_bucket + - title: string_to_number + path: /tf/compat/v1/string_to_number + - title: substr + path: /tf/compat/v1/substr + - title: tables_initializer + path: /tf/compat/v1/tables_initializer + - title: to_bfloat16 + status: deprecated + path: /tf/compat/v1/to_bfloat16 + - title: to_complex128 + status: deprecated + path: /tf/compat/v1/to_complex128 + - title: to_complex64 + status: deprecated + path: /tf/compat/v1/to_complex64 + - title: to_double + status: deprecated + path: /tf/compat/v1/to_double + - title: to_float + status: deprecated + path: /tf/compat/v1/to_float + - title: to_int32 + status: deprecated + path: /tf/compat/v1/to_int32 + - title: to_int64 + status: deprecated + path: /tf/compat/v1/to_int64 + - title: trainable_variables + path: /tf/compat/v1/trainable_variables + - title: transpose + path: /tf/compat/v1/transpose + - title: truncated_normal_initializer + path: /tf/compat/v1/truncated_normal_initializer + - title: tuple + path: /tf/compat/v1/tuple + - title: uniform_unit_scaling_initializer + path: /tf/compat/v1/uniform_unit_scaling_initializer + - title: variable_axis_size_partitioner + path: /tf/compat/v1/variable_axis_size_partitioner + - title: variable_creator_scope + path: /tf/compat/v1/variable_creator_scope + - title: variable_op_scope + path: /tf/compat/v1/variable_op_scope + - title: variable_scope + path: /tf/compat/v1/variable_scope + - title: variables_initializer + path: /tf/compat/v1/variables_initializer + - title: verify_tensor_all_finite + path: /tf/compat/v1/verify_tensor_all_finite + - title: where + path: /tf/compat/v1/where + - title: while_loop + path: /tf/compat/v1/while_loop + - title: wrap_function + path: /tf/compat/v1/wrap_function + - title: zeros_like + path: /tf/compat/v1/zeros_like + - title: app + section: + - title: Overview + path: /tf/compat/v1/app + - title: run + path: /tf/compat/v1/app/run + - title: audio + section: + - title: Overview + path: /tf/compat/v1/audio + - title: autograph + section: + - title: Overview + path: /tf/compat/v1/autograph + - title: to_code + path: /tf/compat/v1/autograph/to_code + - title: to_graph + path: /tf/compat/v1/autograph/to_graph + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/autograph/experimental + - title: bitwise + section: + - title: Overview + path: /tf/compat/v1/bitwise + - title: compat + section: + - title: Overview + path: /tf/compat/v1/compat + - title: config + section: + - title: Overview + path: /tf/compat/v1/config + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/config/experimental + - title: optimizer + section: + - title: Overview + path: /tf/compat/v1/config/optimizer + - title: threading + section: + - title: Overview + path: /tf/compat/v1/config/threading + - title: data + section: + - title: Overview + path: /tf/compat/v1/data + - title: Dataset + path: /tf/compat/v1/data/Dataset + - title: FixedLengthRecordDataset + path: /tf/compat/v1/data/FixedLengthRecordDataset + - title: Iterator + path: /tf/compat/v1/data/Iterator + - title: TFRecordDataset + path: /tf/compat/v1/data/TFRecordDataset + - title: TextLineDataset + 
path: /tf/compat/v1/data/TextLineDataset + - title: get_output_classes + path: /tf/compat/v1/data/get_output_classes + - title: get_output_shapes + path: /tf/compat/v1/data/get_output_shapes + - title: get_output_types + path: /tf/compat/v1/data/get_output_types + - title: make_initializable_iterator + path: /tf/compat/v1/data/make_initializable_iterator + - title: make_one_shot_iterator + path: /tf/compat/v1/data/make_one_shot_iterator + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/data/experimental + - title: Counter + path: /tf/compat/v1/data/experimental/Counter + - title: CsvDataset + path: /tf/compat/v1/data/experimental/CsvDataset + - title: RaggedTensorStructure + status: deprecated + path: /tf/compat/v1/data/experimental/RaggedTensorStructure + - title: RandomDataset + path: /tf/compat/v1/data/experimental/RandomDataset + - title: SparseTensorStructure + status: deprecated + path: /tf/compat/v1/data/experimental/SparseTensorStructure + - title: SqlDataset + path: /tf/compat/v1/data/experimental/SqlDataset + - title: StatsAggregator + path: /tf/compat/v1/data/experimental/StatsAggregator + - title: TensorArrayStructure + status: deprecated + path: /tf/compat/v1/data/experimental/TensorArrayStructure + - title: TensorStructure + status: deprecated + path: /tf/compat/v1/data/experimental/TensorStructure + - title: choose_from_datasets + path: /tf/compat/v1/data/experimental/choose_from_datasets + - title: make_batched_features_dataset + path: /tf/compat/v1/data/experimental/make_batched_features_dataset + - title: make_csv_dataset + path: /tf/compat/v1/data/experimental/make_csv_dataset + - title: map_and_batch_with_legacy_function + status: deprecated + path: /tf/compat/v1/data/experimental/map_and_batch_with_legacy_function + - title: sample_from_datasets + path: /tf/compat/v1/data/experimental/sample_from_datasets + - title: debugging + section: + - title: Overview + path: /tf/compat/v1/debugging + - title: assert_shapes + path: /tf/compat/v1/debugging/assert_shapes + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/debugging/experimental + - title: distribute + section: + - title: Overview + path: /tf/compat/v1/distribute + - title: MirroredStrategy + path: /tf/compat/v1/distribute/MirroredStrategy + - title: OneDeviceStrategy + path: /tf/compat/v1/distribute/OneDeviceStrategy + - title: Strategy + path: /tf/compat/v1/distribute/Strategy + - title: StrategyExtended + path: /tf/compat/v1/distribute/StrategyExtended + - title: get_loss_reduction + path: /tf/compat/v1/distribute/get_loss_reduction + - title: cluster_resolver + section: + - title: Overview + path: /tf/compat/v1/distribute/cluster_resolver + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/distribute/experimental + - title: CentralStorageStrategy + path: /tf/compat/v1/distribute/experimental/CentralStorageStrategy + - title: MultiWorkerMirroredStrategy + path: /tf/compat/v1/distribute/experimental/MultiWorkerMirroredStrategy + - title: ParameterServerStrategy + path: /tf/compat/v1/distribute/experimental/ParameterServerStrategy + - title: TPUStrategy + path: /tf/compat/v1/distribute/experimental/TPUStrategy + - title: distributions + section: + - title: Overview + path: /tf/compat/v1/distributions + - title: Bernoulli + path: /tf/compat/v1/distributions/Bernoulli + - title: Beta + path: /tf/compat/v1/distributions/Beta + - title: Categorical + path: 
/tf/compat/v1/distributions/Categorical + - title: Dirichlet + path: /tf/compat/v1/distributions/Dirichlet + - title: DirichletMultinomial + path: /tf/compat/v1/distributions/DirichletMultinomial + - title: Distribution + path: /tf/compat/v1/distributions/Distribution + - title: Exponential + path: /tf/compat/v1/distributions/Exponential + - title: Gamma + path: /tf/compat/v1/distributions/Gamma + - title: Laplace + path: /tf/compat/v1/distributions/Laplace + - title: Multinomial + path: /tf/compat/v1/distributions/Multinomial + - title: Normal + path: /tf/compat/v1/distributions/Normal + - title: RegisterKL + path: /tf/compat/v1/distributions/RegisterKL + - title: ReparameterizationType + path: /tf/compat/v1/distributions/ReparameterizationType + - title: StudentT + path: /tf/compat/v1/distributions/StudentT + - title: Uniform + path: /tf/compat/v1/distributions/Uniform + - title: kl_divergence + status: deprecated + path: /tf/compat/v1/distributions/kl_divergence + - title: dtypes + section: + - title: Overview + path: /tf/compat/v1/dtypes + - title: errors + section: + - title: Overview + path: /tf/compat/v1/errors + - title: error_code_from_exception_type + path: /tf/compat/v1/errors/error_code_from_exception_type + - title: exception_type_from_error_code + path: /tf/compat/v1/errors/exception_type_from_error_code + - title: raise_exception_on_not_ok_status + path: /tf/compat/v1/errors/raise_exception_on_not_ok_status + - title: estimator + section: + - title: Overview + path: /tf/compat/v1/estimator + - title: BaselineClassifier + path: /tf/compat/v1/estimator/BaselineClassifier + - title: BaselineEstimator + path: /tf/compat/v1/estimator/BaselineEstimator + - title: BaselineRegressor + path: /tf/compat/v1/estimator/BaselineRegressor + - title: DNNClassifier + path: /tf/compat/v1/estimator/DNNClassifier + - title: DNNEstimator + path: /tf/compat/v1/estimator/DNNEstimator + - title: DNNLinearCombinedClassifier + path: /tf/compat/v1/estimator/DNNLinearCombinedClassifier + - title: DNNLinearCombinedEstimator + path: /tf/compat/v1/estimator/DNNLinearCombinedEstimator + - title: DNNLinearCombinedRegressor + path: /tf/compat/v1/estimator/DNNLinearCombinedRegressor + - title: DNNRegressor + path: /tf/compat/v1/estimator/DNNRegressor + - title: Estimator + path: /tf/compat/v1/estimator/Estimator + - title: LinearClassifier + path: /tf/compat/v1/estimator/LinearClassifier + - title: LinearEstimator + path: /tf/compat/v1/estimator/LinearEstimator + - title: LinearRegressor + path: /tf/compat/v1/estimator/LinearRegressor + - title: classifier_parse_example_spec + path: /tf/compat/v1/estimator/classifier_parse_example_spec + - title: regressor_parse_example_spec + path: /tf/compat/v1/estimator/regressor_parse_example_spec + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/estimator/experimental + - title: KMeans + path: /tf/compat/v1/estimator/experimental/KMeans + - title: dnn_logit_fn_builder + path: /tf/compat/v1/estimator/experimental/dnn_logit_fn_builder + - title: linear_logit_fn_builder + path: /tf/compat/v1/estimator/experimental/linear_logit_fn_builder + - title: export + section: + - title: Overview + path: /tf/compat/v1/estimator/export + - title: inputs + section: + - title: Overview + path: /tf/compat/v1/estimator/inputs + - title: numpy_input_fn + path: /tf/compat/v1/estimator/inputs/numpy_input_fn + - title: pandas_input_fn + path: /tf/compat/v1/estimator/inputs/pandas_input_fn + - title: tpu + section: + - title: Overview + path: 
/tf/compat/v1/estimator/tpu + - title: InputPipelineConfig + path: /tf/compat/v1/estimator/tpu/InputPipelineConfig + - title: RunConfig + path: /tf/compat/v1/estimator/tpu/RunConfig + - title: TPUConfig + path: /tf/compat/v1/estimator/tpu/TPUConfig + - title: TPUEstimator + path: /tf/compat/v1/estimator/tpu/TPUEstimator + - title: TPUEstimatorSpec + path: /tf/compat/v1/estimator/tpu/TPUEstimatorSpec + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/estimator/tpu/experimental + - title: EmbeddingConfigSpec + path: /tf/compat/v1/estimator/tpu/experimental/EmbeddingConfigSpec + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/experimental + - title: output_all_intermediates + path: /tf/compat/v1/experimental/output_all_intermediates + - title: feature_column + section: + - title: Overview + path: /tf/compat/v1/feature_column + - title: categorical_column_with_vocabulary_file + path: /tf/compat/v1/feature_column/categorical_column_with_vocabulary_file + - title: input_layer + path: /tf/compat/v1/feature_column/input_layer + - title: linear_model + path: /tf/compat/v1/feature_column/linear_model + - title: make_parse_example_spec + path: /tf/compat/v1/feature_column/make_parse_example_spec + - title: shared_embedding_columns + path: /tf/compat/v1/feature_column/shared_embedding_columns + - title: flags + section: + - title: Overview + path: /tf/compat/v1/flags + - title: ArgumentParser + path: /tf/compat/v1/flags/ArgumentParser + - title: ArgumentSerializer + path: /tf/compat/v1/flags/ArgumentSerializer + - title: BaseListParser + path: /tf/compat/v1/flags/BaseListParser + - title: BooleanFlag + path: /tf/compat/v1/flags/BooleanFlag + - title: BooleanParser + path: /tf/compat/v1/flags/BooleanParser + - title: CantOpenFlagFileError + path: /tf/compat/v1/flags/CantOpenFlagFileError + - title: CsvListSerializer + path: /tf/compat/v1/flags/CsvListSerializer + - title: DEFINE + path: /tf/compat/v1/flags/DEFINE + - title: DEFINE_alias + path: /tf/compat/v1/flags/DEFINE_alias + - title: DEFINE_bool + path: /tf/compat/v1/flags/DEFINE_bool + - title: DEFINE_enum + path: /tf/compat/v1/flags/DEFINE_enum + - title: DEFINE_enum_class + path: /tf/compat/v1/flags/DEFINE_enum_class + - title: DEFINE_flag + path: /tf/compat/v1/flags/DEFINE_flag + - title: DEFINE_float + path: /tf/compat/v1/flags/DEFINE_float + - title: DEFINE_integer + path: /tf/compat/v1/flags/DEFINE_integer + - title: DEFINE_list + path: /tf/compat/v1/flags/DEFINE_list + - title: DEFINE_multi + path: /tf/compat/v1/flags/DEFINE_multi + - title: DEFINE_multi_enum + path: /tf/compat/v1/flags/DEFINE_multi_enum + - title: DEFINE_multi_enum_class + path: /tf/compat/v1/flags/DEFINE_multi_enum_class + - title: DEFINE_multi_float + path: /tf/compat/v1/flags/DEFINE_multi_float + - title: DEFINE_multi_integer + path: /tf/compat/v1/flags/DEFINE_multi_integer + - title: DEFINE_multi_string + path: /tf/compat/v1/flags/DEFINE_multi_string + - title: DEFINE_spaceseplist + path: /tf/compat/v1/flags/DEFINE_spaceseplist + - title: DEFINE_string + path: /tf/compat/v1/flags/DEFINE_string + - title: DuplicateFlagError + path: /tf/compat/v1/flags/DuplicateFlagError + - title: EnumClassFlag + path: /tf/compat/v1/flags/EnumClassFlag + - title: EnumClassParser + path: /tf/compat/v1/flags/EnumClassParser + - title: EnumFlag + path: /tf/compat/v1/flags/EnumFlag + - title: EnumParser + path: /tf/compat/v1/flags/EnumParser + - title: Error + path: /tf/compat/v1/flags/Error + 
- title: FLAGS + path: /tf/compat/v1/flags/FLAGS + - title: Flag + path: /tf/compat/v1/flags/Flag + - title: FlagNameConflictsWithMethodError + path: /tf/compat/v1/flags/FlagNameConflictsWithMethodError + - title: FlagValues + path: /tf/compat/v1/flags/FlagValues + - title: FloatParser + path: /tf/compat/v1/flags/FloatParser + - title: IllegalFlagValueError + path: /tf/compat/v1/flags/IllegalFlagValueError + - title: IntegerParser + path: /tf/compat/v1/flags/IntegerParser + - title: ListParser + path: /tf/compat/v1/flags/ListParser + - title: ListSerializer + path: /tf/compat/v1/flags/ListSerializer + - title: MultiEnumClassFlag + path: /tf/compat/v1/flags/MultiEnumClassFlag + - title: MultiFlag + path: /tf/compat/v1/flags/MultiFlag + - title: UnparsedFlagAccessError + path: /tf/compat/v1/flags/UnparsedFlagAccessError + - title: UnrecognizedFlagError + path: /tf/compat/v1/flags/UnrecognizedFlagError + - title: ValidationError + path: /tf/compat/v1/flags/ValidationError + - title: WhitespaceSeparatedListParser + path: /tf/compat/v1/flags/WhitespaceSeparatedListParser + - title: adopt_module_key_flags + path: /tf/compat/v1/flags/adopt_module_key_flags + - title: declare_key_flag + path: /tf/compat/v1/flags/declare_key_flag + - title: disclaim_key_flags + path: /tf/compat/v1/flags/disclaim_key_flags + - title: doc_to_help + path: /tf/compat/v1/flags/doc_to_help + - title: flag_dict_to_args + path: /tf/compat/v1/flags/flag_dict_to_args + - title: get_help_width + path: /tf/compat/v1/flags/get_help_width + - title: mark_bool_flags_as_mutual_exclusive + path: /tf/compat/v1/flags/mark_bool_flags_as_mutual_exclusive + - title: mark_flag_as_required + path: /tf/compat/v1/flags/mark_flag_as_required + - title: mark_flags_as_mutual_exclusive + path: /tf/compat/v1/flags/mark_flags_as_mutual_exclusive + - title: mark_flags_as_required + path: /tf/compat/v1/flags/mark_flags_as_required + - title: multi_flags_validator + path: /tf/compat/v1/flags/multi_flags_validator + - title: register_multi_flags_validator + path: /tf/compat/v1/flags/register_multi_flags_validator + - title: register_validator + path: /tf/compat/v1/flags/register_validator + - title: text_wrap + path: /tf/compat/v1/flags/text_wrap + - title: validator + path: /tf/compat/v1/flags/validator + - title: tf_decorator + section: + - title: Overview + path: /tf/compat/v1/flags/tf_decorator + - title: TFDecorator + path: /tf/compat/v1/flags/tf_decorator/TFDecorator + - title: make_decorator + path: /tf/compat/v1/flags/tf_decorator/make_decorator + - title: rewrap + path: /tf/compat/v1/flags/tf_decorator/rewrap + - title: unwrap + path: /tf/compat/v1/flags/tf_decorator/unwrap + - title: tf_stack + section: + - title: Overview + path: /tf/compat/v1/flags/tf_decorator/tf_stack + - title: CurrentModuleFilter + path: /tf/compat/v1/flags/tf_decorator/tf_stack/CurrentModuleFilter + - title: FrameSummary + path: /tf/compat/v1/flags/tf_decorator/tf_stack/FrameSummary + - title: StackSummary + path: /tf/compat/v1/flags/tf_decorator/tf_stack/StackSummary + - title: StackTraceFilter + path: /tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceFilter + - title: StackTraceMapper + path: /tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceMapper + - title: StackTraceTransform + path: /tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceTransform + - title: extract_stack + path: /tf/compat/v1/flags/tf_decorator/tf_stack/extract_stack + - title: gfile + section: + - title: Overview + path: /tf/compat/v1/gfile + - title: Copy + path: /tf/compat/v1/gfile/Copy + - 
title: DeleteRecursively + path: /tf/compat/v1/gfile/DeleteRecursively + - title: Exists + path: /tf/compat/v1/gfile/Exists + - title: FastGFile + path: /tf/compat/v1/gfile/FastGFile + - title: Glob + path: /tf/compat/v1/gfile/Glob + - title: IsDirectory + path: /tf/compat/v1/gfile/IsDirectory + - title: ListDirectory + path: /tf/compat/v1/gfile/ListDirectory + - title: MakeDirs + path: /tf/compat/v1/gfile/MakeDirs + - title: MkDir + path: /tf/compat/v1/gfile/MkDir + - title: Remove + path: /tf/compat/v1/gfile/Remove + - title: Rename + path: /tf/compat/v1/gfile/Rename + - title: Stat + path: /tf/compat/v1/gfile/Stat + - title: Walk + path: /tf/compat/v1/gfile/Walk + - title: graph_util + section: + - title: Overview + path: /tf/compat/v1/graph_util + - title: convert_variables_to_constants + status: deprecated + path: /tf/compat/v1/graph_util/convert_variables_to_constants + - title: extract_sub_graph + status: deprecated + path: /tf/compat/v1/graph_util/extract_sub_graph + - title: must_run_on_cpu + status: deprecated + path: /tf/compat/v1/graph_util/must_run_on_cpu + - title: remove_training_nodes + status: deprecated + path: /tf/compat/v1/graph_util/remove_training_nodes + - title: tensor_shape_from_node_def_name + status: deprecated + path: /tf/compat/v1/graph_util/tensor_shape_from_node_def_name + - title: image + section: + - title: Overview + path: /tf/compat/v1/image + - title: ResizeMethod + path: /tf/compat/v1/image/ResizeMethod + - title: crop_and_resize + path: /tf/compat/v1/image/crop_and_resize + - title: draw_bounding_boxes + path: /tf/compat/v1/image/draw_bounding_boxes + - title: extract_glimpse + path: /tf/compat/v1/image/extract_glimpse + - title: resize + path: /tf/compat/v1/image/resize + - title: resize_area + path: /tf/compat/v1/image/resize_area + - title: resize_bicubic + path: /tf/compat/v1/image/resize_bicubic + - title: resize_bilinear + path: /tf/compat/v1/image/resize_bilinear + - title: resize_image_with_pad + path: /tf/compat/v1/image/resize_image_with_pad + - title: resize_nearest_neighbor + path: /tf/compat/v1/image/resize_nearest_neighbor + - title: sample_distorted_bounding_box + status: deprecated + path: /tf/compat/v1/image/sample_distorted_bounding_box + - title: initializers + section: + - title: Overview + path: /tf/compat/v1/initializers + - title: io + section: + - title: Overview + path: /tf/compat/v1/io + - title: TFRecordCompressionType + path: /tf/compat/v1/io/TFRecordCompressionType + - title: tf_record_iterator + status: deprecated + path: /tf/compat/v1/io/tf_record_iterator + - title: gfile + section: + - title: Overview + path: /tf/compat/v1/io/gfile + - title: keras + section: + - title: Overview + path: /tf/compat/v1/keras + - title: activations + section: + - title: Overview + path: /tf/compat/v1/keras/activations + - title: applications + section: + - title: Overview + path: /tf/compat/v1/keras/applications + - title: densenet + section: + - title: Overview + path: /tf/compat/v1/keras/applications/densenet + - title: imagenet_utils + section: + - title: Overview + path: /tf/compat/v1/keras/applications/imagenet_utils + - title: inception_resnet_v2 + section: + - title: Overview + path: /tf/compat/v1/keras/applications/inception_resnet_v2 + - title: inception_v3 + section: + - title: Overview + path: /tf/compat/v1/keras/applications/inception_v3 + - title: mobilenet + section: + - title: Overview + path: /tf/compat/v1/keras/applications/mobilenet + - title: mobilenet_v2 + section: + - title: Overview + path: 
/tf/compat/v1/keras/applications/mobilenet_v2 + - title: nasnet + section: + - title: Overview + path: /tf/compat/v1/keras/applications/nasnet + - title: resnet + section: + - title: Overview + path: /tf/compat/v1/keras/applications/resnet + - title: resnet50 + section: + - title: Overview + path: /tf/compat/v1/keras/applications/resnet50 + - title: resnet_v2 + section: + - title: Overview + path: /tf/compat/v1/keras/applications/resnet_v2 + - title: vgg16 + section: + - title: Overview + path: /tf/compat/v1/keras/applications/vgg16 + - title: vgg19 + section: + - title: Overview + path: /tf/compat/v1/keras/applications/vgg19 + - title: xception + section: + - title: Overview + path: /tf/compat/v1/keras/applications/xception + - title: backend + section: + - title: Overview + path: /tf/compat/v1/keras/backend + - title: get_session + path: /tf/compat/v1/keras/backend/get_session + - title: name_scope + path: /tf/compat/v1/keras/backend/name_scope + - title: set_session + path: /tf/compat/v1/keras/backend/set_session + - title: callbacks + section: + - title: Overview + path: /tf/compat/v1/keras/callbacks + - title: TensorBoard + path: /tf/compat/v1/keras/callbacks/TensorBoard + - title: constraints + section: + - title: Overview + path: /tf/compat/v1/keras/constraints + - title: datasets + section: + - title: Overview + path: /tf/compat/v1/keras/datasets + - title: boston_housing + section: + - title: Overview + path: /tf/compat/v1/keras/datasets/boston_housing + - title: cifar10 + section: + - title: Overview + path: /tf/compat/v1/keras/datasets/cifar10 + - title: cifar100 + section: + - title: Overview + path: /tf/compat/v1/keras/datasets/cifar100 + - title: fashion_mnist + section: + - title: Overview + path: /tf/compat/v1/keras/datasets/fashion_mnist + - title: imdb + section: + - title: Overview + path: /tf/compat/v1/keras/datasets/imdb + - title: mnist + section: + - title: Overview + path: /tf/compat/v1/keras/datasets/mnist + - title: reuters + section: + - title: Overview + path: /tf/compat/v1/keras/datasets/reuters + - title: estimator + section: + - title: Overview + path: /tf/compat/v1/keras/estimator + - title: model_to_estimator + path: /tf/compat/v1/keras/estimator/model_to_estimator + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/keras/experimental + - title: export_saved_model + status: deprecated + path: /tf/compat/v1/keras/experimental/export_saved_model + - title: load_from_saved_model + status: deprecated + path: /tf/compat/v1/keras/experimental/load_from_saved_model + - title: initializers + section: + - title: Overview + path: /tf/compat/v1/keras/initializers + - title: Constant + path: /tf/compat/v1/keras/initializers/Constant + - title: Identity + path: /tf/compat/v1/keras/initializers/Identity + - title: Initializer + path: /tf/compat/v1/keras/initializers/Initializer + - title: Ones + path: /tf/compat/v1/keras/initializers/Ones + - title: Orthogonal + path: /tf/compat/v1/keras/initializers/Orthogonal + - title: RandomNormal + path: /tf/compat/v1/keras/initializers/RandomNormal + - title: RandomUniform + path: /tf/compat/v1/keras/initializers/RandomUniform + - title: TruncatedNormal + path: /tf/compat/v1/keras/initializers/TruncatedNormal + - title: VarianceScaling + path: /tf/compat/v1/keras/initializers/VarianceScaling + - title: Zeros + path: /tf/compat/v1/keras/initializers/Zeros + - title: glorot_normal + path: /tf/compat/v1/keras/initializers/glorot_normal + - title: glorot_uniform + path: 
/tf/compat/v1/keras/initializers/glorot_uniform + - title: he_normal + path: /tf/compat/v1/keras/initializers/he_normal + - title: he_uniform + path: /tf/compat/v1/keras/initializers/he_uniform + - title: lecun_normal + path: /tf/compat/v1/keras/initializers/lecun_normal + - title: lecun_uniform + path: /tf/compat/v1/keras/initializers/lecun_uniform + - title: layers + section: + - title: Overview + path: /tf/compat/v1/keras/layers + - title: BatchNormalization + path: /tf/compat/v1/keras/layers/BatchNormalization + - title: CuDNNGRU + path: /tf/compat/v1/keras/layers/CuDNNGRU + - title: CuDNNLSTM + path: /tf/compat/v1/keras/layers/CuDNNLSTM + - title: DenseFeatures + path: /tf/compat/v1/keras/layers/DenseFeatures + - title: GRU + path: /tf/compat/v1/keras/layers/GRU + - title: GRUCell + path: /tf/compat/v1/keras/layers/GRUCell + - title: LSTM + path: /tf/compat/v1/keras/layers/LSTM + - title: LSTMCell + path: /tf/compat/v1/keras/layers/LSTMCell + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/keras/layers/experimental + - title: preprocessing + section: + - title: Overview + path: /tf/compat/v1/keras/layers/experimental/preprocessing + - title: Normalization + path: /tf/compat/v1/keras/layers/experimental/preprocessing/Normalization + - title: TextVectorization + path: /tf/compat/v1/keras/layers/experimental/preprocessing/TextVectorization + - title: losses + section: + - title: Overview + path: /tf/compat/v1/keras/losses + - title: metrics + section: + - title: Overview + path: /tf/compat/v1/keras/metrics + - title: mixed_precision + section: + - title: Overview + path: /tf/compat/v1/keras/mixed_precision + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/keras/mixed_precision/experimental + - title: models + section: + - title: Overview + path: /tf/compat/v1/keras/models + - title: optimizers + section: + - title: Overview + path: /tf/compat/v1/keras/optimizers + - title: schedules + section: + - title: Overview + path: /tf/compat/v1/keras/optimizers/schedules + - title: preprocessing + section: + - title: Overview + path: /tf/compat/v1/keras/preprocessing + - title: image + section: + - title: Overview + path: /tf/compat/v1/keras/preprocessing/image + - title: sequence + section: + - title: Overview + path: /tf/compat/v1/keras/preprocessing/sequence + - title: text + section: + - title: Overview + path: /tf/compat/v1/keras/preprocessing/text + - title: regularizers + section: + - title: Overview + path: /tf/compat/v1/keras/regularizers + - title: utils + section: + - title: Overview + path: /tf/compat/v1/keras/utils + - title: wrappers + section: + - title: Overview + path: /tf/compat/v1/keras/wrappers + - title: scikit_learn + section: + - title: Overview + path: /tf/compat/v1/keras/wrappers/scikit_learn + - title: layers + section: + - title: Overview + path: /tf/compat/v1/layers + - title: AveragePooling1D + path: /tf/compat/v1/layers/AveragePooling1D + - title: AveragePooling2D + path: /tf/compat/v1/layers/AveragePooling2D + - title: AveragePooling3D + path: /tf/compat/v1/layers/AveragePooling3D + - title: BatchNormalization + path: /tf/compat/v1/layers/BatchNormalization + - title: Conv1D + path: /tf/compat/v1/layers/Conv1D + - title: Conv2D + path: /tf/compat/v1/layers/Conv2D + - title: Conv2DTranspose + path: /tf/compat/v1/layers/Conv2DTranspose + - title: Conv3D + path: /tf/compat/v1/layers/Conv3D + - title: Conv3DTranspose + path: /tf/compat/v1/layers/Conv3DTranspose + - title: 
Dense + path: /tf/compat/v1/layers/Dense + - title: Dropout + path: /tf/compat/v1/layers/Dropout + - title: Flatten + path: /tf/compat/v1/layers/Flatten + - title: Layer + path: /tf/compat/v1/layers/Layer + - title: MaxPooling1D + path: /tf/compat/v1/layers/MaxPooling1D + - title: MaxPooling2D + path: /tf/compat/v1/layers/MaxPooling2D + - title: MaxPooling3D + path: /tf/compat/v1/layers/MaxPooling3D + - title: SeparableConv1D + path: /tf/compat/v1/layers/SeparableConv1D + - title: SeparableConv2D + path: /tf/compat/v1/layers/SeparableConv2D + - title: average_pooling1d + status: deprecated + path: /tf/compat/v1/layers/average_pooling1d + - title: average_pooling2d + status: deprecated + path: /tf/compat/v1/layers/average_pooling2d + - title: average_pooling3d + status: deprecated + path: /tf/compat/v1/layers/average_pooling3d + - title: batch_normalization + status: deprecated + path: /tf/compat/v1/layers/batch_normalization + - title: conv1d + status: deprecated + path: /tf/compat/v1/layers/conv1d + - title: conv2d + status: deprecated + path: /tf/compat/v1/layers/conv2d + - title: conv2d_transpose + status: deprecated + path: /tf/compat/v1/layers/conv2d_transpose + - title: conv3d + status: deprecated + path: /tf/compat/v1/layers/conv3d + - title: conv3d_transpose + status: deprecated + path: /tf/compat/v1/layers/conv3d_transpose + - title: dense + status: deprecated + path: /tf/compat/v1/layers/dense + - title: dropout + status: deprecated + path: /tf/compat/v1/layers/dropout + - title: flatten + status: deprecated + path: /tf/compat/v1/layers/flatten + - title: max_pooling1d + status: deprecated + path: /tf/compat/v1/layers/max_pooling1d + - title: max_pooling2d + status: deprecated + path: /tf/compat/v1/layers/max_pooling2d + - title: max_pooling3d + status: deprecated + path: /tf/compat/v1/layers/max_pooling3d + - title: separable_conv1d + status: deprecated + path: /tf/compat/v1/layers/separable_conv1d + - title: separable_conv2d + status: deprecated + path: /tf/compat/v1/layers/separable_conv2d + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/layers/experimental + - title: keras_style_scope + path: /tf/compat/v1/layers/experimental/keras_style_scope + - title: set_keras_style + path: /tf/compat/v1/layers/experimental/set_keras_style + - title: linalg + section: + - title: Overview + path: /tf/compat/v1/linalg + - title: l2_normalize + path: /tf/compat/v1/linalg/l2_normalize + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/linalg/experimental + - title: lite + section: + - title: Overview + path: /tf/compat/v1/lite + - title: OpHint + path: /tf/compat/v1/lite/OpHint + - title: OpHint.OpHintArgumentTracker + path: /tf/compat/v1/lite/OpHint/OpHintArgumentTracker + - title: TFLiteConverter + path: /tf/compat/v1/lite/TFLiteConverter + - title: TocoConverter + path: /tf/compat/v1/lite/TocoConverter + - title: toco_convert + status: deprecated + path: /tf/compat/v1/lite/toco_convert + - title: constants + section: + - title: Overview + path: /tf/compat/v1/lite/constants + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/lite/experimental + - title: convert_op_hints_to_stubs + path: /tf/compat/v1/lite/experimental/convert_op_hints_to_stubs + - title: get_potentially_supported_ops + path: /tf/compat/v1/lite/experimental/get_potentially_supported_ops + - title: nn + section: + - title: Overview + path: /tf/compat/v1/lite/experimental/nn + - 
title: TFLiteLSTMCell + path: /tf/compat/v1/lite/experimental/nn/TFLiteLSTMCell + - title: TfLiteRNNCell + path: /tf/compat/v1/lite/experimental/nn/TfLiteRNNCell + - title: dynamic_rnn + path: /tf/compat/v1/lite/experimental/nn/dynamic_rnn + - title: logging + section: + - title: Overview + path: /tf/compat/v1/logging + - title: TaskLevelStatusMessage + path: /tf/compat/v1/logging/TaskLevelStatusMessage + - title: debug + path: /tf/compat/v1/logging/debug + - title: error + path: /tf/compat/v1/logging/error + - title: fatal + path: /tf/compat/v1/logging/fatal + - title: flush + path: /tf/compat/v1/logging/flush + - title: get_verbosity + path: /tf/compat/v1/logging/get_verbosity + - title: info + path: /tf/compat/v1/logging/info + - title: log + path: /tf/compat/v1/logging/log + - title: log_every_n + path: /tf/compat/v1/logging/log_every_n + - title: log_first_n + path: /tf/compat/v1/logging/log_first_n + - title: log_if + path: /tf/compat/v1/logging/log_if + - title: set_verbosity + path: /tf/compat/v1/logging/set_verbosity + - title: vlog + path: /tf/compat/v1/logging/vlog + - title: warn + path: /tf/compat/v1/logging/warn + - title: warning + path: /tf/compat/v1/logging/warning + - title: lookup + section: + - title: Overview + path: /tf/compat/v1/lookup + - title: StaticHashTable + path: /tf/compat/v1/lookup/StaticHashTable + - title: StaticVocabularyTable + path: /tf/compat/v1/lookup/StaticVocabularyTable + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/lookup/experimental + - title: losses + section: + - title: Overview + path: /tf/compat/v1/losses + - title: Reduction + path: /tf/compat/v1/losses/Reduction + - title: absolute_difference + path: /tf/compat/v1/losses/absolute_difference + - title: add_loss + path: /tf/compat/v1/losses/add_loss + - title: compute_weighted_loss + path: /tf/compat/v1/losses/compute_weighted_loss + - title: cosine_distance + path: /tf/compat/v1/losses/cosine_distance + - title: get_losses + path: /tf/compat/v1/losses/get_losses + - title: get_regularization_loss + path: /tf/compat/v1/losses/get_regularization_loss + - title: get_regularization_losses + path: /tf/compat/v1/losses/get_regularization_losses + - title: get_total_loss + path: /tf/compat/v1/losses/get_total_loss + - title: hinge_loss + path: /tf/compat/v1/losses/hinge_loss + - title: huber_loss + path: /tf/compat/v1/losses/huber_loss + - title: log_loss + path: /tf/compat/v1/losses/log_loss + - title: mean_pairwise_squared_error + path: /tf/compat/v1/losses/mean_pairwise_squared_error + - title: mean_squared_error + path: /tf/compat/v1/losses/mean_squared_error + - title: sigmoid_cross_entropy + path: /tf/compat/v1/losses/sigmoid_cross_entropy + - title: softmax_cross_entropy + path: /tf/compat/v1/losses/softmax_cross_entropy + - title: sparse_softmax_cross_entropy + path: /tf/compat/v1/losses/sparse_softmax_cross_entropy + - title: manip + section: + - title: Overview + path: /tf/compat/v1/manip + - title: math + section: + - title: Overview + path: /tf/compat/v1/math + - title: in_top_k + path: /tf/compat/v1/math/in_top_k + - title: log_softmax + path: /tf/compat/v1/math/log_softmax + - title: softmax + path: /tf/compat/v1/math/softmax + - title: special + section: + - title: Overview + path: /tf/compat/v1/math/special + - title: metrics + section: + - title: Overview + path: /tf/compat/v1/metrics + - title: accuracy + path: /tf/compat/v1/metrics/accuracy + - title: auc + status: deprecated + path: /tf/compat/v1/metrics/auc + - title: 
average_precision_at_k + path: /tf/compat/v1/metrics/average_precision_at_k + - title: false_negatives + path: /tf/compat/v1/metrics/false_negatives + - title: false_negatives_at_thresholds + path: /tf/compat/v1/metrics/false_negatives_at_thresholds + - title: false_positives + path: /tf/compat/v1/metrics/false_positives + - title: false_positives_at_thresholds + path: /tf/compat/v1/metrics/false_positives_at_thresholds + - title: mean + path: /tf/compat/v1/metrics/mean + - title: mean_absolute_error + path: /tf/compat/v1/metrics/mean_absolute_error + - title: mean_cosine_distance + path: /tf/compat/v1/metrics/mean_cosine_distance + - title: mean_iou + path: /tf/compat/v1/metrics/mean_iou + - title: mean_per_class_accuracy + path: /tf/compat/v1/metrics/mean_per_class_accuracy + - title: mean_relative_error + path: /tf/compat/v1/metrics/mean_relative_error + - title: mean_squared_error + path: /tf/compat/v1/metrics/mean_squared_error + - title: mean_tensor + path: /tf/compat/v1/metrics/mean_tensor + - title: percentage_below + path: /tf/compat/v1/metrics/percentage_below + - title: precision + path: /tf/compat/v1/metrics/precision + - title: precision_at_k + path: /tf/compat/v1/metrics/precision_at_k + - title: precision_at_thresholds + path: /tf/compat/v1/metrics/precision_at_thresholds + - title: precision_at_top_k + path: /tf/compat/v1/metrics/precision_at_top_k + - title: recall + path: /tf/compat/v1/metrics/recall + - title: recall_at_k + path: /tf/compat/v1/metrics/recall_at_k + - title: recall_at_thresholds + path: /tf/compat/v1/metrics/recall_at_thresholds + - title: recall_at_top_k + path: /tf/compat/v1/metrics/recall_at_top_k + - title: root_mean_squared_error + path: /tf/compat/v1/metrics/root_mean_squared_error + - title: sensitivity_at_specificity + path: /tf/compat/v1/metrics/sensitivity_at_specificity + - title: sparse_average_precision_at_k + status: deprecated + path: /tf/compat/v1/metrics/sparse_average_precision_at_k + - title: sparse_precision_at_k + status: deprecated + path: /tf/compat/v1/metrics/sparse_precision_at_k + - title: specificity_at_sensitivity + path: /tf/compat/v1/metrics/specificity_at_sensitivity + - title: true_negatives + path: /tf/compat/v1/metrics/true_negatives + - title: true_negatives_at_thresholds + path: /tf/compat/v1/metrics/true_negatives_at_thresholds + - title: true_positives + path: /tf/compat/v1/metrics/true_positives + - title: true_positives_at_thresholds + path: /tf/compat/v1/metrics/true_positives_at_thresholds + - title: mixed_precision + section: + - title: Overview + path: /tf/compat/v1/mixed_precision + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/mixed_precision/experimental + - title: mlir + section: + - title: Overview + path: /tf/compat/v1/mlir + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/mlir/experimental + - title: nest + section: + - title: Overview + path: /tf/compat/v1/nest + - title: nn + section: + - title: Overview + path: /tf/compat/v1/nn + - title: avg_pool + path: /tf/compat/v1/nn/avg_pool + - title: batch_norm_with_global_normalization + path: /tf/compat/v1/nn/batch_norm_with_global_normalization + - title: bidirectional_dynamic_rnn + status: deprecated + path: /tf/compat/v1/nn/bidirectional_dynamic_rnn + - title: conv1d + path: /tf/compat/v1/nn/conv1d + - title: conv2d + path: /tf/compat/v1/nn/conv2d + - title: conv2d_backprop_filter + path: /tf/compat/v1/nn/conv2d_backprop_filter + - title: 
conv2d_backprop_input + path: /tf/compat/v1/nn/conv2d_backprop_input + - title: conv2d_transpose + path: /tf/compat/v1/nn/conv2d_transpose + - title: conv3d + path: /tf/compat/v1/nn/conv3d + - title: conv3d_backprop_filter + path: /tf/compat/v1/nn/conv3d_backprop_filter + - title: conv3d_transpose + path: /tf/compat/v1/nn/conv3d_transpose + - title: convolution + path: /tf/compat/v1/nn/convolution + - title: crelu + path: /tf/compat/v1/nn/crelu + - title: ctc_beam_search_decoder + path: /tf/compat/v1/nn/ctc_beam_search_decoder + - title: ctc_loss + path: /tf/compat/v1/nn/ctc_loss + - title: ctc_loss_v2 + path: /tf/compat/v1/nn/ctc_loss_v2 + - title: depthwise_conv2d + path: /tf/compat/v1/nn/depthwise_conv2d + - title: depthwise_conv2d_native + path: /tf/compat/v1/nn/depthwise_conv2d_native + - title: dilation2d + path: /tf/compat/v1/nn/dilation2d + - title: dropout + path: /tf/compat/v1/nn/dropout + - title: dynamic_rnn + status: deprecated + path: /tf/compat/v1/nn/dynamic_rnn + - title: embedding_lookup + path: /tf/compat/v1/nn/embedding_lookup + - title: embedding_lookup_sparse + path: /tf/compat/v1/nn/embedding_lookup_sparse + - title: erosion2d + path: /tf/compat/v1/nn/erosion2d + - title: fractional_avg_pool + status: deprecated + path: /tf/compat/v1/nn/fractional_avg_pool + - title: fractional_max_pool + status: deprecated + path: /tf/compat/v1/nn/fractional_max_pool + - title: fused_batch_norm + path: /tf/compat/v1/nn/fused_batch_norm + - title: max_pool + path: /tf/compat/v1/nn/max_pool + - title: max_pool_with_argmax + path: /tf/compat/v1/nn/max_pool_with_argmax + - title: moments + path: /tf/compat/v1/nn/moments + - title: nce_loss + path: /tf/compat/v1/nn/nce_loss + - title: pool + path: /tf/compat/v1/nn/pool + - title: quantized_avg_pool + path: /tf/compat/v1/nn/quantized_avg_pool + - title: quantized_conv2d + path: /tf/compat/v1/nn/quantized_conv2d + - title: quantized_max_pool + path: /tf/compat/v1/nn/quantized_max_pool + - title: quantized_relu_x + path: /tf/compat/v1/nn/quantized_relu_x + - title: raw_rnn + path: /tf/compat/v1/nn/raw_rnn + - title: relu_layer + path: /tf/compat/v1/nn/relu_layer + - title: safe_embedding_lookup_sparse + path: /tf/compat/v1/nn/safe_embedding_lookup_sparse + - title: sampled_softmax_loss + path: /tf/compat/v1/nn/sampled_softmax_loss + - title: separable_conv2d + path: /tf/compat/v1/nn/separable_conv2d + - title: sigmoid_cross_entropy_with_logits + path: /tf/compat/v1/nn/sigmoid_cross_entropy_with_logits + - title: softmax_cross_entropy_with_logits + status: deprecated + path: /tf/compat/v1/nn/softmax_cross_entropy_with_logits + - title: softmax_cross_entropy_with_logits_v2 + path: /tf/compat/v1/nn/softmax_cross_entropy_with_logits_v2 + - title: sparse_softmax_cross_entropy_with_logits + path: /tf/compat/v1/nn/sparse_softmax_cross_entropy_with_logits + - title: static_bidirectional_rnn + status: deprecated + path: /tf/compat/v1/nn/static_bidirectional_rnn + - title: static_rnn + status: deprecated + path: /tf/compat/v1/nn/static_rnn + - title: static_state_saving_rnn + status: deprecated + path: /tf/compat/v1/nn/static_state_saving_rnn + - title: sufficient_statistics + path: /tf/compat/v1/nn/sufficient_statistics + - title: weighted_cross_entropy_with_logits + path: /tf/compat/v1/nn/weighted_cross_entropy_with_logits + - title: weighted_moments + path: /tf/compat/v1/nn/weighted_moments + - title: xw_plus_b + path: /tf/compat/v1/nn/xw_plus_b + - title: rnn_cell + section: + - title: Overview + path: /tf/compat/v1/nn/rnn_cell + - title: 
BasicLSTMCell + path: /tf/compat/v1/nn/rnn_cell/BasicLSTMCell + - title: BasicRNNCell + path: /tf/compat/v1/nn/rnn_cell/BasicRNNCell + - title: DeviceWrapper + path: /tf/compat/v1/nn/rnn_cell/DeviceWrapper + - title: DropoutWrapper + path: /tf/compat/v1/nn/rnn_cell/DropoutWrapper + - title: GRUCell + path: /tf/compat/v1/nn/rnn_cell/GRUCell + - title: LSTMCell + path: /tf/compat/v1/nn/rnn_cell/LSTMCell + - title: LSTMStateTuple + path: /tf/compat/v1/nn/rnn_cell/LSTMStateTuple + - title: MultiRNNCell + path: /tf/compat/v1/nn/rnn_cell/MultiRNNCell + - title: RNNCell + path: /tf/compat/v1/nn/rnn_cell/RNNCell + - title: ResidualWrapper + path: /tf/compat/v1/nn/rnn_cell/ResidualWrapper + - title: profiler + section: + - title: Overview + path: /tf/compat/v1/profiler + - title: AdviceProto + path: /tf/compat/v1/profiler/AdviceProto + - title: AdviceProto.Checker + path: /tf/compat/v1/profiler/AdviceProto/Checker + - title: AdviceProto.CheckersEntry + path: /tf/compat/v1/profiler/AdviceProto/CheckersEntry + - title: GraphNodeProto + path: /tf/compat/v1/profiler/GraphNodeProto + - title: GraphNodeProto.InputShapesEntry + path: /tf/compat/v1/profiler/GraphNodeProto/InputShapesEntry + - title: MultiGraphNodeProto + path: /tf/compat/v1/profiler/MultiGraphNodeProto + - title: OpLogProto + path: /tf/compat/v1/profiler/OpLogProto + - title: OpLogProto.IdToStringEntry + path: /tf/compat/v1/profiler/OpLogProto/IdToStringEntry + - title: ProfileOptionBuilder + path: /tf/compat/v1/profiler/ProfileOptionBuilder + - title: Profiler + path: /tf/compat/v1/profiler/Profiler + - title: advise + path: /tf/compat/v1/profiler/advise + - title: profile + path: /tf/compat/v1/profiler/profile + - title: write_op_log + path: /tf/compat/v1/profiler/write_op_log + - title: python_io + section: + - title: Overview + path: /tf/compat/v1/python_io + - title: quantization + section: + - title: Overview + path: /tf/compat/v1/quantization + - title: queue + section: + - title: Overview + path: /tf/compat/v1/queue + - title: ragged + section: + - title: Overview + path: /tf/compat/v1/ragged + - title: RaggedTensorValue + path: /tf/compat/v1/ragged/RaggedTensorValue + - title: constant_value + path: /tf/compat/v1/ragged/constant_value + - title: placeholder + path: /tf/compat/v1/ragged/placeholder + - title: random + section: + - title: Overview + path: /tf/compat/v1/random + - title: stateless_multinomial + status: deprecated + path: /tf/compat/v1/random/stateless_multinomial + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/random/experimental + - title: raw_ops + section: + - title: Overview + path: /tf/compat/v1/raw_ops + - title: resource_loader + section: + - title: Overview + path: /tf/compat/v1/resource_loader + - title: get_data_files_path + path: /tf/compat/v1/resource_loader/get_data_files_path + - title: get_path_to_datafile + path: /tf/compat/v1/resource_loader/get_path_to_datafile + - title: get_root_dir_with_all_resources + path: /tf/compat/v1/resource_loader/get_root_dir_with_all_resources + - title: load_resource + path: /tf/compat/v1/resource_loader/load_resource + - title: readahead_file_path + path: /tf/compat/v1/resource_loader/readahead_file_path + - title: saved_model + section: + - title: Overview + path: /tf/compat/v1/saved_model + - title: Builder + path: /tf/compat/v1/saved_model/Builder + - title: build_signature_def + path: /tf/compat/v1/saved_model/build_signature_def + - title: build_tensor_info + status: deprecated + path: 
/tf/compat/v1/saved_model/build_tensor_info + - title: classification_signature_def + path: /tf/compat/v1/saved_model/classification_signature_def + - title: contains_saved_model + path: /tf/compat/v1/saved_model/contains_saved_model + - title: get_tensor_from_tensor_info + status: deprecated + path: /tf/compat/v1/saved_model/get_tensor_from_tensor_info + - title: is_valid_signature + path: /tf/compat/v1/saved_model/is_valid_signature + - title: load + status: deprecated + path: /tf/compat/v1/saved_model/load + - title: main_op_with_restore + status: deprecated + path: /tf/compat/v1/saved_model/main_op_with_restore + - title: predict_signature_def + path: /tf/compat/v1/saved_model/predict_signature_def + - title: regression_signature_def + path: /tf/compat/v1/saved_model/regression_signature_def + - title: simple_save + status: deprecated + path: /tf/compat/v1/saved_model/simple_save + - title: builder + section: + - title: Overview + path: /tf/compat/v1/saved_model/builder + - title: constants + section: + - title: Overview + path: /tf/compat/v1/saved_model/constants + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/saved_model/experimental + - title: loader + section: + - title: Overview + path: /tf/compat/v1/saved_model/loader + - title: main_op + section: + - title: Overview + path: /tf/compat/v1/saved_model/main_op + - title: main_op + status: deprecated + path: /tf/compat/v1/saved_model/main_op/main_op + - title: signature_constants + section: + - title: Overview + path: /tf/compat/v1/saved_model/signature_constants + - title: signature_def_utils + section: + - title: Overview + path: /tf/compat/v1/saved_model/signature_def_utils + - title: MethodNameUpdater + path: /tf/compat/v1/saved_model/signature_def_utils/MethodNameUpdater + - title: tag_constants + section: + - title: Overview + path: /tf/compat/v1/saved_model/tag_constants + - title: utils + section: + - title: Overview + path: /tf/compat/v1/saved_model/utils + - title: sets + section: + - title: Overview + path: /tf/compat/v1/sets + - title: signal + section: + - title: Overview + path: /tf/compat/v1/signal + - title: sparse + section: + - title: Overview + path: /tf/compat/v1/sparse + - title: spectral + section: + - title: Overview + path: /tf/compat/v1/spectral + - title: strings + section: + - title: Overview + path: /tf/compat/v1/strings + - title: length + path: /tf/compat/v1/strings/length + - title: split + path: /tf/compat/v1/strings/split + - title: substr + path: /tf/compat/v1/strings/substr + - title: summary + section: + - title: Overview + path: /tf/compat/v1/summary + - title: FileWriter + path: /tf/compat/v1/summary/FileWriter + - title: FileWriterCache + path: /tf/compat/v1/summary/FileWriterCache + - title: SummaryDescription + path: /tf/compat/v1/summary/SummaryDescription + - title: TaggedRunMetadata + path: /tf/compat/v1/summary/TaggedRunMetadata + - title: all_v2_summary_ops + path: /tf/compat/v1/summary/all_v2_summary_ops + - title: audio + path: /tf/compat/v1/summary/audio + - title: get_summary_description + path: /tf/compat/v1/summary/get_summary_description + - title: histogram + path: /tf/compat/v1/summary/histogram + - title: image + path: /tf/compat/v1/summary/image + - title: initialize + path: /tf/compat/v1/summary/initialize + - title: merge + path: /tf/compat/v1/summary/merge + - title: merge_all + path: /tf/compat/v1/summary/merge_all + - title: scalar + path: /tf/compat/v1/summary/scalar + - title: tensor_summary + path: 
/tf/compat/v1/summary/tensor_summary + - title: text + path: /tf/compat/v1/summary/text + - title: sysconfig + section: + - title: Overview + path: /tf/compat/v1/sysconfig + - title: test + section: + - title: Overview + path: /tf/compat/v1/test + - title: StubOutForTesting + path: /tf/compat/v1/test/StubOutForTesting + - title: assert_equal_graph_def + path: /tf/compat/v1/test/assert_equal_graph_def + - title: compute_gradient + status: deprecated + path: /tf/compat/v1/test/compute_gradient + - title: compute_gradient_error + status: deprecated + path: /tf/compat/v1/test/compute_gradient_error + - title: get_temp_dir + path: /tf/compat/v1/test/get_temp_dir + - title: test_src_dir_path + path: /tf/compat/v1/test/test_src_dir_path + - title: tpu + section: + - title: Overview + path: /tf/compat/v1/tpu + - title: CrossShardOptimizer + path: /tf/compat/v1/tpu/CrossShardOptimizer + - title: PaddingSpec + path: /tf/compat/v1/tpu/PaddingSpec + - title: batch_parallel + path: /tf/compat/v1/tpu/batch_parallel + - title: bfloat16_scope + path: /tf/compat/v1/tpu/bfloat16_scope + - title: core + path: /tf/compat/v1/tpu/core + - title: cross_replica_sum + path: /tf/compat/v1/tpu/cross_replica_sum + - title: initialize_system + path: /tf/compat/v1/tpu/initialize_system + - title: outside_compilation + path: /tf/compat/v1/tpu/outside_compilation + - title: replicate + path: /tf/compat/v1/tpu/replicate + - title: rewrite + path: /tf/compat/v1/tpu/rewrite + - title: shard + path: /tf/compat/v1/tpu/shard + - title: shutdown_system + path: /tf/compat/v1/tpu/shutdown_system + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/tpu/experimental + - title: AdagradParameters + path: /tf/compat/v1/tpu/experimental/AdagradParameters + - title: AdamParameters + path: /tf/compat/v1/tpu/experimental/AdamParameters + - title: FtrlParameters + path: /tf/compat/v1/tpu/experimental/FtrlParameters + - title: StochasticGradientDescentParameters + path: /tf/compat/v1/tpu/experimental/StochasticGradientDescentParameters + - title: embedding_column + path: /tf/compat/v1/tpu/experimental/embedding_column + - title: shared_embedding_columns + path: /tf/compat/v1/tpu/experimental/shared_embedding_columns + - title: train + section: + - title: Overview + path: /tf/compat/v1/train + - title: AdadeltaOptimizer + path: /tf/compat/v1/train/AdadeltaOptimizer + - title: AdagradDAOptimizer + path: /tf/compat/v1/train/AdagradDAOptimizer + - title: AdagradOptimizer + path: /tf/compat/v1/train/AdagradOptimizer + - title: AdamOptimizer + path: /tf/compat/v1/train/AdamOptimizer + - title: Checkpoint + path: /tf/compat/v1/train/Checkpoint + - title: ChiefSessionCreator + path: /tf/compat/v1/train/ChiefSessionCreator + - title: FtrlOptimizer + path: /tf/compat/v1/train/FtrlOptimizer + - title: GradientDescentOptimizer + path: /tf/compat/v1/train/GradientDescentOptimizer + - title: LooperThread + path: /tf/compat/v1/train/LooperThread + - title: MomentumOptimizer + path: /tf/compat/v1/train/MomentumOptimizer + - title: MonitoredSession + path: /tf/compat/v1/train/MonitoredSession + - title: MonitoredSession.StepContext + path: /tf/compat/v1/train/MonitoredSession/StepContext + - title: MonitoredTrainingSession + path: /tf/compat/v1/train/MonitoredTrainingSession + - title: NewCheckpointReader + path: /tf/compat/v1/train/NewCheckpointReader + - title: Optimizer + path: /tf/compat/v1/train/Optimizer + - title: ProximalAdagradOptimizer + path: /tf/compat/v1/train/ProximalAdagradOptimizer + - title: 
ProximalGradientDescentOptimizer + path: /tf/compat/v1/train/ProximalGradientDescentOptimizer + - title: QueueRunner + path: /tf/compat/v1/train/QueueRunner + - title: RMSPropOptimizer + path: /tf/compat/v1/train/RMSPropOptimizer + - title: Saver + path: /tf/compat/v1/train/Saver + - title: SaverDef + path: /tf/compat/v1/train/SaverDef + - title: Scaffold + path: /tf/compat/v1/train/Scaffold + - title: SessionCreator + path: /tf/compat/v1/train/SessionCreator + - title: SessionManager + path: /tf/compat/v1/train/SessionManager + - title: SingularMonitoredSession + path: /tf/compat/v1/train/SingularMonitoredSession + - title: Supervisor + path: /tf/compat/v1/train/Supervisor + - title: SyncReplicasOptimizer + path: /tf/compat/v1/train/SyncReplicasOptimizer + - title: WorkerSessionCreator + path: /tf/compat/v1/train/WorkerSessionCreator + - title: add_queue_runner + status: deprecated + path: /tf/compat/v1/train/add_queue_runner + - title: assert_global_step + path: /tf/compat/v1/train/assert_global_step + - title: basic_train_loop + path: /tf/compat/v1/train/basic_train_loop + - title: batch + status: deprecated + path: /tf/compat/v1/train/batch + - title: batch_join + status: deprecated + path: /tf/compat/v1/train/batch_join + - title: checkpoint_exists + status: deprecated + path: /tf/compat/v1/train/checkpoint_exists + - title: cosine_decay + path: /tf/compat/v1/train/cosine_decay + - title: cosine_decay_restarts + path: /tf/compat/v1/train/cosine_decay_restarts + - title: create_global_step + path: /tf/compat/v1/train/create_global_step + - title: do_quantize_training_on_graphdef + status: deprecated + path: /tf/compat/v1/train/do_quantize_training_on_graphdef + - title: exponential_decay + path: /tf/compat/v1/train/exponential_decay + - title: export_meta_graph + path: /tf/compat/v1/train/export_meta_graph + - title: generate_checkpoint_state_proto + path: /tf/compat/v1/train/generate_checkpoint_state_proto + - title: get_checkpoint_mtimes + status: deprecated + path: /tf/compat/v1/train/get_checkpoint_mtimes + - title: get_global_step + path: /tf/compat/v1/train/get_global_step + - title: get_or_create_global_step + path: /tf/compat/v1/train/get_or_create_global_step + - title: global_step + path: /tf/compat/v1/train/global_step + - title: import_meta_graph + path: /tf/compat/v1/train/import_meta_graph + - title: init_from_checkpoint + path: /tf/compat/v1/train/init_from_checkpoint + - title: input_producer + status: deprecated + path: /tf/compat/v1/train/input_producer + - title: inverse_time_decay + path: /tf/compat/v1/train/inverse_time_decay + - title: limit_epochs + status: deprecated + path: /tf/compat/v1/train/limit_epochs + - title: linear_cosine_decay + path: /tf/compat/v1/train/linear_cosine_decay + - title: maybe_batch + status: deprecated + path: /tf/compat/v1/train/maybe_batch + - title: maybe_batch_join + status: deprecated + path: /tf/compat/v1/train/maybe_batch_join + - title: maybe_shuffle_batch + status: deprecated + path: /tf/compat/v1/train/maybe_shuffle_batch + - title: maybe_shuffle_batch_join + status: deprecated + path: /tf/compat/v1/train/maybe_shuffle_batch_join + - title: natural_exp_decay + path: /tf/compat/v1/train/natural_exp_decay + - title: noisy_linear_cosine_decay + path: /tf/compat/v1/train/noisy_linear_cosine_decay + - title: piecewise_constant + path: /tf/compat/v1/train/piecewise_constant + - title: polynomial_decay + path: /tf/compat/v1/train/polynomial_decay + - title: range_input_producer + status: deprecated + path: 
/tf/compat/v1/train/range_input_producer + - title: remove_checkpoint + status: deprecated + path: /tf/compat/v1/train/remove_checkpoint + - title: replica_device_setter + path: /tf/compat/v1/train/replica_device_setter + - title: sdca_fprint + path: /tf/compat/v1/train/sdca_fprint + - title: sdca_optimizer + path: /tf/compat/v1/train/sdca_optimizer + - title: sdca_shrink_l1 + path: /tf/compat/v1/train/sdca_shrink_l1 + - title: shuffle_batch + status: deprecated + path: /tf/compat/v1/train/shuffle_batch + - title: shuffle_batch_join + status: deprecated + path: /tf/compat/v1/train/shuffle_batch_join + - title: slice_input_producer + status: deprecated + path: /tf/compat/v1/train/slice_input_producer + - title: start_queue_runners + status: deprecated + path: /tf/compat/v1/train/start_queue_runners + - title: string_input_producer + status: deprecated + path: /tf/compat/v1/train/string_input_producer + - title: summary_iterator + path: /tf/compat/v1/train/summary_iterator + - title: update_checkpoint_state + status: deprecated + path: /tf/compat/v1/train/update_checkpoint_state + - title: warm_start + path: /tf/compat/v1/train/warm_start + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/train/experimental + - title: MixedPrecisionLossScaleOptimizer + path: /tf/compat/v1/train/experimental/MixedPrecisionLossScaleOptimizer + - title: disable_mixed_precision_graph_rewrite + path: /tf/compat/v1/train/experimental/disable_mixed_precision_graph_rewrite + - title: enable_mixed_precision_graph_rewrite + path: /tf/compat/v1/train/experimental/enable_mixed_precision_graph_rewrite + - title: queue_runner + section: + - title: Overview + path: /tf/compat/v1/train/queue_runner + - title: user_ops + section: + - title: Overview + path: /tf/compat/v1/user_ops + - title: my_fact + path: /tf/compat/v1/user_ops/my_fact + - title: version + section: + - title: Overview + path: /tf/compat/v1/version + - title: xla + section: + - title: Overview + path: /tf/compat/v1/xla + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/compat/v1/xla/experimental +- title: tf.config + section: + - title: Overview + path: /tf/config + - title: LogicalDevice + path: /tf/config/LogicalDevice + - title: LogicalDeviceConfiguration + path: /tf/config/LogicalDeviceConfiguration + - title: PhysicalDevice + path: /tf/config/PhysicalDevice + - title: experimental_connect_to_cluster + status: experimental + path: /tf/config/experimental_connect_to_cluster + - title: experimental_connect_to_host + status: experimental + path: /tf/config/experimental_connect_to_host + - title: experimental_functions_run_eagerly + status: experimental + path: /tf/config/experimental_functions_run_eagerly + - title: experimental_run_functions_eagerly + status: experimental + path: /tf/config/experimental_run_functions_eagerly + - title: get_logical_device_configuration + path: /tf/config/get_logical_device_configuration + - title: get_soft_device_placement + path: /tf/config/get_soft_device_placement + - title: get_visible_devices + path: /tf/config/get_visible_devices + - title: list_logical_devices + path: /tf/config/list_logical_devices + - title: list_physical_devices + path: /tf/config/list_physical_devices + - title: set_logical_device_configuration + path: /tf/config/set_logical_device_configuration + - title: set_soft_device_placement + path: /tf/config/set_soft_device_placement + - title: set_visible_devices + path: /tf/config/set_visible_devices + - title: 
experimental + status: experimental + section: + - title: Overview + path: /tf/config/experimental + - title: ClusterDeviceFilters + path: /tf/config/experimental/ClusterDeviceFilters + - title: disable_mlir_bridge + path: /tf/config/experimental/disable_mlir_bridge + - title: enable_mlir_bridge + path: /tf/config/experimental/enable_mlir_bridge + - title: get_device_policy + path: /tf/config/experimental/get_device_policy + - title: get_memory_growth + path: /tf/config/experimental/get_memory_growth + - title: get_synchronous_execution + path: /tf/config/experimental/get_synchronous_execution + - title: set_device_policy + path: /tf/config/experimental/set_device_policy + - title: set_memory_growth + path: /tf/config/experimental/set_memory_growth + - title: set_synchronous_execution + path: /tf/config/experimental/set_synchronous_execution + - title: optimizer + section: + - title: Overview + path: /tf/config/optimizer + - title: get_experimental_options + status: experimental + path: /tf/config/optimizer/get_experimental_options + - title: get_jit + path: /tf/config/optimizer/get_jit + - title: set_experimental_options + status: experimental + path: /tf/config/optimizer/set_experimental_options + - title: set_jit + path: /tf/config/optimizer/set_jit + - title: threading + section: + - title: Overview + path: /tf/config/threading + - title: get_inter_op_parallelism_threads + path: /tf/config/threading/get_inter_op_parallelism_threads + - title: get_intra_op_parallelism_threads + path: /tf/config/threading/get_intra_op_parallelism_threads + - title: set_inter_op_parallelism_threads + path: /tf/config/threading/set_inter_op_parallelism_threads + - title: set_intra_op_parallelism_threads + path: /tf/config/threading/set_intra_op_parallelism_threads +- title: tf.data + section: + - title: Overview + path: /tf/data + - title: Dataset + path: /tf/data/Dataset + - title: DatasetSpec + path: /tf/data/DatasetSpec + - title: FixedLengthRecordDataset + path: /tf/data/FixedLengthRecordDataset + - title: Options + path: /tf/data/Options + - title: TFRecordDataset + path: /tf/data/TFRecordDataset + - title: TextLineDataset + path: /tf/data/TextLineDataset + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/data/experimental + - title: AutoShardPolicy + path: /tf/data/experimental/AutoShardPolicy + - title: CheckpointInputPipelineHook + path: /tf/data/experimental/CheckpointInputPipelineHook + - title: Counter + path: /tf/data/experimental/Counter + - title: CsvDataset + path: /tf/data/experimental/CsvDataset + - title: DistributeOptions + path: /tf/data/experimental/DistributeOptions + - title: MapVectorizationOptions + path: /tf/data/experimental/MapVectorizationOptions + - title: OptimizationOptions + path: /tf/data/experimental/OptimizationOptions + - title: Optional + path: /tf/data/experimental/Optional + - title: RandomDataset + path: /tf/data/experimental/RandomDataset + - title: Reducer + path: /tf/data/experimental/Reducer + - title: SqlDataset + path: /tf/data/experimental/SqlDataset + - title: StatsAggregator + path: /tf/data/experimental/StatsAggregator + - title: StatsOptions + path: /tf/data/experimental/StatsOptions + - title: TFRecordWriter + path: /tf/data/experimental/TFRecordWriter + - title: ThreadingOptions + path: /tf/data/experimental/ThreadingOptions + - title: assert_cardinality + path: /tf/data/experimental/assert_cardinality + - title: bucket_by_sequence_length + path: /tf/data/experimental/bucket_by_sequence_length + - title: 
bytes_produced_stats + path: /tf/data/experimental/bytes_produced_stats + - title: cardinality + path: /tf/data/experimental/cardinality + - title: choose_from_datasets + path: /tf/data/experimental/choose_from_datasets + - title: copy_to_device + path: /tf/data/experimental/copy_to_device + - title: dense_to_ragged_batch + path: /tf/data/experimental/dense_to_ragged_batch + - title: dense_to_sparse_batch + path: /tf/data/experimental/dense_to_sparse_batch + - title: enumerate_dataset + status: deprecated + path: /tf/data/experimental/enumerate_dataset + - title: from_variant + path: /tf/data/experimental/from_variant + - title: get_next_as_optional + path: /tf/data/experimental/get_next_as_optional + - title: get_single_element + path: /tf/data/experimental/get_single_element + - title: get_structure + path: /tf/data/experimental/get_structure + - title: group_by_reducer + path: /tf/data/experimental/group_by_reducer + - title: group_by_window + path: /tf/data/experimental/group_by_window + - title: ignore_errors + path: /tf/data/experimental/ignore_errors + - title: latency_stats + path: /tf/data/experimental/latency_stats + - title: make_batched_features_dataset + path: /tf/data/experimental/make_batched_features_dataset + - title: make_csv_dataset + path: /tf/data/experimental/make_csv_dataset + - title: make_saveable_from_iterator + path: /tf/data/experimental/make_saveable_from_iterator + - title: map_and_batch + status: deprecated + path: /tf/data/experimental/map_and_batch + - title: parallel_interleave + status: deprecated + path: /tf/data/experimental/parallel_interleave + - title: parse_example_dataset + path: /tf/data/experimental/parse_example_dataset + - title: prefetch_to_device + path: /tf/data/experimental/prefetch_to_device + - title: rejection_resample + path: /tf/data/experimental/rejection_resample + - title: sample_from_datasets + path: /tf/data/experimental/sample_from_datasets + - title: scan + path: /tf/data/experimental/scan + - title: shuffle_and_repeat + status: deprecated + path: /tf/data/experimental/shuffle_and_repeat + - title: take_while + path: /tf/data/experimental/take_while + - title: to_variant + path: /tf/data/experimental/to_variant + - title: unbatch + status: deprecated + path: /tf/data/experimental/unbatch + - title: unique + path: /tf/data/experimental/unique +- title: tf.debugging + section: + - title: Overview + path: /tf/debugging + - title: Assert + path: /tf/debugging/Assert + - title: assert_all_finite + path: /tf/debugging/assert_all_finite + - title: assert_equal + path: /tf/debugging/assert_equal + - title: assert_greater + path: /tf/debugging/assert_greater + - title: assert_greater_equal + path: /tf/debugging/assert_greater_equal + - title: assert_integer + path: /tf/debugging/assert_integer + - title: assert_less + path: /tf/debugging/assert_less + - title: assert_less_equal + path: /tf/debugging/assert_less_equal + - title: assert_near + path: /tf/debugging/assert_near + - title: assert_negative + path: /tf/debugging/assert_negative + - title: assert_non_negative + path: /tf/debugging/assert_non_negative + - title: assert_non_positive + path: /tf/debugging/assert_non_positive + - title: assert_none_equal + path: /tf/debugging/assert_none_equal + - title: assert_positive + path: /tf/debugging/assert_positive + - title: assert_proper_iterable + path: /tf/debugging/assert_proper_iterable + - title: assert_rank + path: /tf/debugging/assert_rank + - title: assert_rank_at_least + path: /tf/debugging/assert_rank_at_least + - title: 
assert_rank_in + path: /tf/debugging/assert_rank_in + - title: assert_same_float_dtype + path: /tf/debugging/assert_same_float_dtype + - title: assert_scalar + path: /tf/debugging/assert_scalar + - title: assert_shapes + path: /tf/debugging/assert_shapes + - title: assert_type + path: /tf/debugging/assert_type + - title: check_numerics + path: /tf/debugging/check_numerics + - title: disable_check_numerics + path: /tf/debugging/disable_check_numerics + - title: enable_check_numerics + path: /tf/debugging/enable_check_numerics + - title: get_log_device_placement + path: /tf/debugging/get_log_device_placement + - title: is_numeric_tensor + path: /tf/debugging/is_numeric_tensor + - title: set_log_device_placement + path: /tf/debugging/set_log_device_placement + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/debugging/experimental + - title: disable_dump_debug_info + path: /tf/debugging/experimental/disable_dump_debug_info + - title: enable_dump_debug_info + path: /tf/debugging/experimental/enable_dump_debug_info +- title: tf.distribute + section: + - title: Overview + path: /tf/distribute + - title: CrossDeviceOps + path: /tf/distribute/CrossDeviceOps + - title: DistributedValues + path: /tf/distribute/DistributedValues + - title: HierarchicalCopyAllReduce + path: /tf/distribute/HierarchicalCopyAllReduce + - title: InputContext + path: /tf/distribute/InputContext + - title: InputReplicationMode + path: /tf/distribute/InputReplicationMode + - title: MirroredStrategy + path: /tf/distribute/MirroredStrategy + - title: NcclAllReduce + path: /tf/distribute/NcclAllReduce + - title: OneDeviceStrategy + path: /tf/distribute/OneDeviceStrategy + - title: ReduceOp + path: /tf/distribute/ReduceOp + - title: ReductionToOneDevice + path: /tf/distribute/ReductionToOneDevice + - title: ReplicaContext + path: /tf/distribute/ReplicaContext + - title: RunOptions + path: /tf/distribute/RunOptions + - title: Server + path: /tf/distribute/Server + - title: Strategy + path: /tf/distribute/Strategy + - title: StrategyExtended + path: /tf/distribute/StrategyExtended + - title: experimental_set_strategy + status: experimental + path: /tf/distribute/experimental_set_strategy + - title: get_replica_context + path: /tf/distribute/get_replica_context + - title: get_strategy + path: /tf/distribute/get_strategy + - title: has_strategy + path: /tf/distribute/has_strategy + - title: in_cross_replica_context + path: /tf/distribute/in_cross_replica_context + - title: cluster_resolver + section: + - title: Overview + path: /tf/distribute/cluster_resolver + - title: ClusterResolver + path: /tf/distribute/cluster_resolver/ClusterResolver + - title: GCEClusterResolver + path: /tf/distribute/cluster_resolver/GCEClusterResolver + - title: KubernetesClusterResolver + path: /tf/distribute/cluster_resolver/KubernetesClusterResolver + - title: SimpleClusterResolver + path: /tf/distribute/cluster_resolver/SimpleClusterResolver + - title: SlurmClusterResolver + path: /tf/distribute/cluster_resolver/SlurmClusterResolver + - title: TFConfigClusterResolver + path: /tf/distribute/cluster_resolver/TFConfigClusterResolver + - title: TPUClusterResolver + path: /tf/distribute/cluster_resolver/TPUClusterResolver + - title: UnionResolver + path: /tf/distribute/cluster_resolver/UnionResolver + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/distribute/experimental + - title: CentralStorageStrategy + path: /tf/distribute/experimental/CentralStorageStrategy + - title: 
CollectiveCommunication + path: /tf/distribute/experimental/CollectiveCommunication + - title: CollectiveHints + path: /tf/distribute/experimental/CollectiveHints + - title: MultiWorkerMirroredStrategy + path: /tf/distribute/experimental/MultiWorkerMirroredStrategy + - title: ParameterServerStrategy + path: /tf/distribute/experimental/ParameterServerStrategy + - title: TPUStrategy + path: /tf/distribute/experimental/TPUStrategy + - title: ValueContext + path: /tf/distribute/experimental/ValueContext +- title: tf.dtypes + section: + - title: Overview + path: /tf/dtypes + - title: DType + path: /tf/dtypes/DType + - title: as_dtype + path: /tf/dtypes/as_dtype + - title: complex + path: /tf/dtypes/complex + - title: saturate_cast + path: /tf/dtypes/saturate_cast +- title: tf.errors + section: + - title: Overview + path: /tf/errors + - title: AbortedError + path: /tf/errors/AbortedError + - title: AlreadyExistsError + path: /tf/errors/AlreadyExistsError + - title: CancelledError + path: /tf/errors/CancelledError + - title: DataLossError + path: /tf/errors/DataLossError + - title: DeadlineExceededError + path: /tf/errors/DeadlineExceededError + - title: FailedPreconditionError + path: /tf/errors/FailedPreconditionError + - title: InternalError + path: /tf/errors/InternalError + - title: InvalidArgumentError + path: /tf/errors/InvalidArgumentError + - title: NotFoundError + path: /tf/errors/NotFoundError + - title: OpError + path: /tf/errors/OpError + - title: OutOfRangeError + path: /tf/errors/OutOfRangeError + - title: PermissionDeniedError + path: /tf/errors/PermissionDeniedError + - title: ResourceExhaustedError + path: /tf/errors/ResourceExhaustedError + - title: UnauthenticatedError + path: /tf/errors/UnauthenticatedError + - title: UnavailableError + path: /tf/errors/UnavailableError + - title: UnimplementedError + path: /tf/errors/UnimplementedError + - title: UnknownError + path: /tf/errors/UnknownError +- title: tf.estimator + section: + - title: Overview + path: /tf/estimator + - title: BaselineClassifier + path: /tf/estimator/BaselineClassifier + - title: BaselineEstimator + path: /tf/estimator/BaselineEstimator + - title: BaselineRegressor + path: /tf/estimator/BaselineRegressor + - title: BestExporter + path: /tf/estimator/BestExporter + - title: BinaryClassHead + path: /tf/estimator/BinaryClassHead + - title: BoostedTreesClassifier + path: /tf/estimator/BoostedTreesClassifier + - title: BoostedTreesEstimator + path: /tf/estimator/BoostedTreesEstimator + - title: BoostedTreesRegressor + path: /tf/estimator/BoostedTreesRegressor + - title: CheckpointSaverHook + path: /tf/estimator/CheckpointSaverHook + - title: CheckpointSaverListener + path: /tf/estimator/CheckpointSaverListener + - title: DNNClassifier + path: /tf/estimator/DNNClassifier + - title: DNNEstimator + path: /tf/estimator/DNNEstimator + - title: DNNLinearCombinedClassifier + path: /tf/estimator/DNNLinearCombinedClassifier + - title: DNNLinearCombinedEstimator + path: /tf/estimator/DNNLinearCombinedEstimator + - title: DNNLinearCombinedRegressor + path: /tf/estimator/DNNLinearCombinedRegressor + - title: DNNRegressor + path: /tf/estimator/DNNRegressor + - title: Estimator + path: /tf/estimator/Estimator + - title: EstimatorSpec + path: /tf/estimator/EstimatorSpec + - title: EvalSpec + path: /tf/estimator/EvalSpec + - title: Exporter + path: /tf/estimator/Exporter + - title: FeedFnHook + path: /tf/estimator/FeedFnHook + - title: FinalExporter + path: /tf/estimator/FinalExporter + - title: FinalOpsHook + path: 
/tf/estimator/FinalOpsHook + - title: GlobalStepWaiterHook + path: /tf/estimator/GlobalStepWaiterHook + - title: Head + path: /tf/estimator/Head + - title: LatestExporter + path: /tf/estimator/LatestExporter + - title: LinearClassifier + path: /tf/estimator/LinearClassifier + - title: LinearEstimator + path: /tf/estimator/LinearEstimator + - title: LinearRegressor + path: /tf/estimator/LinearRegressor + - title: LoggingTensorHook + path: /tf/estimator/LoggingTensorHook + - title: LogisticRegressionHead + path: /tf/estimator/LogisticRegressionHead + - title: ModeKeys + path: /tf/estimator/ModeKeys + - title: MultiClassHead + path: /tf/estimator/MultiClassHead + - title: MultiHead + path: /tf/estimator/MultiHead + - title: MultiLabelHead + path: /tf/estimator/MultiLabelHead + - title: NanLossDuringTrainingError + path: /tf/estimator/NanLossDuringTrainingError + - title: NanTensorHook + path: /tf/estimator/NanTensorHook + - title: PoissonRegressionHead + path: /tf/estimator/PoissonRegressionHead + - title: ProfilerHook + path: /tf/estimator/ProfilerHook + - title: RegressionHead + path: /tf/estimator/RegressionHead + - title: RunConfig + path: /tf/estimator/RunConfig + - title: SecondOrStepTimer + path: /tf/estimator/SecondOrStepTimer + - title: SessionRunArgs + path: /tf/estimator/SessionRunArgs + - title: SessionRunContext + path: /tf/estimator/SessionRunContext + - title: SessionRunHook + path: /tf/estimator/SessionRunHook + - title: SessionRunValues + path: /tf/estimator/SessionRunValues + - title: StepCounterHook + path: /tf/estimator/StepCounterHook + - title: StopAtStepHook + path: /tf/estimator/StopAtStepHook + - title: SummarySaverHook + path: /tf/estimator/SummarySaverHook + - title: TrainSpec + path: /tf/estimator/TrainSpec + - title: VocabInfo + path: /tf/estimator/VocabInfo + - title: WarmStartSettings + path: /tf/estimator/WarmStartSettings + - title: add_metrics + path: /tf/estimator/add_metrics + - title: classifier_parse_example_spec + path: /tf/estimator/classifier_parse_example_spec + - title: regressor_parse_example_spec + path: /tf/estimator/regressor_parse_example_spec + - title: train_and_evaluate + path: /tf/estimator/train_and_evaluate + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/estimator/experimental + - title: InMemoryEvaluatorHook + path: /tf/estimator/experimental/InMemoryEvaluatorHook + - title: LinearSDCA + path: /tf/estimator/experimental/LinearSDCA + - title: RNNClassifier + path: /tf/estimator/experimental/RNNClassifier + - title: RNNEstimator + path: /tf/estimator/experimental/RNNEstimator + - title: build_raw_supervised_input_receiver_fn + path: /tf/estimator/experimental/build_raw_supervised_input_receiver_fn + - title: call_logit_fn + path: /tf/estimator/experimental/call_logit_fn + - title: make_early_stopping_hook + path: /tf/estimator/experimental/make_early_stopping_hook + - title: make_stop_at_checkpoint_step_hook + path: /tf/estimator/experimental/make_stop_at_checkpoint_step_hook + - title: stop_if_higher_hook + path: /tf/estimator/experimental/stop_if_higher_hook + - title: stop_if_lower_hook + path: /tf/estimator/experimental/stop_if_lower_hook + - title: stop_if_no_decrease_hook + path: /tf/estimator/experimental/stop_if_no_decrease_hook + - title: stop_if_no_increase_hook + path: /tf/estimator/experimental/stop_if_no_increase_hook + - title: export + section: + - title: Overview + path: /tf/estimator/export + - title: ClassificationOutput + path: /tf/estimator/export/ClassificationOutput + - title: 
ExportOutput + path: /tf/estimator/export/ExportOutput + - title: PredictOutput + path: /tf/estimator/export/PredictOutput + - title: RegressionOutput + path: /tf/estimator/export/RegressionOutput + - title: ServingInputReceiver + path: /tf/estimator/export/ServingInputReceiver + - title: TensorServingInputReceiver + path: /tf/estimator/export/TensorServingInputReceiver + - title: build_parsing_serving_input_receiver_fn + path: /tf/estimator/export/build_parsing_serving_input_receiver_fn + - title: build_raw_serving_input_receiver_fn + path: /tf/estimator/export/build_raw_serving_input_receiver_fn +- title: tf.experimental + status: experimental + section: + - title: Overview + path: /tf/experimental + - title: async_clear_error + path: /tf/experimental/async_clear_error + - title: async_scope + path: /tf/experimental/async_scope + - title: function_executor_type + path: /tf/experimental/function_executor_type + - title: dlpack + section: + - title: Overview + path: /tf/experimental/dlpack + - title: from_dlpack + path: /tf/experimental/dlpack/from_dlpack + - title: to_dlpack + path: /tf/experimental/dlpack/to_dlpack + - title: tensorrt + section: + - title: Overview + path: /tf/experimental/tensorrt + - title: ConversionParams + path: /tf/experimental/tensorrt/ConversionParams + - title: Converter + path: /tf/experimental/tensorrt/Converter +- title: tf.feature_column + section: + - title: Overview + path: /tf/feature_column + - title: bucketized_column + path: /tf/feature_column/bucketized_column + - title: categorical_column_with_hash_bucket + path: /tf/feature_column/categorical_column_with_hash_bucket + - title: categorical_column_with_identity + path: /tf/feature_column/categorical_column_with_identity + - title: categorical_column_with_vocabulary_file + path: /tf/feature_column/categorical_column_with_vocabulary_file + - title: categorical_column_with_vocabulary_list + path: /tf/feature_column/categorical_column_with_vocabulary_list + - title: crossed_column + path: /tf/feature_column/crossed_column + - title: embedding_column + path: /tf/feature_column/embedding_column + - title: indicator_column + path: /tf/feature_column/indicator_column + - title: make_parse_example_spec + path: /tf/feature_column/make_parse_example_spec + - title: numeric_column + path: /tf/feature_column/numeric_column + - title: sequence_categorical_column_with_hash_bucket + path: /tf/feature_column/sequence_categorical_column_with_hash_bucket + - title: sequence_categorical_column_with_identity + path: /tf/feature_column/sequence_categorical_column_with_identity + - title: sequence_categorical_column_with_vocabulary_file + path: /tf/feature_column/sequence_categorical_column_with_vocabulary_file + - title: sequence_categorical_column_with_vocabulary_list + path: /tf/feature_column/sequence_categorical_column_with_vocabulary_list + - title: sequence_numeric_column + path: /tf/feature_column/sequence_numeric_column + - title: shared_embeddings + path: /tf/feature_column/shared_embeddings + - title: weighted_categorical_column + path: /tf/feature_column/weighted_categorical_column +- title: tf.graph_util + section: + - title: Overview + path: /tf/graph_util + - title: import_graph_def + path: /tf/graph_util/import_graph_def +- title: tf.image + section: + - title: Overview + path: /tf/image + - title: ResizeMethod + path: /tf/image/ResizeMethod + - title: adjust_brightness + path: /tf/image/adjust_brightness + - title: adjust_contrast + path: /tf/image/adjust_contrast + - title: adjust_gamma + path: 
/tf/image/adjust_gamma + - title: adjust_hue + path: /tf/image/adjust_hue + - title: adjust_jpeg_quality + path: /tf/image/adjust_jpeg_quality + - title: adjust_saturation + path: /tf/image/adjust_saturation + - title: central_crop + path: /tf/image/central_crop + - title: combined_non_max_suppression + path: /tf/image/combined_non_max_suppression + - title: convert_image_dtype + path: /tf/image/convert_image_dtype + - title: crop_and_resize + path: /tf/image/crop_and_resize + - title: crop_to_bounding_box + path: /tf/image/crop_to_bounding_box + - title: draw_bounding_boxes + path: /tf/image/draw_bounding_boxes + - title: encode_png + path: /tf/image/encode_png + - title: extract_glimpse + path: /tf/image/extract_glimpse + - title: extract_patches + path: /tf/image/extract_patches + - title: flip_left_right + path: /tf/image/flip_left_right + - title: flip_up_down + path: /tf/image/flip_up_down + - title: generate_bounding_box_proposals + path: /tf/image/generate_bounding_box_proposals + - title: grayscale_to_rgb + path: /tf/image/grayscale_to_rgb + - title: hsv_to_rgb + path: /tf/image/hsv_to_rgb + - title: image_gradients + path: /tf/image/image_gradients + - title: non_max_suppression + path: /tf/image/non_max_suppression + - title: non_max_suppression_overlaps + path: /tf/image/non_max_suppression_overlaps + - title: non_max_suppression_padded + path: /tf/image/non_max_suppression_padded + - title: non_max_suppression_with_scores + path: /tf/image/non_max_suppression_with_scores + - title: pad_to_bounding_box + path: /tf/image/pad_to_bounding_box + - title: per_image_standardization + path: /tf/image/per_image_standardization + - title: psnr + path: /tf/image/psnr + - title: random_brightness + path: /tf/image/random_brightness + - title: random_contrast + path: /tf/image/random_contrast + - title: random_crop + path: /tf/image/random_crop + - title: random_flip_left_right + path: /tf/image/random_flip_left_right + - title: random_flip_up_down + path: /tf/image/random_flip_up_down + - title: random_hue + path: /tf/image/random_hue + - title: random_jpeg_quality + path: /tf/image/random_jpeg_quality + - title: random_saturation + path: /tf/image/random_saturation + - title: resize + path: /tf/image/resize + - title: resize_with_crop_or_pad + path: /tf/image/resize_with_crop_or_pad + - title: resize_with_pad + path: /tf/image/resize_with_pad + - title: rgb_to_grayscale + path: /tf/image/rgb_to_grayscale + - title: rgb_to_hsv + path: /tf/image/rgb_to_hsv + - title: rgb_to_yiq + path: /tf/image/rgb_to_yiq + - title: rgb_to_yuv + path: /tf/image/rgb_to_yuv + - title: rot90 + path: /tf/image/rot90 + - title: sample_distorted_bounding_box + path: /tf/image/sample_distorted_bounding_box + - title: sobel_edges + path: /tf/image/sobel_edges + - title: ssim + path: /tf/image/ssim + - title: ssim_multiscale + path: /tf/image/ssim_multiscale + - title: total_variation + path: /tf/image/total_variation + - title: transpose + path: /tf/image/transpose + - title: yiq_to_rgb + path: /tf/image/yiq_to_rgb + - title: yuv_to_rgb + path: /tf/image/yuv_to_rgb +- title: tf.io + section: + - title: Overview + path: /tf/io + - title: FixedLenFeature + path: /tf/io/FixedLenFeature + - title: FixedLenSequenceFeature + path: /tf/io/FixedLenSequenceFeature + - title: RaggedFeature + path: /tf/io/RaggedFeature + - title: RaggedFeature.RowLengths + path: /tf/io/RaggedFeature/RowLengths + - title: RaggedFeature.RowLimits + path: /tf/io/RaggedFeature/RowLimits + - title: RaggedFeature.RowSplits + path: 
/tf/io/RaggedFeature/RowSplits + - title: RaggedFeature.RowStarts + path: /tf/io/RaggedFeature/RowStarts + - title: RaggedFeature.UniformRowLength + path: /tf/io/RaggedFeature/UniformRowLength + - title: RaggedFeature.ValueRowIds + path: /tf/io/RaggedFeature/ValueRowIds + - title: SparseFeature + path: /tf/io/SparseFeature + - title: TFRecordOptions + path: /tf/io/TFRecordOptions + - title: TFRecordWriter + path: /tf/io/TFRecordWriter + - title: VarLenFeature + path: /tf/io/VarLenFeature + - title: decode_and_crop_jpeg + path: /tf/io/decode_and_crop_jpeg + - title: decode_base64 + path: /tf/io/decode_base64 + - title: decode_bmp + path: /tf/io/decode_bmp + - title: decode_compressed + path: /tf/io/decode_compressed + - title: decode_csv + path: /tf/io/decode_csv + - title: decode_gif + path: /tf/io/decode_gif + - title: decode_image + path: /tf/io/decode_image + - title: decode_jpeg + path: /tf/io/decode_jpeg + - title: decode_json_example + path: /tf/io/decode_json_example + - title: decode_png + path: /tf/io/decode_png + - title: decode_proto + path: /tf/io/decode_proto + - title: decode_raw + path: /tf/io/decode_raw + - title: deserialize_many_sparse + path: /tf/io/deserialize_many_sparse + - title: encode_base64 + path: /tf/io/encode_base64 + - title: encode_jpeg + path: /tf/io/encode_jpeg + - title: encode_proto + path: /tf/io/encode_proto + - title: extract_jpeg_shape + path: /tf/io/extract_jpeg_shape + - title: is_jpeg + path: /tf/io/is_jpeg + - title: match_filenames_once + path: /tf/io/match_filenames_once + - title: matching_files + path: /tf/io/matching_files + - title: parse_example + path: /tf/io/parse_example + - title: parse_sequence_example + path: /tf/io/parse_sequence_example + - title: parse_single_example + path: /tf/io/parse_single_example + - title: parse_single_sequence_example + path: /tf/io/parse_single_sequence_example + - title: parse_tensor + path: /tf/io/parse_tensor + - title: read_file + path: /tf/io/read_file + - title: serialize_many_sparse + path: /tf/io/serialize_many_sparse + - title: serialize_sparse + path: /tf/io/serialize_sparse + - title: serialize_tensor + path: /tf/io/serialize_tensor + - title: write_file + path: /tf/io/write_file + - title: write_graph + path: /tf/io/write_graph + - title: gfile + section: + - title: Overview + path: /tf/io/gfile + - title: GFile + path: /tf/io/gfile/GFile + - title: copy + path: /tf/io/gfile/copy + - title: exists + path: /tf/io/gfile/exists + - title: glob + path: /tf/io/gfile/glob + - title: isdir + path: /tf/io/gfile/isdir + - title: listdir + path: /tf/io/gfile/listdir + - title: makedirs + path: /tf/io/gfile/makedirs + - title: mkdir + path: /tf/io/gfile/mkdir + - title: remove + path: /tf/io/gfile/remove + - title: rename + path: /tf/io/gfile/rename + - title: rmtree + path: /tf/io/gfile/rmtree + - title: stat + path: /tf/io/gfile/stat + - title: walk + path: /tf/io/gfile/walk +- title: tf.keras + section: + - title: Overview + path: /tf/keras + - title: Input + path: /tf/keras/Input + - title: Model + path: /tf/keras/Model + - title: Sequential + path: /tf/keras/Sequential + - title: activations + section: + - title: Overview + path: /tf/keras/activations + - title: deserialize + path: /tf/keras/activations/deserialize + - title: elu + path: /tf/keras/activations/elu + - title: exponential + path: /tf/keras/activations/exponential + - title: get + path: /tf/keras/activations/get + - title: hard_sigmoid + path: /tf/keras/activations/hard_sigmoid + - title: linear + path: /tf/keras/activations/linear + - 
title: relu + path: /tf/keras/activations/relu + - title: selu + path: /tf/keras/activations/selu + - title: serialize + path: /tf/keras/activations/serialize + - title: sigmoid + path: /tf/keras/activations/sigmoid + - title: softmax + path: /tf/keras/activations/softmax + - title: softplus + path: /tf/keras/activations/softplus + - title: softsign + path: /tf/keras/activations/softsign + - title: swish + path: /tf/keras/activations/swish + - title: tanh + path: /tf/keras/activations/tanh + - title: applications + section: + - title: Overview + path: /tf/keras/applications + - title: DenseNet121 + path: /tf/keras/applications/DenseNet121 + - title: DenseNet169 + path: /tf/keras/applications/DenseNet169 + - title: DenseNet201 + path: /tf/keras/applications/DenseNet201 + - title: InceptionResNetV2 + path: /tf/keras/applications/InceptionResNetV2 + - title: InceptionV3 + path: /tf/keras/applications/InceptionV3 + - title: MobileNet + path: /tf/keras/applications/MobileNet + - title: MobileNetV2 + path: /tf/keras/applications/MobileNetV2 + - title: NASNetLarge + path: /tf/keras/applications/NASNetLarge + - title: NASNetMobile + path: /tf/keras/applications/NASNetMobile + - title: ResNet101 + path: /tf/keras/applications/ResNet101 + - title: ResNet101V2 + path: /tf/keras/applications/ResNet101V2 + - title: ResNet152 + path: /tf/keras/applications/ResNet152 + - title: ResNet152V2 + path: /tf/keras/applications/ResNet152V2 + - title: ResNet50 + path: /tf/keras/applications/ResNet50 + - title: ResNet50V2 + path: /tf/keras/applications/ResNet50V2 + - title: VGG16 + path: /tf/keras/applications/VGG16 + - title: VGG19 + path: /tf/keras/applications/VGG19 + - title: Xception + path: /tf/keras/applications/Xception + - title: densenet + section: + - title: Overview + path: /tf/keras/applications/densenet + - title: decode_predictions + path: /tf/keras/applications/densenet/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/densenet/preprocess_input + - title: imagenet_utils + section: + - title: Overview + path: /tf/keras/applications/imagenet_utils + - title: decode_predictions + path: /tf/keras/applications/imagenet_utils/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/imagenet_utils/preprocess_input + - title: inception_resnet_v2 + section: + - title: Overview + path: /tf/keras/applications/inception_resnet_v2 + - title: decode_predictions + path: /tf/keras/applications/inception_resnet_v2/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/inception_resnet_v2/preprocess_input + - title: inception_v3 + section: + - title: Overview + path: /tf/keras/applications/inception_v3 + - title: decode_predictions + path: /tf/keras/applications/inception_v3/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/inception_v3/preprocess_input + - title: mobilenet + section: + - title: Overview + path: /tf/keras/applications/mobilenet + - title: decode_predictions + path: /tf/keras/applications/mobilenet/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/mobilenet/preprocess_input + - title: mobilenet_v2 + section: + - title: Overview + path: /tf/keras/applications/mobilenet_v2 + - title: decode_predictions + path: /tf/keras/applications/mobilenet_v2/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/mobilenet_v2/preprocess_input + - title: nasnet + section: + - title: Overview + path: /tf/keras/applications/nasnet + - title: decode_predictions + path: 
/tf/keras/applications/nasnet/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/nasnet/preprocess_input + - title: resnet + section: + - title: Overview + path: /tf/keras/applications/resnet + - title: decode_predictions + path: /tf/keras/applications/resnet/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/resnet/preprocess_input + - title: resnet50 + section: + - title: Overview + path: /tf/keras/applications/resnet50 + - title: resnet_v2 + section: + - title: Overview + path: /tf/keras/applications/resnet_v2 + - title: decode_predictions + path: /tf/keras/applications/resnet_v2/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/resnet_v2/preprocess_input + - title: vgg16 + section: + - title: Overview + path: /tf/keras/applications/vgg16 + - title: decode_predictions + path: /tf/keras/applications/vgg16/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/vgg16/preprocess_input + - title: vgg19 + section: + - title: Overview + path: /tf/keras/applications/vgg19 + - title: decode_predictions + path: /tf/keras/applications/vgg19/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/vgg19/preprocess_input + - title: xception + section: + - title: Overview + path: /tf/keras/applications/xception + - title: decode_predictions + path: /tf/keras/applications/xception/decode_predictions + - title: preprocess_input + path: /tf/keras/applications/xception/preprocess_input + - title: backend + section: + - title: Overview + path: /tf/keras/backend + - title: abs + path: /tf/keras/backend/abs + - title: all + path: /tf/keras/backend/all + - title: any + path: /tf/keras/backend/any + - title: arange + path: /tf/keras/backend/arange + - title: argmax + path: /tf/keras/backend/argmax + - title: argmin + path: /tf/keras/backend/argmin + - title: backend + path: /tf/keras/backend/backend + - title: batch_dot + path: /tf/keras/backend/batch_dot + - title: batch_flatten + path: /tf/keras/backend/batch_flatten + - title: batch_get_value + path: /tf/keras/backend/batch_get_value + - title: batch_normalization + path: /tf/keras/backend/batch_normalization + - title: batch_set_value + path: /tf/keras/backend/batch_set_value + - title: bias_add + path: /tf/keras/backend/bias_add + - title: binary_crossentropy + path: /tf/keras/backend/binary_crossentropy + - title: cast + path: /tf/keras/backend/cast + - title: cast_to_floatx + path: /tf/keras/backend/cast_to_floatx + - title: categorical_crossentropy + path: /tf/keras/backend/categorical_crossentropy + - title: clear_session + path: /tf/keras/backend/clear_session + - title: clip + path: /tf/keras/backend/clip + - title: concatenate + path: /tf/keras/backend/concatenate + - title: constant + path: /tf/keras/backend/constant + - title: conv1d + path: /tf/keras/backend/conv1d + - title: conv2d + path: /tf/keras/backend/conv2d + - title: conv2d_transpose + path: /tf/keras/backend/conv2d_transpose + - title: conv3d + path: /tf/keras/backend/conv3d + - title: cos + path: /tf/keras/backend/cos + - title: count_params + path: /tf/keras/backend/count_params + - title: ctc_batch_cost + path: /tf/keras/backend/ctc_batch_cost + - title: ctc_decode + path: /tf/keras/backend/ctc_decode + - title: ctc_label_dense_to_sparse + path: /tf/keras/backend/ctc_label_dense_to_sparse + - title: cumprod + path: /tf/keras/backend/cumprod + - title: cumsum + path: /tf/keras/backend/cumsum + - title: depthwise_conv2d + path: 
/tf/keras/backend/depthwise_conv2d + - title: dot + path: /tf/keras/backend/dot + - title: dropout + path: /tf/keras/backend/dropout + - title: dtype + path: /tf/keras/backend/dtype + - title: elu + path: /tf/keras/backend/elu + - title: epsilon + path: /tf/keras/backend/epsilon + - title: equal + path: /tf/keras/backend/equal + - title: eval + path: /tf/keras/backend/eval + - title: exp + path: /tf/keras/backend/exp + - title: expand_dims + path: /tf/keras/backend/expand_dims + - title: eye + path: /tf/keras/backend/eye + - title: flatten + path: /tf/keras/backend/flatten + - title: floatx + path: /tf/keras/backend/floatx + - title: foldl + path: /tf/keras/backend/foldl + - title: foldr + path: /tf/keras/backend/foldr + - title: function + path: /tf/keras/backend/function + - title: gather + path: /tf/keras/backend/gather + - title: get_uid + path: /tf/keras/backend/get_uid + - title: get_value + path: /tf/keras/backend/get_value + - title: gradients + path: /tf/keras/backend/gradients + - title: greater + path: /tf/keras/backend/greater + - title: greater_equal + path: /tf/keras/backend/greater_equal + - title: hard_sigmoid + path: /tf/keras/backend/hard_sigmoid + - title: image_data_format + path: /tf/keras/backend/image_data_format + - title: in_test_phase + path: /tf/keras/backend/in_test_phase + - title: in_top_k + path: /tf/keras/backend/in_top_k + - title: in_train_phase + path: /tf/keras/backend/in_train_phase + - title: int_shape + path: /tf/keras/backend/int_shape + - title: is_keras_tensor + path: /tf/keras/backend/is_keras_tensor + - title: is_sparse + path: /tf/keras/backend/is_sparse + - title: l2_normalize + path: /tf/keras/backend/l2_normalize + - title: learning_phase + path: /tf/keras/backend/learning_phase + - title: learning_phase_scope + path: /tf/keras/backend/learning_phase_scope + - title: less + path: /tf/keras/backend/less + - title: less_equal + path: /tf/keras/backend/less_equal + - title: local_conv1d + path: /tf/keras/backend/local_conv1d + - title: local_conv2d + path: /tf/keras/backend/local_conv2d + - title: log + path: /tf/keras/backend/log + - title: manual_variable_initialization + path: /tf/keras/backend/manual_variable_initialization + - title: map_fn + path: /tf/keras/backend/map_fn + - title: max + path: /tf/keras/backend/max + - title: maximum + path: /tf/keras/backend/maximum + - title: mean + path: /tf/keras/backend/mean + - title: min + path: /tf/keras/backend/min + - title: minimum + path: /tf/keras/backend/minimum + - title: moving_average_update + path: /tf/keras/backend/moving_average_update + - title: name_scope + path: /tf/keras/backend/name_scope + - title: ndim + path: /tf/keras/backend/ndim + - title: normalize_batch_in_training + path: /tf/keras/backend/normalize_batch_in_training + - title: not_equal + path: /tf/keras/backend/not_equal + - title: one_hot + path: /tf/keras/backend/one_hot + - title: ones + path: /tf/keras/backend/ones + - title: ones_like + path: /tf/keras/backend/ones_like + - title: permute_dimensions + path: /tf/keras/backend/permute_dimensions + - title: placeholder + path: /tf/keras/backend/placeholder + - title: pool2d + path: /tf/keras/backend/pool2d + - title: pool3d + path: /tf/keras/backend/pool3d + - title: pow + path: /tf/keras/backend/pow + - title: print_tensor + path: /tf/keras/backend/print_tensor + - title: prod + path: /tf/keras/backend/prod + - title: random_binomial + path: /tf/keras/backend/random_binomial + - title: random_normal + path: /tf/keras/backend/random_normal + - title: 
random_normal_variable + path: /tf/keras/backend/random_normal_variable + - title: random_uniform + path: /tf/keras/backend/random_uniform + - title: random_uniform_variable + path: /tf/keras/backend/random_uniform_variable + - title: relu + path: /tf/keras/backend/relu + - title: repeat + path: /tf/keras/backend/repeat + - title: repeat_elements + path: /tf/keras/backend/repeat_elements + - title: reset_uids + path: /tf/keras/backend/reset_uids + - title: reshape + path: /tf/keras/backend/reshape + - title: resize_images + path: /tf/keras/backend/resize_images + - title: resize_volumes + path: /tf/keras/backend/resize_volumes + - title: reverse + path: /tf/keras/backend/reverse + - title: rnn + path: /tf/keras/backend/rnn + - title: round + path: /tf/keras/backend/round + - title: separable_conv2d + path: /tf/keras/backend/separable_conv2d + - title: set_epsilon + path: /tf/keras/backend/set_epsilon + - title: set_floatx + path: /tf/keras/backend/set_floatx + - title: set_image_data_format + path: /tf/keras/backend/set_image_data_format + - title: set_learning_phase + path: /tf/keras/backend/set_learning_phase + - title: set_value + path: /tf/keras/backend/set_value + - title: shape + path: /tf/keras/backend/shape + - title: sigmoid + path: /tf/keras/backend/sigmoid + - title: sign + path: /tf/keras/backend/sign + - title: sin + path: /tf/keras/backend/sin + - title: softmax + path: /tf/keras/backend/softmax + - title: softplus + path: /tf/keras/backend/softplus + - title: softsign + path: /tf/keras/backend/softsign + - title: sparse_categorical_crossentropy + path: /tf/keras/backend/sparse_categorical_crossentropy + - title: spatial_2d_padding + path: /tf/keras/backend/spatial_2d_padding + - title: spatial_3d_padding + path: /tf/keras/backend/spatial_3d_padding + - title: sqrt + path: /tf/keras/backend/sqrt + - title: square + path: /tf/keras/backend/square + - title: squeeze + path: /tf/keras/backend/squeeze + - title: stack + path: /tf/keras/backend/stack + - title: std + path: /tf/keras/backend/std + - title: stop_gradient + path: /tf/keras/backend/stop_gradient + - title: sum + path: /tf/keras/backend/sum + - title: switch + path: /tf/keras/backend/switch + - title: tanh + path: /tf/keras/backend/tanh + - title: temporal_padding + path: /tf/keras/backend/temporal_padding + - title: tile + path: /tf/keras/backend/tile + - title: to_dense + path: /tf/keras/backend/to_dense + - title: transpose + path: /tf/keras/backend/transpose + - title: truncated_normal + path: /tf/keras/backend/truncated_normal + - title: update + path: /tf/keras/backend/update + - title: update_add + path: /tf/keras/backend/update_add + - title: update_sub + path: /tf/keras/backend/update_sub + - title: var + path: /tf/keras/backend/var + - title: variable + path: /tf/keras/backend/variable + - title: zeros + path: /tf/keras/backend/zeros + - title: zeros_like + path: /tf/keras/backend/zeros_like + - title: callbacks + section: + - title: Overview + path: /tf/keras/callbacks + - title: BaseLogger + path: /tf/keras/callbacks/BaseLogger + - title: CSVLogger + path: /tf/keras/callbacks/CSVLogger + - title: Callback + path: /tf/keras/callbacks/Callback + - title: EarlyStopping + path: /tf/keras/callbacks/EarlyStopping + - title: History + path: /tf/keras/callbacks/History + - title: LambdaCallback + path: /tf/keras/callbacks/LambdaCallback + - title: LearningRateScheduler + path: /tf/keras/callbacks/LearningRateScheduler + - title: ModelCheckpoint + path: /tf/keras/callbacks/ModelCheckpoint + - title: ProgbarLogger + 
path: /tf/keras/callbacks/ProgbarLogger + - title: ReduceLROnPlateau + path: /tf/keras/callbacks/ReduceLROnPlateau + - title: RemoteMonitor + path: /tf/keras/callbacks/RemoteMonitor + - title: TensorBoard + path: /tf/keras/callbacks/TensorBoard + - title: TerminateOnNaN + path: /tf/keras/callbacks/TerminateOnNaN + - title: constraints + section: + - title: Overview + path: /tf/keras/constraints + - title: Constraint + path: /tf/keras/constraints/Constraint + - title: MaxNorm + path: /tf/keras/constraints/MaxNorm + - title: MinMaxNorm + path: /tf/keras/constraints/MinMaxNorm + - title: NonNeg + path: /tf/keras/constraints/NonNeg + - title: RadialConstraint + path: /tf/keras/constraints/RadialConstraint + - title: UnitNorm + path: /tf/keras/constraints/UnitNorm + - title: deserialize + path: /tf/keras/constraints/deserialize + - title: get + path: /tf/keras/constraints/get + - title: serialize + path: /tf/keras/constraints/serialize + - title: datasets + section: + - title: Overview + path: /tf/keras/datasets + - title: boston_housing + section: + - title: Overview + path: /tf/keras/datasets/boston_housing + - title: load_data + path: /tf/keras/datasets/boston_housing/load_data + - title: cifar10 + section: + - title: Overview + path: /tf/keras/datasets/cifar10 + - title: load_data + path: /tf/keras/datasets/cifar10/load_data + - title: cifar100 + section: + - title: Overview + path: /tf/keras/datasets/cifar100 + - title: load_data + path: /tf/keras/datasets/cifar100/load_data + - title: fashion_mnist + section: + - title: Overview + path: /tf/keras/datasets/fashion_mnist + - title: load_data + path: /tf/keras/datasets/fashion_mnist/load_data + - title: imdb + section: + - title: Overview + path: /tf/keras/datasets/imdb + - title: get_word_index + path: /tf/keras/datasets/imdb/get_word_index + - title: load_data + path: /tf/keras/datasets/imdb/load_data + - title: mnist + section: + - title: Overview + path: /tf/keras/datasets/mnist + - title: load_data + path: /tf/keras/datasets/mnist/load_data + - title: reuters + section: + - title: Overview + path: /tf/keras/datasets/reuters + - title: get_word_index + path: /tf/keras/datasets/reuters/get_word_index + - title: load_data + path: /tf/keras/datasets/reuters/load_data + - title: estimator + section: + - title: Overview + path: /tf/keras/estimator + - title: model_to_estimator + path: /tf/keras/estimator/model_to_estimator + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/keras/experimental + - title: CosineDecay + path: /tf/keras/experimental/CosineDecay + - title: CosineDecayRestarts + path: /tf/keras/experimental/CosineDecayRestarts + - title: LinearCosineDecay + path: /tf/keras/experimental/LinearCosineDecay + - title: LinearModel + path: /tf/keras/experimental/LinearModel + - title: NoisyLinearCosineDecay + path: /tf/keras/experimental/NoisyLinearCosineDecay + - title: PeepholeLSTMCell + path: /tf/keras/experimental/PeepholeLSTMCell + - title: SequenceFeatures + path: /tf/keras/experimental/SequenceFeatures + - title: WideDeepModel + path: /tf/keras/experimental/WideDeepModel + - title: terminate_keras_multiprocessing_pools + path: /tf/keras/experimental/terminate_keras_multiprocessing_pools + - title: initializers + section: + - title: Overview + path: /tf/keras/initializers + - title: GlorotNormal + path: /tf/keras/initializers/GlorotNormal + - title: GlorotUniform + path: /tf/keras/initializers/GlorotUniform + - title: Identity + path: /tf/keras/initializers/Identity + - title: Initializer + 
path: /tf/keras/initializers/Initializer + - title: Orthogonal + path: /tf/keras/initializers/Orthogonal + - title: TruncatedNormal + path: /tf/keras/initializers/TruncatedNormal + - title: VarianceScaling + path: /tf/keras/initializers/VarianceScaling + - title: deserialize + path: /tf/keras/initializers/deserialize + - title: get + path: /tf/keras/initializers/get + - title: he_normal + path: /tf/keras/initializers/he_normal + - title: he_uniform + path: /tf/keras/initializers/he_uniform + - title: lecun_normal + path: /tf/keras/initializers/lecun_normal + - title: lecun_uniform + path: /tf/keras/initializers/lecun_uniform + - title: serialize + path: /tf/keras/initializers/serialize + - title: layers + section: + - title: Overview + path: /tf/keras/layers + - title: AbstractRNNCell + path: /tf/keras/layers/AbstractRNNCell + - title: Activation + path: /tf/keras/layers/Activation + - title: ActivityRegularization + path: /tf/keras/layers/ActivityRegularization + - title: Add + path: /tf/keras/layers/Add + - title: AdditiveAttention + path: /tf/keras/layers/AdditiveAttention + - title: AlphaDropout + path: /tf/keras/layers/AlphaDropout + - title: Attention + path: /tf/keras/layers/Attention + - title: Average + path: /tf/keras/layers/Average + - title: AveragePooling1D + path: /tf/keras/layers/AveragePooling1D + - title: AveragePooling2D + path: /tf/keras/layers/AveragePooling2D + - title: AveragePooling3D + path: /tf/keras/layers/AveragePooling3D + - title: BatchNormalization + path: /tf/keras/layers/BatchNormalization + - title: Bidirectional + path: /tf/keras/layers/Bidirectional + - title: Concatenate + path: /tf/keras/layers/Concatenate + - title: Conv1D + path: /tf/keras/layers/Conv1D + - title: Conv2D + path: /tf/keras/layers/Conv2D + - title: Conv2DTranspose + path: /tf/keras/layers/Conv2DTranspose + - title: Conv3D + path: /tf/keras/layers/Conv3D + - title: Conv3DTranspose + path: /tf/keras/layers/Conv3DTranspose + - title: ConvLSTM2D + path: /tf/keras/layers/ConvLSTM2D + - title: Cropping1D + path: /tf/keras/layers/Cropping1D + - title: Cropping2D + path: /tf/keras/layers/Cropping2D + - title: Cropping3D + path: /tf/keras/layers/Cropping3D + - title: Dense + path: /tf/keras/layers/Dense + - title: DenseFeatures + path: /tf/keras/layers/DenseFeatures + - title: DepthwiseConv2D + path: /tf/keras/layers/DepthwiseConv2D + - title: Dot + path: /tf/keras/layers/Dot + - title: Dropout + path: /tf/keras/layers/Dropout + - title: ELU + path: /tf/keras/layers/ELU + - title: Embedding + path: /tf/keras/layers/Embedding + - title: Flatten + path: /tf/keras/layers/Flatten + - title: GRU + path: /tf/keras/layers/GRU + - title: GRUCell + path: /tf/keras/layers/GRUCell + - title: GaussianDropout + path: /tf/keras/layers/GaussianDropout + - title: GaussianNoise + path: /tf/keras/layers/GaussianNoise + - title: GlobalAveragePooling1D + path: /tf/keras/layers/GlobalAveragePooling1D + - title: GlobalAveragePooling2D + path: /tf/keras/layers/GlobalAveragePooling2D + - title: GlobalAveragePooling3D + path: /tf/keras/layers/GlobalAveragePooling3D + - title: GlobalMaxPool1D + path: /tf/keras/layers/GlobalMaxPool1D + - title: GlobalMaxPool2D + path: /tf/keras/layers/GlobalMaxPool2D + - title: GlobalMaxPool3D + path: /tf/keras/layers/GlobalMaxPool3D + - title: InputLayer + path: /tf/keras/layers/InputLayer + - title: InputSpec + path: /tf/keras/layers/InputSpec + - title: LSTM + path: /tf/keras/layers/LSTM + - title: LSTMCell + path: /tf/keras/layers/LSTMCell + - title: Lambda + path: 
/tf/keras/layers/Lambda + - title: Layer + path: /tf/keras/layers/Layer + - title: LayerNormalization + path: /tf/keras/layers/LayerNormalization + - title: LeakyReLU + path: /tf/keras/layers/LeakyReLU + - title: LocallyConnected1D + path: /tf/keras/layers/LocallyConnected1D + - title: LocallyConnected2D + path: /tf/keras/layers/LocallyConnected2D + - title: Masking + path: /tf/keras/layers/Masking + - title: MaxPool1D + path: /tf/keras/layers/MaxPool1D + - title: MaxPool2D + path: /tf/keras/layers/MaxPool2D + - title: MaxPool3D + path: /tf/keras/layers/MaxPool3D + - title: Maximum + path: /tf/keras/layers/Maximum + - title: Minimum + path: /tf/keras/layers/Minimum + - title: Multiply + path: /tf/keras/layers/Multiply + - title: PReLU + path: /tf/keras/layers/PReLU + - title: Permute + path: /tf/keras/layers/Permute + - title: RNN + path: /tf/keras/layers/RNN + - title: ReLU + path: /tf/keras/layers/ReLU + - title: RepeatVector + path: /tf/keras/layers/RepeatVector + - title: Reshape + path: /tf/keras/layers/Reshape + - title: SeparableConv1D + path: /tf/keras/layers/SeparableConv1D + - title: SeparableConv2D + path: /tf/keras/layers/SeparableConv2D + - title: SimpleRNN + path: /tf/keras/layers/SimpleRNN + - title: SimpleRNNCell + path: /tf/keras/layers/SimpleRNNCell + - title: Softmax + path: /tf/keras/layers/Softmax + - title: SpatialDropout1D + path: /tf/keras/layers/SpatialDropout1D + - title: SpatialDropout2D + path: /tf/keras/layers/SpatialDropout2D + - title: SpatialDropout3D + path: /tf/keras/layers/SpatialDropout3D + - title: StackedRNNCells + path: /tf/keras/layers/StackedRNNCells + - title: Subtract + path: /tf/keras/layers/Subtract + - title: ThresholdedReLU + path: /tf/keras/layers/ThresholdedReLU + - title: TimeDistributed + path: /tf/keras/layers/TimeDistributed + - title: UpSampling1D + path: /tf/keras/layers/UpSampling1D + - title: UpSampling2D + path: /tf/keras/layers/UpSampling2D + - title: UpSampling3D + path: /tf/keras/layers/UpSampling3D + - title: Wrapper + path: /tf/keras/layers/Wrapper + - title: ZeroPadding1D + path: /tf/keras/layers/ZeroPadding1D + - title: ZeroPadding2D + path: /tf/keras/layers/ZeroPadding2D + - title: ZeroPadding3D + path: /tf/keras/layers/ZeroPadding3D + - title: add + path: /tf/keras/layers/add + - title: average + path: /tf/keras/layers/average + - title: concatenate + path: /tf/keras/layers/concatenate + - title: deserialize + path: /tf/keras/layers/deserialize + - title: dot + path: /tf/keras/layers/dot + - title: maximum + path: /tf/keras/layers/maximum + - title: minimum + path: /tf/keras/layers/minimum + - title: multiply + path: /tf/keras/layers/multiply + - title: serialize + path: /tf/keras/layers/serialize + - title: subtract + path: /tf/keras/layers/subtract + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/keras/layers/experimental + - title: SyncBatchNormalization + path: /tf/keras/layers/experimental/SyncBatchNormalization + - title: preprocessing + section: + - title: Overview + path: /tf/keras/layers/experimental/preprocessing + - title: CenterCrop + path: /tf/keras/layers/experimental/preprocessing/CenterCrop + - title: Normalization + path: /tf/keras/layers/experimental/preprocessing/Normalization + - title: PreprocessingLayer + path: /tf/keras/layers/experimental/preprocessing/PreprocessingLayer + - title: RandomContrast + path: /tf/keras/layers/experimental/preprocessing/RandomContrast + - title: RandomCrop + path: /tf/keras/layers/experimental/preprocessing/RandomCrop + - title: 
RandomFlip + path: /tf/keras/layers/experimental/preprocessing/RandomFlip + - title: RandomHeight + path: /tf/keras/layers/experimental/preprocessing/RandomHeight + - title: RandomRotation + path: /tf/keras/layers/experimental/preprocessing/RandomRotation + - title: RandomTranslation + path: /tf/keras/layers/experimental/preprocessing/RandomTranslation + - title: RandomWidth + path: /tf/keras/layers/experimental/preprocessing/RandomWidth + - title: Rescaling + path: /tf/keras/layers/experimental/preprocessing/Rescaling + - title: Resizing + path: /tf/keras/layers/experimental/preprocessing/Resizing + - title: TextVectorization + path: /tf/keras/layers/experimental/preprocessing/TextVectorization + - title: losses + section: + - title: Overview + path: /tf/keras/losses + - title: BinaryCrossentropy + path: /tf/keras/losses/BinaryCrossentropy + - title: CategoricalCrossentropy + path: /tf/keras/losses/CategoricalCrossentropy + - title: CategoricalHinge + path: /tf/keras/losses/CategoricalHinge + - title: CosineSimilarity + path: /tf/keras/losses/CosineSimilarity + - title: Hinge + path: /tf/keras/losses/Hinge + - title: Huber + path: /tf/keras/losses/Huber + - title: KLD + path: /tf/keras/losses/KLD + - title: KLDivergence + path: /tf/keras/losses/KLDivergence + - title: LogCosh + path: /tf/keras/losses/LogCosh + - title: Loss + path: /tf/keras/losses/Loss + - title: MAE + path: /tf/keras/losses/MAE + - title: MAPE + path: /tf/keras/losses/MAPE + - title: MSE + path: /tf/keras/losses/MSE + - title: MSLE + path: /tf/keras/losses/MSLE + - title: MeanAbsoluteError + path: /tf/keras/losses/MeanAbsoluteError + - title: MeanAbsolutePercentageError + path: /tf/keras/losses/MeanAbsolutePercentageError + - title: MeanSquaredError + path: /tf/keras/losses/MeanSquaredError + - title: MeanSquaredLogarithmicError + path: /tf/keras/losses/MeanSquaredLogarithmicError + - title: Poisson + path: /tf/keras/losses/Poisson + - title: Reduction + path: /tf/keras/losses/Reduction + - title: SparseCategoricalCrossentropy + path: /tf/keras/losses/SparseCategoricalCrossentropy + - title: SquaredHinge + path: /tf/keras/losses/SquaredHinge + - title: binary_crossentropy + path: /tf/keras/losses/binary_crossentropy + - title: categorical_crossentropy + path: /tf/keras/losses/categorical_crossentropy + - title: categorical_hinge + path: /tf/keras/losses/categorical_hinge + - title: cosine_similarity + path: /tf/keras/losses/cosine_similarity + - title: deserialize + path: /tf/keras/losses/deserialize + - title: get + path: /tf/keras/losses/get + - title: hinge + path: /tf/keras/losses/hinge + - title: logcosh + path: /tf/keras/losses/logcosh + - title: poisson + path: /tf/keras/losses/poisson + - title: serialize + path: /tf/keras/losses/serialize + - title: sparse_categorical_crossentropy + path: /tf/keras/losses/sparse_categorical_crossentropy + - title: squared_hinge + path: /tf/keras/losses/squared_hinge + - title: metrics + section: + - title: Overview + path: /tf/keras/metrics + - title: AUC + path: /tf/keras/metrics/AUC + - title: Accuracy + path: /tf/keras/metrics/Accuracy + - title: BinaryAccuracy + path: /tf/keras/metrics/BinaryAccuracy + - title: BinaryCrossentropy + path: /tf/keras/metrics/BinaryCrossentropy + - title: CategoricalAccuracy + path: /tf/keras/metrics/CategoricalAccuracy + - title: CategoricalCrossentropy + path: /tf/keras/metrics/CategoricalCrossentropy + - title: CategoricalHinge + path: /tf/keras/metrics/CategoricalHinge + - title: CosineSimilarity + path: /tf/keras/metrics/CosineSimilarity + 
- title: FalseNegatives + path: /tf/keras/metrics/FalseNegatives + - title: FalsePositives + path: /tf/keras/metrics/FalsePositives + - title: Hinge + path: /tf/keras/metrics/Hinge + - title: KLDivergence + path: /tf/keras/metrics/KLDivergence + - title: LogCoshError + path: /tf/keras/metrics/LogCoshError + - title: Mean + path: /tf/keras/metrics/Mean + - title: MeanAbsoluteError + path: /tf/keras/metrics/MeanAbsoluteError + - title: MeanAbsolutePercentageError + path: /tf/keras/metrics/MeanAbsolutePercentageError + - title: MeanIoU + path: /tf/keras/metrics/MeanIoU + - title: MeanRelativeError + path: /tf/keras/metrics/MeanRelativeError + - title: MeanSquaredError + path: /tf/keras/metrics/MeanSquaredError + - title: MeanSquaredLogarithmicError + path: /tf/keras/metrics/MeanSquaredLogarithmicError + - title: MeanTensor + path: /tf/keras/metrics/MeanTensor + - title: Metric + path: /tf/keras/metrics/Metric + - title: Poisson + path: /tf/keras/metrics/Poisson + - title: Precision + path: /tf/keras/metrics/Precision + - title: PrecisionAtRecall + path: /tf/keras/metrics/PrecisionAtRecall + - title: Recall + path: /tf/keras/metrics/Recall + - title: RecallAtPrecision + path: /tf/keras/metrics/RecallAtPrecision + - title: RootMeanSquaredError + path: /tf/keras/metrics/RootMeanSquaredError + - title: SensitivityAtSpecificity + path: /tf/keras/metrics/SensitivityAtSpecificity + - title: SparseCategoricalAccuracy + path: /tf/keras/metrics/SparseCategoricalAccuracy + - title: SparseCategoricalCrossentropy + path: /tf/keras/metrics/SparseCategoricalCrossentropy + - title: SparseTopKCategoricalAccuracy + path: /tf/keras/metrics/SparseTopKCategoricalAccuracy + - title: SpecificityAtSensitivity + path: /tf/keras/metrics/SpecificityAtSensitivity + - title: SquaredHinge + path: /tf/keras/metrics/SquaredHinge + - title: Sum + path: /tf/keras/metrics/Sum + - title: TopKCategoricalAccuracy + path: /tf/keras/metrics/TopKCategoricalAccuracy + - title: TrueNegatives + path: /tf/keras/metrics/TrueNegatives + - title: TruePositives + path: /tf/keras/metrics/TruePositives + - title: binary_accuracy + path: /tf/keras/metrics/binary_accuracy + - title: categorical_accuracy + path: /tf/keras/metrics/categorical_accuracy + - title: deserialize + path: /tf/keras/metrics/deserialize + - title: get + path: /tf/keras/metrics/get + - title: serialize + path: /tf/keras/metrics/serialize + - title: sparse_categorical_accuracy + path: /tf/keras/metrics/sparse_categorical_accuracy + - title: sparse_top_k_categorical_accuracy + path: /tf/keras/metrics/sparse_top_k_categorical_accuracy + - title: top_k_categorical_accuracy + path: /tf/keras/metrics/top_k_categorical_accuracy + - title: mixed_precision + section: + - title: Overview + path: /tf/keras/mixed_precision + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/keras/mixed_precision/experimental + - title: LossScaleOptimizer + path: /tf/keras/mixed_precision/experimental/LossScaleOptimizer + - title: Policy + path: /tf/keras/mixed_precision/experimental/Policy + - title: get_layer_policy + path: /tf/keras/mixed_precision/experimental/get_layer_policy + - title: global_policy + path: /tf/keras/mixed_precision/experimental/global_policy + - title: set_policy + path: /tf/keras/mixed_precision/experimental/set_policy + - title: models + section: + - title: Overview + path: /tf/keras/models + - title: clone_model + path: /tf/keras/models/clone_model + - title: load_model + path: /tf/keras/models/load_model + - title: model_from_config + 
path: /tf/keras/models/model_from_config + - title: model_from_json + path: /tf/keras/models/model_from_json + - title: model_from_yaml + path: /tf/keras/models/model_from_yaml + - title: save_model + path: /tf/keras/models/save_model + - title: optimizers + section: + - title: Overview + path: /tf/keras/optimizers + - title: Adadelta + path: /tf/keras/optimizers/Adadelta + - title: Adagrad + path: /tf/keras/optimizers/Adagrad + - title: Adam + path: /tf/keras/optimizers/Adam + - title: Adamax + path: /tf/keras/optimizers/Adamax + - title: Ftrl + path: /tf/keras/optimizers/Ftrl + - title: Nadam + path: /tf/keras/optimizers/Nadam + - title: Optimizer + path: /tf/keras/optimizers/Optimizer + - title: RMSprop + path: /tf/keras/optimizers/RMSprop + - title: SGD + path: /tf/keras/optimizers/SGD + - title: deserialize + path: /tf/keras/optimizers/deserialize + - title: get + path: /tf/keras/optimizers/get + - title: serialize + path: /tf/keras/optimizers/serialize + - title: schedules + section: + - title: Overview + path: /tf/keras/optimizers/schedules + - title: ExponentialDecay + path: /tf/keras/optimizers/schedules/ExponentialDecay + - title: InverseTimeDecay + path: /tf/keras/optimizers/schedules/InverseTimeDecay + - title: LearningRateSchedule + path: /tf/keras/optimizers/schedules/LearningRateSchedule + - title: PiecewiseConstantDecay + path: /tf/keras/optimizers/schedules/PiecewiseConstantDecay + - title: PolynomialDecay + path: /tf/keras/optimizers/schedules/PolynomialDecay + - title: deserialize + path: /tf/keras/optimizers/schedules/deserialize + - title: serialize + path: /tf/keras/optimizers/schedules/serialize + - title: preprocessing + section: + - title: Overview + path: /tf/keras/preprocessing + - title: image + section: + - title: Overview + path: /tf/keras/preprocessing/image + - title: DirectoryIterator + path: /tf/keras/preprocessing/image/DirectoryIterator + - title: ImageDataGenerator + path: /tf/keras/preprocessing/image/ImageDataGenerator + - title: Iterator + path: /tf/keras/preprocessing/image/Iterator + - title: NumpyArrayIterator + path: /tf/keras/preprocessing/image/NumpyArrayIterator + - title: apply_affine_transform + path: /tf/keras/preprocessing/image/apply_affine_transform + - title: apply_brightness_shift + path: /tf/keras/preprocessing/image/apply_brightness_shift + - title: apply_channel_shift + path: /tf/keras/preprocessing/image/apply_channel_shift + - title: array_to_img + path: /tf/keras/preprocessing/image/array_to_img + - title: img_to_array + path: /tf/keras/preprocessing/image/img_to_array + - title: load_img + path: /tf/keras/preprocessing/image/load_img + - title: random_brightness + path: /tf/keras/preprocessing/image/random_brightness + - title: random_channel_shift + path: /tf/keras/preprocessing/image/random_channel_shift + - title: random_rotation + path: /tf/keras/preprocessing/image/random_rotation + - title: random_shear + path: /tf/keras/preprocessing/image/random_shear + - title: random_shift + path: /tf/keras/preprocessing/image/random_shift + - title: random_zoom + path: /tf/keras/preprocessing/image/random_zoom + - title: save_img + path: /tf/keras/preprocessing/image/save_img + - title: sequence + section: + - title: Overview + path: /tf/keras/preprocessing/sequence + - title: TimeseriesGenerator + path: /tf/keras/preprocessing/sequence/TimeseriesGenerator + - title: make_sampling_table + path: /tf/keras/preprocessing/sequence/make_sampling_table + - title: pad_sequences + path: /tf/keras/preprocessing/sequence/pad_sequences + - 
title: skipgrams + path: /tf/keras/preprocessing/sequence/skipgrams + - title: text + section: + - title: Overview + path: /tf/keras/preprocessing/text + - title: Tokenizer + path: /tf/keras/preprocessing/text/Tokenizer + - title: hashing_trick + path: /tf/keras/preprocessing/text/hashing_trick + - title: one_hot + path: /tf/keras/preprocessing/text/one_hot + - title: text_to_word_sequence + path: /tf/keras/preprocessing/text/text_to_word_sequence + - title: tokenizer_from_json + path: /tf/keras/preprocessing/text/tokenizer_from_json + - title: regularizers + section: + - title: Overview + path: /tf/keras/regularizers + - title: L1L2 + path: /tf/keras/regularizers/L1L2 + - title: Regularizer + path: /tf/keras/regularizers/Regularizer + - title: deserialize + path: /tf/keras/regularizers/deserialize + - title: get + path: /tf/keras/regularizers/get + - title: l1 + path: /tf/keras/regularizers/l1 + - title: l1_l2 + path: /tf/keras/regularizers/l1_l2 + - title: l2 + path: /tf/keras/regularizers/l2 + - title: serialize + path: /tf/keras/regularizers/serialize + - title: utils + section: + - title: Overview + path: /tf/keras/utils + - title: CustomObjectScope + path: /tf/keras/utils/CustomObjectScope + - title: GeneratorEnqueuer + path: /tf/keras/utils/GeneratorEnqueuer + - title: HDF5Matrix + path: /tf/keras/utils/HDF5Matrix + - title: OrderedEnqueuer + path: /tf/keras/utils/OrderedEnqueuer + - title: Progbar + path: /tf/keras/utils/Progbar + - title: Sequence + path: /tf/keras/utils/Sequence + - title: SequenceEnqueuer + path: /tf/keras/utils/SequenceEnqueuer + - title: convert_all_kernels_in_model + status: deprecated + path: /tf/keras/utils/convert_all_kernels_in_model + - title: custom_object_scope + path: /tf/keras/utils/custom_object_scope + - title: deserialize_keras_object + path: /tf/keras/utils/deserialize_keras_object + - title: get_custom_objects + path: /tf/keras/utils/get_custom_objects + - title: get_file + path: /tf/keras/utils/get_file + - title: get_registered_name + path: /tf/keras/utils/get_registered_name + - title: get_registered_object + path: /tf/keras/utils/get_registered_object + - title: get_source_inputs + path: /tf/keras/utils/get_source_inputs + - title: model_to_dot + path: /tf/keras/utils/model_to_dot + - title: multi_gpu_model + status: deprecated + path: /tf/keras/utils/multi_gpu_model + - title: normalize + path: /tf/keras/utils/normalize + - title: plot_model + path: /tf/keras/utils/plot_model + - title: register_keras_serializable + path: /tf/keras/utils/register_keras_serializable + - title: serialize_keras_object + path: /tf/keras/utils/serialize_keras_object + - title: to_categorical + path: /tf/keras/utils/to_categorical + - title: wrappers + section: + - title: Overview + path: /tf/keras/wrappers + - title: scikit_learn + section: + - title: Overview + path: /tf/keras/wrappers/scikit_learn + - title: KerasClassifier + path: /tf/keras/wrappers/scikit_learn/KerasClassifier + - title: KerasRegressor + path: /tf/keras/wrappers/scikit_learn/KerasRegressor +- title: tf.linalg + section: + - title: Overview + path: /tf/linalg + - title: LinearOperator + path: /tf/linalg/LinearOperator + - title: LinearOperatorAdjoint + path: /tf/linalg/LinearOperatorAdjoint + - title: LinearOperatorBlockDiag + path: /tf/linalg/LinearOperatorBlockDiag + - title: LinearOperatorBlockLowerTriangular + path: /tf/linalg/LinearOperatorBlockLowerTriangular + - title: LinearOperatorCirculant + path: /tf/linalg/LinearOperatorCirculant + - title: LinearOperatorCirculant2D + path: 
/tf/linalg/LinearOperatorCirculant2D + - title: LinearOperatorCirculant3D + path: /tf/linalg/LinearOperatorCirculant3D + - title: LinearOperatorComposition + path: /tf/linalg/LinearOperatorComposition + - title: LinearOperatorDiag + path: /tf/linalg/LinearOperatorDiag + - title: LinearOperatorFullMatrix + path: /tf/linalg/LinearOperatorFullMatrix + - title: LinearOperatorHouseholder + path: /tf/linalg/LinearOperatorHouseholder + - title: LinearOperatorIdentity + path: /tf/linalg/LinearOperatorIdentity + - title: LinearOperatorInversion + path: /tf/linalg/LinearOperatorInversion + - title: LinearOperatorKronecker + path: /tf/linalg/LinearOperatorKronecker + - title: LinearOperatorLowRankUpdate + path: /tf/linalg/LinearOperatorLowRankUpdate + - title: LinearOperatorLowerTriangular + path: /tf/linalg/LinearOperatorLowerTriangular + - title: LinearOperatorPermutation + path: /tf/linalg/LinearOperatorPermutation + - title: LinearOperatorScaledIdentity + path: /tf/linalg/LinearOperatorScaledIdentity + - title: LinearOperatorToeplitz + path: /tf/linalg/LinearOperatorToeplitz + - title: LinearOperatorTridiag + path: /tf/linalg/LinearOperatorTridiag + - title: LinearOperatorZeros + path: /tf/linalg/LinearOperatorZeros + - title: adjoint + path: /tf/linalg/adjoint + - title: band_part + path: /tf/linalg/band_part + - title: cholesky + path: /tf/linalg/cholesky + - title: cholesky_solve + path: /tf/linalg/cholesky_solve + - title: cross + path: /tf/linalg/cross + - title: det + path: /tf/linalg/det + - title: diag + path: /tf/linalg/diag + - title: diag_part + path: /tf/linalg/diag_part + - title: eig + path: /tf/linalg/eig + - title: eigh + path: /tf/linalg/eigh + - title: eigvals + path: /tf/linalg/eigvals + - title: eigvalsh + path: /tf/linalg/eigvalsh + - title: expm + path: /tf/linalg/expm + - title: global_norm + path: /tf/linalg/global_norm + - title: inv + path: /tf/linalg/inv + - title: logdet + path: /tf/linalg/logdet + - title: logm + path: /tf/linalg/logm + - title: lstsq + path: /tf/linalg/lstsq + - title: lu + path: /tf/linalg/lu + - title: lu_matrix_inverse + path: /tf/linalg/lu_matrix_inverse + - title: lu_reconstruct + path: /tf/linalg/lu_reconstruct + - title: lu_solve + path: /tf/linalg/lu_solve + - title: matmul + path: /tf/linalg/matmul + - title: matrix_rank + path: /tf/linalg/matrix_rank + - title: matrix_transpose + path: /tf/linalg/matrix_transpose + - title: matvec + path: /tf/linalg/matvec + - title: normalize + path: /tf/linalg/normalize + - title: pinv + path: /tf/linalg/pinv + - title: qr + path: /tf/linalg/qr + - title: set_diag + path: /tf/linalg/set_diag + - title: slogdet + path: /tf/linalg/slogdet + - title: solve + path: /tf/linalg/solve + - title: sqrtm + path: /tf/linalg/sqrtm + - title: svd + path: /tf/linalg/svd + - title: tensor_diag + path: /tf/linalg/tensor_diag + - title: tensor_diag_part + path: /tf/linalg/tensor_diag_part + - title: trace + path: /tf/linalg/trace + - title: triangular_solve + path: /tf/linalg/triangular_solve + - title: tridiagonal_matmul + path: /tf/linalg/tridiagonal_matmul + - title: tridiagonal_solve + path: /tf/linalg/tridiagonal_solve + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/linalg/experimental + - title: conjugate_gradient + path: /tf/linalg/experimental/conjugate_gradient +- title: tf.lite + section: + - title: Overview + path: /tf/lite + - title: Interpreter + path: /tf/lite/Interpreter + - title: OpsSet + path: /tf/lite/OpsSet + - title: Optimize + path: /tf/lite/Optimize + - 
title: RepresentativeDataset + path: /tf/lite/RepresentativeDataset + - title: TFLiteConverter + path: /tf/lite/TFLiteConverter + - title: TargetSpec + path: /tf/lite/TargetSpec + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/lite/experimental + - title: load_delegate + path: /tf/lite/experimental/load_delegate +- title: tf.lookup + section: + - title: Overview + path: /tf/lookup + - title: KeyValueTensorInitializer + path: /tf/lookup/KeyValueTensorInitializer + - title: StaticHashTable + path: /tf/lookup/StaticHashTable + - title: StaticVocabularyTable + path: /tf/lookup/StaticVocabularyTable + - title: TextFileIndex + path: /tf/lookup/TextFileIndex + - title: TextFileInitializer + path: /tf/lookup/TextFileInitializer + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/lookup/experimental + - title: DenseHashTable + path: /tf/lookup/experimental/DenseHashTable +- title: tf.math + section: + - title: Overview + path: /tf/math + - title: abs + path: /tf/math/abs + - title: accumulate_n + path: /tf/math/accumulate_n + - title: acos + path: /tf/math/acos + - title: acosh + path: /tf/math/acosh + - title: add + path: /tf/math/add + - title: add_n + path: /tf/math/add_n + - title: angle + path: /tf/math/angle + - title: argmax + path: /tf/math/argmax + - title: argmin + path: /tf/math/argmin + - title: asin + path: /tf/math/asin + - title: asinh + path: /tf/math/asinh + - title: atan + path: /tf/math/atan + - title: atan2 + path: /tf/math/atan2 + - title: atanh + path: /tf/math/atanh + - title: bessel_i0 + path: /tf/math/bessel_i0 + - title: bessel_i0e + path: /tf/math/bessel_i0e + - title: bessel_i1 + path: /tf/math/bessel_i1 + - title: bessel_i1e + path: /tf/math/bessel_i1e + - title: betainc + path: /tf/math/betainc + - title: bincount + path: /tf/math/bincount + - title: ceil + path: /tf/math/ceil + - title: confusion_matrix + path: /tf/math/confusion_matrix + - title: conj + path: /tf/math/conj + - title: cos + path: /tf/math/cos + - title: cosh + path: /tf/math/cosh + - title: count_nonzero + path: /tf/math/count_nonzero + - title: cumprod + path: /tf/math/cumprod + - title: cumsum + path: /tf/math/cumsum + - title: cumulative_logsumexp + path: /tf/math/cumulative_logsumexp + - title: digamma + path: /tf/math/digamma + - title: divide + path: /tf/math/divide + - title: divide_no_nan + path: /tf/math/divide_no_nan + - title: equal + path: /tf/math/equal + - title: erf + path: /tf/math/erf + - title: erfc + path: /tf/math/erfc + - title: erfinv + path: /tf/math/erfinv + - title: exp + path: /tf/math/exp + - title: expm1 + path: /tf/math/expm1 + - title: floor + path: /tf/math/floor + - title: floordiv + path: /tf/math/floordiv + - title: floormod + path: /tf/math/floormod + - title: greater + path: /tf/math/greater + - title: greater_equal + path: /tf/math/greater_equal + - title: igamma + path: /tf/math/igamma + - title: igammac + path: /tf/math/igammac + - title: imag + path: /tf/math/imag + - title: in_top_k + path: /tf/math/in_top_k + - title: invert_permutation + path: /tf/math/invert_permutation + - title: is_finite + path: /tf/math/is_finite + - title: is_inf + path: /tf/math/is_inf + - title: is_nan + path: /tf/math/is_nan + - title: is_non_decreasing + path: /tf/math/is_non_decreasing + - title: is_strictly_increasing + path: /tf/math/is_strictly_increasing + - title: l2_normalize + path: /tf/math/l2_normalize + - title: lbeta + path: /tf/math/lbeta + - title: less + path: /tf/math/less + - title: 
less_equal + path: /tf/math/less_equal + - title: lgamma + path: /tf/math/lgamma + - title: log + path: /tf/math/log + - title: log1p + path: /tf/math/log1p + - title: log_sigmoid + path: /tf/math/log_sigmoid + - title: logical_and + path: /tf/math/logical_and + - title: logical_not + path: /tf/math/logical_not + - title: logical_or + path: /tf/math/logical_or + - title: logical_xor + path: /tf/math/logical_xor + - title: maximum + path: /tf/math/maximum + - title: minimum + path: /tf/math/minimum + - title: multiply + path: /tf/math/multiply + - title: multiply_no_nan + path: /tf/math/multiply_no_nan + - title: ndtri + path: /tf/math/ndtri + - title: negative + path: /tf/math/negative + - title: nextafter + path: /tf/math/nextafter + - title: not_equal + path: /tf/math/not_equal + - title: polygamma + path: /tf/math/polygamma + - title: polyval + path: /tf/math/polyval + - title: pow + path: /tf/math/pow + - title: real + path: /tf/math/real + - title: reciprocal + path: /tf/math/reciprocal + - title: reciprocal_no_nan + path: /tf/math/reciprocal_no_nan + - title: reduce_any + path: /tf/math/reduce_any + - title: reduce_euclidean_norm + path: /tf/math/reduce_euclidean_norm + - title: reduce_logsumexp + path: /tf/math/reduce_logsumexp + - title: reduce_max + path: /tf/math/reduce_max + - title: reduce_mean + path: /tf/math/reduce_mean + - title: reduce_min + path: /tf/math/reduce_min + - title: reduce_prod + path: /tf/math/reduce_prod + - title: reduce_std + path: /tf/math/reduce_std + - title: reduce_sum + path: /tf/math/reduce_sum + - title: reduce_variance + path: /tf/math/reduce_variance + - title: rint + path: /tf/math/rint + - title: round + path: /tf/math/round + - title: rsqrt + path: /tf/math/rsqrt + - title: scalar_mul + path: /tf/math/scalar_mul + - title: segment_max + path: /tf/math/segment_max + - title: segment_mean + path: /tf/math/segment_mean + - title: segment_min + path: /tf/math/segment_min + - title: segment_prod + path: /tf/math/segment_prod + - title: segment_sum + path: /tf/math/segment_sum + - title: sigmoid + path: /tf/math/sigmoid + - title: sign + path: /tf/math/sign + - title: sin + path: /tf/math/sin + - title: sinh + path: /tf/math/sinh + - title: sobol_sample + path: /tf/math/sobol_sample + - title: softplus + path: /tf/math/softplus + - title: sqrt + path: /tf/math/sqrt + - title: square + path: /tf/math/square + - title: squared_difference + path: /tf/math/squared_difference + - title: subtract + path: /tf/math/subtract + - title: tan + path: /tf/math/tan + - title: tanh + path: /tf/math/tanh + - title: top_k + path: /tf/math/top_k + - title: truediv + path: /tf/math/truediv + - title: unsorted_segment_max + path: /tf/math/unsorted_segment_max + - title: unsorted_segment_mean + path: /tf/math/unsorted_segment_mean + - title: unsorted_segment_min + path: /tf/math/unsorted_segment_min + - title: unsorted_segment_prod + path: /tf/math/unsorted_segment_prod + - title: unsorted_segment_sqrt_n + path: /tf/math/unsorted_segment_sqrt_n + - title: unsorted_segment_sum + path: /tf/math/unsorted_segment_sum + - title: xdivy + path: /tf/math/xdivy + - title: xlog1py + path: /tf/math/xlog1py + - title: xlogy + path: /tf/math/xlogy + - title: zero_fraction + path: /tf/math/zero_fraction + - title: zeta + path: /tf/math/zeta + - title: special + section: + - title: Overview + path: /tf/math/special + - title: dawsn + path: /tf/math/special/dawsn + - title: expint + path: /tf/math/special/expint + - title: fresnel_cos + path: /tf/math/special/fresnel_cos + - title: 
fresnel_sin + path: /tf/math/special/fresnel_sin + - title: spence + path: /tf/math/special/spence +- title: tf.mixed_precision + section: + - title: Overview + path: /tf/mixed_precision + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/mixed_precision/experimental + - title: DynamicLossScale + path: /tf/mixed_precision/experimental/DynamicLossScale + - title: FixedLossScale + path: /tf/mixed_precision/experimental/FixedLossScale + - title: LossScale + path: /tf/mixed_precision/experimental/LossScale +- title: tf.mlir + section: + - title: Overview + path: /tf/mlir + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/mlir/experimental + - title: convert_graph_def + path: /tf/mlir/experimental/convert_graph_def +- title: tf.nest + section: + - title: Overview + path: /tf/nest + - title: assert_same_structure + path: /tf/nest/assert_same_structure + - title: flatten + path: /tf/nest/flatten + - title: is_nested + path: /tf/nest/is_nested + - title: map_structure + path: /tf/nest/map_structure + - title: pack_sequence_as + path: /tf/nest/pack_sequence_as +- title: tf.nn + section: + - title: Overview + path: /tf/nn + - title: RNNCellDeviceWrapper + path: /tf/nn/RNNCellDeviceWrapper + - title: RNNCellDropoutWrapper + path: /tf/nn/RNNCellDropoutWrapper + - title: RNNCellResidualWrapper + path: /tf/nn/RNNCellResidualWrapper + - title: atrous_conv2d + path: /tf/nn/atrous_conv2d + - title: atrous_conv2d_transpose + path: /tf/nn/atrous_conv2d_transpose + - title: avg_pool + path: /tf/nn/avg_pool + - title: avg_pool1d + path: /tf/nn/avg_pool1d + - title: avg_pool2d + path: /tf/nn/avg_pool2d + - title: avg_pool3d + path: /tf/nn/avg_pool3d + - title: batch_norm_with_global_normalization + path: /tf/nn/batch_norm_with_global_normalization + - title: batch_normalization + path: /tf/nn/batch_normalization + - title: bias_add + path: /tf/nn/bias_add + - title: collapse_repeated + path: /tf/nn/collapse_repeated + - title: compute_accidental_hits + path: /tf/nn/compute_accidental_hits + - title: compute_average_loss + path: /tf/nn/compute_average_loss + - title: conv1d + path: /tf/nn/conv1d + - title: conv1d_transpose + path: /tf/nn/conv1d_transpose + - title: conv2d + path: /tf/nn/conv2d + - title: conv2d_transpose + path: /tf/nn/conv2d_transpose + - title: conv3d + path: /tf/nn/conv3d + - title: conv3d_transpose + path: /tf/nn/conv3d_transpose + - title: conv_transpose + path: /tf/nn/conv_transpose + - title: convolution + path: /tf/nn/convolution + - title: crelu + path: /tf/nn/crelu + - title: ctc_beam_search_decoder + path: /tf/nn/ctc_beam_search_decoder + - title: ctc_greedy_decoder + path: /tf/nn/ctc_greedy_decoder + - title: ctc_loss + path: /tf/nn/ctc_loss + - title: ctc_unique_labels + path: /tf/nn/ctc_unique_labels + - title: depth_to_space + path: /tf/nn/depth_to_space + - title: depthwise_conv2d + path: /tf/nn/depthwise_conv2d + - title: depthwise_conv2d_backprop_filter + path: /tf/nn/depthwise_conv2d_backprop_filter + - title: depthwise_conv2d_backprop_input + path: /tf/nn/depthwise_conv2d_backprop_input + - title: dilation2d + path: /tf/nn/dilation2d + - title: dropout + path: /tf/nn/dropout + - title: elu + path: /tf/nn/elu + - title: embedding_lookup + path: /tf/nn/embedding_lookup + - title: embedding_lookup_sparse + path: /tf/nn/embedding_lookup_sparse + - title: erosion2d + path: /tf/nn/erosion2d + - title: fractional_avg_pool + path: /tf/nn/fractional_avg_pool + - title: fractional_max_pool + path: 
/tf/nn/fractional_max_pool + - title: l2_loss + path: /tf/nn/l2_loss + - title: leaky_relu + path: /tf/nn/leaky_relu + - title: local_response_normalization + path: /tf/nn/local_response_normalization + - title: log_poisson_loss + path: /tf/nn/log_poisson_loss + - title: log_softmax + path: /tf/nn/log_softmax + - title: max_pool + path: /tf/nn/max_pool + - title: max_pool1d + path: /tf/nn/max_pool1d + - title: max_pool2d + path: /tf/nn/max_pool2d + - title: max_pool3d + path: /tf/nn/max_pool3d + - title: max_pool_with_argmax + path: /tf/nn/max_pool_with_argmax + - title: moments + path: /tf/nn/moments + - title: nce_loss + path: /tf/nn/nce_loss + - title: normalize_moments + path: /tf/nn/normalize_moments + - title: pool + path: /tf/nn/pool + - title: relu + path: /tf/nn/relu + - title: relu6 + path: /tf/nn/relu6 + - title: safe_embedding_lookup_sparse + path: /tf/nn/safe_embedding_lookup_sparse + - title: sampled_softmax_loss + path: /tf/nn/sampled_softmax_loss + - title: scale_regularization_loss + path: /tf/nn/scale_regularization_loss + - title: selu + path: /tf/nn/selu + - title: separable_conv2d + path: /tf/nn/separable_conv2d + - title: sigmoid_cross_entropy_with_logits + path: /tf/nn/sigmoid_cross_entropy_with_logits + - title: softmax + path: /tf/nn/softmax + - title: softmax_cross_entropy_with_logits + path: /tf/nn/softmax_cross_entropy_with_logits + - title: softsign + path: /tf/nn/softsign + - title: space_to_depth + path: /tf/nn/space_to_depth + - title: sparse_softmax_cross_entropy_with_logits + path: /tf/nn/sparse_softmax_cross_entropy_with_logits + - title: sufficient_statistics + path: /tf/nn/sufficient_statistics + - title: swish + path: /tf/nn/swish + - title: weighted_cross_entropy_with_logits + path: /tf/nn/weighted_cross_entropy_with_logits + - title: weighted_moments + path: /tf/nn/weighted_moments + - title: with_space_to_batch + path: /tf/nn/with_space_to_batch +- title: tf.profiler + section: + - title: Overview + path: /tf/profiler + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/profiler/experimental + - title: Profile + path: /tf/profiler/experimental/Profile + - title: start + path: /tf/profiler/experimental/start + - title: stop + path: /tf/profiler/experimental/stop + - title: client + section: + - title: Overview + path: /tf/profiler/experimental/client + - title: monitor + path: /tf/profiler/experimental/client/monitor + - title: trace + path: /tf/profiler/experimental/client/trace + - title: server + section: + - title: Overview + path: /tf/profiler/experimental/server + - title: start + path: /tf/profiler/experimental/server/start +- title: tf.quantization + section: + - title: Overview + path: /tf/quantization + - title: dequantize + path: /tf/quantization/dequantize + - title: fake_quant_with_min_max_args + path: /tf/quantization/fake_quant_with_min_max_args + - title: fake_quant_with_min_max_args_gradient + path: /tf/quantization/fake_quant_with_min_max_args_gradient + - title: fake_quant_with_min_max_vars + path: /tf/quantization/fake_quant_with_min_max_vars + - title: fake_quant_with_min_max_vars_gradient + path: /tf/quantization/fake_quant_with_min_max_vars_gradient + - title: fake_quant_with_min_max_vars_per_channel + path: /tf/quantization/fake_quant_with_min_max_vars_per_channel + - title: fake_quant_with_min_max_vars_per_channel_gradient + path: /tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient + - title: quantize + path: /tf/quantization/quantize + - title: quantize_and_dequantize + 
path: /tf/quantization/quantize_and_dequantize + - title: quantized_concat + path: /tf/quantization/quantized_concat +- title: tf.queue + section: + - title: Overview + path: /tf/queue + - title: FIFOQueue + path: /tf/queue/FIFOQueue + - title: PaddingFIFOQueue + path: /tf/queue/PaddingFIFOQueue + - title: PriorityQueue + path: /tf/queue/PriorityQueue + - title: QueueBase + path: /tf/queue/QueueBase + - title: RandomShuffleQueue + path: /tf/queue/RandomShuffleQueue +- title: tf.ragged + section: + - title: Overview + path: /tf/ragged + - title: boolean_mask + path: /tf/ragged/boolean_mask + - title: constant + path: /tf/ragged/constant + - title: map_flat_values + path: /tf/ragged/map_flat_values + - title: range + path: /tf/ragged/range + - title: row_splits_to_segment_ids + path: /tf/ragged/row_splits_to_segment_ids + - title: segment_ids_to_row_splits + path: /tf/ragged/segment_ids_to_row_splits + - title: stack + path: /tf/ragged/stack + - title: stack_dynamic_partitions + path: /tf/ragged/stack_dynamic_partitions +- title: tf.random + section: + - title: Overview + path: /tf/random + - title: Algorithm + path: /tf/random/Algorithm + - title: Generator + path: /tf/random/Generator + - title: all_candidate_sampler + path: /tf/random/all_candidate_sampler + - title: categorical + path: /tf/random/categorical + - title: create_rng_state + path: /tf/random/create_rng_state + - title: fixed_unigram_candidate_sampler + path: /tf/random/fixed_unigram_candidate_sampler + - title: gamma + path: /tf/random/gamma + - title: get_global_generator + path: /tf/random/get_global_generator + - title: learned_unigram_candidate_sampler + path: /tf/random/learned_unigram_candidate_sampler + - title: log_uniform_candidate_sampler + path: /tf/random/log_uniform_candidate_sampler + - title: normal + path: /tf/random/normal + - title: poisson + path: /tf/random/poisson + - title: set_global_generator + path: /tf/random/set_global_generator + - title: set_seed + path: /tf/random/set_seed + - title: shuffle + path: /tf/random/shuffle + - title: stateless_binomial + path: /tf/random/stateless_binomial + - title: stateless_categorical + path: /tf/random/stateless_categorical + - title: stateless_gamma + path: /tf/random/stateless_gamma + - title: stateless_normal + path: /tf/random/stateless_normal + - title: stateless_poisson + path: /tf/random/stateless_poisson + - title: stateless_truncated_normal + path: /tf/random/stateless_truncated_normal + - title: stateless_uniform + path: /tf/random/stateless_uniform + - title: truncated_normal + path: /tf/random/truncated_normal + - title: uniform + path: /tf/random/uniform + - title: uniform_candidate_sampler + path: /tf/random/uniform_candidate_sampler + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/random/experimental +- title: tf.raw_ops + section: + - title: Overview + path: /tf/raw_ops + - title: Abort + path: /tf/raw_ops/Abort + - title: Abs + path: /tf/raw_ops/Abs + - title: AccumulateNV2 + path: /tf/raw_ops/AccumulateNV2 + - title: AccumulatorApplyGradient + path: /tf/raw_ops/AccumulatorApplyGradient + - title: AccumulatorNumAccumulated + path: /tf/raw_ops/AccumulatorNumAccumulated + - title: AccumulatorSetGlobalStep + path: /tf/raw_ops/AccumulatorSetGlobalStep + - title: AccumulatorTakeGradient + path: /tf/raw_ops/AccumulatorTakeGradient + - title: Acos + path: /tf/raw_ops/Acos + - title: Acosh + path: /tf/raw_ops/Acosh + - title: Add + path: /tf/raw_ops/Add + - title: AddManySparseToTensorsMap + path: 
/tf/raw_ops/AddManySparseToTensorsMap + - title: AddN + path: /tf/raw_ops/AddN + - title: AddSparseToTensorsMap + path: /tf/raw_ops/AddSparseToTensorsMap + - title: AddV2 + path: /tf/raw_ops/AddV2 + - title: AdjustContrast + path: /tf/raw_ops/AdjustContrast + - title: AdjustContrastv2 + path: /tf/raw_ops/AdjustContrastv2 + - title: AdjustHue + path: /tf/raw_ops/AdjustHue + - title: AdjustSaturation + path: /tf/raw_ops/AdjustSaturation + - title: All + path: /tf/raw_ops/All + - title: AllCandidateSampler + path: /tf/raw_ops/AllCandidateSampler + - title: AllToAll + path: /tf/raw_ops/AllToAll + - title: Angle + path: /tf/raw_ops/Angle + - title: AnonymousIterator + path: /tf/raw_ops/AnonymousIterator + - title: AnonymousIteratorV2 + path: /tf/raw_ops/AnonymousIteratorV2 + - title: AnonymousMemoryCache + path: /tf/raw_ops/AnonymousMemoryCache + - title: AnonymousMultiDeviceIterator + path: /tf/raw_ops/AnonymousMultiDeviceIterator + - title: AnonymousRandomSeedGenerator + path: /tf/raw_ops/AnonymousRandomSeedGenerator + - title: Any + path: /tf/raw_ops/Any + - title: ApplyAdaMax + path: /tf/raw_ops/ApplyAdaMax + - title: ApplyAdadelta + path: /tf/raw_ops/ApplyAdadelta + - title: ApplyAdagrad + path: /tf/raw_ops/ApplyAdagrad + - title: ApplyAdagradDA + path: /tf/raw_ops/ApplyAdagradDA + - title: ApplyAdagradV2 + path: /tf/raw_ops/ApplyAdagradV2 + - title: ApplyAdam + path: /tf/raw_ops/ApplyAdam + - title: ApplyAddSign + path: /tf/raw_ops/ApplyAddSign + - title: ApplyCenteredRMSProp + path: /tf/raw_ops/ApplyCenteredRMSProp + - title: ApplyFtrl + path: /tf/raw_ops/ApplyFtrl + - title: ApplyFtrlV2 + path: /tf/raw_ops/ApplyFtrlV2 + - title: ApplyGradientDescent + path: /tf/raw_ops/ApplyGradientDescent + - title: ApplyMomentum + path: /tf/raw_ops/ApplyMomentum + - title: ApplyPowerSign + path: /tf/raw_ops/ApplyPowerSign + - title: ApplyProximalAdagrad + path: /tf/raw_ops/ApplyProximalAdagrad + - title: ApplyProximalGradientDescent + path: /tf/raw_ops/ApplyProximalGradientDescent + - title: ApplyRMSProp + path: /tf/raw_ops/ApplyRMSProp + - title: ApproximateEqual + path: /tf/raw_ops/ApproximateEqual + - title: ArgMax + path: /tf/raw_ops/ArgMax + - title: ArgMin + path: /tf/raw_ops/ArgMin + - title: AsString + path: /tf/raw_ops/AsString + - title: Asin + path: /tf/raw_ops/Asin + - title: Asinh + path: /tf/raw_ops/Asinh + - title: Assert + path: /tf/raw_ops/Assert + - title: AssertCardinalityDataset + path: /tf/raw_ops/AssertCardinalityDataset + - title: AssertNextDataset + path: /tf/raw_ops/AssertNextDataset + - title: Assign + path: /tf/raw_ops/Assign + - title: AssignAdd + path: /tf/raw_ops/AssignAdd + - title: AssignAddVariableOp + path: /tf/raw_ops/AssignAddVariableOp + - title: AssignSub + path: /tf/raw_ops/AssignSub + - title: AssignSubVariableOp + path: /tf/raw_ops/AssignSubVariableOp + - title: AssignVariableOp + path: /tf/raw_ops/AssignVariableOp + - title: Atan + path: /tf/raw_ops/Atan + - title: Atan2 + path: /tf/raw_ops/Atan2 + - title: Atanh + path: /tf/raw_ops/Atanh + - title: AudioSpectrogram + path: /tf/raw_ops/AudioSpectrogram + - title: AudioSummary + path: /tf/raw_ops/AudioSummary + - title: AudioSummaryV2 + path: /tf/raw_ops/AudioSummaryV2 + - title: AutoShardDataset + path: /tf/raw_ops/AutoShardDataset + - title: AvgPool + path: /tf/raw_ops/AvgPool + - title: AvgPool3D + path: /tf/raw_ops/AvgPool3D + - title: AvgPool3DGrad + path: /tf/raw_ops/AvgPool3DGrad + - title: AvgPoolGrad + path: /tf/raw_ops/AvgPoolGrad + - title: Barrier + path: /tf/raw_ops/Barrier + - title: BarrierClose 
+ path: /tf/raw_ops/BarrierClose + - title: BarrierIncompleteSize + path: /tf/raw_ops/BarrierIncompleteSize + - title: BarrierInsertMany + path: /tf/raw_ops/BarrierInsertMany + - title: BarrierReadySize + path: /tf/raw_ops/BarrierReadySize + - title: BarrierTakeMany + path: /tf/raw_ops/BarrierTakeMany + - title: Batch + path: /tf/raw_ops/Batch + - title: BatchCholesky + path: /tf/raw_ops/BatchCholesky + - title: BatchCholeskyGrad + path: /tf/raw_ops/BatchCholeskyGrad + - title: BatchDataset + path: /tf/raw_ops/BatchDataset + - title: BatchDatasetV2 + path: /tf/raw_ops/BatchDatasetV2 + - title: BatchFFT + path: /tf/raw_ops/BatchFFT + - title: BatchFFT2D + path: /tf/raw_ops/BatchFFT2D + - title: BatchFFT3D + path: /tf/raw_ops/BatchFFT3D + - title: BatchFunction + path: /tf/raw_ops/BatchFunction + - title: BatchIFFT + path: /tf/raw_ops/BatchIFFT + - title: BatchIFFT2D + path: /tf/raw_ops/BatchIFFT2D + - title: BatchIFFT3D + path: /tf/raw_ops/BatchIFFT3D + - title: BatchMatMul + path: /tf/raw_ops/BatchMatMul + - title: BatchMatMulV2 + path: /tf/raw_ops/BatchMatMulV2 + - title: BatchMatrixBandPart + path: /tf/raw_ops/BatchMatrixBandPart + - title: BatchMatrixDeterminant + path: /tf/raw_ops/BatchMatrixDeterminant + - title: BatchMatrixDiag + path: /tf/raw_ops/BatchMatrixDiag + - title: BatchMatrixDiagPart + path: /tf/raw_ops/BatchMatrixDiagPart + - title: BatchMatrixInverse + path: /tf/raw_ops/BatchMatrixInverse + - title: BatchMatrixSetDiag + path: /tf/raw_ops/BatchMatrixSetDiag + - title: BatchMatrixSolve + path: /tf/raw_ops/BatchMatrixSolve + - title: BatchMatrixSolveLs + path: /tf/raw_ops/BatchMatrixSolveLs + - title: BatchMatrixTriangularSolve + path: /tf/raw_ops/BatchMatrixTriangularSolve + - title: BatchNormWithGlobalNormalization + path: /tf/raw_ops/BatchNormWithGlobalNormalization + - title: BatchNormWithGlobalNormalizationGrad + path: /tf/raw_ops/BatchNormWithGlobalNormalizationGrad + - title: BatchSelfAdjointEig + path: /tf/raw_ops/BatchSelfAdjointEig + - title: BatchSelfAdjointEigV2 + path: /tf/raw_ops/BatchSelfAdjointEigV2 + - title: BatchSvd + path: /tf/raw_ops/BatchSvd + - title: BatchToSpace + path: /tf/raw_ops/BatchToSpace + - title: BatchToSpaceND + path: /tf/raw_ops/BatchToSpaceND + - title: BesselI0e + path: /tf/raw_ops/BesselI0e + - title: BesselI1e + path: /tf/raw_ops/BesselI1e + - title: Betainc + path: /tf/raw_ops/Betainc + - title: BiasAdd + path: /tf/raw_ops/BiasAdd + - title: BiasAddGrad + path: /tf/raw_ops/BiasAddGrad + - title: BiasAddV1 + path: /tf/raw_ops/BiasAddV1 + - title: Bincount + path: /tf/raw_ops/Bincount + - title: Bitcast + path: /tf/raw_ops/Bitcast + - title: BitwiseAnd + path: /tf/raw_ops/BitwiseAnd + - title: BitwiseOr + path: /tf/raw_ops/BitwiseOr + - title: BitwiseXor + path: /tf/raw_ops/BitwiseXor + - title: BlockLSTM + path: /tf/raw_ops/BlockLSTM + - title: BlockLSTMGrad + path: /tf/raw_ops/BlockLSTMGrad + - title: BlockLSTMGradV2 + path: /tf/raw_ops/BlockLSTMGradV2 + - title: BlockLSTMV2 + path: /tf/raw_ops/BlockLSTMV2 + - title: BoostedTreesAggregateStats + path: /tf/raw_ops/BoostedTreesAggregateStats + - title: BoostedTreesBucketize + path: /tf/raw_ops/BoostedTreesBucketize + - title: BoostedTreesCalculateBestFeatureSplit + path: /tf/raw_ops/BoostedTreesCalculateBestFeatureSplit + - title: BoostedTreesCalculateBestFeatureSplitV2 + path: /tf/raw_ops/BoostedTreesCalculateBestFeatureSplitV2 + - title: BoostedTreesCalculateBestGainsPerFeature + path: /tf/raw_ops/BoostedTreesCalculateBestGainsPerFeature + - title: BoostedTreesCenterBias + path: 
/tf/raw_ops/BoostedTreesCenterBias + - title: BoostedTreesCreateEnsemble + path: /tf/raw_ops/BoostedTreesCreateEnsemble + - title: BoostedTreesCreateQuantileStreamResource + path: /tf/raw_ops/BoostedTreesCreateQuantileStreamResource + - title: BoostedTreesDeserializeEnsemble + path: /tf/raw_ops/BoostedTreesDeserializeEnsemble + - title: BoostedTreesEnsembleResourceHandleOp + path: /tf/raw_ops/BoostedTreesEnsembleResourceHandleOp + - title: BoostedTreesExampleDebugOutputs + path: /tf/raw_ops/BoostedTreesExampleDebugOutputs + - title: BoostedTreesFlushQuantileSummaries + path: /tf/raw_ops/BoostedTreesFlushQuantileSummaries + - title: BoostedTreesGetEnsembleStates + path: /tf/raw_ops/BoostedTreesGetEnsembleStates + - title: BoostedTreesMakeQuantileSummaries + path: /tf/raw_ops/BoostedTreesMakeQuantileSummaries + - title: BoostedTreesMakeStatsSummary + path: /tf/raw_ops/BoostedTreesMakeStatsSummary + - title: BoostedTreesPredict + path: /tf/raw_ops/BoostedTreesPredict + - title: BoostedTreesQuantileStreamResourceAddSummaries + path: /tf/raw_ops/BoostedTreesQuantileStreamResourceAddSummaries + - title: BoostedTreesQuantileStreamResourceDeserialize + path: /tf/raw_ops/BoostedTreesQuantileStreamResourceDeserialize + - title: BoostedTreesQuantileStreamResourceFlush + path: /tf/raw_ops/BoostedTreesQuantileStreamResourceFlush + - title: BoostedTreesQuantileStreamResourceGetBucketBoundaries + path: /tf/raw_ops/BoostedTreesQuantileStreamResourceGetBucketBoundaries + - title: BoostedTreesQuantileStreamResourceHandleOp + path: /tf/raw_ops/BoostedTreesQuantileStreamResourceHandleOp + - title: BoostedTreesSerializeEnsemble + path: /tf/raw_ops/BoostedTreesSerializeEnsemble + - title: BoostedTreesSparseAggregateStats + path: /tf/raw_ops/BoostedTreesSparseAggregateStats + - title: BoostedTreesSparseCalculateBestFeatureSplit + path: /tf/raw_ops/BoostedTreesSparseCalculateBestFeatureSplit + - title: BoostedTreesTrainingPredict + path: /tf/raw_ops/BoostedTreesTrainingPredict + - title: BoostedTreesUpdateEnsemble + path: /tf/raw_ops/BoostedTreesUpdateEnsemble + - title: BoostedTreesUpdateEnsembleV2 + path: /tf/raw_ops/BoostedTreesUpdateEnsembleV2 + - title: BroadcastArgs + path: /tf/raw_ops/BroadcastArgs + - title: BroadcastGradientArgs + path: /tf/raw_ops/BroadcastGradientArgs + - title: BroadcastTo + path: /tf/raw_ops/BroadcastTo + - title: Bucketize + path: /tf/raw_ops/Bucketize + - title: BytesProducedStatsDataset + path: /tf/raw_ops/BytesProducedStatsDataset + - title: CSRSparseMatrixComponents + path: /tf/raw_ops/CSRSparseMatrixComponents + - title: CSRSparseMatrixToDense + path: /tf/raw_ops/CSRSparseMatrixToDense + - title: CSRSparseMatrixToSparseTensor + path: /tf/raw_ops/CSRSparseMatrixToSparseTensor + - title: CSVDataset + path: /tf/raw_ops/CSVDataset + - title: CTCBeamSearchDecoder + path: /tf/raw_ops/CTCBeamSearchDecoder + - title: CTCGreedyDecoder + path: /tf/raw_ops/CTCGreedyDecoder + - title: CTCLoss + path: /tf/raw_ops/CTCLoss + - title: CTCLossV2 + path: /tf/raw_ops/CTCLossV2 + - title: CacheDataset + path: /tf/raw_ops/CacheDataset + - title: CacheDatasetV2 + path: /tf/raw_ops/CacheDatasetV2 + - title: Case + path: /tf/raw_ops/Case + - title: Cast + path: /tf/raw_ops/Cast + - title: Ceil + path: /tf/raw_ops/Ceil + - title: CheckNumerics + path: /tf/raw_ops/CheckNumerics + - title: CheckNumericsV2 + path: /tf/raw_ops/CheckNumericsV2 + - title: Cholesky + path: /tf/raw_ops/Cholesky + - title: CholeskyGrad + path: /tf/raw_ops/CholeskyGrad + - title: ChooseFastestBranchDataset + path: 
/tf/raw_ops/ChooseFastestBranchDataset + - title: ChooseFastestDataset + path: /tf/raw_ops/ChooseFastestDataset + - title: ClipByValue + path: /tf/raw_ops/ClipByValue + - title: CloseSummaryWriter + path: /tf/raw_ops/CloseSummaryWriter + - title: CollectiveBcastRecv + path: /tf/raw_ops/CollectiveBcastRecv + - title: CollectiveBcastSend + path: /tf/raw_ops/CollectiveBcastSend + - title: CollectiveGather + path: /tf/raw_ops/CollectiveGather + - title: CollectivePermute + path: /tf/raw_ops/CollectivePermute + - title: CollectiveReduce + path: /tf/raw_ops/CollectiveReduce + - title: CombinedNonMaxSuppression + path: /tf/raw_ops/CombinedNonMaxSuppression + - title: CompareAndBitpack + path: /tf/raw_ops/CompareAndBitpack + - title: Complex + path: /tf/raw_ops/Complex + - title: ComplexAbs + path: /tf/raw_ops/ComplexAbs + - title: ComputeAccidentalHits + path: /tf/raw_ops/ComputeAccidentalHits + - title: Concat + path: /tf/raw_ops/Concat + - title: ConcatOffset + path: /tf/raw_ops/ConcatOffset + - title: ConcatV2 + path: /tf/raw_ops/ConcatV2 + - title: ConcatenateDataset + path: /tf/raw_ops/ConcatenateDataset + - title: ConditionalAccumulator + path: /tf/raw_ops/ConditionalAccumulator + - title: ConfigureDistributedTPU + path: /tf/raw_ops/ConfigureDistributedTPU + - title: ConfigureTPUEmbedding + path: /tf/raw_ops/ConfigureTPUEmbedding + - title: Conj + path: /tf/raw_ops/Conj + - title: ConjugateTranspose + path: /tf/raw_ops/ConjugateTranspose + - title: Const + path: /tf/raw_ops/Const + - title: ConsumeMutexLock + path: /tf/raw_ops/ConsumeMutexLock + - title: ControlTrigger + path: /tf/raw_ops/ControlTrigger + - title: Conv2D + path: /tf/raw_ops/Conv2D + - title: Conv2DBackpropFilter + path: /tf/raw_ops/Conv2DBackpropFilter + - title: Conv2DBackpropInput + path: /tf/raw_ops/Conv2DBackpropInput + - title: Conv3D + path: /tf/raw_ops/Conv3D + - title: Conv3DBackpropFilter + path: /tf/raw_ops/Conv3DBackpropFilter + - title: Conv3DBackpropFilterV2 + path: /tf/raw_ops/Conv3DBackpropFilterV2 + - title: Conv3DBackpropInput + path: /tf/raw_ops/Conv3DBackpropInput + - title: Conv3DBackpropInputV2 + path: /tf/raw_ops/Conv3DBackpropInputV2 + - title: Copy + path: /tf/raw_ops/Copy + - title: CopyHost + path: /tf/raw_ops/CopyHost + - title: Cos + path: /tf/raw_ops/Cos + - title: Cosh + path: /tf/raw_ops/Cosh + - title: CountUpTo + path: /tf/raw_ops/CountUpTo + - title: CreateSummaryDbWriter + path: /tf/raw_ops/CreateSummaryDbWriter + - title: CreateSummaryFileWriter + path: /tf/raw_ops/CreateSummaryFileWriter + - title: CropAndResize + path: /tf/raw_ops/CropAndResize + - title: CropAndResizeGradBoxes + path: /tf/raw_ops/CropAndResizeGradBoxes + - title: CropAndResizeGradImage + path: /tf/raw_ops/CropAndResizeGradImage + - title: Cross + path: /tf/raw_ops/Cross + - title: CrossReplicaSum + path: /tf/raw_ops/CrossReplicaSum + - title: CudnnRNN + path: /tf/raw_ops/CudnnRNN + - title: CudnnRNNBackprop + path: /tf/raw_ops/CudnnRNNBackprop + - title: CudnnRNNBackpropV2 + path: /tf/raw_ops/CudnnRNNBackpropV2 + - title: CudnnRNNBackpropV3 + path: /tf/raw_ops/CudnnRNNBackpropV3 + - title: CudnnRNNCanonicalToParams + path: /tf/raw_ops/CudnnRNNCanonicalToParams + - title: CudnnRNNCanonicalToParamsV2 + path: /tf/raw_ops/CudnnRNNCanonicalToParamsV2 + - title: CudnnRNNParamsSize + path: /tf/raw_ops/CudnnRNNParamsSize + - title: CudnnRNNParamsToCanonical + path: /tf/raw_ops/CudnnRNNParamsToCanonical + - title: CudnnRNNParamsToCanonicalV2 + path: /tf/raw_ops/CudnnRNNParamsToCanonicalV2 + - title: CudnnRNNV2 + path: 
/tf/raw_ops/CudnnRNNV2 + - title: CudnnRNNV3 + path: /tf/raw_ops/CudnnRNNV3 + - title: Cumprod + path: /tf/raw_ops/Cumprod + - title: Cumsum + path: /tf/raw_ops/Cumsum + - title: CumulativeLogsumexp + path: /tf/raw_ops/CumulativeLogsumexp + - title: DataFormatDimMap + path: /tf/raw_ops/DataFormatDimMap + - title: DataFormatVecPermute + path: /tf/raw_ops/DataFormatVecPermute + - title: DatasetCardinality + path: /tf/raw_ops/DatasetCardinality + - title: DatasetFromGraph + path: /tf/raw_ops/DatasetFromGraph + - title: DatasetToGraph + path: /tf/raw_ops/DatasetToGraph + - title: DatasetToGraphV2 + path: /tf/raw_ops/DatasetToGraphV2 + - title: DatasetToSingleElement + path: /tf/raw_ops/DatasetToSingleElement + - title: DatasetToTFRecord + path: /tf/raw_ops/DatasetToTFRecord + - title: Dawsn + path: /tf/raw_ops/Dawsn + - title: DebugGradientIdentity + path: /tf/raw_ops/DebugGradientIdentity + - title: DebugGradientRefIdentity + path: /tf/raw_ops/DebugGradientRefIdentity + - title: DebugIdentity + path: /tf/raw_ops/DebugIdentity + - title: DebugIdentityV2 + path: /tf/raw_ops/DebugIdentityV2 + - title: DebugNanCount + path: /tf/raw_ops/DebugNanCount + - title: DebugNumericSummary + path: /tf/raw_ops/DebugNumericSummary + - title: DebugNumericSummaryV2 + path: /tf/raw_ops/DebugNumericSummaryV2 + - title: DecodeAndCropJpeg + path: /tf/raw_ops/DecodeAndCropJpeg + - title: DecodeBase64 + path: /tf/raw_ops/DecodeBase64 + - title: DecodeBmp + path: /tf/raw_ops/DecodeBmp + - title: DecodeCSV + path: /tf/raw_ops/DecodeCSV + - title: DecodeCompressed + path: /tf/raw_ops/DecodeCompressed + - title: DecodeGif + path: /tf/raw_ops/DecodeGif + - title: DecodeJSONExample + path: /tf/raw_ops/DecodeJSONExample + - title: DecodeJpeg + path: /tf/raw_ops/DecodeJpeg + - title: DecodePaddedRaw + path: /tf/raw_ops/DecodePaddedRaw + - title: DecodePng + path: /tf/raw_ops/DecodePng + - title: DecodeProtoV2 + path: /tf/raw_ops/DecodeProtoV2 + - title: DecodeRaw + path: /tf/raw_ops/DecodeRaw + - title: DecodeWav + path: /tf/raw_ops/DecodeWav + - title: DeepCopy + path: /tf/raw_ops/DeepCopy + - title: DeleteIterator + path: /tf/raw_ops/DeleteIterator + - title: DeleteMemoryCache + path: /tf/raw_ops/DeleteMemoryCache + - title: DeleteMultiDeviceIterator + path: /tf/raw_ops/DeleteMultiDeviceIterator + - title: DeleteRandomSeedGenerator + path: /tf/raw_ops/DeleteRandomSeedGenerator + - title: DeleteSessionTensor + path: /tf/raw_ops/DeleteSessionTensor + - title: DenseToCSRSparseMatrix + path: /tf/raw_ops/DenseToCSRSparseMatrix + - title: DenseToDenseSetOperation + path: /tf/raw_ops/DenseToDenseSetOperation + - title: DenseToSparseBatchDataset + path: /tf/raw_ops/DenseToSparseBatchDataset + - title: DenseToSparseSetOperation + path: /tf/raw_ops/DenseToSparseSetOperation + - title: DepthToSpace + path: /tf/raw_ops/DepthToSpace + - title: DepthwiseConv2dNative + path: /tf/raw_ops/DepthwiseConv2dNative + - title: DepthwiseConv2dNativeBackpropFilter + path: /tf/raw_ops/DepthwiseConv2dNativeBackpropFilter + - title: DepthwiseConv2dNativeBackpropInput + path: /tf/raw_ops/DepthwiseConv2dNativeBackpropInput + - title: Dequantize + path: /tf/raw_ops/Dequantize + - title: DeserializeIterator + path: /tf/raw_ops/DeserializeIterator + - title: DeserializeManySparse + path: /tf/raw_ops/DeserializeManySparse + - title: DeserializeSparse + path: /tf/raw_ops/DeserializeSparse + - title: DestroyResourceOp + path: /tf/raw_ops/DestroyResourceOp + - title: DestroyTemporaryVariable + path: /tf/raw_ops/DestroyTemporaryVariable + - title: Diag + 
path: /tf/raw_ops/Diag + - title: DiagPart + path: /tf/raw_ops/DiagPart + - title: Digamma + path: /tf/raw_ops/Digamma + - title: Dilation2D + path: /tf/raw_ops/Dilation2D + - title: Dilation2DBackpropFilter + path: /tf/raw_ops/Dilation2DBackpropFilter + - title: Dilation2DBackpropInput + path: /tf/raw_ops/Dilation2DBackpropInput + - title: DirectedInterleaveDataset + path: /tf/raw_ops/DirectedInterleaveDataset + - title: Div + path: /tf/raw_ops/Div + - title: DivNoNan + path: /tf/raw_ops/DivNoNan + - title: DrawBoundingBoxes + path: /tf/raw_ops/DrawBoundingBoxes + - title: DrawBoundingBoxesV2 + path: /tf/raw_ops/DrawBoundingBoxesV2 + - title: DummyMemoryCache + path: /tf/raw_ops/DummyMemoryCache + - title: DynamicPartition + path: /tf/raw_ops/DynamicPartition + - title: DynamicStitch + path: /tf/raw_ops/DynamicStitch + - title: EagerPyFunc + path: /tf/raw_ops/EagerPyFunc + - title: EditDistance + path: /tf/raw_ops/EditDistance + - title: Eig + path: /tf/raw_ops/Eig + - title: Einsum + path: /tf/raw_ops/Einsum + - title: Elu + path: /tf/raw_ops/Elu + - title: EluGrad + path: /tf/raw_ops/EluGrad + - title: Empty + path: /tf/raw_ops/Empty + - title: EmptyTensorList + path: /tf/raw_ops/EmptyTensorList + - title: EncodeBase64 + path: /tf/raw_ops/EncodeBase64 + - title: EncodeJpeg + path: /tf/raw_ops/EncodeJpeg + - title: EncodeJpegVariableQuality + path: /tf/raw_ops/EncodeJpegVariableQuality + - title: EncodePng + path: /tf/raw_ops/EncodePng + - title: EncodeProto + path: /tf/raw_ops/EncodeProto + - title: EncodeWav + path: /tf/raw_ops/EncodeWav + - title: EnqueueTPUEmbeddingIntegerBatch + path: /tf/raw_ops/EnqueueTPUEmbeddingIntegerBatch + - title: EnqueueTPUEmbeddingSparseBatch + path: /tf/raw_ops/EnqueueTPUEmbeddingSparseBatch + - title: EnqueueTPUEmbeddingSparseTensorBatch + path: /tf/raw_ops/EnqueueTPUEmbeddingSparseTensorBatch + - title: EnsureShape + path: /tf/raw_ops/EnsureShape + - title: Enter + path: /tf/raw_ops/Enter + - title: Equal + path: /tf/raw_ops/Equal + - title: Erf + path: /tf/raw_ops/Erf + - title: Erfc + path: /tf/raw_ops/Erfc + - title: Erfinv + path: /tf/raw_ops/Erfinv + - title: EuclideanNorm + path: /tf/raw_ops/EuclideanNorm + - title: Exit + path: /tf/raw_ops/Exit + - title: Exp + path: /tf/raw_ops/Exp + - title: ExpandDims + path: /tf/raw_ops/ExpandDims + - title: ExperimentalAssertNextDataset + path: /tf/raw_ops/ExperimentalAssertNextDataset + - title: ExperimentalAutoShardDataset + path: /tf/raw_ops/ExperimentalAutoShardDataset + - title: ExperimentalBytesProducedStatsDataset + path: /tf/raw_ops/ExperimentalBytesProducedStatsDataset + - title: ExperimentalCSVDataset + path: /tf/raw_ops/ExperimentalCSVDataset + - title: ExperimentalChooseFastestDataset + path: /tf/raw_ops/ExperimentalChooseFastestDataset + - title: ExperimentalDatasetCardinality + path: /tf/raw_ops/ExperimentalDatasetCardinality + - title: ExperimentalDatasetToTFRecord + path: /tf/raw_ops/ExperimentalDatasetToTFRecord + - title: ExperimentalDenseToSparseBatchDataset + path: /tf/raw_ops/ExperimentalDenseToSparseBatchDataset + - title: ExperimentalDirectedInterleaveDataset + path: /tf/raw_ops/ExperimentalDirectedInterleaveDataset + - title: ExperimentalGroupByReducerDataset + path: /tf/raw_ops/ExperimentalGroupByReducerDataset + - title: ExperimentalGroupByWindowDataset + path: /tf/raw_ops/ExperimentalGroupByWindowDataset + - title: ExperimentalIgnoreErrorsDataset + path: /tf/raw_ops/ExperimentalIgnoreErrorsDataset + - title: ExperimentalIteratorGetDevice + path: 
/tf/raw_ops/ExperimentalIteratorGetDevice + - title: ExperimentalLMDBDataset + path: /tf/raw_ops/ExperimentalLMDBDataset + - title: ExperimentalLatencyStatsDataset + path: /tf/raw_ops/ExperimentalLatencyStatsDataset + - title: ExperimentalMapAndBatchDataset + path: /tf/raw_ops/ExperimentalMapAndBatchDataset + - title: ExperimentalMapDataset + path: /tf/raw_ops/ExperimentalMapDataset + - title: ExperimentalMatchingFilesDataset + path: /tf/raw_ops/ExperimentalMatchingFilesDataset + - title: ExperimentalMaxIntraOpParallelismDataset + path: /tf/raw_ops/ExperimentalMaxIntraOpParallelismDataset + - title: ExperimentalNonSerializableDataset + path: /tf/raw_ops/ExperimentalNonSerializableDataset + - title: ExperimentalParallelInterleaveDataset + path: /tf/raw_ops/ExperimentalParallelInterleaveDataset + - title: ExperimentalParseExampleDataset + path: /tf/raw_ops/ExperimentalParseExampleDataset + - title: ExperimentalPrivateThreadPoolDataset + path: /tf/raw_ops/ExperimentalPrivateThreadPoolDataset + - title: ExperimentalRandomDataset + path: /tf/raw_ops/ExperimentalRandomDataset + - title: ExperimentalRebatchDataset + path: /tf/raw_ops/ExperimentalRebatchDataset + - title: ExperimentalScanDataset + path: /tf/raw_ops/ExperimentalScanDataset + - title: ExperimentalSetStatsAggregatorDataset + path: /tf/raw_ops/ExperimentalSetStatsAggregatorDataset + - title: ExperimentalSleepDataset + path: /tf/raw_ops/ExperimentalSleepDataset + - title: ExperimentalSlidingWindowDataset + path: /tf/raw_ops/ExperimentalSlidingWindowDataset + - title: ExperimentalSqlDataset + path: /tf/raw_ops/ExperimentalSqlDataset + - title: ExperimentalStatsAggregatorHandle + path: /tf/raw_ops/ExperimentalStatsAggregatorHandle + - title: ExperimentalStatsAggregatorSummary + path: /tf/raw_ops/ExperimentalStatsAggregatorSummary + - title: ExperimentalTakeWhileDataset + path: /tf/raw_ops/ExperimentalTakeWhileDataset + - title: ExperimentalThreadPoolDataset + path: /tf/raw_ops/ExperimentalThreadPoolDataset + - title: ExperimentalThreadPoolHandle + path: /tf/raw_ops/ExperimentalThreadPoolHandle + - title: ExperimentalUnbatchDataset + path: /tf/raw_ops/ExperimentalUnbatchDataset + - title: ExperimentalUniqueDataset + path: /tf/raw_ops/ExperimentalUniqueDataset + - title: Expint + path: /tf/raw_ops/Expint + - title: Expm1 + path: /tf/raw_ops/Expm1 + - title: ExtractGlimpse + path: /tf/raw_ops/ExtractGlimpse + - title: ExtractImagePatches + path: /tf/raw_ops/ExtractImagePatches + - title: ExtractJpegShape + path: /tf/raw_ops/ExtractJpegShape + - title: ExtractVolumePatches + path: /tf/raw_ops/ExtractVolumePatches + - title: FFT + path: /tf/raw_ops/FFT + - title: FFT2D + path: /tf/raw_ops/FFT2D + - title: FFT3D + path: /tf/raw_ops/FFT3D + - title: FIFOQueue + path: /tf/raw_ops/FIFOQueue + - title: FIFOQueueV2 + path: /tf/raw_ops/FIFOQueueV2 + - title: Fact + path: /tf/raw_ops/Fact + - title: FakeParam + path: /tf/raw_ops/FakeParam + - title: FakeQuantWithMinMaxArgs + path: /tf/raw_ops/FakeQuantWithMinMaxArgs + - title: FakeQuantWithMinMaxArgsGradient + path: /tf/raw_ops/FakeQuantWithMinMaxArgsGradient + - title: FakeQuantWithMinMaxVars + path: /tf/raw_ops/FakeQuantWithMinMaxVars + - title: FakeQuantWithMinMaxVarsGradient + path: /tf/raw_ops/FakeQuantWithMinMaxVarsGradient + - title: FakeQuantWithMinMaxVarsPerChannel + path: /tf/raw_ops/FakeQuantWithMinMaxVarsPerChannel + - title: FakeQuantWithMinMaxVarsPerChannelGradient + path: /tf/raw_ops/FakeQuantWithMinMaxVarsPerChannelGradient + - title: FakeQueue + path: /tf/raw_ops/FakeQueue + - title: 
Fill + path: /tf/raw_ops/Fill + - title: FilterByLastComponentDataset + path: /tf/raw_ops/FilterByLastComponentDataset + - title: FilterDataset + path: /tf/raw_ops/FilterDataset + - title: Fingerprint + path: /tf/raw_ops/Fingerprint + - title: FixedLengthRecordDataset + path: /tf/raw_ops/FixedLengthRecordDataset + - title: FixedLengthRecordDatasetV2 + path: /tf/raw_ops/FixedLengthRecordDatasetV2 + - title: FixedLengthRecordReader + path: /tf/raw_ops/FixedLengthRecordReader + - title: FixedLengthRecordReaderV2 + path: /tf/raw_ops/FixedLengthRecordReaderV2 + - title: FixedUnigramCandidateSampler + path: /tf/raw_ops/FixedUnigramCandidateSampler + - title: FlatMapDataset + path: /tf/raw_ops/FlatMapDataset + - title: Floor + path: /tf/raw_ops/Floor + - title: FloorDiv + path: /tf/raw_ops/FloorDiv + - title: FloorMod + path: /tf/raw_ops/FloorMod + - title: FlushSummaryWriter + path: /tf/raw_ops/FlushSummaryWriter + - title: For + path: /tf/raw_ops/For + - title: FractionalAvgPool + path: /tf/raw_ops/FractionalAvgPool + - title: FractionalAvgPoolGrad + path: /tf/raw_ops/FractionalAvgPoolGrad + - title: FractionalMaxPool + path: /tf/raw_ops/FractionalMaxPool + - title: FractionalMaxPoolGrad + path: /tf/raw_ops/FractionalMaxPoolGrad + - title: FresnelCos + path: /tf/raw_ops/FresnelCos + - title: FresnelSin + path: /tf/raw_ops/FresnelSin + - title: FusedBatchNorm + path: /tf/raw_ops/FusedBatchNorm + - title: FusedBatchNormGrad + path: /tf/raw_ops/FusedBatchNormGrad + - title: FusedBatchNormGradV2 + path: /tf/raw_ops/FusedBatchNormGradV2 + - title: FusedBatchNormGradV3 + path: /tf/raw_ops/FusedBatchNormGradV3 + - title: FusedBatchNormV2 + path: /tf/raw_ops/FusedBatchNormV2 + - title: FusedBatchNormV3 + path: /tf/raw_ops/FusedBatchNormV3 + - title: FusedPadConv2D + path: /tf/raw_ops/FusedPadConv2D + - title: FusedResizeAndPadConv2D + path: /tf/raw_ops/FusedResizeAndPadConv2D + - title: GRUBlockCell + path: /tf/raw_ops/GRUBlockCell + - title: GRUBlockCellGrad + path: /tf/raw_ops/GRUBlockCellGrad + - title: Gather + path: /tf/raw_ops/Gather + - title: GatherNd + path: /tf/raw_ops/GatherNd + - title: GatherV2 + path: /tf/raw_ops/GatherV2 + - title: GenerateBoundingBoxProposals + path: /tf/raw_ops/GenerateBoundingBoxProposals + - title: GenerateVocabRemapping + path: /tf/raw_ops/GenerateVocabRemapping + - title: GeneratorDataset + path: /tf/raw_ops/GeneratorDataset + - title: GetSessionHandle + path: /tf/raw_ops/GetSessionHandle + - title: GetSessionHandleV2 + path: /tf/raw_ops/GetSessionHandleV2 + - title: GetSessionTensor + path: /tf/raw_ops/GetSessionTensor + - title: Greater + path: /tf/raw_ops/Greater + - title: GreaterEqual + path: /tf/raw_ops/GreaterEqual + - title: GroupByReducerDataset + path: /tf/raw_ops/GroupByReducerDataset + - title: GroupByWindowDataset + path: /tf/raw_ops/GroupByWindowDataset + - title: GuaranteeConst + path: /tf/raw_ops/GuaranteeConst + - title: HSVToRGB + path: /tf/raw_ops/HSVToRGB + - title: HashTable + path: /tf/raw_ops/HashTable + - title: HashTableV2 + path: /tf/raw_ops/HashTableV2 + - title: HistogramFixedWidth + path: /tf/raw_ops/HistogramFixedWidth + - title: HistogramSummary + path: /tf/raw_ops/HistogramSummary + - title: IFFT + path: /tf/raw_ops/IFFT + - title: IFFT2D + path: /tf/raw_ops/IFFT2D + - title: IFFT3D + path: /tf/raw_ops/IFFT3D + - title: IRFFT + path: /tf/raw_ops/IRFFT + - title: IRFFT2D + path: /tf/raw_ops/IRFFT2D + - title: IRFFT3D + path: /tf/raw_ops/IRFFT3D + - title: Identity + path: /tf/raw_ops/Identity + - title: IdentityN + path: 
/tf/raw_ops/IdentityN + - title: IdentityReader + path: /tf/raw_ops/IdentityReader + - title: IdentityReaderV2 + path: /tf/raw_ops/IdentityReaderV2 + - title: If + path: /tf/raw_ops/If + - title: Igamma + path: /tf/raw_ops/Igamma + - title: IgammaGradA + path: /tf/raw_ops/IgammaGradA + - title: Igammac + path: /tf/raw_ops/Igammac + - title: IgnoreErrorsDataset + path: /tf/raw_ops/IgnoreErrorsDataset + - title: Imag + path: /tf/raw_ops/Imag + - title: ImageProjectiveTransformV2 + path: /tf/raw_ops/ImageProjectiveTransformV2 + - title: ImageSummary + path: /tf/raw_ops/ImageSummary + - title: ImmutableConst + path: /tf/raw_ops/ImmutableConst + - title: ImportEvent + path: /tf/raw_ops/ImportEvent + - title: InTopK + path: /tf/raw_ops/InTopK + - title: InTopKV2 + path: /tf/raw_ops/InTopKV2 + - title: InfeedDequeue + path: /tf/raw_ops/InfeedDequeue + - title: InfeedDequeueTuple + path: /tf/raw_ops/InfeedDequeueTuple + - title: InfeedEnqueue + path: /tf/raw_ops/InfeedEnqueue + - title: InfeedEnqueuePrelinearizedBuffer + path: /tf/raw_ops/InfeedEnqueuePrelinearizedBuffer + - title: InfeedEnqueueTuple + path: /tf/raw_ops/InfeedEnqueueTuple + - title: InitializeTable + path: /tf/raw_ops/InitializeTable + - title: InitializeTableFromTextFile + path: /tf/raw_ops/InitializeTableFromTextFile + - title: InitializeTableFromTextFileV2 + path: /tf/raw_ops/InitializeTableFromTextFileV2 + - title: InitializeTableV2 + path: /tf/raw_ops/InitializeTableV2 + - title: InplaceAdd + path: /tf/raw_ops/InplaceAdd + - title: InplaceSub + path: /tf/raw_ops/InplaceSub + - title: InplaceUpdate + path: /tf/raw_ops/InplaceUpdate + - title: InterleaveDataset + path: /tf/raw_ops/InterleaveDataset + - title: Inv + path: /tf/raw_ops/Inv + - title: InvGrad + path: /tf/raw_ops/InvGrad + - title: Invert + path: /tf/raw_ops/Invert + - title: InvertPermutation + path: /tf/raw_ops/InvertPermutation + - title: IsBoostedTreesEnsembleInitialized + path: /tf/raw_ops/IsBoostedTreesEnsembleInitialized + - title: IsBoostedTreesQuantileStreamResourceInitialized + path: /tf/raw_ops/IsBoostedTreesQuantileStreamResourceInitialized + - title: IsFinite + path: /tf/raw_ops/IsFinite + - title: IsInf + path: /tf/raw_ops/IsInf + - title: IsNan + path: /tf/raw_ops/IsNan + - title: IsVariableInitialized + path: /tf/raw_ops/IsVariableInitialized + - title: Iterator + path: /tf/raw_ops/Iterator + - title: IteratorFromStringHandle + path: /tf/raw_ops/IteratorFromStringHandle + - title: IteratorFromStringHandleV2 + path: /tf/raw_ops/IteratorFromStringHandleV2 + - title: IteratorGetDevice + path: /tf/raw_ops/IteratorGetDevice + - title: IteratorGetNext + path: /tf/raw_ops/IteratorGetNext + - title: IteratorGetNextAsOptional + path: /tf/raw_ops/IteratorGetNextAsOptional + - title: IteratorGetNextSync + path: /tf/raw_ops/IteratorGetNextSync + - title: IteratorToStringHandle + path: /tf/raw_ops/IteratorToStringHandle + - title: IteratorV2 + path: /tf/raw_ops/IteratorV2 + - title: L2Loss + path: /tf/raw_ops/L2Loss + - title: LMDBDataset + path: /tf/raw_ops/LMDBDataset + - title: LMDBReader + path: /tf/raw_ops/LMDBReader + - title: LRN + path: /tf/raw_ops/LRN + - title: LRNGrad + path: /tf/raw_ops/LRNGrad + - title: LSTMBlockCell + path: /tf/raw_ops/LSTMBlockCell + - title: LSTMBlockCellGrad + path: /tf/raw_ops/LSTMBlockCellGrad + - title: LatencyStatsDataset + path: /tf/raw_ops/LatencyStatsDataset + - title: LeakyRelu + path: /tf/raw_ops/LeakyRelu + - title: LeakyReluGrad + path: /tf/raw_ops/LeakyReluGrad + - title: LearnedUnigramCandidateSampler + path: 
/tf/raw_ops/LearnedUnigramCandidateSampler + - title: LeftShift + path: /tf/raw_ops/LeftShift + - title: LegacyParallelInterleaveDatasetV2 + path: /tf/raw_ops/LegacyParallelInterleaveDatasetV2 + - title: Less + path: /tf/raw_ops/Less + - title: LessEqual + path: /tf/raw_ops/LessEqual + - title: Lgamma + path: /tf/raw_ops/Lgamma + - title: LinSpace + path: /tf/raw_ops/LinSpace + - title: ListDiff + path: /tf/raw_ops/ListDiff + - title: LoadAndRemapMatrix + path: /tf/raw_ops/LoadAndRemapMatrix + - title: LoadTPUEmbeddingADAMParameters + path: /tf/raw_ops/LoadTPUEmbeddingADAMParameters + - title: LoadTPUEmbeddingADAMParametersGradAccumDebug + path: /tf/raw_ops/LoadTPUEmbeddingADAMParametersGradAccumDebug + - title: LoadTPUEmbeddingAdadeltaParameters + path: /tf/raw_ops/LoadTPUEmbeddingAdadeltaParameters + - title: LoadTPUEmbeddingAdadeltaParametersGradAccumDebug + path: /tf/raw_ops/LoadTPUEmbeddingAdadeltaParametersGradAccumDebug + - title: LoadTPUEmbeddingAdagradParameters + path: /tf/raw_ops/LoadTPUEmbeddingAdagradParameters + - title: LoadTPUEmbeddingAdagradParametersGradAccumDebug + path: /tf/raw_ops/LoadTPUEmbeddingAdagradParametersGradAccumDebug + - title: LoadTPUEmbeddingCenteredRMSPropParameters + path: /tf/raw_ops/LoadTPUEmbeddingCenteredRMSPropParameters + - title: LoadTPUEmbeddingFTRLParameters + path: /tf/raw_ops/LoadTPUEmbeddingFTRLParameters + - title: LoadTPUEmbeddingFTRLParametersGradAccumDebug + path: /tf/raw_ops/LoadTPUEmbeddingFTRLParametersGradAccumDebug + - title: LoadTPUEmbeddingMDLAdagradLightParameters + path: /tf/raw_ops/LoadTPUEmbeddingMDLAdagradLightParameters + - title: LoadTPUEmbeddingMomentumParameters + path: /tf/raw_ops/LoadTPUEmbeddingMomentumParameters + - title: LoadTPUEmbeddingMomentumParametersGradAccumDebug + path: /tf/raw_ops/LoadTPUEmbeddingMomentumParametersGradAccumDebug + - title: LoadTPUEmbeddingProximalAdagradParameters + path: /tf/raw_ops/LoadTPUEmbeddingProximalAdagradParameters + - title: LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug + path: /tf/raw_ops/LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug + - title: LoadTPUEmbeddingRMSPropParameters + path: /tf/raw_ops/LoadTPUEmbeddingRMSPropParameters + - title: LoadTPUEmbeddingRMSPropParametersGradAccumDebug + path: /tf/raw_ops/LoadTPUEmbeddingRMSPropParametersGradAccumDebug + - title: LoadTPUEmbeddingStochasticGradientDescentParameters + path: /tf/raw_ops/LoadTPUEmbeddingStochasticGradientDescentParameters + - title: Log + path: /tf/raw_ops/Log + - title: Log1p + path: /tf/raw_ops/Log1p + - title: LogMatrixDeterminant + path: /tf/raw_ops/LogMatrixDeterminant + - title: LogSoftmax + path: /tf/raw_ops/LogSoftmax + - title: LogUniformCandidateSampler + path: /tf/raw_ops/LogUniformCandidateSampler + - title: LogicalAnd + path: /tf/raw_ops/LogicalAnd + - title: LogicalNot + path: /tf/raw_ops/LogicalNot + - title: LogicalOr + path: /tf/raw_ops/LogicalOr + - title: LookupTableExport + path: /tf/raw_ops/LookupTableExport + - title: LookupTableExportV2 + path: /tf/raw_ops/LookupTableExportV2 + - title: LookupTableFind + path: /tf/raw_ops/LookupTableFind + - title: LookupTableFindV2 + path: /tf/raw_ops/LookupTableFindV2 + - title: LookupTableImport + path: /tf/raw_ops/LookupTableImport + - title: LookupTableImportV2 + path: /tf/raw_ops/LookupTableImportV2 + - title: LookupTableInsert + path: /tf/raw_ops/LookupTableInsert + - title: LookupTableInsertV2 + path: /tf/raw_ops/LookupTableInsertV2 + - title: LookupTableRemoveV2 + path: /tf/raw_ops/LookupTableRemoveV2 + - title: LookupTableSize + 
path: /tf/raw_ops/LookupTableSize + - title: LookupTableSizeV2 + path: /tf/raw_ops/LookupTableSizeV2 + - title: LoopCond + path: /tf/raw_ops/LoopCond + - title: LowerBound + path: /tf/raw_ops/LowerBound + - title: Lu + path: /tf/raw_ops/Lu + - title: MakeIterator + path: /tf/raw_ops/MakeIterator + - title: MapAndBatchDataset + path: /tf/raw_ops/MapAndBatchDataset + - title: MapClear + path: /tf/raw_ops/MapClear + - title: MapDataset + path: /tf/raw_ops/MapDataset + - title: MapDefun + path: /tf/raw_ops/MapDefun + - title: MapIncompleteSize + path: /tf/raw_ops/MapIncompleteSize + - title: MapPeek + path: /tf/raw_ops/MapPeek + - title: MapSize + path: /tf/raw_ops/MapSize + - title: MapStage + path: /tf/raw_ops/MapStage + - title: MapUnstage + path: /tf/raw_ops/MapUnstage + - title: MapUnstageNoKey + path: /tf/raw_ops/MapUnstageNoKey + - title: MatMul + path: /tf/raw_ops/MatMul + - title: MatchingFiles + path: /tf/raw_ops/MatchingFiles + - title: MatchingFilesDataset + path: /tf/raw_ops/MatchingFilesDataset + - title: MatrixBandPart + path: /tf/raw_ops/MatrixBandPart + - title: MatrixDeterminant + path: /tf/raw_ops/MatrixDeterminant + - title: MatrixDiag + path: /tf/raw_ops/MatrixDiag + - title: MatrixDiagPart + path: /tf/raw_ops/MatrixDiagPart + - title: MatrixDiagPartV2 + path: /tf/raw_ops/MatrixDiagPartV2 + - title: MatrixDiagPartV3 + path: /tf/raw_ops/MatrixDiagPartV3 + - title: MatrixDiagV2 + path: /tf/raw_ops/MatrixDiagV2 + - title: MatrixDiagV3 + path: /tf/raw_ops/MatrixDiagV3 + - title: MatrixExponential + path: /tf/raw_ops/MatrixExponential + - title: MatrixInverse + path: /tf/raw_ops/MatrixInverse + - title: MatrixLogarithm + path: /tf/raw_ops/MatrixLogarithm + - title: MatrixSetDiag + path: /tf/raw_ops/MatrixSetDiag + - title: MatrixSetDiagV2 + path: /tf/raw_ops/MatrixSetDiagV2 + - title: MatrixSetDiagV3 + path: /tf/raw_ops/MatrixSetDiagV3 + - title: MatrixSolve + path: /tf/raw_ops/MatrixSolve + - title: MatrixSolveLs + path: /tf/raw_ops/MatrixSolveLs + - title: MatrixSquareRoot + path: /tf/raw_ops/MatrixSquareRoot + - title: MatrixTriangularSolve + path: /tf/raw_ops/MatrixTriangularSolve + - title: Max + path: /tf/raw_ops/Max + - title: MaxIntraOpParallelismDataset + path: /tf/raw_ops/MaxIntraOpParallelismDataset + - title: MaxPool + path: /tf/raw_ops/MaxPool + - title: MaxPool3D + path: /tf/raw_ops/MaxPool3D + - title: MaxPool3DGrad + path: /tf/raw_ops/MaxPool3DGrad + - title: MaxPool3DGradGrad + path: /tf/raw_ops/MaxPool3DGradGrad + - title: MaxPoolGrad + path: /tf/raw_ops/MaxPoolGrad + - title: MaxPoolGradGrad + path: /tf/raw_ops/MaxPoolGradGrad + - title: MaxPoolGradGradV2 + path: /tf/raw_ops/MaxPoolGradGradV2 + - title: MaxPoolGradGradWithArgmax + path: /tf/raw_ops/MaxPoolGradGradWithArgmax + - title: MaxPoolGradV2 + path: /tf/raw_ops/MaxPoolGradV2 + - title: MaxPoolGradWithArgmax + path: /tf/raw_ops/MaxPoolGradWithArgmax + - title: MaxPoolV2 + path: /tf/raw_ops/MaxPoolV2 + - title: MaxPoolWithArgmax + path: /tf/raw_ops/MaxPoolWithArgmax + - title: Maximum + path: /tf/raw_ops/Maximum + - title: Mean + path: /tf/raw_ops/Mean + - title: Merge + path: /tf/raw_ops/Merge + - title: MergeSummary + path: /tf/raw_ops/MergeSummary + - title: MergeV2Checkpoints + path: /tf/raw_ops/MergeV2Checkpoints + - title: Mfcc + path: /tf/raw_ops/Mfcc + - title: Min + path: /tf/raw_ops/Min + - title: Minimum + path: /tf/raw_ops/Minimum + - title: MirrorPad + path: /tf/raw_ops/MirrorPad + - title: MirrorPadGrad + path: /tf/raw_ops/MirrorPadGrad + - title: Mod + path: /tf/raw_ops/Mod + - title: 
ModelDataset + path: /tf/raw_ops/ModelDataset + - title: Mul + path: /tf/raw_ops/Mul + - title: MulNoNan + path: /tf/raw_ops/MulNoNan + - title: MultiDeviceIterator + path: /tf/raw_ops/MultiDeviceIterator + - title: MultiDeviceIteratorFromStringHandle + path: /tf/raw_ops/MultiDeviceIteratorFromStringHandle + - title: MultiDeviceIteratorGetNextFromShard + path: /tf/raw_ops/MultiDeviceIteratorGetNextFromShard + - title: MultiDeviceIteratorInit + path: /tf/raw_ops/MultiDeviceIteratorInit + - title: MultiDeviceIteratorToStringHandle + path: /tf/raw_ops/MultiDeviceIteratorToStringHandle + - title: Multinomial + path: /tf/raw_ops/Multinomial + - title: MutableDenseHashTable + path: /tf/raw_ops/MutableDenseHashTable + - title: MutableDenseHashTableV2 + path: /tf/raw_ops/MutableDenseHashTableV2 + - title: MutableHashTable + path: /tf/raw_ops/MutableHashTable + - title: MutableHashTableOfTensors + path: /tf/raw_ops/MutableHashTableOfTensors + - title: MutableHashTableOfTensorsV2 + path: /tf/raw_ops/MutableHashTableOfTensorsV2 + - title: MutableHashTableV2 + path: /tf/raw_ops/MutableHashTableV2 + - title: MutexLock + path: /tf/raw_ops/MutexLock + - title: MutexV2 + path: /tf/raw_ops/MutexV2 + - title: NcclAllReduce + path: /tf/raw_ops/NcclAllReduce + - title: NcclBroadcast + path: /tf/raw_ops/NcclBroadcast + - title: NcclReduce + path: /tf/raw_ops/NcclReduce + - title: Ndtri + path: /tf/raw_ops/Ndtri + - title: Neg + path: /tf/raw_ops/Neg + - title: NextAfter + path: /tf/raw_ops/NextAfter + - title: NextIteration + path: /tf/raw_ops/NextIteration + - title: NoOp + path: /tf/raw_ops/NoOp + - title: NonDeterministicInts + path: /tf/raw_ops/NonDeterministicInts + - title: NonMaxSuppression + path: /tf/raw_ops/NonMaxSuppression + - title: NonMaxSuppressionV2 + path: /tf/raw_ops/NonMaxSuppressionV2 + - title: NonMaxSuppressionV3 + path: /tf/raw_ops/NonMaxSuppressionV3 + - title: NonMaxSuppressionV4 + path: /tf/raw_ops/NonMaxSuppressionV4 + - title: NonMaxSuppressionV5 + path: /tf/raw_ops/NonMaxSuppressionV5 + - title: NonMaxSuppressionWithOverlaps + path: /tf/raw_ops/NonMaxSuppressionWithOverlaps + - title: NonSerializableDataset + path: /tf/raw_ops/NonSerializableDataset + - title: NotEqual + path: /tf/raw_ops/NotEqual + - title: NthElement + path: /tf/raw_ops/NthElement + - title: OneHot + path: /tf/raw_ops/OneHot + - title: OneShotIterator + path: /tf/raw_ops/OneShotIterator + - title: OnesLike + path: /tf/raw_ops/OnesLike + - title: OptimizeDataset + path: /tf/raw_ops/OptimizeDataset + - title: OptionalFromValue + path: /tf/raw_ops/OptionalFromValue + - title: OptionalGetValue + path: /tf/raw_ops/OptionalGetValue + - title: OptionalHasValue + path: /tf/raw_ops/OptionalHasValue + - title: OptionalNone + path: /tf/raw_ops/OptionalNone + - title: OrderedMapClear + path: /tf/raw_ops/OrderedMapClear + - title: OrderedMapIncompleteSize + path: /tf/raw_ops/OrderedMapIncompleteSize + - title: OrderedMapPeek + path: /tf/raw_ops/OrderedMapPeek + - title: OrderedMapSize + path: /tf/raw_ops/OrderedMapSize + - title: OrderedMapStage + path: /tf/raw_ops/OrderedMapStage + - title: OrderedMapUnstage + path: /tf/raw_ops/OrderedMapUnstage + - title: OrderedMapUnstageNoKey + path: /tf/raw_ops/OrderedMapUnstageNoKey + - title: OutfeedDequeue + path: /tf/raw_ops/OutfeedDequeue + - title: OutfeedDequeueTuple + path: /tf/raw_ops/OutfeedDequeueTuple + - title: OutfeedEnqueue + path: /tf/raw_ops/OutfeedEnqueue + - title: OutfeedEnqueueTuple + path: /tf/raw_ops/OutfeedEnqueueTuple + - title: Pack + path: /tf/raw_ops/Pack + - 
title: Pad + path: /tf/raw_ops/Pad + - title: PadV2 + path: /tf/raw_ops/PadV2 + - title: PaddedBatchDataset + path: /tf/raw_ops/PaddedBatchDataset + - title: PaddedBatchDatasetV2 + path: /tf/raw_ops/PaddedBatchDatasetV2 + - title: PaddingFIFOQueue + path: /tf/raw_ops/PaddingFIFOQueue + - title: PaddingFIFOQueueV2 + path: /tf/raw_ops/PaddingFIFOQueueV2 + - title: ParallelConcat + path: /tf/raw_ops/ParallelConcat + - title: ParallelDynamicStitch + path: /tf/raw_ops/ParallelDynamicStitch + - title: ParallelInterleaveDataset + path: /tf/raw_ops/ParallelInterleaveDataset + - title: ParallelInterleaveDatasetV2 + path: /tf/raw_ops/ParallelInterleaveDatasetV2 + - title: ParallelInterleaveDatasetV3 + path: /tf/raw_ops/ParallelInterleaveDatasetV3 + - title: ParallelInterleaveDatasetV4 + path: /tf/raw_ops/ParallelInterleaveDatasetV4 + - title: ParallelMapDataset + path: /tf/raw_ops/ParallelMapDataset + - title: ParallelMapDatasetV2 + path: /tf/raw_ops/ParallelMapDatasetV2 + - title: ParameterizedTruncatedNormal + path: /tf/raw_ops/ParameterizedTruncatedNormal + - title: ParseExample + path: /tf/raw_ops/ParseExample + - title: ParseExampleDataset + path: /tf/raw_ops/ParseExampleDataset + - title: ParseExampleDatasetV2 + path: /tf/raw_ops/ParseExampleDatasetV2 + - title: ParseExampleV2 + path: /tf/raw_ops/ParseExampleV2 + - title: ParseSequenceExample + path: /tf/raw_ops/ParseSequenceExample + - title: ParseSequenceExampleV2 + path: /tf/raw_ops/ParseSequenceExampleV2 + - title: ParseSingleExample + path: /tf/raw_ops/ParseSingleExample + - title: ParseSingleSequenceExample + path: /tf/raw_ops/ParseSingleSequenceExample + - title: ParseTensor + path: /tf/raw_ops/ParseTensor + - title: PartitionedCall + path: /tf/raw_ops/PartitionedCall + - title: Placeholder + path: /tf/raw_ops/Placeholder + - title: PlaceholderV2 + path: /tf/raw_ops/PlaceholderV2 + - title: PlaceholderWithDefault + path: /tf/raw_ops/PlaceholderWithDefault + - title: Polygamma + path: /tf/raw_ops/Polygamma + - title: PopulationCount + path: /tf/raw_ops/PopulationCount + - title: Pow + path: /tf/raw_ops/Pow + - title: PrefetchDataset + path: /tf/raw_ops/PrefetchDataset + - title: Prelinearize + path: /tf/raw_ops/Prelinearize + - title: PrelinearizeTuple + path: /tf/raw_ops/PrelinearizeTuple + - title: PreventGradient + path: /tf/raw_ops/PreventGradient + - title: Print + path: /tf/raw_ops/Print + - title: PrintV2 + path: /tf/raw_ops/PrintV2 + - title: PriorityQueue + path: /tf/raw_ops/PriorityQueue + - title: PriorityQueueV2 + path: /tf/raw_ops/PriorityQueueV2 + - title: PrivateThreadPoolDataset + path: /tf/raw_ops/PrivateThreadPoolDataset + - title: Prod + path: /tf/raw_ops/Prod + - title: PyFunc + path: /tf/raw_ops/PyFunc + - title: PyFuncStateless + path: /tf/raw_ops/PyFuncStateless + - title: Qr + path: /tf/raw_ops/Qr + - title: QuantizeAndDequantize + path: /tf/raw_ops/QuantizeAndDequantize + - title: QuantizeAndDequantizeV2 + path: /tf/raw_ops/QuantizeAndDequantizeV2 + - title: QuantizeAndDequantizeV3 + path: /tf/raw_ops/QuantizeAndDequantizeV3 + - title: QuantizeDownAndShrinkRange + path: /tf/raw_ops/QuantizeDownAndShrinkRange + - title: QuantizeV2 + path: /tf/raw_ops/QuantizeV2 + - title: QuantizedAdd + path: /tf/raw_ops/QuantizedAdd + - title: QuantizedAvgPool + path: /tf/raw_ops/QuantizedAvgPool + - title: QuantizedBatchNormWithGlobalNormalization + path: /tf/raw_ops/QuantizedBatchNormWithGlobalNormalization + - title: QuantizedBiasAdd + path: /tf/raw_ops/QuantizedBiasAdd + - title: QuantizedConcat + path: 
/tf/raw_ops/QuantizedConcat + - title: QuantizedConv2D + path: /tf/raw_ops/QuantizedConv2D + - title: QuantizedConv2DAndRelu + path: /tf/raw_ops/QuantizedConv2DAndRelu + - title: QuantizedConv2DAndReluAndRequantize + path: /tf/raw_ops/QuantizedConv2DAndReluAndRequantize + - title: QuantizedConv2DAndRequantize + path: /tf/raw_ops/QuantizedConv2DAndRequantize + - title: QuantizedConv2DPerChannel + path: /tf/raw_ops/QuantizedConv2DPerChannel + - title: QuantizedConv2DWithBias + path: /tf/raw_ops/QuantizedConv2DWithBias + - title: QuantizedConv2DWithBiasAndRelu + path: /tf/raw_ops/QuantizedConv2DWithBiasAndRelu + - title: QuantizedConv2DWithBiasAndReluAndRequantize + path: /tf/raw_ops/QuantizedConv2DWithBiasAndReluAndRequantize + - title: QuantizedConv2DWithBiasAndRequantize + path: /tf/raw_ops/QuantizedConv2DWithBiasAndRequantize + - title: QuantizedConv2DWithBiasSignedSumAndReluAndRequantize + path: /tf/raw_ops/QuantizedConv2DWithBiasSignedSumAndReluAndRequantize + - title: QuantizedConv2DWithBiasSumAndRelu + path: /tf/raw_ops/QuantizedConv2DWithBiasSumAndRelu + - title: QuantizedConv2DWithBiasSumAndReluAndRequantize + path: /tf/raw_ops/QuantizedConv2DWithBiasSumAndReluAndRequantize + - title: QuantizedDepthwiseConv2D + path: /tf/raw_ops/QuantizedDepthwiseConv2D + - title: QuantizedDepthwiseConv2DWithBias + path: /tf/raw_ops/QuantizedDepthwiseConv2DWithBias + - title: QuantizedDepthwiseConv2DWithBiasAndRelu + path: /tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndRelu + - title: QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize + path: /tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize + - title: QuantizedInstanceNorm + path: /tf/raw_ops/QuantizedInstanceNorm + - title: QuantizedMatMul + path: /tf/raw_ops/QuantizedMatMul + - title: QuantizedMatMulWithBias + path: /tf/raw_ops/QuantizedMatMulWithBias + - title: QuantizedMatMulWithBiasAndDequantize + path: /tf/raw_ops/QuantizedMatMulWithBiasAndDequantize + - title: QuantizedMatMulWithBiasAndRelu + path: /tf/raw_ops/QuantizedMatMulWithBiasAndRelu + - title: QuantizedMatMulWithBiasAndReluAndRequantize + path: /tf/raw_ops/QuantizedMatMulWithBiasAndReluAndRequantize + - title: QuantizedMatMulWithBiasAndRequantize + path: /tf/raw_ops/QuantizedMatMulWithBiasAndRequantize + - title: QuantizedMaxPool + path: /tf/raw_ops/QuantizedMaxPool + - title: QuantizedMul + path: /tf/raw_ops/QuantizedMul + - title: QuantizedRelu + path: /tf/raw_ops/QuantizedRelu + - title: QuantizedRelu6 + path: /tf/raw_ops/QuantizedRelu6 + - title: QuantizedReluX + path: /tf/raw_ops/QuantizedReluX + - title: QuantizedReshape + path: /tf/raw_ops/QuantizedReshape + - title: QuantizedResizeBilinear + path: /tf/raw_ops/QuantizedResizeBilinear + - title: QueueClose + path: /tf/raw_ops/QueueClose + - title: QueueCloseV2 + path: /tf/raw_ops/QueueCloseV2 + - title: QueueDequeue + path: /tf/raw_ops/QueueDequeue + - title: QueueDequeueMany + path: /tf/raw_ops/QueueDequeueMany + - title: QueueDequeueManyV2 + path: /tf/raw_ops/QueueDequeueManyV2 + - title: QueueDequeueUpTo + path: /tf/raw_ops/QueueDequeueUpTo + - title: QueueDequeueUpToV2 + path: /tf/raw_ops/QueueDequeueUpToV2 + - title: QueueDequeueV2 + path: /tf/raw_ops/QueueDequeueV2 + - title: QueueEnqueue + path: /tf/raw_ops/QueueEnqueue + - title: QueueEnqueueMany + path: /tf/raw_ops/QueueEnqueueMany + - title: QueueEnqueueManyV2 + path: /tf/raw_ops/QueueEnqueueManyV2 + - title: QueueEnqueueV2 + path: /tf/raw_ops/QueueEnqueueV2 + - title: QueueIsClosed + path: /tf/raw_ops/QueueIsClosed + - title: QueueIsClosedV2 + path: 
/tf/raw_ops/QueueIsClosedV2 + - title: QueueSize + path: /tf/raw_ops/QueueSize + - title: QueueSizeV2 + path: /tf/raw_ops/QueueSizeV2 + - title: RFFT + path: /tf/raw_ops/RFFT + - title: RFFT2D + path: /tf/raw_ops/RFFT2D + - title: RFFT3D + path: /tf/raw_ops/RFFT3D + - title: RGBToHSV + path: /tf/raw_ops/RGBToHSV + - title: RaggedGather + path: /tf/raw_ops/RaggedGather + - title: RaggedRange + path: /tf/raw_ops/RaggedRange + - title: RaggedTensorFromVariant + path: /tf/raw_ops/RaggedTensorFromVariant + - title: RaggedTensorToSparse + path: /tf/raw_ops/RaggedTensorToSparse + - title: RaggedTensorToTensor + path: /tf/raw_ops/RaggedTensorToTensor + - title: RaggedTensorToVariant + path: /tf/raw_ops/RaggedTensorToVariant + - title: RandomCrop + path: /tf/raw_ops/RandomCrop + - title: RandomDataset + path: /tf/raw_ops/RandomDataset + - title: RandomGamma + path: /tf/raw_ops/RandomGamma + - title: RandomGammaGrad + path: /tf/raw_ops/RandomGammaGrad + - title: RandomPoisson + path: /tf/raw_ops/RandomPoisson + - title: RandomPoissonV2 + path: /tf/raw_ops/RandomPoissonV2 + - title: RandomShuffle + path: /tf/raw_ops/RandomShuffle + - title: RandomShuffleQueue + path: /tf/raw_ops/RandomShuffleQueue + - title: RandomShuffleQueueV2 + path: /tf/raw_ops/RandomShuffleQueueV2 + - title: RandomStandardNormal + path: /tf/raw_ops/RandomStandardNormal + - title: RandomUniform + path: /tf/raw_ops/RandomUniform + - title: RandomUniformInt + path: /tf/raw_ops/RandomUniformInt + - title: Range + path: /tf/raw_ops/Range + - title: RangeDataset + path: /tf/raw_ops/RangeDataset + - title: Rank + path: /tf/raw_ops/Rank + - title: ReadFile + path: /tf/raw_ops/ReadFile + - title: ReadVariableOp + path: /tf/raw_ops/ReadVariableOp + - title: ReaderNumRecordsProduced + path: /tf/raw_ops/ReaderNumRecordsProduced + - title: ReaderNumRecordsProducedV2 + path: /tf/raw_ops/ReaderNumRecordsProducedV2 + - title: ReaderNumWorkUnitsCompleted + path: /tf/raw_ops/ReaderNumWorkUnitsCompleted + - title: ReaderNumWorkUnitsCompletedV2 + path: /tf/raw_ops/ReaderNumWorkUnitsCompletedV2 + - title: ReaderRead + path: /tf/raw_ops/ReaderRead + - title: ReaderReadUpTo + path: /tf/raw_ops/ReaderReadUpTo + - title: ReaderReadUpToV2 + path: /tf/raw_ops/ReaderReadUpToV2 + - title: ReaderReadV2 + path: /tf/raw_ops/ReaderReadV2 + - title: ReaderReset + path: /tf/raw_ops/ReaderReset + - title: ReaderResetV2 + path: /tf/raw_ops/ReaderResetV2 + - title: ReaderRestoreState + path: /tf/raw_ops/ReaderRestoreState + - title: ReaderRestoreStateV2 + path: /tf/raw_ops/ReaderRestoreStateV2 + - title: ReaderSerializeState + path: /tf/raw_ops/ReaderSerializeState + - title: ReaderSerializeStateV2 + path: /tf/raw_ops/ReaderSerializeStateV2 + - title: Real + path: /tf/raw_ops/Real + - title: RealDiv + path: /tf/raw_ops/RealDiv + - title: RebatchDataset + path: /tf/raw_ops/RebatchDataset + - title: Reciprocal + path: /tf/raw_ops/Reciprocal + - title: ReciprocalGrad + path: /tf/raw_ops/ReciprocalGrad + - title: RecordInput + path: /tf/raw_ops/RecordInput + - title: Recv + path: /tf/raw_ops/Recv + - title: RecvTPUEmbeddingActivations + path: /tf/raw_ops/RecvTPUEmbeddingActivations + - title: ReduceDataset + path: /tf/raw_ops/ReduceDataset + - title: ReduceJoin + path: /tf/raw_ops/ReduceJoin + - title: RefEnter + path: /tf/raw_ops/RefEnter + - title: RefExit + path: /tf/raw_ops/RefExit + - title: RefIdentity + path: /tf/raw_ops/RefIdentity + - title: RefMerge + path: /tf/raw_ops/RefMerge + - title: RefNextIteration + path: /tf/raw_ops/RefNextIteration + - title: 
RefSelect + path: /tf/raw_ops/RefSelect + - title: RefSwitch + path: /tf/raw_ops/RefSwitch + - title: RegexFullMatch + path: /tf/raw_ops/RegexFullMatch + - title: RegexReplace + path: /tf/raw_ops/RegexReplace + - title: Relu + path: /tf/raw_ops/Relu + - title: Relu6 + path: /tf/raw_ops/Relu6 + - title: Relu6Grad + path: /tf/raw_ops/Relu6Grad + - title: ReluGrad + path: /tf/raw_ops/ReluGrad + - title: RemoteCall + path: /tf/raw_ops/RemoteCall + - title: RepeatDataset + path: /tf/raw_ops/RepeatDataset + - title: RequantizationRange + path: /tf/raw_ops/RequantizationRange + - title: RequantizationRangePerChannel + path: /tf/raw_ops/RequantizationRangePerChannel + - title: Requantize + path: /tf/raw_ops/Requantize + - title: RequantizePerChannel + path: /tf/raw_ops/RequantizePerChannel + - title: Reshape + path: /tf/raw_ops/Reshape + - title: ResizeArea + path: /tf/raw_ops/ResizeArea + - title: ResizeBicubic + path: /tf/raw_ops/ResizeBicubic + - title: ResizeBicubicGrad + path: /tf/raw_ops/ResizeBicubicGrad + - title: ResizeBilinear + path: /tf/raw_ops/ResizeBilinear + - title: ResizeBilinearGrad + path: /tf/raw_ops/ResizeBilinearGrad + - title: ResizeNearestNeighbor + path: /tf/raw_ops/ResizeNearestNeighbor + - title: ResizeNearestNeighborGrad + path: /tf/raw_ops/ResizeNearestNeighborGrad + - title: ResourceAccumulatorApplyGradient + path: /tf/raw_ops/ResourceAccumulatorApplyGradient + - title: ResourceAccumulatorNumAccumulated + path: /tf/raw_ops/ResourceAccumulatorNumAccumulated + - title: ResourceAccumulatorSetGlobalStep + path: /tf/raw_ops/ResourceAccumulatorSetGlobalStep + - title: ResourceAccumulatorTakeGradient + path: /tf/raw_ops/ResourceAccumulatorTakeGradient + - title: ResourceApplyAdaMax + path: /tf/raw_ops/ResourceApplyAdaMax + - title: ResourceApplyAdadelta + path: /tf/raw_ops/ResourceApplyAdadelta + - title: ResourceApplyAdagrad + path: /tf/raw_ops/ResourceApplyAdagrad + - title: ResourceApplyAdagradDA + path: /tf/raw_ops/ResourceApplyAdagradDA + - title: ResourceApplyAdagradV2 + path: /tf/raw_ops/ResourceApplyAdagradV2 + - title: ResourceApplyAdam + path: /tf/raw_ops/ResourceApplyAdam + - title: ResourceApplyAdamWithAmsgrad + path: /tf/raw_ops/ResourceApplyAdamWithAmsgrad + - title: ResourceApplyAddSign + path: /tf/raw_ops/ResourceApplyAddSign + - title: ResourceApplyCenteredRMSProp + path: /tf/raw_ops/ResourceApplyCenteredRMSProp + - title: ResourceApplyFtrl + path: /tf/raw_ops/ResourceApplyFtrl + - title: ResourceApplyFtrlV2 + path: /tf/raw_ops/ResourceApplyFtrlV2 + - title: ResourceApplyGradientDescent + path: /tf/raw_ops/ResourceApplyGradientDescent + - title: ResourceApplyKerasMomentum + path: /tf/raw_ops/ResourceApplyKerasMomentum + - title: ResourceApplyMomentum + path: /tf/raw_ops/ResourceApplyMomentum + - title: ResourceApplyPowerSign + path: /tf/raw_ops/ResourceApplyPowerSign + - title: ResourceApplyProximalAdagrad + path: /tf/raw_ops/ResourceApplyProximalAdagrad + - title: ResourceApplyProximalGradientDescent + path: /tf/raw_ops/ResourceApplyProximalGradientDescent + - title: ResourceApplyRMSProp + path: /tf/raw_ops/ResourceApplyRMSProp + - title: ResourceConditionalAccumulator + path: /tf/raw_ops/ResourceConditionalAccumulator + - title: ResourceCountUpTo + path: /tf/raw_ops/ResourceCountUpTo + - title: ResourceGather + path: /tf/raw_ops/ResourceGather + - title: ResourceGatherNd + path: /tf/raw_ops/ResourceGatherNd + - title: ResourceScatterAdd + path: /tf/raw_ops/ResourceScatterAdd + - title: ResourceScatterDiv + path: /tf/raw_ops/ResourceScatterDiv + - title: 
ResourceScatterMax + path: /tf/raw_ops/ResourceScatterMax + - title: ResourceScatterMin + path: /tf/raw_ops/ResourceScatterMin + - title: ResourceScatterMul + path: /tf/raw_ops/ResourceScatterMul + - title: ResourceScatterNdAdd + path: /tf/raw_ops/ResourceScatterNdAdd + - title: ResourceScatterNdSub + path: /tf/raw_ops/ResourceScatterNdSub + - title: ResourceScatterNdUpdate + path: /tf/raw_ops/ResourceScatterNdUpdate + - title: ResourceScatterSub + path: /tf/raw_ops/ResourceScatterSub + - title: ResourceScatterUpdate + path: /tf/raw_ops/ResourceScatterUpdate + - title: ResourceSparseApplyAdadelta + path: /tf/raw_ops/ResourceSparseApplyAdadelta + - title: ResourceSparseApplyAdagrad + path: /tf/raw_ops/ResourceSparseApplyAdagrad + - title: ResourceSparseApplyAdagradDA + path: /tf/raw_ops/ResourceSparseApplyAdagradDA + - title: ResourceSparseApplyAdagradV2 + path: /tf/raw_ops/ResourceSparseApplyAdagradV2 + - title: ResourceSparseApplyCenteredRMSProp + path: /tf/raw_ops/ResourceSparseApplyCenteredRMSProp + - title: ResourceSparseApplyFtrl + path: /tf/raw_ops/ResourceSparseApplyFtrl + - title: ResourceSparseApplyFtrlV2 + path: /tf/raw_ops/ResourceSparseApplyFtrlV2 + - title: ResourceSparseApplyKerasMomentum + path: /tf/raw_ops/ResourceSparseApplyKerasMomentum + - title: ResourceSparseApplyMomentum + path: /tf/raw_ops/ResourceSparseApplyMomentum + - title: ResourceSparseApplyProximalAdagrad + path: /tf/raw_ops/ResourceSparseApplyProximalAdagrad + - title: ResourceSparseApplyProximalGradientDescent + path: /tf/raw_ops/ResourceSparseApplyProximalGradientDescent + - title: ResourceSparseApplyRMSProp + path: /tf/raw_ops/ResourceSparseApplyRMSProp + - title: ResourceStridedSliceAssign + path: /tf/raw_ops/ResourceStridedSliceAssign + - title: Restore + path: /tf/raw_ops/Restore + - title: RestoreSlice + path: /tf/raw_ops/RestoreSlice + - title: RestoreV2 + path: /tf/raw_ops/RestoreV2 + - title: RetrieveTPUEmbeddingADAMParameters + path: /tf/raw_ops/RetrieveTPUEmbeddingADAMParameters + - title: RetrieveTPUEmbeddingADAMParametersGradAccumDebug + path: /tf/raw_ops/RetrieveTPUEmbeddingADAMParametersGradAccumDebug + - title: RetrieveTPUEmbeddingAdadeltaParameters + path: /tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParameters + - title: RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug + path: /tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug + - title: RetrieveTPUEmbeddingAdagradParameters + path: /tf/raw_ops/RetrieveTPUEmbeddingAdagradParameters + - title: RetrieveTPUEmbeddingAdagradParametersGradAccumDebug + path: /tf/raw_ops/RetrieveTPUEmbeddingAdagradParametersGradAccumDebug + - title: RetrieveTPUEmbeddingCenteredRMSPropParameters + path: /tf/raw_ops/RetrieveTPUEmbeddingCenteredRMSPropParameters + - title: RetrieveTPUEmbeddingFTRLParameters + path: /tf/raw_ops/RetrieveTPUEmbeddingFTRLParameters + - title: RetrieveTPUEmbeddingFTRLParametersGradAccumDebug + path: /tf/raw_ops/RetrieveTPUEmbeddingFTRLParametersGradAccumDebug + - title: RetrieveTPUEmbeddingMDLAdagradLightParameters + path: /tf/raw_ops/RetrieveTPUEmbeddingMDLAdagradLightParameters + - title: RetrieveTPUEmbeddingMomentumParameters + path: /tf/raw_ops/RetrieveTPUEmbeddingMomentumParameters + - title: RetrieveTPUEmbeddingMomentumParametersGradAccumDebug + path: /tf/raw_ops/RetrieveTPUEmbeddingMomentumParametersGradAccumDebug + - title: RetrieveTPUEmbeddingProximalAdagradParameters + path: /tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParameters + - title: RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug + path: 
/tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug + - title: RetrieveTPUEmbeddingRMSPropParameters + path: /tf/raw_ops/RetrieveTPUEmbeddingRMSPropParameters + - title: RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug + path: /tf/raw_ops/RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug + - title: RetrieveTPUEmbeddingStochasticGradientDescentParameters + path: /tf/raw_ops/RetrieveTPUEmbeddingStochasticGradientDescentParameters + - title: Reverse + path: /tf/raw_ops/Reverse + - title: ReverseSequence + path: /tf/raw_ops/ReverseSequence + - title: ReverseV2 + path: /tf/raw_ops/ReverseV2 + - title: RightShift + path: /tf/raw_ops/RightShift + - title: Rint + path: /tf/raw_ops/Rint + - title: RngSkip + path: /tf/raw_ops/RngSkip + - title: Roll + path: /tf/raw_ops/Roll + - title: Round + path: /tf/raw_ops/Round + - title: Rsqrt + path: /tf/raw_ops/Rsqrt + - title: RsqrtGrad + path: /tf/raw_ops/RsqrtGrad + - title: SampleDistortedBoundingBox + path: /tf/raw_ops/SampleDistortedBoundingBox + - title: SampleDistortedBoundingBoxV2 + path: /tf/raw_ops/SampleDistortedBoundingBoxV2 + - title: SamplingDataset + path: /tf/raw_ops/SamplingDataset + - title: Save + path: /tf/raw_ops/Save + - title: SaveSlices + path: /tf/raw_ops/SaveSlices + - title: SaveV2 + path: /tf/raw_ops/SaveV2 + - title: ScalarSummary + path: /tf/raw_ops/ScalarSummary + - title: ScaleAndTranslate + path: /tf/raw_ops/ScaleAndTranslate + - title: ScaleAndTranslateGrad + path: /tf/raw_ops/ScaleAndTranslateGrad + - title: ScanDataset + path: /tf/raw_ops/ScanDataset + - title: ScatterAdd + path: /tf/raw_ops/ScatterAdd + - title: ScatterDiv + path: /tf/raw_ops/ScatterDiv + - title: ScatterMax + path: /tf/raw_ops/ScatterMax + - title: ScatterMin + path: /tf/raw_ops/ScatterMin + - title: ScatterMul + path: /tf/raw_ops/ScatterMul + - title: ScatterNd + path: /tf/raw_ops/ScatterNd + - title: ScatterNdAdd + path: /tf/raw_ops/ScatterNdAdd + - title: ScatterNdNonAliasingAdd + path: /tf/raw_ops/ScatterNdNonAliasingAdd + - title: ScatterNdSub + path: /tf/raw_ops/ScatterNdSub + - title: ScatterNdUpdate + path: /tf/raw_ops/ScatterNdUpdate + - title: ScatterSub + path: /tf/raw_ops/ScatterSub + - title: ScatterUpdate + path: /tf/raw_ops/ScatterUpdate + - title: SdcaFprint + path: /tf/raw_ops/SdcaFprint + - title: SdcaOptimizer + path: /tf/raw_ops/SdcaOptimizer + - title: SdcaOptimizerV2 + path: /tf/raw_ops/SdcaOptimizerV2 + - title: SdcaShrinkL1 + path: /tf/raw_ops/SdcaShrinkL1 + - title: SegmentMax + path: /tf/raw_ops/SegmentMax + - title: SegmentMean + path: /tf/raw_ops/SegmentMean + - title: SegmentMin + path: /tf/raw_ops/SegmentMin + - title: SegmentProd + path: /tf/raw_ops/SegmentProd + - title: SegmentSum + path: /tf/raw_ops/SegmentSum + - title: Select + path: /tf/raw_ops/Select + - title: SelectV2 + path: /tf/raw_ops/SelectV2 + - title: SelfAdjointEig + path: /tf/raw_ops/SelfAdjointEig + - title: SelfAdjointEigV2 + path: /tf/raw_ops/SelfAdjointEigV2 + - title: Selu + path: /tf/raw_ops/Selu + - title: SeluGrad + path: /tf/raw_ops/SeluGrad + - title: Send + path: /tf/raw_ops/Send + - title: SendTPUEmbeddingGradients + path: /tf/raw_ops/SendTPUEmbeddingGradients + - title: SerializeIterator + path: /tf/raw_ops/SerializeIterator + - title: SerializeManySparse + path: /tf/raw_ops/SerializeManySparse + - title: SerializeSparse + path: /tf/raw_ops/SerializeSparse + - title: SerializeTensor + path: /tf/raw_ops/SerializeTensor + - title: SetSize + path: /tf/raw_ops/SetSize + - title: SetStatsAggregatorDataset + path: 
/tf/raw_ops/SetStatsAggregatorDataset + - title: Shape + path: /tf/raw_ops/Shape + - title: ShapeN + path: /tf/raw_ops/ShapeN + - title: ShardDataset + path: /tf/raw_ops/ShardDataset + - title: ShardedFilename + path: /tf/raw_ops/ShardedFilename + - title: ShardedFilespec + path: /tf/raw_ops/ShardedFilespec + - title: ShuffleAndRepeatDataset + path: /tf/raw_ops/ShuffleAndRepeatDataset + - title: ShuffleDataset + path: /tf/raw_ops/ShuffleDataset + - title: ShuffleDatasetV2 + path: /tf/raw_ops/ShuffleDatasetV2 + - title: ShutdownDistributedTPU + path: /tf/raw_ops/ShutdownDistributedTPU + - title: Sigmoid + path: /tf/raw_ops/Sigmoid + - title: SigmoidGrad + path: /tf/raw_ops/SigmoidGrad + - title: Sign + path: /tf/raw_ops/Sign + - title: Sin + path: /tf/raw_ops/Sin + - title: Sinh + path: /tf/raw_ops/Sinh + - title: Size + path: /tf/raw_ops/Size + - title: SkipDataset + path: /tf/raw_ops/SkipDataset + - title: SleepDataset + path: /tf/raw_ops/SleepDataset + - title: Slice + path: /tf/raw_ops/Slice + - title: SlidingWindowDataset + path: /tf/raw_ops/SlidingWindowDataset + - title: Snapshot + path: /tf/raw_ops/Snapshot + - title: SnapshotDataset + path: /tf/raw_ops/SnapshotDataset + - title: SobolSample + path: /tf/raw_ops/SobolSample + - title: Softmax + path: /tf/raw_ops/Softmax + - title: SoftmaxCrossEntropyWithLogits + path: /tf/raw_ops/SoftmaxCrossEntropyWithLogits + - title: Softplus + path: /tf/raw_ops/Softplus + - title: SoftplusGrad + path: /tf/raw_ops/SoftplusGrad + - title: Softsign + path: /tf/raw_ops/Softsign + - title: SoftsignGrad + path: /tf/raw_ops/SoftsignGrad + - title: SpaceToBatch + path: /tf/raw_ops/SpaceToBatch + - title: SpaceToBatchND + path: /tf/raw_ops/SpaceToBatchND + - title: SpaceToDepth + path: /tf/raw_ops/SpaceToDepth + - title: SparseAccumulatorApplyGradient + path: /tf/raw_ops/SparseAccumulatorApplyGradient + - title: SparseAccumulatorTakeGradient + path: /tf/raw_ops/SparseAccumulatorTakeGradient + - title: SparseAdd + path: /tf/raw_ops/SparseAdd + - title: SparseAddGrad + path: /tf/raw_ops/SparseAddGrad + - title: SparseApplyAdadelta + path: /tf/raw_ops/SparseApplyAdadelta + - title: SparseApplyAdagrad + path: /tf/raw_ops/SparseApplyAdagrad + - title: SparseApplyAdagradDA + path: /tf/raw_ops/SparseApplyAdagradDA + - title: SparseApplyAdagradV2 + path: /tf/raw_ops/SparseApplyAdagradV2 + - title: SparseApplyCenteredRMSProp + path: /tf/raw_ops/SparseApplyCenteredRMSProp + - title: SparseApplyFtrl + path: /tf/raw_ops/SparseApplyFtrl + - title: SparseApplyFtrlV2 + path: /tf/raw_ops/SparseApplyFtrlV2 + - title: SparseApplyMomentum + path: /tf/raw_ops/SparseApplyMomentum + - title: SparseApplyProximalAdagrad + path: /tf/raw_ops/SparseApplyProximalAdagrad + - title: SparseApplyProximalGradientDescent + path: /tf/raw_ops/SparseApplyProximalGradientDescent + - title: SparseApplyRMSProp + path: /tf/raw_ops/SparseApplyRMSProp + - title: SparseConcat + path: /tf/raw_ops/SparseConcat + - title: SparseConditionalAccumulator + path: /tf/raw_ops/SparseConditionalAccumulator + - title: SparseCross + path: /tf/raw_ops/SparseCross + - title: SparseDenseCwiseAdd + path: /tf/raw_ops/SparseDenseCwiseAdd + - title: SparseDenseCwiseDiv + path: /tf/raw_ops/SparseDenseCwiseDiv + - title: SparseDenseCwiseMul + path: /tf/raw_ops/SparseDenseCwiseMul + - title: SparseFillEmptyRows + path: /tf/raw_ops/SparseFillEmptyRows + - title: SparseFillEmptyRowsGrad + path: /tf/raw_ops/SparseFillEmptyRowsGrad + - title: SparseMatMul + path: /tf/raw_ops/SparseMatMul + - title: SparseMatrixAdd + path: 
/tf/raw_ops/SparseMatrixAdd + - title: SparseMatrixMatMul + path: /tf/raw_ops/SparseMatrixMatMul + - title: SparseMatrixMul + path: /tf/raw_ops/SparseMatrixMul + - title: SparseMatrixNNZ + path: /tf/raw_ops/SparseMatrixNNZ + - title: SparseMatrixOrderingAMD + path: /tf/raw_ops/SparseMatrixOrderingAMD + - title: SparseMatrixSoftmax + path: /tf/raw_ops/SparseMatrixSoftmax + - title: SparseMatrixSoftmaxGrad + path: /tf/raw_ops/SparseMatrixSoftmaxGrad + - title: SparseMatrixSparseCholesky + path: /tf/raw_ops/SparseMatrixSparseCholesky + - title: SparseMatrixSparseMatMul + path: /tf/raw_ops/SparseMatrixSparseMatMul + - title: SparseMatrixTranspose + path: /tf/raw_ops/SparseMatrixTranspose + - title: SparseMatrixZeros + path: /tf/raw_ops/SparseMatrixZeros + - title: SparseReduceMax + path: /tf/raw_ops/SparseReduceMax + - title: SparseReduceMaxSparse + path: /tf/raw_ops/SparseReduceMaxSparse + - title: SparseReduceSum + path: /tf/raw_ops/SparseReduceSum + - title: SparseReduceSumSparse + path: /tf/raw_ops/SparseReduceSumSparse + - title: SparseReorder + path: /tf/raw_ops/SparseReorder + - title: SparseReshape + path: /tf/raw_ops/SparseReshape + - title: SparseSegmentMean + path: /tf/raw_ops/SparseSegmentMean + - title: SparseSegmentMeanGrad + path: /tf/raw_ops/SparseSegmentMeanGrad + - title: SparseSegmentMeanWithNumSegments + path: /tf/raw_ops/SparseSegmentMeanWithNumSegments + - title: SparseSegmentSqrtN + path: /tf/raw_ops/SparseSegmentSqrtN + - title: SparseSegmentSqrtNGrad + path: /tf/raw_ops/SparseSegmentSqrtNGrad + - title: SparseSegmentSqrtNWithNumSegments + path: /tf/raw_ops/SparseSegmentSqrtNWithNumSegments + - title: SparseSegmentSum + path: /tf/raw_ops/SparseSegmentSum + - title: SparseSegmentSumWithNumSegments + path: /tf/raw_ops/SparseSegmentSumWithNumSegments + - title: SparseSlice + path: /tf/raw_ops/SparseSlice + - title: SparseSliceGrad + path: /tf/raw_ops/SparseSliceGrad + - title: SparseSoftmax + path: /tf/raw_ops/SparseSoftmax + - title: SparseSoftmaxCrossEntropyWithLogits + path: /tf/raw_ops/SparseSoftmaxCrossEntropyWithLogits + - title: SparseSparseMaximum + path: /tf/raw_ops/SparseSparseMaximum + - title: SparseSparseMinimum + path: /tf/raw_ops/SparseSparseMinimum + - title: SparseSplit + path: /tf/raw_ops/SparseSplit + - title: SparseTensorDenseAdd + path: /tf/raw_ops/SparseTensorDenseAdd + - title: SparseTensorDenseMatMul + path: /tf/raw_ops/SparseTensorDenseMatMul + - title: SparseTensorSliceDataset + path: /tf/raw_ops/SparseTensorSliceDataset + - title: SparseTensorToCSRSparseMatrix + path: /tf/raw_ops/SparseTensorToCSRSparseMatrix + - title: SparseToDense + path: /tf/raw_ops/SparseToDense + - title: SparseToSparseSetOperation + path: /tf/raw_ops/SparseToSparseSetOperation + - title: Spence + path: /tf/raw_ops/Spence + - title: Split + path: /tf/raw_ops/Split + - title: SplitV + path: /tf/raw_ops/SplitV + - title: SqlDataset + path: /tf/raw_ops/SqlDataset + - title: Sqrt + path: /tf/raw_ops/Sqrt + - title: SqrtGrad + path: /tf/raw_ops/SqrtGrad + - title: Square + path: /tf/raw_ops/Square + - title: SquaredDifference + path: /tf/raw_ops/SquaredDifference + - title: Squeeze + path: /tf/raw_ops/Squeeze + - title: Stack + path: /tf/raw_ops/Stack + - title: StackClose + path: /tf/raw_ops/StackClose + - title: StackCloseV2 + path: /tf/raw_ops/StackCloseV2 + - title: StackPop + path: /tf/raw_ops/StackPop + - title: StackPopV2 + path: /tf/raw_ops/StackPopV2 + - title: StackPush + path: /tf/raw_ops/StackPush + - title: StackPushV2 + path: /tf/raw_ops/StackPushV2 + - title: 
StackV2 + path: /tf/raw_ops/StackV2 + - title: Stage + path: /tf/raw_ops/Stage + - title: StageClear + path: /tf/raw_ops/StageClear + - title: StagePeek + path: /tf/raw_ops/StagePeek + - title: StageSize + path: /tf/raw_ops/StageSize + - title: StatefulPartitionedCall + path: /tf/raw_ops/StatefulPartitionedCall + - title: StatefulRandomBinomial + path: /tf/raw_ops/StatefulRandomBinomial + - title: StatefulStandardNormal + path: /tf/raw_ops/StatefulStandardNormal + - title: StatefulStandardNormalV2 + path: /tf/raw_ops/StatefulStandardNormalV2 + - title: StatefulTruncatedNormal + path: /tf/raw_ops/StatefulTruncatedNormal + - title: StatefulUniform + path: /tf/raw_ops/StatefulUniform + - title: StatefulUniformFullInt + path: /tf/raw_ops/StatefulUniformFullInt + - title: StatefulUniformInt + path: /tf/raw_ops/StatefulUniformInt + - title: StatelessIf + path: /tf/raw_ops/StatelessIf + - title: StatelessMultinomial + path: /tf/raw_ops/StatelessMultinomial + - title: StatelessRandomBinomial + path: /tf/raw_ops/StatelessRandomBinomial + - title: StatelessRandomGammaV2 + path: /tf/raw_ops/StatelessRandomGammaV2 + - title: StatelessRandomNormal + path: /tf/raw_ops/StatelessRandomNormal + - title: StatelessRandomPoisson + path: /tf/raw_ops/StatelessRandomPoisson + - title: StatelessRandomUniform + path: /tf/raw_ops/StatelessRandomUniform + - title: StatelessRandomUniformFullInt + path: /tf/raw_ops/StatelessRandomUniformFullInt + - title: StatelessRandomUniformInt + path: /tf/raw_ops/StatelessRandomUniformInt + - title: StatelessTruncatedNormal + path: /tf/raw_ops/StatelessTruncatedNormal + - title: StatelessWhile + path: /tf/raw_ops/StatelessWhile + - title: StaticRegexFullMatch + path: /tf/raw_ops/StaticRegexFullMatch + - title: StaticRegexReplace + path: /tf/raw_ops/StaticRegexReplace + - title: StatsAggregatorHandle + path: /tf/raw_ops/StatsAggregatorHandle + - title: StatsAggregatorHandleV2 + path: /tf/raw_ops/StatsAggregatorHandleV2 + - title: StatsAggregatorSetSummaryWriter + path: /tf/raw_ops/StatsAggregatorSetSummaryWriter + - title: StatsAggregatorSummary + path: /tf/raw_ops/StatsAggregatorSummary + - title: StopGradient + path: /tf/raw_ops/StopGradient + - title: StridedSlice + path: /tf/raw_ops/StridedSlice + - title: StridedSliceAssign + path: /tf/raw_ops/StridedSliceAssign + - title: StridedSliceGrad + path: /tf/raw_ops/StridedSliceGrad + - title: StringFormat + path: /tf/raw_ops/StringFormat + - title: StringJoin + path: /tf/raw_ops/StringJoin + - title: StringLength + path: /tf/raw_ops/StringLength + - title: StringLower + path: /tf/raw_ops/StringLower + - title: StringNGrams + path: /tf/raw_ops/StringNGrams + - title: StringSplit + path: /tf/raw_ops/StringSplit + - title: StringSplitV2 + path: /tf/raw_ops/StringSplitV2 + - title: StringStrip + path: /tf/raw_ops/StringStrip + - title: StringToHashBucket + path: /tf/raw_ops/StringToHashBucket + - title: StringToHashBucketFast + path: /tf/raw_ops/StringToHashBucketFast + - title: StringToHashBucketStrong + path: /tf/raw_ops/StringToHashBucketStrong + - title: StringToNumber + path: /tf/raw_ops/StringToNumber + - title: StringUpper + path: /tf/raw_ops/StringUpper + - title: Sub + path: /tf/raw_ops/Sub + - title: Substr + path: /tf/raw_ops/Substr + - title: Sum + path: /tf/raw_ops/Sum + - title: SummaryWriter + path: /tf/raw_ops/SummaryWriter + - title: Svd + path: /tf/raw_ops/Svd + - title: Switch + path: /tf/raw_ops/Switch + - title: SymbolicGradient + path: /tf/raw_ops/SymbolicGradient + - title: TFRecordDataset + path: 
/tf/raw_ops/TFRecordDataset + - title: TFRecordReader + path: /tf/raw_ops/TFRecordReader + - title: TFRecordReaderV2 + path: /tf/raw_ops/TFRecordReaderV2 + - title: TPUCompilationResult + path: /tf/raw_ops/TPUCompilationResult + - title: TPUEmbeddingActivations + path: /tf/raw_ops/TPUEmbeddingActivations + - title: TPUOrdinalSelector + path: /tf/raw_ops/TPUOrdinalSelector + - title: TPUPartitionedCall + path: /tf/raw_ops/TPUPartitionedCall + - title: TPUReplicateMetadata + path: /tf/raw_ops/TPUReplicateMetadata + - title: TPUReplicatedInput + path: /tf/raw_ops/TPUReplicatedInput + - title: TPUReplicatedOutput + path: /tf/raw_ops/TPUReplicatedOutput + - title: TakeDataset + path: /tf/raw_ops/TakeDataset + - title: TakeManySparseFromTensorsMap + path: /tf/raw_ops/TakeManySparseFromTensorsMap + - title: TakeWhileDataset + path: /tf/raw_ops/TakeWhileDataset + - title: Tan + path: /tf/raw_ops/Tan + - title: Tanh + path: /tf/raw_ops/Tanh + - title: TanhGrad + path: /tf/raw_ops/TanhGrad + - title: TemporaryVariable + path: /tf/raw_ops/TemporaryVariable + - title: TensorArray + path: /tf/raw_ops/TensorArray + - title: TensorArrayClose + path: /tf/raw_ops/TensorArrayClose + - title: TensorArrayCloseV2 + path: /tf/raw_ops/TensorArrayCloseV2 + - title: TensorArrayCloseV3 + path: /tf/raw_ops/TensorArrayCloseV3 + - title: TensorArrayConcat + path: /tf/raw_ops/TensorArrayConcat + - title: TensorArrayConcatV2 + path: /tf/raw_ops/TensorArrayConcatV2 + - title: TensorArrayConcatV3 + path: /tf/raw_ops/TensorArrayConcatV3 + - title: TensorArrayGather + path: /tf/raw_ops/TensorArrayGather + - title: TensorArrayGatherV2 + path: /tf/raw_ops/TensorArrayGatherV2 + - title: TensorArrayGatherV3 + path: /tf/raw_ops/TensorArrayGatherV3 + - title: TensorArrayGrad + path: /tf/raw_ops/TensorArrayGrad + - title: TensorArrayGradV2 + path: /tf/raw_ops/TensorArrayGradV2 + - title: TensorArrayGradV3 + path: /tf/raw_ops/TensorArrayGradV3 + - title: TensorArrayGradWithShape + path: /tf/raw_ops/TensorArrayGradWithShape + - title: TensorArrayPack + path: /tf/raw_ops/TensorArrayPack + - title: TensorArrayRead + path: /tf/raw_ops/TensorArrayRead + - title: TensorArrayReadV2 + path: /tf/raw_ops/TensorArrayReadV2 + - title: TensorArrayReadV3 + path: /tf/raw_ops/TensorArrayReadV3 + - title: TensorArrayScatter + path: /tf/raw_ops/TensorArrayScatter + - title: TensorArrayScatterV2 + path: /tf/raw_ops/TensorArrayScatterV2 + - title: TensorArrayScatterV3 + path: /tf/raw_ops/TensorArrayScatterV3 + - title: TensorArraySize + path: /tf/raw_ops/TensorArraySize + - title: TensorArraySizeV2 + path: /tf/raw_ops/TensorArraySizeV2 + - title: TensorArraySizeV3 + path: /tf/raw_ops/TensorArraySizeV3 + - title: TensorArraySplit + path: /tf/raw_ops/TensorArraySplit + - title: TensorArraySplitV2 + path: /tf/raw_ops/TensorArraySplitV2 + - title: TensorArraySplitV3 + path: /tf/raw_ops/TensorArraySplitV3 + - title: TensorArrayUnpack + path: /tf/raw_ops/TensorArrayUnpack + - title: TensorArrayV2 + path: /tf/raw_ops/TensorArrayV2 + - title: TensorArrayV3 + path: /tf/raw_ops/TensorArrayV3 + - title: TensorArrayWrite + path: /tf/raw_ops/TensorArrayWrite + - title: TensorArrayWriteV2 + path: /tf/raw_ops/TensorArrayWriteV2 + - title: TensorArrayWriteV3 + path: /tf/raw_ops/TensorArrayWriteV3 + - title: TensorDataset + path: /tf/raw_ops/TensorDataset + - title: TensorListConcat + path: /tf/raw_ops/TensorListConcat + - title: TensorListConcatLists + path: /tf/raw_ops/TensorListConcatLists + - title: TensorListConcatV2 + path: /tf/raw_ops/TensorListConcatV2 + - 
title: TensorListElementShape + path: /tf/raw_ops/TensorListElementShape + - title: TensorListFromTensor + path: /tf/raw_ops/TensorListFromTensor + - title: TensorListGather + path: /tf/raw_ops/TensorListGather + - title: TensorListGetItem + path: /tf/raw_ops/TensorListGetItem + - title: TensorListLength + path: /tf/raw_ops/TensorListLength + - title: TensorListPopBack + path: /tf/raw_ops/TensorListPopBack + - title: TensorListPushBack + path: /tf/raw_ops/TensorListPushBack + - title: TensorListPushBackBatch + path: /tf/raw_ops/TensorListPushBackBatch + - title: TensorListReserve + path: /tf/raw_ops/TensorListReserve + - title: TensorListResize + path: /tf/raw_ops/TensorListResize + - title: TensorListScatter + path: /tf/raw_ops/TensorListScatter + - title: TensorListScatterIntoExistingList + path: /tf/raw_ops/TensorListScatterIntoExistingList + - title: TensorListScatterV2 + path: /tf/raw_ops/TensorListScatterV2 + - title: TensorListSetItem + path: /tf/raw_ops/TensorListSetItem + - title: TensorListSplit + path: /tf/raw_ops/TensorListSplit + - title: TensorListStack + path: /tf/raw_ops/TensorListStack + - title: TensorScatterAdd + path: /tf/raw_ops/TensorScatterAdd + - title: TensorScatterSub + path: /tf/raw_ops/TensorScatterSub + - title: TensorScatterUpdate + path: /tf/raw_ops/TensorScatterUpdate + - title: TensorSliceDataset + path: /tf/raw_ops/TensorSliceDataset + - title: TensorStridedSliceUpdate + path: /tf/raw_ops/TensorStridedSliceUpdate + - title: TensorSummary + path: /tf/raw_ops/TensorSummary + - title: TensorSummaryV2 + path: /tf/raw_ops/TensorSummaryV2 + - title: TextLineDataset + path: /tf/raw_ops/TextLineDataset + - title: TextLineReader + path: /tf/raw_ops/TextLineReader + - title: TextLineReaderV2 + path: /tf/raw_ops/TextLineReaderV2 + - title: ThreadPoolDataset + path: /tf/raw_ops/ThreadPoolDataset + - title: ThreadPoolHandle + path: /tf/raw_ops/ThreadPoolHandle + - title: ThreadUnsafeUnigramCandidateSampler + path: /tf/raw_ops/ThreadUnsafeUnigramCandidateSampler + - title: Tile + path: /tf/raw_ops/Tile + - title: TileGrad + path: /tf/raw_ops/TileGrad + - title: Timestamp + path: /tf/raw_ops/Timestamp + - title: ToBool + path: /tf/raw_ops/ToBool + - title: TopK + path: /tf/raw_ops/TopK + - title: TopKV2 + path: /tf/raw_ops/TopKV2 + - title: Transpose + path: /tf/raw_ops/Transpose + - title: TridiagonalMatMul + path: /tf/raw_ops/TridiagonalMatMul + - title: TridiagonalSolve + path: /tf/raw_ops/TridiagonalSolve + - title: TruncateDiv + path: /tf/raw_ops/TruncateDiv + - title: TruncateMod + path: /tf/raw_ops/TruncateMod + - title: TruncatedNormal + path: /tf/raw_ops/TruncatedNormal + - title: Unbatch + path: /tf/raw_ops/Unbatch + - title: UnbatchDataset + path: /tf/raw_ops/UnbatchDataset + - title: UnbatchGrad + path: /tf/raw_ops/UnbatchGrad + - title: UnicodeDecode + path: /tf/raw_ops/UnicodeDecode + - title: UnicodeDecodeWithOffsets + path: /tf/raw_ops/UnicodeDecodeWithOffsets + - title: UnicodeEncode + path: /tf/raw_ops/UnicodeEncode + - title: UnicodeScript + path: /tf/raw_ops/UnicodeScript + - title: UnicodeTranscode + path: /tf/raw_ops/UnicodeTranscode + - title: UniformCandidateSampler + path: /tf/raw_ops/UniformCandidateSampler + - title: Unique + path: /tf/raw_ops/Unique + - title: UniqueDataset + path: /tf/raw_ops/UniqueDataset + - title: UniqueV2 + path: /tf/raw_ops/UniqueV2 + - title: UniqueWithCounts + path: /tf/raw_ops/UniqueWithCounts + - title: UniqueWithCountsV2 + path: /tf/raw_ops/UniqueWithCountsV2 + - title: Unpack + path: /tf/raw_ops/Unpack + - title: 
UnravelIndex + path: /tf/raw_ops/UnravelIndex + - title: UnsortedSegmentJoin + path: /tf/raw_ops/UnsortedSegmentJoin + - title: UnsortedSegmentMax + path: /tf/raw_ops/UnsortedSegmentMax + - title: UnsortedSegmentMin + path: /tf/raw_ops/UnsortedSegmentMin + - title: UnsortedSegmentProd + path: /tf/raw_ops/UnsortedSegmentProd + - title: UnsortedSegmentSum + path: /tf/raw_ops/UnsortedSegmentSum + - title: Unstage + path: /tf/raw_ops/Unstage + - title: UnwrapDatasetVariant + path: /tf/raw_ops/UnwrapDatasetVariant + - title: UpperBound + path: /tf/raw_ops/UpperBound + - title: VarHandleOp + path: /tf/raw_ops/VarHandleOp + - title: VarIsInitializedOp + path: /tf/raw_ops/VarIsInitializedOp + - title: Variable + path: /tf/raw_ops/Variable + - title: VariableShape + path: /tf/raw_ops/VariableShape + - title: VariableV2 + path: /tf/raw_ops/VariableV2 + - title: Where + path: /tf/raw_ops/Where + - title: While + path: /tf/raw_ops/While + - title: WholeFileReader + path: /tf/raw_ops/WholeFileReader + - title: WholeFileReaderV2 + path: /tf/raw_ops/WholeFileReaderV2 + - title: WindowDataset + path: /tf/raw_ops/WindowDataset + - title: WorkerHeartbeat + path: /tf/raw_ops/WorkerHeartbeat + - title: WrapDatasetVariant + path: /tf/raw_ops/WrapDatasetVariant + - title: WriteAudioSummary + path: /tf/raw_ops/WriteAudioSummary + - title: WriteFile + path: /tf/raw_ops/WriteFile + - title: WriteGraphSummary + path: /tf/raw_ops/WriteGraphSummary + - title: WriteHistogramSummary + path: /tf/raw_ops/WriteHistogramSummary + - title: WriteImageSummary + path: /tf/raw_ops/WriteImageSummary + - title: WriteRawProtoSummary + path: /tf/raw_ops/WriteRawProtoSummary + - title: WriteScalarSummary + path: /tf/raw_ops/WriteScalarSummary + - title: WriteSummary + path: /tf/raw_ops/WriteSummary + - title: Xdivy + path: /tf/raw_ops/Xdivy + - title: Xlog1py + path: /tf/raw_ops/Xlog1py + - title: Xlogy + path: /tf/raw_ops/Xlogy + - title: ZerosLike + path: /tf/raw_ops/ZerosLike + - title: Zeta + path: /tf/raw_ops/Zeta + - title: ZipDataset + path: /tf/raw_ops/ZipDataset +- title: tf.saved_model + section: + - title: Overview + path: /tf/saved_model + - title: Asset + path: /tf/saved_model/Asset + - title: SaveOptions + path: /tf/saved_model/SaveOptions + - title: contains_saved_model + path: /tf/saved_model/contains_saved_model + - title: load + path: /tf/saved_model/load + - title: save + path: /tf/saved_model/save +- title: tf.sets + section: + - title: Overview + path: /tf/sets + - title: difference + path: /tf/sets/difference + - title: intersection + path: /tf/sets/intersection + - title: size + path: /tf/sets/size + - title: union + path: /tf/sets/union +- title: tf.signal + section: + - title: Overview + path: /tf/signal + - title: dct + path: /tf/signal/dct + - title: fft + path: /tf/signal/fft + - title: fft2d + path: /tf/signal/fft2d + - title: fft3d + path: /tf/signal/fft3d + - title: fftshift + path: /tf/signal/fftshift + - title: frame + path: /tf/signal/frame + - title: hamming_window + path: /tf/signal/hamming_window + - title: hann_window + path: /tf/signal/hann_window + - title: idct + path: /tf/signal/idct + - title: ifft + path: /tf/signal/ifft + - title: ifft2d + path: /tf/signal/ifft2d + - title: ifft3d + path: /tf/signal/ifft3d + - title: ifftshift + path: /tf/signal/ifftshift + - title: inverse_mdct + path: /tf/signal/inverse_mdct + - title: inverse_stft + path: /tf/signal/inverse_stft + - title: inverse_stft_window_fn + path: /tf/signal/inverse_stft_window_fn + - title: irfft + path: /tf/signal/irfft + - 
title: irfft2d + path: /tf/signal/irfft2d + - title: irfft3d + path: /tf/signal/irfft3d + - title: kaiser_bessel_derived_window + path: /tf/signal/kaiser_bessel_derived_window + - title: kaiser_window + path: /tf/signal/kaiser_window + - title: linear_to_mel_weight_matrix + path: /tf/signal/linear_to_mel_weight_matrix + - title: mdct + path: /tf/signal/mdct + - title: mfccs_from_log_mel_spectrograms + path: /tf/signal/mfccs_from_log_mel_spectrograms + - title: overlap_and_add + path: /tf/signal/overlap_and_add + - title: rfft + path: /tf/signal/rfft + - title: rfft2d + path: /tf/signal/rfft2d + - title: rfft3d + path: /tf/signal/rfft3d + - title: stft + path: /tf/signal/stft + - title: vorbis_window + path: /tf/signal/vorbis_window +- title: tf.sparse + section: + - title: Overview + path: /tf/sparse + - title: SparseTensor + path: /tf/sparse/SparseTensor + - title: add + path: /tf/sparse/add + - title: concat + path: /tf/sparse/concat + - title: cross + path: /tf/sparse/cross + - title: cross_hashed + path: /tf/sparse/cross_hashed + - title: expand_dims + path: /tf/sparse/expand_dims + - title: eye + path: /tf/sparse/eye + - title: fill_empty_rows + path: /tf/sparse/fill_empty_rows + - title: from_dense + path: /tf/sparse/from_dense + - title: mask + path: /tf/sparse/mask + - title: maximum + path: /tf/sparse/maximum + - title: minimum + path: /tf/sparse/minimum + - title: reduce_max + path: /tf/sparse/reduce_max + - title: reduce_sum + path: /tf/sparse/reduce_sum + - title: reorder + path: /tf/sparse/reorder + - title: reset_shape + path: /tf/sparse/reset_shape + - title: reshape + path: /tf/sparse/reshape + - title: retain + path: /tf/sparse/retain + - title: segment_mean + path: /tf/sparse/segment_mean + - title: segment_sqrt_n + path: /tf/sparse/segment_sqrt_n + - title: segment_sum + path: /tf/sparse/segment_sum + - title: slice + path: /tf/sparse/slice + - title: softmax + path: /tf/sparse/softmax + - title: sparse_dense_matmul + path: /tf/sparse/sparse_dense_matmul + - title: split + path: /tf/sparse/split + - title: to_dense + path: /tf/sparse/to_dense + - title: to_indicator + path: /tf/sparse/to_indicator + - title: transpose + path: /tf/sparse/transpose +- title: tf.strings + section: + - title: Overview + path: /tf/strings + - title: as_string + path: /tf/strings/as_string + - title: bytes_split + path: /tf/strings/bytes_split + - title: format + path: /tf/strings/format + - title: join + path: /tf/strings/join + - title: length + path: /tf/strings/length + - title: lower + path: /tf/strings/lower + - title: ngrams + path: /tf/strings/ngrams + - title: reduce_join + path: /tf/strings/reduce_join + - title: regex_full_match + path: /tf/strings/regex_full_match + - title: regex_replace + path: /tf/strings/regex_replace + - title: split + path: /tf/strings/split + - title: strip + path: /tf/strings/strip + - title: substr + path: /tf/strings/substr + - title: to_hash_bucket + path: /tf/strings/to_hash_bucket + - title: to_hash_bucket_fast + path: /tf/strings/to_hash_bucket_fast + - title: to_hash_bucket_strong + path: /tf/strings/to_hash_bucket_strong + - title: to_number + path: /tf/strings/to_number + - title: unicode_decode + path: /tf/strings/unicode_decode + - title: unicode_decode_with_offsets + path: /tf/strings/unicode_decode_with_offsets + - title: unicode_encode + path: /tf/strings/unicode_encode + - title: unicode_script + path: /tf/strings/unicode_script + - title: unicode_split + path: /tf/strings/unicode_split + - title: unicode_split_with_offsets + path: 
/tf/strings/unicode_split_with_offsets + - title: unicode_transcode + path: /tf/strings/unicode_transcode + - title: unsorted_segment_join + path: /tf/strings/unsorted_segment_join + - title: upper + path: /tf/strings/upper +- title: tf.summary + section: + - title: Overview + path: /tf/summary + - title: SummaryWriter + path: /tf/summary/SummaryWriter + - title: audio + path: /tf/summary/audio + - title: create_file_writer + path: /tf/summary/create_file_writer + - title: create_noop_writer + path: /tf/summary/create_noop_writer + - title: flush + path: /tf/summary/flush + - title: histogram + path: /tf/summary/histogram + - title: image + path: /tf/summary/image + - title: record_if + path: /tf/summary/record_if + - title: scalar + path: /tf/summary/scalar + - title: text + path: /tf/summary/text + - title: trace_export + path: /tf/summary/trace_export + - title: trace_off + path: /tf/summary/trace_off + - title: trace_on + path: /tf/summary/trace_on + - title: write + path: /tf/summary/write + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/summary/experimental + - title: get_step + path: /tf/summary/experimental/get_step + - title: set_step + path: /tf/summary/experimental/set_step + - title: summary_scope + path: /tf/summary/experimental/summary_scope + - title: write_raw_pb + path: /tf/summary/experimental/write_raw_pb +- title: tf.sysconfig + section: + - title: Overview + path: /tf/sysconfig + - title: get_compile_flags + path: /tf/sysconfig/get_compile_flags + - title: get_include + path: /tf/sysconfig/get_include + - title: get_lib + path: /tf/sysconfig/get_lib + - title: get_link_flags + path: /tf/sysconfig/get_link_flags +- title: tf.test + section: + - title: Overview + path: /tf/test + - title: Benchmark + path: /tf/test/Benchmark + - title: TestCase + path: /tf/test/TestCase + - title: TestCase.failureException + path: /tf/test/TestCase/failureException + - title: assert_equal_graph_def + path: /tf/test/assert_equal_graph_def + - title: benchmark_config + path: /tf/test/benchmark_config + - title: compute_gradient + path: /tf/test/compute_gradient + - title: create_local_cluster + path: /tf/test/create_local_cluster + - title: gpu_device_name + path: /tf/test/gpu_device_name + - title: is_built_with_cuda + path: /tf/test/is_built_with_cuda + - title: is_built_with_gpu_support + path: /tf/test/is_built_with_gpu_support + - title: is_built_with_rocm + path: /tf/test/is_built_with_rocm + - title: is_built_with_xla + path: /tf/test/is_built_with_xla + - title: is_gpu_available + status: deprecated + path: /tf/test/is_gpu_available + - title: main + path: /tf/test/main +- title: tf.tpu + section: + - title: Overview + path: /tf/tpu + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/tpu/experimental + - title: DeviceAssignment + path: /tf/tpu/experimental/DeviceAssignment + - title: initialize_tpu_system + path: /tf/tpu/experimental/initialize_tpu_system + - title: shutdown_tpu_system + path: /tf/tpu/experimental/shutdown_tpu_system +- title: tf.train + section: + - title: Overview + path: /tf/train + - title: BytesList + path: /tf/train/BytesList + - title: Checkpoint + path: /tf/train/Checkpoint + - title: CheckpointManager + path: /tf/train/CheckpointManager + - title: ClusterDef + path: /tf/train/ClusterDef + - title: ClusterSpec + path: /tf/train/ClusterSpec + - title: Coordinator + path: /tf/train/Coordinator + - title: Example + path: /tf/train/Example + - title: ExponentialMovingAverage + path: 
/tf/train/ExponentialMovingAverage + - title: Feature + path: /tf/train/Feature + - title: FeatureList + path: /tf/train/FeatureList + - title: FeatureLists + path: /tf/train/FeatureLists + - title: FeatureLists.FeatureListEntry + path: /tf/train/FeatureLists/FeatureListEntry + - title: Features + path: /tf/train/Features + - title: Features.FeatureEntry + path: /tf/train/Features/FeatureEntry + - title: FloatList + path: /tf/train/FloatList + - title: Int64List + path: /tf/train/Int64List + - title: JobDef + path: /tf/train/JobDef + - title: JobDef.TasksEntry + path: /tf/train/JobDef/TasksEntry + - title: SequenceExample + path: /tf/train/SequenceExample + - title: ServerDef + path: /tf/train/ServerDef + - title: checkpoints_iterator + path: /tf/train/checkpoints_iterator + - title: get_checkpoint_state + path: /tf/train/get_checkpoint_state + - title: latest_checkpoint + path: /tf/train/latest_checkpoint + - title: list_variables + path: /tf/train/list_variables + - title: load_checkpoint + path: /tf/train/load_checkpoint + - title: load_variable + path: /tf/train/load_variable + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/train/experimental + - title: PythonState + path: /tf/train/experimental/PythonState + - title: disable_mixed_precision_graph_rewrite + path: /tf/train/experimental/disable_mixed_precision_graph_rewrite + - title: enable_mixed_precision_graph_rewrite + path: /tf/train/experimental/enable_mixed_precision_graph_rewrite +- title: tf.version + section: + - title: Overview + path: /tf/version +- title: tf.xla + section: + - title: Overview + path: /tf/xla + - title: experimental + status: experimental + section: + - title: Overview + path: /tf/xla/experimental + - title: compile + path: /tf/xla/experimental/compile + - title: jit_scope + path: /tf/xla/experimental/jit_scope diff --git a/site/en/api_docs/python/tf/all_symbols.md b/site/en/api_docs/python/tf/all_symbols.md new file mode 100644 index 00000000000..7179dfabb25 --- /dev/null +++ b/site/en/api_docs/python/tf/all_symbols.md @@ -0,0 +1,6236 @@ +# All symbols in TensorFlow 2 + + + +## Primary symbols +* tf +* tf.AggregationMethod +* tf.Assert +* tf.CriticalSection +* tf.DType +* tf.DeviceSpec +* tf.GradientTape +* tf.Graph +* tf.IndexedSlices +* tf.IndexedSlicesSpec +* tf.Module +* tf.Operation +* tf.OptionalSpec +* tf.RaggedTensor +* tf.RaggedTensorSpec +* tf.RegisterGradient +* tf.SparseTensor +* tf.SparseTensorSpec +* tf.Tensor +* tf.TensorArray +* tf.TensorArraySpec +* tf.TensorShape +* tf.TensorSpec +* tf.TypeSpec +* tf.UnconnectedGradients +* tf.Variable +* tf.Variable.SaveSliceInfo +* tf.VariableAggregation +* tf.VariableSynchronization +* tf.abs +* tf.acos +* tf.acosh +* tf.add +* tf.add_n +* tf.argmax +* tf.argmin +* tf.argsort +* tf.as_dtype +* tf.as_string +* tf.asin +* tf.asinh +* tf.assert_equal +* tf.assert_greater +* tf.assert_less +* tf.assert_rank +* tf.atan +* tf.atan2 +* tf.atanh +* tf.audio +* tf.audio.decode_wav +* tf.audio.encode_wav +* tf.autodiff +* tf.autodiff.ForwardAccumulator +* tf.autodiff.GradientTape +* tf.autograph +* tf.autograph.experimental +* tf.autograph.experimental.Feature +* tf.autograph.experimental.do_not_convert +* tf.autograph.experimental.set_loop_options +* tf.autograph.set_verbosity +* tf.autograph.to_code +* tf.autograph.to_graph +* tf.autograph.trace +* tf.batch_to_space +* tf.bitcast +* tf.bitwise +* tf.bitwise.bitwise_and +* tf.bitwise.bitwise_or +* tf.bitwise.bitwise_xor +* tf.bitwise.invert +* tf.bitwise.left_shift +* 
tf.bitwise.right_shift +* tf.boolean_mask +* tf.broadcast_dynamic_shape +* tf.broadcast_static_shape +* tf.broadcast_to +* tf.case +* tf.cast +* tf.clip_by_global_norm +* tf.clip_by_norm +* tf.clip_by_value +* tf.compat +* tf.compat.as_bytes +* tf.compat.as_str +* tf.compat.as_str_any +* tf.compat.as_text +* tf.compat.dimension_at_index +* tf.compat.dimension_value +* tf.compat.forward_compatibility_horizon +* tf.compat.forward_compatible +* tf.compat.path_to_str +* tf.complex +* tf.concat +* tf.cond +* tf.config +* tf.config.LogicalDevice +* tf.config.LogicalDeviceConfiguration +* tf.config.PhysicalDevice +* tf.config.experimental +* tf.config.experimental.ClusterDeviceFilters +* tf.config.experimental.VirtualDeviceConfiguration +* tf.config.experimental.disable_mlir_bridge +* tf.config.experimental.enable_mlir_bridge +* tf.config.experimental.get_device_policy +* tf.config.experimental.get_memory_growth +* tf.config.experimental.get_synchronous_execution +* tf.config.experimental.get_virtual_device_configuration +* tf.config.experimental.get_visible_devices +* tf.config.experimental.list_logical_devices +* tf.config.experimental.list_physical_devices +* tf.config.experimental.set_device_policy +* tf.config.experimental.set_memory_growth +* tf.config.experimental.set_synchronous_execution +* tf.config.experimental.set_virtual_device_configuration +* tf.config.experimental.set_visible_devices +* tf.config.experimental_connect_to_cluster +* tf.config.experimental_connect_to_host +* tf.config.experimental_functions_run_eagerly +* tf.config.experimental_run_functions_eagerly +* tf.config.get_logical_device_configuration +* tf.config.get_soft_device_placement +* tf.config.get_visible_devices +* tf.config.list_logical_devices +* tf.config.list_physical_devices +* tf.config.optimizer +* tf.config.optimizer.get_experimental_options +* tf.config.optimizer.get_jit +* tf.config.optimizer.set_experimental_options +* tf.config.optimizer.set_jit +* tf.config.set_logical_device_configuration +* tf.config.set_soft_device_placement +* tf.config.set_visible_devices +* tf.config.threading +* tf.config.threading.get_inter_op_parallelism_threads +* tf.config.threading.get_intra_op_parallelism_threads +* tf.config.threading.set_inter_op_parallelism_threads +* tf.config.threading.set_intra_op_parallelism_threads +* tf.constant +* tf.constant_initializer +* tf.control_dependencies +* tf.convert_to_tensor +* tf.cos +* tf.cosh +* tf.cumsum +* tf.custom_gradient +* tf.data +* tf.data.Dataset +* tf.data.DatasetSpec +* tf.data.FixedLengthRecordDataset +* tf.data.Options +* tf.data.TFRecordDataset +* tf.data.TextLineDataset +* tf.data.experimental +* tf.data.experimental.AutoShardPolicy +* tf.data.experimental.CheckpointInputPipelineHook +* tf.data.experimental.Counter +* tf.data.experimental.CsvDataset +* tf.data.experimental.DistributeOptions +* tf.data.experimental.MapVectorizationOptions +* tf.data.experimental.OptimizationOptions +* tf.data.experimental.Optional +* tf.data.experimental.RandomDataset +* tf.data.experimental.Reducer +* tf.data.experimental.SqlDataset +* tf.data.experimental.StatsAggregator +* tf.data.experimental.StatsOptions +* tf.data.experimental.TFRecordWriter +* tf.data.experimental.ThreadingOptions +* tf.data.experimental.assert_cardinality +* tf.data.experimental.bucket_by_sequence_length +* tf.data.experimental.bytes_produced_stats +* tf.data.experimental.cardinality +* tf.data.experimental.choose_from_datasets +* tf.data.experimental.copy_to_device +* 
tf.data.experimental.dense_to_ragged_batch +* tf.data.experimental.dense_to_sparse_batch +* tf.data.experimental.enumerate_dataset +* tf.data.experimental.from_variant +* tf.data.experimental.get_next_as_optional +* tf.data.experimental.get_single_element +* tf.data.experimental.get_structure +* tf.data.experimental.group_by_reducer +* tf.data.experimental.group_by_window +* tf.data.experimental.ignore_errors +* tf.data.experimental.latency_stats +* tf.data.experimental.make_batched_features_dataset +* tf.data.experimental.make_csv_dataset +* tf.data.experimental.make_saveable_from_iterator +* tf.data.experimental.map_and_batch +* tf.data.experimental.parallel_interleave +* tf.data.experimental.parse_example_dataset +* tf.data.experimental.prefetch_to_device +* tf.data.experimental.rejection_resample +* tf.data.experimental.sample_from_datasets +* tf.data.experimental.scan +* tf.data.experimental.shuffle_and_repeat +* tf.data.experimental.take_while +* tf.data.experimental.to_variant +* tf.data.experimental.unbatch +* tf.data.experimental.unique +* tf.debugging +* tf.debugging.Assert +* tf.debugging.assert_all_finite +* tf.debugging.assert_equal +* tf.debugging.assert_greater +* tf.debugging.assert_greater_equal +* tf.debugging.assert_integer +* tf.debugging.assert_less +* tf.debugging.assert_less_equal +* tf.debugging.assert_near +* tf.debugging.assert_negative +* tf.debugging.assert_non_negative +* tf.debugging.assert_non_positive +* tf.debugging.assert_none_equal +* tf.debugging.assert_positive +* tf.debugging.assert_proper_iterable +* tf.debugging.assert_rank +* tf.debugging.assert_rank_at_least +* tf.debugging.assert_rank_in +* tf.debugging.assert_same_float_dtype +* tf.debugging.assert_scalar +* tf.debugging.assert_shapes +* tf.debugging.assert_type +* tf.debugging.check_numerics +* tf.debugging.disable_check_numerics +* tf.debugging.enable_check_numerics +* tf.debugging.experimental +* tf.debugging.experimental.disable_dump_debug_info +* tf.debugging.experimental.enable_dump_debug_info +* tf.debugging.get_log_device_placement +* tf.debugging.is_numeric_tensor +* tf.debugging.set_log_device_placement +* tf.device +* tf.distribute +* tf.distribute.CrossDeviceOps +* tf.distribute.DistributedValues +* tf.distribute.HierarchicalCopyAllReduce +* tf.distribute.InputContext +* tf.distribute.InputReplicationMode +* tf.distribute.MirroredStrategy +* tf.distribute.NcclAllReduce +* tf.distribute.OneDeviceStrategy +* tf.distribute.ReduceOp +* tf.distribute.ReductionToOneDevice +* tf.distribute.ReplicaContext +* tf.distribute.RunOptions +* tf.distribute.Server +* tf.distribute.Strategy +* tf.distribute.StrategyExtended +* tf.distribute.cluster_resolver +* tf.distribute.cluster_resolver.ClusterResolver +* tf.distribute.cluster_resolver.GCEClusterResolver +* tf.distribute.cluster_resolver.KubernetesClusterResolver +* tf.distribute.cluster_resolver.SimpleClusterResolver +* tf.distribute.cluster_resolver.SlurmClusterResolver +* tf.distribute.cluster_resolver.TFConfigClusterResolver +* tf.distribute.cluster_resolver.TPUClusterResolver +* tf.distribute.cluster_resolver.UnionResolver +* tf.distribute.experimental +* tf.distribute.experimental.CentralStorageStrategy +* tf.distribute.experimental.CollectiveCommunication +* tf.distribute.experimental.CollectiveHints +* tf.distribute.experimental.MultiWorkerMirroredStrategy +* tf.distribute.experimental.ParameterServerStrategy +* tf.distribute.experimental.TPUStrategy +* tf.distribute.experimental.ValueContext +* tf.distribute.experimental_set_strategy +* 
tf.distribute.get_replica_context +* tf.distribute.get_strategy +* tf.distribute.has_strategy +* tf.distribute.in_cross_replica_context +* tf.divide +* tf.dtypes +* tf.dtypes.DType +* tf.dtypes.as_dtype +* tf.dtypes.cast +* tf.dtypes.complex +* tf.dtypes.saturate_cast +* tf.dynamic_partition +* tf.dynamic_stitch +* tf.edit_distance +* tf.eig +* tf.eigvals +* tf.einsum +* tf.ensure_shape +* tf.equal +* tf.errors +* tf.errors.AbortedError +* tf.errors.AlreadyExistsError +* tf.errors.CancelledError +* tf.errors.DataLossError +* tf.errors.DeadlineExceededError +* tf.errors.FailedPreconditionError +* tf.errors.InternalError +* tf.errors.InvalidArgumentError +* tf.errors.NotFoundError +* tf.errors.OpError +* tf.errors.OutOfRangeError +* tf.errors.PermissionDeniedError +* tf.errors.ResourceExhaustedError +* tf.errors.UnauthenticatedError +* tf.errors.UnavailableError +* tf.errors.UnimplementedError +* tf.errors.UnknownError +* tf.estimator +* tf.estimator.BaselineClassifier +* tf.estimator.BaselineEstimator +* tf.estimator.BaselineRegressor +* tf.estimator.BestExporter +* tf.estimator.BinaryClassHead +* tf.estimator.BoostedTreesClassifier +* tf.estimator.BoostedTreesEstimator +* tf.estimator.BoostedTreesRegressor +* tf.estimator.CheckpointSaverHook +* tf.estimator.CheckpointSaverListener +* tf.estimator.DNNClassifier +* tf.estimator.DNNEstimator +* tf.estimator.DNNLinearCombinedClassifier +* tf.estimator.DNNLinearCombinedEstimator +* tf.estimator.DNNLinearCombinedRegressor +* tf.estimator.DNNRegressor +* tf.estimator.Estimator +* tf.estimator.EstimatorSpec +* tf.estimator.EvalSpec +* tf.estimator.Exporter +* tf.estimator.FeedFnHook +* tf.estimator.FinalExporter +* tf.estimator.FinalOpsHook +* tf.estimator.GlobalStepWaiterHook +* tf.estimator.Head +* tf.estimator.LatestExporter +* tf.estimator.LinearClassifier +* tf.estimator.LinearEstimator +* tf.estimator.LinearRegressor +* tf.estimator.LoggingTensorHook +* tf.estimator.LogisticRegressionHead +* tf.estimator.ModeKeys +* tf.estimator.MultiClassHead +* tf.estimator.MultiHead +* tf.estimator.MultiLabelHead +* tf.estimator.NanLossDuringTrainingError +* tf.estimator.NanTensorHook +* tf.estimator.PoissonRegressionHead +* tf.estimator.ProfilerHook +* tf.estimator.RegressionHead +* tf.estimator.RunConfig +* tf.estimator.SecondOrStepTimer +* tf.estimator.SessionRunArgs +* tf.estimator.SessionRunContext +* tf.estimator.SessionRunHook +* tf.estimator.SessionRunValues +* tf.estimator.StepCounterHook +* tf.estimator.StopAtStepHook +* tf.estimator.SummarySaverHook +* tf.estimator.TrainSpec +* tf.estimator.VocabInfo +* tf.estimator.WarmStartSettings +* tf.estimator.add_metrics +* tf.estimator.classifier_parse_example_spec +* tf.estimator.experimental +* tf.estimator.experimental.InMemoryEvaluatorHook +* tf.estimator.experimental.LinearSDCA +* tf.estimator.experimental.RNNClassifier +* tf.estimator.experimental.RNNEstimator +* tf.estimator.experimental.build_raw_supervised_input_receiver_fn +* tf.estimator.experimental.call_logit_fn +* tf.estimator.experimental.make_early_stopping_hook +* tf.estimator.experimental.make_stop_at_checkpoint_step_hook +* tf.estimator.experimental.stop_if_higher_hook +* tf.estimator.experimental.stop_if_lower_hook +* tf.estimator.experimental.stop_if_no_decrease_hook +* tf.estimator.experimental.stop_if_no_increase_hook +* tf.estimator.export +* tf.estimator.export.ClassificationOutput +* tf.estimator.export.ExportOutput +* tf.estimator.export.PredictOutput +* tf.estimator.export.RegressionOutput +* 
tf.estimator.export.ServingInputReceiver +* tf.estimator.export.TensorServingInputReceiver +* tf.estimator.export.build_parsing_serving_input_receiver_fn +* tf.estimator.export.build_raw_serving_input_receiver_fn +* tf.estimator.regressor_parse_example_spec +* tf.estimator.train_and_evaluate +* tf.executing_eagerly +* tf.exp +* tf.expand_dims +* tf.experimental +* tf.experimental.async_clear_error +* tf.experimental.async_scope +* tf.experimental.dlpack +* tf.experimental.dlpack.from_dlpack +* tf.experimental.dlpack.to_dlpack +* tf.experimental.function_executor_type +* tf.experimental.tensorrt +* tf.experimental.tensorrt.ConversionParams +* tf.experimental.tensorrt.Converter +* tf.extract_volume_patches +* tf.eye +* tf.feature_column +* tf.feature_column.bucketized_column +* tf.feature_column.categorical_column_with_hash_bucket +* tf.feature_column.categorical_column_with_identity +* tf.feature_column.categorical_column_with_vocabulary_file +* tf.feature_column.categorical_column_with_vocabulary_list +* tf.feature_column.crossed_column +* tf.feature_column.embedding_column +* tf.feature_column.indicator_column +* tf.feature_column.make_parse_example_spec +* tf.feature_column.numeric_column +* tf.feature_column.sequence_categorical_column_with_hash_bucket +* tf.feature_column.sequence_categorical_column_with_identity +* tf.feature_column.sequence_categorical_column_with_vocabulary_file +* tf.feature_column.sequence_categorical_column_with_vocabulary_list +* tf.feature_column.sequence_numeric_column +* tf.feature_column.shared_embeddings +* tf.feature_column.weighted_categorical_column +* tf.fill +* tf.fingerprint +* tf.floor +* tf.foldl +* tf.foldr +* tf.function +* tf.gather +* tf.gather_nd +* tf.get_logger +* tf.get_static_value +* tf.grad_pass_through +* tf.gradients +* tf.graph_util +* tf.graph_util.import_graph_def +* tf.greater +* tf.greater_equal +* tf.group +* tf.guarantee_const +* tf.hessians +* tf.histogram_fixed_width +* tf.histogram_fixed_width_bins +* tf.identity +* tf.identity_n +* tf.image +* tf.image.ResizeMethod +* tf.image.adjust_brightness +* tf.image.adjust_contrast +* tf.image.adjust_gamma +* tf.image.adjust_hue +* tf.image.adjust_jpeg_quality +* tf.image.adjust_saturation +* tf.image.central_crop +* tf.image.combined_non_max_suppression +* tf.image.convert_image_dtype +* tf.image.crop_and_resize +* tf.image.crop_to_bounding_box +* tf.image.decode_and_crop_jpeg +* tf.image.decode_bmp +* tf.image.decode_gif +* tf.image.decode_image +* tf.image.decode_jpeg +* tf.image.decode_png +* tf.image.draw_bounding_boxes +* tf.image.encode_jpeg +* tf.image.encode_png +* tf.image.extract_glimpse +* tf.image.extract_jpeg_shape +* tf.image.extract_patches +* tf.image.flip_left_right +* tf.image.flip_up_down +* tf.image.generate_bounding_box_proposals +* tf.image.grayscale_to_rgb +* tf.image.hsv_to_rgb +* tf.image.image_gradients +* tf.image.is_jpeg +* tf.image.non_max_suppression +* tf.image.non_max_suppression_overlaps +* tf.image.non_max_suppression_padded +* tf.image.non_max_suppression_with_scores +* tf.image.pad_to_bounding_box +* tf.image.per_image_standardization +* tf.image.psnr +* tf.image.random_brightness +* tf.image.random_contrast +* tf.image.random_crop +* tf.image.random_flip_left_right +* tf.image.random_flip_up_down +* tf.image.random_hue +* tf.image.random_jpeg_quality +* tf.image.random_saturation +* tf.image.resize +* tf.image.resize_with_crop_or_pad +* tf.image.resize_with_pad +* tf.image.rgb_to_grayscale +* tf.image.rgb_to_hsv +* tf.image.rgb_to_yiq +* 
tf.image.rgb_to_yuv +* tf.image.rot90 +* tf.image.sample_distorted_bounding_box +* tf.image.sobel_edges +* tf.image.ssim +* tf.image.ssim_multiscale +* tf.image.total_variation +* tf.image.transpose +* tf.image.yiq_to_rgb +* tf.image.yuv_to_rgb +* tf.import_graph_def +* tf.init_scope +* tf.initializers +* tf.initializers.Constant +* tf.initializers.GlorotNormal +* tf.initializers.GlorotUniform +* tf.initializers.Identity +* tf.initializers.Initializer +* tf.initializers.Ones +* tf.initializers.Orthogonal +* tf.initializers.RandomNormal +* tf.initializers.RandomUniform +* tf.initializers.TruncatedNormal +* tf.initializers.VarianceScaling +* tf.initializers.Zeros +* tf.initializers.constant +* tf.initializers.deserialize +* tf.initializers.get +* tf.initializers.glorot_normal +* tf.initializers.glorot_uniform +* tf.initializers.he_normal +* tf.initializers.he_uniform +* tf.initializers.identity +* tf.initializers.lecun_normal +* tf.initializers.lecun_uniform +* tf.initializers.ones +* tf.initializers.orthogonal +* tf.initializers.serialize +* tf.initializers.zeros +* tf.io +* tf.io.FixedLenFeature +* tf.io.FixedLenSequenceFeature +* tf.io.RaggedFeature +* tf.io.RaggedFeature.RowLengths +* tf.io.RaggedFeature.RowLimits +* tf.io.RaggedFeature.RowSplits +* tf.io.RaggedFeature.RowStarts +* tf.io.RaggedFeature.UniformRowLength +* tf.io.RaggedFeature.ValueRowIds +* tf.io.SparseFeature +* tf.io.TFRecordOptions +* tf.io.TFRecordWriter +* tf.io.VarLenFeature +* tf.io.decode_and_crop_jpeg +* tf.io.decode_base64 +* tf.io.decode_bmp +* tf.io.decode_compressed +* tf.io.decode_csv +* tf.io.decode_gif +* tf.io.decode_image +* tf.io.decode_jpeg +* tf.io.decode_json_example +* tf.io.decode_png +* tf.io.decode_proto +* tf.io.decode_raw +* tf.io.deserialize_many_sparse +* tf.io.encode_base64 +* tf.io.encode_jpeg +* tf.io.encode_proto +* tf.io.extract_jpeg_shape +* tf.io.gfile +* tf.io.gfile.GFile +* tf.io.gfile.copy +* tf.io.gfile.exists +* tf.io.gfile.glob +* tf.io.gfile.isdir +* tf.io.gfile.listdir +* tf.io.gfile.makedirs +* tf.io.gfile.mkdir +* tf.io.gfile.remove +* tf.io.gfile.rename +* tf.io.gfile.rmtree +* tf.io.gfile.stat +* tf.io.gfile.walk +* tf.io.is_jpeg +* tf.io.match_filenames_once +* tf.io.matching_files +* tf.io.parse_example +* tf.io.parse_sequence_example +* tf.io.parse_single_example +* tf.io.parse_single_sequence_example +* tf.io.parse_tensor +* tf.io.read_file +* tf.io.serialize_many_sparse +* tf.io.serialize_sparse +* tf.io.serialize_tensor +* tf.io.write_file +* tf.io.write_graph +* tf.is_tensor +* tf.keras +* tf.keras.Input +* tf.keras.Model +* tf.keras.Sequential +* tf.keras.activations +* tf.keras.activations.deserialize +* tf.keras.activations.elu +* tf.keras.activations.exponential +* tf.keras.activations.get +* tf.keras.activations.hard_sigmoid +* tf.keras.activations.linear +* tf.keras.activations.relu +* tf.keras.activations.selu +* tf.keras.activations.serialize +* tf.keras.activations.sigmoid +* tf.keras.activations.softmax +* tf.keras.activations.softplus +* tf.keras.activations.softsign +* tf.keras.activations.swish +* tf.keras.activations.tanh +* tf.keras.applications +* tf.keras.applications.DenseNet121 +* tf.keras.applications.DenseNet169 +* tf.keras.applications.DenseNet201 +* tf.keras.applications.InceptionResNetV2 +* tf.keras.applications.InceptionV3 +* tf.keras.applications.MobileNet +* tf.keras.applications.MobileNetV2 +* tf.keras.applications.NASNetLarge +* tf.keras.applications.NASNetMobile +* tf.keras.applications.ResNet101 +* tf.keras.applications.ResNet101V2 +* 
tf.keras.applications.ResNet152 +* tf.keras.applications.ResNet152V2 +* tf.keras.applications.ResNet50 +* tf.keras.applications.ResNet50V2 +* tf.keras.applications.VGG16 +* tf.keras.applications.VGG19 +* tf.keras.applications.Xception +* tf.keras.applications.densenet +* tf.keras.applications.densenet.DenseNet121 +* tf.keras.applications.densenet.DenseNet169 +* tf.keras.applications.densenet.DenseNet201 +* tf.keras.applications.densenet.decode_predictions +* tf.keras.applications.densenet.preprocess_input +* tf.keras.applications.imagenet_utils +* tf.keras.applications.imagenet_utils.decode_predictions +* tf.keras.applications.imagenet_utils.preprocess_input +* tf.keras.applications.inception_resnet_v2 +* tf.keras.applications.inception_resnet_v2.InceptionResNetV2 +* tf.keras.applications.inception_resnet_v2.decode_predictions +* tf.keras.applications.inception_resnet_v2.preprocess_input +* tf.keras.applications.inception_v3 +* tf.keras.applications.inception_v3.InceptionV3 +* tf.keras.applications.inception_v3.decode_predictions +* tf.keras.applications.inception_v3.preprocess_input +* tf.keras.applications.mobilenet +* tf.keras.applications.mobilenet.MobileNet +* tf.keras.applications.mobilenet.decode_predictions +* tf.keras.applications.mobilenet.preprocess_input +* tf.keras.applications.mobilenet_v2 +* tf.keras.applications.mobilenet_v2.MobileNetV2 +* tf.keras.applications.mobilenet_v2.decode_predictions +* tf.keras.applications.mobilenet_v2.preprocess_input +* tf.keras.applications.nasnet +* tf.keras.applications.nasnet.NASNetLarge +* tf.keras.applications.nasnet.NASNetMobile +* tf.keras.applications.nasnet.decode_predictions +* tf.keras.applications.nasnet.preprocess_input +* tf.keras.applications.resnet +* tf.keras.applications.resnet.ResNet101 +* tf.keras.applications.resnet.ResNet152 +* tf.keras.applications.resnet.ResNet50 +* tf.keras.applications.resnet.decode_predictions +* tf.keras.applications.resnet.preprocess_input +* tf.keras.applications.resnet50 +* tf.keras.applications.resnet50.ResNet50 +* tf.keras.applications.resnet50.decode_predictions +* tf.keras.applications.resnet50.preprocess_input +* tf.keras.applications.resnet_v2 +* tf.keras.applications.resnet_v2.ResNet101V2 +* tf.keras.applications.resnet_v2.ResNet152V2 +* tf.keras.applications.resnet_v2.ResNet50V2 +* tf.keras.applications.resnet_v2.decode_predictions +* tf.keras.applications.resnet_v2.preprocess_input +* tf.keras.applications.vgg16 +* tf.keras.applications.vgg16.VGG16 +* tf.keras.applications.vgg16.decode_predictions +* tf.keras.applications.vgg16.preprocess_input +* tf.keras.applications.vgg19 +* tf.keras.applications.vgg19.VGG19 +* tf.keras.applications.vgg19.decode_predictions +* tf.keras.applications.vgg19.preprocess_input +* tf.keras.applications.xception +* tf.keras.applications.xception.Xception +* tf.keras.applications.xception.decode_predictions +* tf.keras.applications.xception.preprocess_input +* tf.keras.backend +* tf.keras.backend.abs +* tf.keras.backend.all +* tf.keras.backend.any +* tf.keras.backend.arange +* tf.keras.backend.argmax +* tf.keras.backend.argmin +* tf.keras.backend.backend +* tf.keras.backend.batch_dot +* tf.keras.backend.batch_flatten +* tf.keras.backend.batch_get_value +* tf.keras.backend.batch_normalization +* tf.keras.backend.batch_set_value +* tf.keras.backend.bias_add +* tf.keras.backend.binary_crossentropy +* tf.keras.backend.cast +* tf.keras.backend.cast_to_floatx +* tf.keras.backend.categorical_crossentropy +* tf.keras.backend.clear_session +* tf.keras.backend.clip +* 
tf.keras.backend.concatenate +* tf.keras.backend.constant +* tf.keras.backend.conv1d +* tf.keras.backend.conv2d +* tf.keras.backend.conv2d_transpose +* tf.keras.backend.conv3d +* tf.keras.backend.cos +* tf.keras.backend.count_params +* tf.keras.backend.ctc_batch_cost +* tf.keras.backend.ctc_decode +* tf.keras.backend.ctc_label_dense_to_sparse +* tf.keras.backend.cumprod +* tf.keras.backend.cumsum +* tf.keras.backend.depthwise_conv2d +* tf.keras.backend.dot +* tf.keras.backend.dropout +* tf.keras.backend.dtype +* tf.keras.backend.elu +* tf.keras.backend.epsilon +* tf.keras.backend.equal +* tf.keras.backend.eval +* tf.keras.backend.exp +* tf.keras.backend.expand_dims +* tf.keras.backend.eye +* tf.keras.backend.flatten +* tf.keras.backend.floatx +* tf.keras.backend.foldl +* tf.keras.backend.foldr +* tf.keras.backend.function +* tf.keras.backend.gather +* tf.keras.backend.get_uid +* tf.keras.backend.get_value +* tf.keras.backend.gradients +* tf.keras.backend.greater +* tf.keras.backend.greater_equal +* tf.keras.backend.hard_sigmoid +* tf.keras.backend.image_data_format +* tf.keras.backend.in_test_phase +* tf.keras.backend.in_top_k +* tf.keras.backend.in_train_phase +* tf.keras.backend.int_shape +* tf.keras.backend.is_keras_tensor +* tf.keras.backend.is_sparse +* tf.keras.backend.l2_normalize +* tf.keras.backend.learning_phase +* tf.keras.backend.learning_phase_scope +* tf.keras.backend.less +* tf.keras.backend.less_equal +* tf.keras.backend.local_conv1d +* tf.keras.backend.local_conv2d +* tf.keras.backend.log +* tf.keras.backend.manual_variable_initialization +* tf.keras.backend.map_fn +* tf.keras.backend.max +* tf.keras.backend.maximum +* tf.keras.backend.mean +* tf.keras.backend.min +* tf.keras.backend.minimum +* tf.keras.backend.moving_average_update +* tf.keras.backend.name_scope +* tf.keras.backend.ndim +* tf.keras.backend.normalize_batch_in_training +* tf.keras.backend.not_equal +* tf.keras.backend.one_hot +* tf.keras.backend.ones +* tf.keras.backend.ones_like +* tf.keras.backend.permute_dimensions +* tf.keras.backend.placeholder +* tf.keras.backend.pool2d +* tf.keras.backend.pool3d +* tf.keras.backend.pow +* tf.keras.backend.print_tensor +* tf.keras.backend.prod +* tf.keras.backend.random_binomial +* tf.keras.backend.random_normal +* tf.keras.backend.random_normal_variable +* tf.keras.backend.random_uniform +* tf.keras.backend.random_uniform_variable +* tf.keras.backend.relu +* tf.keras.backend.repeat +* tf.keras.backend.repeat_elements +* tf.keras.backend.reset_uids +* tf.keras.backend.reshape +* tf.keras.backend.resize_images +* tf.keras.backend.resize_volumes +* tf.keras.backend.reverse +* tf.keras.backend.rnn +* tf.keras.backend.round +* tf.keras.backend.separable_conv2d +* tf.keras.backend.set_epsilon +* tf.keras.backend.set_floatx +* tf.keras.backend.set_image_data_format +* tf.keras.backend.set_learning_phase +* tf.keras.backend.set_value +* tf.keras.backend.shape +* tf.keras.backend.sigmoid +* tf.keras.backend.sign +* tf.keras.backend.sin +* tf.keras.backend.softmax +* tf.keras.backend.softplus +* tf.keras.backend.softsign +* tf.keras.backend.sparse_categorical_crossentropy +* tf.keras.backend.spatial_2d_padding +* tf.keras.backend.spatial_3d_padding +* tf.keras.backend.sqrt +* tf.keras.backend.square +* tf.keras.backend.squeeze +* tf.keras.backend.stack +* tf.keras.backend.std +* tf.keras.backend.stop_gradient +* tf.keras.backend.sum +* tf.keras.backend.switch +* tf.keras.backend.tanh +* tf.keras.backend.temporal_padding +* tf.keras.backend.tile +* tf.keras.backend.to_dense +* 
tf.keras.backend.transpose +* tf.keras.backend.truncated_normal +* tf.keras.backend.update +* tf.keras.backend.update_add +* tf.keras.backend.update_sub +* tf.keras.backend.var +* tf.keras.backend.variable +* tf.keras.backend.zeros +* tf.keras.backend.zeros_like +* tf.keras.callbacks +* tf.keras.callbacks.BaseLogger +* tf.keras.callbacks.CSVLogger +* tf.keras.callbacks.Callback +* tf.keras.callbacks.EarlyStopping +* tf.keras.callbacks.History +* tf.keras.callbacks.LambdaCallback +* tf.keras.callbacks.LearningRateScheduler +* tf.keras.callbacks.ModelCheckpoint +* tf.keras.callbacks.ProgbarLogger +* tf.keras.callbacks.ReduceLROnPlateau +* tf.keras.callbacks.RemoteMonitor +* tf.keras.callbacks.TensorBoard +* tf.keras.callbacks.TerminateOnNaN +* tf.keras.constraints +* tf.keras.constraints.Constraint +* tf.keras.constraints.MaxNorm +* tf.keras.constraints.MinMaxNorm +* tf.keras.constraints.NonNeg +* tf.keras.constraints.RadialConstraint +* tf.keras.constraints.UnitNorm +* tf.keras.constraints.deserialize +* tf.keras.constraints.get +* tf.keras.constraints.max_norm +* tf.keras.constraints.min_max_norm +* tf.keras.constraints.non_neg +* tf.keras.constraints.radial_constraint +* tf.keras.constraints.serialize +* tf.keras.constraints.unit_norm +* tf.keras.datasets +* tf.keras.datasets.boston_housing +* tf.keras.datasets.boston_housing.load_data +* tf.keras.datasets.cifar10 +* tf.keras.datasets.cifar10.load_data +* tf.keras.datasets.cifar100 +* tf.keras.datasets.cifar100.load_data +* tf.keras.datasets.fashion_mnist +* tf.keras.datasets.fashion_mnist.load_data +* tf.keras.datasets.imdb +* tf.keras.datasets.imdb.get_word_index +* tf.keras.datasets.imdb.load_data +* tf.keras.datasets.mnist +* tf.keras.datasets.mnist.load_data +* tf.keras.datasets.reuters +* tf.keras.datasets.reuters.get_word_index +* tf.keras.datasets.reuters.load_data +* tf.keras.estimator +* tf.keras.estimator.model_to_estimator +* tf.keras.experimental +* tf.keras.experimental.CosineDecay +* tf.keras.experimental.CosineDecayRestarts +* tf.keras.experimental.LinearCosineDecay +* tf.keras.experimental.LinearModel +* tf.keras.experimental.NoisyLinearCosineDecay +* tf.keras.experimental.PeepholeLSTMCell +* tf.keras.experimental.SequenceFeatures +* tf.keras.experimental.WideDeepModel +* tf.keras.experimental.terminate_keras_multiprocessing_pools +* tf.keras.initializers +* tf.keras.initializers.Constant +* tf.keras.initializers.GlorotNormal +* tf.keras.initializers.GlorotUniform +* tf.keras.initializers.Identity +* tf.keras.initializers.Initializer +* tf.keras.initializers.Ones +* tf.keras.initializers.Orthogonal +* tf.keras.initializers.RandomNormal +* tf.keras.initializers.RandomUniform +* tf.keras.initializers.TruncatedNormal +* tf.keras.initializers.VarianceScaling +* tf.keras.initializers.Zeros +* tf.keras.initializers.constant +* tf.keras.initializers.deserialize +* tf.keras.initializers.get +* tf.keras.initializers.glorot_normal +* tf.keras.initializers.glorot_uniform +* tf.keras.initializers.he_normal +* tf.keras.initializers.he_uniform +* tf.keras.initializers.identity +* tf.keras.initializers.lecun_normal +* tf.keras.initializers.lecun_uniform +* tf.keras.initializers.ones +* tf.keras.initializers.orthogonal +* tf.keras.initializers.serialize +* tf.keras.initializers.zeros +* tf.keras.layers +* tf.keras.layers.AbstractRNNCell +* tf.keras.layers.Activation +* tf.keras.layers.ActivityRegularization +* tf.keras.layers.Add +* tf.keras.layers.AdditiveAttention +* tf.keras.layers.AlphaDropout +* tf.keras.layers.Attention +* 
tf.keras.layers.Average +* tf.keras.layers.AveragePooling1D +* tf.keras.layers.AveragePooling2D +* tf.keras.layers.AveragePooling3D +* tf.keras.layers.AvgPool1D +* tf.keras.layers.AvgPool2D +* tf.keras.layers.AvgPool3D +* tf.keras.layers.BatchNormalization +* tf.keras.layers.Bidirectional +* tf.keras.layers.Concatenate +* tf.keras.layers.Conv1D +* tf.keras.layers.Conv2D +* tf.keras.layers.Conv2DTranspose +* tf.keras.layers.Conv3D +* tf.keras.layers.Conv3DTranspose +* tf.keras.layers.ConvLSTM2D +* tf.keras.layers.Convolution1D +* tf.keras.layers.Convolution2D +* tf.keras.layers.Convolution2DTranspose +* tf.keras.layers.Convolution3D +* tf.keras.layers.Convolution3DTranspose +* tf.keras.layers.Cropping1D +* tf.keras.layers.Cropping2D +* tf.keras.layers.Cropping3D +* tf.keras.layers.Dense +* tf.keras.layers.DenseFeatures +* tf.keras.layers.DepthwiseConv2D +* tf.keras.layers.Dot +* tf.keras.layers.Dropout +* tf.keras.layers.ELU +* tf.keras.layers.Embedding +* tf.keras.layers.Flatten +* tf.keras.layers.GRU +* tf.keras.layers.GRUCell +* tf.keras.layers.GaussianDropout +* tf.keras.layers.GaussianNoise +* tf.keras.layers.GlobalAveragePooling1D +* tf.keras.layers.GlobalAveragePooling2D +* tf.keras.layers.GlobalAveragePooling3D +* tf.keras.layers.GlobalAvgPool1D +* tf.keras.layers.GlobalAvgPool2D +* tf.keras.layers.GlobalAvgPool3D +* tf.keras.layers.GlobalMaxPool1D +* tf.keras.layers.GlobalMaxPool2D +* tf.keras.layers.GlobalMaxPool3D +* tf.keras.layers.GlobalMaxPooling1D +* tf.keras.layers.GlobalMaxPooling2D +* tf.keras.layers.GlobalMaxPooling3D +* tf.keras.layers.Input +* tf.keras.layers.InputLayer +* tf.keras.layers.InputSpec +* tf.keras.layers.LSTM +* tf.keras.layers.LSTMCell +* tf.keras.layers.Lambda +* tf.keras.layers.Layer +* tf.keras.layers.LayerNormalization +* tf.keras.layers.LeakyReLU +* tf.keras.layers.LocallyConnected1D +* tf.keras.layers.LocallyConnected2D +* tf.keras.layers.Masking +* tf.keras.layers.MaxPool1D +* tf.keras.layers.MaxPool2D +* tf.keras.layers.MaxPool3D +* tf.keras.layers.MaxPooling1D +* tf.keras.layers.MaxPooling2D +* tf.keras.layers.MaxPooling3D +* tf.keras.layers.Maximum +* tf.keras.layers.Minimum +* tf.keras.layers.Multiply +* tf.keras.layers.PReLU +* tf.keras.layers.Permute +* tf.keras.layers.RNN +* tf.keras.layers.ReLU +* tf.keras.layers.RepeatVector +* tf.keras.layers.Reshape +* tf.keras.layers.SeparableConv1D +* tf.keras.layers.SeparableConv2D +* tf.keras.layers.SeparableConvolution1D +* tf.keras.layers.SeparableConvolution2D +* tf.keras.layers.SimpleRNN +* tf.keras.layers.SimpleRNNCell +* tf.keras.layers.Softmax +* tf.keras.layers.SpatialDropout1D +* tf.keras.layers.SpatialDropout2D +* tf.keras.layers.SpatialDropout3D +* tf.keras.layers.StackedRNNCells +* tf.keras.layers.Subtract +* tf.keras.layers.ThresholdedReLU +* tf.keras.layers.TimeDistributed +* tf.keras.layers.UpSampling1D +* tf.keras.layers.UpSampling2D +* tf.keras.layers.UpSampling3D +* tf.keras.layers.Wrapper +* tf.keras.layers.ZeroPadding1D +* tf.keras.layers.ZeroPadding2D +* tf.keras.layers.ZeroPadding3D +* tf.keras.layers.add +* tf.keras.layers.average +* tf.keras.layers.concatenate +* tf.keras.layers.deserialize +* tf.keras.layers.dot +* tf.keras.layers.experimental +* tf.keras.layers.experimental.SyncBatchNormalization +* tf.keras.layers.experimental.preprocessing +* tf.keras.layers.experimental.preprocessing.CenterCrop +* tf.keras.layers.experimental.preprocessing.Normalization +* tf.keras.layers.experimental.preprocessing.PreprocessingLayer +* 
tf.keras.layers.experimental.preprocessing.RandomContrast +* tf.keras.layers.experimental.preprocessing.RandomCrop +* tf.keras.layers.experimental.preprocessing.RandomFlip +* tf.keras.layers.experimental.preprocessing.RandomHeight +* tf.keras.layers.experimental.preprocessing.RandomRotation +* tf.keras.layers.experimental.preprocessing.RandomTranslation +* tf.keras.layers.experimental.preprocessing.RandomWidth +* tf.keras.layers.experimental.preprocessing.Rescaling +* tf.keras.layers.experimental.preprocessing.Resizing +* tf.keras.layers.experimental.preprocessing.TextVectorization +* tf.keras.layers.maximum +* tf.keras.layers.minimum +* tf.keras.layers.multiply +* tf.keras.layers.serialize +* tf.keras.layers.subtract +* tf.keras.losses +* tf.keras.losses.BinaryCrossentropy +* tf.keras.losses.CategoricalCrossentropy +* tf.keras.losses.CategoricalHinge +* tf.keras.losses.CosineSimilarity +* tf.keras.losses.Hinge +* tf.keras.losses.Huber +* tf.keras.losses.KLD +* tf.keras.losses.KLDivergence +* tf.keras.losses.LogCosh +* tf.keras.losses.Loss +* tf.keras.losses.MAE +* tf.keras.losses.MAPE +* tf.keras.losses.MSE +* tf.keras.losses.MSLE +* tf.keras.losses.MeanAbsoluteError +* tf.keras.losses.MeanAbsolutePercentageError +* tf.keras.losses.MeanSquaredError +* tf.keras.losses.MeanSquaredLogarithmicError +* tf.keras.losses.Poisson +* tf.keras.losses.Reduction +* tf.keras.losses.SparseCategoricalCrossentropy +* tf.keras.losses.SquaredHinge +* tf.keras.losses.binary_crossentropy +* tf.keras.losses.categorical_crossentropy +* tf.keras.losses.categorical_hinge +* tf.keras.losses.cosine_similarity +* tf.keras.losses.deserialize +* tf.keras.losses.get +* tf.keras.losses.hinge +* tf.keras.losses.kld +* tf.keras.losses.kullback_leibler_divergence +* tf.keras.losses.logcosh +* tf.keras.losses.mae +* tf.keras.losses.mape +* tf.keras.losses.mean_absolute_error +* tf.keras.losses.mean_absolute_percentage_error +* tf.keras.losses.mean_squared_error +* tf.keras.losses.mean_squared_logarithmic_error +* tf.keras.losses.mse +* tf.keras.losses.msle +* tf.keras.losses.poisson +* tf.keras.losses.serialize +* tf.keras.losses.sparse_categorical_crossentropy +* tf.keras.losses.squared_hinge +* tf.keras.metrics +* tf.keras.metrics.AUC +* tf.keras.metrics.Accuracy +* tf.keras.metrics.BinaryAccuracy +* tf.keras.metrics.BinaryCrossentropy +* tf.keras.metrics.CategoricalAccuracy +* tf.keras.metrics.CategoricalCrossentropy +* tf.keras.metrics.CategoricalHinge +* tf.keras.metrics.CosineSimilarity +* tf.keras.metrics.FalseNegatives +* tf.keras.metrics.FalsePositives +* tf.keras.metrics.Hinge +* tf.keras.metrics.KLD +* tf.keras.metrics.KLDivergence +* tf.keras.metrics.LogCoshError +* tf.keras.metrics.MAE +* tf.keras.metrics.MAPE +* tf.keras.metrics.MSE +* tf.keras.metrics.MSLE +* tf.keras.metrics.Mean +* tf.keras.metrics.MeanAbsoluteError +* tf.keras.metrics.MeanAbsolutePercentageError +* tf.keras.metrics.MeanIoU +* tf.keras.metrics.MeanRelativeError +* tf.keras.metrics.MeanSquaredError +* tf.keras.metrics.MeanSquaredLogarithmicError +* tf.keras.metrics.MeanTensor +* tf.keras.metrics.Metric +* tf.keras.metrics.Poisson +* tf.keras.metrics.Precision +* tf.keras.metrics.PrecisionAtRecall +* tf.keras.metrics.Recall +* tf.keras.metrics.RecallAtPrecision +* tf.keras.metrics.RootMeanSquaredError +* tf.keras.metrics.SensitivityAtSpecificity +* tf.keras.metrics.SparseCategoricalAccuracy +* tf.keras.metrics.SparseCategoricalCrossentropy +* tf.keras.metrics.SparseTopKCategoricalAccuracy +* tf.keras.metrics.SpecificityAtSensitivity +* 
tf.keras.metrics.SquaredHinge +* tf.keras.metrics.Sum +* tf.keras.metrics.TopKCategoricalAccuracy +* tf.keras.metrics.TrueNegatives +* tf.keras.metrics.TruePositives +* tf.keras.metrics.binary_accuracy +* tf.keras.metrics.binary_crossentropy +* tf.keras.metrics.categorical_accuracy +* tf.keras.metrics.categorical_crossentropy +* tf.keras.metrics.deserialize +* tf.keras.metrics.get +* tf.keras.metrics.hinge +* tf.keras.metrics.kld +* tf.keras.metrics.kullback_leibler_divergence +* tf.keras.metrics.mae +* tf.keras.metrics.mape +* tf.keras.metrics.mean_absolute_error +* tf.keras.metrics.mean_absolute_percentage_error +* tf.keras.metrics.mean_squared_error +* tf.keras.metrics.mean_squared_logarithmic_error +* tf.keras.metrics.mse +* tf.keras.metrics.msle +* tf.keras.metrics.poisson +* tf.keras.metrics.serialize +* tf.keras.metrics.sparse_categorical_accuracy +* tf.keras.metrics.sparse_categorical_crossentropy +* tf.keras.metrics.sparse_top_k_categorical_accuracy +* tf.keras.metrics.squared_hinge +* tf.keras.metrics.top_k_categorical_accuracy +* tf.keras.mixed_precision +* tf.keras.mixed_precision.experimental +* tf.keras.mixed_precision.experimental.LossScaleOptimizer +* tf.keras.mixed_precision.experimental.Policy +* tf.keras.mixed_precision.experimental.get_layer_policy +* tf.keras.mixed_precision.experimental.global_policy +* tf.keras.mixed_precision.experimental.set_policy +* tf.keras.models +* tf.keras.models.Model +* tf.keras.models.Sequential +* tf.keras.models.clone_model +* tf.keras.models.load_model +* tf.keras.models.model_from_config +* tf.keras.models.model_from_json +* tf.keras.models.model_from_yaml +* tf.keras.models.save_model +* tf.keras.optimizers +* tf.keras.optimizers.Adadelta +* tf.keras.optimizers.Adagrad +* tf.keras.optimizers.Adam +* tf.keras.optimizers.Adamax +* tf.keras.optimizers.Ftrl +* tf.keras.optimizers.Nadam +* tf.keras.optimizers.Optimizer +* tf.keras.optimizers.RMSprop +* tf.keras.optimizers.SGD +* tf.keras.optimizers.deserialize +* tf.keras.optimizers.get +* tf.keras.optimizers.schedules +* tf.keras.optimizers.schedules.ExponentialDecay +* tf.keras.optimizers.schedules.InverseTimeDecay +* tf.keras.optimizers.schedules.LearningRateSchedule +* tf.keras.optimizers.schedules.PiecewiseConstantDecay +* tf.keras.optimizers.schedules.PolynomialDecay +* tf.keras.optimizers.schedules.deserialize +* tf.keras.optimizers.schedules.serialize +* tf.keras.optimizers.serialize +* tf.keras.preprocessing +* tf.keras.preprocessing.image +* tf.keras.preprocessing.image.DirectoryIterator +* tf.keras.preprocessing.image.ImageDataGenerator +* tf.keras.preprocessing.image.Iterator +* tf.keras.preprocessing.image.NumpyArrayIterator +* tf.keras.preprocessing.image.apply_affine_transform +* tf.keras.preprocessing.image.apply_brightness_shift +* tf.keras.preprocessing.image.apply_channel_shift +* tf.keras.preprocessing.image.array_to_img +* tf.keras.preprocessing.image.img_to_array +* tf.keras.preprocessing.image.load_img +* tf.keras.preprocessing.image.random_brightness +* tf.keras.preprocessing.image.random_channel_shift +* tf.keras.preprocessing.image.random_rotation +* tf.keras.preprocessing.image.random_shear +* tf.keras.preprocessing.image.random_shift +* tf.keras.preprocessing.image.random_zoom +* tf.keras.preprocessing.image.save_img +* tf.keras.preprocessing.sequence +* tf.keras.preprocessing.sequence.TimeseriesGenerator +* tf.keras.preprocessing.sequence.make_sampling_table +* tf.keras.preprocessing.sequence.pad_sequences +* tf.keras.preprocessing.sequence.skipgrams +* 
tf.keras.preprocessing.text +* tf.keras.preprocessing.text.Tokenizer +* tf.keras.preprocessing.text.hashing_trick +* tf.keras.preprocessing.text.one_hot +* tf.keras.preprocessing.text.text_to_word_sequence +* tf.keras.preprocessing.text.tokenizer_from_json +* tf.keras.regularizers +* tf.keras.regularizers.L1L2 +* tf.keras.regularizers.Regularizer +* tf.keras.regularizers.deserialize +* tf.keras.regularizers.get +* tf.keras.regularizers.l1 +* tf.keras.regularizers.l1_l2 +* tf.keras.regularizers.l2 +* tf.keras.regularizers.serialize +* tf.keras.utils +* tf.keras.utils.CustomObjectScope +* tf.keras.utils.GeneratorEnqueuer +* tf.keras.utils.HDF5Matrix +* tf.keras.utils.OrderedEnqueuer +* tf.keras.utils.Progbar +* tf.keras.utils.Sequence +* tf.keras.utils.SequenceEnqueuer +* tf.keras.utils.convert_all_kernels_in_model +* tf.keras.utils.custom_object_scope +* tf.keras.utils.deserialize_keras_object +* tf.keras.utils.get_custom_objects +* tf.keras.utils.get_file +* tf.keras.utils.get_registered_name +* tf.keras.utils.get_registered_object +* tf.keras.utils.get_source_inputs +* tf.keras.utils.model_to_dot +* tf.keras.utils.multi_gpu_model +* tf.keras.utils.normalize +* tf.keras.utils.plot_model +* tf.keras.utils.register_keras_serializable +* tf.keras.utils.serialize_keras_object +* tf.keras.utils.to_categorical +* tf.keras.wrappers +* tf.keras.wrappers.scikit_learn +* tf.keras.wrappers.scikit_learn.KerasClassifier +* tf.keras.wrappers.scikit_learn.KerasRegressor +* tf.less +* tf.less_equal +* tf.linalg +* tf.linalg.LinearOperator +* tf.linalg.LinearOperatorAdjoint +* tf.linalg.LinearOperatorBlockDiag +* tf.linalg.LinearOperatorBlockLowerTriangular +* tf.linalg.LinearOperatorCirculant +* tf.linalg.LinearOperatorCirculant2D +* tf.linalg.LinearOperatorCirculant3D +* tf.linalg.LinearOperatorComposition +* tf.linalg.LinearOperatorDiag +* tf.linalg.LinearOperatorFullMatrix +* tf.linalg.LinearOperatorHouseholder +* tf.linalg.LinearOperatorIdentity +* tf.linalg.LinearOperatorInversion +* tf.linalg.LinearOperatorKronecker +* tf.linalg.LinearOperatorLowRankUpdate +* tf.linalg.LinearOperatorLowerTriangular +* tf.linalg.LinearOperatorPermutation +* tf.linalg.LinearOperatorScaledIdentity +* tf.linalg.LinearOperatorToeplitz +* tf.linalg.LinearOperatorTridiag +* tf.linalg.LinearOperatorZeros +* tf.linalg.adjoint +* tf.linalg.band_part +* tf.linalg.cholesky +* tf.linalg.cholesky_solve +* tf.linalg.cross +* tf.linalg.det +* tf.linalg.diag +* tf.linalg.diag_part +* tf.linalg.eig +* tf.linalg.eigh +* tf.linalg.eigvals +* tf.linalg.eigvalsh +* tf.linalg.einsum +* tf.linalg.experimental +* tf.linalg.experimental.conjugate_gradient +* tf.linalg.expm +* tf.linalg.eye +* tf.linalg.global_norm +* tf.linalg.inv +* tf.linalg.l2_normalize +* tf.linalg.logdet +* tf.linalg.logm +* tf.linalg.lstsq +* tf.linalg.lu +* tf.linalg.lu_matrix_inverse +* tf.linalg.lu_reconstruct +* tf.linalg.lu_solve +* tf.linalg.matmul +* tf.linalg.matrix_rank +* tf.linalg.matrix_transpose +* tf.linalg.matvec +* tf.linalg.norm +* tf.linalg.normalize +* tf.linalg.pinv +* tf.linalg.qr +* tf.linalg.set_diag +* tf.linalg.slogdet +* tf.linalg.solve +* tf.linalg.sqrtm +* tf.linalg.svd +* tf.linalg.tensor_diag +* tf.linalg.tensor_diag_part +* tf.linalg.tensordot +* tf.linalg.trace +* tf.linalg.triangular_solve +* tf.linalg.tridiagonal_matmul +* tf.linalg.tridiagonal_solve +* tf.linspace +* tf.lite +* tf.lite.Interpreter +* tf.lite.OpsSet +* tf.lite.Optimize +* tf.lite.RepresentativeDataset +* tf.lite.TFLiteConverter +* tf.lite.TargetSpec +* 
tf.lite.experimental +* tf.lite.experimental.load_delegate +* tf.load_library +* tf.load_op_library +* tf.logical_and +* tf.logical_not +* tf.logical_or +* tf.lookup +* tf.lookup.KeyValueTensorInitializer +* tf.lookup.StaticHashTable +* tf.lookup.StaticVocabularyTable +* tf.lookup.TextFileIndex +* tf.lookup.TextFileInitializer +* tf.lookup.experimental +* tf.lookup.experimental.DenseHashTable +* tf.losses +* tf.losses.BinaryCrossentropy +* tf.losses.CategoricalCrossentropy +* tf.losses.CategoricalHinge +* tf.losses.CosineSimilarity +* tf.losses.Hinge +* tf.losses.Huber +* tf.losses.KLD +* tf.losses.KLDivergence +* tf.losses.LogCosh +* tf.losses.Loss +* tf.losses.MAE +* tf.losses.MAPE +* tf.losses.MSE +* tf.losses.MSLE +* tf.losses.MeanAbsoluteError +* tf.losses.MeanAbsolutePercentageError +* tf.losses.MeanSquaredError +* tf.losses.MeanSquaredLogarithmicError +* tf.losses.Poisson +* tf.losses.Reduction +* tf.losses.SparseCategoricalCrossentropy +* tf.losses.SquaredHinge +* tf.losses.binary_crossentropy +* tf.losses.categorical_crossentropy +* tf.losses.categorical_hinge +* tf.losses.cosine_similarity +* tf.losses.deserialize +* tf.losses.get +* tf.losses.hinge +* tf.losses.kld +* tf.losses.kullback_leibler_divergence +* tf.losses.logcosh +* tf.losses.mae +* tf.losses.mape +* tf.losses.mean_absolute_error +* tf.losses.mean_absolute_percentage_error +* tf.losses.mean_squared_error +* tf.losses.mean_squared_logarithmic_error +* tf.losses.mse +* tf.losses.msle +* tf.losses.poisson +* tf.losses.serialize +* tf.losses.sparse_categorical_crossentropy +* tf.losses.squared_hinge +* tf.make_ndarray +* tf.make_tensor_proto +* tf.map_fn +* tf.math +* tf.math.abs +* tf.math.accumulate_n +* tf.math.acos +* tf.math.acosh +* tf.math.add +* tf.math.add_n +* tf.math.angle +* tf.math.argmax +* tf.math.argmin +* tf.math.asin +* tf.math.asinh +* tf.math.atan +* tf.math.atan2 +* tf.math.atanh +* tf.math.bessel_i0 +* tf.math.bessel_i0e +* tf.math.bessel_i1 +* tf.math.bessel_i1e +* tf.math.betainc +* tf.math.bincount +* tf.math.ceil +* tf.math.confusion_matrix +* tf.math.conj +* tf.math.cos +* tf.math.cosh +* tf.math.count_nonzero +* tf.math.cumprod +* tf.math.cumsum +* tf.math.cumulative_logsumexp +* tf.math.digamma +* tf.math.divide +* tf.math.divide_no_nan +* tf.math.equal +* tf.math.erf +* tf.math.erfc +* tf.math.erfinv +* tf.math.exp +* tf.math.expm1 +* tf.math.floor +* tf.math.floordiv +* tf.math.floormod +* tf.math.greater +* tf.math.greater_equal +* tf.math.igamma +* tf.math.igammac +* tf.math.imag +* tf.math.in_top_k +* tf.math.invert_permutation +* tf.math.is_finite +* tf.math.is_inf +* tf.math.is_nan +* tf.math.is_non_decreasing +* tf.math.is_strictly_increasing +* tf.math.l2_normalize +* tf.math.lbeta +* tf.math.less +* tf.math.less_equal +* tf.math.lgamma +* tf.math.log +* tf.math.log1p +* tf.math.log_sigmoid +* tf.math.log_softmax +* tf.math.logical_and +* tf.math.logical_not +* tf.math.logical_or +* tf.math.logical_xor +* tf.math.maximum +* tf.math.minimum +* tf.math.mod +* tf.math.multiply +* tf.math.multiply_no_nan +* tf.math.ndtri +* tf.math.negative +* tf.math.nextafter +* tf.math.not_equal +* tf.math.polygamma +* tf.math.polyval +* tf.math.pow +* tf.math.real +* tf.math.reciprocal +* tf.math.reciprocal_no_nan +* tf.math.reduce_all +* tf.math.reduce_any +* tf.math.reduce_euclidean_norm +* tf.math.reduce_logsumexp +* tf.math.reduce_max +* tf.math.reduce_mean +* tf.math.reduce_min +* tf.math.reduce_prod +* tf.math.reduce_std +* tf.math.reduce_sum +* tf.math.reduce_variance +* tf.math.rint +* 
tf.math.round +* tf.math.rsqrt +* tf.math.scalar_mul +* tf.math.segment_max +* tf.math.segment_mean +* tf.math.segment_min +* tf.math.segment_prod +* tf.math.segment_sum +* tf.math.sigmoid +* tf.math.sign +* tf.math.sin +* tf.math.sinh +* tf.math.sobol_sample +* tf.math.softmax +* tf.math.softplus +* tf.math.softsign +* tf.math.special +* tf.math.special.dawsn +* tf.math.special.expint +* tf.math.special.fresnel_cos +* tf.math.special.fresnel_sin +* tf.math.special.spence +* tf.math.sqrt +* tf.math.square +* tf.math.squared_difference +* tf.math.subtract +* tf.math.tan +* tf.math.tanh +* tf.math.top_k +* tf.math.truediv +* tf.math.unsorted_segment_max +* tf.math.unsorted_segment_mean +* tf.math.unsorted_segment_min +* tf.math.unsorted_segment_prod +* tf.math.unsorted_segment_sqrt_n +* tf.math.unsorted_segment_sum +* tf.math.xdivy +* tf.math.xlog1py +* tf.math.xlogy +* tf.math.zero_fraction +* tf.math.zeta +* tf.matmul +* tf.matrix_square_root +* tf.maximum +* tf.meshgrid +* tf.metrics +* tf.metrics.AUC +* tf.metrics.Accuracy +* tf.metrics.BinaryAccuracy +* tf.metrics.BinaryCrossentropy +* tf.metrics.CategoricalAccuracy +* tf.metrics.CategoricalCrossentropy +* tf.metrics.CategoricalHinge +* tf.metrics.CosineSimilarity +* tf.metrics.FalseNegatives +* tf.metrics.FalsePositives +* tf.metrics.Hinge +* tf.metrics.KLD +* tf.metrics.KLDivergence +* tf.metrics.LogCoshError +* tf.metrics.MAE +* tf.metrics.MAPE +* tf.metrics.MSE +* tf.metrics.MSLE +* tf.metrics.Mean +* tf.metrics.MeanAbsoluteError +* tf.metrics.MeanAbsolutePercentageError +* tf.metrics.MeanIoU +* tf.metrics.MeanRelativeError +* tf.metrics.MeanSquaredError +* tf.metrics.MeanSquaredLogarithmicError +* tf.metrics.MeanTensor +* tf.metrics.Metric +* tf.metrics.Poisson +* tf.metrics.Precision +* tf.metrics.PrecisionAtRecall +* tf.metrics.Recall +* tf.metrics.RecallAtPrecision +* tf.metrics.RootMeanSquaredError +* tf.metrics.SensitivityAtSpecificity +* tf.metrics.SparseCategoricalAccuracy +* tf.metrics.SparseCategoricalCrossentropy +* tf.metrics.SparseTopKCategoricalAccuracy +* tf.metrics.SpecificityAtSensitivity +* tf.metrics.SquaredHinge +* tf.metrics.Sum +* tf.metrics.TopKCategoricalAccuracy +* tf.metrics.TrueNegatives +* tf.metrics.TruePositives +* tf.metrics.binary_accuracy +* tf.metrics.binary_crossentropy +* tf.metrics.categorical_accuracy +* tf.metrics.categorical_crossentropy +* tf.metrics.deserialize +* tf.metrics.get +* tf.metrics.hinge +* tf.metrics.kld +* tf.metrics.kullback_leibler_divergence +* tf.metrics.mae +* tf.metrics.mape +* tf.metrics.mean_absolute_error +* tf.metrics.mean_absolute_percentage_error +* tf.metrics.mean_squared_error +* tf.metrics.mean_squared_logarithmic_error +* tf.metrics.mse +* tf.metrics.msle +* tf.metrics.poisson +* tf.metrics.serialize +* tf.metrics.sparse_categorical_accuracy +* tf.metrics.sparse_categorical_crossentropy +* tf.metrics.sparse_top_k_categorical_accuracy +* tf.metrics.squared_hinge +* tf.metrics.top_k_categorical_accuracy +* tf.minimum +* tf.mixed_precision +* tf.mixed_precision.experimental +* tf.mixed_precision.experimental.DynamicLossScale +* tf.mixed_precision.experimental.FixedLossScale +* tf.mixed_precision.experimental.LossScale +* tf.mlir +* tf.mlir.experimental +* tf.mlir.experimental.convert_graph_def +* tf.multiply +* tf.name_scope +* tf.negative +* tf.nest +* tf.nest.assert_same_structure +* tf.nest.flatten +* tf.nest.is_nested +* tf.nest.map_structure +* tf.nest.pack_sequence_as +* tf.nn +* tf.nn.RNNCellDeviceWrapper +* tf.nn.RNNCellDropoutWrapper +* 
tf.nn.RNNCellResidualWrapper +* tf.nn.all_candidate_sampler +* tf.nn.atrous_conv2d +* tf.nn.atrous_conv2d_transpose +* tf.nn.avg_pool +* tf.nn.avg_pool1d +* tf.nn.avg_pool2d +* tf.nn.avg_pool3d +* tf.nn.batch_norm_with_global_normalization +* tf.nn.batch_normalization +* tf.nn.bias_add +* tf.nn.collapse_repeated +* tf.nn.compute_accidental_hits +* tf.nn.compute_average_loss +* tf.nn.conv1d +* tf.nn.conv1d_transpose +* tf.nn.conv2d +* tf.nn.conv2d_transpose +* tf.nn.conv3d +* tf.nn.conv3d_transpose +* tf.nn.conv_transpose +* tf.nn.convolution +* tf.nn.crelu +* tf.nn.ctc_beam_search_decoder +* tf.nn.ctc_greedy_decoder +* tf.nn.ctc_loss +* tf.nn.ctc_unique_labels +* tf.nn.depth_to_space +* tf.nn.depthwise_conv2d +* tf.nn.depthwise_conv2d_backprop_filter +* tf.nn.depthwise_conv2d_backprop_input +* tf.nn.dilation2d +* tf.nn.dropout +* tf.nn.elu +* tf.nn.embedding_lookup +* tf.nn.embedding_lookup_sparse +* tf.nn.erosion2d +* tf.nn.fixed_unigram_candidate_sampler +* tf.nn.fractional_avg_pool +* tf.nn.fractional_max_pool +* tf.nn.in_top_k +* tf.nn.l2_loss +* tf.nn.l2_normalize +* tf.nn.leaky_relu +* tf.nn.learned_unigram_candidate_sampler +* tf.nn.local_response_normalization +* tf.nn.log_poisson_loss +* tf.nn.log_softmax +* tf.nn.lrn +* tf.nn.max_pool +* tf.nn.max_pool1d +* tf.nn.max_pool2d +* tf.nn.max_pool3d +* tf.nn.max_pool_with_argmax +* tf.nn.moments +* tf.nn.nce_loss +* tf.nn.normalize_moments +* tf.nn.pool +* tf.nn.relu +* tf.nn.relu6 +* tf.nn.safe_embedding_lookup_sparse +* tf.nn.sampled_softmax_loss +* tf.nn.scale_regularization_loss +* tf.nn.selu +* tf.nn.separable_conv2d +* tf.nn.sigmoid +* tf.nn.sigmoid_cross_entropy_with_logits +* tf.nn.softmax +* tf.nn.softmax_cross_entropy_with_logits +* tf.nn.softplus +* tf.nn.softsign +* tf.nn.space_to_batch +* tf.nn.space_to_depth +* tf.nn.sparse_softmax_cross_entropy_with_logits +* tf.nn.sufficient_statistics +* tf.nn.swish +* tf.nn.tanh +* tf.nn.top_k +* tf.nn.weighted_cross_entropy_with_logits +* tf.nn.weighted_moments +* tf.nn.with_space_to_batch +* tf.nn.zero_fraction +* tf.no_gradient +* tf.no_op +* tf.nondifferentiable_batch_function +* tf.norm +* tf.not_equal +* tf.numpy_function +* tf.one_hot +* tf.ones +* tf.ones_initializer +* tf.ones_like +* tf.optimizers +* tf.optimizers.Adadelta +* tf.optimizers.Adagrad +* tf.optimizers.Adam +* tf.optimizers.Adamax +* tf.optimizers.Ftrl +* tf.optimizers.Nadam +* tf.optimizers.Optimizer +* tf.optimizers.RMSprop +* tf.optimizers.SGD +* tf.optimizers.deserialize +* tf.optimizers.get +* tf.optimizers.schedules +* tf.optimizers.schedules.ExponentialDecay +* tf.optimizers.schedules.InverseTimeDecay +* tf.optimizers.schedules.LearningRateSchedule +* tf.optimizers.schedules.PiecewiseConstantDecay +* tf.optimizers.schedules.PolynomialDecay +* tf.optimizers.schedules.deserialize +* tf.optimizers.schedules.serialize +* tf.optimizers.serialize +* tf.pad +* tf.parallel_stack +* tf.pow +* tf.print +* tf.profiler +* tf.profiler.experimental +* tf.profiler.experimental.Profile +* tf.profiler.experimental.client +* tf.profiler.experimental.client.monitor +* tf.profiler.experimental.client.trace +* tf.profiler.experimental.server +* tf.profiler.experimental.server.start +* tf.profiler.experimental.start +* tf.profiler.experimental.stop +* tf.py_function +* tf.quantization +* tf.quantization.dequantize +* tf.quantization.fake_quant_with_min_max_args +* tf.quantization.fake_quant_with_min_max_args_gradient +* tf.quantization.fake_quant_with_min_max_vars +* tf.quantization.fake_quant_with_min_max_vars_gradient +* 
tf.quantization.fake_quant_with_min_max_vars_per_channel +* tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient +* tf.quantization.quantize +* tf.quantization.quantize_and_dequantize +* tf.quantization.quantized_concat +* tf.queue +* tf.queue.FIFOQueue +* tf.queue.PaddingFIFOQueue +* tf.queue.PriorityQueue +* tf.queue.QueueBase +* tf.queue.RandomShuffleQueue +* tf.ragged +* tf.ragged.boolean_mask +* tf.ragged.constant +* tf.ragged.map_flat_values +* tf.ragged.range +* tf.ragged.row_splits_to_segment_ids +* tf.ragged.segment_ids_to_row_splits +* tf.ragged.stack +* tf.ragged.stack_dynamic_partitions +* tf.random +* tf.random.Algorithm +* tf.random.Generator +* tf.random.all_candidate_sampler +* tf.random.categorical +* tf.random.create_rng_state +* tf.random.experimental +* tf.random.experimental.Algorithm +* tf.random.experimental.Generator +* tf.random.experimental.create_rng_state +* tf.random.experimental.get_global_generator +* tf.random.experimental.set_global_generator +* tf.random.fixed_unigram_candidate_sampler +* tf.random.gamma +* tf.random.get_global_generator +* tf.random.learned_unigram_candidate_sampler +* tf.random.log_uniform_candidate_sampler +* tf.random.normal +* tf.random.poisson +* tf.random.set_global_generator +* tf.random.set_seed +* tf.random.shuffle +* tf.random.stateless_binomial +* tf.random.stateless_categorical +* tf.random.stateless_gamma +* tf.random.stateless_normal +* tf.random.stateless_poisson +* tf.random.stateless_truncated_normal +* tf.random.stateless_uniform +* tf.random.truncated_normal +* tf.random.uniform +* tf.random.uniform_candidate_sampler +* tf.random_normal_initializer +* tf.random_uniform_initializer +* tf.range +* tf.rank +* tf.raw_ops +* tf.raw_ops.Abort +* tf.raw_ops.Abs +* tf.raw_ops.AccumulateNV2 +* tf.raw_ops.AccumulatorApplyGradient +* tf.raw_ops.AccumulatorNumAccumulated +* tf.raw_ops.AccumulatorSetGlobalStep +* tf.raw_ops.AccumulatorTakeGradient +* tf.raw_ops.Acos +* tf.raw_ops.Acosh +* tf.raw_ops.Add +* tf.raw_ops.AddManySparseToTensorsMap +* tf.raw_ops.AddN +* tf.raw_ops.AddSparseToTensorsMap +* tf.raw_ops.AddV2 +* tf.raw_ops.AdjustContrast +* tf.raw_ops.AdjustContrastv2 +* tf.raw_ops.AdjustHue +* tf.raw_ops.AdjustSaturation +* tf.raw_ops.All +* tf.raw_ops.AllCandidateSampler +* tf.raw_ops.AllToAll +* tf.raw_ops.Angle +* tf.raw_ops.AnonymousIterator +* tf.raw_ops.AnonymousIteratorV2 +* tf.raw_ops.AnonymousMemoryCache +* tf.raw_ops.AnonymousMultiDeviceIterator +* tf.raw_ops.AnonymousRandomSeedGenerator +* tf.raw_ops.Any +* tf.raw_ops.ApplyAdaMax +* tf.raw_ops.ApplyAdadelta +* tf.raw_ops.ApplyAdagrad +* tf.raw_ops.ApplyAdagradDA +* tf.raw_ops.ApplyAdagradV2 +* tf.raw_ops.ApplyAdam +* tf.raw_ops.ApplyAddSign +* tf.raw_ops.ApplyCenteredRMSProp +* tf.raw_ops.ApplyFtrl +* tf.raw_ops.ApplyFtrlV2 +* tf.raw_ops.ApplyGradientDescent +* tf.raw_ops.ApplyMomentum +* tf.raw_ops.ApplyPowerSign +* tf.raw_ops.ApplyProximalAdagrad +* tf.raw_ops.ApplyProximalGradientDescent +* tf.raw_ops.ApplyRMSProp +* tf.raw_ops.ApproximateEqual +* tf.raw_ops.ArgMax +* tf.raw_ops.ArgMin +* tf.raw_ops.AsString +* tf.raw_ops.Asin +* tf.raw_ops.Asinh +* tf.raw_ops.Assert +* tf.raw_ops.AssertCardinalityDataset +* tf.raw_ops.AssertNextDataset +* tf.raw_ops.Assign +* tf.raw_ops.AssignAdd +* tf.raw_ops.AssignAddVariableOp +* tf.raw_ops.AssignSub +* tf.raw_ops.AssignSubVariableOp +* tf.raw_ops.AssignVariableOp +* tf.raw_ops.Atan +* tf.raw_ops.Atan2 +* tf.raw_ops.Atanh +* tf.raw_ops.AudioSpectrogram +* tf.raw_ops.AudioSummary +* tf.raw_ops.AudioSummaryV2 +* 
tf.raw_ops.AutoShardDataset +* tf.raw_ops.AvgPool +* tf.raw_ops.AvgPool3D +* tf.raw_ops.AvgPool3DGrad +* tf.raw_ops.AvgPoolGrad +* tf.raw_ops.Barrier +* tf.raw_ops.BarrierClose +* tf.raw_ops.BarrierIncompleteSize +* tf.raw_ops.BarrierInsertMany +* tf.raw_ops.BarrierReadySize +* tf.raw_ops.BarrierTakeMany +* tf.raw_ops.Batch +* tf.raw_ops.BatchCholesky +* tf.raw_ops.BatchCholeskyGrad +* tf.raw_ops.BatchDataset +* tf.raw_ops.BatchDatasetV2 +* tf.raw_ops.BatchFFT +* tf.raw_ops.BatchFFT2D +* tf.raw_ops.BatchFFT3D +* tf.raw_ops.BatchFunction +* tf.raw_ops.BatchIFFT +* tf.raw_ops.BatchIFFT2D +* tf.raw_ops.BatchIFFT3D +* tf.raw_ops.BatchMatMul +* tf.raw_ops.BatchMatMulV2 +* tf.raw_ops.BatchMatrixBandPart +* tf.raw_ops.BatchMatrixDeterminant +* tf.raw_ops.BatchMatrixDiag +* tf.raw_ops.BatchMatrixDiagPart +* tf.raw_ops.BatchMatrixInverse +* tf.raw_ops.BatchMatrixSetDiag +* tf.raw_ops.BatchMatrixSolve +* tf.raw_ops.BatchMatrixSolveLs +* tf.raw_ops.BatchMatrixTriangularSolve +* tf.raw_ops.BatchNormWithGlobalNormalization +* tf.raw_ops.BatchNormWithGlobalNormalizationGrad +* tf.raw_ops.BatchSelfAdjointEig +* tf.raw_ops.BatchSelfAdjointEigV2 +* tf.raw_ops.BatchSvd +* tf.raw_ops.BatchToSpace +* tf.raw_ops.BatchToSpaceND +* tf.raw_ops.BesselI0e +* tf.raw_ops.BesselI1e +* tf.raw_ops.Betainc +* tf.raw_ops.BiasAdd +* tf.raw_ops.BiasAddGrad +* tf.raw_ops.BiasAddV1 +* tf.raw_ops.Bincount +* tf.raw_ops.Bitcast +* tf.raw_ops.BitwiseAnd +* tf.raw_ops.BitwiseOr +* tf.raw_ops.BitwiseXor +* tf.raw_ops.BlockLSTM +* tf.raw_ops.BlockLSTMGrad +* tf.raw_ops.BlockLSTMGradV2 +* tf.raw_ops.BlockLSTMV2 +* tf.raw_ops.BoostedTreesAggregateStats +* tf.raw_ops.BoostedTreesBucketize +* tf.raw_ops.BoostedTreesCalculateBestFeatureSplit +* tf.raw_ops.BoostedTreesCalculateBestFeatureSplitV2 +* tf.raw_ops.BoostedTreesCalculateBestGainsPerFeature +* tf.raw_ops.BoostedTreesCenterBias +* tf.raw_ops.BoostedTreesCreateEnsemble +* tf.raw_ops.BoostedTreesCreateQuantileStreamResource +* tf.raw_ops.BoostedTreesDeserializeEnsemble +* tf.raw_ops.BoostedTreesEnsembleResourceHandleOp +* tf.raw_ops.BoostedTreesExampleDebugOutputs +* tf.raw_ops.BoostedTreesFlushQuantileSummaries +* tf.raw_ops.BoostedTreesGetEnsembleStates +* tf.raw_ops.BoostedTreesMakeQuantileSummaries +* tf.raw_ops.BoostedTreesMakeStatsSummary +* tf.raw_ops.BoostedTreesPredict +* tf.raw_ops.BoostedTreesQuantileStreamResourceAddSummaries +* tf.raw_ops.BoostedTreesQuantileStreamResourceDeserialize +* tf.raw_ops.BoostedTreesQuantileStreamResourceFlush +* tf.raw_ops.BoostedTreesQuantileStreamResourceGetBucketBoundaries +* tf.raw_ops.BoostedTreesQuantileStreamResourceHandleOp +* tf.raw_ops.BoostedTreesSerializeEnsemble +* tf.raw_ops.BoostedTreesSparseAggregateStats +* tf.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit +* tf.raw_ops.BoostedTreesTrainingPredict +* tf.raw_ops.BoostedTreesUpdateEnsemble +* tf.raw_ops.BoostedTreesUpdateEnsembleV2 +* tf.raw_ops.BroadcastArgs +* tf.raw_ops.BroadcastGradientArgs +* tf.raw_ops.BroadcastTo +* tf.raw_ops.Bucketize +* tf.raw_ops.BytesProducedStatsDataset +* tf.raw_ops.CSRSparseMatrixComponents +* tf.raw_ops.CSRSparseMatrixToDense +* tf.raw_ops.CSRSparseMatrixToSparseTensor +* tf.raw_ops.CSVDataset +* tf.raw_ops.CTCBeamSearchDecoder +* tf.raw_ops.CTCGreedyDecoder +* tf.raw_ops.CTCLoss +* tf.raw_ops.CTCLossV2 +* tf.raw_ops.CacheDataset +* tf.raw_ops.CacheDatasetV2 +* tf.raw_ops.Case +* tf.raw_ops.Cast +* tf.raw_ops.Ceil +* tf.raw_ops.CheckNumerics +* tf.raw_ops.CheckNumericsV2 +* tf.raw_ops.Cholesky +* tf.raw_ops.CholeskyGrad +* 
tf.raw_ops.ChooseFastestBranchDataset +* tf.raw_ops.ChooseFastestDataset +* tf.raw_ops.ClipByValue +* tf.raw_ops.CloseSummaryWriter +* tf.raw_ops.CollectiveBcastRecv +* tf.raw_ops.CollectiveBcastSend +* tf.raw_ops.CollectiveGather +* tf.raw_ops.CollectivePermute +* tf.raw_ops.CollectiveReduce +* tf.raw_ops.CombinedNonMaxSuppression +* tf.raw_ops.CompareAndBitpack +* tf.raw_ops.Complex +* tf.raw_ops.ComplexAbs +* tf.raw_ops.ComputeAccidentalHits +* tf.raw_ops.Concat +* tf.raw_ops.ConcatOffset +* tf.raw_ops.ConcatV2 +* tf.raw_ops.ConcatenateDataset +* tf.raw_ops.ConditionalAccumulator +* tf.raw_ops.ConfigureDistributedTPU +* tf.raw_ops.ConfigureTPUEmbedding +* tf.raw_ops.Conj +* tf.raw_ops.ConjugateTranspose +* tf.raw_ops.Const +* tf.raw_ops.ConsumeMutexLock +* tf.raw_ops.ControlTrigger +* tf.raw_ops.Conv2D +* tf.raw_ops.Conv2DBackpropFilter +* tf.raw_ops.Conv2DBackpropInput +* tf.raw_ops.Conv3D +* tf.raw_ops.Conv3DBackpropFilter +* tf.raw_ops.Conv3DBackpropFilterV2 +* tf.raw_ops.Conv3DBackpropInput +* tf.raw_ops.Conv3DBackpropInputV2 +* tf.raw_ops.Copy +* tf.raw_ops.CopyHost +* tf.raw_ops.Cos +* tf.raw_ops.Cosh +* tf.raw_ops.CountUpTo +* tf.raw_ops.CreateSummaryDbWriter +* tf.raw_ops.CreateSummaryFileWriter +* tf.raw_ops.CropAndResize +* tf.raw_ops.CropAndResizeGradBoxes +* tf.raw_ops.CropAndResizeGradImage +* tf.raw_ops.Cross +* tf.raw_ops.CrossReplicaSum +* tf.raw_ops.CudnnRNN +* tf.raw_ops.CudnnRNNBackprop +* tf.raw_ops.CudnnRNNBackpropV2 +* tf.raw_ops.CudnnRNNBackpropV3 +* tf.raw_ops.CudnnRNNCanonicalToParams +* tf.raw_ops.CudnnRNNCanonicalToParamsV2 +* tf.raw_ops.CudnnRNNParamsSize +* tf.raw_ops.CudnnRNNParamsToCanonical +* tf.raw_ops.CudnnRNNParamsToCanonicalV2 +* tf.raw_ops.CudnnRNNV2 +* tf.raw_ops.CudnnRNNV3 +* tf.raw_ops.Cumprod +* tf.raw_ops.Cumsum +* tf.raw_ops.CumulativeLogsumexp +* tf.raw_ops.DataFormatDimMap +* tf.raw_ops.DataFormatVecPermute +* tf.raw_ops.DatasetCardinality +* tf.raw_ops.DatasetFromGraph +* tf.raw_ops.DatasetToGraph +* tf.raw_ops.DatasetToGraphV2 +* tf.raw_ops.DatasetToSingleElement +* tf.raw_ops.DatasetToTFRecord +* tf.raw_ops.Dawsn +* tf.raw_ops.DebugGradientIdentity +* tf.raw_ops.DebugGradientRefIdentity +* tf.raw_ops.DebugIdentity +* tf.raw_ops.DebugIdentityV2 +* tf.raw_ops.DebugNanCount +* tf.raw_ops.DebugNumericSummary +* tf.raw_ops.DebugNumericSummaryV2 +* tf.raw_ops.DecodeAndCropJpeg +* tf.raw_ops.DecodeBase64 +* tf.raw_ops.DecodeBmp +* tf.raw_ops.DecodeCSV +* tf.raw_ops.DecodeCompressed +* tf.raw_ops.DecodeGif +* tf.raw_ops.DecodeJSONExample +* tf.raw_ops.DecodeJpeg +* tf.raw_ops.DecodePaddedRaw +* tf.raw_ops.DecodePng +* tf.raw_ops.DecodeProtoV2 +* tf.raw_ops.DecodeRaw +* tf.raw_ops.DecodeWav +* tf.raw_ops.DeepCopy +* tf.raw_ops.DeleteIterator +* tf.raw_ops.DeleteMemoryCache +* tf.raw_ops.DeleteMultiDeviceIterator +* tf.raw_ops.DeleteRandomSeedGenerator +* tf.raw_ops.DeleteSessionTensor +* tf.raw_ops.DenseToCSRSparseMatrix +* tf.raw_ops.DenseToDenseSetOperation +* tf.raw_ops.DenseToSparseBatchDataset +* tf.raw_ops.DenseToSparseSetOperation +* tf.raw_ops.DepthToSpace +* tf.raw_ops.DepthwiseConv2dNative +* tf.raw_ops.DepthwiseConv2dNativeBackpropFilter +* tf.raw_ops.DepthwiseConv2dNativeBackpropInput +* tf.raw_ops.Dequantize +* tf.raw_ops.DeserializeIterator +* tf.raw_ops.DeserializeManySparse +* tf.raw_ops.DeserializeSparse +* tf.raw_ops.DestroyResourceOp +* tf.raw_ops.DestroyTemporaryVariable +* tf.raw_ops.Diag +* tf.raw_ops.DiagPart +* tf.raw_ops.Digamma +* tf.raw_ops.Dilation2D +* tf.raw_ops.Dilation2DBackpropFilter +* 
tf.raw_ops.Dilation2DBackpropInput +* tf.raw_ops.DirectedInterleaveDataset +* tf.raw_ops.Div +* tf.raw_ops.DivNoNan +* tf.raw_ops.DrawBoundingBoxes +* tf.raw_ops.DrawBoundingBoxesV2 +* tf.raw_ops.DummyMemoryCache +* tf.raw_ops.DynamicPartition +* tf.raw_ops.DynamicStitch +* tf.raw_ops.EagerPyFunc +* tf.raw_ops.EditDistance +* tf.raw_ops.Eig +* tf.raw_ops.Einsum +* tf.raw_ops.Elu +* tf.raw_ops.EluGrad +* tf.raw_ops.Empty +* tf.raw_ops.EmptyTensorList +* tf.raw_ops.EncodeBase64 +* tf.raw_ops.EncodeJpeg +* tf.raw_ops.EncodeJpegVariableQuality +* tf.raw_ops.EncodePng +* tf.raw_ops.EncodeProto +* tf.raw_ops.EncodeWav +* tf.raw_ops.EnqueueTPUEmbeddingIntegerBatch +* tf.raw_ops.EnqueueTPUEmbeddingSparseBatch +* tf.raw_ops.EnqueueTPUEmbeddingSparseTensorBatch +* tf.raw_ops.EnsureShape +* tf.raw_ops.Enter +* tf.raw_ops.Equal +* tf.raw_ops.Erf +* tf.raw_ops.Erfc +* tf.raw_ops.Erfinv +* tf.raw_ops.EuclideanNorm +* tf.raw_ops.Exit +* tf.raw_ops.Exp +* tf.raw_ops.ExpandDims +* tf.raw_ops.ExperimentalAssertNextDataset +* tf.raw_ops.ExperimentalAutoShardDataset +* tf.raw_ops.ExperimentalBytesProducedStatsDataset +* tf.raw_ops.ExperimentalCSVDataset +* tf.raw_ops.ExperimentalChooseFastestDataset +* tf.raw_ops.ExperimentalDatasetCardinality +* tf.raw_ops.ExperimentalDatasetToTFRecord +* tf.raw_ops.ExperimentalDenseToSparseBatchDataset +* tf.raw_ops.ExperimentalDirectedInterleaveDataset +* tf.raw_ops.ExperimentalGroupByReducerDataset +* tf.raw_ops.ExperimentalGroupByWindowDataset +* tf.raw_ops.ExperimentalIgnoreErrorsDataset +* tf.raw_ops.ExperimentalIteratorGetDevice +* tf.raw_ops.ExperimentalLMDBDataset +* tf.raw_ops.ExperimentalLatencyStatsDataset +* tf.raw_ops.ExperimentalMapAndBatchDataset +* tf.raw_ops.ExperimentalMapDataset +* tf.raw_ops.ExperimentalMatchingFilesDataset +* tf.raw_ops.ExperimentalMaxIntraOpParallelismDataset +* tf.raw_ops.ExperimentalNonSerializableDataset +* tf.raw_ops.ExperimentalParallelInterleaveDataset +* tf.raw_ops.ExperimentalParseExampleDataset +* tf.raw_ops.ExperimentalPrivateThreadPoolDataset +* tf.raw_ops.ExperimentalRandomDataset +* tf.raw_ops.ExperimentalRebatchDataset +* tf.raw_ops.ExperimentalScanDataset +* tf.raw_ops.ExperimentalSetStatsAggregatorDataset +* tf.raw_ops.ExperimentalSleepDataset +* tf.raw_ops.ExperimentalSlidingWindowDataset +* tf.raw_ops.ExperimentalSqlDataset +* tf.raw_ops.ExperimentalStatsAggregatorHandle +* tf.raw_ops.ExperimentalStatsAggregatorSummary +* tf.raw_ops.ExperimentalTakeWhileDataset +* tf.raw_ops.ExperimentalThreadPoolDataset +* tf.raw_ops.ExperimentalThreadPoolHandle +* tf.raw_ops.ExperimentalUnbatchDataset +* tf.raw_ops.ExperimentalUniqueDataset +* tf.raw_ops.Expint +* tf.raw_ops.Expm1 +* tf.raw_ops.ExtractGlimpse +* tf.raw_ops.ExtractImagePatches +* tf.raw_ops.ExtractJpegShape +* tf.raw_ops.ExtractVolumePatches +* tf.raw_ops.FFT +* tf.raw_ops.FFT2D +* tf.raw_ops.FFT3D +* tf.raw_ops.FIFOQueue +* tf.raw_ops.FIFOQueueV2 +* tf.raw_ops.Fact +* tf.raw_ops.FakeParam +* tf.raw_ops.FakeQuantWithMinMaxArgs +* tf.raw_ops.FakeQuantWithMinMaxArgsGradient +* tf.raw_ops.FakeQuantWithMinMaxVars +* tf.raw_ops.FakeQuantWithMinMaxVarsGradient +* tf.raw_ops.FakeQuantWithMinMaxVarsPerChannel +* tf.raw_ops.FakeQuantWithMinMaxVarsPerChannelGradient +* tf.raw_ops.FakeQueue +* tf.raw_ops.Fill +* tf.raw_ops.FilterByLastComponentDataset +* tf.raw_ops.FilterDataset +* tf.raw_ops.Fingerprint +* tf.raw_ops.FixedLengthRecordDataset +* tf.raw_ops.FixedLengthRecordDatasetV2 +* tf.raw_ops.FixedLengthRecordReader +* tf.raw_ops.FixedLengthRecordReaderV2 +* 
tf.raw_ops.FixedUnigramCandidateSampler +* tf.raw_ops.FlatMapDataset +* tf.raw_ops.Floor +* tf.raw_ops.FloorDiv +* tf.raw_ops.FloorMod +* tf.raw_ops.FlushSummaryWriter +* tf.raw_ops.For +* tf.raw_ops.FractionalAvgPool +* tf.raw_ops.FractionalAvgPoolGrad +* tf.raw_ops.FractionalMaxPool +* tf.raw_ops.FractionalMaxPoolGrad +* tf.raw_ops.FresnelCos +* tf.raw_ops.FresnelSin +* tf.raw_ops.FusedBatchNorm +* tf.raw_ops.FusedBatchNormGrad +* tf.raw_ops.FusedBatchNormGradV2 +* tf.raw_ops.FusedBatchNormGradV3 +* tf.raw_ops.FusedBatchNormV2 +* tf.raw_ops.FusedBatchNormV3 +* tf.raw_ops.FusedPadConv2D +* tf.raw_ops.FusedResizeAndPadConv2D +* tf.raw_ops.GRUBlockCell +* tf.raw_ops.GRUBlockCellGrad +* tf.raw_ops.Gather +* tf.raw_ops.GatherNd +* tf.raw_ops.GatherV2 +* tf.raw_ops.GenerateBoundingBoxProposals +* tf.raw_ops.GenerateVocabRemapping +* tf.raw_ops.GeneratorDataset +* tf.raw_ops.GetSessionHandle +* tf.raw_ops.GetSessionHandleV2 +* tf.raw_ops.GetSessionTensor +* tf.raw_ops.Greater +* tf.raw_ops.GreaterEqual +* tf.raw_ops.GroupByReducerDataset +* tf.raw_ops.GroupByWindowDataset +* tf.raw_ops.GuaranteeConst +* tf.raw_ops.HSVToRGB +* tf.raw_ops.HashTable +* tf.raw_ops.HashTableV2 +* tf.raw_ops.HistogramFixedWidth +* tf.raw_ops.HistogramSummary +* tf.raw_ops.IFFT +* tf.raw_ops.IFFT2D +* tf.raw_ops.IFFT3D +* tf.raw_ops.IRFFT +* tf.raw_ops.IRFFT2D +* tf.raw_ops.IRFFT3D +* tf.raw_ops.Identity +* tf.raw_ops.IdentityN +* tf.raw_ops.IdentityReader +* tf.raw_ops.IdentityReaderV2 +* tf.raw_ops.If +* tf.raw_ops.Igamma +* tf.raw_ops.IgammaGradA +* tf.raw_ops.Igammac +* tf.raw_ops.IgnoreErrorsDataset +* tf.raw_ops.Imag +* tf.raw_ops.ImageProjectiveTransformV2 +* tf.raw_ops.ImageSummary +* tf.raw_ops.ImmutableConst +* tf.raw_ops.ImportEvent +* tf.raw_ops.InTopK +* tf.raw_ops.InTopKV2 +* tf.raw_ops.InfeedDequeue +* tf.raw_ops.InfeedDequeueTuple +* tf.raw_ops.InfeedEnqueue +* tf.raw_ops.InfeedEnqueuePrelinearizedBuffer +* tf.raw_ops.InfeedEnqueueTuple +* tf.raw_ops.InitializeTable +* tf.raw_ops.InitializeTableFromTextFile +* tf.raw_ops.InitializeTableFromTextFileV2 +* tf.raw_ops.InitializeTableV2 +* tf.raw_ops.InplaceAdd +* tf.raw_ops.InplaceSub +* tf.raw_ops.InplaceUpdate +* tf.raw_ops.InterleaveDataset +* tf.raw_ops.Inv +* tf.raw_ops.InvGrad +* tf.raw_ops.Invert +* tf.raw_ops.InvertPermutation +* tf.raw_ops.IsBoostedTreesEnsembleInitialized +* tf.raw_ops.IsBoostedTreesQuantileStreamResourceInitialized +* tf.raw_ops.IsFinite +* tf.raw_ops.IsInf +* tf.raw_ops.IsNan +* tf.raw_ops.IsVariableInitialized +* tf.raw_ops.Iterator +* tf.raw_ops.IteratorFromStringHandle +* tf.raw_ops.IteratorFromStringHandleV2 +* tf.raw_ops.IteratorGetDevice +* tf.raw_ops.IteratorGetNext +* tf.raw_ops.IteratorGetNextAsOptional +* tf.raw_ops.IteratorGetNextSync +* tf.raw_ops.IteratorToStringHandle +* tf.raw_ops.IteratorV2 +* tf.raw_ops.L2Loss +* tf.raw_ops.LMDBDataset +* tf.raw_ops.LMDBReader +* tf.raw_ops.LRN +* tf.raw_ops.LRNGrad +* tf.raw_ops.LSTMBlockCell +* tf.raw_ops.LSTMBlockCellGrad +* tf.raw_ops.LatencyStatsDataset +* tf.raw_ops.LeakyRelu +* tf.raw_ops.LeakyReluGrad +* tf.raw_ops.LearnedUnigramCandidateSampler +* tf.raw_ops.LeftShift +* tf.raw_ops.LegacyParallelInterleaveDatasetV2 +* tf.raw_ops.Less +* tf.raw_ops.LessEqual +* tf.raw_ops.Lgamma +* tf.raw_ops.LinSpace +* tf.raw_ops.ListDiff +* tf.raw_ops.LoadAndRemapMatrix +* tf.raw_ops.LoadTPUEmbeddingADAMParameters +* tf.raw_ops.LoadTPUEmbeddingADAMParametersGradAccumDebug +* tf.raw_ops.LoadTPUEmbeddingAdadeltaParameters +* tf.raw_ops.LoadTPUEmbeddingAdadeltaParametersGradAccumDebug 
+* tf.raw_ops.LoadTPUEmbeddingAdagradParameters +* tf.raw_ops.LoadTPUEmbeddingAdagradParametersGradAccumDebug +* tf.raw_ops.LoadTPUEmbeddingCenteredRMSPropParameters +* tf.raw_ops.LoadTPUEmbeddingFTRLParameters +* tf.raw_ops.LoadTPUEmbeddingFTRLParametersGradAccumDebug +* tf.raw_ops.LoadTPUEmbeddingMDLAdagradLightParameters +* tf.raw_ops.LoadTPUEmbeddingMomentumParameters +* tf.raw_ops.LoadTPUEmbeddingMomentumParametersGradAccumDebug +* tf.raw_ops.LoadTPUEmbeddingProximalAdagradParameters +* tf.raw_ops.LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug +* tf.raw_ops.LoadTPUEmbeddingRMSPropParameters +* tf.raw_ops.LoadTPUEmbeddingRMSPropParametersGradAccumDebug +* tf.raw_ops.LoadTPUEmbeddingStochasticGradientDescentParameters +* tf.raw_ops.Log +* tf.raw_ops.Log1p +* tf.raw_ops.LogMatrixDeterminant +* tf.raw_ops.LogSoftmax +* tf.raw_ops.LogUniformCandidateSampler +* tf.raw_ops.LogicalAnd +* tf.raw_ops.LogicalNot +* tf.raw_ops.LogicalOr +* tf.raw_ops.LookupTableExport +* tf.raw_ops.LookupTableExportV2 +* tf.raw_ops.LookupTableFind +* tf.raw_ops.LookupTableFindV2 +* tf.raw_ops.LookupTableImport +* tf.raw_ops.LookupTableImportV2 +* tf.raw_ops.LookupTableInsert +* tf.raw_ops.LookupTableInsertV2 +* tf.raw_ops.LookupTableRemoveV2 +* tf.raw_ops.LookupTableSize +* tf.raw_ops.LookupTableSizeV2 +* tf.raw_ops.LoopCond +* tf.raw_ops.LowerBound +* tf.raw_ops.Lu +* tf.raw_ops.MakeIterator +* tf.raw_ops.MapAndBatchDataset +* tf.raw_ops.MapClear +* tf.raw_ops.MapDataset +* tf.raw_ops.MapDefun +* tf.raw_ops.MapIncompleteSize +* tf.raw_ops.MapPeek +* tf.raw_ops.MapSize +* tf.raw_ops.MapStage +* tf.raw_ops.MapUnstage +* tf.raw_ops.MapUnstageNoKey +* tf.raw_ops.MatMul +* tf.raw_ops.MatchingFiles +* tf.raw_ops.MatchingFilesDataset +* tf.raw_ops.MatrixBandPart +* tf.raw_ops.MatrixDeterminant +* tf.raw_ops.MatrixDiag +* tf.raw_ops.MatrixDiagPart +* tf.raw_ops.MatrixDiagPartV2 +* tf.raw_ops.MatrixDiagPartV3 +* tf.raw_ops.MatrixDiagV2 +* tf.raw_ops.MatrixDiagV3 +* tf.raw_ops.MatrixExponential +* tf.raw_ops.MatrixInverse +* tf.raw_ops.MatrixLogarithm +* tf.raw_ops.MatrixSetDiag +* tf.raw_ops.MatrixSetDiagV2 +* tf.raw_ops.MatrixSetDiagV3 +* tf.raw_ops.MatrixSolve +* tf.raw_ops.MatrixSolveLs +* tf.raw_ops.MatrixSquareRoot +* tf.raw_ops.MatrixTriangularSolve +* tf.raw_ops.Max +* tf.raw_ops.MaxIntraOpParallelismDataset +* tf.raw_ops.MaxPool +* tf.raw_ops.MaxPool3D +* tf.raw_ops.MaxPool3DGrad +* tf.raw_ops.MaxPool3DGradGrad +* tf.raw_ops.MaxPoolGrad +* tf.raw_ops.MaxPoolGradGrad +* tf.raw_ops.MaxPoolGradGradV2 +* tf.raw_ops.MaxPoolGradGradWithArgmax +* tf.raw_ops.MaxPoolGradV2 +* tf.raw_ops.MaxPoolGradWithArgmax +* tf.raw_ops.MaxPoolV2 +* tf.raw_ops.MaxPoolWithArgmax +* tf.raw_ops.Maximum +* tf.raw_ops.Mean +* tf.raw_ops.Merge +* tf.raw_ops.MergeSummary +* tf.raw_ops.MergeV2Checkpoints +* tf.raw_ops.Mfcc +* tf.raw_ops.Min +* tf.raw_ops.Minimum +* tf.raw_ops.MirrorPad +* tf.raw_ops.MirrorPadGrad +* tf.raw_ops.Mod +* tf.raw_ops.ModelDataset +* tf.raw_ops.Mul +* tf.raw_ops.MulNoNan +* tf.raw_ops.MultiDeviceIterator +* tf.raw_ops.MultiDeviceIteratorFromStringHandle +* tf.raw_ops.MultiDeviceIteratorGetNextFromShard +* tf.raw_ops.MultiDeviceIteratorInit +* tf.raw_ops.MultiDeviceIteratorToStringHandle +* tf.raw_ops.Multinomial +* tf.raw_ops.MutableDenseHashTable +* tf.raw_ops.MutableDenseHashTableV2 +* tf.raw_ops.MutableHashTable +* tf.raw_ops.MutableHashTableOfTensors +* tf.raw_ops.MutableHashTableOfTensorsV2 +* tf.raw_ops.MutableHashTableV2 +* tf.raw_ops.MutexLock +* tf.raw_ops.MutexV2 +* tf.raw_ops.NcclAllReduce +* 
tf.raw_ops.NcclBroadcast +* tf.raw_ops.NcclReduce +* tf.raw_ops.Ndtri +* tf.raw_ops.Neg +* tf.raw_ops.NextAfter +* tf.raw_ops.NextIteration +* tf.raw_ops.NoOp +* tf.raw_ops.NonDeterministicInts +* tf.raw_ops.NonMaxSuppression +* tf.raw_ops.NonMaxSuppressionV2 +* tf.raw_ops.NonMaxSuppressionV3 +* tf.raw_ops.NonMaxSuppressionV4 +* tf.raw_ops.NonMaxSuppressionV5 +* tf.raw_ops.NonMaxSuppressionWithOverlaps +* tf.raw_ops.NonSerializableDataset +* tf.raw_ops.NotEqual +* tf.raw_ops.NthElement +* tf.raw_ops.OneHot +* tf.raw_ops.OneShotIterator +* tf.raw_ops.OnesLike +* tf.raw_ops.OptimizeDataset +* tf.raw_ops.OptionalFromValue +* tf.raw_ops.OptionalGetValue +* tf.raw_ops.OptionalHasValue +* tf.raw_ops.OptionalNone +* tf.raw_ops.OrderedMapClear +* tf.raw_ops.OrderedMapIncompleteSize +* tf.raw_ops.OrderedMapPeek +* tf.raw_ops.OrderedMapSize +* tf.raw_ops.OrderedMapStage +* tf.raw_ops.OrderedMapUnstage +* tf.raw_ops.OrderedMapUnstageNoKey +* tf.raw_ops.OutfeedDequeue +* tf.raw_ops.OutfeedDequeueTuple +* tf.raw_ops.OutfeedEnqueue +* tf.raw_ops.OutfeedEnqueueTuple +* tf.raw_ops.Pack +* tf.raw_ops.Pad +* tf.raw_ops.PadV2 +* tf.raw_ops.PaddedBatchDataset +* tf.raw_ops.PaddedBatchDatasetV2 +* tf.raw_ops.PaddingFIFOQueue +* tf.raw_ops.PaddingFIFOQueueV2 +* tf.raw_ops.ParallelConcat +* tf.raw_ops.ParallelDynamicStitch +* tf.raw_ops.ParallelInterleaveDataset +* tf.raw_ops.ParallelInterleaveDatasetV2 +* tf.raw_ops.ParallelInterleaveDatasetV3 +* tf.raw_ops.ParallelInterleaveDatasetV4 +* tf.raw_ops.ParallelMapDataset +* tf.raw_ops.ParallelMapDatasetV2 +* tf.raw_ops.ParameterizedTruncatedNormal +* tf.raw_ops.ParseExample +* tf.raw_ops.ParseExampleDataset +* tf.raw_ops.ParseExampleDatasetV2 +* tf.raw_ops.ParseExampleV2 +* tf.raw_ops.ParseSequenceExample +* tf.raw_ops.ParseSequenceExampleV2 +* tf.raw_ops.ParseSingleExample +* tf.raw_ops.ParseSingleSequenceExample +* tf.raw_ops.ParseTensor +* tf.raw_ops.PartitionedCall +* tf.raw_ops.Placeholder +* tf.raw_ops.PlaceholderV2 +* tf.raw_ops.PlaceholderWithDefault +* tf.raw_ops.Polygamma +* tf.raw_ops.PopulationCount +* tf.raw_ops.Pow +* tf.raw_ops.PrefetchDataset +* tf.raw_ops.Prelinearize +* tf.raw_ops.PrelinearizeTuple +* tf.raw_ops.PreventGradient +* tf.raw_ops.Print +* tf.raw_ops.PrintV2 +* tf.raw_ops.PriorityQueue +* tf.raw_ops.PriorityQueueV2 +* tf.raw_ops.PrivateThreadPoolDataset +* tf.raw_ops.Prod +* tf.raw_ops.PyFunc +* tf.raw_ops.PyFuncStateless +* tf.raw_ops.Qr +* tf.raw_ops.QuantizeAndDequantize +* tf.raw_ops.QuantizeAndDequantizeV2 +* tf.raw_ops.QuantizeAndDequantizeV3 +* tf.raw_ops.QuantizeDownAndShrinkRange +* tf.raw_ops.QuantizeV2 +* tf.raw_ops.QuantizedAdd +* tf.raw_ops.QuantizedAvgPool +* tf.raw_ops.QuantizedBatchNormWithGlobalNormalization +* tf.raw_ops.QuantizedBiasAdd +* tf.raw_ops.QuantizedConcat +* tf.raw_ops.QuantizedConv2D +* tf.raw_ops.QuantizedConv2DAndRelu +* tf.raw_ops.QuantizedConv2DAndReluAndRequantize +* tf.raw_ops.QuantizedConv2DAndRequantize +* tf.raw_ops.QuantizedConv2DPerChannel +* tf.raw_ops.QuantizedConv2DWithBias +* tf.raw_ops.QuantizedConv2DWithBiasAndRelu +* tf.raw_ops.QuantizedConv2DWithBiasAndReluAndRequantize +* tf.raw_ops.QuantizedConv2DWithBiasAndRequantize +* tf.raw_ops.QuantizedConv2DWithBiasSignedSumAndReluAndRequantize +* tf.raw_ops.QuantizedConv2DWithBiasSumAndRelu +* tf.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize +* tf.raw_ops.QuantizedDepthwiseConv2D +* tf.raw_ops.QuantizedDepthwiseConv2DWithBias +* tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndRelu +* 
tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize +* tf.raw_ops.QuantizedInstanceNorm +* tf.raw_ops.QuantizedMatMul +* tf.raw_ops.QuantizedMatMulWithBias +* tf.raw_ops.QuantizedMatMulWithBiasAndDequantize +* tf.raw_ops.QuantizedMatMulWithBiasAndRelu +* tf.raw_ops.QuantizedMatMulWithBiasAndReluAndRequantize +* tf.raw_ops.QuantizedMatMulWithBiasAndRequantize +* tf.raw_ops.QuantizedMaxPool +* tf.raw_ops.QuantizedMul +* tf.raw_ops.QuantizedRelu +* tf.raw_ops.QuantizedRelu6 +* tf.raw_ops.QuantizedReluX +* tf.raw_ops.QuantizedReshape +* tf.raw_ops.QuantizedResizeBilinear +* tf.raw_ops.QueueClose +* tf.raw_ops.QueueCloseV2 +* tf.raw_ops.QueueDequeue +* tf.raw_ops.QueueDequeueMany +* tf.raw_ops.QueueDequeueManyV2 +* tf.raw_ops.QueueDequeueUpTo +* tf.raw_ops.QueueDequeueUpToV2 +* tf.raw_ops.QueueDequeueV2 +* tf.raw_ops.QueueEnqueue +* tf.raw_ops.QueueEnqueueMany +* tf.raw_ops.QueueEnqueueManyV2 +* tf.raw_ops.QueueEnqueueV2 +* tf.raw_ops.QueueIsClosed +* tf.raw_ops.QueueIsClosedV2 +* tf.raw_ops.QueueSize +* tf.raw_ops.QueueSizeV2 +* tf.raw_ops.RFFT +* tf.raw_ops.RFFT2D +* tf.raw_ops.RFFT3D +* tf.raw_ops.RGBToHSV +* tf.raw_ops.RaggedGather +* tf.raw_ops.RaggedRange +* tf.raw_ops.RaggedTensorFromVariant +* tf.raw_ops.RaggedTensorToSparse +* tf.raw_ops.RaggedTensorToTensor +* tf.raw_ops.RaggedTensorToVariant +* tf.raw_ops.RandomCrop +* tf.raw_ops.RandomDataset +* tf.raw_ops.RandomGamma +* tf.raw_ops.RandomGammaGrad +* tf.raw_ops.RandomPoisson +* tf.raw_ops.RandomPoissonV2 +* tf.raw_ops.RandomShuffle +* tf.raw_ops.RandomShuffleQueue +* tf.raw_ops.RandomShuffleQueueV2 +* tf.raw_ops.RandomStandardNormal +* tf.raw_ops.RandomUniform +* tf.raw_ops.RandomUniformInt +* tf.raw_ops.Range +* tf.raw_ops.RangeDataset +* tf.raw_ops.Rank +* tf.raw_ops.ReadFile +* tf.raw_ops.ReadVariableOp +* tf.raw_ops.ReaderNumRecordsProduced +* tf.raw_ops.ReaderNumRecordsProducedV2 +* tf.raw_ops.ReaderNumWorkUnitsCompleted +* tf.raw_ops.ReaderNumWorkUnitsCompletedV2 +* tf.raw_ops.ReaderRead +* tf.raw_ops.ReaderReadUpTo +* tf.raw_ops.ReaderReadUpToV2 +* tf.raw_ops.ReaderReadV2 +* tf.raw_ops.ReaderReset +* tf.raw_ops.ReaderResetV2 +* tf.raw_ops.ReaderRestoreState +* tf.raw_ops.ReaderRestoreStateV2 +* tf.raw_ops.ReaderSerializeState +* tf.raw_ops.ReaderSerializeStateV2 +* tf.raw_ops.Real +* tf.raw_ops.RealDiv +* tf.raw_ops.RebatchDataset +* tf.raw_ops.Reciprocal +* tf.raw_ops.ReciprocalGrad +* tf.raw_ops.RecordInput +* tf.raw_ops.Recv +* tf.raw_ops.RecvTPUEmbeddingActivations +* tf.raw_ops.ReduceDataset +* tf.raw_ops.ReduceJoin +* tf.raw_ops.RefEnter +* tf.raw_ops.RefExit +* tf.raw_ops.RefIdentity +* tf.raw_ops.RefMerge +* tf.raw_ops.RefNextIteration +* tf.raw_ops.RefSelect +* tf.raw_ops.RefSwitch +* tf.raw_ops.RegexFullMatch +* tf.raw_ops.RegexReplace +* tf.raw_ops.Relu +* tf.raw_ops.Relu6 +* tf.raw_ops.Relu6Grad +* tf.raw_ops.ReluGrad +* tf.raw_ops.RemoteCall +* tf.raw_ops.RepeatDataset +* tf.raw_ops.RequantizationRange +* tf.raw_ops.RequantizationRangePerChannel +* tf.raw_ops.Requantize +* tf.raw_ops.RequantizePerChannel +* tf.raw_ops.Reshape +* tf.raw_ops.ResizeArea +* tf.raw_ops.ResizeBicubic +* tf.raw_ops.ResizeBicubicGrad +* tf.raw_ops.ResizeBilinear +* tf.raw_ops.ResizeBilinearGrad +* tf.raw_ops.ResizeNearestNeighbor +* tf.raw_ops.ResizeNearestNeighborGrad +* tf.raw_ops.ResourceAccumulatorApplyGradient +* tf.raw_ops.ResourceAccumulatorNumAccumulated +* tf.raw_ops.ResourceAccumulatorSetGlobalStep +* tf.raw_ops.ResourceAccumulatorTakeGradient +* tf.raw_ops.ResourceApplyAdaMax +* tf.raw_ops.ResourceApplyAdadelta +* 
tf.raw_ops.ResourceApplyAdagrad +* tf.raw_ops.ResourceApplyAdagradDA +* tf.raw_ops.ResourceApplyAdagradV2 +* tf.raw_ops.ResourceApplyAdam +* tf.raw_ops.ResourceApplyAdamWithAmsgrad +* tf.raw_ops.ResourceApplyAddSign +* tf.raw_ops.ResourceApplyCenteredRMSProp +* tf.raw_ops.ResourceApplyFtrl +* tf.raw_ops.ResourceApplyFtrlV2 +* tf.raw_ops.ResourceApplyGradientDescent +* tf.raw_ops.ResourceApplyKerasMomentum +* tf.raw_ops.ResourceApplyMomentum +* tf.raw_ops.ResourceApplyPowerSign +* tf.raw_ops.ResourceApplyProximalAdagrad +* tf.raw_ops.ResourceApplyProximalGradientDescent +* tf.raw_ops.ResourceApplyRMSProp +* tf.raw_ops.ResourceConditionalAccumulator +* tf.raw_ops.ResourceCountUpTo +* tf.raw_ops.ResourceGather +* tf.raw_ops.ResourceGatherNd +* tf.raw_ops.ResourceScatterAdd +* tf.raw_ops.ResourceScatterDiv +* tf.raw_ops.ResourceScatterMax +* tf.raw_ops.ResourceScatterMin +* tf.raw_ops.ResourceScatterMul +* tf.raw_ops.ResourceScatterNdAdd +* tf.raw_ops.ResourceScatterNdSub +* tf.raw_ops.ResourceScatterNdUpdate +* tf.raw_ops.ResourceScatterSub +* tf.raw_ops.ResourceScatterUpdate +* tf.raw_ops.ResourceSparseApplyAdadelta +* tf.raw_ops.ResourceSparseApplyAdagrad +* tf.raw_ops.ResourceSparseApplyAdagradDA +* tf.raw_ops.ResourceSparseApplyAdagradV2 +* tf.raw_ops.ResourceSparseApplyCenteredRMSProp +* tf.raw_ops.ResourceSparseApplyFtrl +* tf.raw_ops.ResourceSparseApplyFtrlV2 +* tf.raw_ops.ResourceSparseApplyKerasMomentum +* tf.raw_ops.ResourceSparseApplyMomentum +* tf.raw_ops.ResourceSparseApplyProximalAdagrad +* tf.raw_ops.ResourceSparseApplyProximalGradientDescent +* tf.raw_ops.ResourceSparseApplyRMSProp +* tf.raw_ops.ResourceStridedSliceAssign +* tf.raw_ops.Restore +* tf.raw_ops.RestoreSlice +* tf.raw_ops.RestoreV2 +* tf.raw_ops.RetrieveTPUEmbeddingADAMParameters +* tf.raw_ops.RetrieveTPUEmbeddingADAMParametersGradAccumDebug +* tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParameters +* tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug +* tf.raw_ops.RetrieveTPUEmbeddingAdagradParameters +* tf.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug +* tf.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters +* tf.raw_ops.RetrieveTPUEmbeddingFTRLParameters +* tf.raw_ops.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug +* tf.raw_ops.RetrieveTPUEmbeddingMDLAdagradLightParameters +* tf.raw_ops.RetrieveTPUEmbeddingMomentumParameters +* tf.raw_ops.RetrieveTPUEmbeddingMomentumParametersGradAccumDebug +* tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters +* tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug +* tf.raw_ops.RetrieveTPUEmbeddingRMSPropParameters +* tf.raw_ops.RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug +* tf.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParameters +* tf.raw_ops.Reverse +* tf.raw_ops.ReverseSequence +* tf.raw_ops.ReverseV2 +* tf.raw_ops.RightShift +* tf.raw_ops.Rint +* tf.raw_ops.RngSkip +* tf.raw_ops.Roll +* tf.raw_ops.Round +* tf.raw_ops.Rsqrt +* tf.raw_ops.RsqrtGrad +* tf.raw_ops.SampleDistortedBoundingBox +* tf.raw_ops.SampleDistortedBoundingBoxV2 +* tf.raw_ops.SamplingDataset +* tf.raw_ops.Save +* tf.raw_ops.SaveSlices +* tf.raw_ops.SaveV2 +* tf.raw_ops.ScalarSummary +* tf.raw_ops.ScaleAndTranslate +* tf.raw_ops.ScaleAndTranslateGrad +* tf.raw_ops.ScanDataset +* tf.raw_ops.ScatterAdd +* tf.raw_ops.ScatterDiv +* tf.raw_ops.ScatterMax +* tf.raw_ops.ScatterMin +* tf.raw_ops.ScatterMul +* tf.raw_ops.ScatterNd +* tf.raw_ops.ScatterNdAdd +* tf.raw_ops.ScatterNdNonAliasingAdd +* tf.raw_ops.ScatterNdSub +* 
tf.raw_ops.ScatterNdUpdate +* tf.raw_ops.ScatterSub +* tf.raw_ops.ScatterUpdate +* tf.raw_ops.SdcaFprint +* tf.raw_ops.SdcaOptimizer +* tf.raw_ops.SdcaOptimizerV2 +* tf.raw_ops.SdcaShrinkL1 +* tf.raw_ops.SegmentMax +* tf.raw_ops.SegmentMean +* tf.raw_ops.SegmentMin +* tf.raw_ops.SegmentProd +* tf.raw_ops.SegmentSum +* tf.raw_ops.Select +* tf.raw_ops.SelectV2 +* tf.raw_ops.SelfAdjointEig +* tf.raw_ops.SelfAdjointEigV2 +* tf.raw_ops.Selu +* tf.raw_ops.SeluGrad +* tf.raw_ops.Send +* tf.raw_ops.SendTPUEmbeddingGradients +* tf.raw_ops.SerializeIterator +* tf.raw_ops.SerializeManySparse +* tf.raw_ops.SerializeSparse +* tf.raw_ops.SerializeTensor +* tf.raw_ops.SetSize +* tf.raw_ops.SetStatsAggregatorDataset +* tf.raw_ops.Shape +* tf.raw_ops.ShapeN +* tf.raw_ops.ShardDataset +* tf.raw_ops.ShardedFilename +* tf.raw_ops.ShardedFilespec +* tf.raw_ops.ShuffleAndRepeatDataset +* tf.raw_ops.ShuffleDataset +* tf.raw_ops.ShuffleDatasetV2 +* tf.raw_ops.ShutdownDistributedTPU +* tf.raw_ops.Sigmoid +* tf.raw_ops.SigmoidGrad +* tf.raw_ops.Sign +* tf.raw_ops.Sin +* tf.raw_ops.Sinh +* tf.raw_ops.Size +* tf.raw_ops.SkipDataset +* tf.raw_ops.SleepDataset +* tf.raw_ops.Slice +* tf.raw_ops.SlidingWindowDataset +* tf.raw_ops.Snapshot +* tf.raw_ops.SnapshotDataset +* tf.raw_ops.SobolSample +* tf.raw_ops.Softmax +* tf.raw_ops.SoftmaxCrossEntropyWithLogits +* tf.raw_ops.Softplus +* tf.raw_ops.SoftplusGrad +* tf.raw_ops.Softsign +* tf.raw_ops.SoftsignGrad +* tf.raw_ops.SpaceToBatch +* tf.raw_ops.SpaceToBatchND +* tf.raw_ops.SpaceToDepth +* tf.raw_ops.SparseAccumulatorApplyGradient +* tf.raw_ops.SparseAccumulatorTakeGradient +* tf.raw_ops.SparseAdd +* tf.raw_ops.SparseAddGrad +* tf.raw_ops.SparseApplyAdadelta +* tf.raw_ops.SparseApplyAdagrad +* tf.raw_ops.SparseApplyAdagradDA +* tf.raw_ops.SparseApplyAdagradV2 +* tf.raw_ops.SparseApplyCenteredRMSProp +* tf.raw_ops.SparseApplyFtrl +* tf.raw_ops.SparseApplyFtrlV2 +* tf.raw_ops.SparseApplyMomentum +* tf.raw_ops.SparseApplyProximalAdagrad +* tf.raw_ops.SparseApplyProximalGradientDescent +* tf.raw_ops.SparseApplyRMSProp +* tf.raw_ops.SparseConcat +* tf.raw_ops.SparseConditionalAccumulator +* tf.raw_ops.SparseCross +* tf.raw_ops.SparseDenseCwiseAdd +* tf.raw_ops.SparseDenseCwiseDiv +* tf.raw_ops.SparseDenseCwiseMul +* tf.raw_ops.SparseFillEmptyRows +* tf.raw_ops.SparseFillEmptyRowsGrad +* tf.raw_ops.SparseMatMul +* tf.raw_ops.SparseMatrixAdd +* tf.raw_ops.SparseMatrixMatMul +* tf.raw_ops.SparseMatrixMul +* tf.raw_ops.SparseMatrixNNZ +* tf.raw_ops.SparseMatrixOrderingAMD +* tf.raw_ops.SparseMatrixSoftmax +* tf.raw_ops.SparseMatrixSoftmaxGrad +* tf.raw_ops.SparseMatrixSparseCholesky +* tf.raw_ops.SparseMatrixSparseMatMul +* tf.raw_ops.SparseMatrixTranspose +* tf.raw_ops.SparseMatrixZeros +* tf.raw_ops.SparseReduceMax +* tf.raw_ops.SparseReduceMaxSparse +* tf.raw_ops.SparseReduceSum +* tf.raw_ops.SparseReduceSumSparse +* tf.raw_ops.SparseReorder +* tf.raw_ops.SparseReshape +* tf.raw_ops.SparseSegmentMean +* tf.raw_ops.SparseSegmentMeanGrad +* tf.raw_ops.SparseSegmentMeanWithNumSegments +* tf.raw_ops.SparseSegmentSqrtN +* tf.raw_ops.SparseSegmentSqrtNGrad +* tf.raw_ops.SparseSegmentSqrtNWithNumSegments +* tf.raw_ops.SparseSegmentSum +* tf.raw_ops.SparseSegmentSumWithNumSegments +* tf.raw_ops.SparseSlice +* tf.raw_ops.SparseSliceGrad +* tf.raw_ops.SparseSoftmax +* tf.raw_ops.SparseSoftmaxCrossEntropyWithLogits +* tf.raw_ops.SparseSparseMaximum +* tf.raw_ops.SparseSparseMinimum +* tf.raw_ops.SparseSplit +* tf.raw_ops.SparseTensorDenseAdd +* tf.raw_ops.SparseTensorDenseMatMul +* 
tf.raw_ops.SparseTensorSliceDataset +* tf.raw_ops.SparseTensorToCSRSparseMatrix +* tf.raw_ops.SparseToDense +* tf.raw_ops.SparseToSparseSetOperation +* tf.raw_ops.Spence +* tf.raw_ops.Split +* tf.raw_ops.SplitV +* tf.raw_ops.SqlDataset +* tf.raw_ops.Sqrt +* tf.raw_ops.SqrtGrad +* tf.raw_ops.Square +* tf.raw_ops.SquaredDifference +* tf.raw_ops.Squeeze +* tf.raw_ops.Stack +* tf.raw_ops.StackClose +* tf.raw_ops.StackCloseV2 +* tf.raw_ops.StackPop +* tf.raw_ops.StackPopV2 +* tf.raw_ops.StackPush +* tf.raw_ops.StackPushV2 +* tf.raw_ops.StackV2 +* tf.raw_ops.Stage +* tf.raw_ops.StageClear +* tf.raw_ops.StagePeek +* tf.raw_ops.StageSize +* tf.raw_ops.StatefulPartitionedCall +* tf.raw_ops.StatefulRandomBinomial +* tf.raw_ops.StatefulStandardNormal +* tf.raw_ops.StatefulStandardNormalV2 +* tf.raw_ops.StatefulTruncatedNormal +* tf.raw_ops.StatefulUniform +* tf.raw_ops.StatefulUniformFullInt +* tf.raw_ops.StatefulUniformInt +* tf.raw_ops.StatelessIf +* tf.raw_ops.StatelessMultinomial +* tf.raw_ops.StatelessRandomBinomial +* tf.raw_ops.StatelessRandomGammaV2 +* tf.raw_ops.StatelessRandomNormal +* tf.raw_ops.StatelessRandomPoisson +* tf.raw_ops.StatelessRandomUniform +* tf.raw_ops.StatelessRandomUniformFullInt +* tf.raw_ops.StatelessRandomUniformInt +* tf.raw_ops.StatelessTruncatedNormal +* tf.raw_ops.StatelessWhile +* tf.raw_ops.StaticRegexFullMatch +* tf.raw_ops.StaticRegexReplace +* tf.raw_ops.StatsAggregatorHandle +* tf.raw_ops.StatsAggregatorHandleV2 +* tf.raw_ops.StatsAggregatorSetSummaryWriter +* tf.raw_ops.StatsAggregatorSummary +* tf.raw_ops.StopGradient +* tf.raw_ops.StridedSlice +* tf.raw_ops.StridedSliceAssign +* tf.raw_ops.StridedSliceGrad +* tf.raw_ops.StringFormat +* tf.raw_ops.StringJoin +* tf.raw_ops.StringLength +* tf.raw_ops.StringLower +* tf.raw_ops.StringNGrams +* tf.raw_ops.StringSplit +* tf.raw_ops.StringSplitV2 +* tf.raw_ops.StringStrip +* tf.raw_ops.StringToHashBucket +* tf.raw_ops.StringToHashBucketFast +* tf.raw_ops.StringToHashBucketStrong +* tf.raw_ops.StringToNumber +* tf.raw_ops.StringUpper +* tf.raw_ops.Sub +* tf.raw_ops.Substr +* tf.raw_ops.Sum +* tf.raw_ops.SummaryWriter +* tf.raw_ops.Svd +* tf.raw_ops.Switch +* tf.raw_ops.SymbolicGradient +* tf.raw_ops.TFRecordDataset +* tf.raw_ops.TFRecordReader +* tf.raw_ops.TFRecordReaderV2 +* tf.raw_ops.TPUCompilationResult +* tf.raw_ops.TPUEmbeddingActivations +* tf.raw_ops.TPUOrdinalSelector +* tf.raw_ops.TPUPartitionedCall +* tf.raw_ops.TPUReplicateMetadata +* tf.raw_ops.TPUReplicatedInput +* tf.raw_ops.TPUReplicatedOutput +* tf.raw_ops.TakeDataset +* tf.raw_ops.TakeManySparseFromTensorsMap +* tf.raw_ops.TakeWhileDataset +* tf.raw_ops.Tan +* tf.raw_ops.Tanh +* tf.raw_ops.TanhGrad +* tf.raw_ops.TemporaryVariable +* tf.raw_ops.TensorArray +* tf.raw_ops.TensorArrayClose +* tf.raw_ops.TensorArrayCloseV2 +* tf.raw_ops.TensorArrayCloseV3 +* tf.raw_ops.TensorArrayConcat +* tf.raw_ops.TensorArrayConcatV2 +* tf.raw_ops.TensorArrayConcatV3 +* tf.raw_ops.TensorArrayGather +* tf.raw_ops.TensorArrayGatherV2 +* tf.raw_ops.TensorArrayGatherV3 +* tf.raw_ops.TensorArrayGrad +* tf.raw_ops.TensorArrayGradV2 +* tf.raw_ops.TensorArrayGradV3 +* tf.raw_ops.TensorArrayGradWithShape +* tf.raw_ops.TensorArrayPack +* tf.raw_ops.TensorArrayRead +* tf.raw_ops.TensorArrayReadV2 +* tf.raw_ops.TensorArrayReadV3 +* tf.raw_ops.TensorArrayScatter +* tf.raw_ops.TensorArrayScatterV2 +* tf.raw_ops.TensorArrayScatterV3 +* tf.raw_ops.TensorArraySize +* tf.raw_ops.TensorArraySizeV2 +* tf.raw_ops.TensorArraySizeV3 +* tf.raw_ops.TensorArraySplit +* 
tf.raw_ops.TensorArraySplitV2 +* tf.raw_ops.TensorArraySplitV3 +* tf.raw_ops.TensorArrayUnpack +* tf.raw_ops.TensorArrayV2 +* tf.raw_ops.TensorArrayV3 +* tf.raw_ops.TensorArrayWrite +* tf.raw_ops.TensorArrayWriteV2 +* tf.raw_ops.TensorArrayWriteV3 +* tf.raw_ops.TensorDataset +* tf.raw_ops.TensorListConcat +* tf.raw_ops.TensorListConcatLists +* tf.raw_ops.TensorListConcatV2 +* tf.raw_ops.TensorListElementShape +* tf.raw_ops.TensorListFromTensor +* tf.raw_ops.TensorListGather +* tf.raw_ops.TensorListGetItem +* tf.raw_ops.TensorListLength +* tf.raw_ops.TensorListPopBack +* tf.raw_ops.TensorListPushBack +* tf.raw_ops.TensorListPushBackBatch +* tf.raw_ops.TensorListReserve +* tf.raw_ops.TensorListResize +* tf.raw_ops.TensorListScatter +* tf.raw_ops.TensorListScatterIntoExistingList +* tf.raw_ops.TensorListScatterV2 +* tf.raw_ops.TensorListSetItem +* tf.raw_ops.TensorListSplit +* tf.raw_ops.TensorListStack +* tf.raw_ops.TensorScatterAdd +* tf.raw_ops.TensorScatterSub +* tf.raw_ops.TensorScatterUpdate +* tf.raw_ops.TensorSliceDataset +* tf.raw_ops.TensorStridedSliceUpdate +* tf.raw_ops.TensorSummary +* tf.raw_ops.TensorSummaryV2 +* tf.raw_ops.TextLineDataset +* tf.raw_ops.TextLineReader +* tf.raw_ops.TextLineReaderV2 +* tf.raw_ops.ThreadPoolDataset +* tf.raw_ops.ThreadPoolHandle +* tf.raw_ops.ThreadUnsafeUnigramCandidateSampler +* tf.raw_ops.Tile +* tf.raw_ops.TileGrad +* tf.raw_ops.Timestamp +* tf.raw_ops.ToBool +* tf.raw_ops.TopK +* tf.raw_ops.TopKV2 +* tf.raw_ops.Transpose +* tf.raw_ops.TridiagonalMatMul +* tf.raw_ops.TridiagonalSolve +* tf.raw_ops.TruncateDiv +* tf.raw_ops.TruncateMod +* tf.raw_ops.TruncatedNormal +* tf.raw_ops.Unbatch +* tf.raw_ops.UnbatchDataset +* tf.raw_ops.UnbatchGrad +* tf.raw_ops.UnicodeDecode +* tf.raw_ops.UnicodeDecodeWithOffsets +* tf.raw_ops.UnicodeEncode +* tf.raw_ops.UnicodeScript +* tf.raw_ops.UnicodeTranscode +* tf.raw_ops.UniformCandidateSampler +* tf.raw_ops.Unique +* tf.raw_ops.UniqueDataset +* tf.raw_ops.UniqueV2 +* tf.raw_ops.UniqueWithCounts +* tf.raw_ops.UniqueWithCountsV2 +* tf.raw_ops.Unpack +* tf.raw_ops.UnravelIndex +* tf.raw_ops.UnsortedSegmentJoin +* tf.raw_ops.UnsortedSegmentMax +* tf.raw_ops.UnsortedSegmentMin +* tf.raw_ops.UnsortedSegmentProd +* tf.raw_ops.UnsortedSegmentSum +* tf.raw_ops.Unstage +* tf.raw_ops.UnwrapDatasetVariant +* tf.raw_ops.UpperBound +* tf.raw_ops.VarHandleOp +* tf.raw_ops.VarIsInitializedOp +* tf.raw_ops.Variable +* tf.raw_ops.VariableShape +* tf.raw_ops.VariableV2 +* tf.raw_ops.Where +* tf.raw_ops.While +* tf.raw_ops.WholeFileReader +* tf.raw_ops.WholeFileReaderV2 +* tf.raw_ops.WindowDataset +* tf.raw_ops.WorkerHeartbeat +* tf.raw_ops.WrapDatasetVariant +* tf.raw_ops.WriteAudioSummary +* tf.raw_ops.WriteFile +* tf.raw_ops.WriteGraphSummary +* tf.raw_ops.WriteHistogramSummary +* tf.raw_ops.WriteImageSummary +* tf.raw_ops.WriteRawProtoSummary +* tf.raw_ops.WriteScalarSummary +* tf.raw_ops.WriteSummary +* tf.raw_ops.Xdivy +* tf.raw_ops.Xlog1py +* tf.raw_ops.Xlogy +* tf.raw_ops.ZerosLike +* tf.raw_ops.Zeta +* tf.raw_ops.ZipDataset +* tf.realdiv +* tf.recompute_grad +* tf.reduce_all +* tf.reduce_any +* tf.reduce_logsumexp +* tf.reduce_max +* tf.reduce_mean +* tf.reduce_min +* tf.reduce_prod +* tf.reduce_sum +* tf.register_tensor_conversion_function +* tf.repeat +* tf.required_space_to_batch_paddings +* tf.reshape +* tf.reverse +* tf.reverse_sequence +* tf.roll +* tf.round +* tf.saturate_cast +* tf.saved_model +* tf.saved_model.Asset +* tf.saved_model.SaveOptions +* tf.saved_model.contains_saved_model +* tf.saved_model.load +* 
tf.saved_model.save +* tf.scalar_mul +* tf.scan +* tf.scatter_nd +* tf.searchsorted +* tf.sequence_mask +* tf.sets +* tf.sets.difference +* tf.sets.intersection +* tf.sets.size +* tf.sets.union +* tf.shape +* tf.shape_n +* tf.sigmoid +* tf.sign +* tf.signal +* tf.signal.dct +* tf.signal.fft +* tf.signal.fft2d +* tf.signal.fft3d +* tf.signal.fftshift +* tf.signal.frame +* tf.signal.hamming_window +* tf.signal.hann_window +* tf.signal.idct +* tf.signal.ifft +* tf.signal.ifft2d +* tf.signal.ifft3d +* tf.signal.ifftshift +* tf.signal.inverse_mdct +* tf.signal.inverse_stft +* tf.signal.inverse_stft_window_fn +* tf.signal.irfft +* tf.signal.irfft2d +* tf.signal.irfft3d +* tf.signal.kaiser_bessel_derived_window +* tf.signal.kaiser_window +* tf.signal.linear_to_mel_weight_matrix +* tf.signal.mdct +* tf.signal.mfccs_from_log_mel_spectrograms +* tf.signal.overlap_and_add +* tf.signal.rfft +* tf.signal.rfft2d +* tf.signal.rfft3d +* tf.signal.stft +* tf.signal.vorbis_window +* tf.sin +* tf.sinh +* tf.size +* tf.slice +* tf.sort +* tf.space_to_batch +* tf.space_to_batch_nd +* tf.sparse +* tf.sparse.SparseTensor +* tf.sparse.add +* tf.sparse.concat +* tf.sparse.cross +* tf.sparse.cross_hashed +* tf.sparse.expand_dims +* tf.sparse.eye +* tf.sparse.fill_empty_rows +* tf.sparse.from_dense +* tf.sparse.mask +* tf.sparse.maximum +* tf.sparse.minimum +* tf.sparse.reduce_max +* tf.sparse.reduce_sum +* tf.sparse.reorder +* tf.sparse.reset_shape +* tf.sparse.reshape +* tf.sparse.retain +* tf.sparse.segment_mean +* tf.sparse.segment_sqrt_n +* tf.sparse.segment_sum +* tf.sparse.slice +* tf.sparse.softmax +* tf.sparse.sparse_dense_matmul +* tf.sparse.split +* tf.sparse.to_dense +* tf.sparse.to_indicator +* tf.sparse.transpose +* tf.split +* tf.sqrt +* tf.square +* tf.squeeze +* tf.stack +* tf.stop_gradient +* tf.strided_slice +* tf.strings +* tf.strings.as_string +* tf.strings.bytes_split +* tf.strings.format +* tf.strings.join +* tf.strings.length +* tf.strings.lower +* tf.strings.ngrams +* tf.strings.reduce_join +* tf.strings.regex_full_match +* tf.strings.regex_replace +* tf.strings.split +* tf.strings.strip +* tf.strings.substr +* tf.strings.to_hash_bucket +* tf.strings.to_hash_bucket_fast +* tf.strings.to_hash_bucket_strong +* tf.strings.to_number +* tf.strings.unicode_decode +* tf.strings.unicode_decode_with_offsets +* tf.strings.unicode_encode +* tf.strings.unicode_script +* tf.strings.unicode_split +* tf.strings.unicode_split_with_offsets +* tf.strings.unicode_transcode +* tf.strings.unsorted_segment_join +* tf.strings.upper +* tf.subtract +* tf.summary +* tf.summary.SummaryWriter +* tf.summary.audio +* tf.summary.create_file_writer +* tf.summary.create_noop_writer +* tf.summary.experimental +* tf.summary.experimental.get_step +* tf.summary.experimental.set_step +* tf.summary.experimental.summary_scope +* tf.summary.experimental.write_raw_pb +* tf.summary.flush +* tf.summary.histogram +* tf.summary.image +* tf.summary.record_if +* tf.summary.scalar +* tf.summary.text +* tf.summary.trace_export +* tf.summary.trace_off +* tf.summary.trace_on +* tf.summary.write +* tf.switch_case +* tf.sysconfig +* tf.sysconfig.get_compile_flags +* tf.sysconfig.get_include +* tf.sysconfig.get_lib +* tf.sysconfig.get_link_flags +* tf.tan +* tf.tanh +* tf.tensor_scatter_nd_add +* tf.tensor_scatter_nd_sub +* tf.tensor_scatter_nd_update +* tf.tensordot +* tf.test +* tf.test.Benchmark +* tf.test.TestCase +* tf.test.TestCase.failureException +* tf.test.assert_equal_graph_def +* tf.test.benchmark_config +* tf.test.compute_gradient +* 
tf.test.create_local_cluster +* tf.test.gpu_device_name +* tf.test.is_built_with_cuda +* tf.test.is_built_with_gpu_support +* tf.test.is_built_with_rocm +* tf.test.is_built_with_xla +* tf.test.is_gpu_available +* tf.test.main +* tf.tile +* tf.timestamp +* tf.tpu +* tf.tpu.experimental +* tf.tpu.experimental.DeviceAssignment +* tf.tpu.experimental.initialize_tpu_system +* tf.tpu.experimental.shutdown_tpu_system +* tf.train +* tf.train.BytesList +* tf.train.Checkpoint +* tf.train.CheckpointManager +* tf.train.ClusterDef +* tf.train.ClusterSpec +* tf.train.Coordinator +* tf.train.Example +* tf.train.ExponentialMovingAverage +* tf.train.Feature +* tf.train.FeatureList +* tf.train.FeatureLists +* tf.train.FeatureLists.FeatureListEntry +* tf.train.Features +* tf.train.Features.FeatureEntry +* tf.train.FloatList +* tf.train.Int64List +* tf.train.JobDef +* tf.train.JobDef.TasksEntry +* tf.train.SequenceExample +* tf.train.ServerDef +* tf.train.checkpoints_iterator +* tf.train.experimental +* tf.train.experimental.DynamicLossScale +* tf.train.experimental.FixedLossScale +* tf.train.experimental.LossScale +* tf.train.experimental.PythonState +* tf.train.experimental.disable_mixed_precision_graph_rewrite +* tf.train.experimental.enable_mixed_precision_graph_rewrite +* tf.train.get_checkpoint_state +* tf.train.latest_checkpoint +* tf.train.list_variables +* tf.train.load_checkpoint +* tf.train.load_variable +* tf.transpose +* tf.truediv +* tf.truncatediv +* tf.truncatemod +* tf.tuple +* tf.unique +* tf.unique_with_counts +* tf.unravel_index +* tf.unstack +* tf.variable_creator_scope +* tf.vectorized_map +* tf.version +* tf.where +* tf.while_loop +* tf.xla +* tf.xla.experimental +* tf.xla.experimental.compile +* tf.xla.experimental.jit_scope +* tf.zeros +* tf.zeros_initializer +* tf.zeros_like + +## Compat v1 symbols + +* tf.compat.v1 +* tf.compat.v1.AggregationMethod +* tf.compat.v1.Assert +* tf.compat.v1.AttrValue +* tf.compat.v1.AttrValue.ListValue +* tf.compat.v1.ConditionalAccumulator +* tf.compat.v1.ConditionalAccumulatorBase +* tf.compat.v1.ConfigProto +* tf.compat.v1.ConfigProto.DeviceCountEntry +* tf.compat.v1.ConfigProto.Experimental +* tf.compat.v1.CriticalSection +* tf.compat.v1.DType +* tf.compat.v1.DeviceSpec +* tf.compat.v1.Dimension +* tf.compat.v1.Event +* tf.compat.v1.FIFOQueue +* tf.compat.v1.FixedLenFeature +* tf.compat.v1.FixedLenSequenceFeature +* tf.compat.v1.FixedLengthRecordReader +* tf.compat.v1.GPUOptions +* tf.compat.v1.GPUOptions.Experimental +* tf.compat.v1.GPUOptions.Experimental.VirtualDevices +* tf.compat.v1.GradientTape +* tf.compat.v1.Graph +* tf.compat.v1.GraphDef +* tf.compat.v1.GraphKeys +* tf.compat.v1.GraphOptions +* tf.compat.v1.HistogramProto +* tf.compat.v1.IdentityReader +* tf.compat.v1.IndexedSlices +* tf.compat.v1.IndexedSlicesSpec +* tf.compat.v1.InteractiveSession +* tf.compat.v1.LMDBReader +* tf.compat.v1.LogMessage +* tf.compat.v1.MetaGraphDef +* tf.compat.v1.MetaGraphDef.CollectionDefEntry +* tf.compat.v1.MetaGraphDef.MetaInfoDef +* tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry +* tf.compat.v1.MetaGraphDef.SignatureDefEntry +* tf.compat.v1.Module +* tf.compat.v1.NameAttrList +* tf.compat.v1.NameAttrList.AttrEntry +* tf.compat.v1.NoGradient +* tf.compat.v1.NodeDef +* tf.compat.v1.NodeDef.AttrEntry +* tf.compat.v1.NodeDef.ExperimentalDebugInfo +* tf.compat.v1.NotDifferentiable +* tf.compat.v1.OpError +* tf.compat.v1.Operation +* tf.compat.v1.OptimizerOptions +* tf.compat.v1.OptionalSpec +* tf.compat.v1.PaddingFIFOQueue +* tf.compat.v1.Print 
+* tf.compat.v1.PriorityQueue +* tf.compat.v1.QueueBase +* tf.compat.v1.RaggedTensor +* tf.compat.v1.RaggedTensorSpec +* tf.compat.v1.RandomShuffleQueue +* tf.compat.v1.ReaderBase +* tf.compat.v1.RegisterGradient +* tf.compat.v1.RunMetadata +* tf.compat.v1.RunMetadata.FunctionGraphs +* tf.compat.v1.RunOptions +* tf.compat.v1.RunOptions.Experimental +* tf.compat.v1.Session +* tf.compat.v1.SessionLog +* tf.compat.v1.SparseConditionalAccumulator +* tf.compat.v1.SparseFeature +* tf.compat.v1.SparseTensor +* tf.compat.v1.SparseTensorSpec +* tf.compat.v1.SparseTensorValue +* tf.compat.v1.Summary +* tf.compat.v1.Summary.Audio +* tf.compat.v1.Summary.Image +* tf.compat.v1.Summary.Value +* tf.compat.v1.SummaryMetadata +* tf.compat.v1.SummaryMetadata.PluginData +* tf.compat.v1.TFRecordReader +* tf.compat.v1.Tensor +* tf.compat.v1.TensorArray +* tf.compat.v1.TensorArraySpec +* tf.compat.v1.TensorInfo +* tf.compat.v1.TensorInfo.CompositeTensor +* tf.compat.v1.TensorInfo.CooSparse +* tf.compat.v1.TensorShape +* tf.compat.v1.TensorSpec +* tf.compat.v1.TextLineReader +* tf.compat.v1.TypeSpec +* tf.compat.v1.UnconnectedGradients +* tf.compat.v1.VarLenFeature +* tf.compat.v1.Variable +* tf.compat.v1.Variable.SaveSliceInfo +* tf.compat.v1.VariableAggregation +* tf.compat.v1.VariableScope +* tf.compat.v1.VariableSynchronization +* tf.compat.v1.WholeFileReader +* tf.compat.v1.abs +* tf.compat.v1.accumulate_n +* tf.compat.v1.acos +* tf.compat.v1.acosh +* tf.compat.v1.add +* tf.compat.v1.add_check_numerics_ops +* tf.compat.v1.add_n +* tf.compat.v1.add_to_collection +* tf.compat.v1.add_to_collections +* tf.compat.v1.all_variables +* tf.compat.v1.angle +* tf.compat.v1.app +* tf.compat.v1.app.flags +* tf.compat.v1.app.flags.ArgumentParser +* tf.compat.v1.app.flags.ArgumentSerializer +* tf.compat.v1.app.flags.BaseListParser +* tf.compat.v1.app.flags.BooleanFlag +* tf.compat.v1.app.flags.BooleanParser +* tf.compat.v1.app.flags.CantOpenFlagFileError +* tf.compat.v1.app.flags.CsvListSerializer +* tf.compat.v1.app.flags.DEFINE +* tf.compat.v1.app.flags.DEFINE_alias +* tf.compat.v1.app.flags.DEFINE_bool +* tf.compat.v1.app.flags.DEFINE_boolean +* tf.compat.v1.app.flags.DEFINE_enum +* tf.compat.v1.app.flags.DEFINE_enum_class +* tf.compat.v1.app.flags.DEFINE_flag +* tf.compat.v1.app.flags.DEFINE_float +* tf.compat.v1.app.flags.DEFINE_integer +* tf.compat.v1.app.flags.DEFINE_list +* tf.compat.v1.app.flags.DEFINE_multi +* tf.compat.v1.app.flags.DEFINE_multi_enum +* tf.compat.v1.app.flags.DEFINE_multi_enum_class +* tf.compat.v1.app.flags.DEFINE_multi_float +* tf.compat.v1.app.flags.DEFINE_multi_integer +* tf.compat.v1.app.flags.DEFINE_multi_string +* tf.compat.v1.app.flags.DEFINE_spaceseplist +* tf.compat.v1.app.flags.DEFINE_string +* tf.compat.v1.app.flags.DuplicateFlagError +* tf.compat.v1.app.flags.EnumClassFlag +* tf.compat.v1.app.flags.EnumClassParser +* tf.compat.v1.app.flags.EnumFlag +* tf.compat.v1.app.flags.EnumParser +* tf.compat.v1.app.flags.Error +* tf.compat.v1.app.flags.FLAGS +* tf.compat.v1.app.flags.Flag +* tf.compat.v1.app.flags.FlagNameConflictsWithMethodError +* tf.compat.v1.app.flags.FlagValues +* tf.compat.v1.app.flags.FloatParser +* tf.compat.v1.app.flags.IllegalFlagValueError +* tf.compat.v1.app.flags.IntegerParser +* tf.compat.v1.app.flags.ListParser +* tf.compat.v1.app.flags.ListSerializer +* tf.compat.v1.app.flags.MultiEnumClassFlag +* tf.compat.v1.app.flags.MultiFlag +* tf.compat.v1.app.flags.UnparsedFlagAccessError +* tf.compat.v1.app.flags.UnrecognizedFlagError +* 
tf.compat.v1.app.flags.ValidationError +* tf.compat.v1.app.flags.WhitespaceSeparatedListParser +* tf.compat.v1.app.flags.adopt_module_key_flags +* tf.compat.v1.app.flags.declare_key_flag +* tf.compat.v1.app.flags.disclaim_key_flags +* tf.compat.v1.app.flags.doc_to_help +* tf.compat.v1.app.flags.flag_dict_to_args +* tf.compat.v1.app.flags.get_help_width +* tf.compat.v1.app.flags.mark_bool_flags_as_mutual_exclusive +* tf.compat.v1.app.flags.mark_flag_as_required +* tf.compat.v1.app.flags.mark_flags_as_mutual_exclusive +* tf.compat.v1.app.flags.mark_flags_as_required +* tf.compat.v1.app.flags.multi_flags_validator +* tf.compat.v1.app.flags.register_multi_flags_validator +* tf.compat.v1.app.flags.register_validator +* tf.compat.v1.app.flags.text_wrap +* tf.compat.v1.app.flags.tf_decorator +* tf.compat.v1.app.flags.tf_decorator.TFDecorator +* tf.compat.v1.app.flags.tf_decorator.make_decorator +* tf.compat.v1.app.flags.tf_decorator.rewrap +* tf.compat.v1.app.flags.tf_decorator.tf_stack +* tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter +* tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary +* tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary +* tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter +* tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper +* tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform +* tf.compat.v1.app.flags.tf_decorator.tf_stack.extract_stack +* tf.compat.v1.app.flags.tf_decorator.unwrap +* tf.compat.v1.app.flags.validator +* tf.compat.v1.app.run +* tf.compat.v1.arg_max +* tf.compat.v1.arg_min +* tf.compat.v1.argmax +* tf.compat.v1.argmin +* tf.compat.v1.argsort +* tf.compat.v1.as_dtype +* tf.compat.v1.as_string +* tf.compat.v1.asin +* tf.compat.v1.asinh +* tf.compat.v1.assert_equal +* tf.compat.v1.assert_greater +* tf.compat.v1.assert_greater_equal +* tf.compat.v1.assert_integer +* tf.compat.v1.assert_less +* tf.compat.v1.assert_less_equal +* tf.compat.v1.assert_near +* tf.compat.v1.assert_negative +* tf.compat.v1.assert_non_negative +* tf.compat.v1.assert_non_positive +* tf.compat.v1.assert_none_equal +* tf.compat.v1.assert_positive +* tf.compat.v1.assert_proper_iterable +* tf.compat.v1.assert_rank +* tf.compat.v1.assert_rank_at_least +* tf.compat.v1.assert_rank_in +* tf.compat.v1.assert_same_float_dtype +* tf.compat.v1.assert_scalar +* tf.compat.v1.assert_type +* tf.compat.v1.assert_variables_initialized +* tf.compat.v1.assign +* tf.compat.v1.assign_add +* tf.compat.v1.assign_sub +* tf.compat.v1.atan +* tf.compat.v1.atan2 +* tf.compat.v1.atanh +* tf.compat.v1.audio +* tf.compat.v1.audio.decode_wav +* tf.compat.v1.audio.encode_wav +* tf.compat.v1.autograph +* tf.compat.v1.autograph.experimental +* tf.compat.v1.autograph.experimental.Feature +* tf.compat.v1.autograph.experimental.do_not_convert +* tf.compat.v1.autograph.experimental.set_loop_options +* tf.compat.v1.autograph.set_verbosity +* tf.compat.v1.autograph.to_code +* tf.compat.v1.autograph.to_graph +* tf.compat.v1.autograph.trace +* tf.compat.v1.batch_gather +* tf.compat.v1.batch_scatter_update +* tf.compat.v1.batch_to_space +* tf.compat.v1.batch_to_space_nd +* tf.compat.v1.betainc +* tf.compat.v1.bincount +* tf.compat.v1.bitcast +* tf.compat.v1.bitwise +* tf.compat.v1.bitwise.bitwise_and +* tf.compat.v1.bitwise.bitwise_or +* tf.compat.v1.bitwise.bitwise_xor +* tf.compat.v1.bitwise.invert +* tf.compat.v1.bitwise.left_shift +* tf.compat.v1.bitwise.right_shift +* tf.compat.v1.boolean_mask +* tf.compat.v1.broadcast_dynamic_shape +* 
tf.compat.v1.broadcast_static_shape +* tf.compat.v1.broadcast_to +* tf.compat.v1.case +* tf.compat.v1.cast +* tf.compat.v1.ceil +* tf.compat.v1.check_numerics +* tf.compat.v1.cholesky +* tf.compat.v1.cholesky_solve +* tf.compat.v1.clip_by_average_norm +* tf.compat.v1.clip_by_global_norm +* tf.compat.v1.clip_by_norm +* tf.compat.v1.clip_by_value +* tf.compat.v1.colocate_with +* tf.compat.v1.compat +* tf.compat.v1.compat.as_bytes +* tf.compat.v1.compat.as_str +* tf.compat.v1.compat.as_str_any +* tf.compat.v1.compat.as_text +* tf.compat.v1.compat.dimension_at_index +* tf.compat.v1.compat.dimension_value +* tf.compat.v1.compat.forward_compatibility_horizon +* tf.compat.v1.compat.forward_compatible +* tf.compat.v1.compat.path_to_str +* tf.compat.v1.complex +* tf.compat.v1.concat +* tf.compat.v1.cond +* tf.compat.v1.config +* tf.compat.v1.config.LogicalDevice +* tf.compat.v1.config.LogicalDeviceConfiguration +* tf.compat.v1.config.PhysicalDevice +* tf.compat.v1.config.experimental +* tf.compat.v1.config.experimental.ClusterDeviceFilters +* tf.compat.v1.config.experimental.VirtualDeviceConfiguration +* tf.compat.v1.config.experimental.disable_mlir_bridge +* tf.compat.v1.config.experimental.enable_mlir_bridge +* tf.compat.v1.config.experimental.get_device_policy +* tf.compat.v1.config.experimental.get_memory_growth +* tf.compat.v1.config.experimental.get_synchronous_execution +* tf.compat.v1.config.experimental.get_virtual_device_configuration +* tf.compat.v1.config.experimental.get_visible_devices +* tf.compat.v1.config.experimental.list_logical_devices +* tf.compat.v1.config.experimental.list_physical_devices +* tf.compat.v1.config.experimental.set_device_policy +* tf.compat.v1.config.experimental.set_memory_growth +* tf.compat.v1.config.experimental.set_synchronous_execution +* tf.compat.v1.config.experimental.set_virtual_device_configuration +* tf.compat.v1.config.experimental.set_visible_devices +* tf.compat.v1.config.experimental_connect_to_cluster +* tf.compat.v1.config.experimental_connect_to_host +* tf.compat.v1.config.experimental_functions_run_eagerly +* tf.compat.v1.config.experimental_run_functions_eagerly +* tf.compat.v1.config.get_logical_device_configuration +* tf.compat.v1.config.get_soft_device_placement +* tf.compat.v1.config.get_visible_devices +* tf.compat.v1.config.list_logical_devices +* tf.compat.v1.config.list_physical_devices +* tf.compat.v1.config.optimizer +* tf.compat.v1.config.optimizer.get_experimental_options +* tf.compat.v1.config.optimizer.get_jit +* tf.compat.v1.config.optimizer.set_experimental_options +* tf.compat.v1.config.optimizer.set_jit +* tf.compat.v1.config.set_logical_device_configuration +* tf.compat.v1.config.set_soft_device_placement +* tf.compat.v1.config.set_visible_devices +* tf.compat.v1.config.threading +* tf.compat.v1.config.threading.get_inter_op_parallelism_threads +* tf.compat.v1.config.threading.get_intra_op_parallelism_threads +* tf.compat.v1.config.threading.set_inter_op_parallelism_threads +* tf.compat.v1.config.threading.set_intra_op_parallelism_threads +* tf.compat.v1.confusion_matrix +* tf.compat.v1.conj +* tf.compat.v1.constant +* tf.compat.v1.constant_initializer +* tf.compat.v1.container +* tf.compat.v1.control_dependencies +* tf.compat.v1.control_flow_v2_enabled +* tf.compat.v1.convert_to_tensor +* tf.compat.v1.convert_to_tensor_or_indexed_slices +* tf.compat.v1.convert_to_tensor_or_sparse_tensor +* tf.compat.v1.cos +* tf.compat.v1.cosh +* tf.compat.v1.count_nonzero +* tf.compat.v1.count_up_to +* 
tf.compat.v1.create_partitioned_variables +* tf.compat.v1.cross +* tf.compat.v1.cumprod +* tf.compat.v1.cumsum +* tf.compat.v1.custom_gradient +* tf.compat.v1.data +* tf.compat.v1.data.Dataset +* tf.compat.v1.data.DatasetSpec +* tf.compat.v1.data.FixedLengthRecordDataset +* tf.compat.v1.data.Iterator +* tf.compat.v1.data.Options +* tf.compat.v1.data.TFRecordDataset +* tf.compat.v1.data.TextLineDataset +* tf.compat.v1.data.experimental +* tf.compat.v1.data.experimental.AutoShardPolicy +* tf.compat.v1.data.experimental.CheckpointInputPipelineHook +* tf.compat.v1.data.experimental.Counter +* tf.compat.v1.data.experimental.CsvDataset +* tf.compat.v1.data.experimental.DatasetStructure +* tf.compat.v1.data.experimental.DistributeOptions +* tf.compat.v1.data.experimental.MapVectorizationOptions +* tf.compat.v1.data.experimental.OptimizationOptions +* tf.compat.v1.data.experimental.Optional +* tf.compat.v1.data.experimental.OptionalStructure +* tf.compat.v1.data.experimental.RaggedTensorStructure +* tf.compat.v1.data.experimental.RandomDataset +* tf.compat.v1.data.experimental.Reducer +* tf.compat.v1.data.experimental.SparseTensorStructure +* tf.compat.v1.data.experimental.SqlDataset +* tf.compat.v1.data.experimental.StatsAggregator +* tf.compat.v1.data.experimental.StatsOptions +* tf.compat.v1.data.experimental.Structure +* tf.compat.v1.data.experimental.TFRecordWriter +* tf.compat.v1.data.experimental.TensorArrayStructure +* tf.compat.v1.data.experimental.TensorStructure +* tf.compat.v1.data.experimental.ThreadingOptions +* tf.compat.v1.data.experimental.assert_cardinality +* tf.compat.v1.data.experimental.bucket_by_sequence_length +* tf.compat.v1.data.experimental.bytes_produced_stats +* tf.compat.v1.data.experimental.cardinality +* tf.compat.v1.data.experimental.choose_from_datasets +* tf.compat.v1.data.experimental.copy_to_device +* tf.compat.v1.data.experimental.dense_to_ragged_batch +* tf.compat.v1.data.experimental.dense_to_sparse_batch +* tf.compat.v1.data.experimental.enumerate_dataset +* tf.compat.v1.data.experimental.from_variant +* tf.compat.v1.data.experimental.get_next_as_optional +* tf.compat.v1.data.experimental.get_single_element +* tf.compat.v1.data.experimental.get_structure +* tf.compat.v1.data.experimental.group_by_reducer +* tf.compat.v1.data.experimental.group_by_window +* tf.compat.v1.data.experimental.ignore_errors +* tf.compat.v1.data.experimental.latency_stats +* tf.compat.v1.data.experimental.make_batched_features_dataset +* tf.compat.v1.data.experimental.make_csv_dataset +* tf.compat.v1.data.experimental.make_saveable_from_iterator +* tf.compat.v1.data.experimental.map_and_batch +* tf.compat.v1.data.experimental.map_and_batch_with_legacy_function +* tf.compat.v1.data.experimental.parallel_interleave +* tf.compat.v1.data.experimental.parse_example_dataset +* tf.compat.v1.data.experimental.prefetch_to_device +* tf.compat.v1.data.experimental.rejection_resample +* tf.compat.v1.data.experimental.sample_from_datasets +* tf.compat.v1.data.experimental.scan +* tf.compat.v1.data.experimental.shuffle_and_repeat +* tf.compat.v1.data.experimental.take_while +* tf.compat.v1.data.experimental.to_variant +* tf.compat.v1.data.experimental.unbatch +* tf.compat.v1.data.experimental.unique +* tf.compat.v1.data.get_output_classes +* tf.compat.v1.data.get_output_shapes +* tf.compat.v1.data.get_output_types +* tf.compat.v1.data.make_initializable_iterator +* tf.compat.v1.data.make_one_shot_iterator +* tf.compat.v1.debugging +* tf.compat.v1.debugging.Assert +* 
tf.compat.v1.debugging.assert_all_finite +* tf.compat.v1.debugging.assert_equal +* tf.compat.v1.debugging.assert_greater +* tf.compat.v1.debugging.assert_greater_equal +* tf.compat.v1.debugging.assert_integer +* tf.compat.v1.debugging.assert_less +* tf.compat.v1.debugging.assert_less_equal +* tf.compat.v1.debugging.assert_near +* tf.compat.v1.debugging.assert_negative +* tf.compat.v1.debugging.assert_non_negative +* tf.compat.v1.debugging.assert_non_positive +* tf.compat.v1.debugging.assert_none_equal +* tf.compat.v1.debugging.assert_positive +* tf.compat.v1.debugging.assert_proper_iterable +* tf.compat.v1.debugging.assert_rank +* tf.compat.v1.debugging.assert_rank_at_least +* tf.compat.v1.debugging.assert_rank_in +* tf.compat.v1.debugging.assert_same_float_dtype +* tf.compat.v1.debugging.assert_scalar +* tf.compat.v1.debugging.assert_shapes +* tf.compat.v1.debugging.assert_type +* tf.compat.v1.debugging.check_numerics +* tf.compat.v1.debugging.disable_check_numerics +* tf.compat.v1.debugging.enable_check_numerics +* tf.compat.v1.debugging.experimental +* tf.compat.v1.debugging.experimental.disable_dump_debug_info +* tf.compat.v1.debugging.experimental.enable_dump_debug_info +* tf.compat.v1.debugging.get_log_device_placement +* tf.compat.v1.debugging.is_finite +* tf.compat.v1.debugging.is_inf +* tf.compat.v1.debugging.is_nan +* tf.compat.v1.debugging.is_non_decreasing +* tf.compat.v1.debugging.is_numeric_tensor +* tf.compat.v1.debugging.is_strictly_increasing +* tf.compat.v1.debugging.set_log_device_placement +* tf.compat.v1.decode_base64 +* tf.compat.v1.decode_compressed +* tf.compat.v1.decode_csv +* tf.compat.v1.decode_json_example +* tf.compat.v1.decode_raw +* tf.compat.v1.delete_session_tensor +* tf.compat.v1.depth_to_space +* tf.compat.v1.dequantize +* tf.compat.v1.deserialize_many_sparse +* tf.compat.v1.device +* tf.compat.v1.diag +* tf.compat.v1.diag_part +* tf.compat.v1.digamma +* tf.compat.v1.dimension_at_index +* tf.compat.v1.dimension_value +* tf.compat.v1.disable_control_flow_v2 +* tf.compat.v1.disable_eager_execution +* tf.compat.v1.disable_resource_variables +* tf.compat.v1.disable_tensor_equality +* tf.compat.v1.disable_v2_behavior +* tf.compat.v1.disable_v2_tensorshape +* tf.compat.v1.distribute +* tf.compat.v1.distribute.CrossDeviceOps +* tf.compat.v1.distribute.HierarchicalCopyAllReduce +* tf.compat.v1.distribute.InputContext +* tf.compat.v1.distribute.InputReplicationMode +* tf.compat.v1.distribute.MirroredStrategy +* tf.compat.v1.distribute.NcclAllReduce +* tf.compat.v1.distribute.OneDeviceStrategy +* tf.compat.v1.distribute.ReduceOp +* tf.compat.v1.distribute.ReductionToOneDevice +* tf.compat.v1.distribute.ReplicaContext +* tf.compat.v1.distribute.RunOptions +* tf.compat.v1.distribute.Server +* tf.compat.v1.distribute.Strategy +* tf.compat.v1.distribute.StrategyExtended +* tf.compat.v1.distribute.cluster_resolver +* tf.compat.v1.distribute.cluster_resolver.ClusterResolver +* tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver +* tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver +* tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver +* tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver +* tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver +* tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver +* tf.compat.v1.distribute.cluster_resolver.UnionResolver +* tf.compat.v1.distribute.experimental +* tf.compat.v1.distribute.experimental.CentralStorageStrategy +* 
tf.compat.v1.distribute.experimental.CollectiveCommunication +* tf.compat.v1.distribute.experimental.CollectiveHints +* tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy +* tf.compat.v1.distribute.experimental.ParameterServerStrategy +* tf.compat.v1.distribute.experimental.TPUStrategy +* tf.compat.v1.distribute.experimental_set_strategy +* tf.compat.v1.distribute.get_loss_reduction +* tf.compat.v1.distribute.get_replica_context +* tf.compat.v1.distribute.get_strategy +* tf.compat.v1.distribute.has_strategy +* tf.compat.v1.distribute.in_cross_replica_context +* tf.compat.v1.distributions +* tf.compat.v1.distributions.Bernoulli +* tf.compat.v1.distributions.Beta +* tf.compat.v1.distributions.Categorical +* tf.compat.v1.distributions.Dirichlet +* tf.compat.v1.distributions.DirichletMultinomial +* tf.compat.v1.distributions.Distribution +* tf.compat.v1.distributions.Exponential +* tf.compat.v1.distributions.Gamma +* tf.compat.v1.distributions.Laplace +* tf.compat.v1.distributions.Multinomial +* tf.compat.v1.distributions.Normal +* tf.compat.v1.distributions.RegisterKL +* tf.compat.v1.distributions.ReparameterizationType +* tf.compat.v1.distributions.StudentT +* tf.compat.v1.distributions.Uniform +* tf.compat.v1.distributions.kl_divergence +* tf.compat.v1.div +* tf.compat.v1.div_no_nan +* tf.compat.v1.divide +* tf.compat.v1.dtypes +* tf.compat.v1.dtypes.DType +* tf.compat.v1.dtypes.as_dtype +* tf.compat.v1.dtypes.as_string +* tf.compat.v1.dtypes.cast +* tf.compat.v1.dtypes.complex +* tf.compat.v1.dtypes.saturate_cast +* tf.compat.v1.dynamic_partition +* tf.compat.v1.dynamic_stitch +* tf.compat.v1.edit_distance +* tf.compat.v1.einsum +* tf.compat.v1.enable_control_flow_v2 +* tf.compat.v1.enable_eager_execution +* tf.compat.v1.enable_resource_variables +* tf.compat.v1.enable_tensor_equality +* tf.compat.v1.enable_v2_behavior +* tf.compat.v1.enable_v2_tensorshape +* tf.compat.v1.encode_base64 +* tf.compat.v1.ensure_shape +* tf.compat.v1.equal +* tf.compat.v1.erf +* tf.compat.v1.erfc +* tf.compat.v1.errors +* tf.compat.v1.errors.AbortedError +* tf.compat.v1.errors.AlreadyExistsError +* tf.compat.v1.errors.CancelledError +* tf.compat.v1.errors.DataLossError +* tf.compat.v1.errors.DeadlineExceededError +* tf.compat.v1.errors.FailedPreconditionError +* tf.compat.v1.errors.InternalError +* tf.compat.v1.errors.InvalidArgumentError +* tf.compat.v1.errors.NotFoundError +* tf.compat.v1.errors.OpError +* tf.compat.v1.errors.OutOfRangeError +* tf.compat.v1.errors.PermissionDeniedError +* tf.compat.v1.errors.ResourceExhaustedError +* tf.compat.v1.errors.UnauthenticatedError +* tf.compat.v1.errors.UnavailableError +* tf.compat.v1.errors.UnimplementedError +* tf.compat.v1.errors.UnknownError +* tf.compat.v1.errors.error_code_from_exception_type +* tf.compat.v1.errors.exception_type_from_error_code +* tf.compat.v1.errors.raise_exception_on_not_ok_status +* tf.compat.v1.estimator +* tf.compat.v1.estimator.BaselineClassifier +* tf.compat.v1.estimator.BaselineEstimator +* tf.compat.v1.estimator.BaselineRegressor +* tf.compat.v1.estimator.BestExporter +* tf.compat.v1.estimator.BinaryClassHead +* tf.compat.v1.estimator.BoostedTreesClassifier +* tf.compat.v1.estimator.BoostedTreesEstimator +* tf.compat.v1.estimator.BoostedTreesRegressor +* tf.compat.v1.estimator.CheckpointSaverHook +* tf.compat.v1.estimator.CheckpointSaverListener +* tf.compat.v1.estimator.DNNClassifier +* tf.compat.v1.estimator.DNNEstimator +* tf.compat.v1.estimator.DNNLinearCombinedClassifier +* 
tf.compat.v1.estimator.DNNLinearCombinedEstimator +* tf.compat.v1.estimator.DNNLinearCombinedRegressor +* tf.compat.v1.estimator.DNNRegressor +* tf.compat.v1.estimator.Estimator +* tf.compat.v1.estimator.EstimatorSpec +* tf.compat.v1.estimator.EvalSpec +* tf.compat.v1.estimator.Exporter +* tf.compat.v1.estimator.FeedFnHook +* tf.compat.v1.estimator.FinalExporter +* tf.compat.v1.estimator.FinalOpsHook +* tf.compat.v1.estimator.GlobalStepWaiterHook +* tf.compat.v1.estimator.Head +* tf.compat.v1.estimator.LatestExporter +* tf.compat.v1.estimator.LinearClassifier +* tf.compat.v1.estimator.LinearEstimator +* tf.compat.v1.estimator.LinearRegressor +* tf.compat.v1.estimator.LoggingTensorHook +* tf.compat.v1.estimator.LogisticRegressionHead +* tf.compat.v1.estimator.ModeKeys +* tf.compat.v1.estimator.MultiClassHead +* tf.compat.v1.estimator.MultiHead +* tf.compat.v1.estimator.MultiLabelHead +* tf.compat.v1.estimator.NanLossDuringTrainingError +* tf.compat.v1.estimator.NanTensorHook +* tf.compat.v1.estimator.PoissonRegressionHead +* tf.compat.v1.estimator.ProfilerHook +* tf.compat.v1.estimator.RegressionHead +* tf.compat.v1.estimator.RunConfig +* tf.compat.v1.estimator.SecondOrStepTimer +* tf.compat.v1.estimator.SessionRunArgs +* tf.compat.v1.estimator.SessionRunContext +* tf.compat.v1.estimator.SessionRunHook +* tf.compat.v1.estimator.SessionRunValues +* tf.compat.v1.estimator.StepCounterHook +* tf.compat.v1.estimator.StopAtStepHook +* tf.compat.v1.estimator.SummarySaverHook +* tf.compat.v1.estimator.TrainSpec +* tf.compat.v1.estimator.VocabInfo +* tf.compat.v1.estimator.WarmStartSettings +* tf.compat.v1.estimator.add_metrics +* tf.compat.v1.estimator.classifier_parse_example_spec +* tf.compat.v1.estimator.experimental +* tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook +* tf.compat.v1.estimator.experimental.KMeans +* tf.compat.v1.estimator.experimental.LinearSDCA +* tf.compat.v1.estimator.experimental.build_raw_supervised_input_receiver_fn +* tf.compat.v1.estimator.experimental.call_logit_fn +* tf.compat.v1.estimator.experimental.dnn_logit_fn_builder +* tf.compat.v1.estimator.experimental.linear_logit_fn_builder +* tf.compat.v1.estimator.experimental.make_early_stopping_hook +* tf.compat.v1.estimator.experimental.make_stop_at_checkpoint_step_hook +* tf.compat.v1.estimator.experimental.stop_if_higher_hook +* tf.compat.v1.estimator.experimental.stop_if_lower_hook +* tf.compat.v1.estimator.experimental.stop_if_no_decrease_hook +* tf.compat.v1.estimator.experimental.stop_if_no_increase_hook +* tf.compat.v1.estimator.export +* tf.compat.v1.estimator.export.ClassificationOutput +* tf.compat.v1.estimator.export.ExportOutput +* tf.compat.v1.estimator.export.PredictOutput +* tf.compat.v1.estimator.export.RegressionOutput +* tf.compat.v1.estimator.export.ServingInputReceiver +* tf.compat.v1.estimator.export.TensorServingInputReceiver +* tf.compat.v1.estimator.export.build_parsing_serving_input_receiver_fn +* tf.compat.v1.estimator.export.build_raw_serving_input_receiver_fn +* tf.compat.v1.estimator.inputs +* tf.compat.v1.estimator.inputs.numpy_input_fn +* tf.compat.v1.estimator.inputs.pandas_input_fn +* tf.compat.v1.estimator.regressor_parse_example_spec +* tf.compat.v1.estimator.tpu +* tf.compat.v1.estimator.tpu.InputPipelineConfig +* tf.compat.v1.estimator.tpu.RunConfig +* tf.compat.v1.estimator.tpu.TPUConfig +* tf.compat.v1.estimator.tpu.TPUEstimator +* tf.compat.v1.estimator.tpu.TPUEstimatorSpec +* tf.compat.v1.estimator.tpu.experimental +* 
tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec +* tf.compat.v1.estimator.train_and_evaluate +* tf.compat.v1.executing_eagerly +* tf.compat.v1.executing_eagerly_outside_functions +* tf.compat.v1.exp +* tf.compat.v1.expand_dims +* tf.compat.v1.experimental +* tf.compat.v1.experimental.async_clear_error +* tf.compat.v1.experimental.async_scope +* tf.compat.v1.experimental.function_executor_type +* tf.compat.v1.experimental.output_all_intermediates +* tf.compat.v1.expm1 +* tf.compat.v1.extract_image_patches +* tf.compat.v1.extract_volume_patches +* tf.compat.v1.eye +* tf.compat.v1.fake_quant_with_min_max_args +* tf.compat.v1.fake_quant_with_min_max_args_gradient +* tf.compat.v1.fake_quant_with_min_max_vars +* tf.compat.v1.fake_quant_with_min_max_vars_gradient +* tf.compat.v1.fake_quant_with_min_max_vars_per_channel +* tf.compat.v1.fake_quant_with_min_max_vars_per_channel_gradient +* tf.compat.v1.feature_column +* tf.compat.v1.feature_column.bucketized_column +* tf.compat.v1.feature_column.categorical_column_with_hash_bucket +* tf.compat.v1.feature_column.categorical_column_with_identity +* tf.compat.v1.feature_column.categorical_column_with_vocabulary_file +* tf.compat.v1.feature_column.categorical_column_with_vocabulary_list +* tf.compat.v1.feature_column.crossed_column +* tf.compat.v1.feature_column.embedding_column +* tf.compat.v1.feature_column.indicator_column +* tf.compat.v1.feature_column.input_layer +* tf.compat.v1.feature_column.linear_model +* tf.compat.v1.feature_column.make_parse_example_spec +* tf.compat.v1.feature_column.numeric_column +* tf.compat.v1.feature_column.sequence_categorical_column_with_hash_bucket +* tf.compat.v1.feature_column.sequence_categorical_column_with_identity +* tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_file +* tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_list +* tf.compat.v1.feature_column.sequence_numeric_column +* tf.compat.v1.feature_column.shared_embedding_columns +* tf.compat.v1.feature_column.weighted_categorical_column +* tf.compat.v1.fft +* tf.compat.v1.fft2d +* tf.compat.v1.fft3d +* tf.compat.v1.fill +* tf.compat.v1.fingerprint +* tf.compat.v1.fixed_size_partitioner +* tf.compat.v1.flags +* tf.compat.v1.flags.ArgumentParser +* tf.compat.v1.flags.ArgumentSerializer +* tf.compat.v1.flags.BaseListParser +* tf.compat.v1.flags.BooleanFlag +* tf.compat.v1.flags.BooleanParser +* tf.compat.v1.flags.CantOpenFlagFileError +* tf.compat.v1.flags.CsvListSerializer +* tf.compat.v1.flags.DEFINE +* tf.compat.v1.flags.DEFINE_alias +* tf.compat.v1.flags.DEFINE_bool +* tf.compat.v1.flags.DEFINE_boolean +* tf.compat.v1.flags.DEFINE_enum +* tf.compat.v1.flags.DEFINE_enum_class +* tf.compat.v1.flags.DEFINE_flag +* tf.compat.v1.flags.DEFINE_float +* tf.compat.v1.flags.DEFINE_integer +* tf.compat.v1.flags.DEFINE_list +* tf.compat.v1.flags.DEFINE_multi +* tf.compat.v1.flags.DEFINE_multi_enum +* tf.compat.v1.flags.DEFINE_multi_enum_class +* tf.compat.v1.flags.DEFINE_multi_float +* tf.compat.v1.flags.DEFINE_multi_integer +* tf.compat.v1.flags.DEFINE_multi_string +* tf.compat.v1.flags.DEFINE_spaceseplist +* tf.compat.v1.flags.DEFINE_string +* tf.compat.v1.flags.DuplicateFlagError +* tf.compat.v1.flags.EnumClassFlag +* tf.compat.v1.flags.EnumClassParser +* tf.compat.v1.flags.EnumFlag +* tf.compat.v1.flags.EnumParser +* tf.compat.v1.flags.Error +* tf.compat.v1.flags.FLAGS +* tf.compat.v1.flags.Flag +* tf.compat.v1.flags.FlagNameConflictsWithMethodError +* tf.compat.v1.flags.FlagValues +* 
tf.compat.v1.flags.FloatParser +* tf.compat.v1.flags.IllegalFlagValueError +* tf.compat.v1.flags.IntegerParser +* tf.compat.v1.flags.ListParser +* tf.compat.v1.flags.ListSerializer +* tf.compat.v1.flags.MultiEnumClassFlag +* tf.compat.v1.flags.MultiFlag +* tf.compat.v1.flags.UnparsedFlagAccessError +* tf.compat.v1.flags.UnrecognizedFlagError +* tf.compat.v1.flags.ValidationError +* tf.compat.v1.flags.WhitespaceSeparatedListParser +* tf.compat.v1.flags.adopt_module_key_flags +* tf.compat.v1.flags.declare_key_flag +* tf.compat.v1.flags.disclaim_key_flags +* tf.compat.v1.flags.doc_to_help +* tf.compat.v1.flags.flag_dict_to_args +* tf.compat.v1.flags.get_help_width +* tf.compat.v1.flags.mark_bool_flags_as_mutual_exclusive +* tf.compat.v1.flags.mark_flag_as_required +* tf.compat.v1.flags.mark_flags_as_mutual_exclusive +* tf.compat.v1.flags.mark_flags_as_required +* tf.compat.v1.flags.multi_flags_validator +* tf.compat.v1.flags.register_multi_flags_validator +* tf.compat.v1.flags.register_validator +* tf.compat.v1.flags.text_wrap +* tf.compat.v1.flags.tf_decorator +* tf.compat.v1.flags.tf_decorator.TFDecorator +* tf.compat.v1.flags.tf_decorator.make_decorator +* tf.compat.v1.flags.tf_decorator.rewrap +* tf.compat.v1.flags.tf_decorator.tf_stack +* tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter +* tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary +* tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary +* tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter +* tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper +* tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform +* tf.compat.v1.flags.tf_decorator.tf_stack.extract_stack +* tf.compat.v1.flags.tf_decorator.unwrap +* tf.compat.v1.flags.validator +* tf.compat.v1.floor +* tf.compat.v1.floor_div +* tf.compat.v1.floordiv +* tf.compat.v1.floormod +* tf.compat.v1.foldl +* tf.compat.v1.foldr +* tf.compat.v1.function +* tf.compat.v1.gather +* tf.compat.v1.gather_nd +* tf.compat.v1.get_collection +* tf.compat.v1.get_collection_ref +* tf.compat.v1.get_default_graph +* tf.compat.v1.get_default_session +* tf.compat.v1.get_local_variable +* tf.compat.v1.get_logger +* tf.compat.v1.get_seed +* tf.compat.v1.get_session_handle +* tf.compat.v1.get_session_tensor +* tf.compat.v1.get_static_value +* tf.compat.v1.get_variable +* tf.compat.v1.get_variable_scope +* tf.compat.v1.gfile +* tf.compat.v1.gfile.Copy +* tf.compat.v1.gfile.DeleteRecursively +* tf.compat.v1.gfile.Exists +* tf.compat.v1.gfile.FastGFile +* tf.compat.v1.gfile.GFile +* tf.compat.v1.gfile.Glob +* tf.compat.v1.gfile.IsDirectory +* tf.compat.v1.gfile.ListDirectory +* tf.compat.v1.gfile.MakeDirs +* tf.compat.v1.gfile.MkDir +* tf.compat.v1.gfile.Open +* tf.compat.v1.gfile.Remove +* tf.compat.v1.gfile.Rename +* tf.compat.v1.gfile.Stat +* tf.compat.v1.gfile.Walk +* tf.compat.v1.global_norm +* tf.compat.v1.global_variables +* tf.compat.v1.global_variables_initializer +* tf.compat.v1.glorot_normal_initializer +* tf.compat.v1.glorot_uniform_initializer +* tf.compat.v1.grad_pass_through +* tf.compat.v1.gradients +* tf.compat.v1.graph_util +* tf.compat.v1.graph_util.convert_variables_to_constants +* tf.compat.v1.graph_util.extract_sub_graph +* tf.compat.v1.graph_util.import_graph_def +* tf.compat.v1.graph_util.must_run_on_cpu +* tf.compat.v1.graph_util.remove_training_nodes +* tf.compat.v1.graph_util.tensor_shape_from_node_def_name +* tf.compat.v1.greater +* tf.compat.v1.greater_equal +* tf.compat.v1.group +* tf.compat.v1.guarantee_const +* tf.compat.v1.hessians +* 
tf.compat.v1.histogram_fixed_width +* tf.compat.v1.histogram_fixed_width_bins +* tf.compat.v1.identity +* tf.compat.v1.identity_n +* tf.compat.v1.ifft +* tf.compat.v1.ifft2d +* tf.compat.v1.ifft3d +* tf.compat.v1.igamma +* tf.compat.v1.igammac +* tf.compat.v1.imag +* tf.compat.v1.image +* tf.compat.v1.image.ResizeMethod +* tf.compat.v1.image.adjust_brightness +* tf.compat.v1.image.adjust_contrast +* tf.compat.v1.image.adjust_gamma +* tf.compat.v1.image.adjust_hue +* tf.compat.v1.image.adjust_jpeg_quality +* tf.compat.v1.image.adjust_saturation +* tf.compat.v1.image.central_crop +* tf.compat.v1.image.combined_non_max_suppression +* tf.compat.v1.image.convert_image_dtype +* tf.compat.v1.image.crop_and_resize +* tf.compat.v1.image.crop_to_bounding_box +* tf.compat.v1.image.decode_and_crop_jpeg +* tf.compat.v1.image.decode_bmp +* tf.compat.v1.image.decode_gif +* tf.compat.v1.image.decode_image +* tf.compat.v1.image.decode_jpeg +* tf.compat.v1.image.decode_png +* tf.compat.v1.image.draw_bounding_boxes +* tf.compat.v1.image.encode_jpeg +* tf.compat.v1.image.encode_png +* tf.compat.v1.image.extract_glimpse +* tf.compat.v1.image.extract_image_patches +* tf.compat.v1.image.extract_jpeg_shape +* tf.compat.v1.image.extract_patches +* tf.compat.v1.image.flip_left_right +* tf.compat.v1.image.flip_up_down +* tf.compat.v1.image.generate_bounding_box_proposals +* tf.compat.v1.image.grayscale_to_rgb +* tf.compat.v1.image.hsv_to_rgb +* tf.compat.v1.image.image_gradients +* tf.compat.v1.image.is_jpeg +* tf.compat.v1.image.non_max_suppression +* tf.compat.v1.image.non_max_suppression_overlaps +* tf.compat.v1.image.non_max_suppression_padded +* tf.compat.v1.image.non_max_suppression_with_scores +* tf.compat.v1.image.pad_to_bounding_box +* tf.compat.v1.image.per_image_standardization +* tf.compat.v1.image.psnr +* tf.compat.v1.image.random_brightness +* tf.compat.v1.image.random_contrast +* tf.compat.v1.image.random_crop +* tf.compat.v1.image.random_flip_left_right +* tf.compat.v1.image.random_flip_up_down +* tf.compat.v1.image.random_hue +* tf.compat.v1.image.random_jpeg_quality +* tf.compat.v1.image.random_saturation +* tf.compat.v1.image.resize +* tf.compat.v1.image.resize_area +* tf.compat.v1.image.resize_bicubic +* tf.compat.v1.image.resize_bilinear +* tf.compat.v1.image.resize_image_with_crop_or_pad +* tf.compat.v1.image.resize_image_with_pad +* tf.compat.v1.image.resize_images +* tf.compat.v1.image.resize_nearest_neighbor +* tf.compat.v1.image.resize_with_crop_or_pad +* tf.compat.v1.image.rgb_to_grayscale +* tf.compat.v1.image.rgb_to_hsv +* tf.compat.v1.image.rgb_to_yiq +* tf.compat.v1.image.rgb_to_yuv +* tf.compat.v1.image.rot90 +* tf.compat.v1.image.sample_distorted_bounding_box +* tf.compat.v1.image.sobel_edges +* tf.compat.v1.image.ssim +* tf.compat.v1.image.ssim_multiscale +* tf.compat.v1.image.total_variation +* tf.compat.v1.image.transpose +* tf.compat.v1.image.transpose_image +* tf.compat.v1.image.yiq_to_rgb +* tf.compat.v1.image.yuv_to_rgb +* tf.compat.v1.import_graph_def +* tf.compat.v1.init_scope +* tf.compat.v1.initialize_all_tables +* tf.compat.v1.initialize_all_variables +* tf.compat.v1.initialize_local_variables +* tf.compat.v1.initialize_variables +* tf.compat.v1.initializers +* tf.compat.v1.initializers.constant +* tf.compat.v1.initializers.global_variables +* tf.compat.v1.initializers.glorot_normal +* tf.compat.v1.initializers.glorot_uniform +* tf.compat.v1.initializers.he_normal +* tf.compat.v1.initializers.he_uniform +* tf.compat.v1.initializers.identity +* 
tf.compat.v1.initializers.lecun_normal +* tf.compat.v1.initializers.lecun_uniform +* tf.compat.v1.initializers.local_variables +* tf.compat.v1.initializers.ones +* tf.compat.v1.initializers.orthogonal +* tf.compat.v1.initializers.random_normal +* tf.compat.v1.initializers.random_uniform +* tf.compat.v1.initializers.tables_initializer +* tf.compat.v1.initializers.truncated_normal +* tf.compat.v1.initializers.uniform_unit_scaling +* tf.compat.v1.initializers.variables +* tf.compat.v1.initializers.variance_scaling +* tf.compat.v1.initializers.zeros +* tf.compat.v1.invert_permutation +* tf.compat.v1.io +* tf.compat.v1.io.FixedLenFeature +* tf.compat.v1.io.FixedLenSequenceFeature +* tf.compat.v1.io.PaddingFIFOQueue +* tf.compat.v1.io.PriorityQueue +* tf.compat.v1.io.QueueBase +* tf.compat.v1.io.RaggedFeature +* tf.compat.v1.io.RaggedFeature.RowLengths +* tf.compat.v1.io.RaggedFeature.RowLimits +* tf.compat.v1.io.RaggedFeature.RowSplits +* tf.compat.v1.io.RaggedFeature.RowStarts +* tf.compat.v1.io.RaggedFeature.UniformRowLength +* tf.compat.v1.io.RaggedFeature.ValueRowIds +* tf.compat.v1.io.RandomShuffleQueue +* tf.compat.v1.io.SparseFeature +* tf.compat.v1.io.TFRecordCompressionType +* tf.compat.v1.io.TFRecordOptions +* tf.compat.v1.io.TFRecordWriter +* tf.compat.v1.io.VarLenFeature +* tf.compat.v1.io.decode_and_crop_jpeg +* tf.compat.v1.io.decode_base64 +* tf.compat.v1.io.decode_bmp +* tf.compat.v1.io.decode_compressed +* tf.compat.v1.io.decode_csv +* tf.compat.v1.io.decode_gif +* tf.compat.v1.io.decode_image +* tf.compat.v1.io.decode_jpeg +* tf.compat.v1.io.decode_json_example +* tf.compat.v1.io.decode_png +* tf.compat.v1.io.decode_proto +* tf.compat.v1.io.decode_raw +* tf.compat.v1.io.deserialize_many_sparse +* tf.compat.v1.io.encode_base64 +* tf.compat.v1.io.encode_jpeg +* tf.compat.v1.io.encode_proto +* tf.compat.v1.io.extract_jpeg_shape +* tf.compat.v1.io.gfile +* tf.compat.v1.io.gfile.GFile +* tf.compat.v1.io.gfile.copy +* tf.compat.v1.io.gfile.exists +* tf.compat.v1.io.gfile.glob +* tf.compat.v1.io.gfile.isdir +* tf.compat.v1.io.gfile.listdir +* tf.compat.v1.io.gfile.makedirs +* tf.compat.v1.io.gfile.mkdir +* tf.compat.v1.io.gfile.remove +* tf.compat.v1.io.gfile.rename +* tf.compat.v1.io.gfile.rmtree +* tf.compat.v1.io.gfile.stat +* tf.compat.v1.io.gfile.walk +* tf.compat.v1.io.is_jpeg +* tf.compat.v1.io.match_filenames_once +* tf.compat.v1.io.matching_files +* tf.compat.v1.io.parse_example +* tf.compat.v1.io.parse_sequence_example +* tf.compat.v1.io.parse_single_example +* tf.compat.v1.io.parse_single_sequence_example +* tf.compat.v1.io.parse_tensor +* tf.compat.v1.io.read_file +* tf.compat.v1.io.serialize_many_sparse +* tf.compat.v1.io.serialize_sparse +* tf.compat.v1.io.serialize_tensor +* tf.compat.v1.io.tf_record_iterator +* tf.compat.v1.io.write_file +* tf.compat.v1.io.write_graph +* tf.compat.v1.is_finite +* tf.compat.v1.is_inf +* tf.compat.v1.is_nan +* tf.compat.v1.is_non_decreasing +* tf.compat.v1.is_numeric_tensor +* tf.compat.v1.is_strictly_increasing +* tf.compat.v1.is_tensor +* tf.compat.v1.is_variable_initialized +* tf.compat.v1.keras +* tf.compat.v1.keras.Input +* tf.compat.v1.keras.Model +* tf.compat.v1.keras.Sequential +* tf.compat.v1.keras.activations +* tf.compat.v1.keras.activations.deserialize +* tf.compat.v1.keras.activations.elu +* tf.compat.v1.keras.activations.exponential +* tf.compat.v1.keras.activations.get +* tf.compat.v1.keras.activations.hard_sigmoid +* tf.compat.v1.keras.activations.linear +* tf.compat.v1.keras.activations.relu +* 
tf.compat.v1.keras.activations.selu +* tf.compat.v1.keras.activations.serialize +* tf.compat.v1.keras.activations.sigmoid +* tf.compat.v1.keras.activations.softmax +* tf.compat.v1.keras.activations.softplus +* tf.compat.v1.keras.activations.softsign +* tf.compat.v1.keras.activations.swish +* tf.compat.v1.keras.activations.tanh +* tf.compat.v1.keras.applications +* tf.compat.v1.keras.applications.DenseNet121 +* tf.compat.v1.keras.applications.DenseNet169 +* tf.compat.v1.keras.applications.DenseNet201 +* tf.compat.v1.keras.applications.InceptionResNetV2 +* tf.compat.v1.keras.applications.InceptionV3 +* tf.compat.v1.keras.applications.MobileNet +* tf.compat.v1.keras.applications.MobileNetV2 +* tf.compat.v1.keras.applications.NASNetLarge +* tf.compat.v1.keras.applications.NASNetMobile +* tf.compat.v1.keras.applications.ResNet101 +* tf.compat.v1.keras.applications.ResNet101V2 +* tf.compat.v1.keras.applications.ResNet152 +* tf.compat.v1.keras.applications.ResNet152V2 +* tf.compat.v1.keras.applications.ResNet50 +* tf.compat.v1.keras.applications.ResNet50V2 +* tf.compat.v1.keras.applications.VGG16 +* tf.compat.v1.keras.applications.VGG19 +* tf.compat.v1.keras.applications.Xception +* tf.compat.v1.keras.applications.densenet +* tf.compat.v1.keras.applications.densenet.DenseNet121 +* tf.compat.v1.keras.applications.densenet.DenseNet169 +* tf.compat.v1.keras.applications.densenet.DenseNet201 +* tf.compat.v1.keras.applications.densenet.decode_predictions +* tf.compat.v1.keras.applications.densenet.preprocess_input +* tf.compat.v1.keras.applications.imagenet_utils +* tf.compat.v1.keras.applications.imagenet_utils.decode_predictions +* tf.compat.v1.keras.applications.imagenet_utils.preprocess_input +* tf.compat.v1.keras.applications.inception_resnet_v2 +* tf.compat.v1.keras.applications.inception_resnet_v2.InceptionResNetV2 +* tf.compat.v1.keras.applications.inception_resnet_v2.decode_predictions +* tf.compat.v1.keras.applications.inception_resnet_v2.preprocess_input +* tf.compat.v1.keras.applications.inception_v3 +* tf.compat.v1.keras.applications.inception_v3.InceptionV3 +* tf.compat.v1.keras.applications.inception_v3.decode_predictions +* tf.compat.v1.keras.applications.inception_v3.preprocess_input +* tf.compat.v1.keras.applications.mobilenet +* tf.compat.v1.keras.applications.mobilenet.MobileNet +* tf.compat.v1.keras.applications.mobilenet.decode_predictions +* tf.compat.v1.keras.applications.mobilenet.preprocess_input +* tf.compat.v1.keras.applications.mobilenet_v2 +* tf.compat.v1.keras.applications.mobilenet_v2.MobileNetV2 +* tf.compat.v1.keras.applications.mobilenet_v2.decode_predictions +* tf.compat.v1.keras.applications.mobilenet_v2.preprocess_input +* tf.compat.v1.keras.applications.nasnet +* tf.compat.v1.keras.applications.nasnet.NASNetLarge +* tf.compat.v1.keras.applications.nasnet.NASNetMobile +* tf.compat.v1.keras.applications.nasnet.decode_predictions +* tf.compat.v1.keras.applications.nasnet.preprocess_input +* tf.compat.v1.keras.applications.resnet +* tf.compat.v1.keras.applications.resnet.ResNet101 +* tf.compat.v1.keras.applications.resnet.ResNet152 +* tf.compat.v1.keras.applications.resnet.ResNet50 +* tf.compat.v1.keras.applications.resnet.decode_predictions +* tf.compat.v1.keras.applications.resnet.preprocess_input +* tf.compat.v1.keras.applications.resnet50 +* tf.compat.v1.keras.applications.resnet50.ResNet50 +* tf.compat.v1.keras.applications.resnet50.decode_predictions +* tf.compat.v1.keras.applications.resnet50.preprocess_input +* tf.compat.v1.keras.applications.resnet_v2 +* 
tf.compat.v1.keras.applications.resnet_v2.ResNet101V2 +* tf.compat.v1.keras.applications.resnet_v2.ResNet152V2 +* tf.compat.v1.keras.applications.resnet_v2.ResNet50V2 +* tf.compat.v1.keras.applications.resnet_v2.decode_predictions +* tf.compat.v1.keras.applications.resnet_v2.preprocess_input +* tf.compat.v1.keras.applications.vgg16 +* tf.compat.v1.keras.applications.vgg16.VGG16 +* tf.compat.v1.keras.applications.vgg16.decode_predictions +* tf.compat.v1.keras.applications.vgg16.preprocess_input +* tf.compat.v1.keras.applications.vgg19 +* tf.compat.v1.keras.applications.vgg19.VGG19 +* tf.compat.v1.keras.applications.vgg19.decode_predictions +* tf.compat.v1.keras.applications.vgg19.preprocess_input +* tf.compat.v1.keras.applications.xception +* tf.compat.v1.keras.applications.xception.Xception +* tf.compat.v1.keras.applications.xception.decode_predictions +* tf.compat.v1.keras.applications.xception.preprocess_input +* tf.compat.v1.keras.backend +* tf.compat.v1.keras.backend.abs +* tf.compat.v1.keras.backend.all +* tf.compat.v1.keras.backend.any +* tf.compat.v1.keras.backend.arange +* tf.compat.v1.keras.backend.argmax +* tf.compat.v1.keras.backend.argmin +* tf.compat.v1.keras.backend.backend +* tf.compat.v1.keras.backend.batch_dot +* tf.compat.v1.keras.backend.batch_flatten +* tf.compat.v1.keras.backend.batch_get_value +* tf.compat.v1.keras.backend.batch_normalization +* tf.compat.v1.keras.backend.batch_set_value +* tf.compat.v1.keras.backend.bias_add +* tf.compat.v1.keras.backend.binary_crossentropy +* tf.compat.v1.keras.backend.cast +* tf.compat.v1.keras.backend.cast_to_floatx +* tf.compat.v1.keras.backend.categorical_crossentropy +* tf.compat.v1.keras.backend.clear_session +* tf.compat.v1.keras.backend.clip +* tf.compat.v1.keras.backend.concatenate +* tf.compat.v1.keras.backend.constant +* tf.compat.v1.keras.backend.conv1d +* tf.compat.v1.keras.backend.conv2d +* tf.compat.v1.keras.backend.conv2d_transpose +* tf.compat.v1.keras.backend.conv3d +* tf.compat.v1.keras.backend.cos +* tf.compat.v1.keras.backend.count_params +* tf.compat.v1.keras.backend.ctc_batch_cost +* tf.compat.v1.keras.backend.ctc_decode +* tf.compat.v1.keras.backend.ctc_label_dense_to_sparse +* tf.compat.v1.keras.backend.cumprod +* tf.compat.v1.keras.backend.cumsum +* tf.compat.v1.keras.backend.depthwise_conv2d +* tf.compat.v1.keras.backend.dot +* tf.compat.v1.keras.backend.dropout +* tf.compat.v1.keras.backend.dtype +* tf.compat.v1.keras.backend.elu +* tf.compat.v1.keras.backend.epsilon +* tf.compat.v1.keras.backend.equal +* tf.compat.v1.keras.backend.eval +* tf.compat.v1.keras.backend.exp +* tf.compat.v1.keras.backend.expand_dims +* tf.compat.v1.keras.backend.eye +* tf.compat.v1.keras.backend.flatten +* tf.compat.v1.keras.backend.floatx +* tf.compat.v1.keras.backend.foldl +* tf.compat.v1.keras.backend.foldr +* tf.compat.v1.keras.backend.function +* tf.compat.v1.keras.backend.gather +* tf.compat.v1.keras.backend.get_session +* tf.compat.v1.keras.backend.get_uid +* tf.compat.v1.keras.backend.get_value +* tf.compat.v1.keras.backend.gradients +* tf.compat.v1.keras.backend.greater +* tf.compat.v1.keras.backend.greater_equal +* tf.compat.v1.keras.backend.hard_sigmoid +* tf.compat.v1.keras.backend.image_data_format +* tf.compat.v1.keras.backend.in_test_phase +* tf.compat.v1.keras.backend.in_top_k +* tf.compat.v1.keras.backend.in_train_phase +* tf.compat.v1.keras.backend.int_shape +* tf.compat.v1.keras.backend.is_keras_tensor +* tf.compat.v1.keras.backend.is_sparse +* tf.compat.v1.keras.backend.l2_normalize +* 
tf.compat.v1.keras.backend.learning_phase +* tf.compat.v1.keras.backend.learning_phase_scope +* tf.compat.v1.keras.backend.less +* tf.compat.v1.keras.backend.less_equal +* tf.compat.v1.keras.backend.local_conv1d +* tf.compat.v1.keras.backend.local_conv2d +* tf.compat.v1.keras.backend.log +* tf.compat.v1.keras.backend.manual_variable_initialization +* tf.compat.v1.keras.backend.map_fn +* tf.compat.v1.keras.backend.max +* tf.compat.v1.keras.backend.maximum +* tf.compat.v1.keras.backend.mean +* tf.compat.v1.keras.backend.min +* tf.compat.v1.keras.backend.minimum +* tf.compat.v1.keras.backend.moving_average_update +* tf.compat.v1.keras.backend.name_scope +* tf.compat.v1.keras.backend.ndim +* tf.compat.v1.keras.backend.normalize_batch_in_training +* tf.compat.v1.keras.backend.not_equal +* tf.compat.v1.keras.backend.one_hot +* tf.compat.v1.keras.backend.ones +* tf.compat.v1.keras.backend.ones_like +* tf.compat.v1.keras.backend.permute_dimensions +* tf.compat.v1.keras.backend.placeholder +* tf.compat.v1.keras.backend.pool2d +* tf.compat.v1.keras.backend.pool3d +* tf.compat.v1.keras.backend.pow +* tf.compat.v1.keras.backend.print_tensor +* tf.compat.v1.keras.backend.prod +* tf.compat.v1.keras.backend.random_binomial +* tf.compat.v1.keras.backend.random_normal +* tf.compat.v1.keras.backend.random_normal_variable +* tf.compat.v1.keras.backend.random_uniform +* tf.compat.v1.keras.backend.random_uniform_variable +* tf.compat.v1.keras.backend.relu +* tf.compat.v1.keras.backend.repeat +* tf.compat.v1.keras.backend.repeat_elements +* tf.compat.v1.keras.backend.reset_uids +* tf.compat.v1.keras.backend.reshape +* tf.compat.v1.keras.backend.resize_images +* tf.compat.v1.keras.backend.resize_volumes +* tf.compat.v1.keras.backend.reverse +* tf.compat.v1.keras.backend.rnn +* tf.compat.v1.keras.backend.round +* tf.compat.v1.keras.backend.separable_conv2d +* tf.compat.v1.keras.backend.set_epsilon +* tf.compat.v1.keras.backend.set_floatx +* tf.compat.v1.keras.backend.set_image_data_format +* tf.compat.v1.keras.backend.set_learning_phase +* tf.compat.v1.keras.backend.set_session +* tf.compat.v1.keras.backend.set_value +* tf.compat.v1.keras.backend.shape +* tf.compat.v1.keras.backend.sigmoid +* tf.compat.v1.keras.backend.sign +* tf.compat.v1.keras.backend.sin +* tf.compat.v1.keras.backend.softmax +* tf.compat.v1.keras.backend.softplus +* tf.compat.v1.keras.backend.softsign +* tf.compat.v1.keras.backend.sparse_categorical_crossentropy +* tf.compat.v1.keras.backend.spatial_2d_padding +* tf.compat.v1.keras.backend.spatial_3d_padding +* tf.compat.v1.keras.backend.sqrt +* tf.compat.v1.keras.backend.square +* tf.compat.v1.keras.backend.squeeze +* tf.compat.v1.keras.backend.stack +* tf.compat.v1.keras.backend.std +* tf.compat.v1.keras.backend.stop_gradient +* tf.compat.v1.keras.backend.sum +* tf.compat.v1.keras.backend.switch +* tf.compat.v1.keras.backend.tanh +* tf.compat.v1.keras.backend.temporal_padding +* tf.compat.v1.keras.backend.tile +* tf.compat.v1.keras.backend.to_dense +* tf.compat.v1.keras.backend.transpose +* tf.compat.v1.keras.backend.truncated_normal +* tf.compat.v1.keras.backend.update +* tf.compat.v1.keras.backend.update_add +* tf.compat.v1.keras.backend.update_sub +* tf.compat.v1.keras.backend.var +* tf.compat.v1.keras.backend.variable +* tf.compat.v1.keras.backend.zeros +* tf.compat.v1.keras.backend.zeros_like +* tf.compat.v1.keras.callbacks +* tf.compat.v1.keras.callbacks.BaseLogger +* tf.compat.v1.keras.callbacks.CSVLogger +* tf.compat.v1.keras.callbacks.Callback +* 
tf.compat.v1.keras.callbacks.EarlyStopping +* tf.compat.v1.keras.callbacks.History +* tf.compat.v1.keras.callbacks.LambdaCallback +* tf.compat.v1.keras.callbacks.LearningRateScheduler +* tf.compat.v1.keras.callbacks.ModelCheckpoint +* tf.compat.v1.keras.callbacks.ProgbarLogger +* tf.compat.v1.keras.callbacks.ReduceLROnPlateau +* tf.compat.v1.keras.callbacks.RemoteMonitor +* tf.compat.v1.keras.callbacks.TensorBoard +* tf.compat.v1.keras.callbacks.TerminateOnNaN +* tf.compat.v1.keras.constraints +* tf.compat.v1.keras.constraints.Constraint +* tf.compat.v1.keras.constraints.MaxNorm +* tf.compat.v1.keras.constraints.MinMaxNorm +* tf.compat.v1.keras.constraints.NonNeg +* tf.compat.v1.keras.constraints.RadialConstraint +* tf.compat.v1.keras.constraints.UnitNorm +* tf.compat.v1.keras.constraints.deserialize +* tf.compat.v1.keras.constraints.get +* tf.compat.v1.keras.constraints.max_norm +* tf.compat.v1.keras.constraints.min_max_norm +* tf.compat.v1.keras.constraints.non_neg +* tf.compat.v1.keras.constraints.radial_constraint +* tf.compat.v1.keras.constraints.serialize +* tf.compat.v1.keras.constraints.unit_norm +* tf.compat.v1.keras.datasets +* tf.compat.v1.keras.datasets.boston_housing +* tf.compat.v1.keras.datasets.boston_housing.load_data +* tf.compat.v1.keras.datasets.cifar10 +* tf.compat.v1.keras.datasets.cifar10.load_data +* tf.compat.v1.keras.datasets.cifar100 +* tf.compat.v1.keras.datasets.cifar100.load_data +* tf.compat.v1.keras.datasets.fashion_mnist +* tf.compat.v1.keras.datasets.fashion_mnist.load_data +* tf.compat.v1.keras.datasets.imdb +* tf.compat.v1.keras.datasets.imdb.get_word_index +* tf.compat.v1.keras.datasets.imdb.load_data +* tf.compat.v1.keras.datasets.mnist +* tf.compat.v1.keras.datasets.mnist.load_data +* tf.compat.v1.keras.datasets.reuters +* tf.compat.v1.keras.datasets.reuters.get_word_index +* tf.compat.v1.keras.datasets.reuters.load_data +* tf.compat.v1.keras.estimator +* tf.compat.v1.keras.estimator.model_to_estimator +* tf.compat.v1.keras.experimental +* tf.compat.v1.keras.experimental.CosineDecay +* tf.compat.v1.keras.experimental.CosineDecayRestarts +* tf.compat.v1.keras.experimental.LinearCosineDecay +* tf.compat.v1.keras.experimental.LinearModel +* tf.compat.v1.keras.experimental.NoisyLinearCosineDecay +* tf.compat.v1.keras.experimental.PeepholeLSTMCell +* tf.compat.v1.keras.experimental.SequenceFeatures +* tf.compat.v1.keras.experimental.WideDeepModel +* tf.compat.v1.keras.experimental.export_saved_model +* tf.compat.v1.keras.experimental.load_from_saved_model +* tf.compat.v1.keras.experimental.terminate_keras_multiprocessing_pools +* tf.compat.v1.keras.initializers +* tf.compat.v1.keras.initializers.Constant +* tf.compat.v1.keras.initializers.Identity +* tf.compat.v1.keras.initializers.Initializer +* tf.compat.v1.keras.initializers.Ones +* tf.compat.v1.keras.initializers.Orthogonal +* tf.compat.v1.keras.initializers.RandomNormal +* tf.compat.v1.keras.initializers.RandomUniform +* tf.compat.v1.keras.initializers.TruncatedNormal +* tf.compat.v1.keras.initializers.VarianceScaling +* tf.compat.v1.keras.initializers.Zeros +* tf.compat.v1.keras.initializers.constant +* tf.compat.v1.keras.initializers.deserialize +* tf.compat.v1.keras.initializers.get +* tf.compat.v1.keras.initializers.glorot_normal +* tf.compat.v1.keras.initializers.glorot_uniform +* tf.compat.v1.keras.initializers.he_normal +* tf.compat.v1.keras.initializers.he_uniform +* tf.compat.v1.keras.initializers.identity +* tf.compat.v1.keras.initializers.lecun_normal +* 
tf.compat.v1.keras.initializers.lecun_uniform +* tf.compat.v1.keras.initializers.normal +* tf.compat.v1.keras.initializers.ones +* tf.compat.v1.keras.initializers.orthogonal +* tf.compat.v1.keras.initializers.random_normal +* tf.compat.v1.keras.initializers.random_uniform +* tf.compat.v1.keras.initializers.serialize +* tf.compat.v1.keras.initializers.truncated_normal +* tf.compat.v1.keras.initializers.uniform +* tf.compat.v1.keras.initializers.zeros +* tf.compat.v1.keras.layers +* tf.compat.v1.keras.layers.AbstractRNNCell +* tf.compat.v1.keras.layers.Activation +* tf.compat.v1.keras.layers.ActivityRegularization +* tf.compat.v1.keras.layers.Add +* tf.compat.v1.keras.layers.AdditiveAttention +* tf.compat.v1.keras.layers.AlphaDropout +* tf.compat.v1.keras.layers.Attention +* tf.compat.v1.keras.layers.Average +* tf.compat.v1.keras.layers.AveragePooling1D +* tf.compat.v1.keras.layers.AveragePooling2D +* tf.compat.v1.keras.layers.AveragePooling3D +* tf.compat.v1.keras.layers.AvgPool1D +* tf.compat.v1.keras.layers.AvgPool2D +* tf.compat.v1.keras.layers.AvgPool3D +* tf.compat.v1.keras.layers.BatchNormalization +* tf.compat.v1.keras.layers.Bidirectional +* tf.compat.v1.keras.layers.Concatenate +* tf.compat.v1.keras.layers.Conv1D +* tf.compat.v1.keras.layers.Conv2D +* tf.compat.v1.keras.layers.Conv2DTranspose +* tf.compat.v1.keras.layers.Conv3D +* tf.compat.v1.keras.layers.Conv3DTranspose +* tf.compat.v1.keras.layers.ConvLSTM2D +* tf.compat.v1.keras.layers.Convolution1D +* tf.compat.v1.keras.layers.Convolution2D +* tf.compat.v1.keras.layers.Convolution2DTranspose +* tf.compat.v1.keras.layers.Convolution3D +* tf.compat.v1.keras.layers.Convolution3DTranspose +* tf.compat.v1.keras.layers.Cropping1D +* tf.compat.v1.keras.layers.Cropping2D +* tf.compat.v1.keras.layers.Cropping3D +* tf.compat.v1.keras.layers.CuDNNGRU +* tf.compat.v1.keras.layers.CuDNNLSTM +* tf.compat.v1.keras.layers.Dense +* tf.compat.v1.keras.layers.DenseFeatures +* tf.compat.v1.keras.layers.DepthwiseConv2D +* tf.compat.v1.keras.layers.Dot +* tf.compat.v1.keras.layers.Dropout +* tf.compat.v1.keras.layers.ELU +* tf.compat.v1.keras.layers.Embedding +* tf.compat.v1.keras.layers.Flatten +* tf.compat.v1.keras.layers.GRU +* tf.compat.v1.keras.layers.GRUCell +* tf.compat.v1.keras.layers.GaussianDropout +* tf.compat.v1.keras.layers.GaussianNoise +* tf.compat.v1.keras.layers.GlobalAveragePooling1D +* tf.compat.v1.keras.layers.GlobalAveragePooling2D +* tf.compat.v1.keras.layers.GlobalAveragePooling3D +* tf.compat.v1.keras.layers.GlobalAvgPool1D +* tf.compat.v1.keras.layers.GlobalAvgPool2D +* tf.compat.v1.keras.layers.GlobalAvgPool3D +* tf.compat.v1.keras.layers.GlobalMaxPool1D +* tf.compat.v1.keras.layers.GlobalMaxPool2D +* tf.compat.v1.keras.layers.GlobalMaxPool3D +* tf.compat.v1.keras.layers.GlobalMaxPooling1D +* tf.compat.v1.keras.layers.GlobalMaxPooling2D +* tf.compat.v1.keras.layers.GlobalMaxPooling3D +* tf.compat.v1.keras.layers.Input +* tf.compat.v1.keras.layers.InputLayer +* tf.compat.v1.keras.layers.InputSpec +* tf.compat.v1.keras.layers.LSTM +* tf.compat.v1.keras.layers.LSTMCell +* tf.compat.v1.keras.layers.Lambda +* tf.compat.v1.keras.layers.Layer +* tf.compat.v1.keras.layers.LayerNormalization +* tf.compat.v1.keras.layers.LeakyReLU +* tf.compat.v1.keras.layers.LocallyConnected1D +* tf.compat.v1.keras.layers.LocallyConnected2D +* tf.compat.v1.keras.layers.Masking +* tf.compat.v1.keras.layers.MaxPool1D +* tf.compat.v1.keras.layers.MaxPool2D +* tf.compat.v1.keras.layers.MaxPool3D +* tf.compat.v1.keras.layers.MaxPooling1D +* 
tf.compat.v1.keras.layers.MaxPooling2D +* tf.compat.v1.keras.layers.MaxPooling3D +* tf.compat.v1.keras.layers.Maximum +* tf.compat.v1.keras.layers.Minimum +* tf.compat.v1.keras.layers.Multiply +* tf.compat.v1.keras.layers.PReLU +* tf.compat.v1.keras.layers.Permute +* tf.compat.v1.keras.layers.RNN +* tf.compat.v1.keras.layers.ReLU +* tf.compat.v1.keras.layers.RepeatVector +* tf.compat.v1.keras.layers.Reshape +* tf.compat.v1.keras.layers.SeparableConv1D +* tf.compat.v1.keras.layers.SeparableConv2D +* tf.compat.v1.keras.layers.SeparableConvolution1D +* tf.compat.v1.keras.layers.SeparableConvolution2D +* tf.compat.v1.keras.layers.SimpleRNN +* tf.compat.v1.keras.layers.SimpleRNNCell +* tf.compat.v1.keras.layers.Softmax +* tf.compat.v1.keras.layers.SpatialDropout1D +* tf.compat.v1.keras.layers.SpatialDropout2D +* tf.compat.v1.keras.layers.SpatialDropout3D +* tf.compat.v1.keras.layers.StackedRNNCells +* tf.compat.v1.keras.layers.Subtract +* tf.compat.v1.keras.layers.ThresholdedReLU +* tf.compat.v1.keras.layers.TimeDistributed +* tf.compat.v1.keras.layers.UpSampling1D +* tf.compat.v1.keras.layers.UpSampling2D +* tf.compat.v1.keras.layers.UpSampling3D +* tf.compat.v1.keras.layers.Wrapper +* tf.compat.v1.keras.layers.ZeroPadding1D +* tf.compat.v1.keras.layers.ZeroPadding2D +* tf.compat.v1.keras.layers.ZeroPadding3D +* tf.compat.v1.keras.layers.add +* tf.compat.v1.keras.layers.average +* tf.compat.v1.keras.layers.concatenate +* tf.compat.v1.keras.layers.deserialize +* tf.compat.v1.keras.layers.dot +* tf.compat.v1.keras.layers.experimental +* tf.compat.v1.keras.layers.experimental.preprocessing +* tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop +* tf.compat.v1.keras.layers.experimental.preprocessing.Normalization +* tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer +* tf.compat.v1.keras.layers.experimental.preprocessing.RandomContrast +* tf.compat.v1.keras.layers.experimental.preprocessing.RandomCrop +* tf.compat.v1.keras.layers.experimental.preprocessing.RandomFlip +* tf.compat.v1.keras.layers.experimental.preprocessing.RandomHeight +* tf.compat.v1.keras.layers.experimental.preprocessing.RandomRotation +* tf.compat.v1.keras.layers.experimental.preprocessing.RandomTranslation +* tf.compat.v1.keras.layers.experimental.preprocessing.RandomWidth +* tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling +* tf.compat.v1.keras.layers.experimental.preprocessing.Resizing +* tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization +* tf.compat.v1.keras.layers.maximum +* tf.compat.v1.keras.layers.minimum +* tf.compat.v1.keras.layers.multiply +* tf.compat.v1.keras.layers.serialize +* tf.compat.v1.keras.layers.subtract +* tf.compat.v1.keras.losses +* tf.compat.v1.keras.losses.BinaryCrossentropy +* tf.compat.v1.keras.losses.CategoricalCrossentropy +* tf.compat.v1.keras.losses.CategoricalHinge +* tf.compat.v1.keras.losses.CosineSimilarity +* tf.compat.v1.keras.losses.Hinge +* tf.compat.v1.keras.losses.Huber +* tf.compat.v1.keras.losses.KLD +* tf.compat.v1.keras.losses.KLDivergence +* tf.compat.v1.keras.losses.LogCosh +* tf.compat.v1.keras.losses.Loss +* tf.compat.v1.keras.losses.MAE +* tf.compat.v1.keras.losses.MAPE +* tf.compat.v1.keras.losses.MSE +* tf.compat.v1.keras.losses.MSLE +* tf.compat.v1.keras.losses.MeanAbsoluteError +* tf.compat.v1.keras.losses.MeanAbsolutePercentageError +* tf.compat.v1.keras.losses.MeanSquaredError +* tf.compat.v1.keras.losses.MeanSquaredLogarithmicError +* tf.compat.v1.keras.losses.Poisson +* 
tf.compat.v1.keras.losses.SparseCategoricalCrossentropy +* tf.compat.v1.keras.losses.SquaredHinge +* tf.compat.v1.keras.losses.binary_crossentropy +* tf.compat.v1.keras.losses.categorical_crossentropy +* tf.compat.v1.keras.losses.categorical_hinge +* tf.compat.v1.keras.losses.cosine +* tf.compat.v1.keras.losses.cosine_proximity +* tf.compat.v1.keras.losses.cosine_similarity +* tf.compat.v1.keras.losses.deserialize +* tf.compat.v1.keras.losses.get +* tf.compat.v1.keras.losses.hinge +* tf.compat.v1.keras.losses.kld +* tf.compat.v1.keras.losses.kullback_leibler_divergence +* tf.compat.v1.keras.losses.logcosh +* tf.compat.v1.keras.losses.mae +* tf.compat.v1.keras.losses.mape +* tf.compat.v1.keras.losses.mean_absolute_error +* tf.compat.v1.keras.losses.mean_absolute_percentage_error +* tf.compat.v1.keras.losses.mean_squared_error +* tf.compat.v1.keras.losses.mean_squared_logarithmic_error +* tf.compat.v1.keras.losses.mse +* tf.compat.v1.keras.losses.msle +* tf.compat.v1.keras.losses.poisson +* tf.compat.v1.keras.losses.serialize +* tf.compat.v1.keras.losses.sparse_categorical_crossentropy +* tf.compat.v1.keras.losses.squared_hinge +* tf.compat.v1.keras.metrics +* tf.compat.v1.keras.metrics.AUC +* tf.compat.v1.keras.metrics.Accuracy +* tf.compat.v1.keras.metrics.BinaryAccuracy +* tf.compat.v1.keras.metrics.BinaryCrossentropy +* tf.compat.v1.keras.metrics.CategoricalAccuracy +* tf.compat.v1.keras.metrics.CategoricalCrossentropy +* tf.compat.v1.keras.metrics.CategoricalHinge +* tf.compat.v1.keras.metrics.CosineSimilarity +* tf.compat.v1.keras.metrics.FalseNegatives +* tf.compat.v1.keras.metrics.FalsePositives +* tf.compat.v1.keras.metrics.Hinge +* tf.compat.v1.keras.metrics.KLD +* tf.compat.v1.keras.metrics.KLDivergence +* tf.compat.v1.keras.metrics.LogCoshError +* tf.compat.v1.keras.metrics.MAE +* tf.compat.v1.keras.metrics.MAPE +* tf.compat.v1.keras.metrics.MSE +* tf.compat.v1.keras.metrics.MSLE +* tf.compat.v1.keras.metrics.Mean +* tf.compat.v1.keras.metrics.MeanAbsoluteError +* tf.compat.v1.keras.metrics.MeanAbsolutePercentageError +* tf.compat.v1.keras.metrics.MeanIoU +* tf.compat.v1.keras.metrics.MeanRelativeError +* tf.compat.v1.keras.metrics.MeanSquaredError +* tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError +* tf.compat.v1.keras.metrics.MeanTensor +* tf.compat.v1.keras.metrics.Metric +* tf.compat.v1.keras.metrics.Poisson +* tf.compat.v1.keras.metrics.Precision +* tf.compat.v1.keras.metrics.PrecisionAtRecall +* tf.compat.v1.keras.metrics.Recall +* tf.compat.v1.keras.metrics.RecallAtPrecision +* tf.compat.v1.keras.metrics.RootMeanSquaredError +* tf.compat.v1.keras.metrics.SensitivityAtSpecificity +* tf.compat.v1.keras.metrics.SparseCategoricalAccuracy +* tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy +* tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy +* tf.compat.v1.keras.metrics.SpecificityAtSensitivity +* tf.compat.v1.keras.metrics.SquaredHinge +* tf.compat.v1.keras.metrics.Sum +* tf.compat.v1.keras.metrics.TopKCategoricalAccuracy +* tf.compat.v1.keras.metrics.TrueNegatives +* tf.compat.v1.keras.metrics.TruePositives +* tf.compat.v1.keras.metrics.binary_accuracy +* tf.compat.v1.keras.metrics.binary_crossentropy +* tf.compat.v1.keras.metrics.categorical_accuracy +* tf.compat.v1.keras.metrics.categorical_crossentropy +* tf.compat.v1.keras.metrics.cosine +* tf.compat.v1.keras.metrics.cosine_proximity +* tf.compat.v1.keras.metrics.deserialize +* tf.compat.v1.keras.metrics.get +* tf.compat.v1.keras.metrics.hinge +* tf.compat.v1.keras.metrics.kld +* 
tf.compat.v1.keras.metrics.kullback_leibler_divergence +* tf.compat.v1.keras.metrics.mae +* tf.compat.v1.keras.metrics.mape +* tf.compat.v1.keras.metrics.mean_absolute_error +* tf.compat.v1.keras.metrics.mean_absolute_percentage_error +* tf.compat.v1.keras.metrics.mean_squared_error +* tf.compat.v1.keras.metrics.mean_squared_logarithmic_error +* tf.compat.v1.keras.metrics.mse +* tf.compat.v1.keras.metrics.msle +* tf.compat.v1.keras.metrics.poisson +* tf.compat.v1.keras.metrics.serialize +* tf.compat.v1.keras.metrics.sparse_categorical_accuracy +* tf.compat.v1.keras.metrics.sparse_categorical_crossentropy +* tf.compat.v1.keras.metrics.sparse_top_k_categorical_accuracy +* tf.compat.v1.keras.metrics.squared_hinge +* tf.compat.v1.keras.metrics.top_k_categorical_accuracy +* tf.compat.v1.keras.mixed_precision +* tf.compat.v1.keras.mixed_precision.experimental +* tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer +* tf.compat.v1.keras.mixed_precision.experimental.Policy +* tf.compat.v1.keras.mixed_precision.experimental.get_layer_policy +* tf.compat.v1.keras.mixed_precision.experimental.global_policy +* tf.compat.v1.keras.mixed_precision.experimental.set_policy +* tf.compat.v1.keras.models +* tf.compat.v1.keras.models.Model +* tf.compat.v1.keras.models.Sequential +* tf.compat.v1.keras.models.clone_model +* tf.compat.v1.keras.models.load_model +* tf.compat.v1.keras.models.model_from_config +* tf.compat.v1.keras.models.model_from_json +* tf.compat.v1.keras.models.model_from_yaml +* tf.compat.v1.keras.models.save_model +* tf.compat.v1.keras.optimizers +* tf.compat.v1.keras.optimizers.Adadelta +* tf.compat.v1.keras.optimizers.Adagrad +* tf.compat.v1.keras.optimizers.Adam +* tf.compat.v1.keras.optimizers.Adamax +* tf.compat.v1.keras.optimizers.Ftrl +* tf.compat.v1.keras.optimizers.Nadam +* tf.compat.v1.keras.optimizers.Optimizer +* tf.compat.v1.keras.optimizers.RMSprop +* tf.compat.v1.keras.optimizers.SGD +* tf.compat.v1.keras.optimizers.deserialize +* tf.compat.v1.keras.optimizers.get +* tf.compat.v1.keras.optimizers.schedules +* tf.compat.v1.keras.optimizers.schedules.ExponentialDecay +* tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay +* tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule +* tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay +* tf.compat.v1.keras.optimizers.schedules.PolynomialDecay +* tf.compat.v1.keras.optimizers.schedules.deserialize +* tf.compat.v1.keras.optimizers.schedules.serialize +* tf.compat.v1.keras.optimizers.serialize +* tf.compat.v1.keras.preprocessing +* tf.compat.v1.keras.preprocessing.image +* tf.compat.v1.keras.preprocessing.image.DirectoryIterator +* tf.compat.v1.keras.preprocessing.image.ImageDataGenerator +* tf.compat.v1.keras.preprocessing.image.Iterator +* tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator +* tf.compat.v1.keras.preprocessing.image.apply_affine_transform +* tf.compat.v1.keras.preprocessing.image.apply_brightness_shift +* tf.compat.v1.keras.preprocessing.image.apply_channel_shift +* tf.compat.v1.keras.preprocessing.image.array_to_img +* tf.compat.v1.keras.preprocessing.image.img_to_array +* tf.compat.v1.keras.preprocessing.image.load_img +* tf.compat.v1.keras.preprocessing.image.random_brightness +* tf.compat.v1.keras.preprocessing.image.random_channel_shift +* tf.compat.v1.keras.preprocessing.image.random_rotation +* tf.compat.v1.keras.preprocessing.image.random_shear +* tf.compat.v1.keras.preprocessing.image.random_shift +* tf.compat.v1.keras.preprocessing.image.random_zoom +* 
tf.compat.v1.keras.preprocessing.image.save_img +* tf.compat.v1.keras.preprocessing.sequence +* tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator +* tf.compat.v1.keras.preprocessing.sequence.make_sampling_table +* tf.compat.v1.keras.preprocessing.sequence.pad_sequences +* tf.compat.v1.keras.preprocessing.sequence.skipgrams +* tf.compat.v1.keras.preprocessing.text +* tf.compat.v1.keras.preprocessing.text.Tokenizer +* tf.compat.v1.keras.preprocessing.text.hashing_trick +* tf.compat.v1.keras.preprocessing.text.one_hot +* tf.compat.v1.keras.preprocessing.text.text_to_word_sequence +* tf.compat.v1.keras.preprocessing.text.tokenizer_from_json +* tf.compat.v1.keras.regularizers +* tf.compat.v1.keras.regularizers.L1L2 +* tf.compat.v1.keras.regularizers.Regularizer +* tf.compat.v1.keras.regularizers.deserialize +* tf.compat.v1.keras.regularizers.get +* tf.compat.v1.keras.regularizers.l1 +* tf.compat.v1.keras.regularizers.l1_l2 +* tf.compat.v1.keras.regularizers.l2 +* tf.compat.v1.keras.regularizers.serialize +* tf.compat.v1.keras.utils +* tf.compat.v1.keras.utils.CustomObjectScope +* tf.compat.v1.keras.utils.GeneratorEnqueuer +* tf.compat.v1.keras.utils.HDF5Matrix +* tf.compat.v1.keras.utils.OrderedEnqueuer +* tf.compat.v1.keras.utils.Progbar +* tf.compat.v1.keras.utils.Sequence +* tf.compat.v1.keras.utils.SequenceEnqueuer +* tf.compat.v1.keras.utils.convert_all_kernels_in_model +* tf.compat.v1.keras.utils.custom_object_scope +* tf.compat.v1.keras.utils.deserialize_keras_object +* tf.compat.v1.keras.utils.get_custom_objects +* tf.compat.v1.keras.utils.get_file +* tf.compat.v1.keras.utils.get_registered_name +* tf.compat.v1.keras.utils.get_registered_object +* tf.compat.v1.keras.utils.get_source_inputs +* tf.compat.v1.keras.utils.model_to_dot +* tf.compat.v1.keras.utils.multi_gpu_model +* tf.compat.v1.keras.utils.normalize +* tf.compat.v1.keras.utils.plot_model +* tf.compat.v1.keras.utils.register_keras_serializable +* tf.compat.v1.keras.utils.serialize_keras_object +* tf.compat.v1.keras.utils.to_categorical +* tf.compat.v1.keras.wrappers +* tf.compat.v1.keras.wrappers.scikit_learn +* tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier +* tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor +* tf.compat.v1.layers +* tf.compat.v1.layers.AveragePooling1D +* tf.compat.v1.layers.AveragePooling2D +* tf.compat.v1.layers.AveragePooling3D +* tf.compat.v1.layers.BatchNormalization +* tf.compat.v1.layers.Conv1D +* tf.compat.v1.layers.Conv2D +* tf.compat.v1.layers.Conv2DTranspose +* tf.compat.v1.layers.Conv3D +* tf.compat.v1.layers.Conv3DTranspose +* tf.compat.v1.layers.Dense +* tf.compat.v1.layers.Dropout +* tf.compat.v1.layers.Flatten +* tf.compat.v1.layers.InputSpec +* tf.compat.v1.layers.Layer +* tf.compat.v1.layers.MaxPooling1D +* tf.compat.v1.layers.MaxPooling2D +* tf.compat.v1.layers.MaxPooling3D +* tf.compat.v1.layers.SeparableConv1D +* tf.compat.v1.layers.SeparableConv2D +* tf.compat.v1.layers.average_pooling1d +* tf.compat.v1.layers.average_pooling2d +* tf.compat.v1.layers.average_pooling3d +* tf.compat.v1.layers.batch_normalization +* tf.compat.v1.layers.conv1d +* tf.compat.v1.layers.conv2d +* tf.compat.v1.layers.conv2d_transpose +* tf.compat.v1.layers.conv3d +* tf.compat.v1.layers.conv3d_transpose +* tf.compat.v1.layers.dense +* tf.compat.v1.layers.dropout +* tf.compat.v1.layers.experimental +* tf.compat.v1.layers.experimental.keras_style_scope +* tf.compat.v1.layers.experimental.set_keras_style +* tf.compat.v1.layers.flatten +* tf.compat.v1.layers.max_pooling1d +* 
tf.compat.v1.layers.max_pooling2d +* tf.compat.v1.layers.max_pooling3d +* tf.compat.v1.layers.separable_conv1d +* tf.compat.v1.layers.separable_conv2d +* tf.compat.v1.lbeta +* tf.compat.v1.less +* tf.compat.v1.less_equal +* tf.compat.v1.lgamma +* tf.compat.v1.lin_space +* tf.compat.v1.linalg +* tf.compat.v1.linalg.LinearOperator +* tf.compat.v1.linalg.LinearOperatorAdjoint +* tf.compat.v1.linalg.LinearOperatorBlockDiag +* tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular +* tf.compat.v1.linalg.LinearOperatorCirculant +* tf.compat.v1.linalg.LinearOperatorCirculant2D +* tf.compat.v1.linalg.LinearOperatorCirculant3D +* tf.compat.v1.linalg.LinearOperatorComposition +* tf.compat.v1.linalg.LinearOperatorDiag +* tf.compat.v1.linalg.LinearOperatorFullMatrix +* tf.compat.v1.linalg.LinearOperatorHouseholder +* tf.compat.v1.linalg.LinearOperatorIdentity +* tf.compat.v1.linalg.LinearOperatorInversion +* tf.compat.v1.linalg.LinearOperatorKronecker +* tf.compat.v1.linalg.LinearOperatorLowRankUpdate +* tf.compat.v1.linalg.LinearOperatorLowerTriangular +* tf.compat.v1.linalg.LinearOperatorPermutation +* tf.compat.v1.linalg.LinearOperatorScaledIdentity +* tf.compat.v1.linalg.LinearOperatorToeplitz +* tf.compat.v1.linalg.LinearOperatorTridiag +* tf.compat.v1.linalg.LinearOperatorZeros +* tf.compat.v1.linalg.adjoint +* tf.compat.v1.linalg.band_part +* tf.compat.v1.linalg.cholesky +* tf.compat.v1.linalg.cholesky_solve +* tf.compat.v1.linalg.cross +* tf.compat.v1.linalg.det +* tf.compat.v1.linalg.diag +* tf.compat.v1.linalg.diag_part +* tf.compat.v1.linalg.eigh +* tf.compat.v1.linalg.eigvalsh +* tf.compat.v1.linalg.einsum +* tf.compat.v1.linalg.experimental +* tf.compat.v1.linalg.experimental.conjugate_gradient +* tf.compat.v1.linalg.expm +* tf.compat.v1.linalg.eye +* tf.compat.v1.linalg.global_norm +* tf.compat.v1.linalg.inv +* tf.compat.v1.linalg.l2_normalize +* tf.compat.v1.linalg.logdet +* tf.compat.v1.linalg.logm +* tf.compat.v1.linalg.lstsq +* tf.compat.v1.linalg.lu +* tf.compat.v1.linalg.lu_matrix_inverse +* tf.compat.v1.linalg.lu_reconstruct +* tf.compat.v1.linalg.lu_solve +* tf.compat.v1.linalg.matmul +* tf.compat.v1.linalg.matrix_rank +* tf.compat.v1.linalg.matrix_transpose +* tf.compat.v1.linalg.matvec +* tf.compat.v1.linalg.norm +* tf.compat.v1.linalg.normalize +* tf.compat.v1.linalg.pinv +* tf.compat.v1.linalg.qr +* tf.compat.v1.linalg.set_diag +* tf.compat.v1.linalg.slogdet +* tf.compat.v1.linalg.solve +* tf.compat.v1.linalg.sqrtm +* tf.compat.v1.linalg.svd +* tf.compat.v1.linalg.tensor_diag +* tf.compat.v1.linalg.tensor_diag_part +* tf.compat.v1.linalg.tensordot +* tf.compat.v1.linalg.trace +* tf.compat.v1.linalg.transpose +* tf.compat.v1.linalg.triangular_solve +* tf.compat.v1.linalg.tridiagonal_matmul +* tf.compat.v1.linalg.tridiagonal_solve +* tf.compat.v1.linspace +* tf.compat.v1.lite +* tf.compat.v1.lite.Interpreter +* tf.compat.v1.lite.OpHint +* tf.compat.v1.lite.OpHint.OpHintArgumentTracker +* tf.compat.v1.lite.OpsSet +* tf.compat.v1.lite.Optimize +* tf.compat.v1.lite.RepresentativeDataset +* tf.compat.v1.lite.TFLiteConverter +* tf.compat.v1.lite.TargetSpec +* tf.compat.v1.lite.TocoConverter +* tf.compat.v1.lite.constants +* tf.compat.v1.lite.experimental +* tf.compat.v1.lite.experimental.convert_op_hints_to_stubs +* tf.compat.v1.lite.experimental.get_potentially_supported_ops +* tf.compat.v1.lite.experimental.load_delegate +* tf.compat.v1.lite.experimental.nn +* tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell +* tf.compat.v1.lite.experimental.nn.TfLiteRNNCell +* 
tf.compat.v1.lite.experimental.nn.dynamic_rnn +* tf.compat.v1.lite.toco_convert +* tf.compat.v1.load_file_system_library +* tf.compat.v1.load_library +* tf.compat.v1.load_op_library +* tf.compat.v1.local_variables +* tf.compat.v1.local_variables_initializer +* tf.compat.v1.log +* tf.compat.v1.log1p +* tf.compat.v1.log_sigmoid +* tf.compat.v1.logging +* tf.compat.v1.logging.TaskLevelStatusMessage +* tf.compat.v1.logging.debug +* tf.compat.v1.logging.error +* tf.compat.v1.logging.fatal +* tf.compat.v1.logging.flush +* tf.compat.v1.logging.get_verbosity +* tf.compat.v1.logging.info +* tf.compat.v1.logging.log +* tf.compat.v1.logging.log_every_n +* tf.compat.v1.logging.log_first_n +* tf.compat.v1.logging.log_if +* tf.compat.v1.logging.set_verbosity +* tf.compat.v1.logging.vlog +* tf.compat.v1.logging.warn +* tf.compat.v1.logging.warning +* tf.compat.v1.logical_and +* tf.compat.v1.logical_not +* tf.compat.v1.logical_or +* tf.compat.v1.logical_xor +* tf.compat.v1.lookup +* tf.compat.v1.lookup.KeyValueTensorInitializer +* tf.compat.v1.lookup.StaticHashTable +* tf.compat.v1.lookup.StaticVocabularyTable +* tf.compat.v1.lookup.TextFileIndex +* tf.compat.v1.lookup.TextFileInitializer +* tf.compat.v1.lookup.experimental +* tf.compat.v1.lookup.experimental.DenseHashTable +* tf.compat.v1.losses +* tf.compat.v1.losses.Reduction +* tf.compat.v1.losses.absolute_difference +* tf.compat.v1.losses.add_loss +* tf.compat.v1.losses.compute_weighted_loss +* tf.compat.v1.losses.cosine_distance +* tf.compat.v1.losses.get_losses +* tf.compat.v1.losses.get_regularization_loss +* tf.compat.v1.losses.get_regularization_losses +* tf.compat.v1.losses.get_total_loss +* tf.compat.v1.losses.hinge_loss +* tf.compat.v1.losses.huber_loss +* tf.compat.v1.losses.log_loss +* tf.compat.v1.losses.mean_pairwise_squared_error +* tf.compat.v1.losses.mean_squared_error +* tf.compat.v1.losses.sigmoid_cross_entropy +* tf.compat.v1.losses.softmax_cross_entropy +* tf.compat.v1.losses.sparse_softmax_cross_entropy +* tf.compat.v1.make_ndarray +* tf.compat.v1.make_template +* tf.compat.v1.make_tensor_proto +* tf.compat.v1.manip +* tf.compat.v1.manip.batch_to_space_nd +* tf.compat.v1.manip.gather_nd +* tf.compat.v1.manip.reshape +* tf.compat.v1.manip.reverse +* tf.compat.v1.manip.roll +* tf.compat.v1.manip.scatter_nd +* tf.compat.v1.manip.space_to_batch_nd +* tf.compat.v1.manip.tile +* tf.compat.v1.map_fn +* tf.compat.v1.matching_files +* tf.compat.v1.math +* tf.compat.v1.math.abs +* tf.compat.v1.math.accumulate_n +* tf.compat.v1.math.acos +* tf.compat.v1.math.acosh +* tf.compat.v1.math.add +* tf.compat.v1.math.add_n +* tf.compat.v1.math.angle +* tf.compat.v1.math.argmax +* tf.compat.v1.math.argmin +* tf.compat.v1.math.asin +* tf.compat.v1.math.asinh +* tf.compat.v1.math.atan +* tf.compat.v1.math.atan2 +* tf.compat.v1.math.atanh +* tf.compat.v1.math.bessel_i0 +* tf.compat.v1.math.bessel_i0e +* tf.compat.v1.math.bessel_i1 +* tf.compat.v1.math.bessel_i1e +* tf.compat.v1.math.betainc +* tf.compat.v1.math.bincount +* tf.compat.v1.math.ceil +* tf.compat.v1.math.confusion_matrix +* tf.compat.v1.math.conj +* tf.compat.v1.math.cos +* tf.compat.v1.math.cosh +* tf.compat.v1.math.count_nonzero +* tf.compat.v1.math.cumprod +* tf.compat.v1.math.cumsum +* tf.compat.v1.math.cumulative_logsumexp +* tf.compat.v1.math.digamma +* tf.compat.v1.math.divide +* tf.compat.v1.math.divide_no_nan +* tf.compat.v1.math.equal +* tf.compat.v1.math.erf +* tf.compat.v1.math.erfc +* tf.compat.v1.math.erfinv +* tf.compat.v1.math.exp +* tf.compat.v1.math.expm1 +* 
tf.compat.v1.math.floor +* tf.compat.v1.math.floordiv +* tf.compat.v1.math.floormod +* tf.compat.v1.math.greater +* tf.compat.v1.math.greater_equal +* tf.compat.v1.math.igamma +* tf.compat.v1.math.igammac +* tf.compat.v1.math.imag +* tf.compat.v1.math.in_top_k +* tf.compat.v1.math.invert_permutation +* tf.compat.v1.math.is_finite +* tf.compat.v1.math.is_inf +* tf.compat.v1.math.is_nan +* tf.compat.v1.math.is_non_decreasing +* tf.compat.v1.math.is_strictly_increasing +* tf.compat.v1.math.l2_normalize +* tf.compat.v1.math.lbeta +* tf.compat.v1.math.less +* tf.compat.v1.math.less_equal +* tf.compat.v1.math.lgamma +* tf.compat.v1.math.log +* tf.compat.v1.math.log1p +* tf.compat.v1.math.log_sigmoid +* tf.compat.v1.math.log_softmax +* tf.compat.v1.math.logical_and +* tf.compat.v1.math.logical_not +* tf.compat.v1.math.logical_or +* tf.compat.v1.math.logical_xor +* tf.compat.v1.math.maximum +* tf.compat.v1.math.minimum +* tf.compat.v1.math.mod +* tf.compat.v1.math.multiply +* tf.compat.v1.math.multiply_no_nan +* tf.compat.v1.math.ndtri +* tf.compat.v1.math.negative +* tf.compat.v1.math.nextafter +* tf.compat.v1.math.not_equal +* tf.compat.v1.math.polygamma +* tf.compat.v1.math.polyval +* tf.compat.v1.math.pow +* tf.compat.v1.math.real +* tf.compat.v1.math.reciprocal +* tf.compat.v1.math.reciprocal_no_nan +* tf.compat.v1.math.reduce_all +* tf.compat.v1.math.reduce_any +* tf.compat.v1.math.reduce_euclidean_norm +* tf.compat.v1.math.reduce_logsumexp +* tf.compat.v1.math.reduce_max +* tf.compat.v1.math.reduce_mean +* tf.compat.v1.math.reduce_min +* tf.compat.v1.math.reduce_prod +* tf.compat.v1.math.reduce_std +* tf.compat.v1.math.reduce_sum +* tf.compat.v1.math.reduce_variance +* tf.compat.v1.math.rint +* tf.compat.v1.math.round +* tf.compat.v1.math.rsqrt +* tf.compat.v1.math.scalar_mul +* tf.compat.v1.math.segment_max +* tf.compat.v1.math.segment_mean +* tf.compat.v1.math.segment_min +* tf.compat.v1.math.segment_prod +* tf.compat.v1.math.segment_sum +* tf.compat.v1.math.sigmoid +* tf.compat.v1.math.sign +* tf.compat.v1.math.sin +* tf.compat.v1.math.sinh +* tf.compat.v1.math.sobol_sample +* tf.compat.v1.math.softmax +* tf.compat.v1.math.softplus +* tf.compat.v1.math.softsign +* tf.compat.v1.math.special +* tf.compat.v1.math.special.dawsn +* tf.compat.v1.math.special.expint +* tf.compat.v1.math.special.fresnel_cos +* tf.compat.v1.math.special.fresnel_sin +* tf.compat.v1.math.special.spence +* tf.compat.v1.math.sqrt +* tf.compat.v1.math.square +* tf.compat.v1.math.squared_difference +* tf.compat.v1.math.subtract +* tf.compat.v1.math.tan +* tf.compat.v1.math.tanh +* tf.compat.v1.math.top_k +* tf.compat.v1.math.truediv +* tf.compat.v1.math.unsorted_segment_max +* tf.compat.v1.math.unsorted_segment_mean +* tf.compat.v1.math.unsorted_segment_min +* tf.compat.v1.math.unsorted_segment_prod +* tf.compat.v1.math.unsorted_segment_sqrt_n +* tf.compat.v1.math.unsorted_segment_sum +* tf.compat.v1.math.xdivy +* tf.compat.v1.math.xlog1py +* tf.compat.v1.math.xlogy +* tf.compat.v1.math.zero_fraction +* tf.compat.v1.math.zeta +* tf.compat.v1.matmul +* tf.compat.v1.matrix_band_part +* tf.compat.v1.matrix_determinant +* tf.compat.v1.matrix_diag +* tf.compat.v1.matrix_diag_part +* tf.compat.v1.matrix_inverse +* tf.compat.v1.matrix_set_diag +* tf.compat.v1.matrix_solve +* tf.compat.v1.matrix_solve_ls +* tf.compat.v1.matrix_square_root +* tf.compat.v1.matrix_transpose +* tf.compat.v1.matrix_triangular_solve +* tf.compat.v1.maximum +* tf.compat.v1.meshgrid +* tf.compat.v1.metrics +* tf.compat.v1.metrics.accuracy +* 
tf.compat.v1.metrics.auc +* tf.compat.v1.metrics.average_precision_at_k +* tf.compat.v1.metrics.false_negatives +* tf.compat.v1.metrics.false_negatives_at_thresholds +* tf.compat.v1.metrics.false_positives +* tf.compat.v1.metrics.false_positives_at_thresholds +* tf.compat.v1.metrics.mean +* tf.compat.v1.metrics.mean_absolute_error +* tf.compat.v1.metrics.mean_cosine_distance +* tf.compat.v1.metrics.mean_iou +* tf.compat.v1.metrics.mean_per_class_accuracy +* tf.compat.v1.metrics.mean_relative_error +* tf.compat.v1.metrics.mean_squared_error +* tf.compat.v1.metrics.mean_tensor +* tf.compat.v1.metrics.percentage_below +* tf.compat.v1.metrics.precision +* tf.compat.v1.metrics.precision_at_k +* tf.compat.v1.metrics.precision_at_thresholds +* tf.compat.v1.metrics.precision_at_top_k +* tf.compat.v1.metrics.recall +* tf.compat.v1.metrics.recall_at_k +* tf.compat.v1.metrics.recall_at_thresholds +* tf.compat.v1.metrics.recall_at_top_k +* tf.compat.v1.metrics.root_mean_squared_error +* tf.compat.v1.metrics.sensitivity_at_specificity +* tf.compat.v1.metrics.sparse_average_precision_at_k +* tf.compat.v1.metrics.sparse_precision_at_k +* tf.compat.v1.metrics.specificity_at_sensitivity +* tf.compat.v1.metrics.true_negatives +* tf.compat.v1.metrics.true_negatives_at_thresholds +* tf.compat.v1.metrics.true_positives +* tf.compat.v1.metrics.true_positives_at_thresholds +* tf.compat.v1.min_max_variable_partitioner +* tf.compat.v1.minimum +* tf.compat.v1.mixed_precision +* tf.compat.v1.mixed_precision.experimental +* tf.compat.v1.mixed_precision.experimental.DynamicLossScale +* tf.compat.v1.mixed_precision.experimental.FixedLossScale +* tf.compat.v1.mixed_precision.experimental.LossScale +* tf.compat.v1.mlir +* tf.compat.v1.mlir.experimental +* tf.compat.v1.mlir.experimental.convert_graph_def +* tf.compat.v1.mod +* tf.compat.v1.model_variables +* tf.compat.v1.moving_average_variables +* tf.compat.v1.multinomial +* tf.compat.v1.multiply +* tf.compat.v1.name_scope +* tf.compat.v1.negative +* tf.compat.v1.nest +* tf.compat.v1.nest.assert_same_structure +* tf.compat.v1.nest.flatten +* tf.compat.v1.nest.is_nested +* tf.compat.v1.nest.map_structure +* tf.compat.v1.nest.pack_sequence_as +* tf.compat.v1.nn +* tf.compat.v1.nn.all_candidate_sampler +* tf.compat.v1.nn.atrous_conv2d +* tf.compat.v1.nn.atrous_conv2d_transpose +* tf.compat.v1.nn.avg_pool +* tf.compat.v1.nn.avg_pool1d +* tf.compat.v1.nn.avg_pool2d +* tf.compat.v1.nn.avg_pool3d +* tf.compat.v1.nn.avg_pool_v2 +* tf.compat.v1.nn.batch_norm_with_global_normalization +* tf.compat.v1.nn.batch_normalization +* tf.compat.v1.nn.bias_add +* tf.compat.v1.nn.bidirectional_dynamic_rnn +* tf.compat.v1.nn.collapse_repeated +* tf.compat.v1.nn.compute_accidental_hits +* tf.compat.v1.nn.compute_average_loss +* tf.compat.v1.nn.conv1d +* tf.compat.v1.nn.conv1d_transpose +* tf.compat.v1.nn.conv2d +* tf.compat.v1.nn.conv2d_backprop_filter +* tf.compat.v1.nn.conv2d_backprop_input +* tf.compat.v1.nn.conv2d_transpose +* tf.compat.v1.nn.conv3d +* tf.compat.v1.nn.conv3d_backprop_filter +* tf.compat.v1.nn.conv3d_backprop_filter_v2 +* tf.compat.v1.nn.conv3d_transpose +* tf.compat.v1.nn.conv_transpose +* tf.compat.v1.nn.convolution +* tf.compat.v1.nn.crelu +* tf.compat.v1.nn.ctc_beam_search_decoder +* tf.compat.v1.nn.ctc_beam_search_decoder_v2 +* tf.compat.v1.nn.ctc_greedy_decoder +* tf.compat.v1.nn.ctc_loss +* tf.compat.v1.nn.ctc_loss_v2 +* tf.compat.v1.nn.ctc_unique_labels +* tf.compat.v1.nn.depth_to_space +* tf.compat.v1.nn.depthwise_conv2d +* 
tf.compat.v1.nn.depthwise_conv2d_backprop_filter +* tf.compat.v1.nn.depthwise_conv2d_backprop_input +* tf.compat.v1.nn.depthwise_conv2d_native +* tf.compat.v1.nn.depthwise_conv2d_native_backprop_filter +* tf.compat.v1.nn.depthwise_conv2d_native_backprop_input +* tf.compat.v1.nn.dilation2d +* tf.compat.v1.nn.dropout +* tf.compat.v1.nn.dynamic_rnn +* tf.compat.v1.nn.elu +* tf.compat.v1.nn.embedding_lookup +* tf.compat.v1.nn.embedding_lookup_sparse +* tf.compat.v1.nn.erosion2d +* tf.compat.v1.nn.fixed_unigram_candidate_sampler +* tf.compat.v1.nn.fractional_avg_pool +* tf.compat.v1.nn.fractional_max_pool +* tf.compat.v1.nn.fused_batch_norm +* tf.compat.v1.nn.in_top_k +* tf.compat.v1.nn.l2_loss +* tf.compat.v1.nn.l2_normalize +* tf.compat.v1.nn.leaky_relu +* tf.compat.v1.nn.learned_unigram_candidate_sampler +* tf.compat.v1.nn.local_response_normalization +* tf.compat.v1.nn.log_poisson_loss +* tf.compat.v1.nn.log_softmax +* tf.compat.v1.nn.log_uniform_candidate_sampler +* tf.compat.v1.nn.lrn +* tf.compat.v1.nn.max_pool +* tf.compat.v1.nn.max_pool1d +* tf.compat.v1.nn.max_pool2d +* tf.compat.v1.nn.max_pool3d +* tf.compat.v1.nn.max_pool_v2 +* tf.compat.v1.nn.max_pool_with_argmax +* tf.compat.v1.nn.moments +* tf.compat.v1.nn.nce_loss +* tf.compat.v1.nn.normalize_moments +* tf.compat.v1.nn.pool +* tf.compat.v1.nn.quantized_avg_pool +* tf.compat.v1.nn.quantized_conv2d +* tf.compat.v1.nn.quantized_max_pool +* tf.compat.v1.nn.quantized_relu_x +* tf.compat.v1.nn.raw_rnn +* tf.compat.v1.nn.relu +* tf.compat.v1.nn.relu6 +* tf.compat.v1.nn.relu_layer +* tf.compat.v1.nn.rnn_cell +* tf.compat.v1.nn.rnn_cell.BasicLSTMCell +* tf.compat.v1.nn.rnn_cell.BasicRNNCell +* tf.compat.v1.nn.rnn_cell.DeviceWrapper +* tf.compat.v1.nn.rnn_cell.DropoutWrapper +* tf.compat.v1.nn.rnn_cell.GRUCell +* tf.compat.v1.nn.rnn_cell.LSTMCell +* tf.compat.v1.nn.rnn_cell.LSTMStateTuple +* tf.compat.v1.nn.rnn_cell.MultiRNNCell +* tf.compat.v1.nn.rnn_cell.RNNCell +* tf.compat.v1.nn.rnn_cell.ResidualWrapper +* tf.compat.v1.nn.safe_embedding_lookup_sparse +* tf.compat.v1.nn.sampled_softmax_loss +* tf.compat.v1.nn.scale_regularization_loss +* tf.compat.v1.nn.selu +* tf.compat.v1.nn.separable_conv2d +* tf.compat.v1.nn.sigmoid +* tf.compat.v1.nn.sigmoid_cross_entropy_with_logits +* tf.compat.v1.nn.softmax +* tf.compat.v1.nn.softmax_cross_entropy_with_logits +* tf.compat.v1.nn.softmax_cross_entropy_with_logits_v2 +* tf.compat.v1.nn.softplus +* tf.compat.v1.nn.softsign +* tf.compat.v1.nn.space_to_batch +* tf.compat.v1.nn.space_to_depth +* tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits +* tf.compat.v1.nn.static_bidirectional_rnn +* tf.compat.v1.nn.static_rnn +* tf.compat.v1.nn.static_state_saving_rnn +* tf.compat.v1.nn.sufficient_statistics +* tf.compat.v1.nn.swish +* tf.compat.v1.nn.tanh +* tf.compat.v1.nn.top_k +* tf.compat.v1.nn.uniform_candidate_sampler +* tf.compat.v1.nn.weighted_cross_entropy_with_logits +* tf.compat.v1.nn.weighted_moments +* tf.compat.v1.nn.with_space_to_batch +* tf.compat.v1.nn.xw_plus_b +* tf.compat.v1.nn.zero_fraction +* tf.compat.v1.no_gradient +* tf.compat.v1.no_op +* tf.compat.v1.no_regularizer +* tf.compat.v1.nondifferentiable_batch_function +* tf.compat.v1.norm +* tf.compat.v1.not_equal +* tf.compat.v1.numpy_function +* tf.compat.v1.one_hot +* tf.compat.v1.ones +* tf.compat.v1.ones_initializer +* tf.compat.v1.ones_like +* tf.compat.v1.op_scope +* tf.compat.v1.orthogonal_initializer +* tf.compat.v1.pad +* tf.compat.v1.parallel_stack +* tf.compat.v1.parse_example +* tf.compat.v1.parse_single_example +* 
tf.compat.v1.parse_single_sequence_example +* tf.compat.v1.parse_tensor +* tf.compat.v1.placeholder +* tf.compat.v1.placeholder_with_default +* tf.compat.v1.polygamma +* tf.compat.v1.pow +* tf.compat.v1.print +* tf.compat.v1.profiler +* tf.compat.v1.profiler.AdviceProto +* tf.compat.v1.profiler.AdviceProto.Checker +* tf.compat.v1.profiler.AdviceProto.CheckersEntry +* tf.compat.v1.profiler.GraphNodeProto +* tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry +* tf.compat.v1.profiler.MultiGraphNodeProto +* tf.compat.v1.profiler.OpLogProto +* tf.compat.v1.profiler.OpLogProto.IdToStringEntry +* tf.compat.v1.profiler.ProfileOptionBuilder +* tf.compat.v1.profiler.Profiler +* tf.compat.v1.profiler.advise +* tf.compat.v1.profiler.profile +* tf.compat.v1.profiler.write_op_log +* tf.compat.v1.py_func +* tf.compat.v1.py_function +* tf.compat.v1.python_io +* tf.compat.v1.python_io.TFRecordCompressionType +* tf.compat.v1.python_io.TFRecordOptions +* tf.compat.v1.python_io.TFRecordWriter +* tf.compat.v1.python_io.tf_record_iterator +* tf.compat.v1.qr +* tf.compat.v1.quantization +* tf.compat.v1.quantization.dequantize +* tf.compat.v1.quantization.fake_quant_with_min_max_args +* tf.compat.v1.quantization.fake_quant_with_min_max_args_gradient +* tf.compat.v1.quantization.fake_quant_with_min_max_vars +* tf.compat.v1.quantization.fake_quant_with_min_max_vars_gradient +* tf.compat.v1.quantization.fake_quant_with_min_max_vars_per_channel +* tf.compat.v1.quantization.fake_quant_with_min_max_vars_per_channel_gradient +* tf.compat.v1.quantization.quantize +* tf.compat.v1.quantization.quantize_and_dequantize +* tf.compat.v1.quantization.quantized_concat +* tf.compat.v1.quantize +* tf.compat.v1.quantize_v2 +* tf.compat.v1.quantized_concat +* tf.compat.v1.queue +* tf.compat.v1.queue.FIFOQueue +* tf.compat.v1.queue.PaddingFIFOQueue +* tf.compat.v1.queue.PriorityQueue +* tf.compat.v1.queue.QueueBase +* tf.compat.v1.queue.RandomShuffleQueue +* tf.compat.v1.ragged +* tf.compat.v1.ragged.RaggedTensorValue +* tf.compat.v1.ragged.boolean_mask +* tf.compat.v1.ragged.constant +* tf.compat.v1.ragged.constant_value +* tf.compat.v1.ragged.map_flat_values +* tf.compat.v1.ragged.placeholder +* tf.compat.v1.ragged.range +* tf.compat.v1.ragged.row_splits_to_segment_ids +* tf.compat.v1.ragged.segment_ids_to_row_splits +* tf.compat.v1.ragged.stack +* tf.compat.v1.ragged.stack_dynamic_partitions +* tf.compat.v1.random +* tf.compat.v1.random.Algorithm +* tf.compat.v1.random.Generator +* tf.compat.v1.random.all_candidate_sampler +* tf.compat.v1.random.categorical +* tf.compat.v1.random.create_rng_state +* tf.compat.v1.random.experimental +* tf.compat.v1.random.experimental.Algorithm +* tf.compat.v1.random.experimental.Generator +* tf.compat.v1.random.experimental.create_rng_state +* tf.compat.v1.random.experimental.get_global_generator +* tf.compat.v1.random.experimental.set_global_generator +* tf.compat.v1.random.fixed_unigram_candidate_sampler +* tf.compat.v1.random.gamma +* tf.compat.v1.random.get_global_generator +* tf.compat.v1.random.get_seed +* tf.compat.v1.random.learned_unigram_candidate_sampler +* tf.compat.v1.random.log_uniform_candidate_sampler +* tf.compat.v1.random.multinomial +* tf.compat.v1.random.normal +* tf.compat.v1.random.poisson +* tf.compat.v1.random.set_global_generator +* tf.compat.v1.random.set_random_seed +* tf.compat.v1.random.shuffle +* tf.compat.v1.random.stateless_binomial +* tf.compat.v1.random.stateless_categorical +* tf.compat.v1.random.stateless_gamma +* tf.compat.v1.random.stateless_multinomial +* 
tf.compat.v1.random.stateless_normal +* tf.compat.v1.random.stateless_poisson +* tf.compat.v1.random.stateless_truncated_normal +* tf.compat.v1.random.stateless_uniform +* tf.compat.v1.random.truncated_normal +* tf.compat.v1.random.uniform +* tf.compat.v1.random.uniform_candidate_sampler +* tf.compat.v1.random_crop +* tf.compat.v1.random_gamma +* tf.compat.v1.random_normal +* tf.compat.v1.random_normal_initializer +* tf.compat.v1.random_poisson +* tf.compat.v1.random_shuffle +* tf.compat.v1.random_uniform +* tf.compat.v1.random_uniform_initializer +* tf.compat.v1.range +* tf.compat.v1.rank +* tf.compat.v1.read_file +* tf.compat.v1.real +* tf.compat.v1.realdiv +* tf.compat.v1.reciprocal +* tf.compat.v1.recompute_grad +* tf.compat.v1.reduce_all +* tf.compat.v1.reduce_any +* tf.compat.v1.reduce_join +* tf.compat.v1.reduce_logsumexp +* tf.compat.v1.reduce_max +* tf.compat.v1.reduce_mean +* tf.compat.v1.reduce_min +* tf.compat.v1.reduce_prod +* tf.compat.v1.reduce_sum +* tf.compat.v1.regex_replace +* tf.compat.v1.register_tensor_conversion_function +* tf.compat.v1.repeat +* tf.compat.v1.report_uninitialized_variables +* tf.compat.v1.required_space_to_batch_paddings +* tf.compat.v1.reset_default_graph +* tf.compat.v1.reshape +* tf.compat.v1.resource_loader +* tf.compat.v1.resource_loader.get_data_files_path +* tf.compat.v1.resource_loader.get_path_to_datafile +* tf.compat.v1.resource_loader.get_root_dir_with_all_resources +* tf.compat.v1.resource_loader.load_resource +* tf.compat.v1.resource_loader.readahead_file_path +* tf.compat.v1.resource_variables_enabled +* tf.compat.v1.reverse +* tf.compat.v1.reverse_sequence +* tf.compat.v1.reverse_v2 +* tf.compat.v1.rint +* tf.compat.v1.roll +* tf.compat.v1.round +* tf.compat.v1.rsqrt +* tf.compat.v1.saturate_cast +* tf.compat.v1.saved_model +* tf.compat.v1.saved_model.Asset +* tf.compat.v1.saved_model.Builder +* tf.compat.v1.saved_model.SaveOptions +* tf.compat.v1.saved_model.build_signature_def +* tf.compat.v1.saved_model.build_tensor_info +* tf.compat.v1.saved_model.builder +* tf.compat.v1.saved_model.builder.SavedModelBuilder +* tf.compat.v1.saved_model.classification_signature_def +* tf.compat.v1.saved_model.constants +* tf.compat.v1.saved_model.contains_saved_model +* tf.compat.v1.saved_model.experimental +* tf.compat.v1.saved_model.experimental.save +* tf.compat.v1.saved_model.get_tensor_from_tensor_info +* tf.compat.v1.saved_model.is_valid_signature +* tf.compat.v1.saved_model.load +* tf.compat.v1.saved_model.load_v2 +* tf.compat.v1.saved_model.loader +* tf.compat.v1.saved_model.loader.load +* tf.compat.v1.saved_model.loader.maybe_saved_model_directory +* tf.compat.v1.saved_model.main_op +* tf.compat.v1.saved_model.main_op.main_op +* tf.compat.v1.saved_model.main_op.main_op_with_restore +* tf.compat.v1.saved_model.main_op_with_restore +* tf.compat.v1.saved_model.maybe_saved_model_directory +* tf.compat.v1.saved_model.predict_signature_def +* tf.compat.v1.saved_model.regression_signature_def +* tf.compat.v1.saved_model.save +* tf.compat.v1.saved_model.signature_constants +* tf.compat.v1.saved_model.signature_def_utils +* tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater +* tf.compat.v1.saved_model.signature_def_utils.build_signature_def +* tf.compat.v1.saved_model.signature_def_utils.classification_signature_def +* tf.compat.v1.saved_model.signature_def_utils.is_valid_signature +* tf.compat.v1.saved_model.signature_def_utils.predict_signature_def +* tf.compat.v1.saved_model.signature_def_utils.regression_signature_def +* 
tf.compat.v1.saved_model.simple_save +* tf.compat.v1.saved_model.tag_constants +* tf.compat.v1.saved_model.utils +* tf.compat.v1.saved_model.utils.build_tensor_info +* tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info +* tf.compat.v1.scalar_mul +* tf.compat.v1.scan +* tf.compat.v1.scatter_add +* tf.compat.v1.scatter_div +* tf.compat.v1.scatter_max +* tf.compat.v1.scatter_min +* tf.compat.v1.scatter_mul +* tf.compat.v1.scatter_nd +* tf.compat.v1.scatter_nd_add +* tf.compat.v1.scatter_nd_sub +* tf.compat.v1.scatter_nd_update +* tf.compat.v1.scatter_sub +* tf.compat.v1.scatter_update +* tf.compat.v1.searchsorted +* tf.compat.v1.segment_max +* tf.compat.v1.segment_mean +* tf.compat.v1.segment_min +* tf.compat.v1.segment_prod +* tf.compat.v1.segment_sum +* tf.compat.v1.self_adjoint_eig +* tf.compat.v1.self_adjoint_eigvals +* tf.compat.v1.sequence_mask +* tf.compat.v1.serialize_many_sparse +* tf.compat.v1.serialize_sparse +* tf.compat.v1.serialize_tensor +* tf.compat.v1.set_random_seed +* tf.compat.v1.setdiff1d +* tf.compat.v1.sets +* tf.compat.v1.sets.difference +* tf.compat.v1.sets.intersection +* tf.compat.v1.sets.set_difference +* tf.compat.v1.sets.set_intersection +* tf.compat.v1.sets.set_size +* tf.compat.v1.sets.set_union +* tf.compat.v1.sets.size +* tf.compat.v1.sets.union +* tf.compat.v1.shape +* tf.compat.v1.shape_n +* tf.compat.v1.sigmoid +* tf.compat.v1.sign +* tf.compat.v1.signal +* tf.compat.v1.signal.dct +* tf.compat.v1.signal.fft +* tf.compat.v1.signal.fft2d +* tf.compat.v1.signal.fft3d +* tf.compat.v1.signal.fftshift +* tf.compat.v1.signal.frame +* tf.compat.v1.signal.hamming_window +* tf.compat.v1.signal.hann_window +* tf.compat.v1.signal.idct +* tf.compat.v1.signal.ifft +* tf.compat.v1.signal.ifft2d +* tf.compat.v1.signal.ifft3d +* tf.compat.v1.signal.ifftshift +* tf.compat.v1.signal.inverse_mdct +* tf.compat.v1.signal.inverse_stft +* tf.compat.v1.signal.inverse_stft_window_fn +* tf.compat.v1.signal.irfft +* tf.compat.v1.signal.irfft2d +* tf.compat.v1.signal.irfft3d +* tf.compat.v1.signal.kaiser_bessel_derived_window +* tf.compat.v1.signal.kaiser_window +* tf.compat.v1.signal.linear_to_mel_weight_matrix +* tf.compat.v1.signal.mdct +* tf.compat.v1.signal.mfccs_from_log_mel_spectrograms +* tf.compat.v1.signal.overlap_and_add +* tf.compat.v1.signal.rfft +* tf.compat.v1.signal.rfft2d +* tf.compat.v1.signal.rfft3d +* tf.compat.v1.signal.stft +* tf.compat.v1.signal.vorbis_window +* tf.compat.v1.sin +* tf.compat.v1.sinh +* tf.compat.v1.size +* tf.compat.v1.slice +* tf.compat.v1.sort +* tf.compat.v1.space_to_batch +* tf.compat.v1.space_to_batch_nd +* tf.compat.v1.space_to_depth +* tf.compat.v1.sparse +* tf.compat.v1.sparse.SparseConditionalAccumulator +* tf.compat.v1.sparse.SparseTensor +* tf.compat.v1.sparse.add +* tf.compat.v1.sparse.concat +* tf.compat.v1.sparse.cross +* tf.compat.v1.sparse.cross_hashed +* tf.compat.v1.sparse.expand_dims +* tf.compat.v1.sparse.eye +* tf.compat.v1.sparse.fill_empty_rows +* tf.compat.v1.sparse.from_dense +* tf.compat.v1.sparse.mask +* tf.compat.v1.sparse.matmul +* tf.compat.v1.sparse.maximum +* tf.compat.v1.sparse.merge +* tf.compat.v1.sparse.minimum +* tf.compat.v1.sparse.placeholder +* tf.compat.v1.sparse.reduce_max +* tf.compat.v1.sparse.reduce_max_sparse +* tf.compat.v1.sparse.reduce_sum +* tf.compat.v1.sparse.reduce_sum_sparse +* tf.compat.v1.sparse.reorder +* tf.compat.v1.sparse.reset_shape +* tf.compat.v1.sparse.reshape +* tf.compat.v1.sparse.retain +* tf.compat.v1.sparse.segment_mean +* tf.compat.v1.sparse.segment_sqrt_n +* 
tf.compat.v1.sparse.segment_sum +* tf.compat.v1.sparse.slice +* tf.compat.v1.sparse.softmax +* tf.compat.v1.sparse.sparse_dense_matmul +* tf.compat.v1.sparse.split +* tf.compat.v1.sparse.to_dense +* tf.compat.v1.sparse.to_indicator +* tf.compat.v1.sparse.transpose +* tf.compat.v1.sparse_add +* tf.compat.v1.sparse_concat +* tf.compat.v1.sparse_fill_empty_rows +* tf.compat.v1.sparse_mask +* tf.compat.v1.sparse_matmul +* tf.compat.v1.sparse_maximum +* tf.compat.v1.sparse_merge +* tf.compat.v1.sparse_minimum +* tf.compat.v1.sparse_placeholder +* tf.compat.v1.sparse_reduce_max +* tf.compat.v1.sparse_reduce_max_sparse +* tf.compat.v1.sparse_reduce_sum +* tf.compat.v1.sparse_reduce_sum_sparse +* tf.compat.v1.sparse_reorder +* tf.compat.v1.sparse_reset_shape +* tf.compat.v1.sparse_reshape +* tf.compat.v1.sparse_retain +* tf.compat.v1.sparse_segment_mean +* tf.compat.v1.sparse_segment_sqrt_n +* tf.compat.v1.sparse_segment_sum +* tf.compat.v1.sparse_slice +* tf.compat.v1.sparse_softmax +* tf.compat.v1.sparse_split +* tf.compat.v1.sparse_tensor_dense_matmul +* tf.compat.v1.sparse_tensor_to_dense +* tf.compat.v1.sparse_to_dense +* tf.compat.v1.sparse_to_indicator +* tf.compat.v1.sparse_transpose +* tf.compat.v1.spectral +* tf.compat.v1.spectral.dct +* tf.compat.v1.spectral.fft +* tf.compat.v1.spectral.fft2d +* tf.compat.v1.spectral.fft3d +* tf.compat.v1.spectral.idct +* tf.compat.v1.spectral.ifft +* tf.compat.v1.spectral.ifft2d +* tf.compat.v1.spectral.ifft3d +* tf.compat.v1.spectral.irfft +* tf.compat.v1.spectral.irfft2d +* tf.compat.v1.spectral.irfft3d +* tf.compat.v1.spectral.rfft +* tf.compat.v1.spectral.rfft2d +* tf.compat.v1.spectral.rfft3d +* tf.compat.v1.split +* tf.compat.v1.sqrt +* tf.compat.v1.square +* tf.compat.v1.squared_difference +* tf.compat.v1.squeeze +* tf.compat.v1.stack +* tf.compat.v1.stop_gradient +* tf.compat.v1.strided_slice +* tf.compat.v1.string_join +* tf.compat.v1.string_split +* tf.compat.v1.string_strip +* tf.compat.v1.string_to_hash_bucket +* tf.compat.v1.string_to_hash_bucket_fast +* tf.compat.v1.string_to_hash_bucket_strong +* tf.compat.v1.string_to_number +* tf.compat.v1.strings +* tf.compat.v1.strings.as_string +* tf.compat.v1.strings.bytes_split +* tf.compat.v1.strings.format +* tf.compat.v1.strings.join +* tf.compat.v1.strings.length +* tf.compat.v1.strings.lower +* tf.compat.v1.strings.ngrams +* tf.compat.v1.strings.reduce_join +* tf.compat.v1.strings.regex_full_match +* tf.compat.v1.strings.regex_replace +* tf.compat.v1.strings.split +* tf.compat.v1.strings.strip +* tf.compat.v1.strings.substr +* tf.compat.v1.strings.to_hash_bucket +* tf.compat.v1.strings.to_hash_bucket_fast +* tf.compat.v1.strings.to_hash_bucket_strong +* tf.compat.v1.strings.to_number +* tf.compat.v1.strings.unicode_decode +* tf.compat.v1.strings.unicode_decode_with_offsets +* tf.compat.v1.strings.unicode_encode +* tf.compat.v1.strings.unicode_script +* tf.compat.v1.strings.unicode_split +* tf.compat.v1.strings.unicode_split_with_offsets +* tf.compat.v1.strings.unicode_transcode +* tf.compat.v1.strings.unsorted_segment_join +* tf.compat.v1.strings.upper +* tf.compat.v1.substr +* tf.compat.v1.subtract +* tf.compat.v1.summary +* tf.compat.v1.summary.Event +* tf.compat.v1.summary.FileWriter +* tf.compat.v1.summary.FileWriterCache +* tf.compat.v1.summary.SessionLog +* tf.compat.v1.summary.Summary +* tf.compat.v1.summary.Summary.Audio +* tf.compat.v1.summary.Summary.Image +* tf.compat.v1.summary.Summary.Value +* tf.compat.v1.summary.SummaryDescription +* tf.compat.v1.summary.TaggedRunMetadata +* 
tf.compat.v1.summary.all_v2_summary_ops +* tf.compat.v1.summary.audio +* tf.compat.v1.summary.get_summary_description +* tf.compat.v1.summary.histogram +* tf.compat.v1.summary.image +* tf.compat.v1.summary.initialize +* tf.compat.v1.summary.merge +* tf.compat.v1.summary.merge_all +* tf.compat.v1.summary.scalar +* tf.compat.v1.summary.tensor_summary +* tf.compat.v1.summary.text +* tf.compat.v1.svd +* tf.compat.v1.switch_case +* tf.compat.v1.sysconfig +* tf.compat.v1.sysconfig.get_compile_flags +* tf.compat.v1.sysconfig.get_include +* tf.compat.v1.sysconfig.get_lib +* tf.compat.v1.sysconfig.get_link_flags +* tf.compat.v1.tables_initializer +* tf.compat.v1.tan +* tf.compat.v1.tanh +* tf.compat.v1.tensor_scatter_add +* tf.compat.v1.tensor_scatter_nd_add +* tf.compat.v1.tensor_scatter_nd_sub +* tf.compat.v1.tensor_scatter_nd_update +* tf.compat.v1.tensor_scatter_sub +* tf.compat.v1.tensor_scatter_update +* tf.compat.v1.tensordot +* tf.compat.v1.test +* tf.compat.v1.test.Benchmark +* tf.compat.v1.test.StubOutForTesting +* tf.compat.v1.test.TestCase +* tf.compat.v1.test.TestCase.failureException +* tf.compat.v1.test.assert_equal_graph_def +* tf.compat.v1.test.benchmark_config +* tf.compat.v1.test.compute_gradient +* tf.compat.v1.test.compute_gradient_error +* tf.compat.v1.test.create_local_cluster +* tf.compat.v1.test.get_temp_dir +* tf.compat.v1.test.gpu_device_name +* tf.compat.v1.test.is_built_with_cuda +* tf.compat.v1.test.is_built_with_gpu_support +* tf.compat.v1.test.is_built_with_rocm +* tf.compat.v1.test.is_built_with_xla +* tf.compat.v1.test.is_gpu_available +* tf.compat.v1.test.main +* tf.compat.v1.test.test_src_dir_path +* tf.compat.v1.tile +* tf.compat.v1.timestamp +* tf.compat.v1.to_bfloat16 +* tf.compat.v1.to_complex128 +* tf.compat.v1.to_complex64 +* tf.compat.v1.to_double +* tf.compat.v1.to_float +* tf.compat.v1.to_int32 +* tf.compat.v1.to_int64 +* tf.compat.v1.tpu +* tf.compat.v1.tpu.CrossShardOptimizer +* tf.compat.v1.tpu.PaddingSpec +* tf.compat.v1.tpu.batch_parallel +* tf.compat.v1.tpu.bfloat16_scope +* tf.compat.v1.tpu.core +* tf.compat.v1.tpu.cross_replica_sum +* tf.compat.v1.tpu.experimental +* tf.compat.v1.tpu.experimental.AdagradParameters +* tf.compat.v1.tpu.experimental.AdamParameters +* tf.compat.v1.tpu.experimental.DeviceAssignment +* tf.compat.v1.tpu.experimental.FtrlParameters +* tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters +* tf.compat.v1.tpu.experimental.embedding_column +* tf.compat.v1.tpu.experimental.initialize_tpu_system +* tf.compat.v1.tpu.experimental.shared_embedding_columns +* tf.compat.v1.tpu.experimental.shutdown_tpu_system +* tf.compat.v1.tpu.initialize_system +* tf.compat.v1.tpu.outside_compilation +* tf.compat.v1.tpu.replicate +* tf.compat.v1.tpu.rewrite +* tf.compat.v1.tpu.shard +* tf.compat.v1.tpu.shutdown_system +* tf.compat.v1.trace +* tf.compat.v1.train +* tf.compat.v1.train.AdadeltaOptimizer +* tf.compat.v1.train.AdagradDAOptimizer +* tf.compat.v1.train.AdagradOptimizer +* tf.compat.v1.train.AdamOptimizer +* tf.compat.v1.train.BytesList +* tf.compat.v1.train.Checkpoint +* tf.compat.v1.train.CheckpointManager +* tf.compat.v1.train.CheckpointSaverHook +* tf.compat.v1.train.CheckpointSaverListener +* tf.compat.v1.train.ChiefSessionCreator +* tf.compat.v1.train.ClusterDef +* tf.compat.v1.train.ClusterSpec +* tf.compat.v1.train.Coordinator +* tf.compat.v1.train.Example +* tf.compat.v1.train.ExponentialMovingAverage +* tf.compat.v1.train.Feature +* tf.compat.v1.train.FeatureList +* tf.compat.v1.train.FeatureLists +* 
tf.compat.v1.train.FeatureLists.FeatureListEntry +* tf.compat.v1.train.Features +* tf.compat.v1.train.Features.FeatureEntry +* tf.compat.v1.train.FeedFnHook +* tf.compat.v1.train.FinalOpsHook +* tf.compat.v1.train.FloatList +* tf.compat.v1.train.FtrlOptimizer +* tf.compat.v1.train.GlobalStepWaiterHook +* tf.compat.v1.train.GradientDescentOptimizer +* tf.compat.v1.train.Int64List +* tf.compat.v1.train.JobDef +* tf.compat.v1.train.JobDef.TasksEntry +* tf.compat.v1.train.LoggingTensorHook +* tf.compat.v1.train.LooperThread +* tf.compat.v1.train.MomentumOptimizer +* tf.compat.v1.train.MonitoredSession +* tf.compat.v1.train.MonitoredSession.StepContext +* tf.compat.v1.train.MonitoredTrainingSession +* tf.compat.v1.train.NanLossDuringTrainingError +* tf.compat.v1.train.NanTensorHook +* tf.compat.v1.train.NewCheckpointReader +* tf.compat.v1.train.Optimizer +* tf.compat.v1.train.ProfilerHook +* tf.compat.v1.train.ProximalAdagradOptimizer +* tf.compat.v1.train.ProximalGradientDescentOptimizer +* tf.compat.v1.train.QueueRunner +* tf.compat.v1.train.RMSPropOptimizer +* tf.compat.v1.train.Saver +* tf.compat.v1.train.SaverDef +* tf.compat.v1.train.Scaffold +* tf.compat.v1.train.SecondOrStepTimer +* tf.compat.v1.train.SequenceExample +* tf.compat.v1.train.Server +* tf.compat.v1.train.ServerDef +* tf.compat.v1.train.SessionCreator +* tf.compat.v1.train.SessionManager +* tf.compat.v1.train.SessionRunArgs +* tf.compat.v1.train.SessionRunContext +* tf.compat.v1.train.SessionRunHook +* tf.compat.v1.train.SessionRunValues +* tf.compat.v1.train.SingularMonitoredSession +* tf.compat.v1.train.SingularMonitoredSession.StepContext +* tf.compat.v1.train.StepCounterHook +* tf.compat.v1.train.StopAtStepHook +* tf.compat.v1.train.SummarySaverHook +* tf.compat.v1.train.Supervisor +* tf.compat.v1.train.SyncReplicasOptimizer +* tf.compat.v1.train.VocabInfo +* tf.compat.v1.train.WorkerSessionCreator +* tf.compat.v1.train.add_queue_runner +* tf.compat.v1.train.assert_global_step +* tf.compat.v1.train.basic_train_loop +* tf.compat.v1.train.batch +* tf.compat.v1.train.batch_join +* tf.compat.v1.train.checkpoint_exists +* tf.compat.v1.train.checkpoints_iterator +* tf.compat.v1.train.cosine_decay +* tf.compat.v1.train.cosine_decay_restarts +* tf.compat.v1.train.create_global_step +* tf.compat.v1.train.do_quantize_training_on_graphdef +* tf.compat.v1.train.experimental +* tf.compat.v1.train.experimental.DynamicLossScale +* tf.compat.v1.train.experimental.FixedLossScale +* tf.compat.v1.train.experimental.LossScale +* tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer +* tf.compat.v1.train.experimental.PythonState +* tf.compat.v1.train.experimental.disable_mixed_precision_graph_rewrite +* tf.compat.v1.train.experimental.enable_mixed_precision_graph_rewrite +* tf.compat.v1.train.exponential_decay +* tf.compat.v1.train.export_meta_graph +* tf.compat.v1.train.generate_checkpoint_state_proto +* tf.compat.v1.train.get_checkpoint_mtimes +* tf.compat.v1.train.get_checkpoint_state +* tf.compat.v1.train.get_global_step +* tf.compat.v1.train.get_or_create_global_step +* tf.compat.v1.train.global_step +* tf.compat.v1.train.import_meta_graph +* tf.compat.v1.train.init_from_checkpoint +* tf.compat.v1.train.input_producer +* tf.compat.v1.train.inverse_time_decay +* tf.compat.v1.train.latest_checkpoint +* tf.compat.v1.train.limit_epochs +* tf.compat.v1.train.linear_cosine_decay +* tf.compat.v1.train.list_variables +* tf.compat.v1.train.load_checkpoint +* tf.compat.v1.train.load_variable +* tf.compat.v1.train.match_filenames_once 
+* tf.compat.v1.train.maybe_batch +* tf.compat.v1.train.maybe_batch_join +* tf.compat.v1.train.maybe_shuffle_batch +* tf.compat.v1.train.maybe_shuffle_batch_join +* tf.compat.v1.train.natural_exp_decay +* tf.compat.v1.train.noisy_linear_cosine_decay +* tf.compat.v1.train.piecewise_constant +* tf.compat.v1.train.piecewise_constant_decay +* tf.compat.v1.train.polynomial_decay +* tf.compat.v1.train.queue_runner +* tf.compat.v1.train.queue_runner.QueueRunner +* tf.compat.v1.train.queue_runner.add_queue_runner +* tf.compat.v1.train.queue_runner.start_queue_runners +* tf.compat.v1.train.range_input_producer +* tf.compat.v1.train.remove_checkpoint +* tf.compat.v1.train.replica_device_setter +* tf.compat.v1.train.sdca_fprint +* tf.compat.v1.train.sdca_optimizer +* tf.compat.v1.train.sdca_shrink_l1 +* tf.compat.v1.train.shuffle_batch +* tf.compat.v1.train.shuffle_batch_join +* tf.compat.v1.train.slice_input_producer +* tf.compat.v1.train.start_queue_runners +* tf.compat.v1.train.string_input_producer +* tf.compat.v1.train.summary_iterator +* tf.compat.v1.train.update_checkpoint_state +* tf.compat.v1.train.warm_start +* tf.compat.v1.train.write_graph +* tf.compat.v1.trainable_variables +* tf.compat.v1.transpose +* tf.compat.v1.truediv +* tf.compat.v1.truncated_normal +* tf.compat.v1.truncated_normal_initializer +* tf.compat.v1.truncatediv +* tf.compat.v1.truncatemod +* tf.compat.v1.tuple +* tf.compat.v1.uniform_unit_scaling_initializer +* tf.compat.v1.unique +* tf.compat.v1.unique_with_counts +* tf.compat.v1.unravel_index +* tf.compat.v1.unsorted_segment_max +* tf.compat.v1.unsorted_segment_mean +* tf.compat.v1.unsorted_segment_min +* tf.compat.v1.unsorted_segment_prod +* tf.compat.v1.unsorted_segment_sqrt_n +* tf.compat.v1.unsorted_segment_sum +* tf.compat.v1.unstack +* tf.compat.v1.user_ops +* tf.compat.v1.user_ops.my_fact +* tf.compat.v1.variable_axis_size_partitioner +* tf.compat.v1.variable_creator_scope +* tf.compat.v1.variable_op_scope +* tf.compat.v1.variable_scope +* tf.compat.v1.variables_initializer +* tf.compat.v1.variance_scaling_initializer +* tf.compat.v1.vectorized_map +* tf.compat.v1.verify_tensor_all_finite +* tf.compat.v1.version +* tf.compat.v1.where +* tf.compat.v1.where_v2 +* tf.compat.v1.while_loop +* tf.compat.v1.wrap_function +* tf.compat.v1.write_file +* tf.compat.v1.xla +* tf.compat.v1.xla.experimental +* tf.compat.v1.xla.experimental.compile +* tf.compat.v1.xla.experimental.jit_scope +* tf.compat.v1.zeros +* tf.compat.v1.zeros_initializer +* tf.compat.v1.zeros_like +* tf.compat.v1.zeta \ No newline at end of file diff --git a/site/en/api_docs/python/tf/argsort.md b/site/en/api_docs/python/tf/argsort.md new file mode 100644 index 00000000000..926e7677a12 --- /dev/null +++ b/site/en/api_docs/python/tf/argsort.md @@ -0,0 +1,142 @@ +description: Returns the indices of a tensor that give its sorted order along an axis. + +
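A minimal sketch (not part of the generated page that follows) of the behaviour `tf.argsort` documents: gathering a 1-D tensor with its argsort indices reproduces `tf.sort`. The values below are illustrative only.

```python
# Illustrative only: tf.gather(values, tf.argsort(values)) matches tf.sort(values).
import tensorflow as tf

values = tf.constant([1.0, 10.0, 26.9, 2.8, 166.32, 62.3])
order = tf.argsort(values)                        # [0 3 1 2 5 4], dtype=int32
sorted_by_gather = tf.gather(values, order)
tf.debugging.assert_near(sorted_by_gather, tf.sort(values))  # passes silently
```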
+ +# tf.argsort + + + + + + + + + +Returns the indices of a tensor that give its sorted order along an axis. + + + + + + + + + +For a 1D tensor, `tf.gather(values, tf.argsort(values))` is equivalent to +tf.sort(values). For higher dimensions, the output has the same shape as +`values`, but along the given axis, values represent the index of the sorted +element in that slice of the tensor at the given position. + +#### Usage: + + + +```python +import tensorflow as tf +a = [1, 10, 26.9, 2.8, 166.32, 62.3] +b = tf.argsort(a,axis=-1,direction='ASCENDING',stable=False,name=None) +c = tf.keras.backend.eval(b) +# Here, c = [0 3 1 2 5 4] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`values` + +1-D or higher numeric `Tensor`. +
+`axis` + +The axis along which to sort. The default is -1, which sorts the last +axis. +
+`direction` + +The direction in which to sort the values (`'ASCENDING'` or +`'DESCENDING'`). +
+`stable` + +If True, equal elements in the original tensor will not be +re-ordered in the returned order. Unstable sort is not yet implemented, +but will eventually be the default for performance reasons. If you require +a stable order, pass `stable=True` for forwards compatibility. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+An int32 `Tensor` with the same shape as `values`. The indices that would +sort each slice of the given `values` along the given `axis`. +
+ + + + + + + + + + + + +
+`ValueError` + +If axis is not a constant scalar, or the direction is invalid. +
+ diff --git a/site/en/api_docs/python/tf/audio.md b/site/en/api_docs/python/tf/audio.md new file mode 100644 index 00000000000..3ff3587495f --- /dev/null +++ b/site/en/api_docs/python/tf/audio.md @@ -0,0 +1,27 @@ +description: Public API for tf.audio namespace. + +
+ +# Module: tf.audio + + + + + + + + + +Public API for tf.audio namespace. + + + +## Functions + +[`decode_wav(...)`](../tf/audio/decode_wav.md): Decode a 16-bit PCM WAV file to a float tensor. + +[`encode_wav(...)`](../tf/audio/encode_wav.md): Encode audio data using the WAV file format. + diff --git a/site/en/api_docs/python/tf/audio/decode_wav.md b/site/en/api_docs/python/tf/audio/decode_wav.md new file mode 100644 index 00000000000..e9051dee46e --- /dev/null +++ b/site/en/api_docs/python/tf/audio/decode_wav.md @@ -0,0 +1,122 @@ +description: Decode a 16-bit PCM WAV file to a float tensor. + +
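As a rough usage sketch (not taken from the generated pages, and using an assumed synthetic 440 Hz tone), the two functions listed above round-trip a float waveform through WAV bytes; note the documented `[length, channels]` layout and that values only match up to 16-bit quantization:

```python
# Illustrative only: encode a synthetic one-channel tone to WAV bytes, then decode it back.
import tensorflow as tf

sample_rate = 16000
t = tf.range(sample_rate, dtype=tf.float32) / sample_rate      # one second of samples
waveform = tf.sin(2.0 * 3.1415926 * 440.0 * t)[:, tf.newaxis]  # shape [length, channels] = [16000, 1]

wav_bytes = tf.audio.encode_wav(waveform, sample_rate)         # scalar string tensor (16-bit PCM)
decoded, rate = tf.audio.decode_wav(wav_bytes)

print(decoded.shape, int(rate))  # (16000, 1) 16000; samples match up to 16-bit quantization
```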
+ +# tf.audio.decode_wav + + + + + + + + + +Decode a 16-bit PCM WAV file to a float tensor. + + + + + + + + + +The -32768 to 32767 signed 16-bit values will be scaled to -1.0 to 1.0 in float. + +When desired_channels is set, if the input contains fewer channels than this +then the last channel will be duplicated to give the requested number, else if +the input has more channels than requested then the additional channels will be +ignored. + +If desired_samples is set, then the audio will be cropped or padded with zeroes +to the requested length. + +The first output contains a Tensor with the content of the audio samples. The +lowest dimension will be the number of channels, and the second will be the +number of samples. For example, a ten-sample-long stereo WAV file should give an +output shape of [10, 2]. + + + + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. +The WAV-encoded audio, usually from a file. +
+`desired_channels` + +An optional `int`. Defaults to `-1`. +Number of sample channels wanted. +
+`desired_samples` + +An optional `int`. Defaults to `-1`. +Length of audio requested. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (audio, sample_rate). +
+`audio` + +A `Tensor` of type `float32`. +
+`sample_rate` + +A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/audio/encode_wav.md b/site/en/api_docs/python/tf/audio/encode_wav.md new file mode 100644 index 00000000000..d177c7a04c9 --- /dev/null +++ b/site/en/api_docs/python/tf/audio/encode_wav.md @@ -0,0 +1,92 @@ +description: Encode audio data using the WAV file format. + +
+ +# tf.audio.encode_wav + + + + + + + + + +Encode audio data using the WAV file format. + + + + + + + + + +This operation will generate a string suitable to be saved out to create a .wav +audio file. It will be encoded in the 16-bit PCM format. It takes in float +values in the range -1.0f to 1.0f, and any outside that value will be clamped to +that range. + +`audio` is a 2-D float Tensor of shape `[length, channels]`. +`sample_rate` is a scalar Tensor holding the rate to use (e.g. 44100). + + + + + + + + + + + + + + + + +
+`audio` + +A `Tensor` of type `float32`. 2-D with shape `[length, channels]`. +
+`sample_rate` + +A `Tensor` of type `int32`. +Scalar containing the sample frequency. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
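A rough sketch of producing such a string and writing it to disk; the tone parameters and the `tone.wav` output path are arbitrary choices for illustration, not part of the API:

```python
import math
import tensorflow as tf

sample_rate = 16000
t = tf.range(sample_rate, dtype=tf.float32) / sample_rate
# One second of a 440 Hz sine tone, shaped [length, channels] as required.
audio = tf.sin(2.0 * math.pi * 440.0 * t)[:, tf.newaxis]
wav_bytes = tf.audio.encode_wav(audio, sample_rate)
tf.io.write_file('tone.wav', wav_bytes)
```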
+ diff --git a/site/en/api_docs/python/tf/autodiff.md b/site/en/api_docs/python/tf/autodiff.md new file mode 100644 index 00000000000..78e0a65e69e --- /dev/null +++ b/site/en/api_docs/python/tf/autodiff.md @@ -0,0 +1,27 @@ +description: Public API for tf.autodiff namespace. + +
+ + +
+ +# Module: tf.autodiff + + + + + + + + + +Public API for tf.autodiff namespace. + + + +## Classes + +[`class ForwardAccumulator`](../tf/autodiff/ForwardAccumulator.md): Computes Jacobian-vector products ("JVP"s) using forward-mode autodiff. + +[`class GradientTape`](../tf/GradientTape.md): Record operations for automatic differentiation. + diff --git a/site/en/api_docs/python/tf/autodiff/ForwardAccumulator.md b/site/en/api_docs/python/tf/autodiff/ForwardAccumulator.md new file mode 100644 index 00000000000..469c26f2538 --- /dev/null +++ b/site/en/api_docs/python/tf/autodiff/ForwardAccumulator.md @@ -0,0 +1,280 @@ +description: Computes Jacobian-vector products ("JVP"s) using forward-mode autodiff. + +
+ + + + + + +
+ +# tf.autodiff.ForwardAccumulator + + + + + + + + + +Computes Jacobian-vector products ("JVP"s) using forward-mode autodiff. + + + + + + + +Compare to tf.GradientTape which computes vector-Jacobian products ("VJP"s) +using reverse-mode autodiff (backprop). Reverse mode is more attractive when +computing gradients of a scalar-valued function with respect to many inputs +(e.g. a neural network with many parameters and a scalar loss). Forward mode +works best on functions with many outputs and few inputs. Since it does not +hold on to intermediate activations, it is much more memory efficient than +backprop where it is applicable. + +Consider a simple linear regression: + +``` +>>> x = tf.constant([[2.0, 3.0], [1.0, 4.0]]) +>>> dense = tf.keras.layers.Dense(1) +>>> dense.build([None, 2]) +>>> with tf.autodiff.ForwardAccumulator( +... primals=dense.kernel, +... tangents=tf.constant([[1.], [0.]])) as acc: +... loss = tf.reduce_sum((dense(x) - tf.constant([1., -1.])) ** 2.) +>>> acc.jvp(loss) + +``` + +The example has two variables containing parameters, `dense.kernel` (2 +parameters) and `dense.bias` (1 parameter). Considering the training data `x` +as a constant, this means the Jacobian matrix for the function mapping from +parameters to loss has one row and three columns. + +With forwardprop, we specify a length-three vector in advance which multiplies +the Jacobian. The `primals` constructor argument is the parameter (a +tf.Tensor or tf.Variable) we're specifying a vector for, and the +`tangents` argument is the "vector" in Jacobian-vector product. If our goal is +to compute the entire Jacobian matrix, forwardprop computes one column at a +time while backprop computes one row at a time. Since the Jacobian in the +linear regression example has only one row, backprop requires fewer +invocations: + +``` +>>> x = tf.constant([[2.0, 3.0], [1.0, 4.0]]) +>>> dense = tf.keras.layers.Dense(1) +>>> dense.build([None, 2]) +>>> loss_fn = lambda: tf.reduce_sum((dense(x) - tf.constant([1., -1.])) ** 2.) +>>> kernel_fprop = [] +>>> with tf.autodiff.ForwardAccumulator( +... dense.kernel, tf.constant([[1.], [0.]])) as acc: +... kernel_fprop.append(acc.jvp(loss_fn())) +>>> with tf.autodiff.ForwardAccumulator( +... dense.kernel, tf.constant([[0.], [1.]])) as acc: +... kernel_fprop.append(acc.jvp(loss_fn())) +>>> with tf.autodiff.ForwardAccumulator(dense.bias, tf.constant([1.])) as acc: +... bias_fprop = acc.jvp(loss_fn()) +>>> with tf.GradientTape() as tape: +... loss = loss_fn() +>>> kernel_grad, bias_grad = tape.gradient(loss, (dense.kernel, dense.bias)) +>>> np.testing.assert_allclose( +... kernel_grad, tf.stack(kernel_fprop)[:, tf.newaxis]) +>>> np.testing.assert_allclose(bias_grad, bias_fprop[tf.newaxis]) +``` + +Implicit in the `tape.gradient` call is a length-one vector which +left-multiplies the Jacobian, a vector-Jacobian product. + +`ForwardAccumulator` maintains JVPs corresponding primal tensors it is +watching, derived from the original `primals` specified in the constructor. As +soon as a primal tensor is deleted, `ForwardAccumulator` deletes the +corresponding JVP. + +`acc.jvp(x)` retrieves `acc`'s JVP corresponding to the primal tensor `x`. It +does not perform any computation. `acc.jvp` calls can be repeated as long as +`acc` is accessible, whether the context manager is active or not. New JVPs +are only computed while the context manager is active. 
+ +Note that `ForwardAccumulator`s are always applied in the order their context +managers were entered, so inner accumulators will not see JVP computation from +outer accumulators. Take higher-order JVPs from outer accumulators: + +``` +>>> primal = tf.constant(1.1) +>>> with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as outer: +... with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as inner: +... primal_out = primal ** tf.constant(3.5) +>>> inner_jvp = inner.jvp(primal_out) +>>> inner_jvp # 3.5 * 1.1 ** 2.5 + +>>> outer.jvp(inner_jvp) # 3.5 * 2.5 * 1.1 ** 1.5 + +``` + +Reversing the collection in the last line to instead retrieve +`inner.jvp(outer.jvp(primal_out))` will not work. + +Strict nesting also applies to combinations of `ForwardAccumulator` and +tf.GradientTape. More deeply nested `GradientTape` objects will ignore the +products of outer `ForwardAccumulator` objects. This allows (for example) +memory-efficient forward-over-backward computation of Hessian-vector products, +where the inner `GradientTape` would otherwise hold on to all intermediate +JVPs: + +``` +>>> v = tf.Variable([1., 2.]) +>>> with tf.autodiff.ForwardAccumulator( +... v, +... # The "vector" in Hessian-vector product. +... tf.constant([1., 0.])) as acc: +... with tf.GradientTape() as tape: +... y = tf.reduce_sum(v ** 3.) +... backward = tape.gradient(y, v) +>>> backward # gradient from backprop + +>>> acc.jvp(backward) # forward-over-backward Hessian-vector product + +``` + + + + + + + + + + + + + +
+`primals` + +A tensor or nested structure of tensors to watch. +
+`tangents` + +A tensor or nested structure of tensors, with the same nesting +structure as `primals`, with each element being a vector with the same +size as the corresponding primal element. +
+ + + + + + + + + + + + +
+`ValueError` + +If the same tensor or variable is specified multiple times in +`primals`. +
+ + + +## Methods + +

jvp

+ +View source + + + +Fetches the Jacobian-vector product computed for `primals`. + +Note that this method performs no computation, and simply looks up a JVP +that was already computed (unlike backprop using a tf.GradientTape, where +the computation happens on the call to `tape.gradient`). + + + + + + + + + + + + + +
Args
+`primals` + +A watched Tensor or structure of Tensors to fetch the JVPs for. +
+`unconnected_gradients` + +A value which can either hold 'none' or 'zero' and +alters the value which will be returned if no JVP was computed for +`primals`. The possible values and effects are detailed in +'tf.UnconnectedGradients' and it defaults to 'none'. +
+ + + + + + + + + + + +
Returns
+Tensors with the same shapes and dtypes as `primals`, or None if no JVP +is available. +
+ + + +
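A small sketch of this lookup behaviour, including the `unconnected_gradients` argument; the tensors and values below are illustrative only:

```python
import tensorflow as tf

x = tf.constant(2.0)
unrelated = tf.constant(5.0)
with tf.autodiff.ForwardAccumulator(primals=x, tangents=tf.constant(1.0)) as acc:
  y = x * x  # JVP of y along the tangent: 2 * x * 1.0 = 4.0

print(acc.jvp(y))          # tf.Tensor(4.0, ...)
print(acc.jvp(unrelated))  # None: no JVP was computed for this tensor
print(acc.jvp(unrelated,
              unconnected_gradients=tf.UnconnectedGradients.ZERO))  # 0.0
```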

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/autograph.md b/site/en/api_docs/python/tf/autograph.md new file mode 100644 index 00000000000..519040ab53c --- /dev/null +++ b/site/en/api_docs/python/tf/autograph.md @@ -0,0 +1,45 @@ +description: Conversion of plain Python into TensorFlow graph code. + +
+ + +
+ +# Module: tf.autograph + + + + + + + + + +Conversion of plain Python into TensorFlow graph code. + + +NOTE: In TensorFlow 2.0, AutoGraph is automatically applied when using +tf.function. This module contains lower-level APIs for advanced use. + +For more information, see the +[AutoGraph guide](https://www.tensorflow.org/guide/autograph). + +By equivalent graph code we mean code that generates a TensorFlow graph when +run. The generated graph has the same effects as the original code when executed +(for example with tf.function or tf.compat.v1.Session.run). In other words, +using AutoGraph can be thought of as running Python in TensorFlow. + +## Modules + +[`experimental`](../tf/autograph/experimental.md) module: Public API for tf.autograph.experimental namespace. + +## Functions + +[`set_verbosity(...)`](../tf/autograph/set_verbosity.md): Sets the AutoGraph verbosity level. + +[`to_code(...)`](../tf/autograph/to_code.md): Returns the source code generated by AutoGraph, as a string. + +[`to_graph(...)`](../tf/autograph/to_graph.md): Converts a Python entity into a TensorFlow graph. + +[`trace(...)`](../tf/autograph/trace.md): Traces argument information at compilation time. + diff --git a/site/en/api_docs/python/tf/autograph/experimental.md b/site/en/api_docs/python/tf/autograph/experimental.md new file mode 100644 index 00000000000..629ac4353c7 --- /dev/null +++ b/site/en/api_docs/python/tf/autograph/experimental.md @@ -0,0 +1,31 @@ +description: Public API for tf.autograph.experimental namespace. + +
+ + +
+ +# Module: tf.autograph.experimental + + + + + + + + + +Public API for tf.autograph.experimental namespace. + + + +## Classes + +[`class Feature`](../../tf/autograph/experimental/Feature.md): This enumeration represents optional conversion options. + +## Functions + +[`do_not_convert(...)`](../../tf/autograph/experimental/do_not_convert.md): Decorator that suppresses the conversion of a function. + +[`set_loop_options(...)`](../../tf/autograph/experimental/set_loop_options.md): Specifies additional arguments to be passed to the enclosing while_loop. + diff --git a/site/en/api_docs/python/tf/autograph/experimental/Feature.md b/site/en/api_docs/python/tf/autograph/experimental/Feature.md new file mode 100644 index 00000000000..32eee2bb0da --- /dev/null +++ b/site/en/api_docs/python/tf/autograph/experimental/Feature.md @@ -0,0 +1,131 @@ +description: This enumeration represents optional conversion options. + +
+ + + + + + + + + +
+ +# tf.autograph.experimental.Feature + + + + + + + + + +This enumeration represents optional conversion options. + + + + + +These conversion options are experimental. They are subject to change without +notice and offer no guarantees. + +_Example Usage_ + +```python +optionals= tf.autograph.experimental.Feature.EQUALITY_OPERATORS +@tf.function(experimental_autograph_options=optionals) +def f(i): + if i == 0: # EQUALITY_OPERATORS allows the use of == here. + tf.print('i is zero') +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`ALL` + +Enable all features. +
+`AUTO_CONTROL_DEPS` + +Insertion of control dependencies in the generated code. +
+`ASSERT_STATEMENTS` + +Convert Tensor-dependent assert statements to tf.Assert. +
+`BUILTIN_FUNCTIONS` + +Convert builtin functions applied to Tensors to +their TF counterparts. +
+`EQUALITY_OPERATORS` + +Whether to convert the comparison operators, like +equality. This is soon to be deprecated as support is being added to the +Tensor class. +
+`LISTS` + +Convert list idioms, like initializers, slices, append, etc. +
+`NAME_SCOPES` + +Insert name scopes that name ops according to context, like the +function they were defined in. +
+ + + +## Class Variables + +* `ALL` +* `ASSERT_STATEMENTS` +* `AUTO_CONTROL_DEPS` +* `BUILTIN_FUNCTIONS` +* `EQUALITY_OPERATORS` +* `LISTS` +* `NAME_SCOPES` diff --git a/site/en/api_docs/python/tf/autograph/experimental/do_not_convert.md b/site/en/api_docs/python/tf/autograph/experimental/do_not_convert.md new file mode 100644 index 00000000000..be90f7be375 --- /dev/null +++ b/site/en/api_docs/python/tf/autograph/experimental/do_not_convert.md @@ -0,0 +1,79 @@ +description: Decorator that suppresses the conversion of a function. + +
+ + +
+ +# tf.autograph.experimental.do_not_convert + + + + + + + + + +Decorator that suppresses the conversion of a function. + + + + + + + + + + + + + + + + + + + +
+`func` + +function to decorate. +
+ + + + + + + + + + + +
+If `func` is not None, returns a `Callable` which is equivalent to +`func`, but is not converted by AutoGraph. +If `func` is None, returns a decorator that, when invoked with a +single `func` argument, returns a `Callable` equivalent to the +above case. +
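A brief sketch of the decorator form; `plain_python_helper` is a made-up name for illustration:

```python
import tensorflow as tf

@tf.autograph.experimental.do_not_convert
def plain_python_helper(x):
  # Left as ordinary Python; AutoGraph will not rewrite this body.
  return x + 1

@tf.function
def f(x):
  return plain_python_helper(x)

print(f(tf.constant(1)))  # tf.Tensor(2, shape=(), dtype=int32)
```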
+ diff --git a/site/en/api_docs/python/tf/autograph/experimental/set_loop_options.md b/site/en/api_docs/python/tf/autograph/experimental/set_loop_options.md new file mode 100644 index 00000000000..fd745ca17f0 --- /dev/null +++ b/site/en/api_docs/python/tf/autograph/experimental/set_loop_options.md @@ -0,0 +1,120 @@ +description: Specifies additional arguments to be passed to the enclosing while_loop. + +
+ + +
+ +# tf.autograph.experimental.set_loop_options + + + + + + + + + +Specifies additional arguments to be passed to the enclosing while_loop. + + + + + + + + + +The parameters apply to and only to the immediately enclosing loop. It only +has effect if the loop is staged as a TF while_loop; otherwise the parameters +have no effect. + +#### Usage: + + +``` +>>> @tf.function(autograph=True) +... def f(): +... n = 0 +... for i in tf.range(10): +... tf.autograph.experimental.set_loop_options(maximum_iterations=3) +... n += 1 +... return n +``` + +``` +>>> @tf.function(autograph=True) +... def f(): +... v = tf.constant((0,)) +... for i in tf.range(3): +... tf.autograph.experimental.set_loop_options( +... shape_invariants=[(v, tf.TensorShape([None]))] +... ) +... v = tf.concat((v, [i]), 0) +... return v +``` + + +Also see tf.while_loop. + + + + + + + + + + + + + + + + + + + +
+`parallel_iterations` + +The maximum number of iterations allowed to run in +parallel at any given time. Note that this does not guarantee parallel +execution. +
+`swap_memory` + +Whether to store intermediate values needed for +gradients on the CPU instead of GPU. +
+`maximum_iterations` + +Allows limiting the total number of iterations executed +by the loop. +
+`shape_invariants` + +Allows controlling the argument with the same name passed +to tf.while_loop. Unlike tf.while_loop, this is a list of +`(tensor, shape)` pairs. +
+ diff --git a/site/en/api_docs/python/tf/autograph/set_verbosity.md b/site/en/api_docs/python/tf/autograph/set_verbosity.md new file mode 100644 index 00000000000..114ba65b608 --- /dev/null +++ b/site/en/api_docs/python/tf/autograph/set_verbosity.md @@ -0,0 +1,106 @@ +description: Sets the AutoGraph verbosity level. + +
+ + +
+ +# tf.autograph.set_verbosity + + + + + + + + + +Sets the AutoGraph verbosity level. + + + + + + + + + +_Debug logging in AutoGraph_ + +More verbose logging is useful to enable when filing bug reports or doing +more in-depth debugging. + +There are two means to control the logging verbosity: + + * The `set_verbosity` function + + * The `AUTOGRAPH_VERBOSITY` environment variable + +`set_verbosity` takes precedence over the environment variable. + +#### For example: + + + +```python +import os +import tensorflow as tf + +os.environ['AUTOGRAPH_VERBOSITY'] = '5' +# Verbosity is now 5 + +tf.autograph.set_verbosity(0) +# Verbosity is now 0 + +os.environ['AUTOGRAPH_VERBOSITY'] = '1' +# No effect, because set_verbosity was already called. +``` + +Log entries are output to [absl](https://abseil.io)'s +[default output](https://abseil.io/docs/python/guides/logging), +with `INFO` level. +Logs can be mirrored to stdout by using the `alsologtostdout` argument. +Mirroring is enabled by default when Python runs in interactive mode. + + + + + + + + + + + + +
+`level` + +int, the verbosity level; larger values specify increased verbosity; +0 means no logging. When reporting bugs, it is recommended to set this +value to a larger number, like 10. +
+`alsologtostdout` + +bool, whether to also output log messages to `sys.stdout`. +
+ diff --git a/site/en/api_docs/python/tf/autograph/to_code.md b/site/en/api_docs/python/tf/autograph/to_code.md new file mode 100644 index 00000000000..4d7ccbe352e --- /dev/null +++ b/site/en/api_docs/python/tf/autograph/to_code.md @@ -0,0 +1,109 @@ +description: Returns the source code generated by AutoGraph, as a string. + +
+ + +
+ +# tf.autograph.to_code + + + + + + + + + +Returns the source code generated by AutoGraph, as a string. + + + + + + + + +#### Example usage: + + + +``` +>>> def f(x): +... if x < 0: +... x = -x +... return x +>>> tf.autograph.to_code(f) +"...def tf__f(x):..." +``` + +Also see: tf.autograph.to_graph. + +Note: If a function has been decorated with tf.function, pass its +underlying Python function, rather than the callable that `tf.function` +creates: + +``` +>>> @tf.function +... def f(x): +... if x < 0: +... x = -x +... return x +>>> tf.autograph.to_code(f.python_function) +"...def tf__f(x):..." +``` + + + + + + + + + + + + + + +
+`entity` + +Python callable or class to convert. +
+`recursive` + +Whether to recursively convert any functions that the converted +function may call. +
+`experimental_optional_features` + +`None`, a tuple of, or a single +tf.autograph.experimental.Feature value. +
+ + + + + + + + + + + +
+The converted code as string. +
+ diff --git a/site/en/api_docs/python/tf/autograph/to_graph.md b/site/en/api_docs/python/tf/autograph/to_graph.md new file mode 100644 index 00000000000..09f8dea988d --- /dev/null +++ b/site/en/api_docs/python/tf/autograph/to_graph.md @@ -0,0 +1,142 @@ +description: Converts a Python entity into a TensorFlow graph. + +
+ + +
+ +# tf.autograph.to_graph + + + + + + + + + +Converts a Python entity into a TensorFlow graph. + + + + + + + +Also see: tf.autograph.to_code, tf.function. + +Unlike tf.function, `to_graph` is a low-level transpiler that converts +Python code to TensorFlow graph code. It does not implement any caching, +variable management or create any actual ops, and is best used where greater +control over the generated TensorFlow graph is desired. Another difference +from tf.function is that `to_graph` will not wrap the graph into a +TensorFlow function or a Python callable. Internally, tf.function uses +`to_graph`. + +#### Example usage: + + + +``` +>>> def f(x): +... if x > 0: +... y = x * x +... else: +... y = -x +... return y +... +>>> converted_f = to_graph(f) +>>> x = tf.constant(2) +>>> converted_f(x) # converted_foo is like a TensorFlow Op. + +``` + +Supported Python entities include: + * functions + * classes + * object methods + +Functions are converted into new functions with converted code. + +Classes are converted by generating a new class whose methods use converted +code. + +Methods are converted into unbound function that have an additional first +argument called `self`. + +For a tutorial, see the +[tf.function and AutoGraph guide](https://www.tensorflow.org/guide/function). +For more detailed information, see the +[AutoGraph reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md). + + + + + + + + + + + + + + + + +
+`entity` + +Python callable or class to convert. +
+`recursive` + +Whether to recursively convert any functions that the converted +function may call. +
+`experimental_optional_features` + +`None`, a tuple of, or a single +tf.autograph.experimental.Feature value. +
+ + + + + + + + + + + +
+Same as `entity`, the converted Python function or class. +
+ + + + + + + + + + + + +
+`ValueError` + +If the entity could not be converted. +
+ diff --git a/site/en/api_docs/python/tf/autograph/trace.md b/site/en/api_docs/python/tf/autograph/trace.md new file mode 100644 index 00000000000..44af3f75462 --- /dev/null +++ b/site/en/api_docs/python/tf/autograph/trace.md @@ -0,0 +1,73 @@ +description: Traces argument information at compilation time. + +
+ + +
+ +# tf.autograph.trace + + + + + + + + + +Traces argument information at compilation time. + + + + + + + + + +`trace` is useful when debugging, and it always executes during the tracing +phase, that is, when the TF graph is constructed. + +_Example usage_ + +```python +import tensorflow as tf + +for i in tf.range(10): + tf.autograph.trace(i) +# Output: +``` + + + + + + + + + + +
+`*args` + +Arguments to print to `sys.stdout`. +
+ diff --git a/site/en/api_docs/python/tf/batch_to_space.md b/site/en/api_docs/python/tf/batch_to_space.md new file mode 100644 index 00000000000..a05423f731f --- /dev/null +++ b/site/en/api_docs/python/tf/batch_to_space.md @@ -0,0 +1,181 @@ +description: BatchToSpace for N-D tensors of type T. + +
+ + +
+ +# tf.batch_to_space + + + + + + + + + +BatchToSpace for N-D tensors of type T. + + + + + + + +This operation reshapes the "batch" dimension 0 into `M + 1` dimensions of +shape `block_shape + [batch]`, interleaves these blocks back into the grid +defined by the spatial dimensions `[1, ..., M]`, to obtain a result with the +same rank as the input. The spatial dimensions of this intermediate result +are then optionally cropped according to `crops` to produce the output. This +is the reverse of SpaceToBatch (see tf.space_to_batch). + + + + + + + + + + + + + + + + + + + +
+`input` + +A N-D `Tensor` with shape `input_shape = [batch] + spatial_shape + +remaining_shape`, where `spatial_shape` has M dimensions. +
+`block_shape` + +A 1-D `Tensor` with shape [M]. Must be one of the following +types: `int32`, `int64`. All values must be >= 1. For backwards +compatibility with TF 1.0, this parameter may be an int, in which case it +is converted to +`numpy.array([block_shape, block_shape], +dtype=numpy.int64)`. +
+`crops` + +A 2-D `Tensor` with shape `[M, 2]`. Must be one of the +following types: `int32`, `int64`. All values must be >= 0. +`crops[i] = [crop_start, crop_end]` specifies the amount to crop from +input dimension `i + 1`, which corresponds to spatial dimension `i`. +It is required that +`crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`. +This operation is equivalent to the following steps: +1. Reshape `input` to `reshaped` of shape: [block_shape[0], ..., +block_shape[M-1], batch / prod(block_shape), input_shape[1], ..., +input_shape[N-1]] +2. Permute dimensions of `reshaped` to produce `permuted` of shape +[batch / prod(block_shape), input_shape[1], block_shape[0], ..., +input_shape[M], block_shape[M-1], input_shape[M+1], +..., input_shape[N-1]] +3. Reshape `permuted` to produce `reshaped_permuted` of shape +[batch / prod(block_shape), input_shape[1] * block_shape[0], ..., +input_shape[M] * block_shape[M-1], input_shape[M+1], ..., +input_shape[N-1]] +4. Crop the start and end of dimensions `[1, ..., M]` of +`reshaped_permuted` according to `crops` to produce the output +of shape: +[batch / prod(block_shape), input_shape[1] * +block_shape[0] - crops[0,0] - crops[0,1], ..., input_shape[M] * +block_shape[M-1] - crops[M-1,0] - crops[M-1,1], input_shape[M+1], +..., input_shape[N-1]] +Some Examples: +(1) For the following input of shape `[4, 1, 1, 1]`, +`block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`: +```python +[[[[1]]], +[[[2]]], +[[[3]]], +[[[4]]]] +``` +The output tensor has shape `[1, 2, 2, 1]` and value: +``` x = [[[[1], [2]], +[[3], [4]]]] ``` +(2) For the following input of shape `[4, 1, 1, 3]`, +`block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`: +```python +[[[1, 2, 3]], +[[4, 5, 6]], +[[7, 8, 9]], +[[10, 11, 12]]] +``` +The output tensor has shape `[1, 2, 2, 3]` and value: +```python +x = [[[[1, 2, 3], [4, 5, 6 ]], +[[7, 8, 9], [10, 11, 12]]]] +``` +(3) For the following +input of shape `[4, 2, 2, 1]`, +`block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`: +```python +x = [[[[1], [3]], [[ 9], [11]]], +[[[2], [4]], [[10], [12]]], +[[[5], [7]], [[13], [15]]], +[[[6], [8]], [[14], [16]]]] +``` +The output tensor has shape `[1, 4, 4, 1]` and value: +```python +x = [[[1], [2], [ 3], [ 4]], +[[5], [6], [ 7], [ 8]], +[[9], [10], [11], [12]], +[[13], [14], [15], [16]]] +``` +(4) For the following input of shape +`[8, 1, 3, 1]`, +`block_shape = [2, 2]`, and `crops = [[0, 0], [2, 0]]`: +```python +x = [[[[0], [ 1], [ 3]]], +[[[0], [ 9], [11]]], +[[[0], [ 2], [ 4]]], +[[[0], [10], [12]]], +[[[0], [ 5], [ 7]]], +[[[0], [13], [15]]], +[[[0], [ 6], [ 8]]], +[[[0], [14], [16]]]] +``` +The output tensor has shape `[2, 2, 4, 1]` and value: +```python +x = [[[[ 1], [ 2], [ 3], [ 4]], +[[ 5], [ 6], [ 7], [ 8]]], +[[[ 9], [10], [11], [12]], +[[13], [14], [15], [16]]]] ``` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
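For reference, the first example above written as a runnable call (a minimal sketch):

```python
import tensorflow as tf

x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])  # shape [4, 1, 1, 1]
y = tf.batch_to_space(x, block_shape=[2, 2], crops=[[0, 0], [0, 0]])
print(y.shape)                # (1, 2, 2, 1)
print(tf.squeeze(y).numpy())  # [[1 2]
                              #  [3 4]]
```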
+ diff --git a/site/en/api_docs/python/tf/bitcast.md b/site/en/api_docs/python/tf/bitcast.md new file mode 100644 index 00000000000..7128d69a1ef --- /dev/null +++ b/site/en/api_docs/python/tf/bitcast.md @@ -0,0 +1,146 @@ +description: Bitcasts a tensor from one type to another without copying data. + +
+ + +
+ +# tf.bitcast + + + + + + + + + +Bitcasts a tensor from one type to another without copying data. + + + + + + + + + +Given a tensor `input`, this operation returns a tensor that has the same buffer +data as `input` with datatype `type`. + +If the input datatype `T` is larger than the output datatype `type` then the +shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)]. + +If `T` is smaller than `type`, the operator requires that the rightmost +dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from +[..., sizeof(`type`)/sizeof(`T`)] to [...]. + +tf.bitcast() and tf.cast() work differently when real dtype is casted as a complex dtype +(e.g. tf.complex64 or tf.complex128) as tf.cast() make imaginary part 0 while tf.bitcast() +gives module error. +For example, + +#### Example 1: + + + +``` +>>> a = [1., 2., 3.] +>>> equality_bitcast = tf.bitcast(a, tf.complex128) +Traceback (most recent call last): +... +InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast] +>>> equality_cast = tf.cast(a, tf.complex128) +>>> print(equality_cast) +tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128) +``` + +#### Example 2: + + + +``` +>>> tf.bitcast(tf.constant(0xffffffff, dtype=tf.uint32), tf.uint8) + +``` + +#### Example 3: + + + +``` +>>> x = [1., 2., 3.] +>>> y = [0., 2., 3.] +>>> equality= tf.equal(x,y) +>>> equality_cast = tf.cast(equality,tf.float32) +>>> equality_bitcast = tf.bitcast(equality_cast,tf.uint8) +>>> print(equality) +tf.Tensor([False True True], shape=(3,), dtype=bool) +>>> print(equality_cast) +tf.Tensor([0. 1. 1.], shape=(3,), dtype=float32) +>>> print(equality_bitcast) +tf.Tensor( + [[ 0 0 0 0] + [ 0 0 128 63] + [ 0 0 128 63]], shape=(3, 4), dtype=uint8) +``` + +*NOTE*: Bitcast is implemented as a low-level cast, so machines with different +endian orderings will give different results. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `complex64`, `complex128`, `qint8`, `quint8`, `qint16`, `quint16`, `qint32`. +
+`type` + +A tf.DType from: `tf.bfloat16, tf.half, tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.uint32, tf.uint64, tf.int8, tf.int16, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `type`. +
+ diff --git a/site/en/api_docs/python/tf/bitwise.md b/site/en/api_docs/python/tf/bitwise.md new file mode 100644 index 00000000000..aecc09ba0d7 --- /dev/null +++ b/site/en/api_docs/python/tf/bitwise.md @@ -0,0 +1,35 @@ +description: Operations for manipulating the binary representations of integers. + +
+ + +
+ +# Module: tf.bitwise + + + + + + + + + +Operations for manipulating the binary representations of integers. + + + +## Functions + +[`bitwise_and(...)`](../tf/bitwise/bitwise_and.md): Elementwise computes the bitwise AND of `x` and `y`. + +[`bitwise_or(...)`](../tf/bitwise/bitwise_or.md): Elementwise computes the bitwise OR of `x` and `y`. + +[`bitwise_xor(...)`](../tf/bitwise/bitwise_xor.md): Elementwise computes the bitwise XOR of `x` and `y`. + +[`invert(...)`](../tf/bitwise/invert.md): Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010. + +[`left_shift(...)`](../tf/bitwise/left_shift.md): Elementwise computes the bitwise left-shift of `x` and `y`. + +[`right_shift(...)`](../tf/bitwise/right_shift.md): Elementwise computes the bitwise right-shift of `x` and `y`. + diff --git a/site/en/api_docs/python/tf/bitwise/bitwise_and.md b/site/en/api_docs/python/tf/bitwise/bitwise_and.md new file mode 100644 index 00000000000..e4c4aab8dd7 --- /dev/null +++ b/site/en/api_docs/python/tf/bitwise/bitwise_and.md @@ -0,0 +1,105 @@ +description: Elementwise computes the bitwise AND of x and y. + +
+ + +
+ +# tf.bitwise.bitwise_and + + + + + + + + + +Elementwise computes the bitwise AND of `x` and `y`. + + + + + + + + + +The result will have those bits set, that are set in both `x` and `y`. The +computation is performed on the underlying representations of `x` and `y`. + +#### For example: + + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops +dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64, + tf.uint8, tf.uint16, tf.uint32, tf.uint64] + +for dtype in dtype_list: + lhs = tf.constant([0, 5, 3, 14], dtype=dtype) + rhs = tf.constant([5, 0, 7, 11], dtype=dtype) + exp = tf.constant([0, 0, 3, 10], dtype=tf.float32) + + res = bitwise_ops.bitwise_and(lhs, rhs) + tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/bitwise/bitwise_or.md b/site/en/api_docs/python/tf/bitwise/bitwise_or.md new file mode 100644 index 00000000000..790e05cdfeb --- /dev/null +++ b/site/en/api_docs/python/tf/bitwise/bitwise_or.md @@ -0,0 +1,105 @@ +description: Elementwise computes the bitwise OR of x and y. + +
+ + +
+ +# tf.bitwise.bitwise_or + + + + + + + + + +Elementwise computes the bitwise OR of `x` and `y`. + + + + + + + + + +The result will have those bits set, that are set in `x`, `y` or both. The +computation is performed on the underlying representations of `x` and `y`. + +#### For example: + + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops +dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64, + tf.uint8, tf.uint16, tf.uint32, tf.uint64] + +for dtype in dtype_list: + lhs = tf.constant([0, 5, 3, 14], dtype=dtype) + rhs = tf.constant([5, 0, 7, 11], dtype=dtype) + exp = tf.constant([5, 5, 7, 15], dtype=tf.float32) + + res = bitwise_ops.bitwise_or(lhs, rhs) + tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/bitwise/bitwise_xor.md b/site/en/api_docs/python/tf/bitwise/bitwise_xor.md new file mode 100644 index 00000000000..ca87c7ed4fd --- /dev/null +++ b/site/en/api_docs/python/tf/bitwise/bitwise_xor.md @@ -0,0 +1,105 @@ +description: Elementwise computes the bitwise XOR of x and y. + +
+ + +
+ +# tf.bitwise.bitwise_xor + + + + + + + + + +Elementwise computes the bitwise XOR of `x` and `y`. + + + + + + + + + +The result will have those bits set, that are different in `x` and `y`. The +computation is performed on the underlying representations of `x` and `y`. + +#### For example: + + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops +dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64, + tf.uint8, tf.uint16, tf.uint32, tf.uint64] + +for dtype in dtype_list: + lhs = tf.constant([0, 5, 3, 14], dtype=dtype) + rhs = tf.constant([5, 0, 7, 11], dtype=dtype) + exp = tf.constant([5, 5, 4, 5], dtype=tf.float32) + + res = bitwise_ops.bitwise_xor(lhs, rhs) + tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/bitwise/invert.md b/site/en/api_docs/python/tf/bitwise/invert.md new file mode 100644 index 00000000000..5889d62e7c3 --- /dev/null +++ b/site/en/api_docs/python/tf/bitwise/invert.md @@ -0,0 +1,118 @@ +description: Invert (flip) each bit of supported types; for example, type uint8 value 01010101 becomes 10101010. + +
+ + +
+ +# tf.bitwise.invert + + + + + + + + + +Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010. + + + + + + + + + +Flip each bit of supported types. For example, type `int8` (decimal 2) binary 00000010 becomes (decimal -3) binary 11111101. +This operation is performed on each element of the tensor argument `x`. + +#### Example: + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops + +# flip 2 (00000010) to -3 (11111101) +tf.assert_equal(-3, bitwise_ops.invert(2)) + +dtype_list = [dtypes.int8, dtypes.int16, dtypes.int32, dtypes.int64, + dtypes.uint8, dtypes.uint16, dtypes.uint32, dtypes.uint64] + +inputs = [0, 5, 3, 14] +for dtype in dtype_list: + # Because of issues with negative numbers, let's test this indirectly. + # 1. invert(a) and a = 0 + # 2. invert(a) or a = invert(0) + input_tensor = tf.constant([0, 5, 3, 14], dtype=dtype) + not_a_and_a, not_a_or_a, not_0 = [bitwise_ops.bitwise_and( + input_tensor, bitwise_ops.invert(input_tensor)), + bitwise_ops.bitwise_or( + input_tensor, bitwise_ops.invert(input_tensor)), + bitwise_ops.invert( + tf.constant(0, dtype=dtype))] + + expected = tf.constant([0, 0, 0, 0], dtype=tf.float32) + tf.assert_equal(tf.cast(not_a_and_a, tf.float32), expected) + + expected = tf.cast([not_0] * 4, tf.float32) + tf.assert_equal(tf.cast(not_a_or_a, tf.float32), expected) + + # For unsigned dtypes let's also check the result directly. + if dtype.is_unsigned: + inverted = bitwise_ops.invert(input_tensor) + expected = tf.constant([dtype.max - x for x in inputs], dtype=tf.float32) + tf.assert_equal(tf.cast(inverted, tf.float32), tf.cast(expected, tf.float32)) +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/bitwise/left_shift.md b/site/en/api_docs/python/tf/bitwise/left_shift.md new file mode 100644 index 00000000000..ff256e78316 --- /dev/null +++ b/site/en/api_docs/python/tf/bitwise/left_shift.md @@ -0,0 +1,116 @@ +description: Elementwise computes the bitwise left-shift of x and y. + +
+ + +
+ +# tf.bitwise.left_shift + + + + + + + + + +Elementwise computes the bitwise left-shift of `x` and `y`. + + + + + + + + + +If `y` is negative, or greater than or equal to the width of `x` in bits the +result is implementation defined. + +#### Example: + + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops +import numpy as np +dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64] + +for dtype in dtype_list: + lhs = tf.constant([-1, -5, -3, -14], dtype=dtype) + rhs = tf.constant([5, 0, 7, 11], dtype=dtype) + + left_shift_result = bitwise_ops.left_shift(lhs, rhs) + + print(left_shift_result) + +# This will print: +# tf.Tensor([ -32 -5 -128 0], shape=(4,), dtype=int8) +# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int16) +# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int32) +# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int64) + +lhs = np.array([-2, 64, 101, 32], dtype=np.int8) +rhs = np.array([-1, -5, -3, -14], dtype=np.int8) +bitwise_ops.left_shift(lhs, rhs) +# +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/bitwise/right_shift.md b/site/en/api_docs/python/tf/bitwise/right_shift.md new file mode 100644 index 00000000000..9ab2020376c --- /dev/null +++ b/site/en/api_docs/python/tf/bitwise/right_shift.md @@ -0,0 +1,119 @@ +description: Elementwise computes the bitwise right-shift of x and y. + +
+ + +
+ +# tf.bitwise.right_shift + + + + + + + + + +Elementwise computes the bitwise right-shift of `x` and `y`. + + + + + + + + + +Performs a logical shift for unsigned integer types, and an arithmetic shift +for signed integer types. + +If `y` is negative, or greater than or equal to than the width of `x` in bits +the result is implementation defined. + +#### Example: + + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops +import numpy as np +dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64] + +for dtype in dtype_list: + lhs = tf.constant([-1, -5, -3, -14], dtype=dtype) + rhs = tf.constant([5, 0, 7, 11], dtype=dtype) + + right_shift_result = bitwise_ops.right_shift(lhs, rhs) + + print(right_shift_result) + +# This will print: +# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int8) +# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int16) +# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int32) +# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int64) + +lhs = np.array([-2, 64, 101, 32], dtype=np.int8) +rhs = np.array([-1, -5, -3, -14], dtype=np.int8) +bitwise_ops.right_shift(lhs, rhs) +# +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/boolean_mask.md b/site/en/api_docs/python/tf/boolean_mask.md new file mode 100644 index 00000000000..21f06dd1c0e --- /dev/null +++ b/site/en/api_docs/python/tf/boolean_mask.md @@ -0,0 +1,137 @@ +description: Apply boolean mask to tensor. + +
+ + +
+ +# tf.boolean_mask + + + + + + + + + +Apply boolean mask to tensor. + + + + + + + +Numpy equivalent is `tensor[mask]`. + +```python +# 1-D example +tensor = [0, 1, 2, 3] +mask = np.array([True, False, True, False]) +boolean_mask(tensor, mask) # [0, 2] +``` + +In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match +the first K dimensions of `tensor`'s shape. We then have: + `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]` +where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order). +The `axis` could be used with `mask` to indicate the axis to mask from. +In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match +the first `axis + dim(mask)` dimensions of `tensor`'s shape. + +See also: tf.ragged.boolean_mask, which can be applied to both dense and +ragged tensors, and can be used if you need to preserve the masked dimensions +of `tensor` (rather than flattening them, as tf.boolean_mask does). + + + + + + + + + + + + + + + + + + + +
+`tensor` + +N-D tensor. +
+`mask` + +K-D boolean tensor, K <= N and K must be known statically. +
+`axis` + +A 0-D int Tensor representing the axis in `tensor` to mask from. By +default, axis is 0 which will mask from the first dimension. Otherwise K + +axis <= N. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+(N-K+1)-dimensional tensor populated by entries in `tensor` corresponding +to `True` values in `mask`. +
+ + + + + + + + + + + + +
+`ValueError` + +If shapes do not conform. +
+ + + +#### Examples: + + + +```python +# 2-D example +tensor = [[1, 2], [3, 4], [5, 6]] +mask = np.array([True, False, True]) +boolean_mask(tensor, mask) # [[1, 2], [5, 6]] +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/broadcast_dynamic_shape.md b/site/en/api_docs/python/tf/broadcast_dynamic_shape.md new file mode 100644 index 00000000000..2c48262c0ba --- /dev/null +++ b/site/en/api_docs/python/tf/broadcast_dynamic_shape.md @@ -0,0 +1,92 @@ +description: Computes the shape of a broadcast given symbolic shapes. + +
+ + +
+ +# tf.broadcast_dynamic_shape + + + + + + + + + +Computes the shape of a broadcast given symbolic shapes. + + + + + + + + + +When shape_x and shape_y are Tensors representing shapes (i.e. the result of +calling tf.shape on another Tensor) this computes a Tensor which is the shape +of the result of a broadcasting op applied in tensors of shapes shape_x and +shape_y. + +For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a +Tensor whose value is [5, 2, 3]. + +This is useful when validating the result of a broadcasting operation when the +tensors do not have statically known shapes. + + + + + + + + + + + + + +
+`shape_x` + +A rank 1 integer `Tensor`, representing the shape of x. +
+`shape_y` + +A rank 1 integer `Tensor`, representing the shape of y. +
+ + + + + + + + + + + +
+A rank 1 integer `Tensor` representing the broadcasted shape. +
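A minimal sketch of the example above, using tf.shape to obtain the dynamic shapes:

```python
import tensorflow as tf

x = tf.ones([1, 2, 3])
y = tf.ones([5, 1, 3])
# Shape of the result of broadcasting x and y together.
shape = tf.broadcast_dynamic_shape(tf.shape(x), tf.shape(y))
print(shape.numpy())  # [5 2 3]
```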
+ diff --git a/site/en/api_docs/python/tf/broadcast_static_shape.md b/site/en/api_docs/python/tf/broadcast_static_shape.md new file mode 100644 index 00000000000..5e972fc274f --- /dev/null +++ b/site/en/api_docs/python/tf/broadcast_static_shape.md @@ -0,0 +1,108 @@ +description: Computes the shape of a broadcast given known shapes. + +
+ + +
+ +# tf.broadcast_static_shape + + + + + + + + + +Computes the shape of a broadcast given known shapes. + + + + + + + + + +When shape_x and shape_y are fully known TensorShapes this computes a +TensorShape which is the shape of the result of a broadcasting op applied in +tensors of shapes shape_x and shape_y. + +For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a +TensorShape whose value is [5, 2, 3]. + +This is useful when validating the result of a broadcasting operation when the +tensors have statically known shapes. + + + + + + + + + + + + + +
+`shape_x` + +A `TensorShape` +
+`shape_y` + +A `TensorShape` +
+ + + + + + + + + + + +
+A `TensorShape` representing the broadcasted shape. +
+ + + + + + + + + + + + +
+`ValueError` + +If the two shapes can not be broadcasted. +
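A minimal sketch mirroring the example above, this time with fully known `TensorShape`s:

```python
import tensorflow as tf

# Static shapes are combined without building any ops.
shape = tf.broadcast_static_shape(tf.TensorShape([1, 2, 3]),
                                  tf.TensorShape([5, 1, 3]))
print(shape)  # (5, 2, 3)
```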
+ diff --git a/site/en/api_docs/python/tf/broadcast_to.md b/site/en/api_docs/python/tf/broadcast_to.md new file mode 100644 index 00000000000..8450b6e110e --- /dev/null +++ b/site/en/api_docs/python/tf/broadcast_to.md @@ -0,0 +1,105 @@ +description: Broadcast an array for a compatible shape. + +
+ + +
+ +# tf.broadcast_to + + + + + + + + + +Broadcast an array for a compatible shape. + + + + + + + + + +Broadcasting is the process of making arrays to have compatible shapes +for arithmetic operations. Two shapes are compatible if for each +dimension pair they are either equal or one of them is one. When trying +to broadcast a Tensor to a shape, it starts with the trailing dimensions, +and works its way forward. + +For example, + +``` +>>> x = tf.constant([1, 2, 3]) +>>> y = tf.broadcast_to(x, [3, 3]) +>>> print(y) +tf.Tensor( + [[1 2 3] + [1 2 3] + [1 2 3]], shape=(3, 3), dtype=int32) +``` + +In the above example, the input Tensor with the shape of `[1, 3]` +is broadcasted to output Tensor with shape of `[3, 3]`. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. A Tensor to broadcast. +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +An 1-D `int` Tensor. The shape of the desired output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/case.md b/site/en/api_docs/python/tf/case.md new file mode 100644 index 00000000000..d3f6f40e10b --- /dev/null +++ b/site/en/api_docs/python/tf/case.md @@ -0,0 +1,204 @@ +description: Create a case operation. + +
+ + +
+ +# tf.case + + + + + + + + + +Create a case operation. + + + + + + + +See also tf.switch_case. + +The `pred_fn_pairs` parameter is a list of pairs of size N. +Each pair contains a boolean scalar tensor and a python callable that +creates the tensors to be returned if the boolean evaluates to True. +`default` is a callable generating a list of tensors. All the callables +in `pred_fn_pairs` as well as `default` (if provided) should return the same +number and types of tensors. + +If `exclusive==True`, all predicates are evaluated, and an exception is +thrown if more than one of the predicates evaluates to `True`. +If `exclusive==False`, execution stops at the first predicate which +evaluates to True, and the tensors generated by the corresponding function +are returned immediately. If none of the predicates evaluate to True, this +operation returns the tensors generated by `default`. + +tf.case supports nested structures as implemented in +`tf.contrib.framework.nest`. All of the callables must return the same +(possibly nested) value structure of lists, tuples, and/or named tuples. +Singleton lists and tuples form the only exceptions to this: when returned by +a callable, they are implicitly unpacked to single values. This +behavior is disabled by passing `strict=True`. + + + + +**Example 1:** + +#### Pseudocode: + + + +``` +if (x < y) return 17; +else return 23; +``` + +#### Expressions: + + + +```python +f1 = lambda: tf.constant(17) +f2 = lambda: tf.constant(23) +r = tf.case([(tf.less(x, y), f1)], default=f2) +``` + +**Example 2:** + +#### Pseudocode: + + + +``` +if (x < y && x > z) raise OpError("Only one predicate may evaluate to True"); +if (x < y) return 17; +else if (x > z) return 23; +else return -1; +``` + +#### Expressions: + + + +```python +def f1(): return tf.constant(17) +def f2(): return tf.constant(23) +def f3(): return tf.constant(-1) +r = tf.case([(tf.less(x, y), f1), (tf.greater(x, z), f2)], + default=f3, exclusive=True) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`pred_fn_pairs` + +List of pairs of a boolean scalar tensor and a callable which +returns a list of tensors. +
+`default` + +Optional callable that returns a list of tensors. +
+`exclusive` + +True iff at most one predicate is allowed to evaluate to `True`. +
+`strict` + +A boolean that enables/disables 'strict' mode; see above. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+The tensors returned by the first pair whose predicate evaluated to True, or +those returned by `default` if none does. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +If `pred_fn_pairs` is not a list/tuple. +
+`TypeError` + +If `pred_fn_pairs` is a list but does not contain 2-tuples. +
+`TypeError` + +If `fns[i]` is not callable for any i, or `default` is not +callable. +
+ + + +#### V2 Compatibility +`pred_fn_pairs` could be a dictionary in v1. However, tf.Tensor and +tf.Variable are no longer hashable in v2, so cannot be used as a key for a +dictionary. Please use a list or a tuple instead. + diff --git a/site/en/api_docs/python/tf/cast.md b/site/en/api_docs/python/tf/cast.md new file mode 100644 index 00000000000..9c36762150e --- /dev/null +++ b/site/en/api_docs/python/tf/cast.md @@ -0,0 +1,135 @@ +description: Casts a tensor to a new type. + +
+ + +
+ +# tf.cast + + + + + + + + + +Casts a tensor to a new type. + + + + + + + + + +The operation casts `x` (in case of `Tensor`) or `x.values` +(in case of `SparseTensor` or `IndexedSlices`) to `dtype`. + +#### For example: + + + +``` +>>> x = tf.constant([1.8, 2.2], dtype=tf.float32) +>>> tf.dtypes.cast(x, tf.int32) + +``` + +The operation supports data types (for `x` and `dtype`) of +`uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`, `int64`, +`float16`, `float32`, `float64`, `complex64`, `complex128`, `bfloat16`. +In case of casting from complex types (`complex64`, `complex128`) to real +types, only the real part of `x` is returned. In case of casting from real +types to complex types (`complex64`, `complex128`), the imaginary part of the +returned value is set to `0`. The handling of complex types here matches the +behavior of numpy. + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor` or `IndexedSlices` of numeric type. It could +be `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`, +`int64`, `float16`, `float32`, `float64`, `complex64`, `complex128`, +`bfloat16`. +
+`dtype` + +The destination type. The list of supported dtypes is the same as +`x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` and +same type as `dtype`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `x` cannot be cast to the `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/clip_by_global_norm.md b/site/en/api_docs/python/tf/clip_by_global_norm.md new file mode 100644 index 00000000000..53807fd9c15 --- /dev/null +++ b/site/en/api_docs/python/tf/clip_by_global_norm.md @@ -0,0 +1,157 @@ +description: Clips values of multiple tensors by the ratio of the sum of their norms. + +
+ + +
+ +# tf.clip_by_global_norm + + + + + + + + + +Clips values of multiple tensors by the ratio of the sum of their norms. + + + + + + + + + +Given a tuple or list of tensors `t_list`, and a clipping ratio `clip_norm`, +this operation returns a list of clipped tensors `list_clipped` +and the global norm (`global_norm`) of all tensors in `t_list`. Optionally, +if you've already computed the global norm for `t_list`, you can specify +the global norm with `use_norm`. + +To perform the clipping, the values `t_list[i]` are set to: + + t_list[i] * clip_norm / max(global_norm, clip_norm) + +where: + + global_norm = sqrt(sum([l2norm(t)**2 for t in t_list])) + +If `clip_norm > global_norm` then the entries in `t_list` remain as they are, +otherwise they're all shrunk by the global ratio. + +If `global_norm == infinity` then the entries in `t_list` are all set to `NaN` +to signal that an error occurred. + +Any of the entries of `t_list` that are of type `None` are ignored. + +This is the correct way to perform gradient clipping (Pascanu et al., 2012). + +However, it is slower than `clip_by_norm()` because all the parameters must be +ready before the clipping operation can be performed. + + + + + + + + + + + + + + + + + + + +
+`t_list` + +A tuple or list of mixed `Tensors`, `IndexedSlices`, or None. +
+`clip_norm` + +A 0-D (scalar) `Tensor` > 0. The clipping ratio. +
+`use_norm` + +A 0-D (scalar) `Tensor` of type `float` (optional). The global +norm to use. If not provided, `global_norm()` is used to compute the norm. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + +
+`list_clipped` + +A list of `Tensors` of the same type as `t_list`. +
+`global_norm` + +A 0-D (scalar) `Tensor` representing the global norm. +
+ + + + + + + + + + + + +
+`TypeError` + +If `t_list` is not a sequence. +
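A small numeric sketch; the gradient values below are invented for illustration:

```python
import tensorflow as tf

grads = [tf.constant([3.0, 4.0]), tf.constant([6.0, 8.0])]
# global_norm = sqrt(3**2 + 4**2 + 6**2 + 8**2) = sqrt(125) ~= 11.18
clipped, global_norm = tf.clip_by_global_norm(grads, clip_norm=5.0)
print(global_norm.numpy())           # ~11.18
print([g.numpy() for g in clipped])  # every entry scaled by 5.0 / 11.18
```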
+ + + +#### References: + +On the difficulty of training Recurrent Neural Networks: + [Pascanu et al., 2012](http://proceedings.mlr.press/v28/pascanu13.html) + ([pdf](http://proceedings.mlr.press/v28/pascanu13.pdf)) diff --git a/site/en/api_docs/python/tf/clip_by_norm.md b/site/en/api_docs/python/tf/clip_by_norm.md new file mode 100644 index 00000000000..ed00f83b74a --- /dev/null +++ b/site/en/api_docs/python/tf/clip_by_norm.md @@ -0,0 +1,141 @@ +description: Clips tensor values to a maximum L2-norm. + +
+ + +
+ +# tf.clip_by_norm + + + + + + + + + +Clips tensor values to a maximum L2-norm. + + + + + + + + + +Given a tensor `t`, and a maximum clip value `clip_norm`, this operation +normalizes `t` so that its L2-norm is less than or equal to `clip_norm`, +along the dimensions given in `axes`. Specifically, in the default case +where all dimensions are used for calculation, if the L2-norm of `t` is +already less than or equal to `clip_norm`, then `t` is not modified. If +the L2-norm is greater than `clip_norm`, then this operation returns a +tensor of the same type and shape as `t` with its values set to: + +`t * clip_norm / l2norm(t)` + +In this case, the L2-norm of the output tensor is `clip_norm`. + +As another example, if `t` is a matrix and `axes == [1]`, then each row +of the output will have L2-norm less than or equal to `clip_norm`. If +`axes == [0]` instead, each column of the output will be clipped. + +This operation is typically used to clip gradients before applying them with +an optimizer. + + + + + + + + + + + + + + + + + + + +
+`t` + +A `Tensor` or `IndexedSlices`. +
+`clip_norm` + +A 0-D (scalar) `Tensor` > 0. A maximum clipping value. +
+`axes` + +A 1-D (vector) `Tensor` of type int32 containing the dimensions +to use for computing the L2-norm. If `None` (the default), uses all +dimensions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A clipped `Tensor` or `IndexedSlices`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If the clip_norm tensor is not a 0-D scalar tensor. +
+`TypeError` + +If dtype of the input is not a floating point or +complex type. +
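For example, clipping each row of a matrix to unit L2-norm (values chosen for illustration):

```python
import tensorflow as tf

t = tf.constant([[3.0, 4.0], [6.0, 8.0]])
clipped = tf.clip_by_norm(t, clip_norm=1.0, axes=[1])  # per-row norm <= 1.0
print(clipped.numpy())  # [[0.6 0.8]
                        #  [0.6 0.8]]
```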
+ diff --git a/site/en/api_docs/python/tf/clip_by_value.md b/site/en/api_docs/python/tf/clip_by_value.md new file mode 100644 index 00000000000..8cddda9cc34 --- /dev/null +++ b/site/en/api_docs/python/tf/clip_by_value.md @@ -0,0 +1,176 @@ +description: Clips tensor values to a specified min and max. + +
+ + +
+ +# tf.clip_by_value + + + + + + + + + +Clips tensor values to a specified min and max. + + + + + + + + + +Given a tensor `t`, this operation returns a tensor of the same type and +shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`. +Any values less than `clip_value_min` are set to `clip_value_min`. Any values +greater than `clip_value_max` are set to `clip_value_max`. + +Note: `clip_value_min` needs to be smaller or equal to `clip_value_max` for +correct results. + +#### For example: + + + +Basic usage passes a scalar as the min and max value. + +``` +>>> t = tf.constant([[-10., -1., 0.], [0., 2., 10.]]) +>>> t2 = tf.clip_by_value(t, clip_value_min=-1, clip_value_max=1) +>>> t2.numpy() +array([[-1., -1., 0.], + [ 0., 1., 1.]], dtype=float32) +``` + +The min and max can be the same size as `t`, or broadcastable to that size. + +``` +>>> t = tf.constant([[-1, 0., 10.], [-1, 0, 10]]) +>>> clip_min = [[2],[1]] +>>> t3 = tf.clip_by_value(t, clip_value_min=clip_min, clip_value_max=100) +>>> t3.numpy() +array([[ 2., 2., 10.], + [ 1., 1., 10.]], dtype=float32) +``` + +Broadcasting fails, intentionally, if you would expand the dimensions of `t` + +``` +>>> t = tf.constant([[-1, 0., 10.], [-1, 0, 10]]) +>>> clip_min = [[[2, 1]]] # Has a third axis +>>> t4 = tf.clip_by_value(t, clip_value_min=clip_min, clip_value_max=100) +Traceback (most recent call last): +... +InvalidArgumentError: Incompatible shapes: [2,3] vs. [1,1,2] +``` + +It throws a `TypeError` if you try to clip an `int` to a `float` value +(tf.cast the input to `float` first). + +``` +>>> t = tf.constant([[1, 2], [3, 4]], dtype=tf.int32) +>>> t5 = tf.clip_by_value(t, clip_value_min=-3.1, clip_value_max=3.1) +Traceback (most recent call last): +... +TypeError: Cannot convert ... +``` + + + + + + + + + + + + + + + + + + + + +
+`t` + +A `Tensor` or `IndexedSlices`. +
+`clip_value_min` + +The minimum value to clip to. A scalar `Tensor` or one that +is broadcastable to the shape of `t`. +
+`clip_value_max` + +The maximum value to clip to. A scalar `Tensor` or one that +is broadcastable to the shape of `t`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A clipped `Tensor` or `IndexedSlices`. +
+ + + + + + + + + + + + + + +
+tf.errors.InvalidArgumentError: If the clip tensors would trigger array +broadcasting that would make the returned tensor larger than the input. +
+`TypeError` + +If dtype of the input is `int32` and dtype of +the `clip_value_min` or `clip_value_max` is `float32`. +
+ diff --git a/site/en/api_docs/python/tf/compat.md b/site/en/api_docs/python/tf/compat.md new file mode 100644 index 00000000000..cd9506216a1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat.md @@ -0,0 +1,82 @@ +description: Compatibility functions. + +
+ + + + + + +
+ +# Module: tf.compat + + + + + + + + + +Compatibility functions. + + +The tf.compat module contains two sets of compatibility functions. + +## Tensorflow 1.x and 2.x APIs + +The compat.v1 and `compat.v2` submodules provide a complete copy of both the +v1 and `v2` APIs for backwards and forwards compatibility across TensorFlow +versions 1.x and 2.x. See the +[migration guide](https://www.tensorflow.org/guide/migrate) for details. + +## Utilities for writing compatible code + +Aside from the compat.v1 and `compat.v2` submodules, tf.compat also contains +a set of helper functions for writing code that works in both: + +* TensorFlow 1.x and 2.x +* Python 2 and 3 + + +## Type collections + +The compatibility module also provides the following aliases for common +sets of python types: + +* `bytes_or_text_types` +* `complex_types` +* `integral_types` +* `real_types` + +## Modules + +[`v1`](../tf/compat/v1.md) module: Bring in all of the public TensorFlow interface into this module. + +## Functions + +[`as_bytes(...)`](../tf/compat/as_bytes.md): Converts `bytearray`, `bytes`, or unicode python input types to `bytes`. + +[`as_str(...)`](../tf/compat/as_str.md) + +[`as_str_any(...)`](../tf/compat/as_str_any.md): Converts input to `str` type. + +[`as_text(...)`](../tf/compat/as_text.md): Converts any string-like python input types to unicode. + +[`dimension_at_index(...)`](../tf/compat/dimension_at_index.md): Compatibility utility required to allow for both V1 and V2 behavior in TF. + +[`dimension_value(...)`](../tf/compat/dimension_value.md): Compatibility utility required to allow for both V1 and V2 behavior in TF. + +[`forward_compatibility_horizon(...)`](../tf/compat/forward_compatibility_horizon.md): Context manager for testing forward compatibility of generated graphs. + +[`forward_compatible(...)`](../tf/compat/forward_compatible.md): Return true if the forward compatibility window has expired. + +[`path_to_str(...)`](../tf/compat/path_to_str.md): Converts input which is a `PathLike` object to `str` type. + +## Other Members + +* `bytes_or_text_types` +* `complex_types` +* `integral_types` +* `real_types` diff --git a/site/en/api_docs/python/tf/compat/as_bytes.md b/site/en/api_docs/python/tf/compat/as_bytes.md new file mode 100644 index 00000000000..c3a06b26d07 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/as_bytes.md @@ -0,0 +1,100 @@ +description: Converts bytearray, bytes, or unicode python input types to bytes. + +
+ + +
+ +# tf.compat.as_bytes + + + + + + + + + +Converts `bytearray`, `bytes`, or unicode python input types to `bytes`. + + + + + + + + + +Uses utf-8 encoding for text by default. + + + + + + + + + + + + + +
+`bytes_or_text` + +A `bytearray`, `bytes`, `str`, or `unicode` object. +
+`encoding` + +A string indicating the charset for encoding unicode. +
+ + + + + + + + + + + +
+A `bytes` object. +
+ + + + + + + + + + + + +
+`TypeError` + +If `bytes_or_text` is not a binary or unicode string. +
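+
+A minimal usage sketch (the sample string is illustrative), including the
+round trip back through `tf.compat.as_text`:
+
+```python
+import tensorflow as tf
+
+# Text is encoded as UTF-8 by default; `bytes` input is returned unchanged.
+b = tf.compat.as_bytes("héllo")      # b'h\xc3\xa9llo'
+s = tf.compat.as_text(b)             # 'héllo'
+print(tf.compat.as_bytes(b) == b)    # True
+```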
+ diff --git a/site/en/api_docs/python/tf/compat/as_str.md b/site/en/api_docs/python/tf/compat/as_str.md new file mode 100644 index 00000000000..9d014af84e6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/as_str.md @@ -0,0 +1,42 @@ +
+ + +
+ +# tf.compat.as_str + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/as_str_any.md b/site/en/api_docs/python/tf/compat/as_str_any.md new file mode 100644 index 00000000000..1b490eaa805 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/as_str_any.md @@ -0,0 +1,77 @@ +description: Converts input to str type. + +
+ + +
+ +# tf.compat.as_str_any + + + + + + + + + +Converts input to `str` type. + + + + + + + + + + Uses `str(value)`, except for `bytes` typed inputs, which are converted + using `as_str`. + + + + + + + + + + +
+
`value`
 
+
An object that can be converted to `str`.
+
+ + + + + + + + + + + +
+A `str` object. +
+ diff --git a/site/en/api_docs/python/tf/compat/as_text.md b/site/en/api_docs/python/tf/compat/as_text.md new file mode 100644 index 00000000000..2912e2f05fa --- /dev/null +++ b/site/en/api_docs/python/tf/compat/as_text.md @@ -0,0 +1,101 @@ +description: Converts any string-like python input types to unicode. + +
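+
+A minimal sketch of the conversion rules above (the sample values are
+illustrative):
+
+```python
+import tensorflow as tf
+
+# `bytes` input goes through `as_str`; everything else through `str(value)`.
+print(tf.compat.as_str_any(b"tensor"))  # 'tensor'
+print(tf.compat.as_str_any(3.5))        # '3.5'
+```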
+ + +
+ +# tf.compat.as_text + + + + + + + + + +Converts any string-like python input types to unicode. + + + + + + + + + +Returns the input as a unicode string. Uses utf-8 encoding for text +by default. + + + + + + + + + + + + + +
+`bytes_or_text` + +A `bytes`, `str`, or `unicode` object. +
+`encoding` + +A string indicating the charset for decoding unicode. +
+ + + + + + + + + + + +
+A `unicode` (Python 2) or `str` (Python 3) object. +
+ + + + + + + + + + + + +
+`TypeError` + +If `bytes_or_text` is not a binary or unicode string. +
+ diff --git a/site/en/api_docs/python/tf/compat/dimension_at_index.md b/site/en/api_docs/python/tf/compat/dimension_at_index.md new file mode 100644 index 00000000000..a2ec6cebc6e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/dimension_at_index.md @@ -0,0 +1,110 @@ +description: Compatibility utility required to allow for both V1 and V2 behavior in TF. + +
+ + +
+ +# tf.compat.dimension_at_index + + + + + + + + + +Compatibility utility required to allow for both V1 and V2 behavior in TF. + + + + + + + + + +Until the release of TF 2.0, we need the legacy behavior of `TensorShape` to +coexist with the new behavior. This utility is a bridge between the two. + +If you want to retrieve the Dimension instance corresponding to a certain +index in a TensorShape instance, use this utility, like this: + +``` +# If you had this in your V1 code: +dim = tensor_shape[i] + +# Use `dimension_at_index` as direct replacement compatible with both V1 & V2: +dim = dimension_at_index(tensor_shape, i) + +# Another possibility would be this, but WARNING: it only works if the +# tensor_shape instance has a defined rank. +dim = tensor_shape.dims[i] # `dims` may be None if the rank is undefined! + +# In native V2 code, we recommend instead being more explicit: +if tensor_shape.rank is None: + dim = Dimension(None) +else: + dim = tensor_shape.dims[i] + +# Being more explicit will save you from the following trap (present in V1): +# you might do in-place modifications to `dim` and expect them to be reflected +# in `tensor_shape[i]`, but they would not be (as the Dimension object was +# instantiated on the fly. +``` + + + + + + + + + + + + + +
+`shape` + +A TensorShape instance. +
+`index` + +An integer index. +
+ + + + + + + + + + + +
+A dimension object. +
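+
+A minimal runnable sketch of the pattern above, assuming a partially known
+shape (the sizes are illustrative):
+
+```python
+import tensorflow as tf
+
+shape = tf.TensorShape([None, 128])
+dim = tf.compat.dimension_at_index(shape, 1)
+# Pair with `dimension_value` to get a plain int in both V1 and V2.
+print(tf.compat.dimension_value(dim))  # 128
+```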
+ diff --git a/site/en/api_docs/python/tf/compat/dimension_value.md b/site/en/api_docs/python/tf/compat/dimension_value.md new file mode 100644 index 00000000000..d5320c093ca --- /dev/null +++ b/site/en/api_docs/python/tf/compat/dimension_value.md @@ -0,0 +1,91 @@ +description: Compatibility utility required to allow for both V1 and V2 behavior in TF. + +
+ + +
+ +# tf.compat.dimension_value + + + + + + + + + +Compatibility utility required to allow for both V1 and V2 behavior in TF. + + + + + + + + + +Until the release of TF 2.0, we need the legacy behavior of `TensorShape` to +coexist with the new behavior. This utility is a bridge between the two. + +When accessing the value of a TensorShape dimension, +use this utility, like this: + +``` +# If you had this in your V1 code: +value = tensor_shape[i].value + +# Use `dimension_value` as direct replacement compatible with both V1 & V2: +value = dimension_value(tensor_shape[i]) + +# This would be the V2 equivalent: +value = tensor_shape[i] # Warning: this will return the dim value in V2! +``` + + + + + + + + + + +
+`dimension` + +Either a `Dimension` instance, an integer, or None. +
+ + + + + + + + + + + +
+A plain value, i.e. an integer or None. +
+ diff --git a/site/en/api_docs/python/tf/compat/forward_compatibility_horizon.md b/site/en/api_docs/python/tf/compat/forward_compatibility_horizon.md new file mode 100644 index 00000000000..75f67e5d3ef --- /dev/null +++ b/site/en/api_docs/python/tf/compat/forward_compatibility_horizon.md @@ -0,0 +1,106 @@ +description: Context manager for testing forward compatibility of generated graphs. + +
+ + +
+ +# tf.compat.forward_compatibility_horizon + + + + + + + + + +Context manager for testing forward compatibility of generated graphs. + + + + + + + + + +See [Version +compatibility](https://tensorflow.org/guide/version_compat#backward_forward). + +To ensure forward compatibility of generated graphs (see `forward_compatible`) +with older binaries, new features can be gated with: + +```python +if compat.forward_compatible(year=2018, month=08, date=01): + generate_graph_with_new_features() +else: + generate_graph_so_older_binaries_can_consume_it() +``` + +However, when adding new features, one may want to unittest it before +the forward compatibility window expires. This context manager enables +such tests. For example: + +```python +from tensorflow.python.compat import compat + +def testMyNewFeature(self): + with compat.forward_compatibility_horizon(2018, 08, 02): + # Test that generate_graph_with_new_features() has an effect +``` + + + + + + + + + + + + + + + + +
+`year` + +A year (e.g., 2018). Must be an `int`. +
+`month` + +A month (1 <= month <= 12) in year. Must be an `int`. +
+
`day`
 
+
A day of the month (1 <= day <= 31, depending on the month). Must be an
`int`.
+
+ + + +#### Yields: + +Nothing. diff --git a/site/en/api_docs/python/tf/compat/forward_compatible.md b/site/en/api_docs/python/tf/compat/forward_compatible.md new file mode 100644 index 00000000000..5819d731465 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/forward_compatible.md @@ -0,0 +1,132 @@ +description: Return true if the forward compatibility window has expired. + +
+ + +
+ +# tf.compat.forward_compatible + + + + + + + + + +Return true if the forward compatibility window has expired. + + + + + + + + + +See [Version +compatibility](https://tensorflow.org/guide/version_compat#backward_forward). + +Forward-compatibility refers to scenarios where the producer of a TensorFlow +model (a GraphDef or SavedModel) is compiled against a version of the +TensorFlow library newer than what the consumer was compiled against. The +"producer" is typically a Python program that constructs and trains a model +while the "consumer" is typically another program that loads and serves the +model. + +TensorFlow has been supporting a 3 week forward-compatibility window for +programs compiled from source at HEAD. + +For example, consider the case where a new operation `MyNewAwesomeAdd` is +created with the intent of replacing the implementation of an existing Python +wrapper - tf.add. The Python wrapper implementation should change from +something like: + +```python +def add(inputs, name=None): + return gen_math_ops.add(inputs, name) +``` + +to: + +```python +from tensorflow.python.compat import compat + +def add(inputs, name=None): + if compat.forward_compatible(year, month, day): + # Can use the awesome new implementation. + return gen_math_ops.my_new_awesome_add(inputs, name) + # To maintain forward compatibility, use the old implementation. + return gen_math_ops.add(inputs, name) +``` + +Where `year`, `month`, and `day` specify the date beyond which binaries +that consume a model are expected to have been updated to include the +new operations. This date is typically at least 3 weeks beyond the date +the code that adds the new operation is committed. + + + + + + + + + + + + + + + + +
+`year` + +A year (e.g., 2018). Must be an `int`. +
+`month` + +A month (1 <= month <= 12) in year. Must be an `int`. +
+
`day`
 
+
A day of the month (1 <= day <= 31, depending on the month). Must be an
`int`.
+
+ + + + + + + + + + + +
+
True if the caller can expect that the serialized TensorFlow graphs it
produces can be consumed by programs compiled against the TensorFlow
library source code after (year, month, day).
+
+ diff --git a/site/en/api_docs/python/tf/compat/path_to_str.md b/site/en/api_docs/python/tf/compat/path_to_str.md new file mode 100644 index 00000000000..49ac964a326 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/path_to_str.md @@ -0,0 +1,104 @@ +description: Converts input which is a PathLike object to str type. + +
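+
+As a minimal sketch of gating a new code path on this window (the date is
+illustrative):
+
+```python
+import tensorflow as tf
+
+# True once consumer binaries built after 2019-06-01 are assumed updated.
+if tf.compat.forward_compatible(year=2019, month=6, day=1):
+  print("safe to emit the new op")
+else:
+  print("emit the old op for older binaries")
+```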
+ + +
+ +# tf.compat.path_to_str + + + + + + + + + +Converts input which is a `PathLike` object to `str` type. + + + + + + + + + +Converts from any python constant representation of a `PathLike` object to +a string. If the input is not a `PathLike` object, simply returns the input. + + + + + + + + + + +
+`path` + +An object that can be converted to path representation. +
+ + + + + + + + + + + +
+A `str` object. +
+ + + +#### Usage: + +In case a simplified `str` version of the path is needed from an +`os.PathLike` object + + + +#### Examples: + + +```python +$ tf.compat.path_to_str('C:\XYZ\tensorflow\./.././tensorflow') +'C:\XYZ\tensorflow\./.././tensorflow' # Windows OS +$ tf.compat.path_to_str(Path('C:\XYZ\tensorflow\./.././tensorflow')) +'C:\XYZ\tensorflow\..\tensorflow' # Windows OS +$ tf.compat.path_to_str(Path('./corpus')) +'corpus' # Linux OS +$ tf.compat.path_to_str('./.././Corpus') +'./.././Corpus' # Linux OS +$ tf.compat.path_to_str(Path('./.././Corpus')) +'../Corpus' # Linux OS +$ tf.compat.path_to_str(Path('./..////../')) +'../..' # Linux OS + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1.md b/site/en/api_docs/python/tf/compat/v1.md new file mode 100644 index 00000000000..ecde13fe349 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1.md @@ -0,0 +1,1300 @@ +description: Bring in all of the public TensorFlow interface into this module. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# Module: tf.compat.v1 + + + + + + + + + +Bring in all of the public TensorFlow interface into this module. + + + +## Modules + +[`app`](../../tf/compat/v1/app.md) module: Generic entry point script. + +[`audio`](../../tf/compat/v1/audio.md) module: Public API for tf.audio namespace. + +[`autograph`](../../tf/compat/v1/autograph.md) module: Conversion of plain Python into TensorFlow graph code. + +[`bitwise`](../../tf/compat/v1/bitwise.md) module: Operations for manipulating the binary representations of integers. + +[`compat`](../../tf/compat/v1/compat.md) module: Compatibility functions. + +[`config`](../../tf/compat/v1/config.md) module: Public API for tf.config namespace. + +[`data`](../../tf/compat/v1/data.md) module: tf.data.Dataset API for input pipelines. + +[`debugging`](../../tf/compat/v1/debugging.md) module: Public API for tf.debugging namespace. + +[`distribute`](../../tf/compat/v1/distribute.md) module: Library for running a computation across multiple devices. + +[`distributions`](../../tf/compat/v1/distributions.md) module: Core module for TensorFlow distribution objects and helpers. + +[`dtypes`](../../tf/compat/v1/dtypes.md) module: Public API for tf.dtypes namespace. + +[`errors`](../../tf/compat/v1/errors.md) module: Exception types for TensorFlow errors. + +[`estimator`](../../tf/compat/v1/estimator.md) module: Estimator: High level tools for working with models. + +[`experimental`](../../tf/compat/v1/experimental.md) module: Public API for tf.experimental namespace. + +[`feature_column`](../../tf/compat/v1/feature_column.md) module: Public API for tf.feature_column namespace. + +[`flags`](../../tf/compat/v1/flags.md) module: Import router for absl.flags. See https://github.com/abseil/abseil-py. + +[`gfile`](../../tf/compat/v1/gfile.md) module: Import router for file_io. + +[`graph_util`](../../tf/compat/v1/graph_util.md) module: Helpers to manipulate a tensor graph in python. + +[`image`](../../tf/compat/v1/image.md) module: Image ops. + +[`initializers`](../../tf/compat/v1/initializers.md) module: Public API for tf.initializers namespace. + +[`io`](../../tf/compat/v1/io.md) module: Public API for tf.io namespace. + +[`keras`](../../tf/compat/v1/keras.md) module: Implementation of the Keras API meant to be a high-level API for TensorFlow. + +[`layers`](../../tf/compat/v1/layers.md) module: Public API for tf.layers namespace. + +[`linalg`](../../tf/compat/v1/linalg.md) module: Operations for linear algebra. + +[`lite`](../../tf/compat/v1/lite.md) module: Public API for tf.lite namespace. + +[`logging`](../../tf/compat/v1/logging.md) module: Logging and Summary Operations. + +[`lookup`](../../tf/compat/v1/lookup.md) module: Public API for tf.lookup namespace. + +[`losses`](../../tf/compat/v1/losses.md) module: Loss operations for use in neural networks. + +[`manip`](../../tf/compat/v1/manip.md) module: Operators for manipulating tensors. + +[`math`](../../tf/compat/v1/math.md) module: Math Operations. + +[`metrics`](../../tf/compat/v1/metrics.md) module: Evaluation-related metrics. + +[`mixed_precision`](../../tf/compat/v1/mixed_precision.md) module: Public API for tf.mixed_precision namespace. + +[`mlir`](../../tf/compat/v1/mlir.md) module: Public API for tf.mlir namespace. + +[`nest`](../../tf/compat/v1/nest.md) module: Public API for tf.nest namespace. + +[`nn`](../../tf/compat/v1/nn.md) module: Wrappers for primitive Neural Net (NN) Operations. + +[`profiler`](../../tf/compat/v1/profiler.md) module: Public API for tf.profiler namespace. 
+ +[`python_io`](../../tf/compat/v1/python_io.md) module: Python functions for directly manipulating TFRecord-formatted files. + +[`quantization`](../../tf/compat/v1/quantization.md) module: Public API for tf.quantization namespace. + +[`queue`](../../tf/compat/v1/queue.md) module: Public API for tf.queue namespace. + +[`ragged`](../../tf/compat/v1/ragged.md) module: Ragged Tensors. + +[`random`](../../tf/compat/v1/random.md) module: Public API for tf.random namespace. + +[`raw_ops`](../../tf/compat/v1/raw_ops.md) module: Public API for tf.raw_ops namespace. + +[`resource_loader`](../../tf/compat/v1/resource_loader.md) module: Resource management library. + +[`saved_model`](../../tf/compat/v1/saved_model.md) module: Public API for tf.saved_model namespace. + +[`sets`](../../tf/compat/v1/sets.md) module: Tensorflow set operations. + +[`signal`](../../tf/compat/v1/signal.md) module: Signal processing operations. + +[`sparse`](../../tf/compat/v1/sparse.md) module: Sparse Tensor Representation. + +[`spectral`](../../tf/compat/v1/spectral.md) module: Public API for tf.spectral namespace. + +[`strings`](../../tf/compat/v1/strings.md) module: Operations for working with string Tensors. + +[`summary`](../../tf/compat/v1/summary.md) module: Operations for writing summary data, for use in analysis and visualization. + +[`sysconfig`](../../tf/compat/v1/sysconfig.md) module: System configuration library. + +[`test`](../../tf/compat/v1/test.md) module: Testing. + +[`tpu`](../../tf/compat/v1/tpu.md) module: Ops related to Tensor Processing Units. + +[`train`](../../tf/compat/v1/train.md) module: Support for training models. + +[`user_ops`](../../tf/compat/v1/user_ops.md) module: Public API for tf.user_ops namespace. + +[`version`](../../tf/compat/v1/version.md) module: Public API for tf.version namespace. + +[`xla`](../../tf/compat/v1/xla.md) module: Public API for tf.xla namespace. + +## Classes + +[`class AggregationMethod`](../../tf/AggregationMethod.md): A class listing aggregation methods used to combine gradients. + +[`class AttrValue`](../../tf/compat/v1/AttrValue.md): A ProtocolMessage + +[`class ConditionalAccumulator`](../../tf/compat/v1/ConditionalAccumulator.md): A conditional accumulator for aggregating gradients. + +[`class ConditionalAccumulatorBase`](../../tf/compat/v1/ConditionalAccumulatorBase.md): A conditional accumulator for aggregating gradients. + +[`class ConfigProto`](../../tf/compat/v1/ConfigProto.md): A ProtocolMessage + +[`class CriticalSection`](../../tf/CriticalSection.md): Critical section. + +[`class DType`](../../tf/dtypes/DType.md): Represents the type of the elements in a `Tensor`. + +[`class DeviceSpec`](../../tf/compat/v1/DeviceSpec.md): Represents a (possibly partial) specification for a TensorFlow device. + +[`class Dimension`](../../tf/compat/v1/Dimension.md): Represents the value of one dimension in a TensorShape. + +[`class Event`](../../tf/compat/v1/Event.md): A ProtocolMessage + +[`class FIFOQueue`](../../tf/queue/FIFOQueue.md): A queue implementation that dequeues elements in first-in first-out order. + +[`class FixedLenFeature`](../../tf/io/FixedLenFeature.md): Configuration for parsing a fixed-length input feature. + +[`class FixedLenSequenceFeature`](../../tf/io/FixedLenSequenceFeature.md): Configuration for parsing a variable-length input feature into a `Tensor`. + +[`class FixedLengthRecordReader`](../../tf/compat/v1/FixedLengthRecordReader.md): A Reader that outputs fixed-length records from a file. 
+ +[`class GPUOptions`](../../tf/compat/v1/GPUOptions.md): A ProtocolMessage + +[`class GradientTape`](../../tf/GradientTape.md): Record operations for automatic differentiation. + +[`class Graph`](../../tf/Graph.md): A TensorFlow computation, represented as a dataflow graph. + +[`class GraphDef`](../../tf/compat/v1/GraphDef.md): A ProtocolMessage + +[`class GraphKeys`](../../tf/compat/v1/GraphKeys.md): Standard names to use for graph collections. + +[`class GraphOptions`](../../tf/compat/v1/GraphOptions.md): A ProtocolMessage + +[`class HistogramProto`](../../tf/compat/v1/HistogramProto.md): A ProtocolMessage + +[`class IdentityReader`](../../tf/compat/v1/IdentityReader.md): A Reader that outputs the queued work as both the key and value. + +[`class IndexedSlices`](../../tf/IndexedSlices.md): A sparse representation of a set of tensor slices at given indices. + +[`class IndexedSlicesSpec`](../../tf/IndexedSlicesSpec.md): Type specification for a tf.IndexedSlices. + +[`class InteractiveSession`](../../tf/compat/v1/InteractiveSession.md): A TensorFlow `Session` for use in interactive contexts, such as a shell. + +[`class LMDBReader`](../../tf/compat/v1/LMDBReader.md): A Reader that outputs the records from a LMDB file. + +[`class LogMessage`](../../tf/compat/v1/LogMessage.md): A ProtocolMessage + +[`class MetaGraphDef`](../../tf/compat/v1/MetaGraphDef.md): A ProtocolMessage + +[`class Module`](../../tf/Module.md): Base neural network module class. + +[`class NameAttrList`](../../tf/compat/v1/NameAttrList.md): A ProtocolMessage + +[`class NodeDef`](../../tf/compat/v1/NodeDef.md): A ProtocolMessage + +[`class OpError`](../../tf/errors/OpError.md): A generic error that is raised when TensorFlow execution fails. + +[`class Operation`](../../tf/Operation.md): Represents a graph node that performs computation on tensors. + +[`class OptimizerOptions`](../../tf/compat/v1/OptimizerOptions.md): A ProtocolMessage + +[`class OptionalSpec`](../../tf/OptionalSpec.md): Represents an optional potentially containing a structured value. + +[`class PaddingFIFOQueue`](../../tf/queue/PaddingFIFOQueue.md): A FIFOQueue that supports batching variable-sized tensors by padding. + +[`class PriorityQueue`](../../tf/queue/PriorityQueue.md): A queue implementation that dequeues elements in prioritized order. + +[`class QueueBase`](../../tf/queue/QueueBase.md): Base class for queue implementations. + +[`class RaggedTensor`](../../tf/RaggedTensor.md): Represents a ragged tensor. + +[`class RaggedTensorSpec`](../../tf/RaggedTensorSpec.md): Type specification for a tf.RaggedTensor. + +[`class RandomShuffleQueue`](../../tf/queue/RandomShuffleQueue.md): A queue implementation that dequeues elements in a random order. + +[`class ReaderBase`](../../tf/compat/v1/ReaderBase.md): Base class for different Reader types, that produce a record every step. + +[`class RegisterGradient`](../../tf/RegisterGradient.md): A decorator for registering the gradient function for an op type. + +[`class RunMetadata`](../../tf/compat/v1/RunMetadata.md): A ProtocolMessage + +[`class RunOptions`](../../tf/compat/v1/RunOptions.md): A ProtocolMessage + +[`class Session`](../../tf/compat/v1/Session.md): A class for running TensorFlow operations. + +[`class SessionLog`](../../tf/compat/v1/SessionLog.md): A ProtocolMessage + +[`class SparseConditionalAccumulator`](../../tf/compat/v1/SparseConditionalAccumulator.md): A conditional accumulator for aggregating sparse gradients. 
+ +[`class SparseFeature`](../../tf/io/SparseFeature.md): Configuration for parsing a sparse input feature from an `Example`. + +[`class SparseTensor`](../../tf/sparse/SparseTensor.md): Represents a sparse tensor. + +[`class SparseTensorSpec`](../../tf/SparseTensorSpec.md): Type specification for a tf.SparseTensor. + +[`class SparseTensorValue`](../../tf/compat/v1/SparseTensorValue.md): SparseTensorValue(indices, values, dense_shape) + +[`class Summary`](../../tf/compat/v1/Summary.md): A ProtocolMessage + +[`class SummaryMetadata`](../../tf/compat/v1/SummaryMetadata.md): A ProtocolMessage + +[`class TFRecordReader`](../../tf/compat/v1/TFRecordReader.md): A Reader that outputs the records from a TFRecords file. + +[`class Tensor`](../../tf/Tensor.md): A tensor represents a rectangular array of data. + +[`class TensorArray`](../../tf/TensorArray.md): Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays. + +[`class TensorArraySpec`](../../tf/TensorArraySpec.md): Type specification for a tf.TensorArray. + +[`class TensorInfo`](../../tf/compat/v1/TensorInfo.md): A ProtocolMessage + +[`class TensorShape`](../../tf/TensorShape.md): Represents the shape of a `Tensor`. + +[`class TensorSpec`](../../tf/TensorSpec.md): Describes a tf.Tensor. + +[`class TextLineReader`](../../tf/compat/v1/TextLineReader.md): A Reader that outputs the lines of a file delimited by newlines. + +[`class TypeSpec`](../../tf/TypeSpec.md): Specifies a TensorFlow value type. + +[`class UnconnectedGradients`](../../tf/UnconnectedGradients.md): Controls how gradient computation behaves when y does not depend on x. + +[`class VarLenFeature`](../../tf/io/VarLenFeature.md): Configuration for parsing a variable-length input feature. + +[`class Variable`](../../tf/compat/v1/Variable.md): See the [Variables Guide](https://tensorflow.org/guide/variables). + +[`class VariableAggregation`](../../tf/compat/v1/VariableAggregation.md): Indicates how a distributed variable will be aggregated. + +[`class VariableScope`](../../tf/compat/v1/VariableScope.md): Variable scope object to carry defaults to provide to `get_variable`. + +[`class VariableSynchronization`](../../tf/VariableSynchronization.md): Indicates when a distributed variable will be synced. + +[`class WholeFileReader`](../../tf/compat/v1/WholeFileReader.md): A Reader that outputs the entire contents of a file as a value. + +[`class constant_initializer`](../../tf/compat/v1/keras/initializers/Constant.md): Initializer that generates tensors with constant values. + +[`class glorot_normal_initializer`](../../tf/compat/v1/keras/initializers/glorot_normal.md): The Glorot normal initializer, also called Xavier normal initializer. + +[`class glorot_uniform_initializer`](../../tf/compat/v1/keras/initializers/glorot_uniform.md): The Glorot uniform initializer, also called Xavier uniform initializer. + +[`class name_scope`](../../tf/compat/v1/keras/backend/name_scope.md): A context manager for use when defining a Python op. + +[`class ones_initializer`](../../tf/compat/v1/keras/initializers/Ones.md): Initializer that generates tensors initialized to 1. + +[`class orthogonal_initializer`](../../tf/compat/v1/keras/initializers/Orthogonal.md): Initializer that generates an orthogonal matrix. + +[`class random_normal_initializer`](../../tf/compat/v1/random_normal_initializer.md): Initializer that generates tensors with a normal distribution. 
+ +[`class random_uniform_initializer`](../../tf/compat/v1/random_uniform_initializer.md): Initializer that generates tensors with a uniform distribution. + +[`class truncated_normal_initializer`](../../tf/compat/v1/truncated_normal_initializer.md): Initializer that generates a truncated normal distribution. + +[`class uniform_unit_scaling_initializer`](../../tf/compat/v1/uniform_unit_scaling_initializer.md): Initializer that generates tensors without scaling variance. + +[`class variable_scope`](../../tf/compat/v1/variable_scope.md): A context manager for defining ops that creates variables (layers). + +[`class variance_scaling_initializer`](../../tf/compat/v1/keras/initializers/VarianceScaling.md): Initializer capable of adapting its scale to the shape of weights tensors. + +[`class zeros_initializer`](../../tf/compat/v1/keras/initializers/Zeros.md): Initializer that generates tensors initialized to 0. + +## Functions + +[`Assert(...)`](../../tf/debugging/Assert.md): Asserts that the given condition is true. + +[`NoGradient(...)`](../../tf/no_gradient.md): Specifies that ops of type `op_type` is not differentiable. + +[`NotDifferentiable(...)`](../../tf/no_gradient.md): Specifies that ops of type `op_type` is not differentiable. + +[`Print(...)`](../../tf/compat/v1/Print.md): Prints a list of tensors. (deprecated) + +[`abs(...)`](../../tf/math/abs.md): Computes the absolute value of a tensor. + +[`accumulate_n(...)`](../../tf/math/accumulate_n.md): Returns the element-wise sum of a list of tensors. + +[`acos(...)`](../../tf/math/acos.md): Computes acos of x element-wise. + +[`acosh(...)`](../../tf/math/acosh.md): Computes inverse hyperbolic cosine of x element-wise. + +[`add(...)`](../../tf/math/add.md): Returns x + y element-wise. + +[`add_check_numerics_ops(...)`](../../tf/compat/v1/add_check_numerics_ops.md): Connect a tf.debugging.check_numerics to every floating point tensor. + +[`add_n(...)`](../../tf/math/add_n.md): Adds all input tensors element-wise. + +[`add_to_collection(...)`](../../tf/compat/v1/add_to_collection.md): Wrapper for `Graph.add_to_collection()` using the default graph. + +[`add_to_collections(...)`](../../tf/compat/v1/add_to_collections.md): Wrapper for `Graph.add_to_collections()` using the default graph. + +[`all_variables(...)`](../../tf/compat/v1/all_variables.md): Use tf.compat.v1.global_variables instead. (deprecated) + +[`angle(...)`](../../tf/math/angle.md): Returns the element-wise argument of a complex (or real) tensor. + +[`arg_max(...)`](../../tf/compat/v1/arg_max.md): Returns the index with the largest value across dimensions of a tensor. + +[`arg_min(...)`](../../tf/compat/v1/arg_min.md): Returns the index with the smallest value across dimensions of a tensor. + +[`argmax(...)`](../../tf/compat/v1/argmax.md): Returns the index with the largest value across axes of a tensor. (deprecated arguments) + +[`argmin(...)`](../../tf/compat/v1/argmin.md): Returns the index with the smallest value across axes of a tensor. (deprecated arguments) + +[`argsort(...)`](../../tf/argsort.md): Returns the indices of a tensor that give its sorted order along an axis. + +[`as_dtype(...)`](../../tf/dtypes/as_dtype.md): Converts the given `type_value` to a `DType`. + +[`as_string(...)`](../../tf/strings/as_string.md): Converts each entry in the given tensor to strings. + +[`asin(...)`](../../tf/math/asin.md): Computes the trignometric inverse sine of x element-wise. + +[`asinh(...)`](../../tf/math/asinh.md): Computes inverse hyperbolic sine of x element-wise. 
+ +[`assert_equal(...)`](../../tf/compat/v1/assert_equal.md): Assert the condition `x == y` holds element-wise. + +[`assert_greater(...)`](../../tf/compat/v1/assert_greater.md): Assert the condition `x > y` holds element-wise. + +[`assert_greater_equal(...)`](../../tf/compat/v1/assert_greater_equal.md): Assert the condition `x >= y` holds element-wise. + +[`assert_integer(...)`](../../tf/compat/v1/assert_integer.md): Assert that `x` is of integer dtype. + +[`assert_less(...)`](../../tf/compat/v1/assert_less.md): Assert the condition `x < y` holds element-wise. + +[`assert_less_equal(...)`](../../tf/compat/v1/assert_less_equal.md): Assert the condition `x <= y` holds element-wise. + +[`assert_near(...)`](../../tf/compat/v1/assert_near.md): Assert the condition `x` and `y` are close element-wise. + +[`assert_negative(...)`](../../tf/compat/v1/assert_negative.md): Assert the condition `x < 0` holds element-wise. + +[`assert_non_negative(...)`](../../tf/compat/v1/assert_non_negative.md): Assert the condition `x >= 0` holds element-wise. + +[`assert_non_positive(...)`](../../tf/compat/v1/assert_non_positive.md): Assert the condition `x <= 0` holds element-wise. + +[`assert_none_equal(...)`](../../tf/compat/v1/assert_none_equal.md): Assert the condition `x != y` holds element-wise. + +[`assert_positive(...)`](../../tf/compat/v1/assert_positive.md): Assert the condition `x > 0` holds element-wise. + +[`assert_proper_iterable(...)`](../../tf/debugging/assert_proper_iterable.md): Static assert that values is a "proper" iterable. + +[`assert_rank(...)`](../../tf/compat/v1/assert_rank.md): Assert `x` has rank equal to `rank`. + +[`assert_rank_at_least(...)`](../../tf/compat/v1/assert_rank_at_least.md): Assert `x` has rank equal to `rank` or higher. + +[`assert_rank_in(...)`](../../tf/compat/v1/assert_rank_in.md): Assert `x` has rank in `ranks`. + +[`assert_same_float_dtype(...)`](../../tf/debugging/assert_same_float_dtype.md): Validate and return float type based on `tensors` and `dtype`. + +[`assert_scalar(...)`](../../tf/compat/v1/assert_scalar.md): Asserts that the given `tensor` is a scalar (i.e. zero-dimensional). + +[`assert_type(...)`](../../tf/compat/v1/assert_type.md): Statically asserts that the given `Tensor` is of the specified type. + +[`assert_variables_initialized(...)`](../../tf/compat/v1/assert_variables_initialized.md): Returns an Op to check if variables are initialized. + +[`assign(...)`](../../tf/compat/v1/assign.md): Update `ref` by assigning `value` to it. + +[`assign_add(...)`](../../tf/compat/v1/assign_add.md): Update `ref` by adding `value` to it. + +[`assign_sub(...)`](../../tf/compat/v1/assign_sub.md): Update `ref` by subtracting `value` from it. + +[`atan(...)`](../../tf/math/atan.md): Computes the trignometric inverse tangent of x element-wise. + +[`atan2(...)`](../../tf/math/atan2.md): Computes arctangent of `y/x` element-wise, respecting signs of the arguments. + +[`atanh(...)`](../../tf/math/atanh.md): Computes inverse hyperbolic tangent of x element-wise. + +[`batch_gather(...)`](../../tf/compat/v1/batch_gather.md): Gather slices from params according to indices with leading batch dims. (deprecated) + +[`batch_scatter_update(...)`](../../tf/compat/v1/batch_scatter_update.md): Generalization of tf.compat.v1.scatter_update to axis different than 0. (deprecated) + +[`batch_to_space(...)`](../../tf/compat/v1/batch_to_space.md): BatchToSpace for 4-D tensors of type T. 
+ +[`batch_to_space_nd(...)`](../../tf/compat/v1/batch_to_space_nd.md): BatchToSpace for N-D tensors of type T. + +[`betainc(...)`](../../tf/math/betainc.md): Compute the regularized incomplete beta integral \\(I_x(a, b)\\). + +[`bincount(...)`](../../tf/compat/v1/bincount.md): Counts the number of occurrences of each value in an integer array. + +[`bitcast(...)`](../../tf/bitcast.md): Bitcasts a tensor from one type to another without copying data. + +[`boolean_mask(...)`](../../tf/compat/v1/boolean_mask.md): Apply boolean mask to tensor. + +[`broadcast_dynamic_shape(...)`](../../tf/broadcast_dynamic_shape.md): Computes the shape of a broadcast given symbolic shapes. + +[`broadcast_static_shape(...)`](../../tf/broadcast_static_shape.md): Computes the shape of a broadcast given known shapes. + +[`broadcast_to(...)`](../../tf/broadcast_to.md): Broadcast an array for a compatible shape. + +[`case(...)`](../../tf/compat/v1/case.md): Create a case operation. + +[`cast(...)`](../../tf/cast.md): Casts a tensor to a new type. + +[`ceil(...)`](../../tf/math/ceil.md): Return the ceiling of the input, element-wise. + +[`check_numerics(...)`](../../tf/debugging/check_numerics.md): Checks a tensor for NaN and Inf values. + +[`cholesky(...)`](../../tf/linalg/cholesky.md): Computes the Cholesky decomposition of one or more square matrices. + +[`cholesky_solve(...)`](../../tf/linalg/cholesky_solve.md): Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations. + +[`clip_by_average_norm(...)`](../../tf/compat/v1/clip_by_average_norm.md): Clips tensor values to a maximum average L2-norm. (deprecated) + +[`clip_by_global_norm(...)`](../../tf/clip_by_global_norm.md): Clips values of multiple tensors by the ratio of the sum of their norms. + +[`clip_by_norm(...)`](../../tf/clip_by_norm.md): Clips tensor values to a maximum L2-norm. + +[`clip_by_value(...)`](../../tf/clip_by_value.md): Clips tensor values to a specified min and max. + +[`colocate_with(...)`](../../tf/compat/v1/colocate_with.md): DEPRECATED FUNCTION + +[`complex(...)`](../../tf/dtypes/complex.md): Converts two real numbers to a complex number. + +[`concat(...)`](../../tf/concat.md): Concatenates tensors along one dimension. + +[`cond(...)`](../../tf/compat/v1/cond.md): Return `true_fn()` if the predicate `pred` is true else `false_fn()`. (deprecated arguments) + +[`confusion_matrix(...)`](../../tf/compat/v1/confusion_matrix.md): Computes the confusion matrix from predictions and labels. + +[`conj(...)`](../../tf/math/conj.md): Returns the complex conjugate of a complex number. + +[`constant(...)`](../../tf/compat/v1/constant.md): Creates a constant tensor. + +[`container(...)`](../../tf/compat/v1/container.md): Wrapper for `Graph.container()` using the default graph. + +[`control_dependencies(...)`](../../tf/control_dependencies.md): Wrapper for `Graph.control_dependencies()` using the default graph. + +[`control_flow_v2_enabled(...)`](../../tf/compat/v1/control_flow_v2_enabled.md): Returns `True` if v2 control flow is enabled. + +[`convert_to_tensor(...)`](../../tf/compat/v1/convert_to_tensor.md): Converts the given `value` to a `Tensor`. + +[`convert_to_tensor_or_indexed_slices(...)`](../../tf/compat/v1/convert_to_tensor_or_indexed_slices.md): Converts the given object to a `Tensor` or an `IndexedSlices`. + +[`convert_to_tensor_or_sparse_tensor(...)`](../../tf/compat/v1/convert_to_tensor_or_sparse_tensor.md): Converts value to a `SparseTensor` or `Tensor`. + +[`cos(...)`](../../tf/math/cos.md): Computes cos of x element-wise. 
+ +[`cosh(...)`](../../tf/math/cosh.md): Computes hyperbolic cosine of x element-wise. + +[`count_nonzero(...)`](../../tf/compat/v1/count_nonzero.md): Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) (deprecated arguments) + +[`count_up_to(...)`](../../tf/compat/v1/count_up_to.md): Increments 'ref' until it reaches 'limit'. (deprecated) + +[`create_partitioned_variables(...)`](../../tf/compat/v1/create_partitioned_variables.md): Create a list of partitioned variables according to the given `slicing`. (deprecated) + +[`cross(...)`](../../tf/linalg/cross.md): Compute the pairwise cross product. + +[`cumprod(...)`](../../tf/math/cumprod.md): Compute the cumulative product of the tensor `x` along `axis`. + +[`cumsum(...)`](../../tf/math/cumsum.md): Compute the cumulative sum of the tensor `x` along `axis`. + +[`custom_gradient(...)`](../../tf/custom_gradient.md): Decorator to define a function with a custom gradient. + +[`decode_base64(...)`](../../tf/io/decode_base64.md): Decode web-safe base64-encoded strings. + +[`decode_compressed(...)`](../../tf/io/decode_compressed.md): Decompress strings. + +[`decode_csv(...)`](../../tf/compat/v1/decode_csv.md): Convert CSV records to tensors. Each column maps to one tensor. + +[`decode_json_example(...)`](../../tf/io/decode_json_example.md): Convert JSON-encoded Example records to binary protocol buffer strings. + +[`decode_raw(...)`](../../tf/compat/v1/decode_raw.md): Convert raw byte strings into tensors. (deprecated arguments) + +[`delete_session_tensor(...)`](../../tf/compat/v1/delete_session_tensor.md): Delete the tensor for the given tensor handle. + +[`depth_to_space(...)`](../../tf/compat/v1/depth_to_space.md): DepthToSpace for tensors of type T. + +[`dequantize(...)`](../../tf/quantization/dequantize.md): Dequantize the 'input' tensor into a float or bfloat16 Tensor. + +[`deserialize_many_sparse(...)`](../../tf/io/deserialize_many_sparse.md): Deserialize and concatenate `SparseTensors` from a serialized minibatch. + +[`device(...)`](../../tf/compat/v1/device.md): Wrapper for `Graph.device()` using the default graph. + +[`diag(...)`](../../tf/linalg/tensor_diag.md): Returns a diagonal tensor with a given diagonal values. + +[`diag_part(...)`](../../tf/linalg/tensor_diag_part.md): Returns the diagonal part of the tensor. + +[`digamma(...)`](../../tf/math/digamma.md): Computes Psi, the derivative of Lgamma (the log of the absolute value of + +[`dimension_at_index(...)`](../../tf/compat/dimension_at_index.md): Compatibility utility required to allow for both V1 and V2 behavior in TF. + +[`dimension_value(...)`](../../tf/compat/dimension_value.md): Compatibility utility required to allow for both V1 and V2 behavior in TF. + +[`disable_control_flow_v2(...)`](../../tf/compat/v1/disable_control_flow_v2.md): Opts out of control flow v2. + +[`disable_eager_execution(...)`](../../tf/compat/v1/disable_eager_execution.md): Disables eager execution. + +[`disable_resource_variables(...)`](../../tf/compat/v1/disable_resource_variables.md): Opts out of resource variables. (deprecated) + +[`disable_tensor_equality(...)`](../../tf/compat/v1/disable_tensor_equality.md): Compare Tensors by their id and be hashable. + +[`disable_v2_behavior(...)`](../../tf/compat/v1/disable_v2_behavior.md): Disables TensorFlow 2.x behaviors. + +[`disable_v2_tensorshape(...)`](../../tf/compat/v1/disable_v2_tensorshape.md): Disables the V2 TensorShape behavior and reverts to V1 behavior. 
+ +[`div(...)`](../../tf/RaggedTensor.md#__div__): Divides x / y elementwise (using Python 2 division operator semantics). (deprecated) + +[`div_no_nan(...)`](../../tf/math/divide_no_nan.md): Computes a safe divide which returns 0 if the y is zero. + +[`divide(...)`](../../tf/math/divide.md): Computes Python style division of `x` by `y`. + +[`dynamic_partition(...)`](../../tf/dynamic_partition.md): Partitions `data` into `num_partitions` tensors using indices from `partitions`. + +[`dynamic_stitch(...)`](../../tf/dynamic_stitch.md): Interleave the values from the `data` tensors into a single tensor. + +[`edit_distance(...)`](../../tf/edit_distance.md): Computes the Levenshtein distance between sequences. + +[`einsum(...)`](../../tf/einsum.md): Tensor contraction over specified indices and outer product. + +[`enable_control_flow_v2(...)`](../../tf/compat/v1/enable_control_flow_v2.md): Use control flow v2. + +[`enable_eager_execution(...)`](../../tf/compat/v1/enable_eager_execution.md): Enables eager execution for the lifetime of this program. + +[`enable_resource_variables(...)`](../../tf/compat/v1/enable_resource_variables.md): Creates resource variables by default. + +[`enable_tensor_equality(...)`](../../tf/compat/v1/enable_tensor_equality.md): Compare Tensors with element-wise comparison and thus be unhashable. + +[`enable_v2_behavior(...)`](../../tf/compat/v1/enable_v2_behavior.md): Enables TensorFlow 2.x behaviors. + +[`enable_v2_tensorshape(...)`](../../tf/compat/v1/enable_v2_tensorshape.md): In TensorFlow 2.0, iterating over a TensorShape instance returns values. + +[`encode_base64(...)`](../../tf/io/encode_base64.md): Encode strings into web-safe base64 format. + +[`ensure_shape(...)`](../../tf/ensure_shape.md): Updates the shape of a tensor and checks at runtime that the shape holds. + +[`equal(...)`](../../tf/math/equal.md): Returns the truth value of (x == y) element-wise. + +[`erf(...)`](../../tf/math/erf.md): Computes the Gauss error function of `x` element-wise. + +[`erfc(...)`](../../tf/math/erfc.md): Computes the complementary error function of `x` element-wise. + +[`executing_eagerly(...)`](../../tf/compat/v1/executing_eagerly.md): Checks whether the current thread has eager execution enabled. + +[`executing_eagerly_outside_functions(...)`](../../tf/compat/v1/executing_eagerly_outside_functions.md): Returns True if executing eagerly, even if inside a graph function. + +[`exp(...)`](../../tf/math/exp.md): Computes exponential of x element-wise. \\(y = e^x\\). + +[`expand_dims(...)`](../../tf/compat/v1/expand_dims.md): Inserts a dimension of 1 into a tensor's shape. (deprecated arguments) + +[`expm1(...)`](../../tf/math/expm1.md): Computes `exp(x) - 1` element-wise. + +[`extract_image_patches(...)`](../../tf/compat/v1/extract_image_patches.md): Extract `patches` from `images` and put them in the "depth" output dimension. + +[`extract_volume_patches(...)`](../../tf/extract_volume_patches.md): Extract `patches` from `input` and put them in the "depth" output dimension. 3D extension of `extract_image_patches`. + +[`eye(...)`](../../tf/eye.md): Construct an identity matrix, or a batch of matrices. + +[`fake_quant_with_min_max_args(...)`](../../tf/quantization/fake_quant_with_min_max_args.md): Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. + +[`fake_quant_with_min_max_args_gradient(...)`](../../tf/quantization/fake_quant_with_min_max_args_gradient.md): Compute gradients for a FakeQuantWithMinMaxArgs operation. 
+ +[`fake_quant_with_min_max_vars(...)`](../../tf/quantization/fake_quant_with_min_max_vars.md): Fake-quantize the 'inputs' tensor of type float via global float scalars `min` + +[`fake_quant_with_min_max_vars_gradient(...)`](../../tf/quantization/fake_quant_with_min_max_vars_gradient.md): Compute gradients for a FakeQuantWithMinMaxVars operation. + +[`fake_quant_with_min_max_vars_per_channel(...)`](../../tf/quantization/fake_quant_with_min_max_vars_per_channel.md): Fake-quantize the 'inputs' tensor of type float and one of the shapes: `[d]`, + +[`fake_quant_with_min_max_vars_per_channel_gradient(...)`](../../tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient.md): Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. + +[`fft(...)`](../../tf/signal/fft.md): Fast Fourier transform. + +[`fft2d(...)`](../../tf/signal/fft2d.md): 2D fast Fourier transform. + +[`fft3d(...)`](../../tf/signal/fft3d.md): 3D fast Fourier transform. + +[`fill(...)`](../../tf/fill.md): Creates a tensor filled with a scalar value. + +[`fingerprint(...)`](../../tf/fingerprint.md): Generates fingerprint values. + +[`fixed_size_partitioner(...)`](../../tf/compat/v1/fixed_size_partitioner.md): Partitioner to specify a fixed number of shards along given axis. + +[`floor(...)`](../../tf/math/floor.md): Returns element-wise largest integer not greater than x. + +[`floor_div(...)`](../../tf/compat/v1/floor_div.md): Returns x // y element-wise. + +[`floordiv(...)`](../../tf/math/floordiv.md): Divides `x / y` elementwise, rounding toward the most negative integer. + +[`floormod(...)`](../../tf/math/floormod.md): Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + +[`foldl(...)`](../../tf/compat/v1/foldl.md): foldl on the list of tensors unpacked from `elems` on dimension 0. + +[`foldr(...)`](../../tf/compat/v1/foldr.md): foldr on the list of tensors unpacked from `elems` on dimension 0. + +[`function(...)`](../../tf/function.md): Compiles a function into a callable TensorFlow graph. + +[`gather(...)`](../../tf/compat/v1/gather.md): Gather slices from params axis `axis` according to indices. + +[`gather_nd(...)`](../../tf/compat/v1/gather_nd.md): Gather slices from `params` into a Tensor with shape specified by `indices`. + +[`get_collection(...)`](../../tf/compat/v1/get_collection.md): Wrapper for `Graph.get_collection()` using the default graph. + +[`get_collection_ref(...)`](../../tf/compat/v1/get_collection_ref.md): Wrapper for `Graph.get_collection_ref()` using the default graph. + +[`get_default_graph(...)`](../../tf/compat/v1/get_default_graph.md): Returns the default graph for the current thread. + +[`get_default_session(...)`](../../tf/compat/v1/get_default_session.md): Returns the default session for the current thread. + +[`get_local_variable(...)`](../../tf/compat/v1/get_local_variable.md): Gets an existing *local* variable or creates a new one. + +[`get_logger(...)`](../../tf/get_logger.md): Return TF logger instance. + +[`get_seed(...)`](../../tf/compat/v1/get_seed.md): Returns the local seeds an operation should use given an op-specific seed. + +[`get_session_handle(...)`](../../tf/compat/v1/get_session_handle.md): Return the handle of `data`. + +[`get_session_tensor(...)`](../../tf/compat/v1/get_session_tensor.md): Get the tensor of type `dtype` by feeding a tensor handle. + +[`get_static_value(...)`](../../tf/get_static_value.md): Returns the constant value of the given tensor, if efficiently calculable. 
+ +[`get_variable(...)`](../../tf/compat/v1/get_variable.md): Gets an existing variable with these parameters or create a new one. + +[`get_variable_scope(...)`](../../tf/compat/v1/get_variable_scope.md): Returns the current variable scope. + +[`global_norm(...)`](../../tf/linalg/global_norm.md): Computes the global norm of multiple tensors. + +[`global_variables(...)`](../../tf/compat/v1/global_variables.md): Returns global variables. + +[`global_variables_initializer(...)`](../../tf/compat/v1/global_variables_initializer.md): Returns an Op that initializes global variables. + +[`grad_pass_through(...)`](../../tf/grad_pass_through.md): Creates a grad-pass-through op with the forward behavior provided in f. + +[`gradients(...)`](../../tf/compat/v1/gradients.md): Constructs symbolic derivatives of sum of `ys` w.r.t. x in `xs`. + +[`greater(...)`](../../tf/math/greater.md): Returns the truth value of (x > y) element-wise. + +[`greater_equal(...)`](../../tf/math/greater_equal.md): Returns the truth value of (x >= y) element-wise. + +[`group(...)`](../../tf/group.md): Create an op that groups multiple operations. + +[`guarantee_const(...)`](../../tf/guarantee_const.md): Gives a guarantee to the TF runtime that the input tensor is a constant. + +[`hessians(...)`](../../tf/compat/v1/hessians.md): Constructs the Hessian of sum of `ys` with respect to `x` in `xs`. + +[`histogram_fixed_width(...)`](../../tf/histogram_fixed_width.md): Return histogram of values. + +[`histogram_fixed_width_bins(...)`](../../tf/histogram_fixed_width_bins.md): Bins the given values for use in a histogram. + +[`identity(...)`](../../tf/identity.md): Return a Tensor with the same shape and contents as input. + +[`identity_n(...)`](../../tf/identity_n.md): Returns a list of tensors with the same shapes and contents as the input + +[`ifft(...)`](../../tf/signal/ifft.md): Inverse fast Fourier transform. + +[`ifft2d(...)`](../../tf/signal/ifft2d.md): Inverse 2D fast Fourier transform. + +[`ifft3d(...)`](../../tf/signal/ifft3d.md): Inverse 3D fast Fourier transform. + +[`igamma(...)`](../../tf/math/igamma.md): Compute the lower regularized incomplete Gamma function `P(a, x)`. + +[`igammac(...)`](../../tf/math/igammac.md): Compute the upper regularized incomplete Gamma function `Q(a, x)`. + +[`imag(...)`](../../tf/math/imag.md): Returns the imaginary part of a complex (or real) tensor. + +[`import_graph_def(...)`](../../tf/graph_util/import_graph_def.md): Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments) + +[`init_scope(...)`](../../tf/init_scope.md): A context manager that lifts ops out of control-flow scopes and function-building graphs. + +[`initialize_all_tables(...)`](../../tf/compat/v1/initialize_all_tables.md): Returns an Op that initializes all tables of the default graph. (deprecated) + +[`initialize_all_variables(...)`](../../tf/compat/v1/initialize_all_variables.md): See tf.compat.v1.global_variables_initializer. (deprecated) + +[`initialize_local_variables(...)`](../../tf/compat/v1/initialize_local_variables.md): See tf.compat.v1.local_variables_initializer. (deprecated) + +[`initialize_variables(...)`](../../tf/compat/v1/initialize_variables.md): See tf.compat.v1.variables_initializer. (deprecated) + +[`invert_permutation(...)`](../../tf/math/invert_permutation.md): Computes the inverse permutation of a tensor. + +[`is_finite(...)`](../../tf/math/is_finite.md): Returns which elements of x are finite. 
+ +[`is_inf(...)`](../../tf/math/is_inf.md): Returns which elements of x are Inf. + +[`is_nan(...)`](../../tf/math/is_nan.md): Returns which elements of x are NaN. + +[`is_non_decreasing(...)`](../../tf/math/is_non_decreasing.md): Returns `True` if `x` is non-decreasing. + +[`is_numeric_tensor(...)`](../../tf/debugging/is_numeric_tensor.md): Returns `True` if the elements of `tensor` are numbers. + +[`is_strictly_increasing(...)`](../../tf/math/is_strictly_increasing.md): Returns `True` if `x` is strictly increasing. + +[`is_tensor(...)`](../../tf/is_tensor.md): Checks whether `x` is a tensor or "tensor-like". + +[`is_variable_initialized(...)`](../../tf/compat/v1/is_variable_initialized.md): Tests if a variable has been initialized. + +[`lbeta(...)`](../../tf/math/lbeta.md): Computes \\(ln(|Beta(x)|)\\), reducing along the last dimension. + +[`less(...)`](../../tf/math/less.md): Returns the truth value of (x < y) element-wise. + +[`less_equal(...)`](../../tf/math/less_equal.md): Returns the truth value of (x <= y) element-wise. + +[`lgamma(...)`](../../tf/math/lgamma.md): Computes the log of the absolute value of `Gamma(x)` element-wise. + +[`lin_space(...)`](../../tf/linspace.md): Generates values in an interval. + +[`linspace(...)`](../../tf/linspace.md): Generates values in an interval. + +[`load_file_system_library(...)`](../../tf/compat/v1/load_file_system_library.md): Loads a TensorFlow plugin, containing file system implementation. (deprecated) + +[`load_library(...)`](../../tf/load_library.md): Loads a TensorFlow plugin. + +[`load_op_library(...)`](../../tf/load_op_library.md): Loads a TensorFlow plugin, containing custom ops and kernels. + +[`local_variables(...)`](../../tf/compat/v1/local_variables.md): Returns local variables. + +[`local_variables_initializer(...)`](../../tf/compat/v1/local_variables_initializer.md): Returns an Op that initializes all local variables. + +[`log(...)`](../../tf/math/log.md): Computes natural logarithm of x element-wise. + +[`log1p(...)`](../../tf/math/log1p.md): Computes natural logarithm of (1 + x) element-wise. + +[`log_sigmoid(...)`](../../tf/math/log_sigmoid.md): Computes log sigmoid of `x` element-wise. + +[`logical_and(...)`](../../tf/math/logical_and.md): Logical AND function. + +[`logical_not(...)`](../../tf/math/logical_not.md): Returns the truth value of `NOT x` element-wise. + +[`logical_or(...)`](../../tf/math/logical_or.md): Returns the truth value of x OR y element-wise. + +[`logical_xor(...)`](../../tf/math/logical_xor.md): Logical XOR function. + +[`make_ndarray(...)`](../../tf/make_ndarray.md): Create a numpy ndarray from a tensor. + +[`make_template(...)`](../../tf/compat/v1/make_template.md): Given an arbitrary function, wrap it so that it does variable sharing. + +[`make_tensor_proto(...)`](../../tf/make_tensor_proto.md): Create a TensorProto. + +[`map_fn(...)`](../../tf/compat/v1/map_fn.md): map on the list of tensors unpacked from `elems` on dimension 0. + +[`matching_files(...)`](../../tf/io/matching_files.md): Returns the set of files matching one or more glob patterns. + +[`matmul(...)`](../../tf/linalg/matmul.md): Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +[`matrix_band_part(...)`](../../tf/linalg/band_part.md): Copy a tensor setting everything outside a central band in each innermost matrix + +[`matrix_determinant(...)`](../../tf/linalg/det.md): Computes the determinant of one or more square matrices. 
+ +[`matrix_diag(...)`](../../tf/linalg/diag.md): Returns a batched diagonal tensor with given batched diagonal values. + +[`matrix_diag_part(...)`](../../tf/linalg/diag_part.md): Returns the batched diagonal part of a batched tensor. + +[`matrix_inverse(...)`](../../tf/linalg/inv.md): Computes the inverse of one or more square invertible matrices or their + +[`matrix_set_diag(...)`](../../tf/linalg/set_diag.md): Returns a batched matrix tensor with new batched diagonal values. + +[`matrix_solve(...)`](../../tf/linalg/solve.md): Solves systems of linear equations. + +[`matrix_solve_ls(...)`](../../tf/linalg/lstsq.md): Solves one or more linear least-squares problems. + +[`matrix_square_root(...)`](../../tf/linalg/sqrtm.md): Computes the matrix square root of one or more square matrices: + +[`matrix_transpose(...)`](../../tf/linalg/matrix_transpose.md): Transposes last two dimensions of tensor `a`. + +[`matrix_triangular_solve(...)`](../../tf/linalg/triangular_solve.md): Solve systems of linear equations with upper or lower triangular matrices. + +[`maximum(...)`](../../tf/math/maximum.md): Returns the max of x and y (i.e. x > y ? x : y) element-wise. + +[`meshgrid(...)`](../../tf/meshgrid.md): Broadcasts parameters for evaluation on an N-D grid. + +[`min_max_variable_partitioner(...)`](../../tf/compat/v1/min_max_variable_partitioner.md): Partitioner to allocate minimum size per slice. + +[`minimum(...)`](../../tf/math/minimum.md): Returns the min of x and y (i.e. x < y ? x : y) element-wise. + +[`mod(...)`](../../tf/math/floormod.md): Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + +[`model_variables(...)`](../../tf/compat/v1/model_variables.md): Returns all variables in the MODEL_VARIABLES collection. + +[`moving_average_variables(...)`](../../tf/compat/v1/moving_average_variables.md): Returns all variables that maintain their moving averages. + +[`multinomial(...)`](../../tf/compat/v1/multinomial.md): Draws samples from a multinomial distribution. (deprecated) + +[`multiply(...)`](../../tf/math/multiply.md): Returns an element-wise x * y. + +[`negative(...)`](../../tf/math/negative.md): Computes numerical negative value element-wise. + +[`no_gradient(...)`](../../tf/no_gradient.md): Specifies that ops of type `op_type` is not differentiable. + +[`no_op(...)`](../../tf/no_op.md): Does nothing. Only useful as a placeholder for control edges. + +[`no_regularizer(...)`](../../tf/compat/v1/no_regularizer.md): Use this function to prevent regularization of variables. + +[`nondifferentiable_batch_function(...)`](../../tf/nondifferentiable_batch_function.md): Batches the computation done by the decorated function. + +[`norm(...)`](../../tf/compat/v1/norm.md): Computes the norm of vectors, matrices, and tensors. (deprecated arguments) + +[`not_equal(...)`](../../tf/math/not_equal.md): Returns the truth value of (x != y) element-wise. + +[`numpy_function(...)`](../../tf/numpy_function.md): Wraps a python function and uses it as a TensorFlow op. + +[`one_hot(...)`](../../tf/one_hot.md): Returns a one-hot tensor. + +[`ones(...)`](../../tf/ones.md): Creates a tensor with all elements set to one (1). + +[`ones_like(...)`](../../tf/compat/v1/ones_like.md): Creates a tensor with all elements set to 1. + +[`op_scope(...)`](../../tf/compat/v1/op_scope.md): DEPRECATED. Same as name_scope above, just different argument order. + +[`pad(...)`](../../tf/compat/v1/pad.md): Pads a tensor. 
+ +[`parallel_stack(...)`](../../tf/parallel_stack.md): Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor in parallel. + +[`parse_example(...)`](../../tf/compat/v1/parse_example.md): Parses `Example` protos into a `dict` of tensors. + +[`parse_single_example(...)`](../../tf/compat/v1/parse_single_example.md): Parses a single `Example` proto. + +[`parse_single_sequence_example(...)`](../../tf/io/parse_single_sequence_example.md): Parses a single `SequenceExample` proto. + +[`parse_tensor(...)`](../../tf/io/parse_tensor.md): Transforms a serialized tensorflow.TensorProto proto into a Tensor. + +[`placeholder(...)`](../../tf/compat/v1/placeholder.md): Inserts a placeholder for a tensor that will be always fed. + +[`placeholder_with_default(...)`](../../tf/compat/v1/placeholder_with_default.md): A placeholder op that passes through `input` when its output is not fed. + +[`polygamma(...)`](../../tf/math/polygamma.md): Compute the polygamma function \\(\psi^{(n)}(x)\\). + +[`pow(...)`](../../tf/math/pow.md): Computes the power of one value to another. + +[`print(...)`](../../tf/print.md): Print the specified inputs. + +[`py_func(...)`](../../tf/compat/v1/py_func.md): Wraps a python function and uses it as a TensorFlow op. + +[`py_function(...)`](../../tf/py_function.md): Wraps a python function into a TensorFlow op that executes it eagerly. + +[`qr(...)`](../../tf/linalg/qr.md): Computes the QR decompositions of one or more matrices. + +[`quantize(...)`](../../tf/quantization/quantize.md): Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. + +[`quantize_v2(...)`](../../tf/compat/v1/quantize_v2.md): Please use tf.quantization.quantize instead. + +[`quantized_concat(...)`](../../tf/quantization/quantized_concat.md): Concatenates quantized tensors along one dimension. + +[`random_crop(...)`](../../tf/image/random_crop.md): Randomly crops a tensor to a given size. + +[`random_gamma(...)`](../../tf/random/gamma.md): Draws `shape` samples from each of the given Gamma distribution(s). + +[`random_normal(...)`](../../tf/random/normal.md): Outputs random values from a normal distribution. + +[`random_poisson(...)`](../../tf/compat/v1/random_poisson.md): Draws `shape` samples from each of the given Poisson distribution(s). + +[`random_shuffle(...)`](../../tf/random/shuffle.md): Randomly shuffles a tensor along its first dimension. + +[`random_uniform(...)`](../../tf/random/uniform.md): Outputs random values from a uniform distribution. + +[`range(...)`](../../tf/range.md): Creates a sequence of numbers. + +[`rank(...)`](../../tf/rank.md): Returns the rank of a tensor. + +[`read_file(...)`](../../tf/io/read_file.md): Reads and outputs the entire contents of the input filename. + +[`real(...)`](../../tf/math/real.md): Returns the real part of a complex (or real) tensor. + +[`realdiv(...)`](../../tf/realdiv.md): Returns x / y element-wise for real types. + +[`reciprocal(...)`](../../tf/math/reciprocal.md): Computes the reciprocal of x element-wise. + +[`recompute_grad(...)`](../../tf/recompute_grad.md): An eager-compatible version of recompute_grad. + +[`reduce_all(...)`](../../tf/compat/v1/reduce_all.md): Computes the "logical and" of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_any(...)`](../../tf/compat/v1/reduce_any.md): Computes the "logical or" of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_join(...)`](../../tf/compat/v1/reduce_join.md): Joins all strings into a single string, or joins along an axis. 
+ +[`reduce_logsumexp(...)`](../../tf/compat/v1/reduce_logsumexp.md): Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments) + +[`reduce_max(...)`](../../tf/compat/v1/reduce_max.md): Computes the maximum of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_mean(...)`](../../tf/compat/v1/reduce_mean.md): Computes the mean of elements across dimensions of a tensor. + +[`reduce_min(...)`](../../tf/compat/v1/reduce_min.md): Computes the minimum of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_prod(...)`](../../tf/compat/v1/reduce_prod.md): Computes the product of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_sum(...)`](../../tf/compat/v1/reduce_sum.md): Computes the sum of elements across dimensions of a tensor. (deprecated arguments) + +[`regex_replace(...)`](../../tf/strings/regex_replace.md): Replace elements of `input` matching regex `pattern` with `rewrite`. + +[`register_tensor_conversion_function(...)`](../../tf/register_tensor_conversion_function.md): Registers a function for converting objects of `base_type` to `Tensor`. + +[`repeat(...)`](../../tf/repeat.md): Repeat elements of `input`. + +[`report_uninitialized_variables(...)`](../../tf/compat/v1/report_uninitialized_variables.md): Adds ops to list the names of uninitialized variables. + +[`required_space_to_batch_paddings(...)`](../../tf/required_space_to_batch_paddings.md): Calculate padding required to make block_shape divide input_shape. + +[`reset_default_graph(...)`](../../tf/compat/v1/reset_default_graph.md): Clears the default graph stack and resets the global default graph. + +[`reshape(...)`](../../tf/reshape.md): Reshapes a tensor. + +[`resource_variables_enabled(...)`](../../tf/compat/v1/resource_variables_enabled.md): Returns `True` if resource variables are enabled. + +[`reverse(...)`](../../tf/reverse.md): Reverses specific dimensions of a tensor. + +[`reverse_sequence(...)`](../../tf/compat/v1/reverse_sequence.md): Reverses variable length slices. (deprecated arguments) (deprecated arguments) + +[`reverse_v2(...)`](../../tf/reverse.md): Reverses specific dimensions of a tensor. + +[`rint(...)`](../../tf/math/rint.md): Returns element-wise integer closest to x. + +[`roll(...)`](../../tf/roll.md): Rolls the elements of a tensor along an axis. + +[`round(...)`](../../tf/math/round.md): Rounds the values of a tensor to the nearest integer, element-wise. + +[`rsqrt(...)`](../../tf/math/rsqrt.md): Computes reciprocal of square root of x element-wise. + +[`saturate_cast(...)`](../../tf/dtypes/saturate_cast.md): Performs a safe saturating cast of `value` to `dtype`. + +[`scalar_mul(...)`](../../tf/compat/v1/scalar_mul.md): Multiplies a scalar times a `Tensor` or `IndexedSlices` object. + +[`scan(...)`](../../tf/compat/v1/scan.md): scan on the list of tensors unpacked from `elems` on dimension 0. + +[`scatter_add(...)`](../../tf/compat/v1/scatter_add.md): Adds sparse updates to the variable referenced by `resource`. + +[`scatter_div(...)`](../../tf/compat/v1/scatter_div.md): Divides a variable reference by sparse updates. + +[`scatter_max(...)`](../../tf/compat/v1/scatter_max.md): Reduces sparse updates into a variable reference using the `max` operation. + +[`scatter_min(...)`](../../tf/compat/v1/scatter_min.md): Reduces sparse updates into a variable reference using the `min` operation. + +[`scatter_mul(...)`](../../tf/compat/v1/scatter_mul.md): Multiplies sparse updates into a variable reference. 
+ +[`scatter_nd(...)`](../../tf/scatter_nd.md): Scatter `updates` into a new tensor according to `indices`. + +[`scatter_nd_add(...)`](../../tf/compat/v1/scatter_nd_add.md): Applies sparse addition to individual values or slices in a Variable. + +[`scatter_nd_sub(...)`](../../tf/compat/v1/scatter_nd_sub.md): Applies sparse subtraction to individual values or slices in a Variable. + +[`scatter_nd_update(...)`](../../tf/compat/v1/scatter_nd_update.md): Applies sparse `updates` to individual values or slices in a Variable. + +[`scatter_sub(...)`](../../tf/compat/v1/scatter_sub.md): Subtracts sparse updates to a variable reference. + +[`scatter_update(...)`](../../tf/compat/v1/scatter_update.md): Applies sparse updates to a variable reference. + +[`searchsorted(...)`](../../tf/searchsorted.md): Searches input tensor for values on the innermost dimension. + +[`segment_max(...)`](../../tf/math/segment_max.md): Computes the maximum along segments of a tensor. + +[`segment_mean(...)`](../../tf/math/segment_mean.md): Computes the mean along segments of a tensor. + +[`segment_min(...)`](../../tf/math/segment_min.md): Computes the minimum along segments of a tensor. + +[`segment_prod(...)`](../../tf/math/segment_prod.md): Computes the product along segments of a tensor. + +[`segment_sum(...)`](../../tf/math/segment_sum.md): Computes the sum along segments of a tensor. + +[`self_adjoint_eig(...)`](../../tf/linalg/eigh.md): Computes the eigen decomposition of a batch of self-adjoint matrices. + +[`self_adjoint_eigvals(...)`](../../tf/linalg/eigvalsh.md): Computes the eigenvalues of one or more self-adjoint matrices. + +[`sequence_mask(...)`](../../tf/sequence_mask.md): Returns a mask tensor representing the first N positions of each cell. + +[`serialize_many_sparse(...)`](../../tf/compat/v1/serialize_many_sparse.md): Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`. + +[`serialize_sparse(...)`](../../tf/compat/v1/serialize_sparse.md): Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object. + +[`serialize_tensor(...)`](../../tf/io/serialize_tensor.md): Transforms a Tensor into a serialized TensorProto proto. + +[`set_random_seed(...)`](../../tf/compat/v1/set_random_seed.md): Sets the graph-level random seed for the default graph. + +[`setdiff1d(...)`](../../tf/compat/v1/setdiff1d.md): Computes the difference between two lists of numbers or strings. + +[`shape(...)`](../../tf/compat/v1/shape.md): Returns the shape of a tensor. + +[`shape_n(...)`](../../tf/shape_n.md): Returns shape of tensors. + +[`sigmoid(...)`](../../tf/math/sigmoid.md): Computes sigmoid of `x` element-wise. + +[`sign(...)`](../../tf/math/sign.md): Returns an element-wise indication of the sign of a number. + +[`sin(...)`](../../tf/math/sin.md): Computes sine of x element-wise. + +[`sinh(...)`](../../tf/math/sinh.md): Computes hyperbolic sine of x element-wise. + +[`size(...)`](../../tf/compat/v1/size.md): Returns the size of a tensor. + +[`slice(...)`](../../tf/slice.md): Extracts a slice from a tensor. + +[`sort(...)`](../../tf/sort.md): Sorts a tensor. + +[`space_to_batch(...)`](../../tf/compat/v1/space_to_batch.md): SpaceToBatch for 4-D tensors of type T. + +[`space_to_batch_nd(...)`](../../tf/space_to_batch_nd.md): SpaceToBatch for N-D tensors of type T. + +[`space_to_depth(...)`](../../tf/compat/v1/space_to_depth.md): SpaceToDepth for tensors of type T. + +[`sparse_add(...)`](../../tf/compat/v1/sparse_add.md): Adds two tensors, at least one of each is a `SparseTensor`. 
(deprecated arguments) + +[`sparse_concat(...)`](../../tf/compat/v1/sparse_concat.md): Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments) + +[`sparse_fill_empty_rows(...)`](../../tf/sparse/fill_empty_rows.md): Fills empty rows in the input 2-D `SparseTensor` with a default value. + +[`sparse_mask(...)`](../../tf/sparse/mask.md): Masks elements of `IndexedSlices`. + +[`sparse_matmul(...)`](../../tf/compat/v1/sparse_matmul.md): Multiply matrix "a" by matrix "b". + +[`sparse_maximum(...)`](../../tf/sparse/maximum.md): Returns the element-wise max of two SparseTensors. + +[`sparse_merge(...)`](../../tf/compat/v1/sparse_merge.md): Combines a batch of feature ids and values into a single `SparseTensor`. (deprecated) + +[`sparse_minimum(...)`](../../tf/sparse/minimum.md): Returns the element-wise min of two SparseTensors. + +[`sparse_placeholder(...)`](../../tf/compat/v1/sparse_placeholder.md): Inserts a placeholder for a sparse tensor that will be always fed. + +[`sparse_reduce_max(...)`](../../tf/compat/v1/sparse_reduce_max.md): Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments) + +[`sparse_reduce_max_sparse(...)`](../../tf/compat/v1/sparse_reduce_max_sparse.md): Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments) + +[`sparse_reduce_sum(...)`](../../tf/compat/v1/sparse_reduce_sum.md): Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments) + +[`sparse_reduce_sum_sparse(...)`](../../tf/compat/v1/sparse_reduce_sum_sparse.md): Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments) + +[`sparse_reorder(...)`](../../tf/sparse/reorder.md): Reorders a `SparseTensor` into the canonical, row-major ordering. + +[`sparse_reset_shape(...)`](../../tf/sparse/reset_shape.md): Resets the shape of a `SparseTensor` with indices and values unchanged. + +[`sparse_reshape(...)`](../../tf/sparse/reshape.md): Reshapes a `SparseTensor` to represent values in a new dense shape. + +[`sparse_retain(...)`](../../tf/sparse/retain.md): Retains specified non-empty values within a `SparseTensor`. + +[`sparse_segment_mean(...)`](../../tf/compat/v1/sparse_segment_mean.md): Computes the mean along sparse segments of a tensor. + +[`sparse_segment_sqrt_n(...)`](../../tf/compat/v1/sparse_segment_sqrt_n.md): Computes the sum along sparse segments of a tensor divided by the sqrt(N). + +[`sparse_segment_sum(...)`](../../tf/compat/v1/sparse_segment_sum.md): Computes the sum along sparse segments of a tensor. + +[`sparse_slice(...)`](../../tf/sparse/slice.md): Slice a `SparseTensor` based on the `start` and `size. + +[`sparse_softmax(...)`](../../tf/sparse/softmax.md): Applies softmax to a batched N-D `SparseTensor`. + +[`sparse_split(...)`](../../tf/compat/v1/sparse_split.md): Split a `SparseTensor` into `num_split` tensors along `axis`. (deprecated arguments) + +[`sparse_tensor_dense_matmul(...)`](../../tf/sparse/sparse_dense_matmul.md): Multiply SparseTensor (or dense Matrix) (of rank 2) "A" by dense matrix + +[`sparse_tensor_to_dense(...)`](../../tf/sparse/to_dense.md): Converts a `SparseTensor` into a dense tensor. + +[`sparse_to_dense(...)`](../../tf/compat/v1/sparse_to_dense.md): Converts a sparse representation into a dense tensor. (deprecated) + +[`sparse_to_indicator(...)`](../../tf/sparse/to_indicator.md): Converts a `SparseTensor` of ids into a dense bool indicator tensor. 
+ +[`sparse_transpose(...)`](../../tf/sparse/transpose.md): Transposes a `SparseTensor` + +[`split(...)`](../../tf/split.md): Splits a tensor `value` into a list of sub tensors. + +[`sqrt(...)`](../../tf/math/sqrt.md): Computes element-wise square root of the input tensor. + +[`square(...)`](../../tf/math/square.md): Computes square of x element-wise. + +[`squared_difference(...)`](../../tf/math/squared_difference.md): Returns (x - y)(x - y) element-wise. + +[`squeeze(...)`](../../tf/compat/v1/squeeze.md): Removes dimensions of size 1 from the shape of a tensor. (deprecated arguments) + +[`stack(...)`](../../tf/stack.md): Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor. + +[`stop_gradient(...)`](../../tf/stop_gradient.md): Stops gradient computation. + +[`strided_slice(...)`](../../tf/strided_slice.md): Extracts a strided slice of a tensor (generalized python array indexing). + +[`string_join(...)`](../../tf/strings/join.md): Perform element-wise concatenation of a list of string tensors. + +[`string_split(...)`](../../tf/compat/v1/string_split.md): Split elements of `source` based on `delimiter`. (deprecated arguments) + +[`string_strip(...)`](../../tf/strings/strip.md): Strip leading and trailing whitespaces from the Tensor. + +[`string_to_hash_bucket(...)`](../../tf/compat/v1/string_to_hash_bucket.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`string_to_hash_bucket_fast(...)`](../../tf/strings/to_hash_bucket_fast.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`string_to_hash_bucket_strong(...)`](../../tf/strings/to_hash_bucket_strong.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`string_to_number(...)`](../../tf/compat/v1/string_to_number.md): Converts each string in the input Tensor to the specified numeric type. + +[`substr(...)`](../../tf/compat/v1/substr.md): Return substrings from `Tensor` of strings. + +[`subtract(...)`](../../tf/math/subtract.md): Returns x - y element-wise. + +[`svd(...)`](../../tf/linalg/svd.md): Computes the singular value decompositions of one or more matrices. + +[`switch_case(...)`](../../tf/switch_case.md): Create a switch/case operation, i.e. an integer-indexed conditional. + +[`tables_initializer(...)`](../../tf/compat/v1/tables_initializer.md): Returns an Op that initializes all tables of the default graph. + +[`tan(...)`](../../tf/math/tan.md): Computes tan of x element-wise. + +[`tanh(...)`](../../tf/math/tanh.md): Computes hyperbolic tangent of `x` element-wise. + +[`tensor_scatter_add(...)`](../../tf/tensor_scatter_nd_add.md): Adds sparse `updates` to an existing tensor according to `indices`. + +[`tensor_scatter_nd_add(...)`](../../tf/tensor_scatter_nd_add.md): Adds sparse `updates` to an existing tensor according to `indices`. + +[`tensor_scatter_nd_sub(...)`](../../tf/tensor_scatter_nd_sub.md): Subtracts sparse `updates` from an existing tensor according to `indices`. + +[`tensor_scatter_nd_update(...)`](../../tf/tensor_scatter_nd_update.md): Scatter `updates` into an existing tensor according to `indices`. + +[`tensor_scatter_sub(...)`](../../tf/tensor_scatter_nd_sub.md): Subtracts sparse `updates` from an existing tensor according to `indices`. + +[`tensor_scatter_update(...)`](../../tf/tensor_scatter_nd_update.md): Scatter `updates` into an existing tensor according to `indices`. + +[`tensordot(...)`](../../tf/tensordot.md): Tensor contraction of a and b along specified axes and outer product. 
+ +[`tile(...)`](../../tf/tile.md): Constructs a tensor by tiling a given tensor. + +[`timestamp(...)`](../../tf/timestamp.md): Provides the time since epoch in seconds. + +[`to_bfloat16(...)`](../../tf/compat/v1/to_bfloat16.md): Casts a tensor to type `bfloat16`. (deprecated) + +[`to_complex128(...)`](../../tf/compat/v1/to_complex128.md): Casts a tensor to type `complex128`. (deprecated) + +[`to_complex64(...)`](../../tf/compat/v1/to_complex64.md): Casts a tensor to type `complex64`. (deprecated) + +[`to_double(...)`](../../tf/compat/v1/to_double.md): Casts a tensor to type `float64`. (deprecated) + +[`to_float(...)`](../../tf/compat/v1/to_float.md): Casts a tensor to type `float32`. (deprecated) + +[`to_int32(...)`](../../tf/compat/v1/to_int32.md): Casts a tensor to type `int32`. (deprecated) + +[`to_int64(...)`](../../tf/compat/v1/to_int64.md): Casts a tensor to type `int64`. (deprecated) + +[`trace(...)`](../../tf/linalg/trace.md): Compute the trace of a tensor `x`. + +[`trainable_variables(...)`](../../tf/compat/v1/trainable_variables.md): Returns all variables created with `trainable=True`. + +[`transpose(...)`](../../tf/compat/v1/transpose.md): Transposes `a`. + +[`truediv(...)`](../../tf/math/truediv.md): Divides x / y elementwise (using Python 3 division operator semantics). + +[`truncated_normal(...)`](../../tf/random/truncated_normal.md): Outputs random values from a truncated normal distribution. + +[`truncatediv(...)`](../../tf/truncatediv.md): Returns x / y element-wise for integer types. + +[`truncatemod(...)`](../../tf/truncatemod.md): Returns element-wise remainder of division. This emulates C semantics in that + +[`tuple(...)`](../../tf/compat/v1/tuple.md): Group tensors together. + +[`unique(...)`](../../tf/unique.md): Finds unique elements in a 1-D tensor. + +[`unique_with_counts(...)`](../../tf/unique_with_counts.md): Finds unique elements in a 1-D tensor. + +[`unravel_index(...)`](../../tf/unravel_index.md): Converts an array of flat indices into a tuple of coordinate arrays. + +[`unsorted_segment_max(...)`](../../tf/math/unsorted_segment_max.md): Computes the maximum along segments of a tensor. + +[`unsorted_segment_mean(...)`](../../tf/math/unsorted_segment_mean.md): Computes the mean along segments of a tensor. + +[`unsorted_segment_min(...)`](../../tf/math/unsorted_segment_min.md): Computes the minimum along segments of a tensor. + +[`unsorted_segment_prod(...)`](../../tf/math/unsorted_segment_prod.md): Computes the product along segments of a tensor. + +[`unsorted_segment_sqrt_n(...)`](../../tf/math/unsorted_segment_sqrt_n.md): Computes the sum along segments of a tensor divided by the sqrt(N). + +[`unsorted_segment_sum(...)`](../../tf/math/unsorted_segment_sum.md): Computes the sum along segments of a tensor. + +[`unstack(...)`](../../tf/unstack.md): Unpacks the given dimension of a rank-`R` tensor into rank-`(R-1)` tensors. + +[`variable_axis_size_partitioner(...)`](../../tf/compat/v1/variable_axis_size_partitioner.md): Get a partitioner for VariableScope to keep shards below `max_shard_bytes`. + +[`variable_creator_scope(...)`](../../tf/compat/v1/variable_creator_scope.md): Scope which defines a variable creation function to be used by variable(). + +[`variable_op_scope(...)`](../../tf/compat/v1/variable_op_scope.md): Deprecated: context manager for defining an op that creates variables. + +[`variables_initializer(...)`](../../tf/compat/v1/variables_initializer.md): Returns an Op that initializes a list of variables. 
+ +[`vectorized_map(...)`](../../tf/vectorized_map.md): Parallel map on the list of tensors unpacked from `elems` on dimension 0. + +[`verify_tensor_all_finite(...)`](../../tf/compat/v1/verify_tensor_all_finite.md): Assert that the tensor does not contain any NaN's or Inf's. + +[`where(...)`](../../tf/compat/v1/where.md): Return the elements, either from `x` or `y`, depending on the `condition`. + +[`where_v2(...)`](../../tf/where.md): Return the elements where `condition` is `True` (multiplexing `x` and `y`). + +[`while_loop(...)`](../../tf/compat/v1/while_loop.md): Repeat `body` while the condition `cond` is true. + +[`wrap_function(...)`](../../tf/compat/v1/wrap_function.md): Wraps the TF 1.x function fn into a graph function. + +[`write_file(...)`](../../tf/io/write_file.md): Writes contents to the file at input filename. Creates file and recursively + +[`zeros(...)`](../../tf/zeros.md): Creates a tensor with all elements set to zero. + +[`zeros_like(...)`](../../tf/compat/v1/zeros_like.md): Creates a tensor with all elements set to zero. + +[`zeta(...)`](../../tf/math/zeta.md): Compute the Hurwitz zeta function \\(\zeta(x, q)\\). + +## Other Members + +* `AUTO_REUSE` +* `COMPILER_VERSION = '7.3.1 20180303'` +* `CXX11_ABI_FLAG = 0` +* `GIT_VERSION = 'v2.2.0-rc4-8-g2b96f3662b'` +* `GRAPH_DEF_VERSION = 175` +* `GRAPH_DEF_VERSION_MIN_CONSUMER = 0` +* `GRAPH_DEF_VERSION_MIN_PRODUCER = 0` +* `MONOLITHIC_BUILD = 0` +* `QUANTIZED_DTYPES` +* `VERSION = '2.2.0'` +* `__version__ = '2.2.0'` +* `bfloat16` +* `bool` +* `complex128` +* `complex64` +* `double` +* `float16` +* `float32` +* `float64` +* `half` +* `int16` +* `int32` +* `int64` +* `int8` +* `newaxis = None` +* `qint16` +* `qint32` +* `qint8` +* `quint16` +* `quint8` +* `resource` +* `string` +* `uint16` +* `uint32` +* `uint64` +* `uint8` +* `variant` diff --git a/site/en/api_docs/python/tf/compat/v1/AttrValue.md b/site/en/api_docs/python/tf/compat/v1/AttrValue.md new file mode 100644 index 00000000000..86ece2b2faa --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/AttrValue.md @@ -0,0 +1,113 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.compat.v1.AttrValue + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`b` + +`bool b` +
+`f` + +`float f` +
+`func` + +`NameAttrList func` +
+`i` + +`int64 i` +
+`list` + +`ListValue list` +
+`placeholder` + +`string placeholder` +
+`s` + +`bytes s` +
+`shape` + +`TensorShapeProto shape` +
+`tensor` + +`TensorProto tensor` +
+`type` + +`DataType type` +
+ + + +## Child Classes +[`class ListValue`](../../../tf/compat/v1/AttrValue/ListValue.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/AttrValue/ListValue.md b/site/en/api_docs/python/tf/compat/v1/AttrValue/ListValue.md new file mode 100644 index 00000000000..20e19dd9287 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/AttrValue/ListValue.md @@ -0,0 +1,95 @@ +description: A ProtocolMessage + +
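+
+As a rough illustration of how the fields above map onto Python, here is a minimal sketch that builds a few `AttrValue` messages; the particular values are arbitrary examples.
+
+```python
+import tensorflow as tf
+
+# Each AttrValue sets exactly one of its fields.
+int_attr = tf.compat.v1.AttrValue(i=64)                               # int64 i
+str_attr = tf.compat.v1.AttrValue(s=b"SAME")                          # bytes s
+type_attr = tf.compat.v1.AttrValue(type=tf.float32.as_datatype_enum)  # DataType type
+
+# List-valued attributes use the ListValue child class.
+list_attr = tf.compat.v1.AttrValue(
+    list=tf.compat.v1.AttrValue.ListValue(i=[1, 2, 2, 1]))
+
+print(int_attr, str_attr, type_attr, list_attr, sep="\n")
+```
+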
+ + +
+ +# tf.compat.v1.AttrValue.ListValue + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`b` + +`repeated bool b` +
+`f` + +`repeated float f` +
+`func` + +`repeated NameAttrList func` +
+`i` + +`repeated int64 i` +
+`s` + +`repeated bytes s` +
+`shape` + +`repeated TensorShapeProto shape` +
+`tensor` + +`repeated TensorProto tensor` +
+`type` + +`repeated DataType type` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/ConditionalAccumulator.md b/site/en/api_docs/python/tf/compat/v1/ConditionalAccumulator.md new file mode 100644 index 00000000000..717ffc839cd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ConditionalAccumulator.md @@ -0,0 +1,381 @@ +description: A conditional accumulator for aggregating gradients. + +
+ + + + + + + +
+ +# tf.compat.v1.ConditionalAccumulator + + + + + + + + + +A conditional accumulator for aggregating gradients. + +Inherits From: [`ConditionalAccumulatorBase`](../../../tf/compat/v1/ConditionalAccumulatorBase.md) + + + + + + + +Up-to-date gradients (i.e., time step at which gradient was computed is +equal to the accumulator's time step) are added to the accumulator. + +Extraction of the average gradient is blocked until the required number of +gradients has been accumulated. + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +Datatype of the accumulated gradients. +
+`shape` + +Shape of the accumulated gradients. +
+`shared_name` + +Optional. If non-empty, this accumulator will be shared under +the given name across multiple sessions. +
+`name` + +Optional name for the accumulator. +
+`reduction_type` + +Reduction type to use when taking the gradient. +
+ + + + + + + + + + + + + + + + + + + + +
+`accumulator_ref` + +The underlying accumulator reference. +
+`dtype` + +The datatype of the gradients accumulated by this accumulator. +
+`name` + +The name of the underlying accumulator. +
+ + + +## Methods + +

apply_grad

+ +View source + + + +Attempts to apply a gradient to the accumulator. + +The attempt is silently dropped if the gradient is stale, i.e., local_step +is less than the accumulator's global time step. + + + + + + + + + + + + + + + + +
Args
+`grad` + +The gradient tensor to be applied. +
+`local_step` + +Time step at which the gradient was computed. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
Returns
+The operation that (conditionally) applies a gradient to the accumulator. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If grad is of the wrong shape +
+ + + +
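+
+To make the accumulate-then-average flow described for this class concrete, here is a minimal graph-mode sketch; the shapes, gradient values and session setup are illustrative assumptions, not part of the API contract.
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+acc = tf.compat.v1.ConditionalAccumulator(
+    dtype=tf.float32, shape=tf.TensorShape([2]), shared_name="grad_acc")
+
+# Two workers each contribute a gradient computed at the current time step (0).
+apply_1 = acc.apply_grad([1.0, 2.0], local_step=0)
+apply_2 = acc.apply_grad([3.0, 4.0], local_step=0)
+
+# Blocks until at least `num_required` gradients have been applied, then
+# returns their average and resets the accumulator's internal state.
+avg_grad = acc.take_grad(num_required=2)
+
+with tf.compat.v1.Session() as sess:
+    sess.run([apply_1, apply_2])
+    print(sess.run(avg_grad))  # [2. 3.]
+```
+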

num_accumulated

+ +View source + + + +Number of gradients that have currently been aggregated in accumulator. + + + + + + + + + + + +
Args
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
Returns
+Number of accumulated gradients currently in accumulator. +
+ + + +

set_global_step

+ +View source + + + +Sets the global time step of the accumulator. + +The operation logs a warning if we attempt to set to a time step that is +lower than the accumulator's own time step. + + + + + + + + + + + + + +
Args
+`new_global_step` + +Value of new time step. Can be a variable or a constant +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
Returns
+Operation that sets the accumulator's time step. +
+ + + +

take_grad

+ +View source + + + +Attempts to extract the average gradient from the accumulator. + +The operation blocks until sufficient number of gradients have been +successfully applied to the accumulator. + +Once successful, the following actions are also triggered: + +- Counter of accumulated gradients is reset to 0. +- Aggregated gradient is reset to 0 tensor. +- Accumulator's internal time step is incremented by 1. + + + + + + + + + + + + + +
Args
+`num_required` + +Number of gradients that need to have been aggregated. +
+`name` + +Optional name for the operation +
+ + + + + + + + + + + +
Returns
+A tensor holding the value of the average gradient. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +If num_required < 1 +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/ConditionalAccumulatorBase.md b/site/en/api_docs/python/tf/compat/v1/ConditionalAccumulatorBase.md new file mode 100644 index 00000000000..5fb3c64b33c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ConditionalAccumulatorBase.md @@ -0,0 +1,209 @@ +description: A conditional accumulator for aggregating gradients. + +
+ + + + + +
+ +# tf.compat.v1.ConditionalAccumulatorBase + + + + + + + + + +A conditional accumulator for aggregating gradients. + + + + + + + +Up-to-date gradients (i.e., time step at which gradient was computed is +equal to the accumulator's time step) are added to the accumulator. + +Extraction of the average gradient is blocked until the required number of +gradients has been accumulated. + + + + + + + + + + + + + + + + +
+`dtype` + +Datatype of the accumulated gradients. +
+`shape` + +Shape of the accumulated gradients. +
+`accumulator_ref` + +A handle to the conditional accumulator, created by subclasses. +
+ + + + + + + + + + + + + + + + + + + + +
+`accumulator_ref` + +The underlying accumulator reference. +
+`dtype` + +The datatype of the gradients accumulated by this accumulator. +
+`name` + +The name of the underlying accumulator. +
+ + + +## Methods + +

num_accumulated

+ +View source + + + +Number of gradients that have currently been aggregated in accumulator. + + + + + + + + + + + +
Args
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
Returns
+Number of accumulated gradients currently in accumulator. +
+ + + +

set_global_step

+ +View source + + + +Sets the global time step of the accumulator. + +The operation logs a warning if we attempt to set to a time step that is +lower than the accumulator's own time step. + + + + + + + + + + + + + +
Args
+`new_global_step` + +Value of new time step. Can be a variable or a constant +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
Returns
+Operation that sets the accumulator's time step. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/ConfigProto.md b/site/en/api_docs/python/tf/compat/v1/ConfigProto.md new file mode 100644 index 00000000000..9f679a75450 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ConfigProto.md @@ -0,0 +1,165 @@ +description: A ProtocolMessage + +
+ + + + +
+ +# tf.compat.v1.ConfigProto + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_soft_placement` + +`bool allow_soft_placement` +
+`cluster_def` + +`ClusterDef cluster_def` +
+`device_count` + +`repeated DeviceCountEntry device_count` +
+`device_filters` + +`repeated string device_filters` +
+`experimental` + +`Experimental experimental` +
+`gpu_options` + +`GPUOptions gpu_options` +
+`graph_options` + +`GraphOptions graph_options` +
+`inter_op_parallelism_threads` + +`int32 inter_op_parallelism_threads` +
+`intra_op_parallelism_threads` + +`int32 intra_op_parallelism_threads` +
+`isolate_session_state` + +`bool isolate_session_state` +
+`log_device_placement` + +`bool log_device_placement` +
+`operation_timeout_in_ms` + +`int64 operation_timeout_in_ms` +
+`placement_period` + +`int32 placement_period` +
+`rpc_options` + +`RPCOptions rpc_options` +
+`session_inter_op_thread_pool` + +`repeated ThreadPoolOptionProto session_inter_op_thread_pool` +
+`share_cluster_devices_in_session` + +`bool share_cluster_devices_in_session` +
+`use_per_session_threads` + +`bool use_per_session_threads` +
+ + + +## Child Classes +[`class DeviceCountEntry`](../../../tf/compat/v1/ConfigProto/DeviceCountEntry.md) + +[`class Experimental`](../../../tf/compat/v1/ConfigProto/Experimental.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/ConfigProto/DeviceCountEntry.md b/site/en/api_docs/python/tf/compat/v1/ConfigProto/DeviceCountEntry.md new file mode 100644 index 00000000000..9662ac058e8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ConfigProto/DeviceCountEntry.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
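+
+For orientation, a short sketch of how a `ConfigProto` is typically built and passed to a TF1-style session; the particular field values below are illustrative, not recommendations.
+
+```python
+import tensorflow as tf
+
+config = tf.compat.v1.ConfigProto(
+    allow_soft_placement=True,           # fall back to a supported device if needed
+    log_device_placement=False,
+    inter_op_parallelism_threads=2,
+    intra_op_parallelism_threads=4,
+)
+# Nested messages such as gpu_options can be filled in after construction.
+config.gpu_options.allow_growth = True
+
+tf.compat.v1.disable_eager_execution()
+with tf.compat.v1.Session(config=config) as sess:
+    print(sess.run(tf.constant(42)))
+```
+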
+ + +
+ +# tf.compat.v1.ConfigProto.DeviceCountEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`key` + +`string key` +
+`value` + +`int32 value` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/ConfigProto/Experimental.md b/site/en/api_docs/python/tf/compat/v1/ConfigProto/Experimental.md new file mode 100644 index 00000000000..5ad6a07a387 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ConfigProto/Experimental.md @@ -0,0 +1,137 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.ConfigProto.Experimental + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`collective_deterministic_sequential_execution` + +`bool collective_deterministic_sequential_execution` +
+`collective_group_leader` + +`string collective_group_leader` +
+`collective_nccl` + +`bool collective_nccl` +
+`disable_output_partition_graphs` + +`bool disable_output_partition_graphs` +
+`disable_thread_spinning` + +`bool disable_thread_spinning` +
+`enable_mlir_bridge` + +`bool enable_mlir_bridge` +
+`executor_type` + +`string executor_type` +
+`optimize_for_static_graph` + +`bool optimize_for_static_graph` +
+`recv_buf_max_chunk` + +`int32 recv_buf_max_chunk` +
+`session_metadata` + +`SessionMetadata session_metadata` +
+`share_cluster_devices_in_session` + +`bool share_cluster_devices_in_session` +
+`share_session_state_in_clusterspec_propagation` + +`bool share_session_state_in_clusterspec_propagation` +
+`use_numa_affinity` + +`bool use_numa_affinity` +
+`xla_fusion_autotuner_thresh` + +`int64 xla_fusion_autotuner_thresh` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/DeviceSpec.md b/site/en/api_docs/python/tf/compat/v1/DeviceSpec.md new file mode 100644 index 00000000000..99647683bcd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/DeviceSpec.md @@ -0,0 +1,559 @@ +description: Represents a (possibly partial) specification for a TensorFlow device. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.DeviceSpec + + + + + + + + + +Represents a (possibly partial) specification for a TensorFlow device. + +Inherits From: [`DeviceSpec`](../../../tf/DeviceSpec.md) + + + + + + + +`DeviceSpec`s are used throughout TensorFlow to describe where state is stored +and computations occur. Using `DeviceSpec` allows you to parse device spec +strings to verify their validity, merge them or compose them programmatically. + +#### Example: + + + +```python +# Place the operations on device "GPU:0" in the "ps" job. +device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0) +with tf.device(device_spec.to_string()): + # Both my_var and squared_var will be placed on /job:ps/device:GPU:0. + my_var = tf.Variable(..., name="my_variable") + squared_var = tf.square(my_var) +``` + +With eager execution disabled (by default in TensorFlow 1.x and by calling +disable_eager_execution() in TensorFlow 2.x), the following syntax +can be used: + +```python +tf.compat.v1.disable_eager_execution() + +# Same as previous +device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0) +# No need of .to_string() method. +with tf.device(device_spec): + my_var = tf.Variable(..., name="my_variable") + squared_var = tf.square(my_var) + ``` + +If a `DeviceSpec` is partially specified, it will be merged with other +`DeviceSpec`s according to the scope in which it is defined. `DeviceSpec` +components defined in inner scopes take precedence over those defined in +outer scopes. + +```python +gpu0_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0) +with tf.device(DeviceSpec(job="train").to_string()): + with tf.device(gpu0_spec.to_string()): + # Nodes created here will be assigned to /job:ps/device:GPU:0. + with tf.device(DeviceSpec(device_type="GPU", device_index=1).to_string()): + # Nodes created here will be assigned to /job:train/device:GPU:1. +``` + +A `DeviceSpec` consists of 5 components -- each of +which is optionally specified: + +* Job: The job name. +* Replica: The replica index. +* Task: The task index. +* Device type: The device type string (e.g. "CPU" or "GPU"). +* Device index: The device index. + + + + + + + + + + + + + + + + + + + + + + +
+`job` + +string. Optional job name. +
+`replica` + +int. Optional replica index. +
+`task` + +int. Optional task index. +
+`device_type` + +Optional device type string (e.g. "CPU" or "GPU") +
+`device_index` + +int. Optional device index. If left +unspecified, device represents 'any' device_index. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`device_index` + + +
+`device_type` + + +
+`job` + + +
+`replica` + + +
+`task` + + +
+ + + +## Methods + +

from_string

+ +View source + + + +Construct a `DeviceSpec` from a string. + + + + + + + + + + + +
Args
+`spec` + +a string of the form +/job:<name>/replica:<id>/task:<id>/device:CPU:<id> +or +/job:<name>/replica:<id>/task:<id>/device:GPU:<id> +as cpu and gpu are mutually exclusive. +All entries are optional. +
+ + + + + + + + + + + +
Returns
+A DeviceSpec. +
+ + + +
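+
+A brief sketch of round-tripping a device string through `from_string`; the specific job/task values are just example inputs.
+
+```python
+import tensorflow as tf
+
+spec = tf.compat.v1.DeviceSpec.from_string("/job:ps/replica:0/task:1/device:GPU:0")
+print(spec.job, spec.replica, spec.task, spec.device_type, spec.device_index)
+# ps 0 1 GPU 0
+print(spec.to_string())  # /job:ps/replica:0/task:1/device:GPU:0
+```
+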

make_merged_spec

+ +View source + + + +Returns a new DeviceSpec which incorporates `dev`. + +When combining specs, `dev` will take precedence over the current spec. +So for instance: +``` +first_spec = tf.DeviceSpec(job=0, device_type="CPU") +second_spec = tf.DeviceSpec(device_type="GPU") +combined_spec = first_spec.make_merged_spec(second_spec) +``` + +is equivalent to: +``` +combined_spec = tf.DeviceSpec(job=0, device_type="GPU") +``` + + + + + + + + + + +
Args
+`dev` + +a `DeviceSpec` +
+ + + + + + + + + + + +
Returns
+A new `DeviceSpec` which combines `self` and `dev` +
+ + + +

merge_from

+ +View source + + + +Merge the properties of "dev" into this `DeviceSpec`. + +Note: Will be removed in TensorFlow 2.x since DeviceSpecs will become + immutable. + + + + + + + + + + +
Args
+`dev` + +a `DeviceSpec`. +
+ + + +

parse_from_string

+ +View source + + + +Parse a `DeviceSpec` name into its components. + +2.x behavior change: + In TensorFlow 1.x, this function mutates its own state and returns itself. + In 2.x, DeviceSpecs are immutable, and this function will return a + DeviceSpec which contains the spec. + + Recommended: + ``` + # my_spec and my_updated_spec are unrelated. + my_spec = tf.DeviceSpec.from_string("/CPU:0") + my_updated_spec = tf.DeviceSpec.from_string("/GPU:0") + with tf.device(my_updated_spec): + ... + ``` + + Will work in 1.x and 2.x (though deprecated in 2.x): + ``` + my_spec = tf.DeviceSpec.from_string("/CPU:0") + my_updated_spec = my_spec.parse_from_string("/GPU:0") + with tf.device(my_updated_spec): + ... + ``` + + Will NOT work in 2.x: + ``` + my_spec = tf.DeviceSpec.from_string("/CPU:0") + my_spec.parse_from_string("/GPU:0") # <== Will not update my_spec + with tf.device(my_spec): + ... + ``` + + In general, `DeviceSpec.from_string` should completely replace + `DeviceSpec.parse_from_string`, and `DeviceSpec.replace` should + completely replace setting attributes directly. + + + + + + + + + + +
Args
+`spec` + +an optional string of the form +/job:<name>/replica:<id>/task:<id>/device:CPU:<id> +or +/job:<name>/replica:<id>/task:<id>/device:GPU:<id> +as cpu and gpu are mutually exclusive. +All entries are optional. +
+ + + + + + + + + + + +
Returns
+The `DeviceSpec`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if the spec was not valid. +
+ + + +

replace

+ +View source + + + +Convenience method for making a new DeviceSpec by overriding fields. + + +#### For instance: + + +``` +my_spec = DeviceSpec(job="my_job", device_type="CPU") +my_updated_spec = my_spec.replace(device_type="GPU") +my_other_spec = my_spec.replace(device_type=None) +``` + + + + + + + + + + +
Args
+`**kwargs` + +This method takes the same args as the DeviceSpec constructor +
+ + + + + + + + + + + +
Returns
+A DeviceSpec with the fields specified in kwargs overridden. +
+ + + +

to_string

+ +View source + + + +Return a string representation of this `DeviceSpec`. + + + + + + + + + + +
Returns
+a string of the form +/job:<name>/replica:<id>/task:<id>/device:<device_type>:<id>. +
+ + + +

__eq__

+ +View source + + + +Checks if the `other` DeviceSpec is the same as the current instance, i.e., has the +same value for all the internal fields. + + + + + + + + + + +
Args
+`other` + +Another DeviceSpec +
+ + + + + + + + + + + +
Returns
+Return `True` if `other` is also a DeviceSpec instance and has same value +as the current instance. +Return `False` otherwise. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/Dimension.md b/site/en/api_docs/python/tf/compat/v1/Dimension.md new file mode 100644 index 00000000000..910807f666b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/Dimension.md @@ -0,0 +1,1181 @@ +description: Represents the value of one dimension in a TensorShape. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.Dimension + + + + + + + + + +Represents the value of one dimension in a TensorShape. + + + + + + + + + + + + + + + + + + + +
+`value` + +The value of this dimension, or None if it is unknown. +
+ + + +## Methods + +

assert_is_compatible_with

+ +View source + + + +Raises an exception if `other` is not compatible with this Dimension. + + + + + + + + + + + +
Args
+`other` + +Another Dimension. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` and `other` are not compatible (see +is_compatible_with). +
+ + + +

is_compatible_with

+ +View source + + + +Returns true if `other` is compatible with this Dimension. + +Two known Dimensions are compatible if they have the same value. +An unknown Dimension is compatible with all other Dimensions. + + + + + + + + + + +
Args
+`other` + +Another Dimension. +
+ + + + + + + + + + + +
Returns
+True if this Dimension and `other` are compatible. +
+ + + +
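+
+A quick sketch of the compatibility rules stated above (values chosen only for illustration):
+
+```python
+import tensorflow as tf
+
+known = tf.compat.v1.Dimension(32)
+unknown = tf.compat.v1.Dimension(None)
+
+print(known.is_compatible_with(tf.compat.v1.Dimension(32)))  # True
+print(known.is_compatible_with(tf.compat.v1.Dimension(64)))  # False
+print(unknown.is_compatible_with(known))                     # True: unknown is compatible with anything
+```
+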

merge_with

+ +View source + + + +Returns a Dimension that combines the information in `self` and `other`. + +Dimensions are combined as follows: + +```python +tf.compat.v1.Dimension(n) .merge_with(tf.compat.v1.Dimension(n)) == +tf.compat.v1.Dimension(n) +tf.compat.v1.Dimension(n) .merge_with(tf.compat.v1.Dimension(None)) == +tf.compat.v1.Dimension(n) +tf.compat.v1.Dimension(None).merge_with(tf.compat.v1.Dimension(n)) == +tf.compat.v1.Dimension(n) +# equivalent to tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None).merge_with(tf.compat.v1.Dimension(None)) + +# raises ValueError for n != m +tf.compat.v1.Dimension(n) .merge_with(tf.compat.v1.Dimension(m)) +``` + + + + + + + + + + +
Args
+`other` + +Another Dimension. +
+ + + + + + + + + + + +
Returns
+A Dimension containing the combined information of `self` and +`other`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `self` and `other` are not compatible (see +is_compatible_with). +
+ + + +

__add__

+ +View source + + + +Returns the sum of `self` and `other`. + +Dimensions are summed as follows: + +```python +tf.compat.v1.Dimension(m) + tf.compat.v1.Dimension(n) == +tf.compat.v1.Dimension(m + n) +tf.compat.v1.Dimension(m) + tf.compat.v1.Dimension(None) # equiv. to +tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None) + tf.compat.v1.Dimension(n) # equiv. to +tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None) + tf.compat.v1.Dimension(None) # equiv. to +tf.compat.v1.Dimension(None) +``` + + + + + + + + + + +
Args
+`other` + +Another Dimension, or a value accepted by `as_dimension`. +
+ + + + + + + + + + + +
Returns
+A Dimension whose value is the sum of `self` and `other`. +
+ + + +

__div__

+ +View source + + + +DEPRECATED: Use `__floordiv__` via `x // y` instead. + +This function exists only for backwards compatibility purposes; new code +should use `__floordiv__` via the syntax `x // y`. Using `x // y` +communicates clearly that the result rounds down, and is forward compatible +to Python 3. + + + + + + + + + + +
Args
+`other` + +Another `Dimension`. +
+ + + + + + + + + + + +
Returns
+A `Dimension` whose value is the integer quotient of `self` and `other`. +
+ + + +

__eq__

+ +View source + + + +Returns true if `other` has the same known value as this Dimension. + + +

__floordiv__

+ +View source + + + +Returns the quotient of `self` and `other` rounded down. + +Dimensions are divided as follows: + +```python +tf.compat.v1.Dimension(m) // tf.compat.v1.Dimension(n) == +tf.compat.v1.Dimension(m // n) +tf.compat.v1.Dimension(m) // tf.compat.v1.Dimension(None) # equiv. to +tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None) // tf.compat.v1.Dimension(n) # equiv. to +tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None) // tf.compat.v1.Dimension(None) # equiv. to +tf.compat.v1.Dimension(None) +``` + + + + + + + + + + +
Args
+`other` + +Another Dimension, or a value accepted by `as_dimension`. +
+ + + + + + + + + + + +
Returns
+A `Dimension` whose value is the integer quotient of `self` and `other`. +
+ + + +

__ge__

+ +View source + + + +Returns True if `self` is known to be greater than or equal to `other`. + +Dimensions are compared as follows: + +```python +(tf.compat.v1.Dimension(m) >= tf.compat.v1.Dimension(n)) == (m >= n) +(tf.compat.v1.Dimension(m) >= tf.compat.v1.Dimension(None)) == None +(tf.compat.v1.Dimension(None) >= tf.compat.v1.Dimension(n)) == None +(tf.compat.v1.Dimension(None) >= tf.compat.v1.Dimension(None)) == None +``` + + + + + + + + + + +
Args
+`other` + +Another Dimension. +
+ + + + + + + + + + + +
Returns
+The value of `self.value >= other.value` if both are known, otherwise +None. +
+ + + +

__gt__

+ +View source + + + +Returns True if `self` is known to be greater than `other`. + +Dimensions are compared as follows: + +```python +(tf.compat.v1.Dimension(m) > tf.compat.v1.Dimension(n)) == (m > n) +(tf.compat.v1.Dimension(m) > tf.compat.v1.Dimension(None)) == None +(tf.compat.v1.Dimension(None) > tf.compat.v1.Dimension(n)) == None +(tf.compat.v1.Dimension(None) > tf.compat.v1.Dimension(None)) == None +``` + + + + + + + + + + +
Args
+`other` + +Another Dimension. +
+ + + + + + + + + + + +
Returns
+The value of `self.value > other.value` if both are known, otherwise +None. +
+ + + +

__le__

+ +View source + + + +Returns True if `self` is known to be less than or equal to `other`. + +Dimensions are compared as follows: + +```python +(tf.compat.v1.Dimension(m) <= tf.compat.v1.Dimension(n)) == (m <= n) +(tf.compat.v1.Dimension(m) <= tf.compat.v1.Dimension(None)) == None +(tf.compat.v1.Dimension(None) <= tf.compat.v1.Dimension(n)) == None +(tf.compat.v1.Dimension(None) <= tf.compat.v1.Dimension(None)) == None +``` + + + + + + + + + + +
Args
+`other` + +Another Dimension. +
+ + + + + + + + + + + +
Returns
+The value of `self.value <= other.value` if both are known, otherwise +None. +
+ + + +

__lt__

+ +View source + + + +Returns True if `self` is known to be less than `other`. + +Dimensions are compared as follows: + +```python +(tf.compat.v1.Dimension(m) < tf.compat.v1.Dimension(n)) == (m < n) +(tf.compat.v1.Dimension(m) < tf.compat.v1.Dimension(None)) == None +(tf.compat.v1.Dimension(None) < tf.compat.v1.Dimension(n)) == None +(tf.compat.v1.Dimension(None) < tf.compat.v1.Dimension(None)) == None +``` + + + + + + + + + + +
Args
+`other` + +Another Dimension. +
+ + + + + + + + + + + +
Returns
+The value of `self.value < other.value` if both are known, otherwise +None. +
+ + + +

__mod__

+ +View source + + + +Returns `self` modulo `other`. + +Dimension modulo are computed as follows: + +```python +tf.compat.v1.Dimension(m) % tf.compat.v1.Dimension(n) == +tf.compat.v1.Dimension(m % n) +tf.compat.v1.Dimension(m) % tf.compat.v1.Dimension(None) # equiv. to +tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None) % tf.compat.v1.Dimension(n) # equiv. to +tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None) % tf.compat.v1.Dimension(None) # equiv. to +tf.compat.v1.Dimension(None) +``` + + + + + + + + + + +
Args
+`other` + +Another Dimension, or a value accepted by `as_dimension`. +
+ + + + + + + + + + + +
Returns
+A Dimension whose value is `self` modulo `other`. +
+ + + +

__mul__

+ +View source + + + +Returns the product of `self` and `other`. + +Dimensions are multiplied as follows: + +```python +tf.compat.v1.Dimension(m) * tf.compat.v1.Dimension(n) == +tf.compat.v1.Dimension(m * n) +tf.compat.v1.Dimension(m) * tf.compat.v1.Dimension(None) # equiv. to +tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None) * tf.compat.v1.Dimension(n) # equiv. to +tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None) * tf.compat.v1.Dimension(None) # equiv. to +tf.compat.v1.Dimension(None) +``` + + + + + + + + + + +
Args
+`other` + +Another Dimension, or a value accepted by `as_dimension`. +
+ + + + + + + + + + + +
Returns
+A Dimension whose value is the product of `self` and `other`. +
+ + + +

__ne__

+ +View source + + + +Returns true if `other` has a different known value from `self`. + + +

__radd__

+ +View source + + + +Returns the sum of `other` and `self`. + + + + + + + + + + + +
Args
+`other` + +Another Dimension, or a value accepted by `as_dimension`. +
+ + + + + + + + + + + +
Returns
+A Dimension whose value is the sum of `self` and `other`. +
+ + + +

__rdiv__

+ +View source + + + +Use `__floordiv__` via `x // y` instead. + +This function exists only to have a better error message. Instead of: +`TypeError: unsupported operand type(s) for /: 'int' and 'Dimension'`, +this function will explicitly call for usage of `//` instead. + + + + + + + + + + +
Args
+`other` + +Another `Dimension`. +
+ + + + + + + + + + + +
Raises
+TypeError. +
+ + + +

__rfloordiv__

+ +View source + + + +Returns the quotient of `other` and `self` rounded down. + + + + + + + + + + + +
Args
+`other` + +Another Dimension, or a value accepted by `as_dimension`. +
+ + + + + + + + + + + +
Returns
+A `Dimension` whose value is the integer quotient of `self` and `other`. +
+ + + +

__rmod__

+ +View source + + + +Returns `other` modulo `self`. + + + + + + + + + + + +
Args
+`other` + +Another Dimension, or a value accepted by `as_dimension`. +
+ + + + + + + + + + + +
Returns
+A Dimension whose value is `other` modulo `self`. +
+ + + +

__rmul__

+ +View source + + + +Returns the product of `self` and `other`. + + + + + + + + + + + +
Args
+`other` + +Another Dimension, or a value accepted by `as_dimension`. +
+ + + + + + + + + + + +
Returns
+A Dimension whose value is the product of `self` and `other`. +
+ + + +

__rsub__

+ +View source + + + +Returns the subtraction of `self` from `other`. + + + + + + + + + + + +
Args
+`other` + +Another Dimension, or a value accepted by `as_dimension`. +
+ + + + + + + + + + + +
Returns
+A Dimension whose value is the subtraction of `self` from `other`. +
+ + + +

__rtruediv__

+ +View source + + + +Use `__floordiv__` via `x // y` instead. + +This function exists only to have a better error message. Instead of: +`TypeError: unsupported operand type(s) for /: 'int' and 'Dimension'`, +this function will explicitly call for usage of `//` instead. + + + + + + + + + + +
Args
+`other` + +Another `Dimension`. +
+ + + + + + + + + + + +
Raises
+TypeError. +
+ + + +

__sub__

+ +View source + + + +Returns the subtraction of `other` from `self`. + +Dimensions are subtracted as follows: + +```python +tf.compat.v1.Dimension(m) - tf.compat.v1.Dimension(n) == +tf.compat.v1.Dimension(m - n) +tf.compat.v1.Dimension(m) - tf.compat.v1.Dimension(None) # equiv. to +tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None) - tf.compat.v1.Dimension(n) # equiv. to +tf.compat.v1.Dimension(None) +tf.compat.v1.Dimension(None) - tf.compat.v1.Dimension(None) # equiv. to +tf.compat.v1.Dimension(None) +``` + + + + + + + + + + +
Args
+`other` + +Another Dimension, or a value accepted by `as_dimension`. +
+ + + + + + + + + + + +
Returns
+A Dimension whose value is the subtraction of `other` from `self`. +
+ + + +

__truediv__

+ +View source + + + +Use `__floordiv__` via `x // y` instead. + +This function exists only to have a better error message. Instead of: +`TypeError: unsupported operand type(s) for /: 'Dimension' and 'int'`, +this function will explicitly call for usage of `//` instead. + + + + + + + + + + +
Args
+`other` + +Another `Dimension`. +
+ + + + + + + + + + + +
Raises
+TypeError. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/Event.md b/site/en/api_docs/python/tf/compat/v1/Event.md new file mode 100644 index 00000000000..b1dedd07098 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/Event.md @@ -0,0 +1,113 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.Event + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`file_version` + +`string file_version` +
+`graph_def` + +`bytes graph_def` +
+`log_message` + +`LogMessage log_message` +
+`meta_graph_def` + +`bytes meta_graph_def` +
+`session_log` + +`SessionLog session_log` +
+`step` + +`int64 step` +
+`summary` + +`Summary summary` +
+`tagged_run_metadata` + +`TaggedRunMetadata tagged_run_metadata` +
+`wall_time` + +`double wall_time` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/FixedLengthRecordReader.md b/site/en/api_docs/python/tf/compat/v1/FixedLengthRecordReader.md new file mode 100644 index 00000000000..f546b333210 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/FixedLengthRecordReader.md @@ -0,0 +1,517 @@ +description: A Reader that outputs fixed-length records from a file. + +
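+
+As a hedged sketch (the tag, step and log directory below are made-up examples), an `Event` can be assembled by hand and written with a `tf.compat.v1.summary.FileWriter`:
+
+```python
+import time
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # FileWriter is not eager-compatible
+
+event = tf.compat.v1.Event(
+    wall_time=time.time(),
+    step=10,
+    summary=tf.compat.v1.Summary(
+        value=[tf.compat.v1.Summary.Value(tag="loss", simple_value=0.25)]),
+)
+
+writer = tf.compat.v1.summary.FileWriter("/tmp/example_logs")  # hypothetical log dir
+writer.add_event(event)
+writer.close()
+```
+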
+ + + + + + + + + + +
+ +# tf.compat.v1.FixedLengthRecordReader + + + + + + + + + +A Reader that outputs fixed-length records from a file. + +Inherits From: [`ReaderBase`](../../../tf/compat/v1/ReaderBase.md) + + + + + + + +See ReaderBase for supported methods. + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`record_bytes` + +An int. +
+`header_bytes` + +An optional int. Defaults to 0. +
+`footer_bytes` + +An optional int. Defaults to 0. +
+`hop_bytes` + +An optional int. Defaults to 0. +
+`name` + +A name for the operation (optional). +
+`encoding` + +The type of encoding for the file. Defaults to none. +
+ + + +#### Eager Compatibility +Readers are not compatible with eager execution. Instead, please +use tf.data to get data into your model. + + + + + + + + + + + + + + + + + +
+`reader_ref` + +Op that implements the reader. +
+`supports_serialize` + +Whether the Reader implementation can serialize its state. +
+ + + +## Methods + +

num_records_produced

+ +View source + + + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

num_work_units_completed

+ +View source + + + +Returns the number of work units this reader has finished processing. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

read

+ +View source + + + +Returns the next record (key, value) pair produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (key, value). +
+`key` + +A string scalar Tensor. +
+`value` + +A string scalar Tensor. +
+ + + +
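+
+A hedged end-to-end sketch of the queue-based input pattern this reader is designed for; the file name, record size and queue setup are illustrative assumptions.
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+# Suppose each record in data.bin is exactly 16 bytes, with no header or footer.
+filename_queue = tf.compat.v1.train.string_input_producer(["data.bin"])
+reader = tf.compat.v1.FixedLengthRecordReader(record_bytes=16)
+key, value = reader.read(filename_queue)
+record = tf.io.decode_raw(value, tf.uint8)  # the 16 raw bytes of one record
+
+with tf.compat.v1.Session() as sess:
+    coord = tf.train.Coordinator()
+    threads = tf.compat.v1.train.start_queue_runners(sess=sess, coord=coord)
+    print(sess.run([key, record]))
+    coord.request_stop()
+    coord.join(threads)
+```
+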

read_up_to

+ +View source + + + +Returns up to num_records (key, value) pairs produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g., when the +Reader needs to start reading from a new file since it has +finished with the previous file). +It may return less than num_records even before the last batch. + + + + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`num_records` + +Number of records to read. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (keys, values). +
+`keys` + +A 1-D string Tensor. +
+`values` + +A 1-D string Tensor. +
+ + + +

reset

+ +View source + + + +Restore a reader to its initial clean state. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

restore_state

+ +View source + + + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + + + + + + + + + + + + + +
Args
+`state` + +A string Tensor. +Result of a SerializeState of a Reader with matching type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

serialize_state

+ +View source + + + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A string Tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/GPUOptions.md b/site/en/api_docs/python/tf/compat/v1/GPUOptions.md new file mode 100644 index 00000000000..119538030b0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/GPUOptions.md @@ -0,0 +1,106 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.compat.v1.GPUOptions + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allocator_type` + +`string allocator_type` +
+`allow_growth` + +`bool allow_growth` +
+`deferred_deletion_bytes` + +`int64 deferred_deletion_bytes` +
+`experimental` + +`Experimental experimental` +
+`force_gpu_compatible` + +`bool force_gpu_compatible` +
+`per_process_gpu_memory_fraction` + +`double per_process_gpu_memory_fraction` +
+`polling_active_delay_usecs` + +`int32 polling_active_delay_usecs` +
+`polling_inactive_delay_msecs` + +`int32 polling_inactive_delay_msecs` +
+`visible_device_list` + +`string visible_device_list` +
+ + + +## Child Classes +[`class Experimental`](../../../tf/compat/v1/GPUOptions/Experimental.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/GPUOptions/Experimental.md b/site/en/api_docs/python/tf/compat/v1/GPUOptions/Experimental.md new file mode 100644 index 00000000000..fcc4f695717 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/GPUOptions/Experimental.md @@ -0,0 +1,99 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.compat.v1.GPUOptions.Experimental + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`collective_ring_order` + +`string collective_ring_order` +
+`kernel_tracker_max_bytes` + +`int32 kernel_tracker_max_bytes` +
+`kernel_tracker_max_interval` + +`int32 kernel_tracker_max_interval` +
+`kernel_tracker_max_pending` + +`int32 kernel_tracker_max_pending` +
+`num_dev_to_dev_copy_streams` + +`int32 num_dev_to_dev_copy_streams` +
+`timestamped_allocator` + +`bool timestamped_allocator` +
+`use_unified_memory` + +`bool use_unified_memory` +
+`virtual_devices` + +`repeated VirtualDevices virtual_devices` +
+ + + +## Child Classes +[`class VirtualDevices`](../../../../tf/compat/v1/GPUOptions/Experimental/VirtualDevices.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/GPUOptions/Experimental/VirtualDevices.md b/site/en/api_docs/python/tf/compat/v1/GPUOptions/Experimental/VirtualDevices.md new file mode 100644 index 00000000000..d21052e4177 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/GPUOptions/Experimental/VirtualDevices.md @@ -0,0 +1,46 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.GPUOptions.Experimental.VirtualDevices + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + +
+`memory_limit_mb` + +`repeated float memory_limit_mb` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/GraphDef.md b/site/en/api_docs/python/tf/compat/v1/GraphDef.md new file mode 100644 index 00000000000..16edde4a55c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/GraphDef.md @@ -0,0 +1,67 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.GraphDef + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + +
+`library` + +`FunctionDefLibrary library` +
+`node` + +`repeated NodeDef node` +
+`version` + +`int32 version` +
+`versions` + +`VersionDef versions` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/GraphKeys.md b/site/en/api_docs/python/tf/compat/v1/GraphKeys.md new file mode 100644 index 00000000000..5a1397985fc --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/GraphKeys.md @@ -0,0 +1,141 @@ +description: Standard names to use for graph collections. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.GraphKeys + + + + + + + + + +Standard names to use for graph collections. + + + +The standard library uses various well-known names to collect and +retrieve values associated with a graph. For example, the +`tf.Optimizer` subclasses default to optimizing the variables +collected under `tf.GraphKeys.TRAINABLE_VARIABLES` if none is +specified, but it is also possible to pass an explicit list of +variables. + +The following standard keys are defined: + +* `GLOBAL_VARIABLES`: the default collection of `Variable` objects, shared + across the distributed environment (model variables are a subset of these). See + tf.compat.v1.global_variables + for more details. + Commonly, all `TRAINABLE_VARIABLES` variables will be in `MODEL_VARIABLES`, + and all `MODEL_VARIABLES` variables will be in `GLOBAL_VARIABLES`. +* `LOCAL_VARIABLES`: the subset of `Variable` objects that are local to each + machine. Usually used for temporary variables, like counters. + Note: use `tf.contrib.framework.local_variable` to add to this collection. +* `MODEL_VARIABLES`: the subset of `Variable` objects that are used in the + model for inference (feed forward). Note: use + `tf.contrib.framework.model_variable` to add to this collection. +* `TRAINABLE_VARIABLES`: the subset of `Variable` objects that will + be trained by an optimizer. See + tf.compat.v1.trainable_variables + for more details. +* `SUMMARIES`: the summary `Tensor` objects that have been created in the + graph. See + tf.compat.v1.summary.merge_all + for more details. +* `QUEUE_RUNNERS`: the `QueueRunner` objects that are used to + produce input for a computation. See + tf.compat.v1.train.start_queue_runners + for more details. +* `MOVING_AVERAGE_VARIABLES`: the subset of `Variable` objects that will also + keep moving averages. See + tf.compat.v1.moving_average_variables + for more details. +* `REGULARIZATION_LOSSES`: regularization losses collected during graph + construction. 
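+As a brief illustration (a sketch added here, not part of the original
+reference): variables are placed into these collections automatically, and
+additional values can be attached to any key with
+`tf.compat.v1.add_to_collection`.
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+w = tf.compat.v1.get_variable("w", shape=[3])                   # trainable by default
+b = tf.compat.v1.get_variable("b", shape=[3], trainable=False)  # global only
+
+trainable = tf.compat.v1.get_collection(
+    tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES)
+global_vars = tf.compat.v1.get_collection(
+    tf.compat.v1.GraphKeys.GLOBAL_VARIABLES)
+
+# Any tensor can be filed under a standard key as well.
+tf.compat.v1.add_to_collection(tf.compat.v1.GraphKeys.LOSSES, tf.reduce_sum(w))
+
+print([v.name for v in trainable])    # ['w:0']
+print([v.name for v in global_vars])  # ['w:0', 'b:0']
+print(tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.LOSSES))
+```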
+ +The following standard keys are _defined_, but their collections are **not** +automatically populated as many of the others are: + +* `WEIGHTS` +* `BIASES` +* `ACTIVATIONS` + +## Class Variables + +* `ACTIVATIONS = 'activations'` +* `ASSET_FILEPATHS = 'asset_filepaths'` +* `BIASES = 'biases'` +* `CONCATENATED_VARIABLES = 'concatenated_variables'` +* `COND_CONTEXT = 'cond_context'` +* `EVAL_STEP = 'eval_step'` +* `GLOBAL_STEP = 'global_step'` +* `GLOBAL_VARIABLES = 'variables'` +* `INIT_OP = 'init_op'` +* `LOCAL_INIT_OP = 'local_init_op'` +* `LOCAL_RESOURCES = 'local_resources'` +* `LOCAL_VARIABLES = 'local_variables'` +* `LOSSES = 'losses'` +* `METRIC_VARIABLES = 'metric_variables'` +* `MODEL_VARIABLES = 'model_variables'` +* `MOVING_AVERAGE_VARIABLES = 'moving_average_variables'` +* `QUEUE_RUNNERS = 'queue_runners'` +* `READY_FOR_LOCAL_INIT_OP = 'ready_for_local_init_op'` +* `READY_OP = 'ready_op'` +* `REGULARIZATION_LOSSES = 'regularization_losses'` +* `RESOURCES = 'resources'` +* `SAVEABLE_OBJECTS = 'saveable_objects'` +* `SAVERS = 'savers'` +* `SUMMARIES = 'summaries'` +* `SUMMARY_OP = 'summary_op'` +* `TABLE_INITIALIZERS = 'table_initializer'` +* `TRAINABLE_RESOURCE_VARIABLES = 'trainable_resource_variables'` +* `TRAINABLE_VARIABLES = 'trainable_variables'` +* `TRAIN_OP = 'train_op'` +* `UPDATE_OPS = 'update_ops'` +* `VARIABLES = 'variables'` +* `WEIGHTS = 'weights'` +* `WHILE_CONTEXT = 'while_context'` diff --git a/site/en/api_docs/python/tf/compat/v1/GraphOptions.md b/site/en/api_docs/python/tf/compat/v1/GraphOptions.md new file mode 100644 index 00000000000..0de4d46a2d5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/GraphOptions.md @@ -0,0 +1,102 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.GraphOptions + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`build_cost_model` + +`int64 build_cost_model` +
+`build_cost_model_after` + +`int64 build_cost_model_after` +
+`enable_bfloat16_sendrecv` + +`bool enable_bfloat16_sendrecv` +
+`enable_recv_scheduling` + +`bool enable_recv_scheduling` +
+`infer_shapes` + +`bool infer_shapes` +
+`optimizer_options` + +`OptimizerOptions optimizer_options` +
+`place_pruned_graph` + +`bool place_pruned_graph` +
+`rewrite_options` + +`RewriterConfig rewrite_options` +
+`timeline_step` + +`int32 timeline_step` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/HistogramProto.md b/site/en/api_docs/python/tf/compat/v1/HistogramProto.md new file mode 100644 index 00000000000..0dac6610142 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/HistogramProto.md @@ -0,0 +1,88 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.HistogramProto + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`bucket` + +`repeated double bucket` +
+`bucket_limit` + +`repeated double bucket_limit` +
+`max` + +`double max` +
+`min` + +`double min` +
+`num` + +`double num` +
+`sum` + +`double sum` +
+`sum_squares` + +`double sum_squares` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/IdentityReader.md b/site/en/api_docs/python/tf/compat/v1/IdentityReader.md new file mode 100644 index 00000000000..65115f6b659 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/IdentityReader.md @@ -0,0 +1,484 @@ +description: A Reader that outputs the queued work as both the key and value. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.IdentityReader + + + + + + + + + +A Reader that outputs the queued work as both the key and value. + +Inherits From: [`ReaderBase`](../../../tf/compat/v1/ReaderBase.md) + + + + + + + +To use, enqueue strings in a Queue. Read will take the front +work string and output (work, work). + +See ReaderBase for supported methods. + + + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + +#### Eager Compatibility +Readers are not compatible with eager execution. Instead, please +use tf.data to get data into your model. + + + + + + + + + + + + + + + + + +
+`reader_ref` + +Op that implements the reader. +
+`supports_serialize` + +Whether the Reader implementation can serialize its state. +
+ + + +## Methods + +
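+A minimal sketch (not part of the original page; the queued strings are
+arbitrary examples): each dequeued work string comes back unchanged as both
+the key and the value.
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+queue = tf.compat.v1.FIFOQueue(capacity=10, dtypes=[tf.string])
+enqueue_op = queue.enqueue_many([["alpha", "beta", "gamma"]])
+
+reader = tf.compat.v1.IdentityReader()
+key, value = reader.read(queue)
+
+with tf.compat.v1.Session() as sess:
+    sess.run(enqueue_op)
+    for _ in range(3):
+        k, v = sess.run([key, value])
+        print(k, k == v)  # b'alpha' True, b'beta' True, b'gamma' True
+```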

num_records_produced

+ +View source + + + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

num_work_units_completed

+ +View source + + + +Returns the number of work units this reader has finished processing. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

read

+ +View source + + + +Returns the next record (key, value) pair produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (key, value). +
+`key` + +A string scalar Tensor. +
+`value` + +A string scalar Tensor. +
+ + + +

read_up_to

+ +View source + + + +Returns up to num_records (key, value) pairs produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g., when the +Reader needs to start reading from a new file since it has +finished with the previous file). +It may return less than num_records even before the last batch. + + + + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`num_records` + +Number of records to read. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (keys, values). +
+`keys` + +A 1-D string Tensor. +
+`values` + +A 1-D string Tensor. +
+ + + +

reset

+ +View source + + + +Restore a reader to its initial clean state. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

restore_state

+ +View source + + + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + + + + + + + + + + + + + +
Args
+`state` + +A string Tensor. +Result of a SerializeState of a Reader with matching type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

serialize_state

+ +View source + + + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A string Tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/InteractiveSession.md b/site/en/api_docs/python/tf/compat/v1/InteractiveSession.md new file mode 100644 index 00000000000..ea762fe39b8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/InteractiveSession.md @@ -0,0 +1,761 @@ +description: A TensorFlow Session for use in interactive contexts, such as a shell. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.InteractiveSession + + + + + + + + + +A TensorFlow `Session` for use in interactive contexts, such as a shell. + + + + + + + +The only difference with a regular `Session` is that an `InteractiveSession` +installs itself as the default session on construction. +The methods tf.Tensor.eval +and tf.Operation.run +will use that session to run ops. + +This is convenient in interactive shells and [IPython +notebooks](http://ipython.org), as it avoids having to pass an explicit +`Session` object to run ops. + +#### For example: + + + +```python +sess = tf.compat.v1.InteractiveSession() +a = tf.constant(5.0) +b = tf.constant(6.0) +c = a * b +# We can just use 'c.eval()' without passing 'sess' +print(c.eval()) +sess.close() +``` + +Note that a regular session installs itself as the default session when it +is created in a `with` statement. The common usage in non-interactive +programs is to follow that pattern: + +```python +a = tf.constant(5.0) +b = tf.constant(6.0) +c = a * b +with tf.compat.v1.Session(): + # We can also use 'c.eval()' here. + print(c.eval()) +``` + + + + + + + + + + + + + + + + +
+`target` + +(Optional.) The execution engine to connect to. Defaults to using +an in-process engine. +
+`graph` + +(Optional.) The `Graph` to be launched (described above). +
+`config` + +(Optional) `ConfigProto` proto used to configure the session. +
+ + + + + + + + + + + + + + + + + + + + +
+`graph` + +The graph that was launched in this session. +
+`graph_def` + +A serializable version of the underlying TensorFlow graph. +
+`sess_str` + +The TensorFlow process to which this session will connect. +
+ + + +## Methods + +

as_default

+ +View source + + + +Returns a context manager that makes this object the default session. + +Use with the `with` keyword to specify that calls to +tf.Operation.run or tf.Tensor.eval should be executed in +this session. + +```python +c = tf.constant(..) +sess = tf.compat.v1.Session() + +with sess.as_default(): + assert tf.compat.v1.get_default_session() is sess + print(c.eval()) +``` + +To get the current default session, use tf.compat.v1.get_default_session. + +*N.B.* The `as_default` context manager *does not* close the +session when you exit the context, and you must close the session +explicitly. + +```python +c = tf.constant(...) +sess = tf.compat.v1.Session() +with sess.as_default(): + print(c.eval()) +# ... +with sess.as_default(): + print(c.eval()) + +sess.close() +``` + +Alternatively, you can use `with tf.compat.v1.Session():` to create a +session that is automatically closed on exiting the context, +including when an uncaught exception is raised. + +*N.B.* The default session is a property of the current thread. If you +create a new thread, and wish to use the default session in that +thread, you must explicitly add a `with sess.as_default():` in that +thread's function. + +*N.B.* Entering a `with sess.as_default():` block does not affect +the current default graph. If you are using multiple graphs, and +`sess.graph` is different from the value of +tf.compat.v1.get_default_graph, you must explicitly enter a +`with sess.graph.as_default():` block to make `sess.graph` the default +graph. + + + + + + + + + +
Returns
+A context manager using this session as the default session. +
+ + + +

close

+ +View source + + + +Closes an `InteractiveSession`. + + +

list_devices

+ +View source + + + +Lists available devices in this session. + +```python +devices = sess.list_devices() +for d in devices: + print(d.name) +``` + +#### Where: + +Each element in the list has the following properties + +* `name`: A string with the full name of the device. ex: + `/job:worker/replica:0/task:3/device:CPU:0` +* `device_type`: The type of the device (e.g. `CPU`, `GPU`, `TPU`.) +* `memory_limit`: The maximum amount of memory available on the device. + Note: depending on the device, it is possible the usable memory could + be substantially less. + + + + + + + + + + + +
Raises
+`tf.errors.OpError` + +If it encounters an error (e.g. session is in an +invalid state, or network errors occur). +
+ + + + + + + + + + + +
Returns
+A list of devices in the session. +
+ + + +

make_callable

+ +View source + + + +Returns a Python callable that runs a particular step. + +The returned callable will take `len(feed_list)` arguments whose types +must be compatible feed values for the respective elements of `feed_list`. +For example, if element `i` of `feed_list` is a tf.Tensor, the `i`th +argument to the returned callable must be a numpy ndarray (or something +convertible to an ndarray) with matching element type and shape. See +`tf.Session.run` for details of the allowable feed key and value types. + +The returned callable will have the same return type as +`tf.Session.run(fetches, ...)`. For example, if `fetches` is a tf.Tensor, +the callable will return a numpy ndarray; if `fetches` is a tf.Operation, +it will return `None`. + + + + + + + + + + + + + + + + +
Args
+`fetches` + +A value or list of values to fetch. See `tf.Session.run` for +details of the allowable fetch types. +
+`feed_list` + +(Optional.) A list of `feed_dict` keys. See `tf.Session.run` +for details of the allowable feed key types. +
+`accept_options` + +(Optional.) If `True`, the returned `Callable` will be +able to accept tf.compat.v1.RunOptions and tf.compat.v1.RunMetadata +as optional keyword arguments `options` and `run_metadata`, +respectively, with the same syntax and semantics as `tf.Session.run`, +which is useful for certain use cases (profiling and debugging) but will +result in measurable slowdown of the `Callable`'s +performance. Default: `False`. +
+ + + + + + + + + + + +
Returns
+A function that when called will execute the step defined by +`feed_list` and `fetches` in this session. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If `fetches` or `feed_list` cannot be interpreted +as arguments to `tf.Session.run`. +
+ + + +
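+A small sketch of `make_callable` with an `InteractiveSession` (added here
+for illustration; the placeholder shape and input array are assumptions):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3])
+y = tf.reduce_mean(x * 2.0)
+
+sess = tf.compat.v1.InteractiveSession()
+# One callable replaces repeated sess.run(y, feed_dict={x: ...}) calls.
+step = sess.make_callable(y, feed_list=[x])
+
+print(step(np.ones((4, 3), dtype=np.float32)))  # 2.0
+sess.close()
+```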

partial_run

+ +View source + + + +Continues the execution with more feeds and fetches. + +This is EXPERIMENTAL and subject to change. + +To use partial execution, a user first calls `partial_run_setup()` and +then a sequence of `partial_run()`. `partial_run_setup` specifies the +list of feeds and fetches that will be used in the subsequent +`partial_run` calls. + +The optional `feed_dict` argument allows the caller to override +the value of tensors in the graph. See run() for more information. + +Below is a simple example: + +```python +a = array_ops.placeholder(dtypes.float32, shape=[]) +b = array_ops.placeholder(dtypes.float32, shape=[]) +c = array_ops.placeholder(dtypes.float32, shape=[]) +r1 = math_ops.add(a, b) +r2 = math_ops.multiply(r1, c) + +h = sess.partial_run_setup([r1, r2], [a, b, c]) +res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2}) +res = sess.partial_run(h, r2, feed_dict={c: res}) +``` + + + + + + + + + + + + + + + + +
Args
+`handle` + +A handle for a sequence of partial runs. +
+`fetches` + +A single graph element, a list of graph elements, or a dictionary +whose values are graph elements or lists of graph elements (see +documentation for `run`). +
+`feed_dict` + +A dictionary that maps graph elements to values (described +above). +
+ + + + + + + + + + + +
Returns
+Either a single value if `fetches` is a single graph element, or +a list of values if `fetches` is a list, or a dictionary with the +same keys as `fetches` if that is a dictionary +(see documentation for `run`). +
+ + + + + + + + + + + + +
Raises
+`tf.errors.OpError` + +Or one of its subclasses on error. +
+ + + +

partial_run_setup

+ +View source + + + +Sets up a graph with feeds and fetches for partial run. + +This is EXPERIMENTAL and subject to change. + +Note that contrary to `run`, `feeds` only specifies the graph elements. +The tensors will be supplied by the subsequent `partial_run` calls. + + + + + + + + + + + + + +
Args
+`fetches` + +A single graph element, or a list of graph elements. +
+`feeds` + +A single graph element, or a list of graph elements. +
+ + + + + + + + + + + +
Returns
+A handle for partial run. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If this `Session` is in an invalid state (e.g. has been +closed). +
+`TypeError` + +If `fetches` or `feed_dict` keys are of an inappropriate type. +
+`tf.errors.OpError` + +Or one of its subclasses if a TensorFlow error happens. +
+ + + +

run

+ +View source + + + +Runs operations and evaluates tensors in `fetches`. + +This method runs one "step" of TensorFlow computation, by +running the necessary graph fragment to execute every `Operation` +and evaluate every `Tensor` in `fetches`, substituting the values in +`feed_dict` for the corresponding input values. + +The `fetches` argument may be a single graph element, or an arbitrarily +nested list, tuple, namedtuple, dict, or OrderedDict containing graph +elements at its leaves. A graph element can be one of the following types: + +* A tf.Operation. + The corresponding fetched value will be `None`. +* A tf.Tensor. + The corresponding fetched value will be a numpy ndarray containing the + value of that tensor. +* A tf.SparseTensor. + The corresponding fetched value will be a + tf.compat.v1.SparseTensorValue + containing the value of that sparse tensor. +* A `get_tensor_handle` op. The corresponding fetched value will be a + numpy ndarray containing the handle of that tensor. +* A `string` which is the name of a tensor or operation in the graph. + +The value returned by `run()` has the same shape as the `fetches` argument, +where the leaves are replaced by the corresponding values returned by +TensorFlow. + +#### Example: + + + +```python + a = tf.constant([10, 20]) + b = tf.constant([1.0, 2.0]) + # 'fetches' can be a singleton + v = session.run(a) + # v is the numpy array [10, 20] + # 'fetches' can be a list. + v = session.run([a, b]) + # v is a Python list with 2 numpy arrays: the 1-D array [10, 20] and the + # 1-D array [1.0, 2.0] + # 'fetches' can be arbitrary lists, tuples, namedtuple, dicts: + MyData = collections.namedtuple('MyData', ['a', 'b']) + v = session.run({'k1': MyData(a, b), 'k2': [b, a]}) + # v is a dict with + # v['k1'] is a MyData namedtuple with 'a' (the numpy array [10, 20]) and + # 'b' (the numpy array [1.0, 2.0]) + # v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array + # [10, 20]. +``` + +The optional `feed_dict` argument allows the caller to override +the value of tensors in the graph. Each key in `feed_dict` can be +one of the following types: + +* If the key is a tf.Tensor, the + value may be a Python scalar, string, list, or numpy ndarray + that can be converted to the same `dtype` as that + tensor. Additionally, if the key is a + tf.compat.v1.placeholder, the shape of + the value will be checked for compatibility with the placeholder. +* If the key is a + tf.SparseTensor, + the value should be a + tf.compat.v1.SparseTensorValue. +* If the key is a nested tuple of `Tensor`s or `SparseTensor`s, the value + should be a nested tuple with the same structure that maps to their + corresponding values as above. + +Each value in `feed_dict` must be convertible to a numpy array of the dtype +of the corresponding key. + +The optional `options` argument expects a [`RunOptions`] proto. The options +allow controlling the behavior of this particular step (e.g. turning tracing +on). + +The optional `run_metadata` argument expects a [`RunMetadata`] proto. When +appropriate, the non-Tensor output of this step will be collected there. For +example, when users turn on tracing in `options`, the profiled info will be +collected into this argument and passed back. + + + + + + + + + + + + + + + + + + + +
Args
+`fetches` + +A single graph element, a list of graph elements, or a dictionary +whose values are graph elements or lists of graph elements (described +above). +
+`feed_dict` + +A dictionary that maps graph elements to values (described +above). +
+`options` + +A [`RunOptions`] protocol buffer +
+`run_metadata` + +A [`RunMetadata`] protocol buffer +
+ + + + + + + + + + + +
Returns
+Either a single value if `fetches` is a single graph element, or +a list of values if `fetches` is a list, or a dictionary with the +same keys as `fetches` if that is a dictionary (described above). +Order in which `fetches` operations are evaluated inside the call +is undefined. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If this `Session` is in an invalid state (e.g. has been +closed). +
+`TypeError` + +If `fetches` or `feed_dict` keys are of an inappropriate type. +
+`ValueError` + +If `fetches` or `feed_dict` keys are invalid or refer to a +`Tensor` that doesn't exist. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/LMDBReader.md b/site/en/api_docs/python/tf/compat/v1/LMDBReader.md new file mode 100644 index 00000000000..f1c2cbd3de0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/LMDBReader.md @@ -0,0 +1,488 @@ +description: A Reader that outputs the records from a LMDB file. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.LMDBReader + + + + + + + + + +A Reader that outputs the records from a LMDB file. + +Inherits From: [`ReaderBase`](../../../tf/compat/v1/ReaderBase.md) + + + + + + + +See ReaderBase for supported methods. + + + + + + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+`options` + +A LMDBRecordOptions object (optional). +
+ + + +#### Eager Compatibility +Readers are not compatible with eager execution. Instead, please +use tf.data to get data into your model. + + + + + + + + + + + + + + + + + +
+`reader_ref` + +Op that implements the reader. +
+`supports_serialize` + +Whether the Reader implementation can serialize its state. +
+ + + +## Methods + +

num_records_produced

+ +View source + + + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

num_work_units_completed

+ +View source + + + +Returns the number of work units this reader has finished processing. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

read

+ +View source + + + +Returns the next record (key, value) pair produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (key, value). +
+`key` + +A string scalar Tensor. +
+`value` + +A string scalar Tensor. +
+ + + +

read_up_to

+ +View source + + + +Returns up to num_records (key, value) pairs produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g., when the +Reader needs to start reading from a new file since it has +finished with the previous file). +It may return less than num_records even before the last batch. + + + + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`num_records` + +Number of records to read. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (keys, values). +
+`keys` + +A 1-D string Tensor. +
+`values` + +A 1-D string Tensor. +
+ + + +

reset

+ +View source + + + +Restore a reader to its initial clean state. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

restore_state

+ +View source + + + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + + + + + + + + + + + + + +
Args
+`state` + +A string Tensor. +Result of a SerializeState of a Reader with matching type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

serialize_state

+ +View source + + + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A string Tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/LogMessage.md b/site/en/api_docs/python/tf/compat/v1/LogMessage.md new file mode 100644 index 00000000000..b567f839a5a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/LogMessage.md @@ -0,0 +1,69 @@ +description: A ProtocolMessage + +
+ + + + + + + + + +
+ +# tf.compat.v1.LogMessage + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`level` + +`Level level` +
+`message` + +`string message` +
+ + + +## Class Variables + +* `DEBUGGING = 10` +* `ERROR = 40` +* `FATAL = 50` +* `INFO = 20` +* `Level` +* `UNKNOWN = 0` +* `WARN = 30` diff --git a/site/en/api_docs/python/tf/compat/v1/MetaGraphDef.md b/site/en/api_docs/python/tf/compat/v1/MetaGraphDef.md new file mode 100644 index 00000000000..6ea383d670c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/MetaGraphDef.md @@ -0,0 +1,98 @@ +description: A ProtocolMessage + +
+ + + + + +
+ +# tf.compat.v1.MetaGraphDef + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`asset_file_def` + +`repeated AssetFileDef asset_file_def` +
+`collection_def` + +`repeated CollectionDefEntry collection_def` +
+`graph_def` + +`GraphDef graph_def` +
+`meta_info_def` + +`MetaInfoDef meta_info_def` +
+`object_graph_def` + +`SavedObjectGraph object_graph_def` +
+`saver_def` + +`SaverDef saver_def` +
+`signature_def` + +`repeated SignatureDefEntry signature_def` +
+ + + +## Child Classes +[`class CollectionDefEntry`](../../../tf/compat/v1/MetaGraphDef/CollectionDefEntry.md) + +[`class MetaInfoDef`](../../../tf/compat/v1/MetaGraphDef/MetaInfoDef.md) + +[`class SignatureDefEntry`](../../../tf/compat/v1/MetaGraphDef/SignatureDefEntry.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/CollectionDefEntry.md b/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/CollectionDefEntry.md new file mode 100644 index 00000000000..c9e85256b21 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/CollectionDefEntry.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.MetaGraphDef.CollectionDefEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`key` + +`string key` +
+`value` + +`CollectionDef value` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/MetaInfoDef.md b/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/MetaInfoDef.md new file mode 100644 index 00000000000..c58ef58de43 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/MetaInfoDef.md @@ -0,0 +1,99 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.compat.v1.MetaGraphDef.MetaInfoDef + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`any_info` + +`Any any_info` +
+`function_aliases` + +`repeated FunctionAliasesEntry function_aliases` +
+`meta_graph_version` + +`string meta_graph_version` +
+`stripped_default_attrs` + +`bool stripped_default_attrs` +
+`stripped_op_list` + +`OpList stripped_op_list` +
+`tags` + +`repeated string tags` +
+`tensorflow_git_version` + +`string tensorflow_git_version` +
+`tensorflow_version` + +`string tensorflow_version` +
+ + + +## Child Classes +[`class FunctionAliasesEntry`](../../../../tf/compat/v1/MetaGraphDef/MetaInfoDef/FunctionAliasesEntry.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/MetaInfoDef/FunctionAliasesEntry.md b/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/MetaInfoDef/FunctionAliasesEntry.md new file mode 100644 index 00000000000..b27c7abe38a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/MetaInfoDef/FunctionAliasesEntry.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`key` + +`string key` +
+`value` + +`string value` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/SignatureDefEntry.md b/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/SignatureDefEntry.md new file mode 100644 index 00000000000..18dee88be03 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/MetaGraphDef/SignatureDefEntry.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.MetaGraphDef.SignatureDefEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`key` + +`string key` +
+`value` + +`SignatureDef value` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/NameAttrList.md b/site/en/api_docs/python/tf/compat/v1/NameAttrList.md new file mode 100644 index 00000000000..55b2238e9a3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/NameAttrList.md @@ -0,0 +1,57 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.compat.v1.NameAttrList + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`attr` + +`repeated AttrEntry attr` +
+`name` + +`string name` +
+ + + +## Child Classes +[`class AttrEntry`](../../../tf/compat/v1/NameAttrList/AttrEntry.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/NameAttrList/AttrEntry.md b/site/en/api_docs/python/tf/compat/v1/NameAttrList/AttrEntry.md new file mode 100644 index 00000000000..c4b619f5c3d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/NameAttrList/AttrEntry.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.NameAttrList.AttrEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`key` + +`string key` +
+`value` + +`AttrValue value` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/NodeDef.md b/site/en/api_docs/python/tf/compat/v1/NodeDef.md new file mode 100644 index 00000000000..75ccc41276e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/NodeDef.md @@ -0,0 +1,88 @@ +description: A ProtocolMessage + +
+ + + + +
+ +# tf.compat.v1.NodeDef + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`attr` + +`repeated AttrEntry attr` +
+`device` + +`string device` +
+`experimental_debug_info` + +`ExperimentalDebugInfo experimental_debug_info` +
+`input` + +`repeated string input` +
+`name` + +`string name` +
+`op` + +`string op` +
+ + + +## Child Classes +[`class AttrEntry`](../../../tf/compat/v1/NodeDef/AttrEntry.md) + +[`class ExperimentalDebugInfo`](../../../tf/compat/v1/NodeDef/ExperimentalDebugInfo.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/NodeDef/AttrEntry.md b/site/en/api_docs/python/tf/compat/v1/NodeDef/AttrEntry.md new file mode 100644 index 00000000000..4fc49301700 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/NodeDef/AttrEntry.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.NodeDef.AttrEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`key` + +`string key` +
+`value` + +`AttrValue value` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/NodeDef/ExperimentalDebugInfo.md b/site/en/api_docs/python/tf/compat/v1/NodeDef/ExperimentalDebugInfo.md new file mode 100644 index 00000000000..1b6666f77b7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/NodeDef/ExperimentalDebugInfo.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.NodeDef.ExperimentalDebugInfo + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`original_func_names` + +`repeated string original_func_names` +
+`original_node_names` + +`repeated string original_node_names` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/OptimizerOptions.md b/site/en/api_docs/python/tf/compat/v1/OptimizerOptions.md new file mode 100644 index 00000000000..63f293f822b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/OptimizerOptions.md @@ -0,0 +1,99 @@ +description: A ProtocolMessage + +
+ + + + + + + + + + +
+ +# tf.compat.v1.OptimizerOptions + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`do_common_subexpression_elimination` + +`bool do_common_subexpression_elimination` +
+`do_constant_folding` + +`bool do_constant_folding` +
+`do_function_inlining` + +`bool do_function_inlining` +
+`global_jit_level` + +`GlobalJitLevel global_jit_level` +
+`max_folded_constant_in_bytes` + +`int64 max_folded_constant_in_bytes` +
+`opt_level` + +`Level opt_level` +
+ + + +## Class Variables + +* `DEFAULT = 0` +* `GlobalJitLevel` +* `L0 = -1` +* `L1 = 0` +* `Level` +* `OFF = -1` +* `ON_1 = 1` +* `ON_2 = 2` diff --git a/site/en/api_docs/python/tf/compat/v1/Print.md b/site/en/api_docs/python/tf/compat/v1/Print.md new file mode 100644 index 00000000000..4c81c1acd60 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/Print.md @@ -0,0 +1,127 @@ +description: Prints a list of tensors. (deprecated) + +
+ + +
+ +# tf.compat.v1.Print + + + + + + + + + +Prints a list of tensors. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-08-20. +Instructions for updating: +Use tf.print instead of tf.Print. Note that tf.print returns a no-output operator that directly prints the output. Outside of defuns or eager mode, this operator will not be executed unless it is directly specified in session.run or used as a control dependency for other operators. This is only a concern in graph mode. Below is an example of how to ensure tf.print executes in graph mode: + + +This is an identity op (behaves like tf.identity) with the side effect +of printing `data` when evaluating. + +Note: This op prints to the standard error. It is not currently compatible + with jupyter notebook (printing to the notebook *server's* output, not into + the notebook). + +Additionally, to use tf.print in python 2.7, users must make sure to import +the following: + +`from __future__ import print_function` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_` + +A tensor passed through this op. +
+`data` + +A list of tensors to print out when op is evaluated. +
+`message` + +A string, prefix of the printed message. +
+`first_n` + +Log only the first `first_n` times. Negative numbers log always; +this is the default. +
+`summarize` + +Only print this many entries of each tensor. If None, then a +maximum of 3 elements are printed per input tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type and contents as `input_`.
+
+```python
+sess = tf.compat.v1.Session()
+with sess.as_default():
+    tensor = tf.range(10)
+    print_op = tf.print(tensor)
+    with tf.control_dependencies([print_op]):
+        out = tf.add(tensor, tensor)
+    sess.run(out)
+```
+
+ diff --git a/site/en/api_docs/python/tf/compat/v1/ReaderBase.md b/site/en/api_docs/python/tf/compat/v1/ReaderBase.md new file mode 100644 index 00000000000..3a7a5e8e457 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ReaderBase.md @@ -0,0 +1,513 @@ +description: Base class for different Reader types, that produce a record every step. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.ReaderBase + + + + + + + + + +Base class for different Reader types, that produce a record every step. + + + + + + + +Conceptually, Readers convert string 'work units' into records (key, +value pairs). Typically the 'work units' are filenames and the +records are extracted from the contents of those files. We want a +single record produced per step, but a work unit can correspond to +many records. + +Therefore we introduce some decoupling using a queue. The queue +contains the work units and the Reader dequeues from the queue when +it is asked to produce a record (via Read()) but it has finished the +last work unit. + + + + + + + + + + + + + + + +
+`reader_ref` + +The operation that implements the reader. +
+`supports_serialize` + +True if the reader implementation can +serialize its state. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If eager execution is enabled. +
+ + + +#### Eager Compatibility +Readers are not compatible with eager execution. Instead, please +use tf.data to get data into your model. + + + + + + + + + + + + + + + + + +
+`reader_ref` + +Op that implements the reader. +
+`supports_serialize` + +Whether the Reader implementation can serialize its state. +
+ + + +## Methods + +
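+To make the queue decoupling above concrete, a hedged sketch of the usual
+pattern (added here; tf.compat.v1.IdentityReader merely stands in for any
+concrete `ReaderBase` subclass, and the work strings are placeholders):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+# Work units are fed by a queue runner; the reader dequeues only on demand.
+work_queue = tf.compat.v1.train.string_input_producer(
+    ["unit-%d" % i for i in range(10)], num_epochs=1)
+reader = tf.compat.v1.IdentityReader()
+keys, values = reader.read_up_to(work_queue, num_records=4)  # may return fewer
+
+with tf.compat.v1.Session() as sess:
+    sess.run(tf.compat.v1.local_variables_initializer())  # num_epochs counter
+    coord = tf.compat.v1.train.Coordinator()
+    threads = tf.compat.v1.train.start_queue_runners(sess=sess, coord=coord)
+    try:
+        while True:
+            print(sess.run(values))  # batches of up to 4 work strings
+    except tf.errors.OutOfRangeError:
+        pass  # the producer ran out of epochs
+    finally:
+        coord.request_stop()
+        coord.join(threads)
+```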

num_records_produced

+ +View source + + + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

num_work_units_completed

+ +View source + + + +Returns the number of work units this reader has finished processing. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

read

+ +View source + + + +Returns the next record (key, value) pair produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (key, value). +
+`key` + +A string scalar Tensor. +
+`value` + +A string scalar Tensor. +
+ + + +

read_up_to

+ +View source + + + +Returns up to num_records (key, value) pairs produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g., when the +Reader needs to start reading from a new file since it has +finished with the previous file). +It may return less than num_records even before the last batch. + + + + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`num_records` + +Number of records to read. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (keys, values). +
+`keys` + +A 1-D string Tensor. +
+`values` + +A 1-D string Tensor. +
+ + + +

reset

+ +View source + + + +Restore a reader to its initial clean state. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

restore_state

+ +View source + + + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + + + + + + + + + + + + + +
Args
+`state` + +A string Tensor. +Result of a SerializeState of a Reader with matching type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

serialize_state

+ +View source + + + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A string Tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/RunMetadata.md b/site/en/api_docs/python/tf/compat/v1/RunMetadata.md new file mode 100644 index 00000000000..52b30a0ebc5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/RunMetadata.md @@ -0,0 +1,71 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.compat.v1.RunMetadata + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + +
+`cost_graph` + +`CostGraphDef cost_graph` +
+`function_graphs` + +`repeated FunctionGraphs function_graphs` +
+`partition_graphs` + +`repeated GraphDef partition_graphs` +
+`step_stats` + +`StepStats step_stats` +
+ + + +## Child Classes +[`class FunctionGraphs`](../../../tf/compat/v1/RunMetadata/FunctionGraphs.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/RunMetadata/FunctionGraphs.md b/site/en/api_docs/python/tf/compat/v1/RunMetadata/FunctionGraphs.md new file mode 100644 index 00000000000..de89b3bec29 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/RunMetadata/FunctionGraphs.md @@ -0,0 +1,60 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.RunMetadata.FunctionGraphs + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + +
+`partition_graphs` + +`repeated GraphDef partition_graphs` +
+`post_optimization_graph` + +`GraphDef post_optimization_graph` +
+`pre_optimization_graph` + +`GraphDef pre_optimization_graph` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/RunOptions.md b/site/en/api_docs/python/tf/compat/v1/RunOptions.md new file mode 100644 index 00000000000..7e3909770c2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/RunOptions.md @@ -0,0 +1,104 @@ +description: A ProtocolMessage + +
+ + + + + + + + +
+ +# tf.compat.v1.RunOptions + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`debug_options` + +`DebugOptions debug_options` +
+`experimental` + +`Experimental experimental` +
+`inter_op_thread_pool` + +`int32 inter_op_thread_pool` +
+`output_partition_graphs` + +`bool output_partition_graphs` +
+`report_tensor_allocations_upon_oom` + +`bool report_tensor_allocations_upon_oom` +
+`timeout_in_ms` + +`int64 timeout_in_ms` +
+`trace_level` + +`TraceLevel trace_level` +
+ + + +## Child Classes +[`class Experimental`](../../../tf/compat/v1/RunOptions/Experimental.md) + +## Class Variables + +* `FULL_TRACE = 3` +* `HARDWARE_TRACE = 2` +* `NO_TRACE = 0` +* `SOFTWARE_TRACE = 1` +* `TraceLevel` diff --git a/site/en/api_docs/python/tf/compat/v1/RunOptions/Experimental.md b/site/en/api_docs/python/tf/compat/v1/RunOptions/Experimental.md new file mode 100644 index 00000000000..dcac721b043 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/RunOptions/Experimental.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.RunOptions.Experimental + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`collective_graph_key` + +`int64 collective_graph_key` +
+`use_run_handler_pool` + +`bool use_run_handler_pool` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/Session.md b/site/en/api_docs/python/tf/compat/v1/Session.md new file mode 100644 index 00000000000..1a9d4e642ed --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/Session.md @@ -0,0 +1,903 @@ +description: A class for running TensorFlow operations. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.Session + + + + + + + + + +A class for running TensorFlow operations. + + + + + + + +A `Session` object encapsulates the environment in which `Operation` +objects are executed, and `Tensor` objects are evaluated. For +example: + +```python +tf.compat.v1.disable_eager_execution() # need to disable eager in TF2.x +# Build a graph. +a = tf.constant(5.0) +b = tf.constant(6.0) +c = a * b + +# Launch the graph in a session. +sess = tf.compat.v1.Session() + +# Evaluate the tensor `c`. +print(sess.run(c)) # prints 30.0 +``` + +A session may own resources, such as +tf.Variable, tf.queue.QueueBase, +and tf.compat.v1.ReaderBase. It is important to release +these resources when they are no longer required. To do this, either +invoke the `tf.Session.close` method on the session, or use +the session as a context manager. The following two examples are +equivalent: + +```python +# Using the `close()` method. +sess = tf.compat.v1.Session() +sess.run(...) +sess.close() + +# Using the context manager. +with tf.compat.v1.Session() as sess: + sess.run(...) +``` + +The +[`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto) +protocol buffer exposes various configuration options for a +session. For example, to create a session that uses soft constraints +for device placement, and log the resulting placement decisions, +create a session as follows: + +```python +# Launch the graph in a session that allows soft device placement and +# logs the placement decisions. +sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto( + allow_soft_placement=True, + log_device_placement=True)) +``` + + + + + + + + + + + + + + + + +
+`target` + +(Optional.) The execution engine to connect to. Defaults to using +an in-process engine. See +[Distributed TensorFlow](https://tensorflow.org/deploy/distributed) for +more examples. +
+`graph` + +(Optional.) The `Graph` to be launched (described above). +
+`config` + +(Optional.) A +[`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto) +protocol buffer with configuration options for the session. +
+ + + + + + + + + + + + + + + + + + + + +
+`graph` + +The graph that was launched in this session. +
+`graph_def` + +A serializable version of the underlying TensorFlow graph. +
+`sess_str` + +The TensorFlow process to which this session will connect. +
+ + + +## Methods + +
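+In addition to the `ConfigProto` example above, a hedged sketch of bounding
+GPU memory through tf.compat.v1.GPUOptions (the fraction used here is purely
+illustrative):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+# Cap this process at roughly 40% of each visible GPU's memory; on a
+# CPU-only machine the option is simply ignored.
+gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.4)
+config = tf.compat.v1.ConfigProto(gpu_options=gpu_options)
+
+a = tf.constant(2.0)
+with tf.compat.v1.Session(config=config) as sess:
+    print(sess.run(a * 3.0))  # 6.0
+```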

as_default

+ +View source + + + +Returns a context manager that makes this object the default session. + +Use with the `with` keyword to specify that calls to +tf.Operation.run or tf.Tensor.eval should be executed in +this session. + +```python +c = tf.constant(..) +sess = tf.compat.v1.Session() + +with sess.as_default(): + assert tf.compat.v1.get_default_session() is sess + print(c.eval()) +``` + +To get the current default session, use tf.compat.v1.get_default_session. + +*N.B.* The `as_default` context manager *does not* close the +session when you exit the context, and you must close the session +explicitly. + +```python +c = tf.constant(...) +sess = tf.compat.v1.Session() +with sess.as_default(): + print(c.eval()) +# ... +with sess.as_default(): + print(c.eval()) + +sess.close() +``` + +Alternatively, you can use `with tf.compat.v1.Session():` to create a +session that is automatically closed on exiting the context, +including when an uncaught exception is raised. + +*N.B.* The default session is a property of the current thread. If you +create a new thread, and wish to use the default session in that +thread, you must explicitly add a `with sess.as_default():` in that +thread's function. + +*N.B.* Entering a `with sess.as_default():` block does not affect +the current default graph. If you are using multiple graphs, and +`sess.graph` is different from the value of +tf.compat.v1.get_default_graph, you must explicitly enter a +`with sess.graph.as_default():` block to make `sess.graph` the default +graph. + + + + + + + + + +
Returns
+A context manager using this session as the default session. +
+ + + +

close

+ +View source + + + +Closes this session. + +Calling this method frees all resources associated with the session. + + + + + + + + + + +
Raises
+`tf.errors.OpError` + +Or one of its subclasses if an error occurs while +closing the TensorFlow session. +
+ + + +

list_devices

+ +View source + + + +Lists available devices in this session. + +```python +devices = sess.list_devices() +for d in devices: + print(d.name) +``` + +#### Where: + +Each element in the list has the following properties + +* `name`: A string with the full name of the device. ex: + `/job:worker/replica:0/task:3/device:CPU:0` +* `device_type`: The type of the device (e.g. `CPU`, `GPU`, `TPU`.) +* `memory_limit`: The maximum amount of memory available on the device. + Note: depending on the device, it is possible the usable memory could + be substantially less. + + + + + + + + + + + +
Raises
+`tf.errors.OpError` + +If it encounters an error (e.g. session is in an +invalid state, or network errors occur). +
+ + + + + + + + + + + +
Returns
+A list of devices in the session. +
+ + + +

make_callable

+ +View source + + + +Returns a Python callable that runs a particular step. + +The returned callable will take `len(feed_list)` arguments whose types +must be compatible feed values for the respective elements of `feed_list`. +For example, if element `i` of `feed_list` is a tf.Tensor, the `i`th +argument to the returned callable must be a numpy ndarray (or something +convertible to an ndarray) with matching element type and shape. See +`tf.Session.run` for details of the allowable feed key and value types. + +The returned callable will have the same return type as +`tf.Session.run(fetches, ...)`. For example, if `fetches` is a tf.Tensor, +the callable will return a numpy ndarray; if `fetches` is a tf.Operation, +it will return `None`. + + + + + + + + + + + + + + + + +
Args
+`fetches` + +A value or list of values to fetch. See `tf.Session.run` for +details of the allowable fetch types. +
+`feed_list` + +(Optional.) A list of `feed_dict` keys. See `tf.Session.run` +for details of the allowable feed key types. +
+`accept_options` + +(Optional.) If `True`, the returned `Callable` will be +able to accept tf.compat.v1.RunOptions and tf.compat.v1.RunMetadata +as optional keyword arguments `options` and `run_metadata`, +respectively, with the same syntax and semantics as `tf.Session.run`, +which is useful for certain use cases (profiling and debugging) but will +result in measurable slowdown of the `Callable`'s +performance. Default: `False`. +
+ + + + + + + + + + + +
Returns
+A function that when called will execute the step defined by +`feed_list` and `fetches` in this session. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If `fetches` or `feed_list` cannot be interpreted +as arguments to `tf.Session.run`. +
+ + + +
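+A sketch of `accept_options` in practice (added for illustration; the graph
+and inputs are arbitrary): the callable then takes `options` and
+`run_metadata` keyword arguments just like `run`.
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+x = tf.compat.v1.placeholder(tf.float32, shape=[None])
+y = tf.reduce_sum(x * x)
+
+with tf.compat.v1.Session() as sess:
+    step = sess.make_callable(y, feed_list=[x], accept_options=True)
+
+    options = tf.compat.v1.RunOptions(
+        trace_level=tf.compat.v1.RunOptions.FULL_TRACE)
+    metadata = tf.compat.v1.RunMetadata()
+
+    print(step([1.0, 2.0, 3.0], options=options, run_metadata=metadata))  # 14.0
+    # Tracing fills RunMetadata.step_stats with per-device timelines.
+    print(len(metadata.step_stats.dev_stats))
+```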

partial_run

+ +View source + + + +Continues the execution with more feeds and fetches. + +This is EXPERIMENTAL and subject to change. + +To use partial execution, a user first calls `partial_run_setup()` and +then a sequence of `partial_run()`. `partial_run_setup` specifies the +list of feeds and fetches that will be used in the subsequent +`partial_run` calls. + +The optional `feed_dict` argument allows the caller to override +the value of tensors in the graph. See run() for more information. + +Below is a simple example: + +```python +a = array_ops.placeholder(dtypes.float32, shape=[]) +b = array_ops.placeholder(dtypes.float32, shape=[]) +c = array_ops.placeholder(dtypes.float32, shape=[]) +r1 = math_ops.add(a, b) +r2 = math_ops.multiply(r1, c) + +h = sess.partial_run_setup([r1, r2], [a, b, c]) +res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2}) +res = sess.partial_run(h, r2, feed_dict={c: res}) +``` + + + + + + + + + + + + + + + + +
Args
+`handle` + +A handle for a sequence of partial runs. +
+`fetches` + +A single graph element, a list of graph elements, or a dictionary +whose values are graph elements or lists of graph elements (see +documentation for `run`). +
+`feed_dict` + +A dictionary that maps graph elements to values (described +above). +
+ + + + + + + + + + + +
Returns
+Either a single value if `fetches` is a single graph element, or +a list of values if `fetches` is a list, or a dictionary with the +same keys as `fetches` if that is a dictionary +(see documentation for `run`). +
+ + + + + + + + + + + + +
Raises
+`tf.errors.OpError` + +Or one of its subclasses on error. +
+ + + +

partial_run_setup

+ +View source + + + +Sets up a graph with feeds and fetches for partial run. + +This is EXPERIMENTAL and subject to change. + +Note that contrary to `run`, `feeds` only specifies the graph elements. +The tensors will be supplied by the subsequent `partial_run` calls. + + + + + + + + + + + + + +
Args
+`fetches` + +A single graph element, or a list of graph elements. +
+`feeds` + +A single graph element, or a list of graph elements. +
+ + + + + + + + + + + +
Returns
+A handle for partial run. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If this `Session` is in an invalid state (e.g. has been +closed). +
+`TypeError` + +If `fetches` or `feed_dict` keys are of an inappropriate type. +
+`tf.errors.OpError` + +Or one of its subclasses if a TensorFlow error happens. +
+ + + +

reset

+ +View source + + + +Resets resource containers on `target`, and closes all connected sessions. + +A resource container is distributed across all workers in the +same cluster as `target`. When a resource container on `target` +is reset, resources associated with that container will be cleared. +In particular, all Variables in the container will become undefined: +they lose their values and shapes. + +#### NOTE: + + +(i) reset() is currently only implemented for distributed sessions. +(ii) Any sessions on the master named by `target` will be closed. + +If no resource containers are provided, all containers are reset. + + + + + + + + + + + + + + + +
Args
+`target` + +The execution engine to connect to. +
+`containers` + +A list of resource container name strings, or `None` if all the +containers are to be reset. +
+`config` + +(Optional.) Protocol buffer with configuration options. +
+ + + + + + + + + + + + +
Raises
+`tf.errors.OpError` + +Or one of its subclasses if an error occurs while +resetting containers. +
+ + + +
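As a hedged sketch of the distributed-only behavior described above, an in-process server can stand in for a real cluster target (the server here is purely illustrative):

```python
import tensorflow.compat.v1 as tf

# A local in-process server standing in for a distributed worker target.
server = tf.train.Server.create_local_server()

# Clears every resource container on that target and closes connected sessions.
tf.Session.reset(server.target)
```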

run

+ +View source + + + +Runs operations and evaluates tensors in `fetches`. + +This method runs one "step" of TensorFlow computation, by +running the necessary graph fragment to execute every `Operation` +and evaluate every `Tensor` in `fetches`, substituting the values in +`feed_dict` for the corresponding input values. + +The `fetches` argument may be a single graph element, or an arbitrarily +nested list, tuple, namedtuple, dict, or OrderedDict containing graph +elements at its leaves. A graph element can be one of the following types: + +* A tf.Operation. + The corresponding fetched value will be `None`. +* A tf.Tensor. + The corresponding fetched value will be a numpy ndarray containing the + value of that tensor. +* A tf.SparseTensor. + The corresponding fetched value will be a + tf.compat.v1.SparseTensorValue + containing the value of that sparse tensor. +* A `get_tensor_handle` op. The corresponding fetched value will be a + numpy ndarray containing the handle of that tensor. +* A `string` which is the name of a tensor or operation in the graph. + +The value returned by `run()` has the same shape as the `fetches` argument, +where the leaves are replaced by the corresponding values returned by +TensorFlow. + +#### Example: + + + +```python + a = tf.constant([10, 20]) + b = tf.constant([1.0, 2.0]) + # 'fetches' can be a singleton + v = session.run(a) + # v is the numpy array [10, 20] + # 'fetches' can be a list. + v = session.run([a, b]) + # v is a Python list with 2 numpy arrays: the 1-D array [10, 20] and the + # 1-D array [1.0, 2.0] + # 'fetches' can be arbitrary lists, tuples, namedtuple, dicts: + MyData = collections.namedtuple('MyData', ['a', 'b']) + v = session.run({'k1': MyData(a, b), 'k2': [b, a]}) + # v is a dict with + # v['k1'] is a MyData namedtuple with 'a' (the numpy array [10, 20]) and + # 'b' (the numpy array [1.0, 2.0]) + # v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array + # [10, 20]. +``` + +The optional `feed_dict` argument allows the caller to override +the value of tensors in the graph. Each key in `feed_dict` can be +one of the following types: + +* If the key is a tf.Tensor, the + value may be a Python scalar, string, list, or numpy ndarray + that can be converted to the same `dtype` as that + tensor. Additionally, if the key is a + tf.compat.v1.placeholder, the shape of + the value will be checked for compatibility with the placeholder. +* If the key is a + tf.SparseTensor, + the value should be a + tf.compat.v1.SparseTensorValue. +* If the key is a nested tuple of `Tensor`s or `SparseTensor`s, the value + should be a nested tuple with the same structure that maps to their + corresponding values as above. + +Each value in `feed_dict` must be convertible to a numpy array of the dtype +of the corresponding key. + +The optional `options` argument expects a [`RunOptions`] proto. The options +allow controlling the behavior of this particular step (e.g. turning tracing +on). + +The optional `run_metadata` argument expects a [`RunMetadata`] proto. When +appropriate, the non-Tensor output of this step will be collected there. For +example, when users turn on tracing in `options`, the profiled info will be +collected into this argument and passed back. + + + + + + + + + + + + + + + + + + + +
Args
+`fetches` + +A single graph element, a list of graph elements, or a dictionary +whose values are graph elements or lists of graph elements (described +above). +
+`feed_dict` + +A dictionary that maps graph elements to values (described +above). +
+`options` + +A [`RunOptions`] protocol buffer +
+`run_metadata` + +A [`RunMetadata`] protocol buffer +
+ + + + + + + + + + + +
Returns
+Either a single value if `fetches` is a single graph element, or +a list of values if `fetches` is a list, or a dictionary with the +same keys as `fetches` if that is a dictionary (described above). +Order in which `fetches` operations are evaluated inside the call +is undefined. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If this `Session` is in an invalid state (e.g. has been +closed). +
+`TypeError` + +If `fetches` or `feed_dict` keys are of an inappropriate type. +
+`ValueError` + +If `fetches` or `feed_dict` keys are invalid or refer to a +`Tensor` that doesn't exist. +
+ + + +
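The feed mechanics described above can be exercised with a short sketch; the placeholder and the fed values are illustrative assumptions:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[2])  # hypothetical placeholder
y = x * 3.0

with tf.Session() as sess:
    # feed_dict overrides the placeholder's value for this single step.
    print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # [3. 6.]
```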

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/SessionLog.md b/site/en/api_docs/python/tf/compat/v1/SessionLog.md new file mode 100644 index 00000000000..458f1adfd2e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/SessionLog.md @@ -0,0 +1,83 @@ +description: A ProtocolMessage + +
+ + + + + + + +
+ +# tf.compat.v1.SessionLog + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + +
+`checkpoint_path` + +`string checkpoint_path` +
+`msg` + +`string msg` +
+`status` + +`SessionStatus status` +
+ + + +## Class Variables + +* `CHECKPOINT = 3` +* `START = 1` +* `STATUS_UNSPECIFIED = 0` +* `STOP = 2` +* `SessionStatus` diff --git a/site/en/api_docs/python/tf/compat/v1/SparseConditionalAccumulator.md b/site/en/api_docs/python/tf/compat/v1/SparseConditionalAccumulator.md new file mode 100644 index 00000000000..4901fa09989 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/SparseConditionalAccumulator.md @@ -0,0 +1,609 @@ +description: A conditional accumulator for aggregating sparse gradients. + +
+ + + + + + + + + +
+ +# tf.compat.v1.SparseConditionalAccumulator + + + + + + + + + +A conditional accumulator for aggregating sparse gradients. + +Inherits From: [`ConditionalAccumulatorBase`](../../../tf/compat/v1/ConditionalAccumulatorBase.md) + + + + + + + + + +Sparse gradients are represented by `IndexedSlices`. + +Up-to-date gradients (i.e., time step at which gradient was computed is +equal to the accumulator's time step) are added to the accumulator. + +Extraction of the average gradient is blocked until the required number of +gradients has been accumulated. + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +Datatype of the accumulated gradients. +
+`shape` + +Shape of the accumulated gradients. +
+`shared_name` + +Optional. If non-empty, this accumulator will be shared under +the given name across multiple sessions. +
+`name` + +Optional name for the accumulator. +
+`reduction_type` + +Reduction type to use when taking the gradient. +
+ + + + + + + + + + + + + + + + + + +
+`dtype` + +Datatype of the accumulated gradients. +
+`shape` + +Shape of the accumulated gradients. +
+`accumulator_ref` + +A handle to the conditional accumulator, created by subclasses. +
+ + + + + + + + + + + + + + + + + + + + +
+`accumulator_ref` + +The underlying accumulator reference. +
+`dtype` + +The datatype of the gradients accumulated by this accumulator. +
+`name` + +The name of the underlying accumulator. +
+ + + +## Methods + +

apply_grad

+ +View source + + + +Attempts to apply a sparse gradient to the accumulator. + +The attempt is silently dropped if the gradient is stale, i.e., `local_step` +is less than the accumulator's global time step. + +A sparse gradient is represented by its indices, values and possibly empty +or None shape. Indices must be a vector representing the locations of +non-zero entries in the tensor. Values are the non-zero slices of the +gradient, and must have the same first dimension as indices, i.e., the nnz +represented by indices and values must be consistent. Shape, if not empty or +None, must be consistent with the accumulator's shape (if also provided). + +#### Example: + +A tensor [[0, 0], [0, 1], [2, 3]] can be represented + indices: [1,2] + values: [[0,1],[2,3]] + shape: [3, 2] + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`grad_indices` + +Indices of the sparse gradient to be applied. +
+`grad_values` + +Values of the sparse gradient to be applied. +
+`grad_shape` + +Shape of the sparse gradient to be applied. +
+`local_step` + +Time step at which the gradient was computed. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
Returns
+The operation that (conditionally) applies a gradient to the accumulator. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +If grad is of the wrong shape +
+ + + +

apply_indexed_slices_grad

+ +View source + + + +Attempts to apply a gradient to the accumulator. + +The attempt is silently dropped if the gradient is stale, i.e., `local_step` +is less than the accumulator's global time step. + + + + + + + + + + + + + + + + +
Args
+`grad` + +The gradient `IndexedSlices` to be applied. +
+`local_step` + +Time step at which the gradient was computed. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
Returns
+The operation that (conditionally) applies a gradient to the accumulator. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +If grad is of the wrong shape +
+ + + +
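A minimal end-to-end sketch of the accumulate/extract cycle, assuming a single worker and a single required gradient; the `[3, 2]` shape and the hand-built `IndexedSlices` are illustrative:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

acc = tf.SparseConditionalAccumulator(tf.float32, shape=tf.TensorShape([3, 2]))

# A sparse gradient touching rows 0 and 2 of a [3, 2] dense gradient.
grad = tf.IndexedSlices(
    values=tf.constant([[1.0, 2.0], [3.0, 4.0]]),
    indices=tf.constant([0, 2], dtype=tf.int64),
    dense_shape=tf.constant([3, 2], dtype=tf.int64))

apply_op = acc.apply_indexed_slices_grad(grad, local_step=0)
avg = acc.take_indexed_slices_grad(num_required=1)

with tf.Session() as sess:
    sess.run(apply_op)
    # Fetch indices and values in one run() call so the take op executes once.
    indices, values = sess.run([avg.indices, avg.values])
    print(indices, values)
```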

num_accumulated

+ +View source + + + +Number of gradients that have currently been aggregated in accumulator. + + + + + + + + + + + +
Args
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
Returns
+Number of accumulated gradients currently in accumulator. +
+ + + +

set_global_step

+ +View source + + + +Sets the global time step of the accumulator. + +The operation logs a warning if we attempt to set to a time step that is +lower than the accumulator's own time step. + + + + + + + + + + + + + +
Args
+`new_global_step` + +Value of new time step. Can be a variable or a constant +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
Returns
+Operation that sets the accumulator's time step. +
+ + + +

take_grad

+ +View source + + + +Attempts to extract the average gradient from the accumulator. + +The operation blocks until sufficient number of gradients have been +successfully applied to the accumulator. + +Once successful, the following actions are also triggered: +- Counter of accumulated gradients is reset to 0. +- Aggregated gradient is reset to 0 tensor. +- Accumulator's internal time step is incremented by 1. + + + + + + + + + + + + + +
Args
+`num_required` + +Number of gradients that need to have been aggregated +
+`name` + +Optional name for the operation +
+ + + + + + + + + + + +
Returns
+A tuple of indices, values, and shape representing the average gradient. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +If `num_required` < 1 +
+ + + +

take_indexed_slices_grad

+ +View source + + + +Attempts to extract the average gradient from the accumulator. + +The operation blocks until sufficient number of gradients have been +successfully applied to the accumulator. + +Once successful, the following actions are also triggered: +- Counter of accumulated gradients is reset to 0. +- Aggregated gradient is reset to 0 tensor. +- Accumulator's internal time step is incremented by 1. + + + + + + + + + + + + + +
Args
+`num_required` + +Number of gradients that need to have been aggregated +
+`name` + +Optional name for the operation +
+ + + + + + + + + + + +
Returns
+An `IndexedSlices` holding the value of the average gradient. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +If `num_required` < 1 +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/SparseTensorValue.md b/site/en/api_docs/python/tf/compat/v1/SparseTensorValue.md new file mode 100644 index 00000000000..46e37b9a726 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/SparseTensorValue.md @@ -0,0 +1,77 @@ +description: SparseTensorValue(indices, values, dense_shape) + +
+ + + + + + +
+ +# tf.compat.v1.SparseTensorValue + + + + + + + + + +SparseTensorValue(indices, values, dense_shape) + + + + + + + + + + + + + + + + + + + + + + + + + +
+`indices` + + +
+`values` + + +
+`dense_shape` + + +
+ + + +## Class Variables + +* `dense_shape` +* `indices` +* `values` diff --git a/site/en/api_docs/python/tf/compat/v1/Summary.md b/site/en/api_docs/python/tf/compat/v1/Summary.md new file mode 100644 index 00000000000..c524e67f358 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/Summary.md @@ -0,0 +1,67 @@ +description: A ProtocolMessage + +
+ + + + + +
+ +# tf.compat.v1.Summary + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + +
+`value` + +`repeated Value value` +
+ + + +## Child Classes +[`class Audio`](../../../tf/compat/v1/Summary/Audio.md) + +[`class Image`](../../../tf/compat/v1/Summary/Image.md) + +[`class Value`](../../../tf/compat/v1/Summary/Value.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/Summary/Audio.md b/site/en/api_docs/python/tf/compat/v1/Summary/Audio.md new file mode 100644 index 00000000000..b0e7880c64e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/Summary/Audio.md @@ -0,0 +1,85 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.Summary.Audio + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`content_type` + +`string content_type` +
+`encoded_audio_string` + +`bytes encoded_audio_string` +
+`length_frames` + +`int64 length_frames` +
+`num_channels` + +`int64 num_channels` +
+`sample_rate` + +`float sample_rate` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/Summary/Image.md b/site/en/api_docs/python/tf/compat/v1/Summary/Image.md new file mode 100644 index 00000000000..d1838af1ba7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/Summary/Image.md @@ -0,0 +1,78 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.Summary.Image + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`colorspace` + +`int32 colorspace` +
+`encoded_image_string` + +`bytes encoded_image_string` +
+`height` + +`int32 height` +
+`width` + +`int32 width` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/Summary/Value.md b/site/en/api_docs/python/tf/compat/v1/Summary/Value.md new file mode 100644 index 00000000000..b469f58712e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/Summary/Value.md @@ -0,0 +1,113 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.Summary.Value + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`audio` + +`Audio audio` +
+`histo` + +`HistogramProto histo` +
+`image` + +`Image image` +
+`metadata` + +`SummaryMetadata metadata` +
+`node_name` + +`string node_name` +
+`obsolete_old_style_histogram` + +`bytes obsolete_old_style_histogram` +
+`simple_value` + +`float simple_value` +
+`tag` + +`string tag` +
+`tensor` + +`TensorProto tensor` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/SummaryMetadata.md b/site/en/api_docs/python/tf/compat/v1/SummaryMetadata.md new file mode 100644 index 00000000000..837f99c374d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/SummaryMetadata.md @@ -0,0 +1,71 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.compat.v1.SummaryMetadata + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + +
+`data_class` + +`DataClass data_class` +
+`display_name` + +`string display_name` +
+`plugin_data` + +`PluginData plugin_data` +
+`summary_description` + +`string summary_description` +
+ + + +## Child Classes +[`class PluginData`](../../../tf/compat/v1/SummaryMetadata/PluginData.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/SummaryMetadata/PluginData.md b/site/en/api_docs/python/tf/compat/v1/SummaryMetadata/PluginData.md new file mode 100644 index 00000000000..1b64d671321 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/SummaryMetadata/PluginData.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.SummaryMetadata.PluginData + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`content` + +`bytes content` +
+`plugin_name` + +`string plugin_name` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/TFRecordReader.md b/site/en/api_docs/python/tf/compat/v1/TFRecordReader.md new file mode 100644 index 00000000000..cfd3e7d36ef --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/TFRecordReader.md @@ -0,0 +1,488 @@ +description: A Reader that outputs the records from a TFRecords file. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.TFRecordReader + + + + + + + + + +A Reader that outputs the records from a TFRecords file. + +Inherits From: [`ReaderBase`](../../../tf/compat/v1/ReaderBase.md) + + + + + + + +See ReaderBase for supported methods. + + + + + + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+`options` + +A TFRecordOptions object (optional). +
+ + + +#### Eager Compatibility +Readers are not compatible with eager execution. Instead, please +use tf.data to get data into your model. + + + + + + + + + + + + + + + + + +
+`reader_ref` + +Op that implements the reader. +
+`supports_serialize` + +Whether the Reader implementation can serialize its state. +
+ + + +## Methods + +

num_records_produced

+ +View source + + + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

num_work_units_completed

+ +View source + + + +Returns the number of work units this reader has finished processing. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

read

+ +View source + + + +Returns the next record (key, value) pair produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (key, value). +
+`key` + +A string scalar Tensor. +
+`value` + +A string scalar Tensor. +
+ + + +
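The read loop above relies on the queue-runner machinery; as the compatibility note says, tf.data is the preferred replacement, but a hedged graph-mode sketch looks roughly like this (the file name is a hypothetical placeholder):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Hypothetical TFRecord file; replace with a real path.
filename_queue = tf.train.string_input_producer(["data-00000.tfrecord"])

reader = tf.TFRecordReader()
key, serialized_example = reader.read(filename_queue)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run(key))  # e.g. b'data-00000.tfrecord:0'
    coord.request_stop()
    coord.join(threads)
```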

read_up_to

+ +View source + + + +Returns up to num_records (key, value) pairs produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g., when the +Reader needs to start reading from a new file since it has +finished with the previous file). +It may return less than num_records even before the last batch. + + + + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`num_records` + +Number of records to read. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (keys, values). +
+`keys` + +A 1-D string Tensor. +
+`values` + +A 1-D string Tensor. +
+ + + +

reset

+ +View source + + + +Restore a reader to its initial clean state. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

restore_state

+ +View source + + + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + + + + + + + + + + + + + +
Args
+`state` + +A string Tensor. +Result of a SerializeState of a Reader with matching type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

serialize_state

+ +View source + + + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A string Tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/TensorInfo.md b/site/en/api_docs/python/tf/compat/v1/TensorInfo.md new file mode 100644 index 00000000000..07d25f6d93f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/TensorInfo.md @@ -0,0 +1,81 @@ +description: A ProtocolMessage + +
+ + + + +
+ +# tf.compat.v1.TensorInfo + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`composite_tensor` + +`CompositeTensor composite_tensor` +
+`coo_sparse` + +`CooSparse coo_sparse` +
+`dtype` + +`DataType dtype` +
+`name` + +`string name` +
+`tensor_shape` + +`TensorShapeProto tensor_shape` +
+ + + +## Child Classes +[`class CompositeTensor`](../../../tf/compat/v1/TensorInfo/CompositeTensor.md) + +[`class CooSparse`](../../../tf/compat/v1/TensorInfo/CooSparse.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/TensorInfo/CompositeTensor.md b/site/en/api_docs/python/tf/compat/v1/TensorInfo/CompositeTensor.md new file mode 100644 index 00000000000..1fd600e5e2a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/TensorInfo/CompositeTensor.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.TensorInfo.CompositeTensor + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`components` + +`repeated TensorInfo components` +
+`type_spec` + +`TypeSpecProto type_spec` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/TensorInfo/CooSparse.md b/site/en/api_docs/python/tf/compat/v1/TensorInfo/CooSparse.md new file mode 100644 index 00000000000..fdf9e0787f3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/TensorInfo/CooSparse.md @@ -0,0 +1,60 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.TensorInfo.CooSparse + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + +
+`dense_shape_tensor_name` + +`string dense_shape_tensor_name` +
+`indices_tensor_name` + +`string indices_tensor_name` +
+`values_tensor_name` + +`string values_tensor_name` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/TextLineReader.md b/site/en/api_docs/python/tf/compat/v1/TextLineReader.md new file mode 100644 index 00000000000..3c74278700f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/TextLineReader.md @@ -0,0 +1,490 @@ +description: A Reader that outputs the lines of a file delimited by newlines. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.TextLineReader + + + + + + + + + +A Reader that outputs the lines of a file delimited by newlines. + +Inherits From: [`ReaderBase`](../../../tf/compat/v1/ReaderBase.md) + + + + + + + +Newlines are stripped from the output. +See ReaderBase for supported methods. + + + + + + + + + + + + + + + +
+`skip_header_lines` + +An optional int. Defaults to 0. Number of lines +to skip from the beginning of every file. +
+`name` + +A name for the operation (optional). +
+ + + +#### Eager Compatibility +Readers are not compatible with eager execution. Instead, please +use tf.data to get data into your model. + + + + + + + + + + + + + + + + + +
+`reader_ref` + +Op that implements the reader. +
+`supports_serialize` + +Whether the Reader implementation can serialize its state. +
+ + + +## Methods + +

num_records_produced

+ +View source + + + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

num_work_units_completed

+ +View source + + + +Returns the number of work units this reader has finished processing. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

read

+ +View source + + + +Returns the next record (key, value) pair produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (key, value). +
+`key` + +A string scalar Tensor. +
+`value` + +A string scalar Tensor. +
+ + + +

read_up_to

+ +View source + + + +Returns up to num_records (key, value) pairs produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g., when the +Reader needs to start reading from a new file since it has +finished with the previous file). +It may return less than num_records even before the last batch. + + + + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`num_records` + +Number of records to read. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (keys, values). +
+`keys` + +A 1-D string Tensor. +
+`values` + +A 1-D string Tensor. +
+ + + +
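A hedged sketch of batched line reading with `read_up_to`; the file name, header count, and batch size are illustrative, and tf.data is the recommended replacement:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Hypothetical newline-delimited file with a one-line header.
filename_queue = tf.train.string_input_producer(["lines.txt"])

reader = tf.TextLineReader(skip_header_lines=1)
keys, values = reader.read_up_to(filename_queue, num_records=32)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    k, v = sess.run([keys, values])  # up to 32 (key, line) pairs
    coord.request_stop()
    coord.join(threads)
```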

reset

+ +View source + + + +Restore a reader to its initial clean state. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

restore_state

+ +View source + + + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + + + + + + + + + + + + + +
Args
+`state` + +A string Tensor. +Result of a SerializeState of a Reader with matching type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

serialize_state

+ +View source + + + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A string Tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/Variable.md b/site/en/api_docs/python/tf/compat/v1/Variable.md new file mode 100644 index 00000000000..3f278aeccbb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/Variable.md @@ -0,0 +1,4424 @@ +description: See the [Variables Guide](https://tensorflow.org/guide/variables). + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.Variable + + + + + + + + + +See the [Variables Guide](https://tensorflow.org/guide/variables). + +Inherits From: [`Variable`](../../../tf/Variable.md) + + + + + + + +A variable maintains state in the graph across calls to `run()`. You add a +variable to the graph by constructing an instance of the class `Variable`. + +The `Variable()` constructor requires an initial value for the variable, +which can be a `Tensor` of any type and shape. The initial value defines the +type and shape of the variable. After construction, the type and shape of +the variable are fixed. The value can be changed using one of the assign +methods. + +If you want to change the shape of a variable later you have to use an +`assign` Op with `validate_shape=False`. + +Just like any `Tensor`, variables created with `Variable()` can be used as +inputs for other Ops in the graph. Additionally, all the operators +overloaded for the `Tensor` class are carried over to variables, so you can +also add nodes to the graph by just doing arithmetic on variables. + +```python +import tensorflow as tf + +# Create a variable. +w = tf.Variable(, name=) + +# Use the variable in the graph like any Tensor. +y = tf.matmul(w, ...another variable or tensor...) + +# The overloaded operators are available too. +z = tf.sigmoid(w + y) + +# Assign a new value to the variable with `assign()` or a related method. +w.assign(w + 1.0) +w.assign_add(1.0) +``` + +When you launch the graph, variables have to be explicitly initialized before +you can run Ops that use their value. You can initialize a variable by +running its *initializer op*, restoring the variable from a save file, or +simply running an `assign` Op that assigns a value to the variable. In fact, +the variable *initializer op* is just an `assign` Op that assigns the +variable's initial value to the variable itself. + +```python +# Launch the graph in a session. +with tf.compat.v1.Session() as sess: + # Run the variable initializer. + sess.run(w.initializer) + # ...you now can run ops that use the value of 'w'... +``` + +The most common initialization pattern is to use the convenience function +`global_variables_initializer()` to add an Op to the graph that initializes +all the variables. You then run that Op after launching the graph. + +```python +# Add an Op to initialize global variables. +init_op = tf.compat.v1.global_variables_initializer() + +# Launch the graph in a session. +with tf.compat.v1.Session() as sess: + # Run the Op that initializes global variables. + sess.run(init_op) + # ...you can now run any Op that uses variable values... +``` + +If you need to create a variable with an initial value dependent on another +variable, use the other variable's `initialized_value()`. This ensures that +variables are initialized in the right order. + +All variables are automatically collected in the graph where they are +created. By default, the constructor adds the new variable to the graph +collection `GraphKeys.GLOBAL_VARIABLES`. The convenience function +`global_variables()` returns the contents of that collection. + +When building a machine learning model it is often convenient to distinguish +between variables holding the trainable model parameters and other variables +such as a `global step` variable used to count training steps. To make this +easier, the variable constructor supports a `trainable=` parameter. If +`True`, the new variable is also added to the graph collection +`GraphKeys.TRAINABLE_VARIABLES`. 
The convenience function +`trainable_variables()` returns the contents of this collection. The +various `Optimizer` classes use this collection as the default list of +variables to optimize. + +WARNING: tf.Variable objects by default have a non-intuitive memory model. A +Variable is represented internally as a mutable Tensor which can +non-deterministically alias other Tensors in a graph. The set of operations +which consume a Variable and can lead to aliasing is undetermined and can +change across TensorFlow versions. Avoid writing code which relies on the +value of a Variable either changing or not changing as other operations +happen. For example, using Variable objects or simple functions thereof as +predicates in a tf.cond is dangerous and error-prone: + +``` +v = tf.Variable(True) +tf.cond(v, lambda: v.assign(False), my_false_fn) # Note: this is broken. +``` + +Here, adding `use_resource=True` when constructing the variable will +fix any nondeterminism issues: +``` +v = tf.Variable(True, use_resource=True) +tf.cond(v, lambda: v.assign(False), my_false_fn) +``` + +To use the replacement for variables which does +not have these issues: + +* Add `use_resource=True` when constructing tf.Variable; +* Call `tf.compat.v1.get_variable_scope().set_use_resource(True)` inside a + tf.compat.v1.variable_scope before the tf.compat.v1.get_variable() call. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`initial_value` + +A `Tensor`, or Python object convertible to a `Tensor`, +which is the initial value for the Variable. The initial value must have +a shape specified unless `validate_shape` is set to False. Can also be a +callable with no argument that returns the initial value when called. In +that case, `dtype` must be specified. (Note that initializer functions +from init_ops.py must first be bound to a shape before being used here.) +
+`trainable` + +If `True`, also adds the variable to the graph collection +`GraphKeys.TRAINABLE_VARIABLES`. This collection is used as the default +list of variables to use by the `Optimizer` classes. Defaults to `True`, +unless `synchronization` is set to `ON_READ`, in which case it defaults +to `False`. +
+`collections` + +List of graph collections keys. The new variable is added to +these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`. +
+`validate_shape` + +If `False`, allows the variable to be initialized with a +value of unknown shape. If `True`, the default, the shape of +`initial_value` must be known. +
+`caching_device` + +Optional device string describing where the Variable +should be cached for reading. Defaults to the Variable's device. If not +`None`, caches on another device. Typical use is to cache on the device +where the Ops using the Variable reside, to deduplicate copying through +`Switch` and other conditional statements. +
+`name` + +Optional name for the variable. Defaults to `'Variable'` and gets +uniquified automatically. +
+`variable_def` + +`VariableDef` protocol buffer. If not `None`, recreates the +Variable object with its contents, referencing the variable's nodes in +the graph, which must already exist. The graph is not changed. +`variable_def` and the other arguments are mutually exclusive. +
+`dtype` + +If set, initial_value will be converted to the given type. If +`None`, either the datatype will be kept (if `initial_value` is a +Tensor), or `convert_to_tensor` will decide. +
+`expected_shape` + +A TensorShape. If set, initial_value is expected to have +this shape. +
+`import_scope` + +Optional `string`. Name scope to add to the `Variable.` Only +used when initializing from protocol buffer. +
+`constraint` + +An optional projection function to be applied to the variable +after being updated by an `Optimizer` (e.g. used to implement norm +constraints or value constraints for layer weights). The function must +take as input the unprojected Tensor representing the value of the +variable and return the Tensor for the projected value (which must have +the same shape). Constraints are not safe to use when doing asynchronous +distributed training. +
+`use_resource` + +whether to use resource variables. +
+`synchronization` + +Indicates when a distributed variable will be +aggregated. Accepted values are constants defined in the class +tf.VariableSynchronization. By default the synchronization is set to +`AUTO` and the current `DistributionStrategy` chooses when to +synchronize. +
+`aggregation` + +Indicates how a distributed variable will be aggregated. +Accepted values are constants defined in the class +tf.VariableAggregation. +
+`shape` + +(optional) The shape of this variable. If None, the shape of +`initial_value` will be used. When setting this argument to +tf.TensorShape(None) (representing an unspecified shape), the variable +can be assigned with values of different shapes. +
+ + + + + + + + + + + + + + + + + + +
+`ValueError` + +If both `variable_def` and initial_value are specified. +
+`ValueError` + +If the initial value is not specified, or does not have a +shape and `validate_shape` is `True`. +
+`RuntimeError` + +If eager execution is enabled. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`aggregation` + + +
+`constraint` + +Returns the constraint function associated with this variable. +
+`device` + +The device of this variable. +
+`dtype` + +The `DType` of this variable. +
+`graph` + +The `Graph` of this variable. +
+`initial_value` + +Returns the Tensor used as the initial value for the variable. + +Note that this is different from `initialized_value()` which runs +the op that initializes the variable before returning its value. +This method returns the tensor that is used by the op that initializes +the variable. +
+`initializer` + +The initializer operation for this variable. +
+`name` + +The name of this variable. +
+`op` + +The `Operation` of this variable. +
+`shape` + +The `TensorShape` of this variable. +
+`synchronization` + + +
+`trainable` + + +
+ + + +## Child Classes +[`class SaveSliceInfo`](../../../tf/Variable/SaveSliceInfo.md) + +## Methods + +

assign

+ +View source + + + +Assigns a new value to the variable. + +This is essentially a shortcut for `assign(self, value)`. + + + + + + + + + + + + + + + + + + + +
Args
+`value` + +A `Tensor`. The new value for this variable. +
+`use_locking` + +If `True`, use locking during the assignment. +
+`name` + +The name of the operation to be created +
+`read_value` + +if True, will return something which evaluates to the new +value of the variable; if False will return the assign op. +
+ + + + + + + + + + + +
Returns
+The updated variable. If `read_value` is false, instead returns None in +Eager mode and the assign op in graph mode. +
+ + + +

assign_add

+ +View source + + + +Adds a value to this variable. + + This is essentially a shortcut for `assign_add(self, delta)`. + + + + + + + + + + + + + + + + + + + +
Args
+`delta` + +A `Tensor`. The value to add to this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +The name of the operation to be created +
+`read_value` + +if True, will return something which evaluates to the new +value of the variable; if False will return the assign op. +
+ + + + + + + + + + + +
Returns
+The updated variable. If `read_value` is false, instead returns None in +Eager mode and the assign op in graph mode. +
+ + + +

assign_sub

+ +View source + + + +Subtracts a value from this variable. + +This is essentially a shortcut for `assign_sub(self, delta)`. + + + + + + + + + + + + + + + + + + + +
Args
+`delta` + +A `Tensor`. The value to subtract from this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +The name of the operation to be created +
+`read_value` + +if True, will return something which evaluates to the new +value of the variable; if False will return the assign op. +
+ + + + + + + + + + + +
Returns
+The updated variable. If `read_value` is false, instead returns None in +Eager mode and the assign op in graph mode. +
+ + + +

batch_scatter_update

+ +View source + + + +Assigns tf.IndexedSlices to this variable batch-wise. + +Analogous to `batch_gather`. This assumes that this variable and the +sparse_delta IndexedSlices have a series of leading dimensions that are the +same for all of them, and the updates are performed on the last dimension of +indices. In other words, the dimensions should be the following: + +`num_prefix_dims = sparse_delta.indices.ndims - 1` +`batch_dim = num_prefix_dims + 1` +`sparse_delta.updates.shape = sparse_delta.indices.shape + var.shape[ + batch_dim:]` + +where + +`sparse_delta.updates.shape[:num_prefix_dims]` +`== sparse_delta.indices.shape[:num_prefix_dims]` +`== var.shape[:num_prefix_dims]` + +And the operation performed can be expressed as: + +`var[i_1, ..., i_n, + sparse_delta.indices[i_1, ..., i_n, j]] = sparse_delta.updates[ + i_1, ..., i_n, j]` + +When sparse_delta.indices is a 1D tensor, this operation is equivalent to +`scatter_update`. + +To avoid this operation one can loop over the first `ndims` dimensions of the +variable and use `scatter_update` on the subtensors that result from slicing +the first dimension. This is a valid option for `ndims = 1`, but less +efficient than this implementation. + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to be assigned to this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +
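The shape bookkeeping above is easier to see with a concrete sketch. This is a hedged example under the assumption that a 2-D `indices`/`values` pair wrapped in a tf.IndexedSlices container is accepted as described; the variable contents are illustrative:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# A batch of 2 rows, each with 4 entries.
v = tf.Variable([[0., 0., 0., 0.],
                 [0., 0., 0., 0.]])

# Per row, write two positions: row 0 at columns 0 and 3, row 1 at columns 1 and 2.
delta = tf.IndexedSlices(
    values=tf.constant([[1., 2.], [3., 4.]]),
    indices=tf.constant([[0, 3], [1, 2]]),
    dense_shape=tf.constant([2, 4], dtype=tf.int64))

update = v.batch_scatter_update(delta)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(update))
    # [[1. 0. 0. 2.]
    #  [0. 3. 4. 0.]]
```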

count_up_to

+ +View source + + + +Increments this variable until it reaches `limit`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Prefer Dataset.range instead. + +When that Op is run it tries to increment the variable by `1`. If +incrementing the variable would bring it above `limit` then the Op raises +the exception `OutOfRangeError`. + +If no error is raised, the Op outputs the value of the variable before +the increment. + +This is essentially a shortcut for `count_up_to(self, limit)`. + + + + + + + + + + +
Args
+`limit` + +value at which incrementing the variable raises an error. +
+ + + + + + + + + + + +
Returns
+A `Tensor` that will hold the variable value before the increment. If no +other Op modifies this variable, the values produced will all be +distinct. +
+ + + +

eval

+ +View source + + + +In a session, computes and returns the value of this variable. + +This is not a graph construction method, it does not add ops to the graph. + +This convenience method requires a session where the graph +containing this variable has been launched. If no session is +passed, the default session is used. See tf.compat.v1.Session for more +information on launching a graph and on sessions. + +```python +v = tf.Variable([1, 2]) +init = tf.compat.v1.global_variables_initializer() + +with tf.compat.v1.Session() as sess: + sess.run(init) + # Usage passing the session explicitly. + print(v.eval(sess)) + # Usage with the default session. The 'with' block + # above makes 'sess' the default session. + print(v.eval()) +``` + + + + + + + + + + +
Args
+`session` + +The session to use to evaluate this variable. If none, the +default session is used. +
+ + + + + + + + + + + +
Returns
+A numpy `ndarray` with a copy of the value of this variable. +
+ + + +

experimental_ref

+ +View source + + + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use ref() instead. + +

from_proto

+ +View source + + + +Returns a `Variable` object created from `variable_def`. + + +

gather_nd

+ +View source + + + +Gather slices from `params` into a Tensor with shape specified by `indices`. + +See tf.gather_nd for details. + + + + + + + + + + + + + +
Args
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `params`. +
+ + + +
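A short sketch of gathering rows and individual elements from a variable; the values are illustrative:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

v = tf.Variable([[1, 2], [3, 4], [5, 6]])

rows = v.gather_nd([[0], [2]])          # rows 0 and 2 -> [[1 2] [5 6]]
elems = v.gather_nd([[1, 0], [2, 1]])   # v[1, 0] and v[2, 1] -> [3 6]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(rows), sess.run(elems))
```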

get_shape

+ +View source + + + +Alias of `Variable.shape`. + + +

initialized_value

+ +View source + + + +Returns the value of the initialized variable. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. + +You should use this instead of the variable itself to initialize another +variable with a value that depends on the value of this variable. + +```python +# Initialize 'v' with a random tensor. +v = tf.Variable(tf.random.truncated_normal([10, 40])) +# Use `initialized_value` to guarantee that `v` has been +# initialized before its value is used to initialize `w`. +# The random values are picked only once. +w = tf.Variable(v.initialized_value() * 2.0) +``` + + + + + + + + + +
Returns
+A `Tensor` holding the value of this variable after its initializer +has run. +
+ + + +

load

+ +View source + + + +Load new value into this variable. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Prefer Variable.assign which has equivalent behavior in 2.X. + +Writes new value to variable's memory. Doesn't add ops to the graph. + +This convenience method requires a session where the graph +containing this variable has been launched. If no session is +passed, the default session is used. See tf.compat.v1.Session for more +information on launching a graph and on sessions. + +```python +v = tf.Variable([1, 2]) +init = tf.compat.v1.global_variables_initializer() + +with tf.compat.v1.Session() as sess: + sess.run(init) + # Usage passing the session explicitly. + v.load([2, 3], sess) + print(v.eval(sess)) # prints [2 3] + # Usage with the default session. The 'with' block + # above makes 'sess' the default session. + v.load([3, 4], sess) + print(v.eval()) # prints [3 4] +``` + + + + + + + + + + + + + +
Args
+`value` + +New variable value +
+`session` + +The session to use to evaluate this variable. If none, the +default session is used. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +Session is not passed and no default session +
+ + + +

read_value

+ +View source + + + +Returns the value of this variable, read in the current context. + +Can be different from value() if it's on another device, with control +dependencies, etc. + + + + + + + + + +
Returns
+A `Tensor` containing the value of the variable. +
+ + + +

ref

+ +View source + + + +Returns a hashable reference object to this Variable. + +The primary use case for this API is to put variables in a set/dictionary. +We can't put variables in a set/dictionary as `variable.__hash__()` is no +longer available starting Tensorflow 2.0. + +The following will raise an exception starting 2.0 + +``` +>>> x = tf.Variable(5) +>>> y = tf.Variable(10) +>>> z = tf.Variable(10) +>>> variable_set = {x, y, z} +Traceback (most recent call last): + ... +TypeError: Variable is unhashable. Instead, use tensor.ref() as the key. +>>> variable_dict = {x: 'five', y: 'ten'} +Traceback (most recent call last): + ... +TypeError: Variable is unhashable. Instead, use tensor.ref() as the key. +``` + +Instead, we can use `variable.ref()`. + +``` +>>> variable_set = {x.ref(), y.ref(), z.ref()} +>>> x.ref() in variable_set +True +>>> variable_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'} +>>> variable_dict[y.ref()] +'ten' +``` + +Also, the reference object provides `.deref()` function that returns the +original Variable. + +``` +>>> x = tf.Variable(5) +>>> x.ref().deref() + +``` + +

scatter_add

+ +View source + + + +Adds tf.IndexedSlices to this variable. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to be added to this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +
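A minimal sketch, assuming a 1-D variable and a hand-built tf.IndexedSlices delta:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

v = tf.Variable([1.0, 2.0, 3.0, 4.0])

# Add 10 to element 0 and 20 to element 3.
delta = tf.IndexedSlices(values=tf.constant([10.0, 20.0]),
                         indices=tf.constant([0, 3]))
update = v.scatter_add(delta)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(update))  # [11.  2.  3. 24.]
```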

scatter_div

+ +View source + + + +Divide this variable by tf.IndexedSlices. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to divide this variable by. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

scatter_max

+ +View source + + + +Updates this variable with the max of tf.IndexedSlices and itself. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to use as an argument of max with this +variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

scatter_min

+ +View source + + + +Updates this variable with the min of tf.IndexedSlices and itself. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to use as an argument of min with this +variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

scatter_mul

+ +View source + + + +Multiply this variable by tf.IndexedSlices. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to multiply this variable by. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

scatter_nd_add

+ +View source + + + +Applies sparse addition to individual values or slices in a Variable. + +The Variable has rank `P` and `indices` is a `Tensor` of rank `Q`. + +`indices` must be an integer tensor, containing indices into self. +It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`. + +The innermost dimension of `indices` (with length `K`) corresponds to +indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th +dimension of self. + +`updates` is a `Tensor` of rank `Q-1+P-K` with shape: + +``` +[d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. +``` + +For example, say we want to add 4 scattered elements to a rank-1 tensor with +8 elements. In Python, that update would look like this: + +```python + v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) + indices = tf.constant([[4], [3], [1], [7]]) + updates = tf.constant([9, 10, 11, 12]) + add = v.scatter_nd_add(indices, updates) + with tf.compat.v1.Session() as sess: + sess.run(v.initializer) + print(sess.run(add)) +``` + +The resulting update to v would look like this: + + [1, 13, 3, 14, 14, 6, 7, 20] + +See tf.scatter_nd for more details about how to make updates to +slices. + + + + + + + + + + + + + + + +
Args
+`indices` + +The indices to be used in the operation. +
+`updates` + +The values to be used in the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + +

scatter_nd_sub

+ +View source + + + +Applies sparse subtraction to individual values or slices in a Variable. + +Assume the variable has rank `P` and `indices` is a `Tensor` of rank `Q`. + +`indices` must be an integer tensor, containing indices into self. +It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`. + +The innermost dimension of `indices` (with length `K`) corresponds to +indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th +dimension of self. + +`updates` is a `Tensor` of rank `Q-1+P-K` with shape: + +``` +[d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. +``` + +For example, say we want to subtract 4 scattered elements from a rank-1 tensor +with 8 elements. In Python, that update would look like this: + +```python + v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) + indices = tf.constant([[4], [3], [1], [7]]) + updates = tf.constant([9, 10, 11, 12]) + op = v.scatter_nd_sub(indices, updates) + with tf.compat.v1.Session() as sess: + sess.run(v.initializer) + print(sess.run(op)) +``` + +The resulting update to v would look like this: + + [1, -9, 3, -6, -4, 6, 7, -4] + +See tf.scatter_nd for more details about how to make updates to +slices. + + + + + + + + + + + + + + + +
Args
+`indices` + +The indices to be used in the operation. +
+`updates` + +The values to be used in the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + +

scatter_nd_update

+ +View source + + + +Applies sparse assignment to individual values or slices in a Variable. + +The Variable has rank `P` and `indices` is a `Tensor` of rank `Q`. + +`indices` must be an integer tensor, containing indices into self. +It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`. + +The innermost dimension of `indices` (with length `K`) corresponds to +indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th +dimension of self. + +`updates` is a `Tensor` of rank `Q-1+P-K` with shape: + +``` +[d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. +``` + +For example, say we want to assign 4 scattered elements in a rank-1 tensor +with 8 elements. In Python, that update would look like this: + +```python + v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) + indices = tf.constant([[4], [3], [1], [7]]) + updates = tf.constant([9, 10, 11, 12]) + op = v.scatter_nd_update(indices, updates) + with tf.compat.v1.Session() as sess: + sess.run(v.initializer) + print(sess.run(op)) +``` + +The resulting update to v would look like this: + + [1, 11, 3, 10, 9, 6, 7, 12] + +See tf.scatter_nd for more details about how to make updates to +slices. + + + + + + + + + + + + + + + +
Args
+`indices` + +The indices to be used in the operation. +
+`updates` + +The values to be used in the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + +

scatter_sub

+ +View source + + + +Subtracts tf.IndexedSlices from this variable. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to be subtracted from this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

scatter_update

+ +View source + + + +Assigns tf.IndexedSlices to this variable. + + + + + + + + + + + + + + + + + +
Args
+`sparse_delta` + +tf.IndexedSlices to be assigned to this variable. +
+`use_locking` + +If `True`, use locking during the operation. +
+`name` + +the name of the operation. +
+ + + + + + + + + + + +
Returns
+The updated variable. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +if `sparse_delta` is not an `IndexedSlices`. +
+ + + +

set_shape

+ +View source + + + +Overrides the shape for this variable. + + + + + + + + + + + +
Args
+`shape` + +the `TensorShape` representing the overridden shape. +
+ + + +

sparse_read

+ +View source + + + +Gathers slices from this variable along axis 0 according to `indices`. + +This function supports a subset of tf.gather; see tf.gather for details on +usage. + + + + + + + + + + + + +
Args
+`indices` + +The index `Tensor`. Must be one of the following types: `int32`, +`int64`. Must be in range `[0, params.shape[axis])`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `params`. +
+ + + +

to_proto

+ +View source + + + +Converts a `Variable` to a `VariableDef` protocol buffer. + + + + + + + + + + + +
Args
+`export_scope` + +Optional `string`. Name scope to remove. +
+ + + + + + + + + + + +
Returns
+A `VariableDef` protocol buffer, or `None` if the `Variable` is not +in the specified name scope. +
+ + + +
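A hedged round-trip sketch: serialize a variable's definition and rebuild a `Variable` object that points at the same graph nodes (the variable name is illustrative):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

v = tf.Variable([1.0, 2.0], name="weights")

proto = v.to_proto()                    # VariableDef protocol buffer
clone = tf.Variable.from_proto(proto)   # re-binds to the existing graph nodes

print(proto.variable_name)              # e.g. "weights:0"
print(clone.name == v.name)             # True
```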

value

+ +View source + + + +Returns the last snapshot of this variable. + +You usually do not need to call this method as all ops that need the value +of the variable call it automatically through a `convert_to_tensor()` call. + +Returns a `Tensor` which holds the value of the variable. You can not +assign a new value to this tensor as it is not a reference to the variable. + +To avoid copies, if the consumer of the returned value is on the same device +as the variable, this actually returns the live value of the variable, not +a copy. Updates to the variable are seen by the consumer. If the consumer +is on a different device it will get a copy of the variable. + + + + + + + + + +
Returns
+A `Tensor` containing the value of the variable. +
+ + + +
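+For instance, a minimal sketch (assuming TF 2.x eager execution):
+
+```python
+v = tf.Variable([1.0, 2.0])
+t = v.value()                     # a Tensor snapshot of the current value
+print(t.numpy())                  # [1. 2.]
+# Most ops convert the variable for you, so this is equivalent in practice:
+print(tf.reduce_sum(v).numpy())   # 3.0
+```
+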

__abs__

+ +View source + + + +Computes the absolute value of a tensor. + +Given a tensor of integer or floating-point values, this operation returns a +tensor of the same type, where each element contains the absolute value of the +corresponding element in the input. + +Given a tensor `x` of complex numbers, this operation returns a tensor of type +`float32` or `float64` that is the absolute value of each element in `x`. For +a complex number \\(a + bj\\), its absolute value is computed as \\(\sqrt{a^2 ++ b^2}\\). For example: + +``` +>>> x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]]) +>>> tf.abs(x) + +``` + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` or `SparseTensor` of type `float16`, `float32`, `float64`, +`int32`, `int64`, `complex64` or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` or `SparseTensor` of the same size, type and sparsity as `x`, +with absolute values. Note, for `complex64` or `complex128` input, the +returned `Tensor` will be of type `float32` or `float64`, respectively. +
+ + + +

__add__

+ +View source + + + +Dispatches to add for strings and add_v2 for all other types. + + +

__and__

+ +View source + + + +Returns the truth value of x AND y element-wise. + +*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +
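+For instance, a minimal sketch (assuming TF 2.x eager execution and a boolean
+variable):
+
+```python
+a = tf.Variable([True, True, False])
+b = tf.constant([True, False, False])
+print((a & b).numpy())  # [ True False False]
+```
+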

__div__

+ +View source + + + +Divide two values using Python 2 semantics. + +Used for Tensor.__div__. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` returns the quotient of x and y. +
+ + + +

__eq__

+ +View source + + + +Compares two variables element-wise for equality. + + +

__floordiv__

+ +View source + + + +Divides `x / y` elementwise, rounding toward the most negative integer. + +The same as tf.compat.v1.div(x,y) for integers, but uses +`tf.floor(tf.compat.v1.div(x,y))` for +floating point arguments so that the result is always an integer (though +possibly an integer represented as floating point). This op is generated by +`x // y` floor division in Python 3 and in Python 2.7 with +`from __future__ import division`. + +`x` and `y` must have the same type, and the result will have the same type +as well. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` rounded down. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the inputs are complex. +
+ + + +
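+For instance, a minimal sketch (assuming TF 2.x eager execution):
+
+```python
+x = tf.Variable([7.0, -7.0])
+y = tf.constant([2.0, 2.0])
+# -7.0 / 2.0 is -3.5, which floors to -4.0.
+print((x // y).numpy())  # [ 3. -4.]
+```
+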

__ge__

+ + + +Returns the truth value of (x >= y) element-wise. + +*NOTE*: `math.greater_equal` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5, 2, 5, 10]) +tf.math.greater_equal(x, y) ==> [True, True, True, False] + +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5]) +tf.math.greater_equal(x, y) ==> [True, False, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__getitem__

+ +View source + + + +Creates a slice helper object given a variable. + +This allows creating a sub-tensor from part of the current contents +of a variable. See tf.Tensor.__getitem__ for detailed examples +of slicing. + +This function in addition also allows assignment to a sliced range. +This is similar to `__setitem__` functionality in Python. However, +the syntax is different so that the user can capture the assignment +operation for grouping or passing to `sess.run()`. +For example, + +```python +import tensorflow as tf +A = tf.Variable([[1,2,3], [4,5,6], [7,8,9]], dtype=tf.float32) +with tf.compat.v1.Session() as sess: + sess.run(tf.compat.v1.global_variables_initializer()) + print(sess.run(A[:2, :2])) # => [[1,2], [4,5]] + + op = A[:2,:2].assign(22. * tf.ones((2, 2))) + print(sess.run(op)) # => [[22, 22, 3], [22, 22, 6], [7,8,9]] +``` + +Note that assignments currently do not support NumPy broadcasting +semantics. + + + + + + + + + + + + + +
Args
+`var` + +An `ops.Variable` object. +
+`slice_spec` + +The arguments to `Tensor.__getitem__`. +
+ + + + + + + + + + + +
Returns
+The appropriate slice of `var`, based on `slice_spec`, as an operator. The
+operator also has an `assign()` method that can be used to generate an
+assignment operator.
+
+ + + + + + + + + + + + + + + +
Raises
+`ValueError`
+
+If a slice range has a negative size.
+
+`TypeError`
+
+If the slice indices aren't int, slice, ellipsis,
+tf.newaxis or int32/int64 tensors.
+
+ + + +

__gt__

+ + + +Returns the truth value of (x > y) element-wise. + +*NOTE*: `math.greater` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 2, 5]) +tf.math.greater(x, y) ==> [False, True, True] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.greater(x, y) ==> [False, False, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__invert__

+ + + +Returns the truth value of `NOT x` element-wise. + + +#### Example: + + + +``` +>>> tf.math.logical_not(tf.constant([True, False])) + +``` + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__iter__

+ +View source + + + +Dummy method to prevent iteration. + +Do not call. + +NOTE(mrry): If we register __getitem__ as an overloaded operator, +Python will valiantly attempt to iterate over the variable's Tensor from 0 +to infinity. Declaring this method prevents this unintended behavior. + + + + + + + + + + +
Raises
+`TypeError` + +when invoked. +
+ + + +

__le__

+ + + +Returns the truth value of (x <= y) element-wise. + +*NOTE*: `math.less_equal` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less_equal(x, y) ==> [True, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 6]) +tf.math.less_equal(x, y) ==> [True, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__lt__

+ + + +Returns the truth value of (x < y) element-wise. + +*NOTE*: `math.less` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less(x, y) ==> [False, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 7]) +tf.math.less(x, y) ==> [False, True, True] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__matmul__

+ +View source + + + +Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +The inputs must, following any transpositions, be tensors of rank >= 2 +where the inner 2 dimensions specify valid matrix multiplication dimensions, +and any further outer dimensions specify matching batch size. + +Both matrices must be of the same type. The supported types are: +`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`. + +Either matrix can be transposed or adjointed (conjugated and transposed) on +the fly by setting one of the corresponding flag to `True`. These are `False` +by default. + +If one or both of the matrices contain a lot of zeros, a more efficient +multiplication algorithm can be used by setting the corresponding +`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. +This optimization is only available for plain matrices (rank-2 tensors) with +datatypes `bfloat16` or `float32`. + +A simple 2-D tensor matrix multiplication: + +``` +>>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) +>>> a # 2-D tensor + +>>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) +>>> b # 2-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +A batch matrix multiplication with batch shape [2]: + +``` +>>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) +>>> a # 3-D tensor + +>>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) +>>> b # 3-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +Since python >= 3.5 the @ operator is supported +(see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow, +it simply calls the tf.matmul() function, so the following lines are +equivalent: + +``` +>>> d = a @ b @ [[10], [11]] +>>> d = tf.matmul(tf.matmul(a, b), [[10], [11]]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`a` + +tf.Tensor of type `float16`, `float32`, `float64`, `int32`, +`complex64`, `complex128` and rank > 1. +
+`b` + +tf.Tensor with same type and rank as `a`. +
+`transpose_a` + +If `True`, `a` is transposed before multiplication. +
+`transpose_b` + +If `True`, `b` is transposed before multiplication. +
+`adjoint_a` + +If `True`, `a` is conjugated and transposed before +multiplication. +
+`adjoint_b` + +If `True`, `b` is conjugated and transposed before +multiplication. +
+`a_is_sparse` + +If `True`, `a` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `a` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`b_is_sparse`
+
+If `True`, `b` is treated as a sparse matrix. Notice, this
+**does not support tf.sparse.SparseTensor**, it just makes optimizations
+that assume most values in `b` are zero.
+See tf.sparse.sparse_dense_matmul
+for some support for tf.SparseTensor multiplication.
+
+`name` + +Name for the operation (optional). +
+ + + + + + + + + + + + + + +
Returns
+A tf.Tensor of the same type as `a` and `b` where each inner-most matrix +is the product of the corresponding matrices in `a` and `b`, e.g. if all +transpose or adjoint attributes are `False`: + +`output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, +for all indices `i`, `j`. +
+`Note` + +This is matrix product, not element-wise product. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `transpose_a` and `adjoint_a`, or `transpose_b` and +`adjoint_b` are both set to `True`. +
+ + + +

__mod__

+
+View source
+
+
+
+Returns element-wise remainder of division. When `x < 0` xor `y < 0` is true,
+this follows Python semantics in that the result here is consistent
+with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
+
+*NOTE*: `math.floormod` supports broadcasting. More about broadcasting
+[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
+
+
+
Args
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +
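+For instance, a minimal sketch (assuming TF 2.x eager execution and an integer
+variable):
+
+```python
+x = tf.Variable([7, -7])
+y = tf.constant([3, 3])
+# Matches Python's flooring semantics: floor(-7 / 3) * 3 + 2 == -7.
+print((x % y).numpy())  # [1 2]
+```
+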

__mul__

+ +View source + + + +Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse". + + +

__ne__

+
+View source
+
+
+
+Compares two variables element-wise for inequality.
+
+
+

__neg__

+ + + +Computes numerical negative value element-wise. + +I.e., \\(y = -x\\). + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__or__

+ +View source + + + +Returns the truth value of x OR y element-wise. + +*NOTE*: `math.logical_or` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +
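+For instance, a minimal sketch (assuming TF 2.x eager execution and a boolean
+variable):
+
+```python
+a = tf.Variable([True, False, False])
+b = tf.constant([False, False, True])
+print((a | b).numpy())  # [ True False  True]
+```
+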

__pow__

+ +View source + + + +Computes the power of one value to another. + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +```python +x = tf.constant([[2, 2], [3, 3]]) +y = tf.constant([[8, 16], [2, 3]]) +tf.pow(x, y) # [[256, 65536], [9, 27]] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`y` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

__radd__

+ +View source + + + +Dispatches to add for strings and add_v2 for all other types. + + +

__rand__

+ +View source + + + +Returns the truth value of x AND y element-wise. + +*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__rdiv__

+ +View source + + + +Divide two values using Python 2 semantics. + +Used for Tensor.__div__. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` returns the quotient of x and y. +
+ + + +

__rfloordiv__

+ +View source + + + +Divides `x / y` elementwise, rounding toward the most negative integer. + +The same as tf.compat.v1.div(x,y) for integers, but uses +`tf.floor(tf.compat.v1.div(x,y))` for +floating point arguments so that the result is always an integer (though +possibly an integer represented as floating point). This op is generated by +`x // y` floor division in Python 3 and in Python 2.7 with +`from __future__ import division`. + +`x` and `y` must have the same type, and the result will have the same type +as well. + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+`x / y` rounded down. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the inputs are complex. +
+ + + +

__rmatmul__

+ +View source + + + +Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +The inputs must, following any transpositions, be tensors of rank >= 2 +where the inner 2 dimensions specify valid matrix multiplication dimensions, +and any further outer dimensions specify matching batch size. + +Both matrices must be of the same type. The supported types are: +`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`. + +Either matrix can be transposed or adjointed (conjugated and transposed) on +the fly by setting one of the corresponding flag to `True`. These are `False` +by default. + +If one or both of the matrices contain a lot of zeros, a more efficient +multiplication algorithm can be used by setting the corresponding +`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. +This optimization is only available for plain matrices (rank-2 tensors) with +datatypes `bfloat16` or `float32`. + +A simple 2-D tensor matrix multiplication: + +``` +>>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) +>>> a # 2-D tensor + +>>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) +>>> b # 2-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +A batch matrix multiplication with batch shape [2]: + +``` +>>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) +>>> a # 3-D tensor + +>>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) +>>> b # 3-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +Since python >= 3.5 the @ operator is supported +(see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow, +it simply calls the tf.matmul() function, so the following lines are +equivalent: + +``` +>>> d = a @ b @ [[10], [11]] +>>> d = tf.matmul(tf.matmul(a, b), [[10], [11]]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`a` + +tf.Tensor of type `float16`, `float32`, `float64`, `int32`, +`complex64`, `complex128` and rank > 1. +
+`b` + +tf.Tensor with same type and rank as `a`. +
+`transpose_a` + +If `True`, `a` is transposed before multiplication. +
+`transpose_b` + +If `True`, `b` is transposed before multiplication. +
+`adjoint_a` + +If `True`, `a` is conjugated and transposed before +multiplication. +
+`adjoint_b` + +If `True`, `b` is conjugated and transposed before +multiplication. +
+`a_is_sparse` + +If `True`, `a` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `a` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`b_is_sparse`
+
+If `True`, `b` is treated as a sparse matrix. Notice, this
+**does not support tf.sparse.SparseTensor**, it just makes optimizations
+that assume most values in `b` are zero.
+See tf.sparse.sparse_dense_matmul
+for some support for tf.SparseTensor multiplication.
+
+`name` + +Name for the operation (optional). +
+ + + + + + + + + + + + + + +
Returns
+A tf.Tensor of the same type as `a` and `b` where each inner-most matrix +is the product of the corresponding matrices in `a` and `b`, e.g. if all +transpose or adjoint attributes are `False`: + +`output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, +for all indices `i`, `j`. +
+`Note` + +This is matrix product, not element-wise product. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `transpose_a` and `adjoint_a`, or `transpose_b` and +`adjoint_b` are both set to `True`. +
+ + + +

__rmod__

+
+View source
+
+
+
+Returns element-wise remainder of division. When `x < 0` xor `y < 0` is true,
+this follows Python semantics in that the result here is consistent
+with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
+
+*NOTE*: `math.floormod` supports broadcasting. More about broadcasting
+[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
+
+
+
Args
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__rmul__

+ +View source + + + +Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse". + + +

__ror__

+ +View source + + + +Returns the truth value of x OR y element-wise. + +*NOTE*: `math.logical_or` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor` of type `bool`. +
+ + + +

__rpow__

+ +View source + + + +Computes the power of one value to another. + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +```python +x = tf.constant([[2, 2], [3, 3]]) +y = tf.constant([[8, 16], [2, 3]]) +tf.pow(x, y) # [[256, 65536], [9, 27]] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`y` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

__rsub__

+ +View source + + + +Returns x - y element-wise. + +*NOTE*: `Subtract` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +

__rtruediv__

+ +View source + + + + + + +

__rxor__

+ +View source + + + +Logical XOR function. + +x ^ y = (x | y) & ~(x & y) + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical XOR with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical XOR of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_xor(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_xor(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_xor(y, z) + +``` + + + + + + + + + + + + + + + + +
Args
+`x`
+
+A tf.Tensor of type bool.
+
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + +

__sub__

+ +View source + + + +Returns x - y element-wise. + +*NOTE*: `Subtract` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
Args
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `x`. +
+ + + +
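+For instance, a minimal sketch (assuming TF 2.x eager execution); the reflected
+form exercises `__rsub__`:
+
+```python
+v = tf.Variable([10.0, 20.0, 30.0])
+print((v - [1.0, 2.0, 3.0]).numpy())  # [ 9. 18. 27.]
+print((1.0 - v).numpy())              # [ -9. -19. -29.]
+```
+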

__truediv__

+ +View source + + + + + + +

__xor__

+ +View source + + + +Logical XOR function. + +x ^ y = (x | y) & ~(x & y) + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical XOR with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical XOR of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_xor(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_xor(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_xor(y, z) + +``` + + + + + + + + + + + + + + + + +
Args
+`x`
+
+A tf.Tensor of type bool.
+
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/VariableAggregation.md b/site/en/api_docs/python/tf/compat/v1/VariableAggregation.md new file mode 100644 index 00000000000..57f90d58b43 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/VariableAggregation.md @@ -0,0 +1,51 @@ +description: Indicates how a distributed variable will be aggregated. + +
+ + + + + + +
+ +# tf.compat.v1.VariableAggregation + + + + + + + + + +Indicates how a distributed variable will be aggregated. + + + +tf.distribute.Strategy distributes a model by making multiple copies +(called "replicas") acting data-parallel on different elements of the input +batch. When performing some variable-update operation, say +`var.assign_add(x)`, in a model, we need to resolve how to combine the +different values for `x` computed in the different replicas. + +* `NONE`: This is the default, giving an error if you use a + variable-update operation with multiple replicas. +* `SUM`: Add the updates across replicas. +* `MEAN`: Take the arithmetic mean ("average") of the updates across replicas. +* `ONLY_FIRST_REPLICA`: This is for when every replica is performing the same + update, but we only want to perform the update once. Used, e.g., for the + global step counter. +* `ONLY_FIRST_TOWER`: Deprecated alias for `ONLY_FIRST_REPLICA`. + +## Class Variables + +* `MEAN` +* `NONE` +* `ONLY_FIRST_REPLICA` +* `SUM` diff --git a/site/en/api_docs/python/tf/compat/v1/VariableScope.md b/site/en/api_docs/python/tf/compat/v1/VariableScope.md new file mode 100644 index 00000000000..f9595c38141 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/VariableScope.md @@ -0,0 +1,330 @@ +description: Variable scope object to carry defaults to provide to get_variable. + +
+ + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.VariableScope + + + + + + + + + +Variable scope object to carry defaults to provide to `get_variable`. + + + + + + + +Many of the arguments we need for `get_variable` in a variable store are most +easily handled with a context. This object is used for the defaults. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +name of the current scope, used as prefix in get_variable. +
+`initializer` + +default initializer passed to get_variable. +
+`regularizer` + +default regularizer passed to get_variable. +
+`reuse` + +Boolean, None, or tf.compat.v1.AUTO_REUSE, setting the reuse in +get_variable. When eager execution is enabled this argument is always +forced to be False. +
+`caching_device` + +string, callable, or None: the caching device passed to +get_variable. +
+`partitioner` + +callable or `None`: the partitioner passed to `get_variable`. +
+`custom_getter` + +default custom getter passed to get_variable. +
+`name_scope` + +The name passed to tf.name_scope. +
+`dtype` + +default type passed to get_variable (defaults to DT_FLOAT). +
+`use_resource` + +if False, create a normal Variable; if True create an +experimental ResourceVariable with well-defined semantics. Defaults to +False (will later change to True). When eager execution is enabled this +argument is always forced to be True. +
+`constraint` + +An optional projection function to be applied to the variable +after being updated by an `Optimizer` (e.g. used to implement norm +constraints or value constraints for layer weights). The function must +take as input the unprojected Tensor representing the value of the +variable and return the Tensor for the projected value (which must have +the same shape). Constraints are not safe to use when doing asynchronous +distributed training. +
+`original_name_scope` + + +
+ + + +## Methods + +

get_collection

+ +View source + + + +Get this scope's variables. + + +

get_variable

+ +View source + + + +Gets an existing variable with this name or create a new one. + + +

global_variables

+ +View source + + + +Get this scope's global variables. + + +

local_variables

+ +View source + + + +Get this scope's local variables. + + +

reuse_variables

+ +View source + + + +Reuse variables in this scope. + + +

set_caching_device

+ +View source + + + +Set caching_device for this scope. + + +

set_custom_getter

+ +View source + + + +Set custom getter for this scope. + + +

set_dtype

+ +View source + + + +Set data type for this scope. + + +

set_initializer

+ +View source + + + +Set initializer for this scope. + + +

set_partitioner

+ +View source + + + +Set partitioner for this scope. + + +

set_regularizer

+ +View source + + + +Set regularizer for this scope. + + +

set_use_resource

+ +View source + + + +Sets whether to use ResourceVariables for this scope. + + +

trainable_variables

+ +View source + + + +Get this scope's trainable variables. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/WholeFileReader.md b/site/en/api_docs/python/tf/compat/v1/WholeFileReader.md new file mode 100644 index 00000000000..afe4d823806 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/WholeFileReader.md @@ -0,0 +1,484 @@ +description: A Reader that outputs the entire contents of a file as a value. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.WholeFileReader + + + + + + + + + +A Reader that outputs the entire contents of a file as a value. + +Inherits From: [`ReaderBase`](../../../tf/compat/v1/ReaderBase.md) + + + + + + + +To use, enqueue filenames in a Queue. The output of Read will +be a filename (key) and the contents of that file (value). + +See ReaderBase for supported methods. + + + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + +#### Eager Compatibility +Readers are not compatible with eager execution. Instead, please +use tf.data to get data into your model. + + + + + + + + + + + + + + + + + +
+`reader_ref` + +Op that implements the reader. +
+`supports_serialize` + +Whether the Reader implementation can serialize its state. +
+ + + +## Methods + +

num_records_produced

+ +View source + + + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

num_work_units_completed

+ +View source + + + +Returns the number of work units this reader has finished processing. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+An int64 Tensor. +
+ + + +

read

+ +View source + + + +Returns the next record (key, value) pair produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (key, value). +
+`key` + +A string scalar Tensor. +
+`value` + +A string scalar Tensor. +
+ + + +
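+For instance, a minimal graph-mode sketch using the (deprecated) queue-runner
+pattern; the file names below are placeholders:
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+queue = tf.train.string_input_producer(["file_a.txt", "file_b.txt"])
+reader = tf.WholeFileReader()
+key, value = reader.read(queue)
+
+with tf.Session() as sess:
+  coord = tf.train.Coordinator()
+  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
+  filename, contents = sess.run([key, value])  # one (key, value) record
+  coord.request_stop()
+  coord.join(threads)
+```
+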

read_up_to

+ +View source + + + +Returns up to num_records (key, value) pairs produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g., when the +Reader needs to start reading from a new file since it has +finished with the previous file). +It may return less than num_records even before the last batch. + + + + + + + + + + + + + + + + +
Args
+`queue` + +A Queue or a mutable string Tensor representing a handle +to a Queue, with string work items. +
+`num_records` + +Number of records to read. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
Returns
+A tuple of Tensors (keys, values). +
+`keys` + +A 1-D string Tensor. +
+`values` + +A 1-D string Tensor. +
+ + + +

reset

+ +View source + + + +Restore a reader to its initial clean state. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

restore_state

+ +View source + + + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + + + + + + + + + + + + + +
Args
+`state` + +A string Tensor. +Result of a SerializeState of a Reader with matching type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + +

serialize_state

+ +View source + + + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A string Tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/add_check_numerics_ops.md b/site/en/api_docs/python/tf/compat/v1/add_check_numerics_ops.md new file mode 100644 index 00000000000..d6e57518152 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/add_check_numerics_ops.md @@ -0,0 +1,86 @@ +description: Connect a tf.debugging.check_numerics to every floating point tensor. + +
+ + +
+ +# tf.compat.v1.add_check_numerics_ops + + + + + + + + + +Connect a tf.debugging.check_numerics to every floating point tensor. + + + + + + + +`check_numerics` operations themselves are added for each `half`, `float`, +or `double` tensor in the current default graph. For all ops in the graph, the +`check_numerics` op for all of its (`half`, `float`, or `double`) inputs +is guaranteed to run before the `check_numerics` op on any of its outputs. + +Note: This API is not compatible with the use of tf.cond or +tf.while_loop, and will raise a `ValueError` if you attempt to call it +in such a graph. + + + + + + + + + +
+A `group` op depending on all `check_numerics` ops added. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If the graph contains any numeric operations in a control flow +structure. +
+`RuntimeError` + +If called with eager execution enabled. +
+ + + + +#### Eager Compatibility +Not compatible with eager execution. To check for `Inf`s and `NaN`s under +eager execution, call tf.debugging.enable_check_numerics() once before +executing the checked operations. + diff --git a/site/en/api_docs/python/tf/compat/v1/add_to_collection.md b/site/en/api_docs/python/tf/compat/v1/add_to_collection.md new file mode 100644 index 00000000000..0fa8e2cf804 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/add_to_collection.md @@ -0,0 +1,66 @@ +description: Wrapper for Graph.add_to_collection() using the default graph. + +
+ + +
+ +# tf.compat.v1.add_to_collection + + + + + + + + + +Wrapper for `Graph.add_to_collection()` using the default graph. + + + + + + + +See tf.Graph.add_to_collection +for more details. + + + + + + + + + + + + + +
+`name` + +The key for the collection. For example, the `GraphKeys` class +contains many standard names for collections. +
+`value` + +The value to add to the collection. +
+ + + +#### Eager Compatibility +Collections are only supported in eager when variables are created inside +an EagerVariableStore (e.g. as part of a layer or template). + diff --git a/site/en/api_docs/python/tf/compat/v1/add_to_collections.md b/site/en/api_docs/python/tf/compat/v1/add_to_collections.md new file mode 100644 index 00000000000..969b5ce970b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/add_to_collections.md @@ -0,0 +1,66 @@ +description: Wrapper for Graph.add_to_collections() using the default graph. + +
+ + +
+ +# tf.compat.v1.add_to_collections + + + + + + + + + +Wrapper for `Graph.add_to_collections()` using the default graph. + + + + + + + +See tf.Graph.add_to_collections +for more details. + + + + + + + + + + + + + +
+`names` + +The key for the collections. The `GraphKeys` class contains many +standard names for collections. +
+`value` + +The value to add to the collections. +
+ + + +#### Eager Compatibility +Collections are only supported in eager when variables are created inside +an EagerVariableStore (e.g. as part of a layer or template). + diff --git a/site/en/api_docs/python/tf/compat/v1/all_variables.md b/site/en/api_docs/python/tf/compat/v1/all_variables.md new file mode 100644 index 00000000000..f2a29ff5c61 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/all_variables.md @@ -0,0 +1,35 @@ +description: Use tf.compat.v1.global_variables instead. (deprecated) + +
+ + +
+ +# tf.compat.v1.all_variables + + + + + + + + + +Use tf.compat.v1.global_variables instead. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. +Instructions for updating: +Please use tf.global_variables instead. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/app.md b/site/en/api_docs/python/tf/compat/v1/app.md new file mode 100644 index 00000000000..d8af7f5bbcb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/app.md @@ -0,0 +1,29 @@ +description: Generic entry point script. + +
+ + +
+ +# Module: tf.compat.v1.app + + + + + + + + + +Generic entry point script. + + + +## Modules + +[`flags`](../../../tf/compat/v1/flags.md) module: Import router for absl.flags. See https://github.com/abseil/abseil-py. + +## Functions + +[`run(...)`](../../../tf/compat/v1/app/run.md): Runs the program with an optional 'main' function and 'argv' list. + diff --git a/site/en/api_docs/python/tf/compat/v1/app/run.md b/site/en/api_docs/python/tf/compat/v1/app/run.md new file mode 100644 index 00000000000..132c9025d1e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/app/run.md @@ -0,0 +1,33 @@ +description: Runs the program with an optional 'main' function and 'argv' list. + +
+ + +
+ +# tf.compat.v1.app.run + + + + + + + + + +Runs the program with an optional 'main' function and 'argv' list. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/arg_max.md b/site/en/api_docs/python/tf/compat/v1/arg_max.md new file mode 100644 index 00000000000..5a8e37be528 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/arg_max.md @@ -0,0 +1,97 @@ +description: Returns the index with the largest value across dimensions of a tensor. + +
+ + +
+ +# tf.compat.v1.arg_max + + + + + + + + + +Returns the index with the largest value across dimensions of a tensor. + + + + + + + +Note that in case of ties the identity of the return value is not guaranteed. + +#### Usage: + +```python +import tensorflow as tf +a = [1, 10, 26.9, 2.8, 166.32, 62.3] +b = tf.math.argmax(input = a) +c = tf.keras.backend.eval(b) +# c = 4 +# here a[4] = 166.32 which is the largest element of a across axis 0 +``` + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`dimension` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +int32 or int64, must be in the range `[-rank(input), rank(input))`. +Describes which dimension of the input Tensor to reduce across. For vectors, +use dimension = 0. +
+`output_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_type`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/arg_min.md b/site/en/api_docs/python/tf/compat/v1/arg_min.md new file mode 100644 index 00000000000..d2fe65fbd8a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/arg_min.md @@ -0,0 +1,97 @@ +description: Returns the index with the smallest value across dimensions of a tensor. + +
+ + +
+ +# tf.compat.v1.arg_min + + + + + + + + + +Returns the index with the smallest value across dimensions of a tensor. + + + + + + + +Note that in case of ties the identity of the return value is not guaranteed. + +#### Usage: + +```python +import tensorflow as tf +a = [1, 10, 26.9, 2.8, 166.32, 62.3] +b = tf.math.argmin(input = a) +c = tf.keras.backend.eval(b) +# c = 0 +# here a[0] = 1 which is the smallest element of a across axis 0 +``` + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`dimension` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +int32 or int64, must be in the range `[-rank(input), rank(input))`. +Describes which dimension of the input Tensor to reduce across. For vectors, +use dimension = 0. +
+`output_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_type`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/argmax.md b/site/en/api_docs/python/tf/compat/v1/argmax.md new file mode 100644 index 00000000000..f594737adbc --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/argmax.md @@ -0,0 +1,117 @@ +description: Returns the index with the largest value across axes of a tensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.argmax + + + + + + + + + +Returns the index with the largest value across axes of a tensor. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. +Instructions for updating: +Use the `axis` argument instead + +Note that in case of ties the identity of the return value is not guaranteed. + +#### Usage: + +```python +import tensorflow as tf +a = [1, 10, 26.9, 2.8, 166.32, 62.3] +b = tf.math.argmax(input = a) +c = tf.keras.backend.eval(b) +# c = 4 +# here a[4] = 166.32 which is the largest element of a across axis 0 +``` + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +int32 or int64, must be in the range `[-rank(input), rank(input))`. +Describes which axis of the input Tensor to reduce across. For vectors, +use axis = 0. +
+`output_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_type`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/argmin.md b/site/en/api_docs/python/tf/compat/v1/argmin.md new file mode 100644 index 00000000000..dab3ac57ed8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/argmin.md @@ -0,0 +1,117 @@ +description: Returns the index with the smallest value across axes of a tensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.argmin + + + + + + + + + +Returns the index with the smallest value across axes of a tensor. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version. +Instructions for updating: +Use the `axis` argument instead + +Note that in case of ties the identity of the return value is not guaranteed. + +#### Usage: + +```python +import tensorflow as tf +a = [1, 10, 26.9, 2.8, 166.32, 62.3] +b = tf.math.argmin(input = a) +c = tf.keras.backend.eval(b) +# c = 0 +# here a[0] = 1 which is the smallest element of a across axis 0 +``` + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +int32 or int64, must be in the range `[-rank(input), rank(input))`. +Describes which axis of the input Tensor to reduce across. For vectors, +use axis = 0. +
+`output_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_type`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/assert_equal.md b/site/en/api_docs/python/tf/compat/v1/assert_equal.md new file mode 100644 index 00000000000..5924369bff6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_equal.md @@ -0,0 +1,146 @@ +description: Assert the condition x == y holds element-wise. + +
+ + +
+ +# tf.compat.v1.assert_equal + + + + + + + + + +Assert the condition `x == y` holds element-wise. + + + + + + + + + +This condition holds if for every pair of (possibly broadcast) elements +`x[i]`, `y[i]`, we have `x[i] == y[i]`. +If both `x` and `y` are empty, this is trivially satisfied. + +When running in graph mode, you should add a dependency on this operation +to ensure that it runs. Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]): + output = tf.reduce_sum(x) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`, `y`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_equal". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x == y` is False. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x == y` is False. The check can be performed immediately during +eager execution or if `x` and `y` are statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_greater.md b/site/en/api_docs/python/tf/compat/v1/assert_greater.md new file mode 100644 index 00000000000..f43bc4f8513 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_greater.md @@ -0,0 +1,146 @@ +description: Assert the condition x > y holds element-wise. + +
+ + +
+ +# tf.compat.v1.assert_greater + + + + + + + + + +Assert the condition `x > y` holds element-wise. + + + + + + + + + +This condition holds if for every pair of (possibly broadcast) elements +`x[i]`, `y[i]`, we have `x[i] > y[i]`. +If both `x` and `y` are empty, this is trivially satisfied. + +When running in graph mode, you should add a dependency on this operation +to ensure that it runs. Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]): + output = tf.reduce_sum(x) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`, `y`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_greater". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x > y` is False. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x > y` is False. The check can be performed immediately during +eager execution or if `x` and `y` are statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_greater_equal.md b/site/en/api_docs/python/tf/compat/v1/assert_greater_equal.md new file mode 100644 index 00000000000..7f3fd78fe6a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_greater_equal.md @@ -0,0 +1,146 @@ +description: Assert the condition x >= y holds element-wise. + +
+ + +
+ +# tf.compat.v1.assert_greater_equal + + + + + + + + + +Assert the condition `x >= y` holds element-wise. + + + + + + + + + +This condition holds if for every pair of (possibly broadcast) elements +`x[i]`, `y[i]`, we have `x[i] >= y[i]`. +If both `x` and `y` are empty, this is trivially satisfied. + +When running in graph mode, you should add a dependency on this operation +to ensure that it runs. Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]): + output = tf.reduce_sum(x) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`, `y`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_greater_equal". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x >= y` is False. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x >= y` is False. The check can be performed immediately during +eager execution or if `x` and `y` are statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_integer.md b/site/en/api_docs/python/tf/compat/v1/assert_integer.md new file mode 100644 index 00000000000..81aef5a9c04 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_integer.md @@ -0,0 +1,112 @@ +description: Assert that x is of integer dtype. + +
+ + +
+ +# tf.compat.v1.assert_integer + + + + + + + + + +Assert that `x` is of integer dtype. + + + + + + + + + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_integer(x)]): + output = tf.reduce_sum(x) +``` + + + + + + + + + + + + + + + + +
+`x` + +`Tensor` whose basetype is integer and is not quantized. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_integer". +
+ + + + + + + + + + + + +
+`TypeError` + +If `x.dtype` is anything other than non-quantized integer. +
+ + + + + + + + + + + +
+A `no_op` that does nothing. Type can be determined statically. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/assert_less.md b/site/en/api_docs/python/tf/compat/v1/assert_less.md new file mode 100644 index 00000000000..aa450611bfe --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_less.md @@ -0,0 +1,146 @@ +description: Assert the condition x < y holds element-wise. + +
+ + +
+ +# tf.compat.v1.assert_less + + + + + + + + + +Assert the condition `x < y` holds element-wise. + + + + + + + + + +This condition holds if for every pair of (possibly broadcast) elements +`x[i]`, `y[i]`, we have `x[i] < y[i]`. +If both `x` and `y` are empty, this is trivially satisfied. + +When running in graph mode, you should add a dependency on this operation +to ensure that it runs. Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]): + output = tf.reduce_sum(x) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`, `y`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_less". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x < y` is False. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x < y` is False. The check can be performed immediately during +eager execution or if `x` and `y` are statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_less_equal.md b/site/en/api_docs/python/tf/compat/v1/assert_less_equal.md new file mode 100644 index 00000000000..4eef0f55660 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_less_equal.md @@ -0,0 +1,146 @@ +description: Assert the condition x <= y holds element-wise. + +
+ + +
+ +# tf.compat.v1.assert_less_equal + + + + + + + + + +Assert the condition `x <= y` holds element-wise. + + + + + + + + + +This condition holds if for every pair of (possibly broadcast) elements +`x[i]`, `y[i]`, we have `x[i] <= y[i]`. +If both `x` and `y` are empty, this is trivially satisfied. + +When running in graph mode, you should add a dependency on this operation +to ensure that it runs. Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_less_equal(x, y)]): + output = tf.reduce_sum(x) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`, `y`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_less_equal". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x <= y` is False. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x <= y` is False. The check can be performed immediately during +eager execution or if `x` and `y` are statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_near.md b/site/en/api_docs/python/tf/compat/v1/assert_near.md new file mode 100644 index 00000000000..c4d6a57a14e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_near.md @@ -0,0 +1,153 @@ +description: Assert the condition x and y are close element-wise. + +
+ + +
+ +# tf.compat.v1.assert_near + + + + + + + + + +Assert the condition `x` and `y` are close element-wise. + + + + + + + + + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_near(x, y)]): + output = tf.reduce_sum(x) +``` + +This condition holds if for every pair of (possibly broadcast) elements +`x[i]`, `y[i]`, we have + +```tf.abs(x[i] - y[i]) <= atol + rtol * tf.abs(y[i])```. + +If both `x` and `y` are empty, this is trivially satisfied. + +The default `atol` and `rtol` is `10 * eps`, where `eps` is the smallest +representable positive number such that `1 + eps != 1`. This is about +`1.2e-6` in `32bit`, `2.22e-15` in `64bit`, and `0.00977` in `16bit`. +See `numpy.finfo`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Float or complex `Tensor`. +
+`y` + +Float or complex `Tensor`, same `dtype` as, and broadcastable to, `x`. +
+`rtol` + +`Tensor`. Same `dtype` as, and broadcastable to, `x`. +The relative tolerance. Default is `10 * eps`. +
+`atol` + +`Tensor`. Same `dtype` as, and broadcastable to, `x`. +The absolute tolerance. Default is `10 * eps`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`, `y`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_near". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x` and `y` are not close enough. +
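+
+A small illustrative sketch (not from the original docstring), assuming eager
+execution and the default `float32` tolerance of roughly `10 * eps`:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0], dtype=tf.float32)
+y = tf.constant([1.0 + 1e-7, 2.0], dtype=tf.float32)
+
+# The difference (1e-7) is well inside atol + rtol * |y|, so no error is raised.
+tf.compat.v1.assert_near(x, y)
+
+try:
+  tf.compat.v1.assert_near(x, x + 1.0)  # far outside the default tolerance
+except tf.errors.InvalidArgumentError as e:
+  print("tensors are not close:", e.message)
+```
+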
+ + + + +#### Numpy Compatibility +Similar to `numpy.assert_allclose`, except tolerance depends on data type. +This is due to the fact that `TensorFlow` is often used with `32bit`, `64bit`, +and even `16bit` data. + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_negative.md b/site/en/api_docs/python/tf/compat/v1/assert_negative.md new file mode 100644 index 00000000000..5798fbc5a07 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_negative.md @@ -0,0 +1,138 @@ +description: Assert the condition x < 0 holds element-wise. + +
+ + +
+
+
+# tf.compat.v1.assert_negative
+
+
+Assert the condition `x < 0` holds element-wise.
+
+
+When running in graph mode, you should add a dependency on this operation
+to ensure that it runs. Example of adding a dependency to an operation:
+
+```python
+with tf.control_dependencies([tf.debugging.assert_negative(x)]):
+  output = tf.reduce_sum(x)
+```
+
+Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`.
+If `x` is empty this is trivially satisfied.
+
+
+`x` + +Numeric `Tensor`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_negative". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x < 0` is False. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x < 0` is False. The check can be performed immediately during +eager execution or if `x` is statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_non_negative.md b/site/en/api_docs/python/tf/compat/v1/assert_non_negative.md new file mode 100644 index 00000000000..cc153a828cb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_non_negative.md @@ -0,0 +1,138 @@ +description: Assert the condition x >= 0 holds element-wise. + +
+ + +
+
+
+# tf.compat.v1.assert_non_negative
+
+
+Assert the condition `x >= 0` holds element-wise.
+
+
+When running in graph mode, you should add a dependency on this operation
+to ensure that it runs. Example of adding a dependency to an operation:
+
+```python
+with tf.control_dependencies([tf.debugging.assert_non_negative(x)]):
+  output = tf.reduce_sum(x)
+```
+
+Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`.
+If `x` is empty this is trivially satisfied.
+
+
+`x` + +Numeric `Tensor`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_non_negative". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x >= 0` is False. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x >= 0` is False. The check can be performed immediately during +eager execution or if `x` is statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_non_positive.md b/site/en/api_docs/python/tf/compat/v1/assert_non_positive.md new file mode 100644 index 00000000000..a957de26a2c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_non_positive.md @@ -0,0 +1,138 @@ +description: Assert the condition x <= 0 holds element-wise. + +
+ + +
+
+
+# tf.compat.v1.assert_non_positive
+
+
+Assert the condition `x <= 0` holds element-wise.
+
+
+When running in graph mode, you should add a dependency on this operation
+to ensure that it runs. Example of adding a dependency to an operation:
+
+```python
+with tf.control_dependencies([tf.debugging.assert_non_positive(x)]):
+  output = tf.reduce_sum(x)
+```
+
+Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`.
+If `x` is empty this is trivially satisfied.
+
+
+`x` + +Numeric `Tensor`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_non_positive". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x <= 0` is False. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x <= 0` is False. The check can be performed immediately during +eager execution or if `x` is statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_none_equal.md b/site/en/api_docs/python/tf/compat/v1/assert_none_equal.md new file mode 100644 index 00000000000..34285111e7e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_none_equal.md @@ -0,0 +1,146 @@ +description: Assert the condition x != y holds element-wise. + +
+ + +
+ +# tf.compat.v1.assert_none_equal + + + + + + + + + +Assert the condition `x != y` holds element-wise. + + + + + + + + + +This condition holds if for every pair of (possibly broadcast) elements +`x[i]`, `y[i]`, we have `x[i] != y[i]`. +If both `x` and `y` are empty, this is trivially satisfied. + +When running in graph mode, you should add a dependency on this operation +to ensure that it runs. Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]): + output = tf.reduce_sum(x) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`, `y`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_none_equal". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x != y` is False. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x != y` is False. The check can be performed immediately during +eager execution or if `x` and `y` are statically known. +
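+
+A brief eager-mode sketch (added for illustration, not from the original
+docstring; it assumes TensorFlow 2.x where the check runs immediately):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1, 2, 3])
+y = tf.constant([4, 5, 6])
+
+# No pair of elements is equal, so the assert passes and returns None.
+tf.compat.v1.assert_none_equal(x, y)
+
+try:
+  # The first elements are equal, so InvalidArgumentError is raised.
+  tf.compat.v1.assert_none_equal(x, tf.constant([1, 0, 0]))
+except tf.errors.InvalidArgumentError as e:
+  print("found equal elements:", e.message)
+```
+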
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_positive.md b/site/en/api_docs/python/tf/compat/v1/assert_positive.md new file mode 100644 index 00000000000..447c9efc409 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_positive.md @@ -0,0 +1,138 @@ +description: Assert the condition x > 0 holds element-wise. + +
+ + +
+
+
+# tf.compat.v1.assert_positive
+
+
+Assert the condition `x > 0` holds element-wise.
+
+
+When running in graph mode, you should add a dependency on this operation
+to ensure that it runs. Example of adding a dependency to an operation:
+
+```python
+with tf.control_dependencies([tf.debugging.assert_positive(x)]):
+  output = tf.reduce_sum(x)
+```
+
+Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`.
+If `x` is empty this is trivially satisfied.
+
+
+`x` + +Numeric `Tensor`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_positive". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x > 0` is False. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x > 0` is False. The check can be performed immediately during +eager execution or if `x` is statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/compat/v1/assert_rank.md b/site/en/api_docs/python/tf/compat/v1/assert_rank.md new file mode 100644 index 00000000000..45e65a122d6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_rank.md @@ -0,0 +1,135 @@ +description: Assert x has rank equal to rank. + +
+ + +
+ +# tf.compat.v1.assert_rank + + + + + + + + + +Assert `x` has rank equal to `rank`. + + + + + + + + + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]): + output = tf.reduce_sum(x) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`rank` + +Scalar integer `Tensor`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and the shape of `x`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_rank". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless `x` has specified rank. +If static checks determine `x` has correct rank, a `no_op` is returned. +
+ + + + + + + + + + + + +
+`ValueError` + +If static checks determine `x` has wrong rank. +
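+
+A minimal sketch of both outcomes (illustrative only; it assumes a statically
+known shape, in which case a failed check surfaces as a `ValueError`):
+
+```python
+import tensorflow as tf
+
+x = tf.ones([3, 4])
+
+# Rank matches, so this is effectively a no-op.
+tf.compat.v1.assert_rank(x, 2)
+
+try:
+  # The rank is statically known to be 2, so the mismatch is caught right away.
+  tf.compat.v1.assert_rank(x, 3)
+except ValueError as e:
+  print("wrong rank:", e)
+```
+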
+ diff --git a/site/en/api_docs/python/tf/compat/v1/assert_rank_at_least.md b/site/en/api_docs/python/tf/compat/v1/assert_rank_at_least.md new file mode 100644 index 00000000000..70143dca8fb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_rank_at_least.md @@ -0,0 +1,136 @@ +description: Assert x has rank equal to rank or higher. + +
+ + +
+ +# tf.compat.v1.assert_rank_at_least + + + + + + + + + +Assert `x` has rank equal to `rank` or higher. + + + + + + + + + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]): + output = tf.reduce_sum(x) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`rank` + +Scalar `Tensor`. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). +Defaults to "assert_rank_at_least". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless `x` has specified rank or higher. +If static checks determine `x` has correct rank, a `no_op` is returned. +
+ + + + + + + + + + + + +
+`ValueError` + +If static checks determine `x` has wrong rank. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/assert_rank_in.md b/site/en/api_docs/python/tf/compat/v1/assert_rank_in.md new file mode 100644 index 00000000000..1c5f81c0a26 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_rank_in.md @@ -0,0 +1,136 @@ +description: Assert x has rank in ranks. + +
+ + +
+ +# tf.compat.v1.assert_rank_in + + + + + + + + + +Assert `x` has rank in `ranks`. + + + + + + + + + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]): + output = tf.reduce_sum(x) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`ranks` + +Iterable of scalar `Tensor` objects. +
+`data` + +The tensors to print out if the condition is False. Defaults to +error message and first few entries of `x`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). +Defaults to "assert_rank_in". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. +If static checks determine `x` has matching rank, a `no_op` is returned. +
+ + + + + + + + + + + + +
+`ValueError` + +If static checks determine `x` has mismatched rank. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/assert_scalar.md b/site/en/api_docs/python/tf/compat/v1/assert_scalar.md new file mode 100644 index 00000000000..b6c3a2a0034 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_scalar.md @@ -0,0 +1,110 @@ +description: Asserts that the given tensor is a scalar (i.e. zero-dimensional). + +
+ + +
+ +# tf.compat.v1.assert_scalar + + + + + + + + + +Asserts that the given `tensor` is a scalar (i.e. zero-dimensional). + + + + + + + + + +This function raises `ValueError` unless it can be certain that the given +`tensor` is a scalar. `ValueError` is also raised if the shape of `tensor` is +unknown. + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`name`
+
+A name for this operation. Defaults to "assert_scalar".
+
+`message` + +A string to prefix to the default message. +
+ + + + + + + + + + + +
+The input tensor (potentially converted to a `Tensor`). +
+ + + + + + + + + + + + +
+`ValueError` + +If the tensor is not scalar (rank 0), or if its shape is +unknown. +
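+
+A short sketch (illustrative, not from the original docstring):
+
+```python
+import tensorflow as tf
+
+# A zero-dimensional tensor passes and is returned (possibly converted).
+s = tf.compat.v1.assert_scalar(tf.constant(3.0))
+
+try:
+  tf.compat.v1.assert_scalar(tf.constant([1.0, 2.0]))  # rank 1, not a scalar
+except ValueError as e:
+  print("not a scalar:", e)
+```
+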
+ diff --git a/site/en/api_docs/python/tf/compat/v1/assert_type.md b/site/en/api_docs/python/tf/compat/v1/assert_type.md new file mode 100644 index 00000000000..183900b0a6d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_type.md @@ -0,0 +1,114 @@ +description: Statically asserts that the given Tensor is of the specified type. + +
+ + +
+ +# tf.compat.v1.assert_type + + + + + + + + + +Statically asserts that the given `Tensor` is of the specified type. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`tf_type`
+
+A TensorFlow type (`dtypes.float32`, `tf.int64`, `dtypes.bool`,
+etc.).
+
+`message` + +A string to prefix to the default message. +
+`name`
+
+A name to give this `Op`. Defaults to "assert_type".
+
+ + + + + + + + + + + + +
+`TypeError`
+
+If the tensor's data type doesn't match `tf_type`.
+
+ + + + + + + + + + + +
+A `no_op` that does nothing. Type can be determined statically. +
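+
+A short sketch (illustrative only):
+
+```python
+import tensorflow as tf
+
+t = tf.constant([1.0, 2.0], dtype=tf.float32)
+
+# Matching dtype: the static check passes.
+tf.compat.v1.assert_type(t, tf.float32)
+
+try:
+  tf.compat.v1.assert_type(t, tf.int64)  # dtype mismatch
+except TypeError as e:
+  print("unexpected dtype:", e)
+```
+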
+ diff --git a/site/en/api_docs/python/tf/compat/v1/assert_variables_initialized.md b/site/en/api_docs/python/tf/compat/v1/assert_variables_initialized.md new file mode 100644 index 00000000000..7df87c569d1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assert_variables_initialized.md @@ -0,0 +1,76 @@ +description: Returns an Op to check if variables are initialized. + +
+ + +
+ +# tf.compat.v1.assert_variables_initialized + + + + + + + + + +Returns an Op to check if variables are initialized. + + + + + + + +NOTE: This function is obsolete and will be removed in 6 months. Please +change your implementation to use `report_uninitialized_variables()`. + +When run, the returned Op will raise the exception `FailedPreconditionError` +if any of the variables has not yet been initialized. + +Note: This function is implemented by trying to fetch the values of the +variables. If one of the variables is not initialized a message may be +logged by the C++ runtime. This is expected. + + + + + + + + + + +
+`var_list` + +List of `Variable` objects to check. Defaults to the value of +`global_variables().` +
+ + + + + + + + + + + +
+An Op, or None if there are no variables. +
+ + +**NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/assign.md b/site/en/api_docs/python/tf/compat/v1/assign.md new file mode 100644 index 00000000000..daacb6585de --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assign.md @@ -0,0 +1,103 @@ +description: Update ref by assigning value to it. + +
+ + +
+ +# tf.compat.v1.assign + + + + + + + + + +Update `ref` by assigning `value` to it. + + + + + + + +This operation outputs a Tensor that holds the new value of `ref` after +the value has been assigned. This makes it easier to chain operations that +need to use the reset value. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Should be from a `Variable` node. May be +uninitialized. +
+`value` + +A `Tensor`. Must have the same shape and dtype as `ref`. The value to +be assigned to the variable. +
+`validate_shape` + +An optional `bool`. Defaults to `True`. If true, the +operation will validate that the shape of 'value' matches the shape of the +Tensor being assigned to. If false, 'ref' will take on the shape of +'value'. +
+`use_locking` + +An optional `bool`. Defaults to `True`. If True, the assignment +will be protected by a lock; otherwise the behavior is undefined, but may +exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` that will hold the new value of `ref` after +the assignment has completed. +
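+
+A minimal sketch of reading back the assigned value (illustrative; it assumes
+eager execution with a `tf.Variable`, not a reference-type `Tensor`):
+
+```python
+import tensorflow as tf
+
+v = tf.Variable(10.0)
+
+# The returned value reflects the variable after the assignment.
+assigned = tf.compat.v1.assign(v, 3.0)
+print(assigned.numpy())   # 3.0
+print(v.numpy())          # 3.0
+```
+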
+ diff --git a/site/en/api_docs/python/tf/compat/v1/assign_add.md b/site/en/api_docs/python/tf/compat/v1/assign_add.md new file mode 100644 index 00000000000..482fa1f49c9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assign_add.md @@ -0,0 +1,96 @@ +description: Update ref by adding value to it. + +
+ + +
+ +# tf.compat.v1.assign_add + + + + + + + + + +Update `ref` by adding `value` to it. + + + + + + + +This operation outputs "ref" after the update is done. +This makes it easier to chain operations that need to use the reset value. +Unlike tf.math.add, this op does not broadcast. `ref` and `value` must have +the same shape. + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, +`float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, +`complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be +from a `Variable` node. +
+`value` + +A `Tensor`. Must have the same shape and dtype as `ref`. The value to +be added to the variable. +
+`use_locking` + +An optional `bool`. Defaults to `False`. If True, the addition +will be protected by a lock; otherwise the behavior is undefined, but may +exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+Same as "ref". Returned as a convenience for operations that want +to use the new value after the variable has been updated. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/assign_sub.md b/site/en/api_docs/python/tf/compat/v1/assign_sub.md new file mode 100644 index 00000000000..2cbe5e90479 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/assign_sub.md @@ -0,0 +1,96 @@ +description: Update ref by subtracting value from it. + +
+ + +
+ +# tf.compat.v1.assign_sub + + + + + + + + + +Update `ref` by subtracting `value` from it. + + + + + + + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. +Unlike tf.math.subtract, this op does not broadcast. `ref` and `value` +must have the same shape. + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, +`float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, +`complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be +from a `Variable` node. +
+`value`
+
+A `Tensor`. Must have the same shape and dtype as `ref`. The value to
+be subtracted from the variable.
+
+`use_locking` + +An optional `bool`. Defaults to `False`. If True, the +subtraction will be protected by a lock; otherwise the behavior is +undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+Same as "ref". Returned as a convenience for operations that want +to use the new value after the variable has been updated. +
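+
+A small sketch chaining `assign_add` and `assign_sub` on the same variable
+(illustrative; assumes eager execution with a `tf.Variable`):
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([2.0, 4.0])
+
+tf.compat.v1.assign_add(v, [1.0, 1.0])   # v is now [3.0, 5.0]
+tf.compat.v1.assign_sub(v, [0.5, 0.5])   # v is now [2.5, 4.5]
+print(v.numpy())
+```
+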
+ diff --git a/site/en/api_docs/python/tf/compat/v1/audio.md b/site/en/api_docs/python/tf/compat/v1/audio.md new file mode 100644 index 00000000000..8bfea5b7aca --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/audio.md @@ -0,0 +1,27 @@ +description: Public API for tf.audio namespace. + +
+ + +
+ +# Module: tf.compat.v1.audio + + + + + + + + + +Public API for tf.audio namespace. + + + +## Functions + +[`decode_wav(...)`](../../../tf/audio/decode_wav.md): Decode a 16-bit PCM WAV file to a float tensor. + +[`encode_wav(...)`](../../../tf/audio/encode_wav.md): Encode audio data using the WAV file format. + diff --git a/site/en/api_docs/python/tf/compat/v1/autograph.md b/site/en/api_docs/python/tf/compat/v1/autograph.md new file mode 100644 index 00000000000..6483cc9e8f9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/autograph.md @@ -0,0 +1,45 @@ +description: Conversion of plain Python into TensorFlow graph code. + +
+ + +
+ +# Module: tf.compat.v1.autograph + + + + + + + + + +Conversion of plain Python into TensorFlow graph code. + + +NOTE: In TensorFlow 2.0, AutoGraph is automatically applied when using +tf.function. This module contains lower-level APIs for advanced use. + +For more information, see the +[AutoGraph guide](https://www.tensorflow.org/guide/autograph). + +By equivalent graph code we mean code that generates a TensorFlow graph when +run. The generated graph has the same effects as the original code when executed +(for example with tf.function or tf.compat.v1.Session.run). In other words, +using AutoGraph can be thought of as running Python in TensorFlow. + +## Modules + +[`experimental`](../../../tf/compat/v1/autograph/experimental.md) module: Public API for tf.autograph.experimental namespace. + +## Functions + +[`set_verbosity(...)`](../../../tf/autograph/set_verbosity.md): Sets the AutoGraph verbosity level. + +[`to_code(...)`](../../../tf/compat/v1/autograph/to_code.md): Returns the source code generated by AutoGraph, as a string. + +[`to_graph(...)`](../../../tf/compat/v1/autograph/to_graph.md): Converts a Python entity into a TensorFlow graph. + +[`trace(...)`](../../../tf/autograph/trace.md): Traces argument information at compilation time. + diff --git a/site/en/api_docs/python/tf/compat/v1/autograph/experimental.md b/site/en/api_docs/python/tf/compat/v1/autograph/experimental.md new file mode 100644 index 00000000000..1de45dbeec9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/autograph/experimental.md @@ -0,0 +1,31 @@ +description: Public API for tf.autograph.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.autograph.experimental + + + + + + + + + +Public API for tf.autograph.experimental namespace. + + + +## Classes + +[`class Feature`](../../../../tf/autograph/experimental/Feature.md): This enumeration represents optional conversion options. + +## Functions + +[`do_not_convert(...)`](../../../../tf/autograph/experimental/do_not_convert.md): Decorator that suppresses the conversion of a function. + +[`set_loop_options(...)`](../../../../tf/autograph/experimental/set_loop_options.md): Specifies additional arguments to be passed to the enclosing while_loop. + diff --git a/site/en/api_docs/python/tf/compat/v1/autograph/to_code.md b/site/en/api_docs/python/tf/compat/v1/autograph/to_code.md new file mode 100644 index 00000000000..cc5eb9c3b64 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/autograph/to_code.md @@ -0,0 +1,131 @@ +description: Returns the source code generated by AutoGraph, as a string. + +
+ + +
+ +# tf.compat.v1.autograph.to_code + + + + + + + + + +Returns the source code generated by AutoGraph, as a string. + + + + + + + + +#### Example usage: + + + +``` +>>> def f(x): +... if x < 0: +... x = -x +... return x +>>> tf.autograph.to_code(f) +"...def tf__f(x):..." +``` + +Also see: tf.autograph.to_graph. + +Note: If a function has been decorated with tf.function, pass its +underlying Python function, rather than the callable that `tf.function +creates: + +``` +>>> @tf.function +... def f(x): +... if x < 0: +... x = -x +... return x +>>> tf.autograph.to_code(f.python_function) +"...def tf__f(x):..." +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`entity` + +Python callable or class. +
+`recursive` + +Whether to recursively convert any functions that the converted +function may call. +
+`arg_values` + +Deprecated. +
+`arg_types` + +Deprecated. +
+`indentation` + +Deprecated. +
+`experimental_optional_features` + +`None`, a tuple of, or a single +tf.autograph.experimental.Feature value. +
+ + + + + + + + + + + +
+The converted code as string. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/autograph/to_graph.md b/site/en/api_docs/python/tf/compat/v1/autograph/to_graph.md new file mode 100644 index 00000000000..8568c3d776d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/autograph/to_graph.md @@ -0,0 +1,151 @@ +description: Converts a Python entity into a TensorFlow graph. + +
+ + +
+ +# tf.compat.v1.autograph.to_graph + + + + + + + + + +Converts a Python entity into a TensorFlow graph. + + + + + + + +Also see: tf.autograph.to_code, tf.function. + +Unlike tf.function, `to_graph` is a low-level transpiler that converts +Python code to TensorFlow graph code. It does not implement any caching, +variable management or create any actual ops, and is best used where greater +control over the generated TensorFlow graph is desired. Another difference +from tf.function is that `to_graph` will not wrap the graph into a +TensorFlow function or a Python callable. Internally, tf.function uses +`to_graph`. + +_Example Usage_ + +```python + def foo(x): + if x > 0: + y = x * x + else: + y = -x + return y + + converted_foo = to_graph(foo) + + x = tf.constant(1) + y = converted_foo(x) # converted_foo is a TensorFlow Op-like. + assert is_tensor(y) +``` + +Supported Python entities include: + * functions + * classes + * object methods + +Functions are converted into new functions with converted code. + +Classes are converted by generating a new class whose methods use converted +code. + +Methods are converted into unbound function that have an additional first +argument called `self`. + + + + + + + + + + + + + + + + + + + + + + +
+`entity` + +Python callable or class to convert. +
+`recursive` + +Whether to recursively convert any functions that the converted +function may call. +
+`arg_values` + +Deprecated. +
+`arg_types` + +Deprecated. +
+`experimental_optional_features` + +`None`, a tuple of, or a single +tf.autograph.experimental.Feature value. +
+ + + + + + + + + + + +
+Same as `entity`, the converted Python function or class. +
+ + + + + + + + + + + + +
+`ValueError` + +If the entity could not be converted. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/batch_gather.md b/site/en/api_docs/python/tf/compat/v1/batch_gather.md new file mode 100644 index 00000000000..207a53077ca --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/batch_gather.md @@ -0,0 +1,37 @@ +description: Gather slices from params according to indices with leading batch dims. (deprecated) + +
+ + +
+ +# tf.compat.v1.batch_gather + + + + + + + + + +Gather slices from params according to indices with leading batch dims. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. +Instructions for updating: +`tf.batch_gather` is deprecated, please use tf.gather with `batch_dims=-1` instead. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/batch_scatter_update.md b/site/en/api_docs/python/tf/compat/v1/batch_scatter_update.md new file mode 100644 index 00000000000..10e675a551b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/batch_scatter_update.md @@ -0,0 +1,147 @@ +description: Generalization of tf.compat.v1.scatter_update to axis different than 0. (deprecated) + +
+ + +
+ +# tf.compat.v1.batch_scatter_update + + + + + + + + + +Generalization of tf.compat.v1.scatter_update to axis different than 0. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. +Instructions for updating: +Use the batch_scatter_update method of Variable instead. + +Analogous to `batch_gather`. This assumes that `ref`, `indices` and `updates` +have a series of leading dimensions that are the same for all of them, and the +updates are performed on the last dimension of indices. In other words, the +dimensions should be the following: + +`num_prefix_dims = indices.ndims - 1` +`batch_dim = num_prefix_dims + 1` +`updates.shape = indices.shape + var.shape[batch_dim:]` + +where + +`updates.shape[:num_prefix_dims]` +`== indices.shape[:num_prefix_dims]` +`== var.shape[:num_prefix_dims]` + +And the operation performed can be expressed as: + +`var[i_1, ..., i_n, indices[i_1, ..., i_n, j]] = updates[i_1, ..., i_n, j]` + +When indices is a 1D tensor, this operation is equivalent to +tf.compat.v1.scatter_update. + +To avoid this operation there would be 2 alternatives: +1) Reshaping the variable by merging the first `ndims` dimensions. However, + this is not possible because tf.reshape returns a Tensor, which we + cannot use tf.compat.v1.scatter_update on. +2) Looping over the first `ndims` of the variable and using + tf.compat.v1.scatter_update on the subtensors that result of slicing the + first + dimension. This is a valid option for `ndims = 1`, but less efficient than + this implementation. + +See also tf.compat.v1.scatter_update and tf.compat.v1.scatter_nd_update. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +`Variable` to scatter onto. +
+`indices` + +Tensor containing indices as described above. +
+`updates` + +Tensor of updates to apply to `ref`. +
+`use_locking` + +Boolean indicating whether to lock the writing operation. +
+`name` + +Optional scope name string. +
+ + + + + + + + + + + +
+Ref to `variable` after it has been modified. +
+ + + + + + + + + + + + +
+`ValueError` + +If the initial `ndims` of `ref`, `indices`, and `updates` are +not the same. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/batch_to_space.md b/site/en/api_docs/python/tf/compat/v1/batch_to_space.md new file mode 100644 index 00000000000..a39bb7fd464 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/batch_to_space.md @@ -0,0 +1,100 @@ +description: BatchToSpace for 4-D tensors of type T. + +
+ + +
+ +# tf.compat.v1.batch_to_space + + + + + + + + + +BatchToSpace for 4-D tensors of type T. + + + + + + + +This is a legacy version of the more general BatchToSpaceND. + +Rearranges (permutes) data from batch into blocks of spatial data, followed by +cropping. This is the reverse transformation of SpaceToBatch. More specifically, +this op outputs a copy of the input tensor where values from the `batch` +dimension are moved in spatial blocks to the `height` and `width` dimensions, +followed by cropping along the `height` and `width` dimensions. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. 4-D tensor with shape +`[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, +depth]`. Note that the batch size of the input tensor must be divisible by +`block_size * block_size`. +
+`crops` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D tensor of non-negative integers with shape `[2, 2]`. It specifies +how many elements to crop from the intermediate result across the spatial +dimensions as follows: + +crops = [[crop_top, crop_bottom], [crop_left, crop_right]] +
+`block_size` + +An `int` that is `>= 2`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
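+
+A runnable version of the simplest case (illustrative; assumes eager
+execution):
+
+```python
+import tensorflow as tf
+
+# Shape [4, 1, 1, 1]: four batch entries, each a single pixel.
+x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])
+
+y = tf.compat.v1.batch_to_space(x, crops=[[0, 0], [0, 0]], block_size=2)
+print(y.shape)                   # (1, 2, 2, 1)
+print(y.numpy().reshape(2, 2))   # [[1 2]
+                                 #  [3 4]]
+```
+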
+ diff --git a/site/en/api_docs/python/tf/compat/v1/batch_to_space_nd.md b/site/en/api_docs/python/tf/compat/v1/batch_to_space_nd.md new file mode 100644 index 00000000000..8bb8a5979ec --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/batch_to_space_nd.md @@ -0,0 +1,209 @@ +description: BatchToSpace for N-D tensors of type T. + +
+ + +
+ +# tf.compat.v1.batch_to_space_nd + + + + + + + + + +BatchToSpace for N-D tensors of type T. + + + + + + + + + +This operation reshapes the "batch" dimension 0 into `M + 1` dimensions of shape +`block_shape + [batch]`, interleaves these blocks back into the grid defined by +the spatial dimensions `[1, ..., M]`, to obtain a result with the same rank as +the input. The spatial dimensions of this intermediate result are then +optionally cropped according to `crops` to produce the output. This is the +reverse of SpaceToBatch. See below for a precise description. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`, +where spatial_shape has M dimensions. +
+`block_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D with shape `[M]`, all values must be >= 1. +
+`crops` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D with shape `[M, 2]`, all values must be >= 0. +`crops[i] = [crop_start, crop_end]` specifies the amount to crop from input +dimension `i + 1`, which corresponds to spatial dimension `i`. It is +required that +`crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`. + +This operation is equivalent to the following steps: + +1. Reshape `input` to `reshaped` of shape: +[block_shape[0], ..., block_shape[M-1], +batch / prod(block_shape), +input_shape[1], ..., input_shape[N-1]] + +2. Permute dimensions of `reshaped` to produce `permuted` of shape +[batch / prod(block_shape), + +input_shape[1], block_shape[0], +..., +input_shape[M], block_shape[M-1], + +input_shape[M+1], ..., input_shape[N-1]] + +3. Reshape `permuted` to produce `reshaped_permuted` of shape +[batch / prod(block_shape), + +input_shape[1] * block_shape[0], +..., +input_shape[M] * block_shape[M-1], + +input_shape[M+1], +..., +input_shape[N-1]] + +4. Crop the start and end of dimensions `[1, ..., M]` of +`reshaped_permuted` according to `crops` to produce the output of shape: +[batch / prod(block_shape), + +input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1], +..., +input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1], + +input_shape[M+1], ..., input_shape[N-1]] + +Some examples: + +(1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and +`crops = [[0, 0], [0, 0]]`: + +``` +[[[[1]]], [[[2]]], [[[3]]], [[[4]]]] +``` + +The output tensor has shape `[1, 2, 2, 1]` and value: + +``` +x = [[[[1], [2]], [[3], [4]]]] +``` + +(2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and +`crops = [[0, 0], [0, 0]]`: + +``` +[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] +``` + +The output tensor has shape `[1, 2, 2, 3]` and value: + +``` +x = [[[[1, 2, 3], [4, 5, 6]], +[[7, 8, 9], [10, 11, 12]]]] +``` + +(3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and +`crops = [[0, 0], [0, 0]]`: + +``` +x = [[[[1], [3]], [[9], [11]]], +[[[2], [4]], [[10], [12]]], +[[[5], [7]], [[13], [15]]], +[[[6], [8]], [[14], [16]]]] +``` + +The output tensor has shape `[1, 4, 4, 1]` and value: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]], +[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +(4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and +`crops = [[0, 0], [2, 0]]`: + +``` +x = [[[[0], [1], [3]]], [[[0], [9], [11]]], +[[[0], [2], [4]]], [[[0], [10], [12]]], +[[[0], [5], [7]]], [[[0], [13], [15]]], +[[[0], [6], [8]]], [[[0], [14], [16]]]] +``` + +The output tensor has shape `[2, 2, 4, 1]` and value: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]]], +[[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/bincount.md b/site/en/api_docs/python/tf/compat/v1/bincount.md new file mode 100644 index 00000000000..5adf8d14ca8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/bincount.md @@ -0,0 +1,113 @@ +description: Counts the number of occurrences of each value in an integer array. + +
+ + +
+ +# tf.compat.v1.bincount + + + + + + + + + +Counts the number of occurrences of each value in an integer array. + + + + + + + + + +If `minlength` and `maxlength` are not given, returns a vector with length +`tf.reduce_max(arr) + 1` if `arr` is non-empty, and length 0 otherwise. +If `weights` are non-None, then index `i` of the output stores the sum of the +value in `weights` at each index where the corresponding value in `arr` is +`i`. + + + + + + + + + + + + + + + + + + + + + + +
+`arr` + +An int32 tensor of non-negative values. +
+`weights` + +If non-None, must be the same shape as arr. For each value in +`arr`, the bin will be incremented by the corresponding weight instead of +1. +
+`minlength` + +If given, ensures the output has length at least `minlength`, +padding with zeros at the end if necessary. +
+`maxlength`
+
+If given, skips values in `arr` that are equal to or greater than
+`maxlength`, ensuring that the output has length at most `maxlength`.
+
+`dtype` + +If `weights` is None, determines the type of the output bins. +
+ + + + + + + + + + + +
+A vector with the same dtype as `weights` or the given `dtype`. The bin +values. +
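+
+A short sketch (illustrative; assumes eager execution):
+
+```python
+import tensorflow as tf
+
+arr = tf.constant([1, 1, 2, 3, 3, 3])
+
+print(tf.compat.v1.bincount(arr).numpy())               # [0 2 1 3]
+print(tf.compat.v1.bincount(arr, minlength=6).numpy())  # [0 2 1 3 0 0]
+
+# With weights, each occurrence contributes its weight instead of 1.
+w = tf.constant([0.5, 0.5, 2.0, 1.0, 1.0, 1.0])
+print(tf.compat.v1.bincount(arr, weights=w).numpy())    # [0. 1. 2. 3.]
+```
+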
+ diff --git a/site/en/api_docs/python/tf/compat/v1/bitwise.md b/site/en/api_docs/python/tf/compat/v1/bitwise.md new file mode 100644 index 00000000000..623b1c17f79 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/bitwise.md @@ -0,0 +1,35 @@ +description: Operations for manipulating the binary representations of integers. + +
+ + +
+ +# Module: tf.compat.v1.bitwise + + + + + + + + + +Operations for manipulating the binary representations of integers. + + + +## Functions + +[`bitwise_and(...)`](../../../tf/bitwise/bitwise_and.md): Elementwise computes the bitwise AND of `x` and `y`. + +[`bitwise_or(...)`](../../../tf/bitwise/bitwise_or.md): Elementwise computes the bitwise OR of `x` and `y`. + +[`bitwise_xor(...)`](../../../tf/bitwise/bitwise_xor.md): Elementwise computes the bitwise XOR of `x` and `y`. + +[`invert(...)`](../../../tf/bitwise/invert.md): Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010. + +[`left_shift(...)`](../../../tf/bitwise/left_shift.md): Elementwise computes the bitwise left-shift of `x` and `y`. + +[`right_shift(...)`](../../../tf/bitwise/right_shift.md): Elementwise computes the bitwise right-shift of `x` and `y`. + diff --git a/site/en/api_docs/python/tf/compat/v1/boolean_mask.md b/site/en/api_docs/python/tf/compat/v1/boolean_mask.md new file mode 100644 index 00000000000..8ba761105d3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/boolean_mask.md @@ -0,0 +1,137 @@ +description: Apply boolean mask to tensor. + +
+ + +
+ +# tf.compat.v1.boolean_mask + + + + + + + + + +Apply boolean mask to tensor. + + + + + + + +Numpy equivalent is `tensor[mask]`. + +```python +# 1-D example +tensor = [0, 1, 2, 3] +mask = np.array([True, False, True, False]) +boolean_mask(tensor, mask) # [0, 2] +``` + +In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match +the first K dimensions of `tensor`'s shape. We then have: + `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]` +where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order). +The `axis` could be used with `mask` to indicate the axis to mask from. +In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match +the first `axis + dim(mask)` dimensions of `tensor`'s shape. + +See also: tf.ragged.boolean_mask, which can be applied to both dense and +ragged tensors, and can be used if you need to preserve the masked dimensions +of `tensor` (rather than flattening them, as tf.boolean_mask does). + + + + + + + + + + + + + + + + + + + +
+`tensor` + +N-D tensor. +
+`mask` + +K-D boolean tensor, K <= N and K must be known statically. +
+`name` + +A name for this operation (optional). +
+`axis` + +A 0-D int Tensor representing the axis in `tensor` to mask from. By +default, axis is 0 which will mask from the first dimension. Otherwise K + +axis <= N. +
+ + + + + + + + + + + +
+(N-K+1)-dimensional tensor populated by entries in `tensor` corresponding +to `True` values in `mask`. +
+ + + + + + + + + + + + +
+`ValueError` + +If shapes do not conform. +
+ + + +#### Examples: + + + +```python +# 2-D example +tensor = [[1, 2], [3, 4], [5, 6]] +mask = np.array([True, False, True]) +boolean_mask(tensor, mask) # [[1, 2], [5, 6]] +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/case.md b/site/en/api_docs/python/tf/compat/v1/case.md new file mode 100644 index 00000000000..65afbbff8a8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/case.md @@ -0,0 +1,208 @@ +description: Create a case operation. + +
+ + +
+ +# tf.compat.v1.case + + + + + + + + + +Create a case operation. + + + + + + + +See also tf.switch_case. + +The `pred_fn_pairs` parameter is a dict or list of pairs of size N. +Each pair contains a boolean scalar tensor and a python callable that +creates the tensors to be returned if the boolean evaluates to True. +`default` is a callable generating a list of tensors. All the callables +in `pred_fn_pairs` as well as `default` (if provided) should return the same +number and types of tensors. + +If `exclusive==True`, all predicates are evaluated, and an exception is +thrown if more than one of the predicates evaluates to `True`. +If `exclusive==False`, execution stops at the first predicate which +evaluates to True, and the tensors generated by the corresponding function +are returned immediately. If none of the predicates evaluate to True, this +operation returns the tensors generated by `default`. + +tf.case supports nested structures as implemented in +`tf.contrib.framework.nest`. All of the callables must return the same +(possibly nested) value structure of lists, tuples, and/or named tuples. +Singleton lists and tuples form the only exceptions to this: when returned by +a callable, they are implicitly unpacked to single values. This +behavior is disabled by passing `strict=True`. + +If an unordered dictionary is used for `pred_fn_pairs`, the order of the +conditional tests is not guaranteed. However, the order is guaranteed to be +deterministic, so that variables created in conditional branches are created +in fixed order across runs. + + + + +**Example 1:** + +#### Pseudocode: + + + +``` +if (x < y) return 17; +else return 23; +``` + +#### Expressions: + + + +```python +f1 = lambda: tf.constant(17) +f2 = lambda: tf.constant(23) +r = tf.case([(tf.less(x, y), f1)], default=f2) +``` + +**Example 2:** + +#### Pseudocode: + + + +``` +if (x < y && x > z) raise OpError("Only one predicate may evaluate to True"); +if (x < y) return 17; +else if (x > z) return 23; +else return -1; +``` + +#### Expressions: + + + +```python +def f1(): return tf.constant(17) +def f2(): return tf.constant(23) +def f3(): return tf.constant(-1) +r = tf.case({tf.less(x, y): f1, tf.greater(x, z): f2}, + default=f3, exclusive=True) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`pred_fn_pairs` + +Dict or list of pairs of a boolean scalar tensor and a +callable which returns a list of tensors. +
+`default` + +Optional callable that returns a list of tensors. +
+`exclusive` + +True iff at most one predicate is allowed to evaluate to `True`. +
+`strict` + +A boolean that enables/disables 'strict' mode; see above. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+The tensors returned by the first pair whose predicate evaluated to True, or +those returned by `default` if none does. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +If `pred_fn_pairs` is not a list/dictionary. +
+`TypeError` + +If `pred_fn_pairs` is a list but does not contain 2-tuples. +
+`TypeError` + +If `fns[i]` is not callable for any i, or `default` is not +callable. +
+ + + +#### Eager Compatibility +Unordered dictionaries are not supported in eager mode when `exclusive=False`. +Use a list of tuples instead. + diff --git a/site/en/api_docs/python/tf/compat/v1/clip_by_average_norm.md b/site/en/api_docs/python/tf/compat/v1/clip_by_average_norm.md new file mode 100644 index 00000000000..b7794d29496 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/clip_by_average_norm.md @@ -0,0 +1,95 @@ +description: Clips tensor values to a maximum average L2-norm. (deprecated) + +
+ + +
+ +# tf.compat.v1.clip_by_average_norm + + + + + + + + + +Clips tensor values to a maximum average L2-norm. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +clip_by_average_norm is deprecated in TensorFlow 2.0. Please use clip_by_norm(t, clip_norm * tf.cast(tf.size(t), tf.float32), name) instead. + +Given a tensor `t`, and a maximum clip value `clip_norm`, this operation +normalizes `t` so that its average L2-norm is less than or equal to +`clip_norm`. Specifically, if the average L2-norm is already less than or +equal to `clip_norm`, then `t` is not modified. If the average L2-norm is +greater than `clip_norm`, then this operation returns a tensor of the same +type and shape as `t` with its values set to: + +`t * clip_norm / l2norm_avg(t)` + +In this case, the average L2-norm of the output tensor is `clip_norm`. + +This operation is typically used to clip gradients before applying them with +an optimizer. + + + + + + + + + + + + + + + + +
+`t` + +A `Tensor`. +
+`clip_norm` + +A 0-D (scalar) `Tensor` > 0. A maximum clipping value. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A clipped `Tensor`. +
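+
+A sketch of the replacement suggested by the deprecation notice (illustrative;
+the numbers assume a single 1x2 tensor whose L2-norm is 5.0 and whose average
+L2-norm is therefore 2.5):
+
+```python
+import tensorflow as tf
+
+t = tf.constant([[3.0, 4.0]])   # L2-norm 5.0, average L2-norm 2.5
+
+clipped = tf.compat.v1.clip_by_average_norm(t, clip_norm=1.0)
+
+# Equivalent form using the non-deprecated op, as the notice suggests.
+also_clipped = tf.clip_by_norm(t, 1.0 * tf.cast(tf.size(t), tf.float32))
+
+print(clipped.numpy())        # [[1.2 1.6]]
+print(also_clipped.numpy())   # [[1.2 1.6]]
+```
+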
+ diff --git a/site/en/api_docs/python/tf/compat/v1/colocate_with.md b/site/en/api_docs/python/tf/compat/v1/colocate_with.md new file mode 100644 index 00000000000..6cda65a67c2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/colocate_with.md @@ -0,0 +1,37 @@ +description: DEPRECATED FUNCTION + +
+ + +
+ +# tf.compat.v1.colocate_with + + + + + + + + + +DEPRECATED FUNCTION + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Colocations handled automatically by placer. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/compat.md b/site/en/api_docs/python/tf/compat/v1/compat.md new file mode 100644 index 00000000000..f70550cc229 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/compat.md @@ -0,0 +1,78 @@ +description: Compatibility functions. + +
+ + + + + + +
+ +# Module: tf.compat.v1.compat + + + + + + + + + +Compatibility functions. + + +The tf.compat module contains two sets of compatibility functions. + +## Tensorflow 1.x and 2.x APIs + +The compat.v1 and `compat.v2` submodules provide a complete copy of both the +v1 and `v2` APIs for backwards and forwards compatibility across TensorFlow +versions 1.x and 2.x. See the +[migration guide](https://www.tensorflow.org/guide/migrate) for details. + +## Utilities for writing compatible code + +Aside from the compat.v1 and `compat.v2` submodules, tf.compat also contains +a set of helper functions for writing code that works in both: + +* TensorFlow 1.x and 2.x +* Python 2 and 3 + + +## Type collections + +The compatibility module also provides the following aliases for common +sets of python types: + +* `bytes_or_text_types` +* `complex_types` +* `integral_types` +* `real_types` + +## Functions + +[`as_bytes(...)`](../../../tf/compat/as_bytes.md): Converts `bytearray`, `bytes`, or unicode python input types to `bytes`. + +[`as_str(...)`](../../../tf/compat/as_str.md) + +[`as_str_any(...)`](../../../tf/compat/as_str_any.md): Converts input to `str` type. + +[`as_text(...)`](../../../tf/compat/as_text.md): Converts any string-like python input types to unicode. + +[`dimension_at_index(...)`](../../../tf/compat/dimension_at_index.md): Compatibility utility required to allow for both V1 and V2 behavior in TF. + +[`dimension_value(...)`](../../../tf/compat/dimension_value.md): Compatibility utility required to allow for both V1 and V2 behavior in TF. + +[`forward_compatibility_horizon(...)`](../../../tf/compat/forward_compatibility_horizon.md): Context manager for testing forward compatibility of generated graphs. + +[`forward_compatible(...)`](../../../tf/compat/forward_compatible.md): Return true if the forward compatibility window has expired. + +[`path_to_str(...)`](../../../tf/compat/path_to_str.md): Converts input which is a `PathLike` object to `str` type. + +## Other Members + +* `bytes_or_text_types` +* `complex_types` +* `integral_types` +* `real_types` diff --git a/site/en/api_docs/python/tf/compat/v1/cond.md b/site/en/api_docs/python/tf/compat/v1/cond.md new file mode 100644 index 00000000000..2f445a4d7e1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/cond.md @@ -0,0 +1,170 @@ +description: Return true_fn() if the predicate pred is true else false_fn(). (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.cond + + + + + + + + + +Return `true_fn()` if the predicate `pred` is true else `false_fn()`. (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(fn1, fn2)`. They will be removed in a future version. +Instructions for updating: +fn1/fn2 are deprecated in favor of the true_fn/false_fn arguments. + +`true_fn` and `false_fn` both return lists of output tensors. `true_fn` and +`false_fn` must have the same non-zero number and type of outputs. + +**WARNING**: Any Tensors or Operations created outside of `true_fn` and +`false_fn` will be executed regardless of which branch is selected at runtime. + +Although this behavior is consistent with the dataflow model of TensorFlow, +it has frequently surprised users who expected a lazier semantics. +Consider the following simple program: + +```python +z = tf.multiply(a, b) +result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y)) +``` + +If `x < y`, the `tf.add` operation will be executed and `tf.square` +operation will not be executed. Since `z` is needed for at least one +branch of the `cond`, the tf.multiply operation is always executed, +unconditionally. + +Note that `cond` calls `true_fn` and `false_fn` *exactly once* (inside the +call to `cond`, and not at all during `Session.run()`). `cond` +stitches together the graph fragments created during the `true_fn` and +`false_fn` calls with some additional graph nodes to ensure that the right +branch gets executed depending on the value of `pred`. + +tf.cond supports nested structures as implemented in +`tensorflow.python.util.nest`. Both `true_fn` and `false_fn` must return the +same (possibly nested) value structure of lists, tuples, and/or named tuples. +Singleton lists and tuples form the only exceptions to this: when returned by +`true_fn` and/or `false_fn`, they are implicitly unpacked to single values. +This behavior is disabled by passing `strict=True`. + + + + + + + + + + + + + + + + + + + + + + +
+`pred` + +A scalar determining whether to return the result of `true_fn` or +`false_fn`. +
+`true_fn` + +The callable to be performed if pred is true. +
+`false_fn` + +The callable to be performed if pred is false. +
+`strict` + +A boolean that enables/disables 'strict' mode; see above. +
+`name` + +Optional name prefix for the returned tensors. +
+ + + + + + + + + + + +
+Tensors returned by the call to either `true_fn` or `false_fn`. If the +callables return a singleton list, the element is extracted from the list. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `true_fn` or `false_fn` is not callable. +
+`ValueError` + +if `true_fn` and `false_fn` do not return the same number of +tensors, or return tensors of different types. +
+ + + +#### Example: + + + +```python +x = tf.constant(2) +y = tf.constant(5) +def f1(): return tf.multiply(x, 17) +def f2(): return tf.add(y, 23) +r = tf.cond(tf.less(x, y), f1, f2) +# r is set to f1(). +# Operations in f2 (e.g., tf.add) are not executed. +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/config.md b/site/en/api_docs/python/tf/compat/v1/config.md new file mode 100644 index 00000000000..3e4c0126c3b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/config.md @@ -0,0 +1,63 @@ +description: Public API for tf.config namespace. + +
+ + +
+ +# Module: tf.compat.v1.config + + + + + + + + + +Public API for tf.config namespace. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/config/experimental.md) module: Public API for tf.config.experimental namespace. + +[`optimizer`](../../../tf/compat/v1/config/optimizer.md) module: Public API for tf.config.optimizer namespace. + +[`threading`](../../../tf/compat/v1/config/threading.md) module: Public API for tf.config.threading namespace. + +## Classes + +[`class LogicalDevice`](../../../tf/config/LogicalDevice.md): Abstraction for a logical device initialized by the runtime. + +[`class LogicalDeviceConfiguration`](../../../tf/config/LogicalDeviceConfiguration.md): Configuration class for a logical devices. + +[`class PhysicalDevice`](../../../tf/config/PhysicalDevice.md): Abstraction for a locally visible physical device. + +## Functions + +[`experimental_connect_to_cluster(...)`](../../../tf/config/experimental_connect_to_cluster.md): Connects to the given cluster. + +[`experimental_connect_to_host(...)`](../../../tf/config/experimental_connect_to_host.md): Connects to a single machine to enable remote execution on it. + +[`experimental_functions_run_eagerly(...)`](../../../tf/config/experimental_functions_run_eagerly.md): Returns the value of the `experimental_run_functions_eagerly` setting. + +[`experimental_run_functions_eagerly(...)`](../../../tf/config/experimental_run_functions_eagerly.md): Enables / disables eager execution of tf.functions. + +[`get_logical_device_configuration(...)`](../../../tf/config/get_logical_device_configuration.md): Get the virtual device configuration for a tf.config.PhysicalDevice. + +[`get_soft_device_placement(...)`](../../../tf/config/get_soft_device_placement.md): Get if soft device placement is enabled. + +[`get_visible_devices(...)`](../../../tf/config/get_visible_devices.md): Get the list of visible physical devices. + +[`list_logical_devices(...)`](../../../tf/config/list_logical_devices.md): Return a list of logical devices created by runtime. + +[`list_physical_devices(...)`](../../../tf/config/list_physical_devices.md): Return a list of physical devices visible to the host runtime. + +[`set_logical_device_configuration(...)`](../../../tf/config/set_logical_device_configuration.md): Set the logical device configuration for a tf.config.PhysicalDevice. + +[`set_soft_device_placement(...)`](../../../tf/config/set_soft_device_placement.md): Set if soft device placement is enabled. + +[`set_visible_devices(...)`](../../../tf/config/set_visible_devices.md): Set the list of visible devices. + diff --git a/site/en/api_docs/python/tf/compat/v1/config/experimental.md b/site/en/api_docs/python/tf/compat/v1/config/experimental.md new file mode 100644 index 00000000000..9396fb96d5b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/config/experimental.md @@ -0,0 +1,57 @@ +description: Public API for tf.config.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.config.experimental + + + + + + + + + +Public API for tf.config.experimental namespace. + + + +## Classes + +[`class ClusterDeviceFilters`](../../../../tf/config/experimental/ClusterDeviceFilters.md): Represent a collection of device filters for the remote workers in cluster. + +[`class VirtualDeviceConfiguration`](../../../../tf/config/LogicalDeviceConfiguration.md): Configuration class for a logical devices. + +## Functions + +[`disable_mlir_bridge(...)`](../../../../tf/config/experimental/disable_mlir_bridge.md): Disables experimental MLIR-Based TensorFlow Compiler Bridge. + +[`enable_mlir_bridge(...)`](../../../../tf/config/experimental/enable_mlir_bridge.md): Enables experimental MLIR-Based TensorFlow Compiler Bridge. + +[`get_device_policy(...)`](../../../../tf/config/experimental/get_device_policy.md): Gets the current device policy. + +[`get_memory_growth(...)`](../../../../tf/config/experimental/get_memory_growth.md): Get if memory growth is enabled for a `PhysicalDevice`. + +[`get_synchronous_execution(...)`](../../../../tf/config/experimental/get_synchronous_execution.md): Gets whether operations are executed synchronously or asynchronously. + +[`get_virtual_device_configuration(...)`](../../../../tf/config/get_logical_device_configuration.md): Get the virtual device configuration for a tf.config.PhysicalDevice. + +[`get_visible_devices(...)`](../../../../tf/config/get_visible_devices.md): Get the list of visible physical devices. + +[`list_logical_devices(...)`](../../../../tf/config/list_logical_devices.md): Return a list of logical devices created by runtime. + +[`list_physical_devices(...)`](../../../../tf/config/list_physical_devices.md): Return a list of physical devices visible to the host runtime. + +[`set_device_policy(...)`](../../../../tf/config/experimental/set_device_policy.md): Sets the current thread device policy. + +[`set_memory_growth(...)`](../../../../tf/config/experimental/set_memory_growth.md): Set if memory growth should be enabled for a `PhysicalDevice`. + +[`set_synchronous_execution(...)`](../../../../tf/config/experimental/set_synchronous_execution.md): Specifies whether operations are executed synchronously or asynchronously. + +[`set_virtual_device_configuration(...)`](../../../../tf/config/set_logical_device_configuration.md): Set the logical device configuration for a tf.config.PhysicalDevice. + +[`set_visible_devices(...)`](../../../../tf/config/set_visible_devices.md): Set the list of visible devices. + diff --git a/site/en/api_docs/python/tf/compat/v1/config/optimizer.md b/site/en/api_docs/python/tf/compat/v1/config/optimizer.md new file mode 100644 index 00000000000..26662469008 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/config/optimizer.md @@ -0,0 +1,31 @@ +description: Public API for tf.config.optimizer namespace. + +
+ + +
+ +# Module: tf.compat.v1.config.optimizer + + + + + + + + + +Public API for tf.config.optimizer namespace. + + + +## Functions + +[`get_experimental_options(...)`](../../../../tf/config/optimizer/get_experimental_options.md): Get experimental optimizer options. + +[`get_jit(...)`](../../../../tf/config/optimizer/get_jit.md): Get if JIT compilation is enabled. + +[`set_experimental_options(...)`](../../../../tf/config/optimizer/set_experimental_options.md): Set experimental optimizer options. + +[`set_jit(...)`](../../../../tf/config/optimizer/set_jit.md): Set if JIT compilation is enabled. + diff --git a/site/en/api_docs/python/tf/compat/v1/config/threading.md b/site/en/api_docs/python/tf/compat/v1/config/threading.md new file mode 100644 index 00000000000..6393548db89 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/config/threading.md @@ -0,0 +1,31 @@ +description: Public API for tf.config.threading namespace. + +
+ + +
+ +# Module: tf.compat.v1.config.threading + + + + + + + + + +Public API for tf.config.threading namespace. + + + +## Functions + +[`get_inter_op_parallelism_threads(...)`](../../../../tf/config/threading/get_inter_op_parallelism_threads.md): Get number of threads used for parallelism between independent operations. + +[`get_intra_op_parallelism_threads(...)`](../../../../tf/config/threading/get_intra_op_parallelism_threads.md): Get number of threads used within an individual op for parallelism. + +[`set_inter_op_parallelism_threads(...)`](../../../../tf/config/threading/set_inter_op_parallelism_threads.md): Set number of threads used for parallelism between independent operations. + +[`set_intra_op_parallelism_threads(...)`](../../../../tf/config/threading/set_intra_op_parallelism_threads.md): Set number of threads used within an individual op for parallelism. + diff --git a/site/en/api_docs/python/tf/compat/v1/confusion_matrix.md b/site/en/api_docs/python/tf/compat/v1/confusion_matrix.md new file mode 100644 index 00000000000..9e46f32d682 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/confusion_matrix.md @@ -0,0 +1,163 @@ +description: Computes the confusion matrix from predictions and labels. + +
+ + +
+ +# tf.compat.v1.confusion_matrix + + + + + + + + + +Computes the confusion matrix from predictions and labels. + + + + + + + + + +The matrix columns represent the prediction labels and the rows represent the +real labels. The confusion matrix is always a 2-D array of shape `[n, n]`, +where `n` is the number of valid labels for a given classification task. Both +prediction and labels must be 1-D arrays of the same shape in order for this +function to work. + +If `num_classes` is `None`, then `num_classes` will be set to one plus the +maximum value in either predictions or labels. Class labels are expected to +start at 0. For example, if `num_classes` is 3, then the possible labels +would be `[0, 1, 2]`. + +If `weights` is not `None`, then each prediction contributes its +corresponding weight to the total value of the confusion matrix cell. + +#### For example: + + + +```python + tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==> + [[0 0 0 0 0] + [0 0 1 0 0] + [0 0 1 0 0] + [0 0 0 0 0] + [0 0 0 0 1]] +``` + +Note that the possible labels are assumed to be `[0, 1, 2, 3, 4]`, +resulting in a 5x5 confusion matrix. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +1-D `Tensor` of real labels for the classification task. +
+`predictions` + +1-D `Tensor` of predictions for a given classification. +
+`num_classes`
+
+The possible number of labels the classification task can have.
+If this value is not provided, it will be calculated using both the
+predictions and labels arrays.
+
+`dtype` + +Data type of the confusion matrix. +
+`name` + +Scope name. +
+`weights` + +An optional `Tensor` whose shape matches `predictions`. +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype` with shape `[n, n]` representing the confusion +matrix, where `n` is the number of possible labels in the classification +task. +
+ + + + + + + + + + + + +
+`ValueError` + +If both predictions and labels are not 1-D vectors and have +mismatched shapes, or if `weights` is not `None` and its shape doesn't +match `predictions`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/constant.md b/site/en/api_docs/python/tf/compat/v1/constant.md new file mode 100644 index 00000000000..2d2bbbce6c8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/constant.md @@ -0,0 +1,149 @@ +description: Creates a constant tensor. + +
+ + +
+ +# tf.compat.v1.constant + + + + + + + + + +Creates a constant tensor. + + + + + + + +The resulting tensor is populated with values of type `dtype`, as +specified by arguments `value` and (optionally) `shape` (see examples +below). + +The argument `value` can be a constant value, or a list of values of type +`dtype`. If `value` is a list, then the length of the list must be less +than or equal to the number of elements implied by the `shape` argument (if +specified). In the case where the list length is less than the number of +elements specified by `shape`, the last element in the list will be used +to fill the remaining entries. + +The argument `shape` is optional. If present, it specifies the dimensions of +the resulting tensor. If not present, the shape of `value` is used. + +If the argument `dtype` is not specified, then the type is inferred from +the type of `value`. + +#### For example: + + + +```python +# Constant 1-D Tensor populated with value list. +tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7] + +# Constant 2-D tensor populated with scalar value -1. +tensor = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.] + [-1. -1. -1.]] +``` + +tf.constant differs from tf.fill in a few ways: + +* tf.constant supports arbitrary constants, not just uniform scalar + Tensors like tf.fill. +* tf.constant creates a `Const` node in the computation graph with the + exact value at graph construction time. On the other hand, tf.fill + creates an Op in the graph that is expanded at runtime. +* Because tf.constant only embeds constant values in the graph, it does + not support dynamic shapes based on other runtime Tensors, whereas + tf.fill does. + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A constant value (or list) of output type `dtype`. +
+`dtype` + +The type of the elements of the resulting tensor. +
+`shape` + +Optional dimensions of resulting tensor. +
+`name` + +Optional name for the tensor. +
+`verify_shape`
+
+Boolean that enables verification of the shape of `value` against `shape`.
+
+ + + + + + + + + + + +
+A Constant Tensor. +
+ + + + + + + + + + + + +
+`TypeError` + +if shape is incorrectly specified or unsupported. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/container.md b/site/en/api_docs/python/tf/compat/v1/container.md new file mode 100644 index 00000000000..7dfabb5a592 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/container.md @@ -0,0 +1,65 @@ +description: Wrapper for Graph.container() using the default graph. + +
+ + +
+ +# tf.compat.v1.container + + + + + + + + + +Wrapper for `Graph.container()` using the default graph. + + + + + + + + + + + + + + + + + +
+`container_name` + +The container string to use in the context. +
+ + + + + + + + + + + +
+A context manager that specifies the default container to use for newly +created stateful ops. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/control_flow_v2_enabled.md b/site/en/api_docs/python/tf/compat/v1/control_flow_v2_enabled.md new file mode 100644 index 00000000000..a9355420ec4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/control_flow_v2_enabled.md @@ -0,0 +1,33 @@ +description: Returns True if v2 control flow is enabled. + +
+ + +
+ +# tf.compat.v1.control_flow_v2_enabled + + + + + + + + + +Returns `True` if v2 control flow is enabled. + + + + + + + +Note: v2 control flow is always enabled inside of tf.function. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/convert_to_tensor.md b/site/en/api_docs/python/tf/compat/v1/convert_to_tensor.md new file mode 100644 index 00000000000..1dcde41347d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/convert_to_tensor.md @@ -0,0 +1,154 @@ +description: Converts the given value to a Tensor. + +
+ + +
+ +# tf.compat.v1.convert_to_tensor + + + + + + + + + +Converts the given `value` to a `Tensor`. + + + + + + + +This function converts Python objects of various types to `Tensor` +objects. It accepts `Tensor` objects, numpy arrays, Python lists, +and Python scalars. For example: + +```python +import numpy as np + +def my_func(arg): + arg = tf.convert_to_tensor(arg, dtype=tf.float32) + return tf.matmul(arg, arg) + arg + +# The following calls are equivalent. +value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]])) +value_2 = my_func([[1.0, 2.0], [3.0, 4.0]]) +value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)) +``` + +This function can be useful when composing a new operation in Python +(such as `my_func` in the example above). All standard Python op +constructors apply this function to each of their Tensor-valued +inputs, which allows those ops to accept numpy arrays, Python lists, +and scalars in addition to `Tensor` objects. + +Note: This function diverges from default Numpy behavior for `float` and + `string` types when `None` is present in a Python list or scalar. Rather + than silently converting `None` values, an error will be thrown. + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +An object whose type has a registered `Tensor` conversion function. +
+`dtype` + +Optional element type for the returned tensor. If missing, the type +is inferred from the type of `value`. +
+`name` + +Optional name to use if a new `Tensor` is created. +
+`preferred_dtype` + +Optional element type for the returned tensor, used when +dtype is None. In some cases, a caller may not have a dtype in mind when +converting to a tensor, so preferred_dtype can be used as a soft +preference. If the conversion to `preferred_dtype` is not possible, this +argument has no effect. +
+`dtype_hint`
+
+Same meaning as `preferred_dtype`, and overrides it.
+
+ + + + + + + + + + + +
+A `Tensor` based on `value`. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +If no conversion function is registered for `value` to `dtype`. +
+`RuntimeError` + +If a registered conversion function returns an invalid value. +
+`ValueError` + +If the `value` is a tensor not of given `dtype` in graph mode. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/convert_to_tensor_or_indexed_slices.md b/site/en/api_docs/python/tf/compat/v1/convert_to_tensor_or_indexed_slices.md new file mode 100644 index 00000000000..0066704792b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/convert_to_tensor_or_indexed_slices.md @@ -0,0 +1,100 @@ +description: Converts the given object to a Tensor or an IndexedSlices. + +
+ + +
+ +# tf.compat.v1.convert_to_tensor_or_indexed_slices + + + + + + + + + +Converts the given object to a `Tensor` or an `IndexedSlices`. + + + + + + + +If `value` is an `IndexedSlices` or `SparseTensor` it is returned +unmodified. Otherwise, it is converted to a `Tensor` using +`convert_to_tensor()`. + + + + + + + + + + + + + + + + +
+`value` + +An `IndexedSlices`, `SparseTensor`, or an object that can be consumed +by `convert_to_tensor()`. +
+`dtype` + +(Optional.) The required `DType` of the returned `Tensor` or +`IndexedSlices`. +
+`name` + +(Optional.) A name to use if a new `Tensor` is created. +
+ + + + + + + + + + + +
+A `Tensor`, `IndexedSlices`, or `SparseTensor` based on `value`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `dtype` does not match the element type of `value`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/convert_to_tensor_or_sparse_tensor.md b/site/en/api_docs/python/tf/compat/v1/convert_to_tensor_or_sparse_tensor.md new file mode 100644 index 00000000000..a8dbe7d9383 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/convert_to_tensor_or_sparse_tensor.md @@ -0,0 +1,97 @@ +description: Converts value to a SparseTensor or Tensor. + +
+ + +
+ +# tf.compat.v1.convert_to_tensor_or_sparse_tensor + + + + + + + + + +Converts value to a `SparseTensor` or `Tensor`. + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `SparseTensor`, `SparseTensorValue`, or an object whose type has a +registered `Tensor` conversion function. +
+`dtype` + +Optional element type for the returned tensor. If missing, the type +is inferred from the type of `value`. +
+`name` + +Optional name to use if a new `Tensor` is created. +
+ + + + + + + + + + + +
+A `SparseTensor` or `Tensor` based on `value`. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If result type is incompatible with `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/count_nonzero.md b/site/en/api_docs/python/tf/compat/v1/count_nonzero.md new file mode 100644 index 00000000000..e1bfa13fc97 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/count_nonzero.md @@ -0,0 +1,171 @@ +description: Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.count_nonzero + + + + + + + + + +Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(reduction_indices)`. They will be removed in a future version. +Instructions for updating: +reduction_indices is deprecated, use axis instead + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +**NOTE** Floating point comparison to zero is done by exact floating point +equality check. Small values are **not** rounded to zero for purposes of +the nonzero check. + +#### For example: + + + +```python +x = tf.constant([[0, 1, 0], [1, 1, 0]]) +tf.math.count_nonzero(x) # 3 +tf.math.count_nonzero(x, 0) # [1, 2, 0] +tf.math.count_nonzero(x, 1) # [1, 2] +tf.math.count_nonzero(x, 1, keepdims=True) # [[1], [2]] +tf.math.count_nonzero(x, [0, 1]) # 3 +``` + +**NOTE** Strings are compared against zero-length empty string `""`. Any +string with a size greater than zero is already considered as nonzero. + +#### For example: + + +```python +x = tf.constant(["", "a", " ", "b", ""]) +tf.math.count_nonzero(x) # 3, with "a", " ", and "b" as nonzero strings. +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should be of numeric type, `bool`, or +`string`. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`dtype` + +The output dtype; defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+`reduction_indices` + +The old (deprecated) name for axis. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+`input`
+
+Overrides `input_tensor`. Provided for compatibility.
+
+ + + + + + + + + + + +
+The reduced tensor (number of nonzero values). +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/count_up_to.md b/site/en/api_docs/python/tf/compat/v1/count_up_to.md new file mode 100644 index 00000000000..6e2ee275cb6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/count_up_to.md @@ -0,0 +1,86 @@ +description: Increments 'ref' until it reaches 'limit'. (deprecated) + +
+ + +
+ +# tf.compat.v1.count_up_to + + + + + + + + + +Increments 'ref' until it reaches 'limit'. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Prefer Dataset.range instead. + + + + + + + + + + + + + + + + +
+`ref` + +A Variable. Must be one of the following types: `int32`, `int64`. +Should be from a scalar `Variable` node. +
+`limit`
+
+An `int`.
+If incrementing `ref` would bring it above `limit`, an 'OutOfRange' error
+is generated instead.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `ref`. +A copy of the input before increment. If nothing else modifies the +input, the values produced will all be distinct. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/create_partitioned_variables.md b/site/en/api_docs/python/tf/compat/v1/create_partitioned_variables.md new file mode 100644 index 00000000000..afca1482754 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/create_partitioned_variables.md @@ -0,0 +1,157 @@ +description: Create a list of partitioned variables according to the given slicing. (deprecated) + +
+ + +
+ +# tf.compat.v1.create_partitioned_variables + + + + + + + + + +Create a list of partitioned variables according to the given `slicing`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.get_variable` with a partitioner set. + +Currently only one dimension of the full variable can be sliced, and the +full variable can be reconstructed by the concatenation of the returned +list along that dimension. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +List of integers. The shape of the full variable. +
+`slicing`
+
+List of integers. How to partition the variable.
+Must be of the same length as `shape`. Each value
+indicates how many slices to create in the corresponding
+dimension. Presently only one of the values can be more than 1;
+that is, the variable can only be sliced along one dimension.
+
+For convenience, the requested number of partitions does not have to
+divide the corresponding dimension evenly. If it does not, the
+shapes of the partitions are incremented by 1 starting from partition
+0 until all slack is absorbed. The adjustment rules may change in the
+future, but as you can save/restore these variables with different
+slicing specifications this should not be a problem.
+
+`initializer` + +A `Tensor` of shape `shape` or a variable initializer +function. If a function, it will be called once for each slice, +passing the shape and data type of the slice as parameters. The +function must return a tensor with the same shape as the slice. +
+`dtype` + +Type of the variables. Ignored if `initializer` is a `Tensor`. +
+`trainable` + +If True also add all the variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES`. +
+`collections` + +List of graph collections keys to add the variables to. +Defaults to `[GraphKeys.GLOBAL_VARIABLES]`. +
+`name` + +Optional name for the full variable. Defaults to +`"PartitionedVariable"` and gets uniquified automatically. +
+`reuse`
+
+Boolean or `None`; if `True` and `name` is set, previously created
+variables are reused. If `False`, new variables are created.
+If `None`, the reuse setting is inherited from the parent scope.
+
+ + + + + + + + + + + +
+A list of Variables corresponding to the slicing. +
+ + + + + + + + + + + + +
+`ValueError` + +If any of the arguments is malformed. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data.md b/site/en/api_docs/python/tf/compat/v1/data.md new file mode 100644 index 00000000000..c14358dd6f7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data.md @@ -0,0 +1,54 @@ +description: tf.data.Dataset API for input pipelines. + +
+ + +
+ +# Module: tf.compat.v1.data + + + + + + + + + +tf.data.Dataset API for input pipelines. + + +See [Importing Data](https://tensorflow.org/guide/data) for an overview. + +## Modules + +[`experimental`](../../../tf/compat/v1/data/experimental.md) module: Experimental API for building input pipelines. + +## Classes + +[`class Dataset`](../../../tf/compat/v1/data/Dataset.md): Represents a potentially large set of elements. + +[`class DatasetSpec`](../../../tf/data/DatasetSpec.md): Type specification for tf.data.Dataset. + +[`class FixedLengthRecordDataset`](../../../tf/compat/v1/data/FixedLengthRecordDataset.md): A `Dataset` of fixed-length records from one or more binary files. + +[`class Iterator`](../../../tf/compat/v1/data/Iterator.md): Represents the state of iterating through a `Dataset`. + +[`class Options`](../../../tf/data/Options.md): Represents options for tf.data.Dataset. + +[`class TFRecordDataset`](../../../tf/compat/v1/data/TFRecordDataset.md): A `Dataset` comprising records from one or more TFRecord files. + +[`class TextLineDataset`](../../../tf/compat/v1/data/TextLineDataset.md): A `Dataset` comprising lines from one or more text files. + +## Functions + +[`get_output_classes(...)`](../../../tf/compat/v1/data/get_output_classes.md): Returns the output classes of a `Dataset` or `Iterator` elements. + +[`get_output_shapes(...)`](../../../tf/compat/v1/data/get_output_shapes.md): Returns the output shapes of a `Dataset` or `Iterator` elements. + +[`get_output_types(...)`](../../../tf/compat/v1/data/get_output_types.md): Returns the output shapes of a `Dataset` or `Iterator` elements. + +[`make_initializable_iterator(...)`](../../../tf/compat/v1/data/make_initializable_iterator.md): Creates a tf.compat.v1.data.Iterator for enumerating the elements of a dataset. + +[`make_one_shot_iterator(...)`](../../../tf/compat/v1/data/make_one_shot_iterator.md): Creates a tf.compat.v1.data.Iterator for enumerating dataset elements. + diff --git a/site/en/api_docs/python/tf/compat/v1/data/Dataset.md b/site/en/api_docs/python/tf/compat/v1/data/Dataset.md new file mode 100644 index 00000000000..9ac28e0bd0f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/Dataset.md @@ -0,0 +1,2971 @@ +description: Represents a potentially large set of elements. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.data.Dataset + + + + + + + + + +Represents a potentially large set of elements. + +Inherits From: [`Dataset`](../../../../tf/data/Dataset.md) + + + + + + + +A `Dataset` can be used to represent an input pipeline as a +collection of elements and a "logical plan" of transformations that act on +those elements. + + + + + + + + + + +
+`variant_tensor` + +A DT_VARIANT tensor that represents the dataset. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`element_spec`
+
+The type specification of an element of this dataset.
+
+```
+>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
+>>> dataset.element_spec
+TensorSpec(shape=(), dtype=tf.int32, name=None)
+```
+
+`output_classes` + +Returns the class of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_classes(dataset). +
+`output_shapes` + +Returns the shape of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_shapes(dataset). +
+`output_types` + +Returns the type of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_types(dataset). +
+ + + +## Methods + +
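+
+Most of the methods below return a new `Dataset`, so they can be chained to
+describe an input pipeline declaratively and then iterated to run it. A
+minimal sketch (the transformations and values are illustrative):
+
+```python
+import tensorflow as tf
+
+# Chain transformations to describe the pipeline, then iterate to run it.
+dataset = (tf.data.Dataset.range(10)   # 0, 1, ..., 9
+           .map(lambda x: x * 2)       # 0, 2, ..., 18
+           .filter(lambda x: x < 10)   # keep 0, 2, 4, 6, 8
+           .batch(2))                  # [0 2], [4 6], [8]
+
+for batch in dataset.as_numpy_iterator():
+    print(batch)
+```
+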

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

filter_with_legacy_function

+ +View source + + + +Filters this dataset according to `predicate`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.filter() + +Note: This is an escape hatch for existing uses of `filter` that do not work +with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `filter` as this method will be removed in V2. + + + + + + + + + + +
Args
+`predicate` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to a +scalar tf.bool tensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +
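+
+As the deprecation warning says, new code should call `filter` directly. A
+minimal sketch of the equivalent call (the predicate here is illustrative):
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(10)
+
+# Deprecated escape hatch (only needed for predicates that require V1
+# function semantics):
+#   dataset = dataset.filter_with_legacy_function(lambda x: x < 3)
+# Recommended equivalent:
+dataset = dataset.filter(lambda x: x < 3)
+
+print(list(dataset.as_numpy_iterator()))  # [0, 1, 2]
+```
+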

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of `Dataset.from_generator()` uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +`Dataset.from_generator()`. The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to `Dataset.from_generator()` and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +`Dataset.from_generator()`. + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_sparse_tensor_slices

+ +View source + + + +Splits each rank-N tf.SparseTensor in this dataset row-wise. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.from_tensor_slices(). + + + + + + + + + + +
Args
+`sparse_tensor` + +A tf.SparseTensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of rank-(N-1) sparse tensors. +
+ + + +
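+
+Since this method is deprecated in favor of `from_tensor_slices`, here is a
+minimal sketch of the recommended replacement (the sparse tensor contents are
+illustrative); it slices the sparse tensor along its first dimension,
+yielding one rank-1 sparse element per row:
+
+```python
+import tensorflow as tf
+
+# A 2-D sparse tensor with one value in each of its two rows.
+st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
+                            values=[10, 20],
+                            dense_shape=[2, 4])
+
+# `from_tensor_slices` slices the sparse tensor row-wise.
+dataset = tf.data.Dataset.from_tensor_slices(st)
+for row in dataset:
+    print(tf.sparse.to_dense(row).numpy())
+# [10  0  0  0]
+# [ 0  0 20  0]
+```
+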

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use `Dataset.interleave()` to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +`Dataset.from_tensor_slices(filenames)` instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
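+
+A minimal sketch of the example above, assuming the illustrative
+`/path/to/dir` layout actually exists on the local filesystem:
+
+```python
+import tensorflow as tf
+
+# Match only the Python files under the (illustrative) directory.
+dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
+
+for filename in dataset:
+    print(filename.numpy())
+# b'/path/to/dir/b.py'
+# b'/path/to/dir/c.py'
+```
+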

make_initializable_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_initializable_iterator(dataset)`. + +Note: The returned iterator will be in an uninitialized state, +and you must run the `iterator.initializer` operation before using it: + +```python +dataset = ... +iterator = dataset.make_initializable_iterator() +# ... +sess.run(iterator.initializer) +``` + + + + + + + + + + +
Args
+`shared_name` + +(Optional.) If non-empty, the returned iterator will be +shared under the given name across multiple sessions that share the same +devices (e.g. when using a remote server). +
+ + + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If eager execution is enabled. +
+ + + +
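+
+Expanding the snippet above into a runnable TF1-style sketch, following the
+deprecation guidance to use `tf.compat.v1.data.make_initializable_iterator`
+(graph mode; the dataset contents are illustrative):
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+dataset = tf.data.Dataset.range(3)
+iterator = tf.data.make_initializable_iterator(dataset)
+next_element = iterator.get_next()
+
+with tf.Session() as sess:
+    # The iterator starts uninitialized; run its initializer first.
+    sess.run(iterator.initializer)
+    for _ in range(3):
+        print(sess.run(next_element))  # 0, then 1, then 2
+```
+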

make_one_shot_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`. + +Note: The returned iterator will be initialized automatically. +A "one-shot" iterator does not currently support re-initialization. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + +
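+
+A minimal TF1-style sketch (graph mode; the dataset contents are
+illustrative). Unlike the initializable iterator above, no explicit
+initialization step is needed:
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+dataset = tf.data.Dataset.from_tensor_slices([10, 20, 30])
+iterator = tf.data.make_one_shot_iterator(dataset)
+next_element = iterator.get_next()
+
+with tf.Session() as sess:
+    # No initializer to run; iterate until the dataset is exhausted.
+    while True:
+        try:
+            print(sess.run(next_element))  # 10, then 20, then 30
+        except tf.errors.OutOfRangeError:
+            break
+```
+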

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

map_with_legacy_function

+ +View source + + + +Maps `map_func` across the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.map() + +Note: This is an escape hatch for existing uses of `map` that do not work +with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `map` as this method will be removed in V2. + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to +another nested structure of tensors. +
+`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
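+
+As with `filter_with_legacy_function`, the deprecation note recommends plain
+`map`. A minimal sketch of the equivalent call (the mapped function is
+illustrative):
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(5)
+
+# Deprecated escape hatch (only needed for functions that require V1
+# function semantics):
+#   dataset = dataset.map_with_legacy_function(lambda x: x + 1)
+# Recommended equivalent:
+dataset = dataset.map(lambda x: x + 1)
+
+print(list(dataset.as_numpy_iterator()))  # [1, 2, 3, 4, 5]
+```
+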

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +
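+
+A minimal sketch showing how the returned options reflect what was set via
+`with_options` (the option chosen here is illustrative):
+
+```python
+import tensorflow as tf
+
+options = tf.data.Options()
+options.experimental_deterministic = False
+
+dataset = tf.data.Dataset.range(5).with_options(options)
+
+# `options()` returns the options currently carried by the dataset.
+print(dataset.options().experimental_deterministic)  # False
+```
+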

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args`
+
+Follows the same semantics as Python's built-in `range`.
+len(args) == 1 -> start = 0, stop = args[0], step = 1.
+len(args) == 2 -> start = args[0], stop = args[1], step = 1.
+len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
+
+`**kwargs`
+
+- output_type: The expected dtype of the elements. (Optional, default: tf.int64).
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func`
+
+A function that maps `(old_state, input_element)` to
+`new_state`. It must take two arguments and return a new element.
+The structure of `new_state` must match the structure of
+`initial_state`.
+
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count`
+
+(Optional.) A tf.int64 scalar tf.Tensor, representing the
+number of times the dataset should be repeated. The default behavior (if
+`count` is `None` or `-1`) is for the dataset to be repeated indefinitely.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError`
+
+if `num_shards` or `index` are illegal values.
+
+Note: error checking is done on a best-effort basis, and errors aren't
+guaranteed to be caught upon dataset creation. (e.g. passing in a
+placeholder tensor bypasses the early checking, and will instead result
+in an error during a session.run call.)
+
+ + + +

shuffle

+
+View source
+
+
+
+Randomly shuffles the elements of this dataset.
+
+This dataset fills a buffer with `buffer_size` elements, then randomly
+samples elements from this buffer, replacing the selected elements with new
+elements. For perfect shuffling, a buffer size greater than or equal to the
+full size of the dataset is required.
+
+For instance, if your dataset contains 10,000 elements but `buffer_size` is
+set to 1,000, then `shuffle` will initially select a random element from
+only the first 1,000 elements in the buffer. Once an element is selected,
+its space in the buffer is replaced by the next (i.e. 1,001-st) element,
+maintaining the 1,000 element buffer.
+
+`reshuffle_each_iteration` controls whether the shuffle order should be
+different for each epoch. In TF 1.X, the idiomatic way to create epochs
+was through the `repeat` transformation:
+
+```
+>>> dataset = tf.data.Dataset.range(3)
+>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
+>>> dataset = dataset.repeat(2)
+>>> list(dataset.as_numpy_iterator())  # doctest: +SKIP
+[1, 0, 2, 1, 2, 0]
+```
+
+```
+>>> dataset = tf.data.Dataset.range(3)
+>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
+>>> dataset = dataset.repeat(2)
+>>> list(dataset.as_numpy_iterator())  # doctest: +SKIP
+[1, 0, 2, 1, 0, 2]
+```
+
+In TF 2.0, tf.data.Dataset objects are Python iterables which makes it
+possible to also create epochs through Python iteration:
+
+```
+>>> dataset = tf.data.Dataset.range(3)
+>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
+>>> list(dataset.as_numpy_iterator())  # doctest: +SKIP
+[1, 0, 2]
+>>> list(dataset.as_numpy_iterator())  # doctest: +SKIP
+[1, 2, 0]
+```
+
+```
+>>> dataset = tf.data.Dataset.range(3)
+>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
+>>> list(dataset.as_numpy_iterator())  # doctest: +SKIP
+[1, 0, 2]
+>>> list(dataset.as_numpy_iterator())  # doctest: +SKIP
+[1, 0, 2]
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset`
+
+A `Dataset` of (nests of) windows -- a finite dataset of flat
+elements created from the (nests of) input elements.
+
+ + + +

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options`
+
+A tf.data.Options that identifies the options to use.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError`
+
+When an option is set more than once to a non-default value.
+
+ + + +

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/data/FixedLengthRecordDataset.md b/site/en/api_docs/python/tf/compat/v1/data/FixedLengthRecordDataset.md new file mode 100644 index 00000000000..a54f6e8989a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/FixedLengthRecordDataset.md @@ -0,0 +1,3022 @@ +description: A Dataset of fixed-length records from one or more binary files. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.data.FixedLengthRecordDataset + + + + + + + + + +A `Dataset` of fixed-length records from one or more binary files. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A tf.string tensor or tf.data.Dataset containing one or +more filenames. +
+`record_bytes` + +A tf.int64 scalar representing the number of bytes in each +record. +
+`header_bytes` + +(Optional.) A tf.int64 scalar representing the number of +bytes to skip at the start of a file. +
+`footer_bytes` + +(Optional.) A tf.int64 scalar representing the number of +bytes to ignore at the end of a file. +
+`buffer_size` + +(Optional.) A tf.int64 scalar representing the number of +bytes to buffer when reading. +
+`compression_type` + +(Optional.) A tf.string scalar evaluating to one of +`""` (no compression), `"ZLIB"`, or `"GZIP"`. +
+`num_parallel_reads`
+
+(Optional.) A tf.int64 scalar representing the
+number of files to read in parallel. If greater than one, the records of
+files read in parallel are output in an interleaved order. If your
+input pipeline is I/O bottlenecked, consider setting this parameter to a
+value greater than one to parallelize the I/O. If `None`, files will be
+read sequentially.
+
+ + + + + + + + + + + + + + + + + + + + + + + +
+`element_spec`
+
+The type specification of an element of this dataset.
+
+```
+>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
+>>> dataset.element_spec
+TensorSpec(shape=(), dtype=tf.int32, name=None)
+```
+
+`output_classes` + +Returns the class of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_classes(dataset). +
+`output_shapes` + +Returns the shape of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_shapes(dataset). +
+`output_types` + +Returns the type of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_types(dataset). +
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+
+View source
+
+
+
+Creates a `Dataset` by concatenating the given dataset with this dataset.
+
+```
+>>> a = tf.data.Dataset.range(1, 4)  # ==> [ 1, 2, 3 ]
+>>> b = tf.data.Dataset.range(4, 8)  # ==> [ 4, 5, 6, 7 ]
+>>> ds = a.concatenate(b)
+>>> list(ds.as_numpy_iterator())
+[1, 2, 3, 4, 5, 6, 7]
+>>> # The input dataset and dataset to be concatenated should have the same
+>>> # nested structures and output types.
+>>> c = tf.data.Dataset.zip((a, b))
+>>> a.concatenate(c)
+Traceback (most recent call last):
+TypeError: Two datasets to concatenate have different types
+<dtype: 'int64'> and (tf.int64, tf.int64)
+>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
+>>> a.concatenate(d)
+Traceback (most recent call last):
+TypeError: Two datasets to concatenate have different types
+<dtype: 'int64'> and <dtype: 'string'>
+```
+
+
+
+
+
+
+
+
+
+
+
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

filter_with_legacy_function

+
+View source
+
+
+
+Filters this dataset according to `predicate`. (deprecated)
+
+Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
+Instructions for updating:
+Use `tf.data.Dataset.filter()`.
+
+Note: This is an escape hatch for existing uses of `filter` that do not work
+with V2 functions. New uses are strongly discouraged and existing uses
+should migrate to `filter` as this method will be removed in V2.
+
+
+
+
+
+
+
+
+
+
+
Args
+`predicate` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to a +scalar tf.bool tensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +
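+
+For illustration, a minimal sketch of a legacy-style predicate, assuming a
+code base that still relies on this escape hatch:
+
+```python
+dataset = tf.compat.v1.data.Dataset.range(5)
+# The predicate receives the element tensors and must return a scalar tf.bool.
+dataset = dataset.filter_with_legacy_function(lambda x: tf.math.less(x, 3))
+```
+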

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of `Dataset.from_generator()` uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +`Dataset.from_generator()`. The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to `Dataset.from_generator()` and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +`Dataset.from_generator()`. + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_sparse_tensor_slices

+ +View source + + + +Splits each rank-N tf.SparseTensor in this dataset row-wise. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.from_tensor_slices(). + + + + + + + + + + +
Args
+`sparse_tensor` + +A tf.SparseTensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of rank-(N-1) sparse tensors. +
+ + + +
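+
+For illustration, a minimal sketch with a hypothetical rank-2 sparse tensor:
+
+```python
+st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
+                            values=[1, 2],
+                            dense_shape=[2, 4])
+# Each element of the resulting dataset is a rank-1 slice (one row of `st`).
+ds = tf.compat.v1.data.Dataset.from_sparse_tensor_slices(st)
+```
+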

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use `Dataset.interleave()` to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +`Dataset.from_tensor_slices(filenames)` instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
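+
+For illustration (the paths are hypothetical), the example above could be
+expressed as:
+
+```python
+# shuffle=False keeps the deterministic order discussed in the note above.
+dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
+```
+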

make_initializable_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_initializable_iterator(dataset)`. + +Note: The returned iterator will be in an uninitialized state, +and you must run the `iterator.initializer` operation before using it: + +```python +dataset = ... +iterator = dataset.make_initializable_iterator() +# ... +sess.run(iterator.initializer) +``` + + + + + + + + + + +
Args
+`shared_name` + +(Optional.) If non-empty, the returned iterator will be +shared under the given name across multiple sessions that share the same +devices (e.g. when using a remote server). +
+ + + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If eager execution is enabled. +
+ + + +

make_one_shot_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`. + +Note: The returned iterator will be initialized automatically. +A "one-shot" iterator does not currently support re-initialization. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + +
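+
+For illustration, a minimal TF 1.x-style sketch (assuming graph mode and a
+tf.compat.v1.Session):
+
+```python
+dataset = tf.compat.v1.data.Dataset.range(3)
+iterator = dataset.make_one_shot_iterator()
+next_element = iterator.get_next()
+with tf.compat.v1.Session() as sess:
+  print(sess.run(next_element))  # ==> 0
+```
+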

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls`
+
+(Optional.) A tf.int32 scalar tf.Tensor,
+representing the number of elements to process asynchronously in parallel.
+If not specified, elements will be processed sequentially. If the value
+tf.data.experimental.AUTOTUNE is used, then the number of parallel
+calls is set dynamically based on available CPU.
+
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

map_with_legacy_function

+
+View source
+
+
+
+Maps `map_func` across the elements of this dataset. (deprecated)
+
+Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
+Instructions for updating:
+Use `tf.data.Dataset.map()`.
+
+Note: This is an escape hatch for existing uses of `map` that do not work
+with V2 functions. New uses are strongly discouraged and existing uses
+should migrate to `map` as this method will be removed in V2.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`map_func` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to +another nested structure of tensors. +
+`num_parallel_calls`
+
+(Optional.) A tf.int32 scalar tf.Tensor,
+representing the number of elements to process asynchronously in parallel.
+If not specified, elements will be processed sequentially. If the value
+tf.data.experimental.AUTOTUNE is used, then the number of parallel
+calls is set dynamically based on available CPU.
+
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
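+
+For illustration, a minimal sketch of a legacy-style map, assuming a code base
+that has not yet migrated to `map`:
+
+```python
+dataset = tf.compat.v1.data.Dataset.range(5)
+# `map_func` operates on the element tensors, just as with `map`.
+dataset = dataset.map_with_legacy_function(lambda x: x * 2)
+```
+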

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +
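+
+For illustration, a minimal sketch showing that options applied with
+`with_options` are visible through `options`:
+
+```
+>>> opts = tf.data.Options()
+>>> opts.experimental_deterministic = False
+>>> ds = tf.data.Dataset.range(3).with_options(opts)
+>>> ds.options().experimental_deterministic
+False
+```
+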

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
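+
+A minimal sketch (assuming TF 2.x eager execution; the pipeline is illustrative) of the usual placement of `prefetch` at the end of a pipeline, where each buffered element is a whole batch:
+
+```python
+import tensorflow as tf
+
+# Sketch: prefetch buffers whole pipeline elements, so placing it after
+# `batch` buffers complete batches. AUTOTUNE lets tf.data choose the
+# buffer size dynamically.
+dataset = (tf.data.Dataset.range(100)
+           .map(lambda x: x * 2)
+           .batch(10)
+           .prefetch(tf.data.experimental.AUTOTUNE))
+for batch in dataset.take(2):
+    print(batch.numpy())  # each prefetched element is a batch of 10 values
+```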

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args` + +Follows the same semantics as Python's `range`. +len(args) == 1 -> start = 0, stop = args[0], step = 1. +len(args) == 2 -> start = args[0], stop = args[1], step = 1. +len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. +
+`**kwargs` + +- output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func` + +A function that maps `(old_state, input_element)` to +`new_state`. It must take two arguments and return a new element. +The structure of `new_state` must match the structure of +`initial_state`. +
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +
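+
+A minimal sketch (assuming TF 2.x eager execution) in which the state is a nested structure, here a `(count, sum)` tuple:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+# Sketch: the state may be a nested structure; here it carries
+# (element_count, running_sum) across the dataset.
+count, total = tf.data.Dataset.range(5).reduce(
+    (np.int64(0), np.int64(0)),
+    lambda state, x: (state[0] + 1, state[1] + x))
+print(count.numpy(), total.numpy())  # 5 10
+```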

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of times the dataset should be repeated. The default behavior (if +`count` is `None` or `-1`) is for the dataset to be repeated indefinitely. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
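+
+A minimal sketch (assuming TF 2.x eager execution): with no `count`, the repeated dataset is unbounded, so it is typically limited with `take` or a fixed number of training steps:
+
+```python
+import tensorflow as tf
+
+# Sketch: repeat() with no count produces an endless stream, so bound it
+# explicitly (here with `take`).
+dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]).repeat()
+print(list(dataset.take(7).as_numpy_iterator()))  # [1, 2, 3, 1, 2, 3, 1]
+```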

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +If `num_shards` or `index` are illegal values. + +Note: error checking is done on a best-effort basis, and errors aren't +guaranteed to be caught upon dataset creation. (e.g. providing a +placeholder tensor bypasses the early checking, and will instead result +in an error during a session.run call.) +
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
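+
+A minimal sketch (assuming TF 2.x eager execution) of a reproducible shuffle using a fixed `seed`:
+
+```python
+import tensorflow as tf
+
+# Sketch: a fixed `seed` together with reshuffle_each_iteration=False
+# makes the shuffle order reproducible.
+dataset = tf.data.Dataset.range(5).shuffle(
+    buffer_size=5, seed=42, reshuffle_each_iteration=False)
+print(list(dataset.as_numpy_iterator()))  # the same permutation of 0..4 each run
+```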

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of (nests of) windows -- finite datasets of flat +elements created from the (nests of) input elements. +
+ + + +
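+
+A minimal sketch (assuming TF 2.x eager execution) of a common follow-up: since each window is itself a dataset, `flat_map` plus `batch` turns the windows back into dense tensors:
+
+```python
+import tensorflow as tf
+
+# Sketch: build sliding windows of size 3 with shift 1, then flatten
+# each window dataset into a single dense tensor.
+size = 3
+dataset = tf.data.Dataset.range(7).window(size, shift=1, drop_remainder=True)
+dataset = dataset.flat_map(lambda window: window.batch(size))
+print(list(dataset.as_numpy_iterator()))
+# [array([0, 1, 2]), array([1, 2, 3]), array([2, 3, 4]), array([3, 4, 5]), array([4, 5, 6])]
+```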

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options` + +A tf.data.Options that identifies the options to use. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When an option is set more than once to a non-default value. +
+ + + +

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/data/Iterator.md b/site/en/api_docs/python/tf/compat/v1/data/Iterator.md new file mode 100644 index 00000000000..fda3bc21a49 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/Iterator.md @@ -0,0 +1,580 @@ +description: Represents the state of iterating through a Dataset. + +
+ + + + + + + + +
+ +# tf.compat.v1.data.Iterator + + + + + + + + + +Represents the state of iterating through a `Dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`iterator_resource` + +A tf.resource scalar tf.Tensor representing the +iterator. +
+`initializer` + +A tf.Operation that should be run to initialize this +iterator. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element of this iterator. +
+`output_shapes` + +A nested structure of tf.TensorShape objects +corresponding to each component of an element of this iterator. +
+`output_classes` + +A nested structure of Python `type` objects corresponding +to each component of an element of this iterator. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`element_spec` + +The type specification of an element of this iterator. +
+`initializer` + +A tf.Operation that should be run to initialize this iterator. +
+`output_classes` + +Returns the class of each component of an element of this iterator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_classes(iterator). + +The expected values are tf.Tensor and tf.SparseTensor. +
+`output_shapes` + +Returns the shape of each component of an element of this iterator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_shapes(iterator). +
+`output_types` + +Returns the type of each component of an element of this iterator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_types(iterator). +
+ + + +## Methods + +

from_string_handle

+ +View source + + + +Creates a new, uninitialized `Iterator` based on the given handle. + +This method allows you to define a "feedable" iterator where you can choose +between concrete iterators by feeding a value in a `tf.Session.run` call. +In that case, `string_handle` would be a tf.compat.v1.placeholder, and you +would +feed it with the value of `tf.data.Iterator.string_handle` in each step. + +For example, if you had two iterators that marked the current position in +a training dataset and a test dataset, you could choose which to use in +each step as follows: + +```python +train_iterator = tf.data.Dataset(...).make_one_shot_iterator() +train_iterator_handle = sess.run(train_iterator.string_handle()) + +test_iterator = tf.data.Dataset(...).make_one_shot_iterator() +test_iterator_handle = sess.run(test_iterator.string_handle()) + +handle = tf.compat.v1.placeholder(tf.string, shape=[]) +iterator = tf.data.Iterator.from_string_handle( + handle, train_iterator.output_types) + +next_element = iterator.get_next() +loss = f(next_element) + +train_loss = sess.run(loss, feed_dict={handle: train_iterator_handle}) +test_loss = sess.run(loss, feed_dict={handle: test_iterator_handle}) +``` + + + + + + + + + + + + + + + + + + + +
Args
+`string_handle` + +A scalar tf.Tensor of type tf.string that evaluates to +a handle produced by the `Iterator.string_handle()` method. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element of this dataset. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element of this dataset. If +omitted, each component will have an unconstrained shape. +
+`output_classes` + +(Optional.) A nested structure of Python `type` objects +corresponding to each component of an element of this iterator. If +omitted, each component is assumed to be of type tf.Tensor. +
+ + + + + + + + + + + +
Returns
+An `Iterator`. +
+ + + +

from_structure

+ +View source + + + +Creates a new, uninitialized `Iterator` with the given structure. + +This iterator-constructing method can be used to create an iterator that +is reusable with many different datasets. + +The returned iterator is not bound to a particular dataset, and it has +no `initializer`. To initialize the iterator, run the operation returned by +`Iterator.make_initializer(dataset)`. + +The following is an example + +```python +iterator = Iterator.from_structure(tf.int64, tf.TensorShape([])) + +dataset_range = Dataset.range(10) +range_initializer = iterator.make_initializer(dataset_range) + +dataset_evens = dataset_range.filter(lambda x: x % 2 == 0) +evens_initializer = iterator.make_initializer(dataset_evens) + +# Define a model based on the iterator; in this example, the model_fn +# is expected to take scalar tf.int64 Tensors as input (see +# the definition of 'iterator' above). +prediction, loss = model_fn(iterator.get_next()) + +# Train for `num_epochs`, where for each epoch, we first iterate over +# dataset_range, and then iterate over dataset_evens. +for _ in range(num_epochs): + # Initialize the iterator to `dataset_range` + sess.run(range_initializer) + while True: + try: + pred, loss_val = sess.run([prediction, loss]) + except tf.errors.OutOfRangeError: + break + + # Initialize the iterator to `dataset_evens` + sess.run(evens_initializer) + while True: + try: + pred, loss_val = sess.run([prediction, loss]) + except tf.errors.OutOfRangeError: + break +``` + + + + + + + + + + + + + + + + + + + +
Args
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element of this dataset. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element of this dataset. If +omitted, each component will have an unconstrained shape. +
+`shared_name` + +(Optional.) If non-empty, this iterator will be shared under +the given name across multiple sessions that share the same devices +(e.g. when using a remote server). +
+`output_classes` + +(Optional.) A nested structure of Python `type` objects +corresponding to each component of an element of this iterator. If +omitted, each component is assumed to be of type tf.Tensor. +
+ + + + + + + + + + + +
Returns
+An `Iterator`. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the structures of `output_shapes` and `output_types` are +not the same. +
+ + + +

get_next

+ +View source + + + +Returns a nested structure of tf.Tensors representing the next element. + +In graph mode, you should typically call this method *once* and use its +result as the input to another computation. A typical loop will then call +`tf.Session.run` on the result of that computation. The loop will terminate +when the `Iterator.get_next()` operation raises +tf.errors.OutOfRangeError. The following skeleton shows how to use +this method when building a training loop: + +```python +dataset = ... # A `tf.data.Dataset` object. +iterator = dataset.make_initializable_iterator() +next_element = iterator.get_next() + +# Build a TensorFlow graph that does something with each element. +loss = model_function(next_element) +optimizer = ... # A `tf.compat.v1.train.Optimizer` object. +train_op = optimizer.minimize(loss) + +with tf.compat.v1.Session() as sess: + try: + while True: + sess.run(train_op) + except tf.errors.OutOfRangeError: + pass +``` + +NOTE: It is legitimate to call `Iterator.get_next()` multiple times, e.g. +when you are distributing different elements to multiple devices in a single +step. However, a common pitfall arises when users call `Iterator.get_next()` +in each iteration of their training loop. `Iterator.get_next()` adds ops to +the graph, and executing each op allocates resources (including threads); as +a consequence, invoking it in every iteration of a training loop causes +slowdown and eventual resource exhaustion. To guard against this outcome, we +log a warning when the number of uses crosses a fixed threshold of +suspiciousness. + + + + + + + + + + +
Args
+`name` + +(Optional.) A name for the created operation. +
+ + + + + + + + + + + +
Returns
+A nested structure of tf.Tensor objects. +
+ + + +

make_initializer

+ +View source + + + +Returns a tf.Operation that initializes this iterator on `dataset`. + + + + + + + + + + + + + + +
Args
+`dataset` + +A `Dataset` with compatible structure to this iterator. +
+`name` + +(Optional.) A name for the created operation. +
+ + + + + + + + + + + +
Returns
+A tf.Operation that can be run to initialize this iterator on the given +`dataset`. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If `dataset` and this iterator do not have a compatible +element structure. +
+ + + +
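+
+A minimal sketch (assuming TF 1.x-style graph mode, with eager execution disabled) of initializing one reusable iterator on two datasets with the same structure:
+
+```python
+import tensorflow as tf
+
+# Sketch: a structure-only iterator re-initialized on two compatible datasets.
+tf.compat.v1.disable_eager_execution()  # assumes V1-style graph execution
+
+iterator = tf.compat.v1.data.Iterator.from_structure(tf.int64, tf.TensorShape([]))
+next_element = iterator.get_next()
+
+init_a = iterator.make_initializer(tf.compat.v1.data.Dataset.range(3))
+init_b = iterator.make_initializer(tf.compat.v1.data.Dataset.range(10, 13))
+
+with tf.compat.v1.Session() as sess:
+    sess.run(init_a)
+    print(sess.run(next_element))  # 0
+    sess.run(init_b)
+    print(sess.run(next_element))  # 10
+```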

string_handle

+ +View source + + + +Returns a string-valued tf.Tensor that represents this iterator. + + + + + + + + + + + +
Args
+`name` + +(Optional.) A name for the created operation. +
+ + + + + + + + + + + +
Returns
+A scalar tf.Tensor of type tf.string. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/data/TFRecordDataset.md b/site/en/api_docs/python/tf/compat/v1/data/TFRecordDataset.md new file mode 100644 index 00000000000..61bfde3743e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/TFRecordDataset.md @@ -0,0 +1,3023 @@ +description: A Dataset comprising records from one or more TFRecord files. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.data.TFRecordDataset + + + + + + + + + +A `Dataset` comprising records from one or more TFRecord files. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A tf.string tensor or tf.data.Dataset containing one or +more filenames. +
+`compression_type` + +(Optional.) A tf.string scalar evaluating to one of +`""` (no compression), `"ZLIB"`, or `"GZIP"`. +
+`buffer_size` + +(Optional.) A tf.int64 scalar representing the number of +bytes in the read buffer. If your input pipeline is I/O bottlenecked, +consider setting this parameter to a value between 1 and 100 MB. If `None`, a +sensible default for both local and remote file systems is used. +
+`num_parallel_reads` + +(Optional.) A tf.int64 scalar representing the +number of files to read in parallel. If greater than one, the records of +files read in parallel are output in an interleaved order. If your +input pipeline is I/O bottlenecked, consider setting this parameter to a +value greater than one to parallelize the I/O. If `None`, files will be +read sequentially. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If any argument does not have the expected type. +
+`ValueError` + +If any argument does not have the expected shape. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`element_spec` + +The type specification of an element of this dataset. + +``` +>>> tf.data.Dataset.from_tensor_slices([1, 2, 3]).element_spec +TensorSpec(shape=(), dtype=tf.int32, name=None) +``` +
+`output_classes` + +Returns the class of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_classes(dataset). +
+`output_shapes` + +Returns the shape of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_shapes(dataset). +
+`output_types` + +Returns the type of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_types(dataset). +
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

filter_with_legacy_function

+ +View source + + + +Filters this dataset according to `predicate`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.filter() + +Note: This is an escape hatch for existing uses of `filter` that do not work +with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `filter` as this method will be removed in V2. + + + + + + + + + + +
Args
+`predicate` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to a +scalar tf.bool tensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of `Dataset.from_generator()` uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +`Dataset.from_generator()`. The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to `Dataset.from_generator()` and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +`Dataset.from_generator()`. + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
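+
+A minimal sketch (assuming TF 2.x eager execution) of the `args` parameter, whose values are passed to the generator as NumPy values:
+
+```python
+import tensorflow as tf
+
+# Sketch: values in `args` are forwarded to the generator, so one
+# generator definition can back several datasets.
+def gen(stop):
+    for i in range(int(stop)):  # `stop` arrives as a NumPy value
+        yield i
+
+dataset = tf.data.Dataset.from_generator(gen, tf.int64, args=(4,))
+print(list(dataset.as_numpy_iterator()))  # [0, 1, 2, 3]
+```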

from_sparse_tensor_slices

+ +View source + + + +Splits each rank-N tf.SparseTensor in this dataset row-wise. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.from_tensor_slices(). + + + + + + + + + + +
Args
+`sparse_tensor` + +A tf.SparseTensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of rank-(N-1) sparse tensors. +
+ + + +

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use `Dataset.interleave()` to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +`Dataset.from_tensor_slices(filenames)` instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
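+
+A minimal sketch (the glob pattern is illustrative) of listing shard files in a deterministic order and feeding them to a `TFRecordDataset`:
+
+```python
+import tensorflow as tf
+
+# Sketch: disable the default shuffle to keep the file order deterministic,
+# then read the matched files as TFRecords.
+files = tf.data.Dataset.list_files("/path/to/dir/*.tfrecord", shuffle=False)
+dataset = tf.data.TFRecordDataset(files)
+```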

make_initializable_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_initializable_iterator(dataset)`. + +Note: The returned iterator will be in an uninitialized state, +and you must run the `iterator.initializer` operation before using it: + +```python +dataset = ... +iterator = dataset.make_initializable_iterator() +# ... +sess.run(iterator.initializer) +``` + + + + + + + + + + +
Args
+`shared_name` + +(Optional.) If non-empty, the returned iterator will be +shared under the given name across multiple sessions that share the same +devices (e.g. when using a remote server). +
+ + + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If eager execution is enabled. +
+ + + +

make_one_shot_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`. + +Note: The returned iterator will be initialized automatically. +A "one-shot" iterator does not currently support re-initialization. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + +
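+
+A minimal sketch (assuming TF 1.x-style graph mode, with eager execution disabled) of consuming a one-shot iterator until it is exhausted:
+
+```python
+import tensorflow as tf
+
+# Sketch: a one-shot iterator needs no explicit initialization and is
+# consumed until OutOfRangeError.
+tf.compat.v1.disable_eager_execution()  # assumes V1-style graph execution
+
+dataset = tf.compat.v1.data.Dataset.range(3)
+next_element = dataset.make_one_shot_iterator().get_next()
+
+with tf.compat.v1.Session() as sess:
+    try:
+        while True:
+            print(sess.run(next_element))  # 0, 1, 2
+    except tf.errors.OutOfRangeError:
+        pass
+```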

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

map_with_legacy_function

+ +View source + + + +Maps `map_func` across the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.map() + +Note: This is an escape hatch for existing uses of `map` that do not work +with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `map` as this method will be removed in V2. + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to +another nested structure of tensors. +
+
+`num_parallel_calls`
+
+(Optional.) A tf.int32 scalar tf.Tensor,
+representing the number of elements to process asynchronously in parallel.
+If not specified, elements will be processed sequentially. If the value
+tf.data.experimental.AUTOTUNE is used, then the number of parallel
+calls is set dynamically based on available CPU.
+
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
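+
+For illustration, here is a minimal migration sketch (not part of the original
+reference); the element-wise function is hypothetical and the deprecated call
+appears only in a comment.
+
+```python
+import tensorflow as tf
+
+def add_one(x):
+  # Hypothetical element-wise transformation used for illustration.
+  return x + 1
+
+# Instead of: dataset.map_with_legacy_function(add_one)
+# the recommended, equivalent call is:
+dataset = tf.data.Dataset.range(5).map(add_one)
+print(list(dataset.as_numpy_iterator()))  # [1, 2, 3, 4, 5]
+```
+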

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +
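+
+As a short usage sketch (not part of the original reference), options set
+upstream with `with_options` are reflected in the value returned here:
+
+```python
+import tensorflow as tf
+
+options = tf.data.Options()
+options.experimental_deterministic = False
+
+dataset = tf.data.Dataset.range(10).with_options(options)
+
+# `options()` returns the merged options for this dataset and its inputs.
+print(dataset.options().experimental_deterministic)  # False
+```
+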

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+
+`*args`
+
+follows the same semantics as Python's built-in `range`.
+len(args) == 1 -> start = 0, stop = args[0], step = 1.
+len(args) == 2 -> start = args[0], stop = args[1], step = 1.
+len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
+
+
+`**kwargs`
+
+- output_type: The expected dtype of the elements of the dataset.
+(Optional, default: tf.int64).
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+
+`reduce_func`
+
+A function that maps `(old_state, input_element)` to
+`new_state`. It must take two arguments and return a new element.
+The structure of `new_state` must match the structure of
+`initial_state`.
+
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+
+`count`
+
+(Optional.) A tf.int64 scalar tf.Tensor, representing the
+number of times the dataset should be repeated. The default behavior (if
+`count` is `None` or `-1`) is for the dataset to be repeated indefinitely.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+
+`InvalidArgumentError`
+
+if `num_shards` or `index` are illegal values.
+
+Note: error checking is done on a best-effort basis, and errors aren't
+guaranteed to be caught upon dataset creation. (e.g. providing a
+placeholder tensor bypasses the early checking, and will instead result
+in an error during a session.run call.)
+
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+
+`Dataset`
+
+A `Dataset` of (nests of) windows -- finite datasets of flat
+elements created from the (nests of) input elements.
+
+ + + +

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+
+`options`
+
+A tf.data.Options that identifies the options to use.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +when an option is set more than once to a non-default value +
+ + + +

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/data/TextLineDataset.md b/site/en/api_docs/python/tf/compat/v1/data/TextLineDataset.md new file mode 100644 index 00000000000..5e2cdf70600 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/TextLineDataset.md @@ -0,0 +1,2998 @@ +description: A Dataset comprising lines from one or more text files. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.data.TextLineDataset + + + + + + + + + +A `Dataset` comprising lines from one or more text files. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A tf.string tensor or tf.data.Dataset containing one or +more filenames. +
+`compression_type` + +(Optional.) A tf.string scalar evaluating to one of +`""` (no compression), `"ZLIB"`, or `"GZIP"`. +
+`buffer_size` + +(Optional.) A tf.int64 scalar denoting the number of bytes +to buffer. A value of 0 results in the default buffering values chosen +based on the compression type. +
+`num_parallel_reads` + +(Optional.) A tf.int64 scalar representing the +number of files to read in parallel. If greater than one, the records of +files read in parallel are outputted in an interleaved order. If your +input pipeline is I/O bottlenecked, consider setting this parameter to a +value greater than one to parallelize the I/O. If `None`, files will be +read sequentially. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+
+`element_spec`
+
+The type specification of an element of this dataset.
+
+```
+>>> tf.data.Dataset.from_tensor_slices([1, 2, 3]).element_spec
+TensorSpec(shape=(), dtype=tf.int32, name=None)
+```
+
+`output_classes` + +Returns the class of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_classes(dataset). +
+`output_shapes` + +Returns the shape of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_shapes(dataset). +
+`output_types` + +Returns the type of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_types(dataset). +
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

filter_with_legacy_function

+ +View source + + + +Filters this dataset according to `predicate`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.filter() + +Note: This is an escape hatch for existing uses of `filter` that do not work +with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `filter` as this method will be removed in V2. + + + + + + + + + + +
Args
+`predicate` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to a +scalar tf.bool tensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +
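+
+For illustration, a minimal migration sketch (not part of the original
+reference); the predicate is hypothetical and the deprecated call appears
+only in a comment.
+
+```python
+import tensorflow as tf
+
+# Instead of: dataset.filter_with_legacy_function(lambda x: x < 3)
+# the recommended, equivalent call is:
+dataset = tf.data.Dataset.range(5).filter(lambda x: x < 3)
+print(list(dataset.as_numpy_iterator()))  # [0, 1, 2]
+```
+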

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of `Dataset.from_generator()` uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +`Dataset.from_generator()`. The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to `Dataset.from_generator()` and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +`Dataset.from_generator()`. + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_sparse_tensor_slices

+ +View source + + + +Splits each rank-N tf.SparseTensor in this dataset row-wise. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.from_tensor_slices(). + + + + + + + + + + +
Args
+`sparse_tensor` + +A tf.SparseTensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of rank-(N-1) sparse tensors. +
+ + + +
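+
+The following is a hedged sketch (not part of the original reference) of the
+deprecated behavior, using an illustrative 2x4 sparse matrix; the exact
+element representation may vary between versions.
+
+```python
+import tensorflow as tf
+
+# Two rows; row 0 has a value at column 0, row 1 has a value at column 2.
+st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
+                            values=[10, 20],
+                            dense_shape=[2, 4])
+
+# Deprecated; prefer tf.data.Dataset.from_tensor_slices for new code.
+dataset = tf.compat.v1.data.Dataset.from_sparse_tensor_slices(st)
+for row in dataset:  # each element is a rank-1 sparse tensor (one row of `st`)
+  print(tf.sparse.to_dense(row))
+```
+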

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use `Dataset.interleave()` to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +`Dataset.from_tensor_slices(filenames)` instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
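+
+A brief usage sketch (not part of the original reference); the glob pattern is
+hypothetical and must match existing files when run.
+
+```python
+import tensorflow as tf
+
+# Deterministic listing of the matching files (shuffling disabled).
+files = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
+for filename in files:
+  print(filename.numpy())
+```
+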

make_initializable_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_initializable_iterator(dataset)`. + +Note: The returned iterator will be in an uninitialized state, +and you must run the `iterator.initializer` operation before using it: + +```python +dataset = ... +iterator = dataset.make_initializable_iterator() +# ... +sess.run(iterator.initializer) +``` + + + + + + + + + + +
Args
+`shared_name` + +(Optional.) If non-empty, the returned iterator will be +shared under the given name across multiple sessions that share the same +devices (e.g. when using a remote server). +
+ + + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If eager execution is enabled. +
+ + + +
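+
+For completeness, a self-contained graph-mode sketch (not part of the original
+reference) using the module-level helper recommended in the deprecation notice:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # TF1-style graph mode
+
+dataset = tf.compat.v1.data.Dataset.range(3)
+iterator = tf.compat.v1.data.make_initializable_iterator(dataset)
+next_element = iterator.get_next()
+
+with tf.compat.v1.Session() as sess:
+  sess.run(iterator.initializer)  # must run before fetching elements
+  print(sess.run(next_element))   # 0
+  print(sess.run(next_element))   # 1
+```
+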

make_one_shot_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`. + +Note: The returned iterator will be initialized automatically. +A "one-shot" iterator does not currently support re-initialization. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + +
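+
+A similar graph-mode sketch (not part of the original reference), draining the
+one-shot iterator until it is exhausted:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # TF1-style graph mode
+
+dataset = tf.compat.v1.data.Dataset.range(3)
+iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
+next_element = iterator.get_next()
+
+with tf.compat.v1.Session() as sess:
+  try:
+    while True:
+      print(sess.run(next_element))  # prints 0, 1, 2
+  except tf.errors.OutOfRangeError:
+    pass  # a one-shot iterator cannot be re-initialized once exhausted
+```
+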

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+
+`num_parallel_calls`
+
+(Optional.) A tf.int32 scalar tf.Tensor,
+representing the number of elements to process asynchronously in parallel.
+If not specified, elements will be processed sequentially. If the value
+tf.data.experimental.AUTOTUNE is used, then the number of parallel
+calls is set dynamically based on available CPU.
+
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

map_with_legacy_function

+ +View source + + + +Maps `map_func` across the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.map() + +Note: This is an escape hatch for existing uses of `map` that do not work +with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `map` as this method will be removed in V2. + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to +another nested structure of tensors. +
+
+`num_parallel_calls`
+
+(Optional.) A tf.int32 scalar tf.Tensor,
+representing the number of elements to process asynchronously in parallel.
+If not specified, elements will be processed sequentially. If the value
+tf.data.experimental.AUTOTUNE is used, then the number of parallel
+calls is set dynamically based on available CPU.
+
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
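+As a minimal sketch of the guidance above (the map function and batch size are
+arbitrary placeholders), `prefetch` is typically the last transformation in an
+input pipeline:
+
+```python
+import tensorflow as tf
+
+dataset = (tf.data.Dataset.range(100)
+           .map(lambda x: x * 2)   # placeholder preprocessing step
+           .batch(16)
+           .prefetch(tf.data.experimental.AUTOTUNE))
+for batch in dataset:
+    pass  # e.g. run a training step on `batch`
+```
+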

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args`
+
+follows the same semantics as Python's built-in `range`.
+len(args) == 1 -> start = 0, stop = args[0], step = 1.
+len(args) == 2 -> start = args[0], stop = args[1], step = 1.
+len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
+
+`**kwargs`
+
+- output_type: The dtype of the elements in the resulting dataset.
+(Optional, default: tf.int64).
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func`
+
+A function that maps `(old_state, input_element)` to
+`new_state`. It must take two arguments and return a new state.
+The structure of `new_state` must match the structure of
+`initial_state`.
+
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +
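+As a minimal sketch of the structure-matching requirement on `reduce_func`
+(the `(sum, count)` tuple state is an arbitrary choice for illustration):
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(1, 6)  # [1, 2, 3, 4, 5]
+# The state is a (sum, count) tuple, so reduce_func returns a matching tuple.
+total, count = dataset.reduce(
+    (tf.constant(0, tf.int64), tf.constant(0, tf.int64)),
+    lambda state, x: (state[0] + x, state[1] + 1))
+print(total.numpy(), count.numpy())  # 15 5
+```
+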

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count`
+
+(Optional.) A tf.int64 scalar tf.Tensor, representing the
+number of times the dataset should be repeated. The default behavior (if
+`count` is `None` or `-1`) is for the dataset to be repeated indefinitely.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError`
+
+if `num_shards` or `index` are illegal values.
+
+Note: error checking is done on a best-effort basis, and errors aren't
+guaranteed to be caught upon dataset creation. (e.g. providing a
+placeholder tensor bypasses the early checking, and will instead result
+in an error during a session.run call.)
+
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset`
+
+A `Dataset` of (nests of) windows -- finite datasets of flat
+elements created from the (nests of) input elements.
+
+ + + +

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options`
+
+A tf.data.Options that identifies the options to use.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError`
+
+when an option is set more than once to a non-default value.
+
+ + + +

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental.md b/site/en/api_docs/python/tf/compat/v1/data/experimental.md new file mode 100644 index 00000000000..11f6cc297d9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental.md @@ -0,0 +1,153 @@ +description: Experimental API for building input pipelines. + +
+ + + + + +
+ +# Module: tf.compat.v1.data.experimental + + + + + + + + + +Experimental API for building input pipelines. + + +This module contains experimental `Dataset` sources and transformations that can +be used in conjunction with the tf.data.Dataset API. Note that the +tf.data.experimental API is not subject to the same backwards compatibility +guarantees as tf.data, but we will provide deprecation advice in advance of +removing existing functionality. + +See [Importing Data](https://tensorflow.org/guide/datasets) for an overview. + + + + +## Classes + +[`class AutoShardPolicy`](../../../../tf/data/experimental/AutoShardPolicy.md): Represents the type of auto-sharding we enable. + +[`class CheckpointInputPipelineHook`](../../../../tf/data/experimental/CheckpointInputPipelineHook.md): Checkpoints input pipeline state every N steps or seconds. + +[`class CsvDataset`](../../../../tf/compat/v1/data/experimental/CsvDataset.md): A Dataset comprising lines from one or more CSV files. + +[`class DatasetStructure`](../../../../tf/data/DatasetSpec.md): Type specification for tf.data.Dataset. + +[`class DistributeOptions`](../../../../tf/data/experimental/DistributeOptions.md): Represents options for distributed data processing. + +[`class MapVectorizationOptions`](../../../../tf/data/experimental/MapVectorizationOptions.md): Represents options for the MapVectorization optimization. + +[`class OptimizationOptions`](../../../../tf/data/experimental/OptimizationOptions.md): Represents options for dataset optimizations. + +[`class Optional`](../../../../tf/data/experimental/Optional.md): Wraps a value that may/may not be present at runtime. + +[`class OptionalStructure`](../../../../tf/OptionalSpec.md): Represents an optional potentially containing a structured value. + +[`class RandomDataset`](../../../../tf/compat/v1/data/experimental/RandomDataset.md): A `Dataset` of pseudorandom values. + +[`class Reducer`](../../../../tf/data/experimental/Reducer.md): A reducer is used for reducing a set of elements. + +[`class SqlDataset`](../../../../tf/compat/v1/data/experimental/SqlDataset.md): A `Dataset` consisting of the results from a SQL query. + +[`class StatsAggregator`](../../../../tf/compat/v1/data/experimental/StatsAggregator.md): A stateful resource that aggregates statistics from one or more iterators. + +[`class StatsOptions`](../../../../tf/data/experimental/StatsOptions.md): Represents options for collecting dataset stats using `StatsAggregator`. + +[`class Structure`](../../../../tf/TypeSpec.md): Specifies a TensorFlow value type. + +[`class TFRecordWriter`](../../../../tf/data/experimental/TFRecordWriter.md): Writes a dataset to a TFRecord file. + +[`class ThreadingOptions`](../../../../tf/data/experimental/ThreadingOptions.md): Represents options for dataset threading. + +## Functions + +[`Counter(...)`](../../../../tf/compat/v1/data/experimental/Counter.md): Creates a `Dataset` that counts from `start` in steps of size `step`. 
+ +[`RaggedTensorStructure(...)`](../../../../tf/compat/v1/data/experimental/RaggedTensorStructure.md): DEPRECATED FUNCTION + +[`SparseTensorStructure(...)`](../../../../tf/compat/v1/data/experimental/SparseTensorStructure.md): DEPRECATED FUNCTION + +[`TensorArrayStructure(...)`](../../../../tf/compat/v1/data/experimental/TensorArrayStructure.md): DEPRECATED FUNCTION + +[`TensorStructure(...)`](../../../../tf/compat/v1/data/experimental/TensorStructure.md): DEPRECATED FUNCTION + +[`assert_cardinality(...)`](../../../../tf/data/experimental/assert_cardinality.md): Asserts the cardinality of the input dataset. + +[`bucket_by_sequence_length(...)`](../../../../tf/data/experimental/bucket_by_sequence_length.md): A transformation that buckets elements in a `Dataset` by length. + +[`bytes_produced_stats(...)`](../../../../tf/data/experimental/bytes_produced_stats.md): Records the number of bytes produced by each element of the input dataset. + +[`cardinality(...)`](../../../../tf/data/experimental/cardinality.md): Returns the cardinality of `dataset`, if known. + +[`choose_from_datasets(...)`](../../../../tf/compat/v1/data/experimental/choose_from_datasets.md): Creates a dataset that deterministically chooses elements from `datasets`. + +[`copy_to_device(...)`](../../../../tf/data/experimental/copy_to_device.md): A transformation that copies dataset elements to the given `target_device`. + +[`dense_to_ragged_batch(...)`](../../../../tf/data/experimental/dense_to_ragged_batch.md): A transformation that batches ragged elements into tf.RaggedTensors. + +[`dense_to_sparse_batch(...)`](../../../../tf/data/experimental/dense_to_sparse_batch.md): A transformation that batches ragged elements into tf.SparseTensors. + +[`enumerate_dataset(...)`](../../../../tf/data/experimental/enumerate_dataset.md): A transformation that enumerates the elements of a dataset. (deprecated) + +[`from_variant(...)`](../../../../tf/data/experimental/from_variant.md): Constructs a dataset from the given variant and structure. + +[`get_next_as_optional(...)`](../../../../tf/data/experimental/get_next_as_optional.md): Returns an `Optional` that contains the next value from the iterator. + +[`get_single_element(...)`](../../../../tf/data/experimental/get_single_element.md): Returns the single element in `dataset` as a nested structure of tensors. + +[`get_structure(...)`](../../../../tf/data/experimental/get_structure.md): Returns the type specification of an element of a `Dataset` or `Iterator`. + +[`group_by_reducer(...)`](../../../../tf/data/experimental/group_by_reducer.md): A transformation that groups elements and performs a reduction. + +[`group_by_window(...)`](../../../../tf/data/experimental/group_by_window.md): A transformation that groups windows of elements by key and reduces them. + +[`ignore_errors(...)`](../../../../tf/data/experimental/ignore_errors.md): Creates a `Dataset` from another `Dataset` and silently ignores any errors. + +[`latency_stats(...)`](../../../../tf/data/experimental/latency_stats.md): Records the latency of producing each element of the input dataset. + +[`make_batched_features_dataset(...)`](../../../../tf/compat/v1/data/experimental/make_batched_features_dataset.md): Returns a `Dataset` of feature dictionaries from `Example` protos. + +[`make_csv_dataset(...)`](../../../../tf/compat/v1/data/experimental/make_csv_dataset.md): Reads CSV files into a dataset. 
+ +[`make_saveable_from_iterator(...)`](../../../../tf/data/experimental/make_saveable_from_iterator.md): Returns a SaveableObject for saving/restoring iterator state using Saver. + +[`map_and_batch(...)`](../../../../tf/data/experimental/map_and_batch.md): Fused implementation of `map` and `batch`. (deprecated) + +[`map_and_batch_with_legacy_function(...)`](../../../../tf/compat/v1/data/experimental/map_and_batch_with_legacy_function.md): Fused implementation of `map` and `batch`. (deprecated) + +[`parallel_interleave(...)`](../../../../tf/data/experimental/parallel_interleave.md): A parallel version of the `Dataset.interleave()` transformation. (deprecated) + +[`parse_example_dataset(...)`](../../../../tf/data/experimental/parse_example_dataset.md): A transformation that parses `Example` protos into a `dict` of tensors. + +[`prefetch_to_device(...)`](../../../../tf/data/experimental/prefetch_to_device.md): A transformation that prefetches dataset values to the given `device`. + +[`rejection_resample(...)`](../../../../tf/data/experimental/rejection_resample.md): A transformation that resamples a dataset to achieve a target distribution. + +[`sample_from_datasets(...)`](../../../../tf/compat/v1/data/experimental/sample_from_datasets.md): Samples elements at random from the datasets in `datasets`. + +[`scan(...)`](../../../../tf/data/experimental/scan.md): A transformation that scans a function across an input dataset. + +[`shuffle_and_repeat(...)`](../../../../tf/data/experimental/shuffle_and_repeat.md): Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated) + +[`take_while(...)`](../../../../tf/data/experimental/take_while.md): A transformation that stops dataset iteration based on a `predicate`. + +[`to_variant(...)`](../../../../tf/data/experimental/to_variant.md): Returns a variant representing the given dataset. + +[`unbatch(...)`](../../../../tf/data/experimental/unbatch.md): Splits elements of a dataset into multiple elements on the batch dimension. (deprecated) + +[`unique(...)`](../../../../tf/data/experimental/unique.md): Creates a `Dataset` from another `Dataset`, discarding duplicates. + +## Other Members + +* `AUTOTUNE = -1` +* `INFINITE_CARDINALITY = -1` +* `UNKNOWN_CARDINALITY = -2` diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/Counter.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/Counter.md new file mode 100644 index 00000000000..640f6f8bbd1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/Counter.md @@ -0,0 +1,91 @@ +description: Creates a Dataset that counts from start in steps of size step. + +
+ + +
+ +# tf.compat.v1.data.experimental.Counter + + + + + + + + + +Creates a `Dataset` that counts from `start` in steps of size `step`. + + + + + + + + +#### For example: + + + +```python +Dataset.count() == [0, 1, 2, ...) +Dataset.count(2) == [2, 3, ...) +Dataset.count(2, 5) == [2, 7, 12, ...) +Dataset.count(0, -1) == [0, -1, -2, ...) +Dataset.count(10, -1) == [10, 9, ...) +``` + + + + + + + + + + + + + + + + +
+`start` + +(Optional.) The starting value for the counter. Defaults to 0. +
+`step` + +(Optional.) The step size for the counter. Defaults to 1. +
+`dtype` + +(Optional.) The data type for counter elements. Defaults to +tf.int64. +
+ + + + + + + + + + + +
+A `Dataset` of scalar `dtype` elements. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/CsvDataset.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/CsvDataset.md new file mode 100644 index 00000000000..7a9e49a55c0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/CsvDataset.md @@ -0,0 +1,3044 @@ +description: A Dataset comprising lines from one or more CSV files. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.data.experimental.CsvDataset + + + + + + + + + +A Dataset comprising lines from one or more CSV files. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A tf.string tensor containing one or more filenames. +
+`record_defaults` + +A list of default values for the CSV fields. Each item in +the list is either a valid CSV `DType` (float32, float64, int32, int64, +string), or a `Tensor` object with one of the above types. One per +column of CSV data, with either a scalar `Tensor` default value for the +column if it is optional, or `DType` or empty `Tensor` if required. If +both this and `select_columns` are specified, these must have the same +lengths, and `column_defaults` is assumed to be sorted in order of +increasing column index. +
+`compression_type` + +(Optional.) A tf.string scalar evaluating to one of +`""` (no compression), `"ZLIB"`, or `"GZIP"`. Defaults to no +compression. +
+`buffer_size` + +(Optional.) A tf.int64 scalar denoting the number of bytes +to buffer while reading files. Defaults to 4MB. +
+`header` + +(Optional.) A tf.bool scalar indicating whether the CSV file(s) +have header line(s) that should be skipped when parsing. Defaults to +`False`. +
+`field_delim` + +(Optional.) A tf.string scalar containing the delimiter +character that separates fields in a record. Defaults to `","`. +
+`use_quote_delim` + +(Optional.) A tf.bool scalar. If `False`, treats +double quotation marks as regular characters inside of string fields +(ignoring RFC 4180, Section 2, Bullet 5). Defaults to `True`. +
+`na_value` + +(Optional.) A tf.string scalar indicating a value that will +be treated as NA/NaN. +
+`select_cols` + +(Optional.) A sorted list of column indices to select from +the input data. If specified, only this subset of columns will be +parsed. Defaults to parsing all columns. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`element_spec`
+
+The type specification of an element of this dataset.
+
+```
+>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
+>>> dataset.element_spec
+TensorSpec(shape=(), dtype=tf.int32, name=None)
+```
+
+`output_classes` + +Returns the class of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_classes(dataset). +
+`output_shapes` + +Returns the shape of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_shapes(dataset). +
+`output_types` + +Returns the type of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_types(dataset). +
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

filter_with_legacy_function

+ +View source + + + +Filters this dataset according to `predicate`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.filter() + +Note: This is an escape hatch for existing uses of `filter` that do not work +with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `filter` as this method will be removed in V2. + + + + + + + + + + +
Args
+`predicate` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to a +scalar tf.bool tensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of `Dataset.from_generator()` uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +`Dataset.from_generator()`. The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to `Dataset.from_generator()` and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +`Dataset.from_generator()`. + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_sparse_tensor_slices

+ +View source + + + +Splits each rank-N tf.SparseTensor in this dataset row-wise. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.from_tensor_slices(). + + + + + + + + + + +
Args
+`sparse_tensor` + +A tf.SparseTensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of rank-(N-1) sparse tensors. +
+ + + +

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use `Dataset.interleave()` to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +`Dataset.from_tensor_slices(filenames)` instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
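+For illustration, a minimal sketch matching the example above (it assumes the
+`.py` files listed there actually exist on the local filesystem):
+
+```python
+import tensorflow as tf
+
+files = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
+for f in files:
+    print(f.numpy())  # e.g. b'/path/to/dir/b.py', b'/path/to/dir/c.py'
+```
+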

make_initializable_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_initializable_iterator(dataset)`. + +Note: The returned iterator will be in an uninitialized state, +and you must run the `iterator.initializer` operation before using it: + +```python +dataset = ... +iterator = dataset.make_initializable_iterator() +# ... +sess.run(iterator.initializer) +``` + + + + + + + + + + +
Args
+`shared_name` + +(Optional.) If non-empty, the returned iterator will be +shared under the given name across multiple sessions that share the same +devices (e.g. when using a remote server). +
+ + + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If eager execution is enabled. +
+ + + +

make_one_shot_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`. + +Note: The returned iterator will be initialized automatically. +A "one-shot" iterator does not currently support re-initialization. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + +
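+For illustration, a minimal TF1-style sketch (it assumes graph mode, which is
+why eager execution is disabled first):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+dataset = tf.compat.v1.data.Dataset.range(3)
+iterator = dataset.make_one_shot_iterator()
+next_element = iterator.get_next()
+
+with tf.compat.v1.Session() as sess:
+    while True:
+        try:
+            print(sess.run(next_element))  # 0, then 1, then 2
+        except tf.errors.OutOfRangeError:
+            break
+```
+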

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+ +`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

map_with_legacy_function

+ +View source + + + +Maps `map_func` across the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.map() + +Note: This is an escape hatch for existing uses of `map` that do not work +with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `map` as this method will be removed in V2. + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to +another nested structure of tensors. +
+ +`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
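As a rough migration sketch (assuming the mapped function already works when traced as a V2 `tf.function`), an existing call can usually be rewritten as a plain `map`:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)

# Legacy escape hatch, to be removed in V2:
# dataset = dataset.map_with_legacy_function(lambda x: x * 2)

# Preferred, V2-compatible equivalent:
dataset = dataset.map(lambda x: x * 2)
print(list(dataset.as_numpy_iterator()))  # [0, 2, 4, 6, 8]
```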

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +
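A small sketch of how `options()` reflects options attached earlier in the pipeline; the particular field used here, `experimental_deterministic`, is only an illustration:

```python
import tensorflow as tf

options = tf.data.Options()
options.experimental_deterministic = False

dataset = tf.data.Dataset.range(10).with_options(options)

# `options()` returns the merged options for this dataset and its inputs.
print(dataset.options().experimental_deterministic)  # False
```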

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
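As a sketch of the typical pattern, input pipelines often end with an autotuned prefetch rather than a fixed buffer size:

```python
import tensorflow as tf

# Let tf.data choose the prefetch buffer size dynamically.
dataset = (tf.data.Dataset.range(100)
           .batch(16)
           .prefetch(tf.data.experimental.AUTOTUNE))
```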

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+ +`*args` + +Follows the same semantics as Python's built-in `range`. +len(args) == 1 -> start = 0, stop = args[0], step = 1. +len(args) == 2 -> start = args[0], stop = args[1], step = 1. +len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. +
+ +`**kwargs` + +- output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64). +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+ +`reduce_func` + +A function that maps `(old_state, input_element)` to +`new_state`. It must take two arguments and return a new element. +The structure of `new_state` must match the structure of +`initial_state`. +
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+ +`count` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of times the dataset should be repeated. The default behavior (if +`count` is `None` or `-1`) is for the dataset to be repeated indefinitely. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
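When `count` is omitted the dataset repeats indefinitely, so it is usually bounded elsewhere in the pipeline, for example with `take`:

```python
import tensorflow as tf

# Repeat forever, then bound the pipeline with `take`.
dataset = tf.data.Dataset.from_tensor_slices([1, 2]).repeat().take(5)
print(list(dataset.as_numpy_iterator()))  # [1, 2, 1, 2, 1]
```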

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+ +`InvalidArgumentError` + +if `num_shards` or `index` are illegal values. + +Note: error checking is done on a best-effort basis, and errors aren't +guaranteed to be caught upon dataset creation. (e.g. providing a +placeholder tensor bypasses the early checking, and will instead result +in an error during a session.run call.) +
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+ +`Dataset` + +A `Dataset` of (nests of) windows -- finite datasets of flat +elements created from the (nests of) input elements. +
+ + + +

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+ +`options` + +A tf.data.Options that identifies the options to use. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+ +`ValueError` + +when an option is set more than once to a non-default value. +
+ + + +
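A small sketch of the merge behavior: options attached at different points combine as long as they do not set the same field to conflicting non-default values (the two option fields used here are illustrative):

```python
import tensorflow as tf

ds = tf.data.Dataset.range(5)

first = tf.data.Options()
first.experimental_deterministic = False
second = tf.data.Options()
second.experimental_slack = True

# Different non-default fields merge cleanly; setting the same field to
# conflicting non-default values would raise a ValueError instead.
ds = ds.with_options(first).with_options(second)
```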

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/RaggedTensorStructure.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/RaggedTensorStructure.md new file mode 100644 index 00000000000..268d89b6f4e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/RaggedTensorStructure.md @@ -0,0 +1,37 @@ +description: DEPRECATED FUNCTION + +
+ + +
+ +# tf.compat.v1.data.experimental.RaggedTensorStructure + + + + + + + + + +DEPRECATED FUNCTION + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.RaggedTensorSpec instead. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/RandomDataset.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/RandomDataset.md new file mode 100644 index 00000000000..3543b4a115a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/RandomDataset.md @@ -0,0 +1,2951 @@ +description: A Dataset of pseudorandom values. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.data.experimental.RandomDataset + + + + + + + + + +A `Dataset` of pseudorandom values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +`element_spec` + +The type specification of an element of this dataset. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset.element_spec +TensorSpec(shape=(), dtype=tf.int32, name=None) +``` +
+`output_classes` + +Returns the class of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_classes(dataset). +
+`output_shapes` + +Returns the shape of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_shapes(dataset). +
+`output_types` + +Returns the type of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_types(dataset). +
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

filter_with_legacy_function

+ +View source + + + +Filters this dataset according to `predicate`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.filter() + +Note: This is an escape hatch for existing uses of `filter` that do not work +with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `filter` as this method will be removed in V2. + + + + + + + + + + +
Args
+`predicate` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to a +scalar tf.bool tensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +
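As a rough migration sketch (assuming the predicate already returns a scalar tf.bool and works under V2 function tracing), existing calls can usually be rewritten with `filter`:

```python
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])

# Legacy escape hatch, V1 only:
# dataset = dataset.filter_with_legacy_function(lambda x: x < 3)

# Preferred, V2-compatible equivalent:
dataset = dataset.filter(lambda x: x < 3)
print(list(dataset.as_numpy_iterator()))  # [1, 2]
```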

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of `Dataset.from_generator()` uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +`Dataset.from_generator()`. The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to `Dataset.from_generator()` and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +`Dataset.from_generator()`. + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_sparse_tensor_slices

+ +View source + + + +Splits each rank-N tf.SparseTensor in this dataset row-wise. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.from_tensor_slices(). + + + + + + + + + + +
Args
+`sparse_tensor` + +A tf.SparseTensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of rank-(N-1) sparse tensors. +
+ + + +
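As a sketch of the replacement named in the deprecation notice, tf.data.Dataset.from_tensor_slices can slice a tf.SparseTensor row-wise directly in TF 2.x (the sparse tensor below is illustrative):

```python
import tensorflow as tf

st = tf.SparseTensor(indices=[[0, 0], [1, 2]],
                     values=[1, 2],
                     dense_shape=[2, 4])

# Each element of the dataset is a rank-1 sparse tensor (one row of `st`).
dataset = tf.data.Dataset.from_tensor_slices(st)
for row in dataset:
    print(row)
```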

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use `Dataset.interleave()` to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +`Dataset.from_tensor_slices(filenames)` instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
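A small runnable sketch of the example above; the glob pattern is the hypothetical path from the description, and `shuffle=False` is passed to get a deterministic order:

```python
import tensorflow as tf

# Hypothetical pattern from the example above; adjust to your own files.
files = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in files:
    print(f.numpy())
```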

make_initializable_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_initializable_iterator(dataset)`. + +Note: The returned iterator will be in an uninitialized state, +and you must run the `iterator.initializer` operation before using it: + +```python +dataset = ... +iterator = dataset.make_initializable_iterator() +# ... +sess.run(iterator.initializer) +``` + + + + + + + + + + +
Args
+`shared_name` + +(Optional.) If non-empty, the returned iterator will be +shared under the given name across multiple sessions that share the same +devices (e.g. when using a remote server). +
+ + + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If eager execution is enabled. +
+ + + +

make_one_shot_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`. + +Note: The returned iterator will be initialized automatically. +A "one-shot" iterator does not currently support re-initialization. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + +

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+ +`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

map_with_legacy_function

+ +View source + + + +Maps `map_func` across the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.map() + +Note: This is an escape hatch for existing uses of `map` that do not work +with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `map` as this method will be removed in V2. + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to +another nested structure of tensors. +
+ +`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
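As a migration sketch (added here, not part of the original reference; the doubling lambda is only an illustrative placeholder), the call site usually needs nothing more than the method name changed:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)

# Deprecated form (only available on the v1 Dataset classes):
# dataset = dataset.map_with_legacy_function(lambda x: x * 2)

# Recommended replacement: same call signature, traced as a V2 function.
dataset = dataset.map(lambda x: x * 2)
print(list(dataset.as_numpy_iterator()))  # [0, 2, 4, 6, 8]
```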

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +
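A small illustration (added here, not from the original text): the returned object simply reflects whatever options were attached upstream, for example via `with_options`.

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(10)

opts = tf.data.Options()
opts.experimental_deterministic = False
dataset = dataset.with_options(opts)

# `options()` returns the merged options of this dataset and its inputs.
print(dataset.options().experimental_deterministic)  # False
```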

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
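A typical use, sketched here for illustration, is to finish the pipeline with `prefetch` and let tf.data choose the buffer size via tf.data.experimental.AUTOTUNE:

```python
import tensorflow as tf

dataset = (tf.data.Dataset.range(100)
           .map(lambda x: x * 2)
           .batch(16)
           # Overlap producing the next batch with consuming the current one;
           # AUTOTUNE picks the buffer size dynamically.
           .prefetch(tf.data.experimental.AUTOTUNE))

for batch in dataset.take(2):
    print(batch.shape)  # (16,) for the first two batches
```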

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args`
+
+Follows the same semantics as Python's built-in `range`.
+len(args) == 1 -> start = 0, stop = args[0], step = 1.
+len(args) == 2 -> start = args[0], stop = args[1], step = 1.
+len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
+
+`**kwargs`
+
+- output_type: The dtype of the elements in the resulting dataset.
+(Optional, default: tf.int64).
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func`
+
+A function that maps `(old_state, input_element)` to
+`new_state`. It must take two arguments and return a new state.
+The structure of `new_state` must match the structure of
+`initial_state`.
+
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +
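Because `new_state` must keep the structure of `initial_state`, the state can itself be a nested structure. A brief sketch (added for illustration) that computes a count and a sum in a single pass:

```python
import numpy as np
import tensorflow as tf

dataset = tf.data.Dataset.range(5)  # 0, 1, 2, 3, 4

# The state is a (count, total) tuple; reduce_func returns the same structure.
count, total = dataset.reduce(
    initial_state=(np.int64(0), np.int64(0)),
    reduce_func=lambda state, x: (state[0] + 1, state[1] + x))
print(count.numpy(), total.numpy())  # 5 10
```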

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count`
+
+(Optional.) A tf.int64 scalar tf.Tensor, representing the
+number of times the dataset should be repeated. The default behavior (if
+`count` is `None` or `-1`) is for the dataset to be repeated indefinitely.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
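When `count` is omitted the dataset repeats indefinitely, so the stream is usually bounded again further down the pipeline; a short sketch added for illustration:

```python
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])

# With no `count`, repeat() cycles forever; `take` bounds the stream again.
bounded = dataset.repeat().take(7)
print(list(bounded.as_numpy_iterator()))  # [1, 2, 3, 1, 2, 3, 1]
```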

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError`
+
+if `num_shards` or `index` are illegal values.
+
+Note: error checking is done on a best-effort basis, and errors aren't
+guaranteed to be caught upon dataset creation. (e.g. passing in a
+placeholder tensor bypasses the early checking, and will instead result
+in an error during a session.run call.)
+
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
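As an added sketch of the points above: a buffer at least as large as the dataset gives a full shuffle, and fixing `seed` together with `reshuffle_each_iteration=False` makes every pass reproduce the same order (the concrete order itself depends on the implementation and is not specified here):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(10)

# Buffer covers all 10 elements, so this is a full (perfect) shuffle.
dataset = dataset.shuffle(buffer_size=10, seed=42,
                          reshuffle_each_iteration=False)

print(list(dataset.as_numpy_iterator()))
print(list(dataset.as_numpy_iterator()))  # same order as the first pass
```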

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of (nests of) windows -- a finite datasets of flat +elements created from the (nests of) input elements. +
+ + + +
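Each window is itself a small `Dataset`; a common follow-up step, sketched here for illustration, is to flatten the windows back into dense tensors with `flat_map` and `batch`:

```python
import tensorflow as tf

window_size = 3
dataset = tf.data.Dataset.range(7).window(window_size, shift=1,
                                          drop_remainder=True)

# Batching inside `flat_map` turns every window into one dense tensor.
dataset = dataset.flat_map(lambda window: window.batch(window_size))
for element in dataset:
    print(element.numpy())  # [0 1 2], [1 2 3], [2 3 4], [3 4 5], [4 5 6]
```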

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options`
+
+A tf.data.Options that identifies the options to use.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError`
+
+when an option is set more than once to a non-default value.
+
+ + + +

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/SparseTensorStructure.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/SparseTensorStructure.md new file mode 100644 index 00000000000..a60df5b6bc5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/SparseTensorStructure.md @@ -0,0 +1,37 @@ +description: DEPRECATED FUNCTION + +
+ + +
+ +# tf.compat.v1.data.experimental.SparseTensorStructure + + + + + + + + + +DEPRECATED FUNCTION + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.SparseTensorSpec instead. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/SqlDataset.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/SqlDataset.md new file mode 100644 index 00000000000..f0b79bd2faf --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/SqlDataset.md @@ -0,0 +1,2992 @@ +description: A Dataset consisting of the results from a SQL query. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.data.experimental.SqlDataset + + + + + + + + + +A `Dataset` consisting of the results from a SQL query. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`driver_name` + +A 0-D tf.string tensor containing the database type. +Currently, the only supported value is 'sqlite'. +
+`data_source_name` + +A 0-D tf.string tensor containing a connection string +to connect to the database. +
+`query` + +A 0-D tf.string tensor containing the SQL query to execute. +
+`output_types` + +A tuple of tf.DType objects representing the types of the +columns returned by `query`. +
+ + + + + + + + + + + + + + + + + + + + + + + +
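Given the constructor arguments above, a minimal construction sketch (the database path, table, and column types below are illustrative assumptions, not values from the original documentation):

```python
import tensorflow as tf

# Hypothetical SQLite database containing a `users(name TEXT, age INTEGER)` table.
dataset = tf.compat.v1.data.experimental.SqlDataset(
    driver_name="sqlite",
    data_source_name="/path/to/users.sqlite3",
    query="SELECT name, age FROM users",
    output_types=(tf.string, tf.int32))

for name, age in dataset.take(2):
    print(name.numpy(), age.numpy())
```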
+`element_spec`
+
+The type specification of an element of this dataset.
+
+```
+>>> tf.data.Dataset.from_tensor_slices([1, 2, 3]).element_spec
+TensorSpec(shape=(), dtype=tf.int32, name=None)
+```
+
+`output_classes` + +Returns the class of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_classes(dataset). +
+`output_shapes` + +Returns the shape of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_shapes(dataset). +
+`output_types` + +Returns the type of each component of an element of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.data.get_output_types(dataset). +
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+
+View source
+
+
+
+Creates a `Dataset` by concatenating the given dataset with this dataset.
+
+```
+>>> a = tf.data.Dataset.range(1, 4)  # ==> [ 1, 2, 3 ]
+>>> b = tf.data.Dataset.range(4, 8)  # ==> [ 4, 5, 6, 7 ]
+>>> ds = a.concatenate(b)
+>>> list(ds.as_numpy_iterator())
+[1, 2, 3, 4, 5, 6, 7]
+>>> # The input dataset and dataset to be concatenated should have the same
+>>> # nested structures and output types.
+>>> c = tf.data.Dataset.zip((a, b))
+>>> a.concatenate(c)
+Traceback (most recent call last):
+TypeError: Two datasets to concatenate have different types
+<dtype: 'int64'> and (tf.int64, tf.int64)
+>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
+>>> a.concatenate(d)
+Traceback (most recent call last):
+TypeError: Two datasets to concatenate have different types
+<dtype: 'int64'> and <dtype: 'string'>
+```
+
+
+
+
+
+
+
+
+
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

filter_with_legacy_function

+
+View source
+
+
+
+Filters this dataset according to `predicate`. (deprecated)
+
+Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
+Instructions for updating:
+Use `tf.data.Dataset.filter()`
+
+Note: This is an escape hatch for existing uses of `filter` that do not work
+with V2 functions. New uses are strongly discouraged and existing uses
+should migrate to `filter` as this method will be removed in V2.
+
+
+
+
+
+
+
+
+
+
Args
+`predicate` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to a +scalar tf.bool tensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of `Dataset.from_generator()` uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +`Dataset.from_generator()`. The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to `Dataset.from_generator()` and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +`Dataset.from_generator()`. + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_sparse_tensor_slices

+ +View source + + + +Splits each rank-N tf.SparseTensor in this dataset row-wise. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.from_tensor_slices(). + + + + + + + + + + +
Args
+`sparse_tensor` + +A tf.SparseTensor. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of rank-(N-1) sparse tensors. +
+ + + +
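As a migration sketch (added here; it assumes a TensorFlow 2.x version whose `from_tensor_slices` accepts sparse tensors), the recommended replacement also slices a tf.SparseTensor row-wise:

```python
import tensorflow as tf

st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[1, 2],
                            dense_shape=[2, 3])

# Recommended replacement for from_sparse_tensor_slices.
dataset = tf.data.Dataset.from_tensor_slices(st)
for row in dataset:
    print(row)  # each element is a rank-1 tf.sparse.SparseTensor
```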

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use `Dataset.interleave()` to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +`Dataset.from_tensor_slices(filenames)` instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
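The prose example above, written out as code (the paths are the illustrative ones from the example; `shuffle=False` keeps the deterministic order mentioned in the note):

```python
import tensorflow as tf

# Matches /path/to/dir/b.py and /path/to/dir/c.py from the example above.
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for filename in dataset:
    print(filename.numpy())
```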

make_initializable_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_initializable_iterator(dataset)`. + +Note: The returned iterator will be in an uninitialized state, +and you must run the `iterator.initializer` operation before using it: + +```python +dataset = ... +iterator = dataset.make_initializable_iterator() +# ... +sess.run(iterator.initializer) +``` + + + + + + + + + + +
Args
+`shared_name` + +(Optional.) If non-empty, the returned iterator will be +shared under the given name across multiple sessions that share the same +devices (e.g. when using a remote server). +
+ + + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If eager execution is enabled. +
+ + + +
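To round out the snippet above, a sketch of consuming the iterator inside a tf.compat.v1.Session (graph mode is assumed; the range dataset is only a stand-in):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

dataset = tf.compat.v1.data.Dataset.range(3)
iterator = tf.compat.v1.data.make_initializable_iterator(dataset)
next_element = iterator.get_next()

with tf.compat.v1.Session() as sess:
    # The iterator must be initialized before the first get_next() call.
    sess.run(iterator.initializer)
    while True:
        try:
            print(sess.run(next_element))  # 0, 1, 2
        except tf.errors.OutOfRangeError:
            break
```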

make_one_shot_iterator

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`. + +Note: The returned iterator will be initialized automatically. +A "one-shot" iterator does not currently support re-initialization. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + +
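By contrast, a one-shot iterator needs no explicit initialization; a brief sketch under the same graph-mode assumption:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

dataset = tf.compat.v1.data.Dataset.range(3)
iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
next_element = iterator.get_next()

with tf.compat.v1.Session() as sess:
    for _ in range(3):
        print(sess.run(next_element))  # 0, 1, 2
```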

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls`
+
+(Optional.) A tf.int32 scalar tf.Tensor,
+representing the number of elements to process asynchronously in parallel.
+If not specified, elements will be processed sequentially. If the value
+tf.data.experimental.AUTOTUNE is used, then the number of parallel
+calls is set dynamically based on available CPU.
+
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

map_with_legacy_function

+
+View source
+
+
+
+Maps `map_func` across the elements of this dataset. (deprecated)
+
+Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
+Instructions for updating:
+Use `tf.data.Dataset.map()`
+
+Note: This is an escape hatch for existing uses of `map` that do not work
+with V2 functions. New uses are strongly discouraged and existing uses
+should migrate to `map` as this method will be removed in V2.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`map_func` + +A function mapping a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to +another nested structure of tensors. +
+`num_parallel_calls`
+
+(Optional.) A tf.int32 scalar tf.Tensor,
+representing the number of elements to process asynchronously in parallel.
+If not specified, elements will be processed sequentially. If the value
+tf.data.experimental.AUTOTUNE is used, then the number of parallel
+calls is set dynamically based on available CPU.
+
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +
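+
+The `drop_remainder` argument is not exercised by the examples above; a
+minimal sketch (illustrative only, not part of the generated reference):
+
+```python
+import tensorflow as tf
+
+A = (tf.data.Dataset
+     .range(1, 6, output_type=tf.int32)
+     .map(lambda x: tf.fill([x], x)))
+# Five ragged elements batched in twos: the final, smaller batch is dropped.
+B = A.padded_batch(2, drop_remainder=True)
+print([element.shape for element in B])
+# [TensorShape([2, 2]), TensorShape([2, 4])]
+```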

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
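+
+A short sketch (not from the generated reference) of the common pattern of
+ending a pipeline with an autotuned prefetch, using the
+tf.data.experimental.AUTOTUNE value referenced elsewhere on this page:
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(100)
+dataset = dataset.map(lambda x: x * 2)
+dataset = dataset.batch(10)
+# Let the runtime choose how many batches to buffer ahead of consumption.
+dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
+print(next(iter(dataset)).numpy())  # first batch: 0, 2, 4, ..., 18
+```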

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args` + +follows the same semantics as Python's built-in `range`. +len(args) == 1 -> start = 0, stop = args[0], step = 1. +len(args) == 2 -> start = args[0], stop = args[1], step = 1. +len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. +
+`**kwargs` + +- output_type: The dtype of the elements in the resulting dataset. (Optional, default: tf.int64). +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func` + +A function that maps `(old_state, input_element)` to +`new_state`. It must take two arguments and return a new element. +The structure of `new_state` must match the structure of +`initial_state`. +
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +
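+
+Because `initial_state` can be any element structure, `reduce` can track
+several quantities at once. A minimal sketch (not part of the generated
+reference) computing a running count and sum together:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(1, 6)  # 1, 2, 3, 4, 5
+count, total = dataset.reduce(
+    (np.int64(0), np.int64(0)),
+    lambda state, x: (state[0] + 1, state[1] + x))
+print(count.numpy(), total.numpy())  # 5 15
+```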

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of times the dataset should be repeated. The default behavior (if +`count` is `None` or `-1`) is for the dataset to be repeated indefinitely. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
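+
+When `count` is omitted the repetition is unbounded, so such a dataset is
+normally bounded again with `take`; a small illustrative sketch (not from
+the generated reference):
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
+# No `count`: repeat indefinitely, then cap the stream with `take`.
+dataset = dataset.repeat().take(7)
+print(list(dataset.as_numpy_iterator()))  # [1, 2, 3, 1, 2, 3, 1]
+```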

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +if `num_shards` or `index` are illegal values. + +Note: error checking is done on a best-effort basis, and errors aren't +guaranteed to be caught upon dataset creation. (e.g. providing a +placeholder tensor bypasses the early checking, and will instead result +in an error during a session.run call.) +
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
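+
+A short sketch (not part of the generated reference) of how `seed` and
+`reshuffle_each_iteration` interact: with a fixed seed and reshuffling
+disabled, every pass over the dataset uses the same order.
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(5).shuffle(
+    buffer_size=5, seed=42, reshuffle_each_iteration=False)
+first_pass = list(dataset.as_numpy_iterator())
+second_pass = list(dataset.as_numpy_iterator())
+# Both passes yield the same permutation of [0, 1, 2, 3, 4].
+print(first_pass == second_pass)  # True
+```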

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of (nests of) windows -- a finite datasets of flat +elements created from the (nests of) input elements. +
+ + + +
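+
+Each window is itself a `Dataset`, so a common follow-up (sketched here;
+not part of the generated reference) is to flatten the windows back into
+dense tensors with `flat_map`:
+
+```python
+import tensorflow as tf
+
+window_size = 3
+dataset = tf.data.Dataset.range(7).window(
+    window_size, shift=1, drop_remainder=True)
+# Turn each window (a nested dataset) into a single batched tensor.
+dataset = dataset.flat_map(lambda window: window.batch(window_size))
+print([element.numpy().tolist() for element in dataset])
+# [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]
+```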

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options` + +A tf.data.Options that identifies the options to use. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +when an option is set more than once to a non-default value +
+ + + +

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/StatsAggregator.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/StatsAggregator.md new file mode 100644 index 00000000000..68715b88703 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/StatsAggregator.md @@ -0,0 +1,106 @@ +description: A stateful resource that aggregates statistics from one or more iterators. + +
+ + + + +
+ +# tf.compat.v1.data.experimental.StatsAggregator + + + + + + + + + +A stateful resource that aggregates statistics from one or more iterators. + + + + + + + +To record statistics, use one of the custom transformation functions defined +in this module when defining your tf.data.Dataset. All statistics will be +aggregated by the `StatsAggregator` that is associated with a particular +iterator (see below). For example, to record the latency of producing each +element by iterating over a dataset: + +```python +dataset = ... +dataset = dataset.apply(tf.data.experimental.latency_stats("total_bytes")) +``` + +To associate a `StatsAggregator` with a tf.data.Dataset object, use +the following pattern: + +```python +aggregator = tf.data.experimental.StatsAggregator() +dataset = ... + +# Apply `StatsOptions` to associate `dataset` with `aggregator`. +options = tf.data.Options() +options.experimental_stats.aggregator = aggregator +dataset = dataset.with_options(options) +``` + +To get a protocol buffer summary of the currently aggregated statistics, +use the `StatsAggregator.get_summary()` tensor. The easiest way to do this +is to add the returned tensor to the `tf.GraphKeys.SUMMARIES` collection, +so that the summaries will be included with any existing summaries. + +```python +aggregator = tf.data.experimental.StatsAggregator() +# ... +stats_summary = aggregator.get_summary() +tf.compat.v1.add_to_collection(tf.GraphKeys.SUMMARIES, stats_summary) +``` + +Note: This interface is experimental and expected to change. In particular, +we expect to add other implementations of `StatsAggregator` that provide +different ways of exporting statistics, and add more types of statistics. + +## Methods + +

get_summary

+ +View source + + + +Returns a string tf.Tensor that summarizes the aggregated statistics. + +The returned tensor will contain a serialized tf.compat.v1.summary.Summary +protocol +buffer, which can be used with the standard TensorBoard logging facilities. + + + + + + + + + +
Returns
+A scalar string tf.Tensor that summarizes the aggregated statistics. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/TensorArrayStructure.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/TensorArrayStructure.md new file mode 100644 index 00000000000..24ec6f82d0e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/TensorArrayStructure.md @@ -0,0 +1,37 @@ +description: DEPRECATED FUNCTION + +
+ + +
+ +# tf.compat.v1.data.experimental.TensorArrayStructure + + + + + + + + + +DEPRECATED FUNCTION + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.TensorArraySpec instead. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/TensorStructure.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/TensorStructure.md new file mode 100644 index 00000000000..9177b8c2831 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/TensorStructure.md @@ -0,0 +1,37 @@ +description: DEPRECATED FUNCTION + +
+ + +
+ +# tf.compat.v1.data.experimental.TensorStructure + + + + + + + + + +DEPRECATED FUNCTION + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.TensorSpec instead. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/choose_from_datasets.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/choose_from_datasets.md new file mode 100644 index 00000000000..ff9b4ffda3c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/choose_from_datasets.md @@ -0,0 +1,109 @@ +description: Creates a dataset that deterministically chooses elements from datasets. + +
+ + +
+ +# tf.compat.v1.data.experimental.choose_from_datasets + + + + + + + + + +Creates a dataset that deterministically chooses elements from `datasets`. + + + + + + + +For example, given the following datasets: + +```python +datasets = [tf.data.Dataset.from_tensors("foo").repeat(), + tf.data.Dataset.from_tensors("bar").repeat(), + tf.data.Dataset.from_tensors("baz").repeat()] + +# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`. +choice_dataset = tf.data.Dataset.range(3).repeat(3) + +result = tf.data.experimental.choose_from_datasets(datasets, choice_dataset) +``` + +The elements of `result` will be: + +``` +"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz" +``` + + + + + + + + + + + + + +
+`datasets` + +A list of tf.data.Dataset objects with compatible structure. +
+`choice_dataset` + +A tf.data.Dataset of scalar tf.int64 tensors between +`0` and `len(datasets) - 1`. +
+ + + + + + + + + + + +
+A dataset that interleaves elements from `datasets` according to the values +of `choice_dataset`. +
+ + + + + + + + + + + + +
+`TypeError` + +If the `datasets` or `choice_dataset` arguments have the wrong +type. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/make_batched_features_dataset.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/make_batched_features_dataset.md new file mode 100644 index 00000000000..ad294fe514b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/make_batched_features_dataset.md @@ -0,0 +1,256 @@ +description: Returns a Dataset of feature dictionaries from Example protos. + +
+ + +
+ +# tf.compat.v1.data.experimental.make_batched_features_dataset + + + + + + + + + +Returns a `Dataset` of feature dictionaries from `Example` protos. + + + + + + + +If label_key argument is provided, returns a `Dataset` of tuple +comprising of feature dictionaries and label. + +#### Example: + + + +``` +serialized_examples = [ + features { + feature { key: "age" value { int64_list { value: [ 0 ] } } } + feature { key: "gender" value { bytes_list { value: [ "f" ] } } } + feature { key: "kws" value { bytes_list { value: [ "code", "art" ] } } } + }, + features { + feature { key: "age" value { int64_list { value: [] } } } + feature { key: "gender" value { bytes_list { value: [ "f" ] } } } + feature { key: "kws" value { bytes_list { value: [ "sports" ] } } } + } +] +``` + +#### We can use arguments: + + + +``` +features: { + "age": FixedLenFeature([], dtype=tf.int64, default_value=-1), + "gender": FixedLenFeature([], dtype=tf.string), + "kws": VarLenFeature(dtype=tf.string), +} +``` + +And the expected output is: + +```python +{ + "age": [[0], [-1]], + "gender": [["f"], ["f"]], + "kws": SparseTensor( + indices=[[0, 0], [0, 1], [1, 0]], + values=["code", "art", "sports"] + dense_shape=[2, 2]), +} +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`file_pattern` + +List of files or patterns of file paths containing +`Example` records. See tf.io.gfile.glob for pattern rules. +
+`batch_size` + +An int representing the number of records to combine +in a single batch. +
+`features` + +A `dict` mapping feature keys to `FixedLenFeature` or +`VarLenFeature` values. See tf.io.parse_example. +
+`reader` + +A function or class that can be +called with a `filenames` tensor and (optional) `reader_args` and returns +a `Dataset` of `Example` tensors. Defaults to tf.data.TFRecordDataset. +
+`label_key` + +(Optional) A string corresponding to the key under which labels are stored in +`tf.Examples`. If provided, it must be one of the `features` keys, +otherwise results in `ValueError`. +
+`reader_args` + +Additional arguments to pass to the reader class. +
+`num_epochs` + +Integer specifying the number of times to read through the +dataset. If None, cycles through the dataset forever. Defaults to `None`. +
+`shuffle` + +A boolean, indicates whether the input should be shuffled. Defaults +to `True`. +
+`shuffle_buffer_size` + +Buffer size of the ShuffleDataset. A large capacity +ensures better shuffling but would increase memory usage and startup time. +
+`shuffle_seed` + +Randomization seed to use for shuffling. +
+`prefetch_buffer_size` + +Number of feature batches to prefetch in order to +improve performance. Recommended value is the number of batches consumed +per training step. Defaults to auto-tune. +
+`reader_num_threads` + +Number of threads used to read `Example` records. If >1, +the results will be interleaved. Defaults to `1`. +
+`parser_num_threads` + +Number of threads to use for parsing `Example` tensors +into a dictionary of `Feature` tensors. Defaults to `2`. +
+`sloppy_ordering` + +If `True`, reading performance will be improved at +the cost of non-deterministic ordering. If `False`, the order of elements +produced is deterministic prior to shuffling (elements are still +randomized if `shuffle=True`. Note that if the seed is set, then order +of elements after shuffling is deterministic). Defaults to `False`. +
+`drop_final_batch` + +If `True`, and the batch size does not evenly divide the +input dataset size, the final smaller batch will be dropped. Defaults to +`False`. +
+ + + + + + + + + + + +
+A dataset of `dict` elements, (or a tuple of `dict` elements and label). +Each `dict` maps feature keys to `Tensor` or `SparseTensor` objects. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `reader` is of the wrong type. +
+`ValueError` + +If `label_key` is not one of the `features` keys. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/make_csv_dataset.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/make_csv_dataset.md new file mode 100644 index 00000000000..6d4371acdc9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/make_csv_dataset.md @@ -0,0 +1,275 @@ +description: Reads CSV files into a dataset. + +
+ + +
+ +# tf.compat.v1.data.experimental.make_csv_dataset + + + + + + + + + +Reads CSV files into a dataset. + + + + + + + +Reads CSV files into a dataset, where each element is a (features, labels) +tuple that corresponds to a batch of CSV rows. The features dictionary +maps feature column names to `Tensor`s containing the corresponding +feature data, and labels is a `Tensor` containing the batch's label data. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`file_pattern` + +List of files or patterns of file paths containing CSV +records. See tf.io.gfile.glob for pattern rules. +
+`batch_size` + +An int representing the number of records to combine +in a single batch. +
+`column_names` + +An optional list of strings that corresponds to the CSV +columns, in order. One per column of the input record. If this is not +provided, infers the column names from the first row of the records. +These names will be the keys of the features dict of each dataset element. +
+`column_defaults` + +An optional list of default values for the CSV fields. One +item per selected column of the input record. Each item in the list is +either a valid CSV dtype (float32, float64, int32, int64, or string), or a +`Tensor` with one of the aforementioned types. The tensor can either be +a scalar default value (if the column is optional), or an empty tensor (if +the column is required). If a dtype is provided instead of a tensor, the +column is also treated as required. If this list is not provided, tries +to infer types based on reading the first num_rows_for_inference rows of +files specified, and assumes all columns are optional, defaulting to `0` +for numeric values and `""` for string values. If both this and +`select_columns` are specified, these must have the same lengths, and +`column_defaults` is assumed to be sorted in order of increasing column +index. +
+`label_name` + +An optional string corresponding to the label column. If +provided, the data for this column is returned as a separate `Tensor` from +the features dictionary, so that the dataset complies with the format +expected by a `tf.Estimator.train` or `tf.Estimator.evaluate` input +function. +
+`select_columns` + +An optional list of integer indices or string column +names, that specifies a subset of columns of CSV data to select. If +column names are provided, these must correspond to names provided in +`column_names` or inferred from the file header lines. When this argument +is specified, only a subset of CSV columns will be parsed and returned, +corresponding to the columns specified. Using this results in faster +parsing and lower memory usage. If both this and `column_defaults` are +specified, these must have the same lengths, and `column_defaults` is +assumed to be sorted in order of increasing column index. +
+`field_delim` + +An optional `string`. Defaults to `","`. Char delimiter to +separate fields in a record. +
+`use_quote_delim` + +An optional bool. Defaults to `True`. If false, treats +double quotation marks as regular characters inside of the string fields. +
+`na_value` + +Additional string to recognize as NA/NaN. +
+`header` + +A bool that indicates whether the first rows of provided CSV files +correspond to header lines with column names, and should not be included +in the data. +
+`num_epochs` + +An int specifying the number of times this dataset is repeated. +If None, cycles through the dataset forever. +
+`shuffle` + +A bool that indicates whether the input should be shuffled. +
+`shuffle_buffer_size` + +Buffer size to use for shuffling. A large buffer size +ensures better shuffling, but increases memory usage and startup time. +
+`shuffle_seed` + +Randomization seed to use for shuffling. +
+`prefetch_buffer_size` + +An int specifying the number of feature +batches to prefetch for performance improvement. Recommended value is the +number of batches consumed per training step. Defaults to auto-tune. +
+`num_parallel_reads` + +Number of threads used to read CSV records from files. +If >1, the results will be interleaved. Defaults to `1`. +
+`sloppy` + +If `True`, reading performance will be improved at +the cost of non-deterministic ordering. If `False`, the order of elements +produced is deterministic prior to shuffling (elements are still +randomized if `shuffle=True`. Note that if the seed is set, then order +of elements after shuffling is deterministic). Defaults to `False`. +
+`num_rows_for_inference` + +Number of rows of a file to use for type inference +if record_defaults is not provided. If None, reads all the rows of all +the files. Defaults to 100. +
+`compression_type` + +(Optional.) A tf.string scalar evaluating to one of +`""` (no compression), `"ZLIB"`, or `"GZIP"`. Defaults to no compression. +
+`ignore_errors` + +(Optional.) If `True`, ignores errors with CSV file parsing, +such as malformed data or empty lines, and moves on to the next valid +CSV record. Otherwise, the dataset raises an error and stops processing +when encountering any invalid records. Defaults to `False`. +
+ + + + + + + + + + + +
+A dataset, where each element is a (features, labels) tuple that corresponds +to a batch of `batch_size` CSV rows. The features dictionary maps feature +column names to `Tensor`s containing the corresponding column data, and +labels is a `Tensor` containing the column data for the label column +specified by `label_name`. +
+ + + + + + + + + + + + +
+`ValueError` + +If any of the arguments is malformed. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/map_and_batch_with_legacy_function.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/map_and_batch_with_legacy_function.md new file mode 100644 index 00000000000..a0b74a8ee77 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/map_and_batch_with_legacy_function.md @@ -0,0 +1,130 @@ +description: Fused implementation of map and batch. (deprecated) + +
+ + +
+ +# tf.compat.v1.data.experimental.map_and_batch_with_legacy_function + + + + + + + + + +Fused implementation of `map` and `batch`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.experimental.map_and_batch() + +NOTE: This is an escape hatch for existing uses of `map_and_batch` that do not +work with V2 functions. New uses are strongly discouraged and existing uses +should migrate to `map_and_batch` as this method will not be removed in V2. + + + + + + + + + + + + + + + + + + + + + + +
+`map_func` + +A function mapping a nested structure of tensors to another +nested structure of tensors. +
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`num_parallel_batches` + +(Optional.) A tf.int64 scalar tf.Tensor, +representing the number of batches to create in parallel. On one hand, +higher values can help mitigate the effect of stragglers. On the other +hand, higher values can increase contention if CPU is scarce. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in case its size is smaller than +desired; the default behavior is not to drop the smaller batch. +
+`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process in parallel. If not +specified, `batch_size * num_parallel_batches` elements will be processed +in parallel. If the value tf.data.experimental.AUTOTUNE is used, then +the number of parallel calls is set dynamically based on available CPU. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ + + + + + + + + + + + +
+`ValueError` + +If both `num_parallel_batches` and `num_parallel_calls` are +specified. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data/experimental/sample_from_datasets.md b/site/en/api_docs/python/tf/compat/v1/data/experimental/sample_from_datasets.md new file mode 100644 index 00000000000..0a5610d1d62 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/experimental/sample_from_datasets.md @@ -0,0 +1,110 @@ +description: Samples elements at random from the datasets in datasets. + +
+ + +
+ +# tf.compat.v1.data.experimental.sample_from_datasets + + + + + + + + + +Samples elements at random from the datasets in `datasets`. + + + + + + + + + + + + + + + + + + + + + + + +
+`datasets` + +A list of tf.data.Dataset objects with compatible structure. +
+`weights` + +(Optional.) A list of `len(datasets)` floating-point values where +`weights[i]` represents the probability with which an element should be +sampled from `datasets[i]`, or a tf.data.Dataset object where each +element is such a list. Defaults to a uniform distribution across +`datasets`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +random seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + +
+A dataset that interleaves elements from `datasets` at random, according to +`weights` if provided, otherwise with uniform probability. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If the `datasets` or `weights` arguments have the wrong type. +
+`ValueError` + +If the `weights` argument is specified and does not match the +length of the `datasets` element. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data/get_output_classes.md b/site/en/api_docs/python/tf/compat/v1/data/get_output_classes.md new file mode 100644 index 00000000000..036a1547616 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/get_output_classes.md @@ -0,0 +1,68 @@ +description: Returns the output classes of a Dataset or Iterator elements. + +
+ + +
+ +# tf.compat.v1.data.get_output_classes + + + + + + + + + +Returns the output classes of a `Dataset` or `Iterator` elements. + + + + + + + +This utility method replaces the deprecated-in-V2 +`tf.compat.v1.Dataset.output_classes` property. + + + + + + + + + + +
+`dataset_or_iterator` + +A tf.data.Dataset or `tf.data.Iterator`. +
+ + + + + + + + + + + +
+A nested structure of Python `type` objects matching the structure of the +dataset / iterator elements and specifying the class of the individual +components. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data/get_output_shapes.md b/site/en/api_docs/python/tf/compat/v1/data/get_output_shapes.md new file mode 100644 index 00000000000..bfea599f766 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/get_output_shapes.md @@ -0,0 +1,68 @@ +description: Returns the output shapes of a Dataset or Iterator elements. + +
+ + +
+ +# tf.compat.v1.data.get_output_shapes + + + + + + + + + +Returns the output shapes of a `Dataset` or `Iterator` elements. + + + + + + + +This utility method replaces the deprecated-in-V2 +`tf.compat.v1.Dataset.output_shapes` property. + + + + + + + + + + +
+`dataset_or_iterator` + +A tf.data.Dataset or `tf.data.Iterator`. +
+ + + + + + + + + + + +
+A nested structure of tf.TensorShape objects matching the structure of +the dataset / iterator elements and specifying the shape of the individual +components. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data/get_output_types.md b/site/en/api_docs/python/tf/compat/v1/data/get_output_types.md new file mode 100644 index 00000000000..d3b983d699e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/get_output_types.md @@ -0,0 +1,68 @@ +description: Returns the output types of the elements of a Dataset or Iterator. + +
+ + +
+ +# tf.compat.v1.data.get_output_types + + + + + + + + + +Returns the output types of the elements of a `Dataset` or `Iterator`. + + + + + + + +This utility method replaces the deprecated-in-V2 +`tf.compat.v1.Dataset.output_types` property. + + + + + + + + + + +
+`dataset_or_iterator` + +A tf.data.Dataset or `tf.data.Iterator`. +
+ + + + + + + + + + + +
+A nested structure of tf.DType objects matching the structure of +dataset / iterator elements and specifying the type of the individual +components. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data/make_initializable_iterator.md b/site/en/api_docs/python/tf/compat/v1/data/make_initializable_iterator.md new file mode 100644 index 00000000000..effb830195e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/make_initializable_iterator.md @@ -0,0 +1,99 @@ +description: Creates a tf.compat.v1.data.Iterator for enumerating the elements of a dataset. + +
+ + +
+ +# tf.compat.v1.data.make_initializable_iterator + + + + + + + + + +Creates a tf.compat.v1.data.Iterator for enumerating the elements of a dataset. + + + + + + + +Note: The returned iterator will be in an uninitialized state, +and you must run the `iterator.initializer` operation before using it: + +```python +dataset = ... +iterator = tf.compat.v1.data.make_initializable_iterator(dataset) +# ... +sess.run(iterator.initializer) +``` + + + + + + + + + + + + + +
+`dataset` + +A tf.data.Dataset. +
+`shared_name` + +(Optional.) If non-empty, the returned iterator will be shared +under the given name across multiple sessions that share the same devices +(e.g. when using a remote server). +
+ + + + + + + + + + + +
+A tf.compat.v1.data.Iterator over the elements of `dataset`. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/data/make_one_shot_iterator.md b/site/en/api_docs/python/tf/compat/v1/data/make_one_shot_iterator.md new file mode 100644 index 00000000000..f104aed6e29 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/data/make_one_shot_iterator.md @@ -0,0 +1,66 @@ +description: Creates a tf.compat.v1.data.Iterator for enumerating dataset elements. + +
+ + +
+ +# tf.compat.v1.data.make_one_shot_iterator + + + + + + + + + +Creates a tf.compat.v1.data.Iterator for enumerating dataset elements. + + + + + + + +Note: The returned iterator will be initialized automatically. +A "one-shot" iterator does not support re-initialization. + + + + + + + + + + +
+`dataset` + +A tf.data.Dataset. +
+ + + + + + + + + + + +
+A tf.compat.v1.data.Iterator over the elements of this dataset. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/debugging.md b/site/en/api_docs/python/tf/compat/v1/debugging.md new file mode 100644 index 00000000000..1fdac5f0173 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/debugging.md @@ -0,0 +1,93 @@ +description: Public API for tf.debugging namespace. + +
+ + +
+ +# Module: tf.compat.v1.debugging + + + + + + + + + +Public API for tf.debugging namespace. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/debugging/experimental.md) module: Public API for tf.debugging.experimental namespace. + +## Functions + +[`Assert(...)`](../../../tf/debugging/Assert.md): Asserts that the given condition is true. + +[`assert_all_finite(...)`](../../../tf/compat/v1/verify_tensor_all_finite.md): Assert that the tensor does not contain any NaN's or Inf's. + +[`assert_equal(...)`](../../../tf/compat/v1/assert_equal.md): Assert the condition `x == y` holds element-wise. + +[`assert_greater(...)`](../../../tf/compat/v1/assert_greater.md): Assert the condition `x > y` holds element-wise. + +[`assert_greater_equal(...)`](../../../tf/compat/v1/assert_greater_equal.md): Assert the condition `x >= y` holds element-wise. + +[`assert_integer(...)`](../../../tf/compat/v1/assert_integer.md): Assert that `x` is of integer dtype. + +[`assert_less(...)`](../../../tf/compat/v1/assert_less.md): Assert the condition `x < y` holds element-wise. + +[`assert_less_equal(...)`](../../../tf/compat/v1/assert_less_equal.md): Assert the condition `x <= y` holds element-wise. + +[`assert_near(...)`](../../../tf/compat/v1/assert_near.md): Assert the condition `x` and `y` are close element-wise. + +[`assert_negative(...)`](../../../tf/compat/v1/assert_negative.md): Assert the condition `x < 0` holds element-wise. + +[`assert_non_negative(...)`](../../../tf/compat/v1/assert_non_negative.md): Assert the condition `x >= 0` holds element-wise. + +[`assert_non_positive(...)`](../../../tf/compat/v1/assert_non_positive.md): Assert the condition `x <= 0` holds element-wise. + +[`assert_none_equal(...)`](../../../tf/compat/v1/assert_none_equal.md): Assert the condition `x != y` holds element-wise. + +[`assert_positive(...)`](../../../tf/compat/v1/assert_positive.md): Assert the condition `x > 0` holds element-wise. + +[`assert_proper_iterable(...)`](../../../tf/debugging/assert_proper_iterable.md): Static assert that values is a "proper" iterable. + +[`assert_rank(...)`](../../../tf/compat/v1/assert_rank.md): Assert `x` has rank equal to `rank`. + +[`assert_rank_at_least(...)`](../../../tf/compat/v1/assert_rank_at_least.md): Assert `x` has rank equal to `rank` or higher. + +[`assert_rank_in(...)`](../../../tf/compat/v1/assert_rank_in.md): Assert `x` has rank in `ranks`. + +[`assert_same_float_dtype(...)`](../../../tf/debugging/assert_same_float_dtype.md): Validate and return float type based on `tensors` and `dtype`. + +[`assert_scalar(...)`](../../../tf/compat/v1/assert_scalar.md): Asserts that the given `tensor` is a scalar (i.e. zero-dimensional). + +[`assert_shapes(...)`](../../../tf/compat/v1/debugging/assert_shapes.md): Assert tensor shapes and dimension size relationships between tensors. + +[`assert_type(...)`](../../../tf/compat/v1/assert_type.md): Statically asserts that the given `Tensor` is of the specified type. + +[`check_numerics(...)`](../../../tf/debugging/check_numerics.md): Checks a tensor for NaN and Inf values. + +[`disable_check_numerics(...)`](../../../tf/debugging/disable_check_numerics.md): Disable the eager/graph unified numerics checking mechanism. + +[`enable_check_numerics(...)`](../../../tf/debugging/enable_check_numerics.md): Enable tensor numerics checking in an eager/graph unified fashion. + +[`get_log_device_placement(...)`](../../../tf/debugging/get_log_device_placement.md): Get if device placements are logged. 
+ +[`is_finite(...)`](../../../tf/math/is_finite.md): Returns which elements of x are finite. + +[`is_inf(...)`](../../../tf/math/is_inf.md): Returns which elements of x are Inf. + +[`is_nan(...)`](../../../tf/math/is_nan.md): Returns which elements of x are NaN. + +[`is_non_decreasing(...)`](../../../tf/math/is_non_decreasing.md): Returns `True` if `x` is non-decreasing. + +[`is_numeric_tensor(...)`](../../../tf/debugging/is_numeric_tensor.md): Returns `True` if the elements of `tensor` are numbers. + +[`is_strictly_increasing(...)`](../../../tf/math/is_strictly_increasing.md): Returns `True` if `x` is strictly increasing. + +[`set_log_device_placement(...)`](../../../tf/debugging/set_log_device_placement.md): Set if device placements should be logged. + diff --git a/site/en/api_docs/python/tf/compat/v1/debugging/assert_shapes.md b/site/en/api_docs/python/tf/compat/v1/debugging/assert_shapes.md new file mode 100644 index 00000000000..3d29ed3171e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/debugging/assert_shapes.md @@ -0,0 +1,154 @@ +description: Assert tensor shapes and dimension size relationships between tensors. + +
+ + +
+ +# tf.compat.v1.debugging.assert_shapes + + + + + + + + + +Assert tensor shapes and dimension size relationships between tensors. + + + + + + + +This Op checks that a collection of tensors shape relationships +satisfies given constraints. + +#### Example: + + + +```python +tf.assert_shapes([ + (x, ('N', 'Q')), + (y, ('N', 'D')), + (param, ('Q',)), + (scalar, ()) +]) +``` + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_shapes(shapes)]): + output = tf.matmul(x, y, transpose_a=True) +``` + +If `x`, `y`, `param` or `scalar` does not have a shape that satisfies +all specified constraints, `message`, as well as the first `summarize` entries +of the first encountered violating tensor are printed, and +`InvalidArgumentError` is raised. + +Size entries in the specified shapes are checked against other entries by +their __hash__, except: + - a size entry is interpreted as an explicit size if it can be parsed as an + integer primitive. + - a size entry is interpreted as *any* size if it is None or '.'. + +If the first entry of a shape is `...` (type `Ellipsis`) or '*' that indicates +a variable number of outer dimensions of unspecified size, i.e. the constraint +applies to the inner-most dimensions only. + +Scalar tensors and specified shapes of length zero (excluding the 'inner-most' +prefix) are both treated as having a single dimension of size one. + + + + + + + + + + + + + + + + + + + + + + +
+`shapes` + +dictionary with (`Tensor` to shape) items. A shape must be an +iterable. +
+`data` + +The tensors to print out if the condition is False. Defaults to error +message and first few entries of the violating tensor. +
+`summarize` + +Print this many entries of the tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_shapes". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless all shape constraints are +satisfied. +If static checks determine all constraints are satisfied, a `no_op` is +returned. +
+ + + + + + + + + + + + +
+`ValueError` + +If static checks determine any shape constraint is violated. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/debugging/experimental.md b/site/en/api_docs/python/tf/compat/v1/debugging/experimental.md new file mode 100644 index 00000000000..429a4fa0b9a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/debugging/experimental.md @@ -0,0 +1,27 @@ +description: Public API for tf.debugging.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.debugging.experimental + + + + + + + + + +Public API for tf.debugging.experimental namespace. + + + +## Functions + +[`disable_dump_debug_info(...)`](../../../../tf/debugging/experimental/disable_dump_debug_info.md): Disable the currently-enabled debugging dumping. + +[`enable_dump_debug_info(...)`](../../../../tf/debugging/experimental/enable_dump_debug_info.md): Enable dumping debugging information from a TensorFlow program. + diff --git a/site/en/api_docs/python/tf/compat/v1/decode_csv.md b/site/en/api_docs/python/tf/compat/v1/decode_csv.md new file mode 100644 index 00000000000..7bd7f571f62 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/decode_csv.md @@ -0,0 +1,150 @@ +description: Convert CSV records to tensors. Each column maps to one tensor. + +
+ + +
+ +# tf.compat.v1.decode_csv + + + + + + + + + +Convert CSV records to tensors. Each column maps to one tensor. + + + + + + + + + +RFC 4180 format is expected for the CSV records. +(https://tools.ietf.org/html/rfc4180) +Note that we allow leading and trailing spaces with int or float field. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`records` + +A `Tensor` of type `string`. +Each string is a record/row in the csv and all records should have +the same format. +
+`record_defaults` + +A list of `Tensor` objects with specific types. +Acceptable types are `float32`, `float64`, `int32`, `int64`, `string`. +One tensor per column of the input record, with either a +scalar default value for that column or an empty vector if the column is +required. +
+`field_delim` + +An optional `string`. Defaults to `","`. +char delimiter to separate fields in a record. +
+`use_quote_delim` + +An optional `bool`. Defaults to `True`. +If false, treats double quotation marks as regular +characters inside of the string fields (ignoring RFC 4180, Section 2, +Bullet 5). +
+`name` + +A name for the operation (optional). +
+`na_value` + +Additional string to recognize as NA/NaN. +
+`select_cols` + +Optional sorted list of column indices to select. If specified, +only this subset of columns will be parsed and returned. +
+ + + + + + + + + + + +
+A list of `Tensor` objects. Has the same type as `record_defaults`. +Each tensor will have the same shape as records. +
+ + + + + + + + + + + + +
+`ValueError` + +If any of the arguments is malformed. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/decode_raw.md b/site/en/api_docs/python/tf/compat/v1/decode_raw.md new file mode 100644 index 00000000000..57f90c3feb2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/decode_raw.md @@ -0,0 +1,108 @@ +description: Convert raw byte strings into tensors. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.decode_raw + + + + + + + + + +Convert raw byte strings into tensors. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(bytes)`. They will be removed in a future version. +Instructions for updating: +bytes is deprecated, use input_bytes instead + + + + + + + + + + + + + + + + + + + + + + +
+`input_bytes` + +Each element of the input Tensor is converted to an array of bytes. +
+`out_type` + +`DType` of the output. Acceptable types are `half`, `float`, `double`, +`int32`, `uint16`, `uint8`, `int16`, `int8`, `int64`. +
+`little_endian` + +Whether the `input_bytes` data is in little-endian format. Data will be +converted into host byte order if necessary. +
+`name` + +A name for the operation (optional). +
+`bytes` + +Deprecated parameter. Use `input_bytes` instead. +
+ + + + + + + + + + + +
+A `Tensor` object storing the decoded bytes. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/delete_session_tensor.md b/site/en/api_docs/python/tf/compat/v1/delete_session_tensor.md new file mode 100644 index 00000000000..1287c7e0f30 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/delete_session_tensor.md @@ -0,0 +1,76 @@ +description: Delete the tensor for the given tensor handle. + +
+ + +
+ +# tf.compat.v1.delete_session_tensor + + + + + + + + + +Delete the tensor for the given tensor handle. + + + + + + + +This is EXPERIMENTAL and subject to change. + +Delete the tensor of a given tensor handle. The tensor is produced +in a previous run() and stored in the state of the session. + + + + + + + + + + + + + +
+`handle` + +The string representation of a persistent tensor handle. +
+`name` + +Optional name prefix for the return tensor. +
+ + + + + + + + + + + +
+A pair of graph elements. The first is a placeholder for feeding a +tensor handle and the second is a deletion operation. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/depth_to_space.md b/site/en/api_docs/python/tf/compat/v1/depth_to_space.md new file mode 100644 index 00000000000..a841bc92512 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/depth_to_space.md @@ -0,0 +1,186 @@ +description: DepthToSpace for tensors of type T. + +
+ + +
+ +# tf.compat.v1.depth_to_space + + + + + + + + + +DepthToSpace for tensors of type T. + + + + + + + + + +Rearranges data from depth into blocks of spatial data. +This is the reverse transformation of SpaceToDepth. More specifically, +this op outputs a copy of the input tensor where values from the `depth` +dimension are moved in spatial blocks to the `height` and `width` dimensions. +The attr `block_size` indicates the input block size and how the data is moved. + + * Chunks of data of size `block_size * block_size` from depth are rearranged + into non-overlapping blocks of size `block_size x block_size` + * The width the output tensor is `input_depth * block_size`, whereas the + height is `input_height * block_size`. + * The Y, X coordinates within each block of the output image are determined + by the high order component of the input channel index. + * The depth of the input tensor must be divisible by + `block_size * block_size`. + +The `data_format` attr specifies the layout of the input and output tensors +with the following options: + "NHWC": `[ batch, height, width, channels ]` + "NCHW": `[ batch, channels, height, width ]` + "NCHW_VECT_C": + `qint8 [ batch, channels / 4, height, width, 4 ]` + +It is useful to consider the operation as transforming a 6-D Tensor. +e.g. for data_format = NHWC, + Each element in the input tensor can be specified via 6 coordinates, + ordered by decreasing memory layout significance as: + n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates + within the input image, bX, bY means coordinates + within the output block, oC means output channels). + The output would be the input transposed to the following layout: + n,iY,bY,iX,bX,oC + +This operation is useful for resizing the activations between convolutions +(but keeping all data), e.g. instead of pooling. It is also useful for training +purely convolutional models. + +For example, given an input of shape `[1, 1, 1, 4]`, data_format = "NHWC" and +block_size = 2: + +``` +x = [[[[1, 2, 3, 4]]]] + +``` + +This operation will output a tensor of shape `[1, 2, 2, 1]`: + +``` + [[[[1], [2]], + [[3], [4]]]] +``` + +Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`, +the corresponding output will have 2x2 elements and will have a depth of +1 channel (1 = `4 / (block_size * block_size)`). +The output element shape is `[2, 2, 1]`. + +For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g. + +``` +x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]] +``` + +This operation, for block size of 2, will return the following tensor of shape +`[1, 2, 2, 3]` + +``` + [[[[1, 2, 3], [4, 5, 6]], + [[7, 8, 9], [10, 11, 12]]]] + +``` + +Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2: + +``` +x = [[[[1, 2, 3, 4], + [5, 6, 7, 8]], + [[9, 10, 11, 12], + [13, 14, 15, 16]]]] +``` + +the operator will return the following tensor of shape `[1 4 4 1]`: + +``` +x = [[[ [1], [2], [5], [6]], + [ [3], [4], [7], [8]], + [ [9], [10], [13], [14]], + [ [11], [12], [15], [16]]]] + +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`block_size` + +An `int` that is `>= 2`. +The size of the spatial block, same as in Space2Depth. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW", "NCHW_VECT_C"`. Defaults to `"NHWC"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/device.md b/site/en/api_docs/python/tf/compat/v1/device.md new file mode 100644 index 00000000000..aa09120e4b5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/device.md @@ -0,0 +1,83 @@ +description: Wrapper for Graph.device() using the default graph. + +
+ + +
+ +# tf.compat.v1.device + + + + + + + + + +Wrapper for `Graph.device()` using the default graph. + + + + + + + +See tf.Graph.device for more details. + + + + + + + + + + +
+`device_name_or_function` + +The device name or function to use in the context. +
+ + + + + + + + + + + +
+A context manager that specifies the default device to use for newly +created ops. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If eager execution is enabled and a function is passed in. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/disable_control_flow_v2.md b/site/en/api_docs/python/tf/compat/v1/disable_control_flow_v2.md new file mode 100644 index 00000000000..f8a5fc40568 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/disable_control_flow_v2.md @@ -0,0 +1,37 @@ +description: Opts out of control flow v2. + +
+ + +
+ +# tf.compat.v1.disable_control_flow_v2 + + + + + + + + + +Opts out of control flow v2. + + + + + + + +Note: v2 control flow is always enabled inside of tf.function. Calling this +function has no effect in that case. + +If your code needs tf.disable_control_flow_v2() to be called to work +properly please file a bug. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/disable_eager_execution.md b/site/en/api_docs/python/tf/compat/v1/disable_eager_execution.md new file mode 100644 index 00000000000..111df3d7997 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/disable_eager_execution.md @@ -0,0 +1,35 @@ +description: Disables eager execution. + +
+ + +
+ +# tf.compat.v1.disable_eager_execution + + + + + + + + + +Disables eager execution. + + + + + + + +This function can only be called before any Graphs, Ops, or Tensors have been +created. It can be used at the beginning of the program for complex migration +projects from TensorFlow 1.x to 2.x. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/disable_resource_variables.md b/site/en/api_docs/python/tf/compat/v1/disable_resource_variables.md new file mode 100644 index 00000000000..3af915b6419 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/disable_resource_variables.md @@ -0,0 +1,38 @@ +description: Opts out of resource variables. (deprecated) + +
+ + +
+ +# tf.compat.v1.disable_resource_variables + + + + + + + + + +Opts out of resource variables. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +non-resource variables are not supported in the long term + +If your code needs tf.disable_resource_variables() to be called to work +properly please file a bug. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/disable_tensor_equality.md b/site/en/api_docs/python/tf/compat/v1/disable_tensor_equality.md new file mode 100644 index 00000000000..e1311da433d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/disable_tensor_equality.md @@ -0,0 +1,33 @@ +description: Compare Tensors by their id and be hashable. + +
+ + +
+ +# tf.compat.v1.disable_tensor_equality + + + + + + + + + +Compare Tensors by their id and be hashable. + + + + + + + +This is a legacy behaviour of TensorFlow and is highly discouraged. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/disable_v2_behavior.md b/site/en/api_docs/python/tf/compat/v1/disable_v2_behavior.md new file mode 100644 index 00000000000..96d4c07ea8a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/disable_v2_behavior.md @@ -0,0 +1,38 @@ +description: Disables TensorFlow 2.x behaviors. + +
+ + +
+ +# tf.compat.v1.disable_v2_behavior + + + + + + + + + +Disables TensorFlow 2.x behaviors. + + + + + + + +This function can be called at the beginning of the program (before `Tensors`, +`Graphs` or other structures have been created, and before devices have been +initialized. It switches all global behaviors that are different between +TensorFlow 1.x and 2.x to behave as intended for 1.x. + +User can call this function to disable 2.x behavior during complex migrations. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/disable_v2_tensorshape.md b/site/en/api_docs/python/tf/compat/v1/disable_v2_tensorshape.md new file mode 100644 index 00000000000..4567f835ddd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/disable_v2_tensorshape.md @@ -0,0 +1,33 @@ +description: Disables the V2 TensorShape behavior and reverts to V1 behavior. + +
+ + +
+ +# tf.compat.v1.disable_v2_tensorshape + + + + + + + + + +Disables the V2 TensorShape behavior and reverts to V1 behavior. + + + + + + + +See docstring for `enable_v2_tensorshape` for details about the new behavior. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/distribute.md b/site/en/api_docs/python/tf/compat/v1/distribute.md new file mode 100644 index 00000000000..6f38f7255c5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute.md @@ -0,0 +1,146 @@ +description: Library for running a computation across multiple devices. + +
+ + +
+ +# Module: tf.compat.v1.distribute + + + + + + + + + +Library for running a computation across multiple devices. + + +See the guide for overview and examples: +[TensorFlow v2.x](https://www.tensorflow.org/guide/distributed_training), +[TensorFlow v1.x](https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb). + +The intent of this library is that you can write an algorithm in a stylized way +and it will be usable with a variety of different tf.distribute.Strategy +implementations. Each descendant will implement a different strategy for +distributing the algorithm across multiple devices/machines. Furthermore, these +changes can be hidden inside the specific layers and other library classes that +need special treatment to run in a distributed setting, so that most users' +model definition code can run unchanged. The tf.distribute.Strategy API works +the same way with eager and graph execution. + +*Glossary* + +* _Data parallelism_ is where we run multiple copies of the model + on different slices of the input data. This is in contrast to + _model parallelism_ where we divide up a single copy of a model + across multiple devices. + Note: we only support data parallelism for now, but + hope to add support for model parallelism in the future. +* A _device_ is a CPU or accelerator (e.g. GPUs, TPUs) on some machine that + TensorFlow can run operations on (see e.g. tf.device). You may have multiple + devices on a single machine, or be connected to devices on multiple + machines. Devices used to run computations are called _worker devices_. + Devices used to store variables are _parameter devices_. For some strategies, + such as tf.distribute.MirroredStrategy, the worker and parameter devices + will be the same (see mirrored variables below). For others they will be + different. For example, tf.distribute.experimental.CentralStorageStrategy + puts the variables on a single device (which may be a worker device or may be + the CPU), and tf.distribute.experimental.ParameterServerStrategy puts the + variables on separate machines called parameter servers (see below). +* A _replica_ is one copy of the model, running on one slice of the + input data. Right now each replica is executed on its own + worker device, but once we add support for model parallelism + a replica may span multiple worker devices. +* A _host_ is the CPU device on a machine with worker devices, typically + used for running input pipelines. +* A _worker_ is defined to be the physical machine(s) containing the physical + devices (e.g. GPUs, TPUs) on which the replicated computation is executed. A + worker may contain one or more replicas, but contains at least one + replica. Typically one worker will correspond to one machine, but in the case + of very large models with model parallelism, one worker may span multiple + machines. We typically run one input pipeline per worker, feeding all the + replicas on that worker. +* _Synchronous_, or more commonly _sync_, training is where the updates from + each replica are aggregated together before updating the model variables. This + is in contrast to _asynchronous_, or _async_ training, where each replica + updates the model variables independently. You may also have replicas + partitioned into groups which are in sync within each group but async between + groups. +* _Parameter servers_: These are machines that hold a single copy of + parameters/variables, used by some strategies (right now just + tf.distribute.experimental.ParameterServerStrategy). 
All replicas that want + to operate on a variable retrieve it at the beginning of a step and send an + update to be applied at the end of the step. These can in priniciple support + either sync or async training, but right now we only have support for async + training with parameter servers. Compare to + tf.distribute.experimental.CentralStorageStrategy, which puts all variables + on a single device on the same machine (and does sync training), and + tf.distribute.MirroredStrategy, which mirrors variables to multiple devices + (see below). +* _Mirrored variables_: These are variables that are copied to multiple + devices, where we keep the copies in sync by applying the same + updates to every copy. Normally would only be used with sync training. +* Reductions and all-reduce: A _reduction_ is some method of aggregating + multiple values into one value, like "sum" or "mean". If a strategy is doing + sync training, we will perform a reduction on the gradients to a parameter + from all replicas before applying the update. _All-reduce_ is an algorithm for + performing a reduction on values from multiple devices and making the result + available on all of those devices. + +Note that we provide a default version of tf.distribute.Strategy that is +used when no other strategy is in scope, that provides the same API with +reasonable default behavior. + +## Modules + +[`cluster_resolver`](../../../tf/compat/v1/distribute/cluster_resolver.md) module: Library imports for ClusterResolvers. + +[`experimental`](../../../tf/compat/v1/distribute/experimental.md) module: Experimental Distribution Strategy library. + +## Classes + +[`class CrossDeviceOps`](../../../tf/distribute/CrossDeviceOps.md): Base class for cross-device reduction and broadcasting algorithms. + +[`class HierarchicalCopyAllReduce`](../../../tf/distribute/HierarchicalCopyAllReduce.md): Reduction using hierarchical copy all-reduce. + +[`class InputContext`](../../../tf/distribute/InputContext.md): A class wrapping information needed by an input function. + +[`class InputReplicationMode`](../../../tf/distribute/InputReplicationMode.md): Replication mode for input function. + +[`class MirroredStrategy`](../../../tf/compat/v1/distribute/MirroredStrategy.md): Synchronous training across multiple replicas on one machine. + +[`class NcclAllReduce`](../../../tf/distribute/NcclAllReduce.md): Reduction using NCCL all-reduce. + +[`class OneDeviceStrategy`](../../../tf/compat/v1/distribute/OneDeviceStrategy.md): A distribution strategy for running on a single device. + +[`class ReduceOp`](../../../tf/distribute/ReduceOp.md): Indicates how a set of values should be reduced. + +[`class ReductionToOneDevice`](../../../tf/distribute/ReductionToOneDevice.md): Always do reduction to one device first and then do broadcasting. + +[`class ReplicaContext`](../../../tf/distribute/ReplicaContext.md): tf.distribute.Strategy API when in a replica context. + +[`class RunOptions`](../../../tf/distribute/RunOptions.md): Run options for `strategy.run`. + +[`class Server`](../../../tf/distribute/Server.md): An in-process TensorFlow server, for use in distributed training. + +[`class Strategy`](../../../tf/compat/v1/distribute/Strategy.md): A list of devices with a state & compute distribution policy. + +[`class StrategyExtended`](../../../tf/compat/v1/distribute/StrategyExtended.md): Additional APIs for algorithms that need to be distribution-aware. 
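The helper functions listed below interact with the default strategy mentioned above. A minimal, illustrative sketch (not from the generated reference; it assumes a TF 2.x runtime and uses the `tf.distribute` symbols that this compat module also exposes):

```python
import tensorflow as tf

# With no strategy in scope, a "default" strategy with the same API is active.
print(tf.distribute.has_strategy())        # False
default_strategy = tf.distribute.get_strategy()

strategy = tf.distribute.MirroredStrategy()  # all local GPUs, or the CPU
with strategy.scope():
    print(tf.distribute.has_strategy())              # True
    print(tf.distribute.get_strategy() is strategy)  # True
```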
+ +## Functions + +[`experimental_set_strategy(...)`](../../../tf/distribute/experimental_set_strategy.md): Set a tf.distribute.Strategy as current without `with strategy.scope()`. + +[`get_loss_reduction(...)`](../../../tf/compat/v1/distribute/get_loss_reduction.md): tf.distribute.ReduceOp corresponding to the last loss reduction. + +[`get_replica_context(...)`](../../../tf/distribute/get_replica_context.md): Returns the current tf.distribute.ReplicaContext or `None`. + +[`get_strategy(...)`](../../../tf/distribute/get_strategy.md): Returns the current tf.distribute.Strategy object. + +[`has_strategy(...)`](../../../tf/distribute/has_strategy.md): Return if there is a current non-default tf.distribute.Strategy. + +[`in_cross_replica_context(...)`](../../../tf/distribute/in_cross_replica_context.md): Returns `True` if in a cross-replica context. + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/MirroredStrategy.md b/site/en/api_docs/python/tf/compat/v1/distribute/MirroredStrategy.md new file mode 100644 index 00000000000..248dff891da --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/MirroredStrategy.md @@ -0,0 +1,1015 @@ +description: Synchronous training across multiple replicas on one machine. + +
+ + + + + + + + + + + + + + +
+ +# tf.compat.v1.distribute.MirroredStrategy + + + + + + + + + +Synchronous training across multiple replicas on one machine. + +Inherits From: [`Strategy`](../../../../tf/compat/v1/distribute/Strategy.md) + + + + + + + +This strategy is typically used for training on one +machine with multiple GPUs. For TPUs, use +tf.distribute.experimental.TPUStrategy. To use `MirroredStrategy` with +multiple workers, please refer to +tf.distribute.experimental.MultiWorkerMirroredStrategy. + +For example, a variable created under a `MirroredStrategy` is a +`MirroredVariable`. If no devices are specified in the constructor argument of +the strategy then it will use all the available GPUs. If no GPUs are found, it +will use the available CPUs. Note that TensorFlow treats all CPUs on a +machine as a single device, and uses threads internally for parallelism. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> with strategy.scope(): +... x = tf.Variable(1.) +>>> x +MirroredVariable:{ + 0: + } +``` + +While using distribution strategies, all the variable creation should be done +within the strategy's scope. This will replicate the variables across all the +replicas and keep them in sync using an all-reduce algorithm. + +Variables created inside a `MirroredStrategy` which is wrapped with a +tf.function are still `MirroredVariables`. + +``` +>>> x = [] +>>> @tf.function # Wrap the function with tf.function. +... def create_variable(): +... if not x: +... x.append(tf.Variable(1.)) +>>> strategy = tf.distribute.MirroredStrategy() +>>> with strategy.scope(): +... create_variable() +... print (x[0]) +MirroredVariable:{ + 0: + } +``` + +`experimental_distribute_dataset` can be used to distribute the dataset across +the replicas when writing your own training loop. If you are using `.fit` and +`.compile` methods available in tf.keras, then tf.keras will handle the +distribution for you. + +#### For example: + + + +```python +my_strategy = tf.distribute.MirroredStrategy() +with my_strategy.scope(): + @tf.function + def distribute_train_epoch(dataset): + def replica_fn(input): + # process input and return result + return result + + total_result = 0 + for x in dataset: + per_replica_result = my_strategy.run(replica_fn, args=(x,)) + total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM, + per_replica_result, axis=None) + return total_result + + dist_dataset = my_strategy.experimental_distribute_dataset(dataset) + for _ in range(EPOCHS): + train_result = distribute_train_epoch(dist_dataset) +``` + + + + + + + + + + + + + +
+`devices` + +a list of device strings such as `['/gpu:0', '/gpu:1']`. If +`None`, all available GPUs are used. If no GPUs are found, CPU is used. +
+`cross_device_ops` + +optional, a descendant of `CrossDeviceOps`. If this is not +set, `NcclAllReduce()` will be used by default. One would customize this +if NCCL isn't available or if a special implementation that exploits +the particular hardware is available. +
+ + + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_distribute_dataset

+ +View source + + + +Distributes a tf.data.Dataset instance provided via `dataset`. + +The returned distributed dataset can be iterated over similar to how +regular datasets can. +NOTE: Currently, the user cannot add any more transformations to a +distributed dataset. + +The following is an example: + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + +We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). + +In a multi-worker setting, we will first attempt to distribute the dataset +by attempting to detect whether the dataset is being created out of +ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so, +attempting to shard the input files. Note that there has to be at least one +input file per worker. If you have less than one input file per worker, we +suggest that you should disable distributing your dataset using the method +below. + +If that attempt is unsuccessful (e.g. the dataset is created from a +Dataset.range), we will shard the dataset evenly at the end by appending a +`.shard` operation to the end of the processing pipeline. This will cause +the entire preprocessing pipeline for all the data to be run on every +worker, and each worker will do redundant work. We will print a warning +if this method of sharding is selected. + +You can disable dataset sharding across workers using the +`auto_shard_policy` option in tf.data.experimental.DistributeOptions. + +Within each worker, we will also split the data among all the worker +devices (if more than one a present), and this will happen even if +multi-worker sharding is disabled using the method above. + +If the above batch splitting and dataset sharding logic is undesirable, +please use `experimental_distribute_datasets_from_function` instead, which +does not do any automatic splitting or sharding. + +You can also use the `element_spec` property of the distributed dataset +returned by this API to query the tf.TypeSpec of the elements returned +by the iterator. This can be used to set the `input_signature` property +of a tf.function. + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +@tf.function(input_signature=[dist_dataset.element_spec]) +def train_step(inputs): + # train model with inputs + return + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. Each +replica on that worker will dequeue one batch of inputs from the local +`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued +from the `Dataset` every step). + +This method can be used for several purposes. For example, where +`experimental_distribute_dataset` is unable to shard the input files, this +method might be used to manually shard the dataset (avoiding the slow +fallback behavior in `experimental_distribute_dataset`). In cases where the +dataset is infinite, this sharding can be done by creating dataset replicas +that differ only in their random seed. +`experimental_distribute_dataset` may also sometimes fail to split the +batch across replicas on a worker. In that case, this method can be used +where that limitation does not exist. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + +To query the tf.TypeSpec of the elements in the distributed dataset +returned by this API, you need to use the `element_spec` property of the +distributed iterator. This tf.TypeSpec can be used to set the +`input_signature` property of a tf.function. + +```python +# If you want to specify `input_signature` for a `tf.function` you must +# first create the iterator. +iterator = iter(inputs) + +@tf.function(input_signature=[iterator.element_spec]) +def replica_fn_with_signature(inputs): + # train the model with inputs + return + +for _ in range(steps): + strategy.run(replica_fn_with_signature, + args=(next(iterator),)) +``` + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single +value, this returns `(value,)`. +
+ + + +
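As a minimal sketch of `experimental_local_results` (device counts depend on the machine; on a CPU-only host the result is a 1-tuple):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # all local GPUs, or the CPU

def step():
    return tf.distribute.get_replica_context().replica_id_in_sync_group

per_replica = strategy.run(step)
# One entry per local replica, e.g. (<tf.Tensor: ... 0>,) on a single device.
print(strategy.experimental_local_results(per_replica))
```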

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use +tf.distribute.Strategy.experimental_distribute_dataset +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+`session` + +(TensorFlow v1.x graph execution only) A session used for +initialization. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_run

+ +View source + + + +Runs ops in `fn` on each replica, with inputs from `input_iterator`. + +DEPRECATED: This method is not available in TF 2.x. Please switch +to using `run` instead. + +When eager execution is enabled, executes ops specified by `fn` on each +replica. Otherwise, builds a graph to execute the ops on each replica. + +Each replica will take a single, different input from the inputs provided by +one `get_next` call on the input iterator. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `replica_id_in_sync_group`. + +IMPORTANT: Depending on the tf.distribute.Strategy implementation being +used, and whether eager execution is enabled, `fn` may be called one or more +times (once for each replica). + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The inputs to the function must match the outputs +of `input_iterator.get_next()`. The output must be a tf.nest of +`Tensor`s. +
+`input_iterator` + +(Optional) input iterator from which the inputs are taken. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be `PerReplica` (if the values are unsynchronized), +`Mirrored` (if the values are kept in sync), or `Tensor` (if running on a +single replica). +
+ + + +

make_dataset_iterator

+ +View source + + + +Makes an iterator for input provided via `dataset`. + +DEPRECATED: This method is not available in TF 2.x. + +Data from the given dataset will be distributed evenly across all the +compute replicas. We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). +If this effort fails, an error will be thrown, and the user should instead +use `make_input_fn_iterator` which provides more control to the user, and +does not try to divide a batch across replicas. + +The user could also use `make_input_fn_iterator` if they want to +customize which input is fed to which replica/worker etc. + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be distributed evenly across all +replicas. +
+ + + + + + + + + + + +
Returns
+A `tf.distribute.InputIterator` which returns inputs for each step of the +computation. The user should call `initialize` on the returned iterator. +
+ + + +

make_input_fn_iterator

+ +View source + + + +Returns an iterator split across replicas created from an input function. + +DEPRECATED: This method is not available in TF 2.x. + +The `input_fn` should take an tf.distribute.InputContext object where +information about batching and input sharding can be accessed: + +``` +def input_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard(input_context.num_input_pipelines, + input_context.input_pipeline_id) +with strategy.scope(): + iterator = strategy.make_input_fn_iterator(input_fn) + replica_results = strategy.experimental_run(replica_fn, iterator) +``` + +The tf.data.Dataset returned by `input_fn` should have a per-replica +batch size, which may be computed using +`input_context.get_per_replica_batch_size`. + + + + + + + + + + + + + +
Args
+`input_fn` + +A function taking a tf.distribute.InputContext object and +returning a tf.data.Dataset. +
+`replication_mode` + +an enum value of tf.distribute.InputReplicationMode. +Only `PER_WORKER` is supported currently, which means there will be +a single call to `input_fn` per worker. Replicas will dequeue from the +local tf.data.Dataset on their worker. +
+ + + + + + + + + + + +
Returns
+An iterator object that should first be `.initialize()`-ed. It may then +either be passed to `strategy.experimental_run()`, or you can call +`iterator.get_next()` to get the next value to pass to +`strategy.extended.call_for_each_replica()`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +
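To make the `axis` behavior concrete, a small sketch (printed values assume a single local replica; with more replicas the per-replica slices are combined first):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
per_replica = strategy.run(lambda: tf.constant([0.0, 1.0, 2.0, 3.0]))

# Aggregate across replicas only; the batch dimension is preserved.
across_replicas = strategy.reduce(
    tf.distribute.ReduceOp.SUM, per_replica, axis=None)

# Aggregate across replicas *and* the batch dimension; returns a scalar.
global_sum = strategy.reduce(
    tf.distribute.ReduceOp.SUM, per_replica, axis=0)

print(across_replicas.numpy(), global_sum.numpy())  # [0. 1. 2. 3.] 6.0
```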

run

+ +View source + + + +Run `fn` on each replica, with the given arguments. + +Executes ops specified by `fn` on each replica. If `args` or `kwargs` have +tf.distribute.DistributedValues, such as those produced by a +"distributed `Dataset`" or `experimental_distribute_values_from_function` +when `fn` is executed on a particular replica, it will be executed with the +component of tf.distribute.DistributedValues that correspond to that +replica. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `all_reduce`. + +All arguments in `args` or `kwargs` should either be nest of tensors or +tf.distribute.DistributedValues containing tensors or composite tensors. + +IMPORTANT: Depending on the implementation of tf.distribute.Strategy and +whether eager execution is enabled, `fn` may be called one or more times ( +once for each replica). + +#### Example usage: + + + +1. Constant tensor input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> tensor_input = tf.constant(3.0) +>>> @tf.function +... def replica_fn(input): +... return input*2.0 +>>> result = strategy.run(replica_fn, args=(tensor_input,)) +>>> result + +``` + +2. DistributedValues input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> @tf.function +... def run(): +... def value_fn(value_context): +... return value_context.num_replicas_in_sync +... distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +... def replica_fn2(input): +... return input*2 +... return strategy.run(replica_fn2, args=(distributed_values,)) +>>> result = run() +>>> result + +``` + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be tf.distribute.DistributedValues or `Tensor` +objects (for example, if running on a single replica). +
+ + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + +
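For example, a minimal sketch of variable creation under the scope:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # Variables created here become mirrored (distributed) variables,
    # replicated across the strategy's devices and kept in sync.
    v = tf.Variable(1.0)
```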

update_config_proto

+ +View source + + + +Returns a copy of `config_proto` modified for use with this strategy. + +DEPRECATED: This method is not available in TF 2.x. + +The updated config has something needed to run a strategy, e.g. +configuration to run collective ops, or device filters to improve +distributed training performance. + + + + + + + + + + +
Args
+`config_proto` + +a `tf.ConfigProto` object. +
+ + + + + + + + + + + +
Returns
+The updated copy of the `config_proto`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/OneDeviceStrategy.md b/site/en/api_docs/python/tf/compat/v1/distribute/OneDeviceStrategy.md new file mode 100644 index 00000000000..293590f7a8e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/OneDeviceStrategy.md @@ -0,0 +1,962 @@ +description: A distribution strategy for running on a single device. + +
+ + + + + + + + + + + + + + +
+ +# tf.compat.v1.distribute.OneDeviceStrategy + + + + + + + + + +A distribution strategy for running on a single device. + +Inherits From: [`Strategy`](../../../../tf/compat/v1/distribute/Strategy.md) + + + + + + + +Using this strategy will place any variables created in its scope on the +specified device. Input distributed through this strategy will be +prefetched to the specified device. Moreover, any functions called via +`strategy.run` will also be placed on the specified device +as well. + +Typical usage of this strategy could be testing your code with the +tf.distribute.Strategy API before switching to other strategies which +actually distribute to multiple devices/machines. + +#### For example: + + +``` +tf.enable_eager_execution() +strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0") + +with strategy.scope(): + v = tf.Variable(1.0) + print(v.device) # /job:localhost/replica:0/task:0/device:GPU:0 + +def step_fn(x): + return x * 2 + +result = 0 +for i in range(10): + result += strategy.run(step_fn, args=(i,)) +print(result) # 90 +``` + + + + + + + + + + +
+`device` + +Device string identifier for the device on which the variables +should be placed. See class docs for more details on how the device is +used. Examples: "/cpu:0", "/gpu:0", "/device:CPU:0", "/device:GPU:0" +
+ + + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_distribute_dataset

+ +View source + + + +Distributes a tf.data.Dataset instance provided via `dataset`. + +The returned distributed dataset can be iterated over similar to how +regular datasets can. +NOTE: Currently, the user cannot add any more transformations to a +distributed dataset. + +The following is an example: + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + +We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). + +In a multi-worker setting, we will first attempt to distribute the dataset +by attempting to detect whether the dataset is being created out of +ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so, +attempting to shard the input files. Note that there has to be at least one +input file per worker. If you have less than one input file per worker, we +suggest that you should disable distributing your dataset using the method +below. + +If that attempt is unsuccessful (e.g. the dataset is created from a +Dataset.range), we will shard the dataset evenly at the end by appending a +`.shard` operation to the end of the processing pipeline. This will cause +the entire preprocessing pipeline for all the data to be run on every +worker, and each worker will do redundant work. We will print a warning +if this method of sharding is selected. + +You can disable dataset sharding across workers using the +`auto_shard_policy` option in tf.data.experimental.DistributeOptions. + +Within each worker, we will also split the data among all the worker +devices (if more than one a present), and this will happen even if +multi-worker sharding is disabled using the method above. + +If the above batch splitting and dataset sharding logic is undesirable, +please use `experimental_distribute_datasets_from_function` instead, which +does not do any automatic splitting or sharding. + +You can also use the `element_spec` property of the distributed dataset +returned by this API to query the tf.TypeSpec of the elements returned +by the iterator. This can be used to set the `input_signature` property +of a tf.function. + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +@tf.function(input_signature=[dist_dataset.element_spec]) +def train_step(inputs): + # train model with inputs + return + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. Each +replica on that worker will dequeue one batch of inputs from the local +`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued +from the `Dataset` every step). + +This method can be used for several purposes. For example, where +`experimental_distribute_dataset` is unable to shard the input files, this +method might be used to manually shard the dataset (avoiding the slow +fallback behavior in `experimental_distribute_dataset`). In cases where the +dataset is infinite, this sharding can be done by creating dataset replicas +that differ only in their random seed. +`experimental_distribute_dataset` may also sometimes fail to split the +batch across replicas on a worker. In that case, this method can be used +where that limitation does not exist. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + +To query the tf.TypeSpec of the elements in the distributed dataset +returned by this API, you need to use the `element_spec` property of the +distributed iterator. This tf.TypeSpec can be used to set the +`input_signature` property of a tf.function. + +```python +# If you want to specify `input_signature` for a `tf.function` you must +# first create the iterator. +iterator = iter(inputs) + +@tf.function(input_signature=[iterator.element_spec]) +def replica_fn_with_signature(inputs): + # train the model with inputs + return + +for _ in range(steps): + strategy.run(replica_fn_with_signature, + args=(next(iterator),)) +``` + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single +value, this returns `(value,)`. +
+ + + +
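Since `OneDeviceStrategy` always has exactly one replica, the returned tuple always has one element. A minimal sketch (the device string is illustrative):

```python
import tensorflow as tf

strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
result = strategy.run(lambda: tf.constant(1.0) * 2.0)

print(strategy.experimental_local_results(result))  # (<tf.Tensor: ... 2.0>,)
```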

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use +tf.distribute.Strategy.experimental_distribute_dataset +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+`session` + +(TensorFlow v1.x graph execution only) A session used for +initialization. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_run

+ +View source + + + +Runs ops in `fn` on each replica, with inputs from `input_iterator`. + +DEPRECATED: This method is not available in TF 2.x. Please switch +to using `run` instead. + +When eager execution is enabled, executes ops specified by `fn` on each +replica. Otherwise, builds a graph to execute the ops on each replica. + +Each replica will take a single, different input from the inputs provided by +one `get_next` call on the input iterator. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `replica_id_in_sync_group`. + +IMPORTANT: Depending on the tf.distribute.Strategy implementation being +used, and whether eager execution is enabled, `fn` may be called one or more +times (once for each replica). + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The inputs to the function must match the outputs +of `input_iterator.get_next()`. The output must be a tf.nest of +`Tensor`s. +
+`input_iterator` + +(Optional) input iterator from which the inputs are taken. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be `PerReplica` (if the values are unsynchronized), +`Mirrored` (if the values are kept in sync), or `Tensor` (if running on a +single replica). +
+ + + +

make_dataset_iterator

+ +View source + + + +Makes an iterator for input provided via `dataset`. + +DEPRECATED: This method is not available in TF 2.x. + +Data from the given dataset will be distributed evenly across all the +compute replicas. We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). +If this effort fails, an error will be thrown, and the user should instead +use `make_input_fn_iterator` which provides more control to the user, and +does not try to divide a batch across replicas. + +The user could also use `make_input_fn_iterator` if they want to +customize which input is fed to which replica/worker etc. + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be distributed evenly across all +replicas. +
+ + + + + + + + + + + +
Returns
+A `tf.distribute.InputIterator` which returns inputs for each step of the +computation. The user should call `initialize` on the returned iterator. +
+ + + +

make_input_fn_iterator

+ +View source + + + +Returns an iterator split across replicas created from an input function. + +DEPRECATED: This method is not available in TF 2.x. + +The `input_fn` should take an tf.distribute.InputContext object where +information about batching and input sharding can be accessed: + +``` +def input_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard(input_context.num_input_pipelines, + input_context.input_pipeline_id) +with strategy.scope(): + iterator = strategy.make_input_fn_iterator(input_fn) + replica_results = strategy.experimental_run(replica_fn, iterator) +``` + +The tf.data.Dataset returned by `input_fn` should have a per-replica +batch size, which may be computed using +`input_context.get_per_replica_batch_size`. + + + + + + + + + + + + + +
Args
+`input_fn` + +A function taking a tf.distribute.InputContext object and +returning a tf.data.Dataset. +
+`replication_mode` + +an enum value of tf.distribute.InputReplicationMode. +Only `PER_WORKER` is supported currently, which means there will be +a single call to `input_fn` per worker. Replicas will dequeue from the +local tf.data.Dataset on their worker. +
+ + + + + + + + + + + +
Returns
+An iterator object that should first be `.initialize()`-ed. It may then +either be passed to `strategy.experimental_run()`, or you can call +`iterator.get_next()` to get the next value to pass to +`strategy.extended.call_for_each_replica()`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +
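With a single replica the cross-replica aggregation is a no-op, so `reduce` mainly matters through `axis`. A minimal sketch (device string is illustrative):

```python
import tensorflow as tf

strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
value = strategy.run(lambda: tf.constant([1.0, 2.0, 3.0]))

# axis=None: nothing to combine across replicas; the batch is kept as-is.
print(strategy.reduce(tf.distribute.ReduceOp.MEAN, value, axis=None).numpy())
# axis=0: also average over the batch dimension -> 2.0.
print(strategy.reduce(tf.distribute.ReduceOp.MEAN, value, axis=0).numpy())
```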

run

+ +View source + + + +Run `fn` on each replica, with the given arguments. + +Executes ops specified by `fn` on each replica. If `args` or `kwargs` have +tf.distribute.DistributedValues, such as those produced by a +"distributed `Dataset`" or `experimental_distribute_values_from_function` +when `fn` is executed on a particular replica, it will be executed with the +component of tf.distribute.DistributedValues that correspond to that +replica. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `all_reduce`. + +All arguments in `args` or `kwargs` should either be nest of tensors or +tf.distribute.DistributedValues containing tensors or composite tensors. + +IMPORTANT: Depending on the implementation of tf.distribute.Strategy and +whether eager execution is enabled, `fn` may be called one or more times ( +once for each replica). + +#### Example usage: + + + +1. Constant tensor input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> tensor_input = tf.constant(3.0) +>>> @tf.function +... def replica_fn(input): +... return input*2.0 +>>> result = strategy.run(replica_fn, args=(tensor_input,)) +>>> result + +``` + +2. DistributedValues input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> @tf.function +... def run(): +... def value_fn(value_context): +... return value_context.num_replicas_in_sync +... distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +... def replica_fn2(input): +... return input*2 +... return strategy.run(replica_fn2, args=(distributed_values,)) +>>> result = run() +>>> result + +``` + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be tf.distribute.DistributedValues or `Tensor` +objects (for example, if running on a single replica). +
+ + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + +

update_config_proto

+ +View source + + + +Returns a copy of `config_proto` modified for use with this strategy. + +DEPRECATED: This method is not available in TF 2.x. + +The updated config has something needed to run a strategy, e.g. +configuration to run collective ops, or device filters to improve +distributed training performance. + + + + + + + + + + +
Args
+`config_proto` + +a `tf.ConfigProto` object. +
+ + + + + + + + + + + +
Returns
+The updated copy of the `config_proto`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/Strategy.md b/site/en/api_docs/python/tf/compat/v1/distribute/Strategy.md new file mode 100644 index 00000000000..c4b127e9fa0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/Strategy.md @@ -0,0 +1,918 @@ +description: A list of devices with a state & compute distribution policy. + +
+ + + + + + + + + + + + + + +
+ +# tf.compat.v1.distribute.Strategy + + + + + + + + + +A list of devices with a state & compute distribution policy. + + + + + + + +See [the guide](https://www.tensorflow.org/guide/distribute_strategy) +for overview and examples. + +Note: Not all tf.distribute.Strategy implementations currently support +TensorFlow's partitioned variables (where a single variable is split across +multiple devices) at this time. + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_distribute_dataset

+ +View source + + + +Distributes a tf.data.Dataset instance provided via `dataset`. + +The returned distributed dataset can be iterated over similar to how +regular datasets can. +NOTE: Currently, the user cannot add any more transformations to a +distributed dataset. + +The following is an example: + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + +We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). + +In a multi-worker setting, we will first attempt to distribute the dataset +by attempting to detect whether the dataset is being created out of +ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so, +attempting to shard the input files. Note that there has to be at least one +input file per worker. If you have less than one input file per worker, we +suggest that you should disable distributing your dataset using the method +below. + +If that attempt is unsuccessful (e.g. the dataset is created from a +Dataset.range), we will shard the dataset evenly at the end by appending a +`.shard` operation to the end of the processing pipeline. This will cause +the entire preprocessing pipeline for all the data to be run on every +worker, and each worker will do redundant work. We will print a warning +if this method of sharding is selected. + +You can disable dataset sharding across workers using the +`auto_shard_policy` option in tf.data.experimental.DistributeOptions. + +Within each worker, we will also split the data among all the worker +devices (if more than one a present), and this will happen even if +multi-worker sharding is disabled using the method above. + +If the above batch splitting and dataset sharding logic is undesirable, +please use `experimental_distribute_datasets_from_function` instead, which +does not do any automatic splitting or sharding. + +You can also use the `element_spec` property of the distributed dataset +returned by this API to query the tf.TypeSpec of the elements returned +by the iterator. This can be used to set the `input_signature` property +of a tf.function. + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +@tf.function(input_signature=[dist_dataset.element_spec]) +def train_step(inputs): + # train model with inputs + return + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+
+View source
+
+
+
+Distributes tf.data.Dataset instances created by calls to `dataset_fn`.
+
+`dataset_fn` will be called once for each worker in the strategy. Each
+replica on that worker will dequeue one batch of inputs from the local
+`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued
+from the `Dataset` every step).
+
+This method can be used for several purposes. For example, where
+`experimental_distribute_dataset` is unable to shard the input files, this
+method might be used to manually shard the dataset (avoiding the slow
+fallback behavior in `experimental_distribute_dataset`). In cases where the
+dataset is infinite, this sharding can be done by creating dataset replicas
+that differ only in their random seed.
+`experimental_distribute_dataset` may also sometimes fail to split the
+batch across replicas on a worker. In that case, this method can be used
+where that limitation does not exist.
+
+The `dataset_fn` should take a tf.distribute.InputContext instance where
+information about batching and input replication can be accessed:
+
+```
+def dataset_fn(input_context):
+  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
+  d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
+  return d.shard(
+      input_context.num_input_pipelines, input_context.input_pipeline_id)
+
+inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn)
+
+for batch in inputs:
+  replica_results = strategy.run(replica_fn, args=(batch,))
+```
+
+IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a
+per-replica batch size, unlike `experimental_distribute_dataset`, which uses
+the global batch size. This may be computed using
+`input_context.get_per_replica_batch_size`.
+
+To query the tf.TypeSpec of the elements in the distributed dataset
+returned by this API, you need to use the `element_spec` property of the
+distributed iterator. This tf.TypeSpec can be used to set the
+`input_signature` property of a tf.function.
+
+```python
+# If you want to specify `input_signature` for a `tf.function` you must
+# first create the iterator.
+iterator = iter(inputs)
+
+@tf.function(input_signature=[iterator.element_spec])
+def replica_fn_with_signature(inputs):
+  # train the model with inputs
+  return
+
+for _ in range(steps):
+  strategy.run(replica_fn_with_signature,
+               args=(next(iterator),))
+```
+
+
+
+
+
+
+
+
+
+
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single
+value, this returns `(value,)`.
+
+ + + +
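+
+As a minimal sketch (assuming a single-machine `MirroredStrategy`; the
+number of entries in the result depends on how many replicas are present
+locally), a per-replica result can be unwrapped into a plain tuple:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+@tf.function
+def step():
+  ctx = tf.distribute.get_replica_context()
+  # Each replica returns its own replica id.
+  return tf.identity(ctx.replica_id_in_sync_group)
+
+per_replica = strategy.run(step)
+# One entry per local replica, as a plain tuple of tensors.
+local_values = strategy.experimental_local_results(per_replica)
+```
+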

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use +tf.distribute.Strategy.experimental_distribute_dataset +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+`session` + +(TensorFlow v1.x graph execution only) A session used for +initialization. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_run

+ +View source + + + +Runs ops in `fn` on each replica, with inputs from `input_iterator`. + +DEPRECATED: This method is not available in TF 2.x. Please switch +to using `run` instead. + +When eager execution is enabled, executes ops specified by `fn` on each +replica. Otherwise, builds a graph to execute the ops on each replica. + +Each replica will take a single, different input from the inputs provided by +one `get_next` call on the input iterator. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `replica_id_in_sync_group`. + +IMPORTANT: Depending on the tf.distribute.Strategy implementation being +used, and whether eager execution is enabled, `fn` may be called one or more +times (once for each replica). + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The inputs to the function must match the outputs +of `input_iterator.get_next()`. The output must be a tf.nest of +`Tensor`s. +
+`input_iterator` + +(Optional) input iterator from which the inputs are taken. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be `PerReplica` (if the values are unsynchronized), +`Mirrored` (if the values are kept in sync), or `Tensor` (if running on a +single replica). +
+ + + +

make_dataset_iterator

+ +View source + + + +Makes an iterator for input provided via `dataset`. + +DEPRECATED: This method is not available in TF 2.x. + +Data from the given dataset will be distributed evenly across all the +compute replicas. We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). +If this effort fails, an error will be thrown, and the user should instead +use `make_input_fn_iterator` which provides more control to the user, and +does not try to divide a batch across replicas. + +The user could also use `make_input_fn_iterator` if they want to +customize which input is fed to which replica/worker etc. + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be distributed evenly across all +replicas. +
+ + + + + + + + + + + +
Returns
+A `tf.distribute.InputIterator` which returns inputs for each step of the
+computation. The user should call `initialize` on the returned iterator.
+
+ + + +

make_input_fn_iterator

+
+View source
+
+
+
+Returns an iterator split across replicas created from an input function.
+
+DEPRECATED: This method is not available in TF 2.x.
+
+The `input_fn` should take a tf.distribute.InputContext object where
+information about batching and input sharding can be accessed:
+
+```
+def input_fn(input_context):
+  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
+  d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
+  return d.shard(input_context.num_input_pipelines,
+                 input_context.input_pipeline_id)
+with strategy.scope():
+  iterator = strategy.make_input_fn_iterator(input_fn)
+  replica_results = strategy.experimental_run(replica_fn, iterator)
+```
+
+The tf.data.Dataset returned by `input_fn` should have a per-replica
+batch size, which may be computed using
+`input_context.get_per_replica_batch_size`.
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`input_fn` + +A function taking a tf.distribute.InputContext object and +returning a tf.data.Dataset. +
+`replication_mode` + +an enum value of tf.distribute.InputReplicationMode. +Only `PER_WORKER` is supported currently, which means there will be +a single call to `input_fn` per worker. Replicas will dequeue from the +local tf.data.Dataset on their worker. +
+ + + + + + + + + + + +
Returns
+An iterator object that should first be `.initialize()`-ed. It may then +either be passed to `strategy.experimental_run()` or you can +`iterator.get_next()` to get the next value to pass to +`strategy.extended.call_for_each_replica()`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +
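+
+As a minimal sketch of the two aggregation modes described above (assuming
+a single-machine `MirroredStrategy`; how the global batch is split depends
+on the number of replicas):
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+dataset = tf.data.Dataset.range(8).batch(8)  # one global batch: [0, ..., 7]
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+per_replica = strategy.run(lambda x: x, args=(next(iter(dist_dataset)),))
+
+# Aggregate across replicas only: keeps the per-replica batch dimension.
+replica_sum = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica,
+                              axis=None)
+# Aggregate across replicas and the batch dimension: a scalar 0+1+...+7 = 28.
+total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=0)
+```
+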

run

+
+View source
+
+
+
+Run `fn` on each replica, with the given arguments.
+
+Executes ops specified by `fn` on each replica. If `args` or `kwargs` have
+tf.distribute.DistributedValues, such as those produced by a
+"distributed `Dataset`" or `experimental_distribute_values_from_function`,
+then when `fn` is executed on a particular replica, it will be executed
+with the components of those tf.distribute.DistributedValues that
+correspond to that replica.
+
+`fn` may call tf.distribute.get_replica_context() to access members such
+as `all_reduce`.
+
+All arguments in `args` or `kwargs` should either be nests of tensors or
+tf.distribute.DistributedValues containing tensors or composite tensors.
+
+IMPORTANT: Depending on the implementation of tf.distribute.Strategy and
+whether eager execution is enabled, `fn` may be called one or more times
+(once for each replica).
+
+#### Example usage:
+
+
+
+1. Constant tensor input.
+
+```
+>>> strategy = tf.distribute.MirroredStrategy()
+>>> tensor_input = tf.constant(3.0)
+>>> @tf.function
+... def replica_fn(input):
+...   return input*2.0
+>>> result = strategy.run(replica_fn, args=(tensor_input,))
+>>> result
+
+```
+
+2. DistributedValues input.
+
+```
+>>> strategy = tf.distribute.MirroredStrategy()
+>>> @tf.function
+... def run():
+...   def value_fn(value_context):
+...     return value_context.num_replicas_in_sync
+...   distributed_values = (
+...       strategy.experimental_distribute_values_from_function(
+...           value_fn))
+...   def replica_fn2(input):
+...     return input*2
+...   return strategy.run(replica_fn2, args=(distributed_values,))
+>>> result = run()
+>>> result
+
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return
+value is the same as the return value from `fn`. Each element in the
+structure can be either tf.distribute.DistributedValues or `Tensor`
+objects (for example, plain `Tensor`s when running on a single replica).
+
+ + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + +
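+
+As a minimal sketch (assuming a `MirroredStrategy`), variable creation for
+the distributed model typically goes inside the scope:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+with strategy.scope():
+  # Created through the strategy's variable creator, so the strategy
+  # decides how this variable is placed and replicated.
+  v = tf.Variable(1.0)
+
+# Outside the block this thread is back in the default context.
+```
+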

update_config_proto

+ +View source + + + +Returns a copy of `config_proto` modified for use with this strategy. + +DEPRECATED: This method is not available in TF 2.x. + +The updated config has something needed to run a strategy, e.g. +configuration to run collective ops, or device filters to improve +distributed training performance. + + + + + + + + + + +
Args
+`config_proto` + +a `tf.ConfigProto` object. +
+ + + + + + + + + + + +
Returns
+The updated copy of the `config_proto`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/StrategyExtended.md b/site/en/api_docs/python/tf/compat/v1/distribute/StrategyExtended.md new file mode 100644 index 00000000000..e5626feb03c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/StrategyExtended.md @@ -0,0 +1,1141 @@ +description: Additional APIs for algorithms that need to be distribution-aware. + +
+ + + + + + + + + + + + + + + + +
+
+# tf.compat.v1.distribute.StrategyExtended
+
+
+
+
+
+
+
+
+
+Additional APIs for algorithms that need to be distribution-aware.
+
+Inherits From: [`StrategyExtended`](../../../../tf/distribute/StrategyExtended.md)
+
+
+
+
+
+
+
+Note: For most usage of tf.distribute.Strategy, there should be no need to
+call these methods, since TensorFlow libraries (such as optimizers) already
+call these methods when needed on your behalf.
+
+Lower-level concepts:
+
+* Wrapped values: In order to represent values parallel across devices
+  (either replicas or the devices associated with a particular value), we
+  wrap them in a "PerReplica" or "Mirrored" object that contains a map
+  from replica id to values. "PerReplica" is used when the value may be
+  different across replicas, and "Mirrored" when the values are the same.
+* Unwrapping and merging: Consider calling a function `fn` on multiple
+  replicas, like `run(fn, args=[w])` with an
+  argument `w` that is a wrapped value. This means `w` will have a map taking
+  replica id `0` to `w0`, replica id `1` to `w1`, etc.
+  `run()` unwraps `w` before calling `fn`, so
+  it calls `fn(w0)` on `d0`, `fn(w1)` on `d1`, etc. It then merges the return
+  values from `fn()`, which can possibly result in wrapped values. For
+  example, let's say `fn()` returns a tuple with three components: `(x, a,
+  v0)` from replica 0, `(x, b, v1)` from replica 1, etc. If the first component
+  is the same object `x` from every replica, then the first component of the
+  merged result will also be `x`. If the second component is different (`a`,
+  `b`, ...) from each replica, then the merged value will have a wrapped map
+  from replica device to the different values. If the third component is the
+  members of a mirrored variable (`v` maps `d0` to `v0`, `d1` to `v1`, etc.),
+  then the merged result will be that mirrored variable (`v`).
+* Worker devices vs. parameter devices: Most replica computations will
+  happen on worker devices. Since we don't yet support model
+  parallelism, there will be one worker device per replica. When using
+  parameter servers or central storage, the set of devices holding
+  variables may be different; otherwise the parameter devices might
+  match the worker devices.
+
+*Replica context vs. Cross-replica context*
+
+A _replica context_ applies when we are in some function that is being called
+once for each replica. Otherwise we are in cross-replica context, which is
+useful for calling tf.distribute.Strategy methods which operate across the
+replicas (like `reduce_to()`). By default you start in a replica context
+(the "default single replica context") and then some methods can switch you
+back and forth. There is a third mode you can be in called _update context_
+used when updating variables.
+
+* tf.distribute.Strategy.scope: enters cross-replica context when
+  no other strategy is in scope.
+* tf.distribute.Strategy.run: calls a function in
+  replica context.
+* tf.distribute.ReplicaContext.merge_call: transitions from replica
+  context to cross-replica context.
+* tf.distribute.StrategyExtended.update: calls a function in an update
+  context from a cross-replica context.
+
+In a replica context, you may freely read the values of variables, but
+you may only update their value if they specify a way to aggregate the
+update using the `aggregation` parameter in the variable's constructor.
+In a cross-replica context, you may read or write variables (writes may
+need to be broadcast to all copies of the variable if it is mirrored).
+
+*Sync on read variables*
+
+In some cases, such as a metric, we want to accumulate a bunch of updates on
+each replica independently and only aggregate when reading. This can be a big
+performance win when the value is read only rarely (maybe the value is only
+read at the end of an epoch or when checkpointing). These are variables
+created by passing `synchronization=ON_READ` to the variable's constructor
+(and some value for `aggregation`).
+
+The strategy may choose to put the variable on multiple devices, like mirrored
+variables, but unlike mirrored variables we don't synchronize the updates to
+them to make sure they have the same value. Instead, the synchronization is
+performed when reading in cross-replica context. In a replica context, reads
+and writes are performed on the local copy (we allow reads so you can write
+code like `v = 0.9*v + 0.1*update`). We don't allow operations like
+`v.assign_add` in a cross-replica context for sync on read variables; right
+now we don't have a use case for such updates and depending on the aggregation
+mode such updates may not be sensible.
+
+*Locality*
+
+Depending on how a value is produced, it will have a type that will determine
+how it may be used.
+
+"Per-replica" values exist on the worker devices, with a different value for
+each replica. They are produced by iterating through a "distributed `Dataset`"
+returned by tf.distribute.Strategy.experimental_distribute_dataset and
+tf.distribute.Strategy.experimental_distribute_datasets_from_function. They
+are also the typical result returned by
+tf.distribute.Strategy.run. You typically can't use a
+per-replica value directly in a cross-replica context, without first resolving
+how to aggregate the values across replicas, for instance by using
+tf.distribute.Strategy.reduce.
+
+"Mirrored" values are like per-replica values, except we know that the values
+on all replicas are the same. We can safely read a mirrored value in a
+cross-replica context by using the value on any replica. You can convert
+a per-replica value into a mirrored value by using
+tf.distribute.ReplicaContext.all_reduce.
+
+Values can also have the same locality as a variable, which is a mirrored
+value but residing on the same devices as the variable (as opposed to the
+compute devices). Such values may be passed to a call to
+tf.distribute.StrategyExtended.update to update the value of a variable.
+You may use tf.distribute.StrategyExtended.colocate_vars_with to give a
+variable the same locality as another variable. This is useful, for example,
+for "slot" variables used by an optimizer for keeping track of statistics
+used to update a primary/model variable. You may convert a per-replica
+value to a variable's locality by using
+tf.distribute.StrategyExtended.reduce_to or
+tf.distribute.StrategyExtended.batch_reduce_to.
+
+In addition to slot variables which should be colocated with their primary
+variables, optimizers also define non-slot variables. These can be things like
+"number of step updates performed" or "beta1^t" and "beta2^t". Each strategy
+has some policy for which devices those variables should be copied to, called
+the "non-slot devices" (some subset of the parameter devices). We require that
+all non-slot variables are allocated on the same device, or mirrored across
+the same set of devices.
You can use
+tf.distribute.StrategyExtended.non_slot_devices to pick a consistent set of
+devices to pass to both tf.distribute.StrategyExtended.colocate_vars_with
+and tf.distribute.StrategyExtended.update_non_slot.
+
+*How to update a variable*
+
+The standard pattern for updating variables is to:
+
+1. In your function passed to tf.distribute.Strategy.run,
+   compute a list of (update, variable) pairs. For example, the update might
+   be the gradient of the loss with respect to the variable.
+2. Switch to cross-replica mode by calling
+   `tf.distribute.get_replica_context().merge_call()` with the updates and
+   variables as arguments.
+3. Call
+   tf.distribute.StrategyExtended.reduce_to(VariableAggregation.SUM, t, v)
+   (for one variable) or tf.distribute.StrategyExtended.batch_reduce_to
+   (for a list of variables) to sum the updates
+   and broadcast the result to the variable's devices.
+4. Call tf.distribute.StrategyExtended.update(v) for each variable to update
+   its value.
+
+Steps 2 through 4 are done automatically by class
+tf.keras.optimizers.Optimizer if you call its
+tf.keras.optimizers.Optimizer.apply_gradients method in a replica context.
+They are also done automatically if you call an `assign*` method on a (non
+sync-on-read) variable that was constructed with an aggregation method (which
+is used to determine the reduction used in step 3).
+
+*Distribute-aware layers*
+
+Layers are generally called in a replica context, except when defining a
+functional model. tf.distribute.in_cross_replica_context will let you
+determine which case you are in. If in a replica context,
+the tf.distribute.get_replica_context function will return a
+tf.distribute.ReplicaContext object. The `ReplicaContext` object has an
+`all_reduce` method for aggregating across all replicas. Alternatively, you
+can update variables following steps 2-4 above.
+
+Note: For new tf.distribute.Strategy implementations, please put all logic
+in a subclass of tf.distribute.StrategyExtended. The only code needed for
+the tf.distribute.Strategy subclass is for instantiating your subclass of
+tf.distribute.StrategyExtended in the `__init__` method.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`experimental_between_graph` + +Whether the strategy uses between-graph replication or not. + +This is expected to return a constant value that will not be changed +throughout its life cycle. +
+`experimental_require_static_shapes` + +Returns `True` if static shape is required; `False` otherwise. +
+`experimental_should_init` + +Whether initialization is needed. +
+`parameter_devices` + +Returns the tuple of all devices used to place variables. +
+`should_checkpoint` + +Whether checkpointing is needed. +
+`should_save_summary` + +Whether saving summaries is needed. +
+`worker_devices`
+
+Returns the tuple of all devices used for compute replica execution.
+
+ + + +## Methods + +

batch_reduce_to

+ +View source + + + +Combine multiple `reduce_to` calls into one for faster execution. + + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +Reduction type, an instance of tf.distribute.ReduceOp enum. +
+`value_destination_pairs` + +A sequence of (value, destinations) pairs. See +`reduce_to()` for a description. +
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+A list of mirrored values, one per pair in `value_destination_pairs`. +
+ + + +

broadcast_to

+ +View source + + + +Mirror a tensor on one device to all worker devices. + + + + + + + + + + + + + + +
Args
+`tensor` + +A Tensor value to broadcast. +
+`destinations` + +A mirrored variable or device string specifying the +destination devices to copy `tensor` to. +
+ + + + + + + + + + + +
Returns
+A value mirrored to `destinations` devices. +
+ + + +

call_for_each_replica

+ +View source + + + +Run `fn` once per replica. + +`fn` may call `tf.get_replica_context()` to access methods such as +`replica_id_in_sync_group` and `merge_call()`. + +`merge_call()` is used to communicate between the replicas and +re-enter the cross-replica context. All replicas pause their execution +having encountered a `merge_call()` call. After that the +`merge_fn`-function is executed. Its results are then unwrapped and +given back to each replica call. After that execution resumes until +`fn` is complete or encounters another `merge_call()`. Example: + +```python +# Called once in "cross-replica" context. +def merge_fn(distribution, three_plus_replica_id): + # sum the values across replicas + return sum(distribution.experimental_local_results(three_plus_replica_id)) + +# Called once per replica in `distribution`, in a "replica" context. +def fn(three): + replica_ctx = tf.get_replica_context() + v = three + replica_ctx.replica_id_in_sync_group + # Computes the sum of the `v` values across all replicas. + s = replica_ctx.merge_call(merge_fn, args=(v,)) + return s + v + +with distribution.scope(): + # in "cross-replica" context + ... + merged_results = distribution.run(fn, args=[3]) + # merged_results has the values from every replica execution of `fn`. + # This statement prints a list: + print(distribution.experimental_local_results(merged_results)) +``` + + + + + + + + + + + + + + + + +
Args
+`fn` + +function to run (will be run once per replica). +
+`args` + +Tuple or list with positional arguments for `fn`. +
+`kwargs` + +Dict with keyword arguments for `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across all replicas. +
+ + + +

colocate_vars_with

+
+View source
+
+
+
+Scope that controls which devices variables will be created on.
+
+No operations should be added to the graph inside this scope; it
+should only be used when creating variables (some implementations
+work by changing variable creation, others work by using a
+tf.compat.v1.colocate_with() scope).
+
+This may only be used inside `self.scope()`.
+
+#### Example usage:
+
+
+
+```
+with strategy.scope():
+  var1 = tf.Variable(...)
+  with strategy.extended.colocate_vars_with(var1):
+    # var2 and var3 will be created on the same device(s) as var1
+    var2 = tf.Variable(...)
+    var3 = tf.Variable(...)
+
+  def fn(v1, v2, v3):
+    # operates on v1 from var1, v2 from var2, and v3 from var3
+    ...
+
+  # `fn` runs on every device `var1` is on, `var2` and `var3` will be there
+  # too.
+  strategy.extended.update(var1, fn, args=(var2, var3))
+```
+
+
+
+
+
+
+
+
+
+
Args
+`colocate_with_variable` + +A variable created in this strategy's `scope()`. +Variables created while in the returned context manager will be on the +same set of devices as `colocate_with_variable`. +
+ + + + + + + + + + + +
Returns
+A context manager. +
+ + + +

experimental_make_numpy_dataset

+ +View source + + + +Makes a dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + + + + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be distributed evenly +across all replicas. Note that lists of Numpy arrays are stacked, as +that is normal tf.data.Dataset behavior. +
+`session` + +(TensorFlow v1.x graph execution only) A session used for +initialization. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_run_steps_on_iterator

+ +View source + + + +DEPRECATED: please use `run` instead. + +Run `fn` with input from `iterator` for `iterations` times. + +This method can be used to run a step function for training a number of +times using input from a dataset. + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +function to run using this distribution strategy. The function must +have the following signature: `def fn(context, inputs)`. `context` is an +instance of `MultiStepContext` that will be passed when `fn` is run. +`context` can be used to specify the outputs to be returned from `fn` +by calling `context.set_last_step_output`. It can also be used to +capture non tensor outputs by `context.set_non_tensor_output`. See +`MultiStepContext` documentation for more information. `inputs` will +have same type/structure as `iterator.get_next()`. Typically, `fn` +will use `call_for_each_replica` method of the strategy to distribute +the computation over multiple replicas. +
+`iterator` + +Iterator of a dataset that represents the input for `fn`. The +caller is responsible for initializing the iterator as needed. +
+`iterations` + +(Optional) Number of iterations that `fn` should be run. +Defaults to 1. +
+`initial_loop_values`
+
+(Optional) Initial values to be passed into the
+loop that runs `fn`. Defaults to `None`.
+
+ + + + + + + + + + + +
Returns
+Returns the `MultiStepContext` object which has the following properties, +among other things: +- run_op: An op that runs `fn` `iterations` times. +- last_step_outputs: A dictionary containing tensors set using +`context.set_last_step_output`. Evaluating this returns the value of +the tensors after the last iteration. +- non_tensor_outputs: A dictionary containing anything that was set by +`fn` by calling `context.set_non_tensor_output`. +
+ + + +

non_slot_devices

+ +View source + + + +Device(s) for non-slot variables. + +Create variables on these devices in a +`with colocate_vars_with(non_slot_devices(...)):` block. +Update those using `update_non_slot()`. + + + + + + + + + + +
Args
+`var_list` + +The list of variables being optimized, needed with the +default tf.distribute.Strategy. +
+ + + + + + + + + + + +
Returns
+A sequence of devices for non-slot variables. +
+ + + +

read_var

+ +View source + + + +Reads the value of a variable. + +Returns the aggregate value of a replica-local variable, or the +(read-only) value of any other variable. + + + + + + + + + + +
Args
+`v` + +A variable allocated within the scope of this tf.distribute.Strategy. +
+ + + + + + + + + + + +
Returns
+A tensor representing the value of `v`, aggregated across replicas if +necessary. +
+ + + +

reduce_to

+ +View source + + + +Combine (via e.g. sum or mean) values across replicas. + + + + + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +Reduction type, an instance of tf.distribute.ReduceOp enum. +
+`value` + +A per-replica value with one value per replica. +
+`destinations` + +A mirrored variable, a per-replica tensor, or a device +string. The return value will be copied to all destination devices (or +all the devices where the `destinations` value resides). To perform an +all-reduction, pass `value` to `destinations`. +
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+A tensor or value mirrored to `destinations`. +
+ + + +
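+
+As a hedged sketch of the usual pattern (assuming a `MirroredStrategy`; the
+constant value stands in for a real per-replica update such as a gradient),
+a per-replica value is reduced onto a variable's devices from inside a
+`merge_call`:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+with strategy.scope():
+  var = tf.Variable(0.0)
+
+def step_fn():
+  def merge_fn(distribution, per_replica_value):
+    # Cross-replica context: sum the per-replica values and mirror the
+    # result onto the devices where `var` lives.
+    return distribution.extended.reduce_to(
+        tf.distribute.ReduceOp.SUM, per_replica_value, destinations=var)
+
+  ctx = tf.distribute.get_replica_context()
+  value = tf.identity(1.0)  # stand-in for a per-replica update
+  return ctx.merge_call(merge_fn, args=(value,))
+
+reduced = strategy.run(step_fn)
+```
+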

update

+ +View source + + + +Run `fn` to update `var` using inputs mirrored to the same devices. + +If `var` is mirrored across multiple devices, then this implements +logic like: + +``` +results = {} +for device, v in var: + with tf.device(device): + # args and kwargs will be unwrapped if they are mirrored. + results[device] = fn(v, *args, **kwargs) +return merged(results) +``` + +Otherwise this returns `fn(var, *args, **kwargs)` colocated with `var`. + +Neither `args` nor `kwargs` may contain per-replica values. +If they contain mirrored values, they will be unwrapped before +calling `fn`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`var` + +Variable, possibly mirrored to multiple devices, to operate on. +
+`fn` + +Function to call. Should take the variable as the first argument. +
+`args` + +Tuple or list. Additional positional arguments to pass to `fn()`. +
+`kwargs` + +Dict with keyword arguments to pass to `fn()`. +
+`group` + +Boolean. Defaults to True. If False, the return value will be +unwrapped. +
+ + + + + + + + + + + +
Returns
+By default, the merged return value of `fn` across all replicas. The +merged result has dependencies to make sure that if it is evaluated at +all, the side effects (updates) will happen on every replica. If instead +"group=False" is specified, this function will return a nest of lists +where each list has an element per replica, and the caller is responsible +for ensuring all elements are executed. +
+ + + +
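+
+As a hedged sketch (assuming a `MirroredStrategy` in eager mode), the same
+update can be applied to every copy of a mirrored variable from a
+cross-replica context:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+with strategy.scope():
+  v = tf.Variable(1.0)
+
+def add_one(var):
+  return var.assign_add(1.0)
+
+# Runs `add_one` once per device holding a copy of `v` and returns the
+# merged result.
+result = strategy.extended.update(v, add_one)
+```
+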

update_non_slot

+ +View source + + + +Runs `fn(*args, **kwargs)` on `colocate_with` devices. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`colocate_with` + +The return value of `non_slot_devices()`. +
+`fn` + +Function to execute. +
+`args` + +Tuple or list. Positional arguments to pass to `fn()`. +
+`kwargs` + +Dict with keyword arguments to pass to `fn()`. +
+`group` + +Boolean. Defaults to True. If False, the return value will be +unwrapped. +
+ + + + + + + + + + + +
Returns
+Return value of `fn`, possibly merged across devices. +
+ + + +

value_container

+ +View source + + + +Returns the container that this per-replica `value` belongs to. + + + + + + + + + + + +
Args
+`value` + +A value returned by `run()` or a variable created in `scope()`. +
+ + + + + + + + + + + +
Returns
+A container that `value` belongs to. +If value does not belong to any container (including the case of +container having been destroyed), returns the value itself. +`value in experimental_local_results(value_container(value))` will +always be true. +
+ + + +

variable_created_in_scope

+
+View source
+
+
+
+Tests whether `v` was created while this strategy scope was active.
+
+Variables created inside the strategy scope are "owned" by it:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+with strategy.scope():
+  v = tf.Variable(1.)
+strategy.extended.variable_created_in_scope(v)  # True
+```
+
+Variables created outside the strategy are not owned by it:
+
+```python
+v = tf.Variable(1.)
+strategy.extended.variable_created_in_scope(v)  # False
+```
+
+
+
+
+
+
+
+
+
+
Args
+`v` + +A tf.Variable instance. +
+ + + + + + + + + + + +
Returns
+True if `v` was created inside the scope, False if not. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/cluster_resolver.md b/site/en/api_docs/python/tf/compat/v1/distribute/cluster_resolver.md new file mode 100644 index 00000000000..d066b974071 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/cluster_resolver.md @@ -0,0 +1,44 @@ +description: Library imports for ClusterResolvers. + +
+ + +
+ +# Module: tf.compat.v1.distribute.cluster_resolver + + + + + + + + + +Library imports for ClusterResolvers. + + +This library contains all implementations of ClusterResolvers. +ClusterResolvers are a way of specifying cluster information for distributed +execution. Built on top of existing `ClusterSpec` framework, ClusterResolvers +are a way for TensorFlow to communicate with various cluster management +systems (e.g. GCE, AWS, etc...). + +## Classes + +[`class ClusterResolver`](../../../../tf/distribute/cluster_resolver/ClusterResolver.md): Abstract class for all implementations of ClusterResolvers. + +[`class GCEClusterResolver`](../../../../tf/distribute/cluster_resolver/GCEClusterResolver.md): ClusterResolver for Google Compute Engine. + +[`class KubernetesClusterResolver`](../../../../tf/distribute/cluster_resolver/KubernetesClusterResolver.md): ClusterResolver for Kubernetes. + +[`class SimpleClusterResolver`](../../../../tf/distribute/cluster_resolver/SimpleClusterResolver.md): Simple implementation of ClusterResolver that accepts a ClusterSpec. + +[`class SlurmClusterResolver`](../../../../tf/distribute/cluster_resolver/SlurmClusterResolver.md): ClusterResolver for system with Slurm workload manager. + +[`class TFConfigClusterResolver`](../../../../tf/distribute/cluster_resolver/TFConfigClusterResolver.md): Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar. + +[`class TPUClusterResolver`](../../../../tf/distribute/cluster_resolver/TPUClusterResolver.md): Cluster Resolver for Google Cloud TPUs. + +[`class UnionResolver`](../../../../tf/distribute/cluster_resolver/UnionResolver.md): Performs a union on underlying ClusterResolvers. + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/experimental.md b/site/en/api_docs/python/tf/compat/v1/distribute/experimental.md new file mode 100644 index 00000000000..11f6575cd75 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/experimental.md @@ -0,0 +1,35 @@ +description: Experimental Distribution Strategy library. + +
+ + +
+ +# Module: tf.compat.v1.distribute.experimental + + + + + + + + + +Experimental Distribution Strategy library. + + + +## Classes + +[`class CentralStorageStrategy`](../../../../tf/compat/v1/distribute/experimental/CentralStorageStrategy.md): A one-machine strategy that puts all variables on a single device. + +[`class CollectiveCommunication`](../../../../tf/distribute/experimental/CollectiveCommunication.md): Communication choices for CollectiveOps. + +[`class CollectiveHints`](../../../../tf/distribute/experimental/CollectiveHints.md): Hints for collective operations like AllReduce. + +[`class MultiWorkerMirroredStrategy`](../../../../tf/compat/v1/distribute/experimental/MultiWorkerMirroredStrategy.md): A distribution strategy for synchronous training on multiple workers. + +[`class ParameterServerStrategy`](../../../../tf/compat/v1/distribute/experimental/ParameterServerStrategy.md): An asynchronous multi-worker parameter server tf.distribute strategy. + +[`class TPUStrategy`](../../../../tf/compat/v1/distribute/experimental/TPUStrategy.md): TPU distribution strategy implementation. + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/experimental/CentralStorageStrategy.md b/site/en/api_docs/python/tf/compat/v1/distribute/experimental/CentralStorageStrategy.md new file mode 100644 index 00000000000..e9e8a33d781 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/experimental/CentralStorageStrategy.md @@ -0,0 +1,938 @@ +description: A one-machine strategy that puts all variables on a single device. + +
+ + + + + + + + + + + + + + +
+ +# tf.compat.v1.distribute.experimental.CentralStorageStrategy + + + + + + + + + +A one-machine strategy that puts all variables on a single device. + +Inherits From: [`Strategy`](../../../../../tf/compat/v1/distribute/Strategy.md) + + + + + + + +Variables are assigned to local CPU or the only GPU. If there is more +than one GPU, compute operations (other than variable update operations) +will be replicated across all GPUs. + +#### For Example: + + +``` +strategy = tf.distribute.experimental.CentralStorageStrategy() +# Create a dataset +ds = tf.data.Dataset.range(5).batch(2) +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(ds) + +with strategy.scope(): + @tf.function + def train_step(val): + return val + 1 + + # Iterate over the distributed dataset + for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_distribute_dataset

+
+View source
+
+
+
+Distributes a tf.data.Dataset instance provided via `dataset`.
+
+The returned distributed dataset can be iterated over similarly to regular
+datasets.
+NOTE: Currently, the user cannot add any more transformations to a
+distributed dataset.
+
+The following is an example:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+# Create a dataset
+dataset = tf.data.TFRecordDataset([
+    "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])
+
+# Distribute that dataset
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+# Iterate over the distributed dataset
+for x in dist_dataset:
+  # process dataset elements
+  strategy.run(train_step, args=(x,))
+```
+
+We will assume that the input dataset is batched by the
+global batch size. With this assumption, we will make a best effort to
+divide each batch across all the replicas (one or more workers).
+
+In a multi-worker setting, we will first attempt to distribute the dataset
+by detecting whether it is being created out of reader datasets
+(e.g. TFRecordDataset, TextLineDataset, etc.) and, if so, attempting to
+shard the input files. Note that there has to be at least one input file
+per worker. If there are fewer input files than workers, we suggest
+disabling dataset sharding across workers using the option described below.
+
+If that attempt is unsuccessful (e.g. the dataset is created from a
+Dataset.range), we will shard the dataset evenly by appending a `.shard`
+operation to the end of the processing pipeline. This will cause
+the entire preprocessing pipeline for all the data to be run on every
+worker, and each worker will do redundant work. We will print a warning
+if this method of sharding is selected.
+
+You can disable dataset sharding across workers using the
+`auto_shard_policy` option in tf.data.experimental.DistributeOptions.
+
+Within each worker, we will also split the data among all the worker
+devices (if more than one is present), and this will happen even if
+multi-worker sharding is disabled using the method above.
+
+If the above batch splitting and dataset sharding logic is undesirable,
+please use `experimental_distribute_datasets_from_function` instead, which
+does not do any automatic splitting or sharding.
+
+You can also use the `element_spec` property of the distributed dataset
+returned by this API to query the tf.TypeSpec of the elements returned
+by the iterator. This can be used to set the `input_signature` property
+of a tf.function.
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+# Create a dataset
+dataset = tf.data.TFRecordDataset([
+    "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])
+
+# Distribute that dataset
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+@tf.function(input_signature=[dist_dataset.element_spec])
+def train_step(inputs):
+  # train model with inputs
+  return
+
+# Iterate over the distributed dataset
+for x in dist_dataset:
+  # process dataset elements
+  strategy.run(train_step, args=(x,))
+```
+
+
+
+
+
+
+
+
+
+
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+
+View source
+
+
+
+Distributes tf.data.Dataset instances created by calls to `dataset_fn`.
+
+`dataset_fn` will be called once for each worker in the strategy. Each
+replica on that worker will dequeue one batch of inputs from the local
+`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued
+from the `Dataset` every step).
+
+This method can be used for several purposes. For example, where
+`experimental_distribute_dataset` is unable to shard the input files, this
+method might be used to manually shard the dataset (avoiding the slow
+fallback behavior in `experimental_distribute_dataset`). In cases where the
+dataset is infinite, this sharding can be done by creating dataset replicas
+that differ only in their random seed.
+`experimental_distribute_dataset` may also sometimes fail to split the
+batch across replicas on a worker. In that case, this method can be used
+where that limitation does not exist.
+
+The `dataset_fn` should take a tf.distribute.InputContext instance where
+information about batching and input replication can be accessed:
+
+```
+def dataset_fn(input_context):
+  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
+  d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
+  return d.shard(
+      input_context.num_input_pipelines, input_context.input_pipeline_id)
+
+inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn)
+
+for batch in inputs:
+  replica_results = strategy.run(replica_fn, args=(batch,))
+```
+
+IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a
+per-replica batch size, unlike `experimental_distribute_dataset`, which uses
+the global batch size. This may be computed using
+`input_context.get_per_replica_batch_size`.
+
+To query the tf.TypeSpec of the elements in the distributed dataset
+returned by this API, you need to use the `element_spec` property of the
+distributed iterator. This tf.TypeSpec can be used to set the
+`input_signature` property of a tf.function.
+
+```python
+# If you want to specify `input_signature` for a `tf.function` you must
+# first create the iterator.
+iterator = iter(inputs)
+
+@tf.function(input_signature=[iterator.element_spec])
+def replica_fn_with_signature(inputs):
+  # train the model with inputs
+  return
+
+for _ in range(steps):
+  strategy.run(replica_fn_with_signature,
+               args=(next(iterator),))
+```
+
+
+
+
+
+
+
+
+
+
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single
+value, this returns `(value,)`.
+
+ + + +

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use +tf.distribute.Strategy.experimental_distribute_dataset +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+`session` + +(TensorFlow v1.x graph execution only) A session used for +initialization. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_run

+ +View source + + + +Runs ops in `fn` on each replica, with inputs from `input_iterator`. + +DEPRECATED: This method is not available in TF 2.x. Please switch +to using `run` instead. + +When eager execution is enabled, executes ops specified by `fn` on each +replica. Otherwise, builds a graph to execute the ops on each replica. + +Each replica will take a single, different input from the inputs provided by +one `get_next` call on the input iterator. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `replica_id_in_sync_group`. + +IMPORTANT: Depending on the tf.distribute.Strategy implementation being +used, and whether eager execution is enabled, `fn` may be called one or more +times (once for each replica). + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The inputs to the function must match the outputs +of `input_iterator.get_next()`. The output must be a tf.nest of +`Tensor`s. +
+`input_iterator` + +(Optional) input iterator from which the inputs are taken. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be `PerReplica` (if the values are unsynchronized), +`Mirrored` (if the values are kept in sync), or `Tensor` (if running on a +single replica). +
+ + + +

make_dataset_iterator

+ +View source + + + +Makes an iterator for input provided via `dataset`. + +DEPRECATED: This method is not available in TF 2.x. + +Data from the given dataset will be distributed evenly across all the +compute replicas. We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). +If this effort fails, an error will be thrown, and the user should instead +use `make_input_fn_iterator` which provides more control to the user, and +does not try to divide a batch across replicas. + +The user could also use `make_input_fn_iterator` if they want to +customize which input is fed to which replica/worker etc. + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be distributed evenly across all +replicas. +
+ + + + + + + + + + + +
Returns
+A `tf.distribute.InputIterator` which returns inputs for each step of the
+computation. The user should call `initialize` on the returned iterator.
+
+ + + +

make_input_fn_iterator

+
+View source
+
+
+
+Returns an iterator split across replicas created from an input function.
+
+DEPRECATED: This method is not available in TF 2.x.
+
+The `input_fn` should take a tf.distribute.InputContext object where
+information about batching and input sharding can be accessed:
+
+```
+def input_fn(input_context):
+  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
+  d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
+  return d.shard(input_context.num_input_pipelines,
+                 input_context.input_pipeline_id)
+with strategy.scope():
+  iterator = strategy.make_input_fn_iterator(input_fn)
+  replica_results = strategy.experimental_run(replica_fn, iterator)
+```
+
+The tf.data.Dataset returned by `input_fn` should have a per-replica
+batch size, which may be computed using
+`input_context.get_per_replica_batch_size`.
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`input_fn` + +A function taking a tf.distribute.InputContext object and +returning a tf.data.Dataset. +
+`replication_mode` + +an enum value of tf.distribute.InputReplicationMode. +Only `PER_WORKER` is supported currently, which means there will be +a single call to `input_fn` per worker. Replicas will dequeue from the +local tf.data.Dataset on their worker. +
+ + + + + + + + + + + +
Returns
+An iterator object that should first be `.initialize()`-ed. It may then +either be passed to `strategy.experimental_run()` or you can +`iterator.get_next()` to get the next value to pass to +`strategy.extended.call_for_each_replica()`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

run

+
+View source
+
+
+
+Run `fn` on each replica, with the given arguments.
+
+Executes ops specified by `fn` on each replica. If `args` or `kwargs` have
+tf.distribute.DistributedValues, such as those produced by a
+"distributed `Dataset`" or `experimental_distribute_values_from_function`,
+then when `fn` is executed on a particular replica, it will be executed
+with the components of those tf.distribute.DistributedValues that
+correspond to that replica.
+
+`fn` may call tf.distribute.get_replica_context() to access members such
+as `all_reduce`.
+
+All arguments in `args` or `kwargs` should either be nests of tensors or
+tf.distribute.DistributedValues containing tensors or composite tensors.
+
+IMPORTANT: Depending on the implementation of tf.distribute.Strategy and
+whether eager execution is enabled, `fn` may be called one or more times
+(once for each replica).
+
+#### Example usage:
+
+
+
+1. Constant tensor input.
+
+```
+>>> strategy = tf.distribute.MirroredStrategy()
+>>> tensor_input = tf.constant(3.0)
+>>> @tf.function
+... def replica_fn(input):
+...   return input*2.0
+>>> result = strategy.run(replica_fn, args=(tensor_input,))
+>>> result
+
+```
+
+2. DistributedValues input.
+
+```
+>>> strategy = tf.distribute.MirroredStrategy()
+>>> @tf.function
+... def run():
+...   def value_fn(value_context):
+...     return value_context.num_replicas_in_sync
+...   distributed_values = (
+...       strategy.experimental_distribute_values_from_function(
+...           value_fn))
+...   def replica_fn2(input):
+...     return input*2
+...   return strategy.run(replica_fn2, args=(distributed_values,))
+>>> result = run()
+>>> result
+
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return
+value is the same as the return value from `fn`. Each element in the
+structure can be either tf.distribute.DistributedValues or `Tensor`
+objects (for example, plain `Tensor`s when running on a single replica).
+
+ + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + +

update_config_proto

+ +View source + + + +Returns a copy of `config_proto` modified for use with this strategy. + +DEPRECATED: This method is not available in TF 2.x. + +The updated config has something needed to run a strategy, e.g. +configuration to run collective ops, or device filters to improve +distributed training performance. + + + + + + + + + + +
Args
+`config_proto` + +a `tf.ConfigProto` object. +
+ + + + + + + + + + + +
Returns
+The updated copy of the `config_proto`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/experimental/MultiWorkerMirroredStrategy.md b/site/en/api_docs/python/tf/compat/v1/distribute/experimental/MultiWorkerMirroredStrategy.md new file mode 100644 index 00000000000..33d9bc7981f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/experimental/MultiWorkerMirroredStrategy.md @@ -0,0 +1,941 @@ +description: A distribution strategy for synchronous training on multiple workers. + +
+ + + + + + + + + + + + + + +
+ 

# tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy



A distribution strategy for synchronous training on multiple workers.

Inherits From: [`Strategy`](../../../../../tf/compat/v1/distribute/Strategy.md)



This strategy implements synchronous distributed training across multiple
workers, each with potentially multiple GPUs. Similar to
tf.distribute.MirroredStrategy, it creates copies of all variables in the
model on each device across all workers.

It uses CollectiveOps's implementation of multi-worker all-reduce to
keep variables in sync. A collective op is a single op in the
TensorFlow graph which can automatically choose an all-reduce algorithm in
the TensorFlow runtime according to hardware, network topology and tensor
sizes.

By default it uses all local GPUs, or the CPU, for single-worker training.

When the 'TF_CONFIG' environment variable is set, it parses the cluster_spec,
task_type and task_id from 'TF_CONFIG' and becomes a multi-worker strategy
that mirrors the model on the GPUs of all machines in the cluster. In the
current implementation, it uses all GPUs in the cluster and assumes all
workers have the same number of GPUs.

You can also pass a `distribute.cluster_resolver.ClusterResolver` instance
when instantiating the strategy. The task_type, task_id etc. will then be
parsed from the resolver instance instead of from the `TF_CONFIG` env var.

It supports both eager mode and graph mode. However, in eager mode it has to
set up the eager context in its constructor, so all eager-mode ops have to
run after the strategy object is created.

+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +
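Before the individual methods, here is a minimal construction sketch. The two-worker 'TF_CONFIG' value is purely illustrative; in practice it is set by your cluster manager before the program starts.

```python
import json
import os

# Illustrative cluster spec for a two-worker job (hosts and ports are made up).
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:23456"]},
    "task": {"type": "worker", "index": 0},
})

# The strategy must be created after 'TF_CONFIG' is set.
strategy = tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy()
```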

experimental_distribute_dataset

+ 
View source

Distributes a tf.data.Dataset instance provided via `dataset`.

The returned distributed dataset can be iterated over similarly to how
regular datasets can.
NOTE: Currently, the user cannot add any more transformations to a
distributed dataset.

The following is an example:

```python
strategy = tf.distribute.MirroredStrategy()

# Create a dataset
dataset = tf.data.TFRecordDataset([
  "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])

# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)

# Iterate over the distributed dataset
for x in dist_dataset:
  # process dataset elements
  strategy.run(train_step, args=(x,))
```

We will assume that the input dataset is batched by the
global batch size. With this assumption, we will make a best effort to
divide each batch across all the replicas (one or more workers).

In a multi-worker setting, we will first attempt to distribute the dataset
by attempting to detect whether the dataset is being created out of
ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so,
attempting to shard the input files. Note that there has to be at least one
input file per worker. If you have fewer than one input file per worker, we
suggest that you disable distributing your dataset using the method
below.

If that attempt is unsuccessful (e.g. the dataset is created from a
Dataset.range), we will shard the dataset evenly at the end by appending a
`.shard` operation to the end of the processing pipeline. This will cause
the entire preprocessing pipeline for all the data to be run on every
worker, and each worker will do redundant work. We will print a warning
if this method of sharding is selected.

You can disable dataset sharding across workers using the
`auto_shard_policy` option in tf.data.experimental.DistributeOptions.

Within each worker, we will also split the data among all the worker
devices (if more than one is present), and this will happen even if
multi-worker sharding is disabled using the method above.

If the above batch splitting and dataset sharding logic is undesirable,
please use `experimental_distribute_datasets_from_function` instead, which
does not do any automatic splitting or sharding.

You can also use the `element_spec` property of the distributed dataset
returned by this API to query the tf.TypeSpec of the elements returned
by the iterator. This can be used to set the `input_signature` property
of a tf.function.

```python
strategy = tf.distribute.MirroredStrategy()

# Create a dataset
dataset = tf.data.TFRecordDataset([
  "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])

# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function(input_signature=[dist_dataset.element_spec])
def train_step(inputs):
  # train model with inputs
  return

# Iterate over the distributed dataset
for x in dist_dataset:
  # process dataset elements
  strategy.run(train_step, args=(x,))
```

Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. Each +replica on that worker will dequeue one batch of inputs from the local +`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued +from the `Dataset` every step). + +This method can be used for several purposes. For example, where +`experimental_distribute_dataset` is unable to shard the input files, this +method might be used to manually shard the dataset (avoiding the slow +fallback behavior in `experimental_distribute_dataset`). In cases where the +dataset is infinite, this sharding can be done by creating dataset replicas +that differ only in their random seed. +`experimental_distribute_dataset` may also sometimes fail to split the +batch across replicas on a worker. In that case, this method can be used +where that limitation does not exist. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + +To query the tf.TypeSpec of the elements in the distributed dataset +returned by this API, you need to use the `element_spec` property of the +distributed iterator. This tf.TypeSpec can be used to set the +`input_signature` property of a tf.function. + +```python +# If you want to specify `input_signature` for a `tf.function` you must +# first create the iterator. +iterator = iter(inputs) + +@tf.function(input_signature=[iterator.element_spec]) +def replica_fn_with_signature(inputs): + # train the model with inputs + return + +for _ in range(steps): + strategy.run(replica_fn_with_signature, + args=(next(iterator),)) +``` + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single
value, this returns `(value,)`.
+ + + +
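A short sketch of unpacking per-replica results on the local worker; `strategy`, `step_fn`, and `batch` are placeholders carried over from earlier examples:

```python
per_replica_output = strategy.run(step_fn, args=(batch,))

# Unpack into a plain tuple with one entry per local replica; on a single
# replica this is simply a 1-tuple.
local_values = strategy.experimental_local_results(per_replica_output)
for replica_id, value in enumerate(local_values):
  print(replica_id, value)
```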

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use +tf.distribute.Strategy.experimental_distribute_dataset +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+`session` + +(TensorFlow v1.x graph execution only) A session used for +initialization. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_run

+ +View source + + + +Runs ops in `fn` on each replica, with inputs from `input_iterator`. + +DEPRECATED: This method is not available in TF 2.x. Please switch +to using `run` instead. + +When eager execution is enabled, executes ops specified by `fn` on each +replica. Otherwise, builds a graph to execute the ops on each replica. + +Each replica will take a single, different input from the inputs provided by +one `get_next` call on the input iterator. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `replica_id_in_sync_group`. + +IMPORTANT: Depending on the tf.distribute.Strategy implementation being +used, and whether eager execution is enabled, `fn` may be called one or more +times (once for each replica). + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The inputs to the function must match the outputs +of `input_iterator.get_next()`. The output must be a tf.nest of +`Tensor`s. +
+`input_iterator` + +(Optional) input iterator from which the inputs are taken. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be `PerReplica` (if the values are unsynchronized), +`Mirrored` (if the values are kept in sync), or `Tensor` (if running on a +single replica). +
+ + + +
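A hedged sketch of the deprecated TF 1.x pattern, pairing this method with `make_input_fn_iterator` (see that method below); `input_fn` and `step_fn` are placeholders:

```python
# Deprecated pattern: prefer `run` with a distributed dataset in TF 2.x.
iterator = strategy.make_input_fn_iterator(input_fn)
replica_results = strategy.experimental_run(step_fn, iterator)
```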

make_dataset_iterator

+ +View source + + + +Makes an iterator for input provided via `dataset`. + +DEPRECATED: This method is not available in TF 2.x. + +Data from the given dataset will be distributed evenly across all the +compute replicas. We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). +If this effort fails, an error will be thrown, and the user should instead +use `make_input_fn_iterator` which provides more control to the user, and +does not try to divide a batch across replicas. + +The user could also use `make_input_fn_iterator` if they want to +customize which input is fed to which replica/worker etc. + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be distributed evenly across all +replicas. +
+ + + + + + + + + + + +
Returns
+A `tf.distribute.InputIterator` which returns inputs for each step of the
computation. The user should call `initialize` on the returned iterator.
+ + + +
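A graph-mode sketch of the usage implied above. The `initialize` call follows the Returns note; `features` and `global_batch_size` are placeholders, and the exact session wiring may differ in real code:

```python
dataset = tf.data.Dataset.from_tensor_slices(features).batch(global_batch_size)
iterator = strategy.make_dataset_iterator(dataset)

with tf.compat.v1.Session() as sess:
  sess.run(iterator.initialize())          # per the Returns note above
  per_replica_batch = iterator.get_next()  # inputs for one training step
```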

make_input_fn_iterator

+ +View source + + + +Returns an iterator split across replicas created from an input function. + +DEPRECATED: This method is not available in TF 2.x. + +The `input_fn` should take an tf.distribute.InputContext object where +information about batching and input sharding can be accessed: + +``` +def input_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard(input_context.num_input_pipelines, + input_context.input_pipeline_id) +with strategy.scope(): + iterator = strategy.make_input_fn_iterator(input_fn) + replica_results = strategy.experimental_run(replica_fn, iterator) +``` + +The tf.data.Dataset returned by `input_fn` should have a per-replica +batch size, which may be computed using +`input_context.get_per_replica_batch_size`. + + + + + + + + + + + + + +
Args
+`input_fn` + +A function taking a tf.distribute.InputContext object and +returning a tf.data.Dataset. +
+`replication_mode` + +an enum value of tf.distribute.InputReplicationMode. +Only `PER_WORKER` is supported currently, which means there will be +a single call to `input_fn` per worker. Replicas will dequeue from the +local tf.data.Dataset on their worker. +
+ + + + + + + + + + + +
Returns
+An iterator object that should first be `.initialize()`-ed. It may then +either be passed to `strategy.experimental_run()` or you can +`iterator.get_next()` to get the next value to pass to +`strategy.extended.call_for_each_replica()`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +
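To make the `MEAN` and partial-batch point above concrete, a small illustrative fragment (`step_fn` and `batch` are placeholders):

```python
per_replica_losses = strategy.run(step_fn, args=(batch,))

# Mean over the global batch: reduces across replicas and the batch axis and
# divides by the true global batch size (6 in the partial-batch example
# above), rather than averaging per-replica means with unequal weights.
mean_loss = strategy.reduce(
    tf.distribute.ReduceOp.MEAN, per_replica_losses, axis=0)
```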

run

+ +View source + + + +Run `fn` on each replica, with the given arguments. + +Executes ops specified by `fn` on each replica. If `args` or `kwargs` have +tf.distribute.DistributedValues, such as those produced by a +"distributed `Dataset`" or `experimental_distribute_values_from_function` +when `fn` is executed on a particular replica, it will be executed with the +component of tf.distribute.DistributedValues that correspond to that +replica. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `all_reduce`. + +All arguments in `args` or `kwargs` should either be nest of tensors or +tf.distribute.DistributedValues containing tensors or composite tensors. + +IMPORTANT: Depending on the implementation of tf.distribute.Strategy and +whether eager execution is enabled, `fn` may be called one or more times ( +once for each replica). + +#### Example usage: + + + +1. Constant tensor input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> tensor_input = tf.constant(3.0) +>>> @tf.function +... def replica_fn(input): +... return input*2.0 +>>> result = strategy.run(replica_fn, args=(tensor_input,)) +>>> result + +``` + +2. DistributedValues input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> @tf.function +... def run(): +... def value_fn(value_context): +... return value_context.num_replicas_in_sync +... distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +... def replica_fn2(input): +... return input*2 +... return strategy.run(replica_fn2, args=(distributed_values,)) +>>> result = run() +>>> result + +``` + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return
value is the same as the return value from `fn`. Each element in the
structure can be either tf.distribute.DistributedValues or `Tensor`
objects (for example, plain `Tensor`s if running on a single replica).
+ + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + +

update_config_proto

+ 
View source

Returns a copy of `config_proto` modified for use with this strategy.

DEPRECATED: This method is not available in TF 2.x.

The updated config contains the settings needed to run this strategy, e.g.
configuration for running collective ops, or device filters that improve
distributed training performance.
Args
+`config_proto` + +a `tf.ConfigProto` object. +
+ + + + + + + + + + + +
Returns
+The updated copy of the `config_proto`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/experimental/ParameterServerStrategy.md b/site/en/api_docs/python/tf/compat/v1/distribute/experimental/ParameterServerStrategy.md new file mode 100644 index 00000000000..58b2c6b31af --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/experimental/ParameterServerStrategy.md @@ -0,0 +1,983 @@ +description: An asynchronous multi-worker parameter server tf.distribute strategy. + +
+ + + + + + + + + + + + + + +
+ 

# tf.compat.v1.distribute.experimental.ParameterServerStrategy



An asynchronous multi-worker parameter server tf.distribute strategy.

Inherits From: [`Strategy`](../../../../../tf/compat/v1/distribute/Strategy.md)



This strategy requires two roles: workers and parameter servers. Variables and
updates to those variables are assigned to parameter servers, and other
operations are assigned to workers.

When each worker has more than one GPU, operations are replicated on all
GPUs. Even though operations may be replicated, variables are not, and each
worker shares a common view of which parameter server a variable is assigned
to.

By default it uses `TFConfigClusterResolver` to detect configurations for
multi-worker training. This requires a 'TF_CONFIG' environment variable, and
the 'TF_CONFIG' must have a cluster spec.

This class assumes each worker is running the same code independently, while
parameter servers run a standard server. This means that while each
worker synchronously computes a single gradient update across all its GPUs,
updates between workers proceed asynchronously. Operations that occur only on
the first replica (such as incrementing the global step) occur on the
first replica *of every worker*.

You are expected to call `call_for_each_replica(fn, ...)` for any
operations which can potentially be replicated across replicas (i.e. multiple
GPUs), even if there is only a CPU or a single GPU. When defining `fn`, extra
caution needs to be taken:

1) It is generally not recommended to open a device scope under the strategy's
scope. A device scope (i.e. calling tf.device) will be merged with or
override the device for operations but will not change the device for
variables.

2) It is also not recommended to open a colocation scope (i.e. calling
tf.compat.v1.colocate_with) under the strategy's scope. For colocating
variables, use `strategy.extended.colocate_vars_with` instead. Colocation of
ops will possibly create device assignment conflicts.

Note: This strategy only works with the Estimator API. Pass an instance of
this strategy to the `train_distribute` argument when you create the
`RunConfig`. This instance of `RunConfig` should then be passed to the
`Estimator` instance on which `train_and_evaluate` is called.

#### For Example:


```
strategy = tf.distribute.experimental.ParameterServerStrategy()
run_config = tf.estimator.RunConfig(
    train_distribute=strategy)
estimator = tf.estimator.Estimator(config=run_config)
tf.estimator.train_and_evaluate(estimator, ...)
```

+`cluster_resolver` + +Optional +tf.distribute.cluster_resolver.ClusterResolver object. Defaults to a +tf.distribute.cluster_resolver.TFConfigClusterResolver. +
+ + + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +
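Before the individual methods, a minimal construction sketch. The cluster spec with two workers and one parameter server is purely illustrative; real values come from your cluster manager.

```python
import json
import os

# Illustrative 'TF_CONFIG' with worker and ps roles (hosts/ports are made up).
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["host1:2222", "host2:2222"],
        "ps": ["host3:2222"],
    },
    "task": {"type": "worker", "index": 0},
})

strategy = tf.compat.v1.distribute.experimental.ParameterServerStrategy()
```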

experimental_distribute_dataset

+ +View source + + + +Distributes a tf.data.Dataset instance provided via `dataset`. + +The returned distributed dataset can be iterated over similar to how +regular datasets can. +NOTE: Currently, the user cannot add any more transformations to a +distributed dataset. + +The following is an example: + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + +We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). + +In a multi-worker setting, we will first attempt to distribute the dataset +by attempting to detect whether the dataset is being created out of +ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so, +attempting to shard the input files. Note that there has to be at least one +input file per worker. If you have less than one input file per worker, we +suggest that you should disable distributing your dataset using the method +below. + +If that attempt is unsuccessful (e.g. the dataset is created from a +Dataset.range), we will shard the dataset evenly at the end by appending a +`.shard` operation to the end of the processing pipeline. This will cause +the entire preprocessing pipeline for all the data to be run on every +worker, and each worker will do redundant work. We will print a warning +if this method of sharding is selected. + +You can disable dataset sharding across workers using the +`auto_shard_policy` option in tf.data.experimental.DistributeOptions. + +Within each worker, we will also split the data among all the worker +devices (if more than one a present), and this will happen even if +multi-worker sharding is disabled using the method above. + +If the above batch splitting and dataset sharding logic is undesirable, +please use `experimental_distribute_datasets_from_function` instead, which +does not do any automatic splitting or sharding. + +You can also use the `element_spec` property of the distributed dataset +returned by this API to query the tf.TypeSpec of the elements returned +by the iterator. This can be used to set the `input_signature` property +of a tf.function. + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +@tf.function(input_signature=[dist_dataset.element_spec]) +def train_step(inputs): + # train model with inputs + return + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. Each +replica on that worker will dequeue one batch of inputs from the local +`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued +from the `Dataset` every step). + +This method can be used for several purposes. For example, where +`experimental_distribute_dataset` is unable to shard the input files, this +method might be used to manually shard the dataset (avoiding the slow +fallback behavior in `experimental_distribute_dataset`). In cases where the +dataset is infinite, this sharding can be done by creating dataset replicas +that differ only in their random seed. +`experimental_distribute_dataset` may also sometimes fail to split the +batch across replicas on a worker. In that case, this method can be used +where that limitation does not exist. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + +To query the tf.TypeSpec of the elements in the distributed dataset +returned by this API, you need to use the `element_spec` property of the +distributed iterator. This tf.TypeSpec can be used to set the +`input_signature` property of a tf.function. + +```python +# If you want to specify `input_signature` for a `tf.function` you must +# first create the iterator. +iterator = iter(inputs) + +@tf.function(input_signature=[iterator.element_spec]) +def replica_fn_with_signature(inputs): + # train the model with inputs + return + +for _ in range(steps): + strategy.run(replica_fn_with_signature, + args=(next(iterator),)) +``` + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single
value, this returns `(value,)`.
+ + + +

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use +tf.distribute.Strategy.experimental_distribute_dataset +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+`session` + +(TensorFlow v1.x graph execution only) A session used for +initialization. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_run

+ +View source + + + +Runs ops in `fn` on each replica, with inputs from `input_iterator`. + +DEPRECATED: This method is not available in TF 2.x. Please switch +to using `run` instead. + +When eager execution is enabled, executes ops specified by `fn` on each +replica. Otherwise, builds a graph to execute the ops on each replica. + +Each replica will take a single, different input from the inputs provided by +one `get_next` call on the input iterator. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `replica_id_in_sync_group`. + +IMPORTANT: Depending on the tf.distribute.Strategy implementation being +used, and whether eager execution is enabled, `fn` may be called one or more +times (once for each replica). + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The inputs to the function must match the outputs +of `input_iterator.get_next()`. The output must be a tf.nest of +`Tensor`s. +
+`input_iterator` + +(Optional) input iterator from which the inputs are taken. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be `PerReplica` (if the values are unsynchronized), +`Mirrored` (if the values are kept in sync), or `Tensor` (if running on a +single replica). +
+ + + +

make_dataset_iterator

+ +View source + + + +Makes an iterator for input provided via `dataset`. + +DEPRECATED: This method is not available in TF 2.x. + +Data from the given dataset will be distributed evenly across all the +compute replicas. We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). +If this effort fails, an error will be thrown, and the user should instead +use `make_input_fn_iterator` which provides more control to the user, and +does not try to divide a batch across replicas. + +The user could also use `make_input_fn_iterator` if they want to +customize which input is fed to which replica/worker etc. + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be distributed evenly across all +replicas. +
+ + + + + + + + + + + +
Returns
+A `tf.distribute.InputIterator` which returns inputs for each step of the
computation. The user should call `initialize` on the returned iterator.
+ + + +

make_input_fn_iterator

+ +View source + + + +Returns an iterator split across replicas created from an input function. + +DEPRECATED: This method is not available in TF 2.x. + +The `input_fn` should take an tf.distribute.InputContext object where +information about batching and input sharding can be accessed: + +``` +def input_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard(input_context.num_input_pipelines, + input_context.input_pipeline_id) +with strategy.scope(): + iterator = strategy.make_input_fn_iterator(input_fn) + replica_results = strategy.experimental_run(replica_fn, iterator) +``` + +The tf.data.Dataset returned by `input_fn` should have a per-replica +batch size, which may be computed using +`input_context.get_per_replica_batch_size`. + + + + + + + + + + + + + +
Args
+`input_fn` + +A function taking a tf.distribute.InputContext object and +returning a tf.data.Dataset. +
+`replication_mode` + +an enum value of tf.distribute.InputReplicationMode. +Only `PER_WORKER` is supported currently, which means there will be +a single call to `input_fn` per worker. Replicas will dequeue from the +local tf.data.Dataset on their worker. +
+ + + + + + + + + + + +
Returns
+An iterator object that should first be `.initialize()`-ed. It may then +either be passed to `strategy.experimental_run()` or you can +`iterator.get_next()` to get the next value to pass to +`strategy.extended.call_for_each_replica()`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

run

+ +View source + + + +Run `fn` on each replica, with the given arguments. + +Executes ops specified by `fn` on each replica. If `args` or `kwargs` have +tf.distribute.DistributedValues, such as those produced by a +"distributed `Dataset`" or `experimental_distribute_values_from_function` +when `fn` is executed on a particular replica, it will be executed with the +component of tf.distribute.DistributedValues that correspond to that +replica. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `all_reduce`. + +All arguments in `args` or `kwargs` should either be nest of tensors or +tf.distribute.DistributedValues containing tensors or composite tensors. + +IMPORTANT: Depending on the implementation of tf.distribute.Strategy and +whether eager execution is enabled, `fn` may be called one or more times ( +once for each replica). + +#### Example usage: + + + +1. Constant tensor input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> tensor_input = tf.constant(3.0) +>>> @tf.function +... def replica_fn(input): +... return input*2.0 +>>> result = strategy.run(replica_fn, args=(tensor_input,)) +>>> result + +``` + +2. DistributedValues input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> @tf.function +... def run(): +... def value_fn(value_context): +... return value_context.num_replicas_in_sync +... distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +... def replica_fn2(input): +... return input*2 +... return strategy.run(replica_fn2, args=(distributed_values,)) +>>> result = run() +>>> result + +``` + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return
value is the same as the return value from `fn`. Each element in the
structure can be either tf.distribute.DistributedValues or `Tensor`
objects (for example, plain `Tensor`s if running on a single replica).
+ + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + +
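As a sketch of how the scope interacts with this strategy's variable placement (behaviour as described in the class overview; the variable shown is only an example):

```python
with strategy.scope():
  # Variables created under the scope are assigned to parameter server
  # devices, while other operations run on the workers.
  weights = tf.Variable(tf.zeros([10, 10]), name="weights")
```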

update_config_proto

+ 
View source

Returns a copy of `config_proto` modified for use with this strategy.

DEPRECATED: This method is not available in TF 2.x.

The updated config contains the settings needed to run this strategy, e.g.
configuration for running collective ops, or device filters that improve
distributed training performance.
Args
+`config_proto` + +a `tf.ConfigProto` object. +
+ + + + + + + + + + + +
Returns
+The updated copy of the `config_proto`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/experimental/TPUStrategy.md b/site/en/api_docs/python/tf/compat/v1/distribute/experimental/TPUStrategy.md new file mode 100644 index 00000000000..2955dd066b1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/experimental/TPUStrategy.md @@ -0,0 +1,941 @@ +description: TPU distribution strategy implementation. + +
+ + + + + + + + + + + + + + +
+ +# tf.compat.v1.distribute.experimental.TPUStrategy + + + + + + + + + +TPU distribution strategy implementation. + +Inherits From: [`Strategy`](../../../../../tf/compat/v1/distribute/Strategy.md) + + + + + + + + + + + + + + + + + + + + + + + +
+`tpu_cluster_resolver` + +A tf.distribute.cluster_resolver.TPUClusterResolver, +which provides information about the TPU cluster. +
+`steps_per_run` + +Number of steps to run on device before returning to the +host. Note that this can have side-effects on performance, hooks, +metrics, summaries etc. +This parameter is only used when Distribution Strategy is used with +estimator or keras. +
+`device_assignment` + +Optional tf.tpu.experimental.DeviceAssignment to +specify the placement of replicas on the TPU cluster. Currently only +supports the usecase of using a single core within a TPU cluster. +
+ + + + + + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+`steps_per_run` + +DEPRECATED: use .extended.steps_per_run instead. +
+ + + +## Methods + +
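Before the individual methods, a minimal construction sketch that mirrors the initialization steps shown later in `run`; the empty `tpu=''` argument assumes a Colab- or Cloud-provided TPU address:

```python
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

strategy = tf.compat.v1.distribute.experimental.TPUStrategy(resolver)
```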

experimental_distribute_dataset

+ +View source + + + +Distributes a tf.data.Dataset instance provided via `dataset`. + +The returned distributed dataset can be iterated over similar to how +regular datasets can. +NOTE: Currently, the user cannot add any more transformations to a +distributed dataset. + +The following is an example: + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + +We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). + +In a multi-worker setting, we will first attempt to distribute the dataset +by attempting to detect whether the dataset is being created out of +ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so, +attempting to shard the input files. Note that there has to be at least one +input file per worker. If you have less than one input file per worker, we +suggest that you should disable distributing your dataset using the method +below. + +If that attempt is unsuccessful (e.g. the dataset is created from a +Dataset.range), we will shard the dataset evenly at the end by appending a +`.shard` operation to the end of the processing pipeline. This will cause +the entire preprocessing pipeline for all the data to be run on every +worker, and each worker will do redundant work. We will print a warning +if this method of sharding is selected. + +You can disable dataset sharding across workers using the +`auto_shard_policy` option in tf.data.experimental.DistributeOptions. + +Within each worker, we will also split the data among all the worker +devices (if more than one a present), and this will happen even if +multi-worker sharding is disabled using the method above. + +If the above batch splitting and dataset sharding logic is undesirable, +please use `experimental_distribute_datasets_from_function` instead, which +does not do any automatic splitting or sharding. + +You can also use the `element_spec` property of the distributed dataset +returned by this API to query the tf.TypeSpec of the elements returned +by the iterator. This can be used to set the `input_signature` property +of a tf.function. + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +@tf.function(input_signature=[dist_dataset.element_spec]) +def train_step(inputs): + # train model with inputs + return + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. Each +replica on that worker will dequeue one batch of inputs from the local +`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued +from the `Dataset` every step). + +This method can be used for several purposes. For example, where +`experimental_distribute_dataset` is unable to shard the input files, this +method might be used to manually shard the dataset (avoiding the slow +fallback behavior in `experimental_distribute_dataset`). In cases where the +dataset is infinite, this sharding can be done by creating dataset replicas +that differ only in their random seed. +`experimental_distribute_dataset` may also sometimes fail to split the +batch across replicas on a worker. In that case, this method can be used +where that limitation does not exist. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + +To query the tf.TypeSpec of the elements in the distributed dataset +returned by this API, you need to use the `element_spec` property of the +distributed iterator. This tf.TypeSpec can be used to set the +`input_signature` property of a tf.function. + +```python +# If you want to specify `input_signature` for a `tf.function` you must +# first create the iterator. +iterator = iter(inputs) + +@tf.function(input_signature=[iterator.element_spec]) +def replica_fn_with_signature(inputs): + # train the model with inputs + return + +for _ in range(steps): + strategy.run(replica_fn_with_signature, + args=(next(iterator),)) +``` + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single
value, this returns `(value,)`.
+ + + +

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use +tf.distribute.Strategy.experimental_distribute_dataset +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+`session` + +(TensorFlow v1.x graph execution only) A session used for +initialization. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_run

+ +View source + + + +Runs ops in `fn` on each replica, with inputs from `input_iterator`. + +DEPRECATED: This method is not available in TF 2.x. Please switch +to using `run` instead. + +When eager execution is enabled, executes ops specified by `fn` on each +replica. Otherwise, builds a graph to execute the ops on each replica. + +Each replica will take a single, different input from the inputs provided by +one `get_next` call on the input iterator. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `replica_id_in_sync_group`. + +IMPORTANT: Depending on the tf.distribute.Strategy implementation being +used, and whether eager execution is enabled, `fn` may be called one or more +times (once for each replica). + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The inputs to the function must match the outputs +of `input_iterator.get_next()`. The output must be a tf.nest of +`Tensor`s. +
+`input_iterator` + +(Optional) input iterator from which the inputs are taken. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be `PerReplica` (if the values are unsynchronized), +`Mirrored` (if the values are kept in sync), or `Tensor` (if running on a +single replica). +
+ + + +

make_dataset_iterator

+ +View source + + + +Makes an iterator for input provided via `dataset`. + +DEPRECATED: This method is not available in TF 2.x. + +Data from the given dataset will be distributed evenly across all the +compute replicas. We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). +If this effort fails, an error will be thrown, and the user should instead +use `make_input_fn_iterator` which provides more control to the user, and +does not try to divide a batch across replicas. + +The user could also use `make_input_fn_iterator` if they want to +customize which input is fed to which replica/worker etc. + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be distributed evenly across all +replicas. +
+ + + + + + + + + + + +
Returns
+A `tf.distribute.InputIterator` which returns inputs for each step of the
computation. The user should call `initialize` on the returned iterator.
+ + + +

make_input_fn_iterator

+ +View source + + + +Returns an iterator split across replicas created from an input function. + +DEPRECATED: This method is not available in TF 2.x. + +The `input_fn` should take an tf.distribute.InputContext object where +information about batching and input sharding can be accessed: + +``` +def input_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard(input_context.num_input_pipelines, + input_context.input_pipeline_id) +with strategy.scope(): + iterator = strategy.make_input_fn_iterator(input_fn) + replica_results = strategy.experimental_run(replica_fn, iterator) +``` + +The tf.data.Dataset returned by `input_fn` should have a per-replica +batch size, which may be computed using +`input_context.get_per_replica_batch_size`. + + + + + + + + + + + + + +
Args
+`input_fn` + +A function taking a tf.distribute.InputContext object and +returning a tf.data.Dataset. +
+`replication_mode` + +an enum value of tf.distribute.InputReplicationMode. +Only `PER_WORKER` is supported currently, which means there will be +a single call to `input_fn` per worker. Replicas will dequeue from the +local tf.data.Dataset on their worker. +
+ + + + + + + + + + + +
Returns
+An iterator object that should first be `.initialize()`-ed. It may then +either be passed to `strategy.experimental_run()` or you can +`iterator.get_next()` to get the next value to pass to +`strategy.extended.call_for_each_replica()`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

run

+ 
View source

Run `fn` on each replica, with the given arguments.

Executes ops specified by `fn` on each replica. If `args` or `kwargs` have
"per-replica" values, such as those produced by a "distributed `Dataset`",
when `fn` is executed on a particular replica, it will be executed with the
component of those "per-replica" values that corresponds to that replica.

`fn` may call tf.distribute.get_replica_context() to access members such
as `all_reduce`.

All arguments in `args` or `kwargs` should be either a nest of tensors or
per-replica objects containing tensors or composite tensors.

Users can pass strategy-specific options via the `options` argument. An
example that enables bucketizing dynamic shapes in `TPUStrategy.run`
is:

```python
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

options = tf.distribute.RunOptions()
options.experimental_bucketizing_dynamic_shape = True

iterator = iter(inputs)  # `inputs` is a distributed dataset created earlier

@tf.function()
def step_fn(inputs):
  output = tf.reduce_sum(inputs)
  return output

strategy.run(step_fn, args=(next(iterator),),
             options=options)
```

Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be "per-replica" `Tensor` objects or `Tensor`s +(for example, if running on a single replica). +
+ + + +
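
As a concrete, non-TPU sketch of the same calling pattern, here is a hedged example pairing `run` with a distributed dataset and a follow-up `reduce`; the `MirroredStrategy` and toy dataset are illustrative assumptions, not part of the original example.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
dataset = tf.data.Dataset.range(8).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

def step_fn(batch):
  # Runs once per replica, on that replica's slice of the global batch.
  return tf.reduce_sum(batch)

@tf.function
def distributed_step(dist_batch):
  per_replica = strategy.run(step_fn, args=(dist_batch,))
  return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)

for dist_batch in dist_dataset:
  total = distributed_step(dist_batch)
```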

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + +
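
A minimal sketch of the usual pattern (the Keras layer is illustrative and not part of this page's API):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
  # Variables created here, including those created lazily inside Keras
  # layers, go through the strategy's variable creator (e.g. they are
  # mirrored on every replica for MirroredStrategy).
  v = tf.Variable(1.0)
  dense = tf.keras.layers.Dense(1)
```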

update_config_proto

+ +View source + + + +Returns a copy of `config_proto` modified for use with this strategy. + +DEPRECATED: This method is not available in TF 2.x. + +The updated config contains the settings a strategy needs at runtime, e.g. +configuration to run collective ops, or device filters to improve +distributed training performance. + + + + + + + + + +
Args
+`config_proto` + +a `tf.ConfigProto` object. +
+ + + + + + + + + + + +
Returns
+The updated copy of the `config_proto`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distribute/get_loss_reduction.md b/site/en/api_docs/python/tf/compat/v1/distribute/get_loss_reduction.md new file mode 100644 index 00000000000..b4123996a21 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distribute/get_loss_reduction.md @@ -0,0 +1,48 @@ +description: tf.distribute.ReduceOp corresponding to the last loss reduction. + +
+ + +
+ +# tf.compat.v1.distribute.get_loss_reduction + + + + + + + + + +tf.distribute.ReduceOp corresponding to the last loss reduction. + + + + + + + +This is used to decide whether loss should be scaled in optimizer (used only +for estimator + v1 optimizer use case). + + + + + + + + + +
+tf.distribute.ReduceOp corresponding to the last loss reduction for +estimator and v1 optimizer use case. tf.distribute.ReduceOp.SUM otherwise. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/distributions.md b/site/en/api_docs/python/tf/compat/v1/distributions.md new file mode 100644 index 00000000000..41fe91f055d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions.md @@ -0,0 +1,63 @@ +description: Core module for TensorFlow distribution objects and helpers. + +
+ + + + +
+ +# Module: tf.compat.v1.distributions + + + + + + + + + +Core module for TensorFlow distribution objects and helpers. + + + +## Classes + +[`class Bernoulli`](../../../tf/compat/v1/distributions/Bernoulli.md): Bernoulli distribution. + +[`class Beta`](../../../tf/compat/v1/distributions/Beta.md): Beta distribution. + +[`class Categorical`](../../../tf/compat/v1/distributions/Categorical.md): Categorical distribution. + +[`class Dirichlet`](../../../tf/compat/v1/distributions/Dirichlet.md): Dirichlet distribution. + +[`class DirichletMultinomial`](../../../tf/compat/v1/distributions/DirichletMultinomial.md): Dirichlet-Multinomial compound distribution. + +[`class Distribution`](../../../tf/compat/v1/distributions/Distribution.md): A generic probability distribution base class. + +[`class Exponential`](../../../tf/compat/v1/distributions/Exponential.md): Exponential distribution. + +[`class Gamma`](../../../tf/compat/v1/distributions/Gamma.md): Gamma distribution. + +[`class Laplace`](../../../tf/compat/v1/distributions/Laplace.md): The Laplace distribution with location `loc` and `scale` parameters. + +[`class Multinomial`](../../../tf/compat/v1/distributions/Multinomial.md): Multinomial distribution. + +[`class Normal`](../../../tf/compat/v1/distributions/Normal.md): The Normal distribution with location `loc` and `scale` parameters. + +[`class RegisterKL`](../../../tf/compat/v1/distributions/RegisterKL.md): Decorator to register a KL divergence implementation function. + +[`class ReparameterizationType`](../../../tf/compat/v1/distributions/ReparameterizationType.md): Instances of this class represent how sampling is reparameterized. + +[`class StudentT`](../../../tf/compat/v1/distributions/StudentT.md): Student's t-distribution. + +[`class Uniform`](../../../tf/compat/v1/distributions/Uniform.md): Uniform distribution with `low` and `high` parameters. + +## Functions + +[`kl_divergence(...)`](../../../tf/compat/v1/distributions/kl_divergence.md): Get the KL-divergence KL(distribution_a || distribution_b). (deprecated) + +## Other Members + +* `FULLY_REPARAMETERIZED` +* `NOT_REPARAMETERIZED` diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Bernoulli.md b/site/en/api_docs/python/tf/compat/v1/distributions/Bernoulli.md new file mode 100644 index 00000000000..3a9957e6925 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Bernoulli.md @@ -0,0 +1,1458 @@ +description: Bernoulli distribution. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Bernoulli + + + + + + + + + +Bernoulli distribution. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +The Bernoulli distribution with `probs` parameter, i.e., the probability of a +`1` outcome (vs a `0` outcome). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`logits` + +An N-D `Tensor` representing the log-odds of a `1` event. Each +entry in the `Tensor` parametrizes an independent Bernoulli distribution +where the probability of an event is sigmoid(logits). Only one of +`logits` or `probs` should be passed in. +
+`probs` + +An N-D `Tensor` representing the probability of a `1` +event. Each entry in the `Tensor` parameterizes an independent +Bernoulli distribution. Only one of `logits` or `probs` should be passed +in. +
+`dtype` + +The type of the event samples. Default: `int32`. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, +statistics (e.g., mean, mode, variance) use the value "`NaN`" to +indicate the result is undefined. When `False`, an exception is raised +if one or more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + +
+`ValueError` + +If p and logits are passed, or if neither are passed. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`logits` + +Log-odds of a `1` outcome (vs `0`). +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`probs` + +Probability of a `1` outcome (vs `0`). +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback--Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] +         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) +         = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +
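
For example, a minimal sketch of `kl_divergence` and `cross_entropy` between two Bernoulli distributions; the probabilities are illustrative, and `tfd` is assumed to alias this deprecated module (new code should prefer `tensorflow_probability`).

```python
import tensorflow as tf

tfd = tf.compat.v1.distributions

p = tfd.Bernoulli(probs=0.7)
q = tfd.Bernoulli(probs=0.5)

kl = p.kl_divergence(q)   # KL[p || q], a scalar Tensor
ce = p.cross_entropy(q)   # H[p, q] = H[p] + KL[p || q]
```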

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + +Additional documentation from `Bernoulli`: + +Returns `1` if `prob > 0.5` and `0` otherwise. + +
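
As a small illustration of the statistics above (the probability is an arbitrary example, and `tfd` is assumed to alias this deprecated module):

```python
import tensorflow as tf

tfd = tf.compat.v1.distributions

dist = tfd.Bernoulli(probs=0.7)
dist.mean()      # ~0.7
dist.mode()      # 1, since probs > 0.5
dist.variance()  # ~0.21 = 0.7 * (1 - 0.7)
dist.sample(5)   # five draws from {0, 1}
```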

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Beta.md b/site/en/api_docs/python/tf/compat/v1/distributions/Beta.md new file mode 100644 index 00000000000..8b071bc086e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Beta.md @@ -0,0 +1,1567 @@ +description: Beta distribution. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Beta + + + + + + + + + +Beta distribution. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +The Beta distribution is defined over the `(0, 1)` interval using parameters +`concentration1` (aka "alpha") and `concentration0` (aka "beta"). + +#### Mathematical Details + +The probability density function (pdf) is, + +```none +pdf(x; alpha, beta) = x**(alpha - 1) (1 - x)**(beta - 1) / Z +Z = Gamma(alpha) Gamma(beta) / Gamma(alpha + beta) +``` + +where: + +* `concentration1 = alpha`, +* `concentration0 = beta`, +* `Z` is the normalization constant, and, +* `Gamma` is the [gamma function]( + https://en.wikipedia.org/wiki/Gamma_function). + +The concentration parameters represent mean total counts of a `1` or a `0`, +i.e., + +```none +concentration1 = alpha = mean * total_concentration +concentration0 = beta = (1. - mean) * total_concentration +``` + +where `mean` in `(0, 1)` and `total_concentration` is a positive real number +representing a mean `total_count = concentration1 + concentration0`. + +Distribution parameters are automatically broadcast in all functions; see +examples for details. + +Warning: The samples can be zero due to finite precision. +This happens more often when some of the concentrations are very small. +Make sure to round the samples to `np.finfo(dtype).tiny` before computing the +density. + +Samples of this distribution are reparameterized (pathwise differentiable). +The derivatives are computed using the approach described in +(Figurnov et al., 2018). + +#### Examples + +```python +import tensorflow_probability as tfp +tfd = tfp.distributions + +# Create a batch of three Beta distributions. +alpha = [1, 2, 3] +beta = [1, 2, 3] +dist = tfd.Beta(alpha, beta) + +dist.sample([4, 5]) # Shape [4, 5, 3] + +# `x` has three batch entries, each with two samples. +x = [[.1, .4, .5], + [.2, .3, .5]] +# Calculate the probability of each pair of samples under the corresponding +# distribution in `dist`. +dist.prob(x) # Shape [2, 3] +``` + +```python +# Create batch_shape=[2, 3] via parameter broadcast: +alpha = [[1.], [2]] # Shape [2, 1] +beta = [3., 4, 5] # Shape [3] +dist = tfd.Beta(alpha, beta) + +# alpha broadcast as: [[1., 1, 1,], +# [2, 2, 2]] +# beta broadcast as: [[3., 4, 5], +# [3, 4, 5]] +# batch_Shape [2, 3] +dist.sample([4, 5]) # Shape [4, 5, 2, 3] + +x = [.2, .3, .5] +# x will be broadcast as [[.2, .3, .5], +# [.2, .3, .5]], +# thus matching batch_shape [2, 3]. +dist.prob(x) # Shape [2, 3] +``` + +Compute the gradients of samples w.r.t. the parameters: + +```python +alpha = tf.constant(1.0) +beta = tf.constant(2.0) +dist = tfd.Beta(alpha, beta) +samples = dist.sample(5) # Shape [5] +loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function +# Unbiased stochastic gradients of the loss function +grads = tf.gradients(loss, [alpha, beta]) +``` + +#### References: + +Implicit Reparameterization Gradients: + [Figurnov et al., 2018] + (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients) + ([pdf] + (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf)) + + + + + + + + + + + + + + + + + + + + + + + +
+`concentration1` + +Positive floating-point `Tensor` indicating mean +number of successes; aka "alpha". Implies `self.dtype` and +`self.batch_shape`, i.e., +`concentration1.shape = [N1, N2, ..., Nm] = self.batch_shape`. +
+`concentration0` + +Positive floating-point `Tensor` indicating mean +number of failures; aka "beta". Otherwise has same semantics as +`concentration1`. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, statistics +(e.g., mean, mode, variance) use the value "`NaN`" to indicate the +result is undefined. When `False`, an exception is raised if one or +more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`concentration0` + +Concentration parameter associated with a `0` outcome. +
+`concentration1` + +Concentration parameter associated with a `1` outcome. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`total_concentration` + +Sum of concentration parameters. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + +Additional documentation from `Beta`: + +Note: `x` must have dtype `self.dtype` and be in +`[0, 1].` It must have a shape compatible with `self.batch_shape()`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback--Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] +         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) +         = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + +Additional documentation from `Beta`: + +Note: `x` must have dtype `self.dtype` and be in +`[0, 1].` It must have a shape compatible with `self.batch_shape()`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + +Additional documentation from `Beta`: + +Note: `x` must have dtype `self.dtype` and be in +`[0, 1].` It must have a shape compatible with `self.batch_shape()`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + +Additional documentation from `Beta`: + +Note: The mode is undefined when `concentration1 <= 1` or +`concentration0 <= 1`. If `self.allow_nan_stats` is `True`, `NaN` +is used for undefined modes. If `self.allow_nan_stats` is `False` an +exception is raised when one or more modes are undefined. + +
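
A short sketch of this behavior (the concentrations are illustrative, and `tfd` is assumed to alias this deprecated module):

```python
import tensorflow as tf

tfd = tf.compat.v1.distributions

# Defined mode: (concentration1 - 1) / (concentration1 + concentration0 - 2).
tfd.Beta(concentration1=2., concentration0=3.).mode()   # ~0.333

# Undefined mode: with the default allow_nan_stats=True this evaluates to NaN.
tfd.Beta(concentration1=.5, concentration0=.5).mode()
```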

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + +Additional documentation from `Beta`: + +Note: `x` must have dtype `self.dtype` and be in +`[0, 1].` It must have a shape compatible with `self.batch_shape()`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Categorical.md b/site/en/api_docs/python/tf/compat/v1/distributions/Categorical.md new file mode 100644 index 00000000000..2e62c299c7d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Categorical.md @@ -0,0 +1,1529 @@ +description: Categorical distribution. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Categorical + + + + + + + + + +Categorical distribution. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +The Categorical distribution is parameterized by either probabilities or +log-probabilities of a set of `K` classes. It is defined over the integers +`{0, 1, ..., K}`. + +The Categorical distribution is closely related to the `OneHotCategorical` and +`Multinomial` distributions. The Categorical distribution can be intuited as +generating samples according to `argmax{ OneHotCategorical(probs) }` itself +being identical to `argmax{ Multinomial(probs, total_count=1) }`. + +#### Mathematical Details + +The probability mass function (pmf) is, + +```none +pmf(k; pi) = prod_j pi_j**[k == j] +``` + +#### Pitfalls + +The number of classes, `K`, must not exceed: +- the largest integer representable by `self.dtype`, i.e., + `2**(mantissa_bits+1)` (IEEE 754), +- the maximum `Tensor` index, i.e., `2**31-1`. + +In other words, + +```python +K <= min(2**31-1, { + tf.float16: 2**11, + tf.float32: 2**24, + tf.float64: 2**53 }[param.dtype]) +``` + +Note: This condition is validated only when `self.validate_args = True`. + +#### Examples + +Creates a 3-class distribution with the 2nd class being most likely. + +```python +dist = Categorical(probs=[0.1, 0.5, 0.4]) +n = 1e4 +empirical_prob = tf.cast( + tf.histogram_fixed_width( + dist.sample(int(n)), + [0., 2], + nbins=3), + dtype=tf.float32) / n +# ==> array([ 0.1005, 0.5037, 0.3958], dtype=float32) +``` + +Creates a 3-class distribution with the 2nd class being most likely. +Parameterized by [logits](https://en.wikipedia.org/wiki/Logit) rather than +probabilities. + +```python +dist = Categorical(logits=np.log([0.1, 0.5, 0.4]) +n = 1e4 +empirical_prob = tf.cast( + tf.histogram_fixed_width( + dist.sample(int(n)), + [0., 2], + nbins=3), + dtype=tf.float32) / n +# ==> array([0.1045, 0.5047, 0.3908], dtype=float32) +``` + +Creates a 3-class distribution with the 3rd class being most likely. +The distribution functions can be evaluated on counts. + +```python +# counts is a scalar. +p = [0.1, 0.4, 0.5] +dist = Categorical(probs=p) +dist.prob(0) # Shape [] + +# p will be broadcast to [[0.1, 0.4, 0.5], [0.1, 0.4, 0.5]] to match counts. +counts = [1, 0] +dist.prob(counts) # Shape [2] + +# p will be broadcast to shape [3, 5, 7, 3] to match counts. +counts = [[...]] # Shape [5, 7, 3] +dist.prob(counts) # Shape [5, 7, 3] +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`logits` + +An N-D `Tensor`, `N >= 1`, representing the log probabilities +of a set of Categorical distributions. The first `N - 1` dimensions +index into a batch of independent distributions and the last dimension +represents a vector of logits for each class. Only one of `logits` or +`probs` should be passed in. +
+`probs` + +An N-D `Tensor`, `N >= 1`, representing the probabilities +of a set of Categorical distributions. The first `N - 1` dimensions +index into a batch of independent distributions and the last dimension +represents a vector of probabilities for each class. Only one of +`logits` or `probs` should be passed in. +
+`dtype` + +The type of the event samples (default: int32). +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, statistics +(e.g., mean, mode, variance) use the value "`NaN`" to indicate the +result is undefined. When `False`, an exception is raised if one or +more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`event_size` + +Scalar `int32` tensor: the number of classes. +
+`logits` + +Vector of coordinatewise logits. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`probs` + +Vector of coordinatewise probabilities. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback--Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] +         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) +         = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + + +
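
For context, a minimal sketch (the class probabilities are illustrative, and `tfd` is assumed to alias this deprecated module):

```python
import tensorflow as tf

tfd = tf.compat.v1.distributions

dist = tfd.Categorical(probs=[0.1, 0.5, 0.4])
dist.mode()     # 1, the most probable class
dist.prob(2)    # ~0.4
dist.sample(3)  # three draws from {0, 1, 2}
```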

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Dirichlet.md b/site/en/api_docs/python/tf/compat/v1/distributions/Dirichlet.md new file mode 100644 index 00000000000..e94acc61e8e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Dirichlet.md @@ -0,0 +1,1552 @@ +description: Dirichlet distribution. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Dirichlet + + + + + + + + + +Dirichlet distribution. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +The Dirichlet distribution is defined over the +[`(k-1)`-simplex](https://en.wikipedia.org/wiki/Simplex) using a positive, +length-`k` vector `concentration` (`k > 1`). The Dirichlet is identically the +Beta distribution when `k = 2`. + +#### Mathematical Details + +The Dirichlet is a distribution over the open `(k-1)`-simplex, i.e., + +```none +S^{k-1} = { (x_0, ..., x_{k-1}) in R^k : sum_j x_j = 1 and all_j x_j > 0 }. +``` + +The probability density function (pdf) is, + +```none +pdf(x; alpha) = prod_j x_j**(alpha_j - 1) / Z +Z = prod_j Gamma(alpha_j) / Gamma(sum_j alpha_j) +``` + +where: + +* `x in S^{k-1}`, i.e., the `(k-1)`-simplex, +* `concentration = alpha = [alpha_0, ..., alpha_{k-1}]`, `alpha_j > 0`, +* `Z` is the normalization constant aka the [multivariate beta function]( + https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function), + and, +* `Gamma` is the [gamma function]( + https://en.wikipedia.org/wiki/Gamma_function). + +The `concentration` represents mean total counts of class occurrence, i.e., + +```none +concentration = alpha = mean * total_concentration +``` + +where `mean` in `S^{k-1}` and `total_concentration` is a positive real number +representing a mean total count. + +Distribution parameters are automatically broadcast in all functions; see +examples for details. + +Warning: Some components of the samples can be zero due to finite precision. +This happens more often when some of the concentrations are very small. +Make sure to round the samples to `np.finfo(dtype).tiny` before computing the +density. + +Samples of this distribution are reparameterized (pathwise differentiable). +The derivatives are computed using the approach described in +(Figurnov et al., 2018). + +#### Examples + +```python +import tensorflow_probability as tfp +tfd = tfp.distributions + +# Create a single trivariate Dirichlet, with the 3rd class being three times +# more frequent than the first. I.e., batch_shape=[], event_shape=[3]. +alpha = [1., 2, 3] +dist = tfd.Dirichlet(alpha) + +dist.sample([4, 5]) # shape: [4, 5, 3] + +# x has one sample, one batch, three classes: +x = [.2, .3, .5] # shape: [3] +dist.prob(x) # shape: [] + +# x has two samples from one batch: +x = [[.1, .4, .5], + [.2, .3, .5]] +dist.prob(x) # shape: [2] + +# alpha will be broadcast to shape [5, 7, 3] to match x. +x = [[...]] # shape: [5, 7, 3] +dist.prob(x) # shape: [5, 7] +``` + +```python +# Create batch_shape=[2], event_shape=[3]: +alpha = [[1., 2, 3], + [4, 5, 6]] # shape: [2, 3] +dist = tfd.Dirichlet(alpha) + +dist.sample([4, 5]) # shape: [4, 5, 2, 3] + +x = [.2, .3, .5] +# x will be broadcast as [[.2, .3, .5], +# [.2, .3, .5]], +# thus matching batch_shape [2, 3]. +dist.prob(x) # shape: [2] +``` + +Compute the gradients of samples w.r.t. 
the parameters: + +```python +alpha = tf.constant([1.0, 2.0, 3.0]) +dist = tfd.Dirichlet(alpha) +samples = dist.sample(5) # Shape [5, 3] +loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function +# Unbiased stochastic gradients of the loss function +grads = tf.gradients(loss, alpha) +``` + +#### References: + +Implicit Reparameterization Gradients: + [Figurnov et al., 2018] + (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients) + ([pdf] + (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf)) + + + + + + + + + + + + + + + + + + + + +
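As a concrete follow-up to the finite-precision warning above, here is a minimal sketch of one way to clamp samples before evaluating the density. It mirrors the `tfp.distributions` import used in the examples and is an illustration only; names such as `safe_samples` are ours, not part of the API.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

# Small concentrations make exact zeros in the samples more likely.
dist = tfd.Dirichlet([0.1, 0.1, 0.1])
samples = dist.sample(1000)

# Push exact zeros up to the smallest positive float of the sample dtype
# before computing the density, as the warning above suggests.
tiny = np.finfo(samples.dtype.as_numpy_dtype).tiny
safe_samples = tf.maximum(samples, tiny)
log_probs = dist.log_prob(safe_samples)  # avoids non-finite values at exact zeros
```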
+`concentration` + +Positive floating-point `Tensor` indicating mean number +of class occurrences; aka "alpha". Implies `self.dtype`, and +`self.batch_shape`, `self.event_shape`, i.e., if +`concentration.shape = [N1, N2, ..., Nm, k]` then +`batch_shape = [N1, N2, ..., Nm]` and +`event_shape = [k]`. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, statistics +(e.g., mean, mode, variance) use the value "`NaN`" to indicate the +result is undefined. When `False`, an exception is raised if one or +more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`concentration` + +Concentration parameter; expected counts for that coordinate. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`total_concentration` + +Sum of last dim of concentration parameter. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
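For instance, a short sketch of overriding a single constructor argument while keeping the rest (assuming `tfd = tfp.distributions` as in the class examples; the values are hypothetical):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

dist = tfd.Dirichlet(concentration=[1., 2., 3.], validate_args=True)

# Same class; only `concentration` is replaced, while the other init arguments
# (validate_args, allow_nan_stats, name) are carried over from `dist`.
dist2 = dist.copy(concentration=[10., 10., 10.])
print(dist2.parameters)
```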
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
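As a shape-only illustration of the contract above, a minimal sketch (assuming `tfp` as in the class examples):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

# batch_shape=[2], event_shape=[3], so k = 3 and k' = 3.
dist = tfd.Dirichlet([[1., 2., 3.],
                      [4., 5., 6.]])
cov = dist.covariance()
print(cov.shape)  # [2, 3, 3]: one k x k covariance matrix per batch member
```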
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback--Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] + = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) + = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + +
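A small numerical check of the identity `KL[p, q] = H[p, q] - H[p]` stated above, using two Dirichlets (a sketch assuming `tfp` as in the class examples and eager execution):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

p = tfd.Dirichlet([1., 2., 3.])
q = tfd.Dirichlet([2., 2., 2.])

kl = p.kl_divergence(q)
# Cross entropy minus entropy should agree with the KL divergence
# up to floating-point error.
check = p.cross_entropy(q) - p.entropy()
print(kl.numpy(), check.numpy())
```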
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + +Additional documentation from `Dirichlet`: + +Note: `value` must be a non-negative tensor with +dtype `self.dtype` and be in the `(self.event_shape() - 1)`-simplex, i.e., +`tf.reduce_sum(value, -1) = 1`. It must have a shape compatible with +`self.batch_shape() + self.event_shape()`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + +Additional documentation from `Dirichlet`: + +Note: The mode is undefined when any `concentration <= 1`. If +`self.allow_nan_stats` is `True`, `NaN` is used for undefined modes. If +`self.allow_nan_stats` is `False` an exception is raised when one or more +modes are undefined. + +
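To see the note above in action, a minimal sketch (assuming `tfp` as in the class examples and eager execution):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

# One concentration is <= 1, so the mode is undefined.
lenient = tfd.Dirichlet([0.5, 2., 3.], allow_nan_stats=True)
print(lenient.mode())  # returns NaNs instead of raising

strict = tfd.Dirichlet([0.5, 2., 3.], allow_nan_stats=False)
# strict.mode() would raise an error here rather than return NaN.
```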

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + +Additional documentation from `Dirichlet`: + +Note: `value` must be a non-negative tensor with +dtype `self.dtype` and be in the `(self.event_shape() - 1)`-simplex, i.e., +`tf.reduce_sum(value, -1) = 1`. It must have a shape compatible with +`self.batch_shape() + self.event_shape()`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
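For example, a quick consistency check that `stddev` is the elementwise square root of `variance` (a sketch assuming `tfp` as in the class examples and eager execution):

```python
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

dist = tfd.Dirichlet([1., 2., 3.])
# stddev = E[(X - E[X])**2]**0.5, i.e. the square root of the variance.
print(dist.stddev().numpy())
print(tf.sqrt(dist.variance()).numpy())  # same values, shape [3]
```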
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/DirichletMultinomial.md b/site/en/api_docs/python/tf/compat/v1/distributions/DirichletMultinomial.md new file mode 100644 index 00000000000..aa8c426de6b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/DirichletMultinomial.md @@ -0,0 +1,1591 @@ +description: Dirichlet-Multinomial compound distribution. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.DirichletMultinomial + + + + + + + + + +Dirichlet-Multinomial compound distribution. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +The Dirichlet-Multinomial distribution is parameterized by a (batch of) +length-`K` `concentration` vectors (`K > 1`) and a `total_count` number of +trials, i.e., the number of trials per draw from the DirichletMultinomial. It +is defined over a (batch of) length-`K` vector `counts` such that +`tf.reduce_sum(counts, -1) = total_count`. The Dirichlet-Multinomial is +identically the Beta-Binomial distribution when `K = 2`. + +#### Mathematical Details + +The Dirichlet-Multinomial is a distribution over `K`-class counts, i.e., a +length-`K` vector of non-negative integer `counts = n = [n_0, ..., n_{K-1}]`. + +The probability mass function (pmf) is, + +```none +pmf(n; alpha, N) = Beta(alpha + n) / (prod_j n_j!) / Z +Z = Beta(alpha) / N! +``` + +where: + +* `concentration = alpha = [alpha_0, ..., alpha_{K-1}]`, `alpha_j > 0`, +* `total_count = N`, `N` a positive integer, +* `N!` is `N` factorial, and, +* `Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j)` is the + [multivariate beta function]( + https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function), + and, +* `Gamma` is the [gamma function]( + https://en.wikipedia.org/wiki/Gamma_function). + +Dirichlet-Multinomial is a [compound distribution]( +https://en.wikipedia.org/wiki/Compound_probability_distribution), i.e., its +samples are generated as follows. + + 1. Choose class probabilities: + `probs = [p_0,...,p_{K-1}] ~ Dir(concentration)` + 2. Draw integers: + `counts = [n_0,...,n_{K-1}] ~ Multinomial(total_count, probs)` + +The last `concentration` dimension parametrizes a single Dirichlet-Multinomial +distribution. When calling distribution functions (e.g., `dist.prob(counts)`), +`concentration`, `total_count` and `counts` are broadcast to the same shape. +The last dimension of `counts` corresponds to a single Dirichlet-Multinomial +distribution. + +Distribution parameters are automatically broadcast in all functions; see +examples for details. + +#### Pitfalls + +The number of classes, `K`, must not exceed: +- the largest integer representable by `self.dtype`, i.e., + `2**(mantissa_bits+1)` (IEEE 754), +- the maximum `Tensor` index, i.e., `2**31-1`. + +In other words, + +```python +K <= min(2**31-1, { + tf.float16: 2**11, + tf.float32: 2**24, + tf.float64: 2**53 }[param.dtype]) +``` + +Note: This condition is validated only when `self.validate_args = True`. + +#### Examples + +```python +alpha = [1., 2., 3.] +n = 2. +dist = DirichletMultinomial(n, alpha) +``` + +Creates a 3-class distribution, with the 3rd class most likely to be +drawn. +The distribution functions can be evaluated on counts. + +```python +# counts same shape as alpha. +counts = [0., 0., 2.] +dist.prob(counts) # Shape [] + +# alpha will be broadcast to [[1., 2., 3.], [1., 2., 3.]] to match counts. +counts = [[1., 1., 0.], [1., 0., 1.]] +dist.prob(counts) # Shape [2] + +# alpha will be broadcast to shape [5, 7, 3] to match counts. +counts = [[...]] # Shape [5, 7, 3] +dist.prob(counts) # Shape [5, 7] +``` + +Creates a 2-batch of 3-class distributions. + +```python +alpha = [[1., 2., 3.], [4., 5., 6.]] # Shape [2, 3] +n = [3., 3.] +dist = DirichletMultinomial(n, alpha) + +# counts will be broadcast to [[2., 1., 0.], [2., 1., 0.]] to match alpha. +counts = [2., 1., 0.] +dist.prob(counts) # Shape [2] +``` + + + + + + + + + + + + + + + + + + + + + +
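To make the two-step generative description above concrete, here is a minimal sketch that draws counts both directly and via the explicit Dirichlet-then-Multinomial stages (an illustration assuming `tfp.distributions`, as on the other distribution pages):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

total_count = 10.
concentration = [1., 2., 3.]

# Direct sampling from the compound distribution.
dm = tfd.DirichletMultinomial(total_count, concentration)
counts = dm.sample(5)  # shape [5, 3]; each row sums to 10

# The same generative process written out in two stages:
probs = tfd.Dirichlet(concentration).sample(5)                # 1. class probabilities
counts2 = tfd.Multinomial(total_count, probs=probs).sample()  # 2. integer counts, shape [5, 3]
```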
+`total_count` + +Non-negative floating point tensor, whose dtype is the same +as `concentration`. The shape is broadcastable to `[N1,..., Nm]` with +`m >= 0`. Defines this as a batch of `N1 x ... x Nm` different +Dirichlet multinomial distributions. Its components should be equal to +integer values. +
+`concentration` + +Positive floating point tensor, whose dtype is the +same as `n` with shape broadcastable to `[N1,..., Nm, K]` `m >= 0`. +Defines this as a batch of `N1 x ... x Nm` different `K` class Dirichlet +multinomial distributions. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, statistics +(e.g., mean, mode, variance) use the value "`NaN`" to indicate the +result is undefined. When `False`, an exception is raised if one or +more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`concentration` + +Concentration parameter; expected prior counts for that coordinate. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`total_concentration` + +Sum of last dim of concentration parameter. +
+`total_count` + +Number of trials used to construct a sample. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + +Additional documentation from `DirichletMultinomial`: + +The covariance for each batch member is defined as the following: + +```none +Var(X_j) = n * alpha_j / alpha_0 * (1 - alpha_j / alpha_0) * +(n + alpha_0) / (1 + alpha_0) +``` + +where `concentration = alpha` and +`total_concentration = alpha_0 = sum_j alpha_j`. + +The covariance between elements in a batch is defined as: + +```none +Cov(X_i, X_j) = -n * alpha_i * alpha_j / alpha_0 ** 2 * +(n + alpha_0) / (1 + alpha_0) +``` + + + + + + + + + + +
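A small numerical check of the `Var(X_j)` formula above against the diagonal of `covariance()` (a sketch assuming `tfp` and eager execution):

```python
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

n = 5.
alpha = tf.constant([1., 2., 3.])
alpha_0 = tf.reduce_sum(alpha)

dist = tfd.DirichletMultinomial(n, alpha)

# Var(X_j) = n * alpha_j/alpha_0 * (1 - alpha_j/alpha_0) * (n + alpha_0)/(1 + alpha_0)
var_formula = n * (alpha / alpha_0) * (1. - alpha / alpha_0) * (n + alpha_0) / (1. + alpha_0)
var_diag = tf.linalg.diag_part(dist.covariance())

print(var_formula.numpy(), var_diag.numpy())  # should agree up to float error
```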
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback--Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] + = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) + = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + +Additional documentation from `DirichletMultinomial`: + +For each batch of counts, +`value = [n_0, ..., n_{K-1}]`, `P[value]` is the probability that after +sampling `self.total_count` draws from this Dirichlet-Multinomial distribution, +the number of draws falling in class `j` is `n_j`. Since this definition is +[exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables), +different sequences of draws can yield the same counts, so the probability +includes a combinatorial coefficient. + +Note: `value` must be a non-negative tensor with dtype `self.dtype`, have no +fractional components, and such that +`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable +with `self.concentration` and `self.total_count`. + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +
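The closed form is not spelled out above; as a reference point, the mean of a Dirichlet-Multinomial equals `total_count * concentration / total_concentration`. A sketch (assuming `tfp`; the formula in the comment is standard background, not quoted from this page):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

dist = tfd.DirichletMultinomial(5., [1., 2., 3.])  # total_concentration = 6
# mean = total_count * concentration / total_concentration
#      = 5 * [1, 2, 3] / 6 = [0.833..., 1.666..., 2.5]
print(dist.mean())
```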

mode

+ +View source + + + +Mode. + + +

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + +Additional documentation from `DirichletMultinomial`: + +For each batch of counts, +`value = [n_0, ..., n_{K-1}]`, `P[value]` is the probability that after +sampling `self.total_count` draws from this Dirichlet-Multinomial distribution, +the number of draws falling in class `j` is `n_j`. Since this definition is +[exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables), +different sequences of draws can yield the same counts, so the probability +includes a combinatorial coefficient. + +Note: `value` must be a non-negative tensor with dtype `self.dtype`, have no +fractional components, and such that +`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable +with `self.concentration` and `self.total_count`. + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Distribution.md b/site/en/api_docs/python/tf/compat/v1/distributions/Distribution.md new file mode 100644 index 00000000000..2c882ac7128 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Distribution.md @@ -0,0 +1,1577 @@ +description: A generic probability distribution base class. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Distribution + + + + + + + + + +A generic probability distribution base class. + + + + + + + +`Distribution` is a base class for constructing and organizing properties +(e.g., mean, variance) of random variables (e.g, Bernoulli, Gaussian). + +#### Subclassing + +Subclasses are expected to implement a leading-underscore version of the +same-named function. The argument signature should be identical except for +the omission of `name="..."`. For example, to enable `log_prob(value, +name="log_prob")` a subclass should implement `_log_prob(value)`. + +Subclasses can append to public-level docstrings by providing +docstrings for their method specializations. For example: + +```python +@util.AppendDocstring("Some other details.") +def _log_prob(self, value): + ... +``` + +would add the string "Some other details." to the `log_prob` function +docstring. This is implemented as a simple decorator to avoid python +linter complaining about missing Args/Returns/Raises sections in the +partial docstrings. + +#### Broadcasting, batching, and shapes + +All distributions support batches of independent distributions of that type. +The batch shape is determined by broadcasting together the parameters. + +The shape of arguments to `__init__`, `cdf`, `log_cdf`, `prob`, and +`log_prob` reflect this broadcasting, as does the return value of `sample` and +`sample_n`. + +`sample_n_shape = [n] + batch_shape + event_shape`, where `sample_n_shape` is +the shape of the `Tensor` returned from `sample_n`, `n` is the number of +samples, `batch_shape` defines how many independent distributions there are, +and `event_shape` defines the shape of samples from each of those independent +distributions. Samples are independent along the `batch_shape` dimensions, but +not necessarily so along the `event_shape` dimensions (depending on the +particulars of the underlying distribution). + +Using the `Uniform` distribution as an example: + +```python +minval = 3.0 +maxval = [[4.0, 6.0], + [10.0, 12.0]] + +# Broadcasting: +# This instance represents 4 Uniform distributions. Each has a lower bound at +# 3.0 as the `minval` parameter was broadcasted to match `maxval`'s shape. +u = Uniform(minval, maxval) + +# `event_shape` is `TensorShape([])`. +event_shape = u.event_shape +# `event_shape_t` is a `Tensor` which will evaluate to []. +event_shape_t = u.event_shape_tensor() + +# Sampling returns a sample per distribution. `samples` has shape +# [5, 2, 2], which is [n] + batch_shape + event_shape, where n=5, +# batch_shape=[2, 2], and event_shape=[]. +samples = u.sample_n(5) + +# The broadcasting holds across methods. Here we use `cdf` as an example. The +# same holds for `log_cdf` and the likelihood functions. + +# `cum_prob` has shape [2, 2] as the `value` argument was broadcasted to the +# shape of the `Uniform` instance. +cum_prob_broadcast = u.cdf(4.0) + +# `cum_prob`'s shape is [2, 2], one per distribution. No broadcasting +# occurred. +cum_prob_per_dist = u.cdf([[4.0, 5.0], + [6.0, 7.0]]) + +# INVALID as the `value` argument is not broadcastable to the distribution's +# shape. +cum_prob_invalid = u.cdf([4.0, 5.0, 6.0]) +``` + +#### Shapes + +There are three important concepts associated with TensorFlow Distributions +shapes: +- Event shape describes the shape of a single draw from the distribution; + it may be dependent across dimensions. For scalar distributions, the event + shape is `[]`. For a 5-dimensional MultivariateNormal, the event shape is + `[5]`. 
+- Batch shape describes independent, not identically distributed draws, aka a + "collection" or "bunch" of distributions. +- Sample shape describes independent, identically distributed draws of batches + from the distribution family. + +The event shape and the batch shape are properties of a Distribution object, +whereas the sample shape is associated with a specific call to `sample` or +`log_prob`. + +For detailed usage examples of TensorFlow Distributions shapes, see +[this tutorial]( +https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb) + +#### Parameter values leading to undefined statistics or distributions. + +Some distributions do not have well-defined statistics for all initialization +parameter values. For example, the beta distribution is parameterized by +positive real numbers `concentration1` and `concentration0`, and does not have +well-defined mode if `concentration1 < 1` or `concentration0 < 1`. + +The user is given the option of raising an exception or returning `NaN`. + +```python +a = tf.exp(tf.matmul(logits, weights_a)) +b = tf.exp(tf.matmul(logits, weights_b)) + +# Will raise exception if ANY batch member has a < 1 or b < 1. +dist = distributions.beta(a, b, allow_nan_stats=False) +mode = dist.mode().eval() + +# Will return NaN for batch members with either a < 1 or b < 1. +dist = distributions.beta(a, b, allow_nan_stats=True) # Default behavior +mode = dist.mode().eval() +``` + +In all cases, an exception is raised if *invalid* parameters are passed, e.g. + +```python +# Will raise an exception if any Op is run. +negative_a = -1.0 * a # beta distribution by definition has a > 0. +dist = distributions.beta(negative_a, b, allow_nan_stats=True) +dist.mean().eval() +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
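As a compact illustration of the shape contract described above (`sample_n_shape = [n] + batch_shape + event_shape`), here is a sketch using a Dirichlet (assuming `tfp.distributions` and eager execution):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

# batch_shape=[2], event_shape=[3]
dist = tfd.Dirichlet([[1., 2., 3.],
                      [4., 5., 6.]])

print(dist.batch_shape)        # [2]
print(dist.event_shape)        # [3]
x = dist.sample(5)
print(x.shape)                 # [5, 2, 3] = [n] + batch_shape + event_shape
print(dist.log_prob(x).shape)  # [5, 2]: event dimensions are reduced
```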
+`dtype` + +The type of the event samples. `None` implies no type-enforcement. +
+`reparameterization_type` + +Instance of `ReparameterizationType`. +If `distributions.FULLY_REPARAMETERIZED`, this +`Distribution` can be reparameterized in terms of some standard +distribution with a function whose Jacobian is constant for the support +of the standard distribution. If `distributions.NOT_REPARAMETERIZED`, +then no such reparameterization is available. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, statistics +(e.g., mean, mode, variance) use the value "`NaN`" to indicate the +result is undefined. When `False`, an exception is raised if one or +more of the statistic's batch members are undefined. +
+`parameters` + +Python `dict` of parameters used to instantiate this +`Distribution`. +
+`graph_parents` + +Python `list` of graph prerequisites of this +`Distribution`. +
+`name` + +Python `str` name prefixed to Ops created by this class. Default: +subclass name. +
+ + + + + + + + + + + + +
+`ValueError` + +if any member of graph_parents is `None` or not a `Tensor`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback--Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] + = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) + = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + + +

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Exponential.md b/site/en/api_docs/python/tf/compat/v1/distributions/Exponential.md new file mode 100644 index 00000000000..3f8297f0c0c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Exponential.md @@ -0,0 +1,1448 @@ +description: Exponential distribution. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Exponential + + + + + + + + + +Exponential distribution. + +Inherits From: [`Gamma`](../../../../tf/compat/v1/distributions/Gamma.md) + + + + + + + +The Exponential distribution is parameterized by an event `rate` parameter. + +#### Mathematical Details + +The probability density function (pdf) is, + +```none +pdf(x; lambda, x > 0) = exp(-lambda x) / Z +Z = 1 / lambda +``` + +where `rate = lambda` and `Z` is the normalizing constant. + +The Exponential distribution is a special case of the Gamma distribution, +i.e., + +```python +Exponential(rate) = Gamma(concentration=1., rate) +``` + +The Exponential distribution uses a `rate` parameter, or "inverse scale", +which can be intuited as, + +```none +X ~ Exponential(rate=1) +Y = X / rate +``` + + + + + + + + + + + + + + + + + + +
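A short sketch of the `rate` parameterization and the Gamma special case noted above (assuming `tfp.distributions` as on the other distribution pages and eager execution):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

rate = 2.
expo = tfd.Exponential(rate)
print(expo.mean())  # 1 / rate = 0.5

# Exponential(rate) is Gamma(concentration=1., rate): identical density.
gamma = tfd.Gamma(concentration=1., rate=rate)
x = [0.1, 1.0, 3.0]
print(expo.log_prob(x), gamma.log_prob(x))  # equal up to float error
```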
+`rate` + +Floating point tensor, equivalent to `1 / mean`. Must contain only +positive values. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, statistics +(e.g., mean, mode, variance) use the value "`NaN`" to indicate the +result is undefined. When `False`, an exception is raised if one or +more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`concentration` + +Concentration parameter. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`rate` + +Rate parameter. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +
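+A minimal usage sketch (referenced from the introduction above); it assumes a
+TensorFlow build that still provides `tf.compat.v1.distributions` and eager
+execution, and the variable names are illustrative only:
+
+```python
+import tensorflow as tf
+
+tfd = tf.compat.v1.distributions
+
+# An Exponential with rate 2.0 and the Gamma it is a special case of.
+exp_dist = tfd.Exponential(rate=2.0)
+gamma_dist = tfd.Gamma(concentration=1.0, rate=2.0)
+
+x = tf.constant([0.5, 1.0, 2.0])
+
+# Both parameterizations assign the same log-density to the same points.
+print(exp_dist.log_prob(x))
+print(gamma_dist.log_prob(x))
+
+# For the Exponential, mean() is 1 / rate.
+print(exp_dist.mean())  # 0.5
+
+# A batch of draws; the result has shape sample_shape + batch_shape = [4].
+print(exp_dist.sample(4, seed=42))
+```
+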

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +
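+A small sketch of `copy` with one constructor argument overridden, assuming
+`tf.compat.v1.distributions` is available; names and values are illustrative
+only:
+
+```python
+import tensorflow as tf
+
+tfd = tf.compat.v1.distributions
+
+dist = tfd.Exponential(rate=2.0)
+
+# Same class and same initialization arguments, except for `rate`.
+dist2 = dist.copy(rate=5.0)
+
+print(dist.rate)    # 2.0
+print(dist2.rate)   # 5.0
+print(type(dist2))  # the same class as `dist`
+```
+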

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + + 
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback-Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] + = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) + = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + + 
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + +Additional documentation from `Gamma`: + +The mode of a gamma distribution is `(shape - 1) / rate` when +`shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`, +an exception will be raised rather than returning `NaN`. + +

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +
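+A numerical sketch of the identity above for the Exponential distribution,
+where the survival function reduces to `exp(-rate * x)`; it assumes
+`tf.compat.v1.distributions` is available and eager execution:
+
+```python
+import tensorflow as tf
+
+tfd = tf.compat.v1.distributions
+
+rate = 2.0
+dist = tfd.Exponential(rate=rate)
+x = tf.constant([0.1, 0.5, 1.0])
+
+# survival_function(x) = 1 - cdf(x), which here equals exp(-rate * x).
+print(dist.survival_function(x))
+print(1.0 - dist.cdf(x))
+print(tf.exp(-rate * x))
+```
+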

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Gamma.md b/site/en/api_docs/python/tf/compat/v1/distributions/Gamma.md new file mode 100644 index 00000000000..61210d71045 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Gamma.md @@ -0,0 +1,1524 @@ +description: Gamma distribution. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Gamma + + + + + + + + + +Gamma distribution. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +The Gamma distribution is defined over positive real numbers using +parameters `concentration` (aka "alpha") and `rate` (aka "beta"). + +#### Mathematical Details + +The probability density function (pdf) is, + +```none +pdf(x; alpha, beta, x > 0) = x**(alpha - 1) exp(-x beta) / Z +Z = Gamma(alpha) beta**(-alpha) +``` + +where: + +* `concentration = alpha`, `alpha > 0`, +* `rate = beta`, `beta > 0`, +* `Z` is the normalizing constant, and, +* `Gamma` is the [gamma function]( + https://en.wikipedia.org/wiki/Gamma_function). + +The cumulative density function (cdf) is, + +```none +cdf(x; alpha, beta, x > 0) = GammaInc(alpha, beta x) / Gamma(alpha) +``` + +where `GammaInc` is the [lower incomplete Gamma function]( +https://en.wikipedia.org/wiki/Incomplete_gamma_function). + +The parameters can be intuited via their relationship to mean and stddev, + +```none +concentration = alpha = (mean / stddev)**2 +rate = beta = mean / stddev**2 = concentration / mean +``` + +Distribution parameters are automatically broadcast in all functions; see +examples for details. + +Warning: The samples of this distribution are always non-negative. However, +the samples that are smaller than `np.finfo(dtype).tiny` are rounded +to this value, so it appears more often than it should. +This should only be noticeable when the `concentration` is very small, or the +`rate` is very large. See note in tf.random.gamma docstring. + +Samples of this distribution are reparameterized (pathwise differentiable). +The derivatives are computed using the approach described in +(Figurnov et al., 2018). + +#### Examples + +```python +import tensorflow_probability as tfp +tfd = tfp.distributions + +dist = tfd.Gamma(concentration=3.0, rate=2.0) +dist2 = tfd.Gamma(concentration=[3.0, 4.0], rate=[2.0, 3.0]) +``` + +Compute the gradients of samples w.r.t. the parameters: + +```python +concentration = tf.constant(3.0) +rate = tf.constant(2.0) +dist = tfd.Gamma(concentration, rate) +samples = dist.sample(5) # Shape [5] +loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function +# Unbiased stochastic gradients of the loss function +grads = tf.gradients(loss, [concentration, rate]) +``` + +#### References: + +Implicit Reparameterization Gradients: + [Figurnov et al., 2018] + (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients) + ([pdf](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf)) + + + + + + + + + + + + + + + + + + + + + + + +
+`concentration` + +Floating point tensor, the concentration params of the +distribution(s). Must contain only positive values. +
+`rate` + +Floating point tensor, the inverse scale params of the +distribution(s). Must contain only positive values. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, statistics +(e.g., mean, mode, variance) use the value "`NaN`" to indicate the +result is undefined. When `False`, an exception is raised if one or +more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + +
+`TypeError` + +if `concentration` and `rate` are different dtypes. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`concentration` + +Concentration parameter. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`rate` + +Rate parameter. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +
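+A short sketch of the static versus dynamic batch shape, assuming
+`tf.compat.v1.distributions` is available; the parameter values are arbitrary:
+
+```python
+import tensorflow as tf
+
+tfd = tf.compat.v1.distributions
+
+# The parameters broadcast against each other: two concentrations, one rate.
+dist = tfd.Gamma(concentration=[3.0, 4.0], rate=2.0)
+
+print(dist.batch_shape)           # static TensorShape: (2,)
+print(dist.batch_shape_tensor())  # the same shape as a 1-D int32 Tensor: [2]
+```
+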

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + + 
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback-Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] + = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) + = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + + 
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +
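+A numerical check of the identity `H[p, q] = KL[p, q] + H[p]` relating this
+method to `cross_entropy` and `entropy`, assuming `tf.compat.v1.distributions`
+is available and eager execution:
+
+```python
+import tensorflow as tf
+
+tfd = tf.compat.v1.distributions
+
+p = tfd.Gamma(concentration=3.0, rate=2.0)
+q = tfd.Gamma(concentration=4.0, rate=1.0)
+
+kl = p.kl_divergence(q)
+
+# The two printed values should agree up to floating-point error.
+print(p.cross_entropy(q))
+print(kl + p.entropy())
+```
+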

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + +Additional documentation from `Gamma`: + +The mode of a gamma distribution is `(shape - 1) / rate` when +`shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`, +an exception will be raised rather than returning `NaN`. + +
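+A small sketch of the mode formula and of `allow_nan_stats`, assuming
+`tf.compat.v1.distributions` is available; parameter values are arbitrary:
+
+```python
+import tensorflow as tf
+
+tfd = tf.compat.v1.distributions
+
+# concentration ("shape") > 1, so the mode is (3 - 1) / 2 = 1.0.
+print(tfd.Gamma(concentration=3.0, rate=2.0).mode())
+
+# concentration <= 1 with allow_nan_stats=True (the default) yields NaN
+# rather than raising an exception.
+print(tfd.Gamma(concentration=0.5, rate=2.0).mode())
+```
+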

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +
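+A numerical sketch of the definition above, assuming
+`tf.compat.v1.distributions` is available and eager execution:
+
+```python
+import tensorflow as tf
+
+tfd = tf.compat.v1.distributions
+
+dist = tfd.Gamma(concentration=3.0, rate=2.0)
+
+# stddev is the square root of the variance and has the same shape as mean().
+print(dist.stddev())             # sqrt(3) / 2 ~ 0.866
+print(tf.sqrt(dist.variance()))  # the same value
+print(dist.mean().shape == dist.stddev().shape)  # True
+```
+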

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Laplace.md b/site/en/api_docs/python/tf/compat/v1/distributions/Laplace.md new file mode 100644 index 00000000000..e20683c44af --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Laplace.md @@ -0,0 +1,1463 @@ +description: The Laplace distribution with location loc and scale parameters. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Laplace + + + + + + + + + +The Laplace distribution with location `loc` and `scale` parameters. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +#### Mathematical details + +The probability density function (pdf) of this distribution is, + +```none +pdf(x; mu, sigma) = exp(-|x - mu| / sigma) / Z +Z = 2 sigma +``` + +where `loc = mu`, `scale = sigma`, and `Z` is the normalization constant. + +Note that the Laplace distribution can be thought of as two exponential +distributions spliced together "back-to-back." + +The Laplace distribution is a member of the [location-scale family]( +https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be +constructed as (a usage sketch is given under Methods below), + +```none +X ~ Laplace(loc=0, scale=1) +Y = loc + scale * X +``` + + + + + + + + + + + + + + + + + + + + + + 
+`loc` + +Floating point tensor which characterizes the location (center) +of the distribution. +
+`scale` + +Positive floating point tensor which characterizes the spread of +the distribution. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, +statistics (e.g., mean, mode, variance) use the value "`NaN`" to +indicate the result is undefined. When `False`, an exception is raised +if one or more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + +
+`TypeError` + +if `loc` and `scale` are of different dtype. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`loc` + +Distribution parameter for the location. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`scale` + +Distribution parameter for scale. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +
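+A minimal usage sketch of the location-scale construction (referenced from the
+introduction above); it assumes a TensorFlow build that still provides
+`tf.compat.v1.distributions` and eager execution:
+
+```python
+import tensorflow as tf
+
+tfd = tf.compat.v1.distributions
+
+# A standard Laplace and a shifted, scaled version.
+std = tfd.Laplace(loc=0.0, scale=1.0)
+dist = tfd.Laplace(loc=3.0, scale=2.0)
+
+x = tf.constant([1.0, 3.0, 6.0])
+
+# Densities transform as p_dist(x) = p_std((x - loc) / scale) / scale.
+print(dist.prob(x))
+print(std.prob((x - 3.0) / 2.0) / 2.0)
+
+print(dist.mean())    # loc = 3.0
+print(dist.stddev())  # sqrt(2) * scale ~ 2.828
+```
+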

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + + 
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback-Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] + = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) + = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + + 
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + + +

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Multinomial.md b/site/en/api_docs/python/tf/compat/v1/distributions/Multinomial.md new file mode 100644 index 00000000000..a70c45cf211 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Multinomial.md @@ -0,0 +1,1554 @@ +description: Multinomial distribution. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Multinomial + + + + + + + + + +Multinomial distribution. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +This Multinomial distribution is parameterized by `probs`, a (batch of) +length-`K` `prob` (probability) vectors (`K > 1`) such that +`tf.reduce_sum(probs, -1) = 1`, and a `total_count` number of trials, i.e., +the number of trials per draw from the Multinomial. It is defined over a +(batch of) length-`K` vector `counts` such that +`tf.reduce_sum(counts, -1) = total_count`. The Multinomial is identically the +Binomial distribution when `K = 2`. + +#### Mathematical Details + +The Multinomial is a distribution over `K`-class counts, i.e., a length-`K` +vector of non-negative integer `counts = n = [n_0, ..., n_{K-1}]`. + +The probability mass function (pmf) is, + +```none +pmf(n; pi, N) = prod_j (pi_j)**n_j / Z +Z = (prod_j n_j!) / N! +``` + +where: +* `probs = pi = [pi_0, ..., pi_{K-1}]`, `pi_j > 0`, `sum_j pi_j = 1`, +* `total_count = N`, `N` a positive integer, +* `Z` is the normalization constant, and, +* `N!` denotes `N` factorial. + +Distribution parameters are automatically broadcast in all functions; see +examples for details. + +#### Pitfalls + +The number of classes, `K`, must not exceed: +- the largest integer representable by `self.dtype`, i.e., + `2**(mantissa_bits+1)` (IEEE 754), +- the maximum `Tensor` index, i.e., `2**31-1`. + +In other words, + +```python +K <= min(2**31-1, { + tf.float16: 2**11, + tf.float32: 2**24, + tf.float64: 2**53 }[param.dtype]) +``` + +Note: This condition is validated only when `self.validate_args = True`. + +#### Examples + +Create a 3-class distribution in which the 3rd class is the most likely to be +drawn, using logits. + +```python +logits = [-50., -43, 0] +dist = Multinomial(total_count=4., logits=logits) +``` + +Create a 3-class distribution in which the 3rd class is the most likely to be +drawn. + +```python +p = [.2, .3, .5] +dist = Multinomial(total_count=4., probs=p) +``` + +The distribution functions can be evaluated on counts. + +```python +# counts same shape as p. +counts = [1., 0, 3] +dist.prob(counts) # Shape [] + +# p will be broadcast to [[.2, .3, .5], [.2, .3, .5]] to match counts. +counts = [[1., 2, 1], [2, 2, 0]] +dist.prob(counts) # Shape [2] + +# p will be broadcast to shape [5, 7, 3] to match counts. +counts = [[...]] # Shape [5, 7, 3] +dist.prob(counts) # Shape [5, 7] +``` + +Create a 2-batch of 3-class distributions. + +```python +p = [[.1, .2, .7], [.3, .3, .4]] # Shape [2, 3] +dist = Multinomial(total_count=[4., 5], probs=p) + +counts = [[2., 1, 1], [3, 1, 1]] +dist.prob(counts) # Shape [2] + +dist.sample(5) # Shape [5, 2, 3] +``` + + + + + + + + + + + + + + + + + + + + + + + + + 
+`total_count` + +Non-negative floating point tensor with shape broadcastable +to `[N1,..., Nm]` with `m >= 0`. Defines this as a batch of +`N1 x ... x Nm` different Multinomial distributions. Its components +should be equal to integer values. +
+`logits` + +Floating point tensor representing unnormalized log-probabilities +of a positive event with shape broadcastable to +`[N1,..., Nm, K]` `m >= 0`, and the same dtype as `total_count`. Defines +this as a batch of `N1 x ... x Nm` different `K` class Multinomial +distributions. Only one of `logits` or `probs` should be passed in. +
+`probs` + +Positive floating point tensor with shape broadcastable to +`[N1,..., Nm, K]` `m >= 0` and same dtype as `total_count`. Defines +this as a batch of `N1 x ... x Nm` different `K` class Multinomial +distributions. `probs`'s components in the last portion of its shape +should sum to `1`. Only one of `logits` or `probs` should be passed in. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, statistics +(e.g., mean, mode, variance) use the value "`NaN`" to indicate the +result is undefined. When `False`, an exception is raised if one or +more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`logits` + +Vector of coordinatewise logits. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`probs` + +Probability of drawing a `1` in that coordinate. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`total_count` + +Number of trials used to construct a sample. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +
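+For the Multinomial the event shape is `[K]`, so `covariance()` returns a
+(batch of) `K x K` matrix with entries
+`total_count * (p_i * delta_ij - p_i * p_j)`. A short sketch, assuming
+`tf.compat.v1.distributions` is available and that this distribution
+implements the method:
+
+```python
+import tensorflow as tf
+
+tfd = tf.compat.v1.distributions
+
+dist = tfd.Multinomial(total_count=4., probs=[0.2, 0.3, 0.5])
+
+# Scalar batch and event_shape [3], so the covariance is one 3 x 3 matrix.
+cov = dist.covariance()
+print(cov.shape)  # (3, 3)
+print(cov)
+```
+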

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + + 
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback--Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] + = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) + = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + +Additional documentation from `Multinomial`: + +For each batch of counts, `value = [n_0, ... +,n_{k-1}]`, `P[value]` is the probability that after sampling `self.total_count` +draws from this Multinomial distribution, the number of draws falling in class +`j` is `n_j`. Since this definition is [exchangeable]( +https://en.wikipedia.org/wiki/Exchangeable_random_variables), different +sequences have the same counts so the probability includes a combinatorial +coefficient. + +Note: `value` must be a non-negative tensor with dtype `self.dtype`, have no +fractional components, and satisfy +`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable +with `self.probs` and `self.total_count`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +
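A short illustration of the count constraints spelled out above (a sketch, not generated output): it assumes the `tfd = tfp.distributions` alias used elsewhere in these docs, and the counts and probabilities are arbitrary.

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

dist = tfd.Multinomial(total_count=4., probs=[0.2, 0.3, 0.5])

# `counts` is non-negative, has no fractional part, and sums to
# `total_count` along the last axis (1 + 1 + 2 == 4).
counts = [1., 1., 2.]
log_p = dist.log_prob(counts)  # scalar; includes the multinomial coefficient
```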

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + + +

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Normal.md b/site/en/api_docs/python/tf/compat/v1/distributions/Normal.md new file mode 100644 index 00000000000..13c4d9b701f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Normal.md @@ -0,0 +1,1498 @@ +description: The Normal distribution with location loc and scale parameters. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Normal + + + + + + + + + +The Normal distribution with location `loc` and `scale` parameters. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +#### Mathematical details + +The probability density function (pdf) is, + +```none +pdf(x; mu, sigma) = exp(-0.5 (x - mu)**2 / sigma**2) / Z +Z = (2 pi sigma**2)**0.5 +``` + +where `loc = mu` is the mean, `scale = sigma` is the std. deviation, and, `Z` +is the normalization constant. + +The Normal distribution is a member of the [location-scale family]( +https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be +constructed as, + +```none +X ~ Normal(loc=0, scale=1) +Y = loc + scale * X +``` + +#### Examples + +Examples of initialization of one or a batch of distributions. + +```python +import tensorflow_probability as tfp +tfd = tfp.distributions + +# Define a single scalar Normal distribution. +dist = tfd.Normal(loc=0., scale=3.) + +# Evaluate the cdf at 1, returning a scalar. +dist.cdf(1.) + +# Define a batch of two scalar valued Normals. +# The first has mean 1 and standard deviation 11, the second 2 and 22. +dist = tfd.Normal(loc=[1, 2.], scale=[11, 22.]) + +# Evaluate the pdf of the first distribution on 0, and the second on 1.5, +# returning a length two tensor. +dist.prob([0, 1.5]) + +# Get 3 samples, returning a 3 x 2 tensor. +dist.sample([3]) +``` + +Arguments are broadcast when possible. + +```python +# Define a batch of two scalar valued Normals. +# Both have mean 1, but different standard deviations. +dist = tfd.Normal(loc=1., scale=[11, 22.]) + +# Evaluate the pdf of both distributions on the same point, 3.0, +# returning a length 2 tensor. +dist.prob(3.0) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`loc` + +Floating point tensor; the means of the distribution(s). +
+`scale` + +Floating point tensor; the stddevs of the distribution(s). +Must contain only positive values. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, +statistics (e.g., mean, mode, variance) use the value "`NaN`" to +indicate the result is undefined. When `False`, an exception is raised +if one or more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + +
+`TypeError` + +if `loc` and `scale` have different `dtype`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`loc` + +Distribution parameter for the mean. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`scale` + +Distribution parameter for standard deviation. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +
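To make the override semantics described above concrete, a minimal sketch (arbitrary values) using the `tfd = tfp.distributions` alias from the examples above:

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

dist = tfd.Normal(loc=0., scale=3.)

# Only `scale` is overridden; `loc` (and the other constructor arguments)
# are carried over from `dist.parameters`.
wider = dist.copy(scale=10.)
```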

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +
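One way to read the definition above is via the identity `H[P, Q] = H[P] + KL[P || Q]`. A small sketch with two Normals (arbitrary parameters, `tfd = tfp.distributions` as in the examples above):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=2.)

h_pq = p.cross_entropy(q)
# Numerically the same quantity, via the decomposition H[p, q] = H[p] + KL[p || q]:
also_h_pq = p.entropy() + p.kl_divergence(q)
```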

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback--Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] + = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) + = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +
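For two Normals the divergence above has a well-known closed form, which makes a handy sanity check. A sketch with arbitrary parameters (`tfd = tfp.distributions` as in the examples above):

```python
import numpy as np
import tensorflow_probability as tfp
tfd = tfp.distributions

p = tfd.Normal(loc=0., scale=1.)   # N(mu1=0, sigma1=1)
q = tfd.Normal(loc=1., scale=2.)   # N(mu2=1, sigma2=2)

kl = p.kl_divergence(q)

# Closed form: log(sigma2/sigma1) + (sigma1**2 + (mu1 - mu2)**2) / (2 * sigma2**2) - 0.5
expected = np.log(2.) + (1. + 1.) / 8. - 0.5   # ~0.4431; `kl` should match this value
```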

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +
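The `x << -1` remark above is easiest to see in the far left tail, where `cdf(x)` underflows to 0 in float32. A sketch with an arbitrary evaluation point (`tfd = tfp.distributions` as in the examples above):

```python
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

dist = tfd.Normal(loc=0., scale=1.)

x = -40.
naive = tf.math.log(dist.cdf(x))  # cdf underflows to 0 this far out, so this is -inf
stable = dist.log_cdf(x)          # finite (a large negative number)
```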

log_prob

+ +View source + + + +Log probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + + +

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +
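As a minimal illustration for the class documented here: for a scalar Normal, each parameter simply takes the requested sample shape. The comment is a rough rendering of the result, not captured output, and the exact printed form depends on the TensorFlow version.

```python
import tensorflow as tf

shapes = tf.compat.v1.distributions.Normal.param_shapes([100])
# Both entries equal the requested sample shape, i.e. a shape tensor of [100]:
# drawing a single sample of shape [100] needs `loc` and `scale` of shape [100].
```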

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +
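A quick sketch of the inverse relationship described above (arbitrary probabilities, `tfd = tfp.distributions` as in the examples above):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

dist = tfd.Normal(loc=0., scale=1.)

dist.quantile(0.975)           # ~1.96, the familiar 97.5th percentile of N(0, 1)
dist.cdf(dist.quantile(0.5))   # ~0.5: quantile inverts cdf
```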

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +
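The returned `Tensor` has `sample_shape` prepended to the distribution's batch (and event) shape. A sketch with a batch of two Normals (arbitrary parameters, `tfd = tfp.distributions` as above):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

# batch_shape [2], event_shape [].
dist = tfd.Normal(loc=[1., 2.], scale=[1., 1.])

dist.sample()              # shape [2]: one draw per batch member
dist.sample([3])           # shape [3, 2]: sample_shape [3] is prepended
dist.sample([3], seed=42)  # `seed` pins down the RNG stream
```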

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +
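A minimal sketch of the identity above, with an arbitrary evaluation point (`tfd = tfp.distributions` as in the examples above):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

dist = tfd.Normal(loc=0., scale=1.)

x = 1.5
dist.survival_function(x)   # P[X > 1.5]
1. - dist.cdf(x)            # same value, but less accurate far into the right tail
```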

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/RegisterKL.md b/site/en/api_docs/python/tf/compat/v1/distributions/RegisterKL.md new file mode 100644 index 00000000000..335960d012a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/RegisterKL.md @@ -0,0 +1,142 @@ +description: Decorator to register a KL divergence implementation function. + +
+ + + + +
+ +# tf.compat.v1.distributions.RegisterKL + + + + + + + + + +Decorator to register a KL divergence implementation function. + + + + + + + + +#### Usage: + + + +@distributions.RegisterKL(distributions.Normal, distributions.Normal) +def _kl_normal_mvn(norm_a, norm_b): + # Return KL(norm_a || norm_b) + + + + + + + + + + + + + +
+`dist_cls_a` + +the class of the first argument of the KL divergence. +
+`dist_cls_b` + +the class of the second argument of the KL divergence. +
+ + + +## Methods + +

__call__

+ +View source + + + +Perform the KL registration. + + + + + + + + + + + +
Args
+`kl_fn` + +The function to use for the KL divergence. +
+ + + + + + + + + + + +
Returns
+kl_fn +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if kl_fn is not a callable. +
+`ValueError` + +if a KL divergence function has already been registered for +the given argument classes. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/ReparameterizationType.md b/site/en/api_docs/python/tf/compat/v1/distributions/ReparameterizationType.md new file mode 100644 index 00000000000..b286416368c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/ReparameterizationType.md @@ -0,0 +1,99 @@ +description: Instances of this class represent how sampling is reparameterized. + +
+ + + + +
+ +# tf.compat.v1.distributions.ReparameterizationType + + + + + + + + + +Instances of this class represent how sampling is reparameterized. + + + + + + + +Two static instances exist in the distributions library, signifying +one of two possible properties for samples from a distribution: + +`FULLY_REPARAMETERIZED`: Samples from the distribution are fully + reparameterized, and straight-through gradients are supported. + +`NOT_REPARAMETERIZED`: Samples from the distribution are not fully + reparameterized, and straight-through gradients are either partially + unsupported or are not supported at all. In this case, for purposes of + e.g. RL or variational inference, it is generally safest to wrap the + sample results in a `stop_gradients` call and use policy + gradients / surrogate loss instead. + +## Methods + +

__eq__

+ +View source + + + +Determine if this `ReparameterizationType` is equal to another. + +Since ReparameterizationType instances are constant static global +instances, equality checks if two instances' id() values are equal. + + + + + + + + + + +
Args
+`other` + +Object to compare against. +
+ + + + + + + + + + + +
Returns
+`self is other`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/StudentT.md b/site/en/api_docs/python/tf/compat/v1/distributions/StudentT.md new file mode 100644 index 00000000000..79ab210fcd8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/StudentT.md @@ -0,0 +1,1570 @@ +description: Student's t-distribution. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.StudentT + + + + + + + + + +Student's t-distribution. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +This distribution has parameters: degree of freedom `df`, location `loc`, +and `scale`. + +#### Mathematical details + +The probability density function (pdf) is, + +```none +pdf(x; df, mu, sigma) = (1 + y**2 / df)**(-0.5 (df + 1)) / Z +where, +y = (x - mu) / sigma +Z = abs(sigma) sqrt(df pi) Gamma(0.5 df) / Gamma(0.5 (df + 1)) +``` + +where: +* `loc = mu`, +* `scale = sigma`, and, +* `Z` is the normalization constant, and, +* `Gamma` is the [gamma function]( + https://en.wikipedia.org/wiki/Gamma_function). + +The StudentT distribution is a member of the [location-scale family]( +https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be +constructed as, + +```none +X ~ StudentT(df, loc=0, scale=1) +Y = loc + scale * X +``` + +Notice that `scale` has semantics more similar to standard deviation than +variance. However it is not actually the std. deviation; the Student's +t-distribution std. dev. is `scale sqrt(df / (df - 2))` when `df > 2`. + +Samples of this distribution are reparameterized (pathwise differentiable). +The derivatives are computed using the approach described in +(Figurnov et al., 2018). + +#### Examples + +Examples of initialization of one or a batch of distributions. + +```python +import tensorflow_probability as tfp +tfd = tfp.distributions + +# Define a single scalar Student t distribution. +single_dist = tfd.StudentT(df=3) + +# Evaluate the pdf at 1, returning a scalar Tensor. +single_dist.prob(1.) + +# Define a batch of two scalar valued Student t's. +# The first has degrees of freedom 2, mean 1, and scale 11. +# The second 3, 2 and 22. +multi_dist = tfd.StudentT(df=[2, 3], loc=[1, 2.], scale=[11, 22.]) + +# Evaluate the pdf of the first distribution on 0, and the second on 1.5, +# returning a length two tensor. +multi_dist.prob([0, 1.5]) + +# Get 3 samples, returning a 3 x 2 tensor. +multi_dist.sample(3) +``` + +Arguments are broadcast when possible. + +```python +# Define a batch of two Student's t distributions. +# Both have df 2 and mean 1, but different scales. +dist = tfd.StudentT(df=2, loc=1, scale=[11, 22.]) + +# Evaluate the pdf of both distributions on the same point, 3.0, +# returning a length 2 tensor. +dist.prob(3.0) +``` + +Compute the gradients of samples w.r.t. the parameters: + +```python +df = tf.constant(2.0) +loc = tf.constant(2.0) +scale = tf.constant(11.0) +dist = tfd.StudentT(df=df, loc=loc, scale=scale) +samples = dist.sample(5) # Shape [5] +loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function +# Unbiased stochastic gradients of the loss function +grads = tf.gradients(loss, [df, loc, scale]) +``` + +#### References: + +Implicit Reparameterization Gradients: + [Figurnov et al., 2018] + (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients) + ([pdf](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf)) + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`df` + +Floating-point `Tensor`. The degrees of freedom of the +distribution(s). `df` must contain only positive values. +
+`loc` + +Floating-point `Tensor`. The mean(s) of the distribution(s). +
+`scale` + +Floating-point `Tensor`. The scaling factor(s) for the +distribution(s). Note that `scale` is not technically the standard +deviation of this distribution but has semantics more similar to +standard deviation than variance. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, +statistics (e.g., mean, mode, variance) use the value "`NaN`" to +indicate the result is undefined. When `False`, an exception is raised +if one or more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + +
+`TypeError` + +if loc and scale are different dtypes. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`df` + +Degrees of freedom in these Student's t distribution(s). +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`loc` + +Locations of these Student's t distribution(s). +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`scale` + +Scaling factors of these Student's t distribution(s). +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback--Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] + = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) + = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + +Additional documentation from `StudentT`: + +The mean of Student's T equals `loc` if `df > 1`, otherwise it is +`NaN`. If `self.allow_nan_stats=False`, then an exception will be raised +rather than returning `NaN`. +
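To make the interaction with `allow_nan_stats` concrete, here is a sketch (arbitrary parameters, `tfd = tfp.distributions` as in the class examples above):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

# df <= 1 has no finite mean. With the default allow_nan_stats=True the
# undefined batch member is reported as NaN:
dist = tfd.StudentT(df=[1., 3.], loc=[0., 0.], scale=[1., 1.])
dist.mean()   # ~[nan, 0.]

# With allow_nan_stats=False, asking for the mean raises instead:
strict = tfd.StudentT(df=1., loc=0., scale=1., allow_nan_stats=False)
# strict.mean()  # raises when evaluated
```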

mode

+ +View source + + + +Mode. + + +

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + +Additional documentation from `StudentT`: + +The variance for Student's T equals + +``` +df / (df - 2), when df > 2 +infinity, when 1 < df <= 2 +NaN, when df <= 1 +``` + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/Uniform.md b/site/en/api_docs/python/tf/compat/v1/distributions/Uniform.md new file mode 100644 index 00000000000..3db63c17a78 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/Uniform.md @@ -0,0 +1,1492 @@ +description: Uniform distribution with low and high parameters. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.distributions.Uniform + + + + + + + + + +Uniform distribution with `low` and `high` parameters. + +Inherits From: [`Distribution`](../../../../tf/compat/v1/distributions/Distribution.md) + + + + + + + +#### Mathematical Details + +The probability density function (pdf) is, + +```none +pdf(x; a, b) = I[a <= x < b] / Z +Z = b - a +``` + +where + +- `low = a`, +- `high = b`, +- `Z` is the normalizing constant, and +- `I[predicate]` is the [indicator function]( + https://en.wikipedia.org/wiki/Indicator_function) for `predicate`. + +The parameters `low` and `high` must be shaped in a way that supports +broadcasting (e.g., `high - low` is a valid operation). + +#### Examples + +```python +# Without broadcasting: +u1 = Uniform(low=3.0, high=4.0) # a single uniform distribution [3, 4] +u2 = Uniform(low=[1.0, 2.0], + high=[3.0, 4.0]) # 2 distributions [1, 3], [2, 4] +u3 = Uniform(low=[[1.0, 2.0], + [3.0, 4.0]], + high=[[1.5, 2.5], + [3.5, 4.5]]) # 4 distributions +``` + +```python +# With broadcasting: +u1 = Uniform(low=3.0, high=[5.0, 6.0, 7.0]) # 3 distributions +``` + + + + + + + + + + + + + + + + + + + + + + +
+`low` + +Floating point tensor, lower boundary of the output interval. Must +have `low < high`. +
+`high` + +Floating point tensor, upper boundary of the output interval. Must +have `low < high`. +
+`validate_args` + +Python `bool`, default `False`. When `True` distribution +parameters are checked for validity despite possibly degrading runtime +performance. When `False` invalid inputs may silently render incorrect +outputs. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, statistics +(e.g., mean, mode, variance) use the value "`NaN`" to indicate the +result is undefined. When `False`, an exception is raised if one or +more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if `low >= high` and `validate_args=False`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_nan_stats` + +Python `bool` describing behavior when a stat is undefined. + +Stats return +/- infinity when it makes sense. E.g., the variance of a +Cauchy distribution is infinity. However, sometimes the statistic is +undefined, e.g., if a distribution's pdf does not achieve a maximum within +the support of the distribution, the mode is undefined. If the mean is +undefined, then by definition the variance is undefined. E.g. the mean for +Student's T for df = 1 is undefined (no clear way to say it is either + or - +infinity), so the variance = E[(X - mean)**2] is also undefined. +
+`batch_shape` + +Shape of a single sample from a single event index as a `TensorShape`. + +May be partially defined or unknown. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. +
+`dtype` + +The `DType` of `Tensor`s handled by this `Distribution`. +
+`event_shape` + +Shape of a single sample from a single batch as a `TensorShape`. + +May be partially defined or unknown. +
+`high` + +Upper boundary of the output interval. +
+`low` + +Lower boundary of the output interval. +
+`name` + +Name prepended to all ops created by this `Distribution`. +
+`parameters` + +Dictionary of parameters used to instantiate this `Distribution`. +
+`reparameterization_type` + +Describes how samples from the distribution are reparameterized. + +Currently this is one of the static instances +`distributions.FULLY_REPARAMETERIZED` +or `distributions.NOT_REPARAMETERIZED`. +
+`validate_args` + +Python `bool` indicating possibly expensive checks are enabled. +
+ + + +## Methods + +

batch_shape_tensor

+ +View source + + + +Shape of a single sample from a single event index as a 1-D `Tensor`. + +The batch dimensions are indexes into independent, non-identical +parameterizations of this distribution. + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`batch_shape` + +`Tensor`. +
+ + + +

cdf

+ +View source + + + +Cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +cdf(x) := P[X <= x] +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +
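For a Uniform the cdf defined above is piecewise linear across the support `[low, high)`. A sketch with arbitrary endpoints (`tfd = tfp.distributions` as in the examples above):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

u = tfd.Uniform(low=0., high=4.)

u.cdf(-1.)   # 0.0  (below the support)
u.cdf(1.)    # 0.25, i.e. (x - low) / (high - low)
u.cdf(9.)    # 1.0  (above the support)
```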

copy

+ +View source + + + +Creates a deep copy of the distribution. + +Note: the copy distribution may continue to depend on the original +initialization arguments. + + + + + + + + + + +
Args
+`**override_parameters_kwargs` + +String/value dictionary of initialization +arguments to override with new values. +
+ + + + + + + + + + + + +
Returns
+`distribution` + +A new instance of `type(self)` initialized from the union +of self.parameters and override_parameters_kwargs, i.e., +`dict(self.parameters, **override_parameters_kwargs)`. +
+ + + +

covariance

+ +View source + + + +Covariance. + +Covariance is (possibly) defined only for non-scalar-event distributions. + +For example, for a length-`k`, vector-valued distribution, it is calculated +as, + +```none +Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] +``` + +where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` +denotes expectation. + +Alternatively, for non-vector, multivariate distributions (e.g., +matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices +under some vectorization of the events, i.e., + +```none +Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] +``` + +where `Cov` is a (batch of) `k' x k'` matrices, +`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function +mapping indices of this distribution's event dimensions to indices of a +length-`k'` vector. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`covariance` + +Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` +where the first `n` dimensions are batch coordinates and +`k' = reduce_prod(self.event_shape)`. +
+ + + +

cross_entropy

+ +View source + + + +Computes the (Shannon) cross entropy. + +Denote this distribution (`self`) by `P` and the `other` distribution by +`Q`. Assuming `P, Q` are absolutely continuous with respect to +one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) +cross entropy is defined as: + +```none +H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) +``` + +where `F` denotes the support of the random variable `X ~ P`. + + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`cross_entropy` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of (Shannon) cross entropy. +
+ + + +

entropy

+ +View source + + + +Shannon entropy in nats. + + +

event_shape_tensor

+ +View source + + + +Shape of a single sample from a single batch as a 1-D int32 `Tensor`. + + + + + + + + + + + +
Args
+`name` + +name to give to the op +
+ + + + + + + + + + + + +
Returns
+`event_shape` + +`Tensor`. +
+ + + +

is_scalar_batch

+ +View source + + + +Indicates that `batch_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_batch` + +`bool` scalar `Tensor`. +
+ + + +

is_scalar_event

+ +View source + + + +Indicates that `event_shape == []`. + + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`is_scalar_event` + +`bool` scalar `Tensor`. +
+ + + +

kl_divergence

+ +View source + + + +Computes the Kullback--Leibler divergence. + +Denote this distribution (`self`) by `p` and the `other` distribution by +`q`. Assuming `p, q` are absolutely continuous with respect to reference +measure `r`, the KL divergence is defined as: + +```none +KL[p, q] = E_p[log(p(X)/q(X))] + = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) + = H[p, q] - H[p] +``` + +where `F` denotes the support of the random variable `X ~ p`, `H[., .]` +denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. + + + + + + + + + + + + + +
Args
+`other` + +`tfp.distributions.Distribution` instance. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`kl_divergence` + +`self.dtype` `Tensor` with shape `[B1, ..., Bn]` +representing `n` different calculations of the Kullback-Leibler +divergence. +
+ + + +

log_cdf

+ +View source + + + +Log cumulative distribution function. + +Given random variable `X`, the cumulative distribution function `cdf` is: + +```none +log_cdf(x) := Log[ P[X <= x] ] +``` + +Often, a numerical approximation can be used for `log_cdf(x)` that yields +a more accurate answer than simply taking the logarithm of the `cdf` when +`x << -1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`logcdf` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_prob

+ +View source + + + +Log probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`log_prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

log_survival_function

+ +View source + + + +Log survival function. + +Given random variable `X`, the survival function is defined: + +```none +log_survival_function(x) = Log[ P[X > x] ] + = Log[ 1 - P[X <= x] ] + = Log[ 1 - cdf(x) ] +``` + +Typically, different numerical approximations can be used for the log +survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

mean

+ +View source + + + +Mean. + + +

mode

+ +View source + + + +Mode. + + +

param_shapes

+ +View source + + + +Shapes of parameters given the desired shape of a call to `sample()`. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. + +Subclasses should override class method `_param_shapes`. + + + + + + + + + + + + + +
Args
+`sample_shape` + +`Tensor` or python list/tuple. Desired shape of a call to +`sample()`. +
+`name` + +name to prepend ops with. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `Tensor` shapes. +
+ + + +

param_static_shapes

+ +View source + + + +param_shapes with static (i.e. `TensorShape`) shapes. + +This is a class method that describes what key/value arguments are required +to instantiate the given `Distribution` so that a particular shape is +returned for that instance's call to `sample()`. Assumes that the sample's +shape is known statically. + +Subclasses should override class method `_param_shapes` to return +constant-valued tensors when constant values are fed. + + + + + + + + + + +
Args
+`sample_shape` + +`TensorShape` or python list/tuple. Desired shape of a call +to `sample()`. +
+ + + + + + + + + + + +
Returns
+`dict` of parameter name to `TensorShape`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `sample_shape` is a `TensorShape` and is not fully defined. +
+ + + +

prob

+ +View source + + + +Probability density/mass function. + + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`prob` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +

quantile

+ +View source + + + +Quantile function. Aka "inverse cdf" or "percent point function". + +Given random variable `X` and `p in [0, 1]`, the `quantile` is: + +```none +quantile(p) := x such that P[X <= x] == p +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`quantile` + +a `Tensor` of shape `sample_shape(x) + self.batch_shape` with +values of type `self.dtype`. +
+ + + +
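Loosely, `quantile` inverts `cdf`. A minimal sketch with this module's `Normal` class:

```python
import tensorflow as tf

tfd = tf.compat.v1.distributions  # deprecated; tfp.distributions is the replacement

d = tfd.Normal(loc=0., scale=1.)

x = d.quantile(0.975)  # ~1.96 for a standard Normal
p = d.cdf(x)           # recovers ~0.975, since quantile is the inverse CDF
```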

range

+ +View source + + + +`high - low`. + + +

sample

+ +View source + + + +Generate samples of the specified shape. + +Note that a call to `sample()` without arguments will generate a single +sample. + + + + + + + + + + + + + + + + +
Args
+`sample_shape` + +0D or 1D `int32` `Tensor`. Shape of the generated samples. +
+`seed` + +Python integer seed for RNG +
+`name` + +name to give to the op. +
+ + + + + + + + + + + + +
Returns
+`samples` + +a `Tensor` with prepended dimensions `sample_shape`. +
+ + + +
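A sketch (using this module's `Normal` class) of how `sample_shape` is prepended to the batch shape:

```python
import tensorflow as tf

tfd = tf.compat.v1.distributions  # deprecated; tfp.distributions is the replacement

d = tfd.Normal(loc=[0., 10.], scale=[1., 1.])  # batch_shape [2]

s0 = d.sample()        # shape [2]: one draw per batch member
s1 = d.sample(5)       # shape [5, 2]: sample_shape 5 is prepended
s2 = d.sample([3, 4])  # shape [3, 4, 2]
```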

stddev

+ +View source + + + +Standard deviation. + +Standard deviation is defined as, + +```none +stddev = E[(X - E[X])**2]**0.5 +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `stddev.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`stddev` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + +
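For instance (a sketch with this module's `Uniform` class), `stddev` is simply the square root of `variance`:

```python
import tensorflow as tf

tfd = tf.compat.v1.distributions  # deprecated; tfp.distributions is the replacement

d = tfd.Uniform(low=0., high=2.)

sd = d.stddev()     # sqrt((high - low)**2 / 12) ~= 0.577
var = d.variance()  # (high - low)**2 / 12 ~= 0.333; sd == sqrt(var)
```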

survival_function

+ +View source + + + +Survival function. + +Given random variable `X`, the survival function is defined: + +```none +survival_function(x) = P[X > x] + = 1 - P[X <= x] + = 1 - cdf(x). +``` + + + + + + + + + + + + + +
Args
+`value` + +`float` or `double` `Tensor`. +
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + +
Returns
+`Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type +`self.dtype`. +
+ + + +

variance

+ +View source + + + +Variance. + +Variance is defined as, + +```none +Var = E[(X - E[X])**2] +``` + +where `X` is the random variable associated with this distribution, `E` +denotes expectation, and `Var.shape = batch_shape + event_shape`. + + + + + + + + + + +
Args
+`name` + +Python `str` prepended to names of ops created by this function. +
+ + + + + + + + + + + + +
Returns
+`variance` + +Floating-point `Tensor` with shape identical to +`batch_shape + event_shape`, i.e., the same shape as `self.mean()`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/distributions/kl_divergence.md b/site/en/api_docs/python/tf/compat/v1/distributions/kl_divergence.md new file mode 100644 index 00000000000..3d1dd250d45 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/distributions/kl_divergence.md @@ -0,0 +1,124 @@ +description: Get the KL-divergence KL(distribution_a || distribution_b). (deprecated) + +
+ + +
+ +# tf.compat.v1.distributions.kl_divergence + + + + + + + + + +Get the KL-divergence KL(distribution_a || distribution_b). (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2019-01-01. +Instructions for updating: +The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`. + +If there is no KL method registered specifically for `type(distribution_a)` +and `type(distribution_b)`, then the class hierarchies of these types are +searched. + +If one KL method is registered between any pairs of classes in these two +parent hierarchies, it is used. + +If more than one such registered method exists, the method whose registered +classes have the shortest sum MRO paths to the input types is used. + +If more than one such shortest path exists, the first method +identified in the search is used (favoring a shorter MRO distance to +`type(distribution_a)`). + + + + + + + + + + + + + + + + + + + +
+`distribution_a` + +The first distribution. +
+`distribution_b` + +The second distribution. +
+`allow_nan_stats` + +Python `bool`, default `True`. When `True`, +statistics (e.g., mean, mode, variance) use the value "`NaN`" to +indicate the result is undefined. When `False`, an exception is raised +if one or more of the statistic's batch members are undefined. +
+`name` + +Python `str` name prefixed to Ops created by this class. +
+ + + + + + + + + + + +
+A Tensor with the batchwise KL-divergence between `distribution_a` +and `distribution_b`. +
+ + + + + + + + + + + + +
+`NotImplementedError` + +If no KL method is defined for distribution types +of `distribution_a` and `distribution_b`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/dtypes.md b/site/en/api_docs/python/tf/compat/v1/dtypes.md new file mode 100644 index 00000000000..7c8e8eeb992 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/dtypes.md @@ -0,0 +1,91 @@ +description: Public API for tf.dtypes namespace. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# Module: tf.compat.v1.dtypes + + + + + + + + + +Public API for tf.dtypes namespace. + + + +## Classes + +[`class DType`](../../../tf/dtypes/DType.md): Represents the type of the elements in a `Tensor`. + +## Functions + +[`as_dtype(...)`](../../../tf/dtypes/as_dtype.md): Converts the given `type_value` to a `DType`. + +[`as_string(...)`](../../../tf/strings/as_string.md): Converts each entry in the given tensor to strings. + +[`cast(...)`](../../../tf/cast.md): Casts a tensor to a new type. + +[`complex(...)`](../../../tf/dtypes/complex.md): Converts two real numbers to a complex number. + +[`saturate_cast(...)`](../../../tf/dtypes/saturate_cast.md): Performs a safe saturating cast of `value` to `dtype`. + +## Other Members + +* `QUANTIZED_DTYPES` +* `bfloat16` +* `bool` +* `complex128` +* `complex64` +* `double` +* `float16` +* `float32` +* `float64` +* `half` +* `int16` +* `int32` +* `int64` +* `int8` +* `qint16` +* `qint32` +* `qint8` +* `quint16` +* `quint8` +* `resource` +* `string` +* `uint16` +* `uint32` +* `uint64` +* `uint8` +* `variant` diff --git a/site/en/api_docs/python/tf/compat/v1/enable_control_flow_v2.md b/site/en/api_docs/python/tf/compat/v1/enable_control_flow_v2.md new file mode 100644 index 00000000000..c0c622fcc7f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/enable_control_flow_v2.md @@ -0,0 +1,44 @@ +description: Use control flow v2. + +
+ + +
+ +# tf.compat.v1.enable_control_flow_v2 + + + + + + + + + +Use control flow v2. + + + + + + + +control flow v2 (cfv2) is an improved version of control flow in TensorFlow +with support for higher order derivatives. Enabling cfv2 will change the +graph/function representation of control flow, e.g., tf.while_loop and +tf.cond will generate functional `While` and `If` ops instead of low-level +`Switch`, `Merge` etc. ops. Note: Importing and running graphs exported +with old control flow will still be supported. + +Calling tf.enable_control_flow_v2() lets you opt-in to this TensorFlow 2.0 +feature. + +Note: v2 control flow is always enabled inside of tf.function. Calling this +function is not required. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/enable_eager_execution.md b/site/en/api_docs/python/tf/compat/v1/enable_eager_execution.md new file mode 100644 index 00000000000..08e4deffdfd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/enable_eager_execution.md @@ -0,0 +1,132 @@ +description: Enables eager execution for the lifetime of this program. + +
+ + +
+ +# tf.compat.v1.enable_eager_execution + + + + + + + + + +Enables eager execution for the lifetime of this program. + + + + + + + +Eager execution provides an imperative interface to TensorFlow. With eager +execution enabled, TensorFlow functions execute operations immediately (as +opposed to adding to a graph to be executed later in a tf.compat.v1.Session) +and +return concrete values (as opposed to symbolic references to a node in a +computational graph). + +#### For example: + + + +```python +tf.compat.v1.enable_eager_execution() + +# After eager execution is enabled, operations are executed as they are +# defined and Tensor objects hold concrete values, which can be accessed as +# numpy.ndarray`s through the numpy() method. +assert tf.multiply(6, 7).numpy() == 42 +``` + +Eager execution cannot be enabled after TensorFlow APIs have been used to +create or execute graphs. It is typically recommended to invoke this function +at program startup and not in a library (as most libraries should be usable +both with and without eager execution). + + + + + + + + + + + + + + + + +
+`config` + +(Optional.) A tf.compat.v1.ConfigProto to use to configure the +environment in which operations are executed. Note that +tf.compat.v1.ConfigProto is also used to configure graph execution (via +tf.compat.v1.Session) and many options within tf.compat.v1.ConfigProto +are not implemented (or are irrelevant) when eager execution is enabled. +
+`device_policy` + +(Optional.) Policy controlling how operations requiring +inputs on a specific device (e.g., a GPU 0) handle inputs on a different +device (e.g. GPU 1 or CPU). When set to None, an appropriate value will +be picked automatically. The value picked may change between TensorFlow +releases. +Valid values: +- tf.contrib.eager.DEVICE_PLACEMENT_EXPLICIT: raises an error if the +placement is not correct. +- tf.contrib.eager.DEVICE_PLACEMENT_WARN: copies the tensors which are not +on the right device but logs a warning. +- tf.contrib.eager.DEVICE_PLACEMENT_SILENT: silently copies the tensors. +Note that this may hide performance problems as there is no notification +provided when operations are blocked on the tensor being copied between +devices. +- tf.contrib.eager.DEVICE_PLACEMENT_SILENT_FOR_INT32: silently copies +int32 tensors, raising errors on the other ones. +
+`execution_mode` + +(Optional.) Policy controlling how operations dispatched are +actually executed. When set to None, an appropriate value will be picked +automatically. The value picked may change between TensorFlow releases. +Valid values: +- tf.contrib.eager.SYNC: executes each operation synchronously. +- tf.contrib.eager.ASYNC: executes each operation asynchronously. These +operations may return "non-ready" handles. +
+ + + + + + + + + + + + +
+`ValueError` + +If eager execution is enabled after creating/executing a +TensorFlow graph, or if options provided conflict with a previous call +to this function. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/enable_resource_variables.md b/site/en/api_docs/python/tf/compat/v1/enable_resource_variables.md new file mode 100644 index 00000000000..f99b78a80c5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/enable_resource_variables.md @@ -0,0 +1,43 @@ +description: Creates resource variables by default. + +
+ + +
+ +# tf.compat.v1.enable_resource_variables + + + + + + + + + +Creates resource variables by default. + + + + + + + +Resource variables are improved versions of TensorFlow variables with a +well-defined memory model. Accessing a resource variable reads its value, and +all ops which access a specific read value of the variable are guaranteed to +see the same value for that tensor. Writes which happen after a read (by +having a control or data dependency on the read) are guaranteed not to affect +the value of the read tensor, and similarly writes which happen before a read +are guaranteed to affect the value. No guarantees are made about unordered +read/write pairs. + +Calling tf.enable_resource_variables() lets you opt-in to this TensorFlow 2.0 +feature. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/enable_tensor_equality.md b/site/en/api_docs/python/tf/compat/v1/enable_tensor_equality.md new file mode 100644 index 00000000000..8d6132f6b97 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/enable_tensor_equality.md @@ -0,0 +1,36 @@ +description: Compare Tensors with element-wise comparison and thus be unhashable. + +
+ + +
+ +# tf.compat.v1.enable_tensor_equality + + + + + + + + + +Compare Tensors with element-wise comparison and thus be unhashable. + + + + + + + +Comparing tensors element-wise allows comparisons such as +tf.Variable(1.0) == 1.0. Element-wise equality implies that tensors are +unhashable. Thus tensors can no longer be directly used in sets or as a key in +a dictionary. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/enable_v2_behavior.md b/site/en/api_docs/python/tf/compat/v1/enable_v2_behavior.md new file mode 100644 index 00000000000..5771654a20f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/enable_v2_behavior.md @@ -0,0 +1,39 @@ +description: Enables TensorFlow 2.x behaviors. + +
+ + +
+ +# tf.compat.v1.enable_v2_behavior + + + + + + + + + +Enables TensorFlow 2.x behaviors. + + + + + + + +This function can be called at the beginning of the program (before `Tensors`, +`Graphs` or other structures have been created, and before devices have been +initialized). It switches all global behaviors that are different between +TensorFlow 1.x and 2.x to behave as intended for 2.x. + +This function is called in the main TensorFlow `__init__.py` file; users should +not need to call it, except during complex migrations. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/enable_v2_tensorshape.md b/site/en/api_docs/python/tf/compat/v1/enable_v2_tensorshape.md new file mode 100644 index 00000000000..1e3c97ba418 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/enable_v2_tensorshape.md @@ -0,0 +1,76 @@ +description: In TensorFlow 2.0, iterating over a TensorShape instance returns values. + +
+ + +
+ +# tf.compat.v1.enable_v2_tensorshape + + + + + + + + + +In TensorFlow 2.0, iterating over a TensorShape instance returns values. + + + + + + + +This enables the new behavior. + +Concretely, `tensor_shape[i]` returned a Dimension instance in V1, but +in V2 it returns either an integer or None. + +#### Examples: + + + +``` +####################### +# If you had this in V1: +value = tensor_shape[i].value + +# Do this in V2 instead: +value = tensor_shape[i] + +####################### +# If you had this in V1: +for dim in tensor_shape: + value = dim.value + print(value) + +# Do this in V2 instead: +for value in tensor_shape: + print(value) + +####################### +# If you had this in V1: +dim = tensor_shape[i] +dim.assert_is_compatible_with(other_shape) # or using any other shape method + +# Do this in V2 instead: +if tensor_shape.rank is None: + dim = Dimension(None) +else: + dim = tensor_shape.dims[i] +dim.assert_is_compatible_with(other_shape) # or using any other shape method + +# The V2 suggestion above is more explicit, which will save you from +# the following trap (present in V1): +# you might do in-place modifications to `dim` and expect them to be reflected +# in `tensor_shape[i]`, but they would not be. +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/errors.md b/site/en/api_docs/python/tf/compat/v1/errors.md new file mode 100644 index 00000000000..f1419a64622 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/errors.md @@ -0,0 +1,101 @@ +description: Exception types for TensorFlow errors. + +
+ + + + + + + + + + + + + + + + + + + +
+ +# Module: tf.compat.v1.errors + + + + + + + + + +Exception types for TensorFlow errors. + + + +## Classes + +[`class AbortedError`](../../../tf/errors/AbortedError.md): The operation was aborted, typically due to a concurrent action. + +[`class AlreadyExistsError`](../../../tf/errors/AlreadyExistsError.md): Raised when an entity that we attempted to create already exists. + +[`class CancelledError`](../../../tf/errors/CancelledError.md): Raised when an operation or step is cancelled. + +[`class DataLossError`](../../../tf/errors/DataLossError.md): Raised when unrecoverable data loss or corruption is encountered. + +[`class DeadlineExceededError`](../../../tf/errors/DeadlineExceededError.md): Raised when a deadline expires before an operation could complete. + +[`class FailedPreconditionError`](../../../tf/errors/FailedPreconditionError.md): Operation was rejected because the system is not in a state to execute it. + +[`class InternalError`](../../../tf/errors/InternalError.md): Raised when the system experiences an internal error. + +[`class InvalidArgumentError`](../../../tf/errors/InvalidArgumentError.md): Raised when an operation receives an invalid argument. + +[`class NotFoundError`](../../../tf/errors/NotFoundError.md): Raised when a requested entity (e.g., a file or directory) was not found. + +[`class OpError`](../../../tf/errors/OpError.md): A generic error that is raised when TensorFlow execution fails. + +[`class OutOfRangeError`](../../../tf/errors/OutOfRangeError.md): Raised when an operation iterates past the valid input range. + +[`class PermissionDeniedError`](../../../tf/errors/PermissionDeniedError.md): Raised when the caller does not have permission to run an operation. + +[`class ResourceExhaustedError`](../../../tf/errors/ResourceExhaustedError.md): Some resource has been exhausted. + +[`class UnauthenticatedError`](../../../tf/errors/UnauthenticatedError.md): The request does not have valid authentication credentials. + +[`class UnavailableError`](../../../tf/errors/UnavailableError.md): Raised when the runtime is currently unavailable. + +[`class UnimplementedError`](../../../tf/errors/UnimplementedError.md): Raised when an operation has not been implemented. + +[`class UnknownError`](../../../tf/errors/UnknownError.md): Unknown error. + +[`class raise_exception_on_not_ok_status`](../../../tf/compat/v1/errors/raise_exception_on_not_ok_status.md): Context manager to check for C API status. + +## Functions + +[`error_code_from_exception_type(...)`](../../../tf/compat/v1/errors/error_code_from_exception_type.md) + +[`exception_type_from_error_code(...)`](../../../tf/compat/v1/errors/exception_type_from_error_code.md) + +## Other Members + +* `ABORTED = 10` +* `ALREADY_EXISTS = 6` +* `CANCELLED = 1` +* `DATA_LOSS = 15` +* `DEADLINE_EXCEEDED = 4` +* `FAILED_PRECONDITION = 9` +* `INTERNAL = 13` +* `INVALID_ARGUMENT = 3` +* `NOT_FOUND = 5` +* `OK = 0` +* `OUT_OF_RANGE = 11` +* `PERMISSION_DENIED = 7` +* `RESOURCE_EXHAUSTED = 8` +* `UNAUTHENTICATED = 16` +* `UNAVAILABLE = 14` +* `UNIMPLEMENTED = 12` +* `UNKNOWN = 2` diff --git a/site/en/api_docs/python/tf/compat/v1/errors/error_code_from_exception_type.md b/site/en/api_docs/python/tf/compat/v1/errors/error_code_from_exception_type.md new file mode 100644 index 00000000000..f290e28394f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/errors/error_code_from_exception_type.md @@ -0,0 +1,29 @@ +
+ + +
+ +# tf.compat.v1.errors.error_code_from_exception_type + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/errors/exception_type_from_error_code.md b/site/en/api_docs/python/tf/compat/v1/errors/exception_type_from_error_code.md new file mode 100644 index 00000000000..e76852c75a5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/errors/exception_type_from_error_code.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.errors.exception_type_from_error_code + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/errors/raise_exception_on_not_ok_status.md b/site/en/api_docs/python/tf/compat/v1/errors/raise_exception_on_not_ok_status.md new file mode 100644 index 00000000000..3aebfec89c1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/errors/raise_exception_on_not_ok_status.md @@ -0,0 +1,57 @@ +description: Context manager to check for C API status. + +
+ + + + +
+ +# tf.compat.v1.errors.raise_exception_on_not_ok_status + + + + + + + + + +Context manager to check for C API status. + + + + +## Methods + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator.md b/site/en/api_docs/python/tf/compat/v1/estimator.md new file mode 100644 index 00000000000..86b80a10f15 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator.md @@ -0,0 +1,147 @@ +description: Estimator: High level tools for working with models. + +
+ + +
+ +# Module: tf.compat.v1.estimator + + + + + + + + + +Estimator: High level tools for working with models. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/estimator/experimental.md) module: Public API for tf.estimator.experimental namespace. + +[`export`](../../../tf/compat/v1/estimator/export.md) module: All public utility methods for exporting Estimator to SavedModel. + +[`inputs`](../../../tf/compat/v1/estimator/inputs.md) module: Utility methods to create simple input_fns. + +[`tpu`](../../../tf/compat/v1/estimator/tpu.md) module: Public API for tf.estimator.tpu namespace. + +## Classes + +[`class BaselineClassifier`](../../../tf/compat/v1/estimator/BaselineClassifier.md): A classifier that can establish a simple baseline. + +[`class BaselineEstimator`](../../../tf/compat/v1/estimator/BaselineEstimator.md): An estimator that can establish a simple baseline. + +[`class BaselineRegressor`](../../../tf/compat/v1/estimator/BaselineRegressor.md): A regressor that can establish a simple baseline. + +[`class BestExporter`](../../../tf/estimator/BestExporter.md): This class exports the serving graph and checkpoints of the best models. + +[`class BinaryClassHead`](../../../tf/estimator/BinaryClassHead.md): Creates a `Head` for single label binary classification. + +[`class BoostedTreesClassifier`](../../../tf/estimator/BoostedTreesClassifier.md): A Classifier for Tensorflow Boosted Trees models. + +[`class BoostedTreesEstimator`](../../../tf/estimator/BoostedTreesEstimator.md): An Estimator for Tensorflow Boosted Trees models. + +[`class BoostedTreesRegressor`](../../../tf/estimator/BoostedTreesRegressor.md): A Regressor for Tensorflow Boosted Trees models. + +[`class CheckpointSaverHook`](../../../tf/estimator/CheckpointSaverHook.md): Saves checkpoints every N steps or seconds. + +[`class CheckpointSaverListener`](../../../tf/estimator/CheckpointSaverListener.md): Interface for listeners that take action before or after checkpoint save. + +[`class DNNClassifier`](../../../tf/compat/v1/estimator/DNNClassifier.md): A classifier for TensorFlow DNN models. + +[`class DNNEstimator`](../../../tf/compat/v1/estimator/DNNEstimator.md): An estimator for TensorFlow DNN models with user-specified head. + +[`class DNNLinearCombinedClassifier`](../../../tf/compat/v1/estimator/DNNLinearCombinedClassifier.md): An estimator for TensorFlow Linear and DNN joined classification models. + +[`class DNNLinearCombinedEstimator`](../../../tf/compat/v1/estimator/DNNLinearCombinedEstimator.md): An estimator for TensorFlow Linear and DNN joined models with custom head. + +[`class DNNLinearCombinedRegressor`](../../../tf/compat/v1/estimator/DNNLinearCombinedRegressor.md): An estimator for TensorFlow Linear and DNN joined models for regression. + +[`class DNNRegressor`](../../../tf/compat/v1/estimator/DNNRegressor.md): A regressor for TensorFlow DNN models. + +[`class Estimator`](../../../tf/compat/v1/estimator/Estimator.md): Estimator class to train and evaluate TensorFlow models. + +[`class EstimatorSpec`](../../../tf/estimator/EstimatorSpec.md): Ops and objects returned from a `model_fn` and passed to an `Estimator`. + +[`class EvalSpec`](../../../tf/estimator/EvalSpec.md): Configuration for the "eval" part for the `train_and_evaluate` call. + +[`class Exporter`](../../../tf/estimator/Exporter.md): A class representing a type of model export. + +[`class FeedFnHook`](../../../tf/estimator/FeedFnHook.md): Runs `feed_fn` and sets the `feed_dict` accordingly. 
+ +[`class FinalExporter`](../../../tf/estimator/FinalExporter.md): This class exports the serving graph and checkpoints at the end. + +[`class FinalOpsHook`](../../../tf/estimator/FinalOpsHook.md): A hook which evaluates `Tensors` at the end of a session. + +[`class GlobalStepWaiterHook`](../../../tf/estimator/GlobalStepWaiterHook.md): Delays execution until global step reaches `wait_until_step`. + +[`class Head`](../../../tf/estimator/Head.md): Interface for the head/top of a model. + +[`class LatestExporter`](../../../tf/estimator/LatestExporter.md): This class regularly exports the serving graph and checkpoints. + +[`class LinearClassifier`](../../../tf/compat/v1/estimator/LinearClassifier.md): Linear classifier model. + +[`class LinearEstimator`](../../../tf/compat/v1/estimator/LinearEstimator.md): An estimator for TensorFlow linear models with user-specified head. + +[`class LinearRegressor`](../../../tf/compat/v1/estimator/LinearRegressor.md): An estimator for TensorFlow Linear regression problems. + +[`class LoggingTensorHook`](../../../tf/estimator/LoggingTensorHook.md): Prints the given tensors every N local steps, every N seconds, or at end. + +[`class LogisticRegressionHead`](../../../tf/estimator/LogisticRegressionHead.md): Creates a `Head` for logistic regression. + +[`class ModeKeys`](../../../tf/estimator/ModeKeys.md): Standard names for Estimator model modes. + +[`class MultiClassHead`](../../../tf/estimator/MultiClassHead.md): Creates a `Head` for multi class classification. + +[`class MultiHead`](../../../tf/estimator/MultiHead.md): Creates a `Head` for multi-objective learning. + +[`class MultiLabelHead`](../../../tf/estimator/MultiLabelHead.md): Creates a `Head` for multi-label classification. + +[`class NanLossDuringTrainingError`](../../../tf/estimator/NanLossDuringTrainingError.md): Unspecified run-time error. + +[`class NanTensorHook`](../../../tf/estimator/NanTensorHook.md): Monitors the loss tensor and stops training if loss is NaN. + +[`class PoissonRegressionHead`](../../../tf/estimator/PoissonRegressionHead.md): Creates a `Head` for poisson regression using tf.nn.log_poisson_loss. + +[`class ProfilerHook`](../../../tf/estimator/ProfilerHook.md): Captures CPU/GPU profiling information every N steps or seconds. + +[`class RegressionHead`](../../../tf/estimator/RegressionHead.md): Creates a `Head` for regression using the `mean_squared_error` loss. + +[`class RunConfig`](../../../tf/estimator/RunConfig.md): This class specifies the configurations for an `Estimator` run. + +[`class SecondOrStepTimer`](../../../tf/estimator/SecondOrStepTimer.md): Timer that triggers at most once every N seconds or once every N steps. + +[`class SessionRunArgs`](../../../tf/estimator/SessionRunArgs.md): Represents arguments to be added to a `Session.run()` call. + +[`class SessionRunContext`](../../../tf/estimator/SessionRunContext.md): Provides information about the `session.run()` call being made. + +[`class SessionRunHook`](../../../tf/estimator/SessionRunHook.md): Hook to extend calls to MonitoredSession.run(). + +[`class SessionRunValues`](../../../tf/estimator/SessionRunValues.md): Contains the results of `Session.run()`. + +[`class StepCounterHook`](../../../tf/estimator/StepCounterHook.md): Hook that counts steps per second. + +[`class StopAtStepHook`](../../../tf/estimator/StopAtStepHook.md): Hook that requests stop at a specified step. + +[`class SummarySaverHook`](../../../tf/estimator/SummarySaverHook.md): Saves summaries every N steps. 
+ +[`class TrainSpec`](../../../tf/estimator/TrainSpec.md): Configuration for the "train" part for the `train_and_evaluate` call. + +[`class VocabInfo`](../../../tf/estimator/VocabInfo.md): Vocabulary information for warm-starting. + +[`class WarmStartSettings`](../../../tf/estimator/WarmStartSettings.md): Settings for warm-starting in `tf.estimator.Estimators`. + +## Functions + +[`add_metrics(...)`](../../../tf/estimator/add_metrics.md): Creates a new tf.estimator.Estimator which has given metrics. + +[`classifier_parse_example_spec(...)`](../../../tf/compat/v1/estimator/classifier_parse_example_spec.md): Generates parsing spec for tf.parse_example to be used with classifiers. + +[`regressor_parse_example_spec(...)`](../../../tf/compat/v1/estimator/regressor_parse_example_spec.md): Generates parsing spec for tf.parse_example to be used with regressors. + +[`train_and_evaluate(...)`](../../../tf/estimator/train_and_evaluate.md): Train and evaluate the `estimator`. + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/BaselineClassifier.md b/site/en/api_docs/python/tf/compat/v1/estimator/BaselineClassifier.md new file mode 100644 index 00000000000..7a275fb6372 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/BaselineClassifier.md @@ -0,0 +1,1201 @@ +description: A classifier that can establish a simple baseline. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.BaselineClassifier + + + + + + + + + +A classifier that can establish a simple baseline. + +Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + +This classifier ignores feature values and will learn to predict the average +value of each label. For single-label problems, this will predict the +probability distribution of the classes as seen in the labels. For multi-label +problems, this will predict the fraction of examples that are positive for +each class. + +#### Example: + + + +```python + +# Build BaselineClassifier +classifier = tf.estimator.BaselineClassifier(n_classes=3) + +# Input builders +def input_fn_train(): + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +def input_fn_eval(): + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +# Fit model. +classifier.train(input_fn=input_fn_train) + +# Evaluate cross entropy between the test and train labels. +loss = classifier.evaluate(input_fn=input_fn_eval)["loss"] + +# predict outputs the probability distribution of the classes as seen in +# training. +predictions = classifier.predict(input_fn=input_fn_eval) + +``` + +Input of `train` and `evaluate` should have the following features, + otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with + `key=weight_column` whose value is a `Tensor`. + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_fn` + +Model function. Follows the signature: +* `features` -- This is the first item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same. +* `labels` -- This is the second item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same (for multi-head models). If +mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be +passed. If the `model_fn`'s signature does not accept `mode`, the +`model_fn` must still be able to handle `labels=None`. +* `mode` -- Optional. Specifies if this is training, evaluation or +prediction. See tf.estimator.ModeKeys. +`params` -- Optional `dict` of hyperparameters. Will receive what is +passed to Estimator in `params` parameter. This allows to configure +Estimators from hyper parameter tuning. +* `config` -- Optional `estimator.RunConfig` object. Will receive what +is passed to Estimator as its `config` parameter, or a default +value. Allows setting up things in your `model_fn` based on +configuration such as `num_ps_replicas`, or `model_dir`. +* Returns -- tf.estimator.EstimatorSpec +
+`model_dir` + +Directory to save model parameters, graph and etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. If `PathLike` object, the +path will be resolved. If `None`, the model_dir in `config` will be used +if set. If both are set, they must be same. If both are `None`, a +temporary directory will be used. +
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
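As a sketch (the `classifier` name and the toy data below are assumptions, not part of this page), an `input_fn` can return a tf.data.Dataset of `(features, labels)`; for this canned classifier the returned dict includes `loss`, `accuracy` and `global_step`:

```python
import numpy as np
import tensorflow as tf

def input_fn_eval():
  # Toy features/labels purely for illustration.
  features = {"x": np.random.rand(100, 1).astype(np.float32)}
  labels = np.random.randint(0, 3, size=(100,)).astype(np.int32)
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(10)

# `classifier` is assumed to be an already trained BaselineClassifier.
metrics = classifier.evaluate(input_fn=input_fn_eval)
print(metrics["loss"], metrics["accuracy"], metrics["global_step"])
```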

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will +be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
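A minimal sketch of the export call (the `feature_spec` and the trained `classifier` below are assumptions; `tf.estimator.export.build_parsing_serving_input_receiver_fn` builds a receiver that parses serialized `tf.Example` protos at serving time):

```python
import tensorflow as tf

# Hypothetical feature spec for the serving-time tf.Example parser.
feature_spec = {"x": tf.io.FixedLenFeature([1], tf.float32)}

serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

# `classifier` is assumed to be an already trained estimator.
export_dir = classifier.export_saved_model(
    "/tmp/baseline_export", serving_input_receiver_fn)
```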

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of string, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
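A small sketch (variable names are checkpoint-dependent, so list them rather than assuming a particular name; `classifier` is an assumed trained estimator):

```python
# Print every variable name and the shape of its checkpointed value.
for name in classifier.get_variable_names():
  print(name, classifier.get_variable_value(name).shape)
```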

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
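A sketch of calling `predict` with an `input_fn` that yields features only (the toy data and the `classifier` name are assumptions; for this canned classifier each yielded dict carries keys such as `class_ids` and `probabilities`):

```python
import numpy as np
import tensorflow as tf

def input_fn_predict():
  # Prediction input_fns return features only (no labels).
  features = {"x": np.random.rand(5, 1).astype(np.float32)}
  return tf.data.Dataset.from_tensor_slices(features).batch(5)

# `classifier` is assumed to be an already trained BaselineClassifier.
for pred in classifier.predict(input_fn=input_fn_predict):
  print(pred["class_ids"], pred["probabilities"])
```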

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train +forever or train until `input_fn` generates the `tf.errors.OutOfRange` +error or `StopIteration` exception. `steps` works incrementally. If you +call `train(steps=10)` twice, training occurs in a total of 20 steps. +If `OutOfRange` or `StopIteration` occurs in the middle, training stops +before 20 steps. If you don't want incremental behavior, please +set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/BaselineEstimator.md b/site/en/api_docs/python/tf/compat/v1/estimator/BaselineEstimator.md new file mode 100644 index 00000000000..cc4e9715d21 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/BaselineEstimator.md @@ -0,0 +1,1194 @@ +description: An estimator that can establish a simple baseline. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.BaselineEstimator + + + + + + + + + +An estimator that can establish a simple baseline. + +Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + +The estimator uses a user-specified head. + +This estimator ignores feature values and will learn to predict the average +value of each label. E.g. for single-label classification problems, this will +predict the probability distribution of the classes as seen in the labels. +For multi-label classification problems, it will predict the ratio of examples +that contain each class. + +#### Example: + + + +```python + +# Build baseline multi-label classifier. +estimator = tf.estimator.BaselineEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3)) + +# Input builders +def input_fn_train(): + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +def input_fn_eval(): + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +# Fit model. +estimator.train(input_fn=input_fn_train) + +# Evaluates cross entropy between the test and train labels. +loss = estimator.evaluate(input_fn=input_fn_eval)["loss"] + +# For each class, predicts the ratio of training examples that contain the +# class. +predictions = estimator.predict(input_fn=input_fn_eval) + +``` + +Input of `train` and `evaluate` should have the following features, + otherwise there will be a `KeyError`: + +* if `weight_column` is specified in the `head` constructor (and not None) for + the head passed to BaselineEstimator's constructor, a feature with + `key=weight_column` whose value is a `Tensor`. + + + + + + + + + + + + + + + + + + + + + +
+`model_fn` + +Model function. Follows the signature: +* `features` -- This is the first item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same. +* `labels` -- This is the second item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same (for multi-head models). If +mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be +passed. If the `model_fn`'s signature does not accept `mode`, the +`model_fn` must still be able to handle `labels=None`. +* `mode` -- Optional. Specifies if this is training, evaluation or +prediction. See tf.estimator.ModeKeys. +`params` -- Optional `dict` of hyperparameters. Will receive what is +passed to Estimator in `params` parameter. This allows to configure +Estimators from hyper parameter tuning. +* `config` -- Optional `estimator.RunConfig` object. Will receive what +is passed to Estimator as its `config` parameter, or a default +value. Allows setting up things in your `model_fn` based on +configuration such as `num_ps_replicas`, or `model_dir`. +* Returns -- tf.estimator.EstimatorSpec +
+`model_dir` + +Directory to save model parameters, graph and etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. If `PathLike` object, the +path will be resolved. If `None`, the model_dir in `config` will be used +if set. If both are set, they must be same. If both are `None`, a +temporary directory will be used. +
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing evaluation metrics.
+
+ + + +
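+
+A quick sketch of where `eval_dir` points, assuming a TensorFlow build that
+still bundles `tf.estimator`; the `model_dir` path and evaluation name below
+are illustrative:
+
+```python
+import tensorflow as tf
+
+estimator = tf.compat.v1.estimator.BaselineEstimator(
+    head=tf.estimator.MultiLabelHead(n_classes=3),
+    model_dir="/tmp/baseline_model")  # illustrative directory
+
+print(estimator.eval_dir())        # /tmp/baseline_model/eval
+print(estimator.eval_dir("test"))  # /tmp/baseline_model/eval_test
+```
+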

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
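+
+The class-level example above leaves `input_fn_train` and `input_fn_eval` as
+placeholders. The sketch below is a minimal runnable version of training and
+evaluation, assuming a toy in-memory multi-label dataset; the feature name
+`"x"` is illustrative and the baseline model ignores feature values anyway:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+# Toy multi-label data: 4 examples, 3 classes, multi-hot labels.
+features = {"x": np.array([[1.0], [2.0], [3.0], [4.0]], dtype=np.float32)}
+labels = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]], dtype=np.int64)
+
+def input_fn_train():
+  # The baseline model ignores the feature values, but an input_fn must
+  # still return a (features, labels) structure.
+  return tf.data.Dataset.from_tensor_slices(
+      (features, labels)).shuffle(4).batch(2).repeat()
+
+def input_fn_eval():
+  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)
+
+estimator = tf.compat.v1.estimator.BaselineEstimator(
+    head=tf.estimator.MultiLabelHead(n_classes=3))
+
+estimator.train(input_fn=input_fn_train, steps=20)
+metrics = estimator.evaluate(input_fn=input_fn_eval)
+print(metrics["loss"], metrics["global_step"])
+```
+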

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
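+
+A minimal export sketch, continuing from a trained `BaselineEstimator` such as
+the one in the evaluation sketch above; the export path and feature spec are
+illustrative, and only the PREDICT graph is requested:
+
+```python
+import tensorflow as tf
+
+# Requests arrive as serialized tf.Example protos carrying the single
+# feature "x" used in the sketches above.
+feature_spec = {"x": tf.io.FixedLenFeature([1], tf.float32)}
+serving_input_receiver_fn = (
+    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
+
+# Export only the PREDICT graph; TRAIN/EVAL entries would additionally need
+# receivers that also provide labels.
+export_path = estimator.experimental_export_all_saved_models(
+    export_dir_base="/tmp/baseline_export",  # illustrative
+    input_receiver_fn_map={
+        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
+    })
+print(export_path)  # bytes path of the timestamped export directory
+```
+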

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+
+`name`
+
+string or a list of strings, name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
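+
+A small sketch of inspecting the checkpointed variables of a trained
+estimator (for the baseline model this is essentially the learned bias);
+`estimator` is assumed to have produced at least one checkpoint, for example
+via the train/evaluate sketch above:
+
+```python
+# `estimator` is a trained estimator that has written at least one checkpoint.
+for name in estimator.get_variable_names():
+  print(name, estimator.get_variable_value(name).shape)
+```
+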

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+
+`input_fn`
+
+A function that constructs the features. Prediction continues
+until `input_fn` raises an end-of-input exception
+(tf.errors.OutOfRangeError or `StopIteration`). See [Premade
+Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* A tf.data.Dataset object -- Outputs of the `Dataset` object must have the
+same constraints as below.
+* features -- A tf.Tensor or a dictionary of string feature name to
+`Tensor`. Features are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+* A tuple, in which case the first item is extracted as features.
+
+
+`predict_keys`
+
+list of `str`, names of the keys to predict. It is used if
+the tf.estimator.EstimatorSpec.predictions is a `dict`. If
+`predict_keys` is used, then the rest of the predictions will be filtered
+from the dictionary. If `None`, returns all.
+
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+
+`ValueError`
+
+If the batch length of the prediction tensors is not the same and
+`yield_single_examples` is `True`.
+
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
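+
+A minimal prediction sketch, continuing from the trained multi-label
+`BaselineEstimator` above; the input function returns features only, and
+`predict_keys` restricts the output to the probabilities:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+def input_fn_predict():
+  # Features only; no labels are needed at prediction time.
+  new_samples = {"x": np.array([[5.0], [6.0]], dtype=np.float32)}
+  return tf.data.Dataset.from_tensor_slices(new_samples).batch(2)
+
+for pred in estimator.predict(input_fn=input_fn_predict,
+                              predict_keys=["probabilities"]):
+  print(pred["probabilities"])  # approximately the per-class label ratios
+```
+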

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally. If you
+call `train(steps=10)` twice, training occurs for 20 steps in total.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you don't want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+
+`max_steps`
+
+Number of total steps for which to train the model. If `None`,
+train forever or train until `input_fn` generates the
+`tf.errors.OutOfRange` error or `StopIteration` exception. If set,
+`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the
+middle, training stops before `max_steps` steps. Two calls to
+`train(steps=100)` mean 200 training iterations. On the other hand, two
+calls to `train(max_steps=100)` mean that the second call will not do
+any iterations, since the first call did all 100 steps.
+
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+
+`ValueError`
+
+If either `steps` or `max_steps` is `<= 0`.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/BaselineRegressor.md b/site/en/api_docs/python/tf/compat/v1/estimator/BaselineRegressor.md new file mode 100644 index 00000000000..0bc6598c5c4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/BaselineRegressor.md @@ -0,0 +1,1196 @@ +description: A regressor that can establish a simple baseline. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.BaselineRegressor + + + + + + + + + +A regressor that can establish a simple baseline. + +Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + +This regressor ignores feature values and will learn to predict the average +value of each label. + +#### Example: + + + +```python + +# Build BaselineRegressor +regressor = tf.estimator.BaselineRegressor() + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +# Fit model. +regressor.train(input_fn=input_fn_train) + +# Evaluate squared-loss between the test and train targets. +loss = regressor.evaluate(input_fn=input_fn_eval)["loss"] + +# predict outputs the mean value seen during training. +predictions = regressor.predict(new_samples) +``` + +Input of `train` and `evaluate` should have following features, + otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with + `key=weight_column` whose value is a `Tensor`. + + + + + + + + + + + + + + + + + + + + + + + + +
+
+`model_fn`
+
+Model function. Follows the signature:
+* `features` -- This is the first item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same.
+* `labels` -- This is the second item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same (for multi-head models). If
+mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be
+passed. If the `model_fn`'s signature does not accept `mode`, the
+`model_fn` must still be able to handle `labels=None`.
+* `mode` -- Optional. Specifies if this is training, evaluation or
+prediction. See tf.estimator.ModeKeys.
+* `params` -- Optional `dict` of hyperparameters. Will receive what is
+passed to Estimator in the `params` parameter. This allows configuring
+Estimators from hyperparameter tuning.
+* `config` -- Optional `estimator.RunConfig` object. Will receive what
+is passed to Estimator as its `config` parameter, or a default
+value. Allows setting up things in your `model_fn` based on
+configuration such as `num_ps_replicas`, or `model_dir`.
+* Returns -- tf.estimator.EstimatorSpec
+
+
+`model_dir`
+
+Directory to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model. If a `PathLike` object, the
+path will be resolved. If `None`, the model_dir in `config` will be used
+if set. If both are set, they must be the same. If both are `None`, a
+temporary directory will be used.
+
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing evaluation metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
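+
+As with the classifier-style estimator above, the class example's input
+functions are placeholders. This is a minimal runnable sketch for
+`BaselineRegressor`, assuming a toy in-memory dataset; the feature name and
+`model_dir` are illustrative:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+features = {"x": np.array([[1.0], [2.0], [3.0], [4.0]], dtype=np.float32)}
+labels = np.array([[1.5], [2.5], [3.5], [4.5]], dtype=np.float32)
+
+def input_fn_train():
+  return tf.data.Dataset.from_tensor_slices(
+      (features, labels)).shuffle(4).batch(2).repeat()
+
+def input_fn_eval():
+  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)
+
+regressor = tf.compat.v1.estimator.BaselineRegressor(
+    model_dir="/tmp/baseline_reg")  # illustrative directory
+
+regressor.train(input_fn=input_fn_train, steps=50)
+metrics = regressor.evaluate(input_fn=input_fn_eval)
+print(metrics["loss"], metrics["average_loss"])
+```
+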

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
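+
+A sketch of exporting the regressor above for serving and reloading the
+result; the feature spec mirrors the single numeric feature `"x"`, the paths
+are illustrative, and the exact set of exported signatures depends on the
+head:
+
+```python
+import tensorflow as tf
+
+feature_spec = {"x": tf.io.FixedLenFeature([1], tf.float32)}
+serving_input_receiver_fn = (
+    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
+
+export_path = regressor.export_saved_model(
+    "/tmp/baseline_reg_export", serving_input_receiver_fn)
+
+# The result is an ordinary SavedModel; it can be reloaded outside of the
+# Estimator API. `export_path` is a bytes object, hence the decode().
+reloaded = tf.saved_model.load(export_path.decode("utf-8"))
+print(list(reloaded.signatures.keys()))  # e.g. includes 'serving_default'
+```
+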

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+
+`name`
+
+string or a list of strings, name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +
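+
+A small sketch pairing `latest_checkpoint` with the checkpoint reader
+utilities; `regressor` is assumed to have been trained already (so a
+checkpoint exists), and the printed path is only an example:
+
+```python
+import tensorflow as tf
+
+ckpt = regressor.latest_checkpoint()
+print(ckpt)  # e.g. /tmp/baseline_reg/model.ckpt-50, or None if never trained
+
+# The checkpoint can be inspected directly with the checkpoint reader.
+reader = tf.train.load_checkpoint(ckpt)
+for name, shape in reader.get_variable_to_shape_map().items():
+  print(name, shape)
+```
+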

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+
+`input_fn`
+
+A function that constructs the features. Prediction continues
+until `input_fn` raises an end-of-input exception
+(tf.errors.OutOfRangeError or `StopIteration`). See [Premade
+Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* A tf.data.Dataset object -- Outputs of the `Dataset` object must have the
+same constraints as below.
+* features -- A tf.Tensor or a dictionary of string feature name to
+`Tensor`. Features are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+* A tuple, in which case the first item is extracted as features.
+
+
+`predict_keys`
+
+list of `str`, names of the keys to predict. It is used if
+the tf.estimator.EstimatorSpec.predictions is a `dict`. If
+`predict_keys` is used, then the rest of the predictions will be filtered
+from the dictionary. If `None`, returns all.
+
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+
+`ValueError`
+
+If the batch length of the prediction tensors is not the same and
+`yield_single_examples` is `True`.
+
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally. If you
+call `train(steps=10)` twice, training occurs for 20 steps in total.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you don't want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+
+`max_steps`
+
+Number of total steps for which to train the model. If `None`,
+train forever or train until `input_fn` generates the
+`tf.errors.OutOfRange` error or `StopIteration` exception. If set,
+`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the
+middle, training stops before `max_steps` steps. Two calls to
+`train(steps=100)` mean 200 training iterations. On the other hand, two
+calls to `train(max_steps=100)` mean that the second call will not do
+any iterations, since the first call did all 100 steps.
+
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+
+`ValueError`
+
+If either `steps` or `max_steps` is `<= 0`.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/DNNClassifier.md b/site/en/api_docs/python/tf/compat/v1/estimator/DNNClassifier.md new file mode 100644 index 00000000000..a8f39b53d92 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/DNNClassifier.md @@ -0,0 +1,1237 @@ +description: A classifier for TensorFlow DNN models. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.DNNClassifier + + + + + + + + + +A classifier for TensorFlow DNN models. + +Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + + +#### Example: + + + +```python +categorical_feature_a = categorical_column_with_hash_bucket(...) +categorical_feature_b = categorical_column_with_hash_bucket(...) + +categorical_feature_a_emb = embedding_column( + categorical_column=categorical_feature_a, ...) +categorical_feature_b_emb = embedding_column( + categorical_column=categorical_feature_b, ...) + +estimator = tf.estimator.DNNClassifier( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256]) + +# Or estimator using the ProximalAdagradOptimizer optimizer with +# regularization. +estimator = tf.estimator.DNNClassifier( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001 + )) + +# Or estimator using an optimizer with a learning rate decay. +estimator = tf.estimator.DNNClassifier( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96)) + +# Or estimator with warm-starting from a previous checkpoint. +estimator = tf.estimator.DNNClassifier( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + warm_start_from="/path/to/checkpoint/dir") + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train) +metrics = estimator.evaluate(input_fn=input_fn_eval) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with `key=weight_column` whose + value is a `Tensor`. +* for each `column` in `feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using softmax cross entropy. + + + + + + + + + + + + + + + + + + + + + + + + +
+
+`model_fn`
+
+Model function. Follows the signature:
+* `features` -- This is the first item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same.
+* `labels` -- This is the second item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same (for multi-head models). If
+mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be
+passed. If the `model_fn`'s signature does not accept `mode`, the
+`model_fn` must still be able to handle `labels=None`.
+* `mode` -- Optional. Specifies if this is training, evaluation or
+prediction. See tf.estimator.ModeKeys.
+* `params` -- Optional `dict` of hyperparameters. Will receive what is
+passed to Estimator in the `params` parameter. This allows configuring
+Estimators from hyperparameter tuning.
+* `config` -- Optional `estimator.RunConfig` object. Will receive what
+is passed to Estimator as its `config` parameter, or a default
+value. Allows setting up things in your `model_fn` based on
+configuration such as `num_ps_replicas`, or `model_dir`.
+* Returns -- tf.estimator.EstimatorSpec
+
+
+`model_dir`
+
+Directory to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model. If a `PathLike` object, the
+path will be resolved. If `None`, the model_dir in `config` will be used
+if set. If both are set, they must be the same. If both are `None`, a
+temporary directory will be used.
+
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing evaluation metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
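+
+The class-level example above is schematic: its input functions are
+placeholders and it uses hashed and embedded categorical columns. The sketch
+below substitutes a single numeric feature column and random in-memory data so
+that it runs end to end, assuming a TensorFlow build that still bundles
+`tf.estimator`; all names and sizes are illustrative:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+feature_columns = [tf.feature_column.numeric_column("x", shape=(2,))]
+
+features = {"x": np.random.rand(32, 2).astype(np.float32)}
+labels = np.random.randint(0, 3, size=(32,)).astype(np.int64)
+
+def input_fn_train():
+  return tf.data.Dataset.from_tensor_slices(
+      (features, labels)).shuffle(32).batch(8).repeat()
+
+def input_fn_eval():
+  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
+
+classifier = tf.compat.v1.estimator.DNNClassifier(
+    feature_columns=feature_columns,
+    hidden_units=[16, 8],
+    n_classes=3)
+
+classifier.train(input_fn=input_fn_train, steps=100)
+metrics = classifier.evaluate(input_fn=input_fn_eval)
+print(metrics["accuracy"], metrics["loss"])
+```
+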

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
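+
+A sketch of exporting the classifier above for serving serialized
+`tf.Example` requests; it reuses the illustrative `feature_columns` and
+`classifier` names from the previous sketch, and the export path is made up:
+
+```python
+import tensorflow as tf
+
+# Derive the tf.Example parsing spec directly from the feature columns that
+# were used to build the classifier.
+feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
+serving_input_receiver_fn = (
+    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
+
+export_path = classifier.export_saved_model(
+    "/tmp/dnn_export",  # illustrative export_dir_base
+    serving_input_receiver_fn)
+print(export_path)  # bytes path of the timestamped export directory
+```
+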

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+A string or a list of strings: the name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
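+For instance, a quick sketch for inspecting the checkpointed weights
+(assuming the estimator has already been trained, so a checkpoint exists):
+
+```python
+for name in estimator.get_variable_names():
+  value = estimator.get_variable_value(name)  # numpy array
+  print(name, value.shape)
+```
+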

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the features. Prediction continues
+until `input_fn` raises an end-of-input exception
+(tf.errors.OutOfRangeError or `StopIteration`). See [Premade
+Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* tf.data.Dataset object -- Outputs of `Dataset` object must have
+same constraints as below.
+* features -- A tf.Tensor or a dictionary of string feature name to
+`Tensor`. features are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+* A tuple, in which case the first item is extracted as features.
+
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
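+For example, a minimal prediction loop (a sketch; it assumes the estimator is
+trained and that `input_fn_predict` is a function returning a
+`tf.data.Dataset` of features only):
+
+```python
+for prediction in estimator.predict(input_fn=input_fn_predict):
+  # Each `prediction` is a dict of numpy values for a single example,
+  # keyed like EstimatorSpec.predictions.
+  print(prediction)
+```
+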

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or until `input_fn` raises the `tf.errors.OutOfRangeError`
+exception or `StopIteration`. `steps` works incrementally: if you
+call `train(steps=10)` twice, training occurs for 20 steps in total.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you do not want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/DNNEstimator.md b/site/en/api_docs/python/tf/compat/v1/estimator/DNNEstimator.md new file mode 100644 index 00000000000..1d80edd4dba --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/DNNEstimator.md @@ -0,0 +1,1240 @@ +description: An estimator for TensorFlow DNN models with user-specified head. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.DNNEstimator + + + + + + + + + +An estimator for TensorFlow DNN models with user-specified head. + +Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + + +#### Example: + + + +```python +sparse_feature_a = sparse_column_with_hash_bucket(...) +sparse_feature_b = sparse_column_with_hash_bucket(...) + +sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a, + ...) +sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b, + ...) + +estimator = tf.estimator.DNNEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3), + feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], + hidden_units=[1024, 512, 256]) + +# Or estimator using the ProximalAdagradOptimizer optimizer with +# regularization. +estimator = tf.estimator.DNNEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3), + feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001 + )) + +# Or estimator using an optimizer with a learning rate decay. +estimator = tf.estimator.DNNEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3), + feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96)) + +# Or estimator with warm-starting from a previous checkpoint. +estimator = tf.estimator.DNNEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3), + feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], + hidden_units=[1024, 512, 256], + warm_start_from="/path/to/checkpoint/dir") + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train) +metrics = estimator.evaluate(input_fn=input_fn_eval) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with `key=weight_column` whose + value is a `Tensor`. +* for each `column` in `feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss and predicted output are determined by the specified head. + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_fn`
+
+Model function. Follows the signature:
+* `features` -- This is the first item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same.
+* `labels` -- This is the second item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same (for multi-head models). If
+mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be
+passed. If the `model_fn`'s signature does not accept `mode`, the
+`model_fn` must still be able to handle `labels=None`.
+* `mode` -- Optional. Specifies if this is training, evaluation or
+prediction. See tf.estimator.ModeKeys.
+* `params` -- Optional `dict` of hyperparameters. Will receive what is
+passed to the Estimator in the `params` parameter. This allows
+configuring Estimators through hyperparameter tuning.
+* `config` -- Optional `estimator.RunConfig` object. Will receive what
+is passed to Estimator as its `config` parameter, or a default
+value. Allows setting up things in your `model_fn` based on
+configuration such as `num_ps_replicas`, or `model_dir`.
+* Returns -- tf.estimator.EstimatorSpec
+
+`model_dir` + +Directory to save model parameters, graph and etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. If `PathLike` object, the +path will be resolved. If `None`, the model_dir in `config` will be used +if set. If both are set, they must be same. If both are `None`, a +temporary directory will be used. +
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory that contains the evaluation metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the input data for evaluation. See
+[Premade Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* A tf.data.Dataset object: Outputs of `Dataset` object must be a tuple
+`(features, labels)` with same constraints as below.
+* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a
+dictionary of string feature name to `Tensor` and `labels` is a
+`Tensor` or a dictionary of string label name to `Tensor`. Both
+`features` and `labels` are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
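+For example (a sketch; it assumes the estimator is trained and that
+`input_fn_eval` returns `(features, labels)` batches as in the class example
+above):
+
+```python
+metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
+print(metrics["loss"], metrics["global_step"])
+```
+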

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
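+A minimal sketch that exports only the predict graph (the feature name and
+shape are illustrative; `estimator` is assumed to be trained):
+
+```python
+import tensorflow as tf
+
+def predict_receiver_fn():
+  inputs = {"x": tf.compat.v1.placeholder(tf.float32, shape=[None, 1], name="x")}
+  return tf.estimator.export.ServingInputReceiver(inputs, inputs)
+
+export_path = estimator.experimental_export_all_saved_models(
+    export_dir_base="/tmp/all_modes_export",
+    input_receiver_fn_map={tf.estimator.ModeKeys.PREDICT: predict_receiver_fn})
+```
+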

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
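+Alternatively, a sketch that serves tf.Example protos and bundles an extra
+asset (the feature spec and file paths are illustrative):
+
+```python
+import tensorflow as tf
+
+feature_spec = {"x": tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)}
+serving_input_receiver_fn = (
+    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
+
+export_path = estimator.export_saved_model(
+    export_dir_base="/tmp/exported_model",
+    serving_input_receiver_fn=serving_input_receiver_fn,
+    assets_extra={"vocab/labels.txt": "/path/to/labels.txt"})
+```
+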

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+A string or a list of strings: the name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +
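+For example, a sketch that inspects the newest checkpoint (assuming at least
+one checkpoint has been written to `model_dir`):
+
+```python
+import tensorflow as tf
+
+ckpt = estimator.latest_checkpoint()
+reader = tf.train.load_checkpoint(ckpt)
+print(sorted(reader.get_variable_to_shape_map()))
+```
+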

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the features. Prediction continues
+until `input_fn` raises an end-of-input exception
+(tf.errors.OutOfRangeError or `StopIteration`). See [Premade
+Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* tf.data.Dataset object -- Outputs of `Dataset` object must have
+same constraints as below.
+* features -- A tf.Tensor or a dictionary of string feature name to
+`Tensor`. features are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+* A tuple, in which case the first item is extracted as features.
+
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
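+As a variation, a sketch that consumes whole batches instead of single
+examples (again assuming `input_fn_predict` from the class example above and
+a trained estimator):
+
+```python
+for batch in estimator.predict(input_fn=input_fn_predict,
+                               yield_single_examples=False):
+  # `batch` is a dict of numpy arrays whose first dimension is the batch size.
+  print({key: value.shape for key, value in batch.items()})
+```
+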

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or until `input_fn` raises the `tf.errors.OutOfRangeError`
+exception or `StopIteration`. `steps` works incrementally: if you
+call `train(steps=10)` twice, training occurs for 20 steps in total.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you do not want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/DNNLinearCombinedClassifier.md b/site/en/api_docs/python/tf/compat/v1/estimator/DNNLinearCombinedClassifier.md new file mode 100644 index 00000000000..e52eb6a050c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/DNNLinearCombinedClassifier.md @@ -0,0 +1,1236 @@ +description: An estimator for TensorFlow Linear and DNN joined classification models. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.DNNLinearCombinedClassifier + + + + + + + + + +An estimator for TensorFlow Linear and DNN joined classification models. + +Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + +Note: This estimator is also known as wide-n-deep. + +#### Example: + + + +```python +numeric_feature = numeric_column(...) +categorical_column_a = categorical_column_with_hash_bucket(...) +categorical_column_b = categorical_column_with_hash_bucket(...) + +categorical_feature_a_x_categorical_feature_b = crossed_column(...) +categorical_feature_a_emb = embedding_column( + categorical_column=categorical_feature_a, ...) +categorical_feature_b_emb = embedding_column( + categorical_id_column=categorical_feature_b, ...) + +estimator = tf.estimator.DNNLinearCombinedClassifier( + # wide settings + linear_feature_columns=[categorical_feature_a_x_categorical_feature_b], + linear_optimizer=tf.keras.optimizers.Ftrl(...), + # deep settings + dnn_feature_columns=[ + categorical_feature_a_emb, categorical_feature_b_emb, + numeric_feature], + dnn_hidden_units=[1000, 500, 100], + dnn_optimizer=tf.keras.optimizers.Adagrad(...), + # warm-start settings + warm_start_from="/path/to/checkpoint/dir") + +# To apply L1 and L2 regularization, you can set dnn_optimizer to: +tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001, + l2_regularization_strength=0.001) +# To apply learning rate decay, you can set dnn_optimizer to a callable: +lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96) +# It is the same for linear_optimizer. + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train, steps=100) +metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* for each `column` in `dnn_feature_columns` + `linear_feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using softmax cross entropy. + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_fn`
+
+Model function. Follows the signature:
+* `features` -- This is the first item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same.
+* `labels` -- This is the second item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same (for multi-head models). If
+mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be
+passed. If the `model_fn`'s signature does not accept `mode`, the
+`model_fn` must still be able to handle `labels=None`.
+* `mode` -- Optional. Specifies if this is training, evaluation or
+prediction. See tf.estimator.ModeKeys.
+* `params` -- Optional `dict` of hyperparameters. Will receive what is
+passed to the Estimator in the `params` parameter. This allows
+configuring Estimators through hyperparameter tuning.
+* `config` -- Optional `estimator.RunConfig` object. Will receive what
+is passed to Estimator as its `config` parameter, or a default
+value. Allows setting up things in your `model_fn` based on
+configuration such as `num_ps_replicas`, or `model_dir`.
+* Returns -- tf.estimator.EstimatorSpec
+
+`model_dir` + +Directory to save model parameters, graph and etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. If `PathLike` object, the +path will be resolved. If `None`, the model_dir in `config` will be used +if set. If both are set, they must be same. If both are `None`, a +temporary directory will be used. +
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory that contains the evaluation metrics.
+
+ + + +
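+For example, a sketch of a named evaluation whose metrics land in their own
+folder (assuming `input_fn_eval` as in the class example above):
+
+```python
+estimator.evaluate(input_fn=input_fn_eval, steps=10, name="holdout")
+print(estimator.eval_dir(name="holdout"))  # directory holding the "holdout" metrics
+```
+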

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the input data for evaluation. See
+[Premade Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* A tf.data.Dataset object: Outputs of `Dataset` object must be a tuple
+`(features, labels)` with same constraints as below.
+* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a
+dictionary of string feature name to `Tensor` and `labels` is a
+`Tensor` or a dictionary of string label name to `Tensor`. Both
+`features` and `labels` are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+A string or a list of strings: the name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the features. Prediction continues
+until `input_fn` raises an end-of-input exception
+(tf.errors.OutOfRangeError or `StopIteration`). See [Premade
+Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* tf.data.Dataset object -- Outputs of `Dataset` object must have
+same constraints as below.
+* features -- A tf.Tensor or a dictionary of string feature name to
+`Tensor`. features are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+* A tuple, in which case the first item is extracted as features.
+
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
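+As another variation, a sketch that keeps only one prediction key (the key
+name `"probabilities"` is illustrative and depends on the head in use;
+`input_fn_predict` is assumed to be defined as in the class example above):
+
+```python
+for prediction in estimator.predict(input_fn=input_fn_predict,
+                                    predict_keys=["probabilities"]):
+  print(prediction["probabilities"])
+```
+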

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or until `input_fn` raises the `tf.errors.OutOfRangeError`
+exception or `StopIteration`. `steps` works incrementally: if you
+call `train(steps=10)` twice, training occurs for 20 steps in total.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you do not want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/DNNLinearCombinedEstimator.md b/site/en/api_docs/python/tf/compat/v1/estimator/DNNLinearCombinedEstimator.md new file mode 100644 index 00000000000..b2a13d5741f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/DNNLinearCombinedEstimator.md @@ -0,0 +1,1233 @@ +description: An estimator for TensorFlow Linear and DNN joined models with custom head. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.DNNLinearCombinedEstimator + + + + + + + + + +An estimator for TensorFlow Linear and DNN joined models with custom head. + +Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + +Note: This estimator is also known as wide-n-deep. + +#### Example: + + + +```python +numeric_feature = numeric_column(...) +categorical_column_a = categorical_column_with_hash_bucket(...) +categorical_column_b = categorical_column_with_hash_bucket(...) + +categorical_feature_a_x_categorical_feature_b = crossed_column(...) +categorical_feature_a_emb = embedding_column( + categorical_column=categorical_feature_a, ...) +categorical_feature_b_emb = embedding_column( + categorical_column=categorical_feature_b, ...) + +estimator = tf.estimator.DNNLinearCombinedEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3), + # wide settings + linear_feature_columns=[categorical_feature_a_x_categorical_feature_b], + linear_optimizer=tf.keras.optimizers.Ftrl(...), + # deep settings + dnn_feature_columns=[ + categorical_feature_a_emb, categorical_feature_b_emb, + numeric_feature], + dnn_hidden_units=[1000, 500, 100], + dnn_optimizer=tf.keras.optimizers.Adagrad(...)) + +# To apply L1 and L2 regularization, you can set dnn_optimizer to: +tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001, + l2_regularization_strength=0.001) +# To apply learning rate decay, you can set dnn_optimizer to a callable: +lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96) +# It is the same for linear_optimizer. + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train, steps=100) +metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* for each `column` in `dnn_feature_columns` + `linear_feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using mean squared error. + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_fn` + +Model function. Follows the signature: +* `features` -- This is the first item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same. +* `labels` -- This is the second item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same (for multi-head models). If +mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be +passed. If the `model_fn`'s signature does not accept `mode`, the +`model_fn` must still be able to handle `labels=None`. +* `mode` -- Optional. Specifies if this is training, evaluation or +prediction. See tf.estimator.ModeKeys. +* `params` -- Optional `dict` of hyperparameters. Will receive what is +passed to Estimator in the `params` parameter. This allows Estimators to be +configured from hyperparameter tuning. +* `config` -- Optional `estimator.RunConfig` object. Will receive what +is passed to Estimator as its `config` parameter, or a default +value. Allows setting up things in your `model_fn` based on +configuration such as `num_ps_replicas`, or `model_dir`. +* Returns -- tf.estimator.EstimatorSpec +
+`model_dir` + +Directory in which to save model parameters, graph, etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. If a `PathLike` object, the +path will be resolved. If `None`, the model_dir in `config` will be used +if set. If both are set, they must be the same. If both are `None`, a +temporary directory will be used. +
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if the user needs to run multiple evaluations on +different data sets, such as on training data vs. test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in TensorBoard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if the user needs to run multiple evaluations on +different data sets, such as on training data vs. test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in TensorBoard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
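+
+To make the `evaluate` signature above concrete (and because the class-level example writes `def input_fn_train:` without call parentheses, which is not runnable Python), here is a minimal, hedged sketch. The feature name `"x"`, the hidden units, and the model directory are illustrative placeholders, not part of the documented API, and the head and feature columns are chosen only to keep the example self-contained.
+
+```python
+import numpy as np
+import tensorflow as tf
+
+def input_fn_train():
+  # Returns a tf.data.Dataset of (features, labels) batches.
+  features = {"x": np.random.rand(100, 1).astype(np.float32)}
+  labels = np.random.rand(100, 1).astype(np.float32)
+  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(10)
+
+estimator = tf.compat.v1.estimator.DNNLinearCombinedEstimator(
+    head=tf.estimator.RegressionHead(),
+    linear_feature_columns=[tf.feature_column.numeric_column("x")],
+    dnn_feature_columns=[tf.feature_column.numeric_column("x")],
+    dnn_hidden_units=[8, 4],
+    model_dir="/tmp/combined_model")  # hypothetical directory
+
+# `steps` is incremental for train(); see the `train` method below.
+estimator.train(input_fn=input_fn_train, steps=50)
+metrics = estimator.evaluate(input_fn=input_fn_train, steps=5)
+print(metrics["loss"], metrics["global_step"])
+```
+
+Later sketches on this page reuse `estimator` and `input_fn_train` as defined here.
+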

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
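+
+A hedged sketch of assembling the `input_receiver_fn_map` described above. It reuses the `estimator` from the earlier sketch, maps only the PREDICT mode, and the feature spec and export directory are illustrative assumptions.
+
+```python
+import tensorflow as tf
+
+# Parses serialized tf.Example protos into the feature "x" assumed earlier.
+feature_spec = {"x": tf.io.FixedLenFeature([1], tf.float32)}
+serving_input_receiver_fn = (
+    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
+
+export_dir = estimator.experimental_export_all_saved_models(
+    export_dir_base="/tmp/all_modes_export",  # hypothetical path
+    input_receiver_fn_map={
+        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
+    })
+print(export_dir)  # bytes path of the timestamped export directory
+```
+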

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will +be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
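+
+A minimal, hedged sketch of `export_saved_model`, again reusing the `estimator` from the earlier sketch; the raw-tensor serving receiver and the export directory are illustrative assumptions.
+
+```python
+import tensorflow as tf
+
+def serving_input_receiver_fn():
+  # Accepts a raw float tensor for the feature "x" assumed earlier.
+  inputs = {"x": tf.compat.v1.placeholder(tf.float32, shape=[None, 1])}
+  return tf.estimator.export.ServingInputReceiver(inputs, inputs)
+
+export_path = estimator.export_saved_model(
+    export_dir_base="/tmp/serving_export",  # hypothetical path
+    serving_input_receiver_fn=serving_input_receiver_fn)
+print(export_path)  # bytes path of the timestamped export directory
+```
+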

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of strings, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
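+
+A short, hedged sketch of inspecting checkpoint variables with `get_variable_names` and `get_variable_value`; it assumes the `estimator` from the earlier sketch has been trained at least once, since both calls raise `ValueError` before a checkpoint exists.
+
+```python
+for name in estimator.get_variable_names():
+  value = estimator.get_variable_value(name)  # numpy array
+  print(name, value.shape)
+```
+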

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +
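+
+A hedged sketch of pinning work to a specific checkpoint with `latest_checkpoint`, reusing `estimator` and `input_fn_train` from the earlier sketches.
+
+```python
+ckpt = estimator.latest_checkpoint()
+if ckpt is None:
+  print("no checkpoint written yet")
+else:
+  # Evaluate exactly this checkpoint, even if training writes newer ones later.
+  metrics = estimator.evaluate(
+      input_fn=input_fn_train, steps=5, checkpoint_path=ckpt)
+```
+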

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. +* A tuple, in which case the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
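+
+A hedged sketch of `predict`; it reuses `estimator` from the earlier sketches and builds an unlabeled input_fn over the assumed feature `"x"`. Note that `predict` returns a generator, so nothing runs until it is iterated.
+
+```python
+import numpy as np
+import tensorflow as tf
+
+def input_fn_predict():
+  features = {"x": np.random.rand(16, 1).astype(np.float32)}
+  return tf.data.Dataset.from_tensor_slices(features).batch(4)
+
+for pred in estimator.predict(input_fn=input_fn_predict):
+  print(pred)  # dict of evaluated prediction tensors for one example
+```
+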

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train +forever or train until `input_fn` generates the `tf.errors.OutOfRange` +error or `StopIteration` exception. `steps` works incrementally. If you +call `train(steps=10)` twice, training occurs for 20 steps in total. +If `OutOfRange` or `StopIteration` occurs in the middle, training stops +before 20 steps. If you don't want incremental behavior, +set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train the model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` mean 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` mean that the second call will not do +any iterations, since the first call already did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/DNNLinearCombinedRegressor.md b/site/en/api_docs/python/tf/compat/v1/estimator/DNNLinearCombinedRegressor.md new file mode 100644 index 00000000000..369aa3408a8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/DNNLinearCombinedRegressor.md @@ -0,0 +1,1236 @@ +description: An estimator for TensorFlow Linear and DNN joined models for regression. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.DNNLinearCombinedRegressor + + + + + + + + + +An estimator for TensorFlow Linear and DNN joined models for regression. + +Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + +Note: This estimator is also known as wide-n-deep. + +#### Example: + + + +```python +numeric_feature = numeric_column(...) +categorical_column_a = categorical_column_with_hash_bucket(...) +categorical_column_b = categorical_column_with_hash_bucket(...) + +categorical_feature_a_x_categorical_feature_b = crossed_column(...) +categorical_feature_a_emb = embedding_column( + categorical_column=categorical_feature_a, ...) +categorical_feature_b_emb = embedding_column( + categorical_column=categorical_feature_b, ...) + +estimator = tf.estimator.DNNLinearCombinedRegressor( + # wide settings + linear_feature_columns=[categorical_feature_a_x_categorical_feature_b], + linear_optimizer=tf.keras.optimizers.Ftrl(...), + # deep settings + dnn_feature_columns=[ + categorical_feature_a_emb, categorical_feature_b_emb, + numeric_feature], + dnn_hidden_units=[1000, 500, 100], + dnn_optimizer=tf.keras.optimizers.Adagrad(...), + # warm-start settings + warm_start_from="/path/to/checkpoint/dir") + +# To apply L1 and L2 regularization, you can set dnn_optimizer to: +tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001, + l2_regularization_strength=0.001) +# To apply learning rate decay, you can set dnn_optimizer to a callable: +lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96) +# It is the same for linear_optimizer. + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train, steps=100) +metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* for each `column` in `dnn_feature_columns` + `linear_feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using mean squared error. + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_fn` + +Model function. Follows the signature: +* `features` -- This is the first item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same. +* `labels` -- This is the second item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same (for multi-head models). If +mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be +passed. If the `model_fn`'s signature does not accept `mode`, the +`model_fn` must still be able to handle `labels=None`. +* `mode` -- Optional. Specifies if this is training, evaluation or +prediction. See tf.estimator.ModeKeys. +* `params` -- Optional `dict` of hyperparameters. Will receive what is +passed to Estimator in the `params` parameter. This allows Estimators to be +configured from hyperparameter tuning. +* `config` -- Optional `estimator.RunConfig` object. Will receive what +is passed to Estimator as its `config` parameter, or a default +value. Allows setting up things in your `model_fn` based on +configuration such as `num_ps_replicas`, or `model_dir`. +* Returns -- tf.estimator.EstimatorSpec +
+`model_dir` + +Directory in which to save model parameters, graph, etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. If a `PathLike` object, the +path will be resolved. If `None`, the model_dir in `config` will be used +if set. If both are set, they must be the same. If both are `None`, a +temporary directory will be used. +
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if the user needs to run multiple evaluations on +different data sets, such as on training data vs. test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in TensorBoard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if the user needs to run multiple evaluations on +different data sets, such as on training data vs. test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in TensorBoard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
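+
+A hedged sketch of running separately named evaluations, which is where the `name` argument and `eval_dir` above come together. It assumes a trained `regressor` of this class plus `input_fn_train` and `input_fn_test` evaluation input functions.
+
+```python
+train_metrics = regressor.evaluate(
+    input_fn=input_fn_train, steps=10, name="train")
+test_metrics = regressor.evaluate(
+    input_fn=input_fn_test, steps=10, name="test")
+
+# Each named evaluation writes metrics to its own directory, so the runs
+# appear separately in TensorBoard.
+print(regressor.eval_dir(name="train"))
+print(regressor.eval_dir(name="test"))
+```
+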

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will +be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of strings, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. +* A tuple, in which case the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
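+
+A hedged sketch of the `predict_keys` and `yield_single_examples` arguments described above, assuming the trained `regressor` and an `input_fn_predict` like the one sketched for the estimator class. For this canned regressor the predictions dict includes a `"predictions"` key.
+
+```python
+for batch in regressor.predict(input_fn=input_fn_predict,
+                               predict_keys=["predictions"],
+                               yield_single_examples=False):
+  # Whole batches are yielded, so each value keeps its batch dimension.
+  print(batch["predictions"].shape)
+```
+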

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train +forever or train until `input_fn` generates the `tf.errors.OutOfRange` +error or `StopIteration` exception. `steps` works incrementally. If you +call `train(steps=10)` twice, training occurs for 20 steps in total. +If `OutOfRange` or `StopIteration` occurs in the middle, training stops +before 20 steps. If you don't want incremental behavior, +set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train the model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` mean 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` mean that the second call will not do +any iterations, since the first call already did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/DNNRegressor.md b/site/en/api_docs/python/tf/compat/v1/estimator/DNNRegressor.md new file mode 100644 index 00000000000..42eb0ef27cd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/DNNRegressor.md @@ -0,0 +1,1237 @@ +description: A regressor for TensorFlow DNN models. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.DNNRegressor + + + + + + + + + +A regressor for TensorFlow DNN models. + +Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + + +#### Example: + + + +```python +categorical_feature_a = categorical_column_with_hash_bucket(...) +categorical_feature_b = categorical_column_with_hash_bucket(...) + +categorical_feature_a_emb = embedding_column( + categorical_column=categorical_feature_a, ...) +categorical_feature_b_emb = embedding_column( + categorical_column=categorical_feature_b, ...) + +estimator = tf.estimator.DNNRegressor( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256]) + +# Or estimator using the ProximalAdagradOptimizer optimizer with +# regularization. +estimator = tf.estimator.DNNRegressor( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001 + )) + +# Or estimator using an optimizer with a learning rate decay. +estimator = tf.estimator.DNNRegressor( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96)) + +# Or estimator with warm-starting from a previous checkpoint. +estimator = tf.estimator.DNNRegressor( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + warm_start_from="/path/to/checkpoint/dir") + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train) +metrics = estimator.evaluate(input_fn=input_fn_eval) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with `key=weight_column` whose + value is a `Tensor`. +* for each `column` in `feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using mean squared error. + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_fn` + +Model function. Follows the signature: +* `features` -- This is the first item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same. +* `labels` -- This is the second item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same (for multi-head models). If +mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be +passed. If the `model_fn`'s signature does not accept `mode`, the +`model_fn` must still be able to handle `labels=None`. +* `mode` -- Optional. Specifies if this is training, evaluation or +prediction. See tf.estimator.ModeKeys. +* `params` -- Optional `dict` of hyperparameters. Will receive what is +passed to Estimator in the `params` parameter. This allows Estimators to be +configured from hyperparameter tuning. +* `config` -- Optional `estimator.RunConfig` object. Will receive what +is passed to Estimator as its `config` parameter, or a default +value. Allows setting up things in your `model_fn` based on +configuration such as `num_ps_replicas`, or `model_dir`. +* Returns -- tf.estimator.EstimatorSpec +
+`model_dir` + +Directory in which to save model parameters, graph, etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. If a `PathLike` object, the +path will be resolved. If `None`, the model_dir in `config` will be used +if set. If both are set, they must be the same. If both are `None`, a +temporary directory will be used. +
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if the user needs to run multiple evaluations on +different data sets, such as on training data vs. test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in TensorBoard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if the user needs to run multiple evaluations on +different data sets, such as on training data vs. test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in TensorBoard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
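+
+A hedged sketch of reading the metrics dict that `evaluate` returns for this canned regressor, assuming a trained `dnn_regressor` of this class and an `input_fn_eval` evaluation input function.
+
+```python
+metrics = dnn_regressor.evaluate(input_fn=input_fn_eval, steps=20)
+
+print("global_step:", metrics["global_step"])
+print("loss (per mini-batch):", metrics["loss"])
+print("average_loss (per sample):", metrics["average_loss"])
+print("label/mean:", metrics["label/mean"])
+print("prediction/mean:", metrics["prediction/mean"])
+```
+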

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
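+
+A sketch of exporting only the inference graph through the mode map,
+reusing the toy `estimator` from the `evaluate` sketch above; the export
+directory and feature name are placeholders:
+
+```python
+def serving_input_receiver_fn():
+  # Accepts serialized tf.Example protos and parses the single "x" feature.
+  serialized = tf.compat.v1.placeholder(
+      dtype=tf.string, shape=[None], name="input_example_tensor")
+  features = tf.io.parse_example(
+      serialized, {"x": tf.io.FixedLenFeature([1], tf.float32)})
+  return tf.estimator.export.ServingInputReceiver(
+      features, {"examples": serialized})
+
+export_path = estimator.experimental_export_all_saved_models(
+    export_dir_base="/tmp/exported_models",
+    input_receiver_fn_map={
+        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
+    })
+print(export_path)  # bytes path of the timestamped export directory
+```
+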

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
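+
+A sketch of a serving export, reusing `estimator` and `feature_columns`
+from the `evaluate` sketch above. The builder helper generates a
+`serving_input_receiver_fn` that parses serialized `tf.Example` protos:
+
+```python
+feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
+serving_input_receiver_fn = (
+    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
+
+# Creates a timestamped subdirectory under the base path and returns it as
+# a bytes object.
+export_path = estimator.export_saved_model(
+    "/tmp/serving", serving_input_receiver_fn)
+```
+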

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
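+
+Since the method has only been renamed, migrating is a one-line change
+(sketch; `serving_input_receiver_fn` as in the export sketch above):
+
+```python
+# Deprecated spelling:
+path = estimator.export_savedmodel("/tmp/serving", serving_input_receiver_fn)
+# Preferred replacement:
+path = estimator.export_saved_model("/tmp/serving", serving_input_receiver_fn)
+```
+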

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+string or a list of strings, name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
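+
+A short sketch combining the two variable-inspection methods; it assumes
+the estimator has already written at least one checkpoint (for example via
+the `train` call in the `evaluate` sketch above):
+
+```python
+for var_name in estimator.get_variable_names():
+  value = estimator.get_variable_value(var_name)  # numpy array
+  print(var_name, value.shape)
+```
+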

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +
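+
+A sketch of using `latest_checkpoint`, for example to pin an evaluation to
+a specific checkpoint file (`input_fn` as in the `evaluate` sketch above):
+
+```python
+ckpt = estimator.latest_checkpoint()
+if ckpt is None:
+  print("No checkpoint written yet.")
+else:
+  # Typically something like "<model_dir>/model.ckpt-5".
+  metrics = estimator.evaluate(input_fn=input_fn, checkpoint_path=ckpt)
+```
+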

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
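+
+`predict` returns a Python generator, so predictions are computed lazily as
+you iterate. A sketch reusing the toy `estimator` from the `evaluate`
+sketch above; the prediction keys named in the comment are typical for
+canned classifiers:
+
+```python
+def predict_input_fn():
+  return tf.data.Dataset.from_tensor_slices({"x": [[1.5], [3.5]]}).batch(1)
+
+for pred in estimator.predict(input_fn=predict_input_fn):
+  # Canned classifiers yield dicts with keys such as 'logits',
+  # 'probabilities' and 'class_ids'.
+  print(pred["class_ids"], pred["probabilities"])
+```
+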

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally: if you
+call `train(steps=10)` twice, training occurs for 20 steps in total. If
+`OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you don't want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/Estimator.md b/site/en/api_docs/python/tf/compat/v1/estimator/Estimator.md new file mode 100644 index 00000000000..1a7e54d1ea5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/Estimator.md @@ -0,0 +1,1199 @@ +description: Estimator class to train and evaluate TensorFlow models. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.Estimator + + + + + + + + + +Estimator class to train and evaluate TensorFlow models. + + + + + + + +The `Estimator` object wraps a model which is specified by a `model_fn`, +which, given inputs and a number of other parameters, returns the ops +necessary to perform training, evaluation, or predictions. + +All outputs (checkpoints, event files, etc.) are written to `model_dir`, or a +subdirectory thereof. If `model_dir` is not set, a temporary directory is +used. + +The `config` argument can be passed tf.estimator.RunConfig object containing +information about the execution environment. It is passed on to the +`model_fn`, if the `model_fn` has a parameter named "config" (and input +functions in the same manner). If the `config` parameter is not passed, it is +instantiated by the `Estimator`. Not passing config means that defaults useful +for local execution are used. `Estimator` makes config available to the model +(for instance, to allow specialization based on the number of workers +available), and also uses some of its fields to control internals, especially +regarding checkpointing. + +The `params` argument contains hyperparameters. It is passed to the +`model_fn`, if the `model_fn` has a parameter named "params", and to the input +functions in the same manner. `Estimator` only passes params along, it does +not inspect it. The structure of `params` is therefore entirely up to the +developer. + +None of `Estimator`'s methods can be overridden in subclasses (its +constructor enforces this). Subclasses should use `model_fn` to configure +the base class, and may add methods implementing specialized functionality. + +See [estimators](https://tensorflow.org/guide/estimators) for more + information. + +To warm-start an `Estimator`: + +```python +estimator = tf.estimator.DNNClassifier( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + warm_start_from="/path/to/checkpoint/dir") +``` + +For more details on warm-start configuration, see +tf.estimator.WarmStartSettings. + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_fn`
+
+Model function. Follows the signature:
+* `features` -- This is the first item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same.
+* `labels` -- This is the second item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same (for multi-head models). If
+mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be
+passed. If the `model_fn`'s signature does not accept `mode`, the
+`model_fn` must still be able to handle `labels=None`.
+* `mode` -- Optional. Specifies if this is training, evaluation or
+prediction. See tf.estimator.ModeKeys.
+* `params` -- Optional `dict` of hyperparameters. Will receive what is
+passed to Estimator in the `params` parameter. This allows configuring
+Estimators from hyperparameter tuning.
+* `config` -- Optional `estimator.RunConfig` object. Will receive what
+is passed to Estimator as its `config` parameter, or a default
+value. Allows setting up things in your `model_fn` based on
+configuration such as `num_ps_replicas` or `model_dir`.
+* Returns -- tf.estimator.EstimatorSpec
+
+`model_dir`
+
+Directory to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model. If a `PathLike` object, the
+path will be resolved. If `None`, the model_dir in `config` will be used
+if set. If both are set, they must be the same. If both are `None`, a
+temporary directory will be used.
+
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+
+
+
+#### Eager Compatibility
+Calling methods of `Estimator` will work while eager execution is enabled.
+However, the `model_fn` and `input_fn` are not executed eagerly; `Estimator`
+will switch to graph mode before calling all user-provided functions
+(including hooks), so their code has to be compatible with graph mode
+execution. Note that `input_fn` code using tf.data generally works in both
+graph and eager modes.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation
+metrics.
+
+ + + +
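+
+A sketch of locating the evaluation summaries, for example to point
+TensorBoard at them. It assumes `estimator` is an `Estimator` on which
+`evaluate(..., name="validation")` has already been run:
+
+```python
+eval_path = estimator.eval_dir(name="validation")
+# Typically "<model_dir>/eval_validation" ("<model_dir>/eval" when no name
+# was given).
+print(eval_path)
+```
+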

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+
+View source
+
+
+
+Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
+
+For each mode passed in via the `input_receiver_fn_map`,
+this method builds a new graph by calling the `input_receiver_fn` to obtain
+feature and label `Tensor`s. Next, this method calls the `Estimator`'s
+`model_fn` in the passed mode to generate the model graph based on
+those features and labels, and restores the given checkpoint
+(or, lacking that, the most recent checkpoint) into the graph.
+Only one of the modes is used for saving variables to the `SavedModel`
+(order of preference: tf.estimator.ModeKeys.TRAIN,
+tf.estimator.ModeKeys.EVAL, then
+tf.estimator.ModeKeys.PREDICT), such that up to three
+`tf.MetaGraphDefs` are saved with a single set of variables in a single
+`SavedModel` directory.
+
+For the variables and `tf.MetaGraphDefs`, this method creates a timestamped
+export directory below `export_dir_base` and writes a `SavedModel` into it
+containing the `tf.MetaGraphDef` for the given mode and its associated
+signatures.
+
+For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
+for each element of the `export_outputs` dict returned from the `model_fn`,
+named using the same keys. One of these keys is always
+`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
+indicating which
+signature will be served when a serving request does not specify one.
+For each signature, the outputs are provided by the corresponding
+tf.estimator.export.ExportOutputs, and the inputs are always the input
+receivers provided by
+the `serving_input_receiver_fn`.
+
+For training and evaluation, the `train_op` is stored in an extra
+collection, and loss, metrics, and predictions are included in a
+`SignatureDef` for the mode in question.
+
+Extra assets may be written into the `SavedModel` via the `assets_extra`
+argument. This should be a dict, where each key gives a destination path
+(including the filename) relative to the assets.extra directory. The
+corresponding value gives the full path of the source file to be copied.
+For example, the simple case of copying a single file without renaming it
+is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+string or a list of strings, name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally: if you
+call `train(steps=10)` twice, training occurs for 20 steps in total. If
+`OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you don't want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/LinearClassifier.md b/site/en/api_docs/python/tf/compat/v1/estimator/LinearClassifier.md new file mode 100644 index 00000000000..65cb199447c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/LinearClassifier.md @@ -0,0 +1,1237 @@ +description: Linear classifier model. + +
+ + + + + + + + + + + + + +
+
+# tf.compat.v1.estimator.LinearClassifier
+
+
+
+
+
+
+
+
+
+Linear classifier model.
+
+Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md)
+
+
+
+
+
+
+
+Train a linear model to classify instances into one of multiple possible
+classes. When the number of possible classes is 2, this is binary
+classification.
+
+#### Example:
+
+
+
+```python
+categorical_column_a = categorical_column_with_hash_bucket(...)
+categorical_column_b = categorical_column_with_hash_bucket(...)
+
+categorical_feature_a_x_categorical_feature_b = crossed_column(...)
+
+# Estimator using the default optimizer.
+estimator = tf.estimator.LinearClassifier(
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b])
+
+# Or estimator using the FTRL optimizer with regularization.
+estimator = tf.estimator.LinearClassifier(
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b],
+    optimizer=tf.keras.optimizers.Ftrl(
+        learning_rate=0.1,
+        l1_regularization_strength=0.001
+    ))
+
+# Or estimator using an optimizer with a learning rate decay.
+estimator = tf.estimator.LinearClassifier(
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b],
+    optimizer=lambda: tf.keras.optimizers.Ftrl(
+        learning_rate=tf.compat.v1.train.exponential_decay(
+            learning_rate=0.1,
+            global_step=tf.compat.v1.train.get_global_step(),
+            decay_steps=10000,
+            decay_rate=0.96)))
+
+# Or estimator with warm-starting from a previous checkpoint.
+estimator = tf.estimator.LinearClassifier(
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b],
+    warm_start_from="/path/to/checkpoint/dir")
+
+
+# Input builders
+def input_fn_train():
+  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
+  # index.
+  pass
+def input_fn_eval():
+  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
+  # index.
+  pass
+def input_fn_predict():
+  # Returns tf.data.Dataset of (x, None) tuple.
+  pass
+estimator.train(input_fn=input_fn_train)
+metrics = estimator.evaluate(input_fn=input_fn_eval)
+predictions = estimator.predict(input_fn=input_fn_predict)
+```
+
+Input of `train` and `evaluate` should have the following features,
+otherwise there will be a `KeyError`:
+
+* if `weight_column` is not `None`, a feature with `key=weight_column` whose
+  value is a `Tensor`.
+* for each `column` in `feature_columns`:
+  - if `column` is a `SparseColumn`, a feature with `key=column.name`
+    whose `value` is a `SparseTensor`.
+  - if `column` is a `WeightedSparseColumn`, two features: the first with
+    `key` the id column name, the second with `key` the weight column name.
+    Both features' `value` must be a `SparseTensor`.
+  - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
+    whose `value` is a `Tensor`.
+
+Loss is calculated using softmax cross entropy.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`model_fn`
+
+Model function. Follows the signature:
+* `features` -- This is the first item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same.
+* `labels` -- This is the second item returned from the `input_fn`
+passed to `train`, `evaluate`, and `predict`. This should be a
+single tf.Tensor or `dict` of same (for multi-head models). If
+mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be
+passed. If the `model_fn`'s signature does not accept `mode`, the
+`model_fn` must still be able to handle `labels=None`.
+* `mode` -- Optional. Specifies if this is training, evaluation or
+prediction. See tf.estimator.ModeKeys.
+* `params` -- Optional `dict` of hyperparameters. Will receive what is
+passed to Estimator in the `params` parameter. This allows configuring
+Estimators from hyperparameter tuning.
+* `config` -- Optional `estimator.RunConfig` object. Will receive what
+is passed to Estimator as its `config` parameter, or a default
+value. Allows setting up things in your `model_fn` based on
+configuration such as `num_ps_replicas` or `model_dir`.
+* Returns -- tf.estimator.EstimatorSpec
+
+`model_dir`
+
+Directory to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model. If a `PathLike` object, the
+path will be resolved. If `None`, the model_dir in `config` will be used
+if set. If both are set, they must be the same. If both are `None`, a
+temporary directory will be used.
+
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation
+metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+
+View source
+
+
+
+Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
+
+For each mode passed in via the `input_receiver_fn_map`,
+this method builds a new graph by calling the `input_receiver_fn` to obtain
+feature and label `Tensor`s. Next, this method calls the `Estimator`'s
+`model_fn` in the passed mode to generate the model graph based on
+those features and labels, and restores the given checkpoint
+(or, lacking that, the most recent checkpoint) into the graph.
+Only one of the modes is used for saving variables to the `SavedModel`
+(order of preference: tf.estimator.ModeKeys.TRAIN,
+tf.estimator.ModeKeys.EVAL, then
+tf.estimator.ModeKeys.PREDICT), such that up to three
+`tf.MetaGraphDefs` are saved with a single set of variables in a single
+`SavedModel` directory.
+
+For the variables and `tf.MetaGraphDefs`, this method creates a timestamped
+export directory below `export_dir_base` and writes a `SavedModel` into it
+containing the `tf.MetaGraphDef` for the given mode and its associated
+signatures.
+
+For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
+for each element of the `export_outputs` dict returned from the `model_fn`,
+named using the same keys. One of these keys is always
+`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
+indicating which
+signature will be served when a serving request does not specify one.
+For each signature, the outputs are provided by the corresponding
+tf.estimator.export.ExportOutputs, and the inputs are always the input
+receivers provided by
+the `serving_input_receiver_fn`.
+
+For training and evaluation, the `train_op` is stored in an extra
+collection, and loss, metrics, and predictions are included in a
+`SignatureDef` for the mode in question.
+
+Extra assets may be written into the `SavedModel` via the `assets_extra`
+argument. This should be a dict, where each key gives a destination path
+(including the filename) relative to the assets.extra directory. The
+corresponding value gives the full path of the source file to be copied.
+For example, the simple case of copying a single file without renaming it
+is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
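+
+One way to smoke-test an export from Python (TF 2.x) is to load it back and
+inspect its serving signature. A sketch assuming `export_path` holds the
+bytes path returned by `export_saved_model`:
+
+```python
+imported = tf.saved_model.load(export_path.decode("utf-8"))
+print(list(imported.signatures.keys()))       # e.g. ['serving_default', ...]
+serving_fn = imported.signatures["serving_default"]
+print(serving_fn.structured_input_signature)  # expected input tensors
+print(serving_fn.structured_outputs)          # output tensor specs
+```
+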

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+string or a list of strings, name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
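+
+For illustration, a sketch of consuming the generator returned by `predict`,
+assuming a trained `estimator`; the `input_fn` and the feature name `x` are
+assumptions:
+
+```python
+import tensorflow as tf
+
+def input_fn_predict():
+  # A Dataset of feature dicts with no labels; prediction stops when the
+  # Dataset is exhausted.
+  return tf.data.Dataset.from_tensor_slices(
+      {'x': [[1.0], [2.0], [3.0]]}).batch(2)
+
+# predict() returns a generator; nothing is evaluated until it is iterated.
+for pred in estimator.predict(input_fn=input_fn_predict):
+  # With the default yield_single_examples=True, each `pred` is a dict of
+  # one example's prediction tensors, already evaluated to NumPy values.
+  print(pred)
+```
+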

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or until `input_fn` generates the `tf.errors.OutOfRange` error
+or `StopIteration` exception. `steps` works incrementally: if you call
+`train(steps=10)` twice, training occurs for 20 steps in total. If
+`OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you do not want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps`
+
+Number of total steps for which to train the model. If `None`,
+train forever or until `input_fn` generates the
+`tf.errors.OutOfRange` error or `StopIteration` exception. If set,
+`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the
+middle, training stops before `max_steps` steps. Two calls to
+`train(steps=100)` mean 200 training iterations. On the other hand, two
+calls to `train(max_steps=100)` mean that the second call will not do
+any iterations, since the first call did all 100 steps.
+
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError`
+
+If either `steps` or `max_steps` is `<= 0`.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/LinearEstimator.md b/site/en/api_docs/python/tf/compat/v1/estimator/LinearEstimator.md new file mode 100644 index 00000000000..b9251a32b4f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/LinearEstimator.md @@ -0,0 +1,1198 @@ +description: An estimator for TensorFlow linear models with user-specified head. + +
+ + + + + + + + + + + + + +
+
+# tf.compat.v1.estimator.LinearEstimator
+
+
+
+
+
+
+
+
+
+
+An estimator for TensorFlow linear models with user-specified head.
+
+Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md)
+
+
+
+
+
+
+
+
+#### Example:
+
+
+
+```python
+categorical_column_a = categorical_column_with_hash_bucket(...)
+categorical_column_b = categorical_column_with_hash_bucket(...)
+
+categorical_feature_a_x_categorical_feature_b = crossed_column(...)
+
+# Estimator using the default optimizer.
+estimator = tf.estimator.LinearEstimator(
+    head=tf.estimator.MultiLabelHead(n_classes=3),
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b])
+
+# Or estimator using an optimizer with a learning rate decay.
+estimator = tf.estimator.LinearEstimator(
+    head=tf.estimator.MultiLabelHead(n_classes=3),
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b],
+    optimizer=lambda: tf.keras.optimizers.Ftrl(
+        learning_rate=tf.compat.v1.train.exponential_decay(
+            learning_rate=0.1,
+            global_step=tf.compat.v1.train.get_global_step(),
+            decay_steps=10000,
+            decay_rate=0.96)))
+
+# Or estimator using the FTRL optimizer with regularization.
+estimator = tf.estimator.LinearEstimator(
+    head=tf.estimator.MultiLabelHead(n_classes=3),
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b],
+    optimizer=tf.keras.optimizers.Ftrl(
+        learning_rate=0.1,
+        l1_regularization_strength=0.001
+    ))
+
+def input_fn_train():
+  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
+  # index.
+  pass
+def input_fn_eval():
+  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
+  # index.
+  pass
+def input_fn_predict():
+  # Returns tf.data.Dataset of (x, None) tuple.
+  pass
+estimator.train(input_fn=input_fn_train, steps=100)
+metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
+predictions = estimator.predict(input_fn=input_fn_predict)
+```
+
+Input of `train` and `evaluate` should have the following features,
+otherwise there will be a `KeyError`:
+
+* if `weight_column` is not `None`, a feature with `key=weight_column` whose
+  value is a `Tensor`.
+* for each `column` in `feature_columns`:
+  - if `column` is a `CategoricalColumn`, a feature with `key=column.name`
+    whose `value` is a `SparseTensor`.
+  - if `column` is a `WeightedCategoricalColumn`, two features: the first
+    with `key` the id column name, the second with `key` the weight column
+    name. Both features' `value` must be a `SparseTensor`.
+  - if `column` is a `DenseColumn`, a feature with `key=column.name`
+    whose `value` is a `Tensor`.
+
+Loss and predicted output are determined by the specified head.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`head` + +A `_Head` instance constructed with a method such as +`tf.contrib.estimator.multi_label_head`. +
+`feature_columns` + +An iterable containing all the feature columns used by +the model. All items in the set should be instances of classes derived +from `FeatureColumn`. +
+`model_dir`
+
+Directory in which to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model.
+
+`optimizer`
+
+An instance of `tf.Optimizer` used to train the model. Can also
+be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD') or a
+callable. Defaults to the FTRL optimizer.
+
+`config` + +`RunConfig` object to configure the runtime settings. +
+`partitioner` + +Optional. Partitioner for input layer. +
+`sparse_combiner`
+
+A string specifying how to reduce if a categorical column
+is multivalent. One of "mean", "sqrtn", and "sum" -- these are
+effectively different ways to do example-level normalization, which can
+be useful for bag-of-words features. For more details, see
+`tf.feature_column.linear_model`.
+
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory that contains the evaluation
+metrics.
+
+ + + +
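+
+As an illustrative sketch (the evaluation name is an assumption), the
+directory returned by `eval_dir` can be read back with a summary iterator
+once `evaluate(..., name='validation')` has run on a trained `estimator`:
+
+```python
+import tensorflow as tf
+
+eval_path = estimator.eval_dir(name='validation')
+
+# Read the event files that evaluate() wrote into that directory and print
+# the logged metric tags per global step.
+for event_file in tf.io.gfile.glob(eval_path + '/events*'):
+  for event in tf.compat.v1.train.summary_iterator(event_file):
+    for value in event.summary.value:
+      print(event.step, value.tag)
+```
+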

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the input data for evaluation. See
+[Premade Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* A tf.data.Dataset object: Outputs of `Dataset` object must be a tuple
+`(features, labels)` with same constraints as below.
+* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a
+dictionary of string feature name to `Tensor` and `labels` is a
+`Tensor` or a dictionary of string label name to `Tensor`. Both
+`features` and `labels` are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
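+
+To make the train/evaluate cycle concrete for this class, a minimal sketch
+with made-up data; the column name, the in-memory dataset, and the
+`BinaryClassHead` choice are illustrative assumptions:
+
+```python
+import tensorflow as tf
+
+occupation = tf.feature_column.categorical_column_with_hash_bucket(
+    'occupation', hash_bucket_size=100)
+
+estimator = tf.compat.v1.estimator.LinearEstimator(
+    head=tf.estimator.BinaryClassHead(),
+    feature_columns=[occupation])
+
+def input_fn():
+  # Tiny in-memory (features, labels) dataset, repeated so train() can run
+  # for the requested number of steps.
+  features = {'occupation': ['engineer', 'doctor', 'engineer', 'artist']}
+  labels = [1, 0, 1, 0]
+  return tf.data.Dataset.from_tensor_slices(
+      (features, labels)).repeat().batch(2)
+
+estimator.train(input_fn=input_fn, steps=20)
+metrics = estimator.evaluate(input_fn=input_fn, steps=2)
+print(metrics['loss'], metrics['global_step'])
+```
+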

experimental_export_all_saved_models

+
+View source
+
+
+
+Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
+
+For each mode passed in via the `input_receiver_fn_map`,
+this method builds a new graph by calling the `input_receiver_fn` to obtain
+feature and label `Tensor`s. Next, this method calls the `Estimator`'s
+`model_fn` in the passed mode to generate the model graph based on
+those features and labels, and restores the given checkpoint
+(or, lacking that, the most recent checkpoint) into the graph.
+Only one of the modes is used for saving variables to the `SavedModel`
+(order of preference: tf.estimator.ModeKeys.TRAIN,
+tf.estimator.ModeKeys.EVAL, then
+tf.estimator.ModeKeys.PREDICT), such that up to three
+`tf.MetaGraphDefs` are saved with a single set of variables in a single
+`SavedModel` directory.
+
+For the variables and `tf.MetaGraphDefs`, this method creates a timestamped
+export directory below `export_dir_base` and writes a `SavedModel` into it
+containing the `tf.MetaGraphDef` for the given mode and its associated
+signatures.
+
+For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
+for each element of the `export_outputs` dict returned from the `model_fn`,
+named using the same keys. One of these keys is always
+`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
+indicating which
+signature will be served when a serving request does not specify one.
+For each signature, the outputs are provided by the corresponding
+tf.estimator.export.ExportOutputs, and the inputs are always the input
+receivers provided by
+the `serving_input_receiver_fn`.
+
+For training and evaluation, the `train_op` is stored in an extra
+collection,
+and loss, metrics, and predictions are included in a `SignatureDef` for the
+mode in question.
+
+Extra assets may be written into the `SavedModel` via the `assets_extra`
+argument. This should be a dict, where each key gives a destination path
+(including the filename) relative to the assets.extra directory. The
+corresponding value gives the full path of the source file to be copied.
+For example, the simple case of copying a single file without renaming it
+is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
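+
+A hedged sketch of the `input_receiver_fn_map` argument described above,
+assuming a trained `estimator`; the `feature_spec` is an assumption, and only
+the PREDICT receiver is shown:
+
+```python
+import tensorflow as tf
+
+feature_spec = {'x': tf.io.FixedLenFeature([1], dtype=tf.float32)}
+
+# Serving-time receiver that parses serialized tf.Example protos.
+serving_rcvr_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
+    feature_spec)
+
+# Modes omitted from the map are simply not exported; TRAIN/EVAL receivers
+# could be built with
+# tf.estimator.experimental.build_raw_supervised_input_receiver_fn.
+export_path = estimator.experimental_export_all_saved_models(
+    export_dir_base='/tmp/all_modes',
+    input_receiver_fn_map={
+        tf.estimator.ModeKeys.PREDICT: serving_rcvr_fn,
+    })
+```
+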

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of string, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the features. Prediction continues
+until `input_fn` raises an end-of-input exception
+(tf.errors.OutOfRangeError or `StopIteration`). See [Premade
+Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* tf.data.Dataset object -- Outputs of `Dataset` object must have
+same constraints as below.
+* features -- A tf.Tensor or a dictionary of string feature name to
+`Tensor`. features are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+* A tuple, in which case the first item is extracted as features.
+
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or until `input_fn` generates the `tf.errors.OutOfRange` error
+or `StopIteration` exception. `steps` works incrementally: if you call
+`train(steps=10)` twice, training occurs for 20 steps in total. If
+`OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you do not want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps`
+
+Number of total steps for which to train the model. If `None`,
+train forever or until `input_fn` generates the
+`tf.errors.OutOfRange` error or `StopIteration` exception. If set,
+`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the
+middle, training stops before `max_steps` steps. Two calls to
+`train(steps=100)` mean 200 training iterations. On the other hand, two
+calls to `train(max_steps=100)` mean that the second call will not do
+any iterations, since the first call did all 100 steps.
+
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError`
+
+If either `steps` or `max_steps` is `<= 0`.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/LinearRegressor.md b/site/en/api_docs/python/tf/compat/v1/estimator/LinearRegressor.md new file mode 100644 index 00000000000..bfb418fb2eb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/LinearRegressor.md @@ -0,0 +1,1236 @@ +description: An estimator for TensorFlow Linear regression problems. + +
+ + + + + + + + + + + + + +
+
+# tf.compat.v1.estimator.LinearRegressor
+
+
+
+
+
+
+
+
+
+
+An estimator for TensorFlow Linear regression problems.
+
+Inherits From: [`Estimator`](../../../../tf/compat/v1/estimator/Estimator.md)
+
+
+
+
+
+
+
+Train a linear regression model to predict label value given observation of
+feature values.
+
+#### Example:
+
+
+
+```python
+categorical_column_a = categorical_column_with_hash_bucket(...)
+categorical_column_b = categorical_column_with_hash_bucket(...)
+
+categorical_feature_a_x_categorical_feature_b = crossed_column(...)
+
+# Estimator using the default optimizer.
+estimator = tf.estimator.LinearRegressor(
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b])
+
+# Or estimator using the FTRL optimizer with regularization.
+estimator = tf.estimator.LinearRegressor(
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b],
+    optimizer=tf.keras.optimizers.Ftrl(
+      learning_rate=0.1,
+      l1_regularization_strength=0.001
+    ))
+
+# Or estimator using an optimizer with a learning rate decay.
+estimator = tf.estimator.LinearRegressor(
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b],
+    optimizer=lambda: tf.keras.optimizers.Ftrl(
+        learning_rate=tf.compat.v1.train.exponential_decay(
+            learning_rate=0.1,
+            global_step=tf.compat.v1.train.get_global_step(),
+            decay_steps=10000,
+            decay_rate=0.96)))
+
+# Or estimator with warm-starting from a previous checkpoint.
+estimator = tf.estimator.LinearRegressor(
+    feature_columns=[categorical_column_a,
+                     categorical_feature_a_x_categorical_feature_b],
+    warm_start_from="/path/to/checkpoint/dir")
+
+
+# Input builders
+def input_fn_train():
+  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
+  # index.
+  pass
+def input_fn_eval():
+  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
+  # index.
+  pass
+def input_fn_predict():
+  # Returns tf.data.Dataset of (x, None) tuple.
+  pass
+estimator.train(input_fn=input_fn_train)
+metrics = estimator.evaluate(input_fn=input_fn_eval)
+predictions = estimator.predict(input_fn=input_fn_predict)
+```
+
+Input of `train` and `evaluate` should have the following features,
+ otherwise there will be a KeyError:
+
+* if `weight_column` is not `None`, a feature with `key=weight_column` whose
+  value is a `Tensor`.
+* for each `column` in `feature_columns`:
+  - if `column` is a `SparseColumn`, a feature with `key=column.name`
+    whose `value` is a `SparseTensor`.
+  - if `column` is a `WeightedSparseColumn`, two features: the first with
+    `key` the id column name, the second with `key` the weight column name.
+    Both features' `value` must be a `SparseTensor`.
+  - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
+    whose `value` is a `Tensor`.
+
+Loss is calculated by using mean squared error.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`model_fn` + +Model function. Follows the signature: +* `features` -- This is the first item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same. +* `labels` -- This is the second item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same (for multi-head models). If +mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be +passed. If the `model_fn`'s signature does not accept `mode`, the +`model_fn` must still be able to handle `labels=None`. +* `mode` -- Optional. Specifies if this is training, evaluation or +prediction. See tf.estimator.ModeKeys. +`params` -- Optional `dict` of hyperparameters. Will receive what is +passed to Estimator in `params` parameter. This allows to configure +Estimators from hyper parameter tuning. +* `config` -- Optional `estimator.RunConfig` object. Will receive what +is passed to Estimator as its `config` parameter, or a default +value. Allows setting up things in your `model_fn` based on +configuration such as `num_ps_replicas`, or `model_dir`. +* Returns -- tf.estimator.EstimatorSpec +
+`model_dir`
+
+Directory in which to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model. If a `PathLike` object, the
+path will be resolved. If `None`, the model_dir in `config` will be used
+if set. If both are set, they must be the same. If both are `None`, a
+temporary directory will be used.
+
+`config` + +`estimator.RunConfig` configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory that contains the evaluation
+metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the input data for evaluation. See
+[Premade Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* A tf.data.Dataset object: Outputs of `Dataset` object must be a tuple
+`(features, labels)` with same constraints as below.
+* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a
+dictionary of string feature name to `Tensor` and `labels` is a
+`Tensor` or a dictionary of string label name to `Tensor`. Both
+`features` and `labels` are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
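+
+A minimal sketch of this train/evaluate cycle for the regressor; the numeric
+column, the toy data, and the step counts are illustrative assumptions:
+
+```python
+import tensorflow as tf
+
+age = tf.feature_column.numeric_column('age')
+estimator = tf.compat.v1.estimator.LinearRegressor(feature_columns=[age])
+
+def input_fn():
+  # Tiny in-memory (features, labels) dataset, repeated so train() can run
+  # for the requested number of steps.
+  features = {'age': [18.0, 25.0, 40.0, 63.0]}
+  labels = [12.0, 20.0, 35.0, 50.0]
+  return tf.data.Dataset.from_tensor_slices(
+      (features, labels)).repeat().batch(2)
+
+estimator.train(input_fn=input_fn, steps=50)
+metrics = estimator.evaluate(input_fn=input_fn, steps=5)
+
+# As noted above, canned regressors report loss, average_loss, label/mean
+# and prediction/mean.
+print(metrics['average_loss'], metrics['prediction/mean'])
+```
+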

experimental_export_all_saved_models

+
+View source
+
+
+
+Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
+
+For each mode passed in via the `input_receiver_fn_map`,
+this method builds a new graph by calling the `input_receiver_fn` to obtain
+feature and label `Tensor`s. Next, this method calls the `Estimator`'s
+`model_fn` in the passed mode to generate the model graph based on
+those features and labels, and restores the given checkpoint
+(or, lacking that, the most recent checkpoint) into the graph.
+Only one of the modes is used for saving variables to the `SavedModel`
+(order of preference: tf.estimator.ModeKeys.TRAIN,
+tf.estimator.ModeKeys.EVAL, then
+tf.estimator.ModeKeys.PREDICT), such that up to three
+`tf.MetaGraphDefs` are saved with a single set of variables in a single
+`SavedModel` directory.
+
+For the variables and `tf.MetaGraphDefs`, this method creates a timestamped
+export directory below `export_dir_base` and writes a `SavedModel` into it
+containing the `tf.MetaGraphDef` for the given mode and its associated
+signatures.
+
+For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
+for each element of the `export_outputs` dict returned from the `model_fn`,
+named using the same keys. One of these keys is always
+`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
+indicating which
+signature will be served when a serving request does not specify one.
+For each signature, the outputs are provided by the corresponding
+tf.estimator.export.ExportOutputs, and the inputs are always the input
+receivers provided by
+the `serving_input_receiver_fn`.
+
+For training and evaluation, the `train_op` is stored in an extra
+collection,
+and loss, metrics, and predictions are included in a `SignatureDef` for the
+mode in question.
+
+Extra assets may be written into the `SavedModel` via the `assets_extra`
+argument. This should be a dict, where each key gives a destination path
+(including the filename) relative to the assets.extra directory. The
+corresponding value gives the full path of the source file to be copied.
+For example, the simple case of copying a single file without renaming it
+is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of string, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +
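+
+A small sketch of guarding prediction on the presence of a checkpoint, since
+`predict` would otherwise run with freshly initialized variables;
+`input_fn_predict` refers to the hypothetical input function from the class
+example above:
+
+```python
+import tensorflow as tf
+
+ckpt = estimator.latest_checkpoint()
+if ckpt is None:
+  raise RuntimeError(
+      'No checkpoint found in %s; train the model first.' % estimator.model_dir)
+
+# The method is equivalent to querying the checkpoint utilities directly.
+assert ckpt == tf.train.latest_checkpoint(estimator.model_dir)
+
+predictions = estimator.predict(
+    input_fn=input_fn_predict, checkpoint_path=ckpt)
+```
+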

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the features. Prediction continues
+until `input_fn` raises an end-of-input exception
+(tf.errors.OutOfRangeError or `StopIteration`). See [Premade
+Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* tf.data.Dataset object -- Outputs of `Dataset` object must have
+same constraints as below.
+* features -- A tf.Tensor or a dictionary of string feature name to
+`Tensor`. features are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+* A tuple, in which case the first item is extracted as features.
+
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or until `input_fn` generates the `tf.errors.OutOfRange` error
+or `StopIteration` exception. `steps` works incrementally: if you call
+`train(steps=10)` twice, training occurs for 20 steps in total. If
+`OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you do not want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps`
+
+Number of total steps for which to train the model. If `None`,
+train forever or until `input_fn` generates the
+`tf.errors.OutOfRange` error or `StopIteration` exception. If set,
+`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the
+middle, training stops before `max_steps` steps. Two calls to
+`train(steps=100)` mean 200 training iterations. On the other hand, two
+calls to `train(max_steps=100)` mean that the second call will not do
+any iterations, since the first call did all 100 steps.
+
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError`
+
+If either `steps` or `max_steps` is `<= 0`.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/classifier_parse_example_spec.md b/site/en/api_docs/python/tf/compat/v1/estimator/classifier_parse_example_spec.md new file mode 100644 index 00000000000..a1251e13aed --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/classifier_parse_example_spec.md @@ -0,0 +1,223 @@ +description: Generates parsing spec for tf.parse_example to be used with classifiers. + +
+ + +
+ +# tf.compat.v1.estimator.classifier_parse_example_spec + + + + + + + + + +Generates parsing spec for tf.parse_example to be used with classifiers. + + + + + + + +If users keep data in tf.Example format, they need to call tf.parse_example +with a proper feature spec. There are two main things that this utility helps: + +* Users need to combine parsing spec of features with labels and weights + (if any) since they are all parsed from same tf.Example instance. This + utility combines these specs. +* It is difficult to map expected label by a classifier such as + `DNNClassifier` to corresponding tf.parse_example spec. This utility encodes + it by getting related information from users (key, dtype). + +Example output of parsing spec: + +```python +# Define features and transformations +feature_b = tf.feature_column.numeric_column(...) +feature_c_bucketized = tf.feature_column.bucketized_column( + tf.feature_column.numeric_column("feature_c"), ...) +feature_a_x_feature_c = tf.feature_column.crossed_column( + columns=["feature_a", feature_c_bucketized], ...) + +feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c] +parsing_spec = tf.estimator.classifier_parse_example_spec( + feature_columns, label_key='my-label', label_dtype=tf.string) + +# For the above example, classifier_parse_example_spec would return the dict: +assert parsing_spec == { + "feature_a": parsing_ops.VarLenFeature(tf.string), + "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32), + "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32) + "my-label" : parsing_ops.FixedLenFeature([1], dtype=tf.string) +} +``` + +Example usage with a classifier: + +```python +feature_columns = # define features via tf.feature_column +estimator = DNNClassifier( + n_classes=1000, + feature_columns=feature_columns, + weight_column='example-weight', + label_vocabulary=['photos', 'keep', ...], + hidden_units=[256, 64, 16]) +# This label configuration tells the classifier the following: +# * weights are retrieved with key 'example-weight' +# * label is string and can be one of the following ['photos', 'keep', ...] +# * integer id for label 'photos' is 0, 'keep' is 1, ... + + +# Input builders +def input_fn_train(): # Returns a tuple of features and labels. + features = tf.contrib.learn.read_keyed_batch_features( + file_pattern=train_files, + batch_size=batch_size, + # creates parsing configuration for tf.parse_example + features=tf.estimator.classifier_parse_example_spec( + feature_columns, + label_key='my-label', + label_dtype=tf.string, + weight_column='example-weight'), + reader=tf.RecordIOReader) + labels = features.pop('my-label') + return features, labels + +estimator.train(input_fn=input_fn_train) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`feature_columns` + +An iterable containing all feature columns. All items +should be instances of classes derived from `FeatureColumn`. +
+`label_key` + +A string identifying the label. It means tf.Example stores labels +with this key. +
+`label_dtype` + +A `tf.dtype` identifies the type of labels. By default it is +tf.int64. If user defines a `label_vocabulary`, this should be set as +tf.string. tf.float32 labels are only supported for binary +classification. +
+`label_default`
+
+Used as the label if `label_key` does not exist in a given
+tf.Example. An example usage: let's say `label_key` is 'clicked' and
+tf.Example contains clicked data only for positive examples in the
+following format `key:clicked, value:1`. This means that if there is no
+data with key 'clicked', it should count as a negative example by setting
+`label_default=0`. The type of this value should be compatible with
+`label_dtype`.
+
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. If it is a string, it is +used as a key to fetch weight tensor from the `features`. If it is a +`NumericColumn`, raw tensor is fetched by key `weight_column.key`, then +weight_column.normalizer_fn is applied on it to get weight tensor. +
+ + + + + + + + + + + +
+A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature` +value. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If label is used in `feature_columns`. +
+`ValueError` + +If weight_column is used in `feature_columns`. +
+`ValueError` + +If any of the given `feature_columns` is not a `_FeatureColumn` +instance. +
+`ValueError` + +If `weight_column` is not a `NumericColumn` instance. +
+`ValueError` + +if label_key is None. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/experimental.md b/site/en/api_docs/python/tf/compat/v1/estimator/experimental.md new file mode 100644 index 00000000000..35116699fb6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/experimental.md @@ -0,0 +1,51 @@ +description: Public API for tf.estimator.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.estimator.experimental + + + + + + + + + +Public API for tf.estimator.experimental namespace. + + + +## Classes + +[`class InMemoryEvaluatorHook`](../../../../tf/estimator/experimental/InMemoryEvaluatorHook.md): Hook to run evaluation in training without a checkpoint. + +[`class KMeans`](../../../../tf/compat/v1/estimator/experimental/KMeans.md): An Estimator for K-Means clustering. + +[`class LinearSDCA`](../../../../tf/estimator/experimental/LinearSDCA.md): Stochastic Dual Coordinate Ascent helper for linear estimators. + +## Functions + +[`build_raw_supervised_input_receiver_fn(...)`](../../../../tf/estimator/experimental/build_raw_supervised_input_receiver_fn.md): Build a supervised_input_receiver_fn for raw features and labels. + +[`call_logit_fn(...)`](../../../../tf/estimator/experimental/call_logit_fn.md): Calls logit_fn (experimental). + +[`dnn_logit_fn_builder(...)`](../../../../tf/compat/v1/estimator/experimental/dnn_logit_fn_builder.md): Function builder for a dnn logit_fn. + +[`linear_logit_fn_builder(...)`](../../../../tf/compat/v1/estimator/experimental/linear_logit_fn_builder.md): Function builder for a linear logit_fn. + +[`make_early_stopping_hook(...)`](../../../../tf/estimator/experimental/make_early_stopping_hook.md): Creates early-stopping hook. + +[`make_stop_at_checkpoint_step_hook(...)`](../../../../tf/estimator/experimental/make_stop_at_checkpoint_step_hook.md): Creates a proper StopAtCheckpointStepHook based on chief status. + +[`stop_if_higher_hook(...)`](../../../../tf/estimator/experimental/stop_if_higher_hook.md): Creates hook to stop if the given metric is higher than the threshold. + +[`stop_if_lower_hook(...)`](../../../../tf/estimator/experimental/stop_if_lower_hook.md): Creates hook to stop if the given metric is lower than the threshold. + +[`stop_if_no_decrease_hook(...)`](../../../../tf/estimator/experimental/stop_if_no_decrease_hook.md): Creates hook to stop if metric does not decrease within given max steps. + +[`stop_if_no_increase_hook(...)`](../../../../tf/estimator/experimental/stop_if_no_increase_hook.md): Creates hook to stop if metric does not increase within given max steps. + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/experimental/KMeans.md b/site/en/api_docs/python/tf/compat/v1/estimator/experimental/KMeans.md new file mode 100644 index 00000000000..5006140c4ad --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/experimental/KMeans.md @@ -0,0 +1,1393 @@ +description: An Estimator for K-Means clustering. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.experimental.KMeans + + + + + + + + + +An Estimator for K-Means clustering. + +Inherits From: [`Estimator`](../../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + + +#### Example: + + +``` +import numpy as np +import tensorflow as tf + +num_points = 100 +dimensions = 2 +points = np.random.uniform(0, 1000, [num_points, dimensions]) + +def input_fn(): +  return tf.compat.v1.train.limit_epochs( +      tf.convert_to_tensor(points, dtype=tf.float32), num_epochs=1) + +num_clusters = 5 +kmeans = tf.compat.v1.estimator.experimental.KMeans( +    num_clusters=num_clusters, use_mini_batch=False) + +# train +num_iterations = 10 +previous_centers = None +for _ in range(num_iterations): +  kmeans.train(input_fn) +  cluster_centers = kmeans.cluster_centers() +  if previous_centers is not None: +    print('delta:', cluster_centers - previous_centers) +  previous_centers = cluster_centers +  print('score:', kmeans.score(input_fn)) +print('cluster centers:', cluster_centers) + +# map the input points to their clusters +cluster_indices = list(kmeans.predict_cluster_index(input_fn)) +for i, point in enumerate(points): +  cluster_index = cluster_indices[i] +  center = cluster_centers[cluster_index] +  print('point:', point, 'is in cluster', cluster_index, 'centered at', center) +``` + +The `SavedModel` saved by the `export_saved_model` method does not include the +cluster centers. However, the cluster centers may be retrieved from the +latest checkpoint saved during training. Specifically, +``` +kmeans.cluster_centers() +``` +is equivalent to +``` +tf.train.load_variable( +    kmeans.model_dir, KMeansClustering.CLUSTER_CENTERS_VAR_NAME) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_clusters` + +An integer tensor specifying the number of clusters. This +argument is ignored if `initial_clusters` is a tensor or numpy array. +
+`model_dir` + +The directory to save the model results and log files. +
+`initial_clusters` + +Specifies how the initial cluster centers are chosen. +One of the following: * a tensor or numpy array with the initial cluster +centers. * a callable `f(inputs, k)` that selects and returns up to +`k` centers from an input batch. `f` is free to return any number of +centers from `0` to `k`. It will be invoked on successive input +batches as necessary until all `num_clusters` centers are chosen. +* `KMeansClustering.RANDOM_INIT`: Choose centers randomly from an input +batch. If the batch size is less than `num_clusters` then the entire +batch is chosen to be initial cluster centers and the remaining +centers are chosen from successive input batches. +* `KMeansClustering.KMEANS_PLUS_PLUS_INIT`: Use kmeans++ to choose +centers from the first input batch. If the batch size is less than +`num_clusters`, a TensorFlow runtime error occurs. +
+`distance_metric` + +The distance metric used for clustering. One of: +* `KMeansClustering.SQUARED_EUCLIDEAN_DISTANCE`: Euclidean distance +between vectors `u` and `v` is defined as \\(||u - v||_2\\) which is +the square root of the sum of the absolute squares of the elements' +difference. +* `KMeansClustering.COSINE_DISTANCE`: Cosine distance between vectors +`u` and `v` is defined as \\(1 - (u . v) / (||u||_2 ||v||_2)\\). +
+`seed` + +Python integer. Seed for PRNG used to initialize centers. +
+`use_mini_batch` + +A boolean specifying whether to use the mini-batch k-means +algorithm. See explanation above. +
+`mini_batch_steps_per_iteration` + +The number of steps after which the +updated cluster centers are synced back to a master copy. Used only if +`use_mini_batch=True`. See explanation above. +
+`kmeans_plus_plus_num_retries` + +For each point that is sampled during +kmeans++ initialization, this parameter specifies the number of +additional points to draw from the current distribution before selecting +the best. If a negative value is specified, a heuristic is used to +sample `O(log(num_to_sample))` additional points. Used only if +`initial_clusters=KMeansClustering.KMEANS_PLUS_PLUS_INIT`. +
+`relative_tolerance` + +A relative tolerance of change in the loss between +iterations. Stops learning if the loss changes less than this amount. +This may not work correctly if `use_mini_batch=True`. +
+`config` + +See tf.estimator.Estimator. +
+`feature_columns` + +An optional iterable containing all the feature columns +used by the model. All items in the set should be feature column +instances that can be passed to `tf.feature_column.input_layer`. If this +is `None`, all features will be used. +
+ + + + + + + + + + + + +
+`ValueError` + +An invalid argument was passed to `initial_clusters` or +`distance_metric`. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

cluster_centers

+ +View source + + + +Returns the cluster centers. + + +
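+For illustration, a minimal sketch of typical usage, assuming the trained
+`kmeans` estimator from the class example above; the commented shape follows
+from that example's `num_clusters` and `dimensions`:
+
+```python
+# Fetch the current cluster centers from the latest checkpoint.
+centers = kmeans.cluster_centers()
+print(centers.shape)  # (num_clusters, dimensions), e.g. (5, 2)
+```
+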

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of directory contains evaluation metrics. +
+ + + +
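+For example, a minimal sketch; the evaluation name `'holdout'` is an
+arbitrary illustration:
+
+```python
+# Directory where metrics from `evaluate(..., name='holdout')` are written.
+print(kmeans.eval_dir(name='holdout'))
+```
+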

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
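+A minimal sketch of an evaluation call, assuming the `kmeans` estimator and
+`input_fn` from the class example above; the evaluation name is arbitrary and
+the exact metric keys depend on the `model_fn`:
+
+```python
+# Run a single evaluation pass and inspect the returned metrics.
+metrics = kmeans.evaluate(input_fn=input_fn, steps=1, name='holdout')
+print(sorted(metrics))          # includes 'global_step' plus model metrics
+print(metrics['global_step'])
+```
+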

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, this method creates a timestamped +export directory below `export_dir_base` and writes a `SavedModel` into it +containing the `tf.MetaGraphDef` for the given mode and its associated +signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
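+A minimal sketch of exporting only a predict-mode graph with this method,
+assuming the `kmeans` estimator and the `dimensions` value from the class
+example above; the feature key `'points'`, the placeholder shape, and the
+export path are illustrative assumptions:
+
+```python
+# Serve raw float features; only ModeKeys.PREDICT is exported here.
+serving_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
+    {'points': tf.compat.v1.placeholder(tf.float32, [None, dimensions])})
+export_dir = kmeans.experimental_export_all_saved_models(
+    export_dir_base='/tmp/kmeans_export',
+    input_receiver_fn_map={tf.estimator.ModeKeys.PREDICT: serving_fn})
+```
+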

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will +be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
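+A minimal sketch of an export call, assuming the `kmeans` estimator and the
+`dimensions` value from the class example above; the feature key and export
+path are illustrative assumptions:
+
+```python
+def serving_input_receiver_fn():
+  # Accept raw float features at serving time.
+  features = {'points': tf.compat.v1.placeholder(tf.float32, [None, dimensions])}
+  return tf.estimator.export.ServingInputReceiver(features, features)
+
+export_path = kmeans.export_saved_model('/tmp/kmeans_export',
+                                        serving_input_receiver_fn)
+```
+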

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
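+For example, a minimal sketch, assuming the estimator has already written at
+least one checkpoint:
+
+```python
+# Inspect which variables are available in the latest checkpoint.
+for name in kmeans.get_variable_names():
+  print(name)
+```
+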

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of string, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
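+A minimal sketch of reading the cluster-centers variable by name;
+`'clusters'` is the documented value of
+`KMeansClustering.CLUSTER_CENTERS_VAR_NAME` (see Class Variables below):
+
+```python
+# Fetch the raw cluster-centers variable from the latest checkpoint.
+centers = kmeans.get_variable_value('clusters')
+print(centers.shape)
+```
+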

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
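+A minimal sketch of consuming the prediction generator, assuming the `kmeans`
+estimator and `input_fn` from the class example above and that the
+predictions dict is keyed by the `CLUSTER_INDEX` class variable documented
+below:
+
+```python
+KMeans = tf.compat.v1.estimator.experimental.KMeans
+# Each yielded element is a dict of prediction values for one input point.
+for pred in kmeans.predict(input_fn=input_fn,
+                           predict_keys=[KMeans.CLUSTER_INDEX]):
+  print(pred[KMeans.CLUSTER_INDEX])
+```
+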

predict_cluster_index

+ +View source + + + +Finds the index of the closest cluster center to each input point. + + + + + + + + + + + +
Args
+`input_fn` + +Input points. See tf.estimator.Estimator.predict. +
+ + + +#### Yields: + +The index of the closest cluster center for each input point. + + +

score

+ +View source + + + +Returns the sum of squared distances to nearest clusters. + +Note that this function is different from the corresponding one in sklearn +which returns the negative sum. + + + + + + + + + + +
Args
+`input_fn` + +Input points. See tf.estimator.Estimator.evaluate. Only one +batch is retrieved. +
+ + + + + + + + + + + +
Returns
+The sum of the squared distance from each point in the first batch of +inputs to its nearest cluster center. +
+ + + +
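+A minimal sketch, assuming the `kmeans` estimator and `input_fn` from the
+class example above; note that only the first batch returned by `input_fn`
+is scored:
+
+```python
+# Lower is better: sum of squared distances to the nearest cluster centers.
+print('score:', kmeans.score(input_fn))
+```
+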

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train +forever or train until `input_fn` generates the `tf.errors.OutOfRange` +error or `StopIteration` exception. `steps` works incrementally. If you +call `train(steps=10)` twice, training occurs for a total of 20 steps. +If `OutOfRange` or `StopIteration` occurs in the middle, training stops +before 20 steps. If you don't want incremental behavior, please +set `max_steps` instead. If `steps` is set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + +
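+A minimal sketch of incremental training, assuming the `kmeans` estimator and
+`input_fn` from the class example above:
+
+```python
+# Two incremental calls train for 20 steps in total (see `steps` above);
+# training stops earlier if `input_fn` raises an end-of-input exception.
+kmeans.train(input_fn=input_fn, steps=10)
+kmeans.train(input_fn=input_fn, steps=10)
+```
+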

transform

+ +View source + + + +Transforms each input point to its distances to all cluster centers. + +Note that if `distance_metric=KMeansClustering.SQUARED_EUCLIDEAN_DISTANCE`, +this +function returns the squared Euclidean distance while the corresponding +sklearn function returns the Euclidean distance. + + + + + + + + + + +
Args
+`input_fn` + +Input points. See tf.estimator.Estimator.predict. +
+ + + +#### Yields: + +The distances from each input point to each cluster center. + + + + +## Class Variables + +* `ALL_DISTANCES = 'all_distances'` +* `CLUSTER_CENTERS_VAR_NAME = 'clusters'` +* `CLUSTER_INDEX = 'cluster_index'` +* `COSINE_DISTANCE = 'cosine'` +* `KMEANS_PLUS_PLUS_INIT = 'kmeans_plus_plus'` +* `RANDOM_INIT = 'random'` +* `SCORE = 'score'` +* `SQUARED_EUCLIDEAN_DISTANCE = 'squared_euclidean'` diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/experimental/dnn_logit_fn_builder.md b/site/en/api_docs/python/tf/compat/v1/estimator/experimental/dnn_logit_fn_builder.md new file mode 100644 index 00000000000..e3d4ec0c094 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/experimental/dnn_logit_fn_builder.md @@ -0,0 +1,126 @@ +description: Function builder for a dnn logit_fn. + +
+ + +
+ +# tf.compat.v1.estimator.experimental.dnn_logit_fn_builder + + + + + + + + + +Function builder for a dnn logit_fn. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +An int indicating the dimension of the logit layer. In the MultiHead +case, this should be the sum of all component Heads' logit dimensions. +
+`hidden_units` + +Iterable of integer number of hidden units per layer. +
+`feature_columns` + +Iterable of `feature_column._FeatureColumn` model inputs. +
+`activation_fn` + +Activation function applied to each layer. +
+`dropout` + +When not `None`, the probability we will drop out a given +coordinate. +
+`input_layer_partitioner` + +Partitioner for input layer. +
+`batch_norm` + +Whether to use batch normalization after each hidden layer. +
+ + + + + + + + + + + +
+A logit_fn (see below). +
+ + + + + + + + + + + + +
+`ValueError` + +If units is not an int. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/experimental/linear_logit_fn_builder.md b/site/en/api_docs/python/tf/compat/v1/estimator/experimental/linear_logit_fn_builder.md new file mode 100644 index 00000000000..3338b75e4ed --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/experimental/linear_logit_fn_builder.md @@ -0,0 +1,80 @@ +description: Function builder for a linear logit_fn. + +
+ + +
+ +# tf.compat.v1.estimator.experimental.linear_logit_fn_builder + + + + + + + + + +Function builder for a linear logit_fn. + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +An int indicating the dimension of the logit layer. +
+`feature_columns` + +An iterable containing all the feature columns used by the +model. +
+`sparse_combiner` + +A string specifying how to reduce if a categorical column +is multivalent. One of "mean", "sqrtn", and "sum". +
+ + + + + + + + + + + +
+A logit_fn (see below). +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/export.md b/site/en/api_docs/python/tf/compat/v1/estimator/export.md new file mode 100644 index 00000000000..d7c1ba82ec2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/export.md @@ -0,0 +1,42 @@ +description: All public utility methods for exporting Estimator to SavedModel. + +
+ + +
+ +# Module: tf.compat.v1.estimator.export + + + + + + + + + +All public utility methods for exporting Estimator to SavedModel. + + +This file includes functions and constants from core (model_utils) and export.py + +## Classes + +[`class ClassificationOutput`](../../../../tf/estimator/export/ClassificationOutput.md): Represents the output of a classification head. + +[`class ExportOutput`](../../../../tf/estimator/export/ExportOutput.md): Represents an output of a model that can be served. + +[`class PredictOutput`](../../../../tf/estimator/export/PredictOutput.md): Represents the output of a generic prediction head. + +[`class RegressionOutput`](../../../../tf/estimator/export/RegressionOutput.md): Represents the output of a regression head. + +[`class ServingInputReceiver`](../../../../tf/estimator/export/ServingInputReceiver.md): A return type for a serving_input_receiver_fn. + +[`class TensorServingInputReceiver`](../../../../tf/estimator/export/TensorServingInputReceiver.md): A return type for a serving_input_receiver_fn. + +## Functions + +[`build_parsing_serving_input_receiver_fn(...)`](../../../../tf/estimator/export/build_parsing_serving_input_receiver_fn.md): Build a serving_input_receiver_fn expecting fed tf.Examples. + +[`build_raw_serving_input_receiver_fn(...)`](../../../../tf/estimator/export/build_raw_serving_input_receiver_fn.md): Build a serving_input_receiver_fn expecting feature Tensors. + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/inputs.md b/site/en/api_docs/python/tf/compat/v1/estimator/inputs.md new file mode 100644 index 00000000000..d950fdb60be --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/inputs.md @@ -0,0 +1,27 @@ +description: Utility methods to create simple input_fns. + +
+ + +
+ +# Module: tf.compat.v1.estimator.inputs + + + + + + + + + +Utility methods to create simple input_fns. + + + +## Functions + +[`numpy_input_fn(...)`](../../../../tf/compat/v1/estimator/inputs/numpy_input_fn.md): Returns input function that would feed dict of numpy arrays into the model. + +[`pandas_input_fn(...)`](../../../../tf/compat/v1/estimator/inputs/pandas_input_fn.md): Returns input function that would feed Pandas DataFrame into the model. + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/inputs/numpy_input_fn.md b/site/en/api_docs/python/tf/compat/v1/estimator/inputs/numpy_input_fn.md new file mode 100644 index 00000000000..88fa87bd527 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/inputs/numpy_input_fn.md @@ -0,0 +1,176 @@ +description: Returns input function that would feed dict of numpy arrays into the model. + +
+ + +
+ +# tf.compat.v1.estimator.inputs.numpy_input_fn + + + + + + + + + +Returns input function that would feed dict of numpy arrays into the model. + + + + + + + +This returns a function outputting `features` and `targets` based on the dict +of numpy arrays. The dict `features` has the same keys as the `x`. The dict +`targets` has the same keys as the `y` if `y` is a dict. + +#### Example: + + + +```python +age = np.arange(4) * 1.0 +height = np.arange(32, 36) +x = {'age': age, 'height': height} +y = np.arange(-32, -28) + +with tf.Session() as session: + input_fn = numpy_io.numpy_input_fn( + x, y, batch_size=2, shuffle=False, num_epochs=1) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +numpy array object or dict of numpy array objects. If an array, the array +will be treated as a single feature. +
+`y` + +numpy array object or dict of numpy array object. `None` if absent. +
+`batch_size` + +Integer, size of batches to return. +
+`num_epochs` + +Integer, number of epochs to iterate over data. If `None` will +run forever. +
+`shuffle` + +Boolean, if True shuffles the queue. Avoid shuffle at prediction +time. +
+`queue_capacity` + +Integer, size of queue to accumulate. +
+`num_threads` + +Integer, number of threads used for reading and enqueueing. In +order to have predicted and repeatable order of reading and enqueueing, +such as in prediction and evaluation mode, `num_threads` should be 1. +
+ + + + + + + + + + + +
+Function, that has signature of ()->(dict of `features`, `targets`) +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If the shape of `y` does not match the shape of the values in `x` +(i.e., all values in `x` must have the same shape). +
+`ValueError` + +if duplicate keys are in both `x` and `y` when `y` is a dict. +
+`ValueError` + +if x or y is an empty dict. +
+`TypeError` + +If `x` is not a dict or an array. +
+`ValueError` + +If `shuffle` is not provided or is not a bool. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/inputs/pandas_input_fn.md b/site/en/api_docs/python/tf/compat/v1/estimator/inputs/pandas_input_fn.md new file mode 100644 index 00000000000..1dd9fbe682a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/inputs/pandas_input_fn.md @@ -0,0 +1,145 @@ +description: Returns input function that would feed Pandas DataFrame into the model. + +
+ + +
+ +# tf.compat.v1.estimator.inputs.pandas_input_fn + + + + + + + + + +Returns input function that would feed Pandas DataFrame into the model. + + + + + + + +Note: `y`'s index must match `x`'s index. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +pandas `DataFrame` object. +
+`y` + +pandas `Series` object or `DataFrame`. `None` if absent. +
+`batch_size` + +int, size of batches to return. +
+`num_epochs` + +int, number of epochs to iterate over data. If not `None`, read +attempts that would exceed this value will raise `OutOfRangeError`. +
+`shuffle` + +bool, whether to read the records in random order. +
+`queue_capacity` + +int, size of the read queue. If `None`, it will be set +roughly to the size of `x`. +
+`num_threads` + +Integer, number of threads used for reading and enqueueing. In +order to have predicted and repeatable order of reading and enqueueing, +such as in prediction and evaluation mode, `num_threads` should be 1. +
+`target_column` + +str, name to give the target column `y`. This parameter is +not used when `y` is a `DataFrame`. +
+ + + + + + + + + + + +
+Function, that has signature of ()->(dict of `features`, `target`) +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `x` already contains a column with the same name as `y`, or +if the indexes of `x` and `y` don't match. +
+`ValueError` + +If `shuffle` is not provided or is not a bool. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/regressor_parse_example_spec.md b/site/en/api_docs/python/tf/compat/v1/estimator/regressor_parse_example_spec.md new file mode 100644 index 00000000000..3025f2d5aad --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/regressor_parse_example_spec.md @@ -0,0 +1,224 @@ +description: Generates parsing spec for tf.parse_example to be used with regressors. + +
+ + +
+ +# tf.compat.v1.estimator.regressor_parse_example_spec + + + + + + + + + +Generates parsing spec for tf.parse_example to be used with regressors. + + + + + + + +If users keep data in tf.Example format, they need to call tf.parse_example +with a proper feature spec. This utility helps with two main things: + +* Users need to combine the parsing spec of features with labels and weights +  (if any) since they are all parsed from the same tf.Example instance. This +  utility combines these specs. +* It is difficult to map the label expected by a regressor such as +  `DNNRegressor` to the corresponding tf.parse_example spec. This utility +  encodes it by getting the related information from the user (key, dtype). + +Example output of parsing spec: + +```python +# Define features and transformations +feature_b = tf.feature_column.numeric_column(...) +feature_c_bucketized = tf.feature_column.bucketized_column( +    tf.feature_column.numeric_column("feature_c"), ...) +feature_a_x_feature_c = tf.feature_column.crossed_column( +    columns=["feature_a", feature_c_bucketized], ...) + +feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c] +parsing_spec = tf.estimator.regressor_parse_example_spec( +    feature_columns, label_key='my-label') + +# For the above example, regressor_parse_example_spec would return the dict: +assert parsing_spec == { +    "feature_a": parsing_ops.VarLenFeature(tf.string), +    "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32), +    "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32), +    "my-label": parsing_ops.FixedLenFeature([1], dtype=tf.float32) +} +``` + +Example usage with a regressor: + +```python +feature_columns = # define features via tf.feature_column +estimator = DNNRegressor( +    hidden_units=[256, 64, 16], +    feature_columns=feature_columns, +    weight_column='example-weight', +    label_dimension=3) +# This label configuration tells the regressor the following: +# * weights are retrieved with key 'example-weight' +# * label is a 3-dimensional tensor with float32 dtype. + + +# Input builders +def input_fn_train():  # Returns a tuple of features and labels. +  features = tf.contrib.learn.read_keyed_batch_features( +      file_pattern=train_files, +      batch_size=batch_size, +      # creates parsing configuration for tf.parse_example +      features=tf.estimator.regressor_parse_example_spec( +          feature_columns, +          label_key='my-label', +          label_dimension=3, +          weight_column='example-weight'), +      reader=tf.RecordIOReader) +  labels = features.pop('my-label') +  return features, labels + +estimator.train(input_fn=input_fn_train) +``` + + + + + + + + + + + + + + + + + + + + + + + + +
+`feature_columns` + +An iterable containing all feature columns. All items +should be instances of classes derived from `_FeatureColumn`. +
+`label_key` + +A string identifying the label. It means tf.Example stores labels +with this key. +
+`label_dtype` + +A `tf.dtype` identifies the type of labels. By default it is +tf.float32. +
+`label_default` + +used as label if label_key does not exist in given +tf.Example. By default default_value is none, which means +`tf.parse_example` will error out if there is any missing label. +
+`label_dimension` + +Number of regression targets per example. This is the size +of the last dimension of the labels and logits `Tensor` objects +(typically, these have shape `[batch_size, label_dimension]`). +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining the feature column that represents +weights. It is used to down-weight or boost examples during training. It +will be multiplied by the loss of the example. If it is a string, it is +used as a key to fetch the weight tensor from `features`. If it is a +`NumericColumn`, the raw tensor is fetched by key `weight_column.key`, then +`weight_column.normalizer_fn` is applied to it to get the weight tensor. +
+ + + + + + + + + + + +
+A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature` +value. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If label is used in `feature_columns`. +
+`ValueError` + +If weight_column is used in `feature_columns`. +
+`ValueError` + +If any of the given `feature_columns` is not a `_FeatureColumn` +instance. +
+`ValueError` + +If `weight_column` is not a `NumericColumn` instance. +
+`ValueError` + +If `label_key` is `None`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/tpu.md b/site/en/api_docs/python/tf/compat/v1/estimator/tpu.md new file mode 100644 index 00000000000..341a8e0a47e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/tpu.md @@ -0,0 +1,37 @@ +description: Public API for tf.estimator.tpu namespace. + +
+ + +
+ +# Module: tf.compat.v1.estimator.tpu + + + + + + + + + +Public API for tf.estimator.tpu namespace. + + + +## Modules + +[`experimental`](../../../../tf/compat/v1/estimator/tpu/experimental.md) module: Public API for tf.estimator.tpu.experimental namespace. + +## Classes + +[`class InputPipelineConfig`](../../../../tf/compat/v1/estimator/tpu/InputPipelineConfig.md): Please see the definition of these values in TPUConfig. + +[`class RunConfig`](../../../../tf/compat/v1/estimator/tpu/RunConfig.md): RunConfig with TPU support. + +[`class TPUConfig`](../../../../tf/compat/v1/estimator/tpu/TPUConfig.md): TPU related configuration required by `TPUEstimator`. + +[`class TPUEstimator`](../../../../tf/compat/v1/estimator/tpu/TPUEstimator.md): Estimator with TPU support. + +[`class TPUEstimatorSpec`](../../../../tf/compat/v1/estimator/tpu/TPUEstimatorSpec.md): Ops and objects returned from a `model_fn` and passed to `TPUEstimator`. + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/tpu/InputPipelineConfig.md b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/InputPipelineConfig.md new file mode 100644 index 00000000000..6de8af40f97 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/InputPipelineConfig.md @@ -0,0 +1,39 @@ +description: Please see the definition of these values in TPUConfig. + +
+ + + + + + + +
+ +# tf.compat.v1.estimator.tpu.InputPipelineConfig + + + + + + + + + +Please see the definition of these values in TPUConfig. + + + + +## Class Variables + +* `BROADCAST = 4` +* `PER_HOST_V1 = 2` +* `PER_HOST_V2 = 3` +* `PER_SHARD_V1 = 1` +* `SLICED = 5` diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/tpu/RunConfig.md b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/RunConfig.md new file mode 100644 index 00000000000..239b6e17ac6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/RunConfig.md @@ -0,0 +1,426 @@ +description: RunConfig with TPU support. + +
+ + + + +
+ +# tf.compat.v1.estimator.tpu.RunConfig + + + + + + + + + +RunConfig with TPU support. + +Inherits From: [`RunConfig`](../../../../../tf/estimator/RunConfig.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tpu_config` + +the TPUConfig that specifies TPU-specific configuration. +
+`evaluation_master` + +a string. The address of the master to use for eval. +Defaults to master if not set. +
+`master` + +a string. The address of the master to use for training. +
+`cluster` + +a ClusterResolver +
+`**kwargs` + +keyword config parameters. +
+ + + + + + + + + + + + +
+`ValueError` + +if cluster is not None and the provided session_config has a +cluster_def already. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cluster` + + +
+`cluster_spec` + + +
+`device_fn` + +Returns the device_fn. + +If device_fn is not `None`, it overrides the default +device function used in `Estimator`. +Otherwise the default one is used. +
+`eval_distribute` + +Optional tf.distribute.Strategy for evaluation. +
+`evaluation_master` + + +
+`experimental_max_worker_delay_secs` + + +
+`global_id_in_cluster` + +The global id in the training cluster. + +All global ids in the training cluster are assigned from an increasing +sequence of consecutive integers. The first id is 0. + +Note: Task id (the property field `task_id`) is tracking the index of the +node among all nodes with the SAME task type. For example, given the cluster +definition as follows: + +``` +cluster = {'chief': ['host0:2222'], +'ps': ['host1:2222', 'host2:2222'], +'worker': ['host3:2222', 'host4:2222', 'host5:2222']} +``` + +Nodes with task type `worker` can have id 0, 1, 2. Nodes with task type +`ps` can have id, 0, 1. So, `task_id` is not unique, but the pair +(`task_type`, `task_id`) can uniquely determine a node in the cluster. + +Global id, i.e., this field, is tracking the index of the node among ALL +nodes in the cluster. It is uniquely assigned. For example, for the cluster +spec given above, the global ids are assigned as: +``` +task_type | task_id | global_id +-------------------------------- +chief | 0 | 0 +worker | 0 | 1 +worker | 1 | 2 +worker | 2 | 3 +ps | 0 | 4 +ps | 1 | 5 +``` +
+`is_chief` + + +
+`keep_checkpoint_every_n_hours` + + +
+`keep_checkpoint_max` + + +
+`log_step_count_steps` + + +
+`master` + + +
+`model_dir` + + +
+`num_ps_replicas` + + +
+`num_worker_replicas` + + +
+`protocol` + +Returns the optional protocol value. +
+`save_checkpoints_secs` + + +
+`save_checkpoints_steps` + + +
+`save_summary_steps` + + +
+`service` + +Returns the platform defined (in TF_CONFIG) service dict. +
+`session_config` + + +
+`session_creation_timeout_secs` + + +
+`task_id` + + +
+`task_type` + + +
+`tf_random_seed` + + +
+`tpu_config` + + +
+`train_distribute` + +Optional tf.distribute.Strategy for training. +
+ + + +## Methods + +

replace

+ +View source + + + +Returns a new instance of `RunConfig` replacing specified properties. + +Only the properties in the following list are allowed to be replaced: + + - `model_dir`, + - `tf_random_seed`, + - `save_summary_steps`, + - `save_checkpoints_steps`, + - `save_checkpoints_secs`, + - `session_config`, + - `keep_checkpoint_max`, + - `keep_checkpoint_every_n_hours`, + - `log_step_count_steps`, + - `train_distribute`, + - `device_fn`, + - `protocol`. + - `eval_distribute`, + - `experimental_distribute`, + - `experimental_max_worker_delay_secs`, + +In addition, either `save_checkpoints_steps` or `save_checkpoints_secs` +can be set (should not be both). + + + + + + + + + + +
Args
+`**kwargs` + +keyword named properties with new values. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If any property name in `kwargs` does not exist or is not +allowed to be replaced, or both `save_checkpoints_steps` and +`save_checkpoints_secs` are set. +
+ + + + + + + + + + + +
Returns
+a new instance of `RunConfig`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/tpu/TPUConfig.md b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/TPUConfig.md new file mode 100644 index 00000000000..e539b78eb2e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/TPUConfig.md @@ -0,0 +1,277 @@ +description: TPU related configuration required by TPUEstimator. + +
+ + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.tpu.TPUConfig + + + + + + + + + +TPU related configuration required by `TPUEstimator`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`iterations_per_loop` + +This is the number of train steps running in the TPU system +before returning to the CPU host for each `Session.run`. This means the global +step is increased `iterations_per_loop` times in one `Session.run`. It is +recommended to be set as the number of global steps for the next checkpoint. Note +that this value is not used in evaluation; instead, the total eval `steps` are run +on the TPU in a single `Session.run`. +[Experimental]: `iterations_per_loop` can be specified as a time interval. +To specify N seconds in one `Session.run`, one can specify it as `Ns` +and substitute N with the desired number of seconds. +Alternatively, the unit of time can also be specified in minutes or +hours, e.g. `3600s` or `60m` or `1h`. +
+`num_shards` + +(Deprecated, ignored by TPUEstimator). The number of model +replicas in the system. For non-model-parallelism case, this number equals +the total number of TPU cores. For model-parallelism, the total number of +TPU cores equals num_cores_per_replica * num_shards. +
+`num_cores_per_replica` + +Defaults to `None`, which disables model parallelism. +An integer which describes the number of TPU cores per model replica. This +is required by model-parallelism which enables partitioning the model to +multiple cores. Currently num_cores_per_replica must be 1, 2, 4, or 8. +
+`per_host_input_for_training` + +If `True`, for `PER_HOST_V1`, the `input_fn` is +invoked once on each host, and the number of hosts must be less than or +equal to the number of replicas. For PER_HOST_V2, the `input_fn` is +invoked once for each host (if the number of hosts is less than the number +of replicas) or replica (if the number of replicas is less than the number +of hosts). With the per-core input pipeline configuration, it is invoked +once for each core. With a global batch size `train_batch_size` in the +`TPUEstimator` constructor, the batch size for each shard is +`train_batch_size` // #hosts in the `True` or `PER_HOST_V1` mode. In +`PER_HOST_V2` mode, it is `train_batch_size` // #cores. In `BROADCAST` +mode, `input_fn` is only invoked once on host 0 and the tensors are +broadcast to all other replicas. The batch size equals +`train_batch_size`. With the per-core input pipeline configuration, the +shard batch size is also `train_batch_size` // #cores. +Note: per_host_input_for_training==PER_SHARD_V1 only supports mode.TRAIN. +
+`tpu_job_name` + +The name of the TPU job. Typically, this name is auto-inferred +within TPUEstimator, however when using ClusterSpec propagation in more +esoteric cluster configurations, you may need to specify the job name as a +string. +
+`initial_infeed_sleep_secs` + +The number of seconds the infeed thread should +wait before enqueueing the first batch. This helps avoid timeouts for +models that require a long compilation time. +
+`input_partition_dims` + +A nested list to describe the partition dims for all +the tensors from input_fn(). The structure of input_partition_dims must +match the structure of `features` and `labels` from input_fn(). The total +number of partitions must match +`num_cores_per_replica`. For example, if input_fn() returns two tensors, +images with shape [N, H, W, C] and labels with shape [N], then +input_partition_dims = [[1, 2, 2, 1], None] will split the images into 4 +pieces and feed them into 4 TPU cores. The labels tensor is broadcast +directly to all the TPU cores since its partition dims are `None`. +Current limitations: This feature is only supported with the PER_HOST_V2 +input mode. +
+`eval_training_input_configuration` + +If `SLICED`, `input_fn` is only invoked +once on host 0 and the tensors are broadcasted to all other replicas. +Unlike per_host_input_for_training=BROADCAST, each replica will only get a +slice of the data instead of a whole copy. If `PER_HOST_V1`, the behaviour +is determined by per_host_input_for_training. +
+`experimental_host_call_every_n_steps` + +Within a training loop, this argument +sets how often host calls are performed during training. Host calls will +be evaluated every n steps within a training loop where n is the value of +this argument. +
+ + + + + + + + + + + + +
+`ValueError` + +If `num_cores_per_replica` is not 1, 2, 4, 8, ..., 128. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`iterations_per_loop` + + +
+`num_shards` + + +
+`num_cores_per_replica` + + +
+`per_host_input_for_training` + + +
+`tpu_job_name` + + +
+`initial_infeed_sleep_secs` + + +
+`input_partition_dims` + + +
+`eval_training_input_configuration` + + +
+`experimental_host_call_every_n_steps` + + +
+ + + +## Class Variables + +* `eval_training_input_configuration` +* `experimental_host_call_every_n_steps` +* `initial_infeed_sleep_secs` +* `input_partition_dims` +* `iterations_per_loop` +* `num_cores_per_replica` +* `num_shards` +* `per_host_input_for_training` +* `tpu_job_name` diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/tpu/TPUEstimator.md b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/TPUEstimator.md new file mode 100644 index 00000000000..a5328fa494b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/TPUEstimator.md @@ -0,0 +1,1464 @@ +description: Estimator with TPU support. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.estimator.tpu.TPUEstimator + + + + + + + + + +Estimator with TPU support. + +Inherits From: [`Estimator`](../../../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + +TPUEstimator also supports training on CPU and GPU. You don't need to define +a separate tf.estimator.Estimator. + +TPUEstimator handles many of the details of running on TPU devices, such as +replicating inputs and models for each core, and returning to the host +periodically to run hooks. + +TPUEstimator transforms a global batch size in params to a per-shard batch +size when calling the `input_fn` and `model_fn`. Users should specify the +global batch size in the constructor, and then get the batch size for each +shard in `input_fn` and `model_fn` by `params['batch_size']`. + +- For training, `model_fn` gets per-core batch size; `input_fn` may get +  per-core or per-host batch size depending on `per_host_input_for_training` +  in `TPUConfig` (see the docstring for TPUConfig for details). + +- For evaluation and prediction, `model_fn` gets per-core batch size and +  `input_fn` gets per-host batch size. + +Evaluation +========== + +`model_fn` should return `TPUEstimatorSpec`, which expects the `eval_metrics` +for TPU evaluation. If `eval_on_tpu` is `False`, the evaluation will execute on +CPU or GPU; in this case the following discussion on TPU evaluation does not +apply. + +`TPUEstimatorSpec.eval_metrics` is a tuple of `metric_fn` and `tensors`, where +`tensors` could be a list of any nested structure of `Tensor`s (See +`TPUEstimatorSpec` for details). `metric_fn` takes the `tensors` and returns +a dict from metric string name to the result of calling a metric function, +namely a `(metric_tensor, update_op)` tuple. + +One can set `use_tpu` to `False` for testing. All training, evaluation, and +prediction will be executed on CPU. `input_fn` and `model_fn` will receive +`train_batch_size` or `eval_batch_size` unmodified as `params['batch_size']`. + +#### Current limitations: + + +-------------------- + +1. TPU evaluation only works on a single host (one TPU worker) except +   BROADCAST mode. + +2. `input_fn` for evaluation should **NOT** raise an end-of-input exception +   (`OutOfRangeError` or `StopIteration`), and all evaluation steps and all +   batches should have the same size. + +Example (MNIST): +---------------- + +``` +# The metric Fn which runs on CPU. +def metric_fn(labels, logits): +  predictions = tf.argmax(logits, 1) +  return { +      'accuracy': tf.compat.v1.metrics.precision( +          labels=labels, predictions=predictions), +  } + +# Your model Fn which runs on TPU (eval_metrics is a list in this example) +def model_fn(features, labels, mode, config, params): +  ... +  logits = ... + +  if mode == tf.estimator.ModeKeys.EVAL: +    return tpu_estimator.TPUEstimatorSpec( +        mode=mode, +        loss=loss, +        eval_metrics=(metric_fn, [labels, logits])) + +# or specify the eval_metrics tensors as a dict. +def model_fn(features, labels, mode, config, params): +  ... +  final_layer_output = ... + +  if mode == tf.estimator.ModeKeys.EVAL: +    return tpu_estimator.TPUEstimatorSpec( +        mode=mode, +        loss=loss, +        eval_metrics=(metric_fn, { +            'labels': labels, +            'logits': final_layer_output, +        })) +``` + +Prediction +========== + +Prediction on TPU is an experimental feature to support large batch inference. +It is not designed for latency-critical systems. In addition, due to some +usability issues, for prediction with a small dataset, CPU `.predict`, i.e., +creating a new `TPUEstimator` instance with `use_tpu=False`, might be more +convenient.
+ +Note: In contrast to TPU training/evaluation, the `input_fn` for prediction +*should* raise an end-of-input exception (`OutOfRangeError` or +`StopIteration`), which serves as the stopping signal to `TPUEstimator`. To be +precise, the ops created by `input_fn` produce one batch of the data. +The `predict()` API processes one batch at a time. When reaching the end of +the data source, an end-of-input exception should be raised by one of these +operations. The user usually does not need to do this manually. As long as the +dataset is not repeated forever, the tf.data API will raise an end-of-input +exception automatically after the last batch has been produced. + +Note: Estimator.predict returns a Python generator. Please consume all the +data from the generator so that TPUEstimator can shutdown the TPU system +properly for user. + +#### Current limitations: + + +-------------------- +1. TPU prediction only works on a single host (one TPU worker). + +2. `input_fn` must return a `Dataset` instance rather than `features`. In +fact, .train() and .evaluate() also support Dataset as return value. + +Example (MNIST): +---------------- +``` +height = 32 +width = 32 +total_examples = 100 + +def predict_input_fn(params): + batch_size = params['batch_size'] + + images = tf.random.uniform( + [total_examples, height, width, 3], minval=-1, maxval=1) + + dataset = tf.data.Dataset.from_tensor_slices(images) + dataset = dataset.map(lambda images: {'image': images}) + + dataset = dataset.batch(batch_size) + return dataset + +def model_fn(features, labels, params, mode): + # Generate predictions, called 'output', from features['image'] + + if mode == tf.estimator.ModeKeys.PREDICT: + return tf.contrib.tpu.TPUEstimatorSpec( + mode=mode, + predictions={ + 'predictions': output, + 'is_padding': features['is_padding'] + }) + +tpu_est = TPUEstimator( + model_fn=model_fn, + ..., + predict_batch_size=16) + +# Fully consume the generator so that TPUEstimator can shutdown the TPU +# system. +for item in tpu_est.predict(input_fn=input_fn): + # Filter out item if the `is_padding` is 1. + # Process the 'predictions' +``` + +Exporting +========= + +`export_saved_model` exports 2 metagraphs, one with `saved_model.SERVING`, and +another with `saved_model.SERVING` and `saved_model.TPU` tags. At serving +time, these tags are used to select the appropriate metagraph to load. + +Before running the graph on TPU, the TPU system needs to be initialized. If +TensorFlow Serving model-server is used, this is done automatically. If not, +please use `session.run(tpu.initialize_system())`. + +There are two versions of the API: ExportSavedModelApiVersion.V1 and V2. + +In V1, the exported CPU graph is `model_fn` as it is. The exported TPU graph +wraps `tpu.rewrite()` and `TPUPartitionedCallOp` around `model_fn` so +`model_fn` is on TPU by default. To place ops on CPU, +`tpu.outside_compilation(host_call, logits)` can be used. + +#### Example: + + +---------------- + +``` +def model_fn(features, labels, mode, config, params): + ... + logits = ... + export_outputs = { + 'logits': export_output_lib.PredictOutput( + {'logits': logits}) + } + + def host_call(logits): + class_ids = math_ops.argmax(logits) + classes = string_ops.as_string(class_ids) + export_outputs['classes'] = + export_output_lib.ClassificationOutput(classes=classes) + + tpu.outside_compilation(host_call, logits) + + ... +``` + +In V2, `export_saved_model()` sets up `params['use_tpu']` flag to let the user +know if the code is exporting to TPU (or not). 
When `params['use_tpu']` is +`True`, users need to call `tpu.rewrite()`, `TPUPartitionedCallOp` and/or +`batch_function()`. Alternatively use `inference_on_tpu()` which is a +convenience wrapper of the three. + +``` + def model_fn(features, labels, mode, config, params): + ... + # This could be some pre-processing on CPU like calls to input layer with + # embedding columns. + x2 = features['x'] * 2 + + def computation(input_tensor): + return layers.dense( + input_tensor, 1, kernel_initializer=init_ops.zeros_initializer()) + + inputs = [x2] + if params['use_tpu']: + predictions = array_ops.identity( + tpu_estimator.inference_on_tpu(computation, inputs, + num_batch_threads=1, max_batch_size=2, batch_timeout_micros=100), + name='predictions') + else: + predictions = array_ops.identity( + computation(*inputs), name='predictions') + key = signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY + export_outputs = { + key: export_lib.PredictOutput({'prediction': predictions}) + } + ... +``` + +TIP: V2 is recommended as it is more flexible (eg: batching, etc). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+`model_fn`
+
+Model function as required by `Estimator`, which returns
+`EstimatorSpec` or `TPUEstimatorSpec`. `training_hooks`, `evaluation_hooks`,
+and `prediction_hooks` must not capture any TPU Tensor inside the
+`model_fn`.
+
+`model_dir` + +Directory to save model parameters, graph and etc. This can +also be used to load checkpoints from the directory into a estimator to +continue training a previously saved model. If `None`, the model_dir in +`config` will be used if set. If both are set, they must be same. If +both are `None`, a temporary directory will be used. +
+`config` + +An `tpu_config.RunConfig` configuration object. Cannot be `None`. +
+`params` + +An optional `dict` of hyper parameters that will be passed into +`input_fn` and `model_fn`. Keys are names of parameters, values are +basic python types. There are reserved keys for `TPUEstimator`, +including 'batch_size'. +
+
+`use_tpu`
+
+A bool indicating whether TPU support is enabled. Currently, TPU training
+and evaluation respect this bit, but `eval_on_tpu` can override execution
+of evaluation; see `eval_on_tpu` below.
+
+`train_batch_size` + +An int representing the global training batch size. +TPUEstimator transforms this global batch size to a per-shard batch +size, as params['batch_size'], when calling `input_fn` and `model_fn`. +Cannot be `None` if `use_tpu` is `True`. Must be divisible by total +number of replicas. +
+`eval_batch_size` + +An int representing evaluation batch size. Must be +divisible by total number of replicas. +
+`predict_batch_size` + +An int representing the prediction batch size. Must be +divisible by total number of replicas. +
+`batch_axis` + +A python tuple of int values describing how each tensor +produced by the Estimator `input_fn` should be split across the TPU +compute shards. For example, if your input_fn produced (images, labels) +where the images tensor is in `HWCN` format, your shard dimensions would +be [3, 0], where 3 corresponds to the `N` dimension of your images +Tensor, and 0 corresponds to the dimension along which to split the +labels to match up with the corresponding images. If None is supplied, +and per_host_input_for_training is True, batches will be sharded based +on the major dimension. If tpu_config.per_host_input_for_training is +False or `PER_HOST_V2`, batch_axis is ignored. +
+`eval_on_tpu` + +If False, evaluation runs on CPU or GPU. In this case, the +model_fn must return `EstimatorSpec` when called with `mode` as `EVAL`. +
+`export_to_tpu` + +If True, `export_saved_model()` exports a metagraph for +serving on TPU. Note that unsupported export modes such as EVAL will be +ignored. For those modes, only a CPU model will be exported. Currently, +export_to_tpu only supports PREDICT. +
+`export_to_cpu` + +If True, `export_saved_model()` exports a metagraph for +serving on CPU. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If the string filepath is provided instead of +a `WarmStartSettings`, then all variables are warm-started, and it is +assumed that vocabularies and Tensor names are unchanged. +
+`embedding_config_spec` + +Optional EmbeddingConfigSpec instance to support +using TPU embedding. +
+
+`export_saved_model_api_version`
+
+ExportSavedModelApiVersion, V1 or V2. With V1, `export_saved_model()` adds
+`rewrite()` and `TPUPartitionedCallOp()` for the user, while in V2 the user
+is expected to add `rewrite()`, `TPUPartitionedCallOp()`, etc. in their
+`model_fn`. A helper function `inference_on_tpu` is provided for V2.
+brn_tpu_estimator.py includes examples for both versions, i.e.
+TPUEstimatorExportTest and TPUEstimatorExportV2Test.
+
+ + + + + + + + + + + + +
+`ValueError` + +`params` has reserved keys already. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+
+A string which is the path of the directory containing evaluation metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
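+
+As a rough usage sketch (assuming `est` is a constructed `TPUEstimator` and
+`eval_input_fn` returns a `tf.data.Dataset` of `(features, labels)` tuples;
+the names are illustrative):
+
+```
+metrics = est.evaluate(input_fn=eval_input_fn, steps=100)
+print(metrics['loss'], metrics['global_step'])
+```
+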
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+
+View source
+
+
+
+Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
+
+For each mode passed in via the `input_receiver_fn_map`,
+this method builds a new graph by calling the `input_receiver_fn` to obtain
+feature and label `Tensor`s. Next, this method calls the `Estimator`'s
+`model_fn` in the passed mode to generate the model graph based on
+those features and labels, and restores the given checkpoint
+(or, lacking that, the most recent checkpoint) into the graph.
+Only one of the modes is used for saving variables to the `SavedModel`
+(order of preference: tf.estimator.ModeKeys.TRAIN,
+tf.estimator.ModeKeys.EVAL, then
+tf.estimator.ModeKeys.PREDICT), such that up to three
+`tf.MetaGraphDefs` are saved with a single set of variables in a single
+`SavedModel` directory.
+
+For the variables and `tf.MetaGraphDefs`, this method creates a timestamped
+export directory below `export_dir_base` and writes a `SavedModel` into it
+containing the `tf.MetaGraphDef` for the given mode and its associated
+signatures.
+
+For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
+for each element of the `export_outputs` dict returned from the `model_fn`,
+named using the same keys. One of these keys is always
+`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
+indicating which signature will be served when a serving request does not
+specify one. For each signature, the outputs are provided by the
+corresponding tf.estimator.export.ExportOutputs, and the inputs are always
+the input receivers provided by the `serving_input_receiver_fn`.
+
+For training and evaluation, the `train_op` is stored in an extra collection,
+and loss, metrics, and predictions are included in a `SignatureDef` for the
+mode in question.
+
+Extra assets may be written into the `SavedModel` via the `assets_extra`
+argument. This should be a dict, where each key gives a destination path
+(including the filename) relative to the assets.extra directory. The
+corresponding value gives the full path of the source file to be copied.
+For example, the simple case of copying a single file without renaming it
+is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
+
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
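+
+A rough sketch of a typical export call (assuming `est` is a trained
+estimator and `feature_spec` describes the serialized `tf.Example` features;
+these names are illustrative, not part of this API):
+
+```
+feature_spec = {'x': tf.io.FixedLenFeature([4], tf.float32)}
+serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
+    feature_spec)
+export_dir = est.export_saved_model('/tmp/exported_model', serving_input_fn)
+```
+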
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
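+
+For example (a minimal sketch, assuming `est` has already produced a
+checkpoint; see also `get_variable_value` below):
+
+```
+for name in est.get_variable_names():
+  value = est.get_variable_value(name)  # a numpy array
+  print(name, value.shape)
+```
+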
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+
+`name`
+
+A string or a list of strings; the name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+
+`ValueError`
+
+If the batch lengths of the predictions are not all the same and
+`yield_single_examples` is `True`.
+
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally. If you
+call `train(steps=10)` twice, training occurs for a total of 20 steps.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you don't want incremental behavior, set
+`max_steps` instead. If `steps` is set, `max_steps` must be `None`.
+
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/tpu/TPUEstimatorSpec.md b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/TPUEstimatorSpec.md new file mode 100644 index 00000000000..f35b26f83df --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/TPUEstimatorSpec.md @@ -0,0 +1,198 @@ +description: Ops and objects returned from a model_fn and passed to TPUEstimator. + +
+ + + + + + + + + + + + + + + +
+
+# tf.compat.v1.estimator.tpu.TPUEstimatorSpec
+
+
+
+Ops and objects returned from a `model_fn` and passed to `TPUEstimator`.
+
+
+
+See `EstimatorSpec` for `mode`, `predictions`, `loss`, `train_op`, and
+`export_outputs`.
+
+For evaluation, `eval_metrics` is a tuple of `metric_fn` and `tensors`, where
+`metric_fn` runs on CPU to generate metrics and `tensors` represents the
+`Tensor`s transferred from the TPU system to the CPU host and passed to
+`metric_fn`. To be precise, TPU evaluation expects a slightly different
+signature from the tf.estimator.Estimator. While
+`EstimatorSpec.eval_metric_ops` expects a dict, `TPUEstimatorSpec.eval_metrics`
+is a tuple of `metric_fn` and `tensors`. The `tensors` could be a list of
+`Tensor`s or a dict of names to `Tensor`s. The `tensors` usually specify the
+model logits, which are transferred back from the TPU system to the CPU host.
+All tensors must be batch-major, i.e., the batch size is the first dimension.
+Once all tensors are available at the CPU host from all shards, they are
+concatenated (on CPU) and passed as positional arguments to the `metric_fn`
+if `tensors` is a list, or as keyword arguments if `tensors` is a dict.
+`metric_fn` takes the `tensors` and returns a dict from metric string name to
+the result of calling a metric function, namely a `(metric_tensor, update_op)`
+tuple. See `TPUEstimator` for an MNIST example of how to specify
+`eval_metrics`.
+
+`scaffold_fn` is a function running on CPU to generate the `Scaffold`. This
+function should not capture any Tensors in `model_fn`.
+
+`host_call` is a tuple of a `function` and a list or dictionary of `tensors`
+to pass to that function; the function returns a list of Tensors. `host_call`
+currently works for train() and evaluate(). The Tensors returned by the
+function are executed on the CPU on every step, so there is communication
+overhead when sending tensors from TPU to CPU. To reduce the overhead, try
+reducing the size of the tensors. The `tensors` are concatenated along their
+major (batch) dimension, and so must be >= rank 1. The `host_call` is useful
+for writing summaries with `tf.contrib.summary.create_file_writer`.
+
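+
+As an illustrative sketch of the evaluation case (not the full MNIST example
+referenced above; `loss`, `labels` and `logits` are assumed to come from the
+enclosing `model_fn`):
+
+```
+def metric_fn(labels, logits):
+  predictions = tf.argmax(logits, 1)
+  return {'accuracy': tf.compat.v1.metrics.accuracy(
+      labels=labels, predictions=predictions)}
+
+spec = tf.compat.v1.estimator.tpu.TPUEstimatorSpec(
+    mode=tf.estimator.ModeKeys.EVAL,
+    loss=loss,
+    eval_metrics=(metric_fn, {'labels': labels, 'logits': logits}))
+```
+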
+`mode` + + +
+`predictions` + + +
+`loss` + + +
+`train_op` + + +
+`eval_metrics` + + +
+`export_outputs` + + +
+`scaffold_fn` + + +
+`host_call` + + +
+`training_hooks` + + +
+`evaluation_hooks` + + +
+`prediction_hooks` + + +
+ + + +## Methods + +

as_estimator_spec

+ +View source + + + +Creates an equivalent `EstimatorSpec` used by CPU train/eval. + + + + +## Class Variables + +* `eval_metrics` +* `evaluation_hooks` +* `export_outputs` +* `host_call` +* `loss` +* `mode` +* `prediction_hooks` +* `predictions` +* `scaffold_fn` +* `train_op` +* `training_hooks` diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/tpu/experimental.md b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/experimental.md new file mode 100644 index 00000000000..1c6bd95c5e8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/experimental.md @@ -0,0 +1,25 @@ +description: Public API for tf.estimator.tpu.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.estimator.tpu.experimental + + + + + + + + + +Public API for tf.estimator.tpu.experimental namespace. + + + +## Classes + +[`class EmbeddingConfigSpec`](../../../../../tf/compat/v1/estimator/tpu/experimental/EmbeddingConfigSpec.md): Class to keep track of the specification for TPU embeddings. + diff --git a/site/en/api_docs/python/tf/compat/v1/estimator/tpu/experimental/EmbeddingConfigSpec.md b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/experimental/EmbeddingConfigSpec.md new file mode 100644 index 00000000000..a2be676b8a4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/estimator/tpu/experimental/EmbeddingConfigSpec.md @@ -0,0 +1,274 @@ +description: Class to keep track of the specification for TPU embeddings. + +
+ + + + + + + + + + + +
+
+# tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec
+
+
+
+Class to keep track of the specification for TPU embeddings.
+
+
+
+Pass this class to `tf.estimator.tpu.TPUEstimator` via the
+`embedding_config_spec` parameter. At minimum you need to specify
+`feature_columns` and `optimization_parameters`. The feature columns passed
+should be created with some combination of
+`tf.tpu.experimental.embedding_column` and
+`tf.tpu.experimental.shared_embedding_columns`.
+
+TPU embeddings do not support arbitrary TensorFlow optimizers, and the
+main optimizer you use for your model will be ignored for the embedding table
+variables. Instead, TPU embeddings support a fixed set of predefined
+optimizers that you can select from and set the parameters of. These include
+Adagrad, Adam and stochastic gradient descent. Each supported optimizer has a
+`Parameters` class in the tf.tpu.experimental namespace.
+
+```
+column_a = tf.feature_column.categorical_column_with_identity(...)
+column_b = tf.feature_column.categorical_column_with_identity(...)
+column_c = tf.feature_column.categorical_column_with_identity(...)
+tpu_shared_columns = tf.tpu.experimental.shared_embedding_columns(
+    [column_a, column_b], 10)
+tpu_non_shared_column = tf.tpu.experimental.embedding_column(
+    column_c, 10)
+tpu_columns = [tpu_non_shared_column] + tpu_shared_columns
+...
+def model_fn(features):
+  dense_features = tf.keras.layers.DenseFeatures(tpu_columns)
+  embedded_feature = dense_features(features)
+  ...
+
+estimator = tf.estimator.tpu.TPUEstimator(
+    model_fn=model_fn,
+    ...
+    embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
+        feature_columns=tpu_columns,
+        optimization_parameters=(
+            tf.estimator.tpu.experimental.AdagradParameters(0.1))))
+```
+
+`feature_columns` + +All embedding `FeatureColumn`s used by model. +
+`optimization_parameters` + +An instance of `AdagradParameters`, +`AdamParameters` or `StochasticGradientDescentParameters`. This +optimizer will be applied to all embedding variables specified by +`feature_columns`. +
+`clipping_limit` + +(Optional) Clipping limit (absolute value). +
+
+`pipeline_execution_with_tensor_core`
+
+Setting this to `True` makes training
+faster, but the trained model will be different if step N and step N+1
+involve the same set of embedding IDs. Please see
+`tpu_embedding_configuration.proto` for details.
+
+`experimental_gradient_multiplier_fn` + +(Optional) A Fn taking global step as +input returning the current multiplier for all embedding gradients. +
+
+`feature_to_config_dict`
+
+A dictionary mapping feature names to instances
+of the class `FeatureConfig`. Either `feature_columns` or the pair of
+`feature_to_config_dict` and `table_to_config_dict` must be specified.
+
+
+`table_to_config_dict`
+
+A dictionary mapping table names to instances of
+the class `TableConfig`. Either `feature_columns` or the pair of
+`feature_to_config_dict` and `table_to_config_dict` must be specified.
+
+
+`partition_strategy`
+
+A string determining how tensors are sharded across the
+TPU hosts. See tf.nn.safe_embedding_lookup_sparse for more details.
+Allowed values are `"div"` and `"mod"`. If `"mod"` is used, evaluation
+and exporting the model to CPU will not work as expected.
+
+ + + + + + + + + + + + + + + + + + +
+`ValueError` + +If the feature_columns are not specified. +
+
+`TypeError`
+
+If the feature columns are not of the correct type (one of
+_SUPPORTED_FEATURE_COLUMNS, _TPU_EMBEDDING_COLUMN_CLASSES or
+_EMBEDDING_COLUMN_CLASSES).
+
+`ValueError` + +If `optimization_parameters` is not one of the required types. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`feature_columns` + + +
+`optimization_parameters` + + +
+`clipping_limit` + + +
+`pipeline_execution_with_tensor_core` + + +
+`experimental_gradient_multiplier_fn` + + +
+`feature_to_config_dict` + + +
+`table_to_config_dict` + + +
+`partition_strategy` + + +
+ + + +## Class Variables + +* `clipping_limit` +* `experimental_gradient_multiplier_fn` +* `feature_columns` +* `feature_to_config_dict` +* `optimization_parameters` +* `partition_strategy` +* `pipeline_execution_with_tensor_core` +* `table_to_config_dict` diff --git a/site/en/api_docs/python/tf/compat/v1/executing_eagerly.md b/site/en/api_docs/python/tf/compat/v1/executing_eagerly.md new file mode 100644 index 00000000000..2b4be2caf92 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/executing_eagerly.md @@ -0,0 +1,109 @@ +description: Checks whether the current thread has eager execution enabled. + +
+ + +
+ +# tf.compat.v1.executing_eagerly + + + + + + + + + +Checks whether the current thread has eager execution enabled. + + + + + + + +Eager execution is typically enabled via +tf.compat.v1.enable_eager_execution, but may also be enabled within the +context of a Python function via tf.contrib.eager.py_func. + +When eager execution is enabled, returns `True` in most cases. However, +this API might return `False` in the following use cases. + +* Executing inside tf.function, unless under tf.init_scope or + tf.config.experimental_run_functions_eagerly(True) is previously called. +* Executing inside a transformation function for `tf.dataset`. +* tf.compat.v1.disable_eager_execution() is called. + +``` +>>> tf.compat.v1.enable_eager_execution() +``` + +#### General case: + + + +``` +>>> print(tf.executing_eagerly()) +True +``` + +Inside tf.function: + +``` +>>> @tf.function +... def fn(): +... with tf.init_scope(): +... print(tf.executing_eagerly()) +... print(tf.executing_eagerly()) +>>> fn() +True +False +``` + +Inside tf.function +after tf.config.experimental_run_functions_eagerly(True) is called: + +``` +>>> tf.config.experimental_run_functions_eagerly(True) +>>> @tf.function +... def fn(): +... with tf.init_scope(): +... print(tf.executing_eagerly()) +... print(tf.executing_eagerly()) +>>> fn() +True +True +>>> tf.config.experimental_run_functions_eagerly(False) +``` + +Inside a transformation function for `tf.dataset`: + +``` +>>> def data_fn(x): +... print(tf.executing_eagerly()) +... return x +>>> dataset = tf.data.Dataset.range(100) +>>> dataset = dataset.map(data_fn) +False +``` + + + + + + + + + +
+`True` if the current thread has eager execution enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/executing_eagerly_outside_functions.md b/site/en/api_docs/python/tf/compat/v1/executing_eagerly_outside_functions.md new file mode 100644 index 00000000000..f8866844dbf --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/executing_eagerly_outside_functions.md @@ -0,0 +1,66 @@ +description: Returns True if executing eagerly, even if inside a graph function. + +
+ + +
+
+# tf.compat.v1.executing_eagerly_outside_functions
+
+
+
+Returns True if executing eagerly, even if inside a graph function.
+
+
+
+This function will check the outermost context for the program and see if
+it is in eager mode. It is useful to compare with tf.executing_eagerly(),
+which checks the current context and will return `False` within a
+tf.function body. It can be used to build libraries that behave differently
+in eager runtime and v1 session runtime (deprecated).
+
+#### Example:
+
+
+
+```
+>>> tf.compat.v1.enable_eager_execution()
+>>> @tf.function
+... def func():
+...   # A function constructs TensorFlow graphs, it does not execute eagerly,
+...   # but the outermost context is still eager.
+...   assert not tf.executing_eagerly()
+...   return tf.compat.v1.executing_eagerly_outside_functions()
+>>> func()
+
+```
+
+boolean, whether the outermost context is in eager mode. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/expand_dims.md b/site/en/api_docs/python/tf/compat/v1/expand_dims.md new file mode 100644 index 00000000000..5689ee112b6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/expand_dims.md @@ -0,0 +1,140 @@ +description: Inserts a dimension of 1 into a tensor's shape. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.expand_dims + + + + + + + + + +Inserts a dimension of 1 into a tensor's shape. (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. +Instructions for updating: +Use the `axis` argument instead + +Given a tensor `input`, this operation inserts a dimension of 1 at the +dimension index `axis` of `input`'s shape. The dimension index `axis` starts +at zero; if you specify a negative number for `axis` it is counted backward +from the end. + +This operation is useful if you want to add a batch dimension to a single +element. For example, if you have a single image of shape `[height, width, +channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`, +which will make the shape `[1, height, width, channels]`. + +#### Other examples: + + + +```python +# 't' is a tensor of shape [2] +tf.shape(tf.expand_dims(t, 0)) # [1, 2] +tf.shape(tf.expand_dims(t, 1)) # [2, 1] +tf.shape(tf.expand_dims(t, -1)) # [2, 1] + +# 't2' is a tensor of shape [2, 3, 5] +tf.shape(tf.expand_dims(t2, 0)) # [1, 2, 3, 5] +tf.shape(tf.expand_dims(t2, 2)) # [2, 3, 1, 5] +tf.shape(tf.expand_dims(t2, 3)) # [2, 3, 5, 1] +``` + +This operation requires that: + +`-1-input.dims() <= dim <= input.dims()` + +This operation is related to `squeeze()`, which removes dimensions of +size 1. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`axis` + +0-D (scalar). Specifies the dimension index at which to expand the +shape of `input`. Must be in the range `[-rank(input) - 1, rank(input)]`. +
+`name` + +The name of the output `Tensor` (optional). +
+`dim` + +0-D (scalar). Equivalent to `axis`, to be deprecated. +
+ + + + + + + + + + + +
+A `Tensor` with the same data as `input`, but its shape has an additional +dimension of size 1 added. +
+ + + + + + + + + + + + +
+`ValueError` + +if either both or neither of `dim` and `axis` are specified. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/experimental.md b/site/en/api_docs/python/tf/compat/v1/experimental.md new file mode 100644 index 00000000000..00d2befba2e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/experimental.md @@ -0,0 +1,31 @@ +description: Public API for tf.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.experimental + + + + + + + + + +Public API for tf.experimental namespace. + + + +## Functions + +[`async_clear_error(...)`](../../../tf/experimental/async_clear_error.md): Clear pending operations and error statuses in async execution. + +[`async_scope(...)`](../../../tf/experimental/async_scope.md): Context manager for grouping async operations. + +[`function_executor_type(...)`](../../../tf/experimental/function_executor_type.md): Context manager for setting the executor of eager defined functions. + +[`output_all_intermediates(...)`](../../../tf/compat/v1/experimental/output_all_intermediates.md): Whether to output all intermediates from functional control flow ops. + diff --git a/site/en/api_docs/python/tf/compat/v1/experimental/output_all_intermediates.md b/site/en/api_docs/python/tf/compat/v1/experimental/output_all_intermediates.md new file mode 100644 index 00000000000..40e418a91cf --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/experimental/output_all_intermediates.md @@ -0,0 +1,66 @@ +description: Whether to output all intermediates from functional control flow ops. + +
+ + +
+
+# tf.compat.v1.experimental.output_all_intermediates
+
+
+
+Whether to output all intermediates from functional control flow ops.
+
+
+
+The default behavior is to output all intermediates when using v2 control
+flow inside Keras models in graph mode (possibly inside Estimators). This is
+needed to support taking gradients of v2 control flow. In graph mode, Keras
+can sometimes freeze the forward graph before the gradient computation, which
+does not work for v2 control flow since it requires updating the forward ops
+to output the needed intermediates. We work around this by proactively
+outputting the needed intermediates when building the forward pass itself.
+Ideally any such extra tensors should be pruned out at runtime. However, if
+for any reason this doesn't work for you or if you have an inference-only
+model, you can turn this behavior off using
+tf.compat.v1.experimental.output_all_intermediates(False).
+
+If, with the default behavior, you are still seeing errors of the form
+"Connecting to invalid output X of source node Y which has Z outputs", try
+setting tf.compat.v1.experimental.output_all_intermediates(True) and
+please file an issue at https://github.com/tensorflow/tensorflow/issues.
+
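+
+For example, to toggle the behavior explicitly (a minimal sketch):
+
+```
+# Force all intermediates to be output, e.g. to debug gradient errors.
+tf.compat.v1.experimental.output_all_intermediates(True)
+
+# Restore the default heuristic.
+tf.compat.v1.experimental.output_all_intermediates(None)
+```
+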
+`state` + +True, False or None. None restores the default behavior. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/extract_image_patches.md b/site/en/api_docs/python/tf/compat/v1/extract_image_patches.md new file mode 100644 index 00000000000..e927776732a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/extract_image_patches.md @@ -0,0 +1,122 @@ +description: Extract patches from images and put them in the "depth" output dimension. + +
+ + +
+ +# tf.compat.v1.extract_image_patches + + + + + + + + + +Extract `patches` from `images` and put them in the "depth" output dimension. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
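+
+A minimal sketch of what the op does (the shapes in the comments are
+illustrative):
+
+```
+images = tf.reshape(tf.range(100), [1, 10, 10, 1])  # one 10x10, 1-channel image
+patches = tf.compat.v1.extract_image_patches(
+    images,
+    ksizes=[1, 3, 3, 1],
+    strides=[1, 5, 5, 1],
+    rates=[1, 1, 1, 1],
+    padding='VALID')
+# patches has shape [1, 2, 2, 9]: a 2x2 grid of flattened 3x3 patches.
+```
+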
+`images` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`. +4-D Tensor with shape `[batch, in_rows, in_cols, depth]`. +
+`ksizes` + +A list of `ints` that has length `>= 4`. +The size of the sliding window for each dimension of `images`. +
+`strides` + +A list of `ints` that has length `>= 4`. +How far the centers of two consecutive patches are in +the images. Must be: `[1, stride_rows, stride_cols, 1]`. +
+`rates` + +A list of `ints` that has length `>= 4`. +Must be: `[1, rate_rows, rate_cols, 1]`. This is the +input stride, specifying how far two consecutive patch samples are in the +input. Equivalent to extracting patches with +`patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by +subsampling them spatially by a factor of `rates`. This is equivalent to +`rate` in dilated (a.k.a. Atrous) convolutions. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/feature_column.md b/site/en/api_docs/python/tf/compat/v1/feature_column.md new file mode 100644 index 00000000000..8ed121c7c74 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/feature_column.md @@ -0,0 +1,61 @@ +description: Public API for tf.feature_column namespace. + +
+ + +
+ +# Module: tf.compat.v1.feature_column + + + + + + + + + +Public API for tf.feature_column namespace. + + + +## Functions + +[`bucketized_column(...)`](../../../tf/feature_column/bucketized_column.md): Represents discretized dense input bucketed by `boundaries`. + +[`categorical_column_with_hash_bucket(...)`](../../../tf/feature_column/categorical_column_with_hash_bucket.md): Represents sparse feature where ids are set by hashing. + +[`categorical_column_with_identity(...)`](../../../tf/feature_column/categorical_column_with_identity.md): A `CategoricalColumn` that returns identity values. + +[`categorical_column_with_vocabulary_file(...)`](../../../tf/compat/v1/feature_column/categorical_column_with_vocabulary_file.md): A `CategoricalColumn` with a vocabulary file. + +[`categorical_column_with_vocabulary_list(...)`](../../../tf/feature_column/categorical_column_with_vocabulary_list.md): A `CategoricalColumn` with in-memory vocabulary. + +[`crossed_column(...)`](../../../tf/feature_column/crossed_column.md): Returns a column for performing crosses of categorical features. + +[`embedding_column(...)`](../../../tf/feature_column/embedding_column.md): `DenseColumn` that converts from sparse, categorical input. + +[`indicator_column(...)`](../../../tf/feature_column/indicator_column.md): Represents multi-hot representation of given categorical column. + +[`input_layer(...)`](../../../tf/compat/v1/feature_column/input_layer.md): Returns a dense `Tensor` as input layer based on given `feature_columns`. + +[`linear_model(...)`](../../../tf/compat/v1/feature_column/linear_model.md): Returns a linear prediction `Tensor` based on given `feature_columns`. + +[`make_parse_example_spec(...)`](../../../tf/compat/v1/feature_column/make_parse_example_spec.md): Creates parsing spec dictionary from input feature_columns. + +[`numeric_column(...)`](../../../tf/feature_column/numeric_column.md): Represents real valued or numerical features. + +[`sequence_categorical_column_with_hash_bucket(...)`](../../../tf/feature_column/sequence_categorical_column_with_hash_bucket.md): A sequence of categorical terms where ids are set by hashing. + +[`sequence_categorical_column_with_identity(...)`](../../../tf/feature_column/sequence_categorical_column_with_identity.md): Returns a feature column that represents sequences of integers. + +[`sequence_categorical_column_with_vocabulary_file(...)`](../../../tf/feature_column/sequence_categorical_column_with_vocabulary_file.md): A sequence of categorical terms where ids use a vocabulary file. + +[`sequence_categorical_column_with_vocabulary_list(...)`](../../../tf/feature_column/sequence_categorical_column_with_vocabulary_list.md): A sequence of categorical terms where ids use an in-memory list. + +[`sequence_numeric_column(...)`](../../../tf/feature_column/sequence_numeric_column.md): Returns a feature column that represents sequences of numeric data. + +[`shared_embedding_columns(...)`](../../../tf/compat/v1/feature_column/shared_embedding_columns.md): List of dense columns that convert from sparse, categorical input. + +[`weighted_categorical_column(...)`](../../../tf/feature_column/weighted_categorical_column.md): Applies weight values to a `CategoricalColumn`. 
+ diff --git a/site/en/api_docs/python/tf/compat/v1/feature_column/categorical_column_with_vocabulary_file.md b/site/en/api_docs/python/tf/compat/v1/feature_column/categorical_column_with_vocabulary_file.md new file mode 100644 index 00000000000..7f014cf5f6f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/feature_column/categorical_column_with_vocabulary_file.md @@ -0,0 +1,202 @@ +description: A CategoricalColumn with a vocabulary file. + +
+ + +
+ +# tf.compat.v1.feature_column.categorical_column_with_vocabulary_file + + + + + + + + + +A `CategoricalColumn` with a vocabulary file. + + + + + + + +Use this when your inputs are in string or integer format, and you have a +vocabulary file that maps each value to an integer ID. By default, +out-of-vocabulary values are ignored. Use either (but not both) of +`num_oov_buckets` and `default_value` to specify how to include +out-of-vocabulary values. + +For input dictionary `features`, `features[key]` is either `Tensor` or +`SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int +and `''` for string, which will be dropped by this feature column. + +Example with `num_oov_buckets`: +File '/us/states.txt' contains 50 lines, each with a 2-character U.S. state +abbreviation. All inputs with values in that file are assigned an ID 0-49, +corresponding to its line number. All other values are hashed and assigned an +ID 50-54. + +```python +states = categorical_column_with_vocabulary_file( + key='states', vocabulary_file='/us/states.txt', vocabulary_size=50, + num_oov_buckets=5) +columns = [states, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction = linear_model(features, columns) +``` + +Example with `default_value`: +File '/us/states.txt' contains 51 lines - the first line is 'XX', and the +other 50 each have a 2-character U.S. state abbreviation. Both a literal 'XX' +in input, and other values missing from the file, will be assigned ID 0. All +others are assigned the corresponding line number 1-50. + +```python +states = categorical_column_with_vocabulary_file( + key='states', vocabulary_file='/us/states.txt', vocabulary_size=51, + default_value=0) +columns = [states, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction, _, _ = linear_model(features, columns) +``` + +And to make an embedding with either: + +```python +columns = [embedding_column(states, 3),...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +dense_tensor = input_layer(features, columns) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input feature. It is used as the +column name and the dictionary key for feature parsing configs, feature +`Tensor` objects, and feature columns. +
+`vocabulary_file` + +The vocabulary file name. +
+
+`vocabulary_size`
+
+Number of elements in the vocabulary. This must be no
+greater than the length of `vocabulary_file`; if it is less than the length,
+later values are ignored. If `None`, it is set to the length of
+`vocabulary_file`.
+
+`num_oov_buckets` + +Non-negative integer, the number of out-of-vocabulary +buckets. All out-of-vocabulary inputs will be assigned IDs in the range +`[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of +the input value. A positive `num_oov_buckets` can not be specified with +`default_value`. +
+`default_value` + +The integer ID value to return for out-of-vocabulary feature +values, defaults to `-1`. This can not be specified with a positive +`num_oov_buckets`. +
+`dtype` + +The type of features. Only string and integer types are supported. +
+ + + + + + + + + + + +
+A `CategoricalColumn` with a vocabulary file. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +`vocabulary_file` is missing or cannot be opened. +
+`ValueError` + +`vocabulary_size` is missing or < 1. +
+`ValueError` + +`num_oov_buckets` is a negative integer. +
+`ValueError` + +`num_oov_buckets` and `default_value` are both specified. +
+`ValueError` + +`dtype` is neither string nor integer. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/feature_column/input_layer.md b/site/en/api_docs/python/tf/compat/v1/feature_column/input_layer.md new file mode 100644 index 00000000000..59911664a3d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/feature_column/input_layer.md @@ -0,0 +1,158 @@ +description: Returns a dense Tensor as input layer based on given feature_columns. + +
+ + +
+ +# tf.compat.v1.feature_column.input_layer + + + + + + + + + +Returns a dense `Tensor` as input layer based on given `feature_columns`. + + + + + + + +Generally a single example in training data is described with FeatureColumns. +At the first layer of the model, this column oriented data should be converted +to a single `Tensor`. + +#### Example: + + + +```python +price = numeric_column('price') +keywords_embedded = embedding_column( + categorical_column_with_hash_bucket("keywords", 10K), dimensions=16) +columns = [price, keywords_embedded, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +dense_tensor = input_layer(features, columns) +for units in [128, 64, 32]: + dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu) +prediction = tf.compat.v1.layers.dense(dense_tensor, 1) +``` + + + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A mapping from key to tensors. `_FeatureColumn`s look up via these +keys. For example `numeric_column('price')` will look at 'price' key in +this dict. Values can be a `SparseTensor` or a `Tensor` depends on +corresponding `_FeatureColumn`. +
+`feature_columns` + +An iterable containing the FeatureColumns to use as inputs +to your model. All items should be instances of classes derived from +`_DenseColumn` such as `numeric_column`, `embedding_column`, +`bucketized_column`, `indicator_column`. If you have categorical features, +you can wrap them with an `embedding_column` or `indicator_column`. +
+`weight_collections` + +A list of collection names to which the Variable will be +added. Note that variables will also be added to collections +`tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`. +
+`trainable` + +If `True` also add the variable to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`cols_to_vars` + +If not `None`, must be a dictionary that will be filled with a +mapping from `_FeatureColumn` to list of `Variable`s. For example, after +the call, we might have cols_to_vars = +{_EmbeddingColumn( +categorical_column=_HashedCategoricalColumn( +key='sparse_feature', hash_bucket_size=5, dtype=tf.string), +dimension=10): [ +
+`cols_to_output_tensors` + +If not `None`, must be a dictionary that will be +filled with a mapping from '_FeatureColumn' to the associated +output `Tensor`s. +
+ + + + + + + + + + + +
+A `Tensor` which represents input layer of a model. Its shape +is (batch_size, first_layer_dimension) and its dtype is `float32`. +first_layer_dimension is determined based on given `feature_columns`. +
+ + + + + + + + + + + + +
+`ValueError` + +if an item in `feature_columns` is not a `_DenseColumn`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/feature_column/linear_model.md b/site/en/api_docs/python/tf/compat/v1/feature_column/linear_model.md new file mode 100644 index 00000000000..16d9b05d3f4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/feature_column/linear_model.md @@ -0,0 +1,223 @@ +description: Returns a linear prediction Tensor based on given feature_columns. + +
+ + +
+ +# tf.compat.v1.feature_column.linear_model + + + + + + + + + +Returns a linear prediction `Tensor` based on given `feature_columns`. + + + + + + + +This function generates a weighted sum based on output dimension `units`. +Weighted sum refers to logits in classification problems. It refers to the +prediction itself for linear regression problems. + +Note on supported columns: `linear_model` treats categorical columns as +`indicator_column`s. To be specific, assume the input as `SparseTensor` looks +like: + +```python + shape = [2, 2] + { + [0, 0]: "a" + [1, 0]: "b" + [1, 1]: "c" + } +``` +`linear_model` assigns weights for the presence of "a", "b", "c' implicitly, +just like `indicator_column`, while `input_layer` explicitly requires wrapping +each of categorical columns with an `embedding_column` or an +`indicator_column`. + +#### Example of usage: + + + +```python +price = numeric_column('price') +price_buckets = bucketized_column(price, boundaries=[0., 10., 100., 1000.]) +keywords = categorical_column_with_hash_bucket("keywords", 10K) +keywords_price = crossed_column('keywords', price_buckets, ...) +columns = [price_buckets, keywords, keywords_price ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +prediction = linear_model(features, columns) +``` + +The `sparse_combiner` argument works as follows +For example, for two features represented as the categorical columns: + +```python + # Feature 1 + + shape = [2, 2] + { + [0, 0]: "a" + [0, 1]: "b" + [1, 0]: "c" + } + + # Feature 2 + + shape = [2, 3] + { + [0, 0]: "d" + [1, 0]: "e" + [1, 1]: "f" + [1, 2]: "f" + } +``` + +with `sparse_combiner` as "mean", the linear model outputs consequently +are: + +``` + y_0 = 1.0 / 2.0 * ( w_a + w_b ) + w_d + b + y_1 = w_c + 1.0 / 3.0 * ( w_e + 2.0 * w_f ) + b +``` + +where `y_i` is the output, `b` is the bias, and `w_x` is the weight +assigned to the presence of `x` in the input features. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A mapping from key to tensors. `_FeatureColumn`s look up via these +keys. For example `numeric_column('price')` will look at 'price' key in +this dict. Values are `Tensor` or `SparseTensor` depending on +corresponding `_FeatureColumn`. +
+`feature_columns` + +An iterable containing the FeatureColumns to use as inputs +to your model. All items should be instances of classes derived from +`_FeatureColumn`s. +
+`units` + +An integer, dimensionality of the output space. Default value is 1. +
+`sparse_combiner` + +A string specifying how to reduce if a categorical column +is multivalent. Except `numeric_column`, almost all columns passed to +`linear_model` are considered as categorical columns. It combines each +categorical column independently. Currently "mean", "sqrtn" and "sum" are +supported, with "sum" the default for linear model. "sqrtn" often achieves +good accuracy, in particular with bag-of-words columns. +* "sum": do not normalize features in the column +* "mean": do l1 normalization on features in the column +* "sqrtn": do l2 normalization on features in the column +
+`weight_collections` + +A list of collection names to which the Variable will be +added. Note that, variables will also be added to collections +`tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`. +
+`trainable` + +If `True` also add the variable to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`cols_to_vars` + +If not `None`, must be a dictionary that will be filled with a
+mapping from `_FeatureColumn` to the associated list of `Variable`s. For
+example, after the call, `cols_to_vars` might map
+`_NumericColumn(key='numeric_feature1', shape=(1,))` and
+`_NumericColumn(key='numeric_feature2', shape=(2,))` each to the list of
+weight `Variable`s created for that column.
+If a column creates no variables, its value will be an empty list. Note
+that `cols_to_vars` will also contain a string key 'bias' that maps to a
+list of `Variable`s. +
+ + + + + + + + + + + +
+A `Tensor` which represents predictions/logits of a linear model. Its shape +is (batch_size, units) and its dtype is `float32`. +
+ + + + + + + + + + + + +
+`ValueError` + +if an item in `feature_columns` is neither a `_DenseColumn` +nor `_CategoricalColumn`. +
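To make the `sparse_combiner` arithmetic above concrete, here is a minimal pure-Python sketch of the "mean" combiner applied to the two example features; the weight and bias values are made up purely for illustration.

```python
# Illustrative, hand-picked weights for the presence of each categorical
# value (in practice these are learned variables of the linear model).
w = {"a": 0.1, "b": 0.3, "c": -0.2, "d": 0.5, "e": 0.0, "f": 0.4}
b = 0.05  # bias term

# Row 0 contains {a, b} from feature 1 and {d} from feature 2.
y_0 = (w["a"] + w["b"]) / 2.0 + w["d"] + b

# Row 1 contains {c} from feature 1 and {e, f, f} from feature 2.
y_1 = w["c"] + (w["e"] + 2.0 * w["f"]) / 3.0 + b

print(y_0, y_1)
```

With `sparse_combiner="sum"` the divisions by 2.0 and 3.0 are simply dropped, since "sum" does not normalize the features in a column.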
+ diff --git a/site/en/api_docs/python/tf/compat/v1/feature_column/make_parse_example_spec.md b/site/en/api_docs/python/tf/compat/v1/feature_column/make_parse_example_spec.md new file mode 100644 index 00000000000..e84b38ad8cc --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/feature_column/make_parse_example_spec.md @@ -0,0 +1,115 @@ +description: Creates parsing spec dictionary from input feature_columns. + +
+ + +
+ +# tf.compat.v1.feature_column.make_parse_example_spec + + + + + + + + + +Creates parsing spec dictionary from input feature_columns. + + + + + + + +The returned dictionary can be used as arg 'features' in +tf.io.parse_example. + +#### Typical usage example: + + + +```python +# Define features and transformations +feature_a = categorical_column_with_vocabulary_file(...) +feature_b = numeric_column(...) +feature_c_bucketized = bucketized_column(numeric_column("feature_c"), ...) +feature_a_x_feature_c = crossed_column( + columns=["feature_a", feature_c_bucketized], ...) + +feature_columns = set( + [feature_b, feature_c_bucketized, feature_a_x_feature_c]) +features = tf.io.parse_example( + serialized=serialized_examples, + features=make_parse_example_spec(feature_columns)) +``` + +For the above example, make_parse_example_spec would return the dict: + +```python +{ + "feature_a": parsing_ops.VarLenFeature(tf.string), + "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32), + "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32) +} +``` + + + + + + + + + + +
+`feature_columns` + +An iterable containing all feature columns. All items +should be instances of classes derived from `_FeatureColumn`. +
+ + + + + + + + + + + +
+A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature` +value. +
+ + + + + + + + + + + + +
+`ValueError` + +If any of the given `feature_columns` is not a `_FeatureColumn` +instance. +
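As a rough sketch of the behavior described above, the spec produced for one bucketized numeric column and one hash-bucket categorical column looks like the following (the feature names, boundaries and bucket size are invented for illustration):

```python
import tensorflow.compat.v1 as tf1

fc = tf1.feature_column

price = fc.numeric_column("price")
price_buckets = fc.bucketized_column(price, boundaries=[0., 10., 100.])
keywords = fc.categorical_column_with_hash_bucket("keywords",
                                                  hash_bucket_size=1000)

spec = fc.make_parse_example_spec([price_buckets, keywords])
# Roughly: {"price": FixedLenFeature([1], tf.float32),
#           "keywords": VarLenFeature(tf.string)}
print(spec)
```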
+ diff --git a/site/en/api_docs/python/tf/compat/v1/feature_column/shared_embedding_columns.md b/site/en/api_docs/python/tf/compat/v1/feature_column/shared_embedding_columns.md new file mode 100644 index 00000000000..3e36066a871 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/feature_column/shared_embedding_columns.md @@ -0,0 +1,253 @@ +description: List of dense columns that convert from sparse, categorical input. + +
+ + +
+ +# tf.compat.v1.feature_column.shared_embedding_columns + + + + + + + + + +List of dense columns that convert from sparse, categorical input. + + + + + + + +This is similar to `embedding_column`, except that it produces a list of +embedding columns that share the same embedding weights. + +Use this when your inputs are sparse and of the same type (e.g. watched and +impression video IDs that share the same vocabulary), and you want to convert +them to a dense representation (e.g., to feed to a DNN). + +Inputs must be a list of categorical columns created by any of the +`categorical_column_*` function. They must all be of the same type and have +the same arguments except `key`. E.g. they can be +categorical_column_with_vocabulary_file with the same vocabulary_file. Some or +all columns could also be weighted_categorical_column. + +Here is an example embedding of two features for a DNNClassifier model: + +```python +watched_video_id = categorical_column_with_vocabulary_file( + 'watched_video_id', video_vocabulary_file, video_vocabulary_size) +impression_video_id = categorical_column_with_vocabulary_file( + 'impression_video_id', video_vocabulary_file, video_vocabulary_size) +columns = shared_embedding_columns( + [watched_video_id, impression_video_id], dimension=10) + +estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...) + +label_column = ... +def input_fn(): + features = tf.io.parse_example( + ..., features=make_parse_example_spec(columns + [label_column])) + labels = features.pop(label_column.name) + return features, labels + +estimator.train(input_fn=input_fn, steps=100) +``` + +Here is an example using `shared_embedding_columns` with model_fn: + +```python +def model_fn(features, ...): + watched_video_id = categorical_column_with_vocabulary_file( + 'watched_video_id', video_vocabulary_file, video_vocabulary_size) + impression_video_id = categorical_column_with_vocabulary_file( + 'impression_video_id', video_vocabulary_file, video_vocabulary_size) + columns = shared_embedding_columns( + [watched_video_id, impression_video_id], dimension=10) + dense_tensor = input_layer(features, columns) + # Form DNN layers, calculate loss, and return EstimatorSpec. + ... +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`categorical_columns` + +List of categorical columns created by a +`categorical_column_with_*` function. These columns produce the sparse IDs +that are inputs to the embedding lookup. All columns must be of the same +type and have the same arguments except `key`. E.g. they can be +categorical_column_with_vocabulary_file with the same vocabulary_file. +Some or all columns could also be weighted_categorical_column. +
+`dimension` + +An integer specifying dimension of the embedding, must be > 0. +
+`combiner` + +A string specifying how to reduce if there are multiple entries in
+a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with
+'mean' the default. 'sqrtn' often achieves good accuracy, in particular
+with bag-of-words columns. Each of these can be thought of as an
+example-level normalization on the column. For more information, see
+`tf.embedding_lookup_sparse`. +
+`initializer` + +A variable initializer function to be used in embedding +variable initialization. If not specified, defaults to +`truncated_normal_initializer` with mean `0.0` and +standard deviation `1/sqrt(dimension)`. +
+`shared_embedding_collection_name` + +Optional name of the collection where +shared embedding weights are added. If not given, a reasonable name will +be chosen based on the names of `categorical_columns`. This is also used +in `variable_scope` when creating shared embedding weights. +
+`ckpt_to_load_from` + +String representing checkpoint name/pattern from which to +restore column weights. Required if `tensor_name_in_ckpt` is not `None`. +
+`tensor_name_in_ckpt` + +Name of the `Tensor` in `ckpt_to_load_from` from which +to restore the column weights. Required if `ckpt_to_load_from` is not +`None`. +
+`max_norm` + +If not `None`, each embedding is clipped if its l2-norm is larger +than this value, before combining. +
+`trainable` + +Whether or not the embedding is trainable. Default is True. +
+`use_safe_embedding_lookup` + +If true, uses safe_embedding_lookup_sparse
+instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures
+there are no empty rows and all weights and ids are positive, at the
+expense of extra compute cost. This only applies to rank 2 (NxM) shaped
+input tensors. Defaults to true; consider turning it off if the above checks
+are not needed. Note that having empty rows will not trigger any error,
+though the output result might be 0 or omitted. +
+ + + + + + + + + + + +
+A list of dense columns that converts from sparse input. The order of +results follows the ordering of `categorical_columns`. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +if `dimension` not > 0. +
+`ValueError` + +if any of the given `categorical_columns` is of different type +or has different arguments than the others. +
+`ValueError` + +if exactly one of `ckpt_to_load_from` and `tensor_name_in_ckpt` +is specified. +
+`ValueError` + +if `initializer` is specified and is not callable. +
+`RuntimeError` + +if eager execution is enabled. +
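The following is a minimal sketch of the shared-weights setup described above, assuming two hash-bucket columns created with identical arguments except `key` (the names and sizes are illustrative). Graph mode is forced first because, per the Raises table, this symbol is not available under eager execution.

```python
import tensorflow.compat.v1 as tf1

# shared_embedding_columns raises RuntimeError under eager execution.
tf1.disable_eager_execution()

fc = tf1.feature_column
watched = fc.categorical_column_with_hash_bucket(
    "watched_video_id", hash_bucket_size=1000)
impression = fc.categorical_column_with_hash_bucket(
    "impression_video_id", hash_bucket_size=1000)

# Both returned columns look up the same [1000, 16] embedding weights;
# the order of the results follows the order of the inputs.
shared = fc.shared_embedding_columns([watched, impression], dimension=16)
```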
+ diff --git a/site/en/api_docs/python/tf/compat/v1/fixed_size_partitioner.md b/site/en/api_docs/python/tf/compat/v1/fixed_size_partitioner.md new file mode 100644 index 00000000000..08b5dd5e338 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/fixed_size_partitioner.md @@ -0,0 +1,72 @@ +description: Partitioner to specify a fixed number of shards along given axis. + +
+ + +
+ +# tf.compat.v1.fixed_size_partitioner + + + + + + + + + +Partitioner to specify a fixed number of shards along given axis. + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +`int`, number of shards to partition variable. +
+`axis` + +`int`, axis to partition on. +
+ + + + + + + + + + + +
+A partition function usable as the `partitioner` argument to +`variable_scope` and `get_variable`. +
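A short sketch of how the returned partition function is typically passed to `variable_scope` and `get_variable` (the variable name and shape are illustrative); graph mode is assumed, since partitioned variables are a TF1-style mechanism.

```python
import tensorflow.compat.v1 as tf1

tf1.disable_eager_execution()

partitioner = tf1.fixed_size_partitioner(num_shards=4, axis=0)
with tf1.variable_scope("embeddings", partitioner=partitioner):
    # 'weights' is created as 4 shards of shape [250, 64] along axis 0.
    weights = tf1.get_variable("weights", shape=[1000, 64])
```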
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags.md b/site/en/api_docs/python/tf/compat/v1/flags.md new file mode 100644 index 00000000000..416bddafd82 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags.md @@ -0,0 +1,167 @@ +description: Import router for absl.flags. See https://github.com/abseil/abseil-py. + +
+ + +
+ +# Module: tf.compat.v1.flags + + + + + + + + + +Import router for absl.flags. See https://github.com/abseil/abseil-py. + + + + + +## Modules + +[`tf_decorator`](../../../tf/compat/v1/flags/tf_decorator.md) module: Base TFDecorator class and utility functions for working with decorators. + +## Classes + +[`class ArgumentParser`](../../../tf/compat/v1/flags/ArgumentParser.md): Base class used to parse and convert arguments. + +[`class ArgumentSerializer`](../../../tf/compat/v1/flags/ArgumentSerializer.md): Base class for generating string representations of a flag value. + +[`class BaseListParser`](../../../tf/compat/v1/flags/BaseListParser.md): Base class for a parser of lists of strings. + +[`class BooleanFlag`](../../../tf/compat/v1/flags/BooleanFlag.md): Basic boolean flag. + +[`class BooleanParser`](../../../tf/compat/v1/flags/BooleanParser.md): Parser of boolean values. + +[`class CantOpenFlagFileError`](../../../tf/compat/v1/flags/CantOpenFlagFileError.md): Raised when flagfile fails to open. + +[`class CsvListSerializer`](../../../tf/compat/v1/flags/CsvListSerializer.md): Base class for generating string representations of a flag value. + +[`class DuplicateFlagError`](../../../tf/compat/v1/flags/DuplicateFlagError.md): Raised if there is a flag naming conflict. + +[`class EnumClassFlag`](../../../tf/compat/v1/flags/EnumClassFlag.md): Basic enum flag; its value is an enum class's member. + +[`class EnumClassParser`](../../../tf/compat/v1/flags/EnumClassParser.md): Parser of an Enum class member. + +[`class EnumFlag`](../../../tf/compat/v1/flags/EnumFlag.md): Basic enum flag; its value can be any string from list of enum_values. + +[`class EnumParser`](../../../tf/compat/v1/flags/EnumParser.md): Parser of a string enum value (a string value from a given set). + +[`class Error`](../../../tf/compat/v1/flags/Error.md): The base class for all flags errors. + +[`class Flag`](../../../tf/compat/v1/flags/Flag.md): Information about a command-line flag. + +[`class FlagNameConflictsWithMethodError`](../../../tf/compat/v1/flags/FlagNameConflictsWithMethodError.md): Raised when a flag name conflicts with FlagValues methods. + +[`class FlagValues`](../../../tf/compat/v1/flags/FlagValues.md): Registry of 'Flag' objects. + +[`class FloatParser`](../../../tf/compat/v1/flags/FloatParser.md): Parser of floating point values. + +[`class IllegalFlagValueError`](../../../tf/compat/v1/flags/IllegalFlagValueError.md): Raised when the flag command line argument is illegal. + +[`class IntegerParser`](../../../tf/compat/v1/flags/IntegerParser.md): Parser of an integer value. + +[`class ListParser`](../../../tf/compat/v1/flags/ListParser.md): Parser for a comma-separated list of strings. + +[`class ListSerializer`](../../../tf/compat/v1/flags/ListSerializer.md): Base class for generating string representations of a flag value. + +[`class MultiEnumClassFlag`](../../../tf/compat/v1/flags/MultiEnumClassFlag.md): A multi_enum_class flag. + +[`class MultiFlag`](../../../tf/compat/v1/flags/MultiFlag.md): A flag that can appear multiple time on the command-line. + +[`class UnparsedFlagAccessError`](../../../tf/compat/v1/flags/UnparsedFlagAccessError.md): Raised when accessing the flag value from unparsed FlagValues. + +[`class UnrecognizedFlagError`](../../../tf/compat/v1/flags/UnrecognizedFlagError.md): Raised when a flag is unrecognized. + +[`class ValidationError`](../../../tf/compat/v1/flags/ValidationError.md): Raised when flag validator constraint is not satisfied. 
+ +[`class WhitespaceSeparatedListParser`](../../../tf/compat/v1/flags/WhitespaceSeparatedListParser.md): Parser for a whitespace-separated list of strings. + +## Functions + +[`DEFINE(...)`](../../../tf/compat/v1/flags/DEFINE.md): Registers a generic Flag object. + +[`DEFINE_alias(...)`](../../../tf/compat/v1/flags/DEFINE_alias.md): Defines an alias flag for an existing one. + +[`DEFINE_bool(...)`](../../../tf/compat/v1/flags/DEFINE_bool.md): Registers a boolean flag. + +[`DEFINE_boolean(...)`](../../../tf/compat/v1/flags/DEFINE_bool.md): Registers a boolean flag. + +[`DEFINE_enum(...)`](../../../tf/compat/v1/flags/DEFINE_enum.md): Registers a flag whose value can be any string from enum_values. + +[`DEFINE_enum_class(...)`](../../../tf/compat/v1/flags/DEFINE_enum_class.md): Registers a flag whose value can be the name of enum members. + +[`DEFINE_flag(...)`](../../../tf/compat/v1/flags/DEFINE_flag.md): Registers a 'Flag' object with a 'FlagValues' object. + +[`DEFINE_float(...)`](../../../tf/compat/v1/flags/DEFINE_float.md): Registers a flag whose value must be a float. + +[`DEFINE_integer(...)`](../../../tf/compat/v1/flags/DEFINE_integer.md): Registers a flag whose value must be an integer. + +[`DEFINE_list(...)`](../../../tf/compat/v1/flags/DEFINE_list.md): Registers a flag whose value is a comma-separated list of strings. + +[`DEFINE_multi(...)`](../../../tf/compat/v1/flags/DEFINE_multi.md): Registers a generic MultiFlag that parses its args with a given parser. + +[`DEFINE_multi_enum(...)`](../../../tf/compat/v1/flags/DEFINE_multi_enum.md): Registers a flag whose value can be a list strings from enum_values. + +[`DEFINE_multi_enum_class(...)`](../../../tf/compat/v1/flags/DEFINE_multi_enum_class.md): Registers a flag whose value can be a list of enum members. + +[`DEFINE_multi_float(...)`](../../../tf/compat/v1/flags/DEFINE_multi_float.md): Registers a flag whose value can be a list of arbitrary floats. + +[`DEFINE_multi_integer(...)`](../../../tf/compat/v1/flags/DEFINE_multi_integer.md): Registers a flag whose value can be a list of arbitrary integers. + +[`DEFINE_multi_string(...)`](../../../tf/compat/v1/flags/DEFINE_multi_string.md): Registers a flag whose value can be a list of any strings. + +[`DEFINE_spaceseplist(...)`](../../../tf/compat/v1/flags/DEFINE_spaceseplist.md): Registers a flag whose value is a whitespace-separated list of strings. + +[`DEFINE_string(...)`](../../../tf/compat/v1/flags/DEFINE_string.md): Registers a flag whose value can be any string. + +[`FLAGS(...)`](../../../tf/compat/v1/flags/FLAGS.md): Registry of 'Flag' objects. + +[`adopt_module_key_flags(...)`](../../../tf/compat/v1/flags/adopt_module_key_flags.md): Declares that all flags key to a module are key to the current module. + +[`declare_key_flag(...)`](../../../tf/compat/v1/flags/declare_key_flag.md): Declares one flag as key to the current module. + +[`disclaim_key_flags(...)`](../../../tf/compat/v1/flags/disclaim_key_flags.md): Declares that the current module will not define any more key flags. + +[`doc_to_help(...)`](../../../tf/compat/v1/flags/doc_to_help.md): Takes a __doc__ string and reformats it as help. + +[`flag_dict_to_args(...)`](../../../tf/compat/v1/flags/flag_dict_to_args.md): Convert a dict of values into process call parameters. + +[`get_help_width(...)`](../../../tf/compat/v1/flags/get_help_width.md): Returns the integer width of help lines that is used in TextWrap. 
+ +[`mark_bool_flags_as_mutual_exclusive(...)`](../../../tf/compat/v1/flags/mark_bool_flags_as_mutual_exclusive.md): Ensures that only one flag among flag_names is True. + +[`mark_flag_as_required(...)`](../../../tf/compat/v1/flags/mark_flag_as_required.md): Ensures that flag is not None during program execution. + +[`mark_flags_as_mutual_exclusive(...)`](../../../tf/compat/v1/flags/mark_flags_as_mutual_exclusive.md): Ensures that only one flag among flag_names is not None. + +[`mark_flags_as_required(...)`](../../../tf/compat/v1/flags/mark_flags_as_required.md): Ensures that flags are not None during program execution. + +[`multi_flags_validator(...)`](../../../tf/compat/v1/flags/multi_flags_validator.md): A function decorator for defining a multi-flag validator. + +[`register_multi_flags_validator(...)`](../../../tf/compat/v1/flags/register_multi_flags_validator.md): Adds a constraint to multiple flags. + +[`register_validator(...)`](../../../tf/compat/v1/flags/register_validator.md): Adds a constraint, which will be enforced during program execution. + +[`text_wrap(...)`](../../../tf/compat/v1/flags/text_wrap.md): Wraps a given text to a maximum line length and returns it. + +[`validator(...)`](../../../tf/compat/v1/flags/validator.md): A function decorator for defining a flag validator. + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/ArgumentParser.md b/site/en/api_docs/python/tf/compat/v1/flags/ArgumentParser.md new file mode 100644 index 00000000000..6f6a24ae835 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/ArgumentParser.md @@ -0,0 +1,130 @@ +description: Base class used to parse and convert arguments. + +
+ + + + + +
+ +# tf.compat.v1.flags.ArgumentParser + + + + + + + + + +Base class used to parse and convert arguments. + + + + + +The parse() method checks to make sure that the string argument is a
+legal value and converts it to a native type. If the value cannot be
+converted, it should throw a 'ValueError' exception with a human-readable
+explanation of why the value is illegal.
+
+Subclasses should also define a syntactic_help string which may be
+presented to the user to describe the form of the legal values.
+
+Argument parser classes must be stateless, since instances are cached
+and shared between flags. Initializer arguments are allowed, but all
+member variables must be derived from initializer arguments only.
+
+## Methods
+
+

flag_type

+ + + +Returns a string representing the type of the flag. + + +

parse

+ + + +Parses the string argument and returns the native value. + +By default it returns its argument unmodified. + + + + + + + + + + +
Args
+`argument` + +string argument passed on the command line. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +Raised when it fails to parse the argument. +
+`TypeError` + +Raised when the argument has the wrong type. +
+ + + + + + + + + + + +
Returns
+The parsed value in native type. +
+ + + + + +## Class Variables + +* `syntactic_help = ''` diff --git a/site/en/api_docs/python/tf/compat/v1/flags/ArgumentSerializer.md b/site/en/api_docs/python/tf/compat/v1/flags/ArgumentSerializer.md new file mode 100644 index 00000000000..16be9e3902e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/ArgumentSerializer.md @@ -0,0 +1,49 @@ +description: Base class for generating string representations of a flag value. + +
+ + + +
+ +# tf.compat.v1.flags.ArgumentSerializer + + + + + + + + + +Base class for generating string representations of a flag value. + + + + + + +## Methods + +

serialize

+ + + +Returns a serialized string of the value. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/BaseListParser.md b/site/en/api_docs/python/tf/compat/v1/flags/BaseListParser.md new file mode 100644 index 00000000000..1c226914119 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/BaseListParser.md @@ -0,0 +1,80 @@ +description: Base class for a parser of lists of strings. + +
+ + + + + + +
+ +# tf.compat.v1.flags.BaseListParser + + + + + + + + + +Base class for a parser of lists of strings. + +Inherits From: [`ArgumentParser`](../../../../tf/compat/v1/flags/ArgumentParser.md) + + + + + + + + + +To extend, inherit from this class; from the subclass __init__, call + + BaseListParser.__init__(self, token, name) + +where token is a character used to tokenize, and name is a description +of the separator. + +## Methods + +

flag_type

+ + + +See base class. + + +

parse

+ + + +See base class. + + + + +## Class Variables + +* `syntactic_help = ''` diff --git a/site/en/api_docs/python/tf/compat/v1/flags/BooleanFlag.md b/site/en/api_docs/python/tf/compat/v1/flags/BooleanFlag.md new file mode 100644 index 00000000000..fa997dd466e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/BooleanFlag.md @@ -0,0 +1,197 @@ +description: Basic boolean flag. + +
+ + + + + + + + + + + + +
+ +# tf.compat.v1.flags.BooleanFlag + + + + + + + + + +Basic boolean flag. + +Inherits From: [`Flag`](../../../../tf/compat/v1/flags/Flag.md) + + + + + + + + + +Boolean flags do not take any arguments, and their value is either +True (1) or False (0). The false value is specified on the command +line by prepending the word 'no' to either the long or the short flag +name. + +For example, if a Boolean flag was created whose long name was +'update' and whose short name was 'x', then this flag could be +explicitly unset through either --noupdate or --nox. + + + + + + + + + + + + +
+`value` + + +
+ + + +## Methods + +

flag_type

+ + + +Returns a str that describes the type of the flag. + +NOTE: we use strings, and not the types.*Type constants because +our flags can have more exotic types, e.g., 'comma separated list +of strings', 'whitespace separated list of strings', etc. + +

parse

+ + + +Parses string and sets flag value. + + + + + + + + + + + +
Args
+`argument` + +str or the correct flag value type, argument to be parsed. +
+ + + +

serialize

+ + + +Serializes the flag. + + +

unparse

+ + + + + + +

__eq__

+ + + +Return self==value. + + +

__ge__

+ + + +Return a >= b. Computed by @total_ordering from (not a < b). + + +

__gt__

+ + + +Return a > b. Computed by @total_ordering from (not a < b) and (a != b). + + +

__le__

+ + + +Return a <= b. Computed by @total_ordering from (a < b) or (a == b). + + +

__lt__

+ + + +Return self < value. + + + + + + + +# tf.compat.v1.flags.BooleanParser + + + + + + + + + +Parser of boolean values. + +Inherits From: [`ArgumentParser`](../../../../tf/compat/v1/flags/ArgumentParser.md) + + + + + + +## Methods + +

flag_type

+ + + +See base class. + + +

parse

+ + + +See base class. + + + + +## Class Variables + +* `syntactic_help = ''` diff --git a/site/en/api_docs/python/tf/compat/v1/flags/CantOpenFlagFileError.md b/site/en/api_docs/python/tf/compat/v1/flags/CantOpenFlagFileError.md new file mode 100644 index 00000000000..866d350a194 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/CantOpenFlagFileError.md @@ -0,0 +1,36 @@ +description: Raised when flagfile fails to open. + +
+ + +
+ +# tf.compat.v1.flags.CantOpenFlagFileError + + + + + + + + + +Raised when flagfile fails to open. + +Inherits From: [`Error`](../../../../tf/compat/v1/flags/Error.md) + + + + + +E.g. the file doesn't exist, or has wrong permissions. + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/CsvListSerializer.md b/site/en/api_docs/python/tf/compat/v1/flags/CsvListSerializer.md new file mode 100644 index 00000000000..6c215f18005 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/CsvListSerializer.md @@ -0,0 +1,60 @@ +description: Base class for generating string representations of a flag value. + +
+ + + + +
+ +# tf.compat.v1.flags.CsvListSerializer + + + + + + + + + +Base class for generating string representations of a flag value. + +Inherits From: [`ArgumentSerializer`](../../../../tf/compat/v1/flags/ArgumentSerializer.md) + + + + + + + + + + +## Methods + +

serialize

+ + + +Serializes a list as a CSV string or unicode. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE.md new file mode 100644 index 00000000000..99d43058b6c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE.md @@ -0,0 +1,113 @@ +description: Registers a generic Flag object. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE + + + + + + + + + +Registers a generic Flag object. + + + + + + + + + +NOTE: in the docstrings of all DEFINE* functions, "registers" is short
+for "creates a new flag and registers it".
+
+Auxiliary function: clients should use the specialized DEFINE_xxx
+function instead. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parser` + +ArgumentParser, used to parse the flag arguments. +
+`name` + +str, the flag name. +
+`default` + +The default value of the flag. +
+`help` + +str, the help message. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`serializer` + +ArgumentSerializer, the flag serializer instance. +
+`module_name` + +str, the name of the Python module declaring this flag. +If not provided, it will be computed using the stack trace of this call. +
+`**args` + +dict, the extra keyword args that are passed to Flag __init__. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_alias.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_alias.md new file mode 100644 index 00000000000..da3d0252084 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_alias.md @@ -0,0 +1,96 @@ +description: Defines an alias flag for an existing one. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_alias + + + + + + + + + +Defines an alias flag for an existing one. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`original_name` + +str, the original flag name. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`module_name` + +A string, the name of the module that defines this flag. +
+ + + + + + + + + + + + +
+`flags.FlagError` + +UnrecognizedFlagError: if the referenced flag doesn't exist. +DuplicateFlagError: if the alias name has been used by some existing flag. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_bool.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_bool.md new file mode 100644 index 00000000000..df111257b28 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_bool.md @@ -0,0 +1,100 @@ +description: Registers a boolean flag. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_bool + + + + + + + + + +Registers a boolean flag. + + + + + + + + + +Such a boolean flag does not take an argument. If a user wants to +specify a false value explicitly, the long option beginning with 'no' +must be used: i.e. --noflag + +This flag will have a value of None, True or False. None is possible +if default=None and the user does not specify the flag on the command +line. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +bool|str|None, the default value of the flag. +
+`help` + +str, the help message. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`module_name` + +str, the name of the Python module declaring this flag. +If not provided, it will be computed using the stack trace of this call. +
+`**args` + +dict, the extra keyword args that are passed to Flag __init__. +
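A small sketch of the behavior described above (the flag name and argv are invented, and parsing is driven explicitly through `FLAGS(...)` rather than through `absl.app`):

```python
import tensorflow.compat.v1 as tf1

flags = tf1.flags
flags.DEFINE_bool("use_gpu", True, "Whether to place ops on the GPU.")

# Boolean flags take no argument; the 'no' prefix sets them to False.
remaining = flags.FLAGS(["prog", "--nouse_gpu"])
print(flags.FLAGS.use_gpu)  # False
print(remaining)            # ['prog'], the unparsed args including argv[0]
```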
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_enum.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_enum.md new file mode 100644 index 00000000000..ebb7e453677 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_enum.md @@ -0,0 +1,104 @@ +description: Registers a flag whose value can be any string from enum_values. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_enum + + + + + + + + + +Registers a flag whose value can be any string from enum_values. + + + + + + + + + +Instead of a string enum, prefer `DEFINE_enum_class`, which allows +defining enums from an `enum.Enum` class. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +str|None, the default value of the flag. +
+`enum_values` + +[str], a non-empty list of strings with the possible values for +the flag. +
+`help` + +str, the help message. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`module_name` + +str, the name of the Python module declaring this flag. +If not provided, it will be computed using the stack trace of this call. +
+`**args` + +dict, the extra keyword args that are passed to Flag __init__. +
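A hedged sketch of an enum flag (the flag name and values are illustrative); a value outside `enum_values` is rejected with `IllegalFlagValueError`:

```python
import tensorflow.compat.v1 as tf1

flags = tf1.flags
flags.DEFINE_enum("mode", "train", ["train", "eval", "predict"],
                  "Which phase of the job to run.")

flags.FLAGS(["prog", "--mode=eval"])
print(flags.FLAGS.mode)  # 'eval'; '--mode=test' would fail validation
```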
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_enum_class.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_enum_class.md new file mode 100644 index 00000000000..fd056829f65 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_enum_class.md @@ -0,0 +1,101 @@ +description: Registers a flag whose value can be the name of enum members. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_enum_class + + + + + + + + + +Registers a flag whose value can be the name of enum members. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +Enum|str|None, the default value of the flag. +
+`enum_class` + +class, the Enum class with all the possible values for the flag. +
+`help` + +str, the help message. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`module_name` + +str, the name of the Python module declaring this flag. +If not provided, it will be computed using the stack trace of this call. +
+`**args` + +dict, the extra keyword args that are passed to Flag __init__. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_flag.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_flag.md new file mode 100644 index 00000000000..c4830a06366 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_flag.md @@ -0,0 +1,78 @@ +description: Registers a 'Flag' object with a 'FlagValues' object. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_flag + + + + + + + + + +Registers a 'Flag' object with a 'FlagValues' object. + + + + + + + + + +By default, the global FLAGS 'FlagValue' object is used. + +Typical users will use one of the more specialized DEFINE_xxx +functions, such as DEFINE_string or DEFINE_integer. But developers +who need to create Flag objects themselves should use this function +to register their flags. + + + + + + + + + + + + + + + + +
+`flag` + +Flag, a flag that is key to the module. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`module_name` + +str, the name of the Python module declaring this flag. +If not provided, it will be computed using the stack trace of this call. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_float.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_float.md new file mode 100644 index 00000000000..301b4304917 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_float.md @@ -0,0 +1,102 @@ +description: Registers a flag whose value must be a float. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_float + + + + + + + + + +Registers a flag whose value must be a float. + + + + + + + + + +If lower_bound or upper_bound are set, then this flag must be +within the given range. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +float|str|None, the default value of the flag. +
+`help` + +str, the help message. +
+`lower_bound` + +float, min value of the flag. +
+`upper_bound` + +float, max value of the flag. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`**args` + +dict, the extra keyword args that are passed to DEFINE. +
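A minimal sketch of a bounded float flag (the name and bounds are illustrative); values outside `[lower_bound, upper_bound]` are rejected with `IllegalFlagValueError`:

```python
import tensorflow.compat.v1 as tf1

flags = tf1.flags
flags.DEFINE_float("dropout", 0.5, "Dropout rate.",
                   lower_bound=0.0, upper_bound=1.0)

flags.FLAGS(["prog", "--dropout=0.3"])
print(flags.FLAGS.dropout)  # 0.3; '--dropout=1.5' would fail validation
```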
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_integer.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_integer.md new file mode 100644 index 00000000000..0d0ea32c0e8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_integer.md @@ -0,0 +1,102 @@ +description: Registers a flag whose value must be an integer. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_integer + + + + + + + + + +Registers a flag whose value must be an integer. + + + + + + + + + +If lower_bound, or upper_bound are set, then this flag must be +within the given range. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +int|str|None, the default value of the flag. +
+`help` + +str, the help message. +
+`lower_bound` + +int, min value of the flag. +
+`upper_bound` + +int, max value of the flag. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`**args` + +dict, the extra keyword args that are passed to DEFINE. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_list.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_list.md new file mode 100644 index 00000000000..f0179cdf67b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_list.md @@ -0,0 +1,87 @@ +description: Registers a flag whose value is a comma-separated list of strings. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_list + + + + + + + + + +Registers a flag whose value is a comma-separated list of strings. + + + + + + + + + +The flag value is parsed with a CSV parser. + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +list|str|None, the default value of the flag. +
+`help` + +str, the help message. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`**args` + +Dictionary with extra keyword args that are passed to the +Flag __init__. +
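A short sketch of the CSV parsing (the flag name and defaults are illustrative); note that the parsed elements remain strings:

```python
import tensorflow.compat.v1 as tf1

flags = tf1.flags
flags.DEFINE_list("layers", "128,64", "Comma-separated layer sizes.")

flags.FLAGS(["prog", "--layers=256,128,64"])
print(flags.FLAGS.layers)  # ['256', '128', '64'], strings rather than ints
```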
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi.md new file mode 100644 index 00000000000..428e29aea12 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi.md @@ -0,0 +1,118 @@ +description: Registers a generic MultiFlag that parses its args with a given parser. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_multi + + + + + + + + + +Registers a generic MultiFlag that parses its args with a given parser. + + + + + + + + + +Auxiliary function. Normal users should NOT use it directly. + +Developers who need to create their own 'Parser' classes for options +which can appear multiple times can call this module function to +register their flags. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parser` + +ArgumentParser, used to parse the flag arguments. +
+`serializer` + +ArgumentSerializer, the flag serializer instance. +
+`name` + +str, the flag name. +
+`default` + +Union[Iterable[T], Text, None], the default value of the flag. +If the value is text, it will be parsed as if it was provided from +the command line. If the value is a non-string iterable, it will be +iterated over to create a shallow copy of the values. If it is None, +it is left as-is. +
+`help` + +str, the help message. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`module_name` + +A string, the name of the Python module declaring this flag. +If not provided, it will be computed using the stack trace of this call. +
+`**args` + +Dictionary with extra keyword args that are passed to the +Flag __init__. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_enum.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_enum.md new file mode 100644 index 00000000000..a57e85b28fe --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_enum.md @@ -0,0 +1,107 @@ +description: Registers a flag whose value can be a list strings from enum_values. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_multi_enum + + + + + + + + + +Registers a flag whose value can be a list of strings from enum_values. + + + + + + + + + +Use the flag on the command line multiple times to place multiple
+enum values into the list. The 'default' may be a single string
+(which will be converted into a single-element list) or a list of
+strings. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +Union[Iterable[Text], Text, None], the default value of the flag; +see `DEFINE_multi`. +
+`enum_values` + +[str], a non-empty list of strings with the possible values for +the flag. +
+`help` + +str, the help message. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`case_sensitive` + +Whether or not the enum is to be case-sensitive. +
+`**args` + +Dictionary with extra keyword args that are passed to the +Flag __init__. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_enum_class.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_enum_class.md new file mode 100644 index 00000000000..a0dea08db4a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_enum_class.md @@ -0,0 +1,103 @@ +description: Registers a flag whose value can be a list of enum members. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_multi_enum_class + + + + + + + + + +Registers a flag whose value can be a list of enum members. + + + + + + + + + +Use the flag on the command line multiple times to place multiple +enum values into the list. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +Union[Iterable[Enum], Iterable[Text], Enum, Text, None], the +default value of the flag; see +`DEFINE_multi`; only differences are documented here. If the value is +a single Enum, it is treated as a single-item list of that Enum value. +If it is an iterable, text values within the iterable will be converted +to the equivalent Enum objects. +
+`enum_class` + +class, the Enum class with all the possible values for the flag. +help: str, the help message. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will be +registered. This should almost never need to be overridden. +
+`module_name` + +A string, the name of the Python module declaring this flag. If +not provided, it will be computed using the stack trace of this call. +
+`**args` + +Dictionary with extra keyword args that are passed to the Flag +__init__. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_float.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_float.md new file mode 100644 index 00000000000..4b34729cc82 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_float.md @@ -0,0 +1,106 @@ +description: Registers a flag whose value can be a list of arbitrary floats. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_multi_float + + + + + + + + + +Registers a flag whose value can be a list of arbitrary floats. + + + + + + + + + +Use the flag on the command line multiple times to place multiple +float values into the list. The 'default' may be a single float +(which will be converted into a single-element list) or a list of +floats. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +Union[Iterable[float], Text, None], the default value of the flag; +see `DEFINE_multi`. +
+`help` + +str, the help message. +
+`lower_bound` + +float, min values of the flag. +
+`upper_bound` + +float, max values of the flag. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`**args` + +Dictionary with extra keyword args that are passed to the +Flag __init__. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_integer.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_integer.md new file mode 100644 index 00000000000..a61998272b4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_integer.md @@ -0,0 +1,106 @@ +description: Registers a flag whose value can be a list of arbitrary integers. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_multi_integer + + + + + + + + + +Registers a flag whose value can be a list of arbitrary integers. + + + + + + + + + +Use the flag on the command line multiple times to place multiple +integer values into the list. The 'default' may be a single integer +(which will be converted into a single-element list) or a list of +integers. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +Union[Iterable[int], Text, None], the default value of the flag; +see `DEFINE_multi`. +
+`help` + +str, the help message. +
+`lower_bound` + +int, min values of the flag. +
+`upper_bound` + +int, max values of the flag. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`**args` + +Dictionary with extra keyword args that are passed to the +Flag __init__. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_string.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_string.md new file mode 100644 index 00000000000..c3058da3a1c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_multi_string.md @@ -0,0 +1,92 @@ +description: Registers a flag whose value can be a list of any strings. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_multi_string + + + + + + + + + +Registers a flag whose value can be a list of any strings. + + + + + + + + + +Use the flag on the command line multiple times to place multiple +string values into the list. The 'default' may be a single string +(which will be converted into a single-element list) or a list of +strings. + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +Union[Iterable[Text], Text, None], the default value of the flag; +see `DEFINE_multi`. +
+`help` + +str, the help message. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`**args` + +Dictionary with extra keyword args that are passed to the +Flag __init__. +
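A brief sketch of the repeat-to-append behavior (the flag name and values are invented):

```python
import tensorflow.compat.v1 as tf1

flags = tf1.flags
flags.DEFINE_multi_string("tag", [], "May be passed several times.")

# Each occurrence on the command line appends one value to the list.
flags.FLAGS(["prog", "--tag=baseline", "--tag=v2"])
print(flags.FLAGS.tag)  # ['baseline', 'v2']
```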
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_spaceseplist.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_spaceseplist.md new file mode 100644 index 00000000000..8244b0028a3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_spaceseplist.md @@ -0,0 +1,96 @@ +description: Registers a flag whose value is a whitespace-separated list of strings. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_spaceseplist + + + + + + + + + +Registers a flag whose value is a whitespace-separated list of strings. + + + + + + + + + +Any whitespace can be used as a separator. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +str, the flag name. +
+`default` + +list|str|None, the default value of the flag. +
+`help` + +str, the help message. +
+`comma_compat` + +bool - Whether to support comma as an additional separator. +If false then only whitespace is supported. This is intended only for +backwards compatibility with flags that used to be comma-separated. +
+`flag_values` + +FlagValues, the FlagValues instance with which the flag will +be registered. This should almost never need to be overridden. +
+`**args` + +Dictionary with extra keyword args that are passed to the +Flag __init__. +
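A minimal sketch of whitespace-separated parsing (the flag name and paths are illustrative):

```python
import tensorflow.compat.v1 as tf1

flags = tf1.flags
flags.DEFINE_spaceseplist("data_dirs", [],
                          "Whitespace-separated list of directories.")

flags.FLAGS(["prog", "--data_dirs=/tmp/a /tmp/b"])
print(flags.FLAGS.data_dirs)  # ['/tmp/a', '/tmp/b']
```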
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_string.md b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_string.md new file mode 100644 index 00000000000..2ebac771dee --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DEFINE_string.md @@ -0,0 +1,39 @@ +description: Registers a flag whose value can be any string. + +
+ + +
+ +# tf.compat.v1.flags.DEFINE_string + + + + + + + + + +Registers a flag whose value can be any string. + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/DuplicateFlagError.md b/site/en/api_docs/python/tf/compat/v1/flags/DuplicateFlagError.md new file mode 100644 index 00000000000..1119dfcab41 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/DuplicateFlagError.md @@ -0,0 +1,102 @@ +description: Raised if there is a flag naming conflict. + +
+ + + +
+ +# tf.compat.v1.flags.DuplicateFlagError + + + + + + + + + +Raised if there is a flag naming conflict. + +Inherits From: [`Error`](../../../../tf/compat/v1/flags/Error.md) + + + + + + +## Methods + +

from_flag

+ + + +Creates a DuplicateFlagError by providing flag name and values. + + + + + + + + + + + + + + + + + +
Args
+`flagname` + +str, the name of the flag being redefined. +
+`flag_values` + +FlagValues, the FlagValues instance containing the first +definition of flagname. +
+`other_flag_values` + +FlagValues, if it is not None, it should be the +FlagValues object where the second definition of flagname occurs. +If it is None, we assume that we're being called when attempting +to create the flag a second time, and we use the module calling +this one as the source of the second definition. +
+ + + + + + + + + + + +
Returns
+An instance of DuplicateFlagError. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/EnumClassFlag.md b/site/en/api_docs/python/tf/compat/v1/flags/EnumClassFlag.md new file mode 100644 index 00000000000..2d45084c0a3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/EnumClassFlag.md @@ -0,0 +1,189 @@ +description: Basic enum flag; its value is an enum class's member. + +
+ + + + + + + + + + + + +
+ +# tf.compat.v1.flags.EnumClassFlag + + + + + + + + + +Basic enum flag; its value is an enum class's member. + +Inherits From: [`Flag`](../../../../tf/compat/v1/flags/Flag.md) + + + + + + + + + + + + + + + + + + + + + +
+`value` + + +
+ + + +## Methods + +

flag_type

+ + + +Returns a str that describes the type of the flag. + +NOTE: we use strings, and not the types.*Type constants because +our flags can have more exotic types, e.g., 'comma separated list +of strings', 'whitespace separated list of strings', etc. + +

parse

+ + + +Parses string and sets flag value. + + + + + + + + + + + +
Args
+`argument` + +str or the correct flag value type, argument to be parsed. +
+ + + +

serialize

+ + + +Serializes the flag. + + +

unparse

+ + + + + + +

__eq__

+ + + +Return self==value. + + +

__ge__

+ + + +Return a >= b. Computed by @total_ordering from (not a < b). + + +

__gt__

+ + + +Return a > b. Computed by @total_ordering from (not a < b) and (a != b). + + +

__le__

+ + + +Return a <= b. Computed by @total_ordering from (a < b) or (a == b). + + +

__lt__

+ + + +Return self < value. + + + + + + + + +# tf.compat.v1.flags.EnumClassParser + + + + + + + + + +Parser of an Enum class member. + +Inherits From: [`ArgumentParser`](../../../../tf/compat/v1/flags/ArgumentParser.md) + + + + + + + + + + + + + + + + + + + +
+`enum_class` + +class, the Enum class with all possible flag values. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +When enum_class is not a subclass of Enum. +
+`ValueError` + +When enum_class is empty. +
+ + + +## Methods + +

flag_type

+ + + +See base class. + + +

parse

+ + + +Determines validity of argument and returns the correct element of enum. + + + + + + + + + + + +
Args
+`argument` + +str or Enum class member, the supplied flag value. +
+ + + + + + + + + + + +
Returns
+The first matching Enum class member in Enum class. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +Raised when argument didn't match anything in enum. +
+ + + + + +## Class Variables + +* `syntactic_help = ''` diff --git a/site/en/api_docs/python/tf/compat/v1/flags/EnumFlag.md b/site/en/api_docs/python/tf/compat/v1/flags/EnumFlag.md new file mode 100644 index 00000000000..27fbaf05705 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/EnumFlag.md @@ -0,0 +1,189 @@ +description: Basic enum flag; its value can be any string from list of enum_values. + +
+ + + + + + + + + + + + +
+ +# tf.compat.v1.flags.EnumFlag + + + + + + + + + +Basic enum flag; its value can be any string from list of enum_values. + +Inherits From: [`Flag`](../../../../tf/compat/v1/flags/Flag.md) + + + + + + + + + + + + + + + + + + + + + +
+`value` + + +
+ + + +## Methods + +

flag_type

+ + + +Returns a str that describes the type of the flag. + +NOTE: we use strings, and not the types.*Type constants because +our flags can have more exotic types, e.g., 'comma separated list +of strings', 'whitespace separated list of strings', etc. + +

parse

+ + + +Parses string and sets flag value. + + + + + + + + + + + +
Args
+`argument` + +str or the correct flag value type, argument to be parsed. +
+ + + +

serialize

+ + + +Serializes the flag. + + +

unparse

+ + + + + + +

__eq__

+ + + +Return self==value. + + +

__ge__

+ + + +Return a >= b. Computed by @total_ordering from (not a < b). + + +

__gt__

+ + + +Return a > b. Computed by @total_ordering from (not a < b) and (a != b). + + +

__le__

+ + + +Return a <= b. Computed by @total_ordering from (a < b) or (a == b). + + +

__lt__

+ + + +Return self < value. + + + + + + + + +# tf.compat.v1.flags.EnumParser + + + + + + + + + +Parser of a string enum value (a string value from a given set). + +Inherits From: [`ArgumentParser`](../../../../tf/compat/v1/flags/ArgumentParser.md) + + + + + + + + + + + + + + + + + + + + + + +
+`enum_values` + +[str], a non-empty list of string values in the enum. +
+`case_sensitive` + +bool, whether or not the enum is to be case-sensitive. +
+ + + + + + + + + + + + +
+`ValueError` + +When enum_values is empty. +
+ + + +## Methods + +

flag_type

+ + + +See base class. + + +

parse

+ + + +Determines validity of argument and returns the correct element of enum. + + + + + + + + + + + +
Args
+`argument` + +str, the supplied flag value. +
+ + + + + + + + + + + +
Returns
+The first matching element from enum_values. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +Raised when argument didn't match anything in enum. +
+ + + + + +## Class Variables + +* `syntactic_help = ''` diff --git a/site/en/api_docs/python/tf/compat/v1/flags/Error.md b/site/en/api_docs/python/tf/compat/v1/flags/Error.md new file mode 100644 index 00000000000..62c3cc936c6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/Error.md @@ -0,0 +1,33 @@ +description: The base class for all flags errors. + +
+ + +
+ +# tf.compat.v1.flags.Error + + + + + + + + + +The base class for all flags errors. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/FLAGS.md b/site/en/api_docs/python/tf/compat/v1/flags/FLAGS.md new file mode 100644 index 00000000000..bd780869013 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/FLAGS.md @@ -0,0 +1,68 @@ +description: Registry of 'Flag' objects. + +
+ + +
+ +# tf.compat.v1.flags.FLAGS + + + + + + + + + +Registry of 'Flag' objects. + + + + + + + + + +A 'FlagValues' can then scan command line arguments, passing flag +arguments through to the 'Flag' objects that it owns. It also +provides easy access to the flag values. Typically only one +'FlagValues' object is needed by an application: flags.FLAGS + +This class is heavily overloaded: + +'Flag' objects are registered via __setitem__: + FLAGS['longname'] = x # register a new flag + +The .value attribute of the registered 'Flag' objects can be accessed +as attributes of this 'FlagValues' object, through __getattr__. Both +the long and short name of the original 'Flag' objects can be used to +access its value: + FLAGS.longname # parsed flag value + FLAGS.x # parsed flag value (short name) + +Command line arguments are scanned and passed to the registered 'Flag' +objects through the __call__ method. Unparsed arguments, including +argv[0] (e.g. the program name) are returned. + argv = FLAGS(sys.argv) # scan command line arguments + +The original registered Flag objects can be retrieved through the use +of the dictionary-like operator, __getitem__: + x = FLAGS['longname'] # access the registered Flag object + +The str() operator of a 'FlagValues' object provides help for all of +the registered 'Flag' objects. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/flags/Flag.md b/site/en/api_docs/python/tf/compat/v1/flags/Flag.md new file mode 100644 index 00000000000..8d6df3d0d0e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/Flag.md @@ -0,0 +1,229 @@ +description: Information about a command-line flag. + +
+ + + + + + + + + + + + +
+ +# tf.compat.v1.flags.Flag + + + + + + + + + +Information about a command-line flag. + + + + + + + + + +'Flag' objects define the following fields: + .name - the name for this flag; + .default - the default value for this flag; + .default_unparsed - the unparsed default value for this flag. + .default_as_str - default value as repr'd string, e.g., "'true'" (or None); + .value - the most recent parsed value of this flag; set by parse(); + .help - a help string or None if no help is available; + .short_name - the single letter alias for this flag (or None); + .boolean - if 'true', this flag does not accept arguments; + .present - true if this flag was parsed from command line flags; + .parser - an ArgumentParser object; + .serializer - an ArgumentSerializer object; + .allow_override - the flag may be redefined without raising an error, and + newly defined flag overrides the old one. + .allow_override_cpp - use the flag from C++ if available; the flag + definition is replaced by the C++ flag after init; + .allow_hide_cpp - use the Python flag despite having a C++ flag with + the same name (ignore the C++ flag); + .using_default_value - the flag value has not been set by user; + .allow_overwrite - the flag may be parsed more than once without raising + an error, the last set value will be used; + .allow_using_method_names - whether this flag can be defined even if it has + a name that conflicts with a FlagValues method. + +The only public method of a 'Flag' object is parse(), but it is +typically only called by a 'FlagValues' object. The parse() method is +a thin wrapper around the 'ArgumentParser' parse() method. The parsed +value is saved in .value, and the .present attribute is updated. If +this flag was already present, an Error is raised. + +parse() is also called during __init__ to parse the default value and +initialize the .value attribute. This enables other python modules to +safely use flags even if the __main__ module neglects to parse the +command line arguments. The .present attribute is cleared after +__init__ parsing. If the default value is set to None, then the +__init__ parsing step is skipped and the .value attribute is +initialized to None. + +Note: The default value is also presented to the user in the help +string, so it is important that it be a legal value for this flag. + + + + + + + + + + + + +
+`value` + + +
+ + + +## Methods + +

flag_type

+ + + +Returns a str that describes the type of the flag. + +NOTE: we use strings, and not the types.*Type constants because +our flags can have more exotic types, e.g., 'comma separated list +of strings', 'whitespace separated list of strings', etc. + +

parse

+ + + +Parses string and sets flag value. + + + + + + + + + + + +
Args
+`argument` + +str or the correct flag value type, argument to be parsed. +
+ + + +

serialize

+ + + +Serializes the flag. + + +

unparse

+ + + + + + +

__eq__

+ + + +Return self==value. + + +

__ge__

+ + + +Return a >= b. Computed by @total_ordering from (not a < b). + + +

__gt__

+ + + +Return a > b. Computed by @total_ordering from (not a < b) and (a != b). + + +

__le__

+ + + +Return a <= b. Computed by @total_ordering from (a < b) or (a == b). + + +

__lt__

+ + + +Return self < value. + + + +# tf.compat.v1.flags.FlagNameConflictsWithMethodError + + + + + + + + + +Raised when a flag name conflicts with FlagValues methods. + +Inherits From: [`Error`](../../../../tf/compat/v1/flags/Error.md) + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/FlagValues.md b/site/en/api_docs/python/tf/compat/v1/flags/FlagValues.md new file mode 100644 index 00000000000..50975736452 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/FlagValues.md @@ -0,0 +1,1104 @@ +description: Registry of 'Flag' objects. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.flags.FlagValues + + + + + + + + + +Registry of 'Flag' objects. + + + + + + + + + +A 'FlagValues' can then scan command line arguments, passing flag +arguments through to the 'Flag' objects that it owns. It also +provides easy access to the flag values. Typically only one +'FlagValues' object is needed by an application: flags.FLAGS + +This class is heavily overloaded: + +'Flag' objects are registered via __setitem__: + FLAGS['longname'] = x # register a new flag + +The .value attribute of the registered 'Flag' objects can be accessed +as attributes of this 'FlagValues' object, through __getattr__. Both +the long and short name of the original 'Flag' objects can be used to +access its value: + FLAGS.longname # parsed flag value + FLAGS.x # parsed flag value (short name) + +Command line arguments are scanned and passed to the registered 'Flag' +objects through the __call__ method. Unparsed arguments, including +argv[0] (e.g. the program name) are returned. + argv = FLAGS(sys.argv) # scan command line arguments + +The original registered Flag objects can be retrieved through the use +of the dictionary-like operator, __getitem__: + x = FLAGS['longname'] # access the registered Flag object + +The str() operator of a 'FlagValues' object provides help for all of +the registered 'Flag' objects. + +## Methods + +

append_flag_values

+ + + +Appends flags registered in another FlagValues instance. + + + + + + + + + + + +
Args
+`flag_values` + +FlagValues, the FlagValues instance from which to copy flags. +
+ + + +
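A short sketch of copying flags between registries (the flag name and the private `FlagValues` instance are illustrative):

```python
import tensorflow as tf

flags = tf.compat.v1.flags

# A private FlagValues instance, e.g. owned by a library.
library_flags = flags.FlagValues()
flags.DEFINE_integer('num_workers', 1, 'Worker count.',
                     flag_values=library_flags)

# Make the library's flags visible on the global registry as well.
flags.FLAGS.append_flag_values(library_flags)
```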

append_flags_into_file

+ + + +Appends all flag assignments from this FlagValues object to a file. + +Output will be in the format of a flagfile. + +NOTE: MUST mirror the behavior of the C++ AppendFlagsIntoFile +from https://github.com/gflags/gflags. + + + + + + + + + +
Args
+`filename` + +str, name of the file. +
+ + + +

find_module_defining_flag

+ + + +Return the name of the module defining this flag, or default. + + + + + + + + + + + + + + +
Args
+`flagname` + +str, name of the flag to lookup. +
+`default` + +Value to return if flagname is not defined. Defaults +to None. +
+ + + + + + + + + + + +
Returns
+The name of the module which registered the flag with this name. +If no such module exists (i.e. no flag with this name exists), +we return default. +
+ + + +

find_module_id_defining_flag

+ + + +Return the ID of the module defining this flag, or default. + + + + + + + + + + + + + + +
Args
+`flagname` + +str, name of the flag to lookup. +
+`default` + +Value to return if flagname is not defined. Defaults +to None. +
+ + + + + + + + + + + +
Returns
+The ID of the module which registered the flag with this name. +If no such module exists (i.e. no flag with this name exists), +we return default. +
+ + + +

flag_values_dict

+ + + +Returns a dictionary that maps flag names to flag values. + + +

flags_by_module_dict

+ + + +Returns the dictionary of module_name -> list of defined flags. + + + + + + + + + + +
Returns
+A dictionary. Its keys are module names (strings). Its values +are lists of Flag objects. +
+ + + +

flags_by_module_id_dict

+ + + +Returns the dictionary of module_id -> list of defined flags. + + + + + + + + + + +
Returns
+A dictionary. Its keys are module IDs (ints). Its values +are lists of Flag objects. +
+ + + +

flags_into_string

+ + + +Returns a string with the flags assignments from this FlagValues object. + +This function ignores flags whose value is None. Each flag +assignment is separated by a newline. + +NOTE: MUST mirror the behavior of the C++ CommandlineFlagsIntoString +from https://github.com/gflags/gflags. + + + + + + + + + +
Returns
+str, the string with the flags assignments from this FlagValues object. +The flags are ordered by (module_name, flag_name). +
+ + + +
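For example (a sketch; the file path is made up and flags are assumed to have been defined and parsed earlier), the serialized assignments can be written out and replayed later through `--flagfile`:

```python
import tensorflow as tf

FLAGS = tf.compat.v1.flags.FLAGS

# Persist the current (non-None) flag assignments to a flagfile.
with open('/tmp/saved.flags', 'w') as f:
  f.write(FLAGS.flags_into_string())

# Replay them later (or in another process) via --flagfile.
FLAGS.unparse_flags()
FLAGS(['prog', '--flagfile=/tmp/saved.flags'])
```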

get_flag_value

+ + + +Returns the value of a flag (if not None) or a default value. + + + + + + + + + + + + + + +
Args
+`name` + +str, the name of a flag. +
+`default` + +Default value to use if the flag value is None. +
+ + + + + + + + + + + +
Returns
+Requested flag value or default. +
+ + + +

get_help

+ + + +Returns a help string for all known flags. + + + + + + + + + + + + + + +
Args
+`prefix` + +str, per-line output prefix. +
+`include_special_flags` + +bool, whether to include description of +SPECIAL_FLAGS, i.e. --flagfile and --undefok. +
+ + + + + + + + + + + +
Returns
+str, formatted help message. +
+ + + +

get_key_flags_for_module

+ + + +Returns the list of key flags for a module. + + + + + + + + + + + +
Args
+`module` + +module|str, the module to get key flags from. +
+ + + + + + + + + + + + + + +
Returns
+[Flag], a new list of Flag instances. Caller may update this list as desired: none of those changes will affect the internals of this FlagValues instance. +
+ + + +
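A sketch of listing a module's key flags; passing `sys.argv[0]` is one common way to refer to the main module:

```python
import sys
import tensorflow as tf

FLAGS = tf.compat.v1.flags.FLAGS

key_flags = FLAGS.get_key_flags_for_module(sys.argv[0])
print(sorted(f.name for f in key_flags))
```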

is_gnu_getopt

+ + + + + + +

is_parsed

+ + + +Returns whether flags were parsed. + + +

key_flags_by_module_dict

+ + + +Returns the dictionary of module_name -> list of key flags. + + + + + + + + + + +
Returns
+A dictionary. Its keys are module names (strings). Its values +are lists of Flag objects. +
+ + + +

main_module_help

+ + + +Describes the key flags of the main module. + + + + + + + + + + +
Returns
+str, describing the key flags of the main module. +
+ + + +

mark_as_parsed

+ + + +Explicitly marks flags as parsed. + +Use this when the caller knows that this FlagValues has been parsed as if +a __call__() invocation has happened. This is only a public method for +use by things like appcommands which do additional command-like parsing. +
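A sketch with a private `FlagValues` instance (the flag name is illustrative):

```python
import tensorflow as tf

flags = tf.compat.v1.flags

fv = flags.FlagValues()
flags.DEFINE_string('mode', 'train', 'Run mode.', flag_values=fv)

fv.mark_as_parsed()   # pretend fv has already seen a command line
print(fv.mode)        # 'train'; without the call this access would raise
                      # UnparsedFlagAccessError
```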

module_help

+ + + +Describes the key flags of a module. + + + + + + + + + + + +
Args
+`module` + +module|str, the module to describe the key flags for. +
+ + + + + + + + + + + +
Returns
+str, describing the key flags of a module. +
+ + + +

read_flags_from_files

+ + + +Processes command line args, but also allows args to be read from a file. +
Args
+`argv` + +[str], a list of strings, usually sys.argv[1:], which may contain +one or more flagfile directives of the form --flagfile="./filename". +Note that the name of the program (sys.argv[0]) should be omitted. +
+`force_gnu` + +bool, if False, --flagfile parsing obeys the +FLAGS.is_gnu_getopt() value. If True, ignore the value and always +follow gnu_getopt semantics. +
+ + + + + + + + + + + +
Returns
+A new list which has the original list combined with what we read +from any flagfile(s). +
+ + + + + + + + + + + + +
Raises
+`IllegalFlagValueError` + +Raised when --flagfile is provided with no +argument. +
+ + +This function is called by FLAGS(argv). +It scans the input list for a flag that looks like: +--flagfile=<file>. Then it opens <file>, reads all valid key +and value pairs and inserts them into the input list in exactly the +place where the --flagfile arg is found. + +Note that your application's flags are still defined the usual way +using absl.flags DEFINE_flag() type functions. + +Notes (assuming we're getting a commandline of some sort as our input): +--> For duplicate flags, the last one we hit should "win". +--> Since flags that appear later win, a flagfile's settings can be "weak" + if the --flagfile comes at the beginning of the argument sequence, + and it can be "strong" if the --flagfile comes at the end. +--> A further "--flagfile=<file>" CAN be nested in a flagfile. + It will be expanded in exactly the spot where it is found. +--> In a flagfile, a line beginning with # or // is a comment. +--> Entirely blank lines _should_ be ignored. +
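A sketch of the expansion described above (the flagfile path and its contents are made up):

```python
import tensorflow as tf

FLAGS = tf.compat.v1.flags.FLAGS

# Suppose /tmp/train.flags contains, one per line:
#   --model_dir=/tmp/run1
#   --debug
argv = ['--learning_rate=0.1', '--flagfile=/tmp/train.flags']
expanded = FLAGS.read_flags_from_files(argv)
# expanded == ['--learning_rate=0.1', '--model_dir=/tmp/run1', '--debug'],
# i.e. the --flagfile directive is replaced in place by the file's contents.
```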

register_flag_by_module

+ + + +Records the module that defines a specific flag. + +We keep track of which flag is defined by which module so that we +can later sort the flags by module. + + + + + + + + + + + + + +
Args
+`module_name` + +str, the name of a Python module. +
+`flag` + +Flag, the Flag instance that is key to the module. +
+ + + +

register_flag_by_module_id

+ + + +Records the module that defines a specific flag. + + + + + + + + + + + + + + +
Args
+`module_id` + +int, the ID of the Python module. +
+`flag` + +Flag, the Flag instance that is key to the module. +
+ + + +

register_key_flag_for_module

+ + + +Specifies that a flag is a key flag for a module. + + + + + + + + + + + + + + +
Args
+`module_name` + +str, the name of a Python module. +
+`flag` + +Flag, the Flag instance that is key to the module. +
+ + + +

remove_flag_values

+ + + +Remove flags that were previously appended from another FlagValues. + + + + + + + + + + + +
Args
+`flag_values` + +FlagValues, the FlagValues instance containing flags to +remove. +
+ + + +

set_default

+ + + +Changes the default value of the named flag object. + +The flag's current value is also updated if the flag is currently using +the default value, i.e. not specified in the command line, and not set +by FLAGS.name = value. + + + + + + + + + + + + + +
Args
+`name` + +str, the name of the flag to modify. +
+`value` + +The new default value. +
+ + + + + + + + + + + + + + + +
Raises
+`UnrecognizedFlagError` + +Raised when there is no registered flag named name. +
+`IllegalFlagValueError` + +Raised when value is not valid. +
+ + + +
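A minimal sketch (illustrative flag name) showing that the current value tracks the new default only while the flag is still unset:

```python
import tensorflow as tf

flags = tf.compat.v1.flags
flags.DEFINE_integer('epochs', 10, 'Number of training epochs.')

FLAGS = flags.FLAGS
FLAGS(['prog'])                 # parsed, but --epochs was not given

FLAGS.set_default('epochs', 20)
print(FLAGS.epochs)             # 20: the flag was still using its default
```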

set_gnu_getopt

+ + + +Sets whether or not to use GNU style scanning. + +GNU style allows mixing of flag and non-flag arguments. See +http://docs.python.org/library/getopt.html#getopt.gnu_getopt + + + + + + + + + + +
Args
+`gnu_getopt` + +bool, whether or not to use GNU style scanning. +
+ + + +

unparse_flags

+ + + +Unparses all flags to the point before any FLAGS(argv) was called. + + +
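For example (assuming `FLAGS` was parsed earlier in the process):

```python
import tensorflow as tf

FLAGS = tf.compat.v1.flags.FLAGS

FLAGS.unparse_flags()
print(FLAGS.is_parsed())   # False: flags must be re-parsed before value access
```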

write_help_in_xml_format

+ + + +Outputs flag documentation in XML format. + +NOTE: We use element names that are consistent with those used by +the C++ command-line flag library, from +https://github.com/gflags/gflags. +We also use a few new elements (e.g., <key>), but we do not +interfere / overlap with existing XML elements used by the C++ +library. Please maintain this consistency. + + + + + + + + + +
Args
+`outfile` + +File object we write to. Default None means sys.stdout. +
+ + + +

__call__

+ + + +Parses flags from argv; stores parsed flags into this FlagValues object. + +All unparsed arguments are returned. + + + + + + + + + + + + + +
Args
+`argv` + +a tuple/list of strings. +
+`known_only` + +bool, if True, parse and remove known flags; return the rest +untouched. Unknown flags specified by --undefok are not returned. +
+ + + + + + + + + + + +
Returns
+The list of arguments not parsed as options, including argv[0]. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`Error` + +Raised on any parsing error. +
+`TypeError` + +Raised on passing wrong type of arguments. +
+`ValueError` + +Raised on flag value parsing error. +
+ + + +
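A short sketch of the parse/access cycle (flag names and argv are illustrative):

```python
import tensorflow as tf

flags = tf.compat.v1.flags
flags.DEFINE_string('model_dir', '/tmp/model', 'Where to write checkpoints.')
flags.DEFINE_boolean('debug', False, 'Enable verbose logging.')

FLAGS = flags.FLAGS
remaining = FLAGS(['trainer.py', '--model_dir=/tmp/run1', 'extra_arg'])

print(remaining)              # ['trainer.py', 'extra_arg']
print(FLAGS.model_dir)        # '/tmp/run1'
print(FLAGS['debug'].value)   # False (default, since --debug was not passed)
```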

__contains__

+ + + +Returns True if name is a value (flag) in the dict. + + +

__getitem__

+ + + +Returns the Flag object for the flag --name. + + +

__iter__

+ + + + + + +

__len__

+ + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/FloatParser.md b/site/en/api_docs/python/tf/compat/v1/flags/FloatParser.md new file mode 100644 index 00000000000..b8a8a276865 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/FloatParser.md @@ -0,0 +1,101 @@ +description: Parser of floating point values. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.flags.FloatParser + + + + + + + + + +Parser of floating point values. + + + + + + + + + +Parsed value may be bounded to a given upper and lower bound. + +## Methods + +

convert

+ + + +Returns the float value of argument. + + +

flag_type

+ + + +See base class. + + +

is_outside_bounds

+ + + +Returns whether the value is outside the bounds or not. + + +
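A standalone sketch of using the parser with bounds (values are illustrative):

```python
import tensorflow as tf

parser = tf.compat.v1.flags.FloatParser(lower_bound=0.0, upper_bound=1.0)
print(parser.parse('0.25'))            # 0.25
print(parser.is_outside_bounds(1.5))   # True
print(parser.flag_type())              # a short type name, e.g. 'float'
```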

parse

+ + + +See base class. + + + + +## Class Variables + +* `number_article = 'a'` +* `number_name = 'number'` +* `syntactic_help = 'a number'` diff --git a/site/en/api_docs/python/tf/compat/v1/flags/IllegalFlagValueError.md b/site/en/api_docs/python/tf/compat/v1/flags/IllegalFlagValueError.md new file mode 100644 index 00000000000..e5dbc965298 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/IllegalFlagValueError.md @@ -0,0 +1,35 @@ +description: Raised when the flag command line argument is illegal. + +
+ + +
+ +# tf.compat.v1.flags.IllegalFlagValueError + + + + + + + + + +Raised when the flag command line argument is illegal. + +Inherits From: [`Error`](../../../../tf/compat/v1/flags/Error.md) + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/IntegerParser.md b/site/en/api_docs/python/tf/compat/v1/flags/IntegerParser.md new file mode 100644 index 00000000000..dcd249260c8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/IntegerParser.md @@ -0,0 +1,101 @@ +description: Parser of an integer value. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.flags.IntegerParser + + + + + + + + + +Parser of an integer value. + + + + + + + + + +Parsed value may be bounded to a given upper and lower bound. + +## Methods + +

convert

+ + + +Returns the int value of argument. + + +

flag_type

+ + + +See base class. + + +

is_outside_bounds

+ + + +Returns whether the value is outside the bounds or not. + + +

parse

+ + + +See base class. + + + + +## Class Variables + +* `number_article = 'an'` +* `number_name = 'integer'` +* `syntactic_help = 'an integer'` diff --git a/site/en/api_docs/python/tf/compat/v1/flags/ListParser.md b/site/en/api_docs/python/tf/compat/v1/flags/ListParser.md new file mode 100644 index 00000000000..e707e07ba85 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/ListParser.md @@ -0,0 +1,72 @@ +description: Parser for a comma-separated list of strings. + +
+ + + + + + +
+ +# tf.compat.v1.flags.ListParser + + + + + + + + + +Parser for a comma-separated list of strings. + +Inherits From: [`BaseListParser`](../../../../tf/compat/v1/flags/BaseListParser.md) + + + + + + + + + + +## Methods + +

flag_type

+ + + +See base class. + + +

parse

+ + + +Parses argument as comma-separated list of strings. + + + + +## Class Variables + +* `syntactic_help = ''` diff --git a/site/en/api_docs/python/tf/compat/v1/flags/ListSerializer.md b/site/en/api_docs/python/tf/compat/v1/flags/ListSerializer.md new file mode 100644 index 00000000000..0930fb36a6d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/ListSerializer.md @@ -0,0 +1,60 @@ +description: Base class for generating string representations of a flag value. + +
+ + + + +
+ +# tf.compat.v1.flags.ListSerializer + + + + + + + + + +Base class for generating string representations of a flag value. + +Inherits From: [`ArgumentSerializer`](../../../../tf/compat/v1/flags/ArgumentSerializer.md) + + + + + + + + + + +## Methods + +

serialize

+ + + +See base class. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/MultiEnumClassFlag.md b/site/en/api_docs/python/tf/compat/v1/flags/MultiEnumClassFlag.md new file mode 100644 index 00000000000..d35a9ea30dd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/MultiEnumClassFlag.md @@ -0,0 +1,191 @@ +description: A multi_enum_class flag. + +
+ + + + + + + + + + + + +
+ +# tf.compat.v1.flags.MultiEnumClassFlag + + + + + + + + + +A multi_enum_class flag. + +Inherits From: [`MultiFlag`](../../../../tf/compat/v1/flags/MultiFlag.md) + + + + + + + + + +See the __doc__ for MultiFlag for most behaviors of this class. In addition, +this class knows how to handle enum.Enum instances as values for this flag +type. + + + + + + + + + + + + +
+`value` + + +
+ + + +## Methods + +

flag_type

+ + + +See base class. + + +

parse

+ + + +Parses one or more arguments with the installed parser. + + + + + + + + + + + +
Args
+`arguments` + +a single argument or a list of arguments (typically a +list of default values); a single argument is converted +internally into a list containing one item. +
+ + + +

serialize

+ + + +Serializes the flag. + + +

unparse

+ + + + + + +

__eq__

+ + + +Return self==value. + + +

__ge__

+ + + +Return a >= b. Computed by @total_ordering from (not a < b). + + +

__gt__

+ + + +Return a > b. Computed by @total_ordering from (not a < b) and (a != b). + + +

__le__

+ + + +Return a <= b. Computed by @total_ordering from (a < b) or (a == b). + + +

__lt__

+ + + +Return self + + + + + + + + + + + + + + +# tf.compat.v1.flags.MultiFlag + + + + + + + + + +A flag that can appear multiple time on the command-line. + +Inherits From: [`Flag`](../../../../tf/compat/v1/flags/Flag.md) + + + + + + + + + +The value of such a flag is a list that contains the individual values +from all the appearances of that flag on the command-line. + +See the __doc__ for Flag for most behavior of this class. Only +differences in behavior are described here: + + * The default value may be either a single value or an iterable of values. + A single value is transformed into a single-item list of that value. + + * The value of the flag is always a list, even if the option was + only supplied once, and even if the default value is a single + value + + + + + + + + + + + + +
+`value` + + +
+ + + +## Methods + +

flag_type

+ + + +See base class. + + +

parse

+ + + +Parses one or more arguments with the installed parser. + + + + + + + + + + + +
Args
+`arguments` + +a single argument or a list of arguments (typically a +list of default values); a single argument is converted +internally into a list containing one item. +
+ + + +
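A sketch of how repeated occurrences accumulate into a list (the flag name is illustrative); the MultiFlag itself is created by the `DEFINE_multi_*` helpers rather than constructed directly:

```python
import tensorflow as tf

flags = tf.compat.v1.flags
flags.DEFINE_multi_string('tag', [], 'May be passed multiple times.')

FLAGS = flags.FLAGS
FLAGS(['prog', '--tag=a', '--tag=b'])
print(FLAGS.tag)          # ['a', 'b']
```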

serialize

+ + + +Serializes the flag. + + +

unparse

+ + + + + + +

__eq__

+ + + +Return self==value. + + +

__ge__

+ + + +Return a >= b. Computed by @total_ordering from (not a < b). + + +

__gt__

+ + + +Return a > b. Computed by @total_ordering from (not a < b) and (a != b). + + +

__le__

+ + + +Return a <= b. Computed by @total_ordering from (a < b) or (a == b). + + +

__lt__

+ + + +Return self + + + + +# tf.compat.v1.flags.UnparsedFlagAccessError + + + + + + + + + +Raised when accessing the flag value from unparsed FlagValues. + +Inherits From: [`Error`](../../../../tf/compat/v1/flags/Error.md) + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/UnrecognizedFlagError.md b/site/en/api_docs/python/tf/compat/v1/flags/UnrecognizedFlagError.md new file mode 100644 index 00000000000..bc7fe6c1809 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/UnrecognizedFlagError.md @@ -0,0 +1,70 @@ +description: Raised when a flag is unrecognized. + +
+ + + +
+ +# tf.compat.v1.flags.UnrecognizedFlagError + + + + + + + + + +Raised when a flag is unrecognized. + +Inherits From: [`Error`](../../../../tf/compat/v1/flags/Error.md) + + + + + + + + + + + + + + + + + + + + + + + + +
+`flagname` + +str, the name of the unrecognized flag. +
+`flagvalue` + +The value of the flag, empty if the flag is not defined. +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/ValidationError.md b/site/en/api_docs/python/tf/compat/v1/flags/ValidationError.md new file mode 100644 index 00000000000..c5d452c4cfa --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/ValidationError.md @@ -0,0 +1,35 @@ +description: Raised when flag validator constraint is not satisfied. + +
+ + +
+ +# tf.compat.v1.flags.ValidationError + + + + + + + + + +Raised when flag validator constraint is not satisfied. + +Inherits From: [`Error`](../../../../tf/compat/v1/flags/Error.md) + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/WhitespaceSeparatedListParser.md b/site/en/api_docs/python/tf/compat/v1/flags/WhitespaceSeparatedListParser.md new file mode 100644 index 00000000000..0bb4db23c02 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/WhitespaceSeparatedListParser.md @@ -0,0 +1,125 @@ +description: Parser for a whitespace-separated list of strings. + +
+ + + + + + +
+ +# tf.compat.v1.flags.WhitespaceSeparatedListParser + + + + + + + + + +Parser for a whitespace-separated list of strings. + +Inherits From: [`BaseListParser`](../../../../tf/compat/v1/flags/BaseListParser.md) + + + + + + + + + + + + + + + + + + + +
+`comma_compat` + +bool, whether to support comma as an additional separator. +If False then only whitespace is supported. This is intended only for +backwards compatibility with flags that used to be comma-separated. +
+ + + +## Methods + +

flag_type

+ + + +See base class. + + +

parse

+ + + +Parses argument as whitespace-separated list of strings. + +It also parses argument as comma-separated list of strings if requested. + + + + + + + + + + +
Args
+`argument` + +string argument passed in the commandline. +
+ + + + + + + + + + + +
Returns
+[str], the parsed flag value. +
+ + + + + +## Class Variables + +* `syntactic_help = ''` diff --git a/site/en/api_docs/python/tf/compat/v1/flags/adopt_module_key_flags.md b/site/en/api_docs/python/tf/compat/v1/flags/adopt_module_key_flags.md new file mode 100644 index 00000000000..b8a81a12a8f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/adopt_module_key_flags.md @@ -0,0 +1,84 @@ +description: Declares that all flags key to a module are key to the current module. + +
+ + +
+ +# tf.compat.v1.flags.adopt_module_key_flags + + + + + + + + + +Declares that all flags key to a module are key to the current module. + + + + + + + + + + + + + + + + + + + + + + +
+`module` + +module, the module object from which all key flags will be declared +as key flags to the current module. +
+`flag_values` + +FlagValues, the FlagValues instance in which the flags will +be declared as key flags. This should almost never need to be +overridden. +
+ + + + + + + + + + + + +
+`Error` + +Raised when given an argument that is a module name (a string), +instead of a module object. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/declare_key_flag.md b/site/en/api_docs/python/tf/compat/v1/flags/declare_key_flag.md new file mode 100644 index 00000000000..100c8d2d883 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/declare_key_flag.md @@ -0,0 +1,96 @@ +description: Declares one flag as key to the current module. + +
+ + +
+ +# tf.compat.v1.flags.declare_key_flag + + + + + + + + + +Declares one flag as key to the current module. + + + + + + + + + +Key flags are flags that are deemed really important for a module. +They are important when listing help messages; e.g., if the +--helpshort command-line flag is used, then only the key flags of the +main module are listed (instead of all flags, as in the case of +--helpfull). + +#### Sample usage: + + +flags.declare_key_flag('flag_1') + + + + + + + + + + + + + + + +
+`flag_name` + +str, the name of an already declared flag. +(Redeclaring flags as key, including flags implicitly key +because they were declared in this module, is a no-op.) +
+`flag_values` + +FlagValues, the FlagValues instance in which the flag will +be declared as a key flag. This should almost never need to be +overridden. +
+ + + + + + + + + + + + +
+`ValueError` + +Raised if flag_name not defined as a Python flag. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/disclaim_key_flags.md b/site/en/api_docs/python/tf/compat/v1/flags/disclaim_key_flags.md new file mode 100644 index 00000000000..686c67799af --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/disclaim_key_flags.md @@ -0,0 +1,48 @@ +description: Declares that the current module will not define any more key flags. + +
+ + +
+ +# tf.compat.v1.flags.disclaim_key_flags + + + + + + + + + +Declares that the current module will not define any more key flags. + + + + + + + + + +Normally, the module that calls the DEFINE_xxx functions claims the +flag to be its key flag. This is undesirable for modules that +define additional DEFINE_yyy functions with its own flag parsers and +serializers, since that module will accidentally claim flags defined +by DEFINE_yyy as its key flags. After calling this function, the +module disclaims flag definitions thereafter, so the key flags will +be correctly attributed to the caller of DEFINE_yyy. + +After calling this function, the module will not be able to define +any more flags. This function will affect all FlagValues objects. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/flags/doc_to_help.md b/site/en/api_docs/python/tf/compat/v1/flags/doc_to_help.md new file mode 100644 index 00000000000..73a705ec187 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/doc_to_help.md @@ -0,0 +1,39 @@ +description: Takes a __doc__ string and reformats it as help. + +
+ + +
+ +# tf.compat.v1.flags.doc_to_help + + + + + + + + + +Takes a __doc__ string and reformats it as help. + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/flag_dict_to_args.md b/site/en/api_docs/python/tf/compat/v1/flags/flag_dict_to_args.md new file mode 100644 index 00000000000..e0695e9d6a5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/flag_dict_to_args.md @@ -0,0 +1,70 @@ +description: Convert a dict of values into process call parameters. + +
+ + +
+ +# tf.compat.v1.flags.flag_dict_to_args + + + + + + + + + +Convert a dict of values into process call parameters. + + + + + + + + + +This method is used to convert a dictionary into a sequence of parameters +for a binary that parses arguments using this module. + + + + + + + + + + +
+`flag_map` + +dict, a mapping where the keys are flag names (strings). Values are treated according to their type: +* If value is None, then only the name is emitted. +* If value is True, then only the name is emitted. +* If value is False, then only the name prepended with 'no' is emitted. +* If value is a string then --name=value is emitted. +* If value is a collection, this will emit --name=value1,value2,value3. +* Everything else is converted to a string and passed as such. +
+ + + +#### Yields: + +sequence of string suitable for a subprocess execution. diff --git a/site/en/api_docs/python/tf/compat/v1/flags/get_help_width.md b/site/en/api_docs/python/tf/compat/v1/flags/get_help_width.md new file mode 100644 index 00000000000..6ced1791f9c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/get_help_width.md @@ -0,0 +1,37 @@ +description: Returns the integer width of help lines that is used in TextWrap. + +
+ + +
+ +# tf.compat.v1.flags.get_help_width + + + + + + + + + +Returns the integer width of help lines that is used in TextWrap. + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/mark_bool_flags_as_mutual_exclusive.md b/site/en/api_docs/python/tf/compat/v1/flags/mark_bool_flags_as_mutual_exclusive.md new file mode 100644 index 00000000000..c891074ecf6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/mark_bool_flags_as_mutual_exclusive.md @@ -0,0 +1,72 @@ +description: Ensures that only one flag among flag_names is True. + +
+ + +
+ +# tf.compat.v1.flags.mark_bool_flags_as_mutual_exclusive + + + + + + + + + +Ensures that only one flag among flag_names is True. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`flag_names` + +[str], names of the flags. +
+`required` + +bool. If true, exactly one flag must be True. Otherwise, at most +one flag can be True, and it is valid for all flags to be False. +
+`flag_values` + +flags.FlagValues, optional FlagValues instance where the flags +are defined. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/mark_flag_as_required.md b/site/en/api_docs/python/tf/compat/v1/flags/mark_flag_as_required.md new file mode 100644 index 00000000000..11f159d088e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/mark_flag_as_required.md @@ -0,0 +1,95 @@ +description: Ensures that flag is not None during program execution. + +
+ + +
+ +# tf.compat.v1.flags.mark_flag_as_required + + + + + + + + + +Ensures that flag is not None during program execution. + + + + + + + + + +Registers a flag validator, which will follow usual validator rules. +Important note: validator will pass for any non-None value, such as False, +0 (zero), '' (empty string) and so on. + +It is recommended to call this method like this: + + if __name__ == '__main__': + flags.mark_flag_as_required('your_flag_name') + app.run() + +Because validation happens at app.run() we want to ensure required-ness +is enforced at that time. You generally do not want to force users who import +your code to have additional required flags for their own binaries or tests. + + + + + + + + + + + + + +
+`flag_name` + +str, name of the flag +
+`flag_values` + +flags.FlagValues, optional FlagValues instance where the flag +is defined. +
+ + + + + + + + + + + + +
+`AttributeError` + +Raised when flag_name is not registered as a valid flag +name. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/mark_flags_as_mutual_exclusive.md b/site/en/api_docs/python/tf/compat/v1/flags/mark_flags_as_mutual_exclusive.md new file mode 100644 index 00000000000..ebad5b32411 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/mark_flags_as_mutual_exclusive.md @@ -0,0 +1,78 @@ +description: Ensures that only one flag among flag_names is not None. + +
+ + +
+ +# tf.compat.v1.flags.mark_flags_as_mutual_exclusive + + + + + + + + + +Ensures that only one flag among flag_names is not None. + + + + + + + + + +Important note: This validator checks if flag values are None, and it does not +distinguish between default and explicit values. Therefore, this validator +does not make sense when applied to flags with default values other than None, +including other false values (e.g. False, 0, '', []). That includes multi +flags with a default value of [] instead of None. + + + + + + + + + + + + + + + + +
+`flag_names` + +[str], names of the flags. +
+`required` + +bool. If true, exactly one of the flags must have a value other +than None. Otherwise, at most one of the flags can have a value other +than None, and it is valid for all of the flags to be None. +
+`flag_values` + +flags.FlagValues, optional FlagValues instance where the flags +are defined. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/mark_flags_as_required.md b/site/en/api_docs/python/tf/compat/v1/flags/mark_flags_as_required.md new file mode 100644 index 00000000000..9ea4f5d4302 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/mark_flags_as_required.md @@ -0,0 +1,90 @@ +description: Ensures that flags are not None during program execution. + +
+ + +
+ +# tf.compat.v1.flags.mark_flags_as_required + + + + + + + + + +Ensures that flags are not None during program execution. + + + + + + + + + + +#### Recommended usage: + + +if __name__ == '__main__': + flags.mark_flags_as_required(['flag1', 'flag2', 'flag3']) + app.run() + + + + + + + + + + + + + + + +
+`flag_names` + +Sequence[str], names of the flags. +
+`flag_values` + +flags.FlagValues, optional FlagValues instance where the flags +are defined. +
+ + + + + + + + + + + + +
+`AttributeError` + +If any of flag name has not already been defined as a flag. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/multi_flags_validator.md b/site/en/api_docs/python/tf/compat/v1/flags/multi_flags_validator.md new file mode 100644 index 00000000000..58bf31c4f7e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/multi_flags_validator.md @@ -0,0 +1,112 @@ +description: A function decorator for defining a multi-flag validator. + +
+ + +
+ +# tf.compat.v1.flags.multi_flags_validator + + + + + + + + + +A function decorator for defining a multi-flag validator. + + + + + + + + + +Registers the decorated function as a validator for flag_names, e.g. + +@flags.multi_flags_validator(['foo', 'bar']) +def _CheckFooBar(flags_dict): + ... + +See register_multi_flags_validator() for the specification of checker +function. + + + + + + + + + + + + + + + + +
+`flag_names` + +[str], a list of the flag names to be checked. +
+`message` + +str, error text to be shown to the user if checker returns False. +If checker raises flags.ValidationError, message from the raised +error will be shown. +
+`flag_values` + +flags.FlagValues, optional FlagValues instance to validate +against. +
+ + + + + + + + + + + +
+A function decorator that registers its function argument as a validator. +
+ + + + + + + + + + + + +
+`AttributeError` + +Raised when a flag is not registered as a valid flag name. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/register_multi_flags_validator.md b/site/en/api_docs/python/tf/compat/v1/flags/register_multi_flags_validator.md new file mode 100644 index 00000000000..3a783bc9cc3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/register_multi_flags_validator.md @@ -0,0 +1,105 @@ +description: Adds a constraint to multiple flags. + +
+ + +
+ +# tf.compat.v1.flags.register_multi_flags_validator + + + + + + + + + +Adds a constraint to multiple flags. + + + + + + + + + +The constraint is validated when flags are initially parsed, and after each +change of the corresponding flag's value. + + + + + + + + + + + + + + + + + + + +
+`flag_names` + +[str], a list of the flag names to be checked. +
+`multi_flags_checker` + +callable, a function to validate the flag. +input - dict, with keys() being flag_names, and value for each key +being the value of the corresponding flag (string, boolean, etc). +output - bool, True if validator constraint is satisfied. +If constraint is not satisfied, it should either return False or +raise flags.ValidationError. +
+`message` + +str, error text to be shown to the user if checker returns False. +If checker raises flags.ValidationError, message from the raised +error will be shown. +
+`flag_values` + +flags.FlagValues, optional FlagValues instance to validate +against. +
+ + + + + + + + + + + + +
+`AttributeError` + +Raised when a flag is not registered as a valid flag name. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/register_validator.md b/site/en/api_docs/python/tf/compat/v1/flags/register_validator.md new file mode 100644 index 00000000000..d71ffa5fee0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/register_validator.md @@ -0,0 +1,60 @@ +description: Adds a constraint, which will be enforced during program execution. + +
+ + +
+ +# tf.compat.v1.flags.register_validator + + + + + + + + + +Adds a constraint, which will be enforced during program execution. + + + + + + + + + +The constraint is validated when flags are initially parsed, and after each +change of the corresponding flag's value. +Args: + flag_name: str, name of the flag to be checked. + checker: callable, a function to validate the flag. + input - A single positional argument: The value of the corresponding + flag (string, boolean, etc. This value will be passed to checker + by the library). + output - bool, True if validator constraint is satisfied. + If constraint is not satisfied, it should either return False or + raise flags.ValidationError(desired_error_message). + message: str, error text to be shown to the user if checker returns False. + If checker raises flags.ValidationError, message from the raised + error will be shown. + flag_values: flags.FlagValues, optional FlagValues instance to validate + against. +Raises: + AttributeError: Raised when flag_name is not registered as a valid flag + name. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/flags/text_wrap.md b/site/en/api_docs/python/tf/compat/v1/flags/text_wrap.md new file mode 100644 index 00000000000..839de4bddeb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/text_wrap.md @@ -0,0 +1,111 @@ +description: Wraps a given text to a maximum line length and returns it. + +
+ + +
+ +# tf.compat.v1.flags.text_wrap + + + + + + + + + +Wraps a given text to a maximum line length and returns it. + + + + + + + + + +It turns lines that only contain whitespace into empty lines, keeps new lines, +and expands tabs using 4 spaces. + + + + + + + + + + + + + + + + + + + +
+`text` + +str, text to wrap. +
+`length` + +int, maximum length of a line, includes indentation. +If this is None then use get_help_width() +
+`indent` + +str, indent for all but first line. +
+`firstline_indent` + +str, indent for first line; if None, fall back to indent. +
+ + + + + + + + + + + +
+str, the wrapped text. +
+ + + + + + + + + + + + +
+`ValueError` + +Raised if indent or firstline_indent not shorter than length. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator.md new file mode 100644 index 00000000000..b513867ed62 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator.md @@ -0,0 +1,97 @@ +description: Base TFDecorator class and utility functions for working with decorators. + +
+ + +
+ +# Module: tf.compat.v1.flags.tf_decorator + + + + + + + + + +Base TFDecorator class and utility functions for working with decorators. + + + + +There are two ways to create decorators that TensorFlow can introspect into. +This is important for documentation generation purposes, so that function +signatures aren't obscured by the (*args, **kwds) signature that decorators +often provide. + +1. Call `tf_decorator.make_decorator` on your wrapper function. If your +decorator is stateless, or can capture all of the variables it needs to work +with through lexical closure, this is the simplest option. Create your wrapper +function as usual, but instead of returning it, return +`tf_decorator.make_decorator(target, your_wrapper)`. This will attach some +decorator introspection metadata onto your wrapper and return it. + +#### Example: + + +def print_hello_before_calling(target): + def wrapper(*args, **kwargs): + print('hello') + return target(*args, **kwargs) + return tf_decorator.make_decorator(target, wrapper) + + +2. Derive from TFDecorator. If your decorator needs to be stateful, you can +implement it in terms of a TFDecorator. Store whatever state you need in your +derived class, and implement the `__call__` method to do your work before +calling into your target. You can retrieve the target via +`super(MyDecoratorClass, self).decorated_target`, and call it with whatever +parameters it needs. + +#### Example: + + +class CallCounter(tf_decorator.TFDecorator): + def __init__(self, target): + super(CallCounter, self).__init__('count_calls', target) + self.call_count = 0 + + def __call__(self, *args, **kwargs): + self.call_count += 1 + return super(CallCounter, self).decorated_target(*args, **kwargs) + +def count_calls(target): + return CallCounter(target) + + +## Modules + +[`tf_stack`](../../../../tf/compat/v1/flags/tf_decorator/tf_stack.md) module: Functions used to extract and analyze stacks. Faster than Python libs. + +## Classes + +[`class TFDecorator`](../../../../tf/compat/v1/flags/tf_decorator/TFDecorator.md): Base class for all TensorFlow decorators. + +## Functions + +[`make_decorator(...)`](../../../../tf/compat/v1/flags/tf_decorator/make_decorator.md): Make a decorator from a wrapper and a target. + +[`rewrap(...)`](../../../../tf/compat/v1/flags/tf_decorator/rewrap.md): Injects a new target into a function built by make_decorator. + +[`unwrap(...)`](../../../../tf/compat/v1/flags/tf_decorator/unwrap.md): Unwraps an object into a list of TFDecorators and a final target. + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/TFDecorator.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/TFDecorator.md new file mode 100644 index 00000000000..9706946731c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/TFDecorator.md @@ -0,0 +1,107 @@ +description: Base class for all TensorFlow decorators. + +
+ + + + +
+ +# tf.compat.v1.flags.tf_decorator.TFDecorator + + + + + + + + + +Base class for all TensorFlow decorators. + + + + + + + + + +TFDecorator captures and exposes the wrapped target, and provides details +about the current decorator. + + + + + + + + + + + + + + + + + + + + + +
+`decorated_target` + + +
+`decorator_argspec` + + +
+`decorator_doc` + + +
+`decorator_name` + + +
+ + + +## Methods + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/make_decorator.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/make_decorator.md new file mode 100644 index 00000000000..46a883095c4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/make_decorator.md @@ -0,0 +1,106 @@ +description: Make a decorator from a wrapper and a target. + +
+ + +
+ +# tf.compat.v1.flags.tf_decorator.make_decorator + + + + + + + + + +Make a decorator from a wrapper and a target. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`target` + +The final callable to be wrapped. +
+`decorator_func` + +The wrapper function. +
+`decorator_name` + +The name of the decorator. If `None`, the name of the +function calling make_decorator. +
+`decorator_doc` + +Documentation specific to this application of +`decorator_func` to `target`. +
+`decorator_argspec` + +The new callable signature of this decorator. +
+ + + + + + + + + + + +
+The `decorator_func` argument with new metadata attached. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/rewrap.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/rewrap.md new file mode 100644 index 00000000000..2209bc00e18 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/rewrap.md @@ -0,0 +1,113 @@ +description: Injects a new target into a function built by make_decorator. + +
+ + +
+ +# tf.compat.v1.flags.tf_decorator.rewrap + + + + + + + + + +Injects a new target into a function built by make_decorator. + + + + + + + + + +This function allows replacing a function wrapped by `decorator_func`, +assuming the decorator that wraps the function is written as described below. + +The decorator function must use `.__wrapped__` instead of the +wrapped function that is normally used: + +#### Example: + + +# Instead of this: +def simple_parametrized_wrapper(*args, **kwds): + return wrapped_fn(*args, **kwds) + +tf_decorator.make_decorator(simple_parametrized_wrapper, wrapped_fn) + +# Write this: +def simple_parametrized_wrapper(*args, **kwds): + return simple_parametrized_wrapper.__wrapped__(*args, **kwds) + +tf_decorator.make_decorator(simple_parametrized_wrapper, wrapped_fn) + + +Note that this process modifies decorator_func. + + + + + + + + + + + + + + + + +
+`decorator_func` + +Callable returned by `wrap`. +
+`previous_target` + +Callable that needs to be replaced. +
+`new_target` + +Callable to replace previous_target with. +
+ + + + + + + + + + + +
+The updated decorator. If decorator_func is not a tf_decorator, new_target +is returned. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack.md new file mode 100644 index 00000000000..cd036fe6c31 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack.md @@ -0,0 +1,55 @@ +description: Functions used to extract and analyze stacks. Faster than Python libs. + +
+ + +
+ +# Module: tf.compat.v1.flags.tf_decorator.tf_stack + + + + + + + + + +Functions used to extract and analyze stacks. Faster than Python libs. + + + + + +## Classes + +[`class CurrentModuleFilter`](../../../../../tf/compat/v1/flags/tf_decorator/tf_stack/CurrentModuleFilter.md): Filters stack frames from the module where this is used (best effort). + +[`class FrameSummary`](../../../../../tf/compat/v1/flags/tf_decorator/tf_stack/FrameSummary.md) + +[`class StackSummary`](../../../../../tf/compat/v1/flags/tf_decorator/tf_stack/StackSummary.md) + +[`class StackTraceFilter`](../../../../../tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceFilter.md): Allows filtering traceback information by removing superfluous frames. + +[`class StackTraceMapper`](../../../../../tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceMapper.md): Allows remapping traceback information to different source code. + +[`class StackTraceTransform`](../../../../../tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceTransform.md): Base class for stack trace transformation functions. + +## Functions + +[`extract_stack(...)`](../../../../../tf/compat/v1/flags/tf_decorator/tf_stack/extract_stack.md): A lightweight, extensible re-implementation of traceback.extract_stack. + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/CurrentModuleFilter.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/CurrentModuleFilter.md new file mode 100644 index 00000000000..bdebf1974a5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/CurrentModuleFilter.md @@ -0,0 +1,101 @@ +description: Filters stack frames from the module where this is used (best effort). + +
+ + + + + + + +
+ +# tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter + + + + + + + + + +Filters stack frames from the module where this is used (best effort). + +Inherits From: [`StackTraceFilter`](../../../../../../tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceFilter.md) + + + + + + + + + + +## Methods + +

get_filtered_filenames

+ +View source + + + + + + +

reset

+ +View source + + + + + + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/FrameSummary.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/FrameSummary.md new file mode 100644 index 00000000000..f7d9c192e31 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/FrameSummary.md @@ -0,0 +1,140 @@ +
+ + + + + + + + + +
+ +# tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filename` + + +
+`line` + + +
+`lineno` + + +
+`name` + + +
+ + + +## Methods + +

__eq__

+ + + +__eq__(self: tensorflow.python._tf_stack.FrameSummary, arg0: tensorflow.python._tf_stack.FrameSummary) -> bool + + +

__getitem__

+ + + +__getitem__(self: tensorflow.python._tf_stack.FrameSummary, arg0: object) -> object + + +

__iter__

+ + + +__iter__(self: tensorflow.python._tf_stack.FrameSummary) -> iterator + + +

__len__

+ + + +__len__(self: tensorflow.python._tf_stack.FrameSummary) -> int + + +

__ne__

+ + + +__ne__(self: tensorflow.python._tf_stack.FrameSummary, arg0: tensorflow.python._tf_stack.FrameSummary) -> bool + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackSummary.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackSummary.md new file mode 100644 index 00000000000..89af27ec374 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackSummary.md @@ -0,0 +1,208 @@ +
+ + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary + + + + + + + + + + + + + + + + + + + + +## Methods + +

append

+ + + +append(self: tensorflow.python._tf_stack.StackSummary, x: tensorflow.python._tf_stack.FrameSummary) -> None + +Add an item to the end of the list + +

count

+ + + +count(self: tensorflow.python._tf_stack.StackSummary, x: tensorflow.python._tf_stack.FrameSummary) -> int + +Return the number of times ``x`` appears in the list + +

extend

+ + + +extend(*args, **kwargs) +Overloaded function. + +1. extend(self: tensorflow.python._tf_stack.StackSummary, L: tensorflow.python._tf_stack.StackSummary) -> None + +Extend the list by appending all the items in the given list + +2. extend(self: tensorflow.python._tf_stack.StackSummary, L: iterable) -> None + +Extend the list by appending all the items in the given list + +

insert

+ + + +insert(self: tensorflow.python._tf_stack.StackSummary, i: int, x: tensorflow.python._tf_stack.FrameSummary) -> None + +Insert an item at a given position. + +

pop

+ + + +pop(*args, **kwargs) +Overloaded function. + +1. pop(self: tensorflow.python._tf_stack.StackSummary) -> tensorflow.python._tf_stack.FrameSummary + +Remove and return the last item + +2. pop(self: tensorflow.python._tf_stack.StackSummary, i: int) -> tensorflow.python._tf_stack.FrameSummary + +Remove and return the item at index ``i`` + +

remove

+ + + +remove(self: tensorflow.python._tf_stack.StackSummary, x: tensorflow.python._tf_stack.FrameSummary) -> None + +Remove the first item from the list whose value is x. It is an error if there is no such item. + +

__bool__

+ + + +__bool__(self: tensorflow.python._tf_stack.StackSummary) -> bool + +Check whether the list is nonempty + +

__contains__

+ + + +__contains__(self: tensorflow.python._tf_stack.StackSummary, x: tensorflow.python._tf_stack.FrameSummary) -> bool + +Return true the container contains ``x`` + +

__eq__

+ + + +__eq__(self: tensorflow.python._tf_stack.StackSummary, arg0: tensorflow.python._tf_stack.StackSummary) -> bool + + +

__getitem__

+ + + +__getitem__(*args, **kwargs) +Overloaded function. + +1. __getitem__(self: tensorflow.python._tf_stack.StackSummary, s: slice) -> tensorflow.python._tf_stack.StackSummary + +Retrieve list elements using a slice object + +2. __getitem__(self: tensorflow.python._tf_stack.StackSummary, arg0: int) -> tensorflow.python._tf_stack.FrameSummary + +3. __getitem__(self: tensorflow.python._tf_stack.StackSummary, arg0: int) -> tensorflow.python._tf_stack.FrameSummary + +

__iter__

+ + + +__iter__(self: tensorflow.python._tf_stack.StackSummary) -> iterator + + +

__len__

+ + + +__len__(self: tensorflow.python._tf_stack.StackSummary) -> int + + +

__ne__

+ + + +__ne__(self: tensorflow.python._tf_stack.StackSummary, arg0: tensorflow.python._tf_stack.StackSummary) -> bool + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceFilter.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceFilter.md new file mode 100644 index 00000000000..8216e98a5b9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceFilter.md @@ -0,0 +1,94 @@ +description: Allows filtering traceback information by removing superfluous frames. + +
+ + + + + + +
+ +# tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter + + + + + + + + + +Allows filtering traceback information by removing superfluous frames. + +Inherits From: [`StackTraceTransform`](../../../../../../tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceTransform.md) + + + + + + +## Methods + +

get_filtered_filenames

+ +View source + + + + + + +

reset

+ +View source + + + + + + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceMapper.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceMapper.md new file mode 100644 index 00000000000..55c039adf3d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceMapper.md @@ -0,0 +1,94 @@ +description: Allows remapping traceback information to different source code. + +
+ + + + + + +
+ +# tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper + + + + + + + + + +Allows remapping traceback information to different source code. + +Inherits From: [`StackTraceTransform`](../../../../../../tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceTransform.md) + + + + + + +## Methods + +

get_effective_source_map

+ +View source + + + +Returns a map (filename, lineno) -> (filename, lineno, function_name). + + +

reset

+ +View source + + + + + + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceTransform.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceTransform.md new file mode 100644 index 00000000000..02b1eaa39fa --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/StackTraceTransform.md @@ -0,0 +1,80 @@ +description: Base class for stack trace transformation functions. + +
+ + + + + +
+ +# tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform + + + + + + + + + +Base class for stack trace transformation functions. + + + + + + +## Methods + +

reset

+ +View source + + + + + + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/extract_stack.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/extract_stack.md new file mode 100644 index 00000000000..7dcb18fbd7b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/tf_stack/extract_stack.md @@ -0,0 +1,81 @@ +description: A lightweight, extensible re-implementation of traceback.extract_stack. + +
+ + +
+ +# tf.compat.v1.flags.tf_decorator.tf_stack.extract_stack + + + + + + + + + +A lightweight, extensible re-implementation of traceback.extract_stack. + + + + + + + + + +NOTE(mrry): traceback.extract_stack eagerly retrieves the line of code for + each stack frame using linecache, which results in an abundance of stat() + calls. This implementation does not retrieve the code, and any consumer + should apply _convert_stack to the result to obtain a traceback that can + be formatted etc. using traceback methods. + + + + + + + + + + +
+`limit` + +A limit on the number of frames to return. +
+ + + + + + + + + + + +
+A sequence of FrameSummary objects (filename, lineno, name, line) +corresponding to the call stack of the current thread. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/unwrap.md b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/unwrap.md new file mode 100644 index 00000000000..f27f57ef632 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/tf_decorator/unwrap.md @@ -0,0 +1,80 @@ +description: Unwraps an object into a list of TFDecorators and a final target. + +
+ + +
+ +# tf.compat.v1.flags.tf_decorator.unwrap + + + + + + + + + +Unwraps an object into a list of TFDecorators and a final target. + + + + + + + + + + + + + + + + + + + +
+`maybe_tf_decorator` + +Any callable object. +
+ + + + + + + + + + + +
+A tuple whose first element is an list of TFDecorator-derived objects that +were applied to the final callable target, and whose second element is the +final undecorated callable target. If the `maybe_tf_decorator` parameter is +not decorated by any TFDecorators, the first tuple element will be an empty +list. The `TFDecorator` list is ordered from outermost to innermost +decorators. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/flags/validator.md b/site/en/api_docs/python/tf/compat/v1/flags/validator.md new file mode 100644 index 00000000000..4992cad6784 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/flags/validator.md @@ -0,0 +1,112 @@ +description: A function decorator for defining a flag validator. + +
+ + +
+ +# tf.compat.v1.flags.validator + + + + + + + + + +A function decorator for defining a flag validator. + + + + + + + + + +Registers the decorated function as a validator for flag_name, e.g. + +@flags.validator('foo') +def _CheckFoo(foo): + ... + +See register_validator() for the specification of checker function. + + + + + + + + + + + + + + + + +
+`flag_name` + +str, name of the flag to be checked. +
+`message` + +str, error text to be shown to the user if checker returns False. +If checker raises flags.ValidationError, message from the raised +error will be shown. +
+`flag_values` + +flags.FlagValues, optional FlagValues instance to validate +against. +
+ + + + + + + + + + + +
+A function decorator that registers its function argument as a validator. +
+ + + + + + + + + + + + +
+`AttributeError` + +Raised when flag_name is not registered as a valid flag +name. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/floor_div.md b/site/en/api_docs/python/tf/compat/v1/floor_div.md new file mode 100644 index 00000000000..3e6617753a3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/floor_div.md @@ -0,0 +1,75 @@ +description: Returns x // y element-wise. + +
+ + +
+ +# tf.compat.v1.floor_div + + + + + + + + + +Returns x // y element-wise. + + + + + + + +*NOTE*: `floor_div` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/foldl.md b/site/en/api_docs/python/tf/compat/v1/foldl.md new file mode 100644 index 00000000000..6cdb2fe764c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/foldl.md @@ -0,0 +1,156 @@ +description: foldl on the list of tensors unpacked from elems on dimension 0. + +
+ + +
+ +# tf.compat.v1.foldl + + + + + + + + + +foldl on the list of tensors unpacked from `elems` on dimension 0. + + + + + + + +This foldl operator repeatedly applies the callable `fn` to a sequence +of elements from first to last. The elements are made of the tensors +unpacked from `elems` on dimension 0. The callable fn takes two tensors as +arguments. The first argument is the accumulated value computed from the +preceding invocation of fn, and the second is the value at the current +position of `elems`. If `initializer` is None, `elems` must contain at least +one element, and its first element is used as the initializer. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is fn(initializer, values[0]).shape`. + +This method also allows multi-arity `elems` and output of `fn`. If `elems` +is a (possibly nested) list or tuple of tensors, then each of these tensors +must have a matching first (unpack) dimension. The signature of `fn` may +match the structure of `elems`. That is, if `elems` is +`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is: +`fn = lambda (t1, [t2, t3, [t4, t5]]):`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The callable to be performed. +
+`elems` + +A tensor or (possibly nested) sequence of tensors, each of which will +be unpacked along their first dimension. The nested sequence of the +resulting slices will be the first argument to `fn`. +
+`initializer` + +(optional) A tensor or (possibly nested) sequence of tensors, +as the initial value for the accumulator. +
+`parallel_iterations` + +(optional) The number of iterations allowed to run in +parallel. +
+`back_prop` + +(optional) True enables support for back propagation. +
+`swap_memory` + +(optional) True enables GPU-CPU memory swapping. +
+`name` + +(optional) Name prefix for the returned tensors. +
+ + + + + + + + + + + +
+A tensor or (possibly nested) sequence of tensors, resulting from applying +`fn` consecutively to the list of tensors unpacked from `elems`, from first +to last. +
+ + + + + + + + + + + + +
+`TypeError` + +if `fn` is not callable. +
+ + + +#### Example: + +```python +elems = tf.constant([1, 2, 3, 4, 5, 6]) +sum = foldl(lambda a, x: a + x, elems) +# sum == 21 +``` diff --git a/site/en/api_docs/python/tf/compat/v1/foldr.md b/site/en/api_docs/python/tf/compat/v1/foldr.md new file mode 100644 index 00000000000..30ca1c7bcc8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/foldr.md @@ -0,0 +1,156 @@ +description: foldr on the list of tensors unpacked from elems on dimension 0. + +
+ + +
+ +# tf.compat.v1.foldr + + + + + + + + + +foldr on the list of tensors unpacked from `elems` on dimension 0. + + + + + + + +This foldr operator repeatedly applies the callable `fn` to a sequence +of elements from last to first. The elements are made of the tensors +unpacked from `elems`. The callable fn takes two tensors as arguments. +The first argument is the accumulated value computed from the preceding +invocation of fn, and the second is the value at the current position of +`elems`. If `initializer` is None, `elems` must contain at least one element, +and its first element is used as the initializer. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is `fn(initializer, values[0]).shape`. + +This method also allows multi-arity `elems` and output of `fn`. If `elems` +is a (possibly nested) list or tuple of tensors, then each of these tensors +must have a matching first (unpack) dimension. The signature of `fn` may +match the structure of `elems`. That is, if `elems` is +`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is: +`fn = lambda (t1, [t2, t3, [t4, t5]]):`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The callable to be performed. +
+`elems` + +A tensor or (possibly nested) sequence of tensors, each of which will +be unpacked along their first dimension. The nested sequence of the +resulting slices will be the first argument to `fn`. +
+`initializer` + +(optional) A tensor or (possibly nested) sequence of tensors, +as the initial value for the accumulator. +
+`parallel_iterations` + +(optional) The number of iterations allowed to run in +parallel. +
+`back_prop` + +(optional) True enables support for back propagation. +
+`swap_memory` + +(optional) True enables GPU-CPU memory swapping. +
+`name` + +(optional) Name prefix for the returned tensors. +
+ + + + + + + + + + + +
+A tensor or (possibly nested) sequence of tensors, resulting from applying +`fn` consecutively to the list of tensors unpacked from `elems`, from last +to first. +
+ + + + + + + + + + + + +
+`TypeError` + +if `fn` is not callable. +
+ + + +#### Example: + +```python +elems = [1, 2, 3, 4, 5, 6] +sum = foldr(lambda a, x: a + x, elems) +# sum == 21 +``` diff --git a/site/en/api_docs/python/tf/compat/v1/gather.md b/site/en/api_docs/python/tf/compat/v1/gather.md new file mode 100644 index 00000000000..c7ad1dd3878 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gather.md @@ -0,0 +1,154 @@ +description: Gather slices from params axis axis according to indices. + +
+ + +
+ +# tf.compat.v1.gather + + + + + + + + + +Gather slices from params axis `axis` according to indices. + + + + + + + +Gather slices from params axis `axis` according to `indices`. `indices` must +be an integer tensor of any dimension (usually 0-D or 1-D). + +For 0-D (scalar) `indices`: + +$$\begin{align*} +output[p_0, ..., p_{axis-1}, && &&& p_{axis + 1}, ..., p_{N-1}] = \\ +params[p_0, ..., p_{axis-1}, && indices, &&& p_{axis + 1}, ..., p_{N-1}] +\end{align*}$$ + +Where *N* = `ndims(params)`. + +For 1-D (vector) `indices` with `batch_dims=0`: + +$$\begin{align*} +output[p_0, ..., p_{axis-1}, && &i, &&p_{axis + 1}, ..., p_{N-1}] =\\ +params[p_0, ..., p_{axis-1}, && indices[&i], &&p_{axis + 1}, ..., p_{N-1}] +\end{align*}$$ + +In the general case, produces an output tensor where: + +$$\begin{align*} +output[p_0, &..., p_{axis-1}, & + &i_{B}, ..., i_{M-1}, & + p_{axis + 1}, &..., p_{N-1}] = \\ +params[p_0, &..., p_{axis-1}, & + indices[p_0, ..., p_{B-1}, &i_{B}, ..., i_{M-1}], & + p_{axis + 1}, &..., p_{N-1}] +\end{align*}$$ + +Where *N* = `ndims(params)`, *M* = `ndims(indices)`, and *B* = `batch_dims`. +Note that `params.shape[:batch_dims]` must be identical to +`indices.shape[:batch_dims]`. + +The shape of the output tensor is: + +> `output.shape = params.shape[:axis] + indices.shape[batch_dims:] + +> params.shape[axis + 1:]`. + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, a 0 is stored in the corresponding +output value. + +See also tf.gather_nd. + +
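+A brief runnable sketch of row and column gathering (the
+`disable_eager_execution()` call is only needed when running under TF 2.x):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+params = tf.constant([[ 0,  1,  2],
+                      [10, 11, 12],
+                      [20, 21, 22]])
+
+rows = tf.compat.v1.gather(params, indices=[2, 0], axis=0)  # pick rows 2 and 0
+cols = tf.compat.v1.gather(params, indices=[1, 2], axis=1)  # pick columns 1 and 2
+
+with tf.compat.v1.Session() as sess:
+  print(sess.run(rows))  # [[20 21 22] [ 0  1  2]]
+  print(sess.run(cols))  # [[ 1  2] [11 12] [21 22]]
+```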
+ +
+ + + + + + + + + + + + + + + + + + + + + + + + + +
+`params` + +The `Tensor` from which to gather values. Must be at least rank +`axis + 1`. +
+`indices` + +The index `Tensor`. Must be one of the following types: `int32`, +`int64`. Must be in range `[0, params.shape[axis])`. +
+`validate_indices` + +Deprecated, does nothing. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. The +`axis` in `params` to gather `indices` from. Must be greater than or equal +to `batch_dims`. Defaults to the first non-batch dimension. Supports +negative indexes. +
+`batch_dims` + +An `integer`. The number of batch dimensions. Must be less +than or equal to `rank(indices)`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `params`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gather_nd.md b/site/en/api_docs/python/tf/compat/v1/gather_nd.md new file mode 100644 index 00000000000..1bf3ee2674c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gather_nd.md @@ -0,0 +1,230 @@ +description: Gather slices from params into a Tensor with shape specified by indices. + +
+ + +
+ +# tf.compat.v1.gather_nd + + + + + + + + + +Gather slices from `params` into a Tensor with shape specified by `indices`. + + + + + + + + + +`indices` is an K-dimensional integer tensor, best thought of as a +(K-1)-dimensional tensor of indices into `params`, where each element defines +a slice of `params`: + + output[\\(i_0, ..., i_{K-2}\\)] = params[indices[\\(i_0, ..., i_{K-2}\\)]] + +Whereas in tf.gather `indices` defines slices into the first +dimension of `params`, in tf.gather_nd, `indices` defines slices into the +first `N` dimensions of `params`, where `N = indices.shape[-1]`. + +The last dimension of `indices` can be at most the rank of +`params`: + + indices.shape[-1] <= params.rank + +The last dimension of `indices` corresponds to elements +(if `indices.shape[-1] == params.rank`) or slices +(if `indices.shape[-1] < params.rank`) along dimension `indices.shape[-1]` +of `params`. The output tensor has shape + + indices.shape[:-1] + params.shape[indices.shape[-1]:] + +Additionally both 'params' and 'indices' can have M leading batch +dimensions that exactly match. In this case 'batch_dims' must be M. + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, a 0 is stored in the +corresponding output value. + +Some examples below. + +Simple indexing into a matrix: + +```python + indices = [[0, 0], [1, 1]] + params = [['a', 'b'], ['c', 'd']] + output = ['a', 'd'] +``` + +Slice indexing into a matrix: + +```python + indices = [[1], [0]] + params = [['a', 'b'], ['c', 'd']] + output = [['c', 'd'], ['a', 'b']] +``` + +Indexing into a 3-tensor: + +```python + indices = [[1]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[['a1', 'b1'], ['c1', 'd1']]] + + + indices = [[0, 1], [1, 0]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [['c0', 'd0'], ['a1', 'b1']] + + + indices = [[0, 0, 1], [1, 0, 1]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = ['b0', 'b1'] +``` + +The examples below are for the case when only indices have leading extra +dimensions. If both 'params' and 'indices' have leading batch dimensions, use +the 'batch_dims' parameter to run gather_nd in batch mode. 
+ +Batched indexing into a matrix: + +```python + indices = [[[0, 0]], [[0, 1]]] + params = [['a', 'b'], ['c', 'd']] + output = [['a'], ['b']] +``` + +Batched slice indexing into a matrix: + +```python + indices = [[[1]], [[0]]] + params = [['a', 'b'], ['c', 'd']] + output = [[['c', 'd']], [['a', 'b']]] +``` + +Batched indexing into a 3-tensor: + +```python + indices = [[[1]], [[0]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[[['a1', 'b1'], ['c1', 'd1']]], + [[['a0', 'b0'], ['c0', 'd0']]]] + + indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[['c0', 'd0'], ['a1', 'b1']], + [['a0', 'b0'], ['c1', 'd1']]] + + + indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [['b0', 'b1'], ['d0', 'c1']] +``` + +Examples with batched 'params' and 'indices': + +```python + batch_dims = 1 + indices = [[1], [0]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [['c0', 'd0'], ['a1', 'b1']] + + batch_dims = 1 + indices = [[[1]], [[0]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[['c0', 'd0']], [['a1', 'b1']]] + + batch_dims = 1 + indices = [[[1, 0]], [[0, 1]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [['c0'], ['b1']] +``` + +See also tf.gather. + + + + + + + + + + + + + + + + + + + +
+`params` + +A `Tensor`. The tensor from which to gather values. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`name` + +A name for the operation (optional). +
+`batch_dims` + +An integer or a scalar 'Tensor'. The number of batch dimensions. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `params`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/get_collection.md b/site/en/api_docs/python/tf/compat/v1/get_collection.md new file mode 100644 index 00000000000..e3d8219321f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/get_collection.md @@ -0,0 +1,87 @@ +description: Wrapper for Graph.get_collection() using the default graph. + +
+ + +
+ +# tf.compat.v1.get_collection + + + + + + + + + +Wrapper for `Graph.get_collection()` using the default graph. + + + + + + + +See tf.Graph.get_collection +for more details. + + + + + + + + + + + + + +
+`key` + +The key for the collection. For example, the `GraphKeys` class contains +many standard names for collections. +
+`scope` + +(Optional.) If supplied, the resulting list is filtered to include +only items whose `name` attribute matches using `re.match`. Items without +a `name` attribute are never returned if a scope is supplied. The +choice of `re.match` means that a `scope` without special tokens filters +by prefix. +
+ + + + + + + + + + + +
+The list of values in the collection with the given `name`, or +an empty list if no value has been added to that collection. The +list contains the values in the order under which they were +collected. +
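+A short sketch of reading back a standard collection and a custom one (the
+collection key `'my_losses'` is an arbitrary example name):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+with tf.Graph().as_default():
+  w = tf.compat.v1.get_variable('w', shape=[2, 2])
+  tf.compat.v1.add_to_collection('my_losses', tf.reduce_sum(w))
+
+  trainable = tf.compat.v1.get_collection(
+      tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES)
+  losses = tf.compat.v1.get_collection('my_losses')
+
+  print([v.name for v in trainable])  # ['w:0']
+  print(losses)                       # [<tf.Tensor 'Sum:0' ...>]
+```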
+ + + + +#### Eager Compatibility +Collections are not supported when eager execution is enabled. + diff --git a/site/en/api_docs/python/tf/compat/v1/get_collection_ref.md b/site/en/api_docs/python/tf/compat/v1/get_collection_ref.md new file mode 100644 index 00000000000..d9be8d6fc77 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/get_collection_ref.md @@ -0,0 +1,76 @@ +description: Wrapper for Graph.get_collection_ref() using the default graph. + +
+ + +
+ +# tf.compat.v1.get_collection_ref + + + + + + + + + +Wrapper for `Graph.get_collection_ref()` using the default graph. + + + + + + + +See tf.Graph.get_collection_ref +for more details. + + + + + + + + + + +
+`key` + +The key for the collection. For example, the `GraphKeys` class contains +many standard names for collections. +
+ + + + + + + + + + + +
+The list of values in the collection with the given `name`, or an empty +list if no value has been added to that collection. Note that this returns +the collection list itself, which can be modified in place to change the +collection. +
+ + + + +#### Eager Compatibility +Collections are not supported when eager execution is enabled. + diff --git a/site/en/api_docs/python/tf/compat/v1/get_default_graph.md b/site/en/api_docs/python/tf/compat/v1/get_default_graph.md new file mode 100644 index 00000000000..6f9f43125f1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/get_default_graph.md @@ -0,0 +1,53 @@ +description: Returns the default graph for the current thread. + +
+ + +
+ +# tf.compat.v1.get_default_graph + + + + + + + + + +Returns the default graph for the current thread. + + + + + + + +The returned graph will be the innermost graph on which a +`Graph.as_default()` context has been entered, or a global default +graph if none has been explicitly created. + +NOTE: The default graph is a property of the current thread. If you +create a new thread, and wish to use the default graph in that +thread, you must explicitly add a `with g.as_default():` in that +thread's function. + + + + + + + + + +
+The default `Graph` being used in the current thread. +
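+A small sketch showing how the default graph follows `as_default()` contexts
+(graph mode is assumed, hence the `disable_eager_execution()` call for TF 2.x):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+g = tf.Graph()
+with g.as_default():
+  # Inside the context, g is this thread's default graph.
+  assert tf.compat.v1.get_default_graph() is g
+  c = tf.constant(1.0)
+  assert c.graph is g
+
+# Outside the context the previous (global) default graph is current again.
+assert tf.compat.v1.get_default_graph() is not g
+```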
+ diff --git a/site/en/api_docs/python/tf/compat/v1/get_default_session.md b/site/en/api_docs/python/tf/compat/v1/get_default_session.md new file mode 100644 index 00000000000..fedd113eb9b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/get_default_session.md @@ -0,0 +1,52 @@ +description: Returns the default session for the current thread. + +
+ + +
+ +# tf.compat.v1.get_default_session + + + + + + + + + +Returns the default session for the current thread. + + + + + + + +The returned `Session` will be the innermost session on which a +`Session` or `Session.as_default()` context has been entered. + +NOTE: The default session is a property of the current thread. If you +create a new thread, and wish to use the default session in that +thread, you must explicitly add a `with sess.as_default():` in that +thread's function. + + + + + + + + + +
+The default `Session` being used in the current thread. +
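+A minimal sketch of entering a default session with `Session.as_default()`
+(graph mode assumed):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+c = tf.constant(42.0)
+sess = tf.compat.v1.Session()
+
+with sess.as_default():
+  assert tf.compat.v1.get_default_session() is sess
+  print(c.eval())  # 42.0, evaluated in the default session
+
+# Outside the block no default session is registered (assuming none elsewhere).
+assert tf.compat.v1.get_default_session() is None
+sess.close()
+```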
+ diff --git a/site/en/api_docs/python/tf/compat/v1/get_local_variable.md b/site/en/api_docs/python/tf/compat/v1/get_local_variable.md new file mode 100644 index 00000000000..85e8fccd718 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/get_local_variable.md @@ -0,0 +1,253 @@ +description: Gets an existing *local* variable or creates a new one. + +
+ + +
+ +# tf.compat.v1.get_local_variable + + + + + + + + + +Gets an existing *local* variable or creates a new one. + + + + + + + +Behavior is the same as in `get_variable`, except that variables are +added to the `LOCAL_VARIABLES` collection and `trainable` is set to +`False`. +This function prefixes the name with the current variable scope +and performs reuse checks. See the +[Variable Scope How To](https://tensorflow.org/guide/variables) +for an extensive description of how reusing works. Here is a basic example: + +```python +def foo(): + with tf.variable_scope("foo", reuse=tf.AUTO_REUSE): + v = tf.get_variable("v", [1]) + return v + +v1 = foo() # Creates v. +v2 = foo() # Gets the same, existing v. +assert v1 == v2 +``` + +If initializer is `None` (the default), the default initializer passed in +the variable scope will be used. If that one is `None` too, a +`glorot_uniform_initializer` will be used. The initializer can also be +a Tensor, in which case the variable is initialized to this value and shape. + +Similarly, if the regularizer is `None` (the default), the default regularizer +passed in the variable scope will be used (if that is `None` too, +then by default no regularization is performed). + +If a partitioner is provided, a `PartitionedVariable` is returned. +Accessing this object as a `Tensor` returns the shards concatenated along +the partition axis. + +Some useful partitioners are available. See, e.g., +`variable_axis_size_partitioner` and `min_max_variable_partitioner`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +The name of the new or existing variable. +
+`shape` + +Shape of the new or existing variable. +
+`dtype` + +Type of the new or existing variable (defaults to `DT_FLOAT`). +
+`initializer` + +Initializer for the variable if one is created. Can either be +an initializer object or a Tensor. If it's a Tensor, its shape must be known +unless validate_shape is False. +
+`regularizer` + +A (Tensor -> Tensor or None) function; the result of +applying it on a newly created variable will be added to the collection +`tf.GraphKeys.REGULARIZATION_LOSSES` and can be used for regularization. +
+`collections` + +List of graph collections keys to add the Variable to. +Defaults to `[GraphKeys.LOCAL_VARIABLES]` (see tf.Variable). +
+`caching_device` + +Optional device string or function describing where the +Variable should be cached for reading. Defaults to the Variable's +device. If not `None`, caches on another device. Typical use is to +cache on the device where the Ops using the Variable reside, to +deduplicate copying through `Switch` and other conditional statements. +
+`partitioner` + +Optional callable that accepts a fully defined `TensorShape` +and `dtype` of the Variable to be created, and returns a list of +partitions for each axis (currently only one axis can be partitioned). +
+`validate_shape` + +If False, allows the variable to be initialized with a +value of unknown shape. If True, the default, the shape of initial_value +must be known. For this to be used the initializer must be a Tensor and +not an initializer object. +
+`use_resource` + +If False, creates a regular Variable. If true, creates an +experimental ResourceVariable instead with well-defined semantics. +Defaults to False (will later change to True). When eager execution is +enabled this argument is always forced to be True. +
+`custom_getter` + +Callable that takes as a first argument the true getter, and +allows overwriting the internal get_variable method. +The signature of `custom_getter` should match that of this method, +but the most future-proof version will allow for changes: +`def custom_getter(getter, *args, **kwargs)`. Direct access to +all `get_variable` parameters is also allowed: +`def custom_getter(getter, name, *args, **kwargs)`. A simple identity +custom getter that creates variables with modified names is: +```python +def custom_getter(getter, name, *args, **kwargs): +  return getter(name + '_suffix', *args, **kwargs) +``` +
+`constraint` + +An optional projection function to be applied to the variable +after being updated by an `Optimizer` (e.g. used to implement norm +constraints or value constraints for layer weights). The function must +take as input the unprojected Tensor representing the value of the +variable and return the Tensor for the projected value +(which must have the same shape). Constraints are not safe to +use when doing asynchronous distributed training. +
+`synchronization` + +Indicates when a distributed variable will be +aggregated. Accepted values are constants defined in the class +tf.VariableSynchronization. By default the synchronization is set to +`AUTO` and the current `DistributionStrategy` chooses +when to synchronize. +
+`aggregation` + +Indicates how a distributed variable will be aggregated. +Accepted values are constants defined in the class +tf.VariableAggregation. +
+ + + + + + + + + + + +
+The created or existing `Variable` (or `PartitionedVariable`, if a +partitioner was used). +
+ + + + + + + + + + + + +
+`ValueError` + +when creating a new variable and shape is not declared, +when violating reuse during variable creation, or when `initializer` dtype +and `dtype` don't match. Reuse is set inside `variable_scope`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/get_seed.md b/site/en/api_docs/python/tf/compat/v1/get_seed.md new file mode 100644 index 00000000000..e10eb6d244b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/get_seed.md @@ -0,0 +1,83 @@ +description: Returns the local seeds an operation should use given an op-specific seed. + +
+ + +
+ +# tf.compat.v1.get_seed + + + + + + + + + +Returns the local seeds an operation should use given an op-specific seed. + + + + + + + + + +Given operation-specific seed, `op_seed`, this helper function returns two +seeds derived from graph-level and op-level seeds. Many random operations +internally use the two seeds to allow user to change the seed globally for a +graph, or for only specific operations. + +For details on how the graph-level seed interacts with op seeds, see +tf.compat.v1.random.set_random_seed. + + + + + + + + + + +
+`op_seed` + +integer. +
+ + + + + + + + + + + +
+A tuple of two integers that should be used for the local seed of this +operation. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/get_session_handle.md b/site/en/api_docs/python/tf/compat/v1/get_session_handle.md new file mode 100644 index 00000000000..26bef727a3b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/get_session_handle.md @@ -0,0 +1,110 @@ +description: Return the handle of data. + +
+ + +
+ +# tf.compat.v1.get_session_handle + + + + + + + + + +Return the handle of `data`. + + + + + + + +This is EXPERIMENTAL and subject to change. + +Keep `data` "in-place" in the runtime and create a handle that can be +used to retrieve `data` in a subsequent run(). + +Combined with `get_session_tensor`, we can keep a tensor produced in +one run call in place, and use it as the input in a future run call. + + + + + + + + + + + + + +
+`data` + +A tensor to be stored in the session. +
+`name` + +Optional name prefix for the return tensor. +
+ + + + + + + + + + + +
+A scalar string tensor representing a unique handle for `data`. +
+ + + + + + + + + + + + +
+`TypeError` + +if `data` is not a Tensor. +
+ + + +#### Example: + + + +```python +c = tf.multiply(a, b) +h = tf.compat.v1.get_session_handle(c) +h = sess.run(h) + +p, a = tf.compat.v1.get_session_tensor(h.handle, tf.float32) +b = tf.multiply(a, 10) +c = sess.run(b, feed_dict={p: h.handle}) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/get_session_tensor.md b/site/en/api_docs/python/tf/compat/v1/get_session_tensor.md new file mode 100644 index 00000000000..c445bc08266 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/get_session_tensor.md @@ -0,0 +1,100 @@ +description: Get the tensor of type dtype by feeding a tensor handle. + +
+ + +
+ +# tf.compat.v1.get_session_tensor + + + + + + + + + +Get the tensor of type `dtype` by feeding a tensor handle. + + + + + + + +This is EXPERIMENTAL and subject to change. + +Get the value of the tensor from a tensor handle. The tensor +is produced in a previous run() and stored in the state of the +session. + + + + + + + + + + + + + + + + +
+`handle` + +The string representation of a persistent tensor handle. +
+`dtype` + +The type of the output tensor. +
+`name` + +Optional name prefix for the return tensor. +
+ + + + + + + + + + + +
+A pair of tensors. The first is a placeholder for feeding a +tensor handle and the second is the tensor in the session state +keyed by the tensor handle. +
+ + + +#### Example: + + + +```python +c = tf.multiply(a, b) +h = tf.compat.v1.get_session_handle(c) +h = sess.run(h) + +p, a = tf.compat.v1.get_session_tensor(h.handle, tf.float32) +b = tf.multiply(a, 10) +c = sess.run(b, feed_dict={p: h.handle}) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/get_variable.md b/site/en/api_docs/python/tf/compat/v1/get_variable.md new file mode 100644 index 00000000000..e27c5dcda4a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/get_variable.md @@ -0,0 +1,258 @@ +description: Gets an existing variable with these parameters or create a new one. + +
+ + +
+ +# tf.compat.v1.get_variable + + + + + + + + + +Gets an existing variable with these parameters or create a new one. + + + + + + + +This function prefixes the name with the current variable scope +and performs reuse checks. See the +[Variable Scope How To](https://tensorflow.org/guide/variables) +for an extensive description of how reusing works. Here is a basic example: + +```python +def foo(): + with tf.variable_scope("foo", reuse=tf.AUTO_REUSE): + v = tf.get_variable("v", [1]) + return v + +v1 = foo() # Creates v. +v2 = foo() # Gets the same, existing v. +assert v1 == v2 +``` + +If initializer is `None` (the default), the default initializer passed in +the variable scope will be used. If that one is `None` too, a +`glorot_uniform_initializer` will be used. The initializer can also be +a Tensor, in which case the variable is initialized to this value and shape. + +Similarly, if the regularizer is `None` (the default), the default regularizer +passed in the variable scope will be used (if that is `None` too, +then by default no regularization is performed). + +If a partitioner is provided, a `PartitionedVariable` is returned. +Accessing this object as a `Tensor` returns the shards concatenated along +the partition axis. + +Some useful partitioners are available. See, e.g., +`variable_axis_size_partitioner` and `min_max_variable_partitioner`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +The name of the new or existing variable. +
+`shape` + +Shape of the new or existing variable. +
+`dtype` + +Type of the new or existing variable (defaults to `DT_FLOAT`). +
+`initializer` + +Initializer for the variable if one is created. Can either be +an initializer object or a Tensor. If it's a Tensor, its shape must be known +unless validate_shape is False. +
+`regularizer` + +A (Tensor -> Tensor or None) function; the result of +applying it on a newly created variable will be added to the collection +`tf.GraphKeys.REGULARIZATION_LOSSES` and can be used for regularization. +
+`trainable` + +If `True` also add the variable to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`collections` + +List of graph collections keys to add the Variable to. +Defaults to `[GraphKeys.GLOBAL_VARIABLES]` (see tf.Variable). +
+`caching_device` + +Optional device string or function describing where the +Variable should be cached for reading. Defaults to the Variable's +device. If not `None`, caches on another device. Typical use is to +cache on the device where the Ops using the Variable reside, to +deduplicate copying through `Switch` and other conditional statements. +
+`partitioner` + +Optional callable that accepts a fully defined `TensorShape` +and `dtype` of the Variable to be created, and returns a list of +partitions for each axis (currently only one axis can be partitioned). +
+`validate_shape` + +If False, allows the variable to be initialized with a +value of unknown shape. If True, the default, the shape of initial_value +must be known. For this to be used the initializer must be a Tensor and +not an initializer object. +
+`use_resource` + +If False, creates a regular Variable. If true, creates an +experimental ResourceVariable instead with well-defined semantics. +Defaults to False (will later change to True). When eager execution is +enabled this argument is always forced to be True. +
+`custom_getter` + +Callable that takes as a first argument the true getter, and +allows overwriting the internal get_variable method. +The signature of `custom_getter` should match that of this method, +but the most future-proof version will allow for changes: +`def custom_getter(getter, *args, **kwargs)`. Direct access to +all `get_variable` parameters is also allowed: +`def custom_getter(getter, name, *args, **kwargs)`. A simple identity +custom getter that creates variables with modified names is: +```python +def custom_getter(getter, name, *args, **kwargs): +  return getter(name + '_suffix', *args, **kwargs) +``` +
+`constraint` + +An optional projection function to be applied to the variable +after being updated by an `Optimizer` (e.g. used to implement norm +constraints or value constraints for layer weights). The function must +take as input the unprojected Tensor representing the value of the +variable and return the Tensor for the projected value +(which must have the same shape). Constraints are not safe to +use when doing asynchronous distributed training. +
+`synchronization` + +Indicates when a distributed variable will be +aggregated. Accepted values are constants defined in the class +tf.VariableSynchronization. By default the synchronization is set to +`AUTO` and the current `DistributionStrategy` chooses +when to synchronize. +
+`aggregation` + +Indicates how a distributed variable will be aggregated. +Accepted values are constants defined in the class +tf.VariableAggregation. +
+ + + + + + + + + + + +
+The created or existing `Variable` (or `PartitionedVariable`, if a +partitioner was used). +
+ + + + + + + + + + + + +
+`ValueError` + +when creating a new variable and shape is not declared, +when violating reuse during variable creation, or when `initializer` dtype +and `dtype` don't match. Reuse is set inside `variable_scope`. +
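+A compact sketch of the sharing pattern described above (the scope name and
+shapes are illustrative):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+def dense(x):
+  # AUTO_REUSE creates 'layer/w' on the first call and reuses it afterwards.
+  with tf.compat.v1.variable_scope('layer', reuse=tf.compat.v1.AUTO_REUSE):
+    w = tf.compat.v1.get_variable(
+        'w', shape=[3, 2],
+        initializer=tf.compat.v1.glorot_uniform_initializer())
+    return tf.matmul(x, w)
+
+x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3])
+y1 = dense(x)
+y2 = dense(x)  # shares the variable created by the first call
+
+assert len(tf.compat.v1.trainable_variables()) == 1
+```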
+ diff --git a/site/en/api_docs/python/tf/compat/v1/get_variable_scope.md b/site/en/api_docs/python/tf/compat/v1/get_variable_scope.md new file mode 100644 index 00000000000..bc515b35fce --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/get_variable_scope.md @@ -0,0 +1,31 @@ +description: Returns the current variable scope. + +
+ + +
+ +# tf.compat.v1.get_variable_scope + + + + + + + + + +Returns the current variable scope. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/gfile.md b/site/en/api_docs/python/tf/compat/v1/gfile.md new file mode 100644 index 00000000000..e3e827a0367 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile.md @@ -0,0 +1,55 @@ +description: Import router for file_io. + +
+ + +
+ +# Module: tf.compat.v1.gfile + + + + + + + + + +Import router for file_io. + + + +## Classes + +[`class FastGFile`](../../../tf/compat/v1/gfile/FastGFile.md): File I/O wrappers without thread locking. + +[`class GFile`](../../../tf/io/gfile/GFile.md): File I/O wrappers without thread locking. + +[`class Open`](../../../tf/io/gfile/GFile.md): File I/O wrappers without thread locking. + +## Functions + +[`Copy(...)`](../../../tf/compat/v1/gfile/Copy.md): Copies data from `oldpath` to `newpath`. + +[`DeleteRecursively(...)`](../../../tf/compat/v1/gfile/DeleteRecursively.md): Deletes everything under dirname recursively. + +[`Exists(...)`](../../../tf/compat/v1/gfile/Exists.md): Determines whether a path exists or not. + +[`Glob(...)`](../../../tf/compat/v1/gfile/Glob.md): Returns a list of files that match the given pattern(s). + +[`IsDirectory(...)`](../../../tf/compat/v1/gfile/IsDirectory.md): Returns whether the path is a directory or not. + +[`ListDirectory(...)`](../../../tf/compat/v1/gfile/ListDirectory.md): Returns a list of entries contained within a directory. + +[`MakeDirs(...)`](../../../tf/compat/v1/gfile/MakeDirs.md): Creates a directory and all parent/intermediate directories. + +[`MkDir(...)`](../../../tf/compat/v1/gfile/MkDir.md): Creates a directory with the name `dirname`. + +[`Remove(...)`](../../../tf/compat/v1/gfile/Remove.md): Deletes the file located at 'filename'. + +[`Rename(...)`](../../../tf/compat/v1/gfile/Rename.md): Rename or move a file / directory. + +[`Stat(...)`](../../../tf/compat/v1/gfile/Stat.md): Returns file statistics for a given path. + +[`Walk(...)`](../../../tf/compat/v1/gfile/Walk.md): Recursive directory tree generator for directories. + diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/Copy.md b/site/en/api_docs/python/tf/compat/v1/gfile/Copy.md new file mode 100644 index 00000000000..ae7b1bb7db5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/Copy.md @@ -0,0 +1,82 @@ +description: Copies data from oldpath to newpath. + +
+ + +
+ +# tf.compat.v1.gfile.Copy + + + + + + + + + +Copies data from `oldpath` to `newpath`. + + + + + + + + + + + + + + + + + + + + + + + +
+`oldpath` + +string, name of the file whose contents need to be copied +
+`newpath` + +string, name of the file to which to copy +
+`overwrite` + +boolean, if false it's an error for `newpath` to be occupied by +an existing file. +
+ + + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
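+A tiny usage sketch (the paths below are hypothetical):
+
+```python
+import tensorflow as tf
+
+src = '/tmp/source.txt'
+dst = '/tmp/destination.txt'
+
+with tf.compat.v1.gfile.GFile(src, 'w') as f:
+  f.write('some contents')
+
+tf.compat.v1.gfile.Copy(src, dst, overwrite=True)  # replace dst if it exists
+assert tf.compat.v1.gfile.Exists(dst)
+```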
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/DeleteRecursively.md b/site/en/api_docs/python/tf/compat/v1/gfile/DeleteRecursively.md new file mode 100644 index 00000000000..64d959b7896 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/DeleteRecursively.md @@ -0,0 +1,67 @@ +description: Deletes everything under dirname recursively. + +
+ + +
+ +# tf.compat.v1.gfile.DeleteRecursively + + + + + + + + + +Deletes everything under dirname recursively. + + + + + + + + + + + + + + + + + +
+`dirname` + +string, a path to a directory +
+ + + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/Exists.md b/site/en/api_docs/python/tf/compat/v1/gfile/Exists.md new file mode 100644 index 00000000000..611ad187a1c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/Exists.md @@ -0,0 +1,82 @@ +description: Determines whether a path exists or not. + +
+ + +
+ +# tf.compat.v1.gfile.Exists + + + + + + + + + +Determines whether a path exists or not. + + + + + + + + + + + + + + + + + +
+`filename` + +string, a path +
+ + + + + + + + + + + +
+True if the path exists, whether it's a file or a directory. +False if the path does not exist and there are no filesystem errors. +
+ + + + + + + + + + + + +
+`errors.OpError` + +Propagates any errors reported by the FileSystem API. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/FastGFile.md b/site/en/api_docs/python/tf/compat/v1/gfile/FastGFile.md new file mode 100644 index 00000000000..362dacbf126 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/FastGFile.md @@ -0,0 +1,313 @@ +description: File I/O wrappers without thread locking. + +
+ + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.gfile.FastGFile + + + + + + + + + +File I/O wrappers without thread locking. + + + + + + + +Note, that this is somewhat like builtin Python file I/O, but +there are semantic differences to make it more efficient for +some backing filesystems. For example, a write mode file will +not be opened until the first write call (to minimize RPC +invocations in network filesystems). + + + + + + + + + + + + + + + +
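+A short read/write sketch with the context-manager interface (the path is
+hypothetical):
+
+```python
+import tensorflow as tf
+
+path = '/tmp/example.txt'
+
+# The file is only opened on the first write call, as noted above.
+with tf.compat.v1.gfile.FastGFile(path, 'w') as f:
+  f.write('hello\n')
+  f.write('world\n')
+
+with tf.compat.v1.gfile.FastGFile(path, 'r') as f:
+  print(f.readline())  # 'hello\n'
+  print(f.read())      # remaining contents: 'world\n'
+```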
+`mode` + +Returns the mode in which the file was opened. +
+`name` + +Returns the file name. +
+ + + +## Methods + +

close

+ +View source + + + +Closes FileIO. Should be called for the WritableFile to be flushed. + + +

flush

+ +View source + + + +Flushes the Writable file. + +This only ensures that the data has made its way out of the process without +any guarantees on whether it's written to disk. This means that the +data would survive an application crash but not necessarily an OS crash. + +

next

+ +View source + + + + + + +

read

+ +View source + + + +Returns the contents of a file as a string. + +Starts reading from current position in file. + + + + + + + + + + +
Args
+`n` + +Read `n` bytes if `n != -1`. If `n = -1`, reads to end of file. +
+ + + + + + + + + + + +
Returns
+`n` bytes of the file (or whole file) in bytes mode or `n` bytes of the +string if in string (regular) mode. +
+ + + +

readline

+ +View source + + + +Reads the next line from the file. Leaves the '\n' at the end. + + +

readlines

+ +View source + + + +Returns all lines from the file in a list. + + +

seek

+ +View source + + + +Seeks to the offset in the file. (deprecated arguments) + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(position)`. They will be removed in a future version. +Instructions for updating: +position is deprecated in favor of the offset argument. + + + + + + + + + + + + + +
Args
+`offset` + +The byte count relative to the whence argument. +
+`whence` + +Valid values for whence are: +0: start of the file (default) +1: relative to the current position of the file +2: relative to the end of file. `offset` is usually negative. +
+ + + +

seekable

+ +View source + + + +Returns True as FileIO supports random access ops of seek()/tell() + + +

size

+ +View source + + + +Returns the size of the file. + + +

tell

+ +View source + + + +Returns the current position in the file. + + +

write

+ +View source + + + +Writes file_content to the file. Appends to the end of the file. + + +

__enter__

+ +View source + + + +Make usable with "with" statement. + + +

__exit__

+ +View source + + + +Make usable with "with" statement. + + +

__iter__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/Glob.md b/site/en/api_docs/python/tf/compat/v1/gfile/Glob.md new file mode 100644 index 00000000000..c0c814c0e74 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/Glob.md @@ -0,0 +1,81 @@ +description: Returns a list of files that match the given pattern(s). + +
+ + +
+ +# tf.compat.v1.gfile.Glob + + + + + + + + + +Returns a list of files that match the given pattern(s). + + + + + + + + + + + + + + + + + +
+`filename` + +string or iterable of strings. The glob pattern(s). +
+ + + + + + + + + + + +
+A list of strings containing filenames that match the given pattern(s). +
+ + + + + + + + + + + + +
+`errors.OpError` + +If there are filesystem / directory listing errors. +
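+A small sketch with single and multiple patterns (paths and patterns are
+hypothetical):
+
+```python
+import tensorflow as tf
+
+# All TFRecord shards in a directory.
+shards = tf.compat.v1.gfile.Glob('/tmp/data/train-*.tfrecord')
+for path in sorted(shards):
+  print(path)
+
+# An iterable of patterns is also accepted.
+extra = tf.compat.v1.gfile.Glob(['/tmp/run/*.log', '/tmp/run/ckpt-*'])
+```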
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/IsDirectory.md b/site/en/api_docs/python/tf/compat/v1/gfile/IsDirectory.md new file mode 100644 index 00000000000..e02228b3fc1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/IsDirectory.md @@ -0,0 +1,64 @@ +description: Returns whether the path is a directory or not. + +
+ + +
+ +# tf.compat.v1.gfile.IsDirectory + + + + + + + + + +Returns whether the path is a directory or not. + + + + + + + + + + + + + + + + + +
+`dirname` + +string, path to a potential directory +
+ + + + + + + + + + + +
+True, if the path is a directory; False otherwise +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/ListDirectory.md b/site/en/api_docs/python/tf/compat/v1/gfile/ListDirectory.md new file mode 100644 index 00000000000..6b1e20c76f5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/ListDirectory.md @@ -0,0 +1,80 @@ +description: Returns a list of entries contained within a directory. + +
+ + +
+ +# tf.compat.v1.gfile.ListDirectory + + + + + + + + + +Returns a list of entries contained within a directory. + + + + + + + +The list is in arbitrary order. It does not contain the special entries "." +and "..". + + + + + + + + + + +
+`dirname` + +string, path to a directory +
+ + + + + + + + + + + +
+[filename1, filename2, ... filenameN] as strings +
+ + + + + + + + + + + +
+errors.NotFoundError if directory doesn't exist +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/MakeDirs.md b/site/en/api_docs/python/tf/compat/v1/gfile/MakeDirs.md new file mode 100644 index 00000000000..60979f0ff01 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/MakeDirs.md @@ -0,0 +1,68 @@ +description: Creates a directory and all parent/intermediate directories. + +
+ + +
+ +# tf.compat.v1.gfile.MakeDirs + + + + + + + + + +Creates a directory and all parent/intermediate directories. + + + + + + + +It succeeds if dirname already exists and is writable. + + + + + + + + + + +
+`dirname` + +string, name of the directory to be created +
+ + + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/MkDir.md b/site/en/api_docs/python/tf/compat/v1/gfile/MkDir.md new file mode 100644 index 00000000000..2c5b5f4c1d4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/MkDir.md @@ -0,0 +1,69 @@ +description: Creates a directory with the name dirname. + +
+ + +
+ +# tf.compat.v1.gfile.MkDir + + + + + + + + + +Creates a directory with the name `dirname`. + + + + + + + + + + + + + + + + + +
+`dirname` + +string, name of the directory to be created +
+ + +Notes: The parent directories need to exist. Use tf.io.gfile.makedirs + instead if there is the possibility that the parent dirs don't exist. + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/Remove.md b/site/en/api_docs/python/tf/compat/v1/gfile/Remove.md new file mode 100644 index 00000000000..fc262753436 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/Remove.md @@ -0,0 +1,68 @@ +description: Deletes the file located at 'filename'. + +
+ + +
+ +# tf.compat.v1.gfile.Remove + + + + + + + + + +Deletes the file located at 'filename'. + + + + + + + + + + + + + + + + + +
+`filename` + +string, a filename +
+ + + + + + + + + + + + +
+`errors.OpError` + +Propagates any errors reported by the FileSystem API. E.g., +`NotFoundError` if the file does not exist. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/Rename.md b/site/en/api_docs/python/tf/compat/v1/gfile/Rename.md new file mode 100644 index 00000000000..3dc29198614 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/Rename.md @@ -0,0 +1,82 @@ +description: Rename or move a file / directory. + +
+ + +
+ +# tf.compat.v1.gfile.Rename + + + + + + + + + +Rename or move a file / directory. + + + + + + + + + + + + + + + + + + + + + + + +
+`oldname` + +string, pathname for a file +
+`newname` + +string, pathname to which the file needs to be moved +
+`overwrite` + +boolean, if false it's an error for `newname` to be occupied by +an existing file. +
+ + + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/Stat.md b/site/en/api_docs/python/tf/compat/v1/gfile/Stat.md new file mode 100644 index 00000000000..057910dd341 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/Stat.md @@ -0,0 +1,81 @@ +description: Returns file statistics for a given path. + +
+ + +
+ +# tf.compat.v1.gfile.Stat + + + + + + + + + +Returns file statistics for a given path. + + + + + + + + + + + + + + + + + +
+`filename` + +string, path to a file +
+ + + + + + + + + + + +
+FileStatistics struct that contains information about the path +
+ + + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gfile/Walk.md b/site/en/api_docs/python/tf/compat/v1/gfile/Walk.md new file mode 100644 index 00000000000..e4a3ae294de --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gfile/Walk.md @@ -0,0 +1,66 @@ +description: Recursive directory tree generator for directories. + +
+ + +
+ +# tf.compat.v1.gfile.Walk + + + + + + + + + +Recursive directory tree generator for directories. + + + + + + + + + + + + + + + + + + + + +
+`top` + +string, a Directory name +
+`in_order` + +bool, Traverse in order if True, post order if False. Errors that +happen while listing directories are ignored. +
+ + + +#### Yields: + +Each yield is a 3-tuple: the pathname of a directory, followed by lists of +all its subdirectories and leaf files. That is, each yield looks like: +`(dirname, [subdirname, subdirname, ...], [filename, filename, ...])`. +Each item is a string. diff --git a/site/en/api_docs/python/tf/compat/v1/global_variables.md b/site/en/api_docs/python/tf/compat/v1/global_variables.md new file mode 100644 index 00000000000..9dc5502d172 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/global_variables.md @@ -0,0 +1,76 @@ +description: Returns global variables. + +
+ + +
+ +# tf.compat.v1.global_variables + + + + + + + + + +Returns global variables. + + + + + + + +Global variables are variables that are shared across machines in a +distributed environment. The `Variable()` constructor or `get_variable()` +automatically adds new variables to the graph collection +`GraphKeys.GLOBAL_VARIABLES`. +This convenience function returns the contents of that collection. + +An alternative to global variables are local variables. See +tf.compat.v1.local_variables + + + + + + + + + + +
+`scope` + +(Optional.) A string. If supplied, the resulting list is filtered to +include only items whose `name` attribute matches `scope` using +`re.match`. Items without a `name` attribute are never returned if a scope +is supplied. The choice of `re.match` means that a `scope` without special +tokens filters by prefix. +
+ + + + + + + + + + + +
+A list of `Variable` objects. +
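+A brief sketch of listing and filtering global variables (graph mode assumed):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+with tf.Graph().as_default():
+  a = tf.compat.v1.get_variable('a', shape=[2])
+  step = tf.compat.v1.Variable(0, name='step', trainable=False)
+
+  print([v.name for v in tf.compat.v1.global_variables()])  # ['a:0', 'step:0']
+  print([v.name for v in tf.compat.v1.global_variables(scope='a')])  # ['a:0']
+```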
+ diff --git a/site/en/api_docs/python/tf/compat/v1/global_variables_initializer.md b/site/en/api_docs/python/tf/compat/v1/global_variables_initializer.md new file mode 100644 index 00000000000..ddda71c540d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/global_variables_initializer.md @@ -0,0 +1,57 @@ +description: Returns an Op that initializes global variables. + +
+ + +
+ +# tf.compat.v1.global_variables_initializer + + + + + + + + + +Returns an Op that initializes global variables. + + + + + + + + + +This is just a shortcut for `variables_initializer(global_variables())` + + + + + + + + + +
+An Op that initializes global variables in the graph. +
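+A minimal run sketch (graph mode assumed):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+w = tf.compat.v1.get_variable('w', shape=[2, 2])
+init_op = tf.compat.v1.global_variables_initializer()
+
+with tf.compat.v1.Session() as sess:
+  sess.run(init_op)      # variables must be initialized before first use
+  print(sess.run(w))
+```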
+ diff --git a/site/en/api_docs/python/tf/compat/v1/gradients.md b/site/en/api_docs/python/tf/compat/v1/gradients.md new file mode 100644 index 00000000000..2cf2e0f030a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/gradients.md @@ -0,0 +1,241 @@ +description: Constructs symbolic derivatives of sum of ys w.r.t. x in xs. + +
+ + +
+ +# tf.compat.v1.gradients + + + + + + + + + +Constructs symbolic derivatives of sum of `ys` w.r.t. x in `xs`. + + + + + + + +`ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys` +is a list of `Tensor`, holding the gradients received by the +`ys`. The list must be the same length as `ys`. + +`gradients()` adds ops to the graph to output the derivatives of `ys` with +respect to `xs`. It returns a list of `Tensor` of length `len(xs)` where +each tensor is the `sum(dy/dx)` for y in `ys` and for x in `xs`. + +`grad_ys` is a list of tensors of the same length as `ys` that holds +the initial gradients for each y in `ys`. When `grad_ys` is None, +we fill in a tensor of '1's of the shape of y for each y in `ys`. A +user can provide their own initial `grad_ys` to compute the +derivatives using a different initial gradient for each y (e.g., if +one wanted to weight the gradient differently for each value in +each y). + +`stop_gradients` is a `Tensor` or a list of tensors to be considered constant +with respect to all `xs`. These tensors will not be backpropagated through, +as though they had been explicitly disconnected using `stop_gradient`. Among +other things, this allows computation of partial derivatives as opposed to +total derivatives. For example: + +```python +a = tf.constant(0.) +b = 2 * a +g = tf.gradients(a + b, [a, b], stop_gradients=[a, b]) +``` + +Here the partial derivatives `g` evaluate to `[1.0, 1.0]`, compared to the +total derivatives `tf.gradients(a + b, [a, b])`, which take into account the +influence of `a` on `b` and evaluate to `[3.0, 1.0]`. Note that the above is +equivalent to: + +```python +a = tf.stop_gradient(tf.constant(0.)) +b = tf.stop_gradient(2 * a) +g = tf.gradients(a + b, [a, b]) +``` + +`stop_gradients` provides a way of stopping gradient after the graph has +already been constructed, as compared to tf.stop_gradient which is used +during graph construction. When the two approaches are combined, +backpropagation stops at both tf.stop_gradient nodes and nodes in +`stop_gradients`, whichever is encountered first. + +All integer tensors are considered constant with respect to all `xs`, as if +they were included in `stop_gradients`. + +`unconnected_gradients` determines the value returned for each x in xs if it +is unconnected in the graph to ys. By default this is None to safeguard +against errors. Mathematically these gradients are zero which can be requested +using the `'zero'` option. `tf.UnconnectedGradients` provides the +following options and behaviors: + +```python +a = tf.ones([1, 2]) +b = tf.ones([3, 1]) +g1 = tf.gradients([b], [a], unconnected_gradients='none') +sess.run(g1) # [None] + +g2 = tf.gradients([b], [a], unconnected_gradients='zero') +sess.run(g2) # [array([[0., 0.]], dtype=float32)] +``` + +Let us take one practical example which comes during the back propogation +phase. This function is used to evaluate the derivatives of the cost function +with respect to Weights `Ws` and Biases `bs`. Below sample implementation +provides the exaplantion of what it is actually used for : + +```python +Ws = tf.constant(0.) +bs = 2 * Ws +cost = Ws + bs # This is just an example. So, please ignore the formulas. +g = tf.gradients(cost, [Ws, bs]) +dCost_dW, dCost_db = g +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`ys` + +A `Tensor` or list of tensors to be differentiated. +
+`xs` + +A `Tensor` or list of tensors to be used for differentiation. +
+`grad_ys` + +Optional. A `Tensor` or list of tensors the same size as +`ys` and holding the gradients computed for each y in `ys`. +
+`name` + +Optional name to use for grouping all the gradient ops together. +Defaults to 'gradients'. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`gate_gradients` + +If True, add a tuple around the gradients returned +for an operation. This avoids some race conditions. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Accepted values are constants defined in the class `AggregationMethod`. +
+`stop_gradients` + +Optional. A `Tensor` or list of tensors not to differentiate +through. +
+`unconnected_gradients` + +Optional. Specifies the gradient value returned when +the given input tensors are unconnected. Accepted values are constants +defined in the class tf.UnconnectedGradients and the default value is +`none`. +
+ + + + + + + + + + + +
+A list of `Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)` +for y in `ys` and for x in `xs`. +
+ + + + + + + + + + + + + + + + + + +
+`LookupError` + +if one of the operations between `x` and `y` does not +have a registered gradient function. +
+`ValueError` + +if the arguments are invalid. +
+`RuntimeError` + +if called in Eager mode. +
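+A small end-to-end sketch that evaluates a symbolic gradient in a session
+(graph mode assumed):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+x = tf.compat.v1.placeholder(tf.float32, shape=[])
+y = 3.0 * x * x + 2.0 * x            # dy/dx = 6x + 2
+
+dy_dx = tf.compat.v1.gradients(y, [x])[0]
+
+with tf.compat.v1.Session() as sess:
+  print(sess.run(dy_dx, feed_dict={x: 4.0}))  # 26.0
+```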
+ diff --git a/site/en/api_docs/python/tf/compat/v1/graph_util.md b/site/en/api_docs/python/tf/compat/v1/graph_util.md new file mode 100644 index 00000000000..fcea01add2c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/graph_util.md @@ -0,0 +1,35 @@ +description: Helpers to manipulate a tensor graph in python. + +
+ + +
+ +# Module: tf.compat.v1.graph_util + + + + + + + + + +Helpers to manipulate a tensor graph in python. + + + +## Functions + +[`convert_variables_to_constants(...)`](../../../tf/compat/v1/graph_util/convert_variables_to_constants.md): Replaces all the variables in a graph with constants of the same values. (deprecated) + +[`extract_sub_graph(...)`](../../../tf/compat/v1/graph_util/extract_sub_graph.md): Extract the subgraph that can reach any of the nodes in 'dest_nodes'. (deprecated) + +[`import_graph_def(...)`](../../../tf/graph_util/import_graph_def.md): Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments) + +[`must_run_on_cpu(...)`](../../../tf/compat/v1/graph_util/must_run_on_cpu.md): Returns True if the given node_def must run on CPU, otherwise False. (deprecated) + +[`remove_training_nodes(...)`](../../../tf/compat/v1/graph_util/remove_training_nodes.md): Prunes out nodes that aren't needed for inference. (deprecated) + +[`tensor_shape_from_node_def_name(...)`](../../../tf/compat/v1/graph_util/tensor_shape_from_node_def_name.md): Convenience function to get a shape from a NodeDef's input string. (deprecated) + diff --git a/site/en/api_docs/python/tf/compat/v1/graph_util/convert_variables_to_constants.md b/site/en/api_docs/python/tf/compat/v1/graph_util/convert_variables_to_constants.md new file mode 100644 index 00000000000..541a0a2eef3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/graph_util/convert_variables_to_constants.md @@ -0,0 +1,121 @@ +description: Replaces all the variables in a graph with constants of the same values. (deprecated) + +
+ + +
+ +# tf.compat.v1.graph_util.convert_variables_to_constants + + + + + + + + + +Replaces all the variables in a graph with constants of the same values. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.graph_util.convert_variables_to_constants + +If you have a trained graph containing Variable ops, it can be convenient to +convert them all to Const ops holding the same values. This makes it possible +to describe the network fully with a single GraphDef file, and allows the +removal of a lot of ops related to loading and saving the variables. + + + + + + + + + + + + + + + + + + + + + + +
+`sess` + +Active TensorFlow session containing the variables. +
+`input_graph_def` + +GraphDef object holding the network. +
+`output_node_names` + +List of name strings for the result nodes of the graph. +
+`variable_names_whitelist` + +The set of variable names to convert (by default, +all variables are converted). +
+`variable_names_blacklist` + +The set of variable names to omit converting +to constants. +
+ + + + + + + + + + + +
+GraphDef containing a simplified version of the original. +
+ + + + + + + + + + + + +
+`RuntimeError` + +if a DT_RESOURCE op is found whose ancestor Variables are both +blacklisted AND whitelisted for freezing. +
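+A hedged freezing sketch along the lines described above (node names and
+shapes are illustrative; note the function itself is marked deprecated):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+with tf.Graph().as_default() as graph:
+  x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3], name='x')
+  w = tf.compat.v1.get_variable('w', shape=[3, 1])
+  y = tf.identity(tf.matmul(x, w), name='y')  # named output node
+
+  with tf.compat.v1.Session() as sess:
+    sess.run(tf.compat.v1.global_variables_initializer())
+    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
+        sess, graph.as_graph_def(), output_node_names=['y'])
+
+# 'frozen' is a GraphDef in which the variable has been folded into a Const.
+print(sorted({n.op for n in frozen.node}))
+```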
+ diff --git a/site/en/api_docs/python/tf/compat/v1/graph_util/extract_sub_graph.md b/site/en/api_docs/python/tf/compat/v1/graph_util/extract_sub_graph.md new file mode 100644 index 00000000000..0de6694eb92 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/graph_util/extract_sub_graph.md @@ -0,0 +1,91 @@ +description: Extract the subgraph that can reach any of the nodes in 'dest_nodes'. (deprecated) + +
+ + +
+ +# tf.compat.v1.graph_util.extract_sub_graph + + + + + + + + + +Extract the subgraph that can reach any of the nodes in 'dest_nodes'. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.graph_util.extract_sub_graph + + + + + + + + + + + + + +
+`graph_def` + +A graph_pb2.GraphDef proto. +
+`dest_nodes` + +A list of strings specifying the destination node names. +
+ + + + + + + + + + + +
+The GraphDef of the sub-graph. +
+ + + + + + + + + + + + +
+`TypeError` + +If 'graph_def' is not a graph_pb2.GraphDef proto. +
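+
+For example, a minimal sketch (the node names are only illustrative):
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+a = tf.constant(1.0, name="a")
+b = tf.constant(2.0, name="b")
+c = tf.add(a, b, name="c")
+d = tf.multiply(c, 3.0, name="d")  # not needed to compute "c"
+
+graph_def = tf.get_default_graph().as_graph_def()
+sub_graph_def = tf.graph_util.extract_sub_graph(graph_def, dest_nodes=["c"])
+
+# Only the nodes that "c" depends on remain; "d" has been dropped.
+print([node.name for node in sub_graph_def.node])
+```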
+ diff --git a/site/en/api_docs/python/tf/compat/v1/graph_util/must_run_on_cpu.md b/site/en/api_docs/python/tf/compat/v1/graph_util/must_run_on_cpu.md new file mode 100644 index 00000000000..74aa6072c74 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/graph_util/must_run_on_cpu.md @@ -0,0 +1,76 @@ +description: Returns True if the given node_def must run on CPU, otherwise False. (deprecated) + +
+ + +
+ +# tf.compat.v1.graph_util.must_run_on_cpu + + + + + + + + + +Returns True if the given node_def must run on CPU, otherwise False. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.graph_util.must_run_on_cpu + + + + + + + + + + + + + +
+`node` + +The node to be assigned to a device. Could be either an ops.Operation +or NodeDef. +
+`pin_variables_on_cpu` + +If True, this function will return False if node_def +represents a variable-related op. +
+ + + + + + + + + + + +
+True if the given node must run on CPU, otherwise False. +
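+
+A short sketch of how it might be called; the nodes here are only illustrative:
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+x = tf.constant([1, 2, 3], name="x")  # an int32 constant
+v = tf.Variable(1.0, name="v")
+
+# Either an ops.Operation or its NodeDef can be passed.
+print(tf.graph_util.must_run_on_cpu(x.op))
+print(tf.graph_util.must_run_on_cpu(v.op.node_def, pin_variables_on_cpu=True))
+```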
+ diff --git a/site/en/api_docs/python/tf/compat/v1/graph_util/remove_training_nodes.md b/site/en/api_docs/python/tf/compat/v1/graph_util/remove_training_nodes.md new file mode 100644 index 00000000000..197bb60b8ed --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/graph_util/remove_training_nodes.md @@ -0,0 +1,83 @@ +description: Prunes out nodes that aren't needed for inference. (deprecated) + +
+ + +
+ +# tf.compat.v1.graph_util.remove_training_nodes + + + + + + + + + +Prunes out nodes that aren't needed for inference. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.graph_util.remove_training_nodes + +There are nodes like Identity and CheckNumerics that are only useful +during training, and can be removed in graphs that will be used for +nothing but inference. Here we identify and remove them, returning an +equivalent graph. To be specific, CheckNumerics nodes are always removed, and +Identity nodes that aren't involved in control edges are spliced out so that +their input and outputs are directly connected. + + + + + + + + + + + + + +
+`input_graph` + +Model to analyze and prune. +
+`protected_nodes` + +An optional list of names of nodes to be kept +unconditionally. This is for example useful to preserve Identity output +nodes. +
+ + + + + + + + + + + +
+A list of nodes with the unnecessary ones removed. +
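+
+For example, a minimal sketch (node names are only illustrative):
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+x = tf.placeholder(tf.float32, [None, 4], name="x")
+y = tf.identity(tf.nn.relu(x), name="y")  # Identity wrapper around the output
+
+graph_def = tf.get_default_graph().as_graph_def()
+pruned_graph_def = tf.graph_util.remove_training_nodes(
+    graph_def, protected_nodes=["y"])  # keep the named output Identity node
+
+print([node.name for node in pruned_graph_def.node])
+```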
+ diff --git a/site/en/api_docs/python/tf/compat/v1/graph_util/tensor_shape_from_node_def_name.md b/site/en/api_docs/python/tf/compat/v1/graph_util/tensor_shape_from_node_def_name.md new file mode 100644 index 00000000000..6d71c0ec976 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/graph_util/tensor_shape_from_node_def_name.md @@ -0,0 +1,37 @@ +description: Convenience function to get a shape from a NodeDef's input string. (deprecated) + +
+ + +
+ +# tf.compat.v1.graph_util.tensor_shape_from_node_def_name + + + + + + + + + +Convenience function to get a shape from a NodeDef's input string. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.compat.v1.graph_util.tensor_shape_from_node_def_name \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/hessians.md b/site/en/api_docs/python/tf/compat/v1/hessians.md new file mode 100644 index 00000000000..5ab14acaea1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/hessians.md @@ -0,0 +1,125 @@ +description: Constructs the Hessian of sum of ys with respect to x in xs. + +
+ + +
+ +# tf.compat.v1.hessians + + + + + + + + + +Constructs the Hessian of sum of `ys` with respect to `x` in `xs`. + + + + + + + +`hessians()` adds ops to the graph to output the Hessian matrix of `ys` +with respect to `xs`. It returns a list of `Tensor` of length `len(xs)` +where each tensor is the Hessian of `sum(ys)`. + +The Hessian is a matrix of second-order partial derivatives of a scalar +tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`ys` + +A `Tensor` or list of tensors to be differentiated. +
+`xs` + +A `Tensor` or list of tensors to be used for differentiation. +
+`name` + +Optional name to use for grouping all the gradient ops together. +Defaults to 'hessians'. +
+`colocate_gradients_with_ops` + +See `gradients()` documentation for details. +
+`gate_gradients` + +See `gradients()` documentation for details. +
+`aggregation_method` + +See `gradients()` documentation for details. +
+ + + + + + + + + + + +
+A list of Hessian matrices of `sum(ys)` for each `x` in `xs`. +
+ + + + + + + + + + + + +
+`LookupError` + +if one of the operations between `xs` and `ys` does not +have a registered gradient function. +
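+
+For example, a minimal sketch for a 1-D input:
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+x = tf.constant([1.0, 2.0])
+y = tf.reduce_sum(x ** 3)       # y = x0^3 + x1^3
+
+hessian = tf.hessians(y, x)[0]  # one Hessian per entry of `xs`
+
+with tf.Session() as sess:
+  print(sess.run(hessian))      # [[6. 0.] [0. 12.]], i.e. diag(6 * x)
+```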
+ diff --git a/site/en/api_docs/python/tf/compat/v1/image.md b/site/en/api_docs/python/tf/compat/v1/image.md new file mode 100644 index 00000000000..2c07b854535 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image.md @@ -0,0 +1,294 @@ +description: Image ops. + +
+ + +
+ +# Module: tf.compat.v1.image + + + + + + + + + +Image ops. + + +The tf.image module contains various functions for image +processing and decoding-encoding Ops. + +Many of the encoding/decoding functions are also available in the +core tf.io module. + +## Image processing + +### Resizing + +The resizing Ops accept input images as tensors of several types. They always +output resized images as float32 tensors. + +The convenience function tf.image.resize supports both 4-D +and 3-D tensors as input and output. 4-D tensors are for batches of images, +3-D tensors for individual images. + +Resized images will be distorted if their original aspect ratio is not the +same as size. To avoid distortions see tf.image.resize_with_pad. + +* tf.image.resize +* tf.image.resize_with_pad +* tf.image.resize_with_crop_or_pad + +The Class tf.image.ResizeMethod provides various resize methods like +`bilinear`, `nearest_neighbor`. + +### Converting Between Colorspaces + +Image ops work either on individual images or on batches of images, depending on +the shape of their input Tensor. + +If 3-D, the shape is `[height, width, channels]`, and the Tensor represents one +image. If 4-D, the shape is `[batch_size, height, width, channels]`, and the +Tensor represents `batch_size` images. + +Currently, `channels` can usefully be 1, 2, 3, or 4. Single-channel images are +grayscale, images with 3 channels are encoded as either RGB or HSV. Images +with 2 or 4 channels include an alpha channel, which has to be stripped from the +image before passing the image to most image processing functions (and can be +re-attached later). + +Internally, images are either stored in as one `float32` per channel per pixel +(implicitly, values are assumed to lie in `[0,1)`) or one `uint8` per channel +per pixel (values are assumed to lie in `[0,255]`). + +TensorFlow can convert between images in RGB or HSV or YIQ. + +* tf.image.rgb_to_grayscale, tf.image.grayscale_to_rgb +* tf.image.rgb_to_hsv, tf.image.hsv_to_rgb +* tf.image.rgb_to_yiq, tf.image.yiq_to_rgb +* tf.image.rgb_to_yuv, tf.image.yuv_to_rgb +* tf.image.image_gradients +* tf.image.convert_image_dtype + +### Image Adjustments + +TensorFlow provides functions to adjust images in various ways: brightness, +contrast, hue, and saturation. Each adjustment can be done with predefined +parameters or with random parameters picked from predefined intervals. Random +adjustments are often useful to expand a training set and reduce overfitting. + +If several adjustments are chained it is advisable to minimize the number of +redundant conversions by first converting the images to the most natural data +type and representation. 
+ +* tf.image.adjust_brightness +* tf.image.adjust_contrast +* tf.image.adjust_gamma +* tf.image.adjust_hue +* tf.image.adjust_jpeg_quality +* tf.image.adjust_saturation +* tf.image.random_brightness +* tf.image.random_contrast +* tf.image.random_hue +* tf.image.random_saturation +* tf.image.per_image_standardization + +### Working with Bounding Boxes + +* tf.image.draw_bounding_boxes +* tf.image.combined_non_max_suppression +* tf.image.generate_bounding_box_proposals +* tf.image.non_max_suppression +* tf.image.non_max_suppression_overlaps +* tf.image.non_max_suppression_padded +* tf.image.non_max_suppression_with_scores +* tf.image.pad_to_bounding_box +* tf.image.sample_distorted_bounding_box + +### Cropping + +* tf.image.central_crop +* tf.image.crop_and_resize +* tf.image.crop_to_bounding_box +* tf.io.decode_and_crop_jpeg +* tf.image.extract_glimpse +* tf.image.random_crop +* tf.image.resize_with_crop_or_pad + +### Flipping, Rotating and Transposing + +* tf.image.flip_left_right +* tf.image.flip_up_down +* tf.image.random_flip_left_right +* tf.image.random_flip_up_down +* tf.image.rot90 +* tf.image.transpose + +## Image decoding and encoding + +TensorFlow provides Ops to decode and encode JPEG and PNG formats. Encoded +images are represented by scalar string Tensors, decoded images by 3-D uint8 +tensors of shape `[height, width, channels]`. (PNG also supports uint16.) + +Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]` + +The encode and decode Ops apply to one image at a time. Their input and output +are all of variable size. If you need fixed size images, pass the output of +the decode Ops to one of the cropping and resizing Ops. + +* tf.io.decode_bmp +* tf.io.decode_gif +* tf.io.decode_image +* tf.io.decode_jpeg +* tf.io.decode_and_crop_jpeg +* tf.io.decode_png +* tf.io.encode_jpeg +* tf.image.encode_png + +## Classes + +[`class ResizeMethod`](../../../tf/compat/v1/image/ResizeMethod.md): See v1.image.resize for details. + +## Functions + +[`adjust_brightness(...)`](../../../tf/image/adjust_brightness.md): Adjust the brightness of RGB or Grayscale images. + +[`adjust_contrast(...)`](../../../tf/image/adjust_contrast.md): Adjust contrast of RGB or grayscale images. + +[`adjust_gamma(...)`](../../../tf/image/adjust_gamma.md): Performs [Gamma Correction](http://en.wikipedia.org/wiki/Gamma_correction). + +[`adjust_hue(...)`](../../../tf/image/adjust_hue.md): Adjust hue of RGB images. + +[`adjust_jpeg_quality(...)`](../../../tf/image/adjust_jpeg_quality.md): Adjust jpeg encoding quality of an image. + +[`adjust_saturation(...)`](../../../tf/image/adjust_saturation.md): Adjust saturation of RGB images. + +[`central_crop(...)`](../../../tf/image/central_crop.md): Crop the central region of the image(s). + +[`combined_non_max_suppression(...)`](../../../tf/image/combined_non_max_suppression.md): Greedily selects a subset of bounding boxes in descending order of score. + +[`convert_image_dtype(...)`](../../../tf/image/convert_image_dtype.md): Convert `image` to `dtype`, scaling its values if needed. + +[`crop_and_resize(...)`](../../../tf/compat/v1/image/crop_and_resize.md): Extracts crops from the input image tensor and resizes them. + +[`crop_to_bounding_box(...)`](../../../tf/image/crop_to_bounding_box.md): Crops an image to a specified bounding box. + +[`decode_and_crop_jpeg(...)`](../../../tf/io/decode_and_crop_jpeg.md): Decode and Crop a JPEG-encoded image to a uint8 tensor. 
+ +[`decode_bmp(...)`](../../../tf/io/decode_bmp.md): Decode the first frame of a BMP-encoded image to a uint8 tensor. + +[`decode_gif(...)`](../../../tf/io/decode_gif.md): Decode the frame(s) of a GIF-encoded image to a uint8 tensor. + +[`decode_image(...)`](../../../tf/io/decode_image.md): Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`. + +[`decode_jpeg(...)`](../../../tf/io/decode_jpeg.md): Decode a JPEG-encoded image to a uint8 tensor. + +[`decode_png(...)`](../../../tf/io/decode_png.md): Decode a PNG-encoded image to a uint8 or uint16 tensor. + +[`draw_bounding_boxes(...)`](../../../tf/compat/v1/image/draw_bounding_boxes.md): Draw bounding boxes on a batch of images. + +[`encode_jpeg(...)`](../../../tf/io/encode_jpeg.md): JPEG-encode an image. + +[`encode_png(...)`](../../../tf/image/encode_png.md): PNG-encode an image. + +[`extract_glimpse(...)`](../../../tf/compat/v1/image/extract_glimpse.md): Extracts a glimpse from the input tensor. + +[`extract_image_patches(...)`](../../../tf/compat/v1/extract_image_patches.md): Extract `patches` from `images` and put them in the "depth" output dimension. + +[`extract_jpeg_shape(...)`](../../../tf/io/extract_jpeg_shape.md): Extract the shape information of a JPEG-encoded image. + +[`extract_patches(...)`](../../../tf/image/extract_patches.md): Extract `patches` from `images`. + +[`flip_left_right(...)`](../../../tf/image/flip_left_right.md): Flip an image horizontally (left to right). + +[`flip_up_down(...)`](../../../tf/image/flip_up_down.md): Flip an image vertically (upside down). + +[`generate_bounding_box_proposals(...)`](../../../tf/image/generate_bounding_box_proposals.md): Generate bounding box proposals from encoded bounding boxes. + +[`grayscale_to_rgb(...)`](../../../tf/image/grayscale_to_rgb.md): Converts one or more images from Grayscale to RGB. + +[`hsv_to_rgb(...)`](../../../tf/image/hsv_to_rgb.md): Convert one or more images from HSV to RGB. + +[`image_gradients(...)`](../../../tf/image/image_gradients.md): Returns image gradients (dy, dx) for each color channel. + +[`is_jpeg(...)`](../../../tf/io/is_jpeg.md): Convenience function to check if the 'contents' encodes a JPEG image. + +[`non_max_suppression(...)`](../../../tf/image/non_max_suppression.md): Greedily selects a subset of bounding boxes in descending order of score. + +[`non_max_suppression_overlaps(...)`](../../../tf/image/non_max_suppression_overlaps.md): Greedily selects a subset of bounding boxes in descending order of score. + +[`non_max_suppression_padded(...)`](../../../tf/image/non_max_suppression_padded.md): Greedily selects a subset of bounding boxes in descending order of score. + +[`non_max_suppression_with_scores(...)`](../../../tf/image/non_max_suppression_with_scores.md): Greedily selects a subset of bounding boxes in descending order of score. + +[`pad_to_bounding_box(...)`](../../../tf/image/pad_to_bounding_box.md): Pad `image` with zeros to the specified `height` and `width`. + +[`per_image_standardization(...)`](../../../tf/image/per_image_standardization.md): Linearly scales each image in `image` to have mean 0 and variance 1. + +[`psnr(...)`](../../../tf/image/psnr.md): Returns the Peak Signal-to-Noise Ratio between a and b. + +[`random_brightness(...)`](../../../tf/image/random_brightness.md): Adjust the brightness of images by a random factor. + +[`random_contrast(...)`](../../../tf/image/random_contrast.md): Adjust the contrast of an image or images by a random factor. 
+ +[`random_crop(...)`](../../../tf/image/random_crop.md): Randomly crops a tensor to a given size. + +[`random_flip_left_right(...)`](../../../tf/image/random_flip_left_right.md): Randomly flip an image horizontally (left to right). + +[`random_flip_up_down(...)`](../../../tf/image/random_flip_up_down.md): Randomly flips an image vertically (upside down). + +[`random_hue(...)`](../../../tf/image/random_hue.md): Adjust the hue of RGB images by a random factor. + +[`random_jpeg_quality(...)`](../../../tf/image/random_jpeg_quality.md): Randomly changes jpeg encoding quality for inducing jpeg noise. + +[`random_saturation(...)`](../../../tf/image/random_saturation.md): Adjust the saturation of RGB images by a random factor. + +[`resize(...)`](../../../tf/compat/v1/image/resize.md): Resize `images` to `size` using the specified `method`. + +[`resize_area(...)`](../../../tf/compat/v1/image/resize_area.md): Resize `images` to `size` using area interpolation. + +[`resize_bicubic(...)`](../../../tf/compat/v1/image/resize_bicubic.md) + +[`resize_bilinear(...)`](../../../tf/compat/v1/image/resize_bilinear.md) + +[`resize_image_with_crop_or_pad(...)`](../../../tf/image/resize_with_crop_or_pad.md): Crops and/or pads an image to a target width and height. + +[`resize_image_with_pad(...)`](../../../tf/compat/v1/image/resize_image_with_pad.md): Resizes and pads an image to a target width and height. + +[`resize_images(...)`](../../../tf/compat/v1/image/resize.md): Resize `images` to `size` using the specified `method`. + +[`resize_nearest_neighbor(...)`](../../../tf/compat/v1/image/resize_nearest_neighbor.md) + +[`resize_with_crop_or_pad(...)`](../../../tf/image/resize_with_crop_or_pad.md): Crops and/or pads an image to a target width and height. + +[`rgb_to_grayscale(...)`](../../../tf/image/rgb_to_grayscale.md): Converts one or more images from RGB to Grayscale. + +[`rgb_to_hsv(...)`](../../../tf/image/rgb_to_hsv.md): Converts one or more images from RGB to HSV. + +[`rgb_to_yiq(...)`](../../../tf/image/rgb_to_yiq.md): Converts one or more images from RGB to YIQ. + +[`rgb_to_yuv(...)`](../../../tf/image/rgb_to_yuv.md): Converts one or more images from RGB to YUV. + +[`rot90(...)`](../../../tf/image/rot90.md): Rotate image(s) counter-clockwise by 90 degrees. + +[`sample_distorted_bounding_box(...)`](../../../tf/compat/v1/image/sample_distorted_bounding_box.md): Generate a single randomly distorted bounding box for an image. (deprecated) + +[`sobel_edges(...)`](../../../tf/image/sobel_edges.md): Returns a tensor holding Sobel edge maps. + +[`ssim(...)`](../../../tf/image/ssim.md): Computes SSIM index between img1 and img2. + +[`ssim_multiscale(...)`](../../../tf/image/ssim_multiscale.md): Computes the MS-SSIM between img1 and img2. + +[`total_variation(...)`](../../../tf/image/total_variation.md): Calculate and return the total variation for one or more images. + +[`transpose(...)`](../../../tf/image/transpose.md): Transpose image(s) by swapping the height and width dimension. + +[`transpose_image(...)`](../../../tf/image/transpose.md): Transpose image(s) by swapping the height and width dimension. + +[`yiq_to_rgb(...)`](../../../tf/image/yiq_to_rgb.md): Converts one or more images from YIQ to RGB. + +[`yuv_to_rgb(...)`](../../../tf/image/yuv_to_rgb.md): Converts one or more images from YUV to RGB. 
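+
+As a quick orientation, a minimal sketch chaining a few of the ops listed
+above (shapes and parameters are only illustrative):
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+# A synthetic uint8 image stands in for a decoded JPEG/PNG.
+image = tf.cast(
+    tf.random.uniform([300, 200, 3], maxval=256, dtype=tf.int32), tf.uint8)
+
+image = tf.image.convert_image_dtype(image, tf.float32)  # scale to [0, 1)
+image = tf.image.resize_images(image, [224, 224])        # bilinear by default
+image = tf.image.random_flip_left_right(image)
+
+with tf.Session() as sess:
+  print(sess.run(image).shape)  # (224, 224, 3)
+```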
+ diff --git a/site/en/api_docs/python/tf/compat/v1/image/ResizeMethod.md b/site/en/api_docs/python/tf/compat/v1/image/ResizeMethod.md new file mode 100644 index 00000000000..bd6100c8f3d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/ResizeMethod.md @@ -0,0 +1,37 @@ +description: See v1.image.resize for details. + +
+ + + + + + +
+ +# tf.compat.v1.image.ResizeMethod + + + + + + + + + +See v1.image.resize for details. + + + + +## Class Variables + +* `AREA = 3` +* `BICUBIC = 2` +* `BILINEAR = 0` +* `NEAREST_NEIGHBOR = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/image/crop_and_resize.md b/site/en/api_docs/python/tf/compat/v1/image/crop_and_resize.md new file mode 100644 index 00000000000..853c642b3e9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/crop_and_resize.md @@ -0,0 +1,144 @@ +description: Extracts crops from the input image tensor and resizes them. + +
+ + +
+ +# tf.compat.v1.image.crop_and_resize + + + + + + + + + +Extracts crops from the input image tensor and resizes them. + + + + + + + +Extracts crops from the input image tensor and resizes them using bilinear +sampling or nearest neighbor sampling (possibly with aspect ratio change) to a +common output size specified by `crop_size`. This is more general than the +`crop_to_bounding_box` op which extracts a fixed size slice from the input image +and does not allow resizing or aspect ratio change. + +Returns a tensor with `crops` from the input `image` at positions defined at the +bounding box locations in `boxes`. The cropped boxes are all resized (with +bilinear or nearest neighbor interpolation) to a fixed +`size = [crop_height, crop_width]`. The result is a 4-D tensor +`[num_boxes, crop_height, crop_width, depth]`. The resizing is corner aligned. +In particular, if `boxes = [[0, 0, 1, 1]]`, the method will give identical +results to using `tf.image.resize_bilinear()` or +`tf.image.resize_nearest_neighbor()`(depends on the `method` argument) with +`align_corners=True`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`image` + +A `Tensor`. Must be one of the following types: `uint8`, `uint16`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. +A 4-D tensor of shape `[batch, image_height, image_width, depth]`. +Both `image_height` and `image_width` need to be positive. +
+`boxes` + +A `Tensor` of type `float32`. +A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor +specifies the coordinates of a box in the `box_ind[i]` image and is specified +in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of +`y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the +`[0, 1]` interval of normalized image height is mapped to +`[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in +which case the sampled crop is an up-down flipped version of the original +image. The width dimension is treated similarly. Normalized coordinates +outside the `[0, 1]` range are allowed, in which case we use +`extrapolation_value` to extrapolate the input image values. +
+`box_ind` + +A `Tensor` of type `int32`. +A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`. +The value of `box_ind[i]` specifies the image that the `i`-th box refers to. +
+`crop_size` + +A `Tensor` of type `int32`. +A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All +cropped image patches are resized to this size. The aspect ratio of the image +content is not preserved. Both `crop_height` and `crop_width` need to be +positive. +
+`method` + +An optional `string` from: `"bilinear", "nearest"`. Defaults to `"bilinear"`. +A string specifying the sampling method for resizing. It can be either +`"bilinear"` or `"nearest"` and defaults to `"bilinear"`. Currently two sampling +methods are supported: Bilinear and Nearest Neighbor. +
+`extrapolation_value` + +An optional `float`. Defaults to `0`. +Value used for extrapolation, when applicable. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
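+
+For example, a minimal sketch with two images and one box per image
+(values are only illustrative):
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+image = tf.reshape(tf.range(2 * 4 * 4, dtype=tf.float32), [2, 4, 4, 1])
+boxes = tf.constant([[0.0, 0.0, 0.5, 0.5],   # top-left quarter of image 0
+                     [0.0, 0.0, 1.0, 1.0]])  # all of image 1
+box_ind = tf.constant([0, 1], dtype=tf.int32)
+
+crops = tf.image.crop_and_resize(image, boxes, box_ind, crop_size=[2, 2])
+
+with tf.Session() as sess:
+  print(sess.run(crops).shape)  # (2, 2, 2, 1)
+```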
+ diff --git a/site/en/api_docs/python/tf/compat/v1/image/draw_bounding_boxes.md b/site/en/api_docs/python/tf/compat/v1/image/draw_bounding_boxes.md new file mode 100644 index 00000000000..5f16d88f0ed --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/draw_bounding_boxes.md @@ -0,0 +1,125 @@ +description: Draw bounding boxes on a batch of images. + +
+ + +
+ +# tf.compat.v1.image.draw_bounding_boxes + + + + + + + + + +Draw bounding boxes on a batch of images. + + + + + + + +Outputs a copy of `images` but draws on top of the pixels zero or more +bounding boxes specified by the locations in `boxes`. The coordinates of the +each bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. +The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width +and the height of the underlying image. + +For example, if an image is 100 x 200 pixels (height x width) and the bounding +box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of +the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates). + +Parts of the bounding box may fall outside the image. + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `float32`, `half`. +4-D with shape `[batch, height, width, depth]`. A batch of images. +
+`boxes` + +A `Tensor` of type `float32`. 3-D with shape `[batch, +num_bounding_boxes, 4]` containing bounding boxes. +
+`name` + +A name for the operation (optional). +
+`colors` + +A `Tensor` of type `float32`. 2-D. A list of RGBA colors to cycle +through for the boxes. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
+ + + +#### Usage Example: + + + +``` +>>> # create an empty image +>>> img = tf.zeros([1, 3, 3, 3]) +>>> # draw a box around the image +>>> box = np.array([0, 0, 1, 1]) +>>> boxes = box.reshape([1, 1, 4]) +>>> # alternate between red and blue +>>> colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]) +>>> tf.image.draw_bounding_boxes(img, boxes, colors) + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/image/extract_glimpse.md b/site/en/api_docs/python/tf/compat/v1/image/extract_glimpse.md new file mode 100644 index 00000000000..e353958f37a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/extract_glimpse.md @@ -0,0 +1,162 @@ +description: Extracts a glimpse from the input tensor. + +
+ + +
+ +# tf.compat.v1.image.extract_glimpse + + + + + + + + + +Extracts a glimpse from the input tensor. + + + + + + + +Returns a set of windows called glimpses extracted at location +`offsets` from the input tensor. If the windows only partially +overlaps the inputs, the non-overlapping areas will be filled with +random noise. + +The result is a 4-D tensor of shape `[batch_size, glimpse_height, +glimpse_width, channels]`. The channels and batch dimensions are the +same as that of the input tensor. The height and width of the output +windows are specified in the `size` parameter. + +The argument `normalized` and `centered` controls how the windows are built: + +* If the coordinates are normalized but not centered, 0.0 and 1.0 + correspond to the minimum and maximum of each height and width + dimension. +* If the coordinates are both normalized and centered, they range from + -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper + left corner, the lower right corner is located at (1.0, 1.0) and the + center is at (0, 0). +* If the coordinates are not normalized they are interpreted as + numbers of pixels. + +#### Usage Example: + + + +``` +>>> x = [[[[0.0], +... [1.0], +... [2.0]], +... [[3.0], +... [4.0], +... [5.0]], +... [[6.0], +... [7.0], +... [8.0]]]] +>>> tf.image.extract_glimpse(x, size=(2, 2), offsets=[[1, 1]], +... centered=False, normalized=False) + +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `float32`. A 4-D float tensor of shape +`[batch_size, height, width, channels]`. +
+`size` + +A `Tensor` of type `int32`. A 1-D tensor of 2 elements containing the +size of the glimpses to extract. The glimpse height must be specified +first, followed by the glimpse width. +
+`offsets` + +A `Tensor` of type `float32`. A 2-D integer tensor of shape +`[batch_size, 2]` containing the y, x locations of the center of each +window. +
+`centered` + +An optional `bool`. Defaults to `True`. Indicates if the offset +coordinates are centered relative to the image, in which case the (0, 0) +offset is relative to the center of the input images. If false, the (0,0) +offset corresponds to the upper left corner of the input images. +
+`normalized` + +An optional `bool`. Defaults to `True`. Indicates if the offset +coordinates are normalized. +
+`uniform_noise` + +An optional `bool`. Defaults to `True`. Indicates if the +noise should be generated using a uniform distribution or a Gaussian +distribution. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/image/resize.md b/site/en/api_docs/python/tf/compat/v1/image/resize.md new file mode 100644 index 00000000000..d0b2ea96516 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/resize.md @@ -0,0 +1,172 @@ +description: Resize images to size using the specified method. + +
+ + +
+ +# tf.compat.v1.image.resize + + + + + + + + + +Resize `images` to `size` using the specified `method`. + + + + + + + + + +Resized images will be distorted if their original aspect ratio is not +the same as `size`. To avoid distortions see +tf.image.resize_with_pad or tf.image.resize_with_crop_or_pad. + +The `method` can be one of: + +* `ResizeMethod.BILINEAR`: [Bilinear interpolation.]( + https://en.wikipedia.org/wiki/Bilinear_interpolation) +* `ResizeMethod.NEAREST_NEIGHBOR`: [Nearest neighbor interpolation.]( + https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation) +* `ResizeMethod.BICUBIC`: [Bicubic interpolation.]( + https://en.wikipedia.org/wiki/Bicubic_interpolation) +* `ResizeMethod.AREA`: Area interpolation. + +The return value has the same type as `images` if `method` is +`ResizeMethod.NEAREST_NEIGHBOR`. It will also have the same type as `images` +if the size of `images` can be statically determined to be the same as `size`, +because `images` is returned in this case. Otherwise, the return value has +type `float32`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`images` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`size` + +A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new +size for the images. +
+`method` + +ResizeMethod. Defaults to `ResizeMethod.BILINEAR`. +
+`align_corners` + +bool. If True, the centers of the 4 corner pixels of the +input and output tensors are aligned, preserving the values at the corner +pixels. Defaults to `False`. +
+`preserve_aspect_ratio` + +Whether to preserve the aspect ratio. If this is set, +then `images` will be resized to a size that fits in `size` while +preserving the aspect ratio of the original image. Scales up the image if +`size` is bigger than the current size of the `image`. Defaults to False. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + + + + + + + + +
+`ValueError` + +if the shape of `images` is incompatible with the +shape arguments to this function. +
+`ValueError` + +if `size` has invalid shape or type. +
+`ValueError` + +if an unsupported resize method is specified. +
+ + + + + + + + + + + +
+If `images` was 4-D, a 4-D float Tensor of shape +`[batch, new_height, new_width, channels]`. +If `images` was 3-D, a 3-D float Tensor of shape +`[new_height, new_width, channels]`. +
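+
+For example, a minimal sketch (sizes are only illustrative):
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+images = tf.zeros([1, 180, 240, 3])  # a batch with one 180x240 RGB image
+resized = tf.image.resize(
+    images, size=[90, 120], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
+
+with tf.Session() as sess:
+  print(sess.run(resized).shape)  # (1, 90, 120, 3)
+```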
+ diff --git a/site/en/api_docs/python/tf/compat/v1/image/resize_area.md b/site/en/api_docs/python/tf/compat/v1/image/resize_area.md new file mode 100644 index 00000000000..bd0505ef551 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/resize_area.md @@ -0,0 +1,95 @@ +description: Resize images to size using area interpolation. + +
+ + +
+ +# tf.compat.v1.image.resize_area + + + + + + + + + +Resize `images` to `size` using area interpolation. + + + + + + + +Input images can be of different types but output images are always float. + +The range of pixel values for the output image might be slightly different +from the range for the input image because of limited numerical precision. +To guarantee an output range, for example `[0.0, 1.0]`, apply +tf.clip_by_value to the output. + +Each output pixel is computed by first transforming the pixel's footprint into +the input tensor and then averaging the pixels that intersect the footprint. An +input pixel's contribution to the average is weighted by the fraction of its +area that intersects the footprint. This is the same as OpenCV's INTER_AREA. + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `half`, `float32`, `float64`. +4-D with shape `[batch, height, width, channels]`. +
+`size` + +A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The +new size for the images. +
+`align_corners` + +An optional `bool`. Defaults to `False`. +If true, the centers of the 4 corner pixels of the input and output tensors are +aligned, preserving the values at the corner pixels. Defaults to false. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/image/resize_bicubic.md b/site/en/api_docs/python/tf/compat/v1/image/resize_bicubic.md new file mode 100644 index 00000000000..2873b28bdfb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/resize_bicubic.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.image.resize_bicubic + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/image/resize_bilinear.md b/site/en/api_docs/python/tf/compat/v1/image/resize_bilinear.md new file mode 100644 index 00000000000..dc6c2712705 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/resize_bilinear.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.image.resize_bilinear + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/image/resize_image_with_pad.md b/site/en/api_docs/python/tf/compat/v1/image/resize_image_with_pad.md new file mode 100644 index 00000000000..8fd2e7a9ef5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/resize_image_with_pad.md @@ -0,0 +1,122 @@ +description: Resizes and pads an image to a target width and height. + +
+ + +
+ +# tf.compat.v1.image.resize_image_with_pad + + + + + + + + + +Resizes and pads an image to a target width and height. + + + + + + + +Resizes an image to a target width and height by keeping +the aspect ratio the same without distortion. If the target +dimensions don't match the image dimensions, the image +is resized and then padded with zeroes to match requested +dimensions. + + + + + + + + + + + + + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`target_height` + +Target height. +
+`target_width` + +Target width. +
+`method` + +Method to use for resizing the image. See `resize_images()`. +
+`align_corners` + +bool. If True, the centers of the 4 corner pixels of the +input and output tensors are aligned, preserving the values at the corner +pixels. Defaults to `False`. +
+ + + + + + + + + + + + +
+`ValueError` + +if `target_height` or `target_width` are zero or negative. +
+ + + + + + + + + + + +
+Resized and padded image. +If `images` was 4-D, a 4-D float Tensor of shape +`[batch, new_height, new_width, channels]`. +If `images` was 3-D, a 3-D float Tensor of shape +`[new_height, new_width, channels]`. +
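+
+For example, a minimal sketch (sizes are only illustrative):
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+image = tf.ones([100, 300, 3])  # a wide 100x300 RGB image
+padded = tf.image.resize_image_with_pad(
+    image, target_height=200, target_width=200)
+
+with tf.Session() as sess:
+  # Aspect ratio is preserved; the rest of the 200x200 canvas is zero padding.
+  print(sess.run(padded).shape)  # (200, 200, 3)
+```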
+ diff --git a/site/en/api_docs/python/tf/compat/v1/image/resize_nearest_neighbor.md b/site/en/api_docs/python/tf/compat/v1/image/resize_nearest_neighbor.md new file mode 100644 index 00000000000..d1465a02478 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/resize_nearest_neighbor.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.image.resize_nearest_neighbor + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/image/sample_distorted_bounding_box.md b/site/en/api_docs/python/tf/compat/v1/image/sample_distorted_bounding_box.md new file mode 100644 index 00000000000..a86497ef200 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/image/sample_distorted_bounding_box.md @@ -0,0 +1,217 @@ +description: Generate a single randomly distorted bounding box for an image. (deprecated) + +
+ + +
+ +# tf.compat.v1.image.sample_distorted_bounding_box + + + + + + + + + +Generate a single randomly distorted bounding box for an image. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead. + +Bounding box annotations are often supplied in addition to ground-truth labels +in image recognition or object localization tasks. A common technique for +training such a system is to randomly distort an image while preserving +its content, i.e. *data augmentation*. This Op outputs a randomly distorted +localization of an object, i.e. bounding box, given an `image_size`, +`bounding_boxes` and a series of constraints. + +The output of this Op is a single bounding box that may be used to crop the +original image. The output is returned as 3 tensors: `begin`, `size` and +`bboxes`. The first 2 tensors can be fed directly into tf.slice to crop the +image. The latter may be supplied to tf.image.draw_bounding_boxes to +visualize what the bounding box looks like. + +Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. +The +bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and +height of the underlying image. + +For example, + +```python + # Generate a single distorted bounding box. + begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box( + tf.shape(image), + bounding_boxes=bounding_boxes, + min_object_covered=0.1) + + # Draw the bounding box in an image summary. + image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0), + bbox_for_draw) + tf.compat.v1.summary.image('images_with_box', image_with_box) + + # Employ the bounding box to distort the image. + distorted_image = tf.slice(image, begin, size) +``` + +Note that if no bounding box information is available, setting +`use_image_if_no_bounding_boxes = True` will assume there is a single implicit +bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is +false and no bounding boxes are supplied, an error is raised. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`image_size` + +A `Tensor`. Must be one of the following types: `uint8`, `int8`, +`int16`, `int32`, `int64`. 1-D, containing `[height, width, channels]`. +
+`bounding_boxes` + +A `Tensor` of type `float32`. 3-D with shape `[batch, N, 4]` +describing the N bounding boxes associated with the image. +
+`seed` + +An optional `int`. Defaults to `0`. If either `seed` or `seed2` is +set to non-zero, the random number generator is seeded by the given +`seed`. Otherwise, it is seeded by a random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. A second seed to avoid seed +collision. +
+`min_object_covered` + +A Tensor of type `float32`. Defaults to `0.1`. The +cropped area of the image must contain at least this fraction of any +bounding box supplied. The value of this parameter should be non-negative. +In the case of 0, the cropped area does not need to overlap any of the +bounding boxes supplied. +
+`aspect_ratio_range` + +An optional list of `floats`. Defaults to `[0.75, +1.33]`. The cropped area of the image must have an aspect ratio = width / +height within this range. +
+`area_range` + +An optional list of `floats`. Defaults to `[0.05, 1]`. The +cropped area of the image must contain a fraction of the supplied image +within this range. +
+`max_attempts` + +An optional `int`. Defaults to `100`. Number of attempts at +generating a cropped region of the image of the specified constraints. +After `max_attempts` failures, return the entire image. +
+`use_image_if_no_bounding_boxes` + +An optional `bool`. Defaults to `False`. +Controls behavior if no bounding boxes supplied. If true, assume an +implicit bounding box covering the whole input. If false, raise an error. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (begin, size, bboxes). +
+`begin` + +A `Tensor`. Has the same type as `image_size`. 1-D, containing +`[offset_height, offset_width, 0]`. Provide as input to +tf.slice. +
+`size` + +A `Tensor`. Has the same type as `image_size`. 1-D, containing +`[target_height, target_width, -1]`. Provide as input to +tf.slice. +
+`bboxes` + +A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing +the distorted bounding box. +Provide as input to tf.image.draw_bounding_boxes. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/initialize_all_tables.md b/site/en/api_docs/python/tf/compat/v1/initialize_all_tables.md new file mode 100644 index 00000000000..6a80253f1dd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/initialize_all_tables.md @@ -0,0 +1,68 @@ +description: Returns an Op that initializes all tables of the default graph. (deprecated) + +
+ + +
+ +# tf.compat.v1.initialize_all_tables + + + + + + + + + +Returns an Op that initializes all tables of the default graph. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.tables_initializer` instead. + + + + + + + + + + +
+`name` + +Optional name for the initialization op. +
+ + + + + + + + + + + +
+An Op that initializes all tables. Note that if there are +no tables the returned Op is a NoOp. +
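+
+A minimal sketch (the table contents are only illustrative); in current code,
+prefer tf.compat.v1.tables_initializer:
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_eager_execution()
+
+table = tf.lookup.StaticHashTable(
+    tf.lookup.KeyValueTensorInitializer(["a", "b"], [1, 2]), default_value=-1)
+lookups = table.lookup(tf.constant(["b", "c"]))
+
+with tf.Session() as sess:
+  sess.run(tf.initialize_all_tables())  # deprecated in favor of tf.tables_initializer()
+  print(sess.run(lookups))  # [ 2 -1]
+```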
+ diff --git a/site/en/api_docs/python/tf/compat/v1/initialize_all_variables.md b/site/en/api_docs/python/tf/compat/v1/initialize_all_variables.md new file mode 100644 index 00000000000..78788ab0b94 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/initialize_all_variables.md @@ -0,0 +1,37 @@ +description: See tf.compat.v1.global_variables_initializer. (deprecated) + +
+ + +
+ +# tf.compat.v1.initialize_all_variables + + + + + + + + + +See tf.compat.v1.global_variables_initializer. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. +Instructions for updating: +Use `tf.global_variables_initializer` instead. + + **NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/initialize_local_variables.md b/site/en/api_docs/python/tf/compat/v1/initialize_local_variables.md new file mode 100644 index 00000000000..1cb3e2c30b5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/initialize_local_variables.md @@ -0,0 +1,37 @@ +description: See tf.compat.v1.local_variables_initializer. (deprecated) + +
+ + +
+ +# tf.compat.v1.initialize_local_variables + + + + + + + + + +See tf.compat.v1.local_variables_initializer. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. +Instructions for updating: +Use `tf.local_variables_initializer` instead. + + **NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/initialize_variables.md b/site/en/api_docs/python/tf/compat/v1/initialize_variables.md new file mode 100644 index 00000000000..bbf6ec5a383 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/initialize_variables.md @@ -0,0 +1,39 @@ +description: See tf.compat.v1.variables_initializer. (deprecated) + +
+ + +
+ +# tf.compat.v1.initialize_variables + + + + + + + + + +See tf.compat.v1.variables_initializer. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. +Instructions for updating: +Use `tf.variables_initializer` instead. + + **NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/initializers.md b/site/en/api_docs/python/tf/compat/v1/initializers.md new file mode 100644 index 00000000000..a1e8c9cb49e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/initializers.md @@ -0,0 +1,65 @@ +description: Public API for tf.initializers namespace. + +
+ + +
+ +# Module: tf.compat.v1.initializers + + + + + + + + + +Public API for tf.initializers namespace. + + + +## Classes + +[`class constant`](../../../tf/compat/v1/keras/initializers/Constant.md): Initializer that generates tensors with constant values. + +[`class glorot_normal`](../../../tf/compat/v1/keras/initializers/glorot_normal.md): The Glorot normal initializer, also called Xavier normal initializer. + +[`class glorot_uniform`](../../../tf/compat/v1/keras/initializers/glorot_uniform.md): The Glorot uniform initializer, also called Xavier uniform initializer. + +[`class identity`](../../../tf/compat/v1/keras/initializers/Identity.md): Initializer that generates the identity matrix. + +[`class ones`](../../../tf/compat/v1/keras/initializers/Ones.md): Initializer that generates tensors initialized to 1. + +[`class orthogonal`](../../../tf/compat/v1/keras/initializers/Orthogonal.md): Initializer that generates an orthogonal matrix. + +[`class random_normal`](../../../tf/compat/v1/random_normal_initializer.md): Initializer that generates tensors with a normal distribution. + +[`class random_uniform`](../../../tf/compat/v1/random_uniform_initializer.md): Initializer that generates tensors with a uniform distribution. + +[`class truncated_normal`](../../../tf/compat/v1/truncated_normal_initializer.md): Initializer that generates a truncated normal distribution. + +[`class uniform_unit_scaling`](../../../tf/compat/v1/uniform_unit_scaling_initializer.md): Initializer that generates tensors without scaling variance. + +[`class variance_scaling`](../../../tf/compat/v1/keras/initializers/VarianceScaling.md): Initializer capable of adapting its scale to the shape of weights tensors. + +[`class zeros`](../../../tf/compat/v1/keras/initializers/Zeros.md): Initializer that generates tensors initialized to 0. + +## Functions + +[`global_variables(...)`](../../../tf/compat/v1/global_variables_initializer.md): Returns an Op that initializes global variables. + +[`he_normal(...)`](../../../tf/compat/v1/keras/initializers/he_normal.md): He normal initializer. + +[`he_uniform(...)`](../../../tf/compat/v1/keras/initializers/he_uniform.md): He uniform variance scaling initializer. + +[`lecun_normal(...)`](../../../tf/compat/v1/keras/initializers/lecun_normal.md): LeCun normal initializer. + +[`lecun_uniform(...)`](../../../tf/compat/v1/keras/initializers/lecun_uniform.md): LeCun uniform initializer. + +[`local_variables(...)`](../../../tf/compat/v1/local_variables_initializer.md): Returns an Op that initializes all local variables. + +[`tables_initializer(...)`](../../../tf/compat/v1/tables_initializer.md): Returns an Op that initializes all tables of the default graph. + +[`variables(...)`](../../../tf/compat/v1/variables_initializer.md): Returns an Op that initializes a list of variables. + diff --git a/site/en/api_docs/python/tf/compat/v1/io.md b/site/en/api_docs/python/tf/compat/v1/io.md new file mode 100644 index 00000000000..b8710e04bb2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/io.md @@ -0,0 +1,117 @@ +description: Public API for tf.io namespace. + +
+ + +
+ +# Module: tf.compat.v1.io + + + + + + + + + +Public API for tf.io namespace. + + + +## Modules + +[`gfile`](../../../tf/compat/v1/io/gfile.md) module: Public API for tf.io.gfile namespace. + +## Classes + +[`class FixedLenFeature`](../../../tf/io/FixedLenFeature.md): Configuration for parsing a fixed-length input feature. + +[`class FixedLenSequenceFeature`](../../../tf/io/FixedLenSequenceFeature.md): Configuration for parsing a variable-length input feature into a `Tensor`. + +[`class PaddingFIFOQueue`](../../../tf/queue/PaddingFIFOQueue.md): A FIFOQueue that supports batching variable-sized tensors by padding. + +[`class PriorityQueue`](../../../tf/queue/PriorityQueue.md): A queue implementation that dequeues elements in prioritized order. + +[`class QueueBase`](../../../tf/queue/QueueBase.md): Base class for queue implementations. + +[`class RaggedFeature`](../../../tf/io/RaggedFeature.md): Configuration for passing a RaggedTensor input feature. + +[`class RandomShuffleQueue`](../../../tf/queue/RandomShuffleQueue.md): A queue implementation that dequeues elements in a random order. + +[`class SparseFeature`](../../../tf/io/SparseFeature.md): Configuration for parsing a sparse input feature from an `Example`. + +[`class TFRecordCompressionType`](../../../tf/compat/v1/io/TFRecordCompressionType.md): The type of compression for the record. + +[`class TFRecordOptions`](../../../tf/io/TFRecordOptions.md): Options used for manipulating TFRecord files. + +[`class TFRecordWriter`](../../../tf/io/TFRecordWriter.md): A class to write records to a TFRecords file. + +[`class VarLenFeature`](../../../tf/io/VarLenFeature.md): Configuration for parsing a variable-length input feature. + +## Functions + +[`decode_and_crop_jpeg(...)`](../../../tf/io/decode_and_crop_jpeg.md): Decode and Crop a JPEG-encoded image to a uint8 tensor. + +[`decode_base64(...)`](../../../tf/io/decode_base64.md): Decode web-safe base64-encoded strings. + +[`decode_bmp(...)`](../../../tf/io/decode_bmp.md): Decode the first frame of a BMP-encoded image to a uint8 tensor. + +[`decode_compressed(...)`](../../../tf/io/decode_compressed.md): Decompress strings. + +[`decode_csv(...)`](../../../tf/compat/v1/decode_csv.md): Convert CSV records to tensors. Each column maps to one tensor. + +[`decode_gif(...)`](../../../tf/io/decode_gif.md): Decode the frame(s) of a GIF-encoded image to a uint8 tensor. + +[`decode_image(...)`](../../../tf/io/decode_image.md): Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`. + +[`decode_jpeg(...)`](../../../tf/io/decode_jpeg.md): Decode a JPEG-encoded image to a uint8 tensor. + +[`decode_json_example(...)`](../../../tf/io/decode_json_example.md): Convert JSON-encoded Example records to binary protocol buffer strings. + +[`decode_png(...)`](../../../tf/io/decode_png.md): Decode a PNG-encoded image to a uint8 or uint16 tensor. + +[`decode_proto(...)`](../../../tf/io/decode_proto.md): The op extracts fields from a serialized protocol buffers message into tensors. + +[`decode_raw(...)`](../../../tf/compat/v1/decode_raw.md): Convert raw byte strings into tensors. (deprecated arguments) + +[`deserialize_many_sparse(...)`](../../../tf/io/deserialize_many_sparse.md): Deserialize and concatenate `SparseTensors` from a serialized minibatch. + +[`encode_base64(...)`](../../../tf/io/encode_base64.md): Encode strings into web-safe base64 format. + +[`encode_jpeg(...)`](../../../tf/io/encode_jpeg.md): JPEG-encode an image. 
+ +[`encode_proto(...)`](../../../tf/io/encode_proto.md): The op serializes protobuf messages provided in the input tensors. + +[`extract_jpeg_shape(...)`](../../../tf/io/extract_jpeg_shape.md): Extract the shape information of a JPEG-encoded image. + +[`is_jpeg(...)`](../../../tf/io/is_jpeg.md): Convenience function to check if the 'contents' encodes a JPEG image. + +[`match_filenames_once(...)`](../../../tf/io/match_filenames_once.md): Save the list of files matching pattern, so it is only computed once. + +[`matching_files(...)`](../../../tf/io/matching_files.md): Returns the set of files matching one or more glob patterns. + +[`parse_example(...)`](../../../tf/compat/v1/parse_example.md): Parses `Example` protos into a `dict` of tensors. + +[`parse_sequence_example(...)`](../../../tf/io/parse_sequence_example.md): Parses a batch of `SequenceExample` protos. + +[`parse_single_example(...)`](../../../tf/compat/v1/parse_single_example.md): Parses a single `Example` proto. + +[`parse_single_sequence_example(...)`](../../../tf/io/parse_single_sequence_example.md): Parses a single `SequenceExample` proto. + +[`parse_tensor(...)`](../../../tf/io/parse_tensor.md): Transforms a serialized tensorflow.TensorProto proto into a Tensor. + +[`read_file(...)`](../../../tf/io/read_file.md): Reads and outputs the entire contents of the input filename. + +[`serialize_many_sparse(...)`](../../../tf/compat/v1/serialize_many_sparse.md): Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`. + +[`serialize_sparse(...)`](../../../tf/compat/v1/serialize_sparse.md): Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object. + +[`serialize_tensor(...)`](../../../tf/io/serialize_tensor.md): Transforms a Tensor into a serialized TensorProto proto. + +[`tf_record_iterator(...)`](../../../tf/compat/v1/io/tf_record_iterator.md): An iterator that read the records from a TFRecords file. (deprecated) + +[`write_file(...)`](../../../tf/io/write_file.md): Writes contents to the file at input filename. Creates file and recursively + +[`write_graph(...)`](../../../tf/io/write_graph.md): Writes a graph proto to a file. + diff --git a/site/en/api_docs/python/tf/compat/v1/io/TFRecordCompressionType.md b/site/en/api_docs/python/tf/compat/v1/io/TFRecordCompressionType.md new file mode 100644 index 00000000000..a01c6ad6c2a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/io/TFRecordCompressionType.md @@ -0,0 +1,46 @@ +description: The type of compression for the record. + +
+ + + + + +
+ +# tf.compat.v1.io.TFRecordCompressionType + + + + + + + + + +The type of compression for the record. + + + + + + +## Class Variables + +* `GZIP = 2` +* `NONE = 0` +* `ZLIB = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/io/gfile.md b/site/en/api_docs/python/tf/compat/v1/io/gfile.md new file mode 100644 index 00000000000..3258f8be295 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/io/gfile.md @@ -0,0 +1,51 @@ +description: Public API for tf.io.gfile namespace. + +
+ + +
+ +# Module: tf.compat.v1.io.gfile + + + + + + + + + +Public API for tf.io.gfile namespace. + + + +## Classes + +[`class GFile`](../../../../tf/io/gfile/GFile.md): File I/O wrappers without thread locking. + +## Functions + +[`copy(...)`](../../../../tf/io/gfile/copy.md): Copies data from `src` to `dst`. + +[`exists(...)`](../../../../tf/io/gfile/exists.md): Determines whether a path exists or not. + +[`glob(...)`](../../../../tf/io/gfile/glob.md): Returns a list of files that match the given pattern(s). + +[`isdir(...)`](../../../../tf/io/gfile/isdir.md): Returns whether the path is a directory or not. + +[`listdir(...)`](../../../../tf/io/gfile/listdir.md): Returns a list of entries contained within a directory. + +[`makedirs(...)`](../../../../tf/io/gfile/makedirs.md): Creates a directory and all parent/intermediate directories. + +[`mkdir(...)`](../../../../tf/io/gfile/mkdir.md): Creates a directory with the name given by `path`. + +[`remove(...)`](../../../../tf/io/gfile/remove.md): Deletes the path located at 'path'. + +[`rename(...)`](../../../../tf/io/gfile/rename.md): Rename or move a file / directory. + +[`rmtree(...)`](../../../../tf/io/gfile/rmtree.md): Deletes everything under path recursively. + +[`stat(...)`](../../../../tf/io/gfile/stat.md): Returns file statistics for a given path. + +[`walk(...)`](../../../../tf/io/gfile/walk.md): Recursive directory tree generator for directories. + diff --git a/site/en/api_docs/python/tf/compat/v1/io/tf_record_iterator.md b/site/en/api_docs/python/tf/compat/v1/io/tf_record_iterator.md new file mode 100644 index 00000000000..6e9cb157c80 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/io/tf_record_iterator.md @@ -0,0 +1,103 @@ +description: An iterator that read the records from a TFRecords file. (deprecated) + +
+ + +
+ +# tf.compat.v1.io.tf_record_iterator + + + + + + + + + +An iterator that read the records from a TFRecords file. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use eager execution and: +tf.data.TFRecordDataset(path) + + + + + + + + + + + + + +
+`path` + +The path to the TFRecords file. +
+`options` + +(optional) A TFRecordOptions object. +
+ + + + + + + + + + + +
+An iterator of serialized TFRecords. +
+ + + + + + + + + + + + +
+`IOError` + +If `path` cannot be opened for reading. +
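+
+A minimal sketch; the scratch path is only illustrative:
+
+```python
+import tensorflow.compat.v1 as tf
+
+path = "/tmp/example.tfrecord"
+
+# Write a couple of records so the iterator has something to read.
+with tf.io.TFRecordWriter(path) as writer:
+  for value in (b"first", b"second"):
+    writer.write(value)
+
+for record in tf.io.tf_record_iterator(path):
+  print(record)  # raw bytes: b'first', then b'second'
+```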
+ diff --git a/site/en/api_docs/python/tf/compat/v1/is_variable_initialized.md b/site/en/api_docs/python/tf/compat/v1/is_variable_initialized.md new file mode 100644 index 00000000000..c17fd315f25 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/is_variable_initialized.md @@ -0,0 +1,67 @@ +description: Tests if a variable has been initialized. + +
+ + +
+ +# tf.compat.v1.is_variable_initialized + + + + + + + + + +Tests if a variable has been initialized. + + + + + + + + + + + + + + + + + +
+`variable` + +A `Variable`. +
+ + + + + + + + + + + +
+Returns a scalar boolean Tensor, `True` if the variable has been +initialized, `False` otherwise. +
+ + +**NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/keras.md b/site/en/api_docs/python/tf/compat/v1/keras.md new file mode 100644 index 00000000000..9341eb74718 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras.md @@ -0,0 +1,73 @@ +description: Implementation of the Keras API meant to be a high-level API for TensorFlow. + +
+ + +
+ +# Module: tf.compat.v1.keras + + + + + + + + + +Implementation of the Keras API meant to be a high-level API for TensorFlow. + + +Detailed documentation and user guides are available at +[tensorflow.org](https://www.tensorflow.org/guide/keras). + +## Modules + +[`activations`](../../../tf/compat/v1/keras/activations.md) module: Built-in activation functions. + +[`applications`](../../../tf/compat/v1/keras/applications.md) module: Keras Applications are canned architectures with pre-trained weights. + +[`backend`](../../../tf/compat/v1/keras/backend.md) module: Keras backend API. + +[`callbacks`](../../../tf/compat/v1/keras/callbacks.md) module: Callbacks: utilities called at certain points during model training. + +[`constraints`](../../../tf/compat/v1/keras/constraints.md) module: Constraints: functions that impose constraints on weight values. + +[`datasets`](../../../tf/compat/v1/keras/datasets.md) module: Public API for tf.keras.datasets namespace. + +[`estimator`](../../../tf/compat/v1/keras/estimator.md) module: Keras estimator API. + +[`experimental`](../../../tf/compat/v1/keras/experimental.md) module: Public API for tf.keras.experimental namespace. + +[`initializers`](../../../tf/compat/v1/keras/initializers.md) module: Keras initializer serialization / deserialization. + +[`layers`](../../../tf/compat/v1/keras/layers.md) module: Keras layers API. + +[`losses`](../../../tf/compat/v1/keras/losses.md) module: Built-in loss functions. + +[`metrics`](../../../tf/compat/v1/keras/metrics.md) module: Built-in metrics. + +[`mixed_precision`](../../../tf/compat/v1/keras/mixed_precision.md) module: Public API for tf.keras.mixed_precision namespace. + +[`models`](../../../tf/compat/v1/keras/models.md) module: Code for model cloning, plus model-related API entries. + +[`optimizers`](../../../tf/compat/v1/keras/optimizers.md) module: Built-in optimizer classes. + +[`preprocessing`](../../../tf/compat/v1/keras/preprocessing.md) module: Keras data preprocessing utils. + +[`regularizers`](../../../tf/compat/v1/keras/regularizers.md) module: Built-in regularizers. + +[`utils`](../../../tf/compat/v1/keras/utils.md) module: Public API for tf.keras.utils namespace. + +[`wrappers`](../../../tf/compat/v1/keras/wrappers.md) module: Public API for tf.keras.wrappers namespace. + +## Classes + +[`class Model`](../../../tf/keras/Model.md): `Model` groups layers into an object with training and inference features. + +[`class Sequential`](../../../tf/keras/Sequential.md): `Sequential` groups a linear stack of layers into a tf.keras.Model. + +## Functions + +[`Input(...)`](../../../tf/keras/Input.md): `Input()` is used to instantiate a Keras tensor. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/activations.md b/site/en/api_docs/python/tf/compat/v1/keras/activations.md new file mode 100644 index 00000000000..615eb38fd4e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/activations.md @@ -0,0 +1,53 @@ +description: Built-in activation functions. + +
+ + +
+ +# Module: tf.compat.v1.keras.activations + + + + + + + + + +Built-in activation functions. + + + +## Functions + +[`deserialize(...)`](../../../../tf/keras/activations/deserialize.md): Returns activation function denoted by input string. + +[`elu(...)`](../../../../tf/keras/activations/elu.md): Exponential linear unit. + +[`exponential(...)`](../../../../tf/keras/activations/exponential.md): Exponential activation function. + +[`get(...)`](../../../../tf/keras/activations/get.md): Returns function. + +[`hard_sigmoid(...)`](../../../../tf/keras/activations/hard_sigmoid.md): Hard sigmoid activation function. + +[`linear(...)`](../../../../tf/keras/activations/linear.md): Linear activation function. + +[`relu(...)`](../../../../tf/keras/activations/relu.md): Applies the rectified linear unit activation function. + +[`selu(...)`](../../../../tf/keras/activations/selu.md): Scaled Exponential Linear Unit (SELU). + +[`serialize(...)`](../../../../tf/keras/activations/serialize.md): Returns name attribute (`__name__`) of function. + +[`sigmoid(...)`](../../../../tf/keras/activations/sigmoid.md): Sigmoid activation function. + +[`softmax(...)`](../../../../tf/keras/activations/softmax.md): Softmax converts a real vector to a vector of categorical probabilities. + +[`softplus(...)`](../../../../tf/keras/activations/softplus.md): Softplus activation function. + +[`softsign(...)`](../../../../tf/keras/activations/softsign.md): Softsign activation function. + +[`swish(...)`](../../../../tf/keras/activations/swish.md): Swish activation function. + +[`tanh(...)`](../../../../tf/keras/activations/tanh.md): Hyperbolic tangent activation function. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications.md b/site/en/api_docs/python/tf/compat/v1/keras/applications.md new file mode 100644 index 00000000000..891c5977239 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications.md @@ -0,0 +1,87 @@ +description: Keras Applications are canned architectures with pre-trained weights. + +
+ + +
+ +# Module: tf.compat.v1.keras.applications + + + + + + + + + +Keras Applications are canned architectures with pre-trained weights. + + + +## Modules + +[`densenet`](../../../../tf/compat/v1/keras/applications/densenet.md) module: DenseNet models for Keras. + +[`imagenet_utils`](../../../../tf/compat/v1/keras/applications/imagenet_utils.md) module: Utilities for ImageNet data preprocessing & prediction decoding. + +[`inception_resnet_v2`](../../../../tf/compat/v1/keras/applications/inception_resnet_v2.md) module: Inception-ResNet V2 model for Keras. + +[`inception_v3`](../../../../tf/compat/v1/keras/applications/inception_v3.md) module: Inception V3 model for Keras. + +[`mobilenet`](../../../../tf/compat/v1/keras/applications/mobilenet.md) module: MobileNet v1 models for Keras. + +[`mobilenet_v2`](../../../../tf/compat/v1/keras/applications/mobilenet_v2.md) module: MobileNet v2 models for Keras. + +[`nasnet`](../../../../tf/compat/v1/keras/applications/nasnet.md) module: NASNet-A models for Keras. + +[`resnet`](../../../../tf/compat/v1/keras/applications/resnet.md) module: ResNet models for Keras. + +[`resnet50`](../../../../tf/compat/v1/keras/applications/resnet50.md) module: Public API for tf.keras.applications.resnet50 namespace. + +[`resnet_v2`](../../../../tf/compat/v1/keras/applications/resnet_v2.md) module: ResNet v2 models for Keras. + +[`vgg16`](../../../../tf/compat/v1/keras/applications/vgg16.md) module: VGG16 model for Keras. + +[`vgg19`](../../../../tf/compat/v1/keras/applications/vgg19.md) module: VGG19 model for Keras. + +[`xception`](../../../../tf/compat/v1/keras/applications/xception.md) module: Xception V1 model for Keras. + +## Functions + +[`DenseNet121(...)`](../../../../tf/keras/applications/DenseNet121.md): Instantiates the Densenet121 architecture. + +[`DenseNet169(...)`](../../../../tf/keras/applications/DenseNet169.md): Instantiates the Densenet169 architecture. + +[`DenseNet201(...)`](../../../../tf/keras/applications/DenseNet201.md): Instantiates the Densenet201 architecture. + +[`InceptionResNetV2(...)`](../../../../tf/keras/applications/InceptionResNetV2.md): Instantiates the Inception-ResNet v2 architecture. + +[`InceptionV3(...)`](../../../../tf/keras/applications/InceptionV3.md): Instantiates the Inception v3 architecture. + +[`MobileNet(...)`](../../../../tf/keras/applications/MobileNet.md): Instantiates the MobileNet architecture. + +[`MobileNetV2(...)`](../../../../tf/keras/applications/MobileNetV2.md): Instantiates the MobileNetV2 architecture. + +[`NASNetLarge(...)`](../../../../tf/keras/applications/NASNetLarge.md): Instantiates a NASNet model in ImageNet mode. + +[`NASNetMobile(...)`](../../../../tf/keras/applications/NASNetMobile.md): Instantiates a Mobile NASNet model in ImageNet mode. + +[`ResNet101(...)`](../../../../tf/keras/applications/ResNet101.md): Instantiates the ResNet101 architecture. + +[`ResNet101V2(...)`](../../../../tf/keras/applications/ResNet101V2.md): Instantiates the ResNet101V2 architecture. + +[`ResNet152(...)`](../../../../tf/keras/applications/ResNet152.md): Instantiates the ResNet152 architecture. + +[`ResNet152V2(...)`](../../../../tf/keras/applications/ResNet152V2.md): Instantiates the ResNet152V2 architecture. + +[`ResNet50(...)`](../../../../tf/keras/applications/ResNet50.md): Instantiates the ResNet50 architecture. + +[`ResNet50V2(...)`](../../../../tf/keras/applications/ResNet50V2.md): Instantiates the ResNet50V2 architecture. 
+ +[`VGG16(...)`](../../../../tf/keras/applications/VGG16.md): Instantiates the VGG16 model. + +[`VGG19(...)`](../../../../tf/keras/applications/VGG19.md): Instantiates the VGG19 architecture. + +[`Xception(...)`](../../../../tf/keras/applications/Xception.md): Instantiates the Xception architecture. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/densenet.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/densenet.md new file mode 100644 index 00000000000..b4125b6539f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/densenet.md @@ -0,0 +1,39 @@ +description: DenseNet models for Keras. + +
+ + +
+
+# Module: tf.compat.v1.keras.applications.densenet
+
+
+
+DenseNet models for Keras.
+
+
+#### Reference paper:
+
+- [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)
+  (CVPR 2017 Best Paper Award)
+
+
+## Functions
+
+[`DenseNet121(...)`](../../../../../tf/keras/applications/DenseNet121.md): Instantiates the Densenet121 architecture.
+
+[`DenseNet169(...)`](../../../../../tf/keras/applications/DenseNet169.md): Instantiates the Densenet169 architecture.
+
+[`DenseNet201(...)`](../../../../../tf/keras/applications/DenseNet201.md): Instantiates the Densenet201 architecture.
+
+[`decode_predictions(...)`](../../../../../tf/keras/applications/densenet/decode_predictions.md): Decodes the prediction of an ImageNet model.
+
+[`preprocess_input(...)`](../../../../../tf/keras/applications/densenet/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images.
+
diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/imagenet_utils.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/imagenet_utils.md
new file mode 100644
index 00000000000..2c355159131
--- /dev/null
+++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/imagenet_utils.md
@@ -0,0 +1,27 @@
+description: Utilities for ImageNet data preprocessing & prediction decoding.
+
+
+ + +
+ +# Module: tf.compat.v1.keras.applications.imagenet_utils + + + + + + + + + +Utilities for ImageNet data preprocessing & prediction decoding. + + + +## Functions + +[`decode_predictions(...)`](../../../../../tf/keras/applications/imagenet_utils/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../../../tf/keras/applications/imagenet_utils/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/inception_resnet_v2.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/inception_resnet_v2.md new file mode 100644 index 00000000000..26216d21638 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/inception_resnet_v2.md @@ -0,0 +1,36 @@ +description: Inception-ResNet V2 model for Keras. + +
+ + +
+ +# Module: tf.compat.v1.keras.applications.inception_resnet_v2 + + + + + + + + + +Inception-ResNet V2 model for Keras. + + + +#### Reference paper: + +- [Inception-v4, Inception-ResNet and the Impact of + Residual Connections on Learning](https://arxiv.org/abs/1602.07261) + (AAAI 2017) + + +## Functions + +[`InceptionResNetV2(...)`](../../../../../tf/keras/applications/InceptionResNetV2.md): Instantiates the Inception-ResNet v2 architecture. + +[`decode_predictions(...)`](../../../../../tf/keras/applications/inception_resnet_v2/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../../../tf/keras/applications/inception_resnet_v2/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/inception_v3.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/inception_v3.md new file mode 100644 index 00000000000..4e630045daf --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/inception_v3.md @@ -0,0 +1,35 @@ +description: Inception V3 model for Keras. + +
+ + +
+ +# Module: tf.compat.v1.keras.applications.inception_v3 + + + + + + + + + +Inception V3 model for Keras. + + + +#### Reference paper: + +- [Rethinking the Inception Architecture for Computer Vision]( + http://arxiv.org/abs/1512.00567) (CVPR 2016) + + +## Functions + +[`InceptionV3(...)`](../../../../../tf/keras/applications/InceptionV3.md): Instantiates the Inception v3 architecture. + +[`decode_predictions(...)`](../../../../../tf/keras/applications/inception_v3/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../../../tf/keras/applications/inception_v3/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/mobilenet.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/mobilenet.md new file mode 100644 index 00000000000..2859d6ae391 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/mobilenet.md @@ -0,0 +1,75 @@ +description: MobileNet v1 models for Keras. + +
+ + +
+
+# Module: tf.compat.v1.keras.applications.mobilenet
+
+
+
+MobileNet v1 models for Keras.
+
+
+MobileNet is a general architecture and can be used for multiple use cases.
+Depending on the use case, it can use different input layer sizes and
+different width factors. This allows different width models to reduce
+the number of multiply-adds and thereby
+reduce inference cost on mobile devices.
+
+MobileNets support any input size greater than 32 x 32, with larger image sizes
+offering better performance.
+The number of parameters and number of multiply-adds
+can be modified by using the `alpha` parameter,
+which increases/decreases the number of filters in each layer.
+By altering the image size and `alpha` parameter,
+all 16 models from the paper can be built, with ImageNet weights provided.
+
+The paper demonstrates the performance of MobileNets using `alpha` values of
+1.0 (also called 100% MobileNet), 0.75, 0.5 and 0.25.
+For each of these `alpha` values, weights for 4 different input image sizes
+are provided (224, 192, 160, 128).
+
+The following table describes the size and accuracy of the 100% MobileNet
+on size 224 x 224:
+
+| Width Multiplier (alpha) | ImageNet Acc | Multiply-Adds (M) | Params (M) |
+|--------------------------|--------------|-------------------|------------|
+| 1.0 MobileNet-224        | 70.6 %       | 529               | 4.2        |
+| 0.75 MobileNet-224       | 68.4 %       | 325               | 2.6        |
+| 0.50 MobileNet-224       | 63.7 %       | 149               | 1.3        |
+| 0.25 MobileNet-224       | 50.6 %       | 41                | 0.5        |
+
+The following table describes the performance of
+the 100% MobileNet on various input sizes:
+
+| Resolution          | ImageNet Acc | Multiply-Adds (M) | Params (M) |
+|---------------------|--------------|-------------------|------------|
+| 1.0 MobileNet-224   | 70.6 %       | 529               | 4.2        |
+| 1.0 MobileNet-192   | 69.1 %       | 529               | 4.2        |
+| 1.0 MobileNet-160   | 67.2 %       | 529               | 4.2        |
+| 1.0 MobileNet-128   | 64.4 %       | 529               | 4.2        |
+
+#### Reference paper:
+
+- [MobileNets: Efficient Convolutional Neural Networks for
+  Mobile Vision Applications](https://arxiv.org/abs/1704.04861)
+
+
+## Functions
+
+[`MobileNet(...)`](../../../../../tf/keras/applications/MobileNet.md): Instantiates the MobileNet architecture.
+
+[`decode_predictions(...)`](../../../../../tf/keras/applications/mobilenet/decode_predictions.md): Decodes the prediction of an ImageNet model.
+
+[`preprocess_input(...)`](../../../../../tf/keras/applications/mobilenet/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images.
+
diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/mobilenet_v2.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/mobilenet_v2.md
new file mode 100644
index 00000000000..df2ceccf923
--- /dev/null
+++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/mobilenet_v2.md
@@ -0,0 +1,82 @@
+description: MobileNet v2 models for Keras.
+
+
+ + +
+
+# Module: tf.compat.v1.keras.applications.mobilenet_v2
+
+
+
+MobileNet v2 models for Keras.
+
+
+MobileNetV2 is a general architecture and can be used for multiple use cases.
+Depending on the use case, it can use different input layer sizes and
+different width factors. This allows different width models to reduce
+the number of multiply-adds and thereby
+reduce inference cost on mobile devices.
+
+MobileNetV2 is very similar to the original MobileNet,
+except that it uses inverted residual blocks with
+bottlenecking features. It has a drastically lower
+parameter count than the original MobileNet.
+MobileNets support any input size greater
+than 32 x 32, with larger image sizes
+offering better performance.
+
+The number of parameters and number of multiply-adds
+can be modified by using the `alpha` parameter,
+which increases/decreases the number of filters in each layer.
+By altering the image size and `alpha` parameter,
+all 22 models from the paper can be built, with ImageNet weights provided.
+
+The paper demonstrates the performance of MobileNets using `alpha` values of
+0.35, 0.5, 0.75, 1.0 (also called 100% MobileNet), 1.3, and 1.4.
+For each of these `alpha` values, weights for 5 different input image sizes
+are provided (224, 192, 160, 128, and 96).
+
+The following table describes the performance of
+MobileNetV2 on various input sizes (MACs stands for Multiply-Adds):
+
+| Classification Checkpoint | MACs (M) | Parameters (M) | Top 1 Accuracy | Top 5 Accuracy |
+|---------------------------|----------|----------------|----------------|----------------|
+| [mobilenet_v2_1.4_224]    | 582      | 6.06           | 75.0           | 92.5           |
+| [mobilenet_v2_1.3_224]    | 509      | 5.34           | 74.4           | 92.1           |
+| [mobilenet_v2_1.0_224]    | 300      | 3.47           | 71.8           | 91.0           |
+| [mobilenet_v2_1.0_192]    | 221      | 3.47           | 70.7           | 90.1           |
+| [mobilenet_v2_1.0_160]    | 154      | 3.47           | 68.8           | 89.0           |
+| [mobilenet_v2_1.0_128]    | 99       | 3.47           | 65.3           | 86.9           |
+| [mobilenet_v2_1.0_96]     | 56       | 3.47           | 60.3           | 83.2           |
+| [mobilenet_v2_0.75_224]   | 209      | 2.61           | 69.8           | 89.6           |
+| [mobilenet_v2_0.75_192]   | 153      | 2.61           | 68.7           | 88.9           |
+| [mobilenet_v2_0.75_160]   | 107      | 2.61           | 66.4           | 87.3           |
+| [mobilenet_v2_0.75_128]   | 69       | 2.61           | 63.2           | 85.3           |
+| [mobilenet_v2_0.75_96]    | 39       | 2.61           | 58.8           | 81.6           |
+| [mobilenet_v2_0.5_224]    | 97       | 1.95           | 65.4           | 86.4           |
+| [mobilenet_v2_0.5_192]    | 71       | 1.95           | 63.9           | 85.4           |
+| [mobilenet_v2_0.5_160]    | 50       | 1.95           | 61.0           | 83.2           |
+| [mobilenet_v2_0.5_128]    | 32       | 1.95           | 57.7           | 80.8           |
+| [mobilenet_v2_0.5_96]     | 18       | 1.95           | 51.2           | 75.8           |
+| [mobilenet_v2_0.35_224]   | 59       | 1.66           | 60.3           | 82.9           |
+| [mobilenet_v2_0.35_192]   | 43       | 1.66           | 58.2           | 81.2           |
+| [mobilenet_v2_0.35_160]   | 30       | 1.66           | 55.7           | 79.1           |
+| [mobilenet_v2_0.35_128]   | 20       | 1.66           | 50.8           | 75.0           |
+| [mobilenet_v2_0.35_96]    | 11       | 1.66           | 45.5           | 70.4           |
+
+## Functions
+
+[`MobileNetV2(...)`](../../../../../tf/keras/applications/MobileNetV2.md): Instantiates the MobileNetV2 architecture.
+
+[`decode_predictions(...)`](../../../../../tf/keras/applications/mobilenet_v2/decode_predictions.md): Decodes the prediction of an ImageNet model.
+
+[`preprocess_input(...)`](../../../../../tf/keras/applications/mobilenet_v2/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images.
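+
+As a rough illustration of how the `alpha` width multiplier and the input
+resolution in the table above translate into code, the sketch below builds
+a model matching the `mobilenet_v2_0.35_96` row. The constructor arguments
+shown (`input_shape`, `alpha`, `include_top`, `weights`) are assumed from
+the MobileNetV2 signature, and the parameter count is only an expectation,
+not something this snippet verifies.
+
+```python
+import tensorflow.compat.v1 as tf
+
+# Illustrative sketch: a 0.35-width MobileNetV2 classifier at 96x96 input,
+# roughly the `mobilenet_v2_0.35_96` row of the table above.
+model = tf.keras.applications.MobileNetV2(
+    input_shape=(96, 96, 3),  # input resolution
+    alpha=0.35,               # width multiplier
+    include_top=True,         # keep the ImageNet classification head
+    weights="imagenet")       # pre-trained ImageNet weights
+
+model.summary()  # parameter count should be in line with the table above
+```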
+ diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/nasnet.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/nasnet.md new file mode 100644 index 00000000000..0834737a7ea --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/nasnet.md @@ -0,0 +1,54 @@ +description: NASNet-A models for Keras. + +
+ + +
+
+# Module: tf.compat.v1.keras.applications.nasnet
+
+
+
+NASNet-A models for Keras.
+
+
+NASNet refers to Neural Architecture Search Network, a family of models
+that were designed automatically by learning the model architectures
+directly on the dataset of interest.
+
+Here we consider NASNet-A, the highest-performance model that was found
+for the CIFAR-10 dataset, and then extended to the ImageNet 2012 dataset,
+obtaining state-of-the-art performance on CIFAR-10 and ImageNet 2012.
+Only the NASNet-A models, and their respective weights, which are suited
+for ImageNet 2012, are provided.
+
+The table below describes the performance on ImageNet 2012:
+
+| Architecture        | Top-1 Acc | Top-5 Acc | Multiply-Adds | Params (M) |
+|---------------------|-----------|-----------|---------------|------------|
+| NASNet-A (4 @ 1056) | 74.0 %    | 91.6 %    | 564 M         | 5.3        |
+| NASNet-A (6 @ 4032) | 82.7 %    | 96.2 %    | 23.8 B        | 88.9       |
+
+#### References:
+
+- [Learning Transferable Architectures for Scalable Image Recognition](https://arxiv.org/abs/1707.07012)
+  (CVPR 2018)
+
+
+## Functions
+
+[`NASNetLarge(...)`](../../../../../tf/keras/applications/NASNetLarge.md): Instantiates a NASNet model in ImageNet mode.
+
+[`NASNetMobile(...)`](../../../../../tf/keras/applications/NASNetMobile.md): Instantiates a Mobile NASNet model in ImageNet mode.
+
+[`decode_predictions(...)`](../../../../../tf/keras/applications/nasnet/decode_predictions.md): Decodes the prediction of an ImageNet model.
+
+[`preprocess_input(...)`](../../../../../tf/keras/applications/nasnet/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images.
+
diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/resnet.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/resnet.md
new file mode 100644
index 00000000000..a70342a3f3e
--- /dev/null
+++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/resnet.md
@@ -0,0 +1,33 @@
+description: ResNet models for Keras.
+
+
+ + +
+ +# Module: tf.compat.v1.keras.applications.resnet + + + + + + + + + +ResNet models for Keras. + + + +## Functions + +[`ResNet101(...)`](../../../../../tf/keras/applications/ResNet101.md): Instantiates the ResNet101 architecture. + +[`ResNet152(...)`](../../../../../tf/keras/applications/ResNet152.md): Instantiates the ResNet152 architecture. + +[`ResNet50(...)`](../../../../../tf/keras/applications/ResNet50.md): Instantiates the ResNet50 architecture. + +[`decode_predictions(...)`](../../../../../tf/keras/applications/resnet/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../../../tf/keras/applications/resnet/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/resnet50.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/resnet50.md new file mode 100644 index 00000000000..80c33a0f170 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/resnet50.md @@ -0,0 +1,29 @@ +description: Public API for tf.keras.applications.resnet50 namespace. + +
+ + +
+ +# Module: tf.compat.v1.keras.applications.resnet50 + + + + + + + + + +Public API for tf.keras.applications.resnet50 namespace. + + + +## Functions + +[`ResNet50(...)`](../../../../../tf/keras/applications/ResNet50.md): Instantiates the ResNet50 architecture. + +[`decode_predictions(...)`](../../../../../tf/keras/applications/resnet/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../../../tf/keras/applications/resnet/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/resnet_v2.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/resnet_v2.md new file mode 100644 index 00000000000..76f42b012d7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/resnet_v2.md @@ -0,0 +1,33 @@ +description: ResNet v2 models for Keras. + +
+ + +
+ +# Module: tf.compat.v1.keras.applications.resnet_v2 + + + + + + + + + +ResNet v2 models for Keras. + + + +## Functions + +[`ResNet101V2(...)`](../../../../../tf/keras/applications/ResNet101V2.md): Instantiates the ResNet101V2 architecture. + +[`ResNet152V2(...)`](../../../../../tf/keras/applications/ResNet152V2.md): Instantiates the ResNet152V2 architecture. + +[`ResNet50V2(...)`](../../../../../tf/keras/applications/ResNet50V2.md): Instantiates the ResNet50V2 architecture. + +[`decode_predictions(...)`](../../../../../tf/keras/applications/resnet_v2/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../../../tf/keras/applications/resnet_v2/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/vgg16.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/vgg16.md new file mode 100644 index 00000000000..747c2abbcfc --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/vgg16.md @@ -0,0 +1,29 @@ +description: VGG16 model for Keras. + +
+ + +
+ +# Module: tf.compat.v1.keras.applications.vgg16 + + + + + + + + + +VGG16 model for Keras. + + + +## Functions + +[`VGG16(...)`](../../../../../tf/keras/applications/VGG16.md): Instantiates the VGG16 model. + +[`decode_predictions(...)`](../../../../../tf/keras/applications/vgg16/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../../../tf/keras/applications/vgg16/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/vgg19.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/vgg19.md new file mode 100644 index 00000000000..206f5a81731 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/vgg19.md @@ -0,0 +1,35 @@ +description: VGG19 model for Keras. + +
+ + +
+ +# Module: tf.compat.v1.keras.applications.vgg19 + + + + + + + + + +VGG19 model for Keras. + + + +#### Reference: + +- [Very Deep Convolutional Networks for Large-Scale Image Recognition]( + https://arxiv.org/abs/1409.1556) (ICLR 2015) + + +## Functions + +[`VGG19(...)`](../../../../../tf/keras/applications/VGG19.md): Instantiates the VGG19 architecture. + +[`decode_predictions(...)`](../../../../../tf/keras/applications/vgg19/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../../../tf/keras/applications/vgg19/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/applications/xception.md b/site/en/api_docs/python/tf/compat/v1/keras/applications/xception.md new file mode 100644 index 00000000000..5c50e752ade --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/applications/xception.md @@ -0,0 +1,37 @@ +description: Xception V1 model for Keras. + +
+ + +
+ +# Module: tf.compat.v1.keras.applications.xception + + + + + + + + + +Xception V1 model for Keras. + + +On ImageNet, this model gets to a top-1 validation accuracy of 0.790 +and a top-5 validation accuracy of 0.945. + +#### Reference paper: + +- [Xception: Deep Learning with Depthwise Separable Convolutions]( + https://arxiv.org/abs/1610.02357) (CVPR 2017) + + +## Functions + +[`Xception(...)`](../../../../../tf/keras/applications/Xception.md): Instantiates the Xception architecture. + +[`decode_predictions(...)`](../../../../../tf/keras/applications/xception/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../../../tf/keras/applications/xception/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/backend.md b/site/en/api_docs/python/tf/compat/v1/keras/backend.md new file mode 100644 index 00000000000..8f0454ee6b0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/backend.md @@ -0,0 +1,317 @@ +description: Keras backend API. + +
+ + +
+ +# Module: tf.compat.v1.keras.backend + + + + + + + + + +Keras backend API. + + + +## Classes + +[`class name_scope`](../../../../tf/compat/v1/keras/backend/name_scope.md): A context manager for use when defining a Python op. + +## Functions + +[`abs(...)`](../../../../tf/keras/backend/abs.md): Element-wise absolute value. + +[`all(...)`](../../../../tf/keras/backend/all.md): Bitwise reduction (logical AND). + +[`any(...)`](../../../../tf/keras/backend/any.md): Bitwise reduction (logical OR). + +[`arange(...)`](../../../../tf/keras/backend/arange.md): Creates a 1D tensor containing a sequence of integers. + +[`argmax(...)`](../../../../tf/keras/backend/argmax.md): Returns the index of the maximum value along an axis. + +[`argmin(...)`](../../../../tf/keras/backend/argmin.md): Returns the index of the minimum value along an axis. + +[`backend(...)`](../../../../tf/keras/backend/backend.md): Publicly accessible method for determining the current backend. + +[`batch_dot(...)`](../../../../tf/keras/backend/batch_dot.md): Batchwise dot product. + +[`batch_flatten(...)`](../../../../tf/keras/backend/batch_flatten.md): Turn a nD tensor into a 2D tensor with same 0th dimension. + +[`batch_get_value(...)`](../../../../tf/keras/backend/batch_get_value.md): Returns the value of more than one tensor variable. + +[`batch_normalization(...)`](../../../../tf/keras/backend/batch_normalization.md): Applies batch normalization on x given mean, var, beta and gamma. + +[`batch_set_value(...)`](../../../../tf/keras/backend/batch_set_value.md): Sets the values of many tensor variables at once. + +[`bias_add(...)`](../../../../tf/keras/backend/bias_add.md): Adds a bias vector to a tensor. + +[`binary_crossentropy(...)`](../../../../tf/keras/backend/binary_crossentropy.md): Binary crossentropy between an output tensor and a target tensor. + +[`cast(...)`](../../../../tf/keras/backend/cast.md): Casts a tensor to a different dtype and returns it. + +[`cast_to_floatx(...)`](../../../../tf/keras/backend/cast_to_floatx.md): Cast a Numpy array to the default Keras float type. + +[`categorical_crossentropy(...)`](../../../../tf/keras/backend/categorical_crossentropy.md): Categorical crossentropy between an output tensor and a target tensor. + +[`clear_session(...)`](../../../../tf/keras/backend/clear_session.md): Destroys the current TF graph and session, and creates a new one. + +[`clip(...)`](../../../../tf/keras/backend/clip.md): Element-wise value clipping. + +[`concatenate(...)`](../../../../tf/keras/backend/concatenate.md): Concatenates a list of tensors alongside the specified axis. + +[`constant(...)`](../../../../tf/keras/backend/constant.md): Creates a constant tensor. + +[`conv1d(...)`](../../../../tf/keras/backend/conv1d.md): 1D convolution. + +[`conv2d(...)`](../../../../tf/keras/backend/conv2d.md): 2D convolution. + +[`conv2d_transpose(...)`](../../../../tf/keras/backend/conv2d_transpose.md): 2D deconvolution (i.e. + +[`conv3d(...)`](../../../../tf/keras/backend/conv3d.md): 3D convolution. + +[`cos(...)`](../../../../tf/keras/backend/cos.md): Computes cos of x element-wise. + +[`count_params(...)`](../../../../tf/keras/backend/count_params.md): Returns the static number of elements in a variable or tensor. + +[`ctc_batch_cost(...)`](../../../../tf/keras/backend/ctc_batch_cost.md): Runs CTC loss algorithm on each batch element. + +[`ctc_decode(...)`](../../../../tf/keras/backend/ctc_decode.md): Decodes the output of a softmax. 
+ +[`ctc_label_dense_to_sparse(...)`](../../../../tf/keras/backend/ctc_label_dense_to_sparse.md): Converts CTC labels from dense to sparse. + +[`cumprod(...)`](../../../../tf/keras/backend/cumprod.md): Cumulative product of the values in a tensor, alongside the specified axis. + +[`cumsum(...)`](../../../../tf/keras/backend/cumsum.md): Cumulative sum of the values in a tensor, alongside the specified axis. + +[`depthwise_conv2d(...)`](../../../../tf/keras/backend/depthwise_conv2d.md): 2D convolution with separable filters. + +[`dot(...)`](../../../../tf/keras/backend/dot.md): Multiplies 2 tensors (and/or variables) and returns a tensor. + +[`dropout(...)`](../../../../tf/keras/backend/dropout.md): Sets entries in `x` to zero at random, while scaling the entire tensor. + +[`dtype(...)`](../../../../tf/keras/backend/dtype.md): Returns the dtype of a Keras tensor or variable, as a string. + +[`elu(...)`](../../../../tf/keras/backend/elu.md): Exponential linear unit. + +[`epsilon(...)`](../../../../tf/keras/backend/epsilon.md): Returns the value of the fuzz factor used in numeric expressions. + +[`equal(...)`](../../../../tf/keras/backend/equal.md): Element-wise equality between two tensors. + +[`eval(...)`](../../../../tf/keras/backend/eval.md): Evaluates the value of a variable. + +[`exp(...)`](../../../../tf/keras/backend/exp.md): Element-wise exponential. + +[`expand_dims(...)`](../../../../tf/keras/backend/expand_dims.md): Adds a 1-sized dimension at index "axis". + +[`eye(...)`](../../../../tf/keras/backend/eye.md): Instantiate an identity matrix and returns it. + +[`flatten(...)`](../../../../tf/keras/backend/flatten.md): Flatten a tensor. + +[`floatx(...)`](../../../../tf/keras/backend/floatx.md): Returns the default float type, as a string. + +[`foldl(...)`](../../../../tf/keras/backend/foldl.md): Reduce elems using fn to combine them from left to right. + +[`foldr(...)`](../../../../tf/keras/backend/foldr.md): Reduce elems using fn to combine them from right to left. + +[`function(...)`](../../../../tf/keras/backend/function.md): Instantiates a Keras function. + +[`gather(...)`](../../../../tf/keras/backend/gather.md): Retrieves the elements of indices `indices` in the tensor `reference`. + +[`get_session(...)`](../../../../tf/compat/v1/keras/backend/get_session.md): Returns the TF session to be used by the backend. + +[`get_uid(...)`](../../../../tf/keras/backend/get_uid.md): Associates a string prefix with an integer counter in a TensorFlow graph. + +[`get_value(...)`](../../../../tf/keras/backend/get_value.md): Returns the value of a variable. + +[`gradients(...)`](../../../../tf/keras/backend/gradients.md): Returns the gradients of `loss` w.r.t. `variables`. + +[`greater(...)`](../../../../tf/keras/backend/greater.md): Element-wise truth value of (x > y). + +[`greater_equal(...)`](../../../../tf/keras/backend/greater_equal.md): Element-wise truth value of (x >= y). + +[`hard_sigmoid(...)`](../../../../tf/keras/backend/hard_sigmoid.md): Segment-wise linear approximation of sigmoid. + +[`image_data_format(...)`](../../../../tf/keras/backend/image_data_format.md): Returns the default image data format convention. + +[`in_test_phase(...)`](../../../../tf/keras/backend/in_test_phase.md): Selects `x` in test phase, and `alt` otherwise. + +[`in_top_k(...)`](../../../../tf/keras/backend/in_top_k.md): Returns whether the `targets` are in the top `k` `predictions`. + +[`in_train_phase(...)`](../../../../tf/keras/backend/in_train_phase.md): Selects `x` in train phase, and `alt` otherwise. 
+ +[`int_shape(...)`](../../../../tf/keras/backend/int_shape.md): Returns the shape of tensor or variable as a tuple of int or None entries. + +[`is_keras_tensor(...)`](../../../../tf/keras/backend/is_keras_tensor.md): Returns whether `x` is a Keras tensor. + +[`is_sparse(...)`](../../../../tf/keras/backend/is_sparse.md): Returns whether a tensor is a sparse tensor. + +[`l2_normalize(...)`](../../../../tf/keras/backend/l2_normalize.md): Normalizes a tensor wrt the L2 norm alongside the specified axis. + +[`learning_phase(...)`](../../../../tf/keras/backend/learning_phase.md): Returns the learning phase flag. + +[`learning_phase_scope(...)`](../../../../tf/keras/backend/learning_phase_scope.md): Provides a scope within which the learning phase is equal to `value`. + +[`less(...)`](../../../../tf/keras/backend/less.md): Element-wise truth value of (x < y). + +[`less_equal(...)`](../../../../tf/keras/backend/less_equal.md): Element-wise truth value of (x <= y). + +[`local_conv1d(...)`](../../../../tf/keras/backend/local_conv1d.md): Apply 1D conv with un-shared weights. + +[`local_conv2d(...)`](../../../../tf/keras/backend/local_conv2d.md): Apply 2D conv with un-shared weights. + +[`log(...)`](../../../../tf/keras/backend/log.md): Element-wise log. + +[`manual_variable_initialization(...)`](../../../../tf/keras/backend/manual_variable_initialization.md): Sets the manual variable initialization flag. + +[`map_fn(...)`](../../../../tf/keras/backend/map_fn.md): Map the function fn over the elements elems and return the outputs. + +[`max(...)`](../../../../tf/keras/backend/max.md): Maximum value in a tensor. + +[`maximum(...)`](../../../../tf/keras/backend/maximum.md): Element-wise maximum of two tensors. + +[`mean(...)`](../../../../tf/keras/backend/mean.md): Mean of a tensor, alongside the specified axis. + +[`min(...)`](../../../../tf/keras/backend/min.md): Minimum value in a tensor. + +[`minimum(...)`](../../../../tf/keras/backend/minimum.md): Element-wise minimum of two tensors. + +[`moving_average_update(...)`](../../../../tf/keras/backend/moving_average_update.md): Compute the moving average of a variable. + +[`ndim(...)`](../../../../tf/keras/backend/ndim.md): Returns the number of axes in a tensor, as an integer. + +[`normalize_batch_in_training(...)`](../../../../tf/keras/backend/normalize_batch_in_training.md): Computes mean and std for batch then apply batch_normalization on batch. + +[`not_equal(...)`](../../../../tf/keras/backend/not_equal.md): Element-wise inequality between two tensors. + +[`one_hot(...)`](../../../../tf/keras/backend/one_hot.md): Computes the one-hot representation of an integer tensor. + +[`ones(...)`](../../../../tf/keras/backend/ones.md): Instantiates an all-ones variable and returns it. + +[`ones_like(...)`](../../../../tf/keras/backend/ones_like.md): Instantiates an all-ones variable of the same shape as another tensor. + +[`permute_dimensions(...)`](../../../../tf/keras/backend/permute_dimensions.md): Permutes axes in a tensor. + +[`placeholder(...)`](../../../../tf/keras/backend/placeholder.md): Instantiates a placeholder tensor and returns it. + +[`pool2d(...)`](../../../../tf/keras/backend/pool2d.md): 2D Pooling. + +[`pool3d(...)`](../../../../tf/keras/backend/pool3d.md): 3D Pooling. + +[`pow(...)`](../../../../tf/keras/backend/pow.md): Element-wise exponentiation. + +[`print_tensor(...)`](../../../../tf/keras/backend/print_tensor.md): Prints `message` and the tensor value when evaluated. 
+ +[`prod(...)`](../../../../tf/keras/backend/prod.md): Multiplies the values in a tensor, alongside the specified axis. + +[`random_binomial(...)`](../../../../tf/keras/backend/random_binomial.md): Returns a tensor with random binomial distribution of values. + +[`random_normal(...)`](../../../../tf/keras/backend/random_normal.md): Returns a tensor with normal distribution of values. + +[`random_normal_variable(...)`](../../../../tf/keras/backend/random_normal_variable.md): Instantiates a variable with values drawn from a normal distribution. + +[`random_uniform(...)`](../../../../tf/keras/backend/random_uniform.md): Returns a tensor with uniform distribution of values. + +[`random_uniform_variable(...)`](../../../../tf/keras/backend/random_uniform_variable.md): Instantiates a variable with values drawn from a uniform distribution. + +[`relu(...)`](../../../../tf/keras/backend/relu.md): Rectified linear unit. + +[`repeat(...)`](../../../../tf/keras/backend/repeat.md): Repeats a 2D tensor. + +[`repeat_elements(...)`](../../../../tf/keras/backend/repeat_elements.md): Repeats the elements of a tensor along an axis, like `np.repeat`. + +[`reset_uids(...)`](../../../../tf/keras/backend/reset_uids.md): Resets graph identifiers. + +[`reshape(...)`](../../../../tf/keras/backend/reshape.md): Reshapes a tensor to the specified shape. + +[`resize_images(...)`](../../../../tf/keras/backend/resize_images.md): Resizes the images contained in a 4D tensor. + +[`resize_volumes(...)`](../../../../tf/keras/backend/resize_volumes.md): Resizes the volume contained in a 5D tensor. + +[`reverse(...)`](../../../../tf/keras/backend/reverse.md): Reverse a tensor along the specified axes. + +[`rnn(...)`](../../../../tf/keras/backend/rnn.md): Iterates over the time dimension of a tensor. + +[`round(...)`](../../../../tf/keras/backend/round.md): Element-wise rounding to the closest integer. + +[`separable_conv2d(...)`](../../../../tf/keras/backend/separable_conv2d.md): 2D convolution with separable filters. + +[`set_epsilon(...)`](../../../../tf/keras/backend/set_epsilon.md): Sets the value of the fuzz factor used in numeric expressions. + +[`set_floatx(...)`](../../../../tf/keras/backend/set_floatx.md): Sets the default float type. + +[`set_image_data_format(...)`](../../../../tf/keras/backend/set_image_data_format.md): Sets the value of the image data format convention. + +[`set_learning_phase(...)`](../../../../tf/keras/backend/set_learning_phase.md): Sets the learning phase to a fixed value. + +[`set_session(...)`](../../../../tf/compat/v1/keras/backend/set_session.md): Sets the global TensorFlow session. + +[`set_value(...)`](../../../../tf/keras/backend/set_value.md): Sets the value of a variable, from a Numpy array. + +[`shape(...)`](../../../../tf/keras/backend/shape.md): Returns the symbolic shape of a tensor or variable. + +[`sigmoid(...)`](../../../../tf/keras/backend/sigmoid.md): Element-wise sigmoid. + +[`sign(...)`](../../../../tf/keras/backend/sign.md): Element-wise sign. + +[`sin(...)`](../../../../tf/keras/backend/sin.md): Computes sin of x element-wise. + +[`softmax(...)`](../../../../tf/keras/backend/softmax.md): Softmax of a tensor. + +[`softplus(...)`](../../../../tf/keras/backend/softplus.md): Softplus of a tensor. + +[`softsign(...)`](../../../../tf/keras/backend/softsign.md): Softsign of a tensor. + +[`sparse_categorical_crossentropy(...)`](../../../../tf/keras/backend/sparse_categorical_crossentropy.md): Categorical crossentropy with integer targets. 
+ +[`spatial_2d_padding(...)`](../../../../tf/keras/backend/spatial_2d_padding.md): Pads the 2nd and 3rd dimensions of a 4D tensor. + +[`spatial_3d_padding(...)`](../../../../tf/keras/backend/spatial_3d_padding.md): Pads 5D tensor with zeros along the depth, height, width dimensions. + +[`sqrt(...)`](../../../../tf/keras/backend/sqrt.md): Element-wise square root. + +[`square(...)`](../../../../tf/keras/backend/square.md): Element-wise square. + +[`squeeze(...)`](../../../../tf/keras/backend/squeeze.md): Removes a 1-dimension from the tensor at index "axis". + +[`stack(...)`](../../../../tf/keras/backend/stack.md): Stacks a list of rank `R` tensors into a rank `R+1` tensor. + +[`std(...)`](../../../../tf/keras/backend/std.md): Standard deviation of a tensor, alongside the specified axis. + +[`stop_gradient(...)`](../../../../tf/keras/backend/stop_gradient.md): Returns `variables` but with zero gradient w.r.t. every other variable. + +[`sum(...)`](../../../../tf/keras/backend/sum.md): Sum of the values in a tensor, alongside the specified axis. + +[`switch(...)`](../../../../tf/keras/backend/switch.md): Switches between two operations depending on a scalar value. + +[`tanh(...)`](../../../../tf/keras/backend/tanh.md): Element-wise tanh. + +[`temporal_padding(...)`](../../../../tf/keras/backend/temporal_padding.md): Pads the middle dimension of a 3D tensor. + +[`tile(...)`](../../../../tf/keras/backend/tile.md): Creates a tensor by tiling `x` by `n`. + +[`to_dense(...)`](../../../../tf/keras/backend/to_dense.md): Converts a sparse tensor into a dense tensor and returns it. + +[`transpose(...)`](../../../../tf/keras/backend/transpose.md): Transposes a tensor and returns it. + +[`truncated_normal(...)`](../../../../tf/keras/backend/truncated_normal.md): Returns a tensor with truncated random normal distribution of values. + +[`update(...)`](../../../../tf/keras/backend/update.md) + +[`update_add(...)`](../../../../tf/keras/backend/update_add.md): Update the value of `x` by adding `increment`. + +[`update_sub(...)`](../../../../tf/keras/backend/update_sub.md): Update the value of `x` by subtracting `decrement`. + +[`var(...)`](../../../../tf/keras/backend/var.md): Variance of a tensor, alongside the specified axis. + +[`variable(...)`](../../../../tf/keras/backend/variable.md): Instantiates a variable and returns it. + +[`zeros(...)`](../../../../tf/keras/backend/zeros.md): Instantiates an all-zeros variable and returns it. + +[`zeros_like(...)`](../../../../tf/keras/backend/zeros_like.md): Instantiates an all-zeros variable of the same shape as another tensor. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/backend/get_session.md b/site/en/api_docs/python/tf/compat/v1/keras/backend/get_session.md new file mode 100644 index 00000000000..25441109060 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/backend/get_session.md @@ -0,0 +1,76 @@ +description: Returns the TF session to be used by the backend. + +
+ + +
+
+# tf.compat.v1.keras.backend.get_session
+
+
+
+Returns the TF session to be used by the backend.
+
+
+If a default TensorFlow session is available, we will return it.
+
+Else, we will return the global Keras session, assuming it matches
+the current graph.
+
+If no global Keras session exists at this point,
+we will create a new global session.
+
+Note that you can manually set the global session
+via `K.set_session(sess)`.
+
+
+`op_input_list`
+
+An optional sequence of tensors or ops, which will be used
+to determine the current graph. Otherwise the default graph will be
+used.
+
+ + + + + + + + + + + +
+A TensorFlow session. +
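+
+For example, a minimal sketch (assuming graph mode via
+`tf.compat.v1.disable_v2_behavior()`) that reuses the backend-managed
+session to evaluate a small tensor, rather than creating a new `Session`:
+
+```python
+import tensorflow.compat.v1 as tf
+
+tf.disable_v2_behavior()  # this sketch assumes graph mode
+
+x = tf.keras.backend.constant([1.0, 2.0, 3.0])
+y = x * 2.0
+
+# Reuse the session managed by the Keras backend instead of opening a new one.
+sess = tf.keras.backend.get_session()
+print(sess.run(y))  # expected: [2. 4. 6.]
+```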
+ diff --git a/site/en/api_docs/python/tf/compat/v1/keras/backend/name_scope.md b/site/en/api_docs/python/tf/compat/v1/keras/backend/name_scope.md new file mode 100644 index 00000000000..ebf2d053a5e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/backend/name_scope.md @@ -0,0 +1,161 @@ +description: A context manager for use when defining a Python op. + +
+ + + + + +
+ +# tf.compat.v1.keras.backend.name_scope + + + + + + + + + +A context manager for use when defining a Python op. + + + + + + + + + +This context manager validates that the given `values` are from the +same graph, makes that graph the default graph, and pushes a +name scope in that graph (see +tf.Graph.name_scope +for more details on that). + +For example, to define a new Python op called `my_op`: + +```python +def my_op(a, b, c, name=None): + with tf.name_scope(name, "MyOp", [a, b, c]) as scope: + a = tf.convert_to_tensor(a, name="a") + b = tf.convert_to_tensor(b, name="b") + c = tf.convert_to_tensor(c, name="c") + # Define some computation that uses `a`, `b`, and `c`. + return foo_op(..., name=scope) +``` + + + + + + + + + + + + + + + + +
+`name` + +The name argument that is passed to the op function. +
+`default_name` + +The default name to use if the `name` argument is `None`. +
+`values` + +The list of `Tensor` arguments that are passed to the op function. +
+ + + + + + + + + + + + +
+`TypeError` + +if `default_name` is passed in but not a string. +
+ + + + + + + + + + + + + + +
+`name` + + +
+ + + +## Methods + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/backend/set_session.md b/site/en/api_docs/python/tf/compat/v1/keras/backend/set_session.md new file mode 100644 index 00000000000..d6cd68c8f98 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/backend/set_session.md @@ -0,0 +1,50 @@ +description: Sets the global TensorFlow session. + +
+ + +
+ +# tf.compat.v1.keras.backend.set_session + + + + + + + + + +Sets the global TensorFlow session. + + + + + + + + + + + + + + + + + +
+`session` + +A TF Session. +
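+
+For instance, a minimal sketch that registers a session built with an
+illustrative `ConfigProto` (the specific options are assumptions for the
+example, not recommendations), so that subsequent Keras operations run in it:
+
+```python
+import tensorflow.compat.v1 as tf
+
+tf.disable_v2_behavior()  # this sketch assumes graph mode
+
+config = tf.ConfigProto(allow_soft_placement=True)
+config.gpu_options.allow_growth = True  # illustrative: allocate GPU memory on demand
+sess = tf.Session(config=config)
+
+# Make this the session used by tf.compat.v1.keras from here on.
+tf.keras.backend.set_session(sess)
+```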
+ diff --git a/site/en/api_docs/python/tf/compat/v1/keras/callbacks.md b/site/en/api_docs/python/tf/compat/v1/keras/callbacks.md new file mode 100644 index 00000000000..80fe8783b92 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/callbacks.md @@ -0,0 +1,49 @@ +description: Callbacks: utilities called at certain points during model training. + +
+ + +
+ +# Module: tf.compat.v1.keras.callbacks + + + + + + + + + +Callbacks: utilities called at certain points during model training. + + + +## Classes + +[`class BaseLogger`](../../../../tf/keras/callbacks/BaseLogger.md): Callback that accumulates epoch averages of metrics. + +[`class CSVLogger`](../../../../tf/keras/callbacks/CSVLogger.md): Callback that streams epoch results to a csv file. + +[`class Callback`](../../../../tf/keras/callbacks/Callback.md): Abstract base class used to build new callbacks. + +[`class EarlyStopping`](../../../../tf/keras/callbacks/EarlyStopping.md): Stop training when a monitored metric has stopped improving. + +[`class History`](../../../../tf/keras/callbacks/History.md): Callback that records events into a `History` object. + +[`class LambdaCallback`](../../../../tf/keras/callbacks/LambdaCallback.md): Callback for creating simple, custom callbacks on-the-fly. + +[`class LearningRateScheduler`](../../../../tf/keras/callbacks/LearningRateScheduler.md): Learning rate scheduler. + +[`class ModelCheckpoint`](../../../../tf/keras/callbacks/ModelCheckpoint.md): Callback to save the Keras model or model weights at some frequency. + +[`class ProgbarLogger`](../../../../tf/keras/callbacks/ProgbarLogger.md): Callback that prints metrics to stdout. + +[`class ReduceLROnPlateau`](../../../../tf/keras/callbacks/ReduceLROnPlateau.md): Reduce learning rate when a metric has stopped improving. + +[`class RemoteMonitor`](../../../../tf/keras/callbacks/RemoteMonitor.md): Callback used to stream events to a server. + +[`class TensorBoard`](../../../../tf/compat/v1/keras/callbacks/TensorBoard.md): Enable visualizations for TensorBoard. + +[`class TerminateOnNaN`](../../../../tf/keras/callbacks/TerminateOnNaN.md): Callback that terminates training when a NaN loss is encountered. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/callbacks/TensorBoard.md b/site/en/api_docs/python/tf/compat/v1/keras/callbacks/TensorBoard.md new file mode 100644 index 00000000000..00275584315 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/callbacks/TensorBoard.md @@ -0,0 +1,236 @@ +description: Enable visualizations for TensorBoard. + +
+ + + + + +
+ +# tf.compat.v1.keras.callbacks.TensorBoard + + + + + + + + + +Enable visualizations for TensorBoard. + +Inherits From: [`Callback`](../../../../../tf/keras/callbacks/Callback.md) + + + + + + + +TensorBoard is a visualization tool provided with TensorFlow. + +This callback logs events for TensorBoard, including: +* Metrics summary plots +* Training graph visualization +* Activation histograms +* Sampled profiling + +If you have installed TensorFlow with pip, you should be able +to launch TensorBoard from the command line: + +```sh +tensorboard --logdir=path_to_your_logs +``` + +You can find more information about TensorBoard +[here](https://www.tensorflow.org/get_started/summaries_and_tensorboard). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`log_dir` + +the path of the directory where to save the log files to be +parsed by TensorBoard. +
+`histogram_freq` + +frequency (in epochs) at which to compute activation and +weight histograms for the layers of the model. If set to 0, histograms +won't be computed. Validation data (or split) must be specified for +histogram visualizations. +
+`write_graph` + +whether to visualize the graph in TensorBoard. The log file +can become quite large when write_graph is set to True. +
+`write_grads` + +whether to visualize gradient histograms in TensorBoard. +`histogram_freq` must be greater than 0. +
+`batch_size` + +size of batch of inputs to feed to the network for histograms +computation. +
+`write_images`
+
+whether to write model weights to visualize as images in
+TensorBoard.
+
+`embeddings_freq` + +frequency (in epochs) at which selected embedding layers +will be saved. If set to 0, embeddings won't be computed. Data to be +visualized in TensorBoard's Embedding tab must be passed as +`embeddings_data`. +
+`embeddings_layer_names`
+
+a list of names of layers to keep an eye on. If None
+or an empty list, all embedding layers will be watched.
+
+`embeddings_metadata`
+
+a dictionary which maps layer names to the file name in
+which metadata for that embedding layer is saved. See the
+[details](https://www.tensorflow.org/how_tos/embedding_viz/#metadata_optional)
+about the metadata file format. If the same metadata file is
+used for all embedding layers, a single string can be passed.
+
+`embeddings_data`
+
+data to be embedded at layers specified in
+`embeddings_layer_names`. Numpy array (if the model has a single input)
+or list of Numpy arrays (if the model has multiple inputs). Learn
+[more about embeddings](https://www.tensorflow.org/programmers_guide/embedding).
+
+`update_freq` + +`'batch'` or `'epoch'` or integer. When using `'batch'`, +writes the losses and metrics to TensorBoard after each batch. The same +applies for `'epoch'`. If using an integer, let's say `1000`, the +callback will write the metrics and losses to TensorBoard every 1000 +samples. Note that writing too frequently to TensorBoard can slow down +your training. +
+`profile_batch` + +Profile the batch to sample compute characteristics. By +default, it will profile the second batch. Set profile_batch=0 to +disable profiling. +
+ + + + + + + + + + + + +
+`ValueError` + +If histogram_freq is set and no validation data is provided. +
+ + + + +#### Eager Compatibility +Using the `TensorBoard` callback will work when eager execution is enabled, +with the restriction that outputting histogram summaries of weights and +gradients is not supported. Consequently, `histogram_freq` will be ignored. + + + +## Methods + +

set_model

+ +View source + + + +Sets Keras model and creates summary ops. + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/constraints.md b/site/en/api_docs/python/tf/compat/v1/keras/constraints.md new file mode 100644 index 00000000000..dbdbd4ea0fd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/constraints.md @@ -0,0 +1,53 @@ +description: Constraints: functions that impose constraints on weight values. + +
+ + +
+ +# Module: tf.compat.v1.keras.constraints + + + + + + + + + +Constraints: functions that impose constraints on weight values. + + + +## Classes + +[`class Constraint`](../../../../tf/keras/constraints/Constraint.md) + +[`class MaxNorm`](../../../../tf/keras/constraints/MaxNorm.md): MaxNorm weight constraint. + +[`class MinMaxNorm`](../../../../tf/keras/constraints/MinMaxNorm.md): MinMaxNorm weight constraint. + +[`class NonNeg`](../../../../tf/keras/constraints/NonNeg.md): Constrains the weights to be non-negative. + +[`class RadialConstraint`](../../../../tf/keras/constraints/RadialConstraint.md): Constrains `Conv2D` kernel weights to be the same for each radius. + +[`class UnitNorm`](../../../../tf/keras/constraints/UnitNorm.md): Constrains the weights incident to each hidden unit to have unit norm. + +[`class max_norm`](../../../../tf/keras/constraints/MaxNorm.md): MaxNorm weight constraint. + +[`class min_max_norm`](../../../../tf/keras/constraints/MinMaxNorm.md): MinMaxNorm weight constraint. + +[`class non_neg`](../../../../tf/keras/constraints/NonNeg.md): Constrains the weights to be non-negative. + +[`class radial_constraint`](../../../../tf/keras/constraints/RadialConstraint.md): Constrains `Conv2D` kernel weights to be the same for each radius. + +[`class unit_norm`](../../../../tf/keras/constraints/UnitNorm.md): Constrains the weights incident to each hidden unit to have unit norm. + +## Functions + +[`deserialize(...)`](../../../../tf/keras/constraints/deserialize.md) + +[`get(...)`](../../../../tf/keras/constraints/get.md) + +[`serialize(...)`](../../../../tf/keras/constraints/serialize.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/datasets.md b/site/en/api_docs/python/tf/compat/v1/keras/datasets.md new file mode 100644 index 00000000000..e9768ef785b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/datasets.md @@ -0,0 +1,37 @@ +description: Public API for tf.keras.datasets namespace. + +
+ + +
+ +# Module: tf.compat.v1.keras.datasets + + + + + + + + + +Public API for tf.keras.datasets namespace. + + + +## Modules + +[`boston_housing`](../../../../tf/compat/v1/keras/datasets/boston_housing.md) module: Boston housing price regression dataset. + +[`cifar10`](../../../../tf/compat/v1/keras/datasets/cifar10.md) module: CIFAR10 small images classification dataset. + +[`cifar100`](../../../../tf/compat/v1/keras/datasets/cifar100.md) module: CIFAR100 small images classification dataset. + +[`fashion_mnist`](../../../../tf/compat/v1/keras/datasets/fashion_mnist.md) module: Fashion-MNIST dataset. + +[`imdb`](../../../../tf/compat/v1/keras/datasets/imdb.md) module: IMDB sentiment classification dataset. + +[`mnist`](../../../../tf/compat/v1/keras/datasets/mnist.md) module: MNIST handwritten digits dataset. + +[`reuters`](../../../../tf/compat/v1/keras/datasets/reuters.md) module: Reuters topic classification dataset. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/datasets/boston_housing.md b/site/en/api_docs/python/tf/compat/v1/keras/datasets/boston_housing.md new file mode 100644 index 00000000000..06f5ce15912 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/datasets/boston_housing.md @@ -0,0 +1,25 @@ +description: Boston housing price regression dataset. + +
+ + +
+ +# Module: tf.compat.v1.keras.datasets.boston_housing + + + + + + + + + +Boston housing price regression dataset. + + + +## Functions + +[`load_data(...)`](../../../../../tf/keras/datasets/boston_housing/load_data.md): Loads the Boston Housing dataset. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/datasets/cifar10.md b/site/en/api_docs/python/tf/compat/v1/keras/datasets/cifar10.md new file mode 100644 index 00000000000..971003fd00a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/datasets/cifar10.md @@ -0,0 +1,25 @@ +description: CIFAR10 small images classification dataset. + +
+ + +
+ +# Module: tf.compat.v1.keras.datasets.cifar10 + + + + + + + + + +CIFAR10 small images classification dataset. + + + +## Functions + +[`load_data(...)`](../../../../../tf/keras/datasets/cifar10/load_data.md): Loads [CIFAR10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/datasets/cifar100.md b/site/en/api_docs/python/tf/compat/v1/keras/datasets/cifar100.md new file mode 100644 index 00000000000..b0a9762ebe6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/datasets/cifar100.md @@ -0,0 +1,25 @@ +description: CIFAR100 small images classification dataset. + +
+ + +
+ +# Module: tf.compat.v1.keras.datasets.cifar100 + + + + + + + + + +CIFAR100 small images classification dataset. + + + +## Functions + +[`load_data(...)`](../../../../../tf/keras/datasets/cifar100/load_data.md): Loads [CIFAR100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/datasets/fashion_mnist.md b/site/en/api_docs/python/tf/compat/v1/keras/datasets/fashion_mnist.md new file mode 100644 index 00000000000..ad83c4252a2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/datasets/fashion_mnist.md @@ -0,0 +1,25 @@ +description: Fashion-MNIST dataset. + +
+ + +
+ +# Module: tf.compat.v1.keras.datasets.fashion_mnist + + + + + + + + + +Fashion-MNIST dataset. + + + +## Functions + +[`load_data(...)`](../../../../../tf/keras/datasets/fashion_mnist/load_data.md): Loads the Fashion-MNIST dataset. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/datasets/imdb.md b/site/en/api_docs/python/tf/compat/v1/keras/datasets/imdb.md new file mode 100644 index 00000000000..a716828dd3f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/datasets/imdb.md @@ -0,0 +1,27 @@ +description: IMDB sentiment classification dataset. + +
+ + +
+ +# Module: tf.compat.v1.keras.datasets.imdb + + + + + + + + + +IMDB sentiment classification dataset. + + + +## Functions + +[`get_word_index(...)`](../../../../../tf/keras/datasets/imdb/get_word_index.md): Retrieves a dict mapping words to their index in the IMDB dataset. + +[`load_data(...)`](../../../../../tf/keras/datasets/imdb/load_data.md): Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/). + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/datasets/mnist.md b/site/en/api_docs/python/tf/compat/v1/keras/datasets/mnist.md new file mode 100644 index 00000000000..f7b4ad9fdc0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/datasets/mnist.md @@ -0,0 +1,25 @@ +description: MNIST handwritten digits dataset. + +
+ + +
+ +# Module: tf.compat.v1.keras.datasets.mnist + + + + + + + + + +MNIST handwritten digits dataset. + + + +## Functions + +[`load_data(...)`](../../../../../tf/keras/datasets/mnist/load_data.md): Loads the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/datasets/reuters.md b/site/en/api_docs/python/tf/compat/v1/keras/datasets/reuters.md new file mode 100644 index 00000000000..5e2f324dfd0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/datasets/reuters.md @@ -0,0 +1,27 @@ +description: Reuters topic classification dataset. + +
+ + +
+ +# Module: tf.compat.v1.keras.datasets.reuters + + + + + + + + + +Reuters topic classification dataset. + + + +## Functions + +[`get_word_index(...)`](../../../../../tf/keras/datasets/reuters/get_word_index.md): Retrieves a dict mapping words to their index in the Reuters dataset. + +[`load_data(...)`](../../../../../tf/keras/datasets/reuters/load_data.md): Loads the Reuters newswire classification dataset. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/estimator.md b/site/en/api_docs/python/tf/compat/v1/keras/estimator.md new file mode 100644 index 00000000000..b10ba8840be --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/estimator.md @@ -0,0 +1,25 @@ +description: Keras estimator API. + +
+ + +
+ +# Module: tf.compat.v1.keras.estimator + + + + + + + + + +Keras estimator API. + + + +## Functions + +[`model_to_estimator(...)`](../../../../tf/compat/v1/keras/estimator/model_to_estimator.md): Constructs an `Estimator` instance from given keras model. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/estimator/model_to_estimator.md b/site/en/api_docs/python/tf/compat/v1/keras/estimator/model_to_estimator.md new file mode 100644 index 00000000000..4528683c67b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/estimator/model_to_estimator.md @@ -0,0 +1,198 @@ +description: Constructs an Estimator instance from given keras model. + +
+ + +
+ +# tf.compat.v1.keras.estimator.model_to_estimator + + + + + + + + + +Constructs an `Estimator` instance from given keras model. + + + + + + + +If you use infrastructure or other tooling that relies on Estimators, you can +still build a Keras model and use model_to_estimator to convert the Keras +model to an Estimator for use with downstream systems. + +For usage example, please see: +[Creating estimators from Keras +Models](https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models). + +#### Sample Weights: + + +Estimators returned by `model_to_estimator` are configured so that they can +handle sample weights (similar to `keras_model.fit(x, y, sample_weights)`). + +To pass sample weights when training or evaluating the Estimator, the first +item returned by the input function should be a dictionary with keys +`features` and `sample_weights`. Example below: + +```python +keras_model = tf.keras.Model(...) +keras_model.compile(...) + +estimator = tf.keras.estimator.model_to_estimator(keras_model) + +def input_fn(): + return dataset_ops.Dataset.from_tensors( + ({'features': features, 'sample_weights': sample_weights}, + targets)) + +estimator.train(input_fn, steps=1) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`keras_model` + +A compiled Keras model object. This argument is mutually +exclusive with `keras_model_path`. Estimator's `model_fn` uses the +structure of the model to clone the model. Defaults to `None`. +
+`keras_model_path` + +Path to a compiled Keras model saved on disk, in HDF5 +format, which can be generated with the `save()` method of a Keras model. +This argument is mutually exclusive with `keras_model`. +Defaults to `None`. +
+`custom_objects`
+
+Dictionary for cloning customized objects. This is
+used with classes that are not part of this pip package. For example, if
+a user maintains a `relu6` class that inherits from tf.keras.layers.Layer,
+then pass `custom_objects={'relu6': relu6}`. Defaults to `None`.
+
+`model_dir`
+
+Directory to save `Estimator` model parameters, graph, summary
+files for TensorBoard, etc. If unset, a directory will be created with
+`tempfile.mkdtemp`.
+
+`config`
+
+`RunConfig` to configure the `Estimator`. Allows setting up things in
+`model_fn` based on configuration such as `num_ps_replicas` or
+`model_dir`. Defaults to `None`. If both `config.model_dir` and the
+`model_dir` argument (above) are specified, the `model_dir` **argument**
+takes precedence.
+
+`checkpoint_format` + +Sets the format of the checkpoint saved by the estimator +when training. May be `saver` or `checkpoint`, depending on whether to +save checkpoints from `tf.train.Saver` or tf.train.Checkpoint. This +argument currently defaults to `saver`. When 2.0 is released, the default +will be `checkpoint`. Estimators use name-based `tf.train.Saver` +checkpoints, while Keras models use object-based checkpoints from +tf.train.Checkpoint. Currently, saving object-based checkpoints from +`model_to_estimator` is only supported by Functional and Sequential +models. Defaults to 'saver'. +
+ + + + + + + + + + + +
+An `Estimator` from the given Keras model.
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If neither keras_model nor keras_model_path was given. +
+`ValueError`
+
+If both keras_model and keras_model_path were given.
+
+`ValueError` + +If the keras_model_path is a GCS URI. +
+`ValueError` + +If keras_model has not been compiled. +
+`ValueError` + +If an invalid checkpoint_format was given. +
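+
+A minimal sketch of a typical conversion using the arguments documented
+above (the toy model architecture and the `/tmp/keras_estimator` directory
+are illustrative placeholders, not part of the API):
+
+```python
+import tensorflow as tf
+
+# Build and compile a small Keras model (placeholder architecture).
+keras_model = tf.keras.Sequential([
+    tf.keras.layers.Dense(16, activation='relu', input_shape=(10,)),
+    tf.keras.layers.Dense(1)
+])
+keras_model.compile(optimizer='adam', loss='mse')
+
+# Convert it to an Estimator; checkpoints and TensorBoard summaries are
+# written to model_dir, using name-based tf.train.Saver checkpoints.
+estimator = tf.compat.v1.keras.estimator.model_to_estimator(
+    keras_model=keras_model,
+    model_dir='/tmp/keras_estimator',
+    checkpoint_format='saver')
+```
+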
+ diff --git a/site/en/api_docs/python/tf/compat/v1/keras/experimental.md b/site/en/api_docs/python/tf/compat/v1/keras/experimental.md new file mode 100644 index 00000000000..8e061d4b22f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/experimental.md @@ -0,0 +1,47 @@ +description: Public API for tf.keras.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.keras.experimental + + + + + + + + + +Public API for tf.keras.experimental namespace. + + + +## Classes + +[`class CosineDecay`](../../../../tf/keras/experimental/CosineDecay.md): A LearningRateSchedule that uses a cosine decay schedule. + +[`class CosineDecayRestarts`](../../../../tf/keras/experimental/CosineDecayRestarts.md): A LearningRateSchedule that uses a cosine decay schedule with restarts. + +[`class LinearCosineDecay`](../../../../tf/keras/experimental/LinearCosineDecay.md): A LearningRateSchedule that uses a linear cosine decay schedule. + +[`class LinearModel`](../../../../tf/keras/experimental/LinearModel.md): Linear Model for regression and classification problems. + +[`class NoisyLinearCosineDecay`](../../../../tf/keras/experimental/NoisyLinearCosineDecay.md): A LearningRateSchedule that uses a noisy linear cosine decay schedule. + +[`class PeepholeLSTMCell`](../../../../tf/keras/experimental/PeepholeLSTMCell.md): Equivalent to LSTMCell class but adds peephole connections. + +[`class SequenceFeatures`](../../../../tf/keras/experimental/SequenceFeatures.md): A layer for sequence input. + +[`class WideDeepModel`](../../../../tf/keras/experimental/WideDeepModel.md): Wide & Deep Model for regression and classification problems. + +## Functions + +[`export_saved_model(...)`](../../../../tf/compat/v1/keras/experimental/export_saved_model.md): Exports a tf.keras.Model as a Tensorflow SavedModel. (deprecated) + +[`load_from_saved_model(...)`](../../../../tf/compat/v1/keras/experimental/load_from_saved_model.md): Loads a keras Model from a SavedModel created by `export_saved_model()`. (deprecated) + +[`terminate_keras_multiprocessing_pools(...)`](../../../../tf/keras/experimental/terminate_keras_multiprocessing_pools.md): Destroy Keras' multiprocessing pools to prevent deadlocks. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/experimental/export_saved_model.md b/site/en/api_docs/python/tf/compat/v1/keras/experimental/export_saved_model.md new file mode 100644 index 00000000000..51fc7409212 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/experimental/export_saved_model.md @@ -0,0 +1,165 @@ +description: Exports a tf.keras.Model as a Tensorflow SavedModel. (deprecated) + +
+ + +
+ +# tf.compat.v1.keras.experimental.export_saved_model + + + + + + + + + +Exports a tf.keras.Model as a Tensorflow SavedModel. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use `model.save(..., save_format="tf")` or `tf.keras.models.save_model(..., save_format="tf")`. + +Note that at this time, subclassed models can only be saved using +`serving_only=True`. + +The exported `SavedModel` is a standalone serialization of Tensorflow objects, +and is supported by TF language APIs and the Tensorflow Serving system. +To load the model, use the function +`tf.keras.experimental.load_from_saved_model`. + +The `SavedModel` contains: + +1. a checkpoint containing the model weights. +2. a `SavedModel` proto containing the Tensorflow backend graph. Separate + graphs are saved for prediction (serving), train, and evaluation. If + the model has not been compiled, then only the graph computing predictions + will be exported. +3. the model's json config. If the model is subclassed, this will only be + included if the model's `get_config()` method is overwritten. + +#### Example: + + + +```python +import tensorflow as tf + +# Create a tf.keras model. +model = tf.keras.Sequential() +model.add(tf.keras.layers.Dense(1, input_shape=[10])) +model.summary() + +# Save the tf.keras model in the SavedModel format. +path = '/tmp/simple_keras_model' +tf.keras.experimental.export_saved_model(model, path) + +# Load the saved keras model back. +new_model = tf.keras.experimental.load_from_saved_model(path) +new_model.summary() +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`model` + +A tf.keras.Model to be saved. If the model is subclassed, the flag +`serving_only` must be set to True. +
+`saved_model_path` + +a string specifying the path to the SavedModel directory. +
+`custom_objects` + +Optional dictionary mapping string names to custom classes +or functions (e.g. custom loss functions). +
+`as_text` + +bool, `False` by default. Whether to write the `SavedModel` proto +in text format. Currently unavailable in serving-only mode. +
+`input_signature` + +A possibly nested sequence of tf.TensorSpec objects, used +to specify the expected model inputs. See tf.function for more details. +
+`serving_only` + +bool, `False` by default. When this is true, only the +prediction graph is saved. +
+ + + + + + + + + + + + + + + + + + +
+`NotImplementedError` + +If the model is a subclassed model, and serving_only is +False. +
+`ValueError` + +If the input signature cannot be inferred from the model. +
+`AssertionError` + +If the SavedModel directory already exists and isn't empty. +
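+
+A minimal sketch of the replacement recommended by the deprecation notice
+above (the model and the `/tmp/saved_keras_model` path are placeholders):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential()
+model.add(tf.keras.layers.Dense(1, input_shape=[10]))
+model.compile(optimizer='adam', loss='mse')
+
+# Save in the TensorFlow SavedModel format instead of the deprecated helper.
+model.save('/tmp/saved_keras_model', save_format='tf')
+# Equivalently:
+tf.keras.models.save_model(model, '/tmp/saved_keras_model', save_format='tf')
+```
+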
+ diff --git a/site/en/api_docs/python/tf/compat/v1/keras/experimental/load_from_saved_model.md b/site/en/api_docs/python/tf/compat/v1/keras/experimental/load_from_saved_model.md new file mode 100644 index 00000000000..3ab746e7dff --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/experimental/load_from_saved_model.md @@ -0,0 +1,102 @@ +description: Loads a keras Model from a SavedModel created by export_saved_model(). (deprecated) + +
+ + +
+ +# tf.compat.v1.keras.experimental.load_from_saved_model + + + + + + + + + +Loads a keras Model from a SavedModel created by `export_saved_model()`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +The experimental save and load functions have been deprecated. Please switch to tf.keras.models.load_model. + +This function reinstantiates model state by: +1) loading model topology from json (this will eventually come + from metagraph). +2) loading model weights from checkpoint. + +#### Example: + + + +```python +import tensorflow as tf + +# Create a tf.keras model. +model = tf.keras.Sequential() +model.add(tf.keras.layers.Dense(1, input_shape=[10])) +model.summary() + +# Save the tf.keras model in the SavedModel format. +path = '/tmp/simple_keras_model' +tf.keras.experimental.export_saved_model(model, path) + +# Load the saved keras model back. +new_model = tf.keras.experimental.load_from_saved_model(path) +new_model.summary() +``` + + + + + + + + + + + + + +
+`saved_model_path` + +a string specifying the path to an existing SavedModel. +
+`custom_objects` + +Optional dictionary mapping names +(strings) to custom classes or functions to be +considered during deserialization. +
+ + + + + + + + + + + +
+A keras.Model instance.
+
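+
+A short sketch of the replacement suggested by the deprecation notice above,
+assuming a model saved in the SavedModel format under the placeholder path
+`/tmp/simple_keras_model_tf`:
+
+```python
+import tensorflow as tf
+
+# Save with the non-deprecated SavedModel path...
+model = tf.keras.Sequential()
+model.add(tf.keras.layers.Dense(1, input_shape=[10]))
+model.compile(optimizer='adam', loss='mse')
+model.save('/tmp/simple_keras_model_tf', save_format='tf')
+
+# ...and load it back with tf.keras.models.load_model.
+new_model = tf.keras.models.load_model('/tmp/simple_keras_model_tf')
+new_model.summary()
+```
+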
+ diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers.md new file mode 100644 index 00000000000..58d4b638220 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers.md @@ -0,0 +1,83 @@ +description: Keras initializer serialization / deserialization. + +
+ + +
+ +# Module: tf.compat.v1.keras.initializers + + + + + + + + + +Keras initializer serialization / deserialization. + + + +## Classes + +[`class Constant`](../../../../tf/compat/v1/keras/initializers/Constant.md): Initializer that generates tensors with constant values. + +[`class Identity`](../../../../tf/compat/v1/keras/initializers/Identity.md): Initializer that generates the identity matrix. + +[`class Initializer`](../../../../tf/compat/v1/keras/initializers/Initializer.md): Initializer base class: all initializers inherit from this class. + +[`class Ones`](../../../../tf/compat/v1/keras/initializers/Ones.md): Initializer that generates tensors initialized to 1. + +[`class Orthogonal`](../../../../tf/compat/v1/keras/initializers/Orthogonal.md): Initializer that generates an orthogonal matrix. + +[`class RandomNormal`](../../../../tf/compat/v1/keras/initializers/RandomNormal.md): Initializer that generates tensors with a normal distribution. + +[`class RandomUniform`](../../../../tf/compat/v1/keras/initializers/RandomUniform.md): Initializer that generates tensors with a uniform distribution. + +[`class TruncatedNormal`](../../../../tf/compat/v1/keras/initializers/TruncatedNormal.md): Initializer that generates a truncated normal distribution. + +[`class VarianceScaling`](../../../../tf/compat/v1/keras/initializers/VarianceScaling.md): Initializer capable of adapting its scale to the shape of weights tensors. + +[`class Zeros`](../../../../tf/compat/v1/keras/initializers/Zeros.md): Initializer that generates tensors initialized to 0. + +[`class constant`](../../../../tf/compat/v1/keras/initializers/Constant.md): Initializer that generates tensors with constant values. + +[`class glorot_normal`](../../../../tf/compat/v1/keras/initializers/glorot_normal.md): The Glorot normal initializer, also called Xavier normal initializer. + +[`class glorot_uniform`](../../../../tf/compat/v1/keras/initializers/glorot_uniform.md): The Glorot uniform initializer, also called Xavier uniform initializer. + +[`class identity`](../../../../tf/compat/v1/keras/initializers/Identity.md): Initializer that generates the identity matrix. + +[`class normal`](../../../../tf/compat/v1/keras/initializers/RandomNormal.md): Initializer that generates tensors with a normal distribution. + +[`class ones`](../../../../tf/compat/v1/keras/initializers/Ones.md): Initializer that generates tensors initialized to 1. + +[`class orthogonal`](../../../../tf/compat/v1/keras/initializers/Orthogonal.md): Initializer that generates an orthogonal matrix. + +[`class random_normal`](../../../../tf/compat/v1/keras/initializers/RandomNormal.md): Initializer that generates tensors with a normal distribution. + +[`class random_uniform`](../../../../tf/compat/v1/keras/initializers/RandomUniform.md): Initializer that generates tensors with a uniform distribution. + +[`class truncated_normal`](../../../../tf/compat/v1/keras/initializers/TruncatedNormal.md): Initializer that generates a truncated normal distribution. + +[`class uniform`](../../../../tf/compat/v1/keras/initializers/RandomUniform.md): Initializer that generates tensors with a uniform distribution. + +[`class zeros`](../../../../tf/compat/v1/keras/initializers/Zeros.md): Initializer that generates tensors initialized to 0. + +## Functions + +[`deserialize(...)`](../../../../tf/keras/initializers/deserialize.md): Return an `Initializer` object from its config. 
+ +[`get(...)`](../../../../tf/keras/initializers/get.md) + +[`he_normal(...)`](../../../../tf/compat/v1/keras/initializers/he_normal.md): He normal initializer. + +[`he_uniform(...)`](../../../../tf/compat/v1/keras/initializers/he_uniform.md): He uniform variance scaling initializer. + +[`lecun_normal(...)`](../../../../tf/compat/v1/keras/initializers/lecun_normal.md): LeCun normal initializer. + +[`lecun_uniform(...)`](../../../../tf/compat/v1/keras/initializers/lecun_uniform.md): LeCun uniform initializer. + +[`serialize(...)`](../../../../tf/keras/initializers/serialize.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/Constant.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Constant.md new file mode 100644 index 00000000000..21ebedbb45c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Constant.md @@ -0,0 +1,289 @@ +description: Initializer that generates tensors with constant values. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.Constant + + + + + + + + + +Initializer that generates tensors with constant values. + +Inherits From: [`Initializer`](../../../../../tf/compat/v1/keras/initializers/Initializer.md) + + + + + + + + + +The resulting tensor is populated with values of type `dtype`, as +specified by arguments `value` following the desired `shape` of the +new tensor (see examples below). + +The argument `value` can be a constant value, or a list of values of type +`dtype`. If `value` is a list, then the length of the list must be less +than or equal to the number of elements implied by the desired shape of the +tensor. In the case where the total number of elements in `value` is less +than the number of elements required by the tensor shape, the last element +in `value` will be used to fill the remaining entries. If the total number of +elements in `value` is greater than the number of elements required by the +tensor shape, the initializer will raise a `ValueError`. + + + + + + + + + + + + + + + + +
+`value`
+
+A Python scalar, list or tuple of values, or an N-dimensional numpy
+array. All elements of the initialized variable will be set to the
+corresponding value in the `value` argument.
+
+`dtype` + +Default data type, used if no `dtype` argument is provided when +calling the initializer. +
+`verify_shape` + +Boolean that enables verification of the shape of `value`. If +`True`, the initializer will throw an error if the shape of `value` is not +compatible with the shape of the initialized tensor. +
+ + + + + + + + + + + + +
+`TypeError` + +If the input `value` is not one of the expected types. +
+ + + +#### Examples: + +The following example can be rewritten using a numpy.ndarray instead +of the `value` list, even reshaped, as shown in the two commented lines +below the `value` list initialization. + + +``` +>>> value = [0, 1, 2, 3, 4, 5, 6, 7] +>>> init = tf.compat.v1.constant_initializer(value) +>>> # fitting shape +>>> with tf.compat.v1.Session(): +... x = tf.compat.v1.get_variable('x', shape=[2, 4], initializer=init) +... x.initializer.run() +... print(x.eval()) +[[0. 1. 2. 3.] + [4. 5. 6. 7.]] +>>> # Larger shape +>>> with tf.compat.v1.Session(): +... y = tf.compat.v1.get_variable('y', shape=[3, 4], initializer=init) +... y.initializer.run() +... print(y.eval()) +[[0. 1. 2. 3.] + [4. 5. 6. 7.] + [7. 7. 7. 7.]] +>>> # Smaller shape +>>> with tf.compat.v1.Session(): +... z = tf.compat.v1.get_variable('z', shape=[2, 3], initializer=init) +Traceback (most recent call last): +... +ValueError: Too many elements provided. Needed at most 6, but received 8 +>>> # Shape verification +>>> init_verify = tf.compat.v1.constant_initializer(value, verify_shape=True) +>>> with tf.compat.v1.Session(): +... u = tf.compat.v1.get_variable('u', shape=[3, 4], +... initializer=init_verify) +Traceback (most recent call last): +... +TypeError: Expected Tensor's shape: (3, 4), got (8,). +``` + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/Identity.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Identity.md new file mode 100644 index 00000000000..e18f100f482 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Identity.md @@ -0,0 +1,209 @@ +description: Initializer that generates the identity matrix. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.Identity + + + + + + + + + +Initializer that generates the identity matrix. + +Inherits From: [`Initializer`](../../../../../tf/compat/v1/keras/initializers/Initializer.md) + + + + + + + + + +Only use for 2D matrices. + + + + + + + + + + + + + +
+`gain` + +Multiplicative factor to apply to the identity matrix. +
+`dtype` + +Default data type, used if no `dtype` argument is provided when +calling the initializer. Only floating point types are supported. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/Initializer.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Initializer.md new file mode 100644 index 00000000000..994937e3c9e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Initializer.md @@ -0,0 +1,161 @@ +description: Initializer base class: all initializers inherit from this class. + +
+ + + + + +
+ +# tf.compat.v1.keras.initializers.Initializer + + + + + + + + + +Initializer base class: all initializers inherit from this class. + + + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/Ones.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Ones.md new file mode 100644 index 00000000000..b7a0ab88c6e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Ones.md @@ -0,0 +1,183 @@ +description: Initializer that generates tensors initialized to 1. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.Ones + + + + + + + + + +Initializer that generates tensors initialized to 1. + +Inherits From: [`Initializer`](../../../../../tf/compat/v1/keras/initializers/Initializer.md) + + + + + + + + + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/Orthogonal.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Orthogonal.md new file mode 100644 index 00000000000..2906fa10acf --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Orthogonal.md @@ -0,0 +1,232 @@ +description: Initializer that generates an orthogonal matrix. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.Orthogonal + + + + + + + + + +Initializer that generates an orthogonal matrix. + +Inherits From: [`Initializer`](../../../../../tf/compat/v1/keras/initializers/Initializer.md) + + + + + + + + + +If the shape of the tensor to initialize is two-dimensional, it is initialized +with an orthogonal matrix obtained from the QR decomposition of a matrix of +random numbers drawn from a normal distribution. +If the matrix has fewer rows than columns then the output will have orthogonal +rows. Otherwise, the output will have orthogonal columns. + +If the shape of the tensor to initialize is more than two-dimensional, +a matrix of shape `(shape[0] * ... * shape[n - 2], shape[n - 1])` +is initialized, where `n` is the length of the shape vector. +The matrix is subsequently reshaped to give a tensor of the desired shape. + + + + + + + + + + + + + + + + +
+`gain`
+
+Multiplicative factor to apply to the orthogonal matrix.
+
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +Default data type, used if no `dtype` argument is provided when +calling the initializer. Only floating point types are supported. +
+ + + +#### References: + +[Saxe et al., 2014](https://openreview.net/forum?id=_wzZwKpTDF_9C) +([pdf](https://arxiv.org/pdf/1312.6120.pdf)) + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/RandomNormal.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/RandomNormal.md new file mode 100644 index 00000000000..5da5db242b9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/RandomNormal.md @@ -0,0 +1,238 @@ +description: Initializer that generates tensors with a normal distribution. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.RandomNormal + + + + + + + + + +Initializer that generates tensors with a normal distribution. + +Inherits From: [`random_normal_initializer`](../../../../../tf/compat/v1/random_normal_initializer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`mean` + +a python scalar or a scalar tensor. Mean of the random values to +generate. Defaults to 0. +
+`stddev` + +a python scalar or a scalar tensor. Standard deviation of the random +values to generate. Defaults to 0.05. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +The data type. Only floating point types are supported. +
+ + + + + + + + + + + +
+A RandomNormal instance.
+
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/RandomUniform.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/RandomUniform.md new file mode 100644 index 00000000000..189e442034e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/RandomUniform.md @@ -0,0 +1,238 @@ +description: Initializer that generates tensors with a uniform distribution. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.RandomUniform + + + + + + + + + +Initializer that generates tensors with a uniform distribution. + +Inherits From: [`random_uniform_initializer`](../../../../../tf/compat/v1/random_uniform_initializer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`minval` + +A python scalar or a scalar tensor. Lower bound of the range of +random values to generate. Defaults to -0.05. +
+`maxval` + +A python scalar or a scalar tensor. Upper bound of the range of +random values to generate. Defaults to 0.05. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +The data type. +
+ + + + + + + + + + + +
+A RandomUniform instance. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/TruncatedNormal.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/TruncatedNormal.md new file mode 100644 index 00000000000..1599f602eb6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/TruncatedNormal.md @@ -0,0 +1,242 @@ +description: Initializer that generates a truncated normal distribution. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.TruncatedNormal + + + + + + + + + +Initializer that generates a truncated normal distribution. + +Inherits From: [`truncated_normal_initializer`](../../../../../tf/compat/v1/truncated_normal_initializer.md) + + + + + + + + + +These values are similar to values from a `random_normal_initializer` +except that values more than two standard deviations from the mean +are discarded and re-drawn. This is the recommended initializer for +neural network weights and filters. + + + + + + + + + + + + + + + + + + + +
+`mean` + +a python scalar or a scalar tensor. Mean of the random values to +generate. Defaults to 0. +
+`stddev` + +a python scalar or a scalar tensor. Standard deviation of the random +values to generate. Defaults to 0.05. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +The data type. Only floating point types are supported. +
+ + + + + + + + + + + +
+A TruncatedNormal instance. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/VarianceScaling.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/VarianceScaling.md new file mode 100644 index 00000000000..78227810fde --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/VarianceScaling.md @@ -0,0 +1,260 @@ +description: Initializer capable of adapting its scale to the shape of weights tensors. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.VarianceScaling + + + + + + + + + +Initializer capable of adapting its scale to the shape of weights tensors. + +Inherits From: [`Initializer`](../../../../../tf/compat/v1/keras/initializers/Initializer.md) + + + + + + + + + +With `distribution="truncated_normal" or "untruncated_normal"`, +samples are drawn from a truncated/untruncated normal +distribution with a mean of zero and a standard deviation (after truncation, +if used) `stddev = sqrt(scale / n)` +where n is: + - number of input units in the weight tensor, if mode = "fan_in" + - number of output units, if mode = "fan_out" + - average of the numbers of input and output units, if mode = "fan_avg" + +With `distribution="uniform"`, samples are drawn from a uniform distribution +within [-limit, limit], with `limit = sqrt(3 * scale / n)`. + + + + + + + + + + + + + + + + + + + + + + +
+`scale` + +Scaling factor (positive float). +
+`mode` + +One of "fan_in", "fan_out", "fan_avg". +
+`distribution` + +Random distribution to use. One of "normal", "uniform". +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +Default data type, used if no `dtype` argument is provided when +calling the initializer. Only floating point types are supported. +
+ + + + + + + + + + + + +
+`ValueError`
+
+In case of an invalid value for the "scale", "mode" or
+"distribution" arguments.
+
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/Zeros.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Zeros.md new file mode 100644 index 00000000000..6d0c6c824ca --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/Zeros.md @@ -0,0 +1,183 @@ +description: Initializer that generates tensors initialized to 0. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.Zeros + + + + + + + + + +Initializer that generates tensors initialized to 0. + +Inherits From: [`Initializer`](../../../../../tf/compat/v1/keras/initializers/Initializer.md) + + + + + + + + + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/glorot_normal.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/glorot_normal.md new file mode 100644 index 00000000000..be1e4e69aa7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/glorot_normal.md @@ -0,0 +1,220 @@ +description: The Glorot normal initializer, also called Xavier normal initializer. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.glorot_normal + + + + + + + + + +The Glorot normal initializer, also called Xavier normal initializer. + +Inherits From: [`VarianceScaling`](../../../../../tf/compat/v1/keras/initializers/VarianceScaling.md) + + + + + + + + + +It draws samples from a truncated normal distribution centered on 0 +with standard deviation (after truncation) given by +`stddev = sqrt(2 / (fan_in + fan_out))` where `fan_in` is the number +of input units in the weight tensor and `fan_out` is the number of +output units in the weight tensor. + + + + + + + + + + + + + +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +Default data type, used if no `dtype` argument is provided when +calling the initializer. Only floating point types are supported. +
+ + + +#### References: + +[Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html) +([pdf](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf)) + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/glorot_uniform.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/glorot_uniform.md new file mode 100644 index 00000000000..3576f96619d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/glorot_uniform.md @@ -0,0 +1,219 @@ +description: The Glorot uniform initializer, also called Xavier uniform initializer. + +
+ + + + + + +
+ +# tf.compat.v1.keras.initializers.glorot_uniform + + + + + + + + + +The Glorot uniform initializer, also called Xavier uniform initializer. + +Inherits From: [`VarianceScaling`](../../../../../tf/compat/v1/keras/initializers/VarianceScaling.md) + + + + + + + + + +It draws samples from a uniform distribution within [-limit, limit] +where `limit` is `sqrt(6 / (fan_in + fan_out))` +where `fan_in` is the number of input units in the weight tensor +and `fan_out` is the number of output units in the weight tensor. + + + + + + + + + + + + + +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +Default data type, used if no `dtype` argument is provided when +calling the initializer. Only floating point types are supported. +
+ + + +#### References: + +[Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html) +([pdf](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf)) + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/he_normal.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/he_normal.md new file mode 100644 index 00000000000..1661573dd13 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/he_normal.md @@ -0,0 +1,87 @@ +description: He normal initializer. + +
+ + +
+ +# tf.compat.v1.keras.initializers.he_normal + + + + + + + + + +He normal initializer. + + + + + + + + + +It draws samples from a truncated normal distribution centered on 0 +with standard deviation (after truncation) given by +`stddev = sqrt(2 / fan_in)` where `fan_in` is the number of +input units in the weight tensor. + + + + + + + + + + +
+`seed` + +A Python integer. Used to seed the random generator. +
+ + + + + + + + + + + +
+An initializer. +
+ + + +#### References: + +[He et al., 2015] +(https://www.cv-foundation.org/openaccess/content_iccv_2015/html/He_Delving_Deep_into_ICCV_2015_paper.html) + +([pdf](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/he_uniform.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/he_uniform.md new file mode 100644 index 00000000000..a8b0d9ba44e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/he_uniform.md @@ -0,0 +1,86 @@ +description: He uniform variance scaling initializer. + +
+ + +
+ +# tf.compat.v1.keras.initializers.he_uniform + + + + + + + + + +He uniform variance scaling initializer. + + + + + + + + + +It draws samples from a uniform distribution within [-limit, limit] +where `limit` is `sqrt(6 / fan_in)` +where `fan_in` is the number of input units in the weight tensor. + + + + + + + + + + +
+`seed` + +A Python integer. Used to seed the random generator. +
+ + + + + + + + + + + +
+An initializer. +
+ + + +#### References: + +[He et al., 2015] +(https://www.cv-foundation.org/openaccess/content_iccv_2015/html/He_Delving_Deep_into_ICCV_2015_paper.html) + +([pdf](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/lecun_normal.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/lecun_normal.md new file mode 100644 index 00000000000..d4cf805bd9b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/lecun_normal.md @@ -0,0 +1,90 @@ +description: LeCun normal initializer. + +
+ + +
+ +# tf.compat.v1.keras.initializers.lecun_normal + + + + + + + + + +LeCun normal initializer. + + + + + + + + + +It draws samples from a truncated normal distribution centered on 0 +with standard deviation (after truncation) given by +`stddev = sqrt(1 / fan_in)` where `fan_in` is the number of +input units in the weight tensor. + + + + + + + + + + +
+`seed` + +A Python integer. Used to seed the random generator. +
+ + + + + + + + + + + +
+An initializer. +
+ + + +#### References: + +- Self-Normalizing Neural Networks, +[Klambauer et al., +2017](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks) + +([pdf](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf)) +- Efficient Backprop, +[Lecun et al., 1998](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) diff --git a/site/en/api_docs/python/tf/compat/v1/keras/initializers/lecun_uniform.md b/site/en/api_docs/python/tf/compat/v1/keras/initializers/lecun_uniform.md new file mode 100644 index 00000000000..21f415334ed --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/initializers/lecun_uniform.md @@ -0,0 +1,89 @@ +description: LeCun uniform initializer. + +
+ + +
+ +# tf.compat.v1.keras.initializers.lecun_uniform + + + + + + + + + +LeCun uniform initializer. + + + + + + + + + +It draws samples from a uniform distribution within [-limit, limit] +where `limit` is `sqrt(3 / fan_in)` +where `fan_in` is the number of input units in the weight tensor. + + + + + + + + + + +
+`seed` + +A Python integer. Used to seed the random generator. +
+ + + + + + + + + + + +
+An initializer. +
+ + + +#### References: + +- Self-Normalizing Neural Networks, +[Klambauer et al., +2017](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks) + +([pdf](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf)) +- Efficient Backprop, +[Lecun et al., 1998](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers.md b/site/en/api_docs/python/tf/compat/v1/keras/layers.md new file mode 100644 index 00000000000..e8727853886 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers.md @@ -0,0 +1,259 @@ +description: Keras layers API. + +
+ + +
+ +# Module: tf.compat.v1.keras.layers + + + + + + + + + +Keras layers API. + + + +## Modules + +[`experimental`](../../../../tf/compat/v1/keras/layers/experimental.md) module: Public API for tf.keras.layers.experimental namespace. + +## Classes + +[`class AbstractRNNCell`](../../../../tf/keras/layers/AbstractRNNCell.md): Abstract object representing an RNN cell. + +[`class Activation`](../../../../tf/keras/layers/Activation.md): Applies an activation function to an output. + +[`class ActivityRegularization`](../../../../tf/keras/layers/ActivityRegularization.md): Layer that applies an update to the cost function based input activity. + +[`class Add`](../../../../tf/keras/layers/Add.md): Layer that adds a list of inputs. + +[`class AdditiveAttention`](../../../../tf/keras/layers/AdditiveAttention.md): Additive attention layer, a.k.a. Bahdanau-style attention. + +[`class AlphaDropout`](../../../../tf/keras/layers/AlphaDropout.md): Applies Alpha Dropout to the input. + +[`class Attention`](../../../../tf/keras/layers/Attention.md): Dot-product attention layer, a.k.a. Luong-style attention. + +[`class Average`](../../../../tf/keras/layers/Average.md): Layer that averages a list of inputs element-wise. + +[`class AveragePooling1D`](../../../../tf/keras/layers/AveragePooling1D.md): Average pooling for temporal data. + +[`class AveragePooling2D`](../../../../tf/keras/layers/AveragePooling2D.md): Average pooling operation for spatial data. + +[`class AveragePooling3D`](../../../../tf/keras/layers/AveragePooling3D.md): Average pooling operation for 3D data (spatial or spatio-temporal). + +[`class AvgPool1D`](../../../../tf/keras/layers/AveragePooling1D.md): Average pooling for temporal data. + +[`class AvgPool2D`](../../../../tf/keras/layers/AveragePooling2D.md): Average pooling operation for spatial data. + +[`class AvgPool3D`](../../../../tf/keras/layers/AveragePooling3D.md): Average pooling operation for 3D data (spatial or spatio-temporal). + +[`class BatchNormalization`](../../../../tf/compat/v1/keras/layers/BatchNormalization.md): Normalize and scale inputs or activations. (Ioffe and Szegedy, 2014). + +[`class Bidirectional`](../../../../tf/keras/layers/Bidirectional.md): Bidirectional wrapper for RNNs. + +[`class Concatenate`](../../../../tf/keras/layers/Concatenate.md): Layer that concatenates a list of inputs. + +[`class Conv1D`](../../../../tf/keras/layers/Conv1D.md): 1D convolution layer (e.g. temporal convolution). + +[`class Conv2D`](../../../../tf/keras/layers/Conv2D.md): 2D convolution layer (e.g. spatial convolution over images). + +[`class Conv2DTranspose`](../../../../tf/keras/layers/Conv2DTranspose.md): Transposed convolution layer (sometimes called Deconvolution). + +[`class Conv3D`](../../../../tf/keras/layers/Conv3D.md): 3D convolution layer (e.g. spatial convolution over volumes). + +[`class Conv3DTranspose`](../../../../tf/keras/layers/Conv3DTranspose.md): Transposed convolution layer (sometimes called Deconvolution). + +[`class ConvLSTM2D`](../../../../tf/keras/layers/ConvLSTM2D.md): Convolutional LSTM. + +[`class Convolution1D`](../../../../tf/keras/layers/Conv1D.md): 1D convolution layer (e.g. temporal convolution). + +[`class Convolution2D`](../../../../tf/keras/layers/Conv2D.md): 2D convolution layer (e.g. spatial convolution over images). + +[`class Convolution2DTranspose`](../../../../tf/keras/layers/Conv2DTranspose.md): Transposed convolution layer (sometimes called Deconvolution). 
+ +[`class Convolution3D`](../../../../tf/keras/layers/Conv3D.md): 3D convolution layer (e.g. spatial convolution over volumes). + +[`class Convolution3DTranspose`](../../../../tf/keras/layers/Conv3DTranspose.md): Transposed convolution layer (sometimes called Deconvolution). + +[`class Cropping1D`](../../../../tf/keras/layers/Cropping1D.md): Cropping layer for 1D input (e.g. temporal sequence). + +[`class Cropping2D`](../../../../tf/keras/layers/Cropping2D.md): Cropping layer for 2D input (e.g. picture). + +[`class Cropping3D`](../../../../tf/keras/layers/Cropping3D.md): Cropping layer for 3D data (e.g. spatial or spatio-temporal). + +[`class CuDNNGRU`](../../../../tf/compat/v1/keras/layers/CuDNNGRU.md): Fast GRU implementation backed by cuDNN. + +[`class CuDNNLSTM`](../../../../tf/compat/v1/keras/layers/CuDNNLSTM.md): Fast LSTM implementation backed by cuDNN. + +[`class Dense`](../../../../tf/keras/layers/Dense.md): Just your regular densely-connected NN layer. + +[`class DenseFeatures`](../../../../tf/compat/v1/keras/layers/DenseFeatures.md): A layer that produces a dense `Tensor` based on given `feature_columns`. + +[`class DepthwiseConv2D`](../../../../tf/keras/layers/DepthwiseConv2D.md): Depthwise separable 2D convolution. + +[`class Dot`](../../../../tf/keras/layers/Dot.md): Layer that computes a dot product between samples in two tensors. + +[`class Dropout`](../../../../tf/keras/layers/Dropout.md): Applies Dropout to the input. + +[`class ELU`](../../../../tf/keras/layers/ELU.md): Exponential Linear Unit. + +[`class Embedding`](../../../../tf/keras/layers/Embedding.md): Turns positive integers (indexes) into dense vectors of fixed size. + +[`class Flatten`](../../../../tf/keras/layers/Flatten.md): Flattens the input. Does not affect the batch size. + +[`class GRU`](../../../../tf/compat/v1/keras/layers/GRU.md): Gated Recurrent Unit - Cho et al. 2014. + +[`class GRUCell`](../../../../tf/compat/v1/keras/layers/GRUCell.md): Cell class for the GRU layer. + +[`class GaussianDropout`](../../../../tf/keras/layers/GaussianDropout.md): Apply multiplicative 1-centered Gaussian noise. + +[`class GaussianNoise`](../../../../tf/keras/layers/GaussianNoise.md): Apply additive zero-centered Gaussian noise. + +[`class GlobalAveragePooling1D`](../../../../tf/keras/layers/GlobalAveragePooling1D.md): Global average pooling operation for temporal data. + +[`class GlobalAveragePooling2D`](../../../../tf/keras/layers/GlobalAveragePooling2D.md): Global average pooling operation for spatial data. + +[`class GlobalAveragePooling3D`](../../../../tf/keras/layers/GlobalAveragePooling3D.md): Global Average pooling operation for 3D data. + +[`class GlobalAvgPool1D`](../../../../tf/keras/layers/GlobalAveragePooling1D.md): Global average pooling operation for temporal data. + +[`class GlobalAvgPool2D`](../../../../tf/keras/layers/GlobalAveragePooling2D.md): Global average pooling operation for spatial data. + +[`class GlobalAvgPool3D`](../../../../tf/keras/layers/GlobalAveragePooling3D.md): Global Average pooling operation for 3D data. + +[`class GlobalMaxPool1D`](../../../../tf/keras/layers/GlobalMaxPool1D.md): Global max pooling operation for 1D temporal data. + +[`class GlobalMaxPool2D`](../../../../tf/keras/layers/GlobalMaxPool2D.md): Global max pooling operation for spatial data. + +[`class GlobalMaxPool3D`](../../../../tf/keras/layers/GlobalMaxPool3D.md): Global Max pooling operation for 3D data. 
+ +[`class GlobalMaxPooling1D`](../../../../tf/keras/layers/GlobalMaxPool1D.md): Global max pooling operation for 1D temporal data. + +[`class GlobalMaxPooling2D`](../../../../tf/keras/layers/GlobalMaxPool2D.md): Global max pooling operation for spatial data. + +[`class GlobalMaxPooling3D`](../../../../tf/keras/layers/GlobalMaxPool3D.md): Global Max pooling operation for 3D data. + +[`class InputLayer`](../../../../tf/keras/layers/InputLayer.md): Layer to be used as an entry point into a Network (a graph of layers). + +[`class InputSpec`](../../../../tf/keras/layers/InputSpec.md): Specifies the rank, dtype and shape of every input to a layer. + +[`class LSTM`](../../../../tf/compat/v1/keras/layers/LSTM.md): Long Short-Term Memory layer - Hochreiter 1997. + +[`class LSTMCell`](../../../../tf/compat/v1/keras/layers/LSTMCell.md): Cell class for the LSTM layer. + +[`class Lambda`](../../../../tf/keras/layers/Lambda.md): Wraps arbitrary expressions as a `Layer` object. + +[`class Layer`](../../../../tf/keras/layers/Layer.md): This is the class from which all layers inherit. + +[`class LayerNormalization`](../../../../tf/keras/layers/LayerNormalization.md): Layer normalization layer (Ba et al., 2016). + +[`class LeakyReLU`](../../../../tf/keras/layers/LeakyReLU.md): Leaky version of a Rectified Linear Unit. + +[`class LocallyConnected1D`](../../../../tf/keras/layers/LocallyConnected1D.md): Locally-connected layer for 1D inputs. + +[`class LocallyConnected2D`](../../../../tf/keras/layers/LocallyConnected2D.md): Locally-connected layer for 2D inputs. + +[`class Masking`](../../../../tf/keras/layers/Masking.md): Masks a sequence by using a mask value to skip timesteps. + +[`class MaxPool1D`](../../../../tf/keras/layers/MaxPool1D.md): Max pooling operation for 1D temporal data. + +[`class MaxPool2D`](../../../../tf/keras/layers/MaxPool2D.md): Max pooling operation for 2D spatial data. + +[`class MaxPool3D`](../../../../tf/keras/layers/MaxPool3D.md): Max pooling operation for 3D data (spatial or spatio-temporal). + +[`class MaxPooling1D`](../../../../tf/keras/layers/MaxPool1D.md): Max pooling operation for 1D temporal data. + +[`class MaxPooling2D`](../../../../tf/keras/layers/MaxPool2D.md): Max pooling operation for 2D spatial data. + +[`class MaxPooling3D`](../../../../tf/keras/layers/MaxPool3D.md): Max pooling operation for 3D data (spatial or spatio-temporal). + +[`class Maximum`](../../../../tf/keras/layers/Maximum.md): Layer that computes the maximum (element-wise) a list of inputs. + +[`class Minimum`](../../../../tf/keras/layers/Minimum.md): Layer that computes the minimum (element-wise) a list of inputs. + +[`class Multiply`](../../../../tf/keras/layers/Multiply.md): Layer that multiplies (element-wise) a list of inputs. + +[`class PReLU`](../../../../tf/keras/layers/PReLU.md): Parametric Rectified Linear Unit. + +[`class Permute`](../../../../tf/keras/layers/Permute.md): Permutes the dimensions of the input according to a given pattern. + +[`class RNN`](../../../../tf/keras/layers/RNN.md): Base class for recurrent layers. + +[`class ReLU`](../../../../tf/keras/layers/ReLU.md): Rectified Linear Unit activation function. + +[`class RepeatVector`](../../../../tf/keras/layers/RepeatVector.md): Repeats the input n times. + +[`class Reshape`](../../../../tf/keras/layers/Reshape.md): Layer that reshapes inputs into the given shape. + +[`class SeparableConv1D`](../../../../tf/keras/layers/SeparableConv1D.md): Depthwise separable 1D convolution. 
+ +[`class SeparableConv2D`](../../../../tf/keras/layers/SeparableConv2D.md): Depthwise separable 2D convolution. + +[`class SeparableConvolution1D`](../../../../tf/keras/layers/SeparableConv1D.md): Depthwise separable 1D convolution. + +[`class SeparableConvolution2D`](../../../../tf/keras/layers/SeparableConv2D.md): Depthwise separable 2D convolution. + +[`class SimpleRNN`](../../../../tf/keras/layers/SimpleRNN.md): Fully-connected RNN where the output is to be fed back to input. + +[`class SimpleRNNCell`](../../../../tf/keras/layers/SimpleRNNCell.md): Cell class for SimpleRNN. + +[`class Softmax`](../../../../tf/keras/layers/Softmax.md): Softmax activation function. + +[`class SpatialDropout1D`](../../../../tf/keras/layers/SpatialDropout1D.md): Spatial 1D version of Dropout. + +[`class SpatialDropout2D`](../../../../tf/keras/layers/SpatialDropout2D.md): Spatial 2D version of Dropout. + +[`class SpatialDropout3D`](../../../../tf/keras/layers/SpatialDropout3D.md): Spatial 3D version of Dropout. + +[`class StackedRNNCells`](../../../../tf/keras/layers/StackedRNNCells.md): Wrapper allowing a stack of RNN cells to behave as a single cell. + +[`class Subtract`](../../../../tf/keras/layers/Subtract.md): Layer that subtracts two inputs. + +[`class ThresholdedReLU`](../../../../tf/keras/layers/ThresholdedReLU.md): Thresholded Rectified Linear Unit. + +[`class TimeDistributed`](../../../../tf/keras/layers/TimeDistributed.md): This wrapper allows to apply a layer to every temporal slice of an input. + +[`class UpSampling1D`](../../../../tf/keras/layers/UpSampling1D.md): Upsampling layer for 1D inputs. + +[`class UpSampling2D`](../../../../tf/keras/layers/UpSampling2D.md): Upsampling layer for 2D inputs. + +[`class UpSampling3D`](../../../../tf/keras/layers/UpSampling3D.md): Upsampling layer for 3D inputs. + +[`class Wrapper`](../../../../tf/keras/layers/Wrapper.md): Abstract wrapper base class. + +[`class ZeroPadding1D`](../../../../tf/keras/layers/ZeroPadding1D.md): Zero-padding layer for 1D input (e.g. temporal sequence). + +[`class ZeroPadding2D`](../../../../tf/keras/layers/ZeroPadding2D.md): Zero-padding layer for 2D input (e.g. picture). + +[`class ZeroPadding3D`](../../../../tf/keras/layers/ZeroPadding3D.md): Zero-padding layer for 3D data (spatial or spatio-temporal). + +## Functions + +[`Input(...)`](../../../../tf/keras/Input.md): `Input()` is used to instantiate a Keras tensor. + +[`add(...)`](../../../../tf/keras/layers/add.md): Functional interface to the tf.keras.layers.Add layer. + +[`average(...)`](../../../../tf/keras/layers/average.md): Functional interface to the tf.keras.layers.Average layer. + +[`concatenate(...)`](../../../../tf/keras/layers/concatenate.md): Functional interface to the `Concatenate` layer. + +[`deserialize(...)`](../../../../tf/keras/layers/deserialize.md): Instantiates a layer from a config dictionary. + +[`dot(...)`](../../../../tf/keras/layers/dot.md): Functional interface to the `Dot` layer. + +[`maximum(...)`](../../../../tf/keras/layers/maximum.md): Functional interface to compute maximum (element-wise) list of `inputs`. + +[`minimum(...)`](../../../../tf/keras/layers/minimum.md): Functional interface to the `Minimum` layer. + +[`multiply(...)`](../../../../tf/keras/layers/multiply.md): Functional interface to the `Multiply` layer. + +[`serialize(...)`](../../../../tf/keras/layers/serialize.md) + +[`subtract(...)`](../../../../tf/keras/layers/subtract.md): Functional interface to the `Subtract` layer. 
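+
+#### Example:
+
+A minimal, illustrative sketch (not part of the generated reference) composing a
+few of the layers listed above into a model; the layer sizes, activations, and
+compile settings are arbitrary assumptions.
+
+```python
+import tensorflow as tf
+
+model = tf.compat.v1.keras.Sequential([
+    tf.compat.v1.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
+    tf.compat.v1.keras.layers.BatchNormalization(),
+    tf.compat.v1.keras.layers.Dropout(0.5),
+    tf.compat.v1.keras.layers.Dense(10, activation='softmax'),
+])
+model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
+```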
+ diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/BatchNormalization.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/BatchNormalization.md new file mode 100644 index 00000000000..af45c102876 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/BatchNormalization.md @@ -0,0 +1,300 @@ +description: Normalize and scale inputs or activations. (Ioffe and Szegedy, 2014). + +
+ + + + +
+ +# tf.compat.v1.keras.layers.BatchNormalization + + + + + + + + + +Normalize and scale inputs or activations. (Ioffe and Szegedy, 2014). + + + + + + + +Normalize the activations of the previous layer at each batch, +i.e. applies a transformation that maintains the mean activation +close to 0 and the activation standard deviation close to 1. + +Batch normalization differs from other layers in several key aspects: + +1) Adding BatchNormalization with `training=True` to a model causes the +result of one example to depend on the contents of all other examples in a +minibatch. Be careful when padding batches or masking examples, as these can +change the minibatch statistics and affect other examples. + +2) Updates to the weights (moving statistics) are based on the forward pass +of a model rather than the result of gradient computations. + +3) When performing inference using a model containing batch normalization, it +is generally (though not always) desirable to use accumulated statistics +rather than mini-batch statistics. This is accomplished by passing +`training=False` when calling the model, or using `model.predict`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`axis` + +Integer, the axis that should be normalized +(typically the features axis). +For instance, after a `Conv2D` layer with +`data_format="channels_first"`, +set `axis=1` in `BatchNormalization`. +
+`momentum` + +Momentum for the moving average. +
+`epsilon` + +Small float added to variance to avoid dividing by zero. +
+`center` + +If True, add offset of `beta` to normalized tensor. +If False, `beta` is ignored. +
+`scale` + +If True, multiply by `gamma`. +If False, `gamma` is not used. +When the next layer is linear (this also holds for `nn.relu`), +this can be disabled, since the scaling +will be done by the next layer. +
+`beta_initializer` + +Initializer for the beta weight. +
+`gamma_initializer` + +Initializer for the gamma weight. +
+`moving_mean_initializer` + +Initializer for the moving mean. +
+`moving_variance_initializer` + +Initializer for the moving variance. +
+`beta_regularizer` + +Optional regularizer for the beta weight. +
+`gamma_regularizer` + +Optional regularizer for the gamma weight. +
+`beta_constraint` + +Optional constraint for the beta weight. +
+`gamma_constraint` + +Optional constraint for the gamma weight. +
+`renorm` + +Whether to use Batch Renormalization +(https://arxiv.org/abs/1702.03275). This adds extra variables during +training. The inference is the same for either value of this parameter. +
+`renorm_clipping` + +A dictionary that may map keys 'rmax', 'rmin', 'dmax' to +scalar `Tensors` used to clip the renorm correction. The correction +`(r, d)` is used as `corrected_value = normalized_value * r + d`, with +`r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, +dmax are set to inf, 0, inf, respectively. +
+`renorm_momentum` + +Momentum used to update the moving means and standard +deviations with renorm. Unlike `momentum`, this affects training +and should be neither too small (which would add noise) nor too large +(which would give stale estimates). Note that `momentum` is still applied +to get the means and variances for inference. +
+`fused` + +If `None` or `True`, use a faster, fused implementation if possible. +If `False`, use the system-recommended implementation. +
+`trainable` + +Boolean, if `True` the variables will be marked as trainable. +
+`virtual_batch_size` + +An `int`. By default, `virtual_batch_size` is `None`, +which means batch normalization is performed across the whole batch. When +`virtual_batch_size` is not `None`, instead perform "Ghost Batch +Normalization", which creates virtual sub-batches which are each +normalized separately (with shared gamma, beta, and moving statistics). +Must divide the actual batch size during execution. +
+`adjustment` + +A function taking the `Tensor` containing the (dynamic) shape of +the input tensor and returning a pair (scale, bias) to apply to the +normalized values (before gamma and beta), only during training. For +example, if axis==-1, +`adjustment = lambda shape: ( +tf.random.uniform(shape[-1:], 0.93, 1.07), +tf.random.uniform(shape[-1:], -0.1, 0.1))` +will scale the normalized value by up to 7% up or down, then shift the +result by up to 0.1 (with independent scaling and bias for each feature +but shared across all examples), and finally apply gamma and/or beta. If +`None`, no adjustment is applied. Cannot be specified if +virtual_batch_size is specified. +
+ + + +#### Call arguments: + + +* `inputs`: Input tensor (of any rank). +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. + - `training=True`: The layer will normalize its inputs using the + mean and variance of the current batch of inputs. + - `training=False`: The layer will normalize its inputs using the + mean and variance of its moving statistics, learned during training. + + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as input. + + + + +Normalization equations: + Consider the intermediate activations \(x\) of a mini-batch of size + \\(m\\): + + We can compute the mean and variance of the batch + + \\({\mu_B} = \frac{1}{m} \sum_{i=1}^{m} {x_i}\\) + + \\({\sigma_B^2} = \frac{1}{m} \sum_{i=1}^{m} ({x_i} - {\mu_B})^2\\) + + and then compute a normalized \\(x\\), including a small factor + \\({\epsilon}\\) for numerical stability. + + \\(\hat{x_i} = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}\\) + + And finally \\(\hat{x}\) is linearly transformed by \({\gamma}\\) + and \\({\beta}\\), which are learned parameters: + + \\({y_i} = {\gamma * \hat{x_i} + \beta}\\) + +#### References: + + +- [Batch Normalization: Accelerating Deep Network Training by Reducing + Internal Covariate Shift](https://arxiv.org/abs/1502.03167) + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/CuDNNGRU.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/CuDNNGRU.md new file mode 100644 index 00000000000..6b9007e0b36 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/CuDNNGRU.md @@ -0,0 +1,252 @@ +description: Fast GRU implementation backed by cuDNN. + +
+ + + + + +
+ +# tf.compat.v1.keras.layers.CuDNNGRU + + + + + + + + + +Fast GRU implementation backed by cuDNN. + + + + + + + +More information about cuDNN can be found on the [NVIDIA +developer website](https://developer.nvidia.com/cudnn). +Can only be run on GPU. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, used for +the linear transformation of the inputs. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` weights +matrix, used for the linear transformation of the recurrent state. +
+`bias_initializer` + +Initializer for the bias vector. +
+`kernel_regularizer` + +Regularizer function applied to the `kernel` weights +matrix. +
+`recurrent_regularizer` + +Regularizer function applied to the +`recurrent_kernel` weights matrix. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. +
+`activity_regularizer` + +Regularizer function applied to the output of the +layer (its "activation"). +
+`kernel_constraint` + +Constraint function applied to the `kernel` weights +matrix. +
+`recurrent_constraint` + +Constraint function applied to the +`recurrent_kernel` weights matrix. +
+`bias_constraint` + +Constraint function applied to the bias vector. +
+`return_sequences` + +Boolean. Whether to return the last output in the output +sequence, or the full sequence. +
+`return_state` + +Boolean. Whether to return the last state in addition to the +output. +
+`go_backwards` + +Boolean (default False). If True, process the input sequence +backwards and return the reversed sequence. +
+`stateful` + +Boolean (default False). If True, the last state for each sample +at index i in a batch will be used as initial state for the sample of +index i in the following batch. +
+ + + + + + + + + + + + + + + + + +
+`cell` + + +
+`states` + + +
+ + + +## Methods + +

reset_states

+ +View source + + + +Reset the recorded states for the stateful RNN layer. + +Can only be used when the RNN layer is constructed with `stateful` = `True`. +Args: + states: Numpy arrays that contain the values for the initial states, which + will be fed to the cell at the first time step. When the value is None, a + zero-filled numpy array will be created based on the cell state size. + + + + + + + + + + + + + + + +
Raises
+`AttributeError` + +When the RNN layer is not stateful. +
+`ValueError` + +When the batch size of the RNN layer is unknown. +
+`ValueError` + +When the input numpy array is not compatible with the RNN +layer state, either in size or in dtype. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/CuDNNLSTM.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/CuDNNLSTM.md new file mode 100644 index 00000000000..3141255f7ed --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/CuDNNLSTM.md @@ -0,0 +1,263 @@ +description: Fast LSTM implementation backed by cuDNN. + +
+ + + + + +
+ +# tf.compat.v1.keras.layers.CuDNNLSTM + + + + + + + + + +Fast LSTM implementation backed by cuDNN. + + + + + + + +More information about cuDNN can be found on the [NVIDIA +developer website](https://developer.nvidia.com/cudnn). +Can only be run on GPU. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, used for +the linear transformation of the inputs. +
+`unit_forget_bias` + +Boolean. If True, add 1 to the bias of the forget gate +at initialization. Setting it to true will also force +`bias_initializer="zeros"`. This is recommended in [Jozefowicz et +al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf) +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` weights +matrix, used for the linear transformation of the recurrent state. +
+`bias_initializer` + +Initializer for the bias vector. +
+`kernel_regularizer` + +Regularizer function applied to the `kernel` weights +matrix. +
+`recurrent_regularizer` + +Regularizer function applied to the +`recurrent_kernel` weights matrix. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. +
+`activity_regularizer` + +Regularizer function applied to the output of the +layer (its "activation"). +
+`kernel_constraint` + +Constraint function applied to the `kernel` weights +matrix. +
+`recurrent_constraint` + +Constraint function applied to the +`recurrent_kernel` weights matrix. +
+`bias_constraint` + +Constraint function applied to the bias vector. +
+`return_sequences` + +Boolean. Whether to return the last output in the +output sequence, or the full sequence. +
+`return_state` + +Boolean. Whether to return the last state in addition to the +output. +
+`go_backwards` + +Boolean (default False). If True, process the input sequence +backwards and return the reversed sequence. +
+`stateful` + +Boolean (default False). If True, the last state for each sample +at index i in a batch will be used as initial state for the sample of +index i in the following batch. +
+ + + + + + + + + + + + + + + + + +
+`cell` + + +
+`states` + + +
+ + + +## Methods + +

reset_states

+ +View source + + + +Reset the recorded states for the stateful RNN layer. + +Can only be used when the RNN layer is constructed with `stateful` = `True`. +Args: + states: Numpy arrays that contain the values for the initial states, which + will be fed to the cell at the first time step. When the value is None, a + zero-filled numpy array will be created based on the cell state size. + + + + + + + + + + + + + + + +
Raises
+`AttributeError` + +When the RNN layer is not stateful. +
+`ValueError` + +When the batch size of the RNN layer is unknown. +
+`ValueError` + +When the input numpy array is not compatible with the RNN +layer state, either in size or in dtype. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/DenseFeatures.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/DenseFeatures.md new file mode 100644 index 00000000000..fff212540b3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/DenseFeatures.md @@ -0,0 +1,140 @@ +description: A layer that produces a dense Tensor based on given feature_columns. + +
+ + + + +
+ +# tf.compat.v1.keras.layers.DenseFeatures + + + + + + + + + +A layer that produces a dense `Tensor` based on given `feature_columns`. + + + + + + + +Generally a single example in training data is described with FeatureColumns. +At the first layer of the model, this column-oriented data should be converted +to a single `Tensor`. + +This layer can be called multiple times with different features. + +This is the V1 version of this layer that uses variable_scope's or partitioner +to create variables which works well with PartitionedVariables. Variable +scopes are deprecated in V2, so the V2 version uses name_scopes instead. But +currently that lacks support for partitioned variables. Use this if you need +partitioned variables. Use the partitioner argument if you have a Keras model +and uses tf.compat.v1.keras.estimator.model_to_estimator for training. + +#### Example: + + + +```python +price = tf.feature_column.numeric_column('price') +keywords_embedded = tf.feature_column.embedding_column( + tf.feature_column.categorical_column_with_hash_bucket("keywords", 10K), + dimensions=16) +columns = [price, keywords_embedded, ...] +partitioner = tf.compat.v1.fixed_size_partitioner(num_shards=4) +feature_layer = tf.compat.v1.keras.layers.DenseFeatures( + feature_columns=columns, partitioner=partitioner) + +features = tf.io.parse_example( + ..., features=tf.feature_column.make_parse_example_spec(columns)) +dense_tensor = feature_layer(features) +for units in [128, 64, 32]: + dense_tensor = tf.compat.v1.keras.layers.Dense( + units, activation='relu')(dense_tensor) +prediction = tf.compat.v1.keras.layers.Dense(1)(dense_tensor) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`feature_columns` + +An iterable containing the FeatureColumns to use as +inputs to your model. All items should be instances of classes derived +from `DenseColumn` such as `numeric_column`, `embedding_column`, +`bucketized_column`, `indicator_column`. If you have categorical +features, you can wrap them with an `embedding_column` or +`indicator_column`. +
+`trainable` + +Boolean, whether the layer's variables will be updated via +gradient descent during training. +
+`name` + +Name to give to the DenseFeatures. +
+`partitioner` + +Partitioner for input layer. Defaults to None. +
+`**kwargs` + +Keyword arguments to construct a layer. +
+ + + + + + + + + + + + +
+`ValueError` + +if an item in `feature_columns` is not a `DenseColumn`. +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/GRU.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/GRU.md new file mode 100644 index 00000000000..a90c58083aa --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/GRU.md @@ -0,0 +1,482 @@ +description: Gated Recurrent Unit - Cho et al. 2014. + +
+ + + + + +
+ +# tf.compat.v1.keras.layers.GRU + + + + + + + + + +Gated Recurrent Unit - Cho et al. 2014. + +Inherits From: [`RNN`](../../../../../tf/keras/layers/RNN.md) + + + + + + + +There are two variants. The default one is based on 1406.1078v3 and +has reset gate applied to hidden state before matrix multiplication. The +other one is based on original 1406.1078v1 and has the order reversed. + +The second variant is compatible with CuDNNGRU (GPU-only) and allows +inference on CPU. Thus it has separate biases for `kernel` and +`recurrent_kernel`. Use `'reset_after'=True` and +`recurrent_activation='sigmoid'`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`activation` + +Activation function to use. +Default: hyperbolic tangent (`tanh`). +If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`recurrent_activation` + +Activation function to use +for the recurrent step. +Default: hard sigmoid (`hard_sigmoid`). +If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, +used for the linear transformation of the inputs. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` +weights matrix, used for the linear transformation of the recurrent state. +
+`bias_initializer` + +Initializer for the bias vector. +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix. +
+`recurrent_regularizer` + +Regularizer function applied to +the `recurrent_kernel` weights matrix. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. +
+`activity_regularizer` + +Regularizer function applied to +the output of the layer (its "activation"). +
+`kernel_constraint` + +Constraint function applied to +the `kernel` weights matrix. +
+`recurrent_constraint` + +Constraint function applied to +the `recurrent_kernel` weights matrix. +
+`bias_constraint` + +Constraint function applied to the bias vector. +
+`dropout` + +Float between 0 and 1. +Fraction of the units to drop for +the linear transformation of the inputs. +
+`recurrent_dropout` + +Float between 0 and 1. +Fraction of the units to drop for +the linear transformation of the recurrent state. +
+`implementation` + +Implementation mode, either 1 or 2. +Mode 1 will structure its operations as a larger number of +smaller dot products and additions, whereas mode 2 will +batch them into fewer, larger operations. These modes will +have different performance profiles on different hardware and +for different applications. +
+`return_sequences` + +Boolean. Whether to return the last output +in the output sequence, or the full sequence. +
+`return_state` + +Boolean. Whether to return the last state +in addition to the output. +
+`go_backwards` + +Boolean (default False). +If True, process the input sequence backwards and return the +reversed sequence. +
+`stateful` + +Boolean (default False). If True, the last state +for each sample at index i in a batch will be used as initial +state for the sample of index i in the following batch. +
+`unroll` + +Boolean (default False). +If True, the network will be unrolled, +else a symbolic loop will be used. +Unrolling can speed up an RNN, +although it tends to be more memory-intensive. +Unrolling is only suitable for short sequences. +
+`time_major` + +The shape format of the `inputs` and `outputs` tensors. +If True, the inputs and outputs will be in shape +`(timesteps, batch, ...)`, whereas in the False case, it will be +`(batch, timesteps, ...)`. Using `time_major = True` is a bit more +efficient because it avoids transposes at the beginning and end of the +RNN calculation. However, most TensorFlow data is batch-major, so by +default this function accepts input and emits output in batch-major +form. +
+`reset_after` + +GRU convention (whether to apply reset gate after or +before matrix multiplication). False = "before" (default), +True = "after" (CuDNN compatible). +
+ + + +#### Call arguments: + + +* `inputs`: A 3D tensor. +* `mask`: Binary tensor of shape `(samples, timesteps)` indicating whether + a given timestep should be masked. +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. This argument is passed to the cell + when calling it. This is only relevant if `dropout` or + `recurrent_dropout` is used. +* `initial_state`: List of initial state tensors to be passed to the first + call of the cell. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`activation` + + +
+`bias_constraint` + + +
+`bias_initializer` + + +
+`bias_regularizer` + + +
+`dropout` + + +
+`implementation` + + +
+`kernel_constraint` + + +
+`kernel_initializer` + + +
+`kernel_regularizer` + + +
+`recurrent_activation` + + +
+`recurrent_constraint` + + +
+`recurrent_dropout` + + +
+`recurrent_initializer` + + +
+`recurrent_regularizer` + + +
+`reset_after` + + +
+`states` + + +
+`units` + + +
+`use_bias` + + +
+ + + +## Methods + +

reset_states

+ +View source + + + +Reset the recorded states for the stateful RNN layer. + +Can only be used when the RNN layer is constructed with `stateful` = `True`. +Args: + states: Numpy arrays that contain the values for the initial states, which + will be fed to the cell at the first time step. When the value is None, a + zero-filled numpy array will be created based on the cell state size. + + + + + + + + + + + + + + + +
Raises
+`AttributeError` + +When the RNN layer is not stateful. +
+`ValueError` + +When the batch size of the RNN layer is unknown. +
+`ValueError` + +When the input numpy array is not compatible with the RNN +layer state, either in size or in dtype. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/GRUCell.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/GRUCell.md new file mode 100644 index 00000000000..ae6ac65b9fd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/GRUCell.md @@ -0,0 +1,387 @@ +description: Cell class for the GRU layer. + +
+ + + + + + + + + +
+ +# tf.compat.v1.keras.layers.GRUCell + + + + + + + + + +Cell class for the GRU layer. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`activation` + +Activation function to use. +Default: hyperbolic tangent (`tanh`). +If you pass None, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`recurrent_activation` + +Activation function to use +for the recurrent step. +Default: hard sigmoid (`hard_sigmoid`). +If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, +used for the linear transformation of the inputs. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` +weights matrix, +used for the linear transformation of the recurrent state. +
+`bias_initializer` + +Initializer for the bias vector. +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix. +
+`recurrent_regularizer` + +Regularizer function applied to +the `recurrent_kernel` weights matrix. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. +
+`kernel_constraint` + +Constraint function applied to +the `kernel` weights matrix. +
+`recurrent_constraint` + +Constraint function applied to +the `recurrent_kernel` weights matrix. +
+`bias_constraint` + +Constraint function applied to the bias vector. +
+`dropout` + +Float between 0 and 1. +Fraction of the units to drop for the linear transformation of the inputs. +
+`recurrent_dropout` + +Float between 0 and 1. +Fraction of the units to drop for +the linear transformation of the recurrent state. +
+`implementation` + +Implementation mode, either 1 or 2. +Mode 1 will structure its operations as a larger number of +smaller dot products and additions, whereas mode 2 will +batch them into fewer, larger operations. These modes will +have different performance profiles on different hardware and +for different applications. +
+`reset_after` + +GRU convention (whether to apply reset gate after or +before matrix multiplication). False = "before" (default), +True = "after" (CuDNN compatible). +
+ + + +#### Call arguments: + + +* `inputs`: A 2D tensor. +* `states`: List of state tensors corresponding to the previous timestep. +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. Only relevant when `dropout` or + `recurrent_dropout` is used. + + +## Methods + +

get_dropout_mask_for_cell

+ +View source + + + +Get the dropout mask for RNN cell's input. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. It is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, generated or cached based on context. +
+ + + +
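+
+A minimal sketch (not taken from the generated docs) of calling this method
+directly, assuming eager execution and a cell constructed with a non-zero
+`dropout` rate; the tensor shapes and `count=3` are illustrative assumptions:
+
+```python
+import tensorflow as tf
+
+cell = tf.compat.v1.keras.layers.GRUCell(4, dropout=0.5)
+inputs = tf.ones([2, 3])  # (batch, features)
+
+# Creates (and caches) dropout masks shaped like `inputs`.
+masks = cell.get_dropout_mask_for_cell(inputs, training=True, count=3)
+
+# Clear the cached masks before processing the next batch.
+cell.reset_dropout_mask()
+```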

get_initial_state

+ +View source + + + + + + +
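+
+This returns zero-filled initial state tensor(s) matching the cell's
+`state_size`. A brief, hedged sketch (batch size and dtype are arbitrary
+assumptions):
+
+```python
+import tensorflow as tf
+
+cell = tf.compat.v1.keras.layers.GRUCell(8)
+state = cell.get_initial_state(batch_size=4, dtype=tf.float32)
+```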

get_recurrent_dropout_mask_for_cell

+ +View source + + + +Get the recurrent dropout mask for RNN cell. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. It is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, generated or cached based on context. +
+ + + +

reset_dropout_mask

+ +View source + + + +Reset the cached dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling the cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. +
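+
+In typical use this bookkeeping is not done by hand: wrapping the cell in an RNN
+layer creates and resets the masks for you. A hedged sketch (shapes and dropout
+rates are arbitrary assumptions):
+
+```python
+import tensorflow as tf
+
+cell = tf.compat.v1.keras.layers.GRUCell(8, dropout=0.2, recurrent_dropout=0.2)
+rnn = tf.keras.layers.RNN(cell)
+outputs = rnn(tf.random.normal([4, 10, 16]), training=True)  # (batch, units)
+```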

reset_recurrent_dropout_mask

+ +View source + + + +Reset the cached recurrent dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling the cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/LSTM.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/LSTM.md new file mode 100644 index 00000000000..711735e34a8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/LSTM.md @@ -0,0 +1,479 @@ +description: Long Short-Term Memory layer - Hochreiter 1997. + +
+ + + + + +
+ +# tf.compat.v1.keras.layers.LSTM + + + + + + + + + +Long Short-Term Memory layer - Hochreiter 1997. + +Inherits From: [`RNN`](../../../../../tf/keras/layers/RNN.md) + + + + + + + + Note that this cell is not optimized for performance on GPU. Please use +tf.compat.v1.keras.layers.CuDNNLSTM for better performance on GPU. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`activation` + +Activation function to use. +Default: hyperbolic tangent (`tanh`). +If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`recurrent_activation` + +Activation function to use +for the recurrent step. +Default: hard sigmoid (`hard_sigmoid`). +If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, +used for the linear transformation of the inputs. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` +weights matrix, +used for the linear transformation of the recurrent state. +
+`bias_initializer` + +Initializer for the bias vector. +
+`unit_forget_bias` + +Boolean. +If True, add 1 to the bias of the forget gate at initialization. +Setting it to true will also force `bias_initializer="zeros"`. +This is recommended in [Jozefowicz et +al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf). +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix. +
+`recurrent_regularizer` + +Regularizer function applied to +the `recurrent_kernel` weights matrix. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. +
+`activity_regularizer` + +Regularizer function applied to +the output of the layer (its "activation"). +
+`kernel_constraint` + +Constraint function applied to +the `kernel` weights matrix. +
+`recurrent_constraint` + +Constraint function applied to +the `recurrent_kernel` weights matrix. +
+`bias_constraint` + +Constraint function applied to the bias vector. +
+`dropout` + +Float between 0 and 1. +Fraction of the units to drop for +the linear transformation of the inputs. +
+`recurrent_dropout` + +Float between 0 and 1. +Fraction of the units to drop for +the linear transformation of the recurrent state. +
+`implementation` + +Implementation mode, either 1 or 2. +Mode 1 will structure its operations as a larger number of +smaller dot products and additions, whereas mode 2 will +batch them into fewer, larger operations. These modes will +have different performance profiles on different hardware and +for different applications. +
+`return_sequences` + +Boolean. Whether to return the last output +in the output sequence, or the full sequence. +
+`return_state` + +Boolean. Whether to return the last state +in addition to the output. +
+`go_backwards` + +Boolean (default False). +If True, process the input sequence backwards and return the +reversed sequence. +
+`stateful` + +Boolean (default False). If True, the last state +for each sample at index i in a batch will be used as initial +state for the sample of index i in the following batch. +
+`unroll` + +Boolean (default False). +If True, the network will be unrolled, +else a symbolic loop will be used. +Unrolling can speed up an RNN, +although it tends to be more memory-intensive. +Unrolling is only suitable for short sequences. +
+`time_major` + +The shape format of the `inputs` and `outputs` tensors. +If True, the inputs and outputs will be in shape +`(timesteps, batch, ...)`, whereas in the False case, it will be +`(batch, timesteps, ...)`. Using `time_major = True` is a bit more +efficient because it avoids transposes at the beginning and end of the +RNN calculation. However, most TensorFlow data is batch-major, so by +default this function accepts input and emits output in batch-major +form. +
+ + + +#### Call arguments: + + +* `inputs`: A 3D tensor. +* `mask`: Binary tensor of shape `(samples, timesteps)` indicating whether + a given timestep should be masked. +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. This argument is passed to the cell + when calling it. This is only relevant if `dropout` or + `recurrent_dropout` is used. +* `initial_state`: List of initial state tensors to be passed to the first + call of the cell. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`activation` + + +
+`bias_constraint` + + +
+`bias_initializer` + + +
+`bias_regularizer` + + +
+`dropout` + + +
+`implementation` + + +
+`kernel_constraint` + + +
+`kernel_initializer` + + +
+`kernel_regularizer` + + +
+`recurrent_activation` + + +
+`recurrent_constraint` + + +
+`recurrent_dropout` + + +
+`recurrent_initializer` + + +
+`recurrent_regularizer` + + +
+`states` + + +
+`unit_forget_bias` + + +
+`units` + + +
+`use_bias` + + +
+ + + +## Methods + +

reset_states

+ +View source + + + +Reset the recorded states for the stateful RNN layer. + +Can only be used when the RNN layer is constructed with `stateful` = `True`. +Args: + states: Numpy arrays that contain the values for the initial states, which + will be fed to the cell at the first time step. When the value is None, a + zero-filled numpy array will be created based on the cell state size. + + + + + + + + + + + + + + + +
Raises
+`AttributeError` + +When the RNN layer is not stateful. +
+`ValueError` + +When the batch size of the RNN layer is unknown. +
+`ValueError` + +When the input numpy array is not compatible with the RNN +layer state, either in size or in dtype. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/LSTMCell.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/LSTMCell.md new file mode 100644 index 00000000000..2b8be96362c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/LSTMCell.md @@ -0,0 +1,390 @@ +description: Cell class for the LSTM layer. + +
+ + + + + + + + + +
+ +# tf.compat.v1.keras.layers.LSTMCell + + + + + + + + + +Cell class for the LSTM layer. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`activation` + +Activation function to use. +Default: hyperbolic tangent (`tanh`). +If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`recurrent_activation` + +Activation function to use +for the recurrent step. +Default: hard sigmoid (`hard_sigmoid`). +If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, +used for the linear transformation of the inputs. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` +weights matrix, +used for the linear transformation of the recurrent state. +
+`bias_initializer` + +Initializer for the bias vector. +
+`unit_forget_bias` + +Boolean. +If True, add 1 to the bias of the forget gate at initialization. +Setting it to true will also force `bias_initializer="zeros"`. +This is recommended in [Jozefowicz et +al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf) +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix. +
+`recurrent_regularizer` + +Regularizer function applied to +the `recurrent_kernel` weights matrix. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. +
+`kernel_constraint` + +Constraint function applied to +the `kernel` weights matrix. +
+`recurrent_constraint` + +Constraint function applied to +the `recurrent_kernel` weights matrix. +
+`bias_constraint` + +Constraint function applied to the bias vector. +
+`dropout` + +Float between 0 and 1. +Fraction of the units to drop for +the linear transformation of the inputs. +
+`recurrent_dropout` + +Float between 0 and 1. +Fraction of the units to drop for +the linear transformation of the recurrent state. +
+`implementation` + +Implementation mode, either 1 or 2. +Mode 1 will structure its operations as a larger number of +smaller dot products and additions, whereas mode 2 will +batch them into fewer, larger operations. These modes will +have different performance profiles on different hardware and +for different applications. +
+ + + +#### Call arguments: + + +* `inputs`: A 2D tensor. +* `states`: List of state tensors corresponding to the previous timestep. +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. Only relevant when `dropout` or + `recurrent_dropout` is used. + + +## Methods + +

get_dropout_mask_for_cell

+ +View source + + + +Get the dropout mask for RNN cell's input. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. It is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, generated or cached based on context. +
+ + + +

get_initial_state

+ +View source + + + + + + +
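+
+For an LSTM cell this returns the zero-filled initial state pair (hidden state
+and cell state). A brief, hedged sketch (batch size and dtype are arbitrary
+assumptions):
+
+```python
+import tensorflow as tf
+
+cell = tf.compat.v1.keras.layers.LSTMCell(8)
+h0, c0 = cell.get_initial_state(batch_size=4, dtype=tf.float32)
+```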

get_recurrent_dropout_mask_for_cell

+ +View source + + + +Get the recurrent dropout mask for RNN cell. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. It is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, generated or cached based on context. +
+ + + +

reset_dropout_mask

+ +View source + + + +Reset the cached dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling the cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. +

reset_recurrent_dropout_mask

+ +View source + + + +Reset the cached recurrent dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling the cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental.md new file mode 100644 index 00000000000..eac283cd929 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental.md @@ -0,0 +1,25 @@ +description: Public API for tf.keras.layers.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.keras.layers.experimental + + + + + + + + + +Public API for tf.keras.layers.experimental namespace. + + + +## Modules + +[`preprocessing`](../../../../../tf/compat/v1/keras/layers/experimental/preprocessing.md) module: Public API for tf.keras.layers.experimental.preprocessing namespace. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental/preprocessing.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental/preprocessing.md new file mode 100644 index 00000000000..9f1fa0509c8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental/preprocessing.md @@ -0,0 +1,49 @@ +description: Public API for tf.keras.layers.experimental.preprocessing namespace. + +
+ + +
+ +# Module: tf.compat.v1.keras.layers.experimental.preprocessing + + + + + + + + + +Public API for tf.keras.layers.experimental.preprocessing namespace. + + + +## Classes + +[`class CenterCrop`](../../../../../../tf/keras/layers/experimental/preprocessing/CenterCrop.md): Crop the central portion of the images to target height and width. + +[`class Normalization`](../../../../../../tf/compat/v1/keras/layers/experimental/preprocessing/Normalization.md): Feature-wise normalization of the data. + +[`class PreprocessingLayer`](../../../../../../tf/keras/layers/experimental/preprocessing/PreprocessingLayer.md): Base class for PreprocessingLayers. + +[`class RandomContrast`](../../../../../../tf/keras/layers/experimental/preprocessing/RandomContrast.md): Adjust the contrast of an image or images by a random factor. + +[`class RandomCrop`](../../../../../../tf/keras/layers/experimental/preprocessing/RandomCrop.md): Randomly crop the images to target height and width. + +[`class RandomFlip`](../../../../../../tf/keras/layers/experimental/preprocessing/RandomFlip.md): Randomly flip each image horizontally and vertically. + +[`class RandomHeight`](../../../../../../tf/keras/layers/experimental/preprocessing/RandomHeight.md): Randomly vary the height of a batch of images during training. + +[`class RandomRotation`](../../../../../../tf/keras/layers/experimental/preprocessing/RandomRotation.md): Randomly rotate each image. + +[`class RandomTranslation`](../../../../../../tf/keras/layers/experimental/preprocessing/RandomTranslation.md): Randomly translate each image during training. + +[`class RandomWidth`](../../../../../../tf/keras/layers/experimental/preprocessing/RandomWidth.md): Randomly vary the width of a batch of images during training. + +[`class Rescaling`](../../../../../../tf/keras/layers/experimental/preprocessing/Rescaling.md): Multiply inputs by `scale`. + +[`class Resizing`](../../../../../../tf/keras/layers/experimental/preprocessing/Resizing.md): Image resizing layer. + +[`class TextVectorization`](../../../../../../tf/compat/v1/keras/layers/experimental/preprocessing/TextVectorization.md): Text vectorization layer. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental/preprocessing/Normalization.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental/preprocessing/Normalization.md new file mode 100644 index 00000000000..da7b16d3bb1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental/preprocessing/Normalization.md @@ -0,0 +1,114 @@ +description: Feature-wise normalization of the data. + +
+ + + + + +
+ +# tf.compat.v1.keras.layers.experimental.preprocessing.Normalization + + + + + + + + + +Feature-wise normalization of the data. + +Inherits From: [`Normalization`](../../../../../../../tf/keras/layers/experimental/preprocessing/Normalization.md) + + + + + + + +This layer will coerce its inputs into a normal distribution centered around +0 with standard deviation 1. It accomplishes this by precomputing the mean and +variance of the data, and calling (input-mean)/sqrt(var) at runtime. + +What happens in `adapt`: Compute mean and variance of the data and store them + as the layer's weights. `adapt` should be called before `fit`, `evaluate`, + or `predict`. + + + + + + + + + + + + +
+`axis` + +Integer or tuple of integers, the axis or axes that should be +normalized (typically the features axis). We will normalize each element +in the specified axis. The default is '-1' (the innermost axis); 0 (the +batch axis) is not allowed. +
+ + + +## Methods + +

adapt

+ +View source + + + +Fits the state of the preprocessing layer to the data being passed. + + + + + + + + + + + + + + +
Arguments
+`data` + +The data to train on. It can be passed either as a tf.data Dataset, +or as a numpy array. +
+`reset_state` + +Optional argument specifying whether to clear the state of +the layer at the start of the call to `adapt`, or whether to start from +the existing state. Subclasses may choose to throw if reset_state is set +to 'False'. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental/preprocessing/TextVectorization.md b/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental/preprocessing/TextVectorization.md new file mode 100644 index 00000000000..ce7639df258 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/layers/experimental/preprocessing/TextVectorization.md @@ -0,0 +1,286 @@ +description: Text vectorization layer. + +
+ + + + + + + +
+ +# tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization + + + + + + + + + +Text vectorization layer. + +Inherits From: [`TextVectorization`](../../../../../../../tf/keras/layers/experimental/preprocessing/TextVectorization.md) + + + + + + + +This layer has basic options for managing text in a Keras model. It +transforms a batch of strings (one sample = one string) into either a list of +token indices (one sample = 1D tensor of integer token indices) or a dense +representation (one sample = 1D tensor of float values representing data about +the sample's tokens). + +The processing of each sample contains the following steps: + 1) standardize each sample (usually lowercasing + punctuation stripping) + 2) split each sample into substrings (usually words) + 3) recombine substrings into tokens (usually ngrams) + 4) index tokens (associate a unique int value with each token) + 5) transform each sample using this index, either into a vector of ints or + a dense float vector. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`max_tokens` + +The maximum size of the vocabulary for this layer. If None, +there is no cap on the size of the vocabulary. +
+`standardize` + +Optional specification for standardization to apply to the +input text. Values can be None (no standardization), +LOWER_AND_STRIP_PUNCTUATION (lowercase and remove punctuation) or a +Callable. +
+`split` + +Optional specification for splitting the input text. Values can be +None (no splitting), SPLIT_ON_WHITESPACE (split on ASCII whitespace), or a +Callable. +
+`ngrams` + +Optional specification for ngrams to create from the possibly-split +input text. Values can be None, an integer or tuple of integers; passing +an integer will create ngrams up to that integer, and passing a tuple of +integers will create ngrams for the specified values in the tuple. Passing +None means that no ngrams will be created. +
+`output_mode` + +Optional specification for the output of the layer. Values can +be INT, BINARY, COUNT or TFIDF, which control the outputs as follows: +INT: Outputs integer indices, one integer index per split string token. +BINARY: Outputs a single int array per batch, of either vocab_size or +max_tokens size, containing 1s in all elements where the token mapped +to that index exists at least once in the batch item. +COUNT: As BINARY, but the int array contains a count of the number of +times the token at that index appeared in the batch item. +TFIDF: As BINARY, but the TF-IDF algorithm is applied to find the value +in each token slot. +
+`output_sequence_length` + +Optional length for the output tensor. If set, the +output will be padded or truncated to this value in INT mode. +
+`pad_to_max_tokens` + +If True, BINARY, COUNT, and TFIDF modes will have their +outputs padded to max_tokens, even if the number of unique tokens in the +vocabulary is less than max_tokens. +
+ + + +## Methods + +

adapt

+ +View source + + + +Fits the state of the preprocessing layer to the dataset. + +Overrides the default adapt method to apply relevant preprocessing to the +inputs before passing to the combiner. + + + + + + + + + + + + + +
Arguments
+`data` + +The data to train on. It can be passed either as a tf.data Dataset, +or as a numpy array. +
+`reset_state` + +Optional argument specifying whether to clear the state of +the layer at the start of the call to `adapt`. This must be True for +this layer, which does not support repeated calls to `adapt`. +
+ + + +
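+
+A minimal, hedged sketch of `adapt` (not from the generated docs); the corpus,
+vocabulary size, and sequence length below are made-up assumptions:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+layer = tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization(
+    max_tokens=20, output_mode='int', output_sequence_length=4)
+corpus = np.array([["the cat sat on the mat"], ["the dog sat"]])
+layer.adapt(corpus)        # builds the vocabulary from the corpus
+token_ids = layer(corpus)  # maps strings to integer token indices
+```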

get_vocabulary

+ +View source + + + + + + +

set_vocabulary

+ +View source + + + +Sets vocabulary (and optionally document frequency) data for this layer. + +This method sets the vocabulary and DF data for this layer directly, instead +of analyzing a dataset through 'adapt'. It should be used whenever the vocab +(and optionally document frequency) information is already known. If +vocabulary data is already present in the layer, this method will either +replace it, if 'append' is set to False, or append to it (if 'append' is set +to True). + + + + + + + + + + + + + + + + + + + +
Arguments
+`vocab` + +An array of string tokens. +
+`df_data` + +An array of document frequency data. Only necessary if the layer +output_mode is TFIDF. +
+`oov_df_value` + +The document frequency of the OOV token. Only necessary if +output_mode is TFIDF. OOV data is optional when appending additional +data in TFIDF mode; if an OOV value is supplied it will overwrite the +existing OOV value. +
+`append` + +Whether to overwrite or append any existing vocabulary data. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If there are too many inputs, the inputs do not match, or +input data is missing. +
+`RuntimeError` + +If the vocabulary cannot be set when this function is +called. This happens in "binary", "count", and "tfidf" modes, +if "pad_to_max_tokens" is False and the layer itself has already been +called. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/losses.md b/site/en/api_docs/python/tf/compat/v1/keras/losses.md new file mode 100644 index 00000000000..e7d88dff74d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/losses.md @@ -0,0 +1,115 @@ +description: Built-in loss functions. + +
+ + +
+ +# Module: tf.compat.v1.keras.losses + + + + + + + + + +Built-in loss functions. + + + +## Classes + +[`class BinaryCrossentropy`](../../../../tf/keras/losses/BinaryCrossentropy.md): Computes the cross-entropy loss between true labels and predicted labels. + +[`class CategoricalCrossentropy`](../../../../tf/keras/losses/CategoricalCrossentropy.md): Computes the crossentropy loss between the labels and predictions. + +[`class CategoricalHinge`](../../../../tf/keras/losses/CategoricalHinge.md): Computes the categorical hinge loss between `y_true` and `y_pred`. + +[`class CosineSimilarity`](../../../../tf/keras/losses/CosineSimilarity.md): Computes the cosine similarity between `y_true` and `y_pred`. + +[`class Hinge`](../../../../tf/keras/losses/Hinge.md): Computes the hinge loss between `y_true` and `y_pred`. + +[`class Huber`](../../../../tf/keras/losses/Huber.md): Computes the Huber loss between `y_true` and `y_pred`. + +[`class KLDivergence`](../../../../tf/keras/losses/KLDivergence.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`class LogCosh`](../../../../tf/keras/losses/LogCosh.md): Computes the logarithm of the hyperbolic cosine of the prediction error. + +[`class Loss`](../../../../tf/keras/losses/Loss.md): Loss base class. + +[`class MeanAbsoluteError`](../../../../tf/keras/losses/MeanAbsoluteError.md): Computes the mean of absolute difference between labels and predictions. + +[`class MeanAbsolutePercentageError`](../../../../tf/keras/losses/MeanAbsolutePercentageError.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`class MeanSquaredError`](../../../../tf/keras/losses/MeanSquaredError.md): Computes the mean of squares of errors between labels and predictions. + +[`class MeanSquaredLogarithmicError`](../../../../tf/keras/losses/MeanSquaredLogarithmicError.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`class Poisson`](../../../../tf/keras/losses/Poisson.md): Computes the Poisson loss between `y_true` and `y_pred`. + +[`class SparseCategoricalCrossentropy`](../../../../tf/keras/losses/SparseCategoricalCrossentropy.md): Computes the crossentropy loss between the labels and predictions. + +[`class SquaredHinge`](../../../../tf/keras/losses/SquaredHinge.md): Computes the squared hinge loss between `y_true` and `y_pred`. + +## Functions + +[`KLD(...)`](../../../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`MAE(...)`](../../../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`MAPE(...)`](../../../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`MSE(...)`](../../../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`MSLE(...)`](../../../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`binary_crossentropy(...)`](../../../../tf/keras/losses/binary_crossentropy.md): Computes the binary crossentropy loss. + +[`categorical_crossentropy(...)`](../../../../tf/keras/losses/categorical_crossentropy.md): Computes the categorical crossentropy loss. + +[`categorical_hinge(...)`](../../../../tf/keras/losses/categorical_hinge.md): Computes the categorical hinge loss between `y_true` and `y_pred`. 
+ +[`cosine(...)`](../../../../tf/keras/losses/cosine_similarity.md): Computes the cosine similarity between labels and predictions. + +[`cosine_proximity(...)`](../../../../tf/keras/losses/cosine_similarity.md): Computes the cosine similarity between labels and predictions. + +[`cosine_similarity(...)`](../../../../tf/keras/losses/cosine_similarity.md): Computes the cosine similarity between labels and predictions. + +[`deserialize(...)`](../../../../tf/keras/losses/deserialize.md): Deserializes a serialized loss class/function instance. + +[`get(...)`](../../../../tf/keras/losses/get.md): Retrieves a Keras loss function. + +[`hinge(...)`](../../../../tf/keras/losses/hinge.md): Computes the hinge loss between `y_true` and `y_pred`. + +[`kld(...)`](../../../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`kullback_leibler_divergence(...)`](../../../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`logcosh(...)`](../../../../tf/keras/losses/logcosh.md): Logarithm of the hyperbolic cosine of the prediction error. + +[`mae(...)`](../../../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`mape(...)`](../../../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`mean_absolute_error(...)`](../../../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`mean_absolute_percentage_error(...)`](../../../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`mean_squared_error(...)`](../../../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`mean_squared_logarithmic_error(...)`](../../../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`mse(...)`](../../../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`msle(...)`](../../../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`poisson(...)`](../../../../tf/keras/losses/poisson.md): Computes the Poisson loss between y_true and y_pred. + +[`serialize(...)`](../../../../tf/keras/losses/serialize.md): Serializes loss function or `Loss` instance. + +[`sparse_categorical_crossentropy(...)`](../../../../tf/keras/losses/sparse_categorical_crossentropy.md): Computes the sparse categorical crossentropy loss. + +[`squared_hinge(...)`](../../../../tf/keras/losses/squared_hinge.md): Computes the squared hinge loss between `y_true` and `y_pred`. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/metrics.md b/site/en/api_docs/python/tf/compat/v1/keras/metrics.md new file mode 100644 index 00000000000..74963c8de86 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/metrics.md @@ -0,0 +1,163 @@ +description: Built-in metrics. + +
+ + +
+ +# Module: tf.compat.v1.keras.metrics + + + + + + + + + +Built-in metrics. + + + +## Classes + +[`class AUC`](../../../../tf/keras/metrics/AUC.md): Computes the approximate AUC (Area under the curve) via a Riemann sum. + +[`class Accuracy`](../../../../tf/keras/metrics/Accuracy.md): Calculates how often predictions equals labels. + +[`class BinaryAccuracy`](../../../../tf/keras/metrics/BinaryAccuracy.md): Calculates how often predictions matches binary labels. + +[`class BinaryCrossentropy`](../../../../tf/keras/metrics/BinaryCrossentropy.md): Computes the crossentropy metric between the labels and predictions. + +[`class CategoricalAccuracy`](../../../../tf/keras/metrics/CategoricalAccuracy.md): Calculates how often predictions matches one-hot labels. + +[`class CategoricalCrossentropy`](../../../../tf/keras/metrics/CategoricalCrossentropy.md): Computes the crossentropy metric between the labels and predictions. + +[`class CategoricalHinge`](../../../../tf/keras/metrics/CategoricalHinge.md): Computes the categorical hinge metric between `y_true` and `y_pred`. + +[`class CosineSimilarity`](../../../../tf/keras/metrics/CosineSimilarity.md): Computes the cosine similarity between the labels and predictions. + +[`class FalseNegatives`](../../../../tf/keras/metrics/FalseNegatives.md): Calculates the number of false negatives. + +[`class FalsePositives`](../../../../tf/keras/metrics/FalsePositives.md): Calculates the number of false positives. + +[`class Hinge`](../../../../tf/keras/metrics/Hinge.md): Computes the hinge metric between `y_true` and `y_pred`. + +[`class KLDivergence`](../../../../tf/keras/metrics/KLDivergence.md): Computes Kullback-Leibler divergence metric between `y_true` and `y_pred`. + +[`class LogCoshError`](../../../../tf/keras/metrics/LogCoshError.md): Computes the logarithm of the hyperbolic cosine of the prediction error. + +[`class Mean`](../../../../tf/keras/metrics/Mean.md): Computes the (weighted) mean of the given values. + +[`class MeanAbsoluteError`](../../../../tf/keras/metrics/MeanAbsoluteError.md): Computes the mean absolute error between the labels and predictions. + +[`class MeanAbsolutePercentageError`](../../../../tf/keras/metrics/MeanAbsolutePercentageError.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`class MeanIoU`](../../../../tf/keras/metrics/MeanIoU.md): Computes the mean Intersection-Over-Union metric. + +[`class MeanRelativeError`](../../../../tf/keras/metrics/MeanRelativeError.md): Computes the mean relative error by normalizing with the given values. + +[`class MeanSquaredError`](../../../../tf/keras/metrics/MeanSquaredError.md): Computes the mean squared error between `y_true` and `y_pred`. + +[`class MeanSquaredLogarithmicError`](../../../../tf/keras/metrics/MeanSquaredLogarithmicError.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`class MeanTensor`](../../../../tf/keras/metrics/MeanTensor.md): Computes the element-wise (weighted) mean of the given tensors. + +[`class Metric`](../../../../tf/keras/metrics/Metric.md): Encapsulates metric logic and state. + +[`class Poisson`](../../../../tf/keras/metrics/Poisson.md): Computes the Poisson metric between `y_true` and `y_pred`. + +[`class Precision`](../../../../tf/keras/metrics/Precision.md): Computes the precision of the predictions with respect to the labels. + +[`class PrecisionAtRecall`](../../../../tf/keras/metrics/PrecisionAtRecall.md): Computes the precision at a given recall. 
+ +[`class Recall`](../../../../tf/keras/metrics/Recall.md): Computes the recall of the predictions with respect to the labels. + +[`class RecallAtPrecision`](../../../../tf/keras/metrics/RecallAtPrecision.md): Computes the maximally achievable recall at a required precision. + +[`class RootMeanSquaredError`](../../../../tf/keras/metrics/RootMeanSquaredError.md): Computes root mean squared error metric between `y_true` and `y_pred`. + +[`class SensitivityAtSpecificity`](../../../../tf/keras/metrics/SensitivityAtSpecificity.md): Computes the sensitivity at a given specificity. + +[`class SparseCategoricalAccuracy`](../../../../tf/keras/metrics/SparseCategoricalAccuracy.md): Calculates how often predictions matches integer labels. + +[`class SparseCategoricalCrossentropy`](../../../../tf/keras/metrics/SparseCategoricalCrossentropy.md): Computes the crossentropy metric between the labels and predictions. + +[`class SparseTopKCategoricalAccuracy`](../../../../tf/keras/metrics/SparseTopKCategoricalAccuracy.md): Computes how often integer targets are in the top `K` predictions. + +[`class SpecificityAtSensitivity`](../../../../tf/keras/metrics/SpecificityAtSensitivity.md): Computes the specificity at a given sensitivity. + +[`class SquaredHinge`](../../../../tf/keras/metrics/SquaredHinge.md): Computes the squared hinge metric between `y_true` and `y_pred`. + +[`class Sum`](../../../../tf/keras/metrics/Sum.md): Computes the (weighted) sum of the given values. + +[`class TopKCategoricalAccuracy`](../../../../tf/keras/metrics/TopKCategoricalAccuracy.md): Computes how often targets are in the top `K` predictions. + +[`class TrueNegatives`](../../../../tf/keras/metrics/TrueNegatives.md): Calculates the number of true negatives. + +[`class TruePositives`](../../../../tf/keras/metrics/TruePositives.md): Calculates the number of true positives. + +## Functions + +[`KLD(...)`](../../../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`MAE(...)`](../../../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`MAPE(...)`](../../../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`MSE(...)`](../../../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`MSLE(...)`](../../../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`binary_accuracy(...)`](../../../../tf/keras/metrics/binary_accuracy.md): Calculates how often predictions matches binary labels. + +[`binary_crossentropy(...)`](../../../../tf/keras/losses/binary_crossentropy.md): Computes the binary crossentropy loss. + +[`categorical_accuracy(...)`](../../../../tf/keras/metrics/categorical_accuracy.md): Calculates how often predictions matches one-hot labels. + +[`categorical_crossentropy(...)`](../../../../tf/keras/losses/categorical_crossentropy.md): Computes the categorical crossentropy loss. + +[`cosine(...)`](../../../../tf/keras/losses/cosine_similarity.md): Computes the cosine similarity between labels and predictions. + +[`cosine_proximity(...)`](../../../../tf/keras/losses/cosine_similarity.md): Computes the cosine similarity between labels and predictions. + +[`deserialize(...)`](../../../../tf/keras/metrics/deserialize.md) + +[`get(...)`](../../../../tf/keras/metrics/get.md): Return a metric given its identifer. 
+ +[`hinge(...)`](../../../../tf/keras/losses/hinge.md): Computes the hinge loss between `y_true` and `y_pred`. + +[`kld(...)`](../../../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`kullback_leibler_divergence(...)`](../../../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`mae(...)`](../../../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`mape(...)`](../../../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`mean_absolute_error(...)`](../../../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`mean_absolute_percentage_error(...)`](../../../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`mean_squared_error(...)`](../../../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`mean_squared_logarithmic_error(...)`](../../../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`mse(...)`](../../../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`msle(...)`](../../../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`poisson(...)`](../../../../tf/keras/losses/poisson.md): Computes the Poisson loss between y_true and y_pred. + +[`serialize(...)`](../../../../tf/keras/metrics/serialize.md) + +[`sparse_categorical_accuracy(...)`](../../../../tf/keras/metrics/sparse_categorical_accuracy.md): Calculates how often predictions matches integer labels. + +[`sparse_categorical_crossentropy(...)`](../../../../tf/keras/losses/sparse_categorical_crossentropy.md): Computes the sparse categorical crossentropy loss. + +[`sparse_top_k_categorical_accuracy(...)`](../../../../tf/keras/metrics/sparse_top_k_categorical_accuracy.md): Computes how often integer targets are in the top `K` predictions. + +[`squared_hinge(...)`](../../../../tf/keras/losses/squared_hinge.md): Computes the squared hinge loss between `y_true` and `y_pred`. + +[`top_k_categorical_accuracy(...)`](../../../../tf/keras/metrics/top_k_categorical_accuracy.md): Computes how often targets are in the top `K` predictions. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/mixed_precision.md b/site/en/api_docs/python/tf/compat/v1/keras/mixed_precision.md new file mode 100644 index 00000000000..1b3d673f4ed --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/mixed_precision.md @@ -0,0 +1,25 @@ +description: Public API for tf.keras.mixed_precision namespace. + +
+ + +
+ +# Module: tf.compat.v1.keras.mixed_precision + + + + + + + + + +Public API for tf.keras.mixed_precision namespace. + + + +## Modules + +[`experimental`](../../../../tf/compat/v1/keras/mixed_precision/experimental.md) module: Public API for tf.keras.mixed_precision.experimental namespace. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/mixed_precision/experimental.md b/site/en/api_docs/python/tf/compat/v1/keras/mixed_precision/experimental.md new file mode 100644 index 00000000000..6e54e53f5bf --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/mixed_precision/experimental.md @@ -0,0 +1,35 @@ +description: Public API for tf.keras.mixed_precision.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.keras.mixed_precision.experimental + + + + + + + + + +Public API for tf.keras.mixed_precision.experimental namespace. + + + +## Classes + +[`class LossScaleOptimizer`](../../../../../tf/keras/mixed_precision/experimental/LossScaleOptimizer.md): An optimizer that applies loss scaling. + +[`class Policy`](../../../../../tf/keras/mixed_precision/experimental/Policy.md): A dtype policy for a Keras layer. + +## Functions + +[`get_layer_policy(...)`](../../../../../tf/keras/mixed_precision/experimental/get_layer_policy.md): Returns the dtype policy of a layer. + +[`global_policy(...)`](../../../../../tf/keras/mixed_precision/experimental/global_policy.md): Returns the global Policy. + +[`set_policy(...)`](../../../../../tf/keras/mixed_precision/experimental/set_policy.md): Sets the global Policy. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/models.md b/site/en/api_docs/python/tf/compat/v1/keras/models.md new file mode 100644 index 00000000000..e1de61b0b51 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/models.md @@ -0,0 +1,41 @@ +description: Code for model cloning, plus model-related API entries. + +
+ + +
+ +# Module: tf.compat.v1.keras.models + + + + + + + + + +Code for model cloning, plus model-related API entries. + + + +## Classes + +[`class Model`](../../../../tf/keras/Model.md): `Model` groups layers into an object with training and inference features. + +[`class Sequential`](../../../../tf/keras/Sequential.md): `Sequential` groups a linear stack of layers into a tf.keras.Model. + +## Functions + +[`clone_model(...)`](../../../../tf/keras/models/clone_model.md): Clone any `Model` instance. + +[`load_model(...)`](../../../../tf/keras/models/load_model.md): Loads a model saved via `save_model`. + +[`model_from_config(...)`](../../../../tf/keras/models/model_from_config.md): Instantiates a Keras model from its config. + +[`model_from_json(...)`](../../../../tf/keras/models/model_from_json.md): Parses a JSON model configuration string and returns a model instance. + +[`model_from_yaml(...)`](../../../../tf/keras/models/model_from_yaml.md): Parses a yaml model configuration file and returns a model instance. + +[`save_model(...)`](../../../../tf/keras/models/save_model.md): Saves a model as a TensorFlow SavedModel or HDF5 file. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/optimizers.md b/site/en/api_docs/python/tf/compat/v1/keras/optimizers.md new file mode 100644 index 00000000000..0e48b87af06 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/optimizers.md @@ -0,0 +1,53 @@ +description: Built-in optimizer classes. + +
+ + +
+ +# Module: tf.compat.v1.keras.optimizers + + + + + + + + + +Built-in optimizer classes. + + + +## Modules + +[`schedules`](../../../../tf/compat/v1/keras/optimizers/schedules.md) module: Public API for tf.keras.optimizers.schedules namespace. + +## Classes + +[`class Adadelta`](../../../../tf/keras/optimizers/Adadelta.md): Optimizer that implements the Adadelta algorithm. + +[`class Adagrad`](../../../../tf/keras/optimizers/Adagrad.md): Optimizer that implements the Adagrad algorithm. + +[`class Adam`](../../../../tf/keras/optimizers/Adam.md): Optimizer that implements the Adam algorithm. + +[`class Adamax`](../../../../tf/keras/optimizers/Adamax.md): Optimizer that implements the Adamax algorithm. + +[`class Ftrl`](../../../../tf/keras/optimizers/Ftrl.md): Optimizer that implements the FTRL algorithm. + +[`class Nadam`](../../../../tf/keras/optimizers/Nadam.md): Optimizer that implements the NAdam algorithm. + +[`class Optimizer`](../../../../tf/keras/optimizers/Optimizer.md): Updated base class for optimizers. + +[`class RMSprop`](../../../../tf/keras/optimizers/RMSprop.md): Optimizer that implements the RMSprop algorithm. + +[`class SGD`](../../../../tf/keras/optimizers/SGD.md): Stochastic gradient descent and momentum optimizer. + +## Functions + +[`deserialize(...)`](../../../../tf/keras/optimizers/deserialize.md): Inverse of the `serialize` function. + +[`get(...)`](../../../../tf/keras/optimizers/get.md): Retrieves a Keras Optimizer instance. + +[`serialize(...)`](../../../../tf/keras/optimizers/serialize.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/optimizers/schedules.md b/site/en/api_docs/python/tf/compat/v1/keras/optimizers/schedules.md new file mode 100644 index 00000000000..4bea804ec25 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/optimizers/schedules.md @@ -0,0 +1,39 @@ +description: Public API for tf.keras.optimizers.schedules namespace. + +
+ + +
+ +# Module: tf.compat.v1.keras.optimizers.schedules + + + + + + + + + +Public API for tf.keras.optimizers.schedules namespace. + + + +## Classes + +[`class ExponentialDecay`](../../../../../tf/keras/optimizers/schedules/ExponentialDecay.md): A LearningRateSchedule that uses an exponential decay schedule. + +[`class InverseTimeDecay`](../../../../../tf/keras/optimizers/schedules/InverseTimeDecay.md): A LearningRateSchedule that uses an inverse time decay schedule. + +[`class LearningRateSchedule`](../../../../../tf/keras/optimizers/schedules/LearningRateSchedule.md): A serializable learning rate decay schedule. + +[`class PiecewiseConstantDecay`](../../../../../tf/keras/optimizers/schedules/PiecewiseConstantDecay.md): A LearningRateSchedule that uses a piecewise constant decay schedule. + +[`class PolynomialDecay`](../../../../../tf/keras/optimizers/schedules/PolynomialDecay.md): A LearningRateSchedule that uses a polynomial decay schedule. + +## Functions + +[`deserialize(...)`](../../../../../tf/keras/optimizers/schedules/deserialize.md) + +[`serialize(...)`](../../../../../tf/keras/optimizers/schedules/serialize.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/preprocessing.md b/site/en/api_docs/python/tf/compat/v1/keras/preprocessing.md new file mode 100644 index 00000000000..af4bdee99fe --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/preprocessing.md @@ -0,0 +1,29 @@ +description: Keras data preprocessing utils. + +
+ + +
+ +# Module: tf.compat.v1.keras.preprocessing + + + + + + + + + +Keras data preprocessing utils. + + + +## Modules + +[`image`](../../../../tf/compat/v1/keras/preprocessing/image.md) module: Set of tools for real-time data augmentation on image data. + +[`sequence`](../../../../tf/compat/v1/keras/preprocessing/sequence.md) module: Utilities for preprocessing sequence data. + +[`text`](../../../../tf/compat/v1/keras/preprocessing/text.md) module: Utilities for text input preprocessing. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/preprocessing/image.md b/site/en/api_docs/python/tf/compat/v1/keras/preprocessing/image.md new file mode 100644 index 00000000000..954b2765ef9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/preprocessing/image.md @@ -0,0 +1,59 @@ +description: Set of tools for real-time data augmentation on image data. + +
+ + +
+ +# Module: tf.compat.v1.keras.preprocessing.image + + + + + + + + + +Set of tools for real-time data augmentation on image data. + + + +## Classes + +[`class DirectoryIterator`](../../../../../tf/keras/preprocessing/image/DirectoryIterator.md): Iterator capable of reading images from a directory on disk. + +[`class ImageDataGenerator`](../../../../../tf/keras/preprocessing/image/ImageDataGenerator.md): Generate batches of tensor image data with real-time data augmentation. + +[`class Iterator`](../../../../../tf/keras/preprocessing/image/Iterator.md): Base class for image data iterators. + +[`class NumpyArrayIterator`](../../../../../tf/keras/preprocessing/image/NumpyArrayIterator.md): Iterator yielding data from a Numpy array. + +## Functions + +[`apply_affine_transform(...)`](../../../../../tf/keras/preprocessing/image/apply_affine_transform.md): Applies an affine transformation specified by the parameters given. + +[`apply_brightness_shift(...)`](../../../../../tf/keras/preprocessing/image/apply_brightness_shift.md): Performs a brightness shift. + +[`apply_channel_shift(...)`](../../../../../tf/keras/preprocessing/image/apply_channel_shift.md): Performs a channel shift. + +[`array_to_img(...)`](../../../../../tf/keras/preprocessing/image/array_to_img.md): Converts a 3D Numpy array to a PIL Image instance. + +[`img_to_array(...)`](../../../../../tf/keras/preprocessing/image/img_to_array.md): Converts a PIL Image instance to a Numpy array. + +[`load_img(...)`](../../../../../tf/keras/preprocessing/image/load_img.md): Loads an image into PIL format. + +[`random_brightness(...)`](../../../../../tf/keras/preprocessing/image/random_brightness.md): Performs a random brightness shift. + +[`random_channel_shift(...)`](../../../../../tf/keras/preprocessing/image/random_channel_shift.md): Performs a random channel shift. + +[`random_rotation(...)`](../../../../../tf/keras/preprocessing/image/random_rotation.md): Performs a random rotation of a Numpy image tensor. + +[`random_shear(...)`](../../../../../tf/keras/preprocessing/image/random_shear.md): Performs a random spatial shear of a Numpy image tensor. + +[`random_shift(...)`](../../../../../tf/keras/preprocessing/image/random_shift.md): Performs a random spatial shift of a Numpy image tensor. + +[`random_zoom(...)`](../../../../../tf/keras/preprocessing/image/random_zoom.md): Performs a random spatial zoom of a Numpy image tensor. + +[`save_img(...)`](../../../../../tf/keras/preprocessing/image/save_img.md): Saves an image stored as a Numpy array to a path or file object. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/preprocessing/sequence.md b/site/en/api_docs/python/tf/compat/v1/keras/preprocessing/sequence.md new file mode 100644 index 00000000000..723bc2f1aec --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/preprocessing/sequence.md @@ -0,0 +1,33 @@ +description: Utilities for preprocessing sequence data. + +
+ + +
+ +# Module: tf.compat.v1.keras.preprocessing.sequence + + + + + + + + + +Utilities for preprocessing sequence data. + + + +## Classes + +[`class TimeseriesGenerator`](../../../../../tf/keras/preprocessing/sequence/TimeseriesGenerator.md): Utility class for generating batches of temporal data. + +## Functions + +[`make_sampling_table(...)`](../../../../../tf/keras/preprocessing/sequence/make_sampling_table.md): Generates a word rank-based probabilistic sampling table. + +[`pad_sequences(...)`](../../../../../tf/keras/preprocessing/sequence/pad_sequences.md): Pads sequences to the same length. + +[`skipgrams(...)`](../../../../../tf/keras/preprocessing/sequence/skipgrams.md): Generates skipgram word pairs. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/preprocessing/text.md b/site/en/api_docs/python/tf/compat/v1/keras/preprocessing/text.md new file mode 100644 index 00000000000..a9d0885d00d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/preprocessing/text.md @@ -0,0 +1,35 @@ +description: Utilities for text input preprocessing. + +
+ + +
+ +# Module: tf.compat.v1.keras.preprocessing.text + + + + + + + + + +Utilities for text input preprocessing. + + + +## Classes + +[`class Tokenizer`](../../../../../tf/keras/preprocessing/text/Tokenizer.md): Text tokenization utility class. + +## Functions + +[`hashing_trick(...)`](../../../../../tf/keras/preprocessing/text/hashing_trick.md): Converts a text to a sequence of indexes in a fixed-size hashing space. + +[`one_hot(...)`](../../../../../tf/keras/preprocessing/text/one_hot.md): One-hot encodes a text into a list of word indexes of size n. + +[`text_to_word_sequence(...)`](../../../../../tf/keras/preprocessing/text/text_to_word_sequence.md): Converts a text to a sequence of words (or tokens). + +[`tokenizer_from_json(...)`](../../../../../tf/keras/preprocessing/text/tokenizer_from_json.md): Parses a JSON tokenizer configuration file and returns a tokenizer instance. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/regularizers.md b/site/en/api_docs/python/tf/compat/v1/keras/regularizers.md new file mode 100644 index 00000000000..d5235ebb396 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/regularizers.md @@ -0,0 +1,41 @@ +description: Built-in regularizers. + +
+ + +
+ +# Module: tf.compat.v1.keras.regularizers + + + + + + + + + +Built-in regularizers. + + + +## Classes + +[`class L1L2`](../../../../tf/keras/regularizers/L1L2.md): A regularizer that applies both L1 and L2 regularization penalties. + +[`class Regularizer`](../../../../tf/keras/regularizers/Regularizer.md): Regularizer base class. + +## Functions + +[`deserialize(...)`](../../../../tf/keras/regularizers/deserialize.md) + +[`get(...)`](../../../../tf/keras/regularizers/get.md) + +[`l1(...)`](../../../../tf/keras/regularizers/l1.md): Create a regularizer that applies an L1 regularization penalty. + +[`l1_l2(...)`](../../../../tf/keras/regularizers/l1_l2.md): Create a regularizer that applies both L1 and L2 penalties. + +[`l2(...)`](../../../../tf/keras/regularizers/l2.md): Create a regularizer that applies an L2 regularization penalty. + +[`serialize(...)`](../../../../tf/keras/regularizers/serialize.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/utils.md b/site/en/api_docs/python/tf/compat/v1/keras/utils.md new file mode 100644 index 00000000000..1a506d32692 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/utils.md @@ -0,0 +1,69 @@ +description: Public API for tf.keras.utils namespace. + +
+ + +
+ +# Module: tf.compat.v1.keras.utils + + + + + + + + + +Public API for tf.keras.utils namespace. + + + +## Classes + +[`class CustomObjectScope`](../../../../tf/keras/utils/CustomObjectScope.md): Provides a scope that changes to `_GLOBAL_CUSTOM_OBJECTS` cannot escape. + +[`class GeneratorEnqueuer`](../../../../tf/keras/utils/GeneratorEnqueuer.md): Builds a queue out of a data generator. + +[`class HDF5Matrix`](../../../../tf/keras/utils/HDF5Matrix.md): Representation of HDF5 dataset to be used instead of a Numpy array. + +[`class OrderedEnqueuer`](../../../../tf/keras/utils/OrderedEnqueuer.md): Builds a Enqueuer from a Sequence. + +[`class Progbar`](../../../../tf/keras/utils/Progbar.md): Displays a progress bar. + +[`class Sequence`](../../../../tf/keras/utils/Sequence.md): Base object for fitting to a sequence of data, such as a dataset. + +[`class SequenceEnqueuer`](../../../../tf/keras/utils/SequenceEnqueuer.md): Base class to enqueue inputs. + +## Functions + +[`convert_all_kernels_in_model(...)`](../../../../tf/keras/utils/convert_all_kernels_in_model.md): Converts all convolution kernels in a model from Theano to TensorFlow. (deprecated) + +[`custom_object_scope(...)`](../../../../tf/keras/utils/custom_object_scope.md): Provides a scope that changes to `_GLOBAL_CUSTOM_OBJECTS` cannot escape. + +[`deserialize_keras_object(...)`](../../../../tf/keras/utils/deserialize_keras_object.md) + +[`get_custom_objects(...)`](../../../../tf/keras/utils/get_custom_objects.md): Retrieves a live reference to the global dictionary of custom objects. + +[`get_file(...)`](../../../../tf/keras/utils/get_file.md): Downloads a file from a URL if it not already in the cache. + +[`get_registered_name(...)`](../../../../tf/keras/utils/get_registered_name.md): Returns the name registered to an object within the Keras framework. + +[`get_registered_object(...)`](../../../../tf/keras/utils/get_registered_object.md): Returns the class associated with `name` if it is registered with Keras. + +[`get_source_inputs(...)`](../../../../tf/keras/utils/get_source_inputs.md): Returns the list of input tensors necessary to compute `tensor`. + +[`model_to_dot(...)`](../../../../tf/keras/utils/model_to_dot.md): Convert a Keras model to dot format. + +[`multi_gpu_model(...)`](../../../../tf/keras/utils/multi_gpu_model.md): Replicates a model on different GPUs. (deprecated) + +[`normalize(...)`](../../../../tf/keras/utils/normalize.md): Normalizes a Numpy array. + +[`plot_model(...)`](../../../../tf/keras/utils/plot_model.md): Converts a Keras model to dot format and save to a file. + +[`register_keras_serializable(...)`](../../../../tf/keras/utils/register_keras_serializable.md): Registers an object with the Keras serialization framework. + +[`serialize_keras_object(...)`](../../../../tf/keras/utils/serialize_keras_object.md): Serialize Keras object into JSON. + +[`to_categorical(...)`](../../../../tf/keras/utils/to_categorical.md): Converts a class vector (integers) to binary class matrix. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/wrappers.md b/site/en/api_docs/python/tf/compat/v1/keras/wrappers.md new file mode 100644 index 00000000000..dcf864363cb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/wrappers.md @@ -0,0 +1,25 @@ +description: Public API for tf.keras.wrappers namespace. + +
+ + +
+ +# Module: tf.compat.v1.keras.wrappers + + + + + + + + + +Public API for tf.keras.wrappers namespace. + + + +## Modules + +[`scikit_learn`](../../../../tf/compat/v1/keras/wrappers/scikit_learn.md) module: Wrapper for using the Scikit-Learn API with Keras models. + diff --git a/site/en/api_docs/python/tf/compat/v1/keras/wrappers/scikit_learn.md b/site/en/api_docs/python/tf/compat/v1/keras/wrappers/scikit_learn.md new file mode 100644 index 00000000000..5f3351ac4ab --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/keras/wrappers/scikit_learn.md @@ -0,0 +1,27 @@ +description: Wrapper for using the Scikit-Learn API with Keras models. + +
+ + +
+ +# Module: tf.compat.v1.keras.wrappers.scikit_learn + + + + + + + + + +Wrapper for using the Scikit-Learn API with Keras models. + + + +## Classes + +[`class KerasClassifier`](../../../../../tf/keras/wrappers/scikit_learn/KerasClassifier.md): Implementation of the scikit-learn classifier API for Keras. + +[`class KerasRegressor`](../../../../../tf/keras/wrappers/scikit_learn/KerasRegressor.md): Implementation of the scikit-learn regressor API for Keras. + diff --git a/site/en/api_docs/python/tf/compat/v1/layers.md b/site/en/api_docs/python/tf/compat/v1/layers.md new file mode 100644 index 00000000000..e8ef4fded92 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers.md @@ -0,0 +1,101 @@ +description: Public API for tf.layers namespace. + +
+ + +
+ +# Module: tf.compat.v1.layers + + + + + + + + + +Public API for tf.layers namespace. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/layers/experimental.md) module: Public API for tf.layers.experimental namespace. + +## Classes + +[`class AveragePooling1D`](../../../tf/compat/v1/layers/AveragePooling1D.md): Average Pooling layer for 1D inputs. + +[`class AveragePooling2D`](../../../tf/compat/v1/layers/AveragePooling2D.md): Average pooling layer for 2D inputs (e.g. images). + +[`class AveragePooling3D`](../../../tf/compat/v1/layers/AveragePooling3D.md): Average pooling layer for 3D inputs (e.g. volumes). + +[`class BatchNormalization`](../../../tf/compat/v1/layers/BatchNormalization.md): Batch Normalization layer from (Ioffe et al., 2015). + +[`class Conv1D`](../../../tf/compat/v1/layers/Conv1D.md): 1D convolution layer (e.g. temporal convolution). + +[`class Conv2D`](../../../tf/compat/v1/layers/Conv2D.md): 2D convolution layer (e.g. spatial convolution over images). + +[`class Conv2DTranspose`](../../../tf/compat/v1/layers/Conv2DTranspose.md): Transposed 2D convolution layer (sometimes called 2D Deconvolution). + +[`class Conv3D`](../../../tf/compat/v1/layers/Conv3D.md): 3D convolution layer (e.g. spatial convolution over volumes). + +[`class Conv3DTranspose`](../../../tf/compat/v1/layers/Conv3DTranspose.md): Transposed 3D convolution layer (sometimes called 3D Deconvolution). + +[`class Dense`](../../../tf/compat/v1/layers/Dense.md): Densely-connected layer class. + +[`class Dropout`](../../../tf/compat/v1/layers/Dropout.md): Applies Dropout to the input. + +[`class Flatten`](../../../tf/compat/v1/layers/Flatten.md): Flattens an input tensor while preserving the batch axis (axis 0). + +[`class InputSpec`](../../../tf/keras/layers/InputSpec.md): Specifies the rank, dtype and shape of every input to a layer. + +[`class Layer`](../../../tf/compat/v1/layers/Layer.md): Base layer class. + +[`class MaxPooling1D`](../../../tf/compat/v1/layers/MaxPooling1D.md): Max Pooling layer for 1D inputs. + +[`class MaxPooling2D`](../../../tf/compat/v1/layers/MaxPooling2D.md): Max pooling layer for 2D inputs (e.g. images). + +[`class MaxPooling3D`](../../../tf/compat/v1/layers/MaxPooling3D.md): Max pooling layer for 3D inputs (e.g. volumes). + +[`class SeparableConv1D`](../../../tf/compat/v1/layers/SeparableConv1D.md): Depthwise separable 1D convolution. + +[`class SeparableConv2D`](../../../tf/compat/v1/layers/SeparableConv2D.md): Depthwise separable 2D convolution. + +## Functions + +[`average_pooling1d(...)`](../../../tf/compat/v1/layers/average_pooling1d.md): Average Pooling layer for 1D inputs. (deprecated) + +[`average_pooling2d(...)`](../../../tf/compat/v1/layers/average_pooling2d.md): Average pooling layer for 2D inputs (e.g. images). (deprecated) + +[`average_pooling3d(...)`](../../../tf/compat/v1/layers/average_pooling3d.md): Average pooling layer for 3D inputs (e.g. volumes). (deprecated) + +[`batch_normalization(...)`](../../../tf/compat/v1/layers/batch_normalization.md): Functional interface for the batch normalization layer from_config(Ioffe et al., 2015). (deprecated) + +[`conv1d(...)`](../../../tf/compat/v1/layers/conv1d.md): Functional interface for 1D convolution layer (e.g. temporal convolution). (deprecated) + +[`conv2d(...)`](../../../tf/compat/v1/layers/conv2d.md): Functional interface for the 2D convolution layer. (deprecated) + +[`conv2d_transpose(...)`](../../../tf/compat/v1/layers/conv2d_transpose.md): Functional interface for transposed 2D convolution layer. 
(deprecated) + +[`conv3d(...)`](../../../tf/compat/v1/layers/conv3d.md): Functional interface for the 3D convolution layer. (deprecated) + +[`conv3d_transpose(...)`](../../../tf/compat/v1/layers/conv3d_transpose.md): Functional interface for transposed 3D convolution layer. (deprecated) + +[`dense(...)`](../../../tf/compat/v1/layers/dense.md): Functional interface for the densely-connected layer. (deprecated) + +[`dropout(...)`](../../../tf/compat/v1/layers/dropout.md): Applies Dropout to the input. (deprecated) + +[`flatten(...)`](../../../tf/compat/v1/layers/flatten.md): Flattens an input tensor while preserving the batch axis (axis 0). (deprecated) + +[`max_pooling1d(...)`](../../../tf/compat/v1/layers/max_pooling1d.md): Max Pooling layer for 1D inputs. (deprecated) + +[`max_pooling2d(...)`](../../../tf/compat/v1/layers/max_pooling2d.md): Max pooling layer for 2D inputs (e.g. images). (deprecated) + +[`max_pooling3d(...)`](../../../tf/compat/v1/layers/max_pooling3d.md): Max pooling layer for 3D inputs (e.g. volumes). (deprecated) + +[`separable_conv1d(...)`](../../../tf/compat/v1/layers/separable_conv1d.md): Functional interface for the depthwise separable 1D convolution layer. (deprecated) + +[`separable_conv2d(...)`](../../../tf/compat/v1/layers/separable_conv2d.md): Functional interface for the depthwise separable 2D convolution layer. (deprecated) + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/AveragePooling1D.md b/site/en/api_docs/python/tf/compat/v1/layers/AveragePooling1D.md new file mode 100644 index 00000000000..64bed08fb45 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/AveragePooling1D.md @@ -0,0 +1,122 @@ +description: Average Pooling layer for 1D inputs. + +
+ + + + +
+ +# tf.compat.v1.layers.AveragePooling1D + + + + + + + + + +Average Pooling layer for 1D inputs. + +Inherits From: [`AveragePooling1D`](../../../../tf/keras/layers/AveragePooling1D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +An integer or tuple/list of a single integer, +representing the size of the pooling window. +
+`strides` + +An integer or tuple/list of a single integer, specifying the +strides of the pooling operation. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, length, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, length)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/AveragePooling2D.md b/site/en/api_docs/python/tf/compat/v1/layers/AveragePooling2D.md new file mode 100644 index 00000000000..b5da3d67263 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/AveragePooling2D.md @@ -0,0 +1,126 @@ +description: Average pooling layer for 2D inputs (e.g. images). + +
+ + + + +
+ +# tf.compat.v1.layers.AveragePooling2D + + + + + + + + + +Average pooling layer for 2D inputs (e.g. images). + +Inherits From: [`AveragePooling2D`](../../../../tf/keras/layers/AveragePooling2D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +An integer or tuple/list of 2 integers: (pool_height, pool_width) +specifying the size of the pooling window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the pooling operation. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string. The ordering of the dimensions in the inputs. +`channels_last` (default) and `channels_first` are supported. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/AveragePooling3D.md b/site/en/api_docs/python/tf/compat/v1/layers/AveragePooling3D.md new file mode 100644 index 00000000000..47ede51cdb2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/AveragePooling3D.md @@ -0,0 +1,128 @@ +description: Average pooling layer for 3D inputs (e.g. volumes). + +
+ + + + +
+ +# tf.compat.v1.layers.AveragePooling3D + + + + + + + + + +Average pooling layer for 3D inputs (e.g. volumes). + +Inherits From: [`AveragePooling3D`](../../../../tf/keras/layers/AveragePooling3D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +An integer or tuple/list of 3 integers: +(pool_depth, pool_height, pool_width) +specifying the size of the pooling window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 3 integers, +specifying the strides of the pooling operation. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string. The ordering of the dimensions in the inputs. +`channels_last` (default) and `channels_first` are supported. +`channels_last` corresponds to inputs with shape +`(batch, depth, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, depth, height, width)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/BatchNormalization.md b/site/en/api_docs/python/tf/compat/v1/layers/BatchNormalization.md new file mode 100644 index 00000000000..512be112d54 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/BatchNormalization.md @@ -0,0 +1,301 @@ +description: Batch Normalization layer from (Ioffe et al., 2015). + +
+ + + + +
+ +# tf.compat.v1.layers.BatchNormalization + + + + + + + + + +Batch Normalization layer from (Ioffe et al., 2015). + +Inherits From: [`BatchNormalization`](../../../../tf/compat/v1/keras/layers/BatchNormalization.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + +Keras APIs handle BatchNormalization updates to the moving_mean and +moving_variance as part of their `fit()` and `evaluate()` loops. However, if a +custom training loop is used with an instance of `Model`, these updates need +to be explicitly included. Here's a simple example of how it can be done: + +```python + # model is an instance of Model that contains BatchNormalization layer. + update_ops = model.get_updates_for(None) + model.get_updates_for(features) + train_op = optimizer.minimize(loss) + train_op = tf.group([train_op, update_ops]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`axis` + +An `int` or list of `int`, the axis or axes that should be normalized, +typically the features axis/axes. For instance, after a `Conv2D` layer +with `data_format="channels_first"`, set `axis=1`. If a list of axes is +provided, each axis in `axis` will be normalized +simultaneously. Default is `-1`, which uses the last axis. Note: when +using multi-axis batch norm, the `beta`, `gamma`, `moving_mean`, and +`moving_variance` variables are the same rank as the input Tensor, +with dimension size 1 in all reduced (non-axis) dimensions. +
+`momentum` + +Momentum for the moving average. +
+`epsilon` + +Small float added to variance to avoid dividing by zero. +
+`center` + +If True, add offset of `beta` to normalized tensor. If False, `beta` +is ignored. +
+`scale` + +If True, multiply by `gamma`. If False, `gamma` is not used. When the +next layer is linear (also e.g. `nn.relu`), this can be disabled since the +scaling can be done by the next layer. +
+`beta_initializer` + +Initializer for the beta weight. +
+`gamma_initializer` + +Initializer for the gamma weight. +
+`moving_mean_initializer` + +Initializer for the moving mean. +
+`moving_variance_initializer` + +Initializer for the moving variance. +
+`beta_regularizer` + +Optional regularizer for the beta weight. +
+`gamma_regularizer` + +Optional regularizer for the gamma weight. +
+`beta_constraint` + +An optional projection function to be applied to the `beta` +weight after being updated by an `Optimizer` (e.g. used to implement norm +constraints or value constraints for layer weights). The function must +take as input the unprojected variable and must return the projected +variable (which must have the same shape). Constraints are not safe to use +when doing asynchronous distributed training. +
+`gamma_constraint` + +An optional projection function to be applied to the +`gamma` weight after being updated by an `Optimizer`. +
+`renorm` + +Whether to use Batch Renormalization (Ioffe, 2017). This adds extra +variables during training. The inference is the same for either value of +this parameter. +
+`renorm_clipping` + +A dictionary that may map keys 'rmax', 'rmin', 'dmax' to +scalar `Tensors` used to clip the renorm correction. The correction `(r, +d)` is used as `corrected_value = normalized_value * r + d`, with `r` +clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, +dmax are set to inf, 0, inf, respectively. +
+`renorm_momentum` + +Momentum used to update the moving means and standard +deviations with renorm. Unlike `momentum`, this affects training and +should be neither too small (which would add noise) nor too large (which +would give stale estimates). Note that `momentum` is still applied to get +the means and variances for inference. +
+`fused` + +If `None` or `True`, use a faster, fused implementation if possible. +If `False`, use the system-recommended implementation. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`virtual_batch_size` + +An `int`. By default, `virtual_batch_size` is `None`, +which means batch normalization is performed across the whole batch. When +`virtual_batch_size` is not `None`, instead perform "Ghost Batch +Normalization", which creates virtual sub-batches which are each +normalized separately (with shared gamma, beta, and moving statistics). +Must divide the actual batch size during execution. +
+`adjustment` + +A function taking the `Tensor` containing the (dynamic) shape of +the input tensor and returning a pair (scale, bias) to apply to the +normalized values (before gamma and beta), only during training. For +example, if axis==-1, +`adjustment = lambda shape: ( +tf.random.uniform(shape[-1:], 0.93, 1.07), +tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized +value by up to 7% up or down, then shift the result by up to 0.1 +(with independent scaling and bias for each feature but shared +across all examples), and finally apply gamma and/or beta. If +`None`, no adjustment is applied. Cannot be specified if +virtual_batch_size is specified. +
+`name` + +A string, the name of the layer. +
+ + + +#### References: + +Batch Normalization - Accelerating Deep Network Training by Reducing + Internal Covariate Shift: + [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html) + ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf)) +Batch Renormalization - Towards Reducing Minibatch Dependence in + Batch-Normalized Models: + [Ioffe, + 2017](http://papers.nips.cc/paper/6790-batch-renormalization-towards-reducing-minibatch-dependence-in-batch-normalized-models) + ([pdf](http://papers.nips.cc/paper/6790-batch-renormalization-towards-reducing-minibatch-dependence-in-batch-normalized-models.pdf)) + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/Conv1D.md b/site/en/api_docs/python/tf/compat/v1/layers/Conv1D.md new file mode 100644 index 00000000000..20b8001ce4e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/Conv1D.md @@ -0,0 +1,228 @@ +description: 1D convolution layer (e.g. temporal convolution). + +
+ + + + +
+ +# tf.compat.v1.layers.Conv1D + + + + + + + + + +1D convolution layer (e.g. temporal convolution). + +Inherits From: [`Conv1D`](../../../../tf/keras/layers/Conv1D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + +This layer creates a convolution kernel that is convolved +(actually cross-correlated) with the layer input to produce a tensor of +outputs. If `use_bias` is True (and a `bias_initializer` is provided), +a bias vector is created and added to the outputs. Finally, if +`activation` is not `None`, it is applied to the outputs as well. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of a single integer, specifying the +length of the 1D convolution window. +
+`strides` + +An integer or tuple/list of a single integer, +specifying the stride length of the convolution. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, length, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, length)`. +
+`dilation_rate` + +An integer or tuple/list of a single integer, specifying +the dilation rate to use for dilated convolution. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any `strides` value != 1. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +An initializer for the convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`kernel_regularizer` + +Optional regularizer for the convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`kernel_constraint` + +Optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/Conv2D.md b/site/en/api_docs/python/tf/compat/v1/layers/Conv2D.md new file mode 100644 index 00000000000..b790dcc86ec --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/Conv2D.md @@ -0,0 +1,235 @@ +description: 2D convolution layer (e.g. spatial convolution over images). + +
+ + + + +
+ +# tf.compat.v1.layers.Conv2D + + + + + + + + + +2D convolution layer (e.g. spatial convolution over images). + +Inherits From: [`Conv2D`](../../../../tf/keras/layers/Conv2D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + +This layer creates a convolution kernel that is convolved +(actually cross-correlated) with the layer input to produce a tensor of +outputs. If `use_bias` is True (and a `bias_initializer` is provided), +a bias vector is created and added to the outputs. Finally, if +`activation` is not `None`, it is applied to the outputs as well. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 2 integers, specifying the +height and width of the 2D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the convolution along the height and width. +Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+`dilation_rate` + +An integer or tuple/list of 2 integers, specifying +the dilation rate to use for dilated convolution. +Can be a single integer to specify the same value for +all spatial dimensions. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +An initializer for the convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`kernel_regularizer` + +Optional regularizer for the convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`kernel_constraint` + +Optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/Conv2DTranspose.md b/site/en/api_docs/python/tf/compat/v1/layers/Conv2DTranspose.md new file mode 100644 index 00000000000..c03856c9fe7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/Conv2DTranspose.md @@ -0,0 +1,220 @@ +description: Transposed 2D convolution layer (sometimes called 2D Deconvolution). + +
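A minimal usage sketch showing the upsampling behaviour (illustrative shapes only; graph mode is assumed, since the `tf.compat.v1.layers` API predates eager execution):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # legacy layers are graph-mode APIs

# Hypothetical 8x8 feature maps to be upsampled toward input resolution.
features = tf.compat.v1.placeholder(tf.float32, shape=(None, 8, 8, 64))

deconv = tf.compat.v1.layers.Conv2DTranspose(
    filters=32, kernel_size=3, strides=2, padding='same')
upsampled = deconv(features)  # shape: (None, 16, 16, 32)
```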
+ + + + +
+ +# tf.compat.v1.layers.Conv2DTranspose + + + + + + + + + +Transposed 2D convolution layer (sometimes called 2D Deconvolution). + +Inherits From: [`Conv2DTranspose`](../../../../tf/keras/layers/Conv2DTranspose.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + +The need for transposed convolutions generally arises +from the desire to use a transformation going in the opposite direction +of a normal convolution, i.e., from something that has the shape of the +output of some convolution to something that has the shape of its input +while maintaining a connectivity pattern that is compatible with +said convolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +A tuple or list of 2 positive integers specifying the spatial +dimensions of the filters. Can be a single integer to specify the same +value for all spatial dimensions. +
+`strides` + +A tuple or list of 2 positive integers specifying the strides +of the convolution. Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding` + +one of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +An initializer for the convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`kernel_regularizer` + +Optional regularizer for the convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`kernel_constraint` + +Optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/Conv3D.md b/site/en/api_docs/python/tf/compat/v1/layers/Conv3D.md new file mode 100644 index 00000000000..4c87a2c8458 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/Conv3D.md @@ -0,0 +1,237 @@ +description: 3D convolution layer (e.g. spatial convolution over volumes). + +
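A minimal usage sketch over hypothetical single-channel volumes (illustrative only; graph mode is assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # legacy layers are graph-mode APIs

# Hypothetical 16x16x16 single-channel volumes (e.g. small 3D scans).
volumes = tf.compat.v1.placeholder(tf.float32, shape=(None, 16, 16, 16, 1))

conv3d = tf.compat.v1.layers.Conv3D(
    filters=8, kernel_size=3, padding='same', activation=tf.nn.relu)
features = conv3d(volumes)  # shape: (None, 16, 16, 16, 8)
```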
+ + + + +
+ +# tf.compat.v1.layers.Conv3D + + + + + + + + + +3D convolution layer (e.g. spatial convolution over volumes). + +Inherits From: [`Conv3D`](../../../../tf/keras/layers/Conv3D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + +This layer creates a convolution kernel that is convolved +(actually cross-correlated) with the layer input to produce a tensor of +outputs. If `use_bias` is True (and a `bias_initializer` is provided), +a bias vector is created and added to the outputs. Finally, if +`activation` is not `None`, it is applied to the outputs as well. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 3 integers, specifying the +depth, height and width of the 3D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 3 integers, +specifying the strides of the convolution along the depth, +height and width. +Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, depth, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, depth, height, width)`. +
+`dilation_rate` + +An integer or tuple/list of 3 integers, specifying +the dilation rate to use for dilated convolution. +Can be a single integer to specify the same value for +all spatial dimensions. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +An initializer for the convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`kernel_regularizer` + +Optional regularizer for the convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`kernel_constraint` + +Optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/Conv3DTranspose.md b/site/en/api_docs/python/tf/compat/v1/layers/Conv3DTranspose.md new file mode 100644 index 00000000000..f20b1e406e8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/Conv3DTranspose.md @@ -0,0 +1,217 @@ +description: Transposed 3D convolution layer (sometimes called 3D Deconvolution). + +
+ + + + +
+ +# tf.compat.v1.layers.Conv3DTranspose + + + + + + + + + +Transposed 3D convolution layer (sometimes called 3D Deconvolution). + +Inherits From: [`Conv3DTranspose`](../../../../tf/keras/layers/Conv3DTranspose.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 3 integers, specifying the +depth, height and width of the 3D convolution window. +Can be a single integer to specify the same value for all spatial +dimensions. +
+`strides` + +An integer or tuple/list of 3 integers, specifying the strides +of the convolution along the depth, height and width. +Can be a single integer to specify the same value for all spatial +dimensions. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, depth, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, depth, height, width)`. +
+`activation` + +Activation function. Set it to `None` to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +An initializer for the convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If `None`, the default +initializer will be used. +
+`kernel_regularizer` + +Optional regularizer for the convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`kernel_constraint` + +Optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/Dense.md b/site/en/api_docs/python/tf/compat/v1/layers/Dense.md new file mode 100644 index 00000000000..504ab7ff26a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/Dense.md @@ -0,0 +1,208 @@ +description: Densely-connected layer class. + +
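A minimal usage sketch for a small fully-connected head (shapes and sizes are illustrative; graph mode is assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # legacy layers are graph-mode APIs

# Hypothetical flattened inputs with 784 features (e.g. 28*28 images).
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 784))

hidden = tf.compat.v1.layers.Dense(units=256, activation=tf.nn.relu)(x)
logits = tf.compat.v1.layers.Dense(units=10)(hidden)  # linear output layer
```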
+ + + + +
+ +# tf.compat.v1.layers.Dense + + + + + + + + + +Densely-connected layer class. + +Inherits From: [`Dense`](../../../../tf/keras/layers/Dense.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + +This layer implements the operation: +`outputs = activation(inputs * kernel + bias)` +Where `activation` is the activation function passed as the `activation` +argument (if not `None`), `kernel` is a weights matrix created by the layer, +and `bias` is a bias vector created by the layer +(only if `use_bias` is `True`). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units`
+
+Integer, dimensionality of the output space.
+
+`activation` + +Activation function (callable). Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +Initializer function for the weight matrix. +If `None` (default), weights are initialized using the default +initializer used by tf.compat.v1.get_variable. +
+`bias_initializer` + +Initializer function for the bias. +
+`kernel_regularizer` + +Regularizer function for the weight matrix. +
+`bias_regularizer` + +Regularizer function for the bias. +
+`activity_regularizer` + +Regularizer function for the output. +
+`kernel_constraint` + +An optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +An optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +String, the name of the layer. Layers with the same name will +share weights, but to avoid mistakes we require reuse=True in such cases. +
+`_reuse` + +Boolean, whether to reuse the weights of a previous layer +by the same name. +
+ + + +#### Properties: + + +* `units`: Python integer, dimensionality of the output space. +* `activation`: Activation function (callable). +* `use_bias`: Boolean, whether the layer uses a bias. +* `kernel_initializer`: Initializer instance (or name) for the kernel matrix. +* `bias_initializer`: Initializer instance (or name) for the bias. +* `kernel_regularizer`: Regularizer instance for the kernel matrix (callable) +* `bias_regularizer`: Regularizer instance for the bias (callable). +* `activity_regularizer`: Regularizer instance for the output (callable) +* `kernel_constraint`: Constraint function for the kernel matrix. +* `bias_constraint`: Constraint function for the bias. +* `kernel`: Weight matrix (TensorFlow variable or tensor). +* `bias`: Bias vector, if applicable (TensorFlow variable or tensor). + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/Dropout.md b/site/en/api_docs/python/tf/compat/v1/layers/Dropout.md new file mode 100644 index 00000000000..ce1d53a5395 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/Dropout.md @@ -0,0 +1,119 @@ +description: Applies Dropout to the input. + +
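A minimal usage sketch (illustrative shapes; graph mode assumed). The `training` argument passed when calling the layer decides whether units are actually dropped:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # legacy layers are graph-mode APIs

x = tf.compat.v1.placeholder(tf.float32, shape=(None, 128))

# Feed True during training steps; at inference the layer is a pass-through
# because kept units are already rescaled by 1 / (1 - rate).
is_training = tf.compat.v1.placeholder_with_default(False, shape=())
dropped = tf.compat.v1.layers.Dropout(rate=0.4)(x, training=is_training)
```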
+ + + + +
+ +# tf.compat.v1.layers.Dropout + + + + + + + + + +Applies Dropout to the input. + +Inherits From: [`Dropout`](../../../../tf/keras/layers/Dropout.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + +Dropout consists in randomly setting a fraction `rate` of input units to 0 +at each update during training time, which helps prevent overfitting. +The units that are kept are scaled by `1 / (1 - rate)`, so that their +sum is unchanged at training time and inference time. + + + + + + + + + + + + + + + + + + + +
+`rate` + +The dropout rate, between 0 and 1. E.g. `rate=0.1` would drop out +10% of input units. +
+`noise_shape` + +1D tensor of type `int32` representing the shape of the +binary dropout mask that will be multiplied with the input. +For instance, if your inputs have shape +`(batch_size, timesteps, features)`, and you want the dropout mask +to be the same for all timesteps, you can use +`noise_shape=[batch_size, 1, features]`. +
+`seed`
+
+A Python integer. Used to create random seeds. See
+tf.compat.v1.set_random_seed
+for behavior.
+
+`name` + +The name of the layer (string). +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/Flatten.md b/site/en/api_docs/python/tf/compat/v1/layers/Flatten.md new file mode 100644 index 00000000000..43f983fbc2f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/Flatten.md @@ -0,0 +1,104 @@ +description: Flattens an input tensor while preserving the batch axis (axis 0). + +
+ + + + +
+ +# tf.compat.v1.layers.Flatten + + + + + + + + + +Flattens an input tensor while preserving the batch axis (axis 0). + +Inherits From: [`Flatten`](../../../../tf/keras/layers/Flatten.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + + + + + + + + + + + +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, ..., channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, ...)`. +
+ + + +#### Examples: + + + +``` + x = tf.compat.v1.placeholder(shape=(None, 4, 4), dtype='float32') + y = Flatten()(x) + # now `y` has shape `(None, 16)` + + x = tf.compat.v1.placeholder(shape=(None, 3, None), dtype='float32') + y = Flatten()(x) + # now `y` has shape `(None, None)` +``` + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/Layer.md b/site/en/api_docs/python/tf/compat/v1/layers/Layer.md new file mode 100644 index 00000000000..fb7aeacade7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/Layer.md @@ -0,0 +1,126 @@ +description: Base layer class. + +
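A minimal sketch of a custom subclass (the `Scale` layer below is hypothetical and purely illustrative; graph mode is assumed, and `tf.keras.layers.Layer` is the recommended base for new code):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # legacy layers are graph-mode APIs


class Scale(tf.compat.v1.layers.Layer):
  """Hypothetical layer that multiplies its input by one trainable scalar."""

  def build(self, input_shape):
    # Variables created in build() are tracked by the layer.
    self.alpha = self.add_weight(
        name='alpha', shape=(), initializer=tf.compat.v1.ones_initializer())
    super(Scale, self).build(input_shape)

  def call(self, inputs):
    return inputs * self.alpha


x = tf.compat.v1.placeholder(tf.float32, shape=(None, 4))
y = Scale()(x)  # build() runs on the first call, creating the variable
```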
+ + + + +
+ +# tf.compat.v1.layers.Layer + + + + + + + + + +Base layer class. + +Inherits From: [`Layer`](../../../../tf/keras/layers/Layer.md) + + + + + + + +It is considered legacy, and we recommend the use of tf.keras.layers.Layer +instead. + + + + + + + + + + + + + + + + +
+`trainable` + +Boolean, whether the layer's variables should be trainable. +
+`name` + +String name of the layer. +
+`dtype` + +Default dtype of the layer's weights (default of `None` means use the +type of the first input). +
+ + +Read-only properties: + name: The name of the layer (string). + dtype: Default dtype of the layer's weights (default of `None` means use the + type of the first input). + trainable_variables: List of trainable variables. + non_trainable_variables: List of non-trainable variables. + variables: List of all variables of this layer, trainable and + non-trainable. + updates: List of update ops of this layer. + losses: List of losses added by this layer. + trainable_weights: List of variables to be included in backprop. + non_trainable_weights: List of variables that should not be + included in backprop. + weights: The concatenation of the lists trainable_weights and + non_trainable_weights (in this order). + +#### Mutable properties: + + +* `trainable`: Whether the layer should be trained (boolean). +* `input_spec`: Optional (list of) `InputSpec` object(s) specifying the + constraints on inputs that can be accepted by the layer. + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/MaxPooling1D.md b/site/en/api_docs/python/tf/compat/v1/layers/MaxPooling1D.md new file mode 100644 index 00000000000..1798a01c9c1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/MaxPooling1D.md @@ -0,0 +1,122 @@ +description: Max Pooling layer for 1D inputs. + +
+ + + + +
+ +# tf.compat.v1.layers.MaxPooling1D + + + + + + + + + +Max Pooling layer for 1D inputs. + +Inherits From: [`MaxPool1D`](../../../../tf/keras/layers/MaxPool1D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +An integer or tuple/list of a single integer, +representing the size of the pooling window. +
+`strides` + +An integer or tuple/list of a single integer, specifying the +strides of the pooling operation. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, length, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, length)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/MaxPooling2D.md b/site/en/api_docs/python/tf/compat/v1/layers/MaxPooling2D.md new file mode 100644 index 00000000000..7651710071a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/MaxPooling2D.md @@ -0,0 +1,126 @@ +description: Max pooling layer for 2D inputs (e.g. images). + +
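A minimal usage sketch (shapes are illustrative; graph mode is assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # legacy layers are graph-mode APIs

# Hypothetical 28x28 feature maps with 16 channels.
features = tf.compat.v1.placeholder(tf.float32, shape=(None, 28, 28, 16))

pool = tf.compat.v1.layers.MaxPooling2D(pool_size=2, strides=2)
downsampled = pool(features)  # shape: (None, 14, 14, 16)
```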
+ + + + +
+ +# tf.compat.v1.layers.MaxPooling2D + + + + + + + + + +Max pooling layer for 2D inputs (e.g. images). + +Inherits From: [`MaxPool2D`](../../../../tf/keras/layers/MaxPool2D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +An integer or tuple/list of 2 integers: (pool_height, pool_width) +specifying the size of the pooling window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the pooling operation. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string. The ordering of the dimensions in the inputs. +`channels_last` (default) and `channels_first` are supported. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/MaxPooling3D.md b/site/en/api_docs/python/tf/compat/v1/layers/MaxPooling3D.md new file mode 100644 index 00000000000..358ca872c8b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/MaxPooling3D.md @@ -0,0 +1,128 @@ +description: Max pooling layer for 3D inputs (e.g. volumes). + +
+ + + + +
+ +# tf.compat.v1.layers.MaxPooling3D + + + + + + + + + +Max pooling layer for 3D inputs (e.g. volumes). + +Inherits From: [`MaxPool3D`](../../../../tf/keras/layers/MaxPool3D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +An integer or tuple/list of 3 integers: +(pool_depth, pool_height, pool_width) +specifying the size of the pooling window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 3 integers, +specifying the strides of the pooling operation. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string. The ordering of the dimensions in the inputs. +`channels_last` (default) and `channels_first` are supported. +`channels_last` corresponds to inputs with shape +`(batch, depth, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, depth, height, width)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/SeparableConv1D.md b/site/en/api_docs/python/tf/compat/v1/layers/SeparableConv1D.md new file mode 100644 index 00000000000..53abfd62d8a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/SeparableConv1D.md @@ -0,0 +1,263 @@ +description: Depthwise separable 1D convolution. + +
+ + + + +
+ +# tf.compat.v1.layers.SeparableConv1D + + + + + + + + + +Depthwise separable 1D convolution. + +Inherits From: [`SeparableConv1D`](../../../../tf/keras/layers/SeparableConv1D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + +This layer performs a depthwise convolution that acts separately on +channels, followed by a pointwise convolution that mixes channels. +If `use_bias` is True and a bias initializer is provided, +it adds a bias vector to the output. +It then optionally applies an activation function to produce the final output. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +A single integer specifying the spatial +dimensions of the filters. +
+`strides` + +A single integer specifying the strides +of the convolution. +Specifying any `stride` value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, length, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, length)`. +
+`dilation_rate` + +A single integer, specifying +the dilation rate to use for dilated convolution. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`depth_multiplier` + +The number of depthwise convolution output channels for +each input channel. The total number of depthwise convolution output +channels will be equal to `num_filters_in * depth_multiplier`. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`depthwise_initializer` + +An initializer for the depthwise convolution kernel. +
+`pointwise_initializer` + +An initializer for the pointwise convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`depthwise_regularizer` + +Optional regularizer for the depthwise +convolution kernel. +
+`pointwise_regularizer` + +Optional regularizer for the pointwise +convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`depthwise_constraint` + +Optional projection function to be applied to the +depthwise kernel after being updated by an `Optimizer` (e.g. used for +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`pointwise_constraint` + +Optional projection function to be applied to the +pointwise kernel after being updated by an `Optimizer`. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/SeparableConv2D.md b/site/en/api_docs/python/tf/compat/v1/layers/SeparableConv2D.md new file mode 100644 index 00000000000..e693838db4b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/SeparableConv2D.md @@ -0,0 +1,267 @@ +description: Depthwise separable 2D convolution. + +
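A minimal usage sketch illustrating how `depth_multiplier` feeds the pointwise stage (shapes are illustrative; graph mode is assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # legacy layers are graph-mode APIs

# Hypothetical 32x32 RGB inputs.
images = tf.compat.v1.placeholder(tf.float32, shape=(None, 32, 32, 3))

sep_conv = tf.compat.v1.layers.SeparableConv2D(
    filters=32, kernel_size=3, depth_multiplier=2, padding='same',
    activation=tf.nn.relu)
# The depthwise stage produces 3 * depth_multiplier = 6 channels, which the
# pointwise 1x1 convolution then mixes into the final 32 filters.
features = sep_conv(images)  # shape: (None, 32, 32, 32)
```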
+ + + + +
+ +# tf.compat.v1.layers.SeparableConv2D + + + + + + + + + +Depthwise separable 2D convolution. + +Inherits From: [`SeparableConv2D`](../../../../tf/keras/layers/SeparableConv2D.md), [`Layer`](../../../../tf/compat/v1/layers/Layer.md) + + + + + + + +This layer performs a depthwise convolution that acts separately on +channels, followed by a pointwise convolution that mixes channels. +If `use_bias` is True and a bias initializer is provided, +it adds a bias vector to the output. +It then optionally applies an activation function to produce the final output. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +A tuple or list of 2 integers specifying the spatial +dimensions of the filters. Can be a single integer to specify the same +value for all spatial dimensions. +
+`strides` + +A tuple or list of 2 positive integers specifying the strides +of the convolution. Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any `stride` value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+`dilation_rate` + +An integer or tuple/list of 2 integers, specifying +the dilation rate to use for dilated convolution. +Can be a single integer to specify the same value for +all spatial dimensions. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`depth_multiplier` + +The number of depthwise convolution output channels for +each input channel. The total number of depthwise convolution output +channels will be equal to `num_filters_in * depth_multiplier`. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`depthwise_initializer` + +An initializer for the depthwise convolution kernel. +
+`pointwise_initializer` + +An initializer for the pointwise convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`depthwise_regularizer` + +Optional regularizer for the depthwise +convolution kernel. +
+`pointwise_regularizer` + +Optional regularizer for the pointwise +convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`depthwise_constraint` + +Optional projection function to be applied to the +depthwise kernel after being updated by an `Optimizer` (e.g. used for +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`pointwise_constraint` + +Optional projection function to be applied to the +pointwise kernel after being updated by an `Optimizer`. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`scope_name` + + +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/average_pooling1d.md b/site/en/api_docs/python/tf/compat/v1/layers/average_pooling1d.md new file mode 100644 index 00000000000..06182a6cbed --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/average_pooling1d.md @@ -0,0 +1,127 @@ +description: Average Pooling layer for 1D inputs. (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.average_pooling1d + + + + + + + + + +Average Pooling layer for 1D inputs. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use keras.layers.AveragePooling1D instead. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +The tensor over which to pool. Must have rank 3. +
+`pool_size` + +An integer or tuple/list of a single integer, +representing the size of the pooling window. +
+`strides` + +An integer or tuple/list of a single integer, specifying the +strides of the pooling operation. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, length, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, length)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + +
+The output tensor, of rank 3. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/average_pooling2d.md b/site/en/api_docs/python/tf/compat/v1/layers/average_pooling2d.md new file mode 100644 index 00000000000..999069c3571 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/average_pooling2d.md @@ -0,0 +1,131 @@ +description: Average pooling layer for 2D inputs (e.g. images). (deprecated) + +
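A minimal usage sketch of the functional interface (shapes are illustrative; graph mode is assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # the functional layers require graph mode

# Hypothetical 28x28 feature maps with 8 channels.
features = tf.compat.v1.placeholder(tf.float32, shape=(None, 28, 28, 8))

pooled = tf.compat.v1.layers.average_pooling2d(
    features, pool_size=2, strides=2, padding='valid')
# pooled has shape (None, 14, 14, 8)
```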
+ + +
+ +# tf.compat.v1.layers.average_pooling2d + + + + + + + + + +Average pooling layer for 2D inputs (e.g. images). (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use keras.layers.AveragePooling2D instead. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +The tensor over which to pool. Must have rank 4. +
+`pool_size` + +An integer or tuple/list of 2 integers: (pool_height, pool_width) +specifying the size of the pooling window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the pooling operation. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string. The ordering of the dimensions in the inputs. +`channels_last` (default) and `channels_first` are supported. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/average_pooling3d.md b/site/en/api_docs/python/tf/compat/v1/layers/average_pooling3d.md new file mode 100644 index 00000000000..921472d15d5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/average_pooling3d.md @@ -0,0 +1,133 @@ +description: Average pooling layer for 3D inputs (e.g. volumes). (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.average_pooling3d + + + + + + + + + +Average pooling layer for 3D inputs (e.g. volumes). (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use keras.layers.AveragePooling3D instead. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +The tensor over which to pool. Must have rank 5. +
+`pool_size` + +An integer or tuple/list of 3 integers: +(pool_depth, pool_height, pool_width) +specifying the size of the pooling window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 3 integers, +specifying the strides of the pooling operation. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string. The ordering of the dimensions in the inputs. +`channels_last` (default) and `channels_first` are supported. +`channels_last` corresponds to inputs with shape +`(batch, depth, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, depth, height, width)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/batch_normalization.md b/site/en/api_docs/python/tf/compat/v1/layers/batch_normalization.md new file mode 100644 index 00000000000..e82ca877f09 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/batch_normalization.md @@ -0,0 +1,328 @@ +description: Functional interface for the batch normalization layer from_config(Ioffe et al., 2015). (deprecated) + +
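A minimal usage sketch (illustrative shapes; graph mode assumed). The `training` argument and the `UPDATE_OPS` handling are the points most worth getting right; see the note further down this page:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # the functional layers require graph mode

x = tf.compat.v1.placeholder(tf.float32, shape=(None, 64))
# Feed True during training steps and False (the default) at inference.
is_training = tf.compat.v1.placeholder_with_default(False, shape=())

x_norm = tf.compat.v1.layers.batch_normalization(x, training=is_training)

# The moving mean/variance updates live in the UPDATE_OPS collection and must
# be run alongside the training step.
update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
```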
+ + +
+ +# tf.compat.v1.layers.batch_normalization + + + + + + + + + +Functional interface for the batch normalization layer from_config(Ioffe et al., 2015). (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use keras.layers.BatchNormalization instead. In particular, tf.control_dependencies(tf.GraphKeys.UPDATE_OPS) should not be used (consult the tf.keras.layers.BatchNormalization documentation). + +Note: when training, the moving_mean and moving_variance need to be updated. +By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they +need to be executed alongside the `train_op`. Also, be sure to add any +batch_normalization ops before getting the update_ops collection. Otherwise, +update_ops will be empty, and training/inference will not work properly. For +example: + +```python + x_norm = tf.compat.v1.layers.batch_normalization(x, training=training) + + # ... + + update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS) + train_op = optimizer.minimize(loss) + train_op = tf.group([train_op, update_ops]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Tensor input. +
+`axis` + +An `int`, the axis that should be normalized (typically the features +axis). For instance, after a `Convolution2D` layer with +`data_format="channels_first"`, set `axis=1` in `BatchNormalization`. +
+`momentum` + +Momentum for the moving average. +
+`epsilon` + +Small float added to variance to avoid dividing by zero. +
+`center` + +If True, add offset of `beta` to normalized tensor. If False, `beta` +is ignored. +
+`scale` + +If True, multiply by `gamma`. If False, `gamma` is not used. When the +next layer is linear (also e.g. `nn.relu`), this can be disabled since the +scaling can be done by the next layer. +
+`beta_initializer` + +Initializer for the beta weight. +
+`gamma_initializer` + +Initializer for the gamma weight. +
+`moving_mean_initializer` + +Initializer for the moving mean. +
+`moving_variance_initializer` + +Initializer for the moving variance. +
+`beta_regularizer` + +Optional regularizer for the beta weight. +
+`gamma_regularizer` + +Optional regularizer for the gamma weight. +
+`beta_constraint` + +An optional projection function to be applied to the `beta` +weight after being updated by an `Optimizer` (e.g. used to implement norm +constraints or value constraints for layer weights). The function must +take as input the unprojected variable and must return the projected +variable (which must have the same shape). Constraints are not safe to use +when doing asynchronous distributed training. +
+`gamma_constraint` + +An optional projection function to be applied to the +`gamma` weight after being updated by an `Optimizer`. +
+`training` + +Either a Python boolean, or a TensorFlow boolean scalar tensor +(e.g. a placeholder). Whether to return the output in training mode +(normalized with statistics of the current batch) or in inference mode +(normalized with moving statistics). **NOTE**: make sure to set this +parameter correctly, or else your training/inference will not work +properly. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +String, the name of the layer. +
+`reuse` + +Boolean, whether to reuse the weights of a previous layer by the same +name. +
+`renorm` + +Whether to use Batch Renormalization (Ioffe, 2017). This adds extra +variables during training. The inference is the same for either value of +this parameter. +
+`renorm_clipping` + +A dictionary that may map keys 'rmax', 'rmin', 'dmax' to +scalar `Tensors` used to clip the renorm correction. The correction `(r, +d)` is used as `corrected_value = normalized_value * r + d`, with `r` +clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, +dmax are set to inf, 0, inf, respectively. +
+`renorm_momentum` + +Momentum used to update the moving means and standard +deviations with renorm. Unlike `momentum`, this affects training and +should be neither too small (which would add noise) nor too large (which +would give stale estimates). Note that `momentum` is still applied to get +the means and variances for inference. +
+`fused` + +if `None` or `True`, use a faster, fused implementation if possible. +If `False`, use the system recommended implementation. +
+`virtual_batch_size` + +An `int`. By default, `virtual_batch_size` is `None`, +which means batch normalization is performed across the whole batch. When +`virtual_batch_size` is not `None`, instead perform "Ghost Batch +Normalization", which creates virtual sub-batches which are each +normalized separately (with shared gamma, beta, and moving statistics). +Must divide the actual batch size during execution. +
+`adjustment` + +A function taking the `Tensor` containing the (dynamic) shape of +the input tensor and returning a pair (scale, bias) to apply to the +normalized values (before gamma and beta), only during training. For +example, if axis==-1, +`adjustment = lambda shape: ( +tf.random.uniform(shape[-1:], 0.93, 1.07), +tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized +value by up to 7% up or down, then shift the result by up to 0.1 +(with independent scaling and bias for each feature but shared +across all examples), and finally apply gamma and/or beta. If +`None`, no adjustment is applied. Cannot be specified if +virtual_batch_size is specified. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
+ + + +#### References: + +Batch Normalization - Accelerating Deep Network Training by Reducing +Internal Covariate Shift: + [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html) + ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf)) +Batch Renormalization - Towards Reducing Minibatch Dependence in +Batch-Normalized Models: + [Ioffe, + 2017](http://papers.nips.cc/paper/6790-batch-renormalization-towards-reducing-minibatch-dependence-in-batch-normalized-models) + ([pdf](http://papers.nips.cc/paper/6790-batch-renormalization-towards-reducing-minibatch-dependence-in-batch-normalized-models.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/layers/conv1d.md b/site/en/api_docs/python/tf/compat/v1/layers/conv1d.md new file mode 100644 index 00000000000..51d19c9b24e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/conv1d.md @@ -0,0 +1,243 @@ +description: Functional interface for 1D convolution layer (e.g. temporal convolution). (deprecated) + +
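A minimal usage sketch over hypothetical 1D sequences (shapes are illustrative; graph mode is assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # the functional layers require graph mode

# Hypothetical sequences of length 100 with 8 channels per step.
signals = tf.compat.v1.placeholder(tf.float32, shape=(None, 100, 8))

features = tf.compat.v1.layers.conv1d(
    signals, filters=16, kernel_size=5, padding='same',
    activation=tf.nn.relu)
# features has shape (None, 100, 16)
```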
+ + +
+ +# tf.compat.v1.layers.conv1d + + + + + + + + + +Functional interface for 1D convolution layer (e.g. temporal convolution). (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.keras.layers.Conv1D instead. + +This layer creates a convolution kernel that is convolved +(actually cross-correlated) with the layer input to produce a tensor of +outputs. If `use_bias` is True (and a `bias_initializer` is provided), +a bias vector is created and added to the outputs. Finally, if +`activation` is not `None`, it is applied to the outputs as well. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Tensor input. +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of a single integer, specifying the +length of the 1D convolution window. +
+`strides` + +An integer or tuple/list of a single integer, +specifying the stride length of the convolution. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, length, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, length)`. +
+`dilation_rate` + +An integer or tuple/list of a single integer, specifying +the dilation rate to use for dilated convolution. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any `strides` value != 1. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +An initializer for the convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`kernel_regularizer` + +Optional regularizer for the convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`kernel_constraint` + +Optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+`reuse` + +Boolean, whether to reuse the weights of a previous layer +by the same name. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/conv2d.md b/site/en/api_docs/python/tf/compat/v1/layers/conv2d.md new file mode 100644 index 00000000000..510b9d90eda --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/conv2d.md @@ -0,0 +1,249 @@ +description: Functional interface for the 2D convolution layer. (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.conv2d + + + + + + + + + +Functional interface for the 2D convolution layer. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.keras.layers.Conv2D instead. + +This layer creates a convolution kernel that is convolved +(actually cross-correlated) with the layer input to produce a tensor of +outputs. If `use_bias` is True (and a `bias_initializer` is provided), +a bias vector is created and added to the outputs. Finally, if +`activation` is not `None`, it is applied to the outputs as well. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Tensor input. +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 2 integers, specifying the +height and width of the 2D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the convolution along the height and width. +Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+`dilation_rate` + +An integer or tuple/list of 2 integers, specifying +the dilation rate to use for dilated convolution. +Can be a single integer to specify the same value for +all spatial dimensions. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +An initializer for the convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`kernel_regularizer` + +Optional regularizer for the convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`kernel_constraint` + +Optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+`reuse` + +Boolean, whether to reuse the weights of a previous layer +by the same name. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/conv2d_transpose.md b/site/en/api_docs/python/tf/compat/v1/layers/conv2d_transpose.md new file mode 100644 index 00000000000..25f356b274c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/conv2d_transpose.md @@ -0,0 +1,234 @@ +description: Functional interface for transposed 2D convolution layer. (deprecated) + +
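A minimal usage sketch of the functional interface (shapes are illustrative; graph mode is assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # the functional layers require graph mode

# Hypothetical 7x7 feature maps to be upsampled by a factor of 2.
features = tf.compat.v1.placeholder(tf.float32, shape=(None, 7, 7, 64))

upsampled = tf.compat.v1.layers.conv2d_transpose(
    features, filters=32, kernel_size=3, strides=2, padding='same')
# upsampled has shape (None, 14, 14, 32)
```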
+ + +
+ +# tf.compat.v1.layers.conv2d_transpose + + + + + + + + + +Functional interface for transposed 2D convolution layer. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.keras.layers.Conv2DTranspose instead. + +The need for transposed convolutions generally arises +from the desire to use a transformation going in the opposite direction +of a normal convolution, i.e., from something that has the shape of the +output of some convolution to something that has the shape of its input +while maintaining a connectivity pattern that is compatible with +said convolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Input tensor. +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +A tuple or list of 2 positive integers specifying the spatial +dimensions of the filters. Can be a single integer to specify the same +value for all spatial dimensions. +
+`strides` + +A tuple or list of 2 positive integers specifying the strides +of the convolution. Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding` + +one of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+`activation` + +Activation function. Set it to `None` to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +An initializer for the convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If `None`, the default +initializer will be used. +
+`kernel_regularizer` + +Optional regularizer for the convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`kernel_constraint` + +Optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+`reuse` + +Boolean, whether to reuse the weights of a previous layer +by the same name. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
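+
+As a brief illustrative sketch (input shape, filter count, and layer name are
+arbitrary; eager execution must be disabled first, per the `ValueError` above), a
+stride-2 transposed convolution with `"same"` padding roughly doubles the spatial
+dimensions:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # eager execution raises ValueError here
+
+# NHWC feature map, e.g. the output of an encoder (shape is illustrative).
+x = tf.compat.v1.placeholder(tf.float32, shape=(None, 8, 8, 64))
+
+y = tf.compat.v1.layers.conv2d_transpose(
+    x, filters=32, kernel_size=3, strides=2, padding="same",
+    activation=tf.nn.relu, name="deconv1")
+
+print(y.shape)  # (None, 16, 16, 32)
+```
+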
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/conv3d.md b/site/en/api_docs/python/tf/compat/v1/layers/conv3d.md new file mode 100644 index 00000000000..7a8a68b77da --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/conv3d.md @@ -0,0 +1,251 @@ +description: Functional interface for the 3D convolution layer. (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.conv3d + + + + + + + + + +Functional interface for the 3D convolution layer. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.keras.layers.Conv3D instead. + +This layer creates a convolution kernel that is convolved +(actually cross-correlated) with the layer input to produce a tensor of +outputs. If `use_bias` is True (and a `bias_initializer` is provided), +a bias vector is created and added to the outputs. Finally, if +`activation` is not `None`, it is applied to the outputs as well. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Tensor input. +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 3 integers, specifying the +depth, height and width of the 3D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 3 integers, +specifying the strides of the convolution along the depth, +height and width. +Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, depth, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, depth, height, width)`. +
+`dilation_rate` + +An integer or tuple/list of 3 integers, specifying +the dilation rate to use for dilated convolution. +Can be a single integer to specify the same value for +all spatial dimensions. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +An initializer for the convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`kernel_regularizer` + +Optional regularizer for the convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`kernel_constraint` + +Optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+`reuse` + +Boolean, whether to reuse the weights of a previous layer +by the same name. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
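+
+A short illustrative call in graph mode (the rank-5 input shape and the
+hyperparameters are made up for the example):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # eager execution raises ValueError here
+
+# Rank-5 input: (batch, depth, height, width, channels).
+x = tf.compat.v1.placeholder(tf.float32, shape=(None, 16, 32, 32, 1))
+
+y = tf.compat.v1.layers.conv3d(
+    x, filters=8, kernel_size=(3, 3, 3), padding="same",
+    activation=tf.nn.relu, name="conv3d_1")
+
+print(y.shape)  # (None, 16, 32, 32, 8)
+```
+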
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/conv3d_transpose.md b/site/en/api_docs/python/tf/compat/v1/layers/conv3d_transpose.md new file mode 100644 index 00000000000..b2a219d4e90 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/conv3d_transpose.md @@ -0,0 +1,228 @@ +description: Functional interface for transposed 3D convolution layer. (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.conv3d_transpose + + + + + + + + + +Functional interface for transposed 3D convolution layer. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.keras.layers.Conv3DTranspose instead. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Input tensor. +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +A tuple or list of 3 positive integers specifying the spatial +dimensions of the filters. Can be a single integer to specify the same +value for all spatial dimensions. +
+`strides` + +A tuple or list of 3 positive integers specifying the strides +of the convolution. Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding` + +one of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, depth, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, depth, height, width)`. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +An initializer for the convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`kernel_regularizer` + +Optional regularizer for the convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`kernel_constraint` + +Optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+`reuse` + +Boolean, whether to reuse the weights of a previous layer +by the same name. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
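+
+An illustrative graph-mode sketch; the encoder-like input shape below is
+arbitrary:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # eager execution raises ValueError here
+
+x = tf.compat.v1.placeholder(tf.float32, shape=(None, 4, 8, 8, 32))
+
+# strides=2 with "same" padding roughly doubles depth, height and width.
+y = tf.compat.v1.layers.conv3d_transpose(
+    x, filters=16, kernel_size=3, strides=2, padding="same", name="deconv3d_1")
+
+print(y.shape)  # (None, 8, 16, 16, 16)
+```
+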
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/dense.md b/site/en/api_docs/python/tf/compat/v1/layers/dense.md new file mode 100644 index 00000000000..ff4471e3163 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/dense.md @@ -0,0 +1,197 @@ +description: Functional interface for the densely-connected layer. (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.dense + + + + + + + + + +Functional interface for the densely-connected layer. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use keras.layers.Dense instead. + +This layer implements the operation: +`outputs = activation(inputs * kernel + bias)` +where `activation` is the activation function passed as the `activation` +argument (if not `None`), `kernel` is a weights matrix created by the layer, +and `bias` is a bias vector created by the layer +(only if `use_bias` is `True`). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Tensor input. +
+`units` + +Integer or Long, dimensionality of the output space. +
+`activation` + +Activation function (callable). Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`kernel_initializer` + +Initializer function for the weight matrix. +If `None` (default), weights are initialized using the default +initializer used by tf.compat.v1.get_variable. +
+`bias_initializer` + +Initializer function for the bias. +
+`kernel_regularizer` + +Regularizer function for the weight matrix. +
+`bias_regularizer` + +Regularizer function for the bias. +
+`activity_regularizer` + +Regularizer function for the output. +
+`kernel_constraint` + +An optional projection function to be applied to the +kernel after being updated by an `Optimizer` (e.g. used to implement +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`bias_constraint` + +An optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +String, the name of the layer. +
+`reuse` + +Boolean, whether to reuse the weights of a previous layer +by the same name. +
+ + + + + + + + + + + +
+Output tensor with the same shape as `inputs`, except that the last dimension
+is of size `units`.
+
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
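+
+A small graph-mode sketch of the `outputs = activation(inputs * kernel + bias)`
+behavior (layer widths and names are illustrative):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # eager execution raises ValueError here
+
+x = tf.compat.v1.placeholder(tf.float32, shape=(None, 128))
+
+# relu(x @ W + b), where the kernel W has shape (128, 64).
+hidden = tf.compat.v1.layers.dense(x, units=64, activation=tf.nn.relu, name="fc1")
+logits = tf.compat.v1.layers.dense(hidden, units=10, name="fc2")  # linear output
+
+print(logits.shape)  # (None, 10)
+```
+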
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/dropout.md b/site/en/api_docs/python/tf/compat/v1/layers/dropout.md new file mode 100644 index 00000000000..a15a5acb446 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/dropout.md @@ -0,0 +1,134 @@ +description: Applies Dropout to the input. (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.dropout + + + + + + + + + +Applies Dropout to the input. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use keras.layers.dropout instead. + +Dropout consists in randomly setting a fraction `rate` of input units to 0 +at each update during training time, which helps prevent overfitting. +The units that are kept are scaled by `1 / (1 - rate)`, so that their +sum is unchanged at training time and inference time. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Tensor input. +
+`rate`
+
+The dropout rate, between 0 and 1. E.g. `rate=0.1` would drop out
+10% of input units.
+
+`noise_shape` + +1D tensor of type `int32` representing the shape of the +binary dropout mask that will be multiplied with the input. +For instance, if your inputs have shape +`(batch_size, timesteps, features)`, and you want the dropout mask +to be the same for all timesteps, you can use +`noise_shape=[batch_size, 1, features]`. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed +for behavior. +
+`training` + +Either a Python boolean, or a TensorFlow boolean scalar tensor +(e.g. a placeholder). Whether to return the output in training mode +(apply dropout) or in inference mode (return the input untouched). +
+`name` + +The name of the layer (string). +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
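+
+An illustrative graph-mode sketch showing how the `training` argument switches
+between applying dropout and passing the input through unchanged (the
+placeholder setup below is just one common pattern):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # eager execution raises ValueError here
+
+x = tf.compat.v1.placeholder(tf.float32, shape=(None, 128))
+is_training = tf.compat.v1.placeholder_with_default(False, shape=())  # inference by default
+
+# Drops 40% of units when `is_training` is True; returns `x` untouched otherwise.
+y = tf.compat.v1.layers.dropout(x, rate=0.4, training=is_training, name="drop1")
+```
+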
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/experimental.md b/site/en/api_docs/python/tf/compat/v1/layers/experimental.md new file mode 100644 index 00000000000..a23cd8041e3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/experimental.md @@ -0,0 +1,27 @@ +description: Public API for tf.layers.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.layers.experimental + + + + + + + + + +Public API for tf.layers.experimental namespace. + + + +## Functions + +[`keras_style_scope(...)`](../../../../tf/compat/v1/layers/experimental/keras_style_scope.md): Use Keras-style variable management. + +[`set_keras_style(...)`](../../../../tf/compat/v1/layers/experimental/set_keras_style.md): Use Keras-style variable management. + diff --git a/site/en/api_docs/python/tf/compat/v1/layers/experimental/keras_style_scope.md b/site/en/api_docs/python/tf/compat/v1/layers/experimental/keras_style_scope.md new file mode 100644 index 00000000000..247e383a4f5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/experimental/keras_style_scope.md @@ -0,0 +1,90 @@ +description: Use Keras-style variable management. + +
+ + +
+ +# tf.compat.v1.layers.experimental.keras_style_scope + + + + + + + + + +Use Keras-style variable management. + + + + + + + +All tf.layers and tf RNN cells created in this scope use Keras-style +variable management. Creating such layers with a scope= argument is +disallowed, and reuse=True is disallowed. + +The purpose of this scope is to allow users of existing layers to +slowly transition to a Keras layers API without breaking existing +functionality. + +One example of this is when using TensorFlow's RNN classes with Keras +Models or Networks. Because Keras models do not properly set variable +scopes, users of RNNs may either accidentally share scopes between two +different models, or get errors about variables that already exist. + +#### Example: + + + +```python +class RNNModel(tf.keras.Model): + + def __init__(self, name): + super(RNNModel, self).__init__(name=name) + self.rnn = tf.compat.v1.nn.rnn_cell.MultiRNNCell( + [tf.compat.v1.nn.rnn_cell.LSTMCell(64) for _ in range(2)]) + + def call(self, input, state): + return self.rnn(input, state) + +model_1 = RNNModel("model_1") +model_2 = RNNModel("model_2") + +# OK +output_1, next_state_1 = model_1(input, state) +# Raises an error about trying to create an already existing variable. +output_2, next_state_2 = model_2(input, state) +``` + +The solution is to wrap the model construction and execution in a keras-style +scope: + +```python +with keras_style_scope(): + model_1 = RNNModel("model_1") + model_2 = RNNModel("model_2") + + # model_1 and model_2 are guaranteed to create their own variables. + output_1, next_state_1 = model_1(input, state) + output_2, next_state_2 = model_2(input, state) + + assert len(model_1.weights) > 0 + assert len(model_2.weights) > 0 + assert(model_1.weights != model_2.weights) +``` + +#### Yields: + +A keras layer style scope. diff --git a/site/en/api_docs/python/tf/compat/v1/layers/experimental/set_keras_style.md b/site/en/api_docs/python/tf/compat/v1/layers/experimental/set_keras_style.md new file mode 100644 index 00000000000..3bd77661ee2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/experimental/set_keras_style.md @@ -0,0 +1,63 @@ +description: Use Keras-style variable management. + +
+ + +
+ +# tf.compat.v1.layers.experimental.set_keras_style + + + + + + + + + +Use Keras-style variable management. + + + + + + + +All tf.layers and tf RNN cells created after keras style ha been enabled +use Keras-style variable management. Creating such layers with a +scope= argument is disallowed, and reuse=True is disallowed. + +The purpose of this function is to allow users of existing layers to +slowly transition to Keras layers API without breaking existing +functionality. + +For more details, see the documentation for `keras_style_scope`. + +Note, once keras style has been set, it is set globally for the entire +program and cannot be unset. + +#### Example: + + + +```python +set_keras_style() + +model_1 = RNNModel(name="model_1") +model_2 = RNNModel(name="model_2") + +# model_1 and model_2 are guaranteed to create their own variables. +output_1, next_state_1 = model_1(input, state) +output_2, next_state_2 = model_2(input, state) + +assert len(model_1.weights) > 0 +assert len(model_2.weights) > 0 +assert(model_1.weights != model_2.weights) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/layers/flatten.md b/site/en/api_docs/python/tf/compat/v1/layers/flatten.md new file mode 100644 index 00000000000..59eb351b39e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/flatten.md @@ -0,0 +1,100 @@ +description: Flattens an input tensor while preserving the batch axis (axis 0). (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.flatten + + + + + + + + + +Flattens an input tensor while preserving the batch axis (axis 0). (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use keras.layers.Flatten instead. + + + + + + + + + + + + + + + + +
+`inputs` + +Tensor input. +
+`name` + +The name of the layer (string). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+ + + + + + + + + + + +
+Reshaped tensor. +
+ + + +#### Examples: + + + +``` + x = tf.compat.v1.placeholder(shape=(None, 4, 4), dtype='float32') + y = flatten(x) + # now `y` has shape `(None, 16)` + + x = tf.compat.v1.placeholder(shape=(None, 3, None), dtype='float32') + y = flatten(x) + # now `y` has shape `(None, None)` +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/layers/max_pooling1d.md b/site/en/api_docs/python/tf/compat/v1/layers/max_pooling1d.md new file mode 100644 index 00000000000..dbbaca34ad0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/max_pooling1d.md @@ -0,0 +1,127 @@ +description: Max Pooling layer for 1D inputs. (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.max_pooling1d + + + + + + + + + +Max Pooling layer for 1D inputs. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use keras.layers.MaxPooling1D instead. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +The tensor over which to pool. Must have rank 3. +
+`pool_size` + +An integer or tuple/list of a single integer, +representing the size of the pooling window. +
+`strides` + +An integer or tuple/list of a single integer, specifying the +strides of the pooling operation. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, length, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, length)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + +
+The output tensor, of rank 3. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
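+
+A quick illustrative call in graph mode (sizes are arbitrary):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # eager execution raises ValueError here
+
+# Rank-3 input: (batch, length, channels).
+x = tf.compat.v1.placeholder(tf.float32, shape=(None, 100, 16))
+
+y = tf.compat.v1.layers.max_pooling1d(x, pool_size=2, strides=2, padding="valid")
+
+print(y.shape)  # (None, 50, 16)
+```
+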
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/max_pooling2d.md b/site/en/api_docs/python/tf/compat/v1/layers/max_pooling2d.md new file mode 100644 index 00000000000..abb947e0802 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/max_pooling2d.md @@ -0,0 +1,131 @@ +description: Max pooling layer for 2D inputs (e.g. images). (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.max_pooling2d + + + + + + + + + +Max pooling layer for 2D inputs (e.g. images). (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use keras.layers.MaxPooling2D instead. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +The tensor over which to pool. Must have rank 4. +
+`pool_size` + +An integer or tuple/list of 2 integers: (pool_height, pool_width) +specifying the size of the pooling window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the pooling operation. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string. The ordering of the dimensions in the inputs. +`channels_last` (default) and `channels_first` are supported. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
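+
+A quick illustrative call in graph mode (sizes are arbitrary); 2x2 pooling with
+stride 2 halves the spatial dimensions:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # eager execution raises ValueError here
+
+x = tf.compat.v1.placeholder(tf.float32, shape=(None, 28, 28, 32))  # NHWC
+
+y = tf.compat.v1.layers.max_pooling2d(x, pool_size=(2, 2), strides=(2, 2), padding="valid")
+
+print(y.shape)  # (None, 14, 14, 32)
+```
+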
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/max_pooling3d.md b/site/en/api_docs/python/tf/compat/v1/layers/max_pooling3d.md new file mode 100644 index 00000000000..04784f01380 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/max_pooling3d.md @@ -0,0 +1,131 @@ +description: Max pooling layer for 3D inputs (e.g. (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.max_pooling3d + + + + + + + + + +Max pooling layer for 3D inputs (e.g. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use keras.layers.MaxPooling3D instead. + +volumes). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +The tensor over which to pool. Must have rank 5. +
+`pool_size` + +An integer or tuple/list of 3 integers: (pool_depth, pool_height, +pool_width) specifying the size of the pooling window. Can be a single +integer to specify the same value for all spatial dimensions. +
+`strides` + +An integer or tuple/list of 3 integers, specifying the strides of +the pooling operation. Can be a single integer to specify the same value +for all spatial dimensions. +
+`padding` + +A string. The padding method, either 'valid' or 'same'. +Case-insensitive. +
+`data_format` + +A string. The ordering of the dimensions in the inputs. +`channels_last` (default) and `channels_first` are supported. +`channels_last` corresponds to inputs with shape `(batch, depth, height, +width, channels)` while `channels_first` corresponds to inputs with shape +`(batch, channels, depth, height, width)`. +
+`name` + +A string, the name of the layer. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
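+
+An illustrative graph-mode call on a rank-5 volume (sizes are arbitrary):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # eager execution raises ValueError here
+
+# Rank-5 input: (batch, depth, height, width, channels).
+x = tf.compat.v1.placeholder(tf.float32, shape=(None, 16, 32, 32, 8))
+
+y = tf.compat.v1.layers.max_pooling3d(x, pool_size=2, strides=2, padding="valid")
+
+print(y.shape)  # (None, 8, 16, 16, 8)
+```
+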
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/separable_conv1d.md b/site/en/api_docs/python/tf/compat/v1/layers/separable_conv1d.md new file mode 100644 index 00000000000..e8e9deb69bd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/separable_conv1d.md @@ -0,0 +1,277 @@ +description: Functional interface for the depthwise separable 1D convolution layer. (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.separable_conv1d + + + + + + + + + +Functional interface for the depthwise separable 1D convolution layer. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.keras.layers.SeparableConv1D instead. + +This layer performs a depthwise convolution that acts separately on +channels, followed by a pointwise convolution that mixes channels. +If `use_bias` is True and a bias initializer is provided, +it adds a bias vector to the output. +It then optionally applies an activation function to produce the final output. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Input tensor. +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +A single integer specifying the spatial +dimensions of the filters. +
+`strides` + +A single integer specifying the strides +of the convolution. +Specifying any `stride` value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, length, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, length)`. +
+`dilation_rate` + +A single integer, specifying +the dilation rate to use for dilated convolution. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`depth_multiplier` + +The number of depthwise convolution output channels for +each input channel. The total number of depthwise convolution output +channels will be equal to `num_filters_in * depth_multiplier`. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`depthwise_initializer` + +An initializer for the depthwise convolution kernel. +
+`pointwise_initializer` + +An initializer for the pointwise convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`depthwise_regularizer` + +Optional regularizer for the depthwise +convolution kernel. +
+`pointwise_regularizer` + +Optional regularizer for the pointwise +convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`depthwise_constraint` + +Optional projection function to be applied to the +depthwise kernel after being updated by an `Optimizer` (e.g. used for +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`pointwise_constraint` + +Optional projection function to be applied to the +pointwise kernel after being updated by an `Optimizer`. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+`reuse` + +Boolean, whether to reuse the weights of a previous layer +by the same name. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
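+
+A graph-mode sketch of the depthwise-then-pointwise behavior described above
+(channel counts and `depth_multiplier` are illustrative):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # eager execution raises ValueError here
+
+# (batch, length, channels) input.
+x = tf.compat.v1.placeholder(tf.float32, shape=(None, 100, 8))
+
+# Depthwise conv over each of the 8 channels (x2 via depth_multiplier), followed
+# by a pointwise convolution mixing them into 16 output channels.
+y = tf.compat.v1.layers.separable_conv1d(
+    x, filters=16, kernel_size=3, strides=1, padding="same",
+    depth_multiplier=2, activation=tf.nn.relu, name="sep1d_1")
+
+print(y.shape)  # (None, 100, 16)
+```
+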
+ diff --git a/site/en/api_docs/python/tf/compat/v1/layers/separable_conv2d.md b/site/en/api_docs/python/tf/compat/v1/layers/separable_conv2d.md new file mode 100644 index 00000000000..0eea9872dc1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/layers/separable_conv2d.md @@ -0,0 +1,281 @@ +description: Functional interface for the depthwise separable 2D convolution layer. (deprecated) + +
+ + +
+ +# tf.compat.v1.layers.separable_conv2d + + + + + + + + + +Functional interface for the depthwise separable 2D convolution layer. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.keras.layers.SeparableConv2D instead. + +This layer performs a depthwise convolution that acts separately on +channels, followed by a pointwise convolution that mixes channels. +If `use_bias` is True and a bias initializer is provided, +it adds a bias vector to the output. +It then optionally applies an activation function to produce the final output. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Input tensor. +
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +A tuple or list of 2 integers specifying the spatial +dimensions of the filters. Can be a single integer to specify the same +value for all spatial dimensions. +
+`strides` + +A tuple or list of 2 positive integers specifying the strides +of the convolution. Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any `stride` value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, height, width)`. +
+`dilation_rate` + +An integer or tuple/list of 2 integers, specifying +the dilation rate to use for dilated convolution. +Can be a single integer to specify the same value for +all spatial dimensions. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`depth_multiplier` + +The number of depthwise convolution output channels for +each input channel. The total number of depthwise convolution output +channels will be equal to `num_filters_in * depth_multiplier`. +
+`activation` + +Activation function. Set it to None to maintain a +linear activation. +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`depthwise_initializer` + +An initializer for the depthwise convolution kernel. +
+`pointwise_initializer` + +An initializer for the pointwise convolution kernel. +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used. +
+`depthwise_regularizer` + +Optional regularizer for the depthwise +convolution kernel. +
+`pointwise_regularizer` + +Optional regularizer for the pointwise +convolution kernel. +
+`bias_regularizer` + +Optional regularizer for the bias vector. +
+`activity_regularizer` + +Optional regularizer function for the output. +
+`depthwise_constraint` + +Optional projection function to be applied to the +depthwise kernel after being updated by an `Optimizer` (e.g. used for +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training. +
+`pointwise_constraint` + +Optional projection function to be applied to the +pointwise kernel after being updated by an `Optimizer`. +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer`. +
+`trainable` + +Boolean, if `True` also add variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +
+`name` + +A string, the name of the layer. +
+`reuse` + +Boolean, whether to reuse the weights of a previous layer +by the same name. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if eager execution is enabled. +
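+
+A graph-mode sketch (input shape and filter count are illustrative):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # eager execution raises ValueError here
+
+x = tf.compat.v1.placeholder(tf.float32, shape=(None, 32, 32, 3))  # NHWC
+
+# 3x3 depthwise convolution per input channel, then a 1x1 pointwise convolution
+# producing 32 output channels.
+y = tf.compat.v1.layers.separable_conv2d(
+    x, filters=32, kernel_size=(3, 3), padding="same",
+    activation=tf.nn.relu, name="sep2d_1")
+
+print(y.shape)  # (None, 32, 32, 32)
+```
+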
+ diff --git a/site/en/api_docs/python/tf/compat/v1/linalg.md b/site/en/api_docs/python/tf/compat/v1/linalg.md new file mode 100644 index 00000000000..8f375ca5d3e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/linalg.md @@ -0,0 +1,159 @@ +description: Operations for linear algebra. + +
+ + +
+ +# Module: tf.compat.v1.linalg + + + + + + + + + +Operations for linear algebra. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/linalg/experimental.md) module: Public API for tf.linalg.experimental namespace. + +## Classes + +[`class LinearOperator`](../../../tf/linalg/LinearOperator.md): Base class defining a [batch of] linear operator[s]. + +[`class LinearOperatorAdjoint`](../../../tf/linalg/LinearOperatorAdjoint.md): `LinearOperator` representing the adjoint of another operator. + +[`class LinearOperatorBlockDiag`](../../../tf/linalg/LinearOperatorBlockDiag.md): Combines one or more `LinearOperators` in to a Block Diagonal matrix. + +[`class LinearOperatorBlockLowerTriangular`](../../../tf/linalg/LinearOperatorBlockLowerTriangular.md): Combines `LinearOperators` into a blockwise lower-triangular matrix. + +[`class LinearOperatorCirculant`](../../../tf/linalg/LinearOperatorCirculant.md): `LinearOperator` acting like a circulant matrix. + +[`class LinearOperatorCirculant2D`](../../../tf/linalg/LinearOperatorCirculant2D.md): `LinearOperator` acting like a block circulant matrix. + +[`class LinearOperatorCirculant3D`](../../../tf/linalg/LinearOperatorCirculant3D.md): `LinearOperator` acting like a nested block circulant matrix. + +[`class LinearOperatorComposition`](../../../tf/linalg/LinearOperatorComposition.md): Composes one or more `LinearOperators`. + +[`class LinearOperatorDiag`](../../../tf/linalg/LinearOperatorDiag.md): `LinearOperator` acting like a [batch] square diagonal matrix. + +[`class LinearOperatorFullMatrix`](../../../tf/linalg/LinearOperatorFullMatrix.md): `LinearOperator` that wraps a [batch] matrix. + +[`class LinearOperatorHouseholder`](../../../tf/linalg/LinearOperatorHouseholder.md): `LinearOperator` acting like a [batch] of Householder transformations. + +[`class LinearOperatorIdentity`](../../../tf/linalg/LinearOperatorIdentity.md): `LinearOperator` acting like a [batch] square identity matrix. + +[`class LinearOperatorInversion`](../../../tf/linalg/LinearOperatorInversion.md): `LinearOperator` representing the inverse of another operator. + +[`class LinearOperatorKronecker`](../../../tf/linalg/LinearOperatorKronecker.md): Kronecker product between two `LinearOperators`. + +[`class LinearOperatorLowRankUpdate`](../../../tf/linalg/LinearOperatorLowRankUpdate.md): Perturb a `LinearOperator` with a rank `K` update. + +[`class LinearOperatorLowerTriangular`](../../../tf/linalg/LinearOperatorLowerTriangular.md): `LinearOperator` acting like a [batch] square lower triangular matrix. + +[`class LinearOperatorPermutation`](../../../tf/linalg/LinearOperatorPermutation.md): `LinearOperator` acting like a [batch] of permutation matrices. + +[`class LinearOperatorScaledIdentity`](../../../tf/linalg/LinearOperatorScaledIdentity.md): `LinearOperator` acting like a scaled [batch] identity matrix `A = c I`. + +[`class LinearOperatorToeplitz`](../../../tf/linalg/LinearOperatorToeplitz.md): `LinearOperator` acting like a [batch] of toeplitz matrices. + +[`class LinearOperatorTridiag`](../../../tf/linalg/LinearOperatorTridiag.md): `LinearOperator` acting like a [batch] square tridiagonal matrix. + +[`class LinearOperatorZeros`](../../../tf/linalg/LinearOperatorZeros.md): `LinearOperator` acting like a [batch] zero matrix. + +## Functions + +[`adjoint(...)`](../../../tf/linalg/adjoint.md): Transposes the last two dimensions of and conjugates tensor `matrix`. 
+ +[`band_part(...)`](../../../tf/linalg/band_part.md): Copy a tensor setting everything outside a central band in each innermost matrix + +[`cholesky(...)`](../../../tf/linalg/cholesky.md): Computes the Cholesky decomposition of one or more square matrices. + +[`cholesky_solve(...)`](../../../tf/linalg/cholesky_solve.md): Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations. + +[`cross(...)`](../../../tf/linalg/cross.md): Compute the pairwise cross product. + +[`det(...)`](../../../tf/linalg/det.md): Computes the determinant of one or more square matrices. + +[`diag(...)`](../../../tf/linalg/diag.md): Returns a batched diagonal tensor with given batched diagonal values. + +[`diag_part(...)`](../../../tf/linalg/diag_part.md): Returns the batched diagonal part of a batched tensor. + +[`eigh(...)`](../../../tf/linalg/eigh.md): Computes the eigen decomposition of a batch of self-adjoint matrices. + +[`eigvalsh(...)`](../../../tf/linalg/eigvalsh.md): Computes the eigenvalues of one or more self-adjoint matrices. + +[`einsum(...)`](../../../tf/einsum.md): Tensor contraction over specified indices and outer product. + +[`expm(...)`](../../../tf/linalg/expm.md): Computes the matrix exponential of one or more square matrices. + +[`eye(...)`](../../../tf/eye.md): Construct an identity matrix, or a batch of matrices. + +[`global_norm(...)`](../../../tf/linalg/global_norm.md): Computes the global norm of multiple tensors. + +[`inv(...)`](../../../tf/linalg/inv.md): Computes the inverse of one or more square invertible matrices or their + +[`l2_normalize(...)`](../../../tf/compat/v1/linalg/l2_normalize.md): Normalizes along dimension `axis` using an L2 norm. (deprecated arguments) + +[`logdet(...)`](../../../tf/linalg/logdet.md): Computes log of the determinant of a hermitian positive definite matrix. + +[`logm(...)`](../../../tf/linalg/logm.md): Computes the matrix logarithm of one or more square matrices: + +[`lstsq(...)`](../../../tf/linalg/lstsq.md): Solves one or more linear least-squares problems. + +[`lu(...)`](../../../tf/linalg/lu.md): Computes the LU decomposition of one or more square matrices. + +[`lu_matrix_inverse(...)`](../../../tf/linalg/lu_matrix_inverse.md): Computes the inverse given the LU decomposition(s) of one or more matrices. + +[`lu_reconstruct(...)`](../../../tf/linalg/lu_reconstruct.md): The reconstruct one or more matrices from their LU decomposition(s). + +[`lu_solve(...)`](../../../tf/linalg/lu_solve.md): Solves systems of linear eqns `A X = RHS`, given LU factorizations. + +[`matmul(...)`](../../../tf/linalg/matmul.md): Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +[`matrix_rank(...)`](../../../tf/linalg/matrix_rank.md): Compute the matrix rank of one or more matrices. + +[`matrix_transpose(...)`](../../../tf/linalg/matrix_transpose.md): Transposes last two dimensions of tensor `a`. + +[`matvec(...)`](../../../tf/linalg/matvec.md): Multiplies matrix `a` by vector `b`, producing `a` * `b`. + +[`norm(...)`](../../../tf/compat/v1/norm.md): Computes the norm of vectors, matrices, and tensors. (deprecated arguments) + +[`normalize(...)`](../../../tf/linalg/normalize.md): Normalizes `tensor` along dimension `axis` using specified norm. + +[`pinv(...)`](../../../tf/linalg/pinv.md): Compute the Moore-Penrose pseudo-inverse of one or more matrices. + +[`qr(...)`](../../../tf/linalg/qr.md): Computes the QR decompositions of one or more matrices. 
+ +[`set_diag(...)`](../../../tf/linalg/set_diag.md): Returns a batched matrix tensor with new batched diagonal values. + +[`slogdet(...)`](../../../tf/linalg/slogdet.md): Computes the sign and the log of the absolute value of the determinant of + +[`solve(...)`](../../../tf/linalg/solve.md): Solves systems of linear equations. + +[`sqrtm(...)`](../../../tf/linalg/sqrtm.md): Computes the matrix square root of one or more square matrices: + +[`svd(...)`](../../../tf/linalg/svd.md): Computes the singular value decompositions of one or more matrices. + +[`tensor_diag(...)`](../../../tf/linalg/tensor_diag.md): Returns a diagonal tensor with a given diagonal values. + +[`tensor_diag_part(...)`](../../../tf/linalg/tensor_diag_part.md): Returns the diagonal part of the tensor. + +[`tensordot(...)`](../../../tf/tensordot.md): Tensor contraction of a and b along specified axes and outer product. + +[`trace(...)`](../../../tf/linalg/trace.md): Compute the trace of a tensor `x`. + +[`transpose(...)`](../../../tf/linalg/matrix_transpose.md): Transposes last two dimensions of tensor `a`. + +[`triangular_solve(...)`](../../../tf/linalg/triangular_solve.md): Solve systems of linear equations with upper or lower triangular matrices. + +[`tridiagonal_matmul(...)`](../../../tf/linalg/tridiagonal_matmul.md): Multiplies tridiagonal matrix by matrix. + +[`tridiagonal_solve(...)`](../../../tf/linalg/tridiagonal_solve.md): Solves tridiagonal systems of equations. + diff --git a/site/en/api_docs/python/tf/compat/v1/linalg/experimental.md b/site/en/api_docs/python/tf/compat/v1/linalg/experimental.md new file mode 100644 index 00000000000..089ee032914 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/linalg/experimental.md @@ -0,0 +1,25 @@ +description: Public API for tf.linalg.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.linalg.experimental + + + + + + + + + +Public API for tf.linalg.experimental namespace. + + + +## Functions + +[`conjugate_gradient(...)`](../../../../tf/linalg/experimental/conjugate_gradient.md): Conjugate gradient solver. + diff --git a/site/en/api_docs/python/tf/compat/v1/linalg/l2_normalize.md b/site/en/api_docs/python/tf/compat/v1/linalg/l2_normalize.md new file mode 100644 index 00000000000..e07bfe7979a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/linalg/l2_normalize.md @@ -0,0 +1,115 @@ +description: Normalizes along dimension axis using an L2 norm. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.linalg.l2_normalize + + + + + + + + + +Normalizes along dimension `axis` using an L2 norm. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. +Instructions for updating: +dim is deprecated, use axis instead + +For a 1-D tensor with `axis = 0`, computes + + output = x / sqrt(max(sum(x**2), epsilon)) + +For `x` with more dimensions, independently normalizes each 1-D slice along +dimension `axis`. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. +
+`axis` + +Dimension along which to normalize. A scalar or a vector of +integers. +
+`epsilon` + +A lower bound value for the norm. Will use `sqrt(epsilon)` as the +divisor if `norm < sqrt(epsilon)`. +
+`name` + +A name for this operation (optional). +
+`dim` + +Deprecated alias for axis. +
+ + + + + + + + + + + +
+A `Tensor` with the same shape as `x`. +
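+
+A small worked example of the formula above, with values chosen so the norm is
+easy to check by hand (runs eagerly):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([3.0, 4.0])
+y = tf.compat.v1.linalg.l2_normalize(x, axis=0)
+# ||x|| = sqrt(3**2 + 4**2) = 5, so y == [0.6, 0.8] and tf.norm(y) == 1.0
+```
+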
+ diff --git a/site/en/api_docs/python/tf/compat/v1/lite.md b/site/en/api_docs/python/tf/compat/v1/lite.md new file mode 100644 index 00000000000..56f5b6944f7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite.md @@ -0,0 +1,49 @@ +description: Public API for tf.lite namespace. + +
+ + +
+ +# Module: tf.compat.v1.lite + + + + + + + + + +Public API for tf.lite namespace. + + + +## Modules + +[`constants`](../../../tf/compat/v1/lite/constants.md) module: Public API for tf.lite.constants namespace. + +[`experimental`](../../../tf/compat/v1/lite/experimental.md) module: Public API for tf.lite.experimental namespace. + +## Classes + +[`class Interpreter`](../../../tf/lite/Interpreter.md): Interpreter interface for TensorFlow Lite Models. + +[`class OpHint`](../../../tf/compat/v1/lite/OpHint.md): A class that helps build tflite function invocations. + +[`class OpsSet`](../../../tf/lite/OpsSet.md): Enum class defining the sets of ops available to generate TFLite models. + +[`class Optimize`](../../../tf/lite/Optimize.md): Enum defining the optimizations to apply when generating tflite graphs. + +[`class RepresentativeDataset`](../../../tf/lite/RepresentativeDataset.md): Representative dataset to evaluate optimizations. + +[`class TFLiteConverter`](../../../tf/compat/v1/lite/TFLiteConverter.md): Convert a TensorFlow model into `output_format`. + +[`class TargetSpec`](../../../tf/lite/TargetSpec.md): Specification of target device. + +[`class TocoConverter`](../../../tf/compat/v1/lite/TocoConverter.md): Convert a TensorFlow model into `output_format` using TOCO. + +## Functions + +[`toco_convert(...)`](../../../tf/compat/v1/lite/toco_convert.md): Convert a model using TOCO. (deprecated) + diff --git a/site/en/api_docs/python/tf/compat/v1/lite/OpHint.md b/site/en/api_docs/python/tf/compat/v1/lite/OpHint.md new file mode 100644 index 00000000000..888b8dab108 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/OpHint.md @@ -0,0 +1,355 @@ +description: A class that helps build tflite function invocations. + +
+ + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.lite.OpHint + + + + + + + + + +A class that helps build tflite function invocations. + + + + + + + +It allows you to take a bunch of TensorFlow ops and annotate the construction +such that toco knows how to convert it to tflite. This embeds a pseudo +function in a TensorFlow graph. This allows embedding high-level API usage +information in a lower level TensorFlow implementation so that an alternative +implementation can be substituted later. + +Essentially, any "input" into this pseudo op is fed into an identity, and +attributes are added to that input before being used by the constituent ops +that make up the pseudo op. A similar process is done to any output that +is to be exported from the current op. + + + + + + + + + + + + + + + + + + + +
+`function_name` + +Name of the function (the custom op name in tflite) +
+`level` + +OpHint level. +
+`children_inputs_mappings`
+
+Children OpHint inputs/outputs mapping.
+`children_inputs_mappings` should look like the following:
+"parent_first_child_input":
+[{"parent_input_index": num, "child_input_index": num}, ...]
+"parent_last_child_output":
+[{"parent_output_index": num, "child_output_index": num}, ...]
+"internal_children_input_output":
+[{"child_input_index": num, "child_output_index": num}, ...]
+
+`**kwargs` + +Keyword arguments of any constant attributes for the function. +
+ + + +## Child Classes +[`class OpHintArgumentTracker`](../../../../tf/compat/v1/lite/OpHint/OpHintArgumentTracker.md) + +## Methods + +

add_input

+ +View source + + + +Add a wrapped input argument to the hint. + + + + + + + + + + + + + + +
Args
+`*args` + +The input tensor. +
+`**kwargs` + +"name" label +"tag" a tag to group multiple arguments that will be aggregated. I.e. +a string like 'cool_input'. Basically multiple inputs can be added +to the same hint for parallel operations that will eventually be +combined. An example would be static_rnn which creates multiple copies +of state or inputs. +"aggregate" aggregation strategy that is valid only for tag non None. +Acceptable values are OpHint.AGGREGATE_FIRST, OpHint.AGGREGATE_LAST, +and OpHint.AGGREGATE_STACK. +"index_override" The global index to use. This corresponds to the +argument order in the final stub that will be generated. +
+ + + + + + + + + + + +
Returns
+The wrapped input tensor. +
+ + + +

add_inputs

+ +View source + + + +Add a sequence of inputs to the function invocation. + + + + + + + + + + + + + + +
Args
+`*args` + +List of inputs to be converted (should be Tf.Tensor). +
+`**kwargs` + +This allows 'names' which should be a list of names. +
+ + + + + + + + + + + +
Returns
+Wrapped inputs (identity standins that have additional metadata). These +are also are also tf.Tensor's. +
+ + + +

add_output

+ +View source + + + +Add a wrapped output argument to the hint. + + + + + + + + + + + + + + +
Args
+`*args` + +The output tensor. +
+`**kwargs` + +"name" label +"tag" a tag to group multiple arguments that will be aggregated. I.e. +a string like 'cool_input'. Basically multiple inputs can be added +to the same hint for parallel operations that will eventually be +combined. An example would be static_rnn which creates multiple copies +of state or inputs. +"aggregate" aggregation strategy that is valid only for tag non None. +Acceptable values are OpHint.AGGREGATE_FIRST, OpHint.AGGREGATE_LAST, +and OpHint.AGGREGATE_STACK. +"index_override" The global index to use. This corresponds to the +argument order in the final stub that will be generated. +
+ + + + + + + + + + + +
Returns
+The wrapped output tensor. +
+ + + +

add_outputs

+ +View source + + + +Add a sequence of outputs to the function invocation. + + + + + + + + + + + + + + +
Args
+`*args` + +List of outputs to be converted (should be tf.Tensor). +
+`**kwargs`
+
+See `add_inputs`.
+
+ + + + + + + + + + + +
Returns
+Wrapped outputs (identity standins that have additional metadata). These +are also tf.Tensor's. +
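+
+A sketch of how `add_inputs` and `add_outputs` are typically paired to annotate a
+small subgraph (the op name and the swish-style activation are illustrative, and
+graph mode is assumed since OpHint annotates a GraphDef):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()  # OpHint annotations only apply to graphs
+
+def tflite_cool_activation(x):
+  """Wraps an activation so the converter can later stub it as one tflite op."""
+  hint = tf.compat.v1.lite.OpHint("cool_activation")  # custom op name in tflite
+  x, = hint.add_inputs(x)    # identity-wrapped inputs
+  y = tf.sigmoid(x) * x
+  y, = hint.add_outputs(y)   # identity-wrapped outputs
+  return y
+
+inp = tf.compat.v1.placeholder(tf.float32, shape=(None, 8))
+out = tflite_cool_activation(inp)
+```
+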
+ + + + + +## Class Variables + +* `AGGREGATE_FIRST = 'first'` +* `AGGREGATE_LAST = 'last'` +* `AGGREGATE_STACK = 'stack'` +* `CHILDREN_INPUTS_MAPPINGS = '_tflite_children_ophint_inputs_mapping'` +* `FUNCTION_AGGREGATE_ATTR = '_tflite_function_aggregate'` +* `FUNCTION_INPUT_INDEX_ATTR = '_tflite_function_input_index'` +* `FUNCTION_LEVEL_ATTR = '_tflite_ophint_level'` +* `FUNCTION_NAME_ATTR = '_tflite_function_name'` +* `FUNCTION_OUTPUT_INDEX_ATTR = '_tflite_function_output_index'` +* `FUNCTION_SORT_INDEX_ATTR = '_tflite_function_sort_index'` +* `FUNCTION_UUID_ATTR = '_tflite_function_uuid'` +* `TFLITE_INPUT_INDICES = '_tflite_input_indices'` diff --git a/site/en/api_docs/python/tf/compat/v1/lite/OpHint/OpHintArgumentTracker.md b/site/en/api_docs/python/tf/compat/v1/lite/OpHint/OpHintArgumentTracker.md new file mode 100644 index 00000000000..cb796780093 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/OpHint/OpHintArgumentTracker.md @@ -0,0 +1,192 @@ +description: Conceptually tracks indices of arguments of "OpHint functions". + +
+ + + + +
+ +# tf.compat.v1.lite.OpHint.OpHintArgumentTracker + + + + + + + + + +Conceptually tracks indices of arguments of "OpHint functions". + + + + + + + +The inputs and arguments of these functions both use an instance +of the class so they can have independent numbering. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`function_name` + +Name of the function that this tracks arguments for. +
+`unique_function_id` + +UUID of function that this tracks arguments for. +
+`node_name_prefix` + +How identities that are created are named. +
+`attr_name` + +Name of attribute to use to store the index for this hint. +i.e. FUNCTION_INPUT_INDEX or FUNCTION_OUTPUT_INDEX +
+`level` + +Hierarchical level of the Ophint node, a number. +
+`children_inputs_mappings` + +Inputs/Outputs mapping for children hints. +
+ + + +## Methods + +

add

+ +View source + + + +Return a wrapped tensor of an input tensor as an argument. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`arg` + +A TensorFlow tensor that should be considered an argument. +
+`tag` + +String tag to identify arguments that should be packed. +
+`name` + +Name of argument. This is included in the Identity hint op names. +
+`aggregate` + +Strategy to aggregate. +Acceptable values are OpHint.AGGREGATE_FIRST, OpHint.AGGREGATE_LAST, +and OpHint.AGGREGATE_STACK. +Note, aggregate is only valid if tag is specified. +
+`index_override`
+
+Specify which input/output index this argument should have in the
+final stub. I.e. `add(arg0, index=1); add(arg1, index=0)` will make the
+final stub be `stub_func(inputs=[arg1, arg0], outputs=[])` rather than
+following the default, call-order-based ordering.
+
+ + + + + + + + + + + +
Returns
+A tensor representing the wrapped argument. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When indices are not consistent. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/lite/TFLiteConverter.md b/site/en/api_docs/python/tf/compat/v1/lite/TFLiteConverter.md new file mode 100644 index 00000000000..f858cb2dae3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/TFLiteConverter.md @@ -0,0 +1,744 @@ +description: Convert a TensorFlow model into output_format. + +
+ + + + + + + + + +
+ +# tf.compat.v1.lite.TFLiteConverter + + + + + + + + + +Convert a TensorFlow model into `output_format`. + + + + + + + +This is used to convert from a TensorFlow GraphDef, SavedModel or tf.keras +model into either a TFLite FlatBuffer or graph visualization. + +#### Example usage: + + +```python +# Converting a GraphDef from session. +converter = lite.TFLiteConverter.from_session(sess, in_tensors, out_tensors) +tflite_model = converter.convert() +open("converted_model.tflite", "wb").write(tflite_model) + +# Converting a GraphDef from file. +converter = lite.TFLiteConverter.from_frozen_graph( + graph_def_file, input_arrays, output_arrays) +tflite_model = converter.convert() +open("converted_model.tflite", "wb").write(tflite_model) + +# Converting a SavedModel. +converter = lite.TFLiteConverter.from_saved_model(saved_model_dir) +tflite_model = converter.convert() +open("converted_model.tflite", "wb").write(tflite_model) + +# Converting a tf.keras model. +converter = lite.TFLiteConverter.from_keras_model_file(keras_model) +tflite_model = converter.convert() +open("converted_model.tflite", "wb").write(tflite_model) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`graph_def` + +Frozen TensorFlow GraphDef. +
+`input_tensors` + +List of input tensors. Type and shape are computed using +`foo.shape` and `foo.dtype`. +
+`output_tensors` + +List of output tensors (only .name is used from this). +
+`input_arrays_with_shape` + +Tuple of strings representing input tensor names +and list of integers representing input shapes +(e.g., [("foo" : [1, 16, 16, 3])]). Use only when graph cannot be loaded +into TensorFlow and when `input_tensors` and `output_tensors` are +None. (default None) +
+`output_arrays` + +List of output tensors to freeze graph with. Use only when +graph cannot be loaded into TensorFlow and when `input_tensors` and +`output_tensors` are None. (default None) +
+`experimental_debug_info_func` + +An experimental function to retrieve the +graph debug info for a set of nodes from the `graph_def`. +
+ + + + + + + + + + + + +
+`ValueError` + +Invalid arguments. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inference_type` + +Target data type of real-number arrays in the output file. +Must be `{tf.float32, tf.uint8}`. If `optimzations` are provided, this +parameter is ignored. (default tf.float32) +
+`inference_input_type` + +Target data type of real-number input arrays. Allows +for a different type for input arrays. +If an integer type is provided and `optimizations` are not used, +`quantized_inputs_stats` must be provided. +If `inference_type` is tf.uint8, signaling conversion to a fully quantized +model from a quantization-aware trained input model, then +`inference_input_type` defaults to tf.uint8. +In all other cases, `inference_input_type` defaults to tf.float32. +Must be `{tf.float32, tf.uint8, tf.int8}` +
+`inference_output_type` + +Target data type of real-number output arrays. Allows +for a different type for output arrays. +If `inference_type` is tf.uint8, signaling conversion to a fully quantized +model from a quantization-aware trained output model, then +`inference_output_type` defaults to tf.uint8. +In all other cases, `inference_output_type` must be tf.float32, an error +will be thrown otherwise. +Must be `{tf.float32, tf.uint8, tf.int8}` +
+`output_format` + +Output file format. Currently must be `{TFLITE, +GRAPHVIZ_DOT}`. (default TFLITE) +
+`quantized_input_stats` + +Dict of strings representing input tensor names +mapped to tuple of floats representing the mean and standard deviation +of the training data (e.g., {"foo" : (0., 1.)}). Only need if +`inference_input_type` is `QUANTIZED_UINT8`. +real_input_value = (quantized_input_value - mean_value) / std_dev_value. +(default {}) +
+`default_ranges_stats` + +Tuple of integers representing (min, max) range values +for all arrays without a specified range. Intended for experimenting with +quantization via "dummy quantization". (default None) +
+`drop_control_dependency` + +Boolean indicating whether to drop control +dependencies silently. This is due to TFLite not supporting control +dependencies. (default True) +
+`reorder_across_fake_quant` + +Boolean indicating whether to reorder FakeQuant +nodes in unexpected locations. Used when the location of the FakeQuant +nodes is preventing graph transformations necessary to convert the graph. +Results in a graph that differs from the quantized training graph, +potentially causing differing arithmetic behavior. (default False) +
+`change_concat_input_ranges` + +Boolean to change behavior of min/max ranges for +inputs and outputs of the concat operator for quantized models. Changes +the ranges of concat operator overlap when true. (default False) +
+`allow_custom_ops` + +Boolean indicating whether to allow custom operations. +When false any unknown operation is an error. When true, custom ops are +created for any op that is unknown. The developer will need to provide +these to the TensorFlow Lite runtime with a custom resolver. +(default False) +
+`post_training_quantize` + +Deprecated. Please specify `[Optimize.DEFAULT]` for +`optimizations` instead. Boolean indicating whether to quantize the +weights of the converted float model. Model size will be reduced and +there will be latency improvements (at the cost of accuracy). +(default False) +
+
`dump_graphviz_dir`
 + 
Full filepath of the folder in which to dump GraphViz .dot files of the
graph at various stages of processing. Preferred over
--output_format=GRAPHVIZ_DOT in order to keep the requirements of the
output file. (default None)
 
+`dump_graphviz_video` + +Boolean indicating whether to dump the graph after +every graph transformation. (default False) +
+`conversion_summary_dir` + +A string indicating the path to the generated +conversion logs. +
+`target_ops` + +Deprecated. Please specify `target_spec.supported_ops` instead. +Set of OpsSet options indicating which converter to use. +(default set([OpsSet.TFLITE_BUILTINS])) +
+`target_spec` + +Experimental flag, subject to change. Specification of target +device. +
+`optimizations` + +Experimental flag, subject to change. A list of optimizations +to apply when converting the model. E.g. `[Optimize.DEFAULT]` +
+`representative_dataset` + +A representative dataset that can be used to +generate input and output samples for the model. The converter can use +the dataset to evaluate different optimizations. +
+`experimental_new_converter` + +Experimental flag, subject to change. +Enables MLIR-based conversion instead of TOCO conversion. +
+ + + +## Methods + +

convert

+ +View source + + + +Converts a TensorFlow GraphDef based on instance variables. + + + + + + + + + + +
Returns
+
The converted data in serialized format. Either a TFLite FlatBuffer or a
GraphViz graph, depending on the value of `output_format`.
 
+ + + + + + + + + + + + +
Raises
+`ValueError` + +Input shape is not specified. +None value for dimension in input_tensor. +
+ + + +
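The attributes documented above are set on the converter instance before calling
`convert()`. For example, a minimal sketch of post-training weight quantization,
assuming `sess`, `in_tensors`, and `out_tensors` already exist and that `lite`
refers to `tf.compat.v1.lite`:

```python
# Sketch only: `sess`, `in_tensors`, and `out_tensors` are assumed to be
# defined elsewhere; `lite` refers to tf.compat.v1.lite.
converter = lite.TFLiteConverter.from_session(sess, in_tensors, out_tensors)

# Request post-training weight quantization via `optimizations`
# (the replacement for the deprecated `post_training_quantize` flag).
converter.optimizations = [lite.Optimize.DEFAULT]

tflite_quantized_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_quantized_model)
```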

from_frozen_graph

+ +View source + + + +Creates a TFLiteConverter class from a file containing a frozen GraphDef. + + + + + + + + + + + + + + + + + + + + +
Args
+`graph_def_file` + +Full filepath of file containing frozen GraphDef. +
+`input_arrays` + +List of input tensors to freeze graph with. +
+`output_arrays` + +List of output tensors to freeze graph with. +
+`input_shapes` + +Dict of strings representing input tensor names to list of +integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). +Automatically determined when input shapes is None (e.g., {"foo" : +None}). (default None) +
+ + + + + + + + + + + +
Returns
+TFLiteConverter class. +
+ + + + + + + + + + + + + + + +
Raises
+`IOError` + +File not found. +Unable to parse input file. +
+
`ValueError`
 + 
The graph is not frozen.
input_arrays or output_arrays contains an invalid tensor name.
input_shapes is not correctly defined when required.
 
+ + + +
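A short sketch of `from_frozen_graph` that also fixes the input shape via
`input_shapes`; the file path and tensor names below are hypothetical
placeholders:

```python
# Sketch only: the .pb path and the tensor names are placeholders for your
# own frozen graph.
converter = lite.TFLiteConverter.from_frozen_graph(
    "frozen_graph.pb",
    input_arrays=["input"],
    output_arrays=["output"],
    input_shapes={"input": [1, 224, 224, 3]})
tflite_model = converter.convert()
```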

from_keras_model_file

+ +View source + + + +Creates a TFLiteConverter class from a tf.keras model file. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`model_file` + +Full filepath of HDF5 file containing the tf.keras model. +
+`input_arrays` + +List of input tensors to freeze graph with. Uses input +arrays from SignatureDef when none are provided. (default None) +
+`input_shapes` + +Dict of strings representing input tensor names to list of +integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). +Automatically determined when input shapes is None (e.g., {"foo" : +None}). (default None) +
+`output_arrays` + +List of output tensors to freeze graph with. Uses output +arrays from SignatureDef when none are provided. (default None) +
+`custom_objects` + +Dict mapping names (strings) to custom classes or +functions to be considered during model deserialization. (default None) +
+ + + + + + + + + + + +
Returns
+TFLiteConverter class. +
+ + + +
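If the HDF5 file uses custom layers or functions, they can be passed through
`custom_objects`; a sketch with hypothetical names:

```python
# Sketch only: "model.h5" and MyCustomLayer are hypothetical placeholders;
# any custom classes or functions used by the saved Keras model go in
# `custom_objects`.
converter = lite.TFLiteConverter.from_keras_model_file(
    "model.h5", custom_objects={"MyCustomLayer": MyCustomLayer})
tflite_model = converter.convert()
```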

from_saved_model

+ +View source + + + +Creates a TFLiteConverter class from a SavedModel. + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`saved_model_dir` + +SavedModel directory to convert. +
+`input_arrays` + +List of input tensors to freeze graph with. Uses input +arrays from SignatureDef when none are provided. (default None) +
+`input_shapes` + +Dict of strings representing input tensor names to list of +integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). +Automatically determined when input shapes is None (e.g., {"foo" : +None}). (default None) +
+`output_arrays` + +List of output tensors to freeze graph with. Uses output +arrays from SignatureDef when none are provided. (default None) +
+`tag_set` + +Set of tags identifying the MetaGraphDef within the SavedModel to +analyze. All tags in the tag set must be present. (default set("serve")) +
+`signature_key` + +Key identifying SignatureDef containing inputs and outputs. +(default DEFAULT_SERVING_SIGNATURE_DEF_KEY) +
+ + + + + + + + + + + +
Returns
+TFLiteConverter class. +
+ + + +
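A sketch that spells out the documented defaults for `tag_set` and
`signature_key`; the SavedModel directory is a placeholder:

```python
# Sketch only: "/tmp/saved_model" is a placeholder. The tag "serve" and the
# signature key "serving_default" are the documented defaults written out
# explicitly.
converter = lite.TFLiteConverter.from_saved_model(
    "/tmp/saved_model",
    tag_set=set(["serve"]),
    signature_key="serving_default")
tflite_model = converter.convert()
```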

from_session

+ +View source + + + +Creates a TFLiteConverter class from a TensorFlow Session. + + + + + + + + + + + + + + + + + +
Args
+`sess` + +TensorFlow Session. +
+`input_tensors` + +List of input tensors. Type and shape are computed using +`foo.shape` and `foo.dtype`. +
+`output_tensors` + +List of output tensors (only .name is used from this). +
+ + + + + + + + + + + +
Returns
+TFLiteConverter class. +
+ + + +
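Beyond the basic usage shown in the class example, `from_session` is also the
usual entry point when targeting a fully quantized model from a
quantization-aware trained graph. A hedged sketch, using `get_input_arrays`
(documented below) to name the inputs:

```python
# Sketch only: assumes `sess` holds a quantization-aware trained graph and
# that `in_tensors`/`out_tensors` are its input/output tensors.
converter = lite.TFLiteConverter.from_session(sess, in_tensors, out_tensors)
converter.inference_type = tf.uint8
# Placeholder (mean, std_dev) stats for each input array; see
# `quantized_input_stats` above for what these values mean.
converter.quantized_input_stats = {
    name: (0., 1.) for name in converter.get_input_arrays()}
tflite_model = converter.convert()
```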

get_input_arrays

+ +View source + + + +Returns a list of the names of the input tensors. + + + + + + + + + + +
Returns
+List of strings. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/lite/TocoConverter.md b/site/en/api_docs/python/tf/compat/v1/lite/TocoConverter.md new file mode 100644 index 00000000000..06cc35a3a62 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/TocoConverter.md @@ -0,0 +1,105 @@ +description: Convert a TensorFlow model into output_format using TOCO. + +
+ + + + + + +
+ +# tf.compat.v1.lite.TocoConverter + + + + + + + + + +Convert a TensorFlow model into `output_format` using TOCO. + + + +This class has been deprecated. Please use `lite.TFLiteConverter` instead. + +## Methods + +

from_frozen_graph

+ +View source + + + +Creates a TocoConverter class from a file containing a frozen graph. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `lite.TFLiteConverter.from_frozen_graph` instead. + +

from_keras_model_file

+ +View source + + + +Creates a TocoConverter class from a tf.keras model file. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `lite.TFLiteConverter.from_keras_model_file` instead. + +

from_saved_model

+ +View source + + + +Creates a TocoConverter class from a SavedModel. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `lite.TFLiteConverter.from_saved_model` instead. + +

from_session

+ +View source + + + +Creates a TocoConverter class from a TensorFlow Session. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `lite.TFLiteConverter.from_session` instead. + + + diff --git a/site/en/api_docs/python/tf/compat/v1/lite/constants.md b/site/en/api_docs/python/tf/compat/v1/lite/constants.md new file mode 100644 index 00000000000..19c2a89219e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/constants.md @@ -0,0 +1,41 @@ +description: Public API for tf.lite.constants namespace. + +
+ + + + + + + + + + + +
+ +# Module: tf.compat.v1.lite.constants + + + + + + + + + +Public API for tf.lite.constants namespace. + + + +## Other Members + +* `FLOAT` +* `FLOAT16` +* `GRAPHVIZ_DOT = 3` +* `INT32` +* `INT64` +* `INT8` +* `QUANTIZED_UINT8` +* `STRING` +* `TFLITE = 2` diff --git a/site/en/api_docs/python/tf/compat/v1/lite/experimental.md b/site/en/api_docs/python/tf/compat/v1/lite/experimental.md new file mode 100644 index 00000000000..c8526ebce60 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/experimental.md @@ -0,0 +1,33 @@ +description: Public API for tf.lite.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.lite.experimental + + + + + + + + + +Public API for tf.lite.experimental namespace. + + + +## Modules + +[`nn`](../../../../tf/compat/v1/lite/experimental/nn.md) module: Public API for tf.lite.experimental.nn namespace. + +## Functions + +[`convert_op_hints_to_stubs(...)`](../../../../tf/compat/v1/lite/experimental/convert_op_hints_to_stubs.md): Converts a graphdef with LiteOp hints into stub operations. + +[`get_potentially_supported_ops(...)`](../../../../tf/compat/v1/lite/experimental/get_potentially_supported_ops.md): Returns operations potentially supported by TensorFlow Lite. + +[`load_delegate(...)`](../../../../tf/lite/experimental/load_delegate.md): Returns loaded Delegate object. + diff --git a/site/en/api_docs/python/tf/compat/v1/lite/experimental/convert_op_hints_to_stubs.md b/site/en/api_docs/python/tf/compat/v1/lite/experimental/convert_op_hints_to_stubs.md new file mode 100644 index 00000000000..feb54e518fd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/experimental/convert_op_hints_to_stubs.md @@ -0,0 +1,99 @@ +description: Converts a graphdef with LiteOp hints into stub operations. + +
+ + +
+ +# tf.compat.v1.lite.experimental.convert_op_hints_to_stubs + + + + + + + + + +Converts a graphdef with LiteOp hints into stub operations. + + + + + + + +This is used to prepare for toco conversion of complex intrinsic usages. +Note: only one of session or graph_def should be used, not both. + + + + + + + + + + + + + + + + +
+`session` + +A TensorFlow session that contains the graph to convert. +
+`graph_def` + +A graph def that we should convert. +
+`write_callback` + +A function pointer that can be used to write intermediate +steps of graph transformation (optional). +
+ + + + + + + + + + + +
+A new graphdef with all ops contained in OpHints being replaced by +a single op call with the right parameters. +
+ + + + + + + + + + + + +
+`ValueError` + +If both session and graph_def are provided. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/lite/experimental/get_potentially_supported_ops.md b/site/en/api_docs/python/tf/compat/v1/lite/experimental/get_potentially_supported_ops.md new file mode 100644 index 00000000000..b03304b8d3f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/experimental/get_potentially_supported_ops.md @@ -0,0 +1,52 @@ +description: Returns operations potentially supported by TensorFlow Lite. + +
+ + +
+

# tf.compat.v1.lite.experimental.get_potentially_supported_ops




Returns operations potentially supported by TensorFlow Lite.




The list of potentially supported ops contains ops that are partially or
fully supported; it is derived by simply scanning op names to check whether
they can be handled, without doing a real conversion or inspecting specific
parameters.

Given that some ops may be only partially supported, the optimal way to
determine whether a model's operations are supported is to convert it using
the TensorFlow Lite converter.



+A list of SupportedOp. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn.md b/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn.md new file mode 100644 index 00000000000..d184676f03c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn.md @@ -0,0 +1,31 @@ +description: Public API for tf.lite.experimental.nn namespace. + +
+ + +
+ +# Module: tf.compat.v1.lite.experimental.nn + + + + + + + + + +Public API for tf.lite.experimental.nn namespace. + + + +## Classes + +[`class TFLiteLSTMCell`](../../../../../tf/compat/v1/lite/experimental/nn/TFLiteLSTMCell.md): Long short-term memory unit (LSTM) recurrent network cell. + +[`class TfLiteRNNCell`](../../../../../tf/compat/v1/lite/experimental/nn/TfLiteRNNCell.md): The most basic RNN cell. + +## Functions + +[`dynamic_rnn(...)`](../../../../../tf/compat/v1/lite/experimental/nn/dynamic_rnn.md): Creates a recurrent neural network specified by RNNCell `cell`. + diff --git a/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn/TFLiteLSTMCell.md b/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn/TFLiteLSTMCell.md new file mode 100644 index 00000000000..9098f786826 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn/TFLiteLSTMCell.md @@ -0,0 +1,313 @@ +description: Long short-term memory unit (LSTM) recurrent network cell. + +
+ + + + + + +
+

# tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell




Long short-term memory unit (LSTM) recurrent network cell.




This is used only for TfLite; it provides hints, and it also puts the
variables in the format desired by the tflite ops (transposed and separated).

The default non-peephole implementation is based on:

  https://pdfs.semanticscholar.org/1154/0131eae85b2e11d53df7f1360eeb6476e7f4.pdf

Felix Gers, Jurgen Schmidhuber, and Fred Cummins.
"Learning to forget: Continual prediction with LSTM." IET, 850-855, 1999.

The peephole implementation is based on:

  https://research.google.com/pubs/archive/43905.pdf

Hasim Sak, Andrew Senior, and Francoise Beaufays.
"Long short-term memory recurrent neural network architectures for
 large scale acoustic modeling." INTERSPEECH, 2014.

The class uses optional peep-hole connections, optional cell clipping, and
an optional projection layer.

Note that this cell is not optimized for performance. Please use
`tf.contrib.cudnn_rnn.CudnnLSTM` for better performance on GPU, or
`tf.contrib.rnn.LSTMBlockCell` and `tf.contrib.rnn.LSTMBlockFusedCell` for
better performance on CPU.



+`num_units` + +int, The number of units in the LSTM cell. +
+`use_peepholes` + +bool, set True to enable diagonal/peephole connections. +
+`cell_clip` + +(optional) A float value, if provided the cell state is clipped +by this value prior to the cell output activation. +
+`initializer` + +(optional) The initializer to use for the weight and +projection matrices. +
+`num_proj` + +(optional) int, The output dimensionality for the projection +matrices. If None, no projection is performed. +
+`proj_clip` + +(optional) A float value. If `num_proj > 0` and `proj_clip` is +provided, then the projected values are clipped elementwise to within +`[-proj_clip, proj_clip]`. +
+`num_unit_shards` + +Deprecated, will be removed by Jan. 2017. Use a +variable_scope partitioner instead. +
+`num_proj_shards` + +Deprecated, will be removed by Jan. 2017. Use a +variable_scope partitioner instead. +
+`forget_bias` + +Biases of the forget gate are initialized by default to 1 in +order to reduce the scale of forgetting at the beginning of the +training. Must set it manually to `0.0` when restoring from CudnnLSTM +trained checkpoints. +
+`state_is_tuple` + +If True, accepted and returned states are 2-tuples of the +`c_state` and `m_state`. If False, they are concatenated along the +column axis. This latter behavior will soon be deprecated. +
+`activation` + +Activation function of the inner states. Default: `tanh`. +
+`reuse` + +(optional) Python boolean describing whether to reuse variables in +an existing scope. If not `True`, and the existing scope already has +the given variables, an error is raised. +
+`name` + +String, the name of the layer. Layers with the same name will share +weights, but to avoid mistakes we require reuse=True in such cases. +
+`dtype` + +Default dtype of the layer (default of `None` means use the type of +the first input). Required when `build` is called before `call`. When +restoring from CudnnLSTM-trained checkpoints, use +`CudnnCompatibleLSTMCell` instead. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + +Integer or TensorShape: size of outputs produced by this cell. +
+`scope_name` + + +
+`state_size` + +size(s) of state(s) used by this cell. + +It can be represented by an Integer, a TensorShape or a tuple of Integers +or TensorShapes. +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +
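For context, a minimal sketch of constructing the cell and obtaining an
initial state via `zero_state` (documented below); the sizes used here are
arbitrary placeholders:

```python
# Sketch only: the unit, projection and batch sizes are placeholders.
cell = tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell(
    num_units=64, use_peepholes=True, num_proj=32)
initial_state = cell.zero_state(batch_size=8, dtype=tf.float32)
```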

zero_state

+ +View source + + + +Return zero-filled state tensor(s). + + + + + + + + + + + + + + +
Args
+`batch_size` + +int, float, or unit Tensor representing the batch size. +
+`dtype` + +the data type to use for the state. +
+ + + + + + + + + + + +
Returns
+If `state_size` is an int or TensorShape, then the return value is a +`N-D` tensor of shape `[batch_size, state_size]` filled with zeros. + +If `state_size` is a nested list or tuple, then the return value is +a nested list or tuple (of the same structure) of `2-D` tensors with +the shapes `[batch_size, s]` for each s in `state_size`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn/TfLiteRNNCell.md b/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn/TfLiteRNNCell.md new file mode 100644 index 00000000000..6014002632e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn/TfLiteRNNCell.md @@ -0,0 +1,236 @@ +description: The most basic RNN cell. + +
+ + + + + + +
+

# tf.compat.v1.lite.experimental.nn.TfLiteRNNCell




The most basic RNN cell.




This is used only for TfLite; it provides hints, and it also puts the
variables in the format desired by the tflite ops.



+`num_units` + +int, The number of units in the RNN cell. +
+`activation` + +Nonlinearity to use. Default: `tanh`. It could also be string +that is within Keras activation function names. +
+`reuse` + +(optional) Python boolean describing whether to reuse variables in +an existing scope. Raises an error if not `True` and the existing scope +already has the given variables. +
+`name` + +String, the name of the layer. Layers with the same name will share +weights, but to avoid mistakes we require reuse=True in such cases. +
+`dtype` + +Default dtype of the layer (default of `None` means use the type of +the first input). Required when `build` is called before `call`. +
+`**kwargs` + +Dict, keyword named properties for common layer attributes, like +`trainable` etc when constructing the cell from configs of get_config(). +
+ + + + + + + + + + + + +
+`ValueError` + +If the existing scope already has the given variables. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + +Integer or TensorShape: size of outputs produced by this cell. +
+`scope_name` + + +
+`state_size` + +size(s) of state(s) used by this cell. + +It can be represented by an Integer, a TensorShape or a tuple of Integers +or TensorShapes. +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + +Return zero-filled state tensor(s). + + + + + + + + + + + + + + +
Args
+`batch_size` + +int, float, or unit Tensor representing the batch size. +
+`dtype` + +the data type to use for the state. +
+ + + + + + + + + + + +
Returns
+If `state_size` is an int or TensorShape, then the return value is a +`N-D` tensor of shape `[batch_size, state_size]` filled with zeros. + +If `state_size` is a nested list or tuple, then the return value is +a nested list or tuple (of the same structure) of `2-D` tensors with +the shapes `[batch_size, s]` for each s in `state_size`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn/dynamic_rnn.md b/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn/dynamic_rnn.md new file mode 100644 index 00000000000..a330190e4a0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/experimental/nn/dynamic_rnn.md @@ -0,0 +1,250 @@ +description: Creates a recurrent neural network specified by RNNCell cell. + +
+ + +
+ +# tf.compat.v1.lite.experimental.nn.dynamic_rnn + + + + + + + + + +Creates a recurrent neural network specified by RNNCell `cell`. + + + + + + + +Performs fully dynamic unrolling of `inputs`. + +#### Example: + + + +```python +# create a BasicRNNCell +rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size) + +# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size] + +# defining initial state +initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32) + +# 'state' is a tensor of shape [batch_size, cell_state_size] +outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data, + initial_state=initial_state, + dtype=tf.float32) +``` + +```python +# create 2 LSTMCells +rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]] + +# create a RNN cell composed sequentially of a number of RNNCells +multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers) + +# 'outputs' is a tensor of shape [batch_size, max_time, 256] +# 'state' is a N-tuple where N is the number of LSTMCells containing a +# tf.nn.rnn_cell.LSTMStateTuple for each cell +outputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell, + inputs=data, + dtype=tf.float32) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +An instance of RNNCell. +
+`inputs` + +The RNN inputs. +If `time_major == False` (default), this must be a `Tensor` of shape: +`[batch_size, max_time, ...]`, or a nested tuple of such elements. +If `time_major == True`, this must be a `Tensor` of shape: `[max_time, +batch_size, ...]`, or a nested tuple of such elements. This may also be +a (possibly nested) tuple of Tensors satisfying this property. The +first two dimensions must match across all the inputs, but otherwise the +ranks and other shape components may differ. In this case, input to +`cell` at each time-step will replicate the structure of these tuples, +except for the time dimension (from which the time is taken). The input +to `cell` at each time step will be a `Tensor` or (possibly nested) +tuple of Tensors each with dimensions `[batch_size, ...]`. +
+`sequence_length` + +(optional) An int32/int64 vector sized `[batch_size]`. Used +to copy-through state and zero-out outputs when past a batch element's +sequence length. So it's more for performance than correctness. +
+`initial_state` + +(optional) An initial state for the RNN. If `cell.state_size` +is an integer, this must be a `Tensor` of appropriate type and shape +`[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this +should be a tuple of tensors having shapes `[batch_size, s] for s in +cell.state_size`. +
+`dtype` + +(optional) The data type for the initial state and expected output. +Required if initial_state is not provided or RNN state has a heterogeneous +dtype. +
+`parallel_iterations` + +(Default: 32). The number of iterations to run in +parallel. Those operations which do not have any temporal dependency and +can be run in parallel, will be. This parameter trades off time for +space. Values >> 1 use more memory but take less time, while smaller +values use less memory but computations take longer. +
+`swap_memory` + +Transparently swap the tensors produced in forward inference +but needed for back prop from GPU to CPU. This allows training RNNs which +would typically not fit on a single GPU, with very minimal (or no) +performance penalty. +
+`time_major` + +The shape format of the `inputs` and `outputs` Tensors. If true, +these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, +these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using +`time_major = True` is a bit more efficient because it avoids transposes +at the beginning and end of the RNN calculation. However, most TensorFlow +data is batch-major, so by default this function accepts input and emits +output in batch-major form. +
+`scope` + +VariableScope for the created subgraph; defaults to "rnn". +
+ + + + + + + + + + + + + + + + + +
+A pair (outputs, state) where: +
+`outputs` + +The RNN output `Tensor`. + +If time_major == False (default), this will be a `Tensor` shaped: +`[batch_size, max_time, cell.output_size]`. + +If time_major == True, this will be a `Tensor` shaped: +`[max_time, batch_size, cell.output_size]`. + +Note, if `cell.output_size` is a (possibly nested) tuple of integers +or `TensorShape` objects, then `outputs` will be a tuple having the +same structure as `cell.output_size`, containing Tensors having shapes +corresponding to the shape data in `cell.output_size`. +
+`state` + +The final state. If `cell.state_size` is an int, this +will be shaped `[batch_size, cell.state_size]`. If it is a +`TensorShape`, this will be shaped `[batch_size] + cell.state_size`. +If it is a (possibly nested) tuple of ints or `TensorShape`, this will +be a tuple having the corresponding shapes. If cells are `LSTMCells` +`state` will be a tuple containing a `LSTMStateTuple` for each cell. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +If `cell` is not an instance of RNNCell. +
+`ValueError` + +If inputs is None or an empty list. +
+`RuntimeError` + +If not using control flow v2. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/lite/toco_convert.md b/site/en/api_docs/python/tf/compat/v1/lite/toco_convert.md new file mode 100644 index 00000000000..4e7c0635b51 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lite/toco_convert.md @@ -0,0 +1,116 @@ +description: Convert a model using TOCO. (deprecated) + +
+ + +
+ +# tf.compat.v1.lite.toco_convert + + + + + + + + + +Convert a model using TOCO. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `lite.TFLiteConverter` instead. + +Typically this function is used to convert from TensorFlow GraphDef to TFLite. +Conversion can be customized by providing arguments that are forwarded to +`build_toco_convert_protos` (see documentation for details). This function has +been deprecated. Please use `lite.TFLiteConverter` instead. + + + + + + + + + + + + + + + + + + + + + + +
+
`input_data`
 + 
Input data (i.e. often `sess.graph_def`).
 
+`input_tensors` + +List of input tensors. Type and shape are computed using +`foo.shape` and `foo.dtype`. +
+`output_tensors` + +List of output tensors (only .name is used from this). +
+
`*args`
 + 
See `build_toco_convert_protos`.
 
+`**kwargs` + +See `build_toco_convert_protos`. +
+ + + + + + + + + + + +
+The converted data. For example if TFLite was the destination, then +this will be a tflite flatbuffer in a bytes array. +
+ + + + + + + + + + + +
+Defined in `build_toco_convert_protos`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/load_file_system_library.md b/site/en/api_docs/python/tf/compat/v1/load_file_system_library.md new file mode 100644 index 00000000000..e8bfbf8529b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/load_file_system_library.md @@ -0,0 +1,89 @@ +description: Loads a TensorFlow plugin, containing file system implementation. (deprecated) + +
+ + +
+ +# tf.compat.v1.load_file_system_library + + + + + + + + + +Loads a TensorFlow plugin, containing file system implementation. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.load_library instead. + +Pass `library_filename` to a platform-specific mechanism for dynamically +loading a library. The rules for determining the exact location of the +library are platform-specific and are not documented here. + + + + + + + + + + +
+`library_filename` + +Path to the plugin. +Relative or absolute filesystem path to a dynamic library file. +
+ + + + + + + + + + + +
+None. +
+ + + + + + + + + + + + +
+`RuntimeError` + +when unable to load the library. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/local_variables.md b/site/en/api_docs/python/tf/compat/v1/local_variables.md new file mode 100644 index 00000000000..7f6a66887cb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/local_variables.md @@ -0,0 +1,78 @@ +description: Returns local variables. + +
+ + +
+

# tf.compat.v1.local_variables




Returns local variables.




Local variables - per process variables, usually not saved/restored to
checkpoint and used for temporary or intermediate values.
For example, they can be used as counters for metrics computation or the
number of epochs this machine has read data.
The `tf.contrib.framework.local_variable()` function automatically adds the
new variable to `GraphKeys.LOCAL_VARIABLES`.
This convenience function returns the contents of that collection.

An alternative to local variables is global variables. See
tf.compat.v1.global_variables



+`scope` + +(Optional.) A string. If supplied, the resulting list is filtered to +include only items whose `name` attribute matches `scope` using +`re.match`. Items without a `name` attribute are never returned if a scope +is supplied. The choice of `re.match` means that a `scope` without special +tokens filters by prefix. +
+ + + + + + + + + + + +
+A list of local `Variable` objects. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/local_variables_initializer.md b/site/en/api_docs/python/tf/compat/v1/local_variables_initializer.md new file mode 100644 index 00000000000..11a3c34be61 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/local_variables_initializer.md @@ -0,0 +1,57 @@ +description: Returns an Op that initializes all local variables. + +
+ + +
+ +# tf.compat.v1.local_variables_initializer + + + + + + + + + +Returns an Op that initializes all local variables. + + + + + + + + + +This is just a shortcut for `variables_initializer(local_variables())` + + + + + + + + + +
+An Op that initializes all local variables in the graph. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/logging.md b/site/en/api_docs/python/tf/compat/v1/logging.md new file mode 100644 index 00000000000..83ad40da954 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging.md @@ -0,0 +1,65 @@ +description: Logging and Summary Operations. + +
+ + + + + + + +
+ +# Module: tf.compat.v1.logging + + + + + + + + + +Logging and Summary Operations. + + + +## Functions + +[`TaskLevelStatusMessage(...)`](../../../tf/compat/v1/logging/TaskLevelStatusMessage.md) + +[`debug(...)`](../../../tf/compat/v1/logging/debug.md) + +[`error(...)`](../../../tf/compat/v1/logging/error.md) + +[`fatal(...)`](../../../tf/compat/v1/logging/fatal.md) + +[`flush(...)`](../../../tf/compat/v1/logging/flush.md) + +[`get_verbosity(...)`](../../../tf/compat/v1/logging/get_verbosity.md): Return how much logging output will be produced. + +[`info(...)`](../../../tf/compat/v1/logging/info.md) + +[`log(...)`](../../../tf/compat/v1/logging/log.md) + +[`log_every_n(...)`](../../../tf/compat/v1/logging/log_every_n.md): Log 'msg % args' at level 'level' once per 'n' times. + +[`log_first_n(...)`](../../../tf/compat/v1/logging/log_first_n.md): Log 'msg % args' at level 'level' only first 'n' times. + +[`log_if(...)`](../../../tf/compat/v1/logging/log_if.md): Log 'msg % args' at level 'level' only if condition is fulfilled. + +[`set_verbosity(...)`](../../../tf/compat/v1/logging/set_verbosity.md): Sets the threshold for what messages will be logged. + +[`vlog(...)`](../../../tf/compat/v1/logging/vlog.md) + +[`warn(...)`](../../../tf/compat/v1/logging/warn.md) + +[`warning(...)`](../../../tf/compat/v1/logging/warning.md) + +## Other Members + +* `DEBUG = 10` +* `ERROR = 40` +* `FATAL = 50` +* `INFO = 20` +* `WARN = 30` diff --git a/site/en/api_docs/python/tf/compat/v1/logging/TaskLevelStatusMessage.md b/site/en/api_docs/python/tf/compat/v1/logging/TaskLevelStatusMessage.md new file mode 100644 index 00000000000..f5f4af4d8e2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/TaskLevelStatusMessage.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.logging.TaskLevelStatusMessage + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/debug.md b/site/en/api_docs/python/tf/compat/v1/logging/debug.md new file mode 100644 index 00000000000..96be7bd470e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/debug.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.logging.debug + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/error.md b/site/en/api_docs/python/tf/compat/v1/logging/error.md new file mode 100644 index 00000000000..ef27371c5aa --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/error.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.logging.error + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/fatal.md b/site/en/api_docs/python/tf/compat/v1/logging/fatal.md new file mode 100644 index 00000000000..0aec19d73a6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/fatal.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.logging.fatal + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/flush.md b/site/en/api_docs/python/tf/compat/v1/logging/flush.md new file mode 100644 index 00000000000..285d08fa970 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/flush.md @@ -0,0 +1,29 @@ +
+ + +
+ +# tf.compat.v1.logging.flush + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/get_verbosity.md b/site/en/api_docs/python/tf/compat/v1/logging/get_verbosity.md new file mode 100644 index 00000000000..f8c6adaafc2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/get_verbosity.md @@ -0,0 +1,31 @@ +description: Return how much logging output will be produced. + +
+ + +
+ +# tf.compat.v1.logging.get_verbosity + + + + + + + + + +Return how much logging output will be produced. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/info.md b/site/en/api_docs/python/tf/compat/v1/logging/info.md new file mode 100644 index 00000000000..31daefc8a11 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/info.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.logging.info + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/log.md b/site/en/api_docs/python/tf/compat/v1/logging/log.md new file mode 100644 index 00000000000..c1efeb986aa --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/log.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.logging.log + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/log_every_n.md b/site/en/api_docs/python/tf/compat/v1/logging/log_every_n.md new file mode 100644 index 00000000000..8df93d12d6b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/log_every_n.md @@ -0,0 +1,73 @@ +description: Log 'msg % args' at level 'level' once per 'n' times. + +
+ + +
+ +# tf.compat.v1.logging.log_every_n + + + + + + + + + +Log 'msg % args' at level 'level' once per 'n' times. + + + + + + + +Logs the 1st call, (N+1)st call, (2N+1)st call, etc. +Not threadsafe. + + + + + + + + + + + + + + + + + + + +
+`level` + +The level at which to log. +
+`msg` + +The message to be logged. +
+`n` + +The number of times this should be called before it is logged. +
+`*args` + +The args to be substituted into the msg. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/logging/log_first_n.md b/site/en/api_docs/python/tf/compat/v1/logging/log_first_n.md new file mode 100644 index 00000000000..3bc3316ab43 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/log_first_n.md @@ -0,0 +1,72 @@ +description: Log 'msg % args' at level 'level' only first 'n' times. + +
+ + +
+ +# tf.compat.v1.logging.log_first_n + + + + + + + + + +Log 'msg % args' at level 'level' only first 'n' times. + + + + + + + +Not threadsafe. + + + + + + + + + + + + + + + + + + + +
+`level` + +The level at which to log. +
+`msg` + +The message to be logged. +
+`n` + +The number of times this should be called before it is logged. +
+`*args` + +The args to be substituted into the msg. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/logging/log_if.md b/site/en/api_docs/python/tf/compat/v1/logging/log_if.md new file mode 100644 index 00000000000..ff76ab8711d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/log_if.md @@ -0,0 +1,33 @@ +description: Log 'msg % args' at level 'level' only if condition is fulfilled. + +
+ + +
+ +# tf.compat.v1.logging.log_if + + + + + + + + + +Log 'msg % args' at level 'level' only if condition is fulfilled. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/set_verbosity.md b/site/en/api_docs/python/tf/compat/v1/logging/set_verbosity.md new file mode 100644 index 00000000000..5d59c69f462 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/set_verbosity.md @@ -0,0 +1,33 @@ +description: Sets the threshold for what messages will be logged. + +
+ + +
+ +# tf.compat.v1.logging.set_verbosity + + + + + + + + + +Sets the threshold for what messages will be logged. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/vlog.md b/site/en/api_docs/python/tf/compat/v1/logging/vlog.md new file mode 100644 index 00000000000..7e1d936b3e2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/vlog.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.logging.vlog + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/warn.md b/site/en/api_docs/python/tf/compat/v1/logging/warn.md new file mode 100644 index 00000000000..655f097fc17 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/warn.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.logging.warn + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/logging/warning.md b/site/en/api_docs/python/tf/compat/v1/logging/warning.md new file mode 100644 index 00000000000..b0a3ce39356 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/logging/warning.md @@ -0,0 +1,31 @@ +
+ + +
+ +# tf.compat.v1.logging.warning + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/lookup.md b/site/en/api_docs/python/tf/compat/v1/lookup.md new file mode 100644 index 00000000000..8983ab72545 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lookup.md @@ -0,0 +1,37 @@ +description: Public API for tf.lookup namespace. + +
+ + +
+ +# Module: tf.compat.v1.lookup + + + + + + + + + +Public API for tf.lookup namespace. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/lookup/experimental.md) module: Public API for tf.lookup.experimental namespace. + +## Classes + +[`class KeyValueTensorInitializer`](../../../tf/lookup/KeyValueTensorInitializer.md): Table initializers given `keys` and `values` tensors. + +[`class StaticHashTable`](../../../tf/compat/v1/lookup/StaticHashTable.md): A generic hash table that is immutable once initialized. + +[`class StaticVocabularyTable`](../../../tf/compat/v1/lookup/StaticVocabularyTable.md): String to Id table wrapper that assigns out-of-vocabulary keys to buckets. + +[`class TextFileIndex`](../../../tf/lookup/TextFileIndex.md): The key and value content to get from each line. + +[`class TextFileInitializer`](../../../tf/lookup/TextFileInitializer.md): Table initializers from a text file. + diff --git a/site/en/api_docs/python/tf/compat/v1/lookup/StaticHashTable.md b/site/en/api_docs/python/tf/compat/v1/lookup/StaticHashTable.md new file mode 100644 index 00000000000..7fb6b6a5e5a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lookup/StaticHashTable.md @@ -0,0 +1,318 @@ +description: A generic hash table that is immutable once initialized. + +
+ + + + + + +
+ +# tf.compat.v1.lookup.StaticHashTable + + + + + + + + + +A generic hash table that is immutable once initialized. + +Inherits From: [`StaticHashTable`](../../../../tf/lookup/StaticHashTable.md) + + + + + + + +When running in graph mode, you must evaluate the tensor returned by +`tf.tables_initializer()` before evaluating the tensor returned by +this class's `lookup()` method. Example usage in graph mode: + +```python +keys_tensor = tf.constant([1, 2]) +vals_tensor = tf.constant([3, 4]) +input_tensor = tf.constant([1, 5]) +table = tf.lookup.StaticHashTable( + tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1) +out = table.lookup(input_tensor) +with tf.Session() as sess: + sess.run(tf.tables_initializer()) + print(sess.run(out)) +``` + +In eager mode, no special code is needed to initialize the table. +Example usage in eager mode: + +```python +tf.enable_eager_execution() +keys_tensor = tf.constant([1, 2]) +vals_tensor = tf.constant([3, 4]) +input_tensor = tf.constant([1, 5]) +table = tf.lookup.StaticHashTable( + tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1) +print(table.lookup(input_tensor)) +``` + + + + + + + + + + + + + + + + +
+`initializer` + +The table initializer to use. See `HashTable` kernel for +supported key and value types. +
+`default_value` + +The value to use if a key is missing in the table. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`default_value` + +The default value of the table. +
+`initializer` + + +
+`key_dtype` + +The table key dtype. +
+`name` + +The name of the table. +
+`resource_handle` + +Returns the resource handle associated with this Resource. +
+`value_dtype` + +The table value dtype. +
+ + + +## Methods + +

export

+ +View source + + + +Returns tensors of all keys and values in the table. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A pair of tensors with the first tensor containing all keys and the +second tensors containing all values in the table. +
+ + + +
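A graph-mode sketch of `export()`, reusing the `table` built in the class
example above:

```python
# Sketch only: `table` is the StaticHashTable from the class example.
exported_keys, exported_values = table.export()
with tf.Session() as sess:
  sess.run(tf.tables_initializer())
  keys, values = sess.run([exported_keys, exported_values])
  print(keys, values)  # e.g. [1 2] [3 4]; the order is not guaranteed
```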

lookup

+ +View source + + + +Looks up `keys` in a table, outputs the corresponding values. + +The `default_value` is used for keys not present in the table. + + + + + + + + + + + + + +
Args
+`keys` + +Keys to look up. May be either a `SparseTensor` or dense `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `SparseTensor` if keys are sparse, otherwise a dense `Tensor`. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when `keys` or `default_value` doesn't match the table data +types. +
+ + + +
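Because `keys` may also be a `SparseTensor`, here is a sketch reusing the
`table` from the class example; only the sparse `values` are looked up, so
the result is a `SparseTensor` with the same indices:

```python
# Sketch only: keys absent from the table (here 5) map to default_value (-1).
sparse_keys = tf.SparseTensor(indices=[[0, 0], [1, 1]],
                              values=[1, 5],
                              dense_shape=[2, 2])
sparse_out = table.lookup(sparse_keys)
with tf.Session() as sess:
  sess.run(tf.tables_initializer())
  print(sess.run(sparse_out.values))  # [ 3 -1]
```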

size

+ +View source + + + +Compute the number of elements in this table. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A scalar tensor containing the number of elements in this table. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/lookup/StaticVocabularyTable.md b/site/en/api_docs/python/tf/compat/v1/lookup/StaticVocabularyTable.md new file mode 100644 index 00000000000..c16aa1f6a61 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lookup/StaticVocabularyTable.md @@ -0,0 +1,283 @@ +description: String to Id table wrapper that assigns out-of-vocabulary keys to buckets. + +
+ + + + + +
+

# tf.compat.v1.lookup.StaticVocabularyTable




String to Id table wrapper that assigns out-of-vocabulary keys to buckets.

Inherits From: [`StaticVocabularyTable`](../../../../tf/lookup/StaticVocabularyTable.md)




For example, if an instance of `StaticVocabularyTable` is initialized with a
string-to-id initializer that maps:

* `emerson -> 0`
* `lake -> 1`
* `palmer -> 2`

The `Vocabulary` object will perform the following mapping:

* `emerson -> 0`
* `lake -> 1`
* `palmer -> 2`
* `<other term> -> bucket_id`, where bucket_id will be between `3` and
`3 + num_oov_buckets - 1`, calculated by:
`hash(<term>) % num_oov_buckets + vocab_size`

If input_tensor is `["emerson", "lake", "palmer", "king", "crimson"]`,
the lookup result is `[0, 1, 2, 4, 7]`.

If `initializer` is None, only out-of-vocabulary buckets are used.

#### Example usage:



```python
num_oov_buckets = 3
input_tensor = tf.constant(["emerson", "lake", "palmer", "king", "crimson"])
table = tf.lookup.StaticVocabularyTable(
    tf.lookup.TextFileInitializer(
        filename,
        key_dtype=tf.string, key_index=tf.lookup.TextFileIndex.WHOLE_LINE,
        value_dtype=tf.int64, value_index=tf.lookup.TextFileIndex.LINE_NUMBER,
        delimiter="\t"),
    num_oov_buckets)
out = table.lookup(input_tensor)
table.init.run()
print(out.eval())
```

The hash function used for generating out-of-vocabulary bucket IDs is
Fingerprint64.



+`initializer` + +A TableInitializerBase object that contains the data used to +initialize the table. If None, then we only use out-of-vocab buckets. +
+`num_oov_buckets` + +Number of buckets to use for out-of-vocabulary keys. Must +be greater than zero. +
+`lookup_key_dtype` + +Data type of keys passed to `lookup`. Defaults to +`initializer.key_dtype` if `initializer` is specified, otherwise +tf.string. Must be string or integer, and must be castable to +`initializer.key_dtype`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + +
+`ValueError` + +when `num_oov_buckets` is not positive. +
+`TypeError` + +when lookup_key_dtype or initializer.key_dtype are not +integer or string. Also when initializer.value_dtype != int64. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`initializer` + + +
+`key_dtype` + +The table key dtype. +
+`name` + +The name of the table. +
+`resource_handle` + +Returns the resource handle associated with this Resource. +
+`value_dtype` + +The table value dtype. +
+ + + +## Methods + +

lookup

+ +View source + + + +Looks up `keys` in the table, outputs the corresponding values. + +It assigns out-of-vocabulary keys to buckets based in their hashes. + + + + + + + + + + + + + +
Args
+`keys` + +Keys to look up. May be either a `SparseTensor` or dense `Tensor`. +
+`name` + +Optional name for the op. +
+ + + + + + + + + + + +
Returns
+A `SparseTensor` if keys are sparse, otherwise a dense `Tensor`. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when `keys` doesn't match the table key data type. +
+ + + +

size

+ +View source + + + +Compute the number of elements in this table. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/lookup/experimental.md b/site/en/api_docs/python/tf/compat/v1/lookup/experimental.md new file mode 100644 index 00000000000..99edb6c2be1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/lookup/experimental.md @@ -0,0 +1,25 @@ +description: Public API for tf.lookup.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.lookup.experimental + + + + + + + + + +Public API for tf.lookup.experimental namespace. + + + +## Classes + +[`class DenseHashTable`](../../../../tf/lookup/experimental/DenseHashTable.md): A generic mutable hash table implementation using tensors as backing store. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses.md b/site/en/api_docs/python/tf/compat/v1/losses.md new file mode 100644 index 00000000000..43fedded7a3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses.md @@ -0,0 +1,60 @@ +description: Loss operations for use in neural networks. + +
+ + +
+ +# Module: tf.compat.v1.losses + + + + + + + + + +Loss operations for use in neural networks. + + +Note: All the losses are added to the `GraphKeys.LOSSES` collection by default. + +## Classes + +[`class Reduction`](../../../tf/compat/v1/losses/Reduction.md): Types of loss reduction. + +## Functions + +[`absolute_difference(...)`](../../../tf/compat/v1/losses/absolute_difference.md): Adds an Absolute Difference loss to the training procedure. + +[`add_loss(...)`](../../../tf/compat/v1/losses/add_loss.md): Adds a externally defined loss to the collection of losses. + +[`compute_weighted_loss(...)`](../../../tf/compat/v1/losses/compute_weighted_loss.md): Computes the weighted loss. + +[`cosine_distance(...)`](../../../tf/compat/v1/losses/cosine_distance.md): Adds a cosine-distance loss to the training procedure. (deprecated arguments) + +[`get_losses(...)`](../../../tf/compat/v1/losses/get_losses.md): Gets the list of losses from the loss_collection. + +[`get_regularization_loss(...)`](../../../tf/compat/v1/losses/get_regularization_loss.md): Gets the total regularization loss. + +[`get_regularization_losses(...)`](../../../tf/compat/v1/losses/get_regularization_losses.md): Gets the list of regularization losses. + +[`get_total_loss(...)`](../../../tf/compat/v1/losses/get_total_loss.md): Returns a tensor whose value represents the total loss. + +[`hinge_loss(...)`](../../../tf/compat/v1/losses/hinge_loss.md): Adds a hinge loss to the training procedure. + +[`huber_loss(...)`](../../../tf/compat/v1/losses/huber_loss.md): Adds a [Huber Loss](https://en.wikipedia.org/wiki/Huber_loss) term to the training procedure. + +[`log_loss(...)`](../../../tf/compat/v1/losses/log_loss.md): Adds a Log Loss term to the training procedure. + +[`mean_pairwise_squared_error(...)`](../../../tf/compat/v1/losses/mean_pairwise_squared_error.md): Adds a pairwise-errors-squared loss to the training procedure. + +[`mean_squared_error(...)`](../../../tf/compat/v1/losses/mean_squared_error.md): Adds a Sum-of-Squares loss to the training procedure. + +[`sigmoid_cross_entropy(...)`](../../../tf/compat/v1/losses/sigmoid_cross_entropy.md): Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits. + +[`softmax_cross_entropy(...)`](../../../tf/compat/v1/losses/softmax_cross_entropy.md): Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2. + +[`sparse_softmax_cross_entropy(...)`](../../../tf/compat/v1/losses/sparse_softmax_cross_entropy.md): Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses/Reduction.md b/site/en/api_docs/python/tf/compat/v1/losses/Reduction.md new file mode 100644 index 00000000000..5336d44e0d8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/Reduction.md @@ -0,0 +1,82 @@ +description: Types of loss reduction. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.losses.Reduction + + + + + + + + + +Types of loss reduction. + + + +Contains the following values: + +* `NONE`: Un-reduced weighted losses with the same shape as input. +* `SUM`: Scalar sum of weighted losses. +* `MEAN`: Scalar `SUM` divided by sum of weights. DEPRECATED. +* `SUM_OVER_BATCH_SIZE`: Scalar `SUM` divided by number of elements in losses. +* `SUM_OVER_NONZERO_WEIGHTS`: Scalar `SUM` divided by number of non-zero + weights. DEPRECATED. +* `SUM_BY_NONZERO_WEIGHTS`: Same as `SUM_OVER_NONZERO_WEIGHTS`. DEPRECATED. + +## Methods + +

all

+ +View source + + + + + + +

validate

+ +View source + + + + + + + + +## Class Variables + +* `MEAN = 'weighted_mean'` +* `NONE = 'none'` +* `SUM = 'weighted_sum'` +* `SUM_BY_NONZERO_WEIGHTS = 'weighted_sum_by_nonzero_weights'` +* `SUM_OVER_BATCH_SIZE = 'weighted_sum_over_batch_size'` +* `SUM_OVER_NONZERO_WEIGHTS = 'weighted_sum_by_nonzero_weights'` diff --git a/site/en/api_docs/python/tf/compat/v1/losses/absolute_difference.md b/site/en/api_docs/python/tf/compat/v1/losses/absolute_difference.md new file mode 100644 index 00000000000..260ad47a0be --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/absolute_difference.md @@ -0,0 +1,136 @@ +description: Adds an Absolute Difference loss to the training procedure. + +
+ + +
+ +# tf.compat.v1.losses.absolute_difference + + + + + + + + + +Adds an Absolute Difference loss to the training procedure. + + + + + + + +`weights` acts as a coefficient for the loss. If a scalar is provided, then +the loss is simply scaled by the given value. If `weights` is a `Tensor` of +shape `[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `weights` vector. If the shape of +`weights` matches the shape of `predictions`, then the loss of each +measurable element of `predictions` is scaled by the corresponding value of +`weights`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +The ground truth output tensor, same dimensions as 'predictions'. +
+`predictions` + +The predicted outputs. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `losses` dimension). +
+`scope` + +The scope for the operations performed in computing the loss. +
+`loss_collection` + +collection to which this loss will be added. +
+`reduction` + +Type of reduction to apply to loss. +
+ + + + + + + + + + + +
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same +shape as `labels`; otherwise, it is scalar. +
+ + + + + + + + + + + + +
+`ValueError` + +If the shape of `predictions` doesn't match that of +`labels` or if the shape of `weights` is invalid or if `labels` +or `predictions` is None. +
+



#### Eager Compatibility
The `loss_collection` argument is ignored when executing eagerly. Consider
holding on to the return value or collecting losses via a tf.keras.Model.

 diff --git a/site/en/api_docs/python/tf/compat/v1/losses/add_loss.md b/site/en/api_docs/python/tf/compat/v1/losses/add_loss.md new file mode 100644 index 00000000000..e332e18254a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/add_loss.md @@ -0,0 +1,57 @@ +description: Adds an externally defined loss to the collection of losses. + +
+ + +
+

# tf.compat.v1.losses.add_loss




Adds an externally defined loss to the collection of losses.




+`loss` + +A loss `Tensor`. +
+`loss_collection` + +Optional collection to add the loss to. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/losses/compute_weighted_loss.md b/site/en/api_docs/python/tf/compat/v1/losses/compute_weighted_loss.md new file mode 100644 index 00000000000..49abf950161 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/compute_weighted_loss.md @@ -0,0 +1,132 @@ +description: Computes the weighted loss. + +
+ + +
+ +# tf.compat.v1.losses.compute_weighted_loss + + + + + + + + + +Computes the weighted loss. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`losses` + +`Tensor` of shape `[batch_size, d1, ... dN]`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`losses`, and must be broadcastable to `losses` (i.e., all dimensions must +be either `1`, or the same as the corresponding `losses` dimension). +
+`scope` + +the scope for the operations performed in computing the loss. +
+`loss_collection` + +the loss will be added to these collections. +
+`reduction` + +Type of reduction to apply to loss. +
+ + + + + + + + + + + +
+Weighted loss `Tensor` of the same type as `losses`. If `reduction` is +`NONE`, this has the same shape as `losses`; otherwise, it is scalar. +
+ + + + + + + + + + + + +
+`ValueError` + +If `weights` is `None` or the shape is not compatible with +`losses`, or if the number of dimensions (rank) of either `losses` or +`weights` is missing. +
+ + + +#### Note: + +When calculating the gradient of a weighted loss contributions from +both `losses` and `weights` are considered. If your `weights` depend +on some model parameters but you do not want this to affect the loss +gradient, you need to apply tf.stop_gradient to `weights` before +passing them to `compute_weighted_loss`. + + + + +#### Eager Compatibility +The `loss_collection` argument is ignored when executing eagerly. Consider +holding on to the return value or collecting losses via a tf.keras.Model. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses/cosine_distance.md b/site/en/api_docs/python/tf/compat/v1/losses/cosine_distance.md new file mode 100644 index 00000000000..445b19a5f18 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/cosine_distance.md @@ -0,0 +1,149 @@ +description: Adds a cosine-distance loss to the training procedure. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.losses.cosine_distance + + + + + + + + + +Adds a cosine-distance loss to the training procedure. (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. +Instructions for updating: +dim is deprecated, use axis instead + +Note that the function assumes that `predictions` and `labels` are already +unit-normalized. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
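+
+#### Example:
+
+A minimal sketch with illustrative values, assuming eager execution via the
+`tf.compat.v1` API; the inputs are unit-normalized first, as the function
+requires:
+
+```python
+import tensorflow as tf
+
+# Unit-normalize both inputs along the feature axis.
+labels = tf.nn.l2_normalize(tf.constant([[1.0, 1.0]]), axis=1)
+predictions = tf.nn.l2_normalize(tf.constant([[1.0, 0.0]]), axis=1)
+
+loss = tf.compat.v1.losses.cosine_distance(labels, predictions, axis=1)
+# The cosine similarity is 1/sqrt(2) ~= 0.707, so the loss is ~= 0.293.
+```
+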
+`labels` + +`Tensor` whose shape matches 'predictions' +
+`predictions` + +An arbitrary matrix. +
+`axis` + +The dimension along which the cosine distance is computed. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `losses` dimension). +
+`scope` + +The scope for the operations performed in computing the loss. +
+`loss_collection` + +collection to which this loss will be added. +
+`reduction` + +Type of reduction to apply to loss. +
+`dim` + +The old (deprecated) name for `axis`. +
+ + + + + + + + + + + +
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same +shape as `labels`; otherwise, it is scalar. +
+ + + + + + + + + + + + +
+`ValueError` + +If `predictions` shape doesn't match `labels` shape, or +`axis`, `labels`, `predictions` or `weights` is `None`. +
+ + + + +#### Eager Compatibility +The `loss_collection` argument is ignored when executing eagerly. Consider +holding on to the return value or collecting losses via a tf.keras.Model. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses/get_losses.md b/site/en/api_docs/python/tf/compat/v1/losses/get_losses.md new file mode 100644 index 00000000000..30a8d1ca386 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/get_losses.md @@ -0,0 +1,71 @@ +description: Gets the list of losses from the loss_collection. + +
+ + +
+ +# tf.compat.v1.losses.get_losses + + + + + + + + + +Gets the list of losses from the loss_collection. + + + + + + + + + + + + + + + + + + + + +
+`scope` + +An optional scope name for filtering the losses to return. +
+`loss_collection` + +Optional losses collection. +
+ + + + + + + + + + + +
+a list of loss tensors. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/losses/get_regularization_loss.md b/site/en/api_docs/python/tf/compat/v1/losses/get_regularization_loss.md new file mode 100644 index 00000000000..72214b4ceb1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/get_regularization_loss.md @@ -0,0 +1,71 @@ +description: Gets the total regularization loss. + +
+ + +
+ +# tf.compat.v1.losses.get_regularization_loss + + + + + + + + + +Gets the total regularization loss. + + + + + + + + + + + + + + + + + + + + +
+`scope` + +An optional scope name for filtering the losses to return. +
+`name` + +The name of the returned tensor. +
+ + + + + + + + + + + +
+A scalar regularization loss. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/losses/get_regularization_losses.md b/site/en/api_docs/python/tf/compat/v1/losses/get_regularization_losses.md new file mode 100644 index 00000000000..d28900d7ffd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/get_regularization_losses.md @@ -0,0 +1,64 @@ +description: Gets the list of regularization losses. + +
+ + +
+ +# tf.compat.v1.losses.get_regularization_losses + + + + + + + + + +Gets the list of regularization losses. + + + + + + + + + + + + + + + + + +
+`scope` + +An optional scope name for filtering the losses to return. +
+ + + + + + + + + + + +
+A list of regularization losses as Tensors. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/losses/get_total_loss.md b/site/en/api_docs/python/tf/compat/v1/losses/get_total_loss.md new file mode 100644 index 00000000000..1f29ea1950c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/get_total_loss.md @@ -0,0 +1,103 @@ +description: Returns a tensor whose value represents the total loss. + +
+ + +
+ + +# tf.compat.v1.losses.get_total_loss + + + + + + + + + +Returns a tensor whose value represents the total loss. + + + + + + + +In particular, this adds any losses you have added with `tf.add_loss()` to +any regularization losses that have been added by regularization parameters +on layer constructors, e.g. `tf.layers`. Be very sure to use this if you +are constructing a loss_op manually. Otherwise, regularization arguments +on `tf.layers` methods will not function. + + + + + + + + + + + + + + + + +
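+
+#### Example:
+
+A minimal graph-mode sketch with an illustrative constant standing in for a
+real model loss (loss collections are a graph-based mechanism):
+
+```python
+import tensorflow as tf
+
+with tf.Graph().as_default():
+  # A constant stands in for a loss produced by a real model.
+  tf.compat.v1.losses.add_loss(tf.constant(1.5))
+  # No regularized layers were built, so the total is just the added loss.
+  total = tf.compat.v1.losses.get_total_loss(add_regularization_losses=True)
+  with tf.compat.v1.Session() as sess:
+    print(sess.run(total))  # 1.5
+```
+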
+`add_regularization_losses` + +A boolean indicating whether or not to use the +regularization losses in the sum. +
+`name` + +The name of the returned tensor. +
+`scope` + +An optional scope name for filtering the losses to return. Note that +this filters the losses added with `tf.add_loss()` as well as the +regularization losses to that scope. +
+ + + + + + + + + + + +
+A `Tensor` whose value represents the total loss. +
+ + + + + + + + + + + + +
+`ValueError` + +if `losses` is not iterable. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/losses/hinge_loss.md b/site/en/api_docs/python/tf/compat/v1/losses/hinge_loss.md new file mode 100644 index 00000000000..15e262b4d2b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/hinge_loss.md @@ -0,0 +1,132 @@ +description: Adds a hinge loss to the training procedure. + +
+ + +
+ +# tf.compat.v1.losses.hinge_loss + + + + + + + + + +Adds a hinge loss to the training procedure. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +The ground truth output tensor. Its shape should match the shape of +logits. The values of the tensor are expected to be 0.0 or 1.0. Internally +the {0,1} labels are converted to {-1,1} when calculating the hinge loss. +
+`logits` + +The logits, a float tensor. Note that logits are assumed to be +unbounded and 0-centered. A value > 0 (resp. < 0) is considered a positive +(resp. negative) binary prediction. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `losses` dimension). +
+`scope` + +The scope for the operations performed in computing the loss. +
+`loss_collection` + +collection to which the loss will be added. +
+`reduction` + +Type of reduction to apply to loss. +
+ + + + + + + + + + + +
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same +shape as `labels`; otherwise, it is scalar. +
+ + + + + + + + + + + + +
+`ValueError` + +If the shapes of `logits` and `labels` don't match or +if `labels` or `logits` is None. +
+ + + + +#### Eager Compatibility +The `loss_collection` argument is ignored when executing eagerly. Consider +holding on to the return value or collecting losses via a tf.keras.Model. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses/huber_loss.md b/site/en/api_docs/python/tf/compat/v1/losses/huber_loss.md new file mode 100644 index 00000000000..0d55df00f70 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/huber_loss.md @@ -0,0 +1,153 @@ +description: Adds a [Huber Loss](https://en.wikipedia.org/wiki/Huber_loss) term to the training procedure. + +
+ + +
+ +# tf.compat.v1.losses.huber_loss + + + + + + + + + +Adds a [Huber Loss](https://en.wikipedia.org/wiki/Huber_loss) term to the training procedure. + + + + + + + +For each value x in `error=labels-predictions`, the following is calculated: + +``` + 0.5 * x^2 if |x| <= d + 0.5 * d^2 + d * (|x| - d) if |x| > d +``` + +where d is `delta`. + +`weights` acts as a coefficient for the loss. If a scalar is provided, then +the loss is simply scaled by the given value. If `weights` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is rescaled +by the corresponding element in the `weights` vector. If the shape of +`weights` matches the shape of `predictions`, then the loss of each +measurable element of `predictions` is scaled by the corresponding value of +`weights`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
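+
+#### Example:
+
+A small numeric sketch of the piecewise behaviour above (illustrative values,
+eager execution assumed):
+
+```python
+import tensorflow as tf
+
+labels = tf.constant([0.0, 0.0])
+predictions = tf.constant([0.5, 2.0])  # errors of 0.5 and 2.0
+
+loss = tf.compat.v1.losses.huber_loss(
+    labels, predictions, delta=1.0,
+    reduction=tf.compat.v1.losses.Reduction.NONE)
+# |0.5| <= 1.0  ->  0.5 * 0.5**2                     = 0.125
+# |2.0| >  1.0  ->  0.5 * 1.0**2 + 1.0 * (2.0 - 1.0) = 1.5
+# loss ~= [0.125, 1.5]
+```
+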
+`labels` + +The ground truth output tensor, same dimensions as 'predictions'. +
+`predictions` + +The predicted outputs. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `losses` dimension). +
+`delta` + +`float`, the point where the Huber loss function changes from +quadratic to linear. +
+`scope` + +The scope for the operations performed in computing the loss. +
+`loss_collection` + +collection to which the loss will be added. +
+`reduction` + +Type of reduction to apply to loss. +
+ + + + + + + + + + + +
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same +shape as `labels`; otherwise, it is scalar. +
+ + + + + + + + + + + + +
+`ValueError` + +If the shape of `predictions` doesn't match that of `labels` or +if the shape of `weights` is invalid. Also if `labels` or +`predictions` is None. +
+ + + + +#### Eager Compatibility +The `loss_collection` argument is ignored when executing eagerly. Consider +holding on to the return value or collecting losses via a tf.keras.Model. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses/log_loss.md b/site/en/api_docs/python/tf/compat/v1/losses/log_loss.md new file mode 100644 index 00000000000..9e89d772a9d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/log_loss.md @@ -0,0 +1,143 @@ +description: Adds a Log Loss term to the training procedure. + +
+ + +
+ +# tf.compat.v1.losses.log_loss + + + + + + + + + +Adds a Log Loss term to the training procedure. + + + + + + + +`weights` acts as a coefficient for the loss. If a scalar is provided, then +the loss is simply scaled by the given value. If `weights` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is rescaled +by the corresponding element in the `weights` vector. If the shape of +`weights` matches the shape of `predictions`, then the loss of each +measurable element of `predictions` is scaled by the corresponding value of +`weights`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +The ground truth output tensor, same dimensions as 'predictions'. +
+`predictions` + +The predicted outputs. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `losses` dimension). +
+`epsilon` + +A small increment to add to avoid taking a log of zero. +
+`scope` + +The scope for the operations performed in computing the loss. +
+`loss_collection` + +collection to which the loss will be added. +
+`reduction` + +Type of reduction to apply to loss. +
+ + + + + + + + + + + +
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same +shape as `labels`; otherwise, it is scalar. +
+ + + + + + + + + + + + +
+`ValueError` + +If the shape of `predictions` doesn't match that of `labels` or +if the shape of `weights` is invalid. Also if `labels` or `predictions` +is None. +
+ + + + +#### Eager Compatibility +The `loss_collection` argument is ignored when executing eagerly. Consider +holding on to the return value or collecting losses via a tf.keras.Model. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses/mean_pairwise_squared_error.md b/site/en/api_docs/python/tf/compat/v1/losses/mean_pairwise_squared_error.md new file mode 100644 index 00000000000..0b4c9196f1c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/mean_pairwise_squared_error.md @@ -0,0 +1,142 @@ +description: Adds a pairwise-errors-squared loss to the training procedure. + +
+ + +
+ + +# tf.compat.v1.losses.mean_pairwise_squared_error + + + + + + + + + +Adds a pairwise-errors-squared loss to the training procedure. + + + + + + + +Unlike `mean_squared_error`, which is a measure of the differences between +corresponding elements of `predictions` and `labels`, +`mean_pairwise_squared_error` is a measure of the differences between pairs of +corresponding elements of `predictions` and `labels`. + +For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are +three pairs of differences, which are summed to compute the loss: + loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3 + +Note that since the inputs are of shape `[batch_size, d0, ... dN]`, the +corresponding pairs are computed within each batch sample but not across +samples within a batch. For example, if `predictions` represents a batch of +16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs +is drawn from each image, but not across images. + +`weights` acts as a coefficient for the loss. If a scalar is provided, then +the loss is simply scaled by the given value. If `weights` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is rescaled +by the corresponding element in the `weights` vector. + + + + + + + + + + + + + + + + + + + + + + +
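+
+#### Example:
+
+A small numeric sketch with illustrative values (eager execution assumed),
+following the pairwise formula above for a single sample:
+
+```python
+import tensorflow as tf
+
+labels = tf.constant([[0.0, 0.0, 0.0]])       # one sample, three elements
+predictions = tf.constant([[1.0, 2.0, 4.0]])  # per-element errors [1, 2, 4]
+
+loss = tf.compat.v1.losses.mean_pairwise_squared_error(labels, predictions)
+# ((1-2)**2 + (1-4)**2 + (2-4)**2) / 3 = (1 + 9 + 4) / 3 ~= 4.67
+```
+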
+`labels` + +The ground truth output tensor, whose shape must match the shape of +`predictions`. +
+`predictions` + +The predicted outputs, a tensor of size +`[batch_size, d0, .. dN]` where N+1 is the total number of dimensions in +`predictions`. +
+`weights` + +Coefficients for the loss a scalar, a tensor of shape +`[batch_size]` or a tensor whose shape matches `predictions`. +
+`scope` + +The scope for the operations performed in computing the loss. +
+`loss_collection` + +collection to which the loss will be added. +
+ + + + + + + + + + + +
+A scalar `Tensor` that returns the weighted loss. +
+ + + + + + + + + + + + +
+`ValueError` + +If the shape of `predictions` doesn't match that of `labels` or +if the shape of `weights` is invalid. Also if `labels` or `predictions` +is None. +
+ + + + +#### Eager Compatibility +The `loss_collection` argument is ignored when executing eagerly. Consider +holding on to the return value or collecting losses via a tf.keras.Model. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses/mean_squared_error.md b/site/en/api_docs/python/tf/compat/v1/losses/mean_squared_error.md new file mode 100644 index 00000000000..ad95c0d1de0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/mean_squared_error.md @@ -0,0 +1,136 @@ +description: Adds a Sum-of-Squares loss to the training procedure. + +
+ + +
+ +# tf.compat.v1.losses.mean_squared_error + + + + + + + + + +Adds a Sum-of-Squares loss to the training procedure. + + + + + + + +`weights` acts as a coefficient for the loss. If a scalar is provided, then +the loss is simply scaled by the given value. If `weights` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is rescaled +by the corresponding element in the `weights` vector. If the shape of +`weights` matches the shape of `predictions`, then the loss of each +measurable element of `predictions` is scaled by the corresponding value of +`weights`. + + + + + + + + + + + + + + + + + + + + + + + + + +
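+
+#### Example:
+
+A minimal sketch with illustrative values (eager execution assumed):
+
+```python
+import tensorflow as tf
+
+labels = tf.constant([1.0, 2.0, 3.0])
+predictions = tf.constant([1.0, 2.0, 5.0])
+
+loss = tf.compat.v1.losses.mean_squared_error(labels, predictions)
+# Squared errors are [0, 0, 4]; the default reduction averages them, ~= 1.33.
+```
+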
+`labels` + +The ground truth output tensor, same dimensions as 'predictions'. +
+`predictions` + +The predicted outputs. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `losses` dimension). +
+`scope` + +The scope for the operations performed in computing the loss. +
+`loss_collection` + +collection to which the loss will be added. +
+`reduction` + +Type of reduction to apply to loss. +
+ + + + + + + + + + + +
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same +shape as `labels`; otherwise, it is scalar. +
+ + + + + + + + + + + + +
+`ValueError` + +If the shape of `predictions` doesn't match that of `labels` or +if the shape of `weights` is invalid. Also if `labels` or `predictions` +is None. +
+ + + + +#### Eager Compatibility +The `loss_collection` argument is ignored when executing eagerly. Consider +holding on to the return value or collecting losses via a tf.keras.Model. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses/sigmoid_cross_entropy.md b/site/en/api_docs/python/tf/compat/v1/losses/sigmoid_cross_entropy.md new file mode 100644 index 00000000000..75f92ca69f3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/sigmoid_cross_entropy.md @@ -0,0 +1,146 @@ +description: Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits. + +
+ + +
+ +# tf.compat.v1.losses.sigmoid_cross_entropy + + + + + + + + + +Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits. + + + + + + + +`weights` acts as a coefficient for the loss. If a scalar is provided, +then the loss is simply scaled by the given value. If `weights` is a +tensor of shape `[batch_size]`, then the loss weights apply to each +corresponding sample. + +If `label_smoothing` is nonzero, smooth the labels towards 1/2: + + new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + + 0.5 * label_smoothing + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
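+
+#### Example:
+
+A minimal sketch with illustrative values (eager execution assumed), showing
+the label-smoothing step described above:
+
+```python
+import tensorflow as tf
+
+multi_class_labels = tf.constant([[1.0, 0.0, 1.0]])
+logits = tf.constant([[2.0, -1.0, 0.5]])
+
+loss = tf.compat.v1.losses.sigmoid_cross_entropy(
+    multi_class_labels, logits, label_smoothing=0.1)
+# Labels are smoothed towards 0.5 first (1 -> 0.95, 0 -> 0.05), then the
+# element-wise sigmoid cross-entropy is reduced to a scalar.
+```
+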
+`multi_class_labels` + +`[batch_size, num_classes]` target integer labels in +`{0, 1}`. +
+`logits` + +Float `[batch_size, num_classes]` logits outputs of the network. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `losses` dimension). +
+`label_smoothing` + +If greater than `0` then smooth the labels. +
+`scope` + +The scope for the operations performed in computing the loss. +
+`loss_collection` + +collection to which the loss will be added. +
+`reduction` + +Type of reduction to apply to loss. +
+ + + + + + + + + + + +
+Weighted loss `Tensor` of the same type as `logits`. If `reduction` is +`NONE`, this has the same shape as `logits`; otherwise, it is scalar. +
+ + + + + + + + + + + + +
+`ValueError` + +If the shape of `logits` doesn't match that of +`multi_class_labels` or if the shape of `weights` is invalid, or if +`weights` is None. Also if `multi_class_labels` or `logits` is None. +
+ + + + +#### Eager Compatibility +The `loss_collection` argument is ignored when executing eagerly. Consider +holding on to the return value or collecting losses via a tf.keras.Model. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses/softmax_cross_entropy.md b/site/en/api_docs/python/tf/compat/v1/losses/softmax_cross_entropy.md new file mode 100644 index 00000000000..f880f195ee1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/softmax_cross_entropy.md @@ -0,0 +1,148 @@ +description: Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2. + +
+ + +
+ +# tf.compat.v1.losses.softmax_cross_entropy + + + + + + + + + +Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2. + + + + + + + +`weights` acts as a coefficient for the loss. If a scalar is provided, +then the loss is simply scaled by the given value. If `weights` is a +tensor of shape `[batch_size]`, then the loss weights apply to each +corresponding sample. + +If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes: + new_onehot_labels = onehot_labels * (1 - label_smoothing) + + label_smoothing / num_classes + +Note that `onehot_labels` and `logits` must have the same shape, +e.g. `[batch_size, num_classes]`. The shape of `weights` must be +broadcastable to loss, whose shape is decided by the shape of `logits`. +In case the shape of `logits` is `[batch_size, num_classes]`, loss is +a `Tensor` of shape `[batch_size]`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
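+
+#### Example:
+
+A minimal sketch with illustrative values (eager execution assumed):
+
+```python
+import tensorflow as tf
+
+onehot_labels = tf.constant([[0.0, 1.0, 0.0],
+                             [1.0, 0.0, 0.0]])
+logits = tf.constant([[1.0, 2.0, 0.5],
+                      [2.0, 0.5, 0.5]])
+
+loss = tf.compat.v1.losses.softmax_cross_entropy(onehot_labels, logits)
+# A per-example loss of shape [2] is computed first; the default reduction
+# then averages it into a scalar.
+```
+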
+`onehot_labels` + +One-hot-encoded labels. +
+`logits` + +Logits outputs of the network. +
+`weights` + +Optional `Tensor` that is broadcastable to loss. +
+`label_smoothing` + +If greater than 0 then smooth the labels. +
+`scope` + +the scope for the operations performed in computing the loss. +
+`loss_collection` + +collection to which the loss will be added. +
+`reduction` + +Type of reduction to apply to loss. +
+ + + + + + + + + + + +
+Weighted loss `Tensor` of the same type as `logits`. If `reduction` is +`NONE`, this has shape `[batch_size]`; otherwise, it is scalar. +
+ + + + + + + + + + + + +
+`ValueError` + +If the shape of `logits` doesn't match that of `onehot_labels` +or if the shape of `weights` is invalid or if `weights` is None. Also if +`onehot_labels` or `logits` is None. +
+ + + + +#### Eager Compatibility +The `loss_collection` argument is ignored when executing eagerly. Consider +holding on to the return value or collecting losses via a tf.keras.Model. + diff --git a/site/en/api_docs/python/tf/compat/v1/losses/sparse_softmax_cross_entropy.md b/site/en/api_docs/python/tf/compat/v1/losses/sparse_softmax_cross_entropy.md new file mode 100644 index 00000000000..a25b24f8095 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/losses/sparse_softmax_cross_entropy.md @@ -0,0 +1,137 @@ +description: Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits. + +
+ + +
+ +# tf.compat.v1.losses.sparse_softmax_cross_entropy + + + + + + + + + +Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits. + + + + + + + +`weights` acts as a coefficient for the loss. If a scalar is provided, +then the loss is simply scaled by the given value. If `weights` is a +tensor of shape `[batch_size]`, then the loss weights apply to each +corresponding sample. + + + + + + + + + + + + + + + + + + + + + + + + + +
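+
+#### Example:
+
+A minimal sketch with illustrative values (eager execution assumed):
+
+```python
+import tensorflow as tf
+
+labels = tf.constant([1, 0])             # class indices, shape [2]
+logits = tf.constant([[1.0, 2.0, 0.5],
+                      [2.0, 0.5, 0.5]])  # shape [2, num_classes]
+
+loss = tf.compat.v1.losses.sparse_softmax_cross_entropy(labels, logits)
+# Equivalent to softmax_cross_entropy with the one-hot labels
+# [[0, 1, 0], [1, 0, 0]].
+```
+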
+`labels` + +`Tensor` of shape `[d_0, d_1, ..., d_{r-1}]` (where `r` is rank of +`labels` and result) and dtype `int32` or `int64`. Each entry in `labels` +must be an index in `[0, num_classes)`. Other values will raise an +exception when this op is run on CPU, and return `NaN` for corresponding +loss and gradient rows on GPU. +
+`logits` + +Unscaled log probabilities of shape +`[d_0, d_1, ..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or +`float64`. +
+`weights` + +Coefficients for the loss. This must be scalar or broadcastable to +`labels` (i.e. same rank and each dimension is either 1 or the same). +
+`scope` + +the scope for the operations performed in computing the loss. +
+`loss_collection` + +collection to which the loss will be added. +
+`reduction` + +Type of reduction to apply to loss. +
+ + + + + + + + + + + +
+Weighted loss `Tensor` of the same type as `logits`. If `reduction` is +`NONE`, this has the same shape as `labels`; otherwise, it is scalar. +
+ + + + + + + + + + + + +
+`ValueError` + +If the shapes of `logits`, `labels`, and `weights` are +incompatible, or if any of them are None. +
+ + + + +#### Eager Compatibility +The `loss_collection` argument is ignored when executing eagerly. Consider +holding on to the return value or collecting losses via a tf.keras.Model. + diff --git a/site/en/api_docs/python/tf/compat/v1/make_template.md b/site/en/api_docs/python/tf/compat/v1/make_template.md new file mode 100644 index 00000000000..b3a7f9a976d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/make_template.md @@ -0,0 +1,208 @@ +description: Given an arbitrary function, wrap it so that it does variable sharing. + +
+ + +
+ + +# tf.compat.v1.make_template + + + + + + + + + +Given an arbitrary function, wrap it so that it does variable sharing. + + + + + + + +This wraps `func_` in a Template and partially evaluates it. Templates are +functions that create variables the first time they are called and reuse them +thereafter. In order for `func_` to be compatible with a `Template` it must +have the following properties: + +* The function should create all trainable variables and any variables that + should be reused by calling tf.compat.v1.get_variable. If a trainable + variable is + created using tf.Variable, then a ValueError will be thrown. Variables + that are intended to be locals can be created by specifying + tf.Variable(..., trainable=False). +* The function may use variable scopes and other templates internally to + create and reuse variables, but it shouldn't use + tf.compat.v1.global_variables to + capture variables that are defined outside of the scope of the function. +* Internal scopes and variable names should not depend on any arguments that + are not supplied to `make_template`. In general you will get a ValueError + telling you that you are trying to reuse a variable that doesn't exist + if you make a mistake. + +In the following example, both `z` and `w` will be scaled by the same `y`. It +is important to note that if we didn't assign `scalar_name` and used a +different name for z and w, a `ValueError` would be thrown because the +variable could not be reused. + +```python +def my_op(x, scalar_name): + var1 = tf.compat.v1.get_variable(scalar_name, + shape=[], + initializer=tf.compat.v1.constant_initializer(1)) + return x * var1 + +scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y') + +z = scale_by_y(input1) +w = scale_by_y(input2) +``` + +As a safeguard, the returned function will raise a `ValueError` after the +first call if trainable variables are created by calling tf.Variable. + +If all of these are true, then two properties are enforced by the template: + +1. Calling the same template multiple times will share all non-local + variables. +2. Two different templates are guaranteed to be unique, unless you reenter the + same variable scope as the initial definition of a template and redefine + it. An example of this exception: + +```python +def my_op(x, scalar_name): + var1 = tf.compat.v1.get_variable(scalar_name, + shape=[], + initializer=tf.compat.v1.constant_initializer(1)) + return x * var1 + +with tf.compat.v1.variable_scope('scope') as vs: + scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op, + scalar_name='y') + z = scale_by_y(input1) + w = scale_by_y(input2) + +# Creates a template that reuses the variables above. +with tf.compat.v1.variable_scope(vs, reuse=True): + scale_by_y2 = tf.compat.v1.make_template('scale_by_y', my_op, + scalar_name='y') + z2 = scale_by_y2(input1) + w2 = scale_by_y2(input2) +``` + +Depending on the value of `create_scope_now_`, the full variable scope may be +captured either at the time of first call or at the time of construction. If +this option is set to True, then all Tensors created by repeated calls to the +template will have an extra trailing _N+1 to their name, as the first time the +scope is entered in the Template constructor no Tensors are created. + +Note: `name_`, `func_` and `create_scope_now_` have a trailing underscore to +reduce the likelihood of collisions with kwargs. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name_` + +A name for the scope created by this template. If necessary, the name +will be made unique by appending `_N` to the name. +
+`func_` + +The function to wrap. +
+`create_scope_now_` + +Boolean controlling whether the scope should be created +when the template is constructed or when the template is called. Default +is False, meaning the scope is created when the template is called. +
+`unique_name_` + +When used, it overrides name_ and is not made unique. If a +template of the same scope/unique_name already exists and reuse is false, +an error is raised. Defaults to None. +
+`custom_getter_` + +Optional custom getter for variables used in `func_`. See +the tf.compat.v1.get_variable `custom_getter` documentation for more +information. +
+`**kwargs` + +Keyword arguments to apply to `func_`. +
+ + + + + + + + + + + +
+A function to encapsulate a set of variables which should be created once +and reused. An enclosing scope will be created either when `make_template` +is called or when the result is called, depending on the value of +`create_scope_now_`. Regardless of the value, the first time the template +is called it will enter the scope with no reuse, and call `func_` to create +variables, which are guaranteed to be unique. All subsequent calls will +re-enter the scope and reuse those variables. +
+ + + + + + + + + + + + +
+`ValueError` + +if `name_` is None. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/manip.md b/site/en/api_docs/python/tf/compat/v1/manip.md new file mode 100644 index 00000000000..87bde1100dc --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/manip.md @@ -0,0 +1,39 @@ +description: Operators for manipulating tensors. + +
+ + +
+ +# Module: tf.compat.v1.manip + + + + + + + + + +Operators for manipulating tensors. + + + +## Functions + +[`batch_to_space_nd(...)`](../../../tf/compat/v1/batch_to_space_nd.md): BatchToSpace for N-D tensors of type T. + +[`gather_nd(...)`](../../../tf/compat/v1/gather_nd.md): Gather slices from `params` into a Tensor with shape specified by `indices`. + +[`reshape(...)`](../../../tf/reshape.md): Reshapes a tensor. + +[`reverse(...)`](../../../tf/reverse.md): Reverses specific dimensions of a tensor. + +[`roll(...)`](../../../tf/roll.md): Rolls the elements of a tensor along an axis. + +[`scatter_nd(...)`](../../../tf/scatter_nd.md): Scatter `updates` into a new tensor according to `indices`. + +[`space_to_batch_nd(...)`](../../../tf/space_to_batch_nd.md): SpaceToBatch for N-D tensors of type T. + +[`tile(...)`](../../../tf/tile.md): Constructs a tensor by tiling a given tensor. + diff --git a/site/en/api_docs/python/tf/compat/v1/map_fn.md b/site/en/api_docs/python/tf/compat/v1/map_fn.md new file mode 100644 index 00000000000..e622f58e159 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/map_fn.md @@ -0,0 +1,228 @@ +description: map on the list of tensors unpacked from elems on dimension 0. + +
+ + +
+ +# tf.compat.v1.map_fn + + + + + + + + + +map on the list of tensors unpacked from `elems` on dimension 0. + + + + + + + +The simplest version of `map_fn` repeatedly applies the callable `fn` to a +sequence of elements from first to last. The elements are made of the +tensors unpacked from `elems`. `dtype` is the data type of the return +value of `fn`. Users must provide `dtype` if it is different from +the data type of `elems`. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is `[values.shape[0]] + fn(values[0]).shape`. + +This method also allows multi-arity `elems` and output of `fn`. If `elems` +is a (possibly nested) list or tuple of tensors, then each of these tensors +must have a matching first (unpack) dimension. The signature of `fn` may +match the structure of `elems`. That is, if `elems` is +`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is: +`fn = lambda (t1, [t2, t3, [t4, t5]]):`. + +Furthermore, `fn` may emit a different structure than its input. For example, +`fn` may look like: `fn = lambda t1: return (t1 + 1, t1 - 1)`. In this case, +the `dtype` parameter is not optional: `dtype` must be a type or (possibly +nested) tuple of types matching the output of `fn`. + +To apply a functional operation to the nonzero elements of a SparseTensor +one of the following methods is recommended. First, if the function is +expressible as TensorFlow ops, use + +```python + result = SparseTensor(input.indices, fn(input.values), input.dense_shape) +``` + +If, however, the function is not expressible as a TensorFlow op, then use + +```python +result = SparseTensor( + input.indices, map_fn(fn, input.values), input.dense_shape) +``` + +instead. + +When executing eagerly, map_fn does not execute in parallel even if +`parallel_iterations` is set to a value > 1. You can still get the +performance benefits of running a function in parallel by using the +tf.function decorator, + +```python +# Assume the function being used in map_fn is fn. +# To ensure map_fn calls fn in parallel, use the tf.function decorator. +@tf.function +def func(tensor): + return tf.map_fn(fn, tensor) +``` + +Note that if you use the tf.function decorator, any non-TensorFlow Python +code that you may have written in your function won't get executed. See +[`tf.function`](https://www.tensorflow.org/api_docs/python/tf/function) for +more details. The recommendation would be to debug without tf.function but +switch to it to get performance benefits of running `map_fn` in parallel. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The callable to be performed. It accepts one argument, which will +have the same (possibly nested) structure as `elems`. Its output +must have the same structure as `dtype` if one is provided, otherwise +it must have the same structure as `elems`. +
+`elems` + +A tensor or (possibly nested) sequence of tensors, each of which +will be unpacked along their first dimension. The nested sequence +of the resulting slices will be applied to `fn`. +
+`dtype` + +(optional) The output type(s) of `fn`. If `fn` returns a structure +of Tensors differing from the structure of `elems`, then `dtype` is not +optional and must have the same structure as the output of `fn`. +
+`parallel_iterations` + +(optional) The number of iterations allowed to run +in parallel. When graph building, the default value is 10. While executing +eagerly, the default value is set to 1. +
+`back_prop` + +(optional) True enables support for back propagation. +
+`swap_memory` + +(optional) True enables GPU-CPU memory swapping. +
+`infer_shape` + +(optional) False disables tests for consistent output shapes. +
+`name` + +(optional) Name prefix for the returned tensors. +
+ + + + + + + + + + + +
+A tensor or (possibly nested) sequence of tensors. Each tensor packs the +results of applying `fn` to tensors unpacked from `elems` along the first +dimension, from first to last. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `fn` is not callable or the structure of the output of +`fn` and `dtype` do not match, or if elems is a SparseTensor. +
+`ValueError` + +if the lengths of the output of `fn` and `dtype` do not match. +
+ + + +#### Examples: + +```python +elems = np.array([1, 2, 3, 4, 5, 6]) +squares = map_fn(lambda x: x * x, elems) +# squares == [1, 4, 9, 16, 25, 36] +``` + +```python +elems = (np.array([1, 2, 3]), np.array([-1, 1, -1])) +alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64) +# alternate == [-1, 2, -3] +``` + +```python +elems = np.array([1, 2, 3]) +alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64)) +# alternates[0] == [1, 2, 3] +# alternates[1] == [-1, -2, -3] +``` diff --git a/site/en/api_docs/python/tf/compat/v1/math.md b/site/en/api_docs/python/tf/compat/v1/math.md new file mode 100644 index 00000000000..9930504e0e8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/math.md @@ -0,0 +1,335 @@ +description: Math Operations. + +
+ + +
+ +# Module: tf.compat.v1.math + + + + + + + + + +Math Operations. + + +Note: Functions taking `Tensor` arguments can also take anything accepted by +tf.convert_to_tensor. + +Note: Elementwise binary operations in TensorFlow follow [numpy-style +broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). + +TensorFlow provides a variety of math functions including: + +* Basic arithmetic operators and trigonometric functions. +* Special math functions (like: tf.math.igamma and tf.math.zeta) +* Complex number functions (like: tf.math.imag and tf.math.angle) +* Reductions and scans (like: tf.math.reduce_mean and tf.math.cumsum) +* Segment functions (like: tf.math.segment_sum) + +See: tf.linalg for matrix and tensor functions. + + + +## About Segmentation + +TensorFlow provides several operations that you can use to perform common +math computations on tensor segments. +Here a segmentation is a partitioning of a tensor along +the first dimension, i.e. it defines a mapping from the first dimension onto +`segment_ids`. The `segment_ids` tensor should be the size of +the first dimension, `d0`, with consecutive IDs in the range `0` to `k`, +where `k < d0`. + +For example: + +``` python +c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) +tf.math.segment_sum(c, tf.constant([0, 0, 1])) +# ==> [[0 0 0 0] +# [5 6 7 8]] +``` + +The standard `segment_*` functions assert that the segment indices are sorted. +If you have unsorted indices use the equivalent `unsorted_segment_` function. +These functions take an additional argument `num_segments` so that the output +tensor can be efficiently allocated. + +``` python +c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) +tf.math.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2) +# ==> [[ 6, 8, 10, 12], +# [-1, -2, -3, -4]] +``` + +## Modules + +[`special`](../../../tf/compat/v1/math/special.md) module: Public API for tf.math.special namespace. + +## Functions + +[`abs(...)`](../../../tf/math/abs.md): Computes the absolute value of a tensor. + +[`accumulate_n(...)`](../../../tf/math/accumulate_n.md): Returns the element-wise sum of a list of tensors. + +[`acos(...)`](../../../tf/math/acos.md): Computes acos of x element-wise. + +[`acosh(...)`](../../../tf/math/acosh.md): Computes inverse hyperbolic cosine of x element-wise. + +[`add(...)`](../../../tf/math/add.md): Returns x + y element-wise. + +[`add_n(...)`](../../../tf/math/add_n.md): Adds all input tensors element-wise. + +[`angle(...)`](../../../tf/math/angle.md): Returns the element-wise argument of a complex (or real) tensor. + +[`argmax(...)`](../../../tf/compat/v1/argmax.md): Returns the index with the largest value across axes of a tensor. (deprecated arguments) + +[`argmin(...)`](../../../tf/compat/v1/argmin.md): Returns the index with the smallest value across axes of a tensor. (deprecated arguments) + +[`asin(...)`](../../../tf/math/asin.md): Computes the trigonometric inverse sine of x element-wise. + +[`asinh(...)`](../../../tf/math/asinh.md): Computes inverse hyperbolic sine of x element-wise. + +[`atan(...)`](../../../tf/math/atan.md): Computes the trigonometric inverse tangent of x element-wise. + +[`atan2(...)`](../../../tf/math/atan2.md): Computes arctangent of `y/x` element-wise, respecting signs of the arguments. + +[`atanh(...)`](../../../tf/math/atanh.md): Computes inverse hyperbolic tangent of x element-wise. + +[`bessel_i0(...)`](../../../tf/math/bessel_i0.md): Computes the Bessel i0 function of `x` element-wise. + +[`bessel_i0e(...)`](../../../tf/math/bessel_i0e.md): Computes the Bessel i0e function of `x` element-wise. 
+ +[`bessel_i1(...)`](../../../tf/math/bessel_i1.md): Computes the Bessel i1 function of `x` element-wise. + +[`bessel_i1e(...)`](../../../tf/math/bessel_i1e.md): Computes the Bessel i1e function of `x` element-wise. + +[`betainc(...)`](../../../tf/math/betainc.md): Compute the regularized incomplete beta integral \\(I_x(a, b)\\). + +[`bincount(...)`](../../../tf/compat/v1/bincount.md): Counts the number of occurrences of each value in an integer array. + +[`ceil(...)`](../../../tf/math/ceil.md): Return the ceiling of the input, element-wise. + +[`confusion_matrix(...)`](../../../tf/compat/v1/confusion_matrix.md): Computes the confusion matrix from predictions and labels. + +[`conj(...)`](../../../tf/math/conj.md): Returns the complex conjugate of a complex number. + +[`cos(...)`](../../../tf/math/cos.md): Computes cos of x element-wise. + +[`cosh(...)`](../../../tf/math/cosh.md): Computes hyperbolic cosine of x element-wise. + +[`count_nonzero(...)`](../../../tf/compat/v1/count_nonzero.md): Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) (deprecated arguments) + +[`cumprod(...)`](../../../tf/math/cumprod.md): Compute the cumulative product of the tensor `x` along `axis`. + +[`cumsum(...)`](../../../tf/math/cumsum.md): Compute the cumulative sum of the tensor `x` along `axis`. + +[`cumulative_logsumexp(...)`](../../../tf/math/cumulative_logsumexp.md): Compute the cumulative log-sum-exp of the tensor `x` along `axis`. + +[`digamma(...)`](../../../tf/math/digamma.md): Computes Psi, the derivative of Lgamma (the log of the absolute value of + +[`divide(...)`](../../../tf/math/divide.md): Computes Python style division of `x` by `y`. + +[`divide_no_nan(...)`](../../../tf/math/divide_no_nan.md): Computes a safe divide which returns 0 if the y is zero. + +[`equal(...)`](../../../tf/math/equal.md): Returns the truth value of (x == y) element-wise. + +[`erf(...)`](../../../tf/math/erf.md): Computes the Gauss error function of `x` element-wise. + +[`erfc(...)`](../../../tf/math/erfc.md): Computes the complementary error function of `x` element-wise. + +[`erfinv(...)`](../../../tf/math/erfinv.md): Compute inverse error function. + +[`exp(...)`](../../../tf/math/exp.md): Computes exponential of x element-wise. \\(y = e^x\\). + +[`expm1(...)`](../../../tf/math/expm1.md): Computes `exp(x) - 1` element-wise. + +[`floor(...)`](../../../tf/math/floor.md): Returns element-wise largest integer not greater than x. + +[`floordiv(...)`](../../../tf/math/floordiv.md): Divides `x / y` elementwise, rounding toward the most negative integer. + +[`floormod(...)`](../../../tf/math/floormod.md): Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + +[`greater(...)`](../../../tf/math/greater.md): Returns the truth value of (x > y) element-wise. + +[`greater_equal(...)`](../../../tf/math/greater_equal.md): Returns the truth value of (x >= y) element-wise. + +[`igamma(...)`](../../../tf/math/igamma.md): Compute the lower regularized incomplete Gamma function `P(a, x)`. + +[`igammac(...)`](../../../tf/math/igammac.md): Compute the upper regularized incomplete Gamma function `Q(a, x)`. + +[`imag(...)`](../../../tf/math/imag.md): Returns the imaginary part of a complex (or real) tensor. + +[`in_top_k(...)`](../../../tf/compat/v1/math/in_top_k.md): Says whether the targets are in the top `K` predictions. + +[`invert_permutation(...)`](../../../tf/math/invert_permutation.md): Computes the inverse permutation of a tensor. 
+ +[`is_finite(...)`](../../../tf/math/is_finite.md): Returns which elements of x are finite. + +[`is_inf(...)`](../../../tf/math/is_inf.md): Returns which elements of x are Inf. + +[`is_nan(...)`](../../../tf/math/is_nan.md): Returns which elements of x are NaN. + +[`is_non_decreasing(...)`](../../../tf/math/is_non_decreasing.md): Returns `True` if `x` is non-decreasing. + +[`is_strictly_increasing(...)`](../../../tf/math/is_strictly_increasing.md): Returns `True` if `x` is strictly increasing. + +[`l2_normalize(...)`](../../../tf/compat/v1/linalg/l2_normalize.md): Normalizes along dimension `axis` using an L2 norm. (deprecated arguments) + +[`lbeta(...)`](../../../tf/math/lbeta.md): Computes \\(ln(|Beta(x)|)\\), reducing along the last dimension. + +[`less(...)`](../../../tf/math/less.md): Returns the truth value of (x < y) element-wise. + +[`less_equal(...)`](../../../tf/math/less_equal.md): Returns the truth value of (x <= y) element-wise. + +[`lgamma(...)`](../../../tf/math/lgamma.md): Computes the log of the absolute value of `Gamma(x)` element-wise. + +[`log(...)`](../../../tf/math/log.md): Computes natural logarithm of x element-wise. + +[`log1p(...)`](../../../tf/math/log1p.md): Computes natural logarithm of (1 + x) element-wise. + +[`log_sigmoid(...)`](../../../tf/math/log_sigmoid.md): Computes log sigmoid of `x` element-wise. + +[`log_softmax(...)`](../../../tf/compat/v1/math/log_softmax.md): Computes log softmax activations. (deprecated arguments) + +[`logical_and(...)`](../../../tf/math/logical_and.md): Logical AND function. + +[`logical_not(...)`](../../../tf/math/logical_not.md): Returns the truth value of `NOT x` element-wise. + +[`logical_or(...)`](../../../tf/math/logical_or.md): Returns the truth value of x OR y element-wise. + +[`logical_xor(...)`](../../../tf/math/logical_xor.md): Logical XOR function. + +[`maximum(...)`](../../../tf/math/maximum.md): Returns the max of x and y (i.e. x > y ? x : y) element-wise. + +[`minimum(...)`](../../../tf/math/minimum.md): Returns the min of x and y (i.e. x < y ? x : y) element-wise. + +[`mod(...)`](../../../tf/math/floormod.md): Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + +[`multiply(...)`](../../../tf/math/multiply.md): Returns an element-wise x * y. + +[`multiply_no_nan(...)`](../../../tf/math/multiply_no_nan.md): Computes the product of x and y and returns 0 if the y is zero, even if x is NaN or infinite. + +[`ndtri(...)`](../../../tf/math/ndtri.md): Compute quantile of Standard Normal. + +[`negative(...)`](../../../tf/math/negative.md): Computes numerical negative value element-wise. + +[`nextafter(...)`](../../../tf/math/nextafter.md): Returns the next representable value of `x1` in the direction of `x2`, element-wise. + +[`not_equal(...)`](../../../tf/math/not_equal.md): Returns the truth value of (x != y) element-wise. + +[`polygamma(...)`](../../../tf/math/polygamma.md): Compute the polygamma function \\(\psi^{(n)}(x)\\). + +[`polyval(...)`](../../../tf/math/polyval.md): Computes the elementwise value of a polynomial. + +[`pow(...)`](../../../tf/math/pow.md): Computes the power of one value to another. + +[`real(...)`](../../../tf/math/real.md): Returns the real part of a complex (or real) tensor. + +[`reciprocal(...)`](../../../tf/math/reciprocal.md): Computes the reciprocal of x element-wise. + +[`reciprocal_no_nan(...)`](../../../tf/math/reciprocal_no_nan.md): Performs a safe reciprocal operation, element wise. 
+ +[`reduce_all(...)`](../../../tf/compat/v1/reduce_all.md): Computes the "logical and" of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_any(...)`](../../../tf/compat/v1/reduce_any.md): Computes the "logical or" of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_euclidean_norm(...)`](../../../tf/math/reduce_euclidean_norm.md): Computes the Euclidean norm of elements across dimensions of a tensor. + +[`reduce_logsumexp(...)`](../../../tf/compat/v1/reduce_logsumexp.md): Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments) + +[`reduce_max(...)`](../../../tf/compat/v1/reduce_max.md): Computes the maximum of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_mean(...)`](../../../tf/compat/v1/reduce_mean.md): Computes the mean of elements across dimensions of a tensor. + +[`reduce_min(...)`](../../../tf/compat/v1/reduce_min.md): Computes the minimum of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_prod(...)`](../../../tf/compat/v1/reduce_prod.md): Computes the product of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_std(...)`](../../../tf/math/reduce_std.md): Computes the standard deviation of elements across dimensions of a tensor. + +[`reduce_sum(...)`](../../../tf/compat/v1/reduce_sum.md): Computes the sum of elements across dimensions of a tensor. (deprecated arguments) + +[`reduce_variance(...)`](../../../tf/math/reduce_variance.md): Computes the variance of elements across dimensions of a tensor. + +[`rint(...)`](../../../tf/math/rint.md): Returns element-wise integer closest to x. + +[`round(...)`](../../../tf/math/round.md): Rounds the values of a tensor to the nearest integer, element-wise. + +[`rsqrt(...)`](../../../tf/math/rsqrt.md): Computes reciprocal of square root of x element-wise. + +[`scalar_mul(...)`](../../../tf/compat/v1/scalar_mul.md): Multiplies a scalar times a `Tensor` or `IndexedSlices` object. + +[`segment_max(...)`](../../../tf/math/segment_max.md): Computes the maximum along segments of a tensor. + +[`segment_mean(...)`](../../../tf/math/segment_mean.md): Computes the mean along segments of a tensor. + +[`segment_min(...)`](../../../tf/math/segment_min.md): Computes the minimum along segments of a tensor. + +[`segment_prod(...)`](../../../tf/math/segment_prod.md): Computes the product along segments of a tensor. + +[`segment_sum(...)`](../../../tf/math/segment_sum.md): Computes the sum along segments of a tensor. + +[`sigmoid(...)`](../../../tf/math/sigmoid.md): Computes sigmoid of `x` element-wise. + +[`sign(...)`](../../../tf/math/sign.md): Returns an element-wise indication of the sign of a number. + +[`sin(...)`](../../../tf/math/sin.md): Computes sine of x element-wise. + +[`sinh(...)`](../../../tf/math/sinh.md): Computes hyperbolic sine of x element-wise. + +[`sobol_sample(...)`](../../../tf/math/sobol_sample.md): Generates points from the Sobol sequence. + +[`softmax(...)`](../../../tf/compat/v1/math/softmax.md): Computes softmax activations. (deprecated arguments) + +[`softplus(...)`](../../../tf/math/softplus.md): Computes softplus: `log(exp(features) + 1)`. + +[`softsign(...)`](../../../tf/nn/softsign.md): Computes softsign: `features / (abs(features) + 1)`. + +[`sqrt(...)`](../../../tf/math/sqrt.md): Computes element-wise square root of the input tensor. + +[`square(...)`](../../../tf/math/square.md): Computes square of x element-wise. 
+ +[`squared_difference(...)`](../../../tf/math/squared_difference.md): Returns (x - y)(x - y) element-wise. + +[`subtract(...)`](../../../tf/math/subtract.md): Returns x - y element-wise. + +[`tan(...)`](../../../tf/math/tan.md): Computes tan of x element-wise. + +[`tanh(...)`](../../../tf/math/tanh.md): Computes hyperbolic tangent of `x` element-wise. + +[`top_k(...)`](../../../tf/math/top_k.md): Finds values and indices of the `k` largest entries for the last dimension. + +[`truediv(...)`](../../../tf/math/truediv.md): Divides x / y elementwise (using Python 3 division operator semantics). + +[`unsorted_segment_max(...)`](../../../tf/math/unsorted_segment_max.md): Computes the maximum along segments of a tensor. + +[`unsorted_segment_mean(...)`](../../../tf/math/unsorted_segment_mean.md): Computes the mean along segments of a tensor. + +[`unsorted_segment_min(...)`](../../../tf/math/unsorted_segment_min.md): Computes the minimum along segments of a tensor. + +[`unsorted_segment_prod(...)`](../../../tf/math/unsorted_segment_prod.md): Computes the product along segments of a tensor. + +[`unsorted_segment_sqrt_n(...)`](../../../tf/math/unsorted_segment_sqrt_n.md): Computes the sum along segments of a tensor divided by the sqrt(N). + +[`unsorted_segment_sum(...)`](../../../tf/math/unsorted_segment_sum.md): Computes the sum along segments of a tensor. + +[`xdivy(...)`](../../../tf/math/xdivy.md): Returns 0 if x == 0, and x / y otherwise, elementwise. + +[`xlog1py(...)`](../../../tf/math/xlog1py.md): Compute x * log1p(y). + +[`xlogy(...)`](../../../tf/math/xlogy.md): Returns 0 if x == 0, and x * log(y) otherwise, elementwise. + +[`zero_fraction(...)`](../../../tf/math/zero_fraction.md): Returns the fraction of zeros in `value`. + +[`zeta(...)`](../../../tf/math/zeta.md): Compute the Hurwitz zeta function \\(\zeta(x, q)\\). + diff --git a/site/en/api_docs/python/tf/compat/v1/math/in_top_k.md b/site/en/api_docs/python/tf/compat/v1/math/in_top_k.md new file mode 100644 index 00000000000..165182f743f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/math/in_top_k.md @@ -0,0 +1,112 @@ +description: Says whether the targets are in the top K predictions. + +
+ + +
+ +# tf.compat.v1.math.in_top_k + + + + + + + + + +Says whether the targets are in the top `K` predictions. + + + + + + + + + +This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the +prediction for the target class is finite (not inf, -inf, or nan) and among +the top `k` predictions among all predictions for example `i`. Note that the +behavior of `InTopK` differs from the `TopK` op in its handling of ties; if +multiple classes have the same prediction value and straddle the top-`k` +boundary, all of those classes are considered to be in the top `k`. + +More formally, let + + \\(predictions_i\\) be the predictions for all classes for example `i`, + \\(targets_i\\) be the target class for example `i`, + \\(out_i\\) be the output for example `i`, + +$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$ + + + + + + + + + + + + + + + + + + + +
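+
+#### Example:
+
+A minimal sketch with illustrative values (eager execution assumed); note the
+v1 argument order `(predictions, targets, k)` used below:
+
+```python
+import tensorflow as tf
+
+predictions = tf.constant([[0.1, 0.8, 0.1],
+                           [0.6, 0.3, 0.1]])
+targets = tf.constant([1, 2])
+
+result = tf.compat.v1.math.in_top_k(predictions, targets, k=2)
+# Target 1 has the highest score in example 0          -> True
+# Target 2 has only the 3rd-highest score in example 1 -> False
+# result ~= [True, False]
+```
+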
+`predictions` + +A `Tensor` of type `float32`. +A `batch_size` x `classes` tensor. +
+`targets` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A `batch_size` vector of class ids. +
+`k` + +An `int`. Number of top elements to look at for computing precision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/math/log_softmax.md b/site/en/api_docs/python/tf/compat/v1/math/log_softmax.md new file mode 100644 index 00000000000..d7a23b5b270 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/math/log_softmax.md @@ -0,0 +1,123 @@ +description: Computes log softmax activations. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.math.log_softmax + + + + + + + + + +Computes log softmax activations. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. +Instructions for updating: +dim is deprecated, use axis instead + +For each batch `i` and class `j` we have + + logsoftmax = logits - log(reduce_sum(exp(logits), axis)) + + + + + + + + + + + + + + + + + + + +
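+
+#### Example:
+
+A minimal sketch with illustrative values (eager execution assumed):
+
+```python
+import tensorflow as tf
+
+logits = tf.constant([1.0, 2.0, 3.0])
+log_probs = tf.compat.v1.math.log_softmax(logits)
+# Same as logits - tf.math.log(tf.reduce_sum(tf.exp(logits))), so
+# tf.exp(log_probs) sums to 1 (roughly [0.09, 0.24, 0.67]).
+```
+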
+`logits` + +A non-empty `Tensor`. Must be one of the following types: `half`, +`float32`, `float64`. +
+`axis` + +The dimension softmax would be performed on. The default is -1 which +indicates the last dimension. +
+`name` + +A name for the operation (optional). +
+`dim` + +Deprecated alias for `axis`. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `logits`. Same shape as `logits`. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if `logits` is empty or `axis` is beyond the last +dimension of `logits`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/math/softmax.md b/site/en/api_docs/python/tf/compat/v1/math/softmax.md new file mode 100644 index 00000000000..741b61860cd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/math/softmax.md @@ -0,0 +1,149 @@ +description: Computes softmax activations. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.math.softmax + + + + + + + + + +Computes softmax activations. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. +Instructions for updating: +dim is deprecated, use axis instead + +This function performs the equivalent of + + softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis) + +See: https://en.wikipedia.org/wiki/Softmax_function + +#### Example usage: + + +>>> tf.nn.softmax([-1, 0., 1.]) + + + + + + + + + + + + + + + + + + + + +
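+
+A minimal eager sketch mirroring the call above; the values in the comment are
+approximate:
+
+```python
+import tensorflow as tf
+
+probs = tf.compat.v1.math.softmax(tf.constant([-1.0, 0.0, 1.0]))
+# probs ~= [0.09, 0.245, 0.665]; the entries sum to 1.
+```
+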
+`logits` + +A non-empty `Tensor`, or an object whose type has a registered +`Tensor` conversion function. Must be one of the following types: +`half`, `float32`, `float64`. See also `convert_to_tensor`. +
+`axis` + +The dimension softmax would be performed on. The default is -1 which +indicates the last dimension. +
+`name` + +A name for the operation (optional). +
+`dim` + +Deprecated alias for `axis`. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type and shape as `logits`. +
+ + + + + + + + + + + + + + + + + + +
+`InvalidArgumentError` + +if `logits` is empty or `axis` is beyond the last +dimension of `logits`. +
+`TypeError` + +If no conversion function is registered for `logits` to +Tensor. +
+`RuntimeError` + +If a registered conversion function returns an invalid +value. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/math/special.md b/site/en/api_docs/python/tf/compat/v1/math/special.md new file mode 100644 index 00000000000..0efa544db90 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/math/special.md @@ -0,0 +1,33 @@ +description: Public API for tf.math.special namespace. + +
+ + +
+ +# Module: tf.compat.v1.math.special + + + + + + + + + +Public API for tf.math.special namespace. + + + +## Functions + +[`dawsn(...)`](../../../../tf/math/special/dawsn.md): Computes Dawson's integral of `x` element-wise. + +[`expint(...)`](../../../../tf/math/special/expint.md): Computes the Exponential integral of `x` element-wise. + +[`fresnel_cos(...)`](../../../../tf/math/special/fresnel_cos.md): Computes Fresnel's cosine integral of `x` element-wise. + +[`fresnel_sin(...)`](../../../../tf/math/special/fresnel_sin.md): Computes Fresnel's sine integral of `x` element-wise. + +[`spence(...)`](../../../../tf/math/special/spence.md): Computes Spence's integral of `x` element-wise. + diff --git a/site/en/api_docs/python/tf/compat/v1/metrics.md b/site/en/api_docs/python/tf/compat/v1/metrics.md new file mode 100644 index 00000000000..81a342ccade --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics.md @@ -0,0 +1,89 @@ +description: Evaluation-related metrics. + +
+ + +
+ +# Module: tf.compat.v1.metrics + + + + + + + + + +Evaluation-related metrics. + + + +## Functions + +[`accuracy(...)`](../../../tf/compat/v1/metrics/accuracy.md): Calculates how often `predictions` matches `labels`. + +[`auc(...)`](../../../tf/compat/v1/metrics/auc.md): Computes the approximate AUC via a Riemann sum. (deprecated) + +[`average_precision_at_k(...)`](../../../tf/compat/v1/metrics/average_precision_at_k.md): Computes average precision@k of predictions with respect to sparse labels. + +[`false_negatives(...)`](../../../tf/compat/v1/metrics/false_negatives.md): Computes the total number of false negatives. + +[`false_negatives_at_thresholds(...)`](../../../tf/compat/v1/metrics/false_negatives_at_thresholds.md): Computes false negatives at provided threshold values. + +[`false_positives(...)`](../../../tf/compat/v1/metrics/false_positives.md): Sum the weights of false positives. + +[`false_positives_at_thresholds(...)`](../../../tf/compat/v1/metrics/false_positives_at_thresholds.md): Computes false positives at provided threshold values. + +[`mean(...)`](../../../tf/compat/v1/metrics/mean.md): Computes the (weighted) mean of the given values. + +[`mean_absolute_error(...)`](../../../tf/compat/v1/metrics/mean_absolute_error.md): Computes the mean absolute error between the labels and predictions. + +[`mean_cosine_distance(...)`](../../../tf/compat/v1/metrics/mean_cosine_distance.md): Computes the cosine distance between the labels and predictions. + +[`mean_iou(...)`](../../../tf/compat/v1/metrics/mean_iou.md): Calculate per-step mean Intersection-Over-Union (mIOU). + +[`mean_per_class_accuracy(...)`](../../../tf/compat/v1/metrics/mean_per_class_accuracy.md): Calculates the mean of the per-class accuracies. + +[`mean_relative_error(...)`](../../../tf/compat/v1/metrics/mean_relative_error.md): Computes the mean relative error by normalizing with the given values. + +[`mean_squared_error(...)`](../../../tf/compat/v1/metrics/mean_squared_error.md): Computes the mean squared error between the labels and predictions. + +[`mean_tensor(...)`](../../../tf/compat/v1/metrics/mean_tensor.md): Computes the element-wise (weighted) mean of the given tensors. + +[`percentage_below(...)`](../../../tf/compat/v1/metrics/percentage_below.md): Computes the percentage of values less than the given threshold. + +[`precision(...)`](../../../tf/compat/v1/metrics/precision.md): Computes the precision of the predictions with respect to the labels. + +[`precision_at_k(...)`](../../../tf/compat/v1/metrics/precision_at_k.md): Computes precision@k of the predictions with respect to sparse labels. + +[`precision_at_thresholds(...)`](../../../tf/compat/v1/metrics/precision_at_thresholds.md): Computes precision values for different `thresholds` on `predictions`. + +[`precision_at_top_k(...)`](../../../tf/compat/v1/metrics/precision_at_top_k.md): Computes precision@k of the predictions with respect to sparse labels. + +[`recall(...)`](../../../tf/compat/v1/metrics/recall.md): Computes the recall of the predictions with respect to the labels. + +[`recall_at_k(...)`](../../../tf/compat/v1/metrics/recall_at_k.md): Computes recall@k of the predictions with respect to sparse labels. + +[`recall_at_thresholds(...)`](../../../tf/compat/v1/metrics/recall_at_thresholds.md): Computes various recall values for different `thresholds` on `predictions`. + +[`recall_at_top_k(...)`](../../../tf/compat/v1/metrics/recall_at_top_k.md): Computes recall@k of top-k predictions with respect to sparse labels. 
+
+[`root_mean_squared_error(...)`](../../../tf/compat/v1/metrics/root_mean_squared_error.md): Computes the root mean squared error between the labels and predictions.
+
+[`sensitivity_at_specificity(...)`](../../../tf/compat/v1/metrics/sensitivity_at_specificity.md): Computes the sensitivity at a given specificity.
+
+[`sparse_average_precision_at_k(...)`](../../../tf/compat/v1/metrics/sparse_average_precision_at_k.md): Renamed to `average_precision_at_k`, please use that method instead. (deprecated)
+
+[`sparse_precision_at_k(...)`](../../../tf/compat/v1/metrics/sparse_precision_at_k.md): Renamed to `precision_at_k`, please use that method instead. (deprecated)
+
+[`specificity_at_sensitivity(...)`](../../../tf/compat/v1/metrics/specificity_at_sensitivity.md): Computes the specificity at a given sensitivity.
+
+[`true_negatives(...)`](../../../tf/compat/v1/metrics/true_negatives.md): Sum the weights of true_negatives.
+
+[`true_negatives_at_thresholds(...)`](../../../tf/compat/v1/metrics/true_negatives_at_thresholds.md): Computes true negatives at provided threshold values.
+
+[`true_positives(...)`](../../../tf/compat/v1/metrics/true_positives.md): Sum the weights of true_positives.
+
+[`true_positives_at_thresholds(...)`](../../../tf/compat/v1/metrics/true_positives_at_thresholds.md): Computes true positives at provided threshold values.
+
diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/accuracy.md b/site/en/api_docs/python/tf/compat/v1/metrics/accuracy.md
new file mode 100644
index 00000000000..998a202971a
--- /dev/null
+++ b/site/en/api_docs/python/tf/compat/v1/metrics/accuracy.md
@@ -0,0 +1,158 @@
+description: Calculates how often predictions matches labels.
+
+ + +
+ +# tf.compat.v1.metrics.accuracy + + + + + + + + + +Calculates how often `predictions` matches `labels`. + + + + + + + +The `accuracy` function creates two local variables, `total` and +`count` that are used to compute the frequency with which `predictions` +matches `labels`. This frequency is ultimately returned as `accuracy`: an +idempotent operation that simply divides `total` by `count`. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the `accuracy`. +Internally, an `is_correct` operation computes a `Tensor` with elements 1.0 +where the corresponding elements of `predictions` and `labels` match and 0.0 +otherwise. Then `update_op` increments `total` with the reduced sum of the +product of `weights` and `is_correct`, and it increments `count` with the +reduced sum of `weights`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
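For illustration, a minimal usage sketch with made-up data (the API raises `RuntimeError` under eager execution, so v1 graph mode is assumed; the commented value is what this toy batch works out to):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # tf.compat.v1.metrics requires graph mode

labels = tf.constant([0, 1, 1, 0])
predictions = tf.constant([0, 1, 0, 0])
acc, update_op = tf.compat.v1.metrics.accuracy(labels, predictions)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())  # `total` and `count` are local variables
  sess.run(update_op)    # accumulate one batch
  print(sess.run(acc))   # 0.75: three of the four predictions match
```
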
+`labels` + +The ground truth values, a `Tensor` whose shape matches +`predictions`. +
+`predictions` + +The predicted values, a `Tensor` of any shape. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that `accuracy` should +be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`accuracy` + +A `Tensor` representing the accuracy, the value of `total` divided +by `count`. +
+`update_op` + +An operation that increments the `total` and `count` variables +appropriately and whose value matches `accuracy`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/auc.md b/site/en/api_docs/python/tf/compat/v1/metrics/auc.md new file mode 100644 index 00000000000..b5b3377c11a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/auc.md @@ -0,0 +1,221 @@ +description: Computes the approximate AUC via a Riemann sum. (deprecated) + +
+ + +
+
+# tf.compat.v1.metrics.auc
+
+Computes the approximate AUC via a Riemann sum. (deprecated)
+
+Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
+Instructions for updating:
+The value of AUC returned by this may race with the update so this is deprecated. Please use tf.keras.metrics.AUC instead.
+
+The `auc` function creates four local variables, `true_positives`,
+`true_negatives`, `false_positives` and `false_negatives` that are used to
+compute the AUC. To discretize the AUC curve, a linearly spaced set of
+thresholds is used to compute pairs of recall and precision values. The area
+under the ROC-curve is therefore computed using the height of the recall
+values by the false positive rate, while the area under the PR-curve is
+computed using the height of the precision values by the recall.
+
+This value is ultimately returned as `auc`, an idempotent operation that
+computes the area under a discretized curve of precision versus recall values
+(computed using the aforementioned variables). The `num_thresholds` variable
+controls the degree of discretization with larger numbers of thresholds more
+closely approximating the true AUC. The quality of the approximation may vary
+dramatically depending on `num_thresholds`.
+
+For best results, `predictions` should be distributed approximately uniformly
+in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC
+approximation may be poor if this is not the case. Setting `summation_method`
+to 'minoring' or 'majoring' can help quantify the error in the approximation
+by providing lower or upper bound estimates of the AUC. The `thresholds`
+parameter can be used to manually specify thresholds which split the
+predictions more evenly.
+
+For estimation of the metric over a stream of data, the function creates an
+`update_op` operation that updates these variables and returns the `auc`.
+
+If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
+
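As a rough sketch with illustrative data (graph mode assumed; the printed value is only approximately the true AUC of 0.75 for this batch, since it comes from the Riemann approximation):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([0, 0, 1, 1])
predictions = tf.constant([0.1, 0.4, 0.35, 0.8])
auc_value, update_op = tf.compat.v1.metrics.auc(labels, predictions, num_thresholds=200)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)         # accumulate the confusion-matrix variables
  print(sess.run(auc_value))  # ~0.75 for this toy batch
```
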
+`labels` + +A `Tensor` whose shape matches `predictions`. Will be cast to +`bool`. +
+`predictions` + +A floating point `Tensor` of arbitrary shape and whose values +are in the range `[0, 1]`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`num_thresholds` + +The number of thresholds to use when discretizing the roc +curve. +
+`metrics_collections` + +An optional list of collections that `auc` should be +added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`curve` + +Specifies the name of the curve to be computed, 'ROC' [default] or +'PR' for the Precision-Recall-curve. +
+`name` + +An optional variable_scope name. +
+`summation_method` + +Specifies the Riemann summation method used +(https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that +applies the trapezoidal rule; 'careful_interpolation', a variant of it +differing only by a more correct interpolation scheme for PR-AUC - +interpolating (true/false) positives but not the ratio that is precision; +'minoring' that applies left summation for increasing intervals and right +summation for decreasing intervals; 'majoring' that does the opposite. +Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' +(to be deprecated soon) as it applies the same method for ROC, and a +better one (see Davis & Goadrich 2006 for details) for the PR curve. +
+`thresholds` + +An optional list of floating point values to use as the +thresholds for discretizing the curve. If set, the `num_thresholds` +parameter is ignored. Values should be in [0, 1]. Endpoint thresholds +equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be +automatically included with these to correctly handle predictions equal to +exactly 0 or 1. +
+ + + + + + + + + + + + + + + +
+`auc` + +A scalar `Tensor` representing the current area-under-curve. +
+`update_op` + +An operation that increments the `true_positives`, +`true_negatives`, `false_positives` and `false_negatives` variables +appropriately and whose value matches `auc`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/average_precision_at_k.md b/site/en/api_docs/python/tf/compat/v1/metrics/average_precision_at_k.md new file mode 100644 index 00000000000..0c3b8298868 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/average_precision_at_k.md @@ -0,0 +1,173 @@ +description: Computes average precision@k of predictions with respect to sparse labels. + +
+ + +
+
+# tf.compat.v1.metrics.average_precision_at_k
+
+Computes average precision@k of predictions with respect to sparse labels.
+
+`average_precision_at_k` creates two local variables,
+`average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that
+are used to compute the frequency. This frequency is ultimately returned as
+`average_precision_at_<k>`: an idempotent operation that simply divides
+`average_precision_at_<k>/total` by `average_precision_at_<k>/max`.
+
+For estimation of the metric over a stream of data, the function creates an
+`update_op` operation that updates these variables and returns the
+`precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor`
+indicating the top `k` `predictions`. Set operations applied to `top_k` and
+`labels` calculate the true positives and false positives weighted by
+`weights`. Then `update_op` increments `true_positive_at_<k>` and
+`false_positive_at_<k>` using these values.
+
+If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
+
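A small graph-mode sketch, assuming a three-class problem with invented logits (the comment gives the value this toy batch works out to):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([[2], [1]], dtype=tf.int64)   # one relevant class per example
logits = tf.constant([[0.1, 0.2, 0.7],
                      [0.6, 0.3, 0.1]])
ap, update_op = tf.compat.v1.metrics.average_precision_at_k(labels, logits, k=2)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(ap))   # 0.75: AP@2 is 1.0 for the first example and 0.5 for the second
```
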
+`labels` + +`int64` `Tensor` or `SparseTensor` with shape +[D1, ... DN, num_labels] or [D1, ... DN], where the latter implies +num_labels=1. N >= 1 and num_labels is the number of target classes for +the associated prediction. Commonly, N=1 and `labels` has shape +[batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values +should be in range [0, num_classes), where num_classes is the last +dimension of `predictions`. Values outside this range are ignored. +
+`predictions` + +Float `Tensor` with shape [D1, ... DN, num_classes] where +N >= 1. Commonly, N=1 and `predictions` has shape +[batch size, num_classes]. The final dimension contains the logit values +for each class. [D1, ... DN] must match `labels`. +
+`k` + +Integer, k for @k metric. This will calculate an average precision for +range `[1,k]`, as documented above. +
+`weights` + +`Tensor` whose rank is either 0, or n-1, where n is the rank of +`labels`. If the latter, it must be broadcastable to `labels` (i.e., all +dimensions must be either `1`, or the same as the corresponding `labels` +dimension). +
+`metrics_collections` + +An optional list of collections that values should +be added to. +
+`updates_collections` + +An optional list of collections that updates should +be added to. +
+`name` + +Name of new update operation, and namespace for other dependent ops. +
+ + + + + + + + + + + + + + + +
+`mean_average_precision` + +Scalar `float64` `Tensor` with the mean average +precision values. +
+`update` + +`Operation` that increments variables appropriately, and whose +value matches `metric`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if k is invalid. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/false_negatives.md b/site/en/api_docs/python/tf/compat/v1/metrics/false_negatives.md new file mode 100644 index 00000000000..be75b02511a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/false_negatives.md @@ -0,0 +1,143 @@ +description: Computes the total number of false negatives. + +
+ + +
+ +# tf.compat.v1.metrics.false_negatives + + + + + + + + + +Computes the total number of false negatives. + + + + + + + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
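A brief sketch with invented boolean data (graph mode assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([True, True, False, True])
predictions = tf.constant([True, False, False, False])
fn, update_op = tf.compat.v1.metrics.false_negatives(labels, predictions)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(fn))   # 2.0: two positive labels were predicted as negative
```
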
+`labels` + +The ground truth values, a `Tensor` whose dimensions must match +`predictions`. Will be cast to `bool`. +
+`predictions` + +The predicted values, a `Tensor` of arbitrary dimensions. Will +be cast to `bool`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that the metric +value variable should be added to. +
+`updates_collections` + +An optional list of collections that the metric update +ops should be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`value_tensor` + +A `Tensor` representing the current value of the metric. +
+`update_op` + +An operation that accumulates the error from a batch of data. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `weights` is not `None` and its shape doesn't match `values`, +or if either `metrics_collections` or `updates_collections` are not a list +or tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/false_negatives_at_thresholds.md b/site/en/api_docs/python/tf/compat/v1/metrics/false_negatives_at_thresholds.md new file mode 100644 index 00000000000..a011084810f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/false_negatives_at_thresholds.md @@ -0,0 +1,152 @@ +description: Computes false negatives at provided threshold values. + +
+ + +
+ +# tf.compat.v1.metrics.false_negatives_at_thresholds + + + + + + + + + +Computes false negatives at provided threshold values. + + + + + + + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +A `Tensor` whose shape matches `predictions`. Will be cast to +`bool`. +
+`predictions` + +A floating point `Tensor` of arbitrary shape and whose values +are in the range `[0, 1]`. +
+`thresholds` + +A python list or tuple of float thresholds in `[0, 1]`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that `false_negatives` +should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`false_negatives` + +A float `Tensor` of shape `[len(thresholds)]`. +
+`update_op` + +An operation that updates the `false_negatives` variable and +returns its current value. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/false_positives.md b/site/en/api_docs/python/tf/compat/v1/metrics/false_positives.md new file mode 100644 index 00000000000..dcca3f82e17 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/false_positives.md @@ -0,0 +1,144 @@ +description: Sum the weights of false positives. + +
+ + +
+ +# tf.compat.v1.metrics.false_positives + + + + + + + + + +Sum the weights of false positives. + + + + + + + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +The ground truth values, a `Tensor` whose dimensions must match +`predictions`. Will be cast to `bool`. +
+`predictions` + +The predicted values, a `Tensor` of arbitrary dimensions. Will +be cast to `bool`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that the metric +value variable should be added to. +
+`updates_collections` + +An optional list of collections that the metric update +ops should be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`value_tensor` + +A `Tensor` representing the current value of the metric. +
+`update_op` + +An operation that accumulates the error from a batch of data. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/false_positives_at_thresholds.md b/site/en/api_docs/python/tf/compat/v1/metrics/false_positives_at_thresholds.md new file mode 100644 index 00000000000..a834dbd1017 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/false_positives_at_thresholds.md @@ -0,0 +1,152 @@ +description: Computes false positives at provided threshold values. + +
+ + +
+ +# tf.compat.v1.metrics.false_positives_at_thresholds + + + + + + + + + +Computes false positives at provided threshold values. + + + + + + + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +A `Tensor` whose shape matches `predictions`. Will be cast to +`bool`. +
+`predictions` + +A floating point `Tensor` of arbitrary shape and whose values +are in the range `[0, 1]`. +
+`thresholds` + +A python list or tuple of float thresholds in `[0, 1]`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that `false_positives` +should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`false_positives` + +A float `Tensor` of shape `[len(thresholds)]`. +
+`update_op` + +An operation that updates the `false_positives` variable and +returns its current value. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/mean.md b/site/en/api_docs/python/tf/compat/v1/metrics/mean.md new file mode 100644 index 00000000000..a46c279a20a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/mean.md @@ -0,0 +1,146 @@ +description: Computes the (weighted) mean of the given values. + +
+ + +
+ +# tf.compat.v1.metrics.mean + + + + + + + + + +Computes the (weighted) mean of the given values. + + + + + + + +The `mean` function creates two local variables, `total` and `count` +that are used to compute the average of `values`. This average is ultimately +returned as `mean` which is an idempotent operation that simply divides +`total` by `count`. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the `mean`. +`update_op` increments `total` with the reduced sum of the product of `values` +and `weights`, and it increments `count` with the reduced sum of `weights`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + +
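For example, a minimal sketch with made-up values and a masking weight (graph mode assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

values = tf.constant([1.0, 2.0, 3.0, 4.0])
weights = tf.constant([1.0, 1.0, 0.0, 1.0])   # a weight of 0 masks the third value
mean_value, update_op = tf.compat.v1.metrics.mean(values, weights=weights)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(mean_value))   # (1 + 2 + 4) / 3 ≈ 2.33
```
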
+`values` + +A `Tensor` of arbitrary dimensions. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`values`, and must be broadcastable to `values` (i.e., all dimensions must +be either `1`, or the same as the corresponding `values` dimension). +
+`metrics_collections` + +An optional list of collections that `mean` +should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` +should be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`mean` + +A `Tensor` representing the current mean, the value of `total` divided +by `count`. +
+`update_op` + +An operation that increments the `total` and `count` variables +appropriately and whose value matches `mean_value`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `weights` is not `None` and its shape doesn't match `values`, +or if either `metrics_collections` or `updates_collections` are not a list +or tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/mean_absolute_error.md b/site/en/api_docs/python/tf/compat/v1/metrics/mean_absolute_error.md new file mode 100644 index 00000000000..3da96072365 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/mean_absolute_error.md @@ -0,0 +1,158 @@ +description: Computes the mean absolute error between the labels and predictions. + +
+ + +
+ +# tf.compat.v1.metrics.mean_absolute_error + + + + + + + + + +Computes the mean absolute error between the labels and predictions. + + + + + + + +The `mean_absolute_error` function creates two local variables, +`total` and `count` that are used to compute the mean absolute error. This +average is weighted by `weights`, and it is ultimately returned as +`mean_absolute_error`: an idempotent operation that simply divides `total` by +`count`. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the +`mean_absolute_error`. Internally, an `absolute_errors` operation computes the +absolute value of the differences between `predictions` and `labels`. Then +`update_op` increments `total` with the reduced sum of the product of +`weights` and `absolute_errors`, and it increments `count` with the reduced +sum of `weights` + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
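A minimal sketch with illustrative regression values (graph mode assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([1.0, 2.0, 3.0])
predictions = tf.constant([1.5, 2.0, 2.0])
mae, update_op = tf.compat.v1.metrics.mean_absolute_error(labels, predictions)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(mae))   # mean of [0.5, 0.0, 1.0] = 0.5
```
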
+`labels` + +A `Tensor` of the same shape as `predictions`. +
+`predictions` + +A `Tensor` of arbitrary shape. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that +`mean_absolute_error` should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`mean_absolute_error` + +A `Tensor` representing the current mean, the value of +`total` divided by `count`. +
+`update_op` + +An operation that increments the `total` and `count` variables +appropriately and whose value matches `mean_absolute_error`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/mean_cosine_distance.md b/site/en/api_docs/python/tf/compat/v1/metrics/mean_cosine_distance.md new file mode 100644 index 00000000000..cba46832b08 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/mean_cosine_distance.md @@ -0,0 +1,162 @@ +description: Computes the cosine distance between the labels and predictions. + +
+ + +
+ +# tf.compat.v1.metrics.mean_cosine_distance + + + + + + + + + +Computes the cosine distance between the labels and predictions. + + + + + + + +The `mean_cosine_distance` function creates two local variables, +`total` and `count` that are used to compute the average cosine distance +between `predictions` and `labels`. This average is weighted by `weights`, +and it is ultimately returned as `mean_distance`, which is an idempotent +operation that simply divides `total` by `count`. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the +`mean_distance`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
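A sketch assuming unit-length vectors along `dim` (the op reduces the element-wise product along that dimension, so inputs are typically normalized beforehand; the data is made up):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([[1.0, 0.0], [0.0, 1.0]])
predictions = tf.constant([[1.0, 0.0], [1.0, 0.0]])
dist, update_op = tf.compat.v1.metrics.mean_cosine_distance(labels, predictions, dim=1)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(dist))   # 0.5: the pairs have cosine similarity 1.0 and 0.0
```
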
+`labels` + +A `Tensor` of arbitrary shape. +
+`predictions` + +A `Tensor` of the same shape as `labels`. +
+`dim` + +The dimension along which the cosine distance is computed. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). Also, +dimension `dim` must be `1`. +
+`metrics_collections` + +An optional list of collections that the metric +value variable should be added to. +
+`updates_collections` + +An optional list of collections that the metric update +ops should be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`mean_distance` + +A `Tensor` representing the current mean, the value of +`total` divided by `count`. +
+`update_op` + +An operation that increments the `total` and `count` variables +appropriately. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/mean_iou.md b/site/en/api_docs/python/tf/compat/v1/metrics/mean_iou.md new file mode 100644 index 00000000000..bd63d5a1d52 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/mean_iou.md @@ -0,0 +1,165 @@ +description: Calculate per-step mean Intersection-Over-Union (mIOU). + +
+ + +
+ +# tf.compat.v1.metrics.mean_iou + + + + + + + + + +Calculate per-step mean Intersection-Over-Union (mIOU). + + + + + + + +Mean Intersection-Over-Union is a common evaluation metric for +semantic image segmentation, which first computes the IOU for each +semantic class and then computes the average over classes. +IOU is defined as follows: + IOU = true_positive / (true_positive + false_positive + false_negative). +The predictions are accumulated in a confusion matrix, weighted by `weights`, +and mIOU is then calculated from it. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the `mean_iou`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
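A toy segmentation-style sketch with flattened class IDs (graph mode assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([0, 0, 1, 1])
predictions = tf.constant([0, 1, 1, 1])
miou, update_op = tf.compat.v1.metrics.mean_iou(labels, predictions, num_classes=2)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)     # fills the confusion matrix
  print(sess.run(miou))   # (0.5 + 2/3) / 2 ≈ 0.583
```
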
+`labels` + +A `Tensor` of ground truth labels with shape [batch size] and of +type `int32` or `int64`. The tensor will be flattened if its rank > 1. +
+`predictions` + +A `Tensor` of prediction results for semantic labels, whose +shape is [batch size] and type `int32` or `int64`. The tensor will be +flattened if its rank > 1. +
+`num_classes` + +The possible number of labels the prediction task can +have. This value must be provided, since a confusion matrix of +dimension = [num_classes, num_classes] will be allocated. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that `mean_iou` +should be added to. +
+`updates_collections` + +An optional list of collections `update_op` should be +added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`mean_iou` + +A `Tensor` representing the mean intersection-over-union. +
+`update_op` + +An operation that increments the confusion matrix. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/mean_per_class_accuracy.md b/site/en/api_docs/python/tf/compat/v1/metrics/mean_per_class_accuracy.md new file mode 100644 index 00000000000..08458b88e0e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/mean_per_class_accuracy.md @@ -0,0 +1,161 @@ +description: Calculates the mean of the per-class accuracies. + +
+ + +
+ +# tf.compat.v1.metrics.mean_per_class_accuracy + + + + + + + + + +Calculates the mean of the per-class accuracies. + + + + + + + +Calculates the accuracy for each class, then takes the mean of that. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates the accuracy of each class and returns +them. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
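A small graph-mode sketch with invented class labels:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([0, 0, 1, 1])
predictions = tf.constant([0, 1, 1, 1])
mpca, update_op = tf.compat.v1.metrics.mean_per_class_accuracy(
    labels, predictions, num_classes=2)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(mpca))   # (0.5 + 1.0) / 2 = 0.75
```
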
+`labels` + +A `Tensor` of ground truth labels with shape [batch size] and of +type `int32` or `int64`. The tensor will be flattened if its rank > 1. +
+`predictions` + +A `Tensor` of prediction results for semantic labels, whose +shape is [batch size] and type `int32` or `int64`. The tensor will be +flattened if its rank > 1. +
+`num_classes` + +The possible number of labels the prediction task can +have. This value must be provided, since two variables with shape = +[num_classes] will be allocated. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections`
+
+An optional list of collections that
+`mean_per_class_accuracy`
+should be added to.
+
+`updates_collections` + +An optional list of collections `update_op` should be +added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`mean_accuracy` + +A `Tensor` representing the mean per class accuracy. +
+`update_op` + +An operation that updates the accuracy tensor. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/mean_relative_error.md b/site/en/api_docs/python/tf/compat/v1/metrics/mean_relative_error.md new file mode 100644 index 00000000000..4339e6c89cc --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/mean_relative_error.md @@ -0,0 +1,165 @@ +description: Computes the mean relative error by normalizing with the given values. + +
+ + +
+
+# tf.compat.v1.metrics.mean_relative_error
+
+Computes the mean relative error by normalizing with the given values.
+
+The `mean_relative_error` function creates two local variables,
+`total` and `count` that are used to compute the mean relative absolute error.
+This average is weighted by `weights`, and it is ultimately returned as
+`mean_relative_error`: an idempotent operation that simply divides `total` by
+`count`.
+
+For estimation of the metric over a stream of data, the function creates an
+`update_op` operation that updates these variables and returns the
+`mean_relative_error`. Internally, a `relative_errors` operation divides the
+absolute value of the differences between `predictions` and `labels` by the
+`normalizer`. Then `update_op` increments `total` with the reduced sum of the
+product of `weights` and `relative_errors`, and it increments `count` with the
+reduced sum of `weights`.
+
+If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
+
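A minimal sketch normalizing by the label magnitudes (illustrative values; graph mode assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([1.0, 2.0, 4.0])
predictions = tf.constant([1.5, 2.0, 3.0])
mre, update_op = tf.compat.v1.metrics.mean_relative_error(
    labels, predictions, normalizer=labels)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(mre))   # mean of [0.5/1, 0.0/2, 1.0/4] = 0.25
```
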
+`labels` + +A `Tensor` of the same shape as `predictions`. +
+`predictions` + +A `Tensor` of arbitrary shape. +
+`normalizer` + +A `Tensor` of the same shape as `predictions`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that +`mean_relative_error` should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`mean_relative_error` + +A `Tensor` representing the current mean, the value of +`total` divided by `count`. +
+`update_op` + +An operation that increments the `total` and `count` variables +appropriately and whose value matches `mean_relative_error`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/mean_squared_error.md b/site/en/api_docs/python/tf/compat/v1/metrics/mean_squared_error.md new file mode 100644 index 00000000000..65fccb226ff --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/mean_squared_error.md @@ -0,0 +1,158 @@ +description: Computes the mean squared error between the labels and predictions. + +
+ + +
+ +# tf.compat.v1.metrics.mean_squared_error + + + + + + + + + +Computes the mean squared error between the labels and predictions. + + + + + + + +The `mean_squared_error` function creates two local variables, +`total` and `count` that are used to compute the mean squared error. +This average is weighted by `weights`, and it is ultimately returned as +`mean_squared_error`: an idempotent operation that simply divides `total` by +`count`. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the +`mean_squared_error`. Internally, a `squared_error` operation computes the +element-wise square of the difference between `predictions` and `labels`. Then +`update_op` increments `total` with the reduced sum of the product of +`weights` and `squared_error`, and it increments `count` with the reduced sum +of `weights`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
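A minimal graph-mode sketch with made-up values:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([1.0, 2.0, 3.0, 4.0])
predictions = tf.constant([0.0, 2.0, 3.0, 4.0])
mse, update_op = tf.compat.v1.metrics.mean_squared_error(labels, predictions)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(mse))   # mean of [1, 0, 0, 0] = 0.25
```
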
+`labels` + +A `Tensor` of the same shape as `predictions`. +
+`predictions` + +A `Tensor` of arbitrary shape. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that +`mean_squared_error` should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`mean_squared_error` + +A `Tensor` representing the current mean, the value of +`total` divided by `count`. +
+`update_op` + +An operation that increments the `total` and `count` variables +appropriately and whose value matches `mean_squared_error`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/mean_tensor.md b/site/en/api_docs/python/tf/compat/v1/metrics/mean_tensor.md new file mode 100644 index 00000000000..2da9fe4430c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/mean_tensor.md @@ -0,0 +1,150 @@ +description: Computes the element-wise (weighted) mean of the given tensors. + +
+ + +
+ +# tf.compat.v1.metrics.mean_tensor + + + + + + + + + +Computes the element-wise (weighted) mean of the given tensors. + + + + + + + +In contrast to the `mean` function which returns a scalar with the +mean, this function returns an average tensor with the same shape as the +input tensors. + +The `mean_tensor` function creates two local variables, +`total_tensor` and `count_tensor` that are used to compute the average of +`values`. This average is ultimately returned as `mean` which is an idempotent +operation that simply divides `total` by `count`. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the `mean`. +`update_op` increments `total` with the reduced sum of the product of `values` +and `weights`, and it increments `count` with the reduced sum of `weights`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + +
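A sketch that streams two invented batches through a placeholder to show the element-wise averaging (graph mode assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

values = tf.compat.v1.placeholder(tf.float32, shape=[2])
mean_t, update_op = tf.compat.v1.metrics.mean_tensor(values)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op, feed_dict={values: [1.0, 2.0]})   # first batch
  sess.run(update_op, feed_dict={values: [3.0, 4.0]})   # second batch
  print(sess.run(mean_t))   # element-wise mean: [2.0, 3.0]
```
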
+`values` + +A `Tensor` of arbitrary dimensions. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`values`, and must be broadcastable to `values` (i.e., all dimensions must +be either `1`, or the same as the corresponding `values` dimension). +
+`metrics_collections` + +An optional list of collections that `mean` +should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` +should be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`mean` + +A float `Tensor` representing the current mean, the value of `total` +divided by `count`. +
+`update_op` + +An operation that increments the `total` and `count` variables +appropriately and whose value matches `mean_value`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `weights` is not `None` and its shape doesn't match `values`, +or if either `metrics_collections` or `updates_collections` are not a list +or tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/percentage_below.md b/site/en/api_docs/python/tf/compat/v1/metrics/percentage_below.md new file mode 100644 index 00000000000..f89b37a0372 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/percentage_below.md @@ -0,0 +1,153 @@ +description: Computes the percentage of values less than the given threshold. + +
+ + +
+ +# tf.compat.v1.metrics.percentage_below + + + + + + + + + +Computes the percentage of values less than the given threshold. + + + + + + + +The `percentage_below` function creates two local variables, +`total` and `count` that are used to compute the percentage of `values` that +fall below `threshold`. This rate is weighted by `weights`, and it is +ultimately returned as `percentage` which is an idempotent operation that +simply divides `total` by `count`. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the +`percentage`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
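A brief graph-mode sketch with invented values:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

values = tf.constant([1.0, 2.0, 3.0, 4.0])
pct, update_op = tf.compat.v1.metrics.percentage_below(values, threshold=3.0)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(pct))   # 0.5: two of the four values are below 3.0
```
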
+`values` + +A numeric `Tensor` of arbitrary size. +
+`threshold` + +A scalar threshold. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`values`, and must be broadcastable to `values` (i.e., all dimensions must +be either `1`, or the same as the corresponding `values` dimension). +
+`metrics_collections` + +An optional list of collections that the metric +value variable should be added to. +
+`updates_collections` + +An optional list of collections that the metric update +ops should be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`percentage` + +A `Tensor` representing the current mean, the value of `total` +divided by `count`. +
+`update_op` + +An operation that increments the `total` and `count` variables +appropriately. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `weights` is not `None` and its shape doesn't match `values`, +or if either `metrics_collections` or `updates_collections` are not a list +or tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/precision.md b/site/en/api_docs/python/tf/compat/v1/metrics/precision.md new file mode 100644 index 00000000000..a7967620a8d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/precision.md @@ -0,0 +1,158 @@ +description: Computes the precision of the predictions with respect to the labels. + +
+ + +
+ +# tf.compat.v1.metrics.precision + + + + + + + + + +Computes the precision of the predictions with respect to the labels. + + + + + + + +The `precision` function creates two local variables, +`true_positives` and `false_positives`, that are used to compute the +precision. This value is ultimately returned as `precision`, an idempotent +operation that simply divides `true_positives` by the sum of `true_positives` +and `false_positives`. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the +`precision`. `update_op` weights each prediction by the corresponding value in +`weights`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
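For illustration, a minimal sketch with a toy binary batch (graph mode assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([0, 1, 1, 1])
predictions = tf.constant([1, 1, 1, 0])
prec, update_op = tf.compat.v1.metrics.precision(labels, predictions)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(prec))   # 2 TP / (2 TP + 1 FP) ≈ 0.667
```
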
+`labels` + +The ground truth values, a `Tensor` whose dimensions must match +`predictions`. Will be cast to `bool`. +
+`predictions` + +The predicted values, a `Tensor` of arbitrary dimensions. Will +be cast to `bool`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that `precision` should +be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`precision` + +Scalar float `Tensor` with the value of `true_positives` +divided by the sum of `true_positives` and `false_positives`. +
+`update_op` + +`Operation` that increments `true_positives` and +`false_positives` variables appropriately and whose value matches +`precision`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/precision_at_k.md b/site/en/api_docs/python/tf/compat/v1/metrics/precision_at_k.md new file mode 100644 index 00000000000..fd3dcca72d9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/precision_at_k.md @@ -0,0 +1,194 @@ +description: Computes precision@k of the predictions with respect to sparse labels. + +
+ + +
+
+# tf.compat.v1.metrics.precision_at_k
+
+Computes precision@k of the predictions with respect to sparse labels.
+
+If `class_id` is specified, we calculate precision by considering only the
+ entries in the batch for which `class_id` is in the top-k highest
+ `predictions`, and computing the fraction of them for which `class_id` is
+ indeed a correct label.
+If `class_id` is not specified, we'll calculate precision as how often on
+ average a class among the top-k classes with the highest predicted values
+ of a batch entry is correct and can be found in the label for that entry.
+
+`precision_at_k` creates two local variables,
+`true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute
+the precision@k frequency. This frequency is ultimately returned as
+`precision_at_<k>`: an idempotent operation that simply divides
+`true_positive_at_<k>` by total (`true_positive_at_<k>` +
+`false_positive_at_<k>`).
+
+For estimation of the metric over a stream of data, the function creates an
+`update_op` operation that updates these variables and returns the
+`precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor`
+indicating the top `k` `predictions`. Set operations applied to `top_k` and
+`labels` calculate the true positives and false positives weighted by
+`weights`. Then `update_op` increments `true_positive_at_<k>` and
+`false_positive_at_<k>` using these values.
+
+If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
+
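A small sketch for a three-class problem with invented logits (graph mode assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([[2], [1]], dtype=tf.int64)
logits = tf.constant([[0.1, 0.2, 0.7],
                      [0.6, 0.3, 0.1]])
prec_at_2, update_op = tf.compat.v1.metrics.precision_at_k(labels, logits, k=2)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(prec_at_2))   # 0.5: 2 of the 4 top-2 predictions are correct labels
```
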
+`labels` + +`int64` `Tensor` or `SparseTensor` with shape +[D1, ... DN, num_labels] or [D1, ... DN], where the latter implies +num_labels=1. N >= 1 and num_labels is the number of target classes for +the associated prediction. Commonly, N=1 and `labels` has shape +[batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values +should be in range [0, num_classes), where num_classes is the last +dimension of `predictions`. Values outside this range are ignored. +
+`predictions` + +Float `Tensor` with shape [D1, ... DN, num_classes] where +N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. +The final dimension contains the logit values for each class. [D1, ... DN] +must match `labels`. +
+`k` + +Integer, k for @k metric. +
+`class_id` + +Integer class ID for which we want binary metrics. This should be +in range [0, num_classes], where num_classes is the last dimension of +`predictions`. If `class_id` is outside this range, the method returns +NAN. +
+`weights` + +`Tensor` whose rank is either 0, or n-1, where n is the rank of +`labels`. If the latter, it must be broadcastable to `labels` (i.e., all +dimensions must be either `1`, or the same as the corresponding `labels` +dimension). +
+`metrics_collections` + +An optional list of collections that values should +be added to. +
+`updates_collections` + +An optional list of collections that updates should +be added to. +
+`name` + +Name of new update operation, and namespace for other dependent ops. +
+ + + + + + + + + + + + + + + +
+`precision` + +Scalar `float64` `Tensor` with the value of `true_positives` +divided by the sum of `true_positives` and `false_positives`. +
+`update_op` + +`Operation` that increments `true_positives` and +`false_positives` variables appropriately, and whose value matches +`precision`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `weights` is not `None` and its shape doesn't match +`predictions`, or if either `metrics_collections` or `updates_collections` +are not a list or tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/precision_at_thresholds.md b/site/en/api_docs/python/tf/compat/v1/metrics/precision_at_thresholds.md new file mode 100644 index 00000000000..8c29f880bde --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/precision_at_thresholds.md @@ -0,0 +1,165 @@ +description: Computes precision values for different thresholds on predictions. + +
+ + +
+ +# tf.compat.v1.metrics.precision_at_thresholds + + + + + + + + + +Computes precision values for different `thresholds` on `predictions`. + + + + + + + +The `precision_at_thresholds` function creates four local variables, +`true_positives`, `true_negatives`, `false_positives` and `false_negatives` +for various values of thresholds. `precision[i]` is defined as the total +weight of values in `predictions` above `thresholds[i]` whose corresponding +entry in `labels` is `True`, divided by the total weight of values in +`predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] + +false_positives[i])`). + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the +`precision`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
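A minimal sketch with invented scores and three thresholds (graph mode assumed; the commented values are what this toy batch works out to):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([False, False, True, True])
predictions = tf.constant([0.1, 0.6, 0.4, 0.9])
prec, update_op = tf.compat.v1.metrics.precision_at_thresholds(
    labels, predictions, thresholds=[0.25, 0.5, 0.75])

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(prec))   # roughly [0.667, 0.5, 1.0], one value per threshold
```
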
+`labels` + +The ground truth values, a `Tensor` whose dimensions must match +`predictions`. Will be cast to `bool`. +
+`predictions` + +A floating point `Tensor` of arbitrary shape and whose values +are in the range `[0, 1]`. +
+`thresholds` + +A python list or tuple of float thresholds in `[0, 1]`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that `auc` should be +added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`precision` + +A float `Tensor` of shape `[len(thresholds)]`. +
+`update_op` + +An operation that increments the `true_positives`, +`true_negatives`, `false_positives` and `false_negatives` variables that +are used in the computation of `precision`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/precision_at_top_k.md b/site/en/api_docs/python/tf/compat/v1/metrics/precision_at_top_k.md new file mode 100644 index 00000000000..16f91f33aac --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/precision_at_top_k.md @@ -0,0 +1,173 @@ +description: Computes precision@k of the predictions with respect to sparse labels. + +
+ + +
+ +# tf.compat.v1.metrics.precision_at_top_k + + + + + + + + + +Computes precision@k of the predictions with respect to sparse labels. + + + + + + + +Differs from `sparse_precision_at_k` in that predictions must be in the form +of top `k` class indices, whereas `sparse_precision_at_k` expects logits. +Refer to `sparse_precision_at_k` for more details. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
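A sketch feeding precomputed top-k indices rather than logits (illustrative data; graph mode assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([[2], [1]], dtype=tf.int64)
top_2_idx = tf.constant([[2, 1], [0, 1]], dtype=tf.int64)   # already top-k class indices
prec, update_op = tf.compat.v1.metrics.precision_at_top_k(labels, top_2_idx, k=2)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(prec))   # 0.5: 2 of the 4 predicted indices are correct labels
```
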
+`labels` + +`int64` `Tensor` or `SparseTensor` with shape +[D1, ... DN, num_labels] or [D1, ... DN], where the latter implies +num_labels=1. N >= 1 and num_labels is the number of target classes for +the associated prediction. Commonly, N=1 and `labels` has shape +[batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values +should be in range [0, num_classes), where num_classes is the last +dimension of `predictions`. Values outside this range are ignored. +
+`predictions_idx` + +Integer `Tensor` with shape [D1, ... DN, k] where +N >= 1. Commonly, N=1 and predictions has shape [batch size, k]. +The final dimension contains the top `k` predicted class indices. +[D1, ... DN] must match `labels`. +
+`k` + +Integer, k for @k metric. Only used for the default op name. +
+`class_id` + +Integer class ID for which we want binary metrics. This should be +in range [0, num_classes], where num_classes is the last dimension of +`predictions`. If `class_id` is outside this range, the method returns +NAN. +
+`weights` + +`Tensor` whose rank is either 0, or n-1, where n is the rank of +`labels`. If the latter, it must be broadcastable to `labels` (i.e., all +dimensions must be either `1`, or the same as the corresponding `labels` +dimension). +
+`metrics_collections` + +An optional list of collections that values should +be added to. +
+`updates_collections` + +An optional list of collections that updates should +be added to. +
+`name` + +Name of new update operation, and namespace for other dependent ops. +
+ + + + + + + + + + + + + + + +
+`precision` + +Scalar `float64` `Tensor` with the value of `true_positives` +divided by the sum of `true_positives` and `false_positives`. +
+`update_op` + +`Operation` that increments `true_positives` and +`false_positives` variables appropriately, and whose value matches +`precision`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `weights` is not `None` and its shape doesn't match +`predictions`, or if either `metrics_collections` or `updates_collections` +are not a list or tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/recall.md b/site/en/api_docs/python/tf/compat/v1/metrics/recall.md new file mode 100644 index 00000000000..4c44617670c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/recall.md @@ -0,0 +1,156 @@ +description: Computes the recall of the predictions with respect to the labels. + +
+ + +
+ +# tf.compat.v1.metrics.recall + + + + + + + + + +Computes the recall of the predictions with respect to the labels. + + + + + + + +The `recall` function creates two local variables, `true_positives` +and `false_negatives`, that are used to compute the recall. This value is +ultimately returned as `recall`, an idempotent operation that simply divides +`true_positives` by the sum of `true_positives` and `false_negatives`. + +For estimation of the metric over a stream of data, the function creates an +`update_op` that updates these variables and returns the `recall`. `update_op` +weights each prediction by the corresponding value in `weights`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
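A minimal sketch with a toy binary batch (graph mode assumed):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([0, 1, 1, 1])
predictions = tf.constant([1, 1, 1, 0])
rec, update_op = tf.compat.v1.metrics.recall(labels, predictions)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(rec))   # 2 TP / (2 TP + 1 FN) ≈ 0.667
```
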
+`labels` + +The ground truth values, a `Tensor` whose dimensions must match +`predictions`. Will be cast to `bool`. +
+`predictions` + +The predicted values, a `Tensor` of arbitrary dimensions. Will +be cast to `bool`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that `recall` should +be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`recall` + +Scalar float `Tensor` with the value of `true_positives` divided +by the sum of `true_positives` and `false_negatives`. +
+`update_op` + +`Operation` that increments `true_positives` and +`false_negatives` variables appropriately and whose value matches +`recall`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/recall_at_k.md b/site/en/api_docs/python/tf/compat/v1/metrics/recall_at_k.md new file mode 100644 index 00000000000..c1ddf9794ef --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/recall_at_k.md @@ -0,0 +1,193 @@ +description: Computes recall@k of the predictions with respect to sparse labels. + +
+ + +
+ +# tf.compat.v1.metrics.recall_at_k + + + + + + + + + +Computes recall@k of the predictions with respect to sparse labels. + + + + + + + +If `class_id` is specified, we calculate recall by considering only the + entries in the batch for which `class_id` is in the label, and computing + the fraction of them for which `class_id` is in the top-k `predictions`. +If `class_id` is not specified, we'll calculate recall as how often on + average a class among the labels of a batch entry is in the top-k + `predictions`. + +`sparse_recall_at_k` creates two local variables, +`true_positive_at_` and `false_negative_at_`, that are used to compute +the recall_at_k frequency. This frequency is ultimately returned as +`recall_at_`: an idempotent operation that simply divides +`true_positive_at_` by total (`true_positive_at_` + +`false_negative_at_`). + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the +`recall_at_`. Internally, a `top_k` operation computes a `Tensor` +indicating the top `k` `predictions`. Set operations applied to `top_k` and +`labels` calculate the true positives and false negatives weighted by +`weights`. Then `update_op` increments `true_positive_at_` and +`false_negative_at_` using these values. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +`int64` `Tensor` or `SparseTensor` with shape +[D1, ... DN, num_labels] or [D1, ... DN], where the latter implies +num_labels=1. N >= 1 and num_labels is the number of target classes for +the associated prediction. Commonly, N=1 and `labels` has shape +[batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values +should be in range [0, num_classes), where num_classes is the last +dimension of `predictions`. Values outside this range always count +towards `false_negative_at_`. +
+`predictions` + +Float `Tensor` with shape [D1, ... DN, num_classes] where +N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. +The final dimension contains the logit values for each class. [D1, ... DN] +must match `labels`. +
+`k` + +Integer, k for @k metric. +
+`class_id` + +Integer class ID for which we want binary metrics. This should be +in range [0, num_classes), where num_classes is the last dimension of +`predictions`. If class_id is outside this range, the method returns NAN. +
+`weights` + +`Tensor` whose rank is either 0, or n-1, where n is the rank of +`labels`. If the latter, it must be broadcastable to `labels` (i.e., all +dimensions must be either `1`, or the same as the corresponding `labels` +dimension). +
+`metrics_collections` + +An optional list of collections that values should +be added to. +
+`updates_collections` + +An optional list of collections that updates should +be added to. +
+`name` + +Name of new update operation, and namespace for other dependent ops. +
+ + + + + + + + + + + + + + + +
+`recall` + +Scalar `float64` `Tensor` with the value of `true_positives` divided +by the sum of `true_positives` and `false_negatives`. +
+`update_op` + +`Operation` that increments `true_positives` and +`false_negatives` variables appropriately, and whose value matches +`recall`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `weights` is not `None` and its shape doesn't match +`predictions`, or if either `metrics_collections` or `updates_collections` +are not a list or tuple. +
+`RuntimeError` + +If eager execution is enabled. +
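As a rough illustration (the logits and sparse labels below are invented), recall@1 asks how often the labelled class appears among the top-1 predictions; the same graph-mode setup as for the other v1 metrics applies.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Two examples, three classes: `labels` holds sparse class ids, `logits` holds scores.
labels = tf.constant([[0], [2]], dtype=tf.int64)
logits = tf.constant([[0.1, 0.8, 0.1],
                      [0.3, 0.1, 0.6]])

recall_at_1, update_op = tf.compat.v1.metrics.recall_at_k(labels, logits, k=1)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    # Example 0: top-1 class is 1 but the label is 0 (miss); example 1: top-1 is 2 (hit).
    print(sess.run(recall_at_1))  # 0.5
```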
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/recall_at_thresholds.md b/site/en/api_docs/python/tf/compat/v1/metrics/recall_at_thresholds.md new file mode 100644 index 00000000000..76cbad44f7f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/recall_at_thresholds.md @@ -0,0 +1,163 @@ +description: Computes various recall values for different thresholds on predictions. + +
+ + +
+ +# tf.compat.v1.metrics.recall_at_thresholds + + + + + + + + + +Computes various recall values for different `thresholds` on `predictions`. + + + + + + + +The `recall_at_thresholds` function creates four local variables, +`true_positives`, `true_negatives`, `false_positives` and `false_negatives` +for various values of thresholds. `recall[i]` is defined as the total weight +of values in `predictions` above `thresholds[i]` whose corresponding entry in +`labels` is `True`, divided by the total weight of `True` values in `labels` +(`true_positives[i] / (true_positives[i] + false_negatives[i])`). + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the `recall`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +The ground truth values, a `Tensor` whose dimensions must match +`predictions`. Will be cast to `bool`. +
+`predictions` + +A floating point `Tensor` of arbitrary shape and whose values +are in the range `[0, 1]`. +
+`thresholds` + +A python list or tuple of float thresholds in `[0, 1]`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that `recall` should be +added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`recall` + +A float `Tensor` of shape `[len(thresholds)]`. +
+`update_op` + +An operation that increments the `true_positives`, +`true_negatives`, `false_positives` and `false_negatives` variables that +are used in the computation of `recall`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
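A small graph-mode sketch with invented scores; one recall value is produced per threshold, and the printed values are approximate.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([True, False, True, True])
scores = tf.constant([0.9, 0.6, 0.4, 0.2])

recall, update_op = tf.compat.v1.metrics.recall_at_thresholds(
    labels, scores, thresholds=[0.3, 0.5, 0.8])

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    # At 0.3 two of the three positives are recovered; at 0.5 and 0.8 only the 0.9 score is.
    print(sess.run(recall))  # approximately [0.667, 0.333, 0.333]
```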
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/recall_at_top_k.md b/site/en/api_docs/python/tf/compat/v1/metrics/recall_at_top_k.md new file mode 100644 index 00000000000..5049c6b1857 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/recall_at_top_k.md @@ -0,0 +1,166 @@ +description: Computes recall@k of top-k predictions with respect to sparse labels. + +
+ + +
+ +# tf.compat.v1.metrics.recall_at_top_k + + + + + + + + + +Computes recall@k of top-k predictions with respect to sparse labels. + + + + + + + +Differs from `recall_at_k` in that predictions must be in the form of top `k` +class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k` +for more details. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +`int64` `Tensor` or `SparseTensor` with shape +[D1, ... DN, num_labels] or [D1, ... DN], where the latter implies +num_labels=1. N >= 1 and num_labels is the number of target classes for +the associated prediction. Commonly, N=1 and `labels` has shape +[batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values +should be in range [0, num_classes), where num_classes is the last +dimension of `predictions`. Values outside this range always count +towards `false_negative_at_`. +
+`predictions_idx` + +Integer `Tensor` with shape [D1, ... DN, k] where N >= 1. +Commonly, N=1 and predictions has shape [batch size, k]. The final +dimension contains the top `k` predicted class indices. [D1, ... DN] must +match `labels`. +
+`k` + +Integer, k for @k metric. Only used for the default op name. +
+`class_id` + +Integer class ID for which we want binary metrics. This should be +in range [0, num_classes), where num_classes is the last dimension of +`predictions`. If class_id is outside this range, the method returns NAN. +
+`weights` + +`Tensor` whose rank is either 0, or n-1, where n is the rank of +`labels`. If the latter, it must be broadcastable to `labels` (i.e., all +dimensions must be either `1`, or the same as the corresponding `labels` +dimension). +
+`metrics_collections` + +An optional list of collections that values should +be added to. +
+`updates_collections` + +An optional list of collections that updates should +be added to. +
+`name` + +Name of new update operation, and namespace for other dependent ops. +
+ + + + + + + + + + + + + + + +
+`recall` + +Scalar `float64` `Tensor` with the value of `true_positives` divided +by the sum of `true_positives` and `false_negatives`. +
+`update_op` + +`Operation` that increments `true_positives` and +`false_negatives` variables appropriately, and whose value matches +`recall`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `weights` is not `None` and its shape doesn't match +`predictions`, or if either `metrics_collections` or `updates_collections` +are not a list or tuple. +
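An illustrative sketch: the only difference from `recall_at_k` is that the caller supplies precomputed top-k class indices (hard-coded here; in practice they might come from `tf.math.top_k(logits, k).indices`).

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([[0], [2]], dtype=tf.int64)
predictions_idx = tf.constant([[1], [2]], dtype=tf.int64)  # precomputed top-1 class ids

recall, update_op = tf.compat.v1.metrics.recall_at_top_k(labels, predictions_idx, k=1)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(recall))  # 0.5: only the second example's label is in its top-1
```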
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/root_mean_squared_error.md b/site/en/api_docs/python/tf/compat/v1/metrics/root_mean_squared_error.md new file mode 100644 index 00000000000..9cb0fd03008 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/root_mean_squared_error.md @@ -0,0 +1,158 @@ +description: Computes the root mean squared error between the labels and predictions. + +
+ + +
+ +# tf.compat.v1.metrics.root_mean_squared_error + + + + + + + + + +Computes the root mean squared error between the labels and predictions. + + + + + + + +The `root_mean_squared_error` function creates two local variables, +`total` and `count` that are used to compute the root mean squared error. +This average is weighted by `weights`, and it is ultimately returned as +`root_mean_squared_error`: an idempotent operation that takes the square root +of the division of `total` by `count`. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the +`root_mean_squared_error`. Internally, a `squared_error` operation computes +the element-wise square of the difference between `predictions` and `labels`. +Then `update_op` increments `total` with the reduced sum of the product of +`weights` and `squared_error`, and it increments `count` with the reduced sum +of `weights`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +A `Tensor` of the same shape as `predictions`. +
+`predictions` + +A `Tensor` of arbitrary shape. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that +`root_mean_squared_error` should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`root_mean_squared_error` + +A `Tensor` representing the current mean, the value +of `total` divided by `count`. +
+`update_op` + +An operation that increments the `total` and `count` variables +appropriately and whose value matches `root_mean_squared_error`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
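A minimal graph-mode sketch with made-up values, showing the idempotent result after a single `update_op` run.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([1.0, 2.0, 3.0])
predictions = tf.constant([1.0, 2.0, 5.0])

rmse, update_op = tf.compat.v1.metrics.root_mean_squared_error(labels, predictions)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(rmse))  # sqrt(mean([0, 0, 4])) = sqrt(4/3) ≈ 1.155
```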
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/sensitivity_at_specificity.md b/site/en/api_docs/python/tf/compat/v1/metrics/sensitivity_at_specificity.md new file mode 100644 index 00000000000..562fa7a8e0d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/sensitivity_at_specificity.md @@ -0,0 +1,177 @@ +description: Computes the specificity at a given sensitivity. + +
+ + +
+ +# tf.compat.v1.metrics.sensitivity_at_specificity + + + + + + + + + +Computes the specificity at a given sensitivity. + + + + + + + +The `sensitivity_at_specificity` function creates four local +variables, `true_positives`, `true_negatives`, `false_positives` and +`false_negatives` that are used to compute the sensitivity at the given +specificity value. The threshold for the given specificity value is computed +and used to evaluate the corresponding sensitivity. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the +`sensitivity`. `update_op` increments the `true_positives`, `true_negatives`, +`false_positives` and `false_negatives` counts with the weight of each case +found in the `predictions` and `labels`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + +For additional information about specificity and sensitivity, see the +following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +The ground truth values, a `Tensor` whose dimensions must match +`predictions`. Will be cast to `bool`. +
+`predictions` + +A floating point `Tensor` of arbitrary shape and whose values +are in the range `[0, 1]`. +
+`specificity` + +A scalar value in range `[0, 1]`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`num_thresholds` + +The number of thresholds to use for matching the given +specificity. +
+`metrics_collections` + +An optional list of collections that `sensitivity` +should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`sensitivity` + +A scalar `Tensor` representing the sensitivity at the given +`specificity` value. +
+`update_op` + +An operation that increments the `true_positives`, +`true_negatives`, `false_positives` and `false_negatives` variables +appropriately and whose value matches `sensitivity`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, if +`weights` is not `None` and its shape doesn't match `predictions`, or if +`specificity` is not between 0 and 1, or if either `metrics_collections` +or `updates_collections` are not a list or tuple. +
+`RuntimeError` + +If eager execution is enabled. +
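A rough sketch with invented scores: the reported value is the sensitivity achieved at whichever of the `num_thresholds` candidate thresholds best matches the requested specificity, so the exact number depends on the data and the threshold grid.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([True, True, False, False, True, False])
scores = tf.constant([0.9, 0.7, 0.6, 0.3, 0.2, 0.1])

sensitivity, update_op = tf.compat.v1.metrics.sensitivity_at_specificity(
    labels, scores, specificity=0.5)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(sensitivity))  # sensitivity at the threshold giving ~50% specificity
```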
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/sparse_average_precision_at_k.md b/site/en/api_docs/python/tf/compat/v1/metrics/sparse_average_precision_at_k.md new file mode 100644 index 00000000000..94ba541e580 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/sparse_average_precision_at_k.md @@ -0,0 +1,38 @@ +description: Renamed to average_precision_at_k, please use that method instead. (deprecated) + +
+ + +
+ +# tf.compat.v1.metrics.sparse_average_precision_at_k + + + + + + + + + +Renamed to `average_precision_at_k`, please use that method instead. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use average_precision_at_k instead \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/sparse_precision_at_k.md b/site/en/api_docs/python/tf/compat/v1/metrics/sparse_precision_at_k.md new file mode 100644 index 00000000000..03d3c6ecd84 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/sparse_precision_at_k.md @@ -0,0 +1,38 @@ +description: Renamed to precision_at_k, please use that method instead. (deprecated) + +
+ + +
+ +# tf.compat.v1.metrics.sparse_precision_at_k + + + + + + + + + +Renamed to `precision_at_k`, please use that method instead. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use precision_at_k instead \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/specificity_at_sensitivity.md b/site/en/api_docs/python/tf/compat/v1/metrics/specificity_at_sensitivity.md new file mode 100644 index 00000000000..638c7119458 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/specificity_at_sensitivity.md @@ -0,0 +1,177 @@ +description: Computes the specificity at a given sensitivity. + +
+ + +
+ +# tf.compat.v1.metrics.specificity_at_sensitivity + + + + + + + + + +Computes the specificity at a given sensitivity. + + + + + + + +The `specificity_at_sensitivity` function creates four local +variables, `true_positives`, `true_negatives`, `false_positives` and +`false_negatives` that are used to compute the specificity at the given +sensitivity value. The threshold for the given sensitivity value is computed +and used to evaluate the corresponding specificity. + +For estimation of the metric over a stream of data, the function creates an +`update_op` operation that updates these variables and returns the +`specificity`. `update_op` increments the `true_positives`, `true_negatives`, +`false_positives` and `false_negatives` counts with the weight of each case +found in the `predictions` and `labels`. + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + +For additional information about specificity and sensitivity, see the +following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +The ground truth values, a `Tensor` whose dimensions must match +`predictions`. Will be cast to `bool`. +
+`predictions` + +A floating point `Tensor` of arbitrary shape and whose values +are in the range `[0, 1]`. +
+`sensitivity` + +A scalar value in range `[0, 1]`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`num_thresholds` + +The number of thresholds to use for matching the given +sensitivity. +
+`metrics_collections` + +An optional list of collections that `specificity` +should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`specificity` + +A scalar `Tensor` representing the specificity at the given +`sensitivity` value. +
+`update_op` + +An operation that increments the `true_positives`, +`true_negatives`, `false_positives` and `false_negatives` variables +appropriately and whose value matches `specificity`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, if +`weights` is not `None` and its shape doesn't match `predictions`, or if +`sensitivity` is not between 0 and 1, or if either `metrics_collections` +or `updates_collections` are not a list or tuple. +
+`RuntimeError` + +If eager execution is enabled. +
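The mirror image of `sensitivity_at_specificity`, shown as a rough sketch with invented values: a threshold is chosen so that sensitivity is roughly the requested value, and the specificity at that threshold is reported.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([True, True, False, False, True, False])
scores = tf.constant([0.9, 0.7, 0.6, 0.3, 0.2, 0.1])

specificity, update_op = tf.compat.v1.metrics.specificity_at_sensitivity(
    labels, scores, sensitivity=0.5)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(specificity))  # specificity at the threshold giving ~50% sensitivity
```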
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/true_negatives.md b/site/en/api_docs/python/tf/compat/v1/metrics/true_negatives.md new file mode 100644 index 00000000000..7b2ca269a0e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/true_negatives.md @@ -0,0 +1,144 @@ +description: Sum the weights of true_negatives. + +
+ + +
+ +# tf.compat.v1.metrics.true_negatives + + + + + + + + + +Sum the weights of true_negatives. + + + + + + + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +The ground truth values, a `Tensor` whose dimensions must match +`predictions`. Will be cast to `bool`. +
+`predictions` + +The predicted values, a `Tensor` of arbitrary dimensions. Will +be cast to `bool`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that the metric +value variable should be added to. +
+`updates_collections` + +An optional list of collections that the metric update +ops should be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`value_tensor` + +A `Tensor` representing the current value of the metric. +
+`update_op` + +An operation that accumulates the error from a batch of data. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
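A tiny illustrative sketch (graph mode, invented values): the metric simply accumulates the weight of entries that are negative in both `labels` and `predictions`.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([True, False, False, True])
predictions = tf.constant([False, False, True, True])

tn, update_op = tf.compat.v1.metrics.true_negatives(labels, predictions)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(tn))  # 1.0: only index 1 is negative in both labels and predictions
```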
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/true_negatives_at_thresholds.md b/site/en/api_docs/python/tf/compat/v1/metrics/true_negatives_at_thresholds.md new file mode 100644 index 00000000000..3fd2eb15206 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/true_negatives_at_thresholds.md @@ -0,0 +1,152 @@ +description: Computes true negatives at provided threshold values. + +
+ + +
+ +# tf.compat.v1.metrics.true_negatives_at_thresholds + + + + + + + + + +Computes true negatives at provided threshold values. + + + + + + + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +A `Tensor` whose shape matches `predictions`. Will be cast to +`bool`. +
+`predictions` + +A floating point `Tensor` of arbitrary shape and whose values +are in the range `[0, 1]`. +
+`thresholds` + +A python list or tuple of float thresholds in `[0, 1]`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that `true_negatives` +should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`true_negatives` + +A float `Tensor` of shape `[len(thresholds)]`. +
+`update_op` + +An operation that updates the `true_negatives` variable and +returns its current value. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
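A short sketch with invented scores: predictions above a threshold count as positive, the rest as negative, and one count is accumulated per threshold.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([True, False, False, True])
scores = tf.constant([0.9, 0.2, 0.7, 0.6])

tn, update_op = tf.compat.v1.metrics.true_negatives_at_thresholds(
    labels, scores, thresholds=[0.5, 0.8])

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(tn))  # [1., 2.]: one true negative below 0.5, two below 0.8
```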
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/true_positives.md b/site/en/api_docs/python/tf/compat/v1/metrics/true_positives.md new file mode 100644 index 00000000000..0c4e383d8c9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/true_positives.md @@ -0,0 +1,144 @@ +description: Sum the weights of true_positives. + +
+ + +
+ +# tf.compat.v1.metrics.true_positives + + + + + + + + + +Sum the weights of true_positives. + + + + + + + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +The ground truth values, a `Tensor` whose dimensions must match +`predictions`. Will be cast to `bool`. +
+`predictions` + +The predicted values, a `Tensor` of arbitrary dimensions. Will +be cast to `bool`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that the metric +value variable should be added to. +
+`updates_collections` + +An optional list of collections that the metric update +ops should be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`value_tensor` + +A `Tensor` representing the current value of the metric. +
+`update_op` + +An operation that accumulates the error from a batch of data. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
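An illustrative graph-mode sketch (values invented); the result is just the accumulated weight of entries that are positive in both `labels` and `predictions`.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([True, False, False, True])
predictions = tf.constant([False, False, True, True])

tp, update_op = tf.compat.v1.metrics.true_positives(labels, predictions)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(tp))  # 1.0: only index 3 is positive in both labels and predictions
```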
+ diff --git a/site/en/api_docs/python/tf/compat/v1/metrics/true_positives_at_thresholds.md b/site/en/api_docs/python/tf/compat/v1/metrics/true_positives_at_thresholds.md new file mode 100644 index 00000000000..fbfcdadf766 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/metrics/true_positives_at_thresholds.md @@ -0,0 +1,152 @@ +description: Computes true positives at provided threshold values. + +
+ + +
+ +# tf.compat.v1.metrics.true_positives_at_thresholds + + + + + + + + + +Computes true positives at provided threshold values. + + + + + + + +If `weights` is `None`, weights default to 1. Use weights of 0 to mask values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +A `Tensor` whose shape matches `predictions`. Will be cast to +`bool`. +
+`predictions` + +A floating point `Tensor` of arbitrary shape and whose values +are in the range `[0, 1]`. +
+`thresholds` + +A python list or tuple of float thresholds in `[0, 1]`. +
+`weights` + +Optional `Tensor` whose rank is either 0, or the same rank as +`labels`, and must be broadcastable to `labels` (i.e., all dimensions must +be either `1`, or the same as the corresponding `labels` dimension). +
+`metrics_collections` + +An optional list of collections that `true_positives` +should be added to. +
+`updates_collections` + +An optional list of collections that `update_op` should +be added to. +
+`name` + +An optional variable_scope name. +
+ + + + + + + + + + + + + + + +
+`true_positives` + +A float `Tensor` of shape `[len(thresholds)]`. +
+`update_op` + +An operation that updates the `true_positives` variable and +returns its current value. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `predictions` and `labels` have mismatched shapes, or if +`weights` is not `None` and its shape doesn't match `predictions`, or if +either `metrics_collections` or `updates_collections` are not a list or +tuple. +
+`RuntimeError` + +If eager execution is enabled. +
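A short sketch with invented scores, mirroring `true_negatives_at_thresholds`: predictions above a threshold count as positive, and one count is accumulated per threshold.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([True, False, False, True])
scores = tf.constant([0.9, 0.2, 0.7, 0.6])

tp, update_op = tf.compat.v1.metrics.true_positives_at_thresholds(
    labels, scores, thresholds=[0.5, 0.8])

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(tp))  # [2., 1.]: 0.9 and 0.6 exceed 0.5 with positive labels; only 0.9 exceeds 0.8
```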
+ diff --git a/site/en/api_docs/python/tf/compat/v1/min_max_variable_partitioner.md b/site/en/api_docs/python/tf/compat/v1/min_max_variable_partitioner.md new file mode 100644 index 00000000000..cb209c23a66 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/min_max_variable_partitioner.md @@ -0,0 +1,93 @@ +description: Partitioner to allocate minimum size per slice. + +
+ + +
+ +# tf.compat.v1.min_max_variable_partitioner + + + + + + + + + +Partitioner to allocate minimum size per slice. + + + + + + + +Returns a partitioner that partitions the variable of given shape and dtype +such that each partition has a minimum of `min_slice_size` slice of the +variable. The maximum number of such partitions (upper bound) is given by +`max_partitions`. + + + + + + + + + + + + + + + + + + + +
+`max_partitions` + +Upper bound on the number of partitions. Defaults to 1. +
+`axis` + +Axis along which to partition the variable. Defaults to 0. +
+`min_slice_size` + +Minimum size of the variable slice per partition. Defaults +to 256K. +
+`bytes_per_string_element` + +If the `Variable` is of type string, this provides +an estimate of how large each scalar in the `Variable` is. +
+ + + + + + + + + + + +
+A partition function usable as the `partitioner` argument to +`variable_scope` and `get_variable`. +
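A hedged sketch of how the returned callable is typically used (the scope name, shape, and sizes below are illustrative): pass it as the `partitioner` of a variable scope so that large variables created with `get_variable` are split into at most `max_partitions` slices of at least `min_slice_size` bytes each.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # partitioned variables are a graph-mode feature

# At most 4 partitions along axis 0, each covering at least 64 KiB of the variable.
partitioner = tf.compat.v1.min_max_variable_partitioner(
    max_partitions=4, axis=0, min_slice_size=64 << 10)

with tf.compat.v1.variable_scope("embeddings", partitioner=partitioner):
    weights = tf.compat.v1.get_variable(
        "weights", shape=[100000, 64], dtype=tf.float32)  # created as a partitioned variable
```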
+ diff --git a/site/en/api_docs/python/tf/compat/v1/mixed_precision.md b/site/en/api_docs/python/tf/compat/v1/mixed_precision.md new file mode 100644 index 00000000000..00fe0c10237 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/mixed_precision.md @@ -0,0 +1,25 @@ +description: Public API for tf.mixed_precision namespace. + +
+ + +
+ +# Module: tf.compat.v1.mixed_precision + + + + + + + + + +Public API for tf.mixed_precision namespace. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/mixed_precision/experimental.md) module: Public API for tf.mixed_precision.experimental namespace. + diff --git a/site/en/api_docs/python/tf/compat/v1/mixed_precision/experimental.md b/site/en/api_docs/python/tf/compat/v1/mixed_precision/experimental.md new file mode 100644 index 00000000000..9f939f916b6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/mixed_precision/experimental.md @@ -0,0 +1,29 @@ +description: Public API for tf.mixed_precision.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.mixed_precision.experimental + + + + + + + + + +Public API for tf.mixed_precision.experimental namespace. + + + +## Classes + +[`class DynamicLossScale`](../../../../tf/mixed_precision/experimental/DynamicLossScale.md): Loss scale that dynamically adjusts itself. + +[`class FixedLossScale`](../../../../tf/mixed_precision/experimental/FixedLossScale.md): Loss scale with a fixed value. + +[`class LossScale`](../../../../tf/mixed_precision/experimental/LossScale.md): Base class for all loss scales. + diff --git a/site/en/api_docs/python/tf/compat/v1/mlir.md b/site/en/api_docs/python/tf/compat/v1/mlir.md new file mode 100644 index 00000000000..da9d5f2fb27 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/mlir.md @@ -0,0 +1,25 @@ +description: Public API for tf.mlir namespace. + +
+ + +
+ +# Module: tf.compat.v1.mlir + + + + + + + + + +Public API for tf.mlir namespace. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/mlir/experimental.md) module: Public API for tf.mlir.experimental namespace. + diff --git a/site/en/api_docs/python/tf/compat/v1/mlir/experimental.md b/site/en/api_docs/python/tf/compat/v1/mlir/experimental.md new file mode 100644 index 00000000000..9483dc67d22 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/mlir/experimental.md @@ -0,0 +1,25 @@ +description: Public API for tf.mlir.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.mlir.experimental + + + + + + + + + +Public API for tf.mlir.experimental namespace. + + + +## Functions + +[`convert_graph_def(...)`](../../../../tf/mlir/experimental/convert_graph_def.md): Import a GraphDef and convert it to a textual MLIR module. + diff --git a/site/en/api_docs/python/tf/compat/v1/model_variables.md b/site/en/api_docs/python/tf/compat/v1/model_variables.md new file mode 100644 index 00000000000..c77eb4efe84 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/model_variables.md @@ -0,0 +1,68 @@ +description: Returns all variables in the MODEL_VARIABLES collection. + +
+ + +
+ +# tf.compat.v1.model_variables + + + + + + + + + +Returns all variables in the MODEL_VARIABLES collection. + + + + + + + + + + + + + + + + + +
+`scope` + +(Optional.) A string. If supplied, the resulting list is filtered to +include only items whose `name` attribute matches `scope` using +`re.match`. Items without a `name` attribute are never returned if a scope +is supplied. The choice of `re.match` means that a `scope` without special +tokens filters by prefix. +
+ + + + + + + + + + + +
+A list of model Variable objects. +
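A small illustrative sketch (graph mode; registering the variable in the collection by hand is only for demonstration, since a plain `get_variable` call does not add to `MODEL_VARIABLES` on its own):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.compat.v1.variable_scope("net"):
    w = tf.compat.v1.get_variable("w", shape=[3])
    tf.compat.v1.add_to_collection(tf.compat.v1.GraphKeys.MODEL_VARIABLES, w)

print(tf.compat.v1.model_variables())        # every registered model variable
print(tf.compat.v1.model_variables("net"))   # filtered by the "net" scope prefix
```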
+ diff --git a/site/en/api_docs/python/tf/compat/v1/moving_average_variables.md b/site/en/api_docs/python/tf/compat/v1/moving_average_variables.md new file mode 100644 index 00000000000..e06bce65275 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/moving_average_variables.md @@ -0,0 +1,72 @@ +description: Returns all variables that maintain their moving averages. + +
+ + +
+ +# tf.compat.v1.moving_average_variables + + + + + + + + + +Returns all variables that maintain their moving averages. + + + + + + + +If an `ExponentialMovingAverage` object is created and the `apply()` +method is called on a list of variables, these variables will +be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection. +This convenience function returns the contents of that collection. + + + + + + + + + + +
+`scope` + +(Optional.) A string. If supplied, the resulting list is filtered to +include only items whose `name` attribute matches `scope` using +`re.match`. Items without a `name` attribute are never returned if a scope +is supplied. The choice of `re.match` means that a `scope` without special +tokens filters by prefix. +
+ + + + + + + + + + + +
+A list of Variable objects. +
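A brief sketch (graph mode, variable name invented) of how variables end up in this collection: calling `ExponentialMovingAverage.apply` on a variable registers it, after which this function returns it.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

v = tf.compat.v1.get_variable("v", shape=[], initializer=tf.compat.v1.zeros_initializer())
ema = tf.compat.v1.train.ExponentialMovingAverage(decay=0.99)
maintain_averages_op = ema.apply([v])  # adds `v` to GraphKeys.MOVING_AVERAGE_VARIABLES

print(tf.compat.v1.moving_average_variables())  # contains `v`
```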
+ diff --git a/site/en/api_docs/python/tf/compat/v1/multinomial.md b/site/en/api_docs/python/tf/compat/v1/multinomial.md new file mode 100644 index 00000000000..360d8eeda95 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/multinomial.md @@ -0,0 +1,118 @@ +description: Draws samples from a multinomial distribution. (deprecated) + +
+ + +
+ +# tf.compat.v1.multinomial + + + + + + + + + +Draws samples from a multinomial distribution. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.random.categorical instead. + +#### Example: + + + +```python +# samples has shape [1, 5], where each value is either 0 or 1 with equal +# probability. +samples = tf.random.categorical(tf.math.log([[0.5, 0.5]]), 5) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`logits` + +2-D Tensor with shape `[batch_size, num_classes]`. Each slice +`[i, :]` represents the unnormalized log-probabilities for all classes. +
+`num_samples` + +0-D. Number of independent samples to draw for each row slice. +
+`seed` + +A Python integer. Used to create a random seed for the distribution. +See tf.random.set_seed for behavior. +
+`name` + +Optional name for the operation. +
+`output_dtype` + +integer type to use for the output. Defaults to int64. +
+ + + + + + + + + + + +
+The drawn samples of shape `[batch_size, num_samples]`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nest.md b/site/en/api_docs/python/tf/compat/v1/nest.md new file mode 100644 index 00000000000..5e1aa27f380 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nest.md @@ -0,0 +1,33 @@ +description: Public API for tf.nest namespace. + +
+ + +
+ +# Module: tf.compat.v1.nest + + + + + + + + + +Public API for tf.nest namespace. + + + +## Functions + +[`assert_same_structure(...)`](../../../tf/nest/assert_same_structure.md): Asserts that two structures are nested in the same way. + +[`flatten(...)`](../../../tf/nest/flatten.md): Returns a flat list from a given nested structure. + +[`is_nested(...)`](../../../tf/nest/is_nested.md): Returns true if its input is a collections.abc.Sequence (except strings). + +[`map_structure(...)`](../../../tf/nest/map_structure.md): Applies `func` to each entry in `structure` and returns a new structure. + +[`pack_sequence_as(...)`](../../../tf/nest/pack_sequence_as.md): Returns a given flattened sequence packed into a given structure. + diff --git a/site/en/api_docs/python/tf/compat/v1/nn.md b/site/en/api_docs/python/tf/compat/v1/nn.md new file mode 100644 index 00000000000..a31c6d3182e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn.md @@ -0,0 +1,243 @@ +description: Wrappers for primitive Neural Net (NN) Operations. + +
+ + +
+ +# Module: tf.compat.v1.nn + + + + + + + + + +Wrappers for primitive Neural Net (NN) Operations. + + + +## Modules + +[`rnn_cell`](../../../tf/compat/v1/nn/rnn_cell.md) module: Module for constructing RNN Cells. + +## Functions + +[`all_candidate_sampler(...)`](../../../tf/random/all_candidate_sampler.md): Generate the set of all classes. + +[`atrous_conv2d(...)`](../../../tf/nn/atrous_conv2d.md): Atrous convolution (a.k.a. convolution with holes or dilated convolution). + +[`atrous_conv2d_transpose(...)`](../../../tf/nn/atrous_conv2d_transpose.md): The transpose of `atrous_conv2d`. + +[`avg_pool(...)`](../../../tf/compat/v1/nn/avg_pool.md): Performs the average pooling on the input. + +[`avg_pool1d(...)`](../../../tf/nn/avg_pool1d.md): Performs the average pooling on the input. + +[`avg_pool2d(...)`](../../../tf/compat/v1/nn/avg_pool.md): Performs the average pooling on the input. + +[`avg_pool3d(...)`](../../../tf/nn/avg_pool3d.md): Performs the average pooling on the input. + +[`avg_pool_v2(...)`](../../../tf/nn/avg_pool.md): Performs the avg pooling on the input. + +[`batch_norm_with_global_normalization(...)`](../../../tf/compat/v1/nn/batch_norm_with_global_normalization.md): Batch normalization. + +[`batch_normalization(...)`](../../../tf/nn/batch_normalization.md): Batch normalization. + +[`bias_add(...)`](../../../tf/nn/bias_add.md): Adds `bias` to `value`. + +[`bidirectional_dynamic_rnn(...)`](../../../tf/compat/v1/nn/bidirectional_dynamic_rnn.md): Creates a dynamic version of bidirectional recurrent neural network. (deprecated) + +[`collapse_repeated(...)`](../../../tf/nn/collapse_repeated.md): Merge repeated labels into single labels. + +[`compute_accidental_hits(...)`](../../../tf/nn/compute_accidental_hits.md): Compute the position ids in `sampled_candidates` matching `true_classes`. + +[`compute_average_loss(...)`](../../../tf/nn/compute_average_loss.md): Scales per-example losses with sample_weights and computes their average. + +[`conv1d(...)`](../../../tf/compat/v1/nn/conv1d.md): Computes a 1-D convolution given 3-D input and filter tensors. (deprecated argument values) (deprecated argument values) + +[`conv1d_transpose(...)`](../../../tf/nn/conv1d_transpose.md): The transpose of `conv1d`. + +[`conv2d(...)`](../../../tf/compat/v1/nn/conv2d.md): Computes a 2-D convolution given 4-D `input` and `filter` tensors. + +[`conv2d_backprop_filter(...)`](../../../tf/compat/v1/nn/conv2d_backprop_filter.md): Computes the gradients of convolution with respect to the filter. + +[`conv2d_backprop_input(...)`](../../../tf/compat/v1/nn/conv2d_backprop_input.md): Computes the gradients of convolution with respect to the input. + +[`conv2d_transpose(...)`](../../../tf/compat/v1/nn/conv2d_transpose.md): The transpose of `conv2d`. + +[`conv3d(...)`](../../../tf/compat/v1/nn/conv3d.md): Computes a 3-D convolution given 5-D `input` and `filter` tensors. + +[`conv3d_backprop_filter(...)`](../../../tf/compat/v1/nn/conv3d_backprop_filter.md): Computes the gradients of 3-D convolution with respect to the filter. + +[`conv3d_backprop_filter_v2(...)`](../../../tf/compat/v1/nn/conv3d_backprop_filter.md): Computes the gradients of 3-D convolution with respect to the filter. + +[`conv3d_transpose(...)`](../../../tf/compat/v1/nn/conv3d_transpose.md): The transpose of `conv3d`. + +[`conv_transpose(...)`](../../../tf/nn/conv_transpose.md): The transpose of `convolution`. + +[`convolution(...)`](../../../tf/compat/v1/nn/convolution.md): Computes sums of N-D convolutions (actually cross-correlation). 
+ +[`crelu(...)`](../../../tf/compat/v1/nn/crelu.md): Computes Concatenated ReLU. + +[`ctc_beam_search_decoder(...)`](../../../tf/compat/v1/nn/ctc_beam_search_decoder.md): Performs beam search decoding on the logits given in input. + +[`ctc_beam_search_decoder_v2(...)`](../../../tf/nn/ctc_beam_search_decoder.md): Performs beam search decoding on the logits given in input. + +[`ctc_greedy_decoder(...)`](../../../tf/nn/ctc_greedy_decoder.md): Performs greedy decoding on the logits given in input (best path). + +[`ctc_loss(...)`](../../../tf/compat/v1/nn/ctc_loss.md): Computes the CTC (Connectionist Temporal Classification) Loss. + +[`ctc_loss_v2(...)`](../../../tf/compat/v1/nn/ctc_loss_v2.md): Computes CTC (Connectionist Temporal Classification) loss. + +[`ctc_unique_labels(...)`](../../../tf/nn/ctc_unique_labels.md): Get unique labels and indices for batched labels for tf.nn.ctc_loss. + +[`depth_to_space(...)`](../../../tf/compat/v1/depth_to_space.md): DepthToSpace for tensors of type T. + +[`depthwise_conv2d(...)`](../../../tf/compat/v1/nn/depthwise_conv2d.md): Depthwise 2-D convolution. + +[`depthwise_conv2d_backprop_filter(...)`](../../../tf/nn/depthwise_conv2d_backprop_filter.md): Computes the gradients of depthwise convolution with respect to the filter. + +[`depthwise_conv2d_backprop_input(...)`](../../../tf/nn/depthwise_conv2d_backprop_input.md): Computes the gradients of depthwise convolution with respect to the input. + +[`depthwise_conv2d_native(...)`](../../../tf/compat/v1/nn/depthwise_conv2d_native.md): Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors. + +[`depthwise_conv2d_native_backprop_filter(...)`](../../../tf/nn/depthwise_conv2d_backprop_filter.md): Computes the gradients of depthwise convolution with respect to the filter. + +[`depthwise_conv2d_native_backprop_input(...)`](../../../tf/nn/depthwise_conv2d_backprop_input.md): Computes the gradients of depthwise convolution with respect to the input. + +[`dilation2d(...)`](../../../tf/compat/v1/nn/dilation2d.md): Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors. + +[`dropout(...)`](../../../tf/compat/v1/nn/dropout.md): Computes dropout. (deprecated arguments) + +[`dynamic_rnn(...)`](../../../tf/compat/v1/nn/dynamic_rnn.md): Creates a recurrent neural network specified by RNNCell `cell`. (deprecated) + +[`elu(...)`](../../../tf/nn/elu.md): Computes exponential linear: `exp(features) - 1` if < 0, `features` otherwise. + +[`embedding_lookup(...)`](../../../tf/compat/v1/nn/embedding_lookup.md): Looks up `ids` in a list of embedding tensors. + +[`embedding_lookup_sparse(...)`](../../../tf/compat/v1/nn/embedding_lookup_sparse.md): Computes embeddings for the given ids and weights. + +[`erosion2d(...)`](../../../tf/compat/v1/nn/erosion2d.md): Computes the grayscale erosion of 4-D `value` and 3-D `kernel` tensors. + +[`fixed_unigram_candidate_sampler(...)`](../../../tf/random/fixed_unigram_candidate_sampler.md): Samples a set of classes using the provided (fixed) base distribution. + +[`fractional_avg_pool(...)`](../../../tf/compat/v1/nn/fractional_avg_pool.md): Performs fractional average pooling on the input. (deprecated) + +[`fractional_max_pool(...)`](../../../tf/compat/v1/nn/fractional_max_pool.md): Performs fractional max pooling on the input. (deprecated) + +[`fused_batch_norm(...)`](../../../tf/compat/v1/nn/fused_batch_norm.md): Batch normalization. + +[`in_top_k(...)`](../../../tf/compat/v1/math/in_top_k.md): Says whether the targets are in the top `K` predictions. 
+ +[`l2_loss(...)`](../../../tf/nn/l2_loss.md): L2 Loss. + +[`l2_normalize(...)`](../../../tf/compat/v1/linalg/l2_normalize.md): Normalizes along dimension `axis` using an L2 norm. (deprecated arguments) + +[`leaky_relu(...)`](../../../tf/nn/leaky_relu.md): Compute the Leaky ReLU activation function. + +[`learned_unigram_candidate_sampler(...)`](../../../tf/random/learned_unigram_candidate_sampler.md): Samples a set of classes from a distribution learned during training. + +[`local_response_normalization(...)`](../../../tf/nn/local_response_normalization.md): Local Response Normalization. + +[`log_poisson_loss(...)`](../../../tf/nn/log_poisson_loss.md): Computes log Poisson loss given `log_input`. + +[`log_softmax(...)`](../../../tf/compat/v1/math/log_softmax.md): Computes log softmax activations. (deprecated arguments) + +[`log_uniform_candidate_sampler(...)`](../../../tf/random/log_uniform_candidate_sampler.md): Samples a set of classes using a log-uniform (Zipfian) base distribution. + +[`lrn(...)`](../../../tf/nn/local_response_normalization.md): Local Response Normalization. + +[`max_pool(...)`](../../../tf/compat/v1/nn/max_pool.md): Performs the max pooling on the input. + +[`max_pool1d(...)`](../../../tf/nn/max_pool1d.md): Performs the max pooling on the input. + +[`max_pool2d(...)`](../../../tf/nn/max_pool2d.md): Performs the max pooling on the input. + +[`max_pool3d(...)`](../../../tf/nn/max_pool3d.md): Performs the max pooling on the input. + +[`max_pool_v2(...)`](../../../tf/nn/max_pool.md): Performs the max pooling on the input. + +[`max_pool_with_argmax(...)`](../../../tf/compat/v1/nn/max_pool_with_argmax.md): Performs max pooling on the input and outputs both max values and indices. + +[`moments(...)`](../../../tf/compat/v1/nn/moments.md): Calculate the mean and variance of `x`. + +[`nce_loss(...)`](../../../tf/compat/v1/nn/nce_loss.md): Computes and returns the noise-contrastive estimation training loss. + +[`normalize_moments(...)`](../../../tf/nn/normalize_moments.md): Calculate the mean and variance of based on the sufficient statistics. + +[`pool(...)`](../../../tf/compat/v1/nn/pool.md): Performs an N-D pooling operation. + +[`quantized_avg_pool(...)`](../../../tf/compat/v1/nn/quantized_avg_pool.md): Produces the average pool of the input tensor for quantized types. + +[`quantized_conv2d(...)`](../../../tf/compat/v1/nn/quantized_conv2d.md): Computes a 2D convolution given quantized 4D input and filter tensors. + +[`quantized_max_pool(...)`](../../../tf/compat/v1/nn/quantized_max_pool.md): Produces the max pool of the input tensor for quantized types. + +[`quantized_relu_x(...)`](../../../tf/compat/v1/nn/quantized_relu_x.md): Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)` + +[`raw_rnn(...)`](../../../tf/compat/v1/nn/raw_rnn.md): Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`. + +[`relu(...)`](../../../tf/nn/relu.md): Computes rectified linear: `max(features, 0)`. + +[`relu6(...)`](../../../tf/nn/relu6.md): Computes Rectified Linear 6: `min(max(features, 0), 6)`. + +[`relu_layer(...)`](../../../tf/compat/v1/nn/relu_layer.md): Computes Relu(x * weight + biases). + +[`safe_embedding_lookup_sparse(...)`](../../../tf/compat/v1/nn/safe_embedding_lookup_sparse.md): Lookup embedding results, accounting for invalid IDs and empty features. + +[`sampled_softmax_loss(...)`](../../../tf/compat/v1/nn/sampled_softmax_loss.md): Computes and returns the sampled softmax training loss. 
+ +[`scale_regularization_loss(...)`](../../../tf/nn/scale_regularization_loss.md): Scales the sum of the given regularization losses by number of replicas. + +[`selu(...)`](../../../tf/nn/selu.md): Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)` + +[`separable_conv2d(...)`](../../../tf/compat/v1/nn/separable_conv2d.md): 2-D convolution with separable filters. + +[`sigmoid(...)`](../../../tf/math/sigmoid.md): Computes sigmoid of `x` element-wise. + +[`sigmoid_cross_entropy_with_logits(...)`](../../../tf/compat/v1/nn/sigmoid_cross_entropy_with_logits.md): Computes sigmoid cross entropy given `logits`. + +[`softmax(...)`](../../../tf/compat/v1/math/softmax.md): Computes softmax activations. (deprecated arguments) + +[`softmax_cross_entropy_with_logits(...)`](../../../tf/compat/v1/nn/softmax_cross_entropy_with_logits.md): Computes softmax cross entropy between `logits` and `labels`. (deprecated) + +[`softmax_cross_entropy_with_logits_v2(...)`](../../../tf/compat/v1/nn/softmax_cross_entropy_with_logits_v2.md): Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments) + +[`softplus(...)`](../../../tf/math/softplus.md): Computes softplus: `log(exp(features) + 1)`. + +[`softsign(...)`](../../../tf/nn/softsign.md): Computes softsign: `features / (abs(features) + 1)`. + +[`space_to_batch(...)`](../../../tf/compat/v1/space_to_batch.md): SpaceToBatch for 4-D tensors of type T. + +[`space_to_depth(...)`](../../../tf/compat/v1/space_to_depth.md): SpaceToDepth for tensors of type T. + +[`sparse_softmax_cross_entropy_with_logits(...)`](../../../tf/compat/v1/nn/sparse_softmax_cross_entropy_with_logits.md): Computes sparse softmax cross entropy between `logits` and `labels`. + +[`static_bidirectional_rnn(...)`](../../../tf/compat/v1/nn/static_bidirectional_rnn.md): Creates a bidirectional recurrent neural network. (deprecated) + +[`static_rnn(...)`](../../../tf/compat/v1/nn/static_rnn.md): Creates a recurrent neural network specified by RNNCell `cell`. (deprecated) + +[`static_state_saving_rnn(...)`](../../../tf/compat/v1/nn/static_state_saving_rnn.md): RNN that accepts a state saver for time-truncated RNN calculation. (deprecated) + +[`sufficient_statistics(...)`](../../../tf/compat/v1/nn/sufficient_statistics.md): Calculate the sufficient statistics for the mean and variance of `x`. + +[`swish(...)`](../../../tf/nn/swish.md): Computes the Swish activation function: `x * sigmoid(x)`. + +[`tanh(...)`](../../../tf/math/tanh.md): Computes hyperbolic tangent of `x` element-wise. + +[`top_k(...)`](../../../tf/math/top_k.md): Finds values and indices of the `k` largest entries for the last dimension. + +[`uniform_candidate_sampler(...)`](../../../tf/random/uniform_candidate_sampler.md): Samples a set of classes using a uniform base distribution. + +[`weighted_cross_entropy_with_logits(...)`](../../../tf/compat/v1/nn/weighted_cross_entropy_with_logits.md): Computes a weighted cross entropy. (deprecated arguments) + +[`weighted_moments(...)`](../../../tf/compat/v1/nn/weighted_moments.md): Returns the frequency-weighted mean and variance of `x`. + +[`with_space_to_batch(...)`](../../../tf/nn/with_space_to_batch.md): Performs `op` on the space-to-batch representation of `input`. + +[`xw_plus_b(...)`](../../../tf/compat/v1/nn/xw_plus_b.md): Computes matmul(x, weights) + biases. + +[`zero_fraction(...)`](../../../tf/math/zero_fraction.md): Returns the fraction of zeros in `value`. 
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/avg_pool.md b/site/en/api_docs/python/tf/compat/v1/nn/avg_pool.md new file mode 100644 index 00000000000..2373f316372 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/avg_pool.md @@ -0,0 +1,123 @@ +description: Performs the average pooling on the input. + +
+ + +
+ +# tf.compat.v1.nn.avg_pool + + + + + + + + + +Performs the average pooling on the input. + + + + + + + + + +Each entry in `output` is the mean of the corresponding size `ksize` +window in `value`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A 4-D `Tensor` of shape `[batch, height, width, channels]` and type +`float32`, `float64`, `qint8`, `quint8`, or `qint32`. +
+`ksize` + +An int or list of `ints` that has length `1`, `2` or `4`. The size of +the window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1`, `2` or `4`. The +stride of the sliding window for each dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +See the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. 'NHWC' and 'NCHW' are supported. +
+`name` + +Optional name for the operation. +
+`input` + +Alias for value. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `value`. The average pooled output tensor. +
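A small sketch (layout and values invented): a 4x4 single-channel NHWC input averaged over non-overlapping 2x2 windows.

```python
import tensorflow as tf

# One 4x4 image with a single channel, NHWC layout.
x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])

pooled = tf.compat.v1.nn.avg_pool(x, ksize=2, strides=2, padding="VALID")
# Each output entry is the mean of a 2x2 window; the result has shape [1, 2, 2, 1].
```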
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/batch_norm_with_global_normalization.md b/site/en/api_docs/python/tf/compat/v1/nn/batch_norm_with_global_normalization.md new file mode 100644 index 00000000000..9888b3e9957 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/batch_norm_with_global_normalization.md @@ -0,0 +1,152 @@ +description: Batch normalization. + +
+ + +
+ +# tf.compat.v1.nn.batch_norm_with_global_normalization + + + + + + + + + +Batch normalization. + + + + + + + +This op is deprecated. See tf.nn.batch_normalization. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`t` + +A 4D input Tensor. +
+`m` + +A 1D mean Tensor with size matching the last dimension of t. +This is the first output from tf.nn.moments, +or a saved moving average thereof. +
+`v` + +A 1D variance Tensor with size matching the last dimension of t. +This is the second output from tf.nn.moments, +or a saved moving average thereof. +
+`beta` + +A 1D beta Tensor with size matching the last dimension of t. +An offset to be added to the normalized tensor. +
+`gamma` + +A 1D gamma Tensor with size matching the last dimension of t. +If "scale_after_normalization" is true, this tensor will be multiplied +with the normalized tensor. +
+`variance_epsilon` + +A small float number to avoid dividing by 0. +
+`scale_after_normalization` + +A bool indicating whether the resulted tensor +needs to be multiplied with gamma. +
+`name` + +A name for this operation (optional). +
+`input` + +Alias for t. +
+`mean` + +Alias for m. +
+`variance` + +Alias for v. +
+ + + + + + + + + + + +
+A batch-normalized `t`. +
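As a hedged sketch of how the pieces fit together (shapes are illustrative, and `tf.nn.batch_normalization` is the recommended replacement): per-channel statistics from `tf.nn.moments` are fed in together with a learned offset and scale.

```python
import tensorflow as tf

t = tf.random.normal([8, 4, 4, 3])                 # NHWC input
mean, variance = tf.nn.moments(t, axes=[0, 1, 2])  # per-channel statistics
beta = tf.zeros([3])                               # offset
gamma = tf.ones([3])                               # scale

out = tf.compat.v1.nn.batch_norm_with_global_normalization(
    t, mean, variance, beta, gamma,
    variance_epsilon=1e-5, scale_after_normalization=True)
```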
+ + + +#### References: + +Batch Normalization - Accelerating Deep Network Training by Reducing +Internal Covariate Shift: + [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html) + ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/bidirectional_dynamic_rnn.md b/site/en/api_docs/python/tf/compat/v1/nn/bidirectional_dynamic_rnn.md new file mode 100644 index 00000000000..ac00d377596 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/bidirectional_dynamic_rnn.md @@ -0,0 +1,209 @@ +description: Creates a dynamic version of bidirectional recurrent neural network. (deprecated) + +
+ + +
+ +# tf.compat.v1.nn.bidirectional_dynamic_rnn + + + + + + + + + +Creates a dynamic version of bidirectional recurrent neural network. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use `keras.layers.Bidirectional(keras.layers.RNN(cell))`, which is equivalent to this API + +Takes input and builds independent forward and backward RNNs. The input_size +of forward and backward cell must match. The initial state for both directions +is zero by default (but can be set optionally) and no intermediate states are +ever returned -- the network is fully unrolled for the given (passed in) +length(s) of the sequence(s) or completely unrolled if length(s) is not +given. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cell_fw` + +An instance of RNNCell, to be used for forward direction. +
+`cell_bw` + +An instance of RNNCell, to be used for backward direction. +
+`inputs` + +The RNN inputs. +If time_major == False (default), this must be a tensor of shape: +`[batch_size, max_time, ...]`, or a nested tuple of such elements. +If time_major == True, this must be a tensor of shape: `[max_time, +batch_size, ...]`, or a nested tuple of such elements. +
+`sequence_length` + +(optional) An int32/int64 vector, size `[batch_size]`, +containing the actual lengths for each of the sequences in the batch. If +not provided, all batch entries are assumed to be full sequences; and time +reversal is applied from time `0` to `max_time` for each sequence. +
+`initial_state_fw` + +(optional) An initial state for the forward RNN. This must +be a tensor of appropriate type and shape `[batch_size, +cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a +tuple of tensors having shapes `[batch_size, s] for s in +cell_fw.state_size`. +
+`initial_state_bw` + +(optional) Same as for `initial_state_fw`, but using the +corresponding properties of `cell_bw`. +
+`dtype` + +(optional) The data type for the initial states and expected output. +Required if initial_states are not provided or RNN states have a +heterogeneous dtype. +
+`parallel_iterations` + +(Default: 32). The number of iterations to run in +parallel. Those operations which do not have any temporal dependency and +can be run in parallel, will be. This parameter trades off time for +space. Values >> 1 use more memory but take less time, while smaller +values use less memory but computations take longer. +
+`swap_memory` + +Transparently swap the tensors produced in forward inference +but needed for back prop from GPU to CPU. This allows training RNNs which +would typically not fit on a single GPU, with very minimal (or no) +performance penalty. +
+`time_major` + +The shape format of the `inputs` and `outputs` Tensors. If true, +these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, +these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using +`time_major = True` is a bit more efficient because it avoids transposes +at the beginning and end of the RNN calculation. However, most TensorFlow +data is batch-major, so by default this function accepts input and emits +output in batch-major form. +
+`scope` + +VariableScope for the created subgraph; defaults to +"bidirectional_rnn" +
+ + + + + + + + + + + +
+A tuple (outputs, output_states) where: +outputs: A tuple (output_fw, output_bw) containing the forward and +the backward rnn output `Tensor`. +If time_major == False (default), +output_fw will be a `Tensor` shaped: +`[batch_size, max_time, cell_fw.output_size]` +and output_bw will be a `Tensor` shaped: +`[batch_size, max_time, cell_bw.output_size]`. +If time_major == True, +output_fw will be a `Tensor` shaped: +`[max_time, batch_size, cell_fw.output_size]` +and output_bw will be a `Tensor` shaped: +`[max_time, batch_size, cell_bw.output_size]`. +It returns a tuple instead of a single concatenated `Tensor`, unlike +in the `bidirectional_rnn`. If the concatenated one is preferred, +the forward and backward outputs can be concatenated as +tf.concat(outputs, 2). +output_states: A tuple (output_state_fw, output_state_bw) containing +the forward and the backward final states of bidirectional rnn. +
+ + + + + + + + + + + + +
+`TypeError` + +If `cell_fw` or `cell_bw` is not an instance of `RNNCell`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/conv1d.md b/site/en/api_docs/python/tf/compat/v1/nn/conv1d.md new file mode 100644 index 00000000000..44e8d96e2a9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/conv1d.md @@ -0,0 +1,173 @@ +description: Computes a 1-D convolution given 3-D input and filter tensors. (deprecated argument values) (deprecated argument values) + +
+ + +
+ +# tf.compat.v1.nn.conv1d + + + + + + + + + +Computes a 1-D convolution given 3-D input and filter tensors. (deprecated argument values) (deprecated argument values) + + + + + + + +Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NCHW')`. They will be removed in a future version. +Instructions for updating: +`NCHW` for data_format is deprecated, use `NCW` instead + +Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NHWC')`. They will be removed in a future version. +Instructions for updating: +`NHWC` for data_format is deprecated, use `NWC` instead + +Given an input tensor of shape + [batch, in_width, in_channels] +if data_format is "NWC", or + [batch, in_channels, in_width] +if data_format is "NCW", +and a filter / kernel tensor of shape +[filter_width, in_channels, out_channels], this op reshapes +the arguments to pass them to conv2d to perform the equivalent +convolution operation. + +Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. +For example, if `data_format` does not start with "NC", a tensor of shape + [batch, in_width, in_channels] +is reshaped to + [batch, 1, in_width, in_channels], +and the filter is reshaped to + [1, filter_width, in_channels, out_channels]. +The result is then reshaped back to + [batch, out_width, out_channels] +\(where out_width is a function of the stride and padding as in conv2d\) and +returned to the caller. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
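+#### Example:
+
+An illustrative sketch (the shapes below are assumptions chosen only to
+show the call):
+
+```python
+import tensorflow as tf
+
+# "NWC" input: batch=2, in_width=8, in_channels=3.
+x = tf.random.normal([2, 8, 3])
+# Filter: [filter_width, in_channels, out_channels].
+filters = tf.random.normal([2, 3, 4])
+
+y = tf.compat.v1.nn.conv1d(x, filters, stride=1, padding="SAME")
+print(y.shape)  # (2, 8, 4)
+```
+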
+`value` + +A 3D `Tensor`. Must be of type `float16`, `float32`, or `float64`. +
+`filters` + +A 3D `Tensor`. Must have the same type as `value`. +
+`stride` + +An int or list of `ints` that has length `1` or `3`. The number of +entries by which the filter is moved right at each step. +
+`padding` + +'SAME' or 'VALID' +
+`use_cudnn_on_gpu` + +An optional `bool`. Defaults to `True`. +
+`data_format`
+
+An optional `string` from `"NWC", "NCW"`. Defaults to `"NWC"`, in which
+the data is stored in the order of [batch, in_width, in_channels]. The
+`"NCW"` format stores data as [batch, in_channels, in_width].
+

+`name` + +A name for the operation (optional). +
+`input` + +Alias for value. +
+`dilations` + +An int or list of `ints` that has length `1` or `3` which +defaults to 1. The dilation factor for each dimension of input. If set to +k > 1, there will be k-1 skipped cells between each filter element on that +dimension. Dilations in the batch and depth dimensions must be 1. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as input. +
+ + + + + + + + + + + + +
+`ValueError` + +if `data_format` is invalid. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/conv2d.md b/site/en/api_docs/python/tf/compat/v1/nn/conv2d.md new file mode 100644 index 00000000000..f0715ec3b97 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/conv2d.md @@ -0,0 +1,170 @@ +description: Computes a 2-D convolution given 4-D input and filter tensors. + +
+ + +
+ +# tf.compat.v1.nn.conv2d + + + + + + + + + +Computes a 2-D convolution given 4-D `input` and `filter` tensors. + + + + + + + +Given an input tensor of shape `[batch, in_height, in_width, in_channels]` +and a filter / kernel tensor of shape +`[filter_height, filter_width, in_channels, out_channels]`, this op +performs the following: + +1. Flattens the filter to a 2-D matrix with shape + `[filter_height * filter_width * in_channels, output_channels]`. +2. Extracts image patches from the input tensor to form a *virtual* + tensor of shape `[batch, out_height, out_width, + filter_height * filter_width * in_channels]`. +3. For each patch, right-multiplies the filter matrix and the image patch + vector. + +In detail, with the default NHWC format, + + output[b, i, j, k] = + sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] + * filter[di, dj, q, k] + +Must have `strides[0] = strides[3] = 1`. For the most common case of the same +horizontal and vertical strides, `strides = [1, stride, stride, 1]`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
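+#### Example:
+
+A short illustrative sketch (shapes are assumptions):
+
+```python
+import tensorflow as tf
+
+# NHWC input: one 5x5 image with 3 channels.
+x = tf.random.normal([1, 5, 5, 3])
+# Kernel: [filter_height, filter_width, in_channels, out_channels].
+kernel = tf.random.normal([3, 3, 3, 8])
+
+y = tf.compat.v1.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding="SAME")
+print(y.shape)  # (1, 5, 5, 8)
+```
+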
+`input` + +A `Tensor`. Must be one of the following types: +`half`, `bfloat16`, `float32`, `float64`. +A 4-D tensor. The dimension order is interpreted according to the value +of `data_format`, see below for details. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +A 4-D tensor of shape +`[filter_height, filter_width, in_channels, out_channels]` +
+`strides` + +An int or list of `ints` that has length `1`, `2` or `4`. The +stride of the sliding window for each dimension of `input`. If a single +value is given it is replicated in the `H` and `W` dimension. By default +the `N` and `C` dimensions are set to 1. The dimension order is determined +by the value of `data_format`, see below for details. +
+`padding`
+
+Either the `string` `"SAME"` or `"VALID"` indicating the type of
+padding algorithm to use, or a list indicating the explicit paddings at
+the start and end of each dimension. When explicit padding is used and
+data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top,
+pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used
+and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0],
+[pad_top, pad_bottom], [pad_left, pad_right]]`.
+
+`use_cudnn_on_gpu` + +An optional `bool`. Defaults to `True`. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. +Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, height, width, channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, channels, height, width]. +
+`dilations`
+
+An int or list of `ints` that has length `1`, `2` or `4`,
+defaults to 1. The dilation factor for each dimension of `input`. If a
+single value is given it is replicated in the `H` and `W` dimension. By
+default the `N` and `C` dimensions are set to 1. If set to k > 1, there
+will be k-1 skipped cells between each filter element on that dimension.
+The dimension order is determined by the value of `data_format`, see above
+for details. If given as a 4-d tensor, the dilations in the batch and depth
+dimensions must be 1.
+
+`name` + +A name for the operation (optional). +
+`filters` + +Alias for filter. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/conv2d_backprop_filter.md b/site/en/api_docs/python/tf/compat/v1/nn/conv2d_backprop_filter.md new file mode 100644 index 00000000000..697919a260f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/conv2d_backprop_filter.md @@ -0,0 +1,148 @@ +description: Computes the gradients of convolution with respect to the filter. + +
+ + +
+ +# tf.compat.v1.nn.conv2d_backprop_filter + + + + + + + + + +Computes the gradients of convolution with respect to the filter. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: +`half`, `bfloat16`, `float32`, `float64`. +4-D with shape `[batch, in_height, in_width, in_channels]`. +
+`filter_sizes` + +A `Tensor` of type `int32`. +An integer vector representing the tensor shape of `filter`, +where `filter` is a 4-D +`[filter_height, filter_width, in_channels, out_channels]` tensor. +
+`out_backprop` + +A `Tensor`. Must have the same type as `input`. +4-D with shape `[batch, out_height, out_width, out_channels]`. +Gradients w.r.t. the output of the convolution. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +of the convolution. Must be in the same order as the dimension specified +with format. +
+`padding`
+
+Either the `string` `"SAME"` or `"VALID"` indicating the type of
+padding algorithm to use, or a list indicating the explicit paddings at
+the start and end of each dimension. When explicit padding is used and
+data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top,
+pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used
+and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0],
+[pad_top, pad_bottom], [pad_left, pad_right]]`.
+
+`use_cudnn_on_gpu` + +An optional `bool`. Defaults to `True`. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. +Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by +the value of `data_format`, see above for details. Dilations in the batch +and depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/conv2d_backprop_input.md b/site/en/api_docs/python/tf/compat/v1/nn/conv2d_backprop_input.md new file mode 100644 index 00000000000..2063b47784f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/conv2d_backprop_input.md @@ -0,0 +1,156 @@ +description: Computes the gradients of convolution with respect to the input. + +
+ + +
+ +# tf.compat.v1.nn.conv2d_backprop_input + + + + + + + + + +Computes the gradients of convolution with respect to the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_sizes` + +A `Tensor` of type `int32`. +An integer vector representing the shape of `input`, +where `input` is a 4-D `[batch, height, width, channels]` tensor. +
+`filter` + +A `Tensor`. Must be one of the following types: +`half`, `bfloat16`, `float32`, `float64`. +4-D with shape +`[filter_height, filter_width, in_channels, out_channels]`. +
+`out_backprop` + +A `Tensor`. Must have the same type as `filter`. +4-D with shape `[batch, out_height, out_width, out_channels]`. +Gradients w.r.t. the output of the convolution. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +of the convolution. Must be in the same order as the dimension specified +with format. +
+`padding`
+
+Either the `string` `"SAME"` or `"VALID"` indicating the type of
+padding algorithm to use, or a list indicating the explicit paddings at
+the start and end of each dimension. When explicit padding is used and
+data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top,
+pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used
+and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0],
+[pad_top, pad_bottom], [pad_left, pad_right]]`.
+
+`use_cudnn_on_gpu` + +An optional `bool`. Defaults to `True`. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. +Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by +the value of `data_format`, see above for details. Dilations in the batch +and depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+`filters` + +Alias for filter. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `filter`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/conv2d_transpose.md b/site/en/api_docs/python/tf/compat/v1/nn/conv2d_transpose.md new file mode 100644 index 00000000000..8a42139cf21 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/conv2d_transpose.md @@ -0,0 +1,175 @@ +description: The transpose of conv2d. + +
+ + +
+ +# tf.compat.v1.nn.conv2d_transpose + + + + + + + + + +The transpose of `conv2d`. + + + + + + + +This operation is sometimes called "deconvolution" after +(Zeiler et al., 2010), but is really the transpose (gradient) of `conv2d` +rather than an actual deconvolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
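+#### Example:
+
+An illustrative sketch of upsampling with a stride of 2 (shapes are
+assumptions):
+
+```python
+import tensorflow as tf
+
+# NHWC input with 8 channels.
+value = tf.random.normal([1, 4, 4, 8])
+# Filter: [height, width, output_channels, in_channels]; in_channels must
+# match the input's channel count.
+kernel = tf.random.normal([3, 3, 3, 8])
+
+y = tf.compat.v1.nn.conv2d_transpose(
+    value, kernel, output_shape=[1, 8, 8, 3],
+    strides=[1, 2, 2, 1], padding="SAME")
+print(y.shape)  # (1, 8, 8, 3)
+```
+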
+`value` + +A 4-D `Tensor` of type `float` and shape +`[batch, height, width, in_channels]` for `NHWC` data format or +`[batch, in_channels, height, width]` for `NCHW` data format. +
+`filter` + +A 4-D `Tensor` with the same type as `value` and shape +`[height, width, output_channels, in_channels]`. `filter`'s +`in_channels` dimension must match that of `value`. +
+`output_shape` + +A 1-D `Tensor` representing the output shape of the +deconvolution op. +
+`strides`
+
+An int or list of `ints` that has length `1`, `2` or `4`. The
+stride of the sliding window for each dimension of `input`. If a single
+value is given it is replicated in the `H` and `W` dimension. By default
+the `N` and `C` dimensions are set to 1. The dimension order is determined
+by the value of `data_format`, see below for details.
+
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +See the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. 'NHWC' and 'NCHW' are supported. +
+`name` + +Optional name for the returned tensor. +
+`input` + +Alias for value. +
+`filters` + +Alias for filter. +
+`dilations`
+
+An int or list of `ints` that has length `1`, `2` or `4`,
+defaults to 1. The dilation factor for each dimension of `input`. If a
+single value is given it is replicated in the `H` and `W` dimension. By
+default the `N` and `C` dimensions are set to 1. If set to k > 1, there
+will be k-1 skipped cells between each filter element on that dimension.
+The dimension order is determined by the value of `data_format`, see above
+for details. If given as a 4-d tensor, the dilations in the batch and depth
+dimensions must be 1.
+
+ + + + + + + + + + + +
+A `Tensor` with the same type as `value`. +
+ + + + + + + + + + + + +
+`ValueError` + +If input/output depth does not match `filter`'s shape, or if +padding is other than `'VALID'` or `'SAME'`. +
+ + + +#### References: + +Deconvolutional Networks: + [Zeiler et al., 2010] + (https://ieeexplore.ieee.org/abstract/document/5539957) + ([pdf] + (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/conv3d.md b/site/en/api_docs/python/tf/compat/v1/nn/conv3d.md new file mode 100644 index 00000000000..425bf2cc363 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/conv3d.md @@ -0,0 +1,128 @@ +description: Computes a 3-D convolution given 5-D input and filter tensors. + +
+ + +
+ +# tf.compat.v1.nn.conv3d + + + + + + + + + +Computes a 3-D convolution given 5-D `input` and `filter` tensors. + + + + + + + +In signal processing, cross-correlation is a measure of similarity of +two waveforms as a function of a time-lag applied to one of them. This +is also known as a sliding dot product or sliding inner-product. + +Our Conv3D implements a form of cross-correlation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
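+#### Example:
+
+An illustrative sketch (shapes are assumptions):
+
+```python
+import tensorflow as tf
+
+# NDHWC input: one 8x8x8 volume with a single channel.
+x = tf.random.normal([1, 8, 8, 8, 1])
+# Filter: [filter_depth, filter_height, filter_width, in_channels, out_channels].
+kernel = tf.random.normal([3, 3, 3, 1, 4])
+
+y = tf.compat.v1.nn.conv3d(x, kernel, strides=[1, 1, 1, 1, 1], padding="SAME")
+print(y.shape)  # (1, 8, 8, 8, 4)
+```
+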
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +Shape `[batch, in_depth, in_height, in_width, in_channels]`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +Shape `[filter_depth, filter_height, filter_width, in_channels, +out_channels]`. `in_channels` must match between `input` and `filter`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. +1-D tensor of length 5. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by the +value of `data_format`, see above for details. Dilations in the batch and +depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/conv3d_backprop_filter.md b/site/en/api_docs/python/tf/compat/v1/nn/conv3d_backprop_filter.md new file mode 100644 index 00000000000..4fd6908c2a8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/conv3d_backprop_filter.md @@ -0,0 +1,140 @@ +description: Computes the gradients of 3-D convolution with respect to the filter. + +
+ + +
+ +# tf.compat.v1.nn.conv3d_backprop_filter + + + + + + + + + +Computes the gradients of 3-D convolution with respect to the filter. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +Shape `[batch, depth, rows, cols, in_channels]`. +
+`filter_sizes` + +A `Tensor` of type `int32`. +An integer vector representing the tensor shape of `filter`, +where `filter` is a 5-D +`[filter_depth, filter_height, filter_width, in_channels, out_channels]` +tensor. +
+`out_backprop` + +A `Tensor`. Must have the same type as `input`. +Backprop signal of shape `[batch, out_depth, out_rows, out_cols, +out_channels]`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. +1-D tensor of length 5. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by the +value of `data_format`, see above for details. Dilations in the batch and +depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/conv3d_transpose.md b/site/en/api_docs/python/tf/compat/v1/nn/conv3d_transpose.md new file mode 100644 index 00000000000..36db8c5c303 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/conv3d_transpose.md @@ -0,0 +1,172 @@ +description: The transpose of conv3d. + +
+ + +
+ +# tf.compat.v1.nn.conv3d_transpose + + + + + + + + + +The transpose of `conv3d`. + + + + + + + +This operation is sometimes called "deconvolution" after +(Zeiler et al., 2010), but is really the transpose (gradient) of `conv3d` +rather than an actual deconvolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
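+#### Example:
+
+An illustrative sketch of volumetric upsampling with a stride of 2 (shapes
+are assumptions):
+
+```python
+import tensorflow as tf
+
+# NDHWC input with 8 channels.
+value = tf.random.normal([1, 4, 4, 4, 8])
+# Filter: [depth, height, width, output_channels, in_channels].
+kernel = tf.random.normal([3, 3, 3, 3, 8])
+
+y = tf.compat.v1.nn.conv3d_transpose(
+    value, kernel, output_shape=[1, 8, 8, 8, 3],
+    strides=[1, 2, 2, 2, 1], padding="SAME")
+print(y.shape)  # (1, 8, 8, 8, 3)
+```
+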
+`value` + +A 5-D `Tensor` of type `float` and shape +`[batch, depth, height, width, in_channels]`. +
+`filter` + +A 5-D `Tensor` with the same type as `value` and shape +`[depth, height, width, output_channels, in_channels]`. `filter`'s +`in_channels` dimension must match that of `value`. +
+`output_shape` + +A 1-D `Tensor` representing the output shape of the +deconvolution op. +
+`strides` + +A list of ints. The stride of the sliding window for each +dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +See the "returns" section of tf.nn.convolution for details. +
+`data_format`
+
+A string, either `'NDHWC'` or `'NCDHW'` specifying the layout
+of the input and output tensors. Defaults to `'NDHWC'`.
+
+`name` + +Optional name for the returned tensor. +
+`input` + +Alias of value. +
+`filters` + +Alias of filter. +
+`dilations`
+
+An int or list of `ints` that has length `1`, `3` or `5`,
+defaults to 1. The dilation factor for each dimension of `input`. If a
+single value is given it is replicated in the `D`, `H` and `W` dimension.
+By default the `N` and `C` dimensions are set to 1. If set to k > 1, there
+will be k-1 skipped cells between each filter element on that dimension.
+The dimension order is determined by the value of `data_format`, see above
+for details. If given as a 5-d tensor, the dilations in the batch and depth
+dimensions must be 1.
+
+ + + + + + + + + + + +
+A `Tensor` with the same type as `value`. +
+ + + + + + + + + + + + +
+`ValueError` + +If input/output depth does not match `filter`'s shape, or if +padding is other than `'VALID'` or `'SAME'`. +
+ + + +#### References: + +Deconvolutional Networks: + [Zeiler et al., 2010] + (https://ieeexplore.ieee.org/abstract/document/5539957) + ([pdf] + (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/convolution.md b/site/en/api_docs/python/tf/compat/v1/nn/convolution.md new file mode 100644 index 00000000000..8f03565ba0b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/convolution.md @@ -0,0 +1,230 @@ +description: Computes sums of N-D convolutions (actually cross-correlation). + +
+ + +
+ +# tf.compat.v1.nn.convolution + + + + + + + + + +Computes sums of N-D convolutions (actually cross-correlation). + + + + + + + +This also supports either output striding via the optional `strides` parameter +or atrous convolution (also known as convolution with holes or dilated +convolution, based on the French word "trous" meaning holes in English) via +the optional `dilation_rate` parameter. Currently, however, output striding +is not supported for atrous convolutions. + +Specifically, in the case that `data_format` does not start with "NC", given +a rank (N+2) `input` Tensor of shape + + [num_batches, + input_spatial_shape[0], + ..., + input_spatial_shape[N-1], + num_input_channels], + +a rank (N+2) `filter` Tensor of shape + + [spatial_filter_shape[0], + ..., + spatial_filter_shape[N-1], + num_input_channels, + num_output_channels], + +an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N) +specifying the filter upsampling/input downsampling rate, and an optional list +of N `strides` (defaulting [1]*N), this computes for each N-D spatial output +position (x[0], ..., x[N-1]): + +``` + output[b, x[0], ..., x[N-1], k] = + sum_{z[0], ..., z[N-1], q} + filter[z[0], ..., z[N-1], q, k] * + padded_input[b, + x[0]*strides[0] + dilation_rate[0]*z[0], + ..., + x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], + q] +``` +where b is the index into the batch, k is the output channel number, q is the +input channel number, and z is the N-D spatial offset within the filter. Here, +`padded_input` is obtained by zero padding the input using an effective +spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and +output striding `strides` as described in the +[comment here](https://tensorflow.org/api_guides/python/nn#Convolution). + +In the case that `data_format` does start with `"NC"`, the `input` and output +(but not the `filter`) are simply transposed as follows: + + convolution(input, data_format, **kwargs) = + tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), + **kwargs), + [0, N+1] + range(1, N+1)) + +It is required that 1 <= N <= 3. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
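+#### Example:
+
+An illustrative sketch of the 2-D (N=2) case with atrous convolution
+(shapes and rates are assumptions):
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([1, 10, 10, 3])      # [batch] + spatial_shape + [in_channels]
+kernel = tf.random.normal([3, 3, 3, 16])  # spatial_filter_shape + [in, out]
+
+# With dilation_rate > 1, all strides must be 1 (the default).
+y = tf.compat.v1.nn.convolution(x, kernel, padding="SAME",
+                                dilation_rate=[2, 2])
+print(y.shape)  # (1, 10, 10, 16)
+```
+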
+`input` + +An (N+2)-D `Tensor` of type `T`, of shape +`[batch_size] + input_spatial_shape + [in_channels]` if data_format does +not start with "NC" (default), or +`[batch_size, in_channels] + input_spatial_shape` if data_format starts +with "NC". +
+`filter` + +An (N+2)-D `Tensor` with the same type as `input` and shape +`spatial_filter_shape + [in_channels, out_channels]`. +
+`padding` + +A string, either `"VALID"` or `"SAME"`. The padding algorithm. +
+`strides` + +Optional. Sequence of N ints >= 1. Specifies the output stride. +Defaults to [1]*N. If any value of strides is > 1, then all values of +dilation_rate must be 1. +
+`dilation_rate` + +Optional. Sequence of N ints >= 1. Specifies the filter +upsampling/input downsampling rate. In the literature, the same parameter +is sometimes called `input stride` or `dilation`. The effective filter +size used for the convolution will be `spatial_filter_shape + +(spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting +(dilation_rate[i]-1) zeros between consecutive elements of the original +filter in each spatial dimension i. If any value of dilation_rate is > 1, +then all values of strides must be 1. +
+`name` + +Optional name for the returned tensor. +
+`data_format` + +A string or None. Specifies whether the channel dimension of +the `input` and output is the last dimension (default, or if `data_format` +does not start with "NC"), or the second dimension (if `data_format` +starts with "NC"). For N=1, the valid values are "NWC" (default) and +"NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". +For N=3, the valid values are "NDHWC" (default) and "NCDHW". +
+`filters` + +Alias of filter. +
+`dilations` + +Alias of dilation_rate. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `input` of shape + +`[batch_size] + output_spatial_shape + [out_channels]` + +if data_format is None or does not start with "NC", or + +`[batch_size, out_channels] + output_spatial_shape` + +if data_format starts with "NC", +where `output_spatial_shape` depends on the value of `padding`. + +If padding == "SAME": +output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i]) + +If padding == "VALID": +output_spatial_shape[i] = +ceil((input_spatial_shape[i] - +(spatial_filter_shape[i]-1) * dilation_rate[i]) +/ strides[i]). +
+ + + + + + + + + + + + +
+`ValueError` + +If input/output depth does not match `filter` shape, if padding +is other than `"VALID"` or `"SAME"`, or if data_format is invalid. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/crelu.md b/site/en/api_docs/python/tf/compat/v1/nn/crelu.md new file mode 100644 index 00000000000..ae4f1282135 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/crelu.md @@ -0,0 +1,93 @@ +description: Computes Concatenated ReLU. + +
+ + +
+ +# tf.compat.v1.nn.crelu + + + + + + + + + +Computes Concatenated ReLU. + + + + + + + +Concatenates a ReLU which selects only the positive part of the activation +with a ReLU which selects only the *negative* part of the activation. +Note that as a result this non-linearity doubles the depth of the activations. +Source: [Understanding and Improving Convolutional Neural Networks via +Concatenated Rectified Linear Units. W. Shang, et +al.](https://arxiv.org/abs/1603.05201) + + + + + + + + + + + + + + + + +
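+#### Example:
+
+A small worked sketch (the input values are illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[-1.0, 2.0, -3.0]])
+# CReLU concatenates relu(x) with relu(-x) along the last axis, so the
+# channel dimension doubles.
+y = tf.compat.v1.nn.crelu(x)
+print(y.numpy())  # [[0. 2. 0. 1. 0. 3.]]
+```
+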
+`features` + +A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, +`int16`, or `int8`. +
+`name` + +A name for the operation (optional). +
+`axis` + +The axis that the output values are concatenated along. Default is -1. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `features`. +
+ + + +#### References: + +Understanding and Improving Convolutional Neural Networks via Concatenated +Rectified Linear Units: + [Shang et al., 2016](http://proceedings.mlr.press/v48/shang16) + ([pdf](http://proceedings.mlr.press/v48/shang16.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/ctc_beam_search_decoder.md b/site/en/api_docs/python/tf/compat/v1/nn/ctc_beam_search_decoder.md new file mode 100644 index 00000000000..c78c650d898 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/ctc_beam_search_decoder.md @@ -0,0 +1,130 @@ +description: Performs beam search decoding on the logits given in input. + +
+ + +
+ +# tf.compat.v1.nn.ctc_beam_search_decoder + + + + + + + + + +Performs beam search decoding on the logits given in input. + + + + + + + +**Note** The `ctc_greedy_decoder` is a special case of the +`ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but +that decoder is faster for this special case). + +If `merge_repeated` is `True`, merge repeated classes in the output beams. +This means that if consecutive entries in a beam are the same, +only the first of these is emitted. That is, when the sequence is +`A B B * B * B` (where '*' is the blank label), the return value is: + + * `A B` if `merge_repeated = True`. + * `A B B B` if `merge_repeated = False`. + + + + + + + + + + + + + + + + + + + + + + +
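+#### Example:
+
+An illustrative sketch (shapes, beam width and number of paths are
+assumptions):
+
+```python
+import tensorflow as tf
+
+# Logits: [max_time, batch_size, num_classes]; the last class is the blank.
+logits = tf.random.normal([5, 1, 4])
+seq_len = tf.constant([5], dtype=tf.int32)
+
+decoded, log_probs = tf.compat.v1.nn.ctc_beam_search_decoder(
+    logits, seq_len, beam_width=10, top_paths=2)
+print(decoded[0].values)  # label ids of the best path (a SparseTensor)
+print(log_probs.shape)    # (1, 2): [batch_size, top_paths]
+```
+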
+`inputs` + +3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`. +The logits. +
+`sequence_length` + +1-D `int32` vector containing sequence lengths, having size +`[batch_size]`. +
+`beam_width` + +An int scalar >= 0 (beam search beam width). +
+`top_paths` + +An int scalar >= 0, <= beam_width (controls output size). +
+`merge_repeated` + +Boolean. Default: True. +
+ + + + + + + + + + + + + + + + + +
+A tuple `(decoded, log_probabilities)` where +
+`decoded` + +A list of length top_paths, where `decoded[j]` +is a `SparseTensor` containing the decoded outputs: + +`decoded[j].indices`: Indices matrix `(total_decoded_outputs[j] x 2)` +The rows store: [batch, time]. + +`decoded[j].values`: Values vector, size `(total_decoded_outputs[j])`. +The vector stores the decoded classes for beam j. + +`decoded[j].dense_shape`: Shape vector, size `(2)`. +The shape values are: `[batch_size, max_decoded_length[j]]`. +
+`log_probability` + +A `float` matrix `(batch_size x top_paths)` containing +sequence log-probabilities. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/ctc_loss.md b/site/en/api_docs/python/tf/compat/v1/nn/ctc_loss.md new file mode 100644 index 00000000000..277eeb307f0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/ctc_loss.md @@ -0,0 +1,222 @@ +description: Computes the CTC (Connectionist Temporal Classification) Loss. + +
+ + +
+ +# tf.compat.v1.nn.ctc_loss + + + + + + + + + +Computes the CTC (Connectionist Temporal Classification) Loss. + + + + + + + +This op implements the CTC loss as presented in (Graves et al., 2016). + +#### Input requirements: + + + +``` +sequence_length(b) <= time for all b + +max(labels.indices(labels.indices[:, 1] == b, 2)) + <= sequence_length(b) for all b. +``` + +#### Notes: + + + +This class performs the softmax operation for you, so inputs should +be e.g. linear projections of outputs by an LSTM. + +The `inputs` Tensor's innermost dimension size, `num_classes`, represents +`num_labels + 1` classes, where num_labels is the number of true labels, and +the largest value `(num_classes - 1)` is reserved for the blank label. + +For example, for a vocabulary containing 3 labels `[a, b, c]`, +`num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`. + +Regarding the arguments `preprocess_collapse_repeated` and +`ctc_merge_repeated`: + +If `preprocess_collapse_repeated` is True, then a preprocessing step runs +before loss calculation, wherein repeated labels passed to the loss +are merged into single labels. This is useful if the training labels come +from, e.g., forced alignments and therefore have unnecessary repetitions. + +If `ctc_merge_repeated` is set False, then deep within the CTC calculation, +repeated non-blank labels will not be merged and are interpreted +as individual labels. This is a simplified (non-standard) version of CTC. + +Here is a table of the (roughly) expected first order behavior: + +* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True` + + Classical CTC behavior: Outputs true repeated classes with blanks in + between, and can also output repeated classes with no blanks in + between that need to be collapsed by the decoder. + +* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False` + + Never learns to output repeated classes, as they are collapsed + in the input labels before training. + +* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False` + + Outputs repeated classes with blanks in between, but generally does not + require the decoder to collapse/merge repeated classes. + +* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True` + + Untested. Very likely will not learn to output repeated classes. + +The `ignore_longer_outputs_than_inputs` option allows to specify the behavior +of the CTCLoss when dealing with sequences that have longer outputs than +inputs. If true, the CTCLoss will simply return zero gradient for those +items, otherwise an InvalidArgument error is returned, stopping training. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
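+#### Example:
+
+An illustrative sketch with a single batch element (all shapes and label
+values are assumptions):
+
+```python
+import tensorflow as tf
+
+# Time-major logits (the default): [max_time, batch_size, num_classes],
+# where num_classes = num_labels + 1 and the blank is the last class.
+logits = tf.random.normal([5, 1, 4])
+seq_len = tf.constant([5], dtype=tf.int32)
+
+# Labels as an int32 SparseTensor: batch 0 has the label sequence [0, 1].
+labels = tf.SparseTensor(indices=[[0, 0], [0, 1]],
+                         values=tf.constant([0, 1], dtype=tf.int32),
+                         dense_shape=[1, 2])
+
+loss = tf.compat.v1.nn.ctc_loss(labels, logits, seq_len)
+print(loss.shape)  # (1,): one negative log probability per batch element
+```
+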
+`labels` + +An `int32` `SparseTensor`. +`labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id +for (batch b, time t). `labels.values[i]` must take on values in `[0, +num_labels)`. See `core/ops/ctc_ops.cc` for more details. +
+`inputs` + +3-D `float` `Tensor`. +If time_major == False, this will be a `Tensor` shaped: `[batch_size, +max_time, num_classes]`. +If time_major == True (default), this will be a `Tensor` shaped: +`[max_time, batch_size, num_classes]`. The logits. +
+`sequence_length` + +1-D `int32` vector, size `[batch_size]`. The sequence +lengths. +
+`preprocess_collapse_repeated` + +Boolean. Default: False. If True, repeated +labels are collapsed prior to the CTC calculation. +
+`ctc_merge_repeated` + +Boolean. Default: True. +
+`ignore_longer_outputs_than_inputs` + +Boolean. Default: False. If True, +sequences with longer outputs than inputs will be ignored. +
+`time_major`
+
+The shape format of the `inputs` Tensors. If True, these
+`Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False,
+these `Tensors` must be shaped `[batch_size, max_time, num_classes]`.
+Using `time_major = True` (default) is a bit more efficient because it
+avoids transposes at the beginning of the ctc_loss calculation. However,
+most TensorFlow data is batch-major, so this function also accepts
+inputs in batch-major form.
+
+`logits` + +Alias for inputs. +
+ + + + + + + + + + + +
+A 1-D `float` `Tensor`, size `[batch]`, containing the negative log +probabilities. +
+ + + + + + + + + + + + +
+`TypeError` + +if labels is not a `SparseTensor`. +
+ + + +#### References: + +Connectionist Temporal Classification - Labeling Unsegmented Sequence Data +with Recurrent Neural Networks: + [Graves et al., 2016](https://dl.acm.org/citation.cfm?id=1143891) + ([pdf](http://www.cs.toronto.edu/~graves/icml_2006.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/ctc_loss_v2.md b/site/en/api_docs/python/tf/compat/v1/nn/ctc_loss_v2.md new file mode 100644 index 00000000000..746b62d3126 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/ctc_loss_v2.md @@ -0,0 +1,150 @@ +description: Computes CTC (Connectionist Temporal Classification) loss. + +
+ + +
+ +# tf.compat.v1.nn.ctc_loss_v2 + + + + + + + + + +Computes CTC (Connectionist Temporal Classification) loss. + + + + + + + +This op implements the CTC loss as presented in (Graves et al., 2016). + +#### Notes: + + + +- Same as the "Classic CTC" in TensorFlow 1.x's tf.compat.v1.nn.ctc_loss + setting of preprocess_collapse_repeated=False, ctc_merge_repeated=True +- Labels may be supplied as either a dense, zero-padded tensor with a + vector of label sequence lengths OR as a SparseTensor. +- On TPU and GPU: Only dense padded labels are supported. +- On CPU: Caller may use SparseTensor or dense padded labels but calling with + a SparseTensor will be significantly faster. +- Default blank label is 0 rather num_classes - 1, unless overridden by + blank_index. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
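+#### Example:
+
+An illustrative sketch using dense, zero-padded labels with the default
+blank index of 0 (shapes and label values are assumptions):
+
+```python
+import tensorflow as tf
+
+# Labels use ids 1..4; 0 is reserved for the blank / padding here.
+labels = tf.constant([[1, 2, 3, 0], [2, 4, 0, 0]], dtype=tf.int32)
+# Time-major logits (the default): [frames, batch_size, num_labels].
+logits = tf.random.normal([10, 2, 5])
+label_length = tf.constant([3, 2], dtype=tf.int32)
+logit_length = tf.constant([10, 10], dtype=tf.int32)
+
+loss = tf.compat.v1.nn.ctc_loss_v2(labels, logits, label_length, logit_length)
+print(loss.shape)  # (2,)
+```
+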
+`labels` + +tensor of shape [batch_size, max_label_seq_length] or SparseTensor +
+`logits` + +tensor of shape [frames, batch_size, num_labels], if +logits_time_major == False, shape is [batch_size, frames, num_labels]. +
+`label_length` + +tensor of shape [batch_size], None if labels is SparseTensor +Length of reference label sequence in labels. +
+`logit_length` + +tensor of shape [batch_size] Length of input sequence in +logits. +
+`logits_time_major` + +(optional) If True (default), logits is shaped [time, +batch, logits]. If False, shape is [batch, time, logits] +
+`unique` + +(optional) Unique label indices as computed by +ctc_unique_labels(labels). If supplied, enable a faster, memory efficient +implementation on TPU. +
+`blank_index` + +(optional) Set the class index to use for the blank label. +Negative values will start from num_classes, ie, -1 will reproduce the +ctc_loss behavior of using num_classes - 1 for the blank symbol. There is +some memory/performance overhead to switching from the default of 0 as an +additional shifted copy of the logits may be created. +
+`name` + +A name for this `Op`. Defaults to "ctc_loss_dense". +
+ + + + + + + + + + + + +
+`loss` + +tensor of shape [batch_size], negative log probabilities. +
+ + + +#### References: + +Connectionist Temporal Classification - Labeling Unsegmented Sequence Data +with Recurrent Neural Networks: + [Graves et al., 2016](https://dl.acm.org/citation.cfm?id=1143891) + ([pdf](http://www.cs.toronto.edu/~graves/icml_2006.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/depthwise_conv2d.md b/site/en/api_docs/python/tf/compat/v1/nn/depthwise_conv2d.md new file mode 100644 index 00000000000..df3945175d3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/depthwise_conv2d.md @@ -0,0 +1,140 @@ +description: Depthwise 2-D convolution. + +
+ + +
+ +# tf.compat.v1.nn.depthwise_conv2d + + + + + + + + + +Depthwise 2-D convolution. + + + + + + + +Given a 4D input tensor ('NHWC' or 'NCHW' data formats) +and a filter tensor of shape +`[filter_height, filter_width, in_channels, channel_multiplier]` +containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` +applies a different filter to each input channel (expanding from 1 channel +to `channel_multiplier` channels for each), then concatenates the results +together. The output has `in_channels * channel_multiplier` channels. + +In detail, with the default NHWC format, + + output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} + filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, + strides[2] * j + rate[1] * dj, k] + +Must have `strides[0] = strides[3] = 1`. For the most common case of the +same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. +If any value in `rate` is greater than 1, we perform atrous depthwise +convolution, in which case all values in the `strides` tensor must be equal +to 1. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
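+#### Example:
+
+An illustrative sketch (shapes are assumptions):
+
+```python
+import tensorflow as tf
+
+# NHWC input with 3 channels.
+x = tf.random.normal([1, 8, 8, 3])
+# Filter: [filter_height, filter_width, in_channels, channel_multiplier],
+# so the output has 3 * 2 = 6 channels.
+kernel = tf.random.normal([3, 3, 3, 2])
+
+y = tf.compat.v1.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1],
+                                     padding="SAME")
+print(y.shape)  # (1, 8, 8, 6)
+```
+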
+`input` + +4-D with shape according to `data_format`. +
+`filter` + +4-D with shape +`[filter_height, filter_width, in_channels, channel_multiplier]`. +
+`strides` + +1-D of size 4. The stride of the sliding window for each +dimension of `input`. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +See the "returns" section of tf.nn.convolution for details. +
+`rate` + +1-D of size 2. The dilation rate in which we sample input values +across the `height` and `width` dimensions in atrous convolution. If it is +greater than 1, then all values of strides must be 1. +
+`name` + +A name for this operation (optional). +
+`data_format` + +The data format for input. Either "NHWC" (default) or "NCHW". +
+`dilations` + +Alias of rate. +
+ + + + + + + + + + + +
+A 4-D `Tensor` with shape according to `data_format`. E.g., for +"NHWC" format, shape is +`[batch, out_height, out_width, in_channels * channel_multiplier].` +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/depthwise_conv2d_native.md b/site/en/api_docs/python/tf/compat/v1/nn/depthwise_conv2d_native.md new file mode 100644 index 00000000000..f0568f2f3b5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/depthwise_conv2d_native.md @@ -0,0 +1,133 @@ +description: Computes a 2-D depthwise convolution given 4-D input and filter tensors. + +
+ + +
+ +# tf.compat.v1.nn.depthwise_conv2d_native + + + + + + + + + +Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors. + + + + + + + +Given an input tensor of shape `[batch, in_height, in_width, in_channels]` +and a filter / kernel tensor of shape +`[filter_height, filter_width, in_channels, channel_multiplier]`, containing +`in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies +a different filter to each input channel (expanding from 1 channel to +`channel_multiplier` channels for each), then concatenates the results +together. Thus, the output has `in_channels * channel_multiplier` channels. + +``` +for k in 0..in_channels-1 + for q in 0..channel_multiplier-1 + output[b, i, j, k * channel_multiplier + q] = + sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] * + filter[di, dj, k, q] +``` + +Must have `strides[0] = strides[3] = 1`. For the most common case of the same +horizontal and vertices strides, `strides = [1, stride, stride, 1]`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
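+#### Example:
+
+An illustrative sketch of the low-level op (shapes are assumptions):
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([1, 8, 8, 3])
+# Filter: [filter_height, filter_width, in_channels, channel_multiplier].
+kernel = tf.random.normal([3, 3, 3, 2])
+
+y = tf.compat.v1.nn.depthwise_conv2d_native(x, kernel, strides=[1, 1, 1, 1],
+                                            padding="SAME")
+print(y.shape)  # (1, 8, 8, 6): in_channels * channel_multiplier = 6
+```
+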
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +
+`strides` + +A list of `ints`. +1-D of length 4. The stride of the sliding window for each dimension +of `input`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, height, width, channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, channels, height, width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each filter +element on that dimension. The dimension order is determined by the value of +`data_format`, see above for details. Dilations in the batch and depth +dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/dilation2d.md b/site/en/api_docs/python/tf/compat/v1/nn/dilation2d.md new file mode 100644 index 00000000000..b1f279a087c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/dilation2d.md @@ -0,0 +1,130 @@ +description: Computes the grayscale dilation of 4-D input and 3-D filter tensors. + +
+ + +
+ +# tf.compat.v1.nn.dilation2d + + + + + + + + + +Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors. + + + + + + + +The `input` tensor has shape `[batch, in_height, in_width, depth]` and the +`filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each +input channel is processed independently of the others with its own structuring +function. The `output` tensor has shape +`[batch, out_height, out_width, depth]`. The spatial dimensions of the output +tensor depend on the `padding` algorithm. We currently only support the default +"NHWC" `data_format`. + +In detail, the grayscale morphological 2-D dilation is the max-sum correlation +(for consistency with `conv2d`, we use unmirrored filters): + + output[b, y, x, c] = + max_{dy, dx} input[b, + strides[1] * y + rates[1] * dy, + strides[2] * x + rates[2] * dx, + c] + + filter[dy, dx, c] + +Max-pooling is a special case when the filter has size equal to the pooling +kernel size and contains all zeros. + +Note on duality: The dilation of `input` by the `filter` is equal to the +negation of the erosion of `-input` by the reflected `filter`. + + + + + + + + + + + + + + + + + + + + + + + + + +
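+#### Example:
+
+An illustrative sketch using an all-zero structuring element, which (as
+noted above) makes the op behave like max pooling with stride 1 (shapes
+are assumptions):
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([1, 6, 6, 1])
+# Structuring element: [filter_height, filter_width, depth].
+se = tf.zeros([3, 3, 1])
+
+y = tf.compat.v1.nn.dilation2d(x, se, strides=[1, 1, 1, 1],
+                               rates=[1, 1, 1, 1], padding="SAME")
+print(y.shape)  # (1, 6, 6, 1)
+```
+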
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +4-D with shape `[batch, in_height, in_width, depth]`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +3-D with shape `[filter_height, filter_width, depth]`. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the input +tensor. Must be: `[1, stride_height, stride_width, 1]`. +
+`rates` + +A list of `ints` that has length `>= 4`. +The input stride for atrous morphological dilation. Must be: +`[1, rate_height, rate_width, 1]`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/dropout.md b/site/en/api_docs/python/tf/compat/v1/nn/dropout.md new file mode 100644 index 00000000000..eed80bdb270 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/dropout.md @@ -0,0 +1,135 @@ +description: Computes dropout. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.nn.dropout + + + + + + + + + +Computes dropout. (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version. +Instructions for updating: +Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`. + +For each element of `x`, with probability `rate`, outputs `0`, and otherwise +scales up the input by `1 / (1-rate)`. The scaling is such that the expected +sum is unchanged. + +By default, each element is kept or dropped independently. If `noise_shape` +is specified, it must be +[broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) +to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` +will make independent decisions. For example, if `shape(x) = [k, l, m, n]` +and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be +kept independently and each row and column will be kept or not kept together. + + + + + + + + + + + + + + + + + + + + + + + + + +
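+#### Example:
+
+A small illustrative sketch (the input and rate are assumptions):
+
+```python
+import tensorflow as tf
+
+x = tf.ones([2, 4])
+# With rate=0.5, kept elements are scaled by 1 / (1 - 0.5) = 2 so the
+# expected sum is unchanged.
+y = tf.compat.v1.nn.dropout(x, rate=0.5, seed=1)
+print(y.numpy())  # each element is either 0.0 or 2.0
+```
+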
+`x` + +A floating point tensor. +
+`keep_prob` + +(deprecated) A deprecated alias for `(1-rate)`. +
+`noise_shape` + +A 1-D `Tensor` of type `int32`, representing the +shape for randomly generated keep/drop flags. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.random.set_seed for behavior. +
+`name` + +A name for this operation (optional). +
+`rate` + +A scalar `Tensor` with the same type as `x`. The probability that each +element of `x` is discarded. +
+ + + + + + + + + + + +
+A Tensor of the same shape of `x`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `rate` is not in `[0, 1)` or if `x` is not a floating +point tensor. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/dynamic_rnn.md b/site/en/api_docs/python/tf/compat/v1/nn/dynamic_rnn.md new file mode 100644 index 00000000000..c83bd025476 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/dynamic_rnn.md @@ -0,0 +1,248 @@ +description: Creates a recurrent neural network specified by RNNCell cell. (deprecated) + +
+ + +
+ +# tf.compat.v1.nn.dynamic_rnn + + + + + + + + + +Creates a recurrent neural network specified by RNNCell `cell`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use `keras.layers.RNN(cell)`, which is equivalent to this API + +Performs fully dynamic unrolling of `inputs`. + +#### Example: + + + +```python +# create a BasicRNNCell +rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size) + +# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size] + +# defining initial state +initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32) + +# 'state' is a tensor of shape [batch_size, cell_state_size] +outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data, + initial_state=initial_state, + dtype=tf.float32) +``` + +```python +# create 2 LSTMCells +rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]] + +# create a RNN cell composed sequentially of a number of RNNCells +multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers) + +# 'outputs' is a tensor of shape [batch_size, max_time, 256] +# 'state' is a N-tuple where N is the number of LSTMCells containing a +# tf.nn.rnn_cell.LSTMStateTuple for each cell +outputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell, + inputs=data, + dtype=tf.float32) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +An instance of RNNCell. +
+`inputs` + +The RNN inputs. +If `time_major == False` (default), this must be a `Tensor` of shape: +`[batch_size, max_time, ...]`, or a nested tuple of such elements. +If `time_major == True`, this must be a `Tensor` of shape: `[max_time, +batch_size, ...]`, or a nested tuple of such elements. This may also be +a (possibly nested) tuple of Tensors satisfying this property. The +first two dimensions must match across all the inputs, but otherwise the +ranks and other shape components may differ. In this case, input to +`cell` at each time-step will replicate the structure of these tuples, +except for the time dimension (from which the time is taken). The input +to `cell` at each time step will be a `Tensor` or (possibly nested) +tuple of Tensors each with dimensions `[batch_size, ...]`. +
+`sequence_length` + +(optional) An int32/int64 vector sized `[batch_size]`. Used +to copy-through state and zero-out outputs when past a batch element's +sequence length. This parameter enables users to extract the last valid +state and properly padded outputs, so it is provided for correctness. +
+`initial_state` + +(optional) An initial state for the RNN. If `cell.state_size` +is an integer, this must be a `Tensor` of appropriate type and shape +`[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this +should be a tuple of tensors having shapes `[batch_size, s] for s in +cell.state_size`. +
+`dtype` + +(optional) The data type for the initial state and expected output. +Required if initial_state is not provided or RNN state has a heterogeneous +dtype. +
+`parallel_iterations` + +(Default: 32). The number of iterations to run in +parallel. Those operations which do not have any temporal dependency and +can be run in parallel, will be. This parameter trades off time for +space. Values >> 1 use more memory but take less time, while smaller +values use less memory but computations take longer. +
+`swap_memory` + +Transparently swap the tensors produced in forward inference +but needed for back prop from GPU to CPU. This allows training RNNs which +would typically not fit on a single GPU, with very minimal (or no) +performance penalty. +
+`time_major` + +The shape format of the `inputs` and `outputs` Tensors. If true, +these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, +these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using +`time_major = True` is a bit more efficient because it avoids transposes +at the beginning and end of the RNN calculation. However, most TensorFlow +data is batch-major, so by default this function accepts input and emits +output in batch-major form. +
+`scope` + +VariableScope for the created subgraph; defaults to "rnn". +
+ + + + + + + + + + + + + + + + + +
+A pair (outputs, state) where: +
+`outputs` + +The RNN output `Tensor`. + +If time_major == False (default), this will be a `Tensor` shaped: +`[batch_size, max_time, cell.output_size]`. + +If time_major == True, this will be a `Tensor` shaped: +`[max_time, batch_size, cell.output_size]`. + +Note, if `cell.output_size` is a (possibly nested) tuple of integers +or `TensorShape` objects, then `outputs` will be a tuple having the +same structure as `cell.output_size`, containing Tensors having shapes +corresponding to the shape data in `cell.output_size`. +
+`state` + +The final state. If `cell.state_size` is an int, this +will be shaped `[batch_size, cell.state_size]`. If it is a +`TensorShape`, this will be shaped `[batch_size] + cell.state_size`. +If it is a (possibly nested) tuple of ints or `TensorShape`, this will +be a tuple having the corresponding shapes. If cells are `LSTMCells` +`state` will be a tuple containing a `LSTMStateTuple` for each cell. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `cell` is not an instance of RNNCell. +
+`ValueError` + +If inputs is None or an empty list. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/embedding_lookup.md b/site/en/api_docs/python/tf/compat/v1/nn/embedding_lookup.md new file mode 100644 index 00000000000..a3849b3b62f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/embedding_lookup.md @@ -0,0 +1,154 @@ +description: Looks up ids in a list of embedding tensors. + +
+ + +
+ +# tf.compat.v1.nn.embedding_lookup + + + + + + + + + +Looks up `ids` in a list of embedding tensors. + + + + + + + +This function is used to perform parallel lookups on the list of tensors in +`params`. It is a generalization of tf.gather, where `params` is +interpreted as a partitioning of a large embedding tensor. `params` may be +a `PartitionedVariable` as returned by using tf.compat.v1.get_variable() +with a partitioner. + +If `len(params) > 1`, each element `id` of `ids` is partitioned between +the elements of `params` according to the `partition_strategy`. +In all strategies, if the id space does not evenly divide the number of +partitions, each of the first `(max_id + 1) % len(params)` partitions will +be assigned one more id. + +If `partition_strategy` is `"mod"`, we assign each id to partition +`p = id % len(params)`. For instance, +13 ids are split across 5 partitions as: +`[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]` + +If `partition_strategy` is `"div"`, we assign ids to partitions in a +contiguous manner. In this case, 13 ids are split across 5 partitions as: +`[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]` + +If the input ids are ragged tensors, partition variables are not supported and +the partition strategy and the max_norm are ignored. +The results of the lookup are concatenated into a dense +tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`. + + + + + + + + + + + + + + + + + + + + + + + + + +
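+#### Example:
+
+An illustrative sketch with a single (unpartitioned) embedding table
+(values are assumptions):
+
+```python
+import tensorflow as tf
+
+# A 4 x 3 embedding table.
+params = tf.constant([[0.0, 0.0, 0.0],
+                      [1.0, 1.0, 1.0],
+                      [2.0, 2.0, 2.0],
+                      [3.0, 3.0, 3.0]])
+ids = tf.constant([[1, 3], [0, 2]])
+
+out = tf.compat.v1.nn.embedding_lookup(params, ids)
+print(out.shape)  # (2, 2, 3): shape(ids) + shape(params)[1:]
+```
+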
+`params` + +A single tensor representing the complete embedding tensor, or a +list of P tensors all of same shape except for the first dimension, +representing sharded embedding tensors. Alternatively, a +`PartitionedVariable`, created by partitioning along dimension 0. Each +element must be appropriately sized for the given `partition_strategy`. +
+`ids` + +A `Tensor` or a `RaggedTensor` with type `int32` or `int64` containing +the ids to be looked up in `params`. +
+`partition_strategy` + +A string specifying the partitioning strategy, relevant +if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default +is `"mod"`. +
+`name` + +A name for the operation (optional). +
+`validate_indices` + +DEPRECATED. If this operation is assigned to CPU, values +in `indices` are always validated to be within range. If assigned to GPU, +out-of-bound indices result in safe but unspecified behavior, which may +include raising an error. +
+`max_norm` + +If not `None`, each embedding is clipped if its l2-norm is larger +than this value. +
+ + + + + + + + + + + +
+A `Tensor` or a `RaggedTensor`, depending on the input, with the same type +as the tensors in `params`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `params` is empty. +
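#### Example:

The following is a small, hand-written sketch (not generated from the docstring), assuming TensorFlow 2.x with eager execution and the `tf.compat.v1` endpoints; the 13-row table and the 5-way sharding mirror the `"mod"` example above, with toy values chosen only for illustration:

```python
import tensorflow as tf

# A toy 13 x 2 embedding table, split into 5 shards with the default "mod"
# strategy: shard p holds the rows for ids with id % 5 == p, in increasing order.
full_table = tf.reshape(tf.range(26, dtype=tf.float32), [13, 2])
shards = [tf.gather(full_table, tf.range(p, 13, 5)) for p in range(5)]

ids = tf.constant([0, 5, 12])
vectors = tf.compat.v1.nn.embedding_lookup(shards, ids, partition_strategy="mod")
print(vectors.numpy())  # the same rows as full_table[0], full_table[5], full_table[12]
```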
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/embedding_lookup_sparse.md b/site/en/api_docs/python/tf/compat/v1/nn/embedding_lookup_sparse.md new file mode 100644 index 00000000000..c82cf5c032e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/embedding_lookup_sparse.md @@ -0,0 +1,184 @@ +description: Computes embeddings for the given ids and weights. + +
+ + +
+ +# tf.compat.v1.nn.embedding_lookup_sparse + + + + + + + + + +Computes embeddings for the given ids and weights. + + + + + + + +This op assumes that there is at least one id for each row in the dense tensor +represented by sp_ids (i.e. there are no rows with empty features), and that +all the indices of sp_ids are in canonical row-major order. + +It also assumes that all id values lie in the range [0, p0), where p0 +is the sum of the size of params along dimension 0. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`params` + +A single tensor representing the complete embedding tensor, or a +list of P tensors all of same shape except for the first dimension, +representing sharded embedding tensors. Alternatively, a +`PartitionedVariable`, created by partitioning along dimension 0. Each +element must be appropriately sized for the given `partition_strategy`. +
+`sp_ids` + +N x M `SparseTensor` of int64 ids where N is typically batch size +and M is arbitrary. +
+`sp_weights` + +either a `SparseTensor` of float / double weights, or `None` to +indicate all weights should be taken to be 1. If specified, `sp_weights` +must have exactly the same shape and indices as `sp_ids`. +
+`partition_strategy` + +A string specifying the partitioning strategy, relevant +if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default +is `"mod"`. See `tf.nn.embedding_lookup` for more details. +
+`name` + +Optional name for the op. +
+`combiner` + +A string specifying the reduction op. Currently "mean", "sqrtn" +and "sum" are supported. "sum" computes the weighted sum of the embedding +results for each row. "mean" is the weighted sum divided by the total +weight. "sqrtn" is the weighted sum divided by the square root of the sum +of the squares of the weights. +
+`max_norm` + +If not `None`, each embedding is clipped if its l2-norm is larger +than this value, before combining. +
+ + + + + + + + + + + +
+A dense tensor representing the combined embeddings for the +sparse ids. For each row in the dense tensor represented by `sp_ids`, the op +looks up the embeddings for all ids in that row, multiplies them by the +corresponding weight, and combines these embeddings as specified. + +In other words, if + +`shape(combined params) = [p0, p1, ..., pm]` + +and + +`shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]` + +then + +`shape(output) = [d0, d1, ..., dn-1, p1, ..., pm]`. + +For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are + +```python +[0, 0]: id 1, weight 2.0 +[0, 1]: id 3, weight 0.5 +[1, 0]: id 0, weight 1.0 +[2, 3]: id 1, weight 3.0 +``` + +with `combiner`="mean", then the output will be a 3x20 matrix where + +```python +output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5) +output[1, :] = (params[0, :] * 1.0) / 1.0 +output[2, :] = (params[1, :] * 3.0) / 3.0 +``` +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `sp_ids` is not a `SparseTensor`, or if `sp_weights` is +neither `None` nor `SparseTensor`. +
+`ValueError` + +If `combiner` is not one of {"mean", "sqrtn", "sum"}. +
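#### Example:

A minimal sketch, assuming eager execution in TensorFlow 2.x; the 3 x 2 `params` table and the two-row sparse batch are made up for illustration and follow the weighted-`"mean"` formula described above:

```python
import tensorflow as tf

params = tf.constant([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])  # 3 x 2 embedding table

# Row 0 uses ids 0 and 1 (weights 2.0 and 0.5); row 1 uses id 2 (weight 1.0).
sp_ids = tf.SparseTensor(indices=[[0, 0], [0, 1], [1, 0]],
                         values=tf.constant([0, 1, 2], dtype=tf.int64),
                         dense_shape=[2, 2])
sp_weights = tf.SparseTensor(indices=[[0, 0], [0, 1], [1, 0]],
                             values=[2.0, 0.5, 1.0],
                             dense_shape=[2, 2])

emb = tf.compat.v1.nn.embedding_lookup_sparse(params, sp_ids, sp_weights,
                                              combiner="mean")
# emb[0] == (params[0] * 2.0 + params[1] * 0.5) / (2.0 + 0.5); emb[1] == params[2]
```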
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/erosion2d.md b/site/en/api_docs/python/tf/compat/v1/nn/erosion2d.md new file mode 100644 index 00000000000..515c5219525 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/erosion2d.md @@ -0,0 +1,144 @@ +description: Computes the grayscale erosion of 4-D value and 3-D kernel tensors. + +
+ + +
+ +# tf.compat.v1.nn.erosion2d + + + + + + + + + +Computes the grayscale erosion of 4-D `value` and 3-D `kernel` tensors. + + + + + + + +The `value` tensor has shape `[batch, in_height, in_width, depth]` and the +`kernel` tensor has shape `[kernel_height, kernel_width, depth]`, i.e., +each input channel is processed independently of the others with its own +structuring function. The `output` tensor has shape +`[batch, out_height, out_width, depth]`. The spatial dimensions of the +output tensor depend on the `padding` algorithm. We currently only support the +default "NHWC" `data_format`. + +In detail, the grayscale morphological 2-D erosion is given by: + + output[b, y, x, c] = + min_{dy, dx} value[b, + strides[1] * y - rates[1] * dy, + strides[2] * x - rates[2] * dx, + c] - + kernel[dy, dx, c] + +Duality: The erosion of `value` by the `kernel` is equal to the negation of +the dilation of `-value` by the reflected `kernel`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. 4-D with shape `[batch, in_height, in_width, depth]`. +
+`kernel` + +A `Tensor`. Must have the same type as `value`. +3-D with shape `[kernel_height, kernel_width, depth]`. +
+`strides` + +A list of `ints` that has length `>= 4`. +1-D of length 4. The stride of the sliding window for each dimension of +the input tensor. Must be: `[1, stride_height, stride_width, 1]`. +
+`rates` + +A list of `ints` that has length `>= 4`. +1-D of length 4. The input stride for atrous morphological dilation. +Must be: `[1, rate_height, rate_width, 1]`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). If not specified "erosion2d" +is used. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `value`. +4-D with shape `[batch, out_height, out_width, depth]`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the `value` depth does not match `kernel`'s shape, or if +padding is other than `'VALID'` or `'SAME'`. +
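#### Example:

As a rough, hand-written sketch (assuming TensorFlow 2.x with eager execution): with an all-zero structuring element the erosion reduces to a sliding-window minimum, which makes the behaviour easy to eyeball on a tiny input:

```python
import tensorflow as tf

value = tf.constant([[3., 5., 1.],
                     [2., 9., 4.],
                     [7., 6., 8.]])
value = tf.reshape(value, [1, 3, 3, 1])      # [batch, height, width, depth]
kernel = tf.zeros([2, 2, 1])                 # flat 2x2 structuring element

eroded = tf.compat.v1.nn.erosion2d(value, kernel,
                                   strides=[1, 1, 1, 1],
                                   rates=[1, 1, 1, 1],
                                   padding="VALID")
# eroded has shape [1, 2, 2, 1]; each entry is the minimum over a 2x2 patch.
```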
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/fractional_avg_pool.md b/site/en/api_docs/python/tf/compat/v1/nn/fractional_avg_pool.md new file mode 100644 index 00000000000..1cf8e1d86c2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/fractional_avg_pool.md @@ -0,0 +1,150 @@ +description: Performs fractional average pooling on the input. (deprecated) + +
+ + +
+ +# tf.compat.v1.nn.fractional_avg_pool + + + + + + + + + +Performs fractional average pooling on the input. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +`seed2` and `deterministic` args are deprecated. Use fractional_avg_pool_v2. + +This is a deprecated version of `fractional_avg_pool`. + +Fractional average pooling is similar to Fractional max pooling in the pooling +region generation step. The only difference is that after pooling regions are +generated, a mean operation is performed instead of a max operation in each +pooling region. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. 4-D with shape `[batch, height, width, channels]`. +
+`pooling_ratio` + +A list of `floats` that has length >= 4. Pooling ratio for +each dimension of `value`, currently only supports row and col dimension +and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, +1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't +allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling +ratio on height and width dimensions respectively. +
+`pseudo_random` + +An optional `bool`. Defaults to `False`. When set to `True`, +generates the pooling sequence in a pseudorandom fashion, otherwise, in a +random fashion. Check paper (Graham, 2015) for difference between +pseudorandom and random. +
+`overlapping` + +An optional `bool`. Defaults to `False`. When set to `True`, +it means when pooling, the values at the boundary of adjacent pooling +cells are used by both cells. For example: +`index 0 1 2 3 4` +`value 20 5 16 3 7` +If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used +twice. The result would be [20, 16] for fractional avg pooling. +
+`deterministic` + +An optional `bool`. Deprecated; use `fractional_avg_pool_v2` +instead. +
+`seed` + +An optional `int`. Defaults to `0`. If set to be non-zero, the +random number generator is seeded by the given seed. Otherwise it is +seeded by a random seed. +
+`seed2` + +An optional `int`. Deprecated; use `fractional_avg_pool_v2` instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + +
+ + +A tuple of `Tensor` objects (`output`, `row_pooling_sequence`, +`col_pooling_sequence`). + output: Output `Tensor` after fractional avg pooling. Has the same type as + `value`. + row_pooling_sequence: A `Tensor` of type `int64`. + col_pooling_sequence: A `Tensor` of type `int64`. + +#### References: + +Fractional Max-Pooling: + [Graham, 2015](https://arxiv.org/abs/1412.6071) + ([pdf](https://arxiv.org/pdf/1412.6071.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/fractional_max_pool.md b/site/en/api_docs/python/tf/compat/v1/nn/fractional_max_pool.md new file mode 100644 index 00000000000..22678fc47f4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/fractional_max_pool.md @@ -0,0 +1,171 @@ +description: Performs fractional max pooling on the input. (deprecated) + +
+ + +
+ +# tf.compat.v1.nn.fractional_max_pool + + + + + + + + + +Performs fractional max pooling on the input. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +`seed2` and `deterministic` args are deprecated. Use fractional_max_pool_v2. + +This is a deprecated version of `fractional_max_pool`. + +Fractional max pooling is slightly different than regular max pooling. In +regular max pooling, you downsize an input set by taking the maximum value of +smaller N x N subsections of the set (often 2x2), and try to reduce the set by +a factor of N, where N is an integer. Fractional max pooling, as you might +expect from the word "fractional", means that the overall reduction ratio N +does not have to be an integer. + +The sizes of the pooling regions are generated randomly but are fairly +uniform. For example, let's look at the height dimension, and the constraints +on the list of rows that will be pool boundaries. + +First we define the following: + +1. input_row_length : the number of rows from the input set +2. output_row_length : which will be smaller than the input +3. alpha = input_row_length / output_row_length : our reduction ratio +4. K = floor(alpha) +5. row_pooling_sequence : this is the result list of pool boundary rows + +Then, row_pooling_sequence should satisfy: + +1. a[0] = 0 : the first value of the sequence is 0 +2. a[end] = input_row_length : the last value of the sequence is the size +3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size +4. length(row_pooling_sequence) = output_row_length+1 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. 4-D with shape `[batch, height, width, channels]`. +
+`pooling_ratio` + +A list of `floats` that has length >= 4. Pooling ratio for +each dimension of `value`, currently only supports row and col dimension +and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, +1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't +allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling +ratio on height and width dimensions respectively. +
+`pseudo_random` + +An optional `bool`. Defaults to `False`. When set to `True`, +generates the pooling sequence in a pseudorandom fashion, otherwise, in a +random fashion. Check (Graham, 2015) for difference between +pseudorandom and random. +
+`overlapping` + +An optional `bool`. Defaults to `False`. When set to `True`, +it means when pooling, the values at the boundary of adjacent pooling +cells are used by both cells. For example: +`index 0 1 2 3 4` +`value 20 5 16 3 7` +If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used +twice. The result would be [20, 16] for fractional max pooling. +
+`deterministic` + +An optional `bool`. Deprecated; use `fractional_max_pool_v2` +instead. +
+`seed` + +An optional `int`. Defaults to `0`. If set to be non-zero, the +random number generator is seeded by the given seed. Otherwise it is +seeded by a random seed. +
+`seed2` + +An optional `int`. Deprecated; use `fractional_max_pool_v2` instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + +
+ + +A tuple of `Tensor` objects (`output`, `row_pooling_sequence`, +`col_pooling_sequence`). + output: Output `Tensor` after fractional max pooling. Has the same type as + `value`. + row_pooling_sequence: A `Tensor` of type `int64`. + col_pooling_sequence: A `Tensor` of type `int64`. + +#### References: + +Fractional Max-Pooling: + [Graham, 2015](https://arxiv.org/abs/1412.6071) + ([pdf](https://arxiv.org/pdf/1412.6071.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/fused_batch_norm.md b/site/en/api_docs/python/tf/compat/v1/nn/fused_batch_norm.md new file mode 100644 index 00000000000..a9662c75b0f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/fused_batch_norm.md @@ -0,0 +1,188 @@ +description: Batch normalization. + +
+ + +
+ +# tf.compat.v1.nn.fused_batch_norm + + + + + + + + + +Batch normalization. + + + + + + + + +See Source: [Batch Normalization: Accelerating Deep Network Training by +Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy] +(http://arxiv.org/abs/1502.03167). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Input `Tensor` of 4 dimensions. +
+`scale` + +A `Tensor` of 1 dimension for scaling. +
+`offset` + +A `Tensor` of 1 dimension for bias. +
+`mean` + +A `Tensor` of 1 dimension for population mean. The shape and meaning +of this argument depends on the value of is_training and +exponential_avg_factor as follows: +is_training==False (inference): +Mean must be a `Tensor` of the same shape as scale containing the +estimated population mean computed during training. +is_training==True and exponential_avg_factor == 1.0: +Mean must be None. +is_training==True and exponential_avg_factor != 1.0: +Mean must be a `Tensor` of the same shape as scale containing the +exponential running mean. +
+`variance` + +A `Tensor` of 1 dimension for population variance. The shape and +meaning of this argument depends on the value of is_training and +exponential_avg_factor as follows: +is_training==False (inference): +Variance must be a `Tensor` of the same shape as scale containing +the estimated population variance computed during training. +is_training==True and exponential_avg_factor == 1.0: +Variance must be None. +is_training==True and exponential_avg_factor != 1.0: +Variance must be a `Tensor` of the same shape as scale containing +the exponential running variance. +
+`epsilon` + +A small float number added to the variance of x. +
+`data_format` + +The data format for x. Either "NHWC" (default) or "NCHW". +
+`is_training` + +A bool value to specify if the operation is used for +training or inference. +
+`name` + +A name for this operation (optional). +
+`exponential_avg_factor` + +A float number (usually between 0 and 1) used +for controlling the decay of the running +population average of mean and variance. +If set to 1.0, the current batch average is +returned. +
+ + + + + + + + + + + + + + + + + + +
+`y` + +A 4D Tensor for the normalized, scaled, offsetted x. +
+`running_mean` + +A 1D Tensor for the exponential running mean of x. +The output value is (1 - exponential_avg_factor) * mean + +exponential_avg_factor * batch_mean, where batch_mean +is the mean of the current batch in x. +
+`running_var` + +A 1D Tensor for the exponential running variance. +The output value is (1 - exponential_avg_factor) * variance + +exponential_avg_factor * batch_variance, where batch_variance +is the variance of the current batch in x. +
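#### Example:

A minimal sketch of the training-mode call (assuming TensorFlow 2.x with eager execution); with the default `exponential_avg_factor=1.0`, `mean` and `variance` are passed as `None` and the batch statistics come back as the second and third outputs:

```python
import tensorflow as tf

x = tf.random.normal([8, 4, 4, 3])   # NHWC input
scale = tf.ones([3])
offset = tf.zeros([3])

y, batch_mean, batch_var = tf.compat.v1.nn.fused_batch_norm(
    x, scale, offset, mean=None, variance=None, is_training=True)
# y has the same shape as x; batch_mean and batch_var are per-channel, shape [3].
```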
+ + + +#### References: + +Batch Normalization - Accelerating Deep Network Training by Reducing +Internal Covariate Shift: + [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html) + ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/max_pool.md b/site/en/api_docs/python/tf/compat/v1/nn/max_pool.md new file mode 100644 index 00000000000..02e653a1d11 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/max_pool.md @@ -0,0 +1,110 @@ +description: Performs the max pooling on the input. + +
+ + +
+ +# tf.compat.v1.nn.max_pool + + + + + + + + + +Performs the max pooling on the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A 4-D `Tensor` of the format specified by `data_format`. +
+`ksize` + +An int or list of `ints` that has length `1`, `2` or `4`. +The size of the window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1`, `2` or `4`. +The stride of the sliding window for each dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +See the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported. +
+`name` + +Optional name for the operation. +
+`input` + +Alias for value. +
+ + + + + + + + + + + +
+A `Tensor` of format specified by `data_format`. +The max pooled output tensor. +
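#### Example:

A quick illustrative sketch (assuming TensorFlow 2.x with eager execution), pooling a 4x4 single-channel input with a non-overlapping 2x2 window:

```python
import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])  # NHWC

pooled = tf.compat.v1.nn.max_pool(
    x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
# pooled has shape [1, 2, 2, 1]: the maximum of each non-overlapping 2x2 block.
```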
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/max_pool_with_argmax.md b/site/en/api_docs/python/tf/compat/v1/nn/max_pool_with_argmax.md new file mode 100644 index 00000000000..e04845faa52 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/max_pool_with_argmax.md @@ -0,0 +1,136 @@ +description: Performs max pooling on the input and outputs both max values and indices. + +
+ + +
+ +# tf.compat.v1.nn.max_pool_with_argmax + + + + + + + + + +Performs max pooling on the input and outputs both max values and indices. + + + + + + + +The indices in `argmax` are flattened, so that a maximum value at position +`[b, y, x, c]` becomes flattened index: +`(y * width + x) * channels + c` if `include_batch_in_index` is False; +`((b * height + y) * width + x) * channels + c` if `include_batch_in_index` is True. + +The indices returned are always in `[0, height) x [0, width)` before flattening, +even if padding is involved and the mathematically correct answer is outside +(either negative or too large). This is a bug, but fixing it is difficult to do +in a safe backwards compatible way, especially due to flattening. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +4-D with shape `[batch, height, width, channels]`. Input to pool over. +
+`ksize` + +A list of `ints` that has length `>= 4`. +The size of the window for each dimension of the input tensor. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`Targmax` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`include_batch_in_index` + +An optional `bool`. Defaults to `False`. +Whether to include batch dimension in flattened index of `argmax`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, argmax). +
+`output` + +A `Tensor`. Has the same type as `input`. +
+`argmax` + +A `Tensor` of type `Targmax`. +
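#### Example:

A small hand-written sketch (assuming TensorFlow 2.x with eager execution) showing the paired outputs; the flattened `argmax` indices follow the `(y * width + x) * channels + c` formula above since `include_batch_in_index` defaults to `False`:

```python
import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])  # NHWC

output, argmax = tf.compat.v1.nn.max_pool_with_argmax(
    x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
# output holds the per-window maxima; argmax holds their flattened positions
# within the image, e.g. the max of the top-left 2x2 block sits at index 5.
```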
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/moments.md b/site/en/api_docs/python/tf/compat/v1/nn/moments.md new file mode 100644 index 00000000000..16af284f3e0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/moments.md @@ -0,0 +1,112 @@ +description: Calculate the mean and variance of x. + +
+ + +
+ +# tf.compat.v1.nn.moments + + + + + + + + + +Calculate the mean and variance of `x`. + + + + + + + +The mean and variance are calculated by aggregating the contents of `x` +across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean +and variance of a vector. + +Note: shift is currently not used; the true mean is computed and used. + +When using these moments for batch normalization (see +tf.nn.batch_normalization): + + * for so-called "global normalization", used with convolutional filters with + shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`. + * for simple batch normalization pass `axes=[0]` (batch only). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. +
+`axes` + +Array of ints. Axes along which to compute mean and +variance. +
+`shift` + +Not used in the current implementation. +
+`name` + +Name used to scope the operations that compute the moments. +
+`keep_dims` + +produce moments with the same dimensionality as the input. +
+`keepdims` + +Alias for keep_dims. +
+ + + + + + + + + + + +
+Two `Tensor` objects: `mean` and `variance`. +
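#### Example:

A minimal sketch (assuming TensorFlow 2.x with eager execution) computing per-column statistics of a tiny matrix:

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])

mean, variance = tf.compat.v1.nn.moments(x, axes=[0])
# mean == [2.0, 3.0], variance == [1.0, 1.0] (statistics taken down each column)
```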
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/nce_loss.md b/site/en/api_docs/python/tf/compat/v1/nn/nce_loss.md new file mode 100644 index 00000000000..3a343d81d62 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/nce_loss.md @@ -0,0 +1,198 @@ +description: Computes and returns the noise-contrastive estimation training loss. + +
+ + +
+ +# tf.compat.v1.nn.nce_loss + + + + + + + + + +Computes and returns the noise-contrastive estimation training loss. + + + + + + + +A common use case is to use this method for training, and calculate the full +sigmoid loss for evaluation or inference. In this case, you must set +`partition_strategy="div"` for the two losses to be consistent, as in the +following example: + +```python +if mode == "train": + loss = tf.nn.nce_loss( + weights=weights, + biases=biases, + labels=labels, + inputs=inputs, + ..., + partition_strategy="div") +elif mode == "eval": + logits = tf.matmul(inputs, tf.transpose(weights)) + logits = tf.nn.bias_add(logits, biases) + labels_one_hot = tf.one_hot(labels, n_classes) + loss = tf.nn.sigmoid_cross_entropy_with_logits( + labels=labels_one_hot, + logits=logits) + loss = tf.reduce_sum(loss, axis=1) +``` + +Note: By default this uses a log-uniform (Zipfian) distribution for sampling, +so your labels must be sorted in order of decreasing frequency to achieve +good results. For more details, see +tf.random.log_uniform_candidate_sampler. + +Note: In the case where `num_true` > 1, we assign to each target class +the target probability 1 / `num_true` so that the target probabilities +sum to 1 per-example. + +Note: It would be useful to allow a variable number of target classes per +example. We hope to provide this functionality in a future release. +For now, if you have a variable number of target classes, you can pad them +out to a constant number by either repeating them or by padding +with an otherwise unused class. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`weights` + +A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` +objects whose concatenation along dimension 0 has shape +[num_classes, dim]. The (possibly-partitioned) class embeddings. +
+`biases` + +A `Tensor` of shape `[num_classes]`. The class biases. +
+`labels` + +A `Tensor` of type `int64` and shape `[batch_size, +num_true]`. The target classes. +
+`inputs` + +A `Tensor` of shape `[batch_size, dim]`. The forward +activations of the input network. +
+`num_sampled` + +An `int`. The number of negative classes to randomly sample +per batch. This single sample of negative classes is evaluated for each +element in the batch. +
+`num_classes` + +An `int`. The number of possible classes. +
+`num_true` + +An `int`. The number of target classes per training example. +
+`sampled_values` + +a tuple of (`sampled_candidates`, `true_expected_count`, +`sampled_expected_count`) returned by a `*_candidate_sampler` function. +(if None, we default to `log_uniform_candidate_sampler`) +
+`remove_accidental_hits` + +A `bool`. Whether to remove "accidental hits" +where a sampled class equals one of the target classes. If set to +`True`, this is a "Sampled Logistic" loss instead of NCE, and we are +learning to generate log-odds instead of log probabilities. See +our Candidate Sampling Algorithms Reference +([pdf](https://www.tensorflow.org/extras/candidate_sampling.pdf)). +Default is False. +
+`partition_strategy` + +A string specifying the partitioning strategy, relevant +if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. +Default is `"mod"`. See `tf.nn.embedding_lookup` for more details. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `batch_size` 1-D tensor of per-example NCE losses. +
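#### Example:

A hand-written training-side sketch (not from the docstring), assuming TensorFlow 2.x with eager execution; the class count, embedding size and label values are made up for illustration, and `partition_strategy="div"` matches the train/eval recipe shown above:

```python
import tensorflow as tf

num_classes, dim, batch = 100, 16, 4
weights = tf.Variable(tf.random.normal([num_classes, dim]))
biases = tf.Variable(tf.zeros([num_classes]))

labels = tf.constant([[3], [17], [42], [8]], dtype=tf.int64)  # [batch, num_true=1]
inputs = tf.random.normal([batch, dim])                       # forward activations

loss = tf.compat.v1.nn.nce_loss(
    weights=weights, biases=biases, labels=labels, inputs=inputs,
    num_sampled=10, num_classes=num_classes, partition_strategy="div")
# loss has shape [batch]; reduce it (e.g. tf.reduce_mean) before minimizing.
```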
+ + + +#### References: + +Noise-contrastive estimation - A new estimation principle for unnormalized +statistical models: + [Gutmann et al., 2010](http://proceedings.mlr.press/v9/gutmann10a) + ([pdf](http://proceedings.mlr.press/v9/gutmann10a/gutmann10a.pdf)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/pool.md b/site/en/api_docs/python/tf/compat/v1/nn/pool.md new file mode 100644 index 00000000000..a184ed341a4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/pool.md @@ -0,0 +1,197 @@ +description: Performs an N-D pooling operation. + +
+ + +
+ +# tf.compat.v1.nn.pool + + + + + + + + + +Performs an N-D pooling operation. + + + + + + + +In the case that `data_format` does not start with "NC", computes for + 0 <= b < batch_size, + 0 <= x[i] < output_spatial_shape[i], + 0 <= c < num_channels: + +``` + output[b, x[0], ..., x[N-1], c] = + REDUCE_{z[0], ..., z[N-1]} + input[b, + x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], + ... + x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], + c], +``` + +where the reduction function REDUCE depends on the value of `pooling_type`, +and pad_before is defined based on the value of `padding` as described in +the "returns" section of tf.nn.convolution for details. +The reduction never includes out-of-bounds positions. + +In the case that `data_format` starts with `"NC"`, the `input` and output are +simply transposed as follows: + +``` + pool(input, data_format, **kwargs) = + tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), + **kwargs), + [0, N+1] + range(1, N+1)) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +Tensor of rank N+2, of shape +`[batch_size] + input_spatial_shape + [num_channels]` if data_format does +not start with "NC" (default), or +`[batch_size, num_channels] + input_spatial_shape` if data_format starts +with "NC". Pooling happens over the spatial dimensions only. +
+`window_shape` + +Sequence of N ints >= 1. +
+`pooling_type` + +Specifies pooling operation, must be "AVG" or "MAX". +
+`padding` + +The padding algorithm, must be "SAME" or "VALID". +See the "returns" section of tf.nn.convolution for details. +
+`dilation_rate` + +Optional. Dilation rate. List of N ints >= 1. +Defaults to [1]*N. If any value of dilation_rate is > 1, then all values +of strides must be 1. +
+`strides` + +Optional. Sequence of N ints >= 1. Defaults to [1]*N. +If any value of strides is > 1, then all values of dilation_rate must be +1. +
+`name` + +Optional. Name of the op. +
+`data_format` + +A string or None. Specifies whether the channel dimension of +the `input` and output is the last dimension (default, or if `data_format` +does not start with "NC"), or the second dimension (if `data_format` +starts with "NC"). For N=1, the valid values are "NWC" (default) and +"NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". +For N=3, the valid values are "NDHWC" (default) and "NCDHW". +
+`dilations` + +Alias for dilation_rate +
+ + + + + + + + + + + +
+Tensor of rank N+2, of shape +[batch_size] + output_spatial_shape + [num_channels] + +if data_format is None or does not start with "NC", or + +[batch_size, num_channels] + output_spatial_shape + +if data_format starts with "NC", +where `output_spatial_shape` depends on the value of padding: + +If padding = "SAME": +output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i]) + +If padding = "VALID": +output_spatial_shape[i] = +ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) +/ strides[i]). +
+ + + + + + + + + + + + +
+`ValueError` + +if arguments are invalid. +
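#### Example:

A short sketch (assuming TensorFlow 2.x with eager execution) of 2-D average pooling through the generic N-D interface, with toy shapes chosen only for illustration:

```python
import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])  # NHWC

avg = tf.compat.v1.nn.pool(
    x, window_shape=[2, 2], pooling_type="AVG",
    padding="VALID", strides=[2, 2])
# avg has shape [1, 2, 2, 1]: the mean of each non-overlapping 2x2 block.
```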
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/quantized_avg_pool.md b/site/en/api_docs/python/tf/compat/v1/nn/quantized_avg_pool.md new file mode 100644 index 00000000000..0b8470a5811 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/quantized_avg_pool.md @@ -0,0 +1,130 @@ +description: Produces the average pool of the input tensor for quantized types. + +
+ + +
+ +# tf.compat.v1.nn.quantized_avg_pool + + + + + + + + + +Produces the average pool of the input tensor for quantized types. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +4-D with shape `[batch, height, width, channels]`. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the lowest quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the highest quantized input value represents. +
+`ksize` + +A list of `ints`. +The size of the window for each dimension of the input tensor. +The length must be 4 to match the number of dimensions of the input. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +tensor. The length must be 4 to match the number of dimensions of the input. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor`. Has the same type as `input`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/quantized_conv2d.md b/site/en/api_docs/python/tf/compat/v1/nn/quantized_conv2d.md new file mode 100644 index 00000000000..176b773b75e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/quantized_conv2d.md @@ -0,0 +1,168 @@ +description: Computes a 2D convolution given quantized 4D input and filter tensors. + +
+ + +
+ +# tf.compat.v1.nn.quantized_conv2d + + + + + + + + + +Computes a 2D convolution given quantized 4D input and filter tensors. + + + + + + + +The inputs are quantized tensors where the lowest value represents the real +number of the associated minimum, and the highest represents the maximum. +This means that you can only interpret the quantized output in the same way, by +taking the returned minimum and maximum values into account. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +filter's input_depth dimension must match input's depth dimensions. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the lowest quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the highest quantized input value represents. +
+`min_filter` + +A `Tensor` of type `float32`. +The float value that the lowest quantized filter value represents. +
+`max_filter` + +A `Tensor` of type `float32`. +The float value that the highest quantized filter value represents. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by the +value of `data_format`, see above for details. Dilations in the batch and +depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/quantized_max_pool.md b/site/en/api_docs/python/tf/compat/v1/nn/quantized_max_pool.md new file mode 100644 index 00000000000..ecb8fb8bc61 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/quantized_max_pool.md @@ -0,0 +1,130 @@ +description: Produces the max pool of the input tensor for quantized types. + +
+ + +
+ +# tf.compat.v1.nn.quantized_max_pool + + + + + + + + + +Produces the max pool of the input tensor for quantized types. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The 4D (batch x rows x cols x depth) Tensor to MaxReduce over. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the lowest quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the highest quantized input value represents. +
+`ksize` + +A list of `ints`. +The size of the window for each dimension of the input tensor. +The length must be 4 to match the number of dimensions of the input. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +tensor. The length must be 4 to match the number of dimensions of the input. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor`. Has the same type as `input`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/quantized_relu_x.md b/site/en/api_docs/python/tf/compat/v1/nn/quantized_relu_x.md new file mode 100644 index 00000000000..03e65d6ae13 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/quantized_relu_x.md @@ -0,0 +1,118 @@ +description: Computes Quantized Rectified Linear X: min(max(features, 0), max_value) + +
+ + +
+ +# tf.compat.v1.nn.quantized_relu_x + + + + + + + + + +Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`max_value` + +A `Tensor` of type `float32`. +
+`min_features` + +A `Tensor` of type `float32`. +The float value that the lowest quantized value represents. +
+`max_features` + +A `Tensor` of type `float32`. +The float value that the highest quantized value represents. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (activations, min_activations, max_activations). +
+`activations` + +A `Tensor` of type `out_type`. +
+`min_activations` + +A `Tensor` of type `float32`. +
+`max_activations` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/raw_rnn.md b/site/en/api_docs/python/tf/compat/v1/nn/raw_rnn.md new file mode 100644 index 00000000000..1dc4140befb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/raw_rnn.md @@ -0,0 +1,253 @@ +description: Creates an RNN specified by RNNCell cell and loop function loop_fn. + +
+ + +
+ +# tf.compat.v1.nn.raw_rnn + + + + + + + + + +Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`. + + + + + + + +**NOTE: This method is still in testing, and the API may change.** + +This function is a more primitive version of `dynamic_rnn` that provides +more direct access to the inputs each iteration. It also provides more +control over when to start and finish reading the sequence, and +what to emit for the output. + +For example, it can be used to implement the dynamic decoder of a seq2seq +model. + +Instead of working with `Tensor` objects, most operations work with +`TensorArray` objects directly. + +The operation of `raw_rnn`, in pseudo-code, is basically the following: + +```python +time = tf.constant(0, dtype=tf.int32) +(finished, next_input, initial_state, emit_structure, loop_state) = loop_fn( + time=time, cell_output=None, cell_state=None, loop_state=None) +emit_ta = TensorArray(dynamic_size=True, dtype=initial_state.dtype) +state = initial_state +while not all(finished): + (output, cell_state) = cell(next_input, state) + (next_finished, next_input, next_state, emit, loop_state) = loop_fn( + time=time + 1, cell_output=output, cell_state=cell_state, + loop_state=loop_state) + # Emit zeros and copy forward state for minibatch entries that are finished. + state = tf.where(finished, state, next_state) + emit = tf.where(finished, tf.zeros_like(emit_structure), emit) + emit_ta = emit_ta.write(time, emit) + # If any new minibatch entries are marked as finished, mark these. + finished = tf.logical_or(finished, next_finished) + time += 1 +return (emit_ta, state, loop_state) +``` + +with the additional properties that output and state may be (possibly nested) +tuples, as determined by `cell.output_size` and `cell.state_size`, and +as a result the final `state` and `emit_ta` may themselves be tuples. + +A simple implementation of `dynamic_rnn` via `raw_rnn` looks like this: + +```python +inputs = tf.compat.v1.placeholder(shape=(max_time, batch_size, input_depth), + dtype=tf.float32) +sequence_length = tf.compat.v1.placeholder(shape=(batch_size,), +dtype=tf.int32) +inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time) +inputs_ta = inputs_ta.unstack(inputs) + +cell = tf.compat.v1.nn.rnn_cell.LSTMCell(num_units) + +def loop_fn(time, cell_output, cell_state, loop_state): + emit_output = cell_output # == None for time == 0 + if cell_output is None: # time == 0 + next_cell_state = cell.zero_state(batch_size, tf.float32) + else: + next_cell_state = cell_state + elements_finished = (time >= sequence_length) + finished = tf.reduce_all(elements_finished) + next_input = tf.cond( + finished, + lambda: tf.zeros([batch_size, input_depth], dtype=tf.float32), + lambda: inputs_ta.read(time)) + next_loop_state = None + return (elements_finished, next_input, next_cell_state, + emit_output, next_loop_state) + +outputs_ta, final_state, _ = raw_rnn(cell, loop_fn) +outputs = outputs_ta.stack() +``` + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +An instance of RNNCell. +
+`loop_fn` + +A callable that takes inputs `(time, cell_output, cell_state, +loop_state)` and returns the tuple `(finished, next_input, +next_cell_state, emit_output, next_loop_state)`. Here `time` is an int32 +scalar `Tensor`, `cell_output` is a `Tensor` or (possibly nested) tuple of +tensors as determined by `cell.output_size`, and `cell_state` is a +`Tensor` or (possibly nested) tuple of tensors, as determined by the +`loop_fn` on its first call (and should match `cell.state_size`). +The outputs are: `finished`, a boolean `Tensor` of +shape `[batch_size]`, `next_input`: the next input to feed to `cell`, +`next_cell_state`: the next state to feed to `cell`, +and `emit_output`: the output to store for this iteration. Note that +`emit_output` should be a `Tensor` or (possibly nested) tuple of tensors +which is aggregated in the `emit_ta` inside the `while_loop`. For the +first call to `loop_fn`, the `emit_output` corresponds to the +`emit_structure` which is then used to determine the size of the +`zero_tensor` for the `emit_ta` (defaults to `cell.output_size`). For +the subsequent calls to the `loop_fn`, the `emit_output` corresponds to +the actual output tensor that is to be aggregated in the `emit_ta`. The +parameter `cell_state` and output `next_cell_state` may be either a +single or (possibly nested) tuple of tensors. The parameter +`loop_state` and output `next_loop_state` may be either a single or +(possibly nested) tuple of `Tensor` and `TensorArray` objects. This +last parameter may be ignored by `loop_fn` and the return value may be +`None`. If it is not `None`, then the `loop_state` will be propagated +through the RNN loop, for use purely by `loop_fn` to keep track of its +own state. The `next_loop_state` parameter returned may be `None`. The +first call to `loop_fn` will be `time = 0`, `cell_output = None`, +`cell_state = None`, and `loop_state = None`. For this call: The +`next_cell_state` value should be the value with which to initialize the +cell's state. It may be a final state from a previous RNN or it may be +the output of `cell.zero_state()`. It should be a (possibly nested) +tuple structure of tensors. If `cell.state_size` is an integer, this +must be a `Tensor` of appropriate type and shape `[batch_size, +cell.state_size]`. If `cell.state_size` is a `TensorShape`, this must be +a `Tensor` of appropriate type and shape `[batch_size] + +cell.state_size`. If `cell.state_size` is a (possibly nested) tuple of +ints or `TensorShape`, this will be a tuple having the corresponding +shapes. The `emit_output` value may be either `None` or a (possibly +nested) tuple structure of tensors, e.g., `(tf.zeros(shape_0, +dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`. If this first +`emit_output` return value is `None`, then the `emit_ta` result of +`raw_rnn` will have the same structure and dtypes as `cell.output_size`. +Otherwise `emit_ta` will have the same structure, shapes (prepended with +a `batch_size` dimension), and dtypes as `emit_output`. The actual +values returned for `emit_output` at this initializing call are ignored. +Note, this emit structure must be consistent across all time steps. +
+`parallel_iterations` + +(Default: 32). The number of iterations to run in +parallel. Those operations which do not have any temporal dependency and +can be run in parallel, will be. This parameter trades off time for +space. Values >> 1 use more memory but take less time, while smaller +values use less memory but computations take longer. +
+`swap_memory` + +Transparently swap the tensors produced in forward inference +but needed for back prop from GPU to CPU. This allows training RNNs which +would typically not fit on a single GPU, with very minimal (or no) +performance penalty. +
+`scope` + +VariableScope for the created subgraph; defaults to "rnn". +
+ + + + + + + + + + + +
+A tuple `(emit_ta, final_state, final_loop_state)` where: + +`emit_ta`: The RNN output `TensorArray`. +If `loop_fn` returns a (possibly nested) set of Tensors for +`emit_output` during initialization, (inputs `time = 0`, +`cell_output = None`, and `loop_state = None`), then `emit_ta` will +have the same structure, dtypes, and shapes as `emit_output` instead. +If `loop_fn` returns `emit_output = None` during this call, +the structure of `cell.output_size` is used: +If `cell.output_size` is a (possibly nested) tuple of integers +or `TensorShape` objects, then `emit_ta` will be a tuple having the +same structure as `cell.output_size`, containing TensorArrays whose +elements' shapes correspond to the shape data in `cell.output_size`. + +`final_state`: The final cell state. If `cell.state_size` is an int, this +will be shaped `[batch_size, cell.state_size]`. If it is a +`TensorShape`, this will be shaped `[batch_size] + cell.state_size`. +If it is a (possibly nested) tuple of ints or `TensorShape`, this will +be a tuple having the corresponding shapes. + +`final_loop_state`: The final loop state as returned by `loop_fn`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `cell` is not an instance of RNNCell, or `loop_fn` is not +a `callable`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/relu_layer.md b/site/en/api_docs/python/tf/compat/v1/nn/relu_layer.md new file mode 100644 index 00000000000..344a3f45bca --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/relu_layer.md @@ -0,0 +1,87 @@ +description: Computes Relu(x * weight + biases). + +
+ + +
+ +# tf.compat.v1.nn.relu_layer + + + + + + + + + +Computes Relu(x * weight + biases). + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +a 2D tensor. Dimensions typically: batch, in_units +
+`weights` + +a 2D tensor. Dimensions typically: in_units, out_units +
+`biases` + +a 1D tensor. Dimensions: out_units +
+`name` + +A name for the operation (optional). If not specified +"nn_relu_layer" is used. +
+ + + + + + + + + + + +
+A 2-D Tensor computing relu(matmul(x, weights) + biases). +Dimensions typically: batch, out_units. +
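#### Example:

A tiny worked sketch (assuming TensorFlow 2.x with eager execution); the weights and biases are arbitrary values picked so the result is easy to check by hand:

```python
import tensorflow as tf

x = tf.constant([[1.0, -2.0]])              # [batch=1, in_units=2]
weights = tf.constant([[1.0, 0.0, -1.0],
                       [0.0, 1.0,  1.0]])   # [in_units=2, out_units=3]
biases = tf.constant([0.0, 0.5, 0.0])

y = tf.compat.v1.nn.relu_layer(x, weights, biases)
# y == relu(matmul(x, weights) + biases) == [[1.0, 0.0, 0.0]]
```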
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell.md new file mode 100644 index 00000000000..838e6a01670 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell.md @@ -0,0 +1,43 @@ +description: Module for constructing RNN Cells. + +
+ + +
+ +# Module: tf.compat.v1.nn.rnn_cell + + + + + + + + + +Module for constructing RNN Cells. + + + +## Classes + +[`class BasicLSTMCell`](../../../../tf/compat/v1/nn/rnn_cell/BasicLSTMCell.md): DEPRECATED: Please use tf.compat.v1.nn.rnn_cell.LSTMCell instead. + +[`class BasicRNNCell`](../../../../tf/compat/v1/nn/rnn_cell/BasicRNNCell.md): The most basic RNN cell. + +[`class DeviceWrapper`](../../../../tf/compat/v1/nn/rnn_cell/DeviceWrapper.md): Operator that ensures an RNNCell runs on a particular device. + +[`class DropoutWrapper`](../../../../tf/compat/v1/nn/rnn_cell/DropoutWrapper.md): Operator adding dropout to inputs and outputs of the given cell. + +[`class GRUCell`](../../../../tf/compat/v1/nn/rnn_cell/GRUCell.md): Gated Recurrent Unit cell. + +[`class LSTMCell`](../../../../tf/compat/v1/nn/rnn_cell/LSTMCell.md): Long short-term memory unit (LSTM) recurrent network cell. + +[`class LSTMStateTuple`](../../../../tf/compat/v1/nn/rnn_cell/LSTMStateTuple.md): Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state. + +[`class MultiRNNCell`](../../../../tf/compat/v1/nn/rnn_cell/MultiRNNCell.md): RNN cell composed sequentially of multiple simple cells. + +[`class RNNCell`](../../../../tf/compat/v1/nn/rnn_cell/RNNCell.md): Abstract object representing an RNN cell. + +[`class ResidualWrapper`](../../../../tf/compat/v1/nn/rnn_cell/ResidualWrapper.md): RNNCell wrapper that ensures cell inputs are added to the outputs. + diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/BasicLSTMCell.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/BasicLSTMCell.md new file mode 100644 index 00000000000..0cb993eaaea --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/BasicLSTMCell.md @@ -0,0 +1,254 @@ +description: DEPRECATED: Please use tf.compat.v1.nn.rnn_cell.LSTMCell instead. + +
+ + + + + + +
+ +# tf.compat.v1.nn.rnn_cell.BasicLSTMCell + + + + + + + + + +DEPRECATED: Please use tf.compat.v1.nn.rnn_cell.LSTMCell instead. + + + + + + + +Basic LSTM recurrent network cell. + +The implementation is based on + +We add forget_bias (default: 1) to the biases of the forget gate in order to +reduce the scale of forgetting in the beginning of the training. + +It does not allow cell clipping, a projection layer, and does not +use peep-hole connections: it is the basic baseline. + +For advanced models, please use the full tf.compat.v1.nn.rnn_cell.LSTMCell +that follows. + +Note that this cell is not optimized for performance. Please use +`tf.contrib.cudnn_rnn.CudnnLSTM` for better performance on GPU, or +`tf.contrib.rnn.LSTMBlockCell` and `tf.contrib.rnn.LSTMBlockFusedCell` for +better performance on CPU. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_units` + +int, The number of units in the LSTM cell. +
+`forget_bias` + +float, The bias added to forget gates (see above). Must set +to `0.0` manually when restoring from CudnnLSTM-trained checkpoints. +
+`state_is_tuple` + +If True, accepted and returned states are 2-tuples of the +`c_state` and `m_state`. If False, they are concatenated along the +column axis. The latter behavior will soon be deprecated. +
+`activation` + +Activation function of the inner states. Default: `tanh`. It +could also be string that is within Keras activation function names. +
+`reuse` + +(optional) Python boolean describing whether to reuse variables in +an existing scope. If not `True`, and the existing scope already has +the given variables, an error is raised. +
+`name` + +String, the name of the layer. Layers with the same name will share +weights, but to avoid mistakes we require reuse=True in such cases. +
+`dtype` + +Default dtype of the layer (default of `None` means use the type of +the first input). Required when `build` is called before `call`. +
+`**kwargs` + +Dict, keyword named properties for common layer attributes, like +`trainable` etc when constructing the cell from configs of get_config(). +When restoring from CudnnLSTM-trained checkpoints, must use +`CudnnCompatibleLSTMCell` instead. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + +Integer or TensorShape: size of outputs produced by this cell. +
+`scope_name` + + +
+`state_size` + +size(s) of state(s) used by this cell. + +It can be represented by an Integer, a TensorShape or a tuple of Integers +or TensorShapes. +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + +Return zero-filled state tensor(s). + + + + + + + + + + + + + + +
Args
+`batch_size` + +int, float, or unit Tensor representing the batch size. +
+`dtype` + +the data type to use for the state. +
+ + + + + + + + + + + +
Returns
+If `state_size` is an int or TensorShape, then the return value is a +`N-D` tensor of shape `[batch_size, state_size]` filled with zeros. + +If `state_size` is a nested list or tuple, then the return value is +a nested list or tuple (of the same structure) of `2-D` tensors with +the shapes `[batch_size, s]` for each s in `state_size`. +
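#### Example:

A minimal graph-mode sketch (hand-written, assuming TensorFlow 2.x with `tf.compat.v1.disable_eager_execution()`); the batch size, sequence length and feature depth are arbitrary illustration values:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

cell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(num_units=8)
inputs = tf.compat.v1.placeholder(tf.float32, [32, 10, 4])     # [batch, time, depth]

init_state = cell.zero_state(batch_size=32, dtype=tf.float32)  # LSTMStateTuple of zeros
outputs, final_state = tf.compat.v1.nn.dynamic_rnn(
    cell, inputs, initial_state=init_state)
# outputs: [32, 10, 8]; final_state: LSTMStateTuple(c=[32, 8], h=[32, 8])
```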
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/BasicRNNCell.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/BasicRNNCell.md new file mode 100644 index 00000000000..17d87abce0f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/BasicRNNCell.md @@ -0,0 +1,219 @@ +description: The most basic RNN cell. + +
+ + + + + + +
+ +# tf.compat.v1.nn.rnn_cell.BasicRNNCell + + + + + + + + + +The most basic RNN cell. + + + + + + + +Note that this cell is not optimized for performance. Please use +`tf.contrib.cudnn_rnn.CudnnRNNTanh` for better performance on GPU. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_units` + +int, The number of units in the RNN cell. +
+`activation` + +Nonlinearity to use. Default: `tanh`. It could also be string +that is within Keras activation function names. +
+`reuse` + +(optional) Python boolean describing whether to reuse variables in an +existing scope. If not `True`, and the existing scope already has the +given variables, an error is raised. +
+`name` + +String, the name of the layer. Layers with the same name will share +weights, but to avoid mistakes we require reuse=True in such cases. +
+`dtype` + +Default dtype of the layer (default of `None` means use the type of +the first input). Required when `build` is called before `call`. +
+`**kwargs` + +Dict, keyword named properties for common layer attributes, like +`trainable` etc when constructing the cell from configs of get_config(). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + +Integer or TensorShape: size of outputs produced by this cell. +
+`scope_name` + + +
+`state_size` + +size(s) of state(s) used by this cell. + +It can be represented by an Integer, a TensorShape or a tuple of Integers +or TensorShapes. +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + +Return zero-filled state tensor(s). + + + + + + + + + + + + + + +
Args
+`batch_size` + +int, float, or unit Tensor representing the batch size. +
+`dtype` + +the data type to use for the state. +
+ + + + + + + + + + + +
Returns
+If `state_size` is an int or TensorShape, then the return value is a +`N-D` tensor of shape `[batch_size, state_size]` filled with zeros. + +If `state_size` is a nested list or tuple, then the return value is +a nested list or tuple (of the same structure) of `2-D` tensors with +the shapes `[batch_size, s]` for each s in `state_size`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/DeviceWrapper.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/DeviceWrapper.md new file mode 100644 index 00000000000..3a38a03bd04 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/DeviceWrapper.md @@ -0,0 +1,144 @@ +description: Operator that ensures an RNNCell runs on a particular device. + +
+ + + + + + +
+ +# tf.compat.v1.nn.rnn_cell.DeviceWrapper + + + + + + + + + +Operator that ensures an RNNCell runs on a particular device. + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +An instance of `RNNCell`. +
+`device` + +A device string or function, for passing to tf.device. +
+`**kwargs` + +dict of keyword arguments for base layer. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + + +
+`scope_name` + + +
+`state_size` + + +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/DropoutWrapper.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/DropoutWrapper.md new file mode 100644 index 00000000000..e3fdda4a494 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/DropoutWrapper.md @@ -0,0 +1,251 @@ +description: Operator adding dropout to inputs and outputs of the given cell. + +
+ + + + + + +
+ +# tf.compat.v1.nn.rnn_cell.DropoutWrapper + + + + + + + + + +Operator adding dropout to inputs and outputs of the given cell. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +an RNNCell, a projection to output_size is added to it. +
+`input_keep_prob` + +unit Tensor or float between 0 and 1, input keep +probability; if it is constant and 1, no input dropout will be added. +
+`output_keep_prob` + +unit Tensor or float between 0 and 1, output keep +probability; if it is constant and 1, no output dropout will be added. +
+`state_keep_prob` + +unit Tensor or float between 0 and 1, output keep +probability; if it is constant and 1, no output dropout will be added. +State dropout is performed on the outgoing states of the cell. **Note** +the state components to which dropout is applied when `state_keep_prob` +is in `(0, 1)` are also determined by the argument +`dropout_state_filter_visitor` (e.g. by default dropout is never applied +to the `c` component of an `LSTMStateTuple`). +
+`variational_recurrent` + +Python bool. If `True`, then the same dropout +pattern is applied across all time steps per run call. If this parameter +is set, `input_size` **must** be provided. +
+`input_size`
+
+(optional) (possibly nested tuple of) `TensorShape` objects
+containing the depth(s) of the input tensors expected to be passed in to
+the `DropoutWrapper`. Required and used **iff**
+`variational_recurrent = True` and `input_keep_prob < 1`.
+
+`dtype` + +(optional) The `dtype` of the input, state, and output tensors. +Required and used **iff** `variational_recurrent = True`. +
+`seed` + +(optional) integer, the randomness seed. +
+`dropout_state_filter_visitor`
+
+(optional), default: (see below). Function
+that takes any hierarchical level of the state and returns a scalar or
+depth=1 structure of Python booleans describing which terms in the state
+should be dropped out. In addition, if the function returns `True`,
+dropout is applied across this sublevel. If the function returns
+`False`, dropout is not applied across this entire sublevel.
+Default behavior: perform dropout on all terms except the memory (`c`)
+state of `LSTMStateTuple` objects, and don't try to apply dropout to
+`TensorArray` objects:
+
+```
+def dropout_state_filter_visitor(s):
+  if isinstance(s, LSTMStateTuple):
+    # Never perform dropout on the c state.
+    return LSTMStateTuple(c=False, h=True)
+  elif isinstance(s, TensorArray):
+    return False
+  return True
+```
+
+`**kwargs` + +dict of keyword arguments for base layer. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `cell` is not an `RNNCell`, or `keep_state_fn` is provided +but not `callable`. +
+`ValueError` + +if any of the keep_probs are not between 0 and 1. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + + +
+`scope_name` + + +
+`state_size` + + +
+`wrapped_cell` + + +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/GRUCell.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/GRUCell.md new file mode 100644 index 00000000000..3d5640ce614 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/GRUCell.md @@ -0,0 +1,242 @@ +description: Gated Recurrent Unit cell. + +
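+A minimal usage sketch of `tf.compat.v1.nn.rnn_cell.GRUCell` (sizes and shapes are
+illustrative; assumes v1 graph-mode behavior):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+cell = tf.compat.v1.nn.rnn_cell.GRUCell(num_units=64)
+inputs = tf.compat.v1.placeholder(tf.float32, [None, 50, 16])  # [batch, time, depth]
+outputs, final_state = tf.compat.v1.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
+# outputs: [batch, 50, 64]; final_state: [batch, 64]
+```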
+ + + + + + +
+ +# tf.compat.v1.nn.rnn_cell.GRUCell + + + + + + + + + +Gated Recurrent Unit cell. + + + + + + + +Note that this cell is not optimized for performance. Please use +`tf.contrib.cudnn_rnn.CudnnGRU` for better performance on GPU, or +`tf.contrib.rnn.GRUBlockCellV2` for better performance on CPU. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_units` + +int, The number of units in the GRU cell. +
+`activation` + +Nonlinearity to use. Default: `tanh`. +
+`reuse` + +(optional) Python boolean describing whether to reuse variables in an +existing scope. If not `True`, and the existing scope already has the +given variables, an error is raised. +
+`kernel_initializer` + +(optional) The initializer to use for the weight and +projection matrices. +
+`bias_initializer` + +(optional) The initializer to use for the bias. +
+`name` + +String, the name of the layer. Layers with the same name will share +weights, but to avoid mistakes we require reuse=True in such cases. +
+`dtype` + +Default dtype of the layer (default of `None` means use the type of +the first input). Required when `build` is called before `call`. +
+`**kwargs` + +Dict, keyword named properties for common layer attributes, like +`trainable` etc when constructing the cell from configs of get_config(). + +References: +Learning Phrase Representations using RNN Encoder Decoder for Statistical +Machine Translation: +[Cho et al., 2014] +(https://aclanthology.coli.uni-saarland.de/papers/D14-1179/d14-1179) +([pdf](http://emnlp2014.org/papers/pdf/EMNLP2014179.pdf)) +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + +Integer or TensorShape: size of outputs produced by this cell. +
+`scope_name` + + +
+`state_size` + +size(s) of state(s) used by this cell. + +It can be represented by an Integer, a TensorShape or a tuple of Integers +or TensorShapes. +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + +Return zero-filled state tensor(s). + + + + + + + + + + + + + + +
Args
+`batch_size` + +int, float, or unit Tensor representing the batch size. +
+`dtype` + +the data type to use for the state. +
+ + + + + + + + + + + +
Returns
+If `state_size` is an int or TensorShape, then the return value is a +`N-D` tensor of shape `[batch_size, state_size]` filled with zeros. + +If `state_size` is a nested list or tuple, then the return value is +a nested list or tuple (of the same structure) of `2-D` tensors with +the shapes `[batch_size, s]` for each s in `state_size`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/LSTMCell.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/LSTMCell.md new file mode 100644 index 00000000000..288c6c24874 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/LSTMCell.md @@ -0,0 +1,322 @@ +description: Long short-term memory unit (LSTM) recurrent network cell. + +
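+A minimal usage sketch of `tf.compat.v1.nn.rnn_cell.LSTMCell` (the unit counts,
+projection size, and shapes are illustrative; assumes v1 graph-mode behavior):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+cell = tf.compat.v1.nn.rnn_cell.LSTMCell(
+    num_units=256, use_peepholes=True, num_proj=128)
+inputs = tf.compat.v1.placeholder(tf.float32, [None, 30, 64])  # [batch, time, depth]
+outputs, state = tf.compat.v1.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
+# With num_proj=128, `state` is an LSTMStateTuple with c: [batch, 256], h: [batch, 128].
+```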
+ + + + + + +
+ +# tf.compat.v1.nn.rnn_cell.LSTMCell + + + + + + + + + +Long short-term memory unit (LSTM) recurrent network cell. + + + + + + + +The default non-peephole implementation is based on (Gers et al., 1999). +The peephole implementation is based on (Sak et al., 2014). + +The class uses optional peep-hole connections, optional cell clipping, and +an optional projection layer. + +Note that this cell is not optimized for performance. Please use +`tf.contrib.cudnn_rnn.CudnnLSTM` for better performance on GPU, or +`tf.contrib.rnn.LSTMBlockCell` and `tf.contrib.rnn.LSTMBlockFusedCell` for +better performance on CPU. +References: + Long short-term memory recurrent neural network architectures for large + scale acoustic modeling: + [Sak et al., 2014] + (https://www.isca-speech.org/archive/interspeech_2014/i14_0338.html) + ([pdf] + (https://www.isca-speech.org/archive/archive_papers/interspeech_2014/i14_0338.pdf)) + Learning to forget: + [Gers et al., 1999] + (http://digital-library.theiet.org/content/conferences/10.1049/cp_19991218) + ([pdf](https://arxiv.org/pdf/1409.2329.pdf)) + Long Short-Term Memory: + [Hochreiter et al., 1997] + (https://www.mitpressjournals.org/doi/abs/10.1162/neco.1997.9.8.1735) + ([pdf](http://ml.jku.at/publications/older/3504.pdf)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_units` + +int, The number of units in the LSTM cell. +
+`use_peepholes` + +bool, set True to enable diagonal/peephole connections. +
+`cell_clip` + +(optional) A float value, if provided the cell state is clipped +by this value prior to the cell output activation. +
+`initializer` + +(optional) The initializer to use for the weight and +projection matrices. +
+`num_proj` + +(optional) int, The output dimensionality for the projection +matrices. If None, no projection is performed. +
+`proj_clip` + +(optional) A float value. If `num_proj > 0` and `proj_clip` is +provided, then the projected values are clipped elementwise to within +`[-proj_clip, proj_clip]`. +
+`num_unit_shards` + +Deprecated, will be removed by Jan. 2017. Use a +variable_scope partitioner instead. +
+`num_proj_shards` + +Deprecated, will be removed by Jan. 2017. Use a +variable_scope partitioner instead. +
+`forget_bias` + +Biases of the forget gate are initialized by default to 1 in +order to reduce the scale of forgetting at the beginning of the +training. Must set it manually to `0.0` when restoring from CudnnLSTM +trained checkpoints. +
+`state_is_tuple` + +If True, accepted and returned states are 2-tuples of the +`c_state` and `m_state`. If False, they are concatenated along the +column axis. This latter behavior will soon be deprecated. +
+`activation` + +Activation function of the inner states. Default: `tanh`. It +could also be string that is within Keras activation function names. +
+`reuse` + +(optional) Python boolean describing whether to reuse variables in +an existing scope. If not `True`, and the existing scope already has +the given variables, an error is raised. +
+`name` + +String, the name of the layer. Layers with the same name will share +weights, but to avoid mistakes we require reuse=True in such cases. +
+`dtype` + +Default dtype of the layer (default of `None` means use the type of +the first input). Required when `build` is called before `call`. +
+`**kwargs` + +Dict, keyword named properties for common layer attributes, like +`trainable` etc when constructing the cell from configs of get_config(). +When restoring from CudnnLSTM-trained checkpoints, use +`CudnnCompatibleLSTMCell` instead. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + +Integer or TensorShape: size of outputs produced by this cell. +
+`scope_name` + + +
+`state_size` + +size(s) of state(s) used by this cell. + +It can be represented by an Integer, a TensorShape or a tuple of Integers +or TensorShapes. +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + +Return zero-filled state tensor(s). + + + + + + + + + + + + + + +
Args
+`batch_size` + +int, float, or unit Tensor representing the batch size. +
+`dtype` + +the data type to use for the state. +
+ + + + + + + + + + + +
Returns
+If `state_size` is an int or TensorShape, then the return value is a +`N-D` tensor of shape `[batch_size, state_size]` filled with zeros. + +If `state_size` is a nested list or tuple, then the return value is +a nested list or tuple (of the same structure) of `2-D` tensors with +the shapes `[batch_size, s]` for each s in `state_size`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/LSTMStateTuple.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/LSTMStateTuple.md new file mode 100644 index 00000000000..526fa447c3e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/LSTMStateTuple.md @@ -0,0 +1,79 @@ +description: Tuple used by LSTM Cells for state_size, zero_state, and output state. + +
+ + + + + +
+ +# tf.compat.v1.nn.rnn_cell.LSTMStateTuple + + + + + + + + + +Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state. + + + + + + + +Stores two elements: `(c, h)`, in that order. Where `c` is the hidden state +and `h` is the output. + +Only used when `state_is_tuple=True`. + + + + + + + + + + + + + + + + + + +
+`c` + + +
+`h` + + +
+`dtype` + + +
+ + + +## Class Variables + +* `c` +* `h` diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/MultiRNNCell.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/MultiRNNCell.md new file mode 100644 index 00000000000..1b67b030e67 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/MultiRNNCell.md @@ -0,0 +1,215 @@ +description: RNN cell composed sequentially of multiple simple cells. + +
+ + + + + + +
+ +# tf.compat.v1.nn.rnn_cell.MultiRNNCell + + + + + + + + + +RNN cell composed sequentially of multiple simple cells. + +Inherits From: [`RNNCell`](../../../../../tf/compat/v1/nn/rnn_cell/RNNCell.md) + + + + + + + + +#### Example: + + + +```python +num_units = [128, 64] +cells = [BasicLSTMCell(num_units=n) for n in num_units] +stacked_rnn_cell = MultiRNNCell(cells) +``` + + + + + + + + + + + + + +
+`cells` + +list of RNNCells that will be composed in this order. +
+`state_is_tuple` + +If True, accepted and returned states are n-tuples, where +`n = len(cells)`. If False, the states are all concatenated along the +column axis. This latter behavior will soon be deprecated. +
+ + + + + + + + + + + + +
+`ValueError` + +if cells is empty (not allowed), or at least one of the cells +returns a state tuple but the flag `state_is_tuple` is `False`. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + +Integer or TensorShape: size of outputs produced by this cell. +
+`scope_name` + + +
+`state_size` + +size(s) of state(s) used by this cell. + +It can be represented by an Integer, a TensorShape or a tuple of Integers +or TensorShapes. +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + +Return zero-filled state tensor(s). + + + + + + + + + + + + + + +
Args
+`batch_size` + +int, float, or unit Tensor representing the batch size. +
+`dtype` + +the data type to use for the state. +
+ + + + + + + + + + + +
Returns
+If `state_size` is an int or TensorShape, then the return value is a +`N-D` tensor of shape `[batch_size, state_size]` filled with zeros. + +If `state_size` is a nested list or tuple, then the return value is +a nested list or tuple (of the same structure) of `2-D` tensors with +the shapes `[batch_size, s]` for each s in `state_size`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/RNNCell.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/RNNCell.md new file mode 100644 index 00000000000..5e89cc4d74a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/RNNCell.md @@ -0,0 +1,178 @@ +description: Abstract object representing an RNN cell. + +
+ + + + + + +
+ +# tf.compat.v1.nn.rnn_cell.RNNCell + + + + + + + + + +Abstract object representing an RNN cell. + +Inherits From: [`Layer`](../../../../../tf/compat/v1/layers/Layer.md) + + + + + + + +Every `RNNCell` must have the properties below and implement `call` with +the signature `(output, next_state) = call(input, state)`. The optional +third input argument, `scope`, is allowed for backwards compatibility +purposes; but should be left off for new subclasses. + +This definition of cell differs from the definition used in the literature. +In the literature, 'cell' refers to an object with a single scalar output. +This definition refers to a horizontal array of such units. + +An RNN cell, in the most abstract setting, is anything that has +a state and performs some operation that takes a matrix of inputs. +This operation results in an output matrix with `self.output_size` columns. +If `self.state_size` is an integer, this operation also results in a new +state matrix with `self.state_size` columns. If `self.state_size` is a +(possibly nested tuple of) TensorShape object(s), then it should return a +matching structure of Tensors having shape `[batch_size].concatenate(s)` +for each `s` in `self.batch_size`. + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + +Integer or TensorShape: size of outputs produced by this cell. +
+`scope_name` + + +
+`state_size` + +size(s) of state(s) used by this cell. + +It can be represented by an Integer, a TensorShape or a tuple of Integers +or TensorShapes. +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + +Return zero-filled state tensor(s). + + + + + + + + + + + + + + +
Args
+`batch_size` + +int, float, or unit Tensor representing the batch size. +
+`dtype` + +the data type to use for the state. +
+ + + + + + + + + + + +
Returns
+If `state_size` is an int or TensorShape, then the return value is a +`N-D` tensor of shape `[batch_size, state_size]` filled with zeros. + +If `state_size` is a nested list or tuple, then the return value is +a nested list or tuple (of the same structure) of `2-D` tensors with +the shapes `[batch_size, s]` for each s in `state_size`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/ResidualWrapper.md b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/ResidualWrapper.md new file mode 100644 index 00000000000..dad109900fd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/rnn_cell/ResidualWrapper.md @@ -0,0 +1,147 @@ +description: RNNCell wrapper that ensures cell inputs are added to the outputs. + +
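+A minimal sketch of `tf.compat.v1.nn.rnn_cell.ResidualWrapper` with the default
+`residual_fn` (sizes and shapes are illustrative; assumes v1 graph-mode behavior):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+# The default residual_fn adds inputs to outputs, so the input depth
+# must match the wrapped cell's output size (32 here).
+cell = tf.compat.v1.nn.rnn_cell.ResidualWrapper(
+    tf.compat.v1.nn.rnn_cell.GRUCell(num_units=32))
+inputs = tf.compat.v1.placeholder(tf.float32, [None, 10, 32])  # [batch, time, depth]
+outputs, state = tf.compat.v1.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
+```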
+ + + + + + +
+ +# tf.compat.v1.nn.rnn_cell.ResidualWrapper + + + + + + + + + +RNNCell wrapper that ensures cell inputs are added to the outputs. + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +An instance of `RNNCell`. +
+`residual_fn` + +(Optional) The function to map raw cell inputs and raw cell +outputs to the actual cell outputs of the residual network. +Defaults to calling nest.map_structure on (lambda i, o: i + o), inputs +and outputs. +
+`**kwargs` + +dict of keyword arguments for base layer. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +DEPRECATED FUNCTION + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Stop using this property because tf.layers layers no longer track their graph. +
+`output_size` + + +
+`scope_name` + + +
+`state_size` + + +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/nn/safe_embedding_lookup_sparse.md b/site/en/api_docs/python/tf/compat/v1/nn/safe_embedding_lookup_sparse.md new file mode 100644 index 00000000000..05eb1d116b0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/safe_embedding_lookup_sparse.md @@ -0,0 +1,155 @@ +description: Lookup embedding results, accounting for invalid IDs and empty features. + +
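+A minimal sketch of `tf.compat.v1.nn.safe_embedding_lookup_sparse` (the vocabulary
+size, embedding dimension, and ids are illustrative; assumes v1 graph-mode behavior):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+embedding_weights = tf.compat.v1.get_variable("embeddings", shape=[10, 4])
+
+# Two entries: row 0 has ids [3, -1] (the -1 is pruned), row 1 has no features.
+sparse_ids = tf.sparse.SparseTensor(
+    indices=[[0, 0], [0, 1]],
+    values=tf.constant([3, -1], dtype=tf.int64),
+    dense_shape=[2, 2])
+
+embedded = tf.compat.v1.nn.safe_embedding_lookup_sparse(
+    [embedding_weights], sparse_ids, combiner="mean")
+# embedded has shape [2, 4]; row 1 is the zero vector since no default_id was given.
+```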
+ + +
+ +# tf.compat.v1.nn.safe_embedding_lookup_sparse + + + + + + + + + +Lookup embedding results, accounting for invalid IDs and empty features. + + + + + + + +The partitioned embedding in `embedding_weights` must all be the same shape +except for the first dimension. The first dimension is allowed to vary as the +vocabulary size is not necessarily a multiple of `P`. `embedding_weights` +may be a `PartitionedVariable` as returned by using +tf.compat.v1.get_variable() with a +partitioner. + +Invalid IDs (< 0) are pruned from input IDs and weights, as well as any IDs +with non-positive weight. For an entry with no features, the embedding vector +for `default_id` is returned, or the 0-vector if `default_id` is not supplied. + +The ids and weights may be multi-dimensional. Embeddings are always aggregated +along the last dimension. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`embedding_weights` + +A list of `P` float `Tensor`s or values representing +partitioned embedding `Tensor`s. Alternatively, a `PartitionedVariable` +created by partitioning along dimension 0. The total unpartitioned shape +should be `[e_0, e_1, ..., e_m]`, where `e_0` represents the vocab size +and `e_1, ..., e_m` are the embedding dimensions. +
+`sparse_ids` + +`SparseTensor` of shape `[d_0, d_1, ..., d_n]` containing the +ids. `d_0` is typically batch size. +
+`sparse_weights`
+
+`SparseTensor` of same shape as `sparse_ids`, containing
+float weights corresponding to `sparse_ids`, or `None` if all weights
+are assumed to be 1.0.
+
+`combiner` + +A string specifying how to combine embedding results for each +entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean" the +default. +
+`default_id` + +The id to use for an entry with no features. +
+`name` + +A name for this operation (optional). +
+`partition_strategy` + +A string specifying the partitioning strategy. Currently +`"div"` and `"mod"` are supported. Default is `"div"`. +
+`max_norm` + +If not `None`, all embeddings are l2-normalized to max_norm before +combining. +
+ + + + + + + + + + + +
+Dense `Tensor` of shape `[d_0, d_1, ..., d_{n-1}, e_1, ..., e_m]`. +
+ + + + + + + + + + + + +
+`ValueError` + +if `embedding_weights` is empty. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/sampled_softmax_loss.md b/site/en/api_docs/python/tf/compat/v1/nn/sampled_softmax_loss.md new file mode 100644 index 00000000000..babac86dca3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/sampled_softmax_loss.md @@ -0,0 +1,195 @@ +description: Computes and returns the sampled softmax training loss. + +
+ + +
+ +# tf.compat.v1.nn.sampled_softmax_loss + + + + + + + + + +Computes and returns the sampled softmax training loss. + + + + + + + +This is a faster way to train a softmax classifier over a huge number of +classes. + +This operation is for training only. It is generally an underestimate of +the full softmax loss. + +A common use case is to use this method for training, and calculate the full +softmax loss for evaluation or inference. In this case, you must set +`partition_strategy="div"` for the two losses to be consistent, as in the +following example: + +```python +if mode == "train": + loss = tf.nn.sampled_softmax_loss( + weights=weights, + biases=biases, + labels=labels, + inputs=inputs, + ..., + partition_strategy="div") +elif mode == "eval": + logits = tf.matmul(inputs, tf.transpose(weights)) + logits = tf.nn.bias_add(logits, biases) + labels_one_hot = tf.one_hot(labels, n_classes) + loss = tf.nn.softmax_cross_entropy_with_logits( + labels=labels_one_hot, + logits=logits) +``` + +See our Candidate Sampling Algorithms Reference +([pdf](https://www.tensorflow.org/extras/candidate_sampling.pdf)). +Also see Section 3 of (Jean et al., 2014) for the math. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`weights` + +A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` +objects whose concatenation along dimension 0 has shape +[num_classes, dim]. The (possibly-sharded) class embeddings. +
+`biases` + +A `Tensor` of shape `[num_classes]`. The class biases. +
+`labels` + +A `Tensor` of type `int64` and shape `[batch_size, +num_true]`. The target classes. Note that this format differs from +the `labels` argument of `nn.softmax_cross_entropy_with_logits`. +
+`inputs` + +A `Tensor` of shape `[batch_size, dim]`. The forward +activations of the input network. +
+`num_sampled` + +An `int`. The number of classes to randomly sample per batch. +
+`num_classes` + +An `int`. The number of possible classes. +
+`num_true` + +An `int`. The number of target classes per training example. +
+`sampled_values` + +a tuple of (`sampled_candidates`, `true_expected_count`, +`sampled_expected_count`) returned by a `*_candidate_sampler` function. +(if None, we default to `log_uniform_candidate_sampler`) +
+`remove_accidental_hits`
+
+A `bool`. Whether to remove "accidental hits"
+where a sampled class equals one of the target classes. Default is
+True.
+
+`partition_strategy` + +A string specifying the partitioning strategy, relevant +if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. +Default is `"mod"`. See `tf.nn.embedding_lookup` for more details. +
+`name` + +A name for the operation (optional). +
+`seed` + +random seed for candidate sampling. Default to None, which doesn't set +the op-level random seed for candidate sampling. +
+ + + + + + + + + + + +
+A `batch_size` 1-D tensor of per-example sampled softmax losses. +
+ + + +#### References: + +On Using Very Large Target Vocabulary for Neural Machine Translation: + [Jean et al., 2014] + (https://aclanthology.coli.uni-saarland.de/papers/P15-1001/p15-1001) + ([pdf](http://aclweb.org/anthology/P15-1001)) diff --git a/site/en/api_docs/python/tf/compat/v1/nn/separable_conv2d.md b/site/en/api_docs/python/tf/compat/v1/nn/separable_conv2d.md new file mode 100644 index 00000000000..0e06acbecfa --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/separable_conv2d.md @@ -0,0 +1,150 @@ +description: 2-D convolution with separable filters. + +
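+A minimal sketch of `tf.compat.v1.nn.separable_conv2d` (the image and filter
+shapes are illustrative):
+
+```python
+import tensorflow as tf
+
+images = tf.random.normal([1, 8, 8, 3])                  # NHWC input
+depthwise_filter = tf.random.normal([3, 3, 3, 2])        # channel_multiplier = 2
+pointwise_filter = tf.random.normal([1, 1, 3 * 2, 16])   # mixes 6 channels into 16
+out = tf.compat.v1.nn.separable_conv2d(
+    images, depthwise_filter, pointwise_filter,
+    strides=[1, 1, 1, 1], padding="SAME")
+# out has shape [1, 8, 8, 16]
+```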
+ + +
+ +# tf.compat.v1.nn.separable_conv2d + + + + + + + + + +2-D convolution with separable filters. + + + + + + + +Performs a depthwise convolution that acts separately on channels followed by +a pointwise convolution that mixes channels. Note that this is separability +between dimensions `[1, 2]` and `3`, not spatial separability between +dimensions `1` and `2`. + +In detail, with the default NHWC format, + + output[b, i, j, k] = sum_{di, dj, q, r} + input[b, strides[1] * i + di, strides[2] * j + dj, q] * + depthwise_filter[di, dj, q, r] * + pointwise_filter[0, 0, q * channel_multiplier + r, k] + +`strides` controls the strides for the depthwise convolution only, since +the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have +`strides[0] = strides[3] = 1`. For the most common case of the same +horizontal and vertical strides, `strides = [1, stride, stride, 1]`. +If any value in `rate` is greater than 1, we perform atrous depthwise +convolution, in which case all values in the `strides` tensor must be equal +to 1. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +4-D `Tensor` with shape according to `data_format`. +
+`depthwise_filter` + +4-D `Tensor` with shape +`[filter_height, filter_width, in_channels, channel_multiplier]`. +Contains `in_channels` convolutional filters of depth 1. +
+`pointwise_filter` + +4-D `Tensor` with shape +`[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise +filter to mix channels after `depthwise_filter` has convolved spatially. +
+`strides` + +1-D of size 4. The strides for the depthwise convolution for +each dimension of `input`. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +See the "returns" section of tf.nn.convolution for details. +
+`rate` + +1-D of size 2. The dilation rate in which we sample input values +across the `height` and `width` dimensions in atrous convolution. If it is +greater than 1, then all values of strides must be 1. +
+`name` + +A name for this operation (optional). +
+`data_format` + +The data format for input. Either "NHWC" (default) or "NCHW". +
+`dilations` + +Alias of rate. +
+ + + + + + + + + + + +
+A 4-D `Tensor` with shape according to 'data_format'. For +example, with data_format="NHWC", shape is [batch, out_height, +out_width, out_channels]. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/sigmoid_cross_entropy_with_logits.md b/site/en/api_docs/python/tf/compat/v1/nn/sigmoid_cross_entropy_with_logits.md new file mode 100644 index 00000000000..9f0f8b42508 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/sigmoid_cross_entropy_with_logits.md @@ -0,0 +1,129 @@ +description: Computes sigmoid cross entropy given logits. + +
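+A minimal sketch of `tf.compat.v1.nn.sigmoid_cross_entropy_with_logits` on a
+multi-label example (the values are illustrative):
+
+```python
+import tensorflow as tf
+
+# Each of the three classes is scored independently (multi-label targets).
+labels = tf.constant([[1.0, 0.0, 1.0]])
+logits = tf.constant([[2.5, -1.0, 0.3]])
+loss = tf.compat.v1.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
+# loss has the same shape as logits: one logistic loss per element.
+```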
+ + +
+ +# tf.compat.v1.nn.sigmoid_cross_entropy_with_logits + + + + + + + + + +Computes sigmoid cross entropy given `logits`. + + + + + + + +Measures the probability error in discrete classification tasks in which each +class is independent and not mutually exclusive. For instance, one could +perform multilabel classification where a picture can contain both an elephant +and a dog at the same time. + +For brevity, let `x = logits`, `z = labels`. The logistic loss is + + z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x)) + = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x))) + = z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x))) + = z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)) + = (1 - z) * x + log(1 + exp(-x)) + = x - x * z + log(1 + exp(-x)) + +For x < 0, to avoid overflow in exp(-x), we reformulate the above + + x - x * z + log(1 + exp(-x)) + = log(exp(x)) - x * z + log(1 + exp(-x)) + = - x * z + log(1 + exp(x)) + +Hence, to ensure stability and avoid overflow, the implementation uses this +equivalent formulation + + max(x, 0) - x * z + log(1 + exp(-abs(x))) + +`logits` and `labels` must have the same type and shape. + + + + + + + + + + + + + + + + + + + +
+`_sentinel` + +Used to prevent positional parameters. Internal, do not use. +
+`labels` + +A `Tensor` of the same type and shape as `logits`. +
+`logits` + +A `Tensor` of type `float32` or `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of the same shape as `logits` with the componentwise +logistic losses. +
+ + + + + + + + + + + + +
+`ValueError` + +If `logits` and `labels` do not have the same shape. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/softmax_cross_entropy_with_logits.md b/site/en/api_docs/python/tf/compat/v1/nn/softmax_cross_entropy_with_logits.md new file mode 100644 index 00000000000..4038f50640f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/softmax_cross_entropy_with_logits.md @@ -0,0 +1,141 @@ +description: Computes softmax cross entropy between logits and labels. (deprecated) + +
+ + +
+ +# tf.compat.v1.nn.softmax_cross_entropy_with_logits + + + + + + + + + +Computes softmax cross entropy between `logits` and `labels`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: + +Future major versions of TensorFlow will allow gradients to flow +into the labels input on backprop by default. + +See `tf.nn.softmax_cross_entropy_with_logits_v2`. + + +Measures the probability error in discrete classification tasks in which the +classes are mutually exclusive (each entry is in exactly one class). For +example, each CIFAR-10 image is labeled with one and only one label: an image +can be a dog or a truck, but not both. + +**NOTE:** While the classes are mutually exclusive, their probabilities +need not be. All that is required is that each row of `labels` is +a valid probability distribution. If they are not, the computation of the +gradient will be incorrect. + +If using exclusive `labels` (wherein one and only +one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`. + +**WARNING:** This op expects unscaled logits, since it performs a `softmax` +on `logits` internally for efficiency. Do not call this op with the +output of `softmax`, as it will produce incorrect results. + +A common use case is to have logits and labels of shape +`[batch_size, num_classes]`, but higher dimensions are supported, with +the `dim` argument specifying the class dimension. + +Backpropagation will happen only into `logits`. To calculate a cross entropy +loss that allows backpropagation into both `logits` and `labels`, see +`tf.nn.softmax_cross_entropy_with_logits_v2`. + +**Note that to avoid confusion, it is required to pass only named arguments to +this function.** + + + + + + + + + + + + + + + + + + + + + + + + + +
+`_sentinel` + +Used to prevent positional parameters. Internal, do not use. +
+`labels` + +Each vector along the class dimension should hold a valid +probability distribution e.g. for the case in which labels are of shape +`[batch_size, num_classes]`, each row of `labels[i]` must be a valid +probability distribution. +
+`logits` + +Per-label activations, typically a linear output. These activation +energies are interpreted as unnormalized log probabilities. +
+`dim` + +The class dimension. Defaulted to -1 which is the last dimension. +
+`name` + +A name for the operation (optional). +
+`axis` + +Alias for dim. +
+ + + + + + + + + + + +
+A `Tensor` that contains the softmax cross entropy loss. Its type is the +same as `logits` and its shape is the same as `labels` except that it does +not have the last dimension of `labels`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/softmax_cross_entropy_with_logits_v2.md b/site/en/api_docs/python/tf/compat/v1/nn/softmax_cross_entropy_with_logits_v2.md new file mode 100644 index 00000000000..88fa8520f7f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/softmax_cross_entropy_with_logits_v2.md @@ -0,0 +1,131 @@ +description: Computes softmax cross entropy between logits and labels. (deprecated arguments) + +
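+A minimal sketch of `tf.compat.v1.nn.softmax_cross_entropy_with_logits_v2`
+(the label distributions and logits are illustrative):
+
+```python
+import tensorflow as tf
+
+# Each row of `labels` is a full probability distribution (soft targets allowed).
+labels = tf.constant([[0.0, 1.0, 0.0],
+                      [0.2, 0.3, 0.5]])
+logits = tf.constant([[2.0, 1.0, 0.1],
+                      [0.5, 0.5, 0.5]])
+loss = tf.compat.v1.nn.softmax_cross_entropy_with_logits_v2(
+    labels=labels, logits=logits)
+# loss has shape [2]: one cross-entropy value per example.
+```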
+ + +
+ +# tf.compat.v1.nn.softmax_cross_entropy_with_logits_v2 + + + + + + + + + +Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version. +Instructions for updating: +dim is deprecated, use axis instead + +Measures the probability error in discrete classification tasks in which the +classes are mutually exclusive (each entry is in exactly one class). For +example, each CIFAR-10 image is labeled with one and only one label: an image +can be a dog or a truck, but not both. + +**NOTE:** While the classes are mutually exclusive, their probabilities +need not be. All that is required is that each row of `labels` is +a valid probability distribution. If they are not, the computation of the +gradient will be incorrect. + +If using exclusive `labels` (wherein one and only +one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`. + +**WARNING:** This op expects unscaled logits, since it performs a `softmax` +on `logits` internally for efficiency. Do not call this op with the +output of `softmax`, as it will produce incorrect results. + +A common use case is to have logits and labels of shape +`[batch_size, num_classes]`, but higher dimensions are supported, with +the `axis` argument specifying the class dimension. + +`logits` and `labels` must have the same dtype (either `float16`, `float32`, +or `float64`). + +Backpropagation will happen into both `logits` and `labels`. To disallow +backpropagation into `labels`, pass label tensors through tf.stop_gradient +before feeding it to this function. + +**Note that to avoid confusion, it is required to pass only named arguments to +this function.** + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +Each vector along the class dimension should hold a valid +probability distribution e.g. for the case in which labels are of shape +`[batch_size, num_classes]`, each row of `labels[i]` must be a valid +probability distribution. +
+`logits` + +Unscaled log probabilities. +
+`axis` + +The class dimension. Defaulted to -1 which is the last dimension. +
+`name` + +A name for the operation (optional). +
+`dim` + +Deprecated alias for axis. +
+ + + + + + + + + + + +
+A `Tensor` that contains the softmax cross entropy loss. Its type is the +same as `logits` and its shape is the same as `labels` except that it does +not have the last dimension of `labels`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/sparse_softmax_cross_entropy_with_logits.md b/site/en/api_docs/python/tf/compat/v1/nn/sparse_softmax_cross_entropy_with_logits.md new file mode 100644 index 00000000000..5ab223d6e42 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/sparse_softmax_cross_entropy_with_logits.md @@ -0,0 +1,136 @@ +description: Computes sparse softmax cross entropy between logits and labels. + +
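+A minimal sketch of `tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits`
+(the class indices and logits are illustrative):
+
+```python
+import tensorflow as tf
+
+labels = tf.constant([2, 0])  # one integer class index per example
+logits = tf.constant([[0.1, 0.2, 3.0],
+                      [2.0, 0.1, 0.1]])
+loss = tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits(
+    labels=labels, logits=logits)
+# loss has shape [2], matching `labels`.
+```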
+ + +
+ +# tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits + + + + + + + + + +Computes sparse softmax cross entropy between `logits` and `labels`. + + + + + + + +Measures the probability error in discrete classification tasks in which the +classes are mutually exclusive (each entry is in exactly one class). For +example, each CIFAR-10 image is labeled with one and only one label: an image +can be a dog or a truck, but not both. + +**NOTE:** For this operation, the probability of a given label is considered +exclusive. That is, soft classes are not allowed, and the `labels` vector +must provide a single specific index for the true class for each row of +`logits` (each minibatch entry). For soft softmax classification with +a probability distribution for each entry, see +`softmax_cross_entropy_with_logits_v2`. + +**WARNING:** This op expects unscaled logits, since it performs a `softmax` +on `logits` internally for efficiency. Do not call this op with the +output of `softmax`, as it will produce incorrect results. + +A common use case is to have logits of shape +`[batch_size, num_classes]` and have labels of shape +`[batch_size]`, but higher dimensions are supported, in which +case the `dim`-th dimension is assumed to be of size `num_classes`. +`logits` must have the dtype of `float16`, `float32`, or `float64`, and +`labels` must have the dtype of `int32` or `int64`. + +**Note that to avoid confusion, it is required to pass only named arguments to +this function.** + + + + + + + + + + + + + + + + + + + +
+`_sentinel` + +Used to prevent positional parameters. Internal, do not use. +
+`labels` + +`Tensor` of shape `[d_0, d_1, ..., d_{r-1}]` (where `r` is rank of +`labels` and result) and dtype `int32` or `int64`. Each entry in `labels` +must be an index in `[0, num_classes)`. Other values will raise an +exception when this op is run on CPU, and return `NaN` for corresponding +loss and gradient rows on GPU. +
+`logits` + +Per-label activations (typically a linear output) of shape +`[d_0, d_1, ..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or +`float64`. These activation energies are interpreted as unnormalized log +probabilities. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of the same shape as `labels` and of the same type as `logits` +with the softmax cross entropy loss. +
+ + + + + + + + + + + + +
+`ValueError` + +If logits are scalars (need to have rank >= 1) or if the rank +of the labels is not equal to the rank of the logits minus one. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/static_bidirectional_rnn.md b/site/en/api_docs/python/tf/compat/v1/nn/static_bidirectional_rnn.md new file mode 100644 index 00000000000..830485ee499 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/static_bidirectional_rnn.md @@ -0,0 +1,163 @@ +description: Creates a bidirectional recurrent neural network. (deprecated) + +
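+A minimal sketch of `tf.compat.v1.nn.static_bidirectional_rnn` (the cell sizes,
+sequence length of 5, and input depth are illustrative; assumes v1 graph-mode
+behavior):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+cell_fw = tf.compat.v1.nn.rnn_cell.GRUCell(num_units=8)
+cell_bw = tf.compat.v1.nn.rnn_cell.GRUCell(num_units=8)
+# A length-5 list of [batch, input_size] tensors (the static time dimension).
+inputs = [tf.compat.v1.placeholder(tf.float32, [None, 4]) for _ in range(5)]
+outputs, state_fw, state_bw = tf.compat.v1.nn.static_bidirectional_rnn(
+    cell_fw, cell_bw, inputs, dtype=tf.float32)
+# Each outputs[t] has shape [batch, 16]: forward and backward outputs concatenated.
+```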
+ + +
+ +# tf.compat.v1.nn.static_bidirectional_rnn + + + + + + + + + +Creates a bidirectional recurrent neural network. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API + +Similar to the unidirectional case above (rnn) but takes input and builds +independent forward and backward RNNs with the final forward and backward +outputs depth-concatenated, such that the output will have the format +[time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of +forward and backward cell must match. The initial state for both directions +is zero by default (but can be set optionally) and no intermediate states are +ever returned -- the network is fully unrolled for the given (passed in) +length(s) of the sequence(s) or completely unrolled if length(s) is not given. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cell_fw` + +An instance of RNNCell, to be used for forward direction. +
+`cell_bw` + +An instance of RNNCell, to be used for backward direction. +
+`inputs` + +A length T list of inputs, each a tensor of shape [batch_size, +input_size], or a nested tuple of such elements. +
+`initial_state_fw` + +(optional) An initial state for the forward RNN. This must +be a tensor of appropriate type and shape `[batch_size, +cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a +tuple of tensors having shapes `[batch_size, s] for s in +cell_fw.state_size`. +
+`initial_state_bw` + +(optional) Same as for `initial_state_fw`, but using the +corresponding properties of `cell_bw`. +
+`dtype` + +(optional) The data type for the initial state. Required if either +of the initial states are not provided. +
+`sequence_length` + +(optional) An int32/int64 vector, size `[batch_size]`, +containing the actual lengths for each of the sequences. +
+`scope` + +VariableScope for the created subgraph; defaults to +"bidirectional_rnn" +
+ + + + + + + + + + + +
+A tuple (outputs, output_state_fw, output_state_bw) where: +outputs is a length `T` list of outputs (one for each input), which +are depth-concatenated forward and backward outputs. +output_state_fw is the final state of the forward rnn. +output_state_bw is the final state of the backward rnn. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `cell_fw` or `cell_bw` is not an instance of `RNNCell`. +
+`ValueError` + +If inputs is None or an empty list. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/static_rnn.md b/site/en/api_docs/python/tf/compat/v1/nn/static_rnn.md new file mode 100644 index 00000000000..e33ab9fe749 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/static_rnn.md @@ -0,0 +1,167 @@ +description: Creates a recurrent neural network specified by RNNCell cell. (deprecated) + +
+ + +
+ +# tf.compat.v1.nn.static_rnn + + + + + + + + + +Creates a recurrent neural network specified by RNNCell `cell`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API + +The simplest form of RNN network generated is: + +```python + state = cell.zero_state(...) + outputs = [] + for input_ in inputs: + output, state = cell(input_, state) + outputs.append(output) + return (outputs, state) +``` +However, a few other options are available: + +An initial state can be provided. +If the sequence_length vector is provided, dynamic calculation is performed. +This method of calculation does not compute the RNN steps past the maximum +sequence length of the minibatch (thus saving computational time), +and properly propagates the state at an example's sequence length +to the final state output. + +The dynamic calculation performed is, at time `t` for batch row `b`, + +```python + (output, state)(b, t) = + (t >= sequence_length(b)) + ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) + : cell(input(b, t), state(b, t - 1)) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +An instance of RNNCell. +
+`inputs` + +A length T list of inputs, each a `Tensor` of shape `[batch_size, +input_size]`, or a nested tuple of such elements. +
+`initial_state` + +(optional) An initial state for the RNN. If `cell.state_size` +is an integer, this must be a `Tensor` of appropriate type and shape +`[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this +should be a tuple of tensors having shapes `[batch_size, s] for s in +cell.state_size`. +
+`dtype` + +(optional) The data type for the initial state and expected output. +Required if initial_state is not provided or RNN state has a heterogeneous +dtype. +
+`sequence_length` + +Specifies the length of each sequence in inputs. An int32 +or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`. +
+`scope` + +VariableScope for the created subgraph; defaults to "rnn". +
+ + + + + + + + + + + +
+A pair (outputs, state) where: + +- outputs is a length T list of outputs (one for each input), or a nested +tuple of such elements. +- state is the final state +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `cell` is not an instance of RNNCell. +
+`ValueError` + +If `inputs` is `None` or an empty list, or if the input depth +(column size) cannot be inferred from inputs via shape inference. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/static_state_saving_rnn.md b/site/en/api_docs/python/tf/compat/v1/nn/static_state_saving_rnn.md new file mode 100644 index 00000000000..74ba0f2dc2a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/static_state_saving_rnn.md @@ -0,0 +1,134 @@ +description: RNN that accepts a state saver for time-truncated RNN calculation. (deprecated) + +
+ + +
+ +# tf.compat.v1.nn.static_state_saving_rnn + + + + + + + + + +RNN that accepts a state saver for time-truncated RNN calculation. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use `keras.layers.RNN(cell, stateful=True)`, which is equivalent to this API + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +An instance of `RNNCell`. +
+`inputs` + +A length T list of inputs, each a `Tensor` of shape `[batch_size, +input_size]`. +
+`state_saver` + +A state saver object with methods `state` and `save_state`. +
+`state_name` + +Python string or tuple of strings. The name to use with the +state_saver. If the cell returns tuples of states (i.e., `cell.state_size` +is a tuple) then `state_name` should be a tuple of strings having the same +length as `cell.state_size`. Otherwise it should be a single string. +
+`sequence_length` + +(optional) An int32/int64 vector size [batch_size]. See the +documentation for rnn() for more details about sequence_length. +
+`scope` + +VariableScope for the created subgraph; defaults to "rnn". +
+ + + + + + + + + + + +
+A pair (outputs, state) where:
+outputs is a length T list of outputs (one for each input).
+state is the final state.
+
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `cell` is not an instance of RNNCell. +
+`ValueError` + +If `inputs` is `None` or an empty list, or if the arity and +type of `state_name` does not match that of `cell.state_size`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/sufficient_statistics.md b/site/en/api_docs/python/tf/compat/v1/nn/sufficient_statistics.md new file mode 100644 index 00000000000..dc436718f2b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/sufficient_statistics.md @@ -0,0 +1,109 @@ +description: Calculate the sufficient statistics for the mean and variance of x. + +
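+A minimal sketch of `tf.compat.v1.nn.sufficient_statistics` (the input values are
+illustrative); the statistics can then be converted to moments with
+`tf.nn.normalize_moments`:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, 2.0],
+                 [3.0, 4.0]])
+count, mean_ss, var_ss, shift = tf.compat.v1.nn.sufficient_statistics(x, axes=[0])
+mean, variance = tf.nn.normalize_moments(count, mean_ss, var_ss, shift)
+```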
+ + +
+ +# tf.compat.v1.nn.sufficient_statistics + + + + + + + + + +Calculate the sufficient statistics for the mean and variance of `x`. + + + + + + + +These sufficient statistics are computed using the one pass algorithm on +an input that's optionally shifted. See: +https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. +
+`axes` + +Array of ints. Axes along which to compute mean and variance. +
+`shift` + +A `Tensor` containing the value by which to shift the data for +numerical stability, or `None` if no shift is to be performed. A shift +close to the true mean provides the most numerically stable results. +
+`keep_dims` + +produce statistics with the same dimensionality as the input. +
+`name` + +Name used to scope the operations that compute the sufficient stats. +
+`keepdims` + +Alias for keep_dims. +
+ + + + + + + + + + + +
+Four `Tensor` objects of the same type as `x`: + +* the count (number of elements to average over). +* the (possibly shifted) sum of the elements in the array. +* the (possibly shifted) sum of squares of the elements in the array. +* the shift by which the mean must be corrected or None if `shift` is None. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/weighted_cross_entropy_with_logits.md b/site/en/api_docs/python/tf/compat/v1/nn/weighted_cross_entropy_with_logits.md new file mode 100644 index 00000000000..4968e74e37a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/weighted_cross_entropy_with_logits.md @@ -0,0 +1,150 @@ +description: Computes a weighted cross entropy. (deprecated arguments) + +
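+A minimal sketch of `tf.compat.v1.nn.weighted_cross_entropy_with_logits` (the
+labels, logits, and `pos_weight` value are illustrative):
+
+```python
+import tensorflow as tf
+
+labels = tf.constant([[1.0, 0.0, 1.0]])
+logits = tf.constant([[0.2, -1.5, 3.0]])
+# pos_weight > 1 penalizes false negatives more heavily than false positives.
+loss = tf.compat.v1.nn.weighted_cross_entropy_with_logits(
+    labels=labels, logits=logits, pos_weight=4.0)
+```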
+ + +
+ +# tf.compat.v1.nn.weighted_cross_entropy_with_logits + + + + + + + + + +Computes a weighted cross entropy. (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(targets)`. They will be removed in a future version. +Instructions for updating: +targets is deprecated, use labels instead + +This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight`, +allows one to trade off recall and precision by up- or down-weighting the +cost of a positive error relative to a negative error. + +The usual cross-entropy cost is defined as: + + labels * -log(sigmoid(logits)) + + (1 - labels) * -log(1 - sigmoid(logits)) + +A value `pos_weight > 1` decreases the false negative count, hence increasing +the recall. +Conversely setting `pos_weight < 1` decreases the false positive count and +increases the precision. +This can be seen from the fact that `pos_weight` is introduced as a +multiplicative coefficient for the positive labels term +in the loss expression: + + labels * -log(sigmoid(logits)) * pos_weight + + (1 - labels) * -log(1 - sigmoid(logits)) + +For brevity, let `x = logits`, `z = labels`, `q = pos_weight`. +The loss is: + + qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x)) + = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x))) + = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x))) + = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)) + = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x)) + = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x)) + +Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow, +the implementation uses + + (1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0)) + +`logits` and `labels` must have the same type and shape. + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +A `Tensor` of the same type and shape as `logits`. +
+`logits` + +A `Tensor` of type `float32` or `float64`. +
+`pos_weight` + +A coefficient to use on the positive examples. +
+`name` + +A name for the operation (optional). +
+`targets` + +Deprecated alias for labels. +
+ + + + + + + + + + + +
+A `Tensor` of the same shape as `logits` with the componentwise +weighted logistic losses. +
+ + + + + + + + + + + + +
+`ValueError` + +If `logits` and `labels` do not have the same shape. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/weighted_moments.md b/site/en/api_docs/python/tf/compat/v1/nn/weighted_moments.md new file mode 100644 index 00000000000..d5dfcbd884d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/weighted_moments.md @@ -0,0 +1,101 @@ +description: Returns the frequency-weighted mean and variance of x. + +
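+A minimal sketch of `tf.compat.v1.nn.weighted_moments` (the values and weights are
+illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, 2.0, 3.0]])
+frequency_weights = tf.constant([[1.0, 1.0, 2.0]])  # the last value counts twice
+mean, variance = tf.compat.v1.nn.weighted_moments(
+    x, axes=[1], frequency_weights=frequency_weights)
+```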
+ + +
+ +# tf.compat.v1.nn.weighted_moments + + + + + + + + + +Returns the frequency-weighted mean and variance of `x`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor. +
+`axes` + +1-d tensor of int32 values; these are the axes along which +to compute mean and variance. +
+`frequency_weights` + +A tensor of positive weights which can be +broadcast with x. +
+`name` + +Name used to scope the operation. +
+`keep_dims` + +Produce moments with the same dimensionality as the input. +
+`keepdims` + +Alias of keep_dims. +
+ + + + + + + + + + + +
+Two tensors: `weighted_mean` and `weighted_variance`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/nn/xw_plus_b.md b/site/en/api_docs/python/tf/compat/v1/nn/xw_plus_b.md new file mode 100644 index 00000000000..318611bf63d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/nn/xw_plus_b.md @@ -0,0 +1,87 @@ +description: Computes matmul(x, weights) + biases. + +
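+A minimal sketch of `tf.compat.v1.nn.xw_plus_b` (the batch and unit sizes are
+illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([2, 3])   # [batch, in_units]
+w = tf.random.normal([3, 4])   # [in_units, out_units]
+b = tf.zeros([4])              # [out_units]
+y = tf.compat.v1.nn.xw_plus_b(x, w, b)  # same as tf.matmul(x, w) + b; shape [2, 4]
+```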
+ + +
+ +# tf.compat.v1.nn.xw_plus_b + + + + + + + + + +Computes matmul(x, weights) + biases. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +a 2D tensor. Dimensions typically: batch, in_units +
+`weights` + +a 2D tensor. Dimensions typically: in_units, out_units +
+`biases` + +a 1D tensor. Dimensions: out_units +
+`name` + +A name for the operation (optional). If not specified +"xw_plus_b" is used. +
+ + + + + + + + + + + +
+A 2-D Tensor computing matmul(x, weights) + biases. +Dimensions typically: batch, out_units. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/no_regularizer.md b/site/en/api_docs/python/tf/compat/v1/no_regularizer.md new file mode 100644 index 00000000000..312c2a45546 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/no_regularizer.md @@ -0,0 +1,33 @@ +description: Use this function to prevent regularization of variables. + +
+ + +
+ +# tf.compat.v1.no_regularizer + + + + + + + + + +Use this function to prevent regularization of variables. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/norm.md b/site/en/api_docs/python/tf/compat/v1/norm.md new file mode 100644 index 00000000000..3edd429880a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/norm.md @@ -0,0 +1,177 @@ +description: Computes the norm of vectors, matrices, and tensors. (deprecated arguments) + +
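+A minimal sketch of `tf.compat.v1.norm` on a small matrix (the values are
+illustrative):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+m = tf.constant([[3.0, 4.0],
+                 [6.0, 8.0]])
+tf.compat.v1.norm(m)                             # Frobenius norm of the whole matrix
+tf.compat.v1.norm(m, ord=2, axis=1)              # per-row Euclidean norms: [5., 10.]
+tf.compat.v1.norm(m, ord=np.inf, axis=[-2, -1])  # matrix infinity norm
+```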
+ + +
+ +# tf.compat.v1.norm + + + + + + + + + +Computes the norm of vectors, matrices, and tensors. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +This function can compute several different vector norms (the 1-norm, the +Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and +matrix norms (Frobenius, 1-norm, 2-norm and inf-norm). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +`Tensor` of types `float32`, `float64`, `complex64`, `complex128` +
+`ord` + +Order of the norm. Supported values are 'fro', 'euclidean', +`1`, `2`, `np.inf` and any positive real number yielding the corresponding +p-norm. Default is 'euclidean' which is equivalent to Frobenius norm if +`tensor` is a matrix and equivalent to 2-norm for vectors. +Some restrictions apply: +a) The Frobenius norm `fro` is not defined for vectors, +b) If axis is a 2-tuple (matrix norm), only 'euclidean', 'fro', `1`, +`2`, `np.inf` are supported. +See the description of `axis` on how to compute norms for a batch of +vectors or matrices stored in a tensor. +
+`axis` + +If `axis` is `None` (the default), the input is considered a vector +and a single vector norm is computed over the entire set of values in the +tensor, i.e. `norm(tensor, ord=ord)` is equivalent to +`norm(reshape(tensor, [-1]), ord=ord)`. +If `axis` is a Python integer, the input is considered a batch of vectors, +and `axis` determines the axis in `tensor` over which to compute vector +norms. +If `axis` is a 2-tuple of Python integers it is considered a batch of +matrices and `axis` determines the axes in `tensor` over which to compute +a matrix norm. +Negative indices are supported. Example: If you are passing a tensor that +can be either a matrix or a batch of matrices at runtime, pass +`axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are +computed. +
+`keepdims` + +If True, the axis indicated in `axis` are kept with size 1. +Otherwise, the dimensions in `axis` are removed from the output shape. +
+`name` + +The name of the op. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + + +
+`output` + +A `Tensor` of the same type as tensor, containing the vector or +matrix norms. If `keepdims` is True then the rank of output is equal to +the rank of `tensor`. Otherwise, if `axis` is none the output is a scalar, +if `axis` is an integer, the rank of `output` is one less than the rank +of `tensor`, if `axis` is a 2-tuple the rank of `output` is two less +than the rank of `tensor`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `ord` or `axis` is invalid. +
+ + + + +#### Numpy Compatibility +Mostly equivalent to numpy.linalg.norm. +Not supported: ord <= 0, 2-norm for matrices, nuclear norm. +Other differences: + a) If axis is `None`, treats the flattened `tensor` as a vector + regardless of rank. + b) Explicitly supports 'euclidean' norm as the default, including for + higher order tensors. + diff --git a/site/en/api_docs/python/tf/compat/v1/ones_like.md b/site/en/api_docs/python/tf/compat/v1/ones_like.md new file mode 100644 index 00000000000..452b6416868 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ones_like.md @@ -0,0 +1,102 @@ +description: Creates a tensor with all elements set to 1. + +
+ + +
+ +# tf.compat.v1.ones_like + + + + + + + + + +Creates a tensor with all elements set to 1. + + + + + + + +See also tf.ones. + +Given a single tensor (`tensor`), this operation returns a tensor of the same +type and shape as `tensor` with all elements set to 1. Optionally, you can +specify a new type (`dtype`) for the returned tensor. + +#### For example: + + + +```python +tensor = tf.constant([[1, 2, 3], [4, 5, 6]]) +tf.ones_like(tensor) # [[1, 1, 1], [1, 1, 1]] +``` + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`dtype` + +A type for the returned `Tensor`. Must be `float32`, `float64`, +`int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `complex64`, +`complex128` or `bool`. +
+`name` + +A name for the operation (optional). +
+`optimize`
+
+If True, attempt to statically determine the shape of `tensor` and
+encode it as a constant.
+
+ + + + + + + + + + + +
+A `Tensor` with all elements set to 1. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/op_scope.md b/site/en/api_docs/python/tf/compat/v1/op_scope.md new file mode 100644 index 00000000000..387e88b5a84 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/op_scope.md @@ -0,0 +1,34 @@ +description: DEPRECATED. Same as name_scope above, just different argument order. + +
+ + +
+ +# tf.compat.v1.op_scope + + + + + + + + + +DEPRECATED. Same as name_scope above, just different argument order. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/pad.md b/site/en/api_docs/python/tf/compat/v1/pad.md new file mode 100644 index 00000000000..ccddbf8dd0f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/pad.md @@ -0,0 +1,148 @@ +description: Pads a tensor. + +
+ + +
+ +# tf.compat.v1.pad + + + + + + + + + +Pads a tensor. + + + + + + + +This operation pads a `tensor` according to the `paddings` you specify. +`paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of +`tensor`. For each dimension D of `input`, `paddings[D, 0]` indicates how +many values to add before the contents of `tensor` in that dimension, and +`paddings[D, 1]` indicates how many values to add after the contents of +`tensor` in that dimension. If `mode` is "REFLECT" then both `paddings[D, 0]` +and `paddings[D, 1]` must be no greater than `tensor.dim_size(D) - 1`. If +`mode` is "SYMMETRIC" then both `paddings[D, 0]` and `paddings[D, 1]` must be +no greater than `tensor.dim_size(D)`. + +The padded size of each dimension D of the output is: + +`paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]` + +#### For example: + + + +```python +t = tf.constant([[1, 2, 3], [4, 5, 6]]) +paddings = tf.constant([[1, 1,], [2, 2]]) +# 'constant_values' is 0. +# rank of 't' is 2. +tf.pad(t, paddings, "CONSTANT") # [[0, 0, 0, 0, 0, 0, 0], + # [0, 0, 1, 2, 3, 0, 0], + # [0, 0, 4, 5, 6, 0, 0], + # [0, 0, 0, 0, 0, 0, 0]] + +tf.pad(t, paddings, "REFLECT") # [[6, 5, 4, 5, 6, 5, 4], + # [3, 2, 1, 2, 3, 2, 1], + # [6, 5, 4, 5, 6, 5, 4], + # [3, 2, 1, 2, 3, 2, 1]] + +tf.pad(t, paddings, "SYMMETRIC") # [[2, 1, 1, 2, 3, 3, 2], + # [2, 1, 1, 2, 3, 3, 2], + # [5, 4, 4, 5, 6, 6, 5], + # [5, 4, 4, 5, 6, 6, 5]] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`paddings` + +A `Tensor` of type `int32`. +
+`mode` + +One of "CONSTANT", "REFLECT", or "SYMMETRIC" (case-insensitive) +
+`name` + +A name for the operation (optional). +
+`constant_values` + +In "CONSTANT" mode, the scalar pad value to use. Must be +same type as `tensor`. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ + + + + + + + + + + + +
+`ValueError` + +When mode is not one of "CONSTANT", "REFLECT", or "SYMMETRIC". +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/parse_example.md b/site/en/api_docs/python/tf/compat/v1/parse_example.md new file mode 100644 index 00000000000..4e9928b4425 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/parse_example.md @@ -0,0 +1,324 @@ +description: Parses Example protos into a dict of tensors. + +
+ + +
+ +# tf.compat.v1.parse_example + + + + + + + + + +Parses `Example` protos into a `dict` of tensors. + + + + + + + + + +Parses a number of serialized [`Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) +protos given in `serialized`. We refer to `serialized` as a batch with +`batch_size` many entries of individual `Example` protos. + +`example_names` may contain descriptive names for the corresponding serialized +protos. These may be useful for debugging purposes, but they have no effect on +the output. If not `None`, `example_names` must be the same length as +`serialized`. + +This op parses serialized examples into a dictionary mapping keys to `Tensor` +`SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to +`VarLenFeature`, `SparseFeature`, `RaggedFeature`, and `FixedLenFeature` +objects. Each `VarLenFeature` and `SparseFeature` is mapped to a +`SparseTensor`; each `FixedLenFeature` is mapped to a `Tensor`; and each +`RaggedFeature` is mapped to a `RaggedTensor`. + +Each `VarLenFeature` maps to a `SparseTensor` of the specified type +representing a ragged matrix. Its indices are `[batch, index]` where `batch` +identifies the example in `serialized`, and `index` is the value's index in +the list of values associated with that feature and example. + +Each `SparseFeature` maps to a `SparseTensor` of the specified type +representing a Tensor of `dense_shape` `[batch_size] + SparseFeature.size`. +Its `values` come from the feature in the examples with key `value_key`. +A `values[i]` comes from a position `k` in the feature of an example at batch +entry `batch`. This positional information is recorded in `indices[i]` as +`[batch, index_0, index_1, ...]` where `index_j` is the `k-th` value of +the feature in the example at with key `SparseFeature.index_key[j]`. +In other words, we split the indices (except the first index indicating the +batch entry) of a `SparseTensor` by dimension into different features of the +`Example`. Due to its complexity a `VarLenFeature` should be preferred over a +`SparseFeature` whenever possible. + +Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or +tf.float32 if not specified) and shape `(serialized.size(),) + df.shape`. + +`FixedLenFeature` entries with a `default_value` are optional. With no default +value, we will fail if that `Feature` is missing from any example in +`serialized`. + +Each `FixedLenSequenceFeature` `df` maps to a `Tensor` of the specified type +(or tf.float32 if not specified) and shape +`(serialized.size(), None) + df.shape`. +All examples in `serialized` will be padded with `default_value` along the +second dimension. + +Each `RaggedFeature` maps to a `RaggedTensor` of the specified type. It +is formed by stacking the `RaggedTensor` for each example, where the +`RaggedTensor` for each individual example is constructed using the tensors +specified by `RaggedTensor.values_key` and `RaggedTensor.partition`. See +the tf.io.RaggedFeature documentation for details and examples. 
+ +#### Examples: + + + +For example, if one expects a tf.float32 `VarLenFeature` `ft` and three +serialized `Example`s are provided: + +``` +serialized = [ + features + { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } }, + features + { feature []}, + features + { feature { key: "ft" value { float_list { value: [3.0] } } } +] +``` + +then the output will look like: + +```python +{"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]], + values=[1.0, 2.0, 3.0], + dense_shape=(3, 2)) } +``` + +If instead a `FixedLenSequenceFeature` with `default_value = -1.0` and +`shape=[]` is used then the output will look like: + +```python +{"ft": [[1.0, 2.0], [3.0, -1.0]]} +``` + +Given two `Example` input protos in `serialized`: + +``` +[ + features { + feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } } + feature { key: "gps" value { float_list { value: [] } } } + }, + features { + feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } } + feature { key: "dank" value { int64_list { value: [ 42 ] } } } + feature { key: "gps" value { } } + } +] +``` + +And arguments + +``` +example_names: ["input0", "input1"], +features: { + "kw": VarLenFeature(tf.string), + "dank": VarLenFeature(tf.int64), + "gps": VarLenFeature(tf.float32), +} +``` + +Then the output is a dictionary: + +```python +{ + "kw": SparseTensor( + indices=[[0, 0], [0, 1], [1, 0]], + values=["knit", "big", "emmy"] + dense_shape=[2, 2]), + "dank": SparseTensor( + indices=[[1, 0]], + values=[42], + dense_shape=[2, 1]), + "gps": SparseTensor( + indices=[], + values=[], + dense_shape=[2, 0]), +} +``` + +For dense results in two serialized `Example`s: + +``` +[ + features { + feature { key: "age" value { int64_list { value: [ 0 ] } } } + feature { key: "gender" value { bytes_list { value: [ "f" ] } } } + }, + features { + feature { key: "age" value { int64_list { value: [] } } } + feature { key: "gender" value { bytes_list { value: [ "f" ] } } } + } +] +``` + +#### We can use arguments: + + + +``` +example_names: ["input0", "input1"], +features: { + "age": FixedLenFeature([], dtype=tf.int64, default_value=-1), + "gender": FixedLenFeature([], dtype=tf.string), +} +``` + +And the expected output is: + +```python +{ + "age": [[0], [-1]], + "gender": [["f"], ["f"]], +} +``` + +An alternative to `VarLenFeature` to obtain a `SparseTensor` is +`SparseFeature`. For example, given two `Example` input protos in +`serialized`: + +``` +[ + features { + feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } } + feature { key: "ix" value { int64_list { value: [ 3, 20 ] } } } + }, + features { + feature { key: "val" value { float_list { value: [ 0.0 ] } } } + feature { key: "ix" value { int64_list { value: [ 42 ] } } } + } +] +``` + +And arguments + +``` +example_names: ["input0", "input1"], +features: { + "sparse": SparseFeature( + index_key="ix", value_key="val", dtype=tf.float32, size=100), +} +``` + +Then the output is a dictionary: + +```python +{ + "sparse": SparseTensor( + indices=[[0, 3], [0, 20], [1, 42]], + values=[0.5, -1.0, 0.0] + dense_shape=[2, 100]), +} +``` + +See the tf.io.RaggedFeature documentation for examples showing how +`RaggedFeature` can be used to obtain `RaggedTensor`s. + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A vector (1-D Tensor) of strings, a batch of binary +serialized `Example` protos. +
+`features` + +A `dict` mapping feature keys to `FixedLenFeature`, +`VarLenFeature`, `SparseFeature`, and `RaggedFeature` values. +
+`example_names` + +A vector (1-D Tensor) of strings (optional), the names of +the serialized protos in the batch. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A `dict` mapping feature keys to `Tensor`, `SparseTensor`, and +`RaggedTensor` values. +
+ + + + + + + + + + + + +
+`ValueError` + +if any feature is invalid. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/parse_single_example.md b/site/en/api_docs/python/tf/compat/v1/parse_single_example.md new file mode 100644 index 00000000000..76815363d1e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/parse_single_example.md @@ -0,0 +1,127 @@ +description: Parses a single Example proto. + +
+ + +
+ +# tf.compat.v1.parse_single_example + + + + + + + + + +Parses a single `Example` proto. + + + + + + + + + +Similar to `parse_example`, except: + +For dense tensors, the returned `Tensor` is identical to the output of +`parse_example`, except there is no batch dimension, the output shape is the +same as the shape given in `dense_shape`. + +For `SparseTensor`s, the first (batch) column of the indices matrix is removed +(the indices matrix is a column vector), the values vector is unchanged, and +the first (`batch_size`) entry of the shape vector is removed (it is now a +single element vector). + +One might see performance advantages by batching `Example` protos with +`parse_example` instead of using this function directly. + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A scalar string Tensor, a single serialized Example. +
+`features` + +A `dict` mapping feature keys to `FixedLenFeature` or +`VarLenFeature` values. +
+`name` + +A name for this operation (optional). +
+`example_names` + +(Optional) A scalar string Tensor, the associated name. +
+ + + + + + + + + + + +
+A `dict` mapping feature keys to `Tensor` and `SparseTensor` values. +
+ + + + + + + + + + + + +
+`ValueError` + +if any feature is invalid. +
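+As a brief, hedged sketch of the shape difference described above (the
+feature key `"ft"` and its values are illustrative):
+
+```python
+import tensorflow as tf
+
+# Build one serialized Example with a single float feature "ft".
+example = tf.train.Example(features=tf.train.Features(feature={
+    "ft": tf.train.Feature(float_list=tf.train.FloatList(value=[1.0, 2.0]))
+}))
+
+parsed = tf.compat.v1.parse_single_example(
+    serialized=example.SerializeToString(),
+    features={"ft": tf.io.VarLenFeature(tf.float32)})
+
+# parsed["ft"] is a SparseTensor with column-vector indices [[0], [1]],
+# values [1.0, 2.0], and dense_shape [2] -- no batch dimension.
+```
+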
+ diff --git a/site/en/api_docs/python/tf/compat/v1/placeholder.md b/site/en/api_docs/python/tf/compat/v1/placeholder.md new file mode 100644 index 00000000000..2b9bb0efb96 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/placeholder.md @@ -0,0 +1,122 @@ +description: Inserts a placeholder for a tensor that will be always fed. + +
+ + +
+ +# tf.compat.v1.placeholder + + + + + + + + + +Inserts a placeholder for a tensor that will be always fed. + + + + + + + +**Important**: This tensor will produce an error if evaluated. Its value must +be fed using the `feed_dict` optional argument to `Session.run()`, +`Tensor.eval()`, or `Operation.run()`. + +#### For example: + + + +```python +x = tf.compat.v1.placeholder(tf.float32, shape=(1024, 1024)) +y = tf.matmul(x, x) + +with tf.compat.v1.Session() as sess: + print(sess.run(y)) # ERROR: will fail because x was not fed. + + rand_array = np.random.rand(1024, 1024) + print(sess.run(y, feed_dict={x: rand_array})) # Will succeed. +``` + + + + + + + + + + + + + + + + + + +
+`dtype` + +The type of elements in the tensor to be fed. +
+`shape` + +The shape of the tensor to be fed (optional). If the shape is not +specified, you can feed a tensor of any shape. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` that may be used as a handle for feeding a value, but not +evaluated directly. +
+ + + + + + + + + + + + +
+`RuntimeError` + +if eager execution is enabled +
+ + + +#### Eager Compatibility +Placeholders are not compatible with eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/placeholder_with_default.md b/site/en/api_docs/python/tf/compat/v1/placeholder_with_default.md new file mode 100644 index 00000000000..ac71a5ab336 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/placeholder_with_default.md @@ -0,0 +1,79 @@ +description: A placeholder op that passes through input when its output is not fed. + +
+ + +
+ +# tf.compat.v1.placeholder_with_default + + + + + + + + + +A placeholder op that passes through `input` when its output is not fed. + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. The default value to produce when output is not fed. +
+`shape` + +A tf.TensorShape or list of `int`s. The (possibly partial) shape of +the tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
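+A minimal, hedged sketch of the pass-through behavior (assuming TensorFlow
+2.x with v1 graph mode enabled):
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+x = tf.compat.v1.placeholder_with_default(tf.constant([1, 2, 3]), shape=[3])
+y = x * 2
+
+with tf.compat.v1.Session() as sess:
+    print(sess.run(y))                            # [2 4 6]    -- default input used
+    print(sess.run(y, feed_dict={x: [4, 5, 6]}))  # [ 8 10 12] -- fed value overrides
+```
+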
+ diff --git a/site/en/api_docs/python/tf/compat/v1/profiler.md b/site/en/api_docs/python/tf/compat/v1/profiler.md new file mode 100644 index 00000000000..6246525bc1b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler.md @@ -0,0 +1,43 @@ +description: Public API for tf.profiler namespace. + +
+ + +
+ +# Module: tf.compat.v1.profiler + + + + + + + + + +Public API for tf.profiler namespace. + + + +## Classes + +[`class AdviceProto`](../../../tf/compat/v1/profiler/AdviceProto.md): A ProtocolMessage + +[`class GraphNodeProto`](../../../tf/compat/v1/profiler/GraphNodeProto.md): A ProtocolMessage + +[`class MultiGraphNodeProto`](../../../tf/compat/v1/profiler/MultiGraphNodeProto.md): A ProtocolMessage + +[`class OpLogProto`](../../../tf/compat/v1/profiler/OpLogProto.md): A ProtocolMessage + +[`class ProfileOptionBuilder`](../../../tf/compat/v1/profiler/ProfileOptionBuilder.md): Option Builder for Profiling API. + +[`class Profiler`](../../../tf/compat/v1/profiler/Profiler.md): TensorFlow multi-step profiler. + +## Functions + +[`advise(...)`](../../../tf/compat/v1/profiler/advise.md): Auto profile and advise. + +[`profile(...)`](../../../tf/compat/v1/profiler/profile.md): Profile model. + +[`write_op_log(...)`](../../../tf/compat/v1/profiler/write_op_log.md): Log provided 'op_log', and add additional model information below. + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/AdviceProto.md b/site/en/api_docs/python/tf/compat/v1/profiler/AdviceProto.md new file mode 100644 index 00000000000..4c4d1433302 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/AdviceProto.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + + + +
+ +# tf.compat.v1.profiler.AdviceProto + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + +
+`checkers` + +`repeated CheckersEntry checkers` +
+ + + +## Child Classes +[`class Checker`](../../../../tf/compat/v1/profiler/AdviceProto/Checker.md) + +[`class CheckersEntry`](../../../../tf/compat/v1/profiler/AdviceProto/CheckersEntry.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/AdviceProto/Checker.md b/site/en/api_docs/python/tf/compat/v1/profiler/AdviceProto/Checker.md new file mode 100644 index 00000000000..132357a7446 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/AdviceProto/Checker.md @@ -0,0 +1,46 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.profiler.AdviceProto.Checker + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + +
+`reports` + +`repeated string reports` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/AdviceProto/CheckersEntry.md b/site/en/api_docs/python/tf/compat/v1/profiler/AdviceProto/CheckersEntry.md new file mode 100644 index 00000000000..f3436fa92db --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/AdviceProto/CheckersEntry.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.profiler.AdviceProto.CheckersEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`key` + +`string key` +
+`value` + +`Checker value` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/GraphNodeProto.md b/site/en/api_docs/python/tf/compat/v1/profiler/GraphNodeProto.md new file mode 100644 index 00000000000..95016c68c1b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/GraphNodeProto.md @@ -0,0 +1,232 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.compat.v1.profiler.GraphNodeProto + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`accelerator_exec_micros` + +`int64 accelerator_exec_micros` +
+`children` + +`repeated GraphNodeProto children` +
+`cpu_exec_micros` + +`int64 cpu_exec_micros` +
+`devices` + +`repeated string devices` +
+`exec_micros` + +`int64 exec_micros` +
+`float_ops` + +`int64 float_ops` +
+`input_shapes` + +`repeated InputShapesEntry input_shapes` +
+`name` + +`string name` +
+`output_bytes` + +`int64 output_bytes` +
+`parameters` + +`int64 parameters` +
+`peak_bytes` + +`int64 peak_bytes` +
+`requested_bytes` + +`int64 requested_bytes` +
+`residual_bytes` + +`int64 residual_bytes` +
+`run_count` + +`int64 run_count` +
+`shapes` + +`repeated TensorShapeProto shapes` +
+`tensor_value` + +`TFProfTensorProto tensor_value` +
+`total_accelerator_exec_micros` + +`int64 total_accelerator_exec_micros` +
+`total_cpu_exec_micros` + +`int64 total_cpu_exec_micros` +
+`total_definition_count` + +`int64 total_definition_count` +
+`total_exec_micros` + +`int64 total_exec_micros` +
+`total_float_ops` + +`int64 total_float_ops` +
+`total_output_bytes` + +`int64 total_output_bytes` +
+`total_parameters` + +`int64 total_parameters` +
+`total_peak_bytes` + +`int64 total_peak_bytes` +
+`total_requested_bytes` + +`int64 total_requested_bytes` +
+`total_residual_bytes` + +`int64 total_residual_bytes` +
+`total_run_count` + +`int64 total_run_count` +
+ + + +## Child Classes +[`class InputShapesEntry`](../../../../tf/compat/v1/profiler/GraphNodeProto/InputShapesEntry.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/GraphNodeProto/InputShapesEntry.md b/site/en/api_docs/python/tf/compat/v1/profiler/GraphNodeProto/InputShapesEntry.md new file mode 100644 index 00000000000..19ccb9636b9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/GraphNodeProto/InputShapesEntry.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`key` + +`int32 key` +
+`value` + +`TensorShapeProto value` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/MultiGraphNodeProto.md b/site/en/api_docs/python/tf/compat/v1/profiler/MultiGraphNodeProto.md new file mode 100644 index 00000000000..563e3633b44 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/MultiGraphNodeProto.md @@ -0,0 +1,186 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.profiler.MultiGraphNodeProto + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`accelerator_exec_micros` + +`int64 accelerator_exec_micros` +
+`children` + +`repeated MultiGraphNodeProto children` +
+`cpu_exec_micros` + +`int64 cpu_exec_micros` +
+`exec_micros` + +`int64 exec_micros` +
+`float_ops` + +`int64 float_ops` +
+`graph_nodes` + +`repeated GraphNodeProto graph_nodes` +
+`name` + +`string name` +
+`output_bytes` + +`int64 output_bytes` +
+`parameters` + +`int64 parameters` +
+`peak_bytes` + +`int64 peak_bytes` +
+`requested_bytes` + +`int64 requested_bytes` +
+`residual_bytes` + +`int64 residual_bytes` +
+`total_accelerator_exec_micros` + +`int64 total_accelerator_exec_micros` +
+`total_cpu_exec_micros` + +`int64 total_cpu_exec_micros` +
+`total_exec_micros` + +`int64 total_exec_micros` +
+`total_float_ops` + +`int64 total_float_ops` +
+`total_output_bytes` + +`int64 total_output_bytes` +
+`total_parameters` + +`int64 total_parameters` +
+`total_peak_bytes` + +`int64 total_peak_bytes` +
+`total_requested_bytes` + +`int64 total_requested_bytes` +
+`total_residual_bytes` + +`int64 total_residual_bytes` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/OpLogProto.md b/site/en/api_docs/python/tf/compat/v1/profiler/OpLogProto.md new file mode 100644 index 00000000000..54ecaef74b2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/OpLogProto.md @@ -0,0 +1,57 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.compat.v1.profiler.OpLogProto + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`id_to_string` + +`repeated IdToStringEntry id_to_string` +
+`log_entries` + +`repeated OpLogEntry log_entries` +
+ + + +## Child Classes +[`class IdToStringEntry`](../../../../tf/compat/v1/profiler/OpLogProto/IdToStringEntry.md) + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/OpLogProto/IdToStringEntry.md b/site/en/api_docs/python/tf/compat/v1/profiler/OpLogProto/IdToStringEntry.md new file mode 100644 index 00000000000..b66500607be --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/OpLogProto/IdToStringEntry.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.profiler.OpLogProto.IdToStringEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`key` + +`int64 key` +
+`value` + +`string value` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/ProfileOptionBuilder.md b/site/en/api_docs/python/tf/compat/v1/profiler/ProfileOptionBuilder.md new file mode 100644 index 00000000000..e958f5c2fcf --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/ProfileOptionBuilder.md @@ -0,0 +1,1020 @@ +description: Option Builder for Profiling API. + +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.profiler.ProfileOptionBuilder + + + + + + + + + +Option Builder for Profiling API. + + + + + + + +For tutorial on the options, see +https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/options.md + +```python +# Users can use pre-built options: +opts = ( + tf.profiler.ProfileOptionBuilder.trainable_variables_parameter()) + +# Or, build your own options: +opts = (tf.compat.v1.profiler.ProfileOptionBuilder() + .with_max_depth(10) + .with_min_micros(1000) + .select(['accelerator_micros']) + .with_stdout_output() + .build() + +# Or customize the pre-built options: +opts = (tf.compat.v1.profiler.ProfileOptionBuilder( + tf.profiler.ProfileOptionBuilder.time_and_memory()) + .with_displaying_options(show_name_regexes=['.*rnn.*']) + .build()) + +# Finally, profiling with the options: +_ = tf.compat.v1.profiler.profile(tf.compat.v1.get_default_graph(), + run_meta=run_meta, + cmd='scope', + options=opts) +``` + + + + + + + + + + +
+`options` + +Optional initial option dict to start with. +
+ + + +## Methods + +

account_displayed_op_only

+ +View source + + + +Whether only account the statistics of displayed profiler nodes. + + + + + + + + + + + +
Args
+`is_true`
+
+If true, only account statistics of nodes eventually
+displayed by the outputs.
+Otherwise, a node's statistics are accounted by its parents
+as long as its types match 'account_type_regexes', even if
+it is hidden from the output, say, by hide_name_regexes.
+
+ + + + + + + + + + + +
Returns
+self +
+ + + +

build

+ +View source + + + +Build a profiling option. + + + + + + + + + + +
Returns
+A dict of profiling options. +
+ + + +

float_operation

+ +View source + + + +Options used to profile float operations. + +Please see https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/profile_model_architecture.md +on the caveats of calculating float operations. + + + + + + + + + +
Returns
+A dict of profiling options. +
+ + + +

order_by

+
+View source
+
+
+
+Order the displayed profiler nodes based on an attribute.
+
+Supported attributes include micros, bytes, occurrence, params, etc. See
+https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/options.md
+
+
+
+
+
+
Args
+`attribute` + +An attribute the profiler node has. +
+ + + + + + + + + + + +
Returns
+self +
+ + + +

select

+ +View source + + + +Select the attributes to display. + +See https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/options.md +for supported attributes. + + + + + + + + + + +
Args
+`attributes` + +A list of attribute the profiler node has. +
+ + + + + + + + + + + +
Returns
+self +
+ + + +

time_and_memory

+ +View source + + + +Show operation time and memory consumptions. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`min_micros` + +Only show profiler nodes with execution time +no less than this. It sums accelerator and cpu times. +
+`min_bytes`
+
+Only show profiler nodes requested to allocate no fewer bytes
+than this.
+
+`min_accelerator_micros`
+
+Only show profiler nodes that spend no less than
+this time on an accelerator (e.g. GPU).
+
+`min_cpu_micros`
+
+Only show profiler nodes that spend no less than
+this time on the CPU.
+
+`min_peak_bytes`
+
+Only show profiler nodes using no less than this many bytes
+at peak (high watermark). For profiler nodes that consist of multiple
+graph nodes, this sums the graph nodes' peak_bytes.
+
+`min_residual_bytes`
+
+Only show profiler nodes that have no less than
+this many bytes not de-allocated after Compute() ends. For
+profiler nodes that consist of multiple graph nodes, this sums the
+graph nodes' residual_bytes.
+
+`min_output_bytes`
+
+Only show profiler nodes that output no less than this many bytes.
+The output is not necessarily allocated by these profiler nodes.
+
+ + + + + + + + + + + +
Returns
+A dict of profiling options. +
+ + + +

trainable_variables_parameter

+ +View source + + + +Options used to profile trainable variable parameters. + +Normally used together with 'scope' view. + + + + + + + + + +
Returns
+A dict of profiling options. +
+ + + +

with_accounted_types

+
+View source
+
+
+
+Selectively count statistics based on node types.
+
+Here, 'types' means the profiler nodes' properties. By default, the profiler
+considers device name (e.g. /job:xx/.../device:GPU:0) and operation type
+(e.g. MatMul) as profiler nodes' properties. Users can also associate
+customized 'types' with profiler nodes through an OpLogProto proto.
+
+For example, users can select profiler nodes placed on gpu:0 with:
+`account_type_regexes=['.*gpu:0.*']`
+
+If none of a node's properties match the specified regexes, the node is
+neither displayed nor accounted.
+
+
+
+
+
Args
+`account_type_regexes` + +A list of regexes specifying the types. +
+ + + + + + + + + + + +
Returns
+self. +
+ + + +

with_empty_output

+ +View source + + + +Do not generate side-effect outputs. + + +

with_file_output

+ +View source + + + +Print the result to a file. + + +

with_max_depth

+ +View source + + + +Set the maximum depth of display. + +The depth depends on profiling view. For 'scope' view, it's the +depth of name scope hierarchy (tree), for 'op' view, it's the number +of operation types (list), etc. + + + + + + + + + + +
Args
+`max_depth` + +Maximum depth of the data structure to display. +
+ + + + + + + + + + + +
Returns
+self +
+ + + +

with_min_execution_time

+ +View source + + + +Only show profiler nodes consuming no less than 'min_micros'. + + + + + + + + + + + + + + + + + +
Args
+`min_micros` + +Only show profiler nodes with execution time +no less than this. It sums accelerator and cpu times. +
+`min_accelerator_micros`
+
+Only show profiler nodes that spend no less than
+this time on an accelerator (e.g. GPU).
+
+`min_cpu_micros`
+
+Only show profiler nodes that spend no less than
+this time on the CPU.
+
+ + + + + + + + + + + +
Returns
+self +
+ + + +

with_min_float_operations

+ +View source + + + +Only show profiler nodes consuming no less than 'min_float_ops'. + +Please see https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/profile_model_architecture.md +on the caveats of calculating float operations. + + + + + + + + + + +
Args
+`min_float_ops` + +Only show profiler nodes with float operations +no less than this. +
+ + + + + + + + + + + +
Returns
+self +
+ + + +

with_min_memory

+ +View source + + + +Only show profiler nodes consuming no less than 'min_bytes'. + + + + + + + + + + + + + + + + + + + + +
Args
+`min_bytes`
+
+Only show profiler nodes requested to allocate no fewer bytes
+than this.
+
+`min_peak_bytes`
+
+Only show profiler nodes using no less than this many bytes
+at peak (high watermark). For profiler nodes that consist of multiple
+graph nodes, this sums the graph nodes' peak_bytes.
+
+`min_residual_bytes`
+
+Only show profiler nodes that have no less than
+this many bytes not de-allocated after Compute() ends. For
+profiler nodes that consist of multiple graph nodes, this sums the
+graph nodes' residual_bytes.
+
+`min_output_bytes`
+
+Only show profiler nodes that output no less than this many bytes.
+The output is not necessarily allocated by these profiler nodes.
+
+ + + + + + + + + + + +
Returns
+self +
+ + + +

with_min_occurrence

+ +View source + + + +Only show profiler nodes including no less than 'min_occurrence' graph nodes. + +A "node" means a profiler output node, which can be a python line +(code view), an operation type (op view), or a graph node +(graph/scope view). A python line includes all graph nodes created by that +line, while an operation type includes all graph nodes of that type. + + + + + + + + + + +
Args
+`min_occurrence`
+
+Only show profiler nodes that include no fewer graph nodes than this.
+
+ + + + + + + + + + + +
Returns
+self +
+ + + +

with_min_parameters

+
+View source
+
+
+
+Only show profiler nodes holding no less than 'min_params' parameters.
+
+'Parameters' normally refers to the weights in TensorFlow variables.
+It reflects the 'capacity' of a model.
+
+
+
+
+
Args
+`min_params`
+
+Only show profiler nodes holding no fewer parameters than this.
+
+ + + + + + + + + + + +
Returns
+self +
+ + + +

with_node_names

+ +View source + + + +Regular expressions used to select profiler nodes to display. + +After 'with_accounted_types' is evaluated, 'with_node_names' are +evaluated as follows: + + For a profile data structure, profiler first finds the profiler + nodes matching 'start_name_regexes', and starts displaying profiler + nodes from there. Then, if a node matches 'show_name_regexes' and + doesn't match 'hide_name_regexes', it's displayed. If a node matches + 'trim_name_regexes', profiler stops further searching that branch. + + + + + + + + + + + + + + + + + + + +
Args
+`start_name_regexes` + +list of node name regexes to start displaying. +
+`show_name_regexes`
+
+list of node name regexes to display.
+
+`hide_name_regexes`
+
+list of node name regexes that should be hidden.
+
+`trim_name_regexes` + +list of node name regexes from where to stop. +
+ + + + + + + + + + + +
Returns
+self +
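+For instance, a hedged sketch of combining these regexes with a pre-built
+option set (the regex patterns below are purely illustrative):
+
+```python
+opts = (tf.compat.v1.profiler.ProfileOptionBuilder(
+            tf.compat.v1.profiler.ProfileOptionBuilder.time_and_memory())
+        .with_node_names(start_name_regexes=['.*'],
+                         show_name_regexes=['.*conv.*', '.*dense.*'],
+                         hide_name_regexes=['.*Initializer.*'])
+        .build())
+```
+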
+ + + +

with_pprof_output

+
+View source
+
+
+
+Generate a pprof profile gzip file.
+
+
+#### To use the pprof file:
+
+pprof -png --nodecount=100 --sample_index=1 `<pprof_file>`
+
+
+
+
+
Args
+`pprof_file` + +filename for output, usually suffixed with .pb.gz. +
+ + + + + + + + + + + +
Returns
+self. +
+ + + +

with_stdout_output

+ +View source + + + +Print the result to stdout. + + +

with_step

+ +View source + + + +Which profile step to use for profiling. + +The 'step' here refers to the step defined by `Profiler.add_step()` API. + + + + + + + + + + +
Args
+`step` + +When multiple steps of profiles are available, select which step's +profile to use. If -1, use average of all available steps. +
+ + + + + + + + + + + +
Returns
+self +
+ + + +

with_timeline_output

+ +View source + + + +Generate a timeline json file. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/Profiler.md b/site/en/api_docs/python/tf/compat/v1/profiler/Profiler.md new file mode 100644 index 00000000000..7151e688549 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/Profiler.md @@ -0,0 +1,397 @@ +description: TensorFlow multi-step profiler. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.profiler.Profiler + + + + + + + + + +TensorFlow multi-step profiler. + + + + + + + +https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/README.md + +```python +Typical use case: + # Currently we are only allowed to create 1 profiler per process. + profiler = Profiler(sess.graph) + + for i in xrange(total_steps): + if i % 10000 == 0: + run_meta = tf.compat.v1.RunMetadata() + _ = sess.run(..., + options=tf.compat.v1.RunOptions( + trace_level=tf.RunOptions.FULL_TRACE), + run_metadata=run_meta) + profiler.add_step(i, run_meta) + + # Profile the parameters of your model. + profiler.profile_name_scope(options=(option_builder.ProfileOptionBuilder + .trainable_variables_parameter())) + + # Or profile the timing of your model operations. + opts = option_builder.ProfileOptionBuilder.time_and_memory() + profiler.profile_operations(options=opts) + + # Or you can generate a timeline: + opts = (option_builder.ProfileOptionBuilder( + option_builder.ProfileOptionBuilder.time_and_memory()) + .with_step(i) + .with_timeline_output(filename).build()) + profiler.profile_graph(options=opts) + else: + _ = sess.run(...) + # Auto detect problems and generate advice. + profiler.advise() +``` + + + + + + + + + + + + + +
+`graph` + +tf.Graph. If None and eager execution is not enabled, use +default graph. +
+`op_log` + +optional. tensorflow::tfprof::OpLogProto proto. Used to define +extra op types. +
+ + + +## Methods + +

add_step

+ +View source + + + +Add statistics of a step. + + + + + + + + + + + + + + +
Args
+`step` + +int, An id used to group one or more different `run_meta` together. +When profiling with the profile_xxx APIs, user can use the `step` +id in the `options` to profile these `run_meta` together. +
+`run_meta` + +RunMetadata proto that contains statistics of a session run. +
+ + + +

advise

+ +View source + + + +Automatically detect problems and generate reports. + + + + + + + + + + + +
Args
+`options` + +A dict of options. See ALL_ADVICE example above. +
+ + + + + + + + + + + +
Returns
+An Advise proto that contains the reports from all checkers. +
+ + + +

profile_graph

+ +View source + + + +Profile the statistics of graph nodes, organized by dataflow graph. + + + + + + + + + + + +
Args
+`options` + +A dict of options. See core/profiler/g3doc/options.md. +
+ + + + + + + + + + + +
Returns
+a GraphNodeProto that records the results. +
+ + + +

profile_name_scope

+ +View source + + + +Profile the statistics of graph nodes, organized by name scope. + + + + + + + + + + + +
Args
+`options` + +A dict of options. See core/profiler/g3doc/options.md. +
+ + + + + + + + + + + +
Returns
+a GraphNodeProto that records the results. +
+ + + +

profile_operations

+ +View source + + + +Profile the statistics of the Operation types (e.g. MatMul, Conv2D). + + + + + + + + + + + +
Args
+`options` + +A dict of options. See core/profiler/g3doc/options.md. +
+ + + + + + + + + + + +
Returns
+a MultiGraphNodeProto that records the results. +
+ + + +

profile_python

+
+View source
+
+
+
+Profile the statistics of the Python code.
+
+  By default, it shows the call stack from root. To avoid
+  redundant output, you may use options to filter, e.g.
+  options['show_name_regexes'] = ['.*my_code.py.*']
+
+
+
+
+
Args
+`options` + +A dict of options. See core/profiler/g3doc/options.md. +
+ + + + + + + + + + + +
Returns
+a MultiGraphNodeProto that records the results. +
+ + + +

serialize_to_string

+ +View source + + + +Serialize the ProfileProto to a binary string. + + Users can write it to file for offline analysis by tfprof commandline + or graphical interface. + + + + + + + + + +
Returns
+ProfileProto binary string. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/advise.md b/site/en/api_docs/python/tf/compat/v1/profiler/advise.md new file mode 100644 index 00000000000..2027a6d6f92 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/advise.md @@ -0,0 +1,83 @@ +description: Auto profile and advise. + +
+ + +
+
+# tf.compat.v1.profiler.advise
+
+
+
+
+
+
+
+
+
+Auto profile and advise.
+
+
+
+
+
+
+
+ Builds profiles and automatically checks anomalies of various
+ aspects. For more details:
+ https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/README.md
+
+
+
+
+
+
+
+
+
+`graph` + +tf.Graph. If None and eager execution is not enabled, use +default graph. +
+`run_meta`
+
+optional tensorflow.RunMetadata proto. It is necessary to
+support run time information profiling, such as time and memory.
+
+`options` + +see ALL_ADVICE example above. Default checks everything. +
+ + + + + + + + + + + +
+Returns AdviceProto proto +
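+A hedged sketch of a typical call (it assumes `run_meta` is a `RunMetadata`
+proto collected from a session run traced with `FULL_TRACE`, as in the
+`Profiler` class example earlier):
+
+```python
+# run_meta: RunMetadata gathered from a traced sess.run(...) call.
+advice = tf.compat.v1.profiler.advise(
+    graph=tf.compat.v1.get_default_graph(), run_meta=run_meta)
+print(advice)  # AdviceProto with reports from the built-in checkers
+```
+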
+ diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/profile.md b/site/en/api_docs/python/tf/compat/v1/profiler/profile.md new file mode 100644 index 00000000000..c0caa949c5f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/profile.md @@ -0,0 +1,105 @@ +description: Profile model. + +
+ + +
+ +# tf.compat.v1.profiler.profile + + + + + + + + + +Profile model. + + + + + + + + Tutorials and examples can be found in: + https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/profiler/g3doc/python_api.md + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +tf.Graph. If None and eager execution is not enabled, use +default graph. +
+`run_meta`
+
+optional tensorflow.RunMetadata proto. It is necessary to
+support run time information profiling, such as time and memory.
+
+`op_log` + +tensorflow.tfprof.OpLogProto proto. User can assign "types" to +graph nodes with op_log. "types" allow user to flexibly group and +account profiles using options['accounted_type_regexes']. +
+`cmd` + +string. Either 'op', 'scope', 'graph' or 'code'. +'op' view organizes profile using operation type. (e.g. MatMul) +'scope' view organizes profile using graph node name scope. +'graph' view organizes profile using graph node inputs/outputs. +'code' view organizes profile using Python call stack. +
+`options` + +A dict of options. See core/profiler/g3doc/options.md. +
+ + + + + + + + + + + +
+If cmd is 'scope' or 'graph', returns GraphNodeProto proto. +If cmd is 'op' or 'code', returns MultiGraphNodeProto proto. +Side effect: stdout/file/timeline.json depending on options['output'] +
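+A minimal, hedged end-to-end sketch (assuming TensorFlow 2.x with v1 graph
+mode; the graph, shapes, and option set below are illustrative):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+x = tf.compat.v1.placeholder(tf.float32, shape=(32, 32))
+y = tf.matmul(x, x)
+
+with tf.compat.v1.Session() as sess:
+    run_meta = tf.compat.v1.RunMetadata()
+    sess.run(y,
+             feed_dict={x: np.random.rand(32, 32).astype(np.float32)},
+             options=tf.compat.v1.RunOptions(
+                 trace_level=tf.compat.v1.RunOptions.FULL_TRACE),
+             run_metadata=run_meta)
+
+    # Profile by name scope, showing time and memory per node.
+    opts = tf.compat.v1.profiler.ProfileOptionBuilder.time_and_memory()
+    stats = tf.compat.v1.profiler.profile(
+        graph=sess.graph, run_meta=run_meta, cmd='scope', options=opts)
+    print(stats.total_exec_micros)  # GraphNodeProto field (see above)
+```
+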
+ diff --git a/site/en/api_docs/python/tf/compat/v1/profiler/write_op_log.md b/site/en/api_docs/python/tf/compat/v1/profiler/write_op_log.md new file mode 100644 index 00000000000..123ec6ddb95 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/profiler/write_op_log.md @@ -0,0 +1,88 @@ +description: Log provided 'op_log', and add additional model information below. + +
+ + +
+ +# tf.compat.v1.profiler.write_op_log + + + + + + + + + +Log provided 'op_log', and add additional model information below. + + + + + + + + The API also assigns ops in tf.compat.v1.trainable_variables() an op type + called '_trainable_variables'. + The API also logs 'flops' statistics for ops with op.RegisterStatistics() + defined. flops calculation depends on Tensor shapes defined in 'graph', + which might not be complete. 'run_meta', if provided, completes the shape + information with best effort. + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +tf.Graph. If None and eager execution is not enabled, use +default graph. +
+`log_dir` + +directory to write the log file. +
+`op_log`
+
+(Optional) OpLogProto proto to be written. If not provided, a new
+one is created.
+
+`run_meta` + +(Optional) RunMetadata proto that helps flops computation using +run time shape information. +
+`add_trace` + +Whether to add python code trace information. +Used to support "code" view. +
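+A hedged sketch of a typical call (the log directory is hypothetical, and
+`run_meta` is assumed to come from a traced session run as shown earlier):
+
+```python
+tf.compat.v1.profiler.write_op_log(
+    graph=tf.compat.v1.get_default_graph(),
+    log_dir='/tmp/profiler_logs',   # hypothetical output directory
+    run_meta=run_meta,              # RunMetadata from a traced sess.run(...)
+    add_trace=True)
+```
+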
+ diff --git a/site/en/api_docs/python/tf/compat/v1/py_func.md b/site/en/api_docs/python/tf/compat/v1/py_func.md new file mode 100644 index 00000000000..c6c9fa69fb2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/py_func.md @@ -0,0 +1,134 @@ +description: Wraps a python function and uses it as a TensorFlow op. + +
+ + +
+ +# tf.compat.v1.py_func + + + + + + + + + +Wraps a python function and uses it as a TensorFlow op. + + + + + + + +Given a python function `func`, which takes numpy arrays as its +arguments and returns numpy arrays as its outputs, wrap this function as an +operation in a TensorFlow graph. The following snippet constructs a simple +TensorFlow graph that invokes the `np.sinh()` NumPy function as a operation +in the graph: + +```python +def my_func(x): + # x will be a numpy array with the contents of the placeholder below + return np.sinh(x) +input = tf.compat.v1.placeholder(tf.float32) +y = tf.compat.v1.py_func(my_func, [input], tf.float32) +``` + +**N.B.** The tf.compat.v1.py_func() operation has the following known +limitations: + +* The body of the function (i.e. `func`) will not be serialized in a + `GraphDef`. Therefore, you should not use this function if you need to + serialize your model and restore it in a different environment. + +* The operation must run in the same address space as the Python program + that calls tf.compat.v1.py_func(). If you are using distributed + TensorFlow, you + must run a tf.distribute.Server in the same process as the program that + calls + tf.compat.v1.py_func() and you must pin the created operation to a device + in that + server (e.g. using `with tf.device():`). + + + + + + + + + + + + + + + + + + + + + + +
+`func`
+
+A Python function, which accepts `ndarray` objects as arguments and
+returns a list of `ndarray` objects (or a single `ndarray`). This function
+must accept as many arguments as there are tensors in `inp`, and these
+argument types will match the corresponding tf.Tensor objects in `inp`.
+The returned `ndarray`s must match the number and types defined in `Tout`.
+Important Note: Input and output numpy `ndarray`s of `func` are not
+guaranteed to be copies. In some cases their underlying memory will be
+shared with the corresponding TensorFlow tensors. In-place modification
+or storing `func` input or return values in python data structures
+without an explicit (np.)copy can have non-deterministic consequences.
+
+`inp` + +A list of `Tensor` objects. +
+`Tout` + +A list or tuple of tensorflow data types or a single tensorflow data +type if there is only one, indicating what `func` returns. +
+`stateful` + +(Boolean.) If True, the function should be considered stateful. If +a function is stateless, when given the same input it will return the same +output and have no observable side effects. Optimizations such as common +subexpression elimination are only performed on stateless operations. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` or a single `Tensor` which `func` computes. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/python_io.md b/site/en/api_docs/python/tf/compat/v1/python_io.md new file mode 100644 index 00000000000..5b1c308c94c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/python_io.md @@ -0,0 +1,33 @@ +description: Python functions for directly manipulating TFRecord-formatted files. + +
+ + +
+ +# Module: tf.compat.v1.python_io + + + + + + + + + +Python functions for directly manipulating TFRecord-formatted files. + + + +## Classes + +[`class TFRecordCompressionType`](../../../tf/compat/v1/io/TFRecordCompressionType.md): The type of compression for the record. + +[`class TFRecordOptions`](../../../tf/io/TFRecordOptions.md): Options used for manipulating TFRecord files. + +[`class TFRecordWriter`](../../../tf/io/TFRecordWriter.md): A class to write records to a TFRecords file. + +## Functions + +[`tf_record_iterator(...)`](../../../tf/compat/v1/io/tf_record_iterator.md): An iterator that read the records from a TFRecords file. (deprecated) + diff --git a/site/en/api_docs/python/tf/compat/v1/quantization.md b/site/en/api_docs/python/tf/compat/v1/quantization.md new file mode 100644 index 00000000000..0834c0e4133 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/quantization.md @@ -0,0 +1,43 @@ +description: Public API for tf.quantization namespace. + +
+ + +
+ +# Module: tf.compat.v1.quantization + + + + + + + + + +Public API for tf.quantization namespace. + + + +## Functions + +[`dequantize(...)`](../../../tf/quantization/dequantize.md): Dequantize the 'input' tensor into a float or bfloat16 Tensor. + +[`fake_quant_with_min_max_args(...)`](../../../tf/quantization/fake_quant_with_min_max_args.md): Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. + +[`fake_quant_with_min_max_args_gradient(...)`](../../../tf/quantization/fake_quant_with_min_max_args_gradient.md): Compute gradients for a FakeQuantWithMinMaxArgs operation. + +[`fake_quant_with_min_max_vars(...)`](../../../tf/quantization/fake_quant_with_min_max_vars.md): Fake-quantize the 'inputs' tensor of type float via global float scalars `min` + +[`fake_quant_with_min_max_vars_gradient(...)`](../../../tf/quantization/fake_quant_with_min_max_vars_gradient.md): Compute gradients for a FakeQuantWithMinMaxVars operation. + +[`fake_quant_with_min_max_vars_per_channel(...)`](../../../tf/quantization/fake_quant_with_min_max_vars_per_channel.md): Fake-quantize the 'inputs' tensor of type float and one of the shapes: `[d]`, + +[`fake_quant_with_min_max_vars_per_channel_gradient(...)`](../../../tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient.md): Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. + +[`quantize(...)`](../../../tf/quantization/quantize.md): Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. + +[`quantize_and_dequantize(...)`](../../../tf/quantization/quantize_and_dequantize.md): Quantizes then dequantizes a tensor. + +[`quantized_concat(...)`](../../../tf/quantization/quantized_concat.md): Concatenates quantized tensors along one dimension. + diff --git a/site/en/api_docs/python/tf/compat/v1/quantize_v2.md b/site/en/api_docs/python/tf/compat/v1/quantize_v2.md new file mode 100644 index 00000000000..4c62b3bbfd8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/quantize_v2.md @@ -0,0 +1,35 @@ +description: Please use tf.quantization.quantize instead. + +
+ + +
+ +# tf.compat.v1.quantize_v2 + + + + + + + + + +Please use tf.quantization.quantize instead. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/queue.md b/site/en/api_docs/python/tf/compat/v1/queue.md new file mode 100644 index 00000000000..16c9dbc84c6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/queue.md @@ -0,0 +1,33 @@ +description: Public API for tf.queue namespace. + +
+ + +
+ +# Module: tf.compat.v1.queue + + + + + + + + + +Public API for tf.queue namespace. + + + +## Classes + +[`class FIFOQueue`](../../../tf/queue/FIFOQueue.md): A queue implementation that dequeues elements in first-in first-out order. + +[`class PaddingFIFOQueue`](../../../tf/queue/PaddingFIFOQueue.md): A FIFOQueue that supports batching variable-sized tensors by padding. + +[`class PriorityQueue`](../../../tf/queue/PriorityQueue.md): A queue implementation that dequeues elements in prioritized order. + +[`class QueueBase`](../../../tf/queue/QueueBase.md): Base class for queue implementations. + +[`class RandomShuffleQueue`](../../../tf/queue/RandomShuffleQueue.md): A queue implementation that dequeues elements in a random order. + diff --git a/site/en/api_docs/python/tf/compat/v1/ragged.md b/site/en/api_docs/python/tf/compat/v1/ragged.md new file mode 100644 index 00000000000..8d47db5b063 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ragged.md @@ -0,0 +1,186 @@ +description: Ragged Tensors. + +
+ + +
+ +# Module: tf.compat.v1.ragged + + + + + + + + + +Ragged Tensors. + + +This package defines ops for manipulating ragged tensors (tf.RaggedTensor), +which are tensors with non-uniform shapes. In particular, each `RaggedTensor` +has one or more *ragged dimensions*, which are dimensions whose slices may have +different lengths. For example, the inner (column) dimension of +`rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is ragged, since the column slices +(`rt[0, :]`, ..., `rt[4, :]`) have different lengths. For a more detailed +description of ragged tensors, see the tf.RaggedTensor class documentation +and the [Ragged Tensor Guide](/guide/ragged_tensors). + + +### Additional ops that support `RaggedTensor` + +Arguments that accept `RaggedTensor`s are marked in **bold**. + +* `tf.batch_gather`(**params**, **indices**, name=`None`) +* tf.bitwise.bitwise_and(**x**, **y**, name=`None`) +* tf.bitwise.bitwise_or(**x**, **y**, name=`None`) +* tf.bitwise.bitwise_xor(**x**, **y**, name=`None`) +* tf.bitwise.invert(**x**, name=`None`) +* tf.bitwise.left_shift(**x**, **y**, name=`None`) +* tf.bitwise.right_shift(**x**, **y**, name=`None`) +* tf.cast(**x**, dtype, name=`None`) +* tf.clip_by_value(**t**, clip_value_min, clip_value_max, name=`None`) +* tf.concat(**values**, axis, name=`'concat'`) +* tf.debugging.check_numerics(**tensor**, message, name=`None`) +* tf.dtypes.complex(**real**, **imag**, name=`None`) +* tf.dtypes.saturate_cast(**value**, dtype, name=`None`) +* tf.dynamic_partition(**data**, **partitions**, num_partitions, name=`None`) +* tf.expand_dims(**input**, axis=`None`, name=`None`, dim=`None`) +* tf.gather_nd(**params**, **indices**, name=`None`, batch_dims=`0`) +* tf.gather(**params**, **indices**, validate_indices=`None`, name=`None`, axis=`None`, batch_dims=`0`) +* tf.identity(**input**, name=`None`) +* tf.io.decode_base64(**input**, name=`None`) +* tf.io.decode_compressed(**bytes**, compression_type=`''`, name=`None`) +* tf.io.encode_base64(**input**, pad=`False`, name=`None`) +* tf.math.abs(**x**, name=`None`) +* tf.math.acos(**x**, name=`None`) +* tf.math.acosh(**x**, name=`None`) +* tf.math.add_n(**inputs**, name=`None`) +* tf.math.add(**x**, **y**, name=`None`) +* tf.math.angle(**input**, name=`None`) +* tf.math.asin(**x**, name=`None`) +* tf.math.asinh(**x**, name=`None`) +* tf.math.atan2(**y**, **x**, name=`None`) +* tf.math.atan(**x**, name=`None`) +* tf.math.atanh(**x**, name=`None`) +* tf.math.ceil(**x**, name=`None`) +* tf.math.conj(**x**, name=`None`) +* tf.math.cos(**x**, name=`None`) +* tf.math.cosh(**x**, name=`None`) +* tf.math.digamma(**x**, name=`None`) +* tf.math.divide_no_nan(**x**, **y**, name=`None`) +* tf.math.divide(**x**, **y**, name=`None`) +* tf.math.equal(**x**, **y**, name=`None`) +* tf.math.erf(**x**, name=`None`) +* tf.math.erfc(**x**, name=`None`) +* tf.math.erfinv(**x**, name=`None`) +* tf.math.exp(**x**, name=`None`) +* tf.math.expm1(**x**, name=`None`) +* tf.math.floor(**x**, name=`None`) +* tf.math.floordiv(**x**, **y**, name=`None`) +* tf.math.floormod(**x**, **y**, name=`None`) +* tf.math.greater_equal(**x**, **y**, name=`None`) +* tf.math.greater(**x**, **y**, name=`None`) +* tf.math.imag(**input**, name=`None`) +* tf.math.is_finite(**x**, name=`None`) +* tf.math.is_inf(**x**, name=`None`) +* tf.math.is_nan(**x**, name=`None`) +* tf.math.less_equal(**x**, **y**, name=`None`) +* tf.math.less(**x**, **y**, name=`None`) +* tf.math.lgamma(**x**, name=`None`) +* tf.math.log1p(**x**, name=`None`) +* tf.math.log_sigmoid(**x**, name=`None`) +* 
tf.math.log(**x**, name=`None`) +* tf.math.logical_and(**x**, **y**, name=`None`) +* tf.math.logical_not(**x**, name=`None`) +* tf.math.logical_or(**x**, **y**, name=`None`) +* tf.math.logical_xor(**x**, **y**, name=`'LogicalXor'`) +* tf.math.maximum(**x**, **y**, name=`None`) +* tf.math.minimum(**x**, **y**, name=`None`) +* tf.math.multiply(**x**, **y**, name=`None`) +* tf.math.ndtri(**x**, name=`None`) +* tf.math.negative(**x**, name=`None`) +* tf.math.not_equal(**x**, **y**, name=`None`) +* tf.math.pow(**x**, **y**, name=`None`) +* tf.math.real(**input**, name=`None`) +* tf.math.reciprocal(**x**, name=`None`) +* tf.math.reduce_any(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.reduce_max(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.reduce_mean(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.reduce_min(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.reduce_prod(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.reduce_sum(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.rint(**x**, name=`None`) +* tf.math.round(**x**, name=`None`) +* tf.math.rsqrt(**x**, name=`None`) +* tf.math.sign(**x**, name=`None`) +* tf.math.sin(**x**, name=`None`) +* tf.math.sinh(**x**, name=`None`) +* tf.math.sqrt(**x**, name=`None`) +* tf.math.square(**x**, name=`None`) +* tf.math.squared_difference(**x**, **y**, name=`None`) +* tf.math.subtract(**x**, **y**, name=`None`) +* tf.math.tan(**x**, name=`None`) +* tf.math.truediv(**x**, **y**, name=`None`) +* tf.math.unsorted_segment_max(**data**, **segment_ids**, num_segments, name=`None`) +* tf.math.unsorted_segment_mean(**data**, **segment_ids**, num_segments, name=`None`) +* tf.math.unsorted_segment_min(**data**, **segment_ids**, num_segments, name=`None`) +* tf.math.unsorted_segment_prod(**data**, **segment_ids**, num_segments, name=`None`) +* tf.math.unsorted_segment_sqrt_n(**data**, **segment_ids**, num_segments, name=`None`) +* tf.math.unsorted_segment_sum(**data**, **segment_ids**, num_segments, name=`None`) +* tf.one_hot(**indices**, depth, on_value=`None`, off_value=`None`, axis=`None`, dtype=`None`, name=`None`) +* tf.ones_like(**tensor**, dtype=`None`, name=`None`, optimize=`True`) +* tf.rank(**input**, name=`None`) +* tf.realdiv(**x**, **y**, name=`None`) +* tf.reduce_all(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.reverse(**tensor**, axis, name=`None`) +* tf.size(**input**, name=`None`, out_type=tf.int32) +* tf.squeeze(**input**, axis=`None`, name=`None`, squeeze_dims=`None`) +* tf.stack(**values**, axis=`0`, name=`'stack'`) +* tf.strings.as_string(**input**, precision=`-1`, scientific=`False`, shortest=`False`, width=`-1`, fill=`''`, name=`None`) +* tf.strings.join(**inputs**, separator=`''`, name=`None`) +* tf.strings.length(**input**, name=`None`, unit=`'BYTE'`) +* tf.strings.reduce_join(**inputs**, axis=`None`, keepdims=`False`, separator=`''`, name=`None`) +* tf.strings.regex_full_match(**input**, pattern, name=`None`) +* tf.strings.regex_replace(**input**, pattern, rewrite, replace_global=`True`, name=`None`) +* tf.strings.strip(**input**, name=`None`) +* tf.strings.substr(**input**, pos, len, name=`None`, unit=`'BYTE'`) +* tf.strings.to_hash_bucket_fast(**input**, num_buckets, name=`None`) +* tf.strings.to_hash_bucket_strong(**input**, num_buckets, key, name=`None`) +* tf.strings.to_hash_bucket(**input**, num_buckets, name=`None`) +* tf.strings.to_hash_bucket(**input**, 
num_buckets, name=`None`) +* tf.strings.to_number(**input**, out_type=tf.float32, name=`None`) +* tf.strings.unicode_script(**input**, name=`None`) +* tf.tile(**input**, multiples, name=`None`) +* tf.truncatediv(**x**, **y**, name=`None`) +* tf.truncatemod(**x**, **y**, name=`None`) +* tf.where(**condition**, **x**=`None`, **y**=`None`, name=`None`) +* tf.zeros_like(**tensor**, dtype=`None`, name=`None`, optimize=`True`)n + +## Classes + +[`class RaggedTensorValue`](../../../tf/compat/v1/ragged/RaggedTensorValue.md): Represents the value of a `RaggedTensor`. + +## Functions + +[`boolean_mask(...)`](../../../tf/ragged/boolean_mask.md): Applies a boolean mask to `data` without flattening the mask dimensions. + +[`constant(...)`](../../../tf/ragged/constant.md): Constructs a constant RaggedTensor from a nested Python list. + +[`constant_value(...)`](../../../tf/compat/v1/ragged/constant_value.md): Constructs a RaggedTensorValue from a nested Python list. + +[`map_flat_values(...)`](../../../tf/ragged/map_flat_values.md): Applies `op` to the values of one or more RaggedTensors. + +[`placeholder(...)`](../../../tf/compat/v1/ragged/placeholder.md): Creates a placeholder for a tf.RaggedTensor that will always be fed. + +[`range(...)`](../../../tf/ragged/range.md): Returns a `RaggedTensor` containing the specified sequences of numbers. + +[`row_splits_to_segment_ids(...)`](../../../tf/ragged/row_splits_to_segment_ids.md): Generates the segmentation corresponding to a RaggedTensor `row_splits`. + +[`segment_ids_to_row_splits(...)`](../../../tf/ragged/segment_ids_to_row_splits.md): Generates the RaggedTensor `row_splits` corresponding to a segmentation. + +[`stack(...)`](../../../tf/ragged/stack.md): Stacks a list of rank-`R` tensors into one rank-`(R+1)` `RaggedTensor`. + +[`stack_dynamic_partitions(...)`](../../../tf/ragged/stack_dynamic_partitions.md): Stacks dynamic partitions of a Tensor or RaggedTensor. + diff --git a/site/en/api_docs/python/tf/compat/v1/ragged/RaggedTensorValue.md b/site/en/api_docs/python/tf/compat/v1/ragged/RaggedTensorValue.md new file mode 100644 index 00000000000..70574033dae --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ragged/RaggedTensorValue.md @@ -0,0 +1,141 @@ +description: Represents the value of a RaggedTensor. + +
+ + + + +
+ +# tf.compat.v1.ragged.RaggedTensorValue + + + + + + + + + +Represents the value of a `RaggedTensor`. + + + + + + + +Warning: `RaggedTensorValue` should only be used in graph mode; in +eager mode, the tf.RaggedTensor class contains its value directly. + +See tf.RaggedTensor for a description of ragged tensors. + + + + + + + + + + + + + +
+`values` + +A numpy array of any type and shape; or a RaggedTensorValue. +
+`row_splits` + +A 1-D int32 or int64 numpy array. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +The numpy dtype of values in this tensor. +
+`flat_values` + +The innermost `values` array for this ragged tensor value. +
+`nested_row_splits` + +The row_splits for all ragged dimensions in this ragged tensor value. +
+`ragged_rank` + +The number of ragged dimensions in this ragged tensor value. +
+`row_splits` + +The split indices for the ragged tensor value. +
+`shape` + +A tuple indicating the shape of this RaggedTensorValue. +
+`values` + +The concatenated values for all rows in this tensor. +
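#### Example:

A minimal sketch of constructing a `RaggedTensorValue` directly from numpy arrays; the particular values and row splits below are illustrative, not part of the API:

```python
import numpy as np
import tensorflow as tf

# Two ragged rows: [3, 1, 4] and [1, 5].
rtv = tf.compat.v1.ragged.RaggedTensorValue(
    values=np.array([3, 1, 4, 1, 5]),
    row_splits=np.array([0, 3, 5], dtype=np.int64))

print(rtv.ragged_rank)  # 1
print(rtv.to_list())    # [[3, 1, 4], [1, 5]]
```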
+ + + +## Methods + +

to_list

+ +View source + + + +Returns this ragged tensor value as a nested Python list. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/ragged/constant_value.md b/site/en/api_docs/python/tf/compat/v1/ragged/constant_value.md new file mode 100644 index 00000000000..8888eec42a9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ragged/constant_value.md @@ -0,0 +1,139 @@ +description: Constructs a RaggedTensorValue from a nested Python list. + +
+ + +
+ +# tf.compat.v1.ragged.constant_value + + + + + + + + + +Constructs a RaggedTensorValue from a nested Python list. + + + + + + + +Warning: This function returns a `RaggedTensorValue`, not a `RaggedTensor`. +If you wish to construct a constant `RaggedTensor`, use +[`ragged.constant(...)`](constant.md) instead. + +#### Example: + + + +``` +>>> tf.compat.v1.ragged.constant_value([[1, 2], [3], [4, 5, 6]]) +tf.RaggedTensorValue(values=array([1, 2, 3, 4, 5, 6]), + row_splits=array([0, 2, 3, 6])) +``` + +All scalar values in `pylist` must have the same nesting depth `K`, and the +returned `RaggedTensorValue` will have rank `K`. If `pylist` contains no +scalar values, then `K` is one greater than the maximum depth of empty lists +in `pylist`. All scalar values in `pylist` must be compatible with `dtype`. + + + + + + + + + + + + + + + + + + + + + + +
+`pylist` + +A nested `list`, `tuple` or `np.ndarray`. Any nested element that +is not a `list` or `tuple` must be a scalar value compatible with `dtype`. +
+`dtype` + +`numpy.dtype`. The type of elements for the returned `RaggedTensor`. +If not specified, then a default is chosen based on the scalar values in +`pylist`. +
+`ragged_rank` + +An integer specifying the ragged rank of the returned +`RaggedTensorValue`. Must be nonnegative and less than `K`. Defaults to +`max(0, K - 1)` if `inner_shape` is not specified. Defaults to +`max(0, K - 1 - len(inner_shape))` if `inner_shape` is specified. +
+`inner_shape` + +A tuple of integers specifying the shape for individual inner +values in the returned `RaggedTensorValue`. Defaults to `()` if +`ragged_rank` is not specified. If `ragged_rank` is specified, then a +default is chosen based on the contents of `pylist`. +
+`row_splits_dtype` + +data type for the constructed `RaggedTensorValue`'s +row_splits. One of `numpy.int32` or `numpy.int64`. +
+ + + + + + + + + + + +
+A `tf.RaggedTensorValue` or `numpy.array` with rank `K` and the specified +`ragged_rank`, containing the values from `pylist`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the scalar values in `pylist` have inconsistent nesting +depth; or if ragged_rank or inner_shape are incompatible with `pylist`. +
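A sketch of how `ragged_rank` interacts with the nesting depth `K`; the nested list below is an illustrative choice, and the exact numpy reprs are omitted:

```python
import tensorflow as tf

pylist = [[[1, 2], [3, 4]], [[5, 6]]]   # nesting depth K = 3

# Default ragged_rank is K - 1 = 2: both list dimensions are ragged, so
# flat_values is the 1-D array [1, 2, 3, 4, 5, 6].
v2 = tf.compat.v1.ragged.constant_value(pylist)

# With ragged_rank=1 the innermost (uniformly length-2) dimension stays
# dense, so flat_values should be a 2-D numpy array of shape (3, 2).
v1 = tf.compat.v1.ragged.constant_value(pylist, ragged_rank=1)
```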
+ diff --git a/site/en/api_docs/python/tf/compat/v1/ragged/placeholder.md b/site/en/api_docs/python/tf/compat/v1/ragged/placeholder.md new file mode 100644 index 00000000000..3813df682b3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/ragged/placeholder.md @@ -0,0 +1,108 @@ +description: Creates a placeholder for a tf.RaggedTensor that will always be fed. + +
+ + +
+ +# tf.compat.v1.ragged.placeholder + + + + + + + + + +Creates a placeholder for a tf.RaggedTensor that will always be fed. + + + + + + + +**Important**: This ragged tensor will produce an error if evaluated. +Its value must be fed using the `feed_dict` optional argument to +`Session.run()`, `Tensor.eval()`, or `Operation.run()`. + +#### Eager compatibility + +Placeholders are not compatible with eager execution. + + + + + + + + + + + + + + + + + + + +
+`dtype` + +The data type for the `RaggedTensor`. +
+`ragged_rank` + +The ragged rank for the `RaggedTensor`. +
+`value_shape` + +The shape for individual flat values in the `RaggedTensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `RaggedTensor` that may be used as a handle for feeding a value, but +not evaluated directly. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If eager execution is enabled. +
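#### Example:

A minimal graph-mode sketch. It assumes that a `RaggedTensorValue` built with `tf.compat.v1.ragged.constant_value` can be passed in `feed_dict` for the ragged placeholder (the TF 1.x-style composite-tensor feeding pattern); the ragged rank, dtype, and fed values are arbitrary illustration choices:

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A placeholder for a float32 RaggedTensor with one ragged dimension.
rt = tf.compat.v1.ragged.placeholder(tf.float32, ragged_rank=1)
row_lengths = rt.row_lengths()

with tf.compat.v1.Session() as sess:
  feed = tf.compat.v1.ragged.constant_value(
      [[1.0, 2.0, 3.0], [4.0]], dtype=np.float32)
  print(sess.run(row_lengths, feed_dict={rt: feed}))  # expected: [3 1]
```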
+ diff --git a/site/en/api_docs/python/tf/compat/v1/random.md b/site/en/api_docs/python/tf/compat/v1/random.md new file mode 100644 index 00000000000..4c32ce48838 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/random.md @@ -0,0 +1,85 @@ +description: Public API for tf.random namespace. + +
+ + +
+ +# Module: tf.compat.v1.random + + + + + + + + + +Public API for tf.random namespace. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/random/experimental.md) module: Public API for tf.random.experimental namespace. + +## Classes + +[`class Algorithm`](../../../tf/random/Algorithm.md): An enumeration. + +[`class Generator`](../../../tf/random/Generator.md): Random-number generator. + +## Functions + +[`all_candidate_sampler(...)`](../../../tf/random/all_candidate_sampler.md): Generate the set of all classes. + +[`categorical(...)`](../../../tf/random/categorical.md): Draws samples from a categorical distribution. + +[`create_rng_state(...)`](../../../tf/random/create_rng_state.md): Creates a RNG state from an integer or a vector. + +[`fixed_unigram_candidate_sampler(...)`](../../../tf/random/fixed_unigram_candidate_sampler.md): Samples a set of classes using the provided (fixed) base distribution. + +[`gamma(...)`](../../../tf/random/gamma.md): Draws `shape` samples from each of the given Gamma distribution(s). + +[`get_global_generator(...)`](../../../tf/random/get_global_generator.md): Retrieves the global generator. + +[`get_seed(...)`](../../../tf/compat/v1/get_seed.md): Returns the local seeds an operation should use given an op-specific seed. + +[`learned_unigram_candidate_sampler(...)`](../../../tf/random/learned_unigram_candidate_sampler.md): Samples a set of classes from a distribution learned during training. + +[`log_uniform_candidate_sampler(...)`](../../../tf/random/log_uniform_candidate_sampler.md): Samples a set of classes using a log-uniform (Zipfian) base distribution. + +[`multinomial(...)`](../../../tf/compat/v1/multinomial.md): Draws samples from a multinomial distribution. (deprecated) + +[`normal(...)`](../../../tf/random/normal.md): Outputs random values from a normal distribution. + +[`poisson(...)`](../../../tf/compat/v1/random_poisson.md): Draws `shape` samples from each of the given Poisson distribution(s). + +[`set_global_generator(...)`](../../../tf/random/set_global_generator.md): Replaces the global generator with another `Generator` object. + +[`set_random_seed(...)`](../../../tf/compat/v1/set_random_seed.md): Sets the graph-level random seed for the default graph. + +[`shuffle(...)`](../../../tf/random/shuffle.md): Randomly shuffles a tensor along its first dimension. + +[`stateless_binomial(...)`](../../../tf/random/stateless_binomial.md): Outputs deterministic pseudorandom values from a binomial distribution. + +[`stateless_categorical(...)`](../../../tf/random/stateless_categorical.md): Draws deterministic pseudorandom samples from a categorical distribution. + +[`stateless_gamma(...)`](../../../tf/random/stateless_gamma.md): Outputs deterministic pseudorandom values from a gamma distribution. + +[`stateless_multinomial(...)`](../../../tf/compat/v1/random/stateless_multinomial.md): Draws deterministic pseudorandom samples from a multinomial distribution. (deprecated) + +[`stateless_normal(...)`](../../../tf/random/stateless_normal.md): Outputs deterministic pseudorandom values from a normal distribution. + +[`stateless_poisson(...)`](../../../tf/random/stateless_poisson.md): Outputs deterministic pseudorandom values from a Poisson distribution. + +[`stateless_truncated_normal(...)`](../../../tf/random/stateless_truncated_normal.md): Outputs deterministic pseudorandom values, truncated normally distributed. 
+ +[`stateless_uniform(...)`](../../../tf/random/stateless_uniform.md): Outputs deterministic pseudorandom values from a uniform distribution. + +[`truncated_normal(...)`](../../../tf/random/truncated_normal.md): Outputs random values from a truncated normal distribution. + +[`uniform(...)`](../../../tf/random/uniform.md): Outputs random values from a uniform distribution. + +[`uniform_candidate_sampler(...)`](../../../tf/random/uniform_candidate_sampler.md): Samples a set of classes using a uniform base distribution. + diff --git a/site/en/api_docs/python/tf/compat/v1/random/experimental.md b/site/en/api_docs/python/tf/compat/v1/random/experimental.md new file mode 100644 index 00000000000..9e9350507d7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/random/experimental.md @@ -0,0 +1,35 @@ +description: Public API for tf.random.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.random.experimental + + + + + + + + + +Public API for tf.random.experimental namespace. + + + +## Classes + +[`class Algorithm`](../../../../tf/random/Algorithm.md): An enumeration. + +[`class Generator`](../../../../tf/random/Generator.md): Random-number generator. + +## Functions + +[`create_rng_state(...)`](../../../../tf/random/create_rng_state.md): Creates a RNG state from an integer or a vector. + +[`get_global_generator(...)`](../../../../tf/random/get_global_generator.md): Retrieves the global generator. + +[`set_global_generator(...)`](../../../../tf/random/set_global_generator.md): Replaces the global generator with another `Generator` object. + diff --git a/site/en/api_docs/python/tf/compat/v1/random/stateless_multinomial.md b/site/en/api_docs/python/tf/compat/v1/random/stateless_multinomial.md new file mode 100644 index 00000000000..9a233e40653 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/random/stateless_multinomial.md @@ -0,0 +1,113 @@ +description: Draws deterministic pseudorandom samples from a multinomial distribution. (deprecated) + +
+ + +
+ +# tf.compat.v1.random.stateless_multinomial + + + + + + + + + +Draws deterministic pseudorandom samples from a multinomial distribution. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.random.stateless_categorical instead. + +This is a stateless version of tf.random.categorical: if run twice with the +same seeds, it will produce the same pseudorandom numbers. The output is +consistent across multiple runs on the same hardware (and between CPU +and GPU), but may change between versions of TensorFlow or on non-CPU/GPU +hardware. + +#### Example: + + + +```python +# samples has shape [1, 5], where each value is either 0 or 1 with equal +# probability. +samples = tf.random.stateless_categorical( + tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17]) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`logits` + +2-D Tensor with shape `[batch_size, num_classes]`. Each slice +`[i, :]` represents the unnormalized log-probabilities for all classes. +
+`num_samples` + +0-D. Number of independent samples to draw for each row slice. +
+`seed` + +A shape [2] integer Tensor of seeds to the random number generator. +
+`output_dtype` + +integer type to use for the output. Defaults to int64. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+The drawn samples of shape `[batch_size, num_samples]`. +
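As the deprecation notice above suggests, new code can call `tf.random.stateless_categorical` directly. A small sketch of the stateless property; the logits and seed values are arbitrary illustration choices:

```python
import tensorflow as tf

logits = tf.math.log([[0.25, 0.25, 0.25, 0.25]])

# The same seed always yields the same samples on the same hardware.
s1 = tf.random.stateless_categorical(logits, num_samples=3, seed=[1, 2])
s2 = tf.random.stateless_categorical(logits, num_samples=3, seed=[1, 2])
assert bool(tf.reduce_all(tf.equal(s1, s2)))
```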
+ diff --git a/site/en/api_docs/python/tf/compat/v1/random_normal_initializer.md b/site/en/api_docs/python/tf/compat/v1/random_normal_initializer.md new file mode 100644 index 00000000000..e3dec37efcd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/random_normal_initializer.md @@ -0,0 +1,225 @@ +description: Initializer that generates tensors with a normal distribution. + +
+ + + + + + +
+ +# tf.compat.v1.random_normal_initializer + + + + + + + + + +Initializer that generates tensors with a normal distribution. + +Inherits From: [`Initializer`](../../../tf/compat/v1/keras/initializers/Initializer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`mean` + +a python scalar or a scalar tensor. Mean of the random values to +generate. +
+`stddev` + +a python scalar or a scalar tensor. Standard deviation of the random +values to generate. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +Default data type, used if no `dtype` argument is provided when +calling the initializer. Only floating point types are supported. +
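#### Example:

A minimal graph-mode usage sketch; the variable name, shape, and `stddev` are arbitrary choices for illustration:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

init = tf.compat.v1.random_normal_initializer(mean=0.0, stddev=0.05, seed=1)
w = tf.compat.v1.get_variable("w", shape=[3, 4], initializer=init)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.global_variables_initializer())
  print(sess.run(w).std())  # should be roughly 0.05
```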
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
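For illustration, the initializer can also be invoked directly to materialize a tensor of a given shape; the shape, dtype, and seed below are arbitrary sketch values:

```python
import tensorflow as tf

init = tf.compat.v1.random_normal_initializer(stddev=0.1, seed=7)
t = init(shape=(2, 3), dtype=tf.float32)  # a 2x3 tensor of normal samples
```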
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/random_poisson.md b/site/en/api_docs/python/tf/compat/v1/random_poisson.md new file mode 100644 index 00000000000..9a783c721d6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/random_poisson.md @@ -0,0 +1,129 @@ +description: Draws shape samples from each of the given Poisson distribution(s). + +
+ + +
+ +# tf.compat.v1.random_poisson + + + + + + + + + +Draws `shape` samples from each of the given Poisson distribution(s). + + + + + + + + + +`lam` is the rate parameter describing the distribution(s). + +#### Example: + + + +```python +samples = tf.random.poisson([0.5, 1.5], [10]) +# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents +# the samples drawn from each distribution + +samples = tf.random.poisson([12.2, 3.3], [7, 5]) +# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1] +# represents the 7x5 samples drawn from each of the two distributions +``` + + + + + + + + + + + + + + + + + + + + + + +
+`lam` + +A Tensor or Python value or N-D array of type `dtype`. +`lam` provides the rate parameter(s) describing the poisson +distribution(s) to sample. +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output samples +to be drawn per "rate"-parameterized distribution. +
+`dtype` + +The type of the output: `float16`, `float32`, `float64`, `int32` or +`int64`. +
+`seed` + +A Python integer. Used to create a random seed for the distributions. +See +tf.random.set_seed +for behavior. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + + +
+`samples` + +a `Tensor` of shape `tf.concat([shape, tf.shape(lam)], axis=0)` +with values of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/random_uniform_initializer.md b/site/en/api_docs/python/tf/compat/v1/random_uniform_initializer.md new file mode 100644 index 00000000000..31b8012861d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/random_uniform_initializer.md @@ -0,0 +1,225 @@ +description: Initializer that generates tensors with a uniform distribution. + +
+ + + + + + +
+ +# tf.compat.v1.random_uniform_initializer + + + + + + + + + +Initializer that generates tensors with a uniform distribution. + +Inherits From: [`Initializer`](../../../tf/compat/v1/keras/initializers/Initializer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`minval` + +A python scalar or a scalar tensor. Lower bound of the range of +random values to generate. +
+`maxval` + +A python scalar or a scalar tensor. Upper bound of the range of +random values to generate. Defaults to 1 for float types. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +Default data type, used if no `dtype` argument is provided when +calling the initializer. +
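#### Example:

A minimal sketch that calls the initializer and wraps the result in a variable; the bounds, shape, and seed are arbitrary illustration values:

```python
import tensorflow as tf

init = tf.compat.v1.random_uniform_initializer(minval=-0.05, maxval=0.05, seed=1)
v = tf.Variable(init(shape=[3, 4], dtype=tf.float32))
# All entries of v lie in [-0.05, 0.05).
```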
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/raw_ops.md b/site/en/api_docs/python/tf/compat/v1/raw_ops.md new file mode 100644 index 00000000000..bed30cd5e04 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/raw_ops.md @@ -0,0 +1,2545 @@ +description: Public API for tf.raw_ops namespace. + +
+ + +
+ +# Module: tf.compat.v1.raw_ops + + + + + + + + + +Public API for tf.raw_ops namespace. + + + +## Functions + +[`Abort(...)`](../../../tf/raw_ops/Abort.md): Raise a exception to abort the process when called. + +[`Abs(...)`](../../../tf/raw_ops/Abs.md): Computes the absolute value of a tensor. + +[`AccumulateNV2(...)`](../../../tf/raw_ops/AccumulateNV2.md): Returns the element-wise sum of a list of tensors. + +[`AccumulatorApplyGradient(...)`](../../../tf/raw_ops/AccumulatorApplyGradient.md): Applies a gradient to a given accumulator. + +[`AccumulatorNumAccumulated(...)`](../../../tf/raw_ops/AccumulatorNumAccumulated.md): Returns the number of gradients aggregated in the given accumulators. + +[`AccumulatorSetGlobalStep(...)`](../../../tf/raw_ops/AccumulatorSetGlobalStep.md): Updates the accumulator with a new value for global_step. + +[`AccumulatorTakeGradient(...)`](../../../tf/raw_ops/AccumulatorTakeGradient.md): Extracts the average gradient in the given ConditionalAccumulator. + +[`Acos(...)`](../../../tf/raw_ops/Acos.md): Computes acos of x element-wise. + +[`Acosh(...)`](../../../tf/raw_ops/Acosh.md): Computes inverse hyperbolic cosine of x element-wise. + +[`Add(...)`](../../../tf/raw_ops/Add.md): Returns x + y element-wise. + +[`AddManySparseToTensorsMap(...)`](../../../tf/raw_ops/AddManySparseToTensorsMap.md): Add an `N`-minibatch `SparseTensor` to a `SparseTensorsMap`, return `N` handles. + +[`AddN(...)`](../../../tf/raw_ops/AddN.md): Add all input tensors element wise. + +[`AddSparseToTensorsMap(...)`](../../../tf/raw_ops/AddSparseToTensorsMap.md): Add a `SparseTensor` to a `SparseTensorsMap` return its handle. + +[`AddV2(...)`](../../../tf/raw_ops/AddV2.md): Returns x + y element-wise. + +[`AdjustContrast(...)`](../../../tf/raw_ops/AdjustContrast.md): Deprecated. Disallowed in GraphDef version >= 2. + +[`AdjustContrastv2(...)`](../../../tf/raw_ops/AdjustContrastv2.md): Adjust the contrast of one or more images. + +[`AdjustHue(...)`](../../../tf/raw_ops/AdjustHue.md): Adjust the hue of one or more images. + +[`AdjustSaturation(...)`](../../../tf/raw_ops/AdjustSaturation.md): Adjust the saturation of one or more images. + +[`All(...)`](../../../tf/raw_ops/All.md): Computes the "logical and" of elements across dimensions of a tensor. + +[`AllCandidateSampler(...)`](../../../tf/raw_ops/AllCandidateSampler.md): Generates labels for candidate sampling with a learned unigram distribution. + +[`AllToAll(...)`](../../../tf/raw_ops/AllToAll.md): An Op to exchange data across TPU replicas. + +[`Angle(...)`](../../../tf/raw_ops/Angle.md): Returns the argument of a complex number. + +[`AnonymousIterator(...)`](../../../tf/raw_ops/AnonymousIterator.md): A container for an iterator resource. + +[`AnonymousIteratorV2(...)`](../../../tf/raw_ops/AnonymousIteratorV2.md): A container for an iterator resource. + +[`AnonymousMemoryCache(...)`](../../../tf/raw_ops/AnonymousMemoryCache.md) + +[`AnonymousMultiDeviceIterator(...)`](../../../tf/raw_ops/AnonymousMultiDeviceIterator.md): A container for a multi device iterator resource. + +[`AnonymousRandomSeedGenerator(...)`](../../../tf/raw_ops/AnonymousRandomSeedGenerator.md) + +[`Any(...)`](../../../tf/raw_ops/Any.md): Computes the "logical or" of elements across dimensions of a tensor. + +[`ApplyAdaMax(...)`](../../../tf/raw_ops/ApplyAdaMax.md): Update '*var' according to the AdaMax algorithm. + +[`ApplyAdadelta(...)`](../../../tf/raw_ops/ApplyAdadelta.md): Update '*var' according to the adadelta scheme. 
+ +[`ApplyAdagrad(...)`](../../../tf/raw_ops/ApplyAdagrad.md): Update '*var' according to the adagrad scheme. + +[`ApplyAdagradDA(...)`](../../../tf/raw_ops/ApplyAdagradDA.md): Update '*var' according to the proximal adagrad scheme. + +[`ApplyAdagradV2(...)`](../../../tf/raw_ops/ApplyAdagradV2.md): Update '*var' according to the adagrad scheme. + +[`ApplyAdam(...)`](../../../tf/raw_ops/ApplyAdam.md): Update '*var' according to the Adam algorithm. + +[`ApplyAddSign(...)`](../../../tf/raw_ops/ApplyAddSign.md): Update '*var' according to the AddSign update. + +[`ApplyCenteredRMSProp(...)`](../../../tf/raw_ops/ApplyCenteredRMSProp.md): Update '*var' according to the centered RMSProp algorithm. + +[`ApplyFtrl(...)`](../../../tf/raw_ops/ApplyFtrl.md): Update '*var' according to the Ftrl-proximal scheme. + +[`ApplyFtrlV2(...)`](../../../tf/raw_ops/ApplyFtrlV2.md): Update '*var' according to the Ftrl-proximal scheme. + +[`ApplyGradientDescent(...)`](../../../tf/raw_ops/ApplyGradientDescent.md): Update '*var' by subtracting 'alpha' * 'delta' from it. + +[`ApplyMomentum(...)`](../../../tf/raw_ops/ApplyMomentum.md): Update '*var' according to the momentum scheme. + +[`ApplyPowerSign(...)`](../../../tf/raw_ops/ApplyPowerSign.md): Update '*var' according to the AddSign update. + +[`ApplyProximalAdagrad(...)`](../../../tf/raw_ops/ApplyProximalAdagrad.md): Update '*var' and '*accum' according to FOBOS with Adagrad learning rate. + +[`ApplyProximalGradientDescent(...)`](../../../tf/raw_ops/ApplyProximalGradientDescent.md): Update '*var' as FOBOS algorithm with fixed learning rate. + +[`ApplyRMSProp(...)`](../../../tf/raw_ops/ApplyRMSProp.md): Update '*var' according to the RMSProp algorithm. + +[`ApproximateEqual(...)`](../../../tf/raw_ops/ApproximateEqual.md): Returns the truth value of abs(x-y) < tolerance element-wise. + +[`ArgMax(...)`](../../../tf/raw_ops/ArgMax.md): Returns the index with the largest value across dimensions of a tensor. + +[`ArgMin(...)`](../../../tf/raw_ops/ArgMin.md): Returns the index with the smallest value across dimensions of a tensor. + +[`AsString(...)`](../../../tf/raw_ops/AsString.md): Converts each entry in the given tensor to strings. + +[`Asin(...)`](../../../tf/raw_ops/Asin.md): Computes the trignometric inverse sine of x element-wise. + +[`Asinh(...)`](../../../tf/raw_ops/Asinh.md): Computes inverse hyperbolic sine of x element-wise. + +[`Assert(...)`](../../../tf/raw_ops/Assert.md): Asserts that the given condition is true. + +[`AssertCardinalityDataset(...)`](../../../tf/raw_ops/AssertCardinalityDataset.md) + +[`AssertNextDataset(...)`](../../../tf/raw_ops/AssertNextDataset.md): A transformation that asserts which transformations happen next. + +[`Assign(...)`](../../../tf/raw_ops/Assign.md): Update 'ref' by assigning 'value' to it. + +[`AssignAdd(...)`](../../../tf/raw_ops/AssignAdd.md): Update 'ref' by adding 'value' to it. + +[`AssignAddVariableOp(...)`](../../../tf/raw_ops/AssignAddVariableOp.md): Adds a value to the current value of a variable. + +[`AssignSub(...)`](../../../tf/raw_ops/AssignSub.md): Update 'ref' by subtracting 'value' from it. + +[`AssignSubVariableOp(...)`](../../../tf/raw_ops/AssignSubVariableOp.md): Subtracts a value from the current value of a variable. + +[`AssignVariableOp(...)`](../../../tf/raw_ops/AssignVariableOp.md): Assigns a new value to a variable. + +[`Atan(...)`](../../../tf/raw_ops/Atan.md): Computes the trignometric inverse tangent of x element-wise. 
+ +[`Atan2(...)`](../../../tf/raw_ops/Atan2.md): Computes arctangent of `y/x` element-wise, respecting signs of the arguments. + +[`Atanh(...)`](../../../tf/raw_ops/Atanh.md): Computes inverse hyperbolic tangent of x element-wise. + +[`AudioSpectrogram(...)`](../../../tf/raw_ops/AudioSpectrogram.md): Produces a visualization of audio data over time. + +[`AudioSummary(...)`](../../../tf/raw_ops/AudioSummary.md): Outputs a `Summary` protocol buffer with audio. + +[`AudioSummaryV2(...)`](../../../tf/raw_ops/AudioSummaryV2.md): Outputs a `Summary` protocol buffer with audio. + +[`AutoShardDataset(...)`](../../../tf/raw_ops/AutoShardDataset.md): Creates a dataset that shards the input dataset. + +[`AvgPool(...)`](../../../tf/raw_ops/AvgPool.md): Performs average pooling on the input. + +[`AvgPool3D(...)`](../../../tf/raw_ops/AvgPool3D.md): Performs 3D average pooling on the input. + +[`AvgPool3DGrad(...)`](../../../tf/raw_ops/AvgPool3DGrad.md): Computes gradients of average pooling function. + +[`AvgPoolGrad(...)`](../../../tf/raw_ops/AvgPoolGrad.md): Computes gradients of the average pooling function. + +[`Barrier(...)`](../../../tf/raw_ops/Barrier.md): Defines a barrier that persists across different graph executions. + +[`BarrierClose(...)`](../../../tf/raw_ops/BarrierClose.md): Closes the given barrier. + +[`BarrierIncompleteSize(...)`](../../../tf/raw_ops/BarrierIncompleteSize.md): Computes the number of incomplete elements in the given barrier. + +[`BarrierInsertMany(...)`](../../../tf/raw_ops/BarrierInsertMany.md): For each key, assigns the respective value to the specified component. + +[`BarrierReadySize(...)`](../../../tf/raw_ops/BarrierReadySize.md): Computes the number of complete elements in the given barrier. + +[`BarrierTakeMany(...)`](../../../tf/raw_ops/BarrierTakeMany.md): Takes the given number of completed elements from a barrier. + +[`Batch(...)`](../../../tf/raw_ops/Batch.md): Batches all input tensors nondeterministically. + +[`BatchCholesky(...)`](../../../tf/raw_ops/BatchCholesky.md) + +[`BatchCholeskyGrad(...)`](../../../tf/raw_ops/BatchCholeskyGrad.md) + +[`BatchDataset(...)`](../../../tf/raw_ops/BatchDataset.md): Creates a dataset that batches `batch_size` elements from `input_dataset`. + +[`BatchDatasetV2(...)`](../../../tf/raw_ops/BatchDatasetV2.md): Creates a dataset that batches `batch_size` elements from `input_dataset`. + +[`BatchFFT(...)`](../../../tf/raw_ops/BatchFFT.md) + +[`BatchFFT2D(...)`](../../../tf/raw_ops/BatchFFT2D.md) + +[`BatchFFT3D(...)`](../../../tf/raw_ops/BatchFFT3D.md) + +[`BatchFunction(...)`](../../../tf/raw_ops/BatchFunction.md): Batches all the inputs tensors to the computation done by the function. + +[`BatchIFFT(...)`](../../../tf/raw_ops/BatchIFFT.md) + +[`BatchIFFT2D(...)`](../../../tf/raw_ops/BatchIFFT2D.md) + +[`BatchIFFT3D(...)`](../../../tf/raw_ops/BatchIFFT3D.md) + +[`BatchMatMul(...)`](../../../tf/raw_ops/BatchMatMul.md): Multiplies slices of two tensors in batches. + +[`BatchMatMulV2(...)`](../../../tf/raw_ops/BatchMatMulV2.md): Multiplies slices of two tensors in batches. 
+ +[`BatchMatrixBandPart(...)`](../../../tf/raw_ops/BatchMatrixBandPart.md) + +[`BatchMatrixDeterminant(...)`](../../../tf/raw_ops/BatchMatrixDeterminant.md) + +[`BatchMatrixDiag(...)`](../../../tf/raw_ops/BatchMatrixDiag.md) + +[`BatchMatrixDiagPart(...)`](../../../tf/raw_ops/BatchMatrixDiagPart.md) + +[`BatchMatrixInverse(...)`](../../../tf/raw_ops/BatchMatrixInverse.md) + +[`BatchMatrixSetDiag(...)`](../../../tf/raw_ops/BatchMatrixSetDiag.md) + +[`BatchMatrixSolve(...)`](../../../tf/raw_ops/BatchMatrixSolve.md) + +[`BatchMatrixSolveLs(...)`](../../../tf/raw_ops/BatchMatrixSolveLs.md) + +[`BatchMatrixTriangularSolve(...)`](../../../tf/raw_ops/BatchMatrixTriangularSolve.md) + +[`BatchNormWithGlobalNormalization(...)`](../../../tf/raw_ops/BatchNormWithGlobalNormalization.md): Batch normalization. + +[`BatchNormWithGlobalNormalizationGrad(...)`](../../../tf/raw_ops/BatchNormWithGlobalNormalizationGrad.md): Gradients for batch normalization. + +[`BatchSelfAdjointEig(...)`](../../../tf/raw_ops/BatchSelfAdjointEig.md) + +[`BatchSelfAdjointEigV2(...)`](../../../tf/raw_ops/BatchSelfAdjointEigV2.md) + +[`BatchSvd(...)`](../../../tf/raw_ops/BatchSvd.md) + +[`BatchToSpace(...)`](../../../tf/raw_ops/BatchToSpace.md): BatchToSpace for 4-D tensors of type T. + +[`BatchToSpaceND(...)`](../../../tf/raw_ops/BatchToSpaceND.md): BatchToSpace for N-D tensors of type T. + +[`BesselI0e(...)`](../../../tf/raw_ops/BesselI0e.md): Computes the Bessel i0e function of `x` element-wise. + +[`BesselI1e(...)`](../../../tf/raw_ops/BesselI1e.md): Computes the Bessel i1e function of `x` element-wise. + +[`Betainc(...)`](../../../tf/raw_ops/Betainc.md): Compute the regularized incomplete beta integral \\(I_x(a, b)\\). + +[`BiasAdd(...)`](../../../tf/raw_ops/BiasAdd.md): Adds `bias` to `value`. + +[`BiasAddGrad(...)`](../../../tf/raw_ops/BiasAddGrad.md): The backward operation for "BiasAdd" on the "bias" tensor. + +[`BiasAddV1(...)`](../../../tf/raw_ops/BiasAddV1.md): Adds `bias` to `value`. + +[`Bincount(...)`](../../../tf/raw_ops/Bincount.md): Counts the number of occurrences of each value in an integer array. + +[`Bitcast(...)`](../../../tf/raw_ops/Bitcast.md): Bitcasts a tensor from one type to another without copying data. + +[`BitwiseAnd(...)`](../../../tf/raw_ops/BitwiseAnd.md): Elementwise computes the bitwise AND of `x` and `y`. + +[`BitwiseOr(...)`](../../../tf/raw_ops/BitwiseOr.md): Elementwise computes the bitwise OR of `x` and `y`. + +[`BitwiseXor(...)`](../../../tf/raw_ops/BitwiseXor.md): Elementwise computes the bitwise XOR of `x` and `y`. + +[`BlockLSTM(...)`](../../../tf/raw_ops/BlockLSTM.md): Computes the LSTM cell forward propagation for all the time steps. + +[`BlockLSTMGrad(...)`](../../../tf/raw_ops/BlockLSTMGrad.md): Computes the LSTM cell backward propagation for the entire time sequence. + +[`BlockLSTMGradV2(...)`](../../../tf/raw_ops/BlockLSTMGradV2.md): Computes the LSTM cell backward propagation for the entire time sequence. + +[`BlockLSTMV2(...)`](../../../tf/raw_ops/BlockLSTMV2.md): Computes the LSTM cell forward propagation for all the time steps. + +[`BoostedTreesAggregateStats(...)`](../../../tf/raw_ops/BoostedTreesAggregateStats.md): Aggregates the summary of accumulated stats for the batch. + +[`BoostedTreesBucketize(...)`](../../../tf/raw_ops/BoostedTreesBucketize.md): Bucketize each feature based on bucket boundaries. 
+ +[`BoostedTreesCalculateBestFeatureSplit(...)`](../../../tf/raw_ops/BoostedTreesCalculateBestFeatureSplit.md): Calculates gains for each feature and returns the best possible split information for the feature. + +[`BoostedTreesCalculateBestFeatureSplitV2(...)`](../../../tf/raw_ops/BoostedTreesCalculateBestFeatureSplitV2.md): Calculates gains for each feature and returns the best possible split information for each node. However, if no split is found, then no split information is returned for that node. + +[`BoostedTreesCalculateBestGainsPerFeature(...)`](../../../tf/raw_ops/BoostedTreesCalculateBestGainsPerFeature.md): Calculates gains for each feature and returns the best possible split information for the feature. + +[`BoostedTreesCenterBias(...)`](../../../tf/raw_ops/BoostedTreesCenterBias.md): Calculates the prior from the training data (the bias) and fills in the first node with the logits' prior. Returns a boolean indicating whether to continue centering. + +[`BoostedTreesCreateEnsemble(...)`](../../../tf/raw_ops/BoostedTreesCreateEnsemble.md): Creates a tree ensemble model and returns a handle to it. + +[`BoostedTreesCreateQuantileStreamResource(...)`](../../../tf/raw_ops/BoostedTreesCreateQuantileStreamResource.md): Create the Resource for Quantile Streams. + +[`BoostedTreesDeserializeEnsemble(...)`](../../../tf/raw_ops/BoostedTreesDeserializeEnsemble.md): Deserializes a serialized tree ensemble config and replaces current tree + +[`BoostedTreesEnsembleResourceHandleOp(...)`](../../../tf/raw_ops/BoostedTreesEnsembleResourceHandleOp.md): Creates a handle to a BoostedTreesEnsembleResource + +[`BoostedTreesExampleDebugOutputs(...)`](../../../tf/raw_ops/BoostedTreesExampleDebugOutputs.md): Debugging/model interpretability outputs for each example. + +[`BoostedTreesFlushQuantileSummaries(...)`](../../../tf/raw_ops/BoostedTreesFlushQuantileSummaries.md): Flush the quantile summaries from each quantile stream resource. + +[`BoostedTreesGetEnsembleStates(...)`](../../../tf/raw_ops/BoostedTreesGetEnsembleStates.md): Retrieves the tree ensemble resource stamp token, number of trees and growing statistics. + +[`BoostedTreesMakeQuantileSummaries(...)`](../../../tf/raw_ops/BoostedTreesMakeQuantileSummaries.md): Makes the summary of quantiles for the batch. + +[`BoostedTreesMakeStatsSummary(...)`](../../../tf/raw_ops/BoostedTreesMakeStatsSummary.md): Makes the summary of accumulated stats for the batch. + +[`BoostedTreesPredict(...)`](../../../tf/raw_ops/BoostedTreesPredict.md): Runs multiple additive regression ensemble predictors on input instances and + +[`BoostedTreesQuantileStreamResourceAddSummaries(...)`](../../../tf/raw_ops/BoostedTreesQuantileStreamResourceAddSummaries.md): Add the quantile summaries to each quantile stream resource. + +[`BoostedTreesQuantileStreamResourceDeserialize(...)`](../../../tf/raw_ops/BoostedTreesQuantileStreamResourceDeserialize.md): Deserialize bucket boundaries and ready flag into current QuantileAccumulator. + +[`BoostedTreesQuantileStreamResourceFlush(...)`](../../../tf/raw_ops/BoostedTreesQuantileStreamResourceFlush.md): Flush the summaries for a quantile stream resource. + +[`BoostedTreesQuantileStreamResourceGetBucketBoundaries(...)`](../../../tf/raw_ops/BoostedTreesQuantileStreamResourceGetBucketBoundaries.md): Generate the bucket boundaries for each feature based on accumulated summaries. 
+ +[`BoostedTreesQuantileStreamResourceHandleOp(...)`](../../../tf/raw_ops/BoostedTreesQuantileStreamResourceHandleOp.md): Creates a handle to a BoostedTreesQuantileStreamResource. + +[`BoostedTreesSerializeEnsemble(...)`](../../../tf/raw_ops/BoostedTreesSerializeEnsemble.md): Serializes the tree ensemble to a proto. + +[`BoostedTreesSparseAggregateStats(...)`](../../../tf/raw_ops/BoostedTreesSparseAggregateStats.md): Aggregates the summary of accumulated stats for the batch. + +[`BoostedTreesSparseCalculateBestFeatureSplit(...)`](../../../tf/raw_ops/BoostedTreesSparseCalculateBestFeatureSplit.md): Calculates gains for each feature and returns the best possible split information for the feature. + +[`BoostedTreesTrainingPredict(...)`](../../../tf/raw_ops/BoostedTreesTrainingPredict.md): Runs multiple additive regression ensemble predictors on input instances and + +[`BoostedTreesUpdateEnsemble(...)`](../../../tf/raw_ops/BoostedTreesUpdateEnsemble.md): Updates the tree ensemble by either adding a layer to the last tree being grown + +[`BoostedTreesUpdateEnsembleV2(...)`](../../../tf/raw_ops/BoostedTreesUpdateEnsembleV2.md): Updates the tree ensemble by adding a layer to the last tree being grown + +[`BroadcastArgs(...)`](../../../tf/raw_ops/BroadcastArgs.md): Return the shape of s0 op s1 with broadcast. + +[`BroadcastGradientArgs(...)`](../../../tf/raw_ops/BroadcastGradientArgs.md): Return the reduction indices for computing gradients of s0 op s1 with broadcast. + +[`BroadcastTo(...)`](../../../tf/raw_ops/BroadcastTo.md): Broadcast an array for a compatible shape. + +[`Bucketize(...)`](../../../tf/raw_ops/Bucketize.md): Bucketizes 'input' based on 'boundaries'. + +[`BytesProducedStatsDataset(...)`](../../../tf/raw_ops/BytesProducedStatsDataset.md): Records the bytes size of each element of `input_dataset` in a StatsAggregator. + +[`CSRSparseMatrixComponents(...)`](../../../tf/raw_ops/CSRSparseMatrixComponents.md): Reads out the CSR components at batch `index`. + +[`CSRSparseMatrixToDense(...)`](../../../tf/raw_ops/CSRSparseMatrixToDense.md): Convert a (possibly batched) CSRSparseMatrix to dense. + +[`CSRSparseMatrixToSparseTensor(...)`](../../../tf/raw_ops/CSRSparseMatrixToSparseTensor.md): Converts a (possibly batched) CSRSparesMatrix to a SparseTensor. + +[`CSVDataset(...)`](../../../tf/raw_ops/CSVDataset.md) + +[`CTCBeamSearchDecoder(...)`](../../../tf/raw_ops/CTCBeamSearchDecoder.md): Performs beam search decoding on the logits given in input. + +[`CTCGreedyDecoder(...)`](../../../tf/raw_ops/CTCGreedyDecoder.md): Performs greedy decoding on the logits given in inputs. + +[`CTCLoss(...)`](../../../tf/raw_ops/CTCLoss.md): Calculates the CTC Loss (log probability) for each batch entry. Also calculates + +[`CTCLossV2(...)`](../../../tf/raw_ops/CTCLossV2.md): Calculates the CTC Loss (log probability) for each batch entry. Also calculates + +[`CacheDataset(...)`](../../../tf/raw_ops/CacheDataset.md): Creates a dataset that caches elements from `input_dataset`. + +[`CacheDatasetV2(...)`](../../../tf/raw_ops/CacheDatasetV2.md) + +[`Case(...)`](../../../tf/raw_ops/Case.md): An n-way switch statement which calls a single branch function. + +[`Cast(...)`](../../../tf/raw_ops/Cast.md): Cast x of type SrcT to y of DstT. + +[`Ceil(...)`](../../../tf/raw_ops/Ceil.md): Returns element-wise smallest integer not less than x. + +[`CheckNumerics(...)`](../../../tf/raw_ops/CheckNumerics.md): Checks a tensor for NaN and Inf values. 
+ +[`CheckNumericsV2(...)`](../../../tf/raw_ops/CheckNumericsV2.md): Checks a tensor for NaN, -Inf and +Inf values. + +[`Cholesky(...)`](../../../tf/raw_ops/Cholesky.md): Computes the Cholesky decomposition of one or more square matrices. + +[`CholeskyGrad(...)`](../../../tf/raw_ops/CholeskyGrad.md): Computes the reverse mode backpropagated gradient of the Cholesky algorithm. + +[`ChooseFastestBranchDataset(...)`](../../../tf/raw_ops/ChooseFastestBranchDataset.md) + +[`ChooseFastestDataset(...)`](../../../tf/raw_ops/ChooseFastestDataset.md) + +[`ClipByValue(...)`](../../../tf/raw_ops/ClipByValue.md): Clips tensor values to a specified min and max. + +[`CloseSummaryWriter(...)`](../../../tf/raw_ops/CloseSummaryWriter.md) + +[`CollectiveBcastRecv(...)`](../../../tf/raw_ops/CollectiveBcastRecv.md): Receives a tensor value broadcast from another device. + +[`CollectiveBcastSend(...)`](../../../tf/raw_ops/CollectiveBcastSend.md): Broadcasts a tensor value to one or more other devices. + +[`CollectiveGather(...)`](../../../tf/raw_ops/CollectiveGather.md): Mutually accumulates multiple tensors of identical type and shape. + +[`CollectivePermute(...)`](../../../tf/raw_ops/CollectivePermute.md): An Op to permute tensors across replicated TPU instances. + +[`CollectiveReduce(...)`](../../../tf/raw_ops/CollectiveReduce.md): Mutually reduces multiple tensors of identical type and shape. + +[`CombinedNonMaxSuppression(...)`](../../../tf/raw_ops/CombinedNonMaxSuppression.md): Greedily selects a subset of bounding boxes in descending order of score, + +[`CompareAndBitpack(...)`](../../../tf/raw_ops/CompareAndBitpack.md): Compare values of `input` to `threshold` and pack resulting bits into a `uint8`. + +[`Complex(...)`](../../../tf/raw_ops/Complex.md): Converts two real numbers to a complex number. + +[`ComplexAbs(...)`](../../../tf/raw_ops/ComplexAbs.md): Computes the complex absolute value of a tensor. + +[`ComputeAccidentalHits(...)`](../../../tf/raw_ops/ComputeAccidentalHits.md): Computes the ids of the positions in sampled_candidates that match true_labels. + +[`Concat(...)`](../../../tf/raw_ops/Concat.md): Concatenates tensors along one dimension. + +[`ConcatOffset(...)`](../../../tf/raw_ops/ConcatOffset.md): Computes offsets of concat inputs within its output. + +[`ConcatV2(...)`](../../../tf/raw_ops/ConcatV2.md): Concatenates tensors along one dimension. + +[`ConcatenateDataset(...)`](../../../tf/raw_ops/ConcatenateDataset.md): Creates a dataset that concatenates `input_dataset` with `another_dataset`. + +[`ConditionalAccumulator(...)`](../../../tf/raw_ops/ConditionalAccumulator.md): A conditional accumulator for aggregating gradients. + +[`ConfigureDistributedTPU(...)`](../../../tf/raw_ops/ConfigureDistributedTPU.md): Sets up the centralized structures for a distributed TPU system. + +[`ConfigureTPUEmbedding(...)`](../../../tf/raw_ops/ConfigureTPUEmbedding.md): Sets up TPUEmbedding in a distributed TPU system. + +[`Conj(...)`](../../../tf/raw_ops/Conj.md): Returns the complex conjugate of a complex number. + +[`ConjugateTranspose(...)`](../../../tf/raw_ops/ConjugateTranspose.md): Shuffle dimensions of x according to a permutation and conjugate the result. + +[`Const(...)`](../../../tf/raw_ops/Const.md): Returns a constant tensor. + +[`ConsumeMutexLock(...)`](../../../tf/raw_ops/ConsumeMutexLock.md): This op consumes a lock created by `MutexLock`. + +[`ControlTrigger(...)`](../../../tf/raw_ops/ControlTrigger.md): Does nothing. Serves as a control trigger for scheduling. 
+ +[`Conv2D(...)`](../../../tf/raw_ops/Conv2D.md): Computes a 2-D convolution given 4-D `input` and `filter` tensors. + +[`Conv2DBackpropFilter(...)`](../../../tf/raw_ops/Conv2DBackpropFilter.md): Computes the gradients of convolution with respect to the filter. + +[`Conv2DBackpropInput(...)`](../../../tf/raw_ops/Conv2DBackpropInput.md): Computes the gradients of convolution with respect to the input. + +[`Conv3D(...)`](../../../tf/raw_ops/Conv3D.md): Computes a 3-D convolution given 5-D `input` and `filter` tensors. + +[`Conv3DBackpropFilter(...)`](../../../tf/raw_ops/Conv3DBackpropFilter.md): Computes the gradients of 3-D convolution with respect to the filter. + +[`Conv3DBackpropFilterV2(...)`](../../../tf/raw_ops/Conv3DBackpropFilterV2.md): Computes the gradients of 3-D convolution with respect to the filter. + +[`Conv3DBackpropInput(...)`](../../../tf/raw_ops/Conv3DBackpropInput.md): Computes the gradients of 3-D convolution with respect to the input. + +[`Conv3DBackpropInputV2(...)`](../../../tf/raw_ops/Conv3DBackpropInputV2.md): Computes the gradients of 3-D convolution with respect to the input. + +[`Copy(...)`](../../../tf/raw_ops/Copy.md): Copy a tensor from CPU-to-CPU or GPU-to-GPU. + +[`CopyHost(...)`](../../../tf/raw_ops/CopyHost.md): Copy a tensor to host. + +[`Cos(...)`](../../../tf/raw_ops/Cos.md): Computes cos of x element-wise. + +[`Cosh(...)`](../../../tf/raw_ops/Cosh.md): Computes hyperbolic cosine of x element-wise. + +[`CountUpTo(...)`](../../../tf/raw_ops/CountUpTo.md): Increments 'ref' until it reaches 'limit'. + +[`CreateSummaryDbWriter(...)`](../../../tf/raw_ops/CreateSummaryDbWriter.md) + +[`CreateSummaryFileWriter(...)`](../../../tf/raw_ops/CreateSummaryFileWriter.md) + +[`CropAndResize(...)`](../../../tf/raw_ops/CropAndResize.md): Extracts crops from the input image tensor and resizes them. + +[`CropAndResizeGradBoxes(...)`](../../../tf/raw_ops/CropAndResizeGradBoxes.md): Computes the gradient of the crop_and_resize op wrt the input boxes tensor. + +[`CropAndResizeGradImage(...)`](../../../tf/raw_ops/CropAndResizeGradImage.md): Computes the gradient of the crop_and_resize op wrt the input image tensor. + +[`Cross(...)`](../../../tf/raw_ops/Cross.md): Compute the pairwise cross product. + +[`CrossReplicaSum(...)`](../../../tf/raw_ops/CrossReplicaSum.md): An Op to sum inputs across replicated TPU instances. + +[`CudnnRNN(...)`](../../../tf/raw_ops/CudnnRNN.md): A RNN backed by cuDNN. + +[`CudnnRNNBackprop(...)`](../../../tf/raw_ops/CudnnRNNBackprop.md): Backprop step of CudnnRNN. + +[`CudnnRNNBackpropV2(...)`](../../../tf/raw_ops/CudnnRNNBackpropV2.md): Backprop step of CudnnRNN. + +[`CudnnRNNBackpropV3(...)`](../../../tf/raw_ops/CudnnRNNBackpropV3.md): Backprop step of CudnnRNNV3. + +[`CudnnRNNCanonicalToParams(...)`](../../../tf/raw_ops/CudnnRNNCanonicalToParams.md): Converts CudnnRNN params from canonical form to usable form. + +[`CudnnRNNCanonicalToParamsV2(...)`](../../../tf/raw_ops/CudnnRNNCanonicalToParamsV2.md): Converts CudnnRNN params from canonical form to usable form. It supports the projection in LSTM. + +[`CudnnRNNParamsSize(...)`](../../../tf/raw_ops/CudnnRNNParamsSize.md): Computes size of weights that can be used by a Cudnn RNN model. + +[`CudnnRNNParamsToCanonical(...)`](../../../tf/raw_ops/CudnnRNNParamsToCanonical.md): Retrieves CudnnRNN params in canonical form. + +[`CudnnRNNParamsToCanonicalV2(...)`](../../../tf/raw_ops/CudnnRNNParamsToCanonicalV2.md): Retrieves CudnnRNN params in canonical form. It supports the projection in LSTM. 
+ +[`CudnnRNNV2(...)`](../../../tf/raw_ops/CudnnRNNV2.md): A RNN backed by cuDNN. + +[`CudnnRNNV3(...)`](../../../tf/raw_ops/CudnnRNNV3.md): A RNN backed by cuDNN. + +[`Cumprod(...)`](../../../tf/raw_ops/Cumprod.md): Compute the cumulative product of the tensor `x` along `axis`. + +[`Cumsum(...)`](../../../tf/raw_ops/Cumsum.md): Compute the cumulative sum of the tensor `x` along `axis`. + +[`CumulativeLogsumexp(...)`](../../../tf/raw_ops/CumulativeLogsumexp.md): Compute the cumulative product of the tensor `x` along `axis`. + +[`DataFormatDimMap(...)`](../../../tf/raw_ops/DataFormatDimMap.md): Returns the dimension index in the destination data format given the one in + +[`DataFormatVecPermute(...)`](../../../tf/raw_ops/DataFormatVecPermute.md): Returns the permuted vector/tensor in the destination data format given the + +[`DatasetCardinality(...)`](../../../tf/raw_ops/DatasetCardinality.md): Returns the cardinality of `input_dataset`. + +[`DatasetFromGraph(...)`](../../../tf/raw_ops/DatasetFromGraph.md): Creates a dataset from the given `graph_def`. + +[`DatasetToGraph(...)`](../../../tf/raw_ops/DatasetToGraph.md): Returns a serialized GraphDef representing `input_dataset`. + +[`DatasetToGraphV2(...)`](../../../tf/raw_ops/DatasetToGraphV2.md): Returns a serialized GraphDef representing `input_dataset`. + +[`DatasetToSingleElement(...)`](../../../tf/raw_ops/DatasetToSingleElement.md): Outputs the single element from the given dataset. + +[`DatasetToTFRecord(...)`](../../../tf/raw_ops/DatasetToTFRecord.md): Writes the given dataset to the given file using the TFRecord format. + +[`Dawsn(...)`](../../../tf/raw_ops/Dawsn.md) + +[`DebugGradientIdentity(...)`](../../../tf/raw_ops/DebugGradientIdentity.md): Identity op for gradient debugging. + +[`DebugGradientRefIdentity(...)`](../../../tf/raw_ops/DebugGradientRefIdentity.md): Identity op for gradient debugging. + +[`DebugIdentity(...)`](../../../tf/raw_ops/DebugIdentity.md): Provides an identity mapping of the non-Ref type input tensor for debugging. + +[`DebugIdentityV2(...)`](../../../tf/raw_ops/DebugIdentityV2.md): Debug Identity V2 Op. + +[`DebugNanCount(...)`](../../../tf/raw_ops/DebugNanCount.md): Debug NaN Value Counter Op. + +[`DebugNumericSummary(...)`](../../../tf/raw_ops/DebugNumericSummary.md): Debug Numeric Summary Op. + +[`DebugNumericSummaryV2(...)`](../../../tf/raw_ops/DebugNumericSummaryV2.md): Debug Numeric Summary V2 Op. + +[`DecodeAndCropJpeg(...)`](../../../tf/raw_ops/DecodeAndCropJpeg.md): Decode and Crop a JPEG-encoded image to a uint8 tensor. + +[`DecodeBase64(...)`](../../../tf/raw_ops/DecodeBase64.md): Decode web-safe base64-encoded strings. + +[`DecodeBmp(...)`](../../../tf/raw_ops/DecodeBmp.md): Decode the first frame of a BMP-encoded image to a uint8 tensor. + +[`DecodeCSV(...)`](../../../tf/raw_ops/DecodeCSV.md): Convert CSV records to tensors. Each column maps to one tensor. + +[`DecodeCompressed(...)`](../../../tf/raw_ops/DecodeCompressed.md): Decompress strings. + +[`DecodeGif(...)`](../../../tf/raw_ops/DecodeGif.md): Decode the frame(s) of a GIF-encoded image to a uint8 tensor. + +[`DecodeJSONExample(...)`](../../../tf/raw_ops/DecodeJSONExample.md): Convert JSON-encoded Example records to binary protocol buffer strings. + +[`DecodeJpeg(...)`](../../../tf/raw_ops/DecodeJpeg.md): Decode a JPEG-encoded image to a uint8 tensor. + +[`DecodePaddedRaw(...)`](../../../tf/raw_ops/DecodePaddedRaw.md): Reinterpret the bytes of a string as a vector of numbers. 
+ +[`DecodePng(...)`](../../../tf/raw_ops/DecodePng.md): Decode a PNG-encoded image to a uint8 or uint16 tensor. + +[`DecodeProtoV2(...)`](../../../tf/raw_ops/DecodeProtoV2.md): The op extracts fields from a serialized protocol buffers message into tensors. + +[`DecodeRaw(...)`](../../../tf/raw_ops/DecodeRaw.md): Reinterpret the bytes of a string as a vector of numbers. + +[`DecodeWav(...)`](../../../tf/raw_ops/DecodeWav.md): Decode a 16-bit PCM WAV file to a float tensor. + +[`DeepCopy(...)`](../../../tf/raw_ops/DeepCopy.md): Makes a copy of `x`. + +[`DeleteIterator(...)`](../../../tf/raw_ops/DeleteIterator.md): A container for an iterator resource. + +[`DeleteMemoryCache(...)`](../../../tf/raw_ops/DeleteMemoryCache.md) + +[`DeleteMultiDeviceIterator(...)`](../../../tf/raw_ops/DeleteMultiDeviceIterator.md): A container for an iterator resource. + +[`DeleteRandomSeedGenerator(...)`](../../../tf/raw_ops/DeleteRandomSeedGenerator.md) + +[`DeleteSessionTensor(...)`](../../../tf/raw_ops/DeleteSessionTensor.md): Delete the tensor specified by its handle in the session. + +[`DenseToCSRSparseMatrix(...)`](../../../tf/raw_ops/DenseToCSRSparseMatrix.md): Converts a dense tensor to a (possibly batched) CSRSparseMatrix. + +[`DenseToDenseSetOperation(...)`](../../../tf/raw_ops/DenseToDenseSetOperation.md): Applies set operation along last dimension of 2 `Tensor` inputs. + +[`DenseToSparseBatchDataset(...)`](../../../tf/raw_ops/DenseToSparseBatchDataset.md): Creates a dataset that batches input elements into a SparseTensor. + +[`DenseToSparseSetOperation(...)`](../../../tf/raw_ops/DenseToSparseSetOperation.md): Applies set operation along last dimension of `Tensor` and `SparseTensor`. + +[`DepthToSpace(...)`](../../../tf/raw_ops/DepthToSpace.md): DepthToSpace for tensors of type T. + +[`DepthwiseConv2dNative(...)`](../../../tf/raw_ops/DepthwiseConv2dNative.md): Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors. + +[`DepthwiseConv2dNativeBackpropFilter(...)`](../../../tf/raw_ops/DepthwiseConv2dNativeBackpropFilter.md): Computes the gradients of depthwise convolution with respect to the filter. + +[`DepthwiseConv2dNativeBackpropInput(...)`](../../../tf/raw_ops/DepthwiseConv2dNativeBackpropInput.md): Computes the gradients of depthwise convolution with respect to the input. + +[`Dequantize(...)`](../../../tf/raw_ops/Dequantize.md): Dequantize the 'input' tensor into a float or bfloat16 Tensor. + +[`DeserializeIterator(...)`](../../../tf/raw_ops/DeserializeIterator.md): Converts the given variant tensor to an iterator and stores it in the given resource. + +[`DeserializeManySparse(...)`](../../../tf/raw_ops/DeserializeManySparse.md): Deserialize and concatenate `SparseTensors` from a serialized minibatch. + +[`DeserializeSparse(...)`](../../../tf/raw_ops/DeserializeSparse.md): Deserialize `SparseTensor` objects. + +[`DestroyResourceOp(...)`](../../../tf/raw_ops/DestroyResourceOp.md): Deletes the resource specified by the handle. + +[`DestroyTemporaryVariable(...)`](../../../tf/raw_ops/DestroyTemporaryVariable.md): Destroys the temporary variable and returns its final value. + +[`Diag(...)`](../../../tf/raw_ops/Diag.md): Returns a diagonal tensor with a given diagonal values. + +[`DiagPart(...)`](../../../tf/raw_ops/DiagPart.md): Returns the diagonal part of the tensor. 
+ +[`Digamma(...)`](../../../tf/raw_ops/Digamma.md): Computes Psi, the derivative of Lgamma (the log of the absolute value of + +[`Dilation2D(...)`](../../../tf/raw_ops/Dilation2D.md): Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors. + +[`Dilation2DBackpropFilter(...)`](../../../tf/raw_ops/Dilation2DBackpropFilter.md): Computes the gradient of morphological 2-D dilation with respect to the filter. + +[`Dilation2DBackpropInput(...)`](../../../tf/raw_ops/Dilation2DBackpropInput.md): Computes the gradient of morphological 2-D dilation with respect to the input. + +[`DirectedInterleaveDataset(...)`](../../../tf/raw_ops/DirectedInterleaveDataset.md): A substitute for `InterleaveDataset` on a fixed list of `N` datasets. + +[`Div(...)`](../../../tf/raw_ops/Div.md): Returns x / y element-wise. + +[`DivNoNan(...)`](../../../tf/raw_ops/DivNoNan.md): Returns 0 if the denominator is zero. + +[`DrawBoundingBoxes(...)`](../../../tf/raw_ops/DrawBoundingBoxes.md): Draw bounding boxes on a batch of images. + +[`DrawBoundingBoxesV2(...)`](../../../tf/raw_ops/DrawBoundingBoxesV2.md): Draw bounding boxes on a batch of images. + +[`DummyMemoryCache(...)`](../../../tf/raw_ops/DummyMemoryCache.md) + +[`DynamicPartition(...)`](../../../tf/raw_ops/DynamicPartition.md): Partitions `data` into `num_partitions` tensors using indices from `partitions`. + +[`DynamicStitch(...)`](../../../tf/raw_ops/DynamicStitch.md): Interleave the values from the `data` tensors into a single tensor. + +[`EagerPyFunc(...)`](../../../tf/raw_ops/EagerPyFunc.md): Eagerly executes a python function to compute func(input)->output. The + +[`EditDistance(...)`](../../../tf/raw_ops/EditDistance.md): Computes the (possibly normalized) Levenshtein Edit Distance. + +[`Eig(...)`](../../../tf/raw_ops/Eig.md): Computes the eigen decomposition of one or more square matrices. + +[`Einsum(...)`](../../../tf/raw_ops/Einsum.md): Tensor contraction according to Einstein summation convention. + +[`Elu(...)`](../../../tf/raw_ops/Elu.md): Computes exponential linear: `exp(features) - 1` if < 0, `features` otherwise. + +[`EluGrad(...)`](../../../tf/raw_ops/EluGrad.md): Computes gradients for the exponential linear (Elu) operation. + +[`Empty(...)`](../../../tf/raw_ops/Empty.md): Creates a tensor with the given shape. + +[`EmptyTensorList(...)`](../../../tf/raw_ops/EmptyTensorList.md): Creates and returns an empty tensor list. + +[`EncodeBase64(...)`](../../../tf/raw_ops/EncodeBase64.md): Encode strings into web-safe base64 format. + +[`EncodeJpeg(...)`](../../../tf/raw_ops/EncodeJpeg.md): JPEG-encode an image. + +[`EncodeJpegVariableQuality(...)`](../../../tf/raw_ops/EncodeJpegVariableQuality.md): JPEG encode input image with provided compression quality. + +[`EncodePng(...)`](../../../tf/raw_ops/EncodePng.md): PNG-encode an image. + +[`EncodeProto(...)`](../../../tf/raw_ops/EncodeProto.md): The op serializes protobuf messages provided in the input tensors. + +[`EncodeWav(...)`](../../../tf/raw_ops/EncodeWav.md): Encode audio data using the WAV file format. + +[`EnqueueTPUEmbeddingIntegerBatch(...)`](../../../tf/raw_ops/EnqueueTPUEmbeddingIntegerBatch.md): An op that enqueues a list of input batch tensors to TPUEmbedding. + +[`EnqueueTPUEmbeddingSparseBatch(...)`](../../../tf/raw_ops/EnqueueTPUEmbeddingSparseBatch.md): An op that enqueues TPUEmbedding input indices from a SparseTensor. 
+ +[`EnqueueTPUEmbeddingSparseTensorBatch(...)`](../../../tf/raw_ops/EnqueueTPUEmbeddingSparseTensorBatch.md): Eases the porting of code that uses tf.nn.embedding_lookup_sparse(). + +[`EnsureShape(...)`](../../../tf/raw_ops/EnsureShape.md): Ensures that the tensor's shape matches the expected shape. + +[`Enter(...)`](../../../tf/raw_ops/Enter.md): Creates or finds a child frame, and makes `data` available to the child frame. + +[`Equal(...)`](../../../tf/raw_ops/Equal.md): Returns the truth value of (x == y) element-wise. + +[`Erf(...)`](../../../tf/raw_ops/Erf.md): Computes the Gauss error function of `x` element-wise. + +[`Erfc(...)`](../../../tf/raw_ops/Erfc.md): Computes the complementary error function of `x` element-wise. + +[`Erfinv(...)`](../../../tf/raw_ops/Erfinv.md) + +[`EuclideanNorm(...)`](../../../tf/raw_ops/EuclideanNorm.md): Computes the euclidean norm of elements across dimensions of a tensor. + +[`Exit(...)`](../../../tf/raw_ops/Exit.md): Exits the current frame to its parent frame. + +[`Exp(...)`](../../../tf/raw_ops/Exp.md): Computes exponential of x element-wise. \\(y = e^x\\). + +[`ExpandDims(...)`](../../../tf/raw_ops/ExpandDims.md): Inserts a dimension of 1 into a tensor's shape. + +[`ExperimentalAssertNextDataset(...)`](../../../tf/raw_ops/ExperimentalAssertNextDataset.md) + +[`ExperimentalAutoShardDataset(...)`](../../../tf/raw_ops/ExperimentalAutoShardDataset.md): Creates a dataset that shards the input dataset. + +[`ExperimentalBytesProducedStatsDataset(...)`](../../../tf/raw_ops/ExperimentalBytesProducedStatsDataset.md): Records the bytes size of each element of `input_dataset` in a StatsAggregator. + +[`ExperimentalCSVDataset(...)`](../../../tf/raw_ops/ExperimentalCSVDataset.md) + +[`ExperimentalChooseFastestDataset(...)`](../../../tf/raw_ops/ExperimentalChooseFastestDataset.md) + +[`ExperimentalDatasetCardinality(...)`](../../../tf/raw_ops/ExperimentalDatasetCardinality.md): Returns the cardinality of `input_dataset`. + +[`ExperimentalDatasetToTFRecord(...)`](../../../tf/raw_ops/ExperimentalDatasetToTFRecord.md): Writes the given dataset to the given file using the TFRecord format. + +[`ExperimentalDenseToSparseBatchDataset(...)`](../../../tf/raw_ops/ExperimentalDenseToSparseBatchDataset.md): Creates a dataset that batches input elements into a SparseTensor. + +[`ExperimentalDirectedInterleaveDataset(...)`](../../../tf/raw_ops/ExperimentalDirectedInterleaveDataset.md): A substitute for `InterleaveDataset` on a fixed list of `N` datasets. + +[`ExperimentalGroupByReducerDataset(...)`](../../../tf/raw_ops/ExperimentalGroupByReducerDataset.md): Creates a dataset that computes a group-by on `input_dataset`. + +[`ExperimentalGroupByWindowDataset(...)`](../../../tf/raw_ops/ExperimentalGroupByWindowDataset.md): Creates a dataset that computes a windowed group-by on `input_dataset`. + +[`ExperimentalIgnoreErrorsDataset(...)`](../../../tf/raw_ops/ExperimentalIgnoreErrorsDataset.md): Creates a dataset that contains the elements of `input_dataset` ignoring errors. + +[`ExperimentalIteratorGetDevice(...)`](../../../tf/raw_ops/ExperimentalIteratorGetDevice.md): Returns the name of the device on which `resource` has been placed. + +[`ExperimentalLMDBDataset(...)`](../../../tf/raw_ops/ExperimentalLMDBDataset.md) + +[`ExperimentalLatencyStatsDataset(...)`](../../../tf/raw_ops/ExperimentalLatencyStatsDataset.md): Records the latency of producing `input_dataset` elements in a StatsAggregator. 
+ +[`ExperimentalMapAndBatchDataset(...)`](../../../tf/raw_ops/ExperimentalMapAndBatchDataset.md): Creates a dataset that fuses mapping with batching. + +[`ExperimentalMapDataset(...)`](../../../tf/raw_ops/ExperimentalMapDataset.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`ExperimentalMatchingFilesDataset(...)`](../../../tf/raw_ops/ExperimentalMatchingFilesDataset.md) + +[`ExperimentalMaxIntraOpParallelismDataset(...)`](../../../tf/raw_ops/ExperimentalMaxIntraOpParallelismDataset.md): Creates a dataset that overrides the maximum intra-op parallelism. + +[`ExperimentalNonSerializableDataset(...)`](../../../tf/raw_ops/ExperimentalNonSerializableDataset.md) + +[`ExperimentalParallelInterleaveDataset(...)`](../../../tf/raw_ops/ExperimentalParallelInterleaveDataset.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`ExperimentalParseExampleDataset(...)`](../../../tf/raw_ops/ExperimentalParseExampleDataset.md): Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features. + +[`ExperimentalPrivateThreadPoolDataset(...)`](../../../tf/raw_ops/ExperimentalPrivateThreadPoolDataset.md): Creates a dataset that uses a custom thread pool to compute `input_dataset`. + +[`ExperimentalRandomDataset(...)`](../../../tf/raw_ops/ExperimentalRandomDataset.md): Creates a Dataset that returns pseudorandom numbers. + +[`ExperimentalRebatchDataset(...)`](../../../tf/raw_ops/ExperimentalRebatchDataset.md): Creates a dataset that changes the batch size. + +[`ExperimentalScanDataset(...)`](../../../tf/raw_ops/ExperimentalScanDataset.md): Creates a dataset that successively reduces `f` over the elements of `input_dataset`. + +[`ExperimentalSetStatsAggregatorDataset(...)`](../../../tf/raw_ops/ExperimentalSetStatsAggregatorDataset.md) + +[`ExperimentalSleepDataset(...)`](../../../tf/raw_ops/ExperimentalSleepDataset.md) + +[`ExperimentalSlidingWindowDataset(...)`](../../../tf/raw_ops/ExperimentalSlidingWindowDataset.md): Creates a dataset that passes a sliding window over `input_dataset`. + +[`ExperimentalSqlDataset(...)`](../../../tf/raw_ops/ExperimentalSqlDataset.md): Creates a dataset that executes a SQL query and emits rows of the result set. + +[`ExperimentalStatsAggregatorHandle(...)`](../../../tf/raw_ops/ExperimentalStatsAggregatorHandle.md): Creates a statistics manager resource. + +[`ExperimentalStatsAggregatorSummary(...)`](../../../tf/raw_ops/ExperimentalStatsAggregatorSummary.md): Produces a summary of any statistics recorded by the given statistics manager. + +[`ExperimentalTakeWhileDataset(...)`](../../../tf/raw_ops/ExperimentalTakeWhileDataset.md): Creates a dataset that stops iteration when `predicate` is false. + +[`ExperimentalThreadPoolDataset(...)`](../../../tf/raw_ops/ExperimentalThreadPoolDataset.md): Creates a dataset that uses a custom thread pool to compute `input_dataset`. + +[`ExperimentalThreadPoolHandle(...)`](../../../tf/raw_ops/ExperimentalThreadPoolHandle.md): Creates a dataset that uses a custom thread pool to compute `input_dataset`. + +[`ExperimentalUnbatchDataset(...)`](../../../tf/raw_ops/ExperimentalUnbatchDataset.md): A dataset that splits the elements of its input into multiple elements. + +[`ExperimentalUniqueDataset(...)`](../../../tf/raw_ops/ExperimentalUniqueDataset.md): Creates a dataset that contains the unique elements of `input_dataset`.
+ +[`Expint(...)`](../../../tf/raw_ops/Expint.md) + +[`Expm1(...)`](../../../tf/raw_ops/Expm1.md): Computes `exp(x) - 1` element-wise. + +[`ExtractGlimpse(...)`](../../../tf/raw_ops/ExtractGlimpse.md): Extracts a glimpse from the input tensor. + +[`ExtractImagePatches(...)`](../../../tf/raw_ops/ExtractImagePatches.md): Extract `patches` from `images` and put them in the "depth" output dimension. + +[`ExtractJpegShape(...)`](../../../tf/raw_ops/ExtractJpegShape.md): Extract the shape information of a JPEG-encoded image. + +[`ExtractVolumePatches(...)`](../../../tf/raw_ops/ExtractVolumePatches.md): Extract `patches` from `input` and put them in the "depth" output dimension. 3D extension of `extract_image_patches`. + +[`FFT(...)`](../../../tf/raw_ops/FFT.md): Fast Fourier transform. + +[`FFT2D(...)`](../../../tf/raw_ops/FFT2D.md): 2D fast Fourier transform. + +[`FFT3D(...)`](../../../tf/raw_ops/FFT3D.md): 3D fast Fourier transform. + +[`FIFOQueue(...)`](../../../tf/raw_ops/FIFOQueue.md): A queue that produces elements in first-in first-out order. + +[`FIFOQueueV2(...)`](../../../tf/raw_ops/FIFOQueueV2.md): A queue that produces elements in first-in first-out order. + +[`Fact(...)`](../../../tf/raw_ops/Fact.md): Output a fact about factorials. + +[`FakeParam(...)`](../../../tf/raw_ops/FakeParam.md): This op is used as a placeholder in If branch functions. It doesn't provide a + +[`FakeQuantWithMinMaxArgs(...)`](../../../tf/raw_ops/FakeQuantWithMinMaxArgs.md): Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. + +[`FakeQuantWithMinMaxArgsGradient(...)`](../../../tf/raw_ops/FakeQuantWithMinMaxArgsGradient.md): Compute gradients for a FakeQuantWithMinMaxArgs operation. + +[`FakeQuantWithMinMaxVars(...)`](../../../tf/raw_ops/FakeQuantWithMinMaxVars.md): Fake-quantize the 'inputs' tensor of type float via global float scalars `min` + +[`FakeQuantWithMinMaxVarsGradient(...)`](../../../tf/raw_ops/FakeQuantWithMinMaxVarsGradient.md): Compute gradients for a FakeQuantWithMinMaxVars operation. + +[`FakeQuantWithMinMaxVarsPerChannel(...)`](../../../tf/raw_ops/FakeQuantWithMinMaxVarsPerChannel.md): Fake-quantize the 'inputs' tensor of type float and one of the shapes: `[d]`, + +[`FakeQuantWithMinMaxVarsPerChannelGradient(...)`](../../../tf/raw_ops/FakeQuantWithMinMaxVarsPerChannelGradient.md): Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. + +[`FakeQueue(...)`](../../../tf/raw_ops/FakeQueue.md): Deprecated. Do not use. + +[`Fill(...)`](../../../tf/raw_ops/Fill.md): Creates a tensor filled with a scalar value. + +[`FilterByLastComponentDataset(...)`](../../../tf/raw_ops/FilterByLastComponentDataset.md): Creates a dataset containing elements of first component of `input_dataset` having true in the last component. + +[`FilterDataset(...)`](../../../tf/raw_ops/FilterDataset.md): Creates a dataset containing elements of `input_dataset` matching `predicate`. + +[`Fingerprint(...)`](../../../tf/raw_ops/Fingerprint.md): Generates fingerprint values. + +[`FixedLengthRecordDataset(...)`](../../../tf/raw_ops/FixedLengthRecordDataset.md): Creates a dataset that emits the records from one or more binary files. + +[`FixedLengthRecordDatasetV2(...)`](../../../tf/raw_ops/FixedLengthRecordDatasetV2.md) + +[`FixedLengthRecordReader(...)`](../../../tf/raw_ops/FixedLengthRecordReader.md): A Reader that outputs fixed-length records from a file. 
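+
+As a small illustration of the ops above, a sketch of `Fill` and `FFT`, assuming eager execution:
+
+```
+import tensorflow as tf
+
+# A 2x3 tensor filled with the scalar 7.0.
+filled = tf.raw_ops.Fill(dims=[2, 3], value=tf.constant(7.0))
+
+# FFT expects complex input, so cast a small real signal first.
+signal = tf.cast(tf.range(8, dtype=tf.float32), tf.complex64)
+spectrum = tf.raw_ops.FFT(input=signal)
+```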
+ +[`FixedLengthRecordReaderV2(...)`](../../../tf/raw_ops/FixedLengthRecordReaderV2.md): A Reader that outputs fixed-length records from a file. + +[`FixedUnigramCandidateSampler(...)`](../../../tf/raw_ops/FixedUnigramCandidateSampler.md): Generates labels for candidate sampling with a learned unigram distribution. + +[`FlatMapDataset(...)`](../../../tf/raw_ops/FlatMapDataset.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`Floor(...)`](../../../tf/raw_ops/Floor.md): Returns element-wise largest integer not greater than x. + +[`FloorDiv(...)`](../../../tf/raw_ops/FloorDiv.md): Returns x // y element-wise. + +[`FloorMod(...)`](../../../tf/raw_ops/FloorMod.md): Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + +[`FlushSummaryWriter(...)`](../../../tf/raw_ops/FlushSummaryWriter.md) + +[`For(...)`](../../../tf/raw_ops/For.md) + +[`FractionalAvgPool(...)`](../../../tf/raw_ops/FractionalAvgPool.md): Performs fractional average pooling on the input. + +[`FractionalAvgPoolGrad(...)`](../../../tf/raw_ops/FractionalAvgPoolGrad.md): Computes gradient of the FractionalAvgPool function. + +[`FractionalMaxPool(...)`](../../../tf/raw_ops/FractionalMaxPool.md): Performs fractional max pooling on the input. + +[`FractionalMaxPoolGrad(...)`](../../../tf/raw_ops/FractionalMaxPoolGrad.md): Computes gradient of the FractionalMaxPool function. + +[`FresnelCos(...)`](../../../tf/raw_ops/FresnelCos.md) + +[`FresnelSin(...)`](../../../tf/raw_ops/FresnelSin.md) + +[`FusedBatchNorm(...)`](../../../tf/raw_ops/FusedBatchNorm.md): Batch normalization. + +[`FusedBatchNormGrad(...)`](../../../tf/raw_ops/FusedBatchNormGrad.md): Gradient for batch normalization. + +[`FusedBatchNormGradV2(...)`](../../../tf/raw_ops/FusedBatchNormGradV2.md): Gradient for batch normalization. + +[`FusedBatchNormGradV3(...)`](../../../tf/raw_ops/FusedBatchNormGradV3.md): Gradient for batch normalization. + +[`FusedBatchNormV2(...)`](../../../tf/raw_ops/FusedBatchNormV2.md): Batch normalization. + +[`FusedBatchNormV3(...)`](../../../tf/raw_ops/FusedBatchNormV3.md): Batch normalization. + +[`FusedPadConv2D(...)`](../../../tf/raw_ops/FusedPadConv2D.md): Performs a padding as a preprocess during a convolution. + +[`FusedResizeAndPadConv2D(...)`](../../../tf/raw_ops/FusedResizeAndPadConv2D.md): Performs a resize and padding as a preprocess during a convolution. + +[`GRUBlockCell(...)`](../../../tf/raw_ops/GRUBlockCell.md): Computes the GRU cell forward propagation for 1 time step. + +[`GRUBlockCellGrad(...)`](../../../tf/raw_ops/GRUBlockCellGrad.md): Computes the GRU cell back-propagation for 1 time step. + +[`Gather(...)`](../../../tf/raw_ops/Gather.md): Gather slices from `params` according to `indices`. + +[`GatherNd(...)`](../../../tf/raw_ops/GatherNd.md): Gather slices from `params` into a Tensor with shape specified by `indices`. + +[`GatherV2(...)`](../../../tf/raw_ops/GatherV2.md): Gather slices from `params` axis `axis` according to `indices`. + +[`GenerateBoundingBoxProposals(...)`](../../../tf/raw_ops/GenerateBoundingBoxProposals.md): This op produces Regions of Interest from the given bounding boxes (bbox_deltas), encoded with respect to anchors, according to eq. 2 in arXiv:1506.01497 + +[`GenerateVocabRemapping(...)`](../../../tf/raw_ops/GenerateVocabRemapping.md): Given a path to new and old vocabulary files, returns a remapping Tensor of + +[`GeneratorDataset(...)`](../../../tf/raw_ops/GeneratorDataset.md): Creates a dataset that invokes a function to generate elements.
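+
+A minimal sketch of the `GatherV2` op listed above, assuming eager execution:
+
+```
+import tensorflow as tf
+
+params = tf.constant([[1, 2], [3, 4], [5, 6]])
+# Gather rows 2 and 0 along axis 0 -> [[5, 6], [1, 2]].
+rows = tf.raw_ops.GatherV2(params=params, indices=tf.constant([2, 0]), axis=0)
+```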
+ +[`GetSessionHandle(...)`](../../../tf/raw_ops/GetSessionHandle.md): Store the input tensor in the state of the current session. + +[`GetSessionHandleV2(...)`](../../../tf/raw_ops/GetSessionHandleV2.md): Store the input tensor in the state of the current session. + +[`GetSessionTensor(...)`](../../../tf/raw_ops/GetSessionTensor.md): Get the value of the tensor specified by its handle. + +[`Greater(...)`](../../../tf/raw_ops/Greater.md): Returns the truth value of (x > y) element-wise. + +[`GreaterEqual(...)`](../../../tf/raw_ops/GreaterEqual.md): Returns the truth value of (x >= y) element-wise. + +[`GroupByReducerDataset(...)`](../../../tf/raw_ops/GroupByReducerDataset.md): Creates a dataset that computes a group-by on `input_dataset`. + +[`GroupByWindowDataset(...)`](../../../tf/raw_ops/GroupByWindowDataset.md): Creates a dataset that computes a windowed group-by on `input_dataset`. + +[`GuaranteeConst(...)`](../../../tf/raw_ops/GuaranteeConst.md): Gives a guarantee to the TF runtime that the input tensor is a constant. + +[`HSVToRGB(...)`](../../../tf/raw_ops/HSVToRGB.md): Convert one or more images from HSV to RGB. + +[`HashTable(...)`](../../../tf/raw_ops/HashTable.md): Creates a non-initialized hash table. + +[`HashTableV2(...)`](../../../tf/raw_ops/HashTableV2.md): Creates a non-initialized hash table. + +[`HistogramFixedWidth(...)`](../../../tf/raw_ops/HistogramFixedWidth.md): Return histogram of values. + +[`HistogramSummary(...)`](../../../tf/raw_ops/HistogramSummary.md): Outputs a `Summary` protocol buffer with a histogram. + +[`IFFT(...)`](../../../tf/raw_ops/IFFT.md): Inverse fast Fourier transform. + +[`IFFT2D(...)`](../../../tf/raw_ops/IFFT2D.md): Inverse 2D fast Fourier transform. + +[`IFFT3D(...)`](../../../tf/raw_ops/IFFT3D.md): Inverse 3D fast Fourier transform. + +[`IRFFT(...)`](../../../tf/raw_ops/IRFFT.md): Inverse real-valued fast Fourier transform. + +[`IRFFT2D(...)`](../../../tf/raw_ops/IRFFT2D.md): Inverse 2D real-valued fast Fourier transform. + +[`IRFFT3D(...)`](../../../tf/raw_ops/IRFFT3D.md): Inverse 3D real-valued fast Fourier transform. + +[`Identity(...)`](../../../tf/raw_ops/Identity.md): Return a tensor with the same shape and contents as the input tensor or value. + +[`IdentityN(...)`](../../../tf/raw_ops/IdentityN.md): Returns a list of tensors with the same shapes and contents as the input + +[`IdentityReader(...)`](../../../tf/raw_ops/IdentityReader.md): A Reader that outputs the queued work as both the key and value. + +[`IdentityReaderV2(...)`](../../../tf/raw_ops/IdentityReaderV2.md): A Reader that outputs the queued work as both the key and value. + +[`If(...)`](../../../tf/raw_ops/If.md): output = cond ? then_branch(input) : else_branch(input) + +[`Igamma(...)`](../../../tf/raw_ops/Igamma.md): Compute the lower regularized incomplete Gamma function `P(a, x)`. + +[`IgammaGradA(...)`](../../../tf/raw_ops/IgammaGradA.md): Computes the gradient of `igamma(a, x)` wrt `a`. + +[`Igammac(...)`](../../../tf/raw_ops/Igammac.md): Compute the upper regularized incomplete Gamma function `Q(a, x)`. + +[`IgnoreErrorsDataset(...)`](../../../tf/raw_ops/IgnoreErrorsDataset.md): Creates a dataset that contains the elements of `input_dataset` ignoring errors. + +[`Imag(...)`](../../../tf/raw_ops/Imag.md): Returns the imaginary part of a complex number. + +[`ImageProjectiveTransformV2(...)`](../../../tf/raw_ops/ImageProjectiveTransformV2.md): Applies the given transform to each of the images. 
+ +[`ImageSummary(...)`](../../../tf/raw_ops/ImageSummary.md): Outputs a `Summary` protocol buffer with images. + +[`ImmutableConst(...)`](../../../tf/raw_ops/ImmutableConst.md): Returns immutable tensor from memory region. + +[`ImportEvent(...)`](../../../tf/raw_ops/ImportEvent.md) + +[`InTopK(...)`](../../../tf/raw_ops/InTopK.md): Says whether the targets are in the top `K` predictions. + +[`InTopKV2(...)`](../../../tf/raw_ops/InTopKV2.md): Says whether the targets are in the top `K` predictions. + +[`InfeedDequeue(...)`](../../../tf/raw_ops/InfeedDequeue.md): A placeholder op for a value that will be fed into the computation. + +[`InfeedDequeueTuple(...)`](../../../tf/raw_ops/InfeedDequeueTuple.md): Fetches multiple values from infeed as an XLA tuple. + +[`InfeedEnqueue(...)`](../../../tf/raw_ops/InfeedEnqueue.md): An op which feeds a single Tensor value into the computation. + +[`InfeedEnqueuePrelinearizedBuffer(...)`](../../../tf/raw_ops/InfeedEnqueuePrelinearizedBuffer.md): An op which enqueues prelinearized buffer into TPU infeed. + +[`InfeedEnqueueTuple(...)`](../../../tf/raw_ops/InfeedEnqueueTuple.md): Feeds multiple Tensor values into the computation as an XLA tuple. + +[`InitializeTable(...)`](../../../tf/raw_ops/InitializeTable.md): Table initializer that takes two tensors for keys and values respectively. + +[`InitializeTableFromTextFile(...)`](../../../tf/raw_ops/InitializeTableFromTextFile.md): Initializes a table from a text file. + +[`InitializeTableFromTextFileV2(...)`](../../../tf/raw_ops/InitializeTableFromTextFileV2.md): Initializes a table from a text file. + +[`InitializeTableV2(...)`](../../../tf/raw_ops/InitializeTableV2.md): Table initializer that takes two tensors for keys and values respectively. + +[`InplaceAdd(...)`](../../../tf/raw_ops/InplaceAdd.md): Adds v into specified rows of x. + +[`InplaceSub(...)`](../../../tf/raw_ops/InplaceSub.md): Subtracts `v` into specified rows of `x`. + +[`InplaceUpdate(...)`](../../../tf/raw_ops/InplaceUpdate.md): Updates specified rows with values in `v`. + +[`InterleaveDataset(...)`](../../../tf/raw_ops/InterleaveDataset.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`Inv(...)`](../../../tf/raw_ops/Inv.md): Computes the reciprocal of x element-wise. + +[`InvGrad(...)`](../../../tf/raw_ops/InvGrad.md): Computes the gradient for the inverse of `x` wrt its input. + +[`Invert(...)`](../../../tf/raw_ops/Invert.md): Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010. + +[`InvertPermutation(...)`](../../../tf/raw_ops/InvertPermutation.md): Computes the inverse permutation of a tensor. + +[`IsBoostedTreesEnsembleInitialized(...)`](../../../tf/raw_ops/IsBoostedTreesEnsembleInitialized.md): Checks whether a tree ensemble has been initialized. + +[`IsBoostedTreesQuantileStreamResourceInitialized(...)`](../../../tf/raw_ops/IsBoostedTreesQuantileStreamResourceInitialized.md): Checks whether a quantile stream has been initialized. + +[`IsFinite(...)`](../../../tf/raw_ops/IsFinite.md): Returns which elements of x are finite. + +[`IsInf(...)`](../../../tf/raw_ops/IsInf.md): Returns which elements of x are Inf. + +[`IsNan(...)`](../../../tf/raw_ops/IsNan.md): Returns which elements of x are NaN. + +[`IsVariableInitialized(...)`](../../../tf/raw_ops/IsVariableInitialized.md): Checks whether a tensor has been initialized. + +[`Iterator(...)`](../../../tf/raw_ops/Iterator.md): A container for an iterator resource. 
+ +[`IteratorFromStringHandle(...)`](../../../tf/raw_ops/IteratorFromStringHandle.md): Converts the given string representing a handle to an iterator to a resource. + +[`IteratorFromStringHandleV2(...)`](../../../tf/raw_ops/IteratorFromStringHandleV2.md) + +[`IteratorGetDevice(...)`](../../../tf/raw_ops/IteratorGetDevice.md): Returns the name of the device on which `resource` has been placed. + +[`IteratorGetNext(...)`](../../../tf/raw_ops/IteratorGetNext.md): Gets the next output from the given iterator. + +[`IteratorGetNextAsOptional(...)`](../../../tf/raw_ops/IteratorGetNextAsOptional.md): Gets the next output from the given iterator as an Optional variant. + +[`IteratorGetNextSync(...)`](../../../tf/raw_ops/IteratorGetNextSync.md): Gets the next output from the given iterator. + +[`IteratorToStringHandle(...)`](../../../tf/raw_ops/IteratorToStringHandle.md): Converts the given `resource_handle` representing an iterator to a string. + +[`IteratorV2(...)`](../../../tf/raw_ops/IteratorV2.md) + +[`L2Loss(...)`](../../../tf/raw_ops/L2Loss.md): L2 Loss. + +[`LMDBDataset(...)`](../../../tf/raw_ops/LMDBDataset.md): Creates a dataset that emits the key-value pairs in one or more LMDB files. + +[`LMDBReader(...)`](../../../tf/raw_ops/LMDBReader.md): A Reader that outputs the records from an LMDB file. + +[`LRN(...)`](../../../tf/raw_ops/LRN.md): Local Response Normalization. + +[`LRNGrad(...)`](../../../tf/raw_ops/LRNGrad.md): Gradients for Local Response Normalization. + +[`LSTMBlockCell(...)`](../../../tf/raw_ops/LSTMBlockCell.md): Computes the LSTM cell forward propagation for 1 time step. + +[`LSTMBlockCellGrad(...)`](../../../tf/raw_ops/LSTMBlockCellGrad.md): Computes the LSTM cell backward propagation for 1 timestep. + +[`LatencyStatsDataset(...)`](../../../tf/raw_ops/LatencyStatsDataset.md): Records the latency of producing `input_dataset` elements in a StatsAggregator. + +[`LeakyRelu(...)`](../../../tf/raw_ops/LeakyRelu.md): Computes rectified linear: `max(features, features * alpha)`. + +[`LeakyReluGrad(...)`](../../../tf/raw_ops/LeakyReluGrad.md): Computes rectified linear gradients for a LeakyRelu operation. + +[`LearnedUnigramCandidateSampler(...)`](../../../tf/raw_ops/LearnedUnigramCandidateSampler.md): Generates labels for candidate sampling with a learned unigram distribution. + +[`LeftShift(...)`](../../../tf/raw_ops/LeftShift.md): Elementwise computes the bitwise left-shift of `x` and `y`. + +[`LegacyParallelInterleaveDatasetV2(...)`](../../../tf/raw_ops/LegacyParallelInterleaveDatasetV2.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`Less(...)`](../../../tf/raw_ops/Less.md): Returns the truth value of (x < y) element-wise. + +[`LessEqual(...)`](../../../tf/raw_ops/LessEqual.md): Returns the truth value of (x <= y) element-wise. + +[`Lgamma(...)`](../../../tf/raw_ops/Lgamma.md): Computes the log of the absolute value of `Gamma(x)` element-wise. + +[`LinSpace(...)`](../../../tf/raw_ops/LinSpace.md): Generates values in an interval. + +[`ListDiff(...)`](../../../tf/raw_ops/ListDiff.md): Computes the difference between two lists of numbers or strings. + +[`LoadAndRemapMatrix(...)`](../../../tf/raw_ops/LoadAndRemapMatrix.md): Loads a 2-D (matrix) `Tensor` with name `old_tensor_name` from the checkpoint + +[`LoadTPUEmbeddingADAMParameters(...)`](../../../tf/raw_ops/LoadTPUEmbeddingADAMParameters.md): Load ADAM embedding parameters.
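+
+For instance, `LinSpace` and `Lgamma` from the list above can be combined as follows; a minimal sketch assuming eager execution:
+
+```
+import tensorflow as tf
+
+# Five evenly spaced values in [0, 1]: [0., 0.25, 0.5, 0.75, 1.].
+xs = tf.raw_ops.LinSpace(start=0.0, stop=1.0, num=5)
+# log|Gamma(x)| element-wise, evaluated at xs + 1.
+lg = tf.raw_ops.Lgamma(x=xs + 1.0)
+```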
+ +[`LoadTPUEmbeddingADAMParametersGradAccumDebug(...)`](../../../tf/raw_ops/LoadTPUEmbeddingADAMParametersGradAccumDebug.md): Load ADAM embedding parameters with debug support. + +[`LoadTPUEmbeddingAdadeltaParameters(...)`](../../../tf/raw_ops/LoadTPUEmbeddingAdadeltaParameters.md): Load Adadelta embedding parameters. + +[`LoadTPUEmbeddingAdadeltaParametersGradAccumDebug(...)`](../../../tf/raw_ops/LoadTPUEmbeddingAdadeltaParametersGradAccumDebug.md): Load Adadelta parameters with debug support. + +[`LoadTPUEmbeddingAdagradParameters(...)`](../../../tf/raw_ops/LoadTPUEmbeddingAdagradParameters.md): Load Adagrad embedding parameters. + +[`LoadTPUEmbeddingAdagradParametersGradAccumDebug(...)`](../../../tf/raw_ops/LoadTPUEmbeddingAdagradParametersGradAccumDebug.md): Load Adagrad embedding parameters with debug support. + +[`LoadTPUEmbeddingCenteredRMSPropParameters(...)`](../../../tf/raw_ops/LoadTPUEmbeddingCenteredRMSPropParameters.md): Load centered RMSProp embedding parameters. + +[`LoadTPUEmbeddingFTRLParameters(...)`](../../../tf/raw_ops/LoadTPUEmbeddingFTRLParameters.md): Load FTRL embedding parameters. + +[`LoadTPUEmbeddingFTRLParametersGradAccumDebug(...)`](../../../tf/raw_ops/LoadTPUEmbeddingFTRLParametersGradAccumDebug.md): Load FTRL embedding parameters with debug support. + +[`LoadTPUEmbeddingMDLAdagradLightParameters(...)`](../../../tf/raw_ops/LoadTPUEmbeddingMDLAdagradLightParameters.md): Load MDL Adagrad Light embedding parameters. + +[`LoadTPUEmbeddingMomentumParameters(...)`](../../../tf/raw_ops/LoadTPUEmbeddingMomentumParameters.md): Load Momentum embedding parameters. + +[`LoadTPUEmbeddingMomentumParametersGradAccumDebug(...)`](../../../tf/raw_ops/LoadTPUEmbeddingMomentumParametersGradAccumDebug.md): Load Momentum embedding parameters with debug support. + +[`LoadTPUEmbeddingProximalAdagradParameters(...)`](../../../tf/raw_ops/LoadTPUEmbeddingProximalAdagradParameters.md): Load proximal Adagrad embedding parameters. + +[`LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug(...)`](../../../tf/raw_ops/LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug.md): Load proximal Adagrad embedding parameters with debug support. + +[`LoadTPUEmbeddingRMSPropParameters(...)`](../../../tf/raw_ops/LoadTPUEmbeddingRMSPropParameters.md): Load RMSProp embedding parameters. + +[`LoadTPUEmbeddingRMSPropParametersGradAccumDebug(...)`](../../../tf/raw_ops/LoadTPUEmbeddingRMSPropParametersGradAccumDebug.md): Load RMSProp embedding parameters with debug support. + +[`LoadTPUEmbeddingStochasticGradientDescentParameters(...)`](../../../tf/raw_ops/LoadTPUEmbeddingStochasticGradientDescentParameters.md): Load SGD embedding parameters. + +[`Log(...)`](../../../tf/raw_ops/Log.md): Computes natural logarithm of x element-wise. + +[`Log1p(...)`](../../../tf/raw_ops/Log1p.md): Computes natural logarithm of (1 + x) element-wise. + +[`LogMatrixDeterminant(...)`](../../../tf/raw_ops/LogMatrixDeterminant.md): Computes the sign and the log of the absolute value of the determinant of + +[`LogSoftmax(...)`](../../../tf/raw_ops/LogSoftmax.md): Computes log softmax activations. + +[`LogUniformCandidateSampler(...)`](../../../tf/raw_ops/LogUniformCandidateSampler.md): Generates labels for candidate sampling with a log-uniform distribution. + +[`LogicalAnd(...)`](../../../tf/raw_ops/LogicalAnd.md): Returns the truth value of x AND y element-wise. + +[`LogicalNot(...)`](../../../tf/raw_ops/LogicalNot.md): Returns the truth value of `NOT x` element-wise. 
+ +[`LogicalOr(...)`](../../../tf/raw_ops/LogicalOr.md): Returns the truth value of x OR y element-wise. + +[`LookupTableExport(...)`](../../../tf/raw_ops/LookupTableExport.md): Outputs all keys and values in the table. + +[`LookupTableExportV2(...)`](../../../tf/raw_ops/LookupTableExportV2.md): Outputs all keys and values in the table. + +[`LookupTableFind(...)`](../../../tf/raw_ops/LookupTableFind.md): Looks up keys in a table, outputs the corresponding values. + +[`LookupTableFindV2(...)`](../../../tf/raw_ops/LookupTableFindV2.md): Looks up keys in a table, outputs the corresponding values. + +[`LookupTableImport(...)`](../../../tf/raw_ops/LookupTableImport.md): Replaces the contents of the table with the specified keys and values. + +[`LookupTableImportV2(...)`](../../../tf/raw_ops/LookupTableImportV2.md): Replaces the contents of the table with the specified keys and values. + +[`LookupTableInsert(...)`](../../../tf/raw_ops/LookupTableInsert.md): Updates the table to associate keys with values. + +[`LookupTableInsertV2(...)`](../../../tf/raw_ops/LookupTableInsertV2.md): Updates the table to associate keys with values. + +[`LookupTableRemoveV2(...)`](../../../tf/raw_ops/LookupTableRemoveV2.md): Removes keys and their associated values from a table. + +[`LookupTableSize(...)`](../../../tf/raw_ops/LookupTableSize.md): Computes the number of elements in the given table. + +[`LookupTableSizeV2(...)`](../../../tf/raw_ops/LookupTableSizeV2.md): Computes the number of elements in the given table. + +[`LoopCond(...)`](../../../tf/raw_ops/LoopCond.md): Forwards the input to the output. + +[`LowerBound(...)`](../../../tf/raw_ops/LowerBound.md): Applies lower_bound(sorted_search_values, values) along each row. + +[`Lu(...)`](../../../tf/raw_ops/Lu.md): Computes the LU decomposition of one or more square matrices. + +[`MakeIterator(...)`](../../../tf/raw_ops/MakeIterator.md): Makes a new iterator from the given `dataset` and stores it in `iterator`. + +[`MapAndBatchDataset(...)`](../../../tf/raw_ops/MapAndBatchDataset.md): Creates a dataset that fuses mapping with batching. + +[`MapClear(...)`](../../../tf/raw_ops/MapClear.md): Op removes all elements in the underlying container. + +[`MapDataset(...)`](../../../tf/raw_ops/MapDataset.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`MapDefun(...)`](../../../tf/raw_ops/MapDefun.md): Maps a function on the list of tensors unpacked from arguments on dimension 0. + +[`MapIncompleteSize(...)`](../../../tf/raw_ops/MapIncompleteSize.md): Op returns the number of incomplete elements in the underlying container. + +[`MapPeek(...)`](../../../tf/raw_ops/MapPeek.md): Op peeks at the values at the specified key. If the + +[`MapSize(...)`](../../../tf/raw_ops/MapSize.md): Op returns the number of elements in the underlying container. + +[`MapStage(...)`](../../../tf/raw_ops/MapStage.md): Stage (key, values) in the underlying container which behaves like a hashtable. + +[`MapUnstage(...)`](../../../tf/raw_ops/MapUnstage.md): Op removes and returns the values associated with the key + +[`MapUnstageNoKey(...)`](../../../tf/raw_ops/MapUnstageNoKey.md): Op removes and returns a random (key, value) + +[`MatMul(...)`](../../../tf/raw_ops/MatMul.md): Multiply the matrix "a" by the matrix "b". + +[`MatchingFiles(...)`](../../../tf/raw_ops/MatchingFiles.md): Returns the set of files matching one or more glob patterns.
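+
+A minimal sketch of the `MatMul` raw op above, which mirrors `tf.linalg.matmul` (eager execution assumed):
+
+```
+import tensorflow as tf
+
+a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
+# Plain matrix product; set transpose_a / transpose_b to multiply transposed inputs.
+c = tf.raw_ops.MatMul(a=a, b=b, transpose_a=False, transpose_b=False)
+```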
+ +[`MatchingFilesDataset(...)`](../../../tf/raw_ops/MatchingFilesDataset.md) + +[`MatrixBandPart(...)`](../../../tf/raw_ops/MatrixBandPart.md): Copy a tensor setting everything outside a central band in each innermost matrix + +[`MatrixDeterminant(...)`](../../../tf/raw_ops/MatrixDeterminant.md): Computes the determinant of one or more square matrices. + +[`MatrixDiag(...)`](../../../tf/raw_ops/MatrixDiag.md): Returns a batched diagonal tensor with given batched diagonal values. + +[`MatrixDiagPart(...)`](../../../tf/raw_ops/MatrixDiagPart.md): Returns the batched diagonal part of a batched tensor. + +[`MatrixDiagPartV2(...)`](../../../tf/raw_ops/MatrixDiagPartV2.md): Returns the batched diagonal part of a batched tensor. + +[`MatrixDiagPartV3(...)`](../../../tf/raw_ops/MatrixDiagPartV3.md): Returns the batched diagonal part of a batched tensor. + +[`MatrixDiagV2(...)`](../../../tf/raw_ops/MatrixDiagV2.md): Returns a batched diagonal tensor with given batched diagonal values. + +[`MatrixDiagV3(...)`](../../../tf/raw_ops/MatrixDiagV3.md): Returns a batched diagonal tensor with given batched diagonal values. + +[`MatrixExponential(...)`](../../../tf/raw_ops/MatrixExponential.md): Deprecated, use the Python implementation tf.linalg.matrix_exponential. + +[`MatrixInverse(...)`](../../../tf/raw_ops/MatrixInverse.md): Computes the inverse of one or more square invertible matrices or their + +[`MatrixLogarithm(...)`](../../../tf/raw_ops/MatrixLogarithm.md): Computes the matrix logarithm of one or more square matrices: + +[`MatrixSetDiag(...)`](../../../tf/raw_ops/MatrixSetDiag.md): Returns a batched matrix tensor with new batched diagonal values. + +[`MatrixSetDiagV2(...)`](../../../tf/raw_ops/MatrixSetDiagV2.md): Returns a batched matrix tensor with new batched diagonal values. + +[`MatrixSetDiagV3(...)`](../../../tf/raw_ops/MatrixSetDiagV3.md): Returns a batched matrix tensor with new batched diagonal values. + +[`MatrixSolve(...)`](../../../tf/raw_ops/MatrixSolve.md): Solves systems of linear equations. + +[`MatrixSolveLs(...)`](../../../tf/raw_ops/MatrixSolveLs.md): Solves one or more linear least-squares problems. + +[`MatrixSquareRoot(...)`](../../../tf/raw_ops/MatrixSquareRoot.md): Computes the matrix square root of one or more square matrices: + +[`MatrixTriangularSolve(...)`](../../../tf/raw_ops/MatrixTriangularSolve.md): Solves systems of linear equations with upper or lower triangular matrices by backsubstitution. + +[`Max(...)`](../../../tf/raw_ops/Max.md): Computes the maximum of elements across dimensions of a tensor. + +[`MaxIntraOpParallelismDataset(...)`](../../../tf/raw_ops/MaxIntraOpParallelismDataset.md): Creates a dataset that overrides the maximum intra-op parallelism. + +[`MaxPool(...)`](../../../tf/raw_ops/MaxPool.md): Performs max pooling on the input. + +[`MaxPool3D(...)`](../../../tf/raw_ops/MaxPool3D.md): Performs 3D max pooling on the input. + +[`MaxPool3DGrad(...)`](../../../tf/raw_ops/MaxPool3DGrad.md): Computes gradients of max pooling function. + +[`MaxPool3DGradGrad(...)`](../../../tf/raw_ops/MaxPool3DGradGrad.md): Computes second-order gradients of the maxpooling function. + +[`MaxPoolGrad(...)`](../../../tf/raw_ops/MaxPoolGrad.md): Computes gradients of the maxpooling function. + +[`MaxPoolGradGrad(...)`](../../../tf/raw_ops/MaxPoolGradGrad.md): Computes second-order gradients of the maxpooling function. + +[`MaxPoolGradGradV2(...)`](../../../tf/raw_ops/MaxPoolGradGradV2.md): Computes second-order gradients of the maxpooling function.
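+
+For example, a minimal sketch of the `MatrixSolve` op above, solving a small linear system (eager execution assumed):
+
+```
+import tensorflow as tf
+
+matrix = tf.constant([[3.0, 1.0], [1.0, 2.0]])
+rhs = tf.constant([[9.0], [8.0]])
+# Solves matrix @ x = rhs; here x == [[2.], [3.]].
+x = tf.raw_ops.MatrixSolve(matrix=matrix, rhs=rhs, adjoint=False)
+```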
+ +[`MaxPoolGradGradWithArgmax(...)`](../../../tf/raw_ops/MaxPoolGradGradWithArgmax.md): Computes second-order gradients of the maxpooling function. + +[`MaxPoolGradV2(...)`](../../../tf/raw_ops/MaxPoolGradV2.md): Computes gradients of the maxpooling function. + +[`MaxPoolGradWithArgmax(...)`](../../../tf/raw_ops/MaxPoolGradWithArgmax.md): Computes gradients of the maxpooling function. + +[`MaxPoolV2(...)`](../../../tf/raw_ops/MaxPoolV2.md): Performs max pooling on the input. + +[`MaxPoolWithArgmax(...)`](../../../tf/raw_ops/MaxPoolWithArgmax.md): Performs max pooling on the input and outputs both max values and indices. + +[`Maximum(...)`](../../../tf/raw_ops/Maximum.md): Returns the max of x and y (i.e. x > y ? x : y) element-wise. + +[`Mean(...)`](../../../tf/raw_ops/Mean.md): Computes the mean of elements across dimensions of a tensor. + +[`Merge(...)`](../../../tf/raw_ops/Merge.md): Forwards the value of an available tensor from `inputs` to `output`. + +[`MergeSummary(...)`](../../../tf/raw_ops/MergeSummary.md): Merges summaries. + +[`MergeV2Checkpoints(...)`](../../../tf/raw_ops/MergeV2Checkpoints.md): V2 format specific: merges the metadata files of sharded checkpoints. The + +[`Mfcc(...)`](../../../tf/raw_ops/Mfcc.md): Transforms a spectrogram into a form that's useful for speech recognition. + +[`Min(...)`](../../../tf/raw_ops/Min.md): Computes the minimum of elements across dimensions of a tensor. + +[`Minimum(...)`](../../../tf/raw_ops/Minimum.md): Returns the min of x and y (i.e. x < y ? x : y) element-wise. + +[`MirrorPad(...)`](../../../tf/raw_ops/MirrorPad.md): Pads a tensor with mirrored values. + +[`MirrorPadGrad(...)`](../../../tf/raw_ops/MirrorPadGrad.md): Gradient op for `MirrorPad` op. This op folds a mirror-padded tensor. + +[`Mod(...)`](../../../tf/raw_ops/Mod.md): Returns element-wise remainder of division. This emulates C semantics in that + +[`ModelDataset(...)`](../../../tf/raw_ops/ModelDataset.md): Identity transformation that models performance. + +[`Mul(...)`](../../../tf/raw_ops/Mul.md): Returns x * y element-wise. + +[`MulNoNan(...)`](../../../tf/raw_ops/MulNoNan.md): Returns x * y element-wise. Returns zero if y is zero, even if x is infinite or NaN. + +[`MultiDeviceIterator(...)`](../../../tf/raw_ops/MultiDeviceIterator.md): Creates a MultiDeviceIterator resource. + +[`MultiDeviceIteratorFromStringHandle(...)`](../../../tf/raw_ops/MultiDeviceIteratorFromStringHandle.md): Generates a MultiDeviceIterator resource from its provided string handle. + +[`MultiDeviceIteratorGetNextFromShard(...)`](../../../tf/raw_ops/MultiDeviceIteratorGetNextFromShard.md): Gets next element for the provided shard number. + +[`MultiDeviceIteratorInit(...)`](../../../tf/raw_ops/MultiDeviceIteratorInit.md): Initializes the multi device iterator with the given dataset. + +[`MultiDeviceIteratorToStringHandle(...)`](../../../tf/raw_ops/MultiDeviceIteratorToStringHandle.md): Produces a string handle for the given MultiDeviceIterator. + +[`Multinomial(...)`](../../../tf/raw_ops/Multinomial.md): Draws samples from a multinomial distribution. + +[`MutableDenseHashTable(...)`](../../../tf/raw_ops/MutableDenseHashTable.md): Creates an empty hash table that uses tensors as the backing store. + +[`MutableDenseHashTableV2(...)`](../../../tf/raw_ops/MutableDenseHashTableV2.md): Creates an empty hash table that uses tensors as the backing store. + +[`MutableHashTable(...)`](../../../tf/raw_ops/MutableHashTable.md): Creates an empty hash table.
+ +[`MutableHashTableOfTensors(...)`](../../../tf/raw_ops/MutableHashTableOfTensors.md): Creates an empty hash table. + +[`MutableHashTableOfTensorsV2(...)`](../../../tf/raw_ops/MutableHashTableOfTensorsV2.md): Creates an empty hash table. + +[`MutableHashTableV2(...)`](../../../tf/raw_ops/MutableHashTableV2.md): Creates an empty hash table. + +[`MutexLock(...)`](../../../tf/raw_ops/MutexLock.md): Locks a mutex resource. The output is the lock. So long as the lock tensor + +[`MutexV2(...)`](../../../tf/raw_ops/MutexV2.md): Creates a Mutex resource that can be locked by `MutexLock`. + +[`NcclAllReduce(...)`](../../../tf/raw_ops/NcclAllReduce.md): Outputs a tensor containing the reduction across all input tensors. + +[`NcclBroadcast(...)`](../../../tf/raw_ops/NcclBroadcast.md): Sends `input` to all devices that are connected to the output. + +[`NcclReduce(...)`](../../../tf/raw_ops/NcclReduce.md): Reduces `input` from `num_devices` using `reduction` to a single device. + +[`Ndtri(...)`](../../../tf/raw_ops/Ndtri.md) + +[`Neg(...)`](../../../tf/raw_ops/Neg.md): Computes numerical negative value element-wise. + +[`NextAfter(...)`](../../../tf/raw_ops/NextAfter.md): Returns the next representable value of `x1` in the direction of `x2`, element-wise. + +[`NextIteration(...)`](../../../tf/raw_ops/NextIteration.md): Makes its input available to the next iteration. + +[`NoOp(...)`](../../../tf/raw_ops/NoOp.md): Does nothing. Only useful as a placeholder for control edges. + +[`NonDeterministicInts(...)`](../../../tf/raw_ops/NonDeterministicInts.md): Non-deterministically generates some integers. + +[`NonMaxSuppression(...)`](../../../tf/raw_ops/NonMaxSuppression.md): Greedily selects a subset of bounding boxes in descending order of score, + +[`NonMaxSuppressionV2(...)`](../../../tf/raw_ops/NonMaxSuppressionV2.md): Greedily selects a subset of bounding boxes in descending order of score, + +[`NonMaxSuppressionV3(...)`](../../../tf/raw_ops/NonMaxSuppressionV3.md): Greedily selects a subset of bounding boxes in descending order of score, + +[`NonMaxSuppressionV4(...)`](../../../tf/raw_ops/NonMaxSuppressionV4.md): Greedily selects a subset of bounding boxes in descending order of score, + +[`NonMaxSuppressionV5(...)`](../../../tf/raw_ops/NonMaxSuppressionV5.md): Greedily selects a subset of bounding boxes in descending order of score, + +[`NonMaxSuppressionWithOverlaps(...)`](../../../tf/raw_ops/NonMaxSuppressionWithOverlaps.md): Greedily selects a subset of bounding boxes in descending order of score, + +[`NonSerializableDataset(...)`](../../../tf/raw_ops/NonSerializableDataset.md) + +[`NotEqual(...)`](../../../tf/raw_ops/NotEqual.md): Returns the truth value of (x != y) element-wise. + +[`NthElement(...)`](../../../tf/raw_ops/NthElement.md): Finds values of the `n`-th order statistic for the last dimension. + +[`OneHot(...)`](../../../tf/raw_ops/OneHot.md): Returns a one-hot tensor. + +[`OneShotIterator(...)`](../../../tf/raw_ops/OneShotIterator.md): Makes a "one-shot" iterator that can be iterated only once. + +[`OnesLike(...)`](../../../tf/raw_ops/OnesLike.md): Returns a tensor of ones with the same shape and type as x. + +[`OptimizeDataset(...)`](../../../tf/raw_ops/OptimizeDataset.md): Creates a dataset by applying optimizations to `input_dataset`. + +[`OptionalFromValue(...)`](../../../tf/raw_ops/OptionalFromValue.md): Constructs an Optional variant from a tuple of tensors. 
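+
+A minimal sketch of the `OneHot` op listed above; unlike `tf.one_hot`, the raw op requires explicit `on_value` and `off_value` (eager execution assumed):
+
+```
+import tensorflow as tf
+
+# [[1, 0, 0], [0, 0, 1], [0, 1, 0]] as float32.
+encoded = tf.raw_ops.OneHot(indices=tf.constant([0, 2, 1]), depth=3,
+                            on_value=tf.constant(1.0), off_value=tf.constant(0.0))
+```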
+ +[`OptionalGetValue(...)`](../../../tf/raw_ops/OptionalGetValue.md): Returns the value stored in an Optional variant or raises an error if none exists. + +[`OptionalHasValue(...)`](../../../tf/raw_ops/OptionalHasValue.md): Returns true if and only if the given Optional variant has a value. + +[`OptionalNone(...)`](../../../tf/raw_ops/OptionalNone.md): Creates an Optional variant with no value. + +[`OrderedMapClear(...)`](../../../tf/raw_ops/OrderedMapClear.md): Op removes all elements in the underlying container. + +[`OrderedMapIncompleteSize(...)`](../../../tf/raw_ops/OrderedMapIncompleteSize.md): Op returns the number of incomplete elements in the underlying container. + +[`OrderedMapPeek(...)`](../../../tf/raw_ops/OrderedMapPeek.md): Op peeks at the values at the specified key. If the + +[`OrderedMapSize(...)`](../../../tf/raw_ops/OrderedMapSize.md): Op returns the number of elements in the underlying container. + +[`OrderedMapStage(...)`](../../../tf/raw_ops/OrderedMapStage.md): Stage (key, values) in the underlying container which behaves like a ordered + +[`OrderedMapUnstage(...)`](../../../tf/raw_ops/OrderedMapUnstage.md): Op removes and returns the values associated with the key + +[`OrderedMapUnstageNoKey(...)`](../../../tf/raw_ops/OrderedMapUnstageNoKey.md): Op removes and returns the (key, value) element with the smallest + +[`OutfeedDequeue(...)`](../../../tf/raw_ops/OutfeedDequeue.md): Retrieves a single tensor from the computation outfeed. + +[`OutfeedDequeueTuple(...)`](../../../tf/raw_ops/OutfeedDequeueTuple.md): Retrieve multiple values from the computation outfeed. + +[`OutfeedEnqueue(...)`](../../../tf/raw_ops/OutfeedEnqueue.md): Enqueue a Tensor on the computation outfeed. + +[`OutfeedEnqueueTuple(...)`](../../../tf/raw_ops/OutfeedEnqueueTuple.md): Enqueue multiple Tensor values on the computation outfeed. + +[`Pack(...)`](../../../tf/raw_ops/Pack.md): Packs a list of `N` rank-`R` tensors into one rank-`(R+1)` tensor. + +[`Pad(...)`](../../../tf/raw_ops/Pad.md): Pads a tensor with zeros. + +[`PadV2(...)`](../../../tf/raw_ops/PadV2.md): Pads a tensor. + +[`PaddedBatchDataset(...)`](../../../tf/raw_ops/PaddedBatchDataset.md): Creates a dataset that batches and pads `batch_size` elements from the input. + +[`PaddedBatchDatasetV2(...)`](../../../tf/raw_ops/PaddedBatchDatasetV2.md): Creates a dataset that batches and pads `batch_size` elements from the input. + +[`PaddingFIFOQueue(...)`](../../../tf/raw_ops/PaddingFIFOQueue.md): A queue that produces elements in first-in first-out order. + +[`PaddingFIFOQueueV2(...)`](../../../tf/raw_ops/PaddingFIFOQueueV2.md): A queue that produces elements in first-in first-out order. + +[`ParallelConcat(...)`](../../../tf/raw_ops/ParallelConcat.md): Concatenates a list of `N` tensors along the first dimension. + +[`ParallelDynamicStitch(...)`](../../../tf/raw_ops/ParallelDynamicStitch.md): Interleave the values from the `data` tensors into a single tensor. + +[`ParallelInterleaveDataset(...)`](../../../tf/raw_ops/ParallelInterleaveDataset.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`ParallelInterleaveDatasetV2(...)`](../../../tf/raw_ops/ParallelInterleaveDatasetV2.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`ParallelInterleaveDatasetV3(...)`](../../../tf/raw_ops/ParallelInterleaveDatasetV3.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. 
+ +[`ParallelInterleaveDatasetV4(...)`](../../../tf/raw_ops/ParallelInterleaveDatasetV4.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`ParallelMapDataset(...)`](../../../tf/raw_ops/ParallelMapDataset.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`ParallelMapDatasetV2(...)`](../../../tf/raw_ops/ParallelMapDatasetV2.md): Creates a dataset that applies `f` to the outputs of `input_dataset`. + +[`ParameterizedTruncatedNormal(...)`](../../../tf/raw_ops/ParameterizedTruncatedNormal.md): Outputs random values from a normal distribution. The parameters may each be a + +[`ParseExample(...)`](../../../tf/raw_ops/ParseExample.md): Transforms a vector of brain.Example protos (as strings) into typed tensors. + +[`ParseExampleDataset(...)`](../../../tf/raw_ops/ParseExampleDataset.md): Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features. + +[`ParseExampleDatasetV2(...)`](../../../tf/raw_ops/ParseExampleDatasetV2.md): Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features. + +[`ParseExampleV2(...)`](../../../tf/raw_ops/ParseExampleV2.md): Transforms a vector of tf.Example protos (as strings) into typed tensors. + +[`ParseSequenceExample(...)`](../../../tf/raw_ops/ParseSequenceExample.md): Transforms a vector of brain.SequenceExample protos (as strings) into typed tensors. + +[`ParseSequenceExampleV2(...)`](../../../tf/raw_ops/ParseSequenceExampleV2.md): Transforms a vector of tf.io.SequenceExample protos (as strings) into + +[`ParseSingleExample(...)`](../../../tf/raw_ops/ParseSingleExample.md): Transforms a tf.Example proto (as a string) into typed tensors. + +[`ParseSingleSequenceExample(...)`](../../../tf/raw_ops/ParseSingleSequenceExample.md): Transforms a scalar brain.SequenceExample proto (as strings) into typed tensors. + +[`ParseTensor(...)`](../../../tf/raw_ops/ParseTensor.md): Transforms a serialized tensorflow.TensorProto proto into a Tensor. + +[`PartitionedCall(...)`](../../../tf/raw_ops/PartitionedCall.md): returns `f(inputs)`, where `f`'s body is placed and partitioned. + +[`Placeholder(...)`](../../../tf/raw_ops/Placeholder.md): A placeholder op for a value that will be fed into the computation. + +[`PlaceholderV2(...)`](../../../tf/raw_ops/PlaceholderV2.md): A placeholder op for a value that will be fed into the computation. + +[`PlaceholderWithDefault(...)`](../../../tf/raw_ops/PlaceholderWithDefault.md): A placeholder op that passes through `input` when its output is not fed. + +[`Polygamma(...)`](../../../tf/raw_ops/Polygamma.md): Compute the polygamma function \\(\psi^{(n)}(x)\\). + +[`PopulationCount(...)`](../../../tf/raw_ops/PopulationCount.md): Computes element-wise population count (a.k.a. popcount, bitsum, bitcount). + +[`Pow(...)`](../../../tf/raw_ops/Pow.md): Computes the power of one value to another. + +[`PrefetchDataset(...)`](../../../tf/raw_ops/PrefetchDataset.md): Creates a dataset that asynchronously prefetches elements from `input_dataset`. + +[`Prelinearize(...)`](../../../tf/raw_ops/Prelinearize.md): An op which linearizes one Tensor value to an opaque variant tensor. + +[`PrelinearizeTuple(...)`](../../../tf/raw_ops/PrelinearizeTuple.md): An op which linearizes multiple Tensor values to an opaque variant tensor. 
+ +[`PreventGradient(...)`](../../../tf/raw_ops/PreventGradient.md): An identity op that triggers an error if a gradient is requested. + +[`Print(...)`](../../../tf/raw_ops/Print.md): Prints a list of tensors. + +[`PrintV2(...)`](../../../tf/raw_ops/PrintV2.md): Prints a string scalar. + +[`PriorityQueue(...)`](../../../tf/raw_ops/PriorityQueue.md): A queue that produces elements sorted by the first component value. + +[`PriorityQueueV2(...)`](../../../tf/raw_ops/PriorityQueueV2.md): A queue that produces elements sorted by the first component value. + +[`PrivateThreadPoolDataset(...)`](../../../tf/raw_ops/PrivateThreadPoolDataset.md): Creates a dataset that uses a custom thread pool to compute `input_dataset`. + +[`Prod(...)`](../../../tf/raw_ops/Prod.md): Computes the product of elements across dimensions of a tensor. + +[`PyFunc(...)`](../../../tf/raw_ops/PyFunc.md): Invokes a python function to compute func(input)->output. + +[`PyFuncStateless(...)`](../../../tf/raw_ops/PyFuncStateless.md): A stateless version of PyFunc. + +[`Qr(...)`](../../../tf/raw_ops/Qr.md): Computes the QR decompositions of one or more matrices. + +[`QuantizeAndDequantize(...)`](../../../tf/raw_ops/QuantizeAndDequantize.md): Use QuantizeAndDequantizeV2 instead. + +[`QuantizeAndDequantizeV2(...)`](../../../tf/raw_ops/QuantizeAndDequantizeV2.md): Quantizes then dequantizes a tensor. + +[`QuantizeAndDequantizeV3(...)`](../../../tf/raw_ops/QuantizeAndDequantizeV3.md): Quantizes then dequantizes a tensor. + +[`QuantizeDownAndShrinkRange(...)`](../../../tf/raw_ops/QuantizeDownAndShrinkRange.md): Convert the quantized 'input' tensor into a lower-precision 'output', using the + +[`QuantizeV2(...)`](../../../tf/raw_ops/QuantizeV2.md): Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. + +[`QuantizedAdd(...)`](../../../tf/raw_ops/QuantizedAdd.md): Returns x + y element-wise, working on quantized buffers. + +[`QuantizedAvgPool(...)`](../../../tf/raw_ops/QuantizedAvgPool.md): Produces the average pool of the input tensor for quantized types. + +[`QuantizedBatchNormWithGlobalNormalization(...)`](../../../tf/raw_ops/QuantizedBatchNormWithGlobalNormalization.md): Quantized Batch normalization. + +[`QuantizedBiasAdd(...)`](../../../tf/raw_ops/QuantizedBiasAdd.md): Adds Tensor 'bias' to Tensor 'input' for Quantized types. + +[`QuantizedConcat(...)`](../../../tf/raw_ops/QuantizedConcat.md): Concatenates quantized tensors along one dimension. + +[`QuantizedConv2D(...)`](../../../tf/raw_ops/QuantizedConv2D.md): Computes a 2D convolution given quantized 4D input and filter tensors. + +[`QuantizedConv2DAndRelu(...)`](../../../tf/raw_ops/QuantizedConv2DAndRelu.md) + +[`QuantizedConv2DAndReluAndRequantize(...)`](../../../tf/raw_ops/QuantizedConv2DAndReluAndRequantize.md) + +[`QuantizedConv2DAndRequantize(...)`](../../../tf/raw_ops/QuantizedConv2DAndRequantize.md) + +[`QuantizedConv2DPerChannel(...)`](../../../tf/raw_ops/QuantizedConv2DPerChannel.md): Computes QuantizedConv2D per channel. 
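+
+For example, a minimal sketch of the `Qr` op above, assuming eager execution:
+
+```
+import tensorflow as tf
+
+m = tf.constant([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
+# Reduced QR factorization: q has shape [3, 2] and r has shape [2, 2].
+q, r = tf.raw_ops.Qr(input=m, full_matrices=False)
+```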
+ +[`QuantizedConv2DWithBias(...)`](../../../tf/raw_ops/QuantizedConv2DWithBias.md) + +[`QuantizedConv2DWithBiasAndRelu(...)`](../../../tf/raw_ops/QuantizedConv2DWithBiasAndRelu.md) + +[`QuantizedConv2DWithBiasAndReluAndRequantize(...)`](../../../tf/raw_ops/QuantizedConv2DWithBiasAndReluAndRequantize.md) + +[`QuantizedConv2DWithBiasAndRequantize(...)`](../../../tf/raw_ops/QuantizedConv2DWithBiasAndRequantize.md) + +[`QuantizedConv2DWithBiasSignedSumAndReluAndRequantize(...)`](../../../tf/raw_ops/QuantizedConv2DWithBiasSignedSumAndReluAndRequantize.md) + +[`QuantizedConv2DWithBiasSumAndRelu(...)`](../../../tf/raw_ops/QuantizedConv2DWithBiasSumAndRelu.md) + +[`QuantizedConv2DWithBiasSumAndReluAndRequantize(...)`](../../../tf/raw_ops/QuantizedConv2DWithBiasSumAndReluAndRequantize.md) + +[`QuantizedDepthwiseConv2D(...)`](../../../tf/raw_ops/QuantizedDepthwiseConv2D.md): Computes quantized depthwise Conv2D. + +[`QuantizedDepthwiseConv2DWithBias(...)`](../../../tf/raw_ops/QuantizedDepthwiseConv2DWithBias.md): Computes quantized depthwise Conv2D with Bias. + +[`QuantizedDepthwiseConv2DWithBiasAndRelu(...)`](../../../tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndRelu.md): Computes quantized depthwise Conv2D with Bias and Relu. + +[`QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize(...)`](../../../tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize.md): Computes quantized depthwise Conv2D with Bias, Relu and Requantize. + +[`QuantizedInstanceNorm(...)`](../../../tf/raw_ops/QuantizedInstanceNorm.md): Quantized Instance normalization. + +[`QuantizedMatMul(...)`](../../../tf/raw_ops/QuantizedMatMul.md): Perform a quantized matrix multiplication of `a` by the matrix `b`. + +[`QuantizedMatMulWithBias(...)`](../../../tf/raw_ops/QuantizedMatMulWithBias.md): Performs a quantized matrix multiplication of `a` by the matrix `b` with bias + +[`QuantizedMatMulWithBiasAndDequantize(...)`](../../../tf/raw_ops/QuantizedMatMulWithBiasAndDequantize.md) + +[`QuantizedMatMulWithBiasAndRelu(...)`](../../../tf/raw_ops/QuantizedMatMulWithBiasAndRelu.md): Perform a quantized matrix multiplication of `a` by the matrix `b` with bias + +[`QuantizedMatMulWithBiasAndReluAndRequantize(...)`](../../../tf/raw_ops/QuantizedMatMulWithBiasAndReluAndRequantize.md): Perform a quantized matrix multiplication of `a` by the matrix `b` with bias + +[`QuantizedMatMulWithBiasAndRequantize(...)`](../../../tf/raw_ops/QuantizedMatMulWithBiasAndRequantize.md) + +[`QuantizedMaxPool(...)`](../../../tf/raw_ops/QuantizedMaxPool.md): Produces the max pool of the input tensor for quantized types. + +[`QuantizedMul(...)`](../../../tf/raw_ops/QuantizedMul.md): Returns x * y element-wise, working on quantized buffers. + +[`QuantizedRelu(...)`](../../../tf/raw_ops/QuantizedRelu.md): Computes Quantized Rectified Linear: `max(features, 0)` + +[`QuantizedRelu6(...)`](../../../tf/raw_ops/QuantizedRelu6.md): Computes Quantized Rectified Linear 6: `min(max(features, 0), 6)` + +[`QuantizedReluX(...)`](../../../tf/raw_ops/QuantizedReluX.md): Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)` + +[`QuantizedReshape(...)`](../../../tf/raw_ops/QuantizedReshape.md): Reshapes a quantized tensor as per the Reshape op. + +[`QuantizedResizeBilinear(...)`](../../../tf/raw_ops/QuantizedResizeBilinear.md): Resize quantized `images` to `size` using quantized bilinear interpolation. + +[`QueueClose(...)`](../../../tf/raw_ops/QueueClose.md): Closes the given queue. 
+ +[`QueueCloseV2(...)`](../../../tf/raw_ops/QueueCloseV2.md): Closes the given queue. + +[`QueueDequeue(...)`](../../../tf/raw_ops/QueueDequeue.md): Dequeues a tuple of one or more tensors from the given queue. + +[`QueueDequeueMany(...)`](../../../tf/raw_ops/QueueDequeueMany.md): Dequeues `n` tuples of one or more tensors from the given queue. + +[`QueueDequeueManyV2(...)`](../../../tf/raw_ops/QueueDequeueManyV2.md): Dequeues `n` tuples of one or more tensors from the given queue. + +[`QueueDequeueUpTo(...)`](../../../tf/raw_ops/QueueDequeueUpTo.md): Dequeues `n` tuples of one or more tensors from the given queue. + +[`QueueDequeueUpToV2(...)`](../../../tf/raw_ops/QueueDequeueUpToV2.md): Dequeues `n` tuples of one or more tensors from the given queue. + +[`QueueDequeueV2(...)`](../../../tf/raw_ops/QueueDequeueV2.md): Dequeues a tuple of one or more tensors from the given queue. + +[`QueueEnqueue(...)`](../../../tf/raw_ops/QueueEnqueue.md): Enqueues a tuple of one or more tensors in the given queue. + +[`QueueEnqueueMany(...)`](../../../tf/raw_ops/QueueEnqueueMany.md): Enqueues zero or more tuples of one or more tensors in the given queue. + +[`QueueEnqueueManyV2(...)`](../../../tf/raw_ops/QueueEnqueueManyV2.md): Enqueues zero or more tuples of one or more tensors in the given queue. + +[`QueueEnqueueV2(...)`](../../../tf/raw_ops/QueueEnqueueV2.md): Enqueues a tuple of one or more tensors in the given queue. + +[`QueueIsClosed(...)`](../../../tf/raw_ops/QueueIsClosed.md): Returns true if queue is closed. + +[`QueueIsClosedV2(...)`](../../../tf/raw_ops/QueueIsClosedV2.md): Returns true if queue is closed. + +[`QueueSize(...)`](../../../tf/raw_ops/QueueSize.md): Computes the number of elements in the given queue. + +[`QueueSizeV2(...)`](../../../tf/raw_ops/QueueSizeV2.md): Computes the number of elements in the given queue. + +[`RFFT(...)`](../../../tf/raw_ops/RFFT.md): Real-valued fast Fourier transform. + +[`RFFT2D(...)`](../../../tf/raw_ops/RFFT2D.md): 2D real-valued fast Fourier transform. + +[`RFFT3D(...)`](../../../tf/raw_ops/RFFT3D.md): 3D real-valued fast Fourier transform. + +[`RGBToHSV(...)`](../../../tf/raw_ops/RGBToHSV.md): Converts one or more images from RGB to HSV. + +[`RaggedGather(...)`](../../../tf/raw_ops/RaggedGather.md): Gather ragged slices from `params` axis `0` according to `indices`. + +[`RaggedRange(...)`](../../../tf/raw_ops/RaggedRange.md): Returns a `RaggedTensor` containing the specified sequences of numbers. + +[`RaggedTensorFromVariant(...)`](../../../tf/raw_ops/RaggedTensorFromVariant.md): Decodes a `variant` Tensor into a `RaggedTensor`. + +[`RaggedTensorToSparse(...)`](../../../tf/raw_ops/RaggedTensorToSparse.md): Converts a `RaggedTensor` into a `SparseTensor` with the same values. + +[`RaggedTensorToTensor(...)`](../../../tf/raw_ops/RaggedTensorToTensor.md): Create a dense tensor from a ragged tensor, possibly altering its shape. + +[`RaggedTensorToVariant(...)`](../../../tf/raw_ops/RaggedTensorToVariant.md): Encodes a `RaggedTensor` into a `variant` Tensor. + +[`RandomCrop(...)`](../../../tf/raw_ops/RandomCrop.md): Randomly crop `image`. + +[`RandomDataset(...)`](../../../tf/raw_ops/RandomDataset.md): Creates a Dataset that returns pseudorandom numbers. + +[`RandomGamma(...)`](../../../tf/raw_ops/RandomGamma.md): Outputs random values from the Gamma distribution(s) described by alpha. + +[`RandomGammaGrad(...)`](../../../tf/raw_ops/RandomGammaGrad.md): Computes the derivative of a Gamma random sample w.r.t. `alpha`. 
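+
+A minimal sketch of the `RFFT` op listed above, assuming eager execution:
+
+```
+import tensorflow as tf
+
+frames = tf.constant([0.0, 1.0, 0.0, -1.0])
+# Real-to-complex FFT; the output has fft_length // 2 + 1 = 3 complex bins.
+spectrum = tf.raw_ops.RFFT(input=frames, fft_length=[4])
+```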
+ +[`RandomPoisson(...)`](../../../tf/raw_ops/RandomPoisson.md): Use RandomPoissonV2 instead. + +[`RandomPoissonV2(...)`](../../../tf/raw_ops/RandomPoissonV2.md): Outputs random values from the Poisson distribution(s) described by rate. + +[`RandomShuffle(...)`](../../../tf/raw_ops/RandomShuffle.md): Randomly shuffles a tensor along its first dimension. + +[`RandomShuffleQueue(...)`](../../../tf/raw_ops/RandomShuffleQueue.md): A queue that randomizes the order of elements. + +[`RandomShuffleQueueV2(...)`](../../../tf/raw_ops/RandomShuffleQueueV2.md): A queue that randomizes the order of elements. + +[`RandomStandardNormal(...)`](../../../tf/raw_ops/RandomStandardNormal.md): Outputs random values from a normal distribution. + +[`RandomUniform(...)`](../../../tf/raw_ops/RandomUniform.md): Outputs random values from a uniform distribution. + +[`RandomUniformInt(...)`](../../../tf/raw_ops/RandomUniformInt.md): Outputs random integers from a uniform distribution. + +[`Range(...)`](../../../tf/raw_ops/Range.md): Creates a sequence of numbers. + +[`RangeDataset(...)`](../../../tf/raw_ops/RangeDataset.md): Creates a dataset with a range of values. Corresponds to python's xrange. + +[`Rank(...)`](../../../tf/raw_ops/Rank.md): Returns the rank of a tensor. + +[`ReadFile(...)`](../../../tf/raw_ops/ReadFile.md): Reads and outputs the entire contents of the input filename. + +[`ReadVariableOp(...)`](../../../tf/raw_ops/ReadVariableOp.md): Reads the value of a variable. + +[`ReaderNumRecordsProduced(...)`](../../../tf/raw_ops/ReaderNumRecordsProduced.md): Returns the number of records this Reader has produced. + +[`ReaderNumRecordsProducedV2(...)`](../../../tf/raw_ops/ReaderNumRecordsProducedV2.md): Returns the number of records this Reader has produced. + +[`ReaderNumWorkUnitsCompleted(...)`](../../../tf/raw_ops/ReaderNumWorkUnitsCompleted.md): Returns the number of work units this Reader has finished processing. + +[`ReaderNumWorkUnitsCompletedV2(...)`](../../../tf/raw_ops/ReaderNumWorkUnitsCompletedV2.md): Returns the number of work units this Reader has finished processing. + +[`ReaderRead(...)`](../../../tf/raw_ops/ReaderRead.md): Returns the next record (key, value pair) produced by a Reader. + +[`ReaderReadUpTo(...)`](../../../tf/raw_ops/ReaderReadUpTo.md): Returns up to `num_records` (key, value) pairs produced by a Reader. + +[`ReaderReadUpToV2(...)`](../../../tf/raw_ops/ReaderReadUpToV2.md): Returns up to `num_records` (key, value) pairs produced by a Reader. + +[`ReaderReadV2(...)`](../../../tf/raw_ops/ReaderReadV2.md): Returns the next record (key, value pair) produced by a Reader. + +[`ReaderReset(...)`](../../../tf/raw_ops/ReaderReset.md): Restore a Reader to its initial clean state. + +[`ReaderResetV2(...)`](../../../tf/raw_ops/ReaderResetV2.md): Restore a Reader to its initial clean state. + +[`ReaderRestoreState(...)`](../../../tf/raw_ops/ReaderRestoreState.md): Restore a reader to a previously saved state. + +[`ReaderRestoreStateV2(...)`](../../../tf/raw_ops/ReaderRestoreStateV2.md): Restore a reader to a previously saved state. + +[`ReaderSerializeState(...)`](../../../tf/raw_ops/ReaderSerializeState.md): Produce a string tensor that encodes the state of a Reader. + +[`ReaderSerializeStateV2(...)`](../../../tf/raw_ops/ReaderSerializeStateV2.md): Produce a string tensor that encodes the state of a Reader. + +[`Real(...)`](../../../tf/raw_ops/Real.md): Returns the real part of a complex number. 
+ +[`RealDiv(...)`](../../../tf/raw_ops/RealDiv.md): Returns x / y element-wise for real types. + +[`RebatchDataset(...)`](../../../tf/raw_ops/RebatchDataset.md): Creates a dataset that changes the batch size. + +[`Reciprocal(...)`](../../../tf/raw_ops/Reciprocal.md): Computes the reciprocal of x element-wise. + +[`ReciprocalGrad(...)`](../../../tf/raw_ops/ReciprocalGrad.md): Computes the gradient for the inverse of `x` wrt its input. + +[`RecordInput(...)`](../../../tf/raw_ops/RecordInput.md): Emits randomized records. + +[`Recv(...)`](../../../tf/raw_ops/Recv.md): Receives the named tensor from send_device on recv_device. + +[`RecvTPUEmbeddingActivations(...)`](../../../tf/raw_ops/RecvTPUEmbeddingActivations.md): An op that receives embedding activations on the TPU. + +[`ReduceDataset(...)`](../../../tf/raw_ops/ReduceDataset.md): Reduces the input dataset to a singleton using a reduce function. + +[`ReduceJoin(...)`](../../../tf/raw_ops/ReduceJoin.md): Joins a string Tensor across the given dimensions. + +[`RefEnter(...)`](../../../tf/raw_ops/RefEnter.md): Creates or finds a child frame, and makes `data` available to the child frame. + +[`RefExit(...)`](../../../tf/raw_ops/RefExit.md): Exits the current frame to its parent frame. + +[`RefIdentity(...)`](../../../tf/raw_ops/RefIdentity.md): Return the same ref tensor as the input ref tensor. + +[`RefMerge(...)`](../../../tf/raw_ops/RefMerge.md): Forwards the value of an available tensor from `inputs` to `output`. + +[`RefNextIteration(...)`](../../../tf/raw_ops/RefNextIteration.md): Makes its input available to the next iteration. + +[`RefSelect(...)`](../../../tf/raw_ops/RefSelect.md): Forwards the `index`th element of `inputs` to `output`. + +[`RefSwitch(...)`](../../../tf/raw_ops/RefSwitch.md): Forwards the ref tensor `data` to the output port determined by `pred`. + +[`RegexFullMatch(...)`](../../../tf/raw_ops/RegexFullMatch.md): Check if the input matches the regex pattern. + +[`RegexReplace(...)`](../../../tf/raw_ops/RegexReplace.md): Replaces matches of the `pattern` regular expression in `input` with the + +[`Relu(...)`](../../../tf/raw_ops/Relu.md): Computes rectified linear: `max(features, 0)`. + +[`Relu6(...)`](../../../tf/raw_ops/Relu6.md): Computes rectified linear 6: `min(max(features, 0), 6)`. + +[`Relu6Grad(...)`](../../../tf/raw_ops/Relu6Grad.md): Computes rectified linear 6 gradients for a Relu6 operation. + +[`ReluGrad(...)`](../../../tf/raw_ops/ReluGrad.md): Computes rectified linear gradients for a Relu operation. + +[`RemoteCall(...)`](../../../tf/raw_ops/RemoteCall.md): Runs function `f` on a remote device indicated by `target`. + +[`RepeatDataset(...)`](../../../tf/raw_ops/RepeatDataset.md): Creates a dataset that emits the outputs of `input_dataset` `count` times. + +[`RequantizationRange(...)`](../../../tf/raw_ops/RequantizationRange.md): Computes a range that covers the actual values present in a quantized tensor. + +[`RequantizationRangePerChannel(...)`](../../../tf/raw_ops/RequantizationRangePerChannel.md): Computes requantization range per channel. + +[`Requantize(...)`](../../../tf/raw_ops/Requantize.md): Converts the quantized `input` tensor into a lower-precision `output`. + +[`RequantizePerChannel(...)`](../../../tf/raw_ops/RequantizePerChannel.md): Requantizes input with min and max values known per channel. + +[`Reshape(...)`](../../../tf/raw_ops/Reshape.md): Reshapes a tensor. + +[`ResizeArea(...)`](../../../tf/raw_ops/ResizeArea.md): Resize `images` to `size` using area interpolation. 
+ +[`ResizeBicubic(...)`](../../../tf/raw_ops/ResizeBicubic.md): Resize `images` to `size` using bicubic interpolation. + +[`ResizeBicubicGrad(...)`](../../../tf/raw_ops/ResizeBicubicGrad.md): Computes the gradient of bicubic interpolation. + +[`ResizeBilinear(...)`](../../../tf/raw_ops/ResizeBilinear.md): Resize `images` to `size` using bilinear interpolation. + +[`ResizeBilinearGrad(...)`](../../../tf/raw_ops/ResizeBilinearGrad.md): Computes the gradient of bilinear interpolation. + +[`ResizeNearestNeighbor(...)`](../../../tf/raw_ops/ResizeNearestNeighbor.md): Resize `images` to `size` using nearest neighbor interpolation. + +[`ResizeNearestNeighborGrad(...)`](../../../tf/raw_ops/ResizeNearestNeighborGrad.md): Computes the gradient of nearest neighbor interpolation. + +[`ResourceAccumulatorApplyGradient(...)`](../../../tf/raw_ops/ResourceAccumulatorApplyGradient.md): Applies a gradient to a given accumulator. + +[`ResourceAccumulatorNumAccumulated(...)`](../../../tf/raw_ops/ResourceAccumulatorNumAccumulated.md): Returns the number of gradients aggregated in the given accumulators. + +[`ResourceAccumulatorSetGlobalStep(...)`](../../../tf/raw_ops/ResourceAccumulatorSetGlobalStep.md): Updates the accumulator with a new value for global_step. + +[`ResourceAccumulatorTakeGradient(...)`](../../../tf/raw_ops/ResourceAccumulatorTakeGradient.md): Extracts the average gradient in the given ConditionalAccumulator. + +[`ResourceApplyAdaMax(...)`](../../../tf/raw_ops/ResourceApplyAdaMax.md): Update '*var' according to the AdaMax algorithm. + +[`ResourceApplyAdadelta(...)`](../../../tf/raw_ops/ResourceApplyAdadelta.md): Update '*var' according to the adadelta scheme. + +[`ResourceApplyAdagrad(...)`](../../../tf/raw_ops/ResourceApplyAdagrad.md): Update '*var' according to the adagrad scheme. + +[`ResourceApplyAdagradDA(...)`](../../../tf/raw_ops/ResourceApplyAdagradDA.md): Update '*var' according to the proximal adagrad scheme. + +[`ResourceApplyAdagradV2(...)`](../../../tf/raw_ops/ResourceApplyAdagradV2.md): Update '*var' according to the adagrad scheme. + +[`ResourceApplyAdam(...)`](../../../tf/raw_ops/ResourceApplyAdam.md): Update '*var' according to the Adam algorithm. + +[`ResourceApplyAdamWithAmsgrad(...)`](../../../tf/raw_ops/ResourceApplyAdamWithAmsgrad.md): Update '*var' according to the Adam algorithm. + +[`ResourceApplyAddSign(...)`](../../../tf/raw_ops/ResourceApplyAddSign.md): Update '*var' according to the AddSign update. + +[`ResourceApplyCenteredRMSProp(...)`](../../../tf/raw_ops/ResourceApplyCenteredRMSProp.md): Update '*var' according to the centered RMSProp algorithm. + +[`ResourceApplyFtrl(...)`](../../../tf/raw_ops/ResourceApplyFtrl.md): Update '*var' according to the Ftrl-proximal scheme. + +[`ResourceApplyFtrlV2(...)`](../../../tf/raw_ops/ResourceApplyFtrlV2.md): Update '*var' according to the Ftrl-proximal scheme. + +[`ResourceApplyGradientDescent(...)`](../../../tf/raw_ops/ResourceApplyGradientDescent.md): Update '*var' by subtracting 'alpha' * 'delta' from it. + +[`ResourceApplyKerasMomentum(...)`](../../../tf/raw_ops/ResourceApplyKerasMomentum.md): Update '*var' according to the momentum scheme. + +[`ResourceApplyMomentum(...)`](../../../tf/raw_ops/ResourceApplyMomentum.md): Update '*var' according to the momentum scheme. Set use_nesterov = True if you + +[`ResourceApplyPowerSign(...)`](../../../tf/raw_ops/ResourceApplyPowerSign.md): Update '*var' according to the AddSign update. 
+ +[`ResourceApplyProximalAdagrad(...)`](../../../tf/raw_ops/ResourceApplyProximalAdagrad.md): Update '*var' and '*accum' according to FOBOS with Adagrad learning rate. + +[`ResourceApplyProximalGradientDescent(...)`](../../../tf/raw_ops/ResourceApplyProximalGradientDescent.md): Update '*var' as FOBOS algorithm with fixed learning rate. + +[`ResourceApplyRMSProp(...)`](../../../tf/raw_ops/ResourceApplyRMSProp.md): Update '*var' according to the RMSProp algorithm. + +[`ResourceConditionalAccumulator(...)`](../../../tf/raw_ops/ResourceConditionalAccumulator.md): A conditional accumulator for aggregating gradients. + +[`ResourceCountUpTo(...)`](../../../tf/raw_ops/ResourceCountUpTo.md): Increments variable pointed to by 'resource' until it reaches 'limit'. + +[`ResourceGather(...)`](../../../tf/raw_ops/ResourceGather.md): Gather slices from the variable pointed to by `resource` according to `indices`. + +[`ResourceGatherNd(...)`](../../../tf/raw_ops/ResourceGatherNd.md) + +[`ResourceScatterAdd(...)`](../../../tf/raw_ops/ResourceScatterAdd.md): Adds sparse updates to the variable referenced by `resource`. + +[`ResourceScatterDiv(...)`](../../../tf/raw_ops/ResourceScatterDiv.md): Divides sparse updates into the variable referenced by `resource`. + +[`ResourceScatterMax(...)`](../../../tf/raw_ops/ResourceScatterMax.md): Reduces sparse updates into the variable referenced by `resource` using the `max` operation. + +[`ResourceScatterMin(...)`](../../../tf/raw_ops/ResourceScatterMin.md): Reduces sparse updates into the variable referenced by `resource` using the `min` operation. + +[`ResourceScatterMul(...)`](../../../tf/raw_ops/ResourceScatterMul.md): Multiplies sparse updates into the variable referenced by `resource`. + +[`ResourceScatterNdAdd(...)`](../../../tf/raw_ops/ResourceScatterNdAdd.md): Applies sparse addition to individual values or slices in a Variable. + +[`ResourceScatterNdSub(...)`](../../../tf/raw_ops/ResourceScatterNdSub.md): Applies sparse subtraction to individual values or slices in a Variable. + +[`ResourceScatterNdUpdate(...)`](../../../tf/raw_ops/ResourceScatterNdUpdate.md): Applies sparse `updates` to individual values or slices within a given + +[`ResourceScatterSub(...)`](../../../tf/raw_ops/ResourceScatterSub.md): Subtracts sparse updates from the variable referenced by `resource`. + +[`ResourceScatterUpdate(...)`](../../../tf/raw_ops/ResourceScatterUpdate.md): Assigns sparse updates to the variable referenced by `resource`. + +[`ResourceSparseApplyAdadelta(...)`](../../../tf/raw_ops/ResourceSparseApplyAdadelta.md): var: Should be from a Variable(). + +[`ResourceSparseApplyAdagrad(...)`](../../../tf/raw_ops/ResourceSparseApplyAdagrad.md): Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + +[`ResourceSparseApplyAdagradDA(...)`](../../../tf/raw_ops/ResourceSparseApplyAdagradDA.md): Update entries in '*var' and '*accum' according to the proximal adagrad scheme. + +[`ResourceSparseApplyAdagradV2(...)`](../../../tf/raw_ops/ResourceSparseApplyAdagradV2.md): Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + +[`ResourceSparseApplyCenteredRMSProp(...)`](../../../tf/raw_ops/ResourceSparseApplyCenteredRMSProp.md): Update '*var' according to the centered RMSProp algorithm. + +[`ResourceSparseApplyFtrl(...)`](../../../tf/raw_ops/ResourceSparseApplyFtrl.md): Update relevant entries in '*var' according to the Ftrl-proximal scheme. 
+ +[`ResourceSparseApplyFtrlV2(...)`](../../../tf/raw_ops/ResourceSparseApplyFtrlV2.md): Update relevant entries in '*var' according to the Ftrl-proximal scheme. + +[`ResourceSparseApplyKerasMomentum(...)`](../../../tf/raw_ops/ResourceSparseApplyKerasMomentum.md): Update relevant entries in '*var' and '*accum' according to the momentum scheme. + +[`ResourceSparseApplyMomentum(...)`](../../../tf/raw_ops/ResourceSparseApplyMomentum.md): Update relevant entries in '*var' and '*accum' according to the momentum scheme. + +[`ResourceSparseApplyProximalAdagrad(...)`](../../../tf/raw_ops/ResourceSparseApplyProximalAdagrad.md): Sparse update entries in '*var' and '*accum' according to FOBOS algorithm. + +[`ResourceSparseApplyProximalGradientDescent(...)`](../../../tf/raw_ops/ResourceSparseApplyProximalGradientDescent.md): Sparse update '*var' as FOBOS algorithm with fixed learning rate. + +[`ResourceSparseApplyRMSProp(...)`](../../../tf/raw_ops/ResourceSparseApplyRMSProp.md): Update '*var' according to the RMSProp algorithm. + +[`ResourceStridedSliceAssign(...)`](../../../tf/raw_ops/ResourceStridedSliceAssign.md): Assign `value` to the sliced l-value reference of `ref`. + +[`Restore(...)`](../../../tf/raw_ops/Restore.md): Restores a tensor from checkpoint files. + +[`RestoreSlice(...)`](../../../tf/raw_ops/RestoreSlice.md): Restores a tensor from checkpoint files. + +[`RestoreV2(...)`](../../../tf/raw_ops/RestoreV2.md): Restores tensors from a V2 checkpoint. + +[`RetrieveTPUEmbeddingADAMParameters(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingADAMParameters.md): Retrieve ADAM embedding parameters. + +[`RetrieveTPUEmbeddingADAMParametersGradAccumDebug(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingADAMParametersGradAccumDebug.md): Retrieve ADAM embedding parameters with debug support. + +[`RetrieveTPUEmbeddingAdadeltaParameters(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParameters.md): Retrieve Adadelta embedding parameters. + +[`RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug.md): Retrieve Adadelta embedding parameters with debug support. + +[`RetrieveTPUEmbeddingAdagradParameters(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingAdagradParameters.md): Retrieve Adagrad embedding parameters. + +[`RetrieveTPUEmbeddingAdagradParametersGradAccumDebug(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingAdagradParametersGradAccumDebug.md): Retrieve Adagrad embedding parameters with debug support. + +[`RetrieveTPUEmbeddingCenteredRMSPropParameters(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingCenteredRMSPropParameters.md): Retrieve centered RMSProp embedding parameters. + +[`RetrieveTPUEmbeddingFTRLParameters(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingFTRLParameters.md): Retrieve FTRL embedding parameters. + +[`RetrieveTPUEmbeddingFTRLParametersGradAccumDebug(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.md): Retrieve FTRL embedding parameters with debug support. + +[`RetrieveTPUEmbeddingMDLAdagradLightParameters(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingMDLAdagradLightParameters.md): Retrieve MDL Adagrad Light embedding parameters. + +[`RetrieveTPUEmbeddingMomentumParameters(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingMomentumParameters.md): Retrieve Momentum embedding parameters. 
+
+[`RetrieveTPUEmbeddingMomentumParametersGradAccumDebug(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingMomentumParametersGradAccumDebug.md): Retrieve Momentum embedding parameters with debug support.
+
+[`RetrieveTPUEmbeddingProximalAdagradParameters(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParameters.md): Retrieve proximal Adagrad embedding parameters.
+
+[`RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug.md): Retrieve proximal Adagrad embedding parameters with debug support.
+
+[`RetrieveTPUEmbeddingRMSPropParameters(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingRMSPropParameters.md): Retrieve RMSProp embedding parameters.
+
+[`RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug.md): Retrieve RMSProp embedding parameters with debug support.
+
+[`RetrieveTPUEmbeddingStochasticGradientDescentParameters(...)`](../../../tf/raw_ops/RetrieveTPUEmbeddingStochasticGradientDescentParameters.md): Retrieve SGD embedding parameters.
+
+[`Reverse(...)`](../../../tf/raw_ops/Reverse.md): Reverses specific dimensions of a tensor.
+
+[`ReverseSequence(...)`](../../../tf/raw_ops/ReverseSequence.md): Reverses variable length slices.
+
+[`ReverseV2(...)`](../../../tf/raw_ops/ReverseV2.md): Reverses specific dimensions of a tensor.
+
+[`RightShift(...)`](../../../tf/raw_ops/RightShift.md): Elementwise computes the bitwise right-shift of `x` and `y`.
+
+[`Rint(...)`](../../../tf/raw_ops/Rint.md): Returns element-wise integer closest to x.
+
+[`RngSkip(...)`](../../../tf/raw_ops/RngSkip.md): Advance the counter of a counter-based RNG.
+
+[`Roll(...)`](../../../tf/raw_ops/Roll.md): Rolls the elements of a tensor along an axis.
+
+[`Round(...)`](../../../tf/raw_ops/Round.md): Rounds the values of a tensor to the nearest integer, element-wise.
+
+[`Rsqrt(...)`](../../../tf/raw_ops/Rsqrt.md): Computes reciprocal of square root of x element-wise.
+
+[`RsqrtGrad(...)`](../../../tf/raw_ops/RsqrtGrad.md): Computes the gradient for the rsqrt of `x` wrt its input.
+
+[`SampleDistortedBoundingBox(...)`](../../../tf/raw_ops/SampleDistortedBoundingBox.md): Generate a single randomly distorted bounding box for an image.
+
+[`SampleDistortedBoundingBoxV2(...)`](../../../tf/raw_ops/SampleDistortedBoundingBoxV2.md): Generate a single randomly distorted bounding box for an image.
+
+[`SamplingDataset(...)`](../../../tf/raw_ops/SamplingDataset.md): Creates a dataset that takes a Bernoulli sample of the contents of another dataset.
+
+[`Save(...)`](../../../tf/raw_ops/Save.md): Saves the input tensors to disk.
+
+[`SaveSlices(...)`](../../../tf/raw_ops/SaveSlices.md): Saves input tensors slices to disk.
+
+[`SaveV2(...)`](../../../tf/raw_ops/SaveV2.md): Saves tensors in V2 checkpoint format.
+
+[`ScalarSummary(...)`](../../../tf/raw_ops/ScalarSummary.md): Outputs a `Summary` protocol buffer with scalar values.
+
+[`ScaleAndTranslate(...)`](../../../tf/raw_ops/ScaleAndTranslate.md)
+
+[`ScaleAndTranslateGrad(...)`](../../../tf/raw_ops/ScaleAndTranslateGrad.md)
+
+[`ScanDataset(...)`](../../../tf/raw_ops/ScanDataset.md): Creates a dataset that successively reduces `f` over the elements of `input_dataset`.
+
+[`ScatterAdd(...)`](../../../tf/raw_ops/ScatterAdd.md): Adds sparse updates to a variable reference.
+
+[`ScatterDiv(...)`](../../../tf/raw_ops/ScatterDiv.md): Divides a variable reference by sparse updates.
+ +[`ScatterMax(...)`](../../../tf/raw_ops/ScatterMax.md): Reduces sparse updates into a variable reference using the `max` operation. + +[`ScatterMin(...)`](../../../tf/raw_ops/ScatterMin.md): Reduces sparse updates into a variable reference using the `min` operation. + +[`ScatterMul(...)`](../../../tf/raw_ops/ScatterMul.md): Multiplies sparse updates into a variable reference. + +[`ScatterNd(...)`](../../../tf/raw_ops/ScatterNd.md): Scatter `updates` into a new tensor according to `indices`. + +[`ScatterNdAdd(...)`](../../../tf/raw_ops/ScatterNdAdd.md): Applies sparse addition to individual values or slices in a Variable. + +[`ScatterNdNonAliasingAdd(...)`](../../../tf/raw_ops/ScatterNdNonAliasingAdd.md): Applies sparse addition to `input` using individual values or slices + +[`ScatterNdSub(...)`](../../../tf/raw_ops/ScatterNdSub.md): Applies sparse subtraction to individual values or slices in a Variable. + +[`ScatterNdUpdate(...)`](../../../tf/raw_ops/ScatterNdUpdate.md): Applies sparse `updates` to individual values or slices within a given + +[`ScatterSub(...)`](../../../tf/raw_ops/ScatterSub.md): Subtracts sparse updates to a variable reference. + +[`ScatterUpdate(...)`](../../../tf/raw_ops/ScatterUpdate.md): Applies sparse updates to a variable reference. + +[`SdcaFprint(...)`](../../../tf/raw_ops/SdcaFprint.md): Computes fingerprints of the input strings. + +[`SdcaOptimizer(...)`](../../../tf/raw_ops/SdcaOptimizer.md): Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for + +[`SdcaOptimizerV2(...)`](../../../tf/raw_ops/SdcaOptimizerV2.md): Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for + +[`SdcaShrinkL1(...)`](../../../tf/raw_ops/SdcaShrinkL1.md): Applies L1 regularization shrink step on the parameters. + +[`SegmentMax(...)`](../../../tf/raw_ops/SegmentMax.md): Computes the maximum along segments of a tensor. + +[`SegmentMean(...)`](../../../tf/raw_ops/SegmentMean.md): Computes the mean along segments of a tensor. + +[`SegmentMin(...)`](../../../tf/raw_ops/SegmentMin.md): Computes the minimum along segments of a tensor. + +[`SegmentProd(...)`](../../../tf/raw_ops/SegmentProd.md): Computes the product along segments of a tensor. + +[`SegmentSum(...)`](../../../tf/raw_ops/SegmentSum.md): Computes the sum along segments of a tensor. + +[`Select(...)`](../../../tf/raw_ops/Select.md): Selects elements from `x` or `y`, depending on `condition`. + +[`SelectV2(...)`](../../../tf/raw_ops/SelectV2.md) + +[`SelfAdjointEig(...)`](../../../tf/raw_ops/SelfAdjointEig.md): Computes the Eigen Decomposition of a batch of square self-adjoint matrices. + +[`SelfAdjointEigV2(...)`](../../../tf/raw_ops/SelfAdjointEigV2.md): Computes the eigen decomposition of one or more square self-adjoint matrices. + +[`Selu(...)`](../../../tf/raw_ops/Selu.md): Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)` + +[`SeluGrad(...)`](../../../tf/raw_ops/SeluGrad.md): Computes gradients for the scaled exponential linear (Selu) operation. + +[`Send(...)`](../../../tf/raw_ops/Send.md): Sends the named tensor from send_device to recv_device. + +[`SendTPUEmbeddingGradients(...)`](../../../tf/raw_ops/SendTPUEmbeddingGradients.md): Performs gradient updates of embedding tables. + +[`SerializeIterator(...)`](../../../tf/raw_ops/SerializeIterator.md): Converts the given `resource_handle` representing an iterator to a variant tensor. 
+ +[`SerializeManySparse(...)`](../../../tf/raw_ops/SerializeManySparse.md): Serialize an `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor` object. + +[`SerializeSparse(...)`](../../../tf/raw_ops/SerializeSparse.md): Serialize a `SparseTensor` into a `[3]` `Tensor` object. + +[`SerializeTensor(...)`](../../../tf/raw_ops/SerializeTensor.md): Transforms a Tensor into a serialized TensorProto proto. + +[`SetSize(...)`](../../../tf/raw_ops/SetSize.md): Number of unique elements along last dimension of input `set`. + +[`SetStatsAggregatorDataset(...)`](../../../tf/raw_ops/SetStatsAggregatorDataset.md) + +[`Shape(...)`](../../../tf/raw_ops/Shape.md): Returns the shape of a tensor. + +[`ShapeN(...)`](../../../tf/raw_ops/ShapeN.md): Returns shape of tensors. + +[`ShardDataset(...)`](../../../tf/raw_ops/ShardDataset.md): Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +[`ShardedFilename(...)`](../../../tf/raw_ops/ShardedFilename.md): Generate a sharded filename. The filename is printf formatted as + +[`ShardedFilespec(...)`](../../../tf/raw_ops/ShardedFilespec.md): Generate a glob pattern matching all sharded file names. + +[`ShuffleAndRepeatDataset(...)`](../../../tf/raw_ops/ShuffleAndRepeatDataset.md): Creates a dataset that shuffles and repeats elements from `input_dataset` + +[`ShuffleDataset(...)`](../../../tf/raw_ops/ShuffleDataset.md): Creates a dataset that shuffles elements from `input_dataset` pseudorandomly. + +[`ShuffleDatasetV2(...)`](../../../tf/raw_ops/ShuffleDatasetV2.md) + +[`ShutdownDistributedTPU(...)`](../../../tf/raw_ops/ShutdownDistributedTPU.md): Shuts down a running distributed TPU system. + +[`Sigmoid(...)`](../../../tf/raw_ops/Sigmoid.md): Computes sigmoid of `x` element-wise. + +[`SigmoidGrad(...)`](../../../tf/raw_ops/SigmoidGrad.md): Computes the gradient of the sigmoid of `x` wrt its input. + +[`Sign(...)`](../../../tf/raw_ops/Sign.md): Returns an element-wise indication of the sign of a number. + +[`Sin(...)`](../../../tf/raw_ops/Sin.md): Computes sine of x element-wise. + +[`Sinh(...)`](../../../tf/raw_ops/Sinh.md): Computes hyperbolic sine of x element-wise. + +[`Size(...)`](../../../tf/raw_ops/Size.md): Returns the size of a tensor. + +[`SkipDataset(...)`](../../../tf/raw_ops/SkipDataset.md): Creates a dataset that skips `count` elements from the `input_dataset`. + +[`SleepDataset(...)`](../../../tf/raw_ops/SleepDataset.md) + +[`Slice(...)`](../../../tf/raw_ops/Slice.md): Return a slice from 'input'. + +[`SlidingWindowDataset(...)`](../../../tf/raw_ops/SlidingWindowDataset.md): Creates a dataset that passes a sliding window over `input_dataset`. + +[`Snapshot(...)`](../../../tf/raw_ops/Snapshot.md): Returns a copy of the input tensor. + +[`SnapshotDataset(...)`](../../../tf/raw_ops/SnapshotDataset.md): Creates a dataset that will write to / read from a snapshot. + +[`SobolSample(...)`](../../../tf/raw_ops/SobolSample.md): Generates points from the Sobol sequence. + +[`Softmax(...)`](../../../tf/raw_ops/Softmax.md): Computes softmax activations. + +[`SoftmaxCrossEntropyWithLogits(...)`](../../../tf/raw_ops/SoftmaxCrossEntropyWithLogits.md): Computes softmax cross entropy cost and gradients to backpropagate. + +[`Softplus(...)`](../../../tf/raw_ops/Softplus.md): Computes softplus: `log(exp(features) + 1)`. + +[`SoftplusGrad(...)`](../../../tf/raw_ops/SoftplusGrad.md): Computes softplus gradients for a softplus operation. + +[`Softsign(...)`](../../../tf/raw_ops/Softsign.md): Computes softsign: `features / (abs(features) + 1)`. 
+ +[`SoftsignGrad(...)`](../../../tf/raw_ops/SoftsignGrad.md): Computes softsign gradients for a softsign operation. + +[`SpaceToBatch(...)`](../../../tf/raw_ops/SpaceToBatch.md): SpaceToBatch for 4-D tensors of type T. + +[`SpaceToBatchND(...)`](../../../tf/raw_ops/SpaceToBatchND.md): SpaceToBatch for N-D tensors of type T. + +[`SpaceToDepth(...)`](../../../tf/raw_ops/SpaceToDepth.md): SpaceToDepth for tensors of type T. + +[`SparseAccumulatorApplyGradient(...)`](../../../tf/raw_ops/SparseAccumulatorApplyGradient.md): Applies a sparse gradient to a given accumulator. + +[`SparseAccumulatorTakeGradient(...)`](../../../tf/raw_ops/SparseAccumulatorTakeGradient.md): Extracts the average sparse gradient in a SparseConditionalAccumulator. + +[`SparseAdd(...)`](../../../tf/raw_ops/SparseAdd.md): Adds two `SparseTensor` objects to produce another `SparseTensor`. + +[`SparseAddGrad(...)`](../../../tf/raw_ops/SparseAddGrad.md): The gradient operator for the SparseAdd op. + +[`SparseApplyAdadelta(...)`](../../../tf/raw_ops/SparseApplyAdadelta.md): var: Should be from a Variable(). + +[`SparseApplyAdagrad(...)`](../../../tf/raw_ops/SparseApplyAdagrad.md): Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + +[`SparseApplyAdagradDA(...)`](../../../tf/raw_ops/SparseApplyAdagradDA.md): Update entries in '*var' and '*accum' according to the proximal adagrad scheme. + +[`SparseApplyAdagradV2(...)`](../../../tf/raw_ops/SparseApplyAdagradV2.md): Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + +[`SparseApplyCenteredRMSProp(...)`](../../../tf/raw_ops/SparseApplyCenteredRMSProp.md): Update '*var' according to the centered RMSProp algorithm. + +[`SparseApplyFtrl(...)`](../../../tf/raw_ops/SparseApplyFtrl.md): Update relevant entries in '*var' according to the Ftrl-proximal scheme. + +[`SparseApplyFtrlV2(...)`](../../../tf/raw_ops/SparseApplyFtrlV2.md): Update relevant entries in '*var' according to the Ftrl-proximal scheme. + +[`SparseApplyMomentum(...)`](../../../tf/raw_ops/SparseApplyMomentum.md): Update relevant entries in '*var' and '*accum' according to the momentum scheme. + +[`SparseApplyProximalAdagrad(...)`](../../../tf/raw_ops/SparseApplyProximalAdagrad.md): Sparse update entries in '*var' and '*accum' according to FOBOS algorithm. + +[`SparseApplyProximalGradientDescent(...)`](../../../tf/raw_ops/SparseApplyProximalGradientDescent.md): Sparse update '*var' as FOBOS algorithm with fixed learning rate. + +[`SparseApplyRMSProp(...)`](../../../tf/raw_ops/SparseApplyRMSProp.md): Update '*var' according to the RMSProp algorithm. + +[`SparseConcat(...)`](../../../tf/raw_ops/SparseConcat.md): Concatenates a list of `SparseTensor` along the specified dimension. + +[`SparseConditionalAccumulator(...)`](../../../tf/raw_ops/SparseConditionalAccumulator.md): A conditional accumulator for aggregating sparse gradients. + +[`SparseCross(...)`](../../../tf/raw_ops/SparseCross.md): Generates sparse cross from a list of sparse and dense tensors. + +[`SparseDenseCwiseAdd(...)`](../../../tf/raw_ops/SparseDenseCwiseAdd.md): Adds up a SparseTensor and a dense Tensor, using these special rules: + +[`SparseDenseCwiseDiv(...)`](../../../tf/raw_ops/SparseDenseCwiseDiv.md): Component-wise divides a SparseTensor by a dense Tensor. + +[`SparseDenseCwiseMul(...)`](../../../tf/raw_ops/SparseDenseCwiseMul.md): Component-wise multiplies a SparseTensor by a dense Tensor. 
+ +[`SparseFillEmptyRows(...)`](../../../tf/raw_ops/SparseFillEmptyRows.md): Fills empty rows in the input 2-D `SparseTensor` with a default value. + +[`SparseFillEmptyRowsGrad(...)`](../../../tf/raw_ops/SparseFillEmptyRowsGrad.md): The gradient of SparseFillEmptyRows. + +[`SparseMatMul(...)`](../../../tf/raw_ops/SparseMatMul.md): Multiply matrix "a" by matrix "b". + +[`SparseMatrixAdd(...)`](../../../tf/raw_ops/SparseMatrixAdd.md): Sparse addition of two CSR matrices, C = alpha * A + beta * B. + +[`SparseMatrixMatMul(...)`](../../../tf/raw_ops/SparseMatrixMatMul.md): Matrix-multiplies a sparse matrix with a dense matrix. + +[`SparseMatrixMul(...)`](../../../tf/raw_ops/SparseMatrixMul.md): Element-wise multiplication of a sparse matrix with a dense tensor. + +[`SparseMatrixNNZ(...)`](../../../tf/raw_ops/SparseMatrixNNZ.md): Returns the number of nonzeroes of `sparse_matrix`. + +[`SparseMatrixOrderingAMD(...)`](../../../tf/raw_ops/SparseMatrixOrderingAMD.md): Computes the Approximate Minimum Degree (AMD) ordering of `input`. + +[`SparseMatrixSoftmax(...)`](../../../tf/raw_ops/SparseMatrixSoftmax.md): Calculates the softmax of a CSRSparseMatrix. + +[`SparseMatrixSoftmaxGrad(...)`](../../../tf/raw_ops/SparseMatrixSoftmaxGrad.md): Calculates the gradient of the SparseMatrixSoftmax op. + +[`SparseMatrixSparseCholesky(...)`](../../../tf/raw_ops/SparseMatrixSparseCholesky.md): Computes the sparse Cholesky decomposition of `input`. + +[`SparseMatrixSparseMatMul(...)`](../../../tf/raw_ops/SparseMatrixSparseMatMul.md): Sparse-matrix-multiplies two CSR matrices `a` and `b`. + +[`SparseMatrixTranspose(...)`](../../../tf/raw_ops/SparseMatrixTranspose.md): Transposes the inner (matrix) dimensions of a CSRSparseMatrix. + +[`SparseMatrixZeros(...)`](../../../tf/raw_ops/SparseMatrixZeros.md): Creates an all-zeros CSRSparseMatrix with shape `dense_shape`. + +[`SparseReduceMax(...)`](../../../tf/raw_ops/SparseReduceMax.md): Computes the max of elements across dimensions of a SparseTensor. + +[`SparseReduceMaxSparse(...)`](../../../tf/raw_ops/SparseReduceMaxSparse.md): Computes the max of elements across dimensions of a SparseTensor. + +[`SparseReduceSum(...)`](../../../tf/raw_ops/SparseReduceSum.md): Computes the sum of elements across dimensions of a SparseTensor. + +[`SparseReduceSumSparse(...)`](../../../tf/raw_ops/SparseReduceSumSparse.md): Computes the sum of elements across dimensions of a SparseTensor. + +[`SparseReorder(...)`](../../../tf/raw_ops/SparseReorder.md): Reorders a SparseTensor into the canonical, row-major ordering. + +[`SparseReshape(...)`](../../../tf/raw_ops/SparseReshape.md): Reshapes a SparseTensor to represent values in a new dense shape. + +[`SparseSegmentMean(...)`](../../../tf/raw_ops/SparseSegmentMean.md): Computes the mean along sparse segments of a tensor. + +[`SparseSegmentMeanGrad(...)`](../../../tf/raw_ops/SparseSegmentMeanGrad.md): Computes gradients for SparseSegmentMean. + +[`SparseSegmentMeanWithNumSegments(...)`](../../../tf/raw_ops/SparseSegmentMeanWithNumSegments.md): Computes the mean along sparse segments of a tensor. + +[`SparseSegmentSqrtN(...)`](../../../tf/raw_ops/SparseSegmentSqrtN.md): Computes the sum along sparse segments of a tensor divided by the sqrt of N. + +[`SparseSegmentSqrtNGrad(...)`](../../../tf/raw_ops/SparseSegmentSqrtNGrad.md): Computes gradients for SparseSegmentSqrtN. 
+ +[`SparseSegmentSqrtNWithNumSegments(...)`](../../../tf/raw_ops/SparseSegmentSqrtNWithNumSegments.md): Computes the sum along sparse segments of a tensor divided by the sqrt of N. + +[`SparseSegmentSum(...)`](../../../tf/raw_ops/SparseSegmentSum.md): Computes the sum along sparse segments of a tensor. + +[`SparseSegmentSumWithNumSegments(...)`](../../../tf/raw_ops/SparseSegmentSumWithNumSegments.md): Computes the sum along sparse segments of a tensor. + +[`SparseSlice(...)`](../../../tf/raw_ops/SparseSlice.md): Slice a `SparseTensor` based on the `start` and `size`. + +[`SparseSliceGrad(...)`](../../../tf/raw_ops/SparseSliceGrad.md): The gradient operator for the SparseSlice op. + +[`SparseSoftmax(...)`](../../../tf/raw_ops/SparseSoftmax.md): Applies softmax to a batched N-D `SparseTensor`. + +[`SparseSoftmaxCrossEntropyWithLogits(...)`](../../../tf/raw_ops/SparseSoftmaxCrossEntropyWithLogits.md): Computes softmax cross entropy cost and gradients to backpropagate. + +[`SparseSparseMaximum(...)`](../../../tf/raw_ops/SparseSparseMaximum.md): Returns the element-wise max of two SparseTensors. + +[`SparseSparseMinimum(...)`](../../../tf/raw_ops/SparseSparseMinimum.md): Returns the element-wise min of two SparseTensors. + +[`SparseSplit(...)`](../../../tf/raw_ops/SparseSplit.md): Split a `SparseTensor` into `num_split` tensors along one dimension. + +[`SparseTensorDenseAdd(...)`](../../../tf/raw_ops/SparseTensorDenseAdd.md): Adds up a `SparseTensor` and a dense `Tensor`, producing a dense `Tensor`. + +[`SparseTensorDenseMatMul(...)`](../../../tf/raw_ops/SparseTensorDenseMatMul.md): Multiply SparseTensor (of rank 2) "A" by dense matrix "B". + +[`SparseTensorSliceDataset(...)`](../../../tf/raw_ops/SparseTensorSliceDataset.md): Creates a dataset that splits a SparseTensor into elements row-wise. + +[`SparseTensorToCSRSparseMatrix(...)`](../../../tf/raw_ops/SparseTensorToCSRSparseMatrix.md): Converts a SparseTensor to a (possibly batched) CSRSparseMatrix. + +[`SparseToDense(...)`](../../../tf/raw_ops/SparseToDense.md): Converts a sparse representation into a dense tensor. + +[`SparseToSparseSetOperation(...)`](../../../tf/raw_ops/SparseToSparseSetOperation.md): Applies set operation along last dimension of 2 `SparseTensor` inputs. + +[`Spence(...)`](../../../tf/raw_ops/Spence.md) + +[`Split(...)`](../../../tf/raw_ops/Split.md): Splits a tensor into `num_split` tensors along one dimension. + +[`SplitV(...)`](../../../tf/raw_ops/SplitV.md): Splits a tensor into `num_split` tensors along one dimension. + +[`SqlDataset(...)`](../../../tf/raw_ops/SqlDataset.md): Creates a dataset that executes a SQL query and emits rows of the result set. + +[`Sqrt(...)`](../../../tf/raw_ops/Sqrt.md): Computes square root of x element-wise. + +[`SqrtGrad(...)`](../../../tf/raw_ops/SqrtGrad.md): Computes the gradient for the sqrt of `x` wrt its input. + +[`Square(...)`](../../../tf/raw_ops/Square.md): Computes square of x element-wise. + +[`SquaredDifference(...)`](../../../tf/raw_ops/SquaredDifference.md): Returns (x - y)(x - y) element-wise. + +[`Squeeze(...)`](../../../tf/raw_ops/Squeeze.md): Removes dimensions of size 1 from the shape of a tensor. + +[`Stack(...)`](../../../tf/raw_ops/Stack.md): Deprecated, use StackV2. + +[`StackClose(...)`](../../../tf/raw_ops/StackClose.md): Deprecated, use StackCloseV2. + +[`StackCloseV2(...)`](../../../tf/raw_ops/StackCloseV2.md): Delete the stack from its resource container. + +[`StackPop(...)`](../../../tf/raw_ops/StackPop.md): Deprecated, use StackPopV2. 
+ +[`StackPopV2(...)`](../../../tf/raw_ops/StackPopV2.md): Pop the element at the top of the stack. + +[`StackPush(...)`](../../../tf/raw_ops/StackPush.md): Deprecated, use StackPushV2. + +[`StackPushV2(...)`](../../../tf/raw_ops/StackPushV2.md): Push an element onto the stack. + +[`StackV2(...)`](../../../tf/raw_ops/StackV2.md): A stack that produces elements in first-in last-out order. + +[`Stage(...)`](../../../tf/raw_ops/Stage.md): Stage values similar to a lightweight Enqueue. + +[`StageClear(...)`](../../../tf/raw_ops/StageClear.md): Op removes all elements in the underlying container. + +[`StagePeek(...)`](../../../tf/raw_ops/StagePeek.md): Op peeks at the values at the specified index. If the + +[`StageSize(...)`](../../../tf/raw_ops/StageSize.md): Op returns the number of elements in the underlying container. + +[`StatefulPartitionedCall(...)`](../../../tf/raw_ops/StatefulPartitionedCall.md): returns `f(inputs)`, where `f`'s body is placed and partitioned. + +[`StatefulRandomBinomial(...)`](../../../tf/raw_ops/StatefulRandomBinomial.md) + +[`StatefulStandardNormal(...)`](../../../tf/raw_ops/StatefulStandardNormal.md): Outputs random values from a normal distribution. This op is deprecated in favor of op 'StatefulStandardNormalV2' + +[`StatefulStandardNormalV2(...)`](../../../tf/raw_ops/StatefulStandardNormalV2.md): Outputs random values from a normal distribution. + +[`StatefulTruncatedNormal(...)`](../../../tf/raw_ops/StatefulTruncatedNormal.md): Outputs random values from a truncated normal distribution. + +[`StatefulUniform(...)`](../../../tf/raw_ops/StatefulUniform.md): Outputs random values from a uniform distribution. + +[`StatefulUniformFullInt(...)`](../../../tf/raw_ops/StatefulUniformFullInt.md): Outputs random integers from a uniform distribution. + +[`StatefulUniformInt(...)`](../../../tf/raw_ops/StatefulUniformInt.md): Outputs random integers from a uniform distribution. + +[`StatelessIf(...)`](../../../tf/raw_ops/StatelessIf.md): output = cond ? then_branch(input) : else_branch(input) + +[`StatelessMultinomial(...)`](../../../tf/raw_ops/StatelessMultinomial.md): Draws samples from a multinomial distribution. + +[`StatelessRandomBinomial(...)`](../../../tf/raw_ops/StatelessRandomBinomial.md): Outputs deterministic pseudorandom random numbers from a binomial distribution. + +[`StatelessRandomGammaV2(...)`](../../../tf/raw_ops/StatelessRandomGammaV2.md): Outputs deterministic pseudorandom random numbers from a gamma distribution. + +[`StatelessRandomNormal(...)`](../../../tf/raw_ops/StatelessRandomNormal.md): Outputs deterministic pseudorandom values from a normal distribution. + +[`StatelessRandomPoisson(...)`](../../../tf/raw_ops/StatelessRandomPoisson.md): Outputs deterministic pseudorandom random numbers from a Poisson distribution. + +[`StatelessRandomUniform(...)`](../../../tf/raw_ops/StatelessRandomUniform.md): Outputs deterministic pseudorandom random values from a uniform distribution. + +[`StatelessRandomUniformFullInt(...)`](../../../tf/raw_ops/StatelessRandomUniformFullInt.md): Outputs deterministic pseudorandom random integers from a uniform distribution. + +[`StatelessRandomUniformInt(...)`](../../../tf/raw_ops/StatelessRandomUniformInt.md): Outputs deterministic pseudorandom random integers from a uniform distribution. + +[`StatelessTruncatedNormal(...)`](../../../tf/raw_ops/StatelessTruncatedNormal.md): Outputs deterministic pseudorandom values from a truncated normal distribution. 
+ +[`StatelessWhile(...)`](../../../tf/raw_ops/StatelessWhile.md): output = input; While (Cond(output)) { output = Body(output) } + +[`StaticRegexFullMatch(...)`](../../../tf/raw_ops/StaticRegexFullMatch.md): Check if the input matches the regex pattern. + +[`StaticRegexReplace(...)`](../../../tf/raw_ops/StaticRegexReplace.md): Replaces the match of pattern in input with rewrite. + +[`StatsAggregatorHandle(...)`](../../../tf/raw_ops/StatsAggregatorHandle.md): Creates a statistics manager resource. + +[`StatsAggregatorHandleV2(...)`](../../../tf/raw_ops/StatsAggregatorHandleV2.md) + +[`StatsAggregatorSetSummaryWriter(...)`](../../../tf/raw_ops/StatsAggregatorSetSummaryWriter.md): Set a summary_writer_interface to record statistics using given stats_aggregator. + +[`StatsAggregatorSummary(...)`](../../../tf/raw_ops/StatsAggregatorSummary.md): Produces a summary of any statistics recorded by the given statistics manager. + +[`StopGradient(...)`](../../../tf/raw_ops/StopGradient.md): Stops gradient computation. + +[`StridedSlice(...)`](../../../tf/raw_ops/StridedSlice.md): Return a strided slice from `input`. + +[`StridedSliceAssign(...)`](../../../tf/raw_ops/StridedSliceAssign.md): Assign `value` to the sliced l-value reference of `ref`. + +[`StridedSliceGrad(...)`](../../../tf/raw_ops/StridedSliceGrad.md): Returns the gradient of `StridedSlice`. + +[`StringFormat(...)`](../../../tf/raw_ops/StringFormat.md): Formats a string template using a list of tensors. + +[`StringJoin(...)`](../../../tf/raw_ops/StringJoin.md): Joins the strings in the given list of string tensors into one tensor; + +[`StringLength(...)`](../../../tf/raw_ops/StringLength.md): String lengths of `input`. + +[`StringLower(...)`](../../../tf/raw_ops/StringLower.md): Converts all uppercase characters into their respective lowercase replacements. + +[`StringNGrams(...)`](../../../tf/raw_ops/StringNGrams.md): Creates ngrams from ragged string data. + +[`StringSplit(...)`](../../../tf/raw_ops/StringSplit.md): Split elements of `input` based on `delimiter` into a `SparseTensor`. + +[`StringSplitV2(...)`](../../../tf/raw_ops/StringSplitV2.md): Split elements of `source` based on `sep` into a `SparseTensor`. + +[`StringStrip(...)`](../../../tf/raw_ops/StringStrip.md): Strip leading and trailing whitespaces from the Tensor. + +[`StringToHashBucket(...)`](../../../tf/raw_ops/StringToHashBucket.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`StringToHashBucketFast(...)`](../../../tf/raw_ops/StringToHashBucketFast.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`StringToHashBucketStrong(...)`](../../../tf/raw_ops/StringToHashBucketStrong.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`StringToNumber(...)`](../../../tf/raw_ops/StringToNumber.md): Converts each string in the input Tensor to the specified numeric type. + +[`StringUpper(...)`](../../../tf/raw_ops/StringUpper.md): Converts all lowercase characters into their respective uppercase replacements. + +[`Sub(...)`](../../../tf/raw_ops/Sub.md): Returns x - y element-wise. + +[`Substr(...)`](../../../tf/raw_ops/Substr.md): Return substrings from `Tensor` of strings. + +[`Sum(...)`](../../../tf/raw_ops/Sum.md): Computes the sum of elements across dimensions of a tensor. + +[`SummaryWriter(...)`](../../../tf/raw_ops/SummaryWriter.md) + +[`Svd(...)`](../../../tf/raw_ops/Svd.md): Computes the singular value decompositions of one or more matrices. 
+
+[`Switch(...)`](../../../tf/raw_ops/Switch.md): Forwards `data` to the output port determined by `pred`.
+
+[`SymbolicGradient(...)`](../../../tf/raw_ops/SymbolicGradient.md): Computes the gradient function for function f via backpropagation.
+
+[`TFRecordDataset(...)`](../../../tf/raw_ops/TFRecordDataset.md): Creates a dataset that emits the records from one or more TFRecord files.
+
+[`TFRecordReader(...)`](../../../tf/raw_ops/TFRecordReader.md): A Reader that outputs the records from a TensorFlow Records file.
+
+[`TFRecordReaderV2(...)`](../../../tf/raw_ops/TFRecordReaderV2.md): A Reader that outputs the records from a TensorFlow Records file.
+
+[`TPUCompilationResult(...)`](../../../tf/raw_ops/TPUCompilationResult.md): Returns the result of a TPU compilation.
+
+[`TPUEmbeddingActivations(...)`](../../../tf/raw_ops/TPUEmbeddingActivations.md): An op enabling differentiation of TPU Embeddings.
+
+[`TPUOrdinalSelector(...)`](../../../tf/raw_ops/TPUOrdinalSelector.md): A TPU core selector Op.
+
+[`TPUPartitionedCall(...)`](../../../tf/raw_ops/TPUPartitionedCall.md): Calls a function placed on a specified TPU device.
+
+[`TPUReplicateMetadata(...)`](../../../tf/raw_ops/TPUReplicateMetadata.md): Metadata indicating how the TPU computation should be replicated.
+
+[`TPUReplicatedInput(...)`](../../../tf/raw_ops/TPUReplicatedInput.md): Connects N inputs to an N-way replicated TPU computation.
+
+[`TPUReplicatedOutput(...)`](../../../tf/raw_ops/TPUReplicatedOutput.md): Connects N outputs from an N-way replicated TPU computation.
+
+[`TakeDataset(...)`](../../../tf/raw_ops/TakeDataset.md): Creates a dataset that contains `count` elements from the `input_dataset`.
+
+[`TakeManySparseFromTensorsMap(...)`](../../../tf/raw_ops/TakeManySparseFromTensorsMap.md): Read `SparseTensors` from a `SparseTensorsMap` and concatenate them.
+
+[`TakeWhileDataset(...)`](../../../tf/raw_ops/TakeWhileDataset.md): Creates a dataset that stops iteration when `predicate` is false.
+
+[`Tan(...)`](../../../tf/raw_ops/Tan.md): Computes tan of x element-wise.
+
+[`Tanh(...)`](../../../tf/raw_ops/Tanh.md): Computes hyperbolic tangent of `x` element-wise.
+
+[`TanhGrad(...)`](../../../tf/raw_ops/TanhGrad.md): Computes the gradient for the tanh of `x` wrt its input.
+
+[`TemporaryVariable(...)`](../../../tf/raw_ops/TemporaryVariable.md): Returns a tensor that may be mutated, but only persists within a single step.
+
+[`TensorArray(...)`](../../../tf/raw_ops/TensorArray.md)
+
+[`TensorArrayClose(...)`](../../../tf/raw_ops/TensorArrayClose.md)
+
+[`TensorArrayCloseV2(...)`](../../../tf/raw_ops/TensorArrayCloseV2.md): Deprecated. Use TensorArrayCloseV3
+
+[`TensorArrayCloseV3(...)`](../../../tf/raw_ops/TensorArrayCloseV3.md): Delete the TensorArray from its resource container.
+
+[`TensorArrayConcat(...)`](../../../tf/raw_ops/TensorArrayConcat.md)
+
+[`TensorArrayConcatV2(...)`](../../../tf/raw_ops/TensorArrayConcatV2.md): Deprecated. Use TensorArrayConcatV3
+
+[`TensorArrayConcatV3(...)`](../../../tf/raw_ops/TensorArrayConcatV3.md): Concat the elements from the TensorArray into value `value`.
+
+[`TensorArrayGather(...)`](../../../tf/raw_ops/TensorArrayGather.md)
+
+[`TensorArrayGatherV2(...)`](../../../tf/raw_ops/TensorArrayGatherV2.md): Deprecated. Use TensorArrayGatherV3
+
+[`TensorArrayGatherV3(...)`](../../../tf/raw_ops/TensorArrayGatherV3.md): Gather specific elements from the TensorArray into output `value`.
+
+[`TensorArrayGrad(...)`](../../../tf/raw_ops/TensorArrayGrad.md)
+
+[`TensorArrayGradV2(...)`](../../../tf/raw_ops/TensorArrayGradV2.md): Deprecated. Use TensorArrayGradV3
+
+[`TensorArrayGradV3(...)`](../../../tf/raw_ops/TensorArrayGradV3.md): Creates a TensorArray for storing the gradients of values in the given handle.
+
+[`TensorArrayGradWithShape(...)`](../../../tf/raw_ops/TensorArrayGradWithShape.md): Creates a TensorArray for storing multiple gradients of values in the given handle.
+
+[`TensorArrayPack(...)`](../../../tf/raw_ops/TensorArrayPack.md)
+
+[`TensorArrayRead(...)`](../../../tf/raw_ops/TensorArrayRead.md)
+
+[`TensorArrayReadV2(...)`](../../../tf/raw_ops/TensorArrayReadV2.md): Deprecated. Use TensorArrayReadV3
+
+[`TensorArrayReadV3(...)`](../../../tf/raw_ops/TensorArrayReadV3.md): Read an element from the TensorArray into output `value`.
+
+[`TensorArrayScatter(...)`](../../../tf/raw_ops/TensorArrayScatter.md)
+
+[`TensorArrayScatterV2(...)`](../../../tf/raw_ops/TensorArrayScatterV2.md): Deprecated. Use TensorArrayScatterV3
+
+[`TensorArrayScatterV3(...)`](../../../tf/raw_ops/TensorArrayScatterV3.md): Scatter the data from the input value into specific TensorArray elements.
+
+[`TensorArraySize(...)`](../../../tf/raw_ops/TensorArraySize.md)
+
+[`TensorArraySizeV2(...)`](../../../tf/raw_ops/TensorArraySizeV2.md): Deprecated. Use TensorArraySizeV3
+
+[`TensorArraySizeV3(...)`](../../../tf/raw_ops/TensorArraySizeV3.md): Get the current size of the TensorArray.
+
+[`TensorArraySplit(...)`](../../../tf/raw_ops/TensorArraySplit.md)
+
+[`TensorArraySplitV2(...)`](../../../tf/raw_ops/TensorArraySplitV2.md): Deprecated. Use TensorArraySplitV3
+
+[`TensorArraySplitV3(...)`](../../../tf/raw_ops/TensorArraySplitV3.md): Split the data from the input value into TensorArray elements.
+
+[`TensorArrayUnpack(...)`](../../../tf/raw_ops/TensorArrayUnpack.md)
+
+[`TensorArrayV2(...)`](../../../tf/raw_ops/TensorArrayV2.md): Deprecated. Use TensorArrayV3
+
+[`TensorArrayV3(...)`](../../../tf/raw_ops/TensorArrayV3.md): An array of Tensors of given size.
+
+[`TensorArrayWrite(...)`](../../../tf/raw_ops/TensorArrayWrite.md)
+
+[`TensorArrayWriteV2(...)`](../../../tf/raw_ops/TensorArrayWriteV2.md): Deprecated. Use TensorArrayWriteV3
+
+[`TensorArrayWriteV3(...)`](../../../tf/raw_ops/TensorArrayWriteV3.md): Push an element onto the tensor_array.
+
+[`TensorDataset(...)`](../../../tf/raw_ops/TensorDataset.md): Creates a dataset that emits `components` as a tuple of tensors once.
+
+[`TensorListConcat(...)`](../../../tf/raw_ops/TensorListConcat.md): Concats all tensors in the list along the 0th dimension.
+
+[`TensorListConcatLists(...)`](../../../tf/raw_ops/TensorListConcatLists.md)
+
+[`TensorListConcatV2(...)`](../../../tf/raw_ops/TensorListConcatV2.md): Concats all tensors in the list along the 0th dimension.
+
+[`TensorListElementShape(...)`](../../../tf/raw_ops/TensorListElementShape.md): The shape of the elements of the given list, as a tensor.
+
+[`TensorListFromTensor(...)`](../../../tf/raw_ops/TensorListFromTensor.md): Creates a TensorList which, when stacked, has the value of `tensor`.
+
+[`TensorListGather(...)`](../../../tf/raw_ops/TensorListGather.md): Creates a Tensor by indexing into the TensorList.
+
+[`TensorListGetItem(...)`](../../../tf/raw_ops/TensorListGetItem.md): Returns the item in the list with the given index.
+
+[`TensorListLength(...)`](../../../tf/raw_ops/TensorListLength.md): Returns the number of tensors in the input tensor list.
+ +[`TensorListPopBack(...)`](../../../tf/raw_ops/TensorListPopBack.md): Returns the last element of the input list as well as a list with all but that element. + +[`TensorListPushBack(...)`](../../../tf/raw_ops/TensorListPushBack.md): Returns a list which has the passed-in `Tensor` as last element and the other elements of the given list in `input_handle`. + +[`TensorListPushBackBatch(...)`](../../../tf/raw_ops/TensorListPushBackBatch.md) + +[`TensorListReserve(...)`](../../../tf/raw_ops/TensorListReserve.md): List of the given size with empty elements. + +[`TensorListResize(...)`](../../../tf/raw_ops/TensorListResize.md): Resizes the list. + +[`TensorListScatter(...)`](../../../tf/raw_ops/TensorListScatter.md): Creates a TensorList by indexing into a Tensor. + +[`TensorListScatterIntoExistingList(...)`](../../../tf/raw_ops/TensorListScatterIntoExistingList.md): Scatters tensor at indices in an input list. + +[`TensorListScatterV2(...)`](../../../tf/raw_ops/TensorListScatterV2.md): Creates a TensorList by indexing into a Tensor. + +[`TensorListSetItem(...)`](../../../tf/raw_ops/TensorListSetItem.md): Sets the index-th position of the list to contain the given tensor. + +[`TensorListSplit(...)`](../../../tf/raw_ops/TensorListSplit.md): Splits a tensor into a list. + +[`TensorListStack(...)`](../../../tf/raw_ops/TensorListStack.md): Stacks all tensors in the list. + +[`TensorScatterAdd(...)`](../../../tf/raw_ops/TensorScatterAdd.md): Adds sparse `updates` to an existing tensor according to `indices`. + +[`TensorScatterSub(...)`](../../../tf/raw_ops/TensorScatterSub.md): Subtracts sparse `updates` from an existing tensor according to `indices`. + +[`TensorScatterUpdate(...)`](../../../tf/raw_ops/TensorScatterUpdate.md): Scatter `updates` into an existing tensor according to `indices`. + +[`TensorSliceDataset(...)`](../../../tf/raw_ops/TensorSliceDataset.md): Creates a dataset that emits each dim-0 slice of `components` once. + +[`TensorStridedSliceUpdate(...)`](../../../tf/raw_ops/TensorStridedSliceUpdate.md): Assign `value` to the sliced l-value reference of `input`. + +[`TensorSummary(...)`](../../../tf/raw_ops/TensorSummary.md): Outputs a `Summary` protocol buffer with a tensor. + +[`TensorSummaryV2(...)`](../../../tf/raw_ops/TensorSummaryV2.md): Outputs a `Summary` protocol buffer with a tensor and per-plugin data. + +[`TextLineDataset(...)`](../../../tf/raw_ops/TextLineDataset.md): Creates a dataset that emits the lines of one or more text files. + +[`TextLineReader(...)`](../../../tf/raw_ops/TextLineReader.md): A Reader that outputs the lines of a file delimited by '\n'. + +[`TextLineReaderV2(...)`](../../../tf/raw_ops/TextLineReaderV2.md): A Reader that outputs the lines of a file delimited by '\n'. + +[`ThreadPoolDataset(...)`](../../../tf/raw_ops/ThreadPoolDataset.md): Creates a dataset that uses a custom thread pool to compute `input_dataset`. + +[`ThreadPoolHandle(...)`](../../../tf/raw_ops/ThreadPoolHandle.md): Creates a dataset that uses a custom thread pool to compute `input_dataset`. + +[`ThreadUnsafeUnigramCandidateSampler(...)`](../../../tf/raw_ops/ThreadUnsafeUnigramCandidateSampler.md): Generates labels for candidate sampling with a learned unigram distribution. + +[`Tile(...)`](../../../tf/raw_ops/Tile.md): Constructs a tensor by tiling a given tensor. + +[`TileGrad(...)`](../../../tf/raw_ops/TileGrad.md): Returns the gradient of `Tile`. + +[`Timestamp(...)`](../../../tf/raw_ops/Timestamp.md): Provides the time since epoch in seconds. 
+ +[`ToBool(...)`](../../../tf/raw_ops/ToBool.md): Converts a tensor to a scalar predicate. + +[`TopK(...)`](../../../tf/raw_ops/TopK.md): Finds values and indices of the `k` largest elements for the last dimension. + +[`TopKV2(...)`](../../../tf/raw_ops/TopKV2.md): Finds values and indices of the `k` largest elements for the last dimension. + +[`Transpose(...)`](../../../tf/raw_ops/Transpose.md): Shuffle dimensions of x according to a permutation. + +[`TridiagonalMatMul(...)`](../../../tf/raw_ops/TridiagonalMatMul.md): Calculate product with tridiagonal matrix. + +[`TridiagonalSolve(...)`](../../../tf/raw_ops/TridiagonalSolve.md): Solves tridiagonal systems of equations. + +[`TruncateDiv(...)`](../../../tf/raw_ops/TruncateDiv.md): Returns x / y element-wise for integer types. + +[`TruncateMod(...)`](../../../tf/raw_ops/TruncateMod.md): Returns element-wise remainder of division. This emulates C semantics in that + +[`TruncatedNormal(...)`](../../../tf/raw_ops/TruncatedNormal.md): Outputs random values from a truncated normal distribution. + +[`Unbatch(...)`](../../../tf/raw_ops/Unbatch.md): Reverses the operation of Batch for a single output Tensor. + +[`UnbatchDataset(...)`](../../../tf/raw_ops/UnbatchDataset.md): A dataset that splits the elements of its input into multiple elements. + +[`UnbatchGrad(...)`](../../../tf/raw_ops/UnbatchGrad.md): Gradient of Unbatch. + +[`UnicodeDecode(...)`](../../../tf/raw_ops/UnicodeDecode.md): Decodes each string in `input` into a sequence of Unicode code points. + +[`UnicodeDecodeWithOffsets(...)`](../../../tf/raw_ops/UnicodeDecodeWithOffsets.md): Decodes each string in `input` into a sequence of Unicode code points. + +[`UnicodeEncode(...)`](../../../tf/raw_ops/UnicodeEncode.md): Encode a tensor of ints into unicode strings. + +[`UnicodeScript(...)`](../../../tf/raw_ops/UnicodeScript.md): Determine the script codes of a given tensor of Unicode integer code points. + +[`UnicodeTranscode(...)`](../../../tf/raw_ops/UnicodeTranscode.md): Transcode the input text from a source encoding to a destination encoding. + +[`UniformCandidateSampler(...)`](../../../tf/raw_ops/UniformCandidateSampler.md): Generates labels for candidate sampling with a uniform distribution. + +[`Unique(...)`](../../../tf/raw_ops/Unique.md): Finds unique elements in a 1-D tensor. + +[`UniqueDataset(...)`](../../../tf/raw_ops/UniqueDataset.md): Creates a dataset that contains the unique elements of `input_dataset`. + +[`UniqueV2(...)`](../../../tf/raw_ops/UniqueV2.md): Finds unique elements along an axis of a tensor. + +[`UniqueWithCounts(...)`](../../../tf/raw_ops/UniqueWithCounts.md): Finds unique elements in a 1-D tensor. + +[`UniqueWithCountsV2(...)`](../../../tf/raw_ops/UniqueWithCountsV2.md): Finds unique elements along an axis of a tensor. + +[`Unpack(...)`](../../../tf/raw_ops/Unpack.md): Unpacks a given dimension of a rank-`R` tensor into `num` rank-`(R-1)` tensors. + +[`UnravelIndex(...)`](../../../tf/raw_ops/UnravelIndex.md): Converts an array of flat indices into a tuple of coordinate arrays. + +[`UnsortedSegmentJoin(...)`](../../../tf/raw_ops/UnsortedSegmentJoin.md): Joins the elements of `inputs` based on `segment_ids`. + +[`UnsortedSegmentMax(...)`](../../../tf/raw_ops/UnsortedSegmentMax.md): Computes the maximum along segments of a tensor. + +[`UnsortedSegmentMin(...)`](../../../tf/raw_ops/UnsortedSegmentMin.md): Computes the minimum along segments of a tensor. 
+ +[`UnsortedSegmentProd(...)`](../../../tf/raw_ops/UnsortedSegmentProd.md): Computes the product along segments of a tensor. + +[`UnsortedSegmentSum(...)`](../../../tf/raw_ops/UnsortedSegmentSum.md): Computes the sum along segments of a tensor. + +[`Unstage(...)`](../../../tf/raw_ops/Unstage.md): Op is similar to a lightweight Dequeue. + +[`UnwrapDatasetVariant(...)`](../../../tf/raw_ops/UnwrapDatasetVariant.md) + +[`UpperBound(...)`](../../../tf/raw_ops/UpperBound.md): Applies upper_bound(sorted_search_values, values) along each row. + +[`VarHandleOp(...)`](../../../tf/raw_ops/VarHandleOp.md): Creates a handle to a Variable resource. + +[`VarIsInitializedOp(...)`](../../../tf/raw_ops/VarIsInitializedOp.md): Checks whether a resource handle-based variable has been initialized. + +[`Variable(...)`](../../../tf/raw_ops/Variable.md): Use VariableV2 instead. + +[`VariableShape(...)`](../../../tf/raw_ops/VariableShape.md): Returns the shape of the variable pointed to by `resource`. + +[`VariableV2(...)`](../../../tf/raw_ops/VariableV2.md): Holds state in the form of a tensor that persists across steps. + +[`Where(...)`](../../../tf/raw_ops/Where.md): Returns locations of nonzero / true values in a tensor. + +[`While(...)`](../../../tf/raw_ops/While.md): output = input; While (Cond(output)) { output = Body(output) } + +[`WholeFileReader(...)`](../../../tf/raw_ops/WholeFileReader.md): A Reader that outputs the entire contents of a file as a value. + +[`WholeFileReaderV2(...)`](../../../tf/raw_ops/WholeFileReaderV2.md): A Reader that outputs the entire contents of a file as a value. + +[`WindowDataset(...)`](../../../tf/raw_ops/WindowDataset.md): Combines (nests of) input elements into a dataset of (nests of) windows. + +[`WorkerHeartbeat(...)`](../../../tf/raw_ops/WorkerHeartbeat.md): Worker heartbeat op. + +[`WrapDatasetVariant(...)`](../../../tf/raw_ops/WrapDatasetVariant.md) + +[`WriteAudioSummary(...)`](../../../tf/raw_ops/WriteAudioSummary.md) + +[`WriteFile(...)`](../../../tf/raw_ops/WriteFile.md): Writes contents to the file at input filename. Creates file and recursively + +[`WriteGraphSummary(...)`](../../../tf/raw_ops/WriteGraphSummary.md) + +[`WriteHistogramSummary(...)`](../../../tf/raw_ops/WriteHistogramSummary.md) + +[`WriteImageSummary(...)`](../../../tf/raw_ops/WriteImageSummary.md) + +[`WriteRawProtoSummary(...)`](../../../tf/raw_ops/WriteRawProtoSummary.md) + +[`WriteScalarSummary(...)`](../../../tf/raw_ops/WriteScalarSummary.md) + +[`WriteSummary(...)`](../../../tf/raw_ops/WriteSummary.md) + +[`Xdivy(...)`](../../../tf/raw_ops/Xdivy.md): Returns 0 if x == 0, and x / y otherwise, elementwise. + +[`Xlog1py(...)`](../../../tf/raw_ops/Xlog1py.md): Returns 0 if x == 0, and x * log1p(y) otherwise, elementwise. + +[`Xlogy(...)`](../../../tf/raw_ops/Xlogy.md): Returns 0 if x == 0, and x * log(y) otherwise, elementwise. + +[`ZerosLike(...)`](../../../tf/raw_ops/ZerosLike.md): Returns a tensor of zeros with the same shape and type as x. + +[`Zeta(...)`](../../../tf/raw_ops/Zeta.md): Compute the Hurwitz zeta function \\(\zeta(x, q)\\). + +[`ZipDataset(...)`](../../../tf/raw_ops/ZipDataset.md): Creates a dataset that zips together `input_datasets`. + diff --git a/site/en/api_docs/python/tf/compat/v1/reduce_all.md b/site/en/api_docs/python/tf/compat/v1/reduce_all.md new file mode 100644 index 00000000000..febee24d2c1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reduce_all.md @@ -0,0 +1,141 @@ +description: Computes the "logical and" of elements across dimensions of a tensor. 
(deprecated arguments) + +
+ + +
+ +# tf.compat.v1.reduce_all + + + + + + + + + +Computes the "logical and" of elements across dimensions of a tensor. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + +#### For example: + + + +```python +x = tf.constant([[True, True], [False, False]]) +tf.reduce_all(x) # False +tf.reduce_all(x, 0) # [False, False] +tf.reduce_all(x, 1) # [True, False] +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The boolean tensor to reduce. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+`reduction_indices` + +The old (deprecated) name for axis. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.all + diff --git a/site/en/api_docs/python/tf/compat/v1/reduce_any.md b/site/en/api_docs/python/tf/compat/v1/reduce_any.md new file mode 100644 index 00000000000..576db1ddab8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reduce_any.md @@ -0,0 +1,141 @@ +description: Computes the "logical or" of elements across dimensions of a tensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.reduce_any + + + + + + + + + +Computes the "logical or" of elements across dimensions of a tensor. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + +#### For example: + + + +```python +x = tf.constant([[True, True], [False, False]]) +tf.reduce_any(x) # True +tf.reduce_any(x, 0) # [True, True] +tf.reduce_any(x, 1) # [True, False] +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The boolean tensor to reduce. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+`reduction_indices` + +The old (deprecated) name for axis. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.any + diff --git a/site/en/api_docs/python/tf/compat/v1/reduce_join.md b/site/en/api_docs/python/tf/compat/v1/reduce_join.md new file mode 100644 index 00000000000..ecdd7f20b75 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reduce_join.md @@ -0,0 +1,118 @@ +description: Joins all strings into a single string, or joins along an axis. + +
+ + +
+ +# tf.compat.v1.reduce_join + + + + + + + + + +Joins all strings into a single string, or joins along an axis. + + + + + + + + + +``` +>>> tf.strings.reduce_join([['abc','123'], +... ['def','456']]).numpy() +b'abc123def456' +>>> tf.strings.reduce_join([['abc','123'], +... ['def','456']], axis=-1).numpy() +array([b'abc123', b'def456'], dtype=object) +>>> tf.strings.reduce_join([['abc','123'], +... ['def','456']], +... axis=-1, +... separator=" ").numpy() +array([b'abc 123', b'def 456'], dtype=object) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A tf.string tensor. +
+`axis` + +Which axis to join along. The default behavior is to join all +elements, producing a scalar. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`separator` + +a string added between each string being joined. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.string tensor. +
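The examples above cover `axis` and `separator`; the `keepdims` argument from the table is not shown. A minimal sketch of its effect (output shapes noted in the comments):

```python
import tensorflow as tf

words = tf.constant([['abc', '123'], ['def', '456']])

# Without keepdims the joined axis is dropped: result shape (2,).
tf.strings.reduce_join(words, axis=-1, separator='-')
# [b'abc-123', b'def-456']

# With keepdims=True the joined axis is kept with length 1: result shape (2, 1).
tf.strings.reduce_join(words, axis=-1, separator='-', keepdims=True)
# [[b'abc-123'], [b'def-456']]
```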
+ diff --git a/site/en/api_docs/python/tf/compat/v1/reduce_logsumexp.md b/site/en/api_docs/python/tf/compat/v1/reduce_logsumexp.md new file mode 100644 index 00000000000..cdb0ffd8c11 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reduce_logsumexp.md @@ -0,0 +1,141 @@ +description: Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.reduce_logsumexp + + + + + + + + + +Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +This function is more numerically stable than log(sum(exp(input))). It avoids +overflows caused by taking the exp of large inputs and underflows caused by +taking the log of small inputs. + +#### For example: + + + +```python +x = tf.constant([[0., 0., 0.], [0., 0., 0.]]) +tf.reduce_logsumexp(x) # log(6) +tf.reduce_logsumexp(x, 0) # [log(2), log(2), log(2)] +tf.reduce_logsumexp(x, 1) # [log(3), log(3)] +tf.reduce_logsumexp(x, 1, keepdims=True) # [[log(3)], [log(3)]] +tf.reduce_logsumexp(x, [0, 1]) # log(6) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+`reduction_indices` + +The old (deprecated) name for axis. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced tensor. +
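To make the numerical-stability point above concrete, here is a minimal sketch (written in eager style; the values are arbitrary large inputs chosen to overflow `float32`):

```python
import tensorflow as tf

x = tf.constant([1000., 1001., 1002.])

# Naive formulation: exp(1000.) overflows float32 to inf, so the log is inf.
naive = tf.math.log(tf.reduce_sum(tf.exp(x)))

# reduce_logsumexp shifts by the maximum internally, so the result stays
# finite (approximately 1002.4076).
stable = tf.reduce_logsumexp(x)
```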
+ diff --git a/site/en/api_docs/python/tf/compat/v1/reduce_max.md b/site/en/api_docs/python/tf/compat/v1/reduce_max.md new file mode 100644 index 00000000000..45d39fdfad2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reduce_max.md @@ -0,0 +1,130 @@ +description: Computes the maximum of elements across dimensions of a tensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.reduce_max + + + + + + + + + +Computes the maximum of elements across dimensions of a tensor. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have real numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+`reduction_indices` + +The old (deprecated) name for axis. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.max + diff --git a/site/en/api_docs/python/tf/compat/v1/reduce_mean.md b/site/en/api_docs/python/tf/compat/v1/reduce_mean.md new file mode 100644 index 00000000000..c0e8b570bbb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reduce_mean.md @@ -0,0 +1,156 @@ +description: Computes the mean of elements across dimensions of a tensor. + +
+ + +
+ +# tf.compat.v1.reduce_mean + + + + + + + + + +Computes the mean of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis` by computing the +mean of elements across the dimensions in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a tensor with a single +element is returned. + +#### For example: + + + +``` +>>> x = tf.constant([[1., 1.], [2., 2.]]) +>>> tf.reduce_mean(x) + +>>> tf.reduce_mean(x, 0) + +>>> tf.reduce_mean(x, 1) + +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+`reduction_indices` + +The old (deprecated) name for axis. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.mean + +Please note that `np.mean` has a `dtype` parameter that could be used to +specify the output type. By default this is `dtype=float64`. On the other +hand, tf.reduce_mean has an aggressive type inference from `input_tensor`, +for example: + +``` +>>> x = tf.constant([1, 0, 1, 0]) +>>> tf.reduce_mean(x) +<tf.Tensor: shape=(), dtype=int32, numpy=0> +>>> y = tf.constant([1., 0., 1., 0.]) +>>> tf.reduce_mean(y) +<tf.Tensor: shape=(), dtype=float32, numpy=0.5> +``` + + diff --git a/site/en/api_docs/python/tf/compat/v1/reduce_min.md new file mode 100644 index 00000000000..62d6b18f2c4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reduce_min.md @@ -0,0 +1,130 @@ +description: Computes the minimum of elements across dimensions of a tensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.reduce_min + + + + + + + + + +Computes the minimum of elements across dimensions of a tensor. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have real numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+`reduction_indices` + +The old (deprecated) name for axis. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.min + diff --git a/site/en/api_docs/python/tf/compat/v1/reduce_prod.md b/site/en/api_docs/python/tf/compat/v1/reduce_prod.md new file mode 100644 index 00000000000..5dc806d3863 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reduce_prod.md @@ -0,0 +1,130 @@ +description: Computes the product of elements across dimensions of a tensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.reduce_prod + + + + + + + + + +Computes the product of elements across dimensions of a tensor. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+`reduction_indices` + +The old (deprecated) name for axis. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.prod + diff --git a/site/en/api_docs/python/tf/compat/v1/reduce_sum.md b/site/en/api_docs/python/tf/compat/v1/reduce_sum.md new file mode 100644 index 00000000000..9d560ae3e66 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reduce_sum.md @@ -0,0 +1,144 @@ +description: Computes the sum of elements across dimensions of a tensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.reduce_sum + + + + + + + + + +Computes the sum of elements across dimensions of a tensor. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + +#### For example: + + + +```python +x = tf.constant([[1, 1, 1], [1, 1, 1]]) +tf.reduce_sum(x) # 6 +tf.reduce_sum(x, 0) # [2, 2, 2] +tf.reduce_sum(x, 1) # [3, 3] +tf.reduce_sum(x, 1, keepdims=True) # [[3], [3]] +tf.reduce_sum(x, [0, 1]) # 6 +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+`reduction_indices` + +The old (deprecated) name for axis. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced tensor, of the same dtype as the input_tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.sum, except that numpy upcasts uint8 and int32 to +int64 while tensorflow returns the same dtype as the input. + diff --git a/site/en/api_docs/python/tf/compat/v1/report_uninitialized_variables.md new file mode 100644 index 00000000000..21e9c7bec92 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/report_uninitialized_variables.md @@ -0,0 +1,77 @@ +description: Adds ops to list the names of uninitialized variables. + +
+ + +
+ +# tf.compat.v1.report_uninitialized_variables + + + + + + + + + +Adds ops to list the names of uninitialized variables. + + + + + + + +When run, it returns a 1-D tensor containing the names of uninitialized +variables if there are any, or an empty array if there are none. + + + + + + + + + + + + + +
+`var_list` + +List of `Variable` objects to check. Defaults to the value of +`global_variables() + local_variables()` +
+`name` + +Optional name of the `Operation`. +
+ + + + + + + + + + + +
+A 1-D tensor containing names of the uninitialized variables, or an empty +1-D tensor if there are no variables or no uninitialized variables. +
+ + +**NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/reset_default_graph.md b/site/en/api_docs/python/tf/compat/v1/reset_default_graph.md new file mode 100644 index 00000000000..7258dac3374 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reset_default_graph.md @@ -0,0 +1,40 @@ +description: Clears the default graph stack and resets the global default graph. + +
+ + +
+ +# tf.compat.v1.reset_default_graph + + + + + + + + + +Clears the default graph stack and resets the global default graph. + + + + + + + +NOTE: The default graph is a property of the current thread. This +function applies only to the current thread. Calling this function while +a tf.compat.v1.Session or tf.compat.v1.InteractiveSession is active will +result in undefined +behavior. Using any previously created tf.Operation or tf.Tensor objects +after calling this function will result in undefined behavior. +Raises: + AssertionError: If this function is called within a nested graph. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/resource_loader.md b/site/en/api_docs/python/tf/compat/v1/resource_loader.md new file mode 100644 index 00000000000..f17b5c27e96 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/resource_loader.md @@ -0,0 +1,33 @@ +description: Resource management library. + +
+ + +
+ +# Module: tf.compat.v1.resource_loader + + + + + + + + + +Resource management library. + + + +## Functions + +[`get_data_files_path(...)`](../../../tf/compat/v1/resource_loader/get_data_files_path.md): Get a direct path to the data files colocated with the script. + +[`get_path_to_datafile(...)`](../../../tf/compat/v1/resource_loader/get_path_to_datafile.md): Get the path to the specified file in the data dependencies. + +[`get_root_dir_with_all_resources(...)`](../../../tf/compat/v1/resource_loader/get_root_dir_with_all_resources.md): Get a root directory containing all the data attributes in the build rule. + +[`load_resource(...)`](../../../tf/compat/v1/resource_loader/load_resource.md): Load the resource at given path, where path is relative to tensorflow/. + +[`readahead_file_path(...)`](../../../tf/compat/v1/resource_loader/readahead_file_path.md): Readahead files not implemented; simply returns given path. + diff --git a/site/en/api_docs/python/tf/compat/v1/resource_loader/get_data_files_path.md b/site/en/api_docs/python/tf/compat/v1/resource_loader/get_data_files_path.md new file mode 100644 index 00000000000..f8046c20300 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/resource_loader/get_data_files_path.md @@ -0,0 +1,46 @@ +description: Get a direct path to the data files colocated with the script. + +
+ + +
+ +# tf.compat.v1.resource_loader.get_data_files_path + + + + + + + + + +Get a direct path to the data files colocated with the script. + + + + + + + + + + + + + + + + +
+The directory where files specified in data attribute of py_test +and py_binary are stored. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/resource_loader/get_path_to_datafile.md b/site/en/api_docs/python/tf/compat/v1/resource_loader/get_path_to_datafile.md new file mode 100644 index 00000000000..cf657deabb2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/resource_loader/get_path_to_datafile.md @@ -0,0 +1,83 @@ +description: Get the path to the specified file in the data dependencies. + +
+ + +
+ +# tf.compat.v1.resource_loader.get_path_to_datafile + + + + + + + + + +Get the path to the specified file in the data dependencies. + + + + + + + +The path is relative to tensorflow/ + + + + + + + + + + +
+`path` + +a string resource path relative to tensorflow/ +
+ + + + + + + + + + + +
+The path to the specified file present in the data attribute of py_test +or py_binary. +
+ + + + + + + + + + + + +
+`IOError` + +If the path is not found, or the resource can't be opened. +
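A minimal usage sketch; `testdata/config.txt` is a hypothetical file that would need to be declared as a data dependency of the calling target's build rule:

```python
import tensorflow as tf

# Hypothetical data file shipped alongside the calling script.
path = tf.compat.v1.resource_loader.get_path_to_datafile("testdata/config.txt")
with open(path) as f:
  config_text = f.read()
```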
+ diff --git a/site/en/api_docs/python/tf/compat/v1/resource_loader/get_root_dir_with_all_resources.md b/site/en/api_docs/python/tf/compat/v1/resource_loader/get_root_dir_with_all_resources.md new file mode 100644 index 00000000000..96a360f2f7f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/resource_loader/get_root_dir_with_all_resources.md @@ -0,0 +1,47 @@ +description: Get a root directory containing all the data attributes in the build rule. + +
+ + +
+ +# tf.compat.v1.resource_loader.get_root_dir_with_all_resources + + + + + + + + + +Get a root directory containing all the data attributes in the build rule. + + + + + + + + + + + + + + + + +
+The path to the specified file present in the data attribute of py_test +or py_binary. Falls back to returning the same as get_data_files_path if it +fails to detect a bazel runfiles directory. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/resource_loader/load_resource.md b/site/en/api_docs/python/tf/compat/v1/resource_loader/load_resource.md new file mode 100644 index 00000000000..6791951b3eb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/resource_loader/load_resource.md @@ -0,0 +1,81 @@ +description: Load the resource at given path, where path is relative to tensorflow/. + +
+ + +
+ +# tf.compat.v1.resource_loader.load_resource + + + + + + + + + +Load the resource at given path, where path is relative to tensorflow/. + + + + + + + + + + + + + + + + + +
+`path` + +a string resource path relative to tensorflow/. +
+ + + + + + + + + + + +
+The contents of that resource. +
+ + + + + + + + + + + + +
+`IOError` + +If the path is not found, or the resource can't be opened. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/resource_loader/readahead_file_path.md b/site/en/api_docs/python/tf/compat/v1/resource_loader/readahead_file_path.md new file mode 100644 index 00000000000..93328594b60 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/resource_loader/readahead_file_path.md @@ -0,0 +1,33 @@ +description: Readahead files not implemented; simply returns given path. + +
+ + +
+ +# tf.compat.v1.resource_loader.readahead_file_path + + + + + + + + + +Readahead files not implemented; simply returns given path. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/resource_variables_enabled.md b/site/en/api_docs/python/tf/compat/v1/resource_variables_enabled.md new file mode 100644 index 00000000000..c4fa814ed51 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/resource_variables_enabled.md @@ -0,0 +1,43 @@ +description: Returns True if resource variables are enabled. + +
+ + +
+ +# tf.compat.v1.resource_variables_enabled + + + + + + + + + +Returns `True` if resource variables are enabled. + + + + + + + +Resource variables are improved versions of TensorFlow variables with a +well-defined memory model. Accessing a resource variable reads its value, and +all ops which access a specific read value of the variable are guaranteed to +see the same value for that tensor. Writes which happen after a read (by +having a control or data dependency on the read) are guaranteed not to affect +the value of the read tensor, and similarly writes which happen before a read +are guaranteed to affect the value. No guarantees are made about unordered +read/write pairs. + +Calling tf.enable_resource_variables() lets you opt-in to this TensorFlow 2.0 +feature. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/reverse_sequence.md b/site/en/api_docs/python/tf/compat/v1/reverse_sequence.md new file mode 100644 index 00000000000..d2875c7898c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/reverse_sequence.md @@ -0,0 +1,105 @@ +description: Reverses variable length slices. (deprecated arguments) (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.reverse_sequence + + + + + + + + + +Reverses variable length slices. (deprecated arguments) (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(seq_dim)`. They will be removed in a future version. +Instructions for updating: +seq_dim is deprecated, use seq_axis instead + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(batch_dim)`. They will be removed in a future version. +Instructions for updating: +batch_dim is deprecated, use batch_axis instead + +This op first slices `input` along the dimension `batch_axis`, and for +each slice `i`, reverses the first `seq_lengths[i]` elements along the +dimension `seq_axis`. + +The elements of `seq_lengths` must obey `seq_lengths[i] <= +input.dims[seq_dim]`, and `seq_lengths` must be a vector of length +`input.dims[batch_dim]`. + +The output slice `i` along dimension `batch_axis` is then given by +input slice `i`, with the first `seq_lengths[i]` slices along +dimension `seq_axis` reversed. + +#### Example usage: + + + +``` +>>> seq_lengths = [7, 2, 3, 5] +>>> input = [[1, 2, 3, 4, 5, 0, 0, 0], [1, 2, 0, 0, 0, 0, 0, 0], +... [1, 2, 3, 4, 0, 0, 0, 0], [1, 2, 3, 4, 5, 6, 7, 8]] +>>> output = tf.reverse_sequence(input, seq_lengths, seq_axis=1, batch_axis=0) +>>> output + +``` + + + + + + + + + +
+`input`: A `Tensor`. The input to reverse. +`seq_lengths`: A `Tensor`. Must be one of the following types: `int32`, +`int64`. 1-D with length `input.dims(batch_dim)` and `max(seq_lengths) <= +input.dims(seq_dim)` +`seq_axis`: An `int`. The dimension which is partially reversed. +`batch_axis`: An optional `int`. Defaults to `0`. The dimension along which +reversal is performed. +`name`: A name for the operation (optional). +
+ + + + + + + + + + + +
+A Tensor. Has the same type as input. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model.md b/site/en/api_docs/python/tf/compat/v1/saved_model.md new file mode 100644 index 00000000000..17ec716f299 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model.md @@ -0,0 +1,133 @@ +description: Public API for tf.saved_model namespace. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# Module: tf.compat.v1.saved_model + + + + + + + + + +Public API for tf.saved_model namespace. + + + +## Modules + +[`builder`](../../../tf/compat/v1/saved_model/builder.md) module: SavedModel builder. + +[`constants`](../../../tf/compat/v1/saved_model/constants.md) module: Constants for SavedModel save and restore operations. + +[`experimental`](../../../tf/compat/v1/saved_model/experimental.md) module: Public API for tf.saved_model.experimental namespace. + +[`loader`](../../../tf/compat/v1/saved_model/loader.md) module: Loader functionality for SavedModel with hermetic, language-neutral exports. + +[`main_op`](../../../tf/compat/v1/saved_model/main_op.md) module: SavedModel main op. + +[`signature_constants`](../../../tf/compat/v1/saved_model/signature_constants.md) module: Signature constants for SavedModel save and restore operations. + +[`signature_def_utils`](../../../tf/compat/v1/saved_model/signature_def_utils.md) module: SignatureDef utility functions. + +[`tag_constants`](../../../tf/compat/v1/saved_model/tag_constants.md) module: Common tags used for graphs in SavedModel. + +[`utils`](../../../tf/compat/v1/saved_model/utils.md) module: SavedModel utility functions. + +## Classes + +[`class Asset`](../../../tf/saved_model/Asset.md): Represents a file asset to hermetically include in a SavedModel. + +[`class Builder`](../../../tf/compat/v1/saved_model/Builder.md): Builds the `SavedModel` protocol buffer and saves variables and assets. + +[`class SaveOptions`](../../../tf/saved_model/SaveOptions.md): Options for saving to SavedModel. + +## Functions + +[`build_signature_def(...)`](../../../tf/compat/v1/saved_model/build_signature_def.md): Utility function to build a SignatureDef protocol buffer. + +[`build_tensor_info(...)`](../../../tf/compat/v1/saved_model/build_tensor_info.md): Utility function to build TensorInfo proto from a Tensor. (deprecated) + +[`classification_signature_def(...)`](../../../tf/compat/v1/saved_model/classification_signature_def.md): Creates classification signature from given examples and predictions. + +[`contains_saved_model(...)`](../../../tf/compat/v1/saved_model/contains_saved_model.md): Checks whether the provided export directory could contain a SavedModel. + +[`get_tensor_from_tensor_info(...)`](../../../tf/compat/v1/saved_model/get_tensor_from_tensor_info.md): Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated) + +[`is_valid_signature(...)`](../../../tf/compat/v1/saved_model/is_valid_signature.md): Determine whether a SignatureDef can be served by TensorFlow Serving. + +[`load(...)`](../../../tf/compat/v1/saved_model/load.md): Loads the model from a SavedModel as specified by tags. (deprecated) + +[`load_v2(...)`](../../../tf/saved_model/load.md): Load a SavedModel from `export_dir`. + +[`main_op_with_restore(...)`](../../../tf/compat/v1/saved_model/main_op_with_restore.md): Returns a main op to init variables, tables and restore the graph. (deprecated) + +[`maybe_saved_model_directory(...)`](../../../tf/compat/v1/saved_model/contains_saved_model.md): Checks whether the provided export directory could contain a SavedModel. + +[`predict_signature_def(...)`](../../../tf/compat/v1/saved_model/predict_signature_def.md): Creates prediction signature from given inputs and outputs. + +[`regression_signature_def(...)`](../../../tf/compat/v1/saved_model/regression_signature_def.md): Creates regression signature from given examples and predictions. 
+ +[`save(...)`](../../../tf/saved_model/save.md): Exports the Trackable object `obj` to [SavedModel format](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md). + +[`simple_save(...)`](../../../tf/compat/v1/saved_model/simple_save.md): Convenience function to build a SavedModel suitable for serving. (deprecated) + +## Other Members + +* `ASSETS_DIRECTORY = 'assets'` +* `ASSETS_KEY = 'saved_model_assets'` +* `CLASSIFY_INPUTS = 'inputs'` +* `CLASSIFY_METHOD_NAME = 'tensorflow/serving/classify'` +* `CLASSIFY_OUTPUT_CLASSES = 'classes'` +* `CLASSIFY_OUTPUT_SCORES = 'scores'` +* `DEBUG_DIRECTORY = 'debug'` +* `DEBUG_INFO_FILENAME_PB = 'saved_model_debug_info.pb'` +* `DEFAULT_SERVING_SIGNATURE_DEF_KEY = 'serving_default'` +* `GPU = 'gpu'` +* `LEGACY_INIT_OP_KEY = 'legacy_init_op'` +* `MAIN_OP_KEY = 'saved_model_main_op'` +* `PREDICT_INPUTS = 'inputs'` +* `PREDICT_METHOD_NAME = 'tensorflow/serving/predict'` +* `PREDICT_OUTPUTS = 'outputs'` +* `REGRESS_INPUTS = 'inputs'` +* `REGRESS_METHOD_NAME = 'tensorflow/serving/regress'` +* `REGRESS_OUTPUTS = 'outputs'` +* `SAVED_MODEL_FILENAME_PB = 'saved_model.pb'` +* `SAVED_MODEL_FILENAME_PBTXT = 'saved_model.pbtxt'` +* `SAVED_MODEL_SCHEMA_VERSION = 1` +* `SERVING = 'serve'` +* `TPU = 'tpu'` +* `TRAINING = 'train'` +* `VARIABLES_DIRECTORY = 'variables'` +* `VARIABLES_FILENAME = 'variables'` diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/Builder.md b/site/en/api_docs/python/tf/compat/v1/saved_model/Builder.md new file mode 100644 index 00000000000..fed747c53c4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/Builder.md @@ -0,0 +1,353 @@ +description: Builds the SavedModel protocol buffer and saves variables and assets. + +
+ + + + + + +
+ +# tf.compat.v1.saved_model.Builder + + + + + + + + + +Builds the `SavedModel` protocol buffer and saves variables and assets. + + + + + + + + + +The `SavedModelBuilder` class provides the functionality to build a +`SavedModel` protocol buffer. Specifically, this allows multiple meta +graphs to be saved as part of a single language-neutral `SavedModel`, +while sharing variables and assets. + +To build a SavedModel, the first meta graph must be saved with variables. +Subsequent meta graphs will simply be saved with their graph definitions. If +assets need to be saved and written or copied to disk, they can be provided +when the meta graph def is added. If multiple meta graph defs are associated +an asset of the same name, only the first version is retained. + +Each meta graph added to the SavedModel must be annotated with tags. The tags +provide a means to identify the specific meta graph to load and restore, along +with the shared set of variables and assets. + +Typical usage for the `SavedModelBuilder`: + +```python +... +builder = tf.compat.v1.saved_model.Builder(export_dir) + +with tf.compat.v1.Session(graph=tf.Graph()) as sess: + ... + builder.add_meta_graph_and_variables(sess, + ["foo-tag"], + signature_def_map=foo_signatures, + assets_collection=foo_assets) +... + +with tf.compat.v1.Session(graph=tf.Graph()) as sess: + ... + builder.add_meta_graph(["bar-tag", "baz-tag"]) +... + +builder.save() +``` + +Note: This function will only be available through the v1 compatibility +library as tf.compat.v1.saved_model.builder.SavedModelBuilder or +tf.compat.v1.saved_model.Builder. Tensorflow 2.0 will introduce a new +object-based method of creating SavedModels. + +## Methods + +

add_meta_graph

+ +View source + + + +Adds the current meta graph to the SavedModel. + +Creates a Saver in the current scope and uses the Saver to export the meta +graph def. Invoking this API requires the `add_meta_graph_and_variables()` +API to have been invoked before. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`tags` + +The set of tags to annotate the meta graph def with. +
+`signature_def_map` + +The map of signature defs to be added to the meta graph +def. +
+`assets_collection` + +Assets to be saved with SavedModel. Note +that this list should be a subset of the assets saved as part of +the first meta graph in the SavedModel. +
+`clear_devices` + +Set to true if the device info on the default graph should +be cleared. +
+`init_op` + +Op or group of ops to execute when the graph is loaded. Note +that when the init_op is specified it is run after the restore op at +load-time. +
+`train_op` + +Op or group of ops that trains the model when run. This will +not be run automatically when the graph is loaded, instead saved in +a SignatureDef accessible through the exported MetaGraph. +
+`saver` + +An instance of tf.compat.v1.train.Saver that will be used to export +the metagraph. If None, a sharded Saver that restores all variables will +be used. +
+ + + + + + + + + + + + +
Raises
+`AssertionError` + +If the variables for the SavedModel have not been saved +yet, or if the graph already contains one or more legacy init ops. +
+ + + +

add_meta_graph_and_variables

+ +View source + + + +Adds the current meta graph to the SavedModel and saves variables. + +Creates a Saver to save the variables from the provided session. Exports the +corresponding meta graph def. This function assumes that the variables to be +saved have been initialized. For a given `SavedModelBuilder`, this API must +be called exactly once and for the first meta graph to save. For subsequent +meta graph defs to be added, the `add_meta_graph()` API must be used. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`sess` + +The TensorFlow session from which to save the meta graph and +variables. +
+`tags` + +The set of tags with which to save the meta graph. +
+`signature_def_map` + +The map of signature def map to add to the meta graph +def. +
+`assets_collection` + +Assets to be saved with SavedModel. +
+`clear_devices` + +Set to true if the device info on the default graph should +be cleared. +
+`init_op` + +Op or group of ops to execute when the graph is loaded. Note +that when the init_op is specified it is run after the restore op at +load-time. +
+`train_op` + +Op or group of ops that trains the model when run. This will +not be run automatically when the graph is loaded, instead saved in +a SignatureDef accessible through the exported MetaGraph. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the NodeDefs. For a detailed guide, see +[Stripping Default-Valued Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+`saver` + +An instance of tf.compat.v1.train.Saver that will be used to export the +metagraph and save variables. If None, a sharded Saver that restores +all variables will be used. +
+ + + +

save

+ +View source + + + +Writes a `SavedModel` protocol buffer to disk. + +The function writes the SavedModel protocol buffer to the export directory +in a serialized format. + + + + + + + + + + +
Args
+`as_text` + +Writes the SavedModel protocol buffer in text format to +disk. Protocol buffers in text format are useful for debugging, but +parsing fails when it encounters an unknown field and so is not forward +compatible. This means changes to TensorFlow may prevent deployment of +new text format SavedModels to existing serving binaries. Do not deploy +`as_text` SavedModels to production. +
+ + + + + + + + + + + +
Returns
+The path to which the SavedModel protocol buffer was written. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/build_signature_def.md b/site/en/api_docs/python/tf/compat/v1/saved_model/build_signature_def.md new file mode 100644 index 00000000000..728e8b124a1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/build_signature_def.md @@ -0,0 +1,91 @@ +description: Utility function to build a SignatureDef protocol buffer. + +
+ + +
+ +# tf.compat.v1.saved_model.build_signature_def + + + + + + + + + +Utility function to build a SignatureDef protocol buffer. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +Inputs of the SignatureDef defined as a proto map of string to +tensor info. +
+`outputs` + +Outputs of the SignatureDef defined as a proto map of string to +tensor info. +
+`method_name` + +Method name of the SignatureDef as a string. +
+ + + + + + + + + + + +
+A SignatureDef protocol buffer constructed based on the supplied arguments. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/build_tensor_info.md b/site/en/api_docs/python/tf/compat/v1/saved_model/build_tensor_info.md new file mode 100644 index 00000000000..801521bce1b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/build_tensor_info.md @@ -0,0 +1,97 @@ +description: Utility function to build TensorInfo proto from a Tensor. (deprecated) + +
+ + +
+ +# tf.compat.v1.saved_model.build_tensor_info + + + + + + + + + +Utility function to build TensorInfo proto from a Tensor. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info. + + + + + + + + + + +
+`tensor` + +Tensor or SparseTensor whose name, dtype and shape are used to +build the TensorInfo. For SparseTensors, the names of the three +constituent Tensors are used. +
+ + + + + + + + + + + +
+A TensorInfo protocol buffer constructed based on the supplied argument. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/builder.md b/site/en/api_docs/python/tf/compat/v1/saved_model/builder.md new file mode 100644 index 00000000000..eb5dee34cc7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/builder.md @@ -0,0 +1,27 @@ +description: SavedModel builder. + +
+ + +
+ +# Module: tf.compat.v1.saved_model.builder + + + + + + + + + +SavedModel builder. + + +Builds a SavedModel that can be saved to storage, is language neutral, and +enables systems to produce, consume, or transform TensorFlow Models. + +## Classes + +[`class SavedModelBuilder`](../../../../tf/compat/v1/saved_model/Builder.md): Builds the `SavedModel` protocol buffer and saves variables and assets. + diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/classification_signature_def.md b/site/en/api_docs/python/tf/compat/v1/saved_model/classification_signature_def.md new file mode 100644 index 00000000000..8c34b39cd6c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/classification_signature_def.md @@ -0,0 +1,110 @@ +description: Creates classification signature from given examples and predictions. + +
+ + +
+ +# tf.compat.v1.saved_model.classification_signature_def + + + + + + + + + +Creates classification signature from given examples and predictions. + + + + + + + + + +This function produces signatures intended for use with the TensorFlow Serving +Classify API (tensorflow_serving/apis/prediction_service.proto), and so +constrains the input and output types to those allowed by TensorFlow Serving. + + + + + + + + + + + + + + + + +
+`examples` + +A string `Tensor`, expected to accept serialized tf.Examples. +
+`classes` + +A string `Tensor`. Note that the ClassificationResponse message +requires that class labels are strings, not integers or anything else. +
+`scores` + +a float `Tensor`. +
+ + + + + + + + + + + +
+A classification-flavored signature_def. +
+ + + + + + + + + + + + +
+`ValueError` + +If examples is `None`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/constants.md b/site/en/api_docs/python/tf/compat/v1/saved_model/constants.md new file mode 100644 index 00000000000..ed00464329c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/constants.md @@ -0,0 +1,45 @@ +description: Constants for SavedModel save and restore operations. + +
+ + + + + + + + + + + + + +
+ +# Module: tf.compat.v1.saved_model.constants + + + + + + + + + +Constants for SavedModel save and restore operations. + + + +## Other Members + +* `ASSETS_DIRECTORY = 'assets'` +* `ASSETS_KEY = 'saved_model_assets'` +* `DEBUG_DIRECTORY = 'debug'` +* `DEBUG_INFO_FILENAME_PB = 'saved_model_debug_info.pb'` +* `LEGACY_INIT_OP_KEY = 'legacy_init_op'` +* `MAIN_OP_KEY = 'saved_model_main_op'` +* `SAVED_MODEL_FILENAME_PB = 'saved_model.pb'` +* `SAVED_MODEL_FILENAME_PBTXT = 'saved_model.pbtxt'` +* `SAVED_MODEL_SCHEMA_VERSION = 1` +* `VARIABLES_DIRECTORY = 'variables'` +* `VARIABLES_FILENAME = 'variables'` diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/contains_saved_model.md b/site/en/api_docs/python/tf/compat/v1/saved_model/contains_saved_model.md new file mode 100644 index 00000000000..4f3691f1bef --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/contains_saved_model.md @@ -0,0 +1,80 @@ +description: Checks whether the provided export directory could contain a SavedModel. + +
+ + +
+ +# tf.compat.v1.saved_model.contains_saved_model + + + + + + + + + +Checks whether the provided export directory could contain a SavedModel. + + + + + + + + + +Note that the method does not load any data by itself. If the method returns +`false`, the export directory definitely does not contain a SavedModel. If the +method returns `true`, the export directory may contain a SavedModel but +provides no guarantee that it can be loaded. + + + + + + + + + + +
+`export_dir` + +Absolute string path to possible export location. For example, +'/my/foo/model'. +
+ + + + + + + + + + + +
+True if the export directory contains SavedModel files, False otherwise. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/experimental.md b/site/en/api_docs/python/tf/compat/v1/saved_model/experimental.md new file mode 100644 index 00000000000..2aa0e4e0e3d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/experimental.md @@ -0,0 +1,25 @@ +description: Public API for tf.saved_model.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.saved_model.experimental + + + + + + + + + +Public API for tf.saved_model.experimental namespace. + + + +## Functions + +[`save(...)`](../../../../tf/saved_model/save.md): Exports the Trackable object `obj` to [SavedModel format](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md). + diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/get_tensor_from_tensor_info.md b/site/en/api_docs/python/tf/compat/v1/saved_model/get_tensor_from_tensor_info.md new file mode 100644 index 00000000000..130c4df5310 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/get_tensor_from_tensor_info.md @@ -0,0 +1,120 @@ +description: Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated) + +
+ + +
+ +# tf.compat.v1.saved_model.get_tensor_from_tensor_info + + + + + + + + + +Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info or tf.compat.v1.saved_model.get_tensor_from_tensor_info. + + + + + + + + + + + + + + + + +
+`tensor_info` + +A TensorInfo proto describing a Tensor or SparseTensor or +CompositeTensor. +
+`graph` + +The tf.Graph in which tensors are looked up. If None, the +current default graph is used. +
+`import_scope` + +If not None, names in `tensor_info` are prefixed with this +string before lookup. +
+ + + + + + + + + + + +
+The Tensor or SparseTensor or CompositeTensor in `graph` described by +`tensor_info`. +
+ + + + + + + + + + + + + + + +
+`KeyError` + +If `tensor_info` does not correspond to a tensor in `graph`. +
+`ValueError` + +If `tensor_info` is malformed. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/is_valid_signature.md b/site/en/api_docs/python/tf/compat/v1/saved_model/is_valid_signature.md new file mode 100644 index 00000000000..f1eda7274c3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/is_valid_signature.md @@ -0,0 +1,44 @@ +description: Determine whether a SignatureDef can be served by TensorFlow Serving. + +
+ + +
+ +# tf.compat.v1.saved_model.is_valid_signature + + + + + + + + + +Determine whether a SignatureDef can be served by TensorFlow Serving. + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/load.md b/site/en/api_docs/python/tf/compat/v1/saved_model/load.md new file mode 100644 index 00000000000..0317b62e04a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/load.md @@ -0,0 +1,130 @@ +description: Loads the model from a SavedModel as specified by tags. (deprecated) + +
+ + +
+ +# tf.compat.v1.saved_model.load + + + + + + + + + +Loads the model from a SavedModel as specified by tags. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0. + + + + + + + + + + + + + + + + + + + + + + +
+`sess` + +The TensorFlow session to restore the variables. +
+`tags` + +Set of string tags to identify the required MetaGraphDef. These should +correspond to the tags used when saving the variables using the +SavedModel `save()` API. +
+`export_dir` + +Directory in which the SavedModel protocol buffer and variables +to be loaded are located. +
+`import_scope` + +Optional `string` -- if specified, prepend this string +followed by '/' to all loaded tensor names. This scope is applied to +tensor instances loaded into the passed session, but it is *not* written +through to the static `MetaGraphDef` protocol buffer that is returned. +
+`**saver_kwargs` + +Optional keyword arguments passed through to Saver. +
+ + + + + + + + + + + +
+The `MetaGraphDef` protocol buffer loaded in the provided session. This +can be used to further extract signature-defs, collection-defs, etc. +
+ + + + + + + + + + + + +
+`RuntimeError` + +MetaGraphDef associated with the tags cannot be found. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/loader.md b/site/en/api_docs/python/tf/compat/v1/saved_model/loader.md new file mode 100644 index 00000000000..50b9866f3bd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/loader.md @@ -0,0 +1,72 @@ +description: Loader functionality for SavedModel with hermetic, language-neutral exports. + +
+ + +
+ +# Module: tf.compat.v1.saved_model.loader + + + + + + + + + +Loader functionality for SavedModel with hermetic, language-neutral exports. + + +Load and restore capability for a SavedModel, which may include multiple meta +graph defs. Each SavedModel is associated with a single checkpoint. Each meta +graph def is saved with one or more tags, which are used to identify the exact +meta graph def to load. + +The `load` operation requires the session in which to restore the graph +definition and variables, the tags used to identify the meta graph def to +load and the location of the SavedModel. + +Upon a load, the subset of variables and assets supplied as part of the specific +meta graph def, will be restored into the supplied session. The values of the +variables though will correspond to the saved values from the first meta graph +added to the SavedModel using `add_meta_graph_and_variables(...)` in +`builder.py`. + +#### Typical usage: + + + +```python +... +builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir) + +with tf.compat.v1.Session(graph=tf.Graph()) as sess: + ... + builder.add_meta_graph_and_variables(sess, + ["foo-tag"], + signature_def_map=foo_signatures, + assets_collection=foo_assets) +... + +with tf.compat.v1.Session(graph=tf.Graph()) as sess: + ... + builder.add_meta_graph(["bar-tag", "baz-tag"], + assets_collection=bar_baz_assets) +... + +builder.save() + +... +with tf.compat.v1.Session(graph=tf.Graph()) as sess: + tf.compat.v1.saved_model.loader.load(sess, ["foo-tag"], export_dir) + ... + +``` + +## Functions + +[`load(...)`](../../../../tf/compat/v1/saved_model/load.md): Loads the model from a SavedModel as specified by tags. (deprecated) + +[`maybe_saved_model_directory(...)`](../../../../tf/compat/v1/saved_model/contains_saved_model.md): Checks whether the provided export directory could contain a SavedModel. + diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/main_op.md b/site/en/api_docs/python/tf/compat/v1/saved_model/main_op.md new file mode 100644 index 00000000000..f8082cab66a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/main_op.md @@ -0,0 +1,29 @@ +description: SavedModel main op. + +
+ + +
+ +# Module: tf.compat.v1.saved_model.main_op + + + + + + + + + +SavedModel main op. + + +Builds a main op that defines the sequence of ops to be run as part of the +SavedModel load/restore operations. + +## Functions + +[`main_op(...)`](../../../../tf/compat/v1/saved_model/main_op/main_op.md): Returns a main op to init variables and tables. (deprecated) + +[`main_op_with_restore(...)`](../../../../tf/compat/v1/saved_model/main_op_with_restore.md): Returns a main op to init variables, tables and restore the graph. (deprecated) + diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/main_op/main_op.md b/site/en/api_docs/python/tf/compat/v1/saved_model/main_op/main_op.md new file mode 100644 index 00000000000..64e0c8134fb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/main_op/main_op.md @@ -0,0 +1,51 @@ +description: Returns a main op to init variables and tables. (deprecated) + +
+ + +
+ +# tf.compat.v1.saved_model.main_op.main_op + + + + + + + + + +Returns a main op to init variables and tables. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.main_op.main_op. + +Returns the main op including the group of ops that initializes all +variables, initializes local variables and initialize all tables. + + + + + + + + + +
+The set of ops to be run as part of the main op upon the load operation. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/main_op_with_restore.md b/site/en/api_docs/python/tf/compat/v1/saved_model/main_op_with_restore.md new file mode 100644 index 00000000000..2707862c612 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/main_op_with_restore.md @@ -0,0 +1,82 @@ +description: Returns a main op to init variables, tables and restore the graph. (deprecated) + +
+ + +
+ +# tf.compat.v1.saved_model.main_op_with_restore + + + + + + + + + +Returns a main op to init variables, tables and restore the graph. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.main_op_with_restore or tf.compat.v1.saved_model.main_op.main_op_with_restore. + +Returns the main op including the group of ops that initializes all +variables, initialize local variables, initialize all tables and the restore +op name. + + + + + + + + + + +
+`restore_op_name` + +Name of the op to use to restore the graph. +
+ + + + + + + + + + + +
+The set of ops to be run as part of the main op upon the load operation. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/predict_signature_def.md b/site/en/api_docs/python/tf/compat/v1/saved_model/predict_signature_def.md new file mode 100644 index 00000000000..cefcf65ff37 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/predict_signature_def.md @@ -0,0 +1,102 @@ +description: Creates prediction signature from given inputs and outputs. + +
+ + +
+ +# tf.compat.v1.saved_model.predict_signature_def + + + + + + + + + +Creates prediction signature from given inputs and outputs. + + + + + + + + + +This function produces signatures intended for use with the TensorFlow Serving +Predict API (tensorflow_serving/apis/prediction_service.proto). This API +imposes no constraints on the input and output types. + + + + + + + + + + + + + +
+`inputs` + +dict of string to `Tensor`. +
+`outputs` + +dict of string to `Tensor`. +
+ + + + + + + + + + + +
+A prediction-flavored signature_def. +
+ + + + + + + + + + + + +
+`ValueError` + +If inputs or outputs is `None`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/regression_signature_def.md b/site/en/api_docs/python/tf/compat/v1/saved_model/regression_signature_def.md new file mode 100644 index 00000000000..255f4626af8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/regression_signature_def.md @@ -0,0 +1,102 @@ +description: Creates regression signature from given examples and predictions. + +
+ + +
+ +# tf.compat.v1.saved_model.regression_signature_def + + + + + + + + + +Creates regression signature from given examples and predictions. + + + + + + + + + +This function produces signatures intended for use with the TensorFlow Serving +Regress API (tensorflow_serving/apis/prediction_service.proto), and so +constrains the input and output types to those allowed by TensorFlow Serving. + + + + + + + + + + + + + +
+`examples` + +A string `Tensor`, expected to accept serialized tf.Examples. +
+`predictions` + +A float `Tensor`. +
+ + + + + + + + + + + +
+A regression-flavored signature_def. +
+ + + + + + + + + + + + +
+`ValueError` + +If examples is `None`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/signature_constants.md b/site/en/api_docs/python/tf/compat/v1/saved_model/signature_constants.md new file mode 100644 index 00000000000..924f09ec525 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/signature_constants.md @@ -0,0 +1,45 @@ +description: Signature constants for SavedModel save and restore operations. + +
+ + + + + + + + + + + + + +
+ +# Module: tf.compat.v1.saved_model.signature_constants + + + + + + + + + +Signature constants for SavedModel save and restore operations. + + + +## Other Members + +* `CLASSIFY_INPUTS = 'inputs'` +* `CLASSIFY_METHOD_NAME = 'tensorflow/serving/classify'` +* `CLASSIFY_OUTPUT_CLASSES = 'classes'` +* `CLASSIFY_OUTPUT_SCORES = 'scores'` +* `DEFAULT_SERVING_SIGNATURE_DEF_KEY = 'serving_default'` +* `PREDICT_INPUTS = 'inputs'` +* `PREDICT_METHOD_NAME = 'tensorflow/serving/predict'` +* `PREDICT_OUTPUTS = 'outputs'` +* `REGRESS_INPUTS = 'inputs'` +* `REGRESS_METHOD_NAME = 'tensorflow/serving/regress'` +* `REGRESS_OUTPUTS = 'outputs'` diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/signature_def_utils.md b/site/en/api_docs/python/tf/compat/v1/saved_model/signature_def_utils.md new file mode 100644 index 00000000000..5db0740831a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/signature_def_utils.md @@ -0,0 +1,38 @@ +description: SignatureDef utility functions. + +
+ + +
+ +# Module: tf.compat.v1.saved_model.signature_def_utils + + + + + + + + + +SignatureDef utility functions. + + +Utility functions for building and inspecting SignatureDef protos. + +## Classes + +[`class MethodNameUpdater`](../../../../tf/compat/v1/saved_model/signature_def_utils/MethodNameUpdater.md): Updates the method name(s) of the SavedModel stored in the given path. + +## Functions + +[`build_signature_def(...)`](../../../../tf/compat/v1/saved_model/build_signature_def.md): Utility function to build a SignatureDef protocol buffer. + +[`classification_signature_def(...)`](../../../../tf/compat/v1/saved_model/classification_signature_def.md): Creates classification signature from given examples and predictions. + +[`is_valid_signature(...)`](../../../../tf/compat/v1/saved_model/is_valid_signature.md): Determine whether a SignatureDef can be served by TensorFlow Serving. + +[`predict_signature_def(...)`](../../../../tf/compat/v1/saved_model/predict_signature_def.md): Creates prediction signature from given inputs and outputs. + +[`regression_signature_def(...)`](../../../../tf/compat/v1/saved_model/regression_signature_def.md): Creates regression signature from given examples and predictions. + diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/signature_def_utils/MethodNameUpdater.md b/site/en/api_docs/python/tf/compat/v1/saved_model/signature_def_utils/MethodNameUpdater.md new file mode 100644 index 00000000000..840e568984a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/signature_def_utils/MethodNameUpdater.md @@ -0,0 +1,212 @@ +description: Updates the method name(s) of the SavedModel stored in the given path. + +
+ + + + + +
+ +# tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater + + + + + + + + + +Updates the method name(s) of the SavedModel stored in the given path. + + + + + + + +The `MethodNameUpdater` class provides the functionality to update the method +name field in the signature_defs of the given SavedModel. For example, it +can be used to replace the `predict` `method_name` to `regress`. + +Typical usages of the `MethodNameUpdater` +```python +... +updater = tf.compat.v1.saved_model.MethodNameUpdater(export_dir) +# Update all signature_defs with key "foo" in all meta graph defs. +updater.replace_method_name(signature_key="foo", method_name="regress") +# Update a single signature_def with key "bar" in the meta graph def with +# tags ["serve"] +updater.replace_method_name(signature_key="bar", method_name="classify", + tags="serve") +updater.save(new_export_dir) +``` + +Note: This function will only be available through the v1 compatibility +library as tf.compat.v1.saved_model.builder.MethodNameUpdater. + + + + + + + + + + +
+`export_dir` + +Directory containing the SavedModel files. +
+ + + + + + + + + + + + +
+`IOError` + +If the saved model file does not exist, or cannot be successfully +parsed. +
+ + + +## Methods + +

replace_method_name

+ +View source + + + +Replaces the method_name in the specified signature_def. + +This will match and replace multiple sig defs iff tags is None (i.e when +multiple `MetaGraph`s have a signature_def with the same key). +If tags is not None, this will only replace a single signature_def in the +`MetaGraph` with matching tags. + + + + + + + + + + + + + + + + +
Args
+`signature_key` + +Key of the signature_def to be updated. +
+`method_name` + +The new method_name to replace the existing one. +
+`tags` + +A tag or sequence of tags identifying the `MetaGraph` to update. If +None, all meta graphs will be updated. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `signature_key` or `method_name` is not defined, if no meta graphs +were found with the associated tags, or if no meta graph has a +signature_def that matches `signature_key`. +
+ + + +

save

+ +View source + + + +Saves the updated `SavedModel`. + + + + + + + + + + + +
Args
+`new_export_dir` + +Path where the updated `SavedModel` will be saved. If +None, the input `SavedModel` will be overridden with the updates. +
+ + + + + + + + + + + + +
Raises
+`errors.OpError` + +If there are errors during the file save operation. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/simple_save.md b/site/en/api_docs/python/tf/compat/v1/saved_model/simple_save.md new file mode 100644 index 00000000000..399d84e864e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/simple_save.md @@ -0,0 +1,115 @@ +description: Convenience function to build a SavedModel suitable for serving. (deprecated) + +
+ + +
+ +# tf.compat.v1.saved_model.simple_save + + + + + + + + + +Convenience function to build a SavedModel suitable for serving. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.simple_save. + +In many common cases, saving models for serving will be as simple as: + + simple_save(session, + export_dir, + inputs={"x": x, "y": y}, + outputs={"z": z}) + +Although in many cases it's not necessary to understand all of the many ways + to configure a SavedModel, this method has a few practical implications: + - It will be treated as a graph for inference / serving (i.e. uses the tag + `saved_model.SERVING`) + - The SavedModel will load in TensorFlow Serving and supports the + [Predict + API](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/predict.proto). + To use the Classify, Regress, or MultiInference APIs, please + use either + [tf.Estimator](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator) + or the lower level + [SavedModel + APIs](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md). + - Some TensorFlow ops depend on information on disk or other information + called "assets". These are generally handled automatically by adding the + assets to the `GraphKeys.ASSET_FILEPATHS` collection. Only assets in that + collection are exported; if you need more custom behavior, you'll need to + use the + [SavedModelBuilder](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/builder.py). + +More information about SavedModel and signatures can be found here: +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md. + + + + + + + + + + + + + + + + + + + + + + +
+`session` + +The TensorFlow session from which to save the meta graph and +variables. +
+`export_dir` + +The path to which the SavedModel will be stored. +
+`inputs` + +dict mapping string input names to tensors. These are added +to the SignatureDef as the inputs. +
+`outputs` + +dict mapping string output names to tensors. These are added +to the SignatureDef as the outputs. +
+`legacy_init_op` + +Legacy support for op or group of ops to execute after the +restore op upon a load. +
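+
+#### Example:
+
+A complete sketch of exporting a toy graph with `simple_save`, assuming
+TensorFlow 2.x with eager execution disabled; the graph, variable values
+and export location are illustrative only:
+
+```python
+import os
+import tempfile
+
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+# Toy graph: y = x @ w + b.
+x = tf.compat.v1.placeholder(tf.float32, shape=[None, 1], name="x")
+w = tf.Variable([[2.0]], name="w")
+b = tf.Variable([0.5], name="b")
+y = tf.add(tf.matmul(x, w), b, name="y")
+
+# simple_save expects a directory that does not exist yet.
+export_dir = os.path.join(tempfile.mkdtemp(), "saved_model")
+
+with tf.compat.v1.Session() as sess:
+  sess.run(tf.compat.v1.global_variables_initializer())
+  tf.compat.v1.saved_model.simple_save(
+      sess, export_dir, inputs={"x": x}, outputs={"y": y})
+```
+
+The exported model carries the `serve` tag and a `serving_default`
+Predict signature, so it can later be reloaded with
+`tf.compat.v1.saved_model.loader.load(sess, ["serve"], export_dir)`.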
+ diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/tag_constants.md b/site/en/api_docs/python/tf/compat/v1/saved_model/tag_constants.md new file mode 100644 index 00000000000..4040aa58a2b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/tag_constants.md @@ -0,0 +1,31 @@ +description: Common tags used for graphs in SavedModel. + +
+ + + + + + +
+ +# Module: tf.compat.v1.saved_model.tag_constants + + + + + + + + + +Common tags used for graphs in SavedModel. + + + +## Other Members + +* `GPU = 'gpu'` +* `SERVING = 'serve'` +* `TPU = 'tpu'` +* `TRAINING = 'train'` diff --git a/site/en/api_docs/python/tf/compat/v1/saved_model/utils.md b/site/en/api_docs/python/tf/compat/v1/saved_model/utils.md new file mode 100644 index 00000000000..9a821634dea --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/saved_model/utils.md @@ -0,0 +1,28 @@ +description: SavedModel utility functions. + +
+ + +
+ +# Module: tf.compat.v1.saved_model.utils + + + + + + + + + +SavedModel utility functions. + + +Utility functions to assist with setup and construction of the SavedModel proto. + +## Functions + +[`build_tensor_info(...)`](../../../../tf/compat/v1/saved_model/build_tensor_info.md): Utility function to build TensorInfo proto from a Tensor. (deprecated) + +[`get_tensor_from_tensor_info(...)`](../../../../tf/compat/v1/saved_model/get_tensor_from_tensor_info.md): Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated) + diff --git a/site/en/api_docs/python/tf/compat/v1/scalar_mul.md b/site/en/api_docs/python/tf/compat/v1/scalar_mul.md new file mode 100644 index 00000000000..a67ed0c757a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scalar_mul.md @@ -0,0 +1,109 @@ +description: Multiplies a scalar times a Tensor or IndexedSlices object. + +
+ + +
+ +# tf.compat.v1.scalar_mul + + + + + + + + + +Multiplies a scalar times a `Tensor` or `IndexedSlices` object. + + + + + + + + + +Intended for use in gradient code which might deal with `IndexedSlices` +objects, which are easy to multiply by a scalar but more expensive to +multiply with arbitrary tensors. + + + + + + + + + + + + + + + + +
+`scalar` + +A 0-D scalar `Tensor`. Must have known shape. +
+`x` + +A `Tensor` or `IndexedSlices` to be scaled. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+`scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `scalar` is not a 0-D `Tensor` with a known shape. +
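+
+#### Example:
+
+A short sketch run eagerly; the values are illustrative. The second call
+shows the `IndexedSlices` case, where only the slice values are scaled:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+print(tf.compat.v1.scalar_mul(2.0, x))
+# [[2. 4.]
+#  [6. 8.]]
+
+# Scaling IndexedSlices (e.g. a sparse gradient) touches only `values`.
+slices = tf.IndexedSlices(values=tf.constant([[1.0, 1.0]]),
+                          indices=tf.constant([0]),
+                          dense_shape=tf.constant([3, 2], dtype=tf.int64))
+scaled = tf.compat.v1.scalar_mul(0.5, slices)
+print(scaled.values)  # [[0.5 0.5]]
+```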
+ diff --git a/site/en/api_docs/python/tf/compat/v1/scan.md b/site/en/api_docs/python/tf/compat/v1/scan.md new file mode 100644 index 00000000000..b34c57a415b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scan.md @@ -0,0 +1,217 @@ +description: scan on the list of tensors unpacked from elems on dimension 0. + +
+ + +
+ +# tf.compat.v1.scan + + + + + + + + + +scan on the list of tensors unpacked from `elems` on dimension 0. + + + + + + + +The simplest version of `scan` repeatedly applies the callable `fn` to a +sequence of elements from first to last. The elements are made of the tensors +unpacked from `elems` on dimension 0. The callable fn takes two tensors as +arguments. The first argument is the accumulated value computed from the +preceding invocation of fn, and the second is the value at the current +position of `elems`. If `initializer` is None, `elems` must contain at least +one element, and its first element is used as the initializer. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is `[len(values)] + fn(initializer, values[0]).shape`. +If reverse=True, it's fn(initializer, values[-1]).shape. + +This method also allows multi-arity `elems` and accumulator. If `elems` +is a (possibly nested) list or tuple of tensors, then each of these tensors +must have a matching first (unpack) dimension. The second argument of +`fn` must match the structure of `elems`. + +If no `initializer` is provided, the output structure and dtypes of `fn` +are assumed to be the same as its input; and in this case, the first +argument of `fn` must match the structure of `elems`. + +If an `initializer` is provided, then the output of `fn` must have the same +structure as `initializer`; and the first argument of `fn` must match +this structure. + +For example, if `elems` is `(t1, [t2, t3])` and `initializer` is +`[i1, i2]` then an appropriate signature for `fn` in `python2` is: +`fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]):` and `fn` must return a list, +`[acc_n1, acc_n2]`. An alternative correct signature for `fn`, and the + one that works in `python3`, is: +`fn = lambda a, t:`, where `a` and `t` correspond to the input tuples. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The callable to be performed. It accepts two arguments. The first will +have the same structure as `initializer` if one is provided, otherwise it +will have the same structure as `elems`. The second will have the same +(possibly nested) structure as `elems`. Its output must have the same +structure as `initializer` if one is provided, otherwise it must have the +same structure as `elems`. +
+`elems` + +A tensor or (possibly nested) sequence of tensors, each of which will +be unpacked along their first dimension. The nested sequence of the +resulting slices will be the first argument to `fn`. +
+`initializer` + +(optional) A tensor or (possibly nested) sequence of tensors, +initial value for the accumulator, and the expected output type of `fn`. +
+`parallel_iterations` + +(optional) The number of iterations allowed to run in +parallel. +
+`back_prop` + +(optional) True enables support for back propagation. +
+`swap_memory` + +(optional) True enables GPU-CPU memory swapping. +
+`infer_shape` + +(optional) False disables tests for consistent output shapes. +
+`reverse` + +(optional) True scans the tensor last to first (instead of first to +last). +
+`name` + +(optional) Name prefix for the returned tensors. +
+ + + + + + + + + + + +
+A tensor or (possibly nested) sequence of tensors. Each tensor packs the +results of applying `fn` to tensors unpacked from `elems` along the first +dimension, and the previous accumulator value(s), from first to last (or +last to first, if `reverse=True`). +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `fn` is not callable or the structure of the output of +`fn` and `initializer` do not match. +
+`ValueError` + +if the lengths of the output of `fn` and `initializer` +do not match. +
+ + + +#### Examples: + +```python +elems = np.array([1, 2, 3, 4, 5, 6]) +sum = scan(lambda a, x: a + x, elems) +# sum == [1, 3, 6, 10, 15, 21] +sum = scan(lambda a, x: a + x, elems, reverse=True) +# sum == [21, 20, 18, 15, 11, 6] +``` + +```python +elems = np.array([1, 2, 3, 4, 5, 6]) +initializer = np.array(0) +sum_one = scan( + lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer) +# sum_one == [1, 2, 3, 4, 5, 6] +``` + +```python +elems = np.array([1, 0, 0, 0, 0, 0]) +initializer = (np.array(0), np.array(1)) +fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer) +# fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13]) +``` diff --git a/site/en/api_docs/python/tf/compat/v1/scatter_add.md b/site/en/api_docs/python/tf/compat/v1/scatter_add.md new file mode 100644 index 00000000000..88c515d9012 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scatter_add.md @@ -0,0 +1,120 @@ +description: Adds sparse updates to the variable referenced by resource. + +
+ + +
+ +# tf.compat.v1.scatter_add + + + + + + + + + +Adds sparse updates to the variable referenced by `resource`. + + + + + + + +This operation computes + +```python + # Scalar indices + ref[indices, ...] += updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] += updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] += updates[i, ..., j, ...] +``` + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the updated value. +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions add. + +Requires `updates.shape = indices.shape + ref.shape[1:]`. + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A `Variable`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to store in `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the assignment will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+Same as `ref`. Returned as a convenience for operations that want +to use the updated values after the update is done. +
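+
+#### Example:
+
+A small sketch, assuming TensorFlow 2.x with eager execution disabled so
+the v1 `Session` workflow shown here applies; the values are illustrative.
+Note how the duplicated index accumulates both updates:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
+indices = tf.constant([0, 2, 2])      # index 2 appears twice ...
+updates = tf.constant([10, 20, 30])   # ... so 20 and 30 are both added there
+add = tf.compat.v1.scatter_add(ref, indices, updates)
+
+with tf.compat.v1.Session() as sess:
+  sess.run(tf.compat.v1.global_variables_initializer())
+  print(sess.run(add))  # [11  2 53  4  5  6  7  8]
+```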
+ diff --git a/site/en/api_docs/python/tf/compat/v1/scatter_div.md b/site/en/api_docs/python/tf/compat/v1/scatter_div.md new file mode 100644 index 00000000000..4708206c8ef --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scatter_div.md @@ -0,0 +1,120 @@ +description: Divides a variable reference by sparse updates. + +
+ + +
+ +# tf.compat.v1.scatter_div + + + + + + + + + +Divides a variable reference by sparse updates. + + + + + + + +This operation computes + +```python + # Scalar indices + ref[indices, ...] /= updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] /= updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...] +``` + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions divide. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = +[]`. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, +`float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, +`qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, +`uint32`, `uint64`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. A +tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. A tensor of values +that `ref` is divided by. +
+`use_locking` + +An optional `bool`. Defaults to `False`. If True, the operation +will be protected by a lock; otherwise the behavior is undefined, but may +exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
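+
+#### Example:
+
+A small sketch with a float variable (so the division is not truncating),
+assuming TensorFlow 2.x with eager execution disabled; the values are
+illustrative:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+ref = tf.Variable([10.0, 20.0, 30.0, 40.0])
+indices = tf.constant([1, 3])
+updates = tf.constant([2.0, 4.0])
+div = tf.compat.v1.scatter_div(ref, indices, updates)
+
+with tf.compat.v1.Session() as sess:
+  sess.run(tf.compat.v1.global_variables_initializer())
+  print(sess.run(div))  # [10. 10. 30. 10.]
+```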
+ diff --git a/site/en/api_docs/python/tf/compat/v1/scatter_max.md b/site/en/api_docs/python/tf/compat/v1/scatter_max.md new file mode 100644 index 00000000000..507503b58be --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scatter_max.md @@ -0,0 +1,123 @@ +description: Reduces sparse updates into a variable reference using the max operation. + +
+ + +
+ +# tf.compat.v1.scatter_max + + + + + + + + + +Reduces sparse updates into a variable reference using the `max` operation. + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] = max(ref[indices, ...], updates[...]) + + # Vector indices (for each i) + ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...]) + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], + updates[i, ..., j, ...]) + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions combine. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = +[]`. + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `half`, +`bfloat16`, `float32`, `float64`, `int32`, `int64`. Should be from a +`Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. A +tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. A tensor of updated +values to reduce into `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. If True, the update +will be protected by a lock; otherwise the behavior is undefined, but may +exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
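+
+#### Example:
+
+A small sketch, assuming TensorFlow 2.x with eager execution disabled; the
+values are illustrative. The duplicated index shows contributions
+combining through `max`:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+ref = tf.Variable([1.0, 5.0, 3.0])
+indices = tf.constant([0, 0, 2])         # both updates for index 0 combine via max
+updates = tf.constant([4.0, 2.0, -1.0])
+maxed = tf.compat.v1.scatter_max(ref, indices, updates)
+
+with tf.compat.v1.Session() as sess:
+  sess.run(tf.compat.v1.global_variables_initializer())
+  print(sess.run(maxed))  # [4. 5. 3.]
+```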
+ diff --git a/site/en/api_docs/python/tf/compat/v1/scatter_min.md b/site/en/api_docs/python/tf/compat/v1/scatter_min.md new file mode 100644 index 00000000000..d398c7c15f6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scatter_min.md @@ -0,0 +1,123 @@ +description: Reduces sparse updates into a variable reference using the min operation. + +
+ + +
+ +# tf.compat.v1.scatter_min + + + + + + + + + +Reduces sparse updates into a variable reference using the `min` operation. + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] = min(ref[indices, ...], updates[...]) + + # Vector indices (for each i) + ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...]) + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], + updates[i, ..., j, ...]) + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions combine. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = +[]`. + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `half`, +`bfloat16`, `float32`, `float64`, `int32`, `int64`. Should be from a +`Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. A +tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. A tensor of updated +values to reduce into `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. If True, the update +will be protected by a lock; otherwise the behavior is undefined, but may +exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/scatter_mul.md b/site/en/api_docs/python/tf/compat/v1/scatter_mul.md new file mode 100644 index 00000000000..83136c36d70 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scatter_mul.md @@ -0,0 +1,120 @@ +description: Multiplies sparse updates into a variable reference. + +
+ + +
+ +# tf.compat.v1.scatter_mul + + + + + + + + + +Multiplies sparse updates into a variable reference. + + + + + + + +This operation computes + +```python + # Scalar indices + ref[indices, ...] *= updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] *= updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...] +``` + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions multiply. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = +[]`. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, +`float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, +`qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, +`uint32`, `uint64`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. A +tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. A tensor of updated +values to multiply to `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. If True, the operation +will be protected by a lock; otherwise the behavior is undefined, but may +exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/scatter_nd_add.md b/site/en/api_docs/python/tf/compat/v1/scatter_nd_add.md new file mode 100644 index 00000000000..65fab970ed2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scatter_nd_add.md @@ -0,0 +1,132 @@ +description: Applies sparse addition to individual values or slices in a Variable. + +
+ + +
+ +# tf.compat.v1.scatter_nd_add + + + + + + + + + +Applies sparse addition to individual values or slices in a Variable. + + + + + + + +`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`. + +`indices` must be integer tensor, containing indices into `ref`. +It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`. + +The innermost dimension of `indices` (with length `K`) corresponds to +indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th +dimension of `ref`. + +`updates` is `Tensor` of rank `Q-1+P-K` with shape: + +``` +[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]] +``` + +For example, say we want to add 4 scattered elements to a rank-1 tensor to +8 elements. In Python, that addition would look like this: + +```python +ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) +indices = tf.constant([[4], [3], [1], [7]]) +updates = tf.constant([9, 10, 11, 12]) +add = tf.compat.v1.scatter_nd_add(ref, indices, updates) +with tf.compat.v1.Session() as sess: + print sess.run(add) +``` + +The resulting update to ref would look like this: + + [1, 13, 3, 14, 14, 6, 7, 20] + +See tf.scatter_nd for more details about how to make updates to +slices. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, +`float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, +`qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, +`uint32`, `uint64`. A mutable Tensor. Should be from a Variable node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into ref. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to add to ref. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the assignment will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/scatter_nd_sub.md b/site/en/api_docs/python/tf/compat/v1/scatter_nd_sub.md new file mode 100644 index 00000000000..5ce09c0ced5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scatter_nd_sub.md @@ -0,0 +1,133 @@ +description: Applies sparse subtraction to individual values or slices in a Variable. + +
+ + +
+ +# tf.compat.v1.scatter_nd_sub + + + + + + + + + +Applies sparse subtraction to individual values or slices in a Variable. + + + + + + + +`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`. + +`indices` must be integer tensor, containing indices into `ref`. +It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`. + +The innermost dimension of `indices` (with length `K`) corresponds to +indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th +dimension of `ref`. + +`updates` is `Tensor` of rank `Q-1+P-K` with shape: + +``` +[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]] +``` + +For example, say we want to subtract 4 scattered elements from a rank-1 tensor +with 8 elements. In Python, that update would look like this: + +```python +ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) +indices = tf.constant([[4], [3], [1] ,[7]]) +updates = tf.constant([9, 10, 11, 12]) +op = tf.compat.v1.scatter_nd_sub(ref, indices, updates) +with tf.compat.v1.Session() as sess: + print sess.run(op) +``` + +The resulting update to ref would look like this: + + [1, -9, 3, -6, -6, 6, 7, -4] + +See tf.scatter_nd for more details about how to make updates to +slices. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, +`float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, +`qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, +`uint32`, `uint64`. A mutable Tensor. Should be from a Variable node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into ref. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to subtract from `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. If True, the assignment will +be protected by a lock; otherwise the behavior is undefined, +but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/scatter_nd_update.md b/site/en/api_docs/python/tf/compat/v1/scatter_nd_update.md new file mode 100644 index 00000000000..6a9b82b2b95 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scatter_nd_update.md @@ -0,0 +1,131 @@ +description: Applies sparse updates to individual values or slices in a Variable. + +
+ + +
+ +# tf.compat.v1.scatter_nd_update + + + + + + + + + +Applies sparse `updates` to individual values or slices in a Variable. + + + + + + + +`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`. + +`indices` must be integer tensor, containing indices into `ref`. +It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`. + +The innermost dimension of `indices` (with length `K`) corresponds to +indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th +dimension of `ref`. + +`updates` is `Tensor` of rank `Q-1+P-K` with shape: + +``` +[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]. +``` + +For example, say we want to update 4 scattered elements to a rank-1 tensor to +8 elements. In Python, that update would look like this: + +```python + ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) + indices = tf.constant([[4], [3], [1] ,[7]]) + updates = tf.constant([9, 10, 11, 12]) + update = tf.compat.v1.scatter_nd_update(ref, indices, updates) + with tf.compat.v1.Session() as sess: + print sess.run(update) +``` + +The resulting update to ref would look like this: + + [1, 11, 3, 10, 9, 6, 7, 12] + +See tf.scatter_nd for more details about how to make updates to +slices. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A Variable. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into ref. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. A tensor of updated +values to assign to `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `True`. If True, the assignment will +be protected by a lock; otherwise the behavior is undefined, +but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The value of the variable after the update. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/scatter_sub.md b/site/en/api_docs/python/tf/compat/v1/scatter_sub.md new file mode 100644 index 00000000000..4ef76f03595 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scatter_sub.md @@ -0,0 +1,123 @@ +description: Subtracts sparse updates to a variable reference. + +
+ + +
+ +# tf.compat.v1.scatter_sub + + + + + + + + + +Subtracts sparse updates to a variable reference. + + + + + + + +```python + # Scalar indices + ref[indices, ...] -= updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] -= updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...] +``` + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their (negated) contributions add. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or +`updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, +`float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, +`qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, +`uint32`, `uint64`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to subtract from `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the subtraction will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/scatter_update.md b/site/en/api_docs/python/tf/compat/v1/scatter_update.md new file mode 100644 index 00000000000..dcd029b31d1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/scatter_update.md @@ -0,0 +1,122 @@ +description: Applies sparse updates to a variable reference. + +
+ + +
+ +# tf.compat.v1.scatter_update + + + + + + + + + +Applies sparse updates to a variable reference. + + + + + + + +This operation computes + +```python + # Scalar indices + ref[indices, ...] = updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] = updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] = updates[i, ..., j, ...] +``` + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +If values in `ref` is to be updated more than once, because there are +duplicate entries in `indices`, the order at which the updates happen +for each value is undefined. + +Requires `updates.shape = indices.shape + ref.shape[1:]`. + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A `Variable`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to store in `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `True`. +If True, the assignment will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+Same as `ref`. Returned as a convenience for operations that want +to use the updated values after the update is done. +
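+
+#### Example:
+
+A small sketch, assuming TensorFlow 2.x with eager execution disabled; the
+values are illustrative. Each listed index is overwritten with the
+corresponding update:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+ref = tf.Variable([10, 20, 30, 40])
+indices = tf.constant([3, 1])
+updates = tf.constant([99, 77])
+update = tf.compat.v1.scatter_update(ref, indices, updates)
+
+with tf.compat.v1.Session() as sess:
+  sess.run(tf.compat.v1.global_variables_initializer())
+  print(sess.run(update))  # [10 77 30 99]
+```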
+ diff --git a/site/en/api_docs/python/tf/compat/v1/serialize_many_sparse.md b/site/en/api_docs/python/tf/compat/v1/serialize_many_sparse.md new file mode 100644 index 00000000000..7f86b5ccdda --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/serialize_many_sparse.md @@ -0,0 +1,115 @@ +description: Serialize N-minibatch SparseTensor into an [N, 3] Tensor. + +
+ + +
+ +# tf.compat.v1.serialize_many_sparse + + + + + + + + + +Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`. + + + + + + + + + +The `SparseTensor` must have rank `R` greater than 1, and the first dimension +is treated as the minibatch dimension. Elements of the `SparseTensor` +must be sorted in increasing order of this first dimension. The serialized +`SparseTensor` objects going into each row of the output `Tensor` will have +rank `R-1`. + +The minibatch size `N` is extracted from `sparse_shape[0]`. + + + + + + + + + + + + + + + + +
+`sp_input` + +The input rank `R` `SparseTensor`. +
+`name` + +A name prefix for the returned tensors (optional). +
+`out_type` + +The `dtype` to use for serialization. +
+ + + + + + + + + + + +
+A matrix (2-D `Tensor`) with `N` rows and `3` columns. Each column +represents serialized `SparseTensor`'s indices, values, and shape +(respectively). +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
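+
+#### Example:
+
+A short eager sketch; the `SparseTensor` contents are illustrative. The
+first dimension (size 3) is the minibatch dimension, so the result has
+one row per minibatch entry:
+
+```python
+import tensorflow as tf
+
+st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2], [2, 1]],
+                            values=[1, 2, 3],
+                            dense_shape=[3, 4])
+
+serialized = tf.compat.v1.serialize_many_sparse(st)
+print(serialized.shape)  # (3, 3): one row per minibatch entry
+
+# The rows can later be rebuilt with
+# tf.io.deserialize_many_sparse(serialized, dtype=tf.int32).
+```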
+ diff --git a/site/en/api_docs/python/tf/compat/v1/serialize_sparse.md b/site/en/api_docs/python/tf/compat/v1/serialize_sparse.md new file mode 100644 index 00000000000..5bf602e8801 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/serialize_sparse.md @@ -0,0 +1,107 @@ +description: Serialize a SparseTensor into a 3-vector (1-D Tensor) object. + +
+ + +
+ +# tf.compat.v1.serialize_sparse + + + + + + + + + +Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sp_input` + +The input `SparseTensor`. +
+`name` + +A name prefix for the returned tensors (optional). +
+`out_type` + +The `dtype` to use for serialization. +
+ + + + + + + + + + + +
+A 3-vector (1-D `Tensor`), with each column representing the serialized +`SparseTensor`'s indices, values, and shape (respectively). +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/set_random_seed.md b/site/en/api_docs/python/tf/compat/v1/set_random_seed.md new file mode 100644 index 00000000000..6bd57230099 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/set_random_seed.md @@ -0,0 +1,153 @@ +description: Sets the graph-level random seed for the default graph. + +
+ + +
+ +# tf.compat.v1.set_random_seed + + + + + + + + + +Sets the graph-level random seed for the default graph. + + + + + + + + + +Operations that rely on a random seed actually derive it from two seeds: +the graph-level and operation-level seeds. This sets the graph-level seed. + +Its interactions with operation-level seeds is as follows: + + 1. If neither the graph-level nor the operation seed is set: + A random seed is used for this op. + 2. If the graph-level seed is set, but the operation seed is not: + The system deterministically picks an operation seed in conjunction with + the graph-level seed so that it gets a unique random sequence. Within the + same version of tensorflow and user code, this sequence is deterministic. + However across different versions, this sequence might change. If the + code depends on particular seeds to work, specify both graph-level + and operation-level seeds explicitly. + 3. If the graph-level seed is not set, but the operation seed is set: + A default graph-level seed and the specified operation seed are used to + determine the random sequence. + 4. If both the graph-level and the operation seed are set: + Both seeds are used in conjunction to determine the random sequence. + +To illustrate the user-visible effects, consider these examples: + +To generate different sequences across sessions, set neither +graph-level nor op-level seeds: + +```python +a = tf.random.uniform([1]) +b = tf.random.normal([1]) + +print("Session 1") +with tf.compat.v1.Session() as sess1: + print(sess1.run(a)) # generates 'A1' + print(sess1.run(a)) # generates 'A2' + print(sess1.run(b)) # generates 'B1' + print(sess1.run(b)) # generates 'B2' + +print("Session 2") +with tf.compat.v1.Session() as sess2: + print(sess2.run(a)) # generates 'A3' + print(sess2.run(a)) # generates 'A4' + print(sess2.run(b)) # generates 'B3' + print(sess2.run(b)) # generates 'B4' +``` + +To generate the same repeatable sequence for an op across sessions, set the +seed for the op: + +```python +a = tf.random.uniform([1], seed=1) +b = tf.random.normal([1]) + +# Repeatedly running this block with the same graph will generate the same +# sequence of values for 'a', but different sequences of values for 'b'. +print("Session 1") +with tf.compat.v1.Session() as sess1: + print(sess1.run(a)) # generates 'A1' + print(sess1.run(a)) # generates 'A2' + print(sess1.run(b)) # generates 'B1' + print(sess1.run(b)) # generates 'B2' + +print("Session 2") +with tf.compat.v1.Session() as sess2: + print(sess2.run(a)) # generates 'A1' + print(sess2.run(a)) # generates 'A2' + print(sess2.run(b)) # generates 'B3' + print(sess2.run(b)) # generates 'B4' +``` + +To make the random sequences generated by all ops be repeatable across +sessions, set a graph-level seed: + +```python +tf.compat.v1.random.set_random_seed(1234) +a = tf.random.uniform([1]) +b = tf.random.normal([1]) + +# Repeatedly running this block with the same graph will generate the same +# sequences of 'a' and 'b'. +print("Session 1") +with tf.compat.v1.Session() as sess1: + print(sess1.run(a)) # generates 'A1' + print(sess1.run(a)) # generates 'A2' + print(sess1.run(b)) # generates 'B1' + print(sess1.run(b)) # generates 'B2' + +print("Session 2") +with tf.compat.v1.Session() as sess2: + print(sess2.run(a)) # generates 'A1' + print(sess2.run(a)) # generates 'A2' + print(sess2.run(b)) # generates 'B1' + print(sess2.run(b)) # generates 'B2' +``` + + + + + + + + + + +
+`seed` + +integer. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/setdiff1d.md b/site/en/api_docs/python/tf/compat/v1/setdiff1d.md new file mode 100644 index 00000000000..79c7bb50c83 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/setdiff1d.md @@ -0,0 +1,120 @@ +description: Computes the difference between two lists of numbers or strings. + +
+ + +
+ +# tf.compat.v1.setdiff1d + + + + + + + + + +Computes the difference between two lists of numbers or strings. + + + + + + + +Given a list `x` and a list `y`, this operation returns a list `out` that +represents all values that are in `x` but not in `y`. The returned list `out` +is sorted in the same order that the numbers appear in `x` (duplicates are +preserved). This operation also returns a list `idx` that represents the +position of each `out` element in `x`. In other words: + +`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]` + +For example, given this input: + +``` +x = [1, 2, 3, 4, 5, 6] +y = [1, 3, 5] +``` + +This operation would return: + +``` +out ==> [2, 4, 6] +idx ==> [1, 3, 5] +``` + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. 1-D. Values to keep. +
+`y` + +A `Tensor`. Must have the same type as `x`. 1-D. Values to remove. +
+`out_idx` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (out, idx). +
+`out` + +A `Tensor`. Has the same type as `x`. +
+`idx` + +A `Tensor` of type `out_idx`. +
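+
+#### Example:
+
+A short eager sketch mirroring the values above:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1, 2, 3, 4, 5, 6])
+y = tf.constant([1, 3, 5])
+
+out, idx = tf.compat.v1.setdiff1d(x, y)
+print(out.numpy())  # [2 4 6]  values in x but not in y, in x's order
+print(idx.numpy())  # [1 3 5]  positions of those values within x
+```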
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sets.md b/site/en/api_docs/python/tf/compat/v1/sets.md new file mode 100644 index 00000000000..8e6f70efb85 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sets.md @@ -0,0 +1,39 @@ +description: Tensorflow set operations. + +
+ + +
+ +# Module: tf.compat.v1.sets + + + + + + + + + +Tensorflow set operations. + + + +## Functions + +[`difference(...)`](../../../tf/sets/difference.md): Compute set difference of elements in last dimension of `a` and `b`. + +[`intersection(...)`](../../../tf/sets/intersection.md): Compute set intersection of elements in last dimension of `a` and `b`. + +[`set_difference(...)`](../../../tf/sets/difference.md): Compute set difference of elements in last dimension of `a` and `b`. + +[`set_intersection(...)`](../../../tf/sets/intersection.md): Compute set intersection of elements in last dimension of `a` and `b`. + +[`set_size(...)`](../../../tf/sets/size.md): Compute number of unique elements along last dimension of `a`. + +[`set_union(...)`](../../../tf/sets/union.md): Compute set union of elements in last dimension of `a` and `b`. + +[`size(...)`](../../../tf/sets/size.md): Compute number of unique elements along last dimension of `a`. + +[`union(...)`](../../../tf/sets/union.md): Compute set union of elements in last dimension of `a` and `b`. + diff --git a/site/en/api_docs/python/tf/compat/v1/shape.md b/site/en/api_docs/python/tf/compat/v1/shape.md new file mode 100644 index 00000000000..c1c6bdf3736 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/shape.md @@ -0,0 +1,89 @@ +description: Returns the shape of a tensor. + +
+ + +
+ +# tf.compat.v1.shape + + + + + + + + + +Returns the shape of a tensor. + + + + + + + +This operation returns a 1-D integer tensor representing the shape of `input`. + +#### For example: + + + +```python +t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]) +tf.shape(t) # [2, 2, 3] +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` or `SparseTensor`. +
+`name` + +A name for the operation (optional). +
+`out_type` + +(Optional) The specified output type of the operation (`int32` +or `int64`). Defaults to tf.int32. +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/signal.md b/site/en/api_docs/python/tf/compat/v1/signal.md new file mode 100644 index 00000000000..e25735363c9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/signal.md @@ -0,0 +1,92 @@ +description: Signal processing operations. + +
+ + +
+ +# Module: tf.compat.v1.signal + + + + + + + + + +Signal processing operations. + + +See the [tf.signal](https://tensorflow.org/api_guides/python/contrib.signal) +guide. + + +[hamming]: https://en.wikipedia.org/wiki/Window_function#Hamming_window +[hann]: https://en.wikipedia.org/wiki/Window_function#Hann_window +[mel]: https://en.wikipedia.org/wiki/Mel_scale +[mfcc]: https://en.wikipedia.org/wiki/Mel-frequency_cepstrum +[stft]: https://en.wikipedia.org/wiki/Short-time_Fourier_transform + +## Functions + +[`dct(...)`](../../../tf/signal/dct.md): Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`. + +[`fft(...)`](../../../tf/signal/fft.md): Fast Fourier transform. + +[`fft2d(...)`](../../../tf/signal/fft2d.md): 2D fast Fourier transform. + +[`fft3d(...)`](../../../tf/signal/fft3d.md): 3D fast Fourier transform. + +[`fftshift(...)`](../../../tf/signal/fftshift.md): Shift the zero-frequency component to the center of the spectrum. + +[`frame(...)`](../../../tf/signal/frame.md): Expands `signal`'s `axis` dimension into frames of `frame_length`. + +[`hamming_window(...)`](../../../tf/signal/hamming_window.md): Generate a [Hamming][hamming] window. + +[`hann_window(...)`](../../../tf/signal/hann_window.md): Generate a [Hann window][hann]. + +[`idct(...)`](../../../tf/signal/idct.md): Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of `input`. + +[`ifft(...)`](../../../tf/signal/ifft.md): Inverse fast Fourier transform. + +[`ifft2d(...)`](../../../tf/signal/ifft2d.md): Inverse 2D fast Fourier transform. + +[`ifft3d(...)`](../../../tf/signal/ifft3d.md): Inverse 3D fast Fourier transform. + +[`ifftshift(...)`](../../../tf/signal/ifftshift.md): The inverse of fftshift. + +[`inverse_mdct(...)`](../../../tf/signal/inverse_mdct.md): Computes the inverse modified DCT of `mdcts`. + +[`inverse_stft(...)`](../../../tf/signal/inverse_stft.md): Computes the inverse [Short-time Fourier Transform][stft] of `stfts`. + +[`inverse_stft_window_fn(...)`](../../../tf/signal/inverse_stft_window_fn.md): Generates a window function that can be used in `inverse_stft`. + +[`irfft(...)`](../../../tf/signal/irfft.md): Inverse real-valued fast Fourier transform. + +[`irfft2d(...)`](../../../tf/signal/irfft2d.md): Inverse 2D real-valued fast Fourier transform. + +[`irfft3d(...)`](../../../tf/signal/irfft3d.md): Inverse 3D real-valued fast Fourier transform. + +[`kaiser_bessel_derived_window(...)`](../../../tf/signal/kaiser_bessel_derived_window.md): Generate a [Kaiser Bessel derived window][kbd]. + +[`kaiser_window(...)`](../../../tf/signal/kaiser_window.md): Generate a [Kaiser window][kaiser]. + +[`linear_to_mel_weight_matrix(...)`](../../../tf/signal/linear_to_mel_weight_matrix.md): Returns a matrix to warp linear scale spectrograms to the [mel scale][mel]. + +[`mdct(...)`](../../../tf/signal/mdct.md): Computes the [Modified Discrete Cosine Transform][mdct] of `signals`. + +[`mfccs_from_log_mel_spectrograms(...)`](../../../tf/signal/mfccs_from_log_mel_spectrograms.md): Computes [MFCCs][mfcc] of `log_mel_spectrograms`. + +[`overlap_and_add(...)`](../../../tf/signal/overlap_and_add.md): Reconstructs a signal from a framed representation. + +[`rfft(...)`](../../../tf/signal/rfft.md): Real-valued fast Fourier transform. + +[`rfft2d(...)`](../../../tf/signal/rfft2d.md): 2D real-valued fast Fourier transform. + +[`rfft3d(...)`](../../../tf/signal/rfft3d.md): 3D real-valued fast Fourier transform. 
+ +[`stft(...)`](../../../tf/signal/stft.md): Computes the [Short-time Fourier Transform][stft] of `signals`. + +[`vorbis_window(...)`](../../../tf/signal/vorbis_window.md): Generate a [Vorbis power complementary window][vorbis]. + diff --git a/site/en/api_docs/python/tf/compat/v1/size.md b/site/en/api_docs/python/tf/compat/v1/size.md new file mode 100644 index 00000000000..f02dceb78e1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/size.md @@ -0,0 +1,96 @@ +description: Returns the size of a tensor. + +
+ + +
+ +# tf.compat.v1.size + + + + + + + + + +Returns the size of a tensor. + + + + + + + +Returns a 0-D `Tensor` representing the number of elements in `input` +of type `out_type`. Defaults to tf.int32. + +#### For example: + + + +```python +t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]) +tf.size(t) # 12 +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` or `SparseTensor`. +
+`name` + +A name for the operation (optional). +
+`out_type` + +(Optional) The specified non-quantized numeric output type of the +operation. Defaults to tf.int32. +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. Defaults to tf.int32. +
+ + + + +#### Numpy Compatibility +Equivalent to np.size() + diff --git a/site/en/api_docs/python/tf/compat/v1/space_to_batch.md b/site/en/api_docs/python/tf/compat/v1/space_to_batch.md new file mode 100644 index 00000000000..971edde5ee5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/space_to_batch.md @@ -0,0 +1,188 @@ +description: SpaceToBatch for 4-D tensors of type T. + +
+ + +
+ +# tf.compat.v1.space_to_batch + + + + + + + + + +SpaceToBatch for 4-D tensors of type T. + + + + + + + + + +This is a legacy version of the more general SpaceToBatchND. + +Zero-pads and then rearranges (permutes) blocks of spatial data into batch. +More specifically, this op outputs a copy of the input tensor where values from +the `height` and `width` dimensions are moved to the `batch` dimension. After +the zero-padding, both `height` and `width` of the input must be divisible by the +block size. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. 4-D with shape `[batch, height, width, depth]`. +
+`paddings` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D tensor of non-negative integers with shape `[2, 2]`. It specifies +the padding of the input with zeros across the spatial dimensions as follows: + +paddings = [[pad_top, pad_bottom], [pad_left, pad_right]] + +The effective spatial dimensions of the zero-padded input tensor will be: + +height_pad = pad_top + height + pad_bottom +width_pad = pad_left + width + pad_right + +The attr `block_size` must be greater than one. It indicates the block size. + +* Non-overlapping blocks of size `block_size x block size` in the height and +width dimensions are rearranged into the batch dimension at each location. +* The batch of the output tensor is `batch * block_size * block_size`. +* Both height_pad and width_pad must be divisible by block_size. + +The shape of the output will be: + +[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, +depth] + +Some examples: + +(1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2: + +``` +x = [[[[1], [2]], [[3], [4]]]] +``` + +The output tensor has shape `[4, 1, 1, 1]` and value: + +``` +[[[[1]]], [[[2]]], [[[3]]], [[[4]]]] +``` + +(2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2: + +``` +x = [[[[1, 2, 3], [4, 5, 6]], +[[7, 8, 9], [10, 11, 12]]]] +``` + +The output tensor has shape `[4, 1, 1, 3]` and value: + +``` +[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] +``` + +(3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]], +[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +The output tensor has shape `[4, 2, 2, 1]` and value: + +``` +x = [[[[1], [3]], [[9], [11]]], +[[[2], [4]], [[10], [12]]], +[[[5], [7]], [[13], [15]]], +[[[6], [8]], [[14], [16]]]] +``` + +(4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]]], +[[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +The output tensor has shape `[8, 1, 2, 1]` and value: + +``` +x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]], +[[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]] +``` + +Among others, this operation is useful for reducing atrous convolution into +regular convolution. +
+`block_size` + +An `int` that is `>= 2`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/space_to_depth.md b/site/en/api_docs/python/tf/compat/v1/space_to_depth.md new file mode 100644 index 00000000000..c33b260b86b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/space_to_depth.md @@ -0,0 +1,179 @@ +description: SpaceToDepth for tensors of type T. + +
+ + +
+ +# tf.compat.v1.space_to_depth + + + + + + + + + +SpaceToDepth for tensors of type T. + + + + + + + + + +Rearranges blocks of spatial data, into depth. More specifically, +this op outputs a copy of the input tensor where values from the `height` +and `width` dimensions are moved to the `depth` dimension. +The attr `block_size` indicates the input block size. + + * Non-overlapping blocks of size `block_size x block size` are rearranged + into depth at each location. + * The depth of the output tensor is `block_size * block_size * input_depth`. + * The Y, X coordinates within each block of the input become the high order + component of the output channel index. + * The input tensor's height and width must be divisible by block_size. + +The `data_format` attr specifies the layout of the input and output tensors +with the following options: + "NHWC": `[ batch, height, width, channels ]` + "NCHW": `[ batch, channels, height, width ]` + "NCHW_VECT_C": + `qint8 [ batch, channels / 4, height, width, 4 ]` + +It is useful to consider the operation as transforming a 6-D Tensor. +e.g. for data_format = NHWC, + Each element in the input tensor can be specified via 6 coordinates, + ordered by decreasing memory layout significance as: + n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates + within the output image, bX, bY means coordinates + within the input block, iC means input channels). + The output would be a transpose to the following layout: + n,oY,oX,bY,bX,iC + +This operation is useful for resizing the activations between convolutions +(but keeping all data), e.g. instead of pooling. It is also useful for training +purely convolutional models. + +For example, given an input of shape `[1, 2, 2, 1]`, data_format = "NHWC" and +block_size = 2: + +``` +x = [[[[1], [2]], + [[3], [4]]]] +``` + +This operation will output a tensor of shape `[1, 1, 1, 4]`: + +``` +[[[[1, 2, 3, 4]]]] +``` + +Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`, +the corresponding output will have a single element (i.e. width and height are +both 1) and will have a depth of 4 channels (1 * block_size * block_size). +The output element shape is `[1, 1, 4]`. + +For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g. + +``` +x = [[[[1, 2, 3], [4, 5, 6]], + [[7, 8, 9], [10, 11, 12]]]] +``` + +This operation, for block_size of 2, will return the following tensor of shape +`[1, 1, 1, 12]` + +``` +[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]] +``` + +Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2: + +``` +x = [[[[1], [2], [5], [6]], + [[3], [4], [7], [8]], + [[9], [10], [13], [14]], + [[11], [12], [15], [16]]]] +``` + +the operator will return the following tensor of shape `[1 2 2 4]`: + +``` +x = [[[[1, 2, 3, 4], + [5, 6, 7, 8]], + [[9, 10, 11, 12], + [13, 14, 15, 16]]]] +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`block_size` + +An `int` that is `>= 2`. The size of the spatial block. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW", "NCHW_VECT_C"`. Defaults to `"NHWC"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse.md b/site/en/api_docs/python/tf/compat/v1/sparse.md new file mode 100644 index 00000000000..3c068f847dc --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse.md @@ -0,0 +1,94 @@ +description: Sparse Tensor Representation. + +
+ + +
+ +# Module: tf.compat.v1.sparse + + + + + + + + + +Sparse Tensor Representation. + + +See also tf.SparseTensor. + +## Classes + +[`class SparseConditionalAccumulator`](../../../tf/compat/v1/SparseConditionalAccumulator.md): A conditional accumulator for aggregating sparse gradients. + +[`class SparseTensor`](../../../tf/sparse/SparseTensor.md): Represents a sparse tensor. + +## Functions + +[`add(...)`](../../../tf/compat/v1/sparse_add.md): Adds two tensors, at least one of each is a `SparseTensor`. (deprecated arguments) + +[`concat(...)`](../../../tf/compat/v1/sparse_concat.md): Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments) + +[`cross(...)`](../../../tf/sparse/cross.md): Generates sparse cross from a list of sparse and dense tensors. + +[`cross_hashed(...)`](../../../tf/sparse/cross_hashed.md): Generates hashed sparse cross from a list of sparse and dense tensors. + +[`expand_dims(...)`](../../../tf/sparse/expand_dims.md): Inserts a dimension of 1 into a tensor's shape. + +[`eye(...)`](../../../tf/sparse/eye.md): Creates a two-dimensional sparse tensor with ones along the diagonal. + +[`fill_empty_rows(...)`](../../../tf/sparse/fill_empty_rows.md): Fills empty rows in the input 2-D `SparseTensor` with a default value. + +[`from_dense(...)`](../../../tf/sparse/from_dense.md): Converts a dense tensor into a sparse tensor. + +[`mask(...)`](../../../tf/sparse/mask.md): Masks elements of `IndexedSlices`. + +[`matmul(...)`](../../../tf/sparse/sparse_dense_matmul.md): Multiply SparseTensor (or dense Matrix) (of rank 2) "A" by dense matrix + +[`maximum(...)`](../../../tf/sparse/maximum.md): Returns the element-wise max of two SparseTensors. + +[`merge(...)`](../../../tf/compat/v1/sparse_merge.md): Combines a batch of feature ids and values into a single `SparseTensor`. (deprecated) + +[`minimum(...)`](../../../tf/sparse/minimum.md): Returns the element-wise min of two SparseTensors. + +[`placeholder(...)`](../../../tf/compat/v1/sparse_placeholder.md): Inserts a placeholder for a sparse tensor that will be always fed. + +[`reduce_max(...)`](../../../tf/compat/v1/sparse_reduce_max.md): Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments) + +[`reduce_max_sparse(...)`](../../../tf/compat/v1/sparse_reduce_max_sparse.md): Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments) + +[`reduce_sum(...)`](../../../tf/compat/v1/sparse_reduce_sum.md): Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments) + +[`reduce_sum_sparse(...)`](../../../tf/compat/v1/sparse_reduce_sum_sparse.md): Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments) + +[`reorder(...)`](../../../tf/sparse/reorder.md): Reorders a `SparseTensor` into the canonical, row-major ordering. + +[`reset_shape(...)`](../../../tf/sparse/reset_shape.md): Resets the shape of a `SparseTensor` with indices and values unchanged. + +[`reshape(...)`](../../../tf/sparse/reshape.md): Reshapes a `SparseTensor` to represent values in a new dense shape. + +[`retain(...)`](../../../tf/sparse/retain.md): Retains specified non-empty values within a `SparseTensor`. + +[`segment_mean(...)`](../../../tf/compat/v1/sparse_segment_mean.md): Computes the mean along sparse segments of a tensor. + +[`segment_sqrt_n(...)`](../../../tf/compat/v1/sparse_segment_sqrt_n.md): Computes the sum along sparse segments of a tensor divided by the sqrt(N). 
+ +[`segment_sum(...)`](../../../tf/compat/v1/sparse_segment_sum.md): Computes the sum along sparse segments of a tensor. + +[`slice(...)`](../../../tf/sparse/slice.md): Slice a `SparseTensor` based on the `start` and `size`. + +[`softmax(...)`](../../../tf/sparse/softmax.md): Applies softmax to a batched N-D `SparseTensor`. + +[`sparse_dense_matmul(...)`](../../../tf/sparse/sparse_dense_matmul.md): Multiply SparseTensor (or dense Matrix) (of rank 2) "A" by dense matrix "B". + +[`split(...)`](../../../tf/compat/v1/sparse_split.md): Split a `SparseTensor` into `num_split` tensors along `axis`. (deprecated arguments) + +[`to_dense(...)`](../../../tf/sparse/to_dense.md): Converts a `SparseTensor` into a dense tensor. + +[`to_indicator(...)`](../../../tf/sparse/to_indicator.md): Converts a `SparseTensor` of ids into a dense bool indicator tensor. + +[`transpose(...)`](../../../tf/sparse/transpose.md): Transposes a `SparseTensor`. + diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_add.md b/site/en/api_docs/python/tf/compat/v1/sparse_add.md new file mode 100644 index 00000000000..a799ca22a74 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_add.md @@ -0,0 +1,154 @@ +description: Adds two tensors, at least one of each is a SparseTensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.sparse_add + + + + + + + + + +Adds two tensors, at least one of each is a `SparseTensor`. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(thresh)`. They will be removed in a future version. +Instructions for updating: +thresh is deprecated, use threshold instead + +If one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If +both arguments are `SparseTensor`s, this returns a `SparseTensor`. The order +of arguments does not matter. Use vanilla tf.add() for adding two dense +`Tensor`s. + +The shapes of the two operands must match: broadcasting is not supported. + +The indices of any input `SparseTensor` are assumed ordered in standard +lexicographic order. If this is not the case, before this step run +`SparseReorder` to restore index ordering. + +If both arguments are sparse, we perform "clipping" as follows. By default, +if two values sum to zero at some index, the output `SparseTensor` would still +include that particular location in its index, storing a zero in the +corresponding value slot. To override this, callers can specify `thresh`, +indicating that if the sum has a magnitude strictly smaller than `thresh`, its +corresponding value and index would then not be included. In particular, +`thresh == 0.0` (default) means everything is kept and actual thresholding +happens only for a positive value. + +For example, suppose the logical sum of two sparse operands is (densified): + + [ 2] + [.1 0] + [ 6 -.2] + +Then, + +* `thresh == 0` (the default): all 5 index/value pairs will be returned. +* `thresh == 0.11`: only .1 and 0 will vanish, and the remaining three + index/value pairs will be returned. +* `thresh == 0.21`: .1, 0, and -.2 will vanish. + + + + + + + + + + + + + + + + + + + +
+`a` + +The first operand; `SparseTensor` or `Tensor`. +
+`b` + +The second operand; `SparseTensor` or `Tensor`. At least one operand +must be sparse. +
+`threshold` + +An optional 0-D `Tensor` (defaults to `0`). The magnitude +threshold that determines if an output value/index pair takes space. Its +dtype should match that of the values if they are real; if the latter are +complex64/complex128, then the dtype should be float32/float64, +respectively. +
+`thresh` + +Deprecated alias for `threshold`. +
+ + + + + + + + + + + +
+A `SparseTensor` or a `Tensor`, representing the sum. +
+ + + + + + + + + + + + +
+`TypeError` + +If both `a` and `b` are `Tensor`s. Use tf.add() instead. +
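A minimal sketch of the thresholding behavior described above (the values are illustrative; eager execution assumed):

```python
import tensorflow as tf

a = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[0.1, 6.0], dense_shape=[3, 3])
b = tf.SparseTensor(indices=[[0, 0], [2, 0]], values=[-0.1, 2.0], dense_shape=[3, 3])

# Default threshold of 0: the exact-zero sum at [0, 0] keeps its index/value slot.
kept = tf.compat.v1.sparse_add(a, b)

# threshold=0.11: entries whose magnitude is strictly smaller than 0.11
# (here the zero at [0, 0]) are dropped from the result.
dropped = tf.compat.v1.sparse_add(a, b, threshold=0.11)
```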
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_concat.md b/site/en/api_docs/python/tf/compat/v1/sparse_concat.md new file mode 100644 index 00000000000..f6155965b5a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_concat.md @@ -0,0 +1,211 @@ +description: Concatenates a list of SparseTensor along the specified dimension. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.sparse_concat + + + + + + + + + +Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(concat_dim)`. They will be removed in a future version. +Instructions for updating: +concat_dim is deprecated, use axis instead + +Concatenation is with respect to the dense versions of each sparse input. +It is assumed that each inputs is a `SparseTensor` whose elements are ordered +along increasing dimension number. + +If expand_nonconcat_dim is False, all inputs' shapes must match, except for +the concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are +allowed to vary among all inputs. + +The `indices`, `values`, and `shapes` lists must have the same length. + +If expand_nonconcat_dim is False, then the output shape is identical to the +inputs', except along the concat dimension, where it is the sum of the inputs' +sizes along that dimension. + +If expand_nonconcat_dim is True, then the output shape along the non-concat +dimensions will be expand to be the largest among all inputs, and it is the +sum of the inputs sizes along the concat dimension. + +The output elements will be resorted to preserve the sort order along +increasing dimension number. + +This op runs in `O(M log M)` time, where `M` is the total number of non-empty +values across all inputs. This is due to the need for an internal sort in +order to concatenate efficiently across an arbitrary dimension. + +For example, if `axis = 1` and the inputs are + + sp_inputs[0]: shape = [2, 3] + [0, 2]: "a" + [1, 0]: "b" + [1, 1]: "c" + + sp_inputs[1]: shape = [2, 4] + [0, 1]: "d" + [0, 2]: "e" + +then the output will be + + shape = [2, 7] + [0, 2]: "a" + [0, 4]: "d" + [0, 5]: "e" + [1, 0]: "b" + [1, 1]: "c" + +Graphically this is equivalent to doing + + [ a] concat [ d e ] = [ a d e ] + [b c ] [ ] [b c ] + +Another example, if 'axis = 1' and the inputs are + + sp_inputs[0]: shape = [3, 3] + [0, 2]: "a" + [1, 0]: "b" + [2, 1]: "c" + + sp_inputs[1]: shape = [2, 4] + [0, 1]: "d" + [0, 2]: "e" + +if expand_nonconcat_dim = False, this will result in an error. But if +expand_nonconcat_dim = True, this will result in: + + shape = [3, 7] + [0, 2]: "a" + [0, 4]: "d" + [0, 5]: "e" + [1, 0]: "b" + [2, 1]: "c" + +Graphically this is equivalent to doing + + [ a] concat [ d e ] = [ a d e ] + [b ] [ ] [b ] + [ c ] [ c ] + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`axis` + +Dimension to concatenate along. Must be in range [-rank, rank), +where rank is the number of dimensions in each input `SparseTensor`. +
+`sp_inputs` + +List of `SparseTensor` to concatenate. +
+`name` + +A name prefix for the returned tensors (optional). +
+`expand_nonconcat_dim` + +Whether to allow expansion in the non-concat +dimensions. Defaults to `False`. +
+`concat_dim` + +The old (deprecated) name for `axis`. +
+`expand_nonconcat_dims` + +Alias for `expand_nonconcat_dim`. +
+ + + + + + + + + + + +
+A `SparseTensor` with the concatenated output. +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_inputs` is not a list of `SparseTensor`. +
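The first example above, written out as a hedged, runnable sketch (eager execution assumed):

```python
import tensorflow as tf

sp0 = tf.SparseTensor(indices=[[0, 2], [1, 0], [1, 1]],
                      values=[b'a', b'b', b'c'], dense_shape=[2, 3])
sp1 = tf.SparseTensor(indices=[[0, 1], [0, 2]],
                      values=[b'd', b'e'], dense_shape=[2, 4])

out = tf.compat.v1.sparse_concat(axis=1, sp_inputs=[sp0, sp1])
print(out.dense_shape.numpy())  # [2 7]
print(out.values.numpy())       # [b'a' b'd' b'e' b'b' b'c']
```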
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_matmul.md b/site/en/api_docs/python/tf/compat/v1/sparse_matmul.md new file mode 100644 index 00000000000..cf87e6a4394 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_matmul.md @@ -0,0 +1,111 @@ +description: Multiply matrix "a" by matrix "b". + +
+ + +
+ +# tf.compat.v1.sparse_matmul + + + + + + + + + +Multiply matrix "a" by matrix "b". + + + + + + + +The inputs must be two-dimensional matrices and the inner dimension of "a" must +match the outer dimension of "b". Both "a" and "b" must be `Tensor`s not +`SparseTensor`s. This op is optimized for the case where at least one of "a" or +"b" is sparse, in the sense that they have a large proportion of zero values. +The breakeven for using this versus a dense matrix multiply on one platform was +30% zero values in the sparse matrix. + +The gradient computation of this operation will only take advantage of sparsity +in the input gradient when that gradient comes from a Relu. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `bfloat16`. +
+`b` + +A `Tensor`. Must be one of the following types: `float32`, `bfloat16`. +
+`transpose_a` + +An optional `bool`. Defaults to `False`. +
+`transpose_b` + +An optional `bool`. Defaults to `False`. +
+`a_is_sparse` + +An optional `bool`. Defaults to `False`. +
+`b_is_sparse` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
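A minimal sketch, with illustrative shapes and sparsity level. Note that both operands are ordinary dense `Tensor`s; `a_is_sparse` is only a performance hint:

```python
import numpy as np
import tensorflow as tf

# A dense matrix that merely happens to be roughly 90% zeros.
a = np.random.rand(64, 128).astype(np.float32)
a[a < 0.9] = 0.0
b = np.random.rand(128, 32).astype(np.float32)

c = tf.compat.v1.sparse_matmul(a, b, a_is_sparse=True)
print(c.shape)  # (64, 32)
```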
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_merge.md b/site/en/api_docs/python/tf/compat/v1/sparse_merge.md new file mode 100644 index 00000000000..30aa5226154 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_merge.md @@ -0,0 +1,205 @@ +description: Combines a batch of feature ids and values into a single SparseTensor. (deprecated) + +
+ + +
+ +# tf.compat.v1.sparse_merge + + + + + + + + + +Combines a batch of feature ids and values into a single `SparseTensor`. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +No similar op available at this time. + +The most common use case for this function occurs when feature ids and +their corresponding values are stored in `Example` protos on disk. +`parse_example` will return a batch of ids and a batch of values, and this +function joins them into a single logical `SparseTensor` for use in +functions such as `sparse_tensor_dense_matmul`, `sparse_to_dense`, etc. + +The `SparseTensor` returned by this function has the following properties: + + - `indices` is equivalent to `sp_ids.indices` with the last + dimension discarded and replaced with `sp_ids.values`. + - `values` is simply `sp_values.values`. + - If `sp_ids.dense_shape = [D0, D1, ..., Dn, K]`, then + `output.shape = [D0, D1, ..., Dn, vocab_size]`. + +For example, consider the following feature vectors: + +```python + vector1 = [-3, 0, 0, 0, 0, 0] + vector2 = [ 0, 1, 0, 4, 1, 0] + vector3 = [ 5, 0, 0, 9, 0, 0] +``` + +These might be stored sparsely in the following Example protos by storing +only the feature ids (column number if the vectors are treated as a matrix) +of the non-zero elements and the corresponding values: + +```python + examples = [Example(features={ + "ids": Feature(int64_list=Int64List(value=[0])), + "values": Feature(float_list=FloatList(value=[-3]))}), + Example(features={ + "ids": Feature(int64_list=Int64List(value=[1, 4, 3])), + "values": Feature(float_list=FloatList(value=[1, 1, 4]))}), + Example(features={ + "ids": Feature(int64_list=Int64List(value=[0, 3])), + "values": Feature(float_list=FloatList(value=[5, 9]))})] +``` + +The result of calling parse_example on these examples will produce a +dictionary with entries for "ids" and "values". Passing those two objects +to this function along with vocab_size=6, will produce a `SparseTensor` that +sparsely represents all three instances. Namely, the `indices` property will +contain the coordinates of the non-zero entries in the feature matrix (the +first dimension is the row number in the matrix, i.e., the index within the +batch, and the second dimension is the column number, i.e., the feature id); +`values` will contain the actual values. `shape` will be the shape of the +original matrix, i.e., (3, 6). For our example above, the output will be +equal to: + +```python + SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]], + values=[-3, 1, 4, 1, 5, 9], + dense_shape=[3, 6]) +``` + +This method generalizes to higher-dimensions by simply providing a list for +both the sp_ids as well as the vocab_size. +In this case the resulting `SparseTensor` has the following properties: + - `indices` is equivalent to `sp_ids[0].indices` with the last + dimension discarded and concatenated with + `sp_ids[0].values, sp_ids[1].values, ...`. + - `values` is simply `sp_values.values`. + - If `sp_ids.dense_shape = [D0, D1, ..., Dn, K]`, then + `output.shape = [D0, D1, ..., Dn] + vocab_size`. + + + + + + + + + + + + + + + + + + + + + + +
+`sp_ids` + +A single `SparseTensor` with `values` property of type `int32` +or `int64`, or a Python list of such `SparseTensor`s. +
+`sp_values` + +A `SparseTensor` of any type. +
+`vocab_size` + +A scalar `int64` Tensor (or Python int) containing the new size +of the last dimension, `all(0 <= sp_ids.values < vocab_size)`. +Or a list thereof with `all(0 <= sp_ids[i].values < vocab_size[i])` for +all `i`. +
+`name` + +A name prefix for the returned tensors (optional). +
+`already_sorted` + +A boolean to specify whether the per-batch values in +`sp_values` are already sorted. If so, sorting is skipped; defaults to `False` +(optional). +
+ + + + + + + + + + + +
+A `SparseTensor` compactly representing a batch of feature ids and values, +useful for passing to functions that expect such a `SparseTensor`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `sp_values` is not a `SparseTensor`. Or if `sp_ids` is neither +a `SparseTensor` nor a list thereof. Or if `vocab_size` is not a +`Tensor` or a Python int and `sp_ids` is a `SparseTensor`. Or if +`vocab_size` is not a `Tensor` or Python int, or a list thereof, and `sp_ids` is a list. +
+`ValueError` + +If `sp_ids` and `vocab_size` are lists of different lengths. +
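A hedged sketch of the example above, building `sp_ids` and `sp_values` by hand instead of via `parse_example` (eager execution assumed; the values are stand-ins for the three example vectors):

```python
import tensorflow as tf

# Hand-built stand-ins for the "ids" and "values" batches described above.
indices = [[0, 0], [1, 0], [1, 1], [1, 2], [2, 0], [2, 1]]
sp_ids = tf.SparseTensor(indices=indices,
                         values=tf.constant([0, 1, 4, 3, 0, 3], dtype=tf.int64),
                         dense_shape=[3, 3])
sp_values = tf.SparseTensor(indices=indices,
                            values=[-3.0, 1.0, 1.0, 4.0, 5.0, 9.0],
                            dense_shape=[3, 3])

merged = tf.compat.v1.sparse_merge(sp_ids, sp_values, vocab_size=6)
print(merged.dense_shape.numpy())  # [3 6]
```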
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_placeholder.md b/site/en/api_docs/python/tf/compat/v1/sparse_placeholder.md new file mode 100644 index 00000000000..64f98f4f576 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_placeholder.md @@ -0,0 +1,138 @@ +description: Inserts a placeholder for a sparse tensor that will be always fed. + +
+ + +
+ +# tf.compat.v1.sparse_placeholder + + + + + + + + + +Inserts a placeholder for a sparse tensor that will be always fed. + + + + + + + + + +**Important**: This sparse tensor will produce an error if evaluated. +Its value must be fed using the `feed_dict` optional argument to +`Session.run()`, `Tensor.eval()`, or `Operation.run()`. + +#### For example: + + + +```python +x = tf.compat.v1.sparse.placeholder(tf.float32) +y = tf.sparse.reduce_sum(x) + +with tf.compat.v1.Session() as sess: + print(sess.run(y)) # ERROR: will fail because x was not fed. + + indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64) + values = np.array([1.0, 2.0], dtype=np.float32) + shape = np.array([7, 9, 2], dtype=np.int64) + print(sess.run(y, feed_dict={ + x: tf.compat.v1.SparseTensorValue(indices, values, shape)})) # Will + succeed. + print(sess.run(y, feed_dict={ + x: (indices, values, shape)})) # Will succeed. + + sp = tf.SparseTensor(indices=indices, values=values, dense_shape=shape) + sp_value = sp.eval(session=sess) + print(sess.run(y, feed_dict={x: sp_value})) # Will succeed. +``` + +@compatibility{eager} Placeholders are not compatible with eager execution. + + + + + + + + + + + + + + + + +
+`dtype` + +The type of `values` elements in the tensor to be fed. +
+`shape` + +The shape of the tensor to be fed (optional). If the shape is not +specified, you can feed a sparse tensor of any shape. +
+`name` + +A name for prefixing the operations (optional). +
+ + + + + + + + + + + +
+A `SparseTensor` that may be used as a handle for feeding a value, but not +evaluated directly. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If eager execution is enabled. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_reduce_max.md b/site/en/api_docs/python/tf/compat/v1/sparse_reduce_max.md new file mode 100644 index 00000000000..e50e06649b6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_reduce_max.md @@ -0,0 +1,152 @@ +description: Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.sparse_reduce_max + + + + + + + + + +Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(reduction_axes)`. They will be removed in a future version. +Instructions for updating: +reduction_axes is deprecated, use axis instead + +This Op takes a SparseTensor and is the sparse counterpart to +tf.reduce_max(). In particular, this Op also returns a dense `Tensor` +instead of a sparse one. + +Note: A gradient is not defined for this function, so it can't be used +in training models that need gradient descent. + +Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless +`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in +`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained +with length 1. + +If `reduction_axes` has no entries, all dimensions are reduced, and a tensor +with a single element is returned. Additionally, the axes can be negative, +similar to the indexing rules in Python. + +The values not defined in `sp_input` don't participate in the reduce max, +as opposed to be implicitly assumed 0 -- hence it can return negative values +for sparse `reduction_axes`. But, in case there are no values in +`reduction_axes`, it will reduce to 0. See second example below. + +#### For example: + + + +```python +# 'x' represents [[1, ?, 2] +# [?, 3, ?]] +# where ? is implicitly-zero. +tf.sparse.reduce_max(x) ==> 3 +tf.sparse.reduce_max(x, 0) ==> [1, 3, 2] +tf.sparse.reduce_max(x, 1) ==> [2, 3] # Can also use -1 as the axis. +tf.sparse.reduce_max(x, 1, keepdims=True) ==> [[2], [3]] +tf.sparse.reduce_max(x, [0, 1]) ==> 3 + +# 'y' represents [[-7, ?] +# [ 4, 3] +# [ ?, ?] +tf.sparse.reduce_max(x, 1) ==> [-7, 4, 0] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`sp_input` + +The SparseTensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce; list or scalar. If `None` (the +default), reduces all dimensions. +
+`keepdims` + +If true, retain reduced dimensions with length 1. +
+`reduction_axes` + +Deprecated name of `axis`. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced Tensor. +
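The `x` in the comments above can be built explicitly; a minimal sketch, assuming eager execution:

```python
import tensorflow as tf

# 'x' from the comments above: [[1, ?, 2], [?, 3, ?]] with ? implicitly zero.
x = tf.SparseTensor(indices=[[0, 0], [0, 2], [1, 1]],
                    values=[1, 2, 3], dense_shape=[2, 3])

print(tf.compat.v1.sparse_reduce_max(x).numpy())          # 3
print(tf.compat.v1.sparse_reduce_max(x, axis=1).numpy())  # [2 3]
```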
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_reduce_max_sparse.md b/site/en/api_docs/python/tf/compat/v1/sparse_reduce_max_sparse.md new file mode 100644 index 00000000000..2a1a78680a9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_reduce_max_sparse.md @@ -0,0 +1,123 @@ +description: Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.sparse_reduce_max_sparse + + + + + + + + + +Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +This Op takes a SparseTensor and is the sparse counterpart to +tf.reduce_max(). In contrast to SparseReduceSum, this Op returns a +SparseTensor. + +Note: A gradient is not defined for this function, so it can't be used +in training models that need gradient descent. + +Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless +`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in +`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained +with length 1. + +If `reduction_axes` has no entries, all dimensions are reduced, and a tensor +with a single element is returned. Additionally, the axes can be negative, +which are interpreted according to the indexing rules in Python. + + + + + + + + + + + + + + + + + + + + + + +
+`sp_input` + +The SparseTensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce; list or scalar. If `None` (the +default), reduces all dimensions. +
+`keepdims` + +If true, retain reduced dimensions with length 1. +
+`reduction_axes` + +Deprecated name of `axis`. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced SparseTensor. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_reduce_sum.md b/site/en/api_docs/python/tf/compat/v1/sparse_reduce_sum.md new file mode 100644 index 00000000000..fef30e5ff45 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_reduce_sum.md @@ -0,0 +1,139 @@ +description: Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.sparse_reduce_sum + + + + + + + + + +Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(reduction_axes)`. They will be removed in a future version. +Instructions for updating: +reduction_axes is deprecated, use axis instead + +This Op takes a SparseTensor and is the sparse counterpart to +tf.reduce_sum(). In particular, this Op also returns a dense `Tensor` +instead of a sparse one. + +Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless +`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in +`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained +with length 1. + +If `reduction_axes` has no entries, all dimensions are reduced, and a tensor +with a single element is returned. Additionally, the axes can be negative, +similar to the indexing rules in Python. + +#### For example: + + + +```python +# 'x' represents [[1, ?, 1] +# [?, 1, ?]] +# where ? is implicitly-zero. +tf.sparse.reduce_sum(x) ==> 3 +tf.sparse.reduce_sum(x, 0) ==> [1, 1, 1] +tf.sparse.reduce_sum(x, 1) ==> [2, 1] # Can also use -1 as the axis. +tf.sparse.reduce_sum(x, 1, keepdims=True) ==> [[2], [1]] +tf.sparse.reduce_sum(x, [0, 1]) ==> 3 +``` + + + + + + + + + + + + + + + + + + + + + + +
+`sp_input` + +The SparseTensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce; list or scalar. If `None` (the +default), reduces all dimensions. +
+`keepdims` + +If true, retain reduced dimensions with length 1. +
+`reduction_axes` + +Deprecated name of `axis`. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced Tensor. +
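A minimal sketch that constructs the `x` from the comments above as an actual `SparseTensor` (eager execution assumed):

```python
import tensorflow as tf

# 'x' from the comments above: [[1, ?, 1], [?, 1, ?]] with ? implicitly zero.
x = tf.SparseTensor(indices=[[0, 0], [0, 2], [1, 1]],
                    values=[1, 1, 1], dense_shape=[2, 3])

print(tf.compat.v1.sparse_reduce_sum(x).numpy())          # 3
print(tf.compat.v1.sparse_reduce_sum(x, axis=1).numpy())  # [2 1]
```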
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_reduce_sum_sparse.md b/site/en/api_docs/python/tf/compat/v1/sparse_reduce_sum_sparse.md new file mode 100644 index 00000000000..0d5920ccc99 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_reduce_sum_sparse.md @@ -0,0 +1,123 @@ +description: Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.sparse_reduce_sum_sparse + + + + + + + + + +Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version. +Instructions for updating: +keep_dims is deprecated, use keepdims instead + +This Op takes a SparseTensor and is the sparse counterpart to +tf.reduce_sum(). In contrast to SparseReduceSum, this Op returns a +SparseTensor. + +Note: A gradient is not defined for this function, so it can't be used +in training models that need gradient descent. + +Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless +`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in +`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained +with length 1. + +If `reduction_axes` has no entries, all dimensions are reduced, and a tensor +with a single element is returned. Additionally, the axes can be negative, +which are interpreted according to the indexing rules in Python. + + + + + + + + + + + + + + + + + + + + + + +
+`sp_input` + +The SparseTensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce; list or scalar. If `None` (the +default), reduces all dimensions. +
+`keepdims` + +If true, retain reduced dimensions with length 1. +
+`reduction_axes` + +Deprecated name of `axis`. +
+`keep_dims` + +Deprecated alias for `keepdims`. +
+ + + + + + + + + + + +
+The reduced SparseTensor. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_segment_mean.md b/site/en/api_docs/python/tf/compat/v1/sparse_segment_mean.md new file mode 100644 index 00000000000..7bf0f1f355c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_segment_mean.md @@ -0,0 +1,118 @@ +description: Computes the mean along sparse segments of a tensor. + +
+ + +
+ +# tf.compat.v1.sparse_segment_mean + + + + + + + + + +Computes the mean along sparse segments of a tensor. + + + + + + + + + +Read [the section on +segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) +for an explanation of segments. + +Like tf.math.segment_mean, but `segment_ids` can have rank less than +`data`'s first dimension, selecting a subset of dimension 0, specified by +`indices`. +`segment_ids` is allowed to have missing ids, in which case the output will +be zeros at those indices. In those cases `num_segments` is used to determine +the size of the output. + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor` with data that will be assembled in the output. +
+`indices` + +A 1-D `Tensor` with indices into `data`. Has same rank as +`segment_ids`. +
+`segment_ids` + +A 1-D `Tensor` with indices into the output `Tensor`. Values +should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+`num_segments` + +An optional int32 scalar. Indicates the size of the output +`Tensor`. +
+ + + + + + + + + + + +
+A `Tensor` of the same shape as `data`, except for dimension 0, which +has size `k`, the number of segments specified via `num_segments` or +inferred from the last element in `segment_ids`. +
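Since this page has no example, here is a minimal hedged sketch with illustrative values (eager execution assumed):

```python
import tensorflow as tf

c = tf.constant([[1.0, 2.0, 3.0, 4.0],
                 [-1.0, -2.0, -3.0, -4.0],
                 [5.0, 6.0, 7.0, 8.0]])

# Rows 0 and 1 are selected and averaged into a single segment.
out = tf.compat.v1.sparse_segment_mean(c, tf.constant([0, 1]), tf.constant([0, 0]))
print(out.numpy())  # [[0. 0. 0. 0.]]
```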
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_segment_sqrt_n.md b/site/en/api_docs/python/tf/compat/v1/sparse_segment_sqrt_n.md new file mode 100644 index 00000000000..cc0035ab177 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_segment_sqrt_n.md @@ -0,0 +1,109 @@ +description: Computes the sum along sparse segments of a tensor divided by the sqrt(N). + +
+ + +
+ +# tf.compat.v1.sparse_segment_sqrt_n + + + + + + + + + +Computes the sum along sparse segments of a tensor divided by the sqrt(N). + + + + + + + + + +`N` is the size of the segment being reduced. + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor` with data that will be assembled in the output. +
+`indices` + +A 1-D `Tensor` with indices into `data`. Has same rank as +`segment_ids`. +
+`segment_ids` + +A 1-D `Tensor` with indices into the output `Tensor`. Values +should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+`num_segments` + +An optional int32 scalar. Indicates the size of the output +`Tensor`. +
+ + + + + + + + + + + +
+A `Tensor` of the same shape as `data`, except for dimension 0, which +has size `k`, the number of segments specified via `num_segments` or +inferred from the last element in `segment_ids`. +
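A minimal hedged sketch with illustrative values (eager execution assumed):

```python
import tensorflow as tf

data = tf.constant([[4.0, 4.0],
                    [2.0, 6.0]])

# Both selected rows fall into segment 0, so N = 2 and the row sum is divided by sqrt(2).
out = tf.compat.v1.sparse_segment_sqrt_n(data, tf.constant([0, 1]), tf.constant([0, 0]))
print(out.numpy())  # approx. [[4.2426 7.0711]]
```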
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_segment_sum.md b/site/en/api_docs/python/tf/compat/v1/sparse_segment_sum.md new file mode 100644 index 00000000000..40642c12a81 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_segment_sum.md @@ -0,0 +1,150 @@ +description: Computes the sum along sparse segments of a tensor. + +
+ + +
+ +# tf.compat.v1.sparse_segment_sum + + + + + + + + + +Computes the sum along sparse segments of a tensor. + + + + + + + + + +Read [the section on +segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) +for an explanation of segments. + +Like tf.math.segment_sum, but `segment_ids` can have rank less than `data`'s +first dimension, selecting a subset of dimension 0, specified by `indices`. +`segment_ids` is allowed to have missing ids, in which case the output will +be zeros at those indices. In those cases `num_segments` is used to determine +the size of the output. + +#### For example: + + + +```python +c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) + +# Select two rows, one segment. +tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0])) +# => [[0 0 0 0]] + +# Select two rows, two segment. +tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1])) +# => [[ 1 2 3 4] +# [-1 -2 -3 -4]] + +# With missing segment ids. +tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 2]), + num_segments=4) +# => [[ 1 2 3 4] +# [ 0 0 0 0] +# [-1 -2 -3 -4] +# [ 0 0 0 0]] + +# Select all rows, two segments. +tf.sparse.segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1])) +# => [[0 0 0 0] +# [5 6 7 8]] + +# Which is equivalent to: +tf.math.segment_sum(c, tf.constant([0, 0, 1])) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor` with data that will be assembled in the output. +
+`indices` + +A 1-D `Tensor` with indices into `data`. Has same rank as +`segment_ids`. +
+`segment_ids` + +A 1-D `Tensor` with indices into the output `Tensor`. Values +should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+`num_segments` + +An optional int32 scalar. Indicates the size of the output +`Tensor`. +
+ + + + + + + + + + + +
+A `Tensor` of the same shape as `data`, except for dimension 0, which +has size `k`, the number of segments specified via `num_segments` or +inferred from the last element in `segment_ids`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_split.md b/site/en/api_docs/python/tf/compat/v1/sparse_split.md new file mode 100644 index 00000000000..433891a7d48 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_split.md @@ -0,0 +1,157 @@ +description: Split a SparseTensor into num_split tensors along axis. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.sparse_split + + + + + + + + + +Split a `SparseTensor` into `num_split` tensors along `axis`. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(split_dim)`. They will be removed in a future version. +Instructions for updating: +split_dim is deprecated, use axis instead + +If the `sp_input.dense_shape[axis]` is not an integer multiple of `num_split` +each slice starting from 0:`shape[axis] % num_split` gets extra one +dimension. For example, if `axis = 1` and `num_split = 2` and the +input is: + + input_tensor = shape = [2, 7] + [ a d e ] + [b c ] + +Graphically the output tensors are: + + output_tensor[0] = + [ a ] + [b c ] + + output_tensor[1] = + [ d e ] + [ ] + + + + + + + + + + + + + + + + + + + + + + + + + +
+`keyword_required` + +Python 2 stand-in for `*` (temporary, for argument reordering). +
+`sp_input` + +The `SparseTensor` to split. +
+`num_split` + +A Python integer. The number of ways to split. +
+`axis` + +A 0-D `int32` `Tensor`. The dimension along which to split. +
+`name` + +A name for the operation (optional). +
+`split_dim` + +Deprecated old name for `axis`. +
+ + + + + + + + + + + +
+`num_split` `SparseTensor` objects resulting from splitting `sp_input`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+`ValueError` + +If the deprecated `split_dim` and `axis` are both non-`None`. +
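A hedged sketch of the example above (eager execution assumed; note that the arguments must be passed by keyword):

```python
import tensorflow as tf

# The [2, 7] input from the example above ('a' at [0, 2], 'd' at [0, 4], 'e' at [0, 5]).
sp = tf.SparseTensor(indices=[[0, 2], [0, 4], [0, 5], [1, 0], [1, 1]],
                     values=[b'a', b'd', b'e', b'b', b'c'], dense_shape=[2, 7])

parts = tf.compat.v1.sparse_split(sp_input=sp, num_split=2, axis=1)
print(parts[0].dense_shape.numpy())  # [2 4]
print(parts[1].dense_shape.numpy())  # [2 3]
```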
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sparse_to_dense.md b/site/en/api_docs/python/tf/compat/v1/sparse_to_dense.md new file mode 100644 index 00000000000..331227540e3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sparse_to_dense.md @@ -0,0 +1,130 @@ +description: Converts a sparse representation into a dense tensor. (deprecated) + +
+ + +
+ +# tf.compat.v1.sparse_to_dense + + + + + + + + + +Converts a sparse representation into a dense tensor. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Create a tf.sparse.SparseTensor and use tf.sparse.to_dense instead. + +Builds an array `dense` with shape `output_shape` such that + +```python +# If sparse_indices is scalar +dense[i] = (i == sparse_indices ? sparse_values : default_value) + +# If sparse_indices is a vector, then for each i +dense[sparse_indices[i]] = sparse_values[i] + +# If sparse_indices is an n by d matrix, then for each i in [0, n) +dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i] +``` + +All other values in `dense` are set to `default_value`. If `sparse_values` +is a scalar, all sparse indices are set to this single value. + +Indices should be sorted in lexicographic order, and indices must not +contain any repeats. If `validate_indices` is True, these properties +are checked during execution. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_indices` + +A 0-D, 1-D, or 2-D `Tensor` of type `int32` or `int64`. +`sparse_indices[i]` contains the complete index where `sparse_values[i]` +will be placed. +
+`output_shape` + +A 1-D `Tensor` of the same type as `sparse_indices`. Shape +of the dense output tensor. +
+`sparse_values` + +A 0-D or 1-D `Tensor`. Values corresponding to each row of +`sparse_indices`, or a scalar value to be used for all sparse indices. +
+`default_value` + +A 0-D `Tensor` of the same type as `sparse_values`. Value +to set for indices not specified in `sparse_indices`. Defaults to zero. +
+`validate_indices` + +A boolean value. If True, indices are checked to make +sure they are sorted in lexicographic order and that there are no repeats. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+Dense `Tensor` of shape `output_shape`. Has the same type as +`sparse_values`. +
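A minimal sketch of the 2-D indices case described above (illustrative values; eager execution assumed):

```python
import tensorflow as tf

dense = tf.compat.v1.sparse_to_dense(sparse_indices=[[0, 0], [1, 2]],
                                     output_shape=[2, 3],
                                     sparse_values=[5, 7],
                                     default_value=0)
print(dense.numpy())
# [[5 0 0]
#  [0 0 7]]
```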
+ diff --git a/site/en/api_docs/python/tf/compat/v1/spectral.md b/site/en/api_docs/python/tf/compat/v1/spectral.md new file mode 100644 index 00000000000..0c72845c954 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/spectral.md @@ -0,0 +1,51 @@ +description: Public API for tf.spectral namespace. + +
+ + +
+ +# Module: tf.compat.v1.spectral + + + + + + + + + +Public API for tf.spectral namespace. + + + +## Functions + +[`dct(...)`](../../../tf/signal/dct.md): Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`. + +[`fft(...)`](../../../tf/signal/fft.md): Fast Fourier transform. + +[`fft2d(...)`](../../../tf/signal/fft2d.md): 2D fast Fourier transform. + +[`fft3d(...)`](../../../tf/signal/fft3d.md): 3D fast Fourier transform. + +[`idct(...)`](../../../tf/signal/idct.md): Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of `input`. + +[`ifft(...)`](../../../tf/signal/ifft.md): Inverse fast Fourier transform. + +[`ifft2d(...)`](../../../tf/signal/ifft2d.md): Inverse 2D fast Fourier transform. + +[`ifft3d(...)`](../../../tf/signal/ifft3d.md): Inverse 3D fast Fourier transform. + +[`irfft(...)`](../../../tf/signal/irfft.md): Inverse real-valued fast Fourier transform. + +[`irfft2d(...)`](../../../tf/signal/irfft2d.md): Inverse 2D real-valued fast Fourier transform. + +[`irfft3d(...)`](../../../tf/signal/irfft3d.md): Inverse 3D real-valued fast Fourier transform. + +[`rfft(...)`](../../../tf/signal/rfft.md): Real-valued fast Fourier transform. + +[`rfft2d(...)`](../../../tf/signal/rfft2d.md): 2D real-valued fast Fourier transform. + +[`rfft3d(...)`](../../../tf/signal/rfft3d.md): 3D real-valued fast Fourier transform. + diff --git a/site/en/api_docs/python/tf/compat/v1/squeeze.md b/site/en/api_docs/python/tf/compat/v1/squeeze.md new file mode 100644 index 00000000000..cfd3010e058 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/squeeze.md @@ -0,0 +1,139 @@ +description: Removes dimensions of size 1 from the shape of a tensor. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.squeeze + + + + + + + + + +Removes dimensions of size 1 from the shape of a tensor. (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(squeeze_dims)`. They will be removed in a future version. +Instructions for updating: +Use the `axis` argument instead + +Given a tensor `input`, this operation returns a tensor of the same type with +all dimensions of size 1 removed. If you don't want to remove all size 1 +dimensions, you can remove specific size 1 dimensions by specifying +`axis`. + +#### For example: + + + +``` +>>> # 't' is a tensor of shape [1, 2, 1, 3, 1, 1] +>>> t = tf.ones([1, 2, 1, 3, 1, 1]) +>>> print(tf.shape(tf.squeeze(t)).numpy()) +[2 3] +``` + +Or, to remove specific size 1 dimensions: + +``` +>>> # 't' is a tensor of shape [1, 2, 1, 3, 1, 1] +>>> t = tf.ones([1, 2, 1, 3, 1, 1]) +>>> print(tf.shape(tf.squeeze(t, [2, 4])).numpy()) +[1 2 3 1] +``` + +Note: if `input` is a tf.RaggedTensor, then this operation takes `O(N)` +time, where `N` is the number of elements in the squeezed dimensions. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. The `input` to squeeze. +
+`axis` + +An optional list of `ints`. Defaults to `[]`. If specified, only +squeezes the dimensions listed. The dimension index starts at 0. It is an +error to squeeze a dimension that is not 1. Must be in the range +`[-rank(input), rank(input))`. Must be specified if `input` is a +`RaggedTensor`. +
+`name` + +A name for the operation (optional). +
+`squeeze_dims` + +Deprecated keyword argument that is now `axis`. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +Contains the same data as `input`, but has one or more dimensions of +size 1 removed. +
+ + + + + + + + + + + + +
+`ValueError` + +When both `squeeze_dims` and `axis` are specified. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/string_split.md b/site/en/api_docs/python/tf/compat/v1/string_split.md new file mode 100644 index 00000000000..bcbd226e50c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/string_split.md @@ -0,0 +1,151 @@ +description: Split elements of source based on delimiter. (deprecated arguments) + +
+ + +
+ +# tf.compat.v1.string_split + + + + + + + + + +Split elements of `source` based on `delimiter`. (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(delimiter)`. They will be removed in a future version. +Instructions for updating: +delimiter is deprecated, please use sep instead. + +Let N be the size of `source` (typically N will be the batch size). Split each +element of `source` based on `delimiter` and return a `SparseTensor` +or `RaggedTensor` containing the split tokens. Empty tokens are ignored. + +If `sep` is an empty string, each element of the `source` is split +into individual strings, each containing one byte. (This includes splitting +multibyte sequences of UTF-8.) If delimiter contains multiple bytes, it is +treated as a set of delimiters with each considered a potential split point. + +#### Examples: + + + +``` +>>> print(tf.compat.v1.string_split(['hello world', 'a b c'])) +SparseTensor(indices=tf.Tensor( [[0 0] [0 1] [1 0] [1 1] [1 2]], ...), + values=tf.Tensor([b'hello' b'world' b'a' b'b' b'c'], ...), + dense_shape=tf.Tensor([2 3], shape=(2,), dtype=int64)) +``` + +``` +>>> print(tf.compat.v1.string_split(['hello world', 'a b c'], +... result_type="RaggedTensor")) + +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`source` + +`1-D` string `Tensor`, the strings to split. +
+`sep` + +`0-D` string `Tensor`, the delimiter string, which should +be of length 0 or 1. Defaults to `' '`. +
+`skip_empty` + +A `bool`. If `True`, skip the empty strings from the result. +
+`delimiter` + +deprecated alias for `sep`. +
+`result_type` + +The tensor type for the result: one of `"RaggedTensor"` or +`"SparseTensor"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+`ValueError` + +If delimiter is not a string. +
+ + + + + + + + + + + +
+A `SparseTensor` or `RaggedTensor` of rank `2`, the strings split according +to the delimiter. The first column of the indices corresponds to the row +in `source` and the second column corresponds to the index of the split +component in this row. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/string_to_hash_bucket.md b/site/en/api_docs/python/tf/compat/v1/string_to_hash_bucket.md new file mode 100644 index 00000000000..b3bbd0ed7d4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/string_to_hash_bucket.md @@ -0,0 +1,95 @@ +description: Converts each string in the input Tensor to its hash mod by a number of buckets. + +
+ + +
+ +# tf.compat.v1.string_to_hash_bucket + + + + + + + + + +Converts each string in the input Tensor to its hash mod by a number of buckets. + + + + + + + + + +The hash function is deterministic on the content of the string within the +process. + +Note that the hash function may change from time to time. +This functionality will be deprecated and it's recommended to use +`tf.string_to_hash_bucket_fast()` or `tf.string_to_hash_bucket_strong()`. + + + + + + + + + + + + + + + + +
+`string_tensor` + +A `Tensor` of type `string`. +
+`num_buckets` + +An `int` that is `>= 1`. The number of buckets. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
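A minimal sketch (the strings and bucket count are illustrative; the concrete bucket ids depend on the hash implementation, so they are not shown):

```python
import tensorflow as tf

buckets = tf.compat.v1.string_to_hash_bucket(["Hello", "TensorFlow", "2.x"],
                                             num_buckets=10)
print(buckets.dtype)  # tf.int64; each entry is a bucket id in [0, 10)
```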
+ diff --git a/site/en/api_docs/python/tf/compat/v1/string_to_number.md b/site/en/api_docs/python/tf/compat/v1/string_to_number.md new file mode 100644 index 00000000000..e5284196975 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/string_to_number.md @@ -0,0 +1,102 @@ +description: Converts each string in the input Tensor to the specified numeric type. + +
+ + +
+ +# tf.compat.v1.string_to_number + + + + + + + + + +Converts each string in the input Tensor to the specified numeric type. + + + + + + + + + +(Note that int32 overflow results in an error while float overflow +results in a rounded value.) + +#### Example: + + + +``` +>>> strings = ["5.0", "3.0", "7.0"] +>>> tf.strings.to_number(strings) + +``` + + + + + + + + + + + + + + + + +
+`string_tensor` + +A `Tensor` of type `string`. +
+`out_type` + +An optional tf.DType from: `tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to tf.float32. +The numeric type to interpret each string in `string_tensor` as. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/strings.md b/site/en/api_docs/python/tf/compat/v1/strings.md new file mode 100644 index 00000000000..f8985a38048 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/strings.md @@ -0,0 +1,75 @@ +description: Operations for working with string Tensors. + +
+ + +
+ +# Module: tf.compat.v1.strings + + + + + + + + + +Operations for working with string Tensors. + + + +## Functions + +[`as_string(...)`](../../../tf/strings/as_string.md): Converts each entry in the given tensor to strings. + +[`bytes_split(...)`](../../../tf/strings/bytes_split.md): Split string elements of `input` into bytes. + +[`format(...)`](../../../tf/strings/format.md): Formats a string template using a list of tensors. + +[`join(...)`](../../../tf/strings/join.md): Perform element-wise concatenation of a list of string tensors. + +[`length(...)`](../../../tf/compat/v1/strings/length.md): Computes the length of each string given in the input tensor. + +[`lower(...)`](../../../tf/strings/lower.md): Converts all uppercase characters into their respective lowercase replacements. + +[`ngrams(...)`](../../../tf/strings/ngrams.md): Create a tensor of n-grams based on `data`. + +[`reduce_join(...)`](../../../tf/compat/v1/reduce_join.md): Joins all strings into a single string, or joins along an axis. + +[`regex_full_match(...)`](../../../tf/strings/regex_full_match.md): Check if the input matches the regex pattern. + +[`regex_replace(...)`](../../../tf/strings/regex_replace.md): Replace elements of `input` matching regex `pattern` with `rewrite`. + +[`split(...)`](../../../tf/compat/v1/strings/split.md): Split elements of `input` based on `sep`. + +[`strip(...)`](../../../tf/strings/strip.md): Strip leading and trailing whitespaces from the Tensor. + +[`substr(...)`](../../../tf/compat/v1/strings/substr.md): Return substrings from `Tensor` of strings. + +[`to_hash_bucket(...)`](../../../tf/compat/v1/string_to_hash_bucket.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`to_hash_bucket_fast(...)`](../../../tf/strings/to_hash_bucket_fast.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`to_hash_bucket_strong(...)`](../../../tf/strings/to_hash_bucket_strong.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`to_number(...)`](../../../tf/compat/v1/string_to_number.md): Converts each string in the input Tensor to the specified numeric type. + +[`unicode_decode(...)`](../../../tf/strings/unicode_decode.md): Decodes each string in `input` into a sequence of Unicode code points. + +[`unicode_decode_with_offsets(...)`](../../../tf/strings/unicode_decode_with_offsets.md): Decodes each string into a sequence of code points with start offsets. + +[`unicode_encode(...)`](../../../tf/strings/unicode_encode.md): Encodes each sequence of Unicode code points in `input` into a string. + +[`unicode_script(...)`](../../../tf/strings/unicode_script.md): Determine the script codes of a given tensor of Unicode integer code points. + +[`unicode_split(...)`](../../../tf/strings/unicode_split.md): Splits each string in `input` into a sequence of Unicode code points. + +[`unicode_split_with_offsets(...)`](../../../tf/strings/unicode_split_with_offsets.md): Splits each string into a sequence of code points with start offsets. + +[`unicode_transcode(...)`](../../../tf/strings/unicode_transcode.md): Transcode the input text from a source encoding to a destination encoding. + +[`unsorted_segment_join(...)`](../../../tf/strings/unsorted_segment_join.md): Joins the elements of `inputs` based on `segment_ids`. + +[`upper(...)`](../../../tf/strings/upper.md): Converts all lowercase characters into their respective uppercase replacements. 
+ diff --git a/site/en/api_docs/python/tf/compat/v1/strings/length.md b/site/en/api_docs/python/tf/compat/v1/strings/length.md new file mode 100644 index 00000000000..f4a089b4104 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/strings/length.md @@ -0,0 +1,92 @@ +description: Computes the length of each string given in the input tensor. + +
+ + +
+ +# tf.compat.v1.strings.length + + + + + + + + + +Computes the length of each string given in the input tensor. + + + + + + + +``` +>>> strings = tf.constant(['Hello','TensorFlow', '🙂']) +>>> tf.strings.length(strings).numpy() # default counts bytes +array([ 5, 10, 4], dtype=int32) +>>> tf.strings.length(strings, unit="UTF8_CHAR").numpy() +array([ 5, 10, 1], dtype=int32) +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. The strings for which to compute the +length for each element. +
+`name` + +A name for the operation (optional). +
+`unit` + +An optional `string` from: `"BYTE", "UTF8_CHAR"`. Defaults to +`"BYTE"`. The unit that is counted to compute string length. One of: +`"BYTE"` (for the number of bytes in each string) or `"UTF8_CHAR"` (for +the number of UTF-8 encoded Unicode code points in each string). Results +are undefined if `unit=UTF8_CHAR` and the `input` strings do not contain +structurally valid UTF-8. +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`, containing the length of the input string in +the same element of the input tensor. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/strings/split.md b/site/en/api_docs/python/tf/compat/v1/strings/split.md new file mode 100644 index 00000000000..07c2d5a337b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/strings/split.md @@ -0,0 +1,149 @@ +description: Split elements of input based on sep. + +
+ + +
+ +# tf.compat.v1.strings.split + + + + + + + + + +Split elements of `input` based on `sep`. + + + + + + + +Let N be the size of `input` (typically N will be the batch size). Split each +element of `input` based on `sep` and return a `SparseTensor` or +`RaggedTensor` containing the split tokens. Empty tokens are ignored. + +#### Examples: + + + +``` +>>> print(tf.compat.v1.strings.split(['hello world', 'a b c'])) +SparseTensor(indices=tf.Tensor( [[0 0] [0 1] [1 0] [1 1] [1 2]], ...), + values=tf.Tensor([b'hello' b'world' b'a' b'b' b'c'], ...), + dense_shape=tf.Tensor([2 3], shape=(2,), dtype=int64)) +``` + +``` +>>> print(tf.compat.v1.strings.split(['hello world', 'a b c'], +... result_type="RaggedTensor")) + +``` + +If `sep` is given, consecutive delimiters are not grouped together and are +deemed to delimit empty strings. For example, `input` of `"1<>2<><>3"` and +`sep` of `"<>"` returns `["1", "2", "", "3"]`. If `sep` is None or an empty +string, consecutive whitespace are regarded as a single separator, and the +result will contain no empty strings at the start or end if the string has +leading or trailing whitespace. + +Note that the above mentioned behavior matches python's str.split. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A string `Tensor` of rank `N`, the strings to split. If +`rank(input)` is not known statically, then it is assumed to be `1`. +
+`sep` + +`0-D` string `Tensor`, the delimiter character. +
+`maxsplit` + +An `int`. If `maxsplit > 0`, limits the number of splits in the result. +
+`result_type` + +The tensor type for the result: one of `"RaggedTensor"` or +`"SparseTensor"`. +
+`source` + +Alias for the `input` argument. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+`ValueError` + +If sep is not a string. +
+ + + + + + + + + + + +
+A `SparseTensor` or `RaggedTensor` of rank `N+1`, the strings split +according to the delimiter. +
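A hedged sketch of the separator behavior described above (eager execution assumed; the default `result_type` of `"SparseTensor"` is used, so the tokens are read from `.values`):

```python
import tensorflow as tf

# Explicit separator: consecutive delimiters delimit empty strings.
sp = tf.compat.v1.strings.split(["1<>2<><>3"], sep="<>")
print(sp.values.numpy())  # [b'1' b'2' b'' b'3']

# No separator: runs of whitespace collapse and no empty leading/trailing tokens remain.
sp = tf.compat.v1.strings.split(["  hello   world  "])
print(sp.values.numpy())  # [b'hello' b'world']
```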
+ diff --git a/site/en/api_docs/python/tf/compat/v1/strings/substr.md b/site/en/api_docs/python/tf/compat/v1/strings/substr.md new file mode 100644 index 00000000000..dd71d143730 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/strings/substr.md @@ -0,0 +1,191 @@ +description: Return substrings from Tensor of strings. + +
+ + +
+ +# tf.compat.v1.strings.substr + + + + + + + + + +Return substrings from `Tensor` of strings. + + + + + + + +For each string in the input `Tensor`, creates a substring starting at index +`pos` with a total length of `len`. + +If `len` defines a substring that would extend beyond the length of the input +string, or if `len` is negative, then as many characters as possible are used. + +A negative `pos` indicates distance within the string backwards from the end. + +If `pos` specifies an index which is out of range for any of the input strings, +then an `InvalidArgumentError` is thrown. + +`pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on +Op creation. + +*NOTE*: `Substr` supports broadcasting up to two dimensions. More about +broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +--- + +Examples + +Using scalar `pos` and `len`: + +```python +input = [b'Hello', b'World'] +position = 1 +length = 3 + +output = [b'ell', b'orl'] +``` + +Using `pos` and `len` with same shape as `input`: + +```python +input = [[b'ten', b'eleven', b'twelve'], + [b'thirteen', b'fourteen', b'fifteen'], + [b'sixteen', b'seventeen', b'eighteen']] +position = [[1, 2, 3], + [1, 2, 3], + [1, 2, 3]] +length = [[2, 3, 4], + [4, 3, 2], + [5, 5, 5]] + +output = [[b'en', b'eve', b'lve'], + [b'hirt', b'urt', b'te'], + [b'ixtee', b'vente', b'hteen']] +``` + +Broadcasting `pos` and `len` onto `input`: + +``` +input = [[b'ten', b'eleven', b'twelve'], + [b'thirteen', b'fourteen', b'fifteen'], + [b'sixteen', b'seventeen', b'eighteen'], + [b'nineteen', b'twenty', b'twentyone']] +position = [1, 2, 3] +length = [1, 2, 3] + +output = [[b'e', b'ev', b'lve'], + [b'h', b'ur', b'tee'], + [b'i', b've', b'hte'], + [b'i', b'en', b'nty']] +``` + +Broadcasting `input` onto `pos` and `len`: + +``` +input = b'thirteen' +position = [1, 5, 7] +length = [3, 2, 1] + +output = [b'hir', b'ee', b'n'] +``` + + + + + + + + + +
+* `ValueError`: If the first argument cannot be converted to a +Tensor of `dtype string`. +* `InvalidArgumentError`: If indices are out of range. +* `ValueError`: If `pos` and `len` are not the same shape. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. Tensor of strings +
+`pos` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Scalar defining the position of the first character in each substring. +
+`len` + +A `Tensor`. Must have the same type as `pos`. +Scalar defining the number of characters to include in each substring. +
+`unit` + +An optional `string` from: `"BYTE", "UTF8_CHAR"`. Defaults to `"BYTE"`. +The unit that is used to create the substring. One of: `"BYTE"` (for +defining position and length by bytes) or `"UTF8_CHAR"` (for the UTF-8 +encoded Unicode code points). The default is `"BYTE"`. Results are undefined if +`unit=UTF8_CHAR` and the `input` strings do not contain structurally valid +UTF-8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
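
#### Example

The scalar `pos`/`len` case shown above can be exercised directly; a minimal sketch, assuming TensorFlow 2.x with eager execution so the result prints without a `Session`:

```python
import tensorflow as tf

inputs = tf.constant([b'Hello', b'World'])

# Scalar pos/len broadcast against every element of the 1-D input.
result = tf.compat.v1.strings.substr(inputs, pos=1, len=3)
print(result.numpy())  # [b'ell' b'orl']
```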
+ diff --git a/site/en/api_docs/python/tf/compat/v1/substr.md b/site/en/api_docs/python/tf/compat/v1/substr.md new file mode 100644 index 00000000000..2c0af7af6d2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/substr.md @@ -0,0 +1,191 @@ +description: Return substrings from Tensor of strings. + +
+ + +
+ +# tf.compat.v1.substr + + + + + + + + + +Return substrings from `Tensor` of strings. + + + + + + + +For each string in the input `Tensor`, creates a substring starting at index +`pos` with a total length of `len`. + +If `len` defines a substring that would extend beyond the length of the input +string, or if `len` is negative, then as many characters as possible are used. + +A negative `pos` indicates distance within the string backwards from the end. + +If `pos` specifies an index which is out of range for any of the input strings, +then an `InvalidArgumentError` is thrown. + +`pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on +Op creation. + +*NOTE*: `Substr` supports broadcasting up to two dimensions. More about +broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +--- + +Examples + +Using scalar `pos` and `len`: + +```python +input = [b'Hello', b'World'] +position = 1 +length = 3 + +output = [b'ell', b'orl'] +``` + +Using `pos` and `len` with same shape as `input`: + +```python +input = [[b'ten', b'eleven', b'twelve'], + [b'thirteen', b'fourteen', b'fifteen'], + [b'sixteen', b'seventeen', b'eighteen']] +position = [[1, 2, 3], + [1, 2, 3], + [1, 2, 3]] +length = [[2, 3, 4], + [4, 3, 2], + [5, 5, 5]] + +output = [[b'en', b'eve', b'lve'], + [b'hirt', b'urt', b'te'], + [b'ixtee', b'vente', b'hteen']] +``` + +Broadcasting `pos` and `len` onto `input`: + +``` +input = [[b'ten', b'eleven', b'twelve'], + [b'thirteen', b'fourteen', b'fifteen'], + [b'sixteen', b'seventeen', b'eighteen'], + [b'nineteen', b'twenty', b'twentyone']] +position = [1, 2, 3] +length = [1, 2, 3] + +output = [[b'e', b'ev', b'lve'], + [b'h', b'ur', b'tee'], + [b'i', b've', b'hte'], + [b'i', b'en', b'nty']] +``` + +Broadcasting `input` onto `pos` and `len`: + +``` +input = b'thirteen' +position = [1, 5, 7] +length = [3, 2, 1] + +output = [b'hir', b'ee', b'n'] +``` + + + + + + + + + +
+* `ValueError`: If the first argument cannot be converted to a
+Tensor of `dtype string`.
+* `InvalidArgumentError`: If indices are out of range.
+* `ValueError`: If `pos` and `len` are not the same shape.
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. Tensor of strings +
+`pos` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Scalar defining the position of first character in each substring +
+`len` + +A `Tensor`. Must have the same type as `pos`. +Scalar defining the number of characters to include in each substring +
+`unit` + +An optional `string` from: `"BYTE", "UTF8_CHAR"`. Defaults to `"BYTE"`. +The unit that is used to create the substring. One of: `"BYTE"` (for +defining position and length by bytes) or `"UTF8_CHAR"` (for the UTF-8 +encoded Unicode code points). The default is `"BYTE"`. Results are undefined if +`unit=UTF8_CHAR` and the `input` strings do not contain structurally valid +UTF-8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
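
#### Example

As a complementary sketch (again assuming TensorFlow 2.x with eager execution), `unit='UTF8_CHAR'` makes `pos` and `len` count Unicode code points instead of bytes:

```python
import tensorflow as tf

# 'héllo' is 5 code points but 6 bytes, so byte-based slicing would differ.
inputs = tf.constant(['héllo', 'wörld'])

result = tf.compat.v1.substr(inputs, pos=1, len=3, unit='UTF8_CHAR')
print(result.numpy())  # UTF-8 bytes for 'éll' and 'örl'
```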
+ diff --git a/site/en/api_docs/python/tf/compat/v1/summary.md b/site/en/api_docs/python/tf/compat/v1/summary.md new file mode 100644 index 00000000000..9d38954978a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary.md @@ -0,0 +1,63 @@ +description: Operations for writing summary data, for use in analysis and visualization. + +
+ + +
+ +# Module: tf.compat.v1.summary + + + + + + + + + +Operations for writing summary data, for use in analysis and visualization. + + +See the [Summaries and +TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard) guide. + +## Classes + +[`class Event`](../../../tf/compat/v1/Event.md): A ProtocolMessage + +[`class FileWriter`](../../../tf/compat/v1/summary/FileWriter.md): Writes `Summary` protocol buffers to event files. + +[`class FileWriterCache`](../../../tf/compat/v1/summary/FileWriterCache.md): Cache for file writers. + +[`class SessionLog`](../../../tf/compat/v1/SessionLog.md): A ProtocolMessage + +[`class Summary`](../../../tf/compat/v1/Summary.md): A ProtocolMessage + +[`class SummaryDescription`](../../../tf/compat/v1/summary/SummaryDescription.md): A ProtocolMessage + +[`class TaggedRunMetadata`](../../../tf/compat/v1/summary/TaggedRunMetadata.md): A ProtocolMessage + +## Functions + +[`all_v2_summary_ops(...)`](../../../tf/compat/v1/summary/all_v2_summary_ops.md): Returns all V2-style summary ops defined in the current default graph. + +[`audio(...)`](../../../tf/compat/v1/summary/audio.md): Outputs a `Summary` protocol buffer with audio. + +[`get_summary_description(...)`](../../../tf/compat/v1/summary/get_summary_description.md): Given a TensorSummary node_def, retrieve its SummaryDescription. + +[`histogram(...)`](../../../tf/compat/v1/summary/histogram.md): Outputs a `Summary` protocol buffer with a histogram. + +[`image(...)`](../../../tf/compat/v1/summary/image.md): Outputs a `Summary` protocol buffer with images. + +[`initialize(...)`](../../../tf/compat/v1/summary/initialize.md): Initializes summary writing for graph execution mode. + +[`merge(...)`](../../../tf/compat/v1/summary/merge.md): Merges summaries. + +[`merge_all(...)`](../../../tf/compat/v1/summary/merge_all.md): Merges all summaries collected in the default graph. + +[`scalar(...)`](../../../tf/compat/v1/summary/scalar.md): Outputs a `Summary` protocol buffer containing a single scalar value. + +[`tensor_summary(...)`](../../../tf/compat/v1/summary/tensor_summary.md): Outputs a `Summary` protocol buffer with a serialized tensor.proto. + +[`text(...)`](../../../tf/compat/v1/summary/text.md): Summarizes textual data. + diff --git a/site/en/api_docs/python/tf/compat/v1/summary/FileWriter.md b/site/en/api_docs/python/tf/compat/v1/summary/FileWriter.md new file mode 100644 index 00000000000..4793edd3388 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/FileWriter.md @@ -0,0 +1,519 @@ +description: Writes Summary protocol buffers to event files. + +
+ + + + + + + + + + + + + + + +
+ +# tf.compat.v1.summary.FileWriter + + + + + + + + + +Writes `Summary` protocol buffers to event files. + + + + + + + +The `FileWriter` class provides a mechanism to create an event file in a +given directory and add summaries and events to it. The class updates the +file contents asynchronously. This allows a training program to call methods +to add data to the file directly from the training loop, without slowing down +training. + +When constructed with a tf.compat.v1.Session parameter, a `FileWriter` +instead forms a compatibility layer over new graph-based summaries +(`tf.contrib.summary`) to facilitate the use of new summary writing with +pre-existing code that expects a `FileWriter` instance. + +This class is not thread-safe. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`logdir` + +A string. Directory where event file will be written. +
+`graph` + +A `Graph` object, such as `sess.graph`. +
+`max_queue` + +Integer. Size of the queue for pending events and summaries. +
+`flush_secs` + +Number. How often, in seconds, to flush the +pending events and summaries to disk. +
+`graph_def` + +DEPRECATED: Use the `graph` argument instead. +
+`filename_suffix` + +A string. Every event file's name is suffixed with +`suffix`. +
+`session` + +A tf.compat.v1.Session object. See details above. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If called with eager execution enabled. +
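
#### Example

A minimal graph-mode sketch of the typical workflow, assuming TensorFlow 2.x with `tf.compat.v1.disable_v2_behavior()` and a hypothetical log directory `/tmp/example_logs`: record the graph, write a few scalar summaries, then close the writer.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

graph = tf.Graph()
with graph.as_default():
    loss = tf.placeholder(tf.float32, shape=[], name='loss')
    loss_summary = tf.summary.scalar('loss', loss)

# The writer owns the event file; passing the graph records it for TensorBoard.
writer = tf.summary.FileWriter('/tmp/example_logs', graph=graph)
with tf.Session(graph=graph) as sess:
    for step in range(3):
        serialized = sess.run(loss_summary, feed_dict={loss: 1.0 / (step + 1)})
        writer.add_summary(serialized, global_step=step)
writer.close()
```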
+ + + +## Methods + +

add_event

+ +View source + + + +Adds an event to the event file. + + + + + + + + + + + +
Args
+`event` + +An `Event` protocol buffer. +
+ + + +

add_graph

+ +View source + + + +Adds a `Graph` to the event file. + +The graph described by the protocol buffer will be displayed by +TensorBoard. Most users pass a graph in the constructor instead. + + + + + + + + + + + + + + + + +
Args
+`graph` + +A `Graph` object, such as `sess.graph`. +
+`global_step` + +Number. Optional global step counter to record with the +graph. +
+`graph_def` + +DEPRECATED. Use the `graph` parameter instead. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If both graph and graph_def are passed to the method. +
+ + + +

add_meta_graph

+ +View source + + + +Adds a `MetaGraphDef` to the event file. + +The `MetaGraphDef` allows running the given graph via +`saver.import_meta_graph()`. + + + + + + + + + + + + + +
Args
+`meta_graph_def` + +A `MetaGraphDef` object, often as returned by +`saver.export_meta_graph()`. +
+`global_step` + +Number. Optional global step counter to record with the +graph. +
+ + + + + + + + + + + + +
Raises
+`TypeError`
+
+If `meta_graph_def` is not an instance of `MetaGraphDef`.
+
+ + + +

add_run_metadata

+
+View source
+
+
+
+Adds metadata information for a single session.run() call.
+
+
Args
+`run_metadata` + +A `RunMetadata` protobuf object. +
+`tag` + +The tag name for this metadata. +
+`global_step` + +Number. Optional global step counter to record with the +StepStats. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the provided tag was already used for this type of event. +
+ + + +

add_session_log

+ +View source + + + +Adds a `SessionLog` protocol buffer to the event file. + +This method wraps the provided session in an `Event` protocol buffer +and adds it to the event file. + + + + + + + + + + + + + +
Args
+`session_log` + +A `SessionLog` protocol buffer. +
+`global_step` + +Number. Optional global step value to record with the +summary. +
+ + + +

add_summary

+ +View source + + + +Adds a `Summary` protocol buffer to the event file. + +This method wraps the provided summary in an `Event` protocol buffer +and adds it to the event file. + +You can pass the result of evaluating any summary op, using +`tf.Session.run` or +tf.Tensor.eval, to this +function. Alternatively, you can pass a tf.compat.v1.Summary protocol +buffer that you populate with your own data. The latter is +commonly done to report evaluation results in event files. + + + + + + + + + + + + + +
Args
+`summary` + +A `Summary` protocol buffer, optionally serialized as a string. +
+`global_step` + +Number. Optional global step value to record with the +summary. +
+ + + +

close

+
+View source
+
+
+
+Flushes the event file to disk and closes the file.
+
+Call this method when you do not need the summary writer anymore.
+

flush

+ +View source + + + +Flushes the event file to disk. + +Call this method to make sure that all pending events have been written to +disk. + +

get_logdir

+ +View source + + + +Returns the directory where event file will be written. + + +

reopen

+ +View source + + + +Reopens the EventFileWriter. + +Can be called after `close()` to add more events in the same directory. +The events will go into a new events file. + +Does nothing if the EventFileWriter was not closed. + +

__enter__

+ +View source + + + +Make usable with "with" statement. + + +

__exit__

+ +View source + + + +Make usable with "with" statement. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/summary/FileWriterCache.md b/site/en/api_docs/python/tf/compat/v1/summary/FileWriterCache.md new file mode 100644 index 00000000000..7cecca1da88 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/FileWriterCache.md @@ -0,0 +1,91 @@ +description: Cache for file writers. + +
+ + + + +
+ +# tf.compat.v1.summary.FileWriterCache + + + + + + + + + +Cache for file writers. + + + +This class caches file writers, one per directory. + +## Methods + +

clear

+ +View source + + + +Clear cached summary writers. Currently only used for unit tests. + + +

get

+ +View source + + + +Returns the FileWriter for the specified directory. + + + + + + + + + + + +
Args
+`logdir` + +str, name of the directory. +
+ + + + + + + + + + + +
Returns
+A `FileWriter`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/summary/SummaryDescription.md b/site/en/api_docs/python/tf/compat/v1/summary/SummaryDescription.md new file mode 100644 index 00000000000..055cee86fa0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/SummaryDescription.md @@ -0,0 +1,46 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.summary.SummaryDescription + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + +
+`type_hint` + +`string type_hint` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/summary/TaggedRunMetadata.md b/site/en/api_docs/python/tf/compat/v1/summary/TaggedRunMetadata.md new file mode 100644 index 00000000000..4a979a31977 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/TaggedRunMetadata.md @@ -0,0 +1,53 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.compat.v1.summary.TaggedRunMetadata + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + +
+`run_metadata` + +`bytes run_metadata` +
+`tag` + +`string tag` +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/summary/all_v2_summary_ops.md b/site/en/api_docs/python/tf/compat/v1/summary/all_v2_summary_ops.md new file mode 100644 index 00000000000..3e9d229c919 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/all_v2_summary_ops.md @@ -0,0 +1,48 @@ +description: Returns all V2-style summary ops defined in the current default graph. + +
+ + +
+ +# tf.compat.v1.summary.all_v2_summary_ops + + + + + + + + + +Returns all V2-style summary ops defined in the current default graph. + + + + + + + +This includes ops from TF 2.0 tf.summary and TF 1.x tf.contrib.summary (except +for `tf.contrib.summary.graph` and `tf.contrib.summary.import_event`), but +does *not* include TF 1.x tf.summary ops. + + + + + + + + + +
+List of summary ops, or None if called under eager execution. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/summary/audio.md b/site/en/api_docs/python/tf/compat/v1/summary/audio.md new file mode 100644 index 00000000000..7ade8d13dde --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/audio.md @@ -0,0 +1,117 @@ +description: Outputs a Summary protocol buffer with audio. + +
+ + +
+ +# tf.compat.v1.summary.audio + + + + + + + + + +Outputs a `Summary` protocol buffer with audio. + + + + + + + +The summary has up to `max_outputs` summary values containing audio. The +audio is built from `tensor` which must be 3-D with shape `[batch_size, +frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are +assumed to be in the range of `[-1.0, 1.0]` with a sample rate of +`sample_rate`. + +The `tag` in the outputted Summary.Value protobufs is generated based on the +name, with a suffix depending on the max_outputs setting: + +* If `max_outputs` is 1, the summary value tag is '*name*/audio'. +* If `max_outputs` is greater than 1, the summary value tags are + generated sequentially as '*name*/audio/0', '*name*/audio/1', etc + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the generated node. Will also serve as a series name in +TensorBoard. +
+`tensor` + +A 3-D `float32` `Tensor` of shape `[batch_size, frames, channels]` +or a 2-D `float32` `Tensor` of shape `[batch_size, frames]`. +
+`sample_rate` + +A Scalar `float32` `Tensor` indicating the sample rate of the +signal in hertz. +
+`max_outputs` + +Max number of batch elements to generate audio for. +
+`collections` + +Optional list of ops.GraphKeys. The collections to add the +summary to. Defaults to [_ops.GraphKeys.SUMMARIES] +
+`family` + +Optional; if provided, used as the prefix of the summary tag name, +which controls the tab name used for display on Tensorboard. +
+ + + + + + + + + + + +
+A scalar `Tensor` of type `string`. The serialized `Summary` protocol +buffer. +
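
#### Example

A small graph-mode sketch with synthetic data, assuming TensorFlow 2.x with `tf.compat.v1.disable_v2_behavior()`: one second of random noise for two batch elements, summarized at a 16 kHz sample rate.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# [batch_size, frames] with values in [-1.0, 1.0].
waveform = tf.random.uniform([2, 16000], minval=-1.0, maxval=1.0)
audio_summary = tf.summary.audio('noise', waveform, sample_rate=16000.0,
                                 max_outputs=2)

with tf.Session() as sess:
    serialized = sess.run(audio_summary)  # serialized Summary protobuf
```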
+ diff --git a/site/en/api_docs/python/tf/compat/v1/summary/get_summary_description.md b/site/en/api_docs/python/tf/compat/v1/summary/get_summary_description.md new file mode 100644 index 00000000000..b6706f1e823 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/get_summary_description.md @@ -0,0 +1,90 @@ +description: Given a TensorSummary node_def, retrieve its SummaryDescription. + +
+ + +
+ +# tf.compat.v1.summary.get_summary_description + + + + + + + + + +Given a TensorSummary node_def, retrieve its SummaryDescription. + + + + + + + +When a Summary op is instantiated, a SummaryDescription of associated +metadata is stored in its NodeDef. This method retrieves the description. + + + + + + + + + + +
+`node_def` + +the node_def_pb2.NodeDef of a TensorSummary op +
+ + + + + + + + + + + +
+a summary_pb2.SummaryDescription +
+ + + + + + + + + + + + +
+`ValueError` + +if the node is not a summary op. +
+ + + + +#### Eager Compatibility +Not compatible with eager execution. To write TensorBoard +summaries under eager execution, use `tf.contrib.summary` instead. + diff --git a/site/en/api_docs/python/tf/compat/v1/summary/histogram.md b/site/en/api_docs/python/tf/compat/v1/summary/histogram.md new file mode 100644 index 00000000000..657c96cbaf9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/histogram.md @@ -0,0 +1,100 @@ +description: Outputs a Summary protocol buffer with a histogram. + +
+ + +
+ +# tf.compat.v1.summary.histogram + + + + + + + + + +Outputs a `Summary` protocol buffer with a histogram. + + + + + + + +Adding a histogram summary makes it possible to visualize your data's +distribution in TensorBoard. You can see a detailed explanation of the +TensorBoard histogram dashboard +[here](https://www.tensorflow.org/get_started/tensorboard_histograms). + +The generated +[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) +has one summary value containing a histogram for `values`. + +This op reports an `InvalidArgument` error if any value is not finite. + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the generated node. Will also serve as a series name in +TensorBoard. +
+`values` + +A real numeric `Tensor`. Any shape. Values to use to +build the histogram. +
+`collections` + +Optional list of graph collections keys. The new summary op is +added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. +
+`family` + +Optional; if provided, used as the prefix of the summary tag name, +which controls the tab name used for display on Tensorboard. +
+ + + + + + + + + + + +
+A scalar `Tensor` of type `string`. The serialized `Summary` protocol +buffer. +
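
#### Example

A brief graph-mode sketch, assuming TensorFlow 2.x with `tf.compat.v1.disable_v2_behavior()`: summarize the distribution of a random weight tensor and serialize the result.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

weights = tf.random.normal([1000])
hist_summary = tf.summary.histogram('weights', weights)

with tf.Session() as sess:
    serialized = sess.run(hist_summary)  # serialized Summary protobuf
```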
+ diff --git a/site/en/api_docs/python/tf/compat/v1/summary/image.md b/site/en/api_docs/python/tf/compat/v1/summary/image.md new file mode 100644 index 00000000000..1eebfb02678 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/image.md @@ -0,0 +1,123 @@ +description: Outputs a Summary protocol buffer with images. + +
+ + +
+ +# tf.compat.v1.summary.image + + + + + + + + + +Outputs a `Summary` protocol buffer with images. + + + + + + + +The summary has up to `max_outputs` summary values containing images. The +images are built from `tensor` which must be 4-D with shape `[batch_size, +height, width, channels]` and where `channels` can be: + +* 1: `tensor` is interpreted as Grayscale. +* 3: `tensor` is interpreted as RGB. +* 4: `tensor` is interpreted as RGBA. + +The images have the same number of channels as the input tensor. For float +input, the values are normalized one image at a time to fit in the range +`[0, 255]`. `uint8` values are unchanged. The op uses two different +normalization algorithms: + +* If the input values are all positive, they are rescaled so the largest one + is 255. + +* If any input value is negative, the values are shifted so input value 0.0 + is at 127. They are then rescaled so that either the smallest value is 0, + or the largest one is 255. + +The `tag` in the outputted Summary.Value protobufs is generated based on the +name, with a suffix depending on the max_outputs setting: + +* If `max_outputs` is 1, the summary value tag is '*name*/image'. +* If `max_outputs` is greater than 1, the summary value tags are + generated sequentially as '*name*/image/0', '*name*/image/1', etc. + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the generated node. Will also serve as a series name in +TensorBoard. +
+`tensor` + +A 4-D `uint8` or `float32` `Tensor` of shape `[batch_size, height, +width, channels]` where `channels` is 1, 3, or 4. +
+`max_outputs` + +Max number of batch elements to generate images for. +
+`collections` + +Optional list of ops.GraphKeys. The collections to add the +summary to. Defaults to [_ops.GraphKeys.SUMMARIES] +
+`family` + +Optional; if provided, used as the prefix of the summary tag name, +which controls the tab name used for display on Tensorboard. +
+ + + + + + + + + + + +
+A scalar `Tensor` of type `string`. The serialized `Summary` protocol +buffer. +
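
#### Example

A short graph-mode sketch, assuming TensorFlow 2.x with `tf.compat.v1.disable_v2_behavior()`: a batch of four random 28x28 grayscale images, so the per-image float normalization described above applies.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# [batch_size, height, width, channels]; channels=1 is rendered as grayscale.
images = tf.random.uniform([4, 28, 28, 1])
image_summary = tf.summary.image('examples', images, max_outputs=4)

with tf.Session() as sess:
    serialized = sess.run(image_summary)  # serialized Summary protobuf
```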
+ diff --git a/site/en/api_docs/python/tf/compat/v1/summary/initialize.md b/site/en/api_docs/python/tf/compat/v1/summary/initialize.md new file mode 100644 index 00000000000..86f8a48f500 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/initialize.md @@ -0,0 +1,93 @@ +description: Initializes summary writing for graph execution mode. + +
+ + +
+ +# tf.compat.v1.summary.initialize + + + + + + + + + +Initializes summary writing for graph execution mode. + + + + + + + +This operation is a no-op when executing eagerly. + +This helper method provides a higher-level alternative to using +`tf.contrib.summary.summary_writer_initializer_op` and +`tf.contrib.summary.graph`. + +Most users will also want to call tf.compat.v1.train.create_global_step +which can happen before or after this function is called. + + + + + + + + + + + + + +
+`graph` + +A tf.Graph or tf.compat.v1.GraphDef to output to the writer. +This function will not write the default graph by default. When +writing to an event log file, the associated step will be zero. +
+`session` + +So this method can call `tf.Session.run`. This defaults +to tf.compat.v1.get_default_session. +
+ + + + + + + + + + + + + + + +
+`RuntimeError` + +If the current thread has no default +`tf.contrib.summary.SummaryWriter`. +
+`ValueError`
+
+If `session` wasn't passed and there is no default session.
+
+ diff --git a/site/en/api_docs/python/tf/compat/v1/summary/merge.md b/site/en/api_docs/python/tf/compat/v1/summary/merge.md new file mode 100644 index 00000000000..010fd5dd0b6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/merge.md @@ -0,0 +1,112 @@ +description: Merges summaries. + +
+ + +
+ +# tf.compat.v1.summary.merge + + + + + + + + + +Merges summaries. + + + + + + + +This op creates a +[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) +protocol buffer that contains the union of all the values in the input +summaries. + +When the Op is run, it reports an `InvalidArgument` error if multiple values +in the summaries to merge use the same tag. + + + + + + + + + + + + + + + + +
+`inputs` + +A list of `string` `Tensor` objects containing serialized `Summary` +protocol buffers. +
+`collections` + +Optional list of graph collections keys. The new summary op is +added to these collections. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A scalar `Tensor` of type `string`. The serialized `Summary` protocol +buffer resulting from the merging. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If called with eager mode enabled. +
+ + + + +#### Eager Compatibility +Not compatible with eager execution. To write TensorBoard +summaries under eager execution, use `tf.contrib.summary` instead. + diff --git a/site/en/api_docs/python/tf/compat/v1/summary/merge_all.md b/site/en/api_docs/python/tf/compat/v1/summary/merge_all.md new file mode 100644 index 00000000000..083eef4cdd2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/merge_all.md @@ -0,0 +1,98 @@ +description: Merges all summaries collected in the default graph. + +
+ + +
+ +# tf.compat.v1.summary.merge_all + + + + + + + + + +Merges all summaries collected in the default graph. + + + + + + + + + + + + + + + + + + + + +
+`key` + +`GraphKey` used to collect the summaries. Defaults to +`GraphKeys.SUMMARIES`. +
+`scope` + +Optional scope used to filter the summary ops, using `re.match` +
+ + + + + + + + + + + +
+If no summaries were collected, returns None. Otherwise returns a scalar +`Tensor` of type `string` containing the serialized `Summary` protocol +buffer resulting from the merging. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If called with eager execution enabled. +
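
#### Example

A graph-mode sketch, assuming TensorFlow 2.x with `tf.compat.v1.disable_v2_behavior()` and a hypothetical log directory: register two scalar summaries, merge everything in the default graph, and hand the result to a `FileWriter`.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.constant(3.0)
tf.summary.scalar('x', x)
tf.summary.scalar('x_squared', x * x)

merged = tf.summary.merge_all()  # one op covering both summaries above
writer = tf.summary.FileWriter('/tmp/example_logs')
with tf.Session() as sess:
    writer.add_summary(sess.run(merged), global_step=0)
writer.close()
```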
+ + + + +#### Eager Compatibility +Not compatible with eager execution. To write TensorBoard +summaries under eager execution, use `tf.contrib.summary` instead. + diff --git a/site/en/api_docs/python/tf/compat/v1/summary/scalar.md b/site/en/api_docs/python/tf/compat/v1/summary/scalar.md new file mode 100644 index 00000000000..d98f69359fa --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/scalar.md @@ -0,0 +1,106 @@ +description: Outputs a Summary protocol buffer containing a single scalar value. + +
+ + +
+ +# tf.compat.v1.summary.scalar + + + + + + + + + +Outputs a `Summary` protocol buffer containing a single scalar value. + + + + + + + +The generated Summary has a Tensor.proto containing the input Tensor. + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the generated node. Will also serve as the series name in +TensorBoard. +
+`tensor` + +A real numeric Tensor containing a single value. +
+`collections` + +Optional list of graph collections keys. The new summary op is +added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. +
+`family` + +Optional; if provided, used as the prefix of the summary tag name, +which controls the tab name used for display on Tensorboard. +
+ + + + + + + + + + + +
+A scalar `Tensor` of type `string`. Which contains a `Summary` protobuf. +
+ + + + + + + + + + + + +
+`ValueError` + +If tensor has the wrong shape or type. +
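
#### Example

A minimal graph-mode sketch, assuming TensorFlow 2.x with `tf.compat.v1.disable_v2_behavior()`; the optional `family` argument groups the tag under a prefix in TensorBoard.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

accuracy = tf.constant(0.92)
acc_summary = tf.summary.scalar('accuracy', accuracy, family='metrics')

with tf.Session() as sess:
    serialized = sess.run(acc_summary)  # serialized Summary protobuf
```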
+ diff --git a/site/en/api_docs/python/tf/compat/v1/summary/tensor_summary.md b/site/en/api_docs/python/tf/compat/v1/summary/tensor_summary.md new file mode 100644 index 00000000000..34b791f5e0f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/tensor_summary.md @@ -0,0 +1,116 @@ +description: Outputs a Summary protocol buffer with a serialized tensor.proto. + +
+ + +
+ +# tf.compat.v1.summary.tensor_summary + + + + + + + + + +Outputs a `Summary` protocol buffer with a serialized tensor.proto. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the generated node. If display_name is not set, it will +also serve as the tag name in TensorBoard. (In that case, the tag +name will inherit tf name scopes.) +
+`tensor` + +A tensor of any type and shape to serialize. +
+`summary_description` + +A long description of the summary sequence. Markdown +is supported. +
+`collections` + +Optional list of graph collections keys. The new summary op is +added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. +
+`summary_metadata` + +Optional SummaryMetadata proto (which describes which +plugins may use the summary value). +
+`family` + +Optional; if provided, used as the prefix of the summary tag, +which controls the name used for display on TensorBoard when +display_name is not set. +
+`display_name` + +A string used to name this data in TensorBoard. If this is +not set, then the node name will be used instead. +
+ + + + + + + + + + + +
+A scalar `Tensor` of type `string`. The serialized `Summary` protocol +buffer. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/summary/text.md b/site/en/api_docs/python/tf/compat/v1/summary/text.md new file mode 100644 index 00000000000..358b80f937d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/summary/text.md @@ -0,0 +1,106 @@ +description: Summarizes textual data. + +
+ + +
+ +# tf.compat.v1.summary.text + + + + + + + + + +Summarizes textual data. + + + + + + + +Text data summarized via this plugin will be visible in the Text Dashboard +in TensorBoard. The standard TensorBoard Text Dashboard will render markdown +in the strings, and will automatically organize 1d and 2d tensors into tables. +If a tensor with more than 2 dimensions is provided, a 2d subarray will be +displayed along with a warning message. (Note that this behavior is not +intrinsic to the text summary api, but rather to the default TensorBoard text +plugin.) + + + + + + + + + + + + + + + + +
+`name` + +A name for the generated node. Will also serve as a series name in +TensorBoard. +
+`tensor` + +a string-type Tensor to summarize. +
+`collections` + +Optional list of ops.GraphKeys. The collections to add the +summary to. Defaults to [_ops.GraphKeys.SUMMARIES] +
+ + + + + + + + + + + +
+A TensorSummary op that is configured so that TensorBoard will recognize +that it contains textual data. The TensorSummary is a scalar `Tensor` of +type `string` which contains `Summary` protobufs. +
+ + + + + + + + + + + + +
+`ValueError` + +If tensor has the wrong type. +
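
#### Example

A short graph-mode sketch, assuming TensorFlow 2.x with `tf.compat.v1.disable_v2_behavior()`: a 2-D string tensor is rendered by the Text Dashboard as a table, with markdown interpreted in each cell.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

notes = tf.constant([['**bold** run', 'lr=0.1'],
                     ['plain run', 'lr=0.01']])
text_summary = tf.summary.text('experiment_notes', notes)

with tf.Session() as sess:
    serialized = sess.run(text_summary)  # shows up as a 2x2 table in TensorBoard
```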
+ diff --git a/site/en/api_docs/python/tf/compat/v1/sysconfig.md b/site/en/api_docs/python/tf/compat/v1/sysconfig.md new file mode 100644 index 00000000000..e749a8ca8ad --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/sysconfig.md @@ -0,0 +1,37 @@ +description: System configuration library. + +
+ + + + +
+ +# Module: tf.compat.v1.sysconfig + + + + + + + + + +System configuration library. + + + +## Functions + +[`get_compile_flags(...)`](../../../tf/sysconfig/get_compile_flags.md): Get the compilation flags for custom operators. + +[`get_include(...)`](../../../tf/sysconfig/get_include.md): Get the directory containing the TensorFlow C++ header files. + +[`get_lib(...)`](../../../tf/sysconfig/get_lib.md): Get the directory containing the TensorFlow framework library. + +[`get_link_flags(...)`](../../../tf/sysconfig/get_link_flags.md): Get the link flags for custom operators. + +## Other Members + +* `CXX11_ABI_FLAG = 0` +* `MONOLITHIC_BUILD = 0` diff --git a/site/en/api_docs/python/tf/compat/v1/tables_initializer.md b/site/en/api_docs/python/tf/compat/v1/tables_initializer.md new file mode 100644 index 00000000000..23dc4e4c586 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tables_initializer.md @@ -0,0 +1,79 @@ +description: Returns an Op that initializes all tables of the default graph. + +
+ + +
+ +# tf.compat.v1.tables_initializer + + + + + + + + + +Returns an Op that initializes all tables of the default graph. + + + + + + + + + +See the [Low Level +Intro](https://www.tensorflow.org/guide/low_level_intro#feature_columns) +guide, for an example of usage. + + + + + + + + + + +
+`name` + +Optional name for the initialization op. +
+ + + + + + + + + + + +
+An Op that initializes all tables. Note that if there are
+no tables the returned Op is a NoOp.
+
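
#### Example

A graph-mode sketch, assuming TensorFlow 2.x with `tf.compat.v1.disable_v2_behavior()`: a static vocabulary table must be initialized before lookups, and `tables_initializer` covers every table registered in the default graph.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

keys = tf.constant(['a', 'b', 'c'])
values = tf.constant([0, 1, 2], dtype=tf.int64)
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1)
ids = table.lookup(tf.constant(['b', 'z']))

with tf.Session() as sess:
    sess.run(tf.tables_initializer())
    print(sess.run(ids))  # [ 1 -1]
```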
+ diff --git a/site/en/api_docs/python/tf/compat/v1/test.md b/site/en/api_docs/python/tf/compat/v1/test.md new file mode 100644 index 00000000000..e1621b8c0c2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/test.md @@ -0,0 +1,59 @@ +description: Testing. + +
+ + +
+ +# Module: tf.compat.v1.test + + + + + + + + + +Testing. + + + +## Classes + +[`class Benchmark`](../../../tf/test/Benchmark.md): Abstract class that provides helpers for TensorFlow benchmarks. + +[`class StubOutForTesting`](../../../tf/compat/v1/test/StubOutForTesting.md): Support class for stubbing methods out for unit testing. + +[`class TestCase`](../../../tf/test/TestCase.md): Base class for tests that need to test TensorFlow. + +## Functions + +[`assert_equal_graph_def(...)`](../../../tf/compat/v1/test/assert_equal_graph_def.md): Asserts that two `GraphDef`s are (mostly) the same. + +[`benchmark_config(...)`](../../../tf/test/benchmark_config.md): Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer. + +[`compute_gradient(...)`](../../../tf/compat/v1/test/compute_gradient.md): Computes and returns the theoretical and numerical Jacobian. (deprecated) + +[`compute_gradient_error(...)`](../../../tf/compat/v1/test/compute_gradient_error.md): Computes the gradient error. (deprecated) + +[`create_local_cluster(...)`](../../../tf/test/create_local_cluster.md): Create and start local servers and return the associated `Server` objects. + +[`get_temp_dir(...)`](../../../tf/compat/v1/test/get_temp_dir.md): Returns a temporary directory for use during tests. + +[`gpu_device_name(...)`](../../../tf/test/gpu_device_name.md): Returns the name of a GPU device if available or the empty string. + +[`is_built_with_cuda(...)`](../../../tf/test/is_built_with_cuda.md): Returns whether TensorFlow was built with CUDA (GPU) support. + +[`is_built_with_gpu_support(...)`](../../../tf/test/is_built_with_gpu_support.md): Returns whether TensorFlow was built with GPU (i.e. CUDA or ROCm) support. + +[`is_built_with_rocm(...)`](../../../tf/test/is_built_with_rocm.md): Returns whether TensorFlow was built with ROCm (GPU) support. + +[`is_built_with_xla(...)`](../../../tf/test/is_built_with_xla.md): Returns whether TensorFlow was built with XLA support. + +[`is_gpu_available(...)`](../../../tf/test/is_gpu_available.md): Returns whether TensorFlow can access a GPU. (deprecated) + +[`main(...)`](../../../tf/test/main.md): Runs all unit tests. + +[`test_src_dir_path(...)`](../../../tf/compat/v1/test/test_src_dir_path.md): Creates an absolute test srcdir path given a relative path. + diff --git a/site/en/api_docs/python/tf/compat/v1/test/StubOutForTesting.md b/site/en/api_docs/python/tf/compat/v1/test/StubOutForTesting.md new file mode 100644 index 00000000000..52560a7e102 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/test/StubOutForTesting.md @@ -0,0 +1,258 @@ +description: Support class for stubbing methods out for unit testing. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.test.StubOutForTesting + + + + + + + + + +Support class for stubbing methods out for unit testing. + + + + + + + + +#### Sample Usage: + + + +You want os.path.exists() to always return true during testing. + + stubs = StubOutForTesting() + stubs.Set(os.path, 'exists', lambda x: 1) + ... + stubs.CleanUp() + +The above changes os.path.exists into a lambda that returns 1. Once +the ... part of the code finishes, the CleanUp() looks up the old +value of os.path.exists and restores it. + +## Methods + +

CleanUp

+ +View source + + + +Undoes all SmartSet() & Set() calls, restoring original definitions. + + +

Set

+ +View source + + + +In parent, replace child_name's old definition with new_child. + +The parent could be a module when the child is a function at +module scope. Or the parent could be a class when a class' method +is being replaced. The named child is set to new_child, while the +prior definition is saved away for later, when UnsetAll() is +called. + +This method supports the case where child_name is a staticmethod or a +classmethod of parent. + + + + + + + + + + + + + + + + +
Args
+`parent` + +The context in which the attribute child_name is to be changed. +
+`child_name` + +The name of the attribute to change. +
+`new_child` + +The new value of the attribute. +
+ + + +

SmartSet

+
+View source
+
+
+
+Replace obj.attr_name with new_attr.
+
+This method is smart and works at the module, class, and instance level
+while preserving proper inheritance. It will not stub out C types, however,
+unless that has been explicitly allowed by the type.
+
+This method supports the case where attr_name is a staticmethod or a
+classmethod of obj.
+
+#### Notes:
+
+- If obj is an instance, then it is its class that will actually be
+  stubbed. Note that the method Set() does not do that: if obj is
+  an instance, it (and not its class) will be stubbed.
+- The stubbing is using the builtin getattr and setattr. So, the __get__
+  and __set__ will be called when stubbing (a better approach would
+  probably be to manipulate obj.__dict__ instead of getattr() and
+  setattr()).
+
+
Args
+`obj` + +The object whose attributes we want to modify. +
+`attr_name` + +The name of the attribute to modify. +
+`new_attr` + +The new value for the attribute. +
+ + + + + + + + + + + + +
Raises
+`AttributeError` + +If the attribute cannot be found. +
+ + + +

SmartUnsetAll

+ +View source + + + +Reverses SmartSet() calls, restoring things to original definitions. + +This method is automatically called when the StubOutForTesting() +object is deleted; there is no need to call it explicitly. + +It is okay to call SmartUnsetAll() repeatedly, as later calls have +no effect if no SmartSet() calls have been made. + +

UnsetAll

+ +View source + + + +Reverses Set() calls, restoring things to their original definitions. + +This method is automatically called when the StubOutForTesting() +object is deleted; there is no need to call it explicitly. + +It is okay to call UnsetAll() repeatedly, as later calls have no +effect if no Set() calls have been made. + +
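
#### Example

A small sketch of the context-manager form, based on the `__enter__`/`__exit__` methods below: stubs installed inside the `with` block are restored automatically on exit.

```python
import os.path

import tensorflow.compat.v1 as tf

with tf.test.StubOutForTesting() as stubs:
    stubs.Set(os.path, 'exists', lambda path: True)
    assert os.path.exists('/definitely/not/a/real/path')

# Outside the block the original os.path.exists has been restored.
assert not os.path.exists('/definitely/not/a/real/path')
```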

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/test/assert_equal_graph_def.md b/site/en/api_docs/python/tf/compat/v1/test/assert_equal_graph_def.md new file mode 100644 index 00000000000..e9564c46cd0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/test/assert_equal_graph_def.md @@ -0,0 +1,100 @@ +description: Asserts that two GraphDefs are (mostly) the same. + +
+ + +
+ +# tf.compat.v1.test.assert_equal_graph_def + + + + + + + + + +Asserts that two `GraphDef`s are (mostly) the same. + + + + + + + +Compares two `GraphDef` protos for equality, ignoring versions and ordering of +nodes, attrs, and control inputs. Node names are used to match up nodes +between the graphs, so the naming of nodes must be consistent. + + + + + + + + + + + + + + + + + + + +
+`actual` + +The `GraphDef` we have. +
+`expected` + +The `GraphDef` we expected. +
+`checkpoint_v2` + +boolean determining whether to ignore randomized attribute +values that appear in V2 checkpoints. +
+`hash_table_shared_name` + +boolean determining whether to ignore randomized +shared_names that appear in HashTableV2 op defs. +
+ + + + + + + + + + + + + + + +
+`AssertionError` + +If the `GraphDef`s do not match. +
+`TypeError` + +If either argument is not a `GraphDef`. +
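
#### Example

A minimal sketch, assuming TensorFlow 2.x with `tf.compat.v1.disable_v2_behavior()`: two graphs built with the same ops and node names compare as equal, since versions and node ordering are ignored.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def build_graph_def():
    g = tf.Graph()
    with g.as_default():
        a = tf.constant(1.0, name='a')
        b = tf.constant(2.0, name='b')
        tf.add(a, b, name='sum')
    return g.as_graph_def()

# Passes silently; raises AssertionError if the GraphDefs diverge.
tf.test.assert_equal_graph_def(build_graph_def(), build_graph_def())
```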
+ diff --git a/site/en/api_docs/python/tf/compat/v1/test/compute_gradient.md b/site/en/api_docs/python/tf/compat/v1/test/compute_gradient.md new file mode 100644 index 00000000000..4f7fa2d4e04 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/test/compute_gradient.md @@ -0,0 +1,137 @@ +description: Computes and returns the theoretical and numerical Jacobian. (deprecated) + +
+ + +
+ +# tf.compat.v1.test.compute_gradient + + + + + + + + + +Computes and returns the theoretical and numerical Jacobian. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so code change is needed. + +If `x` or `y` is complex, the Jacobian will still be real but the +corresponding Jacobian dimension(s) will be twice as large. This is required +even if both input and output is complex since TensorFlow graphs are not +necessarily holomorphic, and may have gradients not expressible as complex +numbers. For example, if `x` is complex with shape `[m]` and `y` is complex +with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with + + J[:m, :n] = d(Re y)/d(Re x) + J[:m, n:] = d(Im y)/d(Re x) + J[m:, :n] = d(Re y)/d(Im x) + J[m:, n:] = d(Im y)/d(Im x) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +a tensor or list of tensors +
+`x_shape` + +the dimensions of x as a tuple or an array of ints. If x is a list, +then this is the list of shapes. +
+`y` + +a tensor +
+`y_shape` + +the dimensions of y as a tuple or an array of ints. +
+`x_init_value` + +(optional) a numpy array of the same shape as "x" +representing the initial value of x. If x is a list, this should be a list +of numpy arrays. If this is none, the function will pick a random tensor +as the initial value. +
+`delta` + +(optional) the amount of perturbation. +
+`init_targets` + +list of targets to run to initialize model params. +
+`extra_feed_dict` + +dict that allows fixing specified tensor values +during the Jacobian calculation. +
+ + + + + + + + + + + +
+Two 2-d numpy arrays representing the theoretical and numerical +Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns +where "x_size" is the number of elements in x and "y_size" is the +number of elements in y. If x is a list, returns a list of two numpy arrays. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/test/compute_gradient_error.md b/site/en/api_docs/python/tf/compat/v1/test/compute_gradient_error.md new file mode 100644 index 00000000000..b92023ab1df --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/test/compute_gradient_error.md @@ -0,0 +1,133 @@ +description: Computes the gradient error. (deprecated) + +
+ + +
+ +# tf.compat.v1.test.compute_gradient_error + + + + + + + + + +Computes the gradient error. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so code change is needed. + +Computes the maximum error for dy/dx between the computed Jacobian and the +numerically estimated Jacobian. + +This function will modify the tensors passed in as it adds more operations +and hence changing the consumers of the operations of the input tensors. + +This function adds operations to the current session. To compute the error +using a particular device, such as a GPU, use the standard methods for +setting a device (e.g. using with sess.graph.device() or setting a device +function in the session constructor). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +a tensor or list of tensors +
+`x_shape` + +the dimensions of x as a tuple or an array of ints. If x is a list, +then this is the list of shapes. +
+`y` + +a tensor +
+`y_shape` + +the dimensions of y as a tuple or an array of ints. +
+`x_init_value` + +(optional) a numpy array of the same shape as "x" +representing the initial value of x. If x is a list, this should be a list +of numpy arrays. If this is none, the function will pick a random tensor +as the initial value. +
+`delta` + +(optional) the amount of perturbation. +
+`init_targets` + +list of targets to run to initialize model params. +
+`extra_feed_dict` + +dict that allows fixing specified tensor values +during the Jacobian calculation. +
+ + + + + + + + + + + +
+The maximum error between the two Jacobians.
+
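
#### Example

A deprecated-but-illustrative graph-mode sketch, assuming TensorFlow 2.x with `tf.compat.v1.disable_v2_behavior()`: check the gradient of `y = x**2` numerically against the symbolic gradient inside a session.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float64, shape=[3])
y = tf.square(x)

with tf.Session():
    # A random x is chosen internally because x_init_value is not given.
    error = tf.test.compute_gradient_error(x, [3], y, [3])
    print('max gradient error:', error)  # should be close to zero
```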
+ diff --git a/site/en/api_docs/python/tf/compat/v1/test/get_temp_dir.md b/site/en/api_docs/python/tf/compat/v1/test/get_temp_dir.md new file mode 100644 index 00000000000..4dd7d893145 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/test/get_temp_dir.md @@ -0,0 +1,46 @@ +description: Returns a temporary directory for use during tests. + +
+ + +
+ +# tf.compat.v1.test.get_temp_dir + + + + + + + + + +Returns a temporary directory for use during tests. + + + + + + + +There is no need to delete the directory after the test. + + + + + + + + + +
+The temporary directory. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/test/test_src_dir_path.md b/site/en/api_docs/python/tf/compat/v1/test/test_src_dir_path.md new file mode 100644 index 00000000000..590185f880e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/test/test_src_dir_path.md @@ -0,0 +1,65 @@ +description: Creates an absolute test srcdir path given a relative path. + +
+ + +
+ +# tf.compat.v1.test.test_src_dir_path + + + + + + + + + +Creates an absolute test srcdir path given a relative path. + + + + + + + + + + + + + + + + + +
+`relative_path` + +a path relative to tensorflow root. +e.g. "core/platform". +
+ + + + + + + + + + + +
+An absolute path to the linked in runfiles. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/to_bfloat16.md b/site/en/api_docs/python/tf/compat/v1/to_bfloat16.md new file mode 100644 index 00000000000..4c48d6a3e45 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/to_bfloat16.md @@ -0,0 +1,92 @@ +description: Casts a tensor to type bfloat16. (deprecated) + +
+ + +
+ +# tf.compat.v1.to_bfloat16 + + + + + + + + + +Casts a tensor to type `bfloat16`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.cast instead. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor` or `IndexedSlices`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with +type `bfloat16`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `x` cannot be cast to the `bfloat16`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/to_complex128.md b/site/en/api_docs/python/tf/compat/v1/to_complex128.md new file mode 100644 index 00000000000..7407095b5ac --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/to_complex128.md @@ -0,0 +1,92 @@ +description: Casts a tensor to type complex128. (deprecated) + +
+ + +
+ +# tf.compat.v1.to_complex128 + + + + + + + + + +Casts a tensor to type `complex128`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.cast instead. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor` or `IndexedSlices`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with +type `complex128`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `x` cannot be cast to the `complex128`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/to_complex64.md b/site/en/api_docs/python/tf/compat/v1/to_complex64.md new file mode 100644 index 00000000000..2db34a2b56c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/to_complex64.md @@ -0,0 +1,92 @@ +description: Casts a tensor to type complex64. (deprecated) + +
+ + +
+ +# tf.compat.v1.to_complex64 + + + + + + + + + +Casts a tensor to type `complex64`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.cast instead. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor` or `IndexedSlices`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with +type `complex64`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `x` cannot be cast to the `complex64`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/to_double.md b/site/en/api_docs/python/tf/compat/v1/to_double.md new file mode 100644 index 00000000000..e38ca8ad951 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/to_double.md @@ -0,0 +1,92 @@ +description: Casts a tensor to type float64. (deprecated) + +
+ + +
+ +# tf.compat.v1.to_double + + + + + + + + + +Casts a tensor to type `float64`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.cast instead. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor` or `IndexedSlices`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with +type `float64`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `x` cannot be cast to the `float64`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/to_float.md b/site/en/api_docs/python/tf/compat/v1/to_float.md new file mode 100644 index 00000000000..882161732cc --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/to_float.md @@ -0,0 +1,92 @@ +description: Casts a tensor to type float32. (deprecated) + +
+ + +
+ +# tf.compat.v1.to_float + + + + + + + + + +Casts a tensor to type `float32`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.cast instead. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor` or `IndexedSlices`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with +type `float32`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `x` cannot be cast to the `float32`. +
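
#### Example

A tiny sketch, assuming TensorFlow 2.x, showing the deprecated call next to its recommended replacement:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])        # int32

y_old = tf.compat.v1.to_float(x)  # deprecated; logs a deprecation warning
y_new = tf.cast(x, tf.float32)    # preferred spelling

print(y_old.dtype, y_new.dtype)   # both float32
```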
+ diff --git a/site/en/api_docs/python/tf/compat/v1/to_int32.md b/site/en/api_docs/python/tf/compat/v1/to_int32.md new file mode 100644 index 00000000000..c546a48b9a3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/to_int32.md @@ -0,0 +1,92 @@ +description: Casts a tensor to type int32. (deprecated) + +
+ + +
+ +# tf.compat.v1.to_int32 + + + + + + + + + +Casts a tensor to type `int32`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.cast instead. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor` or `IndexedSlices`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with +type `int32`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `x` cannot be cast to the `int32`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/to_int64.md b/site/en/api_docs/python/tf/compat/v1/to_int64.md new file mode 100644 index 00000000000..9b6f931d5f9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/to_int64.md @@ -0,0 +1,92 @@ +description: Casts a tensor to type int64. (deprecated) + +
+ + +
+ +# tf.compat.v1.to_int64 + + + + + + + + + +Casts a tensor to type `int64`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.cast instead. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor` or `IndexedSlices`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with +type `int64`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `x` cannot be cast to the `int64`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/tpu.md b/site/en/api_docs/python/tf/compat/v1/tpu.md new file mode 100644 index 00000000000..e32a7db5a6d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu.md @@ -0,0 +1,53 @@ +description: Ops related to Tensor Processing Units. + +
+ + +
+ +# Module: tf.compat.v1.tpu + + + + + + + + + +Ops related to Tensor Processing Units. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/tpu/experimental.md) module: Public API for tf.tpu.experimental namespace. + +## Classes + +[`class CrossShardOptimizer`](../../../tf/compat/v1/tpu/CrossShardOptimizer.md): An optimizer that averages gradients across TPU shards. + +[`class PaddingSpec`](../../../tf/compat/v1/tpu/PaddingSpec.md): Represents the type of padding policies for tpu.replicate. + +## Functions + +[`batch_parallel(...)`](../../../tf/compat/v1/tpu/batch_parallel.md): Shards `computation` along the batch dimension for parallel execution. + +[`bfloat16_scope(...)`](../../../tf/compat/v1/tpu/bfloat16_scope.md): Scope class for bfloat16 variables so that the model uses custom getter. + +[`core(...)`](../../../tf/compat/v1/tpu/core.md): Returns the device name for a core in a replicated TPU computation. + +[`cross_replica_sum(...)`](../../../tf/compat/v1/tpu/cross_replica_sum.md): Sum the input tensor across replicas according to group_assignment. + +[`initialize_system(...)`](../../../tf/compat/v1/tpu/initialize_system.md): Initializes a distributed TPU system for use with TensorFlow. + +[`outside_compilation(...)`](../../../tf/compat/v1/tpu/outside_compilation.md): Builds part of a computation outside any current TPU replicate scope. + +[`replicate(...)`](../../../tf/compat/v1/tpu/replicate.md): Builds a graph operator that runs a replicated TPU computation. + +[`rewrite(...)`](../../../tf/compat/v1/tpu/rewrite.md): Rewrites `computation` for execution on a TPU system. + +[`shard(...)`](../../../tf/compat/v1/tpu/shard.md): Shards `computation` for parallel execution. + +[`shutdown_system(...)`](../../../tf/compat/v1/tpu/shutdown_system.md): Shuts down a running a distributed TPU system. + diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/CrossShardOptimizer.md b/site/en/api_docs/python/tf/compat/v1/tpu/CrossShardOptimizer.md new file mode 100644 index 00000000000..f0cd0631011 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/CrossShardOptimizer.md @@ -0,0 +1,551 @@ +description: An optimizer that averages gradients across TPU shards. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.tpu.CrossShardOptimizer + + + + + + + + + +An optimizer that averages gradients across TPU shards. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`opt` + +An existing `Optimizer` to encapsulate. +
+`reduction` + +The reduction to apply to the shard losses. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "CrossShardOptimizer". +
+`group_assignment`
+
+Optional 2d int32 lists with shape
+[num_groups, num_replicas_per_group] which describes how to apply the
+optimizer to subgroups.
+
+ + + + + + + + + + + + +
+`ValueError` + +If reduction is not a valid cross-shard reduction. +
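
#### Example

A schematic sketch of how the wrapper is typically used inside a per-shard model function that would be handed to `tf.compat.v1.tpu.shard()` or a TPUEstimator; the layer, loss, and learning rate here are illustrative stand-ins, and the function is only defined, not run.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def train_step(features, labels):
    """Per-shard model; the caller (e.g. tpu.shard) sets up the shard context."""
    logits = tf.layers.dense(features, 10)
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits))

    # Wrap any tf.compat.v1.train optimizer; gradient sums across shards
    # happen inside apply_gradients().
    optimizer = tf.tpu.CrossShardOptimizer(
        tf.train.GradientDescentOptimizer(learning_rate=0.01))
    return optimizer.minimize(
        loss, global_step=tf.train.get_or_create_global_step())
```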
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +Calls tpu_ops.cross_replica_sum() to sum gradient contributions across +replicas, and then applies the real optimizer. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +compute_gradients(). +
+`global_step` + +Optional Variable to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Default to the +name passed to the Optimizer constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the gradients. If `global_step` was not None, +that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the grads_and_vars is malformed. +
+ + + +

compute_gradients

+ +View source + + + +Compute gradients of "loss" for the variables in "var_list". + +This simply wraps `compute_gradients()` from the real optimizer. The +gradients will be aggregated in `apply_gradients()` so that user can +modify the gradients like clipping with per replica global norm if needed. +The global norm with aggregated gradients can be bad as one replica's huge +gradients can hurt the gradients from other replicas. + +When the CrossShardOptimizer is constructed with +`reduction == losses.Reduction.MEAN` (default), this function scales the +loss by `1.0 / num_shards` before computing the gradients. Assuming the +optimizer uses the default implementation of `compute_gradients()`, the +gradients of the scaled loss are scaled by `1.0 / num_shards` compared to +the gradients of the original loss. This scaling factor is important because +`apply_gradients()` sums gradients across shards, rather than averaging +them. However, the scaling factor must be taken into account when clipping +the norm of the gradients or performing other postprocessing. + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKey.TRAINABLE_VARIABLES`. +
+`**kwargs` + +Keyword arguments for compute_gradients(). +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If not within a tpu_shard_context or group_assignment is +invalid. +
+ + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named "name" created for "var" by the Optimizer. + +This simply wraps the get_slot() from the actual optimizer. + + + + + + + + + + + + + +
Args
+`*args` + +Arguments for get_slot(). +
+`**kwargs` + +Keyword arguments for get_slot(). +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +This simply wraps the get_slot_names() from the actual optimizer. + + + + + + + + + + + + + +
Args
+
+`*args`
+
+Arguments for get_slot_names().
+
+
+`**kwargs`
+
+Keyword arguments for get_slot_names().
+
+ + + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+
+View source
+
+
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +

variables

+ +View source + + + +Forwarding the variables from the underlying optimizer. + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/PaddingSpec.md b/site/en/api_docs/python/tf/compat/v1/tpu/PaddingSpec.md new file mode 100644 index 00000000000..3de7413e5f1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/PaddingSpec.md @@ -0,0 +1,33 @@ +description: Represents the type of padding policies for tpu.replicate. + +
+ + + + +
+ +# tf.compat.v1.tpu.PaddingSpec + + + + + + + + + +Represents the type of padding policies for tpu.replicate. + + + + +## Class Variables + +* `AUTO = ` +* `POWER_OF_TWO = ` diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/batch_parallel.md b/site/en/api_docs/python/tf/compat/v1/tpu/batch_parallel.md new file mode 100644 index 00000000000..604c8c51ed2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/batch_parallel.md @@ -0,0 +1,143 @@ +description: Shards computation along the batch dimension for parallel execution. + +
+ + +
+ +# tf.compat.v1.tpu.batch_parallel + + + + + + + + + +Shards `computation` along the batch dimension for parallel execution. + + + + + + + +Convenience wrapper around shard(). + +`inputs` must be a list of Tensors or None (equivalent to an empty list). +Each input is split into `num_shards` pieces along the 0-th dimension, and +computation is applied to each shard in parallel. + +Tensors are broadcast to all shards if they are lexically captured by +`computation`. e.g., + +x = tf.constant(7) +def computation(): + return x + 3 +... = shard(computation, ...) + +The outputs from all shards are concatenated back together along their 0-th +dimension. + +Inputs and outputs of the computation must be at least rank-1 Tensors. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`computation` + +A Python function that builds a computation to apply to each +shard of the input. +
+`inputs` + +A list of input tensors or None (equivalent to an empty list). The +0-th dimension of each Tensor must have size divisible by `num_shards`. +
+`num_shards` + +The number of shards. +
+`infeed_queue` + +If not `None`, the `InfeedQueue` from which to append a tuple +of arguments as inputs to `computation`. +
+`device_assignment` + +If not `None`, a `DeviceAssignment` describing the +mapping between logical cores in the computation with physical cores in +the TPU topology. Uses a default device assignment if `None`. The +`DeviceAssignment` may be omitted if each shard of the computation uses +only one core, and there is either only one shard, or the number of shards +is equal to the number of cores in the TPU system. +
+`name` + +(Deprecated) Does nothing. +
+ + + + + + + + + + + +
+A list of output tensors. +
+ + + + + + + + + + + + +
+`ValueError` + +If `num_shards <= 0` +
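+
+For example, a minimal sketch that splits a batch of 8 examples across 2
+shards; the shapes and the computation are illustrative only, and the graph
+only executes on an actual TPU system:
+
+```python
+import tensorflow.compat.v1 as tf
+
+def computation(images):
+  # Per-shard computation: each shard sees 4 of the 8 examples.
+  return tf.reduce_mean(images, axis=[1, 2, 3], keepdims=True)
+
+images = tf.zeros([8, 28, 28, 1])
+# Returns a list of output tensors, concatenated along dimension 0.
+outputs = tf.tpu.batch_parallel(computation, inputs=[images], num_shards=2)
+```
+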
+ diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/bfloat16_scope.md b/site/en/api_docs/python/tf/compat/v1/tpu/bfloat16_scope.md new file mode 100644 index 00000000000..0a2b62d70e3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/bfloat16_scope.md @@ -0,0 +1,34 @@ +description: Scope class for bfloat16 variables so that the model uses custom getter. + +
+ + +
+ +# tf.compat.v1.tpu.bfloat16_scope + + + + + + + + + +Scope class for bfloat16 variables so that the model uses custom getter. + + + + + + + +This enables variables to be read as bfloat16 type when using get_variable. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/core.md b/site/en/api_docs/python/tf/compat/v1/tpu/core.md new file mode 100644 index 00000000000..dda97da5b4c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/core.md @@ -0,0 +1,65 @@ +description: Returns the device name for a core in a replicated TPU computation. + +
+ + +
+ +# tf.compat.v1.tpu.core + + + + + + + + + +Returns the device name for a core in a replicated TPU computation. + + + + + + + + + + + + + + + + + +
+`num` + +the virtual core number within each replica to which operators should +be assigned. +
+ + + + + + + + + + + +
+A device name, suitable for passing to tf.device(). +
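+
+For example, a sketch of pinning an op to virtual core 1 of each replica;
+this is only meaningful inside a replicated TPU computation:
+
+```python
+import tensorflow.compat.v1 as tf
+
+def computation(x):
+  # Assign the multiply to virtual core 1 of every replica.
+  with tf.device(tf.tpu.core(1)):
+    return x * 2.0
+```
+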
+ diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/cross_replica_sum.md b/site/en/api_docs/python/tf/compat/v1/tpu/cross_replica_sum.md new file mode 100644 index 00000000000..0d6b917fb2c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/cross_replica_sum.md @@ -0,0 +1,80 @@ +description: Sum the input tensor across replicas according to group_assignment. + +
+ + +
+ +# tf.compat.v1.tpu.cross_replica_sum + + + + + + + + + +Sum the input tensor across replicas according to group_assignment. + + + + + + + + + + + + + + + + + + + + + + + +
+
+`x`
+
+The local tensor to be summed across replicas.
+
+`group_assignment` + +Optional 2d int32 lists with shape [num_groups, +num_replicas_per_group]. `group_assignment[i]` represents the replica +ids in the ith subgroup. +
+`name` + +Optional op name. +
+ + + + + + + + + + + +
+A `Tensor` which is summed across replicas. +
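+
+For example, a sketch of summing a per-replica value, first over all
+replicas and then only within groups of two (the group assignment below
+assumes four replicas):
+
+```python
+import tensorflow.compat.v1 as tf
+
+def computation(x):
+  # Sum x over every replica participating in the computation.
+  total = tf.tpu.cross_replica_sum(x)
+  # Sum x only within each pair of replicas: {0, 1} and {2, 3}.
+  pair_total = tf.tpu.cross_replica_sum(x, group_assignment=[[0, 1], [2, 3]])
+  return total, pair_total
+```
+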
+ diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/experimental.md b/site/en/api_docs/python/tf/compat/v1/tpu/experimental.md new file mode 100644 index 00000000000..39999c36b88 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/experimental.md @@ -0,0 +1,43 @@ +description: Public API for tf.tpu.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.tpu.experimental + + + + + + + + + +Public API for tf.tpu.experimental namespace. + + + +## Classes + +[`class AdagradParameters`](../../../../tf/compat/v1/tpu/experimental/AdagradParameters.md): Optimization parameters for Adagrad with TPU embeddings. + +[`class AdamParameters`](../../../../tf/compat/v1/tpu/experimental/AdamParameters.md): Optimization parameters for Adam with TPU embeddings. + +[`class DeviceAssignment`](../../../../tf/tpu/experimental/DeviceAssignment.md): Mapping from logical cores in a computation to the physical TPU topology. + +[`class FtrlParameters`](../../../../tf/compat/v1/tpu/experimental/FtrlParameters.md): Optimization parameters for Ftrl with TPU embeddings. + +[`class StochasticGradientDescentParameters`](../../../../tf/compat/v1/tpu/experimental/StochasticGradientDescentParameters.md): Optimization parameters for stochastic gradient descent for TPU embeddings. + +## Functions + +[`embedding_column(...)`](../../../../tf/compat/v1/tpu/experimental/embedding_column.md): TPU version of tf.compat.v1.feature_column.embedding_column. + +[`initialize_tpu_system(...)`](../../../../tf/tpu/experimental/initialize_tpu_system.md): Initialize the TPU devices. + +[`shared_embedding_columns(...)`](../../../../tf/compat/v1/tpu/experimental/shared_embedding_columns.md): TPU version of tf.compat.v1.feature_column.shared_embedding_columns. + +[`shutdown_tpu_system(...)`](../../../../tf/tpu/experimental/shutdown_tpu_system.md): Shuts down the TPU devices. + diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/experimental/AdagradParameters.md b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/AdagradParameters.md new file mode 100644 index 00000000000..3ee28fc97c6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/AdagradParameters.md @@ -0,0 +1,115 @@ +description: Optimization parameters for Adagrad with TPU embeddings. + +
+ + + +
+ +# tf.compat.v1.tpu.experimental.AdagradParameters + + + + + + + + + +Optimization parameters for Adagrad with TPU embeddings. + + + + + + + +Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the +`optimization_parameters` argument to set the optimizer and its parameters. +See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec` +for more details. + +``` +estimator = tf.estimator.tpu.TPUEstimator( + ... + embedding_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec( + ... + optimization_parameters=tf.tpu.experimental.AdagradParameters(0.1), + ...)) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +used for updating embedding table. +
+`initial_accumulator` + +initial accumulator for Adagrad. +
+
+`use_gradient_accumulation`
+
+setting this to `False` makes the embedding
+gradient calculation less accurate but faster. Please see
+`optimization_parameters.proto` for details.
+
+`clip_weight_min` + +the minimum value to clip by; None means -infinity. +
+`clip_weight_max` + +the maximum value to clip by; None means +infinity. +
+`weight_decay_factor` + +amount of weight decay to apply; None means that the +weights are not decayed. +
+`multiply_weight_decay_factor_by_learning_rate` + +if true, +`weight_decay_factor` is multiplied by the current learning rate. +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/experimental/AdamParameters.md b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/AdamParameters.md new file mode 100644 index 00000000000..693524ab25b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/AdamParameters.md @@ -0,0 +1,148 @@ +description: Optimization parameters for Adam with TPU embeddings. + +
+ + + +
+ +# tf.compat.v1.tpu.experimental.AdamParameters + + + + + + + + + +Optimization parameters for Adam with TPU embeddings. + + + + + + + +Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the +`optimization_parameters` argument to set the optimizer and its parameters. +See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec` +for more details. + +``` +estimator = tf.estimator.tpu.TPUEstimator( + ... + embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec( + ... + optimization_parameters=tf.tpu.experimental.AdamParameters(0.1), + ...)) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +a floating point value. The learning rate. +
+`beta1` + +A float value. +The exponential decay rate for the 1st moment estimates. +
+`beta2` + +A float value. +The exponential decay rate for the 2nd moment estimates. +
+`epsilon` + +A small constant for numerical stability. +
+`lazy_adam` + +Use lazy Adam instead of Adam. Lazy Adam trains faster. +Please see `optimization_parameters.proto` for details. +
+`sum_inside_sqrt` + +This improves training speed. Please see +`optimization_parameters.proto` for details. +
+
+`use_gradient_accumulation`
+
+setting this to `False` makes the embedding
+gradient calculation less accurate but faster. Please see
+`optimization_parameters.proto` for details.
+
+`clip_weight_min` + +the minimum value to clip by; None means -infinity. +
+`clip_weight_max` + +the maximum value to clip by; None means +infinity. +
+`weight_decay_factor` + +amount of weight decay to apply; None means that the +weights are not decayed. +
+`multiply_weight_decay_factor_by_learning_rate` + +if true, +`weight_decay_factor` is multiplied by the current learning rate. +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/experimental/FtrlParameters.md b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/FtrlParameters.md new file mode 100644 index 00000000000..92d8111730c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/FtrlParameters.md @@ -0,0 +1,143 @@ +description: Optimization parameters for Ftrl with TPU embeddings. + +
+ + + +
+ +# tf.compat.v1.tpu.experimental.FtrlParameters + + + + + + + + + +Optimization parameters for Ftrl with TPU embeddings. + + + + + + + +Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the +`optimization_parameters` argument to set the optimizer and its parameters. +See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec` +for more details. + +``` +estimator = tf.estimator.tpu.TPUEstimator( + ... + embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec( + ... + optimization_parameters=tf.tpu.experimental.FtrlParameters(0.1), + ...)) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +a floating point value. The learning rate. +
+
+`learning_rate_power`
+
+A float value, must be less than or equal to zero.
+Controls how the learning rate decreases during training. Use zero for
+a fixed learning rate. See section 3.1 in the
+[paper](https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf).
+
+`initial_accumulator_value` + +The starting value for accumulators. +Only zero or positive values are allowed. +
+`l1_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`l2_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+
+`use_gradient_accumulation`
+
+setting this to `False` makes the embedding
+gradient calculation less accurate but faster. Please see
+`optimization_parameters.proto` for details.
+
+`clip_weight_min` + +the minimum value to clip by; None means -infinity. +
+`clip_weight_max` + +the maximum value to clip by; None means +infinity. +
+`weight_decay_factor` + +amount of weight decay to apply; None means that the +weights are not decayed. +
+`multiply_weight_decay_factor_by_learning_rate` + +if true, +`weight_decay_factor` is multiplied by the current learning rate. +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/experimental/StochasticGradientDescentParameters.md b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/StochasticGradientDescentParameters.md new file mode 100644 index 00000000000..f7b19f452c7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/StochasticGradientDescentParameters.md @@ -0,0 +1,97 @@ +description: Optimization parameters for stochastic gradient descent for TPU embeddings. + +
+ + + +
+ +# tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters + + + + + + + + + +Optimization parameters for stochastic gradient descent for TPU embeddings. + + + + + + + +Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the +`optimization_parameters` argument to set the optimizer and its parameters. +See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec` +for more details. + +``` +estimator = tf.estimator.tpu.TPUEstimator( + ... + embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec( + ... + optimization_parameters=( + tf.tpu.experimental.StochasticGradientDescentParameters(0.1)))) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +a floating point value. The learning rate. +
+`clip_weight_min` + +the minimum value to clip by; None means -infinity. +
+`clip_weight_max` + +the maximum value to clip by; None means +infinity. +
+`weight_decay_factor` + +amount of weight decay to apply; None means that the +weights are not decayed. +
+`multiply_weight_decay_factor_by_learning_rate` + +if true, +`weight_decay_factor` is multiplied by the current learning rate. +
+ + + diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/experimental/embedding_column.md b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/embedding_column.md new file mode 100644 index 00000000000..e7fc8005c81 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/embedding_column.md @@ -0,0 +1,205 @@ +description: TPU version of tf.compat.v1.feature_column.embedding_column. + +
+ + +
+
+# tf.compat.v1.tpu.experimental.embedding_column
+
+
+
+
+
+
+
+
+
+TPU version of tf.compat.v1.feature_column.embedding_column.
+
+
+
+
+
+
+
+Note that the interface for `tf.tpu.experimental.embedding_column` is
+different from that of tf.compat.v1.feature_column.embedding_column: The
+following arguments are NOT supported: `ckpt_to_load_from`,
+`tensor_name_in_ckpt`, `max_norm` and `trainable`.
+
+Use this function in place of tf.compat.v1.feature_column.embedding_column
+when you want to use the TPU to accelerate your embedding lookups via TPU
+embeddings.
+
+```
+column = tf.feature_column.categorical_column_with_identity(...)
+tpu_column = tf.tpu.experimental.embedding_column(column, 10)
+...
+def model_fn(features):
+  dense_feature = tf.keras.layers.DenseFeatures(tpu_column)
+  embedded_feature = dense_feature(features)
+  ...
+
+estimator = tf.estimator.tpu.TPUEstimator(
+    model_fn=model_fn,
+    ...
+    embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
+        column=[tpu_column],
+        ...))
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`categorical_column` + +A categorical column returned from +`categorical_column_with_identity`, `weighted_categorical_column`, +`categorical_column_with_vocabulary_file`, +`categorical_column_with_vocabulary_list`, +`sequence_categorical_column_with_identity`, +`sequence_categorical_column_with_vocabulary_file`, +`sequence_categorical_column_with_vocabulary_list` +
+`dimension` + +An integer specifying dimension of the embedding, must be > 0. +
+`combiner` + +A string specifying how to reduce if there are multiple entries +in a single row for a non-sequence column. For more information, see +tf.feature_column.embedding_column. +
+`initializer` + +A variable initializer function to be used in embedding +variable initialization. If not specified, defaults to +tf.compat.v1.truncated_normal_initializer with mean `0.0` and +standard deviation `1/sqrt(dimension)`. +
+
+`max_sequence_length`
+
+A non-negative integer specifying the max sequence
+length. Any sequence shorter than this will be padded with 0 embeddings
+and any sequence longer will be truncated. This must be positive for
+sequence features and 0 for non-sequence features.
+
+`learning_rate_fn` + +A function that takes global step and returns learning +rate for the embedding table. +
+`embedding_lookup_device` + +The device on which to run the embedding lookup. +Valid options are "cpu", "tpu_tensor_core", and "tpu_embedding_core". +If specifying "tpu_tensor_core", a tensor_core_shape must be supplied. +If not specified, the default behavior is embedding lookup on +"tpu_embedding_core" for training and "cpu" for inference. +Valid options for training : ["tpu_embedding_core", "tpu_tensor_core"] +Valid options for serving : ["cpu", "tpu_tensor_core"] +For training, tpu_embedding_core is good for large embedding vocab (>1M), +otherwise, tpu_tensor_core is often sufficient. +For serving, doing embedding lookup on tpu_tensor_core during serving is +a way to reduce host cpu usage in cases where that is a bottleneck. +
+`tensor_core_shape` + +If supplied, a list of integers which specifies +the intended dense shape to run embedding lookup for this feature on +TensorCore. The batch dimension can be left None or -1 to indicate +a dynamic shape. Only rank 2 shapes currently supported. +
+`use_safe_embedding_lookup` + +If true, uses safe_embedding_lookup_sparse +instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures +there are no empty rows and all weights and ids are positive at the +expense of extra compute cost. This only applies to rank 2 (NxM) shaped +input tensors. Defaults to true, consider turning off if the above checks +are not needed. Note that having empty rows will not trigger any error +though the output result might be 0 or omitted. +
+ + + + + + + + + + + +
+A `_TPUEmbeddingColumnV2`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `dimension` not > 0. +
+`ValueError` + +if `initializer` is specified but not callable. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/experimental/shared_embedding_columns.md b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/shared_embedding_columns.md new file mode 100644 index 00000000000..3b8d6397f82 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/experimental/shared_embedding_columns.md @@ -0,0 +1,236 @@ +description: TPU version of tf.compat.v1.feature_column.shared_embedding_columns. + +
+ + +
+
+# tf.compat.v1.tpu.experimental.shared_embedding_columns
+
+
+
+
+
+
+
+
+
+TPU version of tf.compat.v1.feature_column.shared_embedding_columns.
+
+
+
+
+
+
+
+Note that the interface for `tf.tpu.experimental.shared_embedding_columns` is
+different from that of tf.compat.v1.feature_column.shared_embedding_columns:
+The following arguments are NOT supported: `ckpt_to_load_from`,
+`tensor_name_in_ckpt`, `max_norm` and `trainable`.
+
+Use this function in place of
+`tf.compat.v1.feature_column.shared_embedding_columns` when you want to use the
+TPU to accelerate your embedding lookups via TPU embeddings.
+
+```
+column_a = tf.feature_column.categorical_column_with_identity(...)
+column_b = tf.feature_column.categorical_column_with_identity(...)
+tpu_columns = tf.tpu.experimental.shared_embedding_columns(
+    [column_a, column_b], 10)
+...
+def model_fn(features):
+  dense_feature = tf.keras.layers.DenseFeatures(tpu_columns)
+  embedded_feature = dense_feature(features)
+  ...
+
+estimator = tf.estimator.tpu.TPUEstimator(
+    model_fn=model_fn,
+    ...
+    embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
+        column=tpu_columns,
+        ...))
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`categorical_columns` + +A list of categorical columns returned from +`categorical_column_with_identity`, `weighted_categorical_column`, +`categorical_column_with_vocabulary_file`, +`categorical_column_with_vocabulary_list`, +`sequence_categorical_column_with_identity`, +`sequence_categorical_column_with_vocabulary_file`, +`sequence_categorical_column_with_vocabulary_list` +
+`dimension` + +An integer specifying dimension of the embedding, must be > 0. +
+`combiner` + +A string specifying how to reduce if there are multiple entries in +a single row for a non-sequence column. For more information, see +tf.feature_column.embedding_column. +
+`initializer` + +A variable initializer function to be used in embedding +variable initialization. If not specified, defaults to +`tf.truncated_normal_initializer` with mean `0.0` and standard deviation +`1/sqrt(dimension)`. +
+`shared_embedding_collection_name` + +Optional name of the collection where +shared embedding weights are added. If not given, a reasonable name will +be chosen based on the names of `categorical_columns`. This is also used +in `variable_scope` when creating shared embedding weights. +
+
+`max_sequence_lengths`
+
+A list of non-negative integers, either None or empty
+or the same length as the argument categorical_columns. Entries
+corresponding to non-sequence columns must be 0 and entries corresponding
+to sequence columns specify the max sequence length for the column. Any
+sequence shorter than this will be padded with 0 embeddings and any
+sequence longer will be truncated.
+
+`learning_rate_fn` + +A function that takes global step and returns learning +rate for the embedding table. +
+`embedding_lookup_device` + +The device on which to run the embedding lookup. +Valid options are "cpu", "tpu_tensor_core", and "tpu_embedding_core". If +specifying "tpu_tensor_core", a tensor_core_shape must be supplied. +Defaults to "cpu". If not specified, the default behavior is embedding +lookup on "tpu_embedding_core" for training and "cpu" for inference. +Valid options for training : ["tpu_embedding_core", "tpu_tensor_core"] +Valid options for serving : ["cpu", "tpu_tensor_core"] +For training, tpu_embedding_core is good for large embedding vocab (>1M), +otherwise, tpu_tensor_core is often sufficient. +For serving, doing embedding lookup on tpu_tensor_core during serving is +a way to reduce host cpu usage in cases where that is a bottleneck. +
+`tensor_core_shape` + +If supplied, a list of integers which specifies the +intended dense shape to run embedding lookup for this feature on +TensorCore. The batch dimension can be left None or -1 to indicate a +dynamic shape. Only rank 2 shapes currently supported. +
+`use_safe_embedding_lookup` + +If true, uses safe_embedding_lookup_sparse +instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures +there are no empty rows and all weights and ids are positive at the +expense of extra compute cost. This only applies to rank 2 (NxM) shaped +input tensors. Defaults to true, consider turning off if the above checks +are not needed. Note that having empty rows will not trigger any error +though the output result might be 0 or omitted. +
+ + + + + + + + + + + +
+A list of `_TPUSharedEmbeddingColumnV2`. +
+ + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +if `dimension` not > 0. +
+`ValueError` + +if `initializer` is specified but not callable. +
+`ValueError` + +if `max_sequence_lengths` is specified and not the same length +as `categorical_columns`. +
+`ValueError` + +if `max_sequence_lengths` is positive for a non sequence column +or 0 for a sequence column. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/initialize_system.md b/site/en/api_docs/python/tf/compat/v1/tpu/initialize_system.md new file mode 100644 index 00000000000..df527703963 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/initialize_system.md @@ -0,0 +1,85 @@ +description: Initializes a distributed TPU system for use with TensorFlow. + +
+ + +
+ +# tf.compat.v1.tpu.initialize_system + + + + + + + + + +Initializes a distributed TPU system for use with TensorFlow. + + + + + + + + + + + + + + + + + + + + + + + +
+`embedding_config` + +If not None, a `TPUEmbeddingConfiguration` proto +describing the desired configuration of the hardware embedding lookup +tables. If embedding_config is None, no hardware embeddings can be used. +
+`job` + +The job (the XXX in TensorFlow device specification /job:XXX) that +contains the TPU devices that will be initialized. If job=None it is +assumed there is only one job in the TensorFlow flock, and an error will +be returned if this assumption does not hold. +
+
+`compilation_failure_closes_chips`
+
+Sets whether TPU chips should be closed
+when there is a compilation failure.
+
+ + + + + + + + + + + +
+A serialized `TopologyProto` that describes the TPU system. Note: +the topology must be evaluated using `Session.run` before it can be used. +
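+
+For example, a sketch of initializing the system before building the
+replicated computation and evaluating the returned topology; the worker
+address is hypothetical:
+
+```python
+import tensorflow.compat.v1 as tf
+
+topology_op = tf.tpu.initialize_system()
+with tf.Session("grpc://10.0.0.1:8470") as sess:
+  serialized_topology = sess.run(topology_op)
+```
+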
+ diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/outside_compilation.md b/site/en/api_docs/python/tf/compat/v1/tpu/outside_compilation.md new file mode 100644 index 00000000000..5fe66671a61 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/outside_compilation.md @@ -0,0 +1,132 @@ +description: Builds part of a computation outside any current TPU replicate scope. + +
+ + +
+
+# tf.compat.v1.tpu.outside_compilation
+
+
+
+
+
+
+
+
+
+Builds part of a computation outside any current TPU replicate scope.
+
+
+
+
+
+
+
+`tf.tpu.outside_compilation()` is used to run ops in `computation` on CPU
+instead of running on TPU. For example, users can run ops that are not
+supported on TPUs (e.g. tf.summary.write()) by explicitly placing those
+ops on CPUs. The usage of outside compilation below places the ops in
+`computation_with_string_ops` on CPU.
+
+#### Example usage:
+
+
+
+```python
+def computation_with_string_ops(x):
+  # string types are not supported on TPUs and the ops below must
+  # run on CPU instead.
+  output = tf.strings.format('1{}', x)
+  return tf.strings.to_number(output)
+
+def tpu_computation():
+  # Expected output is 11.
+  output = tf.tpu.outside_compilation(computation_with_string_ops, 1)
+```
+
+Outside compilation should be called inside TPUReplicateContext. That is,
+`tf.tpu.outside_compilation()` should be called inside a function that is
+passed to `tpu.split_compile_and_replicate()` -- this is implied when
+outside compilation is invoked inside a function passed to TPUStrategy
+`run()`. If invoked outside of TPUReplicateContext,
+then this simply returns the result of `computation`, and therefore,
+would be a no-op. Note that outside compilation is different from
+`tf.distribute.experimental.TPUStrategy.merge_call()` as logic in
+outside compilation is replicated and executed separately for each
+replica. On the other hand, `merge_call()` requires a `merge_fn`
+to aggregate the inputs from different replicas and is executed only
+once.
+
+For variables placed on a TPU device, which includes variables created
+inside a TPUStrategy scope, outside compilation logic must not include
+variable reads or writes. For variables placed on the host, which is the
+case for variables created via TPUEstimator, variable reads and writes are
+only allowed if the variable is not accessed by any other ops in the TPU
+computation. Variable reads and writes from the outside compilation cluster
+are not visible to the TPU computation and vice versa. Therefore, if the
+outside compilation logic contains such host variable read or write ops and
+the variables are also accessed by the TPU computation, this may lead to
+deadlock.
+
+Internally, `tf.tpu.outside_compilation()` adds outside compilation
+attributes to all ops in `computation`. During a later graph pass, the
+ops with the outside compilation attribute are extracted and replicated
+into a host-side graph. Inputs to this extracted host-side graph are sent
+from the TPU computation graph to the host graph via a pair of XlaSendToHost
+and XlaRecvFromHost ops. Note that using `tf.tpu.outside_compilation()`
+may result in tensor transfers between TPU and CPU, leading to a non-trivial
+performance impact.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`computation` + +A Python function that builds the computation to +place on the host. +
+`*args` + +the positional arguments for the computation. +
+`**kwargs` + +the keyword arguments for the computation. +
+ + + + + + + + + + + +
+The Tensors returned by computation. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/replicate.md b/site/en/api_docs/python/tf/compat/v1/tpu/replicate.md new file mode 100644 index 00000000000..8666e7639b1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/replicate.md @@ -0,0 +1,209 @@ +description: Builds a graph operator that runs a replicated TPU computation. + +
+ + +
+ +# tf.compat.v1.tpu.replicate + + + + + + + + + +Builds a graph operator that runs a replicated TPU computation. + + + + + + + +Example for the basic usage that `inputs` has static shape: + +```python + +def computation(x): + x = x + 1 + return tf.math.reduce_mean(x) + +x = tf.convert_to_tensor([1., 2., 3.]) +y = tf.convert_to_tensor([4., 5., 6.]) +tf.compat.v1.tpu.replicate(computation, inputs=[[x], [y]]) +``` + +If the `inputs` has dynamic shapes and you would like to automatically +bucketize the inputs to avoid XLA recompilation. See the advanced example +below: + +```python + +def computation(x): + x = x + 1 + return tf.math.reduce_mean(x) + +# Assume input tensors in two replicas `x` and `y` both have dynamic shape +# ([None, 2]). +tf.compat.v1.tpu.replicate( + computation, + inputs=[x, y], + maximum_shapes=[tf.TensorShape([None, None])], + padding_spec=tf.compat.v1.tpu.PaddingSpec.POWER_OF_TWO) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`computation` + +A Python function that builds the computation to replicate. +
+
+`inputs`
+
+A list of lists of input tensors or `None` (equivalent to
+`[[]]`), indexed by `[replica_num][input_num]`. All replicas must
+have the same number of inputs. Each input can be a nested structure
+containing values that are convertible to tensors. Note that passing an
+N-dimension list of compatible values will result in an N-dimension list of
+scalar tensors rather than a single rank-N tensor. If you need different
+behavior, convert part of inputs to tensors with tf.convert_to_tensor.
+
+`infeed_queue` + +If not `None`, the `InfeedQueue` from which to append a tuple +of arguments as inputs to computation. +
+`device_assignment` + +If not `None`, a `DeviceAssignment` describing the +mapping between logical cores in the computation with physical cores in +the TPU topology. Uses a default device assignment if `None`. The +`DeviceAssignment` may be omitted if each replica of the computation uses +only one core, and there is either only one replica, or the number of +replicas is equal to the number of cores in the TPU system. +
+`name` + +(Deprecated) Does nothing. +
+`maximum_shapes` + +A nested structure of tf.TensorShape representing the shape +to which the respective component of each input element in each replica +should be padded. Any unknown dimensions (e.g. +tf.compat.v1.Dimension(None) in a tf.TensorShape or -1 in a tensor-like +object) will be padded to the maximum size of that dimension over all +replicas. The structure of `maximum_shapes` needs to be the same as +`inputs[0]`. +
+`padding_spec` + +An enum specified by `tpu.PaddingSpec`. This describes the +padding policy when the `inputs` to `tpu.replicate` is dynamic. +One usage is to enable automatic bucketizing on the inputs by setting the +value to `tpu.PaddingSpec.POWER_OF_TWO`, which can help to reduce the +recompilation in the XLA side. +
+ + + + + + + + + + + +
+
+A list of outputs, indexed by `[replica_num]`; each output can be a nested
+structure, the same as what computation() returns, with a few exceptions.
+
+Exceptions include:
+1) None output: a NoOp would be returned which control-depends on
+computation.
+2) Single value output: A tuple containing the value would be returned.
+3) Operation-only outputs: a NoOp would be returned which
+control-depends on computation.
+
+ + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If all replicas do not have equal numbers of input tensors. +
+`ValueError` + +If the number of inputs per replica does not match +the number of formal parameters to `computation`. +
+`ValueError` + +If the static `inputs` dimensions don't match with the values +given in `maximum_shapes`. +
+`ValueError` + +If the structure of inputs per replica does not match +the structure of `maximum_shapes`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/rewrite.md b/site/en/api_docs/python/tf/compat/v1/tpu/rewrite.md new file mode 100644 index 00000000000..be7d4de7e18 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/rewrite.md @@ -0,0 +1,118 @@ +description: Rewrites computation for execution on a TPU system. + +
+ + +
+ +# tf.compat.v1.tpu.rewrite + + + + + + + + + +Rewrites `computation` for execution on a TPU system. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`computation` + +A Python function that builds a computation to apply to the +input. If the function takes n inputs, 'inputs' should be a list of n +tensors. + +`computation` may return a list of operations and tensors. Tensors must +come before operations in the returned list. The return value of +`rewrite` is a list of tensors corresponding to the tensors from the +output of `computation`. + +All `Operation`s constructed during `computation` will be executed when +evaluating any of the returned output tensors, not just the ones returned. +
+
+`inputs`
+
+A list of input tensors or `None` (equivalent to an empty list).
+Each input can be a nested structure containing values that are
+convertible to tensors. Note that passing an N-dimension list of
+compatible values will result in an N-dimension list of scalar tensors
+rather than a single rank-N tensor. If you need different behavior,
+convert part of inputs to tensors with tf.convert_to_tensor.
+
+`infeed_queue` + +If not `None`, the `InfeedQueue` from which to append a tuple +of arguments as inputs to `computation`. +
+`device_assignment` + +if not `None`, a `DeviceAssignment` describing the +mapping between logical cores in the computation with physical cores in +the TPU topology. May be omitted for a single-core computation, in which +case the core attached to task 0, TPU device 0 is used. +
+`name` + +(Deprecated) Does nothing. +
+ + + + + + + + + + + +
+Same data structure as if computation(*inputs) is called directly with some +exceptions for correctness. Exceptions include: +1) None output: a NoOp would be returned which control-depends on +computation. +2) Single value output: A tuple containing the value would be returned. +3) Operation-only outputs: a NoOp would be returned which +control-depends on computation. +
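+
+For example, a minimal sketch of compiling a single-core computation; note
+that a single-value output comes back wrapped in a tuple, as described above,
+and that execution requires an initialized TPU system:
+
+```python
+import tensorflow.compat.v1 as tf
+
+def computation(x):
+  return tf.reduce_sum(x * x)
+
+x = tf.constant([1.0, 2.0, 3.0])
+# Single-value outputs are returned as a one-element tuple.
+(result,) = tf.tpu.rewrite(computation, inputs=[x])
+```
+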
+ + diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/shard.md b/site/en/api_docs/python/tf/compat/v1/tpu/shard.md new file mode 100644 index 00000000000..f0fba5c4c64 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tpu/shard.md @@ -0,0 +1,192 @@ +description: Shards computation for parallel execution. + +
+ + +
+
+# tf.compat.v1.tpu.shard
+
+
+
+
+
+
+
+
+
+Shards `computation` for parallel execution.
+
+
+
+
+
+
+
+`inputs` must be a list of Tensors or None (equivalent to an empty list), each
+of which has a corresponding split axis (from `input_shard_axes`). Each input
+is split into `num_shards` pieces along the corresponding axis, and
+computation is applied to each shard in parallel.
+
+Tensors are broadcast to all shards if they are lexically captured by
+`computation`. e.g.,
+
+x = tf.constant(7)
+def computation():
+  return x + 3
+... = shard(computation, ...)
+
+If `outputs_from_all_shards` is true, the outputs from all shards of
+`computation` are concatenated back together along their `output_shard_axes`.
+Otherwise, each output is taken from an arbitrary shard.
+
+Inputs and outputs of the computation must be at least rank-1 Tensors.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`computation` + +A Python function that builds a computation to apply to each +shard of the input. +
+`inputs` + +A list of input tensors or None (equivalent to an empty list). Each +input tensor has a corresponding shard axes, given by `input_shard_axes`, +which must have size divisible by `num_shards`. +
+`num_shards` + +The number of shards. +
+`input_shard_axes` + +A list of dimensions along which to shard `inputs`, or +`None`. `None` means "shard all inputs along dimension 0". If not `None`, +there must be one dimension per input. +
+`outputs_from_all_shards` + +Boolean or list of boolean. For each output, if +`True`, outputs from all shards are concatenated along the corresponding +`output_shard_axes` entry. Otherwise, each output is taken +from an arbitrary shard. If the argument is a boolean, the argument's +value is used for each output. +
+`output_shard_axes` + +A list of dimensions along which to concatenate the +outputs of `computation`, or `None`. `None` means "concatenate all outputs +along dimension 0". If not `None`, there must be one dimension per output. +Ignored if `outputs_from_all_shards` is False. +
+`infeed_queue` + +If not `None`, the `InfeedQueue` to use to augment the inputs +of `computation`. +
+`device_assignment` + +If not `None`, a `DeviceAssignment` describing the +mapping between logical cores in the computation with physical cores in +the TPU topology. Uses a default device assignment if `None`. The +`DeviceAssignment` may be omitted if each shard of the computation uses +only one core, and there is either only one shard, or the number of shards +is equal to the number of cores in the TPU system. +
+`name` + +(Deprecated) Does nothing. +
+ + + + + + + + + + + +
+A list of output tensors. +
+ + + + + + + + + + + + + + + + + + +
+`ValueError` + +If num_shards <= 0 +
+`ValueError` + +If len(input_shard_axes) != len(inputs) +
+`ValueError` + +If len(output_shard_axes) != len(outputs from `computation`) +
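+
+For example, a minimal sketch that shards a rank-2 input across 4 shards
+along dimension 0 and concatenates the per-shard results; the shapes are
+illustrative only and execution requires a TPU system:
+
+```python
+import tensorflow.compat.v1 as tf
+
+def computation(x):
+  # Each shard receives a [2, 16] slice of the [8, 16] input.
+  return tf.reduce_sum(x, axis=1, keepdims=True)
+
+x = tf.zeros([8, 16])
+outputs = tf.tpu.shard(computation, inputs=[x], num_shards=4)
+```
+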
+
diff --git a/site/en/api_docs/python/tf/compat/v1/tpu/shutdown_system.md b/site/en/api_docs/python/tf/compat/v1/tpu/shutdown_system.md
new file mode 100644
index 00000000000..671d120460d
--- /dev/null
+++ b/site/en/api_docs/python/tf/compat/v1/tpu/shutdown_system.md
@@ -0,0 +1,53 @@
+description: Shuts down a running distributed TPU system.
+
+ + +
+
+# tf.compat.v1.tpu.shutdown_system
+
+
+
+
+
+
+
+
+
+Shuts down a running distributed TPU system.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`job`
+
+The job (the XXX in TensorFlow device specification /job:XXX) that
+contains the TPU devices that will be shut down. If job=None it is
+assumed there is only one job in the TensorFlow flock, and an error will
+be returned if this assumption does not hold.
+
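+
+For example, a sketch of pairing this with
+`tf.compat.v1.tpu.initialize_system()` once training is finished; the worker
+address is hypothetical:
+
+```python
+import tensorflow.compat.v1 as tf
+
+shutdown_op = tf.tpu.shutdown_system()
+with tf.Session("grpc://10.0.0.1:8470") as sess:
+  sess.run(shutdown_op)
+```
+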
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train.md b/site/en/api_docs/python/tf/compat/v1/train.md new file mode 100644 index 00000000000..475473bab80 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train.md @@ -0,0 +1,264 @@ +description: Support for training models. + +
+ + +
+ +# Module: tf.compat.v1.train + + + + + + + + + +Support for training models. + + +See the [Training](https://tensorflow.org/api_guides/python/train) guide. + +## Modules + +[`experimental`](../../../tf/compat/v1/train/experimental.md) module: Public API for tf.train.experimental namespace. + +[`queue_runner`](../../../tf/compat/v1/train/queue_runner.md) module: Public API for tf.train.queue_runner namespace. + +## Classes + +[`class AdadeltaOptimizer`](../../../tf/compat/v1/train/AdadeltaOptimizer.md): Optimizer that implements the Adadelta algorithm. + +[`class AdagradDAOptimizer`](../../../tf/compat/v1/train/AdagradDAOptimizer.md): Adagrad Dual Averaging algorithm for sparse linear models. + +[`class AdagradOptimizer`](../../../tf/compat/v1/train/AdagradOptimizer.md): Optimizer that implements the Adagrad algorithm. + +[`class AdamOptimizer`](../../../tf/compat/v1/train/AdamOptimizer.md): Optimizer that implements the Adam algorithm. + +[`class BytesList`](../../../tf/train/BytesList.md): A ProtocolMessage + +[`class Checkpoint`](../../../tf/compat/v1/train/Checkpoint.md): Groups trackable objects, saving and restoring them. + +[`class CheckpointManager`](../../../tf/train/CheckpointManager.md): Deletes old checkpoints. + +[`class CheckpointSaverHook`](../../../tf/estimator/CheckpointSaverHook.md): Saves checkpoints every N steps or seconds. + +[`class CheckpointSaverListener`](../../../tf/estimator/CheckpointSaverListener.md): Interface for listeners that take action before or after checkpoint save. + +[`class ChiefSessionCreator`](../../../tf/compat/v1/train/ChiefSessionCreator.md): Creates a tf.compat.v1.Session for a chief. + +[`class ClusterDef`](../../../tf/train/ClusterDef.md): A ProtocolMessage + +[`class ClusterSpec`](../../../tf/train/ClusterSpec.md): Represents a cluster as a set of "tasks", organized into "jobs". + +[`class Coordinator`](../../../tf/train/Coordinator.md): A coordinator for threads. + +[`class Example`](../../../tf/train/Example.md): A ProtocolMessage + +[`class ExponentialMovingAverage`](../../../tf/train/ExponentialMovingAverage.md): Maintains moving averages of variables by employing an exponential decay. + +[`class Feature`](../../../tf/train/Feature.md): A ProtocolMessage + +[`class FeatureList`](../../../tf/train/FeatureList.md): A ProtocolMessage + +[`class FeatureLists`](../../../tf/train/FeatureLists.md): A ProtocolMessage + +[`class Features`](../../../tf/train/Features.md): A ProtocolMessage + +[`class FeedFnHook`](../../../tf/estimator/FeedFnHook.md): Runs `feed_fn` and sets the `feed_dict` accordingly. + +[`class FinalOpsHook`](../../../tf/estimator/FinalOpsHook.md): A hook which evaluates `Tensors` at the end of a session. + +[`class FloatList`](../../../tf/train/FloatList.md): A ProtocolMessage + +[`class FtrlOptimizer`](../../../tf/compat/v1/train/FtrlOptimizer.md): Optimizer that implements the FTRL algorithm. + +[`class GlobalStepWaiterHook`](../../../tf/estimator/GlobalStepWaiterHook.md): Delays execution until global step reaches `wait_until_step`. + +[`class GradientDescentOptimizer`](../../../tf/compat/v1/train/GradientDescentOptimizer.md): Optimizer that implements the gradient descent algorithm. + +[`class Int64List`](../../../tf/train/Int64List.md): A ProtocolMessage + +[`class JobDef`](../../../tf/train/JobDef.md): A ProtocolMessage + +[`class LoggingTensorHook`](../../../tf/estimator/LoggingTensorHook.md): Prints the given tensors every N local steps, every N seconds, or at end. 
+ +[`class LooperThread`](../../../tf/compat/v1/train/LooperThread.md): A thread that runs code repeatedly, optionally on a timer. + +[`class MomentumOptimizer`](../../../tf/compat/v1/train/MomentumOptimizer.md): Optimizer that implements the Momentum algorithm. + +[`class MonitoredSession`](../../../tf/compat/v1/train/MonitoredSession.md): Session-like object that handles initialization, recovery and hooks. + +[`class NanLossDuringTrainingError`](../../../tf/estimator/NanLossDuringTrainingError.md): Unspecified run-time error. + +[`class NanTensorHook`](../../../tf/estimator/NanTensorHook.md): Monitors the loss tensor and stops training if loss is NaN. + +[`class Optimizer`](../../../tf/compat/v1/train/Optimizer.md): Base class for optimizers. + +[`class ProfilerHook`](../../../tf/estimator/ProfilerHook.md): Captures CPU/GPU profiling information every N steps or seconds. + +[`class ProximalAdagradOptimizer`](../../../tf/compat/v1/train/ProximalAdagradOptimizer.md): Optimizer that implements the Proximal Adagrad algorithm. + +[`class ProximalGradientDescentOptimizer`](../../../tf/compat/v1/train/ProximalGradientDescentOptimizer.md): Optimizer that implements the proximal gradient descent algorithm. + +[`class QueueRunner`](../../../tf/compat/v1/train/QueueRunner.md): Holds a list of enqueue operations for a queue, each to be run in a thread. + +[`class RMSPropOptimizer`](../../../tf/compat/v1/train/RMSPropOptimizer.md): Optimizer that implements the RMSProp algorithm (Tielemans et al. + +[`class Saver`](../../../tf/compat/v1/train/Saver.md): Saves and restores variables. + +[`class SaverDef`](../../../tf/compat/v1/train/SaverDef.md): A ProtocolMessage + +[`class Scaffold`](../../../tf/compat/v1/train/Scaffold.md): Structure to create or gather pieces commonly needed to train a model. + +[`class SecondOrStepTimer`](../../../tf/estimator/SecondOrStepTimer.md): Timer that triggers at most once every N seconds or once every N steps. + +[`class SequenceExample`](../../../tf/train/SequenceExample.md): A ProtocolMessage + +[`class Server`](../../../tf/distribute/Server.md): An in-process TensorFlow server, for use in distributed training. + +[`class ServerDef`](../../../tf/train/ServerDef.md): A ProtocolMessage + +[`class SessionCreator`](../../../tf/compat/v1/train/SessionCreator.md): A factory for tf.Session. + +[`class SessionManager`](../../../tf/compat/v1/train/SessionManager.md): Training helper that restores from checkpoint and creates session. + +[`class SessionRunArgs`](../../../tf/estimator/SessionRunArgs.md): Represents arguments to be added to a `Session.run()` call. + +[`class SessionRunContext`](../../../tf/estimator/SessionRunContext.md): Provides information about the `session.run()` call being made. + +[`class SessionRunHook`](../../../tf/estimator/SessionRunHook.md): Hook to extend calls to MonitoredSession.run(). + +[`class SessionRunValues`](../../../tf/estimator/SessionRunValues.md): Contains the results of `Session.run()`. + +[`class SingularMonitoredSession`](../../../tf/compat/v1/train/SingularMonitoredSession.md): Session-like object that handles initialization, restoring, and hooks. + +[`class StepCounterHook`](../../../tf/estimator/StepCounterHook.md): Hook that counts steps per second. + +[`class StopAtStepHook`](../../../tf/estimator/StopAtStepHook.md): Hook that requests stop at a specified step. + +[`class SummarySaverHook`](../../../tf/estimator/SummarySaverHook.md): Saves summaries every N steps. 
+ +[`class Supervisor`](../../../tf/compat/v1/train/Supervisor.md): A training helper that checkpoints models and computes summaries. + +[`class SyncReplicasOptimizer`](../../../tf/compat/v1/train/SyncReplicasOptimizer.md): Class to synchronize, aggregate gradients and pass them to the optimizer. + +[`class VocabInfo`](../../../tf/estimator/VocabInfo.md): Vocabulary information for warm-starting. + +[`class WorkerSessionCreator`](../../../tf/compat/v1/train/WorkerSessionCreator.md): Creates a tf.compat.v1.Session for a worker. + +## Functions + +[`MonitoredTrainingSession(...)`](../../../tf/compat/v1/train/MonitoredTrainingSession.md): Creates a `MonitoredSession` for training. + +[`NewCheckpointReader(...)`](../../../tf/compat/v1/train/NewCheckpointReader.md): A function that returns a CheckPointReader. + +[`add_queue_runner(...)`](../../../tf/compat/v1/train/add_queue_runner.md): Adds a `QueueRunner` to a collection in the graph. (deprecated) + +[`assert_global_step(...)`](../../../tf/compat/v1/train/assert_global_step.md): Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`. + +[`basic_train_loop(...)`](../../../tf/compat/v1/train/basic_train_loop.md): Basic loop to train a model. + +[`batch(...)`](../../../tf/compat/v1/train/batch.md): Creates batches of tensors in `tensors`. (deprecated) + +[`batch_join(...)`](../../../tf/compat/v1/train/batch_join.md): Runs a list of tensors to fill a queue to create batches of examples. (deprecated) + +[`checkpoint_exists(...)`](../../../tf/compat/v1/train/checkpoint_exists.md): Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated) + +[`checkpoints_iterator(...)`](../../../tf/train/checkpoints_iterator.md): Continuously yield new checkpoint files as they appear. + +[`cosine_decay(...)`](../../../tf/compat/v1/train/cosine_decay.md): Applies cosine decay to the learning rate. + +[`cosine_decay_restarts(...)`](../../../tf/compat/v1/train/cosine_decay_restarts.md): Applies cosine decay with restarts to the learning rate. + +[`create_global_step(...)`](../../../tf/compat/v1/train/create_global_step.md): Create global step tensor in graph. + +[`do_quantize_training_on_graphdef(...)`](../../../tf/compat/v1/train/do_quantize_training_on_graphdef.md): A general quantization scheme is being developed in `tf.contrib.quantize`. (deprecated) + +[`exponential_decay(...)`](../../../tf/compat/v1/train/exponential_decay.md): Applies exponential decay to the learning rate. + +[`export_meta_graph(...)`](../../../tf/compat/v1/train/export_meta_graph.md): Returns `MetaGraphDef` proto. + +[`generate_checkpoint_state_proto(...)`](../../../tf/compat/v1/train/generate_checkpoint_state_proto.md): Generates a checkpoint state proto. + +[`get_checkpoint_mtimes(...)`](../../../tf/compat/v1/train/get_checkpoint_mtimes.md): Returns the mtimes (modification timestamps) of the checkpoints. (deprecated) + +[`get_checkpoint_state(...)`](../../../tf/train/get_checkpoint_state.md): Returns CheckpointState proto from the "checkpoint" file. + +[`get_global_step(...)`](../../../tf/compat/v1/train/get_global_step.md): Get the global step tensor. + +[`get_or_create_global_step(...)`](../../../tf/compat/v1/train/get_or_create_global_step.md): Returns and create (if necessary) the global step tensor. + +[`global_step(...)`](../../../tf/compat/v1/train/global_step.md): Small helper to get the global step. + +[`import_meta_graph(...)`](../../../tf/compat/v1/train/import_meta_graph.md): Recreates a Graph saved in a `MetaGraphDef` proto. 
+ +[`init_from_checkpoint(...)`](../../../tf/compat/v1/train/init_from_checkpoint.md): Replaces tf.Variable initializers so they load from a checkpoint file. + +[`input_producer(...)`](../../../tf/compat/v1/train/input_producer.md): Output the rows of `input_tensor` to a queue for an input pipeline. (deprecated) + +[`inverse_time_decay(...)`](../../../tf/compat/v1/train/inverse_time_decay.md): Applies inverse time decay to the initial learning rate. + +[`latest_checkpoint(...)`](../../../tf/train/latest_checkpoint.md): Finds the filename of latest saved checkpoint file. + +[`limit_epochs(...)`](../../../tf/compat/v1/train/limit_epochs.md): Returns tensor `num_epochs` times and then raises an `OutOfRange` error. (deprecated) + +[`linear_cosine_decay(...)`](../../../tf/compat/v1/train/linear_cosine_decay.md): Applies linear cosine decay to the learning rate. + +[`list_variables(...)`](../../../tf/train/list_variables.md): Returns list of all variables in the checkpoint. + +[`load_checkpoint(...)`](../../../tf/train/load_checkpoint.md): Returns `CheckpointReader` for checkpoint found in `ckpt_dir_or_file`. + +[`load_variable(...)`](../../../tf/train/load_variable.md): Returns the tensor value of the given variable in the checkpoint. + +[`match_filenames_once(...)`](../../../tf/io/match_filenames_once.md): Save the list of files matching pattern, so it is only computed once. + +[`maybe_batch(...)`](../../../tf/compat/v1/train/maybe_batch.md): Conditionally creates batches of tensors based on `keep_input`. (deprecated) + +[`maybe_batch_join(...)`](../../../tf/compat/v1/train/maybe_batch_join.md): Runs a list of tensors to conditionally fill a queue to create batches. (deprecated) + +[`maybe_shuffle_batch(...)`](../../../tf/compat/v1/train/maybe_shuffle_batch.md): Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) + +[`maybe_shuffle_batch_join(...)`](../../../tf/compat/v1/train/maybe_shuffle_batch_join.md): Create batches by randomly shuffling conditionally-enqueued tensors. (deprecated) + +[`natural_exp_decay(...)`](../../../tf/compat/v1/train/natural_exp_decay.md): Applies natural exponential decay to the initial learning rate. + +[`noisy_linear_cosine_decay(...)`](../../../tf/compat/v1/train/noisy_linear_cosine_decay.md): Applies noisy linear cosine decay to the learning rate. + +[`piecewise_constant(...)`](../../../tf/compat/v1/train/piecewise_constant.md): Piecewise constant from boundaries and interval values. + +[`piecewise_constant_decay(...)`](../../../tf/compat/v1/train/piecewise_constant.md): Piecewise constant from boundaries and interval values. + +[`polynomial_decay(...)`](../../../tf/compat/v1/train/polynomial_decay.md): Applies a polynomial decay to the learning rate. + +[`range_input_producer(...)`](../../../tf/compat/v1/train/range_input_producer.md): Produces the integers from 0 to limit-1 in a queue. (deprecated) + +[`remove_checkpoint(...)`](../../../tf/compat/v1/train/remove_checkpoint.md): Removes a checkpoint given by `checkpoint_prefix`. (deprecated) + +[`replica_device_setter(...)`](../../../tf/compat/v1/train/replica_device_setter.md): Return a `device function` to use when building a Graph for replicas. + +[`sdca_fprint(...)`](../../../tf/compat/v1/train/sdca_fprint.md): Computes fingerprints of the input strings. 
+ +[`sdca_optimizer(...)`](../../../tf/compat/v1/train/sdca_optimizer.md): Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for + +[`sdca_shrink_l1(...)`](../../../tf/compat/v1/train/sdca_shrink_l1.md): Applies L1 regularization shrink step on the parameters. + +[`shuffle_batch(...)`](../../../tf/compat/v1/train/shuffle_batch.md): Creates batches by randomly shuffling tensors. (deprecated) + +[`shuffle_batch_join(...)`](../../../tf/compat/v1/train/shuffle_batch_join.md): Create batches by randomly shuffling tensors. (deprecated) + +[`slice_input_producer(...)`](../../../tf/compat/v1/train/slice_input_producer.md): Produces a slice of each `Tensor` in `tensor_list`. (deprecated) + +[`start_queue_runners(...)`](../../../tf/compat/v1/train/start_queue_runners.md): Starts all queue runners collected in the graph. (deprecated) + +[`string_input_producer(...)`](../../../tf/compat/v1/train/string_input_producer.md): Output strings (e.g. filenames) to a queue for an input pipeline. (deprecated) + +[`summary_iterator(...)`](../../../tf/compat/v1/train/summary_iterator.md): An iterator for reading `Event` protocol buffers from an event file. + +[`update_checkpoint_state(...)`](../../../tf/compat/v1/train/update_checkpoint_state.md): Updates the content of the 'checkpoint' file. (deprecated) + +[`warm_start(...)`](../../../tf/compat/v1/train/warm_start.md): Warm-starts a model using the given settings. + +[`write_graph(...)`](../../../tf/io/write_graph.md): Writes a graph proto to a file. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/AdadeltaOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/AdadeltaOptimizer.md new file mode 100644 index 00000000000..19e56a2e28b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/AdadeltaOptimizer.md @@ -0,0 +1,596 @@ +description: Optimizer that implements the Adadelta algorithm. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.AdadeltaOptimizer + + + + + + + + + +Optimizer that implements the Adadelta algorithm. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + + +#### References: + +ADADELTA - An Adaptive Learning Rate Method: + [Zeiler, 2012](http://arxiv.org/abs/1212.5701) + ([pdf](http://arxiv.org/pdf/1212.5701v1.pdf)) + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A `Tensor` or a floating point value. The learning rate. +To match the exact form in the original paper use 1.0. +
+`rho` + +A `Tensor` or a floating point value. The decay rate. +
+`epsilon` + +A `Tensor` or a floating point value. A constant epsilon used +to better condition the grad update. +
+`use_locking` + +If `True` use locks for update operations. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "Adadelta". +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Defaults to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If you should use `_distributed_apply()` instead. +
+ + + +
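For illustration, a minimal graph-mode sketch of this two-part flow, clipping gradients between `compute_gradients()` and `apply_gradients()`; the toy variable, loss, and clip norm below are placeholders, not part of this API:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.get_variable("w", initializer=tf.constant([1.0, 2.0]))
loss = tf.reduce_sum(tf.square(w))  # toy loss, for illustration only

opt = tf.compat.v1.train.AdadeltaOptimizer(learning_rate=1.0)
grads_and_vars = opt.compute_gradients(loss)            # first part of minimize()
clipped = [(tf.clip_by_norm(g, 5.0), v) for g, v in grads_and_vars]
train_op = opt.apply_gradients(clipped)                 # second part of minimize()

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.global_variables_initializer())
  sess.run(train_op)
```

Calling `minimize()` directly is equivalent when no gradient processing is needed.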

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything other than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +
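As a sketch of the eager behaviour described above (the variable and loss function are made up for the example, and eager execution is assumed to be enabled):

```python
import tensorflow as tf

tf.compat.v1.enable_eager_execution()  # no-op on TF 2.x, required on TF 1.x

w = tf.Variable([1.0, 2.0])
opt = tf.compat.v1.train.AdadeltaOptimizer(learning_rate=1.0)

# Under eager execution, `loss` must be a zero-argument callable.
loss_fn = lambda: tf.reduce_sum(tf.square(w))
for _ in range(5):
  opt.minimize(loss_fn, var_list=[w])  # updates `w` in place
```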

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/AdagradDAOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/AdagradDAOptimizer.md new file mode 100644 index 00000000000..f8b7dcabae8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/AdagradDAOptimizer.md @@ -0,0 +1,638 @@ +description: Adagrad Dual Averaging algorithm for sparse linear models. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.AdagradDAOptimizer + + + + + + + + + +Adagrad Dual Averaging algorithm for sparse linear models. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + +This optimizer takes care of regularization of unseen features in a mini batch +by updating them when they are seen with a closed form update rule that is +equivalent to having updated them on every mini-batch. + +AdagradDA is typically used when there is a need for large sparsity in the +trained model. This optimizer only guarantees sparsity for linear models. Be +careful when using AdagradDA for deep networks as it will require careful +initialization of the gradient accumulators for it to train. + +#### References: + +Adaptive Subgradient Methods for Online Learning and Stochastic Optimization + :[Duchi et al., 2011](http://jmlr.org/papers/v12/duchi11a.html) + ([pdf](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A `Tensor` or a floating point value. The learning rate. +
+`global_step` + +A `Tensor` containing the current training step number. +
+`initial_gradient_squared_accumulator_value` + +A floating point value. +Starting value for the accumulators, must be positive. +
+`l1_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`l2_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`use_locking` + +If `True` use locks for update operations. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "AdagradDA". +
+ + + + + + + + + + + + +
+`ValueError` + +If the `initial_gradient_squared_accumulator_value` is +invalid. +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Defaults to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If you should use `_distributed_apply()` instead. +
+ + + +

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything other than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +
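A minimal graph-mode sketch of wiring this optimizer into a toy linear model; note that, unlike most optimizers in this module, the constructor itself takes `global_step`. All tensors and hyperparameter values below are illustrative:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.constant([[1.0, 2.0, 3.0]])
w = tf.compat.v1.get_variable("w", initializer=tf.zeros([3, 1]))
loss = tf.reduce_sum(tf.square(tf.matmul(x, w) - 1.0))  # toy linear loss

global_step = tf.compat.v1.train.get_or_create_global_step()
opt = tf.compat.v1.train.AdagradDAOptimizer(
    learning_rate=0.1,
    global_step=global_step,              # required by this constructor
    l1_regularization_strength=0.01,
    l2_regularization_strength=0.01)
train_op = opt.minimize(loss, global_step=global_step)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.global_variables_initializer())
  sess.run(train_op)
```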

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/AdagradOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/AdagradOptimizer.md new file mode 100644 index 00000000000..e630d460bf9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/AdagradOptimizer.md @@ -0,0 +1,605 @@ +description: Optimizer that implements the Adagrad algorithm. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.AdagradOptimizer + + + + + + + + + +Optimizer that implements the Adagrad algorithm. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + + +#### References: + +Adaptive Subgradient Methods for Online Learning and Stochastic Optimization + :[Duchi et al., 2011](http://jmlr.org/papers/v12/duchi11a.html) + ([pdf](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)) + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A `Tensor` or a floating point value. The learning rate. +
+`initial_accumulator_value` + +A floating point value. +Starting value for the accumulators, must be positive. +
+`use_locking` + +If `True` use locks for update operations. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "Adagrad". +
+ + + + + + + + + + + + +
+`ValueError` + +If the `initial_accumulator_value` is invalid. +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Defaults to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If you should use `_distributed_apply()` instead. +
+ + + +

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything other than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +
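For example, an Adagrad optimizer exposes its per-variable accumulators through the slot API. The slot name printed below (`'accumulator'` in current releases) should be treated as an implementation detail, which is why the sketch looks it up via `get_slot_names()` rather than hard-coding it:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.get_variable("w", initializer=tf.constant([1.0, 2.0]))
loss = tf.reduce_sum(tf.square(w))  # toy loss

opt = tf.compat.v1.train.AdagradOptimizer(learning_rate=0.1)
train_op = opt.minimize(loss)

print(opt.get_slot_names())                      # e.g. ['accumulator']
accum = opt.get_slot(w, opt.get_slot_names()[0])

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.global_variables_initializer())
  sess.run(train_op)
  print(sess.run(accum))                         # squared-gradient accumulator for `w`
```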

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/AdamOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/AdamOptimizer.md new file mode 100644 index 00000000000..057350c3701 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/AdamOptimizer.md @@ -0,0 +1,609 @@ +description: Optimizer that implements the Adam algorithm. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.AdamOptimizer + + + + + + + + + +Optimizer that implements the Adam algorithm. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + + +#### References: + +Adam - A Method for Stochastic Optimization: + [Kingma et al., 2015](https://arxiv.org/abs/1412.6980) + ([pdf](https://arxiv.org/pdf/1412.6980.pdf)) + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A Tensor or a floating point value. The learning rate. +
+`beta1` + +A float value or a constant float tensor. The exponential decay +rate for the 1st moment estimates. +
+`beta2` + +A float value or a constant float tensor. The exponential decay +rate for the 2nd moment estimates. +
+`epsilon` + +A small constant for numerical stability. This epsilon is +"epsilon hat" in the Kingma and Ba paper (in the formula just before +Section 2.1), not the epsilon in Algorithm 1 of the paper. +
+`use_locking` + +If True use locks for update operations. +
+`name` + +Optional name for the operations created when applying gradients. +Defaults to "Adam". + +When eager execution is enabled, `learning_rate`, `beta1`, `beta2`, and +`epsilon` can each be a callable that takes no arguments and returns the +actual value to use. This can be useful for changing these values across +different invocations of optimizer functions. +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Defaults to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If you should use `_distributed_apply()` instead. +
+ + + +
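As noted in the constructor's `name` argument above, the hyperparameters can be zero-argument callables when eager execution is enabled. A sketch with a made-up decay schedule (all names and values are illustrative):

```python
import tensorflow as tf

tf.compat.v1.enable_eager_execution()  # no-op on TF 2.x, required on TF 1.x

step = tf.Variable(0, trainable=False)
w = tf.Variable([1.0, 2.0])

def lr():  # hypothetical decay schedule, re-evaluated on each update
  return 0.001 * 0.95 ** tf.cast(step, tf.float32)

opt = tf.compat.v1.train.AdamOptimizer(learning_rate=lr)
loss_fn = lambda: tf.reduce_sum(tf.square(w))

for _ in range(3):
  opt.minimize(loss_fn, global_step=step, var_list=[w])
```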

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything other than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/Checkpoint.md b/site/en/api_docs/python/tf/compat/v1/train/Checkpoint.md new file mode 100644 index 00000000000..21ed858b289 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/Checkpoint.md @@ -0,0 +1,459 @@ +description: Groups trackable objects, saving and restoring them. + +
+ + + + + + +
+ +# tf.compat.v1.train.Checkpoint + + + + + + + + + +Groups trackable objects, saving and restoring them. + + + + + + + +`Checkpoint`'s constructor accepts keyword arguments whose values are types +that contain trackable state, such as tf.compat.v1.train.Optimizer +implementations, tf.Variable, `tf.keras.Layer` implementations, or +tf.keras.Model implementations. It saves these values with a checkpoint, and +maintains a `save_counter` for numbering checkpoints. + +Example usage when graph building: + +```python +import tensorflow as tf +import os + +checkpoint_directory = "/tmp/training_checkpoints" +checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt") + +checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model) +status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory)) +train_op = optimizer.minimize( ... ) +status.assert_consumed() # Optional sanity checks. +with tf.compat.v1.Session() as session: + # Use the Session to restore variables, or initialize them if + # tf.train.latest_checkpoint returned None. + status.initialize_or_restore(session) + for _ in range(num_training_steps): + session.run(train_op) + checkpoint.save(file_prefix=checkpoint_prefix) +``` + +Example usage with eager execution enabled: + +```python +import tensorflow as tf +import os + +tf.compat.v1.enable_eager_execution() + +checkpoint_directory = "/tmp/training_checkpoints" +checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt") + +checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model) +status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory)) +for _ in range(num_training_steps): + optimizer.minimize( ... ) # Variables will be restored on creation. +status.assert_consumed() # Optional sanity checks. +checkpoint.save(file_prefix=checkpoint_prefix) +``` + +`Checkpoint.save` and `Checkpoint.restore` write and read object-based +checkpoints, in contrast to tf.compat.v1.train.Saver which writes and reads +`variable.name` based checkpoints. Object-based checkpointing saves a graph of +dependencies between Python objects (`Layer`s, `Optimizer`s, `Variable`s, +etc.) with named edges, and this graph is used to match variables when +restoring a checkpoint. It can be more robust to changes in the Python +program, and helps to support restore-on-create for variables when executing +eagerly. Prefer tf.train.Checkpoint over tf.compat.v1.train.Saver for new +code. + +`Checkpoint` objects have dependencies on the objects passed as keyword +arguments to their constructors, and each dependency is given a name that is +identical to the name of the keyword argument for which it was created. +TensorFlow classes like `Layer`s and `Optimizer`s will automatically add +dependencies on their variables (e.g. "kernel" and "bias" for +tf.keras.layers.Dense). Inheriting from tf.keras.Model makes managing +dependencies easy in user-defined classes, since `Model` hooks into attribute +assignment. For example: + +```python +class Regress(tf.keras.Model): + + def __init__(self): + super(Regress, self).__init__() + self.input_transform = tf.keras.layers.Dense(10) + # ... + + def call(self, inputs): + x = self.input_transform(inputs) + # ... +``` + +This `Model` has a dependency named "input_transform" on its `Dense` layer, +which in turn depends on its variables. As a result, saving an instance of +`Regress` using tf.train.Checkpoint will also save all the variables created +by the `Dense` layer. 
+ +When variables are assigned to multiple workers, each worker writes its own +section of the checkpoint. These sections are then merged/re-indexed to behave +as a single checkpoint. This avoids copying all variables to one worker, but +does require that all workers see a common filesystem. + +While tf.keras.Model.save_weights and tf.train.Checkpoint.save save in the +same format, note that the root of the resulting checkpoint is the object the +save method is attached to. This means saving a tf.keras.Model using +`save_weights` and loading into a tf.train.Checkpoint with a `Model` +attached (or vice versa) will not match the `Model`'s variables. See the +[guide to training +checkpoints](https://www.tensorflow.org/guide/checkpoint) for +details. Prefer tf.train.Checkpoint over tf.keras.Model.save_weights for +training checkpoints. + + + + + + + + + + +
+`**kwargs` + +Keyword arguments are set as attributes of this object, and are +saved with the checkpoint. Values must be trackable objects. +
+ + + + + + + + + + + + +
+`ValueError` + +If objects in `kwargs` are not trackable. +
+ + + + + + + + + + + + + + +
+`save_counter` + +Incremented when `save()` is called. Used to number +checkpoints. +
+ + + +## Methods + +

restore

+ +View source + + + +Restore a training checkpoint. + +Restores this `Checkpoint` and any objects it depends on. + +When executing eagerly, either assigns values immediately if variables to +restore have been created already, or defers restoration until the variables +are created. Dependencies added after this call will be matched if they have +a corresponding object in the checkpoint (the restore request will queue in +any trackable object waiting for the expected dependency to be added). + +When graph building, restoration ops are added to the graph but not run +immediately. + +To ensure that loading is complete and no more assignments will take place, +use the `assert_consumed()` method of the status object returned by +`restore`: + +```python +checkpoint = tf.train.Checkpoint( ... ) +checkpoint.restore(path).assert_consumed() +``` + +An exception will be raised if any Python objects in the dependency graph +were not found in the checkpoint, or if any checkpointed values do not have +a matching Python object. + +When graph building, `assert_consumed()` indicates that all of the restore +ops that will be created for this checkpoint have been created. They can be +run via the `run_restore_ops()` method of the status object: + +```python +checkpoint.restore(path).assert_consumed().run_restore_ops() +``` + +If the checkpoint has not been consumed completely, then the list of restore +ops will grow as more objects are added to the dependency graph. + +Name-based tf.compat.v1.train.Saver checkpoints can be loaded using this +method. Names are used to match variables. No restore ops are created/run +until `run_restore_ops()` or `initialize_or_restore()` are called on the +returned status object when graph building, but there is restore-on-creation +when executing eagerly. Re-encode name-based checkpoints using +tf.train.Checkpoint.save as soon as possible. + + + + + + + + + + +
Args
+`save_path` + +The path to the checkpoint, as returned by `save` or +tf.train.latest_checkpoint. If None (as when there is no latest +checkpoint for tf.train.latest_checkpoint to return), returns an +object which may run initializers for objects in the dependency graph. +If the checkpoint was written by the name-based +tf.compat.v1.train.Saver, names are used to match variables. +
+ + + + + + + + + + + +
Returns
+A load status object, which can be used to make assertions about the +status of a checkpoint restoration and run initialization/restore ops. + +The returned status object has the following methods: + +* `assert_consumed()`: +Raises an exception if any variables are unmatched: either +checkpointed values which don't have a matching Python object or +Python objects in the dependency graph with no values in the +checkpoint. This method returns the status object, and so may be +chained with `initialize_or_restore` or `run_restore_ops`. + +* `assert_existing_objects_matched()`: +Raises an exception if any existing Python objects in the dependency +graph are unmatched. Unlike `assert_consumed`, this assertion will +pass if values in the checkpoint have no corresponding Python +objects. For example a `tf.keras.Layer` object which has not yet been +built, and so has not created any variables, will pass this assertion +but fail `assert_consumed`. Useful when loading part of a larger +checkpoint into a new Python program, e.g. a training checkpoint with +a tf.compat.v1.train.Optimizer was saved but only the state required +for +inference is being loaded. This method returns the status object, and +so may be chained with `initialize_or_restore` or `run_restore_ops`. + +* `assert_nontrivial_match()`: Asserts that something aside from the root +object was matched. This is a very weak assertion, but is useful for +sanity checking in library code where objects may exist in the +checkpoint which haven't been created in Python and some Python +objects may not have a checkpointed value. + +* `expect_partial()`: Silence warnings about incomplete checkpoint +restores. Warnings are otherwise printed for unused parts of the +checkpoint file or object when the `Checkpoint` object is deleted +(often at program shutdown). + +* `initialize_or_restore(session=None)`: +When graph building, runs variable initializers if `save_path` is +`None`, but otherwise runs restore operations. If no `session` is +explicitly specified, the default session is used. No effect when +executing eagerly (variables are initialized or restored eagerly). + +* `run_restore_ops(session=None)`: +When graph building, runs restore operations. If no `session` is +explicitly specified, the default session is used. No effect when +executing eagerly (restore operations are run eagerly). May only be +called when `save_path` is not `None`. +
+ + + +
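For instance, loading only the model portion of a training checkpoint for inference might look like the following sketch (eager execution assumed; the layer, optimizer, and path are placeholders):

```python
import tensorflow as tf

tf.compat.v1.enable_eager_execution()  # no-op on TF 2.x

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.build(input_shape=(None, 4))     # create the layer variables
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.001)

# The training checkpoint tracks both the model and the optimizer...
train_ckpt = tf.compat.v1.train.Checkpoint(model=model, optimizer=optimizer)
path = train_ckpt.save("/tmp/ckpt_example/ckpt")

# ...but for inference only the model needs to be restored.
infer_ckpt = tf.compat.v1.train.Checkpoint(model=model)
status = infer_ckpt.restore(path)
status.assert_existing_objects_matched()  # every existing Python object matched
status.expect_partial()                   # silence warnings about unused optimizer state
```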

save

+ +View source + + + +Saves a training checkpoint and provides basic checkpoint management. + +The saved checkpoint includes variables created by this object and any +trackable objects it depends on at the time `Checkpoint.save()` is +called. + +`save` is a basic convenience wrapper around the `write` method, +sequentially numbering checkpoints using `save_counter` and updating the +metadata used by tf.train.latest_checkpoint. More advanced checkpoint +management, for example garbage collection and custom numbering, may be +provided by other utilities which also wrap `write` +(tf.train.CheckpointManager for example). + + + + + + + + + + + + + +
Args
+`file_prefix` + +A prefix to use for the checkpoint filenames +(/path/to/directory/and_a_prefix). Names are generated based on this +prefix and `Checkpoint.save_counter`. +
+`session` + +The session to evaluate variables in. Ignored when executing +eagerly. If not provided when graph building, the default session is +used. +
+ + + + + + + + + + + +
Returns
+The full path to the checkpoint. +
+ + + +
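A brief sketch contrasting `save` with the lower-level `write` described below (eager execution assumed; the directory is arbitrary):

```python
import tensorflow as tf

tf.compat.v1.enable_eager_execution()  # no-op on TF 2.x

ckpt = tf.compat.v1.train.Checkpoint(step=tf.Variable(0))

# `save` numbers checkpoints and updates the metadata consulted by
# tf.train.latest_checkpoint.
first = ckpt.save("/tmp/ckpt_demo/managed")    # e.g. /tmp/ckpt_demo/managed-1
second = ckpt.save("/tmp/ckpt_demo/managed")   # e.g. /tmp/ckpt_demo/managed-2
assert tf.train.latest_checkpoint("/tmp/ckpt_demo") == second

# `write` is the raw primitive: no numbering, no metadata update.
raw = ckpt.write("/tmp/ckpt_demo/raw")
ckpt.restore(raw)
```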

write

+ +View source + + + +Writes a training checkpoint. + +The checkpoint includes variables created by this object and any +trackable objects it depends on at the time `Checkpoint.write()` is +called. + +`write` does not number checkpoints, increment `save_counter`, or update the +metadata used by tf.train.latest_checkpoint. It is primarily intended for +use by higher level checkpoint management utilities. `save` provides a very +basic implementation of these features. + + + + + + + + + + + + + +
Args
+`file_prefix` + +A prefix to use for the checkpoint filenames +(/path/to/directory/and_a_prefix). +
+`session` + +The session to evaluate variables in. Ignored when executing +eagerly. If not provided when graph building, the default session is +used. +
+ + + + + + + + + + + +
Returns
+The full path to the checkpoint (i.e. `file_prefix`). +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/ChiefSessionCreator.md b/site/en/api_docs/python/tf/compat/v1/train/ChiefSessionCreator.md new file mode 100644 index 00000000000..f9c68119544 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/ChiefSessionCreator.md @@ -0,0 +1,102 @@ +description: Creates a tf.compat.v1.Session for a chief. + +
+ + + + +
+ +# tf.compat.v1.train.ChiefSessionCreator + + + + + + + + + +Creates a tf.compat.v1.Session for a chief. + +Inherits From: [`SessionCreator`](../../../../tf/compat/v1/train/SessionCreator.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`scaffold` + +A `Scaffold` used for gathering or building supportive ops. If +not specified a default one is created. It's used to finalize the graph. +
+`master` + +`String` representation of the TensorFlow master to use. +
+`config` + +`ConfigProto` proto used to configure the session. +
+`checkpoint_dir` + +A string. Optional path to a directory from which to restore +variables. +
+`checkpoint_filename_with_path` + +Full file name path to the checkpoint file. +
+ + + +## Methods + +

create_session

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/FtrlOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/FtrlOptimizer.md new file mode 100644 index 00000000000..887670b431e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/FtrlOptimizer.md @@ -0,0 +1,670 @@ +description: Optimizer that implements the FTRL algorithm. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.FtrlOptimizer + + + + + + + + + +Optimizer that implements the FTRL algorithm. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + +This version has support for both online L2 (McMahan et al., 2013) and +shrinkage-type L2, which is the addition of an L2 penalty +to the loss function. + +#### References: + +Ad-click prediction: + [McMahan et al., 2013](https://dl.acm.org/citation.cfm?id=2488200) + ([pdf](https://dl.acm.org/ft_gateway.cfm?id=2488200&ftid=1388399&dwn=1&CFID=32233078&CFTOKEN=d60fe57a294c056a-CB75C374-F915-E7A6-1573FBBC7BF7D526)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A float value or a constant float `Tensor`. +
+`learning_rate_power` + +A float value, must be less than or equal to zero. +Controls how the learning rate decreases during training. Use zero for +a fixed learning rate. See section 3.1 in (McMahan et al., 2013). +
+`initial_accumulator_value` + +The starting value for accumulators. +Only zero or positive values are allowed. +
+`l1_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`l2_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`use_locking` + +If `True` use locks for update operations. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "Ftrl". +
+`accum_name` + +The suffix for the variable that keeps the gradient squared +accumulator. If not present, defaults to name. +
+`linear_name` + +The suffix for the variable that keeps the linear gradient +accumulator. If not present, defaults to name + "_1". +
+`l2_shrinkage_regularization_strength` + +A float value, must be greater than +or equal to zero. This differs from L2 above in that the L2 above is a +stabilization penalty, whereas this L2 shrinkage is a magnitude penalty. +The FTRL formulation can be written as: +w_{t+1} = argmin_w(\hat{g}_{1:t}w + L1*||w||_1 + L2*||w||_2^2), where +\hat{g} = g + (2*L2_shrinkage*w), and g is the gradient of the loss +function w.r.t. the weights w. +Specifically, in the absence of L1 regularization, it is equivalent to +the following update rule: +w_{t+1} = w_t - lr_t / (1 + 2*L2*lr_t) * g_t - +2*L2_shrinkage*lr_t / (1 + 2*L2*lr_t) * w_t +where lr_t is the learning rate at t. +When input is sparse shrinkage will only happen on the active weights. +
+ + + + + + + + + + + + +
+`ValueError` + +If one of the arguments is invalid. +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Defaults to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If you should use `_distributed_apply()` instead. +
+ + + +

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything other than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +
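A minimal graph-mode sketch with the online L1/L2 penalties and the shrinkage-type L2 penalty enabled; the toy data and regularization strengths are illustrative only:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy linear model, for illustration.
x = tf.constant([[1.0, 0.0, 2.0]])
y = tf.constant([[1.0]])
w = tf.compat.v1.get_variable("w", initializer=tf.zeros([3, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

opt = tf.compat.v1.train.FtrlOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,               # online L1
    l2_regularization_strength=0.001,               # stabilization L2
    l2_shrinkage_regularization_strength=0.001)     # magnitude-penalty L2
train_op = opt.minimize(loss)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.global_variables_initializer())
  for _ in range(10):
    sess.run(train_op)
```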

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/GradientDescentOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/GradientDescentOptimizer.md new file mode 100644 index 00000000000..a47e8765e92 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/GradientDescentOptimizer.md @@ -0,0 +1,573 @@ +description: Optimizer that implements the gradient descent algorithm. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.GradientDescentOptimizer + + + + + + + + + +Optimizer that implements the gradient descent algorithm. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A Tensor or a floating point value. The learning +rate to use. +
+`use_locking` + +If True use locks for update operations. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "GradientDescent". +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Defaults to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If you should use `_distributed_apply()` instead. +
+ + + +
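Because `grads_and_vars` is simply a list of pairs, gradients computed by other means (here `tf.gradients`, in a toy graph-mode sketch) can also be passed to `apply_gradients()`:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.get_variable("w", initializer=tf.constant([3.0]))
loss = tf.reduce_sum(tf.square(w - 1.0))  # toy objective

opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1)
grads = tf.gradients(loss, [w])                     # gradients computed manually
train_op = opt.apply_gradients(list(zip(grads, [w])))

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.global_variables_initializer())
  for _ in range(5):
    sess.run(train_op)
  print(sess.run(w))                                # moves toward 1.0
```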

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything other than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +
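+
+For illustration, here is a minimal sketch of the eager-mode form described above.
+The `GradientDescentOptimizer` subclass, the variable `v`, and the quadratic loss
+are invented purely for this example:
+
+```python
+import tensorflow as tf
+
+# Eager execution is assumed to be enabled (the TF2 default).
+v = tf.Variable(3.0)
+opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1)
+
+# `loss` is passed as a zero-argument callable; the gating and
+# aggregation arguments are ignored in this mode.
+opt.minimize(lambda: (v - 1.0) ** 2, var_list=[v])
+print(v.numpy())  # v has moved slightly toward 1.0
+```
+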

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/LooperThread.md b/site/en/api_docs/python/tf/compat/v1/train/LooperThread.md new file mode 100644 index 00000000000..091db942fe1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/LooperThread.md @@ -0,0 +1,416 @@ +description: A thread that runs code repeatedly, optionally on a timer. + +
+ + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.train.LooperThread + + + + + + + + + +A thread that runs code repeatedly, optionally on a timer. + + + + + + + +This thread class is intended to be used with a `Coordinator`. It repeatedly +runs code specified either as `target` and `args` or by the `run_loop()` +method. + +Before each run the thread checks if the coordinator has requested stop. In +that case the looper thread terminates immediately. + +If the code being run raises an exception, that exception is reported to the +coordinator and the thread terminates. The coordinator will then request all +the other threads it coordinates to stop. + +You typically pass looper threads to the supervisor `Join()` method. + + + + + + + + + + + + + + + + + + + + + + +
+`coord` + +A Coordinator. +
+`timer_interval_secs` + +Time boundaries at which to call Run(), or None +if it should be called back to back. +
+`target` + +Optional callable object that will be executed in the thread. +
+`args` + +Optional arguments to pass to `target` when calling it. +
+`kwargs` + +Optional keyword arguments to pass to `target` when calling it. +
+ + + + + + + + + + + + +
+`ValueError` + +If one of the arguments is invalid. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`daemon` + +A boolean value indicating whether this thread is a daemon thread. + +This must be set before start() is called, otherwise RuntimeError is +raised. Its initial value is inherited from the creating thread; the +main thread is not a daemon thread and therefore all threads created in +the main thread default to daemon = False. + +The entire Python program exits when only daemon threads are left. +
+`ident` + +Thread identifier of this thread or None if it has not been started. + +This is a nonzero integer. See the get_ident() function. Thread +identifiers may be recycled when a thread exits and another thread is +created. The identifier is available even after the thread has exited. +
+`name` + +A string used for identification purposes only. + +It has no semantics. Multiple threads may be given the same name. The +initial name is set by the constructor. +
+`native_id` + +Native integral thread ID of this thread, or None if it has not been started. + +This is a non-negative integer. See the get_native_id() function. +This represents the Thread ID as reported by the kernel. +
+ + + +## Methods + +

getName

+ + + + + + +

isAlive

+ + + +Return whether the thread is alive. + +This method is deprecated, use is_alive() instead. + +

isDaemon

+ + + + + + +

is_alive

+ + + +Return whether the thread is alive. + +This method returns True just before the run() method starts until just +after the run() method terminates. The module function enumerate() +returns a list of all alive threads. + +

join

+ + + +Wait until the thread terminates. + +This blocks the calling thread until the thread whose join() method is +called terminates -- either normally or through an unhandled exception +or until the optional timeout occurs. + +When the timeout argument is present and not None, it should be a +floating point number specifying a timeout for the operation in seconds +(or fractions thereof). As join() always returns None, you must call +is_alive() after join() to decide whether a timeout happened -- if the +thread is still alive, the join() call timed out. + +When the timeout argument is not present or None, the operation will +block until the thread terminates. + +A thread can be join()ed many times. + +join() raises a RuntimeError if an attempt is made to join the current +thread as that would cause a deadlock. It is also an error to join() a +thread before it has been started and attempts to do so raises the same +exception. + +

loop

+ +View source + + + +Start a LooperThread that calls a function periodically. + +If `timer_interval_secs` is None the thread calls `target(args)` +repeatedly. Otherwise `target(args)` is called every `timer_interval_secs` +seconds. The thread terminates when a stop of the coordinator is +requested. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`coord` + +A Coordinator. +
+`timer_interval_secs` + +Number. Time boundaries at which to call `target`. +
+`target` + +A callable object. +
+`args` + +Optional arguments to pass to `target` when calling it. +
+`kwargs` + +Optional keyword arguments to pass to `target` when calling it. +
+ + + + + + + + + + + +
Returns
+The started thread. +
+ + + +
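+
+For example, the classmethod documented above might be paired with a
+`Coordinator` roughly as follows. This is a minimal sketch; `fetch_metrics` is a
+hypothetical callable standing in for real periodic work:
+
+```python
+import tensorflow as tf
+
+coord = tf.compat.v1.train.Coordinator()
+
+def fetch_metrics():
+  # Hypothetical unit of work, executed once every interval until the
+  # coordinator requests a stop.
+  pass
+
+looper = tf.compat.v1.train.LooperThread.loop(
+    coord, timer_interval_secs=5, target=fetch_metrics)
+
+# ... do other work on the main thread ...
+coord.request_stop()
+coord.join([looper])
+```
+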

run

+ +View source + + + +Method representing the thread's activity. + +You may override this method in a subclass. The standard run() method +invokes the callable object passed to the object's constructor as the +target argument, if any, with sequential and keyword arguments taken +from the args and kwargs arguments, respectively. + +

run_loop

+ +View source + + + +Called at 'timer_interval_secs' boundaries. + + +

setDaemon

+ + + + + + +

setName

+ + + + + + +

start

+ + + +Start the thread's activity. + +It must be called at most once per thread object. It arranges for the +object's run() method to be invoked in a separate thread of control. + +This method will raise a RuntimeError if called more than once on the +same thread object. + +

start_loop

+ +View source + + + +Called when the thread starts. + + +

stop_loop

+ +View source + + + +Called when the thread stops. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/MomentumOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/MomentumOptimizer.md new file mode 100644 index 00000000000..0ce3a74473c --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/MomentumOptimizer.md @@ -0,0 +1,608 @@ +description: Optimizer that implements the Momentum algorithm. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.MomentumOptimizer + + + + + + + + + +Optimizer that implements the Momentum algorithm. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + +Computes (if `use_nesterov = False`): + +``` +accumulation = momentum * accumulation + gradient +variable -= learning_rate * accumulation +``` + +Note that in the dense version of this algorithm, `accumulation` is updated +and applied regardless of a gradient's value, whereas the sparse version (when +the gradient is an `IndexedSlices`, typically because of tf.gather or an +embedding) only updates variable slices and corresponding `accumulation` terms +when that part of the variable was used in the forward pass. + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A `Tensor` or a floating point value. The learning rate. +
+`momentum` + +A `Tensor` or a floating point value. The momentum. +
+`use_locking` + +If `True` use locks for update operations. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "Momentum". +
+`use_nesterov` + +If `True` use Nesterov Momentum. +See (Sutskever et al., 2013). +This implementation always computes gradients at the value of the +variable(s) passed to the optimizer. Using Nesterov Momentum makes the +variable(s) track the values called `theta_t + mu*v_t` in the paper. +This implementation is an approximation of the original formula, valid +for high values of momentum. It will compute the "adjusted gradient" +in NAG by assuming that the new gradient will be estimated by the +current average gradient plus the product of momentum and the change +in the average gradient. +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Default to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If `_distributed_apply()` should be used instead. +
+ + + +
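+
+A minimal graph-mode sketch of the two-step `compute_gradients()` /
+`apply_gradients()` flow with this optimizer; the variable and toy loss are
+invented for the example:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+x = tf.Variable([1.0, 2.0])
+loss = tf.reduce_sum(tf.square(x))  # toy loss, for illustration only
+
+opt = tf.compat.v1.train.MomentumOptimizer(
+    learning_rate=0.01, momentum=0.9, use_nesterov=True)
+global_step = tf.compat.v1.train.get_or_create_global_step()
+train_op = opt.apply_gradients(opt.compute_gradients(loss),
+                               global_step=global_step)
+
+with tf.compat.v1.Session() as sess:
+  sess.run(tf.compat.v1.global_variables_initializer())
+  sess.run(train_op)  # one update; also increments global_step
+```
+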

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything other than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/MonitoredSession.md b/site/en/api_docs/python/tf/compat/v1/train/MonitoredSession.md new file mode 100644 index 00000000000..0167d748675 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/MonitoredSession.md @@ -0,0 +1,421 @@ +description: Session-like object that handles initialization, recovery and hooks. + +
+ + + + + + + + + + +
+ +# tf.compat.v1.train.MonitoredSession + + + + + + + + + +Session-like object that handles initialization, recovery and hooks. + + + + + + + + +#### Example usage: + + + +```python +saver_hook = CheckpointSaverHook(...) +summary_hook = SummarySaverHook(...) +with MonitoredSession(session_creator=ChiefSessionCreator(...), + hooks=[saver_hook, summary_hook]) as sess: + while not sess.should_stop(): + sess.run(train_op) +``` + +Initialization: At creation time the monitored session does following things +in given order: + +* calls `hook.begin()` for each given hook +* finalizes the graph via `scaffold.finalize()` +* create session +* initializes the model via initialization ops provided by `Scaffold` +* restores variables if a checkpoint exists +* launches queue runners +* calls `hook.after_create_session()` + +Run: When `run()` is called, the monitored session does following things: + +* calls `hook.before_run()` +* calls TensorFlow `session.run()` with merged fetches and feed_dict +* calls `hook.after_run()` +* returns result of `session.run()` asked by user +* if `AbortedError` or `UnavailableError` occurs, it recovers or + reinitializes the session before executing the run() call again + + +Exit: At the `close()`, the monitored session does following things in order: + +* calls `hook.end()` +* closes the queue runners and the session +* suppresses `OutOfRange` error which indicates that all inputs have been + processed if the monitored_session is used as a context + +How to set tf.compat.v1.Session arguments: + +* In most cases you can set session arguments as follows: + +```python +MonitoredSession( + session_creator=ChiefSessionCreator(master=..., config=...)) +``` + +* In distributed setting for a non-chief worker, you can use following: + +```python +MonitoredSession( + session_creator=WorkerSessionCreator(master=..., config=...)) +``` + +See `MonitoredTrainingSession` for an example usage based on chief or worker. + +Note: This is not a tf.compat.v1.Session. For example, it cannot do +following: + +* it cannot be set as default session. +* it cannot be sent to saver.save. +* it cannot be sent to tf.train.start_queue_runners. + + + + + + + + + + + + + +
+`session_creator` + +A factory object to create a session. Typically a +`ChiefSessionCreator`, which is the default. +
+`hooks` + +An iterable of `SessionRunHook` objects. +
+ + + + + + + + + + + +
+A MonitoredSession object. +
+ + + + + + + + + + + + + + + + + + + + + +
+`session_creator` + +A factory object to create a session. Typically a +`ChiefSessionCreator` or a `WorkerSessionCreator`. +
+`hooks` + +An iterable of `SessionRunHook` objects. +
+`should_recover` + +A bool. Indicates whether to recover from `AbortedError` +and `UnavailableError` or not. +
+`stop_grace_period_secs` + +Number of seconds given to threads to stop after +`close()` has been called. +
+ + + + + + + + + + + + + + +
+`graph` + +The graph that was launched in this session. +
+ + + +## Child Classes +[`class StepContext`](../../../../tf/compat/v1/train/MonitoredSession/StepContext.md) + +## Methods + +

close

+ +View source + + + + + + +

run

+ +View source + + + +Run ops in the monitored session. + +This method is completely compatible with the `tf.Session.run()` method. + + + + + + + + + + + + + + + + + + + +
Args
+`fetches` + +Same as `tf.Session.run()`. +
+`feed_dict` + +Same as `tf.Session.run()`. +
+`options` + +Same as `tf.Session.run()`. +
+`run_metadata` + +Same as `tf.Session.run()`. +
+ + + + + + + + + + + +
Returns
+Same as `tf.Session.run()`. +
+ + + +

run_step_fn

+ +View source + + + +Run ops using a step function. + + + + + + + + + + + +
Args
+`step_fn`
+
+A function or a method with a single argument of type
+`StepContext`. The function may use methods of the argument to perform
+computations with access to a raw session. The returned value of the
+`step_fn` will be returned from `run_step_fn`, unless a stop is
+requested. In that case, the next `should_stop` call will return True.
+Example usage:
+```python
+with tf.Graph().as_default():
+  c = tf.compat.v1.placeholder(tf.float32)
+  v = tf.add(c, 4.0)
+  w = tf.add(c, 0.5)
+
+  def step_fn(step_context):
+    a = step_context.session.run(fetches=v, feed_dict={c: 0.5})
+    if a <= 4.5:
+      step_context.request_stop()
+    return step_context.run_with_hooks(fetches=w, feed_dict={c: 0.1})
+
+  with tf.compat.v1.train.MonitoredSession() as session:
+    while not session.should_stop():
+      a = session.run_step_fn(step_fn)
+```
+Hooks interact with the `run_with_hooks()` call inside the
+`step_fn` as they do with a `MonitoredSession.run` call.
+
+ + + + + + + + + + + +
Returns
+Returns the returned value of `step_fn`. +
+ + + + + + + + + + + + + + + +
Raises
+`StopIteration` + +if `step_fn` has called `request_stop()`. It may be +caught by `with tf.MonitoredSession()` to close the session. +
+`ValueError` + +if `step_fn` doesn't have a single argument called +`step_context`. It may also optionally have `self` for cases when it +belongs to an object. +
+ + + +

should_stop

+ +View source + + + + + + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/MonitoredSession/StepContext.md b/site/en/api_docs/python/tf/compat/v1/train/MonitoredSession/StepContext.md new file mode 100644 index 00000000000..55cd006f36d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/MonitoredSession/StepContext.md @@ -0,0 +1,139 @@ +description: Control flow instrument for the step_fn from run_step_fn(). + +
+ + + + + +
+ +# tf.compat.v1.train.MonitoredSession.StepContext + + + + + + + + + +Control flow instrument for the `step_fn` from `run_step_fn()`. + + + + + + + + + +Users of `step_fn` may perform `run()` calls without running hooks +by accessing the `session`. A `run()` call with hooks may be performed +using `run_with_hooks()`. Computation flow can be interrupted using +`request_stop()`. + + + + + + + + + + + + + +
+`session` + +An instance of tf.compat.v1.Session. +
+`run_with_hooks_fn` + +A function for running fetches and hooks. +
+ + + + + + + + + + + + + + +
+`session` + + +
+ + + +## Methods + +

request_stop

+ +View source + + + +Exit the training loop by causing `should_stop()` to return `True`. + + Causes `step_fn` to exit by raising an exception. + + + + + + + + + +
Raises
+StopIteration +
+ + + +

run_with_hooks

+ +View source + + + +Same as `MonitoredSession.run`. Accepts the same arguments. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/MonitoredTrainingSession.md b/site/en/api_docs/python/tf/compat/v1/train/MonitoredTrainingSession.md new file mode 100644 index 00000000000..40bd66340fd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/MonitoredTrainingSession.md @@ -0,0 +1,210 @@ +description: Creates a MonitoredSession for training. + +
+ + +
+ +# tf.compat.v1.train.MonitoredTrainingSession + + + + + + + + + +Creates a `MonitoredSession` for training. + + + + + + + +For a chief, this utility sets proper session initializer/restorer. It also +creates hooks related to checkpoint and summary saving. For workers, this +utility sets proper session creator which waits for the chief to +initialize/restore. Please check tf.compat.v1.train.MonitoredSession for +more +information. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`master` + +`String` the TensorFlow master to use. +
+`is_chief` + +If `True`, it will take care of initialization and recovery of the +underlying TensorFlow session. If `False`, it will wait on a chief to +initialize or recover the TensorFlow session. +
+`checkpoint_dir` + +A string. Optional path to a directory where to restore +variables. +
+`scaffold` + +A `Scaffold` used for gathering or building supportive ops. If not +specified, a default one is created. It's used to finalize the graph. +
+`hooks` + +Optional list of `SessionRunHook` objects. +
+`chief_only_hooks` + +list of `SessionRunHook` objects. Activate these hooks if +`is_chief==True`, ignore otherwise. +
+`save_checkpoint_secs` + +The frequency, in seconds, that a checkpoint is saved +using a default checkpoint saver. If both `save_checkpoint_steps` and +`save_checkpoint_secs` are set to `None`, then the default checkpoint +saver isn't used. If both are provided, then only `save_checkpoint_secs` +is used. Default 600. +
+`save_summaries_steps` + +The frequency, in number of global steps, that the +summaries are written to disk using a default summary saver. If both +`save_summaries_steps` and `save_summaries_secs` are set to `None`, then +the default summary saver isn't used. Default 100. +
+`save_summaries_secs` + +The frequency, in secs, that the summaries are written +to disk using a default summary saver. If both `save_summaries_steps` and +`save_summaries_secs` are set to `None`, then the default summary saver +isn't used. Default not enabled. +
+`config` + +an instance of tf.compat.v1.ConfigProto proto used to configure +the session. It's the `config` argument of constructor of +tf.compat.v1.Session. +
+`stop_grace_period_secs` + +Number of seconds given to threads to stop after +`close()` has been called. +
+`log_step_count_steps` + +The frequency, in number of global steps, that the +global step/sec is logged. +
+`max_wait_secs` + +Maximum time workers should wait for the session to become +available. This should be kept relatively short to help detect incorrect +code, but sometimes may need to be increased if the chief takes a while to +start up. +
+`save_checkpoint_steps` + +The frequency, in number of global steps, that a +checkpoint is saved using a default checkpoint saver. If both +`save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then +the default checkpoint saver isn't used. If both are provided, then only +`save_checkpoint_secs` is used. Default not enabled. +
+`summary_dir` + +A string. Optional path to a directory where to save +summaries. If None, checkpoint_dir is used instead. +
+`save_graph_def` + +Whether to save the GraphDef and MetaGraphDef to +`checkpoint_dir`. The GraphDef is saved after the session is created as +`graph.pbtxt`. MetaGraphDefs are saved out for every checkpoint as +`model.ckpt-*.meta`. +
+ + + + + + + + + + + +
+A `MonitoredSession` object. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/NewCheckpointReader.md b/site/en/api_docs/python/tf/compat/v1/train/NewCheckpointReader.md new file mode 100644 index 00000000000..a5814e265df --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/NewCheckpointReader.md @@ -0,0 +1,64 @@ +description: A function that returns a CheckPointReader. + +
+ + +
+ +# tf.compat.v1.train.NewCheckpointReader + + + + + + + + + +A function that returns a CheckPointReader. + + + + + + + + + + + + + + + + + +
+`filepattern` + +The filename. +
+ + + + + + + + + + + +
+A CheckpointReader object. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/Optimizer.md b/site/en/api_docs/python/tf/compat/v1/train/Optimizer.md new file mode 100644 index 00000000000..ebbaa468d5e --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/Optimizer.md @@ -0,0 +1,665 @@ +description: Base class for optimizers. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.Optimizer + + + + + + + + + +Base class for optimizers. + + + + + + + +This class defines the API to add Ops to train a model. You never use this +class directly, but instead instantiate one of its subclasses such as +`GradientDescentOptimizer`, `AdagradOptimizer`, or `MomentumOptimizer`. + +### Usage + +```python +# Create an optimizer with the desired parameters. +opt = GradientDescentOptimizer(learning_rate=0.1) +# Add Ops to the graph to minimize a cost by updating a list of variables. +# "cost" is a Tensor, and the list of variables contains tf.Variable +# objects. +opt_op = opt.minimize(cost, var_list=) +``` + +In the training program you will just have to run the returned Op. + +```python +# Execute opt_op to do one step of training: +opt_op.run() +``` + +### Processing gradients before applying them. + +Calling `minimize()` takes care of both computing the gradients and +applying them to the variables. If you want to process the gradients +before applying them you can instead use the optimizer in three steps: + +1. Compute the gradients with `compute_gradients()`. +2. Process the gradients as you wish. +3. Apply the processed gradients with `apply_gradients()`. + +#### Example: + + + +```python +# Create an optimizer. +opt = GradientDescentOptimizer(learning_rate=0.1) + +# Compute the gradients for a list of variables. +grads_and_vars = opt.compute_gradients(loss, ) + +# grads_and_vars is a list of tuples (gradient, variable). Do whatever you +# need to the 'gradient' part, for example cap them, etc. +capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars] + +# Ask the optimizer to apply the capped gradients. +opt.apply_gradients(capped_grads_and_vars) +``` + +### Gating Gradients + +Both `minimize()` and `compute_gradients()` accept a `gate_gradients` +argument that controls the degree of parallelism during the application of +the gradients. + +The possible values are: `GATE_NONE`, `GATE_OP`, and `GATE_GRAPH`. + +`GATE_NONE`: Compute and apply gradients in parallel. This provides +the maximum parallelism in execution, at the cost of some non-reproducibility +in the results. For example the two gradients of `matmul` depend on the input +values: With `GATE_NONE` one of the gradients could be applied to one of the +inputs _before_ the other gradient is computed resulting in non-reproducible +results. + +`GATE_OP`: For each Op, make sure all gradients are computed before +they are used. This prevents race conditions for Ops that generate gradients +for multiple inputs where the gradients depend on the inputs. + +`GATE_GRAPH`: Make sure all gradients for all variables are computed +before any one of them is used. This provides the least parallelism but can +be useful if you want to process all gradients before applying any of them. + +### Slots + +Some optimizer subclasses, such as `MomentumOptimizer` and `AdagradOptimizer` +allocate and manage additional variables associated with the variables to +train. These are called Slots. Slots have names and you can ask the +optimizer for the names of the slots that it uses. Once you have a slot name +you can ask the optimizer for the variable it created to hold the slot value. + +This can be useful if you want to log debug a training algorithm, report stats +about the slots, etc. + + + + + + + + + + + + + +
+`use_locking` + +Bool. If True apply use locks to prevent concurrent updates +to variables. +
+`name` + +A non-empty string. The name to use for accumulators created +for the optimizer. +
+ + + + + + + + + + + + +
+`ValueError` + +If name is malformed. +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Default to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If `_distributed_apply()` should be used instead. +
+ + + +

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything other than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +
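+
+As a sketch of the gating and aggregation options described above:
+`GradientDescentOptimizer` and the toy `matmul` loss are used purely for
+illustration, and graph mode is assumed:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+a = tf.Variable([[1.0, 2.0]])
+b = tf.Variable([[3.0], [4.0]])
+loss = tf.reduce_sum(tf.matmul(a, b))  # toy loss, for illustration only
+
+opt = tf.compat.v1.train.GradientDescentOptimizer(0.1)
+grads_and_vars = opt.compute_gradients(
+    loss,
+    gate_gradients=tf.compat.v1.train.Optimizer.GATE_GRAPH,
+    aggregation_method=tf.AggregationMethod.ADD_N)
+# Each entry is a (gradient, variable) pair; gradients may be None.
+train_op = opt.apply_gradients(
+    [(g, v) for g, v in grads_and_vars if g is not None])
+```
+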

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +
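+
+For instance, slot inspection might look like the sketch below. A
+`MomentumOptimizer` and a toy variable/loss are invented for the example; other
+subclasses create different slot names:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+var = tf.Variable([1.0, 2.0])
+loss = tf.reduce_sum(tf.square(var))  # toy loss, for illustration only
+
+opt = tf.compat.v1.train.MomentumOptimizer(learning_rate=0.1, momentum=0.9)
+train_op = opt.minimize(loss)
+
+print(opt.get_slot_names())            # ['momentum']
+accum = opt.get_slot(var, 'momentum')  # accumulator Variable for `var`, or None
+```
+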

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/ProximalAdagradOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/ProximalAdagradOptimizer.md new file mode 100644 index 00000000000..30a46742daf --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/ProximalAdagradOptimizer.md @@ -0,0 +1,624 @@ +description: Optimizer that implements the Proximal Adagrad algorithm. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.ProximalAdagradOptimizer + + + + + + + + + +Optimizer that implements the Proximal Adagrad algorithm. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + + +#### References: + +Adaptive Subgradient Methods for Online Learning and Stochastic Optimization: + [Duchi et al., 2011](http://jmlr.org/papers/v12/duchi11a.html) + ([pdf](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)) +Efficient Learning using Forward-Backward Splitting: + [Duchi et al., 2009](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting) + ([pdf](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting.pdf)) + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A `Tensor` or a floating point value. The learning rate. +
+`initial_accumulator_value` + +A floating point value. +Starting value for the accumulators, must be positive. +
+`l1_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`l2_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`use_locking` + +If `True` use locks for update operations. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "Adagrad". +
+ + + + + + + + + + + + +
+`ValueError` + +If the `initial_accumulator_value` is invalid. +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Default to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If `_distributed_apply()` should be used instead. +
+ + + +
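+
+A hedged sketch of constructing this optimizer and taking one training step;
+the hyperparameter values and the toy variable/loss are illustrative only, and
+graph mode is assumed:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+w = tf.Variable([0.5, -0.5])
+loss = tf.reduce_sum(tf.square(w))  # toy loss, for illustration only
+
+opt = tf.compat.v1.train.ProximalAdagradOptimizer(
+    learning_rate=0.1,
+    l1_regularization_strength=0.001,
+    l2_regularization_strength=0.001)
+train_op = opt.minimize(loss)
+
+with tf.compat.v1.Session() as sess:
+  sess.run(tf.compat.v1.global_variables_initializer())
+  sess.run(train_op)
+```
+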

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything other than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/ProximalGradientDescentOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/ProximalGradientDescentOptimizer.md new file mode 100644 index 00000000000..6817bf9ccbd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/ProximalGradientDescentOptimizer.md @@ -0,0 +1,597 @@ +description: Optimizer that implements the proximal gradient descent algorithm. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.ProximalGradientDescentOptimizer + + + + + + + + + +Optimizer that implements the proximal gradient descent algorithm. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + + +#### References: + +Efficient Learning using Forward-Backward Splitting: + [Duchi et al., 2009](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting) + ([pdf](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting.pdf)) + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A Tensor or a floating point value. The learning +rate to use. +
+`l1_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`l2_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`use_locking` + +If True use locks for update operations. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "GradientDescent". +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Default to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If `_distributed_apply()` should be used instead. +
+ + + +
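+
+Construction mirrors `ProximalAdagradOptimizer`; a brief sketch with
+illustrative (not recommended) hyperparameter values:
+
+```python
+import tensorflow as tf
+
+# Values below are illustrative only.
+opt = tf.compat.v1.train.ProximalGradientDescentOptimizer(
+    learning_rate=0.05,
+    l1_regularization_strength=0.001,
+    l2_regularization_strength=0.001)
+```
+
+The resulting `opt` can then be used with `minimize()` or the
+`compute_gradients()`/`apply_gradients()` pair exactly as for the other
+optimizers documented here.
+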

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything other than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/QueueRunner.md b/site/en/api_docs/python/tf/compat/v1/train/QueueRunner.md new file mode 100644 index 00000000000..72a59434b89 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/QueueRunner.md @@ -0,0 +1,380 @@ +description: Holds a list of enqueue operations for a queue, each to be run in a thread. + +
+ + + + + + +
+ +# tf.compat.v1.train.QueueRunner + + + + + + + + + +Holds a list of enqueue operations for a queue, each to be run in a thread. + + + + + + + + + +Queues are a convenient TensorFlow mechanism to compute tensors +asynchronously using multiple threads. For example in the canonical 'Input +Reader' setup one set of threads generates filenames in a queue; a second set +of threads read records from the files, processes them, and enqueues tensors +on a second queue; a third set of threads dequeues these input records to +construct batches and runs them through training operations. + +There are several delicate issues when running multiple threads that way: +closing the queues in sequence as the input is exhausted, correctly catching +and reporting exceptions, etc. + +The `QueueRunner`, combined with the `Coordinator`, helps handle these issues. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`queue` + +A `Queue`. +
+`enqueue_ops` + +List of enqueue ops to run in threads later. +
+`close_op` + +Op to close the queue. Pending enqueue ops are preserved. +
+`cancel_op` + +Op to close the queue and cancel pending enqueue ops. +
+`queue_closed_exception_types` + +Optional tuple of Exception types that +indicate that the queue has been closed when raised during an enqueue +operation. Defaults to `(tf.errors.OutOfRangeError,)`. Another common +case includes `(tf.errors.OutOfRangeError, tf.errors.CancelledError)`, +when some of the enqueue ops may dequeue from other Queues. +
+`queue_runner_def` + +Optional `QueueRunnerDef` protocol buffer. If specified, +recreates the QueueRunner from its contents. `queue_runner_def` and the +other arguments are mutually exclusive. +
+`import_scope` + +Optional `string`. Name scope to add. Only used when +initializing from protocol buffer. +
+ + + + + + + + + + + + + + + + + + +
+`ValueError` + +If both `queue_runner_def` and `queue` are specified. +
+`ValueError` + +If `queue` or `enqueue_ops` are not provided when not +restoring from `queue_runner_def`. +
+`RuntimeError` + +If eager execution is enabled. +
+ + + +#### Eager Compatibility +QueueRunners are not compatible with eager execution. Instead, please +use tf.data to get data into your model. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cancel_op` + + +
+`close_op` + + +
+`enqueue_ops` + + +
+`exceptions_raised` + +Exceptions raised but not handled by the `QueueRunner` threads. + +Exceptions raised in queue runner threads are handled in one of two ways +depending on whether or not a `Coordinator` was passed to +`create_threads()`: + +* With a `Coordinator`, exceptions are reported to the coordinator and +forgotten by the `QueueRunner`. +* Without a `Coordinator`, exceptions are captured by the `QueueRunner` and +made available in this `exceptions_raised` property. +
+`name` + +The string name of the underlying Queue. +
+`queue` + + +
+`queue_closed_exception_types` + + +
+ + + +## Methods + +

create_threads

+ +View source + + + +Create threads to run the enqueue ops for the given session. + +This method requires a session in which the graph was launched. It creates +a list of threads, optionally starting them. There is one thread for each +op passed in `enqueue_ops`. + +The `coord` argument is an optional coordinator that the threads will use +to terminate together and report exceptions. If a coordinator is given, +this method starts an additional thread to close the queue when the +coordinator requests a stop. + +If previously created threads for the given session are still running, no +new threads will be created. + + + + + + + + + + + + + + + + + + + +
Args
+`sess` + +A `Session`. +
+`coord` + +Optional `Coordinator` object for reporting errors and checking +stop conditions. +
+`daemon` + +Boolean. If `True` make the threads daemon threads. +
+`start` + +Boolean. If `True` starts the threads. If `False` the +caller must call the `start()` method of the returned threads. +
+ + + + + + + + + + + +
Returns
+A list of threads. +
+ + + +
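+
+A minimal sketch of the thread lifecycle described above. The toy FIFO queue
+and random enqueue op are invented for the example, and eager execution must be
+disabled because `QueueRunner` is graph-mode only:
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+queue = tf.queue.FIFOQueue(capacity=32, dtypes=[tf.float32])
+enqueue_op = queue.enqueue([tf.random.uniform([])])
+qr = tf.compat.v1.train.QueueRunner(queue, [enqueue_op] * 4)
+
+coord = tf.compat.v1.train.Coordinator()
+with tf.compat.v1.Session() as sess:
+  # One thread per enqueue op, plus a closer thread for the coordinator.
+  threads = qr.create_threads(sess, coord=coord, start=True)
+  for _ in range(10):
+    sess.run(queue.dequeue())
+  coord.request_stop()
+  coord.join(threads)
+```
+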

from_proto

+ +View source + + + +Returns a `QueueRunner` object created from `queue_runner_def`. + + +

to_proto

+ +View source + + + +Converts this `QueueRunner` to a `QueueRunnerDef` protocol buffer. + + + + + + + + + + + +
Args
+`export_scope` + +Optional `string`. Name scope to remove. +
+ + + + + + + + + + + +
Returns
+A `QueueRunnerDef` protocol buffer, or `None` if the `Variable` is not in +the specified name scope. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/RMSPropOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/RMSPropOptimizer.md new file mode 100644 index 00000000000..db11e84cd49 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/RMSPropOptimizer.md @@ -0,0 +1,612 @@ +description: Optimizer that implements the RMSProp algorithm (Tielemans et al. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.RMSPropOptimizer + + + + + + + + + +Optimizer that implements the RMSProp algorithm (Tieleman et al., 2012). + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + +#### References: + +Coursera slide 29: +Hinton, 2012 +([pdf](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A Tensor or a floating point value. The learning rate. +
+`decay` + +Discounting factor for the history/coming gradient. +
+`momentum` + +A scalar tensor. +
+`epsilon` + +Small value to avoid zero denominator. +
+`use_locking` + +If True use locks for update operation. +
+`centered` + +If True, gradients are normalized by the estimated variance of +the gradient; if False, by the uncentered second moment. Setting this to +True may help with training, but is slightly more expensive in terms of +computation and memory. Defaults to False. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "RMSProp". +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Default to the +name passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. If `global_step` +was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+`RuntimeError` + +If you should use `_distributed_apply()` instead. +
+ + + +

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This is the first part of `minimize()`. It returns a list +of (gradient, variable) pairs where "gradient" is the gradient +for "variable". Note that "gradient" can be a `Tensor`, an +`IndexedSlices`, or `None` if there is no gradient for the +given variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking +no arguments which returns the value to minimize. When eager execution +is enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph +under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `var_list` contains anything else than `Variable` objects. +
+`ValueError` + +If some arguments are invalid. +
+`RuntimeError` + +If called with eager execution enabled and `loss` is +not callable. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `gate_gradients`, `aggregation_method`, +and `colocate_gradients_with_ops` are ignored. + + + +
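+
+A hedged sketch of this two-step path, useful when gradients have to be
+transformed before they are applied; the variable, the loss, and the clipping
+threshold below are illustrative assumptions:
+
+```python
+import tensorflow as tf
+
+with tf.Graph().as_default():
+  x = tf.compat.v1.get_variable(
+      'x', shape=[], initializer=tf.compat.v1.ones_initializer())
+  loss = tf.square(x - 3.0)
+
+  opt = tf.compat.v1.train.RMSPropOptimizer(learning_rate=0.01, decay=0.9,
+                                            momentum=0.0, epsilon=1e-10)
+
+  # Compute, transform (clip), then apply.
+  grads_and_vars = opt.compute_gradients(loss)
+  clipped = [(tf.clip_by_norm(g, 1.0), v)
+             for g, v in grads_and_vars if g is not None]
+  train_op = opt.apply_gradients(clipped)
+
+  with tf.compat.v1.Session() as sess:
+    sess.run(tf.compat.v1.global_variables_initializer())
+    for _ in range(100):
+      sess.run(train_op)
+    print(sess.run(x))  # Moves toward 3.0.
+```
+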

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls to `compute_gradients()` and +`apply_gradients()`. If you want to process the gradients before applying +them, call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +
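+
+The single-call path can be sketched as below; `minimize()` chains
+`compute_gradients()` and `apply_gradients()` and, when a `global_step` is
+passed, increments it once per update. The tiny loss and the step count are
+illustrative assumptions:
+
+```python
+import tensorflow as tf
+
+with tf.Graph().as_default():
+  w = tf.compat.v1.get_variable(
+      'w', shape=[2], initializer=tf.compat.v1.zeros_initializer())
+  loss = tf.reduce_sum(tf.square(w - [1.0, -1.0]))
+
+  global_step = tf.compat.v1.train.get_or_create_global_step()
+  opt = tf.compat.v1.train.RMSPropOptimizer(learning_rate=0.05)
+  train_op = opt.minimize(loss, global_step=global_step)
+
+  with tf.compat.v1.Session() as sess:
+    sess.run(tf.compat.v1.global_variables_initializer())
+    for _ in range(10):
+      sess.run(train_op)
+    print(sess.run(global_step))  # 10, incremented once per update.
+```
+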

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/Saver.md b/site/en/api_docs/python/tf/compat/v1/train/Saver.md new file mode 100644 index 00000000000..5115824415a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/Saver.md @@ -0,0 +1,897 @@ +description: Saves and restores variables. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.Saver + + + + + + + + + +Saves and restores variables. + + + + + + + +See [Variables](https://tensorflow.org/guide/variables) +for an overview of variables, saving and restoring. + +The `Saver` class adds ops to save and restore variables to and from +*checkpoints*. It also provides convenience methods to run these ops. + +Checkpoints are binary files in a proprietary format which map variable names +to tensor values. The best way to examine the contents of a checkpoint is to +load it using a `Saver`. + +Savers can automatically number checkpoint filenames with a provided counter. +This lets you keep multiple checkpoints at different steps while training a +model. For example you can number the checkpoint filenames with the training +step number. To avoid filling up disks, savers manage checkpoint files +automatically. For example, they can keep only the N most recent files, or +one checkpoint for every N hours of training. + +You number checkpoint filenames by passing a value to the optional +`global_step` argument to `save()`: + +```python +saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0' +... +saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000' +``` + +Additionally, optional arguments to the `Saver()` constructor let you control +the proliferation of checkpoint files on disk: + +* `max_to_keep` indicates the maximum number of recent checkpoint files to + keep. As new files are created, older files are deleted. If None or 0, + no checkpoints are deleted from the filesystem but only the last one is + kept in the `checkpoint` file. Defaults to 5 (that is, the 5 most recent + checkpoint files are kept.) + +* `keep_checkpoint_every_n_hours`: In addition to keeping the most recent + `max_to_keep` checkpoint files, you might want to keep one checkpoint file + for every N hours of training. This can be useful if you want to later + analyze how a model progressed during a long training session. For + example, passing `keep_checkpoint_every_n_hours=2` ensures that you keep + one checkpoint file for every 2 hours of training. The default value of + 10,000 hours effectively disables the feature. + +Note that you still have to call the `save()` method to save the model. +Passing these arguments to the constructor will not save variables +automatically for you. + +A training program that saves regularly looks like: + +```python +... +# Create a saver. +saver = tf.compat.v1.train.Saver(...variables...) +# Launch the graph and train, saving the model every 1,000 steps. +sess = tf.compat.v1.Session() +for step in xrange(1000000): + sess.run(..training_op..) + if step % 1000 == 0: + # Append the step number to the checkpoint name: + saver.save(sess, 'my-model', global_step=step) +``` + +In addition to checkpoint files, savers keep a protocol buffer on disk with +the list of recent checkpoints. This is used to manage numbered checkpoint +files and by `latest_checkpoint()`, which makes it easy to discover the path +to the most recent checkpoint. That protocol buffer is stored in a file named +'checkpoint' next to the checkpoint files. + +If you create several savers, you can specify a different filename for the +protocol buffer file in the call to `save()`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var_list` + +A list of `Variable`/`SaveableObject`, or a dictionary mapping +names to `SaveableObject`s. If `None`, defaults to the list of all +saveable objects. +
+`reshape` + +If `True`, allows restoring parameters from a checkpoint where +the variables have a different shape. +
+`sharded` + +If `True`, shard the checkpoints, one per device. +
+`max_to_keep` + +Maximum number of recent checkpoints to keep. Defaults to 5. +
+`keep_checkpoint_every_n_hours` + +How often to keep checkpoints. Defaults to +10,000 hours. +
+`name` + +String. Optional name to use as a prefix when adding operations. +
+`restore_sequentially` + +A `Bool`, which if true, causes restore of different +variables to happen sequentially within each device. This can lower +memory usage when restoring very large models. +
+`saver_def` + +Optional `SaverDef` proto to use instead of running the +builder. This is only useful for specialty code that wants to recreate a +`Saver` object for a previously built `Graph` that had a `Saver`. The +`saver_def` proto should be the one returned by the `as_saver_def()` +call of the `Saver` that was created for that `Graph`. +
+`builder` + +Optional `SaverBuilder` to use if a `saver_def` was not provided. +Defaults to `BulkSaverBuilder()`. +
+`defer_build` + +If `True`, defer adding the save and restore ops to the +`build()` call. In that case `build()` should be called before +finalizing the graph or using the saver. +
+`allow_empty` + +If `False` (default) raise an error if there are no variables +in the graph. Otherwise, construct the saver anyway and make it a no-op. +
+`write_version` + +controls what format to use when saving checkpoints. It +also affects certain filepath matching logic. The V2 format is the +recommended choice: it is much more optimized than V1 in terms of memory +required and latency incurred during restore. Regardless of this +flag, the Saver is able to restore from both V2 and V1 checkpoints. +
+`pad_step_number` + +if True, pads the global step number in the checkpoint +filepaths to some fixed width (8 by default). This is turned off by +default. +
+`save_relative_paths` + +If `True`, will write relative paths to the +checkpoint state file. This is needed if the user wants to copy the +checkpoint directory and reload from the copied directory. +
+`filename` + +If known at graph construction time, filename used for variable +loading/saving. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +If `var_list` is invalid. +
+`ValueError` + +If any of the keys or values in `var_list` are not unique. +
+`RuntimeError` + +If eager execution is enabled and `var_list` does not specify +a list of variables to save. +
+ + + + + + + + + + + + + + +
+`last_checkpoints` + +List of not-yet-deleted checkpoint filenames. + +You can pass any of the returned values to `restore()`. +
+ + + +## Methods + +

as_saver_def

+ +View source + + + +Generates a `SaverDef` representation of this saver. + + + + + + + + + + +
Returns
+A `SaverDef` proto. +
+ + + +

build

+ +View source + + + + + + +

export_meta_graph

+ +View source + + + +Writes `MetaGraphDef` to save_path/filename. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`filename` + +Optional meta_graph filename including the path. +
+`collection_list` + +List of string keys to collect. +
+`as_text` + +If `True`, writes the meta_graph as an ASCII proto. +
+`export_scope` + +Optional `string`. Name scope to remove. +
+`clear_devices` + +Whether or not to clear the device field for an `Operation` +or `Tensor` during export. +
+`clear_extraneous_savers` + +Remove any Saver-related information from the +graph (both Save/Restore ops and SaverDefs) that are not associated with +this Saver. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the NodeDefs. For a detailed guide, see +[Stripping Default-Valued +Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+`save_debug_info` + +If `True`, save the GraphDebugInfo to a separate file, +which is in the same directory as filename, with `_debug` added before +the file extension. +
+ + + + + + + + + + + +
Returns
+A `MetaGraphDef` proto. +
+ + + +

from_proto

+ +View source + + + +Returns a `Saver` object created from `saver_def`. + + + + + + + + + + + + + + +
Args
+`saver_def` + +a `SaverDef` protocol buffer. +
+`import_scope` + +Optional `string`. Name scope to use. +
+ + + + + + + + + + + +
Returns
+A `Saver` built from saver_def. +
+ + + +

recover_last_checkpoints

+ +View source + + + +Recovers the internal saver state after a crash. + +This method is useful for recovering the "self._last_checkpoints" state. + +Globs for the checkpoints pointed to by `checkpoint_paths`. If the files +exist, use their mtime as the checkpoint timestamp. + + + + + + + + + + +
Args
+`checkpoint_paths` + +a list of checkpoint paths. +
+ + + +

restore

+ +View source + + + +Restores previously saved variables. + +This method runs the ops added by the constructor for restoring variables. +It requires a session in which the graph was launched. The variables to +restore do not have to have been initialized, as restoring is itself a way +to initialize variables. + +The `save_path` argument is typically a value previously returned from a +`save()` call, or a call to `latest_checkpoint()`. + + + + + + + + + + + + + +
Args
+`sess` + +A `Session` to use to restore the parameters. None in eager mode. +
+`save_path` + +Path where parameters were previously saved. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If save_path is None or not a valid checkpoint. +
+ + + +
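+
+A minimal restore sketch tying this together with `latest_checkpoint()`; the
+checkpoint directory is a hypothetical path and the variable is an
+illustrative assumption:
+
+```python
+import tensorflow as tf
+
+with tf.Graph().as_default():
+  v = tf.compat.v1.get_variable('v', shape=[3])
+  saver = tf.compat.v1.train.Saver()
+
+  # Hypothetical directory; returns None if no checkpoint is found there.
+  ckpt = tf.compat.v1.train.latest_checkpoint('/tmp/my_ckpts')
+
+  with tf.compat.v1.Session() as sess:
+    if ckpt:
+      # Restoring initializes the variables; no initializer needs to run.
+      saver.restore(sess, ckpt)
+    else:
+      sess.run(tf.compat.v1.global_variables_initializer())
+    print(sess.run(v))
+```
+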

save

+ +View source + + + +Saves variables. + +This method runs the ops added by the constructor for saving variables. +It requires a session in which the graph was launched. The variables to +save must also have been initialized. + +The method returns the path prefix of the newly created checkpoint files. +This string can be passed directly to a call to `restore()`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`sess` + +A Session to use to save the variables. +
+`save_path` + +String. Prefix of filenames created for the checkpoint. +
+`global_step` + +If provided the global step number is appended to `save_path` +to create the checkpoint filenames. The optional argument can be a +`Tensor`, a `Tensor` name or an integer. +
+`latest_filename` + +Optional name for the protocol buffer file that will +contain the list of most recent checkpoints. That file, kept in the +same directory as the checkpoint files, is automatically managed by the +saver to keep track of recent checkpoints. Defaults to 'checkpoint'. +
+`meta_graph_suffix` + +Suffix for `MetaGraphDef` file. Defaults to 'meta'. +
+`write_meta_graph` + +`Boolean` indicating whether or not to write the meta +graph file. +
+`write_state` + +`Boolean` indicating whether or not to write the +`CheckpointStateProto`. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the NodeDefs. For a detailed guide, see +[Stripping Default-Valued +Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+`save_debug_info` + +If `True`, save the GraphDebugInfo to a separate file, +which is in the same directory as save_path, with `_debug` added before +the file extension. This is only enabled when `write_meta_graph` is +`True`. +
+ + + + + + + + + + + +
Returns
+A string: path prefix used for the checkpoint files. If the saver is +sharded, this string ends with: '-?????-of-nnnnn' where 'nnnnn' +is the number of shards created. +If the saver is empty, returns None. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `sess` is not a `Session`. +
+`ValueError` + +If `latest_filename` contains path components, or if it +collides with `save_path`. +
+`RuntimeError` + +If save and restore ops weren't built. +
+ + + +
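+
+Because `save()` returns the checkpoint prefix that `restore()` expects, a
+save/restore round trip can be sketched as follows; the scratch directory and
+the single counter variable are illustrative assumptions:
+
+```python
+import os
+import tensorflow as tf
+
+ckpt_dir = '/tmp/saver_demo'  # hypothetical scratch directory
+os.makedirs(ckpt_dir, exist_ok=True)
+
+with tf.Graph().as_default():
+  step = tf.compat.v1.get_variable(
+      'step', shape=[], dtype=tf.int64,
+      initializer=tf.compat.v1.zeros_initializer())
+  bump = tf.compat.v1.assign_add(step, 1)
+  saver = tf.compat.v1.train.Saver()
+
+  with tf.compat.v1.Session() as sess:
+    sess.run(tf.compat.v1.global_variables_initializer())
+    sess.run(bump)
+    # Returns the prefix, e.g. '/tmp/saver_demo/model-1'.
+    prefix = saver.save(sess, os.path.join(ckpt_dir, 'model'), global_step=1)
+
+  with tf.compat.v1.Session() as sess:
+    saver.restore(sess, prefix)  # No initializer needed after restore.
+    print(sess.run(step))        # 1
+```
+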

set_last_checkpoints

+ +View source + + + +DEPRECATED: Use set_last_checkpoints_with_time. + +Sets the list of old checkpoint filenames. + + + + + + + + + + +
Args
+`last_checkpoints` + +A list of checkpoint filenames. +
+ + + + + + + + + + + + +
Raises
+`AssertionError` + +If last_checkpoints is not a list. +
+ + + +

set_last_checkpoints_with_time

+ +View source + + + +Sets the list of old checkpoint filenames and timestamps. + + + + + + + + + + + +
Args
+`last_checkpoints_with_time` + +A list of tuples of checkpoint filenames and +timestamps. +
+ + + + + + + + + + + + +
Raises
+`AssertionError` + +If last_checkpoints_with_time is not a list. +
+ + + +

to_proto

+ +View source + + + +Converts this `Saver` to a `SaverDef` protocol buffer. + + + + + + + + + + + +
Args
+`export_scope` + +Optional `string`. Name scope to remove. +
+ + + + + + + + + + + +
Returns
+A `SaverDef` protocol buffer. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/SaverDef.md b/site/en/api_docs/python/tf/compat/v1/train/SaverDef.md new file mode 100644 index 00000000000..4e8d7fb6951 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/SaverDef.md @@ -0,0 +1,98 @@ +description: A ProtocolMessage + +
+ + + + + + +
+ +# tf.compat.v1.train.SaverDef + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filename_tensor_name` + +`string filename_tensor_name` +
+`keep_checkpoint_every_n_hours` + +`float keep_checkpoint_every_n_hours` +
+`max_to_keep` + +`int32 max_to_keep` +
+`restore_op_name` + +`string restore_op_name` +
+`save_tensor_name` + +`string save_tensor_name` +
+`sharded` + +`bool sharded` +
+`version` + +`CheckpointFormatVersion version` +
+ + + +## Class Variables + +* `CheckpointFormatVersion` +* `LEGACY = 0` +* `V1 = 1` +* `V2 = 2` diff --git a/site/en/api_docs/python/tf/compat/v1/train/Scaffold.md b/site/en/api_docs/python/tf/compat/v1/train/Scaffold.md new file mode 100644 index 00000000000..79b27a6cdff --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/Scaffold.md @@ -0,0 +1,319 @@ +description: Structure to create or gather pieces commonly needed to train a model. + +
+ + + + + + +
+ +# tf.compat.v1.train.Scaffold + + + + + + + + + +Structure to create or gather pieces commonly needed to train a model. + + + + + + + +When you build a model for training you usually need ops to initialize +variables, a `Saver` to checkpoint them, an op to collect summaries for +the visualizer, and so on. + +Various libraries built on top of the core TensorFlow library take care of +creating some or all of these pieces and storing them in well known +collections in the graph. The `Scaffold` class helps pick these pieces from +the graph collections, creating and adding them to the collections if needed. + +If you call the scaffold constructor without any arguments, it will pick +pieces from the collections, creating default ones if needed when +`scaffold.finalize()` is called. You can pass arguments to the constructor to +provide your own pieces. Pieces that you pass to the constructor are not +added to the graph collections. + +The following pieces are directly accessible as attributes of the `Scaffold` +object: + +* `saver`: A tf.compat.v1.train.Saver object taking care of saving the +variables. + Picked from and stored into the `SAVERS` collection in the graph by default. +* `init_op`: An op to run to initialize the variables. Picked from and + stored into the `INIT_OP` collection in the graph by default. +* `ready_op`: An op to verify that the variables are initialized. Picked + from and stored into the `READY_OP` collection in the graph by default. +* `ready_for_local_init_op`: An op to verify that global state has been + initialized and it is alright to run `local_init_op`. Picked from and + stored into the `READY_FOR_LOCAL_INIT_OP` collection in the graph by + default. This is needed when the initialization of local variables depends + on the values of global variables. +* `local_init_op`: An op to initialize the local variables. Picked + from and stored into the `LOCAL_INIT_OP` collection in the graph by default. +* `summary_op`: An op to run and merge the summaries in the graph. Picked + from and stored into the `SUMMARY_OP` collection in the graph by default. + +You can also pass the following additional pieces to the constructor: + +* `init_feed_dict`: A session feed dictionary that should be used when + running the init op. +* `init_fn`: A callable to run after the init op to perform additional + initializations. The callable will be called as + `init_fn(scaffold, session)`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`init_op` + +Optional op for initializing variables. +
+`init_feed_dict` + +Optional session feed dictionary to use when running the +init_op. +
+`init_fn` + +Optional function to use to initialize the model after running +the init_op. Will be called as `init_fn(scaffold, session)`. +
+`ready_op` + +Optional op to verify that the variables are initialized. Must +return an empty 1D string tensor when the variables are initialized, or +a non-empty 1D string tensor listing the names of the non-initialized +variables. +
+`ready_for_local_init_op` + +Optional op to verify that the global variables +are initialized and `local_init_op` can be run. Must return an empty 1D +string tensor when the global variables are initialized, or a non-empty +1D string tensor listing the names of the non-initialized global +variables. +
+`local_init_op` + +Optional op to initialize local variables. +
+`summary_op` + +Optional op to gather all summaries. Must return a scalar +string tensor containing a serialized `Summary` proto. +
+`saver` + +Optional tf.compat.v1.train.Saver object to use to save and +restore variables. May also be a tf.train.Checkpoint object, in which +case object-based checkpoints are saved. This will also load some +object-based checkpoints saved from elsewhere, but that loading may be +fragile since it uses fixed keys rather than performing a full +graph-based match. For example if a variable has two paths from the +`Checkpoint` object because two `Model` objects share the `Layer` object +that owns it, removing one `Model` may change the keys and break +checkpoint loading through this API, whereas a graph-based match would +match the variable through the other `Model`. +
+`copy_from_scaffold` + +Optional scaffold object to copy fields from. Its +fields will be overwritten by the provided fields in this function. +
+`local_init_feed_dict` + +Optional session feed dictionary to use when running +the local_init_op. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`init_feed_dict` + + +
+`init_fn` + + +
+`init_op` + + +
+`local_init_feed_dict` + + +
+`local_init_op` + + +
+`ready_for_local_init_op` + + +
+`ready_op` + + +
+`saver` + + +
+`summary_op` + + +
+ + + +## Methods + +

default_local_init_op

+ +View source + + + +Returns an op that groups the default local init ops. + +This op is used during session initialization when a Scaffold is +initialized without specifying the local_init_op arg. It includes +tf.compat.v1.local_variables_initializer, +tf.compat.v1.tables_initializer, and also +initializes local session resources. + + + + + + + + + +
Returns
+The default Scaffold local init op. +
+ + + +
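+
+As a rough sketch of how these pieces are typically consumed, a partial
+`Scaffold` (only `init_fn` here) can be handed to
+tf.compat.v1.train.MonitoredTrainingSession; everything left unspecified,
+including the local init op above, is picked from the graph collections or
+created with defaults when the scaffold is finalized. The counter variable is
+an illustrative assumption:
+
+```python
+import tensorflow as tf
+
+with tf.Graph().as_default():
+  counter = tf.compat.v1.get_variable(
+      'counter', shape=[], dtype=tf.int64,
+      initializer=tf.compat.v1.zeros_initializer())
+  increment = tf.compat.v1.assign_add(counter, 1)
+
+  def extra_init(scaffold, session):
+    # Runs once, after the default init_op.
+    print('initialized, counter =', session.run(counter))
+
+  scaffold = tf.compat.v1.train.Scaffold(init_fn=extra_init)
+
+  with tf.compat.v1.train.MonitoredTrainingSession(scaffold=scaffold) as sess:
+    for _ in range(3):
+      sess.run(increment)
+```
+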

finalize

+ +View source + + + +Creates operations if needed and finalizes the graph. + + +

get_or_default

+ +View source + + + +Get from cache or create a default operation. + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/SessionCreator.md b/site/en/api_docs/python/tf/compat/v1/train/SessionCreator.md new file mode 100644 index 00000000000..85446f89737 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/SessionCreator.md @@ -0,0 +1,44 @@ +description: A factory for tf.Session. + +
+ + + +
+ +# tf.compat.v1.train.SessionCreator + + + + + + + + + +A factory for tf.Session. + + + + +## Methods + +

create_session

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/SessionManager.md b/site/en/api_docs/python/tf/compat/v1/train/SessionManager.md new file mode 100644 index 00000000000..3415b39356d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/SessionManager.md @@ -0,0 +1,516 @@ +description: Training helper that restores from checkpoint and creates session. + +
+ + + + + + +
+ +# tf.compat.v1.train.SessionManager + + + + + + + + +
+Training helper that restores from checkpoint and creates session.
+ + + + + + +
+This class is a small wrapper that takes care of session creation and
+checkpoint recovery. It also provides functions that facilitate
+coordination among multiple training threads or processes.
+
+* Checkpointing trained variables as the training progresses.
+* Initializing variables on startup, restoring them from the most recent
+  checkpoint after a crash, or waiting for checkpoints to become available.
+
+### Usage:
+
+```python
+with tf.Graph().as_default():
+  ...add operations to the graph...
+  # Create a SessionManager that will checkpoint the model in '/tmp/mydir'.
+  sm = SessionManager()
+  sess = sm.prepare_session(master, init_op, saver, checkpoint_dir)
+  # Use the session to train the graph.
+  while True:
+    sess.run()
+```
+
+`prepare_session()` initializes or restores a model. It requires `init_op`
+and `saver` as arguments.
+
+A second process could wait for the model to be ready by doing the following:
+
+```python
+with tf.Graph().as_default():
+  ...add operations to the graph...
+  # Create a SessionManager that will wait for the model to become ready.
+  sm = SessionManager()
+  sess = sm.wait_for_session(master)
+  # Use the session to train the graph.
+  while True:
+    sess.run()
+```
+
+`wait_for_session()` waits for a model to be initialized by other processes.
+ + + + + + + + + + + + + + + + + + + + + + + + + +
+`local_init_op` + +An `Operation` run immediately after session creation. +Usually used to initialize tables and local variables. +
+`ready_op` + +An `Operation` to check if the model is initialized. +
+`ready_for_local_init_op` + +An `Operation` to check if the model is ready +to run local_init_op. +
+`graph` + +The `Graph` that the model will use. +
+`recovery_wait_secs` + +Seconds between checks for the model to be ready. +
+`local_init_run_options` + +RunOptions to be passed to session.run when +executing the local_init_op. +
+`local_init_feed_dict` + +Optional session feed dictionary to use when running +the local_init_op. +
+ + + + + + + + + + + + +
+`ValueError` + +If ready_for_local_init_op is not None but local_init_op is +None. +
+ + + +## Methods + +

prepare_session

+ +View source + + + +Creates a `Session`. Makes sure the model is ready to be used. + +Creates a `Session` on 'master'. If a `saver` object is passed in, and +`checkpoint_dir` points to a directory containing valid checkpoint +files, then it will try to recover the model from checkpoint. If +no checkpoint files are available, and `wait_for_checkpoint` is +`True`, then the process would check every `recovery_wait_secs`, +up to `max_wait_secs`, for recovery to succeed. + +If the model cannot be recovered successfully then it is initialized by +running the `init_op` and calling `init_fn` if they are provided. +The `local_init_op` is also run after init_op and init_fn, regardless of +whether the model was recovered successfully, but only if +`ready_for_local_init_op` passes. + +If the model is recovered from a checkpoint it is assumed that all +global variables have been initialized, in particular neither `init_op` +nor `init_fn` will be executed. + +It is an error if the model cannot be recovered and no `init_op` +or `init_fn` or `local_init_op` are passed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`master` + +`String` representation of the TensorFlow master to use. +
+`init_op` + +Optional `Operation` used to initialize the model. +
+`saver` + +A `Saver` object used to restore a model. +
+`checkpoint_dir` + +Path to the checkpoint files. The latest checkpoint in the +dir will be used to restore. +
+`checkpoint_filename_with_path` + +Full file name path to the checkpoint file. +
+`wait_for_checkpoint` + +Whether to wait for checkpoint to become available. +
+`max_wait_secs` + +Maximum time to wait for checkpoints to become available. +
+`config` + +Optional `ConfigProto` proto used to configure the session. +
+`init_feed_dict` + +Optional dictionary that maps `Tensor` objects to feed +values. This feed dictionary is passed to the session `run()` call when +running the init op. +
+`init_fn` + +Optional callable used to initialize the model. Called after the +optional `init_op` is called. The callable must accept one argument, +the session being initialized. +
+ + + + + + + + + + + +
Returns
+A `Session` object that can be used to drive the model. +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If the model cannot be initialized or recovered. +
+`ValueError` + +If both checkpoint_dir and checkpoint_filename_with_path are +set. +
+ + + +

recover_session

+ +View source + + + +Creates a `Session`, recovering if possible. + +Creates a new session on 'master'. If the session is not initialized +and can be recovered from a checkpoint, recover it. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`master` + +`String` representation of the TensorFlow master to use. +
+`saver` + +A `Saver` object used to restore a model. +
+`checkpoint_dir` + +Path to the checkpoint files. The latest checkpoint in the +dir will be used to restore. +
+`checkpoint_filename_with_path` + +Full file name path to the checkpoint file. +
+`wait_for_checkpoint` + +Whether to wait for checkpoint to become available. +
+`max_wait_secs` + +Maximum time to wait for checkpoints to become available. +
+`config` + +Optional `ConfigProto` proto used to configure the session. +
+ + + + + + + + + + + +
Returns
+A pair (sess, initialized) where 'initialized' is `True` if +the session could be recovered and initialized, `False` otherwise. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If both checkpoint_dir and checkpoint_filename_with_path are +set. +
+ + + +
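+
+A hedged sketch of the recover-or-initialize pattern built on
+`recover_session()`; the empty master string requests an in-process session
+and the checkpoint directory is a hypothetical path:
+
+```python
+import tensorflow as tf
+
+with tf.Graph().as_default():
+  v = tf.compat.v1.get_variable('v', shape=[])
+  init_op = tf.compat.v1.global_variables_initializer()
+  saver = tf.compat.v1.train.Saver()
+
+  sm = tf.compat.v1.train.SessionManager()
+  sess, initialized = sm.recover_session(
+      '', saver=saver, checkpoint_dir='/tmp/my_ckpts')  # hypothetical dir
+  if not initialized:
+    # No usable checkpoint was found; fall back to initializing from scratch.
+    sess.run(init_op)
+```
+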

wait_for_session

+ +View source + + + +Creates a new `Session` and waits for model to be ready. + +Creates a new `Session` on 'master'. Waits for the model to be +initialized or recovered from a checkpoint. It's expected that +another thread or process will make the model ready, and that this +is intended to be used by threads/processes that participate in a +distributed training configuration where a different thread/process +is responsible for initializing or recovering the model being trained. + +NB: The amount of time this method waits for the session is bounded +by max_wait_secs. By default, this function will wait indefinitely. + + + + + + + + + + + + + + + + +
Args
+`master` + +`String` representation of the TensorFlow master to use. +
+`config` + +Optional ConfigProto proto used to configure the session. +
+`max_wait_secs` + +Maximum time to wait for the session to become available. +
+ + + + + + + + + + + +
Returns
+A `Session`. May be None if the operation exceeds the timeout +specified by config.operation_timeout_in_ms. +
+ + + + + + + + + + + + +
Raises
+`tf.errors.DeadlineExceededError` + +If the session is not available after +max_wait_secs. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/SingularMonitoredSession.md b/site/en/api_docs/python/tf/compat/v1/train/SingularMonitoredSession.md new file mode 100644 index 00000000000..54721d47b7b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/SingularMonitoredSession.md @@ -0,0 +1,401 @@ +description: Session-like object that handles initialization, restoring, and hooks. + +
+ + + + + + + + + + + +
+ +# tf.compat.v1.train.SingularMonitoredSession + + + + + + + + + +Session-like object that handles initialization, restoring, and hooks. + + + + + + + +Please note that this utility is not recommended for distributed settings. +For distributed settings, please use tf.compat.v1.train.MonitoredSession. +The +differences between `MonitoredSession` and `SingularMonitoredSession` are: + +* `MonitoredSession` handles `AbortedError` and `UnavailableError` for + distributed settings, but `SingularMonitoredSession` does not. +* `MonitoredSession` can be created in `chief` or `worker` modes. + `SingularMonitoredSession` is always created as `chief`. +* You can access the raw tf.compat.v1.Session object used by + `SingularMonitoredSession`, whereas in MonitoredSession the raw session is + private. This can be used: + - To `run` without hooks. + - To save and restore. +* All other functionality is identical. + +#### Example usage: + + +```python +saver_hook = CheckpointSaverHook(...) +summary_hook = SummarySaverHook(...) +with SingularMonitoredSession(hooks=[saver_hook, summary_hook]) as sess: + while not sess.should_stop(): + sess.run(train_op) +``` + +Initialization: At creation time the hooked session does following things +in given order: + +* calls `hook.begin()` for each given hook +* finalizes the graph via `scaffold.finalize()` +* create session +* initializes the model via initialization ops provided by `Scaffold` +* restores variables if a checkpoint exists +* launches queue runners + +Run: When `run()` is called, the hooked session does following things: + +* calls `hook.before_run()` +* calls TensorFlow `session.run()` with merged fetches and feed_dict +* calls `hook.after_run()` +* returns result of `session.run()` asked by user + +Exit: At the `close()`, the hooked session does following things in order: + +* calls `hook.end()` +* closes the queue runners and the session +* suppresses `OutOfRange` error which indicates that all inputs have been + processed if the `SingularMonitoredSession` is used as a context. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`hooks` + +An iterable of `SessionRunHook` objects. +
+`scaffold` + +A `Scaffold` used for gathering or building supportive ops. If +not specified a default one is created. It's used to finalize the graph. +
+`master` + +`String` representation of the TensorFlow master to use. +
+`config` + +`ConfigProto` proto used to configure the session. +
+`checkpoint_dir` + +A string. Optional path to a directory where to restore +variables. +
+`stop_grace_period_secs` + +Number of seconds given to threads to stop after +`close()` has been called. +
+`checkpoint_filename_with_path` + +A string. Optional path to a checkpoint +file from which to restore variables. +
+ + + + + + + + + + + + + + +
+`graph` + +The graph that was launched in this session. +
+ + + +## Child Classes +[`class StepContext`](../../../../tf/compat/v1/train/MonitoredSession/StepContext.md) + +## Methods + +

close

+ +View source + + + + + + +

raw_session

+ +View source + + + +Returns underlying `TensorFlow.Session` object. + + +
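+
+Because the raw session is exposed, a `Saver` can be driven directly and the
+hooks are bypassed; a minimal sketch in which the variable, the saver, and
+the checkpoint path are illustrative assumptions:
+
+```python
+import tensorflow as tf
+
+with tf.Graph().as_default():
+  v = tf.compat.v1.get_variable(
+      'v', shape=[], initializer=tf.compat.v1.zeros_initializer())
+  update = tf.compat.v1.assign_add(v, 1.0)
+  saver = tf.compat.v1.train.Saver()  # Built before the graph is finalized.
+
+  with tf.compat.v1.train.SingularMonitoredSession() as sess:
+    sess.run(update)
+    # Save through the underlying tf.compat.v1.Session so hooks do not run.
+    saver.save(sess.raw_session(), '/tmp/singular_demo')  # hypothetical path
+```
+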

run

+ +View source + + + +Run ops in the monitored session. + +This method is completely compatible with the `tf.Session.run()` method. + + + + + + + + + + + + + + + + + + + +
Args
+`fetches` + +Same as `tf.Session.run()`. +
+`feed_dict` + +Same as `tf.Session.run()`. +
+`options` + +Same as `tf.Session.run()`. +
+`run_metadata` + +Same as `tf.Session.run()`. +
+ + + + + + + + + + + +
Returns
+Same as `tf.Session.run()`. +
+ + + +

run_step_fn

+ +View source + + + +Run ops using a step function. + + + + + + + + + + + +
Args
+`step_fn`
+
+A function or a method with a single argument of type
+`StepContext`. The function may use methods of the argument to perform
+computations with access to a raw session. The returned value of the
+`step_fn` will be returned from `run_step_fn`, unless a stop is
+requested. In that case, the next `should_stop` call will return True.
+Example usage:
+```python
+with tf.Graph().as_default():
+  c = tf.compat.v1.placeholder(dtypes.float32)
+  v = tf.add(c, 4.0)
+  w = tf.add(c, 0.5)
+  def step_fn(step_context):
+    a = step_context.session.run(fetches=v, feed_dict={c: 0.5})
+    if a <= 4.5:
+      step_context.request_stop()
+    return step_context.run_with_hooks(fetches=w,
+                                       feed_dict={c: 0.1})
+
+  with tf.MonitoredSession() as session:
+    while not session.should_stop():
+      a = session.run_step_fn(step_fn)
+```
+Hooks interact with the `run_with_hooks()` call inside the
+`step_fn` as they do with a `MonitoredSession.run` call.
+
+ + + + + + + + + + + +
Returns
+Returns the returned value of `step_fn`. +
+ + + + + + + + + + + + + + + +
Raises
+`StopIteration` + +if `step_fn` has called `request_stop()`. It may be +caught by `with tf.MonitoredSession()` to close the session. +
+`ValueError` + +if `step_fn` doesn't have a single argument called +`step_context`. It may also optionally have `self` for cases when it +belongs to an object. +
+ + + +

should_stop

+ +View source + + + + + + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/Supervisor.md b/site/en/api_docs/python/tf/compat/v1/train/Supervisor.md new file mode 100644 index 00000000000..8cae2271329 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/Supervisor.md @@ -0,0 +1,1717 @@ +description: A training helper that checkpoints models and computes summaries. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.train.Supervisor + + + + + + + + + +A training helper that checkpoints models and computes summaries. + + + + + + + +This class is deprecated. Please use +tf.compat.v1.train.MonitoredTrainingSession instead. + +The Supervisor is a small wrapper around a `Coordinator`, a `Saver`, +and a `SessionManager` that takes care of common needs of TensorFlow +training programs. + +#### Use for a single program + +```python +with tf.Graph().as_default(): + ...add operations to the graph... + # Create a Supervisor that will checkpoint the model in '/tmp/mydir'. + sv = Supervisor(logdir='/tmp/mydir') + # Get a TensorFlow session managed by the supervisor. + with sv.managed_session(FLAGS.master) as sess: + # Use the session to train the graph. + while not sv.should_stop(): + sess.run() +``` + +Within the `with sv.managed_session()` block all variables in the graph have +been initialized. In addition, a few services have been started to +checkpoint the model and add summaries to the event log. + +If the program crashes and is restarted, the managed session automatically +reinitialize variables from the most recent checkpoint. + +The supervisor is notified of any exception raised by one of the services. +After an exception is raised, `should_stop()` returns `True`. In that case +the training loop should also stop. This is why the training loop has to +check for `sv.should_stop()`. + +Exceptions that indicate that the training inputs have been exhausted, +tf.errors.OutOfRangeError, also cause `sv.should_stop()` to return `True` +but are not re-raised from the `with` block: they indicate a normal +termination. + +#### Use for multiple replicas + +To train with replicas you deploy the same program in a `Cluster`. +One of the tasks must be identified as the *chief*: the task that handles +initialization, checkpoints, summaries, and recovery. The other tasks +depend on the *chief* for these services. + +The only change you have to do to the single program code is to indicate +if the program is running as the *chief*. + +```python +# Choose a task as the chief. This could be based on server_def.task_index, +# or job_def.name, or job_def.tasks. It's entirely up to the end user. +# But there can be only one *chief*. +is_chief = (server_def.task_index == 0) +server = tf.distribute.Server(server_def) + +with tf.Graph().as_default(): + ...add operations to the graph... + # Create a Supervisor that uses log directory on a shared file system. + # Indicate if you are the 'chief' + sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief) + # Get a Session in a TensorFlow server on the cluster. + with sv.managed_session(server.target) as sess: + # Use the session to train the graph. + while not sv.should_stop(): + sess.run() +``` + +In the *chief* task, the `Supervisor` works exactly as in the first example +above. In the other tasks `sv.managed_session()` waits for the Model to have +been initialized before returning a session to the training code. The +non-chief tasks depend on the chief task for initializing the model. + +If one of the tasks crashes and restarts, `managed_session()` +checks if the Model is initialized. If yes, it just creates a session and +returns it to the training code that proceeds normally. If the model needs +to be initialized, the chief task takes care of reinitializing it; the other +tasks just wait for the model to have been initialized. + +NOTE: This modified program still works fine as a single program. +The single program marks itself as the chief. 
+ +#### What `master` string to use + +Whether you are running on your machine or in the cluster you can use the +following values for the --master flag: + +* Specifying `''` requests an in-process session that does not use RPC. + +* Specifying `'local'` requests a session that uses the RPC-based + "Master interface" to run TensorFlow programs. See + `tf.train.Server.create_local_server` for + details. + +* Specifying `'grpc://hostname:port'` requests a session that uses + the RPC interface to a specific host, and also allows the in-process + master to access remote tensorflow workers. Often, it is + appropriate to pass `server.target` (for some tf.distribute.Server + named `server). + +#### Advanced use + +##### Launching additional services + +`managed_session()` launches the Checkpoint and Summary services (threads). +If you need more services to run you can simply launch them in the block +controlled by `managed_session()`. + +Example: Start a thread to print losses. We want this thread to run +every 60 seconds, so we launch it with `sv.loop()`. + +```python +... +sv = Supervisor(logdir='/tmp/mydir') +with sv.managed_session(FLAGS.master) as sess: + sv.loop(60, print_loss, (sess, )) + while not sv.should_stop(): + sess.run(my_train_op) +``` + +##### Launching fewer services + +`managed_session()` launches the "summary" and "checkpoint" threads which use +either the optionally `summary_op` and `saver` passed to the constructor, or +default ones created automatically by the supervisor. If you want to run +your own summary and checkpointing logic, disable these services by passing +`None` to the `summary_op` and `saver` parameters. + +Example: Create summaries manually every 100 steps in the chief. + +```python +# Create a Supervisor with no automatic summaries. +sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None) +# As summary_op was None, managed_session() does not start the +# summary thread. +with sv.managed_session(FLAGS.master) as sess: + for step in xrange(1000000): + if sv.should_stop(): + break + if is_chief and step % 100 == 0: + # Create the summary every 100 chief steps. + sv.summary_computed(sess, sess.run(my_summary_op)) + else: + # Train normally + sess.run(my_train_op) +``` + +##### Custom model initialization + +`managed_session()` only supports initializing the model by running an +`init_op` or restoring from the latest checkpoint. If you have special +initialization needs, see how to specify a `local_init_op` when creating the +supervisor. You can also use the `SessionManager` directly to create a +session and check if it could be initialized automatically. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`graph` + +A `Graph`. The graph that the model will use. Defaults to the +default `Graph`. The supervisor may add operations to the graph before +creating a session, but the graph should not be modified by the caller +after passing it to the supervisor. +
+`ready_op` + +1-D string `Tensor`. This tensor is evaluated by supervisors in +`prepare_or_wait_for_session()` to check if the model is ready to use. +The model is considered ready if it returns an empty array. Defaults to +the tensor returned from tf.compat.v1.report_uninitialized_variables() +If `None`, the model is not checked for readiness. +
+`ready_for_local_init_op` + +1-D string `Tensor`. This tensor is evaluated by +supervisors in `prepare_or_wait_for_session()` to check if the model is +ready to run the local_init_op. The model is considered ready if it +returns an empty array. Defaults to `None`. If `None`, the model is not +checked for readiness before running local_init_op. +
+`is_chief` + +If True, create a chief supervisor in charge of initializing and +restoring the model. If False, create a supervisor that relies on a +chief supervisor for inits and restore. +
+`init_op` + +`Operation`. Used by chief supervisors to initialize the model +when it can not be recovered. Defaults to an `Operation` that +initializes all global variables. If `None`, no initialization is done +automatically unless you pass a value for `init_fn`, see below. +
+`init_feed_dict` + +A dictionary that maps `Tensor` objects to feed values. +This feed dictionary will be used when `init_op` is evaluated. +
+`local_init_op` + +`Operation`. Used by all supervisors to run initializations +that should run for every new supervisor instance. By default these are +table initializers and initializers for local variables. If `None`, no +further per supervisor-instance initialization is done automatically. +
+`logdir` + +A string. Optional path to a directory where to checkpoint the +model and log events for the visualizer. Used by chief supervisors. The +directory will be created if it does not exist. +
+`summary_op` + +An `Operation` that returns a Summary for the event logs. Used +by chief supervisors if a `logdir` was specified. Defaults to the +operation returned from summary.merge_all(). If `None`, summaries are +not computed automatically. +
+`saver` + +A Saver object. Used by chief supervisors if a `logdir` was +specified. Defaults to the saver returned by Saver(). If `None`, the +model is not saved automatically. +
+`global_step` + +An integer Tensor of size 1 that counts steps. The value +from 'global_step' is used in summaries and checkpoint filenames. +Defaults to the op named 'global_step' in the graph if it exists, is of +rank 1, size 1, and of type tf.int32 or tf.int64. If `None` the global +step is not recorded in summaries and checkpoint files. Used by chief +supervisors if a `logdir` was specified. +
+`save_summaries_secs` + +Number of seconds between the computation of +summaries for the event log. Defaults to 120 seconds. Pass 0 to +disable summaries. +
+`save_model_secs` + +Number of seconds between the creation of model +checkpoints. Defaults to 600 seconds. Pass 0 to disable checkpoints. +
+`recovery_wait_secs` + +Number of seconds between checks that the model is +ready. Used by supervisors when waiting for a chief supervisor to +initialize or restore the model. Defaults to 30 seconds. +
+`stop_grace_secs` + +Grace period, in seconds, given to running threads to +stop when `stop()` is called. Defaults to 120 seconds. +
+`checkpoint_basename` + +The basename for checkpoint saving. +
+`session_manager` + +`SessionManager`, which manages Session creation and +recovery. If it is `None`, a default `SessionManager` will be created +with the set of arguments passed in for backwards compatibility. +
+`summary_writer` + +`SummaryWriter` to use or `USE_DEFAULT`. Can be `None` to +indicate that no summaries should be written. +
+`init_fn` + +Optional callable used to initialize the model. Called after the +optional `init_op` is called. The callable must accept one argument, +the session being initialized. +
+`local_init_run_options` + +RunOptions to be passed as the SessionManager +local_init_run_options parameter. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If called with eager execution enabled. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`coord` + +Return the Coordinator used by the Supervisor. + +The Coordinator can be useful if you want to run multiple threads +during your training. +
+`global_step` + +Return the global_step Tensor used by the supervisor. +
+`init_feed_dict` + +Return the feed dictionary used when evaluating the `init_op`. +
+`init_op` + +Return the Init Op used by the supervisor. +
+`is_chief` + +Return True if this is a chief supervisor. +
+`ready_for_local_init_op` + + +
+`ready_op` + +Return the Ready Op used by the supervisor. +
+`save_model_secs` + +Return the delay between checkpoints. +
+`save_path` + +Return the save path used by the supervisor. +
+`save_summaries_secs` + +Return the delay between summary computations. +
+`saver` + +Return the Saver used by the supervisor. +
+`session_manager` + +Return the SessionManager used by the Supervisor. +
+`summary_op` + +Return the Summary Tensor used by the chief supervisor. +
+`summary_writer` + +Return the SummaryWriter used by the chief supervisor. +
+ + + +## Methods + +

Loop

+ +View source + + + +Start a LooperThread that calls a function periodically. + +If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)` +repeatedly. Otherwise it calls it every `timer_interval_secs` +seconds. The thread terminates when a stop is requested. + +The started thread is added to the list of threads managed by the supervisor +so it does not need to be passed to the `stop()` method. + + + + + + + + + + + + + + + + + + + +
Args
+`timer_interval_secs` + +Number. Time boundaries at which to call `target`. +
+`target` + +A callable object. +
+`args` + +Optional arguments to pass to `target` when calling it. +
+`kwargs` + +Optional keyword arguments to pass to `target` when calling it. +
+ + + + + + + + + + + +
Returns
+The started thread. +
+ + + +

PrepareSession

+ +View source + + + +Make sure the model is ready to be used. + +Create a session on 'master', recovering or initializing the model as +needed, or wait for a session to be ready. If running as the chief +and `start_standard_service` is set to True, also call the session +manager to start the standard services. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`master` + +name of the TensorFlow master to use. See the +tf.compat.v1.Session constructor for how this is interpreted. +
+`config` + +Optional ConfigProto proto used to configure the session, which is +passed as-is to create the session. +
+`wait_for_checkpoint` + +Whether we should wait for the availability of a +checkpoint before creating Session. Defaults to False. +
+`max_wait_secs` + +Maximum time to wait for the session to become available. +
+`start_standard_services` + +Whether to start the standard services and the +queue runners. +
+ + + + + + + + + + + +
Returns
+A Session object that can be used to drive the model. +
+ + + +

RequestStop

+ +View source + + + +Request that the coordinator stop the threads. + +See `Coordinator.request_stop()`. + + + + + + + + + + +
Args
+`ex` + +Optional `Exception`, or Python `exc_info` tuple as returned by +`sys.exc_info()`. If this is the first call to `request_stop()` the +corresponding exception is recorded and re-raised from `join()`. +
+ + + +

ShouldStop

+ +View source + + + +Check if the coordinator was told to stop. + +See `Coordinator.should_stop()`. + + + + + + + + + +
Returns
+True if the coordinator was told to stop, False otherwise. +
+ + + +

StartQueueRunners

+ +View source + + + +Start threads for `QueueRunners`. + +Note that the queue runners collected in the graph key `QUEUE_RUNNERS` +are already started automatically when you create a session with the +supervisor, so unless you have non-collected queue runners to start +you do not need to call this explicitly. + + + + + + + + + + + + + +
Args
+`sess` + +A `Session`. +
+`queue_runners` + +A list of `QueueRunners`. If not specified, we'll use the +list of queue runners gathered in the graph under the key +`GraphKeys.QUEUE_RUNNERS`. +
+ + + + + + + + + + + +
Returns
+The list of threads started for the `QueueRunners`. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If called with eager execution enabled. +
+ + + + +#### Eager Compatibility +Queues are not compatible with eager execution. To ingest data when eager +execution is enabled, use the tf.data API. + + + +

StartStandardServices

+ +View source + + + +Start the standard services for 'sess'. + +This starts services in the background. The services started depend +on the parameters to the constructor and may include: + + - A Summary thread computing summaries every save_summaries_secs. + - A Checkpoint thread saving the model every save_model_secs. + - A StepCounter thread measuring step time. + + + + + + + + + +
Args
+`sess` + +A Session. +
+ + + + + + + + + + + +
Returns
+A list of threads that are running the standard services. You can use +the Supervisor's Coordinator to join these threads with: +sv.coord.join() +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If called with a non-chief Supervisor. +
+`ValueError` + +If no `logdir` was passed to the constructor, as the +services need a log directory. +
+ + + +

Stop

+ +View source + + + +Stop the services and the coordinator. + +This does not close the session. + + + + + + + + + + + + + + + + +
Args
+`threads` + +Optional list of threads to join with the coordinator. If +`None`, defaults to the threads running the standard services, the +threads started for `QueueRunners`, and the threads started by the +`loop()` method. To wait on additional threads, pass the list in this +parameter. +
+`close_summary_writer` + +Whether to close the `summary_writer`. Defaults to +`True` if the summary writer was created by the supervisor, `False` +otherwise. +
+`ignore_live_threads` + +If `True` ignores threads that remain running after a +grace period when joining threads via the coordinator, instead of +raising a RuntimeError. +
+ + + +

StopOnException

+ +View source + + + +Context handler to stop the supervisor when an exception is raised. + +See `Coordinator.stop_on_exception()`. + + + + + + + + + +
Returns
+A context handler. +
+ + + +

SummaryComputed

+ +View source + + + +Indicate that a summary was computed. + + + + + + + + + + + + + + + + + +
Args
+`sess` + +A `Session` object. +
+`summary` + +A Summary proto, or a string holding a serialized summary proto. +
+`global_step` + +Int. The global step this summary is associated with. If `None`, +it will try to fetch the current step. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if 'summary' is not a Summary proto or a string. +
+`RuntimeError` + +if the Supervisor was created without a `logdir`. +
+ + + +

WaitForStop

+ +View source + + + +Block waiting for the coordinator to stop. + + +

loop

+ +View source + + + +Start a LooperThread that calls a function periodically. + +If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)` +repeatedly. Otherwise it calls it every `timer_interval_secs` +seconds. The thread terminates when a stop is requested. + +The started thread is added to the list of threads managed by the supervisor +so it does not need to be passed to the `stop()` method. + + + + + + + + + + + + + + + + + + + +
Args
+`timer_interval_secs` + +Number. Time boundaries at which to call `target`. +
+`target` + +A callable object. +
+`args` + +Optional arguments to pass to `target` when calling it. +
+`kwargs` + +Optional keyword arguments to pass to `target` when calling it. +
+ + + + + + + + + + + +
Returns
+The started thread. +
+ + + +
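+As a brief, hedged sketch (not generated from the source; `sv` is an existing
+`Supervisor`, `sess` a session it manages, and `global_step` a placeholder tensor),
+`loop` can run a reporting function in the background:
+
+```python
+# Hypothetical sketch: report the global step roughly every 60 seconds.
+def report_progress(sess):
+  print("global step:", sess.run(global_step))  # `global_step` is a placeholder
+
+# The returned LooperThread is managed by the supervisor, so stop() will join it.
+looper = sv.loop(60, report_progress, args=(sess,))
+```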

managed_session

+ +View source + + + +Returns a context manager for a managed session. + +This context manager creates and automatically recovers a session. It +optionally starts the standard services that handle checkpoints and +summaries. It monitors exceptions raised from the `with` block or from the +services and stops the supervisor as needed. + +The context manager is typically used as follows: + +```python +def train(): + sv = tf.compat.v1.train.Supervisor(...) + with sv.managed_session() as sess: + for step in xrange(..): + if sv.should_stop(): + break + sess.run() + ...do other things needed at each training step... +``` + +An exception raised from the `with` block or one of the service threads is +raised again when the block exits. This is done after stopping all threads +and closing the session. For example, an `AbortedError` exception, raised +in case of preemption of one of the workers in a distributed model, is +raised again when the block exits. + +If you want to retry the training loop in case of preemption you can do it +as follows: + +```python +def main(...): + while True + try: + train() + except tf.errors.Aborted: + pass +``` + +As a special case, exceptions used for control flow, such as +`OutOfRangeError` which reports that input queues are exhausted, are not +raised again from the `with` block: they indicate a clean termination of +the training loop and are considered normal termination. + + + + + + + + + + + + + + + + + + + +
Args
+`master` + +name of the TensorFlow master to use. See the +tf.compat.v1.Session constructor for how this is interpreted. +
+`config` + +Optional `ConfigProto` proto used to configure the session. Passed +as-is to create the session. +
+`start_standard_services` + +Whether to start the standard services, such as +checkpoint, summary and step counter. +
+`close_summary_writer` + +Whether to close the summary writer when closing the +session. Defaults to True. +
+ + + + + + + + + + + +
Returns
+A context manager that yields a `Session` restored from the latest +checkpoint or initialized from scratch if no checkpoint exists. The +session is closed when the `with` block exits. +
+ + + +

prepare_or_wait_for_session

+ +View source + + + +Make sure the model is ready to be used. + +Create a session on 'master', recovering or initializing the model as +needed, or wait for a session to be ready. If running as the chief +and `start_standard_service` is set to True, also call the session +manager to start the standard services. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`master` + +name of the TensorFlow master to use. See the +tf.compat.v1.Session constructor for how this is interpreted. +
+`config` + +Optional ConfigProto proto used to configure the session, which is +passed as-is to create the session. +
+`wait_for_checkpoint` + +Whether we should wait for the availability of a +checkpoint before creating Session. Defaults to False. +
+`max_wait_secs` + +Maximum time to wait for the session to become available. +
+`start_standard_services` + +Whether to start the standard services and the +queue runners. +
+ + + + + + + + + + + +
Returns
+A Session object that can be used to drive the model. +
+ + + +
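+A hedged usage sketch (the log directory and config below are placeholders;
+graph-mode `tf.compat.v1` code is assumed):
+
+```python
+# Hypothetical sketch: obtain a ready-to-use session without managed_session().
+sv = tf.compat.v1.train.Supervisor(logdir="/tmp/train_logs")
+sess = sv.prepare_or_wait_for_session(
+    master="", config=tf.compat.v1.ConfigProto(allow_soft_placement=True))
+# ... run training ops with `sess`, then shut down:
+sv.stop()
+sess.close()
+```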

request_stop

+ +View source + + + +Request that the coordinator stop the threads. + +See `Coordinator.request_stop()`. + + + + + + + + + + +
Args
+`ex` + +Optional `Exception`, or Python `exc_info` tuple as returned by +`sys.exc_info()`. If this is the first call to `request_stop()` the +corresponding exception is recorded and re-raised from `join()`. +
+ + + +

should_stop

+ +View source + + + +Check if the coordinator was told to stop. + +See `Coordinator.should_stop()`. + + + + + + + + + +
Returns
+True if the coordinator was told to stop, False otherwise. +
+ + + +
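+A hedged sketch of the usual cooperative-shutdown pattern (`sv`, `sess`, and
+`train_op` are placeholders for an existing supervisor, session, and training op):
+
+```python
+# Hypothetical sketch: train until a stop is requested, and report unexpected
+# exceptions to the coordinator so the other threads shut down too.
+try:
+  while not sv.should_stop():
+    sess.run(train_op)
+except Exception as e:
+  sv.request_stop(e)  # recorded and re-raised from the coordinator's join()
+```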

start_queue_runners

+ +View source + + + +Start threads for `QueueRunners`. + +Note that the queue runners collected in the graph key `QUEUE_RUNNERS` +are already started automatically when you create a session with the +supervisor, so unless you have non-collected queue runners to start +you do not need to call this explicitly. + + + + + + + + + + + + + +
Args
+`sess` + +A `Session`. +
+`queue_runners` + +A list of `QueueRunners`. If not specified, we'll use the +list of queue runners gathered in the graph under the key +`GraphKeys.QUEUE_RUNNERS`. +
+ + + + + + + + + + + +
Returns
+The list of threads started for the `QueueRunners`. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If called with eager execution enabled. +
+ + + + +#### Eager Compatibility +Queues are not compatible with eager execution. To ingest data when eager +execution is enabled, use the tf.data API. + + + +
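+A hedged sketch (graph mode only, since queues do not work with eager execution;
+`my_queue`, `my_enqueue_op`, `sv`, and `sess` are placeholders):
+
+```python
+# Hypothetical sketch: start a queue runner that was *not* added to the
+# GraphKeys.QUEUE_RUNNERS collection, so the supervisor has not started it yet.
+extra_qr = tf.compat.v1.train.QueueRunner(my_queue, [my_enqueue_op])
+threads = sv.start_queue_runners(sess, [extra_qr])
+```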

start_standard_services

+ +View source + + + +Start the standard services for 'sess'. +This starts services in the background. The services started depend +on the parameters to the constructor and may include: + + - A Summary thread computing summaries every save_summaries_secs. + - A Checkpoint thread saving the model every save_model_secs. + - A StepCounter thread measuring step time. + + + + + + + + + +
Args
+`sess` + +A Session. +
+ + + + + + + + + + + +
Returns
+A list of threads that are running the standard services. You can use +the Supervisor's Coordinator to join these threads with: +sv.coord.join() +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If called with a non-chief Supervisor. +
+`ValueError` + +If no `logdir` was passed to the constructor, as the +services need a log directory. +
+ + + +
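+A hedged sketch (placeholder log directory; the supervisor must be the chief and
+must have a `logdir` for this to succeed):
+
+```python
+# Hypothetical sketch: create the session without the services, then start the
+# checkpoint, summary and step-counter threads explicitly.
+sv = tf.compat.v1.train.Supervisor(logdir="/tmp/train_logs", is_chief=True)
+sess = sv.prepare_or_wait_for_session(master="", start_standard_services=False)
+service_threads = sv.start_standard_services(sess)
+```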

stop

+ +View source + + + +Stop the services and the coordinator. + +This does not close the session. + + + + + + + + + + + + + + + + +
Args
+`threads` + +Optional list of threads to join with the coordinator. If +`None`, defaults to the threads running the standard services, the +threads started for `QueueRunners`, and the threads started by the +`loop()` method. To wait on additional threads, pass the list in this +parameter. +
+`close_summary_writer` + +Whether to close the `summary_writer`. Defaults to +`True` if the summary writer was created by the supervisor, `False` +otherwise. +
+`ignore_live_threads` + +If `True` ignores threads that remain running after a +grace period when joining threads via the coordinator, instead of +raising a RuntimeError. +
+ + + +

stop_on_exception

+ +View source + + + +Context handler to stop the supervisor when an exception is raised. + +See `Coordinator.stop_on_exception()`. + + + + + + + + + +
Returns
+A context handler. +
+ + + +
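+A hedged sketch (`sv`, `sess`, and `train_op` are placeholders):
+
+```python
+# Hypothetical sketch: an exception raised inside the block asks the coordinator
+# to stop instead of propagating immediately; it is re-raised from join().
+with sv.stop_on_exception():
+  while not sv.should_stop():
+    sess.run(train_op)
+```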

summary_computed

+ +View source + + + +Indicate that a summary was computed. + + + + + + + + + + + + + + + + + +
Args
+`sess` + +A `Session` object. +
+`summary` + +A Summary proto, or a string holding a serialized summary proto. +
+`global_step` + +Int. The global step this summary is associated with. If `None`, +it will try to fetch the current step. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if 'summary' is not a Summary proto or a string. +
+`RuntimeError` + +if the Supervisor was created without a `logdir`. +
+ + + +
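+A hedged sketch (`my_summary_op`, `sv`, and `sess` are placeholders; the supervisor
+must have been created with a `logdir`):
+
+```python
+# Hypothetical sketch: evaluate a summary op yourself and hand the serialized
+# result to the supervisor, which writes it with its own summary writer.
+summary_str = sess.run(my_summary_op)
+sv.summary_computed(sess, summary_str)  # uses the current global step by default
+```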

wait_for_stop

+ +View source + + + +Block waiting for the coordinator to stop. + + + + +## Class Variables + +* `USE_DEFAULT = 0` diff --git a/site/en/api_docs/python/tf/compat/v1/train/SyncReplicasOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/SyncReplicasOptimizer.md new file mode 100644 index 00000000000..7c4bddb1615 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/SyncReplicasOptimizer.md @@ -0,0 +1,788 @@ +description: Class to synchronize, aggregate gradients and pass them to the optimizer. + +
+ + + + + + + + + + + + + + + + +
+ +# tf.compat.v1.train.SyncReplicasOptimizer + + + + + + + + + +Class to synchronize, aggregate gradients and pass them to the optimizer. + +Inherits From: [`Optimizer`](../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + +This class is deprecated. For synchronous training, please use [Distribution +Strategies](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/distribute). + +In a typical asynchronous training environment, it's common to have some +stale gradients. For example, with a N-replica asynchronous training, +gradients will be applied to the variables N times independently. Depending +on each replica's training speed, some gradients might be calculated from +copies of the variable from several steps back (N-1 steps on average). This +optimizer avoids stale gradients by collecting gradients from all replicas, +averaging them, then applying them to the variables in one shot, after +which replicas can fetch the new variables and continue. + +The following accumulators/queue are created: + +* N `gradient accumulators`, one per variable to train. Gradients are pushed + to them and the chief worker will wait until enough gradients are collected + and then average them before applying to variables. The accumulator will + drop all stale gradients (more details in the accumulator op). +* 1 `token` queue where the optimizer pushes the new global_step value after + all variables are updated. + +The following local variable is created: +* `sync_rep_local_step`, one per replica. Compared against the global_step in + each accumulator to check for staleness of the gradients. + +The optimizer adds nodes to the graph to collect gradients and pause the +trainers until variables are updated. +For the Parameter Server job: + +1. An accumulator is created for each variable, and each replica pushes the + gradients into the accumulators instead of directly applying them to the + variables. +2. Each accumulator averages once enough gradients (replicas_to_aggregate) + have been accumulated. +3. Apply the averaged gradients to the variables. +4. Only after all variables have been updated, increment the global step. +5. Only after step 4, pushes `global_step` in the `token_queue`, once for + each worker replica. The workers can now fetch the global step, use it to + update its local_step variable and start the next batch. Please note that + some workers can consume multiple minibatches, while some may not consume + even one. This is because each worker fetches minibatches as long as + a token exists. If one worker is stuck for some reason and does not + consume a token, another worker can use it. + +#### For the replicas: + + + +1. Start a step: fetch variables and compute gradients. +2. Once the gradients have been computed, push them into gradient + accumulators. Each accumulator will check the staleness and drop the stale. +3. After pushing all the gradients, dequeue an updated value of global_step + from the token queue and record that step to its local_step variable. Note + that this is effectively a barrier. +4. Start the next batch. + +### Usage + +```python +# Create any optimizer to update the variables, say a simple SGD: +opt = GradientDescentOptimizer(learning_rate=0.1) + +# Wrap the optimizer with sync_replicas_optimizer with 50 replicas: at each +# step the optimizer collects 50 gradients before applying to variables. 
+# Note that if you want to have 2 backup replicas, you can change +# total_num_replicas=52 and make sure this number matches how many physical +# replicas you started in your job. +opt = tf.compat.v1.train.SyncReplicasOptimizer(opt, replicas_to_aggregate=50, + total_num_replicas=50) + +# Some models have startup_delays to help stabilize the model but when using +# sync_replicas training, set it to 0. + +# Now you can call `minimize()` or `compute_gradients()` and +# `apply_gradients()` normally +training_op = opt.minimize(total_loss, global_step=self.global_step) + + +# You can create the hook which handles initialization and queues. +sync_replicas_hook = opt.make_session_run_hook(is_chief) +``` + +In the training program, every worker will run the train_op as if not +synchronized. + +```python +with training.MonitoredTrainingSession( + master=workers[worker_id].target, is_chief=is_chief, + hooks=[sync_replicas_hook]) as mon_sess: + while not mon_sess.should_stop(): + mon_sess.run(training_op) +``` + +To use SyncReplicasOptimizer with an `Estimator`, you need to send +sync_replicas_hook while calling the fit. +```python +my_estimator = DNNClassifier(..., optimizer=opt) +my_estimator.fit(..., hooks=[sync_replicas_hook]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`opt` + +The actual optimizer that will be used to compute and apply the +gradients. Must be one of the Optimizer classes. +
+`replicas_to_aggregate` + +number of replicas to aggregate for each variable +update. +
+`total_num_replicas` + +Total number of tasks/workers/replicas, could be +different from replicas_to_aggregate. +If total_num_replicas > replicas_to_aggregate: it is backup_replicas + +replicas_to_aggregate. +If total_num_replicas < replicas_to_aggregate: Replicas compute +multiple batches per update to variables. +
+`variable_averages` + +Optional `ExponentialMovingAverage` object, used to +maintain moving averages for the variables passed in +`variables_to_average`. +
+`variables_to_average` + +a list of variables that need to be averaged. Only +needed if variable_averages is passed in. +
+`use_locking` + +If True use locks for update operation. +
+`name` + +string. Optional name of the returned operation. +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This contains most of the synchronization implementation and also wraps the +apply_gradients() from the real optimizer. + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +compute_gradients(). +
+`global_step` + +Optional Variable to increment by one after the +variables have been updated. +
+`name` + +Optional name for the returned operation. Default to the +name passed to the Optimizer constructor. +
+ + + + + + + + + + + + +
Returns
+`train_op` + +The op to dequeue a token so the replicas can exit this batch +and start the next one. This is executed by each replica. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If the grads_and_vars is empty. +
+`ValueError` + +If global step is not provided, the staleness cannot be +checked. +
+ + + +

compute_gradients

+ +View source + + + +Compute gradients of "loss" for the variables in "var_list". + +This simply wraps the compute_gradients() from the real optimizer. The +gradients will be aggregated in the apply_gradients() so that user can +modify the gradients like clipping with per replica global norm if needed. +The global norm with aggregated gradients can be bad as one replica's huge +gradients can hurt the gradients from other replicas. + + + + + + + + + + + + + +
Args
+`*args` + +Arguments for compute_gradients(). +
+`**kwargs` + +Keyword arguments for compute_gradients(). +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. +
+ + + +
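+A hedged sketch of the per-replica clipping mentioned above (`opt` is the wrapping
+`SyncReplicasOptimizer`; `total_loss` and `global_step` are placeholders):
+
+```python
+# Hypothetical sketch: clip each replica's gradients before the synchronized
+# apply_gradients() step aggregates them.
+grads_and_vars = opt.compute_gradients(total_loss)
+clipped = [(tf.clip_by_norm(g, 5.0), v)
+           for g, v in grads_and_vars if g is not None]
+train_op = opt.apply_gradients(clipped, global_step=global_step)
+```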

get_chief_queue_runner

+ +View source + + + +Returns the QueueRunner for the chief to execute. + +This includes the operations to synchronize replicas: aggregate gradients, +apply to variables, increment global step, insert tokens to token queue. + +Note that this can only be called after calling apply_gradients() which +actually generates this queuerunner. + + + + + + + + + +
Returns
+A `QueueRunner` for chief to execute. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If this is called before apply_gradients(). +
+ + + +

get_init_tokens_op

+ +View source + + + +Returns the op to fill the sync_token_queue with the tokens. + +This is supposed to be executed in the beginning of the chief/sync thread +so that even if the total_num_replicas is less than replicas_to_aggregate, +the model can still proceed as the replicas can compute multiple steps per +variable update. Make sure: +`num_tokens >= replicas_to_aggregate - total_num_replicas`. + + + + + + + + + + +
Args
+`num_tokens` + +Number of tokens to add to the queue. +
+ + + + + + + + + + + +
Returns
+An op for the chief/sync replica to fill the token queue. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If this is called before apply_gradients(). +
+`ValueError` + +If num_tokens are smaller than replicas_to_aggregate - +total_num_replicas. +
+ + + +
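+A hedged sketch for a chief that manages these ops itself rather than using
+`make_session_run_hook()` (`sess` is a placeholder session; `apply_gradients()`
+must already have been called on `opt`):
+
+```python
+# Hypothetical sketch: after variables are initialized on the chief, fill the
+# token queue and start the chief queue runner.
+coord = tf.compat.v1.train.Coordinator()
+sess.run(opt.get_init_tokens_op())
+opt.get_chief_queue_runner().create_threads(sess, coord=coord, start=True)
+```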

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named "name" created for "var" by the Optimizer. + +This simply wraps the get_slot() from the actual optimizer. + + + + + + + + + + + + + +
Args
+`*args` + +Arguments for get_slot(). +
+`**kwargs` + +Keyword arguments for get_slot(). +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +This simply wraps the get_slot_names() from the actual optimizer. + + + + + + + + + + + + + +
Args
+`*args` + +Arguments for get_slot(). +
+`**kwargs` + +Keyword arguments for get_slot(). +
+ + + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

make_session_run_hook

+ +View source + + + +Creates a hook to handle SyncReplicasHook ops such as initialization. + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +

variables

+ +View source + + + +Fetches a list of optimizer variables in the default graph. + +This wraps `variables()` from the actual optimizer. It does not include +the `SyncReplicasOptimizer`'s local step. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/WorkerSessionCreator.md b/site/en/api_docs/python/tf/compat/v1/train/WorkerSessionCreator.md new file mode 100644 index 00000000000..e6c773827c3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/WorkerSessionCreator.md @@ -0,0 +1,93 @@ +description: Creates a tf.compat.v1.Session for a worker. + +
+ + + + +
+ +# tf.compat.v1.train.WorkerSessionCreator + + + + + + + + + +Creates a tf.compat.v1.Session for a worker. + +Inherits From: [`SessionCreator`](../../../../tf/compat/v1/train/SessionCreator.md) + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`scaffold` + +A `Scaffold` used for gathering or building supportive ops. If +not specified a default one is created. It's used to finalize the graph. +
+`master` + +`String` representation of the TensorFlow master to use. +
+`config` + +`ConfigProto` proto used to configure the session. +
+`max_wait_secs` + +Maximum time to wait for the session to become available. +
+ + + +## Methods + +

create_session

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/train/add_queue_runner.md b/site/en/api_docs/python/tf/compat/v1/train/add_queue_runner.md new file mode 100644 index 00000000000..9a048981bc0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/add_queue_runner.md @@ -0,0 +1,79 @@ +description: Adds a QueueRunner to a collection in the graph. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.add_queue_runner + + + + + + + + + +Adds a `QueueRunner` to a collection in the graph. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +To construct input pipelines, use the tf.data module. + +When building a complex model that uses many queues it is often difficult to +gather all the queue runners that need to be run. This convenience function +allows you to add a queue runner to a well known collection in the graph. + +The companion method `start_queue_runners()` can be used to start threads for +all the collected queue runners. + + + + + + + + + + + + + +
+`qr` + +A `QueueRunner`. +
+`collection` + +A `GraphKey` specifying the graph collection to add +the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/assert_global_step.md b/site/en/api_docs/python/tf/compat/v1/train/assert_global_step.md new file mode 100644 index 00000000000..1bb9c01a2a0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/assert_global_step.md @@ -0,0 +1,50 @@ +description: Asserts global_step_tensor is a scalar int Variable or Tensor. + +
+ + +
+ +# tf.compat.v1.train.assert_global_step + + + + + + + + + +Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`. + + + + + + + + + + + + + + + + + +
+`global_step_tensor` + +`Tensor` to test. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/basic_train_loop.md b/site/en/api_docs/python/tf/compat/v1/train/basic_train_loop.md new file mode 100644 index 00000000000..5b6dabf728f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/basic_train_loop.md @@ -0,0 +1,89 @@ +description: Basic loop to train a model. + +
+ + +
+ +# tf.compat.v1.train.basic_train_loop + + + + + + + + + +Basic loop to train a model. + + + + + + + +Calls `train_step_fn` in a loop to train a model. The function is called as: + +```python +train_step_fn(session, *args, **kwargs) +``` + +It is passed a tf.compat.v1.Session in addition to `args` and `kwargs`. The +function +typically runs one training step in the session. + + + + + + + + + + + + + + + + + + + + + + +
+`supervisor` + +tf.compat.v1.train.Supervisor to run the training services. +
+`train_step_fn` + +Callable to execute one training step. Called repeatedly as +`train_step_fn(session, *args **kwargs)`. +
+`args` + +Optional positional arguments passed to `train_step_fn`. +
+`kwargs` + +Optional keyword arguments passed to `train_step_fn`. +
+`master` + +Master to use to create the training session. Defaults to `""` +which causes the session to be created in the local process. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/batch.md b/site/en/api_docs/python/tf/compat/v1/train/batch.md new file mode 100644 index 00000000000..e798a9e2cba --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/batch.md @@ -0,0 +1,208 @@ +description: Creates batches of tensors in tensors. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.batch + + + + + + + + + +Creates batches of tensors in `tensors`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.batch(batch_size) (or `padded_batch(...)` if `dynamic_pad=True`). + +The argument `tensors` can be a list or a dictionary of tensors. +The value returned by the function will be of the same type +as `tensors`. + +This function is implemented using a queue. A `QueueRunner` for the +queue is added to the current `Graph`'s `QUEUE_RUNNER` collection. + +If `enqueue_many` is `False`, `tensors` is assumed to represent a single +example. An input tensor with shape `[x, y, z]` will be output as a tensor +with shape `[batch_size, x, y, z]`. + +If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of +examples, where the first dimension is indexed by example, and all members of +`tensors` should have the same size in the first dimension. If an input +tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x, +y, z]`. The `capacity` argument controls the how long the prefetching is +allowed to grow the queues. + +The returned operation is a dequeue operation and will throw +tf.errors.OutOfRangeError if the input queue is exhausted. If this +operation is feeding another input queue, its queue runner will catch +this exception, however, if this operation is used in your main thread +you are responsible for catching this yourself. + +*N.B.:* If `dynamic_pad` is `False`, you must ensure that either +(i) the `shapes` argument is passed, or (ii) all of the tensors in +`tensors` must have fully-defined shapes. `ValueError` will be +raised if neither of these conditions holds. + +If `dynamic_pad` is `True`, it is sufficient that the *rank* of the +tensors is known, but individual dimensions may have shape `None`. +In this case, for each enqueue the dimensions with value `None` +may have a variable length; upon dequeue, the output tensors will be padded +on the right to the maximum shape of the tensors in the current minibatch. +For numbers, this padding takes value 0. For strings, this padding is +the empty string. See `PaddingFIFOQueue` for more info. + +If `allow_smaller_final_batch` is `True`, a smaller batch value than +`batch_size` is returned when the queue is closed and there are not enough +elements to fill the batch, otherwise the pending elements are discarded. +In addition, all output tensors' static shapes, as accessed via the +`shape` property will have a first `Dimension` value of `None`, and +operations that depend on fixed batch_size would fail. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensors` + +The list or dictionary of tensors to enqueue. +
+`batch_size` + +The new batch size pulled from the queue. +
+`num_threads` + +The number of threads enqueuing `tensors`. The batching will +be nondeterministic if `num_threads > 1`. +
+`capacity` + +An integer. The maximum number of elements in the queue. +
+`enqueue_many` + +Whether each tensor in `tensors` is a single example. +
+`shapes` + +(Optional) The shapes for each example. Defaults to the +inferred shapes for `tensors`. +
+`dynamic_pad` + +Boolean. Allow variable dimensions in input shapes. +The given dimensions are padded upon dequeue so that tensors within a +batch have the same shapes. +
+`allow_smaller_final_batch` + +(Optional) Boolean. If `True`, allow the final +batch to be smaller if there are insufficient items left in the queue. +
+`shared_name` + +(Optional). If set, this queue will be shared under the given +name across multiple sessions. +
+`name` + +(Optional) A name for the operations. +
+ + + + + + + + + + + +
+A list or dictionary of tensors with the same types as `tensors` (except if +the input is a list of one element, then it returns a tensor, not a list). +
+ + + + + + + + + + + + +
+`ValueError` + +If the `shapes` are not specified, and cannot be +inferred from the elements of `tensors`. +
+ + + + +#### Eager Compatibility +Input pipelines based on Queues are not supported when eager execution is +enabled. Please use the tf.data API to ingest data under eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/batch_join.md b/site/en/api_docs/python/tf/compat/v1/train/batch_join.md new file mode 100644 index 00000000000..dfc9dcf5890 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/batch_join.md @@ -0,0 +1,214 @@ +description: Runs a list of tensors to fill a queue to create batches of examples. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.batch_join + + + + + + + + + +Runs a list of tensors to fill a queue to create batches of examples. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.interleave(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). + +The `tensors_list` argument is a list of tuples of tensors, or a list of +dictionaries of tensors. Each element in the list is treated similarly +to the `tensors` argument of tf.compat.v1.train.batch(). + +WARNING: This function is nondeterministic, since it starts a separate thread +for each tensor. + +Enqueues a different list of tensors in different threads. +Implemented using a queue -- a `QueueRunner` for the queue +is added to the current `Graph`'s `QUEUE_RUNNER` collection. + +`len(tensors_list)` threads will be started, +with thread `i` enqueuing the tensors from +`tensors_list[i]`. `tensors_list[i1][j]` must match +`tensors_list[i2][j]` in type and shape, except in the first +dimension if `enqueue_many` is true. + +If `enqueue_many` is `False`, each `tensors_list[i]` is assumed +to represent a single example. An input tensor `x` will be output as a +tensor with shape `[batch_size] + x.shape`. + +If `enqueue_many` is `True`, `tensors_list[i]` is assumed to +represent a batch of examples, where the first dimension is indexed +by example, and all members of `tensors_list[i]` should have the +same size in the first dimension. The slices of any input tensor +`x` are treated as examples, and the output tensors will have shape +`[batch_size] + x.shape[1:]`. + +The `capacity` argument controls the how long the prefetching is allowed to +grow the queues. + +The returned operation is a dequeue operation and will throw +tf.errors.OutOfRangeError if the input queue is exhausted. If this +operation is feeding another input queue, its queue runner will catch +this exception, however, if this operation is used in your main thread +you are responsible for catching this yourself. + +*N.B.:* If `dynamic_pad` is `False`, you must ensure that either +(i) the `shapes` argument is passed, or (ii) all of the tensors in +`tensors_list` must have fully-defined shapes. `ValueError` will be +raised if neither of these conditions holds. + +If `dynamic_pad` is `True`, it is sufficient that the *rank* of the +tensors is known, but individual dimensions may have value `None`. +In this case, for each enqueue the dimensions with value `None` +may have a variable length; upon dequeue, the output tensors will be padded +on the right to the maximum shape of the tensors in the current minibatch. +For numbers, this padding takes value 0. For strings, this padding is +the empty string. See `PaddingFIFOQueue` for more info. + +If `allow_smaller_final_batch` is `True`, a smaller batch value than +`batch_size` is returned when the queue is closed and there are not enough +elements to fill the batch, otherwise the pending elements are discarded. +In addition, all output tensors' static shapes, as accessed via the +`shape` property will have a first `Dimension` value of `None`, and +operations that depend on fixed batch_size would fail. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensors_list` + +A list of tuples or dictionaries of tensors to enqueue. +
+`batch_size` + +An integer. The new batch size pulled from the queue. +
+`capacity` + +An integer. The maximum number of elements in the queue. +
+`enqueue_many` + +Whether each tensor in `tensor_list_list` is a single +example. +
+`shapes` + +(Optional) The shapes for each example. Defaults to the +inferred shapes for `tensor_list_list[i]`. +
+`dynamic_pad` + +Boolean. Allow variable dimensions in input shapes. +The given dimensions are padded upon dequeue so that tensors within a +batch have the same shapes. +
+`allow_smaller_final_batch` + +(Optional) Boolean. If `True`, allow the final +batch to be smaller if there are insufficient items left in the queue. +
+`shared_name` + +(Optional) If set, this queue will be shared under the given +name across multiple sessions. +
+`name` + +(Optional) A name for the operations. +
+ + + + + + + + + + + +
+A list or dictionary of tensors with the same number and types as +`tensors_list[i]`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the `shapes` are not specified, and cannot be +inferred from the elements of `tensor_list_list`. +
+ + + + +#### Eager Compatibility +Input pipelines based on Queues are not supported when eager execution is +enabled. Please use the tf.data API to ingest data under eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/checkpoint_exists.md b/site/en/api_docs/python/tf/compat/v1/train/checkpoint_exists.md new file mode 100644 index 00000000000..f39c286e9f9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/checkpoint_exists.md @@ -0,0 +1,73 @@ +description: Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.checkpoint_exists + + + + + + + + + +Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use standard file APIs to check for files with this prefix. + +This is the recommended way to check if a checkpoint exists, since it takes +into account the naming difference between V1 and V2 formats. + + + + + + + + + + +
+`checkpoint_prefix` + +the prefix of a V1 or V2 checkpoint, with V2 taking +priority. Typically the result of `Saver.save()` or that of +tf.train.latest_checkpoint(), regardless of sharded/non-sharded or +V1/V2. +
+ + + + + + + + + + + +
+A bool, true if a checkpoint referred to by `checkpoint_prefix` exists. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/cosine_decay.md b/site/en/api_docs/python/tf/compat/v1/train/cosine_decay.md new file mode 100644 index 00000000000..1a32af9281a --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/cosine_decay.md @@ -0,0 +1,152 @@ +description: Applies cosine decay to the learning rate. + +
+ + +
+ +# tf.compat.v1.train.cosine_decay + + + + + + + + + +Applies cosine decay to the learning rate. + + + + + + + +When training a model, it is often recommended to lower the learning rate as +the training progresses. This function applies a cosine decay function +to a provided initial learning rate. It requires a `global_step` value to +compute the decayed learning rate. You can just pass a TensorFlow variable +that you increment at each training step. + +The function returns the decayed learning rate. It is computed as: +```python +global_step = min(global_step, decay_steps) +cosine_decay = 0.5 * (1 + cos(pi * global_step / decay_steps)) +decayed = (1 - alpha) * cosine_decay + alpha +decayed_learning_rate = learning_rate * decayed +``` + +#### Example usage: + + +```python +decay_steps = 1000 +lr_decayed = cosine_decay(learning_rate, global_step, decay_steps) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A scalar `float32` or `float64` Tensor or a Python number. +The initial learning rate. +
+`global_step` + +A scalar `int32` or `int64` `Tensor` or a Python number. Global +step to use for the decay computation. +
+`decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. Number +of steps to decay over. +
+`alpha` + +A scalar `float32` or `float64` Tensor or a Python number. Minimum +learning rate value as a fraction of learning_rate. +
+`name` + +String. Optional name of the operation. Defaults to 'CosineDecay'. +
+ + + + + + + + + + + +
+A scalar `Tensor` of the same type as `learning_rate`. The decayed +learning rate. +
+ + + + + + + + + + + + +
+`ValueError` + +if `global_step` is not supplied. +
+ + + +#### References: +Stochastic Gradient Descent with Warm Restarts: + [Loshchilov et al., 2017] + (https://openreview.net/forum?id=Skq89Scxx&noteId=Skq89Scxx) + ([pdf](https://openreview.net/pdf?id=Skq89Scxx)) + + + + +#### Eager Compatibility +When eager execution is enabled, this function returns a function which in +turn returns the decayed learning rate Tensor. This can be useful for changing +the learning rate value across different invocations of optimizer functions. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/cosine_decay_restarts.md b/site/en/api_docs/python/tf/compat/v1/train/cosine_decay_restarts.md new file mode 100644 index 00000000000..34161f060c6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/cosine_decay_restarts.md @@ -0,0 +1,168 @@ +description: Applies cosine decay with restarts to the learning rate. +
+ + +
+ +# tf.compat.v1.train.cosine_decay_restarts + + + + + + + + + +Applies cosine decay with restarts to the learning rate. + + + + + + + +When training a model, it is often recommended to lower the learning rate as +the training progresses. This function applies a cosine decay function with +restarts to a provided initial learning rate. It requires a `global_step` +value to compute the decayed learning rate. You can just pass a TensorFlow +variable that you increment at each training step. + +The function returns the decayed learning rate while taking into account +possible warm restarts. The learning rate multiplier first decays +from 1 to `alpha` for `first_decay_steps` steps. Then, a warm +restart is performed. Each new warm restart runs for `t_mul` times more steps +and with `m_mul` times smaller initial learning rate. + +#### Example usage: + + +```python +first_decay_steps = 1000 +lr_decayed = cosine_decay_restarts(learning_rate, global_step, + first_decay_steps) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A scalar `float32` or `float64` Tensor or a Python number. +The initial learning rate. +
+`global_step` + +A scalar `int32` or `int64` `Tensor` or a Python number. Global +step to use for the decay computation. +
+`first_decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. +Number of steps to decay over. +
+`t_mul` + +A scalar `float32` or `float64` `Tensor` or a Python number. Used to +derive the number of iterations in the i-th period +
+`m_mul` + +A scalar `float32` or `float64` `Tensor` or a Python number. +Used to derive the initial learning rate of the i-th period: +
+`alpha` + +A scalar `float32` or `float64` Tensor or a Python number. Minimum +learning rate value as a fraction of the learning_rate. +
+`name` + +String. Optional name of the operation. Defaults to 'SGDRDecay'. +
+ + + + + + + + + + + +
+A scalar `Tensor` of the same type as `learning_rate`. The decayed +learning rate. +
+ + + + + + + + + + + + +
+`ValueError` + +if `global_step` is not supplied. +
+ + + +#### References: +Stochastic Gradient Descent with Warm Restarts: + [Loshchilov et al., 2017] + (https://openreview.net/forum?id=Skq89Scxx&noteId=Skq89Scxx) + ([pdf](https://openreview.net/pdf?id=Skq89Scxx)) + + + + +#### Eager Compatibility +When eager execution is enabled, this function returns a function which in +turn returns the decayed learning rate Tensor. This can be useful for changing +the learning rate value across different invocations of optimizer functions. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/create_global_step.md b/site/en/api_docs/python/tf/compat/v1/train/create_global_step.md new file mode 100644 index 00000000000..55588ac7842 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/create_global_step.md @@ -0,0 +1,82 @@ +description: Create global step tensor in graph. +
+ + +
+ +# tf.compat.v1.train.create_global_step + + + + + + + + + +Create global step tensor in graph. + + + + + + + + + + + + + + + + + +
+`graph` + +The graph in which to create the global step tensor. If missing, use +default graph. +
+ + + + + + + + + + + +
+Global step tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if global step tensor is already defined. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/do_quantize_training_on_graphdef.md b/site/en/api_docs/python/tf/compat/v1/train/do_quantize_training_on_graphdef.md new file mode 100644 index 00000000000..52a7b3b3d01 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/do_quantize_training_on_graphdef.md @@ -0,0 +1,77 @@ +description: A general quantization scheme is being developed in tf.contrib.quantize. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.do_quantize_training_on_graphdef + + + + + + + + + +A general quantization scheme is being developed in `tf.contrib.quantize`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +GraphDef quantized training rewriter is deprecated in the long term. + +Consider using that instead, though since it is in the tf.contrib namespace, +it is not subject to backward compatibility guarantees. + + + + + + + + + + + + + +
+`input_graph` + +A `GraphDef`. +
+`num_bits` + +The number of bits for quantize training. +
+ + + + + + + + + + + +
+The graph with quantize training done. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/experimental.md b/site/en/api_docs/python/tf/compat/v1/train/experimental.md new file mode 100644 index 00000000000..91fdb852300 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/experimental.md @@ -0,0 +1,39 @@ +description: Public API for tf.train.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.train.experimental + + + + + + + + + +Public API for tf.train.experimental namespace. + + + +## Classes + +[`class DynamicLossScale`](../../../../tf/mixed_precision/experimental/DynamicLossScale.md): Loss scale that dynamically adjusts itself. + +[`class FixedLossScale`](../../../../tf/mixed_precision/experimental/FixedLossScale.md): Loss scale with a fixed value. + +[`class LossScale`](../../../../tf/mixed_precision/experimental/LossScale.md): Base class for all loss scales. + +[`class MixedPrecisionLossScaleOptimizer`](../../../../tf/compat/v1/train/experimental/MixedPrecisionLossScaleOptimizer.md): An optimizer that applies loss scaling. + +[`class PythonState`](../../../../tf/train/experimental/PythonState.md): A mixin for putting Python state in an object-based checkpoint. + +## Functions + +[`disable_mixed_precision_graph_rewrite(...)`](../../../../tf/compat/v1/train/experimental/disable_mixed_precision_graph_rewrite.md): Disables the mixed precision graph rewrite. + +[`enable_mixed_precision_graph_rewrite(...)`](../../../../tf/compat/v1/train/experimental/enable_mixed_precision_graph_rewrite.md): Enable mixed precision via a graph rewrite. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/experimental/MixedPrecisionLossScaleOptimizer.md b/site/en/api_docs/python/tf/compat/v1/train/experimental/MixedPrecisionLossScaleOptimizer.md new file mode 100644 index 00000000000..72c1cbc9277 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/experimental/MixedPrecisionLossScaleOptimizer.md @@ -0,0 +1,555 @@ +description: An optimizer that applies loss scaling. + +
+ + + + + + + + + + + + + +
+ +# tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer + + + + + + + + + +An optimizer that applies loss scaling. + +Inherits From: [`Optimizer`](../../../../../tf/compat/v1/train/Optimizer.md) + + + + + + + +Loss scaling is a process that multiplies the loss by a multiplier called the +loss scale, and divides each gradient by the same multiplier. The pseudocode +for this process is: + +``` +loss = ... +loss *= loss_scale +grads = gradients(loss, vars) +grads /= loss_scale +``` + +Mathematically, loss scaling has no effect, but can help avoid numerical +underflow in intermediate gradients when float16 tensors are used for mixed +precision training. By multiplying the loss, each intermediate gradient will +have the same multiplier applied. + +The loss scale can either be a fixed constant, chosen by the user, or be +dynamically determined. Dynamically determining the loss scale is convenient +as a loss scale does not have to be explicitly chosen. However it reduces +performance. + +This optimizer wraps another optimizer and applies loss scaling to it via a +`LossScale`. Loss scaling is applied whenever gradients are +computed, such as through `minimize()`. + + + + + + + + + + + + + +
+`use_locking` + +Bool. If True apply use locks to prevent concurrent updates +to variables. +
+`name` + +A non-empty string. The name to use for accumulators created +for the optimizer. +
+ + + + + + + + + + + + +
+`ValueError` + +If name is malformed. +
+ + + +## Methods + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +conditionally applies gradients if all gradient values are finite. +Otherwise no update is performed (nor is `global_step` incremented). + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs as returned by +`compute_gradients()`. +
+`global_step` + +Optional `Variable` to increment by one after the variables +have been updated. +
+`name` + +Optional name for the returned operation. Default to the name +passed to the `Optimizer` constructor. +
+ + + + + + + + + + + +
Returns
+An `Operation` that conditionally applies the specified gradients. If +`global_step` was not None, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If you should use `_distributed_apply()` instead. +
+ + + +

compute_gradients

+ +View source + + + +Compute gradients of `loss` for the variables in `var_list`. + +This adjusts the dynamic range of the gradient evaluation by scaling up +the `loss` value. The gradient values are then scaled back down by the +reciprocal of the loss scale. This is useful in reduced precision training +where small gradient values would otherwise underflow the representable +range. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A Tensor containing the value to minimize or a callable taking no +arguments which returns the value to minimize. When eager execution is +enabled it must be a callable. +
+`var_list` + +Optional list or tuple of tf.Variable to update to minimize +`loss`. Defaults to the list of variables collected in the graph under +the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with the +corresponding op. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+A list of (gradient, variable) pairs. Variable is always present, but +gradient can be `None`. +
+ + + +

get_name

+ +View source + + + + + + +

get_slot

+ +View source + + + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + + + + + + + + + + + + + +
Args
+`var` + +A variable passed to `minimize()` or `apply_gradients()`. +
+`name` + +A string. +
+ + + + + + + + + + + +
Returns
+The `Variable` for the slot if it was created, `None` otherwise. +
+ + + +

get_slot_names

+ +View source + + + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + + + + + + + + + +
Returns
+A list of strings. +
+ + + +

minimize

+ +View source + + + +Add operations to minimize `loss` by updating `var_list`. + +This method simply combines calls `compute_gradients()` and +`apply_gradients()`. If you want to process the gradient before applying +them call `compute_gradients()` and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A `Tensor` containing the value to minimize. +
+`global_step` + +Optional `Variable` to increment by one after the +variables have been updated. +
+`var_list` + +Optional list or tuple of `Variable` objects to update to +minimize `loss`. Defaults to the list of variables collected in +the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. +
+`gate_gradients` + +How to gate the computation of gradients. Can be +`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. +
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Valid values are defined in the class `AggregationMethod`. +
+`colocate_gradients_with_ops` + +If True, try colocating gradients with +the corresponding op. +
+`name` + +Optional name for the returned operation. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the variables in `var_list`. If `global_step` +was not `None`, that operation also increments `global_step`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + + +#### Eager Compatibility +When eager execution is enabled, `loss` should be a Python function that +takes no arguments and computes the value to be minimized. Minimization (and +gradient computation) is done with respect to the elements of `var_list` if +not None, else with respect to any trainable variables created during the +execution of the `loss` function. `gate_gradients`, `aggregation_method`, +`colocate_gradients_with_ops` and `grad_loss` are ignored when eager +execution is enabled. + + + +

variables

+ +View source + + + +A list of variables which encode the current state of `Optimizer`. + +Includes slot variables and additional global variables created by the +optimizer in the current default graph. + + + + + + + + + +
Returns
+A list of variables. +
+ + + + + +## Class Variables + +* `GATE_GRAPH = 2` +* `GATE_NONE = 0` +* `GATE_OP = 1` diff --git a/site/en/api_docs/python/tf/compat/v1/train/experimental/disable_mixed_precision_graph_rewrite.md b/site/en/api_docs/python/tf/compat/v1/train/experimental/disable_mixed_precision_graph_rewrite.md new file mode 100644 index 00000000000..1160add5ee6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/experimental/disable_mixed_precision_graph_rewrite.md @@ -0,0 +1,49 @@ +description: Disables the mixed precision graph rewrite. + +
+ + +
+ +# tf.compat.v1.train.experimental.disable_mixed_precision_graph_rewrite + + + + + + + + + +Disables the mixed precision graph rewrite. + + + + + + + +After this is called, the mixed precision graph rewrite will no longer run for +new Sessions, and so float32 operations will no longer be converted to float16 +in such Sessions. However, any existing Sessions will continue to have the +graph rewrite enabled if they were created after +`enable_mixed_precision_graph_rewrite` was called but before +`disable_mixed_precision_graph_rewrite` was called. + +This does not undo the effects of loss scaling. Any optimizers wrapped with a +LossScaleOptimizer will continue to do loss scaling, although this loss +scaling will no longer be useful if the optimizer is used in new Sessions, as +the graph rewrite no longer converts the graph to use float16. + +This function is useful for unit testing. A unit tests can test using the +mixed precision graph rewrite, then disable it so future unit tests continue +using float32. If this is done, unit tests should not share a single session, +as `enable_mixed_precision_graph_rewrite` and +`disable_mixed_precision_graph_rewrite` have no effect on existing sessions. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/train/experimental/enable_mixed_precision_graph_rewrite.md b/site/en/api_docs/python/tf/compat/v1/train/experimental/enable_mixed_precision_graph_rewrite.md new file mode 100644 index 00000000000..c890522da69 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/experimental/enable_mixed_precision_graph_rewrite.md @@ -0,0 +1,179 @@ +description: Enable mixed precision via a graph rewrite. + +
+ + +
+ +# tf.compat.v1.train.experimental.enable_mixed_precision_graph_rewrite + + + + + + + + + +Enable mixed precision via a graph rewrite. + + + + + + + +Mixed precision is the use of both float32 and float16 data types when +training a model to improve performance. This is achieved via a graph rewrite +operation and a loss-scale optimizer. + +Performing arithmetic operations in float16 takes advantage of specialized +processing units, such as NVIDIA Tensor Cores, for much higher arithmetic +throughput. However, due to the smaller representable range, performing the +entire training with float16 can result in gradient underflow, that is, small +gradient values becoming zeroes. Instead, performing only select arithmetic +operations in float16 results in higher throughput and decreased training +time when using compatible hardware accelerators while also reducing memory +usage, typically without sacrificing model accuracy. + +Note: While the mixed precision rewrite changes the datatype of various +layers throughout the model, the same accuracy reached in float32 is +expected. If a `NaN` gradient occurs with dynamic loss scaling, the model +update for that batch is skipped. In this case, the global step count is not +incremented, and the `LossScaleOptimizer` attempts to decrease the loss +scaling value to avoid `NaN` values in subsequent iterations. This approach +has been shown to achieve the same accuracy as float32 and, in most cases, +better training throughput. + +#### Example: + + + +```python +model = tf.keras.models.Sequential([ + tf.keras.layers.Dense(64, activation='relu'), + tf.keras.layers.Dense(64, activation='softmax'), +]) + +opt = tf.keras.optimizers.SGD() +opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt) +model.compile(loss="mse", optimizer=opt) + +x_train = np.random.random((1024, 64)) +y_train = np.random.random((1024, 64)) +model.fit(x_train, y_train) +``` + +Calling `enable_mixed_precision_graph_rewrite(opt)` enables the graph rewrite +operation before computing gradients. The function additionally returns an +`Optimizer` (`opt`) wrapped with a `LossScaleOptimizer`. This prevents +underflow in the float16 tensors during the backward pass. An optimizer of +type `tf.train.Optimizer` or tf.keras.optimizers.Optimizer must be passed +to this function, which will then be wrapped to use loss scaling. + +The graph rewrite operation changes the `dtype` of certain operations in the +graph from float32 to float16. There are several categories of operations +that are either included or excluded by this rewrite operation. The following +categories of Ops are defined inside corresponding functions under the class +`AutoMixedPrecisionLists` in + +auto_mixed_precision_lists.h: + +* `ClearList`: Ops that do not have numerically significant adverse effects. +E.g. `ArgMax` and `Floor`. +* `WhiteList`: Ops that are considered numerically safe for execution in +float16, and thus are always converted. E.g. `Conv2D`. +* `BlackList`: Ops that are numerically unsafe to execute in float16 and +can negatively affect downstream nodes. E.g. `Softmax`. +* `GrayList`: Ops that are considered numerically safe for execution in +float16 unless downstream from a BlackList Op. E.g. `Add` and `AvgPool`. + +When this function is used, gradients should only be computed and applied +with the returned optimizer, either by calling `opt.minimize()` or +`opt.compute_gradients()` followed by `opt.apply_gradients()`. +Gradients should not be computed with tf.gradients or tf.GradientTape. 
+This is because the returned optimizer will apply loss scaling, and +tf.gradients or tf.GradientTape will not. If you do directly use +tf.gradients or tf.GradientTape, your model may not converge due to +float16 underflow problems. + +When eager execution is enabled, the mixed precision graph rewrite is only +enabled within tf.functions, as outside tf.functions, there is no graph. + +For NVIDIA GPUs with Tensor cores, as a general performance guide, dimensions +(such as batch size, input size, output size, and channel counts) +should be powers of two if under 256, or otherwise divisible by 8 if above +256. For more information, check out the +[NVIDIA Deep Learning Performance Guide]( +https://docs.nvidia.com/deeplearning/sdk/dl-performance-guide/index.html). + +Currently, mixed precision is only enabled on NVIDIA Tensor Core GPUs with +Compute Capability 7.0 and above (Volta, Turing, or newer architectures). The +parts of the graph on CPUs and TPUs are untouched by the graph rewrite. + + + + + + + + + +
+`ValueError`, if the tf.keras.mixed_precision API is also used by calling +tf.keras.mixed_precision.experimental.set_policy. Only one mixed precision +API can be used. +
+ + + + + + + + + + + + + + + +
+`opt` + +An instance of a tf.keras.optimizers.Optimizer or a +`tf.train.Optimizer`. +
+`loss_scale` + +Either an int/float, the string `"dynamic"`, or an instance of +a tf.mixed_precision.experimental.LossScale. The loss scale to use. It +is recommended to keep this as its default value of `"dynamic"`, which +will adjust the scaling automatically to prevent `Inf` or `NaN` values. +
+ + + + + + + + + + + +
+A version of `opt` that will use loss scaling to prevent underflow. +
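For illustration only, a minimal graph-mode sketch of the same wrapping pattern with a `tf.compat.v1` optimizer; the model, shapes, and learning rate below are hypothetical, and the rewrite only affects Sessions created after the call:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, [None, 64])
y = tf.placeholder(tf.float32, [None, 1])
logits = tf.layers.dense(x, 1)
loss = tf.losses.mean_squared_error(y, logits)

opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
# The returned optimizer applies loss scaling; the graph rewrite itself runs
# for Sessions created from this point on.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
train_op = opt.minimize(loss)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  sess.run(train_op,
           feed_dict={x: np.random.random((32, 64)).astype(np.float32),
                      y: np.random.random((32, 1)).astype(np.float32)})
```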
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/exponential_decay.md b/site/en/api_docs/python/tf/compat/v1/train/exponential_decay.md new file mode 100644 index 00000000000..2430c40b698 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/exponential_decay.md @@ -0,0 +1,162 @@ +description: Applies exponential decay to the learning rate. + +
+ + +
+ +# tf.compat.v1.train.exponential_decay + + + + + + + + + +Applies exponential decay to the learning rate. + + + + + + + +When training a model, it is often recommended to lower the learning rate as +the training progresses. This function applies an exponential decay function +to a provided initial learning rate. It requires a `global_step` value to +compute the decayed learning rate. You can just pass a TensorFlow variable +that you increment at each training step. + +The function returns the decayed learning rate. It is computed as: + +```python +decayed_learning_rate = learning_rate * + decay_rate ^ (global_step / decay_steps) +``` + +If the argument `staircase` is `True`, then `global_step / decay_steps` is an +integer division and the decayed learning rate follows a staircase function. + +Example: decay every 100000 steps with a base of 0.96: + +```python +... +global_step = tf.Variable(0, trainable=False) +starter_learning_rate = 0.1 +learning_rate = tf.compat.v1.train.exponential_decay(starter_learning_rate, +global_step, + 100000, 0.96, staircase=True) +# Passing global_step to minimize() will increment it at each step. +learning_step = ( + tf.compat.v1.train.GradientDescentOptimizer(learning_rate) + .minimize(...my loss..., global_step=global_step) +) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A scalar `float32` or `float64` `Tensor` or a Python number. +The initial learning rate. +
+`global_step` + +A scalar `int32` or `int64` `Tensor` or a Python number. Global +step to use for the decay computation. Must not be negative. +
+`decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. Must +be positive. See the decay computation above. +
+`decay_rate` + +A scalar `float32` or `float64` `Tensor` or a Python number. +The decay rate. +
+`staircase` + +Boolean. If `True`, decay the learning rate at discrete intervals. +
+`name` + +String. Optional name of the operation. Defaults to +'ExponentialDecay'. +
+ + + + + + + + + + + +
+A scalar `Tensor` of the same type as `learning_rate`. The decayed +learning rate. +
+ + + + + + + + + + + + +
+`ValueError` + +if `global_step` is not supplied. +
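For intuition, a plain-Python sketch of the schedule above (not the TensorFlow implementation); the sample values mirror the staircase example:

```python
def decayed_lr(learning_rate, global_step, decay_steps, decay_rate,
               staircase=False):
  # Mirrors: learning_rate * decay_rate ^ (global_step / decay_steps)
  p = global_step / decay_steps
  if staircase:
    p = global_step // decay_steps  # integer division -> staircase schedule
  return learning_rate * decay_rate ** p

print(decayed_lr(0.1, 100000, 100000, 0.96, staircase=True))  # 0.1 * 0.96
```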
+ + + + +#### Eager Compatibility +When eager execution is enabled, this function returns a function which in +turn returns the decayed learning rate Tensor. This can be useful for changing +the learning rate value across different invocations of optimizer functions. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/export_meta_graph.md b/site/en/api_docs/python/tf/compat/v1/train/export_meta_graph.md new file mode 100644 index 00000000000..84e7462d9c1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/export_meta_graph.md @@ -0,0 +1,199 @@ +description: Returns MetaGraphDef proto. + +
+ + +
+ +# tf.compat.v1.train.export_meta_graph + + + + + + + + + +Returns `MetaGraphDef` proto. + + + + + + + +Optionally writes it to filename. + +This function exports the graph, saver, and collection objects into +`MetaGraphDef` protocol buffer with the intention of it being imported +at a later time or location to restart training, run inference, or be +a subgraph. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filename` + +Optional filename including the path for writing the generated +`MetaGraphDef` protocol buffer. +
+`meta_info_def` + +`MetaInfoDef` protocol buffer. +
+`graph_def` + +`GraphDef` protocol buffer. +
+`saver_def` + +`SaverDef` protocol buffer. +
+`collection_list` + +List of string keys to collect. +
+`as_text` + +If `True`, writes the `MetaGraphDef` as an ASCII proto. +
+`graph` + +The `Graph` to export. If `None`, use the default graph. +
+`export_scope` + +Optional `string`. Name scope under which to extract the +subgraph. The scope name will be stripped from the node definitions for +easy import later into new name scopes. If `None`, the whole graph is +exported. graph_def and export_scope cannot both be specified. +
+`clear_devices` + +Whether or not to clear the device field for an `Operation` +or `Tensor` during export. +
+`clear_extraneous_savers` + +Remove any Saver-related information from the graph +(both Save/Restore ops and SaverDefs) that are not associated with the +provided SaverDef. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the NodeDefs. For a detailed guide, see +[Stripping Default-Valued Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+`save_debug_info` + +If `True`, save the GraphDebugInfo to a separate file, +which is in the same directory as filename and has `_debug` added before the +file extension. +
+`**kwargs` + +Optional keyed arguments. +
+ + + + + + + + + + + +
+A `MetaGraphDef` proto. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +When the `GraphDef` is larger than 2GB. +
+`RuntimeError` + +If called with eager execution enabled. +
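A minimal usage sketch, assuming graph mode and a hypothetical output path:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

v = tf.Variable(1.0, name="v")
saver = tf.train.Saver()

# Writes the MetaGraphDef of the default graph to disk and also returns it.
meta_graph_def = tf.train.export_meta_graph(filename="/tmp/example.meta",
                                            clear_devices=True)
```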
+ + + + +#### Eager Compatibility +Exporting/importing meta graphs is not supported unless both `graph_def` and +`graph` are provided. No graph exists when eager execution is enabled. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/generate_checkpoint_state_proto.md b/site/en/api_docs/python/tf/compat/v1/train/generate_checkpoint_state_proto.md new file mode 100644 index 00000000000..78ed79255bb --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/generate_checkpoint_state_proto.md @@ -0,0 +1,120 @@ +description: Generates a checkpoint state proto. + +
+ + +
+ +# tf.compat.v1.train.generate_checkpoint_state_proto + + + + + + + + + +Generates a checkpoint state proto. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`save_dir` + +Directory where the model was saved. +
+`model_checkpoint_path` + +The checkpoint file. +
+`all_model_checkpoint_paths` + +List of strings. Paths to all not-yet-deleted +checkpoints, sorted from oldest to newest. If this is a non-empty list, +the last element must be equal to model_checkpoint_path. These paths +are also saved in the CheckpointState proto. +
+`all_model_checkpoint_timestamps` + +A list of floats, indicating the number of +seconds since the Epoch when each checkpoint was generated. +
+`last_preserved_timestamp` + +A float, indicating the number of seconds since +the Epoch when the last preserved checkpoint was written, e.g. due to a +`keep_checkpoint_every_n_hours` parameter (see +tf.train.CheckpointManager for an implementation). +
+ + + + + + + + + + + +
+CheckpointState proto with model_checkpoint_path and +all_model_checkpoint_paths updated to either absolute paths or +relative paths to the current save_dir. +
+ + + + + + + + + + + + +
+`ValueError` + +If `all_model_checkpoint_timestamps` was provided but its length +does not match `all_model_checkpoint_paths`. +
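A minimal sketch with hypothetical checkpoint paths:

```python
import tensorflow.compat.v1 as tf

state = tf.train.generate_checkpoint_state_proto(
    save_dir="/tmp/model_dir",
    model_checkpoint_path="/tmp/model_dir/ckpt-2",
    all_model_checkpoint_paths=["/tmp/model_dir/ckpt-1",
                                "/tmp/model_dir/ckpt-2"])
# Paths in the proto are rewritten to absolute or save_dir-relative form.
print(state.model_checkpoint_path)
```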
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/get_checkpoint_mtimes.md b/site/en/api_docs/python/tf/compat/v1/train/get_checkpoint_mtimes.md new file mode 100644 index 00000000000..9f9bd319f58 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/get_checkpoint_mtimes.md @@ -0,0 +1,80 @@ +description: Returns the mtimes (modification timestamps) of the checkpoints. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.get_checkpoint_mtimes + + + + + + + + + +Returns the mtimes (modification timestamps) of the checkpoints. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use standard file utilities to get mtimes. + +Globs for the checkpoints pointed to by `checkpoint_prefixes`. If the files +exist, collect their mtime. Both V2 and V1 checkpoints are considered, in +that priority. + +This is the recommended way to get the mtimes, since it takes into account +the naming difference between V1 and V2 formats. + +Note: If not all checkpoints exist, the length of the returned mtimes list +will be smaller than the length of `checkpoint_prefixes` list, so mapping +checkpoints to corresponding mtimes will not be possible. + + + + + + + + + + +
+`checkpoint_prefixes` + +a list of checkpoint paths, typically the results of +`Saver.save()` or those of tf.train.latest_checkpoint(), regardless of +sharded/non-sharded or V1/V2. +
+ + + + + + + + + + + +
+A list of mtimes (in microseconds) of the found checkpoints. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/get_global_step.md b/site/en/api_docs/python/tf/compat/v1/train/get_global_step.md new file mode 100644 index 00000000000..162cd41c556 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/get_global_step.md @@ -0,0 +1,84 @@ +description: Get the global step tensor. + +
+ + +
+ +# tf.compat.v1.train.get_global_step + + + + + + + + + +Get the global step tensor. + + + + + + + +The global step tensor must be an integer variable. We first try to find it +in the collection `GLOBAL_STEP`, or by name `global_step:0`. + + + + + + + + + + +
+`graph` + +The graph to find the global step in. If missing, use default graph. +
+ + + + + + + + + + + +
+The global step variable, or `None` if none was found. +
+ + + + + + + + + + + + +
+`TypeError` + +If the global step tensor has a non-integer type, or if it is not +a `Variable`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/get_or_create_global_step.md b/site/en/api_docs/python/tf/compat/v1/train/get_or_create_global_step.md new file mode 100644 index 00000000000..46096225cb0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/get_or_create_global_step.md @@ -0,0 +1,65 @@ +description: Returns and creates (if necessary) the global step tensor. + +
+ + +
+ +# tf.compat.v1.train.get_or_create_global_step + + + + + + + + + +Returns and creates (if necessary) the global step tensor. + + + + + + + + + + + + + + + + +
+`graph` + +The graph in which to create the global step tensor. If missing, use +default graph. +
+ + + + + + + + + + + +
+The global step tensor. +
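A minimal graph-mode sketch:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

step = tf.train.get_or_create_global_step()  # creates the variable on first call
assert step is tf.train.get_global_step()    # later lookups find the same variable
```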
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/global_step.md b/site/en/api_docs/python/tf/compat/v1/train/global_step.md new file mode 100644 index 00000000000..39e9b71e6f1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/global_step.md @@ -0,0 +1,85 @@ +description: Small helper to get the global step. + +
+ + +
+ +# tf.compat.v1.train.global_step + + + + + + + + + +Small helper to get the global step. + + + + + + + +```python +# Create a variable to hold the global_step. +global_step_tensor = tf.Variable(10, trainable=False, name='global_step') +# Create a session. +sess = tf.compat.v1.Session() +# Initialize the variable +sess.run(global_step_tensor.initializer) +# Get the variable value. +print('global_step: %s' % tf.compat.v1.train.global_step(sess, +global_step_tensor)) + +global_step: 10 +``` + + + + + + + + + + + + + +
+`sess` + +A TensorFlow `Session` object. +
+`global_step_tensor` + +`Tensor` or the `name` of the operation that contains +the global step. +
+ + + + + + + + + + + +
+The global step value. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/import_meta_graph.md b/site/en/api_docs/python/tf/compat/v1/train/import_meta_graph.md new file mode 100644 index 00000000000..c5f00f6fc4d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/import_meta_graph.md @@ -0,0 +1,196 @@ +description: Recreates a Graph saved in a MetaGraphDef proto. + +
+ + +
+ +# tf.compat.v1.train.import_meta_graph + + + + + + + + + +Recreates a Graph saved in a `MetaGraphDef` proto. + + + + + + + +This function takes a `MetaGraphDef` protocol buffer as input. If +the argument is a file containing a `MetaGraphDef` protocol buffer , +it constructs a protocol buffer from the file content. The function +then adds all the nodes from the `graph_def` field to the +current graph, recreates all the collections, and returns a saver +constructed from the `saver_def` field. + +In combination with `export_meta_graph()`, this function can be used to + +* Serialize a graph along with other Python objects such as `QueueRunner`, + `Variable` into a `MetaGraphDef`. + +* Restart training from a saved graph and checkpoints. + +* Run inference from a saved graph and checkpoints. + +```Python +... +# Create a saver. +saver = tf.compat.v1.train.Saver(...variables...) +# Remember the training_op we want to run by adding it to a collection. +tf.compat.v1.add_to_collection('train_op', train_op) +sess = tf.compat.v1.Session() +for step in xrange(1000000): + sess.run(train_op) + if step % 1000 == 0: + # Saves checkpoint, which by default also exports a meta_graph + # named 'my-model-global_step.meta'. + saver.save(sess, 'my-model', global_step=step) +``` + +Later we can continue training from this saved `meta_graph` without building +the model from scratch. + +```Python +with tf.Session() as sess: + new_saver = + tf.train.import_meta_graph('my-save-dir/my-model-10000.meta') + new_saver.restore(sess, 'my-save-dir/my-model-10000') + # tf.get_collection() returns a list. In this example we only want + # the first one. + train_op = tf.get_collection('train_op')[0] + for step in xrange(1000000): + sess.run(train_op) +``` + +NOTE: Restarting training from saved `meta_graph` only works if the +device assignments have not changed. + +#### Example: + + +Variables, placeholders, and independent operations can also be stored, as +shown in the following example. + +```Python +# Saving contents and operations. +v1 = tf.placeholder(tf.float32, name="v1") +v2 = tf.placeholder(tf.float32, name="v2") +v3 = tf.math.multiply(v1, v2) +vx = tf.Variable(10.0, name="vx") +v4 = tf.add(v3, vx, name="v4") +saver = tf.train.Saver([vx]) +sess = tf.Session() +sess.run(tf.global_variables_initializer()) +sess.run(vx.assign(tf.add(vx, vx))) +result = sess.run(v4, feed_dict={v1:12.0, v2:3.3}) +print(result) +saver.save(sess, "./model_ex1") +``` + +Later this model can be restored and contents loaded. + +```Python +# Restoring variables and running operations. +saver = tf.train.import_meta_graph("./model_ex1.meta") +sess = tf.Session() +saver.restore(sess, "./model_ex1") +result = sess.run("v4:0", feed_dict={"v1:0": 12.0, "v2:0": 3.3}) +print(result) +``` + + + + + + + + + + + + + + + + + + + +
+`meta_graph_or_file` + +`MetaGraphDef` protocol buffer or filename (including +the path) containing a `MetaGraphDef`. +
+`clear_devices` + +Whether or not to clear the device field for an `Operation` +or `Tensor` during import. +
+`import_scope` + +Optional `string`. Name scope to add. Only used when +initializing from protocol buffer. +
+`**kwargs` + +Optional keyed arguments. +
+ + + + + + + + + + + +
+A saver constructed from `saver_def` in `MetaGraphDef` or None. + +A None value is returned if no variables exist in the `MetaGraphDef` +(i.e., there are no variables to restore). +
+ + + + + + + + + + + + +
+`RuntimeError` + +If called with eager execution enabled. +
+ + + + +#### Eager Compatibility +Exporting/importing meta graphs is not supported. No graph exists when eager +execution is enabled. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/init_from_checkpoint.md b/site/en/api_docs/python/tf/compat/v1/train/init_from_checkpoint.md new file mode 100644 index 00000000000..82dd4877ca4 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/init_from_checkpoint.md @@ -0,0 +1,146 @@ +description: Replaces tf.Variable initializers so they load from a checkpoint file. + +
+ + +
+ +# tf.compat.v1.train.init_from_checkpoint + + + + + + + + + +Replaces tf.Variable initializers so they load from a checkpoint file. + + + + + + + +Values are not loaded immediately, but when the initializer is run +(typically by running a tf.compat.v1.global_variables_initializer op). + +Note: This overrides default initialization ops of specified variables and +redefines dtype. + +Assignment map supports following syntax: + +* `'checkpoint_scope_name/': 'scope_name/'` - will load all variables in + current `scope_name` from `checkpoint_scope_name` with matching tensor + names. +* `'checkpoint_scope_name/some_other_variable': 'scope_name/variable_name'` - + will initialize `scope_name/variable_name` variable + from `checkpoint_scope_name/some_other_variable`. +* `'scope_variable_name': variable` - will initialize given tf.Variable + object with tensor 'scope_variable_name' from the checkpoint. +* `'scope_variable_name': list(variable)` - will initialize list of + partitioned variables with tensor 'scope_variable_name' from the checkpoint. +* `'/': 'scope_name/'` - will load all variables in current `scope_name` from + checkpoint's root (e.g. no scope). + +Supports loading into partitioned variables, which are represented as +`'/part_'`. + +#### Example: + + + +```python + +# Say, '/tmp/model.ckpt' has the following tensors: +# -- name='old_scope_1/var1', shape=[20, 2] +# -- name='old_scope_1/var2', shape=[50, 4] +# -- name='old_scope_2/var3', shape=[100, 100] + +# Create new model's variables +with tf.compat.v1.variable_scope('new_scope_1'): + var1 = tf.compat.v1.get_variable('var1', shape=[20, 2], + initializer=tf.compat.v1.zeros_initializer()) +with tf.compat.v1.variable_scope('new_scope_2'): + var2 = tf.compat.v1.get_variable('var2', shape=[50, 4], + initializer=tf.compat.v1.zeros_initializer()) + # Partition into 5 variables along the first axis. + var3 = tf.compat.v1.get_variable(name='var3', shape=[100, 100], + initializer=tf.compat.v1.zeros_initializer(), + partitioner=lambda shape, dtype: [5, 1]) + +# Initialize all variables in `new_scope_1` from `old_scope_1`. +init_from_checkpoint('/tmp/model.ckpt', {'old_scope_1/': 'new_scope_1'}) + +# Use names to specify which variables to initialize from checkpoint. +init_from_checkpoint('/tmp/model.ckpt', + {'old_scope_1/var1': 'new_scope_1/var1', + 'old_scope_1/var2': 'new_scope_2/var2'}) + +# Or use tf.Variable objects to identify what to initialize. +init_from_checkpoint('/tmp/model.ckpt', + {'old_scope_1/var1': var1, + 'old_scope_1/var2': var2}) + +# Initialize partitioned variables using variable's name +init_from_checkpoint('/tmp/model.ckpt', + {'old_scope_2/var3': 'new_scope_2/var3'}) + +# Or specify the list of tf.Variable objects. +init_from_checkpoint('/tmp/model.ckpt', + {'old_scope_2/var3': var3._get_variable_list()}) + +``` + + + + + + + + + + + + + +
+`ckpt_dir_or_file` + +Directory with checkpoints file or path to checkpoint. +
+`assignment_map` + +Dict, where keys are names of the variables in the +checkpoint and values are current variables or names of current variables +(in default graph). +
+ + + + + + + + + + + + +
+`ValueError` + +If variables are missing from the current graph, or if +checkpoints or tensors are missing from the checkpoint. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/input_producer.md b/site/en/api_docs/python/tf/compat/v1/train/input_producer.md new file mode 100644 index 00000000000..4f3f1ef0f36 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/input_producer.md @@ -0,0 +1,177 @@ +description: Output the rows of input_tensor to a queue for an input pipeline. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.input_producer + + + + + + + + + +Output the rows of `input_tensor` to a queue for an input pipeline. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. + +Note: if `num_epochs` is not `None`, this function creates local counter +`epochs`. Use `local_variables_initializer()` to initialize local variables. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +A tensor with the rows to produce. Must be at least +one-dimensional. Must either have a fully-defined shape, or +`element_shape` must be defined. +
+`element_shape` + +(Optional.) A `TensorShape` representing the shape of a +row of `input_tensor`, if it cannot be inferred. +
+`num_epochs` + +(Optional.) An integer. If specified `input_producer` produces +each row of `input_tensor` `num_epochs` times before generating an +`OutOfRange` error. If not specified, `input_producer` can cycle through +the rows of `input_tensor` an unlimited number of times. +
+`shuffle` + +(Optional.) A boolean. If true, the rows are randomly shuffled +within each epoch. +
+`seed` + +(Optional.) An integer. The seed to use if `shuffle` is true. +
+`capacity` + +(Optional.) The capacity of the queue to be used for buffering +the input. +
+`shared_name` + +(Optional.) If set, this queue will be shared under the given +name across multiple sessions. +
+`summary_name` + +(Optional.) If set, a scalar summary for the current queue +size will be generated, using this name as part of the tag. +
+`name` + +(Optional.) A name for the queue. +
+`cancel_op` + +(Optional.) Cancel op for the queue. +
+ + + + + + + + + + + +
+A queue with the output rows. A `QueueRunner` for the queue is +added to the current `QUEUE_RUNNER` collection of the current +graph. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If the shape of the input cannot be inferred from the arguments. +
+`RuntimeError` + +If called with eager execution enabled. +
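A sketch of the tf.data replacement suggested in the deprecation notice, run eagerly on arbitrary sample data:

```python
import tensorflow as tf

input_tensor = tf.constant([[1, 2], [3, 4], [5, 6]])
num_epochs = 2
dataset = (tf.data.Dataset.from_tensor_slices(input_tensor)
           .shuffle(tf.shape(input_tensor, out_type=tf.int64)[0])
           .repeat(num_epochs))
for row in dataset:
  print(row.numpy())
```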
+ + + + +#### Eager Compatibility +Input pipelines based on Queues are not supported when eager execution is +enabled. Please use the tf.data API to ingest data under eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/inverse_time_decay.md b/site/en/api_docs/python/tf/compat/v1/train/inverse_time_decay.md new file mode 100644 index 00000000000..ad558b45805 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/inverse_time_decay.md @@ -0,0 +1,168 @@ +description: Applies inverse time decay to the initial learning rate. + +
+ + +
+ +# tf.compat.v1.train.inverse_time_decay + + + + + + + + + +Applies inverse time decay to the initial learning rate. + + + + + + + +When training a model, it is often recommended to lower the learning rate as +the training progresses. This function applies an inverse decay function +to a provided initial learning rate. It requires an `global_step` value to +compute the decayed learning rate. You can just pass a TensorFlow variable +that you increment at each training step. + +The function returns the decayed learning rate. It is computed as: + +```python +decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / +decay_step) +``` + +or, if `staircase` is `True`, as: + +```python +decayed_learning_rate = learning_rate / (1 + decay_rate * floor(global_step / +decay_step)) +``` + +Example: decay 1/t with a rate of 0.5: + +```python +... +global_step = tf.Variable(0, trainable=False) +learning_rate = 0.1 +decay_steps = 1.0 +decay_rate = 0.5 +learning_rate = tf.compat.v1.train.inverse_time_decay(learning_rate, +global_step, +decay_steps, decay_rate) + +# Passing global_step to minimize() will increment it at each step. +learning_step = ( + tf.compat.v1.train.GradientDescentOptimizer(learning_rate) + .minimize(...my loss..., global_step=global_step) +) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A scalar `float32` or `float64` `Tensor` or a Python number. +The initial learning rate. +
+`global_step` + +A Python number. Global step to use for the decay computation. +Must not be negative. +
+`decay_steps` + +How often to apply decay. +
+`decay_rate` + +A Python number. The decay rate. +
+`staircase` + +Whether to apply decay in a discrete staircase, as opposed to +continuous, fashion. +
+`name` + +String. Optional name of the operation. Defaults to +'InverseTimeDecay'. +
+ + + + + + + + + + + +
+A scalar `Tensor` of the same type as `learning_rate`. The decayed +learning rate. +
+ + + + + + + + + + + + +
+`ValueError` + +if `global_step` is not supplied. +
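For intuition, a plain-Python sketch of the schedule above (not the TensorFlow implementation); the sample values mirror the 1/t example:

```python
import math

def decayed_lr(learning_rate, global_step, decay_steps, decay_rate,
               staircase=False):
  p = global_step / decay_steps
  if staircase:
    p = math.floor(p)
  return learning_rate / (1 + decay_rate * p)

print(decayed_lr(0.1, 10, 1.0, 0.5))  # 0.1 / (1 + 0.5 * 10) ~= 0.0167
```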
+ + + + +#### Eager Compatibility +When eager execution is enabled, this function returns a function which in +turn returns the decayed learning rate Tensor. This can be useful for changing +the learning rate value across different invocations of optimizer functions. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/limit_epochs.md b/site/en/api_docs/python/tf/compat/v1/train/limit_epochs.md new file mode 100644 index 00000000000..a4af3cd2756 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/limit_epochs.md @@ -0,0 +1,102 @@ +description: Returns tensor num_epochs times and then raises an OutOfRange error. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.limit_epochs + + + + + + + + + +Returns tensor `num_epochs` times and then raises an `OutOfRange` error. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)`. + +Note: creates local counter `epochs`. Use `local_variables_initializer()` to +initialize local variables. + + + + + + + + + + + + + + + + +
+`tensor` + +Any `Tensor`. +
+`num_epochs` + +A positive integer (optional). If specified, limits the number +of steps the output tensor may be evaluated. +
+`name` + +A name for the operations (optional). +
+ + + + + + + + + + + +
+tensor or `OutOfRange`. +
+ + + + + + + + + + + + +
+`ValueError` + +if `num_epochs` is invalid. +
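A sketch of the tf.data replacement suggested in the deprecation notice, run eagerly on arbitrary sample data:

```python
import tensorflow as tf

tensor = tf.constant([1, 2, 3])
num_epochs = 2
dataset = tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)
for t in dataset:
  print(t.numpy())  # the full tensor is yielded num_epochs times
```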
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/linear_cosine_decay.md b/site/en/api_docs/python/tf/compat/v1/train/linear_cosine_decay.md new file mode 100644 index 00000000000..9af8af8e446 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/linear_cosine_decay.md @@ -0,0 +1,176 @@ +description: Applies linear cosine decay to the learning rate. + +
+ + +
+ +# tf.compat.v1.train.linear_cosine_decay + + + + + + + + + +Applies linear cosine decay to the learning rate. + + + + + + + +Note that linear cosine decay is more aggressive than cosine decay and +larger initial learning rates can typically be used. + +When training a model, it is often recommended to lower the learning rate as +the training progresses. This function applies a linear cosine decay function +to a provided initial learning rate. It requires a `global_step` value to +compute the decayed learning rate. You can just pass a TensorFlow variable +that you increment at each training step. + +The function returns the decayed learning rate. It is computed as: +```python +global_step = min(global_step, decay_steps) +linear_decay = (decay_steps - global_step) / decay_steps +cosine_decay = 0.5 * ( + 1 + cos(pi * 2 * num_periods * global_step / decay_steps)) +decayed = (alpha + linear_decay) * cosine_decay + beta +decayed_learning_rate = learning_rate * decayed +``` + +#### Example usage: + + +```python +decay_steps = 1000 +lr_decayed = linear_cosine_decay(learning_rate, global_step, decay_steps) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A scalar `float32` or `float64` Tensor or a Python number. +The initial learning rate. +
+`global_step` + +A scalar `int32` or `int64` `Tensor` or a Python number. Global +step to use for the decay computation. +
+`decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. Number +of steps to decay over. +
+`num_periods` + +Number of periods in the cosine part of the decay. See +computation above. +
+`alpha` + +See computation above. +
+`beta` + +See computation above. +
+`name` + +String. Optional name of the operation. Defaults to +'LinearCosineDecay'. +
+ + + + + + + + + + + +
+A scalar `Tensor` of the same type as `learning_rate`. The decayed +learning rate. +
+ + + + + + + + + + + + +
+`ValueError` + +if `global_step` is not supplied. +
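For intuition, a plain-Python sketch of the computation above (not the TensorFlow implementation); the default values for `num_periods`, `alpha`, and `beta` shown here are illustrative assumptions:

```python
import math

def decayed_lr(learning_rate, global_step, decay_steps,
               num_periods=0.5, alpha=0.0, beta=0.001):
  global_step = min(global_step, decay_steps)
  linear_decay = (decay_steps - global_step) / decay_steps
  cosine_decay = 0.5 * (
      1 + math.cos(math.pi * 2 * num_periods * global_step / decay_steps))
  decayed = (alpha + linear_decay) * cosine_decay + beta
  return learning_rate * decayed

print(decayed_lr(0.1, 500, 1000))  # halfway through the decay
```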
+ + +#### References: + +Neural Optimizer Search with Reinforcement Learning: + [Bello et al., 2017](http://proceedings.mlr.press/v70/bello17a.html) + ([pdf](http://proceedings.mlr.press/v70/bello17a/bello17a.pdf)) +Stochastic Gradient Descent with Warm Restarts: + [Loshchilov et al., 2017](https://openreview.net/forum?id=Skq89Scxx&noteId=Skq89Scxx) + ([pdf](https://openreview.net/pdf?id=Skq89Scxx)) + + + + +#### Eager Compatibility +When eager execution is enabled, this function returns a function which in +turn returns the decayed learning rate Tensor. This can be useful for changing +the learning rate value across different invocations of optimizer functions. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/maybe_batch.md b/site/en/api_docs/python/tf/compat/v1/train/maybe_batch.md new file mode 100644 index 00000000000..cf1387d7f24 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/maybe_batch.md @@ -0,0 +1,170 @@ +description: Conditionally creates batches of tensors based on keep_input. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.maybe_batch + + + + + + + + + +Conditionally creates batches of tensors based on `keep_input`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). + +See docstring in `batch` for more details. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensors` + +The list or dictionary of tensors to enqueue. +
+`keep_input` + +A `bool` Tensor. This tensor controls whether the input is +added to the queue or not. If it is a scalar and evaluates `True`, then +`tensors` are all added to the queue. If it is a vector and `enqueue_many` +is `True`, then each example is added to the queue only if the +corresponding value in `keep_input` is `True`. This tensor essentially +acts as a filtering mechanism. +
+`batch_size` + +The new batch size pulled from the queue. +
+`num_threads` + +The number of threads enqueuing `tensors`. The batching will +be nondeterministic if `num_threads > 1`. +
+`capacity` + +An integer. The maximum number of elements in the queue. +
+`enqueue_many` + +Whether each tensor in `tensors` is a single example. +
+`shapes` + +(Optional) The shapes for each example. Defaults to the +inferred shapes for `tensors`. +
+`dynamic_pad` + +Boolean. Allow variable dimensions in input shapes. +The given dimensions are padded upon dequeue so that tensors within a +batch have the same shapes. +
+`allow_smaller_final_batch` + +(Optional) Boolean. If `True`, allow the final +batch to be smaller if there are insufficient items left in the queue. +
+`shared_name` + +(Optional). If set, this queue will be shared under the given +name across multiple sessions. +
+`name` + +(Optional) A name for the operations. +
+ + + + + + + + + + + +
+A list or dictionary of tensors with the same types as `tensors`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the `shapes` are not specified, and cannot be +inferred from the elements of `tensors`. +
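A sketch of the tf.data replacement suggested in the deprecation notice; the filter predicate plays the role of `keep_input`:

```python
import tensorflow as tf

dataset = (tf.data.Dataset.range(10)
           .filter(lambda x: x % 2 == 0)  # keep_input analogue
           .batch(3))
for batch in dataset:
  print(batch.numpy())  # [0 2 4], then [6 8]
```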
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/maybe_batch_join.md b/site/en/api_docs/python/tf/compat/v1/train/maybe_batch_join.md new file mode 100644 index 00000000000..b27670e7c2b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/maybe_batch_join.md @@ -0,0 +1,164 @@ +description: Runs a list of tensors to conditionally fill a queue to create batches. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.maybe_batch_join + + + + + + + + + +Runs a list of tensors to conditionally fill a queue to create batches. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.interleave(...).filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). + +See docstring in `batch_join` for more details. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensors_list` + +A list of tuples or dictionaries of tensors to enqueue. +
+`keep_input` + +A `bool` Tensor. This tensor controls whether the input is +added to the queue or not. If it is a scalar and evaluates `True`, then +`tensors` are all added to the queue. If it is a vector and `enqueue_many` +is `True`, then each example is added to the queue only if the +corresponding value in `keep_input` is `True`. This tensor essentially +acts as a filtering mechanism. +
+`batch_size` + +An integer. The new batch size pulled from the queue. +
+`capacity` + +An integer. The maximum number of elements in the queue. +
+`enqueue_many` + +Whether each tensor in `tensor_list_list` is a single +example. +
+`shapes` + +(Optional) The shapes for each example. Defaults to the +inferred shapes for `tensor_list_list[i]`. +
+`dynamic_pad` + +Boolean. Allow variable dimensions in input shapes. +The given dimensions are padded upon dequeue so that tensors within a +batch have the same shapes. +
+`allow_smaller_final_batch` + +(Optional) Boolean. If `True`, allow the final +batch to be smaller if there are insufficient items left in the queue. +
+`shared_name` + +(Optional) If set, this queue will be shared under the given +name across multiple sessions. +
+`name` + +(Optional) A name for the operations. +
+ + + + + + + + + + + +
+A list or dictionary of tensors with the same number and types as +`tensors_list[i]`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the `shapes` are not specified, and cannot be +inferred from the elements of `tensor_list_list`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/maybe_shuffle_batch.md b/site/en/api_docs/python/tf/compat/v1/train/maybe_shuffle_batch.md new file mode 100644 index 00000000000..1861c9aec23 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/maybe_shuffle_batch.md @@ -0,0 +1,182 @@ +description: Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.maybe_shuffle_batch + + + + + + + + + +Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.filter(...).shuffle(min_after_dequeue).batch(batch_size)`. + +See docstring in `shuffle_batch` for more details. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensors` + +The list or dictionary of tensors to enqueue. +
+`batch_size` + +The new batch size pulled from the queue. +
+`capacity` + +An integer. The maximum number of elements in the queue. +
+`min_after_dequeue` + +Minimum number of elements in the queue after a +dequeue, used to ensure a level of mixing of elements. +
+`keep_input` + +A `bool` Tensor. This tensor controls whether the input is +added to the queue or not. If it is a scalar and evaluates `True`, then +`tensors` are all added to the queue. If it is a vector and `enqueue_many` +is `True`, then each example is added to the queue only if the +corresponding value in `keep_input` is `True`. This tensor essentially +acts as a filtering mechanism. +
+`num_threads` + +The number of threads enqueuing `tensor_list`. +
+`seed` + +Seed for the random shuffling within the queue. +
+`enqueue_many` + +Whether each tensor in `tensor_list` is a single example. +
+`shapes` + +(Optional) The shapes for each example. Defaults to the +inferred shapes for `tensor_list`. +
+`allow_smaller_final_batch` + +(Optional) Boolean. If `True`, allow the final +batch to be smaller if there are insufficient items left in the queue. +
+`shared_name` + +(Optional) If set, this queue will be shared under the given +name across multiple sessions. +
+`name` + +(Optional) A name for the operations. +
+ + + + + + + + + + + +
+A list or dictionary of tensors with the types as `tensors`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the `shapes` are not specified, and cannot be +inferred from the elements of `tensors`. +
+ + + + +#### Eager Compatibility +Input pipelines based on Queues are not supported when eager execution is +enabled. Please use the tf.data API to ingest data under eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/maybe_shuffle_batch_join.md b/site/en/api_docs/python/tf/compat/v1/train/maybe_shuffle_batch_join.md new file mode 100644 index 00000000000..be3057047c0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/maybe_shuffle_batch_join.md @@ -0,0 +1,177 @@ +description: Create batches by randomly shuffling conditionally-enqueued tensors. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.maybe_shuffle_batch_join + + + + + + + + + +Create batches by randomly shuffling conditionally-enqueued tensors. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.interleave(...).filter(...).shuffle(min_after_dequeue).batch(batch_size)`. + +See docstring in `shuffle_batch_join` for more details. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensors_list` + +A list of tuples or dictionaries of tensors to enqueue. +
+`batch_size` + +An integer. The new batch size pulled from the queue. +
+`capacity` + +An integer. The maximum number of elements in the queue. +
+`min_after_dequeue` + +Minimum number of elements in the queue after a +dequeue, used to ensure a level of mixing of elements. +
+`keep_input` + +A `bool` Tensor. This tensor controls whether the input is +added to the queue or not. If it is a scalar and evaluates `True`, then +`tensors` are all added to the queue. If it is a vector and `enqueue_many` +is `True`, then each example is added to the queue only if the +corresponding value in `keep_input` is `True`. This tensor essentially +acts as a filtering mechanism. +
+`seed` + +Seed for the random shuffling within the queue. +
+`enqueue_many` + +Whether each tensor in `tensor_list_list` is a single +example. +
+`shapes` + +(Optional) The shapes for each example. Defaults to the +inferred shapes for `tensors_list[i]`. +
+`allow_smaller_final_batch` + +(Optional) Boolean. If `True`, allow the final +batch to be smaller if there are insufficient items left in the queue. +
+`shared_name` + +(optional). If set, this queue will be shared under the given +name across multiple sessions. +
+`name` + +(Optional) A name for the operations. +
+ + + + + + + + + + + +
+A list or dictionary of tensors with the same number and types as +`tensors_list[i]`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the `shapes` are not specified, and cannot be +inferred from the elements of `tensors_list`. +
+ + + + +#### Eager Compatibility +Input pipelines based on Queues are not supported when eager execution is +enabled. Please use the tf.data API to ingest data under eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/natural_exp_decay.md b/site/en/api_docs/python/tf/compat/v1/train/natural_exp_decay.md new file mode 100644 index 00000000000..9ed69929a14 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/natural_exp_decay.md @@ -0,0 +1,168 @@ +description: Applies natural exponential decay to the initial learning rate. + +
+ + +
+ +# tf.compat.v1.train.natural_exp_decay + + + + + + + + + +Applies natural exponential decay to the initial learning rate. + + + + + + + +When training a model, it is often recommended to lower the learning rate as +the training progresses. This function applies an exponential decay function +to a provided initial learning rate. It requires an `global_step` value to +compute the decayed learning rate. You can just pass a TensorFlow variable +that you increment at each training step. + +The function returns the decayed learning rate. It is computed as: + +```python +decayed_learning_rate = learning_rate * exp(-decay_rate * global_step / +decay_step) +``` + +or, if `staircase` is `True`, as: + +```python +decayed_learning_rate = learning_rate * exp(-decay_rate * floor(global_step / +decay_step)) +``` + +Example: decay exponentially with a base of 0.96: + +```python +... +global_step = tf.Variable(0, trainable=False) +learning_rate = 0.1 +decay_steps = 5 +k = 0.5 +learning_rate = tf.compat.v1.train.natural_exp_decay(learning_rate, +global_step, + decay_steps, k) + +# Passing global_step to minimize() will increment it at each step. +learning_step = ( + tf.compat.v1.train.GradientDescentOptimizer(learning_rate) + .minimize(...my loss..., global_step=global_step) +) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A scalar `float32` or `float64` `Tensor` or a Python number. +The initial learning rate. +
+`global_step` + +A Python number. Global step to use for the decay computation. +Must not be negative. +
+`decay_steps` + +How often to apply decay. +
+`decay_rate` + +A Python number. The decay rate. +
+`staircase` + +Whether to apply decay in a discrete staircase, as opposed to +continuous, fashion. +
+`name` + +String. Optional name of the operation. Defaults to +'ExponentialTimeDecay'. +
+ + + + + + + + + + + +
+A scalar `Tensor` of the same type as `learning_rate`. The decayed +learning rate. +
+ + + + + + + + + + + + +
+`ValueError` + +if `global_step` is not supplied. +
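For intuition, a plain-Python sketch of the schedule above (not the TensorFlow implementation); the sample values mirror the example:

```python
import math

def decayed_lr(learning_rate, global_step, decay_steps, decay_rate,
               staircase=False):
  p = global_step / decay_steps
  if staircase:
    p = math.floor(p)
  return learning_rate * math.exp(-decay_rate * p)

print(decayed_lr(0.1, 10, 5, 0.5))  # 0.1 * exp(-1.0) ~= 0.0368
```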
+ + + + +#### Eager Compatibility +When eager execution is enabled, this function returns a function which in +turn returns the decayed learning rate Tensor. This can be useful for changing +the learning rate value across different invocations of optimizer functions. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/noisy_linear_cosine_decay.md b/site/en/api_docs/python/tf/compat/v1/train/noisy_linear_cosine_decay.md new file mode 100644 index 00000000000..3e9786e2b8d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/noisy_linear_cosine_decay.md @@ -0,0 +1,194 @@ +description: Applies noisy linear cosine decay to the learning rate. + +
+ + +
+ +# tf.compat.v1.train.noisy_linear_cosine_decay + + + + + + + + + +Applies noisy linear cosine decay to the learning rate. + + + + + + + +Note that linear cosine decay is more aggressive than cosine decay and +larger initial learning rates can typically be used. + +When training a model, it is often recommended to lower the learning rate as +the training progresses. This function applies a noisy linear +cosine decay function to a provided initial learning rate. +It requires a `global_step` value to compute the decayed learning rate. +You can just pass a TensorFlow variable that you increment at each +training step. + +The function returns the decayed learning rate. It is computed as: +```python +global_step = min(global_step, decay_steps) +linear_decay = (decay_steps - global_step) / decay_steps +cosine_decay = 0.5 * ( + 1 + cos(pi * 2 * num_periods * global_step / decay_steps)) +decayed = (alpha + linear_decay + eps_t) * cosine_decay + beta +decayed_learning_rate = learning_rate * decayed +``` +where eps_t is 0-centered Gaussian noise with variance +initial_variance / (1 + global_step) ** variance_decay. + +#### Example usage: + + +```python +decay_steps = 1000 +lr_decayed = noisy_linear_cosine_decay( + learning_rate, global_step, decay_steps) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A scalar `float32` or `float64` Tensor or a Python number. +The initial learning rate. +
+`global_step` + +A scalar `int32` or `int64` `Tensor` or a Python number. Global +step to use for the decay computation. +
+`decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. Number +of steps to decay over. +
+`initial_variance` + +initial variance for the noise. See computation above. +
+`variance_decay` + +decay for the noise's variance. See computation above. +
+`num_periods` + +Number of periods in the cosine part of the decay. See +computation above. +
+`alpha` + +See computation above. +
+`beta` + +See computation above. +
+`name` + +String. Optional name of the operation. Defaults to +'NoisyLinearCosineDecay'. +
+ + + + + + + + + + + +
+A scalar `Tensor` of the same type as `learning_rate`. The decayed +learning rate. +
+ + + + + + + + + + + + +
+`ValueError` + +if `global_step` is not supplied. +
+ + +#### References: + +Neural Optimizer Search with Reinforcement Learning: + [Bello et al., 2017](http://proceedings.mlr.press/v70/bello17a.html) + ([pdf](http://proceedings.mlr.press/v70/bello17a/bello17a.pdf)) +Stochastic Gradient Descent with Warm Restarts: + [Loshchilov et al., 2017](https://openreview.net/forum?id=Skq89Scxx&noteId=Skq89Scxx) + ([pdf](https://openreview.net/pdf?id=Skq89Scxx)) + + + + +#### Eager Compatibility +When eager execution is enabled, this function returns a function which in +turn returns the decayed learning rate Tensor. This can be useful for changing +the learning rate value across different invocations of optimizer functions. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/piecewise_constant.md b/site/en/api_docs/python/tf/compat/v1/train/piecewise_constant.md new file mode 100644 index 00000000000..d583d2ca9ab --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/piecewise_constant.md @@ -0,0 +1,142 @@ +description: Piecewise constant from boundaries and interval values. + +
+ + +
+ +# tf.compat.v1.train.piecewise_constant + + + + + + + + + +Piecewise constant from boundaries and interval values. + + + + + + + + + +Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5 + for the next 10000 steps, and 0.1 for any additional steps. + +```python +global_step = tf.Variable(0, trainable=False) +boundaries = [100000, 110000] +values = [1.0, 0.5, 0.1] +learning_rate = tf.compat.v1.train.piecewise_constant(global_step, boundaries, +values) + +# Later, whenever we perform an optimization step, we increment global_step. +``` + + + + + + + + + + + + + + + + + + + +
+`x` + +A 0-D scalar `Tensor`. Must be one of the following types: `float32`, +`float64`, `uint8`, `int8`, `int16`, `int32`, `int64`. +
+`boundaries` + +A list of `Tensor`s or `int`s or `float`s with strictly +increasing entries, and with all elements having the same type as `x`. +
+`values` + +A list of `Tensor`s or `float`s or `int`s that specifies the values +for the intervals defined by `boundaries`. It should have one more element +than `boundaries`, and all elements should have the same type. +
+`name` + +A string. Optional name of the operation. Defaults to +'PiecewiseConstant'. +
+ + + + + + + + + + + +
+A 0-D Tensor. Its value is `values[0]` when `x <= boundaries[0]`, +`values[1]` when `x > boundaries[0]` and `x <= boundaries[1]`, ..., +and `values[-1]` when `x > boundaries[-1]`. +
+ + + + + + + + + + + + +
+`ValueError` + +if types of `x` and `boundaries` do not match, or types of all +`values` do not match or +the number of elements in the lists does not match. +
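For intuition, a plain-Python sketch of the piecewise schedule above (not the TensorFlow implementation), using the boundaries from the example:

```python
def piecewise_constant(x, boundaries, values):
  for boundary, value in zip(boundaries, values):
    if x <= boundary:
      return value
  return values[-1]

boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
print(piecewise_constant(100500, boundaries, values))  # 0.5
```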
+ + + + +#### Eager Compatibility +When eager execution is enabled, this function returns a function which in +turn returns the decayed learning rate Tensor. This can be useful for changing +the learning rate value across different invocations of optimizer functions. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/polynomial_decay.md b/site/en/api_docs/python/tf/compat/v1/train/polynomial_decay.md new file mode 100644 index 00000000000..8a4cfa57e7b --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/polynomial_decay.md @@ -0,0 +1,186 @@ +description: Applies a polynomial decay to the learning rate. + +
+ + +
+ +# tf.compat.v1.train.polynomial_decay + + + + + + + + + +Applies a polynomial decay to the learning rate. + + + + + + + +It is commonly observed that a monotonically decreasing learning rate, whose +degree of change is carefully chosen, results in a better performing model. +This function applies a polynomial decay function to a provided initial +`learning_rate` to reach an `end_learning_rate` in the given `decay_steps`. + +It requires a `global_step` value to compute the decayed learning rate. You +can just pass a TensorFlow variable that you increment at each training step. + +The function returns the decayed learning rate. It is computed as: + +```python +global_step = min(global_step, decay_steps) +decayed_learning_rate = (learning_rate - end_learning_rate) * + (1 - global_step / decay_steps) ^ (power) + + end_learning_rate + +``` + +If `cycle` is True then a multiple of `decay_steps` is used, the first one +that is bigger than `global_steps`. + +```python +decay_steps = decay_steps * ceil(global_step / decay_steps) +decayed_learning_rate = (learning_rate - end_learning_rate) * + (1 - global_step / decay_steps) ^ (power) + + end_learning_rate + +``` + +Example: decay from 0.1 to 0.01 in 10000 steps using sqrt (i.e. power=0.5): + +```python +... +global_step = tf.Variable(0, trainable=False) +starter_learning_rate = 0.1 +end_learning_rate = 0.01 +decay_steps = 10000 +learning_rate = tf.compat.v1.train.polynomial_decay(starter_learning_rate, +global_step, + decay_steps, end_learning_rate, + power=0.5) +# Passing global_step to minimize() will increment it at each step. +learning_step = ( + tf.compat.v1.train.GradientDescentOptimizer(learning_rate) + .minimize(...my loss..., global_step=global_step) +) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A scalar `float32` or `float64` `Tensor` or a Python number. +The initial learning rate. +
+`global_step` + +A scalar `int32` or `int64` `Tensor` or a Python number. Global +step to use for the decay computation. Must not be negative. +
+`decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. Must +be positive. See the decay computation above. +
+`end_learning_rate` + +A scalar `float32` or `float64` `Tensor` or a Python +number. The minimal end learning rate. +
+`power` + +A scalar `float32` or `float64` `Tensor` or a Python number. The +power of the polynomial. Defaults to linear, 1.0. +
+`cycle` + +A boolean, whether or not it should cycle beyond decay_steps. +
+`name` + +String. Optional name of the operation. Defaults to +'PolynomialDecay'. +
+ + + + + + + + + + + +
+A scalar `Tensor` of the same type as `learning_rate`. The decayed +learning rate. +
+ + + + + + + + + + + + +
+`ValueError` + +if `global_step` is not supplied. +
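For intuition, a plain-Python sketch of the decay above (not the TensorFlow implementation); the default values and the guard inside the `cycle` branch are assumptions of this sketch:

```python
import math

def decayed_lr(learning_rate, global_step, decay_steps,
               end_learning_rate=0.0001, power=1.0, cycle=False):
  if cycle:
    # Use the first multiple of decay_steps larger than global_step.
    decay_steps = decay_steps * max(1, math.ceil(global_step / decay_steps))
  else:
    global_step = min(global_step, decay_steps)
  return ((learning_rate - end_learning_rate) *
          (1 - global_step / decay_steps) ** power + end_learning_rate)

# Halfway through the sqrt schedule from the example above:
print(decayed_lr(0.1, 5000, 10000, end_learning_rate=0.01, power=0.5))
```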
+ + + + +#### Eager Compatibility +When eager execution is enabled, this function returns a function which in +turn returns the decayed learning rate Tensor. This can be useful for changing +the learning rate value across different invocations of optimizer functions. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/queue_runner.md b/site/en/api_docs/python/tf/compat/v1/train/queue_runner.md new file mode 100644 index 00000000000..35f4413d610 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/queue_runner.md @@ -0,0 +1,31 @@ +description: Public API for tf.train.queue_runner namespace. + +
+ + +
+ +# Module: tf.compat.v1.train.queue_runner + + + + + + + + + +Public API for tf.train.queue_runner namespace. + + + +## Classes + +[`class QueueRunner`](../../../../tf/compat/v1/train/QueueRunner.md): Holds a list of enqueue operations for a queue, each to be run in a thread. + +## Functions + +[`add_queue_runner(...)`](../../../../tf/compat/v1/train/add_queue_runner.md): Adds a `QueueRunner` to a collection in the graph. (deprecated) + +[`start_queue_runners(...)`](../../../../tf/compat/v1/train/start_queue_runners.md): Starts all queue runners collected in the graph. (deprecated) + diff --git a/site/en/api_docs/python/tf/compat/v1/train/range_input_producer.md b/site/en/api_docs/python/tf/compat/v1/train/range_input_producer.md new file mode 100644 index 00000000000..ae3991bc181 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/range_input_producer.md @@ -0,0 +1,126 @@ +description: Produces the integers from 0 to limit-1 in a queue. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.range_input_producer + + + + + + + + + +Produces the integers from 0 to limit-1 in a queue. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. + +Note: if `num_epochs` is not `None`, this function creates local counter +`epochs`. Use `local_variables_initializer()` to initialize local variables. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`limit` + +An int32 scalar tensor. +
+`num_epochs` + +An integer (optional). If specified, `range_input_producer` +produces each integer `num_epochs` times before generating an +OutOfRange error. If not specified, `range_input_producer` can cycle +through the integers an unlimited number of times. +
+`shuffle` + +Boolean. If true, the integers are randomly shuffled within each +epoch. +
+`seed` + +An integer (optional). Seed used if shuffle == True. +
+`capacity` + +An integer. Sets the queue capacity. +
+`shared_name` + +(optional). If set, this queue will be shared under the given +name across multiple sessions. +
+`name` + +A name for the operations (optional). +
+ + + + + + + + + + + +
+A Queue with the output integers. A `QueueRunner` for the Queue +is added to the current `Graph`'s `QUEUE_RUNNER` collection. +
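A sketch of the tf.data replacement suggested in the deprecation notice:

```python
import tensorflow as tf

limit, num_epochs = 5, 2
dataset = tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs)
for i in dataset:
  print(i.numpy())
```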
+ + + + +#### Eager Compatibility +Input pipelines based on Queues are not supported when eager execution is +enabled. Please use the tf.data API to ingest data under eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/remove_checkpoint.md b/site/en/api_docs/python/tf/compat/v1/train/remove_checkpoint.md new file mode 100644 index 00000000000..1d259d1b718 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/remove_checkpoint.md @@ -0,0 +1,71 @@ +description: Removes a checkpoint given by checkpoint_prefix. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.remove_checkpoint + + + + + + + + + +Removes a checkpoint given by `checkpoint_prefix`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use standard file APIs to delete files with this prefix. + + + + + + + + + + + + + + + + +
+`checkpoint_prefix` + +The prefix of a V1 or V2 checkpoint. Typically the result +of `Saver.save()` or that of tf.train.latest_checkpoint(), regardless of +sharded/non-sharded or V1/V2. +
+`checkpoint_format_version` + +`SaverDef.CheckpointFormatVersion`, defaults to +`SaverDef.V2`. +
+`meta_graph_suffix` + +Suffix for `MetaGraphDef` file. Defaults to 'meta'. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/replica_device_setter.md b/site/en/api_docs/python/tf/compat/v1/train/replica_device_setter.md new file mode 100644 index 00000000000..dbdc00b2fdd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/replica_device_setter.md @@ -0,0 +1,161 @@ +description: Return a device function to use when building a Graph for replicas. + +
+ + +
+ +# tf.compat.v1.train.replica_device_setter + + + + + + + + + +Return a `device function` to use when building a Graph for replicas. + + + + + + + +Device Functions are used in `with tf.device(device_function):` statement to +automatically assign devices to `Operation` objects as they are constructed, +Device constraints are added from the inner-most context first, working +outwards. The merging behavior adds constraints to fields that are yet unset +by a more inner context. Currently the fields are (job, task, cpu/gpu). + +If `cluster` is `None`, and `ps_tasks` is 0, the returned function is a no-op. +Otherwise, the value of `ps_tasks` is derived from `cluster`. + +By default, only Variable ops are placed on ps tasks, and the placement +strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used +to do more intelligent placement, such as +`tf.contrib.training.GreedyLoadBalancingStrategy`. + +For example, + +```python +# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker +# jobs on hosts worker0, worker1 and worker2. +cluster_spec = { + "ps": ["ps0:2222", "ps1:2222"], + "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]} +with +tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)): + # Build your graph + v1 = tf.Variable(...) # assigned to /job:ps/task:0 + v2 = tf.Variable(...) # assigned to /job:ps/task:1 + v3 = tf.Variable(...) # assigned to /job:ps/task:0 +# Run compute +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`ps_tasks` + +Number of tasks in the `ps` job. Ignored if `cluster` is +provided. +
+`ps_device` + +String. Device of the `ps` job. If empty no `ps` job is used. +Defaults to `ps`. +
+`worker_device` + +String. Device of the `worker` job. If empty no `worker` +job is used. +
+`merge_devices` + +`Boolean`. If `True`, device specifications are merged rather than overridden: a device field is only set when the corresponding constraint is still completely unset. +
+`cluster` + +`ClusterDef` proto or `ClusterSpec`. +
+`ps_ops` + +List of strings representing `Operation` types that need to be +placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`. +
+`ps_strategy` + +A callable invoked for every ps `Operation` (i.e. matched by +`ps_ops`), that takes the `Operation` and returns the ps task index to +use. If `None`, defaults to a round-robin strategy across all `ps` +devices. +
+ + + + + + + + + + + +
+A function to pass to tf.device(). +
+ + + + + + + + + + + +
+TypeError if `cluster` is not a dictionary or `ClusterDef` protocol buffer, +or if `ps_strategy` is provided but not a callable. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/sdca_fprint.md b/site/en/api_docs/python/tf/compat/v1/train/sdca_fprint.md new file mode 100644 index 00000000000..3ecc1f15780 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/sdca_fprint.md @@ -0,0 +1,67 @@ +description: Computes fingerprints of the input strings. + +
+ + +
+ +# tf.compat.v1.train.sdca_fprint + + + + + + + + + +Computes fingerprints of the input strings. + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +vector of strings to compute fingerprints on. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/sdca_optimizer.md b/site/en/api_docs/python/tf/compat/v1/train/sdca_optimizer.md new file mode 100644 index 00000000000..42fa39aaad1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/sdca_optimizer.md @@ -0,0 +1,234 @@ +description: Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for + +
+ + +
+ +# tf.compat.v1.train.sdca_optimizer + + + + + + + + + +Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for + + + + + + + +linear models with L1 + L2 regularization. As global optimization objective is +strongly-convex, the optimizer optimizes the dual objective at each step. The +optimizer applies each update one example at a time. Examples are sampled +uniformly, and the optimizer is learning rate free and enjoys linear convergence +rate. + +[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf).
+Shai Shalev-Shwartz, Tong Zhang. 2012 + +$$\text{Loss Objective} = \sum_i f_i(w x_i) + \frac{l_2}{2} \|w\|_2^2 + l_1 \|w\|_1$$ + +[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508).<br>
+Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, +Peter Richtarik, Martin Takac. 2015 + +[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053).
+Dominik Csiba, Zheng Qu, Peter Richtarik. 2015 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_example_indices` + +A list of `Tensor` objects with type `int64`. +a list of vectors which contain example indices. +
+`sparse_feature_indices` + +A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. +a list of vectors which contain feature indices. +
+`sparse_feature_values` + +A list of `Tensor` objects with type `float32`. +a list of vectors which contains feature value +associated with each feature group. +
+`dense_features` + +A list of `Tensor` objects with type `float32`. +a list of matrices which contains the dense feature values. +
+`example_weights` + +A `Tensor` of type `float32`. +a vector which contains the weight associated with each +example. +
+`example_labels` + +A `Tensor` of type `float32`. +a vector which contains the label/target associated with each +example. +
+`sparse_indices` + +A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. +a list of vectors where each value is an index that has a corresponding weight in `sparse_weights`. This field may be omitted for the dense approach. +
+`sparse_weights` + +A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. +a list of vectors where each value is the weight associated with +a sparse feature group. +
+`dense_weights` + +A list with the same length as `dense_features` of `Tensor` objects with type `float32`. +a list of vectors where the values are the weights associated +with a dense feature group. +
+`example_state_data` + +A `Tensor` of type `float32`. +a list of vectors containing the example state data. +
+`loss_type` + +A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. +Type of the primal loss. Currently SdcaSolver supports logistic, +squared and hinge losses. +
+`l1` + +A `float`. Symmetric l1 regularization strength. +
+`l2` + +A `float`. Symmetric l2 regularization strength. +
+`num_loss_partitions` + +An `int` that is `>= 1`. +Number of partitions of the global loss function. +
+`num_inner_iterations` + +An `int` that is `>= 1`. +Number of iterations per mini-batch. +
+`adaptative` + +An optional `bool`. Defaults to `True`. +Whether to use Adaptive SDCA for the inner loop. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights). +
+`out_example_state_data` + +A `Tensor` of type `float32`. +
+`out_delta_sparse_weights` + +A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. +
+`out_delta_dense_weights` + +A list with the same length as `dense_features` of `Tensor` objects with type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/sdca_shrink_l1.md b/site/en/api_docs/python/tf/compat/v1/train/sdca_shrink_l1.md new file mode 100644 index 00000000000..72b5bba4893 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/sdca_shrink_l1.md @@ -0,0 +1,83 @@ +description: Applies L1 regularization shrink step on the parameters. + +
+ + +
+ +# tf.compat.v1.train.sdca_shrink_l1 + + + + + + + + + +Applies L1 regularization shrink step on the parameters. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`weights` + +A list of `Tensor` objects with type mutable `float32`. +a list of vectors where each value is the weight associated with a +feature group. +
+`l1` + +A `float`. Symmetric l1 regularization strength. +
+`l2` + +A `float`. +Symmetric l2 regularization strength. Should be a positive float. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/shuffle_batch.md b/site/en/api_docs/python/tf/compat/v1/train/shuffle_batch.md new file mode 100644 index 00000000000..2221a1f93f7 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/shuffle_batch.md @@ -0,0 +1,220 @@ +description: Creates batches by randomly shuffling tensors. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.shuffle_batch + + + + + + + + + +Creates batches by randomly shuffling tensors. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.shuffle(min_after_dequeue).batch(batch_size)`. + +This function adds the following to the current `Graph`: + +* A shuffling queue into which tensors from `tensors` are enqueued. +* A `dequeue_many` operation to create batches from the queue. +* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors + from `tensors`. + +If `enqueue_many` is `False`, `tensors` is assumed to represent a +single example. An input tensor with shape `[x, y, z]` will be output +as a tensor with shape `[batch_size, x, y, z]`. + +If `enqueue_many` is `True`, `tensors` is assumed to represent a +batch of examples, where the first dimension is indexed by example, +and all members of `tensors` should have the same size in the +first dimension. If an input tensor has shape `[*, x, y, z]`, the +output will have shape `[batch_size, x, y, z]`. + +The `capacity` argument controls the how long the prefetching is allowed to +grow the queues. + +The returned operation is a dequeue operation and will throw +tf.errors.OutOfRangeError if the input queue is exhausted. If this +operation is feeding another input queue, its queue runner will catch +this exception, however, if this operation is used in your main thread +you are responsible for catching this yourself. + +#### For example: + + + +```python +# Creates batches of 32 images and 32 labels. +image_batch, label_batch = tf.compat.v1.train.shuffle_batch( + [single_image, single_label], + batch_size=32, + num_threads=4, + capacity=50000, + min_after_dequeue=10000) +``` + +*N.B.:* You must ensure that either (i) the `shapes` argument is +passed, or (ii) all of the tensors in `tensors` must have +fully-defined shapes. `ValueError` will be raised if neither of +these conditions holds. + +If `allow_smaller_final_batch` is `True`, a smaller batch value than +`batch_size` is returned when the queue is closed and there are not enough +elements to fill the batch, otherwise the pending elements are discarded. +In addition, all output tensors' static shapes, as accessed via the +`shape` property will have a first `Dimension` value of `None`, and +operations that depend on fixed batch_size would fail. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensors` + +The list or dictionary of tensors to enqueue. +
+`batch_size` + +The new batch size pulled from the queue. +
+`capacity` + +An integer. The maximum number of elements in the queue. +
+`min_after_dequeue` + +Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements. +
+`num_threads` + +The number of threads enqueuing `tensor_list`. +
+`seed` + +Seed for the random shuffling within the queue. +
+`enqueue_many` + +Whether each tensor in `tensor_list` is a single example. +
+`shapes` + +(Optional) The shapes for each example. Defaults to the +inferred shapes for `tensor_list`. +
+`allow_smaller_final_batch` + +(Optional) Boolean. If `True`, allow the final +batch to be smaller if there are insufficient items left in the queue. +
+`shared_name` + +(Optional) If set, this queue will be shared under the given +name across multiple sessions. +
+`name` + +(Optional) A name for the operations. +
+ + + + + + + + + + + +
+A list or dictionary of tensors with the same types as `tensors`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the `shapes` are not specified, and cannot be +inferred from the elements of `tensors`. +
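For comparison with the queue-based example above, a sketch of the `tf.data` replacement from the deprecation notice; the dummy image/label tensors are stand-ins introduced only for illustration:

```python
import tensorflow as tf

# Dummy stand-ins for the decoded single-example tensors used above.
images = tf.zeros([1000, 28, 28, 1])
labels = tf.zeros([1000], dtype=tf.int32)

# tf.data equivalent of shuffle_batch(..., batch_size=32, min_after_dequeue=10000):
dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .shuffle(10000)
           .batch(32))
```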
+ + + + +#### Eager Compatibility +Input pipelines based on Queues are not supported when eager execution is +enabled. Please use the tf.data API to ingest data under eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/shuffle_batch_join.md b/site/en/api_docs/python/tf/compat/v1/train/shuffle_batch_join.md new file mode 100644 index 00000000000..a2e4868105f --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/shuffle_batch_join.md @@ -0,0 +1,206 @@ +description: Create batches by randomly shuffling tensors. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.shuffle_batch_join + + + + + + + + + +Create batches by randomly shuffling tensors. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`. + +The `tensors_list` argument is a list of tuples of tensors, or a list of +dictionaries of tensors. Each element in the list is treated similarly +to the `tensors` argument of tf.compat.v1.train.shuffle_batch(). + +This version enqueues a different list of tensors in different threads. +It adds the following to the current `Graph`: + +* A shuffling queue into which tensors from `tensors_list` are enqueued. +* A `dequeue_many` operation to create batches from the queue. +* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors + from `tensors_list`. + +`len(tensors_list)` threads will be started, with thread `i` enqueuing +the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match +`tensors_list[i2][j]` in type and shape, except in the first dimension if +`enqueue_many` is true. + +If `enqueue_many` is `False`, each `tensors_list[i]` is assumed +to represent a single example. An input tensor with shape `[x, y, z]` +will be output as a tensor with shape `[batch_size, x, y, z]`. + +If `enqueue_many` is `True`, `tensors_list[i]` is assumed to +represent a batch of examples, where the first dimension is indexed +by example, and all members of `tensors_list[i]` should have the +same size in the first dimension. If an input tensor has shape `[*, x, +y, z]`, the output will have shape `[batch_size, x, y, z]`. + +The `capacity` argument controls the how long the prefetching is allowed to +grow the queues. + +The returned operation is a dequeue operation and will throw +tf.errors.OutOfRangeError if the input queue is exhausted. If this +operation is feeding another input queue, its queue runner will catch +this exception, however, if this operation is used in your main thread +you are responsible for catching this yourself. + +If `allow_smaller_final_batch` is `True`, a smaller batch value than +`batch_size` is returned when the queue is closed and there are not enough +elements to fill the batch, otherwise the pending elements are discarded. +In addition, all output tensors' static shapes, as accessed via the +`shape` property will have a first `Dimension` value of `None`, and +operations that depend on fixed batch_size would fail. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensors_list` + +A list of tuples or dictionaries of tensors to enqueue. +
+`batch_size` + +An integer. The new batch size pulled from the queue. +
+`capacity` + +An integer. The maximum number of elements in the queue. +
+`min_after_dequeue` + +Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements. +
+`seed` + +Seed for the random shuffling within the queue. +
+`enqueue_many` + +Whether each tensor in `tensor_list_list` is a single +example. +
+`shapes` + +(Optional) The shapes for each example. Defaults to the +inferred shapes for `tensors_list[i]`. +
+`allow_smaller_final_batch` + +(Optional) Boolean. If `True`, allow the final +batch to be smaller if there are insufficient items left in the queue. +
+`shared_name` + +(optional). If set, this queue will be shared under the given +name across multiple sessions. +
+`name` + +(Optional) A name for the operations. +
+ + + + + + + + + + + +
+A list or dictionary of tensors with the same number and types as +`tensors_list[i]`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the `shapes` are not specified, and cannot be +inferred from the elements of `tensors_list`. +
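As a rough `tf.data` counterpart to the multi-threaded enqueuing described above (a sketch only; the shard file pattern is hypothetical):

```python
import tensorflow as tf

files = tf.data.Dataset.list_files("/tmp/data/shard-*.tfrecord")

# interleave() plays the role of the per-thread enqueuing done by
# shuffle_batch_join; shuffle() + batch() replace the shuffling queue.
dataset = (files
           .interleave(tf.data.TFRecordDataset,
                       cycle_length=4,
                       num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .shuffle(10000)
           .batch(32))
```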
+ + + + +#### Eager Compatibility +Input pipelines based on Queues are not supported when eager execution is +enabled. Please use the tf.data API to ingest data under eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/slice_input_producer.md b/site/en/api_docs/python/tf/compat/v1/train/slice_input_producer.md new file mode 100644 index 00000000000..1e41bd1b875 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/slice_input_producer.md @@ -0,0 +1,145 @@ +description: Produces a slice of each Tensor in tensor_list. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.slice_input_producer + + + + + + + + + +Produces a slice of each `Tensor` in `tensor_list`. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.from_tensor_slices(tuple(tensor_list)).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. + +Implemented using a Queue -- a `QueueRunner` for the Queue +is added to the current `Graph`'s `QUEUE_RUNNER` collection. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensor_list` + +A list of `Tensor` objects. Every `Tensor` in +`tensor_list` must have the same size in the first dimension. +
+`num_epochs` + +An integer (optional). If specified, `slice_input_producer` +produces each slice `num_epochs` times before generating +an `OutOfRange` error. If not specified, `slice_input_producer` can cycle +through the slices an unlimited number of times. +
+`shuffle` + +Boolean. If true, the integers are randomly shuffled within each +epoch. +
+`seed` + +An integer (optional). Seed used if shuffle == True. +
+`capacity` + +An integer. Sets the queue capacity. +
+`shared_name` + +(optional). If set, this queue will be shared under the given +name across multiple sessions. +
+`name` + +A name for the operations (optional). +
+ + + + + + + + + + + +
+A list of tensors, one for each element of `tensor_list`. If the tensor in `tensor_list` has shape `[N, a, b, ..., z]`, then the corresponding output tensor will have shape `[a, b, ..., z]`. +
+ + + + + + + + + + + + +
+`ValueError` + +if `slice_input_producer` produces nothing from `tensor_list`. +
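A minimal sketch of the `tf.data` replacement suggested above, using two small tensors that share the same first dimension, as required:

```python
import tensorflow as tf

features = tf.constant([[1., 2.], [3., 4.], [5., 6.]])
labels = tf.constant([0, 1, 0])

# tf.data equivalent of slice_input_producer([features, labels], shuffle=True):
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(3)
           .repeat(1))
```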
+ + + + +#### Eager Compatibility +Input pipelines based on Queues are not supported when eager execution is +enabled. Please use the tf.data API to ingest data under eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/start_queue_runners.md b/site/en/api_docs/python/tf/compat/v1/train/start_queue_runners.md new file mode 100644 index 00000000000..93edd9d1ed1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/start_queue_runners.md @@ -0,0 +1,169 @@ +description: Starts all queue runners collected in the graph. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.start_queue_runners + + + + + + + + + +Starts all queue runners collected in the graph. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +To construct input pipelines, use the tf.data module. + +This is a companion method to `add_queue_runner()`. It just starts +threads for all queue runners collected in the graph. It returns +the list of all threads. + + + + + + + + + + + + + + + + + + + + + + +
+`sess` + +`Session` used to run the queue ops. Defaults to the +default session. +
+`coord` + +Optional `Coordinator` for coordinating the started threads. +
+`daemon` + +Whether the threads should be marked as `daemons`, meaning +they don't block program exit. +
+`start` + +Set to `False` to only create the threads, not start them. +
+`collection` + +A `GraphKey` specifying the graph collection to +get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `sess` is None and there isn't any default session. +
+`TypeError` + +if `sess` is not a tf.compat.v1.Session object. +
+ + + + + + + + + + + +
+A list of threads. +
+ + + + + + + + + + + + + + + +
+`RuntimeError` + +If called with eager execution enabled. +
+`ValueError` + +If called without a default tf.compat.v1.Session registered. +
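A minimal legacy usage sketch; `batch` stands for a tensor produced by one of the deprecated queue-based input ops and is assumed to already exist in the graph:

```python
import tensorflow as tf

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.compat.v1.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            sess.run(batch)  # `batch` is assumed to come from a queue-based pipeline
    except tf.errors.OutOfRangeError:
        pass
    finally:
        coord.request_stop()
        coord.join(threads)
```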
+ + + + +#### Eager Compatibility +Not compatible with eager execution. To ingest data under eager execution, +use the tf.data API instead. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/string_input_producer.md b/site/en/api_docs/python/tf/compat/v1/train/string_input_producer.md new file mode 100644 index 00000000000..0b1188b1985 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/string_input_producer.md @@ -0,0 +1,155 @@ +description: Output strings (e.g. filenames) to a queue for an input pipeline. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.string_input_producer + + + + + + + + + +Output strings (e.g. filenames) to a queue for an input pipeline. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Queue-based input pipelines have been replaced by tf.data. Use `tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`. + +Note: if `num_epochs` is not `None`, this function creates local counter +`epochs`. Use `local_variables_initializer()` to initialize local variables. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`string_tensor` + +A 1-D string tensor with the strings to produce. +
+`num_epochs` + +An integer (optional). If specified, `string_input_producer` +produces each string from `string_tensor` `num_epochs` times before +generating an `OutOfRange` error. If not specified, +`string_input_producer` can cycle through the strings in `string_tensor` +an unlimited number of times. +
+`shuffle` + +Boolean. If true, the strings are randomly shuffled within each +epoch. +
+`seed` + +An integer (optional). Seed used if shuffle == True. +
+`capacity` + +An integer. Sets the queue capacity. +
+`shared_name` + +(optional). If set, this queue will be shared under the given +name across multiple sessions. All sessions open to the device which has +this queue will be able to access it via the shared_name. Using this in +a distributed setting means each name will only be seen by one of the +sessions which has access to this operation. +
+`name` + +A name for the operations (optional). +
+`cancel_op` + +Cancel op for the queue (optional). +
+ + + + + + + + + + + +
+A queue with the output strings. A `QueueRunner` for the Queue +is added to the current `Graph`'s `QUEUE_RUNNER` collection. +
+ + + + + + + + + + + + +
+`ValueError` + +If `string_tensor` is an empty Python list. At runtime, the op will fail with an assertion if `string_tensor` becomes an empty tensor. +
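A short legacy sketch of the typical filename-queue pattern; the file paths are hypothetical:

```python
import tensorflow as tf

with tf.Graph().as_default():
    filenames = ["/tmp/data/file0.csv", "/tmp/data/file1.csv"]  # hypothetical paths
    filename_queue = tf.compat.v1.train.string_input_producer(
        filenames, num_epochs=2, shuffle=True)

    # A reader dequeues filenames from the queue and yields (key, line) pairs.
    reader = tf.compat.v1.TextLineReader()
    key, value = reader.read(filename_queue)
```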
+ + + + +#### Eager Compatibility +Input pipelines based on Queues are not supported when eager execution is +enabled. Please use the tf.data API to ingest data under eager execution. + diff --git a/site/en/api_docs/python/tf/compat/v1/train/summary_iterator.md b/site/en/api_docs/python/tf/compat/v1/train/summary_iterator.md new file mode 100644 index 00000000000..55a0d01c533 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/summary_iterator.md @@ -0,0 +1,83 @@ +description: An iterator for reading Event protocol buffers from an event file. + +
+ + +
+ +# tf.compat.v1.train.summary_iterator + + + + + + + + + +An iterator for reading `Event` protocol buffers from an event file. + + + + + + + +You can use this function to read events written to an event file. It returns +a Python iterator that yields `Event` protocol buffers. + +Example: Print the contents of an events file. + +```python +for e in tf.compat.v1.train.summary_iterator(path to events file): + print(e) +``` + +Example: Print selected summary values. + +```python +# This example supposes that the events file contains summaries with a +# summary value tag 'loss'. These could have been added by calling +# `add_summary()`, passing the output of a scalar summary op created with +# with: `tf.compat.v1.summary.scalar('loss', loss_tensor)`. +for e in tf.compat.v1.train.summary_iterator(path to events file): + for v in e.summary.value: + if v.tag == 'loss': + print(v.simple_value) +``` + +See the protocol buffer definitions of +[Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto) +and +[Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) +for more information about their attributes. + + + + + + + + + + +
+`path` + +The path to an event file created by a `SummaryWriter`. +
+ + + +#### Yields: + +`Event` protocol buffers. diff --git a/site/en/api_docs/python/tf/compat/v1/train/update_checkpoint_state.md b/site/en/api_docs/python/tf/compat/v1/train/update_checkpoint_state.md new file mode 100644 index 00000000000..357daa89afe --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/update_checkpoint_state.md @@ -0,0 +1,120 @@ +description: Updates the content of the 'checkpoint' file. (deprecated) + +
+ + +
+ +# tf.compat.v1.train.update_checkpoint_state + + + + + + + + + +Updates the content of the 'checkpoint' file. (deprecated) + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.train.CheckpointManager to manage checkpoints rather than manually editing the Checkpoint proto. + +This updates the checkpoint file containing a CheckpointState +proto. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`save_dir` + +Directory where the model was saved. +
+`model_checkpoint_path` + +The checkpoint file. +
+`all_model_checkpoint_paths` + +List of strings. Paths to all not-yet-deleted +checkpoints, sorted from oldest to newest. If this is a non-empty list, +the last element must be equal to model_checkpoint_path. These paths +are also saved in the CheckpointState proto. +
+`latest_filename` + +Optional name of the checkpoint file. Default to +'checkpoint'. +
+`all_model_checkpoint_timestamps` + +Optional list of timestamps (floats, +seconds since the Epoch) indicating when the checkpoints in +`all_model_checkpoint_paths` were created. +
+`last_preserved_timestamp` + +A float, indicating the number of seconds since +the Epoch when the last preserved checkpoint was written, e.g. due to a +`keep_checkpoint_every_n_hours` parameter (see +tf.train.CheckpointManager for an implementation). +
+ + + + + + + + + + + + +
+`RuntimeError` + +If any of the model checkpoint paths conflict with the file containing CheckpointState. +
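A sketch of the replacement suggested in the deprecation notice: let tf.train.CheckpointManager maintain the 'checkpoint' state file instead of editing it by hand (the directory is hypothetical):

```python
import tensorflow as tf

ckpt = tf.train.Checkpoint(step=tf.Variable(0))
manager = tf.train.CheckpointManager(ckpt, directory="/tmp/model", max_to_keep=3)
manager.save()  # writes a checkpoint and updates the 'checkpoint' file for us
```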
+ diff --git a/site/en/api_docs/python/tf/compat/v1/train/warm_start.md b/site/en/api_docs/python/tf/compat/v1/train/warm_start.md new file mode 100644 index 00000000000..16e7a6086ae --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/train/warm_start.md @@ -0,0 +1,123 @@ +description: Warm-starts a model using the given settings. + +
+ + +
+ +# tf.compat.v1.train.warm_start + + + + + + + + + +Warm-starts a model using the given settings. + + + + + + + +If you are using a tf.estimator.Estimator, this will automatically be called +during training. + + + + + + + + + + + + + + + + + + + +
+`ckpt_to_initialize_from` + +[Required] A string specifying the directory with +checkpoint file(s) or path to checkpoint from which to warm-start the +model parameters. +
+`vars_to_warm_start` + +[Optional] One of the following: + +- A regular expression (string) that captures which variables to +warm-start (see tf.compat.v1.get_collection). This expression will only +consider variables in the TRAINABLE_VARIABLES collection -- if you need +to warm-start non_TRAINABLE vars (such as optimizer accumulators or +batch norm statistics), please use the below option. +- A list of strings, each a regex scope provided to +tf.compat.v1.get_collection with GLOBAL_VARIABLES (please see +tf.compat.v1.get_collection). For backwards compatibility reasons, +this is separate from the single-string argument type. +- A list of Variables to warm-start. If you do not have access to the +`Variable` objects at the call site, please use the above option. +- `None`, in which case only TRAINABLE variables specified in +`var_name_to_vocab_info` will be warm-started. + +Defaults to `'.*'`, which warm-starts all variables in the +TRAINABLE_VARIABLES collection. Note that this excludes variables such +as accumulators and moving statistics from batch norm. +
+`var_name_to_vocab_info` + +[Optional] Dict of variable names (strings) to +tf.estimator.VocabInfo. The variable names should be "full" variables, +not the names of the partitions. If not explicitly provided, the variable +is assumed to have no (changes to) vocabulary. +
+`var_name_to_prev_var_name` + +[Optional] Dict of variable names (strings) to +name of the previously-trained variable in `ckpt_to_initialize_from`. If +not explicitly provided, the name of the variable is assumed to be same +between previous checkpoint and current model. Note that this has no +effect on the set of variables that is warm-started, and only controls +name mapping (use `vars_to_warm_start` for controlling what variables to +warm-start). +
+ + + + + + + + + + + + +
+`ValueError` + +If the WarmStartSettings contains prev_var_name or VocabInfo +configuration for variable names that are not used. This is to ensure +a stronger check for variable configuration than relying on users to +examine the logs. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/trainable_variables.md b/site/en/api_docs/python/tf/compat/v1/trainable_variables.md new file mode 100644 index 00000000000..3d50f45f074 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/trainable_variables.md @@ -0,0 +1,72 @@ +description: Returns all variables created with trainable=True. + +
+ + +
+ +# tf.compat.v1.trainable_variables + + + + + + + + + +Returns all variables created with `trainable=True`. + + + + + + + +When passed `trainable=True`, the `Variable()` constructor automatically +adds new variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES`. This convenience function returns the +contents of that collection. + + + + + + + + + + +
+`scope` + +(Optional.) A string. If supplied, the resulting list is filtered to +include only items whose `name` attribute matches `scope` using +`re.match`. Items without a `name` attribute are never returned if a scope +is supplied. The choice of `re.match` means that a `scope` without special +tokens filters by prefix. +
+ + + + + + + + + + + +
+A list of Variable objects. +
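A quick sketch of what ends up in the collection:

```python
import tensorflow as tf

with tf.Graph().as_default():
    v = tf.compat.v1.get_variable("v", shape=[1])                   # trainable by default
    w = tf.compat.v1.get_variable("w", shape=[1], trainable=False)  # not collected
    names = [x.name for x in tf.compat.v1.trainable_variables()]
    # names == ['v:0']; `w` is excluded because trainable=False.
```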
+ diff --git a/site/en/api_docs/python/tf/compat/v1/transpose.md b/site/en/api_docs/python/tf/compat/v1/transpose.md new file mode 100644 index 00000000000..f37f5fff5a3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/transpose.md @@ -0,0 +1,143 @@ +description: Transposes a. + +
+ + +
+ +# tf.compat.v1.transpose + + + + + + + + + +Transposes `a`. + + + + + + + +Permutes the dimensions according to `perm`. + +The returned tensor's dimension i will correspond to the input dimension +`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is +the rank of the input tensor. Hence by default, this operation performs a +regular matrix transpose on 2-D input Tensors. If conjugate is True and +`a.dtype` is either `complex64` or `complex128` then the values of `a` +are conjugated and transposed. + + + +#### For example: + + + +```python +x = tf.constant([[1, 2, 3], [4, 5, 6]]) +tf.transpose(x) # [[1, 4] + # [2, 5] + # [3, 6]] + +# Equivalently +tf.transpose(x, perm=[1, 0]) # [[1, 4] + # [2, 5] + # [3, 6]] + +# If x is complex, setting conjugate=True gives the conjugate transpose +x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j], + [4 + 4j, 5 + 5j, 6 + 6j]]) +tf.transpose(x, conjugate=True) # [[1 - 1j, 4 - 4j], + # [2 - 2j, 5 - 5j], + # [3 - 3j, 6 - 6j]] + +# 'perm' is more useful for n-dimensional tensors, for n > 2 +x = tf.constant([[[ 1, 2, 3], + [ 4, 5, 6]], + [[ 7, 8, 9], + [10, 11, 12]]]) + +# Take the transpose of the matrices in dimension-0 +# (this common operation has a shorthand `linalg.matrix_transpose`) +tf.transpose(x, perm=[0, 2, 1]) # [[[1, 4], + # [2, 5], + # [3, 6]], + # [[7, 10], + # [8, 11], + # [9, 12]]] +``` + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. +
+`perm` + +A permutation of the dimensions of `a`. +
+`name` + +A name for the operation (optional). +
+`conjugate` + +Optional bool. Setting it to `True` is mathematically equivalent +to tf.math.conj(tf.transpose(input)). +
+ + + + + + + + + + + +
+A transposed `Tensor`. +
+ + + +#### Numpy Compatibility +In `numpy` transposes are memory-efficient constant time operations as they +simply return a new view of the same data with adjusted `strides`. + +TensorFlow does not support strides, so `transpose` returns a new tensor with +the items permuted. + diff --git a/site/en/api_docs/python/tf/compat/v1/truncated_normal_initializer.md b/site/en/api_docs/python/tf/compat/v1/truncated_normal_initializer.md new file mode 100644 index 00000000000..129de6258db --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/truncated_normal_initializer.md @@ -0,0 +1,229 @@ +description: Initializer that generates a truncated normal distribution. + +
+ + + + + + +
+ +# tf.compat.v1.truncated_normal_initializer + + + + + + + + + +Initializer that generates a truncated normal distribution. + +Inherits From: [`Initializer`](../../../tf/compat/v1/keras/initializers/Initializer.md) + + + + + + + + + +These values are similar to values from a `random_normal_initializer` +except that values more than two standard deviations from the mean +are discarded and re-drawn. This is the recommended initializer for +neural network weights and filters. + + + + + + + + + + + + + + + + + + + +
+`mean` + +a python scalar or a scalar tensor. Mean of the random values to +generate. +
+`stddev` + +a python scalar or a scalar tensor. Standard deviation of the random +values to generate. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +Default data type, used if no `dtype` argument is provided when +calling the initializer. Only floating point types are supported. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/tuple.md b/site/en/api_docs/python/tf/compat/v1/tuple.md new file mode 100644 index 00000000000..a45ee9bf731 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/tuple.md @@ -0,0 +1,117 @@ +description: Group tensors together. + +
+ + +
+ +# tf.compat.v1.tuple + + + + + + + + + +Group tensors together. + + + + + + + +This creates a tuple of tensors with the same values as the `tensors` +argument, except that the value of each tensor is only returned after the +values of all tensors have been computed. + +`control_inputs` contains additional ops that have to finish before this op +finishes, but whose outputs are not returned. + +This can be used as a "join" mechanism for parallel computations: all the +argument tensors can be computed in parallel, but the values of any tensor +returned by `tuple` are only available after all the parallel computations +are done. + +See also tf.group and +tf.control_dependencies. + + + + + + + + + + + + + + + + +
+`tensors` + +A list of `Tensor`s or `IndexedSlices`, some entries can be `None`. +
+`name` + +(optional) A name to use as a `name_scope` for the operation. +
+`control_inputs` + +List of additional ops to finish before returning. +
+ + + + + + + + + + + +
+Same as `tensors`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `tensors` does not contain any `Tensor` or `IndexedSlices`. +
+`TypeError` + +If `control_inputs` is not a list of `Operation` or `Tensor` +objects. +
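A small sketch of the "join" behaviour described above:

```python
import tensorflow as tf

with tf.Graph().as_default():
    x = tf.constant(2.0)
    y = x * 3.0
    z = x + 1.0
    # y_out and z_out carry the same values as y and z, but neither is
    # returned until both y and z have been computed.
    y_out, z_out = tf.compat.v1.tuple([y, z])
```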
+ diff --git a/site/en/api_docs/python/tf/compat/v1/uniform_unit_scaling_initializer.md b/site/en/api_docs/python/tf/compat/v1/uniform_unit_scaling_initializer.md new file mode 100644 index 00000000000..4bc53720808 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/uniform_unit_scaling_initializer.md @@ -0,0 +1,236 @@ +description: Initializer that generates tensors without scaling variance. + +
+ + + + + + +
+ +# tf.compat.v1.uniform_unit_scaling_initializer + + + + + + + + + +Initializer that generates tensors without scaling variance. + +Inherits From: [`Initializer`](../../../tf/compat/v1/keras/initializers/Initializer.md) + + + + + + + + + +When initializing a deep network, it is in principle advantageous to keep +the scale of the input variance constant, so it does not explode or diminish +by reaching the final layer. If the input is `x` and the operation `x * W`, +and we want to initialize `W` uniformly at random, we need to pick `W` from + + [-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)] + +to keep the scale intact, where `dim = W.shape[0]` (the size of the input). +A similar calculation for convolutional networks gives an analogous result +with `dim` equal to the product of the first 3 dimensions. When +nonlinearities are present, we need to multiply this by a constant `factor`. +See (Sussillo et al., 2014) for deeper motivation, experiments +and the calculation of constants. In section 2.3 there, the constants were +numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15. + + + + + + + + + + + + + + + + +
+`factor` + +Float. A multiplicative factor by which the values will be scaled. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.compat.v1.set_random_seed for behavior. +
+`dtype` + +Default data type, used if no `dtype` argument is provided when +calling the initializer. Only floating point types are supported. +
+ + + +#### References: + +[Sussillo et al., 2014](https://arxiv.org/abs/1412.6558) +([pdf](http://arxiv.org/pdf/1412.6558.pdf)) + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. It will typically be the output of +`get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided use the initializer +dtype. +
+`partition_info` + +Optional information about the possible partitioning of a +tensor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/user_ops.md b/site/en/api_docs/python/tf/compat/v1/user_ops.md new file mode 100644 index 00000000000..4e236a4e440 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/user_ops.md @@ -0,0 +1,25 @@ +description: Public API for tf.user_ops namespace. + +
+ + +
+ +# Module: tf.compat.v1.user_ops + + + + + + + + + +Public API for tf.user_ops namespace. + + + +## Functions + +[`my_fact(...)`](../../../tf/compat/v1/user_ops/my_fact.md): Example of overriding the generated code for an Op. + diff --git a/site/en/api_docs/python/tf/compat/v1/user_ops/my_fact.md b/site/en/api_docs/python/tf/compat/v1/user_ops/my_fact.md new file mode 100644 index 00000000000..04e7c1cd4e9 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/user_ops/my_fact.md @@ -0,0 +1,31 @@ +description: Example of overriding the generated code for an Op. + +
+ + +
+ +# tf.compat.v1.user_ops.my_fact + + + + + + + + + +Example of overriding the generated code for an Op. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/variable_axis_size_partitioner.md b/site/en/api_docs/python/tf/compat/v1/variable_axis_size_partitioner.md new file mode 100644 index 00000000000..e288ebe5dcd --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/variable_axis_size_partitioner.md @@ -0,0 +1,117 @@ +description: Get a partitioner for VariableScope to keep shards below max_shard_bytes. + +
+ + +
+ +# tf.compat.v1.variable_axis_size_partitioner + + + + + + + + + +Get a partitioner for VariableScope to keep shards below `max_shard_bytes`. + + + + + + + +This partitioner will shard a Variable along one axis, attempting to keep +the maximum shard size below `max_shard_bytes`. In practice, this is not +always possible when sharding along only one axis. When this happens, +this axis is sharded as much as possible (i.e., every dimension becomes +a separate shard). + +If the partitioner hits the `max_shards` limit, then each shard may end up +larger than `max_shard_bytes`. By default `max_shards` equals `None` and no +limit on the number of shards is enforced. + +One reasonable value for `max_shard_bytes` is `(64 << 20) - 1`, or almost +`64MB`, to keep below the protobuf byte limit. + + + + + + + + + + + + + + + + + + + +
+`max_shard_bytes` + +The maximum size any given shard is allowed to be. +
+`axis` + +The axis to partition along. Default: outermost axis. +
+`bytes_per_string_element` + +If the `Variable` is of type string, this provides +an estimate of how large each scalar in the `Variable` is. +
+`max_shards` + +An int. The maximum number of shards to create; takes precedence over `max_shard_bytes`. +
+ + + + + + + + + + + +
+A partition function usable as the `partitioner` argument to +`variable_scope` and `get_variable`. +
+ + + + + + + + + + + + +
+`ValueError` + +If any of the byte counts are non-positive. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/variable_creator_scope.md b/site/en/api_docs/python/tf/compat/v1/variable_creator_scope.md new file mode 100644 index 00000000000..1f7fde0bd38 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/variable_creator_scope.md @@ -0,0 +1,115 @@ +description: Scope which defines a variable creation function to be used by variable(). + +
+ + +
+ +# tf.compat.v1.variable_creator_scope + + + + + + + + + +Scope which defines a variable creation function to be used by variable(). + + + + + + + +variable_creator is expected to be a function with the following signature: + +``` + def variable_creator(next_creator, **kwargs) +``` + +The creator is supposed to eventually call the next_creator to create a +variable if it does want to create a variable and not call Variable or +ResourceVariable directly. This helps make creators composable. A creator may +choose to create multiple variables, return already existing variables, or +simply register that a variable was created and defer to the next creators in +line. Creators can also modify the keyword arguments seen by the next +creators. + +Custom getters in the variable scope will eventually resolve down to these +custom creators when they do create variables. + +The valid keyword arguments in kwds are: + + * initial_value: A `Tensor`, or Python object convertible to a `Tensor`, + which is the initial value for the Variable. The initial value must have + a shape specified unless `validate_shape` is set to False. Can also be a + callable with no argument that returns the initial value when called. In + that case, `dtype` must be specified. (Note that initializer functions + from init_ops.py must first be bound to a shape before being used here.) + * trainable: If `True`, the default, also adds the variable to the graph + collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as + the default list of variables to use by the `Optimizer` classes. + `trainable` defaults to `True`, unless `synchronization` is + set to `ON_READ`, in which case it defaults to `False`. + * collections: List of graph collections keys. The new variable is added to + these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`. + * validate_shape: If `False`, allows the variable to be initialized with a + value of unknown shape. If `True`, the default, the shape of + `initial_value` must be known. + * caching_device: Optional device string describing where the Variable + should be cached for reading. Defaults to the Variable's device. + If not `None`, caches on another device. Typical use is to cache + on the device where the Ops using the Variable reside, to deduplicate + copying through `Switch` and other conditional statements. + * name: Optional name for the variable. Defaults to `'Variable'` and gets + uniquified automatically. + * dtype: If set, initial_value will be converted to the given type. + If `None`, either the datatype will be kept (if `initial_value` is + a Tensor), or `convert_to_tensor` will decide. + * constraint: A constraint function to be applied to the variable after + updates by some algorithms. + * use_resource: if True, a ResourceVariable is always created. + * synchronization: Indicates when a distributed a variable will be + aggregated. Accepted values are constants defined in the class + tf.VariableSynchronization. By default the synchronization is set to + `AUTO` and the current `DistributionStrategy` chooses + when to synchronize. + * aggregation: Indicates how a distributed variable will be aggregated. + Accepted values are constants defined in the class + tf.VariableAggregation. + +This set may grow over time, so it's important the signature of creators is as +mentioned above. + + + + + + + + + + +
+`variable_creator` + +the passed creator +
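A minimal sketch of a composable creator that logs each creation and then defers to the next creator in line, as described above:

```python
import tensorflow as tf

def logging_creator(next_creator, **kwargs):
    # Inspect or modify kwargs here, then defer to the next creator.
    print("creating variable:", kwargs.get("name"))
    return next_creator(**kwargs)

with tf.Graph().as_default():
    with tf.compat.v1.variable_creator_scope(logging_creator):
        v = tf.Variable([1.0, 2.0], name="v")  # goes through logging_creator
```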
+ + + +#### Yields: + +A scope in which the creator is active diff --git a/site/en/api_docs/python/tf/compat/v1/variable_op_scope.md b/site/en/api_docs/python/tf/compat/v1/variable_op_scope.md new file mode 100644 index 00000000000..6c5ea3efaf0 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/variable_op_scope.md @@ -0,0 +1,36 @@ +description: Deprecated: context manager for defining an op that creates variables. + +
+ + +
+ +# tf.compat.v1.variable_op_scope + + + + + + + + + +Deprecated: context manager for defining an op that creates variables. + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/variable_scope.md b/site/en/api_docs/python/tf/compat/v1/variable_scope.md new file mode 100644 index 00000000000..a36bec48ca5 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/variable_scope.md @@ -0,0 +1,367 @@ +description: A context manager for defining ops that creates variables (layers). + +
+ + + + + +
+ +# tf.compat.v1.variable_scope + + + + + + + + + +A context manager for defining ops that creates variables (layers). + + + + + + + +This context manager validates that the (optional) `values` are from the same +graph, ensures that graph is the default graph, and pushes a name scope and a +variable scope. + +If `name_or_scope` is not None, it is used as is. If `name_or_scope` is None, +then `default_name` is used. In that case, if the same name has been +previously used in the same scope, it will be made unique by appending `_N` +to it. + +Variable scope allows you to create new variables and to share already created +ones while providing checks to not create or share by accident. For details, +see the [Variable Scope How To](https://tensorflow.org/guide/variables), here +we present only a few basic examples. + +Simple example of how to create a new variable: + +```python +with tf.compat.v1.variable_scope("foo"): + with tf.compat.v1.variable_scope("bar"): + v = tf.compat.v1.get_variable("v", [1]) + assert v.name == "foo/bar/v:0" +``` + +Simple example of how to reenter a premade variable scope safely: + +```python +with tf.compat.v1.variable_scope("foo") as vs: + pass + +# Re-enter the variable scope. +with tf.compat.v1.variable_scope(vs, + auxiliary_name_scope=False) as vs1: + # Restore the original name_scope. + with tf.name_scope(vs1.original_name_scope): + v = tf.compat.v1.get_variable("v", [1]) + assert v.name == "foo/v:0" + c = tf.constant([1], name="c") + assert c.name == "foo/c:0" +``` + +Keep in mind that the counters for `default_name` are discarded once the +parent scope is exited. Therefore when the code re-enters the scope (for +instance by saving it), all nested default_name counters will be restarted. + +#### For instance: + + + +```python +with tf.compat.v1.variable_scope("foo") as vs: + with tf.compat.v1.variable_scope(None, default_name="bar"): + v = tf.compat.v1.get_variable("a", [1]) + assert v.name == "foo/bar/a:0", v.name + with tf.compat.v1.variable_scope(None, default_name="bar"): + v = tf.compat.v1.get_variable("b", [1]) + assert v.name == "foo/bar_1/b:0" + +with tf.compat.v1.variable_scope(vs): + with tf.compat.v1.variable_scope(None, default_name="bar"): + v = tf.compat.v1.get_variable("c", [1]) + assert v.name == "foo/bar/c:0" # Uses bar instead of bar_2! +``` + +Basic example of sharing a variable AUTO_REUSE: + +```python +def foo(): + with tf.compat.v1.variable_scope("foo", reuse=tf.compat.v1.AUTO_REUSE): + v = tf.compat.v1.get_variable("v", [1]) + return v + +v1 = foo() # Creates v. +v2 = foo() # Gets the same, existing v. +assert v1 == v2 +``` + +Basic example of sharing a variable with reuse=True: + +```python +with tf.compat.v1.variable_scope("foo"): + v = tf.compat.v1.get_variable("v", [1]) +with tf.compat.v1.variable_scope("foo", reuse=True): + v1 = tf.compat.v1.get_variable("v", [1]) +assert v1 == v +``` + +Sharing a variable by capturing a scope and setting reuse: + +```python +with tf.compat.v1.variable_scope("foo") as scope: + v = tf.compat.v1.get_variable("v", [1]) + scope.reuse_variables() + v1 = tf.compat.v1.get_variable("v", [1]) +assert v1 == v +``` + +To prevent accidental sharing of variables, we raise an exception when getting +an existing variable in a non-reusing scope. + +```python +with tf.compat.v1.variable_scope("foo"): + v = tf.compat.v1.get_variable("v", [1]) + v1 = tf.compat.v1.get_variable("v", [1]) + # Raises ValueError("... v already exists ..."). 
+``` + +Similarly, we raise an exception when trying to get a variable that does not +exist in reuse mode. + +```python +with tf.compat.v1.variable_scope("foo", reuse=True): + v = tf.compat.v1.get_variable("v", [1]) + # Raises ValueError("... v does not exists ..."). +``` + +Note that the `reuse` flag is inherited: if we open a reusing scope, then all +its sub-scopes become reusing as well. + +A note about name scoping: Setting `reuse` does not impact the naming of other +ops such as mult. See related discussion on +[github#6189](https://github.com/tensorflow/tensorflow/issues/6189) + +Note that up to and including version 1.0, it was allowed (though explicitly +discouraged) to pass False to the reuse argument, yielding undocumented +behaviour slightly different from None. Starting at 1.1.0 passing None and +False as reuse has exactly the same effect. + +A note about using variable scopes in multi-threaded environment: Variable +scopes are thread local, so one thread will not see another thread's current +scope. Also, when using `default_name`, unique scopes names are also generated +only on a per thread basis. If the same name was used within a different +thread, that doesn't prevent a new thread from creating the same scope. +However, the underlying variable store is shared across threads (within the +same graph). As such, if another thread tries to create a new variable with +the same name as a variable created by a previous thread, it will fail unless +reuse is True. + +Further, each thread starts with an empty variable scope. So if you wish to +preserve name prefixes from a scope from the main thread, you should capture +the main thread's scope and re-enter it in each thread. For e.g. + +``` +main_thread_scope = variable_scope.get_variable_scope() + +# Thread's target function: +def thread_target_fn(captured_scope): + with variable_scope.variable_scope(captured_scope): + # .... regular code for this thread + + +thread = threading.Thread(target=thread_target_fn, args=(main_thread_scope,)) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name_or_scope` + +`string` or `VariableScope`: the scope to open. +
+`default_name` + +The default name to use if the `name_or_scope` argument is +`None`, this name will be uniquified. If name_or_scope is provided it +won't be used and therefore it is not required and can be None. +
+`values` + +The list of `Tensor` arguments that are passed to the op function. +
+`initializer` + +default initializer for variables within this scope. +
+`regularizer` + +default regularizer for variables within this scope. +
+`caching_device` + +default caching device for variables within this scope. +
+`partitioner` + +default partitioner for variables within this scope. +
+`custom_getter` + +default custom getter for variables within this scope. +
+`reuse` + +`True`, None, or tf.compat.v1.AUTO_REUSE; if `True`, we go into +reuse mode for this scope as well as all sub-scopes; if +tf.compat.v1.AUTO_REUSE, we create variables if they do not exist, and +return them otherwise; if None, we inherit the parent scope's reuse +flag. When eager execution is enabled, new variables are always created +unless an EagerVariableStore or template is currently active. +
+`dtype` + +type of variables created in this scope (defaults to the type in +the passed scope, or inherited from parent scope). +
+`use_resource` + +If False, all variables will be regular Variables. If True, +experimental ResourceVariables with well-defined semantics will be used +instead. Defaults to False (will later change to True). When eager +execution is enabled this argument is always forced to be True. +
+`constraint` + +An optional projection function to be applied to the variable +after being updated by an `Optimizer` (e.g. used to implement norm +constraints or value constraints for layer weights). The function must +take as input the unprojected Tensor representing the value of the +variable and return the Tensor for the projected value (which must have +the same shape). Constraints are not safe to use when doing asynchronous +distributed training. +
+`auxiliary_name_scope` + +If `True`, we create an auxiliary name scope with +the scope. If `False`, we don't create it. Note that the argument is not +inherited, and it only takes effect for once when creating. You should +only use it for re-entering a premade variable scope. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +when trying to reuse within a create scope, or create within +a reuse scope. +
+`TypeError` + +when the types of some arguments are not appropriate. +
+ + + +## Methods + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/compat/v1/variables_initializer.md b/site/en/api_docs/python/tf/compat/v1/variables_initializer.md new file mode 100644 index 00000000000..5e4d1ce54f1 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/variables_initializer.md @@ -0,0 +1,91 @@ +description: Returns an Op that initializes a list of variables. + +
+ + +
+ +# tf.compat.v1.variables_initializer + + + + + + + + + +Returns an Op that initializes a list of variables. + + + + + + + + + +After you launch the graph in a session, you can run the returned Op to +initialize all the variables in `var_list`. This Op runs all the +initializers of the variables in `var_list` in parallel. + +Calling `initialize_variables()` is equivalent to passing the list of +initializers to `Group()`. + +If `var_list` is empty, however, the function still returns an Op that can +be run. That Op just has no effect. + + + + + + + + + + + + + +
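+
+A minimal usage sketch (the variable names are illustrative and graph mode is
+assumed via `tf.compat.v1`):
+
+```python
+import tensorflow.compat.v1 as tf
+
+tf.disable_eager_execution()
+
+v1 = tf.get_variable("v1", shape=[3], initializer=tf.zeros_initializer())
+v2 = tf.get_variable("v2", shape=[3], initializer=tf.ones_initializer())
+
+init_op = tf.variables_initializer([v1, v2], name="init_v1_v2")
+
+with tf.Session() as sess:
+  sess.run(init_op)    # runs both initializers in parallel
+  print(sess.run(v2))  # [1. 1. 1.]
+```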
+`var_list` + +List of `Variable` objects to initialize. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
+An Op that runs the initializers of all the specified variables. +
+An Op that runs the initializers of all the specified variables. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/verify_tensor_all_finite.md b/site/en/api_docs/python/tf/compat/v1/verify_tensor_all_finite.md new file mode 100644 index 00000000000..27e4a2b8bf2 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/verify_tensor_all_finite.md @@ -0,0 +1,103 @@ +description: Assert that the tensor does not contain any NaN's or Inf's. + +
+ + +
+ +# tf.compat.v1.verify_tensor_all_finite + + + + + + + + + +Assert that the tensor does not contain any NaN's or Inf's. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
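+
+A short usage sketch (the tensor values and the message are illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0, 3.0])
+
+# Returns the same tensor as `x`; evaluating it raises an
+# InvalidArgumentError if any element is NaN or Inf.
+checked = tf.compat.v1.verify_tensor_all_finite(x, msg="x has NaN/Inf values")
+
+bad = tf.constant([1.0, float("nan")])
+# tf.compat.v1.verify_tensor_all_finite(bad, msg="found NaN")  # would raise
+```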
+`t` + +Tensor to check. +
+`msg` + +Message to log on failure. +
+`name` + +A name for this operation (optional). +
+`x` + +Alias for t. +
+`message` + +Alias for msg. +
+ + + + + + + + + + + +
+Same tensor as `t`. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/version.md b/site/en/api_docs/python/tf/compat/v1/version.md new file mode 100644 index 00000000000..b7247ff9297 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/version.md @@ -0,0 +1,35 @@ +description: Public API for tf.version namespace. + +
+ + + + + + + + +
+ +# Module: tf.compat.v1.version + + + + + + + + + +Public API for tf.version namespace. + + + +## Other Members + +* `COMPILER_VERSION = '7.3.1 20180303'` +* `GIT_VERSION = 'v2.2.0-rc4-8-g2b96f3662b'` +* `GRAPH_DEF_VERSION = 175` +* `GRAPH_DEF_VERSION_MIN_CONSUMER = 0` +* `GRAPH_DEF_VERSION_MIN_PRODUCER = 0` +* `VERSION = '2.2.0'` diff --git a/site/en/api_docs/python/tf/compat/v1/where.md b/site/en/api_docs/python/tf/compat/v1/where.md new file mode 100644 index 00000000000..b5f40ad267d --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/where.md @@ -0,0 +1,127 @@ +description: Return the elements, either from x or y, depending on the condition. + +
+ + +
+ +# tf.compat.v1.where + + + + + + + + + +Return the elements, either from `x` or `y`, depending on the `condition`. + + + + + + + +If both `x` and `y` are None, then this operation returns the coordinates of +true elements of `condition`. The coordinates are returned in a 2-D tensor +where the first dimension (rows) represents the number of true elements, and +the second dimension (columns) represents the coordinates of the true +elements. Keep in mind, the shape of the output tensor can vary depending on +how many true values there are in input. Indices are output in row-major +order. + +If both non-None, `x` and `y` must have the same shape. +The `condition` tensor must be a scalar if `x` and `y` are scalar. +If `x` and `y` are tensors of higher rank, then `condition` must be either a +vector with size matching the first dimension of `x`, or must have the same +shape as `x`. + +The `condition` tensor acts as a mask that chooses, based on the value at each +element, whether the corresponding element / row in the output should be taken +from `x` (if true) or `y` (if false). + +If `condition` is a vector and `x` and `y` are higher rank matrices, then it +chooses which row (outer dimension) to copy from `x` and `y`. If `condition` +has the same shape as `x` and `y`, then it chooses which element to copy from +`x` and `y`. + + + + + + + + + + + + + + + + + + + +
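+
+A brief sketch of both calling conventions (the tensors are illustrative):
+
+```python
+import tensorflow as tf
+
+condition = tf.constant([[True, False], [False, True]])
+
+# Only `condition`: coordinates of the True elements, shape (num_true, rank).
+tf.compat.v1.where(condition)
+# [[0, 0],
+#  [1, 1]]
+
+# With `x` and `y`: element-wise selection driven by `condition`.
+x = tf.constant([[1, 2], [3, 4]])
+y = tf.constant([[10, 20], [30, 40]])
+tf.compat.v1.where(condition, x, y)
+# [[ 1, 20],
+#  [30,  4]]
+```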
+`condition` + +A `Tensor` of type `bool` +
+`x` + +A Tensor which may have the same shape as `condition`. If `condition` is +rank 1, `x` may have higher rank, but its first dimension must match the +size of `condition`. +
+`y` + +A `tensor` with the same shape and type as `x`. +
+`name` + +A name of the operation (optional) +
+ + + + + + + + + + + +
+A `Tensor` with the same type and shape as `x`, `y` if they are non-None. +Otherwise, a `Tensor` with shape `(num_true, rank(condition))`. +
+ + + + + + + + + + + + +
+`ValueError` + +When exactly one of `x` or `y` is non-None. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/while_loop.md b/site/en/api_docs/python/tf/compat/v1/while_loop.md new file mode 100644 index 00000000000..37b230a7ad6 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/while_loop.md @@ -0,0 +1,295 @@ +description: Repeat body while the condition cond is true. + +
+ + +
+ +# tf.compat.v1.while_loop + + + + + + + + + +Repeat `body` while the condition `cond` is true. + + + + + + + +`cond` is a callable returning a boolean scalar tensor. `body` is a callable +returning a (possibly nested) tuple, namedtuple or list of tensors of the same +arity (length and structure) and types as `loop_vars`. `loop_vars` is a +(possibly nested) tuple, namedtuple or list of tensors that is passed to both +`cond` and `body`. `cond` and `body` both take as many arguments as there are +`loop_vars`. + +In addition to regular Tensors or IndexedSlices, the body may accept and +return TensorArray objects. The flows of the TensorArray objects will +be appropriately forwarded between loops and during gradient calculations. + +Note that `while_loop` calls `cond` and `body` *exactly once* (inside the +call to `while_loop`, and not at all during `Session.run()`). `while_loop` +stitches together the graph fragments created during the `cond` and `body` +calls with some additional graph nodes to create the graph flow that +repeats `body` until `cond` returns false. + +For correctness, tf.while_loop() strictly enforces shape invariants for +the loop variables. A shape invariant is a (possibly partial) shape that +is unchanged across the iterations of the loop. An error will be raised +if the shape of a loop variable after an iteration is determined to be more +general than or incompatible with its shape invariant. For example, a shape +of [11, None] is more general than a shape of [11, 17], and [11, 21] is not +compatible with [11, 17]. By default (if the argument `shape_invariants` is +not specified), it is assumed that the initial shape of each tensor in +`loop_vars` is the same in every iteration. The `shape_invariants` argument +allows the caller to specify a less specific shape invariant for each loop +variable, which is needed if the shape varies between iterations. The +tf.Tensor.set_shape +function may also be used in the `body` function to indicate that +the output loop variable has a particular shape. The shape invariant for +SparseTensor and IndexedSlices are treated specially as follows: + +a) If a loop variable is a SparseTensor, the shape invariant must be +TensorShape([r]) where r is the rank of the dense tensor represented +by the sparse tensor. It means the shapes of the three tensors of the +SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here +is the shape of the SparseTensor.dense_shape property. It must be the shape of +a vector. + +b) If a loop variable is an IndexedSlices, the shape invariant must be +a shape invariant of the values tensor of the IndexedSlices. It means +the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]], +[shape.ndims]). + +`while_loop` implements non-strict semantics, enabling multiple iterations +to run in parallel. The maximum number of parallel iterations can be +controlled by `parallel_iterations`, which gives users some control over +memory consumption and execution order. For correct programs, `while_loop` +should return the same result for any parallel_iterations > 0. + +For training, TensorFlow stores the tensors that are produced in the +forward inference and are needed in back propagation. These tensors are a +main source of memory consumption and often cause OOM errors when training +on GPUs. When the flag swap_memory is true, we swap out these tensors from +GPU to CPU. This for example allows us to train RNN models with very long +sequences and large batches. 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cond` + +A callable that represents the termination condition of the loop. +
+`body` + +A callable that represents the loop body. +
+`loop_vars` + +A (possibly nested) tuple, namedtuple or list of numpy array, +`Tensor`, and `TensorArray` objects. +
+`shape_invariants` + +The shape invariants for the loop variables. +
+`parallel_iterations` + +The number of iterations allowed to run in parallel. It +must be a positive integer. +
+`back_prop` + +Whether backprop is enabled for this while loop. +
+`swap_memory` + +Whether GPU-CPU memory swap is enabled for this loop. +
+`name` + +Optional name prefix for the returned tensors. +
+`maximum_iterations` + +Optional maximum number of iterations of the while loop +to run. If provided, the `cond` output is AND-ed with an additional +condition ensuring the number of iterations executed is no greater than +`maximum_iterations`. +
+`return_same_structure` + +If True, output has same structure as `loop_vars`. If +eager execution is enabled, this is ignored (and always treated as True). +
+ + + + + + + + + + + +
+The output tensors for the loop variables after the loop. +If `return_same_structure` is True, the return value has the same +structure as `loop_vars`. +If `return_same_structure` is False, the return value is a Tensor, +TensorArray or IndexedSlices if the length of `loop_vars` is 1, or a list +otherwise. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `cond` or `body` is not callable. +
+`ValueError` + +if `loop_vars` is empty. +
+ + + +#### Example: + + + +```python +i = tf.constant(0) +c = lambda i: tf.less(i, 10) +b = lambda i: tf.add(i, 1) +r = tf.while_loop(c, b, [i]) +``` + +Example with nesting and a namedtuple: + +```python +import collections +Pair = collections.namedtuple('Pair', 'j, k') +ijk_0 = (tf.constant(0), Pair(tf.constant(1), tf.constant(2))) +c = lambda i, p: i < 10 +b = lambda i, p: (i + 1, Pair((p.j + p.k), (p.j - p.k))) +ijk_final = tf.while_loop(c, b, ijk_0) +``` + +Example using shape_invariants: + +```python +i0 = tf.constant(0) +m0 = tf.ones([2, 2]) +c = lambda i, m: i < 10 +b = lambda i, m: [i+1, tf.concat([m, m], axis=0)] +tf.while_loop( + c, b, loop_vars=[i0, m0], + shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])]) +``` + +Example which demonstrates non-strict semantics: In the following +example, the final value of the counter `i` does not depend on `x`. So +the `while_loop` can increment the counter parallel to updates of `x`. +However, because the loop counter at one loop iteration depends +on the value at the previous iteration, the loop counter itself cannot +be incremented in parallel. Hence if we just want the final value of the +counter (which we print on the line `print(sess.run(i))`), then +`x` will never be incremented, but the counter will be updated on a +single thread. Conversely, if we want the value of the output (which we +print on the line `print(sess.run(out).shape)`), then the counter may be +incremented on its own thread, while `x` can be incremented in +parallel on a separate thread. In the extreme case, it is conceivable +that the thread incrementing the counter runs until completion before +`x` is incremented even a single time. The only thing that can never +happen is that the thread updating `x` can never get ahead of the +counter thread because the thread incrementing `x` depends on the value +of the counter. + +```python +import tensorflow as tf + +n = 10000 +x = tf.constant(list(range(n))) +c = lambda i, x: i < n +b = lambda i, x: (tf.compat.v1.Print(i + 1, [i]), tf.compat.v1.Print(x + 1, +[i], "x:")) +i, out = tf.while_loop(c, b, (0, x)) +with tf.compat.v1.Session() as sess: + print(sess.run(i)) # prints [0] ... [9999] + + # The following line may increment the counter and x in parallel. + # The counter thread may get ahead of the other thread, but not the + # other way around. So you may see things like + # [9996] x:[9987] + # meaning that the counter thread is on iteration 9996, + # while the other thread is on iteration 9987 + print(sess.run(out).shape) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/compat/v1/wrap_function.md b/site/en/api_docs/python/tf/compat/v1/wrap_function.md new file mode 100644 index 00000000000..ddcbef70ad3 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/wrap_function.md @@ -0,0 +1,122 @@ +description: Wraps the TF 1.x function fn into a graph function. + +
+ + +
+ +# tf.compat.v1.wrap_function + + + + + + + + + +Wraps the TF 1.x function fn into a graph function. + + + + + + + +The python function `fn` will be called once with symbolic arguments specified +in the `signature`, traced, and turned into a graph function. Any variables +created by `fn` will be owned by the object returned by `wrap_function`. The +resulting graph function can be called with tensors which match the +signature. + +```python +def f(x, do_add): + v = tf.Variable(5.0) + if do_add: + op = v.assign_add(x) + else: + op = v.assign_sub(x) + with tf.control_dependencies([op]): + return v.read_value() + +f_add = tf.compat.v1.wrap_function(f, [tf.TensorSpec((), tf.float32), True]) + +assert float(f_add(1.0)) == 6.0 +assert float(f_add(1.0)) == 7.0 + +# Can call tf.compat.v1.wrap_function again to get a new trace, a new set +# of variables, and possibly different non-template arguments. +f_sub= tf.compat.v1.wrap_function(f, [tf.TensorSpec((), tf.float32), False]) + +assert float(f_sub(1.0)) == 4.0 +assert float(f_sub(1.0)) == 3.0 +``` + +Both tf.compat.v1.wrap_function and tf.function create a callable +TensorFlow graph. But while tf.function runs all stateful operations +(e.g. tf.print) and sequences operations to provide the same semantics as +eager execution, `wrap_function` is closer to the behavior of `session.run` in +TensorFlow 1.x. It will not run any operations unless they are required to +compute the function's outputs, either through a data dependency or a control +dependency. Nor will it sequence operations. + +Unlike tf.function, `wrap_function` will only trace the Python function +once. As with placeholders in TF 1.x, shapes and dtypes must be provided to +`wrap_function`'s `signature` argument. + +Since it is only traced once, variables and state may be created inside the +function and owned by the function wrapper object. + + + + + + + + + + + + + + + + +
+`fn` + +python function to be wrapped +
+`signature` + +the placeholder and python arguments to be passed to the wrapped +function +
+`name` + +Optional. The name of the function. +
+ + + + + + + + + + + +
+the wrapped graph function. +
+ diff --git a/site/en/api_docs/python/tf/compat/v1/xla.md b/site/en/api_docs/python/tf/compat/v1/xla.md new file mode 100644 index 00000000000..c7c2611d881 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/xla.md @@ -0,0 +1,25 @@ +description: Public API for tf.xla namespace. + +
+ + +
+ +# Module: tf.compat.v1.xla + + + + + + + + + +Public API for tf.xla namespace. + + + +## Modules + +[`experimental`](../../../tf/compat/v1/xla/experimental.md) module: Public API for tf.xla.experimental namespace. + diff --git a/site/en/api_docs/python/tf/compat/v1/xla/experimental.md b/site/en/api_docs/python/tf/compat/v1/xla/experimental.md new file mode 100644 index 00000000000..e1d38e284a8 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/xla/experimental.md @@ -0,0 +1,27 @@ +description: Public API for tf.xla.experimental namespace. + +
+ + +
+ +# Module: tf.compat.v1.xla.experimental + + + + + + + + + +Public API for tf.xla.experimental namespace. + + + +## Functions + +[`compile(...)`](../../../../tf/xla/experimental/compile.md): Builds an operator that compiles and runs `computation` with XLA. + +[`jit_scope(...)`](../../../../tf/xla/experimental/jit_scope.md): Enable or disable JIT compilation of operators within the scope. + diff --git a/site/en/api_docs/python/tf/compat/v1/zeros_like.md b/site/en/api_docs/python/tf/compat/v1/zeros_like.md new file mode 100644 index 00000000000..b9361cab329 --- /dev/null +++ b/site/en/api_docs/python/tf/compat/v1/zeros_like.md @@ -0,0 +1,113 @@ +description: Creates a tensor with all elements set to zero. + +
+ + +
+ +# tf.compat.v1.zeros_like + + + + + + + + + +Creates a tensor with all elements set to zero. + + + + + + + +See also tf.zeros. + +Given a single tensor (`tensor`), this operation returns a tensor of the +same type and shape as `tensor` with all elements set to zero. Optionally, +you can use `dtype` to specify a new type for the returned tensor. + +#### Examples: + + +``` +>>> tensor = tf.constant([[1, 2, 3], [4, 5, 6]]) +>>> tf.zeros_like(tensor) + +``` + +``` +>>> tf.zeros_like(tensor, dtype=tf.float32) + +``` + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`dtype` + +A type for the returned `Tensor`. Must be `float16`, `float32`, +`float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, +`complex64`, `complex128`, `bool` or `string`. (optional) +
+`name` + +A name for the operation (optional). +
+`optimize` + +if `True`, attempt to statically determine the shape of `tensor` +and encode it as a constant. (optional, defaults to `True`) +
+ + + + + + + + + + + +
+A `Tensor` with all elements set to zero. +
+ diff --git a/site/en/api_docs/python/tf/concat.md b/site/en/api_docs/python/tf/concat.md new file mode 100644 index 00000000000..680541b1ea1 --- /dev/null +++ b/site/en/api_docs/python/tf/concat.md @@ -0,0 +1,164 @@ +description: Concatenates tensors along one dimension. + +
+ + +
+ +# tf.concat + + + + + + + + + +Concatenates tensors along one dimension. + + + + + + + + + +See also tf.tile, tf.stack, tf.repeat. + +Concatenates the list of tensors `values` along dimension `axis`. If +`values[i].shape = [D0, D1, ... Daxis(i), ...Dn]`, the concatenated +result has shape + + [D0, D1, ... Raxis, ...Dn] + +where + + Raxis = sum(Daxis(i)) + +That is, the data from the input tensors is joined along the `axis` +dimension. + +The number of dimensions of the input tensors must match, and all dimensions +except `axis` must be equal. + +#### For example: + + + +``` +>>> t1 = [[1, 2, 3], [4, 5, 6]] +>>> t2 = [[7, 8, 9], [10, 11, 12]] +>>> concat([t1, t2], 0) + +``` + +``` +>>> concat([t1, t2], 1) + +``` + +As in Python, the `axis` could also be negative numbers. Negative `axis` +are interpreted as counting from the end of the rank, i.e., + `axis + rank(values)`-th dimension. + +#### For example: + + + +``` +>>> t1 = [[[1, 2], [2, 3]], [[4, 4], [5, 3]]] +>>> t2 = [[[7, 4], [8, 4]], [[2, 10], [15, 11]]] +>>> tf.concat([t1, t2], -1) + +``` + +Note: If you are concatenating along a new axis consider using stack. +E.g. + +```python +tf.concat([tf.expand_dims(t, axis) for t in tensors], axis) +``` + +can be rewritten as + +```python +tf.stack(tensors, axis=axis) +``` + + + + + + + + + + + + + + + + +
+`values` + +A list of `Tensor` objects or a single `Tensor`. +
+`axis` + +0-D `int32` `Tensor`. Dimension along which to concatenate. Must be +in the range `[-rank(values), rank(values))`. As in Python, indexing for +axis is 0-based. Positive axis in the range of `[0, rank(values))` refers +to `axis`-th dimension. And negative axis refers to `axis + +rank(values)`-th dimension. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` resulting from concatenation of the input tensors. +
+ diff --git a/site/en/api_docs/python/tf/cond.md b/site/en/api_docs/python/tf/cond.md new file mode 100644 index 00000000000..b926545931f --- /dev/null +++ b/site/en/api_docs/python/tf/cond.md @@ -0,0 +1,164 @@ +description: Return true_fn() if the predicate pred is true else false_fn(). + +
+ + +
+ +# tf.cond + + + + + + + + + +Return `true_fn()` if the predicate `pred` is true else `false_fn()`. + + + + + + + +`true_fn` and `false_fn` both return lists of output tensors. `true_fn` and +`false_fn` must have the same non-zero number and type of outputs. + +**WARNING**: Any Tensors or Operations created outside of `true_fn` and +`false_fn` will be executed regardless of which branch is selected at runtime. + +Although this behavior is consistent with the dataflow model of TensorFlow, +it has frequently surprised users who expected a lazier semantics. +Consider the following simple program: + +```python +z = tf.multiply(a, b) +result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y)) +``` + +If `x < y`, the `tf.add` operation will be executed and `tf.square` +operation will not be executed. Since `z` is needed for at least one +branch of the `cond`, the tf.multiply operation is always executed, +unconditionally. + +Note that `cond` calls `true_fn` and `false_fn` *exactly once* (inside the +call to `cond`, and not at all during `Session.run()`). `cond` +stitches together the graph fragments created during the `true_fn` and +`false_fn` calls with some additional graph nodes to ensure that the right +branch gets executed depending on the value of `pred`. + +tf.cond supports nested structures as implemented in +`tensorflow.python.util.nest`. Both `true_fn` and `false_fn` must return the +same (possibly nested) value structure of lists, tuples, and/or named tuples. +Singleton lists and tuples form the only exceptions to this: when returned by +`true_fn` and/or `false_fn`, they are implicitly unpacked to single values. + +Note: It is illegal to "directly" use tensors created inside a cond branch +outside it, e.g. by storing a reference to a branch tensor in the python +state. If you need to use a tensor created in a branch function you should +return it as an output of the branch function and use the output from +tf.cond instead. + + + + + + + + + + + + + + + + + + + +
+`pred` + +A scalar determining whether to return the result of `true_fn` or +`false_fn`. +
+`true_fn` + +The callable to be performed if pred is true. +
+`false_fn` + +The callable to be performed if pred is false. +
+`name` + +Optional name prefix for the returned tensors. +
+ + + + + + + + + + + +
+Tensors returned by the call to either `true_fn` or `false_fn`. If the +callables return a singleton list, the element is extracted from the list. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `true_fn` or `false_fn` is not callable. +
+`ValueError` + +if `true_fn` and `false_fn` do not return the same number of +tensors, or return tensors of different types. +
+ + + +#### Example: + + + +```python +x = tf.constant(2) +y = tf.constant(5) +def f1(): return tf.multiply(x, 17) +def f2(): return tf.add(y, 23) +r = tf.cond(tf.less(x, y), f1, f2) +# r is set to f1(). +# Operations in f2 (e.g., tf.add) are not executed. +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/config.md b/site/en/api_docs/python/tf/config.md new file mode 100644 index 00000000000..361898d4d65 --- /dev/null +++ b/site/en/api_docs/python/tf/config.md @@ -0,0 +1,63 @@ +description: Public API for tf.config namespace. + +
+ + +
+ +# Module: tf.config + + + + + + + + + +Public API for tf.config namespace. + + + +## Modules + +[`experimental`](../tf/config/experimental.md) module: Public API for tf.config.experimental namespace. + +[`optimizer`](../tf/config/optimizer.md) module: Public API for tf.config.optimizer namespace. + +[`threading`](../tf/config/threading.md) module: Public API for tf.config.threading namespace. + +## Classes + +[`class LogicalDevice`](../tf/config/LogicalDevice.md): Abstraction for a logical device initialized by the runtime. + +[`class LogicalDeviceConfiguration`](../tf/config/LogicalDeviceConfiguration.md): Configuration class for a logical devices. + +[`class PhysicalDevice`](../tf/config/PhysicalDevice.md): Abstraction for a locally visible physical device. + +## Functions + +[`experimental_connect_to_cluster(...)`](../tf/config/experimental_connect_to_cluster.md): Connects to the given cluster. + +[`experimental_connect_to_host(...)`](../tf/config/experimental_connect_to_host.md): Connects to a single machine to enable remote execution on it. + +[`experimental_functions_run_eagerly(...)`](../tf/config/experimental_functions_run_eagerly.md): Returns the value of the `experimental_run_functions_eagerly` setting. + +[`experimental_run_functions_eagerly(...)`](../tf/config/experimental_run_functions_eagerly.md): Enables / disables eager execution of tf.functions. + +[`get_logical_device_configuration(...)`](../tf/config/get_logical_device_configuration.md): Get the virtual device configuration for a tf.config.PhysicalDevice. + +[`get_soft_device_placement(...)`](../tf/config/get_soft_device_placement.md): Get if soft device placement is enabled. + +[`get_visible_devices(...)`](../tf/config/get_visible_devices.md): Get the list of visible physical devices. + +[`list_logical_devices(...)`](../tf/config/list_logical_devices.md): Return a list of logical devices created by runtime. + +[`list_physical_devices(...)`](../tf/config/list_physical_devices.md): Return a list of physical devices visible to the host runtime. + +[`set_logical_device_configuration(...)`](../tf/config/set_logical_device_configuration.md): Set the logical device configuration for a tf.config.PhysicalDevice. + +[`set_soft_device_placement(...)`](../tf/config/set_soft_device_placement.md): Set if soft device placement is enabled. + +[`set_visible_devices(...)`](../tf/config/set_visible_devices.md): Set the list of visible devices. + diff --git a/site/en/api_docs/python/tf/config/LogicalDevice.md b/site/en/api_docs/python/tf/config/LogicalDevice.md new file mode 100644 index 00000000000..ef1a6b7a9d0 --- /dev/null +++ b/site/en/api_docs/python/tf/config/LogicalDevice.md @@ -0,0 +1,91 @@ +description: Abstraction for a logical device initialized by the runtime. + +
+ + + + + +
+ +# tf.config.LogicalDevice + + + + + + + + + +Abstraction for a logical device initialized by the runtime. + + + + + + + + + +A tf.config.LogicalDevice corresponds to an initialized logical device on a +tf.config.PhysicalDevice or a remote device visible to the cluster. Tensors +and operations can be placed on a specific logical device by calling +tf.device with a specified tf.config.LogicalDevice. + +#### Fields: + + +* `name`: The fully qualified name of the device. Can be used for Op or function + placement. +* `device_type`: String declaring the type of device such as "CPU" or "GPU". + + + + + + + + + + + + + + + + +
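+
+A small placement sketch (a CPU logical device is assumed to be present,
+which is always the case):
+
+```python
+import tensorflow as tf
+
+logical_cpus = tf.config.list_logical_devices('CPU')
+print(logical_cpus[0].name, logical_cpus[0].device_type)
+# e.g. /device:CPU:0 CPU
+
+# Use the fully qualified name for op placement.
+with tf.device(logical_cpus[0].name):
+  x = tf.constant([1.0, 2.0]) * 2.0
+```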
+`name` + + +
+`device_type` + + +
+ + + +## Class Variables + +* `device_type` +* `name` diff --git a/site/en/api_docs/python/tf/config/LogicalDeviceConfiguration.md b/site/en/api_docs/python/tf/config/LogicalDeviceConfiguration.md new file mode 100644 index 00000000000..51151b4b59c --- /dev/null +++ b/site/en/api_docs/python/tf/config/LogicalDeviceConfiguration.md @@ -0,0 +1,86 @@ +description: Configuration class for a logical devices. + +
+ + + + +
+ +# tf.config.LogicalDeviceConfiguration + + + + + + + + + +Configuration class for a logical devices. + + + + + + + + + +The class specifies the parameters to configure a tf.config.PhysicalDevice +as it is initialized to a tf.config.LogicalDevice during runtime +initialization. Not all fields are valid for all device types. + +See tf.config.get_logical_device_configuration and +tf.config.set_logical_device_configuration for usage examples. + +#### Fields: + + +* `memory_limit`: (optional) Maximum memory (in MB) to allocate on the virtual + device. Currently only supported for GPUs. + + + + + + + + + + + + + +
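+
+A sketch of splitting one GPU into two logical devices (the memory limits are
+illustrative; this must run before the runtime initializes the GPU):
+
+```python
+import tensorflow as tf
+
+gpus = tf.config.list_physical_devices('GPU')
+if gpus:
+  try:
+    tf.config.set_logical_device_configuration(
+        gpus[0],
+        [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
+         tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
+    print(len(tf.config.list_logical_devices('GPU')), "logical GPUs")
+  except RuntimeError:
+    # Virtual devices cannot be modified once the runtime is initialized.
+    pass
+```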
+`memory_limit` + + +
+ + + +## Class Variables + +* `memory_limit` diff --git a/site/en/api_docs/python/tf/config/PhysicalDevice.md b/site/en/api_docs/python/tf/config/PhysicalDevice.md new file mode 100644 index 00000000000..9ee4a8038d8 --- /dev/null +++ b/site/en/api_docs/python/tf/config/PhysicalDevice.md @@ -0,0 +1,98 @@ +description: Abstraction for a locally visible physical device. + +
+ + + + + +
+ +# tf.config.PhysicalDevice + + + + + + + + + +Abstraction for a locally visible physical device. + + + + + + + + + +TensorFlow can utilize various devices such as the CPU or multiple GPUs +for computation. Before initializing a local device for use, the user can +customize certain properties of the device such as it's visibility or memory +configuration. + +Once a visible tf.config.PhysicalDevice is initialized one or more +tf.config.LogicalDevice objects are created. Use +tf.config.set_visible_devices to configure the visibility of a physical +device and tf.config.set_logical_device_configuration to configure multiple +tf.config.LogicalDevice objects for a tf.config.PhysicalDevice. This is +useful when separation between models is needed or to simulate a multi-device +environment. + +#### Fields: + + +* `name`: Unique identifier for device. +* `device_type`: String declaring the type of device such as "CPU" or "GPU". + + + + + + + + + + + + + + + + +
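+
+A short sketch of inspecting physical devices and restricting visibility
+(keeping only the first GPU is an arbitrary choice for illustration):
+
+```python
+import tensorflow as tf
+
+physical_gpus = tf.config.list_physical_devices('GPU')
+for dev in physical_gpus:
+  print(dev.name, dev.device_type)  # e.g. /physical_device:GPU:0 GPU
+
+if physical_gpus:
+  try:
+    # Make only the first GPU visible to the runtime.
+    tf.config.set_visible_devices(physical_gpus[:1], 'GPU')
+  except RuntimeError:
+    # Visibility cannot be changed once the runtime is initialized.
+    pass
+```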
+`name` + + +
+`device_type` + + +
+ + + +## Class Variables + +* `device_type` +* `name` diff --git a/site/en/api_docs/python/tf/config/experimental.md b/site/en/api_docs/python/tf/config/experimental.md new file mode 100644 index 00000000000..8c6fde8e57b --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental.md @@ -0,0 +1,57 @@ +description: Public API for tf.config.experimental namespace. + +
+ + +
+ +# Module: tf.config.experimental + + + + + + + + + +Public API for tf.config.experimental namespace. + + + +## Classes + +[`class ClusterDeviceFilters`](../../tf/config/experimental/ClusterDeviceFilters.md): Represent a collection of device filters for the remote workers in cluster. + +[`class VirtualDeviceConfiguration`](../../tf/config/LogicalDeviceConfiguration.md): Configuration class for a logical devices. + +## Functions + +[`disable_mlir_bridge(...)`](../../tf/config/experimental/disable_mlir_bridge.md): Disables experimental MLIR-Based TensorFlow Compiler Bridge. + +[`enable_mlir_bridge(...)`](../../tf/config/experimental/enable_mlir_bridge.md): Enables experimental MLIR-Based TensorFlow Compiler Bridge. + +[`get_device_policy(...)`](../../tf/config/experimental/get_device_policy.md): Gets the current device policy. + +[`get_memory_growth(...)`](../../tf/config/experimental/get_memory_growth.md): Get if memory growth is enabled for a `PhysicalDevice`. + +[`get_synchronous_execution(...)`](../../tf/config/experimental/get_synchronous_execution.md): Gets whether operations are executed synchronously or asynchronously. + +[`get_virtual_device_configuration(...)`](../../tf/config/get_logical_device_configuration.md): Get the virtual device configuration for a tf.config.PhysicalDevice. + +[`get_visible_devices(...)`](../../tf/config/get_visible_devices.md): Get the list of visible physical devices. + +[`list_logical_devices(...)`](../../tf/config/list_logical_devices.md): Return a list of logical devices created by runtime. + +[`list_physical_devices(...)`](../../tf/config/list_physical_devices.md): Return a list of physical devices visible to the host runtime. + +[`set_device_policy(...)`](../../tf/config/experimental/set_device_policy.md): Sets the current thread device policy. + +[`set_memory_growth(...)`](../../tf/config/experimental/set_memory_growth.md): Set if memory growth should be enabled for a `PhysicalDevice`. + +[`set_synchronous_execution(...)`](../../tf/config/experimental/set_synchronous_execution.md): Specifies whether operations are executed synchronously or asynchronously. + +[`set_virtual_device_configuration(...)`](../../tf/config/set_logical_device_configuration.md): Set the logical device configuration for a tf.config.PhysicalDevice. + +[`set_visible_devices(...)`](../../tf/config/set_visible_devices.md): Set the list of visible devices. + diff --git a/site/en/api_docs/python/tf/config/experimental/ClusterDeviceFilters.md b/site/en/api_docs/python/tf/config/experimental/ClusterDeviceFilters.md new file mode 100644 index 00000000000..f7bd7b1f414 --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental/ClusterDeviceFilters.md @@ -0,0 +1,86 @@ +description: Represent a collection of device filters for the remote workers in cluster. + +
+ + + + +
+ +# tf.config.experimental.ClusterDeviceFilters + + + + + + + + + +Represent a collection of device filters for the remote workers in cluster. + + + + + + + + + +NOTE: this is an experimental API and subject to changes. + +Set device filters for selective jobs and tasks. For each remote worker, the +device filters are a list of strings. When any filters are present, the remote +worker will ignore all devices which do not match any of its filters. Each +filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", +etc. Note that a device is always visible to the worker it is located on. + +For example, to set the device filters for a parameter server cluster: + +```python +cdf = tf.config.experimental.ClusterDeviceFilters() +for i in range(num_workers): + cdf.set_device_filters('worker', i, ['/job:ps']) +for i in range(num_ps): + cdf.set_device_filters('ps', i, ['/job:worker']) + +tf.config.experimental_connect_to_cluster(cluster_def, + cluster_device_filters=cdf) +``` + +The device filters can be partically specified. For remote tasks that do not +have device filters specified, all devices will be visible to them. + +## Methods + +

set_device_filters

+ +View source + + + +Set the device filters for given job name and task id. + + + + diff --git a/site/en/api_docs/python/tf/config/experimental/disable_mlir_bridge.md b/site/en/api_docs/python/tf/config/experimental/disable_mlir_bridge.md new file mode 100644 index 00000000000..2e272dd73f6 --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental/disable_mlir_bridge.md @@ -0,0 +1,42 @@ +description: Disables experimental MLIR-Based TensorFlow Compiler Bridge. + +
+ + +
+ +# tf.config.experimental.disable_mlir_bridge + + + + + + + + + +Disables experimental MLIR-Based TensorFlow Compiler Bridge. + + + + + + + + diff --git a/site/en/api_docs/python/tf/config/experimental/enable_mlir_bridge.md b/site/en/api_docs/python/tf/config/experimental/enable_mlir_bridge.md new file mode 100644 index 00000000000..0da6ae72c29 --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental/enable_mlir_bridge.md @@ -0,0 +1,52 @@ +description: Enables experimental MLIR-Based TensorFlow Compiler Bridge. + +
+ + +
+ +# tf.config.experimental.enable_mlir_bridge + + + + + + + + + +Enables experimental MLIR-Based TensorFlow Compiler Bridge. + + + + + + + + + +DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT. + +NOTE: MLIR-Based TensorFlow Compiler is under active development and has +missing features, please refrain from using. This API exists for development +and testing only. + +TensorFlow Compiler Bridge (TF Bridge) is responsible for translating parts +of TensorFlow graph into a form that can be accepted as an input by a backend +compiler such as XLA. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/config/experimental/get_device_policy.md b/site/en/api_docs/python/tf/config/experimental/get_device_policy.md new file mode 100644 index 00000000000..1d2c18fbea5 --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental/get_device_policy.md @@ -0,0 +1,61 @@ +description: Gets the current device policy. + +
+ + +
+ +# tf.config.experimental.get_device_policy + + + + + + + + + +Gets the current device policy. + + + + + + + + + +The device policy controls how operations requiring inputs on a specific +device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1). + +This function only gets the device policy for the current thread. Any +subsequently started thread will again use the default policy. + + + + + + + + + +
+Current thread device policy +
+ diff --git a/site/en/api_docs/python/tf/config/experimental/get_memory_growth.md b/site/en/api_docs/python/tf/config/experimental/get_memory_growth.md new file mode 100644 index 00000000000..54648cca40b --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental/get_memory_growth.md @@ -0,0 +1,108 @@ +description: Get if memory growth is enabled for a PhysicalDevice. + +
+ + +
+ +# tf.config.experimental.get_memory_growth + + + + + + + + + +Get if memory growth is enabled for a `PhysicalDevice`. + + + + + + + + + +If memory growth is enabled for a `PhysicalDevice`, the runtime initialization +will not allocate all memory on the device. + +#### For example: + + + +``` +>>> physical_devices = tf.config.list_physical_devices('GPU') +>>> try: +... tf.config.experimental.set_memory_growth(physical_devices[0], True) +... assert tf.config.experimental.get_memory_growth(physical_devices[0]) +... except: +... # Invalid device or cannot modify virtual devices once initialized. +... pass +``` + + + + + + + + + + +
+`device` + +`PhysicalDevice` to query +
+ + + + + + + + + + + +
+A boolean indicating the memory growth setting for the `PhysicalDevice`. +
+ + + + + + + + + + + + +
+`ValueError` + +Invalid `PhysicalDevice` specified. +
+ diff --git a/site/en/api_docs/python/tf/config/experimental/get_synchronous_execution.md b/site/en/api_docs/python/tf/config/experimental/get_synchronous_execution.md new file mode 100644 index 00000000000..497eff87d38 --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental/get_synchronous_execution.md @@ -0,0 +1,58 @@ +description: Gets whether operations are executed synchronously or asynchronously. + +
+ + +
+ +# tf.config.experimental.get_synchronous_execution + + + + + + + + + +Gets whether operations are executed synchronously or asynchronously. + + + + + + + + + +TensorFlow can execute operations synchronously or asynchronously. If +asynchronous execution is enabled, operations may return "non-ready" handles. + + + + + + + + + +
+Current thread execution mode +
+ diff --git a/site/en/api_docs/python/tf/config/experimental/set_device_policy.md b/site/en/api_docs/python/tf/config/experimental/set_device_policy.md new file mode 100644 index 00000000000..65ad0727956 --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental/set_device_policy.md @@ -0,0 +1,96 @@ +description: Sets the current thread device policy. + +
+ + +
+ +# tf.config.experimental.set_device_policy + + + + + + + + + +Sets the current thread device policy. + + + + + + + + + +The device policy controls how operations requiring inputs on a specific +device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1). + +When using the default, an appropriate policy will be picked automatically. +The default policy may change over time. + +This function only sets the device policy for the current thread. Any +subsequently started thread will again use the default policy. + + + + + + + + + + +
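+
+A minimal sketch of temporarily switching the policy for the current thread:
+
+```python
+import tensorflow as tf
+
+# Remember the current policy so it can be restored later.
+previous_policy = tf.config.experimental.get_device_policy()
+
+# Log a warning whenever an input has to be copied to another device.
+tf.config.experimental.set_device_policy('warn')
+
+# ... run code that may mix devices ...
+
+tf.config.experimental.set_device_policy(previous_policy)
+```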
+`device_policy` + +A device policy. +Valid values: +- None: Switch to a system default. +- 'warn': Copies the tensors which are not on the right device and logs +a warning. +- 'explicit': Raises an error if the placement is not as required. +- 'silent': Silently copies the tensors. Note that this may hide +performance problems as there is no notification provided when +operations are blocked on the tensor being copied between devices. +- 'silent_for_int32': silently copies `int32` tensors, raising errors on +the other ones. +
+ + + + + + + + + + + + +
+`ValueError` + +If an invalid `device_policy` is passed. +
+ diff --git a/site/en/api_docs/python/tf/config/experimental/set_memory_growth.md b/site/en/api_docs/python/tf/config/experimental/set_memory_growth.md new file mode 100644 index 00000000000..b3f9b8a75e3 --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental/set_memory_growth.md @@ -0,0 +1,108 @@ +description: Set if memory growth should be enabled for a PhysicalDevice. + +
+ + +
+ +# tf.config.experimental.set_memory_growth + + + + + + + + + +Set if memory growth should be enabled for a `PhysicalDevice`. + + + + + + + + + +If memory growth is enabled for a `PhysicalDevice`, the runtime initialization +will not allocate all memory on the device. Memory growth cannot be configured +on a `PhysicalDevice` with virtual devices configured. + +#### For example: + + + +``` +>>> physical_devices = tf.config.list_physical_devices('GPU') +>>> try: +... tf.config.experimental.set_memory_growth(physical_devices[0], True) +... except: +... # Invalid device or cannot modify virtual devices once initialized. +... pass +``` + + + + + + + + + + + + + +
+`device` + +`PhysicalDevice` to configure +
+`enable` + +(Boolean) Whether to enable or disable memory growth +
+ + + + + + + + + + + + + + + +
+`ValueError` + +Invalid `PhysicalDevice` specified. +
+`RuntimeError` + +Runtime is already initialized. +
+ diff --git a/site/en/api_docs/python/tf/config/experimental/set_synchronous_execution.md b/site/en/api_docs/python/tf/config/experimental/set_synchronous_execution.md new file mode 100644 index 00000000000..0ba95c68e8b --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental/set_synchronous_execution.md @@ -0,0 +1,70 @@ +description: Specifies whether operations are executed synchronously or asynchronously. + +
+ + +
+ +# tf.config.experimental.set_synchronous_execution + + + + + + + + + +Specifies whether operations are executed synchronously or asynchronously. + + + + + + + + + +TensorFlow can execute operations synchronously or asynchronously. If +asynchronous execution is enabled, operations may return "non-ready" handles. + +When `enable` is set to None, an appropriate value will be picked +automatically. The value picked may change between TensorFlow releases. + + + + + + + + + + +
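+
+A short sketch of toggling the execution mode for the current thread:
+
+```python
+import tensorflow as tf
+
+# Dispatch ops asynchronously; values are synchronized when actually needed
+# (for example when a tensor is printed or converted to numpy).
+tf.config.experimental.set_synchronous_execution(False)
+assert not tf.config.experimental.get_synchronous_execution()
+
+# Switch back to synchronous execution.
+tf.config.experimental.set_synchronous_execution(True)
+```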
+`enable` + +Whether operations should be dispatched synchronously. +Valid values: +- None: sets the system default. +- True: executes each operation synchronously. +- False: executes each operation asynchronously. +
+ diff --git a/site/en/api_docs/python/tf/config/experimental_connect_to_cluster.md b/site/en/api_docs/python/tf/config/experimental_connect_to_cluster.md new file mode 100644 index 00000000000..3708e5d1924 --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental_connect_to_cluster.md @@ -0,0 +1,135 @@ +description: Connects to the given cluster. + +
+ + +
+ +# tf.config.experimental_connect_to_cluster + + + + + + + + + +Connects to the given cluster. + + + + + + + + + +Will make devices on the cluster available to use. Note that calling this more +than once will work, but will invalidate any tensor handles on the old remote +devices. + +If the given local job name is not present in the cluster specification, it +will be automatically added, using an unused port on the localhost. + +Device filters can be specified to isolate groups of remote tasks to avoid +undesired accesses between workers. Workers accessing resources or launching +ops / functions on filtered remote devices will result in errors (unknown +devices). For any remote task, if no device filter is present, all cluster +devices will be visible; if any device filter is specified, it can only +see devices matching at least one filter. Devices on the task itself are +always visible. Device filters can be particially specified. + +For example, for a cluster set up for parameter server training, the following +device filters might be specified: + +```python +cdf = tf.config.experimental.ClusterDeviceFilters() +# For any worker, only the devices on PS nodes and itself are visible +for i in range(num_workers): + cdf.set_device_filters('worker', i, ['/job:ps']) +# Similarly for any ps, only the devices on workers and itself are visible +for i in range(num_ps): + cdf.set_device_filters('ps', i, ['/job:worker']) + +tf.config.experimental_connect_to_cluster(cluster_def, + cluster_device_filters=cdf) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cluster_spec_or_resolver` + +A `ClusterSpec` or `ClusterResolver` describing +the cluster. +
+`job_name` + +The name of the local job. +
+`task_index` + +The local task index. +
+`protocol` + +The communication protocol, such as `"grpc"`. If unspecified, will +use the default from `python/platform/remote_utils.py`. +
+`make_master_device_default` + +If True and a cluster resolver is passed, will +automatically enter the master task device scope, which indicates the +master becomes the default device to run ops. It won't do anything if +a cluster spec is passed. Will throw an error if the caller is currently +already in some device scope. +
+`cluster_device_filters` + +an instance of +`tf.train.experimental/ClusterDeviceFilters` that specify device filters +to the remote tasks in cluster. +
+ diff --git a/site/en/api_docs/python/tf/config/experimental_connect_to_host.md b/site/en/api_docs/python/tf/config/experimental_connect_to_host.md new file mode 100644 index 00000000000..a29cba3738e --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental_connect_to_host.md @@ -0,0 +1,102 @@ +description: Connects to a single machine to enable remote execution on it. + +
+ + +
+ +# tf.config.experimental_connect_to_host + + + + + + + + + +Connects to a single machine to enable remote execution on it. + + + + + + + + + +Will make devices on the remote host available to use. Note that calling this +more than once will work, but will invalidate any tensor handles on the old +remote devices. + +Using the default job_name of worker, you can schedule ops to run remotely as +follows: +```python +# When eager execution is enabled, connect to the remote host. +tf.config.experimental_connect_to_host("exampleaddr.com:9876") + +with ops.device("job:worker/replica:0/task:1/device:CPU:0"): + # The following tensors should be resident on the remote device, and the op + # will also execute remotely. + x1 = array_ops.ones([2, 2]) + x2 = array_ops.ones([2, 2]) + y = math_ops.matmul(x1, x2) +``` + + + + + + + + + + + + + +
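+
+The same pattern expressed with public `tf.*` symbols, as a sketch (the
+address and task index are placeholders):
+
+```python
+import tensorflow as tf
+
+tf.config.experimental_connect_to_host("exampleaddr.com:9876")
+
+with tf.device("/job:worker/replica:0/task:1/device:CPU:0"):
+  # Both tensors and the matmul are placed on the remote CPU device.
+  x1 = tf.ones([2, 2])
+  x2 = tf.ones([2, 2])
+  y = tf.matmul(x1, x2)
+```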
+`remote_host` + +a single or a list the remote server addr in host-port format. +
+`job_name` + +The job name under which the new server will be accessible. +
+ + + + + + + + + + + + +
+`ValueError` + +if remote_host is None. +
+ diff --git a/site/en/api_docs/python/tf/config/experimental_functions_run_eagerly.md b/site/en/api_docs/python/tf/config/experimental_functions_run_eagerly.md new file mode 100644 index 00000000000..df1f15a2604 --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental_functions_run_eagerly.md @@ -0,0 +1,42 @@ +description: Returns the value of the experimental_run_functions_eagerly setting. + +
+ + +
+ +# tf.config.experimental_functions_run_eagerly + + + + + + + + + +Returns the value of the `experimental_run_functions_eagerly` setting. + + + + + + + + diff --git a/site/en/api_docs/python/tf/config/experimental_run_functions_eagerly.md b/site/en/api_docs/python/tf/config/experimental_run_functions_eagerly.md new file mode 100644 index 00000000000..e029cf5dc5a --- /dev/null +++ b/site/en/api_docs/python/tf/config/experimental_run_functions_eagerly.md @@ -0,0 +1,98 @@ +description: Enables / disables eager execution of tf.functions. + +
+ + +
+ +# tf.config.experimental_run_functions_eagerly + + + + + + + + + +Enables / disables eager execution of tf.functions. + + + + + + + + + +Calling tf.config.experimental_run_functions_eagerly(True) will make all +invocations of tf.function run eagerly instead of running as a traced graph +function. + +This can be useful for debugging or profiling. For example, let's say you +implemented a simple iterative sqrt function, and you want to collect the +intermediate values and plot the convergence. Appending the values to a list +in `@tf.function` normally wouldn't work since it will just record the Tensors +being traced, not the values. Instead, you can do the following. + +``` +>>> ys = [] +>>> +>>> @tf.function +... def sqrt(x): +... y = x / 2 +... d = y +... for _ in range(10): +... d /= 2 +... if y * y < x: +... y += d +... else: +... y -= d +... ys.append(y.numpy()) +... return y +>>> +>>> tf.config.experimental_run_functions_eagerly(True) +>>> sqrt(tf.constant(2.)) + +>>> ys +[1.5, 1.25, 1.375, 1.4375, 1.40625, 1.421875, 1.4140625, 1.4179688, 1.4160156, +1.4150391] +>>> tf.config.experimental_run_functions_eagerly(False) +``` + +Calling tf.config.experimental_run_functions_eagerly(False) will undo this +behavior. + + + + + + + + + + +
+`run_eagerly` + +Boolean. Whether to run functions eagerly. +
+ diff --git a/site/en/api_docs/python/tf/config/get_logical_device_configuration.md b/site/en/api_docs/python/tf/config/get_logical_device_configuration.md new file mode 100644 index 00000000000..ee22181b23e --- /dev/null +++ b/site/en/api_docs/python/tf/config/get_logical_device_configuration.md @@ -0,0 +1,106 @@ +description: Get the virtual device configuration for a tf.config.PhysicalDevice. + +
+ + +
+ +# tf.config.get_logical_device_configuration + + + + + + + + + +Get the virtual device configuration for a tf.config.PhysicalDevice. + + + + + + + + + +Returns the list of tf.config.LogicalDeviceConfiguration +objects previously configured by a call to +tf.config.set_logical_device_configuration. + +#### For example: + + + +``` +>>> physical_devices = tf.config.list_physical_devices('CPU') +>>> assert len(physical_devices) == 1, "No CPUs found" +>>> configs = tf.config.get_logical_device_configuration( +... physical_devices[0]) +>>> try: +... assert configs is None +... tf.config.set_logical_device_configuration( +... physical_devices[0], +... [tf.config.LogicalDeviceConfiguration(), +... tf.config.LogicalDeviceConfiguration()]) +... configs = tf.config.get_logical_device_configuration( +... physical_devices[0]) +... assert len(configs) == 2 +... except: +... # Cannot modify virtual devices once initialized. +... pass +``` + + + + + + + + + + +
+`device` + +`PhysicalDevice` to query +
+ + + + + + + + + + + +
+List of tf.config.LogicalDeviceConfiguration objects or +`None` if no virtual device configuration has been set for this physical +device. +
+ diff --git a/site/en/api_docs/python/tf/config/get_soft_device_placement.md b/site/en/api_docs/python/tf/config/get_soft_device_placement.md new file mode 100644 index 00000000000..a81cd39259a --- /dev/null +++ b/site/en/api_docs/python/tf/config/get_soft_device_placement.md @@ -0,0 +1,60 @@ +description: Get if soft device placement is enabled. + +
+ + +
+ +# tf.config.get_soft_device_placement + + + + + + + + + +Get if soft device placement is enabled. + + + + + + + + + +If enabled, an op will be placed on CPU if any of the following are true + 1. there's no GPU implementation for the OP + 2. no GPU devices are known or registered + 3. need to co-locate with reftype input(s) which are from CPU + + + + + + + + + +
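+
+A small sketch pairing this getter with tf.config.set_soft_device_placement
+(the GPU index below is deliberately nonexistent to show the fallback):
+
+```python
+import tensorflow as tf
+
+tf.config.set_soft_device_placement(True)
+assert tf.config.get_soft_device_placement()
+
+# With soft placement enabled, pinning to a device that does not exist
+# falls back to an available device instead of raising an error.
+with tf.device('/GPU:42'):
+  x = tf.constant(1.0) + tf.constant(2.0)
+```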
+If soft placement is enabled. +
+ diff --git a/site/en/api_docs/python/tf/config/get_visible_devices.md b/site/en/api_docs/python/tf/config/get_visible_devices.md new file mode 100644 index 00000000000..e3109b0a523 --- /dev/null +++ b/site/en/api_docs/python/tf/config/get_visible_devices.md @@ -0,0 +1,97 @@ +description: Get the list of visible physical devices. + +
+ + +
+ +# tf.config.get_visible_devices + + + + + + + + + +Get the list of visible physical devices. + + + + + + + + + +Returns the list of `PhysicalDevice`s currently marked as visible to the +runtime. A visible device will have at least one `LogicalDevice` associated +with it once the runtime is initialized. + +The following example verifies all visible GPUs have been disabled: + +``` +>>> physical_devices = tf.config.list_physical_devices('GPU') +>>> try: +... # Disable all GPUS +... tf.config.set_visible_devices([], 'GPU') +... visible_devices = tf.config.get_visible_devices() +... for device in visible_devices: +... assert device.device_type != 'GPU' +... except: +... # Invalid device or cannot modify virtual devices once initialized. +... pass +``` + + + + + + + + + + +
+`device_type` + +(optional string) Only include devices matching this device +type. For example "CPU" or "GPU". +
+ + + + + + + + + + + +
+List of visible `PhysicalDevice`s +
+ diff --git a/site/en/api_docs/python/tf/config/list_logical_devices.md b/site/en/api_docs/python/tf/config/list_logical_devices.md new file mode 100644 index 00000000000..79c664dfb35 --- /dev/null +++ b/site/en/api_docs/python/tf/config/list_logical_devices.md @@ -0,0 +1,102 @@ +description: Return a list of logical devices created by runtime. + +
+ + +
+ +# tf.config.list_logical_devices + + + + + + + + + +Return a list of logical devices created by runtime. + + + + + + + + + +Logical devices may correspond to physical devices or remote devices in the +cluster. Operations and tensors may be placed on these devices by using the +`name` of the tf.config.LogicalDevice. + +Calling tf.config.list_logical_devices triggers the runtime to configure any +tf.config.PhysicalDevice visible to the runtime, thereby preventing +further configuration. To avoid runtime initialization, call +tf.config.list_physical_devices instead. + +#### For example: + + + +``` +>>> logical_devices = tf.config.list_logical_devices('GPU') +>>> if len(logical_devices) > 0: +... # Allocate on GPU:0 +... with tf.device(logical_devices[0].name): +... one = tf.constant(1) +... # Allocate on GPU:1 +... with tf.device(logical_devices[1].name): +... two = tf.constant(2) +``` + + + + + + + + + + +
+`device_type` + +(optional string) Only include devices matching this device +type. For example "CPU" or "GPU". +
+ + + + + + + + + + + +
+List of initialized `LogicalDevice`s +
+ diff --git a/site/en/api_docs/python/tf/config/list_physical_devices.md b/site/en/api_docs/python/tf/config/list_physical_devices.md new file mode 100644 index 00000000000..2d91651ce99 --- /dev/null +++ b/site/en/api_docs/python/tf/config/list_physical_devices.md @@ -0,0 +1,98 @@ +description: Return a list of physical devices visible to the host runtime. + +
+ + +
+ +# tf.config.list_physical_devices + + + + + + + + + +Return a list of physical devices visible to the host runtime. + + + + + + + + + +Physical devices are hardware devices present on the host machine. By default +all discovered CPU and GPU devices are considered visible. + +This API allows querying the physical hardware resources prior to runtime +initialization. Thus, giving an opportunity to call any additional +configuration APIs. This is in contrast to tf.config.list_logical_devices, +which triggers runtime initialization in order to list the configured devices. + +The following example lists the number of visible GPUs on the host. + +``` +>>> physical_devices = tf.config.list_physical_devices('GPU') +>>> print("Num GPUs:", len(physical_devices)) +Num GPUs: ... +``` + +However, the number of GPUs available to the runtime may change during runtime +initialization due to marking certain devices as not visible or configuring +multiple logical devices. + + + + + + + + + + +
+`device_type` + +(optional string) Only include devices matching this device +type. For example "CPU" or "GPU". +
+ + + + + + + + + + + +
+List of discovered tf.config.PhysicalDevice objects +
+ diff --git a/site/en/api_docs/python/tf/config/optimizer.md b/site/en/api_docs/python/tf/config/optimizer.md new file mode 100644 index 00000000000..bc08d93909d --- /dev/null +++ b/site/en/api_docs/python/tf/config/optimizer.md @@ -0,0 +1,31 @@ +description: Public API for tf.config.optimizer namespace. + +
+ + +
+ +# Module: tf.config.optimizer + + + + + + + + + +Public API for tf.config.optimizer namespace. + + + +## Functions + +[`get_experimental_options(...)`](../../tf/config/optimizer/get_experimental_options.md): Get experimental optimizer options. + +[`get_jit(...)`](../../tf/config/optimizer/get_jit.md): Get if JIT compilation is enabled. + +[`set_experimental_options(...)`](../../tf/config/optimizer/set_experimental_options.md): Set experimental optimizer options. + +[`set_jit(...)`](../../tf/config/optimizer/set_jit.md): Set if JIT compilation is enabled. + diff --git a/site/en/api_docs/python/tf/config/optimizer/get_experimental_options.md b/site/en/api_docs/python/tf/config/optimizer/get_experimental_options.md new file mode 100644 index 00000000000..a7ab62734f0 --- /dev/null +++ b/site/en/api_docs/python/tf/config/optimizer/get_experimental_options.md @@ -0,0 +1,61 @@ +description: Get experimental optimizer options. + +
+ + +

# tf.config.optimizer.get_experimental_options

Get experimental optimizer options.

Refer to tf.config.optimizer.set_experimental_options for a list of current
options.

Note that optimizations are only applied in graph mode (that is, within
tf.function). In addition, as these are experimental options, the list is
subject to change.

+Dictionary of configured experimental optimizer options +
+ diff --git a/site/en/api_docs/python/tf/config/optimizer/get_jit.md b/site/en/api_docs/python/tf/config/optimizer/get_jit.md new file mode 100644 index 00000000000..391faed9000 --- /dev/null +++ b/site/en/api_docs/python/tf/config/optimizer/get_jit.md @@ -0,0 +1,59 @@ +description: Get if JIT compilation is enabled. + +
+ + +
+ +# tf.config.optimizer.get_jit + + + + + + + + + +Get if JIT compilation is enabled. + + + + + + + + + +Note that optimizations are only applied to code that is compiled into a +graph. In eager mode, which is the TF2 API default, that means only code that +is defined under a tf.function decorator. + + + + + + + + + +
+If JIT compilation is enabled. +
+ diff --git a/site/en/api_docs/python/tf/config/optimizer/set_experimental_options.md b/site/en/api_docs/python/tf/config/optimizer/set_experimental_options.md new file mode 100644 index 00000000000..aefafe31cb7 --- /dev/null +++ b/site/en/api_docs/python/tf/config/optimizer/set_experimental_options.md @@ -0,0 +1,92 @@ +description: Set experimental optimizer options. + +
+ + +

# tf.config.optimizer.set_experimental_options

Set experimental optimizer options.

Note that optimizations are only applied in graph mode (that is, within
tf.function). In addition, as these are experimental options, the list is
subject to change.

`options`

Dictionary of experimental optimizer options to configure.
Valid keys:
- layout_optimizer: Optimize tensor layouts, e.g. try to use the NCHW
  layout on GPUs, which is faster.
- constant_folding: Fold constants. Statically infer the value of tensors
  when possible, and materialize the result using constants.
- shape_optimization: Simplify computations made on shapes.
- remapping: Remap subgraphs onto more efficient implementations.
- arithmetic_optimization: Simplify arithmetic ops with common
  sub-expression elimination and arithmetic simplification.
- dependency_optimization: Control dependency optimizations. Remove
  redundant control dependencies, which may enable other optimizations.
  This optimizer is also essential for pruning Identity and NoOp nodes.
- loop_optimization: Loop optimizations.
- function_optimization: Function optimizations and inlining.
- debug_stripper: Strips debug-related nodes from the graph.
- disable_model_pruning: Disable removal of unnecessary ops from the graph.
- scoped_allocator_optimization: Try to allocate some independent Op
  outputs contiguously in order to merge or eliminate downstream Ops.
- pin_to_host_optimization: Force small ops onto the CPU.
- implementation_selector: Enable the swap of kernel implementations based
  on the device placement.
- auto_mixed_precision: Change certain float32 ops to float16 on Volta
  GPUs and above. Without the use of loss scaling, this can cause
  numerical underflow (see
  keras.mixed_precision.experimental.LossScaleOptimizer).
- disable_meta_optimizer: Disable the entire meta optimizer.
- min_graph_nodes: The minimum number of nodes in a graph for the optimizer
  to run. For smaller graphs, optimization is skipped.
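A minimal sketch of setting two of the options listed above and reading the configuration back; the exact dictionary returned may include additional defaults depending on the TensorFlow version:

```python
import tensorflow as tf

# Turn two of the experimental graph optimizations on explicitly.
tf.config.optimizer.set_experimental_options(
    {'constant_folding': True, 'debug_stripper': True})

# Read the configuration back; it reflects what has been configured.
print(tf.config.optimizer.get_experimental_options())
# e.g. {'constant_folding': True, 'debug_stripper': True}
```
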
+ diff --git a/site/en/api_docs/python/tf/config/optimizer/set_jit.md b/site/en/api_docs/python/tf/config/optimizer/set_jit.md new file mode 100644 index 00000000000..8ab62332feb --- /dev/null +++ b/site/en/api_docs/python/tf/config/optimizer/set_jit.md @@ -0,0 +1,64 @@ +description: Set if JIT compilation is enabled. + +
+ + +
+ +# tf.config.optimizer.set_jit + + + + + + + + + +Set if JIT compilation is enabled. + + + + + + + + + +Note that optimizations are only applied to code that is compiled into a +graph. In eager mode, which is the TF2 API default, that means only code that +is defined under a tf.function decorator. + + + + + + + + + + +
+`enabled` + +Whether to enable JIT compilation. +
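For example, a short sketch that enables JIT compilation and checks the setting; remember that it only affects code traced into a graph, e.g. under tf.function, and the printed values are what we would expect here rather than guaranteed output:

```python
import tensorflow as tf

tf.config.optimizer.set_jit(True)
print(tf.config.optimizer.get_jit())  # expected: True

@tf.function
def double(x):
    # Traced into a graph, so the JIT setting can apply here.
    return x * 2

print(double(tf.constant(3)))  # tf.Tensor(6, shape=(), dtype=int32)
```
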
+ diff --git a/site/en/api_docs/python/tf/config/set_logical_device_configuration.md b/site/en/api_docs/python/tf/config/set_logical_device_configuration.md new file mode 100644 index 00000000000..89fb8f6c5a1 --- /dev/null +++ b/site/en/api_docs/python/tf/config/set_logical_device_configuration.md @@ -0,0 +1,148 @@ +description: Set the logical device configuration for a tf.config.PhysicalDevice. + +
+ + +
+ +# tf.config.set_logical_device_configuration + + + + + + + + + +Set the logical device configuration for a tf.config.PhysicalDevice. + + + + + + + + + +A visible tf.config.PhysicalDevice will by default have a single +tf.config.LogicalDevice associated with it once the runtime is initialized. +Specifying a list of tf.config.LogicalDeviceConfiguration objects allows +multiple devices to be created on the same tf.config.PhysicalDevice. + +The following example splits the CPU into 2 logical devices: + +``` +>>> physical_devices = tf.config.list_physical_devices('CPU') +>>> assert len(physical_devices) == 1, "No CPUs found" +>>> # Specify 2 virtual CPUs. Note currently memory limit is not supported. +>>> try: +... tf.config.set_logical_device_configuration( +... physical_devices[0], +... [tf.config.LogicalDeviceConfiguration(), +... tf.config.LogicalDeviceConfiguration()]) +... logical_devices = tf.config.list_logical_devices('CPU') +... assert len(logical_devices) == 2 +... +... tf.config.set_logical_device_configuration( +... physical_devices[0], +... [tf.config.LogicalDeviceConfiguration(), +... tf.config.LogicalDeviceConfiguration(), +... tf.config.LogicalDeviceConfiguration(), +... tf.config.LogicalDeviceConfiguration()]) +... except: +... # Cannot modify logical devices once initialized. +... pass +``` + +The following example splits the GPU into 2 logical devices with 100 MB each: + +``` +>>> physical_devices = tf.config.list_physical_devices('GPU') +>>> try: +... tf.config.set_logical_device_configuration( +... physical_devices[0], +... [tf.config.LogicalDeviceConfiguration(memory_limit=100), +... tf.config.LogicalDeviceConfiguration(memory_limit=100)]) +... +... logical_devices = tf.config.list_logical_devices('GPU') +... assert len(logical_devices) == len(physical_devices) + 1 +... +... tf.config.set_logical_device_configuration( +... physical_devices[0], +... [tf.config.LogicalDeviceConfiguration(memory_limit=10), +... tf.config.LogicalDeviceConfiguration(memory_limit=10)]) +... except: +... # Invalid device or cannot modify logical devices once initialized. +... pass +``` + + + + + + + + + + + + + +
+`device` + +The `PhysicalDevice` to configure. +
+`logical_devices` + +(optional) List of tf.config.LogicalDeviceConfiguration +objects to allocate for the specified `PhysicalDevice`. If None, the +default configuration will be used. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If argument validation fails. +
+`RuntimeError` + +Runtime is already initialized. +
+ diff --git a/site/en/api_docs/python/tf/config/set_soft_device_placement.md b/site/en/api_docs/python/tf/config/set_soft_device_placement.md new file mode 100644 index 00000000000..37df414d779 --- /dev/null +++ b/site/en/api_docs/python/tf/config/set_soft_device_placement.md @@ -0,0 +1,65 @@ +description: Set if soft device placement is enabled. + +
+ + +

# tf.config.set_soft_device_placement

Set if soft device placement is enabled.

If enabled, an op will be placed on the CPU if any of the following is true:
  1. there is no GPU implementation for the op
  2. no GPU devices are known or registered
  3. the op needs to be co-located with reftype input(s) which are on the CPU

+`enabled` + +Whether to enable soft placement. +
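A minimal sketch: with soft placement enabled, an explicit GPU placement silently falls back to the CPU when no suitable GPU kernel or device is available, instead of raising an error.

```python
import tensorflow as tf

tf.config.set_soft_device_placement(True)

# On a machine without a GPU this still works: the ops fall back to the CPU.
with tf.device('/GPU:0'):
    x = tf.constant([1.0, 2.0, 3.0])
    print(tf.reduce_sum(x))  # tf.Tensor(6.0, shape=(), dtype=float32)
```
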
+ diff --git a/site/en/api_docs/python/tf/config/set_visible_devices.md b/site/en/api_docs/python/tf/config/set_visible_devices.md new file mode 100644 index 00000000000..af6b08b2a96 --- /dev/null +++ b/site/en/api_docs/python/tf/config/set_visible_devices.md @@ -0,0 +1,115 @@ +description: Set the list of visible devices. + +
+ + +
+ +# tf.config.set_visible_devices + + + + + + + + + +Set the list of visible devices. + + + + + + + + + +Specifies which `PhysicalDevice` objects are visible to the runtime. +TensorFlow will only allocate memory and place operations on visible +physical devices, as otherwise no `LogicalDevice` will be created on them. +By default all discovered devices are marked as visible. + +The following example demonstrates disabling the first GPU on the machine. + +``` +>>> physical_devices = tf.config.list_physical_devices('GPU') +>>> try: +... # Disable first GPU +... tf.config.set_visible_devices(physical_devices[1:], 'GPU') +... logical_devices = tf.config.list_logical_devices('GPU') +... # Logical device was not created for first GPU +... assert len(logical_devices) == len(physical_devices) - 1 +... except: +... # Invalid device or cannot modify virtual devices once initialized. +... pass +``` + + + + + + + + + + + + + +
+`devices` + +List of `PhysicalDevice`s to make visible +
+`device_type` + +(optional) Only configure devices matching this device type. +For example "CPU" or "GPU". Other devices will be left unaltered. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If argument validation fails. +
+`RuntimeError` + +Runtime is already initialized. +
+ diff --git a/site/en/api_docs/python/tf/config/threading.md b/site/en/api_docs/python/tf/config/threading.md new file mode 100644 index 00000000000..0d82ae5e36c --- /dev/null +++ b/site/en/api_docs/python/tf/config/threading.md @@ -0,0 +1,31 @@ +description: Public API for tf.config.threading namespace. + +
+ + +
+ +# Module: tf.config.threading + + + + + + + + + +Public API for tf.config.threading namespace. + + + +## Functions + +[`get_inter_op_parallelism_threads(...)`](../../tf/config/threading/get_inter_op_parallelism_threads.md): Get number of threads used for parallelism between independent operations. + +[`get_intra_op_parallelism_threads(...)`](../../tf/config/threading/get_intra_op_parallelism_threads.md): Get number of threads used within an individual op for parallelism. + +[`set_inter_op_parallelism_threads(...)`](../../tf/config/threading/set_inter_op_parallelism_threads.md): Set number of threads used for parallelism between independent operations. + +[`set_intra_op_parallelism_threads(...)`](../../tf/config/threading/set_intra_op_parallelism_threads.md): Set number of threads used within an individual op for parallelism. + diff --git a/site/en/api_docs/python/tf/config/threading/get_inter_op_parallelism_threads.md b/site/en/api_docs/python/tf/config/threading/get_inter_op_parallelism_threads.md new file mode 100644 index 00000000000..59b049866b9 --- /dev/null +++ b/site/en/api_docs/python/tf/config/threading/get_inter_op_parallelism_threads.md @@ -0,0 +1,58 @@ +description: Get number of threads used for parallelism between independent operations. + +
+ + +
+ +# tf.config.threading.get_inter_op_parallelism_threads + + + + + + + + + +Get number of threads used for parallelism between independent operations. + + + + + + + + + +Determines the number of threads used by independent non-blocking operations. +0 means the system picks an appropriate number. + + + + + + + + + +
+Number of parallel threads +
+ diff --git a/site/en/api_docs/python/tf/config/threading/get_intra_op_parallelism_threads.md b/site/en/api_docs/python/tf/config/threading/get_intra_op_parallelism_threads.md new file mode 100644 index 00000000000..b83381db814 --- /dev/null +++ b/site/en/api_docs/python/tf/config/threading/get_intra_op_parallelism_threads.md @@ -0,0 +1,59 @@ +description: Get number of threads used within an individual op for parallelism. + +
+ + +
+ +# tf.config.threading.get_intra_op_parallelism_threads + + + + + + + + + +Get number of threads used within an individual op for parallelism. + + + + + + + + + +Certain operations like matrix multiplication and reductions can utilize +parallel threads for speed ups. A value of 0 means the system picks an +appropriate number. + + + + + + + + + +
+Number of parallel threads +
+ diff --git a/site/en/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads.md b/site/en/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads.md new file mode 100644 index 00000000000..ef7d7e54004 --- /dev/null +++ b/site/en/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads.md @@ -0,0 +1,63 @@ +description: Set number of threads used for parallelism between independent operations. + +
+ + +
+ +# tf.config.threading.set_inter_op_parallelism_threads + + + + + + + + + +Set number of threads used for parallelism between independent operations. + + + + + + + + + +Determines the number of threads used by independent non-blocking operations. +0 means the system picks an appropriate number. + + + + + + + + + + +
+`num_threads` + +Number of parallel threads +
+ diff --git a/site/en/api_docs/python/tf/config/threading/set_intra_op_parallelism_threads.md b/site/en/api_docs/python/tf/config/threading/set_intra_op_parallelism_threads.md new file mode 100644 index 00000000000..bbcfc8f28de --- /dev/null +++ b/site/en/api_docs/python/tf/config/threading/set_intra_op_parallelism_threads.md @@ -0,0 +1,64 @@ +description: Set number of threads used within an individual op for parallelism. + +
+ + +
+ +# tf.config.threading.set_intra_op_parallelism_threads + + + + + + + + + +Set number of threads used within an individual op for parallelism. + + + + + + + + + +Certain operations like matrix multiplication and reductions can utilize +parallel threads for speed ups. A value of 0 means the system picks an +appropriate number. + + + + + + + + + + +
+`num_threads` + +Number of parallel threads +
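The inter-op and intra-op thread pools are usually configured together, and the setters typically need to run before the runtime creates its thread pools, so call them early. A minimal sketch:

```python
import tensorflow as tf

# Configure thread pools before running any ops.
tf.config.threading.set_intra_op_parallelism_threads(4)  # threads within a single op
tf.config.threading.set_inter_op_parallelism_threads(2)  # independent ops run in parallel

print(tf.config.threading.get_intra_op_parallelism_threads())  # 4
print(tf.config.threading.get_inter_op_parallelism_threads())  # 2
```
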
+ diff --git a/site/en/api_docs/python/tf/constant.md b/site/en/api_docs/python/tf/constant.md new file mode 100644 index 00000000000..0999c1e1bc8 --- /dev/null +++ b/site/en/api_docs/python/tf/constant.md @@ -0,0 +1,202 @@ +description: Creates a constant tensor from a tensor-like object. + +
+ + +
+ +# tf.constant + + + + + + + + + +Creates a constant tensor from a tensor-like object. + + + + + + + +Note: All eager tf.Tensor values are immutable (in contrast to +tf.Variable). There is nothing especially _constant_ about the value +returned from tf.constant. This function it is not fundamentally different +from tf.convert_to_tensor. The name tf.constant comes from the symbolic +APIs (like tf.data or keras functional models) where the `value` is embeded +in a `Const` node in the tf.Graph. tf.constant is useful for asserting +that the value can be embedded that way. + +If the argument `dtype` is not specified, then the type is inferred from +the type of `value`. + +``` +>>> # Constant 1-D Tensor from a python list. +>>> tf.constant([1, 2, 3, 4, 5, 6]) + +>>> # Or a numpy array +>>> a = np.array([[1, 2, 3], [4, 5, 6]]) +>>> tf.constant(a) + +``` + +If `dtype` is specified the resulting tensor values are cast to the requested +`dtype`. + +``` +>>> tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float64) + +``` + +If `shape` is set, the `value` is reshaped to match. Scalars are expanded to +fill the `shape`: + +``` +>>> tf.constant(0, shape=(2, 3)) + +>>> tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) + +``` + +tf.constant has no effect if an eager Tensor is passed as the `value`, it +even transmits gradients: + +``` +>>> v = tf.Variable([0.0]) +>>> with tf.GradientTape() as g: +... loss = tf.constant(v + v) +>>> g.gradient(loss, v).numpy() +array([2.], dtype=float32) +``` + +But, since tf.constant embeds the value in the tf.Graph this fails for +symbolic tensors: + +``` +>>> i = tf.keras.layers.Input(shape=[None, None]) +>>> t = tf.constant(i) +Traceback (most recent call last): +... +NotImplementedError: ... +``` + +tf.constant will _always_ create CPU (host) tensors. In order to create +tensors on other devices, use tf.identity. (If the `value` is an eager +Tensor, however, the tensor will be returned unmodified as mentioned above.) + +#### Related Ops: + + + +* tf.convert_to_tensor is similar but: + * It has no `shape` argument. + * Symbolic tensors are allowed to pass through. + + ``` + >>> i = tf.keras.layers.Input(shape=[None, None]) + >>> t = tf.convert_to_tensor(i) + ``` + +* tf.fill: differs in a few ways: + * tf.constant supports arbitrary constants, not just uniform scalar + Tensors like tf.fill. + * tf.fill creates an Op in the graph that is expanded at runtime, so it + can efficiently represent large tensors. + * Since tf.fill does not embed the value, it can produce dynamically + sized outputs. + + + + + + + + + + + + + + + + + + + +
+`value` + +A constant value (or list) of output type `dtype`. +
+`dtype` + +The type of the elements of the resulting tensor. +
+`shape` + +Optional dimensions of resulting tensor. +
+`name` + +Optional name for the tensor. +
+ + + + + + + + + + + +
+A Constant Tensor. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if shape is incorrectly specified or unsupported. +
+`ValueError` + +if called on a symbolic tensor. +
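As a compact, runnable recap of the `dtype` and `shape` behavior described above:

```python
import tensorflow as tf

t = tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float64, shape=[2, 3])
print(t.dtype)   # <dtype: 'float64'>
print(t.shape)   # (2, 3)
print(t.numpy())
# [[1. 2. 3.]
#  [4. 5. 6.]]
```
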
+ diff --git a/site/en/api_docs/python/tf/constant_initializer.md b/site/en/api_docs/python/tf/constant_initializer.md new file mode 100644 index 00000000000..ed18cdae3bf --- /dev/null +++ b/site/en/api_docs/python/tf/constant_initializer.md @@ -0,0 +1,282 @@ +description: Initializer that generates tensors with constant values. + +
+ + + + + + +
+ +# tf.constant_initializer + + + + + + + + + +Initializer that generates tensors with constant values. + +Inherits From: [`Initializer`](../tf/keras/initializers/Initializer.md) + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +tf.constant_initializer returns an object which when called returns a tensor +populated with the `value` specified in the constructor. This `value` must be +convertible to the requested `dtype`. + +The argument `value` can be a scalar constant value, or a list of +values. Scalars broadcast to whichever shape is requested from the +initializer. + +If `value` is a list, then the length of the list must be equal to the number +of elements implied by the desired shape of the tensor. If the total number of +elements in `value` is not equal to the number of elements required by the +tensor shape, the initializer will raise a `TypeError`. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.constant_initializer(2.)) +>>> v1 + +>>> v2 + +>>> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.)) +(, >> value = [0, 1, 2, 3, 4, 5, 6, 7] +>>> init = tf.constant_initializer(value) +>>> # Fitting shape +>>> tf.Variable(init(shape=[2, 4], dtype=tf.float32)) + +>>> # Larger shape +>>> tf.Variable(init(shape=[3, 4], dtype=tf.float32)) +Traceback (most recent call last): +... +TypeError: ...value has 8 elements, shape is (3, 4) with 12 elements... +>>> # Smaller shape +>>> tf.Variable(init(shape=[2, 3], dtype=tf.float32)) +Traceback (most recent call last): +... +TypeError: ...value has 8 elements, shape is (2, 3) with 6 elements... +``` + + + + + + + + + + +
`value`

A Python scalar, list or tuple of values, or an N-dimensional numpy
array. All elements of the initialized variable will be set to the
corresponding value in the `value` argument.
+ + + + + + + + + + + + +
+`TypeError` + +If the input `value` is not one of the expected types. +
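A short, self-contained sketch of the typical use: build the initializer once, then call it with an explicit shape and dtype when creating a variable.

```python
import tensorflow as tf

init = tf.constant_initializer([1., 2., 3., 4.])

# The number of elements in `value` must match the requested shape (2 * 2 = 4).
v = tf.Variable(init(shape=[2, 2], dtype=tf.float32))
print(v.numpy())
# [[1. 2.]
#  [3. 4.]]
```
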
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
`dtype`

Optional dtype of the tensor. If not provided, the dtype of the
tensor created will be the type of the initial value.
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the initializer cannot create a tensor of the requested +dtype. +
+ + + + + diff --git a/site/en/api_docs/python/tf/control_dependencies.md b/site/en/api_docs/python/tf/control_dependencies.md new file mode 100644 index 00000000000..993219732aa --- /dev/null +++ b/site/en/api_docs/python/tf/control_dependencies.md @@ -0,0 +1,85 @@ +description: Wrapper for Graph.control_dependencies() using the default graph. + +
+ + +
+ +# tf.control_dependencies + + + + + + + + + +Wrapper for Graph.control_dependencies() using the default graph. + + + + + + + + + +See tf.Graph.control_dependencies +for more details. + +When eager execution is enabled, any callable object in the `control_inputs` +list will be called. + + + + + + + + + + +
+`control_inputs` + +A list of `Operation` or `Tensor` objects which must be +executed or computed before running the operations defined in the context. +Can also be `None` to clear the control dependencies. If eager execution +is enabled, any callable object in the `control_inputs` list will be +called. +
+ + + + + + + + + + + +
+A context manager that specifies control dependencies for all +operations constructed within the context. +
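A minimal sketch of the graph-mode use case: forcing a stateful operation to run before a value is read inside a tf.function.

```python
import tensorflow as tf

v = tf.Variable(0.0)

@tf.function
def increment_then_read():
    assign = v.assign_add(1.0)
    with tf.control_dependencies([assign]):
        # This read only executes after the assignment above has run.
        return v.read_value()

print(increment_then_read())  # tf.Tensor(1.0, shape=(), dtype=float32)
```
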
+ diff --git a/site/en/api_docs/python/tf/convert_to_tensor.md b/site/en/api_docs/python/tf/convert_to_tensor.md new file mode 100644 index 00000000000..8aa825617f7 --- /dev/null +++ b/site/en/api_docs/python/tf/convert_to_tensor.md @@ -0,0 +1,159 @@ +description: Converts the given value to a Tensor. + +
+ + +
+ +# tf.convert_to_tensor + + + + + + + + + +Converts the given `value` to a `Tensor`. + + + + + + + +This function converts Python objects of various types to `Tensor` +objects. It accepts `Tensor` objects, numpy arrays, Python lists, +and Python scalars. For example: + +``` +>>> def my_func(arg): +... arg = tf.convert_to_tensor(arg, dtype=tf.float32) +... return arg +``` + +``` +>>> # The following calls are equivalent. +>>> value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]])) +>>> print(value_1) +tf.Tensor( + [[1. 2.] + [3. 4.]], shape=(2, 2), dtype=float32) +>>> value_2 = my_func([[1.0, 2.0], [3.0, 4.0]]) +>>> print(value_2) +tf.Tensor( + [[1. 2.] + [3. 4.]], shape=(2, 2), dtype=float32) +>>> value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)) +>>> print(value_3) +tf.Tensor( + [[1. 2.] + [3. 4.]], shape=(2, 2), dtype=float32) +``` + +This function can be useful when composing a new operation in Python +(such as `my_func` in the example above). All standard Python op +constructors apply this function to each of their Tensor-valued +inputs, which allows those ops to accept numpy arrays, Python lists, +and scalars in addition to `Tensor` objects. + +Note: This function diverges from default Numpy behavior for `float` and + `string` types when `None` is present in a Python list or scalar. Rather + than silently converting `None` values, an error will be thrown. + + + + + + + + + + + + + + + + + + + +
+`value` + +An object whose type has a registered `Tensor` conversion function. +
+`dtype` + +Optional element type for the returned tensor. If missing, the type +is inferred from the type of `value`. +
+`dtype_hint` + +Optional element type for the returned tensor, used when dtype +is None. In some cases, a caller may not have a dtype in mind when +converting to a tensor, so dtype_hint can be used as a soft preference. +If the conversion to `dtype_hint` is not possible, this argument has no +effect. +
+`name` + +Optional name to use if a new `Tensor` is created. +
+ + + + + + + + + + + +
+A `Tensor` based on `value`. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +If no conversion function is registered for `value` to `dtype`. +
+`RuntimeError` + +If a registered conversion function returns an invalid value. +
+`ValueError` + +If the `value` is a tensor not of given `dtype` in graph mode. +
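The `dtype_hint` argument is only a soft preference; a small sketch of what that means in practice (the printed dtypes assume the default conversion rules):

```python
import tensorflow as tf

# The hint applies: a Python int converts cleanly to float32.
t1 = tf.convert_to_tensor(3, dtype_hint=tf.float32)
print(t1.dtype)  # float32

# The hint cannot apply without losing information, so it is ignored.
t2 = tf.convert_to_tensor([1.5, 2.5], dtype_hint=tf.int32)
print(t2.dtype)  # float32
```
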
+ diff --git a/site/en/api_docs/python/tf/custom_gradient.md b/site/en/api_docs/python/tf/custom_gradient.md new file mode 100644 index 00000000000..ddf71ce924c --- /dev/null +++ b/site/en/api_docs/python/tf/custom_gradient.md @@ -0,0 +1,181 @@ +description: Decorator to define a function with a custom gradient. + +
+ + +
+ +# tf.custom_gradient + + + + + + + + + +Decorator to define a function with a custom gradient. + + + + + + + + + +This decorator allows fine grained control over the gradients of a sequence +for operations. This may be useful for multiple reasons, including providing +a more efficient or numerically stable gradient for a sequence of operations. + +For example, consider the following function that commonly occurs in the +computation of cross entropy and log likelihoods: + +```python +def log1pexp(x): + return tf.math.log(1 + tf.exp(x)) +``` + +Due to numerical instability, the gradient of this function evaluated at x=100 +is NaN. For example: + +```python +x = tf.constant(100.) +y = log1pexp(x) +dy = tf.gradients(y, x) # Will be NaN when evaluated. +``` + +The gradient expression can be analytically simplified to provide numerical +stability: + +```python +@tf.custom_gradient +def log1pexp(x): + e = tf.exp(x) + def grad(dy): + return dy * (1 - 1 / (1 + e)) + return tf.math.log(1 + e), grad +``` + +With this definition, the gradient at x=100 will be correctly evaluated as +1.0. + +Nesting custom gradients can lead to unintuitive results. The default +behavior does not correspond to n-th order derivatives. For example + +```python +@tf.custom_gradient +def op(x): + y = op1(x) + @tf.custom_gradient + def grad_fn(dy): + gdy = op2(x, y, dy) + def grad_grad_fn(ddy): # Not the 2nd order gradient of op w.r.t. x. + return op3(x, y, dy, ddy) + return gdy, grad_grad_fn + return y, grad_fn +``` + +The function `grad_grad_fn` will be calculating the first order gradient +of `grad_fn` with respect to `dy`, which is used to generate forward-mode +gradient graphs from backward-mode gradient graphs, but is not the same as +the second order gradient of `op` with respect to `x`. + +Instead, wrap nested `@tf.custom_gradients` in another function: + +```python +@tf.custom_gradient +def op_with_fused_backprop(x): + y, x_grad = fused_op(x) + def first_order_gradient(dy): + @tf.custom_gradient + def first_order_custom(unused_x): + def second_order_and_transpose(ddy): + return second_order_for_x(...), gradient_wrt_dy(...) + return x_grad, second_order_and_transpose + return dy * first_order_custom(x) + return y, first_order_gradient +``` + +Additional arguments to the inner `@tf.custom_gradient`-decorated function +control the expected return values of the innermost function. + +See also tf.RegisterGradient which registers a gradient function for a +primitive TensorFlow operation. tf.custom_gradient on the other hand allows +for fine grained control over the gradient computation of a sequence of +operations. + +Note that if the decorated function uses `Variable`s, the enclosing variable +scope must be using `ResourceVariable`s. + + + + + + + + + + +
+`f` + +function `f(*x)` that returns a tuple `(y, grad_fn)` where: +- `x` is a sequence of `Tensor` inputs to the function. +- `y` is a `Tensor` or sequence of `Tensor` outputs of applying +TensorFlow operations in `f` to `x`. +- `grad_fn` is a function with the signature `g(*grad_ys)` which returns +a list of `Tensor`s - the derivatives of `Tensor`s in `y` with respect +to the `Tensor`s in `x`. `grad_ys` is a `Tensor` or sequence of +`Tensor`s the same size as `y` holding the initial value gradients for +each `Tensor` in `y`. In a pure mathematical sense, a vector-argument +vector-valued function `f`'s derivatives should be its Jacobian matrix +`J`. Here we are expressing the Jacobian `J` as a function `grad_fn` +which defines how `J` will transform a vector `grad_ys` when +left-multiplied with it (`grad_ys * J`). This functional representation +of a matrix is convenient to use for chain-rule calculation +(in e.g. the back-propagation algorithm). + +If `f` uses `Variable`s (that are not part of the +inputs), i.e. through `get_variable`, then `grad_fn` should have +signature `g(*grad_ys, variables=None)`, where `variables` is a list of +the `Variable`s, and return a 2-tuple `(grad_xs, grad_vars)`, where +`grad_xs` is the same as above, and `grad_vars` is a `list` +with the derivatives of `Tensor`s in `y` with respect to the variables +(that is, grad_vars has one Tensor per variable in variables). +
+ + + + + + + + + + + +
+A function `h(x)` which returns the same value as `f(x)[0]` and whose +gradient (as calculated by tf.gradients) is determined by `f(x)[1]`. +
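The log1pexp example above can be exercised end to end with tf.GradientTape; this sketch contrasts the stable custom gradient against the naive version:

```python
import tensorflow as tf

def naive_log1pexp(x):
    # Overflows at large x, so its gradient becomes NaN.
    return tf.math.log(1 + tf.exp(x))

@tf.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(dy):
        return dy * (1 - 1 / (1 + e))
    return tf.math.log(1 + e), grad

x = tf.constant(100.0)
with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)
    y_naive = naive_log1pexp(x)
    y_stable = log1pexp(x)

print(tape.gradient(y_naive, x).numpy())   # nan
print(tape.gradient(y_stable, x).numpy())  # 1.0
```
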
+ diff --git a/site/en/api_docs/python/tf/data.md b/site/en/api_docs/python/tf/data.md new file mode 100644 index 00000000000..9c9f5a27622 --- /dev/null +++ b/site/en/api_docs/python/tf/data.md @@ -0,0 +1,40 @@ +description: tf.data.Dataset API for input pipelines. + +
+ + +
+ +# Module: tf.data + + + + + + + + + +tf.data.Dataset API for input pipelines. + + +See [Importing Data](https://tensorflow.org/guide/data) for an overview. + +## Modules + +[`experimental`](../tf/data/experimental.md) module: Experimental API for building input pipelines. + +## Classes + +[`class Dataset`](../tf/data/Dataset.md): Represents a potentially large set of elements. + +[`class DatasetSpec`](../tf/data/DatasetSpec.md): Type specification for tf.data.Dataset. + +[`class FixedLengthRecordDataset`](../tf/data/FixedLengthRecordDataset.md): A `Dataset` of fixed-length records from one or more binary files. + +[`class Options`](../tf/data/Options.md): Represents options for tf.data.Dataset. + +[`class TFRecordDataset`](../tf/data/TFRecordDataset.md): A `Dataset` comprising records from one or more TFRecord files. + +[`class TextLineDataset`](../tf/data/TextLineDataset.md): A `Dataset` comprising lines from one or more text files. + diff --git a/site/en/api_docs/python/tf/data/Dataset.md b/site/en/api_docs/python/tf/data/Dataset.md new file mode 100644 index 00000000000..3ebf5402983 --- /dev/null +++ b/site/en/api_docs/python/tf/data/Dataset.md @@ -0,0 +1,2725 @@ +description: Represents a potentially large set of elements. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.data.Dataset + + + + + + + + + +Represents a potentially large set of elements. + + + + + + + +The tf.data.Dataset API supports writing descriptive and efficient input +pipelines. `Dataset` usage follows a common pattern: + +1. Create a source dataset from your input data. +2. Apply dataset transformations to preprocess the data. +3. Iterate over the dataset and process the elements. + +Iteration happens in a streaming fashion, so the full dataset does not need to +fit into memory. + +#### Source Datasets: + + + +The simplest way to create a dataset is to create it from a python `list`: + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +To process lines from files, use tf.data.TextLineDataset: + +``` +>>> dataset = tf.data.TextLineDataset(["file1.txt", "file2.txt"]) +``` + +To process records written in the `TFRecord` format, use `TFRecordDataset`: + +``` +>>> dataset = tf.data.TFRecordDataset(["file1.tfrecords", "file2.tfrecords"]) +``` + +To create a dataset of all files matching a pattern, use +tf.data.Dataset.list_files: + +``` +>>> dataset = tf.data.Dataset.list_files("/path/*.txt") # doctest: +SKIP +``` + +See tf.data.FixedLengthRecordDataset and tf.data.Dataset.from_generator +for more ways to create datasets. + +#### Transformations: + + + +Once you have a dataset, you can apply transformations to prepare the data for +your model: + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.map(lambda x: x*2) +>>> list(dataset.as_numpy_iterator()) +[2, 4, 6] +``` + +#### Common Terms: + + + +**Element**: A single output from calling `next()` on a dataset iterator. + Elements may be nested structures containing multiple components. For + example, the element `(1, (3, "apple"))` has one tuple nested in another + tuple. The components are `1`, `3`, and `"apple"`. +**Component**: The leaf in the nested structure of an element. + +#### Supported types: + + + +Elements can be nested structures of tuples, named tuples, and dictionaries. +Element components can be of any type representable by tf.TypeSpec, +including tf.Tensor, tf.data.Dataset, tf.SparseTensor, +tf.RaggedTensor, and tf.TensorArray. + +``` +>>> a = 1 # Integer element +>>> b = 2.0 # Float element +>>> c = (1, 2) # Tuple element with 2 components +>>> d = {"a": (2, 2), "b": 3} # Dict element with 3 components +>>> Point = collections.namedtuple("Point", ["x", "y"]) # doctest: +SKIP +>>> e = Point(1, 2) # Named tuple # doctest: +SKIP +>>> f = tf.data.Dataset.range(10) # Dataset element +``` + + + + + + + + + + +
+`variant_tensor` + +A DT_VARIANT tensor that represents the dataset. +
+ + + + + + + + + + + + + + +
`element_spec`

The type specification of an element of this dataset.

```
>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
```
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of Dataset.from_generator() uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +Dataset.from_generator(). The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to Dataset.from_generator() and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +Dataset.from_generator(). + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use Dataset.interleave() to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +Dataset.from_tensor_slices(filenames) instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
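A small sketch of the directory example above; the glob is a hypothetical path, so substitute one that actually matches files, and `shuffle=False` gives the deterministic order mentioned earlier:

```python
import tensorflow as tf

dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for path in dataset:
    print(path.numpy())
# b'/path/to/dir/b.py'
# b'/path/to/dir/c.py'
```
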

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls`
+
+(Optional.) A tf.int32 scalar tf.Tensor,
+representing the number of elements to process asynchronously in parallel.
+If not specified, elements will be processed sequentially. If the value
+tf.data.experimental.AUTOTUNE is used, then the number of parallel
+calls is set dynamically based on available CPU.
+
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +
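+
+For example, options set through `with_options` are reflected in the value
+returned by `options()` (a minimal sketch):
+
+```
+>>> options = tf.data.Options()
+>>> options.experimental_deterministic = False
+>>> dataset = tf.data.Dataset.range(5).with_options(options)
+>>> dataset.options().experimental_deterministic
+False
+```
+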

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args`
+
+follows the same semantics as Python's built-in `range`.
+len(args) == 1 -> start = 0, stop = args[0], step = 1.
+len(args) == 2 -> start = args[0], stop = args[1], step = 1.
+len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
+
+`**kwargs` + +- output_type: Its expected dtype. (Optional, default: tf.int64). +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func`
+
+A function that maps `(old_state, input_element)` to
+`new_state`. It must take two arguments and return a new element.
+The structure of `new_state` must match the structure of
+`initial_state`.
+
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count`
+
+(Optional.) A tf.int64 scalar tf.Tensor, representing the
+number of times the dataset should be repeated. The default behavior (if
+`count` is `None` or `-1`) is for the dataset to be repeated indefinitely.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError`
+
+if `num_shards` or `index` are illegal values.
+
+Note: error checking is done on a best-effort basis, and errors aren't
+guaranteed to be caught upon dataset creation. (e.g. providing a
+placeholder tensor bypasses the early checking, and will instead result
+in an error during a session.run call.)
+
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset`
+
+A `Dataset` of (nests of) windows -- finite datasets of flat
+elements created from the (nests of) input elements.
+
+ + + +

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options`
+
+A tf.data.Options that identifies the options to use.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +when an option is set more than once to a non-default value +
+ + + +

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/data/DatasetSpec.md b/site/en/api_docs/python/tf/data/DatasetSpec.md new file mode 100644 index 00000000000..d545489eacd --- /dev/null +++ b/site/en/api_docs/python/tf/data/DatasetSpec.md @@ -0,0 +1,183 @@ +description: Type specification for tf.data.Dataset. + +
+ + + + + + + + +
+ +# tf.data.DatasetSpec + + + + + + + + + +Type specification for tf.data.Dataset. + + + + + + + + + +See tf.TypeSpec for more information about TensorFlow type specifications. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> tf.data.DatasetSpec.from_value(dataset) +DatasetSpec(TensorSpec(shape=(), dtype=tf.int64, name=None), TensorShape([])) +``` + + + + + + + + + + + + +
+`value_type` + +The Python type for values that are compatible with this TypeSpec. +
+ + + +## Methods + +

from_value

+ +View source + + + +Creates a `DatasetSpec` for the given tf.data.Dataset value. + + +

is_compatible_with

+ +View source + + + +Returns true if `spec_or_value` is compatible with this TypeSpec. + + +
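+
+For example, two specs built from datasets with the same element
+specification are compatible with each other (a minimal sketch):
+
+```
+>>> spec = tf.data.DatasetSpec.from_value(tf.data.Dataset.range(3))
+>>> other = tf.data.DatasetSpec.from_value(tf.data.Dataset.range(10))
+>>> spec.is_compatible_with(other)
+True
+```
+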

most_specific_compatible_type

+ +View source + + + +Returns the most specific TypeSpec compatible with `self` and `other`. + + + + + + + + + + + +
Args
+`other` + +A `TypeSpec`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If there is no TypeSpec that is compatible with both `self` +and `other`. +
+ + + +
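+
+For example, when `self` and `other` are already compatible, the result is
+simply an equivalent spec (a minimal sketch):
+
+```
+>>> a = tf.data.DatasetSpec.from_value(tf.data.Dataset.range(3))
+>>> b = tf.data.DatasetSpec.from_value(tf.data.Dataset.range(10))
+>>> a.most_specific_compatible_type(b)
+DatasetSpec(TensorSpec(shape=(), dtype=tf.int64, name=None), TensorShape([]))
+```
+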

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/data/FixedLengthRecordDataset.md b/site/en/api_docs/python/tf/data/FixedLengthRecordDataset.md new file mode 100644 index 00000000000..5265d6b300e --- /dev/null +++ b/site/en/api_docs/python/tf/data/FixedLengthRecordDataset.md @@ -0,0 +1,2690 @@ +description: A Dataset of fixed-length records from one or more binary files. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.data.FixedLengthRecordDataset + + + + + + + + + +A `Dataset` of fixed-length records from one or more binary files. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A tf.string tensor or tf.data.Dataset containing one or +more filenames. +
+`record_bytes` + +A tf.int64 scalar representing the number of bytes in each +record. +
+`header_bytes` + +(Optional.) A tf.int64 scalar representing the number of +bytes to skip at the start of a file. +
+`footer_bytes` + +(Optional.) A tf.int64 scalar representing the number of +bytes to ignore at the end of a file. +
+`buffer_size` + +(Optional.) A tf.int64 scalar representing the number of +bytes to buffer when reading. +
+`compression_type` + +(Optional.) A tf.string scalar evaluating to one of +`""` (no compression), `"ZLIB"`, or `"GZIP"`. +
+`num_parallel_reads`
+
+(Optional.) A tf.int64 scalar representing the
+number of files to read in parallel. If greater than one, the records of
+files read in parallel are output in an interleaved order. If your
+input pipeline is I/O bottlenecked, consider setting this parameter to a
+value greater than one to parallelize the I/O. If `None`, files will be
+read sequentially.
+
+ + + + + + + + + + + + + + +
+`element_spec`
+
+The type specification of an element of this dataset.
+
+```
+>>> tf.data.Dataset.from_tensor_slices([1, 2, 3]).element_spec
+TensorSpec(shape=(), dtype=tf.int32, name=None)
+```
+
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of Dataset.from_generator() uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +Dataset.from_generator(). The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to Dataset.from_generator() and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +Dataset.from_generator(). + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use Dataset.interleave() to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +Dataset.from_tensor_slices(filenames) instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
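+
+Continuing the example above, a minimal sketch (the paths are illustrative,
+so the snippet is skipped by doctest):
+
+```
+>>> dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)  # doctest: +SKIP
+>>> [f.numpy() for f in dataset]  # doctest: +SKIP
+[b'/path/to/dir/b.py', b'/path/to/dir/c.py']
+```
+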

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls`
+
+(Optional.) A tf.int32 scalar tf.Tensor,
+representing the number of elements to process asynchronously in parallel.
+If not specified, elements will be processed sequentially. If the value
+tf.data.experimental.AUTOTUNE is used, then the number of parallel
+calls is set dynamically based on available CPU.
+
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +
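+
+As a further illustration, the following is a minimal sketch (using
+hypothetical toy `(tokens, label)` pairs) of how `padded_shapes`,
+`padding_values`, and `drop_remainder` interact when only some components
+need padding; the label component is a scalar, so its padded shape is `[]`.
+
+```python
+import tensorflow as tf
+
+# Hypothetical variable-length token sequences, each with a scalar label.
+examples = [([1, 2, 3], 0), ([4, 5], 1), ([6], 0)]
+dataset = tf.data.Dataset.from_generator(
+    lambda: iter(examples), (tf.int32, tf.int32))
+
+# Pad only the token component; scalars need no padding, so their shape is [].
+# drop_remainder=True discards the final, smaller batch containing one element.
+dataset = dataset.padded_batch(
+    2,
+    padded_shapes=([None], []),
+    padding_values=(0, 0),
+    drop_remainder=True)
+
+for tokens, labels in dataset.as_numpy_iterator():
+    print(tokens, labels)   # e.g. [[1 2 3] [4 5 0]] and [0 1]
+```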

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
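+
+As a practical sketch (not part of the original example above), a typical
+input pipeline applies its transformations first and ends with `prefetch`,
+so the next batch is prepared while the current one is consumed;
+tf.data.experimental.AUTOTUNE is used here to let the runtime pick the
+buffer size.
+
+```python
+import tensorflow as tf
+
+# Batch first, then prefetch last, so whole batches are buffered while the
+# consumer works on the current one.
+dataset = (tf.data.Dataset.range(100)
+           .map(lambda x: x * 2)
+           .batch(10)
+           .prefetch(tf.data.experimental.AUTOTUNE))
+
+for batch in dataset.take(2):
+    print(batch.numpy())
+```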

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args` + +follows the same semantics as Python's built-in `range`. +len(args) == 1 -> start = 0, stop = args[0], step = 1. +len(args) == 2 -> start = args[0], stop = args[1], step = 1. +len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. +
+`**kwargs` + +- output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64). +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func` + +A function that maps `(old_state, input_element)` to +`new_state`. It must take two arguments and return a new state. +The structure of `new_state` must match the structure of +`initial_state`. +
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +
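+
+Because `new_state` must have the same structure as `initial_state`, the
+state can itself be a nested structure. The following minimal sketch
+(assuming a `(count, total)` tuple as the state) computes both in a single
+pass:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+# The state is a (count, total) tuple; reduce_func returns the same structure.
+count, total = tf.data.Dataset.range(1, 6).reduce(
+    (np.int64(0), np.int64(0)),
+    lambda state, x: (state[0] + 1, state[1] + x))
+print(count.numpy(), total.numpy())   # 5 15
+```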

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of times the dataset should be repeated. The default behavior (if +`count` is `None` or `-1`) is for the dataset to be repeated indefinitely. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +if `num_shards` or `index` are illegal values. + +Note: error checking is done on a best-effort basis, and errors aren't +guaranteed to be caught upon dataset creation. (e.g. providing a +placeholder tensor bypasses the early checking, and will instead result +in an error during a session.run call.) +
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
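+
+As an additional sketch (the seed value is arbitrary), a fixed `seed` makes
+the shuffle reproducible across program runs, while the default
+`reshuffle_each_iteration=True` still reorders the data on every epoch
+within a run. Setting `buffer_size` to the dataset size gives the perfect
+shuffle described above.
+
+```python
+import tensorflow as tf
+
+# buffer_size == dataset size -> uniform shuffle; the seed fixes the sequence
+# of shuffles across runs, not the order within a single epoch.
+dataset = tf.data.Dataset.range(10).shuffle(buffer_size=10, seed=42)
+print(list(dataset.as_numpy_iterator()))   # some permutation of 0..9
+print(list(dataset.as_numpy_iterator()))   # a different permutation
+```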

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of (nests of) windows -- a finite datasets of flat +elements created from the (nests of) input elements. +
+ + + +
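+
+Since each window is itself a (nested) dataset, a common follow-up, shown in
+the sketch below, is to convert the windows into dense tensors by batching
+each window and flattening the result with `flat_map`:
+
+```python
+import tensorflow as tf
+
+window_size = 3
+dataset = tf.data.Dataset.range(7).window(window_size, shift=1,
+                                          drop_remainder=True)
+# Each window is a small Dataset; batching it by the window size yields one
+# tensor per window, and flat_map flattens the nested datasets back out.
+dataset = dataset.flat_map(lambda window: window.batch(window_size))
+for element in dataset.as_numpy_iterator():
+    print(element)   # [0 1 2], [1 2 3], [2 3 4], [3 4 5], [4 5 6]
+```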

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options` + +A tf.data.Options that identifies the options to use. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +when an option is set more than once to a non-default value. +
+ + + +

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/data/Options.md b/site/en/api_docs/python/tf/data/Options.md new file mode 100644 index 00000000000..187c1cc4a42 --- /dev/null +++ b/site/en/api_docs/python/tf/data/Options.md @@ -0,0 +1,217 @@ +description: Represents options for tf.data.Dataset. + +
+ + + + + + +
+ +# tf.data.Options + + + + + + + + + +Represents options for tf.data.Dataset. + + + + + + + + + +An `Options` object can be, for instance, used to control which graph +optimizations to apply or whether to use performance modeling to dynamically +tune the parallelism of operations such as tf.data.Dataset.map or +tf.data.Dataset.interleave. + +After constructing an `Options` object, use `dataset.with_options(options)` to +apply the options to a dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> options = tf.data.Options() +>>> # Set options here. +>>> dataset = dataset.with_options(options) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`experimental_deterministic` + +Whether the outputs need to be produced in deterministic order. If None, defaults to True. +
+`experimental_distribute` + +The distribution strategy options associated with the dataset. See tf.data.experimental.DistributeOptions for more details. +
+`experimental_external_state_policy` + +By default, tf.data will refuse to serialize a dataset or checkpoint its iterator if the dataset contains a stateful op, as the serialization / checkpointing won't be able to capture its state. Users can -- at their own risk -- override this restriction by explicitly specifying that they are fine with throwing away the state in these ops. There are three settings available - IGNORE: we completely ignore any state; WARN: we warn the user that some state might be thrown away; FAIL: we fail if any state is being captured. +
+`experimental_optimization` + +The optimization options associated with the dataset. See tf.data.experimental.OptimizationOptions for more details. +
+`experimental_slack` + +Whether to introduce 'slack' in the last `prefetch` of the input pipeline, if it exists. This may reduce CPU contention with accelerator host-side activity at the start of a step. The slack frequency is determined by the number of devices attached to this input pipeline. If None, defaults to False. +
+`experimental_stats` + +The statistics options associated with the dataset. See tf.data.experimental.StatsOptions for more details. +
+`experimental_threading` + +The threading options associated with the dataset. See tf.data.experimental.ThreadingOptions for more details. +
+ + + +## Methods + +

merge

+ +View source + + + +Merges itself with the given tf.data.Options. + +The given tf.data.Options can be merged as long as there does not exist an +attribute that is set to different values in `self` and `options`. + + + + + + + + + + +
Args
+`options` + +a tf.data.Options to merge with +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if the given tf.data.Options cannot be merged +
+ + + + + + + + + + + +
Returns
+New tf.data.Options() object which is the result of merging self with +the input tf.data.Options. +
+ + + +
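+
+A minimal sketch of the merge behavior described above, using two of the
+attributes listed earlier (the particular values are arbitrary):
+
+```python
+import tensorflow as tf
+
+options1 = tf.data.Options()
+options1.experimental_deterministic = False
+
+options2 = tf.data.Options()
+options2.experimental_slack = True
+
+# Merging succeeds because no attribute is set to conflicting non-default
+# values; the result carries both settings.
+merged = options1.merge(options2)
+dataset = tf.data.Dataset.range(10).with_options(merged)
+```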

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/data/TFRecordDataset.md b/site/en/api_docs/python/tf/data/TFRecordDataset.md new file mode 100644 index 00000000000..8a5b7eecadc --- /dev/null +++ b/site/en/api_docs/python/tf/data/TFRecordDataset.md @@ -0,0 +1,2693 @@ +description: A Dataset comprising records from one or more TFRecord files. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.data.TFRecordDataset + + + + + + + + + +A `Dataset` comprising records from one or more TFRecord files. + +Inherits From: [`Dataset`](../../tf/data/Dataset.md) + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A tf.string tensor or tf.data.Dataset containing one or +more filenames. +
+`compression_type` + +(Optional.) A tf.string scalar evaluating to one of +`""` (no compression), `"ZLIB"`, or `"GZIP"`. +
+`buffer_size` + +(Optional.) A tf.int64 scalar representing the number of +bytes in the read buffer. If your input pipeline is I/O bottlenecked, +consider setting this parameter to a value between 1 and 100 MB. If `None`, a +sensible default for both local and remote file systems is used. +
+`num_parallel_reads` + +(Optional.) A tf.int64 scalar representing the +number of files to read in parallel. If greater than one, the records of +files read in parallel are outputted in an interleaved order. If your +input pipeline is I/O bottlenecked, consider setting this parameter to a +value greater than one to parallelize the I/O. If `None`, files will be +read sequentially. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If any argument does not have the expected type. +
+`ValueError` + +If any argument does not have the expected shape. +
+ + + + + + + + + + + + + + +
+`element_spec` + +The type specification of an element of this dataset. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset.element_spec +TensorSpec(shape=(), dtype=tf.int32, name=None) +``` +
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
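+
+The ordering advice above can be summarized in a short sketch: keep
+deterministic, expensive work before `cache`, and randomize afterwards so
+each epoch still sees a fresh order.
+
+```python
+import tensorflow as tf
+
+dataset = (tf.data.Dataset.range(100)
+           .map(lambda x: x * 2)   # deterministic preprocessing, cached below
+           .cache()                # in-memory cache (no filename given)
+           .shuffle(100)           # shuffle *after* cache, as noted above
+           .batch(10))
+
+# The first full pass populates the cache; later epochs reuse it.
+for _ in range(2):
+    for batch in dataset:
+        pass
+```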

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of Dataset.from_generator() uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +Dataset.from_generator(). The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to Dataset.from_generator() and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +Dataset.from_generator(). + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
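+
+The `args` parameter can be illustrated with a minimal sketch (the generator
+and its `limit` argument are hypothetical); the values in `args` are
+evaluated and passed to the generator as NumPy values:
+
+```python
+import tensorflow as tf
+
+def gen(limit):
+    # `limit` arrives as a NumPy value; convert it before using it in range().
+    for i in range(int(limit)):
+        yield i
+
+dataset = tf.data.Dataset.from_generator(
+    gen,
+    output_types=tf.int64,
+    output_shapes=tf.TensorShape([]),
+    args=(5,))
+print(list(dataset.as_numpy_iterator()))   # [0, 1, 2, 3, 4]
+```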

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use Dataset.interleave() to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +Dataset.from_tensor_slices(filenames) instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
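+
+A minimal code sketch of the textual example above (the paths are
+hypothetical, so this only runs where such files exist); passing
+`shuffle=False` keeps the result deterministic, as noted in the description:
+
+```python
+import tensorflow as tf
+
+# Matches /path/to/dir/b.py and /path/to/dir/c.py from the example above.
+files = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
+for filename in files:
+    print(filename.numpy())
+```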

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +
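+
+A small sketch showing that `options()` reflects options previously attached
+with `with_options`:
+
+```python
+import tensorflow as tf
+
+opts = tf.data.Options()
+opts.experimental_deterministic = False
+
+dataset = tf.data.Dataset.range(3).with_options(opts)
+# The returned tf.data.Options carries the setting applied above.
+print(dataset.options().experimental_deterministic)   # False
+```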

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args` + +follows the same semantics as Python's built-in `range`. +len(args) == 1 -> start = 0, stop = args[0], step = 1. +len(args) == 2 -> start = args[0], stop = args[1], step = 1. +len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. +
+`**kwargs` + +- output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64). +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func` + +A function that maps `(old_state, input_element)` to +`new_state`. It must take two arguments and return a new state. +The structure of `new_state` must match the structure of +`initial_state`. +
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of times the dataset should be repeated. The default behavior (if +`count` is `None` or `-1`) is for the dataset to be repeated indefinitely. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +if `num_shards` or `index` are illegal values. + +Note: error checking is done on a best-effort basis, and errors aren't +guaranteed to be caught upon dataset creation. (e.g. passing in a +placeholder tensor bypasses the early checking, and will instead result +in an error during a session.run call.) +
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
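+The `seed` argument is not exercised by the examples above. The following is
+a minimal sketch (not from the original docstring) of combining `seed` with
+`reshuffle_each_iteration=False` to get a shuffle order that is stable across
+iterations:
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(10).shuffle(
+    buffer_size=10, seed=42, reshuffle_each_iteration=False)
+# With `reshuffle_each_iteration=False`, iterating the dataset twice yields
+# the same (seeded) order both times.
+first_pass = list(dataset.as_numpy_iterator())
+second_pass = list(dataset.as_numpy_iterator())
+assert first_pass == second_pass
+```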

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of (nests of) windows -- finite datasets of flat +elements created from the (nests of) input elements. +
+ + + +

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options` + +A tf.data.Options that identifies the options to use. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +when an option is set more than once to a non-default value. +
+ + + +

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/data/TextLineDataset.md b/site/en/api_docs/python/tf/data/TextLineDataset.md new file mode 100644 index 00000000000..1bcaca9c712 --- /dev/null +++ b/site/en/api_docs/python/tf/data/TextLineDataset.md @@ -0,0 +1,2666 @@ +description: A Dataset comprising lines from one or more text files. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.data.TextLineDataset + + + + + + + + + +A `Dataset` comprising lines from one or more text files. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A tf.string tensor or tf.data.Dataset containing one or +more filenames. +
+`compression_type` + +(Optional.) A tf.string scalar evaluating to one of +`""` (no compression), `"ZLIB"`, or `"GZIP"`. +
+`buffer_size` + +(Optional.) A tf.int64 scalar denoting the number of bytes +to buffer. A value of 0 results in the default buffering values chosen +based on the compression type. +
+`num_parallel_reads` + +(Optional.) A tf.int64 scalar representing the +number of files to read in parallel. If greater than one, the records of +files read in parallel are output in an interleaved order. If your +input pipeline is I/O bottlenecked, consider setting this parameter to a +value greater than one to parallelize the I/O. If `None`, files will be +read sequentially. +
+ + + + + + + + + + + + + + +
+`element_spec` + +The type specification of an element of this dataset. + +``` +>>> tf.data.Dataset.from_tensor_slices([1, 2, 3]).element_spec +TensorSpec(shape=(), dtype=tf.int32, name=None) +``` +
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of Dataset.from_generator() uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +Dataset.from_generator(). The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to Dataset.from_generator() and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +Dataset.from_generator(). + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
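+The doctest above does not exercise the `args` parameter. The sketch below
+(the generator and its `limit` argument are illustrative, not part of the
+documented API surface) shows how values in `args` are forwarded to the
+generator as NumPy arguments:
+
+```python
+import tensorflow as tf
+
+def gen(limit):
+  # `limit` arrives as a NumPy scalar because values in `args` are evaluated
+  # and passed to the generator as NumPy arrays.
+  for i in range(limit):
+    yield i
+
+dataset = tf.data.Dataset.from_generator(
+    gen, tf.int64, tf.TensorShape([]), args=(3,))
+# list(dataset.as_numpy_iterator()) ==> [0, 1, 2]
+```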

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use Dataset.interleave() to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +Dataset.from_tensor_slices(filenames) instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
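+As a hedged sketch of the example above in code (the directory layout and the
+glob pattern are hypothetical; `shuffle=False` is passed only to make the
+order deterministic):
+
+```python
+import tensorflow as tf
+
+# Hypothetical layout: /path/to/dir/{a.txt, b.py, c.py}
+dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
+for filename in dataset:
+  # Prints the matching file names, e.g. b'/path/to/dir/b.py' and
+  # b'/path/to/dir/c.py', in a deterministic order.
+  print(filename.numpy())
+```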

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +
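+A minimal sketch (assuming the `experimental_deterministic` flag used in the
+`with_options` example later on this page) of reading the merged options back
+from a dataset:
+
+```python
+import tensorflow as tf
+
+options = tf.data.Options()
+options.experimental_deterministic = False
+
+dataset = tf.data.Dataset.range(5).with_options(options)
+# `options()` returns the options of this dataset merged with those of its
+# inputs.
+print(dataset.options().experimental_deterministic)  # ==> False
+```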

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
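+A sketch of the common pattern of ending a pipeline with `prefetch`, letting
+`tf.data.experimental.AUTOTUNE` pick the buffer size (the map function here is
+only a placeholder preprocessing step):
+
+```python
+import tensorflow as tf
+
+dataset = (tf.data.Dataset.range(100)
+           .map(lambda x: x * 2)   # placeholder preprocessing
+           .batch(20)
+           .prefetch(tf.data.experimental.AUTOTUNE))
+# Later batches are prepared in the background while the current batch is
+# being consumed.
+```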

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args` + +follows the same semantics as Python's built-in `range`. +len(args) == 1 -> start = 0, stop = args[0], step = 1. +len(args) == 2 -> start = args[0], stop = args[1], step = 1. +len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. +
+`**kwargs` + +- output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64). +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func` + +A function that maps `(old_state, input_element)` to +`new_state`. It must take two arguments and return a new element. +The structure of `new_state` must match the structure of +`initial_state`. +
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of times the dataset should be repeated. The default behavior (if +`count` is `None` or `-1`) is for the dataset to be repeated indefinitely. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +if `num_shards` or `index` are illegal values. + +Note: error checking is done on a best-effort basis, and errors aren't +guaranteed to be caught upon dataset creation. (e.g. passing in a +placeholder tensor bypasses the early checking, and will instead result +in an error during a session.run call.) +
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of (nests of) windows -- finite datasets of flat +elements created from the (nests of) input elements. +
+ + + +
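
Because each window is itself a small `Dataset`, a common follow-up step is to flatten the windows into tensors with `flat_map`. The sketch below is illustrative (not part of the upstream docstring) and assumes TF 2.x eager execution:

```python
import tensorflow as tf

# Turn a dataset of scalars into a dataset of sliding-window tensors of
# length 3, shifted by 1, dropping incomplete windows at the end.
dataset = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(3))
for tensor in dataset:
    print(tensor.numpy())   # [0 1 2], [1 2 3], ..., [4 5 6]
```
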

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options` + +A tf.data.Options that identifies the options to use. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +when an option is set more than once to a non-default value +
+ + + +
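
As a further illustrative sketch (not part of the upstream docstring), options that touch different fields merge cleanly, so per-concern `Options` objects can be applied separately:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(10).map(lambda x: x * 2)

# Opt out of deterministic ordering for the whole pipeline.
order_options = tf.data.Options()
order_options.experimental_deterministic = False

# Independently configure the threading behaviour of the pipeline.
threading_options = tf.data.Options()
threading_options.experimental_threading.private_threadpool_size = 4

# The two Options objects set different non-default values, so applying
# both simply merges them.
dataset = dataset.with_options(order_options).with_options(threading_options)
```
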

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
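
Since `datasets` may be any nested structure, dictionaries work as well. The following sketch is illustrative (not part of the upstream docstring):

```python
import tensorflow as tf

features = tf.data.Dataset.range(1, 4)   # ==> [ 1, 2, 3 ]
labels = tf.data.Dataset.range(4, 7)     # ==> [ 4, 5, 6 ]

# Zipping a dict of datasets yields elements that are dicts of tensors.
dataset = tf.data.Dataset.zip({'feature': features, 'label': labels})
for element in dataset.as_numpy_iterator():
    print(element)   # {'feature': 1, 'label': 4}, ...
```
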

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/data/experimental.md b/site/en/api_docs/python/tf/data/experimental.md new file mode 100644 index 00000000000..c03be37b61c --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental.md @@ -0,0 +1,137 @@ +description: Experimental API for building input pipelines. + +
+ + + + + +
+ +# Module: tf.data.experimental + + + + + + + + + +Experimental API for building input pipelines. + + +This module contains experimental `Dataset` sources and transformations that can +be used in conjunction with the tf.data.Dataset API. Note that the +tf.data.experimental API is not subject to the same backwards compatibility +guarantees as tf.data, but we will provide deprecation advice in advance of +removing existing functionality. + +See [Importing Data](https://tensorflow.org/guide/datasets) for an overview. + + + + +## Classes + +[`class AutoShardPolicy`](../../tf/data/experimental/AutoShardPolicy.md): Represents the type of auto-sharding we enable. + +[`class CheckpointInputPipelineHook`](../../tf/data/experimental/CheckpointInputPipelineHook.md): Checkpoints input pipeline state every N steps or seconds. + +[`class CsvDataset`](../../tf/data/experimental/CsvDataset.md): A Dataset comprising lines from one or more CSV files. + +[`class DistributeOptions`](../../tf/data/experimental/DistributeOptions.md): Represents options for distributed data processing. + +[`class MapVectorizationOptions`](../../tf/data/experimental/MapVectorizationOptions.md): Represents options for the MapVectorization optimization. + +[`class OptimizationOptions`](../../tf/data/experimental/OptimizationOptions.md): Represents options for dataset optimizations. + +[`class Optional`](../../tf/data/experimental/Optional.md): Wraps a value that may/may not be present at runtime. + +[`class RandomDataset`](../../tf/data/experimental/RandomDataset.md): A `Dataset` of pseudorandom values. + +[`class Reducer`](../../tf/data/experimental/Reducer.md): A reducer is used for reducing a set of elements. + +[`class SqlDataset`](../../tf/data/experimental/SqlDataset.md): A `Dataset` consisting of the results from a SQL query. + +[`class StatsAggregator`](../../tf/data/experimental/StatsAggregator.md): A stateful resource that aggregates statistics from one or more iterators. + +[`class StatsOptions`](../../tf/data/experimental/StatsOptions.md): Represents options for collecting dataset stats using `StatsAggregator`. + +[`class TFRecordWriter`](../../tf/data/experimental/TFRecordWriter.md): Writes a dataset to a TFRecord file. + +[`class ThreadingOptions`](../../tf/data/experimental/ThreadingOptions.md): Represents options for dataset threading. + +## Functions + +[`Counter(...)`](../../tf/data/experimental/Counter.md): Creates a `Dataset` that counts from `start` in steps of size `step`. + +[`assert_cardinality(...)`](../../tf/data/experimental/assert_cardinality.md): Asserts the cardinality of the input dataset. + +[`bucket_by_sequence_length(...)`](../../tf/data/experimental/bucket_by_sequence_length.md): A transformation that buckets elements in a `Dataset` by length. + +[`bytes_produced_stats(...)`](../../tf/data/experimental/bytes_produced_stats.md): Records the number of bytes produced by each element of the input dataset. + +[`cardinality(...)`](../../tf/data/experimental/cardinality.md): Returns the cardinality of `dataset`, if known. + +[`choose_from_datasets(...)`](../../tf/data/experimental/choose_from_datasets.md): Creates a dataset that deterministically chooses elements from `datasets`. + +[`copy_to_device(...)`](../../tf/data/experimental/copy_to_device.md): A transformation that copies dataset elements to the given `target_device`. + +[`dense_to_ragged_batch(...)`](../../tf/data/experimental/dense_to_ragged_batch.md): A transformation that batches ragged elements into tf.RaggedTensors. 
+ +[`dense_to_sparse_batch(...)`](../../tf/data/experimental/dense_to_sparse_batch.md): A transformation that batches ragged elements into tf.SparseTensors. + +[`enumerate_dataset(...)`](../../tf/data/experimental/enumerate_dataset.md): A transformation that enumerates the elements of a dataset. (deprecated) + +[`from_variant(...)`](../../tf/data/experimental/from_variant.md): Constructs a dataset from the given variant and structure. + +[`get_next_as_optional(...)`](../../tf/data/experimental/get_next_as_optional.md): Returns an `Optional` that contains the next value from the iterator. + +[`get_single_element(...)`](../../tf/data/experimental/get_single_element.md): Returns the single element in `dataset` as a nested structure of tensors. + +[`get_structure(...)`](../../tf/data/experimental/get_structure.md): Returns the type specification of an element of a `Dataset` or `Iterator`. + +[`group_by_reducer(...)`](../../tf/data/experimental/group_by_reducer.md): A transformation that groups elements and performs a reduction. + +[`group_by_window(...)`](../../tf/data/experimental/group_by_window.md): A transformation that groups windows of elements by key and reduces them. + +[`ignore_errors(...)`](../../tf/data/experimental/ignore_errors.md): Creates a `Dataset` from another `Dataset` and silently ignores any errors. + +[`latency_stats(...)`](../../tf/data/experimental/latency_stats.md): Records the latency of producing each element of the input dataset. + +[`make_batched_features_dataset(...)`](../../tf/data/experimental/make_batched_features_dataset.md): Returns a `Dataset` of feature dictionaries from `Example` protos. + +[`make_csv_dataset(...)`](../../tf/data/experimental/make_csv_dataset.md): Reads CSV files into a dataset. + +[`make_saveable_from_iterator(...)`](../../tf/data/experimental/make_saveable_from_iterator.md): Returns a SaveableObject for saving/restoring iterator state using Saver. + +[`map_and_batch(...)`](../../tf/data/experimental/map_and_batch.md): Fused implementation of `map` and `batch`. (deprecated) + +[`parallel_interleave(...)`](../../tf/data/experimental/parallel_interleave.md): A parallel version of the Dataset.interleave() transformation. (deprecated) + +[`parse_example_dataset(...)`](../../tf/data/experimental/parse_example_dataset.md): A transformation that parses `Example` protos into a `dict` of tensors. + +[`prefetch_to_device(...)`](../../tf/data/experimental/prefetch_to_device.md): A transformation that prefetches dataset values to the given `device`. + +[`rejection_resample(...)`](../../tf/data/experimental/rejection_resample.md): A transformation that resamples a dataset to achieve a target distribution. + +[`sample_from_datasets(...)`](../../tf/data/experimental/sample_from_datasets.md): Samples elements at random from the datasets in `datasets`. + +[`scan(...)`](../../tf/data/experimental/scan.md): A transformation that scans a function across an input dataset. + +[`shuffle_and_repeat(...)`](../../tf/data/experimental/shuffle_and_repeat.md): Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated) + +[`take_while(...)`](../../tf/data/experimental/take_while.md): A transformation that stops dataset iteration based on a `predicate`. + +[`to_variant(...)`](../../tf/data/experimental/to_variant.md): Returns a variant representing the given dataset. + +[`unbatch(...)`](../../tf/data/experimental/unbatch.md): Splits elements of a dataset into multiple elements on the batch dimension. 
(deprecated) + +[`unique(...)`](../../tf/data/experimental/unique.md): Creates a `Dataset` from another `Dataset`, discarding duplicates. + +## Other Members + +* `AUTOTUNE = -1` +* `INFINITE_CARDINALITY = -1` +* `UNKNOWN_CARDINALITY = -2` diff --git a/site/en/api_docs/python/tf/data/experimental/AutoShardPolicy.md b/site/en/api_docs/python/tf/data/experimental/AutoShardPolicy.md new file mode 100644 index 00000000000..bc36ca27279 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/AutoShardPolicy.md @@ -0,0 +1,50 @@ +description: Represents the type of auto-sharding we enable. + +
+ + + + + + +
+ +# tf.data.experimental.AutoShardPolicy + + + + + + + + + +Represents the type of auto-sharding we enable. + + + + + +Please see the DistributeOptions.auto_shard_policy documentation for more +information on each type of autosharding. + +## Class Variables + +* `AUTO = ` +* `DATA = ` +* `FILE = ` +* `OFF = ` diff --git a/site/en/api_docs/python/tf/data/experimental/CheckpointInputPipelineHook.md b/site/en/api_docs/python/tf/data/experimental/CheckpointInputPipelineHook.md new file mode 100644 index 00000000000..1a685de1485 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/CheckpointInputPipelineHook.md @@ -0,0 +1,356 @@ +description: Checkpoints input pipeline state every N steps or seconds. + +
+ + + + + + + + +
+ +# tf.data.experimental.CheckpointInputPipelineHook + + + + + + + + + +Checkpoints input pipeline state every N steps or seconds. + +Inherits From: [`SessionRunHook`](../../../tf/estimator/SessionRunHook.md) + + + + + + + + + +This hook saves the state of the iterators in the `Graph` so that when +training is resumed the input pipeline continues from where it left off. +This could potentially avoid overfitting in certain pipelines where the +number of training steps per eval are small compared to the dataset +size or if the training pipeline is pre-empted. + +Differences from `CheckpointSaverHook`: +1. Saves only the input pipelines in the "iterators" collection and not the + global variables or other saveable objects. +2. Does not write the `GraphDef` and `MetaGraphDef` to the summary. + +Example of checkpointing the training pipeline: + +```python +est = tf.estimator.Estimator(model_fn) +while True: + est.train( + train_input_fn, + hooks=[tf.data.experimental.CheckpointInputPipelineHook(est)], + steps=train_steps_per_eval) + # Note: We do not pass the hook here. + metrics = est.evaluate(eval_input_fn) + if should_stop_the_training(metrics): + break +``` + +This hook should be used if the input pipeline state needs to be saved +separate from the model checkpoint. Doing so may be useful for a few reasons: +1. The input pipeline checkpoint may be large, if there are large shuffle + or prefetch buffers for instance, and may bloat the checkpoint size. +2. If the input pipeline is shared between training and validation, restoring + the checkpoint during validation may override the validation input + pipeline. + +For saving the input pipeline checkpoint alongside the model weights use +tf.data.experimental.make_saveable_from_iterator directly to create a +`SaveableObject` and add to the `SAVEABLE_OBJECTS` collection. Note, however, +that you will need to be careful not to restore the training iterator during +eval. You can do that by not adding the iterator to the SAVEABLE_OBJECTS +collector when building the eval graph. + + + + + + + + + + + + + +
+`estimator` + +Estimator. +
+`external_state_policy` + +A string that identifies how to handle input +pipelines that depend on external state. Possible values are +'ignore': The external state is silently ignored. +'warn': The external state is ignored and a warning is logged. +'fail': The operation fails upon encountering external state. +Defaults to 'fail'. +
+ + + + + + + + + + + + + + + + + + +
+`ValueError` + +One of `save_steps` or `save_secs` should be set. +
+`ValueError` + +At most one of saver or scaffold should be set. +
+`ValueError` + +If `external_state_policy` is not one of 'warn', 'ignore' or +'fail'. +
+ + + +## Methods + +

after_create_session

+ +View source + + + +Called when a new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +differs from the situation in which `begin` is called in two essential ways: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+ +View source + + + +Called after each call to run(). + +The `run_values` argument contains the results of the ops/tensors requested +by `before_run()`. + +The `run_context` argument is the same one sent to the `before_run` call. +`run_context.request_stop()` can be called to stop the iteration. + +If `session.run()` raises any exceptions, then `after_run()` is not called. +
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the `run()` call. +The run args you return can also contain feeds to be added to the `run()` +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +ops/tensors and the TensorFlow Session. + +At this point the graph is finalized and you cannot add ops. +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +cannot modify the graph anymore. A second call to `begin()` on the same +graph should not change the graph. +

end

+ +View source + + + +Called at the end of the session. + +The `session` argument can be used in case the hook wants to run final ops, +such as saving a last checkpoint. + +If `session.run()` raises an exception other than OutOfRangeError or +StopIteration, then `end()` is not called. +Note the difference between `end()` and `after_run()` behavior when +`session.run()` raises OutOfRangeError or StopIteration. In that case +`end()` is called but `after_run()` is not called. +
Args
+`session` + +A TensorFlow Session that will be soon closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/data/experimental/Counter.md b/site/en/api_docs/python/tf/data/experimental/Counter.md new file mode 100644 index 00000000000..6a224fb0f14 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/Counter.md @@ -0,0 +1,91 @@ +description: Creates a Dataset that counts from start in steps of size step. + +
+ + +
+ +# tf.data.experimental.Counter + + + + + + + + + +Creates a `Dataset` that counts from `start` in steps of size `step`. + + + + + + + + +#### For example: + + + +```python +Dataset.count() == [0, 1, 2, ...) +Dataset.count(2) == [2, 3, ...) +Dataset.count(2, 5) == [2, 7, 12, ...) +Dataset.count(0, -1) == [0, -1, -2, ...) +Dataset.count(10, -1) == [10, 9, ...) +``` + + + + + + + + + + + + + + + + +
+`start` + +(Optional.) The starting value for the counter. Defaults to 0. +
+`step` + +(Optional.) The step size for the counter. Defaults to 1. +
+`dtype` + +(Optional.) The data type for counter elements. Defaults to +tf.int64. +
+ + + + + + + + + + + +
+A `Dataset` of scalar `dtype` elements. +
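
A concrete usage sketch (illustrative, not part of the upstream docstring): because the counter is unbounded, it is typically combined with `take` or `zip`:

```python
import tensorflow as tf

# An unbounded counter starting at 10 and counting down in steps of 2.
counter = tf.data.experimental.Counter(start=10, step=-2)
print(list(counter.take(4).as_numpy_iterator()))   # [10, 8, 6, 4]
```
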
+ diff --git a/site/en/api_docs/python/tf/data/experimental/CsvDataset.md b/site/en/api_docs/python/tf/data/experimental/CsvDataset.md new file mode 100644 index 00000000000..827f8ee42d0 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/CsvDataset.md @@ -0,0 +1,2712 @@ +description: A Dataset comprising lines from one or more CSV files. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.data.experimental.CsvDataset + + + + + + + + + +A Dataset comprising lines from one or more CSV files. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A tf.string tensor containing one or more filenames. +
+`record_defaults` + +A list of default values for the CSV fields. Each item in +the list is either a valid CSV `DType` (float32, float64, int32, int64, +string), or a `Tensor` object with one of the above types. There is one per +column of CSV data, with either a scalar `Tensor` default value for the +column if it is optional, or a `DType` or empty `Tensor` if required. If +both this and `select_cols` are specified, they must have the same +length, and `record_defaults` is assumed to be sorted in order of +increasing column index. +
+`compression_type` + +(Optional.) A tf.string scalar evaluating to one of +`""` (no compression), `"ZLIB"`, or `"GZIP"`. Defaults to no +compression. +
+`buffer_size` + +(Optional.) A tf.int64 scalar denoting the number of bytes +to buffer while reading files. Defaults to 4MB. +
+`header` + +(Optional.) A tf.bool scalar indicating whether the CSV file(s) +have header line(s) that should be skipped when parsing. Defaults to +`False`. +
+`field_delim` + +(Optional.) A tf.string scalar containing the delimiter +character that separates fields in a record. Defaults to `","`. +
+`use_quote_delim` + +(Optional.) A tf.bool scalar. If `False`, treats +double quotation marks as regular characters inside of string fields +(ignoring RFC 4180, Section 2, Bullet 5). Defaults to `True`. +
+`na_value` + +(Optional.) A tf.string scalar indicating a value that will +be treated as NA/NaN. +
+`select_cols` + +(Optional.) A sorted list of column indices to select from +the input data. If specified, only this subset of columns will be +parsed. Defaults to parsing all columns. +
+ + + + + + + + + + + + + + +
+`element_spec` + +The type specification of an element of this dataset. + +``` +>>> tf.data.Dataset.from_tensor_slices([1, 2, 3]).element_spec +TensorSpec(shape=(), dtype=tf.int32, name=None) +``` +
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +
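
As another illustrative sketch (not part of the upstream docstring), `apply` is also how the canned transformations in tf.data.experimental are used, since they are functions that take and return a `Dataset`:

```python
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([1, 1, 2, 3, 3, 3])

# `tf.data.experimental.unique()` returns a Dataset -> Dataset function,
# so it plugs directly into `apply`.
dataset = dataset.apply(tf.data.experimental.unique())
print(list(dataset.as_numpy_iterator()))   # [1, 2, 3]
```
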

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of Dataset.from_generator() uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +Dataset.from_generator(). The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to Dataset.from_generator() and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +Dataset.from_generator(). + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
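
The `args` parameter is not demonstrated above; the sketch below (illustrative, not part of the upstream docstring) shows how values are evaluated and forwarded to the generator as NumPy arguments:

```python
import tensorflow as tf

def gen(limit):
    # `limit` arrives as a NumPy scalar because it is passed via `args`.
    for i in range(limit):
        yield i

dataset = tf.data.Dataset.from_generator(
    gen, tf.int64, tf.TensorShape([]), args=(5,))
print(list(dataset.as_numpy_iterator()))   # [0, 1, 2, 3, 4]
```
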

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use Dataset.interleave() to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +Dataset.from_tensor_slices(filenames) instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
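
A short usage sketch (illustrative; the glob pattern below is hypothetical and not taken from the upstream docstring):

```python
import tensorflow as tf

# Deterministically list the matching files, then read their lines.
files = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
dataset = files.interleave(tf.data.TextLineDataset, cycle_length=2)
```
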

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +
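
For illustration (not part of the upstream docstring), the object returned by `options()` reflects whatever was set earlier via `with_options`:

```python
import tensorflow as tf

options = tf.data.Options()
options.experimental_deterministic = False

dataset = tf.data.Dataset.range(5).with_options(options)
print(dataset.options().experimental_deterministic)   # False
```
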

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
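
In practice the buffer size is often left to the runtime. The sketch below (illustrative, not part of the upstream docstring) uses tf.data.experimental.AUTOTUNE, which is documented elsewhere in these pages:

```python
import tensorflow as tf

dataset = (tf.data.Dataset.range(100)
           .map(lambda x: x * x)
           .batch(10)
           # Let tf.data choose the prefetch buffer size dynamically.
           .prefetch(tf.data.experimental.AUTOTUNE))
```
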

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args` + +follows the same semantics as python's xrange. +len(args) == 1 -> start = 0, stop = args[0], step = 1. +len(args) == 2 -> start = args[0], stop = args[1], step = 1. +len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. +
+`**kwargs` + +- output_type: The dtype of the elements in the resulting `Dataset`. (Optional, default: tf.int64). +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func` + +A function that maps `(old_state, input_element)` to +`new_state`. It must take two arguments and return a new state. +The structure of `new_state` must match the structure of +`initial_state`. +
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +
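
Because `new_state` must match the structure of `initial_state`, the state can itself be a nested structure. The following sketch (illustrative, not part of the upstream docstring) tracks a count and a running total together:

```python
import numpy as np
import tensorflow as tf

dataset = tf.data.Dataset.range(1, 6)   # ==> [ 1, 2, 3, 4, 5 ]

# The state is a (count, total) tuple; every step must return the same
# structure as the initial state.
count, total = dataset.reduce(
    (np.int64(0), np.int64(0)),
    lambda state, x: (state[0] + 1, state[1] + x))
print(count.numpy(), total.numpy())   # 5 15
```
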

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of times the dataset should be repeated. The default behavior (if +`count` is `None` or `-1`) is for the dataset to be repeated indefinitely. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError`
+
+if `num_shards` or `index` are illegal values.
+
+Note: error checking is done on a best-effort basis, and errors aren't
+guaranteed to be caught upon dataset creation. (e.g. providing a
+placeholder tensor bypasses the early checking, and will instead result
+in an error during a session.run call.)
+
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
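+
+The following sketch (illustrative only; assumes TensorFlow 2.x executing
+eagerly, imported as `tf`) shows how a full, reproducible shuffle can be
+obtained by sizing the buffer to the whole dataset, fixing `seed`, and
+disabling reshuffling between iterations:
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(10)
+
+# buffer_size >= dataset size gives a uniform shuffle; with a fixed seed and
+# reshuffle_each_iteration=False, every pass yields the same order.
+shuffled = ds.shuffle(buffer_size=10, seed=42, reshuffle_each_iteration=False)
+print(list(shuffled.as_numpy_iterator()))
+print(list(shuffled.as_numpy_iterator()))  # same order as the first pass
+```
+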

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
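+
+A common pattern (shown here as an illustrative sketch; it assumes
+TensorFlow 2.x executing eagerly, imported as `tf`) is to combine `take` and
+`skip` to carve a validation split off the front of a dataset:
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(100)
+
+val_ds = ds.take(20)    # first 20 elements
+train_ds = ds.skip(20)  # remaining 80 elements
+print(len(list(val_ds.as_numpy_iterator())),
+      len(list(train_ds.as_numpy_iterator())))  # 20 80
+```
+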

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +
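+
+As an illustrative sketch (assuming TensorFlow 2.x executing eagerly,
+imported as `tf`), a `batch` followed by `unbatch` recovers the original
+element stream:
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(6).batch(4)  # [[0, 1, 2, 3], [4, 5]]
+
+# unbatch splits every batch back into its individual elements.
+flat = ds.unbatch()
+print(list(flat.as_numpy_iterator()))   # [0, 1, 2, 3, 4, 5]
+```
+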

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset`
+
+A `Dataset` of (nests of) windows -- finite datasets of flat
+elements created from the (nests of) input elements.
+
+ + + +
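+
+Because each window is itself a small dataset, a common follow-up
+(illustrative sketch; assumes TensorFlow 2.x executing eagerly, imported as
+`tf`) is to turn the windows into dense tensors by batching each window to
+its full size and flattening the result with `flat_map`:
+
+```python
+import tensorflow as tf
+
+size = 3
+ds = tf.data.Dataset.range(6)
+
+# Sliding windows of 3 consecutive elements, advanced by 1 element each step.
+windows = (ds.window(size, shift=1, drop_remainder=True)
+             .flat_map(lambda w: w.batch(size)))
+print([w.tolist() for w in windows.as_numpy_iterator()])
+# [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]
+```
+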

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options`
+
+A tf.data.Options that identifies the options to use.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError`
+
+when an option is set more than once to a non-default value.
+
+ + + +
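+
+A minimal sketch (illustrative only; assumes TensorFlow 2.x, imported as
+`tf`) of setting an option and reading the merged options back via
+`Dataset.options()`:
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(10).map(lambda x: x * 2)
+
+options = tf.data.Options()
+options.experimental_deterministic = False  # allow out-of-order output
+ds = ds.with_options(options)
+
+print(ds.options().experimental_deterministic)  # False
+```
+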

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/data/experimental/DistributeOptions.md b/site/en/api_docs/python/tf/data/experimental/DistributeOptions.md new file mode 100644 index 00000000000..104606b1566 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/DistributeOptions.md @@ -0,0 +1,112 @@ +description: Represents options for distributed data processing. + +
+ + + + + +
+ +# tf.data.experimental.DistributeOptions + + + + + + + + + +Represents options for distributed data processing. + + + + + + + + + +You can set the distribution options of a dataset through the +`experimental_distribute` property of tf.data.Options; the property is +an instance of tf.data.experimental.DistributeOptions. + +```python +options = tf.data.Options() +options.experimental_distribute.auto_shard_policy = AutoShardPolicy.OFF +dataset = dataset.with_options(options) +``` + + + + + + + + + + + + + + + +
+`auto_shard_policy`
+
+The type of sharding that auto-shard should attempt. If this is set to FILE, then we will attempt to shard by files (each worker will get a set of files to process). If we cannot find a set of files to shard for at least one file per worker, we will error out. When this option is selected, make sure that you have enough files so that each worker gets at least one file. A runtime error will be thrown if there are insufficient files. If this is set to DATA, then we will shard by elements produced by the dataset, and each worker will process the whole dataset and discard the portion that is not for itself. If this is set to OFF, then we will not autoshard, and each worker will receive a copy of the full dataset. This option is set to AUTO by default; AUTO will first attempt to shard by FILE, and fall back to sharding by DATA if we cannot find a set of files to shard.
+
+`num_devices` + +The number of devices attached to this input pipeline. This will be automatically set by MultiDeviceIterator. +
+ + + +## Methods + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/data/experimental/MapVectorizationOptions.md b/site/en/api_docs/python/tf/data/experimental/MapVectorizationOptions.md new file mode 100644 index 00000000000..00c911d3129 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/MapVectorizationOptions.md @@ -0,0 +1,103 @@ +description: Represents options for the MapVectorization optimization. + +
+ + + + + +
+ +# tf.data.experimental.MapVectorizationOptions + + + + + + + + + +Represents options for the MapVectorization optimization. + + + + + + + + + + + + + + + + + + + + + + + + +
+`enabled` + +Whether to vectorize map transformations. If None, defaults to False. +
+`use_choose_fastest`
+
+Whether to use ChooseFastestBranchDataset with this transformation. If True, the pipeline picks between the vectorized and original segment at runtime based on their iteration speed. If None, defaults to False.
+
+ + + +## Methods + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/data/experimental/OptimizationOptions.md b/site/en/api_docs/python/tf/data/experimental/OptimizationOptions.md new file mode 100644 index 00000000000..83ca802fdbd --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/OptimizationOptions.md @@ -0,0 +1,205 @@ +description: Represents options for dataset optimizations. + +
+ + + + + +
+ +# tf.data.experimental.OptimizationOptions + + + + + + + + + +Represents options for dataset optimizations. + + + + + + + + + +You can set the optimization options of a dataset through the +`experimental_optimization` property of tf.data.Options; the property is +an instance of tf.data.experimental.OptimizationOptions. + +```python +options = tf.data.Options() +options.experimental_optimization.noop_elimination = True +options.experimental_optimization.map_vectorization.enabled = True +options.experimental_optimization.apply_default_optimizations = False +dataset = dataset.with_options(options) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`apply_default_optimizations` + +Whether to apply default graph optimizations. If False, only graph optimizations that have been explicitly enabled will be applied. +
+`autotune` + +Whether to automatically tune performance knobs. If None, defaults to True. +
+`autotune_buffers` + +When autotuning is enabled (through `autotune`), determines whether to also autotune buffer sizes for datasets with parallelism. If None, defaults to False. +
+`autotune_cpu_budget` + +When autotuning is enabled (through `autotune`), determines the CPU budget to use. Values greater than the number of schedulable CPU cores are allowed but may result in CPU contention. If None, defaults to the number of schedulable CPU cores. +
+`filter_fusion` + +Whether to fuse filter transformations. If None, defaults to False. +
+`filter_with_random_uniform_fusion`
+
+Whether to fuse a filter dataset whose predicate is `random_uniform < rate` into a sampling dataset. If None, defaults to False.
+
+`hoist_random_uniform` + +Whether to hoist `tf.random_uniform()` ops out of map transformations. If None, defaults to False. +
+`map_and_batch_fusion` + +Whether to fuse map and batch transformations. If None, defaults to True. +
+`map_and_filter_fusion` + +Whether to fuse map and filter transformations. If None, defaults to False. +
+`map_fusion` + +Whether to fuse map transformations. If None, defaults to False. +
+`map_parallelization` + +Whether to parallelize stateless map transformations. If None, defaults to False. +
+`map_vectorization` + +The map vectorization options associated with the dataset. See tf.data.experimental.MapVectorizationOptions for more details. +
+`noop_elimination` + +Whether to eliminate no-op transformations. If None, defaults to True. +
+`parallel_batch` + +Whether to parallelize copying of batch elements. If None, defaults to False. +
+`shuffle_and_repeat_fusion` + +Whether to fuse shuffle and repeat transformations. If None, defaults to True. +
+ + + +## Methods + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/data/experimental/Optional.md b/site/en/api_docs/python/tf/data/experimental/Optional.md new file mode 100644 index 00000000000..66654d52193 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/Optional.md @@ -0,0 +1,260 @@ +description: Wraps a value that may/may not be present at runtime. + +
+ + + + + + +
+ +# tf.data.experimental.Optional + + + + + + + + + +Wraps a value that may/may not be present at runtime. + + + + + +An `Optional` can represent the result of an operation that may fail as a +value, rather than raising an exception and halting execution. For example, +tf.data.experimental.get_next_as_optional returns an `Optional` that either +contains the next value of an iterator if one exists, or a "none" value that +indicates the end of the sequence has been reached. + +`Optional` can only be used by values that are convertible to `Tensor` or +`CompositeTensor`. + + + + + + + + + + + + +
+`value_structure` + +The structure of the components of this optional. +
+ + + +## Methods + +

from_value

+ +View source + + + +Returns an `Optional` that wraps the given value. + + + + + + + + + + + +
Args
+`value` + +A value to wrap. The value must be convertible to `Tensor` or +`CompositeTensor`. +
+ + + + + + + + + + + +
Returns
+An `Optional` that wraps `value`. +
+ + + +
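+
+A small usage sketch (illustrative only; assumes TensorFlow 2.x executing
+eagerly, imported as `tf`) combining `from_value`, `has_value` and
+`get_value`:
+
+```python
+import tensorflow as tf
+
+opt = tf.data.experimental.Optional.from_value(42)
+
+# has_value() is a boolean tensor; get_value() returns the wrapped value
+# (and fails at runtime if the optional is empty).
+print(opt.has_value().numpy())  # True
+print(opt.get_value().numpy())  # 42
+```
+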

get_value

+ +View source + + + +Returns the value wrapped by this optional. + +If this optional does not have a value (i.e. `self.has_value()` evaluates +to `False`), this operation will raise tf.errors.InvalidArgumentError +at runtime. + + + + + + + + + + +
Args
+`name` + +(Optional.) A name for the created operation. +
+ + + + + + + + + + + +
Returns
+The wrapped value. +
+ + + +

has_value

+ +View source + + + +Returns a tensor that evaluates to `True` if this optional has a value. + + + + + + + + + + + +
Args
+`name` + +(Optional.) A name for the created operation. +
+ + + + + + + + + + + +
Returns
+A scalar tf.Tensor of type tf.bool. +
+ + + +
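+
+The sketch below (illustrative only; assumes TensorFlow 2.x executing
+eagerly, imported as `tf`) uses `has_value` together with
+tf.data.experimental.get_next_as_optional to drain an iterator without
+relying on an end-of-sequence error:
+
+```python
+import tensorflow as tf
+
+it = iter(tf.data.Dataset.range(3))
+
+while True:
+    # Returns an Optional instead of raising OutOfRangeError at the end.
+    opt = tf.data.experimental.get_next_as_optional(it)
+    if not opt.has_value():
+        break
+    print(opt.get_value().numpy())  # 0, 1, 2
+```
+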

none_from_structure

+ +View source + + + +Returns an `Optional` that has no value. + +NOTE: This method takes an argument that defines the structure of the value +that would be contained in the returned `Optional` if it had a value. + + + + + + + + + + +
Args
+`value_structure` + +A `Structure` object representing the structure of the +components of this optional. +
+ + + + + + + + + + + +
Returns
+An `Optional` that has no value. +
+ + + + + diff --git a/site/en/api_docs/python/tf/data/experimental/RandomDataset.md b/site/en/api_docs/python/tf/data/experimental/RandomDataset.md new file mode 100644 index 00000000000..132cd1f3c66 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/RandomDataset.md @@ -0,0 +1,2619 @@ +description: A Dataset of pseudorandom values. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.data.experimental.RandomDataset + + + + + + + + + +A `Dataset` of pseudorandom values. + + + + + + + + + + + + + + + + + + + +
+`element_spec`
+
+The type specification of an element of this dataset.
+
+```
+>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
+>>> dataset.element_spec
+TensorSpec(shape=(), dtype=tf.int32, name=None)
+```
+
+ + + +## Methods + +

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +
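+
+An illustrative sketch (assuming TensorFlow 2.x executing eagerly, imported
+as `tf`) of packaging a reusable pipeline stage as a `Dataset -> Dataset`
+function, which is exactly the shape `apply` expects:
+
+```python
+import tensorflow as tf
+
+def cast_and_batch(batch_size):
+    # Returns a function taking one Dataset and returning a Dataset.
+    def _apply_fn(ds):
+        return ds.map(lambda x: tf.cast(x, tf.float32)).batch(batch_size)
+    return _apply_fn
+
+ds = tf.data.Dataset.range(6).apply(cast_and_batch(3))
+print(list(ds.as_numpy_iterator()))
+# [array([0., 1., 2.], dtype=float32), array([3., 4., 5.], dtype=float32)]
+```
+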

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of Dataset.from_generator() uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +Dataset.from_generator(). The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to Dataset.from_generator() and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +Dataset.from_generator(). + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
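+
+A short sketch (illustrative only; assumes TensorFlow 2.x executing eagerly,
+imported as `tf`) of the `args` parameter, whose values are delivered to the
+generator as NumPy arguments:
+
+```python
+import tensorflow as tf
+
+def gen(limit):
+    # `limit` arrives as a NumPy value because it is passed via `args`.
+    for i in range(int(limit)):
+        yield i
+
+ds = tf.data.Dataset.from_generator(gen, output_types=tf.int64, args=(4,))
+print(list(ds.as_numpy_iterator()))  # [0, 1, 2, 3]
+```
+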

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use Dataset.interleave() to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +Dataset.from_tensor_slices(filenames) instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
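+
+A self-contained sketch (illustrative only; it assumes TensorFlow 2.x
+executing eagerly, imported as `tf`, and creates a few throwaway files just
+to have something to glob over):
+
+```python
+import pathlib
+import tempfile
+import tensorflow as tf
+
+tmp = pathlib.Path(tempfile.mkdtemp())
+for name in ["a.txt", "b.txt", "c.csv"]:
+    (tmp / name).write_text("data")
+
+# shuffle=False makes the order deterministic.
+files = tf.data.Dataset.list_files(str(tmp / "*.txt"), shuffle=False)
+print(sorted(f.numpy().decode() for f in files))  # the two *.txt paths
+```
+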

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls`
+
+(Optional.) A tf.int32 scalar tf.Tensor,
+representing the number of elements to process asynchronously in parallel.
+If not specified, elements will be processed sequentially. If the value
+tf.data.experimental.AUTOTUNE is used, then the number of parallel
+calls is set dynamically based on available CPU.
+
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
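+
+A typical pipeline tail (illustrative sketch; assumes TensorFlow 2.x,
+imported as `tf`) batches the elements and then prefetches, so the next
+batch is prepared while the current one is being consumed:
+
+```python
+import tensorflow as tf
+
+ds = (tf.data.Dataset.range(100)
+        .batch(32)
+        .prefetch(tf.data.experimental.AUTOTUNE))
+for batch in ds.take(2):
+    print(batch.shape)  # (32,) (32,)
+```
+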

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args` + +follows the same semantics as python's xrange. +len(args) == 1 -> start = 0, stop = args[0], step = 1. +len(args) == 2 -> start = args[0], stop = args[1], step = 1. +len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. +
+`**kwargs`
+
+- output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64).
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func`
+
+A function that maps `(old_state, input_element)` to
+`new_state`. It must take two arguments and return a new state. The
+structure of `new_state` must match the structure of `initial_state`.
+
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count`
+
+(Optional.) A tf.int64 scalar tf.Tensor, representing the
+number of times the dataset should be repeated. The default behavior (if
+`count` is `None` or `-1`) is for the dataset to be repeated indefinitely.
+
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +if `num_shards` or `index` are illegal values. + +Note: error checking is done on a best-effort basis, and errors aren't +guaranteed to be caught upon dataset creation. (e.g. providing a +placeholder tensor bypasses the early checking, and will instead result +in an error during a session.run call.) + 
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
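+
+As a rule of thumb for training pipelines, a buffer at least as large as the
+dataset gives a uniform shuffle, and `shuffle` is applied before `batch` so
+that shuffling happens at the example level. A sketch (buffer and batch sizes
+are illustrative):
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(100)
+dataset = dataset.shuffle(buffer_size=100, seed=42)  # buffer covers the whole dataset
+dataset = dataset.batch(10)                          # shuffle before batching
+```
+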

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
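+
+`take` is often paired with `skip` to carve a dataset into non-overlapping
+pieces, for example a simple train/validation split (the split sizes below are
+illustrative):
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(10)
+train = dataset.take(8)  # first 8 elements
+val = dataset.skip(8)    # remaining 2 elements
+print(list(train.as_numpy_iterator()))  # [0, 1, 2, 3, 4, 5, 6, 7]
+print(list(val.as_numpy_iterator()))    # [8, 9]
+```
+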

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of (nests of) windows -- a finite datasets of flat +elements created from the (nests of) input elements. +
+ + + +
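+
+Because each window is itself a small `Dataset`, a common follow-up is to
+flatten the windows back into tensors with `flat_map` and `batch`, e.g. to
+build sliding-window inputs. A sketch:
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True)
+# Each element of `dataset` is a window (a nested Dataset); turn it into a tensor.
+dataset = dataset.flat_map(lambda window: window.batch(3))
+print(list(dataset.as_numpy_iterator()))
+# [array([0, 1, 2]), array([1, 2, 3]), array([2, 3, 4]), array([3, 4, 5]), array([4, 5, 6])]
+```
+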

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options` + +A tf.data.Options that identifies the options to use. + 
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +when an option is set more than once to a non-default value. + 
+ + + +

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/data/experimental/Reducer.md b/site/en/api_docs/python/tf/data/experimental/Reducer.md new file mode 100644 index 00000000000..71eceb4c3dd --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/Reducer.md @@ -0,0 +1,84 @@ +description: A reducer is used for reducing a set of elements. + +
+ + + +
+ +# tf.data.experimental.Reducer + + + + + + + + + +A reducer is used for reducing a set of elements. + + + + + + + + + +A reducer is represented as a tuple of the three functions: + 1) initialization function: key => initial state + 2) reduce function: (old state, input) => new state + 3) finalization function: state => result + + + + + + + + + + + + + + + + + + +
+`finalize_func` + + +
+`init_func` + + +
+`reduce_func` + + +
+ + + diff --git a/site/en/api_docs/python/tf/data/experimental/SqlDataset.md b/site/en/api_docs/python/tf/data/experimental/SqlDataset.md new file mode 100644 index 00000000000..9866d34530b --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/SqlDataset.md @@ -0,0 +1,2660 @@ +description: A Dataset consisting of the results from a SQL query. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.data.experimental.SqlDataset + + + + + + + + + +A `Dataset` consisting of the results from a SQL query. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`driver_name` + +A 0-D tf.string tensor containing the database type. +Currently, the only supported value is 'sqlite'. +
+`data_source_name` + +A 0-D tf.string tensor containing a connection string +to connect to the database. +
+`query` + +A 0-D tf.string tensor containing the SQL query to execute. +
+`output_types` + +A tuple of tf.DType objects representing the types of the +columns returned by `query`. +
+ + + + + + + + + + + + + + +
+`element_spec` + +The type specification of an element of this dataset. + +``` +>>> tf.data.Dataset.from_tensor_slices([1, 2, 3]).element_spec +TensorSpec(shape=(), dtype=tf.int32, name=None) +``` + 
+ + + +## Methods + +
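+
+A minimal usage sketch, assuming a local SQLite database file with a
+`students` table whose `first_name` and `last_name` columns are text (the
+path, table, and column names are hypothetical):
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.experimental.SqlDataset(
+    driver_name="sqlite",
+    data_source_name="/path/to/students.db",
+    query="SELECT first_name, last_name FROM students",
+    output_types=(tf.string, tf.string))
+for first_name, last_name in dataset:
+    print(first_name.numpy(), last_name.numpy())
+```
+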

apply

+ +View source + + + +Applies a transformation function to this dataset. + +`apply` enables chaining of custom `Dataset` transformations, which are +represented as functions that take one `Dataset` argument and return a +transformed `Dataset`. + +``` +>>> dataset = tf.data.Dataset.range(100) +>>> def dataset_fn(ds): +... return ds.filter(lambda x: x < 5) +>>> dataset = dataset.apply(dataset_fn) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2, 3, 4] +``` + + + + + + + + + + +
Args
+`transformation_func` + +A function that takes one `Dataset` argument and +returns a `Dataset`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` returned by applying `transformation_func` to this +dataset. +
+ + + +

as_numpy_iterator

+ +View source + + + +Returns an iterator which converts all elements of the dataset to numpy. + +Use `as_numpy_iterator` to inspect the content of your dataset. To see +element shapes and types, print dataset elements directly instead of using +`as_numpy_iterator`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset: +... print(element) +tf.Tensor(1, shape=(), dtype=int32) +tf.Tensor(2, shape=(), dtype=int32) +tf.Tensor(3, shape=(), dtype=int32) +``` + +This method requires that you are running in eager mode and the dataset's +element_spec contains only `TensorSpec` components. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +1 +2 +3 +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> print(list(dataset.as_numpy_iterator())) +[1, 2, 3] +``` + +`as_numpy_iterator()` will preserve the nested structure of dataset +elements. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), +... 'b': [5, 6]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, +... {'a': (2, 4), 'b': 6}] +True +``` + + + + + + + + + +
Returns
+An iterable over the elements of the dataset, with their tensors converted +to numpy arrays. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +if an element contains a non-`Tensor` value. +
+`RuntimeError` + +if eager execution is not enabled. +
+ + + +

batch

+ +View source + + + +Combines consecutive elements of this dataset into batches. + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] +``` + +``` +>>> dataset = tf.data.Dataset.range(8) +>>> dataset = dataset.batch(3, drop_remainder=True) +>>> list(dataset.as_numpy_iterator()) +[array([0, 1, 2]), array([3, 4, 5])] +``` + +The components of the resulting element will have an additional outer +dimension, which will be `batch_size` (or `N % batch_size` for the last +element if `batch_size` does not divide the number of input elements `N` +evenly and `drop_remainder` is `False`). If your program depends on the +batches having the same outer dimension, you should set the `drop_remainder` +argument to `True` to prevent the smaller batch from being produced. + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

cache

+ +View source + + + +Caches the elements in this dataset. + +The first time the dataset is iterated over, its elements will be cached +either in the specified file or in memory. Subsequent iterations will +use the cached data. + +Note: For the cache to be finalized, the input dataset must be iterated +through in its entirety. Otherwise, subsequent iterations will not use +cached data. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.map(lambda x: x**2) +>>> dataset = dataset.cache() +>>> # The first time reading through the data will generate the data using +>>> # `range` and `map`. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +>>> # Subsequent iterations read from the cache. +>>> list(dataset.as_numpy_iterator()) +[0, 1, 4, 9, 16] +``` + +When caching to a file, the cached data will persist across runs. Even the +first iteration through the data will read from the cache file. Changing +the input pipeline before the call to `.cache()` will have no effect until +the cache file is removed or the filename is changed. + +``` +>>> dataset = tf.data.Dataset.range(5) +>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[0, 1, 2, 3, 4] +``` + +Note: `cache` will produce exactly the same elements during each iteration +through the dataset. If you wish to randomize the iteration order, make sure +to call `shuffle` *after* calling `cache`. + + + + + + + + + + +
Args
+`filename` + +A tf.string scalar tf.Tensor, representing the name of a +directory on the filesystem to use for caching elements in this Dataset. +If a filename is not provided, the dataset will be cached in memory. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

concatenate

+ +View source + + + +Creates a `Dataset` by concatenating the given dataset with this dataset. + +``` +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] +>>> ds = a.concatenate(b) +>>> list(ds.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7] +>>> # The input dataset and dataset to be concatenated should have the same +>>> # nested structures and output types. +>>> c = tf.data.Dataset.zip((a, b)) +>>> a.concatenate(c) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and (tf.int64, tf.int64) +>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) +>>> a.concatenate(d) +Traceback (most recent call last): +TypeError: Two datasets to concatenate have different types + and +``` + + + + + + + + + + +
Args
+`dataset` + +`Dataset` to be concatenated. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

enumerate

+ +View source + + + +Enumerates the elements of this dataset. + +It is similar to python's `enumerate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.enumerate(start=5) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(5, 1) +(6, 2) +(7, 3) +``` + +``` +>>> # The nested structure of the input dataset determines the structure of +>>> # elements in the resulting dataset. +>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) +>>> dataset = dataset.enumerate() +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(0, array([7, 8], dtype=int32)) +(1, array([ 9, 10], dtype=int32)) +``` + + + + + + + + + + +
Args
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

filter

+ +View source + + + +Filters this dataset according to `predicate`. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.filter(lambda x: x < 3) +>>> list(dataset.as_numpy_iterator()) +[1, 2] +>>> # `tf.math.equal(x, y)` is required for equality comparison +>>> def filter_fn(x): +... return tf.math.equal(x, 1) +>>> dataset = dataset.filter(filter_fn) +>>> list(dataset.as_numpy_iterator()) +[1] +``` + + + + + + + + + + +
Args
+`predicate` + +A function mapping a dataset element to a boolean. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +The `Dataset` containing the elements of this dataset for which +`predicate` is `True`. +
+ + + +

flat_map

+ +View source + + + +Maps `map_func` across this dataset and flattens the result. + +Use `flat_map` if you want to make sure that the order of your dataset +stays the same. For example, to flatten a dataset of batches into a +dataset of their elements: + +``` +>>> dataset = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 4, 5, 6, 7, 8, 9] +``` + +tf.data.Dataset.interleave() is a generalization of `flat_map`, since +`flat_map` produces the same output as +tf.data.Dataset.interleave(cycle_length=1) + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_generator

+ +View source + + + +Creates a `Dataset` whose elements are generated by `generator`. + +The `generator` argument must be a callable object that returns +an object that supports the `iter()` protocol (e.g. a generator function). +The elements generated by `generator` must be compatible with the given +`output_types` and (optional) `output_shapes` arguments. + +``` +>>> import itertools +>>> +>>> def gen(): +... for i in itertools.count(1): +... yield (i, [1] * i) +>>> +>>> dataset = tf.data.Dataset.from_generator( +... gen, +... (tf.int64, tf.int64), +... (tf.TensorShape([]), tf.TensorShape([None]))) +>>> +>>> list(dataset.take(3).as_numpy_iterator()) +[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] +``` + +Note: The current implementation of Dataset.from_generator() uses +tf.numpy_function and inherits the same constraints. In particular, it +requires the `Dataset`- and `Iterator`-related operations to be placed +on a device in the same process as the Python program that called +Dataset.from_generator(). The body of `generator` will not be +serialized in a `GraphDef`, and you should not use this method if you +need to serialize your model and restore it in a different environment. + +Note: If `generator` depends on mutable global variables or other external +state, be aware that the runtime may invoke `generator` multiple times +(in order to support repeating the `Dataset`) and at any time +between the call to Dataset.from_generator() and the production of the +first element from the generator. Mutating global variables or external +state can cause undefined behavior, and we recommend that you explicitly +cache any external state in `generator` before calling +Dataset.from_generator(). + + + + + + + + + + + + + + + + + + + +
Args
+`generator` + +A callable object that returns an object that supports the +`iter()` protocol. If `args` is not specified, `generator` must take no +arguments; otherwise it must take as many arguments as there are values +in `args`. +
+`output_types` + +A nested structure of tf.DType objects corresponding to +each component of an element yielded by `generator`. +
+`output_shapes` + +(Optional.) A nested structure of tf.TensorShape objects +corresponding to each component of an element yielded by `generator`. +
+`args` + +(Optional.) A tuple of tf.Tensor objects that will be evaluated +and passed to `generator` as NumPy-array arguments. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
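+
+The `args` parameter can be used to parameterize the generator; each value in
+`args` is evaluated and passed to `generator` as a NumPy array. A minimal
+sketch (the generator and its argument are illustrative):
+
+```python
+import tensorflow as tf
+
+def gen(limit):
+    # `limit` arrives as a NumPy scalar because it is passed through `args`.
+    for i in range(limit):
+        yield i
+
+dataset = tf.data.Dataset.from_generator(gen, output_types=tf.int64, args=(5,))
+print(list(dataset.as_numpy_iterator()))  # [0, 1, 2, 3, 4]
+```
+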

from_tensor_slices

+ +View source + + + +Creates a `Dataset` whose elements are slices of the given tensors. + +The given tensors are sliced along their first dimension. This operation +preserves the structure of the input tensors, removing the first dimension +of each tensor and using it as the dataset dimension. All input tensors +must have the same size in their first dimensions. + +``` +>>> # Slicing a 1D tensor produces scalar tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Slicing a 2D tensor produces 1D tensor elements. +>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] +``` + +``` +>>> # Slicing a tuple of 1D tensors produces tuple elements containing +>>> # scalar tensors. +>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) +>>> list(dataset.as_numpy_iterator()) +[(1, 3, 5), (2, 4, 6)] +``` + +``` +>>> # Dictionary structure is also preserved. +>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) +>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, +... {'a': 2, 'b': 4}] +True +``` + +``` +>>> # Two tensors can be combined into one Dataset object. +>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor +>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor +>>> dataset = Dataset.from_tensor_slices((features, labels)) +>>> # Both the features and the labels tensors can be converted +>>> # to a Dataset object separately and combined after. +>>> features_dataset = Dataset.from_tensor_slices(features) +>>> labels_dataset = Dataset.from_tensor_slices(labels) +>>> dataset = Dataset.zip((features_dataset, labels_dataset)) +>>> # A batched feature and label set can be converted to a Dataset +>>> # in similar fashion. +>>> batched_features = tf.constant([[[1, 3], [2, 3]], +... [[2, 1], [1, 2]], +... [[3, 3], [3, 2]]], shape=(3, 2, 2)) +>>> batched_labels = tf.constant([['A', 'A'], +... ['B', 'B'], +... ['A', 'B']], shape=(3, 2, 1)) +>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) +>>> for element in dataset.as_numpy_iterator(): +... print(element) +(array([[1, 3], + [2, 3]], dtype=int32), array([[b'A'], + [b'A']], dtype=object)) +(array([[2, 1], + [1, 2]], dtype=int32), array([[b'B'], + [b'B']], dtype=object)) +(array([[3, 3], + [3, 2]], dtype=int32), array([[b'A'], + [b'B']], dtype=object)) +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this guide]( +https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element, with each component having the same size in +the first dimension. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

from_tensors

+ +View source + + + +Creates a `Dataset` with a single element, comprising the given tensors. + +`from_tensors` produces a dataset containing only a single element. To slice +the input tensor into multiple elements, use `from_tensor_slices` instead. + +``` +>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3]) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32)] +>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) +>>> list(dataset.as_numpy_iterator()) +[(array([1, 2, 3], dtype=int32), b'A')] +``` + +``` +>>> # You can use `from_tensors` to produce a dataset which repeats +>>> # the same example many times. +>>> example = tf.constant([1,2,3]) +>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2) +>>> list(dataset.as_numpy_iterator()) +[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] +``` + +Note that if `tensors` contains a NumPy array, and eager execution is not +enabled, the values will be embedded in the graph as one or more +tf.constant operations. For large datasets (> 1 GB), this can waste +memory and run into byte limits of graph serialization. If `tensors` +contains one or more large NumPy arrays, consider the alternative described +in [this +guide](https://tensorflow.org/guide/data#consuming_numpy_arrays). + + + + + + + + + + +
Args
+`tensors` + +A dataset element. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

interleave

+ +View source + + + +Maps `map_func` across this dataset, and interleaves the results. + +For example, you can use Dataset.interleave() to process many input files +concurrently: + +``` +>>> # Preprocess 4 files concurrently, and interleave blocks of 16 records +>>> # from each file. +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> def parse_fn(filename): +... return tf.data.Dataset.range(10) +>>> dataset = dataset.interleave(lambda x: +... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), +... cycle_length=4, block_length=16) +``` + +The `cycle_length` and `block_length` arguments control the order in which +elements are produced. `cycle_length` controls the number of input elements +that are processed concurrently. If you set `cycle_length` to 1, this +transformation will handle one input element at a time, and will produce +identical results to tf.data.Dataset.flat_map. In general, +this transformation will apply `map_func` to `cycle_length` input elements, +open iterators on the returned `Dataset` objects, and cycle through them +producing `block_length` consecutive elements from each iterator, and +consuming the next input element each time it reaches the end of an +iterator. + +#### For example: + + + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> # NOTE: New lines indicate "block" boundaries. +>>> dataset = dataset.interleave( +... lambda x: Dataset.from_tensors(x).repeat(6), +... cycle_length=2, block_length=4) +>>> list(dataset.as_numpy_iterator()) +[1, 1, 1, 1, + 2, 2, 2, 2, + 1, 1, + 2, 2, + 3, 3, 3, 3, + 4, 4, 4, 4, + 3, 3, + 4, 4, + 5, 5, 5, 5, + 5, 5] +``` + +Note: The order of elements yielded by this transformation is +deterministic, as long as `map_func` is a pure function and +`deterministic=True`. If `map_func` contains any stateful operations, the +order in which that state is accessed is undefined. + +Performance can often be improved by setting `num_parallel_calls` so that +`interleave` will use multiple threads to fetch elements. If determinism +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt", +... "/var/data/file3.txt", "/var/data/file4.txt"] +>>> dataset = tf.data.Dataset.from_tensor_slices(filenames) +>>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), +... cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to a dataset. +
+`cycle_length` + +(Optional.) The number of input elements that will be +processed concurrently. If not specified, the value will be derived from +the number of available CPU cores. If the `num_parallel_calls` argument +is set to tf.data.experimental.AUTOTUNE, the `cycle_length` argument +also identifies the maximum degree of parallelism. +
+`block_length` + +(Optional.) The number of consecutive elements to produce +from each input element before cycling to another input element. +
+`num_parallel_calls` + +(Optional.) If specified, the implementation creates a +threadpool, which is used to fetch inputs from cycle elements +asynchronously and in parallel. The default behavior is to fetch inputs +from cycle elements synchronously with no parallelism. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

list_files

+ +View source + + + +A dataset of all files matching one or more glob patterns. + +The `file_pattern` argument should be a small number of glob patterns. +If your filenames have already been globbed, use +Dataset.from_tensor_slices(filenames) instead, as re-globbing every +filename with `list_files` may result in poor performance with remote +storage systems. + +Note: The default behavior of this method is to return filenames in +a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False` +to get results in a deterministic order. + +#### Example: + +If we had the following files on our filesystem: + + - /path/to/dir/a.txt + - /path/to/dir/b.py + - /path/to/dir/c.py + +If we pass "/path/to/dir/*.py" as the directory, the dataset +would produce: + + - /path/to/dir/b.py + - /path/to/dir/c.py + + + + + + + + + + + + + + + + + + +
Args
+`file_pattern` + +A string, a list of strings, or a tf.Tensor of string type +(scalar or vector), representing the filename glob (i.e. shell wildcard) +pattern(s) that will be matched. +
+`shuffle` + +(Optional.) If `True`, the file names will be shuffled randomly. +Defaults to `True`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of strings corresponding to file names. +
+ + + +
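+
+A short sketch, assuming a directory of TFRecord files (the glob pattern below
+is hypothetical); `shuffle=False` keeps the file order deterministic:
+
+```python
+import tensorflow as tf
+
+files = tf.data.Dataset.list_files("/path/to/data/*.tfrecord", shuffle=False)
+dataset = files.interleave(tf.data.TFRecordDataset, cycle_length=4)
+```
+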

map

+ +View source + + + +Maps `map_func` across the elements of this dataset. + +This transformation applies `map_func` to each element of this dataset, and +returns a new dataset containing the transformed elements, in the same +order as they appeared in the input. `map_func` can be used to change both +the values and the structure of a dataset's elements. For example, adding 1 +to each element, or projecting a subset of element components. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1) +>>> list(dataset.as_numpy_iterator()) +[2, 3, 4, 5, 6] +``` + +The input signature of `map_func` is determined by the structure of each +element in this dataset. + +``` +>>> dataset = Dataset.range(5) +>>> # `map_func` takes a single argument of type `tf.Tensor` with the same +>>> # shape and dtype. +>>> result = dataset.map(lambda x: x + 1) +``` + +``` +>>> # Each element is a tuple containing two `tf.Tensor` objects. +>>> elements = [(1, "foo"), (2, "bar"), (3, "baz)")] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, (tf.int32, tf.string)) +>>> # `map_func` takes two arguments of type `tf.Tensor`. This function +>>> # projects out just the first component. +>>> result = dataset.map(lambda x_int, y_str: x_int) +>>> list(result.as_numpy_iterator()) +[1, 2, 3] +``` + +``` +>>> # Each element is a dictionary mapping strings to `tf.Tensor` objects. +>>> elements = ([{"a": 1, "b": "foo"}, +... {"a": 2, "b": "bar"}, +... {"a": 3, "b": "baz"}]) +>>> dataset = tf.data.Dataset.from_generator( +... lambda: elements, {"a": tf.int32, "b": tf.string}) +>>> # `map_func` takes a single argument of type `dict` with the same keys +>>> # as the elements. +>>> result = dataset.map(lambda d: str(d["a"]) + d["b"]) +``` + +The value or values returned by `map_func` determine the structure of each +element in the returned dataset. + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> # `map_func` returns two `tf.Tensor` objects. +>>> def g(x): +... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) +>>> result = dataset.map(g) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) +>>> # Python primitives, lists, and NumPy arrays are implicitly converted to +>>> # `tf.Tensor`. +>>> def h(x): +... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) +>>> result = dataset.map(h) +>>> result.element_spec +(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) +>>> # `map_func` can return nested structures. +>>> def i(x): +... return (37.0, [42, 16]), "foo" +>>> result = dataset.map(i) +>>> result.element_spec +((TensorSpec(shape=(), dtype=tf.float32, name=None), + TensorSpec(shape=(2,), dtype=tf.int32, name=None)), + TensorSpec(shape=(), dtype=tf.string, name=None)) +``` + +`map_func` can accept as arguments and return any type of dataset element. + +Note that irrespective of the context in which `map_func` is defined (eager +vs. graph), tf.data traces the function and executes it as a graph. To use +Python code inside of the function you have two options: + +1) Rely on AutoGraph to convert Python code into an equivalent graph +computation. The downside of this approach is that AutoGraph can convert +some but not all Python code. 
+ +2) Use tf.py_function, which allows you to write arbitrary Python code but +will generally result in worse performance than 1). For example: + +``` +>>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) +>>> # transform a string tensor to upper case string using a Python function +>>> def upper_case_fn(t: tf.Tensor): +... return t.numpy().decode('utf-8').upper() +>>> d = d.map(lambda x: tf.py_function(func=upper_case_fn, +... inp=[x], Tout=tf.string)) +>>> list(d.as_numpy_iterator()) +[b'HELLO', b'WORLD'] +``` + +Performance can often be improved by setting `num_parallel_calls` so that +`map` will use multiple threads to process elements. If deterministic order +isn't required, it can also improve performance to set +`deterministic=False`. + +``` +>>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] +>>> dataset = dataset.map(lambda x: x + 1, +... num_parallel_calls=tf.data.experimental.AUTOTUNE, +... deterministic=False) +``` + + + + + + + + + + + + + + + + +
Args
+`map_func` + +A function mapping a dataset element to another dataset element. +
+`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process asynchronously in parallel. +If not specified, elements will be processed sequentially. If the value +tf.data.experimental.AUTOTUNE is used, then the number of parallel +calls is set dynamically based on available CPU. + 
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order. If `deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

options

+ +View source + + + +Returns the options for this dataset and its inputs. + + + + + + + + + + +
Returns
+A tf.data.Options object representing the dataset options. +
+ + + +
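+
+A small sketch showing that options set via `with_options` are reflected in
+the returned tf.data.Options (using the same `experimental_deterministic`
+option shown elsewhere on this page):
+
+```
+>>> ds = tf.data.Dataset.range(3)
+>>> options = tf.data.Options()
+>>> options.experimental_deterministic = False
+>>> ds = ds.with_options(options)
+>>> ds.options().experimental_deterministic
+False
+```
+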

padded_batch

+ +View source + + + +Combines consecutive elements of this dataset into padded batches. + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and this transformation will pad each component to the +respective shape in `padded_shapes`. The `padded_shapes` argument +determines the resulting shape for each dimension of each component in an +output element: + +* If the dimension is a constant, the component will be padded out to that + length in that dimension. +* If the dimension is unknown, the component will be padded out to the + maximum length of all elements in that dimension. + +``` +>>> A = (tf.data.Dataset +... .range(1, 5, output_type=tf.int32) +... .map(lambda x: tf.fill([x], x))) +>>> # Pad to the smallest per-batch size that fits all elements. +>>> B = A.padded_batch(2) +>>> for element in B.as_numpy_iterator(): +... print(element) +[[1 0] + [2 2]] +[[3 3 3 0] + [4 4 4 4]] +>>> # Pad to a fixed size. +>>> C = A.padded_batch(2, padded_shapes=5) +>>> for element in C.as_numpy_iterator(): +... print(element) +[[1 0 0 0 0] + [2 2 0 0 0]] +[[3 3 3 0 0] + [4 4 4 4 0]] +>>> # Pad with a custom value. +>>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1) +>>> for element in D.as_numpy_iterator(): +... print(element) +[[ 1 -1 -1 -1 -1] + [ 2 2 -1 -1 -1]] +[[ 3 3 3 -1 -1] + [ 4 4 4 4 -1]] +>>> # Components of nested elements can be padded independently. +>>> elements = [([1, 2, 3], [10]), +... ([4, 5], [11, 12])] +>>> dataset = tf.data.Dataset.from_generator( +... lambda: iter(elements), (tf.int32, tf.int32)) +>>> # Pad the first component of the tuple to length 4, and the second +>>> # component to the smallest size that fits. +>>> dataset = dataset.padded_batch(2, +... padded_shapes=([4], [None]), +... padding_values=(-1, 100)) +>>> list(dataset.as_numpy_iterator()) +[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), + array([[ 10, 100], [ 11, 12]], dtype=int32))] +``` + +See also tf.data.experimental.dense_to_sparse_batch, which combines +elements that may have different shapes into a tf.SparseTensor. + + + + + + + + + + + + + + + + + + + +
Args
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`padded_shapes` + +(Optional.) A nested structure of tf.TensorShape or +tf.int64 vector tensor-like objects representing the shape to which +the respective component of each input element should be padded prior +to batching. Any unknown dimensions will be padded to the maximum size +of that dimension in each batch. If unset, all dimensions of all +components are padded to the maximum size in the batch. `padded_shapes` +must be set if any component has an unknown rank. +
+`padding_values` + +(Optional.) A nested structure of scalar-shaped +tf.Tensor, representing the padding values to use for the respective +components. None represents that the nested structure should be padded +with default values. Defaults are `0` for numeric types and the empty +string for string types. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If a component has an unknown rank, and the `padded_shapes` +argument is not set. +
+ + + +

prefetch

+ +View source + + + +Creates a `Dataset` that prefetches elements from this dataset. + +Most dataset input pipelines should end with a call to `prefetch`. This +allows later elements to be prepared while the current element is being +processed. This often improves latency and throughput, at the cost of +using additional memory to store prefetched elements. + +Note: Like other `Dataset` methods, prefetch operates on the +elements of the input dataset. It has no concept of examples vs. batches. +`examples.prefetch(2)` will prefetch two elements (2 examples), +while `examples.batch(20).prefetch(2)` will prefetch 2 elements +(2 batches, of 20 examples each). + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.prefetch(2) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
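+
+In practice, `prefetch` is usually the final step of the pipeline, often with
+tf.data.experimental.AUTOTUNE so the buffer size is tuned dynamically. A
+sketch:
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(1000)
+dataset = dataset.shuffle(1000).batch(32)
+dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)  # overlap production and consumption
+```
+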

range

+ +View source + + + +Creates a `Dataset` of a step-separated range of values. + +``` +>>> list(Dataset.range(5).as_numpy_iterator()) +[0, 1, 2, 3, 4] +>>> list(Dataset.range(2, 5).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2).as_numpy_iterator()) +[1, 3] +>>> list(Dataset.range(1, 5, -2).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1).as_numpy_iterator()) +[] +>>> list(Dataset.range(5, 1, -2).as_numpy_iterator()) +[5, 3] +>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) +[2, 3, 4] +>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) +[1.0, 3.0] +``` + + + + + + + + + + + + + +
Args
+`*args` + +follows the same semantics as Python's built-in `range`. +len(args) == 1 -> start = 0, stop = args[0], step = 1. +len(args) == 2 -> start = args[0], stop = args[1], step = 1. +len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. + 
+`**kwargs` + +- output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64). + 
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `RangeDataset`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if len(args) == 0. +
+ + + +

reduce

+ +View source + + + +Reduces the input dataset to a single element. + +The transformation calls `reduce_func` successively on every element of +the input dataset until the dataset is exhausted, aggregating information in +its internal state. The `initial_state` argument is used for the initial +state and the final state is returned as the result. + +``` +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() +5 +>>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() +10 +``` + + + + + + + + + + + + + +
Args
+`initial_state` + +An element representing the initial state of the +transformation. +
+`reduce_func` + +A function that maps `(old_state, input_element)` to +`new_state`. It must take two arguments and return a new state. +The structure of `new_state` must match the structure of +`initial_state`. + 
+ + + + + + + + + + + +
Returns
+A dataset element corresponding to the final state of the transformation. +
+ + + +

repeat

+ +View source + + + +Repeats this dataset so each original value is seen `count` times. + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> dataset = dataset.repeat(3) +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 3, 1, 2, 3] +``` + +Note: If this dataset is a function of global state (e.g. a random number +generator), then different repetitions may produce different elements. + + + + + + + + + + +
Args
+`count` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of times the dataset should be repeated. The default behavior (if +`count` is `None` or `-1`) is for the dataset to be repeated indefinitely. + 
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

shard

+ +View source + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + +`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will +contain all elements of A whose index mod n = i. + +``` +>>> A = tf.data.Dataset.range(10) +>>> B = A.shard(num_shards=3, index=0) +>>> list(B.as_numpy_iterator()) +[0, 3, 6, 9] +>>> C = A.shard(num_shards=3, index=1) +>>> list(C.as_numpy_iterator()) +[1, 4, 7] +>>> D = A.shard(num_shards=3, index=2) +>>> list(D.as_numpy_iterator()) +[2, 5, 8] +``` + +This dataset operator is very useful when running distributed training, as +it allows each worker to read a unique subset. + +When reading a single input file, you can shard elements as follows: + +```python +d = tf.data.TFRecordDataset(input_file) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + +#### Important caveats: + + + +- Be sure to shard before you use any randomizing operator (such as + shuffle). +- Generally it is best if the shard operator is used early in the dataset + pipeline. For example, when reading from a set of TFRecord files, shard + before converting the dataset to input samples. This avoids reading every + file on every worker. The following is an example of an efficient + sharding strategy within a complete pipeline: + +```python +d = Dataset.list_files(pattern) +d = d.shard(num_workers, worker_index) +d = d.repeat(num_epochs) +d = d.shuffle(shuffle_buffer_size) +d = d.interleave(tf.data.TFRecordDataset, + cycle_length=num_readers, block_length=1) +d = d.map(parser_fn, num_parallel_calls=num_map_threads) +``` + + + + + + + + + + + + + +
Args
+`num_shards` + +A tf.int64 scalar tf.Tensor, representing the number of +shards operating in parallel. +
+`index` + +A tf.int64 scalar tf.Tensor, representing the worker index. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + + + + + + + + + + +
Raises
+`InvalidArgumentError` + +if `num_shards` or `index` are illegal values. + +Note: error checking is done on a best-effort basis, and errors aren't +guaranteed to be caught upon dataset creation. (e.g. providing a +placeholder tensor bypasses the early checking, and will instead result +in an error during a session.run call.) + 
+ + + +

shuffle

+ +View source + + + +Randomly shuffles the elements of this dataset. + +This dataset fills a buffer with `buffer_size` elements, then randomly +samples elements from this buffer, replacing the selected elements with new +elements. For perfect shuffling, a buffer size greater than or equal to the +full size of the dataset is required. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + +`reshuffle_each_iteration` controls whether the shuffle order should be +different for each epoch. In TF 1.X, the idiomatic way to create epochs +was through the `repeat` transformation: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> dataset = dataset.repeat(2) # doctest: +SKIP +[1, 0, 2, 1, 0, 2] +``` + +In TF 2.0, tf.data.Dataset objects are Python iterables which makes it +possible to also create epochs through Python iteration: + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 2, 0] +``` + +``` +>>> dataset = tf.data.Dataset.range(3) +>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False) +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP +[1, 0, 2] +``` + + + + + + + + + + + + + + + + +
Args
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the number of +elements from this dataset from which the new dataset will sample. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+`reshuffle_each_iteration` + +(Optional.) A boolean, which if true indicates +that the dataset should be pseudorandomly reshuffled each time it is +iterated over. (Defaults to `True`.) +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

skip

+ +View source + + + +Creates a `Dataset` that skips `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.skip(7) +>>> list(dataset.as_numpy_iterator()) +[7, 8, 9] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be skipped to form the new dataset. +If `count` is greater than the size of this dataset, the new dataset +will contain no elements. If `count` is -1, skips the entire dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

take

+ +View source + + + +Creates a `Dataset` with at most `count` elements from this dataset. + +``` +>>> dataset = tf.data.Dataset.range(10) +>>> dataset = dataset.take(3) +>>> list(dataset.as_numpy_iterator()) +[0, 1, 2] +``` + + + + + + + + + + +
Args
+`count` + +A tf.int64 scalar tf.Tensor, representing the number of +elements of this dataset that should be taken to form the new dataset. +If `count` is -1, or if `count` is greater than the size of this +dataset, the new dataset will contain all elements of this dataset. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +

unbatch

+ +View source + + + +Splits elements of a dataset into multiple elements. + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +``` +>>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] +>>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) +>>> dataset = dataset.unbatch() +>>> list(dataset.as_numpy_iterator()) +[1, 2, 3, 1, 2, 1, 2, 3, 4] +``` + + + + + + + + + +
Returns
+A `Dataset`. +
+ + + +

window

+ +View source + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to `False`). + +The `shift` argument determines the number of input elements by which the +window moves on each iteration. If windows and elements are both numbered +starting at 0, the first element in window `k` will be element `k * shift` +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +#### For example: + + + +``` +>>> dataset = tf.data.Dataset.range(7).window(2) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1] +[2, 3] +[4, 5] +[6] +>>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 1, 2] +[2, 3, 4] +[4, 5, 6] +>>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) +>>> for window in dataset: +... print(list(window.as_numpy_iterator())) +[0, 2, 4] +[1, 3, 5] +[2, 4, 6] +``` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +``` +>>> nested = ([1, 2, 3, 4], [5, 6, 7, 8]) +>>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print(tuple(to_numpy(component) for component in window)) +([1, 2], [5, 6]) +([3, 4], [7, 8]) +``` + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) +>>> dataset = dataset.window(2) +>>> for window in dataset: +... def to_numpy(ds): +... return list(ds.as_numpy_iterator()) +... print({'a': to_numpy(window['a'])}) +{'a': [1, 2]} +{'a': [3, 4]} +``` + + + + + + + + + + + + + + + + + + + +
Args
+`size` + +A tf.int64 scalar tf.Tensor, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +number of input elements by which the window moves in each iteration. +Defaults to `size`. Must be positive. +
+`stride` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +stride of the input elements in the sliding window. Must be positive. +The default value of 1 means "retain every input element". +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last window should be dropped if its size is smaller than +`size`. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` of (nests of) windows -- finite datasets of flat +elements created from the (nests of) input elements. +
+ + + +
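+
+Since each window is itself a small dataset, a common follow-up (sketched
+below, assuming eager execution) is to flatten the windows into dense tensors
+with `flat_map` and `batch`:
+
+```python
+import tensorflow as tf
+
+# Sliding windows of length 3 with shift 1, flattened into dense batches.
+dataset = tf.data.Dataset.range(7).window(3, shift=1, drop_remainder=True)
+dataset = dataset.flat_map(lambda window: window.batch(3))
+
+for batch in dataset:
+    print(batch.numpy())  # [0 1 2], [1 2 3], ..., [4 5 6]
+```
+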

with_options

+ +View source + + + +Returns a new tf.data.Dataset with the given options set. + +The options are "global" in the sense they apply to the entire dataset. +If options are set multiple times, they are merged as long as different +options do not use different non-default values. + +``` +>>> ds = tf.data.Dataset.range(5) +>>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5), +... cycle_length=3, +... num_parallel_calls=3) +>>> options = tf.data.Options() +>>> # This will make the interleave order non-deterministic. +>>> options.experimental_deterministic = False +>>> ds = ds.with_options(options) +``` + + + + + + + + + + +
Args
+`options` + +A tf.data.Options that identifies the options to use. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset` with the given options. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +when an option is set more than once to a non-default value +
+ + + +
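+
+A small illustrative sketch of how the options travel with the rest of the
+pipeline:
+
+```python
+import tensorflow as tf
+
+options = tf.data.Options()
+options.experimental_deterministic = False  # allow out-of-order output
+
+ds = tf.data.Dataset.range(100).map(lambda x: x * 2, num_parallel_calls=4)
+ds = ds.with_options(options)
+
+# Transformations applied afterwards keep the options set upstream.
+ds = ds.batch(10)
+print(ds.options().experimental_deterministic)  # False
+```
+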

zip

+ +View source + + + +Creates a `Dataset` by zipping together the given datasets. + +This method has similar semantics to the built-in `zip()` function +in Python, with the main difference being that the `datasets` +argument can be an arbitrary nested structure of `Dataset` objects. + +``` +>>> # The nested structure of the `datasets` argument determines the +>>> # structure of elements in the resulting dataset. +>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] +>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] +>>> ds = tf.data.Dataset.zip((a, b)) +>>> list(ds.as_numpy_iterator()) +[(1, 4), (2, 5), (3, 6)] +>>> ds = tf.data.Dataset.zip((b, a)) +>>> list(ds.as_numpy_iterator()) +[(4, 1), (5, 2), (6, 3)] +>>> +>>> # The `datasets` argument may contain an arbitrary number of datasets. +>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], +... # [9, 10], +... # [11, 12] ] +>>> ds = tf.data.Dataset.zip((a, b, c)) +>>> for element in ds.as_numpy_iterator(): +... print(element) +(1, 4, array([7, 8])) +(2, 5, array([ 9, 10])) +(3, 6, array([11, 12])) +>>> +>>> # The number of elements in the resulting dataset is the same as +>>> # the size of the smallest dataset in `datasets`. +>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] +>>> ds = tf.data.Dataset.zip((a, d)) +>>> list(ds.as_numpy_iterator()) +[(1, 13), (2, 14)] +``` + + + + + + + + + + +
Args
+`datasets` + +A nested structure of datasets. +
+ + + + + + + + + + + + +
Returns
+`Dataset` + +A `Dataset`. +
+ + + +
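+
+One common use, shown as an illustrative sketch: zipping a feature dataset
+with a label dataset to get the `(features, label)` element structure expected
+by Keras training loops.
+
+```python
+import tensorflow as tf
+
+features = tf.data.Dataset.from_tensor_slices([[1.0], [2.0], [3.0]])
+labels = tf.data.Dataset.from_tensor_slices([0, 1, 0])
+
+pairs = tf.data.Dataset.zip((features, labels)).batch(2)
+for x, y in pairs:
+    print(x.numpy(), y.numpy())
+```
+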

__iter__

+ +View source + + + +Creates an `Iterator` for enumerating the elements of this dataset. + +The returned iterator implements the Python iterator protocol and therefore +can only be used in eager mode. + + + + + + + + + +
Returns
+An `Iterator` over the elements of this dataset. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If not inside of tf.function and not executing eagerly. +
+ + + + + diff --git a/site/en/api_docs/python/tf/data/experimental/StatsAggregator.md b/site/en/api_docs/python/tf/data/experimental/StatsAggregator.md new file mode 100644 index 00000000000..608b031bee2 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/StatsAggregator.md @@ -0,0 +1,61 @@ +description: A stateful resource that aggregates statistics from one or more iterators. + +
+ + + +
+ +# tf.data.experimental.StatsAggregator + + + + + + + + + +A stateful resource that aggregates statistics from one or more iterators. + + + + + + + +To record statistics, use one of the custom transformation functions defined +in this module when defining your tf.data.Dataset. All statistics will be +aggregated by the `StatsAggregator` that is associated with a particular +iterator (see below). For example, to record the latency of producing each +element by iterating over a dataset: + +```python +dataset = ... +dataset = dataset.apply(tf.data.experimental.latency_stats("total_bytes")) +``` + +To associate a `StatsAggregator` with a tf.data.Dataset object, use +the following pattern: + +```python +aggregator = tf.data.experimental.StatsAggregator() +dataset = ... + +# Apply `StatsOptions` to associate `dataset` with `aggregator`. +options = tf.data.Options() +options.experimental_stats.aggregator = aggregator +dataset = dataset.with_options(options) +``` + +Note: This interface is experimental and expected to change. In particular, +we expect to add other implementations of `StatsAggregator` that provide +different ways of exporting statistics, and add more types of statistics. + diff --git a/site/en/api_docs/python/tf/data/experimental/StatsOptions.md b/site/en/api_docs/python/tf/data/experimental/StatsOptions.md new file mode 100644 index 00000000000..1ea01cd0b5d --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/StatsOptions.md @@ -0,0 +1,130 @@ +description: Represents options for collecting dataset stats using StatsAggregator. + +
+ + + + + +
+ +# tf.data.experimental.StatsOptions + + + + + + + + + +Represents options for collecting dataset stats using `StatsAggregator`. + + + + + + + + + +You can set the stats options of a dataset through the `experimental_stats` +property of tf.data.Options; the property is an instance of +tf.data.experimental.StatsOptions. For example, to collect latency stats +on all dataset edges, use the following pattern: + +```python +aggregator = tf.data.experimental.StatsAggregator() + +options = tf.data.Options() +options.experimental_stats.aggregator = aggregator +options.experimental_stats.latency_all_edges = True +dataset = dataset.with_options(options) +``` + + + + + + + + + + + + + + + + + + + + + +
+`aggregator` + +Associates the given statistics aggregator with the dataset pipeline. +
+`counter_prefix` + +Prefix for the statistics recorded as counter. +
+`latency_all_edges` + +Whether to add latency measurements on all edges. Defaults to False. +
+`prefix` + +Prefix to prepend all statistics recorded for the input `dataset` with. +
+ + + +## Methods + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/data/experimental/TFRecordWriter.md b/site/en/api_docs/python/tf/data/experimental/TFRecordWriter.md new file mode 100644 index 00000000000..ce6d4479c3a --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/TFRecordWriter.md @@ -0,0 +1,164 @@ +description: Writes a dataset to a TFRecord file. + +
+ + + + +
+ +# tf.data.experimental.TFRecordWriter + + + + + + + + + +Writes a dataset to a TFRecord file. + + + + + + + + + +The elements of the dataset must be scalar strings. To serialize dataset +elements as strings, you can use the tf.io.serialize_tensor function. + +```python +dataset = tf.data.Dataset.range(3) +dataset = dataset.map(tf.io.serialize_tensor) +writer = tf.data.experimental.TFRecordWriter("/path/to/file.tfrecord") +writer.write(dataset) +``` + +To read back the elements, use `TFRecordDataset`. + +```python +dataset = tf.data.TFRecordDataset("/path/to/file.tfrecord") +dataset = dataset.map(lambda x: tf.io.parse_tensor(x, tf.int64)) +``` + +To shard a `dataset` across multiple TFRecord files: + +```python +dataset = ... # dataset to be written + +def reduce_func(key, dataset): + filename = tf.strings.join([PATH_PREFIX, tf.strings.as_string(key)]) + writer = tf.data.experimental.TFRecordWriter(filename) + writer.write(dataset.map(lambda _, x: x)) + return tf.data.Dataset.from_tensors(filename) + +dataset = dataset.enumerate() +dataset = dataset.apply(tf.data.experimental.group_by_window( + lambda i, _: i % NUM_SHARDS, reduce_func, tf.int64.max +)) +``` + + + + + + + + + + + + + +
+`filename` + +a string path indicating where to write the TFRecord data. +
+`compression_type` + +(Optional.) a string indicating what type of compression +to use when writing the file. See `tf.io.TFRecordCompressionType` for +what types of compression are available. Defaults to `None`. +
+ + + +## Methods + +

write

+ +View source + + + +Writes a dataset to a TFRecord file. + +An operation that writes the content of the specified dataset to the file +specified in the constructor. + +If the file exists, it will be overwritten. + + + + + + + + + + +
Args
+`dataset` + +a tf.data.Dataset whose elements are to be written to a file +
+ + + + + + + + + + + +
Returns
+In graph mode, this returns an operation which when executed performs the +write. In eager mode, the write is performed by the method itself and +there is no return value. +
+ + +Raises + TypeError: if `dataset` is not a tf.data.Dataset. + TypeError: if the elements produced by the dataset are not scalar strings. + + + diff --git a/site/en/api_docs/python/tf/data/experimental/ThreadingOptions.md b/site/en/api_docs/python/tf/data/experimental/ThreadingOptions.md new file mode 100644 index 00000000000..4f10b5c2cc7 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/ThreadingOptions.md @@ -0,0 +1,112 @@ +description: Represents options for dataset threading. + +
+ + + + + +
+ +# tf.data.experimental.ThreadingOptions + + + + + + + + + +Represents options for dataset threading. + + + + + + + + + +You can set the threading options of a dataset through the +`experimental_threading` property of tf.data.Options; the property is +an instance of tf.data.experimental.ThreadingOptions. + +```python +options = tf.data.Options() +options.experimental_threading.private_threadpool_size = 10 +dataset = dataset.with_options(options) +``` + + + + + + + + + + + + + + + +
+`max_intra_op_parallelism` + +If set, it overrides the maximum degree of intra-op parallelism. +
+`private_threadpool_size` + +If set, the dataset will use a private threadpool of the given size. +
+ + + +## Methods + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + + + diff --git a/site/en/api_docs/python/tf/data/experimental/assert_cardinality.md b/site/en/api_docs/python/tf/data/experimental/assert_cardinality.md new file mode 100644 index 00000000000..b05d78340d2 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/assert_cardinality.md @@ -0,0 +1,106 @@ +description: Asserts the cardinality of the input dataset. + +
+ + +
+ +# tf.data.experimental.assert_cardinality + + + + + + + + + +Asserts the cardinality of the input dataset. + + + + + + + + + +NOTE: The following assumes that "examples.tfrecord" contains 42 records. + +``` +>>> dataset = tf.data.TFRecordDataset("examples.tfrecord") +>>> cardinality = tf.data.experimental.cardinality(dataset) +>>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy()) +True +>>> dataset = dataset.apply(tf.data.experimental.assert_cardinality(42)) +>>> print(tf.data.experimental.cardinality(dataset).numpy()) +42 +``` + + + + + + + + + + +
+`expected_cardinality` + +The expected cardinality of the input dataset. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ + + + + + + + + + + + +
+`FailedPreconditionError` + +The assertion is checked at runtime (when iterating +the dataset) and an error is raised if the actual and expected cardinality +differ. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/bucket_by_sequence_length.md b/site/en/api_docs/python/tf/data/experimental/bucket_by_sequence_length.md new file mode 100644 index 00000000000..01ffb733fe5 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/bucket_by_sequence_length.md @@ -0,0 +1,165 @@ +description: A transformation that buckets elements in a Dataset by length. + +
+ + +
+ +# tf.data.experimental.bucket_by_sequence_length + + + + + + + + + +A transformation that buckets elements in a `Dataset` by length. + + + + + + + + + +Elements of the `Dataset` are grouped together by length and then are padded +and batched. + +This is useful for sequence tasks in which the elements have variable length. +Grouping together elements that have similar lengths reduces the total +fraction of padding in a batch which increases training step efficiency. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
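+
+A minimal usage sketch (assuming eager execution; the element values are
+arbitrary): bucket variable-length sequences so that similarly sized
+sequences are padded and batched together.
+
+```python
+import tensorflow as tf
+
+elements = [[0], [1, 2, 3, 4], [5, 6, 7], [8, 9, 10, 11, 12, 13]]
+dataset = tf.data.Dataset.from_generator(
+    lambda: elements, tf.int64, output_shapes=[None])
+
+# Sequences with length < 3 go to bucket 0, < 5 to bucket 1, the rest to
+# bucket 2; each bucket is padded and batched with its own batch size.
+dataset = dataset.apply(
+    tf.data.experimental.bucket_by_sequence_length(
+        element_length_func=lambda elem: tf.shape(elem)[0],
+        bucket_boundaries=[3, 5],
+        bucket_batch_sizes=[2, 2, 2]))
+
+for batch in dataset:
+    print(batch.numpy())
+```
+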
+`element_length_func` + +function from element in `Dataset` to tf.int32, +determines the length of the element, which will determine the bucket it +goes into. +
+`bucket_boundaries` + +`list`, upper length boundaries of the buckets. +
+`bucket_batch_sizes` + +`list`, batch size per bucket. Length should be +`len(bucket_boundaries) + 1`. +
+`padded_shapes` + +Nested structure of tf.TensorShape to pass to +tf.data.Dataset.padded_batch. If not provided, will use +`dataset.output_shapes`, which will result in variable length dimensions +being padded out to the maximum length in each batch. +
+`padding_values` + +Values to pad with, passed to +tf.data.Dataset.padded_batch. Defaults to padding with 0. +
+`pad_to_bucket_boundary` + +bool, if `False`, will pad dimensions with unknown +size to maximum length in batch. If `True`, will pad dimensions with +unknown size to bucket boundary minus 1 (i.e., the maximum length in each +bucket), and caller must ensure that the source `Dataset` does not contain +any elements with length longer than `max(bucket_boundaries)`. +
+`no_padding` + +`bool`, indicates whether to pad the batch features (features +need to be either of type tf.SparseTensor or of same shape). +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ + + + + + + + + + + + +
+`ValueError` + +if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/bytes_produced_stats.md b/site/en/api_docs/python/tf/data/experimental/bytes_produced_stats.md new file mode 100644 index 00000000000..6ad762e4e31 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/bytes_produced_stats.md @@ -0,0 +1,79 @@ +description: Records the number of bytes produced by each element of the input dataset. + +
+ + +
+ +# tf.data.experimental.bytes_produced_stats + + + + + + + + + +Records the number of bytes produced by each element of the input dataset. + + + + + + + + + +To consume the statistics, associate a `StatsAggregator` with the output +dataset. + + + + + + + + + + +
+`tag` + +String. All statistics recorded by the returned transformation will +be associated with the given `tag`. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/cardinality.md b/site/en/api_docs/python/tf/data/experimental/cardinality.md new file mode 100644 index 00000000000..c02e77d48c7 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/cardinality.md @@ -0,0 +1,96 @@ +description: Returns the cardinality of dataset, if known. + +
+ + +
+ +# tf.data.experimental.cardinality + + + + + + + + + +Returns the cardinality of `dataset`, if known. + + + + + + + + + +The operation returns the cardinality of `dataset`. The operation may return +tf.data.experimental.INFINITE_CARDINALITY if `dataset` contains an infinite +number of elements or tf.data.experimental.UNKNOWN_CARDINALITY if the +analysis fails to determine the number of elements in `dataset` (e.g. when the +dataset source is a file). + +``` +>>> dataset = tf.data.Dataset.range(42) +>>> print(tf.data.experimental.cardinality(dataset).numpy()) +42 +>>> dataset = dataset.repeat() +>>> cardinality = tf.data.experimental.cardinality(dataset) +>>> print((cardinality == tf.data.experimental.INFINITE_CARDINALITY).numpy()) +True +>>> dataset = dataset.filter(lambda x: True) +>>> cardinality = tf.data.experimental.cardinality(dataset) +>>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy()) +True +``` + + + + + + + + + + +
+`dataset` + +A tf.data.Dataset for which to determine cardinality. +
+ + + + + + + + + + + +
+A scalar tf.int64 `Tensor` representing the cardinality of `dataset`. If +the cardinality is infinite or unknown, the operation returns the named +constant `INFINITE_CARDINALITY` and `UNKNOWN_CARDINALITY` respectively. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/choose_from_datasets.md b/site/en/api_docs/python/tf/data/experimental/choose_from_datasets.md new file mode 100644 index 00000000000..2962a4bc910 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/choose_from_datasets.md @@ -0,0 +1,109 @@ +description: Creates a dataset that deterministically chooses elements from datasets. + +
+ + +
+ +# tf.data.experimental.choose_from_datasets + + + + + + + + + +Creates a dataset that deterministically chooses elements from `datasets`. + + + + + + + +For example, given the following datasets: + +```python +datasets = [tf.data.Dataset.from_tensors("foo").repeat(), + tf.data.Dataset.from_tensors("bar").repeat(), + tf.data.Dataset.from_tensors("baz").repeat()] + +# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`. +choice_dataset = tf.data.Dataset.range(3).repeat(3) + +result = tf.data.experimental.choose_from_datasets(datasets, choice_dataset) +``` + +The elements of `result` will be: + +``` +"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz" +``` + + + + + + + + + + + + + +
+`datasets` + +A list of tf.data.Dataset objects with compatible structure. +
+`choice_dataset` + +A tf.data.Dataset of scalar tf.int64 tensors between +`0` and `len(datasets) - 1`. +
+ + + + + + + + + + + +
+A dataset that interleaves elements from `datasets` according to the values +of `choice_dataset`. +
+ + + + + + + + + + + + +
+`TypeError` + +If the `datasets` or `choice_dataset` arguments have the wrong +type. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/copy_to_device.md b/site/en/api_docs/python/tf/data/experimental/copy_to_device.md new file mode 100644 index 00000000000..2744d76d66a --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/copy_to_device.md @@ -0,0 +1,83 @@ +description: A transformation that copies dataset elements to the given target_device. + +
+ + +
+ +# tf.data.experimental.copy_to_device + + + + + + + + + +A transformation that copies dataset elements to the given `target_device`. + + + + + + + + + + + + + + + + + + + + + + +
+`target_device` + +The name of a device to which elements will be copied. +
+`source_device` + +The original device on which `input_dataset` will be placed. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/dense_to_ragged_batch.md b/site/en/api_docs/python/tf/data/experimental/dense_to_ragged_batch.md new file mode 100644 index 00000000000..929df03a028 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/dense_to_ragged_batch.md @@ -0,0 +1,124 @@ +description: A transformation that batches ragged elements into tf.RaggedTensors. + +
+ + +
+ +# tf.data.experimental.dense_to_ragged_batch + + + + + + + + + +A transformation that batches ragged elements into tf.RaggedTensors. + + + + + + + + + +This transformation combines multiple consecutive elements of the input +dataset into a single element. + +Like tf.data.Dataset.batch, the components of the resulting element will +have an additional outer dimension, which will be `batch_size` (or +`N % batch_size` for the last element if `batch_size` does not divide the +number of input elements `N` evenly and `drop_remainder` is `False`). If +your program depends on the batches having the same outer dimension, you +should set the `drop_remainder` argument to `True` to prevent the smaller +batch from being produced. + +Unlike tf.data.Dataset.batch, the input elements to be batched may have +different shapes, and each batch will be encoded as a tf.RaggedTensor. +Example: + +``` +>>> dataset = tf.data.Dataset.from_tensor_slices(np.arange(6)) +>>> dataset = dataset.map(lambda x: tf.range(x)) +>>> dataset = dataset.apply( +... tf.data.experimental.dense_to_ragged_batch(batch_size=2)) +>>> for batch in dataset: +... print(batch) + + + +``` + + + + + + + + + + + + + + + + +
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in the case it has fewer than +`batch_size` elements; the default behavior is not to drop the smaller +batch. +
+`row_splits_dtype` + +The dtype that should be used for the `row_splits` of any +new ragged tensors. Existing tf.RaggedTensor elements do not have their +row_splits dtype changed. +
+ + + + + + + + + + + + +
+`Dataset` + +A `Dataset`. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/dense_to_sparse_batch.md b/site/en/api_docs/python/tf/data/experimental/dense_to_sparse_batch.md new file mode 100644 index 00000000000..ac5ccf22914 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/dense_to_sparse_batch.md @@ -0,0 +1,113 @@ +description: A transformation that batches ragged elements into tf.SparseTensors. + +
+ + +
+ +# tf.data.experimental.dense_to_sparse_batch + + + + + + + + + +A transformation that batches ragged elements into tf.SparseTensors. + + + + + + + + + +Like Dataset.padded_batch(), this transformation combines multiple +consecutive elements of the dataset, which might have different +shapes, into a single element. The resulting element has three +components (`indices`, `values`, and `dense_shape`), which +comprise a tf.SparseTensor that represents the same data. The +`row_shape` represents the dense shape of each row in the +resulting tf.SparseTensor, to which the effective batch size is +prepended. For example: + +```python +# NOTE: The following examples use `{ ... }` to represent the +# contents of a dataset. +a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] } + +a.apply(tf.data.experimental.dense_to_sparse_batch( + batch_size=2, row_shape=[6])) == +{ + ([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]], # indices + ['a', 'b', 'c', 'a', 'b'], # values + [2, 6]), # dense_shape + ([[0, 0], [0, 1], [0, 2], [0, 3]], + ['a', 'b', 'c', 'd'], + [1, 6]) +} +``` + + + + + + + + + + + + + +
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`row_shape` + +A tf.TensorShape or tf.int64 vector tensor-like object +representing the equivalent dense shape of a row in the resulting +tf.SparseTensor. Each element of this dataset must have the same rank as +`row_shape`, and must have size less than or equal to `row_shape` in each +dimension. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/enumerate_dataset.md b/site/en/api_docs/python/tf/data/experimental/enumerate_dataset.md new file mode 100644 index 00000000000..ed9b7329291 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/enumerate_dataset.md @@ -0,0 +1,97 @@ +description: A transformation that enumerates the elements of a dataset. (deprecated) + +
+ + +
+ +# tf.data.experimental.enumerate_dataset + + + + + + + + + +A transformation that enumerates the elements of a dataset. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.data.Dataset.enumerate()` instead. + +It is similar to Python's `enumerate`. +For example: + +```python +# NOTE: The following examples use `{ ... }` to represent the +# contents of a dataset. +a = { 1, 2, 3 } +b = { (7, 8), (9, 10) } + +# The enumeration starts from the value of `start` (0 by default). +a.apply(tf.data.experimental.enumerate_dataset(start=5)) +=> { (5, 1), (6, 2), (7, 3) } +b.apply(tf.data.experimental.enumerate_dataset()) +=> { (0, (7, 8)), (1, (9, 10)) } +``` + + + + + + + + + +
+`start` + +A tf.int64 scalar tf.Tensor, representing the start value for +enumeration. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/from_variant.md b/site/en/api_docs/python/tf/data/experimental/from_variant.md new file mode 100644 index 00000000000..b78b48e6ee7 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/from_variant.md @@ -0,0 +1,83 @@ +description: Constructs a dataset from the given variant and structure. + +
+ + +
+ +# tf.data.experimental.from_variant + + + + + + + + + +Constructs a dataset from the given variant and structure. + + + + + + + + + + + + + + + + + + + + + + +
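+
+A round-trip sketch (illustrative): serialize a dataset to its variant tensor
+with `tf.data.experimental.to_variant` and rebuild it with `from_variant`
+plus the structure returned by `tf.data.experimental.get_structure`.
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(3)
+
+variant = tf.data.experimental.to_variant(dataset)
+structure = tf.data.experimental.get_structure(dataset)
+restored = tf.data.experimental.from_variant(variant, structure)
+
+print(list(restored.as_numpy_iterator()))  # [0, 1, 2]
+```
+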
+`variant` + +A scalar tf.variant tensor representing a dataset. +
+`structure` + +A `tf.data.experimental.Structure` object representing the +structure of each element in the dataset. +
+ + + + + + + + + + + +
+A tf.data.Dataset instance. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/get_next_as_optional.md b/site/en/api_docs/python/tf/data/experimental/get_next_as_optional.md new file mode 100644 index 00000000000..21d1b3bf9b3 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/get_next_as_optional.md @@ -0,0 +1,78 @@ +description: Returns an Optional that contains the next value from the iterator. + +
+ + +
+ +# tf.data.experimental.get_next_as_optional + + + + + + + + + +Returns an `Optional` that contains the next value from the iterator. + + + + + + + + + +If `iterator` has reached the end of the sequence, the returned `Optional` +will have no value. + + + + + + + + + + +
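+
+A minimal sketch of draining an iterator without relying on an
+`OutOfRangeError` (assumes eager execution):
+
+```python
+import tensorflow as tf
+
+iterator = iter(tf.data.Dataset.range(3))
+
+while True:
+    optional = tf.data.experimental.get_next_as_optional(iterator)
+    if not optional.has_value():
+        break  # end of sequence
+    print(optional.get_value().numpy())
+```
+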
+`iterator` + +An iterator for an instance of tf.data.Dataset. +
+ + + + + + + + + + + +
+An `Optional` object representing the next value from the iterator (if it +has one) or no value. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/get_single_element.md b/site/en/api_docs/python/tf/data/experimental/get_single_element.md new file mode 100644 index 00000000000..1491ca4ee5f --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/get_single_element.md @@ -0,0 +1,116 @@ +description: Returns the single element in dataset as a nested structure of tensors. + +
+ + +
+ +# tf.data.experimental.get_single_element + + + + + + + + + +Returns the single element in `dataset` as a nested structure of tensors. + + + + + + + + + +This function enables you to use a tf.data.Dataset in a stateless +"tensor-in tensor-out" expression, without creating an iterator. +This can be useful when your preprocessing transformations are expressed +as a `Dataset`, and you want to use the transformation at serving time. + +#### For example: + + + +```python +def preprocessing_fn(input_str): + # ... + return image, label + +input_batch = ... # input batch of BATCH_SIZE elements +dataset = (tf.data.Dataset.from_tensor_slices(input_batch) + .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) + .batch(BATCH_SIZE)) + +image_batch, label_batch = tf.data.experimental.get_single_element(dataset) +``` + + + + + + + + + + +
+`dataset` + +A tf.data.Dataset object containing a single element. +
+ + + + + + + + + + + +
+A nested structure of tf.Tensor objects, corresponding to the single +element of `dataset`. +
+ + + + + + + + + + + + +
+`TypeError` + +if `dataset` is not a tf.data.Dataset object. +InvalidArgumentError (at runtime): if `dataset` does not contain exactly +one element. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/get_structure.md b/site/en/api_docs/python/tf/data/experimental/get_structure.md new file mode 100644 index 00000000000..36ea69ad99f --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/get_structure.md @@ -0,0 +1,94 @@ +description: Returns the type specification of an element of a Dataset or Iterator. + +
+ + +
+ +# tf.data.experimental.get_structure + + + + + + + + + +Returns the type specification of an element of a `Dataset` or `Iterator`. + + + + + + + + + + + + + + + + + + + +
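+
+For illustration, a quick sketch of what the returned specification looks like
+for a simple tuple-structured dataset:
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3], ["a", "b", "c"]))
+print(tf.data.experimental.get_structure(dataset))
+# (TensorSpec(shape=(), dtype=tf.int32, name=None),
+#  TensorSpec(shape=(), dtype=tf.string, name=None))
+```
+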
+`dataset_or_iterator` + +A tf.data.Dataset or `tf.data.Iterator`. +
+ + + + + + + + + + + +
+A nested structure of tf.TypeSpec objects matching the structure of an +element of `dataset_or_iterator` and specifying the type of individual +components. +
+ + + + + + + + + + + + +
+`TypeError` + +If `dataset_or_iterator` is not a `Dataset` or `Iterator` object. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/group_by_reducer.md b/site/en/api_docs/python/tf/data/experimental/group_by_reducer.md new file mode 100644 index 00000000000..4cc6ed5135c --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/group_by_reducer.md @@ -0,0 +1,92 @@ +description: A transformation that groups elements and performs a reduction. + +
+ + +
+ +# tf.data.experimental.group_by_reducer + + + + + + + + + +A transformation that groups elements and performs a reduction. + + + + + + + + + +This transformation maps element of a dataset to a key using `key_func` and +groups the elements by key. The `reducer` is used to process each group; its +`init_func` is used to initialize state for each group when it is created, the +`reduce_func` is used to update the state every time an element is mapped to +the matching group, and the `finalize_func` is used to map the final state to +an output value. + + + + + + + + + + + + + +
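+
+A minimal illustrative sketch: count how many elements fall under each key
+using a `tf.data.experimental.Reducer`.
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(10)
+
+# State: a running count per key; the key passed to init_func is ignored.
+reducer = tf.data.experimental.Reducer(
+    init_func=lambda _: tf.constant(0, dtype=tf.int64),
+    reduce_func=lambda state, _: state + 1,
+    finalize_func=lambda state: state)
+
+counts = dataset.apply(
+    tf.data.experimental.group_by_reducer(
+        key_func=lambda x: x % 2,  # group by parity
+        reducer=reducer))
+
+print(list(counts.as_numpy_iterator()))  # e.g. [5, 5]
+```
+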
+`key_func` + +A function mapping a nested structure of tensors +(having shapes and types defined by `self.output_shapes` and +`self.output_types`) to a scalar tf.int64 tensor. +
+`reducer` + +An instance of `Reducer`, which captures the reduction logic using +the `init_func`, `reduce_func`, and `finalize_func` functions. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/group_by_window.md b/site/en/api_docs/python/tf/data/experimental/group_by_window.md new file mode 100644 index 00000000000..78b604e7ef3 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/group_by_window.md @@ -0,0 +1,132 @@ +description: A transformation that groups windows of elements by key and reduces them. + +
+ + +
+ +# tf.data.experimental.group_by_window + + + + + + + + + +A transformation that groups windows of elements by key and reduces them. + + + + + + + + + +This transformation maps each consecutive element in a dataset to a key +using `key_func` and groups the elements by key. It then applies +`reduce_func` to at most `window_size_func(key)` elements matching the same +key. All except the final window for each key will contain +`window_size_func(key)` elements; the final window may be smaller. + +You may provide either a constant `window_size` or a window size determined by +the key through `window_size_func`. + + + + + + + + + + + + + + + + + + + +
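+
+A minimal illustrative sketch: group elements by parity and emit each group as
+a batch.
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(10)
+
+dataset = dataset.apply(
+    tf.data.experimental.group_by_window(
+        key_func=lambda x: x % 2,
+        reduce_func=lambda key, window: window.batch(5),
+        window_size=5))
+
+for batch in dataset:
+    print(batch.numpy())  # [0 2 4 6 8] and [1 3 5 7 9] (order may vary)
+```
+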
+`key_func` + +A function mapping a nested structure of tensors +(having shapes and types defined by `self.output_shapes` and +`self.output_types`) to a scalar tf.int64 tensor. +
+`reduce_func` + +A function mapping a key and a dataset of up to `window_size` +consecutive elements matching that key to another dataset. +
+`window_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements matching the same key to combine in a single +batch, which will be passed to `reduce_func`. Mutually exclusive with +`window_size_func`. +
+`window_size_func` + +A function mapping a key to a tf.int64 scalar +tf.Tensor, representing the number of consecutive elements matching +the same key to combine in a single batch, which will be passed to +`reduce_func`. Mutually exclusive with `window_size`. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ + + + + + + + + + + + +
+`ValueError` + +if neither or both of {`window_size`, `window_size_func`} are +passed. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/ignore_errors.md b/site/en/api_docs/python/tf/data/experimental/ignore_errors.md new file mode 100644 index 00000000000..54205d9ca7a --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/ignore_errors.md @@ -0,0 +1,72 @@ +description: Creates a Dataset from another Dataset and silently ignores any errors. + +
+ + +
+ +# tf.data.experimental.ignore_errors + + + + + + + + + +Creates a `Dataset` from another `Dataset` and silently ignores any errors. + + + + + + + + + +Use this transformation to produce a dataset that contains the same elements +as the input, but silently drops any elements that caused an error. For +example: + +```python +dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.]) + +# Computing `tf.debugging.check_numerics(1. / 0.)` will raise an +# InvalidArgumentError. +dataset = dataset.map(lambda x: tf.debugging.check_numerics(1. / x, "error")) + +# Using `ignore_errors()` will drop the element that causes an error. +dataset = dataset.apply( +    tf.data.experimental.ignore_errors())  # ==> {1., 0.5, 0.2} +``` + + + + + + + + + +
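+
+For completeness, a small sketch of iterating the resulting dataset eagerly:
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.])
+dataset = dataset.map(lambda x: tf.debugging.check_numerics(1. / x, "error"))
+dataset = dataset.apply(tf.data.experimental.ignore_errors())
+
+# The element produced from 0. is silently dropped.
+print(list(dataset.as_numpy_iterator()))  # [1.0, 0.5, 0.25]
+```
+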
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/latency_stats.md b/site/en/api_docs/python/tf/data/experimental/latency_stats.md new file mode 100644 index 00000000000..cb8fc03ea89 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/latency_stats.md @@ -0,0 +1,79 @@ +description: Records the latency of producing each element of the input dataset. + +
+ + +
+ +# tf.data.experimental.latency_stats + + + + + + + + + +Records the latency of producing each element of the input dataset. + + + + + + + + + +To consume the statistics, associate a `StatsAggregator` with the output +dataset. + + + + + + + + + + +
+`tag` + +String. All statistics recorded by the returned transformation will +be associated with the given `tag`. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/make_batched_features_dataset.md b/site/en/api_docs/python/tf/data/experimental/make_batched_features_dataset.md new file mode 100644 index 00000000000..f34928246fd --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/make_batched_features_dataset.md @@ -0,0 +1,256 @@ +description: Returns a Dataset of feature dictionaries from Example protos. + +
+ + +
+ +# tf.data.experimental.make_batched_features_dataset + + + + + + + + + +Returns a `Dataset` of feature dictionaries from `Example` protos. + + + + + + + +If label_key argument is provided, returns a `Dataset` of tuple +comprising of feature dictionaries and label. + +#### Example: + + + +``` +serialized_examples = [ + features { + feature { key: "age" value { int64_list { value: [ 0 ] } } } + feature { key: "gender" value { bytes_list { value: [ "f" ] } } } + feature { key: "kws" value { bytes_list { value: [ "code", "art" ] } } } + }, + features { + feature { key: "age" value { int64_list { value: [] } } } + feature { key: "gender" value { bytes_list { value: [ "f" ] } } } + feature { key: "kws" value { bytes_list { value: [ "sports" ] } } } + } +] +``` + +#### We can use arguments: + + + +``` +features: { + "age": FixedLenFeature([], dtype=tf.int64, default_value=-1), + "gender": FixedLenFeature([], dtype=tf.string), + "kws": VarLenFeature(dtype=tf.string), +} +``` + +And the expected output is: + +```python +{ + "age": [[0], [-1]], + "gender": [["f"], ["f"]], + "kws": SparseTensor( + indices=[[0, 0], [0, 1], [1, 0]], + values=["code", "art", "sports"] + dense_shape=[2, 2]), +} +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`file_pattern` + +List of files or patterns of file paths containing +`Example` records. See tf.io.gfile.glob for pattern rules. +
+`batch_size` + +An int representing the number of records to combine +in a single batch. +
+`features` + +A `dict` mapping feature keys to `FixedLenFeature` or +`VarLenFeature` values. See tf.io.parse_example. +
+`reader` + +A function or class that can be +called with a `filenames` tensor and (optional) `reader_args` and returns +a `Dataset` of `Example` tensors. Defaults to tf.data.TFRecordDataset. +
+`label_key` + +(Optional) A string corresponding to the key labels are stored in +`tf.Examples`. If provided, it must be one of the `features` key, +otherwise results in `ValueError`. +
+`reader_args` + +Additional arguments to pass to the reader class. +
+`num_epochs` + +Integer specifying the number of times to read through the +dataset. If None, cycles through the dataset forever. Defaults to `None`. +
+`shuffle` + +A boolean, indicates whether the input should be shuffled. Defaults +to `True`. +
+`shuffle_buffer_size` + +Buffer size of the ShuffleDataset. A large capacity +ensures better shuffling but would increase memory usage and startup time. +
+`shuffle_seed` + +Randomization seed to use for shuffling. +
+`prefetch_buffer_size` + +Number of feature batches to prefetch in order to +improve performance. Recommended value is the number of batches consumed +per training step. Defaults to auto-tune. +
+`reader_num_threads` + +Number of threads used to read `Example` records. If >1, +the results will be interleaved. Defaults to `1`. +
+`parser_num_threads` + +Number of threads to use for parsing `Example` tensors +into a dictionary of `Feature` tensors. Defaults to `2`. +
+`sloppy_ordering` + +If `True`, reading performance will be improved at +the cost of non-deterministic ordering. If `False`, the order of elements +produced is deterministic prior to shuffling (elements are still +randomized if `shuffle=True`. Note that if the seed is set, then order +of elements after shuffling is deterministic). Defaults to `False`. +
+`drop_final_batch` + +If `True`, and the batch size does not evenly divide the +input dataset size, the final smaller batch will be dropped. Defaults to +`False`. +
+ + + + + + + + + + + +
+A dataset of `dict` elements, (or a tuple of `dict` elements and label). +Each `dict` maps feature keys to `Tensor` or `SparseTensor` objects. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `reader` is of the wrong type. +
+`ValueError` + +If `label_key` is not one of the `features` keys. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/make_csv_dataset.md b/site/en/api_docs/python/tf/data/experimental/make_csv_dataset.md new file mode 100644 index 00000000000..aaa5d864918 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/make_csv_dataset.md @@ -0,0 +1,275 @@ +description: Reads CSV files into a dataset. + +
+ + +
+ +# tf.data.experimental.make_csv_dataset + + + + + + + + + +Reads CSV files into a dataset. + + + + + + + +Reads CSV files into a dataset, where each element is a (features, labels) +tuple that corresponds to a batch of CSV rows. The features dictionary +maps feature column names to `Tensor`s containing the corresponding +feature data, and labels is a `Tensor` containing the batch's label data. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
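+
+A minimal usage sketch; `"train.csv"` and the `"survived"` label column are
+hypothetical names, not files shipped with TensorFlow.
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.experimental.make_csv_dataset(
+    "train.csv",            # hypothetical CSV file with a header row
+    batch_size=32,
+    label_name="survived",  # hypothetical label column
+    num_epochs=1,
+    shuffle=True)
+
+for features, labels in dataset.take(1):
+    for name, column in features.items():
+        print(name, column.numpy())
+    print("label:", labels.numpy())
+```
+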
+`file_pattern` + +List of files or patterns of file paths containing CSV +records. See tf.io.gfile.glob for pattern rules. +
+`batch_size` + +An int representing the number of records to combine +in a single batch. +
+`column_names` + +An optional list of strings that corresponds to the CSV +columns, in order. One per column of the input record. If this is not +provided, infers the column names from the first row of the records. +These names will be the keys of the features dict of each dataset element. +
+`column_defaults` + +An optional list of default values for the CSV fields. One +item per selected column of the input record. Each item in the list is +either a valid CSV dtype (float32, float64, int32, int64, or string), or a +`Tensor` with one of the aforementioned types. The tensor can either be +a scalar default value (if the column is optional), or an empty tensor (if +the column is required). If a dtype is provided instead of a tensor, the +column is also treated as required. If this list is not provided, tries +to infer types based on reading the first num_rows_for_inference rows of +files specified, and assumes all columns are optional, defaulting to `0` +for numeric values and `""` for string values. If both this and +`select_columns` are specified, these must have the same lengths, and +`column_defaults` is assumed to be sorted in order of increasing column +index. +
+`label_name` + +An optional string corresponding to the label column. If +provided, the data for this column is returned as a separate `Tensor` from +the features dictionary, so that the dataset complies with the format +expected by a `tf.Estimator.train` or `tf.Estimator.evaluate` input +function. +
+`select_columns` + +An optional list of integer indices or string column +names, that specifies a subset of columns of CSV data to select. If +column names are provided, these must correspond to names provided in +`column_names` or inferred from the file header lines. When this argument +is specified, only a subset of CSV columns will be parsed and returned, +corresponding to the columns specified. Using this results in faster +parsing and lower memory usage. If both this and `column_defaults` are +specified, these must have the same lengths, and `column_defaults` is +assumed to be sorted in order of increasing column index. +
+`field_delim` + +An optional `string`. Defaults to `","`. Char delimiter to +separate fields in a record. +
+`use_quote_delim` + +An optional bool. Defaults to `True`. If false, treats +double quotation marks as regular characters inside of the string fields. +
+`na_value` + +Additional string to recognize as NA/NaN. +
+`header` + +A bool that indicates whether the first rows of provided CSV files +correspond to header lines with column names, and should not be included +in the data. +
+`num_epochs` + +An int specifying the number of times this dataset is repeated. +If None, cycles through the dataset forever. +
+`shuffle` + +A bool that indicates whether the input should be shuffled. +
+`shuffle_buffer_size` + +Buffer size to use for shuffling. A large buffer size +ensures better shuffling, but increases memory usage and startup time. +
+`shuffle_seed` + +Randomization seed to use for shuffling. +
+`prefetch_buffer_size` + +An int specifying the number of feature +batches to prefetch for performance improvement. Recommended value is the +number of batches consumed per training step. Defaults to auto-tune. +
+`num_parallel_reads` + +Number of threads used to read CSV records from files. +If >1, the results will be interleaved. Defaults to `1`. +
+`sloppy` + +If `True`, reading performance will be improved at +the cost of non-deterministic ordering. If `False`, the order of elements +produced is deterministic prior to shuffling (elements are still +randomized if `shuffle=True`. Note that if the seed is set, then order +of elements after shuffling is deterministic). Defaults to `False`. +
+`num_rows_for_inference` + +Number of rows of a file to use for type inference +if record_defaults is not provided. If None, reads all the rows of all +the files. Defaults to 100. +
+`compression_type` + +(Optional.) A tf.string scalar evaluating to one of +`""` (no compression), `"ZLIB"`, or `"GZIP"`. Defaults to no compression. +
+`ignore_errors` + +(Optional.) If `True`, ignores errors with CSV file parsing, +such as malformed data or empty lines, and moves on to the next valid +CSV record. Otherwise, the dataset raises an error and stops processing +when encountering any invalid records. Defaults to `False`. +
+ + + + + + + + + + + +
+A dataset, where each element is a (features, labels) tuple that corresponds +to a batch of `batch_size` CSV rows. The features dictionary maps feature +column names to `Tensor`s containing the corresponding column data, and +labels is a `Tensor` containing the column data for the label column +specified by `label_name`. +
+ + + + + + + + + + + + +
+`ValueError` + +If any of the arguments is malformed. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/make_saveable_from_iterator.md b/site/en/api_docs/python/tf/data/experimental/make_saveable_from_iterator.md new file mode 100644 index 00000000000..5d48efdf2a0 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/make_saveable_from_iterator.md @@ -0,0 +1,144 @@ +description: Returns a SaveableObject for saving/restoring iterator state using Saver. + +
+ + +
+ +# tf.data.experimental.make_saveable_from_iterator + + + + + + + + + +Returns a SaveableObject for saving/restoring iterator state using Saver. + + + + + + + + + + + + + + + + + + + + + + +
+`iterator` + +Iterator. +
+`external_state_policy` + +A string that identifies how to handle input +pipelines that depend on external state. Possible values are +'ignore': The external state is silently ignored. +'warn': The external state is ignored, logging a warning. +'fail': The operation fails upon encountering external state. +By default we set it to 'fail'. +
+ + + + + + + + + + + +
+A SaveableObject for saving/restoring iterator state using Saver. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If iterator does not support checkpointing. +
+`ValueError` + +If `external_state_policy` is not one of 'warn', 'ignore' or +'fail'. +
+ + + +#### For example: + + + +```python +with tf.Graph().as_default(): + ds = tf.data.Dataset.range(10) + iterator = ds.make_initializable_iterator() + # Build the iterator SaveableObject. + saveable_obj = tf.data.experimental.make_saveable_from_iterator(iterator) + # Add the SaveableObject to the SAVEABLE_OBJECTS collection so + # it can be automatically saved using Saver. + tf.compat.v1.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, saveable_obj) + saver = tf.compat.v1.train.Saver() + + while continue_training: + ... Perform training ... + if should_save_checkpoint: + saver.save() +``` + +Note: When restoring the iterator, the existing iterator state is completely +discarded. This means that any changes you may have made to the Dataset +graph will be discarded as well! This includes the new Dataset graph +that you may have built during validation. So, while running validation, +make sure to run the initializer for the validation input pipeline after +restoring the checkpoint. + +Note: Not all iterators support checkpointing yet. Attempting to save the +state of an unsupported iterator will throw an error. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/data/experimental/map_and_batch.md b/site/en/api_docs/python/tf/data/experimental/map_and_batch.md new file mode 100644 index 00000000000..60521bba7f1 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/map_and_batch.md @@ -0,0 +1,145 @@ +description: Fused implementation of map and batch. (deprecated) + +
+ + +
+ +# tf.data.experimental.map_and_batch + + + + + + + + + +Fused implementation of `map` and `batch`. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.map(map_func, num_parallel_calls) followed by tf.data.Dataset.batch(batch_size, drop_remainder). Static tf.data optimizations will take care of using the fused implementation. + +Maps `map_func` across `batch_size` consecutive elements of this dataset +and then combines them into a batch. Functionally, it is equivalent to `map` +followed by `batch`. However, by fusing the two transformations together, the +implementation can be more efficient. Surfacing this transformation in the API +is temporary. Once automatic input pipeline optimization is implemented, +the fusing of `map` and `batch` will happen automatically and this API will be +deprecated. + + + + + + + + + + + + + + + + + + + + + + +
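+
+Since this transformation is deprecated, a sketch of the recommended
+replacement (per the deprecation note above) may be more useful than a usage
+example:
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(100)
+
+# Instead of tf.data.experimental.map_and_batch(...), use map + batch;
+# tf.data static optimizations fuse them automatically.
+dataset = dataset.map(lambda x: x * 2,
+                      num_parallel_calls=tf.data.experimental.AUTOTUNE)
+dataset = dataset.batch(10, drop_remainder=False)
+```
+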
+`map_func` + +A function mapping a nested structure of tensors to another +nested structure of tensors. +
+`batch_size` + +A tf.int64 scalar tf.Tensor, representing the number of +consecutive elements of this dataset to combine in a single batch. +
+`num_parallel_batches` + +(Optional.) A tf.int64 scalar tf.Tensor, +representing the number of batches to create in parallel. On one hand, +higher values can help mitigate the effect of stragglers. On the other +hand, higher values can increase contention if CPU is scarce. +
+`drop_remainder` + +(Optional.) A tf.bool scalar tf.Tensor, representing +whether the last batch should be dropped in case its size is smaller than +desired; the default behavior is not to drop the smaller batch. +
+`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of elements to process in parallel. If not +specified, `batch_size * num_parallel_batches` elements will be processed +in parallel. If the value tf.data.experimental.AUTOTUNE is used, then +the number of parallel calls is set dynamically based on available CPU. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ + + + + + + + + + + + +
+`ValueError` + +If both `num_parallel_batches` and `num_parallel_calls` are +specified. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/parallel_interleave.md b/site/en/api_docs/python/tf/data/experimental/parallel_interleave.md new file mode 100644 index 00000000000..0f06ed53d27 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/parallel_interleave.md @@ -0,0 +1,148 @@ +description: A parallel version of the Dataset.interleave() transformation. (deprecated) + +
+ + +
+ +# tf.data.experimental.parallel_interleave + + + + + + + + + +A parallel version of the Dataset.interleave() transformation. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE) instead. If sloppy execution is desired, use tf.data.Options.experimental_deterministic. + +`parallel_interleave()` maps `map_func` across its input to produce nested +datasets, and outputs their elements interleaved. Unlike +tf.data.Dataset.interleave, it gets elements from `cycle_length` nested +datasets in parallel, which increases the throughput, especially in the +presence of stragglers. Furthermore, the `sloppy` argument can be used to +improve performance, by relaxing the requirement that the outputs are produced +in a deterministic order, and allowing the implementation to skip over nested +datasets whose elements are not readily available when requested. + +#### Example usage: + + + +```python +# Preprocess 4 files concurrently. +filenames = tf.data.Dataset.list_files("/path/to/data/train*.tfrecords") +dataset = filenames.apply( + tf.data.experimental.parallel_interleave( + lambda filename: tf.data.TFRecordDataset(filename), + cycle_length=4)) +``` + +WARNING: If `sloppy` is `True`, the order of produced elements is not +deterministic. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`map_func` + +A function mapping a nested structure of tensors to a `Dataset`. +
+`cycle_length` + +The number of input `Dataset`s to interleave from in parallel. +
+`block_length` + +The number of consecutive elements to pull from an input +`Dataset` before advancing to the next input `Dataset`. +
+`sloppy` + +A boolean controlling whether determinism should be traded for +performance by allowing elements to be produced out of order. If +`sloppy` is `None`, the tf.data.Options.experimental_deterministic +dataset option (`True` by default) is used to decide whether to enforce a +deterministic order. +
+`buffer_output_elements` + +The number of elements each iterator being +interleaved should buffer (similar to the `.prefetch()` transformation for +each interleaved iterator). +
+`prefetch_input_elements` + +The number of input elements to transform to +iterators before they are needed for interleaving. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/parse_example_dataset.md b/site/en/api_docs/python/tf/data/experimental/parse_example_dataset.md new file mode 100644 index 00000000000..44a0271c71d --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/parse_example_dataset.md @@ -0,0 +1,126 @@ +description: A transformation that parses Example protos into a dict of tensors. + +
+ + +
+ +# tf.data.experimental.parse_example_dataset + + + + + + + + + +A transformation that parses `Example` protos into a `dict` of tensors. + + + + + + + + + +Parses a number of serialized `Example` protos given in `serialized`. We refer +to `serialized` as a batch with `batch_size` many entries of individual +`Example` protos. + +This op parses serialized examples into a dictionary mapping keys to `Tensor`, +`SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to +`VarLenFeature`, `RaggedFeature`, `SparseFeature`, and `FixedLenFeature` +objects. Each `VarLenFeature` and `SparseFeature` is mapped to a +`SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each +`FixedLenFeature` is mapped to a `Tensor`. See tf.io.parse_example for more +details about feature dictionaries. + + + + + + + + + + + + + + + + +
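+
+A minimal sketch (the TFRecord file name and feature keys are hypothetical):
+batch serialized `Example` protos and parse them in one transformation.
+
+```python
+import tensorflow as tf
+
+serialized = tf.data.TFRecordDataset("examples.tfrecord").batch(32)
+
+features = {
+    "age": tf.io.FixedLenFeature([], tf.int64, default_value=-1),
+    "gender": tf.io.FixedLenFeature([], tf.string),
+}
+
+parsed = serialized.apply(
+    tf.data.experimental.parse_example_dataset(features))
+# Each element of `parsed` is a dict of batched "age" and "gender" tensors.
+```
+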
+`features` + +A `dict` mapping feature keys to `FixedLenFeature`, +`VarLenFeature`, `RaggedFeature`, and `SparseFeature` values. +
+`num_parallel_calls` + +(Optional.) A tf.int32 scalar tf.Tensor, +representing the number of parsing processes to call in parallel. +
+`deterministic` + +(Optional.) A boolean controlling whether determinism +should be traded for performance by allowing elements to be produced out +of order if some parsing calls complete faster than others. If +`deterministic` is `None`, the +tf.data.Options.experimental_deterministic dataset option (`True` by +default) is used to decide whether to produce elements +deterministically. +
+ + + + + + + + + + + +
+A dataset transformation function, which can be passed to +tf.data.Dataset.apply. +
+ + + + + + + + + + + + +
+`ValueError` + +if features argument is None. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/prefetch_to_device.md b/site/en/api_docs/python/tf/data/experimental/prefetch_to_device.md new file mode 100644 index 00000000000..95eb233489b --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/prefetch_to_device.md @@ -0,0 +1,86 @@ +description: A transformation that prefetches dataset values to the given device. + +
+ + +
+ +# tf.data.experimental.prefetch_to_device + + + + + + + + + +A transformation that prefetches dataset values to the given `device`. + + + + + + + + + +NOTE: Although the transformation creates a tf.data.Dataset, the +transformation must be the final `Dataset` in the input pipeline. + + + + + + + + + + + + + +
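+
+A minimal sketch (assumes a `/gpu:0` device is available); note that it must
+be the last transformation in the pipeline:
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(10).batch(2)
+dataset = dataset.apply(
+    tf.data.experimental.prefetch_to_device("/gpu:0", buffer_size=1))
+```
+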
+`device` + +A string. The name of a device to which elements will be prefetched. +
+`buffer_size` + +(Optional.) The number of elements to buffer on `device`. +Defaults to an automatically chosen value. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/rejection_resample.md b/site/en/api_docs/python/tf/data/experimental/rejection_resample.md new file mode 100644 index 00000000000..b795d93d979 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/rejection_resample.md @@ -0,0 +1,102 @@ +description: A transformation that resamples a dataset to achieve a target distribution. + +
+ + +
+ +# tf.data.experimental.rejection_resample + + + + + + + + + +A transformation that resamples a dataset to achieve a target distribution. + + + + + + + + + +**NOTE** Resampling is performed via rejection sampling; some fraction +of the input values will be dropped. + + + + + + + + + + + + + + + + + + + +
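+
+A minimal illustrative sketch (assuming eager execution): resample a 90/10
+label distribution toward 50/50.
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.from_tensor_slices([0] * 90 + [1] * 10).repeat()
+
+resampled = dataset.apply(
+    tf.data.experimental.rejection_resample(
+        class_func=lambda label: label,   # the element itself is the class
+        target_dist=[0.5, 0.5]))
+
+# Elements come back as (class, original_element) pairs; keep the data only.
+resampled = resampled.map(lambda _, data: data)
+print(list(resampled.take(10).as_numpy_iterator()))
+```
+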
+`class_func` + +A function mapping an element of the input dataset to a scalar +tf.int32 tensor. Values should be in `[0, num_classes)`. +
+`target_dist` + +A floating point type tensor, shaped `[num_classes]`. +
+`initial_dist` + +(Optional.) A floating point type tensor, shaped +`[num_classes]`. If not provided, the true class distribution is +estimated live in a streaming fashion. +
+`seed` + +(Optional.) Python integer seed for the resampler. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
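+
+#### Example:
+
+A minimal sketch with a made-up, class-imbalanced toy dataset; note that in
+this version the resampled dataset yields `(class, element)` pairs.
+
+```python
+import tensorflow as tf
+
+# Roughly 80% class 0, 20% class 1.
+labels = tf.data.Dataset.from_tensor_slices([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
+labels = labels.repeat()
+
+resampled = labels.apply(
+    tf.data.experimental.rejection_resample(
+        class_func=lambda label: label,
+        target_dist=[0.5, 0.5],
+        seed=42))
+
+# Each output element is a (class, original_element) pair.
+for class_id, label in resampled.take(6):
+  print(class_id.numpy(), label.numpy())
+```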
+ diff --git a/site/en/api_docs/python/tf/data/experimental/sample_from_datasets.md b/site/en/api_docs/python/tf/data/experimental/sample_from_datasets.md new file mode 100644 index 00000000000..aba925c3f72 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/sample_from_datasets.md @@ -0,0 +1,110 @@ +description: Samples elements at random from the datasets in datasets. + +
+ + +
+ +# tf.data.experimental.sample_from_datasets + + + + + + + + + +Samples elements at random from the datasets in `datasets`. + + + + + + + + + + + + + + + + + + + + + + + +
+`datasets` + +A list of tf.data.Dataset objects with compatible structure. +
+`weights` + +(Optional.) A list of `len(datasets)` floating-point values where +`weights[i]` represents the probability with which an element should be +sampled from `datasets[i]`, or a tf.data.Dataset object where each +element is such a list. Defaults to a uniform distribution across +`datasets`. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the +random seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + +
+A dataset that interleaves elements from `datasets` at random, according to +`weights` if provided, otherwise with uniform probability. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If the `datasets` or `weights` arguments have the wrong type. +
+`ValueError` + +If the `weights` argument is specified and does not match the +length of the `datasets` element. +
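+
+#### Example:
+
+A minimal sketch mixing two small datasets; the weights and seed are
+arbitrary illustration values.
+
+```python
+import tensorflow as tf
+
+evens = tf.data.Dataset.range(0, 10, 2)
+odds = tf.data.Dataset.range(1, 10, 2)
+
+# Draw from `evens` about 75% of the time and from `odds` about 25%.
+mixed = tf.data.experimental.sample_from_datasets(
+    [evens, odds], weights=[0.75, 0.25], seed=42)
+
+print([int(x) for x in mixed])
+```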
+ diff --git a/site/en/api_docs/python/tf/data/experimental/scan.md b/site/en/api_docs/python/tf/data/experimental/scan.md new file mode 100644 index 00000000000..a7a86364e2d --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/scan.md @@ -0,0 +1,91 @@ +description: A transformation that scans a function across an input dataset. + +
+ + +
+ +# tf.data.experimental.scan + + + + + + + + + +A transformation that scans a function across an input dataset. + + + + + + + + + +This transformation is a stateful relative of tf.data.Dataset.map. +In addition to mapping `scan_func` across the elements of the input dataset, +`scan()` accumulates one or more state tensors, whose initial values are +`initial_state`. + + + + + + + + + + + + + +
+`initial_state` + +A nested structure of tensors, representing the initial state +of the accumulator. +
+`scan_func` + +A function that maps `(old_state, input_element)` to +`(new_state, output_element)`. It must take two arguments and return a +pair of nested structures of tensors. The `new_state` must match the +structure of `initial_state`. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
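+
+#### Example:
+
+A minimal sketch that computes a running sum, so the accumulated state and
+the emitted element are the same value at each step.
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(1, 6)  # 1, 2, 3, 4, 5 (int64)
+
+cumulative = dataset.apply(
+    tf.data.experimental.scan(
+        initial_state=tf.constant(0, dtype=tf.int64),
+        scan_func=lambda state, x: (state + x, state + x)))
+
+print([int(x) for x in cumulative])  # [1, 3, 6, 10, 15]
+```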
+ diff --git a/site/en/api_docs/python/tf/data/experimental/shuffle_and_repeat.md b/site/en/api_docs/python/tf/data/experimental/shuffle_and_repeat.md new file mode 100644 index 00000000000..ec0fe8d3990 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/shuffle_and_repeat.md @@ -0,0 +1,128 @@ +description: Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated) + +
+ + +
+ +# tf.data.experimental.shuffle_and_repeat + + + + + + + + + +Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.shuffle(buffer_size, seed) followed by tf.data.Dataset.repeat(count). Static tf.data optimizations will take care of using the fused implementation. + +``` +>>> d = tf.data.Dataset.from_tensor_slices([1, 2, 3]) +>>> d = d.apply(tf.data.experimental.shuffle_and_repeat(2, count=2)) +>>> [elem.numpy() for elem in d] # doctest: +SKIP +[2, 3, 1, 1, 3, 2] +``` + +```python +dataset.apply( + tf.data.experimental.shuffle_and_repeat(buffer_size, count, seed)) +``` + +produces the same output as + +```python +dataset.shuffle( + buffer_size, seed=seed, reshuffle_each_iteration=True).repeat(count) +``` + +In each repetition, this dataset fills a buffer with `buffer_size` elements, +then randomly samples elements from this buffer, replacing the selected +elements with new elements. For perfect shuffling, set the buffer size equal +to the full size of the dataset. + +For instance, if your dataset contains 10,000 elements but `buffer_size` is +set to 1,000, then `shuffle` will initially select a random element from +only the first 1,000 elements in the buffer. Once an element is selected, +its space in the buffer is replaced by the next (i.e. 1,001-st) element, +maintaining the 1,000 element buffer. + + + + + + + + + + + + + + + + +
+`buffer_size` + +A tf.int64 scalar tf.Tensor, representing the maximum +number of elements that will be buffered when prefetching. +
+`count` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the number +of times the dataset should be repeated. The default behavior (if `count` +is `None` or `-1`) is for the dataset to be repeated indefinitely. +
+`seed` + +(Optional.) A tf.int64 scalar tf.Tensor, representing the random +seed that will be used to create the distribution. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/take_while.md b/site/en/api_docs/python/tf/data/experimental/take_while.md new file mode 100644 index 00000000000..445f55bd8c5 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/take_while.md @@ -0,0 +1,78 @@ +description: A transformation that stops dataset iteration based on a predicate. + +
+ + +
+ +# tf.data.experimental.take_while + + + + + + + + + +A transformation that stops dataset iteration based on a `predicate`. + + + + + + + + + + + + + + + + + + + +
+`predicate` + +A function that maps a nested structure of tensors (having shapes +and types defined by `self.output_shapes` and `self.output_types`) to a +scalar tf.bool tensor. +
+ + + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
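+
+#### Example:
+
+A minimal sketch; iteration stops at the first element for which the
+predicate returns `False`.
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(10)
+dataset = dataset.apply(
+    tf.data.experimental.take_while(lambda x: x < 5))
+
+print([int(x) for x in dataset])  # [0, 1, 2, 3, 4]
+```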
+ diff --git a/site/en/api_docs/python/tf/data/experimental/to_variant.md b/site/en/api_docs/python/tf/data/experimental/to_variant.md new file mode 100644 index 00000000000..72547b750df --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/to_variant.md @@ -0,0 +1,75 @@ +description: Returns a variant representing the given dataset. + +
+ + +
+ +# tf.data.experimental.to_variant + + + + + + + + + +Returns a variant representing the given dataset. + + + + + + + + + + + + + + + + + + + +
+`dataset` + +A tf.data.Dataset. +
+ + + + + + + + + + + +
+A scalar tf.variant tensor representing the given dataset. +
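+
+#### Example:
+
+A minimal sketch that round-trips a dataset through a variant tensor using
+the companion tf.data.experimental.from_variant, which also needs the
+element structure.
+
+```python
+import tensorflow as tf
+
+dataset = tf.data.Dataset.range(3)
+variant = tf.data.experimental.to_variant(dataset)
+
+restored = tf.data.experimental.from_variant(
+    variant, structure=dataset.element_spec)
+
+print([int(x) for x in restored])  # [0, 1, 2]
+```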
+ diff --git a/site/en/api_docs/python/tf/data/experimental/unbatch.md b/site/en/api_docs/python/tf/data/experimental/unbatch.md new file mode 100644 index 00000000000..ce0a20e89da --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/unbatch.md @@ -0,0 +1,74 @@ +description: Splits elements of a dataset into multiple elements on the batch dimension. (deprecated) + +
+ + +
+ +# tf.data.experimental.unbatch + + + + + + + + + +Splits elements of a dataset into multiple elements on the batch dimension. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use tf.data.Dataset.unbatch(). + +For example, if elements of the dataset are shaped `[B, a0, a1, ...]`, +where `B` may vary for each input element, then for each element in the +dataset, the unbatched dataset will contain `B` consecutive elements +of shape `[a0, a1, ...]`. + +```python +# NOTE: The following example uses `{ ... }` to represent the contents +# of a dataset. +a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] } + +a.unbatch() == { + 'a', 'b', 'c', 'a', 'b', 'a', 'b', 'c', 'd'} +``` + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/data/experimental/unique.md b/site/en/api_docs/python/tf/data/experimental/unique.md new file mode 100644 index 00000000000..7d246aef7b2 --- /dev/null +++ b/site/en/api_docs/python/tf/data/experimental/unique.md @@ -0,0 +1,66 @@ +description: Creates a Dataset from another Dataset, discarding duplicates. + +
+ + +
+ +# tf.data.experimental.unique + + + + + + + + + +Creates a `Dataset` from another `Dataset`, discarding duplicates. + + + + + + + + + +Use this transformation to produce a dataset that contains one instance of +each unique element in the input. For example: + +```python +dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1]) + +# Using `unique()` will drop the duplicate elements. +dataset = dataset.apply(tf.data.experimental.unique()) # ==> { 1, 37, 2 } +``` + + + + + + + + + +
+A `Dataset` transformation function, which can be passed to +tf.data.Dataset.apply. +
+ diff --git a/site/en/api_docs/python/tf/debugging.md b/site/en/api_docs/python/tf/debugging.md new file mode 100644 index 00000000000..684a9305a93 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging.md @@ -0,0 +1,83 @@ +description: Public API for tf.debugging namespace. + +
+ + +
+ +# Module: tf.debugging + + + + + + + + + +Public API for tf.debugging namespace. + + + +## Modules + +[`experimental`](../tf/debugging/experimental.md) module: Public API for tf.debugging.experimental namespace. + +## Functions + +[`Assert(...)`](../tf/debugging/Assert.md): Asserts that the given condition is true. + +[`assert_all_finite(...)`](../tf/debugging/assert_all_finite.md): Assert that the tensor does not contain any NaN's or Inf's. + +[`assert_equal(...)`](../tf/debugging/assert_equal.md): Assert the condition `x == y` holds element-wise. + +[`assert_greater(...)`](../tf/debugging/assert_greater.md): Assert the condition `x > y` holds element-wise. + +[`assert_greater_equal(...)`](../tf/debugging/assert_greater_equal.md): Assert the condition `x >= y` holds element-wise. + +[`assert_integer(...)`](../tf/debugging/assert_integer.md): Assert that `x` is of integer dtype. + +[`assert_less(...)`](../tf/debugging/assert_less.md): Assert the condition `x < y` holds element-wise. + +[`assert_less_equal(...)`](../tf/debugging/assert_less_equal.md): Assert the condition `x <= y` holds element-wise. + +[`assert_near(...)`](../tf/debugging/assert_near.md): Assert the condition `x` and `y` are close element-wise. + +[`assert_negative(...)`](../tf/debugging/assert_negative.md): Assert the condition `x < 0` holds element-wise. + +[`assert_non_negative(...)`](../tf/debugging/assert_non_negative.md): Assert the condition `x >= 0` holds element-wise. + +[`assert_non_positive(...)`](../tf/debugging/assert_non_positive.md): Assert the condition `x <= 0` holds element-wise. + +[`assert_none_equal(...)`](../tf/debugging/assert_none_equal.md): Assert the condition `x != y` holds for all elements. + +[`assert_positive(...)`](../tf/debugging/assert_positive.md): Assert the condition `x > 0` holds element-wise. + +[`assert_proper_iterable(...)`](../tf/debugging/assert_proper_iterable.md): Static assert that values is a "proper" iterable. + +[`assert_rank(...)`](../tf/debugging/assert_rank.md): Assert that `x` has rank equal to `rank`. + +[`assert_rank_at_least(...)`](../tf/debugging/assert_rank_at_least.md): Assert that `x` has rank of at least `rank`. + +[`assert_rank_in(...)`](../tf/debugging/assert_rank_in.md): Assert that `x` has a rank in `ranks`. + +[`assert_same_float_dtype(...)`](../tf/debugging/assert_same_float_dtype.md): Validate and return float type based on `tensors` and `dtype`. + +[`assert_scalar(...)`](../tf/debugging/assert_scalar.md): Asserts that the given `tensor` is a scalar. + +[`assert_shapes(...)`](../tf/debugging/assert_shapes.md): Assert tensor shapes and dimension size relationships between tensors. + +[`assert_type(...)`](../tf/debugging/assert_type.md): Asserts that the given `Tensor` is of the specified type. + +[`check_numerics(...)`](../tf/debugging/check_numerics.md): Checks a tensor for NaN and Inf values. + +[`disable_check_numerics(...)`](../tf/debugging/disable_check_numerics.md): Disable the eager/graph unified numerics checking mechanism. + +[`enable_check_numerics(...)`](../tf/debugging/enable_check_numerics.md): Enable tensor numerics checking in an eager/graph unified fashion. + +[`get_log_device_placement(...)`](../tf/debugging/get_log_device_placement.md): Get if device placements are logged. + +[`is_numeric_tensor(...)`](../tf/debugging/is_numeric_tensor.md): Returns `True` if the elements of `tensor` are numbers. + +[`set_log_device_placement(...)`](../tf/debugging/set_log_device_placement.md): Set if device placements should be logged. 
+ diff --git a/site/en/api_docs/python/tf/debugging/Assert.md b/site/en/api_docs/python/tf/debugging/Assert.md new file mode 100644 index 00000000000..8b06751a909 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/Assert.md @@ -0,0 +1,131 @@ +description: Asserts that the given condition is true. + +
+ + +
+ +# tf.debugging.Assert + + + + + + + + + +Asserts that the given condition is true. + + + + + + + + + +If `condition` evaluates to false, print the list of tensors in `data`. +`summarize` determines how many entries of the tensors to print. + +NOTE: In graph mode, to ensure that Assert executes, one usually attaches +a dependency: + +```python +# Ensure maximum element of x is smaller or equal to 1 +assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x]) +with tf.control_dependencies([assert_op]): + ... code using x ... +``` + + + + + + + + + + + + + + + + + + + +
+`condition` + +The condition to evaluate. +
+`data` + +The tensors to print out when condition is false. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + + +
+`assert_op` + +An `Operation` that, when executed, raises a +tf.errors.InvalidArgumentError if `condition` is not true. +
+ + + + + + + + + +
+ + +**NOTE** The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. + +#### Eager Compatibility +tf.errors.InvalidArgumentError if `condition` is not true + diff --git a/site/en/api_docs/python/tf/debugging/assert_all_finite.md b/site/en/api_docs/python/tf/debugging/assert_all_finite.md new file mode 100644 index 00000000000..41e773696cd --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_all_finite.md @@ -0,0 +1,78 @@ +description: Assert that the tensor does not contain any NaN's or Inf's. + +
+ + +
+ +# tf.debugging.assert_all_finite + + + + + + + + + +Assert that the tensor does not contain any NaN's or Inf's. + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor to check. +
+`message` + +Message to log on failure. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+Same tensor as `x`. +
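+
+#### Example:
+
+A minimal sketch with made-up values: the first check passes and returns `x`
+unchanged, while the second raises because of the NaN element.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0, 3.0])
+checked = tf.debugging.assert_all_finite(x, message="x has NaN/Inf")
+
+bad = tf.constant([1.0, float("nan")])
+try:
+  tf.debugging.assert_all_finite(bad, message="bad has NaN/Inf")
+except tf.errors.InvalidArgumentError as e:
+  print("caught:", e.message)
+```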
+ diff --git a/site/en/api_docs/python/tf/debugging/assert_equal.md b/site/en/api_docs/python/tf/debugging/assert_equal.md new file mode 100644 index 00000000000..d903f88fbd9 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_equal.md @@ -0,0 +1,132 @@ +description: Assert the condition x == y holds element-wise. + +
+ + +
+ +# tf.debugging.assert_equal + + + + + + + + + +Assert the condition `x == y` holds element-wise. + + + + + + + + + +This Op checks that `x[i] == y[i]` holds for every pair of (possibly +broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is +trivially satisfied. + +If `x` and `y` are not equal, `message`, as well as the first `summarize` +entries of `x` and `y` are printed, and `InvalidArgumentError` is raised. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`message` + +A string to prefix to the default message. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). Defaults to "assert_equal". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x == y` is False. This can be +used with tf.control_dependencies inside of tf.functions to block +followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x == y` is False. The check can be performed immediately during eager +execution or if `x` and `y` are statically known. +
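+
+#### Example:
+
+A minimal sketch with made-up values; in eager execution the passing check
+simply returns `None`, and the failing one raises immediately.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1, 2, 3])
+tf.debugging.assert_equal(x, tf.constant([1, 2, 3]))  # Passes.
+
+try:
+  tf.debugging.assert_equal(x, tf.constant([1, 2, 4]), message="mismatch")
+except tf.errors.InvalidArgumentError as e:
+  print("caught:", e.message)
+```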
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_greater.md b/site/en/api_docs/python/tf/debugging/assert_greater.md new file mode 100644 index 00000000000..9b897c203ee --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_greater.md @@ -0,0 +1,133 @@ +description: Assert the condition x > y holds element-wise. + +
+ + +
+ +# tf.debugging.assert_greater + + + + + + + + + +Assert the condition `x > y` holds element-wise. + + + + + + + + + +This Op checks that `x[i] > y[i]` holds for every pair of (possibly +broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is +trivially satisfied. + +If `x` is not greater than `y` element-wise, `message`, as well as the first +`summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is +raised. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`message` + +A string to prefix to the default message. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). Defaults to "assert_greater". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x > y` is False. This can be +used with tf.control_dependencies inside of tf.functions to block +followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x > y` is False. The check can be performed immediately during eager +execution or if `x` and `y` are statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_greater_equal.md b/site/en/api_docs/python/tf/debugging/assert_greater_equal.md new file mode 100644 index 00000000000..6807ff4bf0b --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_greater_equal.md @@ -0,0 +1,126 @@ +description: Assert the condition x >= y holds element-wise. + +
+ + +
+ +# tf.debugging.assert_greater_equal + + + + + + + + + +Assert the condition `x >= y` holds element-wise. + + + + + + + +This Op checks that `x[i] >= y[i]` holds for every pair of (possibly +broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is +trivially satisfied. + +If `x` is not greater or equal to `y` element-wise, `message`, as well as the +first `summarize` entries of `x` and `y` are printed, and +`InvalidArgumentError` is raised. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`message` + +A string to prefix to the default message. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). Defaults to +"assert_greater_equal". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x >= y` is False. This can be +used with tf.control_dependencies inside of tf.functions to block +followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x >= y` is False. The check can be performed immediately during eager +execution or if `x` and `y` are statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_integer.md b/site/en/api_docs/python/tf/debugging/assert_integer.md new file mode 100644 index 00000000000..710d823b4da --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_integer.md @@ -0,0 +1,85 @@ +description: Assert that x is of integer dtype. + +
+ + +
+ +# tf.debugging.assert_integer + + + + + + + + + +Assert that `x` is of integer dtype. + + + + + + + +If `x` has a non-integer type, `message`, as well as the dtype of `x` are +printed, and `InvalidArgumentError` is raised. + +This can always be checked statically, so this method returns nothing. + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_integer". +
+ + + + + + + + + + + + +
+`TypeError` + +If `x.dtype` is not a non-quantized integer type. +
+ diff --git a/site/en/api_docs/python/tf/debugging/assert_less.md b/site/en/api_docs/python/tf/debugging/assert_less.md new file mode 100644 index 00000000000..0595edd794c --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_less.md @@ -0,0 +1,133 @@ +description: Assert the condition x < y holds element-wise. + +
+ + +
+ +# tf.debugging.assert_less + + + + + + + + + +Assert the condition `x < y` holds element-wise. + + + + + + + + + +This Op checks that `x[i] < y[i]` holds for every pair of (possibly +broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is +trivially satisfied. + +If `x` is not less than `y` element-wise, `message`, as well as the first +`summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is +raised. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`message` + +A string to prefix to the default message. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). Defaults to "assert_less". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x < y` is False. +This can be used with tf.control_dependencies inside of tf.functions +to block followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x < y` is False. The check can be performed immediately during eager +execution or if `x` and `y` are statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_less_equal.md b/site/en/api_docs/python/tf/debugging/assert_less_equal.md new file mode 100644 index 00000000000..ebeadfa34ee --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_less_equal.md @@ -0,0 +1,125 @@ +description: Assert the condition x <= y holds element-wise. + +
+ + +
+ +# tf.debugging.assert_less_equal + + + + + + + + + +Assert the condition `x <= y` holds element-wise. + + + + + + + +This Op checks that `x[i] <= y[i]` holds for every pair of (possibly +broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is +trivially satisfied. + +If `x` is not less or equal than `y` element-wise, `message`, as well as the +first `summarize` entries of `x` and `y` are printed, and +`InvalidArgumentError` is raised. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`message` + +A string to prefix to the default message. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). Defaults to "assert_less_equal". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x <= y` is False. This can be +used with tf.control_dependencies inside of tf.functions to block +followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x <= y` is False. The check can be performed immediately during eager +execution or if `x` and `y` are statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_near.md b/site/en/api_docs/python/tf/debugging/assert_near.md new file mode 100644 index 00000000000..eafe4fd88ef --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_near.md @@ -0,0 +1,155 @@ +description: Assert the condition x and y are close element-wise. + +
+ + +
+ +# tf.debugging.assert_near + + + + + + + + + +Assert the condition `x` and `y` are close element-wise. + + + + + + + +This Op checks that `x[i] - y[i] < atol + rtol * tf.abs(y[i])` holds for every +pair of (possibly broadcast) elements of `x` and `y`. If both `x` and `y` are +empty, this is trivially satisfied. + +If any elements of `x` and `y` are not close, `message`, as well as the first +`summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` +is raised. + +The default `atol` and `rtol` is `10 * eps`, where `eps` is the smallest +representable positive number such that `1 + eps != 1`. This is about +`1.2e-6` in `32bit`, `2.22e-15` in `64bit`, and `0.00977` in `16bit`. +See `numpy.finfo`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Float or complex `Tensor`. +
+`y` + +Float or complex `Tensor`, same dtype as and broadcastable to `x`. +
+`rtol` + +`Tensor`. Same `dtype` as, and broadcastable to, `x`. +The relative tolerance. Default is `10 * eps`. +
+`atol` + +`Tensor`. Same `dtype` as, and broadcastable to, `x`. +The absolute tolerance. Default is `10 * eps`. +
+`message` + +A string to prefix to the default message. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). Defaults to "assert_near". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x` and `y` are not close enough. +This can be used with tf.control_dependencies inside of tf.functions +to block followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x != y` is False for any pair of elements in `x` and `y`. The check can +be performed immediately during eager execution or if `x` and `y` are +statically known. +
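+
+#### Example:
+
+A minimal sketch with made-up values: a difference of `1e-7` is inside the
+default float32 tolerance, while `1e-2` is not. The failing case is caught
+broadly here, since the exact exception type can depend on how much is known
+statically.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0])
+tf.debugging.assert_near(x, x + 1e-7)  # Within the default tolerance; passes.
+
+try:
+  tf.debugging.assert_near(x, x + 1e-2, message="too far apart")
+except (tf.errors.InvalidArgumentError, ValueError) as e:
+  print("caught:", type(e).__name__)
+```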
+ + + + +#### Eager Compatibility +returns None + + + +#### Numpy Compatibility +Similar to `numpy.assert_allclose`, except tolerance depends on data type. +This is due to the fact that `TensorFlow` is often used with `32bit`, `64bit`, +and even `16bit` data. + diff --git a/site/en/api_docs/python/tf/debugging/assert_negative.md b/site/en/api_docs/python/tf/debugging/assert_negative.md new file mode 100644 index 00000000000..972889ff1c6 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_negative.md @@ -0,0 +1,116 @@ +description: Assert the condition x < 0 holds element-wise. + +
+ + +
+ +# tf.debugging.assert_negative + + + + + + + + + +Assert the condition `x < 0` holds element-wise. + + + + + + + +This Op checks that `x[i] < 0` holds for every element of `x`. If `x` is +empty, this is trivially satisfied. + +If `x` is not negative everywhere, `message`, as well as the first `summarize` +entries of `x` are printed, and `InvalidArgumentError` is raised. + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`message` + +A string to prefix to the default message. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). Defaults to "assert_negative". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless `x` is all negative. This can be +used with tf.control_dependencies inside of tf.functions to block +followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x[i] < 0` is False. The check can be performed immediately during eager +execution or if `x` is statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_non_negative.md b/site/en/api_docs/python/tf/debugging/assert_non_negative.md new file mode 100644 index 00000000000..6b17cbe0dd0 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_non_negative.md @@ -0,0 +1,117 @@ +description: Assert the condition x >= 0 holds element-wise. + +
+ + +
+ +# tf.debugging.assert_non_negative + + + + + + + + + +Assert the condition `x >= 0` holds element-wise. + + + + + + + +This Op checks that `x[i] >= 0` holds for every element of `x`. If `x` is +empty, this is trivially satisfied. + +If `x` is not >= 0 everywhere, `message`, as well as the first `summarize` +entries of `x` are printed, and `InvalidArgumentError` is raised. + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`message` + +A string to prefix to the default message. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). Defaults to +"assert_non_negative". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless `x` is all non-negative. This can +be used with tf.control_dependencies inside of tf.functions to block +followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x[i] >= 0` is False. The check can be performed immediately during eager +execution or if `x` is statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_non_positive.md b/site/en/api_docs/python/tf/debugging/assert_non_positive.md new file mode 100644 index 00000000000..4e71cbc16dc --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_non_positive.md @@ -0,0 +1,117 @@ +description: Assert the condition x <= 0 holds element-wise. + +
+ + +
+ +# tf.debugging.assert_non_positive + + + + + + + + + +Assert the condition `x <= 0` holds element-wise. + + + + + + + +This Op checks that `x[i] <= 0` holds for every element of `x`. If `x` is +empty, this is trivially satisfied. + +If `x` is not <= 0 everywhere, `message`, as well as the first `summarize` +entries of `x` are printed, and `InvalidArgumentError` is raised. + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`message` + +A string to prefix to the default message. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). Defaults to +"assert_non_positive". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless `x` is all non-positive. This can +be used with tf.control_dependencies inside of tf.functions to block +followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x[i] <= 0` is False. The check can be performed immediately during eager +execution or if `x` is statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_none_equal.md b/site/en/api_docs/python/tf/debugging/assert_none_equal.md new file mode 100644 index 00000000000..d55b8c872a6 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_none_equal.md @@ -0,0 +1,127 @@ +description: Assert the condition x != y holds for all elements. + +
+ + +
+ +# tf.debugging.assert_none_equal + + + + + + + + + +Assert the condition `x != y` holds for all elements. + + + + + + + +This Op checks that `x[i] != y[i]` holds for every pair of (possibly +broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is +trivially satisfied. + +If any elements of `x` and `y` are equal, `message`, as well as the first +`summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` +is raised. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`y` + +Numeric `Tensor`, same dtype as and broadcastable to `x`. +
+`summarize` + +Print this many entries of each tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to +"assert_none_equal". +
+ + + + + + + + + + + +
+Op that raises `InvalidArgumentError` if `x != y` is ever False. This can +be used with tf.control_dependencies inside of tf.functions to block +followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x != y` is False for any pair of elements in `x` and `y`. The check can +be performed immediately during eager execution or if `x` and `y` are +statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_positive.md b/site/en/api_docs/python/tf/debugging/assert_positive.md new file mode 100644 index 00000000000..4a853aa77a9 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_positive.md @@ -0,0 +1,116 @@ +description: Assert the condition x > 0 holds element-wise. + +
+ + +
+ +# tf.debugging.assert_positive + + + + + + + + + +Assert the condition `x > 0` holds element-wise. + + + + + + + +This Op checks that `x[i] > 0` holds for every element of `x`. If `x` is +empty, this is trivially satisfied. + +If `x` is not positive everywhere, `message`, as well as the first `summarize` +entries of `x` are printed, and `InvalidArgumentError` is raised. + + + + + + + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`message` + +A string to prefix to the default message. +
+`summarize` + +Print this many entries of each tensor. +
+`name` + +A name for this operation (optional). Defaults to "assert_positive". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless `x` is all positive. This can be +used with tf.control_dependencies inside of tf.functions to block +followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x[i] > 0` is False. The check can be performed immediately during eager +execution or if `x` is statically known. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_proper_iterable.md b/site/en/api_docs/python/tf/debugging/assert_proper_iterable.md new file mode 100644 index 00000000000..2bb63871e94 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_proper_iterable.md @@ -0,0 +1,81 @@ +description: Static assert that values is a "proper" iterable. + +
+ + +
+ +# tf.debugging.assert_proper_iterable + + + + + + + + + +Static assert that values is a "proper" iterable. + + + + + + + + + +`Ops` that expect iterables of `Tensor` can call this to validate input. +Useful since `Tensor`, `ndarray`, byte/text type are all iterables themselves. + + + + + + + + + + +
+`values` + +Object to be checked. +
+ + + + + + + + + + + + +
+`TypeError` + +If `values` is not iterable or is one of +`Tensor`, `SparseTensor`, `np.array`, tf.compat.bytes_or_text_types. +
+ diff --git a/site/en/api_docs/python/tf/debugging/assert_rank.md b/site/en/api_docs/python/tf/debugging/assert_rank.md new file mode 100644 index 00000000000..fce3284fe5f --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_rank.md @@ -0,0 +1,125 @@ +description: Assert that x has rank equal to rank. + +
+ + +
+ +# tf.debugging.assert_rank + + + + + + + + + +Assert that `x` has rank equal to `rank`. + + + + + + + + + +This Op checks that the rank of `x` is equal to `rank`. + +If `x` has a different rank, `message`, as well as the shape of `x` are +printed, and `InvalidArgumentError` is raised. + + + + + + + + + + + + + + + + + + + +
+`x` + +`Tensor`. +
+`rank` + +Scalar integer `Tensor`. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to +"assert_rank". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless `x` has specified rank. +If static checks determine `x` has correct rank, a `no_op` is returned. +This can be used with tf.control_dependencies inside of tf.functions +to block followup computation until the check has executed. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if the check can be performed immediately and +`x` does not have rank `rank`. The check can be performed immediately +during eager execution or if the shape of `x` is statically known. +
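+
+#### Example:
+
+A minimal sketch with a made-up tensor. Because the shape is fully known
+here, the failing check may surface as a `ValueError` from the static check
+rather than an `InvalidArgumentError`, so both are caught.
+
+```python
+import tensorflow as tf
+
+x = tf.ones([3, 4])
+tf.debugging.assert_rank(x, 2)  # Passes: `x` is rank 2.
+
+try:
+  tf.debugging.assert_rank(x, 3, message="expected a rank-3 tensor")
+except (ValueError, tf.errors.InvalidArgumentError) as e:
+  print("caught:", type(e).__name__)
+```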
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_rank_at_least.md b/site/en/api_docs/python/tf/debugging/assert_rank_at_least.md new file mode 100644 index 00000000000..e64ebe82113 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_rank_at_least.md @@ -0,0 +1,123 @@ +description: Assert that x has rank of at least rank. + +
+ + +
+ +# tf.debugging.assert_rank_at_least + + + + + + + + + +Assert that `x` has rank of at least `rank`. + + + + + + + +This Op checks that the rank of `x` is greater or equal to `rank`. + +If `x` has a rank lower than `rank`, `message`, as well as the shape of `x` +are printed, and `InvalidArgumentError` is raised. + + + + + + + + + + + + + + + + + + + +
+`x` + +`Tensor`. +
+`rank` + +Scalar integer `Tensor`. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to +"assert_rank_at_least". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless `x` has specified rank or higher. +If static checks determine `x` has correct rank, a `no_op` is returned. +This can be used with tf.control_dependencies inside of tf.functions +to block followup computation until the check has executed. +
+ + + + + + + + + + + + + + + +
+`InvalidArgumentError` + +If `x` does not have rank at least `rank`, but the rank +cannot be statically determined. +
+`ValueError` + +If static checks determine `x` has mismatched rank. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_rank_in.md b/site/en/api_docs/python/tf/debugging/assert_rank_in.md new file mode 100644 index 00000000000..e3189ff5178 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_rank_in.md @@ -0,0 +1,122 @@ +description: Assert that x has a rank in ranks. + +
+ + +
+ +# tf.debugging.assert_rank_in + + + + + + + + + +Assert that `x` has a rank in `ranks`. + + + + + + + +This Op checks that the rank of `x` is in `ranks`. + +If `x` has a different rank, `message`, as well as the shape of `x` are +printed, and `InvalidArgumentError` is raised. + + + + + + + + + + + + + + + + + + + +
+`x` + +`Tensor`. +
+`ranks` + +`Iterable` of scalar `Tensor` objects. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_rank_in". +
+ + + + + + + + + + + +
+Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`. +If static checks determine `x` has matching rank, a `no_op` is returned. +This can be used with tf.control_dependencies inside of tf.functions +to block followup computation until the check has executed. +
+ + + + + + + + + + + + + + + +
+`InvalidArgumentError` + +If `x` does not have rank in `ranks`, but the rank cannot +be statically determined. +
+`ValueError` + +If static checks determine `x` has mismatched rank. +
+ + + +#### Eager Compatibility +returns None + diff --git a/site/en/api_docs/python/tf/debugging/assert_same_float_dtype.md b/site/en/api_docs/python/tf/debugging/assert_same_float_dtype.md new file mode 100644 index 00000000000..bf2bfdcaaf3 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_same_float_dtype.md @@ -0,0 +1,106 @@ +description: Validate and return float type based on tensors and dtype. + +
+ + +
+ +# tf.debugging.assert_same_float_dtype + + + + + + + + + +Validate and return float type based on `tensors` and `dtype`. + + + + + + + + + +For ops such as matrix multiplication, inputs and weights must be of the +same float type. This function validates that all `tensors` are the same type, +validates that type is `dtype` (if supplied), and returns the type. Type must +be a floating point type. If neither `tensors` nor `dtype` is supplied, +the function will return dtypes.float32. + + + + + + + + + + + + + +
+`tensors` + +Tensors of input values. Can include `None` elements, which will be +ignored. +
+`dtype` + +Expected type. +
+ + + + + + + + + + + +
+Validated type. +
+ + + + + + + + + + + + +
+`ValueError` + +if neither `tensors` nor `dtype` is supplied, or result is not +float, or the common type of the inputs is not a floating point type. +
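+
+#### Example:
+
+A minimal sketch with made-up tensors: matching float32 inputs return
+tf.float32, while mixing float32 and float64 raises.
+
+```python
+import tensorflow as tf
+
+a = tf.constant([1.0, 2.0], dtype=tf.float32)
+b = tf.constant([3.0], dtype=tf.float32)
+print(tf.debugging.assert_same_float_dtype([a, b]))  # <dtype: 'float32'>
+
+try:
+  tf.debugging.assert_same_float_dtype(
+      [a, tf.constant([4.0], dtype=tf.float64)])
+except ValueError as e:
+  print("caught:", e)
+```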
+ diff --git a/site/en/api_docs/python/tf/debugging/assert_scalar.md b/site/en/api_docs/python/tf/debugging/assert_scalar.md new file mode 100644 index 00000000000..de420b6b7c4 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_scalar.md @@ -0,0 +1,87 @@ +description: Asserts that the given tensor is a scalar. + +
+ + +
+ +# tf.debugging.assert_scalar + + + + + + + + + +Asserts that the given `tensor` is a scalar. + + + + + + + +This function raises `ValueError` unless it can be certain that the given +`tensor` is a scalar. `ValueError` is also raised if the shape of `tensor` is +unknown. + +This is always checked statically, so this method returns nothing. + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation. Defaults to "assert_scalar". +
+ + + + + + + + + + + + +
+`ValueError` + +If the tensor is not scalar (rank 0), or if its shape is +unknown. +
+ diff --git a/site/en/api_docs/python/tf/debugging/assert_shapes.md b/site/en/api_docs/python/tf/debugging/assert_shapes.md new file mode 100644 index 00000000000..d2bafbf83e8 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_shapes.md @@ -0,0 +1,147 @@ +description: Assert tensor shapes and dimension size relationships between tensors. + +
+ + +
+ +# tf.debugging.assert_shapes + + + + + + + + + +Assert tensor shapes and dimension size relationships between tensors. + + + + + + + +This Op checks that a collection of tensors shape relationships +satisfies given constraints. + +#### Example: + + + +``` +>>> n = 10 +>>> q = 3 +>>> d = 7 +>>> x = tf.zeros([n,q]) +>>> y = tf.ones([n,d]) +>>> param = tf.Variable([1.0, 2.0, 3.0]) +>>> scalar = 1.0 +>>> tf.debugging.assert_shapes([ +... (x, ('N', 'Q')), +... (y, ('N', 'D')), +... (param, ('Q',)), +... (scalar, ()), +... ]) +``` + +``` +>>> tf.debugging.assert_shapes([ +... (x, ('N', 'D')), +... (y, ('N', 'D')) +... ]) +Traceback (most recent call last): +... +ValueError: ... +``` + +If `x`, `y`, `param` or `scalar` does not have a shape that satisfies +all specified constraints, `message`, as well as the first `summarize` entries +of the first encountered violating tensor are printed, and +`InvalidArgumentError` is raised. + +Size entries in the specified shapes are checked against other entries by +their __hash__, except: + - a size entry is interpreted as an explicit size if it can be parsed as an + integer primitive. + - a size entry is interpreted as *any* size if it is None or '.'. + +If the first entry of a shape is `...` (type `Ellipsis`) or '*' that indicates +a variable number of outer dimensions of unspecified size, i.e. the constraint +applies to the inner-most dimensions only. + +Scalar tensors and specified shapes of length zero (excluding the 'inner-most' +prefix) are both treated as having a single dimension of size one. + + + + + + + + + + + + + + + + + + + + + + +
+`shapes` + +dictionary with (`Tensor` to shape) items. A shape must be an +iterable. +
+`data` + +The tensors to print out if the condition is False. Defaults to error +message and first few entries of the violating tensor. +
+`summarize` + +Print this many entries of the tensor. +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation (optional). Defaults to "assert_shapes". +
+ + + + + + + + + + + + +
+`ValueError` + +If static checks determine any shape constraint is violated. +
+ diff --git a/site/en/api_docs/python/tf/debugging/assert_type.md b/site/en/api_docs/python/tf/debugging/assert_type.md new file mode 100644 index 00000000000..748e5e39c86 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/assert_type.md @@ -0,0 +1,90 @@ +description: Asserts that the given Tensor is of the specified type. + +
+ + +
+ +# tf.debugging.assert_type + + + + + + + + + +Asserts that the given `Tensor` is of the specified type. + + + + + + + +This can always be checked statically, so this method returns nothing. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`tf_type` + +A TensorFlow type (e.g. tf.float32, tf.int64, tf.bool). +
+`message` + +A string to prefix to the default message. +
+`name` + +A name for this operation. Defaults to "assert_type". +
+ + + + + + + + + + + + +
+`TypeError` + +If the tensor's data type doesn't match `tf_type`. +
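+
+#### Example:
+
+A minimal sketch with made-up tensors; the check is purely static, so the
+passing call returns nothing and the failing one raises `TypeError`.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0])
+tf.debugging.assert_type(x, tf.float32)  # Passes silently.
+
+try:
+  tf.debugging.assert_type(x, tf.int32, message="expected int32")
+except TypeError as e:
+  print("caught:", e)
+```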
+ diff --git a/site/en/api_docs/python/tf/debugging/check_numerics.md b/site/en/api_docs/python/tf/debugging/check_numerics.md new file mode 100644 index 00000000000..fa7d9c065d7 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/check_numerics.md @@ -0,0 +1,86 @@ +description: Checks a tensor for NaN and Inf values. + +
+ + +
+ +# tf.debugging.check_numerics + + + + + + + + + +Checks a tensor for NaN and Inf values. + + + + + + + + + +When run, reports an `InvalidArgument` error if `tensor` has any values +that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is. + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`message` + +A `string`. Prefix of the error message. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
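+
+#### Example:
+
+A minimal sketch with made-up values: a finite tensor passes through
+unchanged, while a tensor containing NaN triggers an error.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0])
+y = tf.debugging.check_numerics(x, message="x check")  # Returns x as-is.
+
+bad = tf.math.log(tf.constant([-1.0]))  # Produces NaN.
+try:
+  tf.debugging.check_numerics(bad, message="bad check")
+except tf.errors.InvalidArgumentError as e:
+  print("caught:", e.message)
+```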
+ diff --git a/site/en/api_docs/python/tf/debugging/disable_check_numerics.md b/site/en/api_docs/python/tf/debugging/disable_check_numerics.md new file mode 100644 index 00000000000..5fe96abc224 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/disable_check_numerics.md @@ -0,0 +1,51 @@ +description: Disable the eager/graph unified numerics checking mechanism. + +
+ + +
+ +# tf.debugging.disable_check_numerics + + + + + + + + + +Disable the eager/graph unified numerics checking mechanism. + + + + + + + + + +This method can be used after a call to tf.debugging.enable_check_numerics() +to disable the numerics-checking mechanism that catches infinity and NaN +values output by ops executed eagerly or in tf.function-compiled graphs. + +This method is idempotent. Calling it multiple times has the same effect +as calling it once. + +This method takes effect only on the thread in which it is called. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/debugging/enable_check_numerics.md b/site/en/api_docs/python/tf/debugging/enable_check_numerics.md new file mode 100644 index 00000000000..303a66e7fe4 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/enable_check_numerics.md @@ -0,0 +1,143 @@ +description: Enable tensor numerics checking in an eager/graph unified fashion. + +
+ + +
+ +# tf.debugging.enable_check_numerics + + + + + + + + + +Enable tensor numerics checking in an eager/graph unified fashion. + + + + + + + + + +The numerics checking mechanism will cause any TensorFlow eager execution or +graph execution to error out as soon as an op's output tensor contains +infinity or NaN. + +This method is idempotent. Calling it multiple times has the same effect +as calling it once. + +This method takes effect only on the thread in which it is called. + +When a op's float-type output tensor contains any Infinity or NaN, an +tf.errors.InvalidArgumentError will be thrown, with an error message that +reveals the following information: + - The type of the op that generated the tensor with bad numerics. + - Data type (dtype) of the tensor. + - Shape of the tensor (to the extent known at the time of eager execution + or graph construction). + - Name of the containing graph (if available). + - (Graph mode only): The stack trace of the intra-graph op's creation, + with a stack-height limit and a path-length limit for visual clarity. + The stack frames that belong to the user's code (as opposed to + tensorflow's internal code) are highlighted with a text arrow ("->"). + - (Eager mode only): How many of the offending tensor's elements are + `Infinity` and `NaN`, respectively. + +Once enabled, the check-numerics mechanism can be disabled by using +tf.debugging.disable_check_numerics(). + +#### Example usage: + + + +1. Catching infinity during the execution of a tf.function graph: + + ```py + import tensorflow as tf + + tf.debugging.enable_check_numerics() + + @tf.function + def square_log_x_plus_1(x): + v = tf.math.log(x + 1) + return tf.math.square(v) + + x = -1.0 + + # When the following line runs, a function graph will be compiled + # from the Python function `log_x_plus_1()`. Due to the + # `enable_check_numerics()` call above, the graph will contain + # numerics checking ops that will run during the function graph's + # execution. The function call generates an -infinity when the Log + # (logarithm) op operates on the output tensor of the Add op. + # The program errors out at this line, printing an error message. + y = log_x_plus_1(x) + z = -y + ``` + +2. Catching NaN during eager execution: + + ```py + import numpy as np + import tensorflow as tf + + tf.debugging.enable_check_numerics() + + x = np.array([[0.0, -1.0], [4.0, 3.0]]) + + # The following line executes the Sqrt op eagerly. Due to the negative + # element in the input array, a NaN is generated. Due to the + # `enable_check_numerics()` call above, the program errors immediately + # at this line, printing an error message. + y = tf.math.sqrt(x) + z = tf.matmul(y, y) + ``` + + + + + + + + + + + + + +
+`stack_height_limit` + +Limit to the height of the printed stack trace. +Applicable only to ops in tf.functions (graphs). +
+`path_length_limit` + +Limit to the file path included in the printed stack +trace. Applicable only to ops in tf.functions (graphs). +
+ diff --git a/site/en/api_docs/python/tf/debugging/experimental.md b/site/en/api_docs/python/tf/debugging/experimental.md new file mode 100644 index 00000000000..42f58b46a0f --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/experimental.md @@ -0,0 +1,27 @@ +description: Public API for tf.debugging.experimental namespace. + +
+ + +
+ +# Module: tf.debugging.experimental + + + + + + + + + +Public API for tf.debugging.experimental namespace. + + + +## Functions + +[`disable_dump_debug_info(...)`](../../tf/debugging/experimental/disable_dump_debug_info.md): Disable the currently-enabled debugging dumping. + +[`enable_dump_debug_info(...)`](../../tf/debugging/experimental/enable_dump_debug_info.md): Enable dumping debugging information from a TensorFlow program. + diff --git a/site/en/api_docs/python/tf/debugging/experimental/disable_dump_debug_info.md b/site/en/api_docs/python/tf/debugging/experimental/disable_dump_debug_info.md new file mode 100644 index 00000000000..9c6d80dd380 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/experimental/disable_dump_debug_info.md @@ -0,0 +1,47 @@ +description: Disable the currently-enabled debugging dumping. + +
+ + +
+ +# tf.debugging.experimental.disable_dump_debug_info + + + + + + + + + +Disable the currently-enabled debugging dumping. + + + + + + + + + +If the `enable_dump_debug_info()` method under the same Python namespace +has been invoked before, calling this method disables it. If no call to +`enable_dump_debug_info()` has been made, calling this method is a no-op. +Calling this method more than once is idempotent. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/debugging/experimental/enable_dump_debug_info.md b/site/en/api_docs/python/tf/debugging/experimental/enable_dump_debug_info.md new file mode 100644 index 00000000000..f2137f4e768 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/experimental/enable_dump_debug_info.md @@ -0,0 +1,177 @@ +description: Enable dumping debugging information from a TensorFlow program. + +
+ + +
+ +# tf.debugging.experimental.enable_dump_debug_info + + + + + + + + + +Enable dumping debugging information from a TensorFlow program. + + + + + + + + + +The debugging information is dumped to a directory on the file system +specified as `dump_root`. + +The dumped debugging information can be ingested by debugger UIs. + +The files in the dump directory contain the following information: + - TensorFlow Function construction (e.g., compilation of Python functions + decorated with @tf.function), the op types, names (if available), context, + the input and output tensors, and the associated stack traces. + - Execution of TensorFlow operations (ops) and Functions and their stack + traces, op types, names (if available) and contexts. In addition, + depending on the value of the `tensor_debug_mode` argument (see Args + section below), the value(s) of the output tensors or more concise + summaries of the tensor values will be dumped. + - A snapshot of Python source files involved in the execution of the + TensorFlow program. + +Once enabled, the dumping can be disabled with the corresponding +`disable_dump_debug_info()` method under the same Python namespace. +Calling this method more than once with the same `dump_root` is idempotent. +Calling this method more than once with different `tensor_debug_mode`s +leads to a `ValueError`. +Calling this method more than once with different `circular_buffer_size`s +leads to a `ValueError`. +Calling this method with a different `dump_root` abolishes the +previously-enabled `dump_root`. + +#### Usage example: + + + +```py +tf.debugging.experimental.enable_dump_debug_info('/tmp/my-tfdbg-dumps') + +# Code to build, train and run your TensorFlow model... +``` + + + + + + + + + + + + + + + + + + + + + + +
+`dump_root` + +The directory path where the dumping information will be written. +
+`tensor_debug_mode` + +Debug mode for tensor values, as a string. +The currently supported options are: +- "NO_TENSOR": (Default) Only traces the execution of ops' output +tensors, while not dumping the value of the ops' output tensors +or any form of concise summary of them. +
+`circular_buffer_size` + +Size of the circular buffers for execution events. +These circular buffers are designed to reduce the overhead of debugging +dumping. They hold the most recent debug events concerning eager execution +of ops and tf.functions and traces of tensor values computed inside +tf.functions. They are written to the file system only when the proper +flushing method is called (see description of return values below). +Expected to be an integer. If <= 0, the circular-buffer behavior will be +disabled, i.e., the execution debug events will be written to the file +writers in the same way as non-execution events such as op creations and +source-file snapshots. +
+`op_regex` + +Dump data only from tensors whose op types match the +regular expression (through Python's `re.match()`). +"Op type" refers to the names of the TensorFlow operations (e.g., +"MatMul", "LogSoftmax"), which may repeat in a TensorFlow +function. It does *not* refer to the names of nodes (e.g., +"dense/MatMul", "dense_1/MatMul_1"), which are unique within a function. +- Example 1: Dump tensor data from only MatMul and Relu ops: +`op_regex="^(MatMul|Relu)$"`. +- Example 2: Dump tensors from all ops *except* Relu: +`op_regex="(?!^Relu$)"`. +This filter operates in a logical AND relation with `tensor_dtypes`. +
+`tensor_dtypes` + +Dump data only from tensors of the specified +dtypes. This optional argument can be in any of the following formats: +- a list or tuple of `DType` objects or strings that can be converted +to `DType` objects via tf.as_dtype(). Examples: +- `tensor_dtypes=[tf.float32, tf.float64]`, +- `tensor_dtypes=["float32", "float64"]`, +- `tensor_dtypes=(tf.int32, tf.bool)`, +- `tensor_dtypes=("int32", "bool")` +- a callable that takes a single `DType` argument and returns a Python +`boolean` indicating whether the dtype is to be included in the data +dumping. Example: +- `tensor_dtypes=lambda dtype: dtype.is_integer`. +This filter operates in a logical AND relation with `op_regex`. +
+ + + + + + + + + + + +
+A DebugEventsWriter instance used by the dumping callback. The caller +may use its flushing methods, including `FlushNonExecutionFiles()` and +`FlushExecutionFiles()`. +
+ diff --git a/site/en/api_docs/python/tf/debugging/get_log_device_placement.md b/site/en/api_docs/python/tf/debugging/get_log_device_placement.md new file mode 100644 index 00000000000..4dbb6d6e838 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/get_log_device_placement.md @@ -0,0 +1,56 @@ +description: Get if device placements are logged. + +
+ + +
+ +# tf.debugging.get_log_device_placement + + + + + + + + + +Get if device placements are logged. + + + + + + + + + + + + + + + + + + +
+
+Whether device placements are logged.
+
+ diff --git a/site/en/api_docs/python/tf/debugging/is_numeric_tensor.md b/site/en/api_docs/python/tf/debugging/is_numeric_tensor.md new file mode 100644 index 00000000000..0bc4cdf8538 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/is_numeric_tensor.md @@ -0,0 +1,61 @@ +description: Returns True if the elements of tensor are numbers. + +
+ + +
+ +# tf.debugging.is_numeric_tensor + + + + + + + + + +Returns `True` if the elements of `tensor` are numbers. + + + + + + + + + +Specifically, returns `True` if the dtype of `tensor` is one of the following: + +* tf.float32 +* tf.float64 +* tf.int8 +* tf.int16 +* tf.int32 +* tf.int64 +* tf.uint8 +* tf.qint8 +* tf.qint32 +* tf.quint8 +* tf.complex64 + +Returns `False` if `tensor` is of a non-numeric type or if `tensor` is not +a tf.Tensor object. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/debugging/set_log_device_placement.md b/site/en/api_docs/python/tf/debugging/set_log_device_placement.md new file mode 100644 index 00000000000..41ff25676f9 --- /dev/null +++ b/site/en/api_docs/python/tf/debugging/set_log_device_placement.md @@ -0,0 +1,61 @@ +description: Set if device placements should be logged. + +
+ + +
+ +# tf.debugging.set_log_device_placement + + + + + + + + + +Set if device placements should be logged. + + + + + + + + + + + + + + + + + + + +
+
+`enabled`
+
+Whether to enable device placement logging.
+
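+
+A minimal usage sketch (the constant below is only there to trigger a
+placement log; any op would do):
+
+```python
+tf.debugging.set_log_device_placement(True)
+assert tf.debugging.get_log_device_placement()
+
+# Ops created from here on log the device they are placed on.
+x = tf.constant([1.0, 2.0]) * 2.0
+```
+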
+ diff --git a/site/en/api_docs/python/tf/device.md b/site/en/api_docs/python/tf/device.md new file mode 100644 index 00000000000..2733a72b4c0 --- /dev/null +++ b/site/en/api_docs/python/tf/device.md @@ -0,0 +1,103 @@ +description: Specifies the device for ops created/executed in this context. + +
+ + +
+ +# tf.device + + + + + + + + + +Specifies the device for ops created/executed in this context. + + + + + + + +This function specifies the device to be used for ops created/executed in a +particular context. Nested contexts will inherit and also create/execute +their ops on the specified device. If a specific device is not required, +consider not using this function so that a device can be automatically +assigned. In general the use of this function is optional. `device_name` can +be fully specified, as in "/job:worker/task:1/device:cpu:0", or partially +specified, containing only a subset of the "/"-separated fields. Any fields +which are specified will override device annotations from outer scopes. + +#### For example: + + + +```python +with tf.device('/job:foo'): + # ops created here have devices with /job:foo + with tf.device('/job:bar/task:0/device:gpu:2'): + # ops created here have the fully specified device above + with tf.device('/device:gpu:1'): + # ops created here have the device '/job:foo/device:gpu:1' +``` + + + + + + + + + + +
+`device_name` + +The device name to use in the context. +
+ + + + + + + + + + + +
+A context manager that specifies the default device to use for newly +created ops. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If a function is passed in. +
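+
+A minimal eager-mode sketch (the device string is illustrative; any
+available device works):
+
+```python
+with tf.device("/device:CPU:0"):
+  a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+  b = tf.matmul(a, a)
+print(b.device)  # e.g. /job:localhost/replica:0/task:0/device:CPU:0
+```
+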
+ diff --git a/site/en/api_docs/python/tf/distribute.md b/site/en/api_docs/python/tf/distribute.md new file mode 100644 index 00000000000..f59c3abfef9 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute.md @@ -0,0 +1,146 @@ +description: Library for running a computation across multiple devices. + +
+ + +
+ +# Module: tf.distribute + + + + + + + + + +Library for running a computation across multiple devices. + + +See the guide for overview and examples: +[TensorFlow v2.x](https://www.tensorflow.org/guide/distributed_training), +[TensorFlow v1.x](https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb). + +The intent of this library is that you can write an algorithm in a stylized way +and it will be usable with a variety of different tf.distribute.Strategy +implementations. Each descendant will implement a different strategy for +distributing the algorithm across multiple devices/machines. Furthermore, these +changes can be hidden inside the specific layers and other library classes that +need special treatment to run in a distributed setting, so that most users' +model definition code can run unchanged. The tf.distribute.Strategy API works +the same way with eager and graph execution. + +*Glossary* + +* _Data parallelism_ is where we run multiple copies of the model + on different slices of the input data. This is in contrast to + _model parallelism_ where we divide up a single copy of a model + across multiple devices. + Note: we only support data parallelism for now, but + hope to add support for model parallelism in the future. +* A _device_ is a CPU or accelerator (e.g. GPUs, TPUs) on some machine that + TensorFlow can run operations on (see e.g. tf.device). You may have multiple + devices on a single machine, or be connected to devices on multiple + machines. Devices used to run computations are called _worker devices_. + Devices used to store variables are _parameter devices_. For some strategies, + such as tf.distribute.MirroredStrategy, the worker and parameter devices + will be the same (see mirrored variables below). For others they will be + different. For example, tf.distribute.experimental.CentralStorageStrategy + puts the variables on a single device (which may be a worker device or may be + the CPU), and tf.distribute.experimental.ParameterServerStrategy puts the + variables on separate machines called parameter servers (see below). +* A _replica_ is one copy of the model, running on one slice of the + input data. Right now each replica is executed on its own + worker device, but once we add support for model parallelism + a replica may span multiple worker devices. +* A _host_ is the CPU device on a machine with worker devices, typically + used for running input pipelines. +* A _worker_ is defined to be the physical machine(s) containing the physical + devices (e.g. GPUs, TPUs) on which the replicated computation is executed. A + worker may contain one or more replicas, but contains at least one + replica. Typically one worker will correspond to one machine, but in the case + of very large models with model parallelism, one worker may span multiple + machines. We typically run one input pipeline per worker, feeding all the + replicas on that worker. +* _Synchronous_, or more commonly _sync_, training is where the updates from + each replica are aggregated together before updating the model variables. This + is in contrast to _asynchronous_, or _async_ training, where each replica + updates the model variables independently. You may also have replicas + partitioned into groups which are in sync within each group but async between + groups. +* _Parameter servers_: These are machines that hold a single copy of + parameters/variables, used by some strategies (right now just + tf.distribute.experimental.ParameterServerStrategy). 
All replicas that want
+  to operate on a variable retrieve it at the beginning of a step and send an
+  update to be applied at the end of the step. These can in principle support
+  either sync or async training, but right now we only have support for async
+  training with parameter servers. Compare to
+  tf.distribute.experimental.CentralStorageStrategy, which puts all variables
+  on a single device on the same machine (and does sync training), and
+  tf.distribute.MirroredStrategy, which mirrors variables to multiple devices
+  (see below).
+* _Mirrored variables_: These are variables that are copied to multiple
+  devices, where we keep the copies in sync by applying the same
+  updates to every copy. They would normally only be used with sync training.
+* Reductions and all-reduce: A _reduction_ is some method of aggregating
+  multiple values into one value, like "sum" or "mean". If a strategy is doing
+  sync training, we will perform a reduction on the gradients to a parameter
+  from all replicas before applying the update. _All-reduce_ is an algorithm for
+  performing a reduction on values from multiple devices and making the result
+  available on all of those devices.
+
+Note that we provide a default version of tf.distribute.Strategy that is
+used when no other strategy is in scope, that provides the same API with
+reasonable default behavior.
+
+## Modules
+
+[`cluster_resolver`](../tf/distribute/cluster_resolver.md) module: Library imports for ClusterResolvers.
+
+[`experimental`](../tf/distribute/experimental.md) module: Experimental Distribution Strategy library.
+
+## Classes
+
+[`class CrossDeviceOps`](../tf/distribute/CrossDeviceOps.md): Base class for cross-device reduction and broadcasting algorithms.
+
+[`class DistributedValues`](../tf/distribute/DistributedValues.md): Base class for representing distributed values.
+
+[`class HierarchicalCopyAllReduce`](../tf/distribute/HierarchicalCopyAllReduce.md): Reduction using hierarchical copy all-reduce.
+
+[`class InputContext`](../tf/distribute/InputContext.md): A class wrapping information needed by an input function.
+
+[`class InputReplicationMode`](../tf/distribute/InputReplicationMode.md): Replication mode for input function.
+
+[`class MirroredStrategy`](../tf/distribute/MirroredStrategy.md): Synchronous training across multiple replicas on one machine.
+
+[`class NcclAllReduce`](../tf/distribute/NcclAllReduce.md): Reduction using NCCL all-reduce.
+
+[`class OneDeviceStrategy`](../tf/distribute/OneDeviceStrategy.md): A distribution strategy for running on a single device.
+
+[`class ReduceOp`](../tf/distribute/ReduceOp.md): Indicates how a set of values should be reduced.
+
+[`class ReductionToOneDevice`](../tf/distribute/ReductionToOneDevice.md): Always do reduction to one device first and then do broadcasting.
+
+[`class ReplicaContext`](../tf/distribute/ReplicaContext.md): tf.distribute.Strategy API when in a replica context.
+
+[`class RunOptions`](../tf/distribute/RunOptions.md): Run options for `strategy.run`.
+
+[`class Server`](../tf/distribute/Server.md): An in-process TensorFlow server, for use in distributed training.
+
+[`class Strategy`](../tf/distribute/Strategy.md): A state & compute distribution policy on a list of devices.
+
+[`class StrategyExtended`](../tf/distribute/StrategyExtended.md): Additional APIs for algorithms that need to be distribution-aware.
+ +## Functions + +[`experimental_set_strategy(...)`](../tf/distribute/experimental_set_strategy.md): Set a tf.distribute.Strategy as current without `with strategy.scope()`. + +[`get_replica_context(...)`](../tf/distribute/get_replica_context.md): Returns the current tf.distribute.ReplicaContext or `None`. + +[`get_strategy(...)`](../tf/distribute/get_strategy.md): Returns the current tf.distribute.Strategy object. + +[`has_strategy(...)`](../tf/distribute/has_strategy.md): Return if there is a current non-default tf.distribute.Strategy. + +[`in_cross_replica_context(...)`](../tf/distribute/in_cross_replica_context.md): Returns `True` if in a cross-replica context. + diff --git a/site/en/api_docs/python/tf/distribute/CrossDeviceOps.md b/site/en/api_docs/python/tf/distribute/CrossDeviceOps.md new file mode 100644 index 00000000000..46d275cce18 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/CrossDeviceOps.md @@ -0,0 +1,500 @@ +description: Base class for cross-device reduction and broadcasting algorithms. + +
+ + + + + + + + + +
+ +# tf.distribute.CrossDeviceOps + + + + + + + + + +Base class for cross-device reduction and broadcasting algorithms. + + + + + + + + + + +## Methods + +

batch_reduce

+ +View source + + + +Reduce PerReplica objects in a batch. + +Reduce each first element in `value_destination_pairs` to each second +element which indicates the destinations. + +This can be faster than multiple individual `reduce`s because we can +fuse several tensors into one or multiple packs before reduction. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +An instance of tf.distribute.ReduceOp that indicates how the +`per_replica_value` will be reduced. +
+`value_destination_pairs` + +A list or a tuple of PerReplica objects (or +tensors with device set if there is one device) and destinations. +
+
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+a list of Mirrored objects. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `value_destination_pairs` is not an iterable of +tuples of PerReplica objects and destinations. +
+ + + +

batch_reduce_implementation

+ +View source + + + +Implementation of reduce PerReplica objects in a batch. + +Overriding this method is useful for subclass implementers. + +Reduce each first element in `value_destination_pairs` to each second +element which indicates the destinations. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +An instance of tf.distribute.ReduceOp that indicates how +per_replica_value will be reduced. +
+`value_destination_pairs` + +An iterable of tuples of PerReplica objects +(or tensors with device set if there is one device) and destinations. +
+
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+a list of Mirrored objects. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `value_destination_pairs` is not an iterable of +tuples of PerReplica objects and destinations +
+ + + +

broadcast

+ +View source + + + +Broadcast the `tensor` to destinations. + + + + + + + + + + + + + + +
Args
+`tensor` + +the tensor to broadcast. +
+`destinations` + +the broadcast destinations. +
+ + + + + + + + + + + +
Returns
+a Mirrored object. +
+ + + +

broadcast_implementation

+ +View source + + + +Implementation of broadcast the `tensor` to destinations. + + + + + + + + + + + + + + +
Args
+`tensor` + +the tensor to broadcast. +
+`destinations` + +the broadcast destinations. +
+ + + + + + + + + + + +
Returns
+a Mirrored object. +
+ + + +

reduce

+ +View source + + + +Reduce `per_replica_value` to `destinations`. + +It runs the reduction operation defined by `reduce_op` and put the +result on `destinations`. + + + + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +An instance of tf.distribute.ReduceOp that indicates how +per_replica_value will be reduced. +
+`per_replica_value` + +A PerReplica object or a tensor with device set. +
+`destinations` + +the reduction destinations. +
+
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+a Mirrored object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if per_replica_value can't be converted to a PerReplica +object or if destinations aren't strings, Variables or DistributedValues +
+ + + +

reduce_implementation

+ +View source + + + +The implementation of reduce of `per_replica_value` to `destinations`. + +Overriding this method is useful for subclass implementers. + +It runs the reduction operation defined by `reduce_op` and put the +result on `destinations`. + + + + + + + + + + + + + + + + + + + +
Args
+
+`reduce_op`
+
+An instance of tf.distribute.ReduceOp that indicates how
+per_replica_value will be reduced.
+
+`per_replica_value` + +A PerReplica object or a tensor with device set. +
+`destinations` + +the reduction destinations. +
+
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+a Mirrored object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if per_replica_value can't be converted to a PerReplica +object. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/DistributedValues.md b/site/en/api_docs/python/tf/distribute/DistributedValues.md new file mode 100644 index 00000000000..f1a89ee94b7 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/DistributedValues.md @@ -0,0 +1,100 @@ +description: Base class for representing distributed values. + +
+ + + +
+ +# tf.distribute.DistributedValues + + + + + + + + + +Base class for representing distributed values. + + + + + + + +A subclass instance of DistributedValues is created when creating variables +within a distribution strategy, iterating a `tf.Dataset` or through +`strategy.run`. This base class should never be instantiated +directly. DistributedValues contains a value per replica. Depending on +the subclass, the values could either be synced on update, synced on demand, +or never synced. + +DistributedValues can be reduced to obtain single value across replicas, +as input into `run` or the per replica values inspected +using `experimental_local_results`. + +#### Example usage: + + + +1. Created from Dataset: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2) +>>> dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset)) +>>> distributed_values = next(dataset_iterator) +``` + +2. Returned by `run`: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> @tf.function +... def run(): +... ctx = tf.distribute.get_replica_context() +... return ctx.replica_id_in_sync_group +>>> distributed_values = strategy.run(run) +``` + +3. As input into `run`: +>>> strategy = tf.distribute.MirroredStrategy() +>>> dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2) +>>> dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset)) +>>> distributed_values = next(dataset_iterator) +>>> @tf.function +... def run(input): +... return input + 1.0 +>>> updated_value = strategy.run(run, +... args=(distributed_values,)) + +4. Reduce value +>>> strategy = tf.distribute.MirroredStrategy() +>>> dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2) +>>> dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset)) +>>> distributed_values = next(dataset_iterator) +>>> reduced_value = strategy.reduce(tf.distribute.ReduceOp.SUM, +... distributed_values, +... axis = 0) + +5. Inspect per replica values. +>>> strategy = tf.distribute.MirroredStrategy() +>>> dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2) +>>> dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset)) +>>> per_replica_values = strategy.experimental_local_results( +... distributed_values) +>>> per_replica_values +(,) + diff --git a/site/en/api_docs/python/tf/distribute/HierarchicalCopyAllReduce.md b/site/en/api_docs/python/tf/distribute/HierarchicalCopyAllReduce.md new file mode 100644 index 00000000000..5bb38820bba --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/HierarchicalCopyAllReduce.md @@ -0,0 +1,315 @@ +description: Reduction using hierarchical copy all-reduce. + +
+ + + + + + +
+
+# tf.distribute.HierarchicalCopyAllReduce
+
+
+
+
+
+
+
+Reduction using hierarchical copy all-reduce.
+
+
+
+
+
+
+
+
+It reduces to one GPU along edges in some hierarchy and broadcasts back to
+each GPU along the same path. Before performing all-reduce, tensors will be
+repacked or aggregated for more efficient cross-device transportation.
+
+This is a reduction created for the Nvidia DGX-1, and it assumes GPUs are
+connected like those on a DGX-1 machine. If you have different GPU
+inter-connections, it is likely to be slower than
+tf.distribute.ReductionToOneDevice.
+
+
+
+
+
+
+
+
+
+
+
+`num_packs`
+
+values will be packed in this many splits. `num_packs` should
+be greater than or equal to 0. When it is zero, no packing will be done.
+
+ + + + + + + + + + + +
+ValueError if `num_packs` is negative. +
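+
+Instances of this class are normally handed to a strategy rather than
+called directly. A hedged sketch (device names are illustrative):
+
+```python
+strategy = tf.distribute.MirroredStrategy(
+    devices=["/gpu:0", "/gpu:1"],
+    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce(num_packs=1))
+```
+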
+ + + +## Methods + +

batch_reduce

+ +View source + + + +Reduce PerReplica objects in a batch. + +Reduce each first element in `value_destination_pairs` to each second +element which indicates the destinations. + +This can be faster than multiple individual `reduce`s because we can +fuse several tensors into one or multiple packs before reduction. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +An instance of tf.distribute.ReduceOp that indicates how the +`per_replica_value` will be reduced. +
+`value_destination_pairs` + +A list or a tuple of PerReplica objects (or +tensors with device set if there is one device) and destinations. +
+
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+a list of Mirrored objects. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `value_destination_pairs` is not an iterable of +tuples of PerReplica objects and destinations. +
+ + + +

broadcast

+ +View source + + + +Broadcast the `tensor` to destinations. + + + + + + + + + + + + + + +
Args
+`tensor` + +the tensor to broadcast. +
+`destinations` + +the broadcast destinations. +
+ + + + + + + + + + + +
Returns
+a Mirrored object. +
+ + + +

reduce

+ +View source + + + +Reduce `per_replica_value` to `destinations`. + +It runs the reduction operation defined by `reduce_op` and put the +result on `destinations`. + + + + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +An instance of tf.distribute.ReduceOp that indicates how +per_replica_value will be reduced. +
+`per_replica_value` + +A PerReplica object or a tensor with device set. +
+`destinations` + +the reduction destinations. +
+
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+a Mirrored object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if per_replica_value can't be converted to a PerReplica +object or if destinations aren't strings, Variables or DistributedValues +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/InputContext.md b/site/en/api_docs/python/tf/distribute/InputContext.md new file mode 100644 index 00000000000..c586822f5a7 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/InputContext.md @@ -0,0 +1,187 @@ +description: A class wrapping information needed by an input function. + +
+ + + + +
+ +# tf.distribute.InputContext + + + + + + + + + +A class wrapping information needed by an input function. + + + + + + + + + +This is a context class that is passed to the user's input function and +contains information about the compute replicas and input pipelines. The +number of compute replicas (in sync training) helps compute the local batch +size from the desired global batch size for each replica. The input pipeline +information can be used to return a different subset of the input in each +replica (for e.g. shard the input pipeline, use a different input +source etc). + + + + + + + + + + + + + + + + +
+`num_input_pipelines` + +the number of input pipelines in a cluster. +
+`input_pipeline_id` + +the current input pipeline id, should be an int in +[0,`num_input_pipelines`). +
+`num_replicas_in_sync` + +the number of replicas that are in sync. +
+ + + + + + + + + + + + + + + + + + + + +
+`input_pipeline_id` + +Returns the input pipeline ID. +
+`num_input_pipelines` + +Returns the number of input pipelines. +
+`num_replicas_in_sync` + +Returns the number of compute replicas in sync. +
+ + + +## Methods + +

get_per_replica_batch_size

+ +View source + + + +Returns the per-replica batch size. + + + + + + + + + + + +
Args
+`global_batch_size` + +the global batch size which should be divisible by +`num_replicas_in_sync`. +
+ + + + + + + + + + + +
Returns
+the per-replica batch size. +
+ + + + + + + + + + + + +
Raises
+
+`ValueError`
+
+if `global_batch_size` is not divisible by
+`num_replicas_in_sync`.
+
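+
+A minimal sketch of how the context is typically used inside a
+`dataset_fn` (the global batch size and dataset are placeholders):
+
+```python
+GLOBAL_BATCH_SIZE = 64  # placeholder value
+
+def dataset_fn(input_context):
+  # Derive the per-replica batch size from the desired global batch size.
+  batch_size = input_context.get_per_replica_batch_size(GLOBAL_BATCH_SIZE)
+  dataset = tf.data.Dataset.range(1024).batch(batch_size)
+  # Give each input pipeline its own shard of the data.
+  return dataset.shard(input_context.num_input_pipelines,
+                       input_context.input_pipeline_id)
+```
+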
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/InputReplicationMode.md b/site/en/api_docs/python/tf/distribute/InputReplicationMode.md new file mode 100644 index 00000000000..cb410f4be70 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/InputReplicationMode.md @@ -0,0 +1,47 @@ +description: Replication mode for input function. + +
+ + + +
+ +# tf.distribute.InputReplicationMode + + + + + + + + + +Replication mode for input function. + + + + + +* `PER_WORKER`: The input function will be called on each worker + independently, creating as many input pipelines as number of workers. + Replicas will dequeue from the local Dataset on their worker. + tf.distribute.Strategy doesn't manage any state sharing between such + separate input pipelines. + +## Class Variables + +* `PER_WORKER` diff --git a/site/en/api_docs/python/tf/distribute/MirroredStrategy.md b/site/en/api_docs/python/tf/distribute/MirroredStrategy.md new file mode 100644 index 00000000000..c9d49b2c8b6 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/MirroredStrategy.md @@ -0,0 +1,1125 @@ +description: Synchronous training across multiple replicas on one machine. + +
+ + + + + + + + + + + + + + +
+ +# tf.distribute.MirroredStrategy + + + + + + + + + +Synchronous training across multiple replicas on one machine. + +Inherits From: [`Strategy`](../../tf/distribute/Strategy.md) + + + + + + + +This strategy is typically used for training on one +machine with multiple GPUs. For TPUs, use +tf.distribute.experimental.TPUStrategy. To use `MirroredStrategy` with +multiple workers, please refer to +tf.distribute.experimental.MultiWorkerMirroredStrategy. + +For example, a variable created under a `MirroredStrategy` is a +`MirroredVariable`. If no devices are specified in the constructor argument of +the strategy then it will use all the available GPUs. If no GPUs are found, it +will use the available CPUs. Note that TensorFlow treats all CPUs on a +machine as a single device, and uses threads internally for parallelism. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> with strategy.scope(): +... x = tf.Variable(1.) +>>> x +MirroredVariable:{ + 0: + } +``` + +While using distribution strategies, all the variable creation should be done +within the strategy's scope. This will replicate the variables across all the +replicas and keep them in sync using an all-reduce algorithm. + +Variables created inside a `MirroredStrategy` which is wrapped with a +tf.function are still `MirroredVariables`. + +``` +>>> x = [] +>>> @tf.function # Wrap the function with tf.function. +... def create_variable(): +... if not x: +... x.append(tf.Variable(1.)) +>>> strategy = tf.distribute.MirroredStrategy() +>>> with strategy.scope(): +... create_variable() +... print (x[0]) +MirroredVariable:{ + 0: + } +``` + +`experimental_distribute_dataset` can be used to distribute the dataset across +the replicas when writing your own training loop. If you are using `.fit` and +`.compile` methods available in tf.keras, then tf.keras will handle the +distribution for you. + +#### For example: + + + +```python +my_strategy = tf.distribute.MirroredStrategy() +with my_strategy.scope(): + @tf.function + def distribute_train_epoch(dataset): + def replica_fn(input): + # process input and return result + return result + + total_result = 0 + for x in dataset: + per_replica_result = my_strategy.run(replica_fn, args=(x,)) + total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM, + per_replica_result, axis=None) + return total_result + + dist_dataset = my_strategy.experimental_distribute_dataset(dataset) + for _ in range(EPOCHS): + train_result = distribute_train_epoch(dist_dataset) +``` + + + + + + + + + + + + + +
+`devices` + +a list of device strings such as `['/gpu:0', '/gpu:1']`. If +`None`, all available GPUs are used. If no GPUs are found, CPU is used. +
+
+`cross_device_ops`
+
+optional, a descendant of `CrossDeviceOps`. If this is not
+set, `NcclAllReduce()` will be used by default. One would customize this
+if NCCL isn't available or if a special implementation that exploits
+the particular hardware is available.
+
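+
+A constructor sketch (the device list is illustrative; with no arguments,
+all visible GPUs, or the CPU, are used with the default `NcclAllReduce()`):
+
+```python
+strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
+print(strategy.num_replicas_in_sync)  # 2
+```
+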
+ + + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_assign_to_logical_device

+ +View source + + + +Adds annotation that `tensor` will be assigned to a logical device. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be invoked on logical core device id `logical_device_id`. +When model parallelism is used, the default behavior is that all ops +are placed on zero-th logical device. + +```python + +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + output = tf.add(inputs, inputs) + + // Add operation will be executed on logical device 0. + output = strategy.experimental_assign_to_logical_device(output, 0) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` + + + + + + + + + + + + + +
Args
+`tensor` + +Input tensor to annotate. +
+`logical_device_id` + +Id of the logical core to which the tensor will be +assigned. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +The logical device id presented is not consistent with total +number of partitions specified by the device assignment. +
+ + + + + + + + + + + +
Returns
+
+Annotated tensor with the same value as `tensor`.
+
+ + + +

experimental_distribute_dataset

+ +View source + + + +Distributes a tf.data.Dataset instance provided via `dataset`. + +The returned distributed dataset can be iterated over similar to how +regular datasets can. +NOTE: Currently, the user cannot add any more transformations to a +distributed dataset. + +The following is an example: + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + +We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). + +In a multi-worker setting, we will first attempt to distribute the dataset +by attempting to detect whether the dataset is being created out of +ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so, +attempting to shard the input files. Note that there has to be at least one +input file per worker. If you have less than one input file per worker, we +suggest that you should disable distributing your dataset using the method +below. + +If that attempt is unsuccessful (e.g. the dataset is created from a +Dataset.range), we will shard the dataset evenly at the end by appending a +`.shard` operation to the end of the processing pipeline. This will cause +the entire preprocessing pipeline for all the data to be run on every +worker, and each worker will do redundant work. We will print a warning +if this method of sharding is selected. + +You can disable dataset sharding across workers using the +`auto_shard_policy` option in tf.data.experimental.DistributeOptions. + +Within each worker, we will also split the data among all the worker +devices (if more than one a present), and this will happen even if +multi-worker sharding is disabled using the method above. + +If the above batch splitting and dataset sharding logic is undesirable, +please use `experimental_distribute_datasets_from_function` instead, which +does not do any automatic splitting or sharding. + +You can also use the `element_spec` property of the distributed dataset +returned by this API to query the tf.TypeSpec of the elements returned +by the iterator. This can be used to set the `input_signature` property +of a tf.function. + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +@tf.function(input_signature=[dist_dataset.element_spec]) +def train_step(inputs): + # train model with inputs + return + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. Each +replica on that worker will dequeue one batch of inputs from the local +`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued +from the `Dataset` every step). + +This method can be used for several purposes. For example, where +`experimental_distribute_dataset` is unable to shard the input files, this +method might be used to manually shard the dataset (avoiding the slow +fallback behavior in `experimental_distribute_dataset`). In cases where the +dataset is infinite, this sharding can be done by creating dataset replicas +that differ only in their random seed. +`experimental_distribute_dataset` may also sometimes fail to split the +batch across replicas on a worker. In that case, this method can be used +where that limitation does not exist. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + +To query the tf.TypeSpec of the elements in the distributed dataset +returned by this API, you need to use the `element_spec` property of the +distributed iterator. This tf.TypeSpec can be used to set the +`input_signature` property of a tf.function. + +```python +# If you want to specify `input_signature` for a `tf.function` you must +# first create the iterator. +iterator = iter(inputs) + +@tf.function(input_signature=[iterator.element_spec]) +def replica_fn_with_signature(inputs): + # train the model with inputs + return + +for _ in range(steps): + strategy.run(replica_fn_with_signature, + args=(next(iterator),)) +``` + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_values_from_function

+ +View source + + + +Generates tf.distribute.DistributedValues from `value_fn`. + +This function is to generate tf.distribute.DistributedValues to pass +into `run`, `reduce`, or other methods that take +distributed values when not using datasets. + + + + + + + + + + +
Args
+`value_fn` + +The function to run to generate values. It is called for +each replica with `tf.distribute.ValueContext` as the sole argument. It +must return a Tensor or a type that can be converted to a Tensor. +
+ + + + + + + + + + + +
Returns
+A tf.distribute.DistributedValues containing a value for each replica. +
+ + + +#### Example usage: + + + +1. Return constant value per replica: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return tf.constant(1.) +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(,) +``` + +2. Distribute values in array based on replica_id: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> array_value = np.array([3., 2., 1.]) +>>> def value_fn(ctx): +... return array_value[ctx.replica_id_in_sync_group] +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(3.0,) +``` + +3. Specify values using num_replicas_in_sync: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return ctx.num_replicas_in_sync +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(1,) +``` + +4. Place values on devices and distribute: + +``` +strategy = tf.distribute.TPUStrategy() +worker_devices = strategy.extended.worker_devices +multiple_values = [] +for i in range(strategy.num_replicas_in_sync): + with tf.device(worker_devices[i]): + multiple_values.append(tf.constant(1.0)) + +def value_fn(ctx): + return multiple_values[ctx.replica_id] + +distributed_values = strategy. + experimental_distribute_values_from_function( + value_fn) +``` + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single +value, this returns `(value,).` +
+ + + +

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use `experimental_distribute_dataset` +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_replicate_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be replicated to all logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to tensor `tensor` specifying that operations on +`tensor` will be invoked on all logical devices. + +```python +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + images, labels = inputs + images = strategy.experimental_split_to_logical_devices( + inputs, [1, 2, 4, 1]) + + // model() function will be executed on 8 logical devices with `inputs` + // split 2 * 4 ways. + output = model(inputs) + + // For loss calculation, all logical devices share the same logits + // and labels. + labels = strategy.experimental_replicate_to_logical_devices(labels) + output = strategy.experimental_replicate_to_logical_devices(output) + loss = loss_fn(labels, output) + + return loss + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + + + + + + + + + +
Returns
+
+Annotated tensor with the same value as `tensor`.
+
+ + + +

experimental_split_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be split across logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to tensor `tensor` specifying that operations on +`tensor` will be be split among multiple logical devices. Tensor `tensor` +will be split across dimensions specified by `partition_dimensions`. +The dimensions of `tensor` must be divisible by corresponding value in +`partition_dimensions`. + +For example, for system with 8 logical devices, if `tensor` is an image +tensor with shape (batch_size, width, height, channel) and +`partition_dimensions` is [1, 2, 4, 1], then `tensor` will be split +2 in width dimension and 4 way in height dimension and the split +tensor values will be fed into 8 logical devices. + +```python +# Initializing TPU system with 8 logical devices and 1 replica. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[2, 2, 2], + num_replicas=1) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + inputs = strategy.experimental_split_to_logical_devices( + inputs, [1, 2, 4, 1]) + + // model() function will be executed on 8 logical devices with `inputs` + // split 2 * 4 ways. + output = model(inputs) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + partition_dimensions: An unnested list of integers with the size equal to + rank of `tensor` specifying how `tensor` will be partitioned. The + product of all elements in `partition_dimensions` must be equal to the + total number of logical devices per replica. + + + + + + + + + + +
Raises
+`ValueError` + +1) If the size of partition_dimensions does not equal to rank +of `tensor` or 2) if product of elements of `partition_dimensions` does +not match the number of logical devices per replica defined by the +implementing DistributionStrategy's device specification or +3) if a known size of `tensor` is not divisible by corresponding +value in `partition_dimensions`. +
+ + + + + + + + + + + +
Returns
+
+Annotated tensor with the same value as `tensor`.
+
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
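+
+A minimal end-to-end sketch (the dataset and function are placeholders;
+with a single GPU or the CPU there is one replica, but the reduction is
+the same):
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+dataset = tf.data.Dataset.from_tensor_slices([1., 2., 3., 4.]).batch(4)
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+def step_fn(x):
+  return x * 2.0  # stand-in for a per-example loss
+
+for x in dist_dataset:
+  per_replica = strategy.run(step_fn, args=(x,))
+  # Aggregate across replicas *and* the batch dimension.
+  total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=0)
+  print(total)  # tf.Tensor(20.0, ...) == (1+2+3+4) * 2
+```
+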
+ + + +

run

+ +View source + + + +Run `fn` on each replica, with the given arguments. + +Executes ops specified by `fn` on each replica. If `args` or `kwargs` have +tf.distribute.DistributedValues, such as those produced by a +"distributed `Dataset`" or `experimental_distribute_values_from_function` +when `fn` is executed on a particular replica, it will be executed with the +component of tf.distribute.DistributedValues that correspond to that +replica. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `all_reduce`. + +All arguments in `args` or `kwargs` should either be nest of tensors or +tf.distribute.DistributedValues containing tensors or composite tensors. + +IMPORTANT: Depending on the implementation of tf.distribute.Strategy and +whether eager execution is enabled, `fn` may be called one or more times ( +once for each replica). + +#### Example usage: + + + +1. Constant tensor input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> tensor_input = tf.constant(3.0) +>>> @tf.function +... def replica_fn(input): +... return input*2.0 +>>> result = strategy.run(replica_fn, args=(tensor_input,)) +>>> result + +``` + +2. DistributedValues input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> @tf.function +... def run(): +... def value_fn(value_context): +... return value_context.num_replicas_in_sync +... distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +... def replica_fn2(input): +... return input*2 +... return strategy.run(replica_fn2, args=(distributed_values,)) +>>> result = run() +>>> result + +``` + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be tf.distribute.DistributedValues, `Tensor` +objects, or `Tensor`s (for example, if running on a single replica). +
+ + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
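+
+A minimal sketch of the intended usage:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+with strategy.scope():
+  # Variables (e.g. a model and its optimizer) created here are
+  # mirrored across the strategy's replicas.
+  v = tf.Variable(1.0)
+```
+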
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/NcclAllReduce.md b/site/en/api_docs/python/tf/distribute/NcclAllReduce.md new file mode 100644 index 00000000000..5435082a83a --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/NcclAllReduce.md @@ -0,0 +1,308 @@ +description: Reduction using NCCL all-reduce. + +
+ + + + + + +
+ +# tf.distribute.NcclAllReduce + + + + + + + + + +Reduction using NCCL all-reduce. + + + + + + + + + + + + + + + + + + + +
+
+`num_packs`
+
+values will be packed in this many splits. `num_packs` should
+be greater than or equal to 0. When it is zero, no packing will be done.
+
+ + + + + + + + + + + +
+ValueError if `num_packs` is negative. +
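+
+A hedged usage sketch: this is MirroredStrategy's default
+`cross_device_ops`, but it can be constructed explicitly to control
+`num_packs`:
+
+```python
+strategy = tf.distribute.MirroredStrategy(
+    cross_device_ops=tf.distribute.NcclAllReduce(num_packs=1))
+```
+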
+ + + +## Methods + +

batch_reduce

+ +View source + + + +Reduce PerReplica objects in a batch. + +Reduce each first element in `value_destination_pairs` to each second +element which indicates the destinations. + +This can be faster than multiple individual `reduce`s because we can +fuse several tensors into one or multiple packs before reduction. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +An instance of tf.distribute.ReduceOp that indicates how the +`per_replica_value` will be reduced. +
+`value_destination_pairs` + +A list or a tuple of PerReplica objects (or +tensors with device set if there is one device) and destinations. +
+
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+a list of Mirrored objects. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `value_destination_pairs` is not an iterable of +tuples of PerReplica objects and destinations. +
+ + + +

broadcast

+ +View source + + + +Broadcast the `tensor` to destinations. + + + + + + + + + + + + + + +
Args
+`tensor` + +the tensor to broadcast. +
+`destinations` + +the broadcast destinations. +
+ + + + + + + + + + + +
Returns
+a Mirrored object. +
+ + + +

reduce

+ +View source + + + +Reduce `per_replica_value` to `destinations`. + +It runs the reduction operation defined by `reduce_op` and put the +result on `destinations`. + + + + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +An instance of tf.distribute.ReduceOp that indicates how +per_replica_value will be reduced. +
+`per_replica_value` + +A PerReplica object or a tensor with device set. +
+`destinations` + +the reduction destinations. +
+
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+a Mirrored object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if per_replica_value can't be converted to a PerReplica +object or if destinations aren't strings, Variables or DistributedValues +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/OneDeviceStrategy.md b/site/en/api_docs/python/tf/distribute/OneDeviceStrategy.md new file mode 100644 index 00000000000..e1d56110c74 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/OneDeviceStrategy.md @@ -0,0 +1,906 @@ +description: A distribution strategy for running on a single device. + +
+ + + + + + + + + + + + + + +
+ +# tf.distribute.OneDeviceStrategy + + + + + + + + + +A distribution strategy for running on a single device. + +Inherits From: [`Strategy`](../../tf/distribute/Strategy.md) + + + + + + + +Using this strategy will place any variables created in its scope on the +specified device. Input distributed through this strategy will be +prefetched to the specified device. Moreover, any functions called via +`strategy.run` will also be placed on the specified device +as well. + +Typical usage of this strategy could be testing your code with the +tf.distribute.Strategy API before switching to other strategies which +actually distribute to multiple devices/machines. + +#### For example: + + +``` +strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0") + +with strategy.scope(): + v = tf.Variable(1.0) + print(v.device) # /job:localhost/replica:0/task:0/device:GPU:0 + +def step_fn(x): + return x * 2 + +result = 0 +for i in range(10): + result += strategy.run(step_fn, args=(i,)) +print(result) # 90 +``` + + + + + + + + + + +
+`device` + +Device string identifier for the device on which the variables +should be placed. See class docs for more details on how the device is +used. Examples: "/cpu:0", "/gpu:0", "/device:CPU:0", "/device:GPU:0" +
+ + + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_assign_to_logical_device

+ +View source + + + +Adds annotation that `tensor` will be assigned to a logical device. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be invoked on logical core device id `logical_device_id`. +When model parallelism is used, the default behavior is that all ops +are placed on zero-th logical device. + +```python + +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + output = tf.add(inputs, inputs) + + // Add operation will be executed on logical device 0. + output = strategy.experimental_assign_to_logical_device(output, 0) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` + + + + + + + + + + + + + +
Args
+`tensor` + +Input tensor to annotate. +
+`logical_device_id` + +Id of the logical core to which the tensor will be +assigned. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +The logical device id presented is not consistent with total +number of partitions specified by the device assignment. +
+ + + + + + + + + + + +
Returns
+
+Annotated tensor with the same value as `tensor`.
+
+ + + +

experimental_distribute_dataset

+
+View source
+
+
+
+Distributes a tf.data.Dataset instance provided via dataset.
+
+In this case, there is only one device, so this is only a thin wrapper
+around the input dataset. It will, however, prefetch the input data to the
+specified device. The returned distributed dataset can be iterated over
+similarly to how regular datasets can.
+
+NOTE: Currently, the user cannot add any more transformations to a
+distributed dataset.
+
+#### Example:
+
+
+```
+strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
+dataset = tf.data.Dataset.range(10).batch(2)
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+for x in dist_dataset:
+  print(x)  # [0, 1], [2, 3],...
+```
+Args:
+  dataset: tf.data.Dataset to be prefetched to device.
+
+
+
+
Returns
+A "distributed `Dataset`" that the caller can iterate over. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. In this +case, we only have one worker and one device so `dataset_fn` is called +once. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which the caller can iterate over like regular +datasets. +
+ + + +

experimental_distribute_values_from_function

+ +View source + + + +Generates tf.distribute.DistributedValues from `value_fn`. + +This function is to generate tf.distribute.DistributedValues to pass +into `run`, `reduce`, or other methods that take +distributed values when not using datasets. + + + + + + + + + + +
Args
+`value_fn` + +The function to run to generate values. It is called for +each replica with `tf.distribute.ValueContext` as the sole argument. It +must return a Tensor or a type that can be converted to a Tensor. +
+ + + + + + + + + + + +
Returns
+A tf.distribute.DistributedValues containing a value for each replica. +
+ + + +#### Example usage: + + + +1. Return constant value per replica: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return tf.constant(1.) +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,) +``` + +2. Distribute values in array based on replica_id: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> array_value = np.array([3., 2., 1.]) +>>> def value_fn(ctx): +... return array_value[ctx.replica_id_in_sync_group] +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(3.0,) +``` + +3. Specify values using num_replicas_in_sync: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return ctx.num_replicas_in_sync +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(1,) +``` + +4. Place values on devices and distribute: + +``` +strategy = tf.distribute.TPUStrategy() +worker_devices = strategy.extended.worker_devices +multiple_values = [] +for i in range(strategy.num_replicas_in_sync): + with tf.device(worker_devices[i]): + multiple_values.append(tf.constant(1.0)) + +def value_fn(ctx): + return multiple_values[ctx.replica_id_in_sync_group] + +distributed_values = ( + strategy.experimental_distribute_values_from_function( + value_fn)) +``` + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +In `OneDeviceStrategy`, the `value` is always expected to be a single +value, so the result is just the value in a tuple. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single +value, this returns `(value,)`. +
+ + + +
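+For illustration, a minimal sketch of how this might be used with a
+`OneDeviceStrategy` (the `/cpu:0` device string is an assumption):
+
+```python
+strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
+
+@tf.function
+def step_fn():
+  return tf.constant(1.0)
+
+# With a single device there is exactly one local result, wrapped in a tuple.
+result = strategy.run(step_fn)
+local_values = strategy.experimental_local_results(result)  # (<tf.Tensor ...>,)
+```
+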

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use `experimental_distribute_dataset` +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_replicate_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be replicated to all logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be invoked on all logical devices. + +```python +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + images, labels = inputs + images = strategy.experimental_split_to_logical_devices( + images, [1, 2, 4, 1]) + + # model() function will be executed on 8 logical devices with `images` + # split 2 * 4 ways. + output = model(images) + + # For loss calculation, all logical devices share the same logits + # and labels. + labels = strategy.experimental_replicate_to_logical_devices(labels) + output = strategy.experimental_replicate_to_logical_devices(output) + loss = loss_fn(labels, output) + + return loss + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

experimental_split_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be split across logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be split among multiple logical devices. Tensor `tensor` +will be split across dimensions specified by `partition_dimensions`. +The dimensions of `tensor` must be divisible by the corresponding values in +`partition_dimensions`. + +For example, for a system with 8 logical devices, if `tensor` is an image +tensor with shape (batch_size, width, height, channel) and +`partition_dimensions` is [1, 2, 4, 1], then `tensor` will be split +2 ways in the width dimension and 4 ways in the height dimension, and the +split tensor values will be fed into 8 logical devices. + +```python +# Initializing TPU system with 8 logical devices and 1 replica. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[2, 2, 2], + num_replicas=1) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + inputs = strategy.experimental_split_to_logical_devices( + inputs, [1, 2, 4, 1]) + + # model() function will be executed on 8 logical devices with `inputs` + # split 2 * 4 ways. + output = model(inputs) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + partition_dimensions: An unnested list of integers with size equal to the + rank of `tensor` specifying how `tensor` will be partitioned. The + product of all elements in `partition_dimensions` must be equal to the + total number of logical devices per replica. + + + + + + + + + + +
Raises
+`ValueError` + +1) If the size of `partition_dimensions` does not equal the rank +of `tensor` or 2) if the product of the elements of `partition_dimensions` does +not match the number of logical devices per replica defined by the +implementing DistributionStrategy's device specification or +3) if a known size of `tensor` is not divisible by the corresponding +value in `partition_dimensions`. +
+ + + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +In `OneDeviceStrategy`, there is only one replica, so if axis=None, value +is simply returned. If axis is specified as something other than None, +such as axis=0, value is reduced along that axis and returned. + +#### Example: + + +``` +t = tf.range(10) + +result = strategy.reduce(tf.distribute.ReduceOp.SUM, t, axis=None).numpy() +# result: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] + +result = strategy.reduce(tf.distribute.ReduceOp.SUM, t, axis=0).numpy() +# result: 45 +``` + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

run

+ +View source + + + +Run `fn` on each replica, with the given arguments. + +In `OneDeviceStrategy`, `fn` is simply called within a device scope for the +given device, with the provided arguments. + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Return value from running `fn`. +
+ + + +
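+As an illustrative sketch (the device string is an assumption), running a
+simple function under this strategy might look like:
+
+```python
+strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
+
+@tf.function
+def step_fn(x):
+  # Executed within the device scope of the strategy's single device.
+  return x * 2.0
+
+result = strategy.run(step_fn, args=(tf.constant(3.0),))  # 6.0
+```
+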

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + +In `OneDeviceStrategy`, all variables created inside `strategy.scope()` +will be on `device` specified at strategy construction time. +See example in the docs for this class. + + + + + + + + + +
Returns
+A context manager to use for creating variables with this strategy. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/ReduceOp.md b/site/en/api_docs/python/tf/distribute/ReduceOp.md new file mode 100644 index 00000000000..201d7f20930 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/ReduceOp.md @@ -0,0 +1,46 @@ +description: Indicates how a set of values should be reduced. + +
+ + + + +
+ +# tf.distribute.ReduceOp + + + + + + + + + +Indicates how a set of values should be reduced. + + + + + +* `SUM`: Add all the values. +* `MEAN`: Take the arithmetic mean ("average") of the values. + +## Class Variables + +* `MEAN` +* `SUM` diff --git a/site/en/api_docs/python/tf/distribute/ReductionToOneDevice.md b/site/en/api_docs/python/tf/distribute/ReductionToOneDevice.md new file mode 100644 index 00000000000..d9068699d64 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/ReductionToOneDevice.md @@ -0,0 +1,310 @@ +description: Always do reduction to one device first and then do broadcasting. + +
+ + + + + + +
+ +# tf.distribute.ReductionToOneDevice + + + + + + + + + +Always do reduction to one device first and then do broadcasting. + +Inherits From: [`CrossDeviceOps`](../../tf/distribute/CrossDeviceOps.md) + + + + + + + + + +Batch reduction is done by reduction on each element one by one. + +``` + mirrored_strategy = tf.distribute.MirroredStrategy( + cross_device_ops=tf.distribute.ReductionToOneDevice()) +``` + + + + + + + + + + + + + +
+`reduce_to_device` + +the intermediate device to reduce to. If None, reduce +to the first device in `destinations` of the `reduce()` method. +
+`accumulation_fn` + +a function that does accumulation. If None, then +tf.math.add_n is used. +
+ + + +## Methods + +

batch_reduce

+ +View source + + + +Reduce PerReplica objects in a batch. + +Reduce each first element in `value_destination_pairs` to each second +element which indicates the destinations. + +This can be faster than multiple individual `reduce`s because we can +fuse several tensors into one or multiple packs before reduction. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +An instance of tf.distribute.ReduceOp that indicates how the +`per_replica_value` will be reduced. +
+`value_destination_pairs` + +A list or a tuple of PerReplica objects (or +tensors with device set if there is one device) and destinations. +
+`experimental_hints` + +A `tf.distribute.experimental.CollectiveHints`. Hints +to perform collective operations. +
+ + + + + + + + + + + +
Returns
+a list of Mirrored objects. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `value_destination_pairs` is not an iterable of +tuples of PerReplica objects and destinations. +
+ + + +

broadcast

+ +View source + + + +Broadcast the `tensor` to destinations. + + + + + + + + + + + + + + +
Args
+`tensor` + +the tensor to broadcast. +
+`destinations` + +the broadcast destinations. +
+ + + + + + + + + + + +
Returns
+a Mirrored object. +
+ + + +

reduce

+ +View source + + + +Reduce `per_replica_value` to `destinations`. + +It runs the reduction operation defined by `reduce_op` and put the +result on `destinations`. + + + + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +An instance of tf.distribute.ReduceOp that indicates how +per_replica_value will be reduced. +
+`per_replica_value` + +A PerReplica object or a tensor with device set. +
+`destinations` + +the reduction destinations. +
+`experimental_hints` + +A `tf.distribute.experimental.CollectiveHints`. Hints +to perform collective operations. +
+ + + + + + + + + + + +
Returns
+a Mirrored object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if per_replica_value can't be converted to a PerReplica +object or if destinations aren't strings, Variables or DistributedValues +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/ReplicaContext.md b/site/en/api_docs/python/tf/distribute/ReplicaContext.md new file mode 100644 index 00000000000..2e68833340a --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/ReplicaContext.md @@ -0,0 +1,276 @@ +description: tf.distribute.Strategy API when in a replica context. + +
+ + + + + + + +
+ +# tf.distribute.ReplicaContext + + + + + + + + + +tf.distribute.Strategy API when in a replica context. + + + + + + + + + +You can use tf.distribute.get_replica_context to get an instance of +`ReplicaContext`. This should be inside your replicated step function, such +as in a tf.distribute.Strategy.run call. + + + + + + + + + + + + + + + + + + + + + +
+`devices` + +The devices this replica is to be executed on, as a tuple of strings. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+`replica_id_in_sync_group` + +Returns the id of the replica being defined. + +This identifies the replica that is part of a sync group. Currently we +assume that all sync groups contain the same number of replicas. The value +of the replica id can range from 0 to `num_replicas_in_sync` - 1. + +NOTE: This is not guaranteed to be the same ID as the XLA replica ID used +for low-level operations such as collective_permute. +
+`strategy` + +The current tf.distribute.Strategy object. +
+ + + +## Methods + +

all_reduce

+ +View source + + + +All-reduces the given `value Tensor` nest across replicas. + +If `all_reduce` is called in any replica, it must be called in all replicas. +The nested structure and `Tensor` shapes must be identical in all replicas. + +IMPORTANT: The ordering of communications must be identical in all replicas. + +Example with two replicas: + Replica 0 `value`: {'a': 1, 'b': [40, 1]} + Replica 1 `value`: {'a': 3, 'b': [ 2, 98]} + + If `reduce_op` == `SUM`: + Result (on all replicas): {'a': 4, 'b': [42, 99]} + + If `reduce_op` == `MEAN`: + Result (on all replicas): {'a': 2, 'b': [21, 49.5]} + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +Reduction type, an instance of tf.distribute.ReduceOp enum. +
+`value` + +The nested structure of `Tensor`s to all-reduce. The structure must +be compatible with tf.nest. +
+`experimental_hints` + +A `tf.distribute.experimental.CollectiveHints`. Hints +to perform collective operations. +
+ + + + + + + + + + + +
Returns
+A `Tensor` nest with the reduced `value`s from each replica. +
+ + + +
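+A minimal sketch of calling `all_reduce` from inside a replica function,
+assuming a default `MirroredStrategy`:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+@tf.function
+def step_fn():
+  ctx = tf.distribute.get_replica_context()
+  # Each replica contributes 1.0; every replica receives the summed result.
+  return ctx.all_reduce(tf.distribute.ReduceOp.SUM, tf.constant(1.0))
+
+per_replica_result = strategy.run(step_fn)
+```
+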

merge_call

+ +View source + + + +Merge args across replicas and run `merge_fn` in a cross-replica context. + +This allows communication and coordination when there are multiple calls +to the step_fn triggered by a call to `strategy.run(step_fn, ...)`. + +See tf.distribute.Strategy.run for an explanation. + +If not inside a distributed scope, this is equivalent to: + +``` +strategy = tf.distribute.get_strategy() +with cross-replica-context(strategy): + return merge_fn(strategy, *args, **kwargs) +``` + + + + + + + + + + + + + + + + +
Args
+`merge_fn` + +Function that joins arguments from threads that are given as +PerReplica. It accepts a tf.distribute.Strategy object as +the first argument. +
+`args` + +List or tuple with positional per-thread arguments for `merge_fn`. +
+`kwargs` + +Dict with keyword per-thread arguments for `merge_fn`. +
+ + + + + + + + + + + +
Returns
+The return value of `merge_fn`, except for `PerReplica` values which are +unpacked. +
+ + + +
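+For illustration, a sketch of using `merge_call` to aggregate a per-replica
+value in cross-replica context, assuming a default `MirroredStrategy`:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+def merge_fn(dist_strategy, per_replica_value):
+  # Runs once in cross-replica context; per-replica args arrive merged.
+  return dist_strategy.reduce(
+      tf.distribute.ReduceOp.SUM, per_replica_value, axis=None)
+
+@tf.function
+def step_fn():
+  ctx = tf.distribute.get_replica_context()
+  return ctx.merge_call(merge_fn, args=(tf.constant(1.0),))
+
+result = strategy.run(step_fn)
+```
+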

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/distribute/RunOptions.md b/site/en/api_docs/python/tf/distribute/RunOptions.md new file mode 100644 index 00000000000..1758a9029d2 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/RunOptions.md @@ -0,0 +1,88 @@ +description: Run options for strategy.run. + +
+ + + + + +
+ +# tf.distribute.RunOptions + + + + + + + + + +Run options for `strategy.run`. + + + + + + + + + +This can be used to hold some strategy specific configs. + + + + + + + + + + + + + + + +
+`experimental_enable_dynamic_batch_size` + +Boolean. Only applies to +TPUStrategy. Defaults to True. If True, TPUStrategy will enable the dynamic +padder to support dynamic batch size for the inputs. Otherwise only static +shape inputs are allowed. +
+`experimental_bucketizing_dynamic_shape` + +Boolean. Only applies to +TPUStrategy. Defaults to False. If True, TPUStrategy will automatically +bucketize inputs passed into `run` if the input shape is +dynamic. This is a performance optimization to reduce XLA recompilation, +which should not have an impact on correctness. +
+ + + +## Class Variables + +* `experimental_bucketizing_dynamic_shape` +* `experimental_enable_dynamic_batch_size` diff --git a/site/en/api_docs/python/tf/distribute/Server.md b/site/en/api_docs/python/tf/distribute/Server.md new file mode 100644 index 00000000000..1973c66b434 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/Server.md @@ -0,0 +1,295 @@ +description: An in-process TensorFlow server, for use in distributed training. + +
+ + + + + + +
+ +# tf.distribute.Server + + + + + + + + + +An in-process TensorFlow server, for use in distributed training. + + + + + + + + + +A tf.distribute.Server instance encapsulates a set of devices and a +tf.compat.v1.Session target that +can participate in distributed training. A server belongs to a +cluster (specified by a tf.train.ClusterSpec), and +corresponds to a particular task in a named job. The server can +communicate with any other server in the same cluster. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`server_or_cluster_def` + +A tf.train.ServerDef or tf.train.ClusterDef +protocol buffer, or a tf.train.ClusterSpec object, describing the +server to be created and/or the cluster of which it is a member. +
+`job_name` + +(Optional.) Specifies the name of the job of which the server is +a member. Defaults to the value in `server_or_cluster_def`, if +specified. +
+`task_index` + +(Optional.) Specifies the task index of the server in its job. +Defaults to the value in `server_or_cluster_def`, if specified. +Otherwise defaults to 0 if the server's job has only one task. +
+`protocol` + +(Optional.) Specifies the protocol to be used by the server. +Acceptable values include `"grpc", "grpc+verbs"`. Defaults to the value +in `server_or_cluster_def`, if specified. Otherwise defaults to +`"grpc"`. +
+`config` + +(Optional.) A tf.compat.v1.ConfigProto that specifies default +configuration options for all sessions that run on this server. +
+`start` + +(Optional.) Boolean, indicating whether to start the server after +creating it. Defaults to `True`. +
+ + + + + + + + + + + + +
+`tf.errors.OpError` + +Or one of its subclasses if an error occurs while +creating the TensorFlow server. +
+ + + + + + + + + + + + + + + + + +
+`server_def` + +Returns the tf.train.ServerDef for this server. +
+`target` + +Returns the target for a tf.compat.v1.Session to connect to this server. + +To create a +tf.compat.v1.Session that +connects to this server, use the following snippet: + +```python +server = tf.distribute.Server(...) +with tf.compat.v1.Session(server.target): +# ... +``` +
+ + + +## Methods + +

create_local_server

+ +View source + + + +Creates a new single-process cluster running on the local host. + +This method is a convenience wrapper for creating a +tf.distribute.Server with a tf.train.ServerDef that specifies a +single-process cluster containing a single task in a job called +`"local"`. + + + + + + + + + + + + + +
Args
+`config` + +(Optional.) A tf.compat.v1.ConfigProto that specifies default +configuration options for all sessions that run on this server. +
+`start` + +(Optional.) Boolean, indicating whether to start the server after +creating it. Defaults to `True`. +
+ + + + + + + + + + + +
Returns
+A local tf.distribute.Server. +
+ + + +

join

+ +View source + + + +Blocks until the server has shut down. + +This method currently blocks forever. + + + + + + + + + + +
Raises
+`tf.errors.OpError` + +Or one of its subclasses if an error occurs while +joining the TensorFlow server. +
+ + + +

start

+ +View source + + + +Starts this server. + + + + + + + + + + + +
Raises
+`tf.errors.OpError` + +Or one of its subclasses if an error occurs while +starting the TensorFlow server. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/Strategy.md b/site/en/api_docs/python/tf/distribute/Strategy.md new file mode 100644 index 00000000000..8e00372962d --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/Strategy.md @@ -0,0 +1,1104 @@ +description: A state & compute distribution policy on a list of devices. + +
+ + + + + + + + + + + + + + +
+ +# tf.distribute.Strategy + + + + + + + + + +A state & compute distribution policy on a list of devices. + + + + + + + +See [the guide](https://www.tensorflow.org/guide/distributed_training) +for overview and examples. + +#### In short: + + + +* To use it with Keras `compile`/`fit`, + [please + read](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_keras). +* You may pass descendant of tf.distribute.Strategy to + tf.estimator.RunConfig to specify how a tf.estimator.Estimator + should distribute its computation. See + [guide](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_estimator_limited_support). +* Otherwise, use tf.distribute.Strategy.scope to specify that a + strategy should be used when building an executing your model. + (This puts you in the "cross-replica context" for this strategy, which + means the strategy is put in control of things like variable placement.) +* If you are writing a custom training loop, you will need to call a few more + methods, + [see the + guide](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_custom_training_loops): + + * Start by either creating a tf.data.Dataset normally or using + `tf.distribute.experimental_make_numpy_dataset` to make a dataset out of + a `numpy` array. + * Use tf.distribute.Strategy.experimental_distribute_dataset to convert + a tf.data.Dataset to something that produces "per-replica" values. + If you want to manually specify how the dataset should be partitioned + across replicas, use + tf.distribute.Strategy.experimental_distribute_datasets_from_function + instead. + * Use tf.distribute.Strategy.run to run a function + once per replica, taking values that may be "per-replica" (e.g. + from a distributed dataset) and returning "per-replica" values. + This function is executed in "replica context", which means each + operation is performed separately on each replica. + * Finally use a method (such as tf.distribute.Strategy.reduce) to + convert the resulting "per-replica" values into ordinary `Tensor`s. + +A custom training loop can be as simple as: + +``` +with my_strategy.scope(): + @tf.function + def distribute_train_epoch(dataset): + def replica_fn(input): + # process input and return result + return result + + total_result = 0 + for x in dataset: + per_replica_result = my_strategy.run(replica_fn, args=(x,)) + total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM, + per_replica_result, axis=None) + return total_result + + dist_dataset = my_strategy.experimental_distribute_dataset(dataset) + for _ in range(EPOCHS): + train_result = distribute_train_epoch(dist_dataset) +``` + +This takes an ordinary `dataset` and `replica_fn` and runs it +distributed using a particular tf.distribute.Strategy named +`my_strategy` above. Any variables created in `replica_fn` are created +using `my_strategy`'s policy, and library functions called by +`replica_fn` can use the `get_replica_context()` API to implement +distributed-specific behavior. + +You can use the `reduce` API to aggregate results across replicas and use +this as a return value from one iteration over the distributed dataset. Or +you can use tf.keras.metrics (such as loss, accuracy, etc.) to +accumulate metrics across steps in a given epoch. + +See the +[custom training loop +tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training) +for a more detailed example. 
+ +Note: tf.distribute.Strategy currently does not support TensorFlow's +partitioned variables (where a single variable is split across multiple +devices) at this time. + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_assign_to_logical_device

+ +View source + + + +Adds annotation that `tensor` will be assigned to a logical device. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be invoked on logical core device id `logical_device_id`. +When model parallelism is used, the default behavior is that all ops +are placed on the zeroth logical device. + +```python + +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + output = tf.add(inputs, inputs) + + # Add operation will be executed on logical device 0. + output = strategy.experimental_assign_to_logical_device(output, 0) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` + + + + + + + + + + + + +
Args
+`tensor` + +Input tensor to annotate. +
+`logical_device_id` + +Id of the logical core to which the tensor will be +assigned. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +The logical device id presented is not consistent with the total +number of partitions specified by the device assignment. +
+ + + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

experimental_distribute_dataset

+ +View source + + + +Distributes a tf.data.Dataset instance provided via `dataset`. + +The returned distributed dataset can be iterated over similar to how +regular datasets can. +NOTE: Currently, the user cannot add any more transformations to a +distributed dataset. + +The following is an example: + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + +We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). + +In a multi-worker setting, we will first attempt to distribute the dataset +by attempting to detect whether the dataset is being created out of +ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so, +attempting to shard the input files. Note that there has to be at least one +input file per worker. If you have less than one input file per worker, we +suggest that you should disable distributing your dataset using the method +below. + +If that attempt is unsuccessful (e.g. the dataset is created from a +Dataset.range), we will shard the dataset evenly at the end by appending a +`.shard` operation to the end of the processing pipeline. This will cause +the entire preprocessing pipeline for all the data to be run on every +worker, and each worker will do redundant work. We will print a warning +if this method of sharding is selected. + +You can disable dataset sharding across workers using the +`auto_shard_policy` option in tf.data.experimental.DistributeOptions. + +Within each worker, we will also split the data among all the worker +devices (if more than one a present), and this will happen even if +multi-worker sharding is disabled using the method above. + +If the above batch splitting and dataset sharding logic is undesirable, +please use `experimental_distribute_datasets_from_function` instead, which +does not do any automatic splitting or sharding. + +You can also use the `element_spec` property of the distributed dataset +returned by this API to query the tf.TypeSpec of the elements returned +by the iterator. This can be used to set the `input_signature` property +of a tf.function. + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +@tf.function(input_signature=[dist_dataset.element_spec]) +def train_step(inputs): + # train model with inputs + return + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. Each +replica on that worker will dequeue one batch of inputs from the local +`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued +from the `Dataset` every step). + +This method can be used for several purposes. For example, where +`experimental_distribute_dataset` is unable to shard the input files, this +method might be used to manually shard the dataset (avoiding the slow +fallback behavior in `experimental_distribute_dataset`). In cases where the +dataset is infinite, this sharding can be done by creating dataset replicas +that differ only in their random seed. +`experimental_distribute_dataset` may also sometimes fail to split the +batch across replicas on a worker. In that case, this method can be used +where that limitation does not exist. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + +To query the tf.TypeSpec of the elements in the distributed dataset +returned by this API, you need to use the `element_spec` property of the +distributed iterator. This tf.TypeSpec can be used to set the +`input_signature` property of a tf.function. + +```python +# If you want to specify `input_signature` for a `tf.function` you must +# first create the iterator. +iterator = iter(inputs) + +@tf.function(input_signature=[iterator.element_spec]) +def replica_fn_with_signature(inputs): + # train the model with inputs + return + +for _ in range(steps): + strategy.run(replica_fn_with_signature, + args=(next(iterator),)) +``` + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_values_from_function

+ +View source + + + +Generates tf.distribute.DistributedValues from `value_fn`. + +This function is to generate tf.distribute.DistributedValues to pass +into `run`, `reduce`, or other methods that take +distributed values when not using datasets. + + + + + + + + + + +
Args
+`value_fn` + +The function to run to generate values. It is called for +each replica with `tf.distribute.ValueContext` as the sole argument. It +must return a Tensor or a type that can be converted to a Tensor. +
+ + + + + + + + + + + +
Returns
+A tf.distribute.DistributedValues containing a value for each replica. +
+ + + +#### Example usage: + + + +1. Return constant value per replica: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return tf.constant(1.) +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,) +``` + +2. Distribute values in array based on replica_id: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> array_value = np.array([3., 2., 1.]) +>>> def value_fn(ctx): +... return array_value[ctx.replica_id_in_sync_group] +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(3.0,) +``` + +3. Specify values using num_replicas_in_sync: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return ctx.num_replicas_in_sync +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(1,) +``` + +4. Place values on devices and distribute: + +``` +strategy = tf.distribute.TPUStrategy() +worker_devices = strategy.extended.worker_devices +multiple_values = [] +for i in range(strategy.num_replicas_in_sync): + with tf.device(worker_devices[i]): + multiple_values.append(tf.constant(1.0)) + +def value_fn(ctx): + return multiple_values[ctx.replica_id_in_sync_group] + +distributed_values = ( + strategy.experimental_distribute_values_from_function( + value_fn)) +``` + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single +value, this returns `(value,)`. +
+ + + +
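+For illustration, a sketch with a default `MirroredStrategy` (on a machine
+with a single local device the returned tuple has one element):
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+@tf.function
+def step_fn():
+  return tf.constant(2.0)
+
+per_replica = strategy.run(step_fn)
+# One entry per local replica on this worker.
+local_values = strategy.experimental_local_results(per_replica)
+```
+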

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use `experimental_distribute_dataset` +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_replicate_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be replicated to all logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be invoked on all logical devices. + +```python +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + images, labels = inputs + images = strategy.experimental_split_to_logical_devices( + images, [1, 2, 4, 1]) + + # model() function will be executed on 8 logical devices with `images` + # split 2 * 4 ways. + output = model(images) + + # For loss calculation, all logical devices share the same logits + # and labels. + labels = strategy.experimental_replicate_to_logical_devices(labels) + output = strategy.experimental_replicate_to_logical_devices(output) + loss = loss_fn(labels, output) + + return loss + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

experimental_split_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be split across logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be split among multiple logical devices. Tensor `tensor` +will be split across dimensions specified by `partition_dimensions`. +The dimensions of `tensor` must be divisible by the corresponding values in +`partition_dimensions`. + +For example, for a system with 8 logical devices, if `tensor` is an image +tensor with shape (batch_size, width, height, channel) and +`partition_dimensions` is [1, 2, 4, 1], then `tensor` will be split +2 ways in the width dimension and 4 ways in the height dimension, and the +split tensor values will be fed into 8 logical devices. + +```python +# Initializing TPU system with 8 logical devices and 1 replica. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[2, 2, 2], + num_replicas=1) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + inputs = strategy.experimental_split_to_logical_devices( + inputs, [1, 2, 4, 1]) + + # model() function will be executed on 8 logical devices with `inputs` + # split 2 * 4 ways. + output = model(inputs) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + partition_dimensions: An unnested list of integers with size equal to the + rank of `tensor` specifying how `tensor` will be partitioned. The + product of all elements in `partition_dimensions` must be equal to the + total number of logical devices per replica. + + + + + + + + + + +
Raises
+`ValueError` + +1) If the size of `partition_dimensions` does not equal the rank +of `tensor` or 2) if the product of the elements of `partition_dimensions` does +not match the number of logical devices per replica defined by the +implementing DistributionStrategy's device specification or +3) if a known size of `tensor` is not divisible by the corresponding +value in `partition_dimensions`. +
+ + + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +
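+A minimal sketch of the two aggregation modes described above, assuming a
+default `MirroredStrategy`:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+@tf.function
+def step_fn():
+  # A per-replica "batch" of two examples.
+  return tf.constant([1.0, 2.0])
+
+per_replica = strategy.run(step_fn)
+# Aggregate only across replicas; the batch dimension is kept.
+across_replicas = strategy.reduce(
+    tf.distribute.ReduceOp.SUM, per_replica, axis=None)
+# Aggregate across replicas and the batch dimension; returns a scalar.
+across_batch = strategy.reduce(
+    tf.distribute.ReduceOp.SUM, per_replica, axis=0)
+```
+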

run

+ +View source + + + +Run `fn` on each replica, with the given arguments. + +Executes ops specified by `fn` on each replica. If `args` or `kwargs` have +tf.distribute.DistributedValues, such as those produced by a +"distributed `Dataset`" or `experimental_distribute_values_from_function` +when `fn` is executed on a particular replica, it will be executed with the +component of tf.distribute.DistributedValues that correspond to that +replica. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `all_reduce`. + +All arguments in `args` or `kwargs` should either be nest of tensors or +tf.distribute.DistributedValues containing tensors or composite tensors. + +IMPORTANT: Depending on the implementation of tf.distribute.Strategy and +whether eager execution is enabled, `fn` may be called one or more times ( +once for each replica). + +#### Example usage: + + + +1. Constant tensor input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> tensor_input = tf.constant(3.0) +>>> @tf.function +... def replica_fn(input): +... return input*2.0 +>>> result = strategy.run(replica_fn, args=(tensor_input,)) +>>> result + +``` + +2. DistributedValues input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> @tf.function +... def run(): +... def value_fn(value_context): +... return value_context.num_replicas_in_sync +... distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +... def replica_fn2(input): +... return input*2 +... return strategy.run(replica_fn2, args=(distributed_values,)) +>>> result = run() +>>> result + +``` + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be tf.distribute.DistributedValues, `Tensor` +objects, or `Tensor`s (for example, if running on a single replica). +
+ + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/StrategyExtended.md b/site/en/api_docs/python/tf/distribute/StrategyExtended.md new file mode 100644 index 00000000000..30dcf934dc1 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/StrategyExtended.md @@ -0,0 +1,767 @@ +description: Additional APIs for algorithms that need to be distribution-aware. + +
+ + + + + + + + + + + +
+ +# tf.distribute.StrategyExtended + + + + + + + + + +Additional APIs for algorithms that need to be distribution-aware. + + + + + + + +Note: For most usage of tf.distribute.Strategy, there should be no need to +call these methods, since TensorFlow libraries (such as optimizers) already +call these methods when needed on your behalf. + +Lower-level concepts: + +* Wrapped values: In order to represent values parallel across devices + (either replicas or the devices associated with a particular value), we + wrap them in a "PerReplica" or "Mirrored" object that contains a map + from replica id to values. "PerReplica" is used when the value may be + different across replicas, and "Mirrored" when the value are the same. +* Unwrapping and merging: Consider calling a function `fn` on multiple + replicas, like `run(fn, args=[w])` with an + argument `w` that is a wrapped value. This means `w` will have a map taking + replica id `0` to `w0`, replica id `11` to `w1`, etc. + `run()` unwraps `w` before calling `fn`, so + it calls `fn(w0)` on `d0`, `fn(w1)` on `d1`, etc. It then merges the return + values from `fn()`, which can possibly result in wrapped values. For + example, let's say `fn()` returns a tuple with three components: `(x, a, + v0)` from replica 0, `(x, b, v1)` on replica 1, etc. If the first component + is the same object `x` from every replica, then the first component of the + merged result will also be `x`. If the second component is different (`a`, + `b`, ...) from each replica, then the merged value will have a wrapped map + from replica device to the different values. If the third component is the + members of a mirrored variable (`v` maps `d0` to `v0`, `d1` to v1, etc.), + then the merged result will be that mirrored variable (`v`). +* Worker devices vs. parameter devices: Most replica computations will + happen on worker devices. Since we don't yet support model + parallelism, there will be one worker device per replica. When using + parameter servers or central storage, the set of devices holding + variables may be different, otherwise the parameter devices might + match the worker devices. + +*Replica context vs. Cross-replica context* + +A _replica context_ applies when we are in some function that is being called +once for each replica. Otherwise we are in cross-replica context, which is +useful for calling tf.distribute.Strategy methods which operate across the +replicas (like `reduce_to()`). By default you start in a replica context +(the "default single replica context") and then some methods can switch you +back and forth. There is a third mode you can be in called _update context_ +used when updating variables. + +* tf.distribute.Strategy.scope: enters cross-replica context when + no other strategy is in scope. +* tf.distribute.Strategy.run: calls a function in + replica context. +* tf.distribute.ReplicaContext.merge_call: transitions from replica + context to cross-replica context. +* tf.distribute.StrategyExtended.update: calls a function in an update + context from a cross-replica context. + +In a replica context, you may freely read the values of variables, but +you may only update their value if they specify a way to aggregate the +update using the `aggregation` parameter in the variable's constructor. +In a cross-replica context, you may read or write variables (writes may +need to be broadcast to all copies of the variable if it is mirrored). 
+ +*Sync on read variables* + +In some cases, such as a metric, we want to accumulate a bunch of updates on +each replica independently and only aggregate when reading. This can be a big +performance win when the value is read only rarely (maybe the value is only +read at the end of an epoch or when checkpointing). These are variables +created by passing `synchronization=ON_READ` to the variable's constructor +(and some value for `aggregation`). + +The strategy may choose to put the variable on multiple devices, like mirrored +variables, but unlike mirrored variables we don't synchronize the updates to +them to make sure they have the same value. Instead, the synchronization is +performed when reading in cross-replica context. In a replica context, reads +and writes are performed on the local copy (we allow reads so you can write +code like `v = 0.9*v + 0.1*update`). We don't allow operations like +`v.assign_add` in a cross-replica context for sync on read variables; right +now we don't have a use case for such updates and depending on the aggregation +mode such updates may not be sensible. + +*Locality* + +Depending on how a value is produced, it will have a type that will determine +how it may be used. + +"Per-replica" values exist on the worker devices, with a different value for +each replica. They are produced by iterating through a "distributed `Dataset`" +returned by tf.distribute.Strategy.experimental_distribute_dataset and +tf.distribute.Strategy.experimental_distribute_datasets_from_function. They +are also the typical result returned by +tf.distribute.Strategy.run. You typically can't use a +per-replica value directly in a cross-replica context, without first resolving +how to aggregate the values across replicas, for instance by using +tf.distribute.Strategy.reduce. + +"Mirrored" values are like per-replica values, except we know that the value +on all replicas are the same. We can safely read a mirrored value in a +cross-replica context by using the value on any replica. You can convert +a per-replica value into a mirrored value by using +tf.distribute.ReplicaContext.all_reduce. + +Values can also have the same locality as a variable, which is a mirrored +value but residing on the same devices as the variable (as opposed to the +compute devices). Such values may be passed to a call to +tf.distribute.StrategyExtended.update to update the value of a variable. +You may use tf.distribute.StrategyExtended.colocate_vars_with to give a +variable the same locality as another variable. This is useful, for example, +for "slot" variables used by an optimizer for keeping track of statistics +used to update a primary/model variable. You may convert a per-replica +value to a variable's locality by using +tf.distribute.StrategyExtended.reduce_to or +tf.distribute.StrategyExtended.batch_reduce_to. + +In addition to slot variables which should be colocated with their primary +variables, optimizers also define non-slot variables. These can be things like +"number of step updates performed" or "beta1^t" and "beta2^t". Each strategy +has some policy for which devices those variables should be copied too, called +the "non-slot devices" (some subset of the parameter devices). We require that +all non-slot variables are allocated on the same device, or mirrored across +the same set of devices. 
You can use +tf.distribute.StrategyExtended.non_slot_devices to pick a consistent set of +devices to pass to both tf.distribute.StrategyExtended.colocate_vars_with +and tf.distribute.StrategyExtended.update_non_slot. + +*How to update a variable* + +The standard pattern for updating variables is to: + +1. In your function passed to tf.distribute.Strategy.run, + compute a list of (update, variable) pairs. For example, the update might + be a the gradient of the loss with respect to the variable. +2. Switch to cross-replica mode by calling + `tf.distribute.get_replica_context().merge_call()` with the updates and + variables as arguments. +3. Call + tf.distribute.StrategyExtended.reduce_to(VariableAggregation.SUM, t, v) + (for one variable) or tf.distribute.StrategyExtended.batch_reduce_to + (for a list of variables) to sum the updates. + and broadcast the result to the variable's devices. +4. Call tf.distribute.StrategyExtended.update(v) for each variable to update + its value. + +Steps 2 through 4 are done automatically by class +tf.keras.optimizers.Optimizer if you call its +tf.keras.optimizers.Optimizer.apply_gradients method in a replica context. +They are also done automatically if you call an `assign*` method on a (non +sync-on-read) variable that was constructed with an aggregation method (which +is used to determine the reduction used in step 3). + +*Distribute-aware layers* + +Layers are generally called in a replica context, except when defining a +functional model. tf.distribute.in_cross_replica_context will let you +determine which case you are in. If in a replica context, +the tf.distribute.get_replica_context function will return a +tf.distribute.ReplicaContext object. The `ReplicaContext` object has an +`all_reduce` method for aggregating across all replicas. Alternatively, you +can update variables following steps 2-4 above. + +Note: For new tf.distribute.Strategy implementations, please put all logic +in a subclass of tf.distribute.StrategyExtended. The only code needed for +the tf.distribute.Strategy subclass is for instantiating your subclass of +tf.distribute.StrategyExtended in the `__init__` method. + + + + + + + + + + + + + + + + + + +
+`experimental_require_static_shapes` + +Returns `True` if static shape is required; `False` otherwise. +
+`parameter_devices` + +Returns the tuple of all devices used to place variables. +
+`worker_devices` + +Returns the tuple of all devices used for compute replica execution. +
+ + + +## Methods + +

batch_reduce_to

+ +View source + + + +Combine multiple `reduce_to` calls into one for faster execution. + + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +Reduction type, an instance of tf.distribute.ReduceOp enum. +
+`value_destination_pairs` + +A sequence of (value, destinations) pairs. See +`reduce_to()` for a description. +
+`experimental_hints` + +A `tf.distribute.experimental.CollectiveHints`. Hints +to perform collective operations. +
+ + + + + + + + + + + +
Returns
+A list of mirrored values, one per pair in `value_destination_pairs`. +
+ + + +
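+
+For illustration, a minimal sketch of folding two reductions into a single
+call (the `MirroredStrategy`, the variables, and the constant "updates" below
+are placeholders standing in for real per-replica values):
+
+```
+import tensorflow as tf
+
+strategy = tf.distribute.MirroredStrategy()
+with strategy.scope():
+  v1 = tf.Variable(0.0)
+  v2 = tf.Variable(0.0)
+
+# Per-replica values produced by running a trivial computation on every
+# replica; in practice these would be gradients or other updates.
+updates = strategy.run(lambda: (tf.constant(1.0), tf.constant(2.0)))
+
+# One cross-replica call instead of two separate `reduce_to` calls.
+reduced = strategy.extended.batch_reduce_to(
+    tf.distribute.ReduceOp.SUM,
+    [(updates[0], v1), (updates[1], v2)])
+```
+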

colocate_vars_with

+ +View source + + + +Scope that controls which devices variables will be created on. + +No operations should be added to the graph inside this scope, it +should only be used when creating variables (some implementations +work by changing variable creation, others work by using a +tf.compat.v1.colocate_with() scope). + +This may only be used inside `self.scope()`. + +#### Example usage: + + + +``` +with strategy.scope(): + var1 = tf.Variable(...) + with strategy.extended.colocate_vars_with(var1): + # var2 and var3 will be created on the same device(s) as var1 + var2 = tf.Variable(...) + var3 = tf.Variable(...) + + def fn(v1, v2, v3): + # operates on v1 from var1, v2 from var2, and v3 from var3 + + # `fn` runs on every device `var1` is on, `var2` and `var3` will be there + # too. + strategy.extended.update(var1, fn, args=(var2, var3)) +``` + + + + + + + + + + +
Args
+`colocate_with_variable` + +A variable created in this strategy's `scope()`. +Variables created while in the returned context manager will be on the +same set of devices as `colocate_with_variable`. +
+ + + + + + + + + + + +
Returns
+A context manager. +
+ + + +

non_slot_devices

+ +View source + + + +Device(s) for non-slot variables. + +Create variables on these devices in a +`with colocate_vars_with(non_slot_devices(...)):` block. +Update those using `update_non_slot()`. + + + + + + + + + + +
Args
+`var_list` + +The list of variables being optimized, needed with the +default tf.distribute.Strategy. +
+ + + + + + + + + + + +
Returns
+A sequence of devices for non-slot variables. +
+ + + +

reduce_to

+ +View source + + + +Combine (via e.g. sum or mean) values across replicas. + + + + + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +Reduction type, an instance of tf.distribute.ReduceOp enum. +
+`value` + +A per-replica value with one value per replica. +
+`destinations` + +A mirrored variable, a per-replica tensor, or a device +string. The return value will be copied to all destination devices (or +all the devices where the `destinations` value resides). To perform an +all-reduction, pass `value` to `destinations`. +
+`experimental_hints`
+
+A `tf.distribute.experimental.CollectiveHints`. Hints
+to perform collective operations.
+
+ + + + + + + + + + + +
Returns
+A tensor or value mirrored to `destinations`. +
+ + + +
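+
+For illustration, a minimal `reduce_to` sketch (the `MirroredStrategy`, the
+variable, and the constant per-replica value are placeholders):
+
+```
+import tensorflow as tf
+
+strategy = tf.distribute.MirroredStrategy()
+with strategy.scope():
+  v = tf.Variable(0.0)
+
+# A per-replica value produced by running a computation on every replica.
+per_replica = strategy.run(lambda: tf.constant(3.0))
+
+# Sum across replicas and place the result on `v`'s devices.
+summed = strategy.extended.reduce_to(
+    tf.distribute.ReduceOp.SUM, per_replica, destinations=v)
+```
+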

update

+ +View source + + + +Run `fn` to update `var` using inputs mirrored to the same devices. + +If `var` is mirrored across multiple devices, then this implements +logic like: + +``` +results = {} +for device, v in var: + with tf.device(device): + # args and kwargs will be unwrapped if they are mirrored. + results[device] = fn(v, *args, **kwargs) +return merged(results) +``` + +Otherwise this returns `fn(var, *args, **kwargs)` colocated with `var`. + +Neither `args` nor `kwargs` may contain per-replica values. +If they contain mirrored values, they will be unwrapped before +calling `fn`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`var` + +Variable, possibly mirrored to multiple devices, to operate on. +
+`fn` + +Function to call. Should take the variable as the first argument. +
+`args` + +Tuple or list. Additional positional arguments to pass to `fn()`. +
+`kwargs` + +Dict with keyword arguments to pass to `fn()`. +
+`group` + +Boolean. Defaults to True. If False, the return value will be +unwrapped. +
+ + + + + + + + + + + +
Returns
+By default, the merged return value of `fn` across all replicas. The +merged result has dependencies to make sure that if it is evaluated at +all, the side effects (updates) will happen on every replica. If instead +"group=False" is specified, this function will return a nest of lists +where each list has an element per replica, and the caller is responsible +for ensuring all elements are executed. +
+ + + +
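+
+For illustration, a minimal sketch of the reduce-then-update pattern from the
+"How to update a variable" section above (the strategy, the variable, and the
+constant delta are placeholders):
+
+```
+import tensorflow as tf
+
+strategy = tf.distribute.MirroredStrategy()
+with strategy.scope():
+  v = tf.Variable(1.0)
+
+# Average a per-replica delta onto `v`'s devices, then apply it to each copy.
+delta = strategy.run(lambda: tf.constant(0.5))
+mirrored_delta = strategy.extended.reduce_to(
+    tf.distribute.ReduceOp.MEAN, delta, destinations=v)
+
+def assign_fn(var, value):
+  return var.assign_add(value)
+
+strategy.extended.update(v, assign_fn, args=(mirrored_delta,))
+```
+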

update_non_slot

+ +View source + + + +Runs `fn(*args, **kwargs)` on `colocate_with` devices. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`colocate_with` + +The return value of `non_slot_devices()`. +
+`fn` + +Function to execute. +
+`args` + +Tuple or list. Positional arguments to pass to `fn()`. +
+`kwargs` + +Dict with keyword arguments to pass to `fn()`. +
+`group` + +Boolean. Defaults to True. If False, the return value will be +unwrapped. +
+ + + + + + + + + + + +
Returns
+Return value of `fn`, possibly merged across devices. +
+ + + +

value_container

+ +View source + + + +Returns the container that this per-replica `value` belongs to. + + + + + + + + + + + +
Args
+`value` + +A value returned by `run()` or a variable created in `scope()`. +
+ + + + + + + + + + + +
Returns
+A container that `value` belongs to. +If value does not belong to any container (including the case of +container having been destroyed), returns the value itself. +`value in experimental_local_results(value_container(value))` will +always be true. +
+ + + +

variable_created_in_scope

+
+View source
+
+
+
+Tests whether `v` was created while this strategy scope was active.
+
+Variables created inside the strategy scope are "owned" by it:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+with strategy.scope():
+  v = tf.Variable(1.)
+strategy.extended.variable_created_in_scope(v)
+True
+```
+
+Variables created outside the strategy are not owned by it:
+
+```python
+v = tf.Variable(1.)
+strategy.extended.variable_created_in_scope(v)
+False
+```
+
+
Args
+`v` + +A tf.Variable instance. +
+ + + + + + + + + + + +
Returns
+True if `v` was created inside the scope, False if not. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/cluster_resolver.md b/site/en/api_docs/python/tf/distribute/cluster_resolver.md new file mode 100644 index 00000000000..885554dc9ba --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/cluster_resolver.md @@ -0,0 +1,44 @@ +description: Library imports for ClusterResolvers. + +
+ + +
+ +# Module: tf.distribute.cluster_resolver + + + + + + + + + +Library imports for ClusterResolvers. + + +This library contains all implementations of ClusterResolvers. +ClusterResolvers are a way of specifying cluster information for distributed +execution. Built on top of existing `ClusterSpec` framework, ClusterResolvers +are a way for TensorFlow to communicate with various cluster management +systems (e.g. GCE, AWS, etc...). + +## Classes + +[`class ClusterResolver`](../../tf/distribute/cluster_resolver/ClusterResolver.md): Abstract class for all implementations of ClusterResolvers. + +[`class GCEClusterResolver`](../../tf/distribute/cluster_resolver/GCEClusterResolver.md): ClusterResolver for Google Compute Engine. + +[`class KubernetesClusterResolver`](../../tf/distribute/cluster_resolver/KubernetesClusterResolver.md): ClusterResolver for Kubernetes. + +[`class SimpleClusterResolver`](../../tf/distribute/cluster_resolver/SimpleClusterResolver.md): Simple implementation of ClusterResolver that accepts a ClusterSpec. + +[`class SlurmClusterResolver`](../../tf/distribute/cluster_resolver/SlurmClusterResolver.md): ClusterResolver for system with Slurm workload manager. + +[`class TFConfigClusterResolver`](../../tf/distribute/cluster_resolver/TFConfigClusterResolver.md): Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar. + +[`class TPUClusterResolver`](../../tf/distribute/cluster_resolver/TPUClusterResolver.md): Cluster Resolver for Google Cloud TPUs. + +[`class UnionResolver`](../../tf/distribute/cluster_resolver/UnionResolver.md): Performs a union on underlying ClusterResolvers. + diff --git a/site/en/api_docs/python/tf/distribute/cluster_resolver/ClusterResolver.md b/site/en/api_docs/python/tf/distribute/cluster_resolver/ClusterResolver.md new file mode 100644 index 00000000000..8b18d22cfb3 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/cluster_resolver/ClusterResolver.md @@ -0,0 +1,257 @@ +description: Abstract class for all implementations of ClusterResolvers. + +
+ + + + + +
+ +# tf.distribute.cluster_resolver.ClusterResolver + + + + + + + + + +Abstract class for all implementations of ClusterResolvers. + + + + + +This defines the skeleton for all implementations of ClusterResolvers. +ClusterResolvers are a way for TensorFlow to communicate with various cluster +management systems (e.g. GCE, AWS, etc...). + +By letting TensorFlow communicate with these systems, we will be able to +automatically discover and resolve IP addresses for various TensorFlow +workers. This will eventually allow us to automatically recover from +underlying machine failures and scale TensorFlow worker clusters up and down. + +Note to Implementors: In addition to these abstract methods, you must also +implement the task_type, task_id, and rpc_layer attributes. You may choose +to implement them either as properties with getters or setters or directly +set the attributes. + +- task_type is the name of the server's current named job (e.g. 'worker', + 'ps' in a distributed parameterized training job). +- task_id is the ordinal index of the server within the task type. +- rpc_layer is the protocol used by TensorFlow to communicate with other + TensorFlow servers in a distributed environment. + + + + + + + + + + + + +
+`environment` + +Returns the current environment which TensorFlow is running in. + +There are two possible return values, "google" (when TensorFlow is running +in a Google-internal environment) or an empty string (when TensorFlow is +running elsewhere). + +If you are implementing a ClusterResolver that works in both the Google +environment and the open-source world (for instance, a TPU ClusterResolver +or similar), you will have to return the appropriate string depending on the +environment, which you will have to detect. + +Otherwise, if you are implementing a ClusterResolver that will only work +in open-source TensorFlow, you do not need to implement this property. +
+ + + +## Methods + +

cluster_spec

+ +View source + + + +Retrieve the current state of the cluster and return a ClusterSpec. + + + + + + + + + + +
Returns
+A ClusterSpec representing the state of the cluster at the moment this +function is called. +
+ + +Implementors of this function must take care in ensuring that the +ClusterSpec returned is up-to-date at the time of calling this function. +This usually means retrieving the information from the underlying cluster +management system every time this function is invoked and reconstructing +a cluster_spec, rather than attempting to cache anything. + +

master

+ +View source + + + +Retrieves the name or URL of the session master. + + + + + + + + + + + + + + + + + +
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the master. +
+`task_id` + +(Optional) The index of the TensorFlow task of the master. +
+`rpc_layer` + +(Optional) The RPC protocol for the given cluster. +
+ + + + + + + + + + + +
Returns
+The name or URL of the session master. +
+
+
+Implementors of this function must take care in ensuring that the master
+returned is up-to-date at the time of calling this function. This usually
+means retrieving the master every time this function is invoked.
+

num_accelerators

+
+View source
+
+
+
+Returns the number of accelerator cores per worker.
+
+This returns the number of accelerator cores (such as GPUs and TPUs)
+available per worker.
+
+Optionally, callers may specify the task_type and task_id if they want to
+target a specific TensorFlow process when querying the number of
+accelerators. This is to support heterogeneous environments, where the
+number of accelerator cores per host is different.
+
+
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the machine we +want to query. +
+`task_id` + +(Optional) The index of the TensorFlow task of the machine we +want to query. +
+`config_proto` + +(Optional) Configuration for starting a new session to +query how many accelerator cores it has. +
+ + + + + + + + + + + +
Returns
+A map of accelerator types to number of cores. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/cluster_resolver/GCEClusterResolver.md b/site/en/api_docs/python/tf/distribute/cluster_resolver/GCEClusterResolver.md new file mode 100644 index 00000000000..be3a2b08c21 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/cluster_resolver/GCEClusterResolver.md @@ -0,0 +1,369 @@ +description: ClusterResolver for Google Compute Engine. + +
+ + + + + + +
+ +# tf.distribute.cluster_resolver.GCEClusterResolver + + + + + + + + + +ClusterResolver for Google Compute Engine. + +Inherits From: [`ClusterResolver`](../../../tf/distribute/cluster_resolver/ClusterResolver.md) + + + + + + + + + +This is an implementation of cluster resolvers for the Google Compute Engine +instance group platform. By specifying a project, zone, and instance group, +this will retrieve the IP address of all the instances within the instance +group and return a ClusterResolver object suitable for use for distributed +TensorFlow. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`project` + +Name of the GCE project. +
+`zone` + +Zone of the GCE instance group. +
+`instance_group` + +Name of the GCE instance group. +
+`port` + +Port of the listening TensorFlow server (default: 8470) +
+`task_type` + +Name of the TensorFlow job this GCE instance group of VM +instances belong to. +
+`task_id` + +The task index for this particular VM, within the GCE +instance group. In particular, every single instance should be assigned +a unique ordinal index within an instance group manually so that they +can be distinguished from each other. +
+`rpc_layer` + +The RPC layer TensorFlow should use to communicate across +instances. +
+`credentials` + +GCE Credentials. If nothing is specified, this defaults to +GoogleCredentials.get_application_default(). +
+`service` + +The GCE API object returned by the googleapiclient.discovery +function. (Default: discovery.build('compute', 'v1')). If you specify a +custom service object, then the credentials parameter will be ignored. +
+ + + + + + + + + + + + +
+`ImportError` + +If the googleapiclient is not installed. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`environment` + +Returns the current environment which TensorFlow is running in. + +There are two possible return values, "google" (when TensorFlow is running +in a Google-internal environment) or an empty string (when TensorFlow is +running elsewhere). + +If you are implementing a ClusterResolver that works in both the Google +environment and the open-source world (for instance, a TPU ClusterResolver +or similar), you will have to return the appropriate string depending on the +environment, which you will have to detect. + +Otherwise, if you are implementing a ClusterResolver that will only work +in open-source TensorFlow, you do not need to implement this property. +
+`rpc_layer` + + +
+`task_id` + + +
+`task_type` + + +
+ + + +## Methods + +

cluster_spec

+ +View source + + + +Returns a ClusterSpec object based on the latest instance group info. + +This returns a ClusterSpec object for use based on information from the +specified instance group. We will retrieve the information from the GCE APIs +every time this method is called. + + + + + + + + + +
Returns
+A ClusterSpec containing host information retrieved from GCE. +
+ + + +
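+
+For illustration, a rough sketch of constructing the resolver and resolving
+the cluster (the project, zone, and instance-group names are placeholders;
+this requires the `googleapiclient` package and GCE credentials to actually
+resolve anything):
+
+```
+import tensorflow as tf
+
+gce_resolver = tf.distribute.cluster_resolver.GCEClusterResolver(
+    project="my-project",
+    zone="us-east1-d",
+    instance_group="my-worker-group",
+    port=8470,
+    task_type="worker",
+    task_id=0)
+cluster_spec = gce_resolver.cluster_spec()
+```
+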

master

+ +View source + + + +Retrieves the name or URL of the session master. + + + + + + + + + + + + + + + + + +
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the master. +
+`task_id` + +(Optional) The index of the TensorFlow task of the master. +
+`rpc_layer` + +(Optional) The RPC protocol for the given cluster. +
+ + + + + + + + + + + +
Returns
+The name or URL of the session master. +
+
+
+Implementors of this function must take care in ensuring that the master
+returned is up-to-date at the time of calling this function. This usually
+means retrieving the master every time this function is invoked.
+

num_accelerators

+
+View source
+
+
+
+Returns the number of accelerator cores per worker.
+
+This returns the number of accelerator cores (such as GPUs and TPUs)
+available per worker.
+
+Optionally, callers may specify the task_type and task_id if they want to
+target a specific TensorFlow process when querying the number of
+accelerators. This is to support heterogeneous environments, where the
+number of accelerator cores per host is different.
+
+
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the machine we +want to query. +
+`task_id` + +(Optional) The index of the TensorFlow task of the machine we +want to query. +
+`config_proto` + +(Optional) Configuration for starting a new session to +query how many accelerator cores it has. +
+ + + + + + + + + + + +
Returns
+A map of accelerator types to number of cores. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/cluster_resolver/KubernetesClusterResolver.md b/site/en/api_docs/python/tf/distribute/cluster_resolver/KubernetesClusterResolver.md new file mode 100644 index 00000000000..51e5858d075 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/cluster_resolver/KubernetesClusterResolver.md @@ -0,0 +1,340 @@ +description: ClusterResolver for Kubernetes. + +
+ + + + + + +
+
+# tf.distribute.cluster_resolver.KubernetesClusterResolver
+
+
+
+
+
+ClusterResolver for Kubernetes.
+
+Inherits From: [`ClusterResolver`](../../../tf/distribute/cluster_resolver/ClusterResolver.md)
+
+
+
+
+
+This is an implementation of cluster resolvers for Kubernetes. When given
+the Kubernetes namespace and label selector for pods, we will retrieve the
+pod IP addresses of all running pods matching the selector, and return a
+ClusterSpec based on that information.
+
+
+`job_to_label_mapping` + +A mapping of TensorFlow jobs to label selectors. +This allows users to specify many TensorFlow jobs in one Cluster +Resolver, and each job can have pods belong with different label +selectors. For example, a sample mapping might be +``` +{'worker': ['job-name=worker-cluster-a', 'job-name=worker-cluster-b'], +'ps': ['job-name=ps-1', 'job-name=ps-2']} +``` +
+`tf_server_port` + +The port the TensorFlow server is listening on. +
+`rpc_layer` + +(Optional) The RPC layer TensorFlow should use to communicate +between tasks in Kubernetes. Defaults to 'grpc'. +
+`override_client` + +The Kubernetes client (usually automatically retrieved +using `from kubernetes import client as k8sclient`). If you pass this +in, you are responsible for setting Kubernetes credentials manually. +
+ + + + + + + + + + + + + + + +
+`ImportError` + +If the Kubernetes Python client is not installed and no +`override_client` is passed in. +
+`RuntimeError` + +If autoresolve_task is not a boolean or a callable. +
+ + + + + + + + + + + + + + +
+`environment` + +Returns the current environment which TensorFlow is running in. + +There are two possible return values, "google" (when TensorFlow is running +in a Google-internal environment) or an empty string (when TensorFlow is +running elsewhere). + +If you are implementing a ClusterResolver that works in both the Google +environment and the open-source world (for instance, a TPU ClusterResolver +or similar), you will have to return the appropriate string depending on the +environment, which you will have to detect. + +Otherwise, if you are implementing a ClusterResolver that will only work +in open-source TensorFlow, you do not need to implement this property. +
+ + + +## Methods + +

cluster_spec

+ +View source + + + +Returns a ClusterSpec object based on the latest info from Kubernetes. + +We retrieve the information from the Kubernetes master every time this +method is called. + + + + + + + + + +
Returns
+A ClusterSpec containing host information returned from Kubernetes. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If any of the pods returned by the master is not in the +`Running` phase. +
+ + + +
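+
+For illustration, a rough sketch (the label selectors are placeholders; this
+requires the `kubernetes` Python client and cluster credentials to actually
+resolve pods):
+
+```
+import tensorflow as tf
+
+k8s_resolver = tf.distribute.cluster_resolver.KubernetesClusterResolver(
+    job_to_label_mapping={
+        "worker": ["job-name=worker-cluster-a", "job-name=worker-cluster-b"],
+        "ps": ["job-name=ps-1"],
+    },
+    tf_server_port=8470)
+cluster_spec = k8s_resolver.cluster_spec()
+```
+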

master

+ +View source + + + +Returns the master address to use when creating a session. + +You must have set the task_type and task_id object properties before +calling this function, or pass in the `task_type` and `task_id` +parameters when using this function. If you do both, the function parameters +will override the object properties. + + + + + + + + + + + + + + + + +
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the master. +
+`task_id` + +(Optional) The index of the TensorFlow task of the master. +
+`rpc_layer` + +(Optional) The RPC protocol for the given cluster. +
+ + + + + + + + + + + +
Returns
+The name or URL of the session master. +
+ + + +

num_accelerators

+
+View source
+
+
+
+Returns the number of accelerator cores per worker.
+
+This returns the number of accelerator cores (such as GPUs and TPUs)
+available per worker.
+
+Optionally, callers may specify the task_type and task_id if they want to
+target a specific TensorFlow process when querying the number of
+accelerators. This is to support heterogeneous environments, where the
+number of accelerator cores per host is different.
+
+
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the machine we +want to query. +
+`task_id` + +(Optional) The index of the TensorFlow task of the machine we +want to query. +
+`config_proto` + +(Optional) Configuration for starting a new session to +query how many accelerator cores it has. +
+ + + + + + + + + + + +
Returns
+A map of accelerator types to number of cores. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/cluster_resolver/SimpleClusterResolver.md b/site/en/api_docs/python/tf/distribute/cluster_resolver/SimpleClusterResolver.md new file mode 100644 index 00000000000..d5f85e5d5b1 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/cluster_resolver/SimpleClusterResolver.md @@ -0,0 +1,228 @@ +description: Simple implementation of ClusterResolver that accepts a ClusterSpec. + +
+ + + + + + +
+ +# tf.distribute.cluster_resolver.SimpleClusterResolver + + + + + + + + + +Simple implementation of ClusterResolver that accepts a ClusterSpec. + +Inherits From: [`ClusterResolver`](../../../tf/distribute/cluster_resolver/ClusterResolver.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`environment` + +Returns the current environment which TensorFlow is running in. + +There are two possible return values, "google" (when TensorFlow is running +in a Google-internal environment) or an empty string (when TensorFlow is +running elsewhere). + +If you are implementing a ClusterResolver that works in both the Google +environment and the open-source world (for instance, a TPU ClusterResolver +or similar), you will have to return the appropriate string depending on the +environment, which you will have to detect. + +Otherwise, if you are implementing a ClusterResolver that will only work +in open-source TensorFlow, you do not need to implement this property. +
+`rpc_layer` + + +
+`task_id` + + +
+`task_type` + + +
+ + + +## Methods + +

cluster_spec

+ +View source + + + +Returns the ClusterSpec passed into the constructor. + + +
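+
+For illustration, a minimal sketch (the hostnames are placeholders):
+
+```
+import tensorflow as tf
+
+cluster = tf.train.ClusterSpec({
+    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
+    "ps": ["ps0.example.com:2222"],
+})
+simple_resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
+    cluster, task_type="worker", task_id=0, rpc_layer="grpc")
+# cluster_spec() hands back the ClusterSpec given to the constructor.
+print(simple_resolver.cluster_spec().as_dict())
+```
+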

master

+ +View source + + + +Returns the master address to use when creating a session. + + + + + + + + + + + + + + + + + +
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the master. +
+`task_id` + +(Optional) The index of the TensorFlow task of the master. +
+`rpc_layer` + +(Optional) The RPC used by distributed TensorFlow. +
+ + + + + + + + + + + +
Returns
+The name or URL of the session master. +
+
+
+If a task_type and task_id are given, this will override the `master`
+string passed into the initialization function.
+

num_accelerators

+
+View source
+
+
+
+Returns the number of accelerator cores per worker.
+
+The SimpleClusterResolver does not do automatic detection of accelerators,
+so a TensorFlow session will never be created, and thus all arguments are
+unused and we simply assume that the type of accelerator is a GPU and return
+the value provided to us in the constructor.
+
+
Args
+`task_type` + +Unused. +
+`task_id` + +Unused. +
+`config_proto` + +Unused. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/cluster_resolver/SlurmClusterResolver.md b/site/en/api_docs/python/tf/distribute/cluster_resolver/SlurmClusterResolver.md new file mode 100644 index 00000000000..36c20c5a9fe --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/cluster_resolver/SlurmClusterResolver.md @@ -0,0 +1,372 @@ +description: ClusterResolver for system with Slurm workload manager. + +
+ + + + + + + +
+ +# tf.distribute.cluster_resolver.SlurmClusterResolver + + + + + + + + + +ClusterResolver for system with Slurm workload manager. + +Inherits From: [`ClusterResolver`](../../../tf/distribute/cluster_resolver/ClusterResolver.md) + + + + + + + + + +This is an implementation of ClusterResolver for Slurm clusters. This allows +the specification of jobs and task counts, number of tasks per node, number +of GPUs on each node and number of GPUs for each task. It retrieves system +attributes by Slurm environment variables, resolves allocated computing node +names, constructs a cluster and returns a ClusterResolver object which can be +used for distributed TensorFlow. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`jobs` + +Dictionary with job names as key and number of tasks in the job as +value. Defaults to as many 'worker's as there are (Slurm) tasks. +
+`port_base` + +The first port number to start with for processes on a node. +
+`gpus_per_node` + +Number of GPUs available on each node. Defaults to the +number of GPUs reported by nvidia-smi +
+`gpus_per_task` + +Number of GPUs to be used for each task. Default is to +evenly distribute the gpus_per_node to tasks_per_node. +
+`tasks_per_node` + +Number of tasks running on each node. Can be an integer if +the number of tasks per node is constant or a dictionary mapping +hostnames to number of tasks on that node. If not set the Slurm +environment is queried for the correct mapping. +
+`auto_set_gpu` + +Set the visible CUDA devices automatically while resolving +the cluster by setting CUDA_VISIBLE_DEVICES environment variable. +Defaults to True. +
+`rpc_layer` + +The protocol TensorFlow used to communicate between nodes. +Defaults to 'grpc'. +
+ + + + + + + + + + + + +
+`RuntimeError`
+
+If more GPUs per node are requested than are available, or
+more tasks are requested than the number of assigned tasks, or
+resolving missing values from the environment failed.
+
+ + + + + + + + + + + + + + +
+`environment` + +Returns the current environment which TensorFlow is running in. + +There are two possible return values, "google" (when TensorFlow is running +in a Google-internal environment) or an empty string (when TensorFlow is +running elsewhere). + +If you are implementing a ClusterResolver that works in both the Google +environment and the open-source world (for instance, a TPU ClusterResolver +or similar), you will have to return the appropriate string depending on the +environment, which you will have to detect. + +Otherwise, if you are implementing a ClusterResolver that will only work +in open-source TensorFlow, you do not need to implement this property. +
+ + + +## Methods + +

cluster_spec

+
+View source
+
+
+
+Returns a ClusterSpec object based on the latest Slurm job info.
+
+This returns a ClusterSpec object for use based on information from the
+specified initialization parameters and Slurm environment variables. The
+cluster specification is resolved each time this function is called. The
+resolver extracts the hostnames of the allocated nodes via scontrol and
+packs tasks onto them in that order until each node holds the number of
+tasks given by the specification. GPUs on nodes are allocated to tasks per
+the specification by setting the CUDA_VISIBLE_DEVICES environment variable.
+
+
Returns
+A ClusterSpec containing host information retrieved from Slurm's +environment variables. +
+ + + +
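+
+For illustration, a rough sketch (only meaningful inside a Slurm allocation,
+since the resolver reads SLURM_* environment variables; the job layout below
+is a placeholder):
+
+```
+import tensorflow as tf
+
+slurm_resolver = tf.distribute.cluster_resolver.SlurmClusterResolver(
+    jobs={"ps": 1, "worker": 4},
+    port_base=8888,
+    tasks_per_node=2,
+    gpus_per_node=2,
+    gpus_per_task=1)
+cluster_spec = slurm_resolver.cluster_spec()
+```
+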

get_task_info

+
+View source
+
+
+
+Returns the job name and task_id for the process which calls this.
+
+This returns the job name and task index for the process which calls this
+function according to its rank and cluster specification. The job name and
+task index are set after a cluster is constructed by `cluster_spec`;
+otherwise they default to None.
+
+
Returns
+A string specifying job name the process belongs to and an integer +specifying the task index the process belongs to in that job. +
+ + + +

master

+ +View source + + + +Returns the master string for connecting to a TensorFlow master. + + + + + + + + + + + + + + + + + +
Args
+`task_type` + +(Optional) Overrides the default auto-selected task type. +
+`task_id` + +(Optional) Overrides the default auto-selected task index. +
+`rpc_layer` + +(Optional) Overrides the default RPC protocol TensorFlow uses +to communicate across nodes. +
+ + + + + + + + + + + +
Returns
+A connection string for connecting to a TensorFlow master. +
+ + + +

num_accelerators

+
+View source
+
+
+
+Returns the number of accelerator cores per worker.
+
+This returns the number of accelerator cores (such as GPUs and TPUs)
+available per worker.
+
+Optionally, callers may specify the task_type and task_id if they want to
+target a specific TensorFlow process when querying the number of
+accelerators. This is to support heterogeneous environments, where the
+number of accelerator cores per host is different.
+
+
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the machine we +want to query. +
+`task_id` + +(Optional) The index of the TensorFlow task of the machine we +want to query. +
+`config_proto` + +(Optional) Configuration for starting a new session to +query how many accelerator cores it has. +
+ + + + + + + + + + + +
Returns
+A map of accelerator types to number of cores. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/cluster_resolver/TFConfigClusterResolver.md b/site/en/api_docs/python/tf/distribute/cluster_resolver/TFConfigClusterResolver.md new file mode 100644 index 00000000000..4598d32c7fd --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/cluster_resolver/TFConfigClusterResolver.md @@ -0,0 +1,324 @@ +description: Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar. + +
+ + + + + + +
+ +# tf.distribute.cluster_resolver.TFConfigClusterResolver + + + + + + + + + +Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar. + +Inherits From: [`ClusterResolver`](../../../tf/distribute/cluster_resolver/ClusterResolver.md) + + + + + + + + + +This is an implementation of cluster resolvers when using TF_CONFIG to set +information about the cluster. The cluster spec returned will be +initialized from the TF_CONFIG environment variable. + + + + + + + + + + + + + + + + + + + +
+`task_type` + +(String, optional) Overrides the task type specified in the +TF_CONFIG environment variable. +
+`task_id` + +(Integer, optional) Overrides the task index specified in the +TF_CONFIG environment variable. +
+`rpc_layer` + +(String, optional) Overrides the rpc layer TensorFlow uses. +
+`environment` + +(String, optional) Overrides the environment TensorFlow +operates in. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`environment` + +Returns the current environment which TensorFlow is running in. + +There are two possible return values, "google" (when TensorFlow is running +in a Google-internal environment) or an empty string (when TensorFlow is +running elsewhere). + +If you are implementing a ClusterResolver that works in both the Google +environment and the open-source world (for instance, a TPU ClusterResolver +or similar), you will have to return the appropriate string depending on the +environment, which you will have to detect. + +Otherwise, if you are implementing a ClusterResolver that will only work +in open-source TensorFlow, you do not need to implement this property. +
+`rpc_layer` + + +
+`task_id` + + +
+`task_type` + + +
+ + + +## Methods + +

cluster_spec

+ +View source + + + +Returns a ClusterSpec based on the TF_CONFIG environment variable. + + + + + + + + + + +
Returns
+A ClusterSpec with information from the TF_CONFIG environment variable. +
+ + + +
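+
+For illustration, a minimal sketch (the TF_CONFIG value below is a
+placeholder; in practice it is set by whatever launches the training job):
+
+```
+import json
+import os
+
+import tensorflow as tf
+
+os.environ["TF_CONFIG"] = json.dumps({
+    "cluster": {"worker": ["host1:2222", "host2:2222"]},
+    "task": {"type": "worker", "index": 0},
+})
+resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
+print(resolver.cluster_spec().as_dict())
+print(resolver.task_type, resolver.task_id)  # worker 0
+```
+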

master

+ +View source + + + +Returns the master address to use when creating a TensorFlow session. + + + + + + + + + + + + + + + + + +
Args
+`task_type` + +(String, optional) Overrides and sets the task_type of the +master. +
+`task_id` + +(Integer, optional) Overrides and sets the task id of the +master. +
+`rpc_layer` + +(String, optional) Overrides and sets the protocol over which +TensorFlow nodes communicate with each other. +
+ + + + + + + + + + + +
Returns
+The address of the master. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If the task_type or task_id is not specified and the +`TF_CONFIG` environment variable does not contain a task section. +
+ + + +

num_accelerators

+
+View source
+
+
+
+Returns the number of accelerator cores per worker.
+
+This returns the number of accelerator cores (such as GPUs and TPUs)
+available per worker.
+
+Optionally, callers may specify the task_type and task_id if they want to
+target a specific TensorFlow process when querying the number of
+accelerators. This is to support heterogeneous environments, where the
+number of accelerator cores per host is different.
+
+
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the machine we +want to query. +
+`task_id` + +(Optional) The index of the TensorFlow task of the machine we +want to query. +
+`config_proto` + +(Optional) Configuration for starting a new session to +query how many accelerator cores it has. +
+ + + + + + + + + + + +
Returns
+A map of accelerator types to number of cores. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/cluster_resolver/TPUClusterResolver.md b/site/en/api_docs/python/tf/distribute/cluster_resolver/TPUClusterResolver.md new file mode 100644 index 00000000000..f9db7da01f4 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/cluster_resolver/TPUClusterResolver.md @@ -0,0 +1,457 @@ +description: Cluster Resolver for Google Cloud TPUs. + +
+ + + + + + + + + + +
+
+# tf.distribute.cluster_resolver.TPUClusterResolver
+
+
+
+
+
+Cluster Resolver for Google Cloud TPUs.
+
+Inherits From: [`ClusterResolver`](../../../tf/distribute/cluster_resolver/ClusterResolver.md)
+
+
+
+
+
+This is an implementation of cluster resolvers for the Google Cloud TPU
+service. As Cloud TPUs are in alpha, you will need to specify an API
+definition file for this to consume, in addition to a list of Cloud TPUs in
+your Google Cloud Platform project.
+
+TPUClusterResolver supports the following distinct environments:
+
+- Google Compute Engine
+- Google Kubernetes Engine
+- Google internal
+
+
+`tpu` + +A string corresponding to the TPU to use. If the string is an empty +string, the string 'local', or a string that begins with 'grpc://', then +it is assumed to not correspond with a Cloud TPU and will instead be +passed as the session master and no ClusterSpec propagation will be +done. In the future, this may also support a list of strings when +multiple Cloud TPUs are used. +
+`zone` + +Zone where the TPUs are located. If omitted or empty, we will assume +that the zone of the TPU is the same as the zone of the GCE VM, which we +will try to discover from the GCE metadata service. +
+`project` + +Name of the GCP project containing Cloud TPUs. If omitted or +empty, we will try to discover the project name of the GCE VM from the +GCE metadata service. +
+`job_name` + +Name of the TensorFlow job the TPUs belong to. +
+`coordinator_name` + +The name to use for the coordinator. Set to None if the +coordinator should not be included in the computed ClusterSpec. +
+`coordinator_address` + +The address of the coordinator (typically an ip:port +pair). If set to None, a TF server will be started. If coordinator_name +is None, a TF server will not be started even if coordinator_address is +None. +
+`credentials` + +GCE Credentials. If None, then we use default credentials +from the oauth2client +
+`service` + +The GCE API object returned by the googleapiclient.discovery +function. If you specify a custom service object, then the credentials +parameter will be ignored. +
+`discovery_url` + +A URL template that points to the location of the discovery +service. It should have two parameters {api} and {apiVersion} that when +filled in produce an absolute URL to the discovery document for that +service. The environment variable 'TPU_API_DISCOVERY_URL' will override +this. +
+ + + + + + + + + + + + + + + + + + +
+`ImportError` + +If the googleapiclient is not installed. +
+`ValueError` + +If no TPUs are specified. +
+`RuntimeError` + +If an empty TPU name is specified and this is running in a +Google Cloud environment. +
+ + + + + + + + + + + + + + +
+`environment` + +Returns the current environment which TensorFlow is running in. +
+ + + +## Methods + +

cluster_spec

+ +View source + + + +Returns a ClusterSpec object based on the latest TPU information. + +We retrieve the information from the GCE APIs every time this method is +called. + + + + + + + + + +
Returns
+A ClusterSpec containing host information returned from Cloud TPUs, +or None. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If the provided TPU is not healthy. +
+ + + +
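+
+For illustration, a rough sketch of the usual flow (the TPU name is a
+placeholder; this only resolves against a real Cloud TPU):
+
+```
+import tensorflow as tf
+
+resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
+tf.config.experimental_connect_to_cluster(resolver)
+tf.tpu.experimental.initialize_tpu_system(resolver)
+strategy = tf.distribute.experimental.TPUStrategy(resolver)
+```
+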

get_job_name

+ +View source + + + + + + +

get_master

+ +View source + + + + + + +

master

+
+View source
+
+
+
+Get the Master string to be used for the session.
+
+In the normal case, this returns the grpc path (grpc://1.2.3.4:8470) of the
+first instance in the ClusterSpec returned by the cluster_spec function.
+
+If a non-TPU name is used when constructing a TPUClusterResolver, that will
+be returned instead (e.g. if the `tpu` argument's value when constructing
+this TPUClusterResolver was 'grpc://10.240.1.2:8470',
+'grpc://10.240.1.2:8470' will be returned).
+
+
Args
+`task_type` + +(Optional, string) The type of the TensorFlow task of the +master. +
+`task_id` + +(Optional, integer) The index of the TensorFlow task of the +master. +
+`rpc_layer` + +(Optional, string) The RPC protocol TensorFlow should use to +communicate with TPUs. +
+ + + + + + + + + + + +
Returns
+string, the connection string to use when creating a session. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If none of the TPUs specified exists. +
+ + + +

num_accelerators

+
+View source
+
+
+
+Returns the number of TPU cores per worker.
+
+Connects to the master and lists all the devices present on the master,
+and counts them up. Also verifies that the device counts per host in the
+cluster are the same before returning the number of TPU cores per host.
+
+
Args
+`task_type` + +Unused. +
+`task_id` + +Unused. +
+`config_proto` + +Used to create a connection to a TPU master in order to +retrieve the system metadata. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If we cannot talk to a TPU worker after retrying or if the +number of TPU devices per host is different. +
+ + + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/distribute/cluster_resolver/UnionResolver.md b/site/en/api_docs/python/tf/distribute/cluster_resolver/UnionResolver.md new file mode 100644 index 00000000000..9c99b6f4e3d --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/cluster_resolver/UnionResolver.md @@ -0,0 +1,350 @@ +description: Performs a union on underlying ClusterResolvers. + +
+ + + + + + +
+
+# tf.distribute.cluster_resolver.UnionResolver
+
+
+
+
+
+Performs a union on underlying ClusterResolvers.
+
+Inherits From: [`ClusterResolver`](../../../tf/distribute/cluster_resolver/ClusterResolver.md)
+
+
+
+
+
+This class performs a union given two or more existing ClusterResolvers. It
+merges the underlying ClusterResolvers, and returns one unified ClusterSpec
+when cluster_spec is called. The details of the merge function are
+documented in the cluster_spec function.
+
+For additional ClusterResolver properties such as task type, task index,
+rpc layer, environment, etc., we will return the value from the first
+ClusterResolver in the union.
+
+
+`*args` + +`ClusterResolver` objects to be unionized. +
+`**kwargs` + +rpc_layer - (Optional) Override value for the RPC layer used by +TensorFlow. +task_type - (Optional) Override value for the current task type. +task_id - (Optional) Override value for the current task index. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If any argument is not a subclass of `ClusterResolvers`. +
+`ValueError` + +If there are no arguments passed. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`environment` + +Returns the current environment which TensorFlow is running in. + +There are two possible return values, "google" (when TensorFlow is running +in a Google-internal environment) or an empty string (when TensorFlow is +running elsewhere). + +If you are implementing a ClusterResolver that works in both the Google +environment and the open-source world (for instance, a TPU ClusterResolver +or similar), you will have to return the appropriate string depending on the +environment, which you will have to detect. + +Otherwise, if you are implementing a ClusterResolver that will only work +in open-source TensorFlow, you do not need to implement this property. +
+`rpc_layer` + + +
+`task_id` + + +
+`task_type` + + +
+ + + +## Methods + +

cluster_spec

+ +View source + + + +Returns a union of all the ClusterSpecs from the ClusterResolvers. + + + + + + + + + + +
Returns
+A ClusterSpec containing host information merged from all the underlying +ClusterResolvers. +
+ + + + + + + + + + + + +
Raises
+`KeyError` + +If there are conflicting keys detected when merging two or +more dictionaries, this exception is raised. +
+ + +Note: If there are multiple ClusterResolvers exposing ClusterSpecs with the +same job name, we will merge the list/dict of workers. + +If *all* underlying ClusterSpecs expose the set of workers as lists, we will +concatenate the lists of workers, starting with the list of workers from +the first ClusterResolver passed into the constructor. + +If *any* of the ClusterSpecs expose the set of workers as a dict, we will +treat all the sets of workers as dicts (even if they are returned as lists) +and will only merge them into a dict if there is no conflicting keys. If +there is a conflicting key, we will raise a `KeyError`. + +
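+
+For illustration, a minimal sketch of merging two resolvers (the hostnames
+are placeholders; the two ClusterSpecs use different job names, so the union
+simply combines them):
+
+```
+import tensorflow as tf
+
+workers = tf.distribute.cluster_resolver.SimpleClusterResolver(
+    tf.train.ClusterSpec({"worker": ["host1:2222", "host2:2222"]}))
+ps = tf.distribute.cluster_resolver.SimpleClusterResolver(
+    tf.train.ClusterSpec({"ps": ["host3:2222"]}))
+union = tf.distribute.cluster_resolver.UnionResolver(workers, ps)
+print(union.cluster_spec().as_dict())
+```
+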

master

+ +View source + + + +Returns the master address to use when creating a session. + +This usually returns the master from the first ClusterResolver passed in, +but you can override this by specifying the task_type and task_id. + + + + + + + + + + + + + + + + +
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the master. +
+`task_id` + +(Optional) The index of the TensorFlow task of the master. +
+`rpc_layer` + +(Optional) The RPC protocol for the given cluster. +
+ + + + + + + + + + + +
Returns
+The name or URL of the session master. +
+ + + +

num_accelerators

+
+View source
+
+
+
+Returns the number of accelerator cores per worker.
+
+This returns the number of accelerator cores (such as GPUs and TPUs)
+available per worker.
+
+Optionally, callers may specify the task_type and task_id if they want to
+target a specific TensorFlow process when querying the number of
+accelerators. This is to support heterogeneous environments, where the
+number of accelerator cores per host is different.
+
+
Args
+`task_type` + +(Optional) The type of the TensorFlow task of the machine we +want to query. +
+`task_id` + +(Optional) The index of the TensorFlow task of the machine we +want to query. +
+`config_proto` + +(Optional) Configuration for starting a new session to +query how many accelerator cores it has. +
+ + + + + + + + + + + +
Returns
+A map of accelerator types to number of cores. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/experimental.md b/site/en/api_docs/python/tf/distribute/experimental.md new file mode 100644 index 00000000000..408fbc448ac --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/experimental.md @@ -0,0 +1,37 @@ +description: Experimental Distribution Strategy library. + +
+ + +
+ +# Module: tf.distribute.experimental + + + + + + + + + +Experimental Distribution Strategy library. + + + +## Classes + +[`class CentralStorageStrategy`](../../tf/distribute/experimental/CentralStorageStrategy.md): A one-machine strategy that puts all variables on a single device. + +[`class CollectiveCommunication`](../../tf/distribute/experimental/CollectiveCommunication.md): Communication choices for CollectiveOps. + +[`class CollectiveHints`](../../tf/distribute/experimental/CollectiveHints.md): Hints for collective operations like AllReduce. + +[`class MultiWorkerMirroredStrategy`](../../tf/distribute/experimental/MultiWorkerMirroredStrategy.md): A distribution strategy for synchronous training on multiple workers. + +[`class ParameterServerStrategy`](../../tf/distribute/experimental/ParameterServerStrategy.md): An asynchronous multi-worker parameter server tf.distribute strategy. + +[`class TPUStrategy`](../../tf/distribute/experimental/TPUStrategy.md): TPU distribution strategy implementation. + +[`class ValueContext`](../../tf/distribute/experimental/ValueContext.md): A class wrapping information needed by a distribute function. + diff --git a/site/en/api_docs/python/tf/distribute/experimental/CentralStorageStrategy.md b/site/en/api_docs/python/tf/distribute/experimental/CentralStorageStrategy.md new file mode 100644 index 00000000000..32780368b50 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/experimental/CentralStorageStrategy.md @@ -0,0 +1,917 @@ +description: A one-machine strategy that puts all variables on a single device. + +
+ + + + + + + + + + + + + + +
+ +# tf.distribute.experimental.CentralStorageStrategy + + + + + + + + + +A one-machine strategy that puts all variables on a single device. + +Inherits From: [`Strategy`](../../../tf/distribute/Strategy.md) + + + + + + + +Variables are assigned to local CPU or the only GPU. If there is more +than one GPU, compute operations (other than variable update operations) +will be replicated across all GPUs. + +#### For Example: + + +``` +strategy = tf.distribute.experimental.CentralStorageStrategy() +# Create a dataset +ds = tf.data.Dataset.range(5).batch(2) +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(ds) + +with strategy.scope(): + @tf.function + def train_step(val): + return val + 1 + + # Iterate over the distributed dataset + for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_assign_to_logical_device

+ +View source + + + +Adds annotation that `tensor` will be assigned to a logical device. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be invoked on logical core device id `logical_device_id`. +When model parallelism is used, the default behavior is that all ops +are placed on zero-th logical device. + +```python + +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + output = tf.add(inputs, inputs) + + // Add operation will be executed on logical device 0. + output = strategy.experimental_assign_to_logical_device(output, 0) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` + + + + + + + + + + + + + +
Args
+`tensor` + +Input tensor to annotate. +
+`logical_device_id` + +Id of the logical core to which the tensor will be +assigned. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +The logical device id presented is not consistent with total +number of partitions specified by the device assignment. +
+ + + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`.
+
+ + + +

experimental_distribute_dataset

+ +View source + + + +Distributes a tf.data.Dataset instance provided via dataset. + +The returned dataset is a wrapped strategy dataset which creates a +multidevice iterator under the hood. It prefetches the input data to the +specified devices on the worker. The returned distributed dataset can be +iterated over similar to how regular datasets can. + +NOTE: Currently, the user cannot add any more transformations to a +distributed dataset. + +#### For Example: + + +``` +strategy = tf.distribute.CentralStorageStrategy() # with 1 CPU and 1 GPU +dataset = tf.data.Dataset.range(10).batch(2) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +for x in dist_dataset: + print(x) # Prints PerReplica values [0, 1], [2, 3],... + +``` +Args: + dataset: tf.data.Dataset to be prefetched to device. + + + + + + + + + +
Returns
+A "distributed `Dataset`" that the caller can iterate over. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. In this +case, we only have one worker so `dataset_fn` is called once. Each replica +on this worker will then dequeue a batch of elements from this local +dataset. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed. + +#### For Example: + + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which the caller can iterate over like regular +datasets. +
+ + + +

experimental_distribute_values_from_function

+ +View source + + + +Generates tf.distribute.DistributedValues from `value_fn`. + +This function is to generate tf.distribute.DistributedValues to pass +into `run`, `reduce`, or other methods that take +distributed values when not using datasets. + + + + + + + + + + +
Args
+`value_fn` + +The function to run to generate values. It is called for +each replica with `tf.distribute.ValueContext` as the sole argument. It +must return a Tensor or a type that can be converted to a Tensor. +
+ + + + + + + + + + + +
Returns
+A tf.distribute.DistributedValues containing a value for each replica. +
+ + + +#### Example usage: + + + +1. Return constant value per replica: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return tf.constant(1.) +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(,) +``` + +2. Distribute values in array based on replica_id: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> array_value = np.array([3., 2., 1.]) +>>> def value_fn(ctx): +... return array_value[ctx.replica_id_in_sync_group] +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(3.0,) +``` + +3. Specify values using num_replicas_in_sync: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return ctx.num_replicas_in_sync +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(1,) +``` + +4. Place values on devices and distribute: + +``` +strategy = tf.distribute.TPUStrategy() +worker_devices = strategy.extended.worker_devices +multiple_values = [] +for i in range(strategy.num_replicas_in_sync): + with tf.device(worker_devices[i]): + multiple_values.append(tf.constant(1.0)) + +def value_fn(ctx): + return multiple_values[ctx.replica_id] + +distributed_values = strategy. + experimental_distribute_values_from_function( + value_fn) +``` + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +In `CentralStorageStrategy` there is a single worker so the value returned +will be all the values on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `run()`, `extended.call_for_each_replica()`, +or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single +value, this returns `(value,).` +
+ + + +
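+
+For illustration, a minimal sketch (assuming a single local worker; the
+computation is a placeholder):
+
+```
+import tensorflow as tf
+
+strategy = tf.distribute.experimental.CentralStorageStrategy()
+per_replica = strategy.run(lambda: tf.constant(1.0))
+# A tuple with one entry per local replica, e.g. a one-element tuple on a
+# machine with a single GPU (or CPU only).
+local = strategy.experimental_local_results(per_replica)
+```
+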

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use `experimental_distribute_dataset` +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_replicate_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be replicated to all logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to tensor `tensor` specifying that operations on +`tensor` will be invoked on all logical devices. + +```python +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + images, labels = inputs + images = strategy.experimental_split_to_logical_devices( + inputs, [1, 2, 4, 1]) + + // model() function will be executed on 8 logical devices with `inputs` + // split 2 * 4 ways. + output = model(inputs) + + // For loss calculation, all logical devices share the same logits + // and labels. + labels = strategy.experimental_replicate_to_logical_devices(labels) + output = strategy.experimental_replicate_to_logical_devices(output) + loss = loss_fn(labels, output) + + return loss + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`.
+
+ + + +

experimental_split_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be split across logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to tensor `tensor` specifying that operations on +`tensor` will be split among multiple logical devices. Tensor `tensor` +will be split across dimensions specified by `partition_dimensions`. +The dimensions of `tensor` must be divisible by the corresponding value in +`partition_dimensions`. + +For example, for a system with 8 logical devices, if `tensor` is an image +tensor with shape (batch_size, width, height, channel) and +`partition_dimensions` is [1, 2, 4, 1], then `tensor` will be split +2 ways in the width dimension and 4 ways in the height dimension, and the split +tensor values will be fed into 8 logical devices. + +```python +# Initializing TPU system with 8 logical devices and 1 replica. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[2, 2, 2], + num_replicas=1) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + inputs = strategy.experimental_split_to_logical_devices( + inputs, [1, 2, 4, 1]) + + # model() function will be executed on 8 logical devices with `inputs` + # split 2 * 4 ways. + output = model(inputs) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + partition_dimensions: An unnested list of integers with the size equal to + the rank of `tensor` specifying how `tensor` will be partitioned. The + product of all elements in `partition_dimensions` must be equal to the + total number of logical devices per replica. + + + + + + + + + +
Raises
+`ValueError` + +1) If the size of `partition_dimensions` does not equal the rank +of `tensor`, or 2) if the product of the elements of `partition_dimensions` does +not match the number of logical devices per replica defined by the +implementing DistributionStrategy's device specification, or +3) if a known size of `tensor` is not divisible by the corresponding +value in `partition_dimensions`. +
+ + + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + +#### For Example: + + +``` +strategy = tf.distribute.experimental.CentralStorageStrategy( + compute_devices=['CPU:0', 'GPU:0'], parameter_device='CPU:0') +ds = tf.data.Dataset.range(10) +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(ds) + +with strategy.scope(): + @tf.function + def train_step(val): + # pass through + return val + + # Iterate over the distributed dataset + for x in dist_dataset: + result = strategy.run(train_step, args=(x,)) + +result = strategy.reduce(tf.distribute.ReduceOp.SUM, result, + axis=None).numpy() +# result: array([ 4, 6, 8, 10]) + +result = strategy.reduce(tf.distribute.ReduceOp.SUM, result, axis=0).numpy() +# result: 28 +``` + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +

run

+ +View source + + + +Run `fn` on each replica, with the given arguments. + +In `CentralStorageStrategy`, `fn` is called on each of the compute +replicas, with the provided "per replica" arguments specific to that device. + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Return value from running `fn`. +
+ + + +
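+As a minimal, hedged sketch of the typical pattern (a distributed dataset
+driving `run`, with the per-replica results then combined via `reduce`),
+assuming a machine with one CPU and one GPU as compute devices:
+
+```python
+import tensorflow as tf
+
+strategy = tf.distribute.experimental.CentralStorageStrategy(
+    compute_devices=['CPU:0', 'GPU:0'], parameter_device='CPU:0')
+
+# A toy dataset batched by the global batch size.
+dataset = tf.data.Dataset.range(8).batch(4)
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+@tf.function
+def train_step(x):
+  # Stand-in for a real training step.
+  return tf.reduce_sum(x)
+
+for batch in dist_dataset:
+  per_replica = strategy.run(train_step, args=(batch,))
+  # Combine the per-replica scalars into a single value.
+  total = strategy.reduce(
+      tf.distribute.ReduceOp.SUM, per_replica, axis=None)
+```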

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/experimental/CollectiveCommunication.md b/site/en/api_docs/python/tf/distribute/experimental/CollectiveCommunication.md new file mode 100644 index 00000000000..7109475421a --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/experimental/CollectiveCommunication.md @@ -0,0 +1,51 @@ +description: Communication choices for CollectiveOps. + +
+ + + + + +
+ +# tf.distribute.experimental.CollectiveCommunication + + + + + + + + + +Communication choices for CollectiveOps. + + + + + +* `AUTO`: Default to runtime's automatic choices. +* `RING`: TensorFlow's ring algorithms for all-reduce and + all-gather. +* `NCCL`: Use ncclAllReduce for all-reduce, and ring algorithms for + all-gather. + +## Class Variables + +* `AUTO` +* `NCCL` +* `RING` diff --git a/site/en/api_docs/python/tf/distribute/experimental/CollectiveHints.md b/site/en/api_docs/python/tf/distribute/experimental/CollectiveHints.md new file mode 100644 index 00000000000..f6719033873 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/experimental/CollectiveHints.md @@ -0,0 +1,105 @@ +description: Hints for collective operations like AllReduce. + +
+ + + +
+ +# tf.distribute.experimental.CollectiveHints + + + + + + + + + +Hints for collective operations like AllReduce. + + + + + + + + + +This can be passed to methods like +`tf.distribute.get_replica_context().all_reduce()` to optimize collective +operation performance. Note that these are only hints, which may or may not +change the actual behavior. Some options only apply to certain strategy and +are ignored by others. + +One common optimization is to break gradients all-reduce into multiple packs +so that weight updates can overlap with gradient all-reduce. + +#### Example: + + + +```python +hints = tf.distribute.experimental.CollectiveHints( + bytes_per_pack=50 * 1024 * 1024) +grads = tf.distribute.get_replica_context().all_reduce( + 'sum', grads, experimental_hints=hints) +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) +``` + + + + + + + + + + +
+`bytes_per_pack` + +A non-negative integer. Breaks collective operations into +packs of a certain size. If it's zero, the value is determined +automatically. This only applies to all-reduce with +`MultiWorkerMirroredStrategy` currently. +
+ + + + + + + + + + + + +
+`ValueError` + +When arguments have an invalid value. +
+ + + diff --git a/site/en/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy.md b/site/en/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy.md new file mode 100644 index 00000000000..5d3d54885c1 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy.md @@ -0,0 +1,1086 @@ +description: A distribution strategy for synchronous training on multiple workers. + +
+ + + + + + + + + + + + + + +
+ +# tf.distribute.experimental.MultiWorkerMirroredStrategy + + + + + + + + + +A distribution strategy for synchronous training on multiple workers. + +Inherits From: [`Strategy`](../../../tf/distribute/Strategy.md) + + + + + + + +This strategy implements synchronous distributed training across multiple +workers, each with potentially multiple GPUs. Similar to +tf.distribute.MirroredStrategy, it creates copies of all variables in the +model on each device across all workers. + +It uses CollectiveOps's implementation of multi-worker all-reduce to +keep variables in sync. A collective op is a single op in the +TensorFlow graph which can automatically choose an all-reduce algorithm in +the TensorFlow runtime according to hardware, network topology and tensor +sizes. + +By default it uses all local GPUs or CPU for single-worker training. + +When the 'TF_CONFIG' environment variable is set, it parses cluster_spec, +task_type and task_id from 'TF_CONFIG' and turns into a multi-worker strategy +that mirrors models on the GPUs of all machines in the cluster. In the current +implementation, it uses all GPUs in a cluster and it assumes all workers have +the same number of GPUs. + +You can also pass a distribute.cluster_resolver.ClusterResolver instance +when instantiating the strategy. The task_type, task_id etc. will be parsed +from the resolver instance instead of from the `TF_CONFIG` env var. + +It supports both eager mode and graph mode. However, for eager mode, it has to +set up the eager context in its constructor and therefore all ops in eager +mode have to run after the strategy object is created. + + + + + + + + + + + + + +
+`communication` + +optional Enum of type +distribute.experimental.CollectiveCommunication. This provides a way +for the user to override the choice of collective op communication. +Possible values include `AUTO`, `RING`, and `NCCL`. +
+`cluster_resolver` + +optional distribute.cluster_resolver.ClusterResolver +object. The default ClusterResolver that is used is the +TFConfigClusterResolver which is instantiated from the TF_CONFIG env +var. +
+ + + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_assign_to_logical_device

+ +View source + + + +Adds annotation that `tensor` will be assigned to a logical device. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be invoked on logical core device id `logical_device_id`. +When model parallelism is used, the default behavior is that all ops +are placed on the zero-th logical device. + +```python + +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + output = tf.add(inputs, inputs) + + # Add operation will be executed on logical device 0. + output = strategy.experimental_assign_to_logical_device(output, 0) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` + + + + + + + + + + + + +
Args
+`tensor` + +Input tensor to annotate. +
+`logical_device_id` + +Id of the logical core to which the tensor will be +assigned. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +The logical device id presented is not consistent with the total +number of partitions specified by the device assignment. +
+ + + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

experimental_distribute_dataset

+ +View source + + + +Distributes a tf.data.Dataset instance provided via `dataset`. + +The returned distributed dataset can be iterated over similar to how +regular datasets can. +NOTE: Currently, the user cannot add any more transformations to a +distributed dataset. + +The following is an example: + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + +We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). + +In a multi-worker setting, we will first attempt to distribute the dataset +by attempting to detect whether the dataset is being created out of +ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so, +attempting to shard the input files. Note that there has to be at least one +input file per worker. If you have less than one input file per worker, we +suggest that you should disable distributing your dataset using the method +below. + +If that attempt is unsuccessful (e.g. the dataset is created from a +Dataset.range), we will shard the dataset evenly at the end by appending a +`.shard` operation to the end of the processing pipeline. This will cause +the entire preprocessing pipeline for all the data to be run on every +worker, and each worker will do redundant work. We will print a warning +if this method of sharding is selected. + +You can disable dataset sharding across workers using the +`auto_shard_policy` option in tf.data.experimental.DistributeOptions. + +Within each worker, we will also split the data among all the worker +devices (if more than one a present), and this will happen even if +multi-worker sharding is disabled using the method above. + +If the above batch splitting and dataset sharding logic is undesirable, +please use `experimental_distribute_datasets_from_function` instead, which +does not do any automatic splitting or sharding. + +You can also use the `element_spec` property of the distributed dataset +returned by this API to query the tf.TypeSpec of the elements returned +by the iterator. This can be used to set the `input_signature` property +of a tf.function. + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +@tf.function(input_signature=[dist_dataset.element_spec]) +def train_step(inputs): + # train model with inputs + return + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. Each +replica on that worker will dequeue one batch of inputs from the local +`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued +from the `Dataset` every step). + +This method can be used for several purposes. For example, where +`experimental_distribute_dataset` is unable to shard the input files, this +method might be used to manually shard the dataset (avoiding the slow +fallback behavior in `experimental_distribute_dataset`). In cases where the +dataset is infinite, this sharding can be done by creating dataset replicas +that differ only in their random seed. +`experimental_distribute_dataset` may also sometimes fail to split the +batch across replicas on a worker. In that case, this method can be used +where that limitation does not exist. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + +To query the tf.TypeSpec of the elements in the distributed dataset +returned by this API, you need to use the `element_spec` property of the +distributed iterator. This tf.TypeSpec can be used to set the +`input_signature` property of a tf.function. + +```python +# If you want to specify `input_signature` for a `tf.function` you must +# first create the iterator. +iterator = iter(inputs) + +@tf.function(input_signature=[iterator.element_spec]) +def replica_fn_with_signature(inputs): + # train the model with inputs + return + +for _ in range(steps): + strategy.run(replica_fn_with_signature, + args=(next(iterator),)) +``` + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_values_from_function

+ +View source + + + +Generates tf.distribute.DistributedValues from `value_fn`. + +This function is to generate tf.distribute.DistributedValues to pass +into `run`, `reduce`, or other methods that take +distributed values when not using datasets. + + + + + + + + + + +
Args
+`value_fn` + +The function to run to generate values. It is called for +each replica with `tf.distribute.ValueContext` as the sole argument. It +must return a Tensor or a type that can be converted to a Tensor. +
+ + + + + + + + + + + +
Returns
+A tf.distribute.DistributedValues containing a value for each replica. +
+ + + +#### Example usage: + + + +1. Return constant value per replica: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return tf.constant(1.) +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,) +``` + +2. Distribute values in array based on replica_id: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> array_value = np.array([3., 2., 1.]) +>>> def value_fn(ctx): +... return array_value[ctx.replica_id_in_sync_group] +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(3.0,) +``` + +3. Specify values using num_replicas_in_sync: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return ctx.num_replicas_in_sync +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(1,) +``` + +4. Place values on devices and distribute: + +``` +strategy = tf.distribute.TPUStrategy() +worker_devices = strategy.extended.worker_devices +multiple_values = [] +for i in range(strategy.num_replicas_in_sync): + with tf.device(worker_devices[i]): + multiple_values.append(tf.constant(1.0)) + +def value_fn(ctx): + return multiple_values[ctx.replica_id_in_sync_group] + +distributed_values = ( + strategy.experimental_distribute_values_from_function( + value_fn)) +``` + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single +value, this returns `(value,)`. +
+ + + +

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use `experimental_distribute_dataset` +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_replicate_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be replicated to all logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to tensor `tensor` specifying that operations on +`tensor` will be invoked on all logical devices. + +```python +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + images, labels = inputs + images = strategy.experimental_split_to_logical_devices( + inputs, [1, 2, 4, 1]) + + # model() function will be executed on 8 logical devices with `inputs` + # split 2 * 4 ways. + output = model(inputs) + + # For loss calculation, all logical devices share the same logits + # and labels. + labels = strategy.experimental_replicate_to_logical_devices(labels) + output = strategy.experimental_replicate_to_logical_devices(output) + loss = loss_fn(labels, output) + + return loss + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

experimental_split_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be split across logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to tensor `tensor` specifying that operations on +`tensor` will be split among multiple logical devices. Tensor `tensor` +will be split across dimensions specified by `partition_dimensions`. +The dimensions of `tensor` must be divisible by the corresponding value in +`partition_dimensions`. + +For example, for a system with 8 logical devices, if `tensor` is an image +tensor with shape (batch_size, width, height, channel) and +`partition_dimensions` is [1, 2, 4, 1], then `tensor` will be split +2 ways in the width dimension and 4 ways in the height dimension, and the split +tensor values will be fed into 8 logical devices. + +```python +# Initializing TPU system with 8 logical devices and 1 replica. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[2, 2, 2], + num_replicas=1) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + inputs = strategy.experimental_split_to_logical_devices( + inputs, [1, 2, 4, 1]) + + # model() function will be executed on 8 logical devices with `inputs` + # split 2 * 4 ways. + output = model(inputs) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + partition_dimensions: An unnested list of integers with the size equal to + the rank of `tensor` specifying how `tensor` will be partitioned. The + product of all elements in `partition_dimensions` must be equal to the + total number of logical devices per replica. + + + + + + + + + +
Raises
+`ValueError` + +1) If the size of `partition_dimensions` does not equal the rank +of `tensor`, or 2) if the product of the elements of `partition_dimensions` does +not match the number of logical devices per replica defined by the +implementing DistributionStrategy's device specification, or +3) if a known size of `tensor` is not divisible by the corresponding +value in `partition_dimensions`. +
+ + + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +
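+To make the `axis` discussion above concrete, here is a small, hedged
+sketch. It uses tf.distribute.MirroredStrategy only so that it runs on a
+single machine without a 'TF_CONFIG' setup; the `reduce` call itself is the
+same for `MultiWorkerMirroredStrategy`.
+
+```python
+import tensorflow as tf
+
+strategy = tf.distribute.MirroredStrategy()
+
+# One global batch of 8 examples.
+dataset = tf.data.Dataset.range(8).batch(8)
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+@tf.function
+def step_fn(x):
+  return x  # pass the per-replica slice straight through
+
+per_replica = strategy.run(step_fn, args=(next(iter(dist_dataset)),))
+
+# Aggregate across replicas only: keeps the per-example dimension.
+summed_across_replicas = strategy.reduce(
+    tf.distribute.ReduceOp.SUM, per_replica, axis=None)
+
+# Also aggregate across the batch dimension: yields a single scalar (28).
+summed_across_batch = strategy.reduce(
+    tf.distribute.ReduceOp.SUM, per_replica, axis=0)
+```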

run

+ +View source + + + +Run `fn` on each replica, with the given arguments. + +Executes ops specified by `fn` on each replica. If `args` or `kwargs` have +tf.distribute.DistributedValues, such as those produced by a +"distributed `Dataset`" or `experimental_distribute_values_from_function` +when `fn` is executed on a particular replica, it will be executed with the +component of tf.distribute.DistributedValues that correspond to that +replica. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `all_reduce`. + +All arguments in `args` or `kwargs` should either be nest of tensors or +tf.distribute.DistributedValues containing tensors or composite tensors. + +IMPORTANT: Depending on the implementation of tf.distribute.Strategy and +whether eager execution is enabled, `fn` may be called one or more times ( +once for each replica). + +#### Example usage: + + + +1. Constant tensor input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> tensor_input = tf.constant(3.0) +>>> @tf.function +... def replica_fn(input): +... return input*2.0 +>>> result = strategy.run(replica_fn, args=(tensor_input,)) +>>> result + +``` + +2. DistributedValues input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> @tf.function +... def run(): +... def value_fn(value_context): +... return value_context.num_replicas_in_sync +... distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +... def replica_fn2(input): +... return input*2 +... return strategy.run(replica_fn2, args=(distributed_values,)) +>>> result = run() +>>> result + +``` + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be tf.distribute.DistributedValues, `Tensor` +objects, or `Tensor`s (for example, if running on a single replica). +
+ + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + +In `MultiWorkerMirroredStrategy`, all variables created inside +`strategy.scope()` will be mirrored on all replicas of each worker. +Moreover, it also sets a default device scope so that ops without +specified devices will end up on the correct worker. + + + + + + + + +
Returns
+A context manager to use for creating variables with this strategy. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/experimental/ParameterServerStrategy.md b/site/en/api_docs/python/tf/distribute/experimental/ParameterServerStrategy.md new file mode 100644 index 00000000000..2bc4d40a88e --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/experimental/ParameterServerStrategy.md @@ -0,0 +1,1093 @@ +description: An asynchronous multi-worker parameter server tf.distribute strategy. + +
+ + + + + + + + + + + + + + +
+ +# tf.distribute.experimental.ParameterServerStrategy + + + + + + + + + +An asynchronous multi-worker parameter server tf.distribute strategy. + +Inherits From: [`Strategy`](../../../tf/distribute/Strategy.md) + + + + + + + +This strategy requires two roles: workers and parameter servers. Variables and +updates to those variables will be assigned to parameter servers, while other +operations are assigned to workers. + +When each worker has more than one GPU, operations will be replicated on all +GPUs. Even though operations may be replicated, variables are not, and each +worker shares a common view of which parameter server a variable is assigned +to. + +By default it uses `TFConfigClusterResolver` to detect configurations for +multi-worker training. This requires a 'TF_CONFIG' environment variable and +the 'TF_CONFIG' must have a cluster spec. + +This class assumes each worker is running the same code independently, but +parameter servers are running a standard server. This means that while each +worker will synchronously compute a single gradient update across all GPUs, +updates between workers proceed asynchronously. Operations that occur only on +the first replica (such as incrementing the global step), will occur on the +first replica *of every worker*. + +It is expected to call `call_for_each_replica(fn, ...)` for any +operations which potentially can be replicated across replicas (i.e. multiple +GPUs) even if there is only a CPU or one GPU. When defining the `fn`, extra +caution needs to be taken: + +1) It is generally not recommended to open a device scope under the strategy's +scope. A device scope (i.e. calling tf.device) will be merged with or +override the device for operations but will not change the device for +variables. + +2) It is also not recommended to open a colocation scope (i.e. calling +tf.compat.v1.colocate_with) under the strategy's scope. For colocating +variables, use `strategy.extended.colocate_vars_with` instead. Colocation of +ops will possibly create device assignment conflicts. + +Note: This strategy only works with the Estimator API. Pass an instance of +this strategy to the `train_distribute` argument when you create the +`RunConfig`. This instance of `RunConfig` should then be passed to the +`Estimator` instance on which `train_and_evaluate` is called. + +#### For Example: + + +``` +strategy = tf.distribute.experimental.ParameterServerStrategy() +run_config = tf.estimator.RunConfig( + train_distribute=strategy) +estimator = tf.estimator.Estimator(config=run_config) +tf.estimator.train_and_evaluate(estimator, ...) +``` + + + + + + + + + +
+`cluster_resolver` + +Optional +tf.distribute.cluster_resolver.ClusterResolver object. Defaults to a +tf.distribute.cluster_resolver.TFConfigClusterResolver. +
+ + + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_assign_to_logical_device

+ +View source + + + +Adds annotation that `tensor` will be assigned to a logical device. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be invoked on logical core device id `logical_device_id`. +When model parallelism is used, the default behavior is that all ops +are placed on the zero-th logical device. + +```python + +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + output = tf.add(inputs, inputs) + + # Add operation will be executed on logical device 0. + output = strategy.experimental_assign_to_logical_device(output, 0) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` + + + + + + + + + + + + +
Args
+`tensor` + +Input tensor to annotate. +
+`logical_device_id` + +Id of the logical core to which the tensor will be +assigned. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +The logical device id presented is not consistent with the total +number of partitions specified by the device assignment. +
+ + + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

experimental_distribute_dataset

+ +View source + + + +Distributes a tf.data.Dataset instance provided via `dataset`. + +The returned distributed dataset can be iterated over similar to how +regular datasets can. +NOTE: Currently, the user cannot add any more transformations to a +distributed dataset. + +The following is an example: + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + +We will assume that the input dataset is batched by the +global batch size. With this assumption, we will make a best effort to +divide each batch across all the replicas (one or more workers). + +In a multi-worker setting, we will first attempt to distribute the dataset +by attempting to detect whether the dataset is being created out of +ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so, +attempting to shard the input files. Note that there has to be at least one +input file per worker. If you have less than one input file per worker, we +suggest that you should disable distributing your dataset using the method +below. + +If that attempt is unsuccessful (e.g. the dataset is created from a +Dataset.range), we will shard the dataset evenly at the end by appending a +`.shard` operation to the end of the processing pipeline. This will cause +the entire preprocessing pipeline for all the data to be run on every +worker, and each worker will do redundant work. We will print a warning +if this method of sharding is selected. + +You can disable dataset sharding across workers using the +`auto_shard_policy` option in tf.data.experimental.DistributeOptions. + +Within each worker, we will also split the data among all the worker +devices (if more than one a present), and this will happen even if +multi-worker sharding is disabled using the method above. + +If the above batch splitting and dataset sharding logic is undesirable, +please use `experimental_distribute_datasets_from_function` instead, which +does not do any automatic splitting or sharding. + +You can also use the `element_spec` property of the distributed dataset +returned by this API to query the tf.TypeSpec of the elements returned +by the iterator. This can be used to set the `input_signature` property +of a tf.function. + +```python +strategy = tf.distribute.MirroredStrategy() + +# Create a dataset +dataset = dataset_ops.Dataset.TFRecordDataset([ + "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"]) + +# Distribute that dataset +dist_dataset = strategy.experimental_distribute_dataset(dataset) + +@tf.function(input_signature=[dist_dataset.element_spec]) +def train_step(inputs): + # train model with inputs + return + +# Iterate over the distributed dataset +for x in dist_dataset: + # process dataset elements + strategy.run(train_step, args=(x,)) +``` + + + + + + + + + + +
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. Each +replica on that worker will dequeue one batch of inputs from the local +`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued +from the `Dataset` every step). + +This method can be used for several purposes. For example, where +`experimental_distribute_dataset` is unable to shard the input files, this +method might be used to manually shard the dataset (avoiding the slow +fallback behavior in `experimental_distribute_dataset`). In cases where the +dataset is infinite, this sharding can be done by creating dataset replicas +that differ only in their random seed. +`experimental_distribute_dataset` may also sometimes fail to split the +batch across replicas on a worker. In that case, this method can be used +where that limitation does not exist. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + +To query the tf.TypeSpec of the elements in the distributed dataset +returned by this API, you need to use the `element_spec` property of the +distributed iterator. This tf.TypeSpec can be used to set the +`input_signature` property of a tf.function. + +```python +# If you want to specify `input_signature` for a `tf.function` you must +# first create the iterator. +iterator = iter(inputs) + +@tf.function(input_signature=[iterator.element_spec]) +def replica_fn_with_signature(inputs): + # train the model with inputs + return + +for _ in range(steps): + strategy.run(replica_fn_with_signature, + args=(next(iterator),)) +``` + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +

experimental_distribute_values_from_function

+ +View source + + + +Generates tf.distribute.DistributedValues from `value_fn`. + +This function is to generate tf.distribute.DistributedValues to pass +into `run`, `reduce`, or other methods that take +distributed values when not using datasets. + + + + + + + + + + +
Args
+`value_fn` + +The function to run to generate values. It is called for +each replica with `tf.distribute.ValueContext` as the sole argument. It +must return a Tensor or a type that can be converted to a Tensor. +
+ + + + + + + + + + + +
Returns
+A tf.distribute.DistributedValues containing a value for each replica. +
+ + + +#### Example usage: + + + +1. Return constant value per replica: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return tf.constant(1.) +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,) +``` + +2. Distribute values in array based on replica_id: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> array_value = np.array([3., 2., 1.]) +>>> def value_fn(ctx): +... return array_value[ctx.replica_id_in_sync_group] +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(3.0,) +``` + +3. Specify values using num_replicas_in_sync: + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(ctx): +... return ctx.num_replicas_in_sync +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(1,) +``` + +4. Place values on devices and distribute: + +``` +strategy = tf.distribute.TPUStrategy() +worker_devices = strategy.extended.worker_devices +multiple_values = [] +for i in range(strategy.num_replicas_in_sync): + with tf.device(worker_devices[i]): + multiple_values.append(tf.constant(1.0)) + +def value_fn(ctx): + return multiple_values[ctx.replica_id_in_sync_group] + +distributed_values = ( + strategy.experimental_distribute_values_from_function( + value_fn)) +``` + +

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single +value, this returns `(value,)`. +
+ + + +

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use `experimental_distribute_dataset` +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + +
Args
+`numpy_input` + +A nest of NumPy input arrays that will be converted into a +dataset. Note that lists of Numpy arrays are stacked, as that is normal +tf.data.Dataset behavior. +
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +

experimental_replicate_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be replicated to all logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to tensor `tensor` specifying that operations on +`tensor` will be invoked on all logical devices. + +```python +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + images, labels = inputs + images = strategy.experimental_split_to_logical_devices( + inputs, [1, 2, 4, 1]) + + # model() function will be executed on 8 logical devices with `inputs` + # split 2 * 4 ways. + output = model(inputs) + + # For loss calculation, all logical devices share the same logits + # and labels. + labels = strategy.experimental_replicate_to_logical_devices(labels) + output = strategy.experimental_replicate_to_logical_devices(output) + loss = loss_fn(labels, output) + + return loss + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

experimental_split_to_logical_devices

+ +View source + + + +Adds annotation that `tensor` will be split across logical devices. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to tensor `tensor` specifying that operations on +`tensor` will be split among multiple logical devices. Tensor `tensor` +will be split across dimensions specified by `partition_dimensions`. +The dimensions of `tensor` must be divisible by the corresponding value in +`partition_dimensions`. + +For example, for a system with 8 logical devices, if `tensor` is an image +tensor with shape (batch_size, width, height, channel) and +`partition_dimensions` is [1, 2, 4, 1], then `tensor` will be split +2 ways in the width dimension and 4 ways in the height dimension, and the split +tensor values will be fed into 8 logical devices. + +```python +# Initializing TPU system with 8 logical devices and 1 replica. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[2, 2, 2], + num_replicas=1) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) + +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + inputs = strategy.experimental_split_to_logical_devices( + inputs, [1, 2, 4, 1]) + + # model() function will be executed on 8 logical devices with `inputs` + # split 2 * 4 ways. + output = model(inputs) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` +Args: + tensor: Input tensor to annotate. + partition_dimensions: An unnested list of integers with the size equal to + the rank of `tensor` specifying how `tensor` will be partitioned. The + product of all elements in `partition_dimensions` must be equal to the + total number of logical devices per replica. + + + + + + + + + +
Raises
+`ValueError` + +1) If the size of `partition_dimensions` does not equal the rank +of `tensor`, or 2) if the product of the elements of `partition_dimensions` does +not match the number of logical devices per replica defined by the +implementing DistributionStrategy's device specification, or +3) if a known size of `tensor` is not divisible by the corresponding +value in `partition_dimensions`. +
+ + + + + + + + + + + +
Returns
+Annotated tensor with identical value as `tensor`. +
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +
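+To illustrate the partial-batch point above, a small, hedged sketch; it uses
+a single-machine tf.distribute.MirroredStrategy as a stand-in because the
+`reduce` semantics are the same, while `ParameterServerStrategy` itself
+needs a multi-worker 'TF_CONFIG' setup.
+
+```python
+import tensorflow as tf
+
+strategy = tf.distribute.MirroredStrategy()
+
+# A "last partial batch" of 6 examples when the global batch size is 8.
+values = tf.constant([0., 1., 2., 3., 4., 5.])
+dataset = tf.data.Dataset.from_tensor_slices(values).batch(8)
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+@tf.function
+def step_fn(x):
+  return x
+
+per_replica = strategy.run(step_fn, args=(next(iter(dist_dataset)),))
+
+# MEAN with axis=0 divides by the true number of examples (6 here), rather
+# than averaging per-replica means that may carry unequal weights.
+mean = strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica, axis=0)
+# Expected value: 2.5
+```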

run

+ +View source + + + +Run `fn` on each replica, with the given arguments. + +Executes ops specified by `fn` on each replica. If `args` or `kwargs` have +tf.distribute.DistributedValues, such as those produced by a +"distributed `Dataset`" or `experimental_distribute_values_from_function` +when `fn` is executed on a particular replica, it will be executed with the +component of tf.distribute.DistributedValues that correspond to that +replica. + +`fn` may call tf.distribute.get_replica_context() to access members such +as `all_reduce`. + +All arguments in `args` or `kwargs` should either be nest of tensors or +tf.distribute.DistributedValues containing tensors or composite tensors. + +IMPORTANT: Depending on the implementation of tf.distribute.Strategy and +whether eager execution is enabled, `fn` may be called one or more times ( +once for each replica). + +#### Example usage: + + + +1. Constant tensor input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> tensor_input = tf.constant(3.0) +>>> @tf.function +... def replica_fn(input): +... return input*2.0 +>>> result = strategy.run(replica_fn, args=(tensor_input,)) +>>> result + +``` + +2. DistributedValues input. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> @tf.function +... def run(): +... def value_fn(value_context): +... return value_context.num_replicas_in_sync +... distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +... def replica_fn2(input): +... return input*2 +... return strategy.run(replica_fn2, args=(distributed_values,)) +>>> result = run() +>>> result + +``` + + + + + + + + + + + + + + + + + + + +
Args
+`fn` + +The function to run. The output must be a tf.nest of `Tensor`s. +
+`args` + +(Optional) Positional arguments to `fn`. +
+`kwargs` + +(Optional) Keyword arguments to `fn`. +
+`options` + +(Optional) An instance of tf.distribute.RunOptions specifying +the options to run `fn`. +
+ + + + + + + + + + + +
Returns
+Merged return value of `fn` across replicas. The structure of the return +value is the same as the return value from `fn`. Each element in the +structure can either be tf.distribute.DistributedValues, `Tensor` +objects, or `Tensor`s (for example, if running on a single replica). +
+ + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/experimental/TPUStrategy.md b/site/en/api_docs/python/tf/distribute/experimental/TPUStrategy.md new file mode 100644 index 00000000000..d6b86398ceb --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/experimental/TPUStrategy.md @@ -0,0 +1,943 @@ +description: TPU distribution strategy implementation. + +
+ + + + + + + + + + + + + + +
+ +# tf.distribute.experimental.TPUStrategy + + + + + + + + + +TPU distribution strategy implementation. + +Inherits From: [`Strategy`](../../../tf/distribute/Strategy.md) + + + + + + + + + + + + + + + + + + + + +
+`tpu_cluster_resolver` + +A tf.distribute.cluster_resolver.TPUClusterResolver, +which provides information about the TPU cluster. +
+`device_assignment` + +Optional tf.tpu.experimental.DeviceAssignment to +specify the placement of replicas on the TPU cluster. Currently only +supports the use case of using a single core within a TPU cluster. +
+ + + + + + + + + + + + + + + + + +
+`extended` + +tf.distribute.StrategyExtended with additional methods. +
+`num_replicas_in_sync` + +Returns number of replicas over which gradients are aggregated. +
+ + + +## Methods + +

experimental_assign_to_logical_device

+ +View source + + + +Adds annotation that `tensor` will be assigned to a logical device. + +NOTE: This API is only supported in TPUStrategy for now. +This adds an annotation to `tensor` specifying that operations on +`tensor` will be invoked on logical core device id `logical_device_id`. +When model parallelism is used, the default behavior is that all ops +are placed on the zero-th logical device. + +```python + +# Initializing TPU system with 2 logical devices and 4 replicas. +resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='') +tf.config.experimental_connect_to_cluster(resolver) +topology = tf.tpu.experimental.initialize_tpu_system(resolver) +device_assignment = tf.tpu.experimental.DeviceAssignment.build( + topology, + computation_shape=[1, 1, 2], + num_replicas=4) +strategy = tf.distribute.experimental.TPUStrategy( + resolver, device_assignment=device_assignment) +iterator = iter(inputs) + +@tf.function() +def step_fn(inputs): + output = tf.add(inputs, inputs) + + # Add operation will be executed on logical device 0. + output = strategy.experimental_assign_to_logical_device(output, 0) + return output + +strategy.run(step_fn, args=(next(iterator),)) +``` + + + + + + + + + + + + +
Args
+`tensor` + +Input tensor to annotate. +
+`logical_device_id` + +Id of the logical core to which the tensor will be +assigned. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +The logical device id presented is not consistent with total +number of partitions specified by the device assignment. +
+ + + + + + + + + + + +
Returns
+
+Annotated tensor with the same value as `tensor`.
+
+ + + +

experimental_distribute_dataset

+
+View source
+
+
+
+Distributes a tf.data.Dataset instance provided via `dataset`.
+
+The returned distributed dataset can be iterated over similarly to how
+regular datasets can.
+NOTE: Currently, the user cannot add any more transformations to a
+distributed dataset.
+
+The following is an example:
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+# Create a dataset
+dataset = tf.data.TFRecordDataset([
+    "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])
+
+# Distribute that dataset
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+# Iterate over the distributed dataset
+for x in dist_dataset:
+  # process dataset elements
+  strategy.run(train_step, args=(x,))
+```
+
+We will assume that the input dataset is batched by the
+global batch size. With this assumption, we will make a best effort to
+divide each batch across all the replicas (one or more workers).
+
+In a multi-worker setting, we will first attempt to distribute the dataset
+by attempting to detect whether the dataset is being created out of
+ReaderDatasets (e.g. TFRecordDataset, TextLineDataset, etc.) and if so,
+attempting to shard the input files. Note that there has to be at least one
+input file per worker. If you have less than one input file per worker, we
+suggest that you disable dataset sharding across workers using the method
+below.
+
+If that attempt is unsuccessful (e.g. the dataset is created from a
+Dataset.range), we will shard the dataset evenly at the end by appending a
+`.shard` operation to the end of the processing pipeline. This will cause
+the entire preprocessing pipeline for all the data to be run on every
+worker, and each worker will do redundant work. We will print a warning
+if this method of sharding is selected.
+
+You can disable dataset sharding across workers using the
+`auto_shard_policy` option in tf.data.experimental.DistributeOptions.
+
+Within each worker, we will also split the data among all the worker
+devices (if more than one is present), and this will happen even if
+multi-worker sharding is disabled using the method above.
+
+If the above batch splitting and dataset sharding logic is undesirable,
+please use `experimental_distribute_datasets_from_function` instead, which
+does not do any automatic splitting or sharding.
+
+You can also use the `element_spec` property of the distributed dataset
+returned by this API to query the tf.TypeSpec of the elements returned
+by the iterator. This can be used to set the `input_signature` property
+of a tf.function.
+
+```python
+strategy = tf.distribute.MirroredStrategy()
+
+# Create a dataset
+dataset = tf.data.TFRecordDataset([
+    "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])
+
+# Distribute that dataset
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+@tf.function(input_signature=[dist_dataset.element_spec])
+def train_step(inputs):
+  # train model with inputs
+  return
+
+# Iterate over the distributed dataset
+for x in dist_dataset:
+  # process dataset elements
+  strategy.run(train_step, args=(x,))
+```
+
Args
+`dataset` + +tf.data.Dataset that will be sharded across all replicas using +the rules stated above. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +
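+
+A hedged sketch of disabling automatic sharding via the `auto_shard_policy`
+option mentioned above (the range dataset and batch size are illustrative):
+
+```python
+import tensorflow as tf
+
+strategy = tf.distribute.MirroredStrategy()
+
+# Batched by the GLOBAL batch size, as this method expects.
+dataset = tf.data.Dataset.range(100).batch(16)
+
+options = tf.data.Options()
+options.experimental_distribute.auto_shard_policy = (
+    tf.data.experimental.AutoShardPolicy.OFF)
+dataset = dataset.with_options(options)
+
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+```
+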

experimental_distribute_datasets_from_function

+ +View source + + + +Distributes tf.data.Dataset instances created by calls to `dataset_fn`. + +`dataset_fn` will be called once for each worker in the strategy. Each +replica on that worker will dequeue one batch of inputs from the local +`Dataset` (i.e. if a worker has two replicas, two batches will be dequeued +from the `Dataset` every step). + +This method can be used for several purposes. For example, where +`experimental_distribute_dataset` is unable to shard the input files, this +method might be used to manually shard the dataset (avoiding the slow +fallback behavior in `experimental_distribute_dataset`). In cases where the +dataset is infinite, this sharding can be done by creating dataset replicas +that differ only in their random seed. +`experimental_distribute_dataset` may also sometimes fail to split the +batch across replicas on a worker. In that case, this method can be used +where that limitation does not exist. + +The `dataset_fn` should take an tf.distribute.InputContext instance where +information about batching and input replication can be accessed: + +``` +def dataset_fn(input_context): + batch_size = input_context.get_per_replica_batch_size(global_batch_size) + d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) + return d.shard( + input_context.num_input_pipelines, input_context.input_pipeline_id) + +inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn) + +for batch in inputs: + replica_results = strategy.run(replica_fn, args=(batch,)) +``` + +IMPORTANT: The tf.data.Dataset returned by `dataset_fn` should have a +per-replica batch size, unlike `experimental_distribute_dataset`, which uses +the global batch size. This may be computed using +`input_context.get_per_replica_batch_size`. + +To query the tf.TypeSpec of the elements in the distributed dataset +returned by this API, you need to use the `element_spec` property of the +distributed iterator. This tf.TypeSpec can be used to set the +`input_signature` property of a tf.function. + +```python +# If you want to specify `input_signature` for a `tf.function` you must +# first create the iterator. +iterator = iter(inputs) + +@tf.function(input_signature=[iterator.element_spec]) +def replica_fn_with_signature(inputs): + # train the model with inputs + return + +for _ in range(steps): + strategy.run(replica_fn_with_signature, + args=(next(iterator),)) +``` + + + + + + + + + + +
Args
+`dataset_fn` + +A function taking a tf.distribute.InputContext instance and +returning a tf.data.Dataset. +
+ + + + + + + + + + + +
Returns
+A "distributed `Dataset`", which acts like a tf.data.Dataset except +it produces "per-replica" values. +
+ + + +
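+
+As a sketch of the "replicas that differ only in their random seed" idea for
+infinite datasets described above (the seed choice and the range/shuffle
+pipeline are illustrative, not prescribed):
+
+```python
+import tensorflow as tf
+
+strategy = tf.distribute.MirroredStrategy()
+global_batch_size = 64
+
+def dataset_fn(input_context):
+  # Per-replica batch size, as this method expects.
+  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
+  # Each input pipeline shuffles with its own seed, so workers draw the data
+  # in different orders instead of sharding files.
+  seed = input_context.input_pipeline_id
+  return (tf.data.Dataset.range(1000)
+          .shuffle(1000, seed=seed)
+          .repeat()                    # infinite dataset
+          .batch(batch_size))
+
+inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn)
+```
+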

experimental_distribute_values_from_function

+ +View source + + + +Generates tf.distribute.DistributedValues from `value_fn`. + +This function is to generate tf.distribute.DistributedValues to pass +into `run`, `reduce`, or other methods that take +distributed values when not using datasets. + + + + + + + + + + +
Args
+`value_fn` + +The function to run to generate values. It is called for +each replica with `tf.distribute.ValueContext` as the sole argument. It +must return a Tensor or a type that can be converted to a Tensor. +
+ + + + + + + + + + + +
Returns
+A tf.distribute.DistributedValues containing a value for each replica. +
+
+
+#### Example usage:
+
+1. Return constant value per replica:
+
+```
+>>> strategy = tf.distribute.MirroredStrategy()
+>>> def value_fn(ctx):
+...   return tf.constant(1.)
+>>> distributed_values = (
+...     strategy.experimental_distribute_values_from_function(
+...         value_fn))
+>>> local_result = strategy.experimental_local_results(distributed_values)
+>>> local_result
+(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,)
+```
+
+2. Distribute values in array based on replica_id:
+
+```
+>>> strategy = tf.distribute.MirroredStrategy()
+>>> array_value = np.array([3., 2., 1.])
+>>> def value_fn(ctx):
+...   return array_value[ctx.replica_id_in_sync_group]
+>>> distributed_values = (
+...     strategy.experimental_distribute_values_from_function(
+...         value_fn))
+>>> local_result = strategy.experimental_local_results(distributed_values)
+>>> local_result
+(3.0,)
+```
+
+3. Specify values using num_replicas_in_sync:
+
+```
+>>> strategy = tf.distribute.MirroredStrategy()
+>>> def value_fn(ctx):
+...   return ctx.num_replicas_in_sync
+>>> distributed_values = (
+...     strategy.experimental_distribute_values_from_function(
+...         value_fn))
+>>> local_result = strategy.experimental_local_results(distributed_values)
+>>> local_result
+(1,)
+```
+
+4. Place values on devices and distribute:
+
+```
+strategy = tf.distribute.TPUStrategy()
+worker_devices = strategy.extended.worker_devices
+multiple_values = []
+for i in range(strategy.num_replicas_in_sync):
+  with tf.device(worker_devices[i]):
+    multiple_values.append(tf.constant(1.0))
+
+def value_fn(ctx):
+  return multiple_values[ctx.replica_id_in_sync_group]
+
+distributed_values = (
+    strategy.experimental_distribute_values_from_function(
+        value_fn))
+```
+

experimental_local_results

+ +View source + + + +Returns the list of all local per-replica values contained in `value`. + +Note: This only returns values on the worker initiated by this client. +When using a tf.distribute.Strategy like +tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker +will be its own client, and this function will only return values +computed on that worker. + + + + + + + + + + +
Args
+`value` + +A value returned by `experimental_run()`, `run()`, +`extended.call_for_each_replica()`, or a variable created in `scope`. +
+ + + + + + + + + + + +
Returns
+A tuple of values contained in `value`. If `value` represents a single +value, this returns `(value,).` +
+ + + +
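+
+A minimal sketch (assuming a single-worker `MirroredStrategy`; the replica
+function is illustrative) of unpacking a per-replica result into a plain
+tuple:
+
+```python
+import tensorflow as tf
+
+strategy = tf.distribute.MirroredStrategy()
+
+def replica_fn(x):
+  return x * 2.0
+
+per_replica = strategy.run(replica_fn, args=(tf.constant(3.0),))
+local = strategy.experimental_local_results(per_replica)
+# `local` is a tuple with one entry per local replica, e.g. a single tensor
+# holding 6.0 on a one-replica setup.
+```
+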

experimental_make_numpy_dataset

+ +View source + + + +Makes a tf.data.Dataset for input provided via a numpy array. + +This avoids adding `numpy_input` as a large constant in the graph, +and copies the data to the machine or machines that will be processing +the input. + +Note that you will likely need to use `experimental_distribute_dataset` +with the returned dataset to further distribute it with the strategy. + +#### Example: + + +``` +numpy_input = np.ones([10], dtype=np.float32) +dataset = strategy.experimental_make_numpy_dataset(numpy_input) +dist_dataset = strategy.experimental_distribute_dataset(dataset) +``` + + + + + + + + + + +
Args
+
+`numpy_input`
+
+A nest of NumPy input arrays that will be converted into a
+dataset. Note that lists of NumPy arrays are stacked, as that is normal
+tf.data.Dataset behavior.
+
+ + + + + + + + + + + +
Returns
+A tf.data.Dataset representing `numpy_input`. +
+ + + +
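+
+A slightly fuller sketch (assuming a single-worker `MirroredStrategy`; the
+array shape and batch size are illustrative). The returned dataset is
+typically batched by the global batch size before being distributed:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+strategy = tf.distribute.MirroredStrategy()
+
+numpy_input = np.ones([10, 2], dtype=np.float32)
+dataset = strategy.experimental_make_numpy_dataset(numpy_input)
+dataset = dataset.batch(2)  # batch before distributing
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+for batch in dist_dataset:
+  pass  # e.g. strategy.run(step_fn, args=(batch,))
+```
+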

experimental_replicate_to_logical_devices

+
+View source
+
+
+
+Adds annotation that `tensor` will be replicated to all logical devices.
+
+NOTE: This API is only supported in TPUStrategy for now.
+This adds an annotation to tensor `tensor` specifying that operations on
+`tensor` will be invoked on all logical devices.
+
+```python
+# Initializing TPU system with 2 logical devices and 4 replicas.
+resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
+tf.config.experimental_connect_to_cluster(resolver)
+topology = tf.tpu.experimental.initialize_tpu_system(resolver)
+device_assignment = tf.tpu.experimental.DeviceAssignment.build(
+    topology,
+    computation_shape=[1, 1, 2],
+    num_replicas=4)
+strategy = tf.distribute.experimental.TPUStrategy(
+    resolver, device_assignment=device_assignment)
+
+iterator = iter(inputs)
+
+@tf.function()
+def step_fn(inputs):
+  images, labels = inputs
+  images = strategy.experimental_split_to_logical_devices(
+      images, [1, 2, 4, 1])
+
+  # model() function will be executed on 8 logical devices with `images`
+  # split 2 * 4 ways.
+  output = model(images)
+
+  # For loss calculation, all logical devices share the same logits
+  # and labels.
+  labels = strategy.experimental_replicate_to_logical_devices(labels)
+  output = strategy.experimental_replicate_to_logical_devices(output)
+  loss = loss_fn(labels, output)
+
+  return loss
+
+strategy.run(step_fn, args=(next(iterator),))
+```
+Args:
+  tensor: Input tensor to annotate.
+
Returns
+
+Annotated tensor with the same value as `tensor`.
+
+ + + +

experimental_split_to_logical_devices

+
+View source
+
+
+
+Adds annotation that `tensor` will be split across logical devices.
+
+NOTE: This API is only supported in TPUStrategy for now.
+This adds an annotation to tensor `tensor` specifying that operations on
+`tensor` will be split among multiple logical devices. Tensor `tensor`
+will be split across dimensions specified by `partition_dimensions`.
+The dimensions of `tensor` must be divisible by the corresponding value in
+`partition_dimensions`.
+
+For example, for a system with 8 logical devices, if `tensor` is an image
+tensor with shape (batch_size, width, height, channel) and
+`partition_dimensions` is [1, 2, 4, 1], then `tensor` will be split
+2 ways in the width dimension and 4 ways in the height dimension, and the
+split tensor values will be fed into 8 logical devices.
+
+```python
+# Initializing TPU system with 8 logical devices and 1 replica.
+resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
+tf.config.experimental_connect_to_cluster(resolver)
+topology = tf.tpu.experimental.initialize_tpu_system(resolver)
+device_assignment = tf.tpu.experimental.DeviceAssignment.build(
+    topology,
+    computation_shape=[2, 2, 2],
+    num_replicas=1)
+strategy = tf.distribute.experimental.TPUStrategy(
+    resolver, device_assignment=device_assignment)
+
+iterator = iter(inputs)
+
+@tf.function()
+def step_fn(inputs):
+  inputs = strategy.experimental_split_to_logical_devices(
+      inputs, [1, 2, 4, 1])
+
+  # model() function will be executed on 8 logical devices with `inputs`
+  # split 2 * 4 ways.
+  output = model(inputs)
+  return output
+
+strategy.run(step_fn, args=(next(iterator),))
+```
+Args:
+  tensor: Input tensor to annotate.
+  partition_dimensions: An unnested list of integers with the size equal to
+    the rank of `tensor`, specifying how `tensor` will be partitioned. The
+    product of all elements in `partition_dimensions` must be equal to the
+    total number of logical devices per replica.
+
Raises
+
+`ValueError`
+
+1) If the size of `partition_dimensions` does not equal the rank
+of `tensor`, or 2) if the product of the elements of `partition_dimensions`
+does not match the number of logical devices per replica defined by the
+implementing DistributionStrategy's device specification, or
+3) if a known size of `tensor` is not divisible by the corresponding
+value in `partition_dimensions`.
+
+ + + + + + + + + + + +
Returns
+
+Annotated tensor with the same value as `tensor`.
+
+ + + +

reduce

+ +View source + + + +Reduce `value` across replicas. + +Given a per-replica value returned by `run`, say a +per-example loss, the batch will be divided across all the replicas. This +function allows you to aggregate across replicas and optionally also across +batch elements. For example, if you have a global batch size of 8 and 2 +replicas, values for examples `[0, 1, 2, 3]` will be on replica 0 and +`[4, 5, 6, 7]` will be on replica 1. By default, `reduce` will just +aggregate across replicas, returning `[0+4, 1+5, 2+6, 3+7]`. This is useful +when each replica is computing a scalar or some other value that doesn't +have a "batch" dimension (like a gradient). More often you will want to +aggregate across the global batch, which you can get by specifying the batch +dimension as the `axis`, typically `axis=0`. In this case it would return a +scalar `0+1+2+3+4+5+6+7`. + +If there is a last partial batch, you will need to specify an axis so +that the resulting shape is consistent across replicas. So if the last +batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you +would get a shape mismatch unless you specify `axis=0`. If you specify +tf.distribute.ReduceOp.MEAN, using `axis=0` will use the correct +denominator of 6. Contrast this with computing `reduce_mean` to get a +scalar value on each replica and this function to average those means, +which will weigh some values `1/8` and others `1/4`. + + + + + + + + + + + + + + + + +
Args
+`reduce_op` + +A tf.distribute.ReduceOp value specifying how values should +be combined. +
+`value` + +A "per replica" value, e.g. returned by `run` to +be combined into a single tensor. +
+`axis` + +Specifies the dimension to reduce along within each +replica's tensor. Should typically be set to the batch dimension, or +`None` to only reduce across replicas (e.g. if the tensor has no batch +dimension). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. +
+ + + +
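+
+A small sketch of the `axis` behavior described above (assuming a
+single-worker `MirroredStrategy`; the identity "loss" is illustrative):
+
+```python
+import tensorflow as tf
+
+strategy = tf.distribute.MirroredStrategy()
+global_batch_size = 8
+
+def step_fn(batch):
+  # Pretend each element of the batch is a per-example loss.
+  return tf.cast(batch, tf.float32)
+
+dataset = tf.data.Dataset.range(global_batch_size).batch(global_batch_size)
+dist_dataset = strategy.experimental_distribute_dataset(dataset)
+
+for batch in dist_dataset:
+  per_replica_losses = strategy.run(step_fn, args=(batch,))
+  # Aggregate across replicas only: keeps the per-example (batch) dimension.
+  per_example = strategy.reduce(
+      tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
+  # Also aggregate over the batch dimension: a scalar 0+1+...+7 = 28.0.
+  total = strategy.reduce(
+      tf.distribute.ReduceOp.SUM, per_replica_losses, axis=0)
+```
+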

run

+ +View source + + + +See base class. + + +

scope

+ +View source + + + +Returns a context manager selecting this Strategy as current. + +Inside a `with strategy.scope():` code block, this thread +will use a variable creator set by `strategy`, and will +enter its "cross-replica context". + + + + + + + + + +
Returns
+A context manager. +
+ + + + + diff --git a/site/en/api_docs/python/tf/distribute/experimental/ValueContext.md b/site/en/api_docs/python/tf/distribute/experimental/ValueContext.md new file mode 100644 index 00000000000..09cffca551a --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/experimental/ValueContext.md @@ -0,0 +1,121 @@ +description: A class wrapping information needed by a distribute function. + +
+ + + +
+ +# tf.distribute.experimental.ValueContext + + + + + + + + + +A class wrapping information needed by a distribute function. + + + + + + + +This is a context class that is passed to the `value_fn` in +`strategy.experimental_distribute_values_from_function` and contains +information about the compute replicas. The `num_replicas_in_sync` and +`replica_id` can be used to customize the value on each replica. + +#### Example usage: + + + +1. Directly constructed. + +``` +>>> def value_fn(context): +... return context.replica_id_in_sync_group/context.num_replicas_in_sync +>>> context = tf.distribute.experimental.ValueContext( +... replica_id_in_sync_group=2, num_replicas_in_sync=4) +>>> per_replica_value = value_fn(context) +>>> per_replica_value +0.5 +``` + +2. Passed in by `experimental_distribute_values_from_function`. + +``` +>>> strategy = tf.distribute.MirroredStrategy() +>>> def value_fn(value_context): +... return value_context.num_replicas_in_sync +>>> distributed_values = ( +... strategy.experimental_distribute_values_from_function( +... value_fn)) +>>> local_result = strategy.experimental_local_results(distributed_values) +>>> local_result +(1,) +``` + + + + + + + + + + + + + +
+`replica_id_in_sync_group` + +the current replica_id, should be an int in +[0,`num_replicas_in_sync`). +
+`num_replicas_in_sync` + +the number of replicas that are in sync. +
+ + + + + + + + + + + + + + + + + +
+`num_replicas_in_sync` + +Returns the number of compute replicas in sync. +
+`replica_id_in_sync_group` + +Returns the replica ID. +
+ + + diff --git a/site/en/api_docs/python/tf/distribute/experimental_set_strategy.md b/site/en/api_docs/python/tf/distribute/experimental_set_strategy.md new file mode 100644 index 00000000000..6819cd7d396 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/experimental_set_strategy.md @@ -0,0 +1,103 @@ +description: Set a tf.distribute.Strategy as current without with strategy.scope(). + +
+ + +
+ +# tf.distribute.experimental_set_strategy + + + + + + + + + +Set a tf.distribute.Strategy as current without `with strategy.scope()`. + + + + + + + + + +``` +tf.distribute.experimental_set_strategy(strategy1) +f() +tf.distribute.experimental_set_strategy(strategy2) +g() +tf.distribute.experimental_set_strategy(None) +h() +``` + +is equivalent to: + +``` +with strategy1.scope(): + f() +with strategy2.scope(): + g() +h() +``` + +In general, you should use the `with strategy.scope():` API, but this +alternative may be convenient in notebooks where you would have to put +each cell in a `with strategy.scope():` block. + +Note: This should only be called outside of any TensorFlow scope to +avoid improper nesting. + + + + + + + + + + +
+`strategy` + +A tf.distribute.Strategy object or None. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If called inside a `with strategy.scope():`. +
+ diff --git a/site/en/api_docs/python/tf/distribute/get_replica_context.md b/site/en/api_docs/python/tf/distribute/get_replica_context.md new file mode 100644 index 00000000000..16636381c26 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/get_replica_context.md @@ -0,0 +1,95 @@ +description: Returns the current tf.distribute.ReplicaContext or None. + +
+ + +
+ +# tf.distribute.get_replica_context + + + + + + + + + +Returns the current tf.distribute.ReplicaContext or `None`. + + + + + + + + + +Returns `None` if in a cross-replica context. + +#### Note that execution: + + + +1. starts in the default (single-replica) replica context (this function + will return the default `ReplicaContext` object); +2. switches to cross-replica context (in which case this will return + `None`) when entering a `with tf.distribute.Strategy.scope():` block; +3. switches to a (non-default) replica context inside `strategy.run(fn, ...)`; +4. if `fn` calls `get_replica_context().merge_call(merge_fn, ...)`, then + inside `merge_fn` you are back in the cross-replica context (and again + this function will return `None`). + +Most tf.distribute.Strategy methods may only be executed in +a cross-replica context, in a replica context you should use the +API of the tf.distribute.ReplicaContext object returned by this +method instead. + +``` +assert tf.distribute.get_replica_context() is not None # default +with strategy.scope(): + assert tf.distribute.get_replica_context() is None + + def f(): + replica_context = tf.distribute.get_replica_context() # for strategy + assert replica_context is not None + tf.print("Replica id: ", replica_context.replica_id_in_sync_group, + " of ", replica_context.num_replicas_in_sync) + + strategy.run(f) +``` + + + + + + + + + +
+
+The current tf.distribute.ReplicaContext object when in a replica context
+scope, else `None`.
+
+Within a particular block, exactly one of these two things will be true:
+
+* `get_replica_context()` returns non-`None`, or
+* `tf.distribute.in_cross_replica_context()` returns True.
+
+ diff --git a/site/en/api_docs/python/tf/distribute/get_strategy.md b/site/en/api_docs/python/tf/distribute/get_strategy.md new file mode 100644 index 00000000000..23d1f8ce2b5 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/get_strategy.md @@ -0,0 +1,65 @@ +description: Returns the current tf.distribute.Strategy object. + +
+ + +
+ +# tf.distribute.get_strategy + + + + + + + + + +Returns the current tf.distribute.Strategy object. + + + + + + + + + +Typically only used in a cross-replica context: + +``` +if tf.distribute.in_cross_replica_context(): + strategy = tf.distribute.get_strategy() + ... +``` + + + + + + + + + +
+A tf.distribute.Strategy object. Inside a `with strategy.scope()` block, +it returns `strategy`, otherwise it returns the default (single-replica) +tf.distribute.Strategy object. +
+ diff --git a/site/en/api_docs/python/tf/distribute/has_strategy.md b/site/en/api_docs/python/tf/distribute/has_strategy.md new file mode 100644 index 00000000000..5a9f436f56d --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/has_strategy.md @@ -0,0 +1,61 @@ +description: Return if there is a current non-default tf.distribute.Strategy. + +
+ + +
+ +# tf.distribute.has_strategy + + + + + + + + + +Return if there is a current non-default tf.distribute.Strategy. + + + + + + + + + +``` +assert not tf.distribute.has_strategy() +with strategy.scope(): + assert tf.distribute.has_strategy() +``` + + + + + + + + + +
+True if inside a `with strategy.scope():`. +
+ diff --git a/site/en/api_docs/python/tf/distribute/in_cross_replica_context.md b/site/en/api_docs/python/tf/distribute/in_cross_replica_context.md new file mode 100644 index 00000000000..3bf6a8fe6a7 --- /dev/null +++ b/site/en/api_docs/python/tf/distribute/in_cross_replica_context.md @@ -0,0 +1,70 @@ +description: Returns True if in a cross-replica context. + +
+ + +
+ +# tf.distribute.in_cross_replica_context + + + + + + + + + +Returns `True` if in a cross-replica context. + + + + + + + + + +See tf.distribute.get_replica_context for details. + +``` +assert not tf.distribute.in_cross_replica_context() +with strategy.scope(): + assert tf.distribute.in_cross_replica_context() + + def f(): + assert not tf.distribute.in_cross_replica_context() + + strategy.run(f) +``` + + + + + + + + + +
+`True` if in a cross-replica context (`get_replica_context()` returns +`None`), or `False` if in a replica context (`get_replica_context()` returns +non-`None`). +
+ diff --git a/site/en/api_docs/python/tf/dtypes.md b/site/en/api_docs/python/tf/dtypes.md new file mode 100644 index 00000000000..54ed0abd5d9 --- /dev/null +++ b/site/en/api_docs/python/tf/dtypes.md @@ -0,0 +1,89 @@ +description: Public API for tf.dtypes namespace. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# Module: tf.dtypes + + + + + + + + + +Public API for tf.dtypes namespace. + + + +## Classes + +[`class DType`](../tf/dtypes/DType.md): Represents the type of the elements in a `Tensor`. + +## Functions + +[`as_dtype(...)`](../tf/dtypes/as_dtype.md): Converts the given `type_value` to a `DType`. + +[`cast(...)`](../tf/cast.md): Casts a tensor to a new type. + +[`complex(...)`](../tf/dtypes/complex.md): Converts two real numbers to a complex number. + +[`saturate_cast(...)`](../tf/dtypes/saturate_cast.md): Performs a safe saturating cast of `value` to `dtype`. + +## Other Members + +* `QUANTIZED_DTYPES` +* `bfloat16` +* `bool` +* `complex128` +* `complex64` +* `double` +* `float16` +* `float32` +* `float64` +* `half` +* `int16` +* `int32` +* `int64` +* `int8` +* `qint16` +* `qint32` +* `qint8` +* `quint16` +* `quint8` +* `resource` +* `string` +* `uint16` +* `uint32` +* `uint64` +* `uint8` +* `variant` diff --git a/site/en/api_docs/python/tf/dtypes/DType.md b/site/en/api_docs/python/tf/dtypes/DType.md new file mode 100644 index 00000000000..ab1779267fe --- /dev/null +++ b/site/en/api_docs/python/tf/dtypes/DType.md @@ -0,0 +1,294 @@ +description: Represents the type of the elements in a Tensor. + +
+ + + + + + + +
+ +# tf.dtypes.DType + + + + + + + + + +Represents the type of the elements in a `Tensor`. + + + + + + + + + +The following `DType` objects are defined: + +* tf.float16: 16-bit half-precision floating-point. +* tf.float32: 32-bit single-precision floating-point. +* tf.float64: 64-bit double-precision floating-point. +* tf.bfloat16: 16-bit truncated floating-point. +* tf.complex64: 64-bit single-precision complex. +* tf.complex128: 128-bit double-precision complex. +* tf.int8: 8-bit signed integer. +* tf.uint8: 8-bit unsigned integer. +* tf.uint16: 16-bit unsigned integer. +* tf.uint32: 32-bit unsigned integer. +* tf.uint64: 64-bit unsigned integer. +* tf.int16: 16-bit signed integer. +* tf.int32: 32-bit signed integer. +* tf.int64: 64-bit signed integer. +* tf.bool: Boolean. +* tf.string: String. +* tf.qint8: Quantized 8-bit signed integer. +* tf.quint8: Quantized 8-bit unsigned integer. +* tf.qint16: Quantized 16-bit signed integer. +* tf.quint16: Quantized 16-bit unsigned integer. +* tf.qint32: Quantized 32-bit signed integer. +* tf.resource: Handle to a mutable resource. +* tf.variant: Values of arbitrary types. + +The tf.as_dtype() function converts numpy types and string type +names to a `DType` object. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`as_datatype_enum` + +Returns a `types_pb2.DataType` enum value based on this data type. +
+`as_numpy_dtype` + +Returns a Python `type` object based on this `DType`. +
+`base_dtype` + +Returns a non-reference `DType` based on this `DType`. +
+`is_bool` + +Returns whether this is a boolean data type. +
+`is_complex` + +Returns whether this is a complex floating point type. +
+`is_floating` + +Returns whether this is a (non-quantized, real) floating point type. +
+`is_integer` + +Returns whether this is a (non-quantized) integer type. +
+`is_numpy_compatible` + +Returns whether this data type has a compatible NumPy data type. +
+`is_quantized` + +Returns whether this is a quantized data type. +
+`is_unsigned` + +Returns whether this type is unsigned. + +Non-numeric, unordered, and quantized types are not considered unsigned, and +this function returns `False`. +
+
+`limits`
+
+Return intensity limits, i.e. the `(min, max)` tuple of the dtype.
+
+Args:
+  clip_negative: bool, optional. If True, clip the negative range (i.e.
+    return 0 for min intensity) even if the image dtype allows negative
+    values.
+
+Returns:
+  min, max: tuple. Lower and upper intensity limits.
+
+`max` + +Returns the maximum representable value in this data type. +
+`min` + +Returns the minimum representable value in this data type. +
+`name` + + +
+`real_dtype` + +Returns the `DType` corresponding to this `DType`'s real part. +
+`size` + + +
+ + + +## Methods + +

is_compatible_with

+ +View source + + + +Returns True if the `other` DType will be converted to this DType. + +The conversion rules are as follows: + +```python +DType(T) .is_compatible_with(DType(T)) == True +``` + + + + + + + + + + +
Args
+`other` + +A `DType` (or object that may be converted to a `DType`). +
+ + + + + + + + + + + +
Returns
+True if a Tensor of the `other` `DType` will be implicitly converted to +this `DType`. +
+ + + +
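+
+A small sketch of the conversion and compatibility checks described above
+(the specific dtypes are arbitrary examples):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+# tf.as_dtype accepts DType objects, strings and NumPy types.
+assert tf.as_dtype("float32") == tf.float32
+assert tf.as_dtype(np.int64) == tf.int64
+
+# A DType is compatible with itself but not with an unrelated DType.
+assert tf.float32.is_compatible_with(tf.float32)
+assert not tf.float32.is_compatible_with(tf.int32)
+```
+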

__eq__

+ +View source + + + +Returns True iff this DType refers to the same type as `other`. + + +
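+
+For illustration, two `DType` values representing the same underlying type
+(including one obtained via tf.as_dtype) compare equal:
+
+```python
+import tensorflow as tf
+
+assert tf.float32 == tf.as_dtype("float32")
+assert tf.float32 != tf.float64
+```
+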

__ne__

+ +View source + + + +Returns True iff self != other. + + + + diff --git a/site/en/api_docs/python/tf/dtypes/as_dtype.md b/site/en/api_docs/python/tf/dtypes/as_dtype.md new file mode 100644 index 00000000000..71efc052770 --- /dev/null +++ b/site/en/api_docs/python/tf/dtypes/as_dtype.md @@ -0,0 +1,98 @@ +description: Converts the given type_value to a DType. + +
+ + +
+ +# tf.dtypes.as_dtype + + + + + + + + + +Converts the given `type_value` to a `DType`. + + + + + + + + + + + + + + + + + + + +
+`type_value` + +A value that can be converted to a tf.DType object. This may +currently be a tf.DType object, a [`DataType` +enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto), +a string type name, or a `numpy.dtype`. +
+ + + + + + + + + + + +
+A `DType` corresponding to `type_value`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `type_value` cannot be converted to a `DType`. +
+ diff --git a/site/en/api_docs/python/tf/dtypes/complex.md b/site/en/api_docs/python/tf/dtypes/complex.md new file mode 100644 index 00000000000..004f1bd38f0 --- /dev/null +++ b/site/en/api_docs/python/tf/dtypes/complex.md @@ -0,0 +1,125 @@ +description: Converts two real numbers to a complex number. + +
+ + +
+ +# tf.dtypes.complex + + + + + + + + + +Converts two real numbers to a complex number. + + + + + + + + + +Given a tensor `real` representing the real part of a complex number, and a +tensor `imag` representing the imaginary part of a complex number, this +operation returns complex numbers elementwise of the form \\(a + bj\\), where +*a* represents the `real` part and *b* represents the `imag` part. + +The input tensors `real` and `imag` must have the same shape. + +#### For example: + + + +```python +real = tf.constant([2.25, 3.25]) +imag = tf.constant([4.75, 5.75]) +tf.complex(real, imag) # [[2.25 + 4.75j], [3.25 + 5.75j]] +``` + + + + + + + + + + + + + + + + +
+`real` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`imag` + +A `Tensor`. Must have the same type as `real`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `complex64` or `complex128`. +
+ + + + + + + + + + + + +
+
+`TypeError`
+
+If `real` and `imag` are not of the correct types.
+
+ diff --git a/site/en/api_docs/python/tf/dtypes/saturate_cast.md b/site/en/api_docs/python/tf/dtypes/saturate_cast.md new file mode 100644 index 00000000000..08c0b7350af --- /dev/null +++ b/site/en/api_docs/python/tf/dtypes/saturate_cast.md @@ -0,0 +1,95 @@ +description: Performs a safe saturating cast of value to dtype. + +
+ + +
+ +# tf.dtypes.saturate_cast + + + + + + + + + +Performs a safe saturating cast of `value` to `dtype`. + + + + + + + + + +This function casts the input to `dtype` without applying any scaling. If +there is a danger that values would over or underflow in the cast, this op +applies the appropriate clamping before the cast. + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. +
+`dtype` + +The desired output `DType`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+`value` safely cast to `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/dynamic_partition.md b/site/en/api_docs/python/tf/dynamic_partition.md new file mode 100644 index 00000000000..e5ad0140078 --- /dev/null +++ b/site/en/api_docs/python/tf/dynamic_partition.md @@ -0,0 +1,132 @@ +description: Partitions data into num_partitions tensors using indices from partitions. + +
+ + +
+ +# tf.dynamic_partition + + + + + + + + + +Partitions `data` into `num_partitions` tensors using indices from `partitions`. + + + + + + + + + +For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]` +becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i` +are placed in `outputs[i]` in lexicographic order of `js`, and the first +dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`. +In detail, + +```python + outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:] + + outputs[i] = pack([data[js, ...] for js if partitions[js] == i]) +``` + +`data.shape` must start with `partitions.shape`. + +#### For example: + + + +```python + # Scalar partitions. + partitions = 1 + num_partitions = 2 + data = [10, 20] + outputs[0] = [] # Empty with shape [0, 2] + outputs[1] = [[10, 20]] + + # Vector partitions. + partitions = [0, 0, 1, 1, 0] + num_partitions = 2 + data = [10, 20, 30, 40, 50] + outputs[0] = [10, 20, 50] + outputs[1] = [30, 40] +``` + +See `dynamic_stitch` for an example on how to merge partitions back. + +
+ +
+ + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. +
+`partitions` + +A `Tensor` of type `int32`. +Any shape. Indices in the range `[0, num_partitions)`. +
+`num_partitions` + +An `int` that is `>= 1`. +The number of partitions to output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `num_partitions` `Tensor` objects with the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/dynamic_stitch.md b/site/en/api_docs/python/tf/dynamic_stitch.md new file mode 100644 index 00000000000..49f4f2f4faf --- /dev/null +++ b/site/en/api_docs/python/tf/dynamic_stitch.md @@ -0,0 +1,148 @@ +description: Interleave the values from the data tensors into a single tensor. + +
+ + +
+ +# tf.dynamic_stitch + + + + + + + + + +Interleave the values from the `data` tensors into a single tensor. + + + + + + + + + +Builds a merged tensor such that + +```python + merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...] +``` + +For example, if each `indices[m]` is scalar or vector, we have + +```python + # Scalar indices: + merged[indices[m], ...] = data[m][...] + + # Vector indices: + merged[indices[m][i], ...] = data[m][i, ...] +``` + +Each `data[i].shape` must start with the corresponding `indices[i].shape`, +and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we +must have `data[i].shape = indices[i].shape + constant`. In terms of this +`constant`, the output shape is + + merged.shape = [max(indices)] + constant + +Values are merged in order, so if an index appears in both `indices[m][i]` and +`indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the +merged result. If you do not need this guarantee, ParallelDynamicStitch might +perform better on some devices. + +#### For example: + + + +```python + indices[0] = 6 + indices[1] = [4, 1] + indices[2] = [[5, 2], [0, 3]] + data[0] = [61, 62] + data[1] = [[41, 42], [11, 12]] + data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]] + merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], + [51, 52], [61, 62]] +``` + +This method can be used to merge partitions created by `dynamic_partition` +as illustrated on the following example: + +```python + # Apply function (increments x_i) on elements for which a certain condition + # apply (x_i != -1 in this example). + x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4]) + condition_mask=tf.not_equal(x,tf.constant(-1.)) + partitioned_data = tf.dynamic_partition( + x, tf.cast(condition_mask, tf.int32) , 2) + partitioned_data[1] = partitioned_data[1] + 1.0 + condition_indices = tf.dynamic_partition( + tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2) + x = tf.dynamic_stitch(condition_indices, partitioned_data) + # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain + # unchanged. +``` + +
+ +
+ + + + + + + + + + + + + + + + +
+`indices` + +A list of at least 1 `Tensor` objects with type `int32`. +
+`data` + +A list with the same length as `indices` of `Tensor` objects with the same type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/edit_distance.md b/site/en/api_docs/python/tf/edit_distance.md new file mode 100644 index 00000000000..1b9a7f8d2a5 --- /dev/null +++ b/site/en/api_docs/python/tf/edit_distance.md @@ -0,0 +1,156 @@ +description: Computes the Levenshtein distance between sequences. + +
+ + +
+ +# tf.edit_distance + + + + + + + + + +Computes the Levenshtein distance between sequences. + + + + + + + + + +This operation takes variable-length sequences (`hypothesis` and `truth`), +each provided as a `SparseTensor`, and computes the Levenshtein distance. +You can normalize the edit distance by length of `truth` by setting +`normalize` to true. + +For example, given the following input: + +```python +# 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values: +# (0,0) = ["a"] +# (1,0) = ["b"] +hypothesis = tf.SparseTensor( + [[0, 0, 0], + [1, 0, 0]], + ["a", "b"], + (2, 1, 1)) + +# 'truth' is a tensor of shape `[2, 2]` with variable-length values: +# (0,0) = [] +# (0,1) = ["a"] +# (1,0) = ["b", "c"] +# (1,1) = ["a"] +truth = tf.SparseTensor( + [[0, 1, 0], + [1, 0, 0], + [1, 0, 1], + [1, 1, 0]], + ["a", "b", "c", "a"], + (2, 2, 2)) + +normalize = True +``` + +This operation would return the following: + +```python +# 'output' is a tensor of shape `[2, 2]` with edit distances normalized +# by 'truth' lengths. +output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis + [0.5, 1.0]] # (1,0): addition, (1,1): no hypothesis +``` + + + + + + + + + + + + + + + + + + + +
+`hypothesis` + +A `SparseTensor` containing hypothesis sequences. +
+`truth` + +A `SparseTensor` containing truth sequences. +
+`normalize` + +A `bool`. If `True`, normalizes the Levenshtein distance by +length of `truth.` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A dense `Tensor` with rank `R - 1`, where R is the rank of the +`SparseTensor` inputs `hypothesis` and `truth`. +
+ + + + + + + + + + + + +
+`TypeError` + +If either `hypothesis` or `truth` are not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/einsum.md b/site/en/api_docs/python/tf/einsum.md new file mode 100644 index 00000000000..2677ef9acaf --- /dev/null +++ b/site/en/api_docs/python/tf/einsum.md @@ -0,0 +1,169 @@ +description: Tensor contraction over specified indices and outer product. + +
+ + +
+ +# tf.einsum + + + + + + + + + +Tensor contraction over specified indices and outer product. + + + + + + + + + +Einsum allows defining Tensors by defining their element-wise computation. +This computation is defined by `equation`, a shorthand form based on Einstein +summation. As an example, consider multiplying two matrices A and B to form a +matrix C. The elements of C are given by: + +``` + C[i,k] = sum_j A[i,j] * B[j,k] +``` + +The corresponding `equation` is: + +``` + ij,jk->ik +``` + +In general, to convert the element-wise equation into the `equation` string, +use the following procedure (intermediate strings for matrix multiplication +example provided in parentheses): + +1. remove variable names, brackets, and commas, (`ik = sum_j ij * jk`) +2. replace "*" with ",", (`ik = sum_j ij , jk`) +3. drop summation signs, and (`ik = ij, jk`) +4. move the output to the right, while replacing "=" with "->". (`ij,jk->ik`) + +Many common operations can be expressed in this way. For example: + +```python +# Matrix multiplication +einsum('ij,jk->ik', m0, m1) # output[i,k] = sum_j m0[i,j] * m1[j, k] + +# Dot product +einsum('i,i->', u, v) # output = sum_i u[i]*v[i] + +# Outer product +einsum('i,j->ij', u, v) # output[i,j] = u[i]*v[j] + +# Transpose +einsum('ij->ji', m) # output[j,i] = m[i,j] + +# Trace +einsum('ii', m) # output[j,i] = trace(m) = sum_i m[i, i] + +# Batch matrix multiplication +einsum('aij,ajk->aik', s, t) # out[a,i,k] = sum_j s[a,i,j] * t[a, j, k] +``` + +To enable and control broadcasting, use an ellipsis. For example, to perform +batch matrix multiplication with NumPy-style broadcasting across the batch +dimensions, use: + +```python +einsum('...ij,...jk->...ik', u, v) +``` + + + + + + + + + + + + + + + + +
+`equation` + +a `str` describing the contraction, in the same format as +`numpy.einsum`. +
+`*inputs` + +the inputs to contract (each one a `Tensor`), whose shapes should +be consistent with `equation`. +
+`**kwargs` + +- optimize: Optimization strategy to use to find contraction path using +opt_einsum. Must be 'greedy', 'optimal', 'branch-2', 'branch-all' or +'auto'. (optional, default: 'greedy'). +- name: A name for the operation (optional). +
+ + + + + + + + + + + +
+The contracted `Tensor`, with shape determined by `equation`. +
+ + + + + + + + + + + + +
+`ValueError` + +If +- the format of `equation` is incorrect, +- number of inputs or their shapes are inconsistent with `equation`. +
+ diff --git a/site/en/api_docs/python/tf/ensure_shape.md b/site/en/api_docs/python/tf/ensure_shape.md new file mode 100644 index 00000000000..c08832c83a4 --- /dev/null +++ b/site/en/api_docs/python/tf/ensure_shape.md @@ -0,0 +1,121 @@ +description: Updates the shape of a tensor and checks at runtime that the shape holds. + +
+ + +
+ +# tf.ensure_shape + + + + + + + + + +Updates the shape of a tensor and checks at runtime that the shape holds. + + + + + + + + + + +#### For example: + + +```python +x = tf.compat.v1.placeholder(tf.int32) +print(x.shape) +==> TensorShape(None) +y = x * 2 +print(y.shape) +==> TensorShape(None) + +y = tf.ensure_shape(y, (None, 3, 3)) +print(y.shape) +==> TensorShape([Dimension(None), Dimension(3), Dimension(3)]) + +with tf.compat.v1.Session() as sess: + # Raises tf.errors.InvalidArgumentError, because the shape (3,) is not + # compatible with the shape (None, 3, 3) + sess.run(y, feed_dict={x: [1, 2, 3]}) + +``` + +NOTE: This differs from Tensor.set_shape in that it sets the static shape +of the resulting tensor and enforces it at runtime, raising an error if the +tensor's runtime shape is incompatible with the specified shape. +Tensor.set_shape sets the static shape of the tensor without enforcing it +at runtime, which may result in inconsistencies between the statically-known +shape of tensors and the runtime value of tensors. + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. +
+`shape` + +A `TensorShape` representing the shape of this tensor, a +`TensorShapeProto`, a list, a tuple, or None. +
+`name` + +A name for this operation (optional). Defaults to "EnsureShape". +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type and contents as `x`. At runtime, raises a +tf.errors.InvalidArgumentError if `shape` is incompatible with the shape +of `x`. +
+ diff --git a/site/en/api_docs/python/tf/errors.md b/site/en/api_docs/python/tf/errors.md new file mode 100644 index 00000000000..72b347f89f8 --- /dev/null +++ b/site/en/api_docs/python/tf/errors.md @@ -0,0 +1,93 @@ +description: Exception types for TensorFlow errors. + +
+ + + + + + + + + + + + + + + + + + + +
+ +# Module: tf.errors + + + + + + + + + +Exception types for TensorFlow errors. + + + +## Classes + +[`class AbortedError`](../tf/errors/AbortedError.md): The operation was aborted, typically due to a concurrent action. + +[`class AlreadyExistsError`](../tf/errors/AlreadyExistsError.md): Raised when an entity that we attempted to create already exists. + +[`class CancelledError`](../tf/errors/CancelledError.md): Raised when an operation or step is cancelled. + +[`class DataLossError`](../tf/errors/DataLossError.md): Raised when unrecoverable data loss or corruption is encountered. + +[`class DeadlineExceededError`](../tf/errors/DeadlineExceededError.md): Raised when a deadline expires before an operation could complete. + +[`class FailedPreconditionError`](../tf/errors/FailedPreconditionError.md): Operation was rejected because the system is not in a state to execute it. + +[`class InternalError`](../tf/errors/InternalError.md): Raised when the system experiences an internal error. + +[`class InvalidArgumentError`](../tf/errors/InvalidArgumentError.md): Raised when an operation receives an invalid argument. + +[`class NotFoundError`](../tf/errors/NotFoundError.md): Raised when a requested entity (e.g., a file or directory) was not found. + +[`class OpError`](../tf/errors/OpError.md): A generic error that is raised when TensorFlow execution fails. + +[`class OutOfRangeError`](../tf/errors/OutOfRangeError.md): Raised when an operation iterates past the valid input range. + +[`class PermissionDeniedError`](../tf/errors/PermissionDeniedError.md): Raised when the caller does not have permission to run an operation. + +[`class ResourceExhaustedError`](../tf/errors/ResourceExhaustedError.md): Some resource has been exhausted. + +[`class UnauthenticatedError`](../tf/errors/UnauthenticatedError.md): The request does not have valid authentication credentials. + +[`class UnavailableError`](../tf/errors/UnavailableError.md): Raised when the runtime is currently unavailable. + +[`class UnimplementedError`](../tf/errors/UnimplementedError.md): Raised when an operation has not been implemented. + +[`class UnknownError`](../tf/errors/UnknownError.md): Unknown error. + +## Other Members + +* `ABORTED = 10` +* `ALREADY_EXISTS = 6` +* `CANCELLED = 1` +* `DATA_LOSS = 15` +* `DEADLINE_EXCEEDED = 4` +* `FAILED_PRECONDITION = 9` +* `INTERNAL = 13` +* `INVALID_ARGUMENT = 3` +* `NOT_FOUND = 5` +* `OK = 0` +* `OUT_OF_RANGE = 11` +* `PERMISSION_DENIED = 7` +* `RESOURCE_EXHAUSTED = 8` +* `UNAUTHENTICATED = 16` +* `UNAVAILABLE = 14` +* `UNIMPLEMENTED = 12` +* `UNKNOWN = 2` diff --git a/site/en/api_docs/python/tf/errors/AbortedError.md b/site/en/api_docs/python/tf/errors/AbortedError.md new file mode 100644 index 00000000000..3e8cd43bde2 --- /dev/null +++ b/site/en/api_docs/python/tf/errors/AbortedError.md @@ -0,0 +1,102 @@ +description: The operation was aborted, typically due to a concurrent action. + +
+ + + +
+ +# tf.errors.AbortedError + + + + + + + + + +The operation was aborted, typically due to a concurrent action. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +For example, running a +`tf.QueueBase.enqueue` +operation may raise `AbortedError` if a +`tf.QueueBase.close` operation +previously ran. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/errors/AlreadyExistsError.md b/site/en/api_docs/python/tf/errors/AlreadyExistsError.md new file mode 100644 index 00000000000..bf182af6c83 --- /dev/null +++ b/site/en/api_docs/python/tf/errors/AlreadyExistsError.md @@ -0,0 +1,101 @@ +description: Raised when an entity that we attempted to create already exists. + +
+ + + +
+ +# tf.errors.AlreadyExistsError + + + + + + + + + +Raised when an entity that we attempted to create already exists. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +For example, running an operation that saves a file +(e.g. `tf.train.Saver.save`) +could potentially raise this exception if an explicit filename for an +existing file was passed. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/errors/CancelledError.md b/site/en/api_docs/python/tf/errors/CancelledError.md new file mode 100644 index 00000000000..e214abd01ee --- /dev/null +++ b/site/en/api_docs/python/tf/errors/CancelledError.md @@ -0,0 +1,104 @@ +description: Raised when an operation or step is cancelled. + +
+ + + +
+ +# tf.errors.CancelledError + + + + + + + + + +Raised when an operation or step is cancelled. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +For example, a long-running operation (e.g. +`tf.QueueBase.enqueue` may be +cancelled by running another operation (e.g. +`tf.QueueBase.close`, +or by `tf.Session.close`. +A step that is running such a long-running operation will fail by raising +`CancelledError`. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/errors/DataLossError.md b/site/en/api_docs/python/tf/errors/DataLossError.md new file mode 100644 index 00000000000..b0167cb6064 --- /dev/null +++ b/site/en/api_docs/python/tf/errors/DataLossError.md @@ -0,0 +1,100 @@ +description: Raised when unrecoverable data loss or corruption is encountered. + +
+ + + +
+ +# tf.errors.DataLossError + + + + + + + + + +Raised when unrecoverable data loss or corruption is encountered. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +For example, this may be raised by running a +`tf.WholeFileReader.read` +operation, if the file is truncated while it is being read. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/errors/DeadlineExceededError.md b/site/en/api_docs/python/tf/errors/DeadlineExceededError.md new file mode 100644 index 00000000000..b784e1dc02f --- /dev/null +++ b/site/en/api_docs/python/tf/errors/DeadlineExceededError.md @@ -0,0 +1,98 @@ +description: Raised when a deadline expires before an operation could complete. + +
+ + + +
+ +# tf.errors.DeadlineExceededError + + + + + + + + + +Raised when a deadline expires before an operation could complete. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +This exception is not currently used. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/errors/FailedPreconditionError.md b/site/en/api_docs/python/tf/errors/FailedPreconditionError.md new file mode 100644 index 00000000000..c274a73f5bc --- /dev/null +++ b/site/en/api_docs/python/tf/errors/FailedPreconditionError.md @@ -0,0 +1,100 @@ +description: Operation was rejected because the system is not in a state to execute it. + +
+ + + +
+ +# tf.errors.FailedPreconditionError + + + + + + + + + +Operation was rejected because the system is not in a state to execute it. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +This exception is most commonly raised when running an operation +that reads a tf.Variable +before it has been initialized. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/errors/InternalError.md b/site/en/api_docs/python/tf/errors/InternalError.md new file mode 100644 index 00000000000..cd0d6a63508 --- /dev/null +++ b/site/en/api_docs/python/tf/errors/InternalError.md @@ -0,0 +1,99 @@ +description: Raised when the system experiences an internal error. + +
+ + + +
+ +# tf.errors.InternalError + + + + + + + + + +Raised when the system experiences an internal error. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +This exception is raised when some invariant expected by the runtime +has been broken. Catching this exception is not recommended. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/errors/InvalidArgumentError.md b/site/en/api_docs/python/tf/errors/InvalidArgumentError.md new file mode 100644 index 00000000000..e8934213886 --- /dev/null +++ b/site/en/api_docs/python/tf/errors/InvalidArgumentError.md @@ -0,0 +1,104 @@ +description: Raised when an operation receives an invalid argument. + +
+ + + +
+ +# tf.errors.InvalidArgumentError + + + + + + + + + +Raised when an operation receives an invalid argument. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +This may occur, for example, if an operation receives an input +tensor that has an invalid value or shape. For example, the +tf.matmul op will raise this +error if it receives an input that is not a matrix, and the +tf.reshape op will raise +this error if the new shape does not match the number of elements in the input +tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/errors/NotFoundError.md b/site/en/api_docs/python/tf/errors/NotFoundError.md new file mode 100644 index 00000000000..5b8e9cb0f3a --- /dev/null +++ b/site/en/api_docs/python/tf/errors/NotFoundError.md @@ -0,0 +1,101 @@ +description: Raised when a requested entity (e.g., a file or directory) was not found. + +
+ + + +
+ +# tf.errors.NotFoundError + + + + + + + + + +Raised when a requested entity (e.g., a file or directory) was not found. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +For example, running the +`tf.WholeFileReader.read` +operation could raise `NotFoundError` if it receives the name of a file that +does not exist. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
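+
+A minimal sketch of catching this error in eager mode; the path below is a
+placeholder that is assumed not to exist:
+
+```python
+import tensorflow as tf
+
+try:
+  tf.io.read_file("/no/such/file.txt")
+except tf.errors.NotFoundError as e:
+  print("Caught:", e.message)
+```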
+ + + diff --git a/site/en/api_docs/python/tf/errors/OpError.md b/site/en/api_docs/python/tf/errors/OpError.md new file mode 100644 index 00000000000..8f9365ac80a --- /dev/null +++ b/site/en/api_docs/python/tf/errors/OpError.md @@ -0,0 +1,135 @@ +description: A generic error that is raised when TensorFlow execution fails. + +
+ + + +
+ +# tf.errors.OpError + + + + + + + + + +A generic error that is raised when TensorFlow execution fails. + + + + + + + + + +Whenever possible, the session will raise a more specific subclass +of `OpError` from the tf.errors module. + + + + + + + + + + + + + + + + + + + +
+`node_def` + +The `node_def_pb2.NodeDef` proto representing the op that +failed, if known; otherwise None. +
+`op` + +The `ops.Operation` that failed, if known; otherwise None. +
+`message` + +The message string describing the failure. +
+`error_code` + +The `error_codes_pb2.Code` describing the error. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
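+
+A small sketch of handling any TensorFlow runtime failure through this base
+class and inspecting the properties listed above. The failing op here is just
+an arbitrary shape mismatch, which raises the `InvalidArgumentError` subclass:
+
+```python
+import tensorflow as tf
+
+try:
+  tf.reshape(tf.range(6), [7])  # 6 elements cannot fill a length-7 tensor
+except tf.errors.OpError as e:
+  print(type(e).__name__)  # the specific subclass, e.g. InvalidArgumentError
+  print(e.message)         # human-readable description
+  print(e.error_code)      # integer status code
+```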
+ + + diff --git a/site/en/api_docs/python/tf/errors/OutOfRangeError.md b/site/en/api_docs/python/tf/errors/OutOfRangeError.md new file mode 100644 index 00000000000..b0c47600cf1 --- /dev/null +++ b/site/en/api_docs/python/tf/errors/OutOfRangeError.md @@ -0,0 +1,102 @@ +description: Raised when an operation iterates past the valid input range. + +
+ + + +
+ +# tf.errors.OutOfRangeError + + + + + + + + + +Raised when an operation iterates past the valid input range. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +This exception is raised in "end-of-file" conditions, such as when a +`tf.QueueBase.dequeue` +operation is blocked on an empty queue, and a +`tf.QueueBase.close` +operation executes. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
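+
+The queue-based example above is TF1-oriented; a common place this error
+surfaces today is `tf.data.Iterator.get_next()` at the end of a dataset, as in
+this sketch:
+
+```python
+import tensorflow as tf
+
+iterator = iter(tf.data.Dataset.range(2))
+try:
+  while True:
+    print(iterator.get_next().numpy())
+except tf.errors.OutOfRangeError:
+  print("End of dataset reached.")
+```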
+ + + diff --git a/site/en/api_docs/python/tf/errors/PermissionDeniedError.md b/site/en/api_docs/python/tf/errors/PermissionDeniedError.md new file mode 100644 index 00000000000..372a59510c3 --- /dev/null +++ b/site/en/api_docs/python/tf/errors/PermissionDeniedError.md @@ -0,0 +1,101 @@ +description: Raised when the caller does not have permission to run an operation. + +
+ + + +
+ +# tf.errors.PermissionDeniedError + + + + + + + + + +Raised when the caller does not have permission to run an operation. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +For example, running the +`tf.WholeFileReader.read` +operation could raise `PermissionDeniedError` if it receives the name of a +file for which the user does not have the read file permission. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
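+
+A hedged sketch of one way this can be reproduced on a POSIX system: create a
+file, remove its read permission, and read it with `tf.io.read_file`. The path
+handling is illustrative, and running as root would not trigger the error:
+
+```python
+import os
+import tempfile
+
+import tensorflow as tf
+
+path = os.path.join(tempfile.mkdtemp(), "locked.txt")
+with open(path, "w") as f:
+  f.write("secret")
+os.chmod(path, 0o000)  # drop all permissions
+
+try:
+  tf.io.read_file(path)
+except tf.errors.PermissionDeniedError as e:
+  print("Caught:", e.message)
+```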
+ + + diff --git a/site/en/api_docs/python/tf/errors/ResourceExhaustedError.md b/site/en/api_docs/python/tf/errors/ResourceExhaustedError.md new file mode 100644 index 00000000000..c7e0eaf37df --- /dev/null +++ b/site/en/api_docs/python/tf/errors/ResourceExhaustedError.md @@ -0,0 +1,99 @@ +description: Some resource has been exhausted. + +
+ + + +
+ +# tf.errors.ResourceExhaustedError + + + + + + + + + +Some resource has been exhausted. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +For example, this error might be raised if a per-user quota is +exhausted, or perhaps the entire file system is out of space. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
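+
+There is no portable way to demonstrate an out-of-memory condition in a doc
+example, but a defensive pattern looks roughly like the sketch below: probe an
+allocation and report exhaustion if the device raises this error (most
+reliably observed on GPUs; the oversized `shape` is up to the caller):
+
+```python
+import tensorflow as tf
+
+def fits_on_device(shape):
+  """Returns False if allocating a float32 tensor of `shape` exhausts memory."""
+  try:
+    tf.zeros(shape, dtype=tf.float32)
+    return True
+  except tf.errors.ResourceExhaustedError:
+    return False
+```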
+ + + diff --git a/site/en/api_docs/python/tf/errors/UnauthenticatedError.md b/site/en/api_docs/python/tf/errors/UnauthenticatedError.md new file mode 100644 index 00000000000..a0cd952960a --- /dev/null +++ b/site/en/api_docs/python/tf/errors/UnauthenticatedError.md @@ -0,0 +1,98 @@ +description: The request does not have valid authentication credentials. + +
+ + + +
+ +# tf.errors.UnauthenticatedError + + + + + + + + + +The request does not have valid authentication credentials. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +This exception is not currently used. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/errors/UnavailableError.md b/site/en/api_docs/python/tf/errors/UnavailableError.md new file mode 100644 index 00000000000..296e3712a4a --- /dev/null +++ b/site/en/api_docs/python/tf/errors/UnavailableError.md @@ -0,0 +1,98 @@ +description: Raised when the runtime is currently unavailable. + +
+ + + +
+ +# tf.errors.UnavailableError + + + + + + + + + +Raised when the runtime is currently unavailable. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +This exception is not currently used. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/errors/UnimplementedError.md b/site/en/api_docs/python/tf/errors/UnimplementedError.md new file mode 100644 index 00000000000..75b11b8f93b --- /dev/null +++ b/site/en/api_docs/python/tf/errors/UnimplementedError.md @@ -0,0 +1,102 @@ +description: Raised when an operation has not been implemented. + +
+ + + +
+ +# tf.errors.UnimplementedError + + + + + + + + + +Raised when an operation has not been implemented. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +Some operations may raise this error when passed otherwise-valid +arguments that it does not currently support. For example, running +the tf.nn.max_pool2d operation +would raise this error if pooling was requested on the batch dimension, +because this is not yet supported. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
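+
+A sketch of the `tf.nn.max_pool2d` case described above, with an arbitrary
+NHWC input:
+
+```python
+import tensorflow as tf
+
+x = tf.zeros([4, 8, 8, 3])  # [batch, height, width, channels]
+try:
+  # ksize[0] > 1 asks for pooling over the batch dimension.
+  tf.nn.max_pool2d(x, ksize=[2, 1, 1, 1], strides=[1, 1, 1, 1], padding="VALID")
+except tf.errors.UnimplementedError as e:
+  print("Caught:", e.message)
+```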
+ + + diff --git a/site/en/api_docs/python/tf/errors/UnknownError.md b/site/en/api_docs/python/tf/errors/UnknownError.md new file mode 100644 index 00000000000..593595e2763 --- /dev/null +++ b/site/en/api_docs/python/tf/errors/UnknownError.md @@ -0,0 +1,102 @@ +description: Unknown error. + +
+ + + +
+ +# tf.errors.UnknownError + + + + + + + + + +Unknown error. + +Inherits From: [`OpError`](../../tf/errors/OpError.md) + + + + + + + + + +An example of where this error may be returned is if a Status value +received from another address space belongs to an error-space that +is not known to this address space. Also, errors raised by APIs that +do not return enough error information may be converted to this +error. + + + + + + + + + + + + + + + + + + + + + + +
+`error_code` + +The integer error code that describes the error. +
+`message` + +The error message that describes the error. +
+`node_def` + +The `NodeDef` proto representing the op that failed. +
+`op` + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +tf.Operation +object. In that case, this will return `None`, and you should +instead use the tf.errors.OpError.node_def to +discover information about the op. +
+ + + diff --git a/site/en/api_docs/python/tf/estimator.md b/site/en/api_docs/python/tf/estimator.md new file mode 100644 index 00000000000..b431b200e21 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator.md @@ -0,0 +1,143 @@ +description: Estimator: High level tools for working with models. + +
+ + +
+ +# Module: tf.estimator + + + + + + + + + +Estimator: High level tools for working with models. + + + +## Modules + +[`experimental`](../tf/estimator/experimental.md) module: Public API for tf.estimator.experimental namespace. + +[`export`](../tf/estimator/export.md) module: All public utility methods for exporting Estimator to SavedModel. + +## Classes + +[`class BaselineClassifier`](../tf/estimator/BaselineClassifier.md): A classifier that can establish a simple baseline. + +[`class BaselineEstimator`](../tf/estimator/BaselineEstimator.md): An estimator that can establish a simple baseline. + +[`class BaselineRegressor`](../tf/estimator/BaselineRegressor.md): A regressor that can establish a simple baseline. + +[`class BestExporter`](../tf/estimator/BestExporter.md): This class exports the serving graph and checkpoints of the best models. + +[`class BinaryClassHead`](../tf/estimator/BinaryClassHead.md): Creates a `Head` for single label binary classification. + +[`class BoostedTreesClassifier`](../tf/estimator/BoostedTreesClassifier.md): A Classifier for Tensorflow Boosted Trees models. + +[`class BoostedTreesEstimator`](../tf/estimator/BoostedTreesEstimator.md): An Estimator for Tensorflow Boosted Trees models. + +[`class BoostedTreesRegressor`](../tf/estimator/BoostedTreesRegressor.md): A Regressor for Tensorflow Boosted Trees models. + +[`class CheckpointSaverHook`](../tf/estimator/CheckpointSaverHook.md): Saves checkpoints every N steps or seconds. + +[`class CheckpointSaverListener`](../tf/estimator/CheckpointSaverListener.md): Interface for listeners that take action before or after checkpoint save. + +[`class DNNClassifier`](../tf/estimator/DNNClassifier.md): A classifier for TensorFlow DNN models. + +[`class DNNEstimator`](../tf/estimator/DNNEstimator.md): An estimator for TensorFlow DNN models with user-specified head. + +[`class DNNLinearCombinedClassifier`](../tf/estimator/DNNLinearCombinedClassifier.md): An estimator for TensorFlow Linear and DNN joined classification models. + +[`class DNNLinearCombinedEstimator`](../tf/estimator/DNNLinearCombinedEstimator.md): An estimator for TensorFlow Linear and DNN joined models with custom head. + +[`class DNNLinearCombinedRegressor`](../tf/estimator/DNNLinearCombinedRegressor.md): An estimator for TensorFlow Linear and DNN joined models for regression. + +[`class DNNRegressor`](../tf/estimator/DNNRegressor.md): A regressor for TensorFlow DNN models. + +[`class Estimator`](../tf/estimator/Estimator.md): Estimator class to train and evaluate TensorFlow models. + +[`class EstimatorSpec`](../tf/estimator/EstimatorSpec.md): Ops and objects returned from a `model_fn` and passed to an `Estimator`. + +[`class EvalSpec`](../tf/estimator/EvalSpec.md): Configuration for the "eval" part for the `train_and_evaluate` call. + +[`class Exporter`](../tf/estimator/Exporter.md): A class representing a type of model export. + +[`class FeedFnHook`](../tf/estimator/FeedFnHook.md): Runs `feed_fn` and sets the `feed_dict` accordingly. + +[`class FinalExporter`](../tf/estimator/FinalExporter.md): This class exports the serving graph and checkpoints at the end. + +[`class FinalOpsHook`](../tf/estimator/FinalOpsHook.md): A hook which evaluates `Tensors` at the end of a session. + +[`class GlobalStepWaiterHook`](../tf/estimator/GlobalStepWaiterHook.md): Delays execution until global step reaches `wait_until_step`. + +[`class Head`](../tf/estimator/Head.md): Interface for the head/top of a model. 
+ +[`class LatestExporter`](../tf/estimator/LatestExporter.md): This class regularly exports the serving graph and checkpoints. + +[`class LinearClassifier`](../tf/estimator/LinearClassifier.md): Linear classifier model. + +[`class LinearEstimator`](../tf/estimator/LinearEstimator.md): An estimator for TensorFlow linear models with user-specified head. + +[`class LinearRegressor`](../tf/estimator/LinearRegressor.md): An estimator for TensorFlow Linear regression problems. + +[`class LoggingTensorHook`](../tf/estimator/LoggingTensorHook.md): Prints the given tensors every N local steps, every N seconds, or at end. + +[`class LogisticRegressionHead`](../tf/estimator/LogisticRegressionHead.md): Creates a `Head` for logistic regression. + +[`class ModeKeys`](../tf/estimator/ModeKeys.md): Standard names for Estimator model modes. + +[`class MultiClassHead`](../tf/estimator/MultiClassHead.md): Creates a `Head` for multi class classification. + +[`class MultiHead`](../tf/estimator/MultiHead.md): Creates a `Head` for multi-objective learning. + +[`class MultiLabelHead`](../tf/estimator/MultiLabelHead.md): Creates a `Head` for multi-label classification. + +[`class NanLossDuringTrainingError`](../tf/estimator/NanLossDuringTrainingError.md): Unspecified run-time error. + +[`class NanTensorHook`](../tf/estimator/NanTensorHook.md): Monitors the loss tensor and stops training if loss is NaN. + +[`class PoissonRegressionHead`](../tf/estimator/PoissonRegressionHead.md): Creates a `Head` for poisson regression using tf.nn.log_poisson_loss. + +[`class ProfilerHook`](../tf/estimator/ProfilerHook.md): Captures CPU/GPU profiling information every N steps or seconds. + +[`class RegressionHead`](../tf/estimator/RegressionHead.md): Creates a `Head` for regression using the `mean_squared_error` loss. + +[`class RunConfig`](../tf/estimator/RunConfig.md): This class specifies the configurations for an `Estimator` run. + +[`class SecondOrStepTimer`](../tf/estimator/SecondOrStepTimer.md): Timer that triggers at most once every N seconds or once every N steps. + +[`class SessionRunArgs`](../tf/estimator/SessionRunArgs.md): Represents arguments to be added to a `Session.run()` call. + +[`class SessionRunContext`](../tf/estimator/SessionRunContext.md): Provides information about the `session.run()` call being made. + +[`class SessionRunHook`](../tf/estimator/SessionRunHook.md): Hook to extend calls to MonitoredSession.run(). + +[`class SessionRunValues`](../tf/estimator/SessionRunValues.md): Contains the results of `Session.run()`. + +[`class StepCounterHook`](../tf/estimator/StepCounterHook.md): Hook that counts steps per second. + +[`class StopAtStepHook`](../tf/estimator/StopAtStepHook.md): Hook that requests stop at a specified step. + +[`class SummarySaverHook`](../tf/estimator/SummarySaverHook.md): Saves summaries every N steps. + +[`class TrainSpec`](../tf/estimator/TrainSpec.md): Configuration for the "train" part for the `train_and_evaluate` call. + +[`class VocabInfo`](../tf/estimator/VocabInfo.md): Vocabulary information for warm-starting. + +[`class WarmStartSettings`](../tf/estimator/WarmStartSettings.md): Settings for warm-starting in `tf.estimator.Estimators`. + +## Functions + +[`add_metrics(...)`](../tf/estimator/add_metrics.md): Creates a new tf.estimator.Estimator which has given metrics. + +[`classifier_parse_example_spec(...)`](../tf/estimator/classifier_parse_example_spec.md): Generates parsing spec for tf.parse_example to be used with classifiers. 
+ +[`regressor_parse_example_spec(...)`](../tf/estimator/regressor_parse_example_spec.md): Generates parsing spec for tf.parse_example to be used with regressors. + +[`train_and_evaluate(...)`](../tf/estimator/train_and_evaluate.md): Train and evaluate the `estimator`. + diff --git a/site/en/api_docs/python/tf/estimator/BaselineClassifier.md b/site/en/api_docs/python/tf/estimator/BaselineClassifier.md new file mode 100644 index 00000000000..ba201acc0d5 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/BaselineClassifier.md @@ -0,0 +1,1053 @@ +description: A classifier that can establish a simple baseline. + +
+ + + + + + + + + + + + +
+ +# tf.estimator.BaselineClassifier + + + + + + + + + +A classifier that can establish a simple baseline. + +Inherits From: [`Estimator`](../../tf/estimator/Estimator.md) + + + + + + + +This classifier ignores feature values and will learn to predict the average +value of each label. For single-label problems, this will predict the +probability distribution of the classes as seen in the labels. For multi-label +problems, this will predict the fraction of examples that are positive for +each class. + +#### Example: + + + +```python + +# Build BaselineClassifier +classifier = tf.estimator.BaselineClassifier(n_classes=3) + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +# Fit model. +classifier.train(input_fn=input_fn_train) + +# Evaluate cross entropy between the test and train labels. +loss = classifier.evaluate(input_fn=input_fn_eval)["loss"] + +# predict outputs the probability distribution of the classes as seen in +# training. +predictions = classifier.predict(new_samples) + +``` + +Input of `train` and `evaluate` should have following features, + otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with + `key=weight_column` whose value is a `Tensor`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_dir`
+
+Directory to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model.
+
+`n_classes` + +number of label classes. Default is binary classification. +It must be greater than 1. Note: Class labels are integers representing +the class index (i.e. values from 0 to n_classes-1). For arbitrary +label values (e.g. string labels), convert to class indices first. +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It will be multiplied by the loss of the example. +
+`label_vocabulary` + +Optional list of strings with size `[n_classes]` +defining the label vocabulary. Only supported for `n_classes` > 2. +
+`optimizer` + +String, `tf.keras.optimizers.*` object, or callable that +creates the optimizer to use for training. If not specified, will use +`Ftrl` as the default optimizer. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Describes how +to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `n_classes` < 2. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
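+
+The class example above is written as pseudocode (its `def input_fn_train:`
+lines omit the parentheses, and `predict` takes an `input_fn`, as documented
+under the `predict` method below). A runnable variant of the same idea, using
+synthetic `tf.data` inputs and a hypothetical `model_dir`, might look like
+this sketch:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+def input_fn_train():
+  features = {"feature": np.random.rand(100).astype(np.float32)}
+  labels = np.random.randint(0, 3, size=100)  # class indices in [0, 3)
+  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(10)
+
+def input_fn_eval():
+  features = {"feature": np.random.rand(20).astype(np.float32)}
+  labels = np.random.randint(0, 3, size=20)
+  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(10)
+
+classifier = tf.estimator.BaselineClassifier(
+    n_classes=3, model_dir="/tmp/baseline_classifier")  # hypothetical directory
+classifier.train(input_fn=input_fn_train, steps=50)
+metrics = classifier.evaluate(input_fn=input_fn_eval)
+print(metrics["loss"], metrics["accuracy"])
+```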
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory that contains evaluation metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
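+
+A short sketch of calling this method, assuming `classifier` is a trained
+estimator whose examples carry a single float feature named `"feature"` (both
+the feature spec and the export directory below are hypothetical):
+
+```python
+import tensorflow as tf
+
+feature_spec = {"feature": tf.io.FixedLenFeature([1], tf.float32)}
+serving_input_receiver_fn = (
+    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
+
+export_dir = classifier.export_saved_model(
+    "/tmp/baseline_export", serving_input_receiver_fn)
+print(export_dir)  # a timestamped subdirectory, returned as bytes
+```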
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of string, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
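+
+Because `predict` returns a generator, it is typically consumed in a loop. A
+sketch, assuming `classifier` has already been trained and that the feature
+names below match what it was trained on:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+def input_fn_predict():
+  # Only features are returned; labels are not needed for prediction.
+  features = {"feature": np.random.rand(5).astype(np.float32)}
+  return tf.data.Dataset.from_tensor_slices(features).batch(5)
+
+for prediction in classifier.predict(input_fn=input_fn_predict):
+  print(prediction["class_ids"], prediction["probabilities"])
+```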
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train +forever or train until `input_fn` generates the `tf.errors.OutOfRange` +error or `StopIteration` exception. `steps` works incrementally. If you +call two times `train(steps=10)` then training occurs in total 20 steps. +If `OutOfRange` or `StopIteration` occurs in the middle, training stops +before 20 steps. If you don't want to have incremental behavior please +set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
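+
+The difference between `steps` (incremental) and `max_steps` (absolute)
+described above can be summarized in a sketch, assuming `estimator` starts
+from a fresh `model_dir` and `input_fn_train` is defined as in the class
+example:
+
+```python
+# `steps` adds to whatever has already been trained:
+estimator.train(input_fn=input_fn_train, steps=100)  # global step reaches 100
+estimator.train(input_fn=input_fn_train, steps=100)  # global step reaches 200
+
+# `max_steps` is an absolute target:
+estimator.train(input_fn=input_fn_train, max_steps=100)  # no-op: step 200 >= 100
+```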
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/BaselineEstimator.md b/site/en/api_docs/python/tf/estimator/BaselineEstimator.md new file mode 100644 index 00000000000..f8672b77df9 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/BaselineEstimator.md @@ -0,0 +1,1001 @@ +description: An estimator that can establish a simple baseline. + +
+ + + + + + + + + + + + +
+ +# tf.estimator.BaselineEstimator + + + + + + + + + +An estimator that can establish a simple baseline. + +Inherits From: [`Estimator`](../../tf/estimator/Estimator.md) + + + + + + + +The estimator uses a user-specified head. + +This estimator ignores feature values and will learn to predict the average +value of each label. E.g. for single-label classification problems, this will +predict the probability distribution of the classes as seen in the labels. +For multi-label classification problems, it will predict the ratio of examples +that contain each class. + +#### Example: + + + +```python + +# Build baseline multi-label classifier. +estimator = tf.estimator.BaselineEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3)) + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +# Fit model. +estimator.train(input_fn=input_fn_train) + +# Evaluates cross entropy between the test and train labels. +loss = estimator.evaluate(input_fn=input_fn_eval)["loss"] + +# For each class, predicts the ratio of training examples that contain the +# class. +predictions = estimator.predict(new_samples) + +``` + +Input of `train` and `evaluate` should have following features, + otherwise there will be a `KeyError`: + +* if `weight_column` is specified in the `head` constructor (and not None) for + the head passed to BaselineEstimator's constructor, a feature with + `key=weight_column` whose value is a `Tensor`. + + + + + + + + + + + + + + + + + + + +
+`head` + +A `Head` instance constructed with a method such as +tf.estimator.MultiLabelHead. +
+`model_dir`
+
+Directory to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model.
+
+`optimizer` + +String, `tf.keras.optimizers.*` object, or callable that +creates the optimizer to use for training. If not specified, will use +`Ftrl` as the default optimizer. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
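+
+A runnable variant of the multi-label example above (its `def input_fn_train:`
+line is pseudocode and omits the parentheses); the synthetic data and
+`model_dir` below are hypothetical:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+def input_fn_train():
+  features = {"feature": np.random.rand(100).astype(np.float32)}
+  # Multi-hot targets: each example may belong to several of the 3 classes.
+  labels = np.random.randint(0, 2, size=(100, 3)).astype(np.int64)
+  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(10)
+
+estimator = tf.estimator.BaselineEstimator(
+    head=tf.estimator.MultiLabelHead(n_classes=3),
+    model_dir="/tmp/baseline_estimator")
+estimator.train(input_fn=input_fn_train, steps=20)
+print(estimator.evaluate(input_fn=input_fn_train)["loss"])
+```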
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory that contains evaluation metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of string, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train +forever or train until `input_fn` generates the `tf.errors.OutOfRange` +error or `StopIteration` exception. `steps` works incrementally. If you +call two times `train(steps=10)` then training occurs in total 20 steps. +If `OutOfRange` or `StopIteration` occurs in the middle, training stops +before 20 steps. If you don't want to have incremental behavior please +set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/BaselineRegressor.md b/site/en/api_docs/python/tf/estimator/BaselineRegressor.md new file mode 100644 index 00000000000..0c5c3ad537f --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/BaselineRegressor.md @@ -0,0 +1,1021 @@ +description: A regressor that can establish a simple baseline. + +
+ + + + + + + + + + + + +
+ +# tf.estimator.BaselineRegressor + + + + + + + + + +A regressor that can establish a simple baseline. + +Inherits From: [`Estimator`](../../tf/estimator/Estimator.md) + + + + + + + +This regressor ignores feature values and will learn to predict the average +value of each label. + +#### Example: + + + +```python + +# Build BaselineRegressor +regressor = tf.estimator.BaselineRegressor() + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass + +# Fit model. +regressor.train(input_fn=input_fn_train) + +# Evaluate squared-loss between the test and train targets. +loss = regressor.evaluate(input_fn=input_fn_eval)["loss"] + +# predict outputs the mean value seen during training. +predictions = regressor.predict(new_samples) +``` + +Input of `train` and `evaluate` should have following features, + otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with + `key=weight_column` whose value is a `Tensor`. + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_dir`
+
+Directory to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model.
+
+`label_dimension` + +Number of regression targets per example. This is the +size of the last dimension of the labels and logits `Tensor` objects +(typically, these have shape `[batch_size, label_dimension]`). +
+`weight_column` + +A string or a `_NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It will be multiplied by the loss of the example. +
+`optimizer` + +String, `tf.keras.optimizers.*` object, or callable that +creates the optimizer to use for training. If not specified, will use +`Ftrl` as the default optimizer. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Describes how +to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
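For illustration, a minimal evaluation sketch. It reuses the `regressor` and `input_fn_eval` from the class example above; the `steps` value and the evaluation `name` are arbitrary choices, not defaults.

```python
# Evaluate on at most 100 batches and inspect the returned metrics dict.
metrics = regressor.evaluate(input_fn=input_fn_eval, steps=100, name="test")
print(metrics["loss"], metrics["average_loss"], metrics["global_step"])
```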

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
+
+For each mode passed in via the `input_receiver_fn_map`,
+this method builds a new graph by calling the `input_receiver_fn` to obtain
+feature and label `Tensor`s. Next, this method calls the `Estimator`'s
+`model_fn` in the passed mode to generate the model graph based on
+those features and labels, and restores the given checkpoint
+(or, lacking that, the most recent checkpoint) into the graph.
+Only one of the modes is used for saving variables to the `SavedModel`
+(order of preference: tf.estimator.ModeKeys.TRAIN,
+tf.estimator.ModeKeys.EVAL, then
+tf.estimator.ModeKeys.PREDICT), such that up to three
+`tf.MetaGraphDefs` are saved with a single set of variables in a single
+`SavedModel` directory.
+
+For the variables and `tf.MetaGraphDefs`, this method builds a timestamped
+export directory below `export_dir_base` and writes a `SavedModel` into it
+containing the `tf.MetaGraphDef` for the given mode and its associated
+signatures.
+
+For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
+for each element of the `export_outputs` dict returned from the `model_fn`,
+named using the same keys. One of these keys is always
+`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
+indicating which
+signature will be served when a serving request does not specify one.
+For each signature, the outputs are provided by the corresponding
+tf.estimator.export.ExportOutputs, and the inputs are always the input
+receivers provided by
+the `serving_input_receiver_fn`.
+
+For training and evaluation, the `train_op` is stored in an extra
+collection,
+and loss, metrics, and predictions are included in a `SignatureDef` for the
+mode in question.
+
+Extra assets may be written into the `SavedModel` via the `assets_extra`
+argument. This should be a dict, where each key gives a destination path
+(including the filename) relative to the assets.extra directory. The
+corresponding value gives the full path of the source file to be copied.
+For example, the simple case of copying a single file without renaming it
+is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
+ + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
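A sketch of exporting only the PREDICT graph through this method; the feature spec and export directory are illustrative, and `regressor` is assumed to be the trained estimator from the class example.

```python
import tensorflow as tf

# Illustrative feature spec for parsing serialized tf.Example protos.
feature_spec = {"x": tf.io.FixedLenFeature([1], tf.float32)}
input_receiver_fn_map = {
    tf.estimator.ModeKeys.PREDICT:
        tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec),
}
export_path = regressor.experimental_export_all_saved_models(
    export_dir_base="/tmp/exports",
    input_receiver_fn_map=input_receiver_fn_map)
```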

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
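A serving-only export sketch. The placeholder shape and export directory are illustrative; `tf.compat.v1.placeholder` is valid here because the export graph is built in graph mode.

```python
import tensorflow as tf

def serving_input_receiver_fn():
  # Raw-tensor receiver: the serving request feeds the "x" feature directly.
  inputs = {"x": tf.compat.v1.placeholder(dtype=tf.float32, shape=[None, 1])}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_path = regressor.export_saved_model(
    export_dir_base="/tmp/baseline_export",
    serving_input_receiver_fn=serving_input_receiver_fn)
```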

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of strings, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
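Because `predict` returns a generator, results are consumed by iteration. A minimal sketch, reusing `input_fn_eval` from the class example (the `(features, labels)` tuple is accepted since the first item is extracted as features); `'predictions'` is the key canned regressors use.

```python
# Iterate lazily over per-example prediction dicts.
for pred in regressor.predict(input_fn=input_fn_eval):
  print(pred["predictions"])
```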

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally: if you
+call `train(steps=10)` twice, training occurs for 20 steps in total.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you do not want incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train the model. If `None`,
+train forever or train until `input_fn` generates the
+`tf.errors.OutOfRange` error or `StopIteration` exception. If set,
+`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the
+middle, training stops before `max_steps` steps. Two calls to
+`train(steps=100)` mean 200 training iterations. On the other hand, two
+calls to `train(max_steps=100)` mean that the second call will not do
+any iteration since the first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps` is `<= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/BestExporter.md b/site/en/api_docs/python/tf/estimator/BestExporter.md new file mode 100644 index 00000000000..96bcfbd2fb7 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/BestExporter.md @@ -0,0 +1,252 @@ +description: This class exports the serving graph and checkpoints of the best models. + +
+ + + + +
+ +# tf.estimator.BestExporter + + + + + + + + + +This class exports the serving graph and checkpoints of the best models. + +Inherits From: [`Exporter`](../../tf/estimator/Exporter.md) + + + + + + + + + +This class performs a model export every time the new model is better than any
+existing model.
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +unique name of this `Exporter` that is going to be used in the +export path. +
+`serving_input_receiver_fn` + +a function that takes no arguments and returns +a `ServingInputReceiver`. +
+`event_file_pattern` + +event file name pattern relative to model_dir. If +None, however, the exporter would not be preemption-safe. To be +preemption-safe, event_file_pattern must be specified. +
+`compare_fn` + +a function that compares two evaluation results and returns +true if current evaluation result is better. Follows the signature: +* Args: +* `best_eval_result`: This is the evaluation result of the best model. +* `current_eval_result`: This is the evaluation result of current +candidate model. +* Returns: True if current evaluation result is better; otherwise, +False. +
+`assets_extra` + +An optional dict specifying how to populate the assets.extra +directory within the exported SavedModel. Each key should give the +destination path (including the filename) relative to the assets.extra +directory. The corresponding value gives the full path of the source +file to be copied. For example, the simple case of copying a single +file without renaming it is specified as `{'my_asset_file.txt': +'/path/to/my_asset_file.txt'}`. +
+`as_text` + +whether to write the SavedModel proto in text format. Defaults to +`False`. +
+`exports_to_keep` + +Number of exports to keep. Older exports will be +garbage-collected. Defaults to 5. Set to `None` to disable garbage +collection. +
+ + + + + + + + + + + + +
+`ValueError` + +if any argument is invalid. +
+ + + + + + + + + + + + + + +
+`name` + +Directory name. + +A directory name under the export base directory where exports of +this type are written. Should not be `None` nor empty. +
+ + + +## Methods + +
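For example, a `BestExporter` that keeps the exports with the lowest evaluation loss could be assembled roughly as follows; `serving_input_receiver_fn` is assumed to be defined elsewhere, and the comparison mirrors the `compare_fn` signature described above.

```python
import tensorflow as tf

def _loss_smaller(best_eval_result, current_eval_result):
  # The candidate is better when its evaluation loss is lower.
  return current_eval_result["loss"] < best_eval_result["loss"]

exporter = tf.estimator.BestExporter(
    name="best_exporter",
    serving_input_receiver_fn=serving_input_receiver_fn,
    compare_fn=_loss_smaller,
    exports_to_keep=5)
```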

export

+ +View source + + + +Exports the given `Estimator` to a specific format. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`estimator` + +the `Estimator` to export. +
+`export_path` + +A string containing a directory where to write the export. +
+`checkpoint_path` + +The checkpoint path to export. +
+`eval_result` + +The output of Estimator.evaluate on this checkpoint. +
+`is_the_final_export` + +This boolean is True when this is an export at the
+end of training. It is False for the intermediate exports during
+training. When passing `Exporter` to tf.estimator.train_and_evaluate,
+`is_the_final_export` is always False if TrainSpec.max_steps is
+`None`. +
+ + + + + + + + + + + +
Returns
+The string path to the exported directory or `None` if export is skipped. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/BinaryClassHead.md b/site/en/api_docs/python/tf/estimator/BinaryClassHead.md new file mode 100644 index 00000000000..925505ef728 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/BinaryClassHead.md @@ -0,0 +1,460 @@ +description: Creates a Head for single label binary classification. + +
+ + + + + + + + +
+ +# tf.estimator.BinaryClassHead + + + + + + + + + +Creates a `Head` for single label binary classification. + +Inherits From: [`Head`](../../tf/estimator/Head.md) + + + + + + + + + +Uses `sigmoid_cross_entropy_with_logits` loss. + +The head expects `logits` with shape `[D0, D1, ... DN, 1]`. +In many applications, the shape is `[batch_size, 1]`. + +`labels` must be a dense `Tensor` with shape matching `logits`, namely +`[D0, D1, ... DN, 1]`. If `label_vocabulary` given, `labels` must be a string +`Tensor` with values from the vocabulary. If `label_vocabulary` is not given, +`labels` must be float `Tensor` with values in the interval `[0, 1]`. + +If `weight_column` is specified, weights must be of shape +`[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`. + +The loss is the weighted sum over the input dimensions. Namely, if the input +labels have shape `[batch_size, 1]`, the loss is the weighted sum over +`batch_size`. + +Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or +`(labels, logits, features, loss_reduction)` as arguments and returns loss +with shape `[D0, D1, ... DN, 1]`. `loss_fn` must support float `labels` with +shape `[D0, D1, ... DN, 1]`. Namely, the head applies `label_vocabulary` to +the input labels before passing them to `loss_fn`. + +#### Usage: + + + +``` +>>> head = tf.estimator.BinaryClassHead() +>>> logits = np.array(((45,), (-41,),), dtype=np.float32) +>>> labels = np.array(((1,), (1,),), dtype=np.int32) +>>> features = {'x': np.array(((42,),), dtype=np.float32)} +>>> # expected_loss = sum(cross_entropy(labels, logits)) / batch_size +>>> # = sum(0, 41) / 2 = 41 / 2 = 20.50 +>>> loss = head.loss(labels, logits, features=features) +>>> print('{:.2f}'.format(loss.numpy())) +20.50 +>>> eval_metrics = head.metrics() +>>> updated_metrics = head.update_metrics( +... eval_metrics, features, logits, labels) +>>> for k in sorted(updated_metrics): +... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy())) + accuracy : 0.50 + accuracy_baseline : 1.00 + auc : 0.00 + auc_precision_recall : 1.00 + average_loss : 20.50 + label/mean : 1.00 + precision : 1.00 + prediction/mean : 0.50 + recall : 0.50 +>>> preds = head.predictions(logits) +>>> print(preds['logits']) +tf.Tensor( + [[ 45.] + [-41.]], shape=(2, 1), dtype=float32) +``` + +Usage with a canned estimator: + +```python +my_head = tf.estimator.BinaryClassHead() +my_estimator = tf.estimator.DNNEstimator( + head=my_head, + hidden_units=..., + feature_columns=...) +``` + +It can also be used with a custom `model_fn`. Example: + +```python +def _my_model_fn(features, labels, mode): + my_head = tf.estimator.BinaryClassHead() + logits = tf.keras.Model(...)(features) + + return my_head.create_estimator_spec( + features=features, + mode=mode, + labels=labels, + optimizer=tf.keras.optimizers.Adagrad(lr=0.1), + logits=logits) + +my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. +
+`thresholds` + +Iterable of floats in the range `(0, 1)`. For binary
+classification metrics such as precision and recall, an eval metric is
+generated for each threshold value. This threshold is applied to the
+logistic values to determine the binary classification (i.e., above the
+threshold is `true`, below is `false`). +
+`label_vocabulary` + +A list or tuple of strings representing possible label +values. If it is not given, that means labels are already encoded within +[0, 1]. If given, labels must be string type and have any value in +`label_vocabulary`. Note that errors will be raised if `label_vocabulary` +is not provided but labels are strings. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Decides how to +reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely +weighted sum of losses divided by `batch size * label_dimension`. +
+`loss_fn` + +Optional loss function. +
+`name` + +Name of the head. If provided, summary and metrics keys will be +suffixed by `"/" + name`. Also used as `name_scope` when creating ops. +
+ + + + + + + + + + + + + + + + + + + + +
+`logits_dimension` + +See `base_head.Head` for details. +
+`loss_reduction` + +See `base_head.Head` for details. +
+`name` + +See `base_head.Head` for details. +
+ + + +## Methods + +

create_estimator_spec

+ +View source + + + +Returns `EstimatorSpec` that a model_fn can return. + +It is recommended to pass all args via name. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`features` + +Input `dict` mapping string feature names to `Tensor` or +`SparseTensor` objects containing the values for that feature in a +minibatch. Often to be used to fetch example-weight tensor. +
+`mode` + +Estimator's `ModeKeys`. +
+`logits` + +Logits `Tensor` to be used by the head. +
+`labels` + +Labels `Tensor`, or `dict` mapping string label names to `Tensor` +objects of the label values. +
+`optimizer` + +A tf.keras.optimizers.Optimizer instance to optimize the
+loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss,
+trainable_variables)`, which updates variables to minimize `loss`. +
+`trainable_variables` + +A list or tuple of `Variable` objects to update to +minimize `loss`. In Tensorflow 1.x, by default these are the list of +variables collected in the graph under the key +`GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have +collections and GraphKeys, trainable_variables need to be passed +explicitly here. +
+`train_op_fn` + +Function that takes a scalar loss `Tensor` and returns an op +to optimize the model with the loss in TRAIN mode. Used if `optimizer` +is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in +TRAIN mode. By default, it is `None` in other modes. If you want to +optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use +EstimatorSpec.loss to compute and apply gradients. +
+`update_ops` + +A list or tuple of update ops to be run at training time. For +example, layers such as BatchNormalization create mean and variance +update ops that need to be run at training time. In Tensorflow 1.x, +these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x +doesn't have collections, update_ops need to be passed explicitly here. +
+`regularization_losses` + +A list of additional scalar losses to be added to +the training loss, such as regularization losses. +
+ + + + + + + + + + + +
Returns
+`EstimatorSpec`. +
+ + + +
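To make the explicit `trainable_variables` / `update_ops` passing concrete, here is a sketch of a custom `model_fn`; the Keras model and the `"x"` feature key are purely illustrative.

```python
import tensorflow as tf

def model_fn(features, labels, mode):
  head = tf.estimator.BinaryClassHead()
  # Illustrative model producing `head.logits_dimension` logits per example.
  model = tf.keras.Sequential([tf.keras.layers.Dense(head.logits_dimension)])
  logits = model(features["x"])
  return head.create_estimator_spec(
      features=features,
      mode=mode,
      labels=labels,
      logits=logits,
      optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
      trainable_variables=model.trainable_variables,
      update_ops=model.updates)
```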

loss

+ +View source + + + +Returns regularized training loss. See `base_head.Head` for details. + + +

metrics

+ +View source + + + +Creates metrics. See `base_head.Head` for details. + + +

predictions

+ +View source + + + +Return predictions based on keys. + +See `base_head.Head` for details. + + + + + + + + + + + + + +
Args
+`logits` + +logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. +For many applications, the shape is `[batch_size, logits_dimension]`. +
+`keys` + +a list or tuple of prediction keys. Each key can be either the class +variable of prediction_keys.PredictionKeys or its string value, such as: +prediction_keys.PredictionKeys.CLASSES or 'classes'. If not specified, +it will return the predictions for all valid keys. +
+ + + + + + + + + + + +
Returns
+A dict of predictions. +
+ + + +

update_metrics

+ +View source + + + +Updates eval metrics. See `base_head.Head` for details. + + + + diff --git a/site/en/api_docs/python/tf/estimator/BoostedTreesClassifier.md b/site/en/api_docs/python/tf/estimator/BoostedTreesClassifier.md new file mode 100644 index 00000000000..89921c2f161 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/BoostedTreesClassifier.md @@ -0,0 +1,1439 @@ +description: A Classifier for Tensorflow Boosted Trees models. + +
+ + + + + + + + + + + + + + + +
+ +# tf.estimator.BoostedTreesClassifier + + + + + + + + + +A Classifier for Tensorflow Boosted Trees models. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`feature_columns` + +An iterable containing all the feature columns used by +the model. All items in the set should be instances of classes derived +from `FeatureColumn`. +
+`n_batches_per_layer` + +the number of batches to collect statistics per
+layer. The total number of batches is the total number of examples divided
+by the batch size. +
+`model_dir` + +Directory to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model. +
+`n_classes` + +number of label classes. Default is binary classification. +
+`weight_column` + +A string or a `NumericColumn` created by +`tf.fc_old.numeric_column` defining feature column representing weights. +It is used to downweight or boost examples during training. It will be +multiplied by the loss of the example. If it is a string, it is used as +a key to fetch weight tensor from the `features`. If it is a +`NumericColumn`, raw tensor is fetched by key `weight_column.key`, then +`weight_column.normalizer_fn` is applied on it to get weight tensor. +
+`label_vocabulary` + +A list of strings representing possible label values. If
+given, labels must be string type and have any value in
+`label_vocabulary`. If it is not given, that means labels are already
+encoded as integer or float within `[0, 1]` for `n_classes=2` and
+encoded as integer values in {0, 1,..., n_classes-1} for `n_classes>2`.
+Also, there will be errors if the vocabulary is not provided and labels are
+strings. +
+`n_trees` + +number of trees to be created. +
+`max_depth` + +maximum depth of the tree to grow. +
+`learning_rate` + +shrinkage parameter to be used when a tree is added to the
+model. +
+`l1_regularization` + +regularization multiplier applied to the absolute
+weights of the tree leaves. +
+`l2_regularization` + +regularization multiplier applied to the square weights
+of the tree leaves. +
+`tree_complexity` + +regularization factor to penalize trees with more leaves. +
+`min_node_weight` + +minimum hessian a node must have for a
+split to be considered. The value will be compared with
+`sum(leaf_hessian)/(batch_size * n_batches_per_layer)`. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+`center_bias` + +Whether bias centering needs to occur. Bias centering refers +to the first node in the very first tree returning the prediction that +is aligned with the original labels distribution. For example, for +regression problems, the first node will return the mean of the labels. +For binary classification problems, it will return a logit for a prior +probability of label 1. +
+`pruning_mode` + +one of `none`, `pre`, `post` to indicate no pruning, pre- +pruning (do not split a node if not enough gain is observed) and post +pruning (build the tree up to a max depth and then prune branches with +negative gain). For pre and post pruning, you MUST provide +`tree_complexity >0`. +
+`quantile_sketch_epsilon` + +float between 0 and 1. Error bound for quantile +computation. This is only used for float feature columns, and the number +of buckets generated per float feature is `1/quantile_sketch_epsilon`. +
+`train_in_memory` + +`bool`, when true, it assumes the dataset is in memory, +i.e., `input_fn` should return the entire dataset as a single batch, +`n_batches_per_layer` should be set as 1, `num_worker_replicas` should +be 1, and `num_ps_replicas` should be 0 in `tf.Estimator.RunConfig`. +
+ + + + + + + + + + + + +
+`ValueError` + +when wrong arguments are given or unsupported functionalities +are requested. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +
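A minimal construction sketch; the feature column, bucket boundaries, and hyperparameter values are illustrative only (raw float columns can also be used and are bucketized according to `quantile_sketch_epsilon`).

```python
import tensorflow as tf

# Bucketized numeric feature, a typical input for boosted trees.
age = tf.feature_column.numeric_column("age")
bucketized_age = tf.feature_column.bucketized_column(
    age, boundaries=[20., 30., 40., 50.])

classifier = tf.estimator.BoostedTreesClassifier(
    feature_columns=[bucketized_age],
    n_batches_per_layer=1,
    n_trees=50,
    max_depth=3,
    learning_rate=0.1)
```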

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
+
+For each mode passed in via the `input_receiver_fn_map`,
+this method builds a new graph by calling the `input_receiver_fn` to obtain
+feature and label `Tensor`s. Next, this method calls the `Estimator`'s
+`model_fn` in the passed mode to generate the model graph based on
+those features and labels, and restores the given checkpoint
+(or, lacking that, the most recent checkpoint) into the graph.
+Only one of the modes is used for saving variables to the `SavedModel`
+(order of preference: tf.estimator.ModeKeys.TRAIN,
+tf.estimator.ModeKeys.EVAL, then
+tf.estimator.ModeKeys.PREDICT), such that up to three
+`tf.MetaGraphDefs` are saved with a single set of variables in a single
+`SavedModel` directory.
+
+For the variables and `tf.MetaGraphDefs`, this method builds a timestamped
+export directory below `export_dir_base` and writes a `SavedModel` into it
+containing the `tf.MetaGraphDef` for the given mode and its associated
+signatures.
+
+For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
+for each element of the `export_outputs` dict returned from the `model_fn`,
+named using the same keys. One of these keys is always
+`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
+indicating which
+signature will be served when a serving request does not specify one.
+For each signature, the outputs are provided by the corresponding
+tf.estimator.export.ExportOutputs, and the inputs are always the input
+receivers provided by
+the `serving_input_receiver_fn`.
+
+For training and evaluation, the `train_op` is stored in an extra
+collection,
+and loss, metrics, and predictions are included in a `SignatureDef` for the
+mode in question.
+
+Extra assets may be written into the `SavedModel` via the `assets_extra`
+argument. This should be a dict, where each key gives a destination path
+(including the filename) relative to the assets.extra directory. The
+corresponding value gives the full path of the source file to be copied.
+For example, the simple case of copying a single file without renaming it
+is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
+ + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

experimental_feature_importances

+ +View source + + + +Computes gain-based feature importances. + +The higher the value, the more important the corresponding feature. + + + + + + + + + + +
Args
+`normalize` + +If True, normalize the feature importances. +
+ + + + + + + + + + + + +
Returns
+`feature_importances` + +an OrderedDict, where the keys are the feature column +names and the values are importances. It is sorted by importance. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When attempting to normalize on an empty ensemble +or an ensemble of trees which have no splits. Or when attempting +to normalize and feature importances have negative values. +
+ + + +
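Following the return description above, a sketch of inspecting importances for a trained classifier (reusing the `classifier` sketch from earlier on this page):

```python
# Gain-based importances as an OrderedDict sorted by importance.
importances = classifier.experimental_feature_importances(normalize=True)
for feature_name, importance in importances.items():
  print(feature_name, importance)
```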

experimental_predict_with_explanations

+ +View source + + + +Computes model explainability outputs per example along with predictions. + +Currently supports directional feature contributions (DFCs). For each +instance, DFCs indicate the aggregate contribution of each feature. See +https://arxiv.org/abs/1312.1121 and +http://blog.datadive.net/interpreting-random-forests/ for more details. + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for predicting as +minibatches. See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary, with the exception of 'bias' and 'dfc', which will +always be in the dictionary. If `None`, returns all keys in prediction +dict, as well as two new keys 'dfc' and 'bias'. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. The `predictions` tensors will +contain at least two keys 'dfc' and 'bias' for model explanations. The +`dfc` value corresponds to the contribution of each feature to the overall +prediction for this instance (positive indicating that the feature makes +it more likely to select class 1 and negative less likely). The `dfc` is +an OrderedDict, where the keys are the feature column names and the values +are the contributions. It is sorted by the absolute value of the +contribution (e.g OrderedDict([('age', -0.54), ('gender', 0.4), ('fare', +0.21)])). The 'bias' value will be the same across all the instances, +corresponding to the probability (classification) or prediction +(regression) of the training data distribution. + + + + + + + + + + + + +
Raises
+`ValueError` + +when wrong arguments are given or unsupported functionalities +are requested. +
+ + + +
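A sketch of reading the directional feature contributions; `input_fn_predict` is a hypothetical prediction input function.

```python
# Each yielded dict has the usual prediction keys plus 'dfc' and 'bias'.
for pred in classifier.experimental_predict_with_explanations(
    input_fn=input_fn_predict):
  print("bias:", pred["bias"])
  for feature_name, contribution in pred["dfc"].items():
    print(feature_name, contribution)
```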

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of strings, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally: if you
+call `train(steps=10)` twice, training occurs for 20 steps in total.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you do not want incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train the model. If `None`,
+train forever or train until `input_fn` generates the
+`tf.errors.OutOfRange` error or `StopIteration` exception. If set,
+`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the
+middle, training stops before `max_steps` steps. Two calls to
+`train(steps=100)` mean 200 training iterations. On the other hand, two
+calls to `train(max_steps=100)` mean that the second call will not do
+any iteration since the first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps` is `<= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/BoostedTreesEstimator.md b/site/en/api_docs/python/tf/estimator/BoostedTreesEstimator.md new file mode 100644 index 00000000000..c4dfee6398b --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/BoostedTreesEstimator.md @@ -0,0 +1,1405 @@ +description: An Estimator for Tensorflow Boosted Trees models. + +
+ + + + + + + + + + + + + + + +
+ +# tf.estimator.BoostedTreesEstimator + + + + + + + + + +An Estimator for Tensorflow Boosted Trees models. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`feature_columns` + +An iterable containing all the feature columns used by +the model. All items in the set should be instances of classes derived +from `FeatureColumn`. +
+`n_batches_per_layer` + +the number of batches to collect statistics per +layer. +
+`head` + +the `Head` instance defined for Estimator. +
+`model_dir` + +Directory to save model parameters, graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model. +
+`weight_column` + +A string or a `_NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to downweight or boost examples during training. It +will be multiplied by the loss of the example. If it is a string, it is +used as a key to fetch weight tensor from the `features`. If it is a +`_NumericColumn`, raw tensor is fetched by key `weight_column.key`, then +weight_column.normalizer_fn is applied on it to get weight tensor. +
+`n_trees` + +number of trees to be created. +
+`max_depth` + +maximum depth of the tree to grow. +
+`learning_rate` + +shrinkage parameter to be used when a tree is added to the
+model. +
+`l1_regularization` + +regularization multiplier applied to the absolute
+weights of the tree leaves. +
+`l2_regularization` + +regularization multiplier applied to the square weights
+of the tree leaves. +
+`tree_complexity` + +regularization factor to penalize trees with more leaves. +
+`min_node_weight` + +minimum hessian a node must have for a split to be +considered. The value will be compared with `sum(leaf_hessian)/ +(batch_size * n_batches_per_layer)`. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+`center_bias` + +Whether bias centering needs to occur. Bias centering refers +to the first node in the very first tree returning the prediction that +is aligned with the original labels distribution. For example, for +regression problems, the first node will return the mean of the labels. +For binary classification problems, it will return a logit for a prior +probability of label 1. +
+`pruning_mode` + +one of `none`, `pre`, `post` to indicate no pruning, pre- +pruning (do not split a node if not enough gain is observed) and post +pruning (build the tree up to a max depth and then prune branches with +negative gain). For pre and post pruning, you MUST provide +`tree_complexity>0`. +
+`quantile_sketch_epsilon` + +float between 0 and 1. Error bound for quantile +computation. This is only used for float feature columns, and the number +of buckets generated per float feature is `1/quantile_sketch_epsilon`. +
+ + + + + + + + + + + + +
+`ValueError` + +when wrong arguments are given or unsupported functionalities +are requested. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +
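Before the per-method reference, a minimal construction-and-training sketch may be useful. This is only a hedged illustration: the `price` feature column, the `BinaryClassHead`, the toy data, and the `input_fn` below are assumptions chosen for the example, not part of the documented signature. Boosted trees consume bucketized or categorical inputs, so the numeric feature is bucketized first.

```python
import tensorflow as tf

# Hypothetical feature: a numeric column, bucketized for the tree model.
bucketized_price = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('price'),
    boundaries=[10.0, 20.0, 30.0])

# Assumed head: binary classification; substitute the head your task needs.
estimator = tf.estimator.BoostedTreesEstimator(
    feature_columns=[bucketized_price],
    n_batches_per_layer=1,
    head=tf.estimator.BinaryClassHead(),
    n_trees=50,
    max_depth=4,
    learning_rate=0.1)

def input_fn():
  # Toy in-memory data; a real pipeline would read from files.
  features = {'price': tf.constant([12.0, 25.0, 8.0, 31.0])}
  labels = tf.constant([0, 1, 0, 1])
  return tf.data.Dataset.from_tensors((features, labels)).repeat()

estimator.train(input_fn, max_steps=100)
```

The method snippets below reuse this `estimator` and assume it has been trained.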

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory that contains evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
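A short sketch of calling `evaluate`, continuing the trained `estimator` from the sketch above; `eval_input_fn` is a hypothetical input function obeying the constraints listed in the table, and the exact metric keys depend on the head in use.

```python
# Evaluate on 10 batches of the (assumed) eval_input_fn.
metrics = estimator.evaluate(input_fn=eval_input_fn, steps=10)
print(metrics['loss'], metrics['global_step'])
```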

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

experimental_feature_importances

+ +View source + + + +Computes gain-based feature importances. + +The higher the value, the more important the corresponding feature. + + + + + + + + + + +
Args
+`normalize` + +If True, normalize the feature importances. +
+ + + + + + + + + + + + +
Returns
+`feature_importances` + +an OrderedDict, where the keys are the feature column +names and the values are importances. It is sorted by importance. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When attempting to normalize on an empty ensemble +or an ensemble of trees which have no splits. Or when attempting +to normalize and feature importances have negative values. +
+ + + +
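A hedged sketch of reading the importances, again assuming the trained `estimator` from the sketch above.

```python
# The returned OrderedDict is sorted from most to least important feature.
importances = estimator.experimental_feature_importances(normalize=True)
for feature_name, importance in importances.items():
  print(feature_name, importance)
```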

experimental_predict_with_explanations

+ +View source + + + +Computes model explainability outputs per example along with predictions. + +Currently supports directional feature contributions (DFCs). For each +instance, DFCs indicate the aggregate contribution of each feature. See +https://arxiv.org/abs/1312.1121 and +http://blog.datadive.net/interpreting-random-forests/ for more details. + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for predicting as +minibatches. See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary, with the exception of 'bias' and 'dfc', which will +always be in the dictionary. If `None`, returns all keys in prediction +dict, as well as two new keys 'dfc' and 'bias'. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. The `predictions` tensors will +contain at least two keys 'dfc' and 'bias' for model explanations. The +`dfc` value corresponds to the contribution of each feature to the overall +prediction for this instance (positive indicating that the feature makes +it more likely to select class 1 and negative less likely). The `dfc` is +an OrderedDict, where the keys are the feature column names and the values +are the contributions. It is sorted by the absolute value of the +contribution (e.g OrderedDict([('age', -0.54), ('gender', 0.4), ('fare', +0.21)])). The 'bias' value will be the same across all the instances, +corresponding to the probability (classification) or prediction +(regression) of the training data distribution. + + + + + + + + + + + + +
Raises
+`ValueError` + +when wrong arguments are given or unsupported functionalities +are requested. +
+ + + +
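A hedged usage sketch; `predict_input_fn` is a hypothetical features-only input function, and `estimator` is the trained instance from the sketch above.

```python
for explained in estimator.experimental_predict_with_explanations(
    predict_input_fn):
  dfcs = explained['dfc']   # OrderedDict of per-feature contributions
  bias = explained['bias']  # training-distribution prior, same for every row
  top_feature, top_contribution = next(iter(dfcs.items()))
```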

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will +be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
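A minimal export sketch, assuming the trained `estimator` from the sketch above was built on a single `price` feature; the feature spec and export path are illustrative.

```python
feature_spec = {'price': tf.io.FixedLenFeature([1], tf.float32)}
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_path = estimator.export_saved_model(
    '/tmp/boosted_trees_export', serving_input_receiver_fn)
```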

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of strings, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
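A short sketch combining `get_variable_names` and `get_variable_value` to inspect checkpointed variables; it assumes the estimator has written at least one checkpoint.

```python
for name in estimator.get_variable_names():
  value = estimator.get_variable_value(name)  # numpy array
  print(name, getattr(value, 'shape', None))
```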

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If the batch lengths of the predictions are not all the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
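A hedged prediction loop; `predict_input_fn` is a hypothetical features-only input function, and the available keys (for example 'probabilities' or 'logits') depend on the head.

```python
for prediction in estimator.predict(input_fn=predict_input_fn):
  print(prediction)  # a dict of numpy values, one example at a time
```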

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train +forever or until `input_fn` generates the `tf.errors.OutOfRange` +error or `StopIteration` exception. `steps` works incrementally: if you +call `train(steps=10)` twice, training occurs for a total of 20 steps. +If `OutOfRange` or `StopIteration` occurs in the middle, training stops +before 20 steps. If you do not want this incremental behavior, +set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train the model. If `None`, +train forever or until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` mean 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` mean that the second call will not do +any iterations, since the first call already did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps` is `<= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/BoostedTreesRegressor.md b/site/en/api_docs/python/tf/estimator/BoostedTreesRegressor.md new file mode 100644 index 00000000000..6c17fe47fdd --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/BoostedTreesRegressor.md @@ -0,0 +1,1426 @@ +description: A Regressor for Tensorflow Boosted Trees models. + +
+ + + + + + + + + + + + + + + +
+ +# tf.estimator.BoostedTreesRegressor + + + + + + + + + +A Regressor for Tensorflow Boosted Trees models. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`feature_columns` + +An iterable containing all the feature columns used by +the model. All items in the set should be instances of classes derived +from `FeatureColumn`. +
+`n_batches_per_layer` + +the number of batches to collect statistics per +layer. The total number of batches is the total number of examples divided by +the batch size. +
+`model_dir` + +Directory to save model parameters, graph, etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. +
+`label_dimension` + +Number of regression targets per example. +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining the feature column representing +weights. It is used to downweight or boost examples during training. It will be +multiplied by the loss of the example. If it is a string, it is used as +a key to fetch the weight tensor from the `features`. If it is a +`NumericColumn`, the raw tensor is fetched by key `weight_column.key`, then +`weight_column.normalizer_fn` is applied on it to get the weight tensor. +
+`n_trees` + +number of trees to be created. +
+`max_depth` + +maximum depth of the tree to grow. +
+`learning_rate` + +shrinkage parameter to be used when a tree is added to the +model. +
+`l1_regularization` + +regularization multiplier applied to the absolute +weights of the tree leaves. +
+`l2_regularization` + +regularization multiplier applied to the square weights +of the tree leaves. +
+`tree_complexity` + +regularization factor to penalize trees with more leaves. +
+`min_node_weight` + +minimum hessian a node must have for a +split to be considered. The value will be compared with +`sum(leaf_hessian)/(batch_size * n_batches_per_layer)`. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+`center_bias` + +Whether bias centering needs to occur. Bias centering refers +to the first node in the very first tree returning the prediction that +is aligned with the original labels distribution. For example, for +regression problems, the first node will return the mean of the labels. +For binary classification problems, it will return a logit for a prior +probability of label 1. +
+`pruning_mode` + +one of `none`, `pre`, `post` to indicate no pruning, pre- +pruning (do not split a node if not enough gain is observed) and post +pruning (build the tree up to a max depth and then prune branches with +negative gain). For pre and post pruning, you MUST provide +`tree_complexity>0`. +
+`quantile_sketch_epsilon` + +float between 0 and 1. Error bound for quantile +computation. This is only used for float feature columns, and the number +of buckets generated per float feature is `1/quantile_sketch_epsilon`. +
+`train_in_memory` + +`bool`, when true, it assumes the dataset is in memory, +i.e., `input_fn` should return the entire dataset as a single batch, +`n_batches_per_layer` should be set as 1, `num_worker_replicas` should +be 1, and `num_ps_replicas` should be 0 in `tf.Estimator.RunConfig`. +
+ + + + + + + + + + + + +
+`ValueError` + +when wrong arguments are given or unsupported functionalities +are requested. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +
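A minimal, hedged sketch for the regressor variant; the `sqft` column, bucket boundaries, and toy data are illustrative assumptions rather than documented defaults.

```python
import tensorflow as tf

bucketized_sqft = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('sqft'),
    boundaries=[500.0, 1000.0, 1500.0])

regressor = tf.estimator.BoostedTreesRegressor(
    feature_columns=[bucketized_sqft],
    n_batches_per_layer=1,
    n_trees=50,
    max_depth=4)

def train_input_fn():
  # Toy in-memory regression data; real pipelines would read from files.
  features = {'sqft': tf.constant([400.0, 750.0, 1200.0, 1600.0])}
  labels = tf.constant([100.0, 180.0, 250.0, 330.0])
  return tf.data.Dataset.from_tensors((features, labels)).repeat()

regressor.train(train_input_fn, max_steps=100)
metrics = regressor.evaluate(train_input_fn, steps=1)
```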

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory that contains evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

experimental_feature_importances

+ +View source + + + +Computes gain-based feature importances. + +The higher the value, the more important the corresponding feature. + + + + + + + + + + +
Args
+`normalize` + +If True, normalize the feature importances. +
+ + + + + + + + + + + + +
Returns
+`feature_importances` + +an OrderedDict, where the keys are the feature column +names and the values are importances. It is sorted by importance. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When attempting to normalize on an empty ensemble +or an ensemble of trees which have no splits. Or when attempting +to normalize and feature importances have negative values. +
+ + + +

experimental_predict_with_explanations

+ +View source + + + +Computes model explainability outputs per example along with predictions. + +Currently supports directional feature contributions (DFCs). For each +instance, DFCs indicate the aggregate contribution of each feature. See +https://arxiv.org/abs/1312.1121 and +http://blog.datadive.net/interpreting-random-forests/ for more details. + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for predicting as +minibatches. See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary, with the exception of 'bias' and 'dfc', which will +always be in the dictionary. If `None`, returns all keys in prediction +dict, as well as two new keys 'dfc' and 'bias'. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. The `predictions` tensors will +contain at least two keys 'dfc' and 'bias' for model explanations. The +`dfc` value corresponds to the contribution of each feature to the overall +prediction for this instance (positive indicating that the feature makes +it more likely to select class 1 and negative less likely). The `dfc` is +an OrderedDict, where the keys are the feature column names and the values +are the contributions. It is sorted by the absolute value of the +contribution (e.g OrderedDict([('age', -0.54), ('gender', 0.4), ('fare', +0.21)])). The 'bias' value will be the same across all the instances, +corresponding to the probability (classification) or prediction +(regression) of the training data distribution. + + + + + + + + + + + + +
Raises
+`ValueError` + +when wrong arguments are given or unsupported functionalities +are requested. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will +be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of strings, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If the batch lengths of the predictions are not all the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train +forever or until `input_fn` generates the `tf.errors.OutOfRange` +error or `StopIteration` exception. `steps` works incrementally: if you +call `train(steps=10)` twice, training occurs for a total of 20 steps. +If `OutOfRange` or `StopIteration` occurs in the middle, training stops +before 20 steps. If you do not want this incremental behavior, +set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train the model. If `None`, +train forever or until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` mean 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` mean that the second call will not do +any iterations, since the first call already did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps` is `<= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/CheckpointSaverHook.md b/site/en/api_docs/python/tf/estimator/CheckpointSaverHook.md new file mode 100644 index 00000000000..e62da3c0761 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/CheckpointSaverHook.md @@ -0,0 +1,352 @@ +description: Saves checkpoints every N steps or seconds. + +
+ + + + + + + + +
+ +# tf.estimator.CheckpointSaverHook + + + + + + + + + +Saves checkpoints every N steps or seconds. + +Inherits From: [`SessionRunHook`](../../tf/estimator/SessionRunHook.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`checkpoint_dir` + +`str`, base directory for the checkpoint files. +
+`save_secs` + +`int`, save every N secs. +
+`save_steps` + +`int`, save every N steps. +
+`saver` + +`Saver` object, used for saving. +
+`checkpoint_basename` + +`str`, base name for the checkpoint files. +
+`scaffold` + +`Scaffold`, use to get saver object. +
+`listeners` + +List of `CheckpointSaverListener` subclass instances. Used for +callbacks that run immediately before or after this hook saves the +checkpoint. +
+`save_graph_def` + +Whether to save the GraphDef and MetaGraphDef to +`checkpoint_dir`. The GraphDef is saved after the session is created as +`graph.pbtxt`. MetaGraphDefs are saved out for every checkpoint as +`model.ckpt-*.meta`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +One of `save_steps` or `save_secs` should be set. +
+`ValueError` + +At most one of `saver` or `scaffold` should be set. +
+ + + +## Methods + +
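A hedged standalone sketch, mirroring the listener example elsewhere on these pages: the hook is attached to a `tf.compat.v1.train.MonitoredTrainingSession`, and the toy graph (a bare global-step increment), the checkpoint directory, and the stop condition are illustrative assumptions.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # hooks run with graph-mode sessions

global_step = tf.compat.v1.train.get_or_create_global_step()
train_op = tf.compat.v1.assign_add(global_step, 1)  # stand-in training op

saver_hook = tf.estimator.CheckpointSaverHook(
    checkpoint_dir='/tmp/ckpt_demo', save_steps=10)
stop_hook = tf.estimator.StopAtStepHook(last_step=50)

with tf.compat.v1.train.MonitoredTrainingSession(
    hooks=[stop_hook], chief_only_hooks=[saver_hook]) as sess:
  while not sess.should_stop():
    sess.run(train_op)
```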

after_create_session

+ +View source + + + +Called when new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +has two essential differences with the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+ +View source + + + +Called after each call to run(). + +The `run_values` argument contains the results of the ops/tensors requested +by `before_run()`. + +The `run_context` argument is the same one sent to the `before_run` call. +`run_context.request_stop()` can be called to stop the iteration. + +If `session.run()` raises any exception, then `after_run()` is not called. + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors passed to the original run() call. +The run args you return can also contain feeds to be added to the run() +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +ops/tensors and the TensorFlow Session. + +At this point the graph is finalized and you cannot add ops. + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +cannot modify the graph anymore. A second call of `begin()` on the same +graph should not change the graph. + +

end

+ +View source + + + +Called at the end of the session. + +The `session` argument can be used in case the hook wants to run final ops, +such as saving a last checkpoint. + +If `session.run()` raises an exception other than OutOfRangeError or +StopIteration, then `end()` is not called. +Note the difference between `end()` and `after_run()` behavior when +`session.run()` raises OutOfRangeError or StopIteration. In that case +`end()` is called but `after_run()` is not called. + + + + + + + + +
Args
+`session` + +A TensorFlow Session that will be soon closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/CheckpointSaverListener.md b/site/en/api_docs/python/tf/estimator/CheckpointSaverListener.md new file mode 100644 index 00000000000..817c18be3be --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/CheckpointSaverListener.md @@ -0,0 +1,144 @@ +description: Interface for listeners that take action before or after checkpoint save. + +
+ + + + + + +
+ +# tf.estimator.CheckpointSaverListener + + + + + + + + + +Interface for listeners that take action before or after checkpoint save. + + + + + +`CheckpointSaverListener` triggers only in steps when `CheckpointSaverHook` is +triggered, and provides callbacks at the following points: + - before using the session + - before each call to `Saver.save()` + - after each call to `Saver.save()` + - at the end of session + +To use a listener, implement a class and pass the listener to a +`CheckpointSaverHook`, as in this example: + +```python +class ExampleCheckpointSaverListener(CheckpointSaverListener): + def begin(self): + # You can add ops to the graph here. + print('Starting the session.') + self.your_tensor = ... + + def before_save(self, session, global_step_value): + print('About to write a checkpoint') + + def after_save(self, session, global_step_value): + print('Done writing checkpoint.') + if decided_to_stop_training(): + return True + + def end(self, session, global_step_value): + print('Done with the session.') + +... +listener = ExampleCheckpointSaverListener() +saver_hook = tf.estimator.CheckpointSaverHook( + checkpoint_dir, listeners=[listener]) +with +tf.compat.v1.train.MonitoredTrainingSession(chief_only_hooks=[saver_hook]): + ... +``` + +A `CheckpointSaverListener` may simply take some action after every +checkpoint save. It is also possible for the listener to use its own schedule +to act less frequently, e.g. based on global_step_value. In this case, +implementors should implement the `end()` method to handle actions related to +the last checkpoint save. But the listener should not act twice if +`after_save()` already handled this last checkpoint save. + +A `CheckpointSaverListener` can request training to be stopped, by returning +True in `after_save`. Please note that, in replicated distributed training +setting, only `chief` should use this behavior. Otherwise each worker will do +their own evaluation, which may be wasteful of resources. + +## Methods + +

after_save

+ +View source + + + + + + +

before_save

+ +View source + + + + + + +

begin

+ +View source + + + + + + +

end

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/estimator/DNNClassifier.md b/site/en/api_docs/python/tf/estimator/DNNClassifier.md new file mode 100644 index 00000000000..252d65f9cef --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/DNNClassifier.md @@ -0,0 +1,1130 @@ +description: A classifier for TensorFlow DNN models. + +
+ + + + + + + + + + + + +
+ +# tf.estimator.DNNClassifier + + + + + + + + + +A classifier for TensorFlow DNN models. + +Inherits From: [`Estimator`](../../tf/estimator/Estimator.md) + + + + + + + + +#### Example: + + + +```python +categorical_feature_a = categorical_column_with_hash_bucket(...) +categorical_feature_b = categorical_column_with_hash_bucket(...) + +categorical_feature_a_emb = embedding_column( + categorical_column=categorical_feature_a, ...) +categorical_feature_b_emb = embedding_column( + categorical_column=categorical_feature_b, ...) + +estimator = tf.estimator.DNNClassifier( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256]) + +# Or estimator using the ProximalAdagradOptimizer optimizer with +# regularization. +estimator = tf.estimator.DNNClassifier( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001 + )) + +# Or estimator using an optimizer with a learning rate decay. +estimator = tf.estimator.DNNClassifier( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96)) + +# Or estimator with warm-starting from a previous checkpoint. +estimator = tf.estimator.DNNClassifier( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + warm_start_from="/path/to/checkpoint/dir") + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train) +metrics = estimator.evaluate(input_fn=input_fn_eval) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with `key=weight_column` whose + value is a `Tensor`. +* for each `column` in `feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using softmax cross entropy. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`hidden_units`
+
+Iterable of the number of hidden units per layer. All layers are
+fully connected. Ex. `[64, 32]` means the first layer has 64 nodes and
+the second one has 32.
+
+`feature_columns` + +An iterable containing all the feature columns used by +the model. All items in the set should be instances of classes derived +from `_FeatureColumn`. +
+`model_dir`
+
+Directory in which to save model parameters, the graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model.
+
+`n_classes` + +Number of label classes. Defaults to 2, namely binary +classification. Must be > 1. +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. If it is a string, it is +used as a key to fetch weight tensor from the `features`. If it is a +`_NumericColumn`, raw tensor is fetched by key `weight_column.key`, then +weight_column.normalizer_fn is applied on it to get weight tensor. +
+`label_vocabulary`
+
+A list of strings representing possible label values. If
+given, labels must be of string type and take values from
+`label_vocabulary`. If it is not given, labels must already be
+encoded as an integer or float within [0, 1] for `n_classes=2`, and
+encoded as integer values in {0, 1,..., n_classes-1} for `n_classes` > 2.
+An error is raised if the vocabulary is not provided and the labels are
+strings.
+
+`optimizer`
+
+An instance of `tf.keras.optimizers.*` used to train the model.
+Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp',
+'SGD'), or a callable. Defaults to the Adagrad optimizer.
+
+`activation_fn` + +Activation function applied to each layer. If `None`, will +use tf.nn.relu. +
+`dropout`
+
+When not `None`, the probability that a given coordinate will be
+dropped out.
+
+`config` + +`RunConfig` object to configure the runtime settings. +
+`warm_start_from` + +A string filepath to a checkpoint to warm-start from, or +a `WarmStartSettings` object to fully configure warm-starting. If the +string filepath is provided instead of a `WarmStartSettings`, then all +weights are warm-started, and it is assumed that vocabularies and Tensor +names are unchanged. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Describes how +to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. +
+`batch_norm` + +Whether to use batch normalization after each hidden layer. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation
+metrics.
+
+ + + +
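+
+As a rough usage sketch (it assumes `estimator` is an already-constructed
+`DNNClassifier` and that an evaluation named `'validation'` has been run):
+
+```python
+# Locate the directory where the metrics of the 'validation' evaluation
+# were dumped, e.g. to point TensorBoard or a file reader at it.
+validation_dir = estimator.eval_dir(name='validation')
+print('Evaluation metrics are written under:', validation_dir)
+```
+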

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
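+
+As a hedged sketch (it assumes the `estimator` and `input_fn_eval` defined in
+the class-level example above), a typical call looks like:
+
+```python
+# Evaluate at most 100 batches and read back the canned-classifier metrics.
+metrics = estimator.evaluate(input_fn=input_fn_eval, steps=100,
+                             name='validation')
+print('global step :', metrics['global_step'])
+print('average loss:', metrics['average_loss'])
+print('accuracy    :', metrics['accuracy'])
+```
+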

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
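+
+The following sketch shows one common way to build the
+`serving_input_receiver_fn`; it assumes the feature columns
+(`categorical_feature_a_emb`, `categorical_feature_b_emb`) from the
+class-level example and an export path of your choosing:
+
+```python
+# Serve tf.Example protos: parse them with the spec derived from the
+# feature columns, then export the inference graph.
+feature_spec = tf.feature_column.make_parse_example_spec(
+    [categorical_feature_a_emb, categorical_feature_b_emb])
+serving_input_receiver_fn = (
+    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
+export_path = estimator.export_saved_model(
+    export_dir_base='/tmp/dnn_classifier_export',
+    serving_input_receiver_fn=serving_input_receiver_fn)
+print('SavedModel written to:', export_path)
+```
+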

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+A string or a list of strings: the name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +
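+
+A minimal inspection sketch, assuming the estimator has already produced at
+least one checkpoint (for example after a call to `train`):
+
+```python
+# List the latest checkpoint and the variables it contains.
+print('Latest checkpoint:', estimator.latest_checkpoint())
+for var_name in estimator.get_variable_names():
+    value = estimator.get_variable_value(var_name)  # numpy array
+    print(var_name, value.shape)
+```
+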

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
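+
+A hedged sketch of consuming the generator (it assumes `input_fn_predict`
+from the class-level example; the `'class_ids'` and `'probabilities'` keys
+are the ones canned classifiers typically emit):
+
+```python
+# Iterate over predictions one example at a time.
+for pred in estimator.predict(input_fn=input_fn_predict):
+    class_id = pred['class_ids'][0]
+    probability = pred['probabilities'][class_id]
+    print('predicted class %d with probability %.3f' % (class_id, probability))
+```
+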

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally: if you
+call `train(steps=10)` twice, training occurs for a total of 20 steps.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you do not want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError`
+
+If either `steps` or `max_steps` is `<= 0`.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/DNNEstimator.md b/site/en/api_docs/python/tf/estimator/DNNEstimator.md new file mode 100644 index 00000000000..8b545506aa6 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/DNNEstimator.md @@ -0,0 +1,1099 @@ +description: An estimator for TensorFlow DNN models with user-specified head. + +
+ + + + + + + + + + + + +
+ +# tf.estimator.DNNEstimator + + + + + + + + + +An estimator for TensorFlow DNN models with user-specified head. + +Inherits From: [`Estimator`](../../tf/estimator/Estimator.md) + + + + + + + + +#### Example: + + + +```python +sparse_feature_a = sparse_column_with_hash_bucket(...) +sparse_feature_b = sparse_column_with_hash_bucket(...) + +sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a, + ...) +sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b, + ...) + +estimator = tf.estimator.DNNEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3), + feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], + hidden_units=[1024, 512, 256]) + +# Or estimator using the ProximalAdagradOptimizer optimizer with +# regularization. +estimator = tf.estimator.DNNEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3), + feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001 + )) + +# Or estimator using an optimizer with a learning rate decay. +estimator = tf.estimator.DNNEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3), + feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96)) + +# Or estimator with warm-starting from a previous checkpoint. +estimator = tf.estimator.DNNEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3), + feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], + hidden_units=[1024, 512, 256], + warm_start_from="/path/to/checkpoint/dir") + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train) +metrics = estimator.evaluate(input_fn=input_fn_eval) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with `key=weight_column` whose + value is a `Tensor`. +* for each `column` in `feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss and predicted output are determined by the specified head. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`head` + +A `_Head` instance constructed with a method such as +`tf.contrib.estimator.multi_label_head`. +
+`hidden_units`
+
+Iterable of the number of hidden units per layer. All layers are
+fully connected. Ex. `[64, 32]` means the first layer has 64 nodes and
+the second one has 32.
+
+`feature_columns` + +An iterable containing all the feature columns used by +the model. All items in the set should be instances of classes derived +from `_FeatureColumn`. +
+`model_dir`
+
+Directory in which to save model parameters, the graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model.
+
+`optimizer`
+
+An instance of `tf.keras.optimizers.*` used to train the model.
+Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp',
+'SGD'), or a callable. Defaults to the Adagrad optimizer.
+
+`activation_fn` + +Activation function applied to each layer. If `None`, will +use tf.nn.relu. +
+`dropout`
+
+When not `None`, the probability that a given coordinate will be
+dropped out.
+
+`config` + +`RunConfig` object to configure the runtime settings. +
+`warm_start_from` + +A string filepath to a checkpoint to warm-start from, or +a `WarmStartSettings` object to fully configure warm-starting. If the +string filepath is provided instead of a `WarmStartSettings`, then all +weights are warm-started, and it is assumed that vocabularies and Tensor +names are unchanged. +
+`batch_norm` + +Whether to use batch normalization after each hidden layer. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation
+metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
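+
+A sketch of exporting both the evaluation and the serving graphs into a
+single `SavedModel`; `serving_input_receiver_fn` and `eval_input_receiver_fn`
+are assumed to be defined elsewhere (e.g. built with the helpers in
+tf.estimator.export):
+
+```python
+input_receiver_fn_map = {
+    tf.estimator.ModeKeys.EVAL: eval_input_receiver_fn,
+    tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
+}
+export_path = estimator.experimental_export_all_saved_models(
+    export_dir_base='/tmp/dnn_estimator_export',
+    input_receiver_fn_map=input_receiver_fn_map)
+```
+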

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+A string or a list of strings: the name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
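+
+A small sketch of the `yield_single_examples=False` path, which keeps whole
+batches together (useful when `model_fn` outputs tensors whose first
+dimension is not the batch size); `input_fn_predict` is assumed from the
+class-level example:
+
+```python
+for batch_pred in estimator.predict(input_fn=input_fn_predict,
+                                    yield_single_examples=False):
+    # batch_pred is a dict of numpy arrays, one entry per prediction key.
+    print({key: value.shape for key, value in batch_pred.items()})
+```
+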

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally: if you
+call `train(steps=10)` twice, training occurs for a total of 20 steps.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you do not want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError`
+
+If either `steps` or `max_steps` is `<= 0`.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/DNNLinearCombinedClassifier.md b/site/en/api_docs/python/tf/estimator/DNNLinearCombinedClassifier.md new file mode 100644 index 00000000000..d274c17ea6c --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/DNNLinearCombinedClassifier.md @@ -0,0 +1,1177 @@ +description: An estimator for TensorFlow Linear and DNN joined classification models. + +
+ + + + + + + + + + + + +
+ +# tf.estimator.DNNLinearCombinedClassifier + + + + + + + + + +An estimator for TensorFlow Linear and DNN joined classification models. + +Inherits From: [`Estimator`](../../tf/estimator/Estimator.md) + + + + + + + +Note: This estimator is also known as wide-n-deep. + +#### Example: + + + +```python +numeric_feature = numeric_column(...) +categorical_column_a = categorical_column_with_hash_bucket(...) +categorical_column_b = categorical_column_with_hash_bucket(...) + +categorical_feature_a_x_categorical_feature_b = crossed_column(...) +categorical_feature_a_emb = embedding_column( + categorical_column=categorical_feature_a, ...) +categorical_feature_b_emb = embedding_column( + categorical_id_column=categorical_feature_b, ...) + +estimator = tf.estimator.DNNLinearCombinedClassifier( + # wide settings + linear_feature_columns=[categorical_feature_a_x_categorical_feature_b], + linear_optimizer=tf.keras.optimizers.Ftrl(...), + # deep settings + dnn_feature_columns=[ + categorical_feature_a_emb, categorical_feature_b_emb, + numeric_feature], + dnn_hidden_units=[1000, 500, 100], + dnn_optimizer=tf.keras.optimizers.Adagrad(...), + # warm-start settings + warm_start_from="/path/to/checkpoint/dir") + +# To apply L1 and L2 regularization, you can set dnn_optimizer to: +tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001, + l2_regularization_strength=0.001) +# To apply learning rate decay, you can set dnn_optimizer to a callable: +lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96) +# It is the same for linear_optimizer. + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train, steps=100) +metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* for each `column` in `dnn_feature_columns` + `linear_feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using softmax cross entropy. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_dir`
+
+Directory in which to save model parameters, the graph, etc. This can
+also be used to load checkpoints from the directory into an estimator to
+continue training a previously saved model.
+
+`linear_feature_columns`
+
+An iterable containing all the feature columns
+used by the linear part of the model. All items in the set must be
+instances of classes derived from `FeatureColumn`.
+
+`linear_optimizer` + +An instance of `tf.keras.optimizers.*` used to apply +gradients to the linear part of the model. Can also be a string (one of +'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to +FTRL optimizer. +
+`dnn_feature_columns`
+
+An iterable containing all the feature columns used
+by the deep part of the model. All items in the set must be instances of
+classes derived from `FeatureColumn`.
+
+`dnn_optimizer` + +An instance of `tf.keras.optimizers.*` used to apply +gradients to the deep part of the model. Can also be a string (one of +'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to +Adagrad optimizer. +
+`dnn_hidden_units` + +List of hidden units per layer. All layers are fully +connected. +
+`dnn_activation_fn` + +Activation function applied to each layer. If None, +will use tf.nn.relu. +
+`dnn_dropout`
+
+When not None, the probability that a given coordinate will be
+dropped out.
+
+`n_classes` + +Number of label classes. Defaults to 2, namely binary +classification. Must be > 1. +
+`weight_column` + +A string or a `_NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. If it is a string, it is +used as a key to fetch weight tensor from the `features`. If it is a +`_NumericColumn`, raw tensor is fetched by key `weight_column.key`, then +weight_column.normalizer_fn is applied on it to get weight tensor. +
+`label_vocabulary`
+
+A list of strings representing possible label values. If
+given, labels must be of string type and take values from
+`label_vocabulary`. If it is not given, labels must already be
+encoded as an integer or float within [0, 1] for `n_classes=2`, and
+encoded as integer values in {0, 1,..., n_classes-1} for `n_classes` > 2.
+An error is raised if the vocabulary is not provided and the labels are
+strings.
+
+`config` + +RunConfig object to configure the runtime settings. +
+`warm_start_from` + +A string filepath to a checkpoint to warm-start from, or +a `WarmStartSettings` object to fully configure warm-starting. If the +string filepath is provided instead of a `WarmStartSettings`, then all +weights are warm-started, and it is assumed that vocabularies and Tensor +names are unchanged. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Describes how +to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. +
+`batch_norm` + +Whether to use batch normalization after each hidden layer. +
+`linear_sparse_combiner` + +A string specifying how to reduce the linear model +if a categorical column is multivalent. One of "mean", "sqrtn", and +"sum" -- these are effectively different ways to do example-level +normalization, which can be useful for bag-of-words features. For more +details, see `tf.feature_column.linear_model`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+If both linear_feature_columns and dnn_feature_columns are
+empty at the same time.
+
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation
+metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
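+
+As a hedged sketch of evaluating a specific checkpoint rather than the
+latest one (the checkpoint path and `input_fn_eval` are assumed
+placeholders):
+
+```python
+metrics = estimator.evaluate(
+    input_fn=input_fn_eval,
+    checkpoint_path='/path/to/model_dir/model.ckpt-10000',
+    name='heldout')
+print('accuracy at step %d: %f' % (metrics['global_step'], metrics['accuracy']))
+```
+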

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
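+
+A sketch showing the `assets_extra` and `as_text` arguments; the
+`serving_input_receiver_fn` and file paths are assumed placeholders:
+
+```python
+export_path = estimator.export_saved_model(
+    export_dir_base='/tmp/wide_deep_export',
+    serving_input_receiver_fn=serving_input_receiver_fn,
+    assets_extra={'vocab.txt': '/path/to/vocab.txt'},
+    as_text=True)
+```
+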

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+A string or a list of strings: the name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If the batch lengths of the prediction tensors are not all the same and `yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
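
The sketch below shows one way to drive `predict` with a small `tf.data`-based `input_fn`. The feature name `'x'`, the values, and the assumption of an already-trained `estimator` are all placeholders, and `predict_keys` would only be meaningful when the model's `predictions` is a dict.

```python
import numpy as np
import tensorflow as tf

def predict_input_fn():
  # Features only, no labels; prediction stops once the dataset is exhausted.
  features = {'x': np.array([[1.0], [2.0], [3.0]], dtype=np.float32)}
  return tf.data.Dataset.from_tensor_slices(features).batch(2)

for pred in estimator.predict(input_fn=predict_input_fn,
                              yield_single_examples=True):
  # For canned regressors each element is a dict such as
  # {'predictions': array([...])}.
  print(pred)
```
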

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or the `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you do not want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or the `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iterations, since the first call already did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/DNNLinearCombinedEstimator.md b/site/en/api_docs/python/tf/estimator/DNNLinearCombinedEstimator.md new file mode 100644 index 00000000000..231e03d7b33 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/DNNLinearCombinedEstimator.md @@ -0,0 +1,1122 @@ +description: An estimator for TensorFlow Linear and DNN joined models with custom head. + +
+ + + + + + + + + + + + +
+ +# tf.estimator.DNNLinearCombinedEstimator + + + + + + + + + +An estimator for TensorFlow Linear and DNN joined models with custom head. + +Inherits From: [`Estimator`](../../tf/estimator/Estimator.md) + + + + + + + +Note: This estimator is also known as wide-n-deep. + +#### Example: + + + +```python +numeric_feature = numeric_column(...) +categorical_column_a = categorical_column_with_hash_bucket(...) +categorical_column_b = categorical_column_with_hash_bucket(...) + +categorical_feature_a_x_categorical_feature_b = crossed_column(...) +categorical_feature_a_emb = embedding_column( + categorical_column=categorical_feature_a, ...) +categorical_feature_b_emb = embedding_column( + categorical_column=categorical_feature_b, ...) + +estimator = tf.estimator.DNNLinearCombinedEstimator( + head=tf.estimator.MultiLabelHead(n_classes=3), + # wide settings + linear_feature_columns=[categorical_feature_a_x_categorical_feature_b], + linear_optimizer=tf.keras.optimizers.Ftrl(...), + # deep settings + dnn_feature_columns=[ + categorical_feature_a_emb, categorical_feature_b_emb, + numeric_feature], + dnn_hidden_units=[1000, 500, 100], + dnn_optimizer=tf.keras.optimizers.Adagrad(...)) + +# To apply L1 and L2 regularization, you can set dnn_optimizer to: +tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001, + l2_regularization_strength=0.001) +# To apply learning rate decay, you can set dnn_optimizer to a callable: +lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96) +# It is the same for linear_optimizer. + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train, steps=100) +metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* for each `column` in `dnn_feature_columns` + `linear_feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using mean squared error. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`head` + +A `Head` instance constructed with a method such as +tf.estimator.MultiLabelHead. +
+`model_dir` + +Directory in which to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. +
+`linear_feature_columns` + +An iterable containing all the feature columns +used by linear part of the model. All items in the set must be instances +of classes derived from `FeatureColumn`. +
+`linear_optimizer` + +An instance of `tf.keras.optimizers.*` used to apply +gradients to the linear part of the model. Can also be a string (one of +'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to +FTRL optimizer. +
+`dnn_feature_columns` + +An iterable containing all the feature columns used +by deep part of the model. All items in the set must be instances of +classes derived from `FeatureColumn`. +
+`dnn_optimizer` + +An instance of `tf.keras.optimizers.*` used to apply +gradients to the deep part of the model. Can also be a string (one of +'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to +Adagrad optimizer. +
+`dnn_hidden_units` + +List of hidden units per layer. All layers are fully +connected. +
+`dnn_activation_fn` + +Activation function applied to each layer. If None, +will use tf.nn.relu. +
+`dnn_dropout` + +When not None, the probability we will drop out a given +coordinate. +
+`config` + +RunConfig object to configure the runtime settings. +
+`linear_sparse_combiner` + +A string specifying how to reduce the linear model +if a categorical column is multivalent. One of "mean", "sqrtn", and +"sum" -- these are effectively different ways to do example-level +normalization, which can be useful for bag-of-words features. For more +details, see `tf.feature_column.linear_model`. +
+ + + + + + + + + + + + +
+`ValueError` + +If both `linear_feature_columns` and `dnn_feature_columns` are empty at the same time. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation metrics. +
+ + + +
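
For instance, the events written by a named evaluation can be located like this (a sketch; the name `'validation'` is hypothetical and the exact directory suffix is an assumption about the default layout):

```python
# Default evaluation directory (typically `<model_dir>/eval`).
print(estimator.eval_dir())

# Directory used by `estimator.evaluate(..., name='validation')`
# (typically `<model_dir>/eval_validation`).
print(estimator.eval_dir(name='validation'))
```
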

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
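
A short sketch of running two evaluations on different datasets so their metrics land in separate folders; the estimator, the input functions, and the step counts are all assumptions:

```python
estimator.train(input_fn=train_input_fn, steps=1000)

train_metrics = estimator.evaluate(input_fn=train_input_fn, steps=100,
                                   name='training_data')
test_metrics = estimator.evaluate(input_fn=test_input_fn, steps=100,
                                  name='test_data')

# Each result is a dict keyed by metric name plus 'global_step'.
print(train_metrics['loss'], test_metrics['loss'])
```
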

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
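
A minimal sketch of the multi-mode export described above, passing only a predict-mode receiver. The placeholder feature `'x'`, the export path, and the assumption of an already-trained `estimator` are illustrative only.

```python
import tensorflow as tf

def serving_input_receiver_fn():
  inputs = tf.compat.v1.placeholder(tf.float32, shape=[None, 1], name='x')
  features = {'x': inputs}
  return tf.estimator.export.ServingInputReceiver(features, features)

# Only the predict-mode receiver is provided here; train/eval receivers
# could be added under their own ModeKeys entries.
input_receiver_fn_map = {
    tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
}

export_path = estimator.experimental_export_all_saved_models(
    '/tmp/my_model/all_modes_export', input_receiver_fn_map)
```
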

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +A string or a list of strings, the name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If the batch lengths of the prediction tensors are not all the same and `yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or the `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you do not want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or the `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iterations, since the first call already did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/DNNLinearCombinedRegressor.md b/site/en/api_docs/python/tf/estimator/DNNLinearCombinedRegressor.md new file mode 100644 index 00000000000..2d2b3a5c57b --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/DNNLinearCombinedRegressor.md @@ -0,0 +1,1165 @@ +description: An estimator for TensorFlow Linear and DNN joined models for regression. + +
+ + + + + + + + + + + + +
+ +# tf.estimator.DNNLinearCombinedRegressor + + + + + + + + + +An estimator for TensorFlow Linear and DNN joined models for regression. + +Inherits From: [`Estimator`](../../tf/estimator/Estimator.md) + + + + + + + +Note: This estimator is also known as wide-n-deep. + +#### Example: + + + +```python +numeric_feature = numeric_column(...) +categorical_column_a = categorical_column_with_hash_bucket(...) +categorical_column_b = categorical_column_with_hash_bucket(...) + +categorical_feature_a_x_categorical_feature_b = crossed_column(...) +categorical_feature_a_emb = embedding_column( + categorical_column=categorical_feature_a, ...) +categorical_feature_b_emb = embedding_column( + categorical_column=categorical_feature_b, ...) + +estimator = tf.estimator.DNNLinearCombinedRegressor( + # wide settings + linear_feature_columns=[categorical_feature_a_x_categorical_feature_b], + linear_optimizer=tf.keras.optimizers.Ftrl(...), + # deep settings + dnn_feature_columns=[ + categorical_feature_a_emb, categorical_feature_b_emb, + numeric_feature], + dnn_hidden_units=[1000, 500, 100], + dnn_optimizer=tf.keras.optimizers.Adagrad(...), + # warm-start settings + warm_start_from="/path/to/checkpoint/dir") + +# To apply L1 and L2 regularization, you can set dnn_optimizer to: +tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001, + l2_regularization_strength=0.001) +# To apply learning rate decay, you can set dnn_optimizer to a callable: +lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96) +# It is the same for linear_optimizer. + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train, steps=100) +metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* for each `column` in `dnn_feature_columns` + `linear_feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using mean squared error. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_dir` + +Directory in which to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. +
+`linear_feature_columns` + +An iterable containing all the feature columns +used by linear part of the model. All items in the set must be instances +of classes derived from `FeatureColumn`. +
+`linear_optimizer` + +An instance of `tf.keras.optimizers.*` used to apply +gradients to the linear part of the model. Can also be a string (one of +'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to +FTRL optimizer. +
+`dnn_feature_columns` + +An iterable containing all the feature columns used +by deep part of the model. All items in the set must be instances of +classes derived from `FeatureColumn`. +
+`dnn_optimizer` + +An instance of `tf.keras.optimizers.*` used to apply +gradients to the deep part of the model. Can also be a string (one of +'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to +Adagrad optimizer. +
+`dnn_hidden_units` + +List of hidden units per layer. All layers are fully +connected. +
+`dnn_activation_fn` + +Activation function applied to each layer. If None, +will use tf.nn.relu. +
+`dnn_dropout` + +When not None, the probability we will drop out a given +coordinate. +
+`label_dimension` + +Number of regression targets per example. This is the +size of the last dimension of the labels and logits `Tensor` objects +(typically, these have shape `[batch_size, label_dimension]`). +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. If it is a string, it is +used as a key to fetch weight tensor from the `features`. If it is a +`_NumericColumn`, raw tensor is fetched by key `weight_column.key`, then +weight_column.normalizer_fn is applied on it to get weight tensor. +
+`config` + +RunConfig object to configure the runtime settings. +
+`warm_start_from` + +A string filepath to a checkpoint to warm-start from, or +a `WarmStartSettings` object to fully configure warm-starting. If the +string filepath is provided instead of a `WarmStartSettings`, then all +weights are warm-started, and it is assumed that vocabularies and Tensor +names are unchanged. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Describes how +to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. +
+`batch_norm` + +Whether to use batch normalization after each hidden layer. +
+`linear_sparse_combiner` + +A string specifying how to reduce the linear model +if a categorical column is multivalent. One of "mean", "sqrtn", and +"sum" -- these are effectively different ways to do example-level +normalization, which can be useful for bag-of-words features. For more +details, see `tf.feature_column.linear_model`. +
+ + + + + + + + + + + + +
+`ValueError` + +If both `linear_feature_columns` and `dnn_feature_columns` are empty at the same time. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +A string or a list of strings, the name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If the batch lengths of the prediction tensors are not all the same and `yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or the `StopIteration` exception. `steps` works incrementally: if you call `train(steps=10)` twice, training occurs for 20 steps in total. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before 20 steps. If you do not want incremental behavior, set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train the model. If `None`, train forever or until `input_fn` generates the `tf.errors.OutOfRange` error or the `StopIteration` exception. If set, `steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the middle, training stops before `max_steps` steps. Two calls to `train(steps=100)` means 200 training iterations. On the other hand, two calls to `train(max_steps=100)` means that the second call will not do any iterations, since the first call already did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/DNNRegressor.md b/site/en/api_docs/python/tf/estimator/DNNRegressor.md new file mode 100644 index 00000000000..2ab30c96ac3 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/DNNRegressor.md @@ -0,0 +1,1118 @@ +description: A regressor for TensorFlow DNN models. + +
+ + + + + + + + + + + + +
+ +# tf.estimator.DNNRegressor + + + + + + + + + +A regressor for TensorFlow DNN models. + +Inherits From: [`Estimator`](../../tf/estimator/Estimator.md) + + + + + + + + +#### Example: + + + +```python +categorical_feature_a = categorical_column_with_hash_bucket(...) +categorical_feature_b = categorical_column_with_hash_bucket(...) + +categorical_feature_a_emb = embedding_column( + categorical_column=categorical_feature_a, ...) +categorical_feature_b_emb = embedding_column( + categorical_column=categorical_feature_b, ...) + +estimator = tf.estimator.DNNRegressor( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256]) + +# Or estimator using the ProximalAdagradOptimizer optimizer with +# regularization. +estimator = tf.estimator.DNNRegressor( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=tf.compat.v1.train.ProximalAdagradOptimizer( + learning_rate=0.1, + l1_regularization_strength=0.001 + )) + +# Or estimator using an optimizer with a learning rate decay. +estimator = tf.estimator.DNNRegressor( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + optimizer=lambda: tf.keras.optimizers.Adam( + learning_rate=tf.compat.v1.train.exponential_decay( + learning_rate=0.1, + global_step=tf.compat.v1.train.get_global_step(), + decay_steps=10000, + decay_rate=0.96)) + +# Or estimator with warm-starting from a previous checkpoint. +estimator = tf.estimator.DNNRegressor( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + warm_start_from="/path/to/checkpoint/dir") + +# Input builders +def input_fn_train: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_eval: + # Returns tf.data.Dataset of (x, y) tuple where y represents label's class + # index. + pass +def input_fn_predict: + # Returns tf.data.Dataset of (x, None) tuple. + pass +estimator.train(input_fn=input_fn_train) +metrics = estimator.evaluate(input_fn=input_fn_eval) +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with `key=weight_column` whose + value is a `Tensor`. +* for each `column` in `feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using mean squared error. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`hidden_units` + +Iterable of number hidden units per layer. All layers are +fully connected. Ex. `[64, 32]` means first layer has 64 nodes and +second one has 32. +
+`feature_columns` + +An iterable containing all the feature columns used by +the model. All items in the set should be instances of classes derived +from `FeatureColumn`. +
+`model_dir` + +Directory in which to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. +
+`label_dimension` + +Number of regression targets per example. This is the +size of the last dimension of the labels and logits `Tensor` objects +(typically, these have shape `[batch_size, label_dimension]`). +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. If it is a string, it is +used as a key to fetch weight tensor from the `features`. If it is a +`NumericColumn`, raw tensor is fetched by key `weight_column.key`, then +weight_column.normalizer_fn is applied on it to get weight tensor. +
+`optimizer` + +An instance of `tf.keras.optimizers.*` used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer. +
+`activation_fn` + +Activation function applied to each layer. If `None`, will +use tf.nn.relu. +
+`dropout` + +When not `None`, the probability we will drop out a given +coordinate. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+`warm_start_from` + +A string filepath to a checkpoint to warm-start from, or +a `WarmStartSettings` object to fully configure warm-starting. If the +string filepath is provided instead of a `WarmStartSettings`, then all +weights are warm-started, and it is assumed that vocabularies and Tensor +names are unchanged. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Describes how +to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. +
+`batch_norm` + +Whether to use batch normalization after each hidden layer. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, this method builds a new graph by calling the `input_receiver_fn` to obtain feature and label `Tensor`s. Next, this method calls the `Estimator`'s `model_fn` in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the `SavedModel` (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three `tf.MetaGraphDefs` are saved with a single set of variables in a single `SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, this method creates a timestamped export directory below `export_dir_base` and writes a `SavedModel` into it containing the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` for each element of the `export_outputs` dict returned from the `model_fn`, named using the same keys. One of these keys is always `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra collection, and loss, metrics, and predictions are included in a `SignatureDef` for the mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
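
#### Example:

A minimal sketch of exporting a prediction graph through `input_receiver_fn_map`, assuming an `estimator` that has already been constructed and trained elsewhere; the feature name `"x"`, its shape, and the export directory are placeholder assumptions.

```python
import tensorflow as tf

def serving_input_receiver_fn():
  # Placeholder receiver for a single float feature named "x" (assumed).
  inputs = {"x": tf.compat.v1.placeholder(tf.float32, [None, 1], name="x")}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

# Only the PREDICT mode is exported here; TRAIN/EVAL entries would need
# supervised receivers that also return labels.
input_receiver_fn_map = {
    tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
}

export_dir = estimator.experimental_export_all_saved_models(
    export_dir_base="/tmp/my_model/export_all",  # hypothetical directory
    input_receiver_fn_map=input_receiver_fn_map)
print(export_dir)  # bytes path of the timestamped export directory
```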

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+string or a list of strings, name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
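
#### Example:

A small sketch of inspecting checkpointed variables, assuming an `estimator` that has already produced at least one checkpoint.

```python
# List every variable name stored in the latest checkpoint.
for name in estimator.get_variable_names():
  print(name)

# Fetch one value as a numpy array; "global_step" exists in every
# Estimator checkpoint.
print("global_step =", estimator.get_variable_value("global_step"))
```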

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the features. Prediction continues
+until `input_fn` raises an end-of-input exception
+(tf.errors.OutOfRangeError or `StopIteration`). See [Premade
+Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* tf.data.Dataset object -- Outputs of `Dataset` object must have
+same constraints as below.
+* features -- A tf.Tensor or a dictionary of string feature name to
+`Tensor`. features are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+* A tuple, in which case the first item is extracted as features.
+
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError`
+
+If the batch length is not the same across `predictions` outputs and
+`yield_single_examples` is `True`.
+
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
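
#### Example:

A minimal sketch of iterating over predictions, assuming a trained `estimator` whose `model_fn` consumes a single float feature named `"x"`; the data and batch size are placeholders.

```python
import numpy as np
import tensorflow as tf

def predict_input_fn():
  # Features only; labels are not needed for prediction.
  features = {"x": np.random.rand(5, 1).astype(np.float32)}
  return tf.data.Dataset.from_tensor_slices(features).batch(5)

for pred in estimator.predict(input_fn=predict_input_fn):
  # With yield_single_examples=True (the default) each `pred` is a dict of
  # numpy values for one example; canned classifiers, for instance, expose
  # keys such as 'probabilities' and 'class_ids'.
  print(pred)
```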

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally. If you
+call `train(steps=10)` twice, training occurs for a total of 20 steps.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you don't want incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError`
+
+If either `steps` or `max_steps` is `<= 0`.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/Estimator.md b/site/en/api_docs/python/tf/estimator/Estimator.md new file mode 100644 index 00000000000..f1d0bc351e6 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/Estimator.md @@ -0,0 +1,1065 @@ +description: Estimator class to train and evaluate TensorFlow models. + +
+ + + + + + + + + + + + +
+ +# tf.estimator.Estimator + + + + + + + + + +Estimator class to train and evaluate TensorFlow models. + +Inherits From: [`Estimator`](../../tf/compat/v1/estimator/Estimator.md) + + + + + + + +The `Estimator` object wraps a model which is specified by a `model_fn`, +which, given inputs and a number of other parameters, returns the ops +necessary to perform training, evaluation, or predictions. + +All outputs (checkpoints, event files, etc.) are written to `model_dir`, or a +subdirectory thereof. If `model_dir` is not set, a temporary directory is +used. + +The `config` argument can be passed tf.estimator.RunConfig object containing +information about the execution environment. It is passed on to the +`model_fn`, if the `model_fn` has a parameter named "config" (and input +functions in the same manner). If the `config` parameter is not passed, it is +instantiated by the `Estimator`. Not passing config means that defaults useful +for local execution are used. `Estimator` makes config available to the model +(for instance, to allow specialization based on the number of workers +available), and also uses some of its fields to control internals, especially +regarding checkpointing. + +The `params` argument contains hyperparameters. It is passed to the +`model_fn`, if the `model_fn` has a parameter named "params", and to the input +functions in the same manner. `Estimator` only passes params along, it does +not inspect it. The structure of `params` is therefore entirely up to the +developer. + +None of `Estimator`'s methods can be overridden in subclasses (its +constructor enforces this). Subclasses should use `model_fn` to configure +the base class, and may add methods implementing specialized functionality. + +See [estimators](https://tensorflow.org/guide/estimators) for more + information. + +To warm-start an `Estimator`: + +```python +estimator = tf.estimator.DNNClassifier( + feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], + hidden_units=[1024, 512, 256], + warm_start_from="/path/to/checkpoint/dir") +``` + +For more details on warm-start configuration, see +tf.estimator.WarmStartSettings. + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_fn` + +Model function. Follows the signature: +* `features` -- This is the first item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same. +* `labels` -- This is the second item returned from the `input_fn` +passed to `train`, `evaluate`, and `predict`. This should be a +single tf.Tensor or `dict` of same (for multi-head models). If +mode is tf.estimator.ModeKeys.PREDICT, `labels=None` will be +passed. If the `model_fn`'s signature does not accept `mode`, the +`model_fn` must still be able to handle `labels=None`. +* `mode` -- Optional. Specifies if this is training, evaluation or +prediction. See tf.estimator.ModeKeys. +`params` -- Optional `dict` of hyperparameters. Will receive what is +passed to Estimator in `params` parameter. This allows to configure +Estimators from hyper parameter tuning. +* `config` -- Optional estimator.RunConfig object. Will receive what +is passed to Estimator as its `config` parameter, or a default +value. Allows setting up things in your `model_fn` based on +configuration such as `num_ps_replicas`, or `model_dir`. +* Returns -- tf.estimator.EstimatorSpec +
+`model_dir` + +Directory to save model parameters, graph and etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. If `PathLike` object, the +path will be resolved. If `None`, the model_dir in `config` will be used +if set. If both are set, they must be same. If both are `None`, a +temporary directory will be used. +
+`config` + +estimator.RunConfig configuration object. +
+`params` + +`dict` of hyper parameters that will be passed into `model_fn`. +Keys are names of parameters, values are basic python types. +
+`warm_start_from` + +Optional string filepath to a checkpoint or SavedModel to +warm-start from, or a tf.estimator.WarmStartSettings object to fully +configure warm-starting. If None, only TRAINABLE variables are +warm-started. If the string filepath is provided instead of a +tf.estimator.WarmStartSettings, then all variables are warm-started, +and it is assumed that vocabularies and tf.Tensor names are unchanged. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +parameters of `model_fn` don't match `params`. +
+`ValueError` + +if this is called via a subclass and if that class overrides +a member of `Estimator`. +
+
+
+
+#### Eager Compatibility
+Calling methods of `Estimator` will work while eager execution is enabled.
+However, the `model_fn` and `input_fn` are not executed eagerly; `Estimator`
+will switch to graph mode before calling all user-provided functions (incl.
+hooks), so their code has to be compatible with graph mode execution. Note
+that `input_fn` code using tf.data generally works in both graph and eager
+modes.
+
+ + + + + + + + + + + + + + + + + + + + + + + + + 
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing evaluation metrics.
+
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the input data for evaluation. See
+[Premade Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of the
+following:
+* A tf.data.Dataset object: Outputs of `Dataset` object must be a tuple
+`(features, labels)` with same constraints as below.
+* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a
+dictionary of string feature name to `Tensor` and `labels` is a `Tensor`
+or a dictionary of string label name to `Tensor`. Both `features` and
+`labels` are consumed by `model_fn`. They should satisfy the expectation
+of `model_fn` from inputs.
+
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
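
#### Example:

A minimal train-then-evaluate sketch with a canned estimator; the feature name, toy data, hidden units, step counts, and model directory are all placeholder assumptions.

```python
import numpy as np
import tensorflow as tf

# Toy data: 100 examples of a single numeric feature with binary labels.
features = {"x": np.random.rand(100, 1).astype(np.float32)}
labels = np.random.randint(0, 2, size=(100,))

def input_fn():
  dataset = tf.data.Dataset.from_tensor_slices((features, labels))
  return dataset.shuffle(100).batch(16)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[tf.feature_column.numeric_column("x")],
    hidden_units=[8],
    model_dir="/tmp/my_model")  # hypothetical directory

estimator.train(input_fn=input_fn, steps=50)
metrics = estimator.evaluate(input_fn=input_fn, steps=10)
print(metrics["loss"], metrics["accuracy"], metrics["global_step"])
```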

experimental_export_all_saved_models

+
+View source
+
+
+
+Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.
+
+For each mode passed in via the `input_receiver_fn_map`,
+this method builds a new graph by calling the `input_receiver_fn` to obtain
+feature and label `Tensor`s. Next, this method calls the `Estimator`'s
+`model_fn` in the passed mode to generate the model graph based on
+those features and labels, and restores the given checkpoint
+(or, lacking that, the most recent checkpoint) into the graph.
+Only one of the modes is used for saving variables to the `SavedModel`
+(order of preference: tf.estimator.ModeKeys.TRAIN,
+tf.estimator.ModeKeys.EVAL, then
+tf.estimator.ModeKeys.PREDICT), such that up to three
+`tf.MetaGraphDefs` are saved with a single set of variables in a single
+`SavedModel` directory.
+
+For the variables and `tf.MetaGraphDefs`, this method creates a timestamped
+export directory below `export_dir_base`, and writes a `SavedModel` into it
+containing the `tf.MetaGraphDef` for the given mode and its associated
+signatures.
+
+For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
+for each element of the `export_outputs` dict returned from the `model_fn`,
+named using the same keys. One of these keys is always
+`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
+indicating which
+signature will be served when a serving request does not specify one.
+For each signature, the outputs are provided by the corresponding
+tf.estimator.export.ExportOutputs, and the inputs are always the input
+receivers provided by
+the `serving_input_receiver_fn`.
+
+For training and evaluation, the `train_op` is stored in an extra
+collection, and loss, metrics, and predictions are included in a
+`SignatureDef` for the mode in question.
+
+Extra assets may be written into the `SavedModel` via the `assets_extra`
+argument. This should be a dict, where each key gives a destination path
+(including the filename) relative to the assets.extra directory. The
+corresponding value gives the full path of the source file to be copied.
+For example, the simple case of copying a single file without renaming it
+is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
+
+ + + + + + + + + + + + + + + + + + + + + 
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
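
#### Example:

A minimal sketch of a `serving_input_receiver_fn` and the export call, assuming the trained `estimator` from the sketch above; the feature name, shape, and export directory are assumptions.

```python
import tensorflow as tf

def serving_input_receiver_fn():
  # Raw placeholder for the single feature "x"; name and shape must match
  # what the model_fn expects.
  inputs = {"x": tf.compat.v1.placeholder(tf.float32, [None, 1], name="x")}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_dir = estimator.export_saved_model(
    export_dir_base="/tmp/my_model/export",  # hypothetical base directory
    serving_input_receiver_fn=serving_input_receiver_fn)
print(export_dir)  # bytes path of the timestamped SavedModel directory
```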

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+string or a list of strings, name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +
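
#### Example:

A small sketch showing how the latest checkpoint path can be pinned for a later call, assuming the trained `estimator` and `input_fn` from the sketches above.

```python
ckpt = estimator.latest_checkpoint()
if ckpt is None:
  print("No checkpoint has been written yet.")
else:
  # Evaluate exactly this checkpoint, even if training writes newer ones
  # in the meantime.
  metrics = estimator.evaluate(
      input_fn=input_fn, steps=10, checkpoint_path=ckpt)
```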

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the features. Prediction continues
+until `input_fn` raises an end-of-input exception
+(tf.errors.OutOfRangeError or `StopIteration`). See [Premade
+Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* tf.data.Dataset object -- Outputs of `Dataset` object must have
+same constraints as below.
+* features -- A tf.Tensor or a dictionary of string feature name to
+`Tensor`. features are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+* A tuple, in which case the first item is extracted as features.
+
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError`
+
+If the batch length is not the same across `predictions` outputs and
+`yield_single_examples` is `True`.
+
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or train until `input_fn` generates the `tf.errors.OutOfRange`
+error or `StopIteration` exception. `steps` works incrementally. If you
+call `train(steps=10)` twice, training occurs for a total of 20 steps.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you don't want incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError`
+
+If either `steps` or `max_steps` is `<= 0`.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/EstimatorSpec.md b/site/en/api_docs/python/tf/estimator/EstimatorSpec.md new file mode 100644 index 00000000000..43e07eb38f5 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/EstimatorSpec.md @@ -0,0 +1,296 @@ +description: Ops and objects returned from a model_fn and passed to an Estimator. + +
+ + + + + + + + + + + + + + +
+ +# tf.estimator.EstimatorSpec + + + + + + + + + +Ops and objects returned from a `model_fn` and passed to an `Estimator`. + + + + + + + + + +`EstimatorSpec` fully defines the model to be run by an `Estimator`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`mode` + +A `ModeKeys`. Specifies if this is training, evaluation or +prediction. +
+`predictions` + +Predictions `Tensor` or dict of `Tensor`. +
+`loss` + +Training loss `Tensor`. Must be either scalar, or with shape `[1]`. +
+`train_op` + +Op for the training step. +
+`eval_metric_ops`
+
+Dict of metric results keyed by name.
+The values of the dict can be one of the following: (1) an instance of the
+`Metric` class. (2) The results of calling a metric function, namely a
+`(metric_tensor, update_op)` tuple. `metric_tensor` should be
+evaluated without any impact on state (typically it is a pure computation
+based on variables). For example, it should not trigger the
+`update_op` or require any input fetching.
+
+`export_outputs` + +Describes the output signatures to be exported to +`SavedModel` and used during serving. +A dict `{name: output}` where: +* name: An arbitrary name for this output. +* output: an `ExportOutput` object such as `ClassificationOutput`, +`RegressionOutput`, or `PredictOutput`. Single-headed models only need +to specify one entry in this dictionary. Multi-headed models should +specify one entry for each head, one of which must be named using +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`. +If no entry is provided, a default `PredictOutput` mapping to +`predictions` will be created. +
+`training_chief_hooks` + +Iterable of `tf.train.SessionRunHook` objects to run +on the chief worker during training. +
+`training_hooks` + +Iterable of `tf.train.SessionRunHook` objects to run on +all workers during training. +
+`scaffold` + +A `tf.train.Scaffold` object that can be used to set +initialization, saver, and more to be used in training. +
+`evaluation_hooks` + +Iterable of `tf.train.SessionRunHook` objects to run +during evaluation. +
+`prediction_hooks` + +Iterable of `tf.train.SessionRunHook` objects to run +during predictions. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If validation fails. +
+`TypeError` + +If any of the arguments is not the expected type. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`mode` + + +
+`predictions` + + +
+`loss` + + +
+`train_op` + + +
+`eval_metric_ops` + + +
+`export_outputs` + + +
+`training_chief_hooks` + + +
+`training_hooks` + + +
+`scaffold` + + +
+`evaluation_hooks` + + +
+`prediction_hooks` + + +
+ + + +## Class Variables + +* `eval_metric_ops` +* `evaluation_hooks` +* `export_outputs` +* `loss` +* `mode` +* `prediction_hooks` +* `predictions` +* `scaffold` +* `train_op` +* `training_chief_hooks` +* `training_hooks` diff --git a/site/en/api_docs/python/tf/estimator/EvalSpec.md b/site/en/api_docs/python/tf/estimator/EvalSpec.md new file mode 100644 index 00000000000..c932e170e4f --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/EvalSpec.md @@ -0,0 +1,230 @@ +description: Configuration for the "eval" part for the train_and_evaluate call. + +
+ + + + + + + + + + +
+ +# tf.estimator.EvalSpec + + + + + + + + + +Configuration for the "eval" part for the `train_and_evaluate` call. + + + + + + + + + +`EvalSpec` combines details of evaluation of the trained model as well as its +export. Evaluation consists of computing metrics to judge the performance of +the trained model. Export writes out the trained model on to external +storage. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A 'tf.data.Dataset' object: Outputs of `Dataset` object must be a +tuple (features, labels) with same constraints as below. +* A tuple (features, labels): Where features is a `Tensor` or a +dictionary of string feature name to `Tensor` and labels is a +`Tensor` or a dictionary of string label name to `Tensor`. +
+`steps` + +Int. Positive number of steps for which to evaluate model. If +`None`, evaluates until `input_fn` raises an end-of-input exception. See +Estimator.evaluate for details. +
+`name` + +String. Name of the evaluation if user needs to run multiple +evaluations on different data sets. Metrics for different evaluations +are saved in separate folders, and appear separately in tensorboard. +
+`hooks` + +Iterable of `tf.train.SessionRunHook` objects to run during +evaluation. +
+`exporters` + +Iterable of `Exporter`s, or a single one, or `None`. +`exporters` will be invoked after each evaluation. +
+`start_delay_secs` + +Int. Start evaluating after waiting for this many +seconds. +
+`throttle_secs` + +Int. Do not re-evaluate unless the last evaluation was +started at least this many seconds ago. Of course, evaluation does not +occur if no new checkpoints are available, hence, this is the minimum. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If any of the input arguments is invalid. +
+`TypeError` + +If any of the arguments is not of the expected type. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_fn` + + +
+`steps` + + +
+`name` + + +
+`hooks` + + +
+`exporters` + + +
+`start_delay_secs` + + +
+`throttle_secs` + + +
+ + + +## Class Variables + +* `exporters` +* `hooks` +* `input_fn` +* `name` +* `start_delay_secs` +* `steps` +* `throttle_secs` diff --git a/site/en/api_docs/python/tf/estimator/Exporter.md b/site/en/api_docs/python/tf/estimator/Exporter.md new file mode 100644 index 00000000000..f0e3a733602 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/Exporter.md @@ -0,0 +1,142 @@ +description: A class representing a type of model export. + +
+ + + +
+ +# tf.estimator.Exporter + + + + + + + + + +A class representing a type of model export. + + + + + + + + + + + + + + + + + +
+`name` + +Directory name. + +A directory name under the export base directory where exports of +this type are written. Should not be `None` nor empty. +
+ + + +## Methods + +

export

+ +View source + + + +Exports the given `Estimator` to a specific format. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`estimator` + +the `Estimator` to export. +
+`export_path` + +A string containing a directory where to write the export. +
+`checkpoint_path` + +The checkpoint path to export. +
+`eval_result` + +The output of Estimator.evaluate on this checkpoint. +
+`is_the_final_export` + +This boolean is True when this is an export in the +end of training. It is False for the intermediate exports during the +training. When passing `Exporter` to tf.estimator.train_and_evaluate +`is_the_final_export` is always False if TrainSpec.max_steps is +`None`. +
+ + + + + + + + + + + +
Returns
+The string path to the exported directory or `None` if export is skipped. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/FeedFnHook.md b/site/en/api_docs/python/tf/estimator/FeedFnHook.md new file mode 100644 index 00000000000..6cd43cd86f1 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/FeedFnHook.md @@ -0,0 +1,273 @@ +description: Runs feed_fn and sets the feed_dict accordingly. + +
+ + + + + + + + +
+ +# tf.estimator.FeedFnHook + + + + + + + + + +Runs `feed_fn` and sets the `feed_dict` accordingly. + +Inherits From: [`SessionRunHook`](../../tf/estimator/SessionRunHook.md) + + + + + + + + + + + + + + + + + + + +
+`feed_fn` + +function that takes no arguments and returns `dict` of `Tensor` +to feed. +
+ + + +## Methods + +
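
#### Example:

A minimal, self-contained sketch of the hook feeding a placeholder inside a `MonitoredSession` (graph mode); the placeholder name, shapes, and values are made up for illustration.

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3], name="x")
mean = tf.reduce_mean(x)

def feed_fn():
  # Called before every run(); returns the feed_dict to use for that run.
  return {x: np.random.rand(4, 3)}

hook = tf.estimator.FeedFnHook(feed_fn)
with tf.compat.v1.train.MonitoredSession(hooks=[hook]) as sess:
  print(sess.run(mean))
```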

after_create_session

+ +View source + + + +Called when new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +has two essential differences with the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+
+View source
+
+
+
+Called after each call to run().
+
+The `run_values` argument contains the results of the ops/tensors requested
+by `before_run()`.
+
+The `run_context` argument is the same one sent to the `before_run` call.
+`run_context.request_stop()` can be called to stop the iteration.
+
+If `session.run()` raises any exceptions then `after_run()` is not called.
+
+ + + + + + + + + + + + 
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the original run() call. +The run args you return can also contain feeds to be added to the run() +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +op/tensors, the TensorFlow Session. + +At this point graph is finalized and you can not add ops. + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +can not modify the graph anymore. Second call of `begin()` on the same +graph, should not change the graph. + +

end

+
+View source
+
+
+
+Called at the end of the session.
+
+The `session` argument can be used in case the hook wants to run final ops,
+such as saving a last checkpoint.
+
+If `session.run()` raises an exception other than OutOfRangeError or
+StopIteration, then `end()` is not called.
+Note the difference between `end()` and `after_run()` behavior when
+`session.run()` raises OutOfRangeError or StopIteration. In that case
+`end()` is called but `after_run()` is not called.
+
+ + + + + + + + + 
Args
+`session` + +A TensorFlow Session that will be soon closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/FinalExporter.md b/site/en/api_docs/python/tf/estimator/FinalExporter.md new file mode 100644 index 00000000000..b96bc04b1ab --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/FinalExporter.md @@ -0,0 +1,217 @@ +description: This class exports the serving graph and checkpoints at the end. + +
+ + + + +
+ +# tf.estimator.FinalExporter + + + + + + + + + +This class exports the serving graph and checkpoints at the end. + +Inherits From: [`Exporter`](../../tf/estimator/Exporter.md) + + + + + + + + + +This class performs a single export at the end of training. + + + + + + + + + + + + + + + + + + + +
+`name` + +unique name of this `Exporter` that is going to be used in the +export path. +
+`serving_input_receiver_fn` + +a function that takes no arguments and returns +a `ServingInputReceiver`. +
+`assets_extra` + +An optional dict specifying how to populate the assets.extra +directory within the exported SavedModel. Each key should give the +destination path (including the filename) relative to the assets.extra +directory. The corresponding value gives the full path of the source +file to be copied. For example, the simple case of copying a single +file without renaming it is specified as +`{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. +
+`as_text` + +whether to write the SavedModel proto in text format. Defaults to +`False`. +
+ + + + + + + + + + + + +
+`ValueError` + +if any arguments is invalid. +
+ + + + + + + + + + + + + + +
+`name` + +Directory name. + +A directory name under the export base directory where exports of +this type are written. Should not be `None` nor empty. +
+ + + +## Methods + +
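
#### Example:

A minimal sketch of wiring a `FinalExporter` into `tf.estimator.train_and_evaluate`, assuming an `estimator`, `train_input_fn`, and `eval_input_fn` defined elsewhere; the feature name, shapes, and step counts are placeholders.

```python
import tensorflow as tf

def serving_input_receiver_fn():
  inputs = {"x": tf.compat.v1.placeholder(tf.float32, [None, 1], name="x")}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

# The export is written once, at the end of training, under
# <model_dir>/export/final/<timestamp>.
exporter = tf.estimator.FinalExporter(
    name="final",
    serving_input_receiver_fn=serving_input_receiver_fn)

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn, steps=100, exporters=[exporter])

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```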

export

+ +View source + + + +Exports the given `Estimator` to a specific format. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`estimator` + +the `Estimator` to export. +
+`export_path` + +A string containing a directory where to write the export. +
+`checkpoint_path` + +The checkpoint path to export. +
+`eval_result` + +The output of Estimator.evaluate on this checkpoint. +
+`is_the_final_export` + +This boolean is True when this is an export in the +end of training. It is False for the intermediate exports during the +training. When passing `Exporter` to tf.estimator.train_and_evaluate +`is_the_final_export` is always False if TrainSpec.max_steps is +`None`. +
+ + + + + + + + + + + +
Returns
+The string path to the exported directory or `None` if export is skipped. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/FinalOpsHook.md b/site/en/api_docs/python/tf/estimator/FinalOpsHook.md new file mode 100644 index 00000000000..aa213600b44 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/FinalOpsHook.md @@ -0,0 +1,300 @@ +description: A hook which evaluates Tensors at the end of a session. + +
+ + + + + + + + +
+ +# tf.estimator.FinalOpsHook + + + + + + + + + +A hook which evaluates `Tensors` at the end of a session. + +Inherits From: [`SessionRunHook`](../../tf/estimator/SessionRunHook.md) + + + + + + + + + + + + + + + + + + + + + + +
+`final_ops` + +A single `Tensor`, a list of `Tensors` or a dictionary of names +to `Tensors`. +
+`final_ops_feed_dict` + +A feed dictionary to use when running +`final_ops_dict`. +
+ + + + + + + + + + + + + + +
+`final_ops_values` + + +
+ + + +## Methods + +
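
#### Example:

A minimal, self-contained sketch of reading a tensor's final value through the hook in a `MonitoredSession` (graph mode), outside of an `Estimator`; inside a `model_fn` the hook would normally be returned via `evaluation_hooks`. The variable and key names are made up.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

counter = tf.compat.v1.get_variable("counter", initializer=0)
increment = tf.compat.v1.assign_add(counter, 1)

hook = tf.estimator.FinalOpsHook(final_ops={"final_count": counter})
with tf.compat.v1.train.MonitoredSession(hooks=[hook]) as sess:
  for _ in range(5):
    sess.run(increment)

print(hook.final_ops_values)  # {'final_count': 5}
```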

after_create_session

+ +View source + + + +Called when new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +has two essential differences with the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+ +View source + + + +Called after each call to run(). + +The `run_values` argument contains results of requested ops/tensors by +`before_run()`. + +The `run_context` argument is the same one send to `before_run` call. +`run_context.request_stop()` can be called to stop the iteration. + +If `session.run()` raises any exceptions then `after_run()` is not called. + + + + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the original run() call. +The run args you return can also contain feeds to be added to the run() +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +op/tensors, the TensorFlow Session. + +At this point graph is finalized and you can not add ops. + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +can not modify the graph anymore. Second call of `begin()` on the same +graph, should not change the graph. + +

end

+ +View source + + + +Called at the end of session. + +The `session` argument can be used in case the hook wants to run final ops, +such as saving a last checkpoint. + +If `session.run()` raises exception other than OutOfRangeError or +StopIteration then `end()` is not called. +Note the difference between `end()` and `after_run()` behavior when +`session.run()` raises OutOfRangeError or StopIteration. In that case +`end()` is called but `after_run()` is not called. + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that will be soon closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/GlobalStepWaiterHook.md b/site/en/api_docs/python/tf/estimator/GlobalStepWaiterHook.md new file mode 100644 index 00000000000..ff5d37111f2 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/GlobalStepWaiterHook.md @@ -0,0 +1,276 @@ +description: Delays execution until global step reaches wait_until_step. + +
+ + + + + + + + +
+ +# tf.estimator.GlobalStepWaiterHook + + + + + + + + + +Delays execution until global step reaches `wait_until_step`. + +Inherits From: [`SessionRunHook`](../../tf/estimator/SessionRunHook.md) + + + + + + + + + +This hook delays execution until global step reaches to `wait_until_step`. It +is used to gradually start workers in distributed settings. One example usage +would be setting `wait_until_step=int(K*log(task_id+1))` assuming that +task_id=0 is the chief. + + + + + + + + + + +
+`wait_until_step`
+
+an `int` specifying the global step until which to wait.
+
+ + + +## Methods + +
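
#### Example:

A small sketch of staggering worker start-up, following the `K*log(task_id+1)` pattern mentioned above; the task index and scale factor are placeholders, and the `estimator.train(...)` call is assumed to exist elsewhere.

```python
import math
import tensorflow as tf

task_id = 3  # hypothetical worker index taken from the cluster spec
K = 25       # hypothetical scale factor

hook = tf.estimator.GlobalStepWaiterHook(
    wait_until_step=int(K * math.log(task_id + 1)))

# Attach the hook to training so this worker idles until the chief and
# earlier workers have advanced the global step far enough:
# estimator.train(input_fn=input_fn, hooks=[hook])
```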

after_create_session

+ +View source + + + +Called when new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +has two essential differences with the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+
+View source
+
+
+
+Called after each call to run().
+
+The `run_values` argument contains the results of the ops/tensors requested
+by `before_run()`.
+
+The `run_context` argument is the same one sent to the `before_run` call.
+`run_context.request_stop()` can be called to stop the iteration.
+
+If `session.run()` raises any exceptions then `after_run()` is not called.
+
+ + + + + + + + + + + + 
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the original run() call. +The run args you return can also contain feeds to be added to the run() +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +op/tensors, the TensorFlow Session. + +At this point graph is finalized and you can not add ops. + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +can not modify the graph anymore. Second call of `begin()` on the same +graph, should not change the graph. + +

end

+
+View source
+
+
+
+Called at the end of the session.
+
+The `session` argument can be used in case the hook wants to run final ops,
+such as saving a last checkpoint.
+
+If `session.run()` raises an exception other than OutOfRangeError or
+StopIteration, then `end()` is not called.
+Note the difference between `end()` and `after_run()` behavior when
+`session.run()` raises OutOfRangeError or StopIteration. In that case
+`end()` is called but `after_run()` is not called.
+
+ + + + + + + + + 
Args
+`session` + +A TensorFlow Session that will be soon closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/Head.md b/site/en/api_docs/python/tf/estimator/Head.md new file mode 100644 index 00000000000..3d6d74a8937 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/Head.md @@ -0,0 +1,528 @@ +description: Interface for the head/top of a model. + +
+ + + + + + + +
+
+# tf.estimator.Head
+
+
+
+
+
+
+
+
+
+Interface for the head/top of a model.
+
+
+
+
+
+Head sits on top of the model network and handles computing the outputs of
+the network. Given logits (or output of a hidden layer), a Head knows how to
+compute predictions, loss, train_op, metrics and export outputs. It is meant
+to:
+
+1. Simplify writing model_fn and make model_fn more configurable for
+   Estimator.
+2. Simplify creating loss and metrics for the train and test loop in Eager
+   execution.
+3. Support a wide range of machine learning models. Since most heads can work
+   with logits, they can support DNN, RNN, Wide, Wide&Deep,
+   Global objectives, Gradient boosted trees and many other types
+   of machine learning models.
+
+#### Common usage:
+
+
+Here is a simplified model_fn to build a DNN regression model.
+  ```python
+  def _my_dnn_model_fn(features, labels, mode, params, config=None):
+    # Optionally your callers can pass head to model_fn as a param.
+    head = tf.estimator.RegressionHead(...)
+
+    feature_columns = tf.feature_column.numeric_column(...)
+    feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
+    inputs = feature_layer(features)
+
+    # Compute logits with tf.keras.layers API
+    hidden_layer0 = tf.keras.layers.Dense(
+        units=1000, activation="relu")(inputs)
+    hidden_layer1 = tf.keras.layers.Dense(
+        units=500, activation="relu")(hidden_layer0)
+    logits = tf.keras.layers.Dense(
+        units=head.logits_dimension, activation=None)(hidden_layer1)
+
+    # Or use Keras model for logits computation
+    model = tf.keras.Sequential()
+    model.add(tf.keras.layers.Dense(units=1000, activation="relu"))
+    model.add(tf.keras.layers.Dense(units=500, activation="relu"))
+    model.add(tf.keras.layers.Dense(
+        units=head.logits_dimension, activation=None))
+    logits = model(inputs)
+
+    return head.create_estimator_spec(
+        features=features,
+        labels=labels,
+        mode=mode,
+        logits=logits,
+        optimizer=optimizer)
+  ```
+
+ + + + + + + + + + + + + + + + + 
+`logits_dimension` + +Size of the last dimension of the logits `Tensor`. + +Often is the number of classes, labels, or real values to be predicted. +Typically, logits is of shape `[batch_size, logits_dimension]`. +
+`loss_reduction` + +One of tf.losses.Reduction. + +Describes how to reduce training loss over batch, such as mean or sum. +
+`name` + +The name of this head. +
+ + + +## Methods + +

create_estimator_spec

+ +View source + + + +Returns `EstimatorSpec` that a model_fn can return. + +It is recommended to pass all args via name. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`features` + +Input `dict` mapping string feature names to `Tensor` or +`SparseTensor` objects containing the values for that feature in a +minibatch. Often to be used to fetch example-weight tensor. +
+`mode` + +Estimator's `ModeKeys`. +
+`logits` + +Logits `Tensor` to be used by the head. +
+`labels` + +Labels `Tensor`, or `dict` mapping string label names to `Tensor` +objects of the label values. +
+`optimizer` + +An tf.keras.optimizers.Optimizer instance to optimize the +loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, +trainable_variables)`, which updates variables to minimize `loss`. +
+`trainable_variables` + +A list or tuple of `Variable` objects to update to +minimize `loss`. In Tensorflow 1.x, by default these are the list of +variables collected in the graph under the key +`GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have +collections and GraphKeys, trainable_variables need to be passed +explicitly here. +
+`train_op_fn` + +Function that takes a scalar loss `Tensor` and returns an op +to optimize the model with the loss in TRAIN mode. Used if `optimizer` +is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in +TRAIN mode. By default, it is `None` in other modes. If you want to +optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use +EstimatorSpec.loss to compute and apply gradients. +
+`update_ops` + +A list or tuple of update ops to be run at training time. For +example, layers such as BatchNormalization create mean and variance +update ops that need to be run at training time. In Tensorflow 1.x, +these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x +doesn't have collections, update_ops need to be passed explicitly here. +
+`regularization_losses` + +A list of additional scalar losses to be added to +the training loss, such as regularization losses. +
+ + + + + + + + + + + +
Returns
+`EstimatorSpec`. +
+ + + +

loss

+ +View source + + + +Returns a loss `Tensor` from provided arguments. + +Note that, the args of `features` and `mode` are most likely not used, but +some Head implementations may require them. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`labels` + +Labels `Tensor`, or `dict` mapping string label names to `Tensor` +objects of the label values. +
+`logits` + +Logits `Tensor` to be used for loss construction. +
+`features` + +Input `dict` mapping string feature names to `Tensor` or +`SparseTensor` objects containing the values for that feature in a +minibatch. Often to be used to fetch example-weight tensor. +
+`mode` + +Estimator's `ModeKeys`. To be used in case loss calculation is +different in Train and Eval mode. +
+`regularization_losses` + +A list of additional scalar losses to be added to +the training loss, such as regularization losses. +
+ + + + + + + + + + + +
Returns
+A scalar `Tensor` representing regularized training loss used in train and +eval. +
+ + + +
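As a standalone illustration (a sketch assuming a
`tf.estimator.RegressionHead` and eager execution):

```python
head = tf.estimator.RegressionHead(label_dimension=1)

labels = tf.constant([[1.0], [2.0]])
logits = tf.constant([[0.5], [2.5]])

# Scalar regularized training loss; for a regression head this is the mean
# squared error reduced according to the head's `loss_reduction`.
loss = head.loss(labels=labels, logits=logits)
```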

metrics

+ +View source + + + +Returns a `dict` of metric objects. + + + + + + + + + + + +
Args
+`regularization_losses` + +A list of additional scalar losses to be added to +the training loss, such as regularization losses. +
+ + + + + + + + + + + +
Returns
+A `dict` of metrics keyed by string name. The value is an instance of +`Metric` class. +
+ + + +
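A rough sketch of the intended evaluation flow, assuming a
`tf.estimator.BinaryClassHead` and eager execution (`update_metrics` is
documented below):

```python
head = tf.estimator.BinaryClassHead()

eval_metrics = head.metrics()  # Fresh, zero-initialized metric objects.

labels = tf.constant([[1.0], [0.0]])
logits = tf.constant([[2.0], [-1.5]])

# Feed one batch into the metric objects, then read out their results.
eval_metrics = head.update_metrics(
    eval_metrics, features={}, logits=logits, labels=labels)
for name, metric in eval_metrics.items():
  print(name, metric.result().numpy())
```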

predictions

+ +View source + + + +Returns a `dict` of predictions from provided logits. + + + + + + + + + + + + + + +
Args
+`logits` + +Logits `Tensor` to be used for prediction construction. +
+`keys` + +A list of `string` for prediction keys. Defaults to `None`, meaning +if not specified, predictions will be created for all the pre-defined +valid keys in the head. +
+ + + + + + + + + + + +
Returns
+A `dict` of predicted `Tensor` keyed by prediction name. +
+ + + +
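For instance (a sketch assuming a `tf.estimator.BinaryClassHead`; the exact
prediction keys depend on the head implementation):

```python
head = tf.estimator.BinaryClassHead()
logits = tf.constant([[2.0], [-1.5]])

# All pre-defined keys for this head; a binary head typically includes
# 'logits', 'logistic', 'probabilities', 'class_ids' and 'classes'.
preds = head.predictions(logits)

# Or restrict the output to selected keys.
probs_only = head.predictions(logits, keys=['probabilities'])
```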

update_metrics

+ +View source + + + +Updates metric objects and returns a `dict` of the updated metrics. + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`eval_metrics` + +A `dict` of metrics to be updated. +
`features`

Input `dict` mapping string feature names to `Tensor` or
`SparseTensor` objects containing the values for that feature in a
minibatch. Often used to fetch the example-weight tensor.
`logits`

Logits `Tensor` to be used for the metrics update.
+`labels` + +Labels `Tensor`, or `dict` mapping string label names to `Tensor` +objects of the label values. +
+`mode` + +Estimator's `ModeKeys`. In most cases, this arg is not used and can +be removed in the method implementation. +
`regularization_losses`

A list of additional scalar losses to be added to
the training and evaluation loss, such as regularization losses. Note
that the `mode` arg is not used in `tf.estimator.*Head`. If the
update of the metrics doesn't rely on `mode`, it can be safely ignored
in the method signature.
+ + + + + + + + + + + +
Returns
+A `dict` of updated metrics keyed by name. The value is an instance of +`Metric` class. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/LatestExporter.md b/site/en/api_docs/python/tf/estimator/LatestExporter.md new file mode 100644 index 00000000000..f3524a6206c --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/LatestExporter.md @@ -0,0 +1,227 @@ +description: This class regularly exports the serving graph and checkpoints. + +
+ + + + +
+ +# tf.estimator.LatestExporter + + + + + + + + + +This class regularly exports the serving graph and checkpoints. + +Inherits From: [`Exporter`](../../tf/estimator/Exporter.md) + + + + + + + + + +In addition to exporting, this class also garbage collects stale exports. + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +unique name of this `Exporter` that is going to be used in the +export path. +
+`serving_input_receiver_fn` + +a function that takes no arguments and returns +a `ServingInputReceiver`. +
+`assets_extra` + +An optional dict specifying how to populate the assets.extra +directory within the exported SavedModel. Each key should give the +destination path (including the filename) relative to the assets.extra +directory. The corresponding value gives the full path of the source +file to be copied. For example, the simple case of copying a single +file without renaming it is specified as +`{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. +
+`as_text` + +whether to write the SavedModel proto in text format. Defaults to +`False`. +
+`exports_to_keep` + +Number of exports to keep. Older exports will be +garbage-collected. Defaults to 5. Set to `None` to disable garbage +collection. +
+ + + + + + + + + + + + +
`ValueError`

if any argument is invalid.
+ + + + + + + + + + + + + + +
`name`

Directory name.

A directory name under the export base directory where exports of
this type are written. Should not be `None` or empty.
+ + + +## Methods + +

export

+ +View source + + + +Exports the given `Estimator` to a specific format. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`estimator` + +the `Estimator` to export. +
+`export_path` + +A string containing a directory where to write the export. +
+`checkpoint_path` + +The checkpoint path to export. +
+`eval_result` + +The output of Estimator.evaluate on this checkpoint. +
`is_the_final_export`

This boolean is True when this is an export at the
end of training. It is False for the intermediate exports during
training. When passing `Exporter` to tf.estimator.train_and_evaluate,
`is_the_final_export` is always False if TrainSpec.max_steps is
`None`.
+ + + + + + + + + + + +
Returns
+The string path to the exported directory or `None` if export is skipped. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/LinearClassifier.md b/site/en/api_docs/python/tf/estimator/LinearClassifier.md new file mode 100644 index 00000000000..2372777dcc2 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/LinearClassifier.md @@ -0,0 +1,1129 @@ +description: Linear classifier model. + +
+ + + + + + + + + + + + +
# tf.estimator.LinearClassifier

Linear classifier model.

Inherits From: [`Estimator`](../../tf/estimator/Estimator.md)

Train a linear model to classify instances into one of multiple possible
classes. When the number of possible classes is 2, this is binary
classification.

#### Example:

```python
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)

categorical_feature_a_x_categorical_feature_b = crossed_column(...)

# Estimator using the default optimizer.
estimator = tf.estimator.LinearClassifier(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b])

# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearClassifier(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=tf.keras.optimizers.Ftrl(
        learning_rate=0.1,
        l1_regularization_strength=0.001
    ))

# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearClassifier(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=lambda: tf.keras.optimizers.Ftrl(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))

# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.LinearClassifier(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    warm_start_from="/path/to/checkpoint/dir")


# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
```

Input of `train` and `evaluate` should have the following features,
otherwise there will be a `KeyError`:

* if `weight_column` is not `None`, a feature with `key=weight_column` whose
  value is a `Tensor`.
* for each `column` in `feature_columns`:
  - if `column` is a `SparseColumn`, a feature with `key=column.name`
    whose `value` is a `SparseTensor`.
  - if `column` is a `WeightedSparseColumn`, two features: the first with
    `key` the id column name, the second with `key` the weight column name.
    Both features' `value` must be a `SparseTensor`.
  - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
    whose `value` is a `Tensor`.

Loss is calculated by using softmax cross entropy.
+`feature_columns` + +An iterable containing all the feature columns used by +the model. All items in the set should be instances of classes derived +from `FeatureColumn`. +
`model_dir`

Directory to save model parameters, graph, etc. This can
also be used to load checkpoints from the directory into an estimator to
continue training a previously saved model.
+`n_classes` + +number of label classes. Default is binary classification. Note +that class labels are integers representing the class index (i.e. values +from 0 to n_classes-1). For arbitrary label values (e.g. string labels), +convert to class indices first. +
`weight_column`

A string or a `_NumericColumn` created by
tf.feature_column.numeric_column defining the feature column representing
weights. It is used to down-weight or boost examples during training. It
will be multiplied by the loss of the example. If it is a string, it is
used as a key to fetch the weight tensor from the `features`. If it is a
`_NumericColumn`, the raw tensor is fetched by key `weight_column.key`, then
weight_column.normalizer_fn is applied on it to get the weight tensor.
`label_vocabulary`

A list of strings representing possible label values. If
given, labels must be of string type and have any value in
`label_vocabulary`. If it is not given, that means labels are already
encoded as integer or float within [0, 1] for `n_classes=2` and encoded
as integer values in {0, 1,..., n_classes-1} for `n_classes` > 2. Also,
there will be errors if the vocabulary is not provided and labels are
strings.
+`optimizer` + +An instance of `tf.keras.optimizers.*` or +tf.estimator.experimental.LinearSDCA used to train the model. Can also +be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or +callable. Defaults to FTRL optimizer. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+`warm_start_from` + +A string filepath to a checkpoint to warm-start from, or +a `WarmStartSettings` object to fully configure warm-starting. If the +string filepath is provided instead of a `WarmStartSettings`, then all +weights and biases are warm-started, and it is assumed that vocabularies +and Tensor names are unchanged. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Describes how +to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. +
`sparse_combiner`

A string specifying how to reduce if a categorical column
is multivalent. One of "mean", "sqrtn", and "sum" -- these are
effectively different ways to do example-level normalization, which can
be useful for bag-of-words features. For more details, see
`tf.feature_column.linear_model`.
+ + + + + + + + + + + + +
+`ValueError` + +if n_classes < 2. +
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
A string which is the path of the directory that contains evaluation metrics.
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
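For example, a minimal sketch of an evaluation `input_fn` built with tf.data
(the feature name `'age'` and the in-memory data are hypothetical and must
match the estimator's feature columns):

```python
def input_fn_eval():
  features = {"age": [22.0, 35.0, 58.0, 41.0]}
  labels = [0, 1, 1, 0]
  # Evaluation runs until this dataset is exhausted when `steps` is None.
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

metrics = estimator.evaluate(input_fn=input_fn_eval)
print(metrics["average_loss"], metrics["accuracy"], metrics["global_step"])
```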

experimental_export_all_saved_models

View source

Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.

For each mode passed in via the `input_receiver_fn_map`,
this method builds a new graph by calling the `input_receiver_fn` to obtain
feature and label `Tensor`s. Next, this method calls the `Estimator`'s
`model_fn` in the passed mode to generate the model graph based on
those features and labels, and restores the given checkpoint
(or, lacking that, the most recent checkpoint) into the graph.
Only one of the modes is used for saving variables to the `SavedModel`
(order of preference: tf.estimator.ModeKeys.TRAIN,
tf.estimator.ModeKeys.EVAL, then
tf.estimator.ModeKeys.PREDICT), such that up to three
`tf.MetaGraphDefs` are saved with a single set of variables in a single
`SavedModel` directory.

For the variables and `tf.MetaGraphDefs`, this method creates a timestamped
export directory below `export_dir_base`, and writes a `SavedModel` into it
containing the `tf.MetaGraphDef` for the given mode and its associated
signatures.

For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
for each element of the `export_outputs` dict returned from the `model_fn`,
named using the same keys. One of these keys is always
`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
indicating which signature will be served when a serving request does not
specify one. For each signature, the outputs are provided by the
corresponding tf.estimator.export.ExportOutputs, and the inputs are always
the input receivers provided by the `serving_input_receiver_fn`.

For training and evaluation, the `train_op` is stored in an extra
collection, and loss, metrics, and predictions are included in a
`SignatureDef` for the mode in question.

Extra assets may be written into the `SavedModel` via the `assets_extra`
argument. This should be a dict, where each key gives a destination path
(including the filename) relative to the assets.extra directory. The
corresponding value gives the full path of the source file to be copied.
For example, the simple case of copying a single file without renaming it
is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
`experimental_mode`

tf.estimator.ModeKeys value indicating which mode will
be exported. Note that this feature is experimental.
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
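A typical export sketch, assuming `feature_columns` is the same list the
estimator was constructed with and the export path is arbitrary:

```python
# Parse serialized tf.Example protos with the training-time feature columns.
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

export_dir = estimator.export_saved_model(
    export_dir_base="/tmp/linear_classifier_export",
    serving_input_receiver_fn=serving_input_receiver_fn)
# `export_dir` is a timestamped subdirectory, returned as a bytes object.
```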

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of string, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
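A small sketch for inspecting learned weights once a checkpoint exists (the
exact variable names are an implementation detail and may differ):

```python
# List every variable stored in the latest checkpoint.
for name in estimator.get_variable_names():
  print(name)

# For a linear model the bias is typically stored under a name similar to
# 'linear/linear_model/bias_weights'.
bias = estimator.get_variable_value("linear/linear_model/bias_weights")
print(bias)
```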

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
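For example (a sketch; the feature name `'age'` is hypothetical and the
available prediction keys depend on the head):

```python
def input_fn_predict():
  # Unlabeled examples keyed by feature column name.
  return tf.data.Dataset.from_tensor_slices({"age": [30.0, 52.0]}).batch(1)

for pred in estimator.predict(input_fn=input_fn_predict):
  # Canned classifiers typically expose 'class_ids', 'probabilities' and
  # 'logits' in each prediction dict.
  print(pred["class_ids"], pred["probabilities"])
```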

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
`steps`

Number of steps for which to train the model. If `None`, train
forever or train until `input_fn` generates the `tf.errors.OutOfRange`
error or `StopIteration` exception. `steps` works incrementally. If you
call `train(steps=10)` twice, training occurs for a total of 20 steps.
If `OutOfRange` or `StopIteration` occurs in the middle, training stops
before 20 steps. If you don't want incremental behavior, please
set `max_steps` instead. If set, `max_steps` must be `None`.
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/LinearEstimator.md b/site/en/api_docs/python/tf/estimator/LinearEstimator.md new file mode 100644 index 00000000000..5e7104f8e12 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/LinearEstimator.md @@ -0,0 +1,1055 @@ +description: An estimator for TensorFlow linear models with user-specified head. + +
+ + + + + + + + + + + + +
# tf.estimator.LinearEstimator

An estimator for TensorFlow linear models with user-specified head.

Inherits From: [`Estimator`](../../tf/estimator/Estimator.md)

#### Example:

```python
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)

categorical_feature_a_x_categorical_feature_b = crossed_column(...)

# Estimator using the default optimizer.
estimator = tf.estimator.LinearEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b])

# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=lambda: tf.keras.optimizers.Ftrl(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))

# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=tf.keras.optimizers.Ftrl(
        learning_rate=0.1,
        l1_regularization_strength=0.001
    ))

def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
```

Input of `train` and `evaluate` should have the following features,
otherwise there will be a `KeyError`:

* if `weight_column` is not `None`, a feature with `key=weight_column` whose
  value is a `Tensor`.
* for each `column` in `feature_columns`:
  - if `column` is a `CategoricalColumn`, a feature with `key=column.name`
    whose `value` is a `SparseTensor`.
  - if `column` is a `WeightedCategoricalColumn`, two features: the first
    with `key` the id column name, the second with `key` the weight column
    name. Both features' `value` must be a `SparseTensor`.
  - if `column` is a `DenseColumn`, a feature with `key=column.name`
    whose `value` is a `Tensor`.

Loss and predicted output are determined by the specified head.
+`head` + +A `Head` instance constructed with a method such as +tf.estimator.MultiLabelHead. +
+`feature_columns` + +An iterable containing all the feature columns used by +the model. All items in the set should be instances of classes derived +from `FeatureColumn`. +
`model_dir`

Directory to save model parameters, graph, etc. This can
also be used to load checkpoints from the directory into an estimator to
continue training a previously saved model.
+`optimizer` + +An instance of `tf.keras.optimizers.*` used to train the model. +Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', +'SGD'), or callable. Defaults to FTRL optimizer. +
+`config` + +`RunConfig` object to configure the runtime settings. +
`sparse_combiner`

A string specifying how to reduce if a categorical column
is multivalent. One of "mean", "sqrtn", and "sum" -- these are
effectively different ways to do example-level normalization, which can
be useful for bag-of-words features. For more details, see
`tf.feature_column.linear_model`.
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
A string which is the path of the directory that contains evaluation metrics.
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

View source

Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.

For each mode passed in via the `input_receiver_fn_map`,
this method builds a new graph by calling the `input_receiver_fn` to obtain
feature and label `Tensor`s. Next, this method calls the `Estimator`'s
`model_fn` in the passed mode to generate the model graph based on
those features and labels, and restores the given checkpoint
(or, lacking that, the most recent checkpoint) into the graph.
Only one of the modes is used for saving variables to the `SavedModel`
(order of preference: tf.estimator.ModeKeys.TRAIN,
tf.estimator.ModeKeys.EVAL, then
tf.estimator.ModeKeys.PREDICT), such that up to three
`tf.MetaGraphDefs` are saved with a single set of variables in a single
`SavedModel` directory.

For the variables and `tf.MetaGraphDefs`, this method creates a timestamped
export directory below `export_dir_base`, and writes a `SavedModel` into it
containing the `tf.MetaGraphDef` for the given mode and its associated
signatures.

For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
for each element of the `export_outputs` dict returned from the `model_fn`,
named using the same keys. One of these keys is always
`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
indicating which signature will be served when a serving request does not
specify one. For each signature, the outputs are provided by the
corresponding tf.estimator.export.ExportOutputs, and the inputs are always
the input receivers provided by the `serving_input_receiver_fn`.

For training and evaluation, the `train_op` is stored in an extra
collection, and loss, metrics, and predictions are included in a
`SignatureDef` for the mode in question.

Extra assets may be written into the `SavedModel` via the `assets_extra`
argument. This should be a dict, where each key gives a destination path
(including the filename) relative to the assets.extra directory. The
corresponding value gives the full path of the source file to be copied.
For example, the simple case of copying a single file without renaming it
is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
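A minimal sketch of a predict-only export (the export path is arbitrary and
`feature_columns` is assumed to be the list used to build the estimator;
TRAIN/EVAL entries could be added to the map with supervised input
receivers):

```python
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

export_dir = estimator.experimental_export_all_saved_models(
    export_dir_base="/tmp/linear_estimator_export",
    input_receiver_fn_map={
        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
    })
```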

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
`experimental_mode`

tf.estimator.ModeKeys value indicating which mode will
be exported. Note that this feature is experimental.
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of string, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
`steps`

Number of steps for which to train the model. If `None`, train
forever or train until `input_fn` generates the `tf.errors.OutOfRange`
error or `StopIteration` exception. `steps` works incrementally. If you
call `train(steps=10)` twice, training occurs for a total of 20 steps.
If `OutOfRange` or `StopIteration` occurs in the middle, training stops
before 20 steps. If you don't want incremental behavior, please
set `max_steps` instead. If set, `max_steps` must be `None`.
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/LinearRegressor.md b/site/en/api_docs/python/tf/estimator/LinearRegressor.md new file mode 100644 index 00000000000..0f7f5302a75 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/LinearRegressor.md @@ -0,0 +1,1098 @@ +description: An estimator for TensorFlow Linear regression problems. + +
+ + + + + + + + + + + + +
# tf.estimator.LinearRegressor

An estimator for TensorFlow Linear regression problems.

Inherits From: [`Estimator`](../../tf/estimator/Estimator.md)

Train a linear regression model to predict a label value given observations
of feature values.

#### Example:

```python
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)

categorical_feature_a_x_categorical_feature_b = crossed_column(...)

# Estimator using the default optimizer.
estimator = tf.estimator.LinearRegressor(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b])

# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearRegressor(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=tf.keras.optimizers.Ftrl(
        learning_rate=0.1,
        l1_regularization_strength=0.001
    ))

# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearRegressor(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=lambda: tf.keras.optimizers.Ftrl(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))

# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.LinearRegressor(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    warm_start_from="/path/to/checkpoint/dir")


# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents the label's
  # value.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents the label's
  # value.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
```

Input of `train` and `evaluate` should have the following features,
otherwise there will be a `KeyError`:

* if `weight_column` is not `None`, a feature with `key=weight_column` whose
  value is a `Tensor`.
* for each `column` in `feature_columns`:
  - if `column` is a `SparseColumn`, a feature with `key=column.name`
    whose `value` is a `SparseTensor`.
  - if `column` is a `WeightedSparseColumn`, two features: the first with
    `key` the id column name, the second with `key` the weight column name.
    Both features' `value` must be a `SparseTensor`.
  - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
    whose `value` is a `Tensor`.

Loss is calculated by using mean squared error.
+`feature_columns` + +An iterable containing all the feature columns used by +the model. All items in the set should be instances of classes derived +from `FeatureColumn`. +
`model_dir`

Directory to save model parameters, graph, etc. This can
also be used to load checkpoints from the directory into an estimator to
continue training a previously saved model.
+`label_dimension` + +Number of regression targets per example. This is the +size of the last dimension of the labels and logits `Tensor` objects +(typically, these have shape `[batch_size, label_dimension]`). +
`weight_column`

A string or a `NumericColumn` created by
tf.feature_column.numeric_column defining the feature column representing
weights. It is used to down-weight or boost examples during training. It
will be multiplied by the loss of the example. If it is a string, it is
used as a key to fetch the weight tensor from the `features`. If it is a
`NumericColumn`, the raw tensor is fetched by key `weight_column.key`, then
weight_column.normalizer_fn is applied on it to get the weight tensor.
+`optimizer` + +An instance of `tf.keras.optimizers.*` or +tf.estimator.experimental.LinearSDCA used to train the model. Can also +be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or +callable. Defaults to FTRL optimizer. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+`warm_start_from` + +A string filepath to a checkpoint to warm-start from, or +a `WarmStartSettings` object to fully configure warm-starting. If the +string filepath is provided instead of a `WarmStartSettings`, then all +weights and biases are warm-started, and it is assumed that vocabularies +and Tensor names are unchanged. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Describes how +to reduce training loss over batch. Defaults to `SUM`. +
`sparse_combiner`

A string specifying how to reduce if a categorical column
is multivalent. One of "mean", "sqrtn", and "sum" -- these are
effectively different ways to do example-level normalization, which can
be useful for bag-of-words features. For more details, see
`tf.feature_column.linear_model`.
+ + + +#### Eager Compatibility +Estimators can be used while eager execution is enabled. Note that `input_fn` +and all hooks are executed inside a graph context, so they have to be written +to be compatible with graph mode. Note that `input_fn` code using tf.data +generally works in both graph and eager modes. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`export_savedmodel` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
A string which is the path of the directory that contains evaluation metrics.
+ + + +
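For example (a sketch; `input_fn_eval_train` and `input_fn_eval_holdout` are
hypothetical input functions for two different datasets):

```python
# Named evaluations keep their metrics in separate sub-directories, so they
# show up as separate runs in TensorBoard.
estimator.evaluate(input_fn=input_fn_eval_train, name="train")
estimator.evaluate(input_fn=input_fn_eval_holdout, name="holdout")

print(estimator.eval_dir(name="holdout"))  # e.g. '<model_dir>/eval_holdout'
```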

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

View source

Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode.

For each mode passed in via the `input_receiver_fn_map`,
this method builds a new graph by calling the `input_receiver_fn` to obtain
feature and label `Tensor`s. Next, this method calls the `Estimator`'s
`model_fn` in the passed mode to generate the model graph based on
those features and labels, and restores the given checkpoint
(or, lacking that, the most recent checkpoint) into the graph.
Only one of the modes is used for saving variables to the `SavedModel`
(order of preference: tf.estimator.ModeKeys.TRAIN,
tf.estimator.ModeKeys.EVAL, then
tf.estimator.ModeKeys.PREDICT), such that up to three
`tf.MetaGraphDefs` are saved with a single set of variables in a single
`SavedModel` directory.

For the variables and `tf.MetaGraphDefs`, this method creates a timestamped
export directory below `export_dir_base`, and writes a `SavedModel` into it
containing the `tf.MetaGraphDef` for the given mode and its associated
signatures.

For prediction, the exported `MetaGraphDef` will provide one `SignatureDef`
for each element of the `export_outputs` dict returned from the `model_fn`,
named using the same keys. One of these keys is always
`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`,
indicating which signature will be served when a serving request does not
specify one. For each signature, the outputs are provided by the
corresponding tf.estimator.export.ExportOutputs, and the inputs are always
the input receivers provided by the `serving_input_receiver_fn`.

For training and evaluation, the `train_op` is stored in an extra
collection, and loss, metrics, and predictions are included in a
`SignatureDef` for the mode in question.

Extra assets may be written into the `SavedModel` via the `assets_extra`
argument. This should be a dict, where each key gives a destination path
(including the filename) relative to the assets.extra directory. The
corresponding value gives the full path of the source file to be copied.
For example, the simple case of copying a single file without renaming it
is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
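+For instance, a minimal sketch (the names below are assumptions, not from the
+original docs): `my_estimator` is a trained estimator, and the two receiver
+functions are hypothetical callables returning the appropriate `InputReceiver`s.
+
+```python
+# Hypothetical receiver functions; e.g. the serving one could be built with
+# tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec).
+input_receiver_fn_map = {
+    tf.estimator.ModeKeys.EVAL: eval_input_receiver_fn,
+    tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
+}
+export_path = my_estimator.experimental_export_all_saved_models(
+    export_dir_base='/tmp/all_modes_export',
+    input_receiver_fn_map=input_receiver_fn_map)
+print(export_path)  # bytes path of the timestamped export directory
+```
+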

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode`
+
+tf.estimator.ModeKeys value indicating which mode will
+be exported. Note that this feature is experimental.
+
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
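+A minimal export sketch (assumptions: `my_estimator` is a trained estimator and
+`feature_spec` is a parsing spec you have defined for the serialized
+`tf.Example` inputs):
+
+```python
+# `feature_spec` is a hypothetical dict of feature name ->
+# tf.io.FixedLenFeature / tf.io.VarLenFeature.
+serving_input_receiver_fn = (
+    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
+export_path = my_estimator.export_saved_model(
+    export_dir_base='/tmp/saved_model_export',
+    serving_input_receiver_fn=serving_input_receiver_fn)
+print(export_path)  # bytes path to the timestamped SavedModel directory
+```
+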

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name`
+
+string or a list of strings, name of the tensor.
+
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +
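+For example, the two methods above can be combined to inspect trained weights
+(a sketch assuming a hypothetical `my_estimator` that has already produced a
+checkpoint):
+
+```python
+# Requires at least one checkpoint in `model_dir`, i.e. training has run.
+for var_name in my_estimator.get_variable_names():
+    value = my_estimator.get_variable_value(var_name)  # numpy array
+    print(var_name, value.shape)
+```
+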

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +
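+For instance (again assuming a hypothetical `my_estimator`):
+
+```python
+ckpt = my_estimator.latest_checkpoint()
+if ckpt is None:
+    print('No checkpoint found in', my_estimator.model_dir)
+else:
+    print('Latest checkpoint:', ckpt)
+```
+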

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn`
+
+A function that constructs the features. Prediction continues
+until `input_fn` raises an end-of-input exception
+(tf.errors.OutOfRangeError or `StopIteration`). See [Premade
+Estimators](
+https://tensorflow.org/guide/premade_estimators#create_input_functions)
+for more information. The function should construct and return one of
+the following:
+* A tf.data.Dataset object -- Outputs of the `Dataset` object must have the
+same constraints as below.
+* features -- A tf.Tensor or a dictionary of string feature name to
+`Tensor`. features are consumed by `model_fn`. They should satisfy
+the expectation of `model_fn` from inputs.
+* A tuple, in which case the first item is extracted as features.
+
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
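+A small prediction loop might look like the sketch below; `my_estimator` and
+`predict_input_fn` are assumed to be defined elsewhere, and the available keys
+depend on what your `model_fn` puts into `EstimatorSpec.predictions`.
+
+```python
+# `predict_input_fn` is a hypothetical input_fn yielding features only.
+for pred_dict in my_estimator.predict(input_fn=predict_input_fn):
+    # One dict per example by default (yield_single_examples=True).
+    print(pred_dict)
+```
+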

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps`
+
+Number of steps for which to train the model. If `None`, train
+forever or until `input_fn` generates the `tf.errors.OutOfRange` error
+or `StopIteration` exception. `steps` works incrementally: if you
+call `train(steps=10)` twice, training occurs for a total of 20 steps.
+If `OutOfRange` or `StopIteration` occurs in the middle, training stops
+before 20 steps. If you don't want this incremental behavior, set
+`max_steps` instead. If set, `max_steps` must be `None`.
+
+`max_steps`
+
+Number of total steps for which to train the model. If `None`,
+train forever or until `input_fn` generates the
+`tf.errors.OutOfRange` error or `StopIteration` exception. If set,
+`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the
+middle, training stops before `max_steps` steps. Two calls to
+`train(steps=100)` mean 200 training iterations. On the other hand, two
+calls to `train(max_steps=100)` mean that the second call will not do
+any iteration since the first call already did all 100 steps.
+
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError`
+
+If either `steps` or `max_steps` is `<= 0`.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/LoggingTensorHook.md b/site/en/api_docs/python/tf/estimator/LoggingTensorHook.md new file mode 100644 index 00000000000..b1f918b8531 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/LoggingTensorHook.md @@ -0,0 +1,332 @@ +description: Prints the given tensors every N local steps, every N seconds, or at end. + +
+ + + + + + + + +
+ +# tf.estimator.LoggingTensorHook + + + + + + + + + +Prints the given tensors every N local steps, every N seconds, or at end. + +Inherits From: [`SessionRunHook`](../../tf/estimator/SessionRunHook.md) + + + + + + + + + +The tensors will be printed to the log, with `INFO` severity. If you are not +seeing the logs, you might want to add the following line after your imports: + +```python + tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO) +``` + +Note that if `at_end` is True, `tensors` should not include any tensor +whose evaluation produces a side effect such as consuming additional inputs. + + + + + + + + + + + + + + + + + + + + + + +
+`tensors` + +`dict` that maps string-valued tags to tensors/tensor names, or +`iterable` of tensors/tensor names. +
+`every_n_iter` + +`int`, print the values of `tensors` once every N local +steps taken on the current worker. +
+`every_n_secs` + +`int` or `float`, print the values of `tensors` once every N +seconds. Exactly one of `every_n_iter` and `every_n_secs` should be +provided. +
+`at_end` + +`bool` specifying whether to print the values of `tensors` at the +end of the run. +
+`formatter` + +function, takes dict of `tag`->`Tensor` and returns a string. +If `None` uses default printing all tensors. +
+ + + + + + + + + + + + +
+`ValueError` + +if `every_n_iter` is non-positive. +
+ + + +## Methods + +

after_create_session

+ +View source + + + +Called when new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +has two essential differences with the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+ +View source + + + +Called after each call to run(). + +The `run_values` argument contains results of requested ops/tensors by +`before_run()`. + +The `run_context` argument is the same one send to `before_run` call. +`run_context.request_stop()` can be called to stop the iteration. + +If `session.run()` raises any exceptions then `after_run()` is not called. + + + + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the original run() call. +The run args you return can also contain feeds to be added to the run() +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +op/tensors, the TensorFlow Session. + +At this point graph is finalized and you can not add ops. + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +can not modify the graph anymore. Second call of `begin()` on the same +graph, should not change the graph. + +

end

+ +View source + + + +Called at the end of session. + +The `session` argument can be used in case the hook wants to run final ops, +such as saving a last checkpoint. + +If `session.run()` raises exception other than OutOfRangeError or +StopIteration then `end()` is not called. +Note the difference between `end()` and `after_run()` behavior when +`session.run()` raises OutOfRangeError or StopIteration. In that case +`end()` is called but `after_run()` is not called. + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that will be soon closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/LogisticRegressionHead.md b/site/en/api_docs/python/tf/estimator/LogisticRegressionHead.md new file mode 100644 index 00000000000..2c59e84385f --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/LogisticRegressionHead.md @@ -0,0 +1,387 @@ +description: Creates a Head for logistic regression. + +
+ + + + + + + + +
+ +# tf.estimator.LogisticRegressionHead + + + + + + + + + +Creates a `Head` for logistic regression. + +Inherits From: [`RegressionHead`](../../tf/estimator/RegressionHead.md) + + + + + + + + + +Uses `sigmoid_cross_entropy_with_logits` loss, which is the same as +`BinaryClassHead`. The differences compared to `BinaryClassHead` are: + +* Does not support `label_vocabulary`. Instead, labels must be float in the + range [0, 1]. +* Does not calculate some metrics that do not make sense, such as AUC. +* In `PREDICT` mode, only returns logits and predictions + (`=tf.sigmoid(logits)`), whereas `BinaryClassHead` also returns + probabilities, classes, and class_ids. +* Export output defaults to `RegressionOutput`, whereas `BinaryClassHead` + defaults to `PredictOutput`. + +The head expects `logits` with shape `[D0, D1, ... DN, 1]`. +In many applications, the shape is `[batch_size, 1]`. + +The `labels` shape must match `logits`, namely +`[D0, D1, ... DN]` or `[D0, D1, ... DN, 1]`. + +If `weight_column` is specified, weights must be of shape +`[D0, D1, ... DN]` or `[D0, D1, ... DN, 1]`. + +This is implemented as a generalized linear model, see +https://en.wikipedia.org/wiki/Generalized_linear_model. + +The head can be used with a canned estimator. Example: + +```python +my_head = tf.estimator.LogisticRegressionHead() +my_estimator = tf.estimator.DNNEstimator( + head=my_head, + hidden_units=..., + feature_columns=...) +``` + +It can also be used with a custom `model_fn`. Example: + +```python +def _my_model_fn(features, labels, mode): + my_head = tf.estimator.LogisticRegressionHead() + logits = tf.keras.Model(...)(features) + + return my_head.create_estimator_spec( + features=features, + mode=mode, + labels=labels, + optimizer=tf.keras.optimizers.Adagrad(lr=0.1), + logits=logits) + +my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn) +``` + + + + + + + + + + + + + + + + +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Decides how to +reduce training loss over batch and label dimension. Defaults to +`SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch +size * label_dimension`. +
+`name` + +name of the head. If provided, summary and metrics keys will be +suffixed by `"/" + name`. Also used as `name_scope` when creating ops. +
+ + + + + + + + + + + + + + + + + + + + +
+`logits_dimension` + +See `base_head.Head` for details. +
+`loss_reduction` + +See `base_head.Head` for details. +
+`name` + +See `base_head.Head` for details. +
+ + + +## Methods + +

create_estimator_spec

+ +View source + + + +Returns `EstimatorSpec` that a model_fn can return. + +It is recommended to pass all args via name. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`features` + +Input `dict` mapping string feature names to `Tensor` or +`SparseTensor` objects containing the values for that feature in a +minibatch. Often to be used to fetch example-weight tensor. +
+`mode` + +Estimator's `ModeKeys`. +
+`logits` + +Logits `Tensor` to be used by the head. +
+`labels` + +Labels `Tensor`, or `dict` mapping string label names to `Tensor` +objects of the label values. +
+`optimizer` + +An tf.keras.optimizers.Optimizer instance to optimize the +loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, +trainable_variables)`, which updates variables to minimize `loss`. +
+`trainable_variables` + +A list or tuple of `Variable` objects to update to +minimize `loss`. In Tensorflow 1.x, by default these are the list of +variables collected in the graph under the key +`GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have +collections and GraphKeys, trainable_variables need to be passed +explicitly here. +
+`train_op_fn` + +Function that takes a scalar loss `Tensor` and returns an op +to optimize the model with the loss in TRAIN mode. Used if `optimizer` +is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in +TRAIN mode. By default, it is `None` in other modes. If you want to +optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use +EstimatorSpec.loss to compute and apply gradients. +
+`update_ops` + +A list or tuple of update ops to be run at training time. For +example, layers such as BatchNormalization create mean and variance +update ops that need to be run at training time. In Tensorflow 1.x, +these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x +doesn't have collections, update_ops need to be passed explicitly here. +
+`regularization_losses` + +A list of additional scalar losses to be added to +the training loss, such as regularization losses. +
+ + + + + + + + + + + +
Returns
+`EstimatorSpec`. +
+ + + +

loss

+ +View source + + + +Return predictions based on keys. See `base_head.Head` for details. + + +

metrics

+ +View source + + + +Creates metrics. See `base_head.Head` for details. + + +

predictions

+ +View source + + + +Return predictions based on keys. + +See `base_head.Head` for details. + + + + + + + + + + +
Args
+`logits` + +logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. +For many applications, the shape is `[batch_size, logits_dimension]`. +
+ + + + + + + + + + + +
Returns
+A dict of predictions. +
+ + + +
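+As an illustrative sketch (the numbers are made up), the head can be exercised
+eagerly through `loss` and `predictions`:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+head = tf.estimator.LogisticRegressionHead()
+logits = np.array([[-1.0], [2.0]], dtype=np.float32)
+labels = np.array([[0.25], [0.9]], dtype=np.float32)  # floats in [0, 1]
+features = {'x': np.array([[1.0], [2.0]], dtype=np.float32)}
+
+loss = head.loss(labels, logits, features=features)
+preds = head.predictions(logits)
+print(loss.numpy())                  # scalar sigmoid cross-entropy loss
+print(preds['predictions'].numpy())  # tf.sigmoid(logits)
+```
+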

update_metrics

+ +View source + + + +Updates eval metrics. See `base_head.Head` for details. + + + + diff --git a/site/en/api_docs/python/tf/estimator/ModeKeys.md b/site/en/api_docs/python/tf/estimator/ModeKeys.md new file mode 100644 index 00000000000..4a44beeb25e --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/ModeKeys.md @@ -0,0 +1,51 @@ +description: Standard names for Estimator model modes. + +
+ + + + + +
+ +# tf.estimator.ModeKeys + + + + + + + + + +Standard names for Estimator model modes. + + + + + +The following standard keys are defined: + +* `TRAIN`: training/fitting mode. +* `EVAL`: testing/evaluation mode. +* `PREDICT`: predication/inference mode. + +## Class Variables + +* `EVAL = 'eval'` +* `PREDICT = 'infer'` +* `TRAIN = 'train'` diff --git a/site/en/api_docs/python/tf/estimator/MultiClassHead.md b/site/en/api_docs/python/tf/estimator/MultiClassHead.md new file mode 100644 index 00000000000..a5225287966 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/MultiClassHead.md @@ -0,0 +1,453 @@ +description: Creates a Head for multi class classification. + +
+ + + + + + + + +
+ +# tf.estimator.MultiClassHead + + + + + + + + + +Creates a `Head` for multi class classification. + +Inherits From: [`Head`](../../tf/estimator/Head.md) + + + + + + + + + +Uses `sparse_softmax_cross_entropy` loss. + +The head expects `logits` with shape `[D0, D1, ... DN, n_classes]`. +In many applications, the shape is `[batch_size, n_classes]`. + +`labels` must be a dense `Tensor` with shape matching `logits`, namely +`[D0, D1, ... DN, 1]`. If `label_vocabulary` given, `labels` must be a string +`Tensor` with values from the vocabulary. If `label_vocabulary` is not given, +`labels` must be an integer `Tensor` with values specifying the class index. + +If `weight_column` is specified, weights must be of shape +`[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`. + +The loss is the weighted sum over the input dimensions. Namely, if the input +labels have shape `[batch_size, 1]`, the loss is the weighted sum over +`batch_size`. + +Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or +`(labels, logits, features, loss_reduction)` as arguments and returns +unreduced loss with shape `[D0, D1, ... DN, 1]`. `loss_fn` must support +integer `labels` with shape `[D0, D1, ... DN, 1]`. Namely, the head applies +`label_vocabulary` to the input labels before passing them to `loss_fn`. + +#### Usage: + + + +``` +>>> n_classes = 3 +>>> head = tf.estimator.MultiClassHead(n_classes) +>>> logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32) +>>> labels = np.array(((1,), (1,)), dtype=np.int64) +>>> features = {'x': np.array(((42,),), dtype=np.int32)} +>>> # expected_loss = sum(cross_entropy(labels, logits)) / batch_size +>>> # = sum(10, 0) / 2 = 5. +>>> loss = head.loss(labels, logits, features=features) +>>> print('{:.2f}'.format(loss.numpy())) +5.00 +>>> eval_metrics = head.metrics() +>>> updated_metrics = head.update_metrics( +... eval_metrics, features, logits, labels) +>>> for k in sorted(updated_metrics): +... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy())) +accuracy : 0.50 +average_loss : 5.00 +>>> preds = head.predictions(logits) +>>> print(preds['logits']) +tf.Tensor( + [[10. 0. 0.] + [ 0. 10. 0.]], shape=(2, 3), dtype=float32) +``` + +Usage with a canned estimator: + +```python +my_head = tf.estimator.MultiClassHead(n_classes=3) +my_estimator = tf.estimator.DNNEstimator( + head=my_head, + hidden_units=..., + feature_columns=...) +``` + +It can also be used with a custom `model_fn`. Example: + +```python +def _my_model_fn(features, labels, mode): + my_head = tf.estimator.MultiClassHead(n_classes=3) + logits = tf.keras.Model(...)(features) + + return my_head.create_estimator_spec( + features=features, + mode=mode, + labels=labels, + optimizer=tf.keras.optimizers.Adagrad(lr=0.1), + logits=logits) + +my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`n_classes` + +Number of classes, must be greater than 2 (for 2 classes, use +`BinaryClassHead`). +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. +
+`label_vocabulary` + +A list or tuple of strings representing possible label +values. If it is not given, that means labels are already encoded as an +integer within [0, n_classes). If given, labels must be of string type and +have any value in `label_vocabulary`. Note that errors will be raised if +`label_vocabulary` is not provided but labels are strings. If both +`n_classes` and `label_vocabulary` are provided, `label_vocabulary` should +contain exactly `n_classes` items. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Decides how to +reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely +weighted sum of losses divided by `batch size * label_dimension`. +
+`loss_fn` + +Optional loss function. +
+`name` + +Name of the head. If provided, summary and metrics keys will be +suffixed by `"/" + name`. Also used as `name_scope` when creating ops. +
+ + + + + + + + + + + + + + + + + + + + +
+`logits_dimension` + +See `base_head.Head` for details. +
+`loss_reduction` + +See `base_head.Head` for details. +
+`name` + +See `base_head.Head` for details. +
+ + + +## Methods + +

create_estimator_spec

+ +View source + + + +Returns `EstimatorSpec` that a model_fn can return. + +It is recommended to pass all args via name. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`features` + +Input `dict` mapping string feature names to `Tensor` or +`SparseTensor` objects containing the values for that feature in a +minibatch. Often to be used to fetch example-weight tensor. +
+`mode` + +Estimator's `ModeKeys`. +
+`logits` + +Logits `Tensor` to be used by the head. +
+`labels` + +Labels `Tensor`, or `dict` mapping string label names to `Tensor` +objects of the label values. +
+`optimizer` + +An tf.keras.optimizers.Optimizer instance to optimize the +loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, +trainable_variables)`, which updates variables to minimize `loss`. +
+`trainable_variables` + +A list or tuple of `Variable` objects to update to +minimize `loss`. In Tensorflow 1.x, by default these are the list of +variables collected in the graph under the key +`GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have +collections and GraphKeys, trainable_variables need to be passed +explicitly here. +
+`train_op_fn` + +Function that takes a scalar loss `Tensor` and returns an op +to optimize the model with the loss in TRAIN mode. Used if `optimizer` +is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in +TRAIN mode. By default, it is `None` in other modes. If you want to +optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use +EstimatorSpec.loss to compute and apply gradients. +
+`update_ops` + +A list or tuple of update ops to be run at training time. For +example, layers such as BatchNormalization create mean and variance +update ops that need to be run at training time. In Tensorflow 1.x, +these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x +doesn't have collections, update_ops need to be passed explicitly here. +
+`regularization_losses` + +A list of additional scalar losses to be added to +the training loss, such as regularization losses. +
+ + + + + + + + + + + +
Returns
+`EstimatorSpec`. +
+ + + +

loss

+ +View source + + + +Returns regularized training loss. See `base_head.Head` for details. + + +

metrics

+ +View source + + + +Creates metrics. See `base_head.Head` for details. + + +

predictions

+ +View source + + + +Return predictions based on keys. + +See `base_head.Head` for details. + + + + + + + + + + + + + +
Args
+`logits` + +logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. +For many applications, the shape is `[batch_size, logits_dimension]`. +
+`keys` + +a list or tuple of prediction keys. Each key can be either the class +variable of prediction_keys.PredictionKeys or its string value, such as: +prediction_keys.PredictionKeys.CLASSES or 'classes'. If not specified, +it will return the predictions for all valid keys. +
+ + + + + + + + + + + +
Returns
+A dict of predictions. +
+ + + +
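+For example (the values are illustrative), the `keys` argument can restrict the
+returned dict to selected prediction tensors:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+head = tf.estimator.MultiClassHead(n_classes=3)
+logits = np.array([[10., 0., 0.], [0., 10., 0.]], dtype=np.float32)
+# Only materialize class ids; other prediction tensors are skipped.
+preds = head.predictions(logits, keys=['class_ids'])
+print(preds['class_ids'].numpy())  # argmax of the logits: [[0], [1]]
+```
+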

update_metrics

+ +View source + + + +Updates eval metrics. See `base_head.Head` for details. + + + + diff --git a/site/en/api_docs/python/tf/estimator/MultiHead.md b/site/en/api_docs/python/tf/estimator/MultiHead.md new file mode 100644 index 00000000000..04dee8e19a8 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/MultiHead.md @@ -0,0 +1,433 @@ +description: Creates a Head for multi-objective learning. + +
+ + + + + + + + +
+ +# tf.estimator.MultiHead + + + + + + + + + +Creates a `Head` for multi-objective learning. + +Inherits From: [`Head`](../../tf/estimator/Head.md) + + + + + + + + + +This class merges the output of multiple `Head` objects. Specifically: + +* For training, sums losses of each head, calls `train_op_fn` with this + final loss. +* For eval, merges metrics by adding `head.name` suffix to the keys in eval + metrics, such as `precision/head1.name`, `precision/head2.name`. +* For prediction, merges predictions and updates keys in prediction dict to a + 2-tuple, `(head.name, prediction_key)`. Merges `export_outputs` such that + by default the first head is served. + +#### Usage: + + + +``` +>>> head1 = tf.estimator.MultiLabelHead(n_classes=2, name='head1') +>>> head2 = tf.estimator.MultiLabelHead(n_classes=3, name='head2') +>>> multi_head = tf.estimator.MultiHead([head1, head2]) +>>> logits = { +... 'head1': np.array([[-10., 10.], [-15., 10.]], dtype=np.float32), +... 'head2': np.array([[20., -20., 20.], [-30., 20., -20.]], +... dtype=np.float32),} +>>> labels = { +... 'head1': np.array([[1, 0], [1, 1]], dtype=np.int64), +... 'head2': np.array([[0, 1, 0], [1, 1, 0]], dtype=np.int64),} +>>> features = {'x': np.array(((42,),), dtype=np.float32)} +>>> # For large logits, sigmoid cross entropy loss is approximated as: +>>> # loss = labels * (logits < 0) * (-logits) + +>>> # (1 - labels) * (logits > 0) * logits => +>>> # head1: expected_unweighted_loss = [[10., 10.], [15., 0.]] +>>> # loss1 = ((10 + 10) / 2 + (15 + 0) / 2) / 2 = 8.75 +>>> # head2: expected_unweighted_loss = [[20., 20., 20.], [30., 0., 0]] +>>> # loss2 = ((20 + 20 + 20) / 3 + (30 + 0 + 0) / 3) / 2 = 15.00 +>>> # loss = loss1 + loss2 = 8.75 + 15.00 = 23.75 +>>> loss = multi_head.loss(labels, logits, features=features) +>>> print('{:.2f}'.format(loss.numpy())) +23.75 +>>> eval_metrics = multi_head.metrics() +>>> updated_metrics = multi_head.update_metrics( +... eval_metrics, features, logits, labels) +>>> for k in sorted(updated_metrics): +... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy())) +auc/head1 : 0.17 +auc/head2 : 0.33 +auc_precision_recall/head1 : 0.60 +auc_precision_recall/head2 : 0.40 +average_loss/head1 : 8.75 +average_loss/head2 : 15.00 +loss/head1 : 8.75 +loss/head2 : 15.00 +>>> preds = multi_head.predictions(logits) +>>> print(preds[('head1', 'logits')]) +tf.Tensor( + [[-10. 10.] + [-15. 10.]], shape=(2, 2), dtype=float32) +``` + +Usage with a canned estimator: + +```python +# In `input_fn`, specify labels as a dict keyed by head name: +def input_fn(): + features = ... + labels1 = ... + labels2 = ... + return features, {'head1.name': labels1, 'head2.name': labels2} + +# In `model_fn`, specify logits as a dict keyed by head name: +def model_fn(features, labels, mode): + # Create simple heads and specify head name. + head1 = tf.estimator.MultiClassHead(n_classes=3, name='head1') + head2 = tf.estimator.BinaryClassHead(name='head2') + # Create MultiHead from two simple heads. + head = tf.estimator.MultiHead([head1, head2]) + # Create logits for each head, and combine them into a dict. + logits1, logits2 = logit_fn() + logits = {'head1.name': logits1, 'head2.name': logits2} + # Return the merged EstimatorSpec + return head.create_estimator_spec(..., logits=logits, ...) + +# Create an estimator with this model_fn. +estimator = tf.estimator.Estimator(model_fn=model_fn) +estimator.train(input_fn=input_fn) +``` + +Also supports `logits` as a `Tensor` of shape +`[D0, D1, ... DN, logits_dimension]`. 
It will split the `Tensor` along the +last dimension and distribute it appropriately among the heads. E.g.: + +```python +# Input logits. +logits = np.array([[-1., 1., 2., -2., 2.], [-1.5, 1., -3., 2., -2.]], + dtype=np.float32) +# Suppose head1 and head2 have the following logits dimension. +head1.logits_dimension = 2 +head2.logits_dimension = 3 +# After splitting, the result will be: +logits_dict = {'head1_name': [[-1., 1.], [-1.5, 1.]], + 'head2_name': [[2., -2., 2.], [-3., 2., -2.]]} +``` + +#### Usage: + + + +```python +def model_fn(features, labels, mode): + # Create simple heads and specify head name. + head1 = tf.estimator.MultiClassHead(n_classes=3, name='head1') + head2 = tf.estimator.BinaryClassHead(name='head2') + # Create multi-head from two simple heads. + head = tf.estimator.MultiHead([head1, head2]) + # Create logits for the multihead. The result of logits is a `Tensor`. + logits = logit_fn(logits_dimension=head.logits_dimension) + # Return the merged EstimatorSpec + return head.create_estimator_spec(..., logits=logits, ...) +``` + + + + + + + + + + + + + +
+`heads` + +List or tuple of `Head` instances. All heads must have `name` +specified. The first head in the list is the default used at serving time. +
+`head_weights` + +Optional list of weights, same length as `heads`. Used when +merging losses to calculate the weighted sum of losses from each head. If +`None`, all losses are weighted equally. +
+ + + + + + + + + + + + + + + + + + + + +
+`logits_dimension` + +See `base_head.Head` for details. +
+`loss_reduction` + +See `base_head.Head` for details. +
+`name` + +See `base_head.Head` for details. +
+ + + +## Methods + +

create_estimator_spec

+ +View source + + + +Returns a `model_fn.EstimatorSpec`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`features` + +Input `dict` of `Tensor` or `SparseTensor` objects. +
+`mode` + +Estimator's `ModeKeys`. +
+`logits` + +Input `dict` keyed by head name, or logits `Tensor` with shape +`[D0, D1, ... DN, logits_dimension]`. For many applications, the +`Tensor` shape is `[batch_size, logits_dimension]`. If logits is a +`Tensor`, it will split the `Tensor` along the last dimension and +distribute it appropriately among the heads. Check `MultiHead` for +examples. +
+`labels` + +Input `dict` keyed by head name. For each head, the label value +can be integer or string `Tensor` with shape matching its corresponding +`logits`.`labels` is a required argument when `mode` equals `TRAIN` or +`EVAL`. +
+`optimizer` + +An tf.keras.optimizers.Optimizer instance to optimize the +loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, +trainable_variables)`, which updates variables to minimize `loss`. +
+`trainable_variables` + +A list or tuple of `Variable` objects to update to +minimize `loss`. In Tensorflow 1.x, by default these are the list of +variables collected in the graph under the key +`GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have +collections and GraphKeys, trainable_variables need to be passed +explicitly here. +
+`train_op_fn` + +Function that takes a scalar loss `Tensor` and returns +`train_op`. Used if `optimizer` is `None`. +
+`update_ops` + +A list or tuple of update ops to be run at training time. For +example, layers such as BatchNormalization create mean and variance +update ops that need to be run at training time. In Tensorflow 1.x, +these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x +doesn't have collections, update_ops need to be passed explicitly here. +
+`regularization_losses` + +A list of additional scalar losses to be added to +the training loss, such as regularization losses. These losses are +usually expressed as a batch average, so for best results, in each head, +users need to use the default `loss_reduction=SUM_OVER_BATCH_SIZE` to +avoid scaling errors. Compared to the regularization losses for each +head, this loss is to regularize the merged loss of all heads in multi +head, and will be added to the overall training loss of multi head. +
+ + + + + + + + + + + +
Returns
+A `model_fn.EstimatorSpec` instance. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If both `train_op_fn` and `optimizer` are `None` in TRAIN +mode, or if both are set. +If `mode` is not in Estimator's `ModeKeys`. +
+ + + +

loss

+ +View source + + + +Returns regularized training loss. See `base_head.Head` for details. + + +

metrics

+ +View source + + + +Creates metrics. See `base_head.Head` for details. + + +

predictions

+ +View source + + + +Create predictions. See `base_head.Head` for details. + + +

update_metrics

+ +View source + + + +Updates eval metrics. See `base_head.Head` for details. + + + + diff --git a/site/en/api_docs/python/tf/estimator/MultiLabelHead.md b/site/en/api_docs/python/tf/estimator/MultiLabelHead.md new file mode 100644 index 00000000000..75ecbec7081 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/MultiLabelHead.md @@ -0,0 +1,481 @@ +description: Creates a Head for multi-label classification. + +
+ + + + + + + + +
+ +# tf.estimator.MultiLabelHead + + + + + + + + + +Creates a `Head` for multi-label classification. + +Inherits From: [`Head`](../../tf/estimator/Head.md) + + + + + + + + + +Multi-label classification handles the case where each example may have zero +or more associated labels, from a discrete set. This is distinct from +`MultiClassHead` which has exactly one label per example. + +Uses `sigmoid_cross_entropy` loss average over classes and weighted sum over +the batch. Namely, if the input logits have shape `[batch_size, n_classes]`, +the loss is the average over `n_classes` and the weighted sum over +`batch_size`. + +The head expects `logits` with shape `[D0, D1, ... DN, n_classes]`. In many +applications, the shape is `[batch_size, n_classes]`. + +#### Labels can be: + + + +* A multi-hot tensor of shape `[D0, D1, ... DN, n_classes]` +* An integer `SparseTensor` of class indices. The `dense_shape` must be + `[D0, D1, ... DN, ?]` and the values within `[0, n_classes)`. +* If `label_vocabulary` is given, a string `SparseTensor`. The `dense_shape` + must be `[D0, D1, ... DN, ?]` and the values within `label_vocabulary` or a + multi-hot tensor of shape `[D0, D1, ... DN, n_classes]`. + +If `weight_column` is specified, weights must be of shape +`[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`. + +Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or +`(labels, logits, features)` as arguments and returns unreduced loss with +shape `[D0, D1, ... DN, 1]`. `loss_fn` must support indicator `labels` with +shape `[D0, D1, ... DN, n_classes]`. Namely, the head applies +`label_vocabulary` to the input labels before passing them to `loss_fn`. + +#### Usage: + + + +``` +>>> n_classes = 2 +>>> head = tf.estimator.MultiLabelHead(n_classes) +>>> logits = np.array([[-1., 1.], [-1.5, 1.5]], dtype=np.float32) +>>> labels = np.array([[1, 0], [1, 1]], dtype=np.int64) +>>> features = {'x': np.array([[41], [42]], dtype=np.int32)} +>>> # expected_loss = sum(_sigmoid_cross_entropy(labels, logits)) / batch_size +>>> # = sum(1.31326169, 0.9514133) / 2 = 1.13 +>>> loss = head.loss(labels, logits, features=features) +>>> print('{:.2f}'.format(loss.numpy())) +1.13 +>>> eval_metrics = head.metrics() +>>> updated_metrics = head.update_metrics( +... eval_metrics, features, logits, labels) +>>> for k in sorted(updated_metrics): +... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy())) +auc : 0.33 +auc_precision_recall : 0.77 +average_loss : 1.13 +>>> preds = head.predictions(logits) +>>> print(preds['logits']) +tf.Tensor( + [[-1. 1. ] + [-1.5 1.5]], shape=(2, 2), dtype=float32) +``` + +Usage with a canned estimator: + +```python +my_head = tf.estimator.MultiLabelHead(n_classes=3) +my_estimator = tf.estimator.DNNEstimator( + head=my_head, + hidden_units=..., + feature_columns=...) +``` + +It can also be used with a custom `model_fn`. Example: + +```python +def _my_model_fn(features, labels, mode): + my_head = tf.estimator.MultiLabelHead(n_classes=3) + logits = tf.keras.Model(...)(features) + + return my_head.create_estimator_spec( + features=features, + mode=mode, + labels=labels, + optimizer=tf.keras.optimizers.Adagrad(lr=0.1), + logits=logits) + +my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`n_classes` + +Number of classes, must be greater than 1 (for 1 class, use +`BinaryClassHead`). +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. Per-class weighting is not +supported. +
+`thresholds` + +Iterable of floats in the range `(0, 1)`. Accuracy, precision +and recall metrics are evaluated for each threshold value. The threshold +is applied to the predicted probabilities, i.e. above the threshold is +`true`, below is `false`. +
+`label_vocabulary`
+
+A list of strings representing possible label values. If it is not given,
+labels must already be encoded as integers within [0, n_classes) or as a
+multi-hot `Tensor`. If given, labels must be a string `SparseTensor` with
+values in `label_vocabulary`. Errors are raised if the vocabulary is not
+provided but the labels are strings.
+
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Decides how to +reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely +weighted sum of losses divided by batch size. +
+`loss_fn` + +Optional loss function. +
+`classes_for_class_based_metrics` + +List of integer class IDs or string class +names for which per-class metrics are evaluated. If integers, all must be +in the range `[0, n_classes - 1]`. If strings, all must be in +`label_vocabulary`. +
+`name` + +Name of the head. If provided, summary and metrics keys will be +suffixed by `"/" + name`. Also used as `name_scope` when creating ops. +
+ + + + + + + + + + + + + + + + + + + + +
+`logits_dimension` + +See `base_head.Head` for details. +
+`loss_reduction` + +See `base_head.Head` for details. +
+`name` + +See `base_head.Head` for details. +
+ + + +## Methods + +

create_estimator_spec

+ +View source + + + +Returns `EstimatorSpec` that a model_fn can return. + +It is recommended to pass all args via name. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`features` + +Input `dict` mapping string feature names to `Tensor` or +`SparseTensor` objects containing the values for that feature in a +minibatch. Often to be used to fetch example-weight tensor. +
+`mode` + +Estimator's `ModeKeys`. +
+`logits` + +Logits `Tensor` to be used by the head. +
+`labels` + +Labels `Tensor`, or `dict` mapping string label names to `Tensor` +objects of the label values. +
+`optimizer` + +An tf.keras.optimizers.Optimizer instance to optimize the +loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, +trainable_variables)`, which updates variables to minimize `loss`. +
+`trainable_variables` + +A list or tuple of `Variable` objects to update to +minimize `loss`. In Tensorflow 1.x, by default these are the list of +variables collected in the graph under the key +`GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have +collections and GraphKeys, trainable_variables need to be passed +explicitly here. +
+`train_op_fn` + +Function that takes a scalar loss `Tensor` and returns an op +to optimize the model with the loss in TRAIN mode. Used if `optimizer` +is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in +TRAIN mode. By default, it is `None` in other modes. If you want to +optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use +EstimatorSpec.loss to compute and apply gradients. +
+`update_ops` + +A list or tuple of update ops to be run at training time. For +example, layers such as BatchNormalization create mean and variance +update ops that need to be run at training time. In Tensorflow 1.x, +these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x +doesn't have collections, update_ops need to be passed explicitly here. +
+`regularization_losses` + +A list of additional scalar losses to be added to +the training loss, such as regularization losses. +
+ + + + + + + + + + + +
Returns
+`EstimatorSpec`. +
+ + + +

loss

+ +View source + + + +Returns regularized training loss. See `base_head.Head` for details. + + +

metrics

+ +View source + + + +Creates metrics. See `base_head.Head` for details. + + +

predictions

+ +View source + + + +Return predictions based on keys. + +See `base_head.Head` for details. + + + + + + + + + + + + + +
Args
+`logits` + +logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. +For many applications, the shape is `[batch_size, logits_dimension]`. +
+`keys` + +a list of prediction keys. Key can be either the class variable +of prediction_keys.PredictionKeys or its string value, such as: +prediction_keys.PredictionKeys.LOGITS or 'logits'. +
+ + + + + + + + + + + +
Returns
+A dict of predictions. +
+ + + +

update_metrics

+ +View source + + + +Updates eval metrics. See `base_head.Head` for details. + + + + diff --git a/site/en/api_docs/python/tf/estimator/NanLossDuringTrainingError.md b/site/en/api_docs/python/tf/estimator/NanLossDuringTrainingError.md new file mode 100644 index 00000000000..97542469e06 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/NanLossDuringTrainingError.md @@ -0,0 +1,48 @@ +description: Unspecified run-time error. + +
+ + + + +
+ +# tf.estimator.NanLossDuringTrainingError + + + + + + + + + +Unspecified run-time error. + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/estimator/NanTensorHook.md b/site/en/api_docs/python/tf/estimator/NanTensorHook.md new file mode 100644 index 00000000000..af59ec6b27e --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/NanTensorHook.md @@ -0,0 +1,280 @@ +description: Monitors the loss tensor and stops training if loss is NaN. + +
+ + + + + + + + +
+ +# tf.estimator.NanTensorHook + + + + + + + + + +Monitors the loss tensor and stops training if loss is NaN. + +Inherits From: [`SessionRunHook`](../../tf/estimator/SessionRunHook.md) + + + + + + + + + +Can either fail with exception or just stop training. + + + + + + + + + + + + + +
+`loss_tensor` + +`Tensor`, the loss tensor. +
+`fail_on_nan_loss` + +`bool`, whether to raise exception when loss is NaN. +
+ + + +## Methods + +

after_create_session

+ +View source + + + +Called when new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +has two essential differences with the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+ +View source + + + +Called after each call to run(). + +The `run_values` argument contains results of requested ops/tensors by +`before_run()`. + +The `run_context` argument is the same one send to `before_run` call. +`run_context.request_stop()` can be called to stop the iteration. + +If `session.run()` raises any exceptions then `after_run()` is not called. + + + + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the original run() call. +The run args you return can also contain feeds to be added to the run() +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +op/tensors, the TensorFlow Session. + +At this point graph is finalized and you can not add ops. + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +can not modify the graph anymore. Second call of `begin()` on the same +graph, should not change the graph. + +

end

+ +View source + + + +Called at the end of session. + +The `session` argument can be used in case the hook wants to run final ops, +such as saving a last checkpoint. + +If `session.run()` raises exception other than OutOfRangeError or +StopIteration then `end()` is not called. +Note the difference between `end()` and `after_run()` behavior when +`session.run()` raises OutOfRangeError or StopIteration. In that case +`end()` is called but `after_run()` is not called. + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that will be soon closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/PoissonRegressionHead.md b/site/en/api_docs/python/tf/estimator/PoissonRegressionHead.md new file mode 100644 index 00000000000..70b940992ed --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/PoissonRegressionHead.md @@ -0,0 +1,400 @@ +description: Creates a Head for poisson regression using tf.nn.log_poisson_loss. + +
+ + + + + + + + +
+ +# tf.estimator.PoissonRegressionHead + + + + + + + + + +Creates a `Head` for poisson regression using tf.nn.log_poisson_loss. + +Inherits From: [`RegressionHead`](../../tf/estimator/RegressionHead.md) + + + + + + + + + +The loss is the weighted sum over all input dimensions. Namely, if the input +labels have shape `[batch_size, label_dimension]`, the loss is the weighted +sum over both `batch_size` and `label_dimension`. + +The head expects `logits` with shape `[D0, D1, ... DN, label_dimension]`. +In many applications, the shape is `[batch_size, label_dimension]`. + +The `labels` shape must match `logits`, namely +`[D0, D1, ... DN, label_dimension]`. If `label_dimension=1`, shape +`[D0, D1, ... DN]` is also supported. + +If `weight_column` is specified, weights must be of shape +`[D0, D1, ... DN]`, `[D0, D1, ... DN, 1]` or +`[D0, D1, ... DN, label_dimension]`. + +This is implemented as a generalized linear model, see +https://en.wikipedia.org/wiki/Generalized_linear_model. + +The head can be used with a canned estimator. Example: + +```python +my_head = tf.estimator.PoissonRegressionHead() +my_estimator = tf.estimator.DNNEstimator( + head=my_head, + hidden_units=..., + feature_columns=...) +``` + +It can also be used with a custom `model_fn`. Example: + +```python +def _my_model_fn(features, labels, mode): + my_head = tf.estimator.PoissonRegressionHead() + logits = tf.keras.Model(...)(features) + + return my_head.create_estimator_spec( + features=features, + mode=mode, + labels=labels, + optimizer=tf.keras.optimizers.Adagrad(lr=0.1), + logits=logits) + +my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. +
+`label_dimension` + +Number of regression labels per example. This is the size +of the last dimension of the labels `Tensor` (typically, this has shape +`[batch_size, label_dimension]`). +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Decides how to +reduce training loss over batch and label dimension. Defaults to +`SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch +size * label_dimension`. +
+`compute_full_loss` + +Whether to include the constant `log(z!)` term in +computing the poisson loss. See tf.nn.log_poisson_loss for the full +documentation. +
+`name` + +name of the head. If provided, summary and metrics keys will be +suffixed by `"/" + name`. Also used as `name_scope` when creating ops. +
+ + + + + + + + + + + + + + + + + + + + +
+`logits_dimension` + +See `base_head.Head` for details. +
+`loss_reduction` + +See `base_head.Head` for details. +
+`name` + +See `base_head.Head` for details. +
+ + + +## Methods + +

create_estimator_spec

+ +View source + + + +Returns `EstimatorSpec` that a model_fn can return. + +It is recommended to pass all args via name. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`features` + +Input `dict` mapping string feature names to `Tensor` or +`SparseTensor` objects containing the values for that feature in a +minibatch. Often to be used to fetch example-weight tensor. +
+`mode` + +Estimator's `ModeKeys`. +
+`logits` + +Logits `Tensor` to be used by the head. +
+`labels` + +Labels `Tensor`, or `dict` mapping string label names to `Tensor` +objects of the label values. +
+`optimizer` + +An tf.keras.optimizers.Optimizer instance to optimize the +loss in TRAIN mode. Namely, sets `train_op = optimizer.get_updates(loss, +trainable_variables)`, which updates variables to minimize `loss`. +
+`trainable_variables` + +A list or tuple of `Variable` objects to update to +minimize `loss`. In Tensorflow 1.x, by default these are the list of +variables collected in the graph under the key +`GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have +collections and GraphKeys, trainable_variables need to be passed +explicitly here. +
+`train_op_fn` + +Function that takes a scalar loss `Tensor` and returns an op +to optimize the model with the loss in TRAIN mode. Used if `optimizer` +is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in +TRAIN mode. By default, it is `None` in other modes. If you want to +optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use +EstimatorSpec.loss to compute and apply gradients. +
+`update_ops` + +A list or tuple of update ops to be run at training time. For +example, layers such as BatchNormalization create mean and variance +update ops that need to be run at training time. In Tensorflow 1.x, +these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x +doesn't have collections, update_ops need to be passed explicitly here. +
+`regularization_losses` + +A list of additional scalar losses to be added to +the training loss, such as regularization losses. +
+ + + + + + + + + + + +
Returns
+`EstimatorSpec`. +
+ + + +

loss

+ +View source + + + +Return predictions based on keys. See `base_head.Head` for details. + + +

metrics

+ +View source + + + +Creates metrics. See `base_head.Head` for details. + + +

predictions

+ +View source + + + +Return predictions based on keys. + +See `base_head.Head` for details. + + + + + + + + + + +
Args
+`logits` + +logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. +For many applications, the shape is `[batch_size, logits_dimension]`. +
+ + + + + + + + + + + +
Returns
+A dict of predictions. +
+ + + +

update_metrics

+ +View source + + + +Updates eval metrics. See `base_head.Head` for details. + + + + diff --git a/site/en/api_docs/python/tf/estimator/ProfilerHook.md b/site/en/api_docs/python/tf/estimator/ProfilerHook.md new file mode 100644 index 00000000000..df48349004c --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/ProfilerHook.md @@ -0,0 +1,310 @@ +description: Captures CPU/GPU profiling information every N steps or seconds. + +
+ + + + + + + + +
+ +# tf.estimator.ProfilerHook + + + + + + + + + +Captures CPU/GPU profiling information every N steps or seconds. + +Inherits From: [`SessionRunHook`](../../tf/estimator/SessionRunHook.md) + + + + + + + + + +This produces files called "timeline-.json", which are in Chrome +Trace format. + +For more information see: +https://github.com/catapult-project/catapult/blob/master/tracing/README.md + + + + + + + + + + + + + + + + + + + + + + +
+`save_steps` + +`int`, save profile traces every N steps. Exactly one of +`save_secs` and `save_steps` should be set. +
+`save_secs` + +`int` or `float`, save profile traces every N seconds. +
+`output_dir` + +`string`, the directory to save the profile traces to. +Defaults to the current directory. +
+`show_dataflow` + +`bool`, if True, add flow events to the trace connecting +producers and consumers of tensors. +
+`show_memory` + +`bool`, if True, add object snapshot events to the trace +showing the sizes and lifetimes of tensors. +
+ + + +## Methods + +
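A minimal usage sketch; `my_model_fn`, `my_input_fn`, and the directories below are placeholders rather than part of this API:

```python
import tensorflow as tf

profiler_hook = tf.estimator.ProfilerHook(
    save_steps=100,             # exactly one of save_steps / save_secs
    output_dir='/tmp/profile',  # timeline JSON files are written here
    show_dataflow=True,
    show_memory=False)

estimator = tf.estimator.Estimator(model_fn=my_model_fn, model_dir='/tmp/model')

# Open the resulting timeline files in chrome://tracing (Chrome Trace format).
estimator.train(input_fn=my_input_fn, hooks=[profiler_hook], max_steps=1000)
```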

after_create_session

+ +View source + + + +Called when a new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +differs in two essential ways from the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+ +View source + + + +Called after each call to run(). + +The `run_values` argument contains the results of the ops/tensors requested by +`before_run()`. + +The `run_context` argument is the same one sent to the `before_run` call. +`run_context.request_stop()` can be called to stop the iteration. + +If `session.run()` raises any exception, then `after_run()` is not called. + + + + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the `run()` call. +The run args you return can also contain feeds to be added to the `run()` +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +ops/tensors and the TensorFlow Session. + +At this point the graph is finalized and you cannot add ops. + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +cannot modify the graph anymore. A second call of `begin()` on the same +graph should not change the graph. + +

end

+ +View source + + + +Called at the end of the session. + +The `session` argument can be used in case the hook wants to run final ops, +such as saving a last checkpoint. + +If `session.run()` raises an exception other than OutOfRangeError or +StopIteration, then `end()` is not called. +Note the difference between `end()` and `after_run()` behavior when +`session.run()` raises OutOfRangeError or StopIteration: in that case +`end()` is called but `after_run()` is not. + + + + + + + + + + 
Args
+`session` + +A TensorFlow Session that will soon be closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/RegressionHead.md b/site/en/api_docs/python/tf/estimator/RegressionHead.md new file mode 100644 index 00000000000..907c1d96945 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/RegressionHead.md @@ -0,0 +1,443 @@ +description: Creates a Head for regression using the mean_squared_error loss. + +
+ + + + + + + + +
+ +# tf.estimator.RegressionHead + + + + + + + + + +Creates a `Head` for regression using the `mean_squared_error` loss. + +Inherits From: [`Head`](../../tf/estimator/Head.md) + + + + + + + + + +The loss is the weighted sum over all input dimensions. Namely, if the input +labels have shape `[batch_size, label_dimension]`, the loss is the weighted +sum over both `batch_size` and `label_dimension`. + +The head expects `logits` with shape `[D0, D1, ... DN, label_dimension]`. +In many applications, the shape is `[batch_size, label_dimension]`. + +The `labels` shape must match `logits`, namely +`[D0, D1, ... DN, label_dimension]`. If `label_dimension=1`, shape +`[D0, D1, ... DN]` is also supported. + +If `weight_column` is specified, weights must be of shape +`[D0, D1, ... DN]`, `[D0, D1, ... DN, 1]` or +`[D0, D1, ... DN, label_dimension]`. + +Supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or +`(labels, logits, features, loss_reduction)` as arguments and returns +unreduced loss with shape `[D0, D1, ... DN, label_dimension]`. + +Also supports custom `inverse_link_fn`, also known as 'mean function'. +`inverse_link_fn` is only used in `PREDICT` mode. It takes `logits` as +argument and returns predicted values. This function is the inverse of the +link function defined in +https://en.wikipedia.org/wiki/Generalized_linear_model#Link_function +Namely, for poisson regression, set `inverse_link_fn=tf.exp`. + +#### Usage: + + + +``` +>>> head = tf.estimator.RegressionHead() +>>> logits = np.array(((45,), (41,),), dtype=np.float32) +>>> labels = np.array(((43,), (44,),), dtype=np.int32) +>>> features = {'x': np.array(((42,),), dtype=np.float32)} +>>> # expected_loss = weighted_loss / batch_size +>>> # = (43-45)^2 + (44-41)^2 / 2 = 6.50 +>>> loss = head.loss(labels, logits, features=features) +>>> print('{:.2f}'.format(loss.numpy())) +6.50 +>>> eval_metrics = head.metrics() +>>> updated_metrics = head.update_metrics( +... eval_metrics, features, logits, labels) +>>> for k in sorted(updated_metrics): +... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy())) + average_loss : 6.50 + label/mean : 43.50 + prediction/mean : 43.00 +>>> preds = head.predictions(logits) +>>> print(preds['predictions']) +tf.Tensor( + [[45.] + [41.]], shape=(2, 1), dtype=float32) +``` + +Usage with a canned estimator: + +```python +my_head = tf.estimator.RegressionHead() +my_estimator = tf.estimator.DNNEstimator( + head=my_head, + hidden_units=..., + feature_columns=...) +``` + +It can also be used with a custom `model_fn`. Example: + +```python +def _my_model_fn(features, labels, mode): + my_head = tf.estimator.RegressionHead() + logits = tf.keras.Model(...)(features) + + return my_head.create_estimator_spec( + features=features, + mode=mode, + labels=labels, + optimizer=tf.keras.optimizers.Adagrad(lr=0.1), + logits=logits) + +my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. +
+`label_dimension` + +Number of regression labels per example. This is the size +of the last dimension of the labels `Tensor` (typically, this has shape +`[batch_size, label_dimension]`). +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Decides how to +reduce training loss over batch and label dimension. Defaults to +`SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by +`batch_size * label_dimension`. +
+`loss_fn` + +Optional loss function. Defaults to `mean_squared_error`. +
+`inverse_link_fn` + +Optional inverse link function, also known as 'mean +function'. Defaults to identity. +
+`name` + +name of the head. If provided, summary and metrics keys will be +suffixed by `"/" + name`. Also used as `name_scope` when creating ops. +
+ + + + + + + + + + + + + + + + + + + + +
+`logits_dimension` + +See `base_head.Head` for details. +
+`loss_reduction` + +See `base_head.Head` for details. +
+`name` + +See `base_head.Head` for details. +
+ + + +## Methods + +

create_estimator_spec

+ +View source + + + +Returns `EstimatorSpec` that a model_fn can return. + +It is recommended to pass all args via name. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`features` + +Input `dict` mapping string feature names to `Tensor` or +`SparseTensor` objects containing the values for that feature in a +minibatch. Often used to fetch the example-weight tensor. +
+`mode` + +Estimator's `ModeKeys`. +
+`logits` + +Logits `Tensor` to be used by the head. +
+`labels` + +Labels `Tensor`, or `dict` mapping string label names to `Tensor` +objects of the label values. +
+`optimizer` + +A tf.keras.optimizers.Optimizer instance used to optimize the +loss in TRAIN mode. Namely, it sets `train_op = optimizer.get_updates(loss, +trainable_variables)`, which updates variables to minimize `loss`. +
+`trainable_variables` + +A list or tuple of `Variable` objects to update to +minimize `loss`. In Tensorflow 1.x, by default these are the list of +variables collected in the graph under the key +`GraphKeys.TRAINABLE_VARIABLES`. As Tensorflow 2.x doesn't have +collections and GraphKeys, trainable_variables need to be passed +explicitly here. +
+`train_op_fn` + +Function that takes a scalar loss `Tensor` and returns an op +to optimize the model with the loss in TRAIN mode. Used if `optimizer` +is `None`. Exactly one of `train_op_fn` and `optimizer` must be set in +TRAIN mode. By default, it is `None` in other modes. If you want to +optimize loss yourself, you can pass `lambda _: tf.no_op()` and then use +EstimatorSpec.loss to compute and apply gradients. +
+`update_ops` + +A list or tuple of update ops to be run at training time. For +example, layers such as BatchNormalization create mean and variance +update ops that need to be run at training time. In Tensorflow 1.x, +these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x +doesn't have collections, update_ops need to be passed explicitly here. +
+`regularization_losses` + +A list of additional scalar losses to be added to +the training loss, such as regularization losses. +
+ + + + + + + + + + + +
Returns
+`EstimatorSpec`. +
+ + + +
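The custom `model_fn` example above passes an `optimizer`; as a hedged alternative, the same spec can be built through `train_op_fn` (exactly one of the two may be set in TRAIN mode). The model, feature key, and learning rate here are illustrative:

```python
import tensorflow as tf

def _my_model_fn(features, labels, mode):
  head = tf.estimator.RegressionHead()
  model = tf.keras.Sequential(
      [tf.keras.layers.Dense(head.logits_dimension)])
  logits = model(features['x'])

  optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

  def _train_op_fn(loss):
    # Build the training op ourselves instead of handing the head an optimizer.
    return tf.group(*optimizer.get_updates(loss, model.trainable_variables))

  return head.create_estimator_spec(
      features=features,
      mode=mode,
      labels=labels,
      logits=logits,
      train_op_fn=_train_op_fn,
      trainable_variables=model.trainable_variables)
```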

loss

+ +View source + + + +Returns regularized training loss. See `base_head.Head` for details. + +

metrics

+ +View source + + + +Creates metrics. See `base_head.Head` for details. + + +

predictions

+ +View source + + + +Return predictions based on keys. + +See `base_head.Head` for details. + + + + + + + + + + +
Args
+`logits` + +logits `Tensor` with shape `[D0, D1, ... DN, logits_dimension]`. +For many applications, the shape is `[batch_size, logits_dimension]`. +
+ + + + + + + + + + + +
Returns
+A dict of predictions. +
+ + + +

update_metrics

+ +View source + + + +Updates eval metrics. See `base_head.Head` for details. + + + + diff --git a/site/en/api_docs/python/tf/estimator/RunConfig.md b/site/en/api_docs/python/tf/estimator/RunConfig.md new file mode 100644 index 00000000000..bf88c7767a9 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/RunConfig.md @@ -0,0 +1,543 @@ +description: This class specifies the configurations for an Estimator run. + +
+ + + + +
+ +# tf.estimator.RunConfig + + + + + + + + + +This class specifies the configurations for an `Estimator` run. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`model_dir` + +directory where model parameters, graph, etc are saved. If +`PathLike` object, the path will be resolved. If `None`, will use a +default value set by the Estimator. +
+`tf_random_seed` + +Random seed for TensorFlow initializers. Setting this +value allows consistency between reruns. +
+`save_summary_steps` + +Save summaries every this many steps. +
+`save_checkpoints_steps` + +Save checkpoints every this many steps. Can not be +specified with `save_checkpoints_secs`. +
+`save_checkpoints_secs` + +Save checkpoints every this many seconds. Can not +be specified with `save_checkpoints_steps`. Defaults to 600 seconds if +both `save_checkpoints_steps` and `save_checkpoints_secs` are not set in +constructor. If both `save_checkpoints_steps` and +`save_checkpoints_secs` are `None`, then checkpoints are disabled. +
+`session_config` + +a ConfigProto used to set session parameters, or `None`. +
+`keep_checkpoint_max` + +The maximum number of recent checkpoint files to +keep. As new files are created, older files are deleted. If `None` or 0, +all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent +checkpoint files are kept.) +
+`keep_checkpoint_every_n_hours` + +Number of hours between each checkpoint to +be saved. The default value of 10,000 hours effectively disables the +feature. +
+`log_step_count_steps` + +The frequency, in number of global steps, that the +global step and the loss will be logged during training. Also controls +the frequency that the global steps / s will be logged (and written to +summary) during training. +
+`train_distribute` + +An optional instance of tf.distribute.Strategy. If +specified, then Estimator will distribute the user's model during +training, according to the policy specified by that strategy. Setting +`experimental_distribute.train_distribute` is preferred. +
+`device_fn` + +A callable invoked for every `Operation` that takes the +`Operation` and returns the device string. If `None`, defaults to the +device function returned by `tf.train.replica_device_setter` with +round-robin strategy. +
+`protocol` + +An optional argument which specifies the protocol used when +starting server. `None` means default to grpc. +
+`eval_distribute` + +An optional instance of tf.distribute.Strategy. If +specified, then Estimator will distribute the user's model during +evaluation, according to the policy specified by that strategy. Setting +`experimental_distribute.eval_distribute` is preferred. +
+`experimental_distribute` + +An optional +`tf.contrib.distribute.DistributeConfig` object specifying +DistributionStrategy-related configuration. The `train_distribute` and +`eval_distribute` can be passed as parameters to `RunConfig` or set in +`experimental_distribute` but not both. +
+`experimental_max_worker_delay_secs` + +An optional integer specifying the +maximum time a worker should wait before starting. By default, workers +are started at staggered times, with each worker being delayed by up to +60 seconds. This is intended to reduce the risk of divergence, which can +occur when many workers simultaneously update the weights of a randomly +initialized model. Users who warm-start their models and train them for +short durations (a few minutes or less) should consider reducing this +default to improve training times. +
+`session_creation_timeout_secs` + +Max time workers should wait for a session +to become available (on initialization or when recovering a session) +with MonitoredTrainingSession. Defaults to 7200 seconds, but users may +want to set a lower value to detect problems with variable / session +(re)-initialization more quickly. +
+ + + + + + + + + + + + +
+`ValueError` + +If both `save_checkpoints_steps` and `save_checkpoints_secs` +are set. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cluster_spec` + + +
+`device_fn` + +Returns the device_fn. + +If device_fn is not `None`, it overrides the default +device function used in `Estimator`. +Otherwise the default one is used. +
+`eval_distribute` + +Optional tf.distribute.Strategy for evaluation. +
+`evaluation_master` + + +
+`experimental_max_worker_delay_secs` + + +
+`global_id_in_cluster` + +The global id in the training cluster. + +All global ids in the training cluster are assigned from an increasing +sequence of consecutive integers. The first id is 0. + +Note: Task id (the property field `task_id`) is tracking the index of the +node among all nodes with the SAME task type. For example, given the cluster +definition as follows: + +``` +cluster = {'chief': ['host0:2222'], +'ps': ['host1:2222', 'host2:2222'], +'worker': ['host3:2222', 'host4:2222', 'host5:2222']} +``` + +Nodes with task type `worker` can have id 0, 1, 2. Nodes with task type +`ps` can have id, 0, 1. So, `task_id` is not unique, but the pair +(`task_type`, `task_id`) can uniquely determine a node in the cluster. + +Global id, i.e., this field, is tracking the index of the node among ALL +nodes in the cluster. It is uniquely assigned. For example, for the cluster +spec given above, the global ids are assigned as: +``` +task_type | task_id | global_id +-------------------------------- +chief | 0 | 0 +worker | 0 | 1 +worker | 1 | 2 +worker | 2 | 3 +ps | 0 | 4 +ps | 1 | 5 +``` +
+`is_chief` + + +
+`keep_checkpoint_every_n_hours` + + +
+`keep_checkpoint_max` + + +
+`log_step_count_steps` + + +
+`master` + + +
+`model_dir` + + +
+`num_ps_replicas` + + +
+`num_worker_replicas` + + +
+`protocol` + +Returns the optional protocol value. +
+`save_checkpoints_secs` + + +
+`save_checkpoints_steps` + + +
+`save_summary_steps` + + +
+`service` + +Returns the platform defined (in TF_CONFIG) service dict. +
+`session_config` + + +
+`session_creation_timeout_secs` + + +
+`task_id` + + +
+`task_type` + + +
+`tf_random_seed` + + +
+`train_distribute` + +Optional tf.distribute.Strategy for training. +
+ + + +## Methods + +
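A short, hedged example of constructing a `RunConfig`, passing it to an estimator, and deriving a modified copy with `replace` (the directory, estimator type, and values are illustrative):

```python
import tensorflow as tf

config = tf.estimator.RunConfig(
    model_dir='/tmp/my_model',    # checkpoints, graph and summaries go here
    tf_random_seed=42,            # reproducible initializers across reruns
    save_checkpoints_steps=1000,  # mutually exclusive with save_checkpoints_secs
    keep_checkpoint_max=3,
    log_step_count_steps=100)

estimator = tf.estimator.LinearRegressor(
    feature_columns=[tf.feature_column.numeric_column('x')],
    config=config)

# replace() returns a new RunConfig; only the whitelisted properties may change.
tuned_config = config.replace(save_summary_steps=50)
```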

replace

+ +View source + + + +Returns a new instance of `RunConfig` replacing specified properties. + +Only the properties in the following list are allowed to be replaced: + + - `model_dir`, + - `tf_random_seed`, + - `save_summary_steps`, + - `save_checkpoints_steps`, + - `save_checkpoints_secs`, + - `session_config`, + - `keep_checkpoint_max`, + - `keep_checkpoint_every_n_hours`, + - `log_step_count_steps`, + - `train_distribute`, + - `device_fn`, + - `protocol`. + - `eval_distribute`, + - `experimental_distribute`, + - `experimental_max_worker_delay_secs`, + +In addition, either `save_checkpoints_steps` or `save_checkpoints_secs` +can be set (should not be both). + + + + + + + + + + +
Args
+`**kwargs` + +keyword named properties with new values. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If any property name in `kwargs` does not exist or is not +allowed to be replaced, or both `save_checkpoints_steps` and +`save_checkpoints_secs` are set. +
+ + + + + + + + + + + +
Returns
+a new instance of `RunConfig`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/SecondOrStepTimer.md b/site/en/api_docs/python/tf/estimator/SecondOrStepTimer.md new file mode 100644 index 00000000000..22e1b4104f2 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/SecondOrStepTimer.md @@ -0,0 +1,172 @@ +description: Timer that triggers at most once every N seconds or once every N steps. + +
+ + + + + + + +
+ +# tf.estimator.SecondOrStepTimer + + + + + + + + + +Timer that triggers at most once every N seconds or once every N steps. + + + + + + + + + +This symbol is also exported to v2 in tf.estimator namespace. See +https://github.com/tensorflow/estimator/blob/master/tensorflow_estimator/python/estimator/hooks/basic_session_run_hooks.py + +## Methods + +
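As a hedged sketch, the timer is normally driven by a hook's step counter; a simulated loop shows the triggering behaviour:

```python
import tensorflow as tf

# Trigger at most once every 100 steps (every_secs works analogously).
timer = tf.estimator.SecondOrStepTimer(every_steps=100)

for step in range(1, 501):
  if timer.should_trigger_for_step(step):
    elapsed_time, elapsed_steps = timer.update_last_triggered_step(step)
    # Both elapsed values are None on the very first trigger.
    print('triggered at step', step, elapsed_time, elapsed_steps)
```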

last_triggered_step

+ +View source + + + +Returns the last triggered time step or None if never triggered. + + +

reset

+ +View source + + + +Resets the timer. + + +

should_trigger_for_step

+ +View source + + + +Return true if the timer should trigger for the specified step. + + + + + + + + + + + +
Args
+`step` + +Training step to trigger on. +
+ + + + + + + + + + + +
Returns
+True if the difference between the current time and the time of the last +trigger exceeds `every_secs`, or if the difference between the current +step and the last triggered step exceeds `every_steps`. False otherwise. +
+ + + +

update_last_triggered_step

+ +View source + + + +Update the last triggered time and step number. + + + + + + + + + + + +
Args
+`step` + +The current step. +
+ + + + + + + + + + + +
Returns
+A pair `(elapsed_time, elapsed_steps)`, where `elapsed_time` is the number +of seconds between the current trigger and the last one (a float), and +`elapsed_steps` is the number of steps between the current trigger and +the last one. Both values will be set to `None` on the first trigger. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/SessionRunArgs.md b/site/en/api_docs/python/tf/estimator/SessionRunArgs.md new file mode 100644 index 00000000000..3445001b6cf --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/SessionRunArgs.md @@ -0,0 +1,128 @@ +description: Represents arguments to be added to a Session.run() call. + +
+ + + + + + +
+ +# tf.estimator.SessionRunArgs + + + + + + + + + +Represents arguments to be added to a `Session.run()` call. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fetches` + +Exactly like the 'fetches' argument to Session.Run(). +Can be a single tensor or op, a list of 'fetches' or a dictionary +of fetches. For example: +fetches = global_step_tensor +fetches = [train_op, summary_op, global_step_tensor] +fetches = {'step': global_step_tensor, 'summ': summary_op} +Note that this can recurse as expected: +fetches = {'step': global_step_tensor, +'ops': [train_op, check_nan_op]} +
+`feed_dict` + +Exactly like the `feed_dict` argument to `Session.Run()` +
+`options` + +Exactly like the `options` argument to `Session.run()`, i.e., a +config_pb2.RunOptions proto. +
+ + + + + + + + + + + + + + + + + + + + +
+`fetches` + + +
+`feed_dict` + + +
+`options` + + +
+ + + +## Class Variables + +* `feed_dict` +* `fetches` +* `options` diff --git a/site/en/api_docs/python/tf/estimator/SessionRunContext.md b/site/en/api_docs/python/tf/estimator/SessionRunContext.md new file mode 100644 index 00000000000..ff07cf952c0 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/SessionRunContext.md @@ -0,0 +1,109 @@ +description: Provides information about the session.run() call being made. + +
+ + + + +
+ +# tf.estimator.SessionRunContext + + + + + + + + + +Provides information about the `session.run()` call being made. + + + + + + + + + +Provides information about original request to `Session.Run()` function. +SessionRunHook objects can stop the loop by calling `request_stop()` of +`run_context`. In the future we may use this object to add more information +about run without changing the Hook API. + + + + + + + + + + + + + + + + + + +
+`original_args` + +A `SessionRunArgs` object holding the original arguments of `run()`. + +If user called `MonitoredSession.run(fetches=a, feed_dict=b)`, then this +field is equal to SessionRunArgs(a, b). +
+`session` + +A TensorFlow session object which will execute the `run`. +
+`stop_requested` + +Returns whether a stop is requested or not. + +If true, `MonitoredSession` stops iterations. +Returns: +A `bool` +
+ + + +## Methods + +
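As a hedged illustration, a hook usually calls `request_stop()` from `after_run` on the context it receives; the hook below is hypothetical:

```python
import tensorflow as tf

class EarlyStopHook(tf.estimator.SessionRunHook):
  """Hypothetical hook that stops after a fixed number of run() calls."""

  def __init__(self, max_calls=1000):
    self._max_calls = max_calls
    self._calls = 0

  def after_run(self, run_context, run_values):
    self._calls += 1
    if self._calls >= self._max_calls and not run_context.stop_requested:
      run_context.request_stop()  # MonitoredSession then ends the loop
```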

request_stop

+ +View source + + + +Sets stop requested field. + +Hooks can use this function to request stop of iterations. +`MonitoredSession` checks whether this is called or not. + + + diff --git a/site/en/api_docs/python/tf/estimator/SessionRunHook.md b/site/en/api_docs/python/tf/estimator/SessionRunHook.md new file mode 100644 index 00000000000..b2a741cdf97 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/SessionRunHook.md @@ -0,0 +1,244 @@ +description: Hook to extend calls to MonitoredSession.run(). + +
+ + + + + + + +
+ +# tf.estimator.SessionRunHook + + + + + + + + + +Hook to extend calls to MonitoredSession.run(). + + + + + + +## Methods + +
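A hedged sketch of a custom hook that fetches the global step on every `run()` call and logs it periodically (the logging period and the use of `tf.compat.v1` helpers are illustrative choices):

```python
import tensorflow as tf

class StepLoggerHook(tf.estimator.SessionRunHook):
  """Hypothetical hook: logs the global step every `log_every` iterations."""

  def __init__(self, log_every=100):
    self._log_every = log_every

  def begin(self):
    # The graph is still mutable here; grab (or create) the global step tensor.
    self._global_step = tf.compat.v1.train.get_or_create_global_step()

  def before_run(self, run_context):
    # Piggy-back the global step onto the upcoming session.run() call.
    return tf.estimator.SessionRunArgs(fetches=self._global_step)

  def after_run(self, run_context, run_values):
    step = run_values.results
    if step % self._log_every == 0:
      tf.compat.v1.logging.info('global step = %d', step)

# Typically attached via estimator.train(..., hooks=[StepLoggerHook()]).
```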

after_create_session

+ +View source + + + +Called when a new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +differs in two essential ways from the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+ +View source + + + +Called after each call to run(). + +The `run_values` argument contains the results of the ops/tensors requested by +`before_run()`. + +The `run_context` argument is the same one sent to the `before_run` call. +`run_context.request_stop()` can be called to stop the iteration. + +If `session.run()` raises any exception, then `after_run()` is not called. + + + + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the `run()` call. +The run args you return can also contain feeds to be added to the `run()` +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +ops/tensors and the TensorFlow Session. + +At this point the graph is finalized and you cannot add ops. + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +cannot modify the graph anymore. A second call of `begin()` on the same +graph should not change the graph. + +

end

+ +View source + + + +Called at the end of the session. + +The `session` argument can be used in case the hook wants to run final ops, +such as saving a last checkpoint. + +If `session.run()` raises an exception other than OutOfRangeError or +StopIteration, then `end()` is not called. +Note the difference between `end()` and `after_run()` behavior when +`session.run()` raises OutOfRangeError or StopIteration: in that case +`end()` is called but `after_run()` is not. + + + + + + + + + + 
Args
+`session` + +A TensorFlow Session that will soon be closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/SessionRunValues.md b/site/en/api_docs/python/tf/estimator/SessionRunValues.md new file mode 100644 index 00000000000..8e5fa8cb9c6 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/SessionRunValues.md @@ -0,0 +1,129 @@ +description: Contains the results of Session.run(). + +
+ + + + + + +
+ +# tf.estimator.SessionRunValues + + + + + + + + + +Contains the results of `Session.run()`. + + + + + + + + + +In the future we may use this object to add more information about result of +run without changing the Hook API. + + + + + + + + + + + + + + + + +
+`results` + +The return values from `Session.run()` corresponding to the fetches +attribute returned in the RunArgs. Note that this has the same shape as +the RunArgs fetches. For example: +fetches = global_step_tensor +=> results = nparray(int) +fetches = [train_op, summary_op, global_step_tensor] +=> results = [None, nparray(string), nparray(int)] +fetches = {'step': global_step_tensor, 'summ': summary_op} +=> results = {'step': nparray(int), 'summ': nparray(string)} +
+`options` + +`RunOptions` from the `Session.run()` call. +
+`run_metadata` + +`RunMetadata` from the `Session.run()` call. +
+ + + + + + + + + + + + + + + + + + + + +
+`results` + + +
+`options` + + +
+`run_metadata` + + +
+ + + +## Class Variables + +* `options` +* `results` +* `run_metadata` diff --git a/site/en/api_docs/python/tf/estimator/StepCounterHook.md b/site/en/api_docs/python/tf/estimator/StepCounterHook.md new file mode 100644 index 00000000000..5131e11f2ab --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/StepCounterHook.md @@ -0,0 +1,255 @@ +description: Hook that counts steps per second. + +
+ + + + + + + + +
+ +# tf.estimator.StepCounterHook + + + + + + + + + +Hook that counts steps per second. + +Inherits From: [`SessionRunHook`](../../tf/estimator/SessionRunHook.md) + + + + + + + + + + +## Methods + +
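A brief, hedged usage sketch; the estimator, feature column, and `my_input_fn` are placeholders:

```python
import tensorflow as tf

estimator = tf.estimator.LinearRegressor(
    feature_columns=[tf.feature_column.numeric_column('x')])

steps_hook = tf.estimator.StepCounterHook(
    every_n_steps=100,               # how often to report steps per second
    output_dir='/tmp/step_counter')  # summaries go here if no writer is given

estimator.train(input_fn=my_input_fn, hooks=[steps_hook], steps=1000)
```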

after_create_session

+ +View source + + + +Called when a new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +differs in two essential ways from the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+ +View source + + + +Called after each call to run(). + +The `run_values` argument contains the results of the ops/tensors requested by +`before_run()`. + +The `run_context` argument is the same one sent to the `before_run` call. +`run_context.request_stop()` can be called to stop the iteration. + +If `session.run()` raises any exception, then `after_run()` is not called. + + + + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the `run()` call. +The run args you return can also contain feeds to be added to the `run()` +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +ops/tensors and the TensorFlow Session. + +At this point the graph is finalized and you cannot add ops. + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +cannot modify the graph anymore. A second call of `begin()` on the same +graph should not change the graph. + +

end

+ +View source + + + +Called at the end of the session. + +The `session` argument can be used in case the hook wants to run final ops, +such as saving a last checkpoint. + +If `session.run()` raises an exception other than OutOfRangeError or +StopIteration, then `end()` is not called. +Note the difference between `end()` and `after_run()` behavior when +`session.run()` raises OutOfRangeError or StopIteration: in that case +`end()` is called but `after_run()` is not. + + + + + + + + + + 
Args
+`session` + +A TensorFlow Session that will soon be closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/StopAtStepHook.md b/site/en/api_docs/python/tf/estimator/StopAtStepHook.md new file mode 100644 index 00000000000..5d326836941 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/StopAtStepHook.md @@ -0,0 +1,296 @@ +description: Hook that requests stop at a specified step. + +
+ + + + + + + + +
+ +# tf.estimator.StopAtStepHook + + + + + + + + + +Hook that requests stop at a specified step. + +Inherits From: [`SessionRunHook`](../../tf/estimator/SessionRunHook.md) + + + + + + + + + + + + + + + + + + + + + + +
+`num_steps` + +Number of steps to execute. +
+`last_step` + +Step after which to stop. +
+ + + + + + + + + + + + +
+`ValueError` + +If one of the arguments is invalid. +
+ + + +## Methods + +
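A brief, hedged example; only one of `num_steps` or `last_step` should be given, and `my_input_fn` is a placeholder:

```python
import tensorflow as tf

# Stop training once the global step reaches 10,000, regardless of how many
# steps this particular call to train() has executed so far.
stop_hook = tf.estimator.StopAtStepHook(last_step=10000)

# Alternatively: stop after 1,000 additional steps from where training resumes.
# stop_hook = tf.estimator.StopAtStepHook(num_steps=1000)

estimator = tf.estimator.LinearRegressor(
    feature_columns=[tf.feature_column.numeric_column('x')])
estimator.train(input_fn=my_input_fn, hooks=[stop_hook])
```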

after_create_session

+ +View source + + + +Called when a new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +differs in two essential ways from the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+ +View source + + + +Called after each call to run(). + +The `run_values` argument contains the results of the ops/tensors requested by +`before_run()`. + +The `run_context` argument is the same one sent to the `before_run` call. +`run_context.request_stop()` can be called to stop the iteration. + +If `session.run()` raises any exception, then `after_run()` is not called. + + + + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the `run()` call. +The run args you return can also contain feeds to be added to the `run()` +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +ops/tensors and the TensorFlow Session. + +At this point the graph is finalized and you cannot add ops. + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +cannot modify the graph anymore. A second call of `begin()` on the same +graph should not change the graph. + +

end

+ +View source + + + +Called at the end of the session. + +The `session` argument can be used in case the hook wants to run final ops, +such as saving a last checkpoint. + +If `session.run()` raises an exception other than OutOfRangeError or +StopIteration, then `end()` is not called. +Note the difference between `end()` and `after_run()` behavior when +`session.run()` raises OutOfRangeError or StopIteration: in that case +`end()` is called but `after_run()` is not. + + + + + + + + + + 
Args
+`session` + +A TensorFlow Session that will soon be closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/SummarySaverHook.md b/site/en/api_docs/python/tf/estimator/SummarySaverHook.md new file mode 100644 index 00000000000..5298e8a0a41 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/SummarySaverHook.md @@ -0,0 +1,332 @@ +description: Saves summaries every N steps. + +
+ + + + + + + + +
+ +# tf.estimator.SummarySaverHook + + + + + + + + + +Saves summaries every N steps. + +Inherits From: [`SessionRunHook`](../../tf/estimator/SessionRunHook.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`save_steps` + +`int`, save summaries every N steps. Exactly one of +`save_secs` and `save_steps` should be set. +
+`save_secs` + +`int`, save summaries every N seconds. +
+`output_dir` + +`string`, the directory to save the summaries to. Only used if +no `summary_writer` is supplied. +
+`summary_writer` + +`SummaryWriter`. If `None` and an `output_dir` was passed, +one will be created accordingly. +
+`scaffold` + +`Scaffold` to get summary_op if it's not provided. +
+`summary_op` + +`Tensor` of type `string` containing the serialized `Summary` +protocol buffer or a list of `Tensor`. They are most likely an output by +TF summary methods like tf.compat.v1.summary.scalar or +tf.compat.v1.summary.merge_all. It can be passed in as one tensor; if +more than one, they must be passed in as a list. +
+ + + + + + + + + + + + +
+`ValueError` + +Exactly one of scaffold or summary_op should be set. +
+ + + +## Methods + +
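A hedged sketch of attaching the hook inside a custom `model_fn`, with a `summary_op` built from `tf.compat.v1.summary` calls (the model, loss, and directories are illustrative):

```python
import tensorflow as tf

def my_model_fn(features, labels, mode):
  logits = tf.compat.v1.layers.dense(features['x'], 1)
  loss = tf.compat.v1.losses.mean_squared_error(labels, logits)
  tf.compat.v1.summary.scalar('mse_loss', loss)

  summary_hook = tf.estimator.SummarySaverHook(
      save_steps=50,                               # one of save_steps/save_secs
      output_dir='/tmp/summaries',
      summary_op=tf.compat.v1.summary.merge_all())

  optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.01)
  train_op = optimizer.minimize(
      loss, global_step=tf.compat.v1.train.get_global_step())
  return tf.estimator.EstimatorSpec(
      mode, loss=loss, train_op=train_op, training_hooks=[summary_hook])
```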

after_create_session

+ +View source + + + +Called when a new TensorFlow session is created. + +This is called to signal the hooks that a new session has been created. This +differs in two essential ways from the situation in which `begin` is called: + +* When this is called, the graph is finalized and ops can no longer be added + to the graph. +* This method will also be called as a result of recovering a wrapped + session, not only at the beginning of the overall session. + + + + + + + + + + + + + +
Args
+`session` + +A TensorFlow Session that has been created. +
+`coord` + +A Coordinator object which keeps track of all threads. +
+ + + +

after_run

+ +View source + + + +Called after each call to run(). + +The `run_values` argument contains the results of the ops/tensors requested by +`before_run()`. + +The `run_context` argument is the same one sent to the `before_run` call. +`run_context.request_stop()` can be called to stop the iteration. + +If `session.run()` raises any exception, then `after_run()` is not called. + + + + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+`run_values` + +A SessionRunValues object. +
+ + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the `run()` call. +The run args you return can also contain feeds to be added to the `run()` +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +ops/tensors and the TensorFlow Session. + +At this point the graph is finalized and you cannot add ops. + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Called once before using the session. + +When called, the default graph is the one that will be launched in the +session. The hook can modify the graph by adding new operations to it. +After the `begin()` call the graph will be finalized and the other callbacks +cannot modify the graph anymore. A second call of `begin()` on the same +graph should not change the graph. + +

end

+ +View source + + + +Called at the end of the session. + +The `session` argument can be used in case the hook wants to run final ops, +such as saving a last checkpoint. + +If `session.run()` raises an exception other than OutOfRangeError or +StopIteration, then `end()` is not called. +Note the difference between `end()` and `after_run()` behavior when +`session.run()` raises OutOfRangeError or StopIteration: in that case +`end()` is called but `after_run()` is not. + + + + + + + + + + 
Args
+`session` + +A TensorFlow Session that will soon be closed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/TrainSpec.md b/site/en/api_docs/python/tf/estimator/TrainSpec.md new file mode 100644 index 00000000000..4fd7a003321 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/TrainSpec.md @@ -0,0 +1,158 @@ +description: Configuration for the "train" part for the train_and_evaluate call. + +
+ + + + + + +
+ +# tf.estimator.TrainSpec + + + + + + + + + +Configuration for the "train" part for the `train_and_evaluate` call. + + + + + + + + + +`TrainSpec` determines the input data for the training, as well as the +duration. Optional hooks run at various stages of training. + + + + + + + + + + + + + + + + +
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A 'tf.data.Dataset' object: Outputs of `Dataset` object must be a +tuple (features, labels) with same constraints as below. +* A tuple (features, labels): Where features is a `Tensor` or a +dictionary of string feature name to `Tensor` and labels is a +`Tensor` or a dictionary of string label name to `Tensor`. +
+`max_steps` + +Int. Positive number of total steps for which to train model. +If `None`, train forever. The training `input_fn` is not expected to +generate `OutOfRangeError` or `StopIteration` exceptions. See the +`train_and_evaluate` stop condition section for details. +
+`hooks` + +Iterable of `tf.train.SessionRunHook` objects to run on all workers +(including chief) during training. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If any of the input arguments is invalid. +
+`TypeError` + +If any of the arguments is not of the expected type. +
+ + + + + + + + + + + + + + + + + + + + +
+`input_fn` + + +
+`max_steps` + + +
+`hooks` + + +
+ + + +## Class Variables + +* `hooks` +* `input_fn` +* `max_steps` diff --git a/site/en/api_docs/python/tf/estimator/VocabInfo.md b/site/en/api_docs/python/tf/estimator/VocabInfo.md new file mode 100644 index 00000000000..4f47e45f940 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/VocabInfo.md @@ -0,0 +1,187 @@ +description: Vocabulary information for warm-starting. + +
+ + + + + + + + + + +
+ +# tf.estimator.VocabInfo + + + + + + + + + +Vocabulary information for warm-starting. + + + + + + + + + + See tf.estimator.WarmStartSettings for examples of using + VocabInfo to warm-start. + + Args: + new_vocab: [Required] A path to the new vocabulary file (used with the model + to be trained). + new_vocab_size: [Required] An integer indicating how many entries of the new + vocabulary will used in training. + num_oov_buckets: [Required] An integer indicating how many OOV buckets are + associated with the vocabulary. + old_vocab: [Required] A path to the old vocabulary file (used with the + checkpoint to be warm-started from). + old_vocab_size: [Optional] An integer indicating how many entries of the old + vocabulary were used in the creation of the checkpoint. If not provided, + the entire old vocabulary will be used. + backup_initializer: [Optional] A variable initializer used for variables + corresponding to new vocabulary entries and OOV. If not provided, these + entries will be zero-initialized. + axis: [Optional] Denotes what axis the vocabulary corresponds to. The + default, 0, corresponds to the most common use case (embeddings or + linear weights for binary classification / regression). An axis of 1 + could be used for warm-starting output layers with class vocabularies. + + Returns: + A `VocabInfo` which represents the vocabulary information for warm-starting. + + Raises: + ValueError: `axis` is neither 0 or 1. + + Example Usage: +```python + embeddings_vocab_info = tf.VocabInfo( + new_vocab='embeddings_vocab', + new_vocab_size=100, + num_oov_buckets=1, + old_vocab='pretrained_embeddings_vocab', + old_vocab_size=10000, + backup_initializer=tf.compat.v1.truncated_normal_initializer( + mean=0.0, stddev=(1 / math.sqrt(embedding_dim))), + axis=0) + + softmax_output_layer_kernel_vocab_info = tf.VocabInfo( + new_vocab='class_vocab', + new_vocab_size=5, + num_oov_buckets=0, # No OOV for classes. + old_vocab='old_class_vocab', + old_vocab_size=8, + backup_initializer=tf.compat.v1.glorot_uniform_initializer(), + axis=1) + + softmax_output_layer_bias_vocab_info = tf.VocabInfo( + new_vocab='class_vocab', + new_vocab_size=5, + num_oov_buckets=0, # No OOV for classes. + old_vocab='old_class_vocab', + old_vocab_size=8, + backup_initializer=tf.compat.v1.zeros_initializer(), + axis=0) + + #Currently, only axis=0 and axis=1 are supported. + ``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`new_vocab` + + +
+`new_vocab_size` + + +
+`num_oov_buckets` + + +
+`old_vocab` + + +
+`old_vocab_size` + + +
+`backup_initializer` + + +
+`axis` + + +
+ + + +## Class Variables + +* `axis` +* `backup_initializer` +* `new_vocab` +* `new_vocab_size` +* `num_oov_buckets` +* `old_vocab` +* `old_vocab_size` diff --git a/site/en/api_docs/python/tf/estimator/WarmStartSettings.md b/site/en/api_docs/python/tf/estimator/WarmStartSettings.md new file mode 100644 index 00000000000..7423008b44b --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/WarmStartSettings.md @@ -0,0 +1,257 @@ +description: Settings for warm-starting in tf.estimator.Estimators. + +
+ + + + + + + +
+ +# tf.estimator.WarmStartSettings + + + + + + + + + +Settings for warm-starting in `tf.estimator.Estimators`. + + + + + + + + + +Example Use with canned tf.estimator.DNNEstimator: + +``` +emb_vocab_file = tf.feature_column.embedding_column( + tf.feature_column.categorical_column_with_vocabulary_file( + "sc_vocab_file", "new_vocab.txt", vocab_size=100), + dimension=8) +emb_vocab_list = tf.feature_column.embedding_column( + tf.feature_column.categorical_column_with_vocabulary_list( + "sc_vocab_list", vocabulary_list=["a", "b"]), + dimension=8) +estimator = tf.estimator.DNNClassifier( + hidden_units=[128, 64], feature_columns=[emb_vocab_file, emb_vocab_list], + warm_start_from=ws) +``` + +where `ws` could be defined as: + +Warm-start all weights in the model (input layer and hidden weights). +Either the directory or a specific checkpoint can be provided (in the case +of the former, the latest checkpoint will be used): + +``` +ws = WarmStartSettings(ckpt_to_initialize_from="/tmp") +ws = WarmStartSettings(ckpt_to_initialize_from="/tmp/model-1000") +``` + +Warm-start only the embeddings (input layer): + +``` +ws = WarmStartSettings(ckpt_to_initialize_from="/tmp", + vars_to_warm_start=".*input_layer.*") +``` + +Warm-start all weights but the embedding parameters corresponding to +`sc_vocab_file` have a different vocab from the one used in the current +model: + +``` +vocab_info = tf.estimator.VocabInfo( + new_vocab=sc_vocab_file.vocabulary_file, + new_vocab_size=sc_vocab_file.vocabulary_size, + num_oov_buckets=sc_vocab_file.num_oov_buckets, + old_vocab="old_vocab.txt" +) +ws = WarmStartSettings( + ckpt_to_initialize_from="/tmp", + var_name_to_vocab_info={ + "input_layer/sc_vocab_file_embedding/embedding_weights": vocab_info + }) +``` + +Warm-start only `sc_vocab_file` embeddings (and no other variables), which +have a different vocab from the one used in the current model: + +``` +vocab_info = tf.estimator.VocabInfo( + new_vocab=sc_vocab_file.vocabulary_file, + new_vocab_size=sc_vocab_file.vocabulary_size, + num_oov_buckets=sc_vocab_file.num_oov_buckets, + old_vocab="old_vocab.txt" +) +ws = WarmStartSettings( + ckpt_to_initialize_from="/tmp", + vars_to_warm_start=None, + var_name_to_vocab_info={ + "input_layer/sc_vocab_file_embedding/embedding_weights": vocab_info + }) +``` + +Warm-start all weights but the parameters corresponding to `sc_vocab_file` +have a different vocab from the one used in current checkpoint, and only +100 of those entries were used: + +``` +vocab_info = tf.estimator.VocabInfo( + new_vocab=sc_vocab_file.vocabulary_file, + new_vocab_size=sc_vocab_file.vocabulary_size, + num_oov_buckets=sc_vocab_file.num_oov_buckets, + old_vocab="old_vocab.txt", + old_vocab_size=100 +) +ws = WarmStartSettings( + ckpt_to_initialize_from="/tmp", + var_name_to_vocab_info={ + "input_layer/sc_vocab_file_embedding/embedding_weights": vocab_info + }) +``` + +Warm-start all weights but the parameters corresponding to `sc_vocab_file` +have a different vocab from the one used in current checkpoint and the +parameters corresponding to `sc_vocab_list` have a different name from the +current checkpoint: + +``` +vocab_info = tf.estimator.VocabInfo( + new_vocab=sc_vocab_file.vocabulary_file, + new_vocab_size=sc_vocab_file.vocabulary_size, + num_oov_buckets=sc_vocab_file.num_oov_buckets, + old_vocab="old_vocab.txt", + old_vocab_size=100 +) +ws = WarmStartSettings( + ckpt_to_initialize_from="/tmp", + var_name_to_vocab_info={ + "input_layer/sc_vocab_file_embedding/embedding_weights": vocab_info + }, + 
var_name_to_prev_var_name={ + "input_layer/sc_vocab_list_embedding/embedding_weights": + "old_tensor_name" + }) +``` + +Warm-start all TRAINABLE variables: + +``` +ws = WarmStartSettings(ckpt_to_initialize_from="/tmp", + vars_to_warm_start=".*") +``` + +Warm-start all variables (including non-TRAINABLE): + +``` +ws = WarmStartSettings(ckpt_to_initialize_from="/tmp", + vars_to_warm_start=[".*"]) +``` + +Warm-start non-TRAINABLE variables "v1", "v1/Momentum", and "v2" but not +"v2/momentum": + +``` +ws = WarmStartSettings(ckpt_to_initialize_from="/tmp", + vars_to_warm_start=["v1", "v2[^/]"]) +``` + + + + + + + + + + + + + + + + + + + + + +
+`ckpt_to_initialize_from` + +[Required] A string specifying the directory with +checkpoint file(s) or path to checkpoint from which to warm-start the +model parameters. +
+`vars_to_warm_start` + +[Optional] One of the following: - A regular expression +(string) that captures which variables to warm-start (see +tf.compat.v1.get_collection). This expression will only consider +variables in the TRAINABLE_VARIABLES collection -- if you need to +warm-start non_TRAINABLE vars (such as optimizer accumulators or batch +norm statistics), please use the below option. - A list of strings, each a +regex scope provided to tf.compat.v1.get_collection with GLOBAL_VARIABLES +(please see tf.compat.v1.get_collection). For backwards compatibility +reasons, this is separate from the single-string argument type. - A list +of Variables to warm-start. If you do not have access to the `Variable` +objects at the call site, please use the above option. - `None`, in which +case only TRAINABLE variables specified in `var_name_to_vocab_info` will +be warm-started. Defaults to `'.*'`, which warm-starts all variables in +the TRAINABLE_VARIABLES collection. Note that this excludes variables +such as accumulators and moving statistics from batch norm. +
+`var_name_to_vocab_info` + +[Optional] Dict of variable names (strings) to +tf.estimator.VocabInfo. The variable names should be "full" variables, +not the names of the partitions. If not explicitly provided, the variable +is assumed to have no (changes to) vocabulary. +
+`var_name_to_prev_var_name` + +[Optional] Dict of variable names (strings) to +name of the previously-trained variable in `ckpt_to_initialize_from`. If +not explicitly provided, the name of the variable is assumed to be same +between previous checkpoint and current model. Note that this has no +effect on the set of variables that is warm-started, and only controls +name mapping (use `vars_to_warm_start` for controlling what variables to +warm-start). +
+ + + +## Class Variables + +* `ckpt_to_initialize_from` +* `var_name_to_prev_var_name` +* `var_name_to_vocab_info` +* `vars_to_warm_start` diff --git a/site/en/api_docs/python/tf/estimator/add_metrics.md b/site/en/api_docs/python/tf/estimator/add_metrics.md new file mode 100644 index 00000000000..5815a9dcd20 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/add_metrics.md @@ -0,0 +1,126 @@ +description: Creates a new tf.estimator.Estimator which has given metrics. + +
+ + +
+ +# tf.estimator.add_metrics + + + + + + + + + +Creates a new tf.estimator.Estimator which has given metrics. + + + + + + + + + + +#### Example: + + + +```python + def my_auc(labels, predictions): + auc_metric = tf.keras.metrics.AUC(name="my_auc") + auc_metric.update_state(y_true=labels, y_pred=predictions['logistic']) + return {'auc': auc_metric} + + estimator = tf.estimator.DNNClassifier(...) + estimator = tf.estimator.add_metrics(estimator, my_auc) + estimator.train(...) + estimator.evaluate(...) +``` +Example usage of custom metric which uses features: + +```python + def my_auc(labels, predictions, features): + auc_metric = tf.keras.metrics.AUC(name="my_auc") + auc_metric.update_state(y_true=labels, y_pred=predictions['logistic'], + sample_weight=features['weight']) + return {'auc': auc_metric} + + estimator = tf.estimator.DNNClassifier(...) + estimator = tf.estimator.add_metrics(estimator, my_auc) + estimator.train(...) + estimator.evaluate(...) +``` + + + + + + + + + + + + + +
+`estimator` + +A tf.estimator.Estimator object. +
+`metric_fn` + +A function which should obey the following signature: +- Args: can only have following four arguments in any order: +* predictions: Predictions `Tensor` or dict of `Tensor` created by given +`estimator`. +* features: Input `dict` of `Tensor` objects created by `input_fn` which +is given to `estimator.evaluate` as an argument. +* labels: Labels `Tensor` or dict of `Tensor` created by `input_fn` +which is given to `estimator.evaluate` as an argument. +* config: config attribute of the `estimator`. +- Returns: Dict of metric results keyed by name. Final metrics are a +union of this and `estimator's` existing metrics. If there is a name +conflict between this and `estimator`s existing metrics, this will +override the existing one. The values of the dict are the results of +calling a metric function, namely a `(metric_tensor, update_op)` tuple. +
+ + + + + + + + + + + +
+A new tf.estimator.Estimator which has a union of original metrics with +given ones. +
+ diff --git a/site/en/api_docs/python/tf/estimator/classifier_parse_example_spec.md b/site/en/api_docs/python/tf/estimator/classifier_parse_example_spec.md new file mode 100644 index 00000000000..3dbdb66f4b5 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/classifier_parse_example_spec.md @@ -0,0 +1,223 @@ +description: Generates parsing spec for tf.parse_example to be used with classifiers. + +
+ + +
+ +# tf.estimator.classifier_parse_example_spec + + + + + + + + + +Generates parsing spec for tf.parse_example to be used with classifiers. + + + + + + + +If users keep data in tf.Example format, they need to call tf.parse_example +with a proper feature spec. There are two main things that this utility helps: + +* Users need to combine parsing spec of features with labels and weights + (if any) since they are all parsed from same tf.Example instance. This + utility combines these specs. +* It is difficult to map expected label by a classifier such as + `DNNClassifier` to corresponding tf.parse_example spec. This utility encodes + it by getting related information from users (key, dtype). + +Example output of parsing spec: + +```python +# Define features and transformations +feature_b = tf.feature_column.numeric_column(...) +feature_c_bucketized = tf.feature_column.bucketized_column( + tf.feature_column.numeric_column("feature_c"), ...) +feature_a_x_feature_c = tf.feature_column.crossed_column( + columns=["feature_a", feature_c_bucketized], ...) + +feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c] +parsing_spec = tf.estimator.classifier_parse_example_spec( + feature_columns, label_key='my-label', label_dtype=tf.string) + +# For the above example, classifier_parse_example_spec would return the dict: +assert parsing_spec == { + "feature_a": parsing_ops.VarLenFeature(tf.string), + "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32), + "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32) + "my-label" : parsing_ops.FixedLenFeature([1], dtype=tf.string) +} +``` + +Example usage with a classifier: + +```python +feature_columns = # define features via tf.feature_column +estimator = DNNClassifier( + n_classes=1000, + feature_columns=feature_columns, + weight_column='example-weight', + label_vocabulary=['photos', 'keep', ...], + hidden_units=[256, 64, 16]) +# This label configuration tells the classifier the following: +# * weights are retrieved with key 'example-weight' +# * label is string and can be one of the following ['photos', 'keep', ...] +# * integer id for label 'photos' is 0, 'keep' is 1, ... + + +# Input builders +def input_fn_train(): # Returns a tuple of features and labels. + features = tf.contrib.learn.read_keyed_batch_features( + file_pattern=train_files, + batch_size=batch_size, + # creates parsing configuration for tf.parse_example + features=tf.estimator.classifier_parse_example_spec( + feature_columns, + label_key='my-label', + label_dtype=tf.string, + weight_column='example-weight'), + reader=tf.RecordIOReader) + labels = features.pop('my-label') + return features, labels + +estimator.train(input_fn=input_fn_train) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`feature_columns` + +An iterable containing all feature columns. All items +should be instances of classes derived from `FeatureColumn`. +
+`label_key` + +A string identifying the label. It means tf.Example stores labels +with this key. +
+`label_dtype` + +A `tf.dtype` identifies the type of labels. By default it is +tf.int64. If user defines a `label_vocabulary`, this should be set as +tf.string. tf.float32 labels are only supported for binary +classification. +
+`label_default` + +Used as the label if `label_key` does not exist in the given +tf.Example. An example usage: let's say `label_key` is 'clicked' and +tf.Example contains clicked data only for positive examples in the following +format `key:clicked, value:1`. This means that if there is no data with +key 'clicked' it should count as a negative example by setting +`label_default=0`. The type of this value should be compatible with +`label_dtype`. +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. If it is a string, it is +used as a key to fetch weight tensor from the `features`. If it is a +`NumericColumn`, raw tensor is fetched by key `weight_column.key`, then +weight_column.normalizer_fn is applied on it to get weight tensor. +
+ + + + + + + + + + + +
+A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature` +value. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If label is used in `feature_columns`. +
+`ValueError` + +If weight_column is used in `feature_columns`. +
+`ValueError` + +If any of the given `feature_columns` is not a `_FeatureColumn` +instance. +
+`ValueError` + +If `weight_column` is not a `NumericColumn` instance. +
+`ValueError` + +If `label_key` is None. +
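
The usage example above relies on the older `tf.contrib.learn` input utilities. As a rough modern equivalent, the returned spec can be fed directly to tf.io.parse_example inside a tf.data pipeline. The sketch below is illustrative only: the TFRecord path, feature name, and label/weight keys are hypothetical.

```python
import tensorflow as tf

feature_b = tf.feature_column.numeric_column('feature_b')
feature_columns = [feature_b]

parsing_spec = tf.estimator.classifier_parse_example_spec(
    feature_columns,
    label_key='my-label',
    label_dtype=tf.string,
    weight_column='example-weight')

def _split_label(parsed):
  # Separate the label from the remaining parsed features.
  label = parsed.pop('my-label')
  return parsed, label

def input_fn_train():
  # Hypothetical TFRecord file containing serialized tf.Example protos.
  dataset = tf.data.TFRecordDataset(['/path/to/train-00000.tfrecord'])
  dataset = dataset.batch(128)
  # Parse each batch with the combined feature/label/weight spec.
  dataset = dataset.map(
      lambda serialized: tf.io.parse_example(serialized, parsing_spec))
  return dataset.map(_split_label)
```
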
+ diff --git a/site/en/api_docs/python/tf/estimator/experimental.md b/site/en/api_docs/python/tf/estimator/experimental.md new file mode 100644 index 00000000000..6213fec6214 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental.md @@ -0,0 +1,49 @@ +description: Public API for tf.estimator.experimental namespace. + +
+ + +
+ +# Module: tf.estimator.experimental + + + + + + + + + +Public API for tf.estimator.experimental namespace. + + + +## Classes + +[`class InMemoryEvaluatorHook`](../../tf/estimator/experimental/InMemoryEvaluatorHook.md): Hook to run evaluation in training without a checkpoint. + +[`class LinearSDCA`](../../tf/estimator/experimental/LinearSDCA.md): Stochastic Dual Coordinate Ascent helper for linear estimators. + +[`class RNNClassifier`](../../tf/estimator/experimental/RNNClassifier.md): A classifier for TensorFlow RNN models. + +[`class RNNEstimator`](../../tf/estimator/experimental/RNNEstimator.md): An Estimator for TensorFlow RNN models with user-specified head. + +## Functions + +[`build_raw_supervised_input_receiver_fn(...)`](../../tf/estimator/experimental/build_raw_supervised_input_receiver_fn.md): Build a supervised_input_receiver_fn for raw features and labels. + +[`call_logit_fn(...)`](../../tf/estimator/experimental/call_logit_fn.md): Calls logit_fn (experimental). + +[`make_early_stopping_hook(...)`](../../tf/estimator/experimental/make_early_stopping_hook.md): Creates early-stopping hook. + +[`make_stop_at_checkpoint_step_hook(...)`](../../tf/estimator/experimental/make_stop_at_checkpoint_step_hook.md): Creates a proper StopAtCheckpointStepHook based on chief status. + +[`stop_if_higher_hook(...)`](../../tf/estimator/experimental/stop_if_higher_hook.md): Creates hook to stop if the given metric is higher than the threshold. + +[`stop_if_lower_hook(...)`](../../tf/estimator/experimental/stop_if_lower_hook.md): Creates hook to stop if the given metric is lower than the threshold. + +[`stop_if_no_decrease_hook(...)`](../../tf/estimator/experimental/stop_if_no_decrease_hook.md): Creates hook to stop if metric does not decrease within given max steps. + +[`stop_if_no_increase_hook(...)`](../../tf/estimator/experimental/stop_if_no_increase_hook.md): Creates hook to stop if metric does not increase within given max steps. + diff --git a/site/en/api_docs/python/tf/estimator/experimental/InMemoryEvaluatorHook.md b/site/en/api_docs/python/tf/estimator/experimental/InMemoryEvaluatorHook.md new file mode 100644 index 00000000000..be8c84befa0 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/InMemoryEvaluatorHook.md @@ -0,0 +1,281 @@ +description: Hook to run evaluation in training without a checkpoint. + +
+ + + + + + + + +
+ +# tf.estimator.experimental.InMemoryEvaluatorHook + + + + + + + + + +Hook to run evaluation in training without a checkpoint. + +Inherits From: [`SessionRunHook`](../../../tf/estimator/SessionRunHook.md) + + + + + + + + + + +#### Example: + + + +```python +def train_input_fn(): + ... + return train_dataset + +def eval_input_fn(): + ... + return eval_dataset + +estimator = tf.estimator.DNNClassifier(...) + +evaluator = tf.estimator.experimental.InMemoryEvaluatorHook( + estimator, eval_input_fn) +estimator.train(train_input_fn, hooks=[evaluator]) +``` + +Current limitations of this approach are: + +* It doesn't support multi-node distributed mode. +* It doesn't support saveable objects other than variables (such as boosted + tree support) +* It doesn't support custom saver logic (such as ExponentialMovingAverage + support) + + + + + + + + + + + + + + + + + + + + + + + + + +
+`estimator` + +A tf.estimator.Estimator instance to call evaluate. +
+`input_fn` + +Equivalent to the `input_fn` arg to `estimator.evaluate`. A +function that constructs the input data for evaluation. See [Creating +input functions]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A 'tf.data.Dataset' object: Outputs of `Dataset` object must be a +tuple (features, labels) with same constraints as below. +* A tuple (features, labels): Where `features` is a `Tensor` or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`steps` + +Equivalent to the `steps` arg to `estimator.evaluate`. Number of +steps for which to evaluate model. If `None`, evaluates until `input_fn` +raises an end-of-input exception. +
+`hooks` + +Equivalent to the `hooks` arg to `estimator.evaluate`. List of +`SessionRunHook` subclass instances. Used for callbacks inside the +evaluation call. +
+`name` + +Equivalent to the `name` arg to `estimator.evaluate`. Name of the +evaluation if user needs to run multiple evaluations on different data +sets, such as on training data vs test data. Metrics for different +evaluations are saved in separate folders, and appear separately in +tensorboard. +
+`every_n_iter` + +`int`, runs the evaluator once every N training iteration. +
+ + + + + + + + + + + + +
+`ValueError` + +If `every_n_iter` is non-positive or this is not single-machine +training. +
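
As a more concrete illustration of the constructor arguments above, here is a minimal, self-contained sketch; the feature name, toy data, and `every_n_iter` value are made up for illustration and are not the only valid configuration.

```python
import tensorflow as tf

# Hypothetical single numeric feature.
feature_x = tf.feature_column.numeric_column('x')

def train_input_fn():
  return tf.data.Dataset.from_tensor_slices(
      ({'x': [[1.0], [2.0], [3.0], [4.0]]}, [[0], [0], [1], [1]])
  ).repeat().batch(2)

def eval_input_fn():
  return tf.data.Dataset.from_tensor_slices(
      ({'x': [[1.5], [3.5]]}, [[0], [1]])
  ).batch(2)

estimator = tf.estimator.DNNClassifier(
    hidden_units=[8], feature_columns=[feature_x])

# Re-evaluate on the in-memory eval set every 50 training iterations,
# without writing or reloading a checkpoint.
evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator, eval_input_fn, every_n_iter=50)
estimator.train(train_input_fn, steps=200, hooks=[evaluator])
```
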
+ + + +## Methods + +

after_create_session

+ +View source + + + +Does first run which shows the eval metrics before training. + + +

after_run

+ +View source + + + +Runs evaluator. + + +

before_run

+ +View source + + + +Called before each call to run(). + +You can return from this call a `SessionRunArgs` object indicating ops or +tensors to add to the upcoming `run()` call. These ops/tensors will be run +together with the ops/tensors originally passed to the original run() call. +The run args you return can also contain feeds to be added to the run() +call. + +The `run_context` argument is a `SessionRunContext` that provides +information about the upcoming `run()` call: the originally requested +op/tensors, the TensorFlow Session. + +At this point graph is finalized and you can not add ops. + + + + + + + + + + +
Args
+`run_context` + +A `SessionRunContext` object. +
+ + + + + + + + + + + +
Returns
+None or a `SessionRunArgs` object. +
+ + + +

begin

+ +View source + + + +Build eval graph and restoring op. + + +

end

+ +View source + + + +Runs evaluator for final model. + + + + diff --git a/site/en/api_docs/python/tf/estimator/experimental/LinearSDCA.md b/site/en/api_docs/python/tf/estimator/experimental/LinearSDCA.md new file mode 100644 index 00000000000..5dceae1b205 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/LinearSDCA.md @@ -0,0 +1,180 @@ +description: Stochastic Dual Coordinate Ascent helper for linear estimators. + +
+ + + + +
+ +# tf.estimator.experimental.LinearSDCA + + + + + + + + + +Stochastic Dual Coordinate Ascent helper for linear estimators. + + + + + + + + + +Objects of this class are intended to be provided as the optimizer argument +(though LinearSDCA objects do not implement the `tf.train.Optimizer` +interface) +when creating tf.estimator.LinearClassifier or +tf.estimator.LinearRegressor. + +SDCA can only be used with `LinearClassifier` and `LinearRegressor` under the +following conditions: + + - Feature columns are of type V2. + - Multivalent categorical columns are not normalized. In other words the + `sparse_combiner` argument in the estimator constructor should be "sum". + - For classification: binary label. + - For regression: one-dimensional label. + +#### Example usage: + + + +```python +real_feature_column = numeric_column(...) +sparse_feature_column = categorical_column_with_hash_bucket(...) +linear_sdca = tf.estimator.experimental.LinearSDCA( + example_id_column='example_id', + num_loss_partitions=1, + num_table_shards=1, + symmetric_l2_regularization=2.0) +classifier = tf.estimator.LinearClassifier( + feature_columns=[real_feature_column, sparse_feature_column], + weight_column=..., + optimizer=linear_sdca) +classifier.train(input_fn_train, steps=50) +classifier.evaluate(input_fn=input_fn_eval) +``` + +Here the expectation is that the `input_fn_*` functions passed to train and +evaluate return a pair (dict, label_tensor) where dict has `example_id_column` +as `key` whose value is a `Tensor` of shape [batch_size] and dtype string. +num_loss_partitions defines sigma' in eq (11) of [3]. Convergence of (global) +loss is guaranteed if `num_loss_partitions` is larger or equal to the product +`(#concurrent train ops/per worker) x (#workers)`. Larger values for +`num_loss_partitions` lead to slower convergence. The recommended value for +`num_loss_partitions` in tf.estimator (where currently there is one process +per worker) is the number of workers running the train steps. It defaults to 1 +(single machine). +`num_table_shards` defines the number of shards for the internal state +table, typically set to match the number of parameter servers for large +data sets. + +The SDCA algorithm was originally introduced in [1] and it was followed by +the L1 proximal step [2], a distributed version [3] and adaptive sampling [4]. +[1] www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf +[2] https://arxiv.org/pdf/1309.2375.pdf +[3] https://arxiv.org/pdf/1502.03508.pdf +[4] https://arxiv.org/pdf/1502.08053.pdf +Details specific to this implementation are provided in: +https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/linear_optimizer/doc/sdca.ipynb + + + + + + + + + + + + + + + + + + + + + + + + + +
+`example_id_column` + +The column name containing the example ids. +
+`num_loss_partitions` + +Number of workers. +
+`num_table_shards` + +Number of shards of the internal state table, typically +set to match the number of parameter servers. +
+`symmetric_l1_regularization` + +A float value, must be greater than or equal +to zero. +
+`symmetric_l2_regularization` + +A float value, must be greater than zero and +should typically be greater than 1. +
+`adaptive` + +A boolean indicating whether to use adaptive sampling. +
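
To make the requirements above concrete, the following is a minimal sketch with toy in-memory data. The feature names and values are invented; the key point is that an `example_id` string feature of shape [batch_size] is supplied explicitly, as required by `example_id_column`.

```python
import tensorflow as tf

age = tf.feature_column.numeric_column('age')

def input_fn_train():
  features = {
      'example_id': ['id0', 'id1', 'id2', 'id3'],   # one unique id per example
      'age': [[18.0], [25.0], [30.0], [45.0]],
  }
  labels = [[0], [1], [1], [0]]                      # binary labels
  return tf.data.Dataset.from_tensor_slices(
      (features, labels)).repeat().batch(4)

linear_sdca = tf.estimator.experimental.LinearSDCA(
    example_id_column='example_id',
    symmetric_l2_regularization=1.0)

classifier = tf.estimator.LinearClassifier(
    feature_columns=[age],
    optimizer=linear_sdca,
    sparse_combiner='sum')   # multivalent columns must not be normalized
classifier.train(input_fn_train, steps=10)
```
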
+ + + +## Methods + +

get_train_step

+ +View source + + + +Returns the training operation of an SdcaModel optimizer. + + + + diff --git a/site/en/api_docs/python/tf/estimator/experimental/RNNClassifier.md b/site/en/api_docs/python/tf/estimator/experimental/RNNClassifier.md new file mode 100644 index 00000000000..2c65896432e --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/RNNClassifier.md @@ -0,0 +1,1270 @@ +description: A classifier for TensorFlow RNN models. + +
+ + + + + + + + + + + + + +
+ +# tf.estimator.experimental.RNNClassifier + + + + + + + + + +A classifier for TensorFlow RNN models. + +Inherits From: [`RNNEstimator`](../../../tf/estimator/experimental/RNNEstimator.md) + + + + + + + +Trains a recurrent neural network model to classify instances into one of +multiple classes. + +#### Example: + + + +```python +token_sequence = sequence_categorical_column_with_hash_bucket(...) +token_emb = embedding_column(categorical_column=token_sequence, ...) + +estimator = RNNClassifier( + sequence_feature_columns=[token_emb], + units=[32, 16], cell_type='lstm') + +# Input builders +def input_fn_train: # returns x, y + pass +estimator.train(input_fn=input_fn_train, steps=100) + +def input_fn_eval: # returns x, y + pass +metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10) +def input_fn_predict: # returns x, None + pass +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* if `weight_column` is not `None`, a feature with + `key=weight_column` whose value is a `Tensor`. +* for each `column` in `sequence_feature_columns`: + - a feature with `key=column.name` whose `value` is a `SparseTensor`. +* for each `column` in `context_feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss is calculated by using softmax cross entropy. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sequence_feature_columns` + +An iterable containing the `FeatureColumn`s that +represent sequential input. All items in the set should either be +sequence columns (e.g. `sequence_numeric_column`) or constructed from +one (e.g. `embedding_column` with `sequence_categorical_column_*` as +input). +
+`context_feature_columns` + +An iterable containing the `FeatureColumn`s for +contextual input. The data represented by these columns will be +replicated and given to the RNN at each timestep. These columns must be +instances of classes derived from `DenseColumn` such as +`numeric_column`, not the sequential variants. +
+`units` + +Iterable of integer number of hidden units per RNN layer. If set, +`cell_type` must also be specified and `rnn_cell_fn` must be `None`. +
+`cell_type` + +A class producing a RNN cell or a string specifying the cell +type. Supported strings are: `'simple_rnn'`, `'lstm'`, and `'gru'`. If +set, `units` must also be specified and `rnn_cell_fn` must be `None`. +
+`rnn_cell_fn` + +A function that returns a RNN cell instance that will be used +to construct the RNN. If set, `units` and `cell_type` cannot be set. +This is for advanced users who need additional customization beyond +`units` and `cell_type`. Note that tf.keras.layers.StackedRNNCells is +needed for stacked RNNs. +
+`return_sequences` + +A boolean indicating whether to return the last output +in the output sequence, or the full sequence. Note that if True, +`weight_column` must be None or a string. +
+`model_dir` + +Directory to save model parameters, graph, etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. +
+`n_classes` + +Number of label classes. Defaults to 2, namely binary +classification. Must be > 1. +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. If it is a string, it is +used as a key to fetch weight tensor from the `features`. If it is a +`NumericColumn`, raw tensor is fetched by key `weight_column.key`, then +weight_column.normalizer_fn is applied on it to get weight tensor. +
+`label_vocabulary` + +A list of strings represents possible label values. If +given, labels must be string type and have any value in +`label_vocabulary`. If it is not given, that means labels are already +encoded as integer or float within [0, 1] for `n_classes=2` and encoded +as integer values in {0, 1,..., n_classes-1} for `n_classes`>2 . Also +there will be errors if vocabulary is not provided and labels are +string. +
+`optimizer` + +An instance of `tf.Optimizer` or string specifying optimizer +type. Defaults to Adagrad optimizer. +
+`loss_reduction` + +One of tf.losses.Reduction except `NONE`. Describes how +to reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`. +
+`sequence_mask` + +A string with the name of the sequence mask tensor. If +`sequence_mask` is in the features dictionary, the provided tensor is +used, otherwise the sequence mask is computed from the length of +sequential features. The sequence mask is used in evaluation and +training mode to aggregate loss and metrics computation while excluding +padding steps. It is also added to the predictions dictionary in +prediction mode to indicate which steps are padding. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+ + + + + + + + + + + + +
+`ValueError` + +If `units`, `cell_type`, and `rnn_cell_fn` are not +compatible. +
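
A more self-contained variant of the example above, using a hypothetical 'tokens' sequence feature with toy data; the vocabulary, values, and hyperparameters are illustrative only.

```python
import tensorflow as tf

tokens = tf.feature_column.sequence_categorical_column_with_vocabulary_list(
    'tokens', vocabulary_list=['good', 'bad', 'great', 'movie'])
token_emb = tf.feature_column.embedding_column(tokens, dimension=8)

def input_fn_train():
  # Variable-length token sequences, fed as a SparseTensor as required above.
  ragged = tf.ragged.constant(
      [['good', 'movie'], ['bad'], ['great', 'great', 'movie']])
  features = {'tokens': ragged.to_sparse()}
  labels = tf.constant([1, 0, 1])
  return tf.data.Dataset.from_tensors((features, labels)).repeat()

classifier = tf.estimator.experimental.RNNClassifier(
    sequence_feature_columns=[token_emb],
    units=[32, 16],
    cell_type='lstm')
classifier.train(input_fn=input_fn_train, steps=20)
```
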
+ + + +#### Eager Compatibility +Estimators are not compatible with eager execution. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +
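
For instance, assuming a trained `classifier` (such as the one sketched earlier on this page) and an `eval_input_fn` that returns evaluation batches, the returned dict can be inspected directly; the metric keys shown are the ones documented above for canned classifiers.

```python
metrics = classifier.evaluate(input_fn=eval_input_fn, steps=10)
print('global step:', metrics['global_step'])
print('loss:', metrics['loss'])
print('accuracy:', metrics['accuracy'])
```
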

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will +be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +
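
A hedged sketch of a typical call, using a parsing serving input receiver built from a hypothetical feature spec; the spec keys must match what the model's feature columns expect, and `classifier` is assumed to be an already-trained estimator.

```python
import tensorflow as tf

# Hypothetical spec: a variable-length string feature named 'tokens'.
feature_spec = {'tokens': tf.io.VarLenFeature(tf.string)}
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

export_path = classifier.export_saved_model(
    '/tmp/rnn_classifier_export', serving_input_receiver_fn)
print('SavedModel written to:', export_path)
```
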

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of string, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +
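
For example, assuming a trained `classifier` and a hypothetical `input_fn_predict` that returns features only, the generator can be consumed as shown below; `class_ids` and `probabilities` are standard keys in the predictions of canned classifiers.

```python
for prediction in classifier.predict(input_fn=input_fn_predict):
  # Each element is a dict of numpy values for a single example.
  print(prediction['class_ids'], prediction['probabilities'])
```
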

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train +forever or train until `input_fn` generates the `tf.errors.OutOfRange` +error or `StopIteration` exception. `steps` works incrementally. If you +call two times `train(steps=10)` then training occurs in total 20 steps. +If `OutOfRange` or `StopIteration` occurs in the middle, training stops +before 20 steps. If you don't want to have incremental behavior please +set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iteration since first call did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps <= 0`. +
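
The difference between `steps` and `max_steps` described above can be summarized with a small sketch, assuming an existing `classifier` and `input_fn_train`.

```python
# `steps` is incremental: after these two calls the model has trained for
# roughly 20 steps in total (fewer if the input is exhausted first).
classifier.train(input_fn=input_fn_train, steps=10)
classifier.train(input_fn=input_fn_train, steps=10)

# `max_steps` is an absolute limit on the global step: the second call does
# nothing because 100 global steps already exist after the first call.
classifier.train(input_fn=input_fn_train, max_steps=100)
classifier.train(input_fn=input_fn_train, max_steps=100)
```
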
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/experimental/RNNEstimator.md b/site/en/api_docs/python/tf/estimator/experimental/RNNEstimator.md new file mode 100644 index 00000000000..8fc9acdf6b8 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/RNNEstimator.md @@ -0,0 +1,1228 @@ +description: An Estimator for TensorFlow RNN models with user-specified head. + +
+ + + + + + + + + + + + + +
+ +# tf.estimator.experimental.RNNEstimator + + + + + + + + + +An Estimator for TensorFlow RNN models with user-specified head. + +Inherits From: [`Estimator`](../../../tf/compat/v1/estimator/Estimator.md) + + + + + + + + +#### Example: + + + +```python +token_sequence = sequence_categorical_column_with_hash_bucket(...) +token_emb = embedding_column(categorical_column=token_sequence, ...) + +estimator = RNNEstimator( + head=tf.estimator.RegressionHead(), + sequence_feature_columns=[token_emb], + units=[32, 16], cell_type='lstm') + +# Or with custom RNN cell: +def rnn_cell_fn(_): + cells = [ tf.keras.layers.LSTMCell(size) for size in [32, 16] ] + return tf.keras.layers.StackedRNNCells(cells) + +estimator = RNNEstimator( + head=tf.estimator.RegressionHead(), + sequence_feature_columns=[token_emb], + rnn_cell_fn=rnn_cell_fn) + +# Input builders +def input_fn_train: # returns x, y + pass +estimator.train(input_fn=input_fn_train, steps=100) + +def input_fn_eval: # returns x, y + pass +metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10) +def input_fn_predict: # returns x, None + pass +predictions = estimator.predict(input_fn=input_fn_predict) +``` + +Input of `train` and `evaluate` should have following features, +otherwise there will be a `KeyError`: + +* if the head's `weight_column` is not `None`, a feature with + `key=weight_column` whose value is a `Tensor`. +* for each `column` in `sequence_feature_columns`: + - a feature with `key=column.name` whose `value` is a `SparseTensor`. +* for each `column` in `context_feature_columns`: + - if `column` is a `CategoricalColumn`, a feature with `key=column.name` + whose `value` is a `SparseTensor`. + - if `column` is a `WeightedCategoricalColumn`, two features: the first + with `key` the id column name, the second with `key` the weight column + name. Both features' `value` must be a `SparseTensor`. + - if `column` is a `DenseColumn`, a feature with `key=column.name` + whose `value` is a `Tensor`. + +Loss and predicted output are determined by the specified head. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`head` + +A `Head` instance. This specifies the model's output and loss +function to be optimized. +
+`sequence_feature_columns` + +An iterable containing the `FeatureColumn`s that +represent sequential input. All items in the set should either be +sequence columns (e.g. `sequence_numeric_column`) or constructed from +one (e.g. `embedding_column` with `sequence_categorical_column_*` as +input). +
+`context_feature_columns` + +An iterable containing the `FeatureColumn`s for +contextual input. The data represented by these columns will be +replicated and given to the RNN at each timestep. These columns must be +instances of classes derived from `DenseColumn` such as +`numeric_column`, not the sequential variants. +
+`units` + +Iterable of integer number of hidden units per RNN layer. If set, +`cell_type` must also be specified and `rnn_cell_fn` must be `None`. +
+`cell_type` + +A class producing a RNN cell or a string specifying the cell +type. Supported strings are: `'simple_rnn'`, `'lstm'`, and `'gru'`. If +set, `units` must also be specified and `rnn_cell_fn` must be `None`. +
+`rnn_cell_fn` + +A function that returns a RNN cell instance that will be used +to construct the RNN. If set, `units` and `cell_type` cannot be set. +This is for advanced users who need additional customization beyond +`units` and `cell_type`. Note that tf.keras.layers.StackedRNNCells is +needed for stacked RNNs. +
+`return_sequences` + +A boolean indicating whether to return the last output +in the output sequence, or the full sequence. +
+`model_dir` + +Directory to save model parameters, graph, etc. This can +also be used to load checkpoints from the directory into an estimator to +continue training a previously saved model. +
+`optimizer` + +An instance of `tf.Optimizer` or string specifying optimizer +type. Defaults to Adagrad optimizer. +
+`config` + +`RunConfig` object to configure the runtime settings. +
+ + + + + + + + + + + + +
+`ValueError` + +If `units`, `cell_type`, and `rnn_cell_fn` are not +compatible. +
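
A compact, self-contained variant of the example above for a regression task; the 'tokens' feature, vocabulary, and targets are invented for illustration.

```python
import tensorflow as tf

tokens = tf.feature_column.sequence_categorical_column_with_vocabulary_list(
    'tokens', vocabulary_list=['up', 'down', 'flat'])
token_emb = tf.feature_column.embedding_column(tokens, dimension=4)

def input_fn_train():
  # Variable-length sequences fed as a SparseTensor, as required above.
  ragged = tf.ragged.constant([['up', 'up'], ['down'], ['flat', 'up', 'down']])
  features = {'tokens': ragged.to_sparse()}
  labels = tf.constant([[1.0], [0.0], [0.5]])
  return tf.data.Dataset.from_tensors((features, labels)).repeat()

estimator = tf.estimator.experimental.RNNEstimator(
    head=tf.estimator.RegressionHead(),
    sequence_feature_columns=[token_emb],
    units=[16],
    cell_type='gru')
estimator.train(input_fn=input_fn_train, steps=20)
```
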
+ + + +#### Eager Compatibility +Estimators are not compatible with eager execution. + + + + + + + + + + + + + + + + + + + + + + + +
+`config` + + +
+`model_dir` + + +
+`model_fn` + +Returns the `model_fn` which is bound to `self.params`. +
+`params` + + +
+ + + +## Methods + +

eval_dir

+ +View source + + + +Shows the directory name where evaluation metrics are dumped. + + + + + + + + + + + +
Args
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A string which is the path of the directory containing the evaluation metrics. +
+ + + +

evaluate

+ +View source + + + +Evaluates the model given evaluation data `input_fn`. + +For each step, calls `input_fn`, which returns one batch of data. +Evaluates until: +- `steps` batches are processed, or +- `input_fn` raises an end-of-input exception (tf.errors.OutOfRangeError +or +`StopIteration`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the input data for evaluation. See +[Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The +function should construct and return one of the following: * A +tf.data.Dataset object: Outputs of `Dataset` object must be a tuple +`(features, labels)` with same constraints as below. * A tuple +`(features, labels)`: Where `features` is a tf.Tensor or a dictionary +of string feature name to `Tensor` and `labels` is a `Tensor` or a +dictionary of string label name to `Tensor`. Both `features` and +`labels` are consumed by `model_fn`. They should satisfy the +expectation of `model_fn` from inputs. +
+`steps` + +Number of steps for which to evaluate model. If `None`, evaluates +until `input_fn` raises an end-of-input exception. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the evaluation call. +
+`checkpoint_path` + +Path of a specific checkpoint to evaluate. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, evaluation is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`name` + +Name of the evaluation if user needs to run multiple evaluations on +different data sets, such as on training data vs test data. Metrics for +different evaluations are saved in separate folders, and appear +separately in tensorboard. +
+ + + + + + + + + + + +
Returns
+A dict containing the evaluation metrics specified in `model_fn` keyed by +name, as well as an entry `global_step` which contains the value of the +global step for which this evaluation was performed. For canned +estimators, the dict contains the `loss` (mean loss per mini-batch) and +the `average_loss` (mean loss per sample). Canned classifiers also return +the `accuracy`. Canned regressors also return the `label/mean` and the +`prediction/mean`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `steps <= 0`. +
+ + + +

experimental_export_all_saved_models

+ +View source + + + +Exports a `SavedModel` with `tf.MetaGraphDefs` for each requested mode. + +For each mode passed in via the `input_receiver_fn_map`, +this method builds a new graph by calling the `input_receiver_fn` to obtain +feature and label `Tensor`s. Next, this method calls the `Estimator`'s +`model_fn` in the passed mode to generate the model graph based on +those features and labels, and restores the given checkpoint +(or, lacking that, the most recent checkpoint) into the graph. +Only one of the modes is used for saving variables to the `SavedModel` +(order of preference: tf.estimator.ModeKeys.TRAIN, +tf.estimator.ModeKeys.EVAL, then +tf.estimator.ModeKeys.PREDICT), such that up to three +`tf.MetaGraphDefs` are saved with a single set of variables in a single +`SavedModel` directory. + +For the variables and `tf.MetaGraphDefs`, a timestamped export directory +below +`export_dir_base`, and writes a `SavedModel` into it containing +the `tf.MetaGraphDef` for the given mode and its associated signatures. + +For prediction, the exported `MetaGraphDef` will provide one `SignatureDef` +for each element of the `export_outputs` dict returned from the `model_fn`, +named using the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +For training and evaluation, the `train_op` is stored in an extra +collection, +and loss, metrics, and predictions are included in a `SignatureDef` for the +mode in question. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`input_receiver_fn_map` + +dict of tf.estimator.ModeKeys to +`input_receiver_fn` mappings, where the `input_receiver_fn` is a +function that takes no arguments and returns the appropriate subclass of +`InputReceiver`. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any `input_receiver_fn` is `None`, no `export_outputs` +are provided, or no checkpoint can be found. +
+ + + +
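
A hedged sketch of one common call pattern: exporting only the inference graph by mapping `tf.estimator.ModeKeys.PREDICT` to a serving receiver; the feature spec and export directory are hypothetical, and `estimator` is assumed to be already trained. TRAIN/EVAL receivers could be added to the map as well (for example via `build_raw_supervised_input_receiver_fn`).

```python
import tensorflow as tf

feature_spec = {'tokens': tf.io.VarLenFeature(tf.string)}
serving_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

export_path = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/all_saved_models',
    input_receiver_fn_map={
        tf.estimator.ModeKeys.PREDICT: serving_receiver_fn,
    })
```
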

export_saved_model

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + +The experimental_mode parameter can be used to export a single +train/eval/predict graph as a `SavedModel`. +See `experimental_export_all_saved_models` for full docs. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`experimental_mode` + +tf.estimator.ModeKeys value indicating which mode will +be exported. Note that this feature is experimental. +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

export_savedmodel

+ +View source + + + +Exports inference graph as a `SavedModel` into the given dir. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +This function has been renamed, use `export_saved_model` instead. + +For a detailed guide, see +[Using SavedModel with +Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators). + +This method builds a new graph by first calling the +`serving_input_receiver_fn` to obtain feature `Tensor`s, and then calling +this `Estimator`'s `model_fn` to generate the model graph based on those +features. It restores the given checkpoint (or, lacking that, the most +recent checkpoint) into this graph in a fresh session. Finally it creates +a timestamped export directory below the given `export_dir_base`, and writes +a `SavedModel` into it containing a single `tf.MetaGraphDef` saved from this +session. + +The exported `MetaGraphDef` will provide one `SignatureDef` for each +element of the `export_outputs` dict returned from the `model_fn`, named +using +the same keys. One of these keys is always +`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`, +indicating which +signature will be served when a serving request does not specify one. +For each signature, the outputs are provided by the corresponding +tf.estimator.export.ExportOutputs, and the inputs are always the input +receivers provided by +the `serving_input_receiver_fn`. + +Extra assets may be written into the `SavedModel` via the `assets_extra` +argument. This should be a dict, where each key gives a destination path +(including the filename) relative to the assets.extra directory. The +corresponding value gives the full path of the source file to be copied. +For example, the simple case of copying a single file without renaming it +is specified as `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`export_dir_base` + +A string containing a directory in which to create +timestamped subdirectories containing exported `SavedModel`s. +
+`serving_input_receiver_fn` + +A function that takes no argument and returns a +tf.estimator.export.ServingInputReceiver or +tf.estimator.export.TensorServingInputReceiver. +
+`assets_extra` + +A dict specifying how to populate the assets.extra directory +within the exported `SavedModel`, or `None` if no extra assets are +needed. +
+`as_text` + +whether to write the `SavedModel` proto in text format. +
+`checkpoint_path` + +The checkpoint path to export. If `None` (the default), +the most recent checkpoint found within the model directory is chosen. +
+`strip_default_attrs` + +Boolean. If `True`, default-valued attributes will be +removed from the `NodeDef`s. For a detailed guide, see [Stripping +Default-Valued Attributes]( +https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). +
+ + + + + + + + + + + +
Returns
+The path to the exported directory as a bytes object. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if no `serving_input_receiver_fn` is provided, no +`export_outputs` are provided, or no checkpoint can be found. +
+ + + +

get_variable_names

+ +View source + + + +Returns list of all variable names in this model. + + + + + + + + + + +
Returns
+List of names. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

get_variable_value

+ +View source + + + +Returns value of the variable given by name. + + + + + + + + + + + +
Args
+`name` + +string or a list of string, name of the tensor. +
+ + + + + + + + + + + +
Returns
+Numpy array - value of the tensor. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the `Estimator` has not produced a checkpoint yet. +
+ + + +

latest_checkpoint

+ +View source + + + +Finds the filename of the latest saved checkpoint file in `model_dir`. + + + + + + + + + + +
Returns
+The full path to the latest checkpoint or `None` if no checkpoint was +found. +
+ + + +

predict

+ +View source + + + +Yields predictions for given features. + +Please note that interleaving two predict outputs does not work. See: +[issue/20506]( +https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517) + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that constructs the features. Prediction continues +until `input_fn` raises an end-of-input exception +(tf.errors.OutOfRangeError or `StopIteration`). See [Premade +Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* tf.data.Dataset object -- Outputs of `Dataset` object must have +same constraints as below. +* features -- A tf.Tensor or a dictionary of string feature name to +`Tensor`. features are consumed by `model_fn`. They should satisfy +the expectation of `model_fn` from inputs. * A tuple, in which case +the first item is extracted as features. +
+`predict_keys` + +list of `str`, name of the keys to predict. It is used if +the tf.estimator.EstimatorSpec.predictions is a `dict`. If +`predict_keys` is used then rest of the predictions will be filtered +from the dictionary. If `None`, returns all. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the prediction call. +
+`checkpoint_path` + +Path of a specific checkpoint to predict. If `None`, the +latest checkpoint in `model_dir` is used. If there are no checkpoints +in `model_dir`, prediction is run with newly initialized `Variables` +instead of ones restored from checkpoint. +
+`yield_single_examples` + +If `False`, yields the whole batch as returned by +the `model_fn` instead of decomposing the batch into individual +elements. This is useful if `model_fn` returns some tensors whose first +dimension is not equal to the batch size. +
+ + + +#### Yields: + +Evaluated values of `predictions` tensors. + + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If batch length of predictions is not the same and +`yield_single_examples` is `True`. +
+`ValueError` + +If there is a conflict between `predict_keys` and +`predictions`. For example if `predict_keys` is not `None` but +tf.estimator.EstimatorSpec.predictions is not a `dict`. +
+ + + +

train

+ +View source + + + +Trains a model given training data `input_fn`. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`input_fn` + +A function that provides input data for training as minibatches. +See [Premade Estimators]( +https://tensorflow.org/guide/premade_estimators#create_input_functions) +for more information. The function should construct and return one of +the following: +* A tf.data.Dataset object: Outputs of `Dataset` object must be a +tuple `(features, labels)` with same constraints as below. +* A tuple `(features, labels)`: Where `features` is a tf.Tensor or a +dictionary of string feature name to `Tensor` and `labels` is a +`Tensor` or a dictionary of string label name to `Tensor`. Both +`features` and `labels` are consumed by `model_fn`. They should +satisfy the expectation of `model_fn` from inputs. +
+`hooks` + +List of `tf.train.SessionRunHook` subclass instances. Used for +callbacks inside the training loop. +
+`steps` + +Number of steps for which to train the model. If `None`, train +forever or train until `input_fn` generates the `tf.errors.OutOfRange` +error or `StopIteration` exception. `steps` works incrementally: if you +call `train(steps=10)` twice, training occurs for 20 steps in total. +If `OutOfRange` or `StopIteration` occurs in the middle, training stops +before 20 steps. If you don't want incremental behavior, please +set `max_steps` instead. If set, `max_steps` must be `None`. +
+`max_steps` + +Number of total steps for which to train the model. If `None`, +train forever or train until `input_fn` generates the +`tf.errors.OutOfRange` error or `StopIteration` exception. If set, +`steps` must be `None`. If `OutOfRange` or `StopIteration` occurs in the +middle, training stops before `max_steps` steps. Two calls to +`train(steps=100)` means 200 training iterations. On the other hand, two +calls to `train(max_steps=100)` means that the second call will not do +any iterations, since the first call already did all 100 steps. +
+`saving_listeners` + +list of `CheckpointSaverListener` objects. Used for +callbacks that run immediately before or after checkpoint savings. +
+ + + + + + + + + + + +
Returns
+`self`, for chaining. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If both `steps` and `max_steps` are not `None`. +
+`ValueError` + +If either `steps` or `max_steps` is `<= 0`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/experimental/build_raw_supervised_input_receiver_fn.md b/site/en/api_docs/python/tf/estimator/experimental/build_raw_supervised_input_receiver_fn.md new file mode 100644 index 00000000000..01a51268cc6 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/build_raw_supervised_input_receiver_fn.md @@ -0,0 +1,111 @@ +description: Build a supervised_input_receiver_fn for raw features and labels. + +
+ + +
+ +# tf.estimator.experimental.build_raw_supervised_input_receiver_fn + + + + + + + + + +Build a supervised_input_receiver_fn for raw features and labels. + + + + + + + + + +This function wraps tensor placeholders in a supervised_receiver_fn +with the expectation that the features and labels appear precisely as +the model_fn expects them. Features and labels can therefore be dicts of +tensors, or raw tensors. + + + + + + + + + + + + + + + + +
+`features` + +a dict of string to `Tensor` or `Tensor`. +
+`labels` + +a dict of string to `Tensor` or `Tensor`. +
+`default_batch_size` + +the number of query examples expected per batch. Leave +unset for variable batch size (recommended). +
+ + + + + + + + + + + +
+A supervised_input_receiver_fn. +
+ + + + + + + + + + + + +
+`ValueError` + +if features and labels have overlapping keys. +
+ diff --git a/site/en/api_docs/python/tf/estimator/experimental/call_logit_fn.md b/site/en/api_docs/python/tf/estimator/experimental/call_logit_fn.md new file mode 100644 index 00000000000..e4c78048bdc --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/call_logit_fn.md @@ -0,0 +1,126 @@ +description: Calls logit_fn (experimental). + +
+ + +
+ +# tf.estimator.experimental.call_logit_fn + + + + + + + + + +Calls logit_fn (experimental). + + + + + + + + + +THIS FUNCTION IS EXPERIMENTAL. Keras layers/models are the recommended APIs +for logit and model composition. + +A utility function that calls the provided logit_fn with the relevant subset +of provided arguments. Similar to tf.estimator._call_model_fn(). + + + + + + + + + + + + + + + + + + + + + + +
+`logit_fn` + +A logit_fn as defined above. +
+`features` + +The features dict. +
+`mode` + +TRAIN / EVAL / PREDICT ModeKeys. +
+`params` + +The hyperparameter dict. +
+`config` + +The configuration object. +
+ + + + + + + + + + + +
+A logit Tensor, the output of logit_fn. +
+ + + + + + + + + + + + +
+`ValueError` + +if logit_fn does not return a Tensor or a dictionary mapping +strings to Tensors. +
+ diff --git a/site/en/api_docs/python/tf/estimator/experimental/make_early_stopping_hook.md b/site/en/api_docs/python/tf/estimator/experimental/make_early_stopping_hook.md new file mode 100644 index 00000000000..12a8e07c22c --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/make_early_stopping_hook.md @@ -0,0 +1,147 @@ +description: Creates early-stopping hook. + +
+ + +
+ +# tf.estimator.experimental.make_early_stopping_hook + + + + + + + + + +Creates early-stopping hook. + + + + + + + + + +Returns a `SessionRunHook` that stops training when `should_stop_fn` returns +`True`. + +#### Usage example: + + + +```python +estimator = ... +hook = early_stopping.make_early_stopping_hook( + estimator, should_stop_fn=make_stop_fn(...)) +train_spec = tf.estimator.TrainSpec(..., hooks=[hook]) +tf.estimator.train_and_evaluate(estimator, train_spec, ...) +``` + +Caveat: Current implementation supports early-stopping both training and +evaluation in local mode. In distributed mode, training can be stopped but +evaluation (where it's a separate job) will indefinitely wait for new model +checkpoints to evaluate, so you will need other means to detect and stop it. +Early-stopping evaluation in distributed mode requires changes in +`train_and_evaluate` API and will be addressed in a future revision. + + + + + + + + + + + + + + + + + + + +
+`estimator` + +A tf.estimator.Estimator instance. +
+`should_stop_fn` + +`callable`, function that takes no arguments and returns a +`bool`. If the function returns `True`, stopping will be initiated by the +chief. +
+`run_every_secs` + +If specified, calls `should_stop_fn` at an interval of +`run_every_secs` seconds. Defaults to 60 seconds. Either this or +`run_every_steps` must be set. +
+`run_every_steps` + +If specified, calls `should_stop_fn` every +`run_every_steps` steps. Either this or `run_every_secs` must be set. +
+ + + + + + + + + + + +
+A `SessionRunHook` that periodically executes `should_stop_fn` and initiates +early stopping if the function returns `True`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `estimator` is not of type tf.estimator.Estimator. +
+`ValueError` + +If both `run_every_secs` and `run_every_steps` are set. +
+ diff --git a/site/en/api_docs/python/tf/estimator/experimental/make_stop_at_checkpoint_step_hook.md b/site/en/api_docs/python/tf/estimator/experimental/make_stop_at_checkpoint_step_hook.md new file mode 100644 index 00000000000..2fb9a8aa86d --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/make_stop_at_checkpoint_step_hook.md @@ -0,0 +1,44 @@ +description: Creates a proper StopAtCheckpointStepHook based on chief status. + +
+ + +
+ +# tf.estimator.experimental.make_stop_at_checkpoint_step_hook + + + + + + + + + +Creates a proper StopAtCheckpointStepHook based on chief status. + + + + + + + + diff --git a/site/en/api_docs/python/tf/estimator/experimental/stop_if_higher_hook.md b/site/en/api_docs/python/tf/estimator/experimental/stop_if_higher_hook.md new file mode 100644 index 00000000000..a919cc6a31f --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/stop_if_higher_hook.md @@ -0,0 +1,144 @@ +description: Creates hook to stop if the given metric is higher than the threshold. + +
+ + +
+ +# tf.estimator.experimental.stop_if_higher_hook + + + + + + + + + +Creates hook to stop if the given metric is higher than the threshold. + + + + + + + + + + +#### Usage example: + + + +```python +estimator = ... +# Hook to stop training if accuracy becomes higher than 0.9. +hook = early_stopping.stop_if_higher_hook(estimator, "accuracy", 0.9) +train_spec = tf.estimator.TrainSpec(..., hooks=[hook]) +tf.estimator.train_and_evaluate(estimator, train_spec, ...) +``` + +Caveat: Current implementation supports early-stopping both training and +evaluation in local mode. In distributed mode, training can be stopped but +evaluation (where it's a separate job) will indefinitely wait for new model +checkpoints to evaluate, so you will need other means to detect and stop it. +Early-stopping evaluation in distributed mode requires changes in +`train_and_evaluate` API and will be addressed in a future revision. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`estimator` + +A tf.estimator.Estimator instance. +
+`metric_name` + +`str`, metric to track. "loss", "accuracy", etc. +
+`threshold` + +Numeric threshold for the given metric. +
+`eval_dir` + +If set, directory containing summary files with eval metrics. By +default, `estimator.eval_dir()` will be used. +
+`min_steps` + +`int`, stop is never requested if global step is less than this +value. Defaults to 0. +
+`run_every_secs` + +If specified, calls `should_stop_fn` at an interval of +`run_every_secs` seconds. Defaults to 60 seconds. Either this or +`run_every_steps` must be set. +
+`run_every_steps` + +If specified, calls `should_stop_fn` every +`run_every_steps` steps. Either this or `run_every_secs` must be set. +
+ + + + + + + + + + + +
+An early-stopping hook of type `SessionRunHook` that periodically checks +if the given metric is higher than specified threshold and initiates +early stopping if true. +
+ diff --git a/site/en/api_docs/python/tf/estimator/experimental/stop_if_lower_hook.md b/site/en/api_docs/python/tf/estimator/experimental/stop_if_lower_hook.md new file mode 100644 index 00000000000..4c64fdc8e54 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/stop_if_lower_hook.md @@ -0,0 +1,144 @@ +description: Creates hook to stop if the given metric is lower than the threshold. + +
+ + +
+ +# tf.estimator.experimental.stop_if_lower_hook + + + + + + + + + +Creates hook to stop if the given metric is lower than the threshold. + + + + + + + + + + +#### Usage example: + + + +```python +estimator = ... +# Hook to stop training if loss becomes lower than 100. +hook = early_stopping.stop_if_lower_hook(estimator, "loss", 100) +train_spec = tf.estimator.TrainSpec(..., hooks=[hook]) +tf.estimator.train_and_evaluate(estimator, train_spec, ...) +``` + +Caveat: Current implementation supports early-stopping both training and +evaluation in local mode. In distributed mode, training can be stopped but +evaluation (where it's a separate job) will indefinitely wait for new model +checkpoints to evaluate, so you will need other means to detect and stop it. +Early-stopping evaluation in distributed mode requires changes in +`train_and_evaluate` API and will be addressed in a future revision. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`estimator` + +A tf.estimator.Estimator instance. +
+`metric_name` + +`str`, metric to track. "loss", "accuracy", etc. +
+`threshold` + +Numeric threshold for the given metric. +
+`eval_dir` + +If set, directory containing summary files with eval metrics. By +default, `estimator.eval_dir()` will be used. +
+`min_steps` + +`int`, stop is never requested if global step is less than this +value. Defaults to 0. +
+`run_every_secs` + +If specified, calls `should_stop_fn` at an interval of +`run_every_secs` seconds. Defaults to 60 seconds. Either this or +`run_every_steps` must be set. +
+`run_every_steps` + +If specified, calls `should_stop_fn` every +`run_every_steps` steps. Either this or `run_every_secs` must be set. +
+ + + + + + + + + + + +
+An early-stopping hook of type `SessionRunHook` that periodically checks +if the given metric is lower than specified threshold and initiates +early stopping if true. +
+ diff --git a/site/en/api_docs/python/tf/estimator/experimental/stop_if_no_decrease_hook.md b/site/en/api_docs/python/tf/estimator/experimental/stop_if_no_decrease_hook.md new file mode 100644 index 00000000000..1c7e5a227aa --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/stop_if_no_decrease_hook.md @@ -0,0 +1,145 @@ +description: Creates hook to stop if metric does not decrease within given max steps. + +
+ + +
+ +# tf.estimator.experimental.stop_if_no_decrease_hook + + + + + + + + + +Creates hook to stop if metric does not decrease within given max steps. + + + + + + + + + + +#### Usage example: + + + +```python +estimator = ... +# Hook to stop training if loss does not decrease in over 100000 steps. +hook = early_stopping.stop_if_no_decrease_hook(estimator, "loss", 100000) +train_spec = tf.estimator.TrainSpec(..., hooks=[hook]) +tf.estimator.train_and_evaluate(estimator, train_spec, ...) +``` + +Caveat: Current implementation supports early-stopping both training and +evaluation in local mode. In distributed mode, training can be stopped but +evaluation (where it's a separate job) will indefinitely wait for new model +checkpoints to evaluate, so you will need other means to detect and stop it. +Early-stopping evaluation in distributed mode requires changes in +`train_and_evaluate` API and will be addressed in a future revision. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`estimator` + +A tf.estimator.Estimator instance. +
+`metric_name` + +`str`, metric to track. "loss", "accuracy", etc. +
+`max_steps_without_decrease` + +`int`, maximum number of training steps with no +decrease in the given metric. +
+`eval_dir` + +If set, directory containing summary files with eval metrics. By +default, `estimator.eval_dir()` will be used. +
+`min_steps` + +`int`, stop is never requested if global step is less than this +value. Defaults to 0. +
+`run_every_secs` + +If specified, calls `should_stop_fn` at an interval of +`run_every_secs` seconds. Defaults to 60 seconds. Either this or +`run_every_steps` must be set. +
+`run_every_steps` + +If specified, calls `should_stop_fn` every +`run_every_steps` steps. Either this or `run_every_secs` must be set. +
+ + + + + + + + + + + +
+An early-stopping hook of type `SessionRunHook` that periodically checks +if the given metric shows no decrease over given maximum number of +training steps, and initiates early stopping if true. +
+ diff --git a/site/en/api_docs/python/tf/estimator/experimental/stop_if_no_increase_hook.md b/site/en/api_docs/python/tf/estimator/experimental/stop_if_no_increase_hook.md new file mode 100644 index 00000000000..0bbe5b30c7e --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/experimental/stop_if_no_increase_hook.md @@ -0,0 +1,145 @@ +description: Creates hook to stop if metric does not increase within given max steps. + +
+ + +
+ +# tf.estimator.experimental.stop_if_no_increase_hook + + + + + + + + + +Creates hook to stop if metric does not increase within given max steps. + + + + + + + + + + +#### Usage example: + + + +```python +estimator = ... +# Hook to stop training if accuracy does not increase in over 100000 steps. +hook = early_stopping.stop_if_no_increase_hook(estimator, "accuracy", 100000) +train_spec = tf.estimator.TrainSpec(..., hooks=[hook]) +tf.estimator.train_and_evaluate(estimator, train_spec, ...) +``` + +Caveat: Current implementation supports early-stopping both training and +evaluation in local mode. In distributed mode, training can be stopped but +evaluation (where it's a separate job) will indefinitely wait for new model +checkpoints to evaluate, so you will need other means to detect and stop it. +Early-stopping evaluation in distributed mode requires changes in +`train_and_evaluate` API and will be addressed in a future revision. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`estimator` + +A tf.estimator.Estimator instance. +
+`metric_name` + +`str`, metric to track. "loss", "accuracy", etc. +
+`max_steps_without_increase` + +`int`, maximum number of training steps with no +increase in the given metric. +
+`eval_dir` + +If set, directory containing summary files with eval metrics. By +default, `estimator.eval_dir()` will be used. +
+`min_steps` + +`int`, stop is never requested if global step is less than this +value. Defaults to 0. +
+`run_every_secs` + +If specified, calls `should_stop_fn` at an interval of +`run_every_secs` seconds. Defaults to 60 seconds. Either this or +`run_every_steps` must be set. +
+`run_every_steps` + +If specified, calls `should_stop_fn` every +`run_every_steps` steps. Either this or `run_every_secs` must be set. +
+ + + + + + + + + + + +
+An early-stopping hook of type `SessionRunHook` that periodically checks +if the given metric shows no increase over given maximum number of +training steps, and initiates early stopping if true. +
+ diff --git a/site/en/api_docs/python/tf/estimator/export.md b/site/en/api_docs/python/tf/estimator/export.md new file mode 100644 index 00000000000..049aab84c03 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/export.md @@ -0,0 +1,42 @@ +description: All public utility methods for exporting Estimator to SavedModel. + +
+ + +
+ +# Module: tf.estimator.export + + + + + + + + + +All public utility methods for exporting Estimator to SavedModel. + + +This file includes functions and constants from core (model_utils) and export.py + +## Classes + +[`class ClassificationOutput`](../../tf/estimator/export/ClassificationOutput.md): Represents the output of a classification head. + +[`class ExportOutput`](../../tf/estimator/export/ExportOutput.md): Represents an output of a model that can be served. + +[`class PredictOutput`](../../tf/estimator/export/PredictOutput.md): Represents the output of a generic prediction head. + +[`class RegressionOutput`](../../tf/estimator/export/RegressionOutput.md): Represents the output of a regression head. + +[`class ServingInputReceiver`](../../tf/estimator/export/ServingInputReceiver.md): A return type for a serving_input_receiver_fn. + +[`class TensorServingInputReceiver`](../../tf/estimator/export/TensorServingInputReceiver.md): A return type for a serving_input_receiver_fn. + +## Functions + +[`build_parsing_serving_input_receiver_fn(...)`](../../tf/estimator/export/build_parsing_serving_input_receiver_fn.md): Build a serving_input_receiver_fn expecting fed tf.Examples. + +[`build_raw_serving_input_receiver_fn(...)`](../../tf/estimator/export/build_raw_serving_input_receiver_fn.md): Build a serving_input_receiver_fn expecting feature Tensors. + diff --git a/site/en/api_docs/python/tf/estimator/export/ClassificationOutput.md b/site/en/api_docs/python/tf/estimator/export/ClassificationOutput.md new file mode 100644 index 00000000000..d0c4650a191 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/export/ClassificationOutput.md @@ -0,0 +1,171 @@ +description: Represents the output of a classification head. + +
+ + + + +
+ +# tf.estimator.export.ClassificationOutput + + + + + + + + + +Represents the output of a classification head. + +Inherits From: [`ExportOutput`](../../../tf/estimator/export/ExportOutput.md) + + + + + + + + + +Either classes or scores or both must be set. + +The classes `Tensor` must provide string labels, not integer class IDs. + +If only classes is set, it is interpreted as providing top-k results in +descending order. + +If only scores is set, it is interpreted as providing a score for every class +in order of class ID. + +If both classes and scores are set, they are interpreted as zipped, so each +score corresponds to the class at the same index. Clients should not depend +on the order of the entries. + + + + + + + + + + + + + +
+`scores` + +A float `Tensor` giving scores (sometimes but not always +interpretable as probabilities) for each class. May be `None`, but +only if `classes` is set. Interpretation varies-- see class doc. +
+`classes` + +A string `Tensor` giving predicted class labels. May be `None`, +but only if `scores` is set. Interpretation varies-- see class doc. +
+ + + + + + + + + + + + +
+`ValueError` + +if neither classes nor scores is set, or one of them is not a +`Tensor` with the correct dtype. +
+ + + + + + + + + + + + + + + + + +
+`classes` + + +
+`scores` + + +
+ + + +## Methods + +

as_signature_def

+ +View source + + + +Generate a SignatureDef proto for inclusion in a MetaGraphDef. + +The SignatureDef will specify outputs as described in this ExportOutput, +and will use the provided receiver_tensors as inputs. + + + + + + + + + + +
Args
+`receiver_tensors` + +a `Tensor`, or a dict of string to `Tensor`, specifying +input nodes that will be fed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/export/ExportOutput.md b/site/en/api_docs/python/tf/estimator/export/ExportOutput.md new file mode 100644 index 00000000000..b0d93eda1cc --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/export/ExportOutput.md @@ -0,0 +1,78 @@ +description: Represents an output of a model that can be served. + +
+ + + +
+ +# tf.estimator.export.ExportOutput + + + + + + + + + +Represents an output of a model that can be served. + + + + + +These typically correspond to model heads. + +## Methods + +

as_signature_def

+ +View source + + + +Generate a SignatureDef proto for inclusion in a MetaGraphDef. + +The SignatureDef will specify outputs as described in this ExportOutput, +and will use the provided receiver_tensors as inputs. + + + + + + + + + + +
Args
+`receiver_tensors` + +a `Tensor`, or a dict of string to `Tensor`, specifying +input nodes that will be fed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/export/PredictOutput.md b/site/en/api_docs/python/tf/estimator/export/PredictOutput.md new file mode 100644 index 00000000000..41aa377c3ea --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/export/PredictOutput.md @@ -0,0 +1,145 @@ +description: Represents the output of a generic prediction head. + +
+ + + + +
+ +# tf.estimator.export.PredictOutput + + + + + + + + + +Represents the output of a generic prediction head. + +Inherits From: [`ExportOutput`](../../../tf/estimator/export/ExportOutput.md) + + + + + + + + + +A generic prediction need not be either a classification or a regression. + +Named outputs must be provided as a dict from string to `Tensor`, + + + + + + + + + + +
+`outputs` + +A `Tensor` or a dict of string to `Tensor` representing the +predictions. +
+ + + + + + + + + + + + +
+`ValueError` + +if the outputs is not dict, or any of its keys are not +strings, or any of its values are not `Tensor`s. +
+ + + + + + + + + + + + + + +
+`outputs` + + +
+ + + +## Methods + +

as_signature_def

+ +View source + + + +Generate a SignatureDef proto for inclusion in a MetaGraphDef. + +The SignatureDef will specify outputs as described in this ExportOutput, +and will use the provided receiver_tensors as inputs. + + + + + + + + + + +
Args
+`receiver_tensors` + +a `Tensor`, or a dict of string to `Tensor`, specifying +input nodes that will be fed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/export/RegressionOutput.md b/site/en/api_docs/python/tf/estimator/export/RegressionOutput.md new file mode 100644 index 00000000000..11e8be4aa45 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/export/RegressionOutput.md @@ -0,0 +1,140 @@ +description: Represents the output of a regression head. + +
+ + + + +
+ +# tf.estimator.export.RegressionOutput + + + + + + + + + +Represents the output of a regression head. + +Inherits From: [`ExportOutput`](../../../tf/estimator/export/ExportOutput.md) + + + + + + + + + + + + + + + + + + + +
+`value` + +a float `Tensor` giving the predicted values. Required. +
+ + + + + + + + + + + + +
+`ValueError` + +if the value is not a `Tensor` with dtype tf.float32. +
+ + + + + + + + + + + + + + +
+`value` + + +
+ + + +## Methods + +

as_signature_def

+ +View source + + + +Generate a SignatureDef proto for inclusion in a MetaGraphDef. + +The SignatureDef will specify outputs as described in this ExportOutput, +and will use the provided receiver_tensors as inputs. + + + + + + + + + + +
Args
+`receiver_tensors` + +a `Tensor`, or a dict of string to `Tensor`, specifying +input nodes that will be fed. +
+ + + + + diff --git a/site/en/api_docs/python/tf/estimator/export/ServingInputReceiver.md b/site/en/api_docs/python/tf/estimator/export/ServingInputReceiver.md new file mode 100644 index 00000000000..f46d6bb4951 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/export/ServingInputReceiver.md @@ -0,0 +1,103 @@ +description: A return type for a serving_input_receiver_fn. + +
+ + + + + + +
+ +# tf.estimator.export.ServingInputReceiver + + + + + + + + + +A return type for a serving_input_receiver_fn. + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`, `SparseTensor`, or dict of string or int to `Tensor` +or `SparseTensor`, specifying the features to be passed to the model. +Note: if `features` passed is not a dict, it will be wrapped in a dict +with a single entry, using 'feature' as the key. Consequently, the +model +must accept a feature dict of the form {'feature': tensor}. You may use +`TensorServingInputReceiver` if you want the tensor to be passed as is. +
+`receiver_tensors` + +A `Tensor`, `SparseTensor`, or dict of string to `Tensor` +or `SparseTensor`, specifying input nodes where this receiver expects to +be fed by default. Typically, this is a single placeholder expecting +serialized `tf.Example` protos. +
+`receiver_tensors_alternatives` + +a dict of string to additional groups of +receiver tensors, each of which may be a `Tensor`, `SparseTensor`, or dict +of string to `Tensor` or `SparseTensor`. These named receiver tensor +alternatives generate additional serving signatures, which may be used to +feed inputs at different points within the input receiver subgraph. A +typical usage is to allow feeding raw feature `Tensor`s *downstream* of +the tf.parse_example() op. Defaults to None. +
+ + + +## Class Variables + +* `features` +* `receiver_tensors` +* `receiver_tensors_alternatives` diff --git a/site/en/api_docs/python/tf/estimator/export/TensorServingInputReceiver.md b/site/en/api_docs/python/tf/estimator/export/TensorServingInputReceiver.md new file mode 100644 index 00000000000..b30df481676 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/export/TensorServingInputReceiver.md @@ -0,0 +1,112 @@ +description: A return type for a serving_input_receiver_fn. + +
+ + + + + + +
+ +# tf.estimator.export.TensorServingInputReceiver + + + + + + + + + +A return type for a serving_input_receiver_fn. + + + + + + + + + +This is for use with models that expect a single `Tensor` or `SparseTensor` +as an input feature, as opposed to a dict of features. + +The normal `ServingInputReceiver` always returns a feature dict, even if it +contains only one entry, and so can be used only with models that accept such +a dict. For models that accept only a single raw feature, the +`serving_input_receiver_fn` provided to Estimator.export_saved_model() +should return this `TensorServingInputReceiver` instead. See: +https://github.com/tensorflow/tensorflow/issues/11674 + +Note that the receiver_tensors and receiver_tensor_alternatives arguments +will be automatically converted to the dict representation in either case, +because the SavedModel format requires each input `Tensor` to have a name +(provided by the dict key). + + + + + + + + + + + + + + + + + + +
+`features` + +A single `Tensor` or `SparseTensor`, representing the feature to +be passed to the model. +
+`receiver_tensors` + +A `Tensor`, `SparseTensor`, or dict of string to `Tensor` +or `SparseTensor`, specifying input nodes where this receiver expects to +be fed by default. Typically, this is a single placeholder expecting +serialized `tf.Example` protos. +
+`receiver_tensors_alternatives` + +a dict of string to additional groups of +receiver tensors, each of which may be a `Tensor`, `SparseTensor`, or dict +of string to `Tensor` or `SparseTensor`. These named receiver tensor +alternatives generate additional serving signatures, which may be used to +feed inputs at different points within the input receiver subgraph. A +typical usage is to allow feeding raw feature `Tensor`s *downstream* of +the tf.parse_example() op. Defaults to None. +
+ + + +## Class Variables + +* `features` +* `receiver_tensors` +* `receiver_tensors_alternatives` diff --git a/site/en/api_docs/python/tf/estimator/export/build_parsing_serving_input_receiver_fn.md b/site/en/api_docs/python/tf/estimator/export/build_parsing_serving_input_receiver_fn.md new file mode 100644 index 00000000000..dd2d0065e60 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/export/build_parsing_serving_input_receiver_fn.md @@ -0,0 +1,86 @@ +description: Build a serving_input_receiver_fn expecting fed tf.Examples. + +
+ + +
+ +# tf.estimator.export.build_parsing_serving_input_receiver_fn + + + + + + + + + +Build a serving_input_receiver_fn expecting fed tf.Examples. + + + + + + + + + +Creates a serving_input_receiver_fn that expects a serialized tf.Example fed +into a string placeholder. The function parses the tf.Example according to +the provided feature_spec, and returns all parsed Tensors as features. + + + + + + + + + + + + + +
+`feature_spec` + +a dict of string to `VarLenFeature`/`FixedLenFeature`. +
+`default_batch_size` + +the number of query examples expected per batch. Leave +unset for variable batch size (recommended). +
+ + + + + + + + + + + +
+A serving_input_receiver_fn suitable for use in serving. +
+ diff --git a/site/en/api_docs/python/tf/estimator/export/build_raw_serving_input_receiver_fn.md b/site/en/api_docs/python/tf/estimator/export/build_raw_serving_input_receiver_fn.md new file mode 100644 index 00000000000..53a0110fab9 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/export/build_raw_serving_input_receiver_fn.md @@ -0,0 +1,85 @@ +description: Build a serving_input_receiver_fn expecting feature Tensors. + +
+ + +
+ +# tf.estimator.export.build_raw_serving_input_receiver_fn + + + + + + + + + +Build a serving_input_receiver_fn expecting feature Tensors. + + + + + + + + + +Creates a serving_input_receiver_fn that expects all features to be fed +directly. + + + + + + + + + + + + + 
+`features` + +a dict of string to `Tensor`. +
+`default_batch_size` + +the number of query examples expected per batch. Leave +unset for variable batch size (recommended). +
+ + + + + + + + + + + +
+A serving_input_receiver_fn. +
+ diff --git a/site/en/api_docs/python/tf/estimator/regressor_parse_example_spec.md b/site/en/api_docs/python/tf/estimator/regressor_parse_example_spec.md new file mode 100644 index 00000000000..b6114f63d8e --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/regressor_parse_example_spec.md @@ -0,0 +1,224 @@ +description: Generates parsing spec for tf.parse_example to be used with regressors. + +
+ + +
+ +# tf.estimator.regressor_parse_example_spec + + + + + + + + + +Generates parsing spec for tf.parse_example to be used with regressors. + + + + + + + +If users keep data in tf.Example format, they need to call tf.parse_example +with a proper feature spec. There are two main things that this utility helps: + +* Users need to combine parsing spec of features with labels and weights + (if any) since they are all parsed from same tf.Example instance. This + utility combines these specs. +* It is difficult to map expected label by a regressor such as `DNNRegressor` + to corresponding tf.parse_example spec. This utility encodes it by getting + related information from users (key, dtype). + +Example output of parsing spec: + +```python +# Define features and transformations +feature_b = tf.feature_column.numeric_column(...) +feature_c_bucketized = tf.feature_column.bucketized_column( + tf.feature_column.numeric_column("feature_c"), ...) +feature_a_x_feature_c = tf.feature_column.crossed_column( + columns=["feature_a", feature_c_bucketized], ...) + +feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c] +parsing_spec = tf.estimator.regressor_parse_example_spec( + feature_columns, label_key='my-label') + +# For the above example, regressor_parse_example_spec would return the dict: +assert parsing_spec == { + "feature_a": parsing_ops.VarLenFeature(tf.string), + "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32), + "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32) + "my-label" : parsing_ops.FixedLenFeature([1], dtype=tf.float32) +} +``` + +Example usage with a regressor: + +```python +feature_columns = # define features via tf.feature_column +estimator = DNNRegressor( + hidden_units=[256, 64, 16], + feature_columns=feature_columns, + weight_column='example-weight', + label_dimension=3) +# This label configuration tells the regressor the following: +# * weights are retrieved with key 'example-weight' +# * label is a 3 dimension tensor with float32 dtype. + + +# Input builders +def input_fn_train(): # Returns a tuple of features and labels. + features = tf.contrib.learn.read_keyed_batch_features( + file_pattern=train_files, + batch_size=batch_size, + # creates parsing configuration for tf.parse_example + features=tf.estimator.classifier_parse_example_spec( + feature_columns, + label_key='my-label', + label_dimension=3, + weight_column='example-weight'), + reader=tf.RecordIOReader) + labels = features.pop('my-label') + return features, labels + +estimator.train(input_fn=input_fn_train) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`feature_columns` + +An iterable containing all feature columns. All items +should be instances of classes derived from `_FeatureColumn`. +
+`label_key` + +A string identifying the label. It means tf.Example stores labels +with this key. +
+`label_dtype` + +A `tf.dtype` identifies the type of labels. By default it is +tf.float32. +
+`label_default` + +used as the label if `label_key` does not exist in the given +tf.Example. By default this is `None`, which means +`tf.parse_example` will error out if there is any missing label. +
+`label_dimension` + +Number of regression targets per example. This is the size +of the last dimension of the labels and logits `Tensor` objects +(typically, these have shape `[batch_size, label_dimension]`). +
+`weight_column` + +A string or a `NumericColumn` created by +tf.feature_column.numeric_column defining feature column representing +weights. It is used to down weight or boost examples during training. It +will be multiplied by the loss of the example. If it is a string, it is +used as a key to fetch weight tensor from the `features`. If it is a +`NumericColumn`, raw tensor is fetched by key `weight_column.key`, then +weight_column.normalizer_fn is applied on it to get weight tensor. +
+ + + + + + + + + + + +
+A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature` +value. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If label is used in `feature_columns`. +
+`ValueError` + +If weight_column is used in `feature_columns`. +
+`ValueError` + +If any of the given `feature_columns` is not a `_FeatureColumn` +instance. +
+`ValueError` + +If `weight_column` is not a `NumericColumn` instance. +
+`ValueError` + +If `label_key` is `None`. +
+ diff --git a/site/en/api_docs/python/tf/estimator/train_and_evaluate.md b/site/en/api_docs/python/tf/estimator/train_and_evaluate.md new file mode 100644 index 00000000000..035570fbe57 --- /dev/null +++ b/site/en/api_docs/python/tf/estimator/train_and_evaluate.md @@ -0,0 +1,274 @@ +description: Train and evaluate the estimator. + +
+ + +
+ +# tf.estimator.train_and_evaluate + + + + + + + + + +Train and evaluate the `estimator`. + + + + + + + + + +This utility function trains, evaluates, and (optionally) exports the model by +using the given `estimator`. All training related specification is held in +`train_spec`, including training `input_fn` and training max steps, etc. All +evaluation and export related specification is held in `eval_spec`, including +evaluation `input_fn`, steps, etc. + +This utility function provides consistent behavior for both local +(non-distributed) and distributed configurations. The default distribution +configuration is parameter server-based between-graph replication. For other +types of distribution configurations such as all-reduce training, please use +[DistributionStrategies](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/distribute). + +Overfitting: In order to avoid overfitting, it is recommended to set up the +training `input_fn` to shuffle the training data properly. + +Stop condition: In order to support both distributed and non-distributed +configuration reliably, the only supported stop condition for model +training is `train_spec.max_steps`. If `train_spec.max_steps` is `None`, the +model is trained forever. *Use with care* if model stop condition is +different. For example, assume that the model is expected to be trained with +one epoch of training data, and the training `input_fn` is configured to throw +`OutOfRangeError` after going through one epoch, which stops the +Estimator.train. For a three-training-worker distributed configuration, each +training worker is likely to go through the whole epoch independently. So, the +model will be trained with three epochs of training data instead of one epoch. + +Example of local (non-distributed) training: + +```python +# Set up feature columns. +categorial_feature_a = categorial_column_with_hash_bucket(...) +categorial_feature_a_emb = embedding_column( + categorical_column=categorial_feature_a, ...) +... # other feature columns + +estimator = DNNClassifier( + feature_columns=[categorial_feature_a_emb, ...], + hidden_units=[1024, 512, 256]) + +# Or set up the model directory +# estimator = DNNClassifier( +# config=tf.estimator.RunConfig( +# model_dir='/my_model', save_summary_steps=100), +# feature_columns=[categorial_feature_a_emb, ...], +# hidden_units=[1024, 512, 256]) + +# Input pipeline for train and evaluate. +def train_input_fn(): # returns x, y + # please shuffle the data. + pass +def eval_input_fn(): # returns x, y + pass + +train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000) +eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn) + +tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) +``` +Note that in current implementation `estimator.evaluate` will be called +multiple times. This means that evaluation graph (including eval_input_fn) +will be re-created for each `evaluate` call. `estimator.train` will be called +only once. + +Example of distributed training: + +Regarding the example of distributed training, the code above can be used +without a change (Please do make sure that the RunConfig.model_dir for all +workers is set to the same directory, i.e., a shared file system all workers +can read and write). The only extra work to do is setting the environment +variable `TF_CONFIG` properly for each worker correspondingly. + +Also see +[Distributed TensorFlow](https://www.tensorflow.org/deploy/distributed). + +Setting environment variable depends on the platform. 
For example, on Linux, +it can be done as follows (`$` is the shell prompt): + +``` +$ TF_CONFIG='' python train_model.py +``` + +For the content in `TF_CONFIG`, assume that the training cluster spec looks +like: + +``` +cluster = {"chief": ["host0:2222"], + "worker": ["host1:2222", "host2:2222", "host3:2222"], + "ps": ["host4:2222", "host5:2222"]} +``` + +Example of `TF_CONFIG` for chief training worker (must have one and only one): + +``` +# This should be a JSON string, which is set as environment variable. Usually +# the cluster manager handles that. +TF_CONFIG='{ + "cluster": { + "chief": ["host0:2222"], + "worker": ["host1:2222", "host2:2222", "host3:2222"], + "ps": ["host4:2222", "host5:2222"] + }, + "task": {"type": "chief", "index": 0} +}' +``` +Note that the chief worker also does the model training job, similar to other +non-chief training workers (see next paragraph). In addition to the model +training, it manages some extra work, e.g., checkpoint saving and restoring, +writing summaries, etc. + +Example of `TF_CONFIG` for non-chief training worker (optional, could be +multiple): + +``` +# This should be a JSON string, which is set as environment variable. Usually +# the cluster manager handles that. +TF_CONFIG='{ + "cluster": { + "chief": ["host0:2222"], + "worker": ["host1:2222", "host2:2222", "host3:2222"], + "ps": ["host4:2222", "host5:2222"] + }, + "task": {"type": "worker", "index": 0} +}' +``` +where the `task.index` should be set as 0, 1, 2, in this example, respectively +for non-chief training workers. + +Example of `TF_CONFIG` for parameter server, aka ps (could be multiple): + +``` +# This should be a JSON string, which is set as environment variable. Usually +# the cluster manager handles that. +TF_CONFIG='{ + "cluster": { + "chief": ["host0:2222"], + "worker": ["host1:2222", "host2:2222", "host3:2222"], + "ps": ["host4:2222", "host5:2222"] + }, + "task": {"type": "ps", "index": 0} +}' +``` +where the `task.index` should be set as 0 and 1, in this example, respectively +for parameter servers. + +Example of `TF_CONFIG` for evaluator task. Evaluator is a special task that is +not part of the training cluster. There could be only one. It is used for +model evaluation. + +``` +# This should be a JSON string, which is set as environment variable. Usually +# the cluster manager handles that. +TF_CONFIG='{ + "cluster": { + "chief": ["host0:2222"], + "worker": ["host1:2222", "host2:2222", "host3:2222"], + "ps": ["host4:2222", "host5:2222"] + }, + "task": {"type": "evaluator", "index": 0} +}' +``` + +When `distribute` or `experimental_distribute.train_distribute` and +`experimental_distribute.remote_cluster` is set, this method will start a +client running on the current host which connects to the `remote_cluster` for +training and evaluation. + + + + + + + + + + + + + + + + +
+`estimator` + +An `Estimator` instance to train and evaluate. +
+`train_spec` + +A `TrainSpec` instance to specify the training specification. +
+`eval_spec` + +A `EvalSpec` instance to specify the evaluation and export +specification. +
+ + + + + + + + + + + +
+A tuple of the result of the `evaluate` call to the `Estimator` and the +export results using the specified `ExportStrategy`. +Currently, the return value is undefined for distributed training mode. +
+ + + + + + + + + + + + +
+`ValueError` + +if environment variable `TF_CONFIG` is incorrectly set. +
+ diff --git a/site/en/api_docs/python/tf/executing_eagerly.md b/site/en/api_docs/python/tf/executing_eagerly.md new file mode 100644 index 00000000000..88f69fefd3b --- /dev/null +++ b/site/en/api_docs/python/tf/executing_eagerly.md @@ -0,0 +1,100 @@ +description: Checks whether the current thread has eager execution enabled. + +
+ + +
+ +# tf.executing_eagerly + + + + + + + + + +Checks whether the current thread has eager execution enabled. + + + + + + + +Eager execution is enabled by default and this API returns `True` +in most of cases. However, this API might return `False` in the following use +cases. + +* Executing inside tf.function, unless under tf.init_scope or + tf.config.experimental_run_functions_eagerly(True) is previously called. +* Executing inside a transformation function for `tf.dataset`. +* tf.compat.v1.disable_eager_execution() is called. + +#### General case: + + + +``` +>>> print(tf.executing_eagerly()) +True +``` + +Inside tf.function: + +``` +>>> @tf.function +... def fn(): +... with tf.init_scope(): +... print(tf.executing_eagerly()) +... print(tf.executing_eagerly()) +>>> fn() +True +False +``` + +Inside tf.function after + +tf.config.experimental_run_functions_eagerly(True) is called: +>>> tf.config.experimental_run_functions_eagerly(True) +>>> @tf.function +... def fn(): +... with tf.init_scope(): +... print(tf.executing_eagerly()) +... print(tf.executing_eagerly()) +>>> fn() +True +True +>>> tf.config.experimental_run_functions_eagerly(False) + +Inside a transformation function for `tf.dataset`: + +``` +>>> def data_fn(x): +... print(tf.executing_eagerly()) +... return x +>>> dataset = tf.data.Dataset.range(100) +>>> dataset = dataset.map(data_fn) +False +``` + + + + + + + + + +
+`True` if the current thread has eager execution enabled. +
+ diff --git a/site/en/api_docs/python/tf/expand_dims.md b/site/en/api_docs/python/tf/expand_dims.md new file mode 100644 index 00000000000..c39d7936099 --- /dev/null +++ b/site/en/api_docs/python/tf/expand_dims.md @@ -0,0 +1,163 @@ +description: Returns a tensor with an additional dimension inserted at index axis. + +
+ + +
+ +# tf.expand_dims + + + + + + + + + +Returns a tensor with an additional dimension inserted at index `axis`. + + + + + + + +Given a tensor `input`, this operation inserts a dimension of size 1 at the +dimension index `axis` of `input`'s shape. The dimension index `axis` starts +at zero; if you specify a negative number for `axis` it is counted backward +from the end. + +This operation is useful if you want to add a batch dimension to a single +element. For example, if you have a single image of shape `[height, width, +channels]`, you can make it a batch of one image with `expand_dims(image, 0)`, +which will make the shape `[1, height, width, channels]`. + +#### Examples: + + + +``` +>>> t = [[1, 2, 3],[4, 5, 6]] # shape [2, 3] +``` + +``` +>>> tf.expand_dims(t, 0) + +``` + +``` +>>> tf.expand_dims(t, 1) + +``` + +``` +>>> tf.expand_dims(t, 2) + +``` + +``` +>>> tf.expand_dims(t, -1) # Last dimension index. In this case, same as 2. + +``` + +This operation is related to: + +* tf.squeeze, which removes dimensions of size 1. +* tf.reshape, which provides more flexible reshaping capability + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`axis` + +Integer specifying the dimension index at which to expand the +shape of `input`. Given an input of D dimensions, `axis` must be in range +`[-(D+1), D]` (inclusive). +
+`name` + +Optional string. The name of the output `Tensor`. +
+ + + + + + + + + + + +
+A tensor with the same data as `input`, with an additional dimension +inserted at the index specified by `axis`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `axis` is not specified. +
+`InvalidArgumentError` + +If `axis` is out of range `[-(D+1), D]`. +
+ diff --git a/site/en/api_docs/python/tf/experimental.md b/site/en/api_docs/python/tf/experimental.md new file mode 100644 index 00000000000..f39609e075d --- /dev/null +++ b/site/en/api_docs/python/tf/experimental.md @@ -0,0 +1,35 @@ +description: Public API for tf.experimental namespace. + +
+ + +
+ +# Module: tf.experimental + + + + + + + + + +Public API for tf.experimental namespace. + + + +## Modules + +[`dlpack`](../tf/experimental/dlpack.md) module: Public API for tf.experimental.dlpack namespace. + +[`tensorrt`](../tf/experimental/tensorrt.md) module: Public API for tf.experimental.tensorrt namespace. + +## Functions + +[`async_clear_error(...)`](../tf/experimental/async_clear_error.md): Clear pending operations and error statuses in async execution. + +[`async_scope(...)`](../tf/experimental/async_scope.md): Context manager for grouping async operations. + +[`function_executor_type(...)`](../tf/experimental/function_executor_type.md): Context manager for setting the executor of eager defined functions. + diff --git a/site/en/api_docs/python/tf/experimental/async_clear_error.md b/site/en/api_docs/python/tf/experimental/async_clear_error.md new file mode 100644 index 00000000000..08288cca84a --- /dev/null +++ b/site/en/api_docs/python/tf/experimental/async_clear_error.md @@ -0,0 +1,61 @@ +description: Clear pending operations and error statuses in async execution. + +
+ + +
+ +# tf.experimental.async_clear_error + + + + + + + + + +Clear pending operations and error statuses in async execution. + + + + + + + + + +In async execution mode, an error in op/function execution can lead to errors +in subsequent ops/functions that are scheduled but not yet executed. Calling +this method clears all pending operations and reset the async execution state. + +#### Example: + + + +``` +while True: + try: + # Step function updates the metric `loss` internally + train_step_fn() + except tf.errors.OutOfRangeError: + tf.experimental.async_clear_error() + break +logging.info('loss =', loss.numpy()) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/experimental/async_scope.md b/site/en/api_docs/python/tf/experimental/async_scope.md new file mode 100644 index 00000000000..583689eb7d8 --- /dev/null +++ b/site/en/api_docs/python/tf/experimental/async_scope.md @@ -0,0 +1,69 @@ +description: Context manager for grouping async operations. + +
+ + +
+ +# tf.experimental.async_scope + + + + + + + + + +Context manager for grouping async operations. + + + + + + + + + +Ops/function calls inside the scope can return before finishing the actual +execution. When exiting the async scope, a synchronization barrier will be +automatically added to ensure the completion of all async op and function +execution, potentially raising exceptions if async execution results in +an error state. + +Users may write the following code to asynchronuously invoke `train_step_fn` +and log the `loss` metric for every `num_steps` steps in a training loop. +`train_step_fn` internally consumes data using `iterator.get_next()`, and may +throw OutOfRangeError when running out of data. In the case: + +``` +try: + with tf.experimental.async_scope(): + for _ in range(num_steps): + # Step function updates the metric `loss` internally + train_step_fn() +except tf.errors.OutOfRangeError: + tf.experimental.async_clear_error() +logging.info('loss =', loss.numpy()) +``` + +#### Yields: + +Context manager for grouping async operations. diff --git a/site/en/api_docs/python/tf/experimental/dlpack.md b/site/en/api_docs/python/tf/experimental/dlpack.md new file mode 100644 index 00000000000..5c5d7879c14 --- /dev/null +++ b/site/en/api_docs/python/tf/experimental/dlpack.md @@ -0,0 +1,27 @@ +description: Public API for tf.experimental.dlpack namespace. + +
+ + +
+ +# Module: tf.experimental.dlpack + + + + + + + + + +Public API for tf.experimental.dlpack namespace. + + + +## Functions + +[`from_dlpack(...)`](../../tf/experimental/dlpack/from_dlpack.md): Returns the Tensorflow eager tensor. + +[`to_dlpack(...)`](../../tf/experimental/dlpack/to_dlpack.md): Returns the dlpack capsule representing the tensor. + diff --git a/site/en/api_docs/python/tf/experimental/dlpack/from_dlpack.md b/site/en/api_docs/python/tf/experimental/dlpack/from_dlpack.md new file mode 100644 index 00000000000..e558636e23a --- /dev/null +++ b/site/en/api_docs/python/tf/experimental/dlpack/from_dlpack.md @@ -0,0 +1,71 @@ +description: Returns the Tensorflow eager tensor. + +
+ + +
+ +# tf.experimental.dlpack.from_dlpack + + + + + + + + + +Returns the Tensorflow eager tensor. + + + + + + + +The returned tensor uses the memory shared by dlpack capsules from other +framework. + + ```python + a = tf.experimental.dlpack.from_dlpack(dlcapsule) + # `a` uses the memory shared by dlpack + ``` + + + + + + + + + + +
+`dlcapsule` + +A PyCapsule named as dltensor +
+ + + + + + + + + + + +
+A Tensorflow eager tensor +
+ diff --git a/site/en/api_docs/python/tf/experimental/dlpack/to_dlpack.md b/site/en/api_docs/python/tf/experimental/dlpack/to_dlpack.md new file mode 100644 index 00000000000..9de69905ca3 --- /dev/null +++ b/site/en/api_docs/python/tf/experimental/dlpack/to_dlpack.md @@ -0,0 +1,72 @@ +description: Returns the dlpack capsule representing the tensor. + +
+ + +
+ +# tf.experimental.dlpack.to_dlpack + + + + + + + + + +Returns the dlpack capsule representing the tensor. + + + + + + + +This operation ensures the underlying data memory is ready when returns. + + ```python + a = tf.tensor([1, 10]) + dlcapsule = tf.experimental.dlpack.to_dlpack(a) + # dlcapsule represents the dlpack data structure + ``` + + + + + + + + + + +
+`tf_tensor` + +Tensorflow eager tensor, to be converted to dlpack capsule. +
+ + + + + + + + + + + +
+A PyCapsule named as dltensor, which shares the underlying memory to other +framework. This PyCapsule can be consumed only once. +
+ diff --git a/site/en/api_docs/python/tf/experimental/function_executor_type.md b/site/en/api_docs/python/tf/experimental/function_executor_type.md new file mode 100644 index 00000000000..579abb7ad09 --- /dev/null +++ b/site/en/api_docs/python/tf/experimental/function_executor_type.md @@ -0,0 +1,69 @@ +description: Context manager for setting the executor of eager defined functions. + +
+ + +
+ +# tf.experimental.function_executor_type + + + + + + + + + +Context manager for setting the executor of eager defined functions. + + + + + + + + + +Eager defined functions are functions decorated by tf.contrib.eager.defun. + + + + + + + + + + +
+`executor_type` + +a string for the name of the executor to be used to execute +functions defined by tf.contrib.eager.defun. +
+ + + +#### Yields: + +Context manager for setting the executor of eager defined functions. diff --git a/site/en/api_docs/python/tf/experimental/tensorrt.md b/site/en/api_docs/python/tf/experimental/tensorrt.md new file mode 100644 index 00000000000..cd566b0834c --- /dev/null +++ b/site/en/api_docs/python/tf/experimental/tensorrt.md @@ -0,0 +1,27 @@ +description: Public API for tf.experimental.tensorrt namespace. + +
+ + +
+ +# Module: tf.experimental.tensorrt + + + + + + + + + +Public API for tf.experimental.tensorrt namespace. + + + +## Classes + +[`class ConversionParams`](../../tf/experimental/tensorrt/ConversionParams.md): Parameters that are used for TF-TRT conversion. + +[`class Converter`](../../tf/experimental/tensorrt/Converter.md): An offline converter for TF-TRT transformation for TF 2.0 SavedModels. + diff --git a/site/en/api_docs/python/tf/experimental/tensorrt/ConversionParams.md b/site/en/api_docs/python/tf/experimental/tensorrt/ConversionParams.md new file mode 100644 index 00000000000..f1e7f39a724 --- /dev/null +++ b/site/en/api_docs/python/tf/experimental/tensorrt/ConversionParams.md @@ -0,0 +1,176 @@ +description: Parameters that are used for TF-TRT conversion. + +
+ + + + + + + + + + + + +
+ +# tf.experimental.tensorrt.ConversionParams + + + + + + + + + +Parameters that are used for TF-TRT conversion. + + + + + + + + +#### Fields: + + +* `rewriter_config_template`: a template RewriterConfig proto used to create a + TRT-enabled RewriterConfig. If None, it will use a default one. +* `max_workspace_size_bytes`: the maximum GPU temporary memory which the TRT + engine can use at execution time. This corresponds to the + 'workspaceSize' parameter of nvinfer1::IBuilder::setMaxWorkspaceSize(). +* `precision_mode`: one the strings in + TrtPrecisionMode.supported_precision_modes(). +* `minimum_segment_size`: the minimum number of nodes required for a subgraph + to be replaced by TRTEngineOp. +* `is_dynamic_op`: whether to generate dynamic TRT ops which will build the + TRT network and engine at run time. i.e. Since TensorRT version < 6.0 + does not support dynamic dimensions other than the batch dimension, when + the TensorFlow graph has a non-batch dimension of dynamic size, we would + need to enable this option. This option should be set to True in TF 2.0. +* `maximum_cached_engines`: max number of cached TRT engines for dynamic TRT + ops. Created TRT engines for a dynamic dimension are cached. This is the + maximum number of engines that can be cached. If the number of cached + engines is already at max but none of them supports the input shapes, + the TRTEngineOp will fall back to run the original TF subgraph that + corresponds to the TRTEngineOp. +* `use_calibration`: this argument is ignored if precision_mode is not INT8. + If set to True, a calibration graph will be created to calibrate the + missing ranges. The calibration graph must be converted to an inference + graph by running calibration with calibrate(). If set to False, + quantization nodes will be expected for every tensor in the graph + (excluding those which will be fused). If a range is missing, an error + will occur. Please note that accuracy may be negatively affected if + there is a mismatch between which tensors TRT quantizes and which + tensors were trained with fake quantization. +* `max_batch_size`: max size for the input batch. This parameter is only + effective when is_dynamic_op=False which is not supported in TF 2.0. +* `allow_build_at_runtime`: whether to build TensorRT engines during runtime. + If no TensorRT engine can be found in cache that can handle the given + inputs during runtime, then a new TensorRT engine is built at runtime if + allow_build_at_runtime=True, and otherwise native TF is used. This + argument is only effective if is_dynamic_op=True. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`rewriter_config_template` + + +
+`max_workspace_size_bytes` + + +
+`precision_mode` + + +
+`minimum_segment_size` + + +
+`is_dynamic_op` + + +
+`maximum_cached_engines` + + +
+`use_calibration` + + +
+`max_batch_size` + + +
+`allow_build_at_runtime` + + +
+ + + +## Class Variables + +* `allow_build_at_runtime` +* `is_dynamic_op` +* `max_batch_size` +* `max_workspace_size_bytes` +* `maximum_cached_engines` +* `minimum_segment_size` +* `precision_mode` +* `rewriter_config_template` +* `use_calibration` diff --git a/site/en/api_docs/python/tf/experimental/tensorrt/Converter.md b/site/en/api_docs/python/tf/experimental/tensorrt/Converter.md new file mode 100644 index 00000000000..29f4158303b --- /dev/null +++ b/site/en/api_docs/python/tf/experimental/tensorrt/Converter.md @@ -0,0 +1,352 @@ +description: An offline converter for TF-TRT transformation for TF 2.0 SavedModels. + +
+ + + + + + +
+ +# tf.experimental.tensorrt.Converter + + + + + + + + + +An offline converter for TF-TRT transformation for TF 2.0 SavedModels. + + + + + + + +Currently this is not available on Windows platform. + +Note that in V2, is_dynamic_op=False is not supported, meaning TRT engines +will be built only when the corresponding TRTEngineOp is executed. But we +still provide a way to avoid the cost of building TRT engines during inference +(see more below). + +There are several ways to run the conversion: + +1. FP32/FP16 precision + + ```python + params = tf.experimental.tensorrt.ConversionParams( + precision_mode='FP16') + converter = tf.experimental.tensorrt.Converter( + input_saved_model_dir="my_dir", conversion_params=params) + converter.convert() + converter.save(output_saved_model_dir) + ``` + + In this case, no TRT engines will be built or saved in the converted + SavedModel. But if input data is available during conversion, we can still + build and save the TRT engines to reduce the cost during inference (see + option 2 below). + +2. FP32/FP16 precision with pre-built engines + + ```python + params = tf.experimental.tensorrt.ConversionParams( + precision_mode='FP16', + # Set this to a large enough number so it can cache all the engines. + maximum_cached_engines=16) + converter = tf.experimental.tensorrt.Converter( + input_saved_model_dir="my_dir", conversion_params=params) + converter.convert() + + # Define a generator function that yields input data, and use it to execute + # the graph to build TRT engines. + # With TensorRT 5.1, different engines will be built (and saved later) for + # different input shapes to the TRTEngineOp. + def my_input_fn(): + for _ in range(num_runs): + inp1, inp2 = ... + yield inp1, inp2 + + converter.build(input_fn=my_input_fn) # Generate corresponding TRT engines + converter.save(output_saved_model_dir) # Generated engines will be saved. + ``` + + In this way, one engine will be built/saved for each unique input shapes of + the TRTEngineOp. This is good for applications that cannot afford building + engines during inference but have access to input data that is similar to + the one used in production (for example, that has the same input shapes). + Also, the generated TRT engines is platform dependent, so we need to run + `build()` in an environment that is similar to production (e.g. with + same type of GPU). + +3. INT8 precision and calibration with pre-built engines + + ```python + params = tf.experimental.tensorrt.ConversionParams( + precision_mode='INT8', + # Currently only one INT8 engine is supported in this mode. + maximum_cached_engines=1, + use_calibration=True) + converter = tf.experimental.tensorrt.Converter( + input_saved_model_dir="my_dir", conversion_params=params) + + # Define a generator function that yields input data, and run INT8 + # calibration with the data. All input data should have the same shape. + # At the end of convert(), the calibration stats (e.g. range information) + # will be saved and can be used to generate more TRT engines with different + # shapes. Also, one TRT engine will be generated (with the same shape as + # the calibration data) for save later. + def my_calibration_input_fn(): + for _ in range(num_runs): + inp1, inp2 = ... + yield inp1, inp2 + + converter.convert(calibration_input_fn=my_calibration_input_fn) + + # (Optional) Generate more TRT engines offline (same as the previous + # option), to avoid the cost of generating them during inference. + def my_input_fn(): + for _ in range(num_runs): + inp1, inp2 = ... 
+ yield inp1, inp2 + converter.build(input_fn=my_input_fn) + + # Save the TRT engine and the engines. + converter.save(output_saved_model_dir) + ``` + + + + + + + + + + + + + + + + + + + +
+
+`input_saved_model_dir`
+
+the directory to load the SavedModel which contains
+the input graph to transform. Used only when input_graph_def is None.
+
+`input_saved_model_tags` + +list of tags to load the SavedModel. +
+`input_saved_model_signature_key` + +the key of the signature to optimize the +graph for. +
+`conversion_params` + +a TrtConversionParams instance. +
+ + + + + + + + + + + + +
+`ValueError` + +if the combination of the parameters is invalid. +
+ + + +## Methods + +

build

+ +View source + + + +Run inference with converted graph in order to build TensorRT engines. + + + + + + + + + + + +
Args
+
+`input_fn`
+
+
+a generator function that yields input data as a list or tuple,
+which will be used to execute the converted signature to generate TRT
+engines. Example:
+`def input_fn():
+  # Let's assume a network with 2 input tensors. We generate 3 sets
+  # of dummy input data:
+  input_shapes = [[(1, 16), (2, 16)],  # 1st input list
+                  [(2, 32), (4, 32)],  # 2nd list of two tensors
+                  [(4, 32), (8, 32)]]  # 3rd input list
+  for shapes in input_shapes:
+    # return a list of input tensors
+    yield [np.zeros(x).astype(np.float32) for x in shapes]`
+
+ + + + + + + + + + + + + + + +
Raises
+
+`NotImplementedError`
+
+
+if build() has already been called.
+
+
+`RuntimeError`
+
+
+if the input_fn is None.
+
+ + + +

convert

+ +View source + + + +Convert the input SavedModel in 2.0 format. + + + + + + + + + + + +
Args
+`calibration_input_fn` + +a generator function that yields input data as a +list or tuple, which will be used to execute the converted signature for +calibration. All the returned input data should have the same shape. +Example: `def input_fn(): yield input1, input2, input3` +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if the input combination is invalid. +
+ + + + + + + + + + + +
Returns
+The TF-TRT converted Function. +
+ + + +

save

+ +View source + + + +Save the converted SavedModel. + + + + + + + + + + + +
Args
+
+`output_saved_model_dir`
+
+directory to save the converted SavedModel.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/extract_volume_patches.md b/site/en/api_docs/python/tf/extract_volume_patches.md new file mode 100644 index 00000000000..dc5d0459a81 --- /dev/null +++ b/site/en/api_docs/python/tf/extract_volume_patches.md @@ -0,0 +1,110 @@ +description: Extract patches from input and put them in the "depth" output dimension. 3D extension of extract_image_patches. + +
+ + +
+ +# tf.extract_volume_patches + + + + + + + + + +Extract `patches` from `input` and put them in the "depth" output dimension. 3D extension of `extract_image_patches`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +5-D Tensor with shape `[batch, in_planes, in_rows, in_cols, depth]`. +
+`ksizes` + +A list of `ints` that has length `>= 5`. +The size of the sliding window for each dimension of `input`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D of length 5. How far the centers of two consecutive patches are in +`input`. Must be: `[1, stride_planes, stride_rows, stride_cols, 1]`. +
+
+`padding`
+
+
+A `string` from: `"SAME", "VALID"`.
+The type of padding algorithm to use.
+
+We specify the size-related attributes as:
+
+```python
+ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1]
+strides = [1, stride_planes, stride_rows, stride_cols, 1]
+```
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
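+
+A minimal, hedged example of the op (the input values are illustrative; the
+patch layout follows the description above):
+
+```python
+import tensorflow as tf
+
+# A single 4x4x4 volume with one channel.
+volume = tf.reshape(tf.range(64, dtype=tf.float32), [1, 4, 4, 4, 1])
+
+# Extract non-overlapping 2x2x2 patches; each patch is flattened into the
+# "depth" dimension of the output.
+patches = tf.extract_volume_patches(
+    input=volume,
+    ksizes=[1, 2, 2, 2, 1],
+    strides=[1, 2, 2, 2, 1],
+    padding='VALID')
+
+print(patches.shape)  # (1, 2, 2, 2, 8)
+```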
+ diff --git a/site/en/api_docs/python/tf/eye.md b/site/en/api_docs/python/tf/eye.md new file mode 100644 index 00000000000..c94efe824f6 --- /dev/null +++ b/site/en/api_docs/python/tf/eye.md @@ -0,0 +1,125 @@ +description: Construct an identity matrix, or a batch of matrices. + +
+ + +
+ +# tf.eye + + + + + + + + + +Construct an identity matrix, or a batch of matrices. + + + + + + + + + +```python +# Construct one identity matrix. +tf.eye(2) +==> [[1., 0.], + [0., 1.]] + +# Construct a batch of 3 identity matrices, each 2 x 2. +# batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2. +batch_identity = tf.eye(2, batch_shape=[3]) + +# Construct one 2 x 3 "identity" matrix +tf.eye(2, num_columns=3) +==> [[ 1., 0., 0.], + [ 0., 1., 0.]] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`num_rows` + +Non-negative `int32` scalar `Tensor` giving the number of rows +in each batch matrix. +
+`num_columns` + +Optional non-negative `int32` scalar `Tensor` giving the number +of columns in each batch matrix. Defaults to `num_rows`. +
+`batch_shape` + +A list or tuple of Python integers or a 1-D `int32` `Tensor`. +If provided, the returned `Tensor` will have leading batch dimensions of +this shape. +
+
+`dtype`
+
+The type of an element in the resulting `Tensor`.
+
+`name` + +A name for this `Op`. Defaults to "eye". +
+ + + + + + + + + + + +
+A `Tensor` of shape `batch_shape + [num_rows, num_columns]` +
+ diff --git a/site/en/api_docs/python/tf/feature_column.md b/site/en/api_docs/python/tf/feature_column.md new file mode 100644 index 00000000000..489c5ebeec6 --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column.md @@ -0,0 +1,57 @@ +description: Public API for tf.feature_column namespace. + +
+ + +
+ +# Module: tf.feature_column + + + + + + + + + +Public API for tf.feature_column namespace. + + + +## Functions + +[`bucketized_column(...)`](../tf/feature_column/bucketized_column.md): Represents discretized dense input bucketed by `boundaries`. + +[`categorical_column_with_hash_bucket(...)`](../tf/feature_column/categorical_column_with_hash_bucket.md): Represents sparse feature where ids are set by hashing. + +[`categorical_column_with_identity(...)`](../tf/feature_column/categorical_column_with_identity.md): A `CategoricalColumn` that returns identity values. + +[`categorical_column_with_vocabulary_file(...)`](../tf/feature_column/categorical_column_with_vocabulary_file.md): A `CategoricalColumn` with a vocabulary file. + +[`categorical_column_with_vocabulary_list(...)`](../tf/feature_column/categorical_column_with_vocabulary_list.md): A `CategoricalColumn` with in-memory vocabulary. + +[`crossed_column(...)`](../tf/feature_column/crossed_column.md): Returns a column for performing crosses of categorical features. + +[`embedding_column(...)`](../tf/feature_column/embedding_column.md): `DenseColumn` that converts from sparse, categorical input. + +[`indicator_column(...)`](../tf/feature_column/indicator_column.md): Represents multi-hot representation of given categorical column. + +[`make_parse_example_spec(...)`](../tf/feature_column/make_parse_example_spec.md): Creates parsing spec dictionary from input feature_columns. + +[`numeric_column(...)`](../tf/feature_column/numeric_column.md): Represents real valued or numerical features. + +[`sequence_categorical_column_with_hash_bucket(...)`](../tf/feature_column/sequence_categorical_column_with_hash_bucket.md): A sequence of categorical terms where ids are set by hashing. + +[`sequence_categorical_column_with_identity(...)`](../tf/feature_column/sequence_categorical_column_with_identity.md): Returns a feature column that represents sequences of integers. + +[`sequence_categorical_column_with_vocabulary_file(...)`](../tf/feature_column/sequence_categorical_column_with_vocabulary_file.md): A sequence of categorical terms where ids use a vocabulary file. + +[`sequence_categorical_column_with_vocabulary_list(...)`](../tf/feature_column/sequence_categorical_column_with_vocabulary_list.md): A sequence of categorical terms where ids use an in-memory list. + +[`sequence_numeric_column(...)`](../tf/feature_column/sequence_numeric_column.md): Returns a feature column that represents sequences of numeric data. + +[`shared_embeddings(...)`](../tf/feature_column/shared_embeddings.md): List of dense columns that convert from sparse, categorical input. + +[`weighted_categorical_column(...)`](../tf/feature_column/weighted_categorical_column.md): Applies weight values to a `CategoricalColumn`. + diff --git a/site/en/api_docs/python/tf/feature_column/bucketized_column.md b/site/en/api_docs/python/tf/feature_column/bucketized_column.md new file mode 100644 index 00000000000..6bc8936638e --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/bucketized_column.md @@ -0,0 +1,160 @@ +description: Represents discretized dense input bucketed by boundaries. + +
+ + +
+ +# tf.feature_column.bucketized_column + + + + + + + + + +Represents discretized dense input bucketed by `boundaries`. + + + + + + + + + +Buckets include the left boundary, and exclude the right boundary. Namely, +`boundaries=[0., 1., 2.]` generates buckets `(-inf, 0.)`, `[0., 1.)`, +`[1., 2.)`, and `[2., +inf)`. + +For example, if the inputs are + +```python +boundaries = [0, 10, 100] +input tensor = [[-5, 10000] + [150, 10] + [5, 100]] +``` + +then the output will be + +```python +output = [[0, 3] + [3, 2] + [1, 3]] +``` + +#### Example: + + + +```python +price = tf.feature_column.numeric_column('price') +bucketized_price = tf.feature_column.bucketized_column( + price, boundaries=[...]) +columns = [bucketized_price, ...] +features = tf.io.parse_example( + ..., features=tf.feature_column.make_parse_example_spec(columns)) +dense_tensor = tf.keras.layers.DenseFeatures(columns)(features) +``` + +`bucketized_column` can also be crossed with another categorical column using +`crossed_column`: + +```python +price = tf.feature_column.numeric_column('price') +# bucketized_column converts numerical feature to a categorical one. +bucketized_price = tf.feature_column.bucketized_column( + price, boundaries=[...]) +# 'keywords' is a string feature. +price_x_keywords = tf.feature_column.crossed_column( + [bucketized_price, 'keywords'], 50K) +columns = [price_x_keywords, ...] +features = tf.io.parse_example( + ..., features=tf.feature_column.make_parse_example_spec(columns)) +dense_tensor = tf.keras.layers.DenseFeatures(columns)(features) +linear_model = tf.keras.experimental.LinearModel(units=...)(dense_tensor) +``` + + + + + + + + + + + + + +
+`source_column` + +A one-dimensional dense column which is generated with +`numeric_column`. +
+`boundaries` + +A sorted list or tuple of floats specifying the boundaries. +
+ + + + + + + + + + + +
+A `BucketizedColumn`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `source_column` is not a numeric column, or if it is not +one-dimensional. +
+`ValueError` + +If `boundaries` is not a sorted list or tuple. +
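+
+A concrete, runnable variant of the example above (the boundaries and inputs
+are illustrative):
+
+```python
+import tensorflow as tf
+
+price = tf.feature_column.numeric_column('price')
+bucketized_price = tf.feature_column.bucketized_column(
+    price, boundaries=[0., 10., 100.])
+
+features = {'price': tf.constant([[-5.], [15.], [150.]])}
+dense_tensor = tf.keras.layers.DenseFeatures([bucketized_price])(features)
+print(dense_tensor.numpy())
+# One-hot over the 4 buckets (-inf, 0), [0, 10), [10, 100), [100, +inf):
+# [[1. 0. 0. 0.]
+#  [0. 0. 1. 0.]
+#  [0. 0. 0. 1.]]
+```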
+ diff --git a/site/en/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket.md b/site/en/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket.md new file mode 100644 index 00000000000..a86e16c9396 --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket.md @@ -0,0 +1,141 @@ +description: Represents sparse feature where ids are set by hashing. + +
+ + +
+ +# tf.feature_column.categorical_column_with_hash_bucket + + + + + + + + + +Represents sparse feature where ids are set by hashing. + + + + + + + + + +Use this when your sparse features are in string or integer format, and you +want to distribute your inputs into a finite number of buckets by hashing. +output_id = Hash(input_feature_string) % bucket_size for string type input. +For int type input, the value is converted to its string representation first +and then hashed by the same formula. + +For input dictionary `features`, `features[key]` is either `Tensor` or +`SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int +and `''` for string, which will be dropped by this feature column. + +#### Example: + + + +```python +keywords = categorical_column_with_hash_bucket("keywords", 10K) +columns = [keywords, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction = linear_model(features, columns) + +# or +keywords_embedded = embedding_column(keywords, 16) +columns = [keywords_embedded, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +dense_tensor = input_layer(features, columns) +``` + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input feature. It is used as the +column name and the dictionary key for feature parsing configs, feature +`Tensor` objects, and feature columns. +
+`hash_bucket_size` + +An int > 1. The number of buckets. +
+`dtype` + +The type of features. Only string and integer types are supported. +
+ + + + + + + + + + + +
+A `HashedCategoricalColumn`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +`hash_bucket_size` is not greater than 1. +
+`ValueError` + +`dtype` is neither string nor integer. +
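+
+A small, runnable sketch of the hashing behaviour (the bucket size and inputs
+are illustrative; the column is wrapped in `indicator_column` so it can be fed
+to `tf.keras.layers.DenseFeatures`):
+
+```python
+import tensorflow as tf
+
+keywords = tf.feature_column.categorical_column_with_hash_bucket(
+    'keywords', hash_bucket_size=10)
+keywords_indicator = tf.feature_column.indicator_column(keywords)
+
+features = {'keywords': tf.constant([['tensorflow'], ['tensorrt']])}
+dense_tensor = tf.keras.layers.DenseFeatures([keywords_indicator])(features)
+print(dense_tensor.shape)  # (2, 10): one multi-hot row per example
+```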
+ diff --git a/site/en/api_docs/python/tf/feature_column/categorical_column_with_identity.md b/site/en/api_docs/python/tf/feature_column/categorical_column_with_identity.md new file mode 100644 index 00000000000..753769ec662 --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/categorical_column_with_identity.md @@ -0,0 +1,153 @@ +description: A CategoricalColumn that returns identity values. + +
+ + +
+ +# tf.feature_column.categorical_column_with_identity + + + + + + + + + +A `CategoricalColumn` that returns identity values. + + + + + + + + + +Use this when your inputs are integers in the range `[0, num_buckets)`, and +you want to use the input value itself as the categorical ID. Values outside +this range will result in `default_value` if specified, otherwise it will +fail. + +Typically, this is used for contiguous ranges of integer indexes, but +it doesn't have to be. This might be inefficient, however, if many of IDs +are unused. Consider `categorical_column_with_hash_bucket` in that case. + +For input dictionary `features`, `features[key]` is either `Tensor` or +`SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int +and `''` for string, which will be dropped by this feature column. + +In the following examples, each input in the range `[0, 1000000)` is assigned +the same value. All other inputs are assigned `default_value` 0. Note that a +literal 0 in inputs will result in the same default ID. + +#### Linear model: + + + +```python +video_id = categorical_column_with_identity( + key='video_id', num_buckets=1000000, default_value=0) +columns = [video_id, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction, _, _ = linear_model(features, columns) +``` + +Embedding for a DNN model: + +```python +columns = [embedding_column(video_id, 9),...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +dense_tensor = input_layer(features, columns) +``` + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input feature. It is used as the +column name and the dictionary key for feature parsing configs, feature +`Tensor` objects, and feature columns. +
+`num_buckets` + +Range of inputs and outputs is `[0, num_buckets)`. +
+`default_value` + +If set, values outside of range `[0, num_buckets)` will +be replaced with this value. If not set, values >= num_buckets will +cause a failure while values < 0 will be dropped. +
+ + + + + + + + + + + +
+A `CategoricalColumn` that returns identity values. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `num_buckets` is less than one. +
+`ValueError` + +if `default_value` is not in range `[0, num_buckets)`. +
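+
+A runnable sketch of the identity mapping and the `default_value` fallback
+(the bucket count and inputs are illustrative):
+
+```python
+import tensorflow as tf
+
+video_id = tf.feature_column.categorical_column_with_identity(
+    key='video_id', num_buckets=5, default_value=0)
+video_indicator = tf.feature_column.indicator_column(video_id)
+
+# 9 is out of range, so it falls back to default_value 0.
+features = {'video_id': tf.constant([[1], [3], [9]])}
+dense_tensor = tf.keras.layers.DenseFeatures([video_indicator])(features)
+print(dense_tensor.numpy())
+# [[0. 1. 0. 0. 0.]
+#  [0. 0. 0. 1. 0.]
+#  [1. 0. 0. 0. 0.]]
+```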
+ diff --git a/site/en/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file.md b/site/en/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file.md new file mode 100644 index 00000000000..614db7ddd08 --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file.md @@ -0,0 +1,202 @@ +description: A CategoricalColumn with a vocabulary file. + +
+ + +
+ +# tf.feature_column.categorical_column_with_vocabulary_file + + + + + + + + + +A `CategoricalColumn` with a vocabulary file. + + + + + + + +Use this when your inputs are in string or integer format, and you have a +vocabulary file that maps each value to an integer ID. By default, +out-of-vocabulary values are ignored. Use either (but not both) of +`num_oov_buckets` and `default_value` to specify how to include +out-of-vocabulary values. + +For input dictionary `features`, `features[key]` is either `Tensor` or +`SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int +and `''` for string, which will be dropped by this feature column. + +Example with `num_oov_buckets`: +File `'/us/states.txt'` contains 50 lines, each with a 2-character U.S. state +abbreviation. All inputs with values in that file are assigned an ID 0-49, +corresponding to its line number. All other values are hashed and assigned an +ID 50-54. + +```python +states = categorical_column_with_vocabulary_file( + key='states', vocabulary_file='/us/states.txt', vocabulary_size=50, + num_oov_buckets=5) +columns = [states, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction = linear_model(features, columns) +``` + +Example with `default_value`: +File `'/us/states.txt'` contains 51 lines - the first line is `'XX'`, and the +other 50 each have a 2-character U.S. state abbreviation. Both a literal +`'XX'` in input, and other values missing from the file, will be assigned +ID 0. All others are assigned the corresponding line number 1-50. + +```python +states = categorical_column_with_vocabulary_file( + key='states', vocabulary_file='/us/states.txt', vocabulary_size=51, + default_value=0) +columns = [states, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction, _, _ = linear_model(features, columns) +``` + +And to make an embedding with either: + +```python +columns = [embedding_column(states, 3),...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +dense_tensor = input_layer(features, columns) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input feature. It is used as the +column name and the dictionary key for feature parsing configs, feature +`Tensor` objects, and feature columns. +
+`vocabulary_file` + +The vocabulary file name. +
+
+`vocabulary_size`
+
+Number of elements in the vocabulary. This must be no greater than the
+length of `vocabulary_file`; if it is less, later values are ignored. If
+None, it is set to the length of `vocabulary_file`.
+
+`dtype` + +The type of features. Only string and integer types are supported. +
+`default_value` + +The integer ID value to return for out-of-vocabulary feature +values, defaults to `-1`. This can not be specified with a positive +`num_oov_buckets`. +
+`num_oov_buckets` + +Non-negative integer, the number of out-of-vocabulary +buckets. All out-of-vocabulary inputs will be assigned IDs in the range +`[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of +the input value. A positive `num_oov_buckets` can not be specified with +`default_value`. +
+ + + + + + + + + + + +
+A `CategoricalColumn` with a vocabulary file. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +`vocabulary_file` is missing or cannot be opened. +
+`ValueError` + +`vocabulary_size` is missing or < 1. +
+`ValueError` + +`num_oov_buckets` is a negative integer. +
+`ValueError` + +`num_oov_buckets` and `default_value` are both specified. +
+`ValueError` + +`dtype` is neither string nor integer. +
+ diff --git a/site/en/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list.md b/site/en/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list.md new file mode 100644 index 00000000000..4babdd3c2c1 --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list.md @@ -0,0 +1,197 @@ +description: A CategoricalColumn with in-memory vocabulary. + +
+ + +
+ +# tf.feature_column.categorical_column_with_vocabulary_list + + + + + + + + + +A `CategoricalColumn` with in-memory vocabulary. + + + + + + + + + +Use this when your inputs are in string or integer format, and you have an +in-memory vocabulary mapping each value to an integer ID. By default, +out-of-vocabulary values are ignored. Use either (but not both) of +`num_oov_buckets` and `default_value` to specify how to include +out-of-vocabulary values. + +For input dictionary `features`, `features[key]` is either `Tensor` or +`SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int +and `''` for string, which will be dropped by this feature column. + +Example with `num_oov_buckets`: +In the following example, each input in `vocabulary_list` is assigned an ID +0-3 corresponding to its index (e.g., input 'B' produces output 2). All other +inputs are hashed and assigned an ID 4-5. + +```python +colors = categorical_column_with_vocabulary_list( + key='colors', vocabulary_list=('R', 'G', 'B', 'Y'), + num_oov_buckets=2) +columns = [colors, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction, _, _ = linear_model(features, columns) +``` + +Example with `default_value`: +In the following example, each input in `vocabulary_list` is assigned an ID +0-4 corresponding to its index (e.g., input 'B' produces output 3). All other +inputs are assigned `default_value` 0. + + +```python +colors = categorical_column_with_vocabulary_list( + key='colors', vocabulary_list=('X', 'R', 'G', 'B', 'Y'), default_value=0) +columns = [colors, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction, _, _ = linear_model(features, columns) +``` + +And to make an embedding with either: + +```python +columns = [embedding_column(colors, 3),...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +dense_tensor = input_layer(features, columns) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input feature. It is used as the column +name and the dictionary key for feature parsing configs, feature `Tensor` +objects, and feature columns. +
+`vocabulary_list` + +An ordered iterable defining the vocabulary. Each feature +is mapped to the index of its value (if present) in `vocabulary_list`. +Must be castable to `dtype`. +
+`dtype` + +The type of features. Only string and integer types are supported. If +`None`, it will be inferred from `vocabulary_list`. +
+`default_value` + +The integer ID value to return for out-of-vocabulary feature +values, defaults to `-1`. This can not be specified with a positive +`num_oov_buckets`. +
+`num_oov_buckets` + +Non-negative integer, the number of out-of-vocabulary +buckets. All out-of-vocabulary inputs will be assigned IDs in the range +`[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a +hash of the input value. A positive `num_oov_buckets` can not be specified +with `default_value`. +
+ + + + + + + + + + + +
+A `CategoricalColumn` with in-memory vocabulary. +
+ + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +if `vocabulary_list` is empty, or contains duplicate keys. +
+`ValueError` + +`num_oov_buckets` is a negative integer. +
+`ValueError` + +`num_oov_buckets` and `default_value` are both specified. +
+`ValueError` + +if `dtype` is not integer or string. +
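+
+A runnable sketch of the in-memory vocabulary lookup with `default_value`
+(the vocabulary and inputs are illustrative):
+
+```python
+import tensorflow as tf
+
+colors = tf.feature_column.categorical_column_with_vocabulary_list(
+    key='colors', vocabulary_list=('X', 'R', 'G', 'B', 'Y'), default_value=0)
+colors_indicator = tf.feature_column.indicator_column(colors)
+
+# 'purple' is out of vocabulary, so it maps to default_value 0 ('X').
+features = {'colors': tf.constant([['R'], ['B'], ['purple']])}
+dense_tensor = tf.keras.layers.DenseFeatures([colors_indicator])(features)
+print(dense_tensor.numpy())
+# [[0. 1. 0. 0. 0.]
+#  [0. 0. 0. 1. 0.]
+#  [1. 0. 0. 0. 0.]]
+```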
+ diff --git a/site/en/api_docs/python/tf/feature_column/crossed_column.md b/site/en/api_docs/python/tf/feature_column/crossed_column.md new file mode 100644 index 00000000000..7dba52485f6 --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/crossed_column.md @@ -0,0 +1,213 @@ +description: Returns a column for performing crosses of categorical features. + +
+ + +
+ +# tf.feature_column.crossed_column + + + + + + + + + +Returns a column for performing crosses of categorical features. + + + + + + + + + +Crossed features will be hashed according to `hash_bucket_size`. Conceptually, +the transformation can be thought of as: + Hash(cartesian product of features) % `hash_bucket_size` + +For example, if the input features are: + +* SparseTensor referred by first key: + + ```python + shape = [2, 2] + { + [0, 0]: "a" + [1, 0]: "b" + [1, 1]: "c" + } + ``` + +* SparseTensor referred by second key: + + ```python + shape = [2, 1] + { + [0, 0]: "d" + [1, 0]: "e" + } + ``` + +then crossed feature will look like: + +```python + shape = [2, 2] +{ + [0, 0]: Hash64("d", Hash64("a")) % hash_bucket_size + [1, 0]: Hash64("e", Hash64("b")) % hash_bucket_size + [1, 1]: Hash64("e", Hash64("c")) % hash_bucket_size +} +``` + +Here is an example to create a linear model with crosses of string features: + +```python +keywords_x_doc_terms = crossed_column(['keywords', 'doc_terms'], 50K) +columns = [keywords_x_doc_terms, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction = linear_model(features, columns) +``` + +You could also use vocabulary lookup before crossing: + +```python +keywords = categorical_column_with_vocabulary_file( + 'keywords', '/path/to/vocabulary/file', vocabulary_size=1K) +keywords_x_doc_terms = crossed_column([keywords, 'doc_terms'], 50K) +columns = [keywords_x_doc_terms, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction = linear_model(features, columns) +``` + +If an input feature is of numeric type, you can use +`categorical_column_with_identity`, or `bucketized_column`, as in the example: + +```python +# vertical_id is an integer categorical feature. +vertical_id = categorical_column_with_identity('vertical_id', 10K) +price = numeric_column('price') +# bucketized_column converts numerical feature to a categorical one. +bucketized_price = bucketized_column(price, boundaries=[...]) +vertical_id_x_price = crossed_column([vertical_id, bucketized_price], 50K) +columns = [vertical_id_x_price, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction = linear_model(features, columns) +``` + +To use crossed column in DNN model, you need to add it in an embedding column +as in this example: + +```python +vertical_id_x_price = crossed_column([vertical_id, bucketized_price], 50K) +vertical_id_x_price_embedded = embedding_column(vertical_id_x_price, 10) +dense_tensor = input_layer(features, [vertical_id_x_price_embedded, ...]) +``` + + + + + + + + + + + + + + + + +
+`keys` + +An iterable identifying the features to be crossed. Each element can +be either: +* string: Will use the corresponding feature which must be of string type. +* `CategoricalColumn`: Will use the transformed tensor produced by this +column. Does not support hashed categorical column. +
+`hash_bucket_size` + +An int > 1. The number of buckets. +
+`hash_key` + +Specify the hash_key that will be used by the `FingerprintCat64` +function to combine the crosses fingerprints on SparseCrossOp (optional). +
+ + + + + + + + + + + +
+A `CrossedColumn`. +
+ + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If `len(keys) < 2`. +
+`ValueError` + +If any of the keys is neither a string nor `CategoricalColumn`. +
+`ValueError` + +If any of the keys is `HashedCategoricalColumn`. +
+`ValueError` + +If `hash_bucket_size < 1`. +
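+
+A hedged, runnable sketch that crosses a bucketized numeric feature with a raw
+string feature (the bucket counts and inputs are illustrative):
+
+```python
+import tensorflow as tf
+
+price = tf.feature_column.numeric_column('price')
+bucketized_price = tf.feature_column.bucketized_column(
+    price, boundaries=[0., 10., 100.])
+price_x_keywords = tf.feature_column.crossed_column(
+    [bucketized_price, 'keywords'], hash_bucket_size=50)
+crossed_indicator = tf.feature_column.indicator_column(price_x_keywords)
+
+features = {'price': tf.constant([[5.], [150.]]),
+            'keywords': tf.constant([['cheap'], ['luxury']])}
+dense_tensor = tf.keras.layers.DenseFeatures([crossed_indicator])(features)
+print(dense_tensor.shape)  # (2, 50): one multi-hot row per crossed example
+```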
+ diff --git a/site/en/api_docs/python/tf/feature_column/embedding_column.md b/site/en/api_docs/python/tf/feature_column/embedding_column.md new file mode 100644 index 00000000000..3ea7de4879f --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/embedding_column.md @@ -0,0 +1,226 @@ +description: DenseColumn that converts from sparse, categorical input. + +
+ + +
+ +# tf.feature_column.embedding_column + + + + + + + + + +`DenseColumn` that converts from sparse, categorical input. + + + + + + + + + +Use this when your inputs are sparse, but you want to convert them to a dense +representation (e.g., to feed to a DNN). + +Inputs must be a `CategoricalColumn` created by any of the +`categorical_column_*` function. Here is an example of using +`embedding_column` with `DNNClassifier`: + +```python +video_id = categorical_column_with_identity( + key='video_id', num_buckets=1000000, default_value=0) +columns = [embedding_column(video_id, 9),...] + +estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...) + +label_column = ... +def input_fn(): + features = tf.io.parse_example( + ..., features=make_parse_example_spec(columns + [label_column])) + labels = features.pop(label_column.name) + return features, labels + +estimator.train(input_fn=input_fn, steps=100) +``` + +Here is an example using `embedding_column` with model_fn: + +```python +def model_fn(features, ...): + video_id = categorical_column_with_identity( + key='video_id', num_buckets=1000000, default_value=0) + columns = [embedding_column(video_id, 9),...] + dense_tensor = input_layer(features, columns) + # Form DNN layers, calculate loss, and return EstimatorSpec. + ... +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`categorical_column` + +A `CategoricalColumn` created by a +`categorical_column_with_*` function. This column produces the sparse IDs +that are inputs to the embedding lookup. +
+`dimension` + +An integer specifying dimension of the embedding, must be > 0. +
+
+`combiner`
+
+A string specifying how to reduce if there are multiple entries in
+a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with
+'mean' the default. 'sqrtn' often achieves good accuracy, in particular
+with bag-of-words columns. Each of these can be thought of as an
+example-level normalization on the column. For more information, see
+`tf.embedding_lookup_sparse`.
+
+`initializer` + +A variable initializer function to be used in embedding +variable initialization. If not specified, defaults to +`truncated_normal_initializer` with mean `0.0` and +standard deviation `1/sqrt(dimension)`. +
+`ckpt_to_load_from` + +String representing checkpoint name/pattern from which to +restore column weights. Required if `tensor_name_in_ckpt` is not `None`. +
+`tensor_name_in_ckpt` + +Name of the `Tensor` in `ckpt_to_load_from` from which +to restore the column weights. Required if `ckpt_to_load_from` is not +`None`. +
+`max_norm` + +If not `None`, embedding values are l2-normalized to this value. +
+`trainable` + +Whether or not the embedding is trainable. Default is True. +
+
+`use_safe_embedding_lookup`
+
+If true, uses safe_embedding_lookup_sparse
+instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures
+there are no empty rows and all weights and ids are positive at the
+expense of extra compute cost. This only applies to rank 2 (NxM) shaped
+input tensors. Defaults to true; consider turning it off if the above
+checks are not needed. Note that having empty rows will not trigger any
+error, though the output result might be 0 or omitted.
+
+ + + + + + + + + + + +
+`DenseColumn` that converts from sparse input. +
+ + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +if `dimension` not > 0. +
+`ValueError` + +if exactly one of `ckpt_to_load_from` and `tensor_name_in_ckpt` +is specified. +
+`ValueError` + +if `initializer` is specified and is not callable. +
+`RuntimeError` + +If eager execution is enabled. +
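+
+A minimal, runnable sketch using `tf.keras.layers.DenseFeatures` (the bucket
+count, dimension and inputs are illustrative; the embedding weights are
+randomly initialized, so only the output shape is deterministic):
+
+```python
+import tensorflow as tf
+
+video_id = tf.feature_column.categorical_column_with_identity(
+    key='video_id', num_buckets=10, default_value=0)
+video_embedding = tf.feature_column.embedding_column(video_id, dimension=4)
+
+features = {'video_id': tf.constant([[2], [7]])}
+dense_tensor = tf.keras.layers.DenseFeatures([video_embedding])(features)
+print(dense_tensor.shape)  # (2, 4): one learned 4-d embedding per example
+```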
+ diff --git a/site/en/api_docs/python/tf/feature_column/indicator_column.md b/site/en/api_docs/python/tf/feature_column/indicator_column.md new file mode 100644 index 00000000000..a73b867e46a --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/indicator_column.md @@ -0,0 +1,113 @@ +description: Represents multi-hot representation of given categorical column. + +
+ + +
+ +# tf.feature_column.indicator_column + + + + + + + + + +Represents multi-hot representation of given categorical column. + + + + + + + + + +- For DNN model, `indicator_column` can be used to wrap any + `categorical_column_*` (e.g., to feed to DNN). Consider to Use + `embedding_column` if the number of buckets/unique(values) are large. + +- For Wide (aka linear) model, `indicator_column` is the internal + representation for categorical column when passing categorical column + directly (as any element in feature_columns) to `linear_model`. See + `linear_model` for details. + +```python +name = indicator_column(categorical_column_with_vocabulary_list( + 'name', ['bob', 'george', 'wanda']) +columns = [name, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +dense_tensor = input_layer(features, columns) + +dense_tensor == [[1, 0, 0]] # If "name" bytes_list is ["bob"] +dense_tensor == [[1, 0, 1]] # If "name" bytes_list is ["bob", "wanda"] +dense_tensor == [[2, 0, 0]] # If "name" bytes_list is ["bob", "bob"] +``` + + + + + + + + + + +
+`categorical_column` + +A `CategoricalColumn` which is created by +`categorical_column_with_*` or `crossed_column` functions. +
+ + + + + + + + + + + +
+An `IndicatorColumn`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `categorical_column` is not CategoricalColumn type. +
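+
+A runnable variant of the snippet above with the inputs spelled out (the
+feature values are illustrative):
+
+```python
+import tensorflow as tf
+
+name = tf.feature_column.indicator_column(
+    tf.feature_column.categorical_column_with_vocabulary_list(
+        'name', ['bob', 'george', 'wanda']))
+
+features = {'name': tf.constant([['bob'], ['wanda']])}
+dense_tensor = tf.keras.layers.DenseFeatures([name])(features)
+print(dense_tensor.numpy())
+# [[1. 0. 0.]
+#  [0. 0. 1.]]
+```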
+ diff --git a/site/en/api_docs/python/tf/feature_column/make_parse_example_spec.md b/site/en/api_docs/python/tf/feature_column/make_parse_example_spec.md new file mode 100644 index 00000000000..4deb650e564 --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/make_parse_example_spec.md @@ -0,0 +1,116 @@ +description: Creates parsing spec dictionary from input feature_columns. + +
+ + +
+ +# tf.feature_column.make_parse_example_spec + + + + + + + + + +Creates parsing spec dictionary from input feature_columns. + + + + + + + +The returned dictionary can be used as arg 'features' in +tf.io.parse_example. + +#### Typical usage example: + + + +```python +# Define features and transformations +feature_a = tf.feature_column.categorical_column_with_vocabulary_file(...) +feature_b = tf.feature_column.numeric_column(...) +feature_c_bucketized = tf.feature_column.bucketized_column( + tf.feature_column.numeric_column("feature_c"), ...) +feature_a_x_feature_c = tf.feature_column.crossed_column( + columns=["feature_a", feature_c_bucketized], ...) + +feature_columns = set( + [feature_b, feature_c_bucketized, feature_a_x_feature_c]) +features = tf.io.parse_example( + serialized=serialized_examples, + features=tf.feature_column.make_parse_example_spec(feature_columns)) +``` + +For the above example, make_parse_example_spec would return the dict: + +```python +{ + "feature_a": parsing_ops.VarLenFeature(tf.string), + "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32), + "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32) +} +``` + + + + + + + + + + +
+`feature_columns` + +An iterable containing all feature columns. All items +should be instances of classes derived from `FeatureColumn`. +
+ + + + + + + + + + + +
+A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature` +value. +
+ + + + + + + + + + + + +
+`ValueError` + +If any of the given `feature_columns` is not a `FeatureColumn` +instance. +
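+
+A small, concrete sketch of the returned spec (the column names are
+illustrative, and the exact repr of the feature specs may differ between
+versions):
+
+```python
+import tensorflow as tf
+
+feature_b = tf.feature_column.numeric_column('feature_b')
+feature_c_bucketized = tf.feature_column.bucketized_column(
+    tf.feature_column.numeric_column('feature_c'), boundaries=[0., 1.])
+
+spec = tf.feature_column.make_parse_example_spec(
+    [feature_b, feature_c_bucketized])
+print(spec)
+# {'feature_b': FixedLenFeature(shape=(1,), dtype=tf.float32, ...),
+#  'feature_c': FixedLenFeature(shape=(1,), dtype=tf.float32, ...)}
+```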
+ diff --git a/site/en/api_docs/python/tf/feature_column/numeric_column.md b/site/en/api_docs/python/tf/feature_column/numeric_column.md new file mode 100644 index 00000000000..d3a4344ea1f --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/numeric_column.md @@ -0,0 +1,182 @@ +description: Represents real valued or numerical features. + +
+ + +
+ +# tf.feature_column.numeric_column + + + + + + + + + +Represents real valued or numerical features. + + + + + + + + + + +#### Example: + + + +```python +price = numeric_column('price') +columns = [price, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +dense_tensor = input_layer(features, columns) + +# or +bucketized_price = bucketized_column(price, boundaries=[...]) +columns = [bucketized_price, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction = linear_model(features, columns) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input feature. It is used as the +column name and the dictionary key for feature parsing configs, feature +`Tensor` objects, and feature columns. +
+
+`shape`
+
+An iterable of integers specifying the shape of the `Tensor`. A single
+integer can be given, which means a one-dimensional `Tensor` with the given
+width. The `Tensor` representing the column will have the shape of
+[batch_size] + `shape`.
+
+`default_value` + +A single value compatible with `dtype` or an iterable of +values compatible with `dtype` which the column takes on during +`tf.Example` parsing if data is missing. A default value of `None` will +cause tf.io.parse_example to fail if an example does not contain this +column. If a single value is provided, the same value will be applied as +the default value for every item. If an iterable of values is provided, +the shape of the `default_value` should be equal to the given `shape`. +
+`dtype` + +defines the type of values. Default value is tf.float32. Must be a +non-quantized, real integer or floating point type. +
+
+`normalizer_fn`
+
+If not `None`, a function that can be used to normalize the
+value of the tensor after `default_value` is applied for parsing.
+The normalizer function takes the input `Tensor` as its argument, and
+returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note
+that even though the most common use case of this function is
+normalization, it can be used for any kind of TensorFlow transformation.
+
+ + + + + + + + + + + +
+A `NumericColumn`. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`TypeError` + +if any dimension in shape is not an int +
+`ValueError` + +if any dimension in shape is not a positive integer +
+`TypeError` + +if `default_value` is an iterable but not compatible with `shape` +
+`TypeError` + +if `default_value` is not compatible with `dtype`. +
+`ValueError` + +if `dtype` is not convertible to tf.float32. +
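+
+A runnable sketch showing `normalizer_fn` in action (the normalization
+constants and inputs are illustrative):
+
+```python
+import tensorflow as tf
+
+price = tf.feature_column.numeric_column(
+    'price', normalizer_fn=lambda x: (x - 3.0) / 4.2)
+
+features = {'price': tf.constant([[3.0], [7.2]])}
+dense_tensor = tf.keras.layers.DenseFeatures([price])(features)
+print(dense_tensor.numpy())  # approximately [[0.], [1.]]
+```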
+ diff --git a/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_hash_bucket.md b/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_hash_bucket.md new file mode 100644 index 00000000000..af68ac9f802 --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_hash_bucket.md @@ -0,0 +1,136 @@ +description: A sequence of categorical terms where ids are set by hashing. + +
+ + +
+ +# tf.feature_column.sequence_categorical_column_with_hash_bucket + + + + + + + + + +A sequence of categorical terms where ids are set by hashing. + + + + + + + + + +Pass this to `embedding_column` or `indicator_column` to convert sequence +categorical data into dense representation for input to sequence NN, such as +RNN. + +#### Example: + + + +```python +tokens = sequence_categorical_column_with_hash_bucket( + 'tokens', hash_bucket_size=1000) +tokens_embedding = embedding_column(tokens, dimension=10) +columns = [tokens_embedding] + +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +sequence_feature_layer = SequenceFeatures(columns) +sequence_input, sequence_length = sequence_feature_layer(features) +sequence_length_mask = tf.sequence_mask(sequence_length) + +rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size) +rnn_layer = tf.keras.layers.RNN(rnn_cell) +outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask) +``` + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input feature. +
+`hash_bucket_size` + +An int > 1. The number of buckets. +
+`dtype` + +The type of features. Only string and integer types are supported. +
+ + + + + + + + + + + +
+A `SequenceCategoricalColumn`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +`hash_bucket_size` is not greater than 1. +
+`ValueError` + +`dtype` is neither string nor integer. +
+ diff --git a/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_identity.md b/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_identity.md new file mode 100644 index 00000000000..c8bde1694f5 --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_identity.md @@ -0,0 +1,139 @@ +description: Returns a feature column that represents sequences of integers. + +
+ + +
+ +# tf.feature_column.sequence_categorical_column_with_identity + + + + + + + + + +Returns a feature column that represents sequences of integers. + + + + + + + + + +Pass this to `embedding_column` or `indicator_column` to convert sequence +categorical data into dense representation for input to sequence NN, such as +RNN. + +#### Example: + + + +```python +watches = sequence_categorical_column_with_identity( + 'watches', num_buckets=1000) +watches_embedding = embedding_column(watches, dimension=10) +columns = [watches_embedding] + +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +sequence_feature_layer = SequenceFeatures(columns) +sequence_input, sequence_length = sequence_feature_layer(features) +sequence_length_mask = tf.sequence_mask(sequence_length) + +rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size) +rnn_layer = tf.keras.layers.RNN(rnn_cell) +outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask) +``` + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input feature. +
+`num_buckets` + +Range of inputs. Namely, inputs are expected to be in the +range `[0, num_buckets)`. +
+`default_value` + +If `None`, this column's graph operations will fail for +out-of-range inputs. Otherwise, this value must be in the range +`[0, num_buckets)`, and will replace out-of-range inputs. +
+ + + + + + + + + + + +
+A `SequenceCategoricalColumn`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `num_buckets` is less than one. +
+`ValueError` + +if `default_value` is not in range `[0, num_buckets)`. +
+ diff --git a/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_vocabulary_file.md b/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_vocabulary_file.md new file mode 100644 index 00000000000..902225e68bf --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_vocabulary_file.md @@ -0,0 +1,188 @@ +description: A sequence of categorical terms where ids use a vocabulary file. + +
+ + +
+ +# tf.feature_column.sequence_categorical_column_with_vocabulary_file + + + + + + + + + +A sequence of categorical terms where ids use a vocabulary file. + + + + + + + + + +Pass this to `embedding_column` or `indicator_column` to convert sequence +categorical data into dense representation for input to sequence NN, such as +RNN. + +#### Example: + + + +```python +states = sequence_categorical_column_with_vocabulary_file( + key='states', vocabulary_file='/us/states.txt', vocabulary_size=50, + num_oov_buckets=5) +states_embedding = embedding_column(states, dimension=10) +columns = [states_embedding] + +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +sequence_feature_layer = SequenceFeatures(columns) +sequence_input, sequence_length = sequence_feature_layer(features) +sequence_length_mask = tf.sequence_mask(sequence_length) + +rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size) +rnn_layer = tf.keras.layers.RNN(rnn_cell) +outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input feature. +
+`vocabulary_file` + +The vocabulary file name. +
+
+`vocabulary_size`
+
+Number of elements in the vocabulary. This must be no greater than the
+length of `vocabulary_file`; if it is less, later values are ignored. If
+None, it is set to the length of `vocabulary_file`.
+
+`num_oov_buckets` + +Non-negative integer, the number of out-of-vocabulary +buckets. All out-of-vocabulary inputs will be assigned IDs in the range +`[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of +the input value. A positive `num_oov_buckets` can not be specified with +`default_value`. +
+`default_value` + +The integer ID value to return for out-of-vocabulary feature +values, defaults to `-1`. This can not be specified with a positive +`num_oov_buckets`. +
+`dtype` + +The type of features. Only string and integer types are supported. +
+ + + + + + + + + + + +
+A `SequenceCategoricalColumn`. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +`vocabulary_file` is missing or cannot be opened. +
+`ValueError` + +`vocabulary_size` is missing or < 1. +
+`ValueError` + +`num_oov_buckets` is a negative integer. +
+`ValueError` + +`num_oov_buckets` and `default_value` are both specified. +
+`ValueError` + +`dtype` is neither string nor integer. +
+ diff --git a/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_vocabulary_list.md b/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_vocabulary_list.md new file mode 100644 index 00000000000..6b2e814410f --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/sequence_categorical_column_with_vocabulary_list.md @@ -0,0 +1,174 @@ +description: A sequence of categorical terms where ids use an in-memory list. + +
+ + +
+ +# tf.feature_column.sequence_categorical_column_with_vocabulary_list + + + + + + + + + +A sequence of categorical terms where ids use an in-memory list. + + + + + + + + + +Pass this to `embedding_column` or `indicator_column` to convert sequence +categorical data into dense representation for input to sequence NN, such as +RNN. + +#### Example: + + + +```python +colors = sequence_categorical_column_with_vocabulary_list( + key='colors', vocabulary_list=('R', 'G', 'B', 'Y'), + num_oov_buckets=2) +colors_embedding = embedding_column(colors, dimension=3) +columns = [colors_embedding] + +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +sequence_feature_layer = SequenceFeatures(columns) +sequence_input, sequence_length = sequence_feature_layer(features) +sequence_length_mask = tf.sequence_mask(sequence_length) + +rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size) +rnn_layer = tf.keras.layers.RNN(rnn_cell) +outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input feature. +
+`vocabulary_list` + +An ordered iterable defining the vocabulary. Each feature +is mapped to the index of its value (if present) in `vocabulary_list`. +Must be castable to `dtype`. +
+`dtype` + +The type of features. Only string and integer types are supported. +If `None`, it will be inferred from `vocabulary_list`. +
+`default_value` + +The integer ID value to return for out-of-vocabulary feature +values, defaults to `-1`. This can not be specified with a positive +`num_oov_buckets`. +
+`num_oov_buckets` + +Non-negative integer, the number of out-of-vocabulary +buckets. All out-of-vocabulary inputs will be assigned IDs in the range +`[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a +hash of the input value. A positive `num_oov_buckets` can not be specified +with `default_value`. +
+ + + + + + + + + + + +
+A `SequenceCategoricalColumn`. +
+ + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +if `vocabulary_list` is empty, or contains duplicate keys. +
+`ValueError` + +`num_oov_buckets` is a negative integer. +
+`ValueError` + +`num_oov_buckets` and `default_value` are both specified. +
+`ValueError` + +if `dtype` is not integer or string. +
+ diff --git a/site/en/api_docs/python/tf/feature_column/sequence_numeric_column.md b/site/en/api_docs/python/tf/feature_column/sequence_numeric_column.md new file mode 100644 index 00000000000..e6bc616fbcb --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/sequence_numeric_column.md @@ -0,0 +1,159 @@ +description: Returns a feature column that represents sequences of numeric data. + +
+ + +
+ +# tf.feature_column.sequence_numeric_column + + + + + + + + + +Returns a feature column that represents sequences of numeric data. + + + + + + + + + + +#### Example: + + + +```python +temperature = sequence_numeric_column('temperature') +columns = [temperature] + +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +sequence_feature_layer = SequenceFeatures(columns) +sequence_input, sequence_length = sequence_feature_layer(features) +sequence_length_mask = tf.sequence_mask(sequence_length) + +rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size) +rnn_layer = tf.keras.layers.RNN(rnn_cell) +outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A unique string identifying the input features. +
+`shape` + +The shape of the input data per sequence id. E.g. if `shape=(2,)`, +each example must contain `2 * sequence_length` values. +
+`default_value` + +A single value compatible with `dtype` that is used for +padding the sparse data into a dense `Tensor`. +
+`dtype` + +The type of values. +
+
+`normalizer_fn`
+
+If not `None`, a function that can be used to normalize the
+value of the tensor after `default_value` is applied for parsing.
+The normalizer function takes the input `Tensor` as its argument, and
+returns the output `Tensor` (e.g. lambda x: (x - 3.0) / 4.2). Please note
+that even though the most common use case of this function is
+normalization, it can be used for any kind of TensorFlow transformation.
+
+ + + + + + + + + + + +
+A `SequenceNumericColumn`. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +if any dimension in shape is not an int. +
+`ValueError` + +if any dimension in shape is not a positive integer. +
+`ValueError` + +if `dtype` is not convertible to tf.float32. +
+ diff --git a/site/en/api_docs/python/tf/feature_column/shared_embeddings.md b/site/en/api_docs/python/tf/feature_column/shared_embeddings.md new file mode 100644 index 00000000000..4e198f1c849 --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/shared_embeddings.md @@ -0,0 +1,252 @@ +description: List of dense columns that convert from sparse, categorical input. + +
+ + +
+ +# tf.feature_column.shared_embeddings + + + + + + + + + +List of dense columns that convert from sparse, categorical input. + + + + + + + +This is similar to `embedding_column`, except that it produces a list of +embedding columns that share the same embedding weights. + +Use this when your inputs are sparse and of the same type (e.g. watched and +impression video IDs that share the same vocabulary), and you want to convert +them to a dense representation (e.g., to feed to a DNN). + +Inputs must be a list of categorical columns created by any of the +`categorical_column_*` function. They must all be of the same type and have +the same arguments except `key`. E.g. they can be +categorical_column_with_vocabulary_file with the same vocabulary_file. Some or +all columns could also be weighted_categorical_column. + +Here is an example embedding of two features for a DNNClassifier model: + +```python +watched_video_id = categorical_column_with_vocabulary_file( + 'watched_video_id', video_vocabulary_file, video_vocabulary_size) +impression_video_id = categorical_column_with_vocabulary_file( + 'impression_video_id', video_vocabulary_file, video_vocabulary_size) +columns = shared_embedding_columns( + [watched_video_id, impression_video_id], dimension=10) + +estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...) + +label_column = ... +def input_fn(): + features = tf.io.parse_example( + ..., features=make_parse_example_spec(columns + [label_column])) + labels = features.pop(label_column.name) + return features, labels + +estimator.train(input_fn=input_fn, steps=100) +``` + +Here is an example using `shared_embedding_columns` with model_fn: + +```python +def model_fn(features, ...): + watched_video_id = categorical_column_with_vocabulary_file( + 'watched_video_id', video_vocabulary_file, video_vocabulary_size) + impression_video_id = categorical_column_with_vocabulary_file( + 'impression_video_id', video_vocabulary_file, video_vocabulary_size) + columns = shared_embedding_columns( + [watched_video_id, impression_video_id], dimension=10) + dense_tensor = input_layer(features, columns) + # Form DNN layers, calculate loss, and return EstimatorSpec. + ... +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`categorical_columns` + +List of categorical columns created by a +`categorical_column_with_*` function. These columns produce the sparse IDs +that are inputs to the embedding lookup. All columns must be of the same +type and have the same arguments except `key`. E.g. they can be +categorical_column_with_vocabulary_file with the same vocabulary_file. +Some or all columns could also be weighted_categorical_column. +
+`dimension` + +An integer specifying dimension of the embedding, must be > 0. +
+
+`combiner`
+
+A string specifying how to reduce if there are multiple entries
+in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with
+'mean' the default. 'sqrtn' often achieves good accuracy, in particular
+with bag-of-words columns. Each of these can be thought of as an
+example-level normalization on the column. For more information, see
+`tf.embedding_lookup_sparse`.
+
+`initializer` + +A variable initializer function to be used in embedding +variable initialization. If not specified, defaults to +`truncated_normal_initializer` with mean `0.0` and standard +deviation `1/sqrt(dimension)`. +
+`shared_embedding_collection_name` + +Optional collective name of these columns. +If not given, a reasonable name will be chosen based on the names of +`categorical_columns`. +
+`ckpt_to_load_from` + +String representing checkpoint name/pattern from which to +restore column weights. Required if `tensor_name_in_ckpt` is not `None`. +
+`tensor_name_in_ckpt` + +Name of the `Tensor` in `ckpt_to_load_from` from +which to restore the column weights. Required if `ckpt_to_load_from` is +not `None`. +
+`max_norm` + +If not `None`, each embedding is clipped if its l2-norm is +larger than this value, before combining. +
+`trainable` + +Whether or not the embedding is trainable. Default is True. +
+`use_safe_embedding_lookup`
+
+If true, uses safe_embedding_lookup_sparse
+instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures
+there are no empty rows and all weights and ids are positive at the
+expense of extra compute cost. This only applies to rank 2 (NxM) shaped
+input tensors. Defaults to true; consider turning it off if the above checks
+are not needed. Note that having empty rows will not trigger any error,
+though the output result might be 0 or omitted.
+
+ + + + + + + + + + + +
+A list of dense columns that converts from sparse input. The order of +results follows the ordering of `categorical_columns`. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +if `dimension` not > 0. +
+`ValueError` + +if any of the given `categorical_columns` is of different type +or has different arguments than the others. +
+`ValueError` + +if exactly one of `ckpt_to_load_from` and `tensor_name_in_ckpt` +is specified. +
+`ValueError` + +if `initializer` is specified and is not callable. +
+`RuntimeError` + +if eager execution is enabled. +
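+As a minimal sketch (the in-memory vocabulary below is illustrative; real
+models typically use a vocabulary file), the columns can be built inside a
+graph context, since this function raises `RuntimeError` under eager
+execution:
+
+```python
+import tensorflow as tf
+
+with tf.Graph().as_default():
+  watched = tf.feature_column.categorical_column_with_vocabulary_list(
+      'watched_video_id', vocabulary_list=['v1', 'v2', 'v3'])
+  impression = tf.feature_column.categorical_column_with_vocabulary_list(
+      'impression_video_id', vocabulary_list=['v1', 'v2', 'v3'])
+  # Both columns share a single [vocabulary_size, 4] embedding table.
+  columns = tf.feature_column.shared_embeddings(
+      [watched, impression], dimension=4)
+  print([c.name for c in columns])
+```
+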
+ diff --git a/site/en/api_docs/python/tf/feature_column/weighted_categorical_column.md b/site/en/api_docs/python/tf/feature_column/weighted_categorical_column.md new file mode 100644 index 00000000000..945aed9dd5f --- /dev/null +++ b/site/en/api_docs/python/tf/feature_column/weighted_categorical_column.md @@ -0,0 +1,158 @@ +description: Applies weight values to a CategoricalColumn. + +
+ + +
+ +# tf.feature_column.weighted_categorical_column + + + + + + + + + +Applies weight values to a `CategoricalColumn`. + + + + + + + + + +Use this when each of your sparse inputs has both an ID and a value. For +example, if you're representing text documents as a collection of word +frequencies, you can provide 2 parallel sparse input features ('terms' and +'frequencies' below). + +#### Example: + + + +Input `tf.Example` objects: + +```proto +[ + features { + feature { + key: "terms" + value {bytes_list {value: "very" value: "model"}} + } + feature { + key: "frequencies" + value {float_list {value: 0.3 value: 0.1}} + } + }, + features { + feature { + key: "terms" + value {bytes_list {value: "when" value: "course" value: "human"}} + } + feature { + key: "frequencies" + value {float_list {value: 0.4 value: 0.1 value: 0.2}} + } + } +] +``` + +```python +categorical_column = categorical_column_with_hash_bucket( + column_name='terms', hash_bucket_size=1000) +weighted_column = weighted_categorical_column( + categorical_column=categorical_column, weight_feature_key='frequencies') +columns = [weighted_column, ...] +features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) +linear_prediction, _, _ = linear_model(features, columns) +``` + +This assumes the input dictionary contains a `SparseTensor` for key +'terms', and a `SparseTensor` for key 'frequencies'. These 2 tensors must have +the same indices and dense shape. + + + + + + + + + + + + + + + + +
+`categorical_column` + +A `CategoricalColumn` created by +`categorical_column_with_*` functions. +
+`weight_feature_key` + +String key for weight values. +
+`dtype` + +Type of weights, such as tf.float32. Only float and integer weights +are supported. +
+ + + + + + + + + + + +
+A `CategoricalColumn` composed of two sparse features: one represents id, +the other represents weight (value) of the id feature in that example. +
+ + + + + + + + + + + + +
+`ValueError` + +if `dtype` is not convertible to float. +
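+For a quick look at how the weight feature changes parsing, here is a small
+sketch mirroring the 'terms'/'frequencies' example above (the vocabulary list
+is illustrative):
+
+```python
+import tensorflow as tf
+
+categorical = tf.feature_column.categorical_column_with_vocabulary_list(
+    'terms', vocabulary_list=['very', 'model', 'when', 'course', 'human'])
+weighted = tf.feature_column.weighted_categorical_column(
+    categorical_column=categorical, weight_feature_key='frequencies')
+
+# The parse spec now expects both the id feature and its weight feature.
+print(tf.feature_column.make_parse_example_spec([weighted]))
+# {'terms': VarLenFeature(dtype=tf.string),
+#  'frequencies': VarLenFeature(dtype=tf.float32)}
+```
+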
+ diff --git a/site/en/api_docs/python/tf/fill.md b/site/en/api_docs/python/tf/fill.md new file mode 100644 index 00000000000..fc06e2265b9 --- /dev/null +++ b/site/en/api_docs/python/tf/fill.md @@ -0,0 +1,138 @@ +description: Creates a tensor filled with a scalar value. + +
+ + +
+ +# tf.fill + + + + + + + + + +Creates a tensor filled with a scalar value. + + + + + + + + + +This operation creates a tensor of shape `dims` and fills it with `value`. + +#### For example: + + + +``` +>>> tf.fill([2, 3], 9) + +``` + +tf.fill evaluates at graph runtime and supports dynamic shapes based on +other runtime `tf.Tensors`, unlike tf.constant(value, shape=dims), which +embeds the value as a `Const` node. + + + + + + + + + + + + + + + + +
+`dims` + +A 1-D sequence of non-negative numbers. Represents the shape of the +output tf.Tensor. Entries should be of type: `int32`, `int64`. +
+`value` + +A value to fill the returned tf.Tensor. +
+`name` + +Optional string. The name of the output tf.Tensor. +
+ + + + + + + + + + + +
+A tf.Tensor with shape `dims` and the same dtype as `value`. +
+ + + + + + + + + + + + + + + +
+`InvalidArgumentError` + +`dims` contains negative entries. +
+`NotFoundError` + +`dims` contains non-integer entries. +
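+A small sketch of the dynamic-shape behavior described above: the `dims`
+argument can itself be a tensor whose value is only known at graph run time.
+
+```python
+import tensorflow as tf
+
+@tf.function
+def nines_per_row(t):
+  # tf.shape(t)[0] is a runtime value, which tf.fill accepts as a dimension.
+  return tf.fill([tf.shape(t)[0]], 9)
+
+print(nines_per_row(tf.zeros([4, 2])))  # tf.Tensor([9 9 9 9], shape=(4,), dtype=int32)
+```
+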
+ + + + +#### Numpy Compatibility +Similar to `np.full`. In `numpy`, more parameters are supported. Passing a +number argument as the shape (`np.full(5, value)`) is valid in `numpy` for +specifying a 1-D shaped result, while TensorFlow does not support this syntax. + diff --git a/site/en/api_docs/python/tf/fingerprint.md b/site/en/api_docs/python/tf/fingerprint.md new file mode 100644 index 00000000000..e32af147c9e --- /dev/null +++ b/site/en/api_docs/python/tf/fingerprint.md @@ -0,0 +1,122 @@ +description: Generates fingerprint values. + +
+ + +
+ +# tf.fingerprint + + + + + + + + + +Generates fingerprint values. + + + + + + + + + +Generates fingerprint values of `data`. + +Fingerprint op considers the first dimension of `data` as the batch dimension, +and `output[i]` contains the fingerprint value generated from contents in +`data[i, ...]` for all `i`. + +Fingerprint op writes fingerprint values as byte arrays. For example, the +default method `farmhash64` generates a 64-bit fingerprint value at a time. +This 8-byte value is written out as an tf.uint8 array of size 8, in +little-endian order. + +For example, suppose that `data` has data type tf.int32 and shape (2, 3, 4), +and that the fingerprint method is `farmhash64`. In this case, the output +shape is (2, 8), where 2 is the batch dimension size of `data`, and 8 is the +size of each fingerprint value in bytes. `output[0, :]` is generated from +12 integers in `data[0, :, :]` and similarly `output[1, :]` is generated from +other 12 integers in `data[1, :, :]`. + +Note that this op fingerprints the raw underlying buffer, and it does not +fingerprint Tensor's metadata such as data type and/or shape. For example, the +fingerprint values are invariant under reshapes and bitcasts as long as the +batch dimension remain the same: + +```python +tf.fingerprint(data) == tf.fingerprint(tf.reshape(data, ...)) +tf.fingerprint(data) == tf.fingerprint(tf.bitcast(data, ...)) +``` + +For string data, one should expect `tf.fingerprint(data) != +tf.fingerprint(tf.string.reduce_join(data))` in general. + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must have rank 1 or higher. +
+`method` + +A `Tensor` of type tf.string. Fingerprint method used by this op. +Currently available method is `farmhash64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A two-dimensional `Tensor` of type tf.uint8. The first dimension equals
+`data`'s first dimension, and the second dimension size depends on the
+fingerprint algorithm.
+
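+A short sketch of the batch and invariance behavior described above, assuming
+the default `farmhash64` method:
+
+```python
+import tensorflow as tf
+
+data = tf.constant([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=tf.int32)
+fp = tf.fingerprint(data)
+print(fp.shape)  # (2, 8): one 8-byte fingerprint per batch element.
+
+# Reshaping within each batch element leaves the fingerprints unchanged.
+same = tf.reduce_all(tf.equal(fp, tf.fingerprint(tf.reshape(data, [2, 2, 2]))))
+print(bool(same))  # True
+```
+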
+ diff --git a/site/en/api_docs/python/tf/foldl.md b/site/en/api_docs/python/tf/foldl.md new file mode 100644 index 00000000000..d3b019d91da --- /dev/null +++ b/site/en/api_docs/python/tf/foldl.md @@ -0,0 +1,165 @@ +description: foldl on the list of tensors unpacked from elems on dimension 0. (deprecated argument values) + +
+ + +
+ +# tf.foldl + + + + + + + + + +foldl on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values) + + + + + + + +Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(back_prop=False)`. They will be removed in a future version. +Instructions for updating: +back_prop=False is deprecated. Consider using tf.stop_gradient instead. +Instead of: +results = tf.foldl(fn, elems, back_prop=False) +Use: +results = tf.nest.map_structure(tf.stop_gradient, tf.foldl(fn, elems)) + +This foldl operator repeatedly applies the callable `fn` to a sequence +of elements from first to last. The elements are made of the tensors +unpacked from `elems` on dimension 0. The callable fn takes two tensors as +arguments. The first argument is the accumulated value computed from the +preceding invocation of fn, and the second is the value at the current +position of `elems`. If `initializer` is None, `elems` must contain at least +one element, and its first element is used as the initializer. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is fn(initializer, values[0]).shape`. + +This method also allows multi-arity `elems` and output of `fn`. If `elems` +is a (possibly nested) list or tuple of tensors, then each of these tensors +must have a matching first (unpack) dimension. The signature of `fn` may +match the structure of `elems`. That is, if `elems` is +`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is: +`fn = lambda (t1, [t2, t3, [t4, t5]]):`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The callable to be performed. +
+`elems` + +A tensor or (possibly nested) sequence of tensors, each of which will +be unpacked along their first dimension. The nested sequence of the +resulting slices will be the first argument to `fn`. +
+`initializer` + +(optional) A tensor or (possibly nested) sequence of tensors, +as the initial value for the accumulator. +
+`parallel_iterations` + +(optional) The number of iterations allowed to run in +parallel. +
+`back_prop` + +(optional) Deprecated. False disables support for back +propagation. Prefer using tf.stop_gradient instead. +
+`swap_memory` + +(optional) True enables GPU-CPU memory swapping. +
+`name` + +(optional) Name prefix for the returned tensors. +
+ + + + + + + + + + + +
+A tensor or (possibly nested) sequence of tensors, resulting from applying +`fn` consecutively to the list of tensors unpacked from `elems`, from first +to last. +
+ + + + + + + + + + + + +
+`TypeError` + +if `fn` is not callable. +
+ + + +#### Example: + +```python +elems = tf.constant([1, 2, 3, 4, 5, 6]) +sum = foldl(lambda a, x: a + x, elems) +# sum == 21 +``` diff --git a/site/en/api_docs/python/tf/foldr.md b/site/en/api_docs/python/tf/foldr.md new file mode 100644 index 00000000000..f232257d48b --- /dev/null +++ b/site/en/api_docs/python/tf/foldr.md @@ -0,0 +1,165 @@ +description: foldr on the list of tensors unpacked from elems on dimension 0. (deprecated argument values) + +
+ + +
+ +# tf.foldr + + + + + + + + + +foldr on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values) + + + + + + + +Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(back_prop=False)`. They will be removed in a future version. +Instructions for updating: +back_prop=False is deprecated. Consider using tf.stop_gradient instead. +Instead of: +results = tf.foldr(fn, elems, back_prop=False) +Use: +results = tf.nest.map_structure(tf.stop_gradient, tf.foldr(fn, elems)) + +This foldr operator repeatedly applies the callable `fn` to a sequence +of elements from last to first. The elements are made of the tensors +unpacked from `elems`. The callable fn takes two tensors as arguments. +The first argument is the accumulated value computed from the preceding +invocation of fn, and the second is the value at the current position of +`elems`. If `initializer` is None, `elems` must contain at least one element, +and its first element is used as the initializer. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is `fn(initializer, values[0]).shape`. + +This method also allows multi-arity `elems` and output of `fn`. If `elems` +is a (possibly nested) list or tuple of tensors, then each of these tensors +must have a matching first (unpack) dimension. The signature of `fn` may +match the structure of `elems`. That is, if `elems` is +`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is: +`fn = lambda (t1, [t2, t3, [t4, t5]]):`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The callable to be performed. +
+`elems` + +A tensor or (possibly nested) sequence of tensors, each of which will +be unpacked along their first dimension. The nested sequence of the +resulting slices will be the first argument to `fn`. +
+`initializer` + +(optional) A tensor or (possibly nested) sequence of tensors, +as the initial value for the accumulator. +
+`parallel_iterations` + +(optional) The number of iterations allowed to run in +parallel. +
+`back_prop` + +(optional) Deprecated. False disables support for back +propagation. Prefer using tf.stop_gradient instead. +
+`swap_memory` + +(optional) True enables GPU-CPU memory swapping. +
+`name` + +(optional) Name prefix for the returned tensors. +
+ + + + + + + + + + + +
+A tensor or (possibly nested) sequence of tensors, resulting from applying +`fn` consecutively to the list of tensors unpacked from `elems`, from last +to first. +
+ + + + + + + + + + + + +
+`TypeError` + +if `fn` is not callable. +
+ + + +#### Example: + +```python +elems = [1, 2, 3, 4, 5, 6] +sum = foldr(lambda a, x: a + x, elems) +# sum == 21 +``` diff --git a/site/en/api_docs/python/tf/function.md b/site/en/api_docs/python/tf/function.md new file mode 100644 index 00000000000..dca5eb29fda --- /dev/null +++ b/site/en/api_docs/python/tf/function.md @@ -0,0 +1,348 @@ +description: Compiles a function into a callable TensorFlow graph. + +
+ + +
+ +# tf.function + + + + + + + + + +Compiles a function into a callable TensorFlow graph. + + + + + + + + + +tf.function constructs a callable that executes a TensorFlow graph +(tf.Graph) created by trace-compiling the TensorFlow operations in `func`, +effectively executing `func` as a TensorFlow graph. + +#### Example usage: + + + +``` +>>> @tf.function +... def f(x, y): +... return x ** 2 + y +>>> x = tf.constant([2, 3]) +>>> y = tf.constant([3, -2]) +>>> f(x, y) + +``` + +_Features_ + +`func` may use data-dependent control flow, including `if`, `for`, `while` +`break`, `continue` and `return` statements: + +``` +>>> @tf.function +... def f(x): +... if tf.reduce_sum(x) > 0: +... return x * x +... else: +... return -x // 2 +>>> f(tf.constant(-2)) + +``` + +`func`'s closure may include tf.Tensor and tf.Variable objects: + +``` +>>> @tf.function +... def f(): +... return x ** 2 + y +>>> x = tf.constant([-2, -3]) +>>> y = tf.Variable([3, -2]) +>>> f() + +``` + +`func` may also use ops with side effects, such as tf.print, tf.Variable +and others: + +``` +>>> v = tf.Variable(1) +>>> @tf.function +... def f(x): +... for i in tf.range(x): +... v.assign_add(i) +>>> f(3) +>>> v + +``` + +Important: Any Python side-effects (appending to a list, printing with +`print`, etc) will only happen once, when `func` is traced. To have +side-effects executed into your tf.function they need to be written +as TF ops: + +``` +>>> l = [] +>>> @tf.function +... def f(x): +... for i in x: +... l.append(i + 1) # Caution! Will only happen once when tracing +>>> f(tf.constant([1, 2, 3])) +>>> l +[] +``` + +Instead, use TensorFlow collections like tf.TensorArray: + +``` +>>> @tf.function +... def f(x): +... ta = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True) +... for i in range(len(x)): +... ta = ta.write(i, x[i] + 1) +... return ta.stack() +>>> f(tf.constant([1, 2, 3])) + +``` + +_tf.function is polymorphic_ + +Internally, tf.function can build more than one graph, to support arguments +with different data types or shapes, since TensorFlow can build more +efficient graphs that are specialized on shapes and dtypes. tf.function +also treats any pure Python value as opaque objects, and builds a separate +graph for each set of Python arguments that it encounters. + +To obtain an individual graph, use the `get_concrete_function` method of +the callable created by tf.function. It can be called with the same +arguments as `func` and returns a special tf.Graph object: + +``` +>>> @tf.function +... def f(x): +... return x + 1 +>>> isinstance(f.get_concrete_function(1).graph, tf.Graph) +True +``` + +Caution: Passing python scalars or lists as arguments to tf.function will +always build a new graph. To avoid this, pass numeric arguments as Tensors +whenever possible: + +``` +>>> @tf.function +... def f(x): +... return tf.abs(x) +>>> f1 = f.get_concrete_function(1) +>>> f2 = f.get_concrete_function(2) # Slow - builds new graph +>>> f1 is f2 +False +>>> f1 = f.get_concrete_function(tf.constant(1)) +>>> f2 = f.get_concrete_function(tf.constant(2)) # Fast - reuses f1 +>>> f1 is f2 +True +``` + +Python numerical arguments should only be used when they take few distinct +values, such as hyperparameters like the number of layers in a neural network. + +_Input signatures_ + +For Tensor arguments, tf.function instantiates a separate graph for every +unique set of input shapes and datatypes. The example below creates two +separate graphs, each specialized to a different shape: + +``` +>>> @tf.function +... def f(x): +... 
return x + 1 +>>> vector = tf.constant([1.0, 1.0]) +>>> matrix = tf.constant([[3.0]]) +>>> f.get_concrete_function(vector) is f.get_concrete_function(matrix) +False +``` + +An "input signature" can be optionally provided to tf.function to control +the graphs traced. The input signature specifies the shape and type of each +Tensor argument to the function using a tf.TensorSpec object. More general +shapes can be used. This is useful to avoid creating multiple graphs when +Tensors have dynamic shapes. It also restricts the shape and datatype of +Tensors that can be used: + +``` +>>> @tf.function( +... input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)]) +... def f(x): +... return x + 1 +>>> vector = tf.constant([1.0, 1.0]) +>>> matrix = tf.constant([[3.0]]) +>>> f.get_concrete_function(vector) is f.get_concrete_function(matrix) +True +``` + +_Variables may only be created once_ + +tf.function only allows creating new tf.Variable objects when it is called +for the first time: + +``` +>>> class MyModule(tf.Module): +... def __init__(self): +... self.v = None +... +... @tf.function +... def call(self, x): +... if self.v is None: +... self.v = tf.Variable(tf.ones_like(x)) +... return self.v * x +``` + +In general, it is recommended to create stateful objects like tf.Variable +outside of tf.function and passing them as arguments. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`func` + +the function to be compiled. If `func` is None, tf.function returns +a decorator that can be invoked with a single argument - `func`. In other +words, `tf.function(input_signature=...)(func)` is equivalent to +tf.function(func, input_signature=...). The former can be used as +decorator. +
+`input_signature` + +A possibly nested sequence of tf.TensorSpec objects +specifying the shapes and dtypes of the Tensors that will be supplied to +this function. If `None`, a separate function is instantiated for each +inferred input signature. If input_signature is specified, every input to +`func` must be a `Tensor`, and `func` cannot accept `**kwargs`. +
+`autograph` + +Whether autograph should be applied on `func` before tracing a +graph. Data-dependent control flow requires `autograph=True`. For more +information, see the [tf.function and AutoGraph guide]( +https://www.tensorflow.org/guide/function). +
+`experimental_implements`
+
+If provided, contains the name of a "known" function
+this implements. For example "mycompany.my_recurrent_cell".
+This is stored as an attribute in the inference function,
+which can then be detected when processing the serialized function.
+See [standardizing composite ops](https://github.com/tensorflow/community/blob/master/rfcs/20190610-standardizing-composite_ops.md)
+for details. For an example of utilizing this attribute, see this
+[example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc).
+The linked code automatically detects and substitutes a function that
+implements "embedded_matmul" and allows TFLite to substitute its own
+implementation. For instance, a TensorFlow user can use this
+attribute to mark that their function also implements
+`embedded_matmul` (perhaps more efficiently!)
+by specifying it using this parameter:
+`@tf.function(experimental_implements="embedded_matmul")`.
+
+`experimental_autograph_options` + +Optional tuple of +tf.autograph.experimental.Feature values. +
+`experimental_relax_shapes`
+
+When True, tf.function may generate fewer
+graphs that are less specialized on input shapes.
+
+`experimental_compile` + +If True, the function is always compiled by +[XLA](https://www.tensorflow.org/xla). XLA may be more efficient in some +cases (e.g. TPU, XLA_GPU, dense tensor computations). +
+ + + + + + + + + + + +
+If `func` is not None, returns a callable that will execute the compiled +function (and return zero or more tf.Tensor objects). +If `func` is None, returns a decorator that, when invoked with a single +`func` argument, returns a callable equivalent to the case above. +
+ + + + + + + + + + + +
+ValueError when attempting to use experimental_compile, but XLA support is +not enabled. +
+ diff --git a/site/en/api_docs/python/tf/gather.md b/site/en/api_docs/python/tf/gather.md new file mode 100644 index 00000000000..db7a8c355e5 --- /dev/null +++ b/site/en/api_docs/python/tf/gather.md @@ -0,0 +1,154 @@ +description: Gather slices from params axis axis according to indices. + +
+ + +
+ +# tf.gather + + + + + + + + + +Gather slices from params axis `axis` according to indices. + + + + + + + +Gather slices from params axis `axis` according to `indices`. `indices` must +be an integer tensor of any dimension (usually 0-D or 1-D). + +For 0-D (scalar) `indices`: + +$$\begin{align*} +output[p_0, ..., p_{axis-1}, && &&& p_{axis + 1}, ..., p_{N-1}] = \\ +params[p_0, ..., p_{axis-1}, && indices, &&& p_{axis + 1}, ..., p_{N-1}] +\end{align*}$$ + +Where *N* = `ndims(params)`. + +For 1-D (vector) `indices` with `batch_dims=0`: + +$$\begin{align*} +output[p_0, ..., p_{axis-1}, && &i, &&p_{axis + 1}, ..., p_{N-1}] =\\ +params[p_0, ..., p_{axis-1}, && indices[&i], &&p_{axis + 1}, ..., p_{N-1}] +\end{align*}$$ + +In the general case, produces an output tensor where: + +$$\begin{align*} +output[p_0, &..., p_{axis-1}, & + &i_{B}, ..., i_{M-1}, & + p_{axis + 1}, &..., p_{N-1}] = \\ +params[p_0, &..., p_{axis-1}, & + indices[p_0, ..., p_{B-1}, &i_{B}, ..., i_{M-1}], & + p_{axis + 1}, &..., p_{N-1}] +\end{align*}$$ + +Where *N* = `ndims(params)`, *M* = `ndims(indices)`, and *B* = `batch_dims`. +Note that `params.shape[:batch_dims]` must be identical to +`indices.shape[:batch_dims]`. + +The shape of the output tensor is: + +> `output.shape = params.shape[:axis] + indices.shape[batch_dims:] + +> params.shape[axis + 1:]`. + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, a 0 is stored in the corresponding +output value. + +See also tf.gather_nd. + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + + + + +
+`params` + +The `Tensor` from which to gather values. Must be at least rank +`axis + 1`. +
+`indices` + +The index `Tensor`. Must be one of the following types: `int32`, +`int64`. Must be in range `[0, params.shape[axis])`. +
+`validate_indices` + +Deprecated, does nothing. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. The +`axis` in `params` to gather `indices` from. Must be greater than or equal +to `batch_dims`. Defaults to the first non-batch dimension. Supports +negative indexes. +
+`batch_dims` + +An `integer`. The number of batch dimensions. Must be less +than or equal to `rank(indices)`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `params`. +
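+The formulas above can be checked with a small example (the values are
+illustrative):
+
+```python
+import tensorflow as tf
+
+params = tf.constant([[0, 1, 2],
+                      [10, 11, 12]])
+
+# Gather columns 2 and 0 along axis 1 for every row.
+print(tf.gather(params, [2, 0], axis=1).numpy())
+# [[ 2  0]
+#  [12 10]]
+
+# With batch_dims=1, row i of `indices` selects within row i of `params`.
+print(tf.gather(params, [[2], [0]], axis=1, batch_dims=1).numpy())
+# [[ 2]
+#  [10]]
+```
+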
+ diff --git a/site/en/api_docs/python/tf/gather_nd.md b/site/en/api_docs/python/tf/gather_nd.md new file mode 100644 index 00000000000..05358b2b108 --- /dev/null +++ b/site/en/api_docs/python/tf/gather_nd.md @@ -0,0 +1,219 @@ +description: Gather slices from params into a Tensor with shape specified by indices. + +
+ + +
+ +# tf.gather_nd + + + + + + + + + +Gather slices from `params` into a Tensor with shape specified by `indices`. + + + + + + + +`indices` is an K-dimensional integer tensor, best thought of as a +(K-1)-dimensional tensor of indices into `params`, where each element defines +a slice of `params`: + + output[\\(i_0, ..., i_{K-2}\\)] = params[indices[\\(i_0, ..., i_{K-2}\\)]] + +Whereas in tf.gather `indices` defines slices into the first +dimension of `params`, in tf.gather_nd, `indices` defines slices into the +first `N` dimensions of `params`, where `N = indices.shape[-1]`. + +The last dimension of `indices` can be at most the rank of +`params`: + + indices.shape[-1] <= params.rank + +The last dimension of `indices` corresponds to elements +(if `indices.shape[-1] == params.rank`) or slices +(if `indices.shape[-1] < params.rank`) along dimension `indices.shape[-1]` +of `params`. The output tensor has shape + + indices.shape[:-1] + params.shape[indices.shape[-1]:] + +Additionally both 'params' and 'indices' can have M leading batch +dimensions that exactly match. In this case 'batch_dims' must be M. + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, a 0 is stored in the +corresponding output value. + +Some examples below. + +Simple indexing into a matrix: + +```python + indices = [[0, 0], [1, 1]] + params = [['a', 'b'], ['c', 'd']] + output = ['a', 'd'] +``` + +Slice indexing into a matrix: + +```python + indices = [[1], [0]] + params = [['a', 'b'], ['c', 'd']] + output = [['c', 'd'], ['a', 'b']] +``` + +Indexing into a 3-tensor: + +```python + indices = [[1]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[['a1', 'b1'], ['c1', 'd1']]] + + + indices = [[0, 1], [1, 0]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [['c0', 'd0'], ['a1', 'b1']] + + + indices = [[0, 0, 1], [1, 0, 1]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = ['b0', 'b1'] +``` + +The examples below are for the case when only indices have leading extra +dimensions. If both 'params' and 'indices' have leading batch dimensions, use +the 'batch_dims' parameter to run gather_nd in batch mode. 
+ +Batched indexing into a matrix: + +```python + indices = [[[0, 0]], [[0, 1]]] + params = [['a', 'b'], ['c', 'd']] + output = [['a'], ['b']] +``` + +Batched slice indexing into a matrix: + +```python + indices = [[[1]], [[0]]] + params = [['a', 'b'], ['c', 'd']] + output = [[['c', 'd']], [['a', 'b']]] +``` + +Batched indexing into a 3-tensor: + +```python + indices = [[[1]], [[0]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[[['a1', 'b1'], ['c1', 'd1']]], + [[['a0', 'b0'], ['c0', 'd0']]]] + + indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[['c0', 'd0'], ['a1', 'b1']], + [['a0', 'b0'], ['c1', 'd1']]] + + + indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [['b0', 'b1'], ['d0', 'c1']] +``` + +Examples with batched 'params' and 'indices': + +```python + batch_dims = 1 + indices = [[1], [0]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [['c0', 'd0'], ['a1', 'b1']] + + batch_dims = 1 + indices = [[[1]], [[0]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[['c0', 'd0']], [['a1', 'b1']]] + + batch_dims = 1 + indices = [[[1, 0]], [[0, 1]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [['c0'], ['b1']] +``` + +See also tf.gather. + + + + + + + + + + + + + + + + + + + +
+`params` + +A `Tensor`. The tensor from which to gather values. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`name` + +A name for the operation (optional). +
+`batch_dims` + +An integer or a scalar 'Tensor'. The number of batch dimensions. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `params`. +
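+A runnable version of a few of the examples above:
+
+```python
+import tensorflow as tf
+
+params = tf.constant([['a', 'b'], ['c', 'd']])
+
+# Element indexing: pick params[0, 0] and params[1, 1].
+print(tf.gather_nd(params, [[0, 0], [1, 1]]).numpy())  # [b'a' b'd']
+
+# Slice indexing: pick whole rows 1 and 0.
+print(tf.gather_nd(params, [[1], [0]]).numpy())  # [[b'c' b'd'] [b'a' b'b']]
+
+# Batched indexing: with batch_dims=1, indices[i] indexes into params[i].
+print(tf.gather_nd(params, [[1], [0]], batch_dims=1).numpy())  # [b'b' b'c']
+```
+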
+ diff --git a/site/en/api_docs/python/tf/get_logger.md b/site/en/api_docs/python/tf/get_logger.md new file mode 100644 index 00000000000..9544a5d78cf --- /dev/null +++ b/site/en/api_docs/python/tf/get_logger.md @@ -0,0 +1,42 @@ +description: Return TF logger instance. + +
+ + +
+ +# tf.get_logger + + + + + + + + + +Return TF logger instance. + + + + + + + + diff --git a/site/en/api_docs/python/tf/get_static_value.md b/site/en/api_docs/python/tf/get_static_value.md new file mode 100644 index 00000000000..c82a38190b2 --- /dev/null +++ b/site/en/api_docs/python/tf/get_static_value.md @@ -0,0 +1,108 @@ +description: Returns the constant value of the given tensor, if efficiently calculable. + +
+ + +
+ +# tf.get_static_value + + + + + + + + + +Returns the constant value of the given tensor, if efficiently calculable. + + + + + + + + + +This function attempts to partially evaluate the given tensor, and +returns its value as a numpy ndarray if this succeeds. + +Compatibility(V1): If `constant_value(tensor)` returns a non-`None` result, it +will no longer be possible to feed a different value for `tensor`. This allows +the result of this function to influence the graph that is constructed, and +permits static shape optimizations. + + + + + + + + + + + + + +
+`tensor` + +The Tensor to be evaluated. +
+`partial` + +If True, the returned numpy array is allowed to have partially +evaluated values. Values that can't be evaluated will be None. +
+ + + + + + + + + + + +
+A numpy ndarray containing the constant value of the given `tensor`, +or None if it cannot be calculated. +
+ + + + + + + + + + + + +
+`TypeError` + +if tensor is not an ops.Tensor. +
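+As a small sketch: eager constants have a known static value, while symbolic
+tensors inside a traced function generally do not.
+
+```python
+import tensorflow as tf
+
+print(tf.get_static_value(tf.constant([1, 2, 3])))  # [1 2 3]
+
+@tf.function
+def f(x):
+  # `x` is symbolic while tracing, so no constant value can be computed.
+  print(tf.get_static_value(x))  # None
+  return x
+
+f(tf.constant([1, 2, 3]))
+```
+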
+ diff --git a/site/en/api_docs/python/tf/grad_pass_through.md b/site/en/api_docs/python/tf/grad_pass_through.md new file mode 100644 index 00000000000..ff55490c08c --- /dev/null +++ b/site/en/api_docs/python/tf/grad_pass_through.md @@ -0,0 +1,105 @@ +description: Creates a grad-pass-through op with the forward behavior provided in f. + +
+ + +
+ +# tf.grad_pass_through + + + + + + + + + +Creates a grad-pass-through op with the forward behavior provided in f. + + + + + + + + + +Use this function to wrap any op, maintaining its behavior in the forward +pass, but replacing the original op in the backward graph with an identity. +For example: + +```python +x = tf.Variable(1.0, name="x") +z = tf.Variable(3.0, name="z") + +with tf.GradientTape() as tape: + # y will evaluate to 9.0 + y = tf.grad_pass_through(x.assign)(z**2) +# grads will evaluate to 6.0 +grads = tape.gradient(y, z) +``` + +Another example is a 'differentiable' moving average approximation, where +gradients are allowed to flow into the last value fed to the moving average, +but the moving average is still used for the forward pass: + +```python +x = ... # Some scalar value +# A moving average object, we don't need to know how this is implemented +moving_average = MovingAverage() +with backprop.GradientTape() as tape: + # mavg_x will evaluate to the current running average value + mavg_x = tf.grad_pass_through(moving_average)(x) +grads = tape.gradient(mavg_x, x) # grads will evaluate to 1.0 +``` + + + + + + + + + + +
+`f` + +function `f(*x)` that returns a `Tensor` or nested structure of `Tensor` +outputs. +
+ + + + + + + + + + + +
+A function `h(x)` which returns the same values as `f(x)` and whose +gradients are the same as those of an identity function. +
+ diff --git a/site/en/api_docs/python/tf/gradients.md b/site/en/api_docs/python/tf/gradients.md new file mode 100644 index 00000000000..f7e97f99560 --- /dev/null +++ b/site/en/api_docs/python/tf/gradients.md @@ -0,0 +1,233 @@ +description: Constructs symbolic derivatives of sum of ys w.r.t. x in xs. + +
+ + +
+ +# tf.gradients + + + + + + + + + +Constructs symbolic derivatives of sum of `ys` w.r.t. x in `xs`. + + + + + + + +`ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys` +is a list of `Tensor`, holding the gradients received by the +`ys`. The list must be the same length as `ys`. + +`gradients()` adds ops to the graph to output the derivatives of `ys` with +respect to `xs`. It returns a list of `Tensor` of length `len(xs)` where +each tensor is the `sum(dy/dx)` for y in `ys` and for x in `xs`. + +`grad_ys` is a list of tensors of the same length as `ys` that holds +the initial gradients for each y in `ys`. When `grad_ys` is None, +we fill in a tensor of '1's of the shape of y for each y in `ys`. A +user can provide their own initial `grad_ys` to compute the +derivatives using a different initial gradient for each y (e.g., if +one wanted to weight the gradient differently for each value in +each y). + +`stop_gradients` is a `Tensor` or a list of tensors to be considered constant +with respect to all `xs`. These tensors will not be backpropagated through, +as though they had been explicitly disconnected using `stop_gradient`. Among +other things, this allows computation of partial derivatives as opposed to +total derivatives. For example: + +```python +a = tf.constant(0.) +b = 2 * a +g = tf.gradients(a + b, [a, b], stop_gradients=[a, b]) +``` + +Here the partial derivatives `g` evaluate to `[1.0, 1.0]`, compared to the +total derivatives `tf.gradients(a + b, [a, b])`, which take into account the +influence of `a` on `b` and evaluate to `[3.0, 1.0]`. Note that the above is +equivalent to: + +```python +a = tf.stop_gradient(tf.constant(0.)) +b = tf.stop_gradient(2 * a) +g = tf.gradients(a + b, [a, b]) +``` + +`stop_gradients` provides a way of stopping gradient after the graph has +already been constructed, as compared to tf.stop_gradient which is used +during graph construction. When the two approaches are combined, +backpropagation stops at both tf.stop_gradient nodes and nodes in +`stop_gradients`, whichever is encountered first. + +All integer tensors are considered constant with respect to all `xs`, as if +they were included in `stop_gradients`. + +`unconnected_gradients` determines the value returned for each x in xs if it +is unconnected in the graph to ys. By default this is None to safeguard +against errors. Mathematically these gradients are zero which can be requested +using the `'zero'` option. `tf.UnconnectedGradients` provides the +following options and behaviors: + +```python +a = tf.ones([1, 2]) +b = tf.ones([3, 1]) +g1 = tf.gradients([b], [a], unconnected_gradients='none') +sess.run(g1) # [None] + +g2 = tf.gradients([b], [a], unconnected_gradients='zero') +sess.run(g2) # [array([[0., 0.]], dtype=float32)] +``` + +Let us take one practical example which comes during the back propogation +phase. This function is used to evaluate the derivatives of the cost function +with respect to Weights `Ws` and Biases `bs`. Below sample implementation +provides the exaplantion of what it is actually used for : + +```python +Ws = tf.constant(0.) +bs = 2 * Ws +cost = Ws + bs # This is just an example. So, please ignore the formulas. +g = tf.gradients(cost, [Ws, bs]) +dCost_dW, dCost_db = g +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`ys` + +A `Tensor` or list of tensors to be differentiated. +
+`xs` + +A `Tensor` or list of tensors to be used for differentiation. +
+`grad_ys` + +Optional. A `Tensor` or list of tensors the same size as +`ys` and holding the gradients computed for each y in `ys`. +
+`name`
+
+Optional name to use for grouping all the gradient ops together.
+Defaults to 'gradients'.
+
+`gate_gradients`
+
+If True, add a tuple around the gradients returned
+for an operation. This avoids some race conditions.
+
+`aggregation_method` + +Specifies the method used to combine gradient terms. +Accepted values are constants defined in the class `AggregationMethod`. +
+`stop_gradients` + +Optional. A `Tensor` or list of tensors not to differentiate +through. +
+`unconnected_gradients` + +Optional. Specifies the gradient value returned when +the given input tensors are unconnected. Accepted values are constants +defined in the class tf.UnconnectedGradients and the default value is +`none`. +
+ + + + + + + + + + + +
+A list of `Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)` +for y in `ys` and for x in `xs`. +
+ + + + + + + + + + + + + + + + + + +
+`LookupError` + +if one of the operations between `x` and `y` does not +have a registered gradient function. +
+`ValueError` + +if the arguments are invalid. +
+`RuntimeError` + +if called in Eager mode. +
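+Since this function raises `RuntimeError` under eager execution, in TF2 it is
+typically used inside a tf.function, where a graph is being built. A minimal
+sketch using the partial-derivative example above:
+
+```python
+import tensorflow as tf
+
+@tf.function
+def partial_derivatives():
+  a = tf.constant(0.)
+  b = 2 * a
+  return tf.gradients(a + b, [a, b], stop_gradients=[a, b])
+
+print(partial_derivatives())  # Evaluates to [1.0, 1.0].
+```
+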
+ diff --git a/site/en/api_docs/python/tf/graph_util.md b/site/en/api_docs/python/tf/graph_util.md new file mode 100644 index 00000000000..0ef77c66afe --- /dev/null +++ b/site/en/api_docs/python/tf/graph_util.md @@ -0,0 +1,25 @@ +description: Helpers to manipulate a tensor graph in python. + +
+ + +
+ +# Module: tf.graph_util + + + + + + + + + +Helpers to manipulate a tensor graph in python. + + + +## Functions + +[`import_graph_def(...)`](../tf/graph_util/import_graph_def.md): Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments) + diff --git a/site/en/api_docs/python/tf/graph_util/import_graph_def.md b/site/en/api_docs/python/tf/graph_util/import_graph_def.md new file mode 100644 index 00000000000..d1eb1194833 --- /dev/null +++ b/site/en/api_docs/python/tf/graph_util/import_graph_def.md @@ -0,0 +1,166 @@ +description: Imports the graph from graph_def into the current default Graph. (deprecated arguments) + +
+ + +
+ +# tf.graph_util.import_graph_def + + + + + + + + + +Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments) + + + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(op_dict)`. They will be removed in a future version. +Instructions for updating: +Please file an issue at https://github.com/tensorflow/tensorflow/issues if you depend on this feature. + +This function provides a way to import a serialized TensorFlow +[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) +protocol buffer, and extract individual objects in the `GraphDef` as +tf.Tensor and tf.Operation objects. Once extracted, +these objects are placed into the current default `Graph`. See +tf.Graph.as_graph_def for a way to create a `GraphDef` +proto. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`graph_def` + +A `GraphDef` proto containing operations to be imported into +the default graph. +
+`input_map` + +A dictionary mapping input names (as strings) in `graph_def` +to `Tensor` objects. The values of the named input tensors in the +imported graph will be re-mapped to the respective `Tensor` values. +
+`return_elements` + +A list of strings containing operation names in +`graph_def` that will be returned as `Operation` objects; and/or +tensor names in `graph_def` that will be returned as `Tensor` objects. +
+`name` + +(Optional.) A prefix that will be prepended to the names in +`graph_def`. Note that this does not apply to imported function names. +Defaults to `"import"`. +
+`op_dict` + +(Optional.) Deprecated, do not use. +
+`producer_op_list` + +(Optional.) An `OpList` proto with the (possibly stripped) +list of `OpDef`s used by the producer of the graph. If provided, +unrecognized attrs for ops in `graph_def` that have their default value +according to `producer_op_list` will be removed. This will allow some more +`GraphDef`s produced by later binaries to be accepted by earlier binaries. +
+ + + + + + + + + + + +
+A list of `Operation` and/or `Tensor` objects from the imported graph,
+corresponding to the names in `return_elements`,
+and None if `return_elements` is None.
+
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `graph_def` is not a `GraphDef` proto, +`input_map` is not a dictionary mapping strings to `Tensor` objects, +or `return_elements` is not a list of strings. +
+`ValueError` + +If `input_map`, or `return_elements` contains names that +do not appear in `graph_def`, or `graph_def` is not well-formed (e.g. +it refers to an unknown tensor). +
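+A minimal sketch of the round trip (the function and prefix names here are
+illustrative): serialize a traced function's graph, then import it into a
+fresh `Graph` under a name prefix.
+
+```python
+import tensorflow as tf
+
+@tf.function
+def double(x):
+  return 2.0 * x
+
+graph_def = double.get_concrete_function(
+    tf.TensorSpec(shape=None, dtype=tf.float32)).graph.as_graph_def()
+
+with tf.Graph().as_default() as imported_graph:
+  tf.graph_util.import_graph_def(graph_def, name='imported')
+
+# All imported op names now carry the 'imported/' prefix.
+print([op.name for op in imported_graph.get_operations()][:3])
+```
+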
+ diff --git a/site/en/api_docs/python/tf/group.md b/site/en/api_docs/python/tf/group.md new file mode 100644 index 00000000000..95e4237b960 --- /dev/null +++ b/site/en/api_docs/python/tf/group.md @@ -0,0 +1,104 @@ +description: Create an op that groups multiple operations. + +
+ + +
+ +# tf.group + + + + + + + + + +Create an op that groups multiple operations. + + + + + + + + + +When this op finishes, all ops in `inputs` have finished. This op has no +output. + +See also tf.tuple and +tf.control_dependencies. + + + + + + + + + + + + + +
+`*inputs` + +Zero or more tensors to group. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+An Operation that executes all its inputs. +
+ + + + + + + + + + + + +
+`ValueError` + +If an unknown keyword argument is provided. +
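+Because the returned operation has no outputs, it is mainly useful as
+something to run (or to depend on) in graph mode. A small compat.v1 sketch:
+
+```python
+import tensorflow as tf
+
+g = tf.Graph()
+with g.as_default():
+  p1 = tf.print('first op ran')
+  p2 = tf.print('second op ran')
+  done = tf.group(p1, p2)  # Finishes only after both prints have run.
+
+with tf.compat.v1.Session(graph=g) as sess:
+  sess.run(done)  # Executes both prints; `done` itself returns no value.
+```
+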
+ diff --git a/site/en/api_docs/python/tf/guarantee_const.md b/site/en/api_docs/python/tf/guarantee_const.md new file mode 100644 index 00000000000..5b7af740287 --- /dev/null +++ b/site/en/api_docs/python/tf/guarantee_const.md @@ -0,0 +1,83 @@ +description: Gives a guarantee to the TF runtime that the input tensor is a constant. + +
+ + +
+ +# tf.guarantee_const + + + + + + + + + +Gives a guarantee to the TF runtime that the input tensor is a constant. + + + + + + + + + +The runtime is then free to make optimizations based on this. + +Only accepts value typed tensors as inputs and rejects resource variable handles +as input. + +Returns the input tensor without modification. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/hessians.md b/site/en/api_docs/python/tf/hessians.md new file mode 100644 index 00000000000..032c7d525bc --- /dev/null +++ b/site/en/api_docs/python/tf/hessians.md @@ -0,0 +1,124 @@ +description: Constructs the Hessian of sum of ys with respect to x in xs. + +
+ + +
+ +# tf.hessians + + + + + + + + + +Constructs the Hessian of sum of `ys` with respect to `x` in `xs`. + + + + + + + +`hessians()` adds ops to the graph to output the Hessian matrix of `ys` +with respect to `xs`. It returns a list of `Tensor` of length `len(xs)` +where each tensor is the Hessian of `sum(ys)`. + +The Hessian is a matrix of second-order partial derivatives of a scalar +tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`ys` + +A `Tensor` or list of tensors to be differentiated. +
+`xs` + +A `Tensor` or list of tensors to be used for differentiation. +
+`name` + +Optional name to use for grouping all the gradient ops together. +defaults to 'hessians'. +
+`colocate_gradients_with_ops` + +See `gradients()` documentation for details. +
+`gate_gradients` + +See `gradients()` documentation for details. +
+`aggregation_method` + +See `gradients()` documentation for details. +
+ + + + + + + + + + + +
+A list of Hessian matrices of `sum(ys)` for each `x` in `xs`. +
+ + + + + + + + + + + + +
+`LookupError` + +if one of the operations between `xs` and `ys` does not +have a registered gradient function. +
+ diff --git a/site/en/api_docs/python/tf/histogram_fixed_width.md b/site/en/api_docs/python/tf/histogram_fixed_width.md new file mode 100644 index 00000000000..c274b8b1464 --- /dev/null +++ b/site/en/api_docs/python/tf/histogram_fixed_width.md @@ -0,0 +1,150 @@ +description: Return histogram of values. + +
+ + +
+ +# tf.histogram_fixed_width + + + + + + + + + +Return histogram of values. + + + + + + + + + +Given the tensor `values`, this operation returns a rank 1 histogram counting +the number of entries in `values` that fell into every bin. The bins are +equal width and determined by the arguments `value_range` and `nbins`. + + + + + + + + + + + + + + + + + + + + + + +
+`values` + +Numeric `Tensor`. +
+`value_range` + +Shape [2] `Tensor` of same `dtype` as `values`. +values <= value_range[0] will be mapped to hist[0], +values >= value_range[1] will be mapped to hist[-1]. +
+`nbins` + +Scalar `int32 Tensor`. Number of histogram bins. +
+`dtype` + +dtype for returned histogram. +
+`name` + +A name for this operation (defaults to 'histogram_fixed_width'). +
+ + + + + + + + + + + +
+A 1-D `Tensor` holding histogram of values. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If any unsupported dtype is provided. +
+`tf.errors.InvalidArgumentError` + +If value_range does not +satisfy value_range[0] < value_range[1]. +
+ + + +#### Examples: + + + +```python +# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf) +nbins = 5 +value_range = [0.0, 5.0] +new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15] + +with tf.compat.v1.get_default_session() as sess: + hist = tf.histogram_fixed_width(new_values, value_range, nbins=5) + variables.global_variables_initializer().run() + sess.run(hist) => [2, 1, 1, 0, 2] +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/histogram_fixed_width_bins.md b/site/en/api_docs/python/tf/histogram_fixed_width_bins.md new file mode 100644 index 00000000000..6cfa9bde829 --- /dev/null +++ b/site/en/api_docs/python/tf/histogram_fixed_width_bins.md @@ -0,0 +1,152 @@ +description: Bins the given values for use in a histogram. + +
+ + +
+ +# tf.histogram_fixed_width_bins + + + + + + + + + +Bins the given values for use in a histogram. + + + + + + + + + +Given the tensor `values`, this operation returns a rank 1 `Tensor` +representing the indices of a histogram into which each element +of `values` would be binned. The bins are equal width and +determined by the arguments `value_range` and `nbins`. + + + + + + + + + + + + + + + + + + + + + + +
+`values` + +Numeric `Tensor`. +
+`value_range` + +Shape [2] `Tensor` of same `dtype` as `values`. +values <= value_range[0] will be mapped to hist[0], +values >= value_range[1] will be mapped to hist[-1]. +
+`nbins` + +Scalar `int32 Tensor`. Number of histogram bins. +
+`dtype` + +dtype for returned histogram. +
+`name` + +A name for this operation (defaults to 'histogram_fixed_width'). +
+ + + + + + + + + + + +
+A `Tensor` holding the indices of the binned values whose shape matches +`values`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If any unsupported dtype is provided. +
+`tf.errors.InvalidArgumentError` + +If value_range does not +satisfy value_range[0] < value_range[1]. +
+ + + +#### Examples: + + + +```python +# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf) +nbins = 5 +value_range = [0.0, 5.0] +new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15] + +with tf.compat.v1.get_default_session() as sess: + indices = tf.histogram_fixed_width_bins(new_values, value_range, nbins=5) + variables.global_variables_initializer().run() + sess.run(indices) # [0, 0, 1, 2, 4, 4] +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/identity.md b/site/en/api_docs/python/tf/identity.md new file mode 100644 index 00000000000..383109d439c --- /dev/null +++ b/site/en/api_docs/python/tf/identity.md @@ -0,0 +1,112 @@ +description: Return a Tensor with the same shape and contents as input. + +
+ + +
+ +# tf.identity + + + + + + + + + +Return a Tensor with the same shape and contents as input. + + + + + + + + + +The return value is not the same Tensor as the original, but contains the same +values. This operation is fast when used on the same device. + +#### For example: + + + +``` +>>> a = tf.constant([0.78]) +>>> a_identity = tf.identity(a) +>>> a.numpy() +array([0.78], dtype=float32) +>>> a_identity.numpy() +array([0.78], dtype=float32) +``` + +Calling tf.identity on a variable will make a Tensor that represents the +value of that variable at the time it is called. This is equivalent to calling +`.read_value()`. + +``` +>>> a = tf.Variable(5) +>>> a_identity = tf.identity(a) +>>> a.assign_add(1) + +>>> a.numpy() +6 +>>> a_identity.numpy() +5 +``` + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/identity_n.md b/site/en/api_docs/python/tf/identity_n.md new file mode 100644 index 00000000000..c30636e6cf4 --- /dev/null +++ b/site/en/api_docs/python/tf/identity_n.md @@ -0,0 +1,92 @@ +description: Returns a list of tensors with the same shapes and contents as the input + +
+ + +
+ +# tf.identity_n + + + + + + + + + +Returns a list of tensors with the same shapes and contents as the input + + + + + + + + + +tensors. + +This op can be used to override the gradient for complicated functions. For +example, suppose y = f(x) and we wish to apply a custom function g for backprop +such that dx = g(dy). In Python, + +```python +with tf.get_default_graph().gradient_override_map( + {'IdentityN': 'OverrideGradientWithG'}): + y, _ = identity_n([f(x), x]) + +@tf.RegisterGradient('OverrideGradientWithG') +def ApplyG(op, dy, _): + return [None, g(dy)] # Do not backprop to f(x). +``` + + + + + + + + + + + + + +
+`input` + +A list of `Tensor` objects. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects. Has the same type as `input`. +
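+A small eager sketch of the basic behavior, independent of the
+gradient-override use case above:
+
+```python
+import tensorflow as tf
+
+a = tf.constant([1, 2, 3])
+b = tf.constant(4.0)
+copies = tf.identity_n([a, b])
+print(copies[0].numpy(), copies[1].numpy())  # [1 2 3] 4.0
+```
+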
+ diff --git a/site/en/api_docs/python/tf/image.md b/site/en/api_docs/python/tf/image.md new file mode 100644 index 00000000000..e7deab25c73 --- /dev/null +++ b/site/en/api_docs/python/tf/image.md @@ -0,0 +1,278 @@ +description: Image ops. + +
+ + +
+ +# Module: tf.image + + + + + + + + + +Image ops. + + +The tf.image module contains various functions for image +processing and decoding-encoding Ops. + +Many of the encoding/decoding functions are also available in the +core tf.io module. + +## Image processing + +### Resizing + +The resizing Ops accept input images as tensors of several types. They always +output resized images as float32 tensors. + +The convenience function tf.image.resize supports both 4-D +and 3-D tensors as input and output. 4-D tensors are for batches of images, +3-D tensors for individual images. + +Resized images will be distorted if their original aspect ratio is not the +same as size. To avoid distortions see tf.image.resize_with_pad. + +* tf.image.resize +* tf.image.resize_with_pad +* tf.image.resize_with_crop_or_pad + +The Class tf.image.ResizeMethod provides various resize methods like +`bilinear`, `nearest_neighbor`. + +### Converting Between Colorspaces + +Image ops work either on individual images or on batches of images, depending on +the shape of their input Tensor. + +If 3-D, the shape is `[height, width, channels]`, and the Tensor represents one +image. If 4-D, the shape is `[batch_size, height, width, channels]`, and the +Tensor represents `batch_size` images. + +Currently, `channels` can usefully be 1, 2, 3, or 4. Single-channel images are +grayscale, images with 3 channels are encoded as either RGB or HSV. Images +with 2 or 4 channels include an alpha channel, which has to be stripped from the +image before passing the image to most image processing functions (and can be +re-attached later). + +Internally, images are either stored in as one `float32` per channel per pixel +(implicitly, values are assumed to lie in `[0,1)`) or one `uint8` per channel +per pixel (values are assumed to lie in `[0,255]`). + +TensorFlow can convert between images in RGB or HSV or YIQ. + +* tf.image.rgb_to_grayscale, tf.image.grayscale_to_rgb +* tf.image.rgb_to_hsv, tf.image.hsv_to_rgb +* tf.image.rgb_to_yiq, tf.image.yiq_to_rgb +* tf.image.rgb_to_yuv, tf.image.yuv_to_rgb +* tf.image.image_gradients +* tf.image.convert_image_dtype + +### Image Adjustments + +TensorFlow provides functions to adjust images in various ways: brightness, +contrast, hue, and saturation. Each adjustment can be done with predefined +parameters or with random parameters picked from predefined intervals. Random +adjustments are often useful to expand a training set and reduce overfitting. + +If several adjustments are chained it is advisable to minimize the number of +redundant conversions by first converting the images to the most natural data +type and representation. 
+ +* tf.image.adjust_brightness +* tf.image.adjust_contrast +* tf.image.adjust_gamma +* tf.image.adjust_hue +* tf.image.adjust_jpeg_quality +* tf.image.adjust_saturation +* tf.image.random_brightness +* tf.image.random_contrast +* tf.image.random_hue +* tf.image.random_saturation +* tf.image.per_image_standardization + +### Working with Bounding Boxes + +* tf.image.draw_bounding_boxes +* tf.image.combined_non_max_suppression +* tf.image.generate_bounding_box_proposals +* tf.image.non_max_suppression +* tf.image.non_max_suppression_overlaps +* tf.image.non_max_suppression_padded +* tf.image.non_max_suppression_with_scores +* tf.image.pad_to_bounding_box +* tf.image.sample_distorted_bounding_box + +### Cropping + +* tf.image.central_crop +* tf.image.crop_and_resize +* tf.image.crop_to_bounding_box +* tf.io.decode_and_crop_jpeg +* tf.image.extract_glimpse +* tf.image.random_crop +* tf.image.resize_with_crop_or_pad + +### Flipping, Rotating and Transposing + +* tf.image.flip_left_right +* tf.image.flip_up_down +* tf.image.random_flip_left_right +* tf.image.random_flip_up_down +* tf.image.rot90 +* tf.image.transpose + +## Image decoding and encoding + +TensorFlow provides Ops to decode and encode JPEG and PNG formats. Encoded +images are represented by scalar string Tensors, decoded images by 3-D uint8 +tensors of shape `[height, width, channels]`. (PNG also supports uint16.) + +Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]` + +The encode and decode Ops apply to one image at a time. Their input and output +are all of variable size. If you need fixed size images, pass the output of +the decode Ops to one of the cropping and resizing Ops. + +* tf.io.decode_bmp +* tf.io.decode_gif +* tf.io.decode_image +* tf.io.decode_jpeg +* tf.io.decode_and_crop_jpeg +* tf.io.decode_png +* tf.io.encode_jpeg +* tf.image.encode_png + +## Classes + +[`class ResizeMethod`](../tf/image/ResizeMethod.md): See tf.image.resize for details. + +## Functions + +[`adjust_brightness(...)`](../tf/image/adjust_brightness.md): Adjust the brightness of RGB or Grayscale images. + +[`adjust_contrast(...)`](../tf/image/adjust_contrast.md): Adjust contrast of RGB or grayscale images. + +[`adjust_gamma(...)`](../tf/image/adjust_gamma.md): Performs [Gamma Correction](http://en.wikipedia.org/wiki/Gamma_correction). + +[`adjust_hue(...)`](../tf/image/adjust_hue.md): Adjust hue of RGB images. + +[`adjust_jpeg_quality(...)`](../tf/image/adjust_jpeg_quality.md): Adjust jpeg encoding quality of an image. + +[`adjust_saturation(...)`](../tf/image/adjust_saturation.md): Adjust saturation of RGB images. + +[`central_crop(...)`](../tf/image/central_crop.md): Crop the central region of the image(s). + +[`combined_non_max_suppression(...)`](../tf/image/combined_non_max_suppression.md): Greedily selects a subset of bounding boxes in descending order of score. + +[`convert_image_dtype(...)`](../tf/image/convert_image_dtype.md): Convert `image` to `dtype`, scaling its values if needed. + +[`crop_and_resize(...)`](../tf/image/crop_and_resize.md): Extracts crops from the input image tensor and resizes them. + +[`crop_to_bounding_box(...)`](../tf/image/crop_to_bounding_box.md): Crops an image to a specified bounding box. + +[`decode_and_crop_jpeg(...)`](../tf/io/decode_and_crop_jpeg.md): Decode and Crop a JPEG-encoded image to a uint8 tensor. + +[`decode_bmp(...)`](../tf/io/decode_bmp.md): Decode the first frame of a BMP-encoded image to a uint8 tensor. 
+ +[`decode_gif(...)`](../tf/io/decode_gif.md): Decode the frame(s) of a GIF-encoded image to a uint8 tensor. + +[`decode_image(...)`](../tf/io/decode_image.md): Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`. + +[`decode_jpeg(...)`](../tf/io/decode_jpeg.md): Decode a JPEG-encoded image to a uint8 tensor. + +[`decode_png(...)`](../tf/io/decode_png.md): Decode a PNG-encoded image to a uint8 or uint16 tensor. + +[`draw_bounding_boxes(...)`](../tf/image/draw_bounding_boxes.md): Draw bounding boxes on a batch of images. + +[`encode_jpeg(...)`](../tf/io/encode_jpeg.md): JPEG-encode an image. + +[`encode_png(...)`](../tf/image/encode_png.md): PNG-encode an image. + +[`extract_glimpse(...)`](../tf/image/extract_glimpse.md): Extracts a glimpse from the input tensor. + +[`extract_jpeg_shape(...)`](../tf/io/extract_jpeg_shape.md): Extract the shape information of a JPEG-encoded image. + +[`extract_patches(...)`](../tf/image/extract_patches.md): Extract `patches` from `images`. + +[`flip_left_right(...)`](../tf/image/flip_left_right.md): Flip an image horizontally (left to right). + +[`flip_up_down(...)`](../tf/image/flip_up_down.md): Flip an image vertically (upside down). + +[`generate_bounding_box_proposals(...)`](../tf/image/generate_bounding_box_proposals.md): Generate bounding box proposals from encoded bounding boxes. + +[`grayscale_to_rgb(...)`](../tf/image/grayscale_to_rgb.md): Converts one or more images from Grayscale to RGB. + +[`hsv_to_rgb(...)`](../tf/image/hsv_to_rgb.md): Convert one or more images from HSV to RGB. + +[`image_gradients(...)`](../tf/image/image_gradients.md): Returns image gradients (dy, dx) for each color channel. + +[`is_jpeg(...)`](../tf/io/is_jpeg.md): Convenience function to check if the 'contents' encodes a JPEG image. + +[`non_max_suppression(...)`](../tf/image/non_max_suppression.md): Greedily selects a subset of bounding boxes in descending order of score. + +[`non_max_suppression_overlaps(...)`](../tf/image/non_max_suppression_overlaps.md): Greedily selects a subset of bounding boxes in descending order of score. + +[`non_max_suppression_padded(...)`](../tf/image/non_max_suppression_padded.md): Greedily selects a subset of bounding boxes in descending order of score. + +[`non_max_suppression_with_scores(...)`](../tf/image/non_max_suppression_with_scores.md): Greedily selects a subset of bounding boxes in descending order of score. + +[`pad_to_bounding_box(...)`](../tf/image/pad_to_bounding_box.md): Pad `image` with zeros to the specified `height` and `width`. + +[`per_image_standardization(...)`](../tf/image/per_image_standardization.md): Linearly scales each image in `image` to have mean 0 and variance 1. + +[`psnr(...)`](../tf/image/psnr.md): Returns the Peak Signal-to-Noise Ratio between a and b. + +[`random_brightness(...)`](../tf/image/random_brightness.md): Adjust the brightness of images by a random factor. + +[`random_contrast(...)`](../tf/image/random_contrast.md): Adjust the contrast of an image or images by a random factor. + +[`random_crop(...)`](../tf/image/random_crop.md): Randomly crops a tensor to a given size. + +[`random_flip_left_right(...)`](../tf/image/random_flip_left_right.md): Randomly flip an image horizontally (left to right). + +[`random_flip_up_down(...)`](../tf/image/random_flip_up_down.md): Randomly flips an image vertically (upside down). + +[`random_hue(...)`](../tf/image/random_hue.md): Adjust the hue of RGB images by a random factor. 
+ +[`random_jpeg_quality(...)`](../tf/image/random_jpeg_quality.md): Randomly changes jpeg encoding quality for inducing jpeg noise. + +[`random_saturation(...)`](../tf/image/random_saturation.md): Adjust the saturation of RGB images by a random factor. + +[`resize(...)`](../tf/image/resize.md): Resize `images` to `size` using the specified `method`. + +[`resize_with_crop_or_pad(...)`](../tf/image/resize_with_crop_or_pad.md): Crops and/or pads an image to a target width and height. + +[`resize_with_pad(...)`](../tf/image/resize_with_pad.md): Resizes and pads an image to a target width and height. + +[`rgb_to_grayscale(...)`](../tf/image/rgb_to_grayscale.md): Converts one or more images from RGB to Grayscale. + +[`rgb_to_hsv(...)`](../tf/image/rgb_to_hsv.md): Converts one or more images from RGB to HSV. + +[`rgb_to_yiq(...)`](../tf/image/rgb_to_yiq.md): Converts one or more images from RGB to YIQ. + +[`rgb_to_yuv(...)`](../tf/image/rgb_to_yuv.md): Converts one or more images from RGB to YUV. + +[`rot90(...)`](../tf/image/rot90.md): Rotate image(s) counter-clockwise by 90 degrees. + +[`sample_distorted_bounding_box(...)`](../tf/image/sample_distorted_bounding_box.md): Generate a single randomly distorted bounding box for an image. + +[`sobel_edges(...)`](../tf/image/sobel_edges.md): Returns a tensor holding Sobel edge maps. + +[`ssim(...)`](../tf/image/ssim.md): Computes SSIM index between img1 and img2. + +[`ssim_multiscale(...)`](../tf/image/ssim_multiscale.md): Computes the MS-SSIM between img1 and img2. + +[`total_variation(...)`](../tf/image/total_variation.md): Calculate and return the total variation for one or more images. + +[`transpose(...)`](../tf/image/transpose.md): Transpose image(s) by swapping the height and width dimension. + +[`yiq_to_rgb(...)`](../tf/image/yiq_to_rgb.md): Converts one or more images from YIQ to RGB. + +[`yuv_to_rgb(...)`](../tf/image/yuv_to_rgb.md): Converts one or more images from YUV to RGB. + diff --git a/site/en/api_docs/python/tf/image/ResizeMethod.md b/site/en/api_docs/python/tf/image/ResizeMethod.md new file mode 100644 index 00000000000..70664b8c7d1 --- /dev/null +++ b/site/en/api_docs/python/tf/image/ResizeMethod.md @@ -0,0 +1,45 @@ +description: See tf.image.resize for details. + +
+ + + + + + + + + + +
+ +# tf.image.ResizeMethod + + + + + + + + + +See tf.image.resize for details. + + + + +## Class Variables + +* `AREA = 'area'` +* `BICUBIC = 'bicubic'` +* `BILINEAR = 'bilinear'` +* `GAUSSIAN = 'gaussian'` +* `LANCZOS3 = 'lanczos3'` +* `LANCZOS5 = 'lanczos5'` +* `MITCHELLCUBIC = 'mitchellcubic'` +* `NEAREST_NEIGHBOR = 'nearest'` diff --git a/site/en/api_docs/python/tf/image/adjust_brightness.md b/site/en/api_docs/python/tf/image/adjust_brightness.md new file mode 100644 index 00000000000..58e466262f8 --- /dev/null +++ b/site/en/api_docs/python/tf/image/adjust_brightness.md @@ -0,0 +1,109 @@ +description: Adjust the brightness of RGB or Grayscale images. + +
+ + +
+ +# tf.image.adjust_brightness + + + + + + + + + +Adjust the brightness of RGB or Grayscale images. + + + + + + + + + +This is a convenience method that converts RGB images to float +representation, adjusts their brightness, and then converts them back to the +original data type. If several adjustments are chained, it is advisable to +minimize the number of redundant conversions. + +The value `delta` is added to all components of the tensor `image`. `image` is +converted to `float` and scaled appropriately if it is in fixed-point +representation, and `delta` is converted to the same data type. For regular +images, `delta` should be in the range `[0,1)`, as it is added to the image in +floating point representation, where pixel values are in the `[0,1)` range. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.adjust_brightness(x, delta=0.1) + +``` + + + + + + + + + + + + + +
+`image` + +RGB image or images to adjust. +
+`delta` + +A scalar. Amount to add to the pixel values. +
+ + + + + + + + + + + +
+A brightness-adjusted tensor of the same shape and type as `image`. +
+ diff --git a/site/en/api_docs/python/tf/image/adjust_contrast.md b/site/en/api_docs/python/tf/image/adjust_contrast.md new file mode 100644 index 00000000000..41db7a31566 --- /dev/null +++ b/site/en/api_docs/python/tf/image/adjust_contrast.md @@ -0,0 +1,113 @@ +description: Adjust contrast of RGB or grayscale images. + +
+ + +
+ +# tf.image.adjust_contrast + + + + + + + + + +Adjust contrast of RGB or grayscale images. + + + + + + + + + +This is a convenience method that converts RGB images to float +representation, adjusts their contrast, and then converts them back to the +original data type. If several adjustments are chained, it is advisable to +minimize the number of redundant conversions. + +`images` is a tensor of at least 3 dimensions. The last 3 dimensions are +interpreted as `[height, width, channels]`. The other dimensions only +represent a collection of images, such as `[batch, height, width, channels].` + +Contrast is adjusted independently for each channel of each image. + +For each channel, this Op computes the mean of the image pixels in the +channel and then adjusts each component `x` of each pixel to +`(x - mean) * contrast_factor + mean`. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.adjust_contrast(x, 2) + +``` + + + + + + + + + + + + + +
+`images` + +Images to adjust. At least 3-D. +
+`contrast_factor` + +A float multiplier for adjusting contrast. +
+ + + + + + + + + + + +
+The contrast-adjusted image or images. +
+ diff --git a/site/en/api_docs/python/tf/image/adjust_gamma.md b/site/en/api_docs/python/tf/image/adjust_gamma.md new file mode 100644 index 00000000000..89e8da3377a --- /dev/null +++ b/site/en/api_docs/python/tf/image/adjust_gamma.md @@ -0,0 +1,142 @@ +description: Performs [Gamma Correction](http://en.wikipedia.org/wiki/Gamma_correction). + +
+ + +
+
+# tf.image.adjust_gamma
+
+
+
+Performs [Gamma Correction](http://en.wikipedia.org/wiki/Gamma_correction)
+on the input image.
+
+Also known as Power Law Transform. This function first converts the
+input images to float representation, then transforms them
+pixelwise according to the equation `Out = gain * In**gamma`,
+and then converts them back to the original data type.
+
+#### Usage Example:
+
+```
+>>> x = [[[1.0, 2.0, 3.0],
+... [4.0, 5.0, 6.0]],
+... [[7.0, 8.0, 9.0],
+... [10.0, 11.0, 12.0]]]
+>>> tf.image.adjust_gamma(x, 0.2)
+
+```
+
+`image` + +RGB image or images to adjust. +
+`gamma` + +A scalar or tensor. Non-negative real number. +
+`gain` + +A scalar or tensor. The constant multiplier. +
+ + + + + + + + + + + +
+A Tensor. A Gamma-adjusted tensor of the same shape and type as `image`. +
+ + + + + + + + + + + + +
+`ValueError` + +If gamma is negative. +
+ + + +#### Notes: + +For gamma greater than 1, the histogram will shift towards left and +the output image will be darker than the input image. +For gamma less than 1, the histogram will shift towards right and +the output image will be brighter than the input image. + + +#### References: + +[Wikipedia](http://en.wikipedia.org/wiki/Gamma_correction) diff --git a/site/en/api_docs/python/tf/image/adjust_hue.md b/site/en/api_docs/python/tf/image/adjust_hue.md new file mode 100644 index 00000000000..85b349762cc --- /dev/null +++ b/site/en/api_docs/python/tf/image/adjust_hue.md @@ -0,0 +1,137 @@ +description: Adjust hue of RGB images. + +
+ + +
+ +# tf.image.adjust_hue + + + + + + + + + +Adjust hue of RGB images. + + + + + + + + + +This is a convenience method that converts an RGB image to float +representation, converts it to HSV, adds an offset to the +hue channel, converts back to RGB and then back to the original +data type. If several adjustments are chained it is advisable to minimize +the number of redundant conversions. + +`image` is an RGB image. The image hue is adjusted by converting the +image(s) to HSV and rotating the hue channel (H) by +`delta`. The image is then converted back to RGB. + +`delta` must be in the interval `[-1, 1]`. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.adjust_hue(x, 0.2) + +``` + + + + + + + + + + + + + + + + +
+`image` + +RGB image or images. The size of the last dimension must be 3. +
+`delta` + +float. How much to add to the hue channel. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+Adjusted image(s), same shape and DType as `image`. +
+ + + +#### Usage Example: + + + +``` +>>> image = [[[1, 2, 3], [4, 5, 6]], +... [[7, 8, 9], [10, 11, 12]], +... [[13, 14, 15], [16, 17, 18]]] +>>> image = tf.constant(image) +>>> tf.image.adjust_hue(image, 0.2) + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/image/adjust_jpeg_quality.md b/site/en/api_docs/python/tf/image/adjust_jpeg_quality.md new file mode 100644 index 00000000000..d436aa96380 --- /dev/null +++ b/site/en/api_docs/python/tf/image/adjust_jpeg_quality.md @@ -0,0 +1,135 @@ +description: Adjust jpeg encoding quality of an image. + +
+ + +
+ +# tf.image.adjust_jpeg_quality + + + + + + + + + +Adjust jpeg encoding quality of an image. + + + + + + + + + +This is a convenience method that converts an image to uint8 representation, +encodes it to jpeg with `jpeg_quality`, decodes it, and then converts back +to the original data type. + +`jpeg_quality` must be in the interval `[0, 100]`. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.adjust_jpeg_quality(x, 75) + +``` + + + + + + + + + + + + + + + + +
+`image` + +3D image. The size of the last dimension must be None, 1 or 3. +
+`jpeg_quality` + +Python int or Tensor of type int32. jpeg encoding quality. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+Adjusted image, same shape and DType as `image`. +
+ + + + + + + + + + + + + + + +
+`InvalidArgumentError` + +quality must be in [0,100] +
+`InvalidArgumentError` + +image must have 1 or 3 channels +
+ diff --git a/site/en/api_docs/python/tf/image/adjust_saturation.md b/site/en/api_docs/python/tf/image/adjust_saturation.md new file mode 100644 index 00000000000..43e00bf540c --- /dev/null +++ b/site/en/api_docs/python/tf/image/adjust_saturation.md @@ -0,0 +1,132 @@ +description: Adjust saturation of RGB images. + +
+ + +
+ +# tf.image.adjust_saturation + + + + + + + + + +Adjust saturation of RGB images. + + + + + + + + + +This is a convenience method that converts RGB images to float +representation, converts them to HSV, adds an offset to the +saturation channel, converts back to RGB and then back to the original +data type. If several adjustments are chained it is advisable to minimize +the number of redundant conversions. + +`image` is an RGB image or images. The image saturation is adjusted by +converting the images to HSV and multiplying the saturation (S) channel by +`saturation_factor` and clipping. The images are then converted back to RGB. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.adjust_saturation(x, 0.5) + +``` + + + + + + + + + + + + + + + + +
+`image` + +RGB image or images. The size of the last dimension must be 3. +
+`saturation_factor` + +float. Factor to multiply the saturation by. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+Adjusted image(s), same shape and DType as `image`. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +input must have 3 channels +
+ diff --git a/site/en/api_docs/python/tf/image/central_crop.md b/site/en/api_docs/python/tf/image/central_crop.md new file mode 100644 index 00000000000..2d000215996 --- /dev/null +++ b/site/en/api_docs/python/tf/image/central_crop.md @@ -0,0 +1,142 @@ +description: Crop the central region of the image(s). + +
+ + +
+ +# tf.image.central_crop + + + + + + + + + +Crop the central region of the image(s). + + + + + + + + + +Remove the outer parts of an image but retain the central region of the image +along each dimension. If we specify central_fraction = 0.5, this function +returns the region marked with "X" in the below diagram. + + -------- + | | + | XXXX | + | XXXX | + | | where "X" is the central 50% of the image. + -------- + +This function works on either a single image (`image` is a 3-D Tensor), or a +batch of images (`image` is a 4-D Tensor). + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0], +... [7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]], +... [[13.0, 14.0, 15.0], +... [16.0, 17.0, 18.0], +... [19.0, 20.0, 21.0], +... [22.0, 23.0, 24.0]], +... [[25.0, 26.0, 27.0], +... [28.0, 29.0, 30.0], +... [31.0, 32.0, 33.0], +... [34.0, 35.0, 36.0]], +... [[37.0, 38.0, 39.0], +... [40.0, 41.0, 42.0], +... [43.0, 44.0, 45.0], +... [46.0, 47.0, 48.0]]] +>>> tf.image.central_crop(x, 0.5) + +``` + + + + + + + + + + + + + +
+`image` + +Either a 3-D float Tensor of shape [height, width, depth], or a 4-D +Tensor of shape [batch_size, height, width, depth]. +
+`central_fraction` + +float (0, 1], fraction of size to crop +
+ + + + + + + + + + + + +
+`ValueError`
+
+if `central_fraction` is not within (0, 1].
+
+ + + + + + + + + + + +
+3-D / 4-D float Tensor, as per the input. +
+ diff --git a/site/en/api_docs/python/tf/image/combined_non_max_suppression.md b/site/en/api_docs/python/tf/image/combined_non_max_suppression.md new file mode 100644 index 00000000000..acffa10fcaf --- /dev/null +++ b/site/en/api_docs/python/tf/image/combined_non_max_suppression.md @@ -0,0 +1,168 @@ +description: Greedily selects a subset of bounding boxes in descending order of score. + +
+ + +
+ +# tf.image.combined_non_max_suppression + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score. + + + + + + + + + +This operation performs non_max_suppression on the inputs per batch, across +all classes. +Prunes away boxes that have high intersection-over-union (IOU) overlap +with previously selected boxes. Bounding boxes are supplied as +[y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any +diagonal pair of box corners and the coordinates can be provided as normalized +(i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm +is agnostic to where the origin is in the coordinate system. Also note that +this algorithm is invariant to orthogonal transformations and translations +of the coordinate system; thus translating or reflections of the coordinate +system result in the same boxes being selected by the algorithm. +The output of this operation is the final boxes, scores and classes tensor +returned after performing non_max_suppression. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`boxes`
+
+A 4-D float `Tensor` of shape `[batch_size, num_boxes, q, 4]`. If `q`
+is 1, then the same boxes are used for all classes; otherwise, if `q` is equal
+to the number of classes, class-specific boxes are used.
+
+`scores` + +A 3-D float `Tensor` of shape `[batch_size, num_boxes, num_classes]` +representing a single score corresponding to each box (each row of boxes). +
+`max_output_size_per_class` + +A scalar integer `Tensor` representing the +maximum number of boxes to be selected by non-max suppression per class +
+`max_total_size` + +A scalar representing the maximum number of boxes retained +over all classes. +
+`iou_threshold` + +A float representing the threshold for deciding whether boxes +overlap too much with respect to IOU. +
+`score_threshold` + +A float representing the threshold for deciding when to +remove boxes based on score. +
+`pad_per_class` + +If false, the output nmsed boxes, scores and classes are +padded/clipped to `max_total_size`. If true, the output nmsed boxes, +scores and classes are padded to be of length +`max_size_per_class`*`num_classes`, unless it exceeds `max_total_size` in +which case it is clipped to `max_total_size`. Defaults to false. +
+`clip_boxes` + +If true, the coordinates of output nmsed boxes will be clipped +to [0, 1]. If false, output the box coordinates as it is. Defaults to +true. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+'nmsed_boxes': A [batch_size, max_detections, 4] float32 tensor +containing the non-max suppressed boxes. +'nmsed_scores': A [batch_size, max_detections] float32 tensor containing +the scores for the boxes. +'nmsed_classes': A [batch_size, max_detections] float32 tensor +containing the class for boxes. +'valid_detections': A [batch_size] int32 tensor indicating the number of +valid detections per batch item. Only the top valid_detections[i] entries +in nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The rest of the +entries are zero paddings. +
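+
+For illustration only (random inputs, hypothetical sizes), a minimal sketch of
+calling this op with box coordinates shared across classes (`q = 1`):
+
+```python
+import tensorflow as tf
+
+batch, num_boxes, num_classes = 2, 10, 3
+boxes = tf.random.uniform([batch, num_boxes, 1, 4])      # q = 1: shared boxes
+scores = tf.random.uniform([batch, num_boxes, num_classes])
+
+nmsed_boxes, nmsed_scores, nmsed_classes, valid_detections = (
+    tf.image.combined_non_max_suppression(
+        boxes,
+        scores,
+        max_output_size_per_class=4,
+        max_total_size=8,
+        iou_threshold=0.5,
+        score_threshold=0.1))
+# nmsed_boxes has shape [2, 8, 4]; valid_detections[i] gives the number of
+# usable rows for image i.
+```
+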
+ diff --git a/site/en/api_docs/python/tf/image/convert_image_dtype.md b/site/en/api_docs/python/tf/image/convert_image_dtype.md new file mode 100644 index 00000000000..2b7e6f1be5d --- /dev/null +++ b/site/en/api_docs/python/tf/image/convert_image_dtype.md @@ -0,0 +1,146 @@ +description: Convert image to dtype, scaling its values if needed. + +
+ + +
+ +# tf.image.convert_image_dtype + + + + + + + + + +Convert `image` to `dtype`, scaling its values if needed. + + + + + + + + + +Images that are represented using floating point values are expected to have +values in the range [0,1). Image data stored in integer data types are +expected to have values in the range `[0,MAX]`, where `MAX` is the largest +positive representable number for the data type. + +This op converts between data types, scaling the values appropriately before +casting. + +Note that converting from floating point inputs to integer types may lead to +over/underflow problems. Set saturate to `True` to avoid such problem in +problematic conversions. If enabled, saturation will clip the output into the +allowed range before performing a potentially dangerous cast (and only before +performing such a cast, i.e., when casting from a floating point to an integer +type, and when casting from a signed to an unsigned type; `saturate` has no +effect on casts between floats, or on casts that increase the type's range). + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.convert_image_dtype(x, dtype=tf.float16, saturate=False) + +``` + + + + + + + + + + + + + + + + + + + +
+`image` + +An image. +
+`dtype` + +A `DType` to convert `image` to. +
+`saturate` + +If `True`, clip the input before casting (if necessary). +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+`image`, converted to `dtype`. +
+ + + + + + + + + + + + +
+`AttributeError` + +Raises an attribute error when dtype is neither +float nor integer +
+ diff --git a/site/en/api_docs/python/tf/image/crop_and_resize.md b/site/en/api_docs/python/tf/image/crop_and_resize.md new file mode 100644 index 00000000000..6c762e26bfb --- /dev/null +++ b/site/en/api_docs/python/tf/image/crop_and_resize.md @@ -0,0 +1,166 @@ +description: Extracts crops from the input image tensor and resizes them. + +
+ + +
+ +# tf.image.crop_and_resize + + + + + + + + + +Extracts crops from the input image tensor and resizes them. + + + + + + + +Extracts crops from the input image tensor and resizes them using bilinear +sampling or nearest neighbor sampling (possibly with aspect ratio change) to a +common output size specified by `crop_size`. This is more general than the +`crop_to_bounding_box` op which extracts a fixed size slice from the input +image and does not allow resizing or aspect ratio change. + +Returns a tensor with `crops` from the input `image` at positions defined at +the bounding box locations in `boxes`. The cropped boxes are all resized (with +bilinear or nearest neighbor interpolation) to a fixed +`size = [crop_height, crop_width]`. The result is a 4-D tensor +`[num_boxes, crop_height, crop_width, depth]`. The resizing is corner aligned. +In particular, if `boxes = [[0, 0, 1, 1]]`, the method will give identical +results to using tf.compat.v1.image.resize_bilinear() or +tf.compat.v1.image.resize_nearest_neighbor()(depends on the `method` +argument) with +`align_corners=True`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`image` + +A 4-D tensor of shape `[batch, image_height, image_width, depth]`. +Both `image_height` and `image_width` need to be positive. +
+`boxes` + +A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor +specifies the coordinates of a box in the `box_ind[i]` image and is +specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized +coordinate value of `y` is mapped to the image coordinate at `y * +(image_height - 1)`, so as the `[0, 1]` interval of normalized image +height is mapped to `[0, image_height - 1]` in image height coordinates. +We do allow `y1` > `y2`, in which case the sampled crop is an up-down +flipped version of the original image. The width dimension is treated +similarly. Normalized coordinates outside the `[0, 1]` range are allowed, +in which case we use `extrapolation_value` to extrapolate the input image +values. +
+`box_indices` + +A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, +batch)`. The value of `box_ind[i]` specifies the image that the `i`-th box +refers to. +
+`crop_size` + +A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. +All cropped image patches are resized to this size. The aspect ratio of +the image content is not preserved. Both `crop_height` and `crop_width` +need to be positive. +
+`method` + +An optional string specifying the sampling method for resizing. It +can be either `"bilinear"` or `"nearest"` and default to `"bilinear"`. +Currently two sampling methods are supported: Bilinear and Nearest +Neighbor. +
+`extrapolation_value` + +An optional `float`. Defaults to `0`. Value used for +extrapolation, when applicable. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`. +
+ + + +#### Example: + + + +```python +import tensorflow as tf +BATCH_SIZE = 1 +NUM_BOXES = 5 +IMAGE_HEIGHT = 256 +IMAGE_WIDTH = 256 +CHANNELS = 3 +CROP_SIZE = (24, 24) + +image = tf.random.normal(shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, +CHANNELS) ) +boxes = tf.random.uniform(shape=(NUM_BOXES, 4)) +box_indices = tf.random.uniform(shape=(NUM_BOXES,), minval=0, +maxval=BATCH_SIZE, dtype=tf.int32) +output = tf.image.crop_and_resize(image, boxes, box_indices, CROP_SIZE) +output.shape #=> (5, 24, 24, 3) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/image/crop_to_bounding_box.md b/site/en/api_docs/python/tf/image/crop_to_bounding_box.md new file mode 100644 index 00000000000..dc8e4b031e8 --- /dev/null +++ b/site/en/api_docs/python/tf/image/crop_to_bounding_box.md @@ -0,0 +1,132 @@ +description: Crops an image to a specified bounding box. + +
+ + +
+ +# tf.image.crop_to_bounding_box + + + + + + + + + +Crops an image to a specified bounding box. + + + + + + + + + +This op cuts a rectangular part out of `image`. The top-left corner of the +returned image is at `offset_height, offset_width` in `image`, and its +lower-right corner is at +`offset_height + target_height, offset_width + target_width`. + + + + + + + + + + + + + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`offset_height` + +Vertical coordinate of the top-left corner of the result in +the input. +
+`offset_width` + +Horizontal coordinate of the top-left corner of the result in +the input. +
+`target_height` + +Height of the result. +
+`target_width` + +Width of the result. +
+ + + + + + + + + + + +
+If `image` was 4-D, a 4-D float Tensor of shape +`[batch, target_height, target_width, channels]` +If `image` was 3-D, a 3-D float Tensor of shape +`[target_height, target_width, channels]` +
+ + + + + + + + + + + + +
+`ValueError` + +If the shape of `image` is incompatible with the `offset_*` or +`target_*` arguments, or either `offset_height` or `offset_width` is +negative, or either `target_height` or `target_width` is not positive. +
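+
+A minimal sketch (synthetic 5x5 RGB image, arbitrary offsets) showing the
+resulting shape:
+
+```python
+import tensorflow as tf
+
+image = tf.reshape(tf.range(5 * 5 * 3, dtype=tf.float32), [5, 5, 3])
+cropped = tf.image.crop_to_bounding_box(
+    image, offset_height=1, offset_width=2, target_height=3, target_width=2)
+# cropped has shape [3, 2, 3]: rows 1-3 and columns 2-3 of the input.
+```
+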
+ diff --git a/site/en/api_docs/python/tf/image/draw_bounding_boxes.md b/site/en/api_docs/python/tf/image/draw_bounding_boxes.md new file mode 100644 index 00000000000..169d15b3ec8 --- /dev/null +++ b/site/en/api_docs/python/tf/image/draw_bounding_boxes.md @@ -0,0 +1,125 @@ +description: Draw bounding boxes on a batch of images. + +
+ + +
+
+# tf.image.draw_bounding_boxes
+
+
+
+Draw bounding boxes on a batch of images.
+
+
+
+Outputs a copy of `images` but draws on top of the pixels zero or more
+bounding boxes specified by the locations in `boxes`. The coordinates of
+each bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`.
+The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width
+and the height of the underlying image.
+
+For example, if an image is 100 x 200 pixels (height x width) and the bounding
+box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of
+the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates).
+
+Parts of the bounding box may fall outside the image.
+
+`images` + +A `Tensor`. Must be one of the following types: `float32`, `half`. +4-D with shape `[batch, height, width, depth]`. A batch of images. +
+`boxes` + +A `Tensor` of type `float32`. 3-D with shape `[batch, +num_bounding_boxes, 4]` containing bounding boxes. +
+`colors` + +A `Tensor` of type `float32`. 2-D. A list of RGBA colors to cycle +through for the boxes. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
+ + + +#### Usage Example: + + + +``` +>>> # create an empty image +>>> img = tf.zeros([1, 3, 3, 3]) +>>> # draw a box around the image +>>> box = np.array([0, 0, 1, 1]) +>>> boxes = box.reshape([1, 1, 4]) +>>> # alternate between red and blue +>>> colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]) +>>> tf.image.draw_bounding_boxes(img, boxes, colors) + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/image/encode_png.md b/site/en/api_docs/python/tf/image/encode_png.md new file mode 100644 index 00000000000..319b4c66765 --- /dev/null +++ b/site/en/api_docs/python/tf/image/encode_png.md @@ -0,0 +1,101 @@ +description: PNG-encode an image. + +
+ + +
+ +# tf.image.encode_png + + + + + + + + + +PNG-encode an image. + + + + + + + + + +`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]` +where `channels` is: + +* 1: for grayscale. +* 2: for grayscale + alpha. +* 3: for RGB. +* 4: for RGBA. + +The ZLIB compression level, `compression`, can be -1 for the PNG-encoder +default or a value from 0 to 9. 9 is the highest compression level, +generating the smallest output, but is slower. + + + + + + + + + + + + + + + + +
+`image` + +A `Tensor`. Must be one of the following types: `uint8`, `uint16`. +3-D with shape `[height, width, channels]`. +
+`compression` + +An optional `int`. Defaults to `-1`. Compression level. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
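+
+A minimal sketch (synthetic image, made-up file path) of encoding an RGB
+`uint8` image to PNG bytes:
+
+```python
+import tensorflow as tf
+
+image = tf.cast(
+    tf.random.uniform([64, 64, 3], maxval=256, dtype=tf.int32), tf.uint8)
+png_bytes = tf.image.encode_png(image, compression=9)  # scalar string tensor
+# The bytes could then be written out, e.g.:
+# tf.io.write_file('/tmp/example.png', png_bytes)
+```
+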
+ diff --git a/site/en/api_docs/python/tf/image/extract_glimpse.md b/site/en/api_docs/python/tf/image/extract_glimpse.md new file mode 100644 index 00000000000..8cec724f8b2 --- /dev/null +++ b/site/en/api_docs/python/tf/image/extract_glimpse.md @@ -0,0 +1,162 @@ +description: Extracts a glimpse from the input tensor. + +
+ + +
+ +# tf.image.extract_glimpse + + + + + + + + + +Extracts a glimpse from the input tensor. + + + + + + + +Returns a set of windows called glimpses extracted at location +`offsets` from the input tensor. If the windows only partially +overlaps the inputs, the non-overlapping areas will be filled with +random noise. + +The result is a 4-D tensor of shape `[batch_size, glimpse_height, +glimpse_width, channels]`. The channels and batch dimensions are the +same as that of the input tensor. The height and width of the output +windows are specified in the `size` parameter. + +The argument `normalized` and `centered` controls how the windows are built: + +* If the coordinates are normalized but not centered, 0.0 and 1.0 + correspond to the minimum and maximum of each height and width + dimension. +* If the coordinates are both normalized and centered, they range from + -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper + left corner, the lower right corner is located at (1.0, 1.0) and the + center is at (0, 0). +* If the coordinates are not normalized they are interpreted as + numbers of pixels. + +#### Usage Example: + + + +``` +>>> x = [[[[0.0], +... [1.0], +... [2.0]], +... [[3.0], +... [4.0], +... [5.0]], +... [[6.0], +... [7.0], +... [8.0]]]] +>>> tf.image.extract_glimpse(x, size=(2, 2), offsets=[[1, 1]], +... centered=False, normalized=False) + +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `float32`. A 4-D float tensor of shape +`[batch_size, height, width, channels]`. +
+`size` + +A `Tensor` of type `int32`. A 1-D tensor of 2 elements containing the +size of the glimpses to extract. The glimpse height must be specified +first, following by the glimpse width. +
+`offsets`
+
+A `Tensor` of type `float32`. A 2-D tensor of shape
+`[batch_size, 2]` containing the y, x locations of the center of each
+window.
+
+`centered`
+
+An optional `bool`. Defaults to `True`. Indicates if the offset
+coordinates are centered relative to the image, in which case the (0, 0)
+offset is relative to the center of the input images. If false, the (0, 0)
+offset corresponds to the upper left corner of the input images.
+
+`normalized`
+
+An optional `bool`. Defaults to `True`. Indicates if the offset
+coordinates are normalized.
+
+`noise`
+
+An optional `string`. Defaults to `uniform`. Indicates if the noise
+should be `uniform` (uniform distribution), `gaussian` (gaussian
+distribution), or `zero` (zero padding).
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/image/extract_patches.md b/site/en/api_docs/python/tf/image/extract_patches.md new file mode 100644 index 00000000000..6e9b54ec54a --- /dev/null +++ b/site/en/api_docs/python/tf/image/extract_patches.md @@ -0,0 +1,213 @@ +description: Extract patches from images. + +
+ + +
+ +# tf.image.extract_patches + + + + + + + + + +Extract `patches` from `images`. + + + + + + + + + +This op collects patches from the input image, as if applying a +convolution. All extracted patches are stacked in the depth (last) dimension +of the output. + +Specifically, the op extracts patches of shape `sizes` which are `strides` +apart in the input image. The output is subsampled using the `rates` argument, +in the same manner as "atrous" or "dilated" convolutions. + +The result is a 4D tensor which is indexed by batch, row, and column. +`output[i, x, y]` contains a flattened patch of size `sizes[1], sizes[2]` +which is taken from the input starting at +`images[i, x*strides[1], y*strides[2]]`. + +Each output patch can be reshaped to `sizes[1], sizes[2], depth`, where +`depth` is `images.shape[3]`. + +The output elements are taken from the input at intervals given by the `rate` +argument, as in dilated convolutions. + +The `padding` argument has no effect on the size of each patch, it determines +how many patches are extracted. If `VALID`, only patches which are fully +contained in the input image are included. If `SAME`, all patches whose +starting point is inside the input are included, and areas outside the input +default to zero. + +#### Example: + + + +``` + n = 10 + # images is a 1 x 10 x 10 x 1 array that contains the numbers 1 through 100 + images = [[[[x * n + y + 1] for y in range(n)] for x in range(n)]] + + # We generate two outputs as follows: + # 1. 3x3 patches with stride length 5 + # 2. Same as above, but the rate is increased to 2 + tf.extract_image_patches(images=images, + ksizes=[1, 3, 3, 1], + strides=[1, 5, 5, 1], + rates=[1, 1, 1, 1], + padding='VALID') + + # Yields: + [[[[ 1 2 3 11 12 13 21 22 23] + [ 6 7 8 16 17 18 26 27 28]] + [[51 52 53 61 62 63 71 72 73] + [56 57 58 66 67 68 76 77 78]]]] +``` + +If we mark the pixels in the input image which are taken for the output with +`*`, we see the pattern: + +``` + * * * 4 5 * * * 9 10 + * * * 14 15 * * * 19 20 + * * * 24 25 * * * 29 30 + 31 32 33 34 35 36 37 38 39 40 + 41 42 43 44 45 46 47 48 49 50 + * * * 54 55 * * * 59 60 + * * * 64 65 * * * 69 70 + * * * 74 75 * * * 79 80 + 81 82 83 84 85 86 87 88 89 90 + 91 92 93 94 95 96 97 98 99 100 +``` + +``` + tf.extract_image_patches(images=images, + sizes=[1, 3, 3, 1], + strides=[1, 5, 5, 1], + rates=[1, 2, 2, 1], + padding='VALID') + + # Yields: + [[[[ 1 3 5 21 23 25 41 43 45] + [ 6 8 10 26 28 30 46 48 50]] + + [[ 51 53 55 71 73 75 91 93 95] + [ 56 58 60 76 78 80 96 98 100]]]] +``` + +We can again draw the effect, this time using the symbols `*`, `x`, `+` and +`o` to distinguish the patches: + +``` + * 2 * 4 * x 7 x 9 x + 11 12 13 14 15 16 17 18 19 20 + * 22 * 24 * x 27 x 29 x + 31 32 33 34 35 36 37 38 39 40 + * 42 * 44 * x 47 x 49 x + + 52 + 54 + o 57 o 59 o + 61 62 63 64 65 66 67 68 69 70 + + 72 + 74 + o 77 o 79 o + 81 82 83 84 85 86 87 88 89 90 + + 92 + 94 + o 97 o 99 o +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`images`
+
+A 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.
+
+`sizes` + +The size of the extracted patches. Must be [1, size_rows, size_cols, +1]. +
+`strides` + +A 1-D Tensor of length 4. How far the centers of two consecutive +patches are in the images. Must be: `[1, stride_rows, stride_cols, 1]`. +
+`rates` + +A 1-D Tensor of length 4. Must be: `[1, rate_rows, rate_cols, 1]`. +This is the input stride, specifying how far two consecutive patch samples +are in the input. Equivalent to extracting patches with `patch_sizes_eff = +patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by subsampling +them spatially by a factor of `rates`. This is equivalent to `rate` in +dilated (a.k.a. Atrous) convolutions. +
+`padding` + +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A 4-D Tensor of the same type as the input. +
+ diff --git a/site/en/api_docs/python/tf/image/flip_left_right.md b/site/en/api_docs/python/tf/image/flip_left_right.md new file mode 100644 index 00000000000..fed2b8047e8 --- /dev/null +++ b/site/en/api_docs/python/tf/image/flip_left_right.md @@ -0,0 +1,113 @@ +description: Flip an image horizontally (left to right). + +
+ + +
+ +# tf.image.flip_left_right + + + + + + + + + +Flip an image horizontally (left to right). + + + + + + + + + +Outputs the contents of `image` flipped along the width dimension. + +See also `reverse()`. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.flip_left_right(x) + +``` + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+ + + + + + + + + + + +
+A tensor of the same type and shape as `image`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+if the shape of `image` is not supported.
+
+ diff --git a/site/en/api_docs/python/tf/image/flip_up_down.md b/site/en/api_docs/python/tf/image/flip_up_down.md new file mode 100644 index 00000000000..c7dd7ce6f44 --- /dev/null +++ b/site/en/api_docs/python/tf/image/flip_up_down.md @@ -0,0 +1,113 @@ +description: Flip an image vertically (upside down). + +
+ + +
+ +# tf.image.flip_up_down + + + + + + + + + +Flip an image vertically (upside down). + + + + + + + + + +Outputs the contents of `image` flipped along the height dimension. + +See also `reverse()`. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.flip_up_down(x) + +``` + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+ + + + + + + + + + + +
+A `Tensor` of the same type and shape as `image`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+if the shape of `image` is not supported.
+
+ diff --git a/site/en/api_docs/python/tf/image/generate_bounding_box_proposals.md b/site/en/api_docs/python/tf/image/generate_bounding_box_proposals.md new file mode 100644 index 00000000000..5f7b720b36a --- /dev/null +++ b/site/en/api_docs/python/tf/image/generate_bounding_box_proposals.md @@ -0,0 +1,69 @@ +description: Generate bounding box proposals from encoded bounding boxes. + +
+ + +
+ +# tf.image.generate_bounding_box_proposals + + + + + + + + + +Generate bounding box proposals from encoded bounding boxes. + + + + + + + + + + + + + + + + + + + + + + +
+`rois` + +Region of interest boxes sorted by their scores. +
+`roi_probabilities`
+
+Scores of the ROI boxes in the `rois` tensor.
+
+ diff --git a/site/en/api_docs/python/tf/image/grayscale_to_rgb.md b/site/en/api_docs/python/tf/image/grayscale_to_rgb.md new file mode 100644 index 00000000000..84896aa10a4 --- /dev/null +++ b/site/en/api_docs/python/tf/image/grayscale_to_rgb.md @@ -0,0 +1,94 @@ +description: Converts one or more images from Grayscale to RGB. + +
+ + +
+ +# tf.image.grayscale_to_rgb + + + + + + + + + +Converts one or more images from Grayscale to RGB. + + + + + + + + + +Outputs a tensor of the same `DType` and rank as `images`. The size of the +last dimension of the output is 3, containing the RGB value of the pixels. +The input images' last dimension must be size 1. + +``` +>>> original = tf.constant([[[1.0], [2.0], [3.0]]]) +>>> converted = tf.image.grayscale_to_rgb(original) +>>> print(converted.numpy()) +[[[1. 1. 1.] + [2. 2. 2.] + [3. 3. 3.]]] +``` + + + + + + + + + + + + + +
+`images` + +The Grayscale tensor to convert. The last dimension must be size 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The converted grayscale image(s). +
+ diff --git a/site/en/api_docs/python/tf/image/hsv_to_rgb.md b/site/en/api_docs/python/tf/image/hsv_to_rgb.md new file mode 100644 index 00000000000..07d1f41fbaa --- /dev/null +++ b/site/en/api_docs/python/tf/image/hsv_to_rgb.md @@ -0,0 +1,83 @@ +description: Convert one or more images from HSV to RGB. + +
+ + +
+ +# tf.image.hsv_to_rgb + + + + + + + + + +Convert one or more images from HSV to RGB. + + + + + + + + + +Outputs a tensor of the same shape as the `images` tensor, containing the RGB +value of the pixels. The output is only well defined if the value in `images` +are in `[0,1]`. + +See `rgb_to_hsv` for a description of the HSV encoding. + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +1-D or higher rank. HSV data to convert. Last dimension must be size 3. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
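+
+A small round-trip sketch (hand-picked float values in `[0, 1]`), converting
+RGB to HSV and back:
+
+```python
+import tensorflow as tf
+
+rgb = tf.constant([[[0.2, 0.4, 0.6],
+                    [1.0, 0.0, 0.0]]])       # shape [1, 2, 3]
+hsv = tf.image.rgb_to_hsv(rgb)
+rgb_again = tf.image.hsv_to_rgb(hsv)         # approximately equal to `rgb`
+```
+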
+ diff --git a/site/en/api_docs/python/tf/image/image_gradients.md b/site/en/api_docs/python/tf/image/image_gradients.md new file mode 100644 index 00000000000..a30dac529eb --- /dev/null +++ b/site/en/api_docs/python/tf/image/image_gradients.md @@ -0,0 +1,133 @@ +description: Returns image gradients (dy, dx) for each color channel. + +
+ + +
+ +# tf.image.image_gradients + + + + + + + + + +Returns image gradients (dy, dx) for each color channel. + + + + + + + + + +Both output tensors have the same shape as the input: [batch_size, h, w, +d]. The gradient values are organized so that [I(x+1, y) - I(x, y)] is in +location (x, y). That means that dy will always have zeros in the last row, +and dx will always have zeros in the last column. + +#### Usage Example: + +```python +BATCH_SIZE = 1 +IMAGE_HEIGHT = 5 +IMAGE_WIDTH = 5 +CHANNELS = 1 +image = tf.reshape(tf.range(IMAGE_HEIGHT * IMAGE_WIDTH * CHANNELS, + delta=1, dtype=tf.float32), + shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS)) +dx, dy = tf.image.image_gradients(image) +print(image[0, :,:,0]) +tf.Tensor( + [[ 0. 1. 2. 3. 4.] + [ 5. 6. 7. 8. 9.] + [10. 11. 12. 13. 14.] + [15. 16. 17. 18. 19.] + [20. 21. 22. 23. 24.]], shape=(5, 5), dtype=float32) +print(dx[0, :,:,0]) +tf.Tensor( + [[5. 5. 5. 5. 5.] + [5. 5. 5. 5. 5.] + [5. 5. 5. 5. 5.] + [5. 5. 5. 5. 5.] + [0. 0. 0. 0. 0.]], shape=(5, 5), dtype=float32) +print(dy[0, :,:,0]) +tf.Tensor( + [[1. 1. 1. 1. 0.] + [1. 1. 1. 1. 0.] + [1. 1. 1. 1. 0.] + [1. 1. 1. 1. 0.] + [1. 1. 1. 1. 0.]], shape=(5, 5), dtype=float32) +``` + + + + + + + + + + + + +
+`image` + +Tensor with shape [batch_size, h, w, d]. +
+ + + + + + + + + + + +
+Pair of tensors (dy, dx) holding the vertical and horizontal image +gradients (1-step finite difference). +
+ + + + + + + + + + + + +
+`ValueError` + +If `image` is not a 4D tensor. +
+ diff --git a/site/en/api_docs/python/tf/image/non_max_suppression.md b/site/en/api_docs/python/tf/image/non_max_suppression.md new file mode 100644 index 00000000000..c83ca449d5a --- /dev/null +++ b/site/en/api_docs/python/tf/image/non_max_suppression.md @@ -0,0 +1,137 @@ +description: Greedily selects a subset of bounding boxes in descending order of score. + +
+ + +
+ +# tf.image.non_max_suppression + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score. + + + + + + + + + +Prunes away boxes that have high intersection-over-union (IOU) overlap +with previously selected boxes. Bounding boxes are supplied as +`[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any +diagonal pair of box corners and the coordinates can be provided as normalized +(i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm +is agnostic to where the origin is in the coordinate system. Note that this +algorithm is invariant to orthogonal transformations and translations +of the coordinate system; thus translating or reflections of the coordinate +system result in the same boxes being selected by the algorithm. +The output of this operation is a set of integers indexing into the input +collection of bounding boxes representing the selected boxes. The bounding +box coordinates corresponding to the selected indices can then be obtained +using the tf.gather operation. For example: + ```python + selected_indices = tf.image.non_max_suppression( + boxes, scores, max_output_size, iou_threshold) + selected_boxes = tf.gather(boxes, selected_indices) + ``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`boxes` + +A 2-D float `Tensor` of shape `[num_boxes, 4]`. +
+`scores` + +A 1-D float `Tensor` of shape `[num_boxes]` representing a single +score corresponding to each box (each row of boxes). +
+`max_output_size` + +A scalar integer `Tensor` representing the maximum number +of boxes to be selected by non-max suppression. +
+`iou_threshold` + +A float representing the threshold for deciding whether boxes +overlap too much with respect to IOU. +
+`score_threshold` + +A float representing the threshold for deciding when to +remove boxes based on score. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+`selected_indices` + +A 1-D integer `Tensor` of shape `[M]` representing the +selected indices from the boxes tensor, where `M <= max_output_size`. +
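+
+A self-contained sketch (made-up boxes and scores): the first two boxes
+overlap heavily, so only one of them survives suppression:
+
+```python
+import tensorflow as tf
+
+boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
+                     [0.0, 0.1, 1.0, 1.1],
+                     [0.0, 2.0, 1.0, 3.0]])
+scores = tf.constant([0.9, 0.8, 0.7])
+selected_indices = tf.image.non_max_suppression(
+    boxes, scores, max_output_size=3, iou_threshold=0.5)
+selected_boxes = tf.gather(boxes, selected_indices)  # boxes 0 and 2 remain
+```
+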
+ diff --git a/site/en/api_docs/python/tf/image/non_max_suppression_overlaps.md b/site/en/api_docs/python/tf/image/non_max_suppression_overlaps.md new file mode 100644 index 00000000000..a8af471c184 --- /dev/null +++ b/site/en/api_docs/python/tf/image/non_max_suppression_overlaps.md @@ -0,0 +1,130 @@ +description: Greedily selects a subset of bounding boxes in descending order of score. + +
+ + +
+ +# tf.image.non_max_suppression_overlaps + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score. + + + + + + + + + +Prunes away boxes that have high overlap with previously selected boxes. +N-by-n overlap values are supplied as square matrix. +The output of this operation is a set of integers indexing into the input +collection of bounding boxes representing the selected boxes. The bounding +box coordinates corresponding to the selected indices can then be obtained +using the tf.gather operation. For example: + ```python + selected_indices = tf.image.non_max_suppression_overlaps( + overlaps, scores, max_output_size, iou_threshold) + selected_boxes = tf.gather(boxes, selected_indices) + ``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`overlaps` + +A 2-D float `Tensor` of shape `[num_boxes, num_boxes]`. +
+`scores` + +A 1-D float `Tensor` of shape `[num_boxes]` representing a single +score corresponding to each box (each row of boxes). +
+`max_output_size` + +A scalar integer `Tensor` representing the maximum number +of boxes to be selected by non-max suppression. +
+`overlap_threshold` + +A float representing the threshold for deciding whether +boxes overlap too much with respect to the provided overlap values. +
+`score_threshold` + +A float representing the threshold for deciding when to +remove boxes based on score. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+`selected_indices` + +A 1-D integer `Tensor` of shape `[M]` representing the +selected indices from the overlaps tensor, where `M <= max_output_size`. +
+ diff --git a/site/en/api_docs/python/tf/image/non_max_suppression_padded.md b/site/en/api_docs/python/tf/image/non_max_suppression_padded.md new file mode 100644 index 00000000000..c2e27c6a554 --- /dev/null +++ b/site/en/api_docs/python/tf/image/non_max_suppression_padded.md @@ -0,0 +1,151 @@ +description: Greedily selects a subset of bounding boxes in descending order of score. + +
+ + +
+ +# tf.image.non_max_suppression_padded + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score. + + + + + + + + + +Performs algorithmically equivalent operation to tf.image.non_max_suppression, +with the addition of an optional parameter which zero-pads the output to +be of size `max_output_size`. +The output of this operation is a tuple containing the set of integers +indexing into the input collection of bounding boxes representing the selected +boxes and the number of valid indices in the index set. The bounding box +coordinates corresponding to the selected indices can then be obtained using +the tf.slice and tf.gather operations. For example: + ```python + selected_indices_padded, num_valid = tf.image.non_max_suppression_padded( + boxes, scores, max_output_size, iou_threshold, + score_threshold, pad_to_max_output_size=True) + selected_indices = tf.slice( + selected_indices_padded, tf.constant([0]), num_valid) + selected_boxes = tf.gather(boxes, selected_indices) + ``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`boxes` + +A 2-D float `Tensor` of shape `[num_boxes, 4]`. +
+`scores` + +A 1-D float `Tensor` of shape `[num_boxes]` representing a single +score corresponding to each box (each row of boxes). +
+`max_output_size` + +A scalar integer `Tensor` representing the maximum number +of boxes to be selected by non-max suppression. +
+`iou_threshold` + +A float representing the threshold for deciding whether boxes +overlap too much with respect to IOU. +
+`score_threshold` + +A float representing the threshold for deciding when to +remove boxes based on score. +
+`pad_to_max_output_size` + +bool. If True, size of `selected_indices` output is +padded to `max_output_size`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + +
+`selected_indices` + +A 1-D integer `Tensor` of shape `[M]` representing the +selected indices from the boxes tensor, where `M <= max_output_size`. +
+`valid_outputs` + +A scalar integer `Tensor` denoting how many elements in +`selected_indices` are valid. Valid elements occur first, then padding. +
+ diff --git a/site/en/api_docs/python/tf/image/non_max_suppression_with_scores.md b/site/en/api_docs/python/tf/image/non_max_suppression_with_scores.md new file mode 100644 index 00000000000..8e241044d28 --- /dev/null +++ b/site/en/api_docs/python/tf/image/non_max_suppression_with_scores.md @@ -0,0 +1,172 @@ +description: Greedily selects a subset of bounding boxes in descending order of score. + +
+ + +
+ +# tf.image.non_max_suppression_with_scores + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score. + + + + + + + + + +Prunes away boxes that have high intersection-over-union (IOU) overlap +with previously selected boxes. Bounding boxes are supplied as +`[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any +diagonal pair of box corners and the coordinates can be provided as normalized +(i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm +is agnostic to where the origin is in the coordinate system. Note that this +algorithm is invariant to orthogonal transformations and translations +of the coordinate system; thus translating or reflections of the coordinate +system result in the same boxes being selected by the algorithm. +The output of this operation is a set of integers indexing into the input +collection of bounding boxes representing the selected boxes. The bounding +box coordinates corresponding to the selected indices can then be obtained +using the tf.gather operation. For example: + ```python + selected_indices, selected_scores = tf.image.non_max_suppression_v2( + boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1, + soft_nms_sigma=0.5) + selected_boxes = tf.gather(boxes, selected_indices) + ``` + +This function generalizes the tf.image.non_max_suppression op by also +supporting a Soft-NMS (with Gaussian weighting) mode (c.f. +Bodla et al, https://arxiv.org/abs/1704.04503) where boxes reduce the score +of other overlapping boxes instead of directly causing them to be pruned. +Consequently, in contrast to tf.image.non_max_suppression, +`tf.image.non_max_suppression_v2` returns the new scores of each input box in +the second output, `selected_scores`. + +To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be +larger than 0. When `soft_nms_sigma` equals 0, the behavior of +`tf.image.non_max_suppression_v2` is identical to that of +tf.image.non_max_suppression (except for the extra output) both in function +and in running time. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`boxes` + +A 2-D float `Tensor` of shape `[num_boxes, 4]`. +
+`scores` + +A 1-D float `Tensor` of shape `[num_boxes]` representing a single +score corresponding to each box (each row of boxes). +
+`max_output_size` + +A scalar integer `Tensor` representing the maximum number +of boxes to be selected by non-max suppression. +
+`iou_threshold` + +A float representing the threshold for deciding whether boxes +overlap too much with respect to IOU. +
+`score_threshold` + +A float representing the threshold for deciding when to +remove boxes based on score. +
+`soft_nms_sigma`
+
+A scalar float representing the Soft NMS sigma parameter; see
+Bodla et al (https://arxiv.org/abs/1704.04503). When
+`soft_nms_sigma=0.0` (which is the default), we fall back to standard (hard)
+NMS.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + +
+`selected_indices` + +A 1-D integer `Tensor` of shape `[M]` representing the +selected indices from the boxes tensor, where `M <= max_output_size`. +
+`selected_scores`
+
+A 1-D float tensor of shape `[M]` representing the
+corresponding scores for each selected box, where `M <= max_output_size`.
+Scores only differ from corresponding input scores when using Soft NMS
+(i.e. when `soft_nms_sigma>0`).
+
+ diff --git a/site/en/api_docs/python/tf/image/pad_to_bounding_box.md b/site/en/api_docs/python/tf/image/pad_to_bounding_box.md new file mode 100644 index 00000000000..2fc5e8603b3 --- /dev/null +++ b/site/en/api_docs/python/tf/image/pad_to_bounding_box.md @@ -0,0 +1,162 @@ +description: Pad image with zeros to the specified height and width. + +
+ + +
+ +# tf.image.pad_to_bounding_box + + + + + + + + + +Pad `image` with zeros to the specified `height` and `width`. + + + + + + + + + +Adds `offset_height` rows of zeros on top, `offset_width` columns of +zeros on the left, and then pads the image on the bottom and right +with zeros until it has dimensions `target_height`, `target_width`. + +This op does nothing if `offset_*` is zero and the image already has size +`target_height` by `target_width`. + +#### Usage Example: + + + +``` +>>> x = [[[1., 2., 3.], +... [4., 5., 6.]], +... [[7., 8., 9.], +... [10., 11., 12.]]] +>>> padded_image = tf.image.pad_to_bounding_box(x, 1, 1, 4, 4) +>>> padded_image + +``` + + + + + + + + + + + + + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`offset_height` + +Number of rows of zeros to add on top. +
+`offset_width` + +Number of columns of zeros to add on the left. +
+`target_height` + +Height of output image. +
+`target_width` + +Width of output image. +
+ + + + + + + + + + + +
+If `image` was 4-D, a 4-D float Tensor of shape +`[batch, target_height, target_width, channels]` +If `image` was 3-D, a 3-D float Tensor of shape +`[target_height, target_width, channels]` +
+ + + + + + + + + + + + +
+`ValueError` + +If the shape of `image` is incompatible with the `offset_*` or +`target_*` arguments, or either `offset_height` or `offset_width` is +negative. +
+ diff --git a/site/en/api_docs/python/tf/image/per_image_standardization.md b/site/en/api_docs/python/tf/image/per_image_standardization.md new file mode 100644 index 00000000000..495d9278cb1 --- /dev/null +++ b/site/en/api_docs/python/tf/image/per_image_standardization.md @@ -0,0 +1,101 @@ +description: Linearly scales each image in image to have mean 0 and variance 1. + +
+ + +
+ +# tf.image.per_image_standardization + + + + + + + + + +Linearly scales each image in `image` to have mean 0 and variance 1. + + + + + + + + + +For each 3-D image `x` in `image`, computes `(x - mean) / adjusted_stddev`, +where + +- `mean` is the average of all values in `x` +- `adjusted_stddev = max(stddev, 1.0/sqrt(N))` is capped away from 0 to + protect against division by 0 when handling uniform images + - `N` is the number of elements in `x` + - `stddev` is the standard deviation of all values in `x` + + + + + + + + + + +
+`image` + +An n-D Tensor with at least 3 dimensions, the last 3 of which are the +dimensions of each image. +
+ + + + + + + + + + + +
+A `Tensor` with the same shape and dtype as `image`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+if the shape of `image` is incompatible with this function.
+
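+
+#### Usage Example:
+
+A short sketch, using a randomly generated image purely for
+illustration, showing that the op matches the formula above:
+
+```python
+import tensorflow as tf
+
+image = tf.random.uniform([64, 64, 3])  # hypothetical single image
+standardized = tf.image.per_image_standardization(image)
+
+# Manual computation of (x - mean) / adjusted_stddev for comparison.
+mean = tf.reduce_mean(image)
+stddev = tf.math.reduce_std(image)
+num_elements = tf.cast(tf.size(image), tf.float32)
+adjusted_stddev = tf.maximum(stddev, 1.0 / tf.sqrt(num_elements))
+manual = (image - mean) / adjusted_stddev
+# `standardized` and `manual` agree up to floating-point tolerance.
+```
+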
+ diff --git a/site/en/api_docs/python/tf/image/psnr.md b/site/en/api_docs/python/tf/image/psnr.md new file mode 100644 index 00000000000..4e991b28001 --- /dev/null +++ b/site/en/api_docs/python/tf/image/psnr.md @@ -0,0 +1,120 @@ +description: Returns the Peak Signal-to-Noise Ratio between a and b. + +
+ + +
+ +# tf.image.psnr + + + + + + + + + +Returns the Peak Signal-to-Noise Ratio between a and b. + + + + + + + + + +This is intended to be used on signals (or images). Produces a PSNR value for +each image in batch. + +The last three dimensions of input are expected to be [height, width, depth]. + +#### Example: + + + +```python + # Read images from file. + im1 = tf.decode_png('path/to/im1.png') + im2 = tf.decode_png('path/to/im2.png') + # Compute PSNR over tf.uint8 Tensors. + psnr1 = tf.image.psnr(im1, im2, max_val=255) + + # Compute PSNR over tf.float32 Tensors. + im1 = tf.image.convert_image_dtype(im1, tf.float32) + im2 = tf.image.convert_image_dtype(im2, tf.float32) + psnr2 = tf.image.psnr(im1, im2, max_val=1.0) + # psnr1 and psnr2 both have type tf.float32 and are almost equal. +``` + + + + + + + + + + + + + + + + + + + +
+`a` + +First set of images. +
+`b` + +Second set of images. +
+`max_val`
+
+The dynamic range of the images (i.e., the difference between the
+maximum and minimum allowed values).
+
+`name` + +Namespace to embed the computation in. +
+ + + + + + + + + + + +
+The PSNR between a and b, one value per image pair. The returned tensor
+has type tf.float32 and shape [batch_size, 1].
+
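+
+#### Usage Example:
+
+A self-contained variant of the example above that uses in-memory
+tensors instead of files; the shapes and noise level are arbitrary
+illustrative choices:
+
+```python
+import tensorflow as tf
+
+# A hypothetical batch of four float images in [0, 1] and a noisy copy.
+im1 = tf.random.uniform([4, 64, 64, 3])
+noise = tf.random.normal([4, 64, 64, 3], stddev=0.05)
+im2 = tf.clip_by_value(im1 + noise, 0.0, 1.0)
+
+psnr = tf.image.psnr(im1, im2, max_val=1.0)  # one PSNR value per image
+```
+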
+ diff --git a/site/en/api_docs/python/tf/image/random_brightness.md b/site/en/api_docs/python/tf/image/random_brightness.md new file mode 100644 index 00000000000..627302d9245 --- /dev/null +++ b/site/en/api_docs/python/tf/image/random_brightness.md @@ -0,0 +1,122 @@ +description: Adjust the brightness of images by a random factor. + +
+ + +
+ +# tf.image.random_brightness + + + + + + + + + +Adjust the brightness of images by a random factor. + + + + + + + + + +Equivalent to `adjust_brightness()` using a `delta` randomly picked in the +interval `[-max_delta, max_delta)`. + + + + + + + + + + + + + + + + +
+`image` + +An image or images to adjust. +
+`max_delta` + +float, must be non-negative. +
+`seed` + +A Python integer. Used to create a random seed. See +tf.compat.v1.set_random_seed for behavior. +
+ + + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.random_brightness(x, 0.2) + +``` + + + + + + + + + +
+The brightness-adjusted image(s). +
+ + + + + + + + + + + + +
+`ValueError` + +if `max_delta` is negative. +
+ diff --git a/site/en/api_docs/python/tf/image/random_contrast.md b/site/en/api_docs/python/tf/image/random_contrast.md new file mode 100644 index 00000000000..394b384eee9 --- /dev/null +++ b/site/en/api_docs/python/tf/image/random_contrast.md @@ -0,0 +1,129 @@ +description: Adjust the contrast of an image or images by a random factor. + +
+ + +
+ +# tf.image.random_contrast + + + + + + + + + +Adjust the contrast of an image or images by a random factor. + + + + + + + + + +Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly +picked in the interval `[lower, upper]`. + + + + + + + + + + + + + + + + + + + +
+`image` + +An image tensor with 3 or more dimensions. +
+`lower` + +float. Lower bound for the random contrast factor. +
+`upper` + +float. Upper bound for the random contrast factor. +
+`seed` + +A Python integer. Used to create a random seed. See +tf.compat.v1.set_random_seed for behavior. +
+ + + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.random_contrast(x, 0.2, 0.5) + +``` + + + + + + + + + +
+The contrast-adjusted image(s). +
+ + + + + + + + + + + + +
+`ValueError` + +if `upper <= lower` or if `lower < 0`. +
+ diff --git a/site/en/api_docs/python/tf/image/random_crop.md b/site/en/api_docs/python/tf/image/random_crop.md new file mode 100644 index 00000000000..551edaf80fd --- /dev/null +++ b/site/en/api_docs/python/tf/image/random_crop.md @@ -0,0 +1,104 @@ +description: Randomly crops a tensor to a given size. + +
+ + +
+ +# tf.image.random_crop + + + + + + + + + +Randomly crops a tensor to a given size. + + + + + + + + + +Slices a shape `size` portion out of `value` at a uniformly chosen offset. +Requires `value.shape >= size`. + +If a dimension should not be cropped, pass the full size of that dimension. +For example, RGB images can be cropped with +`size = [crop_height, crop_width, 3]`. + + + + + + + + + + + + + + + + + + + +
+`value` + +Input tensor to crop. +
+`size` + +1-D tensor with size the rank of `value`. +
+`seed` + +Python integer. Used to create a random seed. See +tf.random.set_seed +for behavior. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A cropped tensor of the same rank as `value` and shape `size`. +
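+
+#### Usage Example:
+
+A minimal sketch; the input is a randomly generated image used only for
+illustration:
+
+```python
+import tensorflow as tf
+
+image = tf.random.uniform([128, 128, 3])  # hypothetical [height, width, channels]
+# Keep the full channel dimension and crop a random 64x64 spatial patch.
+patch = tf.image.random_crop(image, size=[64, 64, 3])
+print(patch.shape)  # (64, 64, 3)
+```
+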
+ diff --git a/site/en/api_docs/python/tf/image/random_flip_left_right.md b/site/en/api_docs/python/tf/image/random_flip_left_right.md new file mode 100644 index 00000000000..02114b78819 --- /dev/null +++ b/site/en/api_docs/python/tf/image/random_flip_left_right.md @@ -0,0 +1,128 @@ +description: Randomly flip an image horizontally (left to right). + +
+ + +
+ +# tf.image.random_flip_left_right + + + + + + + + + +Randomly flip an image horizontally (left to right). + + + + + + + + + +With a 1 in 2 chance, outputs the contents of `image` flipped along the +second dimension, which is `width`. Otherwise output the image as-is. +When passing a batch of images, each image will be randomly flipped +independent of other images. + +#### Example usage: + + + +``` +>>> import numpy as np +``` + +``` +>>> image = np.array([[[1], [2]], [[3], [4]]]) +>>> tf.image.random_flip_left_right(image, 5).numpy().tolist() +[[[2], [1]], [[4], [3]]] +``` + +Randomly flip multiple images. +>>> images = np.array( +... [ +... [[[1], [2]], [[3], [4]]], +... [[[5], [6]], [[7], [8]]] +... ]) +>>> tf.image.random_flip_left_right(images, 6).numpy().tolist() +[[[[2], [1]], [[4], [3]]], [[[5], [6]], [[7], [8]]]] + + + + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`seed` + +A Python integer. Used to create a random seed. See +tf.compat.v1.set_random_seed for behavior. +
+ + + + + + + + + + + +
+A tensor of the same type and shape as `image`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+if the shape of `image` is not supported.
+
+ diff --git a/site/en/api_docs/python/tf/image/random_flip_up_down.md b/site/en/api_docs/python/tf/image/random_flip_up_down.md new file mode 100644 index 00000000000..774c3103f92 --- /dev/null +++ b/site/en/api_docs/python/tf/image/random_flip_up_down.md @@ -0,0 +1,128 @@ +description: Randomly flips an image vertically (upside down). + +
+ + +
+ +# tf.image.random_flip_up_down + + + + + + + + + +Randomly flips an image vertically (upside down). + + + + + + + + + +With a 1 in 2 chance, outputs the contents of `image` flipped along the first +dimension, which is `height`. Otherwise, output the image as-is. +When passing a batch of images, each image will be randomly flipped +independent of other images. + +#### Example usage: + + + +``` +>>> import numpy as np +``` + +``` +>>> image = np.array([[[1], [2]], [[3], [4]]]) +>>> tf.image.random_flip_up_down(image, 3).numpy().tolist() +[[[3], [4]], [[1], [2]]] +``` + +Randomly flip multiple images. +>>> images = np.array( +... [ +... [[[1], [2]], [[3], [4]]], +... [[[5], [6]], [[7], [8]]] +... ]) +>>> tf.image.random_flip_up_down(images, 4).numpy().tolist() +[[[[3], [4]], [[1], [2]]], [[[5], [6]], [[7], [8]]]] + + + + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`seed` + +A Python integer. Used to create a random seed. See +tf.compat.v1.set_random_seed for behavior. +
+ + + + + + + + + + + +
+A tensor of the same type and shape as `image`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+if the shape of `image` is not supported.
+
+ diff --git a/site/en/api_docs/python/tf/image/random_hue.md b/site/en/api_docs/python/tf/image/random_hue.md new file mode 100644 index 00000000000..b6ae6d07124 --- /dev/null +++ b/site/en/api_docs/python/tf/image/random_hue.md @@ -0,0 +1,126 @@ +description: Adjust the hue of RGB images by a random factor. + +
+ + +
+ +# tf.image.random_hue + + + + + + + + + +Adjust the hue of RGB images by a random factor. + + + + + + + + + +Equivalent to `adjust_hue()` but uses a `delta` randomly +picked in the interval `[-max_delta, max_delta]`. + +`max_delta` must be in the interval `[0, 0.5]`. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.random_hue(x, 0.2) + +``` + + + + + + + + + + + + + + + + +
+`image` + +RGB image or images. The size of the last dimension must be 3. +
+`max_delta` + +float. The maximum value for the random delta. +
+`seed` + +An operation-specific seed. It will be used in conjunction with the +graph-level seed to determine the real seeds that will be used in this +operation. Please see the documentation of set_random_seed for its +interaction with the graph-level random seed. +
+ + + + + + + + + + + +
+Adjusted image(s), same shape and DType as `image`. +
+ + + + + + + + + + + + +
+`ValueError` + +if `max_delta` is invalid. +
+ diff --git a/site/en/api_docs/python/tf/image/random_jpeg_quality.md b/site/en/api_docs/python/tf/image/random_jpeg_quality.md new file mode 100644 index 00000000000..58843b6bc17 --- /dev/null +++ b/site/en/api_docs/python/tf/image/random_jpeg_quality.md @@ -0,0 +1,132 @@ +description: Randomly changes jpeg encoding quality for inducing jpeg noise. + +
+ + +
+ +# tf.image.random_jpeg_quality + + + + + + + + + +Randomly changes jpeg encoding quality for inducing jpeg noise. + + + + + + + + + +`min_jpeg_quality` must be in the interval `[0, 100]` and less than +`max_jpeg_quality`. +`max_jpeg_quality` must be in the interval `[0, 100]`. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.random_jpeg_quality(x, 75, 95) + +``` + + + + + + + + + + + + + + + + + + + +
+`image` + +3D image. Size of the last dimension must be 1 or 3. +
+`min_jpeg_quality` + +Minimum jpeg encoding quality to use. +
+`max_jpeg_quality` + +Maximum jpeg encoding quality to use. +
+`seed` + +An operation-specific seed. It will be used in conjunction with the +graph-level seed to determine the real seeds that will be used in this +operation. Please see the documentation of set_random_seed for its +interaction with the graph-level random seed. +
+ + + + + + + + + + + +
+Adjusted image(s), same shape and DType as `image`. +
+ + + + + + + + + + + + +
+`ValueError` + +if `min_jpeg_quality` or `max_jpeg_quality` is invalid. +
+ diff --git a/site/en/api_docs/python/tf/image/random_saturation.md b/site/en/api_docs/python/tf/image/random_saturation.md new file mode 100644 index 00000000000..2476fe8f231 --- /dev/null +++ b/site/en/api_docs/python/tf/image/random_saturation.md @@ -0,0 +1,135 @@ +description: Adjust the saturation of RGB images by a random factor. + +
+ + +
+ +# tf.image.random_saturation + + + + + + + + + +Adjust the saturation of RGB images by a random factor. + + + + + + + + + +Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly +picked in the interval `[lower, upper]`. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.random_saturation(x, 5, 10) + +``` + + + + + + + + + + + + + + + + + + + +
+`image` + +RGB image or images. The size of the last dimension must be 3. +
+`lower` + +float. Lower bound for the random saturation factor. +
+`upper` + +float. Upper bound for the random saturation factor. +
+`seed` + +An operation-specific seed. It will be used in conjunction with the +graph-level seed to determine the real seeds that will be used in this +operation. Please see the documentation of set_random_seed for its +interaction with the graph-level random seed. +
+ + + + + + + + + + + +
+Adjusted image(s), same shape and DType as `image`. +
+ + + + + + + + + + + + +
+`ValueError` + +if `upper <= lower` or if `lower < 0`. +
+ diff --git a/site/en/api_docs/python/tf/image/resize.md b/site/en/api_docs/python/tf/image/resize.md new file mode 100644 index 00000000000..eea635db80f --- /dev/null +++ b/site/en/api_docs/python/tf/image/resize.md @@ -0,0 +1,242 @@ +description: Resize images to size using the specified method. + +
+ + +
+ +# tf.image.resize + + + + + + + + + +Resize `images` to `size` using the specified `method`. + + + + + + + +Resized images will be distorted if their original aspect ratio is not +the same as `size`. To avoid distortions see +tf.image.resize_with_pad. + +``` +>>> image = tf.constant([ +... [1,0,0,0,0], +... [0,1,0,0,0], +... [0,0,1,0,0], +... [0,0,0,1,0], +... [0,0,0,0,1], +... ]) +>>> # Add "batch" and "channels" dimensions +>>> image = image[tf.newaxis, ..., tf.newaxis] +>>> image.shape.as_list() # [batch, height, width, channels] +[1, 5, 5, 1] +>>> tf.image.resize(image, [3,5])[0,...,0].numpy() +array([[0.6666667, 0.3333333, 0. , 0. , 0. ], + [0. , 0. , 1. , 0. , 0. ], + [0. , 0. , 0. , 0.3333335, 0.6666665]], + dtype=float32) +``` + +It works equally well with a single image instead of a batch of images: + +``` +>>> tf.image.resize(image[0], [3,5]).shape.as_list() +[3, 5, 1] +``` + +When 'antialias' is true, the sampling filter will anti-alias the input image +as well as interpolate. When downsampling an image with [anti-aliasing]( +https://en.wikipedia.org/wiki/Spatial_anti-aliasing) the sampling filter +kernel is scaled in order to properly anti-alias the input image signal. +'antialias' has no effect when upsampling an image: + +``` +>>> a = tf.image.resize(image, [5,10]) +>>> b = tf.image.resize(image, [5,10], antialias=True) +>>> tf.reduce_max(abs(a - b)).numpy() +0.0 +``` + +The `method` argument expects an item from the image.ResizeMethod enum, or +the string equivalent. The options are: + +* `'bilinear'`: [Bilinear interpolation.]( + https://en.wikipedia.org/wiki/Bilinear_interpolation) If 'antialias' is + true, becomes a hat/tent filter function with radius 1 when downsampling. +* `lanczos3`: [Lanczos kernel]( + https://en.wikipedia.org/wiki/Lanczos_resampling) with radius 3. + High-quality practical filter but may have some ringing, especially on + synthetic images. +* `lanczos5`: [Lanczos kernel] ( + https://en.wikipedia.org/wiki/Lanczos_resampling) with radius 5. + Very-high-quality filter but may have stronger ringing. +* `bicubic`: [Cubic interpolant]( + https://en.wikipedia.org/wiki/Bicubic_interpolation) of Keys. Equivalent to + Catmull-Rom kernel. Reasonably good quality and faster than Lanczos3Kernel, + particularly when upsampling. +* `gaussian`: [Gaussian kernel]( + https://en.wikipedia.org/wiki/Gaussian_filter) with radius 3, + sigma = 1.5 / 3.0. +* `nearest`: [Nearest neighbor interpolation.]( + https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation) + 'antialias' has no effect when used with nearest neighbor interpolation. +* `area`: Anti-aliased resampling with area interpolation. + 'antialias' has no effect when used with area interpolation; it + always anti-aliases. +* `mitchellcubic`: Mitchell-Netravali Cubic non-interpolating filter. + For synthetic images (especially those lacking proper prefiltering), less + ringing than Keys cubic kernel but less sharp. + +Note: Near image edges the filtering kernel may be partially outside the +image boundaries. For these pixels, only input pixels inside the image will be +included in the filter sum, and the output value will be appropriately +normalized. 
+ +The return value has type `float32`, unless the `method` is +ResizeMethod.NEAREST_NEIGHBOR, then the return dtype is the dtype +of `images`: + +``` +>>> nn = tf.image.resize(image, [5,7], method='nearest') +>>> nn[0,...,0].numpy() +array([[1, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 1]], dtype=int32) +``` + +With `preserve_aspect_ratio=True`, the aspect ratio is preserved, so `size` +is the maximum for each dimension: + +``` +>>> max_10_20 = tf.image.resize(image, [10,20], preserve_aspect_ratio=True) +>>> max_10_20.shape.as_list() +[1, 10, 10, 1] +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`images` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`size` + +A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new +size for the images. +
+`method` + +An image.ResizeMethod, or string equivalent. Defaults to +`bilinear`. +
+`preserve_aspect_ratio` + +Whether to preserve the aspect ratio. If this is set, +then `images` will be resized to a size that fits in `size` while +preserving the aspect ratio of the original image. Scales up the image if +`size` is bigger than the current size of the `image`. Defaults to False. +
+`antialias` + +Whether to use an anti-aliasing filter when downsampling an +image. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + + + + + + + + +
+`ValueError` + +if the shape of `images` is incompatible with the +shape arguments to this function +
+`ValueError` + +if `size` has an invalid shape or type. +
+`ValueError` + +if an unsupported resize method is specified. +
+ + + + + + + + + + + +
+If `images` was 4-D, a 4-D float Tensor of shape +`[batch, new_height, new_width, channels]`. +If `images` was 3-D, a 3-D float Tensor of shape +`[new_height, new_width, channels]`. +
+ diff --git a/site/en/api_docs/python/tf/image/resize_with_crop_or_pad.md b/site/en/api_docs/python/tf/image/resize_with_crop_or_pad.md new file mode 100644 index 00000000000..2ff6a67685f --- /dev/null +++ b/site/en/api_docs/python/tf/image/resize_with_crop_or_pad.md @@ -0,0 +1,119 @@ +description: Crops and/or pads an image to a target width and height. + +
+ + +
+ +# tf.image.resize_with_crop_or_pad + + + + + + + + + +Crops and/or pads an image to a target width and height. + + + + + + + + + +Resizes an image to a target width and height by either centrally +cropping the image or padding it evenly with zeros. + +If `width` or `height` is greater than the specified `target_width` or +`target_height` respectively, this op centrally crops along that dimension. +If `width` or `height` is smaller than the specified `target_width` or +`target_height` respectively, this op centrally pads with 0 along that +dimension. + + + + + + + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`target_height` + +Target height. +
+`target_width` + +Target width. +
+ + + + + + + + + + + + +
+`ValueError`
+
+if `target_height` or `target_width` is zero or negative.
+
+ + + + + + + + + + + +
+Cropped and/or padded image. +If `images` was 4-D, a 4-D float Tensor of shape +`[batch, new_height, new_width, channels]`. +If `images` was 3-D, a 3-D float Tensor of shape +`[new_height, new_width, channels]`. +
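+
+#### Usage Example:
+
+A short sketch showing both behaviors at once; the input shape is an
+arbitrary choice for illustration:
+
+```python
+import tensorflow as tf
+
+image = tf.random.uniform([90, 120, 3])  # hypothetical 90x120 RGB image
+out = tf.image.resize_with_crop_or_pad(image, target_height=100,
+                                       target_width=100)
+# Height 90 -> 100 is padded evenly with zeros; width 120 -> 100 is
+# centrally cropped.
+print(out.shape)  # (100, 100, 3)
+```
+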
+ diff --git a/site/en/api_docs/python/tf/image/resize_with_pad.md b/site/en/api_docs/python/tf/image/resize_with_pad.md new file mode 100644 index 00000000000..32bcf6d7fd2 --- /dev/null +++ b/site/en/api_docs/python/tf/image/resize_with_pad.md @@ -0,0 +1,120 @@ +description: Resizes and pads an image to a target width and height. + +
+ + +
+ +# tf.image.resize_with_pad + + + + + + + + + +Resizes and pads an image to a target width and height. + + + + + + + +Resizes an image to a target width and height by keeping +the aspect ratio the same without distortion. If the target +dimensions don't match the image dimensions, the image +is resized and then padded with zeroes to match requested +dimensions. + + + + + + + + + + + + + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`target_height` + +Target height. +
+`target_width` + +Target width. +
+`method` + +Method to use for resizing image. See image.resize() +
+`antialias` + +Whether to use anti-aliasing when resizing. See 'image.resize()'. +
+ + + + + + + + + + + + +
+`ValueError`
+
+if `target_height` or `target_width` is zero or negative.
+
+ + + + + + + + + + + +
+Resized and padded image. +If `images` was 4-D, a 4-D float Tensor of shape +`[batch, new_height, new_width, channels]`. +If `images` was 3-D, a 3-D float Tensor of shape +`[new_height, new_width, channels]`. +
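+
+#### Usage Example:
+
+A minimal sketch; the batch size, image size and target size are
+arbitrary illustrative values:
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform([1, 100, 200, 3])  # hypothetical wide images
+out = tf.image.resize_with_pad(images, target_height=128, target_width=128)
+# The images are scaled to fit within 128x128 without distortion and then
+# zero-padded along the shorter dimension.
+print(out.shape)  # (1, 128, 128, 3)
+```
+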
+ diff --git a/site/en/api_docs/python/tf/image/rgb_to_grayscale.md b/site/en/api_docs/python/tf/image/rgb_to_grayscale.md new file mode 100644 index 00000000000..886837250b8 --- /dev/null +++ b/site/en/api_docs/python/tf/image/rgb_to_grayscale.md @@ -0,0 +1,93 @@ +description: Converts one or more images from RGB to Grayscale. + +
+ + +
+ +# tf.image.rgb_to_grayscale + + + + + + + + + +Converts one or more images from RGB to Grayscale. + + + + + + + + + +Outputs a tensor of the same `DType` and rank as `images`. The size of the +last dimension of the output is 1, containing the Grayscale value of the +pixels. + +``` +>>> original = tf.constant([[[1.0, 2.0, 3.0]]]) +>>> converted = tf.image.rgb_to_grayscale(original) +>>> print(converted.numpy()) +[[[1.81...]]] +``` + + + + + + + + + + + + + +
+`images` + +The RGB tensor to convert. The last dimension must have size 3 and +should contain RGB values. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The converted grayscale image(s). +
+ diff --git a/site/en/api_docs/python/tf/image/rgb_to_hsv.md b/site/en/api_docs/python/tf/image/rgb_to_hsv.md new file mode 100644 index 00000000000..41ecaad53bf --- /dev/null +++ b/site/en/api_docs/python/tf/image/rgb_to_hsv.md @@ -0,0 +1,100 @@ +description: Converts one or more images from RGB to HSV. + +
+ + +
+ +# tf.image.rgb_to_hsv + + + + + + + + + +Converts one or more images from RGB to HSV. + + + + + + + + + +Outputs a tensor of the same shape as the `images` tensor, containing the HSV +value of the pixels. The output is only well defined if the value in `images` +are in `[0,1]`. + +`output[..., 0]` contains hue, `output[..., 1]` contains saturation, and +`output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0 +corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue. + +#### Usage Example: + + + +``` +>>> blue_image = tf.stack([ +... tf.zeros([5,5]), +... tf.zeros([5,5]), +... tf.ones([5,5])], +... axis=-1) +>>> blue_hsv_image = tf.image.rgb_to_hsv(blue_image) +>>> blue_hsv_image[0,0].numpy() +array([0.6666667, 1. , 1. ], dtype=float32) +``` + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +1-D or higher rank. RGB data to convert. Last dimension must be size 3. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
+ diff --git a/site/en/api_docs/python/tf/image/rgb_to_yiq.md b/site/en/api_docs/python/tf/image/rgb_to_yiq.md new file mode 100644 index 00000000000..f9dba5e38f3 --- /dev/null +++ b/site/en/api_docs/python/tf/image/rgb_to_yiq.md @@ -0,0 +1,93 @@ +description: Converts one or more images from RGB to YIQ. + +
+ + +
+ +# tf.image.rgb_to_yiq + + + + + + + + + +Converts one or more images from RGB to YIQ. + + + + + + + + + +Outputs a tensor of the same shape as the `images` tensor, containing the YIQ +value of the pixels. +The output is only well defined if the value in images are in [0,1]. + +#### Usage Example: + + + +``` +>>> x = tf.constant([[[1.0, 2.0, 3.0]]]) +>>> tf.image.rgb_to_yiq(x) + +``` + + + + + + + + + + +
+`images` + +2-D or higher rank. Image data to convert. Last dimension must be +size 3. +
+ + + + + + + + + + + + +
+`images` + +tensor with the same shape as `images`. +
+ diff --git a/site/en/api_docs/python/tf/image/rgb_to_yuv.md b/site/en/api_docs/python/tf/image/rgb_to_yuv.md new file mode 100644 index 00000000000..9b154552d70 --- /dev/null +++ b/site/en/api_docs/python/tf/image/rgb_to_yuv.md @@ -0,0 +1,99 @@ +description: Converts one or more images from RGB to YUV. + +
+ + +
+ +# tf.image.rgb_to_yuv + + + + + + + + + +Converts one or more images from RGB to YUV. + + + + + + + + + +Outputs a tensor of the same shape as the `images` tensor, containing the YUV +value of the pixels. +The output is only well defined if the value in images are in [0,1]. + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.rgb_to_yuv(x) + +``` + + + + + + + + + + +
+`images` + +2-D or higher rank. Image data to convert. Last dimension must be +size 3. +
+ + + + + + + + + + + + +
+`images` + +tensor with the same shape as `images`. +
+ diff --git a/site/en/api_docs/python/tf/image/rot90.md b/site/en/api_docs/python/tf/image/rot90.md new file mode 100644 index 00000000000..86f47437151 --- /dev/null +++ b/site/en/api_docs/python/tf/image/rot90.md @@ -0,0 +1,126 @@ +description: Rotate image(s) counter-clockwise by 90 degrees. + +
+ + +
+ +# tf.image.rot90 + + + + + + + + + +Rotate image(s) counter-clockwise by 90 degrees. + + + + + + + + + + +#### For example: + + + +``` +>>> a=tf.constant([[[1],[2]], +... [[3],[4]]]) +>>> # rotating `a` counter clockwise by 90 degrees +>>> a_rot=tf.image.rot90(a) +>>> print(a_rot[...,0].numpy()) +[[2 4] + [1 3]] +>>> # rotating `a` counter clockwise by 270 degrees +>>> a_rot=tf.image.rot90(a, k=3) +>>> print(a_rot[...,0].numpy()) +[[3 1] + [4 2]] +``` + + + + + + + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`k` + +A scalar integer. The number of times the image is rotated by 90 degrees. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A rotated tensor of the same type and shape as `image`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+if the shape of `image` is not supported.
+
+ diff --git a/site/en/api_docs/python/tf/image/sample_distorted_bounding_box.md b/site/en/api_docs/python/tf/image/sample_distorted_bounding_box.md new file mode 100644 index 00000000000..76af1c0d7bd --- /dev/null +++ b/site/en/api_docs/python/tf/image/sample_distorted_bounding_box.md @@ -0,0 +1,204 @@ +description: Generate a single randomly distorted bounding box for an image. + +
+ + +
+ +# tf.image.sample_distorted_bounding_box + + + + + + + + + +Generate a single randomly distorted bounding box for an image. + + + + + + + +Bounding box annotations are often supplied in addition to ground-truth labels +in image recognition or object localization tasks. A common technique for +training such a system is to randomly distort an image while preserving +its content, i.e. *data augmentation*. This Op outputs a randomly distorted +localization of an object, i.e. bounding box, given an `image_size`, +`bounding_boxes` and a series of constraints. + +The output of this Op is a single bounding box that may be used to crop the +original image. The output is returned as 3 tensors: `begin`, `size` and +`bboxes`. The first 2 tensors can be fed directly into tf.slice to crop the +image. The latter may be supplied to tf.image.draw_bounding_boxes to +visualize what the bounding box looks like. + +Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. +The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width +and the height of the underlying image. + +For example, + +```python + # Generate a single distorted bounding box. + begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box( + tf.shape(image), + bounding_boxes=bounding_boxes, + min_object_covered=0.1) + + # Draw the bounding box in an image summary. + image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0), + bbox_for_draw) + tf.compat.v1.summary.image('images_with_box', image_with_box) + + # Employ the bounding box to distort the image. + distorted_image = tf.slice(image, begin, size) +``` + +Note that if no bounding box information is available, setting +`use_image_if_no_bounding_boxes = true` will assume there is a single implicit +bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is +false and no bounding boxes are supplied, an error is raised. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`image_size` + +A `Tensor`. Must be one of the following types: `uint8`, `int8`, +`int16`, `int32`, `int64`. 1-D, containing `[height, width, channels]`. +
+`bounding_boxes` + +A `Tensor` of type `float32`. 3-D with shape `[batch, N, 4]` +describing the N bounding boxes associated with the image. +
+`seed` + +An optional `int`. Defaults to `0`. If `seed` is set to non-zero, the +random number generator is seeded by the given `seed`. Otherwise, it is +seeded by a random seed. +
+`min_object_covered` + +A Tensor of type `float32`. Defaults to `0.1`. The +cropped area of the image must contain at least this fraction of any +bounding box supplied. The value of this parameter should be non-negative. +In the case of 0, the cropped area does not need to overlap any of the +bounding boxes supplied. +
+`aspect_ratio_range` + +An optional list of `floats`. Defaults to `[0.75, +1.33]`. The cropped area of the image must have an aspect `ratio = width / +height` within this range. +
+`area_range` + +An optional list of `floats`. Defaults to `[0.05, 1]`. The +cropped area of the image must contain a fraction of the supplied image +within this range. +
+`max_attempts` + +An optional `int`. Defaults to `100`. Number of attempts at +generating a cropped region of the image of the specified constraints. +After `max_attempts` failures, return the entire image. +
+`use_image_if_no_bounding_boxes` + +An optional `bool`. Defaults to `False`. +Controls behavior if no bounding boxes supplied. If true, assume an +implicit bounding box covering the whole input. If false, raise an error. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (begin, size, bboxes). +
+`begin` + +A `Tensor`. Has the same type as `image_size`. 1-D, containing +`[offset_height, offset_width, 0]`. Provide as input to +tf.slice. +
+`size` + +A `Tensor`. Has the same type as `image_size`. 1-D, containing +`[target_height, target_width, -1]`. Provide as input to +tf.slice. +
+`bboxes` + +A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing +the distorted bounding box. +Provide as input to tf.image.draw_bounding_boxes. +
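+
+#### Usage Example:
+
+A self-contained variant of the snippet above, with a randomly generated
+image and a single hand-written bounding box, both purely illustrative:
+
+```python
+import tensorflow as tf
+
+image = tf.random.uniform([240, 320, 3])                # hypothetical image
+bounding_boxes = tf.constant([[[0.1, 0.2, 0.5, 0.9]]])  # [batch, N, 4]
+
+begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
+    tf.shape(image), bounding_boxes=bounding_boxes, min_object_covered=0.1)
+distorted_image = tf.slice(image, begin, size)
+```
+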
+ diff --git a/site/en/api_docs/python/tf/image/sobel_edges.md b/site/en/api_docs/python/tf/image/sobel_edges.md new file mode 100644 index 00000000000..a5db5ad6ed2 --- /dev/null +++ b/site/en/api_docs/python/tf/image/sobel_edges.md @@ -0,0 +1,78 @@ +description: Returns a tensor holding Sobel edge maps. + +
+ + +
+ +# tf.image.sobel_edges + + + + + + + + + +Returns a tensor holding Sobel edge maps. + + + + + + + + + + + + + + + + + + + +
+`image` + +Image tensor with shape [batch_size, h, w, d] and type float32 or +float64. The image(s) must be 2x2 or larger. +
+ + + + + + + + + + + +
+Tensor holding edge maps for each channel. Returns a tensor with shape +[batch_size, h, w, d, 2] where the last two dimensions hold [[dy[0], dx[0]], +[dy[1], dx[1]], ..., [dy[d-1], dx[d-1]]] calculated using the Sobel filter. +
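+
+#### Usage Example:
+
+A minimal sketch; the batch of random images below is only for
+illustration:
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform([2, 28, 28, 3])  # hypothetical float32 batch
+edges = tf.image.sobel_edges(images)        # shape [2, 28, 28, 3, 2]
+dy = edges[..., 0]  # vertical (dy) edge maps
+dx = edges[..., 1]  # horizontal (dx) edge maps
+```
+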
+ diff --git a/site/en/api_docs/python/tf/image/ssim.md b/site/en/api_docs/python/tf/image/ssim.md new file mode 100644 index 00000000000..db62814cde1 --- /dev/null +++ b/site/en/api_docs/python/tf/image/ssim.md @@ -0,0 +1,157 @@ +description: Computes SSIM index between img1 and img2. + +
+ + +
+ +# tf.image.ssim + + + + + + + + + +Computes SSIM index between img1 and img2. + + + + + + + + + +This function is based on the standard SSIM implementation from: +Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image +quality assessment: from error visibility to structural similarity. IEEE +transactions on image processing. + +Note: The true SSIM is only defined on grayscale. This function does not +perform any colorspace transform. (If the input is already YUV, then it will +compute YUV SSIM average.) + +#### Details: + +- 11x11 Gaussian filter of width 1.5 is used. +- k1 = 0.01, k2 = 0.03 as in the original paper. + + +The image sizes must be at least 11x11 because of the filter size. + +#### Example: + + + +```python + # Read images from file. + im1 = tf.decode_png('path/to/im1.png') + im2 = tf.decode_png('path/to/im2.png') + # Compute SSIM over tf.uint8 Tensors. + ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11, + filter_sigma=1.5, k1=0.01, k2=0.03) + + # Compute SSIM over tf.float32 Tensors. + im1 = tf.image.convert_image_dtype(im1, tf.float32) + im2 = tf.image.convert_image_dtype(im2, tf.float32) + ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11, + filter_sigma=1.5, k1=0.01, k2=0.03) + # ssim1 and ssim2 both have type tf.float32 and are almost equal. +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`img1` + +First image batch. +
+`img2` + +Second image batch. +
+`max_val`
+
+The dynamic range of the images (i.e., the difference between the
+maximum and minimum allowed values).
+
+`filter_size` + +Default value 11 (size of gaussian filter). +
+`filter_sigma` + +Default value 1.5 (width of gaussian filter). +
+`k1` + +Default value 0.01 +
+`k2`
+
+Default value 0.03 (SSIM is less sensitive to K2 for lower values, so it
+is preferable to keep K2 in the range 0 < K2 < 0.4).
+
+ + + + + + + + + + + +
+A tensor containing an SSIM value for each image in batch. Returned SSIM +values are in range (-1, 1], when pixel values are non-negative. Returns +a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]). +
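+
+#### Usage Example:
+
+A self-contained variant of the example above using in-memory tensors;
+the shapes and noise level are arbitrary illustrative choices:
+
+```python
+import tensorflow as tf
+
+im1 = tf.random.uniform([4, 64, 64, 3])  # hypothetical batch in [0, 1]
+im2 = tf.clip_by_value(im1 + tf.random.normal([4, 64, 64, 3], stddev=0.05),
+                       0.0, 1.0)
+ssim = tf.image.ssim(im1, im2, max_val=1.0)  # one SSIM value per image pair
+```
+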
+ diff --git a/site/en/api_docs/python/tf/image/ssim_multiscale.md b/site/en/api_docs/python/tf/image/ssim_multiscale.md new file mode 100644 index 00000000000..c6311ea1bf0 --- /dev/null +++ b/site/en/api_docs/python/tf/image/ssim_multiscale.md @@ -0,0 +1,143 @@ +description: Computes the MS-SSIM between img1 and img2. + +
+ + +
+ +# tf.image.ssim_multiscale + + + + + + + + + +Computes the MS-SSIM between img1 and img2. + + + + + + + + + +This function assumes that `img1` and `img2` are image batches, i.e. the last +three dimensions are [height, width, channels]. + +Note: The true SSIM is only defined on grayscale. This function does not +perform any colorspace transform. (If the input is already YUV, then it will +compute YUV SSIM average.) + +Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale +structural similarity for image quality assessment." Signals, Systems and +Computers, 2004. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`img1` + +First image batch. +
+`img2` + +Second image batch. Must have the same rank as img1. +
+`max_val`
+
+The dynamic range of the images (i.e., the difference between the
+maximum and minimum allowed values).
+
+`power_factors` + +Iterable of weights for each of the scales. The number of +scales used is the length of the list. Index 0 is the unscaled +resolution's weight and each increasing scale corresponds to the image +being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, +0.1333), which are the values obtained in the original paper. +
+`filter_size` + +Default value 11 (size of gaussian filter). +
+`filter_sigma` + +Default value 1.5 (width of gaussian filter). +
+`k2`
+
+Default value 0.03 (SSIM is less sensitive to K2 for lower values, so it
+is preferable to keep K2 in the range 0 < K2 < 0.4).
+
+`k2` + +Default value 0.03 (SSIM is less sensitivity to K2 for lower values, so +it would be better if we took the values in the range of 0 < K2 < 0.4). +
+ + + + + + + + + + + +
+A tensor containing an MS-SSIM value for each image in batch. The values +are in range [0, 1]. Returns a tensor with shape: +broadcast(img1.shape[:-3], img2.shape[:-3]). +
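+
+#### Usage Example:
+
+A minimal sketch; the image sizes are chosen to stay comfortably larger
+than the filter size after the repeated 2x downsampling implied by the
+default `power_factors`, and the tensors themselves are random and
+purely illustrative:
+
+```python
+import tensorflow as tf
+
+im1 = tf.random.uniform([2, 256, 256, 3])  # hypothetical batch in [0, 1]
+im2 = tf.clip_by_value(im1 + tf.random.normal([2, 256, 256, 3], stddev=0.05),
+                       0.0, 1.0)
+msssim = tf.image.ssim_multiscale(im1, im2, max_val=1.0)  # shape [2]
+```
+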
+ diff --git a/site/en/api_docs/python/tf/image/total_variation.md b/site/en/api_docs/python/tf/image/total_variation.md new file mode 100644 index 00000000000..8453a78cb10 --- /dev/null +++ b/site/en/api_docs/python/tf/image/total_variation.md @@ -0,0 +1,117 @@ +description: Calculate and return the total variation for one or more images. + +
+ + +
+ +# tf.image.total_variation + + + + + + + + + +Calculate and return the total variation for one or more images. + + + + + + + + + +The total variation is the sum of the absolute differences for neighboring +pixel-values in the input images. This measures how much noise is in the +images. + +This can be used as a loss-function during optimization so as to suppress +noise in images. If you have a batch of images, then you should calculate +the scalar loss-value as the sum: +`loss = tf.reduce_sum(tf.image.total_variation(images))` + +This implements the anisotropic 2-D version of the formula described here: + +https://en.wikipedia.org/wiki/Total_variation_denoising + + + + + + + + + + + + + +
+`images` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+`ValueError`
+
+if the shape of `images` is not 3-D or 4-D.
+
+ + + + + + + + + + + +
+The total variation of `images`. + +If `images` was 4-D, return a 1-D float Tensor of shape `[batch]` with the +total variation for each image in the batch. +If `images` was 3-D, return a scalar float with the total variation for +that image. +
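+
+#### Usage Example:
+
+A short sketch of the regularization pattern described above; the random
+batch is purely illustrative:
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform([8, 32, 32, 3])  # hypothetical batch
+tv = tf.image.total_variation(images)       # shape [8], one value per image
+loss = tf.reduce_sum(tv)                    # scalar term to add to a loss
+```
+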
+ diff --git a/site/en/api_docs/python/tf/image/transpose.md b/site/en/api_docs/python/tf/image/transpose.md new file mode 100644 index 00000000000..ac2af8fd740 --- /dev/null +++ b/site/en/api_docs/python/tf/image/transpose.md @@ -0,0 +1,140 @@ +description: Transpose image(s) by swapping the height and width dimension. + +
+ + +
+ +# tf.image.transpose + + + + + + + + + +Transpose image(s) by swapping the height and width dimension. + + + + + + + + + + +#### Usage Example: + + + +``` +>>> x = [[[1.0, 2.0, 3.0], +... [4.0, 5.0, 6.0]], +... [[7.0, 8.0, 9.0], +... [10.0, 11.0, 12.0]]] +>>> tf.image.transpose(x) + +``` + + + + + + + + + + + + + +
+`image` + +4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor +of shape `[height, width, channels]`. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+If `image` was 4-D, a 4-D float Tensor of shape +`[batch, width, height, channels]` +If `image` was 3-D, a 3-D float Tensor of shape +`[width, height, channels]` +
+ + + + + + + + + + + + +
+`ValueError`
+
+if the shape of `image` is not supported.
+
+ + + +#### Usage Example: + + + +``` +>>> image = [[[1, 2], [3, 4]], +... [[5, 6], [7, 8]], +... [[9, 10], [11, 12]]] +>>> image = tf.constant(image) +>>> tf.image.transpose(image) + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/image/yiq_to_rgb.md b/site/en/api_docs/python/tf/image/yiq_to_rgb.md new file mode 100644 index 00000000000..581c893a6ef --- /dev/null +++ b/site/en/api_docs/python/tf/image/yiq_to_rgb.md @@ -0,0 +1,83 @@ +description: Converts one or more images from YIQ to RGB. + +
+ + +
+ +# tf.image.yiq_to_rgb + + + + + + + + + +Converts one or more images from YIQ to RGB. + + + + + + + + + +Outputs a tensor of the same shape as the `images` tensor, containing the RGB +value of the pixels. +The output is only well defined if the Y value in images are in [0,1], +I value are in [-0.5957,0.5957] and Q value are in [-0.5226,0.5226]. + + + + + + + + + + +
+`images` + +2-D or higher rank. Image data to convert. Last dimension must be +size 3. +
+ + + + + + + + + + + + +
+`images` + +tensor with the same shape as `images`. +
+ diff --git a/site/en/api_docs/python/tf/image/yuv_to_rgb.md b/site/en/api_docs/python/tf/image/yuv_to_rgb.md new file mode 100644 index 00000000000..f7369c9b8c7 --- /dev/null +++ b/site/en/api_docs/python/tf/image/yuv_to_rgb.md @@ -0,0 +1,83 @@ +description: Converts one or more images from YUV to RGB. + +
+ + +
+ +# tf.image.yuv_to_rgb + + + + + + + + + +Converts one or more images from YUV to RGB. + + + + + + + + + +Outputs a tensor of the same shape as the `images` tensor, containing the RGB +value of the pixels. +The output is only well defined if the Y value in images are in [0,1], +U and V value are in [-0.5,0.5]. + + + + + + + + + + +
+`images` + +2-D or higher rank. Image data to convert. Last dimension must be +size 3. +
+ + + + + + + + + + + + +
+`images` + +tensor with the same shape as `images`. +
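+
+#### Usage Example:
+
+A minimal round-trip sketch; the random RGB batch is purely
+illustrative:
+
+```python
+import tensorflow as tf
+
+rgb = tf.random.uniform([2, 4, 4, 3])  # hypothetical RGB images in [0, 1]
+yuv = tf.image.rgb_to_yuv(rgb)
+rgb_back = tf.image.yuv_to_rgb(yuv)    # recovers `rgb` up to float error
+```
+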
+ diff --git a/site/en/api_docs/python/tf/init_scope.md b/site/en/api_docs/python/tf/init_scope.md new file mode 100644 index 00000000000..6bb2225e62b --- /dev/null +++ b/site/en/api_docs/python/tf/init_scope.md @@ -0,0 +1,100 @@ +description: A context manager that lifts ops out of control-flow scopes and function-building graphs. + +
+ + +
+ +# tf.init_scope + + + + + + + + + +A context manager that lifts ops out of control-flow scopes and function-building graphs. + + + + + + + + + +There is often a need to lift variable initialization ops out of control-flow +scopes, function-building graphs, and gradient tapes. Entering an +`init_scope` is a mechanism for satisfying these desiderata. In particular, +entering an `init_scope` has three effects: + + (1) All control dependencies are cleared the moment the scope is entered; + this is equivalent to entering the context manager returned from + `control_dependencies(None)`, which has the side-effect of exiting + control-flow scopes like tf.cond and tf.while_loop. + + (2) All operations that are created while the scope is active are lifted + into the lowest context on the `context_stack` that is not building a + graph function. Here, a context is defined as either a graph or an eager + context. Every context switch, i.e., every installation of a graph as + the default graph and every switch into eager mode, is logged in a + thread-local stack called `context_switches`; the log entry for a + context switch is popped from the stack when the context is exited. + Entering an `init_scope` is equivalent to crawling up + `context_switches`, finding the first context that is not building a + graph function, and entering it. A caveat is that if graph mode is + enabled but the default graph stack is empty, then entering an + `init_scope` will simply install a fresh graph as the default one. + + (3) The gradient tape is paused while the scope is active. + +When eager execution is enabled, code inside an init_scope block runs with +eager execution enabled even when tracing a tf.function. For example: + +```python +tf.compat.v1.enable_eager_execution() + +@tf.function +def func(): + # A function constructs TensorFlow graphs, + # it does not execute eagerly. + assert not tf.executing_eagerly() + with tf.init_scope(): + # Initialization runs with eager execution enabled + assert tf.executing_eagerly() +``` + + + + + + + + + + +
+`RuntimeError` + +if graph state is incompatible with this initialization. +
+ diff --git a/site/en/api_docs/python/tf/io.md b/site/en/api_docs/python/tf/io.md new file mode 100644 index 00000000000..4fe2cd8a871 --- /dev/null +++ b/site/en/api_docs/python/tf/io.md @@ -0,0 +1,105 @@ +description: Public API for tf.io namespace. + +
+ + +
+ +# Module: tf.io + + + + + + + + + +Public API for tf.io namespace. + + + +## Modules + +[`gfile`](../tf/io/gfile.md) module: Public API for tf.io.gfile namespace. + +## Classes + +[`class FixedLenFeature`](../tf/io/FixedLenFeature.md): Configuration for parsing a fixed-length input feature. + +[`class FixedLenSequenceFeature`](../tf/io/FixedLenSequenceFeature.md): Configuration for parsing a variable-length input feature into a `Tensor`. + +[`class RaggedFeature`](../tf/io/RaggedFeature.md): Configuration for passing a RaggedTensor input feature. + +[`class SparseFeature`](../tf/io/SparseFeature.md): Configuration for parsing a sparse input feature from an `Example`. + +[`class TFRecordOptions`](../tf/io/TFRecordOptions.md): Options used for manipulating TFRecord files. + +[`class TFRecordWriter`](../tf/io/TFRecordWriter.md): A class to write records to a TFRecords file. + +[`class VarLenFeature`](../tf/io/VarLenFeature.md): Configuration for parsing a variable-length input feature. + +## Functions + +[`decode_and_crop_jpeg(...)`](../tf/io/decode_and_crop_jpeg.md): Decode and Crop a JPEG-encoded image to a uint8 tensor. + +[`decode_base64(...)`](../tf/io/decode_base64.md): Decode web-safe base64-encoded strings. + +[`decode_bmp(...)`](../tf/io/decode_bmp.md): Decode the first frame of a BMP-encoded image to a uint8 tensor. + +[`decode_compressed(...)`](../tf/io/decode_compressed.md): Decompress strings. + +[`decode_csv(...)`](../tf/io/decode_csv.md): Convert CSV records to tensors. Each column maps to one tensor. + +[`decode_gif(...)`](../tf/io/decode_gif.md): Decode the frame(s) of a GIF-encoded image to a uint8 tensor. + +[`decode_image(...)`](../tf/io/decode_image.md): Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`. + +[`decode_jpeg(...)`](../tf/io/decode_jpeg.md): Decode a JPEG-encoded image to a uint8 tensor. + +[`decode_json_example(...)`](../tf/io/decode_json_example.md): Convert JSON-encoded Example records to binary protocol buffer strings. + +[`decode_png(...)`](../tf/io/decode_png.md): Decode a PNG-encoded image to a uint8 or uint16 tensor. + +[`decode_proto(...)`](../tf/io/decode_proto.md): The op extracts fields from a serialized protocol buffers message into tensors. + +[`decode_raw(...)`](../tf/io/decode_raw.md): Convert raw byte strings into tensors. + +[`deserialize_many_sparse(...)`](../tf/io/deserialize_many_sparse.md): Deserialize and concatenate `SparseTensors` from a serialized minibatch. + +[`encode_base64(...)`](../tf/io/encode_base64.md): Encode strings into web-safe base64 format. + +[`encode_jpeg(...)`](../tf/io/encode_jpeg.md): JPEG-encode an image. + +[`encode_proto(...)`](../tf/io/encode_proto.md): The op serializes protobuf messages provided in the input tensors. + +[`extract_jpeg_shape(...)`](../tf/io/extract_jpeg_shape.md): Extract the shape information of a JPEG-encoded image. + +[`is_jpeg(...)`](../tf/io/is_jpeg.md): Convenience function to check if the 'contents' encodes a JPEG image. + +[`match_filenames_once(...)`](../tf/io/match_filenames_once.md): Save the list of files matching pattern, so it is only computed once. + +[`matching_files(...)`](../tf/io/matching_files.md): Returns the set of files matching one or more glob patterns. + +[`parse_example(...)`](../tf/io/parse_example.md): Parses `Example` protos into a `dict` of tensors. + +[`parse_sequence_example(...)`](../tf/io/parse_sequence_example.md): Parses a batch of `SequenceExample` protos. 
+ +[`parse_single_example(...)`](../tf/io/parse_single_example.md): Parses a single `Example` proto. + +[`parse_single_sequence_example(...)`](../tf/io/parse_single_sequence_example.md): Parses a single `SequenceExample` proto. + +[`parse_tensor(...)`](../tf/io/parse_tensor.md): Transforms a serialized tensorflow.TensorProto proto into a Tensor. + +[`read_file(...)`](../tf/io/read_file.md): Reads and outputs the entire contents of the input filename. + +[`serialize_many_sparse(...)`](../tf/io/serialize_many_sparse.md): Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`. + +[`serialize_sparse(...)`](../tf/io/serialize_sparse.md): Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object. + +[`serialize_tensor(...)`](../tf/io/serialize_tensor.md): Transforms a Tensor into a serialized TensorProto proto. + +[`write_file(...)`](../tf/io/write_file.md): Writes contents to the file at input filename. Creates file and recursively + +[`write_graph(...)`](../tf/io/write_graph.md): Writes a graph proto to a file. + diff --git a/site/en/api_docs/python/tf/io/FixedLenFeature.md b/site/en/api_docs/python/tf/io/FixedLenFeature.md new file mode 100644 index 00000000000..10b977f774d --- /dev/null +++ b/site/en/api_docs/python/tf/io/FixedLenFeature.md @@ -0,0 +1,99 @@ +description: Configuration for parsing a fixed-length input feature. + +
+ + + + + + +
+ +# tf.io.FixedLenFeature + + + + + + + + + +Configuration for parsing a fixed-length input feature. + + + + + + + + + +To treat sparse input as dense, provide a `default_value`; otherwise, +the parse functions will fail on any examples missing this feature. + +#### Fields: + + +* `shape`: Shape of input data. +* `dtype`: Data type of input. +* `default_value`: Value to be used if an example is missing this feature. It + must be compatible with `dtype` and of the specified `shape`. + + + + + + + + + + + + + + + + + + + +
+`shape` + + +
+`dtype` + + +
+`default_value` + + +
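+
+#### Usage Example:
+
+A minimal sketch of the `default_value` behavior described above; the
+feature names and values are made up for illustration:
+
+```python
+import tensorflow as tf
+
+# A serialized Example that contains an "age" feature but no "weight".
+example = tf.train.Example(features=tf.train.Features(feature={
+    'age': tf.train.Feature(int64_list=tf.train.Int64List(value=[42])),
+})).SerializeToString()
+
+features = {
+    'age': tf.io.FixedLenFeature([], tf.int64),
+    # Without a default_value, parsing would fail because "weight" is
+    # missing from the Example.
+    'weight': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
+}
+parsed = tf.io.parse_single_example(example, features)
+# parsed['age'] == 42, parsed['weight'] == 0.0
+```
+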
+ + + +## Class Variables + +* `default_value` +* `dtype` +* `shape` diff --git a/site/en/api_docs/python/tf/io/FixedLenSequenceFeature.md b/site/en/api_docs/python/tf/io/FixedLenSequenceFeature.md new file mode 100644 index 00000000000..adcaccd447b --- /dev/null +++ b/site/en/api_docs/python/tf/io/FixedLenSequenceFeature.md @@ -0,0 +1,121 @@ +description: Configuration for parsing a variable-length input feature into a Tensor. + +
+ + + + + + + +
+ +# tf.io.FixedLenSequenceFeature + + + + + + + + + +Configuration for parsing a variable-length input feature into a `Tensor`. + + + + + + + + + +The resulting `Tensor` of parsing a single `SequenceExample` or `Example` has +a static `shape` of `[None] + shape` and the specified `dtype`. +The resulting `Tensor` of parsing a `batch_size` many `Example`s has +a static `shape` of `[batch_size, None] + shape` and the specified `dtype`. +The entries in the `batch` from different `Examples` will be padded with +`default_value` to the maximum length present in the `batch`. + +To treat a sparse input as dense, provide `allow_missing=True`; otherwise, +the parse functions will fail on any examples missing this feature. + +#### Fields: + + +* `shape`: Shape of input data for dimension 2 and higher. First dimension is + of variable length `None`. +* `dtype`: Data type of input. +* `allow_missing`: Whether to allow this feature to be missing from a feature + list item. Is available only for parsing `SequenceExample` not for + parsing `Examples`. +* `default_value`: Scalar value to be used to pad multiple `Example`s to their + maximum length. Irrelevant for parsing a single `Example` or + `SequenceExample`. Defaults to "" for dtype string and 0 otherwise + (optional). + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + + +
+`dtype` + + +
+`allow_missing` + + +
+`default_value` + + +
+ + + +## Class Variables + +* `allow_missing` +* `default_value` +* `dtype` +* `shape` diff --git a/site/en/api_docs/python/tf/io/RaggedFeature.md b/site/en/api_docs/python/tf/io/RaggedFeature.md new file mode 100644 index 00000000000..bb66836a4e9 --- /dev/null +++ b/site/en/api_docs/python/tf/io/RaggedFeature.md @@ -0,0 +1,251 @@ +description: Configuration for passing a RaggedTensor input feature. + +
+ + + + + + + + + + + + + + +
+ +# tf.io.RaggedFeature + + + + + + + + + +Configuration for passing a RaggedTensor input feature. + + + + + + + + + +`value_key` specifies the feature key for a variable-length list of values; +and `partitions` specifies zero or more feature keys for partitioning those +values into higher dimensions. Each element of `partitions` must be one of +the following: + + * `tf.io.RaggedFeature.RowSplits(key: string)` + * `tf.io.RaggedFeature.RowLengths(key: string)` + * `tf.io.RaggedFeature.RowStarts(key: string)` + * `tf.io.RaggedFeature.RowLimits(key: string)` + * `tf.io.RaggedFeature.ValueRowIds(key: string)` + * `tf.io.RaggedFeature.UniformRowLength(length: int)`. + +Where `key` is a feature key whose values are used to partition the values. +Partitions are listed from outermost to innermost. + +* If `len(partitions) == 0` (the default), then: + + * A feature from a single `tf.Example` is parsed into a 1D tf.Tensor. + * A feature from a batch of `tf.Example`s is parsed into a 2D + tf.RaggedTensor, where the outer dimension is the batch dimension, and + the inner (ragged) dimension is the feature length in each example. + +* If `len(partitions) == 1`, then: + + * A feature from a single `tf.Example` is parsed into a 2D + tf.RaggedTensor, where the values taken from the `value_key` are + separated into rows using the partition key. + * A feature from a batch of `tf.Example`s is parsed into a 3D + tf.RaggedTensor, where the outer dimension is the batch dimension, + the two inner dimensions are formed by separating the `value_key` values + from each example into rows using that example's partition key. + +* If `len(partitions) > 1`, then: + + * A feature from a single `tf.Example` is parsed into a tf.RaggedTensor + whose rank is `len(partitions)+1`, and whose ragged_rank is + `len(partitions)`. + + * A feature from a batch of `tf.Example`s is parsed into a tf.RaggedTensor + whose rank is `len(partitions)+2` and whose ragged_rank is + `len(partitions)+1`, where the outer dimension is the batch dimension. + +There is one exception: if the final (i.e., innermost) element(s) of +`partitions` are `UniformRowLength`s, then the values are simply reshaped (as +a higher-dimensional tf.Tensor), rather than being wrapped in a +tf.RaggedTensor. + +#### Examples + +``` +>>> import google.protobuf.text_format as pbtext +>>> example_batch = [ +... pbtext.Merge(r''' +... features { +... feature {key: "v" value {int64_list {value: [3, 1, 4, 1, 5, 9]}}} +... feature {key: "s1" value {int64_list {value: [0, 2, 3, 3, 6]}}} +... feature {key: "s2" value {int64_list {value: [0, 2, 3, 4]}}} +... }''', tf.train.Example()).SerializeToString(), +... pbtext.Merge(r''' +... features { +... feature {key: "v" value {int64_list {value: [2, 7, 1, 8, 2, 8, 1]}}} +... feature {key: "s1" value {int64_list {value: [0, 3, 4, 5, 7]}}} +... feature {key: "s2" value {int64_list {value: [0, 1, 1, 4]}}} +... }''', tf.train.Example()).SerializeToString()] +``` + +``` +>>> features = { +... # Zero partitions: returns 1D tf.Tensor for each Example. +... 'f1': tf.io.RaggedFeature(value_key="v", dtype=tf.int64), +... # One partition: returns 2D tf.RaggedTensor for each Example. +... 'f2': tf.io.RaggedFeature(value_key="v", dtype=tf.int64, partitions=[ +... tf.io.RaggedFeature.RowSplits("s1")]), +... # Two partitions: returns 3D tf.RaggedTensor for each Example. +... 'f3': tf.io.RaggedFeature(value_key="v", dtype=tf.int64, partitions=[ +... tf.io.RaggedFeature.RowSplits("s2"), +... tf.io.RaggedFeature.RowSplits("s1")]) +... 
} +``` + +``` +>>> feature_dict = tf.io.parse_single_example(example_batch[0], features) +>>> for (name, val) in sorted(feature_dict.items()): +... print('%s: %s' % (name, val)) +f1: tf.Tensor([3 1 4 1 5 9], shape=(6,), dtype=int64) +f2: +f3: +``` + +``` +>>> feature_dict = tf.io.parse_example(example_batch, features) +>>> for (name, val) in sorted(feature_dict.items()): +... print('%s: %s' % (name, val)) +f1: +f2: +f3: +``` + +#### Fields: + + +* `dtype`: Data type of the `RaggedTensor`. Must be one of: + tf.dtypes.int64, tf.dtypes.float32, tf.dtypes.string. +* `value_key`: (Optional.) Key for a `Feature` in the input `Example`, whose + parsed `Tensor` will be the resulting RaggedTensor.flat_values. If + not specified, then it defaults to the key for this `RaggedFeature`. +* `partitions`: (Optional.) A list of objects specifying the row-partitioning + tensors (from outermost to innermost). Each entry in this list must be + one of: + * `tf.io.RaggedFeature.RowSplits(key: string)` + * `tf.io.RaggedFeature.RowLengths(key: string)` + * `tf.io.RaggedFeature.RowStarts(key: string)` + * `tf.io.RaggedFeature.RowLimits(key: string)` + * `tf.io.RaggedFeature.ValueRowIds(key: string)` + * `tf.io.RaggedFeature.UniformRowLength(length: int)`. + Where `key` is a key for a `Feature` in the input `Example`, whose parsed + `Tensor` will be the resulting row-partitioning tensor. +* `row_splits_dtype`: (Optional.) Data type for the row-partitioning tensor(s). + One of `int32` or `int64`. Defaults to `int32`. +* `validate`: (Optional.) Boolean indicating whether or not to validate that + the input values form a valid RaggedTensor. Defaults to `False`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + + +
+`value_key` + + +
+`partitions` + + +
+`row_splits_dtype` + + +
+`validate` + + +
+ + + +## Child Classes +[`class RowLengths`](../../tf/io/RaggedFeature/RowLengths.md) + +[`class RowLimits`](../../tf/io/RaggedFeature/RowLimits.md) + +[`class RowSplits`](../../tf/io/RaggedFeature/RowSplits.md) + +[`class RowStarts`](../../tf/io/RaggedFeature/RowStarts.md) + +[`class UniformRowLength`](../../tf/io/RaggedFeature/UniformRowLength.md) + +[`class ValueRowIds`](../../tf/io/RaggedFeature/ValueRowIds.md) + +## Class Variables + +* `dtype` +* `partitions` +* `row_splits_dtype` +* `validate` +* `value_key` diff --git a/site/en/api_docs/python/tf/io/RaggedFeature/RowLengths.md b/site/en/api_docs/python/tf/io/RaggedFeature/RowLengths.md new file mode 100644 index 00000000000..858493dc1b2 --- /dev/null +++ b/site/en/api_docs/python/tf/io/RaggedFeature/RowLengths.md @@ -0,0 +1,70 @@ +description: RowLengths(key,) + +
+ + + + +
+ +# tf.io.RaggedFeature.RowLengths + + + + + + + + + +RowLengths(key,) + + + + + + + + + + + + + + + + + + + + + +
+`key` + + +
+ + + +## Class Variables + +* `key` diff --git a/site/en/api_docs/python/tf/io/RaggedFeature/RowLimits.md b/site/en/api_docs/python/tf/io/RaggedFeature/RowLimits.md new file mode 100644 index 00000000000..75b12248157 --- /dev/null +++ b/site/en/api_docs/python/tf/io/RaggedFeature/RowLimits.md @@ -0,0 +1,70 @@ +description: RowLimits(key,) + +
+ + + + +
+ +# tf.io.RaggedFeature.RowLimits + + + + + + + + + +RowLimits(key,) + + + + + + + + + + + + + + + + + + + + + +
+`key` + + +
+ + + +## Class Variables + +* `key` diff --git a/site/en/api_docs/python/tf/io/RaggedFeature/RowSplits.md b/site/en/api_docs/python/tf/io/RaggedFeature/RowSplits.md new file mode 100644 index 00000000000..7319afd5cad --- /dev/null +++ b/site/en/api_docs/python/tf/io/RaggedFeature/RowSplits.md @@ -0,0 +1,70 @@ +description: RowSplits(key,) + +
+ + + + +
+ +# tf.io.RaggedFeature.RowSplits + + + + + + + + + +RowSplits(key,) + + + + + + + + + + + + + + + + + + + + + +
+`key` + + +
+ + + +## Class Variables + +* `key` diff --git a/site/en/api_docs/python/tf/io/RaggedFeature/RowStarts.md b/site/en/api_docs/python/tf/io/RaggedFeature/RowStarts.md new file mode 100644 index 00000000000..b823d5be017 --- /dev/null +++ b/site/en/api_docs/python/tf/io/RaggedFeature/RowStarts.md @@ -0,0 +1,70 @@ +description: RowStarts(key,) + +
+ + + + +
+ +# tf.io.RaggedFeature.RowStarts + + + + + + + + + +RowStarts(key,) + + + + + + + + + + + + + + + + + + + + + +
+`key` + + +
+ + + +## Class Variables + +* `key` diff --git a/site/en/api_docs/python/tf/io/RaggedFeature/UniformRowLength.md b/site/en/api_docs/python/tf/io/RaggedFeature/UniformRowLength.md new file mode 100644 index 00000000000..9b88db4670c --- /dev/null +++ b/site/en/api_docs/python/tf/io/RaggedFeature/UniformRowLength.md @@ -0,0 +1,70 @@ +description: UniformRowLength(length,) + +
+ + + + +
+ +# tf.io.RaggedFeature.UniformRowLength + + + + + + + + + +UniformRowLength(length,) + + + + + + + + + + + + + + + + + + + + + +
+`length` + + +
+ + + +## Class Variables + +* `length` diff --git a/site/en/api_docs/python/tf/io/RaggedFeature/ValueRowIds.md b/site/en/api_docs/python/tf/io/RaggedFeature/ValueRowIds.md new file mode 100644 index 00000000000..831d671d61c --- /dev/null +++ b/site/en/api_docs/python/tf/io/RaggedFeature/ValueRowIds.md @@ -0,0 +1,70 @@ +description: ValueRowIds(key,) + +
+ + + + +
+ +# tf.io.RaggedFeature.ValueRowIds + + + + + + + + + +ValueRowIds(key,) + + + + + + + + + + + + + + + + + + + + + +
+`key` + + +
+ + + +## Class Variables + +* `key` diff --git a/site/en/api_docs/python/tf/io/SparseFeature.md b/site/en/api_docs/python/tf/io/SparseFeature.md new file mode 100644 index 00000000000..649aa904ebf --- /dev/null +++ b/site/en/api_docs/python/tf/io/SparseFeature.md @@ -0,0 +1,170 @@ +description: Configuration for parsing a sparse input feature from an Example. + +
+ + + + + + + + +
+ +# tf.io.SparseFeature + + + + + + + + + +Configuration for parsing a sparse input feature from an `Example`. + + + + + + + + + +Note, preferably use `VarLenFeature` (possibly in combination with a +`SequenceExample`) in order to parse out `SparseTensor`s instead of +`SparseFeature` due to its simplicity. + +Closely mimicking the `SparseTensor` that will be obtained by parsing an +`Example` with a `SparseFeature` config, a `SparseFeature` contains a + +* `value_key`: The name of key for a `Feature` in the `Example` whose parsed + `Tensor` will be the resulting SparseTensor.values. + +* `index_key`: A list of names - one for each dimension in the resulting + `SparseTensor` whose `indices[i][dim]` indicating the position of + the `i`-th value in the `dim` dimension will be equal to the `i`-th value in + the Feature with key named `index_key[dim]` in the `Example`. + +* `size`: A list of ints for the resulting SparseTensor.dense_shape. + +For example, we can represent the following 2D `SparseTensor` + +```python +SparseTensor(indices=[[3, 1], [20, 0]], + values=[0.5, -1.0] + dense_shape=[100, 3]) +``` + +with an `Example` input proto + +```python +features { + feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } } + feature { key: "ix0" value { int64_list { value: [ 3, 20 ] } } } + feature { key: "ix1" value { int64_list { value: [ 1, 0 ] } } } +} +``` + +and `SparseFeature` config with 2 `index_key`s + +```python +SparseFeature(index_key=["ix0", "ix1"], + value_key="val", + dtype=tf.float32, + size=[100, 3]) +``` + +#### Fields: + + +* `index_key`: A single string name or a list of string names of index features. + For each key the underlying feature's type must be `int64` and its length + must always match that of the `value_key` feature. + To represent `SparseTensor`s with a `dense_shape` of `rank` higher than 1 + a list of length `rank` should be used. +* `value_key`: Name of value feature. The underlying feature's type must + be `dtype` and its length must always match that of all the `index_key`s' + features. +* `dtype`: Data type of the `value_key` feature. +* `size`: A Python int or list thereof specifying the dense shape. Should be a + list if and only if `index_key` is a list. In that case the list must be + equal to the length of `index_key`. Each for each entry `i` all values in + the `index_key`[i] feature must be in `[0, size[i])`. +* `already_sorted`: A Python boolean to specify whether the values in + `value_key` are already sorted by their index position. If so skip + sorting. False by default (optional). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`index_key` + + +
+`value_key` + + +
+`dtype` + + +
+`size` + + +
+`already_sorted` + + +
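+
+As a minimal sketch, the proto and `SparseFeature` config above could be parsed as
+follows (the feature keys `"ix0"`, `"ix1"`, and `"val"` follow the example on this
+page; the output key `"sp"` is illustrative):
+
+```python
+import tensorflow as tf
+
+example = tf.train.Example(features=tf.train.Features(feature={
+    "val": tf.train.Feature(float_list=tf.train.FloatList(value=[0.5, -1.0])),
+    "ix0": tf.train.Feature(int64_list=tf.train.Int64List(value=[3, 20])),
+    "ix1": tf.train.Feature(int64_list=tf.train.Int64List(value=[1, 0])),
+}))
+
+features = {
+    "sp": tf.io.SparseFeature(index_key=["ix0", "ix1"], value_key="val",
+                              dtype=tf.float32, size=[100, 3]),
+}
+parsed = tf.io.parse_single_example(example.SerializeToString(), features)
+# parsed["sp"] is a SparseTensor with indices [[3, 1], [20, 0]],
+# values [0.5, -1.0], and dense_shape [100, 3].
+```
+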
+ + + +## Class Variables + +* `already_sorted` +* `dtype` +* `index_key` +* `size` +* `value_key` diff --git a/site/en/api_docs/python/tf/io/TFRecordOptions.md b/site/en/api_docs/python/tf/io/TFRecordOptions.md new file mode 100644 index 00000000000..233dd44c2f5 --- /dev/null +++ b/site/en/api_docs/python/tf/io/TFRecordOptions.md @@ -0,0 +1,210 @@ +description: Options used for manipulating TFRecord files. + +
+ + + + + +
+ +# tf.io.TFRecordOptions + + + + + + + + + +Options used for manipulating TFRecord files. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`compression_type` + +`"GZIP"`, `"ZLIB"`, or `""` (no compression). +
+`flush_mode` + +flush mode or `None`, Default: Z_NO_FLUSH. +
+`input_buffer_size` + +int or `None`. +
+`output_buffer_size` + +int or `None`. +
+`window_bits` + +int or `None`. +
+`compression_level` + +0 to 9, or `None`. +
+`compression_method` + +compression method or `None`. +
+`mem_level` + +1 to 9, or `None`. +
+`compression_strategy` + +strategy or `None`. Default: Z_DEFAULT_STRATEGY. +
+ + + + + + + + + + + + +
+`ValueError` + +If compression_type is invalid. +
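+
+For example, a minimal sketch (the path below is hypothetical) of writing and
+reading a gzip-compressed TFRecord file with these options:
+
+```python
+import tensorflow as tf
+
+options = tf.io.TFRecordOptions(compression_type="GZIP", compression_level=9)
+
+path = "/tmp/example.tfrecord.gz"  # hypothetical path
+with tf.io.TFRecordWriter(path, options=options) as writer:
+    writer.write(b"first record")
+    writer.write(b"second record")
+
+# tf.data.TFRecordDataset takes the compression type as a plain string.
+for record in tf.data.TFRecordDataset([path], compression_type="GZIP"):
+    print(record.numpy())
+```
+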
+ + + +## Methods + +

get_compression_type_string

+ +View source + + + +Convert various option types to a unified string. + + + + + + + + + + + +
Args
+`options` + +`TFRecordOption`, `TFRecordCompressionType`, or string. +
+ + + + + + + + + + + +
Returns
+Compression type as string (e.g. `'ZLIB'`, `'GZIP'`, or `''`). +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If compression_type is invalid. +
+ + + + + +## Class Variables + +* `compression_type_map` diff --git a/site/en/api_docs/python/tf/io/TFRecordWriter.md b/site/en/api_docs/python/tf/io/TFRecordWriter.md new file mode 100644 index 00000000000..bb5a5a248b1 --- /dev/null +++ b/site/en/api_docs/python/tf/io/TFRecordWriter.md @@ -0,0 +1,242 @@ +description: A class to write records to a TFRecords file. + +
+ + + + + + + + + +
+ +# tf.io.TFRecordWriter + + + + + + + + + +A class to write records to a TFRecords file. + + + + + + + + + +[TFRecords tutorial](https://www.tensorflow.org/tutorials/load_data/tfrecord) + +TFRecords is a binary format which is optimized for high throughput data +retrieval, generally in conjunction with tf.data. `TFRecordWriter` is used +to write serialized examples to a file for later consumption. The key steps +are: + + Ahead of time: + + - [Convert data into a serialized format]( + https://www.tensorflow.org/tutorials/load_data/tfrecord#tfexample) + - [Write the serialized data to one or more files]( + https://www.tensorflow.org/tutorials/load_data/tfrecord#tfrecord_files_in_python) + + During training or evaluation: + + - [Read serialized examples into memory]( + https://www.tensorflow.org/tutorials/load_data/tfrecord#reading_a_tfrecord_file) + - [Parse (deserialize) examples]( + https://www.tensorflow.org/tutorials/load_data/tfrecord#reading_a_tfrecord_file) + +A minimal example is given below: + +``` +>>> import tempfile +>>> example_path = os.path.join(tempfile.gettempdir(), "example.tfrecords") +>>> np.random.seed(0) +``` + +``` +>>> # Write the records to a file. +... with tf.io.TFRecordWriter(example_path) as file_writer: +... for _ in range(4): +... x, y = np.random.random(), np.random.random() +... +... record_bytes = tf.train.Example(features=tf.train.Features(feature={ +... "x": tf.train.Feature(float_list=tf.train.FloatList(value=[x])), +... "y": tf.train.Feature(float_list=tf.train.FloatList(value=[y])), +... })).SerializeToString() +... file_writer.write(record_bytes) +``` + +``` +>>> # Read the data back out. +>>> def decode_fn(record_bytes): +... return tf.io.parse_single_example( +... # Data +... record_bytes, +... +... # Schema +... {"x": tf.io.FixedLenFeature([], dtype=tf.float32), +... "y": tf.io.FixedLenFeature([], dtype=tf.float32)} +... ) +``` + +``` +>>> for batch in tf.data.TFRecordDataset([example_path]).map(decode_fn): +... print("x = {x:.4f}, y = {y:.4f}".format(**batch)) +x = 0.5488, y = 0.7152 +x = 0.6028, y = 0.5449 +x = 0.4237, y = 0.6459 +x = 0.4376, y = 0.8918 +``` + +This class implements `__enter__` and `__exit__`, and can be used +in `with` blocks like a normal file. (See the usage example above.) + + + + + + + + + + + + + +
+`path` + +The path to the TFRecords file. +
+`options` + +(optional) String specifying compression type, +`TFRecordCompressionType`, or `TFRecordOptions` object. +
+ + + + + + + + + + + + + + + +
+`IOError` + +If `path` cannot be opened for writing. +
+`ValueError` + +If valid compression_type can't be determined from `options`. +
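+
+As a small sketch of the `options` argument, the compression type can also be
+passed as a plain string (the path below is hypothetical):
+
+```python
+import tensorflow as tf
+
+with tf.io.TFRecordWriter("/tmp/data.tfrecord.gz", options="GZIP") as file_writer:
+    # A serialized tf.train.Example would normally go here.
+    file_writer.write(b"a serialized record")
+```
+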
+ + + +## Methods + +

close

+ +View source + + + +Close the file. + + +

flush

+ +View source + + + +Flush the file. + + +

write

+ +View source + + + +Write a string record to the file. + + + + + + + + + + + +
Args
+`record` + +str +
+ + + +

__enter__

+ + + +__enter__(self: object) -> object + + +

__exit__

+ + + +__exit__(self: tensorflow.python._pywrap_record_io.RecordWriter, *args) -> None + + + + diff --git a/site/en/api_docs/python/tf/io/VarLenFeature.md b/site/en/api_docs/python/tf/io/VarLenFeature.md new file mode 100644 index 00000000000..ebd816288ca --- /dev/null +++ b/site/en/api_docs/python/tf/io/VarLenFeature.md @@ -0,0 +1,76 @@ +description: Configuration for parsing a variable-length input feature. + +
+ + + + +
+ +# tf.io.VarLenFeature + + + + + + + + + +Configuration for parsing a variable-length input feature. + + + + + + + + + + +#### Fields: + + +* `dtype`: Data type of input. + + + + + + + + + + + + + +
+`dtype` + + +
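+
+A minimal sketch (the feature key `"tokens"` is illustrative): each
+`VarLenFeature` is parsed into a `SparseTensor` whose rows may have different
+lengths.
+
+```python
+import tensorflow as tf
+
+examples = [
+    tf.train.Example(features=tf.train.Features(feature={
+        "tokens": tf.train.Feature(int64_list=tf.train.Int64List(value=values))
+    })).SerializeToString()
+    for values in ([1, 2, 3], [4])
+]
+
+parsed = tf.io.parse_example(examples, {"tokens": tf.io.VarLenFeature(tf.int64)})
+print(tf.sparse.to_dense(parsed["tokens"]).numpy())
+# [[1 2 3]
+#  [4 0 0]]
+```
+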
+ + + +## Class Variables + +* `dtype` diff --git a/site/en/api_docs/python/tf/io/decode_and_crop_jpeg.md b/site/en/api_docs/python/tf/io/decode_and_crop_jpeg.md new file mode 100644 index 00000000000..9584fe19871 --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_and_crop_jpeg.md @@ -0,0 +1,164 @@ +description: Decode and Crop a JPEG-encoded image to a uint8 tensor. + +
+ + +
+ +# tf.io.decode_and_crop_jpeg + + + + + + + + + +Decode and Crop a JPEG-encoded image to a uint8 tensor. + + + + + + + + + +The attr `channels` indicates the desired number of color channels for the +decoded image. + +#### Accepted values are: + + + +* 0: Use the number of channels in the JPEG-encoded image. +* 1: output a grayscale image. +* 3: output an RGB image. + +If needed, the JPEG-encoded image is transformed to match the requested number +of color channels. + +The attr `ratio` allows downscaling the image by an integer factor during +decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than +downscaling the image later. + + +It is equivalent to a combination of decode and crop, but much faster by only +decoding partial jpeg image. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The JPEG-encoded image. +
+`crop_window` + +A `Tensor` of type `int32`. +1-D. The crop window: [crop_y, crop_x, crop_height, crop_width]. +
+`channels` + +An optional `int`. Defaults to `0`. +Number of color channels for the decoded image. +
+`ratio` + +An optional `int`. Defaults to `1`. Downscaling ratio. +
+`fancy_upscaling` + +An optional `bool`. Defaults to `True`. +If true use a slower but nicer upscaling of the +chroma planes (yuv420/422 only). +
+`try_recover_truncated` + +An optional `bool`. Defaults to `False`. +If true try to recover an image from truncated input. +
+`acceptable_fraction` + +An optional `float`. Defaults to `1`. +The minimum required fraction of lines before a truncated +input is accepted. +
+`dct_method` + +An optional `string`. Defaults to `""`. +string specifying a hint about the algorithm used for +decompression. Defaults to "" which maps to a system-specific +default. Currently valid values are ["INTEGER_FAST", +"INTEGER_ACCURATE"]. The hint may be ignored (e.g., the internal +jpeg library changes to a version that does not have that specific +option.) +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
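+
+A minimal sketch (assuming a local file `photo.jpg` exists): decode only a
+100x100 window instead of the full image.
+
+```python
+import tensorflow as tf
+
+contents = tf.io.read_file("photo.jpg")   # hypothetical file
+crop_window = [10, 20, 100, 100]          # [crop_y, crop_x, crop_height, crop_width]
+patch = tf.io.decode_and_crop_jpeg(contents, crop_window, channels=3)
+print(patch.shape)                        # (100, 100, 3)
+```
+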
+ diff --git a/site/en/api_docs/python/tf/io/decode_base64.md b/site/en/api_docs/python/tf/io/decode_base64.md new file mode 100644 index 00000000000..d47158903ab --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_base64.md @@ -0,0 +1,79 @@ +description: Decode web-safe base64-encoded strings. + +
+ + +
+ +# tf.io.decode_base64 + + + + + + + + + +Decode web-safe base64-encoded strings. + + + + + + + + + +Input may or may not have padding at the end. See EncodeBase64 for padding. +Web-safe means that input must use - and _ instead of + and /. + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. Base64 strings to decode. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
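+
+A minimal round-trip sketch with the matching encode op:
+
+```python
+import tensorflow as tf
+
+encoded = tf.io.encode_base64(tf.constant(["hello tensorflow"]))
+decoded = tf.io.decode_base64(encoded)
+print(decoded.numpy())  # [b'hello tensorflow']
+```
+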
+ diff --git a/site/en/api_docs/python/tf/io/decode_bmp.md b/site/en/api_docs/python/tf/io/decode_bmp.md new file mode 100644 index 00000000000..f845012ad97 --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_bmp.md @@ -0,0 +1,97 @@ +description: Decode the first frame of a BMP-encoded image to a uint8 tensor. + +
+ + +
+ +# tf.io.decode_bmp + + + + + + + + + +Decode the first frame of a BMP-encoded image to a uint8 tensor. + + + + + + + + + +The attr `channels` indicates the desired number of color channels for the +decoded image. + +#### Accepted values are: + + + +* 0: Use the number of channels in the BMP-encoded image. +* 3: output an RGB image. +* 4: output an RGBA image. + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The BMP-encoded image. +
+`channels` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
+ diff --git a/site/en/api_docs/python/tf/io/decode_compressed.md b/site/en/api_docs/python/tf/io/decode_compressed.md new file mode 100644 index 00000000000..9ea64c2c281 --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_compressed.md @@ -0,0 +1,93 @@ +description: Decompress strings. + +
+ + +
+ +# tf.io.decode_compressed + + + + + + + + + +Decompress strings. + + + + + + + + + +This op decompresses each element of the `bytes` input `Tensor`, which +is assumed to be compressed using the given `compression_type`. + +The `output` is a string `Tensor` of the same shape as `bytes`, +each element containing the decompressed data from the corresponding +element in `bytes`. + + + + + + + + + + + + + + + + +
+`bytes` + +A `Tensor` of type `string`. +A Tensor of string which is compressed. +
+`compression_type` + +An optional `string`. Defaults to `""`. +A scalar containing either (i) the empty string (no +compression), (ii) "ZLIB", or (iii) "GZIP". +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
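+
+A minimal sketch: compress a payload with Python's `zlib`, then decompress it
+with this op.
+
+```python
+import zlib
+import tensorflow as tf
+
+payload = zlib.compress(b"compressed payload")
+decoded = tf.io.decode_compressed(tf.constant([payload]), compression_type="ZLIB")
+print(decoded.numpy())  # [b'compressed payload']
+```
+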
+ diff --git a/site/en/api_docs/python/tf/io/decode_csv.md b/site/en/api_docs/python/tf/io/decode_csv.md new file mode 100644 index 00000000000..e0eed792444 --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_csv.md @@ -0,0 +1,139 @@ +description: Convert CSV records to tensors. Each column maps to one tensor. + +
+ + +
+ +# tf.io.decode_csv + + + + + + + + + +Convert CSV records to tensors. Each column maps to one tensor. + + + + + + + +RFC 4180 format is expected for the CSV records. +(https://tools.ietf.org/html/rfc4180) +Note that we allow leading and trailing spaces with int or float field. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`records` + +A `Tensor` of type `string`. +Each string is a record/row in the csv and all records should have +the same format. +
+`record_defaults` + +A list of `Tensor` objects with specific types. +Acceptable types are `float32`, `float64`, `int32`, `int64`, `string`. +One tensor per column of the input record, with either a +scalar default value for that column or an empty vector if the column is +required. +
+`field_delim` + +An optional `string`. Defaults to `","`. +char delimiter to separate fields in a record. +
+`use_quote_delim` + +An optional `bool`. Defaults to `True`. +If false, treats double quotation marks as regular +characters inside of the string fields (ignoring RFC 4180, Section 2, +Bullet 5). +
+`na_value` + +Additional string to recognize as NA/NaN. +
+`select_cols` + +Optional sorted list of column indices to select. If specified, +only this subset of columns will be parsed and returned. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects. Has the same type as `record_defaults`. +Each tensor will have the same shape as records. +
+ + + + + + + + + + + + +
+`ValueError` + +If any of the arguments is malformed. +
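+
+A minimal sketch: one entry in `record_defaults` per CSV column; empty fields
+fall back to that column's default.
+
+```python
+import tensorflow as tf
+
+records = tf.constant(["1,2.5,foo", "4,,bar"])
+ints, floats, strings = tf.io.decode_csv(records,
+                                         record_defaults=[[0], [0.0], ["missing"]])
+print(ints.numpy(), floats.numpy(), strings.numpy())
+# [1 4] [2.5 0. ] [b'foo' b'bar']
+```
+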
+ diff --git a/site/en/api_docs/python/tf/io/decode_gif.md b/site/en/api_docs/python/tf/io/decode_gif.md new file mode 100644 index 00000000000..75cf6569e6b --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_gif.md @@ -0,0 +1,88 @@ +description: Decode the frame(s) of a GIF-encoded image to a uint8 tensor. + +
+ + +
+ +# tf.io.decode_gif + + + + + + + + + +Decode the frame(s) of a GIF-encoded image to a uint8 tensor. + + + + + + + + + +GIF images with frame or transparency compression are not supported. +On Linux and MacOS systems, convert animated GIFs from compressed to +uncompressed by running: + + convert $src.gif -coalesce $dst.gif + +This op also supports decoding JPEGs and PNGs, though it is cleaner to use +tf.image.decode_image. + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The GIF-encoded image. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
+ diff --git a/site/en/api_docs/python/tf/io/decode_image.md b/site/en/api_docs/python/tf/io/decode_image.md new file mode 100644 index 00000000000..ade1df2f959 --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_image.md @@ -0,0 +1,141 @@ +description: Function for decode_bmp, decode_gif, decode_jpeg, and decode_png. + +
+ + +
+ +# tf.io.decode_image + + + + + + + + + +Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`. + + + + + + + + + +Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the +appropriate operation to convert the input bytes `string` into a `Tensor` +of type `dtype`. + +Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as +opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D +arrays `[height, width, num_channels]`. Make sure to take this into account +when constructing your graph if you are intermixing GIF files with BMP, JPEG, +and/or PNG files. Alternately, set the `expand_animations` argument of this +function to `False`, in which case the op will return 3-dimensional tensors +and will truncate animated GIF files to the first frame. + + + + + + + + + + + + + + + + + + + + + + +
+`contents` + +0-D `string`. The encoded image bytes. +
+`channels` + +An optional `int`. Defaults to `0`. Number of color channels for +the decoded image. +
+`dtype` + +The desired DType of the returned `Tensor`. +
+`name` + +A name for the operation (optional) +
+`expand_animations` + +Controls the shape of the returned op's output. If +`True`, the returned op will produce a 3-D tensor for PNG, JPEG, and BMP +files; and a 4-D tensor for all GIFs, whether animated or not. If, +`False`, the returned op will produce a 3-D tensor for all file types and +will truncate animated GIFs to the first frame. +
+ + + + + + + + + + + +
+`Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on +the file type and the value of the `expand_animations` parameter. +
+ + + + + + + + + + + + +
+`ValueError` + +On incorrect number of channels. +
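+
+A minimal sketch (assuming a local file `input.png` exists): decode without
+knowing the container format in advance, and force a rank-3 result even for
+GIF inputs.
+
+```python
+import tensorflow as tf
+
+contents = tf.io.read_file("input.png")   # hypothetical file; may be BMP, GIF, JPEG, or PNG
+image = tf.io.decode_image(contents, channels=3, dtype=tf.uint8,
+                           expand_animations=False)
+print(image.shape)                        # rank 3: (height, width, 3)
+```
+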
+ diff --git a/site/en/api_docs/python/tf/io/decode_jpeg.md b/site/en/api_docs/python/tf/io/decode_jpeg.md new file mode 100644 index 00000000000..eebf9ca25fd --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_jpeg.md @@ -0,0 +1,156 @@ +description: Decode a JPEG-encoded image to a uint8 tensor. + +
+ + +
+ +# tf.io.decode_jpeg + + + + + + + + + +Decode a JPEG-encoded image to a uint8 tensor. + + + + + + + + + +The attr `channels` indicates the desired number of color channels for the +decoded image. + +#### Accepted values are: + + + +* 0: Use the number of channels in the JPEG-encoded image. +* 1: output a grayscale image. +* 3: output an RGB image. + +If needed, the JPEG-encoded image is transformed to match the requested number +of color channels. + +The attr `ratio` allows downscaling the image by an integer factor during +decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than +downscaling the image later. + + +This op also supports decoding PNGs and non-animated GIFs since the interface is +the same, though it is cleaner to use tf.image.decode_image. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The JPEG-encoded image. +
+`channels` + +An optional `int`. Defaults to `0`. +Number of color channels for the decoded image. +
+`ratio` + +An optional `int`. Defaults to `1`. Downscaling ratio. +
+`fancy_upscaling` + +An optional `bool`. Defaults to `True`. +If true use a slower but nicer upscaling of the +chroma planes (yuv420/422 only). +
+`try_recover_truncated` + +An optional `bool`. Defaults to `False`. +If true try to recover an image from truncated input. +
+`acceptable_fraction` + +An optional `float`. Defaults to `1`. +The minimum required fraction of lines before a truncated +input is accepted. +
+`dct_method` + +An optional `string`. Defaults to `""`. +string specifying a hint about the algorithm used for +decompression. Defaults to "" which maps to a system-specific +default. Currently valid values are ["INTEGER_FAST", +"INTEGER_ACCURATE"]. The hint may be ignored (e.g., the internal +jpeg library changes to a version that does not have that specific +option.) +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
+ diff --git a/site/en/api_docs/python/tf/io/decode_json_example.md b/site/en/api_docs/python/tf/io/decode_json_example.md new file mode 100644 index 00000000000..4d78e71fb64 --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_json_example.md @@ -0,0 +1,85 @@ +description: Convert JSON-encoded Example records to binary protocol buffer strings. + +
+ + +
+ +# tf.io.decode_json_example + + + + + + + + + +Convert JSON-encoded Example records to binary protocol buffer strings. + + + + + + + + + +This op translates a tensor containing Example records, encoded using +the [standard JSON +mapping](https://developers.google.com/protocol-buffers/docs/proto3#json), +into a tensor containing the same records encoded as binary protocol +buffers. The resulting tensor can then be fed to any of the other +Example-parsing ops. + + + + + + + + + + + + + +
+`json_examples` + +A `Tensor` of type `string`. +Each string is a JSON object serialized according to the JSON +mapping of the Example proto. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
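+
+A minimal sketch (the feature key `"x"` is illustrative), assuming the input
+follows the standard proto3 JSON mapping of `Example`:
+
+```python
+import tensorflow as tf
+
+json_example = '{"features": {"feature": {"x": {"floatList": {"value": [1.5]}}}}}'
+binary = tf.io.decode_json_example(tf.constant([json_example]))
+parsed = tf.io.parse_example(binary, {"x": tf.io.FixedLenFeature([], tf.float32)})
+print(parsed["x"].numpy())  # [1.5]
+```
+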
+ diff --git a/site/en/api_docs/python/tf/io/decode_png.md b/site/en/api_docs/python/tf/io/decode_png.md new file mode 100644 index 00000000000..c25b791c21f --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_png.md @@ -0,0 +1,112 @@ +description: Decode a PNG-encoded image to a uint8 or uint16 tensor. + +
+ + +
+ +# tf.io.decode_png + + + + + + + + + +Decode a PNG-encoded image to a uint8 or uint16 tensor. + + + + + + + + + +The attr `channels` indicates the desired number of color channels for the +decoded image. + +#### Accepted values are: + + + +* 0: Use the number of channels in the PNG-encoded image. +* 1: output a grayscale image. +* 3: output an RGB image. +* 4: output an RGBA image. + +If needed, the PNG-encoded image is transformed to match the requested number +of color channels. + +This op also supports decoding JPEGs and non-animated GIFs since the interface +is the same, though it is cleaner to use tf.image.decode_image. + + + + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The PNG-encoded image. +
+`channels` + +An optional `int`. Defaults to `0`. +Number of color channels for the decoded image. +
+`dtype` + +An optional tf.DType from: `tf.uint8, tf.uint16`. Defaults to tf.uint8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/io/decode_proto.md b/site/en/api_docs/python/tf/io/decode_proto.md new file mode 100644 index 00000000000..689306dff6b --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_proto.md @@ -0,0 +1,189 @@ +description: The op extracts fields from a serialized protocol buffers message into tensors. + +
+ + +
+ +# tf.io.decode_proto + + + + + + + + + +The op extracts fields from a serialized protocol buffers message into tensors. + + + + + + + + + +The `decode_proto` op extracts fields from a serialized protocol buffers +message into tensors. The fields in `field_names` are decoded and converted +to the corresponding `output_types` if possible. + +A `message_type` name must be provided to give context for the field names. +The actual message descriptor can be looked up either in the linked-in +descriptor pool or a filename provided by the caller using the +`descriptor_source` attribute. + +Each output tensor is a dense tensor. This means that it is padded to hold +the largest number of repeated elements seen in the input minibatch. (The +shape is also padded by one to prevent zero-sized dimensions). The actual +repeat counts for each example in the minibatch can be found in the `sizes` +output. In many cases the output of `decode_proto` is fed immediately into +tf.squeeze if missing values are not a concern. When using tf.squeeze, always +pass the squeeze dimension explicitly to avoid surprises. + +For the most part, the mapping between Proto field types and TensorFlow dtypes +is straightforward. However, there are a few special cases: + +- A proto field that contains a submessage or group can only be converted +to `DT_STRING` (the serialized submessage). This is to reduce the complexity +of the API. The resulting string can be used as input to another instance of +the decode_proto op. + +- TensorFlow lacks support for unsigned integers. The ops represent uint64 +types as a `DT_INT64` with the same twos-complement bit pattern (the obvious +way). Unsigned int32 values can be represented exactly by specifying type +`DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in +the `output_types` attribute. + +Both binary and text proto serializations are supported, and can be +chosen using the `format` attribute. + +The `descriptor_source` attribute selects the source of protocol +descriptors to consult when looking up `message_type`. This may be: + +- An empty string or "local://", in which case protocol descriptors are +created for C++ (not Python) proto definitions linked to the binary. + +- A file, in which case protocol descriptors are created from the file, +which is expected to contain a `FileDescriptorSet` serialized as a string. +NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out` +and `--include_imports` options to the protocol compiler `protoc`. + +- A "bytes://", in which protocol descriptors are created from ``, +which is expected to be a `FileDescriptorSet` serialized as a string. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`bytes` + +A `Tensor` of type `string`. +Tensor of serialized protos with shape `batch_shape`. +
+`message_type` + +A `string`. Name of the proto message type to decode. +
+`field_names` + +A list of `strings`. +List of strings containing proto field names. An extension field can be decoded +by using its full name, e.g. EXT_PACKAGE.EXT_FIELD_NAME. +
+`output_types` + +A list of `tf.DTypes`. +List of TF types to use for the respective field in field_names. +
+`descriptor_source` + +An optional `string`. Defaults to `"local://"`. +Either the special value `local://` or a path to a file containing +a serialized `FileDescriptorSet`. +
+`message_format` + +An optional `string`. Defaults to `"binary"`. +Either `binary` or `text`. +
+`sanitize` + +An optional `bool`. Defaults to `False`. +Whether to sanitize the result or not. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sizes, values). +
+`sizes` + +A `Tensor` of type `int32`. +
+`values` + +A list of `Tensor` objects of type `output_types`. +
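+
+A minimal sketch, assuming the `tensorflow.Example` descriptor is available in
+the linked-in descriptor pool: the `features` submessage field is returned as a
+serialized string, as described above.
+
+```python
+import tensorflow as tf
+
+example = tf.train.Example(features=tf.train.Features(feature={
+    "x": tf.train.Feature(int64_list=tf.train.Int64List(value=[7]))
+}))
+serialized = tf.constant([example.SerializeToString()])
+
+sizes, values = tf.io.decode_proto(bytes=serialized,
+                                   message_type="tensorflow.Example",
+                                   field_names=["features"],
+                                   output_types=[tf.string])
+print(sizes.numpy())    # repeat count per field, e.g. [[1]]
+print(values[0].shape)  # (1, 1): batch of 1, padded repeat dimension of 1
+```
+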
+ diff --git a/site/en/api_docs/python/tf/io/decode_raw.md b/site/en/api_docs/python/tf/io/decode_raw.md new file mode 100644 index 00000000000..165c62f3cc3 --- /dev/null +++ b/site/en/api_docs/python/tf/io/decode_raw.md @@ -0,0 +1,99 @@ +description: Convert raw byte strings into tensors. + +
+ + +
+ +# tf.io.decode_raw + + + + + + + + + +Convert raw byte strings into tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_bytes` + +Each element of the input Tensor is converted to an array of bytes. +
+`out_type` + +`DType` of the output. Acceptable types are `half`, `float`, `double`, +`int32`, `uint16`, `uint8`, `int16`, `int8`, `int64`. +
+`little_endian` + +Whether the `input_bytes` data is in little-endian format. Data will be +converted into host byte order if necessary. +
+`fixed_length` + +If set, the first `fixed_length` bytes of each element will be converted. +Data will be zero-padded or truncated to the specified length. + +`fixed_length` must be a multiple of the size of `out_type`. +`fixed_length` must be specified if the elements of `input_bytes` are of +variable length. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` object storing the decoded bytes. +
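+
+A minimal sketch: reinterpret raw little-endian bytes as `int16` values.
+
+```python
+import tensorflow as tf
+
+raw = tf.constant([b"\x01\x00\x02\x00"])   # two little-endian int16 values per element
+values = tf.io.decode_raw(raw, out_type=tf.int16, little_endian=True)
+print(values.numpy())                      # [[1 2]]
+```
+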
+ diff --git a/site/en/api_docs/python/tf/io/deserialize_many_sparse.md b/site/en/api_docs/python/tf/io/deserialize_many_sparse.md new file mode 100644 index 00000000000..5ae9602f5e0 --- /dev/null +++ b/site/en/api_docs/python/tf/io/deserialize_many_sparse.md @@ -0,0 +1,141 @@ +description: Deserialize and concatenate SparseTensors from a serialized minibatch. + +
+ + +
+ +# tf.io.deserialize_many_sparse + + + + + + + + + +Deserialize and concatenate `SparseTensors` from a serialized minibatch. + + + + + + + + + +The input `serialized_sparse` must be a string matrix of shape `[N x 3]` where +`N` is the minibatch size and the rows correspond to packed outputs of +`serialize_sparse`. The ranks of the original `SparseTensor` objects +must all match. When the final `SparseTensor` is created, it has rank one +higher than the ranks of the incoming `SparseTensor` objects (they have been +concatenated along a new row dimension). + +The output `SparseTensor` object's shape values for all dimensions but the +first are the max across the input `SparseTensor` objects' shape values +for the corresponding dimensions. Its first shape value is `N`, the minibatch +size. + +The input `SparseTensor` objects' indices are assumed ordered in +standard lexicographic order. If this is not the case, after this +step run sparse.reorder to restore index ordering. + +For example, if the serialized input is a `[2, 3]` matrix representing two +original `SparseTensor` objects: + + index = [ 0] + [10] + [20] + values = [1, 2, 3] + shape = [50] + +and + + index = [ 2] + [10] + values = [4, 5] + shape = [30] + +then the final deserialized `SparseTensor` will be: + + index = [0 0] + [0 10] + [0 20] + [1 2] + [1 10] + values = [1, 2, 3, 4, 5] + shape = [2 50] + + + + + + + + + + + + + + + + + + + +
+`serialized_sparse` + +2-D `Tensor` of type `string` of shape `[N, 3]`. +The serialized and packed `SparseTensor` objects. +
+`dtype` + +The `dtype` of the serialized `SparseTensor` objects. +
+`rank` + +(optional) Python int, the rank of the `SparseTensor` objects. +
+`name` + +A name prefix for the returned tensors (optional) +
+ + + + + + + + + + + +
+A `SparseTensor` representing the deserialized `SparseTensor`s, +concatenated along the `SparseTensor`s' first dimension. + +All of the serialized `SparseTensor`s must have had the same rank and type. +
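+
+A minimal round-trip sketch with tf.io.serialize_many_sparse (run eagerly
+here for brevity; in practice the serialized rows usually come out of an input
+pipeline):
+
+```python
+import tensorflow as tf
+
+st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]], values=[1.0, 2.0],
+                            dense_shape=[2, 4])
+serialized = tf.io.serialize_many_sparse(st)   # shape [2, 3]: one row per batch entry
+roundtrip = tf.io.deserialize_many_sparse(serialized, dtype=tf.float32)
+print(tf.sparse.to_dense(roundtrip).numpy())
+# [[1. 0. 0. 0.]
+#  [0. 0. 2. 0.]]
+```
+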
+ diff --git a/site/en/api_docs/python/tf/io/encode_base64.md b/site/en/api_docs/python/tf/io/encode_base64.md new file mode 100644 index 00000000000..7080e7bf932 --- /dev/null +++ b/site/en/api_docs/python/tf/io/encode_base64.md @@ -0,0 +1,91 @@ +description: Encode strings into web-safe base64 format. + +
+ + +
+ +# tf.io.encode_base64 + + + + + + + + + +Encode strings into web-safe base64 format. + + + + + + + + + +Refer to the following article for more information on base64 format: +en.wikipedia.org/wiki/Base64. Base64 strings may have padding with '=' at the +end so that the encoded has length multiple of 4. See Padding section of the +link above. + +Web-safe means that the encoder uses - and _ instead of + and /. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. Strings to be encoded. +
+`pad` + +An optional `bool`. Defaults to `False`. +Bool whether padding is applied at the ends. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/io/encode_jpeg.md b/site/en/api_docs/python/tf/io/encode_jpeg.md new file mode 100644 index 00000000000..57e3f39b3b5 --- /dev/null +++ b/site/en/api_docs/python/tf/io/encode_jpeg.md @@ -0,0 +1,172 @@ +description: JPEG-encode an image. + +
+ + +
+ +# tf.io.encode_jpeg + + + + + + + + + +JPEG-encode an image. + + + + + + + + + +`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`. + +The attr `format` can be used to override the color format of the encoded +output. Values can be: + +* `''`: Use a default format based on the number of channels in the image. +* `grayscale`: Output a grayscale JPEG image. The `channels` dimension + of `image` must be 1. +* `rgb`: Output an RGB JPEG image. The `channels` dimension + of `image` must be 3. + +If `format` is not specified or is the empty string, a default format is picked +in function of the number of channels in `image`: + +* 1: Output a grayscale image. +* 3: Output an RGB image. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`image` + +A `Tensor` of type `uint8`. +3-D with shape `[height, width, channels]`. +
+`format` + +An optional `string` from: `"", "grayscale", "rgb"`. Defaults to `""`. +Per pixel image format. +
+`quality` + +An optional `int`. Defaults to `95`. +Quality of the compression from 0 to 100 (higher is better and slower). +
+`progressive` + +An optional `bool`. Defaults to `False`. +If True, create a JPEG that loads progressively (coarse to fine). +
+`optimize_size` + +An optional `bool`. Defaults to `False`. +If True, spend CPU/RAM to reduce size with no quality change. +
+`chroma_downsampling` + +An optional `bool`. Defaults to `True`. +See http://en.wikipedia.org/wiki/Chroma_subsampling. +
+`density_unit` + +An optional `string` from: `"in", "cm"`. Defaults to `"in"`. +Unit used to specify `x_density` and `y_density`: +pixels per inch (`'in'`) or centimeter (`'cm'`). +
+`x_density` + +An optional `int`. Defaults to `300`. +Horizontal pixels per density unit. +
+`y_density` + +An optional `int`. Defaults to `300`. +Vertical pixels per density unit. +
+`xmp_metadata` + +An optional `string`. Defaults to `""`. +If not empty, embed this XMP metadata in the image header. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
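+
+A minimal sketch (the output path is hypothetical): encode a synthetic RGB
+image and write it out.
+
+```python
+import tensorflow as tf
+
+image = tf.zeros([64, 64, 3], dtype=tf.uint8)   # a black 64x64 RGB image
+jpeg_bytes = tf.io.encode_jpeg(image, quality=90, chroma_downsampling=False)
+tf.io.write_file("/tmp/black.jpg", jpeg_bytes)  # hypothetical path
+```
+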
+ diff --git a/site/en/api_docs/python/tf/io/encode_proto.md b/site/en/api_docs/python/tf/io/encode_proto.md new file mode 100644 index 00000000000..b4441945b3c --- /dev/null +++ b/site/en/api_docs/python/tf/io/encode_proto.md @@ -0,0 +1,149 @@ +description: The op serializes protobuf messages provided in the input tensors. + +
+ + +
+ +# tf.io.encode_proto + + + + + + + + + +The op serializes protobuf messages provided in the input tensors. + + + + + + + + + +The types of the tensors in `values` must match the schema for the fields +specified in `field_names`. All the tensors in `values` must have a common +shape prefix, *batch_shape*. + +The `sizes` tensor specifies repeat counts for each field. The repeat count +(last dimension) of a each tensor in `values` must be greater than or equal +to corresponding repeat count in `sizes`. + +A `message_type` name must be provided to give context for the field names. +The actual message descriptor can be looked up either in the linked-in +descriptor pool or a filename provided by the caller using the +`descriptor_source` attribute. + +For the most part, the mapping between Proto field types and TensorFlow dtypes +is straightforward. However, there are a few special cases: + +- A proto field that contains a submessage or group can only be converted +to `DT_STRING` (the serialized submessage). This is to reduce the complexity +of the API. The resulting string can be used as input to another instance of +the decode_proto op. + +- TensorFlow lacks support for unsigned integers. The ops represent uint64 +types as a `DT_INT64` with the same twos-complement bit pattern (the obvious +way). Unsigned int32 values can be represented exactly by specifying type +`DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in +the `output_types` attribute. + +The `descriptor_source` attribute selects the source of protocol +descriptors to consult when looking up `message_type`. This may be: + +- An empty string or "local://", in which case protocol descriptors are +created for C++ (not Python) proto definitions linked to the binary. + +- A file, in which case protocol descriptors are created from the file, +which is expected to contain a `FileDescriptorSet` serialized as a string. +NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out` +and `--include_imports` options to the protocol compiler `protoc`. + +- A "bytes://", in which protocol descriptors are created from ``, +which is expected to be a `FileDescriptorSet` serialized as a string. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sizes` + +A `Tensor` of type `int32`. +Tensor of int32 with shape `[batch_shape, len(field_names)]`. +
+`values` + +A list of `Tensor` objects. +List of tensors containing values for the corresponding field. +
+`field_names` + +A list of `strings`. +List of strings containing proto field names. +
+`message_type` + +A `string`. Name of the proto message type to decode. +
+`descriptor_source` + +An optional `string`. Defaults to `"local://"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/io/extract_jpeg_shape.md b/site/en/api_docs/python/tf/io/extract_jpeg_shape.md new file mode 100644 index 00000000000..5b8364ea10a --- /dev/null +++ b/site/en/api_docs/python/tf/io/extract_jpeg_shape.md @@ -0,0 +1,90 @@ +description: Extract the shape information of a JPEG-encoded image. + +
+ + +
+ +# tf.io.extract_jpeg_shape + + + + + + + + + +Extract the shape information of a JPEG-encoded image. + + + + + + + + + +This op only parses the image header, so it is much faster than DecodeJpeg. + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The JPEG-encoded image. +
+`output_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +(Optional) The output type of the operation (int32 or int64). +Defaults to int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_type`. +
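+
+A minimal sketch (assuming a local file `photo.jpg` exists): only the header is
+parsed, so this is cheap even for large images.
+
+```python
+import tensorflow as tf
+
+contents = tf.io.read_file("photo.jpg")   # hypothetical file
+shape = tf.io.extract_jpeg_shape(contents, output_type=tf.int32)
+print(shape.numpy())                      # e.g. [1080 1920 3] -> [height, width, channels]
+```
+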
+ diff --git a/site/en/api_docs/python/tf/io/gfile.md b/site/en/api_docs/python/tf/io/gfile.md new file mode 100644 index 00000000000..7cbe1303a1c --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile.md @@ -0,0 +1,51 @@ +description: Public API for tf.io.gfile namespace. + +
+ + +
+ +# Module: tf.io.gfile + + + + + + + + + +Public API for tf.io.gfile namespace. + + + +## Classes + +[`class GFile`](../../tf/io/gfile/GFile.md): File I/O wrappers without thread locking. + +## Functions + +[`copy(...)`](../../tf/io/gfile/copy.md): Copies data from `src` to `dst`. + +[`exists(...)`](../../tf/io/gfile/exists.md): Determines whether a path exists or not. + +[`glob(...)`](../../tf/io/gfile/glob.md): Returns a list of files that match the given pattern(s). + +[`isdir(...)`](../../tf/io/gfile/isdir.md): Returns whether the path is a directory or not. + +[`listdir(...)`](../../tf/io/gfile/listdir.md): Returns a list of entries contained within a directory. + +[`makedirs(...)`](../../tf/io/gfile/makedirs.md): Creates a directory and all parent/intermediate directories. + +[`mkdir(...)`](../../tf/io/gfile/mkdir.md): Creates a directory with the name given by `path`. + +[`remove(...)`](../../tf/io/gfile/remove.md): Deletes the path located at 'path'. + +[`rename(...)`](../../tf/io/gfile/rename.md): Rename or move a file / directory. + +[`rmtree(...)`](../../tf/io/gfile/rmtree.md): Deletes everything under path recursively. + +[`stat(...)`](../../tf/io/gfile/stat.md): Returns file statistics for a given path. + +[`walk(...)`](../../tf/io/gfile/walk.md): Recursive directory tree generator for directories. + diff --git a/site/en/api_docs/python/tf/io/gfile/GFile.md b/site/en/api_docs/python/tf/io/gfile/GFile.md new file mode 100644 index 00000000000..cea2a410c94 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/GFile.md @@ -0,0 +1,336 @@ +description: File I/O wrappers without thread locking. + +
+ + + + + + + + + + + + + + + + + +
+ +# tf.io.gfile.GFile + + + + + + + + + +File I/O wrappers without thread locking. + + + + + + + + + +The main roles of the tf.io.gfile module are: + +1. To provide an API that is close to Python's file I/O objects, and +2. To provide an implementation based on TensorFlow's C++ FileSystem API. + +The C++ FileSystem API supports multiple file system implementations, +including local files, Google Cloud Storage (using a `gs://` prefix, and +HDFS (using an `hdfs://` prefix). TensorFlow exports these as `tf.io.gfile`, +so that you can use these implementations for saving and loading checkpoints, +writing to TensorBoard logs, and accessing training data (among other uses). +However, if all your files are local, you can use the regular Python file +API without any problem. + +*Note*: though similar to Python's I/O implementation, there are semantic +differences to make tf.io.gfile more efficient for backing filesystems. For +example, a write mode file will not be opened until the first write call, to +minimize RPC invocations in network filesystems. + + + + + + + + + + + + + + + +
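+
+A minimal sketch (the paths are hypothetical): GFile follows the built-in file
+API, so the same code works for local paths and for supported remote prefixes
+such as `gs://` or `hdfs://`.
+
+```python
+import tensorflow as tf
+
+with tf.io.gfile.GFile("/tmp/notes.txt", "w") as f:
+    f.write("hello\n")
+
+with tf.io.gfile.GFile("/tmp/notes.txt", "r") as f:
+    print(f.read())   # 'hello\n'
+```
+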
+`mode` + +Returns the mode in which the file was opened. +
+`name` + +Returns the file name. +
+ + + +## Methods + +

close

+ +View source + + + +Closes FileIO. Should be called for the WritableFile to be flushed. + + +

flush

+ +View source + + + +Flushes the Writable file. + +This only ensures that the data has made its way out of the process without +any guarantees on whether it's written to disk. This means that the +data would survive an application crash but not necessarily an OS crash. + +

next

+ +View source + + + + + + +

read

+ +View source + + + +Returns the contents of a file as a string. + +Starts reading from current position in file. + + + + + + + + + + +
Args
+`n` + +Read `n` bytes if `n != -1`. If `n = -1`, reads to end of file. +
+ + + + + + + + + + + +
Returns
+`n` bytes of the file (or whole file) in bytes mode or `n` bytes of the +string if in string (regular) mode. +
+ + + +

readline

+ +View source + + + +Reads the next line from the file. Leaves the '\n' at the end. + + +

readlines

+ +View source + + + +Returns all lines from the file in a list. + + +

seek

+ +View source + + + +Seeks to the offset in the file. (deprecated arguments) + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(position)`. They will be removed in a future version. +Instructions for updating: +position is deprecated in favor of the offset argument. + + + + + + + + + + + + + +
Args
+`offset` + +The byte count relative to the whence argument. +
+`whence` + +Valid values for whence are: +0: start of the file (default) +1: relative to the current position of the file +2: relative to the end of file. `offset` is usually negative. +
+ + + +

seekable

+ +View source + + + +Returns True as FileIO supports random access ops of seek()/tell() + + +

size

+ +View source + + + +Returns the size of the file. + + +

tell

+ +View source + + + +Returns the current position in the file. + + +

write

+ +View source + + + +Writes file_content to the file. Appends to the end of the file. + + +

__enter__

+ +View source + + + +Make usable with "with" statement. + + +

__exit__

+ +View source + + + +Make usable with "with" statement. + + +

__iter__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/io/gfile/copy.md b/site/en/api_docs/python/tf/io/gfile/copy.md new file mode 100644 index 00000000000..94ee32e6488 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/copy.md @@ -0,0 +1,93 @@ +description: Copies data from src to dst. + +
+ + +
+ +# tf.io.gfile.copy + + + + + + + + + +Copies data from `src` to `dst`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`src` + +string, name of the file whose contents need to be copied +
+`dst` + +string, name of the file to which to copy to +
+`overwrite` + +boolean, if false it's an error for `dst` to be occupied by an +existing file. +
+ + + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/io/gfile/exists.md b/site/en/api_docs/python/tf/io/gfile/exists.md new file mode 100644 index 00000000000..d809226d245 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/exists.md @@ -0,0 +1,93 @@ +description: Determines whether a path exists or not. + +
+ + +
+ +# tf.io.gfile.exists + + + + + + + + + +Determines whether a path exists or not. + + + + + + + + + + + + + + + + + + + +
+`path` + +string, a path +
+ + + + + + + + + + + +
+True if the path exists, whether it's a file or a directory. +False if the path does not exist and there are no filesystem errors. +
+ + + + + + + + + + + + +
+`errors.OpError` + +Propagates any errors reported by the FileSystem API. +
+ diff --git a/site/en/api_docs/python/tf/io/gfile/glob.md b/site/en/api_docs/python/tf/io/gfile/glob.md new file mode 100644 index 00000000000..ebd1d3cdf27 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/glob.md @@ -0,0 +1,92 @@ +description: Returns a list of files that match the given pattern(s). + +
+ + +
+ +# tf.io.gfile.glob + + + + + + + + + +Returns a list of files that match the given pattern(s). + + + + + + + + + + + + + + + + + + + +
+`pattern` + +string or iterable of strings. The glob pattern(s). +
+ + + + + + + + + + + +
+A list of strings containing filenames that match the given pattern(s). +
+ + + + + + + + + + + + +
+`errors.OpError` + +If there are filesystem / directory listing errors. +
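+
+A minimal sketch (the patterns are hypothetical); a single pattern or an
+iterable of patterns is accepted:
+
+```python
+import tensorflow as tf
+
+csv_files = tf.io.gfile.glob("/tmp/data/*.csv")
+shards = tf.io.gfile.glob(["/tmp/data/train-*.tfrecord",
+                           "/tmp/data/valid-*.tfrecord"])
+print(len(csv_files), len(shards))
+```
+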
+ diff --git a/site/en/api_docs/python/tf/io/gfile/isdir.md b/site/en/api_docs/python/tf/io/gfile/isdir.md new file mode 100644 index 00000000000..db75d8948b5 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/isdir.md @@ -0,0 +1,75 @@ +description: Returns whether the path is a directory or not. + +
+ + +
+ +# tf.io.gfile.isdir + + + + + + + + + +Returns whether the path is a directory or not. + + + + + + + + + + + + + + + + + + + +
+`path` + +string, path to a potential directory +
+ + + + + + + + + + + +
+True, if the path is a directory; False otherwise +
+ diff --git a/site/en/api_docs/python/tf/io/gfile/listdir.md b/site/en/api_docs/python/tf/io/gfile/listdir.md new file mode 100644 index 00000000000..06d88c02528 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/listdir.md @@ -0,0 +1,91 @@ +description: Returns a list of entries contained within a directory. + +
+ + +
+ +# tf.io.gfile.listdir + + + + + + + + + +Returns a list of entries contained within a directory. + + + + + + + + + +The list is in arbitrary order. It does not contain the special entries "." +and "..". + + + + + + + + + + +
+`path` + +string, path to a directory +
+ + + + + + + + + + + +
+[filename1, filename2, ... filenameN] as strings +
+ + + + + + + + + + + +
+errors.NotFoundError if directory doesn't exist +
+ diff --git a/site/en/api_docs/python/tf/io/gfile/makedirs.md b/site/en/api_docs/python/tf/io/gfile/makedirs.md new file mode 100644 index 00000000000..35dc340bf79 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/makedirs.md @@ -0,0 +1,79 @@ +description: Creates a directory and all parent/intermediate directories. + +
+ + +
+ +# tf.io.gfile.makedirs + + + + + + + + + +Creates a directory and all parent/intermediate directories. + + + + + + + + + +It succeeds if path already exists and is writable. + + + + + + + + + + +
+`path` + +string, name of the directory to be created +
+ + + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/io/gfile/mkdir.md b/site/en/api_docs/python/tf/io/gfile/mkdir.md new file mode 100644 index 00000000000..e5b3a6395da --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/mkdir.md @@ -0,0 +1,80 @@ +description: Creates a directory with the name given by path. + +
+ + +
+ +# tf.io.gfile.mkdir + + + + + + + + + +Creates a directory with the name given by `path`. + + + + + + + + + + + + + + + + + + + +
+`path` + +string, name of the directory to be created +
+ + +Notes: The parent directories need to exist. Use tf.io.gfile.makedirs + instead if there is the possibility that the parent dirs don't exist. + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/io/gfile/remove.md b/site/en/api_docs/python/tf/io/gfile/remove.md new file mode 100644 index 00000000000..86f3dd06be5 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/remove.md @@ -0,0 +1,79 @@ +description: Deletes the path located at 'path'. + +
+ + +
+ +# tf.io.gfile.remove + + + + + + + + + +Deletes the path located at 'path'. + + + + + + + + + + + + + + + + + + + +
+`path` + +string, a path +
+ + + + + + + + + + + + +
+`errors.OpError` + +Propagates any errors reported by the FileSystem API. E.g., +`NotFoundError` if the path does not exist. +
+ diff --git a/site/en/api_docs/python/tf/io/gfile/rename.md b/site/en/api_docs/python/tf/io/gfile/rename.md new file mode 100644 index 00000000000..4effee1dfc0 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/rename.md @@ -0,0 +1,93 @@ +description: Rename or move a file / directory. + +
+ + +
+ +# tf.io.gfile.rename + + + + + + + + + +Rename or move a file / directory. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`src` + +string, pathname for a file +
+`dst` + +string, pathname to which the file needs to be moved +
+`overwrite` + +boolean, if false it's an error for `dst` to be occupied by an +existing file. +
+ + + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/io/gfile/rmtree.md b/site/en/api_docs/python/tf/io/gfile/rmtree.md new file mode 100644 index 00000000000..c45033e0d8b --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/rmtree.md @@ -0,0 +1,78 @@ +description: Deletes everything under path recursively. + +
+ + +
+ +# tf.io.gfile.rmtree + + + + + + + + + +Deletes everything under path recursively. + + + + + + + + + + + + + + + + + + + +
+`path` + +string, a path +
+ + + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/io/gfile/stat.md b/site/en/api_docs/python/tf/io/gfile/stat.md new file mode 100644 index 00000000000..d859c183798 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/stat.md @@ -0,0 +1,92 @@ +description: Returns file statistics for a given path. + +
+ + +
+ +# tf.io.gfile.stat + + + + + + + + + +Returns file statistics for a given path. + + + + + + + + + + + + + + + + + + + +
+`path` + +string, path to a file +
+ + + + + + + + + + + +
+FileStatistics struct that contains information about the path +
+ + + + + + + + + + + + +
+`errors.OpError` + +If the operation fails. +
+ diff --git a/site/en/api_docs/python/tf/io/gfile/walk.md b/site/en/api_docs/python/tf/io/gfile/walk.md new file mode 100644 index 00000000000..2d93e79aec4 --- /dev/null +++ b/site/en/api_docs/python/tf/io/gfile/walk.md @@ -0,0 +1,85 @@ +description: Recursive directory tree generator for directories. + +
+ + +
+ +# tf.io.gfile.walk + + + + + + + + + +Recursive directory tree generator for directories. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`top` + +string, a Directory name +
+`topdown` + +bool, Traverse pre order if True, post order if False. +
+`onerror` + +optional handler for errors. Should be a function, it will be +called with the error as argument. Rethrowing the error aborts the walk. +Errors that happen while listing directories are ignored. +
+ + + +#### Yields: + +Each yield is a 3-tuple: the pathname of a directory, followed by lists of +all its subdirectories and leaf files. That is, each yield looks like: +`(dirname, [subdirname, subdirname, ...], [filename, filename, ...])`. +Each item is a string. diff --git a/site/en/api_docs/python/tf/io/is_jpeg.md b/site/en/api_docs/python/tf/io/is_jpeg.md new file mode 100644 index 00000000000..de6eda4513a --- /dev/null +++ b/site/en/api_docs/python/tf/io/is_jpeg.md @@ -0,0 +1,86 @@ +description: Convenience function to check if the 'contents' encodes a JPEG image. + +
+ + +
+ +# tf.io.is_jpeg + + + + + + + + + +Convenience function to check if the 'contents' encodes a JPEG image. + + + + + + + + + + + + + + + + + + + + + + +
+`contents` + +0-D `string`. The encoded image bytes. +
+`name` + +A name for the operation (optional) +
+ + + + + + + + + + + +
+A scalar boolean tensor indicating if 'contents' may be a JPEG image. +is_jpeg is susceptible to false positives. +
+ diff --git a/site/en/api_docs/python/tf/io/match_filenames_once.md b/site/en/api_docs/python/tf/io/match_filenames_once.md new file mode 100644 index 00000000000..e7b007eb64c --- /dev/null +++ b/site/en/api_docs/python/tf/io/match_filenames_once.md @@ -0,0 +1,83 @@ +description: Save the list of files matching pattern, so it is only computed once. + +
+ + +
+ +# tf.io.match_filenames_once + + + + + + + + + +Save the list of files matching pattern, so it is only computed once. + + + + + + + + + +NOTE: The order of the files returned is deterministic. + + + + + + + + + + + + + +
+`pattern` + +A file pattern (glob), or 1D tensor of file patterns. +
+`name` + +A name for the operations (optional). +
+ + + + + + + + + + + +
+A variable that is initialized to the list of files matching the pattern(s). +
+ diff --git a/site/en/api_docs/python/tf/io/matching_files.md b/site/en/api_docs/python/tf/io/matching_files.md new file mode 100644 index 00000000000..f54b8d8b2a7 --- /dev/null +++ b/site/en/api_docs/python/tf/io/matching_files.md @@ -0,0 +1,81 @@ +description: Returns the set of files matching one or more glob patterns. + +
+ + +
+ +# tf.io.matching_files + + + + + + + + + +Returns the set of files matching one or more glob patterns. + + + + + + + + + +Note that this routine only supports wildcard characters in the +basename portion of the pattern, not in the directory portion. +Note also that the order of filenames returned is deterministic. + + + + + + + + + + + + + +
+`pattern` + +A `Tensor` of type `string`. +Shell wildcard pattern(s). Scalar or vector of type string. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
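+#### Example:
+
+A minimal sketch; the `/tmp/match_demo_*` file names are illustrative:
+
+```python
+import tensorflow as tf
+
+for name in ['a.txt', 'b.txt', 'c.csv']:
+  with tf.io.gfile.GFile('/tmp/match_demo_' + name, 'w') as f:
+    f.write('x')
+
+print(tf.io.matching_files('/tmp/match_demo_*.txt').numpy())
+# [b'/tmp/match_demo_a.txt' b'/tmp/match_demo_b.txt']
+```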
+ diff --git a/site/en/api_docs/python/tf/io/parse_example.md b/site/en/api_docs/python/tf/io/parse_example.md new file mode 100644 index 00000000000..2d07387fb2c --- /dev/null +++ b/site/en/api_docs/python/tf/io/parse_example.md @@ -0,0 +1,313 @@ +description: Parses Example protos into a dict of tensors. + +
+ + +
+ +# tf.io.parse_example + + + + + + + + + +Parses `Example` protos into a `dict` of tensors. + + + + + + + +Parses a number of serialized [`Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) +protos given in `serialized`. We refer to `serialized` as a batch with +`batch_size` many entries of individual `Example` protos. + +`example_names` may contain descriptive names for the corresponding serialized +protos. These may be useful for debugging purposes, but they have no effect on +the output. If not `None`, `example_names` must be the same length as +`serialized`. + +This op parses serialized examples into a dictionary mapping keys to `Tensor` +`SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to +`VarLenFeature`, `SparseFeature`, `RaggedFeature`, and `FixedLenFeature` +objects. Each `VarLenFeature` and `SparseFeature` is mapped to a +`SparseTensor`; each `FixedLenFeature` is mapped to a `Tensor`; and each +`RaggedFeature` is mapped to a `RaggedTensor`. + +Each `VarLenFeature` maps to a `SparseTensor` of the specified type +representing a ragged matrix. Its indices are `[batch, index]` where `batch` +identifies the example in `serialized`, and `index` is the value's index in +the list of values associated with that feature and example. + +Each `SparseFeature` maps to a `SparseTensor` of the specified type +representing a Tensor of `dense_shape` `[batch_size] + SparseFeature.size`. +Its `values` come from the feature in the examples with key `value_key`. +A `values[i]` comes from a position `k` in the feature of an example at batch +entry `batch`. This positional information is recorded in `indices[i]` as +`[batch, index_0, index_1, ...]` where `index_j` is the `k-th` value of +the feature in the example at with key SparseFeature.index_key[j]. +In other words, we split the indices (except the first index indicating the +batch entry) of a `SparseTensor` by dimension into different features of the +`Example`. Due to its complexity a `VarLenFeature` should be preferred over a +`SparseFeature` whenever possible. + +Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or +tf.float32 if not specified) and shape `(serialized.size(),) + df.shape`. + +`FixedLenFeature` entries with a `default_value` are optional. With no default +value, we will fail if that `Feature` is missing from any example in +`serialized`. + +Each `FixedLenSequenceFeature` `df` maps to a `Tensor` of the specified type +(or tf.float32 if not specified) and shape +`(serialized.size(), None) + df.shape`. +All examples in `serialized` will be padded with `default_value` along the +second dimension. + +Each `RaggedFeature` maps to a `RaggedTensor` of the specified type. It +is formed by stacking the `RaggedTensor` for each example, where the +`RaggedTensor` for each individual example is constructed using the tensors +specified by `RaggedTensor.values_key` and `RaggedTensor.partition`. See +the tf.io.RaggedFeature documentation for details and examples. 
+ +#### Examples: + + + +For example, if one expects a tf.float32 `VarLenFeature` `ft` and three +serialized `Example`s are provided: + +``` +serialized = [ + features + { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } }, + features + { feature []}, + features + { feature { key: "ft" value { float_list { value: [3.0] } } } +] +``` + +then the output will look like: + +```python +{"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]], + values=[1.0, 2.0, 3.0], + dense_shape=(3, 2)) } +``` + +If instead a `FixedLenSequenceFeature` with `default_value = -1.0` and +`shape=[]` is used then the output will look like: + +```python +{"ft": [[1.0, 2.0], [3.0, -1.0]]} +``` + +Given two `Example` input protos in `serialized`: + +``` +[ + features { + feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } } + feature { key: "gps" value { float_list { value: [] } } } + }, + features { + feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } } + feature { key: "dank" value { int64_list { value: [ 42 ] } } } + feature { key: "gps" value { } } + } +] +``` + +And arguments + +``` +example_names: ["input0", "input1"], +features: { + "kw": VarLenFeature(tf.string), + "dank": VarLenFeature(tf.int64), + "gps": VarLenFeature(tf.float32), +} +``` + +Then the output is a dictionary: + +```python +{ + "kw": SparseTensor( + indices=[[0, 0], [0, 1], [1, 0]], + values=["knit", "big", "emmy"] + dense_shape=[2, 2]), + "dank": SparseTensor( + indices=[[1, 0]], + values=[42], + dense_shape=[2, 1]), + "gps": SparseTensor( + indices=[], + values=[], + dense_shape=[2, 0]), +} +``` + +For dense results in two serialized `Example`s: + +``` +[ + features { + feature { key: "age" value { int64_list { value: [ 0 ] } } } + feature { key: "gender" value { bytes_list { value: [ "f" ] } } } + }, + features { + feature { key: "age" value { int64_list { value: [] } } } + feature { key: "gender" value { bytes_list { value: [ "f" ] } } } + } +] +``` + +#### We can use arguments: + + + +``` +example_names: ["input0", "input1"], +features: { + "age": FixedLenFeature([], dtype=tf.int64, default_value=-1), + "gender": FixedLenFeature([], dtype=tf.string), +} +``` + +And the expected output is: + +```python +{ + "age": [[0], [-1]], + "gender": [["f"], ["f"]], +} +``` + +An alternative to `VarLenFeature` to obtain a `SparseTensor` is +`SparseFeature`. For example, given two `Example` input protos in +`serialized`: + +``` +[ + features { + feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } } + feature { key: "ix" value { int64_list { value: [ 3, 20 ] } } } + }, + features { + feature { key: "val" value { float_list { value: [ 0.0 ] } } } + feature { key: "ix" value { int64_list { value: [ 42 ] } } } + } +] +``` + +And arguments + +``` +example_names: ["input0", "input1"], +features: { + "sparse": SparseFeature( + index_key="ix", value_key="val", dtype=tf.float32, size=100), +} +``` + +Then the output is a dictionary: + +```python +{ + "sparse": SparseTensor( + indices=[[0, 3], [0, 20], [1, 42]], + values=[0.5, -1.0, 0.0] + dense_shape=[2, 100]), +} +``` + +See the tf.io.RaggedFeature documentation for examples showing how +`RaggedFeature` can be used to obtain `RaggedTensor`s. + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A vector (1-D Tensor) of strings, a batch of binary +serialized `Example` protos. +
+`features` + +A `dict` mapping feature keys to `FixedLenFeature`, +`VarLenFeature`, `SparseFeature`, and `RaggedFeature` values. +
+`example_names` + +A vector (1-D Tensor) of strings (optional), the names of +the serialized protos in the batch. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A `dict` mapping feature keys to `Tensor`, `SparseTensor`, and +`RaggedTensor` values. +
+ + + + + + + + + + + + +
+`ValueError` + +if any feature is invalid. +
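+#### End-to-end example:
+
+A small sketch that serializes two `Example` protos and parses them back, mirroring the `VarLenFeature` case above (the feature key `'ft'` is illustrative):
+
+```python
+import tensorflow as tf
+
+def make_example(values):
+  return tf.train.Example(features=tf.train.Features(feature={
+      'ft': tf.train.Feature(float_list=tf.train.FloatList(value=values)),
+  })).SerializeToString()
+
+serialized = tf.constant([make_example([1.0, 2.0]), make_example([3.0])])
+parsed = tf.io.parse_example(
+    serialized, {'ft': tf.io.VarLenFeature(tf.float32)})
+print(parsed['ft'].values.numpy())       # [1. 2. 3.]
+print(parsed['ft'].dense_shape.numpy())  # [2 2]
+```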
+ diff --git a/site/en/api_docs/python/tf/io/parse_sequence_example.md b/site/en/api_docs/python/tf/io/parse_sequence_example.md new file mode 100644 index 00000000000..b84de3679a8 --- /dev/null +++ b/site/en/api_docs/python/tf/io/parse_sequence_example.md @@ -0,0 +1,193 @@ +description: Parses a batch of SequenceExample protos. + +
+ + +
+ +# tf.io.parse_sequence_example + + + + + + + + + +Parses a batch of `SequenceExample` protos. + + + + + + + + + +Parses a vector of serialized +[`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) +protos given in `serialized`. + +This op parses serialized sequence examples into a tuple of dictionaries, +each mapping keys to `Tensor` and `SparseTensor` objects. +The first dictionary contains mappings for keys appearing in +`context_features`, and the second dictionary contains mappings for keys +appearing in `sequence_features`. + +At least one of `context_features` and `sequence_features` must be provided +and non-empty. + +The `context_features` keys are associated with a `SequenceExample` as a +whole, independent of time / frame. In contrast, the `sequence_features` keys +provide a way to access variable-length data within the `FeatureList` section +of the `SequenceExample` proto. While the shapes of `context_features` values +are fixed with respect to frame, the frame dimension (the first dimension) +of `sequence_features` values may vary between `SequenceExample` protos, +and even between `feature_list` keys within the same `SequenceExample`. + +`context_features` contains `VarLenFeature`, `RaggedFeature`, and +`FixedLenFeature` objects. Each `VarLenFeature` is mapped to a +`SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each +`FixedLenFeature` is mapped to a `Tensor`, of the specified type, shape, and +default value. + +`sequence_features` contains `VarLenFeature`, `RaggedFeature`, and +`FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a +`SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor; and +each `FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified +type. The shape will be `(B,T,) + df.dense_shape` for +`FixedLenSequenceFeature` `df`, where `B` is the batch size, and `T` is the +length of the associated `FeatureList` in the `SequenceExample`. For instance, +`FixedLenSequenceFeature([])` yields a scalar 2-D `Tensor` of static shape +`[None, None]` and dynamic shape `[B, T]`, while +`FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 3-D matrix `Tensor` +of static shape `[None, None, k]` and dynamic shape `[B, T, k]`. + +Like the input, the resulting output tensors have a batch dimension. This +means that the original per-example shapes of `VarLenFeature`s and +`FixedLenSequenceFeature`s can be lost. To handle that situation, this op also +provides dicts of shape tensors as part of the output. There is one dict for +the context features, and one for the feature_list features. Context features +of type `FixedLenFeature`s will not be present, since their shapes are already +known by the caller. In situations where the input 'FixedLenFeature`s are of +different lengths across examples, the shorter examples will be padded with +default datatype values: 0 for numeric types, and the empty string for string +types. + +Each `SparseTensor` corresponding to `sequence_features` represents a ragged +vector. Its indices are `[time, index]`, where `time` is the `FeatureList` +entry and `index` is the value's index in the list of values associated with +that time. + +`FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature` +entries with `allow_missing=True` are optional; otherwise, we will fail if +that `Feature` or `FeatureList` is missing from any example in `serialized`. 
+ +`example_name` may contain a descriptive name for the corresponding serialized +proto. This may be useful for debugging purposes, but it has no effect on the +output. If not `None`, `example_name` must be a scalar. + + + + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A vector (1-D Tensor) of type string containing binary +serialized `SequenceExample` protos. +
+`context_features` + +A `dict` mapping feature keys to `FixedLenFeature` or +`VarLenFeature` or `RaggedFeature` values. These features are associated +with a `SequenceExample` as a whole. +
+`sequence_features` + +A `dict` mapping feature keys to +`FixedLenSequenceFeature` or `VarLenFeature` or `RaggedFeature` values. +These features are associated with data within the `FeatureList` section +of the `SequenceExample` proto. +
+`example_names` + +A vector (1-D Tensor) of strings (optional), the names of the +serialized protos. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A tuple of three `dict`s, each mapping keys to `Tensor`s, +`SparseTensor`s, and `RaggedTensor`s. The first dict contains the context +key/values, the second dict contains the feature_list key/values, and the +final dict contains the lengths of any dense feature_list features. +
+ + + + + + + + + + + + +
+`ValueError` + +if any feature is invalid. +
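+#### End-to-end example:
+
+A small sketch (the feature keys `'length'` and `'tokens'` are illustrative); note the three-tuple return described above:
+
+```python
+import tensorflow as tf
+
+def make_sequence_example(tokens):
+  return tf.train.SequenceExample(
+      context=tf.train.Features(feature={
+          'length': tf.train.Feature(
+              int64_list=tf.train.Int64List(value=[len(tokens)])),
+      }),
+      feature_lists=tf.train.FeatureLists(feature_list={
+          'tokens': tf.train.FeatureList(feature=[
+              tf.train.Feature(int64_list=tf.train.Int64List(value=[t]))
+              for t in tokens]),
+      })).SerializeToString()
+
+serialized = tf.constant([make_sequence_example([1, 2]),
+                          make_sequence_example([3, 4, 5])])
+context, sequences, lengths = tf.io.parse_sequence_example(
+    serialized,
+    context_features={'length': tf.io.FixedLenFeature([], tf.int64)},
+    sequence_features={'tokens': tf.io.FixedLenSequenceFeature([], tf.int64)})
+print(context['length'].numpy())    # [2 3]
+print(sequences['tokens'].numpy())  # [[1 2 0] [3 4 5]], padded with zeros
+```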
+ diff --git a/site/en/api_docs/python/tf/io/parse_single_example.md b/site/en/api_docs/python/tf/io/parse_single_example.md new file mode 100644 index 00000000000..758415ac3e6 --- /dev/null +++ b/site/en/api_docs/python/tf/io/parse_single_example.md @@ -0,0 +1,116 @@ +description: Parses a single Example proto. + +
+ + +
+ +# tf.io.parse_single_example + + + + + + + + + +Parses a single `Example` proto. + + + + + + + +Similar to `parse_example`, except: + +For dense tensors, the returned `Tensor` is identical to the output of +`parse_example`, except there is no batch dimension, the output shape is the +same as the shape given in `dense_shape`. + +For `SparseTensor`s, the first (batch) column of the indices matrix is removed +(the indices matrix is a column vector), the values vector is unchanged, and +the first (`batch_size`) entry of the shape vector is removed (it is now a +single element vector). + +One might see performance advantages by batching `Example` protos with +`parse_example` instead of using this function directly. + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A scalar string Tensor, a single serialized Example. +
+`features` + +A `dict` mapping feature keys to `FixedLenFeature` or +`VarLenFeature` values. +
+`example_names` + +(Optional) A scalar string Tensor, the associated name. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A `dict` mapping feature keys to `Tensor` and `SparseTensor` values. +
+ + + + + + + + + + + + +
+`ValueError` + +if any feature is invalid. +
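+#### Example:
+
+A minimal sketch (the feature keys `'age'` and `'label'` are illustrative):
+
+```python
+import tensorflow as tf
+
+example = tf.train.Example(features=tf.train.Features(feature={
+    'age': tf.train.Feature(int64_list=tf.train.Int64List(value=[7])),
+    'label': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'cat'])),
+})).SerializeToString()
+
+parsed = tf.io.parse_single_example(example, {
+    'age': tf.io.FixedLenFeature([], tf.int64),
+    'label': tf.io.FixedLenFeature([], tf.string),
+})
+print(parsed['age'].numpy())    # 7
+print(parsed['label'].numpy())  # b'cat'
+```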
+ diff --git a/site/en/api_docs/python/tf/io/parse_single_sequence_example.md b/site/en/api_docs/python/tf/io/parse_single_sequence_example.md new file mode 100644 index 00000000000..232609462a5 --- /dev/null +++ b/site/en/api_docs/python/tf/io/parse_single_sequence_example.md @@ -0,0 +1,184 @@ +description: Parses a single SequenceExample proto. + +
+ + +
+ +# tf.io.parse_single_sequence_example + + + + + + + + + +Parses a single `SequenceExample` proto. + + + + + + + + + +Parses a single serialized [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) +proto given in `serialized`. + +This op parses a serialized sequence example into a tuple of dictionaries, +each mapping keys to `Tensor` and `SparseTensor` objects. +The first dictionary contains mappings for keys appearing in +`context_features`, and the second dictionary contains mappings for keys +appearing in `sequence_features`. + +At least one of `context_features` and `sequence_features` must be provided +and non-empty. + +The `context_features` keys are associated with a `SequenceExample` as a +whole, independent of time / frame. In contrast, the `sequence_features` keys +provide a way to access variable-length data within the `FeatureList` section +of the `SequenceExample` proto. While the shapes of `context_features` values +are fixed with respect to frame, the frame dimension (the first dimension) +of `sequence_features` values may vary between `SequenceExample` protos, +and even between `feature_list` keys within the same `SequenceExample`. + +`context_features` contains `VarLenFeature`, `RaggedFeature`, and +`FixedLenFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`; +each `RaggedFeature` is mapped to a `RaggedTensor`; and each `FixedLenFeature` +is mapped to a `Tensor`, of the specified type, shape, and default value. + +`sequence_features` contains `VarLenFeature`, `RaggedFeature`, and +`FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a +`SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each +`FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified type. +The shape will be `(T,) + df.dense_shape` for `FixedLenSequenceFeature` `df`, +where `T` is the length of the associated `FeatureList` in the +`SequenceExample`. For instance, `FixedLenSequenceFeature([])` yields a scalar +1-D `Tensor` of static shape `[None]` and dynamic shape `[T]`, while +`FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 2-D matrix `Tensor` +of static shape `[None, k]` and dynamic shape `[T, k]`. + +Each `SparseTensor` corresponding to `sequence_features` represents a ragged +vector. Its indices are `[time, index]`, where `time` is the `FeatureList` +entry and `index` is the value's index in the list of values associated with +that time. + +`FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature` +entries with `allow_missing=True` are optional; otherwise, we will fail if +that `Feature` or `FeatureList` is missing from any example in `serialized`. + +`example_name` may contain a descriptive name for the corresponding serialized +proto. This may be useful for debugging purposes, but it has no effect on the +output. If not `None`, `example_name` must be a scalar. + +Note that the batch version of this function, `tf.parse_sequence_example`, +is written for better memory efficiency and will be faster on large +`SequenceExample`s. + + + + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A scalar (0-D Tensor) of type string, a single binary +serialized `SequenceExample` proto. +
+`context_features` + +A `dict` mapping feature keys to `FixedLenFeature` or +`VarLenFeature` or `RaggedFeature` values. These features are associated +with a `SequenceExample` as a whole. +
+`sequence_features` + +A `dict` mapping feature keys to +`FixedLenSequenceFeature` or `VarLenFeature` or `RaggedFeature` values. +These features are associated with data within the `FeatureList` section +of the `SequenceExample` proto. +
+`example_name` + +A scalar (0-D Tensor) of strings (optional), the name of +the serialized proto. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A tuple of two `dict`s, each mapping keys to `Tensor`s, `SparseTensor`s, +and `RaggedTensor`s. + +* The first dict contains the context key/values. +* The second dict contains the feature_list key/values. +
+ + + + + + + + + + + + +
+`ValueError` + +if any feature is invalid. +
+ diff --git a/site/en/api_docs/python/tf/io/parse_tensor.md b/site/en/api_docs/python/tf/io/parse_tensor.md new file mode 100644 index 00000000000..014b8001b78 --- /dev/null +++ b/site/en/api_docs/python/tf/io/parse_tensor.md @@ -0,0 +1,87 @@ +description: Transforms a serialized tensorflow.TensorProto proto into a Tensor. + +
+ + +
+ +# tf.io.parse_tensor + + + + + + + + + +Transforms a serialized tensorflow.TensorProto proto into a Tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A `Tensor` of type `string`. +A scalar string containing a serialized TensorProto proto. +
+`out_type` + +A tf.DType. +The type of the serialized tensor. The provided type must match the +type of the serialized tensor and no implicit conversion will take place. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
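+#### Example:
+
+A round-trip sketch with tf.io.serialize_tensor; `out_type` must match the dtype that was serialized:
+
+```python
+import tensorflow as tf
+
+t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+serialized = tf.io.serialize_tensor(t)                          # scalar string tensor
+restored = tf.io.parse_tensor(serialized, out_type=tf.float32)
+print(restored.numpy())                                         # [[1. 2.] [3. 4.]]
+```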
+ diff --git a/site/en/api_docs/python/tf/io/read_file.md b/site/en/api_docs/python/tf/io/read_file.md new file mode 100644 index 00000000000..52759b7d405 --- /dev/null +++ b/site/en/api_docs/python/tf/io/read_file.md @@ -0,0 +1,77 @@ +description: Reads and outputs the entire contents of the input filename. + +
+ + +
+ +# tf.io.read_file + + + + + + + + + +Reads and outputs the entire contents of the input filename. + + + + + + + + + + + + + + + + + + + + + + +
+`filename` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
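+#### Example:
+
+A minimal sketch; the `/tmp/read_file_demo.txt` path is illustrative:
+
+```python
+import tensorflow as tf
+
+tf.io.write_file('/tmp/read_file_demo.txt', 'hello world')
+print(tf.io.read_file('/tmp/read_file_demo.txt').numpy())  # b'hello world'
+```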
+ diff --git a/site/en/api_docs/python/tf/io/serialize_many_sparse.md b/site/en/api_docs/python/tf/io/serialize_many_sparse.md new file mode 100644 index 00000000000..20992cab713 --- /dev/null +++ b/site/en/api_docs/python/tf/io/serialize_many_sparse.md @@ -0,0 +1,104 @@ +description: Serialize N-minibatch SparseTensor into an [N, 3] Tensor. + +
+ + +
+ +# tf.io.serialize_many_sparse + + + + + + + + + +Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`. + + + + + + + +The `SparseTensor` must have rank `R` greater than 1, and the first dimension +is treated as the minibatch dimension. Elements of the `SparseTensor` +must be sorted in increasing order of this first dimension. The serialized +`SparseTensor` objects going into each row of the output `Tensor` will have +rank `R-1`. + +The minibatch size `N` is extracted from `sparse_shape[0]`. + + + + + + + + + + + + + + + + +
+`sp_input` + +The input rank `R` `SparseTensor`. +
+`out_type` + +The `dtype` to use for serialization. +
+`name` + +A name prefix for the returned tensors (optional). +
+ + + + + + + + + + + +
+A matrix (2-D `Tensor`) with `N` rows and `3` columns. Each column +represents serialized `SparseTensor`'s indices, values, and shape +(respectively). +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
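+#### Example:
+
+A minimal sketch; each row of the result holds the serialized indices, values, and shape of one minibatch entry (tf.io.deserialize_many_sparse reverses the operation):
+
+```python
+import tensorflow as tf
+
+# A minibatch of two sparse rows; the first dimension is the batch dimension.
+sp = tf.sparse.SparseTensor(indices=[[0, 0], [0, 2], [1, 1]],
+                            values=[1.0, 2.0, 3.0],
+                            dense_shape=[2, 4])
+serialized = tf.io.serialize_many_sparse(sp)
+print(serialized.shape)  # (2, 3)
+```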
+ diff --git a/site/en/api_docs/python/tf/io/serialize_sparse.md b/site/en/api_docs/python/tf/io/serialize_sparse.md new file mode 100644 index 00000000000..8a1dbcd2b04 --- /dev/null +++ b/site/en/api_docs/python/tf/io/serialize_sparse.md @@ -0,0 +1,96 @@ +description: Serialize a SparseTensor into a 3-vector (1-D Tensor) object. + +
+ + +
+ +# tf.io.serialize_sparse + + + + + + + + + +Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object. + + + + + + + + + + + + + + + + + + + + + + + +
+`sp_input` + +The input `SparseTensor`. +
+`out_type` + +The `dtype` to use for serialization. +
+`name` + +A name prefix for the returned tensors (optional). +
+ + + + + + + + + + + +
+A 3-vector (1-D `Tensor`), with each column representing the serialized +`SparseTensor`'s indices, values, and shape (respectively). +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/io/serialize_tensor.md b/site/en/api_docs/python/tf/io/serialize_tensor.md new file mode 100644 index 00000000000..228791926c2 --- /dev/null +++ b/site/en/api_docs/python/tf/io/serialize_tensor.md @@ -0,0 +1,77 @@ +description: Transforms a Tensor into a serialized TensorProto proto. + +
+ + +
+ +# tf.io.serialize_tensor + + + + + + + + + +Transforms a Tensor into a serialized TensorProto proto. + + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. A Tensor of type `T`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/io/write_file.md b/site/en/api_docs/python/tf/io/write_file.md new file mode 100644 index 00000000000..6c15e71367f --- /dev/null +++ b/site/en/api_docs/python/tf/io/write_file.md @@ -0,0 +1,87 @@ +description: Writes contents to the file at input filename. Creates file and recursively + +
+ + +
+ +# tf.io.write_file + + + + + + + + + +Writes contents to the file at input filename. Creates file and recursively + + + + + + + + + +creates directory if not existing. + + + + + + + + + + + + + + + + +
+`filename` + +A `Tensor` of type `string`. +scalar. The name of the file to which we write the contents. +
+`contents` + +A `Tensor` of type `string`. +scalar. The content to be written to the output file. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
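+#### Example:
+
+A minimal sketch; the nested `/tmp/...` path is illustrative, and its parent directories are created as needed:
+
+```python
+import tensorflow as tf
+
+tf.io.write_file('/tmp/write_file_demo/nested/out.txt', 'some contents')
+print(tf.io.read_file('/tmp/write_file_demo/nested/out.txt').numpy())
+# b'some contents'
+```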
+ diff --git a/site/en/api_docs/python/tf/io/write_graph.md b/site/en/api_docs/python/tf/io/write_graph.md new file mode 100644 index 00000000000..f2dd76ab1cb --- /dev/null +++ b/site/en/api_docs/python/tf/io/write_graph.md @@ -0,0 +1,112 @@ +description: Writes a graph proto to a file. + +
+ + +
+ +# tf.io.write_graph + + + + + + + + + +Writes a graph proto to a file. + + + + + + + + + +The graph is written as a text proto unless `as_text` is `False`. + +```python +v = tf.Variable(0, name='my_variable') +sess = tf.compat.v1.Session() +tf.io.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt') +``` + +or + +```python +v = tf.Variable(0, name='my_variable') +sess = tf.compat.v1.Session() +tf.io.write_graph(sess.graph, '/tmp/my-model', 'train.pbtxt') +``` + + + + + + + + + + + + + + + + + + + +
+`graph_or_graph_def` + +A `Graph` or a `GraphDef` protocol buffer. +
+`logdir` + +Directory where to write the graph. This can refer to remote +filesystems, such as Google Cloud Storage (GCS). +
+`name` + +Filename for the graph. +
+`as_text` + +If `True`, writes the graph as an ASCII proto. +
+ + + + + + + + + + + +
+The path of the output proto file. +
+ diff --git a/site/en/api_docs/python/tf/is_tensor.md b/site/en/api_docs/python/tf/is_tensor.md new file mode 100644 index 00000000000..e7595805529 --- /dev/null +++ b/site/en/api_docs/python/tf/is_tensor.md @@ -0,0 +1,88 @@ +description: Checks whether x is a tensor or "tensor-like". + +
+ + +
+ +# tf.is_tensor + + + + + + + + + +Checks whether `x` is a tensor or "tensor-like". + + + + + + + + + +If `is_tensor(x)` returns `True`, it is safe to assume that `x` is a tensor or +can be converted to a tensor using `ops.convert_to_tensor(x)`. + +#### Usage example: + + + +``` +>>> tf.is_tensor(tf.constant([[1,2,3],[4,5,6],[7,8,9]])) +True +>>> tf.is_tensor("Hello World") +False +``` + + + + + + + + + + +
+`x` + +A python object to check. +
+ + + + + + + + + + + +
+`True` if `x` is a tensor or "tensor-like", `False` if not. +
+ diff --git a/site/en/api_docs/python/tf/keras.md b/site/en/api_docs/python/tf/keras.md new file mode 100644 index 00000000000..be2926b402c --- /dev/null +++ b/site/en/api_docs/python/tf/keras.md @@ -0,0 +1,77 @@ +description: Implementation of the Keras API meant to be a high-level API for TensorFlow. + +
+ + + +
+ +# Module: tf.keras + + + + + + + + + +Implementation of the Keras API meant to be a high-level API for TensorFlow. + + +Detailed documentation and user guides are available at +[tensorflow.org](https://www.tensorflow.org/guide/keras). + +## Modules + +[`activations`](../tf/keras/activations.md) module: Built-in activation functions. + +[`applications`](../tf/keras/applications.md) module: Keras Applications are canned architectures with pre-trained weights. + +[`backend`](../tf/keras/backend.md) module: Keras backend API. + +[`callbacks`](../tf/keras/callbacks.md) module: Callbacks: utilities called at certain points during model training. + +[`constraints`](../tf/keras/constraints.md) module: Constraints: functions that impose constraints on weight values. + +[`datasets`](../tf/keras/datasets.md) module: Public API for tf.keras.datasets namespace. + +[`estimator`](../tf/keras/estimator.md) module: Keras estimator API. + +[`experimental`](../tf/keras/experimental.md) module: Public API for tf.keras.experimental namespace. + +[`initializers`](../tf/keras/initializers.md) module: Keras initializer serialization / deserialization. + +[`layers`](../tf/keras/layers.md) module: Keras layers API. + +[`losses`](../tf/keras/losses.md) module: Built-in loss functions. + +[`metrics`](../tf/keras/metrics.md) module: Built-in metrics. + +[`mixed_precision`](../tf/keras/mixed_precision.md) module: Public API for tf.keras.mixed_precision namespace. + +[`models`](../tf/keras/models.md) module: Code for model cloning, plus model-related API entries. + +[`optimizers`](../tf/keras/optimizers.md) module: Built-in optimizer classes. + +[`preprocessing`](../tf/keras/preprocessing.md) module: Keras data preprocessing utils. + +[`regularizers`](../tf/keras/regularizers.md) module: Built-in regularizers. + +[`utils`](../tf/keras/utils.md) module: Public API for tf.keras.utils namespace. + +[`wrappers`](../tf/keras/wrappers.md) module: Public API for tf.keras.wrappers namespace. + +## Classes + +[`class Model`](../tf/keras/Model.md): `Model` groups layers into an object with training and inference features. + +[`class Sequential`](../tf/keras/Sequential.md): `Sequential` groups a linear stack of layers into a tf.keras.Model. + +## Functions + +[`Input(...)`](../tf/keras/Input.md): `Input()` is used to instantiate a Keras tensor. + +## Other Members + +* `__version__ = '2.3.0-tf'` diff --git a/site/en/api_docs/python/tf/keras/Input.md b/site/en/api_docs/python/tf/keras/Input.md new file mode 100644 index 00000000000..7ce18bee00a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/Input.md @@ -0,0 +1,209 @@ +description: Input() is used to instantiate a Keras tensor. + +
+ + +
+ +# tf.keras.Input + + + + + + + + + +`Input()` is used to instantiate a Keras tensor. + + + + + + + + + +A Keras tensor is a TensorFlow symbolic tensor object, +which we augment with certain attributes that allow us to build a Keras model +just by knowing the inputs and outputs of the model. + +For instance, if `a`, `b` and `c` are Keras tensors, +it becomes possible to do: +`model = Model(input=[a, b], output=c)` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A shape tuple (integers), not including the batch size. +For instance, `shape=(32,)` indicates that the expected input +will be batches of 32-dimensional vectors. Elements of this tuple +can be None; 'None' elements represent dimensions where the shape is +not known. +
+`batch_size` + +optional static batch size (integer). +
+`name` + +An optional name string for the layer. +Should be unique in a model (do not reuse the same name twice). +It will be autogenerated if it isn't provided. +
+`dtype` + +The data type expected by the input, as a string +(`float32`, `float64`, `int32`...) +
+`sparse` + +A boolean specifying whether the placeholder to be created is +sparse. Only one of 'ragged' and 'sparse' can be True. +
+`tensor` + +Optional existing tensor to wrap into the `Input` layer. +If set, the layer will not create a placeholder tensor. +
+`ragged` + +A boolean specifying whether the placeholder to be created is +ragged. Only one of 'ragged' and 'sparse' can be True. In this case, +values of 'None' in the 'shape' argument represent ragged dimensions. +For more information about RaggedTensors, see +https://www.tensorflow.org/guide/ragged_tensors. +
+`**kwargs` + +deprecated arguments support. Supports `batch_shape` and +`batch_input_shape`. +
+ + + + + + + + + + + +
+A `tensor`. +
+ + + +#### Example: + + + +```python +# this is a logistic regression in Keras +x = Input(shape=(32,)) +y = Dense(16, activation='softmax')(x) +model = Model(x, y) +``` + +Note that even if eager execution is enabled, +`Input` produces a symbolic tensor (i.e. a placeholder). +This symbolic tensor can be used with other +TensorFlow ops, as such: + +```python +x = Input(shape=(32,)) +y = tf.square(x) +``` + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If both `sparse` and `ragged` are provided. +
+`ValueError` + +If both `shape` and (`batch_input_shape` or `batch_shape`) are +provided. +
+`ValueError` + +If both `shape` and `tensor` are None. +
+`ValueError` + +if any unrecognized parameters are provided. +
+ diff --git a/site/en/api_docs/python/tf/keras/Model.md b/site/en/api_docs/python/tf/keras/Model.md new file mode 100644 index 00000000000..9e73f88f9e6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/Model.md @@ -0,0 +1,2314 @@ +description: Model groups layers into an object with training and inference features. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.keras.Model + + + + + + + + + +`Model` groups layers into an object with training and inference features. + + + + + + + + + +There are two ways to instantiate a `Model`: + +1 - With the "functional API", where you start from `Input`, +you chain layer calls to specify the model's forward pass, +and finally you create your model from inputs and outputs: + +```python +import tensorflow as tf + +inputs = tf.keras.Input(shape=(3,)) +x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs) +outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x) +model = tf.keras.Model(inputs=inputs, outputs=outputs) +``` + +2 - By subclassing the `Model` class: in that case, you should define your +layers in `__init__` and you should implement the model's forward pass +in `call`. + +```python +import tensorflow as tf + +class MyModel(tf.keras.Model): + + def __init__(self): + super(MyModel, self).__init__() + self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu) + self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax) + + def call(self, inputs): + x = self.dense1(inputs) + return self.dense2(x) + +model = MyModel() +``` + +If you subclass `Model`, you can optionally have +a `training` argument (boolean) in `call`, which you can use to specify +a different behavior in training and inference: + +```python +import tensorflow as tf + +class MyModel(tf.keras.Model): + + def __init__(self): + super(MyModel, self).__init__() + self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu) + self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax) + self.dropout = tf.keras.layers.Dropout(0.5) + + def call(self, inputs, training=False): + x = self.dense1(inputs) + if training: + x = self.dropout(x, training=training) + return self.dense2(x) + +model = MyModel() +``` + +Once the model is created, you can config the model with losses and metrics +with `model.compile()`, train the model with `model.fit()`, or use the model +to do prediction with `model.predict()`. + +Checkout [guide](https://www.tensorflow.org/guide/keras/overview) for +additional details. + + + + + + + + + + + + + + + + + + + + + + + + +
+`distribute_strategy` + +The tf.distribute.Strategy this model was created under. +
+`layers` + + +
+`metrics_names` + +Returns the model's display labels for all outputs. + +Note: `metrics_names` are available only after a keras.Model has been +trained/evaluated on actual data. + +``` +>>> inputs = tf.keras.layers.Input(shape=(3,)) +>>> outputs = tf.keras.layers.Dense(2)(inputs) +>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs) +>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"]) +>>> model.metrics_names +[] +``` + +``` +>>> x = np.random.random((2, 3)) +>>> y = np.random.randint(0, 2, (2, 2)) +>>> _ = model.fit(x, y, verbose=0) +>>> model.metrics_names +['loss', 'mae'] +``` + +``` +>>> inputs = tf.keras.layers.Input(shape=(3,)) +>>> d = tf.keras.layers.Dense(2, name='out') +>>> output_1 = d(inputs) +>>> output_2 = d(inputs) +>>> model = tf.keras.models.Model( +... inputs=inputs, outputs=[output_1, output_2]) +>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"]) +>>> _ = model.fit(x, (y, y), verbose=0) +>>> model.metrics_names +['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae', +'out_1_acc'] +``` +
+`run_eagerly` + +Settable attribute indicating whether the model should run eagerly. + +Running eagerly means that your model will be run step by step, +like Python code. Your model might run slower, but it should become easier +for you to debug it by stepping into individual layer calls. + +By default, we will attempt to compile your model to a static graph to +deliver the best execution performance. +
+`state_updates` + +Returns the `updates` from all layers that are stateful. + +This is useful for separating training updates and +state updates, e.g. when we need to update a layer's internal state +during prediction. +
+ + + +## Methods + +

compile

+ +View source + + + +Configures the model for training. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`optimizer` + +String (name of optimizer) or optimizer instance. +See tf.keras.optimizers. +
+`loss` + +String (name of objective function), objective function or +tf.keras.losses.Loss instance. See tf.keras.losses. +An objective function is any callable with the signature +`loss = fn(y_true, y_pred)`, where +y_true = ground truth values with shape = `[batch_size, d0, .. dN]`, +except sparse loss functions such as sparse categorical crossentropy +where shape = `[batch_size, d0, .. dN-1]`. +y_pred = predicted values with shape = `[batch_size, d0, .. dN]`. +It returns a weighted loss float tensor. +If a custom `Loss` instance is used and reduction is set to NONE, +return value has the shape [batch_size, d0, .. dN-1] ie. per-sample +or per-timestep loss values; otherwise, it is a scalar. +If the model has multiple outputs, you can use a different loss on +each output by passing a dictionary or a list of losses. The loss +value that will be minimized by the model will then be the sum of +all individual losses. +
+`metrics` + +List of metrics to be evaluated by the model during training +and testing. +Each of this can be a string (name of a built-in function), function +or a tf.keras.metrics.Metric instance. See tf.keras.metrics. +Typically you will use `metrics=['accuracy']`. A function is any +callable with the signature `result = fn(y_true, y_pred)`. +To specify different metrics for different outputs of a +multi-output model, you could also pass a dictionary, such as +`metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}`. +You can also pass a list (len = len(outputs)) of lists of metrics +such as `metrics=[['accuracy'], ['accuracy', 'mse']]` or +`metrics=['accuracy', ['accuracy', 'mse']]`. +When you pass the strings 'accuracy' or 'acc', we convert this to +one of tf.keras.metrics.BinaryAccuracy, +tf.keras.metrics.CategoricalAccuracy, +tf.keras.metrics.SparseCategoricalAccuracy based on the loss +function used and the model output shape. We do a similar conversion +for the strings 'crossentropy' and 'ce' as well. +
+`loss_weights` + +Optional list or dictionary specifying scalar +coefficients (Python floats) to weight the loss contributions +of different model outputs. +The loss value that will be minimized by the model +will then be the *weighted sum* of all individual losses, +weighted by the `loss_weights` coefficients. +If a list, it is expected to have a 1:1 mapping +to the model's outputs. If a dict, it is expected to map +output names (strings) to scalar coefficients. +
+`sample_weight_mode` + +If you need to do timestep-wise +sample weighting (2D weights), set this to `"temporal"`. +`None` defaults to sample-wise weights (1D). +If the model has multiple outputs, you can use a different +`sample_weight_mode` on each output by passing a +dictionary or a list of modes. +
+`weighted_metrics` + +List of metrics to be evaluated and weighted +by sample_weight or class_weight during training and testing. +
+`**kwargs` + +Any additional arguments. For eager execution, pass +`run_eagerly=True`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid arguments for +`optimizer`, `loss`, `metrics` or `sample_weight_mode`. +
+ + + +
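+#### Example:
+
+A minimal, illustrative compile/fit/evaluate sketch on random data (layer sizes and hyperparameters are arbitrary):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
+    tf.keras.layers.Dense(1),
+])
+model.compile(optimizer='adam', loss='mse', metrics=['mae'])
+
+x = np.random.random((32, 4)).astype('float32')
+y = np.random.random((32, 1)).astype('float32')
+model.fit(x, y, epochs=2, batch_size=8, verbose=0)
+print(model.evaluate(x, y, verbose=0))  # [loss, mae]
+```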

evaluate

+ +View source + + + +Returns the loss value & metrics values for the model in test mode. + +Computation is done in batches. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, if +the model has named inputs. - A tf.data dataset. - A generator or +keras.utils.Sequence instance. A more detailed description of +unpacking behavior for iterator types (Dataset, generator, Sequence) +is given in the `Unpacking behavior for iterator-like inputs` section +of Model.fit. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). If +`x` is a dataset, generator or keras.utils.Sequence instance, `y` +should not be specified (since targets will be obtained from the +iterator/dataset). +
+`batch_size` + +Integer or `None`. Number of samples per gradient update. If +unspecified, `batch_size` will default to 32. Do not specify the +`batch_size` if your data is in the form of a dataset, generators, +or keras.utils.Sequence instances (since they generate batches). +
+`verbose` + +0 or 1. Verbosity mode. 0 = silent, 1 = progress bar. +
+`sample_weight` + +Optional Numpy array of weights for the test samples, +used for weighting the loss function. You can either pass a flat (1D) +Numpy array with the same length as the input samples +(1:1 mapping between weights and samples), or in the case of +temporal data, you can pass a 2D array with shape `(samples, +sequence_length)`, to apply a different weight to every timestep +of every sample. In this case you should make sure to specify +`sample_weight_mode="temporal"` in `compile()`. This argument is +not supported when `x` is a dataset, instead pass sample weights +as the third element of `x`. +
+`steps` + +Integer or `None`. Total number of steps (batches of samples) +before declaring the evaluation round finished. Ignored with the +default value of `None`. If x is a tf.data dataset and `steps` is +None, 'evaluate' will run until the dataset is exhausted. This +argument is not supported with array inputs. +
+`callbacks` + +List of keras.callbacks.Callback instances. List of +callbacks to apply during evaluation. See +[callbacks](/api_docs/python/tf/keras/callbacks). +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. If unspecified, +`max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up when using process-based +threading. If unspecified, `workers` will default to 1. If 0, will +execute the generator on the main thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to the +generator as they can't be passed easily to children processes. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + +See the discussion of `Unpacking behavior for iterator-like inputs` for +Model.fit. + + + + + + + + + +
Returns
+Scalar test loss (if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +in case of invalid arguments. +
+ + + +

evaluate_generator

+ +View source + + + +Evaluates the model on a data generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.evaluate, which supports generators. + +#### DEPRECATED: + +Model.evaluate now supports generators, so there is no longer any need +to use this endpoint. + + +

fit

+ +View source + + + +Trains the model for a fixed number of epochs (iterations on a dataset). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, +if the model has named inputs. +- A tf.data dataset. Should return a tuple +of either `(inputs, targets)` or +`(inputs, targets, sample_weights)`. +- A generator or keras.utils.Sequence returning `(inputs, targets)` +or `(inputs, targets, sample_weights)`. +A more detailed description of unpacking behavior for iterator types +(Dataset, generator, Sequence) is given below. +
+`y` + +Target data. Like the input data `x`, +it could be either Numpy array(s) or TensorFlow tensor(s). +It should be consistent with `x` (you cannot have Numpy inputs and +tensor targets, or inversely). If `x` is a dataset, generator, +or keras.utils.Sequence instance, `y` should +not be specified (since targets will be obtained from `x`). +
+`batch_size` + +Integer or `None`. +Number of samples per gradient update. +If unspecified, `batch_size` will default to 32. +Do not specify the `batch_size` if your data is in the +form of datasets, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`epochs` + +Integer. Number of epochs to train the model. +An epoch is an iteration over the entire `x` and `y` +data provided. +Note that in conjunction with `initial_epoch`, +`epochs` is to be understood as "final epoch". +The model is not trained for a number of iterations +given by `epochs`, but merely until the epoch +of index `epochs` is reached. +
+`verbose` + +0, 1, or 2. Verbosity mode. +0 = silent, 1 = progress bar, 2 = one line per epoch. +Note that the progress bar is not particularly useful when +logged to a file, so verbose=2 is recommended when not running +interactively (e.g., in a production environment). +
+`callbacks` + +List of keras.callbacks.Callback instances. +List of callbacks to apply during training. +See tf.keras.callbacks. +
+`validation_split` + +Float between 0 and 1. +Fraction of the training data to be used as validation data. +The model will set apart this fraction of the training data, +will not train on it, and will evaluate +the loss and any model metrics +on this data at the end of each epoch. +The validation data is selected from the last samples +in the `x` and `y` data provided, before shuffling. This argument is +not supported when `x` is a dataset, generator or +keras.utils.Sequence instance. +
+`validation_data` + +Data on which to evaluate +the loss and any model metrics at the end of each epoch. +The model will not be trained on this data. +`validation_data` will override `validation_split`. +`validation_data` could be: +- tuple `(x_val, y_val)` of Numpy arrays or tensors +- tuple `(x_val, y_val, val_sample_weights)` of Numpy arrays +- dataset +For the first two cases, `batch_size` must be provided. +For the last case, `validation_steps` could be provided. +Note that `validation_data` does not support all the data types that +are supported in `x`, eg, dict, generator or keras.utils.Sequence. +
+`shuffle` + +Boolean (whether to shuffle the training data +before each epoch) or str (for 'batch'). This argument is ignored +when `x` is a generator. 'batch' is a special option for dealing +with the limitations of HDF5 data; it shuffles in batch-sized +chunks. Has no effect when `steps_per_epoch` is not `None`. +
+`class_weight` + +Optional dictionary mapping class indices (integers) +to a weight (float) value, used for weighting the loss function +(during training only). +This can be useful to tell the model to +"pay more attention" to samples from +an under-represented class. +
+`sample_weight` + +Optional Numpy array of weights for +the training samples, used for weighting the loss function +(during training only). You can either pass a flat (1D) +Numpy array with the same length as the input samples +(1:1 mapping between weights and samples), +or in the case of temporal data, +you can pass a 2D array with shape +`(samples, sequence_length)`, +to apply a different weight to every timestep of every sample. +In this case you should make sure to specify +`sample_weight_mode="temporal"` in `compile()`. This argument is not +supported when `x` is a dataset, generator, or +keras.utils.Sequence instance, instead provide the sample_weights +as the third element of `x`. +
+`initial_epoch` + +Integer. +Epoch at which to start training +(useful for resuming a previous training run). +
+`steps_per_epoch` + +Integer or `None`. +Total number of steps (batches of samples) +before declaring one epoch finished and starting the +next epoch. When training with input tensors such as +TensorFlow data tensors, the default `None` is equal to +the number of samples in your dataset divided by +the batch size, or 1 if that cannot be determined. If x is a +tf.data dataset, and 'steps_per_epoch' +is None, the epoch will run until the input dataset is exhausted. +When passing an infinitely repeating dataset, you must specify the +`steps_per_epoch` argument. This argument is not supported with +array inputs. +
+`validation_steps` + +Only relevant if `validation_data` is provided and +is a tf.data dataset. Total number of steps (batches of +samples) to draw before stopping when performing validation +at the end of every epoch. If 'validation_steps' is None, validation +will run until the `validation_data` dataset is exhausted. In the +case of an infinitely repeated dataset, it will run into an +infinite loop. If 'validation_steps' is specified and only part of +the dataset will be consumed, the evaluation will start from the +beginning of the dataset at each epoch. This ensures that the same +validation samples are used every time. +
+`validation_batch_size` + +Integer or `None`. +Number of samples per validation batch. +If unspecified, will default to `batch_size`. +Do not specify the `validation_batch_size` if your data is in the +form of datasets, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`validation_freq` + +Only relevant if validation data is provided. Integer +or `collections_abc.Container` instance (e.g. list, tuple, etc.). +If an integer, specifies how many training epochs to run before a +new validation run is performed, e.g. `validation_freq=2` runs +validation every 2 epochs. If a Container, specifies the epochs on +which to run validation, e.g. `validation_freq=[1, 2, 10]` runs +validation at the end of the 1st, 2nd, and 10th epochs. +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. +If unspecified, `max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up +when using process-based threading. If unspecified, `workers` +will default to 1. If 0, will execute the generator on the main +thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to +the generator as they can't be passed easily to children processes. +
+ + +Unpacking behavior for iterator-like inputs: + A common pattern is to pass a tf.data.Dataset, generator, or + tf.keras.utils.Sequence to the `x` argument of fit, which will in fact + yield not only features (x) but optionally targets (y) and sample weights. + Keras requires that the output of such iterator-likes be unambiguous. The + iterator should return a tuple of length 1, 2, or 3, where the optional + second and third elements will be used for y and sample_weight + respectively. Any other type provided will be wrapped in a length one + tuple, effectively treating everything as 'x'. When yielding dicts, they + should still adhere to the top-level tuple structure. + e.g. `({"x0": x0, "x1": x1}, y)`. Keras will not attempt to separate + features, targets, and weights from the keys of a single dict. + A notable unsupported data type is the namedtuple. The reason is that + it behaves like both an ordered datatype (tuple) and a mapping + datatype (dict). So given a namedtuple of the form: + `namedtuple("example_tuple", ["y", "x"])` + it is ambiguous whether to reverse the order of the elements when + interpreting the value. Even worse is a tuple of the form: + `namedtuple("other_tuple", ["x", "y", "z"])` + where it is unclear if the tuple was intended to be unpacked into x, y, + and sample_weight or passed through as a single element to `x`. As a + result the data processing code will simply raise a ValueError if it + encounters a namedtuple. (Along with instructions to remedy the issue.) + + + + + + + + + +
Returns
+A `History` object. Its `History.history` attribute is +a record of training loss values and metrics values +at successive epochs, as well as validation loss values +and validation metrics values (if applicable). +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If the model was never compiled. +
+`ValueError` + +In case of mismatch between the provided input data +and what the model expects. +
+ + + +

fit_generator

+ +View source + + + +Fits the model on data yielded batch-by-batch by a Python generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.fit, which supports generators. + +#### DEPRECATED: + +Model.fit now supports generators, so there is no longer any need to use +this endpoint. + + +

get_layer

+ +View source + + + +Retrieves a layer based on either its name (unique) or index. + +If `name` and `index` are both provided, `index` will take precedence. +Indices are based on order of horizontal graph traversal (bottom-up). + + + + + + + + + + + + + +
Arguments
+`name` + +String, name of layer. +
+`index` + +Integer, index of layer. +
+ + + + + + + + + + + +
Returns
+A layer instance. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid layer name or index. +
+ + + +
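+#### Example:
+
+A minimal sketch (the layer name `'out'` is illustrative):
+
+```python
+import tensorflow as tf
+
+inputs = tf.keras.Input(shape=(3,))
+outputs = tf.keras.layers.Dense(2, name='out')(inputs)
+model = tf.keras.Model(inputs, outputs)
+
+layer = model.get_layer('out')         # look up by name
+same_layer = model.get_layer(index=1)  # index 0 is the InputLayer
+print(layer is same_layer)             # True
+```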

load_weights

+ +View source + + + +Loads all layer weights, either from a TensorFlow or an HDF5 weight file. + +If `by_name` is False weights are loaded based on the network's +topology. This means the architecture should be the same as when the weights +were saved. Note that layers that don't have weights are not taken into +account in the topological ordering, so adding or removing layers is fine as +long as they don't have weights. + +If `by_name` is True, weights are loaded into layers only if they share the +same name. This is useful for fine-tuning or transfer-learning models where +some of the layers have changed. + +Only topological loading (`by_name=False`) is supported when loading weights +from the TensorFlow format. Note that topological loading differs slightly +between TensorFlow and HDF5 formats for user-defined classes inheriting from +tf.keras.Model: HDF5 loads based on a flattened list of weights, while the +TensorFlow format loads based on the object-local names of attributes to +which layers are assigned in the `Model`'s constructor. + + + + + + + + + + + + + + + + +
Arguments
+`filepath` + +String, path to the weights file to load. For weight files in +TensorFlow format, this is the file prefix (the same as was passed +to `save_weights`). +
+`by_name` + +Boolean, whether to load weights by name or by topological +order. Only topological loading is supported for weight files in +TensorFlow format. +
+`skip_mismatch` + +Boolean, whether to skip loading of layers where there is +a mismatch in the number of weights, or a mismatch in the shape of +the weight (only valid when `by_name=True`). +
+ + + + + + + + + + + +
Returns
+When loading a weight file in TensorFlow format, returns the same status +object as tf.train.Checkpoint.restore. When graph building, restore +ops are run automatically as soon as the network is built (on first call +for user-defined classes inheriting from `Model`, immediately if it is +already built). + +When loading weights in HDF5 format, returns `None`. +
+ + + + + + + + + + + + + + + +
Raises
+`ImportError` + +If h5py is not available and the weight file is in HDF5 +format. +
+`ValueError` + +If `skip_mismatch` is set to `True` when `by_name` is +`False`. +
+ + + +
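+#### Example:
+
+A sketch of round-tripping weights in both formats; the file paths are
+placeholders. A prefix without an extension selects the TensorFlow checkpoint
+format (topological loading only), while an '.h5' path selects HDF5 and also
+allows `by_name=True` (this variant assumes h5py is installed):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(4, input_shape=(3,), name="dense_a"),
+    tf.keras.layers.Dense(1, name="dense_b"),
+])
+
+# TensorFlow format: `filepath` is a prefix; several files are written.
+model.save_weights("/tmp/example_ckpt")
+status = model.load_weights("/tmp/example_ckpt")  # status object, as with Checkpoint.restore
+
+# HDF5 format; by-name loading is useful for transfer learning.
+model.save_weights("/tmp/example_weights.h5")
+model.load_weights("/tmp/example_weights.h5", by_name=True)
+```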

make_predict_function

+ +View source + + + +Creates a function that executes one step of inference. + +This method can be overridden to support custom inference logic. +This method is called by Model.predict and Model.predict_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual evaluation +logic to Model.predict_step. + +This function is cached the first time Model.predict or +Model.predict_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return the outputs of the `Model`. +
+ + + +

make_test_function

+ +View source + + + +Creates a function that executes one step of evaluation. + +This method can be overridden to support custom evaluation logic. +This method is called by Model.evaluate and Model.test_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual evaluation +logic to Model.test_step. + +This function is cached the first time Model.evaluate or +Model.test_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return a `dict` containing values that will +be passed to `tf.keras.Callbacks.on_test_batch_end`. +
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return a `dict` containing values that will +be passed to `tf.keras.callbacks.Callback.on_test_batch_end`. +
+ + + +

make_train_function

+ +View source + + + +Creates a function that executes one step of training. + +This method can be overridden to support custom training logic. +This method is called by Model.fit and Model.train_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual training +logic to Model.train_step. + +This function is cached the first time Model.fit or +Model.train_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return a `dict` containing values that will +be passed to `tf.keras.Callbacks.on_train_batch_end`, such as +`{'loss': 0.2, 'accuracy': 0.7}`. +
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return a `dict` containing values that will +be passed to `tf.keras.callbacks.Callback.on_train_batch_end`, such as +`{'loss': 0.2, 'accuracy': 0.7}`. +
+ + + +
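+#### Example:
+
+Because `make_train_function` only wraps and dispatches the per-batch work,
+custom training logic is usually expressed by overriding `train_step` instead.
+The sketch below follows that pattern and assumes `fit` is fed
+`(inputs, targets)` pairs (see the unpacking discussion under `fit`):
+
+```python
+import tensorflow as tf
+
+class CustomModel(tf.keras.Model):
+    def train_step(self, data):
+        x, y = data  # assumes (inputs, targets) elements
+        with tf.GradientTape() as tape:
+            y_pred = self(x, training=True)
+            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
+        grads = tape.gradient(loss, self.trainable_variables)
+        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
+        self.compiled_metrics.update_state(y, y_pred)
+        # The returned dict is forwarded to the training callbacks.
+        return {m.name: m.result() for m in self.metrics}
+
+inputs = tf.keras.Input(shape=(8,))
+outputs = tf.keras.layers.Dense(1)(inputs)
+model = CustomModel(inputs, outputs)
+model.compile(optimizer="sgd", loss="mse", metrics=["mae"])
+```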

predict

+ +View source + + + +Generates output predictions for the input samples. + +Computation is done in batches. This method is designed for performance with +large-scale inputs. For small numbers of inputs that fit in one batch, +directly using `__call__` is recommended for faster execution, e.g., +`model(x)`, or `model(x, training=False)` if you have layers such as +tf.keras.layers.BatchNormalization that behave differently during +inference. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input samples. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A tf.data dataset. +- A generator or keras.utils.Sequence instance. +A more detailed description of unpacking behavior for iterator types +(Dataset, generator, Sequence) is given in the `Unpacking behavior +for iterator-like inputs` section of `Model.fit`. +
+`batch_size` + +Integer or `None`. +Number of samples per batch. +If unspecified, `batch_size` will default to 32. +Do not specify the `batch_size` if your data is in the +form of dataset, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`verbose` + +Verbosity mode, 0 or 1. +
+`steps` + +Total number of steps (batches of samples) +before declaring the prediction round finished. +Ignored with the default value of `None`. If x is a tf.data +dataset and `steps` is None, `predict` will +run until the input dataset is exhausted. +
+`callbacks` + +List of keras.callbacks.Callback instances. +List of callbacks to apply during prediction. +See [callbacks](/api_docs/python/tf/keras/callbacks). +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. +If unspecified, `max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up when using +process-based threading. If unspecified, `workers` will default +to 1. If 0, will execute the generator on the main thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to +the generator as they can't be passed easily to children processes. +
+ + +See the discussion of `Unpacking behavior for iterator-like inputs` for +Model.fit. Note that Model.predict uses the same interpretation rules as +Model.fit and Model.evaluate, so inputs must be unambiguous for all +three methods. + + + + + + + + + +
Returns
+Numpy array(s) of predictions. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of mismatch between the provided +input data and the model's expectations, +or in case a stateful model receives a number of samples +that is not a multiple of the batch size. +
+ + + +
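+#### Example:
+
+A small sketch contrasting batched `predict` with a direct call for a handful
+of inputs, as recommended above (shapes and sizes are arbitrary):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
+x = np.random.random((1000, 8)).astype("float32")
+
+preds = model.predict(x, batch_size=128)  # batched inference over a large input
+few = model(x[:4], training=False)        # direct call for a small batch
+print(preds.shape, few.shape)             # (1000, 1) (4, 1)
+```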

predict_generator

+ +View source + + + +Generates predictions for the input samples from a data generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.predict, which supports generators. + +#### DEPRECATED: + +Model.predict now supports generators, so there is no longer any need +to use this endpoint. + + +

predict_on_batch

+ +View source + + + +Returns predictions for a single batch of samples. + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +
+ + + + + + + + + + + +
Returns
+Numpy array(s) of predictions. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of mismatch between given number of inputs and +expectations of the model. +
+ + + +

predict_step

+ +View source + + + +The logic for one inference step. + +This method can be overridden to support custom inference logic. +This method is called by Model.make_predict_function. + +This method should contain the mathematical logic for one step of inference. +This typically includes the forward pass. + +Configuration details for *how* this logic is run (e.g. tf.function and +tf.distribute.Strategy settings) should be left to +Model.make_predict_function, which can also be overridden. + + + + + + + + + +
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+The result of one inference step, typically the output of calling the +`Model` on data. +
+ + + +
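+#### Example:
+
+A sketch of post-processing predictions by overriding `predict_step`; the
+sigmoid transform is arbitrary, and the defensive unpacking assumes `predict`
+is fed plain feature arrays or tensors (no targets):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+class PostprocessModel(tf.keras.Model):
+    def predict_step(self, data):
+        # Depending on the input type, `data` may arrive bare or wrapped
+        # in a tuple; handle both.
+        x = data[0] if isinstance(data, tuple) else data
+        return tf.nn.sigmoid(self(x, training=False))
+
+inputs = tf.keras.Input(shape=(4,))
+outputs = tf.keras.layers.Dense(1)(inputs)
+model = PostprocessModel(inputs, outputs)
+
+probs = model.predict(np.random.random((16, 4)).astype("float32"))
+```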

reset_metrics

+ +View source + + + +Resets the state of metrics. + + +

reset_states

+ +View source + + + + + + +

save

+ +View source + + + +Saves the model to TensorFlow SavedModel or a single HDF5 file. + + +#### The savefile includes: + +- The model architecture, allowing you to re-instantiate the model. +- The model weights. +- The state of the optimizer, allowing you to resume training + exactly where you left off. + + +This allows you to save the entirety of the state of a model +in a single file. + +Saved models can be reinstantiated via keras.models.load_model. +The model returned by `load_model` is a compiled model ready to be used +(unless the saved model was never compiled in the first place). + +Models built with the Sequential and Functional API can be saved to both the +HDF5 and SavedModel formats. Subclassed models can only be saved with the +SavedModel format. + +Note that the model weights may have different scoped names after being +loaded. Scoped names include the model/layer names, such as +`"dense_1/kernel:0"`. It is recommended that you use the layer properties to +access specific variables, e.g. `model.get_layer("dense_1").kernel`. + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`filepath` + +String, path to SavedModel or H5 file to save the model. +
+`overwrite` + +Whether to silently overwrite any existing file at the +target location, or provide the user with a manual prompt. +
+`include_optimizer` + +If True, save optimizer's state together. +
+`save_format` + +Either 'tf' or 'h5', indicating whether to save the model +to TensorFlow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and +'h5' in TF 1.X. +
+`signatures` + +Signatures to save with the SavedModel. Applicable to the +'tf' format only. Please see the `signatures` argument in +tf.saved_model.save for details. +
+`options` + +Optional tf.saved_model.SaveOptions object that specifies +options for saving to SavedModel. +
+ + + +#### Example: + + + +```python +from keras.models import load_model + +model.save('my_model.h5') # creates a HDF5 file 'my_model.h5' +del model # deletes the existing model + +# returns a compiled model +# identical to the previous one +model = load_model('my_model.h5') +``` + +
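+The example above uses the HDF5 format; a TensorFlow SavedModel round trip is
+similar (the directory path is a placeholder and is created by `save`):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
+model.compile(optimizer="sgd", loss="mse")
+
+model.save("/tmp/my_saved_model")  # 'tf' format: a SavedModel directory
+restored = tf.keras.models.load_model("/tmp/my_saved_model")  # compiled and ready to use
+```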

save_weights

+ +View source + + + +Saves all layer weights. + +Either saves in HDF5 or in TensorFlow format based on the `save_format` +argument. + +When saving in HDF5 format, the weight file has: + - `layer_names` (attribute), a list of strings + (ordered names of model layers). + - For every layer, a `group` named `layer.name` + - For every such layer group, a group attribute `weight_names`, + a list of strings + (ordered names of weights tensor of the layer). + - For every weight in the layer, a dataset + storing the weight value, named after the weight tensor. + +When saving in TensorFlow format, all objects referenced by the network are +saved in the same format as tf.train.Checkpoint, including any `Layer` +instances or `Optimizer` instances assigned to object attributes. For +networks constructed from inputs and outputs using `tf.keras.Model(inputs, +outputs)`, `Layer` instances used by the network are tracked/saved +automatically. For user-defined classes which inherit from tf.keras.Model, +`Layer` instances must be assigned to object attributes, typically in the +constructor. See the documentation of tf.train.Checkpoint and +tf.keras.Model for details. + +While the formats are the same, do not mix `save_weights` and +tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be +loaded using Model.load_weights. Checkpoints saved using +tf.train.Checkpoint.save should be restored using the corresponding +tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over +`save_weights` for training checkpoints. + +The TensorFlow format matches objects and variables by starting at a root +object, `self` for `save_weights`, and greedily matching attribute +names. For Model.save this is the `Model`, and for Checkpoint.save this +is the `Checkpoint` even if the `Checkpoint` has a model attached. This +means saving a tf.keras.Model using `save_weights` and loading into a +tf.train.Checkpoint with a `Model` attached (or vice versa) will not match +the `Model`'s variables. See the [guide to training +checkpoints](https://www.tensorflow.org/guide/checkpoint) for details +on the TensorFlow format. + + + + + + + + + + + + + + + + +
Arguments
+`filepath` + +String, path to the file to save the weights to. When saving +in TensorFlow format, this is the prefix used for checkpoint files +(multiple files are generated). Note that the '.h5' suffix causes +weights to be saved in HDF5 format. +
+`overwrite` + +Whether to silently overwrite any existing file at the +target location, or provide the user with a manual prompt. +
+`save_format` + +Either 'tf' or 'h5'. A `filepath` ending in '.h5' or +'.keras' will default to HDF5 if `save_format` is `None`. Otherwise +`None` defaults to 'tf'. +
+ + + + + + + + + + + + + + + +
Raises
+`ImportError` + +If h5py is not available when attempting to save in HDF5 +format. +
+`ValueError` + +For invalid/unknown format arguments. +
+ + + +

summary

+ +View source + + + +Prints a string summary of the network. + + + + + + + + + + + + + + + + + +
Arguments
+`line_length` + +Total length of printed lines +(e.g. set this to adapt the display to different +terminal window sizes). +
+`positions` + +Relative or absolute positions of log elements +in each line. If not provided, +defaults to `[.33, .55, .67, 1.]`. +
+`print_fn` + +Print function to use. Defaults to `print`. +It will be called on each line of the summary. +You can set it to a custom function +in order to capture the string summary. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `summary()` is called before the model is built. +
+ + + +
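+#### Example:
+
+A sketch of capturing the summary as a string by passing a custom `print_fn`:
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
+
+lines = []
+model.summary(print_fn=lines.append)  # each summary line is passed to print_fn
+summary_text = "\n".join(lines)
+```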

test_on_batch

+ +View source + + + +Test the model on a single batch of samples. + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, if +the model has named inputs. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). +
+`sample_weight` + +Optional array of the same length as x, containing +weights to apply to the model's loss for each sample. In the case of +temporal data, you can pass a 2D array with shape (samples, +sequence_length), to apply a different weight to every timestep of +every sample. In this case you should make sure to specify +sample_weight_mode="temporal" in compile(). +
+`reset_metrics` + +If `True`, the metrics returned will be only for this +batch. If `False`, the metrics will be statefully accumulated across +batches. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + + + + + + + + + + +
Returns
+Scalar test loss (if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid user-provided arguments. +
+ + + +

test_step

+ +View source + + + +The logic for one evaluation step. + +This method can be overridden to support custom evaluation logic. +This method is called by Model.make_test_function. + +This function should contain the mathematical logic for one step of +evaluation. +This typically includes the forward pass, loss calculation, and metrics +updates. + +Configuration details for *how* this logic is run (e.g. tf.function and +tf.distribute.Strategy settings) should be left to +Model.make_test_function, which can also be overridden. + + + + + + + + + +
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+A `dict` containing values that will be passed to +`tf.keras.callbacks.CallbackList.on_test_batch_end`. Typically, the +values of the `Model`'s metrics are returned. +
+ + + +

to_json

+ +View source + + + +Returns a JSON string containing the network configuration. + +To load a network from a JSON save file, use +keras.models.model_from_json(json_string, custom_objects={}). + + + + + + + + + + +
Arguments
+`**kwargs` + +Additional keyword arguments +to be passed to `json.dumps()`. +
+ + + + + + + + + + + +
Returns
+A JSON string. +
+ + + +
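+#### Example:
+
+A sketch of a configuration-only round trip; `to_json` serializes the
+architecture but not the weights, which can be copied separately:
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
+
+json_string = model.to_json()
+fresh = tf.keras.models.model_from_json(json_string)  # same architecture, fresh weights
+fresh.set_weights(model.get_weights())                # optionally copy the weights over
+```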

to_yaml

+ +View source + + + +Returns a yaml string containing the network configuration. + +To load a network from a yaml save file, use +keras.models.model_from_yaml(yaml_string, custom_objects={}). + +`custom_objects` should be a dictionary mapping +the names of custom losses / layers / etc to the corresponding +functions / classes. + + + + + + + + + + +
Arguments
+`**kwargs` + +Additional keyword arguments +to be passed to `yaml.dump()`. +
+ + + + + + + + + + + +
Returns
+A YAML string. +
+ + + + + + + + + + + + +
Raises
+`ImportError` + +if yaml module is not found. +
+ + + +

train_on_batch

+ +View source + + + +Runs a single gradient update on a single batch of data. + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, +if the model has named inputs. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). +
+`sample_weight` + +Optional array of the same length as x, containing +weights to apply to the model's loss for each sample. In the case of +temporal data, you can pass a 2D array with shape (samples, +sequence_length), to apply a different weight to every timestep of +every sample. In this case you should make sure to specify +sample_weight_mode="temporal" in compile(). +
+`class_weight` + +Optional dictionary mapping class indices (integers) to a +weight (float) to apply to the model's loss for the samples from this +class during training. This can be useful to tell the model to "pay +more attention" to samples from an under-represented class. +
+`reset_metrics` + +If `True`, the metrics returned will be only for this +batch. If `False`, the metrics will be statefully accumulated across +batches. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + + + + + + + + + + +
Returns
+Scalar training loss +(if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid user-provided arguments. +
+ + + +
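+#### Example:
+
+A minimal manual training loop built from `train_on_batch`; the data, batch
+size, and epoch count are arbitrary:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
+model.compile(optimizer="sgd", loss="mse", metrics=["mae"])
+
+x = np.random.random((256, 8)).astype("float32")
+y = np.random.random((256, 1)).astype("float32")
+
+for epoch in range(2):
+    model.reset_metrics()
+    for i in range(0, len(x), 32):
+        logs = model.train_on_batch(
+            x[i:i + 32], y[i:i + 32],
+            reset_metrics=False,  # accumulate metrics over the epoch
+            return_dict=True)     # e.g. {'loss': ..., 'mae': ...}
+    print(epoch, logs)
+```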

train_step

+ +View source + + + +The logic for one training step. + +This method can be overridden to support custom training logic. +This method is called by Model.make_train_function. + +This method should contain the mathematical logic for one step of training. +This typically includes the forward pass, loss calculation, backpropagation, +and metric updates. + +Configuration details for *how* this logic is run (e.g. tf.function and +tf.distribute.Strategy settings) should be left to +Model.make_train_function, which can also be overridden. + + + + + + + + + +
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+A `dict` containing values that will be passed to +`tf.keras.callbacks.CallbackList.on_train_batch_end`. Typically, the +values of the `Model`'s metrics are returned. Example: +`{'loss': 0.2, 'accuracy': 0.7}`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/Sequential.md b/site/en/api_docs/python/tf/keras/Sequential.md new file mode 100644 index 00000000000..f1697821f37 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/Sequential.md @@ -0,0 +1,2567 @@ +description: Sequential groups a linear stack of layers into a tf.keras.Model. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.keras.Sequential + + + + + + + + + +`Sequential` groups a linear stack of layers into a tf.keras.Model. + +Inherits From: [`Model`](../../tf/keras/Model.md) + + + + + + + + + +`Sequential` provides training and inference features on this model. + +#### Examples: + + + +``` +>>> # Optionally, the first layer can receive an `input_shape` argument: +>>> model = tf.keras.Sequential() +>>> model.add(tf.keras.layers.Dense(8, input_shape=(16,))) +>>> # Afterwards, we do automatic shape inference: +>>> model.add(tf.keras.layers.Dense(4)) +``` + +``` +>>> # This is identical to the following: +>>> model = tf.keras.Sequential() +>>> model.add(tf.keras.layers.Dense(8, input_dim=16)) +``` + +``` +>>> # And to the following: +>>> model = tf.keras.Sequential() +>>> model.add(tf.keras.layers.Dense(8, batch_input_shape=(None, 16))) +``` + +``` +>>> # Note that you can also omit the `input_shape` argument. +>>> # In that case the model doesn't have any weights until the first call +>>> # to a training/evaluation method (since it isn't yet built): +>>> model = tf.keras.Sequential() +>>> model.add(tf.keras.layers.Dense(8)) +>>> model.add(tf.keras.layers.Dense(4)) +>>> # model.weights not created yet +``` + +``` +>>> # Whereas if you specify the input shape, the model gets built +>>> # continuously as you are adding layers: +>>> model = tf.keras.Sequential() +>>> model.add(tf.keras.layers.Dense(8, input_shape=(16,))) +>>> model.add(tf.keras.layers.Dense(4)) +>>> len(model.weights) +4 +``` + +``` +>>> # When using the delayed-build pattern (no input shape specified), you can +>>> # choose to manually build your model by calling +>>> # `build(batch_input_shape)`: +>>> model = tf.keras.Sequential() +>>> model.add(tf.keras.layers.Dense(8)) +>>> model.add(tf.keras.layers.Dense(4)) +>>> model.build((None, 16)) +>>> len(model.weights) +4 +``` + +```python +# Note that when using the delayed-build pattern (no input shape specified), +# the model gets built the first time you call `fit` (or other training and +# evaluation methods). +model = tf.keras.Sequential() +model.add(tf.keras.layers.Dense(8)) +model.add(tf.keras.layers.Dense(1)) +model.compile(optimizer='sgd', loss='mse') +# This builds the model for the first time: +model.fit(x, y, batch_size=32, epochs=10) +``` + + + + + + + + + + + + + +
+`layers` + +Optional list of layers to add to the model. +
+`name` + +Optional name for the model. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`distribute_strategy` + +The tf.distribute.Strategy this model was created under. +
+`layers` + + +
+`metrics_names` + +Returns the model's display labels for all outputs. + +Note: `metrics_names` are available only after a keras.Model has been +trained/evaluated on actual data. + +``` +>>> inputs = tf.keras.layers.Input(shape=(3,)) +>>> outputs = tf.keras.layers.Dense(2)(inputs) +>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs) +>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"]) +>>> model.metrics_names +[] +``` + +``` +>>> x = np.random.random((2, 3)) +>>> y = np.random.randint(0, 2, (2, 2)) +>>> _ = model.fit(x, y, verbose=0) +>>> model.metrics_names +['loss', 'mae'] +``` + +``` +>>> inputs = tf.keras.layers.Input(shape=(3,)) +>>> d = tf.keras.layers.Dense(2, name='out') +>>> output_1 = d(inputs) +>>> output_2 = d(inputs) +>>> model = tf.keras.models.Model( +... inputs=inputs, outputs=[output_1, output_2]) +>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"]) +>>> _ = model.fit(x, (y, y), verbose=0) +>>> model.metrics_names +['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae', +'out_1_acc'] +``` +
+`run_eagerly` + +Settable attribute indicating whether the model should run eagerly. + +Running eagerly means that your model will be run step by step, +like Python code. Your model might run slower, but it should become easier +for you to debug it by stepping into individual layer calls. + +By default, we will attempt to compile your model to a static graph to +deliver the best execution performance. +
+`state_updates` + +Returns the `updates` from all layers that are stateful. + +This is useful for separating training updates and +state updates, e.g. when we need to update a layer's internal state +during prediction. +
+ + + +## Methods + +

add

+ +View source + + + +Adds a layer instance on top of the layer stack. + + + + + + + + + + + +
Arguments
+`layer` + +layer instance. +
+ + + + + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `layer` is not a layer instance. +
+`ValueError` + +In case the `layer` argument does not +know its input shape. +
+`ValueError` + +In case the `layer` argument has +multiple output tensors, or is already connected +somewhere else (forbidden in `Sequential` models). +
+ + + +
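+#### Example:
+
+A short sketch of growing and shrinking the stack with `add` and `pop`
+(documented below):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential()
+model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
+model.add(tf.keras.layers.Dense(4))
+print(len(model.layers))  # 2
+
+model.pop()               # removes the trailing Dense(4) layer
+print(len(model.layers))  # 1
+```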

compile

+ +View source + + + +Configures the model for training. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`optimizer` + +String (name of optimizer) or optimizer instance. +See tf.keras.optimizers. +
+`loss` + +String (name of objective function), objective function or +tf.keras.losses.Loss instance. See tf.keras.losses. +An objective function is any callable with the signature +`loss = fn(y_true, y_pred)`, where +y_true = ground truth values with shape = `[batch_size, d0, .. dN]`, +except sparse loss functions such as sparse categorical crossentropy +where shape = `[batch_size, d0, .. dN-1]`. +y_pred = predicted values with shape = `[batch_size, d0, .. dN]`. +It returns a weighted loss float tensor. +If a custom `Loss` instance is used and reduction is set to NONE, +return value has the shape [batch_size, d0, .. dN-1] ie. per-sample +or per-timestep loss values; otherwise, it is a scalar. +If the model has multiple outputs, you can use a different loss on +each output by passing a dictionary or a list of losses. The loss +value that will be minimized by the model will then be the sum of +all individual losses. +
+`metrics` + +List of metrics to be evaluated by the model during training +and testing. +Each of this can be a string (name of a built-in function), function +or a tf.keras.metrics.Metric instance. See tf.keras.metrics. +Typically you will use `metrics=['accuracy']`. A function is any +callable with the signature `result = fn(y_true, y_pred)`. +To specify different metrics for different outputs of a +multi-output model, you could also pass a dictionary, such as +`metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}`. +You can also pass a list (len = len(outputs)) of lists of metrics +such as `metrics=[['accuracy'], ['accuracy', 'mse']]` or +`metrics=['accuracy', ['accuracy', 'mse']]`. +When you pass the strings 'accuracy' or 'acc', we convert this to +one of tf.keras.metrics.BinaryAccuracy, +tf.keras.metrics.CategoricalAccuracy, +tf.keras.metrics.SparseCategoricalAccuracy based on the loss +function used and the model output shape. We do a similar conversion +for the strings 'crossentropy' and 'ce' as well. +
+`loss_weights` + +Optional list or dictionary specifying scalar +coefficients (Python floats) to weight the loss contributions +of different model outputs. +The loss value that will be minimized by the model +will then be the *weighted sum* of all individual losses, +weighted by the `loss_weights` coefficients. +If a list, it is expected to have a 1:1 mapping +to the model's outputs. If a dict, it is expected to map +output names (strings) to scalar coefficients. +
+`sample_weight_mode` + +If you need to do timestep-wise +sample weighting (2D weights), set this to `"temporal"`. +`None` defaults to sample-wise weights (1D). +If the model has multiple outputs, you can use a different +`sample_weight_mode` on each output by passing a +dictionary or a list of modes. +
+`weighted_metrics` + +List of metrics to be evaluated and weighted +by sample_weight or class_weight during training and testing. +
+`**kwargs` + +Any additional arguments. For eager execution, pass +`run_eagerly=True`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid arguments for +`optimizer`, `loss`, `metrics` or `sample_weight_mode`. +
+ + + +
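+#### Example:
+
+A minimal sketch of a typical `compile` call; the optimizer, loss, and metric
+choices are arbitrary:
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
+    tf.keras.layers.Dense(1, activation="sigmoid"),
+])
+
+model.compile(
+    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
+    loss=tf.keras.losses.BinaryCrossentropy(),
+    metrics=["accuracy", tf.keras.metrics.AUC()],
+)
+```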

evaluate

+ +View source + + + +Returns the loss value & metrics values for the model in test mode. + +Computation is done in batches. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, if +the model has named inputs. - A tf.data dataset. - A generator or +keras.utils.Sequence instance. A more detailed description of +unpacking behavior for iterator types (Dataset, generator, Sequence) +is given in the `Unpacking behavior for iterator-like inputs` section +of Model.fit. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). If +`x` is a dataset, generator or keras.utils.Sequence instance, `y` +should not be specified (since targets will be obtained from the +iterator/dataset). +
+`batch_size` + +Integer or `None`. Number of samples per gradient update. If +unspecified, `batch_size` will default to 32. Do not specify the +`batch_size` if your data is in the form of a dataset, generators, +or keras.utils.Sequence instances (since they generate batches). +
+`verbose` + +0 or 1. Verbosity mode. 0 = silent, 1 = progress bar. +
+`sample_weight` + +Optional Numpy array of weights for the test samples, +used for weighting the loss function. You can either pass a flat (1D) +Numpy array with the same length as the input samples +(1:1 mapping between weights and samples), or in the case of +temporal data, you can pass a 2D array with shape `(samples, +sequence_length)`, to apply a different weight to every timestep +of every sample. In this case you should make sure to specify +`sample_weight_mode="temporal"` in `compile()`. This argument is +not supported when `x` is a dataset, instead pass sample weights +as the third element of `x`. +
+`steps` + +Integer or `None`. Total number of steps (batches of samples) +before declaring the evaluation round finished. Ignored with the +default value of `None`. If x is a tf.data dataset and `steps` is +None, 'evaluate' will run until the dataset is exhausted. This +argument is not supported with array inputs. +
+`callbacks` + +List of keras.callbacks.Callback instances. List of +callbacks to apply during evaluation. See +[callbacks](/api_docs/python/tf/keras/callbacks). +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. If unspecified, +`max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up when using process-based +threading. If unspecified, `workers` will default to 1. If 0, will +execute the generator on the main thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to the +generator as they can't be passed easily to children processes. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + +See the discussion of `Unpacking behavior for iterator-like inputs` for +Model.fit. + + + + + + + + + +
Returns
+Scalar test loss (if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +in case of invalid arguments. +
+ + + +
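+#### Example:
+
+A sketch of evaluating on held-out arrays with `return_dict=True` to get named
+results; the data here is random and for illustration only:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
+model.compile(optimizer="sgd", loss="mse", metrics=["mae"])
+
+x_test = np.random.random((64, 8)).astype("float32")
+y_test = np.random.random((64, 1)).astype("float32")
+
+results = model.evaluate(x_test, y_test, batch_size=32, return_dict=True)
+print(results)  # e.g. {'loss': ..., 'mae': ...}
+```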

evaluate_generator

+ +View source + + + +Evaluates the model on a data generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.evaluate, which supports generators. + +#### DEPRECATED: + +Model.evaluate now supports generators, so there is no longer any need +to use this endpoint. + + +

fit

+ +View source + + + +Trains the model for a fixed number of epochs (iterations on a dataset). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, +if the model has named inputs. +- A tf.data dataset. Should return a tuple +of either `(inputs, targets)` or +`(inputs, targets, sample_weights)`. +- A generator or keras.utils.Sequence returning `(inputs, targets)` +or `(inputs, targets, sample_weights)`. +A more detailed description of unpacking behavior for iterator types +(Dataset, generator, Sequence) is given below. +
+`y` + +Target data. Like the input data `x`, +it could be either Numpy array(s) or TensorFlow tensor(s). +It should be consistent with `x` (you cannot have Numpy inputs and +tensor targets, or inversely). If `x` is a dataset, generator, +or keras.utils.Sequence instance, `y` should +not be specified (since targets will be obtained from `x`). +
+`batch_size` + +Integer or `None`. +Number of samples per gradient update. +If unspecified, `batch_size` will default to 32. +Do not specify the `batch_size` if your data is in the +form of datasets, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`epochs` + +Integer. Number of epochs to train the model. +An epoch is an iteration over the entire `x` and `y` +data provided. +Note that in conjunction with `initial_epoch`, +`epochs` is to be understood as "final epoch". +The model is not trained for a number of iterations +given by `epochs`, but merely until the epoch +of index `epochs` is reached. +
+`verbose` + +0, 1, or 2. Verbosity mode. +0 = silent, 1 = progress bar, 2 = one line per epoch. +Note that the progress bar is not particularly useful when +logged to a file, so verbose=2 is recommended when not running +interactively (e.g., in a production environment). +
+`callbacks` + +List of keras.callbacks.Callback instances. +List of callbacks to apply during training. +See tf.keras.callbacks. +
+`validation_split` + +Float between 0 and 1. +Fraction of the training data to be used as validation data. +The model will set apart this fraction of the training data, +will not train on it, and will evaluate +the loss and any model metrics +on this data at the end of each epoch. +The validation data is selected from the last samples +in the `x` and `y` data provided, before shuffling. This argument is +not supported when `x` is a dataset, generator or +keras.utils.Sequence instance. +
+`validation_data` + +Data on which to evaluate +the loss and any model metrics at the end of each epoch. +The model will not be trained on this data. +`validation_data` will override `validation_split`. +`validation_data` could be: +- tuple `(x_val, y_val)` of Numpy arrays or tensors +- tuple `(x_val, y_val, val_sample_weights)` of Numpy arrays +- dataset +For the first two cases, `batch_size` must be provided. +For the last case, `validation_steps` could be provided. +Note that `validation_data` does not support all the data types that +are supported in `x`, e.g. dict, generator or keras.utils.Sequence. +
+`shuffle` + +Boolean (whether to shuffle the training data +before each epoch) or str (for 'batch'). This argument is ignored +when `x` is a generator. 'batch' is a special option for dealing +with the limitations of HDF5 data; it shuffles in batch-sized +chunks. Has no effect when `steps_per_epoch` is not `None`. +
+`class_weight` + +Optional dictionary mapping class indices (integers) +to a weight (float) value, used for weighting the loss function +(during training only). +This can be useful to tell the model to +"pay more attention" to samples from +an under-represented class. +
+`sample_weight` + +Optional Numpy array of weights for +the training samples, used for weighting the loss function +(during training only). You can either pass a flat (1D) +Numpy array with the same length as the input samples +(1:1 mapping between weights and samples), +or in the case of temporal data, +you can pass a 2D array with shape +`(samples, sequence_length)`, +to apply a different weight to every timestep of every sample. +In this case you should make sure to specify +`sample_weight_mode="temporal"` in `compile()`. This argument is not +supported when `x` is a dataset, generator, or +keras.utils.Sequence instance, instead provide the sample_weights +as the third element of `x`. +
+`initial_epoch` + +Integer. +Epoch at which to start training +(useful for resuming a previous training run). +
+`steps_per_epoch` + +Integer or `None`. +Total number of steps (batches of samples) +before declaring one epoch finished and starting the +next epoch. When training with input tensors such as +TensorFlow data tensors, the default `None` is equal to +the number of samples in your dataset divided by +the batch size, or 1 if that cannot be determined. If x is a +tf.data dataset, and 'steps_per_epoch' +is None, the epoch will run until the input dataset is exhausted. +When passing an infinitely repeating dataset, you must specify the +`steps_per_epoch` argument. This argument is not supported with +array inputs. +
+`validation_steps` + +Only relevant if `validation_data` is provided and +is a tf.data dataset. Total number of steps (batches of +samples) to draw before stopping when performing validation +at the end of every epoch. If 'validation_steps' is None, validation +will run until the `validation_data` dataset is exhausted. In the +case of an infinitely repeated dataset, it will run into an +infinite loop. If 'validation_steps' is specified and only part of +the dataset will be consumed, the evaluation will start from the +beginning of the dataset at each epoch. This ensures that the same +validation samples are used every time. +
+`validation_batch_size` + +Integer or `None`. +Number of samples per validation batch. +If unspecified, will default to `batch_size`. +Do not specify the `validation_batch_size` if your data is in the +form of datasets, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`validation_freq` + +Only relevant if validation data is provided. Integer +or `collections_abc.Container` instance (e.g. list, tuple, etc.). +If an integer, specifies how many training epochs to run before a +new validation run is performed, e.g. `validation_freq=2` runs +validation every 2 epochs. If a Container, specifies the epochs on +which to run validation, e.g. `validation_freq=[1, 2, 10]` runs +validation at the end of the 1st, 2nd, and 10th epochs. +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. +If unspecified, `max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up +when using process-based threading. If unspecified, `workers` +will default to 1. If 0, will execute the generator on the main +thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to +the generator as they can't be passed easily to children processes. +
+ + +Unpacking behavior for iterator-like inputs: + A common pattern is to pass a tf.data.Dataset, generator, or + tf.keras.utils.Sequence to the `x` argument of fit, which will in fact + yield not only features (x) but optionally targets (y) and sample weights. + Keras requires that the output of such iterator-likes be unambiguous. The + iterator should return a tuple of length 1, 2, or 3, where the optional + second and third elements will be used for y and sample_weight + respectively. Any other type provided will be wrapped in a length one + tuple, effectively treating everything as 'x'. When yielding dicts, they + should still adhere to the top-level tuple structure. + e.g. `({"x0": x0, "x1": x1}, y)`. Keras will not attempt to separate + features, targets, and weights from the keys of a single dict. + A notable unsupported data type is the namedtuple. The reason is that + it behaves like both an ordered datatype (tuple) and a mapping + datatype (dict). So given a namedtuple of the form: + `namedtuple("example_tuple", ["y", "x"])` + it is ambiguous whether to reverse the order of the elements when + interpreting the value. Even worse is a tuple of the form: + `namedtuple("other_tuple", ["x", "y", "z"])` + where it is unclear if the tuple was intended to be unpacked into x, y, + and sample_weight or passed through as a single element to `x`. As a + result the data processing code will simply raise a ValueError if it + encounters a namedtuple. (Along with instructions to remedy the issue.) + + + + + + + + + +
Returns
+A `History` object. Its `History.history` attribute is +a record of training loss values and metrics values +at successive epochs, as well as validation loss values +and validation metrics values (if applicable). +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If the model was never compiled. +
+`ValueError` + +In case of mismatch between the provided input data +and what the model expects. +
+ + + +

fit_generator

+ +View source + + + +Fits the model on data yielded batch-by-batch by a Python generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.fit, which supports generators. + +#### DEPRECATED: + +Model.fit now supports generators, so there is no longer any need to use +this endpoint. + + +

get_layer

+ +View source + + + +Retrieves a layer based on either its name (unique) or index. + +If `name` and `index` are both provided, `index` will take precedence. +Indices are based on order of horizontal graph traversal (bottom-up). + + + + + + + + + + + + + +
Arguments
+`name` + +String, name of layer. +
+`index` + +Integer, index of layer. +
+ + + + + + + + + + + +
Returns
+A layer instance. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid layer name or index. +
+ + + +

load_weights

+ +View source + + + +Loads all layer weights, either from a TensorFlow or an HDF5 weight file. + +If `by_name` is False weights are loaded based on the network's +topology. This means the architecture should be the same as when the weights +were saved. Note that layers that don't have weights are not taken into +account in the topological ordering, so adding or removing layers is fine as +long as they don't have weights. + +If `by_name` is True, weights are loaded into layers only if they share the +same name. This is useful for fine-tuning or transfer-learning models where +some of the layers have changed. + +Only topological loading (`by_name=False`) is supported when loading weights +from the TensorFlow format. Note that topological loading differs slightly +between TensorFlow and HDF5 formats for user-defined classes inheriting from +tf.keras.Model: HDF5 loads based on a flattened list of weights, while the +TensorFlow format loads based on the object-local names of attributes to +which layers are assigned in the `Model`'s constructor. + + + + + + + + + + + + + + + + +
Arguments
+`filepath` + +String, path to the weights file to load. For weight files in +TensorFlow format, this is the file prefix (the same as was passed +to `save_weights`). +
+`by_name` + +Boolean, whether to load weights by name or by topological +order. Only topological loading is supported for weight files in +TensorFlow format. +
+`skip_mismatch` + +Boolean, whether to skip loading of layers where there is +a mismatch in the number of weights, or a mismatch in the shape of +the weight (only valid when `by_name=True`). +
+ + + + + + + + + + + +
Returns
+When loading a weight file in TensorFlow format, returns the same status +object as tf.train.Checkpoint.restore. When graph building, restore +ops are run automatically as soon as the network is built (on first call +for user-defined classes inheriting from `Model`, immediately if it is +already built). + +When loading weights in HDF5 format, returns `None`. +
+ + + + + + + + + + + + + + + +
Raises
+`ImportError` + +If h5py is not available and the weight file is in HDF5 +format. +
+`ValueError` + +If `skip_mismatch` is set to `True` when `by_name` is +`False`. +
+ + + +

make_predict_function

+ +View source + + + +Creates a function that executes one step of inference. + +This method can be overridden to support custom inference logic. +This method is called by Model.predict and Model.predict_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual evaluation +logic to Model.predict_step. + +This function is cached the first time Model.predict or +Model.predict_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return the outputs of the `Model`. +
+ + + +

make_test_function

+ +View source + + + +Creates a function that executes one step of evaluation. + +This method can be overridden to support custom evaluation logic. +This method is called by Model.evaluate and Model.test_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual evaluation +logic to Model.test_step. + +This function is cached the first time Model.evaluate or +Model.test_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return a `dict` containing values that will +be passed to `tf.keras.callbacks.Callback.on_test_batch_end`. +
+ + + +

make_train_function

+ +View source + + + +Creates a function that executes one step of training. + +This method can be overridden to support custom training logic. +This method is called by Model.fit and Model.train_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual training +logic to Model.train_step. + +This function is cached the first time Model.fit or +Model.train_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return a `dict` containing values that will +be passed to `tf.keras.callbacks.Callback.on_train_batch_end`, such as +`{'loss': 0.2, 'accuracy': 0.7}`. +
+ + + +

pop

+ +View source + + + +Removes the last layer in the model. + + + + + + + + + + + +
Raises
+`TypeError` + +if there are no layers in the model. +
+ + + +

predict

+ +View source + + + +Generates output predictions for the input samples. + +Computation is done in batches. This method is designed for performance with +large-scale inputs. For small numbers of inputs that fit in one batch, +directly using `__call__` is recommended for faster execution, e.g., +`model(x)`, or `model(x, training=False)` if you have layers such as +tf.keras.layers.BatchNormalization that behave differently during +inference. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input samples. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A tf.data dataset. +- A generator or keras.utils.Sequence instance. +A more detailed description of unpacking behavior for iterator types +(Dataset, generator, Sequence) is given in the `Unpacking behavior +for iterator-like inputs` section of `Model.fit`. +
+`batch_size` + +Integer or `None`. +Number of samples per batch. +If unspecified, `batch_size` will default to 32. +Do not specify the `batch_size` if your data is in the +form of dataset, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`verbose` + +Verbosity mode, 0 or 1. +
+`steps` + +Total number of steps (batches of samples) +before declaring the prediction round finished. +Ignored with the default value of `None`. If x is a tf.data +dataset and `steps` is None, `predict` will +run until the input dataset is exhausted. +
+`callbacks` + +List of keras.callbacks.Callback instances. +List of callbacks to apply during prediction. +See [callbacks](/api_docs/python/tf/keras/callbacks). +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. +If unspecified, `max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up when using +process-based threading. If unspecified, `workers` will default +to 1. If 0, will execute the generator on the main thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to +the generator as they can't be passed easily to children processes. +
+ + +See the discussion of `Unpacking behavior for iterator-like inputs` for +Model.fit. Note that Model.predict uses the same interpretation rules as +Model.fit and Model.evaluate, so inputs must be unambiguous for all +three methods. + + + + + + + + + +
Returns
+Numpy array(s) of predictions. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of mismatch between the provided +input data and the model's expectations, +or in case a stateful model receives a number of samples +that is not a multiple of the batch size. +
+ + + +

predict_classes

+ +View source + + + +Generate class predictions for the input samples. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2021-01-01. +Instructions for updating: +Please use instead: +* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation). +* `(model.predict(x) > 0.5).astype("int32")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation). + +The input samples are processed batch by batch. + + + + + + + + + + + + + + +
Arguments
+`x` + +input data, as a Numpy array or list of Numpy arrays +(if the model has multiple inputs). +
+`batch_size` + +integer. +
+`verbose` + +verbosity mode, 0 or 1. +
+ + + + + + + + + + + +
Returns
+A numpy array of class predictions. +
+ + + +
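+#### Example:
+
+The replacements suggested in the deprecation notice, written out as a sketch
+(the model below is a placeholder):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.Sequential(
+    [tf.keras.layers.Dense(3, activation="softmax", input_shape=(4,))])
+x = np.random.random((10, 4)).astype("float32")
+
+probs = model.predict(x)
+classes = np.argmax(probs, axis=-1)  # multi-class (softmax output)
+
+# For a binary model with a sigmoid output:
+# classes = (model.predict(x) > 0.5).astype("int32")
+```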

predict_generator

+ +View source + + + +Generates predictions for the input samples from a data generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.predict, which supports generators. + +#### DEPRECATED: + +Model.predict now supports generators, so there is no longer any need +to use this endpoint. + + +

predict_on_batch

+ +View source + + + +Returns predictions for a single batch of samples. + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +
+ + + + + + + + + + + +
Returns
+Numpy array(s) of predictions. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of mismatch between given number of inputs and +expectations of the model. +
+ + + +

predict_proba

+ +View source + + + +Generates class probability predictions for the input samples. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2021-01-01. +Instructions for updating: +Please use `model.predict()` instead. + +The input samples are processed batch by batch. + + + + + + + + + + + + + + + + +
Arguments
+`x` + +input data, as a Numpy array or list of Numpy arrays +(if the model has multiple inputs). +
+`batch_size` + +integer. +
+`verbose` + +verbosity mode, 0 or 1. +
+ + + + + + + + + + + +
Returns
+A Numpy array of probability predictions. +
+ + + +

predict_step

+ +View source + + + +The logic for one inference step. + +This method can be overridden to support custom inference logic. +This method is called by Model.make_predict_function. + +This method should contain the mathematical logic for one step of inference. +This typically includes the forward pass. + +Configuration details for *how* this logic is run (e.g. tf.function and +tf.distribute.Strategy settings) should be left to +Model.make_predict_function, which can also be overridden. + + + + + + + + + +
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+The result of one inference step, typically the output of calling the +`Model` on data. +
+ + + +

reset_metrics

+ +View source + + + +Resets the state of metrics. + + +

reset_states

+ +View source + + + + + + +

save

+ +View source + + + +Saves the model to TensorFlow SavedModel or a single HDF5 file. + + +#### The savefile includes: + +- The model architecture, allowing you to re-instantiate the model. +- The model weights. +- The state of the optimizer, allowing you to resume training + exactly where you left off. + + +This allows you to save the entirety of the state of a model +in a single file. + +Saved models can be reinstantiated via keras.models.load_model. +The model returned by `load_model` is a compiled model ready to be used +(unless the saved model was never compiled in the first place). + +Models built with the Sequential and Functional API can be saved to both the +HDF5 and SavedModel formats. Subclassed models can only be saved with the +SavedModel format. + +Note that the model weights may have different scoped names after being +loaded. Scoped names include the model/layer names, such as +`"dense_1/kernel:0"`. It is recommended that you use the layer properties to +access specific variables, e.g. `model.get_layer("dense_1").kernel`. + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`filepath` + +String, path to SavedModel or H5 file to save the model. +
+`overwrite` + +Whether to silently overwrite any existing file at the +target location, or provide the user with a manual prompt. +
+`include_optimizer` + +If True, save optimizer's state together. +
+`save_format` + +Either 'tf' or 'h5', indicating whether to save the model +to TensorFlow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and +'h5' in TF 1.X. +
+`signatures` + +Signatures to save with the SavedModel. Applicable to the +'tf' format only. Please see the `signatures` argument in +tf.saved_model.save for details. +
+`options` + +Optional tf.saved_model.SaveOptions object that specifies +options for saving to SavedModel. +
+ + + +#### Example: + + + +```python +from keras.models import load_model + +model.save('my_model.h5') # creates a HDF5 file 'my_model.h5' +del model # deletes the existing model + +# returns a compiled model +# identical to the previous one +model = load_model('my_model.h5') +``` + +

save_weights

+ +View source + + + +Saves all layer weights. + +Either saves in HDF5 or in TensorFlow format based on the `save_format` +argument. + +When saving in HDF5 format, the weight file has: + - `layer_names` (attribute), a list of strings + (ordered names of model layers). + - For every layer, a `group` named `layer.name` + - For every such layer group, a group attribute `weight_names`, + a list of strings + (ordered names of weights tensor of the layer). + - For every weight in the layer, a dataset + storing the weight value, named after the weight tensor. + +When saving in TensorFlow format, all objects referenced by the network are +saved in the same format as tf.train.Checkpoint, including any `Layer` +instances or `Optimizer` instances assigned to object attributes. For +networks constructed from inputs and outputs using `tf.keras.Model(inputs, +outputs)`, `Layer` instances used by the network are tracked/saved +automatically. For user-defined classes which inherit from tf.keras.Model, +`Layer` instances must be assigned to object attributes, typically in the +constructor. See the documentation of tf.train.Checkpoint and +tf.keras.Model for details. + +While the formats are the same, do not mix `save_weights` and +tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be +loaded using Model.load_weights. Checkpoints saved using +tf.train.Checkpoint.save should be restored using the corresponding +tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over +`save_weights` for training checkpoints. + +The TensorFlow format matches objects and variables by starting at a root +object, `self` for `save_weights`, and greedily matching attribute +names. For Model.save this is the `Model`, and for Checkpoint.save this +is the `Checkpoint` even if the `Checkpoint` has a model attached. This +means saving a tf.keras.Model using `save_weights` and loading into a +tf.train.Checkpoint with a `Model` attached (or vice versa) will not match +the `Model`'s variables. See the [guide to training +checkpoints](https://www.tensorflow.org/guide/checkpoint) for details +on the TensorFlow format. + + + + + + + + + + + + + + + + +
Arguments
+`filepath` + +String, path to the file to save the weights to. When saving +in TensorFlow format, this is the prefix used for checkpoint files +(multiple files are generated). Note that the '.h5' suffix causes +weights to be saved in HDF5 format. +
+`overwrite` + +Whether to silently overwrite any existing file at the +target location, or provide the user with a manual prompt. +
+`save_format` + +Either 'tf' or 'h5'. A `filepath` ending in '.h5' or +'.keras' will default to HDF5 if `save_format` is `None`. Otherwise +`None` defaults to 'tf'. +
+ + + + + + + + + + + + + + + +
Raises
+`ImportError` + +If h5py is not available when attempting to save in HDF5 +format. +
+`ValueError` + +For invalid/unknown format arguments. +
+ + + +
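+
+As a minimal sketch of the prefix behaviour described above (file names are
+illustrative; the TensorFlow format writes several files sharing the prefix):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
+
+# TensorFlow format: 'ckpt' is a prefix; ckpt.index and ckpt.data-* appear.
+model.save_weights('ckpt')
+model.load_weights('ckpt')
+
+# HDF5 format: a single file, selected here by the '.h5' suffix.
+model.save_weights('weights.h5')
+model.load_weights('weights.h5')
+```
+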

summary

+ +View source + + + +Prints a string summary of the network. + + + + + + + + + + + + + + + + + +
Arguments
+`line_length` + +Total length of printed lines +(e.g. set this to adapt the display to different +terminal window sizes). +
+`positions` + +Relative or absolute positions of log elements +in each line. If not provided, +defaults to `[.33, .55, .67, 1.]`. +
+`print_fn` + +Print function to use. Defaults to `print`. +It will be called on each line of the summary. +You can set it to a custom function +in order to capture the string summary. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `summary()` is called before the model is built. +
+ + + +
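+
+For example, a small sketch that uses the `print_fn` argument to capture the
+summary as a string instead of printing it:
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(8, input_shape=(4,), name='dense_1'),
+    tf.keras.layers.Dense(1, name='dense_2'),
+])
+
+lines = []
+model.summary(print_fn=lines.append)  # print_fn is called once per line
+summary_text = '\n'.join(lines)
+print(summary_text)
+```
+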

test_on_batch

+ +View source + + + +Test the model on a single batch of samples. + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, if +the model has named inputs. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). +
+`sample_weight` + +Optional array of the same length as x, containing +weights to apply to the model's loss for each sample. In the case of +temporal data, you can pass a 2D array with shape (samples, +sequence_length), to apply a different weight to every timestep of +every sample. In this case you should make sure to specify +sample_weight_mode="temporal" in compile(). +
+`reset_metrics` + +If `True`, the metrics returned will be only for this +batch. If `False`, the metrics will be statefully accumulated across +batches. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + + + + + + + + + + +
Returns
+Scalar test loss (if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid user-provided arguments. +
+ + + +
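+
+For example, a minimal sketch (random data, illustrative only) that evaluates
+one batch and returns the results as a dict:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
+model.compile(optimizer='sgd', loss='mse', metrics=['mae'])
+
+x = np.random.random((8, 4)).astype('float32')
+y = np.random.random((8, 1)).astype('float32')
+
+# Metrics are reset each call because reset_metrics defaults to True.
+results = model.test_on_batch(x, y, return_dict=True)
+print(results)  # e.g. {'loss': ..., 'mae': ...}
+```
+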

test_step

+ +View source + + + +The logic for one evaluation step. + +This method can be overridden to support custom evaluation logic. +This method is called by Model.make_test_function. + +This function should contain the mathematical logic for one step of +evaluation. +This typically includes the forward pass, loss calculation, and metrics +updates. + +Configuration details for *how* this logic is run (e.g. tf.function and +tf.distribute.Strategy settings) should be left to +Model.make_test_function, which can also be overridden. + + + + + + + + + + +
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+A `dict` containing values that will be passed to +`tf.keras.callbacks.CallbackList.on_train_batch_end`. Typically, the +values of the `Model`'s metrics are returned. +
+ + + +
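+
+A minimal sketch of overriding `test_step` (the `CustomModel` name is
+illustrative; the `train_step` override is analogous, with a gradient update
+added). It assumes the `compiled_loss`/`compiled_metrics` attributes that
+compiled models expose in TF 2.2+:
+
+```python
+import tensorflow as tf
+
+class CustomModel(tf.keras.Model):
+  def test_step(self, data):
+    x, y = data
+    y_pred = self(x, training=False)              # forward pass
+    self.compiled_loss(y, y_pred)                 # updates the loss tracker
+    self.compiled_metrics.update_state(y, y_pred)
+    return {m.name: m.result() for m in self.metrics}
+
+inputs = tf.keras.Input(shape=(4,))
+outputs = tf.keras.layers.Dense(1)(inputs)
+model = CustomModel(inputs, outputs)
+model.compile(loss='mse', metrics=['mae'])
+model.evaluate(tf.random.uniform((8, 4)), tf.random.uniform((8, 1)), verbose=0)
+```
+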

to_json

+ +View source + + + +Returns a JSON string containing the network configuration. + +To load a network from a JSON save file, use +keras.models.model_from_json(json_string, custom_objects={}). + + + + + + + + + + +
Arguments
+`**kwargs` + +Additional keyword arguments +to be passed to `json.dumps()`. +
+ + + + + + + + + + + +
Returns
+A JSON string. +
+ + + +
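+
+For example, a minimal sketch of the JSON round trip (architecture only; the
+weights and optimizer state are not part of the JSON string):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
+
+json_string = model.to_json()
+fresh_model = tf.keras.models.model_from_json(json_string)
+fresh_model.summary()
+```
+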

to_yaml

+ +View source + + + +Returns a yaml string containing the network configuration. + +To load a network from a yaml save file, use +keras.models.model_from_yaml(yaml_string, custom_objects={}). + +`custom_objects` should be a dictionary mapping +the names of custom losses / layers / etc to the corresponding +functions / classes. + + + + + + + + + + +
Arguments
+`**kwargs` + +Additional keyword arguments +to be passed to `yaml.dump()`. +
+ + + + + + + + + + + +
Returns
+A YAML string. +
+ + + + + + + + + + + + +
Raises
+`ImportError` + +if yaml module is not found. +
+ + + +

train_on_batch

+ +View source + + + +Runs a single gradient update on a single batch of data. + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, +if the model has named inputs. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). +
+`sample_weight` + +Optional array of the same length as x, containing +weights to apply to the model's loss for each sample. In the case of +temporal data, you can pass a 2D array with shape (samples, +sequence_length), to apply a different weight to every timestep of +every sample. In this case you should make sure to specify +sample_weight_mode="temporal" in compile(). +
+`class_weight` + +Optional dictionary mapping class indices (integers) to a +weight (float) to apply to the model's loss for the samples from this +class during training. This can be useful to tell the model to "pay +more attention" to samples from an under-represented class. +
+`reset_metrics` + +If `True`, the metrics returned will be only for this +batch. If `False`, the metrics will be statefully accumulated across +batches. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + + + + + + + + + + +
Returns
+Scalar training loss +(if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid user-provided arguments. +
+ + + +
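+
+For example, a minimal sketch of a hand-rolled loop (random data, illustrative
+only) that performs one gradient update per call:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
+model.compile(optimizer='sgd', loss='mse')
+
+x = np.random.random((8, 4)).astype('float32')
+y = np.random.random((8, 1)).astype('float32')
+
+for step in range(5):
+  loss = model.train_on_batch(x, y)  # one gradient update per call
+print(loss)
+```
+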

train_step

+ +View source + + + +The logic for one training step. + +This method can be overridden to support custom training logic. +This method is called by Model.make_train_function. + +This method should contain the mathematical logic for one step of training. +This typically includes the forward pass, loss calculation, backpropagation, +and metric updates. + +Configuration details for *how* this logic is run (e.g. tf.function and +tf.distribute.Strategy settings) should be left to +Model.make_train_function, which can also be overridden. + + + + + + + + + +
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+A `dict` containing values that will be passed to +`tf.keras.callbacks.CallbackList.on_train_batch_end`. Typically, the +values of the `Model`'s metrics are returned. Example: +`{'loss': 0.2, 'accuracy': 0.7}`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/activations.md b/site/en/api_docs/python/tf/keras/activations.md new file mode 100644 index 00000000000..373b1865cee --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations.md @@ -0,0 +1,53 @@ +description: Built-in activation functions. + +
+ + +
+ +# Module: tf.keras.activations + + + + + + + + + +Built-in activation functions. + + + +## Functions + +[`deserialize(...)`](../../tf/keras/activations/deserialize.md): Returns activation function denoted by input string. + +[`elu(...)`](../../tf/keras/activations/elu.md): Exponential linear unit. + +[`exponential(...)`](../../tf/keras/activations/exponential.md): Exponential activation function. + +[`get(...)`](../../tf/keras/activations/get.md): Returns function. + +[`hard_sigmoid(...)`](../../tf/keras/activations/hard_sigmoid.md): Hard sigmoid activation function. + +[`linear(...)`](../../tf/keras/activations/linear.md): Linear activation function. + +[`relu(...)`](../../tf/keras/activations/relu.md): Applies the rectified linear unit activation function. + +[`selu(...)`](../../tf/keras/activations/selu.md): Scaled Exponential Linear Unit (SELU). + +[`serialize(...)`](../../tf/keras/activations/serialize.md): Returns name attribute (`__name__`) of function. + +[`sigmoid(...)`](../../tf/keras/activations/sigmoid.md): Sigmoid activation function. + +[`softmax(...)`](../../tf/keras/activations/softmax.md): Softmax converts a real vector to a vector of categorical probabilities. + +[`softplus(...)`](../../tf/keras/activations/softplus.md): Softplus activation function. + +[`softsign(...)`](../../tf/keras/activations/softsign.md): Softsign activation function. + +[`swish(...)`](../../tf/keras/activations/swish.md): Swish activation function. + +[`tanh(...)`](../../tf/keras/activations/tanh.md): Hyperbolic tangent activation function. + diff --git a/site/en/api_docs/python/tf/keras/activations/deserialize.md b/site/en/api_docs/python/tf/keras/activations/deserialize.md new file mode 100644 index 00000000000..2f7216f8d61 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/deserialize.md @@ -0,0 +1,133 @@ +description: Returns activation function denoted by input string. + +
+ + +
+ +# tf.keras.activations.deserialize + + + + + + + + + +Returns activation function denoted by input string. + + + + + + + + + + + + + + + + + + + +
+`x` + +String +
+ + + + + + + + + + + +
+TensorFlow Activation function denoted by input string. +
+ + + +#### For example: + + + +``` +>>> tf.keras.activations.deserialize('linear') + +>>> tf.keras.activations.deserialize('sigmoid') + +>>> tf.keras.activations.deserialize('abcd') +Traceback (most recent call last): +... +ValueError: Unknown activation function:abcd +``` + + + + + + + + + + + + + +
+`name` + +The name of the activation function. +
+`custom_objects` + +A {name:value} dictionary for activations not built into +Keras. +
+ + + + + + + + + + + + +
+`ValueError` + +`Unknown activation function` if the input string does not +denote any defined TensorFlow activation function. +
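+
+As a sketch of the `custom_objects` argument (the `my_act` function is purely
+illustrative):
+
+```python
+import tensorflow as tf
+
+def my_act(x):
+  return tf.nn.relu(x) * 0.5
+
+# Built-in names need no extra information.
+relu_fn = tf.keras.activations.deserialize('relu')
+
+# Custom activations have to be supplied through custom_objects.
+restored = tf.keras.activations.deserialize(
+    'my_act', custom_objects={'my_act': my_act})
+```
+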
+ diff --git a/site/en/api_docs/python/tf/keras/activations/elu.md b/site/en/api_docs/python/tf/keras/activations/elu.md new file mode 100644 index 00000000000..ff60fa8311d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/elu.md @@ -0,0 +1,89 @@ +description: Exponential linear unit. + +
+ + +
+ +# tf.keras.activations.elu + + + + + + + + + +Exponential linear unit. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Input tensor. +
+`alpha` + +A scalar, slope of negative section. +
+ + + + + + + + + + + +
+The exponential linear activation: `x` if `x > 0` and +`alpha * (exp(x)-1)` if `x < 0`. +
+ + + +#### Reference: + +- [Fast and Accurate Deep Network Learning by Exponential + Linear Units (ELUs)](https://arxiv.org/abs/1511.07289) diff --git a/site/en/api_docs/python/tf/keras/activations/exponential.md b/site/en/api_docs/python/tf/keras/activations/exponential.md new file mode 100644 index 00000000000..0ba1fa4d436 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/exponential.md @@ -0,0 +1,88 @@ +description: Exponential activation function. + +
+ + +
+ +# tf.keras.activations.exponential + + + + + + + + + +Exponential activation function. + + + + + + + + + + +#### For example: + + + +``` +>>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32) +>>> b = tf.keras.activations.exponential(a) +>>> b.numpy() +array([ 0.04978707, 0.36787945, 1. , 2.7182817 , 20.085537 ], + dtype=float32) +``` + + + + + + + + + + +
+`x` + +Input tensor. +
+ + + + + + + + + + + +
+Tensor with exponential activation: `exp(x)`. Tensor will be of same +shape and dtype of input `x`. +
+ diff --git a/site/en/api_docs/python/tf/keras/activations/get.md b/site/en/api_docs/python/tf/keras/activations/get.md new file mode 100644 index 00000000000..ea2ef19f3da --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/get.md @@ -0,0 +1,114 @@ +description: Returns function. + +
+ + +
+ +# tf.keras.activations.get + + + + + + + + + +Returns function. + + + + + + + + + + + + + + + + + + + +
+`identifier` + +Function or string +
+ + + + + + + + + + + +
+Activation function denoted by input: +- `Linear activation function` if input is `None`. +- Function corresponding to the input string or input function. +
+ + + +#### For example: + + + +``` +>>> tf.keras.activations.get('softmax') + +>>> tf.keras.activations.get(tf.keras.activations.softmax) + +>>> tf.keras.activations.get(None) + +>>> tf.keras.activations.get(abs) + +>>> tf.keras.activations.get('abcd') +Traceback (most recent call last): +... +ValueError: Unknown activation function:abcd +``` + + + + + + + + + + +
+`ValueError` + +Input is an unknown function or string, i.e., the input does +not denote any defined function. +
+ diff --git a/site/en/api_docs/python/tf/keras/activations/hard_sigmoid.md b/site/en/api_docs/python/tf/keras/activations/hard_sigmoid.md new file mode 100644 index 00000000000..bcff87c0ab3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/hard_sigmoid.md @@ -0,0 +1,91 @@ +description: Hard sigmoid activation function. + +
+ + +
+ +# tf.keras.activations.hard_sigmoid + + + + + + + + + +Hard sigmoid activation function. + + + + + + + + + +Faster to compute than sigmoid activation. + +#### For example: + + + +``` +>>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32) +>>> b = tf.keras.activations.hard_sigmoid(a) +>>> b.numpy() +array([0. , 0.3, 0.5, 0.7, 1. ], dtype=float32) +``` + + + + + + + + + + +
+`x` + +Input tensor. +
+ + + + + + + + + + + +
+The hard sigmoid activation: + +- `0` if `x < -2.5` +- `1` if `x > 2.5` +- `0.2 * x + 0.5` if `-2.5 <= x <= 2.5`. +
+ diff --git a/site/en/api_docs/python/tf/keras/activations/linear.md b/site/en/api_docs/python/tf/keras/activations/linear.md new file mode 100644 index 00000000000..58d1c73657e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/linear.md @@ -0,0 +1,86 @@ +description: Linear activation function. + +
+ + +
+ +# tf.keras.activations.linear + + + + + + + + + +Linear activation function. + + + + + + + + + + +#### For example: + + + +``` +>>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32) +>>> b = tf.keras.activations.linear(a) +>>> b.numpy() +array([-3., -1., 0., 1., 3.], dtype=float32) +``` + + + + + + + + + + +
+`x` + +Input tensor. +
+ + + + + + + + + + + +
+the input unmodified. +
+ diff --git a/site/en/api_docs/python/tf/keras/activations/relu.md b/site/en/api_docs/python/tf/keras/activations/relu.md new file mode 100644 index 00000000000..3282ce8f1c3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/relu.md @@ -0,0 +1,123 @@ +description: Applies the rectified linear unit activation function. + +
+ + +
+ +# tf.keras.activations.relu + + + + + + + + + +Applies the rectified linear unit activation function. + + + + + + + + + +With default values, this returns the standard ReLU activation: +`max(x, 0)`, the element-wise maximum of 0 and the input tensor. + +Modifying default parameters allows you to use non-zero thresholds, +change the max value of the activation, +and to use a non-zero multiple of the input for values below the threshold. + +#### For example: + + + +``` +>>> foo = tf.constant([-10, -5, 0.0, 5, 10], dtype = tf.float32) +>>> tf.keras.activations.relu(foo).numpy() +array([ 0., 0., 0., 5., 10.], dtype=float32) +>>> tf.keras.activations.relu(foo, alpha=0.5).numpy() +array([-5. , -2.5, 0. , 5. , 10. ], dtype=float32) +>>> tf.keras.activations.relu(foo, max_value=5).numpy() +array([0., 0., 0., 5., 5.], dtype=float32) +>>> tf.keras.activations.relu(foo, threshold=5).numpy() +array([-0., -0., 0., 0., 10.], dtype=float32) +``` + + + + + + + + + + + + + + + + + + + +
+`x` + +Input `tensor` or `variable`. +
+`alpha` + +A `float` that governs the slope for values lower than the +threshold. +
+`max_value` + +A `float` that sets the saturation threshold (the largest value +the function will return). +
+`threshold` + +A `float` giving the threshold value of the activation function +below which values will be damped or set to zero. +
+ + + + + + + + + + + +
+A `Tensor` representing the input tensor, +transformed by the relu activation function. +Tensor will be of the same shape and dtype of input `x`. +
+ diff --git a/site/en/api_docs/python/tf/keras/activations/selu.md b/site/en/api_docs/python/tf/keras/activations/selu.md new file mode 100644 index 00000000000..af0461f89cf --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/selu.md @@ -0,0 +1,124 @@ +description: Scaled Exponential Linear Unit (SELU). + +
+ + +
+ +# tf.keras.activations.selu + + + + + + + + + +Scaled Exponential Linear Unit (SELU). + + + + + + + + + +The Scaled Exponential Linear Unit (SELU) activation function is: +`scale * x` if `x > 0` and `scale * alpha * (exp(x) - 1)` if `x < 0` +where `alpha` and `scale` are pre-defined constants +(`alpha = 1.67326324` +and `scale = 1.05070098`). +The SELU activation function multiplies `scale` (> 1) with the output of the +[elu](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/activations/elu) +(Exponential Linear Unit (ELU)) activation to ensure a slope larger than one +for positive net inputs. + +The values of `alpha` and `scale` are +chosen so that the mean and variance of the inputs are preserved +between two consecutive layers as long as the weights are initialized +correctly (see the +[`lecun_normal` initialization](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/lecun_normal)) +and the number of inputs is "large enough" +(see references for more information). + +![](https://cdn-images-1.medium.com/max/1600/1*m0e8lZU_Zrkh4ESfQkY2Pw.png) +(Courtesy: Blog on Towards DataScience at +https://towardsdatascience.com/selu-make-fnns-great-again-snn-8d61526802a9) + +#### Example Usage: + + + +``` +>>> n_classes = 10  # 10-class problem +>>> from tensorflow.keras.layers import Dense +>>> model = tf.keras.Sequential() +>>> model.add(Dense(64, kernel_initializer='lecun_normal', +... activation='selu', input_shape=(28, 28, 1))) +>>> model.add(Dense(32, kernel_initializer='lecun_normal', +... activation='selu')) +>>> model.add(Dense(16, kernel_initializer='lecun_normal', +... activation='selu')) +>>> model.add(Dense(n_classes, activation='softmax')) +``` + + + + + + + + + +
+`x` + +A tensor or variable to compute the activation function for. +
+ + + + + + + + + + + +
+The scaled exponential unit activation: `scale * elu(x, alpha)`. +
+ + +#### Note: + +- To be used together with the initialization [`lecun_normal`](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/lecun_normal). +- To be used together with the dropout variant [`AlphaDropout`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/AlphaDropout). + +#### References: + +- [Self-Normalizing Neural Networks (Klambauer et al., 2017)](https://arxiv.org/abs/1706.02515) diff --git a/site/en/api_docs/python/tf/keras/activations/serialize.md b/site/en/api_docs/python/tf/keras/activations/serialize.md new file mode 100644 index 00000000000..a21123d72b4 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/serialize.md @@ -0,0 +1,107 @@ +description: Returns name attribute (__name__) of function. +
+ + +
+ +# tf.keras.activations.serialize + + + + + + + + + +Returns name attribute (`__name__`) of function. + + + + + + + + + + + + + + + + + + + +
+`activation` + +Function +
+ + + + + + + + + + + +
+String denoting the name attribute of the input function +
+ + + +#### For example: + + + +``` +>>> tf.keras.activations.serialize(tf.keras.activations.tanh) +'tanh' +>>> tf.keras.activations.serialize(tf.keras.activations.sigmoid) +'sigmoid' +>>> tf.keras.activations.serialize('abcd') +Traceback (most recent call last): +... +ValueError: ('Cannot serialize', 'abcd') +``` + + + + + + + + + + +
+`ValueError` + +The input function is not a valid one. +
+ diff --git a/site/en/api_docs/python/tf/keras/activations/sigmoid.md b/site/en/api_docs/python/tf/keras/activations/sigmoid.md new file mode 100644 index 00000000000..100534f4f42 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/sigmoid.md @@ -0,0 +1,95 @@ +description: Sigmoid activation function. + +
+ + +
+ +# tf.keras.activations.sigmoid + + + + + + + + + +Sigmoid activation function. + + + + + + + + + +Applies the sigmoid activation function. The sigmoid function is defined as +1 divided by (1 + exp(-x)). Its curve is S-shaped, like a smoothed +version of the Heaviside (unit step) function. For small values +(<-5) the sigmoid returns a value close to zero, and for larger values (>5) +the result of the function gets close to 1. + +Sigmoid is equivalent to a 2-element Softmax, where the second element is +assumed to be zero. + +#### For example: + + + +``` +>>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32) +>>> b = tf.keras.activations.sigmoid(a) +>>> b.numpy() >= 0.0 +array([ True, True, True, True, True]) +``` + + + + + + + + +
+`x` + +Input tensor. +
+ + + + + + + + + + + +
+Tensor with the sigmoid activation: `(1.0 / (1.0 + exp(-x)))`. +Tensor will be of same shape and dtype of input `x`. +
+ diff --git a/site/en/api_docs/python/tf/keras/activations/softmax.md b/site/en/api_docs/python/tf/keras/activations/softmax.md new file mode 100644 index 00000000000..6bbce22ba89 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/softmax.md @@ -0,0 +1,111 @@ +description: Softmax converts a real vector to a vector of categorical probabilities. + +
+ + +
+ +# tf.keras.activations.softmax + + + + + + + + + +Softmax converts a real vector to a vector of categorical probabilities. + + + + + + + + + +The elements of the output vector are in range (0, 1) and sum to 1. + +Each vector is handled independently. The `axis` argument sets which axis +of the input the function is applied along. + +Softmax is often used as the activation for the last +layer of a classification network because the result could be interpreted as +a probability distribution. + +The softmax of each vector `x` is calculated by `exp(x)/tf.reduce_sum(exp(x))`. +The input values are the log-odds of the resulting probability. + + + + + + + + + + + + +
+`x` + +Input tensor. +
+`axis` + +Integer, axis along which the softmax normalization is applied. +
+ + + + + + + + + + + +
+Tensor, output of softmax transformation (all values are non-negative +and sum to 1). +
+ + + + + + + + + + + + +
+`ValueError` + +In case `dim(x) == 1`. +
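+
+For example, a small sketch applying softmax along the last axis of a batch of
+logits:
+
+```python
+import tensorflow as tf
+
+logits = tf.constant([[1.0, 2.0, 3.0],
+                      [2.0, 2.0, 2.0]])
+
+probs = tf.keras.activations.softmax(logits, axis=-1)
+print(probs.numpy())                          # entries lie in (0, 1)
+print(tf.reduce_sum(probs, axis=-1).numpy())  # each row sums to ~1.0
+```
+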
+ diff --git a/site/en/api_docs/python/tf/keras/activations/softplus.md b/site/en/api_docs/python/tf/keras/activations/softplus.md new file mode 100644 index 00000000000..9f76e6f208b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/softplus.md @@ -0,0 +1,75 @@ +description: Softplus activation function. + +
+ + +
+ +# tf.keras.activations.softplus + + + + + + + + + +Softplus activation function. + + + + + + + + + + + + + + + + + + + +
+`x` + +Input tensor. +
+ + + + + + + + + + + +
+The softplus activation: `log(exp(x) + 1)`. +
+ diff --git a/site/en/api_docs/python/tf/keras/activations/softsign.md b/site/en/api_docs/python/tf/keras/activations/softsign.md new file mode 100644 index 00000000000..7b0f93e409c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/softsign.md @@ -0,0 +1,75 @@ +description: Softsign activation function. + +
+ + +
+ +# tf.keras.activations.softsign + + + + + + + + + +Softsign activation function. + + + + + + + + + + + + + + + + + + + +
+`x` + +Input tensor. +
+ + + + + + + + + + + +
+The softsign activation: `x / (abs(x) + 1)`. +
+ diff --git a/site/en/api_docs/python/tf/keras/activations/swish.md b/site/en/api_docs/python/tf/keras/activations/swish.md new file mode 100644 index 00000000000..402962e4776 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/swish.md @@ -0,0 +1,75 @@ +description: Swish activation function. + +
+ + +
+ +# tf.keras.activations.swish + + + + + + + + + +Swish activation function. + + + + + + + + + + + + + + + + + + + +
+`x` + +Input tensor. +
+ + + + + + + + + + + +
+The swish activation applied to `x`. +
+ diff --git a/site/en/api_docs/python/tf/keras/activations/tanh.md b/site/en/api_docs/python/tf/keras/activations/tanh.md new file mode 100644 index 00000000000..4cba96df165 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/activations/tanh.md @@ -0,0 +1,88 @@ +description: Hyperbolic tangent activation function. + +
+ + +
+ +# tf.keras.activations.tanh + + + + + + + + + +Hyperbolic tangent activation function. + + + + + + + + + + +#### For example: + + + +``` +>>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32) +>>> b = tf.keras.activations.tanh(a) +>>> b.numpy() +array([-0.9950547, -0.7615942, 0. , 0.7615942, 0.9950547], + dtype=float32) +``` + + + + + + + + + + +
+`x` + +Input tensor. +
+ + + + + + + + + + + +
+Tensor of same shape and dtype of input `x`, with tanh activation: +`tanh(x) = sinh(x)/cosh(x) = ((exp(x) - exp(-x))/(exp(x) + exp(-x)))`. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications.md b/site/en/api_docs/python/tf/keras/applications.md new file mode 100644 index 00000000000..f5560bbd8dd --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications.md @@ -0,0 +1,87 @@ +description: Keras Applications are canned architectures with pre-trained weights. + +
+ + +
+ +# Module: tf.keras.applications + + + + + + + + + +Keras Applications are canned architectures with pre-trained weights. + + + +## Modules + +[`densenet`](../../tf/keras/applications/densenet.md) module: DenseNet models for Keras. + +[`imagenet_utils`](../../tf/keras/applications/imagenet_utils.md) module: Utilities for ImageNet data preprocessing & prediction decoding. + +[`inception_resnet_v2`](../../tf/keras/applications/inception_resnet_v2.md) module: Inception-ResNet V2 model for Keras. + +[`inception_v3`](../../tf/keras/applications/inception_v3.md) module: Inception V3 model for Keras. + +[`mobilenet`](../../tf/keras/applications/mobilenet.md) module: MobileNet v1 models for Keras. + +[`mobilenet_v2`](../../tf/keras/applications/mobilenet_v2.md) module: MobileNet v2 models for Keras. + +[`nasnet`](../../tf/keras/applications/nasnet.md) module: NASNet-A models for Keras. + +[`resnet`](../../tf/keras/applications/resnet.md) module: ResNet models for Keras. + +[`resnet50`](../../tf/keras/applications/resnet50.md) module: Public API for tf.keras.applications.resnet50 namespace. + +[`resnet_v2`](../../tf/keras/applications/resnet_v2.md) module: ResNet v2 models for Keras. + +[`vgg16`](../../tf/keras/applications/vgg16.md) module: VGG16 model for Keras. + +[`vgg19`](../../tf/keras/applications/vgg19.md) module: VGG19 model for Keras. + +[`xception`](../../tf/keras/applications/xception.md) module: Xception V1 model for Keras. + +## Functions + +[`DenseNet121(...)`](../../tf/keras/applications/DenseNet121.md): Instantiates the Densenet121 architecture. + +[`DenseNet169(...)`](../../tf/keras/applications/DenseNet169.md): Instantiates the Densenet169 architecture. + +[`DenseNet201(...)`](../../tf/keras/applications/DenseNet201.md): Instantiates the Densenet201 architecture. + +[`InceptionResNetV2(...)`](../../tf/keras/applications/InceptionResNetV2.md): Instantiates the Inception-ResNet v2 architecture. + +[`InceptionV3(...)`](../../tf/keras/applications/InceptionV3.md): Instantiates the Inception v3 architecture. + +[`MobileNet(...)`](../../tf/keras/applications/MobileNet.md): Instantiates the MobileNet architecture. + +[`MobileNetV2(...)`](../../tf/keras/applications/MobileNetV2.md): Instantiates the MobileNetV2 architecture. + +[`NASNetLarge(...)`](../../tf/keras/applications/NASNetLarge.md): Instantiates a NASNet model in ImageNet mode. + +[`NASNetMobile(...)`](../../tf/keras/applications/NASNetMobile.md): Instantiates a Mobile NASNet model in ImageNet mode. + +[`ResNet101(...)`](../../tf/keras/applications/ResNet101.md): Instantiates the ResNet101 architecture. + +[`ResNet101V2(...)`](../../tf/keras/applications/ResNet101V2.md): Instantiates the ResNet101V2 architecture. + +[`ResNet152(...)`](../../tf/keras/applications/ResNet152.md): Instantiates the ResNet152 architecture. + +[`ResNet152V2(...)`](../../tf/keras/applications/ResNet152V2.md): Instantiates the ResNet152V2 architecture. + +[`ResNet50(...)`](../../tf/keras/applications/ResNet50.md): Instantiates the ResNet50 architecture. + +[`ResNet50V2(...)`](../../tf/keras/applications/ResNet50V2.md): Instantiates the ResNet50V2 architecture. + +[`VGG16(...)`](../../tf/keras/applications/VGG16.md): Instantiates the VGG16 model. + +[`VGG19(...)`](../../tf/keras/applications/VGG19.md): Instantiates the VGG19 architecture. + +[`Xception(...)`](../../tf/keras/applications/Xception.md): Instantiates the Xception architecture. 
+ diff --git a/site/en/api_docs/python/tf/keras/applications/DenseNet121.md b/site/en/api_docs/python/tf/keras/applications/DenseNet121.md new file mode 100644 index 00000000000..21cc1c99e1f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/DenseNet121.md @@ -0,0 +1,139 @@ +description: Instantiates the Densenet121 architecture. + +
+ + +
+ +# tf.keras.applications.DenseNet121 + + + + + + + + + +Instantiates the Densenet121 architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` (with `'channels_last'` data format) +or `(3, 224, 224)` (with `'channels_first'` data format). +It should have exactly 3 inputs channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+ + + + + + + + + + + +
+A Keras model instance. +
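+
+As a minimal sketch of using the arguments above for feature extraction
+(`weights=None` keeps the example self-contained; pass `weights='imagenet'` to
+download the pre-trained weights instead):
+
+```python
+import tensorflow as tf
+
+# No classification head; global average pooling yields one vector per image.
+base = tf.keras.applications.DenseNet121(
+    include_top=False,
+    weights=None,
+    input_shape=(224, 224, 3),
+    pooling='avg')
+
+images = tf.random.uniform((2, 224, 224, 3))
+features = base(images)
+print(features.shape)  # (2, num_channels) pooled feature vectors
+```
+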
+ diff --git a/site/en/api_docs/python/tf/keras/applications/DenseNet169.md b/site/en/api_docs/python/tf/keras/applications/DenseNet169.md new file mode 100644 index 00000000000..7fafb37a515 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/DenseNet169.md @@ -0,0 +1,139 @@ +description: Instantiates the Densenet169 architecture. + +
+ + +
+ +# tf.keras.applications.DenseNet169 + + + + + + + + + +Instantiates the Densenet169 architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` (with `'channels_last'` data format) +or `(3, 224, 224)` (with `'channels_first'` data format). +It should have exactly 3 inputs channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+ + + + + + + + + + + +
+A Keras model instance. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/DenseNet201.md b/site/en/api_docs/python/tf/keras/applications/DenseNet201.md new file mode 100644 index 00000000000..bc874a7f3ef --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/DenseNet201.md @@ -0,0 +1,139 @@ +description: Instantiates the Densenet201 architecture. + +
+ + +
+ +# tf.keras.applications.DenseNet201 + + + + + + + + + +Instantiates the Densenet201 architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` (with `'channels_last'` data format) +or `(3, 224, 224)` (with `'channels_first'` data format). +It should have exactly 3 inputs channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+ + + + + + + + + + + +
+A Keras model instance. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/InceptionResNetV2.md b/site/en/api_docs/python/tf/keras/applications/InceptionResNetV2.md new file mode 100644 index 00000000000..9393dab5c20 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/InceptionResNetV2.md @@ -0,0 +1,182 @@ +description: Instantiates the Inception-ResNet v2 architecture. + +
+ + +
+ +# tf.keras.applications.InceptionResNetV2 + + + + + + + + + +Instantiates the Inception-ResNet v2 architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + +Caution: Be sure to properly pre-process your inputs to the application. +Please see applications.inception_resnet_v2.preprocess_input for an example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is `False` (otherwise the input shape +has to be `(299, 299, 3)` (with `'channels_last'` data format) +or `(3, 299, 299)` (with `'channels_first'` data format). +It should have exactly 3 inputs channels, +and width and height should be no smaller than 75. +E.g. `(150, 150, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the last convolutional block. +- `'avg'` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `'max'` means that global max pooling will be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is `True`, and +if no `weights` argument is specified. +
+`classifier_activation` + +A `str` or callable. The activation function to use +on the "top" layer. Ignored unless `include_top=True`. Set +`classifier_activation=None` to return the logits of the "top" layer. +
+`**kwargs` + +For backwards compatibility only. +
+ + + + + + + + + + + +
+A keras.Model instance. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +in case of invalid argument for `weights`, +or invalid input shape. +
+`ValueError` + +if `classifier_activation` is not `softmax` or `None` when +using a pretrained top layer. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/InceptionV3.md b/site/en/api_docs/python/tf/keras/applications/InceptionV3.md new file mode 100644 index 00000000000..40759137687 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/InceptionV3.md @@ -0,0 +1,184 @@ +description: Instantiates the Inception v3 architecture. + +
+ + +
+ +# tf.keras.applications.InceptionV3 + + + + + + + + + +Instantiates the Inception v3 architecture. + + + + + + + + + + +#### Reference paper: + + +- [Rethinking the Inception Architecture for Computer Vision]( + http://arxiv.org/abs/1512.00567) (CVPR 2016) + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in the tf.keras.backend.image_data_format(). + +Caution: Be sure to properly pre-process your inputs to the application. +Please see applications.inception_v3.preprocess_input for an example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +Boolean, whether to include the fully-connected +layer at the top, as the last layer of the network. Default to `True`. +
+`weights` + +One of `None` (random initialization), +`imagenet` (pre-training on ImageNet), +or the path to the weights file to be loaded. Default to `imagenet`. +
+`input_tensor` + +Optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. `input_tensor` is useful for sharing +inputs between multiple different networks. Default to None. +
+`input_shape` + +Optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(299, 299, 3)` (with `channels_last` data format) +or `(3, 299, 299)` (with `channels_first` data format). +It should have exactly 3 inputs channels, +and width and height should be no smaller than 75. +E.g. `(150, 150, 3)` would be one valid value. +`input_shape` will be ignored if the `input_tensor` is provided. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` (default) means that the output of the model will be +the 4D tensor output of the last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. Default to 1000. +
+`classifier_activation` + +A `str` or callable. The activation function to use +on the "top" layer. Ignored unless `include_top=True`. Set +`classifier_activation=None` to return the logits of the "top" layer. +
+ + + + + + + + + + + +
+A keras.Model instance. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +in case of invalid argument for `weights`, +or invalid input shape. +
+`ValueError` + +if `classifier_activation` is not `softmax` or `None` when +using a pretrained top layer. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/MobileNet.md b/site/en/api_docs/python/tf/keras/applications/MobileNet.md new file mode 100644 index 00000000000..f41a25be473 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/MobileNet.md @@ -0,0 +1,218 @@ +description: Instantiates the MobileNet architecture. + +
+ + +
+ +# tf.keras.applications.MobileNet + + + + + + + + + +Instantiates the MobileNet architecture. + + + + + + + + + + +#### Reference paper: + + +- [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision + Applications](https://arxiv.org/abs/1704.04861) + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in the tf.keras.backend.image_data_format(). + +Caution: Be sure to properly pre-process your inputs to the application. +Please see applications.mobilenet.preprocess_input for an example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_shape` + +Optional shape tuple, only to be specified if `include_top` +is False (otherwise the input shape has to be `(224, 224, 3)` (with +`channels_last` data format) or (3, 224, 224) (with `channels_first` +data format). It should have exactly 3 inputs channels, and width and +height should be no smaller than 32. E.g. `(200, 200, 3)` would be one +valid value. Default to `None`. +`input_shape` will be ignored if the `input_tensor` is provided. +
+`alpha` + +Controls the width of the network. This is known as the width +multiplier in the MobileNet paper. - If `alpha` < 1.0, proportionally +decreases the number of filters in each layer. - If `alpha` > 1.0, +proportionally increases the number of filters in each layer. - If +`alpha` = 1, default number of filters from the paper are used at each +layer. Default to 1.0. +
+`depth_multiplier` + +Depth multiplier for depthwise convolution. This is +called the resolution multiplier in the MobileNet paper. Default to 1.0. +
+`dropout` + +Dropout rate. Default to 0.001. +
+`include_top` + +Boolean, whether to include the fully-connected layer at the +top of the network. Default to `True`. +
+`weights` + +One of `None` (random initialization), 'imagenet' (pre-training +on ImageNet), or the path to the weights file to be loaded. Default to +`imagenet`. +
+`input_tensor` + +Optional Keras tensor (i.e. output of layers.Input()) to +use as image input for the model. `input_tensor` is useful for sharing +inputs between multiple different networks. Default to None. +
+`pooling` + +Optional pooling mode for feature extraction when `include_top` +is `False`. +- `None` (default) means that the output of the model will be +the 4D tensor output of the last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will be applied. +
+`classes` + +Optional number of classes to classify images into, only to be +specified if `include_top` is True, and if no `weights` argument is +specified. Defaults to 1000. +
+`classifier_activation` + +A `str` or callable. The activation function to use +on the "top" layer. Ignored unless `include_top=True`. Set +`classifier_activation=None` to return the logits of the "top" layer. +
+`**kwargs` + +For backwards compatibility only. +
+ + + + + + + + + + + +
+A keras.Model instance. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +in case of invalid argument for `weights`, +or invalid input shape. +
+`ValueError` + +if `classifier_activation` is not `softmax` or `None` when +using a pretrained top layer. +
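+
+As a minimal sketch of the width multiplier and the pre-processing caution
+above (`weights=None` keeps the example self-contained; with
+`weights='imagenet'` only specific `alpha` values have published weights):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+# alpha=0.5 halves the number of filters in every layer.
+model = tf.keras.applications.MobileNet(alpha=0.5, weights=None)
+
+# Inputs must go through the matching preprocess_input.
+images = np.random.uniform(0, 255, (1, 224, 224, 3)).astype('float32')
+x = tf.keras.applications.mobilenet.preprocess_input(images)
+preds = model.predict(x)
+print(preds.shape)  # (1, 1000) class scores
+```
+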
+ diff --git a/site/en/api_docs/python/tf/keras/applications/MobileNetV2.md b/site/en/api_docs/python/tf/keras/applications/MobileNetV2.md new file mode 100644 index 00000000000..796c82f17bc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/MobileNetV2.md @@ -0,0 +1,212 @@ +description: Instantiates the MobileNetV2 architecture. + +
+ + +
+ +# tf.keras.applications.MobileNetV2 + + + + + + + + + +Instantiates the MobileNetV2 architecture. + + + + + + + + + + +#### Reference paper: + + +- [MobileNetV2: Inverted Residuals and Linear Bottlenecks] +(https://arxiv.org/abs/1801.04381) (CVPR 2018) + +Optionally loads weights pre-trained on ImageNet. + +Caution: Be sure to properly pre-process your inputs to the application. +Please see applications.mobilenet_v2.preprocess_input for an example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_shape` + +Optional shape tuple, to be specified if you would +like to use a model with an input image resolution that is not +(224, 224, 3). +It should have exactly 3 input channels (224, 224, 3). +You can also omit this option if you would like +to infer input_shape from an input_tensor. +If you choose to include both input_tensor and input_shape, then +input_shape will be used if they match; if the shapes +do not match, an error will be thrown. +E.g. `(160, 160, 3)` would be one valid value. +
+`alpha` + +Float between 0 and 1. controls the width of the network. +This is known as the width multiplier in the MobileNetV2 paper, +but the name is kept for consistency with `applications.MobileNetV1` +model in Keras. +- If `alpha` < 1.0, proportionally decreases the number +of filters in each layer. +- If `alpha` > 1.0, proportionally increases the number +of filters in each layer. +- If `alpha` = 1, default number of filters from the paper +are used at each layer. +
+`include_top` + +Boolean, whether to include the fully-connected +layer at the top of the network. Defaults to `True`. +
+`weights` + +String, one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +Optional Keras tensor (i.e. output of +layers.Input()) +to use as image input for the model. +
+`pooling` + +String, optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model +will be the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a +2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +Integer, optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+`classifier_activation` + +A `str` or callable. The activation function to use +on the "top" layer. Ignored unless `include_top=True`. Set +`classifier_activation=None` to return the logits of the "top" layer. +
+`**kwargs` + +For backwards compatibility only. +
+ + + + + + + + + + + +
+A keras.Model instance. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +in case of invalid argument for `weights`, +or invalid input shape or invalid alpha, rows when +weights='imagenet' +
+`ValueError` + +if `classifier_activation` is not `softmax` or `None` when +using a pretrained top layer. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/NASNetLarge.md b/site/en/api_docs/python/tf/keras/applications/NASNetLarge.md new file mode 100644 index 00000000000..ab864647a10 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/NASNetLarge.md @@ -0,0 +1,165 @@ +description: Instantiates a NASNet model in ImageNet mode. + +
+ + +
+ +# tf.keras.applications.NASNetLarge + + + + + + + + + +Instantiates a NASNet model in ImageNet mode. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_shape` + +Optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(331, 331, 3)` for NASNetLarge. +It should have exactly 3 inputs channels, +and width and height should be no smaller than 32. +E.g. `(224, 224, 3)` would be one valid value. +
+`include_top` + +Whether to include the fully-connected +layer at the top of the network. +
+`weights` + +`None` (random initialization) or +`imagenet` (ImageNet weights) +
+`input_tensor` + +Optional Keras tensor (i.e. output of +layers.Input()) +to use as image input for the model. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model +will be the 4D tensor output of the +last convolutional layer. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional layer, and thus +the output of the model will be a +2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +Optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+ + + + + + + + + + + +
+A Keras model instance. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +in case of invalid argument for `weights`, +or invalid input shape. +
+`RuntimeError` + +If attempting to run this model with a +backend that does not support separable convolutions. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/NASNetMobile.md b/site/en/api_docs/python/tf/keras/applications/NASNetMobile.md new file mode 100644 index 00000000000..6b534682dcc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/NASNetMobile.md @@ -0,0 +1,165 @@ +description: Instantiates a Mobile NASNet model in ImageNet mode. + +
+ + +
+ +# tf.keras.applications.NASNetMobile + + + + + + + + + +Instantiates a Mobile NASNet model in ImageNet mode. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_shape` + +Optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` for NASNetMobile +It should have exactly 3 inputs channels, +and width and height should be no smaller than 32. +E.g. `(224, 224, 3)` would be one valid value. +
+`include_top` + +Whether to include the fully-connected +layer at the top of the network. +
+`weights` + +`None` (random initialization) or +`imagenet` (ImageNet weights) +
+`input_tensor` + +Optional Keras tensor (i.e. output of +layers.Input()) +to use as image input for the model. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model +will be the 4D tensor output of the +last convolutional layer. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional layer, and thus +the output of the model will be a +2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +Optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+ + + + + + + + + + + +
+A Keras model instance. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +In case of invalid argument for `weights`, +or invalid input shape. +
+`RuntimeError` + +If attempting to run this model with a +backend that does not support separable convolutions. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/ResNet101.md b/site/en/api_docs/python/tf/keras/applications/ResNet101.md new file mode 100644 index 00000000000..07d75e28496 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/ResNet101.md @@ -0,0 +1,139 @@ +description: Instantiates the ResNet101 architecture. + +
+ + +
+ +# tf.keras.applications.ResNet101 + + + + + + + + + +Instantiates the ResNet101 architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` (with `'channels_last'` data format) +or `(3, 224, 224)` (with `'channels_first'` data format). +It should have exactly 3 inputs channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+ + + + + + + + + + + +
+A Keras model instance. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/ResNet101V2.md b/site/en/api_docs/python/tf/keras/applications/ResNet101V2.md new file mode 100644 index 00000000000..395f526b4f4 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/ResNet101V2.md @@ -0,0 +1,151 @@ +description: Instantiates the ResNet101V2 architecture. + +
+ + +
+ +# tf.keras.applications.ResNet101V2 + + + + + + + + + +Instantiates the ResNet101V2 architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + +Caution: Be sure to properly pre-process your inputs to the application. +Please see applications.resnet_v2.preprocess_input for an example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` (with `'channels_last'` data format) +or `(3, 224, 224)` (with `'channels_first'` data format). +It should have exactly 3 inputs channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+`classifier_activation` + +A `str` or callable. The activation function to use +on the "top" layer. Ignored unless `include_top=True`. Set +`classifier_activation=None` to return the logits of the "top" layer. +
+ + + + + + + + + + + +
+A keras.Model instance. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/ResNet152.md b/site/en/api_docs/python/tf/keras/applications/ResNet152.md new file mode 100644 index 00000000000..a42a5a9db8e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/ResNet152.md @@ -0,0 +1,139 @@ +description: Instantiates the ResNet152 architecture. + +
+ + +
+ +# tf.keras.applications.ResNet152 + + + + + + + + + +Instantiates the ResNet152 architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` (with `'channels_last'` data format) +or `(3, 224, 224)` (with `'channels_first'` data format)). +It should have exactly 3 input channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+ + + + + + + + + + + +
+A Keras model instance. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/ResNet152V2.md b/site/en/api_docs/python/tf/keras/applications/ResNet152V2.md new file mode 100644 index 00000000000..c4be1385e14 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/ResNet152V2.md @@ -0,0 +1,151 @@ +description: Instantiates the ResNet152V2 architecture. + +
+ + +
+ +# tf.keras.applications.ResNet152V2 + + + + + + + + + +Instantiates the ResNet152V2 architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + +Caution: Be sure to properly pre-process your inputs to the application. +Please see applications.resnet_v2.preprocess_input for an example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` (with `'channels_last'` data format) +or `(3, 224, 224)` (with `'channels_first'` data format)). +It should have exactly 3 input channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+`classifier_activation` + +A `str` or callable. The activation function to use +on the "top" layer. Ignored unless `include_top=True`. Set +`classifier_activation=None` to return the logits of the "top" layer. +
+ + + + + + + + + + + +
+A keras.Model instance. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/ResNet50.md b/site/en/api_docs/python/tf/keras/applications/ResNet50.md new file mode 100644 index 00000000000..e02a9154d88 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/ResNet50.md @@ -0,0 +1,139 @@ +description: Instantiates the ResNet50 architecture. + +
+ + +
+ +# tf.keras.applications.ResNet50 + + + + + + + + + +Instantiates the ResNet50 architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` (with `'channels_last'` data format) +or `(3, 224, 224)` (with `'channels_first'` data format)). +It should have exactly 3 input channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+ + + + + + + + + + + +
+A Keras model instance. +
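A short sketch of the `input_tensor` argument, wiring the matching `applications.resnet.preprocess_input` step in front of the network. The 160x160 input size is an arbitrary choice that satisfies the minimum of 32, and `weights=None` avoids a download; none of these values are mandated by the signature above:

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(160, 160, 3))        # raw [0, 255] pixels
x = tf.keras.applications.resnet.preprocess_input(inputs)  # caffe-style prep
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_tensor=x, pooling='max')

model = tf.keras.Model(inputs=inputs, outputs=backbone.output)
print(model.output_shape)  # (None, 2048)
```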
+ diff --git a/site/en/api_docs/python/tf/keras/applications/ResNet50V2.md b/site/en/api_docs/python/tf/keras/applications/ResNet50V2.md new file mode 100644 index 00000000000..3ab43ba6dd2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/ResNet50V2.md @@ -0,0 +1,151 @@ +description: Instantiates the ResNet50V2 architecture. + +
+ + +
+ +# tf.keras.applications.ResNet50V2 + + + + + + + + + +Instantiates the ResNet50V2 architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. + +Caution: Be sure to properly pre-process your inputs to the application. +Please see applications.resnet_v2.preprocess_input for an example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor (i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` (with `'channels_last'` data format) +or `(3, 224, 224)` (with `'channels_first'` data format)). +It should have exactly 3 input channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+`classifier_activation` + +A `str` or callable. The activation function to use +on the "top" layer. Ignored unless `include_top=True`. Set +`classifier_activation=None` to return the logits of the "top" layer. +
+ + + + + + + + + + + +
+A keras.Model instance. +
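As the caution above suggests, inputs should first go through applications.resnet_v2.preprocess_input. A small sketch with random pixels and random weights (`weights=None`), so nothing is downloaded:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50V2(weights=None)  # random initialization

raw = np.random.uniform(0, 255, size=(1, 224, 224, 3)).astype('float32')
probs = model.predict(tf.keras.applications.resnet_v2.preprocess_input(raw))
print(probs.shape)  # (1, 1000)
```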
+ diff --git a/site/en/api_docs/python/tf/keras/applications/VGG16.md b/site/en/api_docs/python/tf/keras/applications/VGG16.md new file mode 100644 index 00000000000..9ad9880f02d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/VGG16.md @@ -0,0 +1,184 @@ +description: Instantiates the VGG16 model. + +
+ + +
+ +# tf.keras.applications.VGG16 + + + + + + + + + +Instantiates the VGG16 model. + + + + + + + + + +By default, it loads weights pre-trained on ImageNet. Check 'weights' for +other options. + +This model can be built both with 'channels_first' data format +(channels, height, width) or 'channels_last' data format +(height, width, channels). + +The default input size for this model is 224x224. + +Caution: Be sure to properly pre-process your inputs to the application. +Please see applications.vgg16.preprocess_input for an example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the 3 fully-connected +layers at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor +(i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` +(with `channels_last` data format) +or `(3, 224, 224)` (with `channels_first` data format)). +It should have exactly 3 input channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+`classifier_activation` + +A `str` or callable. The activation function to use +on the "top" layer. Ignored unless `include_top=True`. Set +`classifier_activation=None` to return the logits of the "top" layer. +
+ + + + + + + + + + + +
+A keras.Model instance. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +in case of invalid argument for `weights`, +or invalid input shape. +
+`ValueError` + +if `classifier_activation` is not `softmax` or `None` when +using a pretrained top layer. +
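A typical end-to-end classification sketch. It assumes network access (to fetch the ImageNet weights and the class-index file), the Pillow package for image loading, and `'elephant.jpg'` is only a placeholder path for an RGB image of your own:

```python
import numpy as np
import tensorflow as tf

# 'elephant.jpg' is a placeholder; any RGB image file works.
img = tf.keras.preprocessing.image.load_img('elephant.jpg',
                                             target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
x = tf.keras.applications.vgg16.preprocess_input(x)

model = tf.keras.applications.VGG16(weights='imagenet')
preds = model.predict(x)
# (class_name, class_description, score) tuples for the 3 best guesses.
print(tf.keras.applications.vgg16.decode_predictions(preds, top=3)[0])
```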
+ diff --git a/site/en/api_docs/python/tf/keras/applications/VGG19.md b/site/en/api_docs/python/tf/keras/applications/VGG19.md new file mode 100644 index 00000000000..e32a4aa9b69 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/VGG19.md @@ -0,0 +1,184 @@ +description: Instantiates the VGG19 architecture. + +
+ + +
+ +# tf.keras.applications.VGG19 + + + + + + + + + +Instantiates the VGG19 architecture. + + + + + + + + + +By default, it loads weights pre-trained on ImageNet. Check 'weights' for +other options. + +This model can be built both with 'channels_first' data format +(channels, height, width) or 'channels_last' data format +(height, width, channels). + +The default input size for this model is 224x224. + +Caution: Be sure to properly pre-process your inputs to the application. +Please see applications.vgg19.preprocess_input for an example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the 3 fully-connected +layers at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor +(i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(224, 224, 3)` +(with `channels_last` data format) +or `(3, 224, 224)` (with `channels_first` data format)). +It should have exactly 3 input channels, +and width and height should be no smaller than 32. +E.g. `(200, 200, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, and +if no `weights` argument is specified. +
+`classifier_activation` + +A `str` or callable. The activation function to use +on the "top" layer. Ignored unless `include_top=True`. Set +`classifier_activation=None` to return the logits of the "top" layer. +
+ + + + + + + + + + + +
+A keras.Model instance. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +in case of invalid argument for `weights`, +or invalid input shape. +
+`ValueError` + +if `classifier_activation` is not `softmax` or `None` when +using a pretrained top layer. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/Xception.md b/site/en/api_docs/python/tf/keras/applications/Xception.md new file mode 100644 index 00000000000..42023ecfa3d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/Xception.md @@ -0,0 +1,178 @@ +description: Instantiates the Xception architecture. + +
+ + +
+ +# tf.keras.applications.Xception + + + + + + + + + +Instantiates the Xception architecture. + + + + + + + + + +Optionally loads weights pre-trained on ImageNet. +Note that the data format convention used by the model is +the one specified in your Keras config at `~/.keras/keras.json`. +Note that the default input image size for this model is 299x299. + +Caution: Be sure to properly pre-process your inputs to the application. +Please see applications.xception.preprocess_input for an example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`include_top` + +whether to include the fully-connected +layer at the top of the network. +
+`weights` + +one of `None` (random initialization), +'imagenet' (pre-training on ImageNet), +or the path to the weights file to be loaded. +
+`input_tensor` + +optional Keras tensor +(i.e. output of layers.Input()) +to use as image input for the model. +
+`input_shape` + +optional shape tuple, only to be specified +if `include_top` is False (otherwise the input shape +has to be `(299, 299, 3)`). +It should have exactly 3 input channels, +and width and height should be no smaller than 71. +E.g. `(150, 150, 3)` would be one valid value. +
+`pooling` + +Optional pooling mode for feature extraction +when `include_top` is `False`. +- `None` means that the output of the model will be +the 4D tensor output of the +last convolutional block. +- `avg` means that global average pooling +will be applied to the output of the +last convolutional block, and thus +the output of the model will be a 2D tensor. +- `max` means that global max pooling will +be applied. +
+`classes` + +optional number of classes to classify images +into, only to be specified if `include_top` is True, +and if no `weights` argument is specified. +
+`classifier_activation` + +A `str` or callable. The activation function to use +on the "top" layer. Ignored unless `include_top=True`. Set +`classifier_activation=None` to return the logits of the "top" layer. +
+ + + + + + + + + + + +
+A keras.Model instance. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +in case of invalid argument for `weights`, +or invalid input shape. +
+`ValueError` + +if `classifier_activation` is not `softmax` or `None` when +using a pretrained top layer. +
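Where raw logits are preferred over softmax probabilities, `classifier_activation=None` can be passed. A short sketch at the 299x299 default input size; `weights=None` and the random input are only there so it runs offline:

```python
import numpy as np
import tensorflow as tf

logits_model = tf.keras.applications.Xception(
    weights=None, classifier_activation=None)  # "top" layer returns logits

x = np.random.uniform(0, 255, size=(1, 299, 299, 3)).astype('float32')
x = tf.keras.applications.xception.preprocess_input(x)  # scales to [-1, 1]
print(logits_model.predict(x).shape)  # (1, 1000)
```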
+ diff --git a/site/en/api_docs/python/tf/keras/applications/densenet.md b/site/en/api_docs/python/tf/keras/applications/densenet.md new file mode 100644 index 00000000000..c345778662e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/densenet.md @@ -0,0 +1,39 @@ +description: DenseNet models for Keras. + +
+ + +
+ +# Module: tf.keras.applications.densenet + + + + + + + + + +DenseNet models for Keras. + + + +#### Reference paper: + +- [Densely Connected Convolutional Networks] + (https://arxiv.org/abs/1608.06993) (CVPR 2017 Best Paper Award) + + +## Functions + +[`DenseNet121(...)`](../../../tf/keras/applications/DenseNet121.md): Instantiates the Densenet121 architecture. + +[`DenseNet169(...)`](../../../tf/keras/applications/DenseNet169.md): Instantiates the Densenet169 architecture. + +[`DenseNet201(...)`](../../../tf/keras/applications/DenseNet201.md): Instantiates the Densenet201 architecture. + +[`decode_predictions(...)`](../../../tf/keras/applications/densenet/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/densenet/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/densenet/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/densenet/decode_predictions.md new file mode 100644 index 00000000000..65b9f212e31 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/densenet/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.densenet.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `pred` array +(must be 2D). +
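A minimal sketch of the expected shapes; a random softmax over the 1000 ImageNet classes stands in for real model output, and the class-index file is fetched on first call, so network access is assumed:

```python
import numpy as np
import tensorflow as tf

# Fake "predictions": one row per sample, 1000 class probabilities each.
preds = tf.nn.softmax(np.random.randn(2, 1000)).numpy()

for sample in tf.keras.applications.densenet.decode_predictions(preds, top=3):
    # Each entry is a (class_name, class_description, score) tuple.
    print(sample)
```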
+ diff --git a/site/en/api_docs/python/tf/keras/applications/densenet/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/densenet/preprocess_input.md new file mode 100644 index 00000000000..ac68309566e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/densenet/preprocess_input.md @@ -0,0 +1,122 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.densenet.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The input pixel values are scaled between 0 and 1 and each channel is +normalized with respect to the ImageNet dataset. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/imagenet_utils.md b/site/en/api_docs/python/tf/keras/applications/imagenet_utils.md new file mode 100644 index 00000000000..13b559cbc1c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/imagenet_utils.md @@ -0,0 +1,27 @@ +description: Utilities for ImageNet data preprocessing & prediction decoding. + +
+ + +
+ +# Module: tf.keras.applications.imagenet_utils + + + + + + + + + +Utilities for ImageNet data preprocessing & prediction decoding. + + + +## Functions + +[`decode_predictions(...)`](../../../tf/keras/applications/imagenet_utils/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/imagenet_utils/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/imagenet_utils/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/imagenet_utils/decode_predictions.md new file mode 100644 index 00000000000..5e746325c7d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/imagenet_utils/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.imagenet_utils.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `pred` array +(must be 2D). +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/imagenet_utils/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/imagenet_utils/preprocess_input.md new file mode 100644 index 00000000000..c6a53d7a0f2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/imagenet_utils/preprocess_input.md @@ -0,0 +1,135 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.imagenet_utils.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+`mode` + +One of "caffe", "tf" or "torch". +- caffe: will convert the images from RGB to BGR, +then will zero-center each color channel with +respect to the ImageNet dataset, +without scaling. +- tf: will scale pixels between -1 and 1, +sample-wise. +- torch: will scale pixels between 0 and 1 and then +will normalize each channel with respect to the +ImageNet dataset. +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
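To make the `mode` argument above concrete, a small sketch applying the three conventions to the same random batch; `numpy.copy` is used because the function may overwrite its input:

```python
import numpy as np
import tensorflow as tf

x = np.random.uniform(0, 255, size=(1, 8, 8, 3)).astype('float32')
pp = tf.keras.applications.imagenet_utils.preprocess_input

caffe = pp(np.copy(x), mode='caffe')   # RGB -> BGR, zero-centered, no scaling
tf_mode = pp(np.copy(x), mode='tf')    # scaled to [-1, 1], sample-wise
torch = pp(np.copy(x), mode='torch')   # scaled to [0, 1], then normalized

print(tf_mode.min() >= -1.0, tf_mode.max() <= 1.0)  # True True
```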
+ diff --git a/site/en/api_docs/python/tf/keras/applications/inception_resnet_v2.md b/site/en/api_docs/python/tf/keras/applications/inception_resnet_v2.md new file mode 100644 index 00000000000..ccf6b607ea3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/inception_resnet_v2.md @@ -0,0 +1,36 @@ +description: Inception-ResNet V2 model for Keras. + +
+ + +
+ +# Module: tf.keras.applications.inception_resnet_v2 + + + + + + + + + +Inception-ResNet V2 model for Keras. + + + +#### Reference paper: + +- [Inception-v4, Inception-ResNet and the Impact of + Residual Connections on Learning](https://arxiv.org/abs/1602.07261) + (AAAI 2017) + + +## Functions + +[`InceptionResNetV2(...)`](../../../tf/keras/applications/InceptionResNetV2.md): Instantiates the Inception-ResNet v2 architecture. + +[`decode_predictions(...)`](../../../tf/keras/applications/inception_resnet_v2/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/inception_resnet_v2/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/inception_resnet_v2/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/inception_resnet_v2/decode_predictions.md new file mode 100644 index 00000000000..d7c833487fc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/inception_resnet_v2/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.inception_resnet_v2.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `pred` array +(must be 2D). +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/inception_resnet_v2/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/inception_resnet_v2/preprocess_input.md new file mode 100644 index 00000000000..8a0d1849db5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/inception_resnet_v2/preprocess_input.md @@ -0,0 +1,121 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.inception_resnet_v2.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The input pixel values are scaled between -1 and 1, sample-wise. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/inception_v3.md b/site/en/api_docs/python/tf/keras/applications/inception_v3.md new file mode 100644 index 00000000000..27ae38d216e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/inception_v3.md @@ -0,0 +1,35 @@ +description: Inception V3 model for Keras. + +
+ + +
+ +# Module: tf.keras.applications.inception_v3 + + + + + + + + + +Inception V3 model for Keras. + + + +#### Reference paper: + +- [Rethinking the Inception Architecture for Computer Vision]( + http://arxiv.org/abs/1512.00567) (CVPR 2016) + + +## Functions + +[`InceptionV3(...)`](../../../tf/keras/applications/InceptionV3.md): Instantiates the Inception v3 architecture. + +[`decode_predictions(...)`](../../../tf/keras/applications/inception_v3/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/inception_v3/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/inception_v3/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/inception_v3/decode_predictions.md new file mode 100644 index 00000000000..8d5644f379e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/inception_v3/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.inception_v3.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `pred` array +(must be 2D). +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/inception_v3/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/inception_v3/preprocess_input.md new file mode 100644 index 00000000000..1470be46b1d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/inception_v3/preprocess_input.md @@ -0,0 +1,121 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.inception_v3.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The input pixel values are scaled between -1 and 1, sample-wise. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/mobilenet.md b/site/en/api_docs/python/tf/keras/applications/mobilenet.md new file mode 100644 index 00000000000..7fb3a5a7c2a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/mobilenet.md @@ -0,0 +1,75 @@ +description: MobileNet v1 models for Keras. + +
+ + +
+ +# Module: tf.keras.applications.mobilenet + + + + + + + + + +MobileNet v1 models for Keras. + + +MobileNet is a general architecture and can be used for multiple use cases. +Depending on the use case, it can use different input layer size and +different width factors. This allows different width models to reduce +the number of multiply-adds and thereby +reduce inference cost on mobile devices. + +MobileNets support any input size greater than 32 x 32, with larger image sizes +offering better performance. +The number of parameters and number of multiply-adds +can be modified by using the `alpha` parameter, +which increases/decreases the number of filters in each layer. +By altering the image size and `alpha` parameter, +all 16 models from the paper can be built, with ImageNet weights provided. + +The paper demonstrates the performance of MobileNets using `alpha` values of +1.0 (also called 100 % MobileNet), 0.75, 0.5 and 0.25. +For each of these `alpha` values, weights for 4 different input image sizes +are provided (224, 192, 160, 128). + +The following table describes the size and accuracy of the 100% MobileNet +on size 224 x 224: +---------------------------------------------------------------------------- +Width Multiplier (alpha) | ImageNet Acc | Multiply-Adds (M) | Params (M) +---------------------------------------------------------------------------- +| 1.0 MobileNet-224 | 70.6 % | 529 | 4.2 | +| 0.75 MobileNet-224 | 68.4 % | 325 | 2.6 | +| 0.50 MobileNet-224 | 63.7 % | 149 | 1.3 | +| 0.25 MobileNet-224 | 50.6 % | 41 | 0.5 | +---------------------------------------------------------------------------- + +The following table describes the performance of +the 100 % MobileNet on various input sizes: +------------------------------------------------------------------------ + Resolution | ImageNet Acc | Multiply-Adds (M) | Params (M) +------------------------------------------------------------------------ +| 1.0 MobileNet-224 | 70.6 % | 529 | 4.2 | +| 1.0 MobileNet-192 | 69.1 % | 529 | 4.2 | +| 1.0 MobileNet-160 | 67.2 % | 529 | 4.2 | +| 1.0 MobileNet-128 | 64.4 % | 529 | 4.2 | +------------------------------------------------------------------------ + +#### Reference paper: + +- [MobileNets: Efficient Convolutional Neural Networks for + Mobile Vision Applications](https://arxiv.org/abs/1704.04861) + + +## Functions + +[`MobileNet(...)`](../../../tf/keras/applications/MobileNet.md): Instantiates the MobileNet architecture. + +[`decode_predictions(...)`](../../../tf/keras/applications/mobilenet/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/mobilenet/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/mobilenet/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/mobilenet/decode_predictions.md new file mode 100644 index 00000000000..f24170cd968 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/mobilenet/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.mobilenet.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `pred` array +(must be 2D). +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/mobilenet/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/mobilenet/preprocess_input.md new file mode 100644 index 00000000000..ad99ceabb42 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/mobilenet/preprocess_input.md @@ -0,0 +1,121 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.mobilenet.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The input pixel values are scaled between -1 and 1, sample-wise. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/mobilenet_v2.md b/site/en/api_docs/python/tf/keras/applications/mobilenet_v2.md new file mode 100644 index 00000000000..e26e0f863a5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/mobilenet_v2.md @@ -0,0 +1,82 @@ +description: MobileNet v2 models for Keras. + +
+ + +
+ +# Module: tf.keras.applications.mobilenet_v2 + + + + + + + + + +MobileNet v2 models for Keras. + + +MobileNetV2 is a general architecture and can be used for multiple use cases. +Depending on the use case, it can use different input layer size and +different width factors. This allows different width models to reduce +the number of multiply-adds and thereby +reduce inference cost on mobile devices. + +MobileNetV2 is very similar to the original MobileNet, +except that it uses inverted residual blocks with +bottlenecking features. It has a drastically lower +parameter count than the original MobileNet. +MobileNets support any input size greater +than 32 x 32, with larger image sizes +offering better performance. + +The number of parameters and number of multiply-adds +can be modified by using the `alpha` parameter, +which increases/decreases the number of filters in each layer. +By altering the image size and `alpha` parameter, +all 22 models from the paper can be built, with ImageNet weights provided. + +The paper demonstrates the performance of MobileNets using `alpha` values of +1.0 (also called 100 % MobileNet), 0.35, 0.5, 0.75, 1.0, 1.3, and 1.4 +For each of these `alpha` values, weights for 5 different input image sizes +are provided (224, 192, 160, 128, and 96). + +The following table describes the performance of +MobileNet on various input sizes: +------------------------------------------------------------------------ +MACs stands for Multiply Adds + Classification Checkpoint|MACs (M)|Parameters (M)|Top 1 Accuracy|Top 5 Accuracy +--------------------------|------------|---------------|---------|----|--------- +| [mobilenet_v2_1.4_224] | 582 | 6.06 | 75.0 | 92.5 | +| [mobilenet_v2_1.3_224] | 509 | 5.34 | 74.4 | 92.1 | +| [mobilenet_v2_1.0_224] | 300 | 3.47 | 71.8 | 91.0 | +| [mobilenet_v2_1.0_192] | 221 | 3.47 | 70.7 | 90.1 | +| [mobilenet_v2_1.0_160] | 154 | 3.47 | 68.8 | 89.0 | +| [mobilenet_v2_1.0_128] | 99 | 3.47 | 65.3 | 86.9 | +| [mobilenet_v2_1.0_96] | 56 | 3.47 | 60.3 | 83.2 | +| [mobilenet_v2_0.75_224] | 209 | 2.61 | 69.8 | 89.6 | +| [mobilenet_v2_0.75_192] | 153 | 2.61 | 68.7 | 88.9 | +| [mobilenet_v2_0.75_160] | 107 | 2.61 | 66.4 | 87.3 | +| [mobilenet_v2_0.75_128] | 69 | 2.61 | 63.2 | 85.3 | +| [mobilenet_v2_0.75_96] | 39 | 2.61 | 58.8 | 81.6 | +| [mobilenet_v2_0.5_224] | 97 | 1.95 | 65.4 | 86.4 | +| [mobilenet_v2_0.5_192] | 71 | 1.95 | 63.9 | 85.4 | +| [mobilenet_v2_0.5_160] | 50 | 1.95 | 61.0 | 83.2 | +| [mobilenet_v2_0.5_128] | 32 | 1.95 | 57.7 | 80.8 | +| [mobilenet_v2_0.5_96] | 18 | 1.95 | 51.2 | 75.8 | +| [mobilenet_v2_0.35_224] | 59 | 1.66 | 60.3 | 82.9 | +| [mobilenet_v2_0.35_192] | 43 | 1.66 | 58.2 | 81.2 | +| [mobilenet_v2_0.35_160] | 30 | 1.66 | 55.7 | 79.1 | +| [mobilenet_v2_0.35_128] | 20 | 1.66 | 50.8 | 75.0 | +| [mobilenet_v2_0.35_96] | 11 | 1.66 | 45.5 | 70.4 | + +## Functions + +[`MobileNetV2(...)`](../../../tf/keras/applications/MobileNetV2.md): Instantiates the MobileNetV2 architecture. + +[`decode_predictions(...)`](../../../tf/keras/applications/mobilenet_v2/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/mobilenet_v2/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. 
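As the table above suggests, `alpha` and the input resolution trade accuracy against multiply-adds and parameters. A sketch building one of the smallest configurations; `weights=None` keeps it offline (pre-trained ImageNet weights are also published for this alpha/size combination):

```python
import tensorflow as tf

tiny_mnv2 = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), alpha=0.35, weights=None)
print(tiny_mnv2.count_params())  # on the order of the 1.66 M quoted above
```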
+ diff --git a/site/en/api_docs/python/tf/keras/applications/mobilenet_v2/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/mobilenet_v2/decode_predictions.md new file mode 100644 index 00000000000..47c70f03d19 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/mobilenet_v2/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.mobilenet_v2.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `pred` array +(must be 2D). +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/mobilenet_v2/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/mobilenet_v2/preprocess_input.md new file mode 100644 index 00000000000..4b760d739a3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/mobilenet_v2/preprocess_input.md @@ -0,0 +1,121 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.mobilenet_v2.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The input pixel values are scaled between -1 and 1, sample-wise. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/nasnet.md b/site/en/api_docs/python/tf/keras/applications/nasnet.md new file mode 100644 index 00000000000..8fe8d0d0d16 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/nasnet.md @@ -0,0 +1,54 @@ +description: NASNet-A models for Keras. + +
+ + +
+ +# Module: tf.keras.applications.nasnet + + + + + + + + + +NASNet-A models for Keras. + + +NASNet refers to Neural Architecture Search Network, a family of models +that were designed automatically by learning the model architectures +directly on the dataset of interest. + +Here we consider NASNet-A, the highest performance model that was found +for the CIFAR-10 dataset, and then extended to ImageNet 2012 dataset, +obtaining state of the art performance on CIFAR-10 and ImageNet 2012. +Only the NASNet-A models, and their respective weights, which are suited +for ImageNet 2012 are provided. + +The below table describes the performance on ImageNet 2012: +-------------------------------------------------------------------------------- + Architecture | Top-1 Acc | Top-5 Acc | Multiply-Adds | Params (M) +-------------------------------------------------------------------------------- +| NASNet-A (4 @ 1056) | 74.0 % | 91.6 % | 564 M | 5.3 | +| NASNet-A (6 @ 4032) | 82.7 % | 96.2 % | 23.8 B | 88.9 | +-------------------------------------------------------------------------------- + +#### References: + +- [Learning Transferable Architectures for Scalable Image Recognition] + (https://arxiv.org/abs/1707.07012) (CVPR 2018) + + +## Functions + +[`NASNetLarge(...)`](../../../tf/keras/applications/NASNetLarge.md): Instantiates a NASNet model in ImageNet mode. + +[`NASNetMobile(...)`](../../../tf/keras/applications/NASNetMobile.md): Instantiates a Mobile NASNet model in ImageNet mode. + +[`decode_predictions(...)`](../../../tf/keras/applications/nasnet/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/nasnet/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/nasnet/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/nasnet/decode_predictions.md new file mode 100644 index 00000000000..452891c70e5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/nasnet/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.nasnet.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `pred` array +(must be 2D). +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/nasnet/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/nasnet/preprocess_input.md new file mode 100644 index 00000000000..3650074de4b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/nasnet/preprocess_input.md @@ -0,0 +1,121 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.nasnet.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The input pixel values are scaled between -1 and 1, sample-wise. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/resnet.md b/site/en/api_docs/python/tf/keras/applications/resnet.md new file mode 100644 index 00000000000..79f3178d4f3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/resnet.md @@ -0,0 +1,33 @@ +description: ResNet models for Keras. + +
+ + +
+ +# Module: tf.keras.applications.resnet + + + + + + + + + +ResNet models for Keras. + + + +## Functions + +[`ResNet101(...)`](../../../tf/keras/applications/ResNet101.md): Instantiates the ResNet101 architecture. + +[`ResNet152(...)`](../../../tf/keras/applications/ResNet152.md): Instantiates the ResNet152 architecture. + +[`ResNet50(...)`](../../../tf/keras/applications/ResNet50.md): Instantiates the ResNet50 architecture. + +[`decode_predictions(...)`](../../../tf/keras/applications/resnet/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/resnet/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/resnet/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/resnet/decode_predictions.md new file mode 100644 index 00000000000..0aa6b821145 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/resnet/decode_predictions.md @@ -0,0 +1,105 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.resnet.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `pred` array +(must be 2D). +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/resnet/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/resnet/preprocess_input.md new file mode 100644 index 00000000000..117aa26b40e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/resnet/preprocess_input.md @@ -0,0 +1,125 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.resnet.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The images are converted from RGB to BGR, then each color channel is +zero-centered with respect to the ImageNet dataset, without scaling. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
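A quick sketch of the caffe-style convention described above: channels are flipped to BGR and zero-centered, so the preprocessed values are no longer confined to [0, 255]. The random input merely stands in for real image data:

```python
import numpy as np
import tensorflow as tf

x = np.random.uniform(0, 255, size=(1, 224, 224, 3)).astype('float32')
y = tf.keras.applications.resnet.preprocess_input(np.copy(x))

print(y.dtype, y.shape)      # float32 (1, 224, 224, 3)
print(float(y.min()) < 0.0)  # True: each channel has been zero-centered
```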
+ diff --git a/site/en/api_docs/python/tf/keras/applications/resnet50.md b/site/en/api_docs/python/tf/keras/applications/resnet50.md new file mode 100644 index 00000000000..f7c45597a29 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/resnet50.md @@ -0,0 +1,29 @@ +description: Public API for tf.keras.applications.resnet50 namespace. + +
+ + +
+ +# Module: tf.keras.applications.resnet50 + + + + + + + + + +Public API for tf.keras.applications.resnet50 namespace. + + + +## Functions + +[`ResNet50(...)`](../../../tf/keras/applications/ResNet50.md): Instantiates the ResNet50 architecture. + +[`decode_predictions(...)`](../../../tf/keras/applications/resnet/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/resnet/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/resnet_v2.md b/site/en/api_docs/python/tf/keras/applications/resnet_v2.md new file mode 100644 index 00000000000..69cb2013978 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/resnet_v2.md @@ -0,0 +1,33 @@ +description: ResNet v2 models for Keras. + +
+ + +
+ +# Module: tf.keras.applications.resnet_v2 + + + + + + + + + +ResNet v2 models for Keras. + + + +## Functions + +[`ResNet101V2(...)`](../../../tf/keras/applications/ResNet101V2.md): Instantiates the ResNet101V2 architecture. + +[`ResNet152V2(...)`](../../../tf/keras/applications/ResNet152V2.md): Instantiates the ResNet152V2 architecture. + +[`ResNet50V2(...)`](../../../tf/keras/applications/ResNet50V2.md): Instantiates the ResNet50V2 architecture. + +[`decode_predictions(...)`](../../../tf/keras/applications/resnet_v2/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/resnet_v2/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/resnet_v2/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/resnet_v2/decode_predictions.md new file mode 100644 index 00000000000..82ac26db60e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/resnet_v2/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.resnet_v2.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `preds` array +(must be 2D). +
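+
+#### Example:
+
+A small sketch using a synthetic softmax over 1000 classes in place of real model output (the class index file is downloaded on first use):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+# Two fake "predictions": a softmax over random logits for 1000 ImageNet classes.
+preds = tf.nn.softmax(np.random.randn(2, 1000), axis=-1).numpy()
+
+# One list of (class_name, class_description, score) tuples per sample.
+for sample in tf.keras.applications.resnet_v2.decode_predictions(preds, top=3):
+    print(sample)
+```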
+ diff --git a/site/en/api_docs/python/tf/keras/applications/resnet_v2/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/resnet_v2/preprocess_input.md new file mode 100644 index 00000000000..7acdcf702dd --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/resnet_v2/preprocess_input.md @@ -0,0 +1,122 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.resnet_v2.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The images are converted from RGB to BGR, then each color channel is +zero-centered with respect to the ImageNet dataset, without scaling. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/vgg16.md b/site/en/api_docs/python/tf/keras/applications/vgg16.md new file mode 100644 index 00000000000..fb44defeab7 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/vgg16.md @@ -0,0 +1,29 @@ +description: VGG16 model for Keras. + +
+ + +
+ +# Module: tf.keras.applications.vgg16 + + + + + + + + + +VGG16 model for Keras. + + + +## Functions + +[`VGG16(...)`](../../../tf/keras/applications/VGG16.md): Instantiates the VGG16 model. + +[`decode_predictions(...)`](../../../tf/keras/applications/vgg16/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/vgg16/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/vgg16/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/vgg16/decode_predictions.md new file mode 100644 index 00000000000..09786413377 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/vgg16/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.vgg16.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `preds` array +(must be 2D). +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/vgg16/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/vgg16/preprocess_input.md new file mode 100644 index 00000000000..c47a5672aba --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/vgg16/preprocess_input.md @@ -0,0 +1,122 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.vgg16.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The images are converted from RGB to BGR, then each color channel is +zero-centered with respect to the ImageNet dataset, without scaling. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/vgg19.md b/site/en/api_docs/python/tf/keras/applications/vgg19.md new file mode 100644 index 00000000000..8a7b684f124 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/vgg19.md @@ -0,0 +1,35 @@ +description: VGG19 model for Keras. + +
+ + +
+ +# Module: tf.keras.applications.vgg19 + + + + + + + + + +VGG19 model for Keras. + + + +#### Reference: + +- [Very Deep Convolutional Networks for Large-Scale Image Recognition]( + https://arxiv.org/abs/1409.1556) (ICLR 2015) + + +## Functions + +[`VGG19(...)`](../../../tf/keras/applications/VGG19.md): Instantiates the VGG19 architecture. + +[`decode_predictions(...)`](../../../tf/keras/applications/vgg19/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/vgg19/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/vgg19/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/vgg19/decode_predictions.md new file mode 100644 index 00000000000..9e16b52eb16 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/vgg19/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.vgg19.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `preds` array +(must be 2D). +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/vgg19/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/vgg19/preprocess_input.md new file mode 100644 index 00000000000..d7eaac729bc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/vgg19/preprocess_input.md @@ -0,0 +1,122 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.vgg19.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The images are converted from RGB to BGR, then each color channel is +zero-centered with respect to the ImageNet dataset, without scaling. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/xception.md b/site/en/api_docs/python/tf/keras/applications/xception.md new file mode 100644 index 00000000000..321d8af29eb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/xception.md @@ -0,0 +1,37 @@ +description: Xception V1 model for Keras. + +
+ + +
+ +# Module: tf.keras.applications.xception + + + + + + + + + +Xception V1 model for Keras. + + +On ImageNet, this model gets to a top-1 validation accuracy of 0.790 +and a top-5 validation accuracy of 0.945. + +#### Reference paper: + +- [Xception: Deep Learning with Depthwise Separable Convolutions]( + https://arxiv.org/abs/1610.02357) (CVPR 2017) + + +## Functions + +[`Xception(...)`](../../../tf/keras/applications/Xception.md): Instantiates the Xception architecture. + +[`decode_predictions(...)`](../../../tf/keras/applications/xception/decode_predictions.md): Decodes the prediction of an ImageNet model. + +[`preprocess_input(...)`](../../../tf/keras/applications/xception/preprocess_input.md): Preprocesses a tensor or Numpy array encoding a batch of images. + diff --git a/site/en/api_docs/python/tf/keras/applications/xception/decode_predictions.md b/site/en/api_docs/python/tf/keras/applications/xception/decode_predictions.md new file mode 100644 index 00000000000..7bb6c90f80c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/xception/decode_predictions.md @@ -0,0 +1,102 @@ +description: Decodes the prediction of an ImageNet model. + +
+ + +
+ +# tf.keras.applications.xception.decode_predictions + + + + + + + + + +Decodes the prediction of an ImageNet model. + + + + + + + + + + + + + + + + + + + + + + +
+`preds` + +Numpy array encoding a batch of predictions. +
+`top` + +Integer, how many top-guesses to return. Defaults to 5. +
+ + + + + + + + + + + +
+A list of lists of top class prediction tuples +`(class_name, class_description, score)`. +One list of tuples per sample in batch input. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid shape of the `preds` array +(must be 2D). +
+ diff --git a/site/en/api_docs/python/tf/keras/applications/xception/preprocess_input.md b/site/en/api_docs/python/tf/keras/applications/xception/preprocess_input.md new file mode 100644 index 00000000000..75f811cc3a3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/applications/xception/preprocess_input.md @@ -0,0 +1,121 @@ +description: Preprocesses a tensor or Numpy array encoding a batch of images. + +
+ + +
+ +# tf.keras.applications.xception.preprocess_input + + + + + + + + + +Preprocesses a tensor or Numpy array encoding a batch of images. + + + + + + + + + +Usage example with applications.MobileNet: + +```python +i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) +x = tf.cast(i, tf.float32) +x = tf.keras.applications.mobilenet.preprocess_input(x) +core = tf.keras.applications.MobileNet() +x = core(x) +model = tf.keras.Model(inputs=[i], outputs=[x]) + +image = tf.image.decode_png(tf.io.read_file('file.png')) +result = model(image) +``` + + + + + + + + + + + + + +
+`x` + +A floating point `numpy.array` or a tf.Tensor, 3D or 4D with 3 color +channels, with values in the range [0, 255]. +The preprocessed data are written over the input data +if the data types are compatible. To avoid this +behaviour, `numpy.copy(x)` can be used. +
+`data_format` + +Optional data format of the image tensor/array. Defaults to +None, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+ + + + + + + + + + + +
+Preprocessed `numpy.array` or a tf.Tensor with type `float32`. + +The input pixel values are scaled between -1 and 1, sample-wise. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of unknown `data_format` argument. +
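+
+#### Example:
+
+A quick sketch of the documented [-1, 1] scaling on a single hand-picked pixel (the values are chosen for illustration only):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+x = np.array([[[[0.0, 127.5, 255.0]]]])  # one pixel, three channels in [0, 255]
+y = tf.keras.applications.xception.preprocess_input(x)
+print(y)  # approximately [[[[-1. 0. 1.]]]]
+```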
+ diff --git a/site/en/api_docs/python/tf/keras/backend.md b/site/en/api_docs/python/tf/keras/backend.md new file mode 100644 index 00000000000..2a1186da414 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend.md @@ -0,0 +1,311 @@ +description: Keras backend API. + +
+ + +
+ +# Module: tf.keras.backend + + + + + + + + + +Keras backend API. + + + +## Functions + +[`abs(...)`](../../tf/keras/backend/abs.md): Element-wise absolute value. + +[`all(...)`](../../tf/keras/backend/all.md): Bitwise reduction (logical AND). + +[`any(...)`](../../tf/keras/backend/any.md): Bitwise reduction (logical OR). + +[`arange(...)`](../../tf/keras/backend/arange.md): Creates a 1D tensor containing a sequence of integers. + +[`argmax(...)`](../../tf/keras/backend/argmax.md): Returns the index of the maximum value along an axis. + +[`argmin(...)`](../../tf/keras/backend/argmin.md): Returns the index of the minimum value along an axis. + +[`backend(...)`](../../tf/keras/backend/backend.md): Publicly accessible method for determining the current backend. + +[`batch_dot(...)`](../../tf/keras/backend/batch_dot.md): Batchwise dot product. + +[`batch_flatten(...)`](../../tf/keras/backend/batch_flatten.md): Turn a nD tensor into a 2D tensor with same 0th dimension. + +[`batch_get_value(...)`](../../tf/keras/backend/batch_get_value.md): Returns the value of more than one tensor variable. + +[`batch_normalization(...)`](../../tf/keras/backend/batch_normalization.md): Applies batch normalization on x given mean, var, beta and gamma. + +[`batch_set_value(...)`](../../tf/keras/backend/batch_set_value.md): Sets the values of many tensor variables at once. + +[`bias_add(...)`](../../tf/keras/backend/bias_add.md): Adds a bias vector to a tensor. + +[`binary_crossentropy(...)`](../../tf/keras/backend/binary_crossentropy.md): Binary crossentropy between an output tensor and a target tensor. + +[`cast(...)`](../../tf/keras/backend/cast.md): Casts a tensor to a different dtype and returns it. + +[`cast_to_floatx(...)`](../../tf/keras/backend/cast_to_floatx.md): Cast a Numpy array to the default Keras float type. + +[`categorical_crossentropy(...)`](../../tf/keras/backend/categorical_crossentropy.md): Categorical crossentropy between an output tensor and a target tensor. + +[`clear_session(...)`](../../tf/keras/backend/clear_session.md): Destroys the current TF graph and session, and creates a new one. + +[`clip(...)`](../../tf/keras/backend/clip.md): Element-wise value clipping. + +[`concatenate(...)`](../../tf/keras/backend/concatenate.md): Concatenates a list of tensors alongside the specified axis. + +[`constant(...)`](../../tf/keras/backend/constant.md): Creates a constant tensor. + +[`conv1d(...)`](../../tf/keras/backend/conv1d.md): 1D convolution. + +[`conv2d(...)`](../../tf/keras/backend/conv2d.md): 2D convolution. + +[`conv2d_transpose(...)`](../../tf/keras/backend/conv2d_transpose.md): 2D deconvolution (i.e. + +[`conv3d(...)`](../../tf/keras/backend/conv3d.md): 3D convolution. + +[`cos(...)`](../../tf/keras/backend/cos.md): Computes cos of x element-wise. + +[`count_params(...)`](../../tf/keras/backend/count_params.md): Returns the static number of elements in a variable or tensor. + +[`ctc_batch_cost(...)`](../../tf/keras/backend/ctc_batch_cost.md): Runs CTC loss algorithm on each batch element. + +[`ctc_decode(...)`](../../tf/keras/backend/ctc_decode.md): Decodes the output of a softmax. + +[`ctc_label_dense_to_sparse(...)`](../../tf/keras/backend/ctc_label_dense_to_sparse.md): Converts CTC labels from dense to sparse. + +[`cumprod(...)`](../../tf/keras/backend/cumprod.md): Cumulative product of the values in a tensor, alongside the specified axis. + +[`cumsum(...)`](../../tf/keras/backend/cumsum.md): Cumulative sum of the values in a tensor, alongside the specified axis. 
+ +[`depthwise_conv2d(...)`](../../tf/keras/backend/depthwise_conv2d.md): 2D convolution with separable filters. + +[`dot(...)`](../../tf/keras/backend/dot.md): Multiplies 2 tensors (and/or variables) and returns a tensor. + +[`dropout(...)`](../../tf/keras/backend/dropout.md): Sets entries in `x` to zero at random, while scaling the entire tensor. + +[`dtype(...)`](../../tf/keras/backend/dtype.md): Returns the dtype of a Keras tensor or variable, as a string. + +[`elu(...)`](../../tf/keras/backend/elu.md): Exponential linear unit. + +[`epsilon(...)`](../../tf/keras/backend/epsilon.md): Returns the value of the fuzz factor used in numeric expressions. + +[`equal(...)`](../../tf/keras/backend/equal.md): Element-wise equality between two tensors. + +[`eval(...)`](../../tf/keras/backend/eval.md): Evaluates the value of a variable. + +[`exp(...)`](../../tf/keras/backend/exp.md): Element-wise exponential. + +[`expand_dims(...)`](../../tf/keras/backend/expand_dims.md): Adds a 1-sized dimension at index "axis". + +[`eye(...)`](../../tf/keras/backend/eye.md): Instantiate an identity matrix and returns it. + +[`flatten(...)`](../../tf/keras/backend/flatten.md): Flatten a tensor. + +[`floatx(...)`](../../tf/keras/backend/floatx.md): Returns the default float type, as a string. + +[`foldl(...)`](../../tf/keras/backend/foldl.md): Reduce elems using fn to combine them from left to right. + +[`foldr(...)`](../../tf/keras/backend/foldr.md): Reduce elems using fn to combine them from right to left. + +[`function(...)`](../../tf/keras/backend/function.md): Instantiates a Keras function. + +[`gather(...)`](../../tf/keras/backend/gather.md): Retrieves the elements of indices `indices` in the tensor `reference`. + +[`get_uid(...)`](../../tf/keras/backend/get_uid.md): Associates a string prefix with an integer counter in a TensorFlow graph. + +[`get_value(...)`](../../tf/keras/backend/get_value.md): Returns the value of a variable. + +[`gradients(...)`](../../tf/keras/backend/gradients.md): Returns the gradients of `loss` w.r.t. `variables`. + +[`greater(...)`](../../tf/keras/backend/greater.md): Element-wise truth value of (x > y). + +[`greater_equal(...)`](../../tf/keras/backend/greater_equal.md): Element-wise truth value of (x >= y). + +[`hard_sigmoid(...)`](../../tf/keras/backend/hard_sigmoid.md): Segment-wise linear approximation of sigmoid. + +[`image_data_format(...)`](../../tf/keras/backend/image_data_format.md): Returns the default image data format convention. + +[`in_test_phase(...)`](../../tf/keras/backend/in_test_phase.md): Selects `x` in test phase, and `alt` otherwise. + +[`in_top_k(...)`](../../tf/keras/backend/in_top_k.md): Returns whether the `targets` are in the top `k` `predictions`. + +[`in_train_phase(...)`](../../tf/keras/backend/in_train_phase.md): Selects `x` in train phase, and `alt` otherwise. + +[`int_shape(...)`](../../tf/keras/backend/int_shape.md): Returns the shape of tensor or variable as a tuple of int or None entries. + +[`is_keras_tensor(...)`](../../tf/keras/backend/is_keras_tensor.md): Returns whether `x` is a Keras tensor. + +[`is_sparse(...)`](../../tf/keras/backend/is_sparse.md): Returns whether a tensor is a sparse tensor. + +[`l2_normalize(...)`](../../tf/keras/backend/l2_normalize.md): Normalizes a tensor wrt the L2 norm alongside the specified axis. + +[`learning_phase(...)`](../../tf/keras/backend/learning_phase.md): Returns the learning phase flag. 
+ +[`learning_phase_scope(...)`](../../tf/keras/backend/learning_phase_scope.md): Provides a scope within which the learning phase is equal to `value`. + +[`less(...)`](../../tf/keras/backend/less.md): Element-wise truth value of (x < y). + +[`less_equal(...)`](../../tf/keras/backend/less_equal.md): Element-wise truth value of (x <= y). + +[`local_conv1d(...)`](../../tf/keras/backend/local_conv1d.md): Apply 1D conv with un-shared weights. + +[`local_conv2d(...)`](../../tf/keras/backend/local_conv2d.md): Apply 2D conv with un-shared weights. + +[`log(...)`](../../tf/keras/backend/log.md): Element-wise log. + +[`manual_variable_initialization(...)`](../../tf/keras/backend/manual_variable_initialization.md): Sets the manual variable initialization flag. + +[`map_fn(...)`](../../tf/keras/backend/map_fn.md): Map the function fn over the elements elems and return the outputs. + +[`max(...)`](../../tf/keras/backend/max.md): Maximum value in a tensor. + +[`maximum(...)`](../../tf/keras/backend/maximum.md): Element-wise maximum of two tensors. + +[`mean(...)`](../../tf/keras/backend/mean.md): Mean of a tensor, alongside the specified axis. + +[`min(...)`](../../tf/keras/backend/min.md): Minimum value in a tensor. + +[`minimum(...)`](../../tf/keras/backend/minimum.md): Element-wise minimum of two tensors. + +[`moving_average_update(...)`](../../tf/keras/backend/moving_average_update.md): Compute the moving average of a variable. + +[`name_scope(...)`](../../tf/keras/backend/name_scope.md): A context manager for use when defining a Python op. + +[`ndim(...)`](../../tf/keras/backend/ndim.md): Returns the number of axes in a tensor, as an integer. + +[`normalize_batch_in_training(...)`](../../tf/keras/backend/normalize_batch_in_training.md): Computes mean and std for batch then apply batch_normalization on batch. + +[`not_equal(...)`](../../tf/keras/backend/not_equal.md): Element-wise inequality between two tensors. + +[`one_hot(...)`](../../tf/keras/backend/one_hot.md): Computes the one-hot representation of an integer tensor. + +[`ones(...)`](../../tf/keras/backend/ones.md): Instantiates an all-ones variable and returns it. + +[`ones_like(...)`](../../tf/keras/backend/ones_like.md): Instantiates an all-ones variable of the same shape as another tensor. + +[`permute_dimensions(...)`](../../tf/keras/backend/permute_dimensions.md): Permutes axes in a tensor. + +[`placeholder(...)`](../../tf/keras/backend/placeholder.md): Instantiates a placeholder tensor and returns it. + +[`pool2d(...)`](../../tf/keras/backend/pool2d.md): 2D Pooling. + +[`pool3d(...)`](../../tf/keras/backend/pool3d.md): 3D Pooling. + +[`pow(...)`](../../tf/keras/backend/pow.md): Element-wise exponentiation. + +[`print_tensor(...)`](../../tf/keras/backend/print_tensor.md): Prints `message` and the tensor value when evaluated. + +[`prod(...)`](../../tf/keras/backend/prod.md): Multiplies the values in a tensor, alongside the specified axis. + +[`random_binomial(...)`](../../tf/keras/backend/random_binomial.md): Returns a tensor with random binomial distribution of values. + +[`random_normal(...)`](../../tf/keras/backend/random_normal.md): Returns a tensor with normal distribution of values. + +[`random_normal_variable(...)`](../../tf/keras/backend/random_normal_variable.md): Instantiates a variable with values drawn from a normal distribution. + +[`random_uniform(...)`](../../tf/keras/backend/random_uniform.md): Returns a tensor with uniform distribution of values. 
+ +[`random_uniform_variable(...)`](../../tf/keras/backend/random_uniform_variable.md): Instantiates a variable with values drawn from a uniform distribution. + +[`relu(...)`](../../tf/keras/backend/relu.md): Rectified linear unit. + +[`repeat(...)`](../../tf/keras/backend/repeat.md): Repeats a 2D tensor. + +[`repeat_elements(...)`](../../tf/keras/backend/repeat_elements.md): Repeats the elements of a tensor along an axis, like `np.repeat`. + +[`reset_uids(...)`](../../tf/keras/backend/reset_uids.md): Resets graph identifiers. + +[`reshape(...)`](../../tf/keras/backend/reshape.md): Reshapes a tensor to the specified shape. + +[`resize_images(...)`](../../tf/keras/backend/resize_images.md): Resizes the images contained in a 4D tensor. + +[`resize_volumes(...)`](../../tf/keras/backend/resize_volumes.md): Resizes the volume contained in a 5D tensor. + +[`reverse(...)`](../../tf/keras/backend/reverse.md): Reverse a tensor along the specified axes. + +[`rnn(...)`](../../tf/keras/backend/rnn.md): Iterates over the time dimension of a tensor. + +[`round(...)`](../../tf/keras/backend/round.md): Element-wise rounding to the closest integer. + +[`separable_conv2d(...)`](../../tf/keras/backend/separable_conv2d.md): 2D convolution with separable filters. + +[`set_epsilon(...)`](../../tf/keras/backend/set_epsilon.md): Sets the value of the fuzz factor used in numeric expressions. + +[`set_floatx(...)`](../../tf/keras/backend/set_floatx.md): Sets the default float type. + +[`set_image_data_format(...)`](../../tf/keras/backend/set_image_data_format.md): Sets the value of the image data format convention. + +[`set_learning_phase(...)`](../../tf/keras/backend/set_learning_phase.md): Sets the learning phase to a fixed value. + +[`set_value(...)`](../../tf/keras/backend/set_value.md): Sets the value of a variable, from a Numpy array. + +[`shape(...)`](../../tf/keras/backend/shape.md): Returns the symbolic shape of a tensor or variable. + +[`sigmoid(...)`](../../tf/keras/backend/sigmoid.md): Element-wise sigmoid. + +[`sign(...)`](../../tf/keras/backend/sign.md): Element-wise sign. + +[`sin(...)`](../../tf/keras/backend/sin.md): Computes sin of x element-wise. + +[`softmax(...)`](../../tf/keras/backend/softmax.md): Softmax of a tensor. + +[`softplus(...)`](../../tf/keras/backend/softplus.md): Softplus of a tensor. + +[`softsign(...)`](../../tf/keras/backend/softsign.md): Softsign of a tensor. + +[`sparse_categorical_crossentropy(...)`](../../tf/keras/backend/sparse_categorical_crossentropy.md): Categorical crossentropy with integer targets. + +[`spatial_2d_padding(...)`](../../tf/keras/backend/spatial_2d_padding.md): Pads the 2nd and 3rd dimensions of a 4D tensor. + +[`spatial_3d_padding(...)`](../../tf/keras/backend/spatial_3d_padding.md): Pads 5D tensor with zeros along the depth, height, width dimensions. + +[`sqrt(...)`](../../tf/keras/backend/sqrt.md): Element-wise square root. + +[`square(...)`](../../tf/keras/backend/square.md): Element-wise square. + +[`squeeze(...)`](../../tf/keras/backend/squeeze.md): Removes a 1-dimension from the tensor at index "axis". + +[`stack(...)`](../../tf/keras/backend/stack.md): Stacks a list of rank `R` tensors into a rank `R+1` tensor. + +[`std(...)`](../../tf/keras/backend/std.md): Standard deviation of a tensor, alongside the specified axis. + +[`stop_gradient(...)`](../../tf/keras/backend/stop_gradient.md): Returns `variables` but with zero gradient w.r.t. every other variable. 
+ +[`sum(...)`](../../tf/keras/backend/sum.md): Sum of the values in a tensor, alongside the specified axis. + +[`switch(...)`](../../tf/keras/backend/switch.md): Switches between two operations depending on a scalar value. + +[`tanh(...)`](../../tf/keras/backend/tanh.md): Element-wise tanh. + +[`temporal_padding(...)`](../../tf/keras/backend/temporal_padding.md): Pads the middle dimension of a 3D tensor. + +[`tile(...)`](../../tf/keras/backend/tile.md): Creates a tensor by tiling `x` by `n`. + +[`to_dense(...)`](../../tf/keras/backend/to_dense.md): Converts a sparse tensor into a dense tensor and returns it. + +[`transpose(...)`](../../tf/keras/backend/transpose.md): Transposes a tensor and returns it. + +[`truncated_normal(...)`](../../tf/keras/backend/truncated_normal.md): Returns a tensor with truncated random normal distribution of values. + +[`update(...)`](../../tf/keras/backend/update.md) + +[`update_add(...)`](../../tf/keras/backend/update_add.md): Update the value of `x` by adding `increment`. + +[`update_sub(...)`](../../tf/keras/backend/update_sub.md): Update the value of `x` by subtracting `decrement`. + +[`var(...)`](../../tf/keras/backend/var.md): Variance of a tensor, alongside the specified axis. + +[`variable(...)`](../../tf/keras/backend/variable.md): Instantiates a variable and returns it. + +[`zeros(...)`](../../tf/keras/backend/zeros.md): Instantiates an all-zeros variable and returns it. + +[`zeros_like(...)`](../../tf/keras/backend/zeros_like.md): Instantiates an all-zeros variable of the same shape as another tensor. + diff --git a/site/en/api_docs/python/tf/keras/backend/abs.md b/site/en/api_docs/python/tf/keras/backend/abs.md new file mode 100644 index 00000000000..ef61c985652 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/abs.md @@ -0,0 +1,75 @@ +description: Element-wise absolute value. + +
+ + +
+ +# tf.keras.backend.abs + + + + + + + + + +Element-wise absolute value. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/all.md b/site/en/api_docs/python/tf/keras/backend/all.md new file mode 100644 index 00000000000..3346304c57f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/all.md @@ -0,0 +1,89 @@ +description: Bitwise reduction (logical AND). + +
+ + +
+ +# tf.keras.backend.all + + + + + + + + + +Bitwise reduction (logical AND). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`axis` + +axis along which to perform the reduction. +
+`keepdims` + +whether to drop or broadcast the reduction axes. +
+ + + + + + + + + + + +
+A uint8 tensor (0s and 1s). +
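+
+#### Example:
+
+A short sketch of `all` (and its counterpart `any`, documented separately) on an integer tensor; the values are arbitrary:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 0], [1, 1]])
+print(tf.keras.backend.all(x, axis=1))  # [False  True]
+print(tf.keras.backend.any(x, axis=1))  # [ True  True]
+```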
+ diff --git a/site/en/api_docs/python/tf/keras/backend/any.md b/site/en/api_docs/python/tf/keras/backend/any.md new file mode 100644 index 00000000000..f5e45692328 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/any.md @@ -0,0 +1,89 @@ +description: Bitwise reduction (logical OR). + +
+ + +
+ +# tf.keras.backend.any + + + + + + + + + +Bitwise reduction (logical OR). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`axis` + +axis along which to perform the reduction. +
+`keepdims` + +whether to drop or broadcast the reduction axes. +
+ + + + + + + + + + + +
+A uint8 tensor (0s and 1s). +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/arange.md b/site/en/api_docs/python/tf/keras/backend/arange.md new file mode 100644 index 00000000000..f8a1da492ba --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/arange.md @@ -0,0 +1,112 @@ +description: Creates a 1D tensor containing a sequence of integers. + +
+ + +
+ +# tf.keras.backend.arange + + + + + + + + + +Creates a 1D tensor containing a sequence of integers. + + + + + + + + + +The function arguments use the same convention as +Theano's arange: if only one argument is provided, +it is in fact the "stop" argument and "start" is 0. + +The default type of the returned tensor is `'int32'` to +match TensorFlow's default. + + + + + + + + + + + + + + + + + + + +
+`start` + +Start value. +
+`stop` + +Stop value. +
+`step` + +Difference between two successive values. +
+`dtype` + +Integer dtype to use. +
+ + + + + + + + + + + +
+An integer tensor. +
+ + + +#### Example: + + +``` +>>> tf.keras.backend.arange(start=0, stop=10, step=1.5) + +``` diff --git a/site/en/api_docs/python/tf/keras/backend/argmax.md b/site/en/api_docs/python/tf/keras/backend/argmax.md new file mode 100644 index 00000000000..29b5d8a3efd --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/argmax.md @@ -0,0 +1,82 @@ +description: Returns the index of the maximum value along an axis. + +
+ + +
+ +# tf.keras.backend.argmax + + + + + + + + + +Returns the index of the maximum value along an axis. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`axis` + +axis along which to perform the reduction. +
+ + + + + + + + + + + +
+A tensor. +
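+
+#### Example:
+
+A minimal sketch contrasting `argmax` with `argmin` (documented next); the values are arbitrary:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[3.0, 1.0, 2.0],
+                 [0.0, 5.0, 4.0]])
+print(tf.keras.backend.argmax(x, axis=-1))  # [0 1]
+print(tf.keras.backend.argmin(x, axis=-1))  # [1 0]
+```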
+ diff --git a/site/en/api_docs/python/tf/keras/backend/argmin.md b/site/en/api_docs/python/tf/keras/backend/argmin.md new file mode 100644 index 00000000000..c73ac8f7de6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/argmin.md @@ -0,0 +1,82 @@ +description: Returns the index of the minimum value along an axis. + +
+ + +
+ +# tf.keras.backend.argmin + + + + + + + + + +Returns the index of the minimum value along an axis. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`axis` + +axis along which to perform the reduction. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/backend.md b/site/en/api_docs/python/tf/keras/backend/backend.md new file mode 100644 index 00000000000..ab5bef77919 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/backend.md @@ -0,0 +1,57 @@ +description: Publicly accessible method for determining the current backend. + +
+ + +
+ +# tf.keras.backend.backend + + + + + + + + + +Publicly accessible method for determining the current backend. + + + + + + + + + +Only exists for API compatibility with multi-backend Keras. + + + + + + + + + +
+The string "tensorflow". +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/batch_dot.md b/site/en/api_docs/python/tf/keras/backend/batch_dot.md new file mode 100644 index 00000000000..7f9d1ed9b24 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/batch_dot.md @@ -0,0 +1,127 @@ +description: Batchwise dot product. + +
+ + +
+ +# tf.keras.backend.batch_dot + + + + + + + + + +Batchwise dot product. + + + + + + + + + +`batch_dot` is used to compute dot product of `x` and `y` when +`x` and `y` are data in batch, i.e. in a shape of +`(batch_size, :)`. +`batch_dot` results in a tensor or variable with less dimensions +than the input. If the number of dimensions is reduced to 1, +we use `expand_dims` to make sure that ndim is at least 2. + + + + + + + + + + + + + + + + +
+`x` + +Keras tensor or variable with `ndim >= 2`. +
+`y` + +Keras tensor or variable with `ndim >= 2`. +
+`axes` + +Tuple or list of integers with target dimensions, or single integer. +The sizes of `x.shape[axes[0]]` and `y.shape[axes[1]]` should be equal. +
+ + + + + + + + + + + +
+A tensor with shape equal to the concatenation of `x`'s shape +(less the dimension that was summed over) and `y`'s shape +(less the batch dimension and the dimension that was summed over). +If the final rank is 1, we reshape it to `(batch_size, 1)`. +
+ + + +#### Examples: + + + +``` +>>> x_batch = tf.keras.backend.ones(shape=(32, 20, 1)) +>>> y_batch = tf.keras.backend.ones(shape=(32, 30, 20)) +>>> xy_batch_dot = tf.keras.backend.batch_dot(x_batch, y_batch, axes=(1, 2)) +>>> tf.keras.backend.int_shape(xy_batch_dot) +(32, 1, 30) +``` + +#### Shape inference: + +Let `x`'s shape be `(100, 20)` and `y`'s shape be `(100, 30, 20)`. +If `axes` is (1, 2), to find the output shape of resultant tensor, + loop through each dimension in `x`'s shape and `y`'s shape: +* `x.shape[0]` : 100 : append to output shape +* `x.shape[1]` : 20 : do not append to output shape, + dimension 1 of `x` has been summed over. (`dot_axes[0]` = 1) +* `y.shape[0]` : 100 : do not append to output shape, + always ignore first dimension of `y` +* `y.shape[1]` : 30 : append to output shape +* `y.shape[2]` : 20 : do not append to output shape, + dimension 2 of `y` has been summed over. (`dot_axes[1]` = 2) +`output_shape` = `(100, 30)` diff --git a/site/en/api_docs/python/tf/keras/backend/batch_flatten.md b/site/en/api_docs/python/tf/keras/backend/batch_flatten.md new file mode 100644 index 00000000000..ac7889d0a5c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/batch_flatten.md @@ -0,0 +1,89 @@ +description: Turn a nD tensor into a 2D tensor with same 0th dimension. + +
+ + +
+ +# tf.keras.backend.batch_flatten + + + + + + + + + +Turn a nD tensor into a 2D tensor with same 0th dimension. + + + + + + + + + +In other words, it flattens each data samples of a batch. + + + + + + + + + + +
+`x` + +A tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ + + +#### Examples: + +Flattening a 3D tensor to 2D by collapsing the last dimension. + + +``` +>>> x_batch = tf.keras.backend.ones(shape=(2, 3, 4, 5)) +>>> x_batch_flatten = batch_flatten(x_batch) +>>> tf.keras.backend.int_shape(x_batch_flatten) +(2, 60) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/batch_get_value.md b/site/en/api_docs/python/tf/keras/backend/batch_get_value.md new file mode 100644 index 00000000000..548ccd2e88e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/batch_get_value.md @@ -0,0 +1,92 @@ +description: Returns the value of more than one tensor variable. + +
+ + +
+ +# tf.keras.backend.batch_get_value + + + + + + + + + +Returns the value of more than one tensor variable. + + + + + + + + + + + + + + + + + + + +
+`tensors` + +list of ops to run. +
+ + + + + + + + + + + +
+A list of Numpy arrays. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If this method is called inside defun. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/batch_normalization.md b/site/en/api_docs/python/tf/keras/backend/batch_normalization.md new file mode 100644 index 00000000000..631e97135f2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/batch_normalization.md @@ -0,0 +1,120 @@ +description: Applies batch normalization on x given mean, var, beta and gamma. + +
+ + +
+ +# tf.keras.backend.batch_normalization + + + + + + + + + +Applies batch normalization on x given mean, var, beta and gamma. + + + + + + + + + +I.e. returns: +`output = (x - mean) / (sqrt(var) + epsilon) * gamma + beta` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Input tensor or variable. +
+`mean` + +Mean of batch. +
+`var` + +Variance of batch. +
+`beta` + +Tensor with which to center the input. +
+`gamma` + +Tensor by which to scale the input. +
+`axis` + +Integer, the axis that should be normalized. +(typically the features axis). +
+`epsilon` + +Fuzz factor. +
+ + + + + + + + + + + +
+A tensor. +
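+
+#### Example:
+
+A minimal sketch that normalizes each column of a small matrix; computing `mean`/`var` with `reduce_mean`/`reduce_variance` and using identity `gamma` with zero `beta` are illustrative choices:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, 2.0, 3.0],
+                 [4.0, 5.0, 6.0]])
+mean = tf.reduce_mean(x, axis=0)
+var = tf.math.reduce_variance(x, axis=0)
+beta = tf.zeros_like(mean)   # no shift
+gamma = tf.ones_like(mean)   # no scale
+
+y = tf.keras.backend.batch_normalization(x, mean, var, beta, gamma, axis=-1)
+print(y)  # each column is approximately zero-centered with unit variance
+```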
+ diff --git a/site/en/api_docs/python/tf/keras/backend/batch_set_value.md b/site/en/api_docs/python/tf/keras/backend/batch_set_value.md new file mode 100644 index 00000000000..05bf7b2c5d3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/batch_set_value.md @@ -0,0 +1,62 @@ +description: Sets the values of many tensor variables at once. + +
+ + +
+ +# tf.keras.backend.batch_set_value + + + + + + + + + +Sets the values of many tensor variables at once. + + + + + + + + + + + + + + + + + + + +
+`tuples` + +a list of tuples `(tensor, value)`. +`value` should be a Numpy array. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/bias_add.md b/site/en/api_docs/python/tf/keras/backend/bias_add.md new file mode 100644 index 00000000000..ff1c11cae4a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/bias_add.md @@ -0,0 +1,110 @@ +description: Adds a bias vector to a tensor. + +
+ + +
+ +# tf.keras.backend.bias_add + + + + + + + + + +Adds a bias vector to a tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`bias` + +Bias tensor to add. +
+`data_format` + +string, `"channels_last"` or `"channels_first"`. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +In one of the two cases below: +1. invalid `data_format` argument. +2. invalid bias shape: the bias should be either a vector or +a tensor with ndim(x) - 1 dimensions. +
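+
+#### Example:
+
+A minimal sketch adding one bias per channel to a batch of feature maps; the shapes are arbitrary:
+
+```python
+import tensorflow as tf
+
+x = tf.zeros((2, 4, 4, 3))            # batch of 4x4 feature maps with 3 channels
+bias = tf.constant([0.5, 1.0, -1.0])  # one bias value per channel
+
+y = tf.keras.backend.bias_add(x, bias, data_format='channels_last')
+print(y.shape)     # (2, 4, 4, 3)
+print(y[0, 0, 0])  # [ 0.5  1.  -1. ]
+```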
+ diff --git a/site/en/api_docs/python/tf/keras/backend/binary_crossentropy.md b/site/en/api_docs/python/tf/keras/backend/binary_crossentropy.md new file mode 100644 index 00000000000..06d3fefacc1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/binary_crossentropy.md @@ -0,0 +1,91 @@ +description: Binary crossentropy between an output tensor and a target tensor. + +
+ + +
+ +# tf.keras.backend.binary_crossentropy + + + + + + + + + +Binary crossentropy between an output tensor and a target tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`target` + +A tensor with the same shape as `output`. +
+`output` + +A tensor. +
+`from_logits` + +Whether `output` is expected to be a logits tensor. +By default, we consider that `output` +encodes a probability distribution. +
+ + + + + + + + + + + +
+A tensor. +
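+
+#### Example:
+
+A short sketch with hand-picked labels and logits; passing `from_logits=True` lets the sigmoid be applied internally:
+
+```python
+import tensorflow as tf
+
+target = tf.constant([[1.0, 0.0, 1.0]])
+logits = tf.constant([[2.0, -1.0, 0.5]])
+
+loss = tf.keras.backend.binary_crossentropy(target, logits, from_logits=True)
+print(loss)  # element-wise losses, shape (1, 3)
+```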
+ diff --git a/site/en/api_docs/python/tf/keras/backend/cast.md b/site/en/api_docs/python/tf/keras/backend/cast.md new file mode 100644 index 00000000000..c26df49824b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/cast.md @@ -0,0 +1,99 @@ +description: Casts a tensor to a different dtype and returns it. + +
+ + +
+ +# tf.keras.backend.cast + + + + + + + + + +Casts a tensor to a different dtype and returns it. + + + + + + + + + +You can cast a Keras variable but it still returns a Keras tensor. + + + + + + + + + + + + + +
+`x` + +Keras tensor (or variable). +
+`dtype` + +String, either (`'float16'`, `'float32'`, or `'float64'`). +
+ + + + + + + + + + + +
+Keras tensor with dtype `dtype`. +
+ + + +#### Examples: + +Cast a float32 variable to a float64 tensor + + +``` +>>> input = tf.keras.backend.ones(shape=(1,3)) +>>> print(input) + +>>> cast_input = tf.keras.backend.cast(input, dtype='float64') +>>> print(cast_input) +tf.Tensor([[1. 1. 1.]], shape=(1, 3), dtype=float64) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/cast_to_floatx.md b/site/en/api_docs/python/tf/keras/backend/cast_to_floatx.md new file mode 100644 index 00000000000..89a251d1af3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/cast_to_floatx.md @@ -0,0 +1,94 @@ +description: Cast a Numpy array to the default Keras float type. + +
+ + +
+ +# tf.keras.backend.cast_to_floatx + + + + + + + + + +Cast a Numpy array to the default Keras float type. + + + + + + + + + + + + + + + + + + + +
+`x` + +Numpy array or TensorFlow tensor. +
+ + + + + + + + + + + +
+The same array (Numpy array if `x` was a Numpy array, or TensorFlow tensor +if `x` was a tensor), cast to its new type. +
+ + + +#### Example: + + + +``` +>>> tf.keras.backend.floatx() +'float32' +>>> arr = np.array([1.0, 2.0], dtype='float64') +>>> arr.dtype +dtype('float64') +>>> new_arr = cast_to_floatx(arr) +>>> new_arr +array([1., 2.], dtype=float32) +>>> new_arr.dtype +dtype('float32') +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/categorical_crossentropy.md b/site/en/api_docs/python/tf/keras/backend/categorical_crossentropy.md new file mode 100644 index 00000000000..0869d0de1be --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/categorical_crossentropy.md @@ -0,0 +1,144 @@ +description: Categorical crossentropy between an output tensor and a target tensor. + +
+ + +
+ +# tf.keras.backend.categorical_crossentropy + + + + + + + + + +Categorical crossentropy between an output tensor and a target tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`target` + +A tensor of the same shape as `output`. +
+`output` + +A tensor resulting from a softmax +(unless `from_logits` is True, in which +case `output` is expected to be the logits). +
+`from_logits` + +Boolean, whether `output` is the +result of a softmax, or is a tensor of logits. +
+`axis` + +Int specifying the channels axis. `axis=-1` corresponds to data +format `channels_last`, and `axis=1` corresponds to data format +`channels_first`. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if `axis` is neither -1 nor one of the axes of `output`. +
+ + + +#### Example: + + + +``` +>>> a = tf.constant([1., 0., 0., 0., 1., 0., 0., 0., 1.], shape=[3,3]) +>>> print(a) +tf.Tensor( + [[1. 0. 0.] + [0. 1. 0.] + [0. 0. 1.]], shape=(3, 3), dtype=float32) +>>> b = tf.constant([.9, .05, .05, .5, .89, .6, .05, .01, .94], shape=[3,3]) +>>> print(b) +tf.Tensor( + [[0.9 0.05 0.05] + [0.5 0.89 0.6 ] + [0.05 0.01 0.94]], shape=(3, 3), dtype=float32) +>>> loss = tf.keras.backend.categorical_crossentropy(a, b) +>>> print(np.around(loss, 5)) +[0.10536 0.80467 0.06188] +>>> loss = tf.keras.backend.categorical_crossentropy(a, a) +>>> print(np.around(loss, 5)) +[0. 0. 0.] +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/clear_session.md b/site/en/api_docs/python/tf/keras/backend/clear_session.md new file mode 100644 index 00000000000..80ce30b8e4a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/clear_session.md @@ -0,0 +1,66 @@ +description: Destroys the current TF graph and session, and creates a new one. + +
+ + +
+ +# tf.keras.backend.clear_session + + + + + + + + + +Destroys the current TF graph and session, and creates a new one. + + + + + + + + + +Calling clear_session() releases the global graph state that Keras is +holding on to; resets the counters used for naming layers and +variables in Keras; and resets the learning phase. This helps avoid clutter +from old models and layers, especially when memory is limited, and a +common use-case for clear_session is releasing memory when building models +and layers in a loop. + +``` +>>> import tensorflow as tf +>>> layers = [tf.keras.layers.Dense(10) for _ in range(10)] +>>> new_layer = tf.keras.layers.Dense(10) +>>> print(new_layer.name) +dense_10 +>>> tf.keras.backend.set_learning_phase(1) +>>> print(tf.keras.backend.learning_phase()) +1 +>>> tf.keras.backend.clear_session() +>>> new_layer = tf.keras.layers.Dense(10) +>>> print(new_layer.name) +dense +>>> print(tf.keras.backend.learning_phase()) +0 +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/clip.md b/site/en/api_docs/python/tf/keras/backend/clip.md new file mode 100644 index 00000000000..e05b25f3366 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/clip.md @@ -0,0 +1,89 @@ +description: Element-wise value clipping. + +
+ + +
+ +# tf.keras.backend.clip + + + + + + + + + +Element-wise value clipping. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`min_value` + +Python float, integer, or tensor. +
+`max_value` + +Python float, integer, or tensor. +
+ + + + + + + + + + + +
+A tensor. +
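+
+#### Example:
+
+A one-line sketch with arbitrary values:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-3.0, -1.0, 0.0, 2.0, 5.0])
+print(tf.keras.backend.clip(x, min_value=-1.0, max_value=2.0))
+# [-1. -1.  0.  2.  2.]
+```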
+ diff --git a/site/en/api_docs/python/tf/keras/backend/concatenate.md b/site/en/api_docs/python/tf/keras/backend/concatenate.md new file mode 100644 index 00000000000..04530d2f34f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/concatenate.md @@ -0,0 +1,96 @@ +description: Concatenates a list of tensors alongside the specified axis. + +
+ + +
+ +# tf.keras.backend.concatenate + + + + + + + + + +Concatenates a list of tensors alongside the specified axis. + + + + + + + + + + + + + + + + + + + + + + +
+`tensors` + +list of tensors to concatenate. +
+`axis` + +concatenation axis. +
+ + + + + + + + + + + +
+A tensor. +
+ + + +#### Example: + + +``` +>>> a = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) +>>> b = tf.constant([[10, 20, 30], [40, 50, 60], [70, 80, 90]]) +>>> tf.keras.backend.concatenate((a, b), axis=-1) + +``` diff --git a/site/en/api_docs/python/tf/keras/backend/constant.md b/site/en/api_docs/python/tf/keras/backend/constant.md new file mode 100644 index 00000000000..49ea57a8a6a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/constant.md @@ -0,0 +1,96 @@ +description: Creates a constant tensor. + +
+ + +
+ +# tf.keras.backend.constant + + + + + + + + + +Creates a constant tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A constant value (or list). +
+`dtype` + +The type of the elements of the resulting tensor. +
+`shape` + +Optional dimensions of resulting tensor. +
+`name` + +Optional name for the tensor. +
+ + + + + + + + + + + +
+A Constant Tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/conv1d.md b/site/en/api_docs/python/tf/keras/backend/conv1d.md new file mode 100644 index 00000000000..b86a113979a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/conv1d.md @@ -0,0 +1,128 @@ +description: 1D convolution. + +
+ + +
+ +# tf.keras.backend.conv1d + + + + + + + + + +1D convolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`kernel` + +kernel tensor. +
+`strides` + +stride integer. +
+`padding` + +string, `"same"`, `"causal"` or `"valid"`. +
+`data_format` + +string, one of "channels_last", "channels_first". +
+`dilation_rate` + +integer dilate rate. +
+ + + + + + + + + + + +
+A tensor, result of 1D convolution. +
+ + + + + + + + + + + + +
+`ValueError` + +if `data_format` is neither `channels_last` nor +`channels_first`. +
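+
+#### Example:
+
+A minimal sketch; the batch size, sequence length, channel counts and `"same"` padding are arbitrary choices:
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal((4, 10, 8))        # (batch, steps, in_channels)
+kernel = tf.random.normal((3, 8, 16))   # (kernel_size, in_channels, out_channels)
+
+y = tf.keras.backend.conv1d(x, kernel, strides=1, padding='same',
+                            data_format='channels_last')
+print(y.shape)  # (4, 10, 16)
+```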
+ diff --git a/site/en/api_docs/python/tf/keras/backend/conv2d.md b/site/en/api_docs/python/tf/keras/backend/conv2d.md new file mode 100644 index 00000000000..64f22d2d86a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/conv2d.md @@ -0,0 +1,129 @@ +description: 2D convolution. + +
+ + +
+ +# tf.keras.backend.conv2d + + + + + + + + + +2D convolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`kernel` + +kernel tensor. +
+`strides` + +strides tuple. +
+`padding` + +string, `"same"` or `"valid"`. +
+`data_format` + +`"channels_last"` or `"channels_first"`. +
+`dilation_rate` + +tuple of 2 integers. +
+ + + + + + + + + + + +
+A tensor, result of 2D convolution. +
+ + + + + + + + + + + + +
+`ValueError` + +if `data_format` is neither `channels_last` nor +`channels_first`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/conv2d_transpose.md b/site/en/api_docs/python/tf/keras/backend/conv2d_transpose.md new file mode 100644 index 00000000000..17fbe0711a5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/conv2d_transpose.md @@ -0,0 +1,137 @@ +description: 2D deconvolution (i.e. + +
+ + +
+ +# tf.keras.backend.conv2d_transpose + + + + + + + + + +2D deconvolution (i.e. + + + + + + + + + +transposed convolution). + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`kernel` + +kernel tensor. +
+`output_shape` + +1D int tensor for the output shape. +
+`strides` + +strides tuple. +
+`padding` + +string, `"same"` or `"valid"`. +
+`data_format` + +string, `"channels_last"` or `"channels_first"`. +
+`dilation_rate` + +Tuple of 2 integers. +
+ + + + + + + + + + + +
+A tensor, result of transposed 2D convolution. +
+ + + + + + + + + + + + +
+`ValueError` + +if `data_format` is neither `channels_last` nor +`channels_first`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/conv3d.md b/site/en/api_docs/python/tf/keras/backend/conv3d.md new file mode 100644 index 00000000000..8bd63afc655 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/conv3d.md @@ -0,0 +1,129 @@ +description: 3D convolution. + +
+ + +
+ +# tf.keras.backend.conv3d + + + + + + + + + +3D convolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`kernel` + +kernel tensor. +
+`strides` + +strides tuple. +
+`padding` + +string, `"same"` or `"valid"`. +
+`data_format` + +string, `"channels_last"` or `"channels_first"`. +
+`dilation_rate` + +tuple of 3 integers. +
+ + + + + + + + + + + +
+A tensor, result of 3D convolution. +
+ + + + + + + + + + + + +
+`ValueError` + +if `data_format` is neither `channels_last` nor +`channels_first`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/cos.md b/site/en/api_docs/python/tf/keras/backend/cos.md new file mode 100644 index 00000000000..666df52d587 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/cos.md @@ -0,0 +1,75 @@ +description: Computes cos of x element-wise. + +
+ + +
+ +# tf.keras.backend.cos + + + + + + + + + +Computes cos of x element-wise. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/count_params.md b/site/en/api_docs/python/tf/keras/backend/count_params.md new file mode 100644 index 00000000000..c8cefad1c35 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/count_params.md @@ -0,0 +1,89 @@ +description: Returns the static number of elements in a variable or tensor. + +
+ + +
+ +# tf.keras.backend.count_params + + + + + + + + + +Returns the static number of elements in a variable or tensor. + + + + + + + + + + + + + + + + + + + +
+`x` + +Variable or tensor. +
+ + + + + + + + + + + +
+Integer, the number of scalars in `x`. +
+ + + +#### Example: + + + +``` +>>> kvar = tf.keras.backend.zeros((2,3)) +>>> tf.keras.backend.count_params(kvar) +6 +>>> tf.keras.backend.eval(kvar) +array([[0., 0., 0.], + [0., 0., 0.]], dtype=float32) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/ctc_batch_cost.md b/site/en/api_docs/python/tf/keras/backend/ctc_batch_cost.md new file mode 100644 index 00000000000..c67f9de029e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/ctc_batch_cost.md @@ -0,0 +1,101 @@ +description: Runs CTC loss algorithm on each batch element. + +
+ + +
+ +# tf.keras.backend.ctc_batch_cost + + + + + + + + + +Runs CTC loss algorithm on each batch element. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`y_true` + +tensor `(samples, max_string_length)` +containing the truth labels. +
+`y_pred` + +tensor `(samples, time_steps, num_categories)` +containing the prediction, or output of the softmax. +
+`input_length` + +tensor `(samples, 1)` containing the sequence length for +each batch item in `y_pred`. +
+`label_length` + +tensor `(samples, 1)` containing the sequence length for +each batch item in `y_true`. +
+ + + + + + + + + + + +
+Tensor with shape (samples,1) containing the +CTC loss of each element. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/ctc_decode.md b/site/en/api_docs/python/tf/keras/backend/ctc_decode.md new file mode 100644 index 00000000000..a15236a9d6e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/ctc_decode.md @@ -0,0 +1,119 @@ +description: Decodes the output of a softmax. + +
+ + +
+ +# tf.keras.backend.ctc_decode + + + + + + + + + +Decodes the output of a softmax. + + + + + + + + + +Can use either greedy search (also known as best path) +or a constrained dictionary search. + + + + + + + + + + + + + + + + + + + + + + +
+`y_pred` + +tensor `(samples, time_steps, num_categories)` +containing the prediction, or output of the softmax. +
+`input_length` + +tensor `(samples, )` containing the sequence length for +each batch item in `y_pred`. +
+`greedy` + +perform much faster best-path search if `true`. +This does not use a dictionary. +
+`beam_width` + +if `greedy` is `false`: a beam search decoder will be used +with a beam of this width. +
+`top_paths` + +if `greedy` is `false`, +how many of the most probable paths will be returned. +
+ + + + + + + + + + + + +
+`Tuple` + +List: if `greedy` is `true`, returns a list of one element that +contains the decoded sequence. +If `false`, returns the `top_paths` most probable +decoded sequences. +Important: blank labels are returned as `-1`. +Tensor `(top_paths, )` that contains +the log probability of each decoded sequence. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/ctc_label_dense_to_sparse.md b/site/en/api_docs/python/tf/keras/backend/ctc_label_dense_to_sparse.md new file mode 100644 index 00000000000..2b0c5c6d53e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/ctc_label_dense_to_sparse.md @@ -0,0 +1,82 @@ +description: Converts CTC labels from dense to sparse. + +
+ + +
+ +# tf.keras.backend.ctc_label_dense_to_sparse + + + + + + + + + +Converts CTC labels from dense to sparse. + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +dense CTC labels. +
+`label_lengths` + +length of the labels. +
+ + + + + + + + + + + +
+A sparse tensor representation of the labels. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/cumprod.md b/site/en/api_docs/python/tf/keras/backend/cumprod.md new file mode 100644 index 00000000000..5d39884d4f7 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/cumprod.md @@ -0,0 +1,82 @@ +description: Cumulative product of the values in a tensor, alongside the specified axis. + +
+ + +
+ +# tf.keras.backend.cumprod + + + + + + + + + +Cumulative product of the values in a tensor, alongside the specified axis. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +An integer, the axis to compute the product. +
+ + + + + + + + + + + +
+A tensor of the cumulative product of values of `x` along `axis`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/cumsum.md b/site/en/api_docs/python/tf/keras/backend/cumsum.md new file mode 100644 index 00000000000..049c518d983 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/cumsum.md @@ -0,0 +1,82 @@ +description: Cumulative sum of the values in a tensor, alongside the specified axis. + +
+ + +
+ +# tf.keras.backend.cumsum + + + + + + + + + +Cumulative sum of the values in a tensor, alongside the specified axis. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +An integer, the axis to compute the sum. +
+ + + + + + + + + + + +
+A tensor of the cumulative sum of values of `x` along `axis`. +
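+
+
+#### Example:
+
+A short illustrative sketch (not part of the original reference); the expected
+result is shown in the comment:
+
+```
+>>> x = tf.constant([1., 2., 3., 4.])
+>>> y = tf.keras.backend.cumsum(x)
+>>> # tf.keras.backend.eval(y) -> [ 1.,  3.,  6., 10.]
+```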
+ diff --git a/site/en/api_docs/python/tf/keras/backend/depthwise_conv2d.md b/site/en/api_docs/python/tf/keras/backend/depthwise_conv2d.md new file mode 100644 index 00000000000..a14af45c638 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/depthwise_conv2d.md @@ -0,0 +1,130 @@ +description: 2D convolution with separable filters. + +
+ + +
+ +# tf.keras.backend.depthwise_conv2d + + + + + + + + + +2D convolution with separable filters. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +input tensor +
+`depthwise_kernel` + +convolution kernel for the depthwise convolution. +
+`strides` + +strides tuple (length 2). +
+`padding` + +string, `"same"` or `"valid"`. +
+`data_format` + +string, `"channels_last"` or `"channels_first"`. +
+`dilation_rate` + +tuple of integers, +dilation rates for the separable convolution. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError`
+
+if `data_format` is neither `channels_last` nor
+`channels_first`.
+
+ diff --git a/site/en/api_docs/python/tf/keras/backend/dot.md b/site/en/api_docs/python/tf/keras/backend/dot.md new file mode 100644 index 00000000000..b9c2db7325c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/dot.md @@ -0,0 +1,111 @@ +description: Multiplies 2 tensors (and/or variables) and returns a tensor. + +
+ + +
+ +# tf.keras.backend.dot + + + + + + + + + +Multiplies 2 tensors (and/or variables) and returns a tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`y` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor, dot product of `x` and `y`. +
+ + + +#### Examples: + + + +``` +>>> x = tf.keras.backend.placeholder(shape=(2, 3)) +>>> y = tf.keras.backend.placeholder(shape=(3, 4)) +>>> xy = tf.keras.backend.dot(x, y) +>>> xy + +``` + +``` +>>> x = tf.keras.backend.placeholder(shape=(32, 28, 3)) +>>> y = tf.keras.backend.placeholder(shape=(3, 4)) +>>> xy = tf.keras.backend.dot(x, y) +>>> xy + +``` + +``` +>>> x = tf.keras.backend.random_uniform_variable(shape=(2, 3), low=0, high=1) +>>> y = tf.keras.backend.ones((4, 3, 5)) +>>> xy = tf.keras.backend.dot(x, y) +>>> tf.keras.backend.int_shape(xy) +(2, 4, 5) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/dropout.md b/site/en/api_docs/python/tf/keras/backend/dropout.md new file mode 100644 index 00000000000..e0839d6a792 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/dropout.md @@ -0,0 +1,98 @@ +description: Sets entries in x to zero at random, while scaling the entire tensor. + +
+ + +
+ +# tf.keras.backend.dropout + + + + + + + + + +Sets entries in `x` to zero at random, while scaling the entire tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +tensor +
+`level` + +fraction of the entries in the tensor +that will be set to 0. +
+`noise_shape` + +shape for randomly generated keep/drop flags, +must be broadcastable to the shape of `x` +
+`seed` + +random seed to ensure determinism. +
+ + + + + + + + + + + +
+A tensor. +
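+
+
+#### Example:
+
+A hedged sketch (illustrative only): roughly `level` of the entries are zeroed
+and the surviving entries are rescaled by `1 / (1 - level)`, so the expected
+sum of the tensor is preserved:
+
+```
+>>> x = tf.ones((3, 4))
+>>> y = tf.keras.backend.dropout(x, level=0.5)
+>>> # y has the same shape as x; about half of its entries are 0. and the
+>>> # remaining entries equal 2. (that is, 1. / (1. - 0.5))
+```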
+ diff --git a/site/en/api_docs/python/tf/keras/backend/dtype.md b/site/en/api_docs/python/tf/keras/backend/dtype.md new file mode 100644 index 00000000000..e32bd9125e6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/dtype.md @@ -0,0 +1,98 @@ +description: Returns the dtype of a Keras tensor or variable, as a string. + +
+ + +
+ +# tf.keras.backend.dtype + + + + + + + + + +Returns the dtype of a Keras tensor or variable, as a string. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+String, dtype of `x`. +
+ + + +#### Examples: + + + +``` +>>> tf.keras.backend.dtype(tf.keras.backend.placeholder(shape=(2,4,5))) +'float32' +>>> tf.keras.backend.dtype(tf.keras.backend.placeholder(shape=(2,4,5), +... dtype='float32')) +'float32' +>>> tf.keras.backend.dtype(tf.keras.backend.placeholder(shape=(2,4,5), +... dtype='float64')) +'float64' +>>> kvar = tf.keras.backend.variable(np.array([[1, 2], [3, 4]])) +>>> tf.keras.backend.dtype(kvar) +'float32' +>>> kvar = tf.keras.backend.variable(np.array([[1, 2], [3, 4]]), +... dtype='float32') +>>> tf.keras.backend.dtype(kvar) +'float32' +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/elu.md b/site/en/api_docs/python/tf/keras/backend/elu.md new file mode 100644 index 00000000000..a738186675e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/elu.md @@ -0,0 +1,82 @@ +description: Exponential linear unit. + +
+ + +
+ +# tf.keras.backend.elu + + + + + + + + + +Exponential linear unit. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable to compute the activation function for. +
+`alpha` + +A scalar, slope of negative section. +
+ + + + + + + + + + + +
+A tensor. +
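+
+
+#### Example:
+
+An illustrative sketch (values in the comment are approximate):
+
+```
+>>> x = tf.constant([-1., 0., 2.])
+>>> y = tf.keras.backend.elu(x)
+>>> # tf.keras.backend.eval(y) -> approximately [-0.632, 0., 2.]
+>>> # (alpha * (exp(x) - 1) for negative inputs, x elsewhere)
+```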
+ diff --git a/site/en/api_docs/python/tf/keras/backend/epsilon.md b/site/en/api_docs/python/tf/keras/backend/epsilon.md new file mode 100644 index 00000000000..12f66974e80 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/epsilon.md @@ -0,0 +1,63 @@ +description: Returns the value of the fuzz factor used in numeric expressions. + +
+ + +
+ +# tf.keras.backend.epsilon + + + + + + + + + +Returns the value of the fuzz factor used in numeric expressions. + + + + + + + + + + + + + + + + + + +
+A float. +
+ + + +#### Example: + + +>>> tf.keras.backend.epsilon() +1e-07 \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/equal.md b/site/en/api_docs/python/tf/keras/backend/equal.md new file mode 100644 index 00000000000..5a591556de1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/equal.md @@ -0,0 +1,82 @@ +description: Element-wise equality between two tensors. + +
+ + +
+ +# tf.keras.backend.equal + + + + + + + + + +Element-wise equality between two tensors. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`y` + +Tensor or variable. +
+ + + + + + + + + + + +
+A bool tensor. +
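+
+
+#### Example:
+
+A minimal sketch (illustrative, not from the original reference):
+
+```
+>>> x = tf.constant([1, 2, 3])
+>>> y = tf.constant([1, 5, 3])
+>>> tf.keras.backend.eval(tf.keras.backend.equal(x, y))
+array([ True, False,  True])
+```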
+ diff --git a/site/en/api_docs/python/tf/keras/backend/eval.md b/site/en/api_docs/python/tf/keras/backend/eval.md new file mode 100644 index 00000000000..88aeba66955 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/eval.md @@ -0,0 +1,88 @@ +description: Evaluates the value of a variable. + +
+ + +
+ +# tf.keras.backend.eval + + + + + + + + + +Evaluates the value of a variable. + + + + + + + + + + + + + + + + + + + +
+`x` + +A variable. +
+ + + + + + + + + + + +
+A Numpy array. +
+ + + +#### Examples: + + + +``` +>>> kvar = tf.keras.backend.variable(np.array([[1, 2], [3, 4]]), +... dtype='float32') +>>> tf.keras.backend.eval(kvar) +array([[1., 2.], + [3., 4.]], dtype=float32) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/exp.md b/site/en/api_docs/python/tf/keras/backend/exp.md new file mode 100644 index 00000000000..4f2c41baddb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/exp.md @@ -0,0 +1,75 @@ +description: Element-wise exponential. + +
+ + +
+ +# tf.keras.backend.exp + + + + + + + + + +Element-wise exponential. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/expand_dims.md b/site/en/api_docs/python/tf/keras/backend/expand_dims.md new file mode 100644 index 00000000000..e3ada509784 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/expand_dims.md @@ -0,0 +1,82 @@ +description: Adds a 1-sized dimension at index "axis". + +
+ + +
+ +# tf.keras.backend.expand_dims + + + + + + + + + +Adds a 1-sized dimension at index "axis". + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +Position where to add a new axis. +
+ + + + + + + + + + + +
+A tensor with expanded dimensions. +
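+
+
+#### Example:
+
+A small illustrative sketch:
+
+```
+>>> x = tf.ones((2, 3))
+>>> tf.keras.backend.int_shape(tf.keras.backend.expand_dims(x, axis=1))
+(2, 1, 3)
+```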
+ diff --git a/site/en/api_docs/python/tf/keras/backend/eye.md b/site/en/api_docs/python/tf/keras/backend/eye.md new file mode 100644 index 00000000000..4b8c3fccc57 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/eye.md @@ -0,0 +1,103 @@ +description: Instantiate an identity matrix and returns it. + +
+ + +
+
+# tf.keras.backend.eye
+
+
+
+
+
+
+
+
+
+
+Instantiates an identity matrix and returns it.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`size` + +Integer, number of rows/columns. +
+`dtype` + +String, data type of returned Keras variable. +
+`name` + +String, name of returned Keras variable. +
+ + + + + + + + + + + +
+A Keras variable, an identity matrix. +
+ + + +#### Example: + + + + +``` +>>> kvar = tf.keras.backend.eye(3) +>>> tf.keras.backend.eval(kvar) +array([[1., 0., 0.], + [0., 1., 0.], + [0., 0., 1.]], dtype=float32) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/flatten.md b/site/en/api_docs/python/tf/keras/backend/flatten.md new file mode 100644 index 00000000000..2a80aa213cc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/flatten.md @@ -0,0 +1,90 @@ +description: Flatten a tensor. + +
+ + +
+ +# tf.keras.backend.flatten + + + + + + + + + +Flatten a tensor. + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+ + + + + + + + + + + +
+A tensor, reshaped into 1-D +
+ + + +#### Example: + + +``` +>>> b = tf.constant([[1, 2], [3, 4]]) +>>> b + +>>> tf.keras.backend.flatten(b) + +``` diff --git a/site/en/api_docs/python/tf/keras/backend/floatx.md b/site/en/api_docs/python/tf/keras/backend/floatx.md new file mode 100644 index 00000000000..53476cfa558 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/floatx.md @@ -0,0 +1,64 @@ +description: Returns the default float type, as a string. + +
+ + +
+ +# tf.keras.backend.floatx + + + + + + + + + +Returns the default float type, as a string. + + + + + + + + + +E.g. `'float16'`, `'float32'`, `'float64'`. + + + + + + + + + +
+String, the current default float type. +
+ + + +#### Example: + + +>>> tf.keras.backend.floatx() +'float32' \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/foldl.md b/site/en/api_docs/python/tf/keras/backend/foldl.md new file mode 100644 index 00000000000..7b85cb45c6a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/foldl.md @@ -0,0 +1,97 @@ +description: Reduce elems using fn to combine them from left to right. + +
+ + +
+ +# tf.keras.backend.foldl + + + + + + + + + +Reduce elems using fn to combine them from left to right. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +Callable that will be called upon each element in elems and an +accumulator, for instance `lambda acc, x: acc + x` +
+`elems` + +tensor +
+`initializer` + +The first value used (`elems[0]` in case of None) +
+`name` + +A string name for the foldl node in the graph +
+ + + + + + + + + + + +
+Tensor with same type and shape as `initializer`. +
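+
+
+#### Example:
+
+A brief illustrative sketch summing the elements from left to right:
+
+```
+>>> elems = tf.constant([1., 2., 3., 4.])
+>>> total = tf.keras.backend.foldl(lambda acc, x: acc + x, elems)
+>>> # tf.keras.backend.eval(total) -> 10.0
+```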
+ diff --git a/site/en/api_docs/python/tf/keras/backend/foldr.md b/site/en/api_docs/python/tf/keras/backend/foldr.md new file mode 100644 index 00000000000..4cbe5eece8a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/foldr.md @@ -0,0 +1,97 @@ +description: Reduce elems using fn to combine them from right to left. + +
+ + +
+ +# tf.keras.backend.foldr + + + + + + + + + +Reduce elems using fn to combine them from right to left. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +Callable that will be called upon each element in elems and an +accumulator, for instance `lambda acc, x: acc + x` +
+`elems` + +tensor +
+`initializer` + +The first value used (`elems[-1]` in case of None) +
+`name` + +A string name for the foldr node in the graph +
+ + + + + + + + + + + +
+Same type and shape as `initializer`.
+
+ diff --git a/site/en/api_docs/python/tf/keras/backend/function.md b/site/en/api_docs/python/tf/keras/backend/function.md new file mode 100644 index 00000000000..724830ff301 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/function.md @@ -0,0 +1,120 @@ +description: Instantiates a Keras function. + +
+ + +
+ +# tf.keras.backend.function + + + + + + + + + +Instantiates a Keras function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +List of placeholder tensors. +
+`outputs` + +List of output tensors. +
+`updates` + +List of update ops. +
+`name` + +String, name of function. +
+`**kwargs` + +Passed to `tf.Session.run`. +
+ + + + + + + + + + + +
+Output values as Numpy arrays. +
+ + + + + + + + + + + + +
+`ValueError` + +if invalid kwargs are passed in or if in eager execution. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/gather.md b/site/en/api_docs/python/tf/keras/backend/gather.md new file mode 100644 index 00000000000..920037994c0 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/gather.md @@ -0,0 +1,105 @@ +description: Retrieves the elements of indices indices in the tensor reference. + +
+ + +
+ +# tf.keras.backend.gather + + + + + + + + + +Retrieves the elements of indices `indices` in the tensor `reference`. + + + + + + + + + + + + + + + + + + + + + + +
+`reference` + +A tensor. +
+`indices` + +An integer tensor of indices. +
+ + + + + + + + + + + +
+A tensor of same type as `reference`. +
+ + + +#### Examples: + + + +``` +>>> var = tf.keras.backend.variable([[1, 2, 3], [4, 5, 6]]) +>>> tf.keras.backend.eval(var) +array([[1., 2., 3.], + [4., 5., 6.]], dtype=float32) +>>> var_gathered = tf.keras.backend.gather(var, [0]) +>>> tf.keras.backend.eval(var_gathered) +array([[1., 2., 3.]], dtype=float32) +>>> var_gathered = tf.keras.backend.gather(var, [1]) +>>> tf.keras.backend.eval(var_gathered) +array([[4., 5., 6.]], dtype=float32) +>>> var_gathered = tf.keras.backend.gather(var, [0,1,0]) +>>> tf.keras.backend.eval(var_gathered) +array([[1., 2., 3.], + [4., 5., 6.], + [1., 2., 3.]], dtype=float32) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/get_uid.md b/site/en/api_docs/python/tf/keras/backend/get_uid.md new file mode 100644 index 00000000000..2d5aa4769bf --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/get_uid.md @@ -0,0 +1,87 @@ +description: Associates a string prefix with an integer counter in a TensorFlow graph. + +
+ + +
+ +# tf.keras.backend.get_uid + + + + + + + + + +Associates a string prefix with an integer counter in a TensorFlow graph. + + + + + + + + + + + + + + + + + + + +
+`prefix` + +String prefix to index. +
+ + + + + + + + + + + +
+Unique integer ID. +
+ + + +#### Example: + + + +``` +>>> get_uid('dense') +1 +>>> get_uid('dense') +2 +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/get_value.md b/site/en/api_docs/python/tf/keras/backend/get_value.md new file mode 100644 index 00000000000..65239a4889a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/get_value.md @@ -0,0 +1,116 @@ +description: Returns the value of a variable. + +
+ + +
+
+# tf.keras.backend.get_value
+
+
+
+
+
+
+
+
+
+
+Returns the value of a variable.
+
+
+
+
+
+
+
+
+
+backend.get_value is the complement of backend.set_value, and provides
+a generic interface for reading from variables while abstracting away the
+differences between TensorFlow 1.x and 2.x semantics.
+
+```
+>>> K = tf.keras.backend  # Common keras convention
+>>> v = K.variable(1.)
+```
+
+```
+>>> # reassign
+>>> K.set_value(v, 2.)
+>>> print(K.get_value(v))
+2.0
+```
+
+```
+>>> # increment
+>>> K.set_value(v, K.get_value(v) + 1)
+>>> print(K.get_value(v))
+3.0
+```
+
+Variable semantics in TensorFlow 2 are eager execution friendly. The above
+code is roughly equivalent to:
+
+```
+>>> v = tf.Variable(1.)
+```
+
+```
+>>> _ = v.assign(2.)
+>>> print(v.numpy())
+2.0
+```
+
+```
+>>> _ = v.assign_add(1.)
+>>> print(v.numpy())
+3.0
+```
+
+
+
+
+
+
+
+
+
+
+`x` + +input variable. +
+ + + + + + + + + + + +
+A Numpy array. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/gradients.md b/site/en/api_docs/python/tf/keras/backend/gradients.md new file mode 100644 index 00000000000..915363830f6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/gradients.md @@ -0,0 +1,82 @@ +description: Returns the gradients of loss w.r.t. variables. + +
+ + +
+ +# tf.keras.backend.gradients + + + + + + + + + +Returns the gradients of `loss` w.r.t. `variables`. + + + + + + + + + + + + + + + + + + + + + + +
+`loss` + +Scalar tensor to minimize. +
+`variables` + +List of variables. +
+ + + + + + + + + + + +
+A gradients tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/greater.md b/site/en/api_docs/python/tf/keras/backend/greater.md new file mode 100644 index 00000000000..1ff1d2c2b45 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/greater.md @@ -0,0 +1,82 @@ +description: Element-wise truth value of (x > y). + +
+ + +
+ +# tf.keras.backend.greater + + + + + + + + + +Element-wise truth value of (x > y). + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`y` + +Tensor or variable. +
+ + + + + + + + + + + +
+A bool tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/greater_equal.md b/site/en/api_docs/python/tf/keras/backend/greater_equal.md new file mode 100644 index 00000000000..b28d55245bb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/greater_equal.md @@ -0,0 +1,82 @@ +description: Element-wise truth value of (x >= y). + +
+ + +
+ +# tf.keras.backend.greater_equal + + + + + + + + + +Element-wise truth value of (x >= y). + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`y` + +Tensor or variable. +
+ + + + + + + + + + + +
+A bool tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/hard_sigmoid.md b/site/en/api_docs/python/tf/keras/backend/hard_sigmoid.md new file mode 100644 index 00000000000..ecce3a4da3f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/hard_sigmoid.md @@ -0,0 +1,78 @@ +description: Segment-wise linear approximation of sigmoid. + +
+ + +
+ +# tf.keras.backend.hard_sigmoid + + + + + + + + + +Segment-wise linear approximation of sigmoid. + + + + + + + + + +Faster than sigmoid. +Returns `0.` if `x < -2.5`, `1.` if `x > 2.5`. +In `-2.5 <= x <= 2.5`, returns `0.2 * x + 0.5`. + + + + + + + + + + +
+`x` + +A tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
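+
+
+#### Example:
+
+An illustrative sketch of the piecewise behavior described above (values
+approximate):
+
+```
+>>> x = tf.constant([-3., 0., 1., 3.])
+>>> y = tf.keras.backend.hard_sigmoid(x)
+>>> # tf.keras.backend.eval(y) -> approximately [0., 0.5, 0.7, 1.]
+```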
+ diff --git a/site/en/api_docs/python/tf/keras/backend/image_data_format.md b/site/en/api_docs/python/tf/keras/backend/image_data_format.md new file mode 100644 index 00000000000..79611d82f90 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/image_data_format.md @@ -0,0 +1,63 @@ +description: Returns the default image data format convention. + +
+ + +
+ +# tf.keras.backend.image_data_format + + + + + + + + + +Returns the default image data format convention. + + + + + + + + + + + + + + + + + + +
+A string, either `'channels_first'` or `'channels_last'` +
+ + + +#### Example: + + +>>> tf.keras.backend.image_data_format() +'channels_last' \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/in_test_phase.md b/site/en/api_docs/python/tf/keras/backend/in_test_phase.md new file mode 100644 index 00000000000..805314596fb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/in_test_phase.md @@ -0,0 +1,94 @@ +description: Selects x in test phase, and alt otherwise. + +
+ + +
+ +# tf.keras.backend.in_test_phase + + + + + + + + + +Selects `x` in test phase, and `alt` otherwise. + + + + + + + + + +Note that `alt` should have the *same shape* as `x`. + + + + + + + + + + + + + + + + +
+`x` + +What to return in test phase +(tensor or callable that returns a tensor). +
+`alt` + +What to return otherwise +(tensor or callable that returns a tensor). +
+`training` + +Optional scalar tensor +(or Python boolean, or Python integer) +specifying the learning phase. +
+ + + + + + + + + + + +
+Either `x` or `alt` based on `K.learning_phase`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/in_top_k.md b/site/en/api_docs/python/tf/keras/backend/in_top_k.md new file mode 100644 index 00000000000..033c752f35f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/in_top_k.md @@ -0,0 +1,91 @@ +description: Returns whether the targets are in the top k predictions. + +
+ + +
+ +# tf.keras.backend.in_top_k + + + + + + + + + +Returns whether the `targets` are in the top `k` `predictions`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`predictions` + +A tensor of shape `(batch_size, classes)` and type `float32`. +
+`targets` + +A 1D tensor of length `batch_size` and type `int32` or `int64`. +
+`k` + +An `int`, number of top elements to consider. +
+ + + + + + + + + + + +
+A 1D tensor of length `batch_size` and type `bool`. +`output[i]` is `True` if `predictions[i, targets[i]]` is within top-`k` +values of `predictions[i]`. +
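+
+
+#### Example:
+
+A small illustrative sketch (not from the original reference):
+
+```
+>>> predictions = tf.constant([[0.1, 0.9, 0.0], [0.6, 0.3, 0.1]])
+>>> targets = tf.constant([1, 2])
+>>> tf.keras.backend.eval(tf.keras.backend.in_top_k(predictions, targets, k=1))
+array([ True, False])
+```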
+ diff --git a/site/en/api_docs/python/tf/keras/backend/in_train_phase.md b/site/en/api_docs/python/tf/keras/backend/in_train_phase.md new file mode 100644 index 00000000000..1430197b4c7 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/in_train_phase.md @@ -0,0 +1,95 @@ +description: Selects x in train phase, and alt otherwise. + +
+ + +
+ +# tf.keras.backend.in_train_phase + + + + + + + + + +Selects `x` in train phase, and `alt` otherwise. + + + + + + + + + +Note that `alt` should have the *same shape* as `x`. + + + + + + + + + + + + + + + + +
+`x` + +What to return in train phase +(tensor or callable that returns a tensor). +
+`alt` + +What to return otherwise +(tensor or callable that returns a tensor). +
+`training` + +Optional scalar tensor +(or Python boolean, or Python integer) +specifying the learning phase. +
+ + + + + + + + + + + +
+Either `x` or `alt` based on the `training` flag. +the `training` flag defaults to `K.learning_phase()`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/int_shape.md b/site/en/api_docs/python/tf/keras/backend/int_shape.md new file mode 100644 index 00000000000..bf59c694599 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/int_shape.md @@ -0,0 +1,90 @@ +description: Returns the shape of tensor or variable as a tuple of int or None entries. + +
+ + +
+ +# tf.keras.backend.int_shape + + + + + + + + + +Returns the shape of tensor or variable as a tuple of int or None entries. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tuple of integers (or None entries). +
+ + + +#### Examples: + + + +``` +>>> input = tf.keras.backend.placeholder(shape=(2, 4, 5)) +>>> tf.keras.backend.int_shape(input) +(2, 4, 5) +>>> val = np.array([[1, 2], [3, 4]]) +>>> kvar = tf.keras.backend.variable(value=val) +>>> tf.keras.backend.int_shape(kvar) +(2, 2) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/is_keras_tensor.md b/site/en/api_docs/python/tf/keras/backend/is_keras_tensor.md new file mode 100644 index 00000000000..c06db85a80a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/is_keras_tensor.md @@ -0,0 +1,125 @@ +description: Returns whether x is a Keras tensor. + +
+ + +
+ +# tf.keras.backend.is_keras_tensor + + + + + + + + + +Returns whether `x` is a Keras tensor. + + + + + + + + + +A "Keras tensor" is a tensor that was returned by a Keras layer, +(`Layer` class) or by `Input`. + + + + + + + + + + +
+`x` + +A candidate tensor. +
+ + + + + + + + + + + +
+A boolean: Whether the argument is a Keras tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +In case `x` is not a symbolic tensor. +
+ + + +#### Examples: + + + +``` +>>> np_var = np.array([1, 2]) +>>> # A numpy array is not a symbolic tensor. +>>> tf.keras.backend.is_keras_tensor(np_var) +Traceback (most recent call last): +... +ValueError: Unexpectedly found an instance of type ``. +Expected a symbolic tensor instance. +>>> keras_var = tf.keras.backend.variable(np_var) +>>> # A variable created with the keras backend is not a Keras tensor. +>>> tf.keras.backend.is_keras_tensor(keras_var) +False +>>> keras_placeholder = tf.keras.backend.placeholder(shape=(2, 4, 5)) +>>> # A placeholder is not a Keras tensor. +>>> tf.keras.backend.is_keras_tensor(keras_placeholder) +False +>>> keras_input = tf.keras.layers.Input([10]) +>>> # An Input is a Keras tensor. +>>> tf.keras.backend.is_keras_tensor(keras_input) +True +>>> keras_layer_output = tf.keras.layers.Dense(10)(keras_input) +>>> # Any Keras layer output is a Keras tensor. +>>> tf.keras.backend.is_keras_tensor(keras_layer_output) +True +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/is_sparse.md b/site/en/api_docs/python/tf/keras/backend/is_sparse.md new file mode 100644 index 00000000000..1fd51517a40 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/is_sparse.md @@ -0,0 +1,90 @@ +description: Returns whether a tensor is a sparse tensor. + +
+ + +
+ +# tf.keras.backend.is_sparse + + + + + + + + + +Returns whether a tensor is a sparse tensor. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A tensor instance. +
+ + + + + + + + + + + +
+A boolean. +
+ + + +#### Example: + + + + +``` +>>> a = tf.keras.backend.placeholder((2, 2), sparse=False) +>>> print(tf.keras.backend.is_sparse(a)) +False +>>> b = tf.keras.backend.placeholder((2, 2), sparse=True) +>>> print(tf.keras.backend.is_sparse(b)) +True +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/l2_normalize.md b/site/en/api_docs/python/tf/keras/backend/l2_normalize.md new file mode 100644 index 00000000000..4a27b2819c8 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/l2_normalize.md @@ -0,0 +1,82 @@ +description: Normalizes a tensor wrt the L2 norm alongside the specified axis. + +
+ + +
+ +# tf.keras.backend.l2_normalize + + + + + + + + + +Normalizes a tensor wrt the L2 norm alongside the specified axis. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`axis` + +axis along which to perform normalization. +
+ + + + + + + + + + + +
+A tensor. +
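+
+
+#### Example:
+
+A brief illustrative sketch; the comment shows the expected normalized values:
+
+```
+>>> x = tf.constant([3., 4.])
+>>> y = tf.keras.backend.l2_normalize(x, axis=0)
+>>> # tf.keras.backend.eval(y) -> [0.6, 0.8]  (x divided by its L2 norm, 5.)
+```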
+ diff --git a/site/en/api_docs/python/tf/keras/backend/learning_phase.md b/site/en/api_docs/python/tf/keras/backend/learning_phase.md new file mode 100644 index 00000000000..5c3a8f21844 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/learning_phase.md @@ -0,0 +1,59 @@ +description: Returns the learning phase flag. + +
+ + +
+ +# tf.keras.backend.learning_phase + + + + + + + + + +Returns the learning phase flag. + + + + + + + + + +The learning phase flag is a bool tensor (0 = test, 1 = train) +to be passed as input to any Keras function +that uses a different behavior at train time and test time. + + + + + + + + + +
+Learning phase (scalar integer tensor or Python integer). +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/learning_phase_scope.md b/site/en/api_docs/python/tf/keras/backend/learning_phase_scope.md new file mode 100644 index 00000000000..1221ebcb190 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/learning_phase_scope.md @@ -0,0 +1,87 @@ +description: Provides a scope within which the learning phase is equal to value. + +
+ + +
+ +# tf.keras.backend.learning_phase_scope + + + + + + + + + +Provides a scope within which the learning phase is equal to `value`. + + + + + + + + + +The learning phase gets restored to its original value upon exiting the scope. + + + + + + + + + + +
+`value` + +Learning phase value, either 0 or 1 (integers). +0 = test, 1 = train +
+ + + +#### Yields: + +None. + + + + + + + + + + + + +
+`ValueError` + +if `value` is neither `0` nor `1`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/less.md b/site/en/api_docs/python/tf/keras/backend/less.md new file mode 100644 index 00000000000..a47f0486abf --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/less.md @@ -0,0 +1,82 @@ +description: Element-wise truth value of (x < y). + +
+ + +
+ +# tf.keras.backend.less + + + + + + + + + +Element-wise truth value of (x < y). + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`y` + +Tensor or variable. +
+ + + + + + + + + + + +
+A bool tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/less_equal.md b/site/en/api_docs/python/tf/keras/backend/less_equal.md new file mode 100644 index 00000000000..d4e67d3e2ad --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/less_equal.md @@ -0,0 +1,82 @@ +description: Element-wise truth value of (x <= y). + +
+ + +
+ +# tf.keras.backend.less_equal + + + + + + + + + +Element-wise truth value of (x <= y). + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`y` + +Tensor or variable. +
+ + + + + + + + + + + +
+A bool tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/local_conv1d.md b/site/en/api_docs/python/tf/keras/backend/local_conv1d.md new file mode 100644 index 00000000000..36c5dabb6fb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/local_conv1d.md @@ -0,0 +1,115 @@ +description: Apply 1D conv with un-shared weights. + +
+ + +
+ +# tf.keras.backend.local_conv1d + + + + + + + + + +Apply 1D conv with un-shared weights. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +3D tensor with shape: +(batch_size, steps, input_dim) +if data_format is "channels_last" or +(batch_size, input_dim, steps) +if data_format is "channels_first". +
+`kernel` + +the unshared weight for convolution, +with shape (output_length, feature_dim, filters). +
+`kernel_size` + +a tuple of a single integer, +specifying the length of the 1D convolution window. +
+`strides` + +a tuple of a single integer, +specifying the stride length of the convolution. +
+`data_format` + +the data format, channels_first or channels_last. +
+ + + + + + + + + + + +
+A 3D tensor with shape:
+(batch_size, output_length, filters)
+if data_format='channels_last'
+or a 3D tensor with shape:
+(batch_size, filters, output_length)
+if data_format='channels_first'.
+
+ diff --git a/site/en/api_docs/python/tf/keras/backend/local_conv2d.md b/site/en/api_docs/python/tf/keras/backend/local_conv2d.md new file mode 100644 index 00000000000..e45a17a2eb1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/local_conv2d.md @@ -0,0 +1,123 @@ +description: Apply 2D conv with un-shared weights. + +
+ + +
+ +# tf.keras.backend.local_conv2d + + + + + + + + + +Apply 2D conv with un-shared weights. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +4D tensor with shape: +(batch_size, filters, new_rows, new_cols) +if data_format='channels_first' +or 4D tensor with shape: +(batch_size, new_rows, new_cols, filters) +if data_format='channels_last'. +
+`kernel` + +the unshared weight for convolution, +with shape (output_items, feature_dim, filters). +
+`kernel_size` + +a tuple of 2 integers, specifying the +width and height of the 2D convolution window. +
+`strides` + +a tuple of 2 integers, specifying the strides +of the convolution along the width and height. +
+`output_shape` + +a tuple with (output_row, output_col). +
+`data_format` + +the data format, channels_first or channels_last. +
+ + + + + + + + + + + +
+A 4D tensor with shape: +(batch_size, filters, new_rows, new_cols) +if data_format='channels_first' +or 4D tensor with shape: +(batch_size, new_rows, new_cols, filters) +if data_format='channels_last'. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/log.md b/site/en/api_docs/python/tf/keras/backend/log.md new file mode 100644 index 00000000000..a1439a9e6ca --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/log.md @@ -0,0 +1,75 @@ +description: Element-wise log. + +
+ + +
+ +# tf.keras.backend.log + + + + + + + + + +Element-wise log. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/manual_variable_initialization.md b/site/en/api_docs/python/tf/keras/backend/manual_variable_initialization.md new file mode 100644 index 00000000000..505a9160f75 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/manual_variable_initialization.md @@ -0,0 +1,66 @@ +description: Sets the manual variable initialization flag. + +
+ + +
+ +# tf.keras.backend.manual_variable_initialization + + + + + + + + + +Sets the manual variable initialization flag. + + + + + + + + + +This boolean flag determines whether +variables should be initialized +as they are instantiated (default), or if +the user should handle the initialization +(e.g. via tf.compat.v1.initialize_all_variables()). + + + + + + + + + + +
+`value` + +Python boolean. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/map_fn.md b/site/en/api_docs/python/tf/keras/backend/map_fn.md new file mode 100644 index 00000000000..1351a75123a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/map_fn.md @@ -0,0 +1,96 @@ +description: Map the function fn over the elements elems and return the outputs. + +
+ + +
+ +# tf.keras.backend.map_fn + + + + + + + + + +Map the function fn over the elements elems and return the outputs. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +Callable that will be called upon each element in elems +
+`elems` + +tensor +
+`name` + +A string name for the map node in the graph +
+`dtype` + +Output data type. +
+ + + + + + + + + + + +
+Tensor with dtype `dtype`. +
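+
+
+#### Example:
+
+A minimal illustrative sketch squaring each element:
+
+```
+>>> elems = tf.constant([1., 2., 3.])
+>>> squares = tf.keras.backend.map_fn(lambda t: t * t, elems)
+>>> # tf.keras.backend.eval(squares) -> [1., 4., 9.]
+```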
+ diff --git a/site/en/api_docs/python/tf/keras/backend/max.md b/site/en/api_docs/python/tf/keras/backend/max.md new file mode 100644 index 00000000000..3fbf89f1151 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/max.md @@ -0,0 +1,92 @@ +description: Maximum value in a tensor. + +
+ + +
+ +# tf.keras.backend.max + + + + + + + + + +Maximum value in a tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +An integer, the axis to find maximum values. +
+`keepdims` + +A boolean, whether to keep the dimensions or not. +If `keepdims` is `False`, the rank of the tensor is reduced +by 1. If `keepdims` is `True`, +the reduced dimension is retained with length 1. +
+ + + + + + + + + + + +
+A tensor with maximum values of `x`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/maximum.md b/site/en/api_docs/python/tf/keras/backend/maximum.md new file mode 100644 index 00000000000..d648e25af97 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/maximum.md @@ -0,0 +1,97 @@ +description: Element-wise maximum of two tensors. + +
+ + +
+ +# tf.keras.backend.maximum + + + + + + + + + +Element-wise maximum of two tensors. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`y` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor with the element wise maximum value(s) of `x` and `y`. +
+ + + +#### Examples: + + + +``` +>>> x = tf.Variable([[1, 2], [3, 4]]) +>>> y = tf.Variable([[2, 1], [0, -1]]) +>>> m = tf.keras.backend.maximum(x, y) +>>> m + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/mean.md b/site/en/api_docs/python/tf/keras/backend/mean.md new file mode 100644 index 00000000000..856d3087851 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/mean.md @@ -0,0 +1,92 @@ +description: Mean of a tensor, alongside the specified axis. + +
+ + +
+ +# tf.keras.backend.mean + + + + + + + + + +Mean of a tensor, alongside the specified axis. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +A list of integer. Axes to compute the mean. +
+`keepdims` + +A boolean, whether to keep the dimensions or not. +If `keepdims` is `False`, the rank of the tensor is reduced +by 1 for each entry in `axis`. If `keepdims` is `True`, +the reduced dimensions are retained with length 1. +
+ + + + + + + + + + + +
+A tensor with the mean of elements of `x`. +
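+
+
+#### Example:
+
+A short illustrative sketch (not part of the original reference):
+
+```
+>>> x = tf.constant([[1., 2.], [3., 4.]])
+>>> tf.keras.backend.eval(tf.keras.backend.mean(x))
+2.5
+>>> # per-row means: tf.keras.backend.eval(tf.keras.backend.mean(x, axis=1))
+>>> # -> [1.5, 3.5]
+```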
+ diff --git a/site/en/api_docs/python/tf/keras/backend/min.md b/site/en/api_docs/python/tf/keras/backend/min.md new file mode 100644 index 00000000000..6340019afd1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/min.md @@ -0,0 +1,92 @@ +description: Minimum value in a tensor. + +
+ + +
+ +# tf.keras.backend.min + + + + + + + + + +Minimum value in a tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +An integer, the axis to find minimum values. +
+`keepdims` + +A boolean, whether to keep the dimensions or not. +If `keepdims` is `False`, the rank of the tensor is reduced +by 1. If `keepdims` is `True`, +the reduced dimension is retained with length 1. +
+ + + + + + + + + + + +
+A tensor with minimum values of `x`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/minimum.md b/site/en/api_docs/python/tf/keras/backend/minimum.md new file mode 100644 index 00000000000..d6cd9f5beb4 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/minimum.md @@ -0,0 +1,82 @@ +description: Element-wise minimum of two tensors. + +
+ + +
+ +# tf.keras.backend.minimum + + + + + + + + + +Element-wise minimum of two tensors. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`y` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/moving_average_update.md b/site/en/api_docs/python/tf/keras/backend/moving_average_update.md new file mode 100644 index 00000000000..0ea65592bcc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/moving_average_update.md @@ -0,0 +1,89 @@ +description: Compute the moving average of a variable. + +
+ + +
+ +# tf.keras.backend.moving_average_update + + + + + + + + + +Compute the moving average of a variable. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A Variable. +
+`value`
+
+A tensor with the same shape as `x`.
+
+`momentum` + +The moving average momentum. +
+ + + + + + + + + + + +
+An Operation to update the variable. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/name_scope.md b/site/en/api_docs/python/tf/keras/backend/name_scope.md new file mode 100644 index 00000000000..347b1d26587 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/name_scope.md @@ -0,0 +1,78 @@ +description: A context manager for use when defining a Python op. + +
+ + +
+
+# tf.keras.backend.name_scope
+
+
+
+
+
+
+
+
+
+
+A context manager for use when defining a Python op.
+
+
+
+
+
+
+
+This context manager pushes a name scope, which will make the name of all
+operations added within it have a prefix.
+
+For example, to define a new Python op called `my_op`:
+
+```
+def my_op(a):
+  with tf.name_scope("MyOp") as scope:
+    a = tf.convert_to_tensor(a, name="a")
+    # Define some computation that uses `a`.
+    return foo_op(..., name=scope)
+```
+
+When executed, the Tensor `a` will have the name `MyOp/a`.
+
+
+
+
+
+
+
+
+
+
+`name` + +The prefix to use on all names created within the name scope. +
+ + + + + + + + + + + +
+Name scope context manager. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/ndim.md b/site/en/api_docs/python/tf/keras/backend/ndim.md new file mode 100644 index 00000000000..2f50134ad95 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/ndim.md @@ -0,0 +1,91 @@ +description: Returns the number of axes in a tensor, as an integer. + +
+ + +
+ +# tf.keras.backend.ndim + + + + + + + + + +Returns the number of axes in a tensor, as an integer. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+Integer (scalar), number of axes. +
+ + + +#### Examples: + + + + +``` +>>> input = tf.keras.backend.placeholder(shape=(2, 4, 5)) +>>> val = np.array([[1, 2], [3, 4]]) +>>> kvar = tf.keras.backend.variable(value=val) +>>> tf.keras.backend.ndim(input) +3 +>>> tf.keras.backend.ndim(kvar) +2 +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/normalize_batch_in_training.md b/site/en/api_docs/python/tf/keras/backend/normalize_batch_in_training.md new file mode 100644 index 00000000000..785c7efe144 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/normalize_batch_in_training.md @@ -0,0 +1,104 @@ +description: Computes mean and std for batch then apply batch_normalization on batch. + +
+ + +
+
+# tf.keras.backend.normalize_batch_in_training
+
+
+
+
+
+
+
+
+
+
+Computes mean and std for batch then applies batch_normalization on batch.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`x` + +Input tensor or variable. +
+`gamma` + +Tensor by which to scale the input. +
+`beta` + +Tensor with which to center the input. +
+`reduction_axes` + +iterable of integers, +axes over which to normalize. +
+`epsilon` + +Fuzz factor. +
+ + + + + + + + + + + +
+A tuple length of 3, `(normalized_tensor, mean, variance)`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/not_equal.md b/site/en/api_docs/python/tf/keras/backend/not_equal.md new file mode 100644 index 00000000000..46c62750775 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/not_equal.md @@ -0,0 +1,82 @@ +description: Element-wise inequality between two tensors. + +
+ + +
+ +# tf.keras.backend.not_equal + + + + + + + + + +Element-wise inequality between two tensors. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`y` + +Tensor or variable. +
+ + + + + + + + + + + +
+A bool tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/one_hot.md b/site/en/api_docs/python/tf/keras/backend/one_hot.md new file mode 100644 index 00000000000..17f553402f6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/one_hot.md @@ -0,0 +1,98 @@ +description: Computes the one-hot representation of an integer tensor. + +
+ + +
+ +# tf.keras.backend.one_hot + + + + + + + + + +Computes the one-hot representation of an integer tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`indices` + +nD integer tensor of shape +`(batch_size, dim1, dim2, ... dim(n-1))` +
+`num_classes` + +Integer, number of classes to consider. +
+ + + + + + + + + + + +
+(n + 1)D one hot representation of the input +with shape `(batch_size, dim1, dim2, ... dim(n-1), num_classes)` +
+ + + + + + + + + + + +
+The one-hot tensor. +
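+
+
+#### Example:
+
+A small illustrative sketch; the comment shows the expected one-hot rows:
+
+```
+>>> indices = tf.constant([0, 2, 1])
+>>> onehot = tf.keras.backend.one_hot(indices, num_classes=3)
+>>> # tf.keras.backend.eval(onehot) ->
+>>> # [[1., 0., 0.],
+>>> #  [0., 0., 1.],
+>>> #  [0., 1., 0.]]
+```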
+ diff --git a/site/en/api_docs/python/tf/keras/backend/ones.md b/site/en/api_docs/python/tf/keras/backend/ones.md new file mode 100644 index 00000000000..676393f4896 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/ones.md @@ -0,0 +1,105 @@ +description: Instantiates an all-ones variable and returns it. + +
+ + +
+ +# tf.keras.backend.ones + + + + + + + + + +Instantiates an all-ones variable and returns it. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +Tuple of integers, shape of returned Keras variable. +
+`dtype` + +String, data type of returned Keras variable. +
+`name` + +String, name of returned Keras variable. +
+ + + + + + + + + + + +
+A Keras variable, filled with `1.0`. +Note that if `shape` was symbolic, we cannot return a variable, +and will return a dynamically-shaped tensor instead. +
+ + + +#### Example: + + + + +``` +>>> kvar = tf.keras.backend.ones((3,4)) +>>> tf.keras.backend.eval(kvar) +array([[1., 1., 1., 1.], + [1., 1., 1., 1.], + [1., 1., 1., 1.]], dtype=float32) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/ones_like.md b/site/en/api_docs/python/tf/keras/backend/ones_like.md new file mode 100644 index 00000000000..d5d31fa526d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/ones_like.md @@ -0,0 +1,103 @@ +description: Instantiates an all-ones variable of the same shape as another tensor. + +
+ + +
+ +# tf.keras.backend.ones_like + + + + + + + + + +Instantiates an all-ones variable of the same shape as another tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Keras variable or tensor. +
+`dtype` + +String, dtype of returned Keras variable. +None uses the dtype of x. +
+`name` + +String, name for the variable to create. +
+ + + + + + + + + + + +
+A Keras variable with the shape of x filled with ones. +
+ + + +#### Example: + + + +``` +>>> kvar = tf.keras.backend.variable(np.random.random((2,3))) +>>> kvar_ones = tf.keras.backend.ones_like(kvar) +>>> tf.keras.backend.eval(kvar_ones) +array([[1., 1., 1.], + [1., 1., 1.]], dtype=float32) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/permute_dimensions.md b/site/en/api_docs/python/tf/keras/backend/permute_dimensions.md new file mode 100644 index 00000000000..47e6504a482 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/permute_dimensions.md @@ -0,0 +1,102 @@ +description: Permutes axes in a tensor. + +
+ + +
+ +# tf.keras.backend.permute_dimensions + + + + + + + + + +Permutes axes in a tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`pattern` + +A tuple of +dimension indices, e.g. `(0, 2, 1)`. +
+ + + + + + + + + + + +
+A tensor. +
+ + + +#### Example: + + +``` +>>> a = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]) +>>> a + +>>> tf.keras.backend.permute_dimensions(a, pattern=(1, 0)) + +``` diff --git a/site/en/api_docs/python/tf/keras/backend/placeholder.md b/site/en/api_docs/python/tf/keras/backend/placeholder.md new file mode 100644 index 00000000000..4b60b1a158f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/placeholder.md @@ -0,0 +1,152 @@ +description: Instantiates a placeholder tensor and returns it. + +
+ + +
+ +# tf.keras.backend.placeholder + + + + + + + + + +Instantiates a placeholder tensor and returns it. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +Shape of the placeholder +(integer tuple, may include `None` entries). +
+`ndim` + +Number of axes of the tensor. +At least one of {`shape`, `ndim`} must be specified. +If both are specified, `shape` is used. +
+`dtype` + +Placeholder type. +
+`sparse` + +Boolean, whether the placeholder should have a sparse type. +
+`name` + +Optional name string for the placeholder. +
+`ragged` + +Boolean, whether the placeholder should have a ragged type. +In this case, values of 'None' in the 'shape' argument represent +ragged dimensions. For more information about RaggedTensors, see this +[guide](https://www.tensorflow.org/guide/ragged_tensors). +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If called with eager execution +
+`ValueError` + +If called with sparse = True and ragged = True. +
+ + + + + + + + + + + +
+Tensor instance (with Keras metadata included). +
+ + + +#### Examples: + + + + +``` +>>> input_ph = tf.keras.backend.placeholder(shape=(2, 4, 5)) +>>> input_ph + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/pool2d.md b/site/en/api_docs/python/tf/keras/backend/pool2d.md new file mode 100644 index 00000000000..1787d4f40f2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/pool2d.md @@ -0,0 +1,149 @@ +description: 2D Pooling. + +
+ + +
+ +# tf.keras.backend.pool2d + + + + + + + + + +2D Pooling. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`pool_size` + +tuple of 2 integers. +
+`strides` + +tuple of 2 integers. +
+`padding` + +string, `"same"` or `"valid"`. +
+`data_format` + +string, `"channels_last"` or `"channels_first"`. +
+`pool_mode` + +string, `"max"` or `"avg"`. +
+ + + + + + + + + + + +
+A tensor, result of 2D pooling. +
+ + + + + + + + + + + + + + + + + + + + + +
+`ValueError`
+
+if `data_format` is neither `"channels_last"` nor
+`"channels_first"`.
+
+`ValueError` + +if `pool_size` is not a tuple of 2 integers. +
+`ValueError` + +if `strides` is not a tuple of 2 integers. +
+`ValueError`
+
+if `pool_mode` is neither `"max"` nor
+`"avg"`.
+
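+
+
+#### Example:
+
+A brief illustrative sketch, assuming the default `channels_last` image data
+format:
+
+```
+>>> x = tf.ones((1, 4, 4, 1))  # (batch, rows, cols, channels)
+>>> y = tf.keras.backend.pool2d(x, pool_size=(2, 2), strides=(2, 2),
+...                             padding='valid', pool_mode='max')
+>>> tf.keras.backend.int_shape(y)
+(1, 2, 2, 1)
+```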
+ diff --git a/site/en/api_docs/python/tf/keras/backend/pool3d.md b/site/en/api_docs/python/tf/keras/backend/pool3d.md new file mode 100644 index 00000000000..bbf522f5b12 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/pool3d.md @@ -0,0 +1,136 @@ +description: 3D Pooling. + +
+ + +
+ +# tf.keras.backend.pool3d + + + + + + + + + +3D Pooling. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`pool_size` + +tuple of 3 integers. +
+`strides` + +tuple of 3 integers. +
+`padding` + +string, `"same"` or `"valid"`. +
+`data_format` + +string, `"channels_last"` or `"channels_first"`. +
+`pool_mode` + +string, `"max"` or `"avg"`. +
+ + + + + + + + + + + +
+A tensor, result of 3D pooling. +
+ + + + + + + + + + + + + + + +
+`ValueError`
+
+if `data_format` is neither `"channels_last"` nor
+`"channels_first"`.
+
+`ValueError`
+
+if `pool_mode` is neither `"max"` nor
+`"avg"`.
+
+ diff --git a/site/en/api_docs/python/tf/keras/backend/pow.md b/site/en/api_docs/python/tf/keras/backend/pow.md new file mode 100644 index 00000000000..74dc77f9fe5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/pow.md @@ -0,0 +1,82 @@ +description: Element-wise exponentiation. + +
+ + +
+ +# tf.keras.backend.pow + + + + + + + + + +Element-wise exponentiation. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`a` + +Python integer. +
+ + + + + + + + + + + +
+A tensor. +
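+
+
+#### Example:
+
+A minimal illustrative sketch:
+
+```
+>>> x = tf.constant([1., 2., 3.])
+>>> y = tf.keras.backend.pow(x, 2)
+>>> # tf.keras.backend.eval(y) -> [1., 4., 9.]
+```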
+ diff --git a/site/en/api_docs/python/tf/keras/backend/print_tensor.md b/site/en/api_docs/python/tf/keras/backend/print_tensor.md new file mode 100644 index 00000000000..171240c236c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/print_tensor.md @@ -0,0 +1,97 @@ +description: Prints message and the tensor value when evaluated. + +
+ + +
+ +# tf.keras.backend.print_tensor + + + + + + + + + +Prints `message` and the tensor value when evaluated. + + + + + + + + + +Note that `print_tensor` returns a new tensor identical to `x` +which should be used in the following code. Otherwise the +print operation is not taken into account during evaluation. + +#### Example: + + + +``` +>>> x = tf.constant([[1.0, 2.0], [3.0, 4.0]]) +>>> tf.keras.backend.print_tensor(x) + +``` + + + + + + + + + + + + + +
+`x` + +Tensor to print. +
+`message` + +Message to print jointly with the tensor. +
+ + + + + + + + + + + +
+The same tensor `x`, unchanged. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/prod.md b/site/en/api_docs/python/tf/keras/backend/prod.md new file mode 100644 index 00000000000..990f48daccb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/prod.md @@ -0,0 +1,92 @@ +description: Multiplies the values in a tensor, alongside the specified axis. + +
+ + +
+ +# tf.keras.backend.prod + + + + + + + + + +Multiplies the values in a tensor, alongside the specified axis. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +An integer, the axis to compute the product. +
+`keepdims` + +A boolean, whether to keep the dimensions or not. +If `keepdims` is `False`, the rank of the tensor is reduced +by 1. If `keepdims` is `True`, +the reduced dimension is retained with length 1. +
+ + + + + + + + + + + +
+A tensor with the product of elements of `x`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/random_binomial.md b/site/en/api_docs/python/tf/keras/backend/random_binomial.md new file mode 100644 index 00000000000..9efc3c5a14f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/random_binomial.md @@ -0,0 +1,99 @@ +description: Returns a tensor with random binomial distribution of values. + +
+ + +
+ +# tf.keras.backend.random_binomial + + + + + + + + + +Returns a tensor with random binomial distribution of values. + + + + + + + + + +The binomial distribution with parameters `n` and `p` is the probability +distribution of the number of successful Bernoulli process. Only supports +`n` = 1 for now. + + + + + + + + + + + + + + + + + + + +
+`shape` + +A tuple of integers, the shape of tensor to create. +
+`p` + +A float, `0. <= p <= 1`, probability of binomial distribution. +
+`dtype` + +String, dtype of returned tensor. +
+`seed` + +Integer, random seed. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/random_normal.md b/site/en/api_docs/python/tf/keras/backend/random_normal.md new file mode 100644 index 00000000000..649cc03992f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/random_normal.md @@ -0,0 +1,108 @@ +description: Returns a tensor with normal distribution of values. + +
+ + +
+ +# tf.keras.backend.random_normal + + + + + + + + + +Returns a tensor with normal distribution of values. + + + + + + + + + +It is an alias to tf.random.normal. + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A tuple of integers, the shape of tensor to create. +
+`mean` + +A float, the mean value of the normal distribution to draw samples. +Default to 0.0. +
+`stddev` + +A float, the standard deviation of the normal distribution +to draw samples. Default to 1.0. +
+`dtype` + +tf.dtypes.DType, dtype of returned tensor. Default to use Keras +backend dtype which is float32. +
+`seed` + +Integer, random seed. Will use a random numpy integer when not +specified. +
+ + + + + + + + + + + +
+A tensor with normal distribution of values. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/random_normal_variable.md b/site/en/api_docs/python/tf/keras/backend/random_normal_variable.md new file mode 100644 index 00000000000..71d86f9be3b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/random_normal_variable.md @@ -0,0 +1,122 @@ +description: Instantiates a variable with values drawn from a normal distribution. + +
+ + +
+ +# tf.keras.backend.random_normal_variable + + + + + + + + + +Instantiates a variable with values drawn from a normal distribution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +Tuple of integers, shape of returned Keras variable. +
+`mean` + +Float, mean of the normal distribution. +
+`scale` + +Float, standard deviation of the normal distribution. +
+`dtype` + +String, dtype of returned Keras variable. +
+`name` + +String, name of returned Keras variable. +
+`seed` + +Integer, random seed. +
+ + + + + + + + + + + +
+A Keras variable, filled with drawn samples. +
+ + + +#### Example: + + + +``` +>>> kvar = tf.keras.backend.random_normal_variable((2,3), 0, 1) +>>> kvar + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/random_uniform.md b/site/en/api_docs/python/tf/keras/backend/random_uniform.md new file mode 100644 index 00000000000..78f8bd78aff --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/random_uniform.md @@ -0,0 +1,105 @@ +description: Returns a tensor with uniform distribution of values. + +
+ + +
+ +# tf.keras.backend.random_uniform + + + + + + + + + +Returns a tensor with uniform distribution of values. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A tuple of integers, the shape of tensor to create. +
+`minval` + +A float, lower boundary of the uniform distribution +to draw samples. +
+`maxval` + +A float, upper boundary of the uniform distribution +to draw samples. +
+`dtype` + +String, dtype of returned tensor. +
+`seed` + +Integer, random seed. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/random_uniform_variable.md b/site/en/api_docs/python/tf/keras/backend/random_uniform_variable.md new file mode 100644 index 00000000000..b4523b34827 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/random_uniform_variable.md @@ -0,0 +1,122 @@ +description: Instantiates a variable with values drawn from a uniform distribution. + +
+ + +
+ +# tf.keras.backend.random_uniform_variable + + + + + + + + + +Instantiates a variable with values drawn from a uniform distribution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +Tuple of integers, shape of returned Keras variable. +
+`low` + +Float, lower boundary of the output interval. +
+`high` + +Float, upper boundary of the output interval. +
+`dtype` + +String, dtype of returned Keras variable. +
+`name` + +String, name of returned Keras variable. +
+`seed` + +Integer, random seed. +
+ + + + + + + + + + + +
+A Keras variable, filled with drawn samples. +
+ + + +#### Example: + + + +``` +>>> kvar = tf.keras.backend.random_uniform_variable((2,3), 0, 1) +>>> kvar + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/relu.md b/site/en/api_docs/python/tf/keras/backend/relu.md new file mode 100644 index 00000000000..766c17bace5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/relu.md @@ -0,0 +1,102 @@ +description: Rectified linear unit. + +
+ + +
+ +# tf.keras.backend.relu + + + + + + + + + +Rectified linear unit. + + + + + + + + + +With default values, it returns element-wise `max(x, 0)`. + +Otherwise, it follows: +`f(x) = max_value` for `x >= max_value`, +`f(x) = x` for `threshold <= x < max_value`, +`f(x) = alpha * (x - threshold)` otherwise. + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`alpha` + +A scalar, slope of negative section (default=`0.`). +
+`max_value` + +float. Saturation threshold. +
+`threshold` + +float. Threshold value for thresholded activation. +
+ + + + + + + + + + + +
+A tensor. +
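+
+A short sketch (added for illustration, assuming TF 2.x eager execution) showing how `alpha`, `max_value` and `threshold` interact:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-3., -1., 0., 2., 6.])
+# Negative part scaled by alpha=0.1, positive part clipped at max_value=4.
+y = tf.keras.backend.relu(x, alpha=0.1, max_value=4., threshold=0.)
+print(y.numpy())  # approximately [-0.3 -0.1  0.   2.   4. ]
+```
+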
+ diff --git a/site/en/api_docs/python/tf/keras/backend/repeat.md b/site/en/api_docs/python/tf/keras/backend/repeat.md new file mode 100644 index 00000000000..5cd77195d5d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/repeat.md @@ -0,0 +1,102 @@ +description: Repeats a 2D tensor. + +
+ + +
+ +# tf.keras.backend.repeat + + + + + + + + + +Repeats a 2D tensor. + + + + + + + + + +if `x` has shape (samples, dim) and `n` is `2`, +the output will have shape `(samples, 2, dim)`. + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`n` + +Python integer, number of times to repeat. +
+ + + + + + + + + + + +
+A tensor. +
+ + + +#### Example: + + +``` +>>> b = tf.constant([[1, 2], [3, 4]]) +>>> b + +>>> tf.keras.backend.repeat(b, n=2) + +``` diff --git a/site/en/api_docs/python/tf/keras/backend/repeat_elements.md b/site/en/api_docs/python/tf/keras/backend/repeat_elements.md new file mode 100644 index 00000000000..923d136116a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/repeat_elements.md @@ -0,0 +1,102 @@ +description: Repeats the elements of a tensor along an axis, like np.repeat. + +
+ + +
+ +# tf.keras.backend.repeat_elements + + + + + + + + + +Repeats the elements of a tensor along an axis, like `np.repeat`. + + + + + + + + + +If `x` has shape `(s1, s2, s3)` and `axis` is `1`, the output +will have shape `(s1, s2 * rep, s3)`. + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`rep` + +Python integer, number of times to repeat. +
+`axis` + +Axis along which to repeat. +
+ + + + + + + + + + + +
+A tensor. +
+ + + +#### Example: + + +``` +>>> b = tf.constant([1, 2, 3]) +>>> tf.keras.backend.repeat_elements(b, rep=2, axis=0) + +``` diff --git a/site/en/api_docs/python/tf/keras/backend/reset_uids.md b/site/en/api_docs/python/tf/keras/backend/reset_uids.md new file mode 100644 index 00000000000..9edce20cd30 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/reset_uids.md @@ -0,0 +1,43 @@ +description: Resets graph identifiers. + +
+ + +
+ +# tf.keras.backend.reset_uids + + + + + + + + + +Resets graph identifiers. + + + + + + + + + \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/reshape.md b/site/en/api_docs/python/tf/keras/backend/reshape.md new file mode 100644 index 00000000000..81bd196259c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/reshape.md @@ -0,0 +1,100 @@ +description: Reshapes a tensor to the specified shape. + +
+ + +
+ +# tf.keras.backend.reshape + + + + + + + + + +Reshapes a tensor to the specified shape. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`shape` + +Target shape tuple. +
+ + + + + + + + + + + +
+A tensor. +
+ + + +#### Example: + + +``` +>>> a = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]) +>>> a + +>>> tf.keras.backend.reshape(a, shape=(2, 6)) + +``` diff --git a/site/en/api_docs/python/tf/keras/backend/resize_images.md b/site/en/api_docs/python/tf/keras/backend/resize_images.md new file mode 100644 index 00000000000..581be08a0c5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/resize_images.md @@ -0,0 +1,121 @@ +description: Resizes the images contained in a 4D tensor. + +
+ + +
+ +# tf.keras.backend.resize_images + + + + + + + + + +Resizes the images contained in a 4D tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable to resize. +
+`height_factor` + +Positive integer. +
+`width_factor` + +Positive integer. +
+`data_format` + +One of `"channels_first"`, `"channels_last"`. +
+`interpolation` + +A string, one of `nearest` or `bilinear`. +
+ + + + + + + + + + + +
+A tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +in case of incorrect value for +`data_format` or `interpolation`. +
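+
+An illustrative sketch (not from the generated page; assumes TF 2.x eager execution) of the factor-based resizing:
+
+```python
+import tensorflow as tf
+
+x = tf.ones((1, 2, 2, 3))  # one 2x2 image with 3 channels (channels_last)
+y = tf.keras.backend.resize_images(x, height_factor=2, width_factor=2,
+                                    data_format='channels_last',
+                                    interpolation='nearest')
+print(y.shape)  # (1, 4, 4, 3)
+```
+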
+ diff --git a/site/en/api_docs/python/tf/keras/backend/resize_volumes.md b/site/en/api_docs/python/tf/keras/backend/resize_volumes.md new file mode 100644 index 00000000000..af71940829e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/resize_volumes.md @@ -0,0 +1,121 @@ +description: Resizes the volume contained in a 5D tensor. + +
+ + +
+ +# tf.keras.backend.resize_volumes + + + + + + + + + +Resizes the volume contained in a 5D tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable to resize. +
+`depth_factor` + +Positive integer. +
+`height_factor` + +Positive integer. +
+`width_factor` + +Positive integer. +
+`data_format` + +One of `"channels_first"`, `"channels_last"`. +
+ + + + + + + + + + + +
+A tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if `data_format` is neither `channels_last` nor `channels_first`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/reverse.md b/site/en/api_docs/python/tf/keras/backend/reverse.md new file mode 100644 index 00000000000..59181530871 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/reverse.md @@ -0,0 +1,83 @@ +description: Reverse a tensor along the specified axes. + +
+ + +
+ +# tf.keras.backend.reverse + + + + + + + + + +Reverse a tensor along the specified axes. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor to reverse. +
+`axes` + +Integer or iterable of integers. +Axes to reverse. +
+ + + + + + + + + + + +
+A tensor. +
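+
+A small usage sketch (illustrative only, TF 2.x eager execution):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2, 3],
+                 [4, 5, 6]])
+print(tf.keras.backend.reverse(x, axes=1).numpy())
+# [[3 2 1]
+#  [6 5 4]]
+```
+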
+ diff --git a/site/en/api_docs/python/tf/keras/backend/rnn.md b/site/en/api_docs/python/tf/keras/backend/rnn.md new file mode 100644 index 00000000000..4a9c401e019 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/rnn.md @@ -0,0 +1,208 @@ +description: Iterates over the time dimension of a tensor. + +
+ + +
+ +# tf.keras.backend.rnn + + + + + + + + + +Iterates over the time dimension of a tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`step_function` + +RNN step function. +Args: +input: Tensor with shape `(samples, ...)` (no time dimension), representing input for the batch of samples at a certain time step. +states: List of tensors. +Returns: +output: Tensor with shape `(samples, output_dim)` (no time dimension). +new_states: List of tensors, same length and shapes as `states`. The first state in the list must be the output tensor at the previous timestep. +
+`inputs` + +Tensor of temporal data of shape `(samples, time, ...)` (at least 3D), or nested tensors, each of which has shape `(samples, time, ...)`. +
+`initial_states` + +Tensor with shape `(samples, state_size)` +(no time dimension), containing the initial values for the states used +in the step function. In the case that state_size is in a nested +shape, the shape of initial_states will also follow the nested +structure. +
+`go_backwards` + +Boolean. If True, do the iteration over the time +dimension in reverse order and return the reversed sequence. +
+`mask` + +Binary tensor with shape `(samples, time, 1)`, +with a zero for every element that is masked. +
+`constants` + +List of constant values passed at each step. +
+`unroll` + +Whether to unroll the RNN or to use a symbolic `while_loop`. +
+`input_length` + +An integer or a 1-D Tensor, depending on whether +the time dimension is fixed-length or not. In case of variable length +input, it is used for masking in case there's no mask specified. +
+`time_major` + +Boolean. If true, the inputs and outputs will be in shape +`(timesteps, batch, ...)`, whereas in the False case, it will be +`(batch, timesteps, ...)`. Using `time_major = True` is a bit more +efficient because it avoids transposes at the beginning and end of the +RNN calculation. However, most TensorFlow data is batch-major, so by +default this function accepts input and emits output in batch-major +form. +
+`zero_output_for_mask` + +Boolean. If True, the output for masked timestep +will be zeros, whereas in the False case, output from previous +timestep is returned. +
+ + + + + + + + + + + +
+A tuple, `(last_output, outputs, new_states)`. +last_output: the latest output of the rnn, of shape `(samples, ...)` +outputs: tensor with shape `(samples, time, ...)` where each +entry `outputs[s, t]` is the output of the step function +at time `t` for sample `s`. +new_states: list of tensors, latest states returned by +the step function, of shape `(samples, ...)`. +
+ + + + + + + + + + + + + + + + + + +
+`ValueError` + +if input dimension is less than 3. +
+`ValueError` + +if `unroll` is `True` but input timestep is not a fixed +number. +
+`ValueError` + +if `mask` is provided (not `None`) but states is not provided +(`len(states)` == 0). +
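+
+As a minimal sketch of the `step_function` contract (added for illustration; a running sum over the time axis, assuming TF 2.x eager execution):
+
+```python
+import tensorflow as tf
+
+inputs = tf.ones((4, 5, 3))          # (samples, time, features)
+initial_states = [tf.zeros((4, 3))]  # one state per sample, same shape as the step output
+
+def step(inp, states):
+    out = inp + states[0]            # accumulate across timesteps
+    return out, [out]
+
+last_output, outputs, new_states = tf.keras.backend.rnn(step, inputs, initial_states)
+print(last_output.shape, outputs.shape)  # (4, 3) (4, 5, 3)
+```
+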
+ diff --git a/site/en/api_docs/python/tf/keras/backend/round.md b/site/en/api_docs/python/tf/keras/backend/round.md new file mode 100644 index 00000000000..a699eb917ca --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/round.md @@ -0,0 +1,76 @@ +description: Element-wise rounding to the closest integer. + +
+ + +
+ +# tf.keras.backend.round + + + + + + + + + +Element-wise rounding to the closest integer. + + + + + + + + + +In case of tie, the rounding mode used is "half to even". + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
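+
+A one-line sketch (illustrative, TF 2.x eager execution) of the "half to even" tie-breaking:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.5, 1.5, 2.5, -0.5])
+print(tf.keras.backend.round(x).numpy())  # [ 0.  2.  2. -0.] -- ties round to the nearest even value
+```
+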
+ diff --git a/site/en/api_docs/python/tf/keras/backend/separable_conv2d.md b/site/en/api_docs/python/tf/keras/backend/separable_conv2d.md new file mode 100644 index 00000000000..aa1f6846a5f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/separable_conv2d.md @@ -0,0 +1,144 @@ +description: 2D convolution with separable filters. + +
+ + +
+ +# tf.keras.backend.separable_conv2d + + + + + + + + + +2D convolution with separable filters. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +input tensor +
+`depthwise_kernel` + +convolution kernel for the depthwise convolution. +
+`pointwise_kernel` + +kernel for the 1x1 convolution. +
+`strides` + +strides tuple (length 2). +
+`padding` + +string, `"same"` or `"valid"`. +
+`data_format` + +string, `"channels_last"` or `"channels_first"`. +
+`dilation_rate` + +tuple of integers, +dilation rates for the separable convolution. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `data_format` is neither `channels_last` nor `channels_first`. +
+`ValueError` + +if `strides` is not a tuple of 2 integers. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/set_epsilon.md b/site/en/api_docs/python/tf/keras/backend/set_epsilon.md new file mode 100644 index 00000000000..a40aa0f76cd --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/set_epsilon.md @@ -0,0 +1,72 @@ +description: Sets the value of the fuzz factor used in numeric expressions. + +
+ + +
+ +# tf.keras.backend.set_epsilon + + + + + + + + + +Sets the value of the fuzz factor used in numeric expressions. + + + + + + + + + + + + + + + + + + + +
+`value` + +float. New value of epsilon. +
+ + + +#### Example: + + +``` +>>> tf.keras.backend.epsilon() +1e-07 +>>> tf.keras.backend.set_epsilon(1e-5) +>>> tf.keras.backend.epsilon() +1e-05 +>>> tf.keras.backend.set_epsilon(1e-7) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/set_floatx.md b/site/en/api_docs/python/tf/keras/backend/set_floatx.md new file mode 100644 index 00000000000..4dca2b33cfe --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/set_floatx.md @@ -0,0 +1,94 @@ +description: Sets the default float type. +
+ + +
+ +# tf.keras.backend.set_floatx + + + + + + + + + +Sets the default float type. + + + + + + + + + +Note: It is not recommended to set this to float16 for training, as this will +likely cause numeric stability issues. Instead, mixed precision, which is +using a mix of float16 and float32, can be used by calling +`tf.keras.mixed_precision.experimental.set_policy('mixed_float16')`. See the +[mixed precision +guide](https://www.tensorflow.org/guide/keras/mixed_precision) for details. + + + + + + + + + + +
+`value` + +String; `'float16'`, `'float32'`, or `'float64'`. +
+ + + +#### Example: + + +``` +>>> tf.keras.backend.floatx() +'float32' +>>> tf.keras.backend.set_floatx('float64') +>>> tf.keras.backend.floatx() +'float64' +>>> tf.keras.backend.set_floatx('float32') +``` + + + + + + + + + +
+`ValueError` + +In case of invalid value. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/set_image_data_format.md b/site/en/api_docs/python/tf/keras/backend/set_image_data_format.md new file mode 100644 index 00000000000..e60f88925c9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/set_image_data_format.md @@ -0,0 +1,88 @@ +description: Sets the value of the image data format convention. + +
+ + +
+ +# tf.keras.backend.set_image_data_format + + + + + + + + + +Sets the value of the image data format convention. + + + + + + + + + + + + + + + + + + + +
+`data_format` + +string. `'channels_first'` or `'channels_last'`. +
+ + + +#### Example: + + +``` +>>> tf.keras.backend.image_data_format() +'channels_last' +>>> tf.keras.backend.set_image_data_format('channels_first') +>>> tf.keras.backend.image_data_format() +'channels_first' +>>> tf.keras.backend.set_image_data_format('channels_last') +``` + + + + + + + + +
+`ValueError` + +In case of invalid `data_format` value. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/set_learning_phase.md b/site/en/api_docs/python/tf/keras/backend/set_learning_phase.md new file mode 100644 index 00000000000..607ab53b4a0 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/set_learning_phase.md @@ -0,0 +1,79 @@ +description: Sets the learning phase to a fixed value. + +
+ + +
+ +# tf.keras.backend.set_learning_phase + + + + + + + + + +Sets the learning phase to a fixed value. + + + + + + + + + + + + + + + + + + + +
+`value` + +Learning phase value, either 0 or 1 (integers). +0 = test, 1 = train +
+ + + + + + + + + + + + +
+`ValueError` + +if `value` is neither `0` nor `1`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/set_value.md b/site/en/api_docs/python/tf/keras/backend/set_value.md new file mode 100644 index 00000000000..d3aa456a19b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/set_value.md @@ -0,0 +1,110 @@ +description: Sets the value of a variable, from a Numpy array. + +
+ + +
+ +# tf.keras.backend.set_value + + + + + + + + + +Sets the value of a variable, from a Numpy array. + + + + + + + + + +backend.set_value is the complement of backend.get_value, and provides +a generic interface for assigning to variables while abstracting away the +differences between TensorFlow 1.x and 2.x semantics. + +``` +>>> K = tf.keras.backend # Common keras convention +>>> v = K.variable(1.) +``` + +``` +>>> # reassign +>>> K.set_value(v, 2.) +>>> print(K.get_value(v)) +2.0 +``` + +``` +>>> # increment +>>> K.set_value(v, K.get_value(v) + 1) +>>> print(K.get_value(v)) +3.0 +``` + +Variable semantics in TensorFlow 2 are eager execution friendly. The above +code is roughly equivalent to: + +``` +>>> v = tf.Variable(1.) +``` + +``` +>>> _ = v.assign(2.) +>>> print(v.numpy()) +2.0 +``` + +``` +>>> _ = v.assign_add(1.) +>>> print(v.numpy()) +3.0 +``` + + + + + + + + + + + + + 
+`x` + +Variable to set to a new value. +
+`value` + +Value to set the tensor to, as a Numpy array +(of the same shape). +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/shape.md b/site/en/api_docs/python/tf/keras/backend/shape.md new file mode 100644 index 00000000000..2af2c88f95f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/shape.md @@ -0,0 +1,90 @@ +description: Returns the symbolic shape of a tensor or variable. + +
+ + +
+ +# tf.keras.backend.shape + + + + + + + + + +Returns the symbolic shape of a tensor or variable. + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+ + + + + + + + + + + +
+A symbolic shape (which is itself a tensor). +
+ + + +#### Examples: + + + +``` +>>> val = np.array([[1, 2], [3, 4]]) +>>> kvar = tf.keras.backend.variable(value=val) +>>> tf.keras.backend.shape(kvar) + +>>> input = tf.keras.backend.placeholder(shape=(2, 4, 5)) +>>> tf.keras.backend.shape(input) + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/sigmoid.md b/site/en/api_docs/python/tf/keras/backend/sigmoid.md new file mode 100644 index 00000000000..bee5a545432 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/sigmoid.md @@ -0,0 +1,75 @@ +description: Element-wise sigmoid. + +
+ + +
+ +# tf.keras.backend.sigmoid + + + + + + + + + +Element-wise sigmoid. + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/sign.md b/site/en/api_docs/python/tf/keras/backend/sign.md new file mode 100644 index 00000000000..0b60c790165 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/sign.md @@ -0,0 +1,75 @@ +description: Element-wise sign. + +
+ + +
+ +# tf.keras.backend.sign + + + + + + + + + +Element-wise sign. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/sin.md b/site/en/api_docs/python/tf/keras/backend/sin.md new file mode 100644 index 00000000000..d9214291032 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/sin.md @@ -0,0 +1,75 @@ +description: Computes sin of x element-wise. + +
+ + +
+ +# tf.keras.backend.sin + + + + + + + + + +Computes sin of x element-wise. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/softmax.md b/site/en/api_docs/python/tf/keras/backend/softmax.md new file mode 100644 index 00000000000..ce2ce681f7f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/softmax.md @@ -0,0 +1,83 @@ +description: Softmax of a tensor. + +
+ + +
+ +# tf.keras.backend.softmax + + + + + + + + + +Softmax of a tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +The dimension softmax would be performed on. +The default is -1 which indicates the last dimension. +
+ + + + + + + + + + + +
+A tensor. +
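+
+A short sketch (illustrative, TF 2.x eager execution) showing softmax over the last axis:
+
+```python
+import tensorflow as tf
+
+logits = tf.constant([[1., 2., 3.],
+                      [1., 1., 1.]])
+probs = tf.keras.backend.softmax(logits, axis=-1)
+print(probs.shape)                            # (2, 3)
+print(tf.reduce_sum(probs, axis=-1).numpy())  # each row sums to ~1.0
+```
+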
+ diff --git a/site/en/api_docs/python/tf/keras/backend/softplus.md b/site/en/api_docs/python/tf/keras/backend/softplus.md new file mode 100644 index 00000000000..df43fcdb290 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/softplus.md @@ -0,0 +1,75 @@ +description: Softplus of a tensor. + +
+ + +
+ +# tf.keras.backend.softplus + + + + + + + + + +Softplus of a tensor. + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/softsign.md b/site/en/api_docs/python/tf/keras/backend/softsign.md new file mode 100644 index 00000000000..0974a2f7941 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/softsign.md @@ -0,0 +1,75 @@ +description: Softsign of a tensor. + +
+ + +
+ +# tf.keras.backend.softsign + + + + + + + + + +Softsign of a tensor. + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/sparse_categorical_crossentropy.md b/site/en/api_docs/python/tf/keras/backend/sparse_categorical_crossentropy.md new file mode 100644 index 00000000000..61ee3927e1d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/sparse_categorical_crossentropy.md @@ -0,0 +1,118 @@ +description: Categorical crossentropy with integer targets. + +
+ + +
+ +# tf.keras.backend.sparse_categorical_crossentropy + + + + + + + + + +Categorical crossentropy with integer targets. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`target` + +An integer tensor. +
+`output` + +A tensor resulting from a softmax +(unless `from_logits` is True, in which +case `output` is expected to be the logits). +
+`from_logits` + +Boolean, whether `output` is the +result of a softmax, or is a tensor of logits. +
+`axis` + +Int specifying the channels axis. `axis=-1` corresponds to data format `channels_last`, and `axis=1` corresponds to data format `channels_first`. +
+ + + + + + + + + + + +
+Output tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if `axis` is neither -1 nor one of the axes of `output`. +
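+
+An illustrative sketch (not part of the generated page, TF 2.x eager execution) with integer labels and raw logits:
+
+```python
+import tensorflow as tf
+
+target = tf.constant([0, 2])               # integer class labels
+logits = tf.constant([[2.0, 0.5, 0.5],
+                      [0.5, 0.5, 2.0]])
+loss = tf.keras.backend.sparse_categorical_crossentropy(target, logits, from_logits=True)
+print(loss.shape)  # (2,) -- one loss value per sample
+```
+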
+ diff --git a/site/en/api_docs/python/tf/keras/backend/spatial_2d_padding.md b/site/en/api_docs/python/tf/keras/backend/spatial_2d_padding.md new file mode 100644 index 00000000000..04d0eeee771 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/spatial_2d_padding.md @@ -0,0 +1,107 @@ +description: Pads the 2nd and 3rd dimensions of a 4D tensor. + +
+ + +
+ +# tf.keras.backend.spatial_2d_padding + + + + + + + + + +Pads the 2nd and 3rd dimensions of a 4D tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`padding` + +Tuple of 2 tuples, padding pattern. +
+`data_format` + +One of `channels_last` or `channels_first`. +
+ + + + + + + + + + + +
+A padded 4D tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if `data_format` is neither `channels_last` nor `channels_first`. +
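+
+A minimal sketch (illustrative, TF 2.x eager execution) of the padding pattern:
+
+```python
+import tensorflow as tf
+
+x = tf.ones((1, 2, 2, 3))  # (batch, rows, cols, channels) for channels_last
+y = tf.keras.backend.spatial_2d_padding(x, padding=((1, 1), (2, 2)),
+                                         data_format='channels_last')
+print(y.shape)  # (1, 4, 6, 3)
+```
+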
+ diff --git a/site/en/api_docs/python/tf/keras/backend/spatial_3d_padding.md b/site/en/api_docs/python/tf/keras/backend/spatial_3d_padding.md new file mode 100644 index 00000000000..4976ff6a3f1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/spatial_3d_padding.md @@ -0,0 +1,114 @@ +description: Pads 5D tensor with zeros along the depth, height, width dimensions. + +
+ + +
+ +# tf.keras.backend.spatial_3d_padding + + + + + + + + + +Pads 5D tensor with zeros along the depth, height, width dimensions. + + + + + + + + + +Pads these dimensions with respectively +"padding[0]", "padding[1]" and "padding[2]" zeros left and right. + +For 'channels_last' data_format, +the 2nd, 3rd and 4th dimension will be padded. +For 'channels_first' data_format, +the 3rd, 4th and 5th dimension will be padded. + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`padding` + +Tuple of 3 tuples, padding pattern. +
+`data_format` + +One of `channels_last` or `channels_first`. +
+ + + + + + + + + + + +
+A padded 5D tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +if `data_format` is neither `channels_last` nor `channels_first`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/sqrt.md b/site/en/api_docs/python/tf/keras/backend/sqrt.md new file mode 100644 index 00000000000..d10a4efbe49 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/sqrt.md @@ -0,0 +1,75 @@ +description: Element-wise square root. + +
+ + +
+ +# tf.keras.backend.sqrt + + + + + + + + + +Element-wise square root. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/square.md b/site/en/api_docs/python/tf/keras/backend/square.md new file mode 100644 index 00000000000..223e17f1f05 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/square.md @@ -0,0 +1,75 @@ +description: Element-wise square. + +
+ + +
+ +# tf.keras.backend.square + + + + + + + + + +Element-wise square. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/squeeze.md b/site/en/api_docs/python/tf/keras/backend/squeeze.md new file mode 100644 index 00000000000..8e13938002c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/squeeze.md @@ -0,0 +1,82 @@ +description: Removes a 1-dimension from the tensor at index "axis". + +
+ + +
+ +# tf.keras.backend.squeeze + + + + + + + + + +Removes a 1-dimension from the tensor at index "axis". + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +Axis to drop. +
+ + + + + + + + + + + +
+A tensor with the same data as `x` but reduced dimensions. +
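+
+A one-line sketch (illustrative, TF 2.x eager execution):
+
+```python
+import tensorflow as tf
+
+x = tf.ones((2, 1, 3))
+print(tf.keras.backend.squeeze(x, axis=1).shape)  # (2, 3)
+```
+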
+ diff --git a/site/en/api_docs/python/tf/keras/backend/stack.md b/site/en/api_docs/python/tf/keras/backend/stack.md new file mode 100644 index 00000000000..dcbc3403200 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/stack.md @@ -0,0 +1,97 @@ +description: Stacks a list of rank R tensors into a rank R+1 tensor. + +
+ + +
+ +# tf.keras.backend.stack + + + + + + + + + +Stacks a list of rank `R` tensors into a rank `R+1` tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +List of tensors. +
+`axis` + +Axis along which to perform stacking. +
+ + + + + + + + + + + +
+A tensor. +
+ + + +#### Example: + + +``` +>>> a = tf.constant([[1, 2],[3, 4]]) +>>> b = tf.constant([[10, 20],[30, 40]]) +>>> tf.keras.backend.stack((a, b)) + +``` diff --git a/site/en/api_docs/python/tf/keras/backend/std.md b/site/en/api_docs/python/tf/keras/backend/std.md new file mode 100644 index 00000000000..70f4520a9cc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/std.md @@ -0,0 +1,97 @@ +description: Standard deviation of a tensor, alongside the specified axis. + +
+ + +
+ +# tf.keras.backend.std + + + + + + + + + +Standard deviation of a tensor, alongside the specified axis. + + + + + + + + + +It is an alias to tf.math.reduce_std. + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. It should have numerical dtypes. Boolean type +inputs will be converted to float. +
+`axis` + +An integer, the axis to compute the standard deviation. If `None` +(the default), reduces all dimensions. Must be in the range +`[-rank(x), rank(x))`. +
+`keepdims` + +A boolean, whether to keep the dimensions or not. +If `keepdims` is `False`, the rank of the tensor is reduced +by 1. If `keepdims` is `True`, the reduced dimension is retained with +length 1. +
+ + + + + + + + + + + +
+A tensor with the standard deviation of elements of `x` with same dtype. +Boolean type input will be converted to float. +
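+
+A short sketch (illustrative, TF 2.x eager execution) of full and per-axis reduction:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1., 2.],
+                 [3., 4.]])
+print(tf.keras.backend.std(x).numpy())          # ~1.118034 (std over all elements)
+print(tf.keras.backend.std(x, axis=0).numpy())  # [1. 1.]
+```
+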
+ diff --git a/site/en/api_docs/python/tf/keras/backend/stop_gradient.md b/site/en/api_docs/python/tf/keras/backend/stop_gradient.md new file mode 100644 index 00000000000..28331285175 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/stop_gradient.md @@ -0,0 +1,77 @@ +description: Returns variables but with zero gradient w.r.t. every other variable. + +
+ + +
+ +# tf.keras.backend.stop_gradient + + + + + + + + + +Returns `variables` but with zero gradient w.r.t. every other variable. + + + + + + + + + + + + + + + + + + + +
+`variables` + +Tensor or list of tensors to consider constant with respect +to any other variable. +
+ + + + + + + + + + + +
+A single tensor or a list of tensors (depending on the passed argument) +that has no gradient with respect to any other variable. +
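+
+As an illustrative sketch (TF 2.x eager execution, not part of the generated page), the blocked branch contributes nothing to the gradient:
+
+```python
+import tensorflow as tf
+
+x = tf.Variable(3.0)
+with tf.GradientTape() as tape:
+    y = tf.keras.backend.stop_gradient(x) * x  # gradient flows only through the second factor
+print(tape.gradient(y, x).numpy())  # 3.0, not 2 * x = 6.0
+```
+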
+ diff --git a/site/en/api_docs/python/tf/keras/backend/sum.md b/site/en/api_docs/python/tf/keras/backend/sum.md new file mode 100644 index 00000000000..9e1d73e7cb9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/sum.md @@ -0,0 +1,92 @@ +description: Sum of the values in a tensor, alongside the specified axis. + +
+ + +
+ +# tf.keras.backend.sum + + + + + + + + + +Sum of the values in a tensor, alongside the specified axis. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +An integer, the axis to sum over. +
+`keepdims` + +A boolean, whether to keep the dimensions or not. +If `keepdims` is `False`, the rank of the tensor is reduced +by 1. If `keepdims` is `True`, +the reduced dimension is retained with length 1. +
+ + + + + + + + + + + +
+A tensor with sum of `x`. +
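+
+A short sketch (illustrative, TF 2.x eager execution) of `axis` and `keepdims`:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1., 2., 3.],
+                 [4., 5., 6.]])
+print(tf.keras.backend.sum(x).numpy())                       # 21.0
+print(tf.keras.backend.sum(x, axis=1, keepdims=True).shape)  # (2, 1)
+```
+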
+ diff --git a/site/en/api_docs/python/tf/keras/backend/switch.md b/site/en/api_docs/python/tf/keras/backend/switch.md new file mode 100644 index 00000000000..013160fa509 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/switch.md @@ -0,0 +1,108 @@ +description: Switches between two operations depending on a scalar value. + +
+ + +
+ +# tf.keras.backend.switch + + + + + + + + + +Switches between two operations depending on a scalar value. + + + + + + + + + +Note that both `then_expression` and `else_expression` +should be symbolic tensors of the *same shape*. + + + + + + + + + + + + + + + + +
+`condition` + +tensor (`int` or `bool`). +
+`then_expression` + +either a tensor, or a callable that returns a tensor. +
+`else_expression` + +either a tensor, or a callable that returns a tensor. +
+ + + + + + + + + + + +
+The selected tensor. +
+ + + + + + + + + + + + +
+`ValueError` + +If rank of `condition` is greater than rank of expressions. +
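+
+A minimal sketch (illustrative, TF 2.x eager execution) using callables for the two branches:
+
+```python
+import tensorflow as tf
+
+x = tf.constant(5.0)
+y = tf.keras.backend.switch(tf.keras.backend.greater(x, 0),
+                            lambda: x * 2,   # taken when the condition is true
+                            lambda: x - 1)
+print(y.numpy())  # 10.0
+```
+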
+ diff --git a/site/en/api_docs/python/tf/keras/backend/tanh.md b/site/en/api_docs/python/tf/keras/backend/tanh.md new file mode 100644 index 00000000000..55067b9a0c7 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/tanh.md @@ -0,0 +1,75 @@ +description: Element-wise tanh. + +
+ + +
+ +# tf.keras.backend.tanh + + + + + + + + + +Element-wise tanh. + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/temporal_padding.md b/site/en/api_docs/python/tf/keras/backend/temporal_padding.md new file mode 100644 index 00000000000..4b2a96e7dd2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/temporal_padding.md @@ -0,0 +1,83 @@ +description: Pads the middle dimension of a 3D tensor. + +
+ + +
+ +# tf.keras.backend.temporal_padding + + + + + + + + + +Pads the middle dimension of a 3D tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+`padding` + +Tuple of 2 integers, how many zeros to +add at the start and end of dim 1. +
+ + + + + + + + + + + +
+A padded 3D tensor. +
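+
+A one-line sketch (illustrative, TF 2.x eager execution):
+
+```python
+import tensorflow as tf
+
+x = tf.ones((2, 3, 4))  # (samples, timesteps, features)
+print(tf.keras.backend.temporal_padding(x, padding=(1, 2)).shape)  # (2, 6, 4)
+```
+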
+ diff --git a/site/en/api_docs/python/tf/keras/backend/tile.md b/site/en/api_docs/python/tf/keras/backend/tile.md new file mode 100644 index 00000000000..fc16beb4635 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/tile.md @@ -0,0 +1,83 @@ +description: Creates a tensor by tiling x by n. + +
+ + +
+ +# tf.keras.backend.tile + + + + + + + + + +Creates a tensor by tiling `x` by `n`. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`n` + +A list of integers. The length must be the same as the number of dimensions in `x`. +
+ + + + + + + + + + + +
+A tiled tensor. +
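+
+A short sketch (illustrative, TF 2.x eager execution):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2],
+                 [3, 4]])
+print(tf.keras.backend.tile(x, n=[2, 3]).shape)  # (4, 6)
+```
+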
+ diff --git a/site/en/api_docs/python/tf/keras/backend/to_dense.md b/site/en/api_docs/python/tf/keras/backend/to_dense.md new file mode 100644 index 00000000000..01293e70b6a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/to_dense.md @@ -0,0 +1,90 @@ +description: Converts a sparse tensor into a dense tensor and returns it. + +
+ + +
+ +# tf.keras.backend.to_dense + + + + + + + + + +Converts a sparse tensor into a dense tensor and returns it. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A tensor instance (potentially sparse). +
+ + + + + + + + + + + +
+A dense tensor. +
+ + + +#### Examples: + + + + +``` +>>> b = tf.keras.backend.placeholder((2, 2), sparse=True) +>>> print(tf.keras.backend.is_sparse(b)) +True +>>> c = tf.keras.backend.to_dense(b) +>>> print(tf.keras.backend.is_sparse(c)) +False +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/transpose.md b/site/en/api_docs/python/tf/keras/backend/transpose.md new file mode 100644 index 00000000000..1c0f05d9f56 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/transpose.md @@ -0,0 +1,98 @@ +description: Transposes a tensor and returns it. + +
+ + +
+ +# tf.keras.backend.transpose + + + + + + + + + +Transposes a tensor and returns it. + + + + + + + + + + + + + + + + + + + +
+`x` + +Tensor or variable. +
+ + + + + + + + + + + +
+A tensor. +
+ + + +#### Examples: + + + +``` +>>> var = tf.keras.backend.variable([[1, 2, 3], [4, 5, 6]]) +>>> tf.keras.backend.eval(var) +array([[1., 2., 3.], + [4., 5., 6.]], dtype=float32) +>>> var_transposed = tf.keras.backend.transpose(var) +>>> tf.keras.backend.eval(var_transposed) +array([[1., 4.], + [2., 5.], + [3., 6.]], dtype=float32) +>>> input = tf.keras.backend.placeholder((2, 3)) +>>> input + +>>> input_transposed = tf.keras.backend.transpose(input) +>>> input_transposed + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/truncated_normal.md b/site/en/api_docs/python/tf/keras/backend/truncated_normal.md new file mode 100644 index 00000000000..882fa0ad91d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/truncated_normal.md @@ -0,0 +1,107 @@ +description: Returns a tensor with truncated random normal distribution of values. + +
+ + +
+ +# tf.keras.backend.truncated_normal + + + + + + + + + +Returns a tensor with truncated random normal distribution of values. + + + + + + + + + +The generated values follow a normal distribution +with specified mean and standard deviation, +except that values whose magnitude is more than +two standard deviations from the mean are dropped and re-picked. + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A tuple of integers, the shape of tensor to create. +
+`mean` + +Mean of the values. +
+`stddev` + +Standard deviation of the values. +
+`dtype` + +String, dtype of returned tensor. +
+`seed` + +Integer, random seed. +
+ + + + + + + + + + + +
+A tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/update.md b/site/en/api_docs/python/tf/keras/backend/update.md new file mode 100644 index 00000000000..a4aa883ec41 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/update.md @@ -0,0 +1,42 @@ +
+ + +
+ +# tf.keras.backend.update + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/backend/update_add.md b/site/en/api_docs/python/tf/keras/backend/update_add.md new file mode 100644 index 00000000000..135638a8ffb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/update_add.md @@ -0,0 +1,82 @@ +description: Update the value of x by adding increment. + +
+ + +
+ +# tf.keras.backend.update_add + + + + + + + + + +Update the value of `x` by adding `increment`. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A Variable. +
+`increment` + +A tensor of same shape as `x`. +
+ + + + + + + + + + + +
+The variable `x` updated. +
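+
+An illustrative sketch (TF 2.x eager execution, not part of the generated page):
+
+```python
+import tensorflow as tf
+
+v = tf.keras.backend.variable([1., 2., 3.])
+tf.keras.backend.update_add(v, tf.constant([10., 10., 10.]))
+print(tf.keras.backend.get_value(v))  # [11. 12. 13.]
+```
+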
+ diff --git a/site/en/api_docs/python/tf/keras/backend/update_sub.md b/site/en/api_docs/python/tf/keras/backend/update_sub.md new file mode 100644 index 00000000000..50067c9a27e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/update_sub.md @@ -0,0 +1,82 @@ +description: Update the value of x by subtracting decrement. + +
+ + +
+ +# tf.keras.backend.update_sub + + + + + + + + + +Update the value of `x` by subtracting `decrement`. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A Variable. +
+`decrement` + +A tensor of same shape as `x`. +
+ + + + + + + + + + + +
+The variable `x` updated. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/var.md b/site/en/api_docs/python/tf/keras/backend/var.md new file mode 100644 index 00000000000..d37d9e4789c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/var.md @@ -0,0 +1,92 @@ +description: Variance of a tensor, alongside the specified axis. + +
+ + +
+ +# tf.keras.backend.var + + + + + + + + + +Variance of a tensor, alongside the specified axis. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A tensor or variable. +
+`axis` + +An integer, the axis to compute the variance. +
+`keepdims` + +A boolean, whether to keep the dimensions or not. +If `keepdims` is `False`, the rank of the tensor is reduced +by 1. If `keepdims` is `True`, +the reduced dimension is retained with length 1. +
+ + + + + + + + + + + +
+A tensor with the variance of elements of `x`. +
+ diff --git a/site/en/api_docs/python/tf/keras/backend/variable.md b/site/en/api_docs/python/tf/keras/backend/variable.md new file mode 100644 index 00000000000..3d10a69b430 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/variable.md @@ -0,0 +1,114 @@ +description: Instantiates a variable and returns it. + +
+ + +
+ +# tf.keras.backend.variable + + + + + + + + + +Instantiates a variable and returns it. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +Numpy array, initial value of the tensor. +
+`dtype` + +Tensor type. +
+`name` + +Optional name string for the tensor. +
+`constraint` + +Optional projection function to be +applied to the variable after an optimizer update. +
+ + + + + + + + + + + +
+A variable instance (with Keras metadata included). +
+ + + +#### Examples: + + + +``` +>>> val = np.array([[1, 2], [3, 4]]) +>>> kvar = tf.keras.backend.variable(value=val, dtype='float64', +... name='example_var') +>>> tf.keras.backend.dtype(kvar) +'float64' +>>> print(kvar) + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/zeros.md b/site/en/api_docs/python/tf/keras/backend/zeros.md new file mode 100644 index 00000000000..894bf837ed8 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/zeros.md @@ -0,0 +1,115 @@ +description: Instantiates an all-zeros variable and returns it. + +
+ + +
+ +# tf.keras.backend.zeros + + + + + + + + + +Instantiates an all-zeros variable and returns it. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +Tuple or list of integers, shape of returned Keras variable +
+`dtype` + +data type of returned Keras variable +
+`name` + +name of returned Keras variable +
+ + + + + + + + + + + +
+A variable (including Keras metadata), filled with `0.0`. +Note that if `shape` was symbolic, we cannot return a variable, +and will return a dynamically-shaped tensor instead. +
+ + + +#### Example: + + + +``` +>>> kvar = tf.keras.backend.zeros((3,4)) +>>> tf.keras.backend.eval(kvar) +array([[0., 0., 0., 0.], + [0., 0., 0., 0.], + [0., 0., 0., 0.]], dtype=float32) +>>> A = tf.constant([1,2,3]) +>>> kvar2 = tf.keras.backend.zeros(A.shape) # [0., 0., 0.] +>>> tf.keras.backend.eval(kvar2) +array([0., 0., 0.], dtype=float32) +>>> kvar3 = tf.keras.backend.zeros(A.shape,dtype=tf.int32) +>>> tf.keras.backend.eval(kvar3) +array([0, 0, 0], dtype=int32) +>>> kvar4 = tf.keras.backend.zeros([2,3]) +>>> tf.keras.backend.eval(kvar4) +array([[0., 0., 0.], + [0., 0., 0.]], dtype=float32) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/backend/zeros_like.md b/site/en/api_docs/python/tf/keras/backend/zeros_like.md new file mode 100644 index 00000000000..c9797736581 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/backend/zeros_like.md @@ -0,0 +1,102 @@ +description: Instantiates an all-zeros variable of the same shape as another tensor. + +
+ + +
+ +# tf.keras.backend.zeros_like + + + + + + + + + +Instantiates an all-zeros variable of the same shape as another tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Keras variable or Keras tensor. +
+`dtype` + +dtype of returned Keras variable. +`None` uses the dtype of `x`. +
+`name` + +name for the variable to create. +
+ + + + + + + + + + + +
+A Keras variable with the shape of `x` filled with zeros. +
+ + + +#### Example: + + +```python +import numpy as np +from tensorflow.keras import backend as K +kvar = K.variable(np.random.random((2,3))) +kvar_zeros = K.zeros_like(kvar) +K.eval(kvar_zeros) +# array([[ 0., 0., 0.], [ 0., 0., 0.]], dtype=float32) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/callbacks.md b/site/en/api_docs/python/tf/keras/callbacks.md new file mode 100644 index 00000000000..88466b31d68 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks.md @@ -0,0 +1,49 @@ +description: Callbacks: utilities called at certain points during model training. +
+ + +
+ +# Module: tf.keras.callbacks + + + + + + + + + +Callbacks: utilities called at certain points during model training. + + + +## Classes + +[`class BaseLogger`](../../tf/keras/callbacks/BaseLogger.md): Callback that accumulates epoch averages of metrics. + +[`class CSVLogger`](../../tf/keras/callbacks/CSVLogger.md): Callback that streams epoch results to a csv file. + +[`class Callback`](../../tf/keras/callbacks/Callback.md): Abstract base class used to build new callbacks. + +[`class EarlyStopping`](../../tf/keras/callbacks/EarlyStopping.md): Stop training when a monitored metric has stopped improving. + +[`class History`](../../tf/keras/callbacks/History.md): Callback that records events into a `History` object. + +[`class LambdaCallback`](../../tf/keras/callbacks/LambdaCallback.md): Callback for creating simple, custom callbacks on-the-fly. + +[`class LearningRateScheduler`](../../tf/keras/callbacks/LearningRateScheduler.md): Learning rate scheduler. + +[`class ModelCheckpoint`](../../tf/keras/callbacks/ModelCheckpoint.md): Callback to save the Keras model or model weights at some frequency. + +[`class ProgbarLogger`](../../tf/keras/callbacks/ProgbarLogger.md): Callback that prints metrics to stdout. + +[`class ReduceLROnPlateau`](../../tf/keras/callbacks/ReduceLROnPlateau.md): Reduce learning rate when a metric has stopped improving. + +[`class RemoteMonitor`](../../tf/keras/callbacks/RemoteMonitor.md): Callback used to stream events to a server. + +[`class TensorBoard`](../../tf/keras/callbacks/TensorBoard.md): Enable visualizations for TensorBoard. + +[`class TerminateOnNaN`](../../tf/keras/callbacks/TerminateOnNaN.md): Callback that terminates training when a NaN loss is encountered. + diff --git a/site/en/api_docs/python/tf/keras/callbacks/BaseLogger.md b/site/en/api_docs/python/tf/keras/callbacks/BaseLogger.md new file mode 100644 index 00000000000..eb4d370a2e8 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/BaseLogger.md @@ -0,0 +1,102 @@ +description: Callback that accumulates epoch averages of metrics. + +
+ + + + + +
+ +# tf.keras.callbacks.BaseLogger + + + + + + + + + +Callback that accumulates epoch averages of metrics. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + +This callback is automatically applied to every Keras model. + + + + + + + + + + +
+`stateful_metrics` + +Iterable of string names of metrics that +should *not* be averaged over an epoch. +Metrics in this list will be logged as-is in `on_epoch_end`. +All others will be averaged in `on_epoch_end`. +
+ + + +## Methods + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/CSVLogger.md b/site/en/api_docs/python/tf/keras/callbacks/CSVLogger.md new file mode 100644 index 00000000000..8617acb7814 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/CSVLogger.md @@ -0,0 +1,124 @@ +description: Callback that streams epoch results to a csv file. + +
+ + + + + +
+ +# tf.keras.callbacks.CSVLogger + + + + + + + + + +Callback that streams epoch results to a csv file. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + +Supports all values that can be represented as a string, +including 1D iterables such as np.ndarray. + +#### Example: + + + +```python +csv_logger = CSVLogger('training.log') +model.fit(X_train, Y_train, callbacks=[csv_logger]) +``` + + + + + + + + + + + + + + + + +
+`filename` + +filename of the csv file, e.g. 'run/log.csv'. +
+`separator` + +string used to separate elements in the csv file. +
+`append` + +True: append if file exists (useful for continuing training). False: overwrite existing file. +
+ + + +## Methods + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/Callback.md b/site/en/api_docs/python/tf/keras/callbacks/Callback.md new file mode 100644 index 00000000000..50d47a3ac21 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/Callback.md @@ -0,0 +1,680 @@ +description: Abstract base class used to build new callbacks. + +
+ + + + + + + + + + + + + + + + + + + + + +
+ +# tf.keras.callbacks.Callback + + + + + + + + + +Abstract base class used to build new callbacks. + + + + + + + + + +The `logs` dictionary that callback methods +take as argument will contain keys for quantities relevant to +the current batch or epoch. + +Currently, the `.fit()` method of the `Model` class +will include the following quantities in the `logs` that +it passes to its callbacks: + + on_epoch_end: logs include `acc` and `loss`, and + optionally include `val_loss` + (if validation is enabled in `fit`), and `val_acc` + (if validation and accuracy monitoring are enabled). + on_batch_begin: logs include `size`, + the number of samples in the current batch. + on_batch_end: logs include `loss`, and optionally `acc` + (if accuracy monitoring is enabled). + + + + + + + + + + + + + + + + + + +
+`params` + +dict. Training parameters (e.g. verbosity, batch size, number of epochs...). +
+`model` + +instance of keras.models.Model. +Reference of the model being trained. +
+`validation_data` + +Deprecated. Do not use. +
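+
+A minimal subclassing sketch (added for illustration; the class name `LossPrinter` and the `model.fit` call are hypothetical, not part of the generated page):
+
+```python
+import tensorflow as tf
+
+class LossPrinter(tf.keras.callbacks.Callback):
+    """Prints the loss reported in `logs` at the end of every epoch."""
+
+    def on_epoch_end(self, epoch, logs=None):
+        logs = logs or {}
+        print(f"epoch {epoch}: loss = {logs.get('loss')}")
+
+# Hypothetical usage:
+# model.fit(x_train, y_train, epochs=5, callbacks=[LossPrinter()])
+```
+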
+ + + +## Methods + +

on_batch_begin

+ +View source + + + +A backwards compatibility alias for `on_train_batch_begin`. + + +

on_batch_end

+ +View source + + + +A backwards compatibility alias for `on_train_batch_end`. + + +

on_epoch_begin

+ +View source + + + +Called at the start of an epoch. + +Subclasses should override for any actions to run. This function should only +be called during TRAIN mode. + + + + + + + + + + + + + +
Arguments
+`epoch` + +integer, index of epoch. +
+`logs` + +dict. Currently no data is passed to this argument for this method +but that may change in the future. +
+ + + +

on_epoch_end

+ +View source + + + +Called at the end of an epoch. + +Subclasses should override for any actions to run. This function should only +be called during TRAIN mode. + + + + + + + + + + + + + +
Arguments
+`epoch` + +integer, index of epoch. +
+`logs` + +dict, metric results for this training epoch, and for the +validation epoch if validation is performed. Validation result keys +are prefixed with `val_`. +
+ + + +

on_predict_batch_begin

+ +View source + + + +Called at the beginning of a batch in `predict` methods. + +Subclasses should override for any actions to run. + + + + + + + + + + + + + +
Arguments
+`batch` + +integer, index of batch within the current epoch. +
+`logs` + +dict. Has keys `batch` and `size` representing the current batch +number and the size of the batch. +
+ + + +

on_predict_batch_end

+ +View source + + + +Called at the end of a batch in `predict` methods. + +Subclasses should override for any actions to run. + + + + + + + + + + + + + +
Arguments
+`batch` + +integer, index of batch within the current epoch. +
+`logs` + +dict. Metric results for this batch. +
+ + + +

on_predict_begin

+ +View source + + + +Called at the beginning of prediction. + +Subclasses should override for any actions to run. + + + + + + + + + + +
Arguments
+`logs` + +dict. Currently no data is passed to this argument for this method +but that may change in the future. +
+ + + +

on_predict_end

+ +View source + + + +Called at the end of prediction. + +Subclasses should override for any actions to run. + + + + + + + + + + +
Arguments
+`logs` + +dict. Currently no data is passed to this argument for this method +but that may change in the future. +
+ + + +

on_test_batch_begin

+ +View source + + + +Called at the beginning of a batch in `evaluate` methods. + +Also called at the beginning of a validation batch in the `fit` +methods, if validation data is provided. + +Subclasses should override for any actions to run. + + + + + + + + + + + + + +
Arguments
+`batch` + +integer, index of batch within the current epoch. +
+`logs` + +dict. Has keys `batch` and `size` representing the current batch +number and the size of the batch. +
+ + + +

on_test_batch_end

+ +View source + + + +Called at the end of a batch in `evaluate` methods. + +Also called at the end of a validation batch in the `fit` +methods, if validation data is provided. + +Subclasses should override for any actions to run. + + + + + + + + + + + + + +
Arguments
+`batch` + +integer, index of batch within the current epoch. +
+`logs` + +dict. Metric results for this batch. +
+ + + +

on_test_begin

+ +View source + + + +Called at the beginning of evaluation or validation. + +Subclasses should override for any actions to run. + + + + + + + + + + +
Arguments
+`logs` + +dict. Currently no data is passed to this argument for this method +but that may change in the future. +
+ + + +

on_test_end

+ +View source + + + +Called at the end of evaluation or validation. + +Subclasses should override for any actions to run. + + + + + + + + + + +
Arguments
+`logs` + +dict. Currently no data is passed to this argument for this method +but that may change in the future. +
+ + + +

on_train_batch_begin

+ +View source + + + +Called at the beginning of a training batch in `fit` methods. + +Subclasses should override for any actions to run. + + + + + + + + + + + + + +
Arguments
+`batch` + +integer, index of batch within the current epoch. +
+`logs` + +dict. Has keys `batch` and `size` representing the current batch +number and the size of the batch. +
+ + + +

on_train_batch_end

+ +View source + + + +Called at the end of a training batch in `fit` methods. + +Subclasses should override for any actions to run. + + + + + + + + + + + + + +
Arguments
+`batch` + +integer, index of batch within the current epoch. +
+`logs` + +dict. Metric results for this batch. +
+ + + +

on_train_begin

+ +View source + + + +Called at the beginning of training. + +Subclasses should override for any actions to run. + + + + + + + + + + +
Arguments
+`logs` + +dict. Currently no data is passed to this argument for this method +but that may change in the future. +
+ + + +

on_train_end

+ +View source + + + +Called at the end of training. + +Subclasses should override for any actions to run. + + + + + + + + + + +
Arguments
+`logs` + +dict. Currently no data is passed to this argument for this method +but that may change in the future. +
+ + + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/EarlyStopping.md b/site/en/api_docs/python/tf/keras/callbacks/EarlyStopping.md new file mode 100644 index 00000000000..e614ec9befd --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/EarlyStopping.md @@ -0,0 +1,196 @@ +description: Stop training when a monitored metric has stopped improving. + +
+ + + + + + +
+ +# tf.keras.callbacks.EarlyStopping + + + + + + + + + +Stop training when a monitored metric has stopped improving. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + +Assuming the goal of a training is to minimize the loss. With this, the +metric to be monitored would be 'loss', and mode would be 'min'. A +`model.fit()` training loop will check at end of every epoch whether +the loss is no longer decreasing, considering the `min_delta` and +`patience` if applicable. Once it's found no longer decreasing, +`model.stop_training` is marked True and the training terminates. + +The quantity to be monitored needs to be available in `logs` dict. +To make it so, pass the loss or metrics at `model.compile()`. + +#### Example: + + + +``` +>>> callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3) +>>> # This callback will stop the training when there is no improvement in +>>> # the validation loss for three consecutive epochs. +>>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> model.compile(tf.keras.optimizers.SGD(), loss='mse') +>>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5), +... epochs=10, batch_size=1, callbacks=[callback], +... verbose=0) +>>> len(history.history['loss']) # Only 4 epochs are run. +4 +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`monitor` + +Quantity to be monitored. +
+`min_delta` + +Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than `min_delta` will count as no improvement. +
+`patience` + +Number of epochs with no improvement +after which training will be stopped. +
+`verbose` + +verbosity mode. +
+`mode` + +One of `{"auto", "min", "max"}`. In `min` mode, +training will stop when the quantity +monitored has stopped decreasing; in `max` +mode it will stop when the quantity +monitored has stopped increasing; in `auto` +mode, the direction is automatically inferred +from the name of the monitored quantity. +
+`baseline` + +Baseline value for the monitored quantity. +Training will stop if the model doesn't show improvement over the +baseline. +
+`restore_best_weights` + +Whether to restore model weights from +the epoch with the best value of the monitored quantity. +If False, the model weights obtained at the last step of +training are used. +
+ + + +## Methods + +

get_monitor_value

+ +View source + + + + + + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/History.md b/site/en/api_docs/python/tf/keras/callbacks/History.md new file mode 100644 index 00000000000..456398eea40 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/History.md @@ -0,0 +1,82 @@ +description: Callback that records events into a History object. + +
+ + + + + +
+ +# tf.keras.callbacks.History + + + + + + + + + +Callback that records events into a `History` object. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + +This callback is automatically applied to +every Keras model. The `History` object +gets returned by the `fit` method of models. + +## Methods + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/LambdaCallback.md b/site/en/api_docs/python/tf/keras/callbacks/LambdaCallback.md new file mode 100644 index 00000000000..fa7ea29d367 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/LambdaCallback.md @@ -0,0 +1,175 @@ +description: Callback for creating simple, custom callbacks on-the-fly. + +
+ + + + + +
+ +# tf.keras.callbacks.LambdaCallback + + + + + + + + + +Callback for creating simple, custom callbacks on-the-fly. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + +This callback is constructed with anonymous functions that will be called +at the appropriate time. Note that the callbacks expect positional +arguments, as follows: + + - `on_epoch_begin` and `on_epoch_end` expect two positional arguments: +`epoch`, `logs` + - `on_batch_begin` and `on_batch_end` expect two positional arguments: +`batch`, `logs` + - `on_train_begin` and `on_train_end` expect one positional argument: +`logs` + + + + + + + + + + + + + + + + + + + + + + + + + 
+`on_epoch_begin` + +called at the beginning of every epoch. +
+`on_epoch_end` + +called at the end of every epoch. +
+`on_batch_begin` + +called at the beginning of every batch. +
+`on_batch_end` + +called at the end of every batch. +
+`on_train_begin` + +called at the beginning of model training. +
+`on_train_end` + +called at the end of model training. +
+ + + +#### Example: + + + +```python +# Print the batch number at the beginning of every batch. +batch_print_callback = LambdaCallback( + on_batch_begin=lambda batch,logs: print(batch)) + +# Stream the epoch loss to a file in JSON format. The file content +# is not well-formed JSON but rather has a JSON object per line. +import json +json_log = open('loss_log.json', mode='wt', buffering=1) +json_logging_callback = LambdaCallback( + on_epoch_end=lambda epoch, logs: json_log.write( + json.dumps({'epoch': epoch, 'loss': logs['loss']}) + '\n'), + on_train_end=lambda logs: json_log.close() +) + +# Terminate some processes after having finished model training. +processes = ... +cleanup_callback = LambdaCallback( + on_train_end=lambda logs: [ + p.terminate() for p in processes if p.is_alive()]) + +model.fit(..., + callbacks=[batch_print_callback, + json_logging_callback, + cleanup_callback]) +``` + +## Methods + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/LearningRateScheduler.md b/site/en/api_docs/python/tf/keras/callbacks/LearningRateScheduler.md new file mode 100644 index 00000000000..40338c34e46 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/LearningRateScheduler.md @@ -0,0 +1,120 @@ +description: Learning rate scheduler. + +
+ + + + + +
+ +# tf.keras.callbacks.LearningRateScheduler + + + + + + + + + +Learning rate scheduler. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + + + + + + + + + + + + + + +
+`schedule` + +a function that takes an epoch index as input +(integer, indexed from 0) and returns a new +learning rate as output (float). +
+`verbose` + +int. 0: quiet, 1: update messages. +
+ + +```python +# This function keeps the learning rate at 0.001 for the first ten epochs +# and decreases it exponentially after that. +def scheduler(epoch): + if epoch < 10: + return 0.001 + else: + return 0.001 * tf.math.exp(0.1 * (10 - epoch)) + +callback = tf.keras.callbacks.LearningRateScheduler(scheduler) +model.fit(data, labels, epochs=100, callbacks=[callback], + validation_data=(val_data, val_labels)) +``` + +## Methods + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/ModelCheckpoint.md b/site/en/api_docs/python/tf/keras/callbacks/ModelCheckpoint.md new file mode 100644 index 00000000000..7ce08f68d91 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/ModelCheckpoint.md @@ -0,0 +1,206 @@ +description: Callback to save the Keras model or model weights at some frequency. + +
+ + + + + +
+ +# tf.keras.callbacks.ModelCheckpoint + + + + + + + + + +Callback to save the Keras model or model weights at some frequency. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + +`ModelCheckpoint` callback is used in conjunction with training using +`model.fit()` to save a model or weights (in a checkpoint file) at some +interval, so the model or weights can be loaded later to continue the training +from the state saved. + +A few options this callback provides include: + +- Whether to only keep the model that has achieved the "best performance" so + far, or whether to save the model at the end of every epoch regardless of + performance. +- Definition of 'best'; which quantity to monitor and whether it should be + maximized or minimized. +- The frequency it should save at. Currently, the callback supports saving at + the end of every epoch, or after a fixed number of training batches. +- Whether only weights are saved, or the whole model is saved. + +#### Example: + + + +```python +EPOCHS = 10 +checkpoint_filepath = '/tmp/checkpoint' +model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( + filepath=checkpoint_filepath, + save_weights_only=True, + monitor='val_acc', + mode='max', + save_best_only=True) + +# Model weights are saved at the end of every epoch, if it's the best seen +# so far. +model.fit(epochs=EPOCHS, callbacks=[model_checkpoint_callback]) + +# The model weights (that are considered the best) are loaded into the model. +model.load_weights(checkpoint_filepath) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+`filepath`
+
+string, path to save the model file. `filepath` can contain
+named formatting options, which will be filled with the value of `epoch` and
+keys in `logs` (passed in `on_epoch_end`). For example: if `filepath` is
+`weights.{epoch:02d}-{val_loss:.2f}.hdf5`, then the model checkpoints
+will be saved with the epoch number and the validation loss in the
+filename.
+
+`monitor` + +quantity to monitor. +
+`verbose` + +verbosity mode, 0 or 1. +
+
+`save_best_only`
+
+if `save_best_only=True`, the model is saved only when the monitored
+quantity improves on the best value seen so far, so the latest best model
+is never overwritten by a worse one.
+If `filepath` doesn't contain formatting options like `{epoch}`, then
+`filepath` is overwritten by each new best model.
+
+`mode` + +one of {auto, min, max}. If `save_best_only=True`, the decision to +overwrite the current save file is made based on either the maximization +or the minimization of the monitored quantity. For `val_acc`, this +should be `max`, for `val_loss` this should be `min`, etc. In `auto` +mode, the direction is automatically inferred from the name of the +monitored quantity. +
+`save_weights_only` + +if True, then only the model's weights will be saved +(`model.save_weights(filepath)`), else the full model is saved +(`model.save(filepath)`). +
+
+`save_freq`
+
+`'epoch'` or integer. When using `'epoch'`, the callback saves
+the model after each epoch. When using an integer, the callback saves the
+model at the end of this many batches. Note that if the saving isn't aligned
+to epochs, the monitored metric may potentially be less reliable (it
+could reflect as little as 1 batch, since the metrics get reset every
+epoch). Defaults to `'epoch'`.
+
+`**kwargs` + +Additional arguments for backwards compatibility. Possible key +is `period`. +
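+
+The snippet below is an illustrative sketch of the `filepath` formatting
+options described above; `model`, `x_train`, `y_train`, `x_val` and `y_val`
+are placeholders for your own compiled model and data:
+
+```python
+model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
+    # `{epoch}` and `{val_loss}` are filled in from the epoch number and the
+    # `logs` passed to `on_epoch_end`.
+    filepath='/tmp/ckpt/weights.{epoch:02d}-{val_loss:.2f}.hdf5',
+    monitor='val_loss',
+    mode='min',
+    save_weights_only=True,
+    save_freq='epoch')
+
+model.fit(x_train, y_train, epochs=5,
+          validation_data=(x_val, y_val),
+          callbacks=[model_checkpoint_callback])
+```
+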
+ + + +## Methods + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/ProgbarLogger.md b/site/en/api_docs/python/tf/keras/callbacks/ProgbarLogger.md new file mode 100644 index 00000000000..71f2d9685cb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/ProgbarLogger.md @@ -0,0 +1,128 @@ +description: Callback that prints metrics to stdout. + +
+ + + + + +
+ +# tf.keras.callbacks.ProgbarLogger + + + + + + + + + +Callback that prints metrics to stdout. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + + + + + + + + + + + + + + +
+`count_mode` + +One of "steps" or "samples". +Whether the progress bar should +count samples seen or steps (batches) seen. +
+`stateful_metrics` + +Iterable of string names of metrics that +should *not* be averaged over an epoch. +Metrics in this list will be logged as-is. +All others will be averaged over time (e.g. loss, etc). +If not provided, defaults to the `Model`'s metrics. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid `count_mode`. +
+ + + +## Methods + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/ReduceLROnPlateau.md b/site/en/api_docs/python/tf/keras/callbacks/ReduceLROnPlateau.md new file mode 100644 index 00000000000..8772503a7b2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/ReduceLROnPlateau.md @@ -0,0 +1,182 @@ +description: Reduce learning rate when a metric has stopped improving. + +
+ + + + + + +
+ +# tf.keras.callbacks.ReduceLROnPlateau + + + + + + + + + +Reduce learning rate when a metric has stopped improving. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + +Models often benefit from reducing the learning rate by a factor +of 2-10 once learning stagnates. This callback monitors a +quantity and if no improvement is seen for a 'patience' number +of epochs, the learning rate is reduced. + +#### Example: + + + +```python +reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, + patience=5, min_lr=0.001) +model.fit(X_train, Y_train, callbacks=[reduce_lr]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`monitor` + +quantity to be monitored. +
+`factor` + +factor by which the learning rate will be reduced. new_lr = lr * +factor +
+`patience` + +number of epochs with no improvement after which learning rate +will be reduced. +
+`verbose` + +int. 0: quiet, 1: update messages. +
+`mode` + +one of {auto, min, max}. In `min` mode, lr will be reduced when the +quantity monitored has stopped decreasing; in `max` mode it will be +reduced when the quantity monitored has stopped increasing; in `auto` +mode, the direction is automatically inferred from the name of the +monitored quantity. +
+`min_delta` + +threshold for measuring the new optimum, to only focus on +significant changes. +
+`cooldown` + +number of epochs to wait before resuming normal operation after +lr has been reduced. +
+`min_lr` + +lower bound on the learning rate. +
+ + + +## Methods + +

in_cooldown

+ +View source + + + + + + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/RemoteMonitor.md b/site/en/api_docs/python/tf/keras/callbacks/RemoteMonitor.md new file mode 100644 index 00000000000..9486fa4eb9b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/RemoteMonitor.md @@ -0,0 +1,136 @@ +description: Callback used to stream events to a server. + +
+ + + + + +
+ +# tf.keras.callbacks.RemoteMonitor + + + + + + + + + +Callback used to stream events to a server. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + +Requires the `requests` library. +Events are sent to `root + '/publish/epoch/end/'` by default. Calls are +HTTP POST, with a `data` argument which is a +JSON-encoded dictionary of event data. +If send_as_json is set to True, the content type of the request will be +application/json. Otherwise the serialized JSON will be sent within a form. + + + + + + + + + + + + + + + + + + + + + + +
+`root` + +String; root url of the target server. +
+`path` + +String; path relative to `root` to which the events will be sent. +
+`field` + +String; JSON field under which the data will be stored. +The field is used only if the payload is sent within a form +(i.e. send_as_json is set to False). +
+`headers` + +Dictionary; optional custom HTTP headers. +
+`send_as_json` + +Boolean; whether the request should be +sent as application/json. +
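+
+#### Example:
+
+A minimal sketch; `http://localhost:9000` is a hypothetical server that
+accepts HTTP POST requests at `root + '/publish/epoch/end/'`, and `model`,
+`x_train` and `y_train` are placeholders for your own model and data:
+
+```python
+remote_monitor = tf.keras.callbacks.RemoteMonitor(
+    root='http://localhost:9000', send_as_json=True)
+
+model.fit(x_train, y_train, epochs=3, callbacks=[remote_monitor])
+```
+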
+ + + +## Methods + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/TensorBoard.md b/site/en/api_docs/python/tf/keras/callbacks/TensorBoard.md new file mode 100644 index 00000000000..195ee870238 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/TensorBoard.md @@ -0,0 +1,216 @@ +description: Enable visualizations for TensorBoard. + +
+ + + + + +
+
+# tf.keras.callbacks.TensorBoard
+
+
+
+
+
+
+
+
+
+Enable visualizations for TensorBoard.
+
+Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md)
+
+
+
+
+
+
+TensorBoard is a visualization tool provided with TensorFlow.
+
+This callback logs events for TensorBoard, including:
+
+* Metrics summary plots
+* Training graph visualization
+* Activation histograms
+* Sampled profiling
+
+If you have installed TensorFlow with pip, you should be able
+to launch TensorBoard from the command line:
+
+```sh
+tensorboard --logdir=path_to_your_logs
+```
+
+You can find more information about TensorBoard
+[here](https://www.tensorflow.org/get_started/summaries_and_tensorboard).
+
+Example (Basic):
+```python
+tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs")
+model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])
+# run the tensorboard command to view the visualizations.
+```
+Example (Profile):
+```python
+# profile a single batch, e.g. the 5th batch.
+tensorboard_callback = tf.keras.callbacks.TensorBoard(
+    log_dir='./logs', profile_batch=5)
+model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])
+# run the tensorboard command to view the visualizations in profile plugin.
+
+# profile a range of batches, e.g. from 10 to 20.
+tensorboard_callback = tf.keras.callbacks.TensorBoard(
+    log_dir='./logs', profile_batch='10,20')
+model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])
+# run the tensorboard command to view the visualizations in profile plugin.
+```
+
+
+
+
+
+
+`log_dir` + +the path of the directory where to save the log files to be +parsed by TensorBoard. +
+`histogram_freq` + +frequency (in epochs) at which to compute activation and +weight histograms for the layers of the model. If set to 0, histograms +won't be computed. Validation data (or split) must be specified for +histogram visualizations. +
+`write_graph` + +whether to visualize the graph in TensorBoard. The log file +can become quite large when write_graph is set to True. +
+`write_images` + +whether to write model weights to visualize as image in +TensorBoard. +
+`update_freq` + +`'batch'` or `'epoch'` or integer. When using `'batch'`, +writes the losses and metrics to TensorBoard after each batch. The same +applies for `'epoch'`. If using an integer, let's say `1000`, the +callback will write the metrics and losses to TensorBoard every 1000 +batches. Note that writing too frequently to TensorBoard can slow down +your training. +
+
+`profile_batch`
+
+Profile the batch(es) to sample compute characteristics.
+profile_batch must be a non-negative integer or a comma-separated string
+containing a pair of positive integers. A pair of positive integers signifies
+a range of batches to profile. By default, it will profile the second
+batch. Set profile_batch=0 to disable profiling. Must run in TensorFlow
+eager mode.
+
+`embeddings_freq` + +frequency (in epochs) at which embedding layers will be +visualized. If set to 0, embeddings won't be visualized. +
+
+`embeddings_metadata`
+
+a dictionary which maps layer name to a file name in
+which metadata for this embedding layer is saved. See the
+[details](
+https://www.tensorflow.org/how_tos/embedding_viz/#metadata_optional)
+about the metadata file format. If the same metadata file is
+used for all embedding layers, a single string can be passed.
+
+ + + + + + + + + + + + +
+`ValueError` + +If histogram_freq is set and no validation data is provided. +
+ + + +## Methods + +

set_model

+ +View source + + + +Sets Keras model and writes graph if specified. + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/callbacks/TerminateOnNaN.md b/site/en/api_docs/python/tf/keras/callbacks/TerminateOnNaN.md new file mode 100644 index 00000000000..44877dfd01f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/callbacks/TerminateOnNaN.md @@ -0,0 +1,79 @@ +description: Callback that terminates training when a NaN loss is encountered. + +
+ + + + + +
+ +# tf.keras.callbacks.TerminateOnNaN + + + + + + + + + +Callback that terminates training when a NaN loss is encountered. + +Inherits From: [`Callback`](../../../tf/keras/callbacks/Callback.md) + + + + + + + + + + +## Methods + +

set_model

+ +View source + + + + + + +

set_params

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/constraints.md b/site/en/api_docs/python/tf/keras/constraints.md new file mode 100644 index 00000000000..828f2bb61f8 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/constraints.md @@ -0,0 +1,53 @@ +description: Constraints: functions that impose constraints on weight values. + +
+ + +
+ +# Module: tf.keras.constraints + + + + + + + + + +Constraints: functions that impose constraints on weight values. + + + +## Classes + +[`class Constraint`](../../tf/keras/constraints/Constraint.md) + +[`class MaxNorm`](../../tf/keras/constraints/MaxNorm.md): MaxNorm weight constraint. + +[`class MinMaxNorm`](../../tf/keras/constraints/MinMaxNorm.md): MinMaxNorm weight constraint. + +[`class NonNeg`](../../tf/keras/constraints/NonNeg.md): Constrains the weights to be non-negative. + +[`class RadialConstraint`](../../tf/keras/constraints/RadialConstraint.md): Constrains `Conv2D` kernel weights to be the same for each radius. + +[`class UnitNorm`](../../tf/keras/constraints/UnitNorm.md): Constrains the weights incident to each hidden unit to have unit norm. + +[`class max_norm`](../../tf/keras/constraints/MaxNorm.md): MaxNorm weight constraint. + +[`class min_max_norm`](../../tf/keras/constraints/MinMaxNorm.md): MinMaxNorm weight constraint. + +[`class non_neg`](../../tf/keras/constraints/NonNeg.md): Constrains the weights to be non-negative. + +[`class radial_constraint`](../../tf/keras/constraints/RadialConstraint.md): Constrains `Conv2D` kernel weights to be the same for each radius. + +[`class unit_norm`](../../tf/keras/constraints/UnitNorm.md): Constrains the weights incident to each hidden unit to have unit norm. + +## Functions + +[`deserialize(...)`](../../tf/keras/constraints/deserialize.md) + +[`get(...)`](../../tf/keras/constraints/get.md) + +[`serialize(...)`](../../tf/keras/constraints/serialize.md) + diff --git a/site/en/api_docs/python/tf/keras/constraints/Constraint.md b/site/en/api_docs/python/tf/keras/constraints/Constraint.md new file mode 100644 index 00000000000..bc703501181 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/constraints/Constraint.md @@ -0,0 +1,66 @@ +
+ + + + +
+ +# tf.keras.constraints.Constraint + + + + + + + + + + + + + + + + +## Methods + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/constraints/MaxNorm.md b/site/en/api_docs/python/tf/keras/constraints/MaxNorm.md new file mode 100644 index 00000000000..bdc4d01a15a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/constraints/MaxNorm.md @@ -0,0 +1,118 @@ +description: MaxNorm weight constraint. + +
+ + + + + +
+ +# tf.keras.constraints.MaxNorm + + + + + + + + + +MaxNorm weight constraint. + +Inherits From: [`Constraint`](../../../tf/keras/constraints/Constraint.md) + + + + + + + + + +Constrains the weights incident to each hidden unit +to have a norm less than or equal to a desired value. + + + + + + + + + + + + + +
+`m` + +the maximum norm for the incoming weights. +
+`axis` + +integer, axis along which to calculate weight norms. +For instance, in a `Dense` layer the weight matrix +has shape `(input_dim, output_dim)`, +set `axis` to `0` to constrain each weight vector +of length `(input_dim,)`. +In a `Conv2D` layer with `data_format="channels_last"`, +the weight tensor has shape +`(rows, cols, input_depth, output_depth)`, +set `axis` to `[0, 1, 2]` +to constrain the weights of each filter tensor of size +`(rows, cols, input_depth)`. +
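+
+#### Example:
+
+A minimal sketch of attaching the constraint to a layer's kernel (the layer
+size is arbitrary):
+
+```python
+import tensorflow as tf
+
+# Constrain each incoming weight vector of the Dense kernel (axis=0) to have
+# an L2 norm of at most 2.
+dense = tf.keras.layers.Dense(
+    64, kernel_constraint=tf.keras.constraints.MaxNorm(2., axis=0))
+```
+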
+ + + +## Methods + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/constraints/MinMaxNorm.md b/site/en/api_docs/python/tf/keras/constraints/MinMaxNorm.md new file mode 100644 index 00000000000..43fb717d3c6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/constraints/MinMaxNorm.md @@ -0,0 +1,138 @@ +description: MinMaxNorm weight constraint. + +
+ + + + + +
+ +# tf.keras.constraints.MinMaxNorm + + + + + + + + + +MinMaxNorm weight constraint. + +Inherits From: [`Constraint`](../../../tf/keras/constraints/Constraint.md) + + + + + + + + + +Constrains the weights incident to each hidden unit +to have the norm between a lower bound and an upper bound. + + + + + + + + + + + + + + + + + + + +
+`min_value` + +the minimum norm for the incoming weights. +
+`max_value` + +the maximum norm for the incoming weights. +
+`rate` + +rate for enforcing the constraint: weights will be +rescaled to yield +`(1 - rate) * norm + rate * norm.clip(min_value, max_value)`. +Effectively, this means that rate=1.0 stands for strict +enforcement of the constraint, while rate<1.0 means that +weights will be rescaled at each step to slowly move +towards a value inside the desired interval. +
+`axis` + +integer, axis along which to calculate weight norms. +For instance, in a `Dense` layer the weight matrix +has shape `(input_dim, output_dim)`, +set `axis` to `0` to constrain each weight vector +of length `(input_dim,)`. +In a `Conv2D` layer with `data_format="channels_last"`, +the weight tensor has shape +`(rows, cols, input_depth, output_depth)`, +set `axis` to `[0, 1, 2]` +to constrain the weights of each filter tensor of size +`(rows, cols, input_depth)`. +
+ + + +## Methods + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/constraints/NonNeg.md b/site/en/api_docs/python/tf/keras/constraints/NonNeg.md new file mode 100644 index 00000000000..db3cce00531 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/constraints/NonNeg.md @@ -0,0 +1,73 @@ +description: Constrains the weights to be non-negative. + +
+ + + + +
+ +# tf.keras.constraints.NonNeg + + + + + + + + + +Constrains the weights to be non-negative. + +Inherits From: [`Constraint`](../../../tf/keras/constraints/Constraint.md) + + + + + + +## Methods + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/constraints/RadialConstraint.md b/site/en/api_docs/python/tf/keras/constraints/RadialConstraint.md new file mode 100644 index 00000000000..a1e80daacb9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/constraints/RadialConstraint.md @@ -0,0 +1,95 @@ +description: Constrains Conv2D kernel weights to be the same for each radius. + +
+ + + + +
+
+# tf.keras.constraints.RadialConstraint
+
+
+
+
+
+
+
+
+
+Constrains `Conv2D` kernel weights to be the same for each radius.
+
+Inherits From: [`Constraint`](../../../tf/keras/constraints/Constraint.md)
+
+
+
+
+
+For example, the desired output for the following 4-by-4 kernel:
+
+```
+  kernel = [[v_00, v_01, v_02, v_03],
+            [v_10, v_11, v_12, v_13],
+            [v_20, v_21, v_22, v_23],
+            [v_30, v_31, v_32, v_33]]
+```
+
+is this:
+
+```
+  kernel = [[v_11, v_11, v_11, v_11],
+            [v_11, v_33, v_33, v_11],
+            [v_11, v_33, v_33, v_11],
+            [v_11, v_11, v_11, v_11]]
+```
+
+This constraint can be applied to any `Conv2D` layer version, including
+`Conv2DTranspose` and `SeparableConv2D`, and with either `"channels_last"` or
+`"channels_first"` data format. The method assumes the weight tensor is of
+shape `(rows, cols, input_depth, output_depth)`.
+
+## Methods
+

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/constraints/UnitNorm.md b/site/en/api_docs/python/tf/keras/constraints/UnitNorm.md new file mode 100644 index 00000000000..c510615876a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/constraints/UnitNorm.md @@ -0,0 +1,109 @@ +description: Constrains the weights incident to each hidden unit to have unit norm. + +
+ + + + + +
+ +# tf.keras.constraints.UnitNorm + + + + + + + + + +Constrains the weights incident to each hidden unit to have unit norm. + +Inherits From: [`Constraint`](../../../tf/keras/constraints/Constraint.md) + + + + + + + + + + + + + + + + + + + +
+`axis` + +integer, axis along which to calculate weight norms. +For instance, in a `Dense` layer the weight matrix +has shape `(input_dim, output_dim)`, +set `axis` to `0` to constrain each weight vector +of length `(input_dim,)`. +In a `Conv2D` layer with `data_format="channels_last"`, +the weight tensor has shape +`(rows, cols, input_depth, output_depth)`, +set `axis` to `[0, 1, 2]` +to constrain the weights of each filter tensor of size +`(rows, cols, input_depth)`. +
+ + + +## Methods + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/constraints/deserialize.md b/site/en/api_docs/python/tf/keras/constraints/deserialize.md new file mode 100644 index 00000000000..f597fa732b5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/constraints/deserialize.md @@ -0,0 +1,42 @@ +
+ + +
+ +# tf.keras.constraints.deserialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/constraints/get.md b/site/en/api_docs/python/tf/keras/constraints/get.md new file mode 100644 index 00000000000..0bf4d561099 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/constraints/get.md @@ -0,0 +1,42 @@ +
+ + +
+ +# tf.keras.constraints.get + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/constraints/serialize.md b/site/en/api_docs/python/tf/keras/constraints/serialize.md new file mode 100644 index 00000000000..4b60156ebb8 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/constraints/serialize.md @@ -0,0 +1,42 @@ +
+ + +
+ +# tf.keras.constraints.serialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/datasets.md b/site/en/api_docs/python/tf/keras/datasets.md new file mode 100644 index 00000000000..5b428abd529 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets.md @@ -0,0 +1,37 @@ +description: Public API for tf.keras.datasets namespace. + +
+ + +
+ +# Module: tf.keras.datasets + + + + + + + + + +Public API for tf.keras.datasets namespace. + + + +## Modules + +[`boston_housing`](../../tf/keras/datasets/boston_housing.md) module: Boston housing price regression dataset. + +[`cifar10`](../../tf/keras/datasets/cifar10.md) module: CIFAR10 small images classification dataset. + +[`cifar100`](../../tf/keras/datasets/cifar100.md) module: CIFAR100 small images classification dataset. + +[`fashion_mnist`](../../tf/keras/datasets/fashion_mnist.md) module: Fashion-MNIST dataset. + +[`imdb`](../../tf/keras/datasets/imdb.md) module: IMDB sentiment classification dataset. + +[`mnist`](../../tf/keras/datasets/mnist.md) module: MNIST handwritten digits dataset. + +[`reuters`](../../tf/keras/datasets/reuters.md) module: Reuters topic classification dataset. + diff --git a/site/en/api_docs/python/tf/keras/datasets/boston_housing.md b/site/en/api_docs/python/tf/keras/datasets/boston_housing.md new file mode 100644 index 00000000000..efdc03762c9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/boston_housing.md @@ -0,0 +1,25 @@ +description: Boston housing price regression dataset. + +
+ + +
+ +# Module: tf.keras.datasets.boston_housing + + + + + + + + + +Boston housing price regression dataset. + + + +## Functions + +[`load_data(...)`](../../../tf/keras/datasets/boston_housing/load_data.md): Loads the Boston Housing dataset. + diff --git a/site/en/api_docs/python/tf/keras/datasets/boston_housing/load_data.md b/site/en/api_docs/python/tf/keras/datasets/boston_housing/load_data.md new file mode 100644 index 00000000000..f2e6827d681 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/boston_housing/load_data.md @@ -0,0 +1,107 @@ +description: Loads the Boston Housing dataset. + +
+ + +
+ +# tf.keras.datasets.boston_housing.load_data + + + + + + + + + +Loads the Boston Housing dataset. + + + + + + + + + +This is a dataset taken from the StatLib library which is maintained at +Carnegie Mellon University. + +Samples contain 13 attributes of houses at different locations around the +Boston suburbs in the late 1970s. Targets are the median values of +the houses at a location (in k$). + +The attributes themselves are defined in the +[StatLib website](http://lib.stat.cmu.edu/datasets/boston). + + + + + + + + + + + + + + + + +
+`path` + +path where to cache the dataset locally +(relative to ~/.keras/datasets). +
+`test_split` + +fraction of the data to reserve as test set. +
+`seed` + +Random seed for shuffling the data +before computing the test split. +
+ + + + + + + + + + + +
+Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
+
+**x_train, x_test**: numpy arrays with shape (num_samples, 13) containing
+either the training samples (for x_train) or the test samples (for x_test).
+
+**y_train, y_test**: numpy arrays of shape (num_samples,) containing the
+target scalars. The targets are float scalars typically between 10 and
+50 that represent the home prices in k$.
+
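+
+#### Example:
+
+A minimal usage sketch; the printed shapes assume the default 0.2 test split
+of the 506 samples:
+
+```python
+import tensorflow as tf
+
+(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
+    test_split=0.2)
+
+print(x_train.shape, x_test.shape)  # (404, 13) (102, 13)
+print(y_train[:3])                  # median home prices in k$
+```
+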
+ diff --git a/site/en/api_docs/python/tf/keras/datasets/cifar10.md b/site/en/api_docs/python/tf/keras/datasets/cifar10.md new file mode 100644 index 00000000000..3ff2ef8236a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/cifar10.md @@ -0,0 +1,25 @@ +description: CIFAR10 small images classification dataset. + +
+ + +
+ +# Module: tf.keras.datasets.cifar10 + + + + + + + + + +CIFAR10 small images classification dataset. + + + +## Functions + +[`load_data(...)`](../../../tf/keras/datasets/cifar10/load_data.md): Loads [CIFAR10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). + diff --git a/site/en/api_docs/python/tf/keras/datasets/cifar10/load_data.md b/site/en/api_docs/python/tf/keras/datasets/cifar10/load_data.md new file mode 100644 index 00000000000..deeac01193c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/cifar10/load_data.md @@ -0,0 +1,67 @@ +description: Loads [CIFAR10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). + +
+ + +
+ +# tf.keras.datasets.cifar10.load_data + + + + + + + + + +Loads [CIFAR10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). + + + + + + + + + +This is a dataset of 50,000 32x32 color training images and 10,000 test +images, labeled over 10 categories. See more info at the +[CIFAR homepage](https://www.cs.toronto.edu/~kriz/cifar.html). + + + + + + + + + +
+Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`. + +**x_train, x_test**: uint8 arrays of RGB image data with shape +(num_samples, 3, 32, 32) if the tf.keras.backend.image_data_format is +'channels_first', or (num_samples, 32, 32, 3) if the data format +is 'channels_last'. + +**y_train, y_test**: uint8 arrays of category labels +(integers in range 0-9) each with shape (num_samples, 1). +
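+
+#### Example:
+
+A minimal usage sketch (the shapes shown assume the default 'channels_last'
+image data format):
+
+```python
+import tensorflow as tf
+
+(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
+
+print(x_train.shape, y_train.shape)  # (50000, 32, 32, 3) (50000, 1)
+
+# Scale the uint8 pixel values to floats in [0, 1] before training.
+x_train = x_train.astype('float32') / 255.0
+```
+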
+ diff --git a/site/en/api_docs/python/tf/keras/datasets/cifar100.md b/site/en/api_docs/python/tf/keras/datasets/cifar100.md new file mode 100644 index 00000000000..ffe1fea0908 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/cifar100.md @@ -0,0 +1,25 @@ +description: CIFAR100 small images classification dataset. + +
+ + +
+ +# Module: tf.keras.datasets.cifar100 + + + + + + + + + +CIFAR100 small images classification dataset. + + + +## Functions + +[`load_data(...)`](../../../tf/keras/datasets/cifar100/load_data.md): Loads [CIFAR100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). + diff --git a/site/en/api_docs/python/tf/keras/datasets/cifar100/load_data.md b/site/en/api_docs/python/tf/keras/datasets/cifar100/load_data.md new file mode 100644 index 00000000000..de18d307231 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/cifar100/load_data.md @@ -0,0 +1,106 @@ +description: Loads [CIFAR100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). + +
+ + +
+ +# tf.keras.datasets.cifar100.load_data + + + + + + + + + +Loads [CIFAR100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). + + + + + + + + + +This is a dataset of 50,000 32x32 color training images and +10,000 test images, labeled over 100 fine-grained classes that are +grouped into 20 coarse-grained classes. See more info at the +[CIFAR homepage](https://www.cs.toronto.edu/~kriz/cifar.html). + + + + + + + + + + +
+`label_mode` + +one of "fine", "coarse". If it is "fine" the category labels +are the fine-grained labels, if it is "coarse" the output labels are the +coarse-grained superclasses. +
+ + + + + + + + + + + +
+Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`. + +**x_train, x_test**: uint8 arrays of RGB image data with shape +(num_samples, 3, 32, 32) if the tf.keras.backend.image_data_format is +'channels_first', or (num_samples, 32, 32, 3) if the data format +is 'channels_last'. + +**y_train, y_test**: uint8 arrays of category labels with shape +(num_samples, 1). +
+ + + + + + + + + + + + +
+`ValueError` + +in case of invalid `label_mode`. +
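+
+#### Example:
+
+A minimal sketch showing the effect of `label_mode`:
+
+```python
+import tensorflow as tf
+
+# 'coarse' returns the 20 superclass labels instead of the 100 fine labels.
+(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar100.load_data(
+    label_mode='coarse')
+
+print(y_train.shape)                 # (50000, 1)
+print(y_train.min(), y_train.max())  # coarse labels span 0-19
+```
+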
+ diff --git a/site/en/api_docs/python/tf/keras/datasets/fashion_mnist.md b/site/en/api_docs/python/tf/keras/datasets/fashion_mnist.md new file mode 100644 index 00000000000..9a0d1c46aac --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/fashion_mnist.md @@ -0,0 +1,25 @@ +description: Fashion-MNIST dataset. + +
+ + +
+ +# Module: tf.keras.datasets.fashion_mnist + + + + + + + + + +Fashion-MNIST dataset. + + + +## Functions + +[`load_data(...)`](../../../tf/keras/datasets/fashion_mnist/load_data.md): Loads the Fashion-MNIST dataset. + diff --git a/site/en/api_docs/python/tf/keras/datasets/fashion_mnist/load_data.md b/site/en/api_docs/python/tf/keras/datasets/fashion_mnist/load_data.md new file mode 100644 index 00000000000..d814bccf6a9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/fashion_mnist/load_data.md @@ -0,0 +1,85 @@ +description: Loads the Fashion-MNIST dataset. + +
+ + +
+ +# tf.keras.datasets.fashion_mnist.load_data + + + + + + + + + +Loads the Fashion-MNIST dataset. + + + + + + + + + +This is a dataset of 60,000 28x28 grayscale images of 10 fashion categories, +along with a test set of 10,000 images. This dataset can be used as +a drop-in replacement for MNIST. The class labels are: + +| Label | Description | +|:-----:|-------------| +| 0 | T-shirt/top | +| 1 | Trouser | +| 2 | Pullover | +| 3 | Dress | +| 4 | Coat | +| 5 | Sandal | +| 6 | Shirt | +| 7 | Sneaker | +| 8 | Bag | +| 9 | Ankle boot | + + + + + + + + + +
+Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`. + +**x_train, x_test**: uint8 arrays of grayscale image data with shape +(num_samples, 28, 28). + +**y_train, y_test**: uint8 arrays of labels (integers in range 0-9) +with shape (num_samples,). +
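+
+#### Example:
+
+A minimal usage sketch; the class names simply spell out the label table
+above:
+
+```python
+import tensorflow as tf
+
+(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
+
+class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
+               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
+
+print(x_train.shape)                 # (60000, 28, 28)
+print(class_names[int(y_train[0])])  # class name of the first training image
+```
+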
+ + + +#### License: + +The copyright for Fashion-MNIST is held by Zalando SE. +Fashion-MNIST is licensed under the [MIT license]( +https://github.com/zalandoresearch/fashion-mnist/blob/master/LICENSE). diff --git a/site/en/api_docs/python/tf/keras/datasets/imdb.md b/site/en/api_docs/python/tf/keras/datasets/imdb.md new file mode 100644 index 00000000000..6075e4da4be --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/imdb.md @@ -0,0 +1,27 @@ +description: IMDB sentiment classification dataset. + +
+ + +
+ +# Module: tf.keras.datasets.imdb + + + + + + + + + +IMDB sentiment classification dataset. + + + +## Functions + +[`get_word_index(...)`](../../../tf/keras/datasets/imdb/get_word_index.md): Retrieves a dict mapping words to their index in the IMDB dataset. + +[`load_data(...)`](../../../tf/keras/datasets/imdb/load_data.md): Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/). + diff --git a/site/en/api_docs/python/tf/keras/datasets/imdb/get_word_index.md b/site/en/api_docs/python/tf/keras/datasets/imdb/get_word_index.md new file mode 100644 index 00000000000..c1593fe0fcd --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/imdb/get_word_index.md @@ -0,0 +1,75 @@ +description: Retrieves a dict mapping words to their index in the IMDB dataset. + +
+ + +
+ +# tf.keras.datasets.imdb.get_word_index + + + + + + + + + +Retrieves a dict mapping words to their index in the IMDB dataset. + + + + + + + + + + + + + + + + + + + +
+`path` + +where to cache the data (relative to `~/.keras/dataset`). +
+ + + + + + + + + + + +
+The word index dictionary. Keys are word strings, values are their index. +
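+
+#### Example:
+
+An illustrative sketch of using the word index together with
+`tf.keras.datasets.imdb.load_data` (with its default `index_from=3`) to decode
+a review; the `<pad>`, `<start>` and `<oov>` marker strings are arbitrary
+placeholders:
+
+```python
+import tensorflow as tf
+
+(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=10000)
+word_index = tf.keras.datasets.imdb.get_word_index()
+
+# Sequence values are offset by `index_from` (3 by default); 0, 1 and 2 are
+# reserved for padding, start-of-sequence and out-of-vocabulary tokens.
+inverted_index = {index + 3: word for word, index in word_index.items()}
+inverted_index.update({0: '<pad>', 1: '<start>', 2: '<oov>'})
+
+decoded_review = ' '.join(inverted_index.get(i, '<oov>') for i in x_train[0])
+print(decoded_review[:100])
+```
+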
+ diff --git a/site/en/api_docs/python/tf/keras/datasets/imdb/load_data.md b/site/en/api_docs/python/tf/keras/datasets/imdb/load_data.md new file mode 100644 index 00000000000..4d701ea986e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/imdb/load_data.md @@ -0,0 +1,186 @@ +description: Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/). + +
+ + +
+
+# tf.keras.datasets.imdb.load_data
+
+
+
+
+
+
+
+
+
+Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/).
+
+
+
+
+
+
+
+
+
+This is a dataset of 25,000 movie reviews from IMDB, labeled by sentiment
+(positive/negative). Reviews have been preprocessed, and each review is
+encoded as a list of word indexes (integers).
+For convenience, words are indexed by overall frequency in the dataset,
+so that for instance the integer "3" encodes the 3rd most frequent word in
+the data. This allows for quick filtering operations such as:
+"only consider the top 10,000 most
+common words, but eliminate the top 20 most common words".
+
+As a convention, "0" does not stand for a specific word, but instead is used
+to encode any unknown word.
+
+
+
+
+
+
+
+
+
+`path` + +where to cache the data (relative to `~/.keras/dataset`). +
+`num_words` + +integer or None. Words are +ranked by how often they occur (in the training set) and only +the `num_words` most frequent words are kept. Any less frequent word +will appear as `oov_char` value in the sequence data. If None, +all words are kept. Defaults to None, so all words are kept. +
+`skip_top` + +skip the top N most frequently occurring words +(which may not be informative). These words will appear as +`oov_char` value in the dataset. Defaults to 0, so no words are +skipped. +
+`maxlen` + +int or None. Maximum sequence length. +Any longer sequence will be truncated. Defaults to None, which +means no truncation. +
+`seed` + +int. Seed for reproducible data shuffling. +
+`start_char` + +int. The start of a sequence will be marked with this +character. Defaults to 1 because 0 is usually the padding character. +
+`oov_char` + +int. The out-of-vocabulary character. +Words that were cut out because of the `num_words` or +`skip_top` limits will be replaced with this character. +
+`index_from` + +int. Index actual words with this index and higher. +
+`**kwargs` + +Used for backwards compatibility. +
+ + + + + + + + + + + +
+Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
+
+**x_train, x_test**: lists of sequences, which are lists of indexes
+(integers). If the num_words argument was specified, the maximum
+possible index value is num_words-1. If the `maxlen` argument was
+specified, the largest possible sequence length is `maxlen`.
+
+**y_train, y_test**: lists of integer labels (1 or 0).
+
+ + + + + + + + + + + + +
+`ValueError` + +in case `maxlen` is so low +that no input sequence could be kept. +
+ + +Note that the 'out of vocabulary' character is only used for +words that were present in the training set but are not included +because they're not making the `num_words` cut here. +Words that were not seen in the training set but are in the test set +have simply been skipped. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/datasets/mnist.md b/site/en/api_docs/python/tf/keras/datasets/mnist.md new file mode 100644 index 00000000000..cea0b012501 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/mnist.md @@ -0,0 +1,25 @@ +description: MNIST handwritten digits dataset. + +
+ + +
+ +# Module: tf.keras.datasets.mnist + + + + + + + + + +MNIST handwritten digits dataset. + + + +## Functions + +[`load_data(...)`](../../../tf/keras/datasets/mnist/load_data.md): Loads the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). + diff --git a/site/en/api_docs/python/tf/keras/datasets/mnist/load_data.md b/site/en/api_docs/python/tf/keras/datasets/mnist/load_data.md new file mode 100644 index 00000000000..dcc41bf8f9a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/mnist/load_data.md @@ -0,0 +1,96 @@ +description: Loads the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). + +
+ + +
+
+# tf.keras.datasets.mnist.load_data
+
+
+
+
+
+
+
+
+
+Loads the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
+
+
+
+
+
+
+
+
+
+This is a dataset of 60,000 28x28 grayscale images of the 10 digits,
+along with a test set of 10,000 images.
+More info can be found at the
+[MNIST homepage](http://yann.lecun.com/exdb/mnist/).
+
+
+
+
+
+
+
+
+
+
+`path` + +path where to cache the dataset locally +(relative to ~/.keras/datasets). +
+ + + + + + + + + + + +
+Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`. + +**x_train, x_test**: uint8 arrays of grayscale image data with shapes +(num_samples, 28, 28). + +**y_train, y_test**: uint8 arrays of digit labels (integers in range 0-9) +with shapes (num_samples,). +
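+
+#### Example:
+
+A minimal usage sketch:
+
+```python
+import tensorflow as tf
+
+(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
+
+print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
+
+# Scale the uint8 pixel values to floats in [0, 1] before training.
+x_train, x_test = x_train / 255.0, x_test / 255.0
+```
+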
+ + + +#### License: + +Yann LeCun and Corinna Cortes hold the copyright of MNIST dataset, +which is a derivative work from original NIST datasets. +MNIST dataset is made available under the terms of the +[Creative Commons Attribution-Share Alike 3.0 license.]( +https://creativecommons.org/licenses/by-sa/3.0/) diff --git a/site/en/api_docs/python/tf/keras/datasets/reuters.md b/site/en/api_docs/python/tf/keras/datasets/reuters.md new file mode 100644 index 00000000000..71029e93ddd --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/reuters.md @@ -0,0 +1,27 @@ +description: Reuters topic classification dataset. + +
+ + +
+ +# Module: tf.keras.datasets.reuters + + + + + + + + + +Reuters topic classification dataset. + + + +## Functions + +[`get_word_index(...)`](../../../tf/keras/datasets/reuters/get_word_index.md): Retrieves a dict mapping words to their index in the Reuters dataset. + +[`load_data(...)`](../../../tf/keras/datasets/reuters/load_data.md): Loads the Reuters newswire classification dataset. + diff --git a/site/en/api_docs/python/tf/keras/datasets/reuters/get_word_index.md b/site/en/api_docs/python/tf/keras/datasets/reuters/get_word_index.md new file mode 100644 index 00000000000..18713709f1d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/reuters/get_word_index.md @@ -0,0 +1,75 @@ +description: Retrieves a dict mapping words to their index in the Reuters dataset. + +
+ + +
+ +# tf.keras.datasets.reuters.get_word_index + + + + + + + + + +Retrieves a dict mapping words to their index in the Reuters dataset. + + + + + + + + + + + + + + + + + + + +
+`path` + +where to cache the data (relative to `~/.keras/dataset`). +
+ + + + + + + + + + + +
+The word index dictionary. Keys are word strings, values are their index. +
+ diff --git a/site/en/api_docs/python/tf/keras/datasets/reuters/load_data.md b/site/en/api_docs/python/tf/keras/datasets/reuters/load_data.md new file mode 100644 index 00000000000..9fa7f6da769 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/datasets/reuters/load_data.md @@ -0,0 +1,184 @@ +description: Loads the Reuters newswire classification dataset. + +
+ + +
+ +# tf.keras.datasets.reuters.load_data + + + + + + + + + +Loads the Reuters newswire classification dataset. + + + + + + + + + +This is a dataset of 11,228 newswires from Reuters, labeled over 46 topics. +This was originally generated by parsing and preprocessing the classic +Reuters-21578 dataset, but the preprocessing code is no longer packaged +with Keras. + +See this [github discussion](https://github.com/keras-team/keras/issues/12072) +for more info. + +Each newswire is encoded as a list of word indexes (integers). +For convenience, words are indexed by overall frequency in the dataset, +so that for instance the integer "3" encodes the 3rd most frequent word in +the data. This allows for quick filtering operations such as: +"only consider the top 10,000 most +common words, but eliminate the top 20 most common words". + +As a convention, "0" does not stand for a specific word, but instead is used +to encode any unknown word. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`path` + +where to cache the data (relative to `~/.keras/dataset`). +
+`num_words` + +integer or None. Words are +ranked by how often they occur (in the training set) and only +the `num_words` most frequent words are kept. Any less frequent word +will appear as `oov_char` value in the sequence data. If None, +all words are kept. Defaults to None, so all words are kept. +
+`skip_top` + +skip the top N most frequently occurring words +(which may not be informative). These words will appear as +`oov_char` value in the dataset. Defaults to 0, so no words are +skipped. +
+`maxlen` + +int or None. Maximum sequence length. +Any longer sequence will be truncated. Defaults to None, which +means no truncation. +
+`test_split` + +Float between 0 and 1. Fraction of the dataset to be used +as test data. Defaults to 0.2, meaning 20% of the dataset is used as +test data. +
+`seed` + +int. Seed for reproducible data shuffling. +
+`start_char` + +int. The start of a sequence will be marked with this +character. Defaults to 1 because 0 is usually the padding character. +
+`oov_char` + +int. The out-of-vocabulary character. +Words that were cut out because of the `num_words` or +`skip_top` limits will be replaced with this character. +
+`index_from` + +int. Index actual words with this index and higher. +
+`**kwargs` + +Used for backwards compatibility. +
+ + + + + + + + + + + +
+Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
+
+**x_train, x_test**: lists of sequences, which are lists of indexes
+(integers). If the num_words argument was specified, the maximum
+possible index value is num_words-1. If the `maxlen` argument was
+specified, the largest possible sequence length is `maxlen`.
+
+**y_train, y_test**: lists of integer topic labels (indices in the range
+0-45, one per newswire).
+
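+
+#### Example:
+
+A minimal usage sketch; the lengths printed below assume the default 0.2 test
+split of the 11,228 newswires:
+
+```python
+import tensorflow as tf
+
+(x_train, y_train), (x_test, y_test) = tf.keras.datasets.reuters.load_data(
+    num_words=10000, test_split=0.2)
+
+print(len(x_train), len(x_test))  # roughly an 80/20 split of the newswires
+print(max(y_train))               # topic labels cover the 46 classes (0-45)
+```
+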
+ + +Note: The 'out of vocabulary' character is only used for +words that were present in the training set but are not included +because they're not making the `num_words` cut here. +Words that were not seen in the training set but are in the test set +have simply been skipped. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/estimator.md b/site/en/api_docs/python/tf/keras/estimator.md new file mode 100644 index 00000000000..4c80441cd0e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/estimator.md @@ -0,0 +1,25 @@ +description: Keras estimator API. + +
+ + +
+ +# Module: tf.keras.estimator + + + + + + + + + +Keras estimator API. + + + +## Functions + +[`model_to_estimator(...)`](../../tf/keras/estimator/model_to_estimator.md): Constructs an `Estimator` instance from given keras model. + diff --git a/site/en/api_docs/python/tf/keras/estimator/model_to_estimator.md b/site/en/api_docs/python/tf/keras/estimator/model_to_estimator.md new file mode 100644 index 00000000000..7e59141be75 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/estimator/model_to_estimator.md @@ -0,0 +1,197 @@ +description: Constructs an Estimator instance from given keras model. + +
+ + +
+ +# tf.keras.estimator.model_to_estimator + + + + + + + + + +Constructs an `Estimator` instance from given keras model. + + + + + + + +If you use infrastructure or other tooling that relies on Estimators, you can +still build a Keras model and use model_to_estimator to convert the Keras +model to an Estimator for use with downstream systems. + +For usage example, please see: +[Creating estimators from Keras +Models](https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models). + +#### Sample Weights: + + +Estimators returned by `model_to_estimator` are configured so that they can +handle sample weights (similar to `keras_model.fit(x, y, sample_weights)`). + +To pass sample weights when training or evaluating the Estimator, the first +item returned by the input function should be a dictionary with keys +`features` and `sample_weights`. Example below: + +```python +keras_model = tf.keras.Model(...) +keras_model.compile(...) + +estimator = tf.keras.estimator.model_to_estimator(keras_model) + +def input_fn(): + return dataset_ops.Dataset.from_tensors( + ({'features': features, 'sample_weights': sample_weights}, + targets)) + +estimator.train(input_fn, steps=1) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`keras_model` + +A compiled Keras model object. This argument is mutually +exclusive with `keras_model_path`. Estimator's `model_fn` uses the +structure of the model to clone the model. Defaults to `None`. +
+`keras_model_path` + +Path to a compiled Keras model saved on disk, in HDF5 +format, which can be generated with the `save()` method of a Keras model. +This argument is mutually exclusive with `keras_model`. +Defaults to `None`. +
+
+`custom_objects`
+
+Dictionary for cloning customized objects. This is
+used with classes that are not part of this pip package. For example, if a
+user maintains a `relu6` class that inherits from tf.keras.layers.Layer,
+then pass `custom_objects={'relu6': relu6}`. Defaults to `None`.
+
+
+`model_dir`
+
+Directory to save `Estimator` model parameters, graph, summary
+files for TensorBoard, etc. If unset, a directory will be created with
+`tempfile.mkdtemp`.
+
+`config` + +`RunConfig` to config `Estimator`. Allows setting up things in +`model_fn` based on configuration such as `num_ps_replicas`, or +`model_dir`. Defaults to `None`. If both `config.model_dir` and the +`model_dir` argument (above) are specified the `model_dir` **argument** +takes precedence. +
+`checkpoint_format` + +Sets the format of the checkpoint saved by the estimator +when training. May be `saver` or `checkpoint`, depending on whether to +save checkpoints from tf.compat.v1.train.Saver or tf.train.Checkpoint. +The default is `checkpoint`. Estimators use name-based `tf.train.Saver` +checkpoints, while Keras models use object-based checkpoints from +tf.train.Checkpoint. Currently, saving object-based checkpoints from +`model_to_estimator` is only supported by Functional and Sequential +models. Defaults to 'checkpoint'. +
+ + + + + + + + + + + +
+An Estimator from given keras model. +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If neither keras_model nor keras_model_path was given. +
+
+`ValueError`
+
+If both keras_model and keras_model_path were given.
+
+`ValueError` + +If the keras_model_path is a GCS URI. +
+`ValueError` + +If keras_model has not been compiled. +
+`ValueError` + +If an invalid checkpoint_format was given. +
+ diff --git a/site/en/api_docs/python/tf/keras/experimental.md b/site/en/api_docs/python/tf/keras/experimental.md new file mode 100644 index 00000000000..893618fd5ba --- /dev/null +++ b/site/en/api_docs/python/tf/keras/experimental.md @@ -0,0 +1,43 @@ +description: Public API for tf.keras.experimental namespace. + +
+ + +
+ +# Module: tf.keras.experimental + + + + + + + + + +Public API for tf.keras.experimental namespace. + + + +## Classes + +[`class CosineDecay`](../../tf/keras/experimental/CosineDecay.md): A LearningRateSchedule that uses a cosine decay schedule. + +[`class CosineDecayRestarts`](../../tf/keras/experimental/CosineDecayRestarts.md): A LearningRateSchedule that uses a cosine decay schedule with restarts. + +[`class LinearCosineDecay`](../../tf/keras/experimental/LinearCosineDecay.md): A LearningRateSchedule that uses a linear cosine decay schedule. + +[`class LinearModel`](../../tf/keras/experimental/LinearModel.md): Linear Model for regression and classification problems. + +[`class NoisyLinearCosineDecay`](../../tf/keras/experimental/NoisyLinearCosineDecay.md): A LearningRateSchedule that uses a noisy linear cosine decay schedule. + +[`class PeepholeLSTMCell`](../../tf/keras/experimental/PeepholeLSTMCell.md): Equivalent to LSTMCell class but adds peephole connections. + +[`class SequenceFeatures`](../../tf/keras/experimental/SequenceFeatures.md): A layer for sequence input. + +[`class WideDeepModel`](../../tf/keras/experimental/WideDeepModel.md): Wide & Deep Model for regression and classification problems. + +## Functions + +[`terminate_keras_multiprocessing_pools(...)`](../../tf/keras/experimental/terminate_keras_multiprocessing_pools.md): Destroy Keras' multiprocessing pools to prevent deadlocks. + diff --git a/site/en/api_docs/python/tf/keras/experimental/CosineDecay.md b/site/en/api_docs/python/tf/keras/experimental/CosineDecay.md new file mode 100644 index 00000000000..047597c4396 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/experimental/CosineDecay.md @@ -0,0 +1,166 @@ +description: A LearningRateSchedule that uses a cosine decay schedule. + +
+ + + + + + +
+ +# tf.keras.experimental.CosineDecay + + + + + + + + + +A LearningRateSchedule that uses a cosine decay schedule. + +Inherits From: [`LearningRateSchedule`](../../../tf/keras/optimizers/schedules/LearningRateSchedule.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`initial_learning_rate` + +A scalar `float32` or `float64` Tensor or a +Python number. The initial learning rate. +
+`decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. +Number of steps to decay over. +
+`alpha` + +A scalar `float32` or `float64` Tensor or a Python number. +Minimum learning rate value as a fraction of initial_learning_rate. +
+`name` + +String. Optional name of the operation. Defaults to 'CosineDecay'. +
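+
+#### Example:
+
+A minimal usage sketch (the initial learning rate and step count are
+arbitrary); the schedule object can be passed directly as an optimizer's
+learning rate:
+
+```python
+import tensorflow as tf
+
+initial_learning_rate = 0.1
+decay_steps = 1000
+lr_schedule = tf.keras.experimental.CosineDecay(initial_learning_rate,
+                                                decay_steps)
+
+# The learning rate now follows a cosine decay over the first 1000 steps.
+optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)
+```
+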
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `LearningRateSchedule` from its config. + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `LearningRateSchedule` instance. +
+ + + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/experimental/CosineDecayRestarts.md b/site/en/api_docs/python/tf/keras/experimental/CosineDecayRestarts.md new file mode 100644 index 00000000000..cd6f843e363 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/experimental/CosineDecayRestarts.md @@ -0,0 +1,183 @@ +description: A LearningRateSchedule that uses a cosine decay schedule with restarts. + +
+ + + + + + +
+ +# tf.keras.experimental.CosineDecayRestarts + + + + + + + + + +A LearningRateSchedule that uses a cosine decay schedule with restarts. + +Inherits From: [`LearningRateSchedule`](../../../tf/keras/optimizers/schedules/LearningRateSchedule.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`initial_learning_rate` + +A scalar `float32` or `float64` Tensor or a Python +number. The initial learning rate. +
+`first_decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python +number. Number of steps to decay over. +
+`t_mul` + +A scalar `float32` or `float64` `Tensor` or a Python number. +Used to derive the number of iterations in the i-th period +
+`m_mul` + +A scalar `float32` or `float64` `Tensor` or a Python number. +Used to derive the initial learning rate of the i-th period: +
+`alpha` + +A scalar `float32` or `float64` Tensor or a Python number. +Minimum learning rate value as a fraction of the initial_learning_rate. +
+`name` + +String. Optional name of the operation. Defaults to 'SGDRDecay'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `LearningRateSchedule` from its config. + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `LearningRateSchedule` instance. +
+ + + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/experimental/LinearCosineDecay.md b/site/en/api_docs/python/tf/keras/experimental/LinearCosineDecay.md new file mode 100644 index 00000000000..27e5653b80a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/experimental/LinearCosineDecay.md @@ -0,0 +1,182 @@ +description: A LearningRateSchedule that uses a linear cosine decay schedule. + +
+ + + + + + +
+ +# tf.keras.experimental.LinearCosineDecay + + + + + + + + + +A LearningRateSchedule that uses a linear cosine decay schedule. + +Inherits From: [`LearningRateSchedule`](../../../tf/keras/optimizers/schedules/LearningRateSchedule.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`initial_learning_rate` + +A scalar `float32` or `float64` Tensor or a Python +number. The initial learning rate. +
+`decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. +Number of steps to decay over. +
+`num_periods` + +Number of periods in the cosine part of the decay. +See computation above. +
+`alpha` + +See computation above. +
+`beta` + +See computation above. +
+`name` + +String. Optional name of the operation. Defaults to +'LinearCosineDecay'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `LearningRateSchedule` from its config. + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `LearningRateSchedule` instance. +
+ + + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/experimental/LinearModel.md b/site/en/api_docs/python/tf/keras/experimental/LinearModel.md new file mode 100644 index 00000000000..e0e35e1755b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/experimental/LinearModel.md @@ -0,0 +1,2347 @@ +description: Linear Model for regression and classification problems. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.keras.experimental.LinearModel + + + + + + + + + +Linear Model for regression and classification problems. + +Inherits From: [`Model`](../../../tf/keras/Model.md) + + + + + + + + + +This model approximates the following function: +$$y = \beta + \sum_{i=1}^{N} w_{i} * x_{i}$$ +where $$\beta$$ is the bias and $$w_{i}$$ is the weight for each feature. + +#### Example: + + + +```python +model = LinearModel() +model.compile(optimizer='sgd', loss='mse') +model.fit(x, y, epochs) +``` + +This model accepts sparse float inputs as well: + +#### Example: + + +```python +model = LinearModel() +opt = tf.keras.optimizers.Adam() +loss_fn = tf.keras.losses.MeanSquaredError() +with tf.GradientTape() as tape: + output = model(sparse_input) + loss = tf.reduce_mean(loss_fn(target, output)) +grads = tape.gradient(loss, model.weights) +opt.apply_gradients(zip(grads, model.weights)) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, output dimension without the batch size. +
+`activation` + +Activation function to use. +If you don't specify anything, no activation is applied. +
+`use_bias` + +whether to calculate the bias/intercept for this model. If set +to False, no bias/intercept will be used in calculations, e.g., the data +is already centered. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrices. +
+`bias_initializer` + +Initializer for the bias vector. +
+`kernel_regularizer` + +regularizer for kernel vectors. +
+`bias_regularizer` + +regularizer for bias vector. +
+`**kwargs` + +The keyword arguments that are passed on to BaseLayer.__init__. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`distribute_strategy` + +The tf.distribute.Strategy this model was created under. +
+`layers` + + +
+`metrics_names` + +Returns the model's display labels for all outputs. + +Note: `metrics_names` are available only after a keras.Model has been +trained/evaluated on actual data. + +``` +>>> inputs = tf.keras.layers.Input(shape=(3,)) +>>> outputs = tf.keras.layers.Dense(2)(inputs) +>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs) +>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"]) +>>> model.metrics_names +[] +``` + +``` +>>> x = np.random.random((2, 3)) +>>> y = np.random.randint(0, 2, (2, 2)) +>>> _ = model.fit(x, y, verbose=0) +>>> model.metrics_names +['loss', 'mae'] +``` + +``` +>>> inputs = tf.keras.layers.Input(shape=(3,)) +>>> d = tf.keras.layers.Dense(2, name='out') +>>> output_1 = d(inputs) +>>> output_2 = d(inputs) +>>> model = tf.keras.models.Model( +... inputs=inputs, outputs=[output_1, output_2]) +>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"]) +>>> _ = model.fit(x, (y, y), verbose=0) +>>> model.metrics_names +['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae', +'out_1_acc'] +``` +
+`run_eagerly` + +Settable attribute indicating whether the model should run eagerly. + +Running eagerly means that your model will be run step by step, +like Python code. Your model might run slower, but it should become easier +for you to debug it by stepping into individual layer calls. + +By default, we will attempt to compile your model to a static graph to +deliver the best execution performance. +
+`state_updates` + +Returns the `updates` from all layers that are stateful. + +This is useful for separating training updates and +state updates, e.g. when we need to update a layer's internal state +during prediction. +
+ + + +## Methods + +

compile

+ +View source + + + +Configures the model for training. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`optimizer` + +String (name of optimizer) or optimizer instance. +See tf.keras.optimizers. +
+`loss` + +String (name of objective function), objective function or +tf.keras.losses.Loss instance. See tf.keras.losses. +An objective function is any callable with the signature +`loss = fn(y_true, y_pred)`, where +y_true = ground truth values with shape = `[batch_size, d0, .. dN]`, +except sparse loss functions such as sparse categorical crossentropy +where shape = `[batch_size, d0, .. dN-1]`. +y_pred = predicted values with shape = `[batch_size, d0, .. dN]`. +It returns a weighted loss float tensor. +If a custom `Loss` instance is used and reduction is set to NONE, +return value has the shape [batch_size, d0, .. dN-1] ie. per-sample +or per-timestep loss values; otherwise, it is a scalar. +If the model has multiple outputs, you can use a different loss on +each output by passing a dictionary or a list of losses. The loss +value that will be minimized by the model will then be the sum of +all individual losses. +
+`metrics` + +List of metrics to be evaluated by the model during training +and testing. +Each of these can be a string (name of a built-in function), function +or a tf.keras.metrics.Metric instance. See tf.keras.metrics. +Typically you will use `metrics=['accuracy']`. A function is any +callable with the signature `result = fn(y_true, y_pred)`. +To specify different metrics for different outputs of a +multi-output model, you could also pass a dictionary, such as +`metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}`. +You can also pass a list (len = len(outputs)) of lists of metrics +such as `metrics=[['accuracy'], ['accuracy', 'mse']]` or +`metrics=['accuracy', ['accuracy', 'mse']]`. +When you pass the strings 'accuracy' or 'acc', we convert this to +one of tf.keras.metrics.BinaryAccuracy, +tf.keras.metrics.CategoricalAccuracy, +tf.keras.metrics.SparseCategoricalAccuracy based on the loss +function used and the model output shape. We do a similar conversion +for the strings 'crossentropy' and 'ce' as well. +
+`loss_weights` + +Optional list or dictionary specifying scalar +coefficients (Python floats) to weight the loss contributions +of different model outputs. +The loss value that will be minimized by the model +will then be the *weighted sum* of all individual losses, +weighted by the `loss_weights` coefficients. +If a list, it is expected to have a 1:1 mapping +to the model's outputs. If a dict, it is expected to map +output names (strings) to scalar coefficients. +
+`sample_weight_mode` + +If you need to do timestep-wise +sample weighting (2D weights), set this to `"temporal"`. +`None` defaults to sample-wise weights (1D). +If the model has multiple outputs, you can use a different +`sample_weight_mode` on each output by passing a +dictionary or a list of modes. +
+`weighted_metrics` + +List of metrics to be evaluated and weighted +by sample_weight or class_weight during training and testing. +
+`**kwargs` + +Any additional arguments. For eager execution, pass +`run_eagerly=True`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid arguments for +`optimizer`, `loss`, `metrics` or `sample_weight_mode`. +
+ + + +
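+For example, a minimal `compile` call for this model might look like the
+following (the optimizer, loss and metric choices are arbitrary):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.experimental.LinearModel(units=1)
+model.compile(
+    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
+    loss='mse',
+    metrics=['mae'])
+```
+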

evaluate

+ +View source + + + +Returns the loss value & metrics values for the model in test mode. + +Computation is done in batches. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, if +the model has named inputs. - A tf.data dataset. - A generator or +keras.utils.Sequence instance. A more detailed description of +unpacking behavior for iterator types (Dataset, generator, Sequence) +is given in the `Unpacking behavior for iterator-like inputs` section +of Model.fit. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). If +`x` is a dataset, generator or keras.utils.Sequence instance, `y` +should not be specified (since targets will be obtained from the +iterator/dataset). +
+`batch_size` + +Integer or `None`. Number of samples per gradient update. If +unspecified, `batch_size` will default to 32. Do not specify the +`batch_size` if your data is in the form of a dataset, generators, +or keras.utils.Sequence instances (since they generate batches). +
+`verbose` + +0 or 1. Verbosity mode. 0 = silent, 1 = progress bar. +
+`sample_weight` + +Optional Numpy array of weights for the test samples, +used for weighting the loss function. You can either pass a flat (1D) +Numpy array with the same length as the input samples +(1:1 mapping between weights and samples), or in the case of +temporal data, you can pass a 2D array with shape `(samples, +sequence_length)`, to apply a different weight to every timestep +of every sample. In this case you should make sure to specify +`sample_weight_mode="temporal"` in `compile()`. This argument is +not supported when `x` is a dataset, instead pass sample weights +as the third element of `x`. +
+`steps` + +Integer or `None`. Total number of steps (batches of samples) +before declaring the evaluation round finished. Ignored with the +default value of `None`. If x is a tf.data dataset and `steps` is +None, 'evaluate' will run until the dataset is exhausted. This +argument is not supported with array inputs. +
+`callbacks` + +List of keras.callbacks.Callback instances. List of +callbacks to apply during evaluation. See +[callbacks](/api_docs/python/tf/keras/callbacks). +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. If unspecified, +`max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up when using process-based +threading. If unspecified, `workers` will default to 1. If 0, will +execute the generator on the main thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to the +generator as they can't be passed easily to children processes. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + +See the discussion of `Unpacking behavior for iterator-like inputs` for +Model.fit. + + + + + + + + + +
Returns
+Scalar test loss (if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +in case of invalid arguments. +
+ + + +
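+A minimal `evaluate` sketch, assuming a compiled model and random NumPy data:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.experimental.LinearModel(units=1)
+model.compile(optimizer='sgd', loss='mse', metrics=['mae'])
+
+x = np.random.random((64, 3)).astype('float32')
+y = np.random.random((64, 1)).astype('float32')
+model.fit(x, y, epochs=2, verbose=0)
+
+# A list [loss, mae] by default; pass return_dict=True for a dict instead.
+results = model.evaluate(x, y, batch_size=32, verbose=0)
+```
+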

evaluate_generator

+ +View source + + + +Evaluates the model on a data generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.evaluate, which supports generators. + +#### DEPRECATED: + +Model.evaluate now supports generators, so there is no longer any need +to use this endpoint. + + +

fit

+ +View source + + + +Trains the model for a fixed number of epochs (iterations on a dataset). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, +if the model has named inputs. +- A tf.data dataset. Should return a tuple +of either `(inputs, targets)` or +`(inputs, targets, sample_weights)`. +- A generator or keras.utils.Sequence returning `(inputs, targets)` +or `(inputs, targets, sample_weights)`. +A more detailed description of unpacking behavior for iterator types +(Dataset, generator, Sequence) is given below. +
+`y` + +Target data. Like the input data `x`, +it could be either Numpy array(s) or TensorFlow tensor(s). +It should be consistent with `x` (you cannot have Numpy inputs and +tensor targets, or inversely). If `x` is a dataset, generator, +or keras.utils.Sequence instance, `y` should +not be specified (since targets will be obtained from `x`). +
+`batch_size` + +Integer or `None`. +Number of samples per gradient update. +If unspecified, `batch_size` will default to 32. +Do not specify the `batch_size` if your data is in the +form of datasets, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`epochs` + +Integer. Number of epochs to train the model. +An epoch is an iteration over the entire `x` and `y` +data provided. +Note that in conjunction with `initial_epoch`, +`epochs` is to be understood as "final epoch". +The model is not trained for a number of iterations +given by `epochs`, but merely until the epoch +of index `epochs` is reached. +
+`verbose` + +0, 1, or 2. Verbosity mode. +0 = silent, 1 = progress bar, 2 = one line per epoch. +Note that the progress bar is not particularly useful when +logged to a file, so verbose=2 is recommended when not running +interactively (eg, in a production environment). +
+`callbacks` + +List of keras.callbacks.Callback instances. +List of callbacks to apply during training. +See tf.keras.callbacks. +
+`validation_split` + +Float between 0 and 1. +Fraction of the training data to be used as validation data. +The model will set apart this fraction of the training data, +will not train on it, and will evaluate +the loss and any model metrics +on this data at the end of each epoch. +The validation data is selected from the last samples +in the `x` and `y` data provided, before shuffling. This argument is +not supported when `x` is a dataset, generator or +keras.utils.Sequence instance. +
+`validation_data` + +Data on which to evaluate +the loss and any model metrics at the end of each epoch. +The model will not be trained on this data. +`validation_data` will override `validation_split`. +`validation_data` could be: +- tuple `(x_val, y_val)` of Numpy arrays or tensors +- tuple `(x_val, y_val, val_sample_weights)` of Numpy arrays +- dataset +For the first two cases, `batch_size` must be provided. +For the last case, `validation_steps` could be provided. +Note that `validation_data` does not support all the data types that +are supported in `x`, eg, dict, generator or keras.utils.Sequence. +
+`shuffle` + +Boolean (whether to shuffle the training data +before each epoch) or str (for 'batch'). This argument is ignored +when `x` is a generator. 'batch' is a special option for dealing +with the limitations of HDF5 data; it shuffles in batch-sized +chunks. Has no effect when `steps_per_epoch` is not `None`. +
+`class_weight` + +Optional dictionary mapping class indices (integers) +to a weight (float) value, used for weighting the loss function +(during training only). +This can be useful to tell the model to +"pay more attention" to samples from +an under-represented class. +
+`sample_weight` + +Optional Numpy array of weights for +the training samples, used for weighting the loss function +(during training only). You can either pass a flat (1D) +Numpy array with the same length as the input samples +(1:1 mapping between weights and samples), +or in the case of temporal data, +you can pass a 2D array with shape +`(samples, sequence_length)`, +to apply a different weight to every timestep of every sample. +In this case you should make sure to specify +`sample_weight_mode="temporal"` in `compile()`. This argument is not +supported when `x` is a dataset, generator, or +keras.utils.Sequence instance, instead provide the sample_weights +as the third element of `x`. +
+`initial_epoch` + +Integer. +Epoch at which to start training +(useful for resuming a previous training run). +
+`steps_per_epoch` + +Integer or `None`. +Total number of steps (batches of samples) +before declaring one epoch finished and starting the +next epoch. When training with input tensors such as +TensorFlow data tensors, the default `None` is equal to +the number of samples in your dataset divided by +the batch size, or 1 if that cannot be determined. If x is a +tf.data dataset, and 'steps_per_epoch' +is None, the epoch will run until the input dataset is exhausted. +When passing an infinitely repeating dataset, you must specify the +`steps_per_epoch` argument. This argument is not supported with +array inputs. +
+`validation_steps` + +Only relevant if `validation_data` is provided and +is a tf.data dataset. Total number of steps (batches of +samples) to draw before stopping when performing validation +at the end of every epoch. If 'validation_steps' is None, validation +will run until the `validation_data` dataset is exhausted. In the +case of an infinitely repeated dataset, it will run into an +infinite loop. If 'validation_steps' is specified and only part of +the dataset will be consumed, the evaluation will start from the +beginning of the dataset at each epoch. This ensures that the same +validation samples are used every time. +
+`validation_batch_size` + +Integer or `None`. +Number of samples per validation batch. +If unspecified, will default to `batch_size`. +Do not specify the `validation_batch_size` if your data is in the +form of datasets, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`validation_freq` + +Only relevant if validation data is provided. Integer +or `collections_abc.Container` instance (e.g. list, tuple, etc.). +If an integer, specifies how many training epochs to run before a +new validation run is performed, e.g. `validation_freq=2` runs +validation every 2 epochs. If a Container, specifies the epochs on +which to run validation, e.g. `validation_freq=[1, 2, 10]` runs +validation at the end of the 1st, 2nd, and 10th epochs. +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. +If unspecified, `max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up +when using process-based threading. If unspecified, `workers` +will default to 1. If 0, will execute the generator on the main +thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to +the generator as they can't be passed easily to children processes. +
+ + +Unpacking behavior for iterator-like inputs: + A common pattern is to pass a tf.data.Dataset, generator, or + tf.keras.utils.Sequence to the `x` argument of fit, which will in fact + yield not only features (x) but optionally targets (y) and sample weights. + Keras requires that the output of such iterator-likes be unambiguous. The + iterator should return a tuple of length 1, 2, or 3, where the optional + second and third elements will be used for y and sample_weight + respectively. Any other type provided will be wrapped in a length one + tuple, effectively treating everything as 'x'. When yielding dicts, they + should still adhere to the top-level tuple structure. + e.g. `({"x0": x0, "x1": x1}, y)`. Keras will not attempt to separate + features, targets, and weights from the keys of a single dict. + A notable unsupported data type is the namedtuple. The reason is that + it behaves like both an ordered datatype (tuple) and a mapping + datatype (dict). So given a namedtuple of the form: + `namedtuple("example_tuple", ["y", "x"])` + it is ambiguous whether to reverse the order of the elements when + interpreting the value. Even worse is a tuple of the form: + `namedtuple("other_tuple", ["x", "y", "z"])` + where it is unclear if the tuple was intended to be unpacked into x, y, + and sample_weight or passed through as a single element to `x`. As a + result the data processing code will simply raise a ValueError if it + encounters a namedtuple. (Along with instructions to remedy the issue.) + + + + + + + + + +
Returns
+A `History` object. Its `History.history` attribute is +a record of training loss values and metrics values +at successive epochs, as well as validation loss values +and validation metrics values (if applicable). +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If the model was never compiled. +
+`ValueError` + +In case of mismatch between the provided input data +and what the model expects. +
+ + + +
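+For example, a minimal training run over random NumPy data with a validation
+split might look like the following (all values are arbitrary):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.experimental.LinearModel(units=1)
+model.compile(optimizer='sgd', loss='mse', metrics=['mae'])
+
+x = np.random.random((100, 3)).astype('float32')
+y = np.random.random((100, 1)).astype('float32')
+
+history = model.fit(x, y, batch_size=16, epochs=5,
+                    validation_split=0.2, verbose=0)
+# history.history maps metric names to per-epoch values,
+# e.g. history.history['loss'] and history.history['val_loss'].
+```
+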

fit_generator

+ +View source + + + +Fits the model on data yielded batch-by-batch by a Python generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.fit, which supports generators. + +#### DEPRECATED: + +Model.fit now supports generators, so there is no longer any need to use +this endpoint. + + +

get_layer

+ +View source + + + +Retrieves a layer based on either its name (unique) or index. + +If `name` and `index` are both provided, `index` will take precedence. +Indices are based on order of horizontal graph traversal (bottom-up). + + + + + + + + + + + + + +
Arguments
+`name` + +String, name of layer. +
+`index` + +Integer, index of layer. +
+ + + + + + + + + + + +
Returns
+A layer instance. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid layer name or index. +
+ + + +

load_weights

+ +View source + + + +Loads all layer weights, either from a TensorFlow or an HDF5 weight file. + +If `by_name` is False weights are loaded based on the network's +topology. This means the architecture should be the same as when the weights +were saved. Note that layers that don't have weights are not taken into +account in the topological ordering, so adding or removing layers is fine as +long as they don't have weights. + +If `by_name` is True, weights are loaded into layers only if they share the +same name. This is useful for fine-tuning or transfer-learning models where +some of the layers have changed. + +Only topological loading (`by_name=False`) is supported when loading weights +from the TensorFlow format. Note that topological loading differs slightly +between TensorFlow and HDF5 formats for user-defined classes inheriting from +tf.keras.Model: HDF5 loads based on a flattened list of weights, while the +TensorFlow format loads based on the object-local names of attributes to +which layers are assigned in the `Model`'s constructor. + + + + + + + + + + + + + + + + +
Arguments
+`filepath` + +String, path to the weights file to load. For weight files in +TensorFlow format, this is the file prefix (the same as was passed +to `save_weights`). +
+`by_name` + +Boolean, whether to load weights by name or by topological +order. Only topological loading is supported for weight files in +TensorFlow format. +
+`skip_mismatch` + +Boolean, whether to skip loading of layers where there is +a mismatch in the number of weights, or a mismatch in the shape of +the weight (only valid when `by_name=True`). +
+ + + + + + + + + + + +
Returns
+When loading a weight file in TensorFlow format, returns the same status +object as tf.train.Checkpoint.restore. When graph building, restore +ops are run automatically as soon as the network is built (on first call +for user-defined classes inheriting from `Model`, immediately if it is +already built). + +When loading weights in HDF5 format, returns `None`. +
+ + + + + + + + + + + + + + + +
Raises
+`ImportError` + +If h5py is not available and the weight file is in HDF5 +format. +
+`ValueError` + +If `skip_mismatch` is set to `True` when `by_name` is +`False`. +
+ + + +

make_predict_function

+ +View source + + + +Creates a function that executes one step of inference. + +This method can be overridden to support custom inference logic. +This method is called by Model.predict and Model.predict_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual inference +logic to Model.predict_step. + +This function is cached the first time Model.predict or +Model.predict_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + 
Returns
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return the outputs of the `Model`. +
+ + + +

make_test_function

+ +View source + + + +Creates a function that executes one step of evaluation. + +This method can be overridden to support custom evaluation logic. +This method is called by Model.evaluate and Model.test_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual evaluation +logic to Model.test_step. + +This function is cached the first time Model.evaluate or +Model.test_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return a `dict` containing values that will +be passed to `tf.keras.Callbacks.on_test_batch_end`. +
+ + + +

make_train_function

+ +View source + + + +Creates a function that executes one step of training. + +This method can be overridden to support custom training logic. +This method is called by Model.fit and Model.train_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual training +logic to Model.train_step. + +This function is cached the first time Model.fit or +Model.train_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return a `dict` containing values that will +be passed to `tf.keras.Callbacks.on_train_batch_end`, such as +`{'loss': 0.2, 'accuracy': 0.7}`. +
+ + + +

predict

+ +View source + + + +Generates output predictions for the input samples. + +Computation is done in batches. This method is designed for performance on +large-scale inputs. For a small number of inputs that fit in one batch, +directly using `__call__` is recommended for faster execution, e.g., +`model(x)`, or `model(x, training=False)` if you have layers such as +tf.keras.layers.BatchNormalization that behave differently during +inference. + + + + + + + + + + + + + + + 
Arguments
+`x` + +Input samples. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A tf.data dataset. +- A generator or keras.utils.Sequence instance. +A more detailed description of unpacking behavior for iterator types +(Dataset, generator, Sequence) is given in the `Unpacking behavior +for iterator-like inputs` section of `Model.fit`. +
+`batch_size` + +Integer or `None`. +Number of samples per batch. +If unspecified, `batch_size` will default to 32. +Do not specify the `batch_size` if your data is in the +form of dataset, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`verbose` + +Verbosity mode, 0 or 1. +
+`steps` + +Total number of steps (batches of samples) +before declaring the prediction round finished. +Ignored with the default value of `None`. If x is a tf.data +dataset and `steps` is None, `predict` will +run until the input dataset is exhausted. +
+`callbacks` + +List of keras.callbacks.Callback instances. +List of callbacks to apply during prediction. +See [callbacks](/api_docs/python/tf/keras/callbacks). +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. +If unspecified, `max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up when using +process-based threading. If unspecified, `workers` will default +to 1. If 0, will execute the generator on the main thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to +the generator as they can't be passed easily to children processes. +
+ + +See the discussion of `Unpacking behavior for iterator-like inputs` for +Model.fit. Note that Model.predict uses the same interpretation rules as +Model.fit and Model.evaluate, so inputs must be unambiguous for all +three methods. + + + + + + + + + +
Returns
+Numpy array(s) of predictions. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of mismatch between the provided +input data and the model's expectations, +or in case a stateful model receives a number of samples +that is not a multiple of the batch size. +
+ + + +
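+A minimal `predict` sketch over random NumPy data (the batch size is
+arbitrary):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.experimental.LinearModel(units=1)
+x = np.random.random((10, 3)).astype('float32')
+
+# For large inputs, predict batches the computation; for a handful of
+# samples, calling the model directly (model(x)) is usually faster.
+predictions = model.predict(x, batch_size=4)
+print(predictions.shape)  # (10, 1)
+```
+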

predict_generator

+ +View source + + + +Generates predictions for the input samples from a data generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.predict, which supports generators. + +#### DEPRECATED: + +Model.predict now supports generators, so there is no longer any need +to use this endpoint. + + +

predict_on_batch

+ +View source + + + +Returns predictions for a single batch of samples. + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +
+ + + + + + + + + + + +
Returns
+Numpy array(s) of predictions. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of mismatch between given number of inputs and +expectations of the model. +
+ + + +

predict_step

+ +View source + + + +The logic for one inference step. + +This method can be overridden to support custom inference logic. +This method is called by Model.make_predict_function. + +This method should contain the mathematical logic for one step of inference. +This typically includes the forward pass. + +Configuration details for *how* this logic is run (e.g. tf.function and +tf.distribute.Strategy settings) should be left to +Model.make_predict_function, which can also be overridden. + + + + + + + + + + 
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+The result of one inference step, typically the output of calling the +`Model` on data. +
+ + + +

reset_metrics

+ +View source + + + +Resets the state of metrics. + + +
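+For example, when driving training with `train_on_batch` and
+`reset_metrics=False`, the accumulated metrics are typically cleared at epoch
+boundaries (illustrative sketch with arbitrary data):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.experimental.LinearModel(units=1)
+model.compile(optimizer='sgd', loss='mse', metrics=['mae'])
+
+x = np.random.random((32, 3)).astype('float32')
+y = np.random.random((32, 1)).astype('float32')
+
+for epoch in range(3):
+    for start in range(0, len(x), 8):
+        model.train_on_batch(x[start:start + 8], y[start:start + 8],
+                             reset_metrics=False)
+    # Metrics accumulated over the epoch; clear them before the next one.
+    model.reset_metrics()
+```
+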

reset_states

+ +View source + + + + + + +

save

+ +View source + + + +Saves the model to TensorFlow SavedModel or a single HDF5 file. + + +#### The savefile includes: + +- The model architecture, allowing the model to be re-instantiated. +- The model weights. +- The state of the optimizer, allowing training to resume + exactly where you left off. + + +This allows you to save the entirety of the state of a model +in a single file. + +Saved models can be reinstantiated via keras.models.load_model. +The model returned by `load_model` is a compiled model ready to be used +(unless the saved model was never compiled in the first place). + +Models built with the Sequential and Functional API can be saved to both the +HDF5 and SavedModel formats. Subclassed models can only be saved with the +SavedModel format. + +Note that the model weights may have different scoped names after being +loaded. Scoped names include the model/layer names, such as +`"dense_1/kernel:0"`. It is recommended that you use the layer properties to + access specific variables, e.g. `model.get_layer("dense_1").kernel`. + + + + + + + + + + + + + + + + + + + + + + + + + 
Arguments
+`filepath` + +String, path to SavedModel or H5 file to save the model. +
+`overwrite` + +Whether to silently overwrite any existing file at the +target location, or provide the user with a manual prompt. +
+`include_optimizer` + +If True, save optimizer's state together. +
+`save_format` + +Either 'tf' or 'h5', indicating whether to save the model +to TensorFlow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and +'h5' in TF 1.X. +
+`signatures` + +Signatures to save with the SavedModel. Applicable to the +'tf' format only. Please see the `signatures` argument in +tf.saved_model.save for details. +
+`options` + +Optional tf.saved_model.SaveOptions object that specifies +options for saving to SavedModel. +
+ + + +#### Example: + + + +```python +from keras.models import load_model + +model.save('my_model.h5') # creates a HDF5 file 'my_model.h5' +del model # deletes the existing model + +# returns a compiled model +# identical to the previous one +model = load_model('my_model.h5') +``` + +

save_weights

+ +View source + + + +Saves all layer weights. + +Either saves in HDF5 or in TensorFlow format based on the `save_format` +argument. + +When saving in HDF5 format, the weight file has: + - `layer_names` (attribute), a list of strings + (ordered names of model layers). + - For every layer, a `group` named `layer.name` + - For every such layer group, a group attribute `weight_names`, + a list of strings + (ordered names of weights tensor of the layer). + - For every weight in the layer, a dataset + storing the weight value, named after the weight tensor. + +When saving in TensorFlow format, all objects referenced by the network are +saved in the same format as tf.train.Checkpoint, including any `Layer` +instances or `Optimizer` instances assigned to object attributes. For +networks constructed from inputs and outputs using `tf.keras.Model(inputs, +outputs)`, `Layer` instances used by the network are tracked/saved +automatically. For user-defined classes which inherit from tf.keras.Model, +`Layer` instances must be assigned to object attributes, typically in the +constructor. See the documentation of tf.train.Checkpoint and +tf.keras.Model for details. + +While the formats are the same, do not mix `save_weights` and +tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be +loaded using Model.load_weights. Checkpoints saved using +tf.train.Checkpoint.save should be restored using the corresponding +tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over +`save_weights` for training checkpoints. + +The TensorFlow format matches objects and variables by starting at a root +object, `self` for `save_weights`, and greedily matching attribute +names. For Model.save this is the `Model`, and for Checkpoint.save this +is the `Checkpoint` even if the `Checkpoint` has a model attached. This +means saving a tf.keras.Model using `save_weights` and loading into a +tf.train.Checkpoint with a `Model` attached (or vice versa) will not match +the `Model`'s variables. See the [guide to training +checkpoints](https://www.tensorflow.org/guide/checkpoint) for details +on the TensorFlow format. + + + + + + + + + + + + + + + + +
Arguments
+`filepath` + +String, path to the file to save the weights to. When saving +in TensorFlow format, this is the prefix used for checkpoint files +(multiple files are generated). Note that the '.h5' suffix causes +weights to be saved in HDF5 format. +
+`overwrite` + +Whether to silently overwrite any existing file at the +target location, or provide the user with a manual prompt. +
+`save_format` + +Either 'tf' or 'h5'. A `filepath` ending in '.h5' or +'.keras' will default to HDF5 if `save_format` is `None`. Otherwise +`None` defaults to 'tf'. +
+ + + + + + + + + + + + + + + +
Raises
+`ImportError` + +If h5py is not available when attempting to save in HDF5 +format. +
+`ValueError` + +For invalid/unknown format arguments. +
+ + + +
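+A minimal save/restore round trip in the TensorFlow checkpoint format (the
+path prefix is arbitrary):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.experimental.LinearModel(units=1)
+# Build the variables by calling the model once.
+model(tf.zeros((1, 3)))
+
+# Without a '.h5' suffix this writes TensorFlow-format checkpoint files
+# using './linear_ckpt' as the prefix.
+model.save_weights('./linear_ckpt')
+
+restored = tf.keras.experimental.LinearModel(units=1)
+restored(tf.zeros((1, 3)))
+status = restored.load_weights('./linear_ckpt')
+```
+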

summary

+ +View source + + + +Prints a string summary of the network. + + + + + + + + + + + + + + + + + +
Arguments
+`line_length` + +Total length of printed lines +(e.g. set this to adapt the display to different +terminal window sizes). +
+`positions` + +Relative or absolute positions of log elements +in each line. If not provided, +defaults to `[.33, .55, .67, 1.]`. +
+`print_fn` + +Print function to use. Defaults to `print`. +It will be called on each line of the summary. +You can set it to a custom function +in order to capture the string summary. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `summary()` is called before the model is built. +
+ + + +
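+For example, `print_fn` can be used to capture the summary as a string rather
+than printing it:
+
+```python
+import tensorflow as tf
+
+model = tf.keras.experimental.LinearModel(units=1)
+model(tf.zeros((1, 3)))  # build the model so summary() works
+
+lines = []
+model.summary(print_fn=lines.append)  # each summary line is appended to `lines`
+summary_text = '\n'.join(lines)
+```
+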

test_on_batch

+ +View source + + + +Test the model on a single batch of samples. + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, if +the model has named inputs. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). +
+`sample_weight` + +Optional array of the same length as x, containing +weights to apply to the model's loss for each sample. In the case of +temporal data, you can pass a 2D array with shape (samples, +sequence_length), to apply a different weight to every timestep of +every sample. In this case you should make sure to specify +sample_weight_mode="temporal" in compile(). +
+`reset_metrics` + +If `True`, the metrics returned will be only for this +batch. If `False`, the metrics will be statefully accumulated across +batches. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + + + + + + + + + + +
Returns
+Scalar test loss (if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid user-provided arguments. +
+ + + +

test_step

+ +View source + + + +The logic for one evaluation step. + +This method can be overridden to support custom evaluation logic. +This method is called by Model.make_test_function. + +This method should contain the mathematical logic for one step of +evaluation. +This typically includes the forward pass, loss calculation, and metrics +updates. + +Configuration details for *how* this logic is run (e.g. tf.function and +tf.distribute.Strategy settings) should be left to +Model.make_test_function, which can also be overridden. + + + + + + + + + + 
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+A `dict` containing values that will be passed to +`tf.keras.callbacks.CallbackList.on_test_batch_end`. Typically, the +values of the `Model`'s metrics are returned. +
+ + + +

to_json

+ +View source + + + +Returns a JSON string containing the network configuration. + +To load a network from a JSON save file, use +keras.models.model_from_json(json_string, custom_objects={}). + + + + + + + + + + +
Arguments
+`**kwargs` + +Additional keyword arguments +to be passed to `json.dumps()`. +
+ + + + + + + + + + + +
Returns
+A JSON string. +
+ + + +

to_yaml

+ +View source + + + +Returns a yaml string containing the network configuration. + +To load a network from a yaml save file, use +keras.models.model_from_yaml(yaml_string, custom_objects={}). + +`custom_objects` should be a dictionary mapping +the names of custom losses / layers / etc to the corresponding +functions / classes. + + + + + + + + + + +
Arguments
+`**kwargs` + +Additional keyword arguments +to be passed to `yaml.dump()`. +
+ + + + + + + + + + + +
Returns
+A YAML string. +
+ + + + + + + + + + + + +
Raises
+`ImportError` + +if yaml module is not found. +
+ + + +

train_on_batch

+ +View source + + + +Runs a single gradient update on a single batch of data. + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, +if the model has named inputs. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). +
+`sample_weight` + +Optional array of the same length as x, containing +weights to apply to the model's loss for each sample. In the case of +temporal data, you can pass a 2D array with shape (samples, +sequence_length), to apply a different weight to every timestep of +every sample. In this case you should make sure to specify +sample_weight_mode="temporal" in compile(). +
+`class_weight` + +Optional dictionary mapping class indices (integers) to a +weight (float) to apply to the model's loss for the samples from this +class during training. This can be useful to tell the model to "pay +more attention" to samples from an under-represented class. +
+`reset_metrics` + +If `True`, the metrics returned will be only for this +batch. If `False`, the metrics will be statefully accumulated across +batches. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + + + + + + + + + + +
Returns
+Scalar training loss +(if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid user-provided arguments. +
+ + + +

train_step

+ +View source + + + +The logic for one training step. + +This method can be overridden to support custom training logic. +This method is called by Model.make_train_function. + +This method should contain the mathematical logic for one step of training. +This typically includes the forward pass, loss calculation, backpropagation, +and metric updates. + +Configuration details for *how* this logic is run (e.g. tf.function and +tf.distribute.Strategy settings) should be left to +Model.make_train_function, which can also be overridden. + + + + + + + + + + 
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+A `dict` containing values that will be passed to +`tf.keras.callbacks.CallbackList.on_train_batch_end`. Typically, the +values of the `Model`'s metrics are returned. Example: +`{'loss': 0.2, 'accuracy': 0.7}`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/experimental/NoisyLinearCosineDecay.md b/site/en/api_docs/python/tf/keras/experimental/NoisyLinearCosineDecay.md new file mode 100644 index 00000000000..17dd08ec9a1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/experimental/NoisyLinearCosineDecay.md @@ -0,0 +1,196 @@ +description: A LearningRateSchedule that uses a noisy linear cosine decay schedule. + +
+ + + + + + +
+ +# tf.keras.experimental.NoisyLinearCosineDecay + + + + + + + + + +A LearningRateSchedule that uses a noisy linear cosine decay schedule. + +Inherits From: [`LearningRateSchedule`](../../../tf/keras/optimizers/schedules/LearningRateSchedule.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`initial_learning_rate` + +A scalar `float32` or `float64` Tensor or a Python +number. The initial learning rate. +
+`decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. +Number of steps to decay over. +
+`initial_variance` + +initial variance for the noise. See computation above. +
+`variance_decay` + +decay for the noise's variance. See computation above. +
+`num_periods` + +Number of periods in the cosine part of the decay. +See computation above. +
+`alpha` + +See computation above. +
+`beta` + +See computation above. +
+`name` + +String. Optional name of the operation. Defaults to +'NoisyLinearCosineDecay'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `LearningRateSchedule` from its config. + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `LearningRateSchedule` instance. +
+ + + +
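+For reference, a minimal sketch of constructing this schedule directly and
+passing it to an optimizer (hyperparameter values are arbitrary;
+`from_config` rebuilds an equivalent schedule from `get_config()`):
+
+```python
+import tensorflow as tf
+
+schedule = tf.keras.experimental.NoisyLinearCosineDecay(
+    initial_learning_rate=0.1, decay_steps=1000,
+    initial_variance=1.0, variance_decay=0.55)
+optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)
+
+# The schedule itself is a callable mapping a step to a learning rate.
+lr_at_step_100 = schedule(100)
+```
+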

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/experimental/PeepholeLSTMCell.md b/site/en/api_docs/python/tf/keras/experimental/PeepholeLSTMCell.md new file mode 100644 index 00000000000..487d9ffa6d0 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/experimental/PeepholeLSTMCell.md @@ -0,0 +1,264 @@ +description: Equivalent to LSTMCell class but adds peephole connections. + +
+ + + + + + + + + +
+ +# tf.keras.experimental.PeepholeLSTMCell + + + + + + + + + +Equivalent to LSTMCell class but adds peephole connections. + +Inherits From: [`LSTMCell`](../../../tf/compat/v1/keras/layers/LSTMCell.md) + + + + + + + + + +Peephole connections allow the gates to utilize the previous internal state as +well as the previous hidden state (which is what LSTMCell is limited to). +This allows PeepholeLSTMCell to better learn precise timings over LSTMCell. + +From [Gers et al.](http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf): + +"We find that LSTM augmented by 'peephole connections' from its internal +cells to its multiplicative gates can learn the fine distinction between +sequences of spikes spaced either 50 or 49 time steps apart without the help +of any short training exemplars." + +The peephole implementation is based on: + +[Long short-term memory recurrent neural network architectures for + large scale acoustic modeling. +](https://research.google.com/pubs/archive/43905.pdf) + +#### Example: + + + +```python +# Create 2 PeepholeLSTMCells +peephole_lstm_cells = [PeepholeLSTMCell(size) for size in [128, 256]] +# Create a layer composed sequentially of the peephole LSTM cells. +layer = RNN(peephole_lstm_cells) +input = keras.Input((timesteps, input_dim)) +output = layer(input) +``` + +## Methods + +

get_dropout_mask_for_cell

+ +View source + + + +Get the dropout mask for the RNN cell's input. + +It will create a mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + 
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it's in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. It is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, generated or cached based on context. +
+ + + +

get_initial_state

+ +View source + + + + + + +

get_recurrent_dropout_mask_for_cell

+ +View source + + + +Get the recurrent dropout mask for the RNN cell. + +It will create a mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + 
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it's in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. It is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, generated or cached based on context. +
+ + + +

reset_dropout_mask

+ +View source + + + +Reset the cached dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling cell.call(). The mask +should be cached across timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + 

reset_recurrent_dropout_mask

+ +View source + + + +Reset the cached recurrent dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling cell.call(). The mask +should be cached across timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + + + diff --git a/site/en/api_docs/python/tf/keras/experimental/SequenceFeatures.md b/site/en/api_docs/python/tf/keras/experimental/SequenceFeatures.md new file mode 100644 index 00000000000..8974bec01f2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/experimental/SequenceFeatures.md @@ -0,0 +1,140 @@ +description: A layer for sequence input. + +
+ + + + +
+ +# tf.keras.experimental.SequenceFeatures + + + + + + + + + +A layer for sequence input. + + + + + + + + + +All `feature_columns` must be sequence dense columns with the same +`sequence_length`. The output of this method can be fed into sequence +networks, such as RNN. + +The output of this method is a 3D `Tensor` of shape `[batch_size, T, D]`. +`T` is the maximum sequence length for this batch, which could differ from +batch to batch. + +If multiple `feature_columns` are given with `Di` `num_elements` each, their +outputs are concatenated. So, the final `Tensor` has shape +`[batch_size, T, D0 + D1 + ... + Dn]`. + +#### Example: + + + +```python +rating = sequence_numeric_column('rating') +watches = sequence_categorical_column_with_identity( + 'watches', num_buckets=1000) +watches_embedding = embedding_column(watches, dimension=10) +columns = [rating, watches_embedding] + +sequence_input_layer = SequenceFeatures(columns) +features = tf.io.parse_example(..., + features=make_parse_example_spec(columns)) +sequence_input, sequence_length = sequence_input_layer(features) +sequence_length_mask = tf.sequence_mask(sequence_length) + +rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size) +rnn_layer = tf.keras.layers.RNN(rnn_cell) +outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask) +``` + + + + + + + + + + + + + + + + + + + +
+`feature_columns` + +An iterable of dense sequence columns. Valid columns are +- `embedding_column` that wraps a `sequence_categorical_column_with_*` +- `sequence_numeric_column`. +
+`trainable` + +Boolean, whether the layer's variables will be updated via +gradient descent during training. +
+`name` + +Name to give to the SequenceFeatures. +
+`**kwargs` + +Keyword arguments to construct a layer. +
+ + + + + + + + + + + + +
+`ValueError` + +If any of the `feature_columns` is not a +`SequenceDenseColumn`. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/experimental/WideDeepModel.md b/site/en/api_docs/python/tf/keras/experimental/WideDeepModel.md new file mode 100644 index 00000000000..79118b4de09 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/experimental/WideDeepModel.md @@ -0,0 +1,2327 @@ +description: Wide & Deep Model for regression and classification problems. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.keras.experimental.WideDeepModel + + + + + + + + + +Wide & Deep Model for regression and classification problems. + +Inherits From: [`Model`](../../../tf/keras/Model.md) + + + + + + + + + +This model jointly trains a linear and a dnn model. + +#### Example: + + + +```python +linear_model = LinearModel() +dnn_model = keras.Sequential([keras.layers.Dense(units=64), + keras.layers.Dense(units=1)]) +combined_model = WideDeepModel(linear_model, dnn_model) +combined_model.compile(optimizer=['sgd', 'adam'], loss='mse', metrics=['mse']) +# define dnn_inputs and linear_inputs as separate numpy arrays or +# a single numpy array if dnn_inputs is same as linear_inputs. +combined_model.fit([linear_inputs, dnn_inputs], y, epochs=epochs) +# or define a single `tf.data.Dataset` that contains a single tensor or +# separate tensors for dnn_inputs and linear_inputs. +dataset = tf.data.Dataset.from_tensors(([linear_inputs, dnn_inputs], y)) +combined_model.fit(dataset, epochs=epochs) +``` + +Both the linear and the dnn model can be pre-compiled and trained separately +before joint training: + +#### Example: + + +```python +linear_model = LinearModel() +linear_model.compile('adagrad', 'mse') +linear_model.fit(linear_inputs, y, epochs=epochs) +dnn_model = keras.Sequential([keras.layers.Dense(units=1)]) +dnn_model.compile('rmsprop', 'mse') +dnn_model.fit(dnn_inputs, y, epochs=epochs) +combined_model = WideDeepModel(linear_model, dnn_model) +combined_model.compile(optimizer=['sgd', 'adam'], loss='mse', metrics=['mse']) +combined_model.fit([linear_inputs, dnn_inputs], y, epochs=epochs) +``` + + + + + + + + + + + + + + + + + + + 
+`linear_model` + +a premade LinearModel, its output must match the output of +the dnn model. +
+`dnn_model` + +a tf.keras.Model, its output must match the output of the +linear model. +
+`activation` + +Activation function. Set it to None to maintain a linear +activation. +
+`**kwargs` + +The keyword arguments that are passed on to BaseLayer.__init__. +Allowed keyword arguments include `name`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`distribute_strategy` + +The tf.distribute.Strategy this model was created under. +
+`layers` + + +
+`metrics_names` + +Returns the model's display labels for all outputs. + +Note: `metrics_names` are available only after a keras.Model has been +trained/evaluated on actual data. + +``` +>>> inputs = tf.keras.layers.Input(shape=(3,)) +>>> outputs = tf.keras.layers.Dense(2)(inputs) +>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs) +>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"]) +>>> model.metrics_names +[] +``` + +``` +>>> x = np.random.random((2, 3)) +>>> y = np.random.randint(0, 2, (2, 2)) +>>> _ = model.fit(x, y, verbose=0) +>>> model.metrics_names +['loss', 'mae'] +``` + +``` +>>> inputs = tf.keras.layers.Input(shape=(3,)) +>>> d = tf.keras.layers.Dense(2, name='out') +>>> output_1 = d(inputs) +>>> output_2 = d(inputs) +>>> model = tf.keras.models.Model( +... inputs=inputs, outputs=[output_1, output_2]) +>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"]) +>>> _ = model.fit(x, (y, y), verbose=0) +>>> model.metrics_names +['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae', +'out_1_acc'] +``` +
+`run_eagerly` + +Settable attribute indicating whether the model should run eagerly. + +Running eagerly means that your model will be run step by step, +like Python code. Your model might run slower, but it should become easier +for you to debug it by stepping into individual layer calls. + +By default, we will attempt to compile your model to a static graph to +deliver the best execution performance. +
+`state_updates` + +Returns the `updates` from all layers that are stateful. + +This is useful for separating training updates and +state updates, e.g. when we need to update a layer's internal state +during prediction. +
+ + + +## Methods + +

compile

+ +View source + + + +Configures the model for training. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`optimizer` + +String (name of optimizer) or optimizer instance. +See tf.keras.optimizers. +
+`loss` + +String (name of objective function), objective function or +tf.keras.losses.Loss instance. See tf.keras.losses. +An objective function is any callable with the signature +`loss = fn(y_true, y_pred)`, where +y_true = ground truth values with shape = `[batch_size, d0, .. dN]`, +except sparse loss functions such as sparse categorical crossentropy +where shape = `[batch_size, d0, .. dN-1]`. +y_pred = predicted values with shape = `[batch_size, d0, .. dN]`. +It returns a weighted loss float tensor. +If a custom `Loss` instance is used and reduction is set to NONE, +return value has the shape [batch_size, d0, .. dN-1] ie. per-sample +or per-timestep loss values; otherwise, it is a scalar. +If the model has multiple outputs, you can use a different loss on +each output by passing a dictionary or a list of losses. The loss +value that will be minimized by the model will then be the sum of +all individual losses. +
+`metrics` + +List of metrics to be evaluated by the model during training +and testing. +Each of these can be a string (name of a built-in function), function +or a tf.keras.metrics.Metric instance. See tf.keras.metrics. +Typically you will use `metrics=['accuracy']`. A function is any +callable with the signature `result = fn(y_true, y_pred)`. +To specify different metrics for different outputs of a +multi-output model, you could also pass a dictionary, such as +`metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}`. +You can also pass a list (len = len(outputs)) of lists of metrics +such as `metrics=[['accuracy'], ['accuracy', 'mse']]` or +`metrics=['accuracy', ['accuracy', 'mse']]`. +When you pass the strings 'accuracy' or 'acc', we convert this to +one of tf.keras.metrics.BinaryAccuracy, +tf.keras.metrics.CategoricalAccuracy, +tf.keras.metrics.SparseCategoricalAccuracy based on the loss +function used and the model output shape. We do a similar conversion +for the strings 'crossentropy' and 'ce' as well. +
+`loss_weights` + +Optional list or dictionary specifying scalar +coefficients (Python floats) to weight the loss contributions +of different model outputs. +The loss value that will be minimized by the model +will then be the *weighted sum* of all individual losses, +weighted by the `loss_weights` coefficients. +If a list, it is expected to have a 1:1 mapping +to the model's outputs. If a dict, it is expected to map +output names (strings) to scalar coefficients. +
+`sample_weight_mode` + +If you need to do timestep-wise +sample weighting (2D weights), set this to `"temporal"`. +`None` defaults to sample-wise weights (1D). +If the model has multiple outputs, you can use a different +`sample_weight_mode` on each output by passing a +dictionary or a list of modes. +
+`weighted_metrics` + +List of metrics to be evaluated and weighted +by sample_weight or class_weight during training and testing. +
+`**kwargs` + +Any additional arguments. For eager execution, pass +`run_eagerly=True`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid arguments for +`optimizer`, `loss`, `metrics` or `sample_weight_mode`. +
+ + + +
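+
+#### Example (illustrative):
+
+A minimal sketch, not taken from the API itself; it assumes a model with two
+named outputs, `out_a` and `out_b`:
+
+```python
+# Hypothetical two-output model: a different loss and metric per output,
+# with the per-output losses weighted into the total loss.
+model.compile(
+    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
+    loss={'out_a': 'mse', 'out_b': 'binary_crossentropy'},
+    loss_weights={'out_a': 1.0, 'out_b': 0.5},
+    metrics={'out_a': ['mae'], 'out_b': ['accuracy']})
+```
+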

evaluate

+ +View source + + + +Returns the loss value & metrics values for the model in test mode. + +Computation is done in batches. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, if +the model has named inputs. - A tf.data dataset. - A generator or +keras.utils.Sequence instance. A more detailed description of +unpacking behavior for iterator types (Dataset, generator, Sequence) +is given in the `Unpacking behavior for iterator-like inputs` section +of Model.fit. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). If +`x` is a dataset, generator or keras.utils.Sequence instance, `y` +should not be specified (since targets will be obtained from the +iterator/dataset). +
+`batch_size` + +Integer or `None`. Number of samples per gradient update. If +unspecified, `batch_size` will default to 32. Do not specify the +`batch_size` if your data is in the form of a dataset, generators, +or keras.utils.Sequence instances (since they generate batches). +
+`verbose` + +0 or 1. Verbosity mode. 0 = silent, 1 = progress bar. +
+`sample_weight` + +Optional Numpy array of weights for the test samples, +used for weighting the loss function. You can either pass a flat (1D) +Numpy array with the same length as the input samples +(1:1 mapping between weights and samples), or in the case of +temporal data, you can pass a 2D array with shape `(samples, +sequence_length)`, to apply a different weight to every timestep +of every sample. In this case you should make sure to specify +`sample_weight_mode="temporal"` in `compile()`. This argument is +not supported when `x` is a dataset, instead pass sample weights +as the third element of `x`. +
+`steps` + +Integer or `None`. Total number of steps (batches of samples) +before declaring the evaluation round finished. Ignored with the +default value of `None`. If x is a tf.data dataset and `steps` is +None, 'evaluate' will run until the dataset is exhausted. This +argument is not supported with array inputs. +
+`callbacks` + +List of keras.callbacks.Callback instances. List of +callbacks to apply during evaluation. See +[callbacks](/api_docs/python/tf/keras/callbacks). +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. If unspecified, +`max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up when using process-based +threading. If unspecified, `workers` will default to 1. If 0, will +execute the generator on the main thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to the +generator as they can't be passed easily to children processes. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + +See the discussion of `Unpacking behavior for iterator-like inputs` for +Model.fit. + + + + + + + + + +
Returns
+Scalar test loss (if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +in case of invalid arguments. +
+ + + +
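+
+#### Example (illustrative):
+
+A minimal sketch, assuming a compiled `model` and NumPy test arrays `x_test`
+and `y_test`:
+
+```python
+# Evaluate on held-out data; with return_dict=True the results come back
+# keyed by metric name instead of as a positional list.
+results = model.evaluate(x_test, y_test, batch_size=64, return_dict=True)
+print(results)  # e.g. {'loss': ..., 'mae': ...}
+```
+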

evaluate_generator

+ +View source + + + +Evaluates the model on a data generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.evaluate, which supports generators. + +#### DEPRECATED: + +Model.evaluate now supports generators, so there is no longer any need +to use this endpoint. + + +

fit

+ +View source + + + +Trains the model for a fixed number of epochs (iterations on a dataset). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, +if the model has named inputs. +- A tf.data dataset. Should return a tuple +of either `(inputs, targets)` or +`(inputs, targets, sample_weights)`. +- A generator or keras.utils.Sequence returning `(inputs, targets)` +or `(inputs, targets, sample_weights)`. +A more detailed description of unpacking behavior for iterator types +(Dataset, generator, Sequence) is given below. +
+`y` + +Target data. Like the input data `x`, +it could be either Numpy array(s) or TensorFlow tensor(s). +It should be consistent with `x` (you cannot have Numpy inputs and +tensor targets, or inversely). If `x` is a dataset, generator, +or keras.utils.Sequence instance, `y` should +not be specified (since targets will be obtained from `x`). +
+`batch_size` + +Integer or `None`. +Number of samples per gradient update. +If unspecified, `batch_size` will default to 32. +Do not specify the `batch_size` if your data is in the +form of datasets, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`epochs` + +Integer. Number of epochs to train the model. +An epoch is an iteration over the entire `x` and `y` +data provided. +Note that in conjunction with `initial_epoch`, +`epochs` is to be understood as "final epoch". +The model is not trained for a number of iterations +given by `epochs`, but merely until the epoch +of index `epochs` is reached. +
+`verbose`
+
+0, 1, or 2. Verbosity mode.
+0 = silent, 1 = progress bar, 2 = one line per epoch.
+Note that the progress bar is not particularly useful when
+logged to a file, so `verbose=2` is recommended when not running
+interactively (e.g., in a production environment).
+
+`callbacks` + +List of keras.callbacks.Callback instances. +List of callbacks to apply during training. +See tf.keras.callbacks. +
+`validation_split` + +Float between 0 and 1. +Fraction of the training data to be used as validation data. +The model will set apart this fraction of the training data, +will not train on it, and will evaluate +the loss and any model metrics +on this data at the end of each epoch. +The validation data is selected from the last samples +in the `x` and `y` data provided, before shuffling. This argument is +not supported when `x` is a dataset, generator or +keras.utils.Sequence instance. +
+`validation_data`
+
+Data on which to evaluate
+the loss and any model metrics at the end of each epoch.
+The model will not be trained on this data.
+`validation_data` will override `validation_split`.
+`validation_data` could be:
+- tuple `(x_val, y_val)` of Numpy arrays or tensors
+- tuple `(x_val, y_val, val_sample_weights)` of Numpy arrays
+- dataset
+For the first two cases, `batch_size` must be provided.
+For the last case, `validation_steps` could be provided.
+Note that `validation_data` does not support all the data types that
+are supported in `x`, e.g. dict, generator or keras.utils.Sequence.
+
+`shuffle` + +Boolean (whether to shuffle the training data +before each epoch) or str (for 'batch'). This argument is ignored +when `x` is a generator. 'batch' is a special option for dealing +with the limitations of HDF5 data; it shuffles in batch-sized +chunks. Has no effect when `steps_per_epoch` is not `None`. +
+`class_weight` + +Optional dictionary mapping class indices (integers) +to a weight (float) value, used for weighting the loss function +(during training only). +This can be useful to tell the model to +"pay more attention" to samples from +an under-represented class. +
+`sample_weight` + +Optional Numpy array of weights for +the training samples, used for weighting the loss function +(during training only). You can either pass a flat (1D) +Numpy array with the same length as the input samples +(1:1 mapping between weights and samples), +or in the case of temporal data, +you can pass a 2D array with shape +`(samples, sequence_length)`, +to apply a different weight to every timestep of every sample. +In this case you should make sure to specify +`sample_weight_mode="temporal"` in `compile()`. This argument is not +supported when `x` is a dataset, generator, or +keras.utils.Sequence instance, instead provide the sample_weights +as the third element of `x`. +
+`initial_epoch` + +Integer. +Epoch at which to start training +(useful for resuming a previous training run). +
+`steps_per_epoch` + +Integer or `None`. +Total number of steps (batches of samples) +before declaring one epoch finished and starting the +next epoch. When training with input tensors such as +TensorFlow data tensors, the default `None` is equal to +the number of samples in your dataset divided by +the batch size, or 1 if that cannot be determined. If x is a +tf.data dataset, and 'steps_per_epoch' +is None, the epoch will run until the input dataset is exhausted. +When passing an infinitely repeating dataset, you must specify the +`steps_per_epoch` argument. This argument is not supported with +array inputs. +
+`validation_steps` + +Only relevant if `validation_data` is provided and +is a tf.data dataset. Total number of steps (batches of +samples) to draw before stopping when performing validation +at the end of every epoch. If 'validation_steps' is None, validation +will run until the `validation_data` dataset is exhausted. In the +case of an infinitely repeated dataset, it will run into an +infinite loop. If 'validation_steps' is specified and only part of +the dataset will be consumed, the evaluation will start from the +beginning of the dataset at each epoch. This ensures that the same +validation samples are used every time. +
+`validation_batch_size` + +Integer or `None`. +Number of samples per validation batch. +If unspecified, will default to `batch_size`. +Do not specify the `validation_batch_size` if your data is in the +form of datasets, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`validation_freq` + +Only relevant if validation data is provided. Integer +or `collections_abc.Container` instance (e.g. list, tuple, etc.). +If an integer, specifies how many training epochs to run before a +new validation run is performed, e.g. `validation_freq=2` runs +validation every 2 epochs. If a Container, specifies the epochs on +which to run validation, e.g. `validation_freq=[1, 2, 10]` runs +validation at the end of the 1st, 2nd, and 10th epochs. +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. +If unspecified, `max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up +when using process-based threading. If unspecified, `workers` +will default to 1. If 0, will execute the generator on the main +thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to +the generator as they can't be passed easily to children processes. +
+ + +Unpacking behavior for iterator-like inputs: + A common pattern is to pass a tf.data.Dataset, generator, or + tf.keras.utils.Sequence to the `x` argument of fit, which will in fact + yield not only features (x) but optionally targets (y) and sample weights. + Keras requires that the output of such iterator-likes be unambiguous. The + iterator should return a tuple of length 1, 2, or 3, where the optional + second and third elements will be used for y and sample_weight + respectively. Any other type provided will be wrapped in a length one + tuple, effectively treating everything as 'x'. When yielding dicts, they + should still adhere to the top-level tuple structure. + e.g. `({"x0": x0, "x1": x1}, y)`. Keras will not attempt to separate + features, targets, and weights from the keys of a single dict. + A notable unsupported data type is the namedtuple. The reason is that + it behaves like both an ordered datatype (tuple) and a mapping + datatype (dict). So given a namedtuple of the form: + `namedtuple("example_tuple", ["y", "x"])` + it is ambiguous whether to reverse the order of the elements when + interpreting the value. Even worse is a tuple of the form: + `namedtuple("other_tuple", ["x", "y", "z"])` + where it is unclear if the tuple was intended to be unpacked into x, y, + and sample_weight or passed through as a single element to `x`. As a + result the data processing code will simply raise a ValueError if it + encounters a namedtuple. (Along with instructions to remedy the issue.) + + + + + + + + + +
Returns
+A `History` object. Its `History.history` attribute is +a record of training loss values and metrics values +at successive epochs, as well as validation loss values +and validation metrics values (if applicable). +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If the model was never compiled. +
+`ValueError` + +In case of mismatch between the provided input data +and what the model expects. +
+ + + +
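+
+#### Example (illustrative):
+
+A minimal sketch, assuming a compiled `model` and NumPy arrays `x_train` and
+`y_train`:
+
+```python
+# Hold out 20% of the arrays for validation and stop once the validation
+# loss has not improved for three epochs.
+history = model.fit(
+    x_train, y_train,
+    batch_size=32,
+    epochs=20,
+    validation_split=0.2,
+    callbacks=[tf.keras.callbacks.EarlyStopping(patience=3)])
+print(history.history['loss'])      # per-epoch training loss
+print(history.history['val_loss'])  # per-epoch validation loss
+```
+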

fit_generator

+ +View source + + + +Fits the model on data yielded batch-by-batch by a Python generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.fit, which supports generators. + +#### DEPRECATED: + +Model.fit now supports generators, so there is no longer any need to use +this endpoint. + + +

get_layer

+ +View source + + + +Retrieves a layer based on either its name (unique) or index. + +If `name` and `index` are both provided, `index` will take precedence. +Indices are based on order of horizontal graph traversal (bottom-up). + + + + + + + + + + + + + +
Arguments
+`name` + +String, name of layer. +
+`index` + +Integer, index of layer. +
+ + + + + + + + + + + +
Returns
+A layer instance. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid layer name or index. +
+ + + +
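+
+#### Example (illustrative):
+
+A small sketch; the layer name `'dense_1'` is hypothetical and must match a
+layer that actually exists in your model:
+
+```python
+# Retrieve a layer by its (unique) name, or by its traversal index.
+dense = model.get_layer(name='dense_1')
+first = model.get_layer(index=0)
+print(dense.name, first.name)
+```
+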

load_weights

+ +View source + + + +Loads all layer weights, either from a TensorFlow or an HDF5 weight file. + +If `by_name` is False weights are loaded based on the network's +topology. This means the architecture should be the same as when the weights +were saved. Note that layers that don't have weights are not taken into +account in the topological ordering, so adding or removing layers is fine as +long as they don't have weights. + +If `by_name` is True, weights are loaded into layers only if they share the +same name. This is useful for fine-tuning or transfer-learning models where +some of the layers have changed. + +Only topological loading (`by_name=False`) is supported when loading weights +from the TensorFlow format. Note that topological loading differs slightly +between TensorFlow and HDF5 formats for user-defined classes inheriting from +tf.keras.Model: HDF5 loads based on a flattened list of weights, while the +TensorFlow format loads based on the object-local names of attributes to +which layers are assigned in the `Model`'s constructor. + + + + + + + + + + + + + + + + +
Arguments
+`filepath` + +String, path to the weights file to load. For weight files in +TensorFlow format, this is the file prefix (the same as was passed +to `save_weights`). +
+`by_name` + +Boolean, whether to load weights by name or by topological +order. Only topological loading is supported for weight files in +TensorFlow format. +
+`skip_mismatch` + +Boolean, whether to skip loading of layers where there is +a mismatch in the number of weights, or a mismatch in the shape of +the weight (only valid when `by_name=True`). +
+ + + + + + + + + + + +
Returns
+When loading a weight file in TensorFlow format, returns the same status +object as tf.train.Checkpoint.restore. When graph building, restore +ops are run automatically as soon as the network is built (on first call +for user-defined classes inheriting from `Model`, immediately if it is +already built). + +When loading weights in HDF5 format, returns `None`. +
+ + + + + + + + + + + + + + + +
Raises
+`ImportError` + +If h5py is not available and the weight file is in HDF5 +format. +
+`ValueError` + +If `skip_mismatch` is set to `True` when `by_name` is +`False`. +
+ + + +
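+
+#### Example (illustrative):
+
+A sketch of a save/restore round trip in the TensorFlow format;
+`build_model()` stands in for whatever code rebuilds the same architecture:
+
+```python
+# Save weights under a checkpoint file prefix (TensorFlow format), then
+# restore them into a freshly built model with the same topology.
+model.save_weights('./checkpoints/my_ckpt')
+restored = build_model()  # assumed helper that recreates the architecture
+restored.load_weights('./checkpoints/my_ckpt')
+```
+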

make_predict_function

+ +View source + + + +Creates a function that executes one step of inference. + +This method can be overridden to support custom inference logic. +This method is called by Model.predict and Model.predict_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual evaluation +logic to Model.predict_step. + +This function is cached the first time Model.predict or +Model.predict_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a +`tf.data.Iterator`, and return the outputs of the `Model`. +
+ + + +

make_test_function

+ +View source + + + +Creates a function that executes one step of evaluation. + +This method can be overridden to support custom evaluation logic. +This method is called by Model.evaluate and Model.test_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual evaluation +logic to Model.test_step. + +This function is cached the first time Model.evaluate or +Model.test_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a
+`tf.data.Iterator`, and return a `dict` containing values that will
+be passed to `tf.keras.callbacks.CallbackList.on_test_batch_end`.
+
+ + + +

make_train_function

+ +View source + + + +Creates a function that executes one step of training. + +This method can be overridden to support custom training logic. +This method is called by Model.fit and Model.train_on_batch. + +Typically, this method directly controls tf.function and +tf.distribute.Strategy settings, and delegates the actual training +logic to Model.train_step. + +This function is cached the first time Model.fit or +Model.train_on_batch is called. The cache is cleared whenever +Model.compile is called. + + + + + + + + + +
Returns
+Function. The function created by this method should accept a
+`tf.data.Iterator`, and return a `dict` containing values that will
+be passed to `tf.keras.callbacks.CallbackList.on_train_batch_end`, such as
+`{'loss': 0.2, 'accuracy': 0.7}`.
+
+ + + +
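+
+#### Example (illustrative):
+
+In practice, custom training logic is usually added by overriding
+`Model.train_step` (documented below) rather than this method. A minimal
+sketch of such an override, assuming the input pipeline yields `(x, y)`
+tuples:
+
+```python
+class CustomModel(tf.keras.Model):
+
+  def train_step(self, data):
+    x, y = data  # assumed structure of one batch
+    with tf.GradientTape() as tape:
+      y_pred = self(x, training=True)
+      # Compute the loss configured in compile().
+      loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
+    # Backpropagate and apply one optimizer update.
+    grads = tape.gradient(loss, self.trainable_variables)
+    self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
+    # Update the metrics configured in compile() and report them.
+    self.compiled_metrics.update_state(y, y_pred)
+    return {m.name: m.result() for m in self.metrics}
+```
+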

predict

+
+View source
+
+
+
+Generates output predictions for the input samples.
+
+Computation is done in batches. This method is designed for performance with
+large-scale inputs. For small numbers of inputs that fit in one batch,
+directly using `__call__` is recommended for faster execution, e.g.,
+`model(x)`, or `model(x, training=False)` if you have layers such as
+tf.keras.layers.BatchNormalization that behave differently during
+inference.
+
Arguments
+`x` + +Input samples. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A tf.data dataset. +- A generator or keras.utils.Sequence instance. +A more detailed description of unpacking behavior for iterator types +(Dataset, generator, Sequence) is given in the `Unpacking behavior +for iterator-like inputs` section of `Model.fit`. +
+`batch_size` + +Integer or `None`. +Number of samples per batch. +If unspecified, `batch_size` will default to 32. +Do not specify the `batch_size` if your data is in the +form of dataset, generators, or keras.utils.Sequence instances +(since they generate batches). +
+`verbose` + +Verbosity mode, 0 or 1. +
+`steps` + +Total number of steps (batches of samples) +before declaring the prediction round finished. +Ignored with the default value of `None`. If x is a tf.data +dataset and `steps` is None, `predict` will +run until the input dataset is exhausted. +
+`callbacks` + +List of keras.callbacks.Callback instances. +List of callbacks to apply during prediction. +See [callbacks](/api_docs/python/tf/keras/callbacks). +
+`max_queue_size` + +Integer. Used for generator or keras.utils.Sequence +input only. Maximum size for the generator queue. +If unspecified, `max_queue_size` will default to 10. +
+`workers` + +Integer. Used for generator or keras.utils.Sequence input +only. Maximum number of processes to spin up when using +process-based threading. If unspecified, `workers` will default +to 1. If 0, will execute the generator on the main thread. +
+`use_multiprocessing` + +Boolean. Used for generator or +keras.utils.Sequence input only. If `True`, use process-based +threading. If unspecified, `use_multiprocessing` will default to +`False`. Note that because this implementation relies on +multiprocessing, you should not pass non-picklable arguments to +the generator as they can't be passed easily to children processes. +
+ + +See the discussion of `Unpacking behavior for iterator-like inputs` for +Model.fit. Note that Model.predict uses the same interpretation rules as +Model.fit and Model.evaluate, so inputs must be unambiguous for all +three methods. + + + + + + + + + +
Returns
+Numpy array(s) of predictions. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of mismatch between the provided +input data and the model's expectations, +or in case a stateful model receives a number of samples +that is not a multiple of the batch size. +
+ + + +
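+
+#### Example (illustrative):
+
+A minimal sketch, assuming a trained `model` and a NumPy array `x_new` of
+new samples:
+
+```python
+# Batched inference over many samples. For a handful of samples that fit
+# in a single batch, calling model(x_new, training=False) directly is
+# usually faster.
+predictions = model.predict(x_new, batch_size=128)
+print(predictions.shape)
+```
+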

predict_generator

+ +View source + + + +Generates predictions for the input samples from a data generator. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Please use Model.predict, which supports generators. + +#### DEPRECATED: + +Model.predict now supports generators, so there is no longer any need +to use this endpoint. + + +

predict_on_batch

+ +View source + + + +Returns predictions for a single batch of samples. + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +
+ + + + + + + + + + + +
Returns
+Numpy array(s) of predictions. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of mismatch between given number of inputs and +expectations of the model. +
+ + + +

predict_step

+
+View source
+
+
+
+The logic for one inference step.
+
+This method can be overridden to support custom inference logic.
+This method is called by Model.make_predict_function.
+
+This method should contain the mathematical logic for one step of inference.
+This typically includes the forward pass.
+
+Configuration details for *how* this logic is run (e.g. tf.function and
+tf.distribute.Strategy settings) should be left to
+Model.make_predict_function, which can also be overridden.
+
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+The result of one inference step, typically the output of calling the +`Model` on data. +
+ + + +

reset_metrics

+ +View source + + + +Resets the state of metrics. + + +

reset_states

+ +View source + + + + + + +

save

+
+View source
+
+
+
+Saves the model to TensorFlow SavedModel or a single HDF5 file.
+
+
+#### The savefile includes:
+
+- The model architecture, allowing the model to be re-instantiated.
+- The model weights.
+- The state of the optimizer, allowing training to be resumed
+  exactly where you left off.
+
+
+This allows you to save the entirety of the state of a model
+in a single file.
+
+Saved models can be reinstantiated via keras.models.load_model.
+The model returned by `load_model` is a compiled model ready to be used
+(unless the saved model was never compiled in the first place).
+
+Models built with the Sequential and Functional API can be saved to both the
+HDF5 and SavedModel formats. Subclassed models can only be saved with the
+SavedModel format.
+
+Note that the model weights may have different scoped names after being
+loaded. Scoped names include the model/layer names, such as
+`"dense_1/kernel:0"`. It is recommended that you use the layer properties to
+access specific variables, e.g. `model.get_layer("dense_1").kernel`.
+
Arguments
+`filepath` + +String, path to SavedModel or H5 file to save the model. +
+`overwrite` + +Whether to silently overwrite any existing file at the +target location, or provide the user with a manual prompt. +
+`include_optimizer` + +If True, save optimizer's state together. +
+`save_format`
+
+Either 'tf' or 'h5', indicating whether to save the model
+to TensorFlow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and
+'h5' in TF 1.X.
+
+`signatures` + +Signatures to save with the SavedModel. Applicable to the +'tf' format only. Please see the `signatures` argument in +tf.saved_model.save for details. +
+`options` + +Optional tf.saved_model.SaveOptions object that specifies +options for saving to SavedModel. +
+
+
+
+#### Example:
+
+
+
+```python
+from tensorflow.keras.models import load_model
+
+model.save('my_model.h5')  # creates an HDF5 file 'my_model.h5'
+del model  # deletes the existing model
+
+# returns a compiled model
+# identical to the previous one
+model = load_model('my_model.h5')
+```
+
+

save_weights

+ +View source + + + +Saves all layer weights. + +Either saves in HDF5 or in TensorFlow format based on the `save_format` +argument. + +When saving in HDF5 format, the weight file has: + - `layer_names` (attribute), a list of strings + (ordered names of model layers). + - For every layer, a `group` named `layer.name` + - For every such layer group, a group attribute `weight_names`, + a list of strings + (ordered names of weights tensor of the layer). + - For every weight in the layer, a dataset + storing the weight value, named after the weight tensor. + +When saving in TensorFlow format, all objects referenced by the network are +saved in the same format as tf.train.Checkpoint, including any `Layer` +instances or `Optimizer` instances assigned to object attributes. For +networks constructed from inputs and outputs using `tf.keras.Model(inputs, +outputs)`, `Layer` instances used by the network are tracked/saved +automatically. For user-defined classes which inherit from tf.keras.Model, +`Layer` instances must be assigned to object attributes, typically in the +constructor. See the documentation of tf.train.Checkpoint and +tf.keras.Model for details. + +While the formats are the same, do not mix `save_weights` and +tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be +loaded using Model.load_weights. Checkpoints saved using +tf.train.Checkpoint.save should be restored using the corresponding +tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over +`save_weights` for training checkpoints. + +The TensorFlow format matches objects and variables by starting at a root +object, `self` for `save_weights`, and greedily matching attribute +names. For Model.save this is the `Model`, and for Checkpoint.save this +is the `Checkpoint` even if the `Checkpoint` has a model attached. This +means saving a tf.keras.Model using `save_weights` and loading into a +tf.train.Checkpoint with a `Model` attached (or vice versa) will not match +the `Model`'s variables. See the [guide to training +checkpoints](https://www.tensorflow.org/guide/checkpoint) for details +on the TensorFlow format. + + + + + + + + + + + + + + + + +
Arguments
+`filepath` + +String, path to the file to save the weights to. When saving +in TensorFlow format, this is the prefix used for checkpoint files +(multiple files are generated). Note that the '.h5' suffix causes +weights to be saved in HDF5 format. +
+`overwrite` + +Whether to silently overwrite any existing file at the +target location, or provide the user with a manual prompt. +
+`save_format` + +Either 'tf' or 'h5'. A `filepath` ending in '.h5' or +'.keras' will default to HDF5 if `save_format` is `None`. Otherwise +`None` defaults to 'tf'. +
+ + + + + + + + + + + + + + + +
Raises
+`ImportError` + +If h5py is not available when attempting to save in HDF5 +format. +
+`ValueError` + +For invalid/unknown format arguments. +
+ + + +
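+
+#### Example (illustrative):
+
+A short sketch showing how the file name interacts with `save_format`:
+
+```python
+# The '.h5' suffix selects HDF5; any other path is treated as a checkpoint
+# file prefix and saved in the TensorFlow format by default.
+model.save_weights('weights.h5')      # single HDF5 file (requires h5py)
+model.save_weights('./ckpt/weights')  # TensorFlow-format checkpoint files
+```
+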

summary

+ +View source + + + +Prints a string summary of the network. + + + + + + + + + + + + + + + + + +
Arguments
+`line_length` + +Total length of printed lines +(e.g. set this to adapt the display to different +terminal window sizes). +
+`positions` + +Relative or absolute positions of log elements +in each line. If not provided, +defaults to `[.33, .55, .67, 1.]`. +
+`print_fn` + +Print function to use. Defaults to `print`. +It will be called on each line of the summary. +You can set it to a custom function +in order to capture the string summary. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if `summary()` is called before the model is built. +
+ + + +
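+
+#### Example (illustrative):
+
+A small sketch showing how `print_fn` can be used to capture the summary as
+a string instead of printing it:
+
+```python
+# print_fn is called once per summary line, so collecting the lines in a
+# list reconstructs the full text.
+lines = []
+model.summary(print_fn=lines.append)
+summary_text = '\n'.join(lines)
+```
+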

test_on_batch

+ +View source + + + +Test the model on a single batch of samples. + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: - A Numpy array (or array-like), or a list +of arrays (in case the model has multiple inputs). - A TensorFlow +tensor, or a list of tensors (in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, if +the model has named inputs. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). +
+`sample_weight` + +Optional array of the same length as x, containing +weights to apply to the model's loss for each sample. In the case of +temporal data, you can pass a 2D array with shape (samples, +sequence_length), to apply a different weight to every timestep of +every sample. In this case you should make sure to specify +sample_weight_mode="temporal" in compile(). +
+`reset_metrics` + +If `True`, the metrics returned will be only for this +batch. If `False`, the metrics will be statefully accumulated across +batches. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + + + + + + + + + + +
Returns
+Scalar test loss (if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid user-provided arguments. +
+ + + +

test_step

+
+View source
+
+
+
+The logic for one evaluation step.
+
+This method can be overridden to support custom evaluation logic.
+This method is called by Model.make_test_function.
+
+This method should contain the mathematical logic for one step of
+evaluation.
+This typically includes the forward pass, loss calculation, and metric
+updates.
+
+Configuration details for *how* this logic is run (e.g. tf.function and
+tf.distribute.Strategy settings) should be left to
+Model.make_test_function, which can also be overridden.
+
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+A `dict` containing values that will be passed to
+`tf.keras.callbacks.CallbackList.on_test_batch_end`. Typically, the
+values of the `Model`'s metrics are returned.
+
+ + + +

to_json

+ +View source + + + +Returns a JSON string containing the network configuration. + +To load a network from a JSON save file, use +keras.models.model_from_json(json_string, custom_objects={}). + + + + + + + + + + +
Arguments
+`**kwargs` + +Additional keyword arguments +to be passed to `json.dumps()`. +
+ + + + + + + + + + + +
Returns
+A JSON string. +
+ + + +
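+
+#### Example (illustrative):
+
+A minimal sketch of a JSON round trip; note that only the architecture is
+serialized, not the weights:
+
+```python
+# Serialize the architecture, rebuild it, then copy the weights separately.
+json_config = model.to_json()
+reloaded = tf.keras.models.model_from_json(json_config)
+reloaded.set_weights(model.get_weights())
+```
+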

to_yaml

+ +View source + + + +Returns a yaml string containing the network configuration. + +To load a network from a yaml save file, use +keras.models.model_from_yaml(yaml_string, custom_objects={}). + +`custom_objects` should be a dictionary mapping +the names of custom losses / layers / etc to the corresponding +functions / classes. + + + + + + + + + + +
Arguments
+`**kwargs` + +Additional keyword arguments +to be passed to `yaml.dump()`. +
+ + + + + + + + + + + +
Returns
+A YAML string. +
+ + + + + + + + + + + + +
Raises
+`ImportError` + +if yaml module is not found. +
+ + + +

train_on_batch

+ +View source + + + +Runs a single gradient update on a single batch of data. + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +Input data. It could be: +- A Numpy array (or array-like), or a list of arrays +(in case the model has multiple inputs). +- A TensorFlow tensor, or a list of tensors +(in case the model has multiple inputs). +- A dict mapping input names to the corresponding array/tensors, +if the model has named inputs. +
+`y` + +Target data. Like the input data `x`, it could be either Numpy +array(s) or TensorFlow tensor(s). It should be consistent with `x` +(you cannot have Numpy inputs and tensor targets, or inversely). +
+`sample_weight` + +Optional array of the same length as x, containing +weights to apply to the model's loss for each sample. In the case of +temporal data, you can pass a 2D array with shape (samples, +sequence_length), to apply a different weight to every timestep of +every sample. In this case you should make sure to specify +sample_weight_mode="temporal" in compile(). +
+`class_weight` + +Optional dictionary mapping class indices (integers) to a +weight (float) to apply to the model's loss for the samples from this +class during training. This can be useful to tell the model to "pay +more attention" to samples from an under-represented class. +
+`reset_metrics` + +If `True`, the metrics returned will be only for this +batch. If `False`, the metrics will be statefully accumulated across +batches. +
+`return_dict` + +If `True`, loss and metric results are returned as a dict, +with each key being the name of the metric. If `False`, they are +returned as a list. +
+ + + + + + + + + + + +
Returns
+Scalar training loss +(if the model has a single output and no metrics) +or list of scalars (if the model has multiple outputs +and/or metrics). The attribute `model.metrics_names` will give you +the display labels for the scalar outputs. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid user-provided arguments. +
+ + + +
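+
+#### Example (illustrative):
+
+A minimal sketch of a manual training loop; `batches` stands in for any
+iterable of already-batched `(x_batch, y_batch)` NumPy pairs:
+
+```python
+# One gradient update per batch; with return_dict=True the logs come back
+# keyed by metric name. Pass reset_metrics=False to accumulate metrics
+# across batches instead of reporting them per batch.
+for x_batch, y_batch in batches:
+    logs = model.train_on_batch(x_batch, y_batch, return_dict=True)
+print(logs)
+```
+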

train_step

+
+View source
+
+
+
+The logic for one training step.
+
+This method can be overridden to support custom training logic.
+This method is called by Model.make_train_function.
+
+This method should contain the mathematical logic for one step of training.
+This typically includes the forward pass, loss calculation, backpropagation,
+and metric updates.
+
+Configuration details for *how* this logic is run (e.g. tf.function and
+tf.distribute.Strategy settings) should be left to
+Model.make_train_function, which can also be overridden.
+
Arguments
+`data` + +A nested structure of `Tensor`s. +
+ + + + + + + + + + + +
Returns
+A `dict` containing values that will be passed to +`tf.keras.callbacks.CallbackList.on_train_batch_end`. Typically, the +values of the `Model`'s metrics are returned. Example: +`{'loss': 0.2, 'accuracy': 0.7}`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/experimental/terminate_keras_multiprocessing_pools.md b/site/en/api_docs/python/tf/keras/experimental/terminate_keras_multiprocessing_pools.md new file mode 100644 index 00000000000..752a4481e65 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/experimental/terminate_keras_multiprocessing_pools.md @@ -0,0 +1,88 @@ +description: Destroy Keras' multiprocessing pools to prevent deadlocks. + +
+ + +
+ +# tf.keras.experimental.terminate_keras_multiprocessing_pools + + + + + + + + + +Destroy Keras' multiprocessing pools to prevent deadlocks. + + + + + + + + + +In general multiprocessing.Pool can interact quite badly with other, seemingly +unrelated, parts of a codebase due to Pool's reliance on fork. This method +cleans up all pools which are known to belong to Keras (and thus can be safely +terminated). + + + + + + + + + + + + + +
+`grace_period` + +Time (in seconds) to wait for process cleanup to propagate. +
+`use_sigkill` + +Boolean of whether or not to perform a cleanup pass using +SIGKILL. +
+ + + + + + + + + + + +
+A list of human readable strings describing all issues encountered. It is up +to the caller to decide whether to treat this as an error condition. +
+ diff --git a/site/en/api_docs/python/tf/keras/initializers.md b/site/en/api_docs/python/tf/keras/initializers.md new file mode 100644 index 00000000000..5ccc8133caa --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers.md @@ -0,0 +1,85 @@ +description: Keras initializer serialization / deserialization. + +
+ + +
+ +# Module: tf.keras.initializers + + + + + + + + + +Keras initializer serialization / deserialization. + + + + + +## Classes + +[`class Constant`](../../tf/constant_initializer.md): Initializer that generates tensors with constant values. + +[`class GlorotNormal`](../../tf/keras/initializers/GlorotNormal.md): The Glorot normal initializer, also called Xavier normal initializer. + +[`class GlorotUniform`](../../tf/keras/initializers/GlorotUniform.md): The Glorot uniform initializer, also called Xavier uniform initializer. + +[`class Identity`](../../tf/keras/initializers/Identity.md): Initializer that generates the identity matrix. + +[`class Initializer`](../../tf/keras/initializers/Initializer.md): Initializer base class: all initializers inherit from this class. + +[`class Ones`](../../tf/ones_initializer.md): Initializer that generates tensors initialized to 1. + +[`class Orthogonal`](../../tf/keras/initializers/Orthogonal.md): Initializer that generates an orthogonal matrix. + +[`class RandomNormal`](../../tf/random_normal_initializer.md): Initializer that generates tensors with a normal distribution. + +[`class RandomUniform`](../../tf/random_uniform_initializer.md): Initializer that generates tensors with a uniform distribution. + +[`class TruncatedNormal`](../../tf/keras/initializers/TruncatedNormal.md): Initializer that generates a truncated normal distribution. + +[`class VarianceScaling`](../../tf/keras/initializers/VarianceScaling.md): Initializer capable of adapting its scale to the shape of weights tensors. + +[`class Zeros`](../../tf/zeros_initializer.md): Initializer that generates tensors initialized to 0. + +[`class constant`](../../tf/constant_initializer.md): Initializer that generates tensors with constant values. + +[`class glorot_normal`](../../tf/keras/initializers/GlorotNormal.md): The Glorot normal initializer, also called Xavier normal initializer. + +[`class glorot_uniform`](../../tf/keras/initializers/GlorotUniform.md): The Glorot uniform initializer, also called Xavier uniform initializer. + +[`class identity`](../../tf/keras/initializers/Identity.md): Initializer that generates the identity matrix. + +[`class ones`](../../tf/ones_initializer.md): Initializer that generates tensors initialized to 1. + +[`class orthogonal`](../../tf/keras/initializers/Orthogonal.md): Initializer that generates an orthogonal matrix. + +[`class zeros`](../../tf/zeros_initializer.md): Initializer that generates tensors initialized to 0. + +## Functions + +[`deserialize(...)`](../../tf/keras/initializers/deserialize.md): Return an `Initializer` object from its config. + +[`get(...)`](../../tf/keras/initializers/get.md) + +[`he_normal(...)`](../../tf/keras/initializers/he_normal.md): He normal initializer. + +[`he_uniform(...)`](../../tf/keras/initializers/he_uniform.md): He uniform variance scaling initializer. + +[`lecun_normal(...)`](../../tf/keras/initializers/lecun_normal.md): LeCun normal initializer. + +[`lecun_uniform(...)`](../../tf/keras/initializers/lecun_uniform.md): LeCun uniform initializer. + +[`serialize(...)`](../../tf/keras/initializers/serialize.md) + diff --git a/site/en/api_docs/python/tf/keras/initializers/GlorotNormal.md b/site/en/api_docs/python/tf/keras/initializers/GlorotNormal.md new file mode 100644 index 00000000000..7d8286ec078 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/GlorotNormal.md @@ -0,0 +1,239 @@ +description: The Glorot normal initializer, also called Xavier normal initializer. + +
+ + + + + + +
+ +# tf.keras.initializers.GlorotNormal + + + + + + + + + +The Glorot normal initializer, also called Xavier normal initializer. + +Inherits From: [`VarianceScaling`](../../../tf/keras/initializers/VarianceScaling.md) + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +Draws samples from a truncated normal distribution centered on 0 with `stddev += sqrt(2 / (fan_in + fan_out))` where `fan_in` is the number of input units in +the weight tensor and `fan_out` is the number of output units in the weight +tensor. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k, k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.initializers.GlorotNormal()) +>>> v1 +>> v2 +>> make_variables(4, tf.initializers.RandomNormal()) +( + + + + + + + + +
+`seed` + +A Python integer. Used to create random seeds. See +tf.random.set_seed for behavior. +
+ + + +#### References: + +[Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html) +([pdf](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf)) + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. Only floating point types are +supported. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the dtype is not floating point +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/initializers/GlorotUniform.md b/site/en/api_docs/python/tf/keras/initializers/GlorotUniform.md new file mode 100644 index 00000000000..dba27b16b34 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/GlorotUniform.md @@ -0,0 +1,239 @@ +description: The Glorot uniform initializer, also called Xavier uniform initializer. + +
+ + + + + + +
+ +# tf.keras.initializers.GlorotUniform + + + + + + + + + +The Glorot uniform initializer, also called Xavier uniform initializer. + +Inherits From: [`VarianceScaling`](../../../tf/keras/initializers/VarianceScaling.md) + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +Draws samples from a uniform distribution within [-limit, limit] where `limit` +is `sqrt(6 / (fan_in + fan_out))` where `fan_in` is the number of input units +in the weight tensor and `fan_out` is the number of output units in the weight +tensor. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k, k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.initializers.GlorotUniform()) +>>> v1 +>> v2 +>> make_variables(4, tf.initializers.RandomNormal()) +( + + + + + + + + +
+`seed` + +A Python integer. Used to create random seeds. See +tf.random.set_seed for behavior. +
+ + + +#### References: + +[Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html) +([pdf](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf)) + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. Only floating point types are +supported. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the dtype is not floating point +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/initializers/Identity.md b/site/en/api_docs/python/tf/keras/initializers/Identity.md new file mode 100644 index 00000000000..d66c0bf55f4 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/Identity.md @@ -0,0 +1,236 @@ +description: Initializer that generates the identity matrix. + +
+ + + + + + +
+ +# tf.keras.initializers.Identity + + + + + + + + + +Initializer that generates the identity matrix. + +Inherits From: [`Initializer`](../../../tf/keras/initializers/Initializer.md) + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +Only usable for generating 2D matrices. + +#### Examples: + + + +``` +>>> def make_variable(k, initializer): +... return tf.Variable(initializer(shape=[k, k], dtype=tf.float32)) +>>> make_variable(2, tf.initializers.Identity()) + +>>> make_variable(3, tf.initializers.Identity(gain=0.5)) + +``` + + + + + + + + + + +
+`gain` + +Multiplicative factor to apply to the identity matrix. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. Only floating point types are +supported. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If the dtype is not floating point +
+`ValueError` + +If the requested shape does not have exactly two axes. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/initializers/Initializer.md b/site/en/api_docs/python/tf/keras/initializers/Initializer.md new file mode 100644 index 00000000000..13f13f28ab1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/Initializer.md @@ -0,0 +1,161 @@ +description: Initializer base class: all initializers inherit from this class. + +
+ + + + + +
+ +# tf.keras.initializers.Initializer + + + + + + + + + +Initializer base class: all initializers inherit from this class. + + + + + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. If not provided will return tensor +of tf.float32. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/initializers/Orthogonal.md b/site/en/api_docs/python/tf/keras/initializers/Orthogonal.md new file mode 100644 index 00000000000..572863e5a0a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/Orthogonal.md @@ -0,0 +1,253 @@ +description: Initializer that generates an orthogonal matrix. + +
+ + + + + + +
+ +# tf.keras.initializers.Orthogonal + + + + + + + + + +Initializer that generates an orthogonal matrix. + +Inherits From: [`Initializer`](../../../tf/keras/initializers/Initializer.md) + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +If the shape of the tensor to initialize is two-dimensional, it is initialized +with an orthogonal matrix obtained from the QR decomposition of a matrix of +random numbers drawn from a normal distribution. +If the matrix has fewer rows than columns then the output will have orthogonal +rows. Otherwise, the output will have orthogonal columns. + +If the shape of the tensor to initialize is more than two-dimensional, +a matrix of shape `(shape[0] * ... * shape[n - 2], shape[n - 1])` +is initialized, where `n` is the length of the shape vector. +The matrix is subsequently reshaped to give a tensor of the desired shape. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k, k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.initializers.Orthogonal()) +>>> v1 +>> v2 +>> make_variables(4, tf.initializers.Orthogonal(gain=0.5)) +( + + + + + + + + + + + +
+`gain` + +multiplicative factor to apply to the orthogonal matrix +
+`seed` + +A Python integer. Used to create random seeds. See +tf.random.set_seed for behavior. +
+ + + +#### References: + +[Saxe et al., 2014](https://openreview.net/forum?id=_wzZwKpTDF_9C) +([pdf](https://arxiv.org/pdf/1312.6120.pdf)) + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. Only floating point types are +supported. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the dtype is not floating point or the input shape is not +valid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/initializers/TruncatedNormal.md b/site/en/api_docs/python/tf/keras/initializers/TruncatedNormal.md new file mode 100644 index 00000000000..00260249f6b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/TruncatedNormal.md @@ -0,0 +1,250 @@ +description: Initializer that generates a truncated normal distribution. + +
+ + + + + + +
+ +# tf.keras.initializers.TruncatedNormal + + + + + + + + + +Initializer that generates a truncated normal distribution. + +Inherits From: [`Initializer`](../../../tf/keras/initializers/Initializer.md) + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +These values are similar to values from a tf.initializers.RandomNormal +except that values more than two standard deviations from the mean are +discarded and re-drawn. This is the recommended initializer for neural network +weights and filters. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables( +... 3, tf.initializers.TruncatedNormal(mean=1., stddev=2.)) +>>> v1 + +>>> v2 +>> make_variables(4, tf.initializers.RandomUniform(minval=-1., maxval=1.)) +(, + + + + + + + + + + + + + + +
+`mean` + +a python scalar or a scalar tensor. Mean of the random values +to generate. +
+`stddev` + +a python scalar or a scalar tensor. Standard deviation of the +random values to generate. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.random.set_seed for behavior. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. Only floating point types are +supported. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the dtype is not floating point +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/initializers/VarianceScaling.md b/site/en/api_docs/python/tf/keras/initializers/VarianceScaling.md new file mode 100644 index 00000000000..b510059b5bd --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/VarianceScaling.md @@ -0,0 +1,280 @@ +description: Initializer capable of adapting its scale to the shape of weights tensors. + +
+ + + + + + +
+ +# tf.keras.initializers.VarianceScaling + + + + + + + + + +Initializer capable of adapting its scale to the shape of weights tensors. + +Inherits From: [`Initializer`](../../../tf/keras/initializers/Initializer.md) + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +With `distribution="truncated_normal" or "untruncated_normal"`, samples are +drawn from a truncated/untruncated normal distribution with a mean of zero and +a standard deviation (after truncation, if used) `stddev = sqrt(scale / n)` +where n is: + + - number of input units in the weight tensor, if mode = "fan_in" + - number of output units, if mode = "fan_out" + - average of the numbers of input and output units, if mode = "fan_avg" + +With `distribution="uniform"`, samples are drawn from a uniform distribution +within [-limit, limit], with `limit = sqrt(3 * scale / n)`. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.initializers.VarianceScaling(scale=1.)) +>>> v1 + +>>> v2 +>> make_variables(4, tf.initializers.VarianceScaling(distribution='uniform')) +(, + + + + + + + + + + + + + + + + + +
+`scale` + +Scaling factor (positive float). +
+`mode` + +One of "fan_in", "fan_out", "fan_avg". +
+`distribution` + +Random distribution to use. One of "truncated_normal", +"untruncated_normal" and "uniform". +
+`seed` + +A Python integer. Used to create random seeds. See +tf.random.set_seed for behavior. +
+ + + + + + + + + + + + +
+`ValueError` + +In case of an invalid value for the "scale", "mode" or +"distribution" arguments. +
+ + + +## Methods + +
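+Before the individual methods, here is a minimal sketch of the scaling rule
+`stddev = sqrt(scale / n)`; the shapes and the `scale` value are arbitrary
+illustrations.
+
+```python
+import tensorflow as tf
+
+# Minimal sketch: with mode='fan_in' and a (100, 10) weight matrix, n is the
+# fan-in (100), so the target standard deviation is sqrt(2.0 / 100) ~= 0.14.
+init = tf.keras.initializers.VarianceScaling(
+    scale=2.0, mode='fan_in', distribution='truncated_normal', seed=0)
+w = init(shape=(100, 10), dtype=tf.float32)
+print(float(tf.math.reduce_std(w)))   # close to 0.14 (up to sampling noise)
+```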

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. Only floating point types are +supported. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the dtype is not floating point. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/initializers/deserialize.md b/site/en/api_docs/python/tf/keras/initializers/deserialize.md new file mode 100644 index 00000000000..debc09d32b5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/deserialize.md @@ -0,0 +1,47 @@ +description: Return an Initializer object from its config. + +
+ + +
+ +# tf.keras.initializers.deserialize + + + + + + + + + +Return an `Initializer` object from its config. + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/initializers/get.md b/site/en/api_docs/python/tf/keras/initializers/get.md new file mode 100644 index 00000000000..35e66bb6e85 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/get.md @@ -0,0 +1,45 @@ +
+ + +
+ +# tf.keras.initializers.get + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/initializers/he_normal.md b/site/en/api_docs/python/tf/keras/initializers/he_normal.md new file mode 100644 index 00000000000..395c71ab9fe --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/he_normal.md @@ -0,0 +1,104 @@ +description: He normal initializer. + +
+ + +
+ +# tf.keras.initializers.he_normal + + + + + + + + + +He normal initializer. + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +It draws samples from a truncated normal distribution centered on 0 with +`stddev = sqrt(2 / fan_in)` where `fan_in` is the number of input units in the +weight tensor. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k, k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.initializers.he_normal()) +>>> v1 +>> v2 +>> make_variables(4, tf.initializers.RandomNormal()) +( + + + + + + + + +
+`seed` + +A Python integer. Used to seed the random generator. +
+ + + + + + + + + + + +
+A callable Initializer with `shape` and `dtype` arguments which generates a +tensor. +
+ + + +#### References: + +[He et al., 2015](https://www.cv-foundation.org/openaccess/content_iccv_2015/html/He_Delving_Deep_into_ICCV_2015_paper.html) +([pdf](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf)) diff --git a/site/en/api_docs/python/tf/keras/initializers/he_uniform.md b/site/en/api_docs/python/tf/keras/initializers/he_uniform.md new file mode 100644 index 00000000000..ba64f88c342 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/he_uniform.md @@ -0,0 +1,104 @@ +description: He uniform variance scaling initializer. + +
+ + +
+ +# tf.keras.initializers.he_uniform + + + + + + + + + +He uniform variance scaling initializer. + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +Draws samples from a uniform distribution within [-limit, limit] where `limit` +is `sqrt(6 / fan_in)` where `fan_in` is the number of input units in the +weight tensor. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k, k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.initializers.he_uniform()) +>>> v1 +>> v2 +>> make_variables(4, tf.initializers.RandomNormal()) +( + + + + + + + + +
+`seed` + +A Python integer. Used to seed the random generator. +
+ + + + + + + + + + + +
+A callable Initializer with `shape` and `dtype` arguments which generates a +tensor. +
+ + + +#### References: + +[He et al., 2015](https://www.cv-foundation.org/openaccess/content_iccv_2015/html/He_Delving_Deep_into_ICCV_2015_paper.html) +([pdf](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf)) diff --git a/site/en/api_docs/python/tf/keras/initializers/lecun_normal.md b/site/en/api_docs/python/tf/keras/initializers/lecun_normal.md new file mode 100644 index 00000000000..4037674209b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/lecun_normal.md @@ -0,0 +1,109 @@ +description: LeCun normal initializer. + +
+ + +
+ +# tf.keras.initializers.lecun_normal + + + + + + + + + +LeCun normal initializer. + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +Draws samples from a truncated normal distribution centered on 0 with `stddev += sqrt(1 / fan_in)` where `fan_in` is the number of input units in the weight +tensor. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k, k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.initializers.lecun_normal()) +>>> v1 +>> v2 +>> make_variables(4, tf.initializers.RandomNormal()) +( + + + + + + + + +
+`seed` + +A Python integer. Used to seed the random generator. +
+ + + + + + + + + + + +
+A callable Initializer with `shape` and `dtype` arguments which generates a +tensor. +
+ + + +#### References: + +- Self-Normalizing Neural Networks, +[Klambauer et al., 2017] +(https://papers.nips.cc/paper/6698-self-normalizing-neural-networks) +([pdf] +(https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf)) +- Efficient Backprop, +[Lecun et al., 1998](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) diff --git a/site/en/api_docs/python/tf/keras/initializers/lecun_uniform.md b/site/en/api_docs/python/tf/keras/initializers/lecun_uniform.md new file mode 100644 index 00000000000..73548a76821 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/lecun_uniform.md @@ -0,0 +1,107 @@ +description: LeCun uniform initializer. + +
+ + +
+ +# tf.keras.initializers.lecun_uniform + + + + + + + + + +LeCun uniform initializer. + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +Draws samples from a uniform distribution within [-limit, limit] where `limit` +is `sqrt(3 / fan_in)` where `fan_in` is the number of input units in the +weight tensor. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k, k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.initializers.lecun_uniform()) +>>> v1 +>> v2 +>> make_variables(4, tf.initializers.RandomNormal()) +( + + + + + + + + +
+`seed` + +A Python integer. Used to seed the random generator. +
+ + + + + + + + + + + +
+A callable Initializer with `shape` and `dtype` arguments which generates a +tensor. +
+ + + +#### References: + +- Self-Normalizing Neural Networks, +[Klambauer et al., 2017](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks) +([pdf](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf)) +- Efficient Backprop, +[Lecun et al., 1998](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) diff --git a/site/en/api_docs/python/tf/keras/initializers/serialize.md b/site/en/api_docs/python/tf/keras/initializers/serialize.md new file mode 100644 index 00000000000..3947c16762c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/initializers/serialize.md @@ -0,0 +1,45 @@ +
+ + +
+ +# tf.keras.initializers.serialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/layers.md b/site/en/api_docs/python/tf/keras/layers.md new file mode 100644 index 00000000000..57f00f75e56 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers.md @@ -0,0 +1,255 @@ +description: Keras layers API. + +
+ + +
+ +# Module: tf.keras.layers + + + + + + + + + +Keras layers API. + + + +## Modules + +[`experimental`](../../tf/keras/layers/experimental.md) module: Public API for tf.keras.layers.experimental namespace. + +## Classes + +[`class AbstractRNNCell`](../../tf/keras/layers/AbstractRNNCell.md): Abstract object representing an RNN cell. + +[`class Activation`](../../tf/keras/layers/Activation.md): Applies an activation function to an output. + +[`class ActivityRegularization`](../../tf/keras/layers/ActivityRegularization.md): Layer that applies an update to the cost function based input activity. + +[`class Add`](../../tf/keras/layers/Add.md): Layer that adds a list of inputs. + +[`class AdditiveAttention`](../../tf/keras/layers/AdditiveAttention.md): Additive attention layer, a.k.a. Bahdanau-style attention. + +[`class AlphaDropout`](../../tf/keras/layers/AlphaDropout.md): Applies Alpha Dropout to the input. + +[`class Attention`](../../tf/keras/layers/Attention.md): Dot-product attention layer, a.k.a. Luong-style attention. + +[`class Average`](../../tf/keras/layers/Average.md): Layer that averages a list of inputs element-wise. + +[`class AveragePooling1D`](../../tf/keras/layers/AveragePooling1D.md): Average pooling for temporal data. + +[`class AveragePooling2D`](../../tf/keras/layers/AveragePooling2D.md): Average pooling operation for spatial data. + +[`class AveragePooling3D`](../../tf/keras/layers/AveragePooling3D.md): Average pooling operation for 3D data (spatial or spatio-temporal). + +[`class AvgPool1D`](../../tf/keras/layers/AveragePooling1D.md): Average pooling for temporal data. + +[`class AvgPool2D`](../../tf/keras/layers/AveragePooling2D.md): Average pooling operation for spatial data. + +[`class AvgPool3D`](../../tf/keras/layers/AveragePooling3D.md): Average pooling operation for 3D data (spatial or spatio-temporal). + +[`class BatchNormalization`](../../tf/keras/layers/BatchNormalization.md): Normalize and scale inputs or activations. (Ioffe and Szegedy, 2014). + +[`class Bidirectional`](../../tf/keras/layers/Bidirectional.md): Bidirectional wrapper for RNNs. + +[`class Concatenate`](../../tf/keras/layers/Concatenate.md): Layer that concatenates a list of inputs. + +[`class Conv1D`](../../tf/keras/layers/Conv1D.md): 1D convolution layer (e.g. temporal convolution). + +[`class Conv2D`](../../tf/keras/layers/Conv2D.md): 2D convolution layer (e.g. spatial convolution over images). + +[`class Conv2DTranspose`](../../tf/keras/layers/Conv2DTranspose.md): Transposed convolution layer (sometimes called Deconvolution). + +[`class Conv3D`](../../tf/keras/layers/Conv3D.md): 3D convolution layer (e.g. spatial convolution over volumes). + +[`class Conv3DTranspose`](../../tf/keras/layers/Conv3DTranspose.md): Transposed convolution layer (sometimes called Deconvolution). + +[`class ConvLSTM2D`](../../tf/keras/layers/ConvLSTM2D.md): Convolutional LSTM. + +[`class Convolution1D`](../../tf/keras/layers/Conv1D.md): 1D convolution layer (e.g. temporal convolution). + +[`class Convolution2D`](../../tf/keras/layers/Conv2D.md): 2D convolution layer (e.g. spatial convolution over images). + +[`class Convolution2DTranspose`](../../tf/keras/layers/Conv2DTranspose.md): Transposed convolution layer (sometimes called Deconvolution). + +[`class Convolution3D`](../../tf/keras/layers/Conv3D.md): 3D convolution layer (e.g. spatial convolution over volumes). + +[`class Convolution3DTranspose`](../../tf/keras/layers/Conv3DTranspose.md): Transposed convolution layer (sometimes called Deconvolution). 
+ +[`class Cropping1D`](../../tf/keras/layers/Cropping1D.md): Cropping layer for 1D input (e.g. temporal sequence). + +[`class Cropping2D`](../../tf/keras/layers/Cropping2D.md): Cropping layer for 2D input (e.g. picture). + +[`class Cropping3D`](../../tf/keras/layers/Cropping3D.md): Cropping layer for 3D data (e.g. spatial or spatio-temporal). + +[`class Dense`](../../tf/keras/layers/Dense.md): Just your regular densely-connected NN layer. + +[`class DenseFeatures`](../../tf/keras/layers/DenseFeatures.md): A layer that produces a dense `Tensor` based on given `feature_columns`. + +[`class DepthwiseConv2D`](../../tf/keras/layers/DepthwiseConv2D.md): Depthwise separable 2D convolution. + +[`class Dot`](../../tf/keras/layers/Dot.md): Layer that computes a dot product between samples in two tensors. + +[`class Dropout`](../../tf/keras/layers/Dropout.md): Applies Dropout to the input. + +[`class ELU`](../../tf/keras/layers/ELU.md): Exponential Linear Unit. + +[`class Embedding`](../../tf/keras/layers/Embedding.md): Turns positive integers (indexes) into dense vectors of fixed size. + +[`class Flatten`](../../tf/keras/layers/Flatten.md): Flattens the input. Does not affect the batch size. + +[`class GRU`](../../tf/keras/layers/GRU.md): Gated Recurrent Unit - Cho et al. 2014. + +[`class GRUCell`](../../tf/keras/layers/GRUCell.md): Cell class for the GRU layer. + +[`class GaussianDropout`](../../tf/keras/layers/GaussianDropout.md): Apply multiplicative 1-centered Gaussian noise. + +[`class GaussianNoise`](../../tf/keras/layers/GaussianNoise.md): Apply additive zero-centered Gaussian noise. + +[`class GlobalAveragePooling1D`](../../tf/keras/layers/GlobalAveragePooling1D.md): Global average pooling operation for temporal data. + +[`class GlobalAveragePooling2D`](../../tf/keras/layers/GlobalAveragePooling2D.md): Global average pooling operation for spatial data. + +[`class GlobalAveragePooling3D`](../../tf/keras/layers/GlobalAveragePooling3D.md): Global Average pooling operation for 3D data. + +[`class GlobalAvgPool1D`](../../tf/keras/layers/GlobalAveragePooling1D.md): Global average pooling operation for temporal data. + +[`class GlobalAvgPool2D`](../../tf/keras/layers/GlobalAveragePooling2D.md): Global average pooling operation for spatial data. + +[`class GlobalAvgPool3D`](../../tf/keras/layers/GlobalAveragePooling3D.md): Global Average pooling operation for 3D data. + +[`class GlobalMaxPool1D`](../../tf/keras/layers/GlobalMaxPool1D.md): Global max pooling operation for 1D temporal data. + +[`class GlobalMaxPool2D`](../../tf/keras/layers/GlobalMaxPool2D.md): Global max pooling operation for spatial data. + +[`class GlobalMaxPool3D`](../../tf/keras/layers/GlobalMaxPool3D.md): Global Max pooling operation for 3D data. + +[`class GlobalMaxPooling1D`](../../tf/keras/layers/GlobalMaxPool1D.md): Global max pooling operation for 1D temporal data. + +[`class GlobalMaxPooling2D`](../../tf/keras/layers/GlobalMaxPool2D.md): Global max pooling operation for spatial data. + +[`class GlobalMaxPooling3D`](../../tf/keras/layers/GlobalMaxPool3D.md): Global Max pooling operation for 3D data. + +[`class InputLayer`](../../tf/keras/layers/InputLayer.md): Layer to be used as an entry point into a Network (a graph of layers). + +[`class InputSpec`](../../tf/keras/layers/InputSpec.md): Specifies the rank, dtype and shape of every input to a layer. + +[`class LSTM`](../../tf/keras/layers/LSTM.md): Long Short-Term Memory layer - Hochreiter 1997. 
+ +[`class LSTMCell`](../../tf/keras/layers/LSTMCell.md): Cell class for the LSTM layer. + +[`class Lambda`](../../tf/keras/layers/Lambda.md): Wraps arbitrary expressions as a `Layer` object. + +[`class Layer`](../../tf/keras/layers/Layer.md): This is the class from which all layers inherit. + +[`class LayerNormalization`](../../tf/keras/layers/LayerNormalization.md): Layer normalization layer (Ba et al., 2016). + +[`class LeakyReLU`](../../tf/keras/layers/LeakyReLU.md): Leaky version of a Rectified Linear Unit. + +[`class LocallyConnected1D`](../../tf/keras/layers/LocallyConnected1D.md): Locally-connected layer for 1D inputs. + +[`class LocallyConnected2D`](../../tf/keras/layers/LocallyConnected2D.md): Locally-connected layer for 2D inputs. + +[`class Masking`](../../tf/keras/layers/Masking.md): Masks a sequence by using a mask value to skip timesteps. + +[`class MaxPool1D`](../../tf/keras/layers/MaxPool1D.md): Max pooling operation for 1D temporal data. + +[`class MaxPool2D`](../../tf/keras/layers/MaxPool2D.md): Max pooling operation for 2D spatial data. + +[`class MaxPool3D`](../../tf/keras/layers/MaxPool3D.md): Max pooling operation for 3D data (spatial or spatio-temporal). + +[`class MaxPooling1D`](../../tf/keras/layers/MaxPool1D.md): Max pooling operation for 1D temporal data. + +[`class MaxPooling2D`](../../tf/keras/layers/MaxPool2D.md): Max pooling operation for 2D spatial data. + +[`class MaxPooling3D`](../../tf/keras/layers/MaxPool3D.md): Max pooling operation for 3D data (spatial or spatio-temporal). + +[`class Maximum`](../../tf/keras/layers/Maximum.md): Layer that computes the maximum (element-wise) a list of inputs. + +[`class Minimum`](../../tf/keras/layers/Minimum.md): Layer that computes the minimum (element-wise) a list of inputs. + +[`class Multiply`](../../tf/keras/layers/Multiply.md): Layer that multiplies (element-wise) a list of inputs. + +[`class PReLU`](../../tf/keras/layers/PReLU.md): Parametric Rectified Linear Unit. + +[`class Permute`](../../tf/keras/layers/Permute.md): Permutes the dimensions of the input according to a given pattern. + +[`class RNN`](../../tf/keras/layers/RNN.md): Base class for recurrent layers. + +[`class ReLU`](../../tf/keras/layers/ReLU.md): Rectified Linear Unit activation function. + +[`class RepeatVector`](../../tf/keras/layers/RepeatVector.md): Repeats the input n times. + +[`class Reshape`](../../tf/keras/layers/Reshape.md): Layer that reshapes inputs into the given shape. + +[`class SeparableConv1D`](../../tf/keras/layers/SeparableConv1D.md): Depthwise separable 1D convolution. + +[`class SeparableConv2D`](../../tf/keras/layers/SeparableConv2D.md): Depthwise separable 2D convolution. + +[`class SeparableConvolution1D`](../../tf/keras/layers/SeparableConv1D.md): Depthwise separable 1D convolution. + +[`class SeparableConvolution2D`](../../tf/keras/layers/SeparableConv2D.md): Depthwise separable 2D convolution. + +[`class SimpleRNN`](../../tf/keras/layers/SimpleRNN.md): Fully-connected RNN where the output is to be fed back to input. + +[`class SimpleRNNCell`](../../tf/keras/layers/SimpleRNNCell.md): Cell class for SimpleRNN. + +[`class Softmax`](../../tf/keras/layers/Softmax.md): Softmax activation function. + +[`class SpatialDropout1D`](../../tf/keras/layers/SpatialDropout1D.md): Spatial 1D version of Dropout. + +[`class SpatialDropout2D`](../../tf/keras/layers/SpatialDropout2D.md): Spatial 2D version of Dropout. + +[`class SpatialDropout3D`](../../tf/keras/layers/SpatialDropout3D.md): Spatial 3D version of Dropout. 
+ +[`class StackedRNNCells`](../../tf/keras/layers/StackedRNNCells.md): Wrapper allowing a stack of RNN cells to behave as a single cell. + +[`class Subtract`](../../tf/keras/layers/Subtract.md): Layer that subtracts two inputs. + +[`class ThresholdedReLU`](../../tf/keras/layers/ThresholdedReLU.md): Thresholded Rectified Linear Unit. + +[`class TimeDistributed`](../../tf/keras/layers/TimeDistributed.md): This wrapper allows to apply a layer to every temporal slice of an input. + +[`class UpSampling1D`](../../tf/keras/layers/UpSampling1D.md): Upsampling layer for 1D inputs. + +[`class UpSampling2D`](../../tf/keras/layers/UpSampling2D.md): Upsampling layer for 2D inputs. + +[`class UpSampling3D`](../../tf/keras/layers/UpSampling3D.md): Upsampling layer for 3D inputs. + +[`class Wrapper`](../../tf/keras/layers/Wrapper.md): Abstract wrapper base class. + +[`class ZeroPadding1D`](../../tf/keras/layers/ZeroPadding1D.md): Zero-padding layer for 1D input (e.g. temporal sequence). + +[`class ZeroPadding2D`](../../tf/keras/layers/ZeroPadding2D.md): Zero-padding layer for 2D input (e.g. picture). + +[`class ZeroPadding3D`](../../tf/keras/layers/ZeroPadding3D.md): Zero-padding layer for 3D data (spatial or spatio-temporal). + +## Functions + +[`Input(...)`](../../tf/keras/Input.md): `Input()` is used to instantiate a Keras tensor. + +[`add(...)`](../../tf/keras/layers/add.md): Functional interface to the tf.keras.layers.Add layer. + +[`average(...)`](../../tf/keras/layers/average.md): Functional interface to the tf.keras.layers.Average layer. + +[`concatenate(...)`](../../tf/keras/layers/concatenate.md): Functional interface to the `Concatenate` layer. + +[`deserialize(...)`](../../tf/keras/layers/deserialize.md): Instantiates a layer from a config dictionary. + +[`dot(...)`](../../tf/keras/layers/dot.md): Functional interface to the `Dot` layer. + +[`maximum(...)`](../../tf/keras/layers/maximum.md): Functional interface to compute maximum (element-wise) list of `inputs`. + +[`minimum(...)`](../../tf/keras/layers/minimum.md): Functional interface to the `Minimum` layer. + +[`multiply(...)`](../../tf/keras/layers/multiply.md): Functional interface to the `Multiply` layer. + +[`serialize(...)`](../../tf/keras/layers/serialize.md) + +[`subtract(...)`](../../tf/keras/layers/subtract.md): Functional interface to the `Subtract` layer. + diff --git a/site/en/api_docs/python/tf/keras/layers/AbstractRNNCell.md b/site/en/api_docs/python/tf/keras/layers/AbstractRNNCell.md new file mode 100644 index 00000000000..ba7a3732e63 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/AbstractRNNCell.md @@ -0,0 +1,149 @@ +description: Abstract object representing an RNN cell. + +
+ + + + + +
+ +# tf.keras.layers.AbstractRNNCell + + + + + + + + + +Abstract object representing an RNN cell. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn) +for details about the usage of RNN API. + +This is the base class for implementing RNN cells with custom behavior. + +Every `RNNCell` must have the properties below and implement `call` with +the signature `(output, next_state) = call(input, state)`. + +#### Examples: + + + +```python + class MinimalRNNCell(AbstractRNNCell): + + def __init__(self, units, **kwargs): + self.units = units + super(MinimalRNNCell, self).__init__(**kwargs) + + @property + def state_size(self): + return self.units + + def build(self, input_shape): + self.kernel = self.add_weight(shape=(input_shape[-1], self.units), + initializer='uniform', + name='kernel') + self.recurrent_kernel = self.add_weight( + shape=(self.units, self.units), + initializer='uniform', + name='recurrent_kernel') + self.built = True + + def call(self, inputs, states): + prev_output = states[0] + h = K.dot(inputs, self.kernel) + output = h + K.dot(prev_output, self.recurrent_kernel) + return output, output +``` + +This definition of cell differs from the definition used in the literature. +In the literature, 'cell' refers to an object with a single scalar output. +This definition refers to a horizontal array of such units. + +An RNN cell, in the most abstract setting, is anything that has +a state and performs some operation that takes a matrix of inputs. +This operation results in an output matrix with `self.output_size` columns. +If `self.state_size` is an integer, this operation also results in a new +state matrix with `self.state_size` columns. If `self.state_size` is a +(possibly nested tuple of) TensorShape object(s), then it should return a +matching structure of Tensors having shape `[batch_size].concatenate(s)` +for each `s` in `self.batch_size`. + + + + + + + + + + + + + + + +
+`output_size` + +Integer or TensorShape: size of outputs produced by this cell. +
+`state_size` + +Size(s) of state(s) used by this cell. + +It can be represented by an Integer, a TensorShape or a tuple of Integers +or TensorShapes. +
+ + + +## Methods + +
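+Before the method reference, here is a minimal, self-contained sketch of
+wrapping an `AbstractRNNCell` subclass in tf.keras.layers.RNN; the cell, its
+name and all sizes are illustrative only, not part of the API.
+
+```python
+import tensorflow as tf
+
+class PlusStateCell(tf.keras.layers.AbstractRNNCell):
+  """Toy cell: projects the input and adds the previous state."""
+
+  def __init__(self, units, **kwargs):
+    super(PlusStateCell, self).__init__(**kwargs)
+    self.units = units
+
+  @property
+  def state_size(self):
+    return self.units
+
+  @property
+  def output_size(self):
+    return self.units
+
+  def build(self, input_shape):
+    self.kernel = self.add_weight(
+        shape=(input_shape[-1], self.units), initializer='uniform',
+        name='kernel')
+    self.built = True
+
+  def call(self, inputs, states):
+    prev = states[0]
+    out = tf.matmul(inputs, self.kernel) + prev
+    return out, [out]
+
+# The RNN layer drives the cell over the time dimension.
+layer = tf.keras.layers.RNN(PlusStateCell(8))
+y = layer(tf.random.normal((4, 10, 16)))   # (batch, time, features)
+print(y.shape)                             # (4, 8)
+```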

get_initial_state

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/Activation.md b/site/en/api_docs/python/tf/keras/layers/Activation.md new file mode 100644 index 00000000000..010b8758760 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Activation.md @@ -0,0 +1,96 @@ +description: Applies an activation function to an output. + +
+ + + + +
+ +# tf.keras.layers.Activation + + + + + + + + + +Applies an activation function to an output. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + + + + + + + + + + +
+`activation` + +Activation function, such as tf.nn.relu, or string name of +built-in activation function, such as "relu". +
+ + + +#### Usage: + + + +``` +>>> layer = tf.keras.layers.Activation('relu') +>>> output = layer([-3.0, -1.0, 0.0, 2.0]) +>>> list(output.numpy()) +[0.0, 0.0, 0.0, 2.0] +>>> layer = tf.keras.layers.Activation(tf.nn.relu) +>>> output = layer([-3.0, -1.0, 0.0, 2.0]) +>>> list(output.numpy()) +[0.0, 0.0, 0.0, 2.0] +``` + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the batch axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as input. + + diff --git a/site/en/api_docs/python/tf/keras/layers/ActivityRegularization.md b/site/en/api_docs/python/tf/keras/layers/ActivityRegularization.md new file mode 100644 index 00000000000..9b136f38135 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/ActivityRegularization.md @@ -0,0 +1,87 @@ +description: Layer that applies an update to the cost function based input activity. + +
+ + + + +
+ +# tf.keras.layers.ActivityRegularization + + + + + + + + + +Layer that applies an update to the cost function based input activity. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + +
+`l1` + +L1 regularization factor (positive float). +
+`l2` + +L2 regularization factor (positive float). +
+ + + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as input. + + diff --git a/site/en/api_docs/python/tf/keras/layers/Add.md b/site/en/api_docs/python/tf/keras/layers/Add.md new file mode 100644 index 00000000000..3fb07b9f9bd --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Add.md @@ -0,0 +1,94 @@ +description: Layer that adds a list of inputs. + +
+ + + + +
+ +# tf.keras.layers.Add + + + + + + + + + +Layer that adds a list of inputs. + + + + + + + + + +It takes as input a list of tensors, +all of the same shape, and returns +a single tensor (also of the same shape). + +#### Examples: + + + +``` +>>> input_shape = (2, 3, 4) +>>> x1 = tf.random.normal(input_shape) +>>> x2 = tf.random.normal(input_shape) +>>> y = tf.keras.layers.Add()([x1, x2]) +>>> print(y.shape) +(2, 3, 4) +``` + +Used in a functional model: + +``` +>>> input1 = tf.keras.layers.Input(shape=(16,)) +>>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1) +>>> input2 = tf.keras.layers.Input(shape=(32,)) +>>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2) +>>> # equivalent to `added = tf.keras.layers.add([x1, x2])` +>>> added = tf.keras.layers.Add()([x1, x2]) +>>> out = tf.keras.layers.Dense(4)(added) +>>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out) +``` + + + + + + + + + + +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/AdditiveAttention.md b/site/en/api_docs/python/tf/keras/layers/AdditiveAttention.md new file mode 100644 index 00000000000..ae097922081 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/AdditiveAttention.md @@ -0,0 +1,172 @@ +description: Additive attention layer, a.k.a. Bahdanau-style attention. + +
+ + + + +
+ +# tf.keras.layers.AdditiveAttention + + + + + + + + + +Additive attention layer, a.k.a. Bahdanau-style attention. + + + + + + + + + +Inputs are `query` tensor of shape `[batch_size, Tq, dim]`, `value` tensor of +shape `[batch_size, Tv, dim]` and `key` tensor of shape +`[batch_size, Tv, dim]`. The calculation follows the steps: + +1. Reshape `query` and `value` into shapes `[batch_size, Tq, 1, dim]` + and `[batch_size, 1, Tv, dim]` respectively. +2. Calculate scores with shape `[batch_size, Tq, Tv]` as a non-linear + sum: `scores = tf.reduce_sum(tf.tanh(query + value), axis=-1)` +3. Use scores to calculate a distribution with shape + `[batch_size, Tq, Tv]`: `distribution = tf.nn.softmax(scores)`. +4. Use `distribution` to create a linear combination of `value` with + shape `batch_size, Tq, dim]`: + `return tf.matmul(distribution, value)`. + + + + + + + + + + + + + + + + +
+`use_scale` + +If `True`, will create a variable to scale the attention scores. +
+`causal` + +Boolean. Set to `True` for decoder self-attention. Adds a mask such +that position `i` cannot attend to positions `j > i`. This prevents the +flow of information from the future towards the past. +
+`dropout` + +Float between 0 and 1. Fraction of the units to drop for the +attention scores. +
+ + + +#### Call Arguments: + + + +* `inputs`: List of the following tensors: + * query: Query `Tensor` of shape `[batch_size, Tq, dim]`. + * value: Value `Tensor` of shape `[batch_size, Tv, dim]`. + * key: Optional key `Tensor` of shape `[batch_size, Tv, dim]`. If not + given, will use `value` for both `key` and `value`, which is the + most common case. +* `mask`: List of the following tensors: + * query_mask: A boolean mask `Tensor` of shape `[batch_size, Tq]`. + If given, the output will be zero at the positions where + `mask==False`. + * value_mask: A boolean mask `Tensor` of shape `[batch_size, Tv]`. + If given, will apply the mask such that values at positions where + `mask==False` do not contribute to the result. +* `training`: Python boolean indicating whether the layer should behave in + training mode (adding dropout) or in inference mode (no dropout). + + +#### Output shape: + + +Attention outputs of shape `[batch_size, Tq, dim]`. + + +The meaning of `query`, `value` and `key` depend on the application. In the +case of text similarity, for example, `query` is the sequence embeddings of +the first piece of text and `value` is the sequence embeddings of the second +piece of text. `key` is usually the same tensor as `value`. + +Here is a code example for using `AdditiveAttention` in a CNN+Attention +network: + +```python +# Variable-length int sequences. +query_input = tf.keras.Input(shape=(None,), dtype='int32') +value_input = tf.keras.Input(shape=(None,), dtype='int32') + +# Embedding lookup. +token_embedding = tf.keras.layers.Embedding(max_tokens, dimension) +# Query embeddings of shape [batch_size, Tq, dimension]. +query_embeddings = token_embedding(query_input) +# Value embeddings of shape [batch_size, Tv, dimension]. +value_embeddings = token_embedding(value_input) + +# CNN layer. +cnn_layer = tf.keras.layers.Conv1D( + filters=100, + kernel_size=4, + # Use 'same' padding so outputs have the same shape as inputs. + padding='same') +# Query encoding of shape [batch_size, Tq, filters]. +query_seq_encoding = cnn_layer(query_embeddings) +# Value encoding of shape [batch_size, Tv, filters]. +value_seq_encoding = cnn_layer(value_embeddings) + +# Query-value attention of shape [batch_size, Tq, filters]. +query_value_attention_seq = tf.keras.layers.AdditiveAttention()( + [query_seq_encoding, value_seq_encoding]) + +# Reduce over the sequence axis to produce encodings of shape +# [batch_size, filters]. +query_encoding = tf.keras.layers.GlobalAveragePooling1D()( + query_seq_encoding) +query_value_attention = tf.keras.layers.GlobalAveragePooling1D()( + query_value_attention_seq) + +# Concatenate query and document encodings to produce a DNN input layer. +input_layer = tf.keras.layers.Concatenate()( + [query_encoding, query_value_attention]) + +# Add DNN layers, and create Model. +# ... +``` + diff --git a/site/en/api_docs/python/tf/keras/layers/AlphaDropout.md b/site/en/api_docs/python/tf/keras/layers/AlphaDropout.md new file mode 100644 index 00000000000..155d9627dde --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/AlphaDropout.md @@ -0,0 +1,102 @@ +description: Applies Alpha Dropout to the input. + +
+ + + + +
+ +# tf.keras.layers.AlphaDropout + + + + + + + + + +Applies Alpha Dropout to the input. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +Alpha Dropout is a `Dropout` that keeps mean and variance of inputs +to their original values, in order to ensure the self-normalizing property +even after this dropout. +Alpha Dropout fits well to Scaled Exponential Linear Units +by randomly setting activations to the negative saturation value. + + + + + + + + + + + + + +
+`rate` + +float, drop probability (as with `Dropout`). +The multiplicative noise will have +standard deviation `sqrt(rate / (1 - rate))`. +
+`seed` + +A Python integer to use as random seed. +
+ + + +#### Call arguments: + + +* `inputs`: Input tensor (of any rank). +* `training`: Python boolean indicating whether the layer should behave in + training mode (adding dropout) or in inference mode (doing nothing). + + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as input. + + diff --git a/site/en/api_docs/python/tf/keras/layers/Attention.md b/site/en/api_docs/python/tf/keras/layers/Attention.md new file mode 100644 index 00000000000..76a3256f665 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Attention.md @@ -0,0 +1,170 @@ +description: Dot-product attention layer, a.k.a. Luong-style attention. + +
+ + + + +
+ +# tf.keras.layers.Attention + + + + + + + + + +Dot-product attention layer, a.k.a. Luong-style attention. + + + + + + + + + +Inputs are `query` tensor of shape `[batch_size, Tq, dim]`, `value` tensor of +shape `[batch_size, Tv, dim]` and `key` tensor of shape +`[batch_size, Tv, dim]`. The calculation follows the steps: + +1. Calculate scores with shape `[batch_size, Tq, Tv]` as a `query`-`key` dot + product: `scores = tf.matmul(query, key, transpose_b=True)`. +2. Use scores to calculate a distribution with shape + `[batch_size, Tq, Tv]`: `distribution = tf.nn.softmax(scores)`. +3. Use `distribution` to create a linear combination of `value` with + shape `[batch_size, Tq, dim]`: + `return tf.matmul(distribution, value)`. + + + + + + + + + + + + + + + + +
+`use_scale` + +If `True`, will create a scalar variable to scale the attention +scores. +
+`causal` + +Boolean. Set to `True` for decoder self-attention. Adds a mask such +that position `i` cannot attend to positions `j > i`. This prevents the +flow of information from the future towards the past. +
+`dropout` + +Float between 0 and 1. Fraction of the units to drop for the +attention scores. +
+ + + +#### Call Arguments: + + + +* `inputs`: List of the following tensors: + * query: Query `Tensor` of shape `[batch_size, Tq, dim]`. + * value: Value `Tensor` of shape `[batch_size, Tv, dim]`. + * key: Optional key `Tensor` of shape `[batch_size, Tv, dim]`. If not + given, will use `value` for both `key` and `value`, which is the + most common case. +* `mask`: List of the following tensors: + * query_mask: A boolean mask `Tensor` of shape `[batch_size, Tq]`. + If given, the output will be zero at the positions where + `mask==False`. + * value_mask: A boolean mask `Tensor` of shape `[batch_size, Tv]`. + If given, will apply the mask such that values at positions where + `mask==False` do not contribute to the result. +* `training`: Python boolean indicating whether the layer should behave in + training mode (adding dropout) or in inference mode (no dropout). + + +#### Output shape: + + +Attention outputs of shape `[batch_size, Tq, dim]`. + + +The meaning of `query`, `value` and `key` depend on the application. In the +case of text similarity, for example, `query` is the sequence embeddings of +the first piece of text and `value` is the sequence embeddings of the second +piece of text. `key` is usually the same tensor as `value`. + +Here is a code example for using `Attention` in a CNN+Attention network: + +```python +# Variable-length int sequences. +query_input = tf.keras.Input(shape=(None,), dtype='int32') +value_input = tf.keras.Input(shape=(None,), dtype='int32') + +# Embedding lookup. +token_embedding = tf.keras.layers.Embedding(max_tokens, dimension) +# Query embeddings of shape [batch_size, Tq, dimension]. +query_embeddings = token_embedding(query_input) +# Value embeddings of shape [batch_size, Tv, dimension]. +value_embeddings = token_embedding(value_input) + +# CNN layer. +cnn_layer = tf.keras.layers.Conv1D( + filters=100, + kernel_size=4, + # Use 'same' padding so outputs have the same shape as inputs. + padding='same') +# Query encoding of shape [batch_size, Tq, filters]. +query_seq_encoding = cnn_layer(query_embeddings) +# Value encoding of shape [batch_size, Tv, filters]. +value_seq_encoding = cnn_layer(value_embeddings) + +# Query-value attention of shape [batch_size, Tq, filters]. +query_value_attention_seq = tf.keras.layers.Attention()( + [query_seq_encoding, value_seq_encoding]) + +# Reduce over the sequence axis to produce encodings of shape +# [batch_size, filters]. +query_encoding = tf.keras.layers.GlobalAveragePooling1D()( + query_seq_encoding) +query_value_attention = tf.keras.layers.GlobalAveragePooling1D()( + query_value_attention_seq) + +# Concatenate query and document encodings to produce a DNN input layer. +input_layer = tf.keras.layers.Concatenate()( + [query_encoding, query_value_attention]) + +# Add DNN layers, and create Model. +# ... +``` + diff --git a/site/en/api_docs/python/tf/keras/layers/Average.md b/site/en/api_docs/python/tf/keras/layers/Average.md new file mode 100644 index 00000000000..f10fe6da5f0 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Average.md @@ -0,0 +1,109 @@ +description: Layer that averages a list of inputs element-wise. + +
+ + + + +
+ +# tf.keras.layers.Average + + + + + + + + + +Layer that averages a list of inputs element-wise. + + + + + + + + + +It takes as input a list of tensors, all of the same shape, and returns +a single tensor (also of the same shape). + +#### Example: + + + +``` +>>> x1 = np.ones((2, 2)) +>>> x2 = np.zeros((2, 2)) +>>> y = tf.keras.layers.Average()([x1, x2]) +>>> y.numpy().tolist() +[[0.5, 0.5], [0.5, 0.5]] +``` + +Usage in a functional model: + +``` +>>> input1 = tf.keras.layers.Input(shape=(16,)) +>>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1) +>>> input2 = tf.keras.layers.Input(shape=(32,)) +>>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2) +>>> avg = tf.keras.layers.Average()([x1, x2]) +>>> out = tf.keras.layers.Dense(4)(avg) +>>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out) +``` + + + + + + + + + + +
+`ValueError` + +If there is a shape mismatch between the inputs and the shapes +cannot be broadcast to match. +
+ + + + + + + + + + + + +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/AveragePooling1D.md b/site/en/api_docs/python/tf/keras/layers/AveragePooling1D.md new file mode 100644 index 00000000000..c8263f0dd9d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/AveragePooling1D.md @@ -0,0 +1,115 @@ +description: Average pooling for temporal data. + +
+ + + + +
+ +# tf.keras.layers.AveragePooling1D + + + + + + + + + +Average pooling for temporal data. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +Integer, size of the average pooling windows. +
+`strides` + +Integer, or None. Factor by which to downscale. +E.g. 2 will halve the input. +If None, it will default to `pool_size`. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, steps, features)` while `channels_first` +corresponds to inputs with shape +`(batch, features, steps)`. +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 3D tensor with shape `(batch_size, steps, features)`. +- If `data_format='channels_first'`: + 3D tensor with shape `(batch_size, features, steps)`. + + + +#### Output shape: + +- If `data_format='channels_last'`: + 3D tensor with shape `(batch_size, downsampled_steps, features)`. +- If `data_format='channels_first'`: + 3D tensor with shape `(batch_size, features, downsampled_steps)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/AveragePooling2D.md b/site/en/api_docs/python/tf/keras/layers/AveragePooling2D.md new file mode 100644 index 00000000000..a7a251ecf17 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/AveragePooling2D.md @@ -0,0 +1,121 @@ +description: Average pooling operation for spatial data. + +
+ + + + +
+ +# tf.keras.layers.AveragePooling2D + + + + + + + + + +Average pooling operation for spatial data. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +integer or tuple of 2 integers, +factors by which to downscale (vertical, horizontal). +`(2, 2)` will halve the input in both spatial dimensions. +If only one integer is specified, the same window length +will be used for both dimensions. +
+`strides` + +Integer, tuple of 2 integers, or None. +Strides values. +If None, it will default to `pool_size`. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 4D tensor with shape `(batch_size, rows, cols, channels)`. +- If `data_format='channels_first'`: + 4D tensor with shape `(batch_size, channels, rows, cols)`. + + + +#### Output shape: + +- If `data_format='channels_last'`: + 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`. +- If `data_format='channels_first'`: + 4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/AveragePooling3D.md b/site/en/api_docs/python/tf/keras/layers/AveragePooling3D.md new file mode 100644 index 00000000000..95f39575996 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/AveragePooling3D.md @@ -0,0 +1,121 @@ +description: Average pooling operation for 3D data (spatial or spatio-temporal). + +
+ + + + +
+ +# tf.keras.layers.AveragePooling3D + + + + + + + + + +Average pooling operation for 3D data (spatial or spatio-temporal). + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +tuple of 3 integers, +factors by which to downscale (dim1, dim2, dim3). +`(2, 2, 2)` will halve the size of the 3D input in each dimension. +
+`strides` + +tuple of 3 integers, or None. Strides values. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +while `channels_first` corresponds to inputs with shape +`(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 5D tensor with shape: + `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +- If `data_format='channels_first'`: + 5D tensor with shape: + `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)` + + + +#### Output shape: + +- If `data_format='channels_last'`: + 5D tensor with shape: + `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)` +- If `data_format='channels_first'`: + 5D tensor with shape: + `(batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)` + + diff --git a/site/en/api_docs/python/tf/keras/layers/BatchNormalization.md b/site/en/api_docs/python/tf/keras/layers/BatchNormalization.md new file mode 100644 index 00000000000..997af18b6bc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/BatchNormalization.md @@ -0,0 +1,337 @@ +description: Normalize and scale inputs or activations. (Ioffe and Szegedy, 2014). + +
+ + + + +
+ +# tf.keras.layers.BatchNormalization + + + + + + + + + +Normalize and scale inputs or activations. (Ioffe and Szegedy, 2014). + + + + + + + +Normalize the activations of the previous layer at each batch, +i.e. applies a transformation that maintains the mean activation +close to 0 and the activation standard deviation close to 1. + +Batch normalization differs from other layers in several key aspects: + +1) Adding BatchNormalization with `training=True` to a model causes the +result of one example to depend on the contents of all other examples in a +minibatch. Be careful when padding batches or masking examples, as these can +change the minibatch statistics and affect other examples. + +2) Updates to the weights (moving statistics) are based on the forward pass +of a model rather than the result of gradient computations. + +3) When performing inference using a model containing batch normalization, it +is generally (though not always) desirable to use accumulated statistics +rather than mini-batch statistics. This is accomplished by passing +`training=False` when calling the model, or using `model.predict`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`axis` + +Integer, the axis that should be normalized +(typically the features axis). +For instance, after a `Conv2D` layer with +`data_format="channels_first"`, +set `axis=1` in `BatchNormalization`. +
+`momentum` + +Momentum for the moving average. +
+`epsilon` + +Small float added to variance to avoid dividing by zero. +
+`center` + +If True, add offset of `beta` to normalized tensor. +If False, `beta` is ignored. +
+`scale` + +If True, multiply by `gamma`. +If False, `gamma` is not used. +When the next layer is linear (this also applies to e.g. nn.relu), +this can be disabled since the scaling +will be done by the next layer. +
+`beta_initializer` + +Initializer for the beta weight. +
+`gamma_initializer` + +Initializer for the gamma weight. +
+`moving_mean_initializer` + +Initializer for the moving mean. +
+`moving_variance_initializer` + +Initializer for the moving variance. +
+`beta_regularizer` + +Optional regularizer for the beta weight. +
+`gamma_regularizer` + +Optional regularizer for the gamma weight. +
+`beta_constraint` + +Optional constraint for the beta weight. +
+`gamma_constraint` + +Optional constraint for the gamma weight. +
+`renorm` + +Whether to use Batch Renormalization +(https://arxiv.org/abs/1702.03275). This adds extra variables during +training. The inference is the same for either value of this parameter. +
+`renorm_clipping` + +A dictionary that may map keys 'rmax', 'rmin', 'dmax' to +scalar `Tensors` used to clip the renorm correction. The correction +`(r, d)` is used as `corrected_value = normalized_value * r + d`, with +`r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, +dmax are set to inf, 0, inf, respectively. +
+`renorm_momentum` + +Momentum used to update the moving means and standard +deviations with renorm. Unlike `momentum`, this affects training +and should be neither too small (which would add noise) nor too large +(which would give stale estimates). Note that `momentum` is still applied +to get the means and variances for inference. +
+`fused` + +If `True`, use a faster, fused implementation, or raise a ValueError +if the fused implementation cannot be used. If `None`, use the faster +implementation if possible. If `False`, do not use the fused +implementation. +
+`trainable` + +Boolean, if `True` the variables will be marked as trainable. +
+`virtual_batch_size` + +An `int`. By default, `virtual_batch_size` is `None`, +which means batch normalization is performed across the whole batch. When +`virtual_batch_size` is not `None`, instead perform "Ghost Batch +Normalization", which creates virtual sub-batches which are each +normalized separately (with shared gamma, beta, and moving statistics). +Must divide the actual batch size during execution. +
+`adjustment` + +A function taking the `Tensor` containing the (dynamic) shape of +the input tensor and returning a pair (scale, bias) to apply to the +normalized values (before gamma and beta), only during training. For +example, if axis==-1, +`adjustment = lambda shape: ( +tf.random.uniform(shape[-1:], 0.93, 1.07), +tf.random.uniform(shape[-1:], -0.1, 0.1))` +will scale the normalized value by up to 7% up or down, then shift the +result by up to 0.1 (with independent scaling and bias for each feature +but shared across all examples), and finally apply gamma and/or beta. If +`None`, no adjustment is applied. Cannot be specified if +virtual_batch_size is specified. +
+ + + +#### Call arguments: + + +* `inputs`: Input tensor (of any rank). +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. + - `training=True`: The layer will normalize its inputs using the + mean and variance of the current batch of inputs. + - `training=False`: The layer will normalize its inputs using the + mean and variance of its moving statistics, learned during training. + + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as input. + + + +**About setting `layer.trainable = False` on a `BatchNormalization layer:** + +The meaning of setting `layer.trainable = False` is to freeze the layer, +i.e. its internal state will not change during training: +its trainable weights will not be updated +during `fit()` or `train_on_batch()`, and its state updates will not be run. + +Usually, this does not necessarily mean that the layer is run in inference +mode (which is normally controlled by the `training` argument that can +be passed when calling a layer). "Frozen state" and "inference mode" +are two separate concepts. + +However, in the case of the `BatchNormalization` layer, **setting +`trainable = False` on the layer means that the layer will be +subsequently run in inference mode** (meaning that it will use +the moving mean and the moving variance to normalize the current batch, +rather than using the mean and variance of the current batch). + +This behavior has been introduced in TensorFlow 2.0, in order +to enable `layer.trainable = False` to produce the most commonly +expected behavior in the convnet fine-tuning use case. + +#### Note that: + +- This behavior only occurs as of TensorFlow 2.0. In 1.*, + setting `layer.trainable = False` would freeze the layer but would + not switch it to inference mode. +- Setting `trainable` on an model containing other layers will + recursively set the `trainable` value of all inner layers. +- If the value of the `trainable` + attribute is changed after calling `compile()` on a model, + the new value doesn't take effect for this model + until `compile()` is called again. + + + +Normalization equations: + Consider the intermediate activations \(x\) of a mini-batch of size + \\(m\\): + + We can compute the mean and variance of the batch + + \\({\mu_B} = \frac{1}{m} \sum_{i=1}^{m} {x_i}\\) + + \\({\sigma_B^2} = \frac{1}{m} \sum_{i=1}^{m} ({x_i} - {\mu_B})^2\\) + + and then compute a normalized \\(x\\), including a small factor + \\({\epsilon}\\) for numerical stability. + + \\(\hat{x_i} = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}\\) + + And finally \\(\hat{x}\) is linearly transformed by \({\gamma}\\) + and \\({\beta}\\), which are learned parameters: + + \\({y_i} = {\gamma * \hat{x_i} + \beta}\\) + +#### References: + + +- [Batch Normalization: Accelerating Deep Network Training by Reducing + Internal Covariate Shift](https://arxiv.org/abs/1502.03167) + diff --git a/site/en/api_docs/python/tf/keras/layers/Bidirectional.md b/site/en/api_docs/python/tf/keras/layers/Bidirectional.md new file mode 100644 index 00000000000..16bd9034713 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Bidirectional.md @@ -0,0 +1,188 @@ +description: Bidirectional wrapper for RNNs. + +
+ + + + + +
+ +# tf.keras.layers.Bidirectional + + + + + + + + + +Bidirectional wrapper for RNNs. + +Inherits From: [`Wrapper`](../../../tf/keras/layers/Wrapper.md) + + + + + + + + + + + + + + + + + + + + + + + + + +
+`layer` + +keras.layers.RNN instance, such as keras.layers.LSTM or +keras.layers.GRU. It could also be a keras.layers.Layer instance +that meets the following criteria: +1. Be a sequence-processing layer (accepts 3D+ inputs). +2. Have a `go_backwards`, `return_sequences` and `return_state` +attribute (with the same semantics as for the `RNN` class). +3. Have an `input_spec` attribute. +4. Implement serialization via `get_config()` and `from_config()`. +Note that the recommended way to create new RNN layers is to write a +custom RNN cell and use it with keras.layers.RNN, instead of +subclassing keras.layers.Layer directly. +
+`merge_mode` + +Mode by which outputs of the forward and backward RNNs will be +combined. One of {'sum', 'mul', 'concat', 'ave', None}. If None, the +outputs will not be combined; they will be returned as a list. Default +value is 'concat'. +
+`backward_layer` + +Optional keras.layers.RNN or keras.layers.Layer instance +to be used to handle backwards input processing. If `backward_layer` is +not provided, the layer instance passed as the `layer` argument will be +used to generate the backward layer automatically. +Note that the provided `backward_layer` layer should have properties +matching those of the `layer` argument; in particular, it should have the +same values for `stateful`, `return_state`, `return_sequences`, etc. +In addition, `backward_layer` and `layer` should have different +`go_backwards` argument values. +A `ValueError` will be raised if these requirements are not met. +
+ + + +#### Call arguments: + +The call arguments for this layer are the same as those of the wrapped RNN + layer. + + + + + + + + + + + + +
+`ValueError` + +1. If `layer` or `backward_layer` is not a `Layer` instance. +2. In case of invalid `merge_mode` argument. +3. If `backward_layer` has mismatched properties compared to `layer`. +
+
+
+
+#### Examples:
+
+
+
+```python
+model = Sequential()
+model.add(Bidirectional(LSTM(10, return_sequences=True), input_shape=(5, 10)))
+model.add(Bidirectional(LSTM(10)))
+model.add(Dense(5))
+model.add(Activation('softmax'))
+model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
+
+# With custom backward layer
+model = Sequential()
+forward_layer = LSTM(10, return_sequences=True)
+backward_layer = LSTM(10, activation='relu', return_sequences=True,
+                      go_backwards=True)
+model.add(Bidirectional(forward_layer, backward_layer=backward_layer,
+                        input_shape=(5, 10)))
+model.add(Dense(5))
+model.add(Activation('softmax'))
+model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
+```
+
+
+`constraints` + + +
+ + + +## Methods + +

reset_states

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/Concatenate.md b/site/en/api_docs/python/tf/keras/layers/Concatenate.md new file mode 100644 index 00000000000..6286403648c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Concatenate.md @@ -0,0 +1,104 @@ +description: Layer that concatenates a list of inputs. + +
+ + + + +
+
+# tf.keras.layers.Concatenate
+
+
+
+
+
+
+
+
+Layer that concatenates a list of inputs.
+
+
+
+
+
+
+
+
+It takes as input a list of tensors, all of the same shape except
+for the concatenation axis, and returns a single tensor that is the
+concatenation of all inputs.
+
+```
+>>> x = np.arange(20).reshape(2, 2, 5)
+>>> print(x)
+[[[ 0  1  2  3  4]
+  [ 5  6  7  8  9]]
+ [[10 11 12 13 14]
+  [15 16 17 18 19]]]
+>>> y = np.arange(20, 30).reshape(2, 1, 5)
+>>> print(y)
+[[[20 21 22 23 24]]
+ [[25 26 27 28 29]]]
+>>> tf.keras.layers.Concatenate(axis=1)([x, y])
+<tf.Tensor: shape=(2, 3, 5), dtype=int64, numpy=
+array([[[ 0,  1,  2,  3,  4],
+        [ 5,  6,  7,  8,  9],
+        [20, 21, 22, 23, 24]],
+       [[10, 11, 12, 13, 14],
+        [15, 16, 17, 18, 19],
+        [25, 26, 27, 28, 29]]])>
+```
+
+```
+>>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))
+>>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))
+>>> concatted = tf.keras.layers.Concatenate()([x1, x2])
+>>> concatted.shape
+TensorShape([5, 16])
+```
+
+
+`axis` + +Axis along which to concatenate. +
+`**kwargs` + +standard layer keyword arguments. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/Conv1D.md b/site/en/api_docs/python/tf/keras/layers/Conv1D.md new file mode 100644 index 00000000000..3a1f63bd795 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Conv1D.md @@ -0,0 +1,266 @@ +description: 1D convolution layer (e.g. temporal convolution). + +
+ + + + +
+
+# tf.keras.layers.Conv1D
+
+
+
+
+
+
+
+
+1D convolution layer (e.g. temporal convolution).
+
+
+
+
+
+
+
+
+This layer creates a convolution kernel that is convolved
+with the layer input over a single spatial (or temporal) dimension
+to produce a tensor of outputs.
+If `use_bias` is True, a bias vector is created and added to the outputs.
+Finally, if `activation` is not `None`,
+it is applied to the outputs as well.
+
+When using this layer as the first layer in a model,
+provide an `input_shape` argument
+(tuple of integers or `None`, e.g.
+`(10, 128)` for sequences of 10 vectors of 128 dimensions each,
+or `(None, 128)` for variable-length sequences of 128-dimensional vectors).
+
+#### Examples:
+
+
+
+```
+>>> # The inputs are 128-length vectors with 10 timesteps, and the batch size
+>>> # is 4.
+>>> input_shape = (4, 10, 128)
+>>> x = tf.random.normal(input_shape)
+>>> y = tf.keras.layers.Conv1D(
+... 32, 3, activation='relu', input_shape=input_shape)(x)
+>>> print(y.shape)
+(4, 8, 32)
+```
+
+
+`filters` + +Integer, the dimensionality of the output space +(i.e. the number of output filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of a single integer, +specifying the length of the 1D convolution window. +
+`strides` + +An integer or tuple/list of a single integer, +specifying the stride length of the convolution. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"`, `"causal"` or `"same"` (case-insensitive). +`"causal"` results in causal (dilated) convolutions, e.g. `output[t]` +does not depend on `input[t+1:]`. Useful when modeling temporal data +where the model should not violate the temporal order. +See [WaveNet: A Generative Model for Raw Audio, section +2.1](https://arxiv.org/abs/1609.03499). +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +
+`dilation_rate` + +an integer or tuple/list of a single integer, specifying +the dilation rate to use for dilated convolution. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any `strides` value != 1. +
+`activation` + +Activation function to use. +If you don't specify anything, no activation is applied ( +see keras.activations). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix ( +see keras.initializers). +
+`bias_initializer` + +Initializer for the bias vector ( +see keras.initializers). +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix (see keras.regularizers). +
+`bias_regularizer` + +Regularizer function applied to the bias vector ( +see keras.regularizers). +
+`activity_regularizer` + +Regularizer function applied to +the output of the layer (its "activation") ( +see keras.regularizers). +
+`kernel_constraint` + +Constraint function applied to the kernel matrix ( +see keras.constraints). +
+`bias_constraint` + +Constraint function applied to the bias vector ( +see keras.constraints). +
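+
+As an additional illustrative sketch (shapes chosen arbitrarily),
+`padding='causal'` preserves the number of timesteps, so `output[t]` only
+depends on inputs up to step `t`:
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal((4, 10, 128))                    # (batch, steps, channels)
+y = tf.keras.layers.Conv1D(32, 3, padding='causal')(x)
+print(y.shape)                                        # (4, 10, 32)
+```
+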
+ + + +#### Input shape: + +3D tensor with shape: `(batch_size, steps, input_dim)` + + + +#### Output shape: + +3D tensor with shape: `(batch_size, new_steps, filters)` + `steps` value might have changed due to padding or strides. + + + + + + + + + + + +
+A tensor of rank 3 representing +`activation(conv1d(inputs, kernel) + bias)`. +
+ + + + + + + + + + + + +
+`ValueError` + +when both `strides` > 1 and `dilation_rate` > 1. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/Conv2D.md b/site/en/api_docs/python/tf/keras/layers/Conv2D.md new file mode 100644 index 00000000000..31df709211e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Conv2D.md @@ -0,0 +1,308 @@ +description: 2D convolution layer (e.g. spatial convolution over images). + +
+ + + + +
+ +# tf.keras.layers.Conv2D + + + + + + + + + +2D convolution layer (e.g. spatial convolution over images). + + + + + + + + + +This layer creates a convolution kernel that is convolved +with the layer input to produce a tensor of +outputs. If `use_bias` is True, +a bias vector is created and added to the outputs. Finally, if +`activation` is not `None`, it is applied to the outputs as well. + +When using this layer as the first layer in a model, +provide the keyword argument `input_shape` +(tuple of integers, does not include the sample axis), +e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures +in `data_format="channels_last"`. + +#### Examples: + + + +``` +>>> # The inputs are 28x28 RGB images with `channels_last` and the batch +>>> # size is 4. +>>> input_shape = (4, 28, 28, 3) +>>> x = tf.random.normal(input_shape) +>>> y = tf.keras.layers.Conv2D( +... 2, 3, activation='relu', input_shape=input_shape)(x) +>>> print(y.shape) +(4, 26, 26, 2) +``` + +``` +>>> # With `dilation_rate` as 2. +>>> input_shape = (4, 28, 28, 3) +>>> x = tf.random.normal(input_shape) +>>> y = tf.keras.layers.Conv2D( +... 2, 3, activation='relu', dilation_rate=2, input_shape=input_shape)(x) +>>> print(y.shape) +(4, 24, 24, 2) +``` + +``` +>>> # With `padding` as "same". +>>> input_shape = (4, 28, 28, 3) +>>> x = tf.random.normal(input_shape) +>>> y = tf.keras.layers.Conv2D( +... 2, 3, activation='relu', padding="same", input_shape=input_shape)(x) +>>> print(y.shape) +(4, 28, 28, 2) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space +(i.e. the number of output filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 2 integers, specifying the +height and width of the 2D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the convolution along the height and width. +Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +one of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch_size, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+`dilation_rate` + +an integer or tuple/list of 2 integers, specifying +the dilation rate to use for dilated convolution. +Can be a single integer to specify the same value for +all spatial dimensions. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`activation` + +Activation function to use. +If you don't specify anything, no activation is applied ( +see keras.activations). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix ( +see keras.initializers). +
+`bias_initializer` + +Initializer for the bias vector ( +see keras.initializers). +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix (see keras.regularizers). +
+`bias_regularizer` + +Regularizer function applied to the bias vector ( +see keras.regularizers). +
+`activity_regularizer` + +Regularizer function applied to +the output of the layer (its "activation") ( +see keras.regularizers). +
+`kernel_constraint` + +Constraint function applied to the kernel matrix ( +see keras.constraints). +
+`bias_constraint` + +Constraint function applied to the bias vector ( +see keras.constraints). +
+ + + +#### Input shape: + +4D tensor with shape: +`(batch_size, channels, rows, cols)` if data_format='channels_first' +or 4D tensor with shape: +`(batch_size, rows, cols, channels)` if data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: +`(batch_size, filters, new_rows, new_cols)` if data_format='channels_first' +or 4D tensor with shape: +`(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'. +`rows` and `cols` values might have changed due to padding. + + + + + + + + + + + +
+A tensor of rank 4 representing +`activation(conv2d(inputs, kernel) + bias)`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `padding` is "causal". +
+`ValueError` + +when both `strides` > 1 and `dilation_rate` > 1. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/Conv2DTranspose.md b/site/en/api_docs/python/tf/keras/layers/Conv2DTranspose.md new file mode 100644 index 00000000000..41c6387bbd1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Conv2DTranspose.md @@ -0,0 +1,303 @@ +description: Transposed convolution layer (sometimes called Deconvolution). + +
+ + + + +
+ +# tf.keras.layers.Conv2DTranspose + + + + + + + + + +Transposed convolution layer (sometimes called Deconvolution). + +Inherits From: [`Conv2D`](../../../tf/keras/layers/Conv2D.md) + + + + + + + + + +The need for transposed convolutions generally arises +from the desire to use a transformation going in the opposite direction +of a normal convolution, i.e., from something that has the shape of the +output of some convolution to something that has the shape of its input +while maintaining a connectivity pattern that is compatible with +said convolution. + +When using this layer as the first layer in a model, +provide the keyword argument `input_shape` +(tuple of integers, does not include the sample axis), +e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures +in `data_format="channels_last"`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space +(i.e. the number of output filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 2 integers, specifying the +height and width of the 2D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the convolution along the height and width. +Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +one of `"valid"` or `"same"` (case-insensitive). +
+`output_padding` + +An integer or tuple/list of 2 integers, +specifying the amount of padding along the height and width +of the output tensor. +Can be a single integer to specify the same value for all +spatial dimensions. +The amount of output padding along a given dimension must be +lower than the stride along that same dimension. +If set to `None` (default), the output shape is inferred. +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch_size, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+`dilation_rate` + +an integer or tuple/list of 2 integers, specifying +the dilation rate to use for dilated convolution. +Can be a single integer to specify the same value for +all spatial dimensions. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`activation` + +Activation function to use. +If you don't specify anything, no activation is applied ( +see keras.activations). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix ( +see keras.initializers). +
+`bias_initializer` + +Initializer for the bias vector ( +see keras.initializers). +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix (see keras.regularizers). +
+`bias_regularizer` + +Regularizer function applied to the bias vector ( +see keras.regularizers). +
+`activity_regularizer` + +Regularizer function applied to +the output of the layer (its "activation") (see keras.regularizers). +
+`kernel_constraint` + +Constraint function applied to the kernel matrix ( +see keras.constraints). +
+`bias_constraint` + +Constraint function applied to the bias vector ( +see keras.constraints). +
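+
+As an illustrative sketch (all shapes and argument values here are arbitrary),
+a stride-2 transposed convolution with `padding='same'` doubles the spatial
+dimensions (28x28 -> 56x56):
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal((4, 28, 28, 3))
+y = tf.keras.layers.Conv2DTranspose(2, 3, strides=2, padding='same')(x)
+print(y.shape)  # (4, 56, 56, 2)
+```
+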
+ + + +#### Input shape: + +4D tensor with shape: +`(batch_size, channels, rows, cols)` if data_format='channels_first' +or 4D tensor with shape: +`(batch_size, rows, cols, channels)` if data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: +`(batch_size, filters, new_rows, new_cols)` if data_format='channels_first' +or 4D tensor with shape: +`(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'. +`rows` and `cols` values might have changed due to padding. +If `output_padding` is specified: +``` +new_rows = ((rows - 1) * strides[0] + kernel_size[0] - 2 * padding[0] + +output_padding[0]) +new_cols = ((cols - 1) * strides[1] + kernel_size[1] - 2 * padding[1] + +output_padding[1]) +``` + + + + + + + + + + + +
+A tensor of rank 4 representing +`activation(conv2dtranspose(inputs, kernel) + bias)`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `padding` is "causal". +
+`ValueError` + +when both `strides` > 1 and `dilation_rate` > 1. +
+ + + +#### References: + +- [A guide to convolution arithmetic for deep + learning](https://arxiv.org/abs/1603.07285v1) +- [Deconvolutional + Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf) + + diff --git a/site/en/api_docs/python/tf/keras/layers/Conv3D.md b/site/en/api_docs/python/tf/keras/layers/Conv3D.md new file mode 100644 index 00000000000..335016f509a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Conv3D.md @@ -0,0 +1,295 @@ +description: 3D convolution layer (e.g. spatial convolution over volumes). + +
+ + + + +
+ +# tf.keras.layers.Conv3D + + + + + + + + + +3D convolution layer (e.g. spatial convolution over volumes). + + + + + + + + + +This layer creates a convolution kernel that is convolved +with the layer input to produce a tensor of +outputs. If `use_bias` is True, +a bias vector is created and added to the outputs. Finally, if +`activation` is not `None`, it is applied to the outputs as well. + +When using this layer as the first layer in a model, +provide the keyword argument `input_shape` +(tuple of integers, does not include the sample axis), +e.g. `input_shape=(128, 128, 128, 1)` for 128x128x128 volumes +with a single channel, +in `data_format="channels_last"`. + +#### Examples: + + + +``` +>>> # The inputs are 28x28x28 volumes with a single channel, and the +>>> # batch size is 4 +>>> input_shape =(4, 28, 28, 28, 1) +>>> x = tf.random.normal(input_shape) +>>> y = tf.keras.layers.Conv3D( +... 2, 3, activation='relu', input_shape=input_shape)(x) +>>> print(y.shape) +(4, 26, 26, 26, 2) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space +(i.e. the number of output filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 3 integers, specifying the +depth, height and width of the 3D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 3 integers, +specifying the strides of the convolution along each spatial +dimension. +Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +one of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +while `channels_first` corresponds to inputs with shape +`(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+`dilation_rate` + +an integer or tuple/list of 3 integers, specifying +the dilation rate to use for dilated convolution. +Can be a single integer to specify the same value for +all spatial dimensions. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`activation` + +Activation function to use. +If you don't specify anything, no activation is applied ( +see keras.activations). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix ( +see keras.initializers). +
+`bias_initializer` + +Initializer for the bias vector ( +see keras.initializers). +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix ( +see keras.regularizers). +
+`bias_regularizer` + +Regularizer function applied to the bias vector ( +see keras.regularizers). +
+`activity_regularizer` + +Regularizer function applied to +the output of the layer (its "activation") ( +see keras.regularizers). +
+`kernel_constraint` + +Constraint function applied to the kernel matrix ( +see keras.constraints). +
+`bias_constraint` + +Constraint function applied to the bias vector ( +see keras.constraints). +
+ + + +#### Input shape: + +5D tensor with shape: +`(batch_size, channels, conv_dim1, conv_dim2, conv_dim3)` if + data_format='channels_first' +or 5D tensor with shape: +`(batch_size, conv_dim1, conv_dim2, conv_dim3, channels)` if + data_format='channels_last'. + + + +#### Output shape: + +5D tensor with shape: +`(batch_size, filters, new_conv_dim1, new_conv_dim2, new_conv_dim3)` if + data_format='channels_first' +or 5D tensor with shape: +`(batch_size, new_conv_dim1, new_conv_dim2, new_conv_dim3, filters)` if + data_format='channels_last'. +`new_conv_dim1`, `new_conv_dim2` and `new_conv_dim3` values might have + changed due to padding. + + + + + + + + + + + +
+A tensor of rank 5 representing +`activation(conv3d(inputs, kernel) + bias)`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `padding` is "causal". +
+`ValueError` + +when both `strides` > 1 and `dilation_rate` > 1. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/Conv3DTranspose.md b/site/en/api_docs/python/tf/keras/layers/Conv3DTranspose.md new file mode 100644 index 00000000000..d97534e9936 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Conv3DTranspose.md @@ -0,0 +1,308 @@ +description: Transposed convolution layer (sometimes called Deconvolution). + +
+ + + + +
+ +# tf.keras.layers.Conv3DTranspose + + + + + + + + + +Transposed convolution layer (sometimes called Deconvolution). + +Inherits From: [`Conv3D`](../../../tf/keras/layers/Conv3D.md) + + + + + + + + + +The need for transposed convolutions generally arises +from the desire to use a transformation going in the opposite direction +of a normal convolution, i.e., from something that has the shape of the +output of some convolution to something that has the shape of its input +while maintaining a connectivity pattern that is compatible with +said convolution. + +When using this layer as the first layer in a model, +provide the keyword argument `input_shape` +(tuple of integers, does not include the sample axis), +e.g. `input_shape=(128, 128, 128, 3)` for a 128x128x128 volume with 3 channels +if `data_format="channels_last"`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space +(i.e. the number of output filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 3 integers, specifying the +depth, height and width of the 3D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 3 integers, +specifying the strides of the convolution along the depth, height +and width. +Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +one of `"valid"` or `"same"` (case-insensitive). +
+`output_padding` + +An integer or tuple/list of 3 integers, +specifying the amount of padding along the depth, height, and +width. +Can be a single integer to specify the same value for all +spatial dimensions. +The amount of output padding along a given dimension must be +lower than the stride along that same dimension. +If set to `None` (default), the output shape is inferred. +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, depth, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch_size, channels, depth, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+`dilation_rate` + +an integer or tuple/list of 3 integers, specifying +the dilation rate to use for dilated convolution. +Can be a single integer to specify the same value for +all spatial dimensions. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`activation` + +Activation function to use. +If you don't specify anything, no activation is applied ( +see keras.activations). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix. +
+`bias_initializer` + +Initializer for the bias vector. +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix ( +see keras.regularizers). +
+`bias_regularizer` + +Regularizer function applied to the bias vector ( +see keras.regularizers). +
+`activity_regularizer` + +Regularizer function applied to +the output of the layer (its "activation") ( +see keras.regularizers). +
+`kernel_constraint` + +Constraint function applied to the kernel matrix ( +see keras.constraints). +
+`bias_constraint` + +Constraint function applied to the bias vector ( +see keras.constraints). +
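+
+As an illustrative sketch (shapes and argument values chosen arbitrarily),
+upsampling a small volume with stride 2 and `padding='same'` doubles each
+spatial dimension:
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal((4, 10, 10, 10, 3))
+y = tf.keras.layers.Conv3DTranspose(2, 3, strides=2, padding='same')(x)
+print(y.shape)  # (4, 20, 20, 20, 2)
+```
+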
+
+
+
+#### Input shape:
+
+5D tensor with shape:
+`(batch_size, channels, depth, rows, cols)` if data_format='channels_first'
+or 5D tensor with shape:
+`(batch_size, depth, rows, cols, channels)` if data_format='channels_last'.
+
+
+
+#### Output shape:
+
+5D tensor with shape:
+`(batch_size, filters, new_depth, new_rows, new_cols)` if
+  data_format='channels_first'
+or 5D tensor with shape:
+`(batch_size, new_depth, new_rows, new_cols, filters)` if
+  data_format='channels_last'.
+`depth`, `rows` and `cols` values might have changed due to padding.
+If `output_padding` is specified:
+```
+new_depth = ((depth - 1) * strides[0] + kernel_size[0] - 2 * padding[0] +
+output_padding[0])
+new_rows = ((rows - 1) * strides[1] + kernel_size[1] - 2 * padding[1] +
+output_padding[1])
+new_cols = ((cols - 1) * strides[2] + kernel_size[2] - 2 * padding[2] +
+output_padding[2])
+```
+
+
+A tensor of rank 5 representing +`activation(conv3dtranspose(inputs, kernel) + bias)`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `padding` is "causal". +
+`ValueError` + +when both `strides` > 1 and `dilation_rate` > 1. +
+ + + +#### References: + +- [A guide to convolution arithmetic for deep + learning](https://arxiv.org/abs/1603.07285v1) +- [Deconvolutional + Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf) + + diff --git a/site/en/api_docs/python/tf/keras/layers/ConvLSTM2D.md b/site/en/api_docs/python/tf/keras/layers/ConvLSTM2D.md new file mode 100644 index 00000000000..6b9560ab817 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/ConvLSTM2D.md @@ -0,0 +1,574 @@ +description: Convolutional LSTM. + +
+ + + + + +
+ +# tf.keras.layers.ConvLSTM2D + + + + + + + + + +Convolutional LSTM. + + + + + + + + + +It is similar to an LSTM layer, but the input transformations +and recurrent transformations are both convolutional. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space +(i.e. the number of output filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of n integers, specifying the +dimensions of the convolution window. +
+`strides` + +An integer or tuple/list of n integers, +specifying the strides of the convolution. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, time, ..., channels)` +while `channels_first` corresponds to +inputs with shape `(batch, time, channels, ...)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+`dilation_rate` + +An integer or tuple/list of n integers, specifying +the dilation rate to use for dilated convolution. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any `strides` value != 1. +
+`activation` + +Activation function to use. +By default hyperbolic tangent activation function is applied +(`tanh(x)`). +
+`recurrent_activation` + +Activation function to use +for the recurrent step. +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, +used for the linear transformation of the inputs. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` +weights matrix, +used for the linear transformation of the recurrent state. +
+`bias_initializer` + +Initializer for the bias vector. +
+`unit_forget_bias`
+
+Boolean.
+If True, add 1 to the bias of the forget gate at initialization.
+Use in combination with `bias_initializer="zeros"`.
+This is recommended in
+[Jozefowicz et al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).
+
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix. +
+`recurrent_regularizer` + +Regularizer function applied to +the `recurrent_kernel` weights matrix. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. +
+`activity_regularizer`
+
+Regularizer function applied to the output of the layer (its "activation").
+
+`kernel_constraint` + +Constraint function applied to +the `kernel` weights matrix. +
+`recurrent_constraint` + +Constraint function applied to +the `recurrent_kernel` weights matrix. +
+`bias_constraint` + +Constraint function applied to the bias vector. +
+`return_sequences` + +Boolean. Whether to return the last output +in the output sequence, or the full sequence. +
+`go_backwards` + +Boolean (default False). +If True, process the input sequence backwards. +
+`stateful` + +Boolean (default False). If True, the last state +for each sample at index i in a batch will be used as initial +state for the sample of index i in the following batch. +
+`dropout` + +Float between 0 and 1. +Fraction of the units to drop for +the linear transformation of the inputs. +
+`recurrent_dropout` + +Float between 0 and 1. +Fraction of the units to drop for +the linear transformation of the recurrent state. +
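+
+As an illustrative sketch (shapes chosen arbitrarily): the input is a batch of
+frame sequences, and with the default `return_sequences=False` only the last
+hidden state is returned:
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal((32, 10, 8, 8, 3))      # (batch, time, rows, cols, channels)
+y = tf.keras.layers.ConvLSTM2D(16, 3, padding='same')(x)
+print(y.shape)                               # (32, 8, 8, 16)
+```
+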
+
+
+
+#### Call arguments:
+
+
+* `inputs`: A 5D tensor.
+* `mask`: Binary tensor of shape `(samples, timesteps)` indicating whether
+  a given timestep should be masked.
+* `training`: Python boolean indicating whether the layer should behave in
+  training mode or in inference mode. This argument is passed to the cell
+  when calling it. This is only relevant if `dropout` or `recurrent_dropout`
+  are set.
+* `initial_state`: List of initial state tensors to be passed to the first
+  call of the cell.
+
+
+#### Input shape:
+
+- If data_format='channels_first'
+  5D tensor with shape:
+  `(samples, time, channels, rows, cols)`
+- If data_format='channels_last'
+  5D tensor with shape:
+  `(samples, time, rows, cols, channels)`
+
+
+
+#### Output shape:
+
+- If `return_sequences`
+  - If data_format='channels_first'
+    5D tensor with shape:
+    `(samples, time, filters, output_row, output_col)`
+  - If data_format='channels_last'
+    5D tensor with shape:
+    `(samples, time, output_row, output_col, filters)`
+- Else
+  - If data_format='channels_first'
+    4D tensor with shape:
+    `(samples, filters, output_row, output_col)`
+  - If data_format='channels_last'
+    4D tensor with shape:
+    `(samples, output_row, output_col, filters)`
+  where `output_row` and `output_col` depend on the shape of the filter and
+  the padding.
+
+
+`ValueError` + +in case of invalid constructor arguments. +
+ + + +#### References: + +- [Convolutional LSTM Network: A Machine Learning Approach for +Precipitation Nowcasting](http://arxiv.org/abs/1506.04214v1) +The current implementation does not include the feedback loop on the +cells output. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`activation` + + +
+`bias_constraint` + + +
+`bias_initializer` + + +
+`bias_regularizer` + + +
+`data_format` + + +
+`dilation_rate` + + +
+`dropout` + + +
+`filters` + + +
+`kernel_constraint` + + +
+`kernel_initializer` + + +
+`kernel_regularizer` + + +
+`kernel_size` + + +
+`padding` + + +
+`recurrent_activation` + + +
+`recurrent_constraint` + + +
+`recurrent_dropout` + + +
+`recurrent_initializer` + + +
+`recurrent_regularizer` + + +
+`states` + + +
+`strides` + + +
+`unit_forget_bias` + + +
+`use_bias` + + +
+ + + +## Methods + +

reset_states

+
+View source
+
+
+
+Reset the recorded states for the stateful RNN layer.
+
+Can only be used when the RNN layer is constructed with `stateful` = `True`.
+Args:
+  states: Numpy arrays that contain the value for the initial state, which
+    will be fed to the cell at the first time step. When the value is None,
+    a zero-filled numpy array will be created based on the cell state size.
+
+
Raises
+`AttributeError` + +When the RNN layer is not stateful. +
+`ValueError` + +When the batch size of the RNN layer is unknown. +
+`ValueError`
+
+When the input numpy array is not compatible with the RNN
+layer state, either size-wise or dtype-wise.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/Cropping1D.md b/site/en/api_docs/python/tf/keras/layers/Cropping1D.md new file mode 100644 index 00000000000..fe4e405b10d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Cropping1D.md @@ -0,0 +1,103 @@ +description: Cropping layer for 1D input (e.g. temporal sequence). + +
+ + + + +
+ +# tf.keras.layers.Cropping1D + + + + + + + + + +Cropping layer for 1D input (e.g. temporal sequence). + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +It crops along the time dimension (axis 1). + +#### Examples: + + + +``` +>>> input_shape = (2, 3, 2) +>>> x = np.arange(np.prod(input_shape)).reshape(input_shape) +>>> print(x) +[[[ 0 1] + [ 2 3] + [ 4 5]] + [[ 6 7] + [ 8 9] + [10 11]]] +>>> y = tf.keras.layers.Cropping1D(cropping=1)(x) +>>> print(y) +tf.Tensor( + [[[2 3]] + [[8 9]]], shape=(2, 1, 2), dtype=int64) +``` + + + + + + + + + + +
+`cropping` + +Int or tuple of int (length 2) +How many units should be trimmed off at the beginning and end of +the cropping dimension (axis 1). +If a single int is provided, the same value will be used for both. +
+ + + +#### Input shape: + +3D tensor with shape `(batch_size, axis_to_crop, features)` + + + +#### Output shape: + +3D tensor with shape `(batch_size, cropped_axis, features)` + + diff --git a/site/en/api_docs/python/tf/keras/layers/Cropping2D.md b/site/en/api_docs/python/tf/keras/layers/Cropping2D.md new file mode 100644 index 00000000000..fd3ee757be6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Cropping2D.md @@ -0,0 +1,124 @@ +description: Cropping layer for 2D input (e.g. picture). + +
+ + + + +
+ +# tf.keras.layers.Cropping2D + + + + + + + + + +Cropping layer for 2D input (e.g. picture). + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +It crops along spatial dimensions, i.e. height and width. + +#### Examples: + + + +``` +>>> input_shape = (2, 28, 28, 3) +>>> x = np.arange(np.prod(input_shape)).reshape(input_shape) +>>> y = tf.keras.layers.Cropping2D(cropping=((2, 2), (4, 4)))(x) +>>> print(y.shape) +(2, 24, 20, 3) +``` + + + + + + + + + + + + + +
+`cropping` + +Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints. +- If int: the same symmetric cropping +is applied to height and width. +- If tuple of 2 ints: +interpreted as two different +symmetric cropping values for height and width: +`(symmetric_height_crop, symmetric_width_crop)`. +- If tuple of 2 tuples of 2 ints: +interpreted as +`((top_crop, bottom_crop), (left_crop, right_crop))` +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch_size, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +4D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, rows, cols, channels)` +- If `data_format` is `"channels_first"`: + `(batch_size, channels, rows, cols)` + + + +#### Output shape: + +4D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, cropped_rows, cropped_cols, channels)` +- If `data_format` is `"channels_first"`: + `(batch_size, channels, cropped_rows, cropped_cols)` + + diff --git a/site/en/api_docs/python/tf/keras/layers/Cropping3D.md b/site/en/api_docs/python/tf/keras/layers/Cropping3D.md new file mode 100644 index 00000000000..87db16fc2d3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Cropping3D.md @@ -0,0 +1,123 @@ +description: Cropping layer for 3D data (e.g. spatial or spatio-temporal). + +
+ + + + +
+ +# tf.keras.layers.Cropping3D + + + + + + + + + +Cropping layer for 3D data (e.g. spatial or spatio-temporal). + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + Examples: + +``` +>>> input_shape = (2, 28, 28, 10, 3) +>>> x = np.arange(np.prod(input_shape)).reshape(input_shape) +>>> y = tf.keras.layers.Cropping3D(cropping=(2, 4, 2))(x) +>>> print(y.shape) +(2, 24, 20, 6, 3) +``` + + + + + + + + + + + + + +
+`cropping`
+
+Int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints.
+- If int: the same symmetric cropping
+is applied to depth, height, and width.
+- If tuple of 3 ints: interpreted as three different
+symmetric cropping values for depth, height, and width:
+`(symmetric_dim1_crop, symmetric_dim2_crop, symmetric_dim3_crop)`.
+- If tuple of 3 tuples of 2 ints: interpreted as
+`((left_dim1_crop, right_dim1_crop), (left_dim2_crop,
+right_dim2_crop), (left_dim3_crop, right_dim3_crop))`
+
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +while `channels_first` corresponds to inputs with shape +`(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +5D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, first_axis_to_crop, second_axis_to_crop, third_axis_to_crop, + depth)` +- If `data_format` is `"channels_first"`: + `(batch_size, depth, first_axis_to_crop, second_axis_to_crop, + third_axis_to_crop)` + + + +#### Output shape: + +5D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, first_cropped_axis, second_cropped_axis, third_cropped_axis, + depth)` +- If `data_format` is `"channels_first"`: + `(batch_size, depth, first_cropped_axis, second_cropped_axis, + third_cropped_axis)` + + diff --git a/site/en/api_docs/python/tf/keras/layers/Dense.md b/site/en/api_docs/python/tf/keras/layers/Dense.md new file mode 100644 index 00000000000..22b833c877f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Dense.md @@ -0,0 +1,187 @@ +description: Just your regular densely-connected NN layer. + +
+ + + + +
+ +# tf.keras.layers.Dense + + + + + + + + + +Just your regular densely-connected NN layer. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +`Dense` implements the operation: +`output = activation(dot(input, kernel) + bias)` +where `activation` is the element-wise activation function +passed as the `activation` argument, `kernel` is a weights matrix +created by the layer, and `bias` is a bias vector created by the layer +(only applicable if `use_bias` is `True`). + +Note: If the input to the layer has a rank greater than 2, then `Dense` +computes the dot product between the `inputs` and the `kernel` along the +last axis of the `inputs` and axis 1 of the `kernel` (using tf.tensordot). +For example, if input has dimensions `(batch_size, d0, d1)`, +then we create a `kernel` with shape `(d1, units)`, and the `kernel` operates +along axis 2 of the `input`, on every sub-tensor of shape `(1, 1, d1)` +(there are `batch_size * d0` such sub-tensors). +The output in this case will have shape `(batch_size, d0, units)`. + +Besides, layer attributes cannot be modified after the layer has been called +once (except the `trainable` attribute). + +#### Example: + + + +```python +# as first layer in a sequential model: +model = Sequential() +model.add(Dense(32, input_shape=(16,))) +# now the model will take as input arrays of shape (*, 16) +# and output arrays of shape (*, 32) + +# after the first layer, you don't need to specify +# the size of the input anymore: +model.add(Dense(32)) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`activation`
+
+Activation function to use.
+If you don't specify anything, no activation is applied
+(i.e. "linear" activation: `a(x) = x`).
+
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix. +
+`bias_initializer` + +Initializer for the bias vector. +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. +
+`activity_regularizer`
+
+Regularizer function applied to
+the output of the layer (its "activation").
+
+`kernel_constraint` + +Constraint function applied to +the `kernel` weights matrix. +
+`bias_constraint` + +Constraint function applied to the bias vector. +
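+
+As an illustrative sketch of the rank > 2 behavior described above (shapes
+chosen arbitrarily), the kernel is applied along the last axis only:
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal((2, 3, 5))        # (batch_size, d0, d1)
+y = tf.keras.layers.Dense(4)(x)        # kernel has shape (5, 4)
+print(y.shape)                         # (2, 3, 4)
+```
+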
+ + + +#### Input shape: + +N-D tensor with shape: `(batch_size, ..., input_dim)`. +The most common situation would be +a 2D input with shape `(batch_size, input_dim)`. + + + +#### Output shape: + +N-D tensor with shape: `(batch_size, ..., units)`. +For instance, for a 2D input with shape `(batch_size, input_dim)`, +the output would have shape `(batch_size, units)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/DenseFeatures.md b/site/en/api_docs/python/tf/keras/layers/DenseFeatures.md new file mode 100644 index 00000000000..27b3eea07b6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/DenseFeatures.md @@ -0,0 +1,129 @@ +description: A layer that produces a dense Tensor based on given feature_columns. + +
+ + + + +
+
+# tf.keras.layers.DenseFeatures
+
+
+
+
+
+
+
+
+A layer that produces a dense `Tensor` based on given `feature_columns`.
+
+Inherits From: [`DenseFeatures`](../../../tf/compat/v1/keras/layers/DenseFeatures.md)
+
+
+
+
+
+
+Generally a single example in training data is described with FeatureColumns.
+At the first layer of the model, this column-oriented data should be converted
+to a single `Tensor`.
+
+This layer can be called multiple times with different features.
+
+This is the V2 version of this layer that uses name_scopes to create
+variables instead of variable_scopes. But this approach currently lacks
+support for partitioned variables. In that case, use the V1 version instead.
+
+#### Example:
+
+
+
+```python
+price = tf.feature_column.numeric_column('price')
+keywords_embedded = tf.feature_column.embedding_column(
+    tf.feature_column.categorical_column_with_hash_bucket("keywords", 10_000),
+    dimension=16)
+columns = [price, keywords_embedded, ...]
+feature_layer = tf.keras.layers.DenseFeatures(columns)
+
+features = tf.io.parse_example(
+    ..., features=tf.feature_column.make_parse_example_spec(columns))
+dense_tensor = feature_layer(features)
+for units in [128, 64, 32]:
+  dense_tensor = tf.keras.layers.Dense(units, activation='relu')(dense_tensor)
+prediction = tf.keras.layers.Dense(1)(dense_tensor)
+```
+
+
+`feature_columns` + +An iterable containing the FeatureColumns to use as +inputs to your model. All items should be instances of classes derived +from `DenseColumn` such as `numeric_column`, `embedding_column`, +`bucketized_column`, `indicator_column`. If you have categorical +features, you can wrap them with an `embedding_column` or +`indicator_column`. +
+`trainable` + +Boolean, whether the layer's variables will be updated via +gradient descent during training. +
+`name` + +Name to give to the DenseFeatures. +
+`**kwargs` + +Keyword arguments to construct a layer. +
+ + + + + + + + + + + + +
+`ValueError` + +if an item in `feature_columns` is not a `DenseColumn`. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/DepthwiseConv2D.md b/site/en/api_docs/python/tf/keras/layers/DepthwiseConv2D.md new file mode 100644 index 00000000000..86f53586fba --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/DepthwiseConv2D.md @@ -0,0 +1,256 @@ +description: Depthwise separable 2D convolution. + +
+ + + + +
+ +# tf.keras.layers.DepthwiseConv2D + + + + + + + + + +Depthwise separable 2D convolution. + +Inherits From: [`Conv2D`](../../../tf/keras/layers/Conv2D.md) + + + + + + + + + +Depthwise Separable convolutions consists in performing +just the first step in a depthwise spatial convolution +(which acts on each input channel separately). +The `depth_multiplier` argument controls how many +output channels are generated per input channel in the depthwise step. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`kernel_size` + +An integer or tuple/list of 2 integers, specifying the +height and width of the 2D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the convolution along the height and width. +Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +one of `'valid'` or `'same'` (case-insensitive). +
+`depth_multiplier` + +The number of depthwise convolution output channels +for each input channel. +The total number of depthwise convolution output +channels will be equal to `filters_in * depth_multiplier`. +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch_size, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be 'channels_last'. +
+`activation` + +Activation function to use. +If you don't specify anything, no activation is applied ( +see keras.activations). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`depthwise_initializer` + +Initializer for the depthwise kernel matrix ( +see keras.initializers). +
+`bias_initializer` + +Initializer for the bias vector ( +see keras.initializers). +
+`depthwise_regularizer` + +Regularizer function applied to +the depthwise kernel matrix (see keras.regularizers). +
+`bias_regularizer` + +Regularizer function applied to the bias vector ( +see keras.regularizers). +
+`activity_regularizer` + +Regularizer function applied to +the output of the layer (its 'activation') ( +see keras.regularizers). +
+`depthwise_constraint` + +Constraint function applied to +the depthwise kernel matrix ( +see keras.constraints). +
+`bias_constraint` + +Constraint function applied to the bias vector ( +see keras.constraints). +
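+
+As an illustrative sketch (shapes chosen arbitrarily), each of the 3 input
+channels is convolved separately and yields `depth_multiplier` output channels:
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal((4, 28, 28, 3))
+y = tf.keras.layers.DepthwiseConv2D(3, depth_multiplier=2)(x)
+print(y.shape)  # (4, 26, 26, 6) -- 3 input channels * depth_multiplier 2
+```
+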
+ + + +#### Input shape: + +4D tensor with shape: +`[batch_size, channels, rows, cols]` if data_format='channels_first' +or 4D tensor with shape: +`[batch_size, rows, cols, channels]` if data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: +`[batch_size, filters, new_rows, new_cols]` if data_format='channels_first' +or 4D tensor with shape: +`[batch_size, new_rows, new_cols, filters]` if data_format='channels_last'. +`rows` and `cols` values might have changed due to padding. + + + + + + + + + + + +
+A tensor of rank 4 representing +`activation(depthwiseconv2d(inputs, kernel) + bias)`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `padding` is "causal". +
+`ValueError` + +when both `strides` > 1 and `dilation_rate` > 1. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/Dot.md b/site/en/api_docs/python/tf/keras/layers/Dot.md new file mode 100644 index 00000000000..c712569cbae --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Dot.md @@ -0,0 +1,116 @@ +description: Layer that computes a dot product between samples in two tensors. + +
+ + + + +
+
+# tf.keras.layers.Dot
+
+
+
+
+
+
+
+
+Layer that computes a dot product between samples in two tensors.
+
+
+
+
+
+
+
+
+E.g. if applied to a list of two tensors `a` and `b` of shape
+`(batch_size, n)`, the output will be a tensor of shape `(batch_size, 1)`
+where each entry `i` will be the dot product between
+`a[i]` and `b[i]`.
+
+```
+>>> x = np.arange(10).reshape(1, 5, 2)
+>>> print(x)
+[[[0 1]
+  [2 3]
+  [4 5]
+  [6 7]
+  [8 9]]]
+>>> y = np.arange(10, 20).reshape(1, 2, 5)
+>>> print(y)
+[[[10 11 12 13 14]
+  [15 16 17 18 19]]]
+>>> tf.keras.layers.Dot(axes=(1, 2))([x, y])
+<tf.Tensor: shape=(1, 2, 2), dtype=int64, numpy=
+array([[[260, 360],
+        [320, 445]]])>
+```
+
+```
+>>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))
+>>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))
+>>> dotted = tf.keras.layers.Dot(axes=1)([x1, x2])
+>>> dotted.shape
+TensorShape([5, 1])
+```
+
+
+`axes` + +Integer or tuple of integers, +axis or axes along which to take the dot product. If a tuple, should +be two integers corresponding to the desired axis from the first input +and the desired axis from the second input, respectively. Note that the +size of the two selected axes must match. +
+`normalize` + +Whether to L2-normalize samples along the +dot product axis before taking the dot product. +If set to True, then the output of the dot product +is the cosine proximity between the two samples. +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/Dropout.md b/site/en/api_docs/python/tf/keras/layers/Dropout.md new file mode 100644 index 00000000000..82a539ee5ba --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Dropout.md @@ -0,0 +1,127 @@ +description: Applies Dropout to the input. + +
+ + + + +
+ +# tf.keras.layers.Dropout + + + + + + + + + +Applies Dropout to the input. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +The Dropout layer randomly sets input units to 0 with a frequency of `rate` +at each step during training time, which helps prevent overfitting. +Inputs not set to 0 are scaled up by 1/(1 - rate) such that the sum over +all inputs is unchanged. + +Note that the Dropout layer only applies when `training` is set to True +such that no values are dropped during inference. When using `model.fit`, +`training` will be appropriately set to True automatically, and in other +contexts, you can set the kwarg explicitly to True when calling the layer. + +(This is in contrast to setting `trainable=False` for a Dropout layer. +`trainable` does not affect the layer's behavior, as Dropout does +not have any variables/weights that can be frozen during training.) + +``` +>>> tf.random.set_seed(0) +>>> layer = tf.keras.layers.Dropout(.2, input_shape=(2,)) +>>> data = np.arange(10).reshape(5, 2).astype(np.float32) +>>> print(data) +[[0. 1.] + [2. 3.] + [4. 5.] + [6. 7.] + [8. 9.]] +>>> outputs = layer(data, training=True) +>>> print(outputs) +tf.Tensor( +[[ 0. 1.25] + [ 2.5 3.75] + [ 5. 6.25] + [ 7.5 8.75] + [10. 0. ]], shape=(5, 2), dtype=float32) +``` + + + + + + + + + + + + + + + + +
+`rate` + +Float between 0 and 1. Fraction of the input units to drop. +
+`noise_shape` + +1D integer tensor representing the shape of the +binary dropout mask that will be multiplied with the input. +For instance, if your inputs have shape +`(batch_size, timesteps, features)` and +you want the dropout mask to be the same for all timesteps, +you can use `noise_shape=(batch_size, 1, features)`. +
+`seed` + +A Python integer to use as random seed. +
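+
+As an illustrative sketch of `noise_shape` (shapes chosen arbitrarily), using
+`noise_shape=(batch_size, 1, features)` broadcasts one dropout mask across all
+timesteps of each sample:
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.Dropout(0.5, noise_shape=(2, 1, 4))
+data = tf.ones((2, 3, 4))                 # (batch_size, timesteps, features)
+outputs = layer(data, training=True)
+# Within each sample the zeroed feature positions are identical for all 3
+# timesteps; surviving values are scaled by 1 / (1 - 0.5) = 2.
+```
+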
+ + + +#### Call arguments: + + +* `inputs`: Input tensor (of any rank). +* `training`: Python boolean indicating whether the layer should behave in + training mode (adding dropout) or in inference mode (doing nothing). + + diff --git a/site/en/api_docs/python/tf/keras/layers/ELU.md b/site/en/api_docs/python/tf/keras/layers/ELU.md new file mode 100644 index 00000000000..4c82af65772 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/ELU.md @@ -0,0 +1,90 @@ +description: Exponential Linear Unit. + +
+ + + + +
+ +# tf.keras.layers.ELU + + + + + + + + + +Exponential Linear Unit. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + +#### It follows: + + + +``` + f(x) = alpha * (exp(x) - 1.) for x < 0 + f(x) = x for x >= 0 +``` + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as the input. + + + + + + + + + + + + +
+`alpha` + +Scale for the negative factor. +
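+
+As an illustrative sketch (input values chosen arbitrarily), negative inputs
+are mapped to `alpha * (exp(x) - 1)` while non-negative inputs pass through
+unchanged:
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.ELU(alpha=1.0)
+print(layer(tf.constant([-3.0, -1.0, 0.0, 2.0])))
+# approximately [-0.95, -0.63, 0., 2.]
+```
+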
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/Embedding.md b/site/en/api_docs/python/tf/keras/layers/Embedding.md new file mode 100644 index 00000000000..801ae0558cb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Embedding.md @@ -0,0 +1,160 @@ +description: Turns positive integers (indexes) into dense vectors of fixed size. + +
+ + + + +
+ +# tf.keras.layers.Embedding + + + + + + + + + +Turns positive integers (indexes) into dense vectors of fixed size. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +e.g. `[[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]` + +This layer can only be used as the first layer in a model. + +#### Example: + + + +```python +model = Sequential() +model.add(Embedding(1000, 64, input_length=10)) +# the model will take as input an integer matrix of size (batch, +# input_length). +# the largest integer (i.e. word index) in the input should be no larger +# than 999 (vocabulary size). +# now model.output_shape == (None, 10, 64), where None is the batch +# dimension. + +input_array = np.random.randint(1000, size=(32, 10)) + +model.compile('rmsprop', 'mse') +output_array = model.predict(input_array) +assert output_array.shape == (32, 10, 64) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dim` + +int > 0. Size of the vocabulary, +i.e. maximum integer index + 1. +
+`output_dim` + +int >= 0. Dimension of the dense embedding. +
+`embeddings_initializer` + +Initializer for the `embeddings` matrix. +
+`embeddings_regularizer` + +Regularizer function applied to +the `embeddings` matrix. +
+`embeddings_constraint` + +Constraint function applied to +the `embeddings` matrix. +
+`mask_zero` + +Whether or not the input value 0 is a special "padding" +value that should be masked out. +This is useful when using recurrent layers +which may take variable length input. +If this is `True` then all subsequent layers +in the model need to support masking or an exception will be raised. +If mask_zero is set to True, as a consequence, index 0 cannot be +used in the vocabulary (input_dim should equal size of +vocabulary + 1). +
+`input_length` + +Length of input sequences, when it is constant. +This argument is required if you are going to connect +`Flatten` then `Dense` layers upstream +(without it, the shape of the dense outputs cannot be computed). +
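+
+As an illustrative sketch of `mask_zero` (vocabulary size and ids chosen
+arbitrarily), index 0 is treated as padding and is masked out:
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.Embedding(input_dim=1000, output_dim=4, mask_zero=True)
+ids = tf.constant([[7, 2, 0, 0]])
+print(layer(ids).shape)           # (1, 4, 4)
+print(layer.compute_mask(ids))    # [[ True  True False False]]
+```
+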
+ + + +#### Input shape: + +2D tensor with shape: `(batch_size, input_length)`. + + + +#### Output shape: + +3D tensor with shape: `(batch_size, input_length, output_dim)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/Flatten.md b/site/en/api_docs/python/tf/keras/layers/Flatten.md new file mode 100644 index 00000000000..2d697e53333 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Flatten.md @@ -0,0 +1,92 @@ +description: Flattens the input. Does not affect the batch size. + +
+ + + + +
+ +# tf.keras.layers.Flatten + + + + + + + + + +Flattens the input. Does not affect the batch size. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +If inputs are shaped `(batch,)` without a channel dimension, then flattening +adds an extra channel dimension and output shapes are `(batch, 1)`. + + + + + + + + + + +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, ..., channels)` while `channels_first` corresponds to +inputs with shape `(batch, channels, ...)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Example: + + + +```python +model = Sequential() +model.add(Conv2D(64, (3, 3), + padding='same', data_format='channels_first', + input_shape=(3, 32, 32))) +# now: model.output_shape == (None, 64, 32, 32) + +model.add(Flatten()) +# now: model.output_shape == (None, 65536) +``` + diff --git a/site/en/api_docs/python/tf/keras/layers/GRU.md b/site/en/api_docs/python/tf/keras/layers/GRU.md new file mode 100644 index 00000000000..14983da1c41 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/GRU.md @@ -0,0 +1,685 @@ +description: Gated Recurrent Unit - Cho et al. 2014. + +
+ + + + + + + + + +
+ +# tf.keras.layers.GRU + + + + + + + + + +Gated Recurrent Unit - Cho et al. 2014. + +Inherits From: [`GRU`](../../../tf/compat/v1/keras/layers/GRU.md) + + + + + + + +See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn) +for details about the usage of RNN API. + +Based on available runtime hardware and constraints, this layer +will choose different implementations (cuDNN-based or pure-TensorFlow) +to maximize the performance. If a GPU is available and all +the arguments to the layer meet the requirement of the CuDNN kernel +(see below for details), the layer will use a fast cuDNN implementation. + +The requirements to use the cuDNN implementation are: + +1. `activation` == `tanh` +2. `recurrent_activation` == `sigmoid` +3. `recurrent_dropout` == 0 +4. `unroll` is `False` +5. `use_bias` is `True` +6. `reset_after` is `True` +7. Inputs are not masked or strictly right padded. + +There are two variants of the GRU implementation. The default one is based on +[v3](https://arxiv.org/abs/1406.1078v3) and has reset gate applied to hidden +state before matrix multiplication. The other one is based on +[original](https://arxiv.org/abs/1406.1078v1) and has the order reversed. + +The second variant is compatible with CuDNNGRU (GPU-only) and allows +inference on CPU. Thus it has separate biases for `kernel` and +`recurrent_kernel`. To use this variant, set `'reset_after'=True` and +`recurrent_activation='sigmoid'`. + +#### For example: + + + +``` +>>> inputs = tf.random.normal([32, 10, 8]) +>>> gru = tf.keras.layers.GRU(4) +>>> output = gru(inputs) +>>> print(output.shape) +(32, 4) +>>> gru = tf.keras.layers.GRU(4, return_sequences=True, return_state=True) +>>> whole_sequence_output, final_state = gru(inputs) +>>> print(whole_sequence_output.shape) +(32, 10, 4) +>>> print(final_state.shape) +(32, 4) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`activation` + +Activation function to use. +Default: hyperbolic tangent (`tanh`). +If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`recurrent_activation` + +Activation function to use +for the recurrent step. +Default: sigmoid (`sigmoid`). +If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`use_bias` + +Boolean, (default `True`), whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, +used for the linear transformation of the inputs. Default: +`glorot_uniform`. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` +weights matrix, used for the linear transformation of the recurrent +state. Default: `orthogonal`. +
+`bias_initializer` + +Initializer for the bias vector. Default: `zeros`. +
+`kernel_regularizer` + +Regularizer function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_regularizer` + +Regularizer function applied to the +`recurrent_kernel` weights matrix. Default: `None`. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. Default: +`None`. +
+`activity_regularizer` + +Regularizer function applied to the output of the +layer (its "activation"). Default: `None`. +
+`kernel_constraint` + +Constraint function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_constraint` + +Constraint function applied to the `recurrent_kernel` +weights matrix. Default: `None`. +
+`bias_constraint` + +Constraint function applied to the bias vector. Default: +`None`. +
+`dropout` + +Float between 0 and 1. Fraction of the units to drop for the linear +transformation of the inputs. Default: 0. +
+`recurrent_dropout` + +Float between 0 and 1. Fraction of the units to drop for +the linear transformation of the recurrent state. Default: 0. +
+`implementation` + +Implementation mode, either 1 or 2. +Mode 1 will structure its operations as a larger number of +smaller dot products and additions, whereas mode 2 will +batch them into fewer, larger operations. These modes will +have different performance profiles on different hardware and +for different applications. Default: 2. +
+`return_sequences` + +Boolean. Whether to return the last output +in the output sequence, or the full sequence. Default: `False`. +
+`return_state` + +Boolean. Whether to return the last state in addition to the +output. Default: `False`. +
+`go_backwards` + +Boolean (default `False`). +If True, process the input sequence backwards and return the +reversed sequence. +
+`stateful` + +Boolean (default False). If True, the last state +for each sample at index i in a batch will be used as initial +state for the sample of index i in the following batch. +
+`unroll` + +Boolean (default False). +If True, the network will be unrolled, +else a symbolic loop will be used. +Unrolling can speed-up a RNN, +although it tends to be more memory-intensive. +Unrolling is only suitable for short sequences. +
+`time_major` + +The shape format of the `inputs` and `outputs` tensors. +If True, the inputs and outputs will be in shape +`[timesteps, batch, feature]`, whereas in the False case, it will be +`[batch, timesteps, feature]`. Using `time_major = True` is a bit more +efficient because it avoids transposes at the beginning and end of the +RNN calculation. However, most TensorFlow data is batch-major, so by +default this function accepts input and emits output in batch-major +form. +
+`reset_after` + +GRU convention (whether to apply reset gate after or +before matrix multiplication). False = "before", +True = "after" (default and CuDNN compatible). +
+ + + +#### Call arguments: + + +* `inputs`: A 3D tensor, with shape `[batch, timesteps, feature]`. +* `mask`: Binary tensor of shape `[samples, timesteps]` indicating whether + a given timestep should be masked (optional, defaults to `None`). +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. This argument is passed to the cell + when calling it. This is only relevant if `dropout` or + `recurrent_dropout` is used (optional, defaults to `None`). +* `initial_state`: List of initial state tensors to be passed to the first + call of the cell (optional, defaults to `None` which causes creation + of zero-filled initial state tensors). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
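+The sketch below is illustrative (not from the original docstring); it shows how `mask` and
+`initial_state` can be passed when the layer is called. The mask marks the last four timesteps
+of every sequence as padding, and the initial state is a zero tensor of shape `[batch, units]`.
+
+```python
+import tensorflow as tf
+
+gru = tf.keras.layers.GRU(4)
+inputs = tf.random.normal([32, 10, 8])
+# Only the first 6 of the 10 timesteps in each sequence are valid (right padding).
+mask = tf.sequence_mask(tf.fill([32], 6), maxlen=10)
+initial_state = tf.zeros([32, 4])  # shape [batch, units]
+output = gru(inputs, mask=mask, initial_state=initial_state)
+print(output.shape)  # (32, 4)
+```
+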
+`activation` + + +
+`bias_constraint` + + +
+`bias_initializer` + + +
+`bias_regularizer` + + +
+`dropout` + + +
+`implementation` + + +
+`kernel_constraint` + + +
+`kernel_initializer` + + +
+`kernel_regularizer` + + +
+`recurrent_activation` + + +
+`recurrent_constraint` + + +
+`recurrent_dropout` + + +
+`recurrent_initializer` + + +
+`recurrent_regularizer` + + +
+`reset_after` + + +
+`states` + + +
+`units` + + +
+`use_bias` + + +
+ + + +## Methods + +

get_dropout_mask_for_cell

+ +View source + + + +Get the dropout mask for RNN cell's input. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. This is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, either newly generated or cached, based on context. +
+ + + +

get_recurrent_dropout_mask_for_cell

+ +View source + + + +Get the recurrent dropout mask for RNN cell. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. This is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, either newly generated or cached, based on context. +
+ + + +

reset_dropout_mask

+ +View source + + + +Reset the cached dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + +

reset_recurrent_dropout_mask

+ +View source + + + +Reset the cached recurrent dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + +

reset_states

+ +View source + + + +Reset the recorded states for the stateful RNN layer. + +Can only be used when the RNN layer is constructed with `stateful` = `True`. +Args: + states: Numpy arrays that contain the value for the initial state, which + will be fed to the cell at the first time step. When the value is None, + a zero-filled numpy array will be created based on the cell state size. + + + + + + + + + + + + + + + + 
Raises
+`AttributeError` + +When the RNN layer is not stateful. +
+`ValueError` + +When the batch size of the RNN layer is unknown. +
+`ValueError` + +When the input numpy array is not compatible with the RNN +layer state, either size wise or dtype wise. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/GRUCell.md b/site/en/api_docs/python/tf/keras/layers/GRUCell.md new file mode 100644 index 00000000000..0a19bbd4d71 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/GRUCell.md @@ -0,0 +1,414 @@ +description: Cell class for the GRU layer. + +
+ + + + + + + + + +
+ +# tf.keras.layers.GRUCell + + + + + + + + + +Cell class for the GRU layer. + +Inherits From: [`GRUCell`](../../../tf/compat/v1/keras/layers/GRUCell.md) + + + + + + + +See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn) +for details about the usage of RNN API. + +This class processes one step within the whole time sequence input, whereas +`tf.keras.layer.GRU` processes the whole sequence. + +#### For example: + + + +``` +>>> inputs = tf.random.normal([32, 10, 8]) +>>> rnn = tf.keras.layers.RNN(tf.keras.layers.GRUCell(4)) +>>> output = rnn(inputs) +>>> print(output.shape) +(32, 4) +>>> rnn = tf.keras.layers.RNN( +... tf.keras.layers.GRUCell(4), +... return_sequences=True, +... return_state=True) +>>> whole_sequence_output, final_state = rnn(inputs) +>>> print(whole_sequence_output.shape) +(32, 10, 4) +>>> print(final_state.shape) +(32, 4) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`activation` + +Activation function to use. Default: hyperbolic tangent +(`tanh`). If you pass None, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`recurrent_activation` + +Activation function to use for the recurrent step. +Default: sigmoid (`sigmoid`). If you pass `None`, no activation is +applied (ie. "linear" activation: `a(x) = x`). +
+`use_bias` + +Boolean, (default `True`), whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, +used for the linear transformation of the inputs. Default: +`glorot_uniform`. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` +weights matrix, used for the linear transformation of the recurrent state. +Default: `orthogonal`. +
+`bias_initializer` + +Initializer for the bias vector. Default: `zeros`. +
+`kernel_regularizer` + +Regularizer function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_regularizer` + +Regularizer function applied to the +`recurrent_kernel` weights matrix. Default: `None`. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. Default: +`None`. +
+`kernel_constraint` + +Constraint function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_constraint` + +Constraint function applied to the `recurrent_kernel` +weights matrix. Default: `None`. +
+`bias_constraint` + +Constraint function applied to the bias vector. Default: +`None`. +
+`dropout` + +Float between 0 and 1. Fraction of the units to drop for the +linear transformation of the inputs. Default: 0. +
+`recurrent_dropout` + +Float between 0 and 1. Fraction of the units to drop for +the linear transformation of the recurrent state. Default: 0. +
+`implementation` + +Implementation mode, either 1 or 2. +Mode 1 will structure its operations as a larger number of +smaller dot products and additions, whereas mode 2 (default) will +batch them into fewer, larger operations. These modes will +have different performance profiles on different hardware and +for different applications. Default: 2. +
+`reset_after` + +GRU convention (whether to apply reset gate after or +before matrix multiplication). False = "before", +True = "after" (default and CuDNN compatible). +
+ + + +#### Call arguments: + + +* `inputs`: A 2D tensor, with shape `[batch, feature]`. +* `states`: A 2D tensor with shape `[batch, units]`, which is the state from + the previous time step. For timestep 0, the initial state provided by the user + will be fed to the cell. +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. Only relevant when `dropout` or + `recurrent_dropout` is used. + + +## Methods + 

get_dropout_mask_for_cell

+ +View source + + + +Get the dropout mask for RNN cell's input. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. This is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, either newly generated or cached, based on context. +
+ + + +

get_initial_state

+ +View source + + + + + + +

get_recurrent_dropout_mask_for_cell

+ +View source + + + +Get the recurrent dropout mask for RNN cell. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. This is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, either newly generated or cached, based on context. +
+ + + +

reset_dropout_mask

+ +View source + + + +Reset the cached dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + +

reset_recurrent_dropout_mask

+ +View source + + + +Reset the cached recurrent dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + + + diff --git a/site/en/api_docs/python/tf/keras/layers/GaussianDropout.md b/site/en/api_docs/python/tf/keras/layers/GaussianDropout.md new file mode 100644 index 00000000000..16f829890df --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/GaussianDropout.md @@ -0,0 +1,91 @@ +description: Apply multiplicative 1-centered Gaussian noise. + +
+ + + + +
+ +# tf.keras.layers.GaussianDropout + + + + + + + + + +Apply multiplicative 1-centered Gaussian noise. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +As it is a regularization layer, it is only active at training time. + + + + + + + + + + +
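+A minimal illustrative sketch (not part of the original docstring) of the training-only
+behaviour described above:
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.GaussianDropout(0.2)
+x = tf.ones([2, 3])
+print(layer(x, training=False).numpy())  # identical to x: the layer is a no-op at inference
+print(layer(x, training=True).numpy())   # multiplied by 1-centered noise, stddev sqrt(0.2 / 0.8)
+```
+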
+`rate` + +Float, drop probability (as with `Dropout`). +The multiplicative noise will have +standard deviation `sqrt(rate / (1 - rate))`. +
+ + + +#### Call arguments: + + +* `inputs`: Input tensor (of any rank). +* `training`: Python boolean indicating whether the layer should behave in + training mode (adding dropout) or in inference mode (doing nothing). + + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as input. + + diff --git a/site/en/api_docs/python/tf/keras/layers/GaussianNoise.md b/site/en/api_docs/python/tf/keras/layers/GaussianNoise.md new file mode 100644 index 00000000000..687104557af --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/GaussianNoise.md @@ -0,0 +1,94 @@ +description: Apply additive zero-centered Gaussian noise. + +
+ + + + +
+ +# tf.keras.layers.GaussianNoise + + + + + + + + + +Apply additive zero-centered Gaussian noise. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +This is useful to mitigate overfitting +(you could see it as a form of random data augmentation). +Gaussian noise (GN) is a natural choice as a corruption process +for real-valued inputs. + +As it is a regularization layer, it is only active at training time. + + + + + + + + + + +
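+A minimal illustrative sketch (not part of the original docstring) of the training-only
+behaviour described above:
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.GaussianNoise(stddev=0.1)
+x = tf.zeros([2, 3])
+print(layer(x, training=False).numpy())  # all zeros: no noise is added at inference time
+print(layer(x, training=True).numpy())   # zero-centered noise with standard deviation ~0.1
+```
+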
+`stddev` + +Float, standard deviation of the noise distribution. +
+ + + +#### Call arguments: + + +* `inputs`: Input tensor (of any rank). +* `training`: Python boolean indicating whether the layer should behave in + training mode (adding noise) or in inference mode (doing nothing). + + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as input. + + diff --git a/site/en/api_docs/python/tf/keras/layers/GlobalAveragePooling1D.md b/site/en/api_docs/python/tf/keras/layers/GlobalAveragePooling1D.md new file mode 100644 index 00000000000..198cbcaef50 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/GlobalAveragePooling1D.md @@ -0,0 +1,110 @@ +description: Global average pooling operation for temporal data. + +
+ + + + +
+ +# tf.keras.layers.GlobalAveragePooling1D + + + + + + + + + +Global average pooling operation for temporal data. + + + + + + + + + + +#### Examples: + + + +``` +>>> input_shape = (2, 3, 4) +>>> x = tf.random.normal(input_shape) +>>> y = tf.keras.layers.GlobalAveragePooling1D()(x) +>>> print(y.shape) +(2, 4) +``` + + + + + + + + + + +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, steps, features)` while `channels_first` +corresponds to inputs with shape +`(batch, features, steps)`. +
+ + + +#### Call arguments: + + +* `inputs`: A 3D tensor. +* `mask`: Binary tensor of shape `(batch_size, steps)` indicating whether + a given step should be masked (excluded from the average). + + +#### Input shape: + +- If `data_format='channels_last'`: + 3D tensor with shape: + `(batch_size, steps, features)` +- If `data_format='channels_first'`: + 3D tensor with shape: + `(batch_size, features, steps)` + + + +#### Output shape: + +2D tensor with shape `(batch_size, features)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/GlobalAveragePooling2D.md b/site/en/api_docs/python/tf/keras/layers/GlobalAveragePooling2D.md new file mode 100644 index 00000000000..11f0515b125 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/GlobalAveragePooling2D.md @@ -0,0 +1,103 @@ +description: Global average pooling operation for spatial data. + +
+ + + + +
+ +# tf.keras.layers.GlobalAveragePooling2D + + + + + + + + + +Global average pooling operation for spatial data. + + + + + + + + + + +#### Examples: + + + +``` +>>> input_shape = (2, 4, 5, 3) +>>> x = tf.random.normal(input_shape) +>>> y = tf.keras.layers.GlobalAveragePooling2D()(x) +>>> print(y.shape) +(2, 3) +``` + + + + + + + + + + +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 4D tensor with shape `(batch_size, rows, cols, channels)`. +- If `data_format='channels_first'`: + 4D tensor with shape `(batch_size, channels, rows, cols)`. + + + +#### Output shape: + +2D tensor with shape `(batch_size, channels)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/GlobalAveragePooling3D.md b/site/en/api_docs/python/tf/keras/layers/GlobalAveragePooling3D.md new file mode 100644 index 00000000000..4f67b940ea3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/GlobalAveragePooling3D.md @@ -0,0 +1,93 @@ +description: Global Average pooling operation for 3D data. + +
+ + + + +
+ +# tf.keras.layers.GlobalAveragePooling3D + + + + + + + + + +Global Average pooling operation for 3D data. + + + + + + + + + + + + + + + + + + + +
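+An illustrative example, added here for parity with the 1D and 2D variants (not part of the
+original docstring):
+
+```python
+import tensorflow as tf
+
+# With the default channels_last format, the three spatial dimensions are averaged away.
+input_shape = (2, 4, 5, 6, 3)
+x = tf.random.normal(input_shape)
+y = tf.keras.layers.GlobalAveragePooling3D()(x)
+print(y.shape)  # (2, 3)
+```
+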
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +while `channels_first` corresponds to inputs with shape +`(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 5D tensor with shape: + `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +- If `data_format='channels_first'`: + 5D tensor with shape: + `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)` + + + +#### Output shape: + +2D tensor with shape `(batch_size, channels)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/GlobalMaxPool1D.md b/site/en/api_docs/python/tf/keras/layers/GlobalMaxPool1D.md new file mode 100644 index 00000000000..6141dd37146 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/GlobalMaxPool1D.md @@ -0,0 +1,112 @@ +description: Global max pooling operation for 1D temporal data. + +
+ + + + +
+ +# tf.keras.layers.GlobalMaxPool1D + + + + + + + + + +Global max pooling operation for 1D temporal data. + + + + + + + + + +Downsamples the input representation by taking the maximum value over +the time dimension. + +#### For example: + + + +``` +>>> x = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]]) +>>> x = tf.reshape(x, [3, 3, 1]) +>>> x + +>>> max_pool_1d = tf.keras.layers.GlobalMaxPooling1D() +>>> max_pool_1d(x) + +``` + + + + + + + + + + +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, steps, features)` while `channels_first` +corresponds to inputs with shape +`(batch, features, steps)`. +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 3D tensor with shape: + `(batch_size, steps, features)` +- If `data_format='channels_first'`: + 3D tensor with shape: + `(batch_size, features, steps)` + + + +#### Output shape: + +2D tensor with shape `(batch_size, features)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/GlobalMaxPool2D.md b/site/en/api_docs/python/tf/keras/layers/GlobalMaxPool2D.md new file mode 100644 index 00000000000..63c54858560 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/GlobalMaxPool2D.md @@ -0,0 +1,103 @@ +description: Global max pooling operation for spatial data. + +
+ + + + +
+ +# tf.keras.layers.GlobalMaxPool2D + + + + + + + + + +Global max pooling operation for spatial data. + + + + + + + + + + +#### Examples: + + + +``` +>>> input_shape = (2, 4, 5, 3) +>>> x = tf.random.normal(input_shape) +>>> y = tf.keras.layers.GlobalMaxPool2D()(x) +>>> print(y.shape) +(2, 3) +``` + + + + + + + + + + +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 4D tensor with shape `(batch_size, rows, cols, channels)`. +- If `data_format='channels_first'`: + 4D tensor with shape `(batch_size, channels, rows, cols)`. + + + +#### Output shape: + +2D tensor with shape `(batch_size, channels)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/GlobalMaxPool3D.md b/site/en/api_docs/python/tf/keras/layers/GlobalMaxPool3D.md new file mode 100644 index 00000000000..bf9cbac7950 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/GlobalMaxPool3D.md @@ -0,0 +1,93 @@ +description: Global Max pooling operation for 3D data. + +
+ + + + +
+ +# tf.keras.layers.GlobalMaxPool3D + + + + + + + + + +Global Max pooling operation for 3D data. + + + + + + + + + + + + + + + + + + + +
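+An illustrative example, added here for parity with the 1D and 2D variants (not part of the
+original docstring):
+
+```python
+import tensorflow as tf
+
+# With the default channels_last format, the maximum is taken over the three spatial dimensions.
+input_shape = (2, 4, 5, 6, 3)
+x = tf.random.normal(input_shape)
+y = tf.keras.layers.GlobalMaxPool3D()(x)
+print(y.shape)  # (2, 3)
+```
+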
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +while `channels_first` corresponds to inputs with shape +`(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 5D tensor with shape: + `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +- If `data_format='channels_first'`: + 5D tensor with shape: + `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)` + + + +#### Output shape: + +2D tensor with shape `(batch_size, channels)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/InputLayer.md b/site/en/api_docs/python/tf/keras/layers/InputLayer.md new file mode 100644 index 00000000000..cef158fb777 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/InputLayer.md @@ -0,0 +1,150 @@ +description: Layer to be used as an entry point into a Network (a graph of layers). + +
+ + + + +
+ +# tf.keras.layers.InputLayer + + + + + + + + + +Layer to be used as an entry point into a Network (a graph of layers). + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +It can either wrap an existing tensor (pass an `input_tensor` argument) +or create a placeholder tensor (pass arguments `input_shape`, and +optionally, `dtype`). + +It is generally recommended to use the functional layer API via `Input` +(which creates an `InputLayer`) rather than using `InputLayer` directly. + +When using InputLayer with a Keras Sequential model, it can be skipped by +moving the input_shape parameter to the first layer after the InputLayer. + +This class can create placeholders for tf.Tensors, tf.SparseTensors, and +tf.RaggedTensors by choosing 'sparse=True' or 'ragged=True'. Note that +'sparse' and 'ragged' can't be configured to True at the same time. +Usage: + +```python +# With explicit InputLayer. +model = tf.keras.Sequential([ + tf.keras.layers.InputLayer(input_shape=(4,)), + tf.keras.layers.Dense(8)]) +model.compile(tf.optimizers.RMSprop(0.001), loss='mse') +model.fit(np.zeros((10, 4)), + np.ones((10, 8))) + +# Without an InputLayer; let the first layer have the input_shape. +# Keras will add an input for the model behind the scenes. +model = tf.keras.Sequential([ + tf.keras.layers.Dense(8, input_shape=(4,))]) +model.compile(tf.optimizers.RMSprop(0.001), loss='mse') +model.fit(np.zeros((10, 4)), + np.ones((10, 8))) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+`input_shape` + +Shape tuple (not including the batch axis), or `TensorShape` +instance (not including the batch axis). +
+`batch_size` + +Optional input batch size (integer or None). +
+`dtype` + +Optional datatype of the input. When not provided, the Keras +default float type will be used. +
+`input_tensor` + +Optional tensor to use as layer input +instead of creating a placeholder. +
+`sparse` + +Boolean, whether the placeholder created is meant to be sparse. +Defaults to False. +
+`ragged` + +Boolean, whether the placeholder created is meant to be ragged. +In this case, values of 'None' in the 'shape' argument represent +ragged dimensions. For more information about RaggedTensors, see +https://www.tensorflow.org/guide/ragged_tensors. +Defaults to False. +
+`name` + +Optional name of the layer (string). +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/InputSpec.md b/site/en/api_docs/python/tf/keras/layers/InputSpec.md new file mode 100644 index 00000000000..8250f85f74c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/InputSpec.md @@ -0,0 +1,140 @@ +description: Specifies the rank, dtype and shape of every input to a layer. + +
+ + + + + +
+ +# tf.keras.layers.InputSpec + + + + + + + + + +Specifies the rank, dtype and shape of every input to a layer. + + + + + + + + + +Layers can expose (if appropriate) an `input_spec` attribute: +an instance of `InputSpec`, or a nested structure of `InputSpec` instances +(one per input tensor). These objects enable the layer to run input +compatibility checks for input structure, input rank, input shape, and +input dtype. + +A None entry in a shape is compatible with any dimension, +a None shape is compatible with any shape. + + + + + + + + + + + + + + + + + + + + + + + + + +
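+A minimal illustrative sketch (the class name and sizes are made up for this example) of a
+layer declaring an `input_spec` so that Keras can check its inputs when the layer is called:
+
+```python
+import tensorflow as tf
+
+class Passthrough(tf.keras.layers.Layer):
+  """Expects 2D float32 inputs whose last axis has size 8."""
+
+  def __init__(self):
+    super(Passthrough, self).__init__()
+    self.input_spec = tf.keras.layers.InputSpec(
+        ndim=2, dtype='float32', axes={-1: 8})
+
+  def call(self, inputs):
+    return inputs
+
+layer = Passthrough()
+layer(tf.ones([4, 8]))    # OK
+# layer(tf.ones([4, 3]))  # would raise a ValueError: axis -1 is expected to have size 8
+```
+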
+`dtype` + +Expected DataType of the input. +
+`shape` + +Shape tuple, expected shape of the input +(may include None for unchecked axes). +
+`ndim` + +Integer, expected rank of the input. +
+`max_ndim` + +Integer, maximum rank of the input. +
+`min_ndim` + +Integer, minimum rank of the input. +
+`axes` + +Dictionary mapping integer axes to +a specific dimension value. +
+ + + +## Methods + +

from_config

+ +View source + + + + + + +

get_config

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/LSTM.md b/site/en/api_docs/python/tf/keras/layers/LSTM.md new file mode 100644 index 00000000000..e0edbb10899 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/LSTM.md @@ -0,0 +1,669 @@ +description: Long Short-Term Memory layer - Hochreiter 1997. + +
+ + + + + + + + + +
+ +# tf.keras.layers.LSTM + + + + + + + + + +Long Short-Term Memory layer - Hochreiter 1997. + +Inherits From: [`LSTM`](../../../tf/compat/v1/keras/layers/LSTM.md) + + + + + + + +See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn) +for details about the usage of RNN API. + +Based on available runtime hardware and constraints, this layer +will choose different implementations (cuDNN-based or pure-TensorFlow) +to maximize the performance. If a GPU is available and all +the arguments to the layer meet the requirement of the CuDNN kernel +(see below for details), the layer will use a fast cuDNN implementation. + +The requirements to use the cuDNN implementation are: + +1. `activation` == `tanh` +2. `recurrent_activation` == `sigmoid` +3. `recurrent_dropout` == 0 +4. `unroll` is `False` +5. `use_bias` is `True` +6. Inputs are not masked or strictly right padded. + +#### For example: + + + +``` +>>> inputs = tf.random.normal([32, 10, 8]) +>>> lstm = tf.keras.layers.LSTM(4) +>>> output = lstm(inputs) +>>> print(output.shape) +(32, 4) +>>> lstm = tf.keras.layers.LSTM(4, return_sequences=True, return_state=True) +>>> whole_seq_output, final_memory_state, final_carry_state = lstm(inputs) +>>> print(whole_seq_output.shape) +(32, 10, 4) +>>> print(final_memory_state.shape) +(32, 4) +>>> print(final_carry_state.shape) +(32, 4) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`activation` + +Activation function to use. +Default: hyperbolic tangent (`tanh`). If you pass `None`, no activation +is applied (ie. "linear" activation: `a(x) = x`). +
+`recurrent_activation` + +Activation function to use for the recurrent step. +Default: sigmoid (`sigmoid`). If you pass `None`, no activation is +applied (ie. "linear" activation: `a(x) = x`). +
+`use_bias` + +Boolean (default `True`), whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, used for +the linear transformation of the inputs. Default: `glorot_uniform`. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` weights +matrix, used for the linear transformation of the recurrent state. +Default: `orthogonal`. +
+`bias_initializer` + +Initializer for the bias vector. Default: `zeros`. +
+`unit_forget_bias` + +Boolean (default `True`). If True, add 1 to the bias of +the forget gate at initialization. Setting it to true will also force +`bias_initializer="zeros"`. This is recommended in [Jozefowicz et +al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf). +
+`kernel_regularizer` + +Regularizer function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_regularizer` + +Regularizer function applied to the +`recurrent_kernel` weights matrix. Default: `None`. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. Default: +`None`. +
+`activity_regularizer` + +Regularizer function applied to the output of the +layer (its "activation"). Default: `None`. +
+`kernel_constraint` + +Constraint function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_constraint` + +Constraint function applied to the `recurrent_kernel` +weights matrix. Default: `None`. +
+`bias_constraint` + +Constraint function applied to the bias vector. Default: +`None`. +
+`dropout` + +Float between 0 and 1. Fraction of the units to drop for the linear +transformation of the inputs. Default: 0. +
+`recurrent_dropout` + +Float between 0 and 1. Fraction of the units to drop for +the linear transformation of the recurrent state. Default: 0. +
+`implementation` + +Implementation mode, either 1 or 2. Mode 1 will structure +its operations as a larger number of smaller dot products and additions, +whereas mode 2 will batch them into fewer, larger operations. These modes +will have different performance profiles on different hardware and for +different applications. Default: 2. +
+`return_sequences` + +Boolean. Whether to return the last output in the output +sequence, or the full sequence. Default: `False`. +
+`return_state` + +Boolean. Whether to return the last state in addition to the +output. Default: `False`. +
+`go_backwards` + +Boolean (default `False`). If True, process the input sequence +backwards and return the reversed sequence. +
+`stateful` + +Boolean (default `False`). If True, the last state for each sample +at index i in a batch will be used as initial state for the sample of +index i in the following batch. +
+`time_major` + +The shape format of the `inputs` and `outputs` tensors. +If True, the inputs and outputs will be in shape +`[timesteps, batch, feature]`, whereas in the False case, it will be +`[batch, timesteps, feature]`. Using `time_major = True` is a bit more +efficient because it avoids transposes at the beginning and end of the +RNN calculation. However, most TensorFlow data is batch-major, so by +default this function accepts input and emits output in batch-major +form. +
+`unroll` + +Boolean (default `False`). If True, the network will be unrolled, +else a symbolic loop will be used. Unrolling can speed-up a RNN, although +it tends to be more memory-intensive. Unrolling is only suitable for short +sequences. +
+ + + +#### Call arguments: + + +* `inputs`: A 3D tensor with shape `[batch, timesteps, feature]`. +* `mask`: Binary tensor of shape `[batch, timesteps]` indicating whether + a given timestep should be masked (optional, defaults to `None`). +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. This argument is passed to the cell + when calling it. This is only relevant if `dropout` or + `recurrent_dropout` is used (optional, defaults to `None`). +* `initial_state`: List of initial state tensors to be passed to the first + call of the cell (optional, defaults to `None` which causes creation + of zero-filled initial state tensors). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
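+The sketch below is illustrative (not from the original docstring); it shows how the initial
+state is passed as a list of two tensors, the memory state and the carry state.
+
+```python
+import tensorflow as tf
+
+lstm = tf.keras.layers.LSTM(4)
+inputs = tf.random.normal([32, 10, 8])
+initial_h = tf.zeros([32, 4])  # memory state, shape [batch, units]
+initial_c = tf.zeros([32, 4])  # carry state, shape [batch, units]
+output = lstm(inputs, initial_state=[initial_h, initial_c])
+print(output.shape)  # (32, 4)
+```
+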
+`activation` + + +
+`bias_constraint` + + +
+`bias_initializer` + + +
+`bias_regularizer` + + +
+`dropout` + + +
+`implementation` + + +
+`kernel_constraint` + + +
+`kernel_initializer` + + +
+`kernel_regularizer` + + +
+`recurrent_activation` + + +
+`recurrent_constraint` + + +
+`recurrent_dropout` + + +
+`recurrent_initializer` + + +
+`recurrent_regularizer` + + +
+`states` + + +
+`unit_forget_bias` + + +
+`units` + + +
+`use_bias` + + +
+ + + +## Methods + +

get_dropout_mask_for_cell

+ +View source + + + +Get the dropout mask for RNN cell's input. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. This is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, either newly generated or cached, based on context. +
+ + + +

get_recurrent_dropout_mask_for_cell

+ +View source + + + +Get the recurrent dropout mask for RNN cell. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. This is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, either newly generated or cached, based on context. +
+ + + +

reset_dropout_mask

+ +View source + + + +Reset the cached dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + +

reset_recurrent_dropout_mask

+ +View source + + + +Reset the cached recurrent dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + +

reset_states

+ +View source + + + +Reset the recorded states for the stateful RNN layer. + +Can only be used when the RNN layer is constructed with `stateful` = `True`. +Args: + states: Numpy arrays that contain the value for the initial state, which + will be fed to the cell at the first time step. When the value is None, + a zero-filled numpy array will be created based on the cell state size. + + + + + + + + + + + + + + + + 
Raises
+`AttributeError` + +When the RNN layer is not stateful. +
+`ValueError` + +When the batch size of the RNN layer is unknown. +
+`ValueError` + +When the input numpy array is not compatible with the RNN +layer state, either size wise or dtype wise. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/LSTMCell.md b/site/en/api_docs/python/tf/keras/layers/LSTMCell.md new file mode 100644 index 00000000000..33db65f3849 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/LSTMCell.md @@ -0,0 +1,417 @@ +description: Cell class for the LSTM layer. + +
+ + + + + + + + + +
+ +# tf.keras.layers.LSTMCell + + + + + + + + + +Cell class for the LSTM layer. + +Inherits From: [`LSTMCell`](../../../tf/compat/v1/keras/layers/LSTMCell.md) + + + + + + + +See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn) +for details about the usage of RNN API. + +This class processes one step within the whole time sequence input, whereas +`tf.keras.layer.LSTM` processes the whole sequence. + +#### For example: + + + +``` +>>> inputs = tf.random.normal([32, 10, 8]) +>>> rnn = tf.keras.layers.RNN(tf.keras.layers.LSTMCell(4)) +>>> output = rnn(inputs) +>>> print(output.shape) +(32, 4) +>>> rnn = tf.keras.layers.RNN( +... tf.keras.layers.LSTMCell(4), +... return_sequences=True, +... return_state=True) +>>> whole_seq_output, final_memory_state, final_carry_state = rnn(inputs) +>>> print(whole_seq_output.shape) +(32, 10, 4) +>>> print(final_memory_state.shape) +(32, 4) +>>> print(final_carry_state.shape) +(32, 4) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`activation` + +Activation function to use. Default: hyperbolic tangent +(`tanh`). If you pass `None`, no activation is applied (ie. "linear" +activation: `a(x) = x`). +
+`recurrent_activation` + +Activation function to use for the recurrent step. +Default: sigmoid (`sigmoid`). If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`use_bias` + +Boolean, (default `True`), whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, used for +the linear transformation of the inputs. Default: `glorot_uniform`. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` weights +matrix, used for the linear transformation of the recurrent state. +Default: `orthogonal`. +
+`bias_initializer` + +Initializer for the bias vector. Default: `zeros`. +
+`unit_forget_bias` + +Boolean (default `True`). If True, add 1 to the bias of +the forget gate at initialization. Setting it to true will also force +`bias_initializer="zeros"`. This is recommended in [Jozefowicz et +al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf) +
+`kernel_regularizer` + +Regularizer function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_regularizer` + +Regularizer function applied to +the `recurrent_kernel` weights matrix. Default: `None`. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. Default: +`None`. +
+`kernel_constraint` + +Constraint function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_constraint` + +Constraint function applied to the `recurrent_kernel` +weights matrix. Default: `None`. +
+`bias_constraint` + +Constraint function applied to the bias vector. Default: +`None`. +
+`dropout` + +Float between 0 and 1. Fraction of the units to drop for the linear +transformation of the inputs. Default: 0. +
+`recurrent_dropout` + +Float between 0 and 1. Fraction of the units to drop for +the linear transformation of the recurrent state. Default: 0. +
+`implementation` + +Implementation mode, either 1 or 2. +Mode 1 will structure its operations as a larger number of smaller dot +products and additions, whereas mode 2 (default) will batch them into +fewer, larger operations. These modes will have different performance +profiles on different hardware and for different applications. Default: 2. +
+ + + +#### Call arguments: + + +* `inputs`: A 2D tensor, with shape `[batch, feature]`. +* `states`: List of 2 tensors corresponding to the cell's units. Both of + them have shape `[batch, units]`; the first tensor is the memory state + from the previous time step, and the second is the carry state from + the previous time step. For timestep 0, the initial state provided by the user + will be fed to the cell. +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. Only relevant when `dropout` or + `recurrent_dropout` is used. + + +## Methods + 

get_dropout_mask_for_cell

+ +View source + + + +Get the dropout mask for RNN cell's input. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. This is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, either newly generated or cached, based on context. +
+ + + +

get_initial_state

+ +View source + + + + + + +

get_recurrent_dropout_mask_for_cell

+ +View source + + + +Get the recurrent dropout mask for RNN cell. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. This is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, either newly generated or cached, based on context. +
+ + + +

reset_dropout_mask

+ +View source + + + +Reset the cached dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + +

reset_recurrent_dropout_mask

+ +View source + + + +Reset the cached recurrent dropout masks, if any. + +It is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce an unreasonable bias +against certain indices of data within the batch. + + + diff --git a/site/en/api_docs/python/tf/keras/layers/Lambda.md b/site/en/api_docs/python/tf/keras/layers/Lambda.md new file mode 100644 index 00000000000..3067da93326 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Lambda.md @@ -0,0 +1,172 @@ +description: Wraps arbitrary expressions as a Layer object. + +
+ + + + +
+ +# tf.keras.layers.Lambda + + + + + + + + + +Wraps arbitrary expressions as a `Layer` object. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +The `Lambda` layer exists so that arbitrary TensorFlow functions +can be used when constructing `Sequential` and Functional API +models. `Lambda` layers are best suited for simple operations or +quick experimentation. For more advanced usecases, follow +[this guide](https://www.tensorflow.org/guide/keras/custom_layers_and_models) +for subclassing tf.keras.layers.Layer. + +The main reason to subclass tf.keras.layers.Layer instead of using a +`Lambda` layer is saving and inspecting a Model. `Lambda` layers +are saved by serializing the Python bytecode, whereas subclassed +Layers can be saved via overriding their `get_config` method. Overriding +`get_config` improves the portability of Models. Models that rely on +subclassed Layers are also often easier to visualize and reason about. + +#### Examples: + + + +```python +# add a x -> x^2 layer +model.add(Lambda(lambda x: x ** 2)) +``` +```python +# add a layer that returns the concatenation +# of the positive part of the input and +# the opposite of the negative part + +def antirectifier(x): + x -= K.mean(x, axis=1, keepdims=True) + x = K.l2_normalize(x, axis=1) + pos = K.relu(x) + neg = K.relu(-x) + return K.concatenate([pos, neg], axis=1) + +model.add(Lambda(antirectifier)) +``` + +#### Variables: + +While it is possible to use Variables with Lambda layers, this practice is +discouraged as it can easily lead to bugs. For instance, consider the +following layer: + +```python + scale = tf.Variable(1.) + scale_layer = tf.keras.layers.Lambda(lambda x: x * scale) +``` + +Because scale_layer does not directly track the `scale` variable, it will +not appear in `scale_layer.trainable_weights` and will therefore not be +trained if `scale_layer` is used in a Model. + +A better pattern is to write a subclassed Layer: + +```python + class ScaleLayer(tf.keras.layers.Layer): + def __init__(self): + super(ScaleLayer, self).__init__() + self.scale = tf.Variable(1.) + + def call(self, inputs): + return inputs * self.scale +``` + +In general, Lambda layers can be convenient for simple stateless +computation, but anything more complex should use a subclass Layer instead. + + + + + + + + + + + + + + + + + + + + + +
+`function` + +The function to be evaluated. Takes input tensor as first +argument. +
+`output_shape` + +Expected output shape from the function. This argument can be +inferred if not explicitly provided. Can be a tuple or a function. If a +tuple, it only specifies the first dimension onward; the sample dimension +is assumed to be either the same as the input, +`output_shape = (input_shape[0],) + output_shape`, or, if the input is +`None`, the sample dimension is also `None`, +`output_shape = (None,) + output_shape`. If a function, it specifies the +entire shape as a function of the input shape: +`output_shape = f(input_shape)`. +
+`mask` + +Either None (indicating no masking) or a callable with the same +signature as the `compute_mask` layer method, or a tensor that will be +returned as the output mask regardless of what the input is. +
+`arguments` + +Optional dictionary of keyword arguments to be passed to the +function. +
+ + +Input shape: Arbitrary. Use the keyword argument input_shape (tuple of + integers, does not include the samples axis) when using this layer as the + first layer in a model. +Output shape: Specified by `output_shape` argument + diff --git a/site/en/api_docs/python/tf/keras/layers/Layer.md b/site/en/api_docs/python/tf/keras/layers/Layer.md new file mode 100644 index 00000000000..25a4dbb0533 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Layer.md @@ -0,0 +1,1316 @@ +description: This is the class from which all layers inherit. + +
+ + + + + + + + + + + + + + + + + + +
+ +# tf.keras.layers.Layer + + + + + + + + + +This is the class from which all layers inherit. + +Inherits From: [`Module`](../../../tf/Module.md) + + + + + + + + + +A layer is a callable object that takes as input one or more tensors and +that outputs one or more tensors. It involves *computation*, defined +in the `call()` method, and a *state* (weight variables), defined +either in the constructor `__init__()` or in the `build()` method. + +Users will just instantiate a layer and then treat it as a callable. + +We recommend that descendants of `Layer` implement the following methods: + +* `__init__()`: Defines custom layer attributes, and creates layer state + variables that do not depend on input shapes, using `add_weight()`. +* `build(self, input_shape)`: This method can be used to create weights that + depend on the shape(s) of the input(s), using `add_weight()`. `__call__()` + will automatically build the layer (if it has not been built yet) by + calling `build()`. +* `call(self, *args, **kwargs)`: Called in `__call__` after making sure + `build()` has been called. `call()` performs the logic of applying the + layer to the input tensors (which should be passed in as argument). + Two reserved keyword arguments you can optionally use in `call()` are: + - `training` (boolean, whether the call is in + inference mode or training mode) + - `mask` (boolean tensor encoding masked timesteps in the input, used + in RNN layers) +* `get_config(self)`: Returns a dictionary containing the configuration used + to initialize this layer. If the keys differ from the arguments + in `__init__`, then override `from_config(self)` as well. + This method is used when saving + the layer or a model that contains this layer. + +#### Examples: + + + +Here's a basic example: a layer with two variables, `w` and `b`, +that returns `y = w . x + b`. +It shows how to implement `build()` and `call()`. +Variables set as attributes of a layer are tracked as weights +of the layers (in `layer.weights`). + +```python +class SimpleDense(Layer): + + def __init__(self, units=32): + super(SimpleDense, self).__init__() + self.units = units + + def build(self, input_shape): # Create the state of the layer (weights) + w_init = tf.random_normal_initializer() + self.w = tf.Variable( + initial_value=w_init(shape=(input_shape[-1], self.units), + dtype='float32'), + trainable=True) + b_init = tf.zeros_initializer() + self.b = tf.Variable( + initial_value=b_init(shape=(self.units,), dtype='float32'), + trainable=True) + + def call(self, inputs): # Defines the computation from inputs to outputs + return tf.matmul(inputs, self.w) + self.b + +# Instantiates the layer. +linear_layer = SimpleDense(4) + +# This will also call `build(input_shape)` and create the weights. 
+y = linear_layer(tf.ones((2, 2))) +assert len(linear_layer.weights) == 2 + +# These weights are trainable, so they're listed in `trainable_weights`: +assert len(linear_layer.trainable_weights) == 2 +``` + +Note that the method `add_weight()` offers a shortcut to create weights: + +```python +class SimpleDense(Layer): + + def __init__(self, units=32): + super(SimpleDense, self).__init__() + self.units = units + + def build(self, input_shape): + self.w = self.add_weight(shape=(input_shape[-1], self.units), + initializer='random_normal', + trainable=True) + self.b = self.add_weight(shape=(self.units,), + initializer='random_normal', + trainable=True) + + def call(self, inputs): + return tf.matmul(inputs, self.w) + self.b +``` + +Besides trainable weights, updated via backpropagation during training, +layers can also have non-trainable weights. These weights are meant to +be updated manually during `call()`. Here's a example layer that computes +the running sum of its inputs: + +```python +class ComputeSum(Layer): + + def __init__(self, input_dim): + super(ComputeSum, self).__init__() + # Create a non-trainable weight. + self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), + trainable=False) + + def call(self, inputs): + self.total.assign_add(tf.reduce_sum(inputs, axis=0)) + return self.total + +my_sum = ComputeSum(2) +x = tf.ones((2, 2)) + +y = my_sum(x) +print(y.numpy()) # [2. 2.] + +y = my_sum(x) +print(y.numpy()) # [4. 4.] + +assert my_sum.weights == [my_sum.total] +assert my_sum.non_trainable_weights == [my_sum.total] +assert my_sum.trainable_weights == [] +``` + +For more information about creating layers, see the guide +[Writing custom layers and models with Keras]( + https://www.tensorflow.org/guide/keras/custom_layers_and_models) + + + + + + + + + + + + + + + + + + + +
+`trainable` + +Boolean, whether the layer's variables should be trainable. +
+`name` + +String name of the layer. +
+`dtype` + +The dtype of the layer's computations and weights (default of +`None` means use tf.keras.backend.floatx in TensorFlow 2, or the type +of the first input in TensorFlow 1). +
+`dynamic` + +Set this to `True` if your layer should only be run eagerly, and +should not be used to generate a static computation graph. +This would be the case for a Tree-RNN or a recursive network, +for example, or generally for any layer that manipulates tensors +using Python control flow. If `False`, we assume that the layer can +safely be used to generate a static computation graph. +
+ + +Each layer has a dtype, which is typically the dtype of the layer's +computations and variables. A layer's dtype can be queried via the +Layer.dtype property. The dtype is specified with the `dtype` constructor +argument. In TensorFlow 2, the dtype defaults to tf.keras.backend.floatx() +if no dtype is passed. `floatx()` itself defaults to "float32". Additionally, +layers will cast their inputs to the layer's dtype in TensorFlow 2. When mixed +precision is used, layers may have different computation and variable dtypes. +See tf.keras.mixed_precision.experimental.Policy for details on layer +dtypes. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +The name of the layer (string). +
+`dtype` + +The dtype of the layer's computations and weights. If mixed +precision is used with a tf.keras.mixed_precision.experimental.Policy, +this is instead just the dtype of the layer's weights, as the computations +are done in a different dtype. +
+`updates` + +List of update ops of this layer. +
+`losses` + +List of losses added by this layer. +
+`trainable_weights` + +List of variables to be included in backprop. +
+`non_trainable_weights` + +List of variables that should not be +included in backprop. +
+`weights` + +The concatenation of the lists trainable_weights and +non_trainable_weights (in this order). +
+`trainable` + +Whether the layer should be trained (boolean). +
+`input_spec` + +Optional (list of) `InputSpec` object(s) specifying the +constraints on inputs that can be accepted by the layer. +
+`activity_regularizer` + +Optional regularizer function for the output of this layer. +
+`dynamic` + +Whether the layer is dynamic (eager-only); set in the constructor. +
+`input` + +Retrieves the input tensor(s) of a layer. + +Only applicable if the layer has exactly one input, +i.e. if it is connected to one incoming layer. +
+`metrics` + +List of tf.keras.metrics.Metric instances tracked by the layer. +
+`output` + +Retrieves the output tensor(s) of a layer. + +Only applicable if the layer has exactly one output, +i.e. if it is connected to one incoming layer. +
+ + + +## Methods + +

add_loss

+
+View source
+
+
+
+Add loss tensor(s), potentially dependent on layer inputs.
+
+Some losses (for instance, activity regularization losses) may be dependent
+on the inputs passed when calling a layer. Hence, when reusing the same
+layer on different inputs `a` and `b`, some entries in `layer.losses` may
+be dependent on `a` and some on `b`. This method automatically keeps track
+of dependencies.
+
+This method can be used inside a subclassed layer or model's `call`
+function, in which case `losses` should be a Tensor or list of Tensors.
+
+#### Example:
+
+
+
+```python
+class MyLayer(tf.keras.layers.Layer):
+  def call(self, inputs):
+    self.add_loss(tf.abs(tf.reduce_mean(inputs)), inputs=True)
+    return inputs
+```
+
+This method can also be called directly on a Functional Model during
+construction. In this case, any loss Tensors passed to this Model must
+be symbolic and be able to be traced back to the model's `Input`s. These
+losses become part of the model's topology and are tracked in `get_config`.
+
+#### Example:
+
+
+
+```python
+inputs = tf.keras.Input(shape=(10,))
+x = tf.keras.layers.Dense(10)(inputs)
+outputs = tf.keras.layers.Dense(1)(x)
+model = tf.keras.Model(inputs, outputs)
+# Activity regularization.
+model.add_loss(tf.abs(tf.reduce_mean(x)))
+```
+
+If this is not the case for your loss (if, for example, your loss references
+a `Variable` of one of the model's layers), you can wrap your loss in a
+zero-argument lambda. These losses are not tracked as part of the model's
+topology since they can't be serialized.
+
+#### Example:
+
+
+
+```python
+inputs = tf.keras.Input(shape=(10,))
+d = tf.keras.layers.Dense(10)
+x = d(inputs)
+outputs = tf.keras.layers.Dense(1)(x)
+model = tf.keras.Model(inputs, outputs)
+# Weight regularization.
+model.add_loss(lambda: tf.reduce_mean(d.kernel))
+```
+
+The `get_losses_for` method allows you to retrieve the losses relevant to a
+specific set of inputs.
+
+
+
+
+
+
+
+
+
+
+
+
+
Arguments
+`losses` + +Loss tensor, or list/tuple of tensors. Rather than tensors, losses +may also be zero-argument callables which create a loss tensor. +
+`inputs` + +Ignored when executing eagerly. If anything other than None is +passed, it signals the losses are conditional on some of the layer's +inputs, and thus they should only be run where these inputs are +available. This is the case for activity regularization losses, for +instance. If `None` is passed, the losses are assumed +to be unconditional, and will apply across all dataflows of the layer +(e.g. weight regularization losses). +
+ + + +

add_metric

+ +View source + + + +Adds metric tensor to the layer. + + + + + + + + + + + + + + + + + +
Args
+`value` + +Metric tensor. +
+`aggregation`
+
+Sample-wise metric reduction function. If `aggregation=None`,
+it indicates that the metric tensor provided has already been
+aggregated, e.g. `bin_acc = BinaryAccuracy(name='acc')` followed by
+`model.add_metric(bin_acc(y_true, y_pred))`. If `aggregation='mean'`, the
+given metric tensor will be sample-wise reduced using the `mean` function,
+e.g. `model.add_metric(tf.reduce_sum(outputs), name='output_mean',
+aggregation='mean')`.
+
+`name` + +String metric name. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `aggregation` is anything other than None or `mean`. +
+ + + +
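+
+As an illustration, a custom layer can report a metric on its own
+activations from inside `call()`. This is only a sketch; the layer and
+metric names below are illustrative, not part of the API:
+
+```python
+import tensorflow as tf
+
+class ActivationMeanLayer(tf.keras.layers.Layer):
+  """Illustrative layer that tracks the mean of its activations."""
+
+  def call(self, inputs):
+    # Sample-wise reduced and tracked under `layer.metrics`.
+    self.add_metric(tf.reduce_mean(inputs),
+                    name='activation_mean',
+                    aggregation='mean')
+    return inputs
+```
+
+The tracked value then shows up in `layer.metrics` and, when the layer is
+used inside a `Model`, in the model's training logs.
+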

add_weight

+ +View source + + + +Adds a new variable to the layer. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Arguments
+`name` + +Variable name. +
+`shape` + +Variable shape. Defaults to scalar if unspecified. +
+`dtype` + +The type of the variable. Defaults to `self.dtype` or `float32`. +
+`initializer` + +Initializer instance (callable). +
+`regularizer` + +Regularizer instance (callable). +
+`trainable` + +Boolean, whether the variable should be part of the layer's +"trainable_variables" (e.g. variables, biases) +or "non_trainable_variables" (e.g. BatchNorm mean and variance). +Note that `trainable` cannot be `True` if `synchronization` +is set to `ON_READ`. +
+`constraint` + +Constraint instance (callable). +
+`partitioner` + +Partitioner to be passed to the `Trackable` API. +
+`use_resource` + +Whether to use `ResourceVariable`. +
+`synchronization`
+
+Indicates when a distributed variable will be
+aggregated. Accepted values are constants defined in the class
+tf.VariableSynchronization. By default the synchronization is set to
+`AUTO` and the current `DistributionStrategy` chooses
+when to synchronize. If `synchronization` is set to `ON_READ`,
+`trainable` must not be set to `True`.
+
+`aggregation` + +Indicates how a distributed variable will be aggregated. +Accepted values are constants defined in the class +tf.VariableAggregation. +
+`**kwargs` + +Additional keyword arguments. Accepted values are `getter`, +`collections`, `experimental_autocast` and `caching_device`. +
+ + + + + + + + + + + +
Returns
+The created variable. Usually either a `Variable` or `ResourceVariable` +instance. If `partitioner` is not `None`, a `PartitionedVariable` +instance is returned. +
+ + + + + + + + + + + + + + + +
Raises
+`RuntimeError` + +If called with partitioned variable regularization and +eager execution is enabled. +
+`ValueError`
+
+When giving an unsupported dtype and no initializer, or when
+`trainable` has been set to `True` with `synchronization` set to `ON_READ`.
+
+ + + +
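+
+As a sketch of these arguments in use (the layer name and sizes below are
+illustrative, not part of the API), a kernel can be created with an
+initializer and a regularizer so that the penalty is collected into
+`self.losses`:
+
+```python
+import tensorflow as tf
+
+class TinyDense(tf.keras.layers.Layer):
+  """Illustrative layer whose state is created with `add_weight()`."""
+
+  def __init__(self, units=16):
+    super(TinyDense, self).__init__()
+    self.units = units
+
+  def build(self, input_shape):
+    # Trainable kernel; the L2 penalty is collected into `self.losses`.
+    self.kernel = self.add_weight(
+        name='kernel',
+        shape=(input_shape[-1], self.units),
+        initializer='glorot_uniform',
+        regularizer=tf.keras.regularizers.l2(1e-4),
+        trainable=True)
+    self.bias = self.add_weight(
+        name='bias', shape=(self.units,), initializer='zeros')
+
+  def call(self, inputs):
+    return tf.matmul(inputs, self.kernel) + self.bias
+```
+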

build

+ +View source + + + +Creates the variables of the layer (optional, for subclass implementers). + +This is a method that implementers of subclasses of `Layer` or `Model` +can override if they need a state-creation step in-between +layer instantiation and layer call. + +This is typically used to create the weights of `Layer` subclasses. + + + + + + + + + + +
Arguments
+`input_shape` + +Instance of `TensorShape`, or list of instances of +`TensorShape` if the layer expects a list of inputs +(one instance per input). +
+ + + +
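+
+For the multi-input case mentioned above, `input_shape` arrives as a list of
+`TensorShape`s; a minimal sketch (the layer name is hypothetical):
+
+```python
+import tensorflow as tf
+
+class ScaledDot(tf.keras.layers.Layer):
+  """Hypothetical layer that takes a list of two inputs."""
+
+  def build(self, input_shape):
+    # `input_shape` is a list of TensorShapes, one per input.
+    a_shape, b_shape = input_shape
+    self.scale = self.add_weight(
+        name='scale', shape=(a_shape[-1],), initializer='ones')
+
+  def call(self, inputs):
+    a, b = inputs
+    return tf.reduce_sum(self.scale * a * b, axis=-1)
+
+# Calling with a list of tensors triggers `build()` with a list of shapes.
+out = ScaledDot()([tf.ones((2, 3)), tf.ones((2, 3))])
+```
+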

call

+ +View source + + + +This is where the layer's logic lives. + + + + + + + + + + + + + + +
Arguments
+`inputs` + +Input tensor, or list/tuple of input tensors. +
+`**kwargs` + +Additional keyword arguments. +
+ + + + + + + + + + + +
Returns
+A tensor or list/tuple of tensors. +
+ + + +
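+
+The reserved `training` argument can be used to switch behavior between
+training and inference; a minimal sketch with a hypothetical layer:
+
+```python
+import tensorflow as tf
+
+class NoiseOnTrain(tf.keras.layers.Layer):
+  """Hypothetical layer that adds noise only while training."""
+
+  def call(self, inputs, training=None):
+    if training:
+      # Perturb activations during training only.
+      return inputs + tf.random.normal(tf.shape(inputs), stddev=0.1)
+    return inputs
+```
+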

compute_mask

+ +View source + + + +Computes an output mask tensor. + + + + + + + + + + + + + + +
Arguments
+`inputs` + +Tensor or list of tensors. +
+`mask` + +Tensor or list of tensors. +
+ + + + + + + + + + + +
Returns
+None or a tensor (or list of tensors, +one per output tensor of the layer). +
+ + + +
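+
+A layer that changes the time dimension typically overrides `compute_mask`
+so the mask stays aligned with its output; a minimal, hypothetical sketch:
+
+```python
+import tensorflow as tf
+
+class DropFirstStep(tf.keras.layers.Layer):
+  """Hypothetical layer that drops the first timestep of a 3D input."""
+
+  def __init__(self, **kwargs):
+    super(DropFirstStep, self).__init__(**kwargs)
+    self.supports_masking = True
+
+  def call(self, inputs):
+    return inputs[:, 1:, :]
+
+  def compute_mask(self, inputs, mask=None):
+    if mask is None:
+      # No incoming mask, so nothing to propagate.
+      return None
+    # Shorten the mask to match the shortened time axis.
+    return mask[:, 1:]
+```
+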

compute_output_shape

+ +View source + + + +Computes the output shape of the layer. + +If the layer has not been built, this method will call `build` on the +layer. This assumes that the layer will later be used with inputs that +match the input shape provided here. + + + + + + + + + + +
Arguments
+`input_shape` + +Shape tuple (tuple of integers) +or list of shape tuples (one per output tensor of the layer). +Shape tuples can include None for free dimensions, +instead of an integer. +
+ + + + + + + + + + + +
Returns
+An output shape tuple.
+
+ + + +
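+
+For example, a built-in layer such as `Dense` can be queried without running
+any data through it (the shapes below are just an illustration):
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.Dense(8)
+# Query the output shape for a feature dimension of 4;
+# the batch dimension stays `None`.
+print(layer.compute_output_shape((None, 4)))  # (None, 8)
+```
+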

compute_output_signature

+ +View source + + + +Compute the output tensor signature of the layer based on the inputs. + +Unlike a TensorShape object, a TensorSpec object contains both shape +and dtype information for a tensor. This method allows layers to provide +output dtype information if it is different from the input dtype. +For any layer that doesn't implement this function, +the framework will fall back to use `compute_output_shape`, and will +assume that the output dtype matches the input dtype. + + + + + + + + + + +
Args
+`input_signature` + +Single TensorSpec or nested structure of TensorSpec +objects, describing a candidate input for the layer. +
+ + + + + + + + + + + +
Returns
+Single TensorSpec or nested structure of TensorSpec objects, describing +how the layer would transform the provided input. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If input_signature contains a non-TensorSpec object. +
+ + + +
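+
+A small sketch of the shape-and-dtype variant, assuming a standard `Dense`
+layer and float32 inputs:
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.Dense(8)
+spec = tf.TensorSpec(shape=(None, 4), dtype=tf.float32)
+# Describes the would-be output, e.g. shape (None, 8) with dtype float32.
+print(layer.compute_output_signature(spec))
+```
+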

count_params

+ +View source + + + +Count the total number of scalars composing the weights. + + + + + + + + + + +
Returns
+An integer count. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if the layer isn't yet built +(in which case its weights aren't yet defined). +
+ + + +
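+
+For instance, once a layer has been built, its parameter count can be read
+directly (a small sketch):
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.Dense(5)
+layer.build((None, 3))       # kernel: 3 * 5 = 15, bias: 5
+print(layer.count_params())  # 20
+```
+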

from_config

+ +View source + + + +Creates a layer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same layer from the config +dictionary. It does not handle layer connectivity +(handled by Network), nor weights (handled by `set_weights`). + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the +output of get_config. +
+ + + + + + + + + + + +
Returns
+A layer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the layer. + +A layer config is a Python dictionary (serializable) +containing the configuration of a layer. +The same layer can be reinstantiated later +(without its trained weights) from this configuration. + +The config of a layer does not include connectivity +information, nor the layer class name. These are handled +by `Network` (one layer of abstraction above). + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +
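+
+Together with `from_config` above, this enables a simple round trip; a
+minimal sketch with a built-in layer (the layer name is illustrative):
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.Dense(4, activation='relu', name='probe')
+config = layer.get_config()
+# `config` is a plain dict (e.g. it contains 'units': 4 and 'name': 'probe').
+# An identically configured, freshly initialized layer can be rebuilt:
+clone = tf.keras.layers.Dense.from_config(config)
+assert clone.units == layer.units
+```
+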

get_weights

+ +View source + + + +Returns the current weights of the layer. + +The weights of a layer represent the state of the layer. This function +returns both trainable and non-trainable weight values associated with this +layer as a list of Numpy arrays, which can in turn be used to load state +into similarly parameterized layers. + +For example, a Dense layer returns a list of two values-- per-output +weights and the bias value. These can be used to set the weights of another +Dense layer: + +``` +>>> a = tf.keras.layers.Dense(1, +... kernel_initializer=tf.constant_initializer(1.)) +>>> a_out = a(tf.convert_to_tensor([[1., 2., 3.]])) +>>> a.get_weights() +[array([[1.], + [1.], + [1.]], dtype=float32), array([0.], dtype=float32)] +>>> b = tf.keras.layers.Dense(1, +... kernel_initializer=tf.constant_initializer(2.)) +>>> b_out = b(tf.convert_to_tensor([[10., 20., 30.]])) +>>> b.get_weights() +[array([[2.], + [2.], + [2.]], dtype=float32), array([0.], dtype=float32)] +>>> b.set_weights(a.get_weights()) +>>> b.get_weights() +[array([[1.], + [1.], + [1.]], dtype=float32), array([0.], dtype=float32)] +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

set_weights

+ +View source + + + +Sets the weights of the layer, from Numpy arrays. + +The weights of a layer represent the state of the layer. This function +sets the weight values from numpy arrays. The weight values should be +passed in the order they are created by the layer. Note that the layer's +weights must be instantiated before calling this function by calling +the layer. + +For example, a Dense layer returns a list of two values-- per-output +weights and the bias value. These can be used to set the weights of another +Dense layer: + +``` +>>> a = tf.keras.layers.Dense(1, +... kernel_initializer=tf.constant_initializer(1.)) +>>> a_out = a(tf.convert_to_tensor([[1., 2., 3.]])) +>>> a.get_weights() +[array([[1.], + [1.], + [1.]], dtype=float32), array([0.], dtype=float32)] +>>> b = tf.keras.layers.Dense(1, +... kernel_initializer=tf.constant_initializer(2.)) +>>> b_out = b(tf.convert_to_tensor([[10., 20., 30.]])) +>>> b.get_weights() +[array([[2.], + [2.], + [2.]], dtype=float32), array([0.], dtype=float32)] +>>> b.set_weights(a.get_weights()) +>>> b.get_weights() +[array([[1.], + [1.], + [1.]], dtype=float32), array([0.], dtype=float32)] +``` + + + + + + + + + + +
Arguments
+`weights`
+
+a list of Numpy arrays. The number of arrays and their shapes
+must match the number and shapes of the layer's weights
+(i.e. it should match the output of `get_weights`).
+
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the provided weights list does not match the +layer's specifications. +
+ + + +

__call__

+ +View source + + + +Wraps `call`, applying pre- and post-processing steps. + + + + + + + + + + + + + + +
Arguments
+`*args` + +Positional arguments to be passed to `self.call`. +
+`**kwargs` + +Keyword arguments to be passed to `self.call`. +
+ + + + + + + + + + + +
Returns
+Output tensor(s). +
+
+
+
+#### Note:
+
+- The following optional keyword arguments are reserved for specific uses:
+  * `training`: Boolean scalar tensor or Python boolean indicating
+    whether the `call` is meant for training or inference.
+  * `mask`: Boolean input mask.
+- If the layer's `call` method takes a `mask` argument (as some Keras
+  layers do), its default value will be set to the mask generated
+  for `inputs` by the previous layer (if `inputs` did come from
+  a layer that generated a corresponding mask, i.e. if it came from
+  a Keras layer with masking support).
+
+
+
+
+
+
+
+
+
+
+
+
+
Raises
+`ValueError` + +if the layer's `call` method returns None (an invalid value). +
+`RuntimeError` + +if `super().__init__()` was not called in the constructor. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/LayerNormalization.md b/site/en/api_docs/python/tf/keras/layers/LayerNormalization.md new file mode 100644 index 00000000000..ecefa9c7ca7 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/LayerNormalization.md @@ -0,0 +1,169 @@ +description: Layer normalization layer (Ba et al., 2016). + +
+ + + + +
+ +# tf.keras.layers.LayerNormalization + + + + + + + + + +Layer normalization layer (Ba et al., 2016). + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +Normalize the activations of the previous layer for each given example in a +batch independently, rather than across a batch like Batch Normalization. +i.e. applies a transformation that maintains the mean activation within each +example close to 0 and the activation standard deviation close to 1. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`axis` + +Integer or List/Tuple. The axis that should be normalized +(typically the features axis). +
+`epsilon` + +Small float added to variance to avoid dividing by zero. +
+`center` + +If True, add offset of `beta` to normalized tensor. +If False, `beta` is ignored. +
+`scale` + +If True, multiply by `gamma`. +If False, `gamma` is not used. +When the next layer is linear (also e.g. nn.relu), +this can be disabled since the scaling +will be done by the next layer. +
+`beta_initializer` + +Initializer for the beta weight. +
+`gamma_initializer` + +Initializer for the gamma weight. +
+`beta_regularizer` + +Optional regularizer for the beta weight. +
+`gamma_regularizer` + +Optional regularizer for the gamma weight. +
+`beta_constraint` + +Optional constraint for the beta weight. +
+`gamma_constraint` + +Optional constraint for the gamma weight. +
+`trainable` + +Boolean, if `True` the variables will be marked as trainable. +
+ + + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as input. + + + +#### References: + +- [Layer Normalization](https://arxiv.org/abs/1607.06450) + + diff --git a/site/en/api_docs/python/tf/keras/layers/LeakyReLU.md b/site/en/api_docs/python/tf/keras/layers/LeakyReLU.md new file mode 100644 index 00000000000..95344eca47a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/LeakyReLU.md @@ -0,0 +1,102 @@ +description: Leaky version of a Rectified Linear Unit. + +
+ + + + +
+ +# tf.keras.layers.LeakyReLU + + + + + + + + + +Leaky version of a Rectified Linear Unit. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +It allows a small gradient when the unit is not active: + +``` + f(x) = alpha * x if x < 0 + f(x) = x if x >= 0 +``` + +#### Usage: + + + +``` +>>> layer = tf.keras.layers.LeakyReLU() +>>> output = layer([-3.0, -1.0, 0.0, 2.0]) +>>> list(output.numpy()) +[-0.9, -0.3, 0.0, 2.0] +>>> layer = tf.keras.layers.LeakyReLU(alpha=0.1) +>>> output = layer([-3.0, -1.0, 0.0, 2.0]) +>>> list(output.numpy()) +[-0.3, -0.1, 0.0, 2.0] +``` + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the batch axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as the input. + + + + + + + + + + + + +
+`alpha`
+
+Float >= 0. Negative slope coefficient. Defaults to 0.3.
+
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/LocallyConnected1D.md b/site/en/api_docs/python/tf/keras/layers/LocallyConnected1D.md new file mode 100644 index 00000000000..8104685eb68 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/LocallyConnected1D.md @@ -0,0 +1,252 @@ +description: Locally-connected layer for 1D inputs. + +
+ + + + +
+ +# tf.keras.layers.LocallyConnected1D + + + + + + + + + +Locally-connected layer for 1D inputs. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +The `LocallyConnected1D` layer works similarly to +the `Conv1D` layer, except that weights are unshared, +that is, a different set of filters is applied at each different patch +of the input. + +Note: layer attributes cannot be modified after the layer has been called +once (except the `trainable` attribute). + +#### Example: + + +```python + # apply a unshared weight convolution 1d of length 3 to a sequence with + # 10 timesteps, with 64 output filters + model = Sequential() + model.add(LocallyConnected1D(64, 3, input_shape=(10, 32))) + # now model.output_shape == (None, 8, 64) + # add a new conv1d on top + model.add(LocallyConnected1D(32, 3)) + # now model.output_shape == (None, 6, 32) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filters` + +Integer, the dimensionality of the output space +(i.e. the number of output filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of a single integer, +specifying the length of the 1D convolution window. +
+`strides` + +An integer or tuple/list of a single integer, +specifying the stride length of the convolution. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +Currently only supports `"valid"` (case-insensitive). +`"same"` may be supported in the future. +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, length, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, length)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+`activation`
+
+Activation function to use.
+If you don't specify anything, no activation is applied
+(i.e. "linear" activation: `a(x) = x`).
+
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix. +
+`bias_initializer` + +Initializer for the bias vector. +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. +
+`activity_regularizer`
+
+Regularizer function applied to
+the output of the layer (its "activation").
+
+`kernel_constraint` + +Constraint function applied to the kernel matrix. +
+`bias_constraint` + +Constraint function applied to the bias vector. +
+`implementation` + +implementation mode, either `1`, `2`, or `3`. +`1` loops over input spatial locations to perform the forward pass. +It is memory-efficient but performs a lot of (small) ops. + +`2` stores layer weights in a dense but sparsely-populated 2D matrix +and implements the forward pass as a single matrix-multiply. It uses +a lot of RAM but performs few (large) ops. + +`3` stores layer weights in a sparse tensor and implements the forward +pass as a single sparse matrix-multiply. + +How to choose: + +`1`: large, dense models, +`2`: small models, +`3`: large, sparse models, + +where "large" stands for large input/output activations +(i.e. many `filters`, `input_filters`, large `input_size`, +`output_size`), and "sparse" stands for few connections between inputs +and outputs, i.e. small ratio +`filters * input_filters * kernel_size / (input_size * strides)`, +where inputs to and outputs of the layer are assumed to have shapes +`(input_size, input_filters)`, `(output_size, filters)` +respectively. + +It is recommended to benchmark each in the setting of interest to pick +the most efficient one (in terms of speed and memory usage). Correct +choice of implementation can lead to dramatic speed improvements (e.g. +50X), potentially at the expense of RAM. + +Also, only `padding="valid"` is supported by `implementation=1`. +
+ + + +#### Input shape: + +3D tensor with shape: `(batch_size, steps, input_dim)` + + + +#### Output shape: + +3D tensor with shape: `(batch_size, new_steps, filters)` +`steps` value might have changed due to padding or strides. + + diff --git a/site/en/api_docs/python/tf/keras/layers/LocallyConnected2D.md b/site/en/api_docs/python/tf/keras/layers/LocallyConnected2D.md new file mode 100644 index 00000000000..a3316d440d2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/LocallyConnected2D.md @@ -0,0 +1,264 @@ +description: Locally-connected layer for 2D inputs. + +
+ + + + +
+
+# tf.keras.layers.LocallyConnected2D
+
+
+
+
+
+
+
+
+
+Locally-connected layer for 2D inputs.
+
+Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md)
+
+
+
+
+
+
+
+
+
+The `LocallyConnected2D` layer works similarly
+to the `Conv2D` layer, except that weights are unshared,
+that is, a different set of filters is applied at each
+different patch of the input.
+
+Note: layer attributes cannot be modified after the layer has been called
+once (except the `trainable` attribute).
+
+#### Examples:
+
+
+```python
+    # apply a 3x3 unshared weights convolution with 64 output filters
+    # on a 32x32 image with `data_format="channels_last"`:
+    model = Sequential()
+    model.add(LocallyConnected2D(64, (3, 3), input_shape=(32, 32, 3)))
+    # now model.output_shape == (None, 30, 30, 64)
+    # notice that this layer will consume (30*30)*(3*3*3*64) + (30*30)*64
+    # parameters
+
+    # add a 3x3 unshared weights convolution on top, with 32 output filters:
+    model.add(LocallyConnected2D(32, (3, 3)))
+    # now model.output_shape == (None, 28, 28, 32)
+```
+
+
+
+
+
+
+
+
+
+
+`filters` + +Integer, the dimensionality of the output space +(i.e. the number of output filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 2 integers, specifying the +width and height of the 2D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the convolution along the width and height. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`padding`
+
+Currently only supports `"valid"` (case-insensitive).
+`"same"` will be supported in the future.
+
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+`activation`
+
+Activation function to use.
+If you don't specify anything, no activation is applied
+(i.e. "linear" activation: `a(x) = x`).
+
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix. +
+`bias_initializer` + +Initializer for the bias vector. +
+`kernel_regularizer` + +Regularizer function applied to +the `kernel` weights matrix. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. +
+`activity_regularizer` + +Regularizer function applied to +the output of the layer (its "activation"). +
+`kernel_constraint` + +Constraint function applied to the kernel matrix. +
+`bias_constraint` + +Constraint function applied to the bias vector. +
+`implementation` + +implementation mode, either `1`, `2`, or `3`. +`1` loops over input spatial locations to perform the forward pass. +It is memory-efficient but performs a lot of (small) ops. + +`2` stores layer weights in a dense but sparsely-populated 2D matrix +and implements the forward pass as a single matrix-multiply. It uses +a lot of RAM but performs few (large) ops. + +`3` stores layer weights in a sparse tensor and implements the forward +pass as a single sparse matrix-multiply. + +How to choose: + +`1`: large, dense models, +`2`: small models, +`3`: large, sparse models, + +where "large" stands for large input/output activations +(i.e. many `filters`, `input_filters`, large `np.prod(input_size)`, +`np.prod(output_size)`), and "sparse" stands for few connections +between inputs and outputs, i.e. small ratio +`filters * input_filters * np.prod(kernel_size) / (np.prod(input_size) +* np.prod(strides))`, where inputs to and outputs of the layer are +assumed to have shapes `input_size + (input_filters,)`, +`output_size + (filters,)` respectively. + +It is recommended to benchmark each in the setting of interest to pick +the most efficient one (in terms of speed and memory usage). Correct +choice of implementation can lead to dramatic speed improvements (e.g. +50X), potentially at the expense of RAM. + +Also, only `padding="valid"` is supported by `implementation=1`. +
+ + + +#### Input shape: + +4D tensor with shape: +`(samples, channels, rows, cols)` if data_format='channels_first' +or 4D tensor with shape: +`(samples, rows, cols, channels)` if data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: +`(samples, filters, new_rows, new_cols)` if data_format='channels_first' +or 4D tensor with shape: +`(samples, new_rows, new_cols, filters)` if data_format='channels_last'. +`rows` and `cols` values might have changed due to padding. + + diff --git a/site/en/api_docs/python/tf/keras/layers/Masking.md b/site/en/api_docs/python/tf/keras/layers/Masking.md new file mode 100644 index 00000000000..25ee5075f7e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Masking.md @@ -0,0 +1,87 @@ +description: Masks a sequence by using a mask value to skip timesteps. + +
+ + + + +
+ +# tf.keras.layers.Masking + + + + + + + + + +Masks a sequence by using a mask value to skip timesteps. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +For each timestep in the input tensor (dimension #1 in the tensor), +if all values in the input tensor at that timestep +are equal to `mask_value`, then the timestep will be masked (skipped) +in all downstream layers (as long as they support masking). + +If any downstream layer does not support masking yet receives such +an input mask, an exception will be raised. + +#### Example: + + + +Consider a Numpy data array `x` of shape `(samples, timesteps, features)`, +to be fed to an LSTM layer. You want to mask timestep #3 and #5 because you +lack data for these timesteps. You can: + +- Set `x[:, 3, :] = 0.` and `x[:, 5, :] = 0.` +- Insert a `Masking` layer with `mask_value=0.` before the LSTM layer: + +```python +samples, timesteps, features = 32, 10, 8 +inputs = np.random.random([samples, timesteps, features]).astype(np.float32) +inputs[:, 3, :] = 0. +inputs[:, 5, :] = 0. + +model = tf.keras.models.Sequential() +model.add(tf.keras.layers.Masking(mask_value=0., + input_shape=(timesteps, features))) +model.add(tf.keras.layers.LSTM(32)) + +output = model(inputs) +# The time step 3 and 5 will be skipped from LSTM calculation. +``` + +See [the masking and padding +guide](https://www.tensorflow.org/guide/keras/masking_and_padding) +for more details. + diff --git a/site/en/api_docs/python/tf/keras/layers/MaxPool1D.md b/site/en/api_docs/python/tf/keras/layers/MaxPool1D.md new file mode 100644 index 00000000000..2b7f03eee15 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/MaxPool1D.md @@ -0,0 +1,168 @@ +description: Max pooling operation for 1D temporal data. + +
+ + + + +
+ +# tf.keras.layers.MaxPool1D + + + + + + + + + +Max pooling operation for 1D temporal data. + + + + + + + + + +Downsamples the input representation by taking the maximum value over the +window defined by `pool_size`. The window is shifted by `strides`. The +resulting output when using "valid" padding option has a shape of: +`output_shape = (input_shape - pool_size + 1) / strides)` + +The resulting output shape when using the "same" padding option is: +`output_shape = input_shape / strides` + +For example, for strides=1 and padding="valid": + +``` +>>> x = tf.constant([1., 2., 3., 4., 5.]) +>>> x = tf.reshape(x, [1, 5, 1]) +>>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2, +... strides=1, padding='valid') +>>> max_pool_1d(x) + +``` + +For example, for strides=2 and padding="valid": + +``` +>>> x = tf.constant([1., 2., 3., 4., 5.]) +>>> x = tf.reshape(x, [1, 5, 1]) +>>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2, +... strides=2, padding='valid') +>>> max_pool_1d(x) + +``` + +For example, for strides=1 and padding="same": + +``` +>>> x = tf.constant([1., 2., 3., 4., 5.]) +>>> x = tf.reshape(x, [1, 5, 1]) +>>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2, +... strides=1, padding='same') +>>> max_pool_1d(x) + +``` + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +Integer, size of the max pooling window. +
+`strides` + +Integer, or None. Specifies how much the pooling window moves +for each pooling step. +If None, it will default to `pool_size`. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +"valid" adds no padding. "same" adds padding such that if the stride +is 1, the output shape is the same as the input shape. +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, steps, features)` while `channels_first` +corresponds to inputs with shape +`(batch, features, steps)`. +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 3D tensor with shape `(batch_size, steps, features)`. +- If `data_format='channels_first'`: + 3D tensor with shape `(batch_size, features, steps)`. + + + +#### Output shape: + +- If `data_format='channels_last'`: + 3D tensor with shape `(batch_size, downsampled_steps, features)`. +- If `data_format='channels_first'`: + 3D tensor with shape `(batch_size, features, downsampled_steps)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/MaxPool2D.md b/site/en/api_docs/python/tf/keras/layers/MaxPool2D.md new file mode 100644 index 00000000000..0657aafa97b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/MaxPool2D.md @@ -0,0 +1,227 @@ +description: Max pooling operation for 2D spatial data. + +
+ + + + +
+ +# tf.keras.layers.MaxPool2D + + + + + + + + + +Max pooling operation for 2D spatial data. + + + + + + + + + +Downsamples the input representation by taking the maximum value over the +window defined by `pool_size` for each dimension along the features axis. +The window is shifted by `strides` in each dimension. The resulting output +when using "valid" padding option has a shape(number of rows or columns) of: +`output_shape = (input_shape - pool_size + 1) / strides)` + +The resulting output shape when using the "same" padding option is: +`output_shape = input_shape / strides` + +For example, for stride=(1,1) and padding="valid": + +``` +>>> x = tf.constant([[1., 2., 3.], +... [4., 5., 6.], +... [7., 8., 9.]]) +>>> x = tf.reshape(x, [1, 3, 3, 1]) +>>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), +... strides=(1, 1), padding='valid') +>>> max_pool_2d(x) + +``` + +For example, for stride=(2,2) and padding="valid": + +``` +>>> x = tf.constant([[1., 2., 3., 4.], +... [5., 6., 7., 8.], +... [9., 10., 11., 12.]]) +>>> x = tf.reshape(x, [1, 3, 4, 1]) +>>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), +... strides=(1, 1), padding='valid') +>>> max_pool_2d(x) + +``` + +#### Usage Example: + + + +``` +>>> input_image = tf.constant([[[[1.], [1.], [2.], [4.]], +... [[2.], [2.], [3.], [2.]], +... [[4.], [1.], [1.], [1.]], +... [[2.], [2.], [1.], [4.]]]]) +>>> output = tf.constant([[[[1], [0]], +... [[0], [1]]]]) +>>> model = tf.keras.models.Sequential() +>>> model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), +... input_shape=(4,4,1))) +>>> model.compile('adam', 'mean_squared_error') +>>> model.predict(input_image, steps=1) +array([[[[2.], + [4.]], + [[4.], + [4.]]]], dtype=float32) +``` + +For example, for stride=(1,1) and padding="same": + +``` +>>> x = tf.constant([[1., 2., 3.], +... [4., 5., 6.], +... [7., 8., 9.]]) +>>> x = tf.reshape(x, [1, 3, 3, 1]) +>>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), +... strides=(1, 1), padding='same') +>>> max_pool_2d(x) + +``` + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +integer or tuple of 2 integers, +window size over which to take the maximum. +`(2, 2)` will take the max value over a 2x2 pooling window. +If only one integer is specified, the same window length +will be used for both dimensions. +
+`strides` + +Integer, tuple of 2 integers, or None. +Strides values. Specifies how far the pooling window moves +for each pooling step. If None, it will default to `pool_size`. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +"valid" adds no zero padding. "same" adds padding such that if the stride +is 1, the output shape is the same as input shape. +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 4D tensor with shape `(batch_size, rows, cols, channels)`. +- If `data_format='channels_first'`: + 4D tensor with shape `(batch_size, channels, rows, cols)`. + + + +#### Output shape: + +- If `data_format='channels_last'`: + 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`. +- If `data_format='channels_first'`: + 4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`. + + + + + + + + + + + +
+A tensor of rank 4 representing the maximum pooled values. See above for +output shape. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/MaxPool3D.md b/site/en/api_docs/python/tf/keras/layers/MaxPool3D.md new file mode 100644 index 00000000000..c911cc7e655 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/MaxPool3D.md @@ -0,0 +1,121 @@ +description: Max pooling operation for 3D data (spatial or spatio-temporal). + +
+ + + + +
+ +# tf.keras.layers.MaxPool3D + + + + + + + + + +Max pooling operation for 3D data (spatial or spatio-temporal). + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pool_size` + +Tuple of 3 integers, +factors by which to downscale (dim1, dim2, dim3). +`(2, 2, 2)` will halve the size of the 3D input in each dimension. +
+`strides` + +tuple of 3 integers, or None. Strides values. +
+`padding` + +One of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +while `channels_first` corresponds to inputs with shape +`(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +- If `data_format='channels_last'`: + 5D tensor with shape: + `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +- If `data_format='channels_first'`: + 5D tensor with shape: + `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)` + + + +#### Output shape: + +- If `data_format='channels_last'`: + 5D tensor with shape: + `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)` +- If `data_format='channels_first'`: + 5D tensor with shape: + `(batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)` + + diff --git a/site/en/api_docs/python/tf/keras/layers/Maximum.md b/site/en/api_docs/python/tf/keras/layers/Maximum.md new file mode 100644 index 00000000000..286e1586549 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Maximum.md @@ -0,0 +1,86 @@ +description: Layer that computes the maximum (element-wise) a list of inputs. + +
+ + + + +
+ +# tf.keras.layers.Maximum + + + + + + + + + +Layer that computes the maximum (element-wise) a list of inputs. + + + + + + + + + +It takes as input a list of tensors, all of the same shape, and returns +a single tensor (also of the same shape). + +``` +>>> tf.keras.layers.Maximum()([np.arange(5).reshape(5, 1), +... np.arange(5, 10).reshape(5, 1)]) + +``` + +``` +>>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2)) +>>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2)) +>>> maxed = tf.keras.layers.Maximum()([x1, x2]) +>>> maxed.shape +TensorShape([5, 8]) +``` + + + + + + + + + + +
+`**kwargs` + +standard layer keyword arguments. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/Minimum.md b/site/en/api_docs/python/tf/keras/layers/Minimum.md new file mode 100644 index 00000000000..46aa1068524 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Minimum.md @@ -0,0 +1,86 @@ +description: Layer that computes the minimum (element-wise) a list of inputs. + +
+ + + + +
+ +# tf.keras.layers.Minimum + + + + + + + + + +Layer that computes the minimum (element-wise) a list of inputs. + + + + + + + + + +It takes as input a list of tensors, all of the same shape, and returns +a single tensor (also of the same shape). + +``` +>>> tf.keras.layers.Minimum()([np.arange(5).reshape(5, 1), +... np.arange(5, 10).reshape(5, 1)]) + +``` + +``` +>>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2)) +>>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2)) +>>> minned = tf.keras.layers.Minimum()([x1, x2]) +>>> minned.shape +TensorShape([5, 8]) +``` + + + + + + + + + + +
+`**kwargs` + +standard layer keyword arguments. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/Multiply.md b/site/en/api_docs/python/tf/keras/layers/Multiply.md new file mode 100644 index 00000000000..2e4ba929c9b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Multiply.md @@ -0,0 +1,86 @@ +description: Layer that multiplies (element-wise) a list of inputs. + +
+ + + + +
+ +# tf.keras.layers.Multiply + + + + + + + + + +Layer that multiplies (element-wise) a list of inputs. + + + + + + + + + +It takes as input a list of tensors, all of the same shape, and returns +a single tensor (also of the same shape). + +``` +>>> tf.keras.layers.Multiply()([np.arange(5).reshape(5, 1), +... np.arange(5, 10).reshape(5, 1)]) + +``` + +``` +>>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2)) +>>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2)) +>>> multiplied = tf.keras.layers.Multiply()([x1, x2]) +>>> multiplied.shape +TensorShape([5, 8]) +``` + + + + + + + + + + +
+`**kwargs` + +standard layer keyword arguments. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/PReLU.md b/site/en/api_docs/python/tf/keras/layers/PReLU.md new file mode 100644 index 00000000000..2908806cf9d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/PReLU.md @@ -0,0 +1,121 @@ +description: Parametric Rectified Linear Unit. + +
+ + + + +
+ +# tf.keras.layers.PReLU + + + + + + + + + +Parametric Rectified Linear Unit. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + +#### It follows: + + + +``` + f(x) = alpha * x for x < 0 + f(x) = x for x >= 0 +``` + +where `alpha` is a learned array with the same shape as x. + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as the input. + + + + + + + + + + + + + + + + + + + + + +
+`alpha_initializer` + +Initializer function for the weights. +
+`alpha_regularizer` + +Regularizer for the weights. +
+`alpha_constraint` + +Constraint for the weights. +
+`shared_axes` + +The axes along which to share learnable +parameters for the activation function. +For example, if the incoming feature maps +are from a 2D convolution +with output shape `(batch, height, width, channels)`, +and you wish to share parameters across space +so that each filter only has one set of parameters, +set `shared_axes=[1, 2]`. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/Permute.md b/site/en/api_docs/python/tf/keras/layers/Permute.md new file mode 100644 index 00000000000..e3cf0ba1411 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Permute.md @@ -0,0 +1,96 @@ +description: Permutes the dimensions of the input according to a given pattern. + +
+ + + + +
+ +# tf.keras.layers.Permute + + + + + + + + + +Permutes the dimensions of the input according to a given pattern. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +Useful for e.g. connecting RNNs and convnets together. + +#### Example: + + + +```python +model = Sequential() +model.add(Permute((2, 1), input_shape=(10, 64))) +# now: model.output_shape == (None, 64, 10) +# note: `None` is the batch dimension +``` + + + + + + + + + + +
+`dims` + +Tuple of integers. Permutation pattern, does not include the +samples dimension. Indexing starts at 1. +For instance, `(2, 1)` permutes the first and second dimensions +of the input. +
+ + + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same as the input shape, but with the dimensions re-ordered according +to the specified pattern. + + diff --git a/site/en/api_docs/python/tf/keras/layers/RNN.md b/site/en/api_docs/python/tf/keras/layers/RNN.md new file mode 100644 index 00000000000..ba8f34bb018 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/RNN.md @@ -0,0 +1,367 @@ +description: Base class for recurrent layers. + +
+ + + + + +
+ +# tf.keras.layers.RNN + + + + + + + + + +Base class for recurrent layers. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn) +for details about the usage of RNN API. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +A RNN cell instance or a list of RNN cell instances. +A RNN cell is a class that has: +- A `call(input_at_t, states_at_t)` method, returning +`(output_at_t, states_at_t_plus_1)`. The call method of the +cell can also take the optional argument `constants`, see +section "Note on passing external constants" below. +- A `state_size` attribute. This can be a single integer +(single state) in which case it is the size of the recurrent +state. This can also be a list/tuple of integers (one size per state). +The `state_size` can also be TensorShape or tuple/list of +TensorShape, to represent high dimension state. +- A `output_size` attribute. This can be a single integer or a +TensorShape, which represent the shape of the output. For backward +compatible reason, if this attribute is not available for the +cell, the value will be inferred by the first element of the +`state_size`. +- A `get_initial_state(inputs=None, batch_size=None, dtype=None)` +method that creates a tensor meant to be fed to `call()` as the +initial state, if the user didn't specify any initial state via other +means. The returned initial state should have a shape of +[batch_size, cell.state_size]. The cell might choose to create a +tensor full of zeros, or full of other values based on the cell's +implementation. +`inputs` is the input tensor to the RNN layer, which should +contain the batch size as its shape[0], and also dtype. Note that +the shape[0] might be `None` during the graph construction. Either +the `inputs` or the pair of `batch_size` and `dtype` are provided. +`batch_size` is a scalar tensor that represents the batch size +of the inputs. `dtype` is tf.DType that represents the dtype of +the inputs. +For backward compatible reason, if this method is not implemented +by the cell, the RNN layer will create a zero filled tensor with the +size of [batch_size, cell.state_size]. +In the case that `cell` is a list of RNN cell instances, the cells +will be stacked on top of each other in the RNN, resulting in an +efficient stacked RNN. +
+`return_sequences` + +Boolean (default `False`). Whether to return the last +output in the output sequence, or the full sequence. +
+`return_state` + +Boolean (default `False`). Whether to return the last state +in addition to the output. +
+`go_backwards` + +Boolean (default `False`). +If True, process the input sequence backwards and return the +reversed sequence. +
+`stateful` + +Boolean (default `False`). If True, the last state +for each sample at index i in a batch will be used as initial +state for the sample of index i in the following batch. +
+`unroll`
+
+Boolean (default `False`).
+If True, the network will be unrolled, else a symbolic loop will be used.
+Unrolling can speed up an RNN, although it tends to be more
+memory-intensive. Unrolling is only suitable for short sequences.
+
+`time_major` + +The shape format of the `inputs` and `outputs` tensors. +If True, the inputs and outputs will be in shape +`(timesteps, batch, ...)`, whereas in the False case, it will be +`(batch, timesteps, ...)`. Using `time_major = True` is a bit more +efficient because it avoids transposes at the beginning and end of the +RNN calculation. However, most TensorFlow data is batch-major, so by +default this function accepts input and emits output in batch-major +form. +
+ + + +#### Call arguments: + + +* `inputs`: Input tensor. +* `mask`: Binary tensor of shape `[batch_size, timesteps]` indicating whether + a given timestep should be masked. +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. This argument is passed to the cell + when calling it. This is for use with cells that use dropout. +* `initial_state`: List of initial state tensors to be passed to the first + call of the cell. +* `constants`: List of constant tensors to be passed to the cell at each + timestep. + + +#### Input shape: + +N-D tensor with shape `[batch_size, timesteps, ...]` or +`[timesteps, batch_size, ...]` when time_major is True. + + + +#### Output shape: + +- If `return_state`: a list of tensors. The first tensor is + the output. The remaining tensors are the last states, + each with shape `[batch_size, state_size]`, where `state_size` could + be a high dimension tensor shape. +- If `return_sequences`: N-D tensor with shape + `[batch_size, timesteps, output_size]`, where `output_size` could + be a high dimension tensor shape, or + `[timesteps, batch_size, output_size]` when `time_major` is True. +- Else, N-D tensor with shape `[batch_size, output_size]`, where + `output_size` could be a high dimension tensor shape. + + + +#### Masking: + +This layer supports masking for input data with a variable number +of timesteps. To introduce masks to your data, +use an [tf.keras.layers.Embedding] layer with the `mask_zero` parameter +set to `True`. + + +Note on using statefulness in RNNs: + You can set RNN layers to be 'stateful', which means that the states + computed for the samples in one batch will be reused as initial states + for the samples in the next batch. This assumes a one-to-one mapping + between samples in different successive batches. + + To enable statefulness: + - Specify `stateful=True` in the layer constructor. + - Specify a fixed batch size for your model, by passing + If sequential model: + `batch_input_shape=(...)` to the first layer in your model. + Else for functional model with 1 or more Input layers: + `batch_shape=(...)` to all the first layers in your model. + This is the expected shape of your inputs + *including the batch size*. + It should be a tuple of integers, e.g. `(32, 10, 100)`. + - Specify `shuffle=False` when calling fit(). + + To reset the states of your model, call `.reset_states()` on either + a specific layer, or on your entire model. + +Note on specifying the initial state of RNNs: + You can specify the initial state of RNN layers symbolically by + calling them with the keyword argument `initial_state`. The value of + `initial_state` should be a tensor or list of tensors representing + the initial state of the RNN layer. + + You can specify the initial state of RNN layers numerically by + calling `reset_states` with the keyword argument `states`. The value of + `states` should be a numpy array or list of numpy arrays representing + the initial state of the RNN layer. + +Note on passing external constants to RNNs: + You can pass "external" constants to the cell using the `constants` + keyword argument of RNN.__call__ (as well as RNN.call) method. This + requires that the `cell.call` method accepts the same keyword argument + `constants`. Such constants can be used to condition the cell + transformation on additional static inputs (not changing over time), + a.k.a. an attention mechanism. + +#### Examples: + + + +```python +# First, let's define a RNN Cell, as a layer subclass. 
+ +class MinimalRNNCell(keras.layers.Layer): + + def __init__(self, units, **kwargs): + self.units = units + self.state_size = units + super(MinimalRNNCell, self).__init__(**kwargs) + + def build(self, input_shape): + self.kernel = self.add_weight(shape=(input_shape[-1], self.units), + initializer='uniform', + name='kernel') + self.recurrent_kernel = self.add_weight( + shape=(self.units, self.units), + initializer='uniform', + name='recurrent_kernel') + self.built = True + + def call(self, inputs, states): + prev_output = states[0] + h = K.dot(inputs, self.kernel) + output = h + K.dot(prev_output, self.recurrent_kernel) + return output, [output] + +# Let's use this cell in a RNN layer: + +cell = MinimalRNNCell(32) +x = keras.Input((None, 5)) +layer = RNN(cell) +y = layer(x) + +# Here's how to use the cell to build a stacked RNN: + +cells = [MinimalRNNCell(32), MinimalRNNCell(64)] +x = keras.Input((None, 5)) +layer = RNN(cells) +y = layer(x) +``` + + + + + + + + + + + + +
+`states` + + +
+ + + +## Methods + +

reset_states

+
+View source
+
+
+
+Reset the recorded states for the stateful RNN layer.
+
+Can only be used when the RNN layer is constructed with `stateful` = `True`.
+Args:
+  states: Numpy arrays that contain the value for the initial state, which
+    will be fed to the cell at the first time step. When the value is None,
+    a zero-filled numpy array will be created based on the cell state size.
+
+
+
+
+
+
+
Raises
+`AttributeError` + +When the RNN layer is not stateful. +
+`ValueError` + +When the batch size of the RNN layer is unknown. +
+`ValueError`
+
+When the input numpy array is not compatible with the RNN
+layer state, either size-wise or dtype-wise.
+
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/ReLU.md b/site/en/api_docs/python/tf/keras/layers/ReLU.md new file mode 100644 index 00000000000..dbf2269206f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/ReLU.md @@ -0,0 +1,128 @@ +description: Rectified Linear Unit activation function. + +
+ + + + +
+ +# tf.keras.layers.ReLU + + + + + + + + + +Rectified Linear Unit activation function. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +With default values, it returns element-wise `max(x, 0)`. + +Otherwise, it follows: + +``` + f(x) = max_value if x >= max_value + f(x) = x if threshold <= x < max_value + f(x) = negative_slope * (x - threshold) otherwise +``` + +#### Usage: + + + +``` +>>> layer = tf.keras.layers.ReLU() +>>> output = layer([-3.0, -1.0, 0.0, 2.0]) +>>> list(output.numpy()) +[0.0, 0.0, 0.0, 2.0] +>>> layer = tf.keras.layers.ReLU(max_value=1.0) +>>> output = layer([-3.0, -1.0, 0.0, 2.0]) +>>> list(output.numpy()) +[0.0, 0.0, 0.0, 1.0] +>>> layer = tf.keras.layers.ReLU(negative_slope=1.0) +>>> output = layer([-3.0, -1.0, 0.0, 2.0]) +>>> list(output.numpy()) +[-3.0, -1.0, 0.0, 2.0] +>>> layer = tf.keras.layers.ReLU(threshold=1.5) +>>> output = layer([-3.0, -1.0, 1.0, 2.0]) +>>> list(output.numpy()) +[0.0, 0.0, 0.0, 2.0] +``` + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the batch axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as the input. + + + + + + + + + + + + + + + + + + +
+`max_value` + +Float >= 0. Maximum activation value. Defaults to None, which +means unlimited. +
+`negative_slope` + +Float >= 0. Negative slope coefficient. Defaults to 0. +
+`threshold` + +Float. Threshold value for thresholded activation. Defaults to 0. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/RepeatVector.md b/site/en/api_docs/python/tf/keras/layers/RepeatVector.md new file mode 100644 index 00000000000..ff5268d0e53 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/RepeatVector.md @@ -0,0 +1,92 @@ +description: Repeats the input n times. + +
+ + + + +
+ +# tf.keras.layers.RepeatVector + + + + + + + + + +Repeats the input n times. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + +#### Example: + + + +```python +model = Sequential() +model.add(Dense(32, input_dim=32)) +# now: model.output_shape == (None, 32) +# note: `None` is the batch dimension + +model.add(RepeatVector(3)) +# now: model.output_shape == (None, 3, 32) +``` + + + + + + + + + + +
+`n` + +Integer, repetition factor. +
+ + + +#### Input shape: + +2D tensor of shape `(num_samples, features)`. + + + +#### Output shape: + +3D tensor of shape `(num_samples, n, features)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/Reshape.md b/site/en/api_docs/python/tf/keras/layers/Reshape.md new file mode 100644 index 00000000000..f02e22a1d9c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Reshape.md @@ -0,0 +1,117 @@ +description: Layer that reshapes inputs into the given shape. + +
+ + + + +
+ +# tf.keras.layers.Reshape + + + + + + + + + +Layer that reshapes inputs into the given shape. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + +#### Input shape: + +Arbitrary, although all dimensions in the input shape must be known/fixed. +Use the keyword argument `input_shape` (tuple of integers, does not include +the samples/batch size axis) when using this layer as the first layer +in a model. + + + +#### Output shape: + +`(batch_size,) + target_shape` + + + +#### Example: + + + +``` +>>> # as first layer in a Sequential model +>>> model = tf.keras.Sequential() +>>> model.add(tf.keras.layers.Reshape((3, 4), input_shape=(12,))) +>>> # model.output_shape == (None, 3, 4), `None` is the batch size. +>>> model.output_shape +(None, 3, 4) +``` + +``` +>>> # as intermediate layer in a Sequential model +>>> model.add(tf.keras.layers.Reshape((6, 2))) +>>> model.output_shape +(None, 6, 2) +``` + +``` +>>> # also supports shape inference using `-1` as dimension +>>> model.add(tf.keras.layers.Reshape((-1, 2, 2))) +>>> model.output_shape +(None, None, 2, 2) +``` + + + + + + + + + + + + + +
+`target_shape` + +Target shape. Tuple of integers, does not include the +samples dimension (batch size). +
+`**kwargs` + +Any additional layer keyword arguments. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/SeparableConv1D.md b/site/en/api_docs/python/tf/keras/layers/SeparableConv1D.md new file mode 100644 index 00000000000..ddba11ce7f9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/SeparableConv1D.md @@ -0,0 +1,304 @@ +description: Depthwise separable 1D convolution. + +
+ + + + +
+ +# tf.keras.layers.SeparableConv1D + + + + + + + + + +Depthwise separable 1D convolution. + + + + + + + + + +This layer performs a depthwise convolution that acts separately on +channels, followed by a pointwise convolution that mixes channels. +If `use_bias` is True and a bias initializer is provided, +it adds a bias vector to the output. +It then optionally applies an activation function to produce the final output. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
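+As a rough usage sketch (editorial addition, not from the generated reference;
+the shapes and hyperparameters are arbitrary assumptions):
+
+```python
+import tensorflow as tf
+
+# A batch of 4 sequences, each 100 steps long with 8 channels.
+x = tf.random.normal((4, 100, 8))
+layer = tf.keras.layers.SeparableConv1D(
+    filters=32, kernel_size=3, padding='same', activation='relu')
+y = layer(x)
+print(y.shape)  # (4, 100, 32) -- 'same' padding preserves the step dimension
+```
+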
+`filters` + +Integer, the dimensionality of the output space (i.e. the number +of filters in the convolution). +
+`kernel_size` + +A single integer specifying the spatial +dimensions of the filters. +
+`strides` + +A single integer specifying the strides +of the convolution. +Specifying any `stride` value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +One of `"valid"`, `"same"`, or `"causal"` (case-insensitive). +
+`data_format` + +A string, one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, length, channels)` while `channels_first` corresponds to +inputs with shape `(batch_size, channels, length)`. +
+`dilation_rate` + +A single integer, specifying +the dilation rate to use for dilated convolution. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any stride value != 1. +
+`depth_multiplier` + +The number of depthwise convolution output channels for +each input channel. The total number of depthwise convolution output +channels will be equal to `num_filters_in * depth_multiplier`. +
+`activation` + +Activation function to use. +If you don't specify anything, no activation is applied ( +see keras.activations). +
+`use_bias` + +Boolean, whether the layer uses a bias. +
+`depthwise_initializer` + +An initializer for the depthwise convolution kernel ( +see keras.initializers). +
+`pointwise_initializer` + +An initializer for the pointwise convolution kernel ( +see keras.initializers). +
+`bias_initializer` + +An initializer for the bias vector. If None, the default +initializer will be used (see keras.initializers). +
+`depthwise_regularizer` + +Optional regularizer for the depthwise +convolution kernel (see keras.regularizers). +
+`pointwise_regularizer` + +Optional regularizer for the pointwise +convolution kernel (see keras.regularizers). +
+`bias_regularizer` + +Optional regularizer for the bias vector ( +see keras.regularizers). +
+`activity_regularizer` + +Optional regularizer function for the output ( +see keras.regularizers). +
+`depthwise_constraint` + +Optional projection function to be applied to the +depthwise kernel after being updated by an `Optimizer` (e.g. used for +norm constraints or value constraints for layer weights). The function +must take as input the unprojected variable and must return the +projected variable (which must have the same shape). Constraints are +not safe to use when doing asynchronous distributed training ( +see keras.constraints). +
+`pointwise_constraint` + +Optional projection function to be applied to the +pointwise kernel after being updated by an `Optimizer` ( +see keras.constraints). +
+`bias_constraint` + +Optional projection function to be applied to the +bias after being updated by an `Optimizer` ( +see keras.constraints). +
+`trainable` + +Boolean, if `True` the weights of this layer will be marked as +trainable (and listed in `layer.trainable_weights`). +
+`name` + +A string, the name of the layer. +
+ + + +#### Input shape: + +3D tensor with shape: +`(batch_size, channels, steps)` if data_format='channels_first' +or 3D tensor with shape: +`(batch_size, steps, channels)` if data_format='channels_last'. + + + +#### Output shape: + +3D tensor with shape: +`(batch_size, filters, new_steps)` if data_format='channels_first' +or 3D tensor with shape: +`(batch_size, new_steps, filters)` if data_format='channels_last'. +`new_steps` value might have changed due to padding or strides. + + + + + + + + + + +
+A tensor of rank 3 representing +`activation(separableconv1d(inputs, kernel) + bias)`. +
+ + + + + + + + + + + + +
+`ValueError` + +when both `strides` > 1 and `dilation_rate` > 1. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/SeparableConv2D.md b/site/en/api_docs/python/tf/keras/layers/SeparableConv2D.md new file mode 100644 index 00000000000..042dd6c3378 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/SeparableConv2D.md @@ -0,0 +1,307 @@ +description: Depthwise separable 2D convolution. + +
+ + + + +
+ +# tf.keras.layers.SeparableConv2D + + + + + + + + + +Depthwise separable 2D convolution. + + + + + + + + + +Separable convolutions consist in first performing +a depthwise spatial convolution +(which acts on each input channel separately) +followed by a pointwise convolution which mixes together the resulting +output channels. The `depth_multiplier` argument controls how many +output channels are generated per input channel in the depthwise step. + +Intuitively, separable convolutions can be understood as +a way to factorize a convolution kernel into two smaller kernels, +or as an extreme version of an Inception block. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
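+As a rough usage sketch (editorial addition, not from the generated reference;
+the shapes and hyperparameters are arbitrary assumptions):
+
+```python
+import tensorflow as tf
+
+# A batch of 4 RGB images of size 28x28.
+x = tf.random.normal((4, 28, 28, 3))
+layer = tf.keras.layers.SeparableConv2D(
+    filters=16, kernel_size=3, depth_multiplier=2, activation='relu')
+y = layer(x)
+print(y.shape)  # (4, 26, 26, 16) -- 'valid' padding shrinks 28 to 26
+```
+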
+`filters` + +Integer, the dimensionality of the output space +(i.e. the number of output filters in the convolution). +
+`kernel_size` + +An integer or tuple/list of 2 integers, specifying the +height and width of the 2D convolution window. +Can be a single integer to specify the same value for +all spatial dimensions. +
+`strides` + +An integer or tuple/list of 2 integers, +specifying the strides of the convolution along the height and width. +Can be a single integer to specify the same value for +all spatial dimensions. +Specifying any stride value != 1 is incompatible with specifying +any `dilation_rate` value != 1. +
+`padding` + +one of `"valid"` or `"same"` (case-insensitive). +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch_size, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+`dilation_rate` + +An integer or tuple/list of 2 integers, specifying +the dilation rate to use for dilated convolution. +Currently, specifying any `dilation_rate` value != 1 is +incompatible with specifying any `strides` value != 1. +
+`depth_multiplier` + +The number of depthwise convolution output channels +for each input channel. +The total number of depthwise convolution output +channels will be equal to `filters_in * depth_multiplier`. +
+`activation` + +Activation function to use. +If you don't specify anything, no activation is applied ( +see keras.activations). +
+`use_bias` + +Boolean, whether the layer uses a bias vector. +
+`depthwise_initializer` + +Initializer for the depthwise kernel matrix ( +see keras.initializers). +
+`pointwise_initializer` + +Initializer for the pointwise kernel matrix ( +see keras.initializers). +
+`bias_initializer` + +Initializer for the bias vector ( +see keras.initializers). +
+`depthwise_regularizer` + +Regularizer function applied to +the depthwise kernel matrix (see keras.regularizers). +
+`pointwise_regularizer` + +Regularizer function applied to +the pointwise kernel matrix (see keras.regularizers). +
+`bias_regularizer` + +Regularizer function applied to the bias vector ( +see keras.regularizers). +
+`activity_regularizer` + +Regularizer function applied to +the output of the layer (its "activation") ( +see keras.regularizers). +
+`depthwise_constraint` + +Constraint function applied to +the depthwise kernel matrix ( +see keras.constraints). +
+`pointwise_constraint` + +Constraint function applied to +the pointwise kernel matrix ( +see keras.constraints). +
+`bias_constraint` + +Constraint function applied to the bias vector ( +see keras.constraints). +
+ + + +#### Input shape: + +4D tensor with shape: +`(batch_size, channels, rows, cols)` if data_format='channels_first' +or 4D tensor with shape: +`(batch_size, rows, cols, channels)` if data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: +`(batch_size, filters, new_rows, new_cols)` if data_format='channels_first' +or 4D tensor with shape: +`(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'. +`rows` and `cols` values might have changed due to padding. + + + + + + + + + + + +
+A tensor of rank 4 representing +`activation(separableconv2d(inputs, kernel) + bias)`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `padding` is "causal". +
+`ValueError` + +when both `strides` > 1 and `dilation_rate` > 1. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/SimpleRNN.md b/site/en/api_docs/python/tf/keras/layers/SimpleRNN.md new file mode 100644 index 00000000000..01dbe55ebbc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/SimpleRNN.md @@ -0,0 +1,441 @@ +description: Fully-connected RNN where the output is to be fed back to input. + +
+ + + + + +
+ +# tf.keras.layers.SimpleRNN + + + + + + + + + +Fully-connected RNN where the output is to be fed back to input. + +Inherits From: [`RNN`](../../../tf/keras/layers/RNN.md) + + + + + + + + + +See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn) +for details about the usage of RNN API. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`units` + +Positive integer, dimensionality of the output space. +
+`activation` + +Activation function to use. +Default: hyperbolic tangent (`tanh`). +If you pass None, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`use_bias` + +Boolean (default `True`), whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, +used for the linear transformation of the inputs. Default: +`glorot_uniform`. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` +weights matrix, used for the linear transformation of the recurrent state. +Default: `orthogonal`. +
+`bias_initializer` + +Initializer for the bias vector. Default: `zeros`. +
+`kernel_regularizer` + +Regularizer function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_regularizer` + +Regularizer function applied to the +`recurrent_kernel` weights matrix. Default: `None`. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. Default: +`None`. +
+`activity_regularizer` + +Regularizer function applied to the output of the +layer (its "activation"). Default: `None`. +
+`kernel_constraint` + +Constraint function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_constraint` + +Constraint function applied to the `recurrent_kernel` +weights matrix. Default: `None`. +
+`bias_constraint` + +Constraint function applied to the bias vector. Default: +`None`. +
+`dropout` + +Float between 0 and 1. +Fraction of the units to drop for the linear transformation of the inputs. +Default: 0. +
+`recurrent_dropout` + +Float between 0 and 1. +Fraction of the units to drop for the linear transformation of the +recurrent state. Default: 0. +
+`return_sequences` + +Boolean. Whether to return the last output +in the output sequence, or the full sequence. Default: `False`. +
+`return_state` + +Boolean. Whether to return the last state +in addition to the output. Default: `False` +
+`go_backwards` + +Boolean (default False). +If True, process the input sequence backwards and return the +reversed sequence. +
+`stateful` + +Boolean (default False). If True, the last state +for each sample at index i in a batch will be used as initial +state for the sample of index i in the following batch. +
+`unroll` + +Boolean (default False). +If True, the network will be unrolled, +else a symbolic loop will be used. +Unrolling can speed-up a RNN, +although it tends to be more memory-intensive. +Unrolling is only suitable for short sequences. +
+ + + +#### Call arguments: + + +* `inputs`: A 3D tensor, with shape `[batch, timesteps, feature]`. +* `mask`: Binary tensor of shape `[batch, timesteps]` indicating whether + a given timestep should be masked. +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. This argument is passed to the cell + when calling it. This is only relevant if `dropout` or + `recurrent_dropout` is used. +* `initial_state`: List of initial state tensors to be passed to the first + call of the cell. + + +#### Examples: + + + +```python +inputs = np.random.random([32, 10, 8]).astype(np.float32) +simple_rnn = tf.keras.layers.SimpleRNN(4) + +output = simple_rnn(inputs) # The output has shape `[32, 4]`. + +simple_rnn = tf.keras.layers.SimpleRNN( + 4, return_sequences=True, return_state=True) + +# whole_sequence_output has shape `[32, 10, 4]`. +# final_state has shape `[32, 4]`. +whole_sequence_output, final_state = simple_rnn(inputs) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`activation` + + +
+`bias_constraint` + + +
+`bias_initializer` + + +
+`bias_regularizer` + + +
+`dropout` + + +
+`kernel_constraint` + + +
+`kernel_initializer` + + +
+`kernel_regularizer` + + +
+`recurrent_constraint` + + +
+`recurrent_dropout` + + +
+`recurrent_initializer` + + +
+`recurrent_regularizer` + + +
+`states` + + +
+`units` + + +
+`use_bias` + + +
+ + + +## Methods + +

reset_states

+ +View source + + + +Reset the recorded states for the stateful RNN layer. + +Can only be used when RNN layer is constructed with `stateful` = `True`. +Args: + states: Numpy arrays that contain the value for the initial state, which + will be fed to the cell at the first time step. When the value is None, + a zero-filled numpy array will be created based on the cell state size. + + + + + + + + + + + + + + + +
Raises
+`AttributeError` + +When the RNN layer is not stateful. +
+`ValueError` + +When the batch size of the RNN layer is unknown. +
+`ValueError` + +When the input numpy array is not compatible with the RNN +layer state, either size wise or dtype wise. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/SimpleRNNCell.md b/site/en/api_docs/python/tf/keras/layers/SimpleRNNCell.md new file mode 100644 index 00000000000..f1a96e8b6cc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/SimpleRNNCell.md @@ -0,0 +1,394 @@ +description: Cell class for SimpleRNN. + +
+ + + + + + + + + +
+ +# tf.keras.layers.SimpleRNNCell + + + + + + + + + +Cell class for SimpleRNN. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn) +for details about the usage of RNN API. + +This class processes one step within the whole time sequence input, whereas +`tf.keras.layers.SimpleRNN` processes the whole sequence. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
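+As an editorial sketch (not from the generated reference; the batch size and
+unit count are arbitrary assumptions), the cell can be driven one timestep at
+a time, which is what `tf.keras.layers.RNN` does internally:
+
+```python
+import tensorflow as tf
+
+cell = tf.keras.layers.SimpleRNNCell(4)
+step_input = tf.random.normal((2, 8))   # one timestep for a batch of 2
+state = [tf.zeros((2, 4))]              # initial state for the cell
+output, state = cell(step_input, states=state)
+print(output.shape)  # (2, 4)
+```
+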
+`units` + +Positive integer, dimensionality of the output space. +
+`activation` + +Activation function to use. +Default: hyperbolic tangent (`tanh`). +If you pass `None`, no activation is applied +(ie. "linear" activation: `a(x) = x`). +
+`use_bias` + +Boolean (default `True`), whether the layer uses a bias vector. +
+`kernel_initializer` + +Initializer for the `kernel` weights matrix, +used for the linear transformation of the inputs. Default: +`glorot_uniform`. +
+`recurrent_initializer` + +Initializer for the `recurrent_kernel` +weights matrix, used for the linear transformation of the recurrent state. +Default: `orthogonal`. +
+`bias_initializer` + +Initializer for the bias vector. Default: `zeros`. +
+`kernel_regularizer` + +Regularizer function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_regularizer` + +Regularizer function applied to the +`recurrent_kernel` weights matrix. Default: `None`. +
+`bias_regularizer` + +Regularizer function applied to the bias vector. Default: +`None`. +
+`kernel_constraint` + +Constraint function applied to the `kernel` weights +matrix. Default: `None`. +
+`recurrent_constraint` + +Constraint function applied to the `recurrent_kernel` +weights matrix. Default: `None`. +
+`bias_constraint` + +Constraint function applied to the bias vector. Default: +`None`. +
+`dropout` + +Float between 0 and 1. Fraction of the units to drop for the linear +transformation of the inputs. Default: 0. +
+`recurrent_dropout` + +Float between 0 and 1. Fraction of the units to drop for +the linear transformation of the recurrent state. Default: 0. +
+ + + +#### Call arguments: + + +* `inputs`: A 2D tensor, with shape of `[batch, feature]`. +* `states`: A 2D tensor with shape of `[batch, units]`, which is the state from + the previous time step. For timestep 0, the initial state provided by the user + will be fed to the cell. +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. Only relevant when `dropout` or + `recurrent_dropout` is used. + + +#### Examples: + + + +```python +inputs = np.random.random([32, 10, 8]).astype(np.float32) +rnn = tf.keras.layers.RNN(tf.keras.layers.SimpleRNNCell(4)) + +output = rnn(inputs) # The output has shape `[32, 4]`. + +rnn = tf.keras.layers.RNN( + tf.keras.layers.SimpleRNNCell(4), + return_sequences=True, + return_state=True) + +# whole_sequence_output has shape `[32, 10, 4]`. +# final_state has shape `[32, 4]`. +whole_sequence_output, final_state = rnn(inputs) +``` + +## Methods + 

get_dropout_mask_for_cell

+ +View source + + + +Get the dropout mask for RNN cell's input. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. It is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, generated or cached based on context. +
+ + + +

get_initial_state

+ +View source + + + + + + +

get_recurrent_dropout_mask_for_cell

+ +View source + + + +Get the recurrent dropout mask for RNN cell. + +It will create mask based on context if there isn't any existing cached +mask. If a new mask is generated, it will update the cache in the cell. + + + + + + + + + + + + + + + + +
Args
+`inputs` + +The input tensor whose shape will be used to generate dropout +mask. +
+`training` + +Boolean tensor, whether it is in training mode; dropout will be +ignored in non-training mode. +
+`count` + +Int, how many dropout masks will be generated. It is useful for cells +that have internal weights fused together. +
+ + + + + + + + + + + +
Returns
+List of mask tensors, generated or cached based on context. +
+ + + +

reset_dropout_mask

+ +View source + + + +Reset the cached dropout masks if any. + +This is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling the cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce unreasonable bias +against certain indices of data within the batch. +

reset_recurrent_dropout_mask

+ +View source + + + +Reset the cached recurrent dropout masks if any. + +This is important for the RNN layer to invoke this in its call() method so +that the cached mask is cleared before calling the cell.call(). The mask +should be cached across the timesteps within the same batch, but shouldn't +be cached between batches. Otherwise it will introduce unreasonable bias +against certain indices of data within the batch. + + + diff --git a/site/en/api_docs/python/tf/keras/layers/Softmax.md b/site/en/api_docs/python/tf/keras/layers/Softmax.md new file mode 100644 index 00000000000..da2ee2ad287 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Softmax.md @@ -0,0 +1,81 @@ +description: Softmax activation function. +
+ + + + +
+ +# tf.keras.layers.Softmax + + + + + + + + + +Softmax activation function. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as the input. + + + + + + + + + + + + +
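+A minimal editorial sketch (not from the generated reference; the input values
+are arbitrary):
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.Softmax()
+output = layer([1.0, 2.0, 3.0])
+print(output.numpy())  # approximately [0.09, 0.245, 0.665], summing to 1
+```
+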
+`axis` + +Integer, axis along which the softmax normalization is applied. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/SpatialDropout1D.md b/site/en/api_docs/python/tf/keras/layers/SpatialDropout1D.md new file mode 100644 index 00000000000..fe040fde88f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/SpatialDropout1D.md @@ -0,0 +1,101 @@ +description: Spatial 1D version of Dropout. + +
+ + + + +
+ +# tf.keras.layers.SpatialDropout1D + + + + + + + + + +Spatial 1D version of Dropout. + +Inherits From: [`Dropout`](../../../tf/keras/layers/Dropout.md) + + + + + + + + + +This version performs the same function as Dropout, however it drops +entire 1D feature maps instead of individual elements. If adjacent frames +within feature maps are strongly correlated (as is normally the case in +early convolution layers) then regular dropout will not regularize the +activations and will otherwise just result in an effective learning rate +decrease. In this case, SpatialDropout1D will help promote independence +between feature maps and should be used instead. + + + + + + + + + + +
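+A minimal editorial sketch (not from the generated reference; the shapes and
+dropout rate are arbitrary) showing that whole channels, not single elements,
+are dropped:
+
+```python
+import tensorflow as tf
+
+x = tf.ones((1, 4, 3))  # (samples, timesteps, channels)
+y = tf.keras.layers.SpatialDropout1D(0.5)(x, training=True)
+# Each of the 3 channels is either all zeros across every timestep or
+# scaled by 1 / (1 - 0.5) = 2.
+print(y.numpy())
+```
+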
+`rate` + +Float between 0 and 1. Fraction of the input units to drop. +
+ + + +#### Call arguments: + + +* `inputs`: A 3D tensor. +* `training`: Python boolean indicating whether the layer should behave in + training mode (adding dropout) or in inference mode (doing nothing). + + +#### Input shape: + +3D tensor with shape: +`(samples, timesteps, channels)` + + + +#### Output shape: + +Same as input. + + + +#### References: + +- [Efficient Object Localization Using Convolutional + Networks](https://arxiv.org/abs/1411.4280) + + diff --git a/site/en/api_docs/python/tf/keras/layers/SpatialDropout2D.md b/site/en/api_docs/python/tf/keras/layers/SpatialDropout2D.md new file mode 100644 index 00000000000..4e34ced0edf --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/SpatialDropout2D.md @@ -0,0 +1,116 @@ +description: Spatial 2D version of Dropout. + +
+ + + + +
+ +# tf.keras.layers.SpatialDropout2D + + + + + + + + + +Spatial 2D version of Dropout. + +Inherits From: [`Dropout`](../../../tf/keras/layers/Dropout.md) + + + + + + + + + +This version performs the same function as Dropout, however it drops +entire 2D feature maps instead of individual elements. If adjacent pixels +within feature maps are strongly correlated (as is normally the case in +early convolution layers) then regular dropout will not regularize the +activations and will otherwise just result in an effective learning rate +decrease. In this case, SpatialDropout2D will help promote independence +between feature maps and should be used instead. + + + + + + + + + + + + + +
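+A minimal editorial sketch (not from the generated reference; the shapes and
+dropout rate are arbitrary) showing that entire feature maps are dropped:
+
+```python
+import tensorflow as tf
+
+x = tf.ones((1, 8, 8, 4))  # (samples, rows, cols, channels)
+y = tf.keras.layers.SpatialDropout2D(0.5)(x, training=True)
+# Each of the 4 feature maps is either all zeros or scaled by 2 everywhere.
+print(tf.reduce_max(y, axis=[1, 2]).numpy())  # per-channel max: 0. or 2.
+```
+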
+`rate` + +Float between 0 and 1. Fraction of the input units to drop. +
+`data_format` + +'channels_first' or 'channels_last'. +In 'channels_first' mode, the channels dimension +(the depth) is at index 1, +in 'channels_last' mode it is at index 3. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Call arguments: + + +* `inputs`: A 4D tensor. +* `training`: Python boolean indicating whether the layer should behave in + training mode (adding dropout) or in inference mode (doing nothing). + + +#### Input shape: + +4D tensor with shape: +`(samples, channels, rows, cols)` if data_format='channels_first' +or 4D tensor with shape: +`(samples, rows, cols, channels)` if data_format='channels_last'. + + + +#### Output shape: + +Same as input. + + + +#### References: + +- [Efficient Object Localization Using Convolutional + Networks](https://arxiv.org/abs/1411.4280) + + diff --git a/site/en/api_docs/python/tf/keras/layers/SpatialDropout3D.md b/site/en/api_docs/python/tf/keras/layers/SpatialDropout3D.md new file mode 100644 index 00000000000..36a29fce831 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/SpatialDropout3D.md @@ -0,0 +1,115 @@ +description: Spatial 3D version of Dropout. + +
+ + + + +
+ +# tf.keras.layers.SpatialDropout3D + + + + + + + + + +Spatial 3D version of Dropout. + +Inherits From: [`Dropout`](../../../tf/keras/layers/Dropout.md) + + + + + + + + + +This version performs the same function as Dropout, however it drops +entire 3D feature maps instead of individual elements. If adjacent voxels +within feature maps are strongly correlated (as is normally the case in +early convolution layers) then regular dropout will not regularize the +activations and will otherwise just result in an effective learning rate +decrease. In this case, SpatialDropout3D will help promote independence +between feature maps and should be used instead. + + + + + + + + + + + + + +
+`rate` + +Float between 0 and 1. Fraction of the input units to drop. +
+`data_format` + +'channels_first' or 'channels_last'. +In 'channels_first' mode, the channels dimension (the depth) +is at index 1, in 'channels_last' mode it is at index 4. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Call arguments: + + +* `inputs`: A 5D tensor. +* `training`: Python boolean indicating whether the layer should behave in + training mode (adding dropout) or in inference mode (doing nothing). + + +#### Input shape: + +5D tensor with shape: +`(samples, channels, dim1, dim2, dim3)` if data_format='channels_first' +or 5D tensor with shape: +`(samples, dim1, dim2, dim3, channels)` if data_format='channels_last'. + + + +#### Output shape: + +Same as input. + + + +#### References: + +- [Efficient Object Localization Using Convolutional + Networks](https://arxiv.org/abs/1411.4280) + + diff --git a/site/en/api_docs/python/tf/keras/layers/StackedRNNCells.md b/site/en/api_docs/python/tf/keras/layers/StackedRNNCells.md new file mode 100644 index 00000000000..d4bb02e16e3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/StackedRNNCells.md @@ -0,0 +1,130 @@ +description: Wrapper allowing a stack of RNN cells to behave as a single cell. + +
+ + + + + +
+ +# tf.keras.layers.StackedRNNCells + + + + + + + + + +Wrapper allowing a stack of RNN cells to behave as a single cell. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +Used to implement efficient stacked RNNs. + + + + + + + + + + +
+`cells` + +List of RNN cell instances. +
+ + + +#### Examples: + + + +```python +batch_size = 3 +sentence_max_length = 5 +n_features = 2 +new_shape = (batch_size, sentence_max_length, n_features) +x = tf.constant(np.reshape(np.arange(30), new_shape), dtype = tf.float32) + +rnn_cells = [tf.keras.layers.LSTMCell(128) for _ in range(2)] +stacked_lstm = tf.keras.layers.StackedRNNCells(rnn_cells) +lstm_layer = tf.keras.layers.RNN(stacked_lstm) + +result = lstm_layer(x) +``` + + + + + + + + + + + + + + + +
+`output_size` + + +
+`state_size` + + +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/Subtract.md b/site/en/api_docs/python/tf/keras/layers/Subtract.md new file mode 100644 index 00000000000..bc6382269b5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Subtract.md @@ -0,0 +1,86 @@ +description: Layer that subtracts two inputs. + +
+ + + + +
+ +# tf.keras.layers.Subtract + + + + + + + + + +Layer that subtracts two inputs. + + + + + + + + + +It takes as input a list of tensors of size 2, +both of the same shape, and returns a single tensor, (inputs[0] - inputs[1]), +also of the same shape. + +#### Examples: + + + +```python + import keras + + input1 = keras.layers.Input(shape=(16,)) + x1 = keras.layers.Dense(8, activation='relu')(input1) + input2 = keras.layers.Input(shape=(32,)) + x2 = keras.layers.Dense(8, activation='relu')(input2) + # Equivalent to subtracted = keras.layers.subtract([x1, x2]) + subtracted = keras.layers.Subtract()([x1, x2]) + + out = keras.layers.Dense(4)(subtracted) + model = keras.models.Model(inputs=[input1, input2], outputs=out) +``` + + + + + + + + + + +
+`**kwargs` + +standard layer keyword arguments. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/ThresholdedReLU.md b/site/en/api_docs/python/tf/keras/layers/ThresholdedReLU.md new file mode 100644 index 00000000000..c4053d18762 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/ThresholdedReLU.md @@ -0,0 +1,90 @@ +description: Thresholded Rectified Linear Unit. + +
+ + + + +
+ +# tf.keras.layers.ThresholdedReLU + + + + + + + + + +Thresholded Rectified Linear Unit. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + +#### It follows: + + + +``` + f(x) = x for x > theta + f(x) = 0 otherwise +``` + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as the input. + + + + + + + + + + + + +
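+A minimal editorial sketch (not from the generated reference; the inputs and
+`theta` value are arbitrary):
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.ThresholdedReLU(theta=1.0)
+output = layer([-1.0, 0.5, 1.0, 2.0])
+print(output.numpy())  # [0. 0. 0. 2.] -- only values strictly above theta pass
+```
+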
+`theta` + +Float >= 0. Threshold location of activation. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/TimeDistributed.md b/site/en/api_docs/python/tf/keras/layers/TimeDistributed.md new file mode 100644 index 00000000000..e1812810e68 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/TimeDistributed.md @@ -0,0 +1,113 @@ +description: This wrapper allows applying a layer to every temporal slice of an input. +
+ + + + +
+ +# tf.keras.layers.TimeDistributed + + + + + + + + + +This wrapper allows applying a layer to every temporal slice of an input. + +Inherits From: [`Wrapper`](../../../tf/keras/layers/Wrapper.md) + + + + + + + + + +The input should be at least 3D, and the dimension of index one +will be considered to be the temporal dimension. + +Consider a batch of 32 video samples, where each sample is a 128x128 RGB image +with `channels_last` data format, across 10 timesteps. +The batch input shape is `(32, 10, 128, 128, 3)`. + +You can then use `TimeDistributed` to apply a `Conv2D` layer to each of the +10 timesteps, independently: + +``` +>>> inputs = tf.keras.Input(shape=(10, 128, 128, 3)) +>>> conv_2d_layer = tf.keras.layers.Conv2D(64, (3, 3)) +>>> outputs = tf.keras.layers.TimeDistributed(conv_2d_layer)(inputs) +>>> outputs.shape +TensorShape([None, 10, 126, 126, 64]) +``` + + + + + + + + + +
+`layer` + +a tf.keras.layers.Layer instance. +
+ + + +#### Call arguments: + + +* `inputs`: Input tensor. +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. This argument is passed to the + wrapped layer (only if the layer supports this argument). +* `mask`: Binary tensor of shape `(samples, timesteps)` indicating whether + a given timestep should be masked. This argument is passed to the + wrapped layer (only if the layer supports this argument). + + + + + + + + + + + +
+`ValueError` + +If not initialized with a tf.keras.layers.Layer instance. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/UpSampling1D.md b/site/en/api_docs/python/tf/keras/layers/UpSampling1D.md new file mode 100644 index 00000000000..cc357ff5038 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/UpSampling1D.md @@ -0,0 +1,104 @@ +description: Upsampling layer for 1D inputs. + +
+ + + + +
+ +# tf.keras.layers.UpSampling1D + + + + + + + + + +Upsampling layer for 1D inputs. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +Repeats each temporal step `size` times along the time axis. + +#### Examples: + + + +``` +>>> input_shape = (2, 2, 3) +>>> x = np.arange(np.prod(input_shape)).reshape(input_shape) +>>> print(x) +[[[ 0 1 2] + [ 3 4 5]] + [[ 6 7 8] + [ 9 10 11]]] +>>> y = tf.keras.layers.UpSampling1D(size=2)(x) +>>> print(y) +tf.Tensor( + [[[ 0 1 2] + [ 0 1 2] + [ 3 4 5] + [ 3 4 5]] + [[ 6 7 8] + [ 6 7 8] + [ 9 10 11] + [ 9 10 11]]], shape=(2, 4, 3), dtype=int64) +``` + + + + + + + + + + +
+`size` + +Integer. Upsampling factor. +
+ + + +#### Input shape: + +3D tensor with shape: `(batch_size, steps, features)`. + + + +#### Output shape: + +3D tensor with shape: `(batch_size, upsampled_steps, features)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/UpSampling2D.md b/site/en/api_docs/python/tf/keras/layers/UpSampling2D.md new file mode 100644 index 00000000000..9a02f607152 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/UpSampling2D.md @@ -0,0 +1,137 @@ +description: Upsampling layer for 2D inputs. + +
+ + + + +
+ +# tf.keras.layers.UpSampling2D + + + + + + + + + +Upsampling layer for 2D inputs. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +Repeats the rows and columns of the data +by `size[0]` and `size[1]` respectively. + +#### Examples: + + + +``` +>>> input_shape = (2, 2, 1, 3) +>>> x = np.arange(np.prod(input_shape)).reshape(input_shape) +>>> print(x) +[[[[ 0 1 2]] + [[ 3 4 5]]] + [[[ 6 7 8]] + [[ 9 10 11]]]] +>>> y = tf.keras.layers.UpSampling2D(size=(1, 2))(x) +>>> print(y) +tf.Tensor( + [[[[ 0 1 2] + [ 0 1 2]] + [[ 3 4 5] + [ 3 4 5]]] + [[[ 6 7 8] + [ 6 7 8]] + [[ 9 10 11] + [ 9 10 11]]]], shape=(2, 2, 2, 3), dtype=int64) +``` + + + + + + + + + + + + + + + + +
+`size` + +Int, or tuple of 2 integers. +The upsampling factors for rows and columns. +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch_size, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+`interpolation` + +A string, one of `nearest` or `bilinear`. +
+ + + +#### Input shape: + +4D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, rows, cols, channels)` +- If `data_format` is `"channels_first"`: + `(batch_size, channels, rows, cols)` + + + +#### Output shape: + +4D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, upsampled_rows, upsampled_cols, channels)` +- If `data_format` is `"channels_first"`: + `(batch_size, channels, upsampled_rows, upsampled_cols)` + + diff --git a/site/en/api_docs/python/tf/keras/layers/UpSampling3D.md b/site/en/api_docs/python/tf/keras/layers/UpSampling3D.md new file mode 100644 index 00000000000..9dbf744e7e9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/UpSampling3D.md @@ -0,0 +1,117 @@ +description: Upsampling layer for 3D inputs. + +
+ + + + +
+ +# tf.keras.layers.UpSampling3D + + + + + + + + + +Upsampling layer for 3D inputs. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +Repeats the 1st, 2nd and 3rd dimensions +of the data by `size[0]`, `size[1]` and `size[2]` respectively. + +#### Examples: + + + +``` +>>> input_shape = (2, 1, 2, 1, 3) +>>> x = tf.constant(1, shape=input_shape) +>>> y = tf.keras.layers.UpSampling3D(size=2)(x) +>>> print(y.shape) +(2, 2, 4, 2, 3) +``` + + + + + + + + + + + + + +
+`size` + +Int, or tuple of 3 integers. +The upsampling factors for dim1, dim2 and dim3. +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +while `channels_first` corresponds to inputs with shape +`(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +5D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, dim1, dim2, dim3, channels)` +- If `data_format` is `"channels_first"`: + `(batch_size, channels, dim1, dim2, dim3)` + + + +#### Output shape: + +5D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, upsampled_dim1, upsampled_dim2, upsampled_dim3, channels)` +- If `data_format` is `"channels_first"`: + `(batch_size, channels, upsampled_dim1, upsampled_dim2, upsampled_dim3)` + + diff --git a/site/en/api_docs/python/tf/keras/layers/Wrapper.md b/site/en/api_docs/python/tf/keras/layers/Wrapper.md new file mode 100644 index 00000000000..bc60d14aeba --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/Wrapper.md @@ -0,0 +1,70 @@ +description: Abstract wrapper base class. + +
+ + + + +
+ +# tf.keras.layers.Wrapper + + + + + + + + + +Abstract wrapper base class. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +Wrappers take another layer and augment it in various ways. +Do not use this class as a layer, it is only an abstract base class. +Two usable wrappers are the `TimeDistributed` and `Bidirectional` wrappers. + + + + + + + + + + +
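+As an editorial sketch (not from the generated reference; the `Passthrough`
+class below is a hypothetical example), a subclass typically overrides `call`
+(and often `build`) and delegates to the wrapped layer:
+
+```python
+import tensorflow as tf
+
+class Passthrough(tf.keras.layers.Wrapper):
+  """A trivial wrapper that simply delegates to the wrapped layer."""
+
+  def call(self, inputs, **kwargs):
+    return self.layer(inputs, **kwargs)
+
+y = Passthrough(tf.keras.layers.Dense(4))(tf.zeros((2, 8)))
+print(y.shape)  # (2, 4)
+```
+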
+`layer` + +The layer to be wrapped. +
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/ZeroPadding1D.md b/site/en/api_docs/python/tf/keras/layers/ZeroPadding1D.md new file mode 100644 index 00000000000..131a5bf1b2c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/ZeroPadding1D.md @@ -0,0 +1,113 @@ +description: Zero-padding layer for 1D input (e.g. temporal sequence). + +
+ + + + +
+ +# tf.keras.layers.ZeroPadding1D + + + + + + + + + +Zero-padding layer for 1D input (e.g. temporal sequence). + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + +#### Examples: + + + +``` +>>> input_shape = (2, 2, 3) +>>> x = np.arange(np.prod(input_shape)).reshape(input_shape) +>>> print(x) +[[[ 0 1 2] + [ 3 4 5]] + [[ 6 7 8] + [ 9 10 11]]] +>>> y = tf.keras.layers.ZeroPadding1D(padding=2)(x) +>>> print(y) +tf.Tensor( + [[[ 0 0 0] + [ 0 0 0] + [ 0 1 2] + [ 3 4 5] + [ 0 0 0] + [ 0 0 0]] + [[ 0 0 0] + [ 0 0 0] + [ 6 7 8] + [ 9 10 11] + [ 0 0 0] + [ 0 0 0]]], shape=(2, 6, 3), dtype=int64) +``` + + + + + + + + + + +
+`padding` + +Int, or tuple of int (length 2), or dictionary. +- If int: +How many zeros to add at the beginning and end of +the padding dimension (axis 1). +- If tuple of int (length 2): +How many zeros to add at the beginning and at the end of +the padding dimension (`(left_pad, right_pad)`). +
+ + + +#### Input shape: + +3D tensor with shape `(batch_size, axis_to_pad, features)` + + + +#### Output shape: + +3D tensor with shape `(batch_size, padded_axis, features)` + + diff --git a/site/en/api_docs/python/tf/keras/layers/ZeroPadding2D.md b/site/en/api_docs/python/tf/keras/layers/ZeroPadding2D.md new file mode 100644 index 00000000000..a6c6cf7c184 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/ZeroPadding2D.md @@ -0,0 +1,140 @@ +description: Zero-padding layer for 2D input (e.g. picture). + +
+ + + + +
+ +# tf.keras.layers.ZeroPadding2D + + + + + + + + + +Zero-padding layer for 2D input (e.g. picture). + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + +This layer can add rows and columns of zeros +at the top, bottom, left and right side of an image tensor. + +#### Examples: + + + +``` +>>> input_shape = (1, 1, 2, 2) +>>> x = np.arange(np.prod(input_shape)).reshape(input_shape) +>>> print(x) +[[[[0 1] + [2 3]]]] +>>> y = tf.keras.layers.ZeroPadding2D(padding=1)(x) +>>> print(y) +tf.Tensor( + [[[[0 0] + [0 0] + [0 0] + [0 0]] + [[0 0] + [0 1] + [2 3] + [0 0]] + [[0 0] + [0 0] + [0 0] + [0 0]]]], shape=(1, 3, 4, 2), dtype=int64) +``` + + + + + + + + + + + + + +
+`padding` + +Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints. +- If int: the same symmetric padding +is applied to height and width. +- If tuple of 2 ints: +interpreted as two different +symmetric padding values for height and width: +`(symmetric_height_pad, symmetric_width_pad)`. +- If tuple of 2 tuples of 2 ints: +interpreted as +`((top_pad, bottom_pad), (left_pad, right_pad))` +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, height, width, channels)` while `channels_first` +corresponds to inputs with shape +`(batch_size, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +4D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, rows, cols, channels)` +- If `data_format` is `"channels_first"`: + `(batch_size, channels, rows, cols)` + + + +#### Output shape: + +4D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, padded_rows, padded_cols, channels)` +- If `data_format` is `"channels_first"`: + `(batch_size, channels, padded_rows, padded_cols)` + + diff --git a/site/en/api_docs/python/tf/keras/layers/ZeroPadding3D.md b/site/en/api_docs/python/tf/keras/layers/ZeroPadding3D.md new file mode 100644 index 00000000000..d15dabac26c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/ZeroPadding3D.md @@ -0,0 +1,128 @@ +description: Zero-padding layer for 3D data (spatial or spatio-temporal). + +
+ + + + +
+ +# tf.keras.layers.ZeroPadding3D + + + + + + + + + +Zero-padding layer for 3D data (spatial or spatio-temporal). + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + +#### Examples: + + + +``` +>>> input_shape = (1, 1, 2, 2, 3) +>>> x = np.arange(np.prod(input_shape)).reshape(input_shape) +>>> y = tf.keras.layers.ZeroPadding3D(padding=2)(x) +>>> print(y.shape) +(1, 5, 6, 6, 3) +``` + + + + + + + + + + + + + +
+`padding` + +Int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints. +- If int: the same symmetric padding +is applied to all three spatial dimensions. +- If tuple of 3 ints: +interpreted as three different +symmetric padding values for the three spatial dimensions: +`(symmetric_dim1_pad, symmetric_dim2_pad, symmetric_dim3_pad)`. +- If tuple of 3 tuples of 2 ints: +interpreted as +`((left_dim1_pad, right_dim1_pad), (left_dim2_pad, +right_dim2_pad), (left_dim3_pad, right_dim3_pad))` +
+`data_format` + +A string, +one of `channels_last` (default) or `channels_first`. +The ordering of the dimensions in the inputs. +`channels_last` corresponds to inputs with shape +`(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)` +while `channels_first` corresponds to inputs with shape +`(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+ + + +#### Input shape: + +5D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad, + depth)` +- If `data_format` is `"channels_first"`: + `(batch_size, depth, first_axis_to_pad, second_axis_to_pad, + third_axis_to_pad)` + + + +#### Output shape: + +5D tensor with shape: +- If `data_format` is `"channels_last"`: + `(batch_size, first_padded_axis, second_padded_axis, third_axis_to_pad, + depth)` +- If `data_format` is `"channels_first"`: + `(batch_size, depth, first_padded_axis, second_padded_axis, + third_axis_to_pad)` + + diff --git a/site/en/api_docs/python/tf/keras/layers/add.md b/site/en/api_docs/python/tf/keras/layers/add.md new file mode 100644 index 00000000000..d22c0248b5e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/add.md @@ -0,0 +1,108 @@ +description: Functional interface to the tf.keras.layers.Add layer. + +
+ + +
+ +# tf.keras.layers.add + + + + + + + + + +Functional interface to the tf.keras.layers.Add layer. + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of input tensors (at least 2) with the same shape. +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + + + + + + + + + +
+A tensor as the sum of the inputs. It has the same shape as the inputs. +
+ + + +#### Examples: + + + +``` +>>> input_shape = (2, 3, 4) +>>> x1 = tf.random.normal(input_shape) +>>> x2 = tf.random.normal(input_shape) +>>> y = tf.keras.layers.add([x1, x2]) +>>> print(y.shape) +(2, 3, 4) +``` + +Used in a functional model: + +``` +>>> input1 = tf.keras.layers.Input(shape=(16,)) +>>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1) +>>> input2 = tf.keras.layers.Input(shape=(32,)) +>>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2) +>>> added = tf.keras.layers.add([x1, x2]) +>>> out = tf.keras.layers.Dense(4)(added) +>>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/layers/average.md b/site/en/api_docs/python/tf/keras/layers/average.md new file mode 100644 index 00000000000..8bef6511514 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/average.md @@ -0,0 +1,124 @@ +description: Functional interface to the tf.keras.layers.Average layer. +
+ + +
+ +# tf.keras.layers.average + + + + + + + + + +Functional interface to the tf.keras.layers.Average layer. + + + + + + + + + + +#### Example: + + + +``` +>>> x1 = np.ones((2, 2)) +>>> x2 = np.zeros((2, 2)) +>>> y = tf.keras.layers.Average()([x1, x2]) +>>> y.numpy().tolist() +[[0.5, 0.5], [0.5, 0.5]] +``` + +Usage in a functional model: + +``` +>>> input1 = tf.keras.layers.Input(shape=(16,)) +>>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1) +>>> input2 = tf.keras.layers.Input(shape=(32,)) +>>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2) +>>> avg = tf.keras.layers.Average()([x1, x2]) +>>> out = tf.keras.layers.Dense(4)(avg) +>>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out) +``` + + + + + + + + + + + + + +
+`inputs` + +A list of input tensors (at least 2). +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + + + + + + + + + +
+A tensor, the average of the inputs. +
+ + + + + + + + + + + + +
+`ValueError` + +If there is a shape mismatch between the inputs and the shapes +cannot be broadcasted to match. +
+ diff --git a/site/en/api_docs/python/tf/keras/layers/concatenate.md b/site/en/api_docs/python/tf/keras/layers/concatenate.md new file mode 100644 index 00000000000..ebf021cab49 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/concatenate.md @@ -0,0 +1,110 @@ +description: Functional interface to the Concatenate layer. + +
+ + +
+ +# tf.keras.layers.concatenate + + + + + + + + + +Functional interface to the `Concatenate` layer. + + + + + + + + + +``` +>>> x = np.arange(20).reshape(2, 2, 5) +>>> print(x) +[[[ 0 1 2 3 4] + [ 5 6 7 8 9]] + [[10 11 12 13 14] + [15 16 17 18 19]]] +>>> y = np.arange(20, 30).reshape(2, 1, 5) +>>> print(y) +[[[20 21 22 23 24]] + [[25 26 27 28 29]]] +>>> tf.keras.layers.concatenate([x, y], +... axis=1) + +``` + + + + + + + + + + + + + + + + +
+`inputs` + +A list of input tensors (at least 2). +
+`axis` + +Concatenation axis. +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + + + + + + + + + +
+A tensor, the concatenation of the inputs alongside axis `axis`. +
+ diff --git a/site/en/api_docs/python/tf/keras/layers/deserialize.md b/site/en/api_docs/python/tf/keras/layers/deserialize.md new file mode 100644 index 00000000000..bf01697d662 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/deserialize.md @@ -0,0 +1,83 @@ +description: Instantiates a layer from a config dictionary. + +
+ + +
+ +# tf.keras.layers.deserialize + + + + + + + + + +Instantiates a layer from a config dictionary. + + + + + + + + + + + + + + + + + + + + + + +
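+A minimal editorial sketch (not from the generated reference; the config
+values are arbitrary):
+
+```python
+import tensorflow as tf
+
+config = {'class_name': 'Dense', 'config': {'units': 8, 'activation': 'relu'}}
+layer = tf.keras.layers.deserialize(config)
+print(type(layer).__name__)  # Dense
+```
+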
+`config` + +dict of the form {'class_name': str, 'config': dict} +
+`custom_objects` + +dict mapping class names (or function names) +of custom (non-Keras) objects to class/functions +
+ + + + + + + + + + + +
+Layer instance (may be Model, Sequential, Network, Layer...) +
+ diff --git a/site/en/api_docs/python/tf/keras/layers/dot.md b/site/en/api_docs/python/tf/keras/layers/dot.md new file mode 100644 index 00000000000..ecca301beb9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/dot.md @@ -0,0 +1,100 @@ +description: Functional interface to the Dot layer. + +
+ + +
+ +# tf.keras.layers.dot + + + + + + + + + +Functional interface to the `Dot` layer. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
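+A minimal editorial sketch (not from the generated reference; shapes are
+arbitrary) computing a batched dot product:
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal((2, 3))
+y = tf.random.normal((2, 3))
+z = tf.keras.layers.dot([x, y], axes=1)
+print(z.shape)  # (2, 1) -- one dot product per sample in the batch
+```
+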
+`inputs` + +A list of input tensors (at least 2). +
+`axes` + +Integer or tuple of integers, +axis or axes along which to take the dot product. +
+`normalize` + +Whether to L2-normalize samples along the +dot product axis before taking the dot product. +If set to True, then the output of the dot product +is the cosine proximity between the two samples. +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + + + + + + + + + +
+A tensor, the dot product of the samples from the inputs. +
+ diff --git a/site/en/api_docs/python/tf/keras/layers/experimental.md b/site/en/api_docs/python/tf/keras/layers/experimental.md new file mode 100644 index 00000000000..06ea6a8bb9e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental.md @@ -0,0 +1,29 @@ +description: Public API for tf.keras.layers.experimental namespace. + +
+ + +
+ +# Module: tf.keras.layers.experimental + + + + + + + + + +Public API for tf.keras.layers.experimental namespace. + + + +## Modules + +[`preprocessing`](../../../tf/keras/layers/experimental/preprocessing.md) module: Public API for tf.keras.layers.experimental.preprocessing namespace. + +## Classes + +[`class SyncBatchNormalization`](../../../tf/keras/layers/experimental/SyncBatchNormalization.md): Normalize and scale inputs or activations synchronously across replicas. + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/SyncBatchNormalization.md b/site/en/api_docs/python/tf/keras/layers/experimental/SyncBatchNormalization.md new file mode 100644 index 00000000000..441d61ac0c1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/SyncBatchNormalization.md @@ -0,0 +1,237 @@ +description: Normalize and scale inputs or activations synchronously across replicas. + +
+ + + + +
+ +# tf.keras.layers.experimental.SyncBatchNormalization + + + + + + + + + +Normalize and scale inputs or activations synchronously across replicas. + + + + + + + +Applies batch normalization to activations of the previous layer at each batch +by synchronizing the global batch statistics across all devices that are +training the model. For specific details about batch normalization please +refer to the tf.keras.layers.BatchNormalization layer docs. + +If this layer is used when using tf.distribute strategy to train models +across devices/workers, there will be an allreduce call to aggregate batch +statistics across all replicas at every training step. Without tf.distribute +strategy, this layer behaves as a regular tf.keras.layers.BatchNormalization +layer. + +#### Example usage: + + +``` +strategy = tf.distribute.MirroredStrategy() + +with strategy.scope(): + model = tf.keras.Sequential() + model.add(tf.keras.layers.Dense(16)) + model.add(tf.keras.layers.experimental.SyncBatchNormalization()) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`axis` + +Integer, the axis that should be normalized +(typically the features axis). +For instance, after a `Conv2D` layer with +`data_format="channels_first"`, +set `axis=1` in `BatchNormalization`. +
+`momentum` + +Momentum for the moving average. +
+`epsilon` + +Small float added to variance to avoid dividing by zero. +
+`center` + +If True, add offset of `beta` to normalized tensor. +If False, `beta` is ignored. +
+`scale` + +If True, multiply by `gamma`. +If False, `gamma` is not used. +When the next layer is linear (this also applies to, e.g., `nn.relu`), +this can be disabled, since the scaling +will be done by the next layer. +
+`beta_initializer` + +Initializer for the beta weight. +
+`gamma_initializer` + +Initializer for the gamma weight. +
+`moving_mean_initializer` + +Initializer for the moving mean. +
+`moving_variance_initializer` + +Initializer for the moving variance. +
+`beta_regularizer` + +Optional regularizer for the beta weight. +
+`gamma_regularizer` + +Optional regularizer for the gamma weight. +
+`beta_constraint` + +Optional constraint for the beta weight. +
+`gamma_constraint` + +Optional constraint for the gamma weight. +
+`renorm` + +Whether to use Batch Renormalization +(https://arxiv.org/abs/1702.03275). This adds extra variables during +training. The inference is the same for either value of this parameter. +
+`renorm_clipping` + +A dictionary that may map keys 'rmax', 'rmin', 'dmax' to +scalar `Tensors` used to clip the renorm correction. The correction +`(r, d)` is used as `corrected_value = normalized_value * r + d`, with +`r` clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin, +dmax are set to inf, 0, inf, respectively. +
+`renorm_momentum` + +Momentum used to update the moving means and standard +deviations with renorm. Unlike `momentum`, this affects training +and should be neither too small (which would add noise) nor too large +(which would give stale estimates). Note that `momentum` is still applied +to get the means and variances for inference. +
+`trainable` + +Boolean, if `True` the variables will be marked as trainable. +
+ + + +#### Call arguments: + + +* `inputs`: Input tensor (of any rank). +* `training`: Python boolean indicating whether the layer should behave in + training mode or in inference mode. + - `training=True`: The layer will normalize its inputs using the + mean and variance of the current batch of inputs. + - `training=False`: The layer will normalize its inputs using the + mean and variance of its moving statistics, learned during training. + + +#### Input shape: + +Arbitrary. Use the keyword argument `input_shape` +(tuple of integers, does not include the samples axis) +when using this layer as the first layer in a model. + + + +#### Output shape: + +Same shape as input. + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing.md new file mode 100644 index 00000000000..62aedf1c3d7 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing.md @@ -0,0 +1,49 @@ +description: Public API for tf.keras.layers.experimental.preprocessing namespace. + +
+ + +
+ +# Module: tf.keras.layers.experimental.preprocessing + + + + + + + + + +Public API for tf.keras.layers.experimental.preprocessing namespace. + + + +## Classes + +[`class CenterCrop`](../../../../tf/keras/layers/experimental/preprocessing/CenterCrop.md): Crop the central portion of the images to target height and width. + +[`class Normalization`](../../../../tf/keras/layers/experimental/preprocessing/Normalization.md): Feature-wise normalization of the data. + +[`class PreprocessingLayer`](../../../../tf/keras/layers/experimental/preprocessing/PreprocessingLayer.md): Base class for PreprocessingLayers. + +[`class RandomContrast`](../../../../tf/keras/layers/experimental/preprocessing/RandomContrast.md): Adjust the contrast of an image or images by a random factor. + +[`class RandomCrop`](../../../../tf/keras/layers/experimental/preprocessing/RandomCrop.md): Randomly crop the images to target height and width. + +[`class RandomFlip`](../../../../tf/keras/layers/experimental/preprocessing/RandomFlip.md): Randomly flip each image horizontally and vertically. + +[`class RandomHeight`](../../../../tf/keras/layers/experimental/preprocessing/RandomHeight.md): Randomly vary the height of a batch of images during training. + +[`class RandomRotation`](../../../../tf/keras/layers/experimental/preprocessing/RandomRotation.md): Randomly rotate each image. + +[`class RandomTranslation`](../../../../tf/keras/layers/experimental/preprocessing/RandomTranslation.md): Randomly translate each image during training. + +[`class RandomWidth`](../../../../tf/keras/layers/experimental/preprocessing/RandomWidth.md): Randomly vary the width of a batch of images during training. + +[`class Rescaling`](../../../../tf/keras/layers/experimental/preprocessing/Rescaling.md): Multiply inputs by `scale`. + +[`class Resizing`](../../../../tf/keras/layers/experimental/preprocessing/Resizing.md): Image resizing layer. + +[`class TextVectorization`](../../../../tf/keras/layers/experimental/preprocessing/TextVectorization.md): Text vectorization layer. + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/CenterCrop.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/CenterCrop.md new file mode 100644 index 00000000000..e7c62a1421f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/CenterCrop.md @@ -0,0 +1,97 @@ +description: Crop the central portion of the images to target height and width. + +
+ + + + +
+ +# tf.keras.layers.experimental.preprocessing.CenterCrop + + + + + + + + + +Crop the central portion of the images to target height and width. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + + +#### Input shape: + +4D tensor with shape: +`(samples, height, width, channels)`, data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: +`(samples, target_height, target_width, channels)`. + + +If the input height/width is even and the target height/width is odd (or +inversely), the input image is left-padded by 1 pixel. + + + + + + + + + + + + + + + + +
+`height` + +Integer, the height of the output shape. +
+`width` + +Integer, the width of the output shape. +
+`name` + +A string, the name of the layer. +
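+
+#### Example:
+
+A minimal usage sketch (the image batch below is random and purely
+illustrative):
+
+```python
+import tensorflow as tf
+
+# A batch of 4 RGB images of size 32x32.
+images = tf.random.uniform((4, 32, 32, 3))
+
+crop = tf.keras.layers.experimental.preprocessing.CenterCrop(height=24, width=24)
+print(crop(images).shape)  # (4, 24, 24, 3)
+```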
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization.md new file mode 100644 index 00000000000..284b49ac84f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization.md @@ -0,0 +1,112 @@ +description: Feature-wise normalization of the data. + +
+ + + + + +
+ +# tf.keras.layers.experimental.preprocessing.Normalization + + + + + + + + + +Feature-wise normalization of the data. + + + + + + + +This layer will coerce its inputs into a normal distribution centered around +0 with standard deviation 1. It accomplishes this by precomputing the mean and +variance of the data, and calling (input-mean)/sqrt(var) at runtime. + +What happens in `adapt`: Compute mean and variance of the data and store them + as the layer's weights. `adapt` should be called before `fit`, `evaluate`, + or `predict`. + + + + + + + + + + + + +
+`axis` + +Integer or tuple of integers, the axis or axes that should be +normalized (typically the features axis). We will normalize each element +in the specified axis. The default is '-1' (the innermost axis); 0 (the +batch axis) is not allowed. +
+ + + +## Methods + +

adapt

+ +View source + + + +Fits the state of the preprocessing layer to the data being passed. + + + + + + + + + + + + + + +
Arguments
+`data` + +The data to train on. It can be passed either as a tf.data Dataset, +or as a numpy array. +
+`reset_state` + +Optional argument specifying whether to clear the state of +the layer at the start of the call to `adapt`, or whether to start from +the existing state. Subclasses may choose to throw if reset_state is set +to 'False'. +
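+
+#### Example:
+
+A minimal sketch of `adapt` followed by a call (the data below is illustrative
+only):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+# Three samples with two features each.
+data = np.array([[0., 1.], [2., 3.], [4., 5.]], dtype='float32')
+
+norm = tf.keras.layers.experimental.preprocessing.Normalization()
+norm.adapt(data)   # computes the per-feature mean and variance
+print(norm(data))  # roughly zero mean and unit variance per feature
+```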
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/PreprocessingLayer.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/PreprocessingLayer.md new file mode 100644 index 00000000000..c74c1b6760a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/PreprocessingLayer.md @@ -0,0 +1,98 @@ +description: Base class for PreprocessingLayers. + +
+ + + + + +
+ +# tf.keras.layers.experimental.preprocessing.PreprocessingLayer + + + + + + + + + +Base class for PreprocessingLayers. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + + +## Methods + +

adapt

+ +View source + + + +Fits the state of the preprocessing layer to the data being passed. + + + + + + + + + + + + + + +
Arguments
+`data` + +The data to train on. It can be passed either as a tf.data +Dataset, or as a numpy array. +
+`reset_state` + +Optional argument specifying whether to clear the state of +the layer at the start of the call to `adapt`, or whether to start +from the existing state. This argument may not be relevant to all +preprocessing layers: a subclass of PreprocessingLayer may choose to +throw if 'reset_state' is set to False. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomContrast.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomContrast.md new file mode 100644 index 00000000000..b7ec5540a95 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomContrast.md @@ -0,0 +1,124 @@ +description: Adjust the contrast of an image or images by a random factor. + +
+ + + + +
+ +# tf.keras.layers.experimental.preprocessing.RandomContrast + + + + + + + + + +Adjust the contrast of an image or images by a random factor. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + +Contrast is adjusted independently for each channel of each image during +training. + +For each channel, this layer computes the mean of the image pixels in the +channel and then adjusts each component `x` of each pixel to +`(x - mean) * contrast_factor + mean`. + +#### Input shape: + +4D tensor with shape: +`(samples, height, width, channels)`, data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: +`(samples, height, width, channels)`, data_format='channels_last'. + + + + + + + + + + + + +
+`ValueError` + +If the lower bound is not in [0, 1], or if the upper bound is +negative. +
+ + + + + + + + + + + + + + + + + + + + +
+`factor` + +a positive float represented as fraction of value, or a tuple of +size 2 representing lower and upper bound. When represented as a single +float, lower = upper. The contrast factor will be randomly picked between +[1.0 - lower, 1.0 + upper]. +
+`seed` + +Integer. Used to create a random seed. +
+`name` + +A string, the name of the layer. +
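+
+#### Example:
+
+A minimal usage sketch (random input values, illustrative only):
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform((2, 8, 8, 3))
+
+layer = tf.keras.layers.experimental.preprocessing.RandomContrast(factor=0.2, seed=42)
+# The random contrast adjustment is only applied in training mode.
+out = layer(images, training=True)
+print(out.shape)  # (2, 8, 8, 3)
+```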
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomCrop.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomCrop.md new file mode 100644 index 00000000000..e7a0d52c814 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomCrop.md @@ -0,0 +1,108 @@ +description: Randomly crop the images to target height and width. + +
+ + + + +
+ +# tf.keras.layers.experimental.preprocessing.RandomCrop + + + + + + + + + +Randomly crop the images to target height and width. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + +This layer will crop all the images in the same batch to the same cropping +location. +By default, random cropping is only applied during training. At inference +time, the images will be first rescaled to preserve the shorter side, and +center cropped. If you need to apply random cropping at inference time, +set `training` to True when calling the layer. + +#### Input shape: + +4D tensor with shape: +`(samples, height, width, channels)`, data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: +`(samples, target_height, target_width, channels)`. + + + + + + + + + + + + + + + + + + + + + +
+`height` + +Integer, the height of the output shape. +
+`width` + +Integer, the width of the output shape. +
+`seed` + +Integer. Used to create a random seed. +
+`name` + +A string, the name of the layer. +
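+
+#### Example:
+
+A minimal usage sketch (random input values, illustrative only):
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform((2, 32, 32, 3))
+
+layer = tf.keras.layers.experimental.preprocessing.RandomCrop(height=24, width=24, seed=1)
+print(layer(images, training=True).shape)   # (2, 24, 24, 3), random crop
+print(layer(images, training=False).shape)  # (2, 24, 24, 3), resize + center crop
+```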
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomFlip.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomFlip.md new file mode 100644 index 00000000000..e8699e3bdcc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomFlip.md @@ -0,0 +1,102 @@ +description: Randomly flip each image horizontally and vertically. + +
+ + + + +
+ +# tf.keras.layers.experimental.preprocessing.RandomFlip + + + + + + + + + +Randomly flip each image horizontally and vertically. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + +This layer will flip the images based on the `mode` attribute. +During inference time, the output will be identical to input. Call the layer +with `training=True` to flip the input. + +#### Input shape: + +4D tensor with shape: +`(samples, height, width, channels)`, data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: +`(samples, height, width, channels)`, data_format='channels_last'. + + + + + + + + + + + + + + + + + + + + +
+`mode` + +String indicating which flip mode to use. Can be "horizontal", +"vertical", or "horizontal_and_vertical". Defaults to +"horizontal_and_vertical". +
+`seed` + +Integer. Used to create a random seed. +
+`name` + +A string, the name of the layer. +
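+
+#### Example:
+
+A minimal usage sketch (random input values, illustrative only):
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform((2, 8, 8, 3))
+
+layer = tf.keras.layers.experimental.preprocessing.RandomFlip(mode="horizontal", seed=3)
+flipped = layer(images, training=True)  # flips are only applied in training mode
+print(flipped.shape)  # (2, 8, 8, 3)
+```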
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomHeight.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomHeight.md new file mode 100644 index 00000000000..c3e289eb8bb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomHeight.md @@ -0,0 +1,112 @@ +description: Randomly vary the height of a batch of images during training. + +
+ + + + +
+ +# tf.keras.layers.experimental.preprocessing.RandomHeight + + + + + + + + + +Randomly vary the height of a batch of images during training. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + +Adjusts the height of a batch of images by a random factor. The input +should be a 4-D tensor in the "channels_last" image data format. + +By default, this layer is inactive during inference. + + + + + + + + + + + + + + + + + + + +
+`factor` + +A positive float (fraction of original height), or a tuple of size 2 +representing lower and upper bound for resizing vertically. When +represented as a single float, this value is used for both the upper and +lower bound. For instance, `factor=(0.2, 0.3)` results in an output height +varying in the range `[original + 20%, original + 30%]`. `factor=(-0.2, +0.3)` results in an output height varying in the range `[original - 20%, +original + 30%]`. `factor=0.2` results in an output height varying in the +range `[original - 20%, original + 20%]`. +
+`interpolation` + +String, the interpolation method. Defaults to `bilinear`. +Supports `bilinear`, `nearest`, `bicubic`, `area`, `lanczos3`, `lanczos5`, +`gaussian`, `mitchellcubic` +
+`seed` + +Integer. Used to create a random seed. +
+`name` + +A string, the name of the layer. +
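+
+#### Example:
+
+A minimal usage sketch (random input values, illustrative only):
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform((2, 10, 10, 3))
+
+# Vary the height by up to +/-30% during training.
+layer = tf.keras.layers.experimental.preprocessing.RandomHeight(factor=0.3, seed=9)
+print(layer(images, training=True).shape)  # (2, <random height>, 10, 3)
+```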
+ + + +#### Input shape: + +4D tensor with shape: `(samples, height, width, channels)` + (data_format='channels_last'). + + +#### Output shape: + +4D tensor with shape: `(samples, random_height, width, channels)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomRotation.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomRotation.md new file mode 100644 index 00000000000..2a2e226cfc5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomRotation.md @@ -0,0 +1,155 @@ +description: Randomly rotate each image. + +
+ + + + +
+ +# tf.keras.layers.experimental.preprocessing.RandomRotation + + + + + + + + + +Randomly rotate each image. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + +By default, random rotations are only applied during training. +At inference time, the layer does nothing. If you need to apply random +rotations at inference time, set `training` to True when calling the layer. + +#### Input shape: + +4D tensor with shape: +`(samples, height, width, channels)`, data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: +`(samples, height, width, channels)`, data_format='channels_last'. + + + +#### Input shape: + +4D tensor with shape: `(samples, height, width, channels)`, + data_format='channels_last'. + + +#### Output shape: + +4D tensor with shape: `(samples, height, width, channels)`, + data_format='channels_last'. + + + + + + + + + + + + +
+`ValueError` + +If the lower bound is not in [0, 1], or if the upper bound is +negative. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`factor` + +a positive float represented as fraction of 2pi, or a tuple of size +2 representing lower and upper bound for rotating clockwise and +counter-clockwise. When represented as a single float, lower = upper. +
+`fill_mode` + +Points outside the boundaries of the input are filled according +to the given mode (one of `{'constant', 'reflect', 'wrap'}`). +- *reflect*: `(d c b a | a b c d | d c b a)` +The input is extended by reflecting about the edge of the last pixel. +- *constant*: `(k k k k | a b c d | k k k k)` +The input is extended by filling all values beyond the edge with the +same constant value k = 0. +- *wrap*: `(a b c d | a b c d | a b c d)` +
+`interpolation` + +Interpolation mode. Supported values: "nearest", "bilinear". +
+`seed` + +Integer. Used to create a random seed. +
+`name` + +A string, the name of the layer. +
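+
+#### Example:
+
+A minimal usage sketch (random input values, illustrative only):
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform((2, 16, 16, 3))
+
+# Rotate by up to +/-10% of a full circle (`factor` is a fraction of 2*pi).
+layer = tf.keras.layers.experimental.preprocessing.RandomRotation(factor=0.1, seed=7)
+print(layer(images, training=True).shape)  # (2, 16, 16, 3)
+```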
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomTranslation.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomTranslation.md new file mode 100644 index 00000000000..3d766d7998d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomTranslation.md @@ -0,0 +1,152 @@ +description: Randomly translate each image during training. + +
+ + + + +
+ +# tf.keras.layers.experimental.preprocessing.RandomTranslation + + + + + + + + + +Randomly translate each image during training. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`height_factor` + +A positive float represented as a fraction of the height, or a tuple +of size 2 representing lower and upper bound for shifting vertically. When +represented as a single float, this value is used for both the upper and +lower bound. For instance, `height_factor=(0.2, 0.3)` results in an output +shifted vertically by a random amount in the range `[-20%, +30%]` of the height. +`height_factor=0.2` results in an output shifted vertically by a random amount +in the range `[-20%, +20%]`. +
+`width_factor` + +A positive float represented as a fraction of the width, or a tuple +of size 2 representing lower and upper bound for shifting horizontally. +When represented as a single float, this value is used for both the upper +and lower bound. +
+`fill_mode` + +Points outside the boundaries of the input are filled according +to the given mode (one of `{'constant', 'reflect', 'wrap'}`). +- *reflect*: `(d c b a | a b c d | d c b a)` +The input is extended by reflecting about the edge of the last pixel. +- *constant*: `(k k k k | a b c d | k k k k)` +The input is extended by filling all values beyond the edge with the +same constant value k = 0. +- *wrap*: `(a b c d | a b c d | a b c d)` +The input is extended by wrapping around to the opposite edge. +
+`interpolation` + +Interpolation mode. Supported values: "nearest", "bilinear". +
+`seed` + +Integer. Used to create a random seed. +
+`name` + +A string, the name of the layer. +
+ + + +#### Input shape: + +4D tensor with shape: `(samples, height, width, channels)`, + data_format='channels_last'. + + + +#### Output shape: + +4D tensor with shape: `(samples, height, width, channels)`, + data_format='channels_last'. + + + + + + + + + + + + +
+`ValueError` + +If the lower bound is not in [0, 1], or if the upper bound is +negative. +
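+
+#### Example:
+
+A minimal usage sketch (random input values, illustrative only):
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform((2, 16, 16, 3))
+
+# Shift vertically by up to 10% and horizontally by up to 20% of the image size.
+layer = tf.keras.layers.experimental.preprocessing.RandomTranslation(
+    height_factor=0.1, width_factor=0.2, seed=5)
+print(layer(images, training=True).shape)  # (2, 16, 16, 3)
+```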
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomWidth.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomWidth.md new file mode 100644 index 00000000000..c2c403b7993 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomWidth.md @@ -0,0 +1,114 @@ +description: Randomly vary the width of a batch of images during training. + +
+ + + + +
+ +# tf.keras.layers.experimental.preprocessing.RandomWidth + + + + + + + + + +Randomly vary the width of a batch of images during training. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + +Adjusts the width of a batch of images by a random factor. The input +should be a 4-D tensor in the "channels_last" image data format. + +By default, this layer is inactive during inference. + + + + + + + + + + + + + + + + + + + +
+`factor` + +A positive float (fraction of original width), or a tuple of +size 2 representing lower and upper bound for resizing horizontally. When +represented as a single float, this value is used for both the upper and +lower bound. For instance, `factor=(0.2, 0.3)` results in an output width +varying in the range `[original + 20%, original + 30%]`. `factor=(-0.2, +0.3)` results in an output width varying in the range `[original - 20%, +original + 30%]`. `factor=0.2` results in an output width varying in the +range `[original - 20%, original + 20%]`. +
+`interpolation` + +String, the interpolation method. Defaults to `bilinear`. +Supports `bilinear`, `nearest`, `bicubic`, `area`, `lanczos3`, `lanczos5`, +`gaussian`, `mitchellcubic` +
+`seed` + +Integer. Used to create a random seed. +
+`name` + +A string, the name of the layer. +
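+
+#### Example:
+
+A minimal usage sketch (random input values, illustrative only):
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform((2, 10, 10, 3))
+
+# Vary the width by up to +/-30% during training.
+layer = tf.keras.layers.experimental.preprocessing.RandomWidth(factor=0.3, seed=11)
+print(layer(images, training=True).shape)  # (2, 10, <random width>, 3)
+```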
+ + + +#### Input shape: + +4D tensor with shape: +`(samples, height, width, channels)` (data_format='channels_last'). + + + +#### Output shape: + +4D tensor with shape: +`(samples, height, random_width, channels)`. + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/Rescaling.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/Rescaling.md new file mode 100644 index 00000000000..da6a5809460 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/Rescaling.md @@ -0,0 +1,90 @@ +description: Multiply inputs by scale. +
+ + + + +
+ +# tf.keras.layers.experimental.preprocessing.Rescaling + + + + + + + + + +Multiply inputs by `scale`. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + +For instance, to rescale an input in the `[0, 255]` range +to be in the `[0, 1]` range, you would pass `scale=1./255`. + +The rescaling is applied both during training and inference. + +#### Input shape: + +Arbitrary. + + + +#### Output shape: + +Same as input. + + + + + + + + + + + + + + + +
+`scale` + +Float, the scale to apply to the inputs. +
+`name` + +A string, the name of the layer. +
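+
+#### Example:
+
+A minimal sketch of rescaling `[0, 255]` pixel values into `[0, 1]` (the input
+values are illustrative only):
+
+```python
+import tensorflow as tf
+
+images = tf.constant([[0., 127.5, 255.]])
+
+rescale = tf.keras.layers.experimental.preprocessing.Rescaling(scale=1./255)
+print(rescale(images))  # [[0., 0.5, 1.]]
+```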
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing.md new file mode 100644 index 00000000000..052087cde04 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing.md @@ -0,0 +1,92 @@ +description: Image resizing layer. + +
+ + + + +
+ +# tf.keras.layers.experimental.preprocessing.Resizing + + + + + + + + + +Image resizing layer. + +Inherits From: [`Layer`](../../../../../tf/keras/layers/Layer.md) + + + + + + + + + +Resize the batched image input to target height and width. The input should +be a 4-D tensor in the format of NHWC. + + + + + + + + + + + + + + + + + + + +
+`height` + +Integer, the height of the output shape. +
+`width` + +Integer, the width of the output shape. +
+`interpolation` + +String, the interpolation method. Defaults to `bilinear`. +Supports `bilinear`, `nearest`, `bicubic`, `area`, `lanczos3`, `lanczos5`, +`gaussian`, `mitchellcubic` +
+`name` + +A string, the name of the layer. +
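+
+#### Example:
+
+A minimal usage sketch (random NHWC input, illustrative only):
+
+```python
+import tensorflow as tf
+
+images = tf.random.uniform((2, 50, 40, 3))
+
+resize = tf.keras.layers.experimental.preprocessing.Resizing(height=32, width=32)
+print(resize(images).shape)  # (2, 32, 32, 3)
+```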
+ + + diff --git a/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization.md b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization.md new file mode 100644 index 00000000000..00b1c912930 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization.md @@ -0,0 +1,358 @@ +description: Text vectorization layer. + +
+ + + + + + + +
+ +# tf.keras.layers.experimental.preprocessing.TextVectorization + + + + + + + + + +Text vectorization layer. + + + + + + + +This layer has basic options for managing text in a Keras model. It +transforms a batch of strings (one sample = one string) into either a list of +token indices (one sample = 1D tensor of integer token indices) or a dense +representation (one sample = 1D tensor of float values representing data about +the sample's tokens). + +If desired, the user can call this layer's adapt() method on a dataset. +When this layer is adapted, it will analyze the dataset, determine the +frequency of individual string values, and create a 'vocabulary' from them. +This vocabulary can have unlimited size or be capped, depending on the +configuration options for this layer; if there are more unique values in the +input than the maximum vocabulary size, the most frequent terms will be used +to create the vocabulary. + +The processing of each sample contains the following steps: + 1) standardize each sample (usually lowercasing + punctuation stripping) + 2) split each sample into substrings (usually words) + 3) recombine substrings into tokens (usually ngrams) + 4) index tokens (associate a unique int value with each token) + 5) transform each sample using this index, either into a vector of ints or + a dense float vector. + +Some notes on passing Callables to customize splitting and normalization for +this layer: + 1) Any callable can be passed to this Layer, but if you want to serialize + this object you should only pass functions that are registered Keras + serializables (see tf.keras.utils.register_keras_serializable for more + details). + 2) When using a custom callable for `standardize`, the data received + by the callable will be exactly as passed to this layer. The callable + should return a tensor of the same shape as the input. + 3) When using a custom callable for `split`, the data received by the + callable will have the 1st dimension squeezed out - instead of + `[["string to split"], ["another string to split"]]`, the Callable will + see `["string to split", "another string to split"]`. The callable should + return a Tensor with the first dimension containing the split tokens - + in this example, we should see something like `[["string", "to", "split], + ["another", "string", "to", "split"]]`. This makes the callable site + natively compatible with tf.strings.split(). + +#### Example: + + +This example instantiates a TextVectorization layer that lowercases text, +splits on whitespace, strips punctuation, and outputs integer vocab indices. +``` +max_features = 5000 # Maximum vocab size. +max_len = 40 # Sequence length to pad the outputs to. + +# Create the layer. +vectorize_layer = text_vectorization.TextVectorization( + max_tokens=max_features, + output_mode='int', + output_sequence_length=max_len) + +# Now that the vocab layer has been created, call `adapt` on the text-only +# dataset to create the vocabulary. You don't have to batch, but for large +# datasets this means we're not keeping spare copies of the dataset in memory. +vectorize_layer.adapt(text_dataset.batch(64)) + +# Create the model that uses the vectorize text layer +model = tf.keras.models.Sequential() + +# Start by creating an explicit input layer. It needs to have a shape of (1,) +# (because we need to guarantee that there is exactly one string input per +# batch), and the dtype needs to be 'string'. 
+model.add(tf.keras.Input(shape=(1,), dtype=tf.string)) + +# The first layer in our model is the vectorization layer. After this layer, +# we have a tensor of shape (batch_size, max_len) containing vocab indices. +model.add(vectorize_layer) + +# Next, we add a layer to map those vocab indices into a space of +# dimensionality 'embedding_dims'. Note that we're using max_features+1 here, +# since there's an OOV token that gets added to the vocabulary in +# vectorize_layer. +model.add(tf.keras.layers.Embedding(max_features+1, embedding_dims)) + +# At this point, you have embedded float data representing your tokens, and +# can add whatever other layers you need to create your model. +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`max_tokens` + +The maximum size of the vocabulary for this layer. If None, +there is no cap on the size of the vocabulary. +
+`standardize` + +Optional specification for standardization to apply to the +input text. Values can be None (no standardization), +'lower_and_strip_punctuation' (lowercase and remove punctuation) or a +Callable. Default is 'lower_and_strip_punctuation'. +
+`split` + +Optional specification for splitting the input text. Values can be +None (no splitting), 'whitespace' (split on ASCII whitespace), or a +Callable. The default is 'whitespace'. +
+`ngrams` + +Optional specification for ngrams to create from the possibly-split +input text. Values can be None, an integer or tuple of integers; passing +an integer will create ngrams up to that integer, and passing a tuple of +integers will create ngrams for the specified values in the tuple. Passing +None means that no ngrams will be created. +
+`output_mode` + +Optional specification for the output of the layer. Values can +be "int", "binary", "count" or "tf-idf", configuring the layer as follows: +"int": Outputs integer indices, one integer index per split string +token. +"binary": Outputs a single int array per batch, of either vocab_size or +max_tokens size, containing 1s in all elements where the token mapped +to that index exists at least once in the batch item. +"count": As "binary", but the int array contains a count of the number +of times the token at that index appeared in the batch item. +"tf-idf": As "binary", but the TF-IDF algorithm is applied to find the +value in each token slot. +
+`output_sequence_length` + +Only valid in INT mode. If set, the output will have +its time dimension padded or truncated to exactly `output_sequence_length` +values, resulting in a tensor of shape [batch_size, +output_sequence_length] regardless of how many tokens resulted from the +splitting step. Defaults to None. +
+`pad_to_max_tokens` + +Only valid in "binary", "count", and "tf-idf" modes. If +True, the output will have its feature axis padded to `max_tokens` even if +the number of unique tokens in the vocabulary is less than max_tokens, +resulting in a tensor of shape [batch_size, max_tokens] regardless of +vocabulary size. Defaults to True. +
+ + + +## Methods + +

adapt

+ +View source + + + +Fits the state of the preprocessing layer to the dataset. + +Overrides the default adapt method to apply relevant preprocessing to the +inputs before passing to the combiner. + + + + + + + + + + + + + +
Arguments
+`data` + +The data to train on. It can be passed either as a tf.data Dataset, +or as a numpy array. +
+`reset_state` + +Optional argument specifying whether to clear the state of +the layer at the start of the call to `adapt`. This must be True for +this layer, which does not support repeated calls to `adapt`. +
+ + + +

get_vocabulary

+ +View source + + + + + + +

set_vocabulary

+ +View source + + + +Sets vocabulary (and optionally document frequency) data for this layer. + +This method sets the vocabulary and DF data for this layer directly, instead +of analyzing a dataset through 'adapt'. It should be used whenever the vocab +(and optionally document frequency) information is already known. If +vocabulary data is already present in the layer, this method will either +replace it, if 'append' is set to False, or append to it (if 'append' is set +to True). + + + + + + + + + + + + + + + + + + + +
Arguments
+`vocab` + +An array of string tokens. +
+`df_data` + +An array of document frequency data. Only necessary if the layer +output_mode is TFIDF. +
+`oov_df_value` + +The document frequency of the OOV token. Only necessary if +output_mode is TFIDF. OOV data is optional when appending additional +data in TFIDF mode; if an OOV value is supplied it will overwrite the +existing OOV value. +
+`append` + +Whether to overwrite or append any existing vocabulary data. +
+ + + + + + + + + + + + + + + +
Raises
+`ValueError` + +If there are too many inputs, the inputs do not match, or +input data is missing. +
+`RuntimeError` + +If the vocabulary cannot be set when this function is +called. This happens in "binary", "count", and "tfidf" modes +if "pad_to_max_tokens" is False and the layer itself has already been +called. +
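+
+#### Example:
+
+A minimal sketch of supplying a precomputed vocabulary instead of calling
+`adapt` (the tokens are illustrative, and the default 'int' output mode is
+assumed):
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.experimental.preprocessing.TextVectorization(output_mode='int')
+layer.set_vocabulary(['earth', 'wind', 'and', 'fire'])
+
+print(layer.get_vocabulary())  # padding and OOV tokens followed by the four terms
+print(layer(tf.constant([['earth wind and fire']])))  # integer indices per token
+```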
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/maximum.md b/site/en/api_docs/python/tf/keras/layers/maximum.md new file mode 100644 index 00000000000..f22c3f9fb5d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/maximum.md @@ -0,0 +1,115 @@ +description: Functional interface to compute maximum (element-wise) list of inputs. + +
+ + +
+ +# tf.keras.layers.maximum + + + + + + + + + +Functional interface to compute maximum (element-wise) list of `inputs`. + + + + + + + + + +This is equivalent to the tf.keras.layers.Maximum layer. + +#### For example: + + + +```python +input1 = tf.keras.layers.Input(shape=(16,)) +x1 = tf.keras.layers.Dense(8, activation='relu')(input1) #shape=(None, 8) +input2 = tf.keras.layers.Input(shape=(32,)) +x2 = tf.keras.layers.Dense(8, activation='relu')(input2) #shape=(None, 8) +max_inp=tf.keras.layers.maximum([x1,x2]) #shape=(None, 8) +out = tf.keras.layers.Dense(4)(max_inp) +model = tf.keras.models.Model(inputs=[input1, input2], outputs=out) +``` + + + + + + + + + + + + + +
+`inputs` + +A list of input tensors (at least 2) of same shape. +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + + + + + + + + + +
+A tensor (of same shape as input tensor) with the element-wise +maximum of the inputs. +
+ + + + + + + + + + + + +
+`ValueError` + +If input tensors are of different shape. +
+ diff --git a/site/en/api_docs/python/tf/keras/layers/minimum.md b/site/en/api_docs/python/tf/keras/layers/minimum.md new file mode 100644 index 00000000000..2fb8bdbf4e8 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/minimum.md @@ -0,0 +1,82 @@ +description: Functional interface to the Minimum layer. + +
+ + +
+ +# tf.keras.layers.minimum + + + + + + + + + +Functional interface to the `Minimum` layer. + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of input tensors (at least 2). +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + + + + + + + + + +
+A tensor, the element-wise minimum of the inputs. +
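+
+#### Example:
+
+A minimal sketch of the functional interface (the input values are illustrative
+only):
+
+```python
+import tensorflow as tf
+
+a = tf.constant([[1., 5.], [3., 2.]])
+b = tf.constant([[4., 0.], [3., 7.]])
+
+# Element-wise minimum; the result has the same shape as each input.
+print(tf.keras.layers.minimum([a, b]))  # [[1., 0.], [3., 2.]]
+```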
+ diff --git a/site/en/api_docs/python/tf/keras/layers/multiply.md b/site/en/api_docs/python/tf/keras/layers/multiply.md new file mode 100644 index 00000000000..7d6d2b3e806 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/multiply.md @@ -0,0 +1,82 @@ +description: Functional interface to the Multiply layer. + +
+ + +
+ +# tf.keras.layers.multiply + + + + + + + + + +Functional interface to the `Multiply` layer. + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of input tensors (at least 2). +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + + + + + + + + + +
+A tensor, the element-wise product of the inputs. +
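+
+#### Example:
+
+A minimal sketch of the functional interface (the input values are illustrative
+only):
+
+```python
+import tensorflow as tf
+
+a = tf.constant([[1., 2.], [3., 4.]])
+b = tf.constant([[10., 10.], [0.5, 0.5]])
+
+# Element-wise product of the inputs.
+print(tf.keras.layers.multiply([a, b]))  # [[10., 20.], [1.5, 2.]]
+```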
+ diff --git a/site/en/api_docs/python/tf/keras/layers/serialize.md b/site/en/api_docs/python/tf/keras/layers/serialize.md new file mode 100644 index 00000000000..4daa8140b4b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/serialize.md @@ -0,0 +1,42 @@ +
+ + +
+ +# tf.keras.layers.serialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/layers/subtract.md b/site/en/api_docs/python/tf/keras/layers/subtract.md new file mode 100644 index 00000000000..5130dea5f27 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/layers/subtract.md @@ -0,0 +1,100 @@ +description: Functional interface to the Subtract layer. + +
+ + +
+ +# tf.keras.layers.subtract + + + + + + + + + +Functional interface to the `Subtract` layer. + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of input tensors (exactly 2). +
+`**kwargs` + +Standard layer keyword arguments. +
+ + + + + + + + + + + +
+A tensor, the difference of the inputs. +
+ + + +#### Examples: + + + +```python + import keras + + input1 = keras.layers.Input(shape=(16,)) + x1 = keras.layers.Dense(8, activation='relu')(input1) + input2 = keras.layers.Input(shape=(32,)) + x2 = keras.layers.Dense(8, activation='relu')(input2) + subtracted = keras.layers.subtract([x1, x2]) + + out = keras.layers.Dense(4)(subtracted) + model = keras.models.Model(inputs=[input1, input2], outputs=out) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/losses.md b/site/en/api_docs/python/tf/keras/losses.md new file mode 100644 index 00000000000..8d68d69de5e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses.md @@ -0,0 +1,121 @@ +description: Built-in loss functions. + +
+ + +
+ +# Module: tf.keras.losses + + + + + + + + + +Built-in loss functions. + + + + + +## Classes + +[`class BinaryCrossentropy`](../../tf/keras/losses/BinaryCrossentropy.md): Computes the cross-entropy loss between true labels and predicted labels. + +[`class CategoricalCrossentropy`](../../tf/keras/losses/CategoricalCrossentropy.md): Computes the crossentropy loss between the labels and predictions. + +[`class CategoricalHinge`](../../tf/keras/losses/CategoricalHinge.md): Computes the categorical hinge loss between `y_true` and `y_pred`. + +[`class CosineSimilarity`](../../tf/keras/losses/CosineSimilarity.md): Computes the cosine similarity between `y_true` and `y_pred`. + +[`class Hinge`](../../tf/keras/losses/Hinge.md): Computes the hinge loss between `y_true` and `y_pred`. + +[`class Huber`](../../tf/keras/losses/Huber.md): Computes the Huber loss between `y_true` and `y_pred`. + +[`class KLDivergence`](../../tf/keras/losses/KLDivergence.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`class LogCosh`](../../tf/keras/losses/LogCosh.md): Computes the logarithm of the hyperbolic cosine of the prediction error. + +[`class Loss`](../../tf/keras/losses/Loss.md): Loss base class. + +[`class MeanAbsoluteError`](../../tf/keras/losses/MeanAbsoluteError.md): Computes the mean of absolute difference between labels and predictions. + +[`class MeanAbsolutePercentageError`](../../tf/keras/losses/MeanAbsolutePercentageError.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`class MeanSquaredError`](../../tf/keras/losses/MeanSquaredError.md): Computes the mean of squares of errors between labels and predictions. + +[`class MeanSquaredLogarithmicError`](../../tf/keras/losses/MeanSquaredLogarithmicError.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`class Poisson`](../../tf/keras/losses/Poisson.md): Computes the Poisson loss between `y_true` and `y_pred`. + +[`class Reduction`](../../tf/keras/losses/Reduction.md): Types of loss reduction. + +[`class SparseCategoricalCrossentropy`](../../tf/keras/losses/SparseCategoricalCrossentropy.md): Computes the crossentropy loss between the labels and predictions. + +[`class SquaredHinge`](../../tf/keras/losses/SquaredHinge.md): Computes the squared hinge loss between `y_true` and `y_pred`. + +## Functions + +[`KLD(...)`](../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`MAE(...)`](../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`MAPE(...)`](../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`MSE(...)`](../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`MSLE(...)`](../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`binary_crossentropy(...)`](../../tf/keras/losses/binary_crossentropy.md): Computes the binary crossentropy loss. + +[`categorical_crossentropy(...)`](../../tf/keras/losses/categorical_crossentropy.md): Computes the categorical crossentropy loss. + +[`categorical_hinge(...)`](../../tf/keras/losses/categorical_hinge.md): Computes the categorical hinge loss between `y_true` and `y_pred`. + +[`cosine_similarity(...)`](../../tf/keras/losses/cosine_similarity.md): Computes the cosine similarity between labels and predictions. 
+ +[`deserialize(...)`](../../tf/keras/losses/deserialize.md): Deserializes a serialized loss class/function instance. + +[`get(...)`](../../tf/keras/losses/get.md): Retrieves a Keras loss function. + +[`hinge(...)`](../../tf/keras/losses/hinge.md): Computes the hinge loss between `y_true` and `y_pred`. + +[`kld(...)`](../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`kullback_leibler_divergence(...)`](../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`logcosh(...)`](../../tf/keras/losses/logcosh.md): Logarithm of the hyperbolic cosine of the prediction error. + +[`mae(...)`](../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`mape(...)`](../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`mean_absolute_error(...)`](../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`mean_absolute_percentage_error(...)`](../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`mean_squared_error(...)`](../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`mean_squared_logarithmic_error(...)`](../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`mse(...)`](../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`msle(...)`](../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`poisson(...)`](../../tf/keras/losses/poisson.md): Computes the Poisson loss between y_true and y_pred. + +[`serialize(...)`](../../tf/keras/losses/serialize.md): Serializes loss function or `Loss` instance. + +[`sparse_categorical_crossentropy(...)`](../../tf/keras/losses/sparse_categorical_crossentropy.md): Computes the sparse categorical crossentropy loss. + +[`squared_hinge(...)`](../../tf/keras/losses/squared_hinge.md): Computes the squared hinge loss between `y_true` and `y_pred`. + diff --git a/site/en/api_docs/python/tf/keras/losses/BinaryCrossentropy.md b/site/en/api_docs/python/tf/keras/losses/BinaryCrossentropy.md new file mode 100644 index 00000000000..09e6dc77a36 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/BinaryCrossentropy.md @@ -0,0 +1,302 @@ +description: Computes the cross-entropy loss between true labels and predicted labels. + +
+ + + + + + +
+ +# tf.keras.losses.BinaryCrossentropy + + + + + + + + + +Computes the cross-entropy loss between true labels and predicted labels. + + + + + + + + + +Use this cross-entropy loss when there are only two label classes (assumed to +be 0 and 1). For each example, there should be a single floating-point value +per prediction. + +In the snippet below, each of the four examples has only a single +floating-pointing value, and both `y_pred` and `y_true` have the shape +`[batch_size]`. + +#### Usage: + + + +``` +>>> y_true = [[0., 1.], [0., 0.]] +>>> y_pred = [[0.6, 0.4], [0.4, 0.6]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> bce = tf.keras.losses.BinaryCrossentropy() +>>> bce(y_true, y_pred).numpy() +0.815 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> bce(y_true, y_pred, sample_weight=[1, 0]).numpy() +0.458 +``` + + ``` + >>> # Using 'sum' reduction type. +>>> bce = tf.keras.losses.BinaryCrossentropy( +... reduction=tf.keras.losses.Reduction.SUM) +>>> bce(y_true, y_pred).numpy() +1.630 + ``` + +``` +>>> # Using 'none' reduction type. +>>> bce = tf.keras.losses.BinaryCrossentropy( +... reduction=tf.keras.losses.Reduction.NONE) +>>> bce(y_true, y_pred).numpy() +array([0.916 , 0.714], dtype=float32) +``` + +Usage with the tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.BinaryCrossentropy()) +``` + + + + + + + + + + + + + + + + + + + +
+`from_logits` + +Whether to interpret `y_pred` as a tensor of +[logit](https://en.wikipedia.org/wiki/Logit) values. By default, we +assume that `y_pred` contains probabilities (i.e., values in [0, 1]). +**Note - Using from_logits=True may be more numerically stable.** +
+`label_smoothing` + +Float in [0, 1]. When 0, no smoothing occurs. When > 0, +we compute the loss between the predicted labels and a smoothed version +of the true labels, where the smoothing squeezes the labels towards 0.5. +Larger values of `label_smoothing` correspond to heavier smoothing. +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +(Optional) Name for the op. Defaults to 'binary_crossentropy'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/CategoricalCrossentropy.md b/site/en/api_docs/python/tf/keras/losses/CategoricalCrossentropy.md new file mode 100644 index 00000000000..e787b0de608 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/CategoricalCrossentropy.md @@ -0,0 +1,302 @@ +description: Computes the crossentropy loss between the labels and predictions. + +
+ + + + + + +
+ +# tf.keras.losses.CategoricalCrossentropy + + + + + + + + + +Computes the crossentropy loss between the labels and predictions. + + + + + + + + + +Use this crossentropy loss function when there are two or more label classes. +We expect labels to be provided in a `one_hot` representation. If you want to +provide labels as integers, please use `SparseCategoricalCrossentropy` loss. +There should be `# classes` floating point values per feature. + +In the snippet below, there is `# classes` floating pointing values per +example. The shape of both `y_pred` and `y_true` are +`[batch_size, num_classes]`. + +#### Usage: + + + +``` +>>> y_true = [[0, 1, 0], [0, 0, 1]] +>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> cce = tf.keras.losses.CategoricalCrossentropy() +>>> cce(y_true, y_pred).numpy() +1.177 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy() +0.814 +``` + +``` +>>> # Using 'sum' reduction type. +>>> cce = tf.keras.losses.CategoricalCrossentropy( +... reduction=tf.keras.losses.Reduction.SUM) +>>> cce(y_true, y_pred).numpy() +2.354 +``` + +``` +>>> # Using 'none' reduction type. +>>> cce = tf.keras.losses.CategoricalCrossentropy( +... reduction=tf.keras.losses.Reduction.NONE) +>>> cce(y_true, y_pred).numpy() +array([0.0513, 2.303], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.CategoricalCrossentropy()) +``` + + + + + + + + + + + + + + + + + + + +
+`from_logits` + +Whether `y_pred` is expected to be a logits tensor. By +default, we assume that `y_pred` encodes a probability distribution. +**Note - Using from_logits=True is more numerically stable.** +
+`label_smoothing` + +Float in [0, 1]. When > 0, label values are smoothed, +meaning the confidence on label values is relaxed. For example, +`label_smoothing=0.2` means that we will use a value of `0.1` for label +`0` and `0.9` for label `1`. +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to 'categorical_crossentropy'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/CategoricalHinge.md b/site/en/api_docs/python/tf/keras/losses/CategoricalHinge.md new file mode 100644 index 00000000000..85426399e04 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/CategoricalHinge.md @@ -0,0 +1,276 @@ +description: Computes the categorical hinge loss between y_true and y_pred. + +
+ + + + + + +
+ +# tf.keras.losses.CategoricalHinge + + + + + + + + + +Computes the categorical hinge loss between `y_true` and `y_pred`. + + + + + + + + + +`loss = maximum(neg - pos + 1, 0)` +where `neg = sum(y_true * y_pred)` and `pos = maximum(1 - y_true)` + +#### Usage: + + + +``` +>>> y_true = [[0, 1], [0, 0]] +>>> y_pred = [[0.6, 0.4], [0.4, 0.6]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> h = tf.keras.losses.CategoricalHinge() +>>> h(y_true, y_pred).numpy() +1.4 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> h(y_true, y_pred, sample_weight=[1, 0]).numpy() +0.6 +``` + +``` +>>> # Using 'sum' reduction type. +>>> h = tf.keras.losses.CategoricalHinge( +... reduction=tf.keras.losses.Reduction.SUM) +>>> h(y_true, y_pred).numpy() +2.8 +``` + +``` +>>> # Using 'none' reduction type. +>>> h = tf.keras.losses.CategoricalHinge( +... reduction=tf.keras.losses.Reduction.NONE) +>>> h(y_true, y_pred).numpy() +array([1.2, 1.6], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.CategoricalHinge()) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to 'categorical_hinge'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/CosineSimilarity.md b/site/en/api_docs/python/tf/keras/losses/CosineSimilarity.md new file mode 100644 index 00000000000..7edd482e8bc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/CosineSimilarity.md @@ -0,0 +1,336 @@ +description: Computes the cosine similarity between y_true and y_pred. + +
+ + + + + + +
+
+# tf.keras.losses.CosineSimilarity
+
+
+
+
+
+
+
+
+
+Computes the cosine similarity between `y_true` and `y_pred`.
+
+
+
+
+
+
+
+
+
+`loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`
+
+#### Usage:
+
+
+
+```
+>>> y_true = [[0., 1.], [1., 1.]]
+>>> y_pred = [[1., 0.], [1., 1.]]
+>>> # Using 'auto'/'sum_over_batch_size' reduction type.
+>>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1)
+>>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]
+>>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]
+>>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]
+>>> # loss = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))
+>>> #      = -((0. + 0.) + (0.5 + 0.5)) / 2
+>>> cosine_loss(y_true, y_pred).numpy()
+-0.5
+```
+
+```
+>>> # Calling with 'sample_weight'.
+>>> cosine_loss(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
+-0.0999
+```
+
+```
+>>> # Using 'sum' reduction type.
+>>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,
+... reduction=tf.keras.losses.Reduction.SUM)
+>>> cosine_loss(y_true, y_pred).numpy()
+-0.999
+```
+
+```
+>>> # Using 'none' reduction type.
+>>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,
+... reduction=tf.keras.losses.Reduction.NONE)
+>>> cosine_loss(y_true, y_pred).numpy()
+array([-0., -0.999], dtype=float32)
+```
+
+Usage with the `compile` API:
+
+```python
+model = tf.keras.Model(inputs, outputs)
+model.compile('sgd', loss=tf.keras.losses.CosineSimilarity(axis=1))
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`axis` + +(Optional) Defaults to -1. The dimension along which the cosine +similarity is computed. +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to loss. +Default value is `AUTO`. `AUTO` indicates that the reduction option will +be determined by the usage context. For almost all cases this defaults to +`SUM_OVER_BATCH_SIZE`. +When used with tf.distribute.Strategy, outside of built-in training +loops such as tf.keras `compile` and `fit`, using `AUTO` or +`SUM_OVER_BATCH_SIZE` will raise an error. Please see this custom training +[tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. +
+ + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The loss function to wrap, with signature `fn(y_true, y_pred, +**kwargs)`. +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +(Optional) name for the loss. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
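+
+To make the `axis` and normalization behaviour above concrete, the sketch below
+recomputes the loss with plain NumPy (illustrative only, not part of this API):
+both inputs are L2-normalized along `axis` and the loss is the negative sum of
+their elementwise products, so perfectly aligned vectors give -1 and orthogonal
+vectors give 0.
+
+```python
+import numpy as np
+
+y_true = np.array([[0., 1.], [1., 1.]])
+y_pred = np.array([[1., 0.], [1., 1.]])
+
+def l2_normalize(x, axis):
+    return x / np.linalg.norm(x, axis=axis, keepdims=True)
+
+per_sample = -np.sum(l2_normalize(y_true, 1) * l2_normalize(y_pred, 1), axis=1)
+print(per_sample)         # [-0., -1.]; TF prints -0.999... in float32
+print(per_sample.mean())  # -0.5, the 'auto' result above
+```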

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/Hinge.md b/site/en/api_docs/python/tf/keras/losses/Hinge.md new file mode 100644 index 00000000000..5ab422bc071 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/Hinge.md @@ -0,0 +1,278 @@ +description: Computes the hinge loss between y_true and y_pred. + +
+ + + + + + +
+ +# tf.keras.losses.Hinge + + + + + + + + + +Computes the hinge loss between `y_true` and `y_pred`. + + + + + + + + + +`loss = maximum(1 - y_true * y_pred, 0)` + +`y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are +provided we will convert them to -1 or 1. + +#### Usage: + + + +``` +>>> y_true = [[0., 1.], [0., 0.]] +>>> y_pred = [[0.6, 0.4], [0.4, 0.6]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> h = tf.keras.losses.Hinge() +>>> h(y_true, y_pred).numpy() +1.3 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> h(y_true, y_pred, sample_weight=[1, 0]).numpy() +0.55 +``` + +``` +>>> # Using 'sum' reduction type. +>>> h = tf.keras.losses.Hinge( +... reduction=tf.keras.losses.Reduction.SUM) +>>> h(y_true, y_pred).numpy() +2.6 +``` + +``` +>>> # Using 'none' reduction type. +>>> h = tf.keras.losses.Hinge( +... reduction=tf.keras.losses.Reduction.NONE) +>>> h(y_true, y_pred).numpy() +array([1.1, 1.5], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.Hinge()) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to 'hinge'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
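+
+The label convention above matters: binary 0/1 targets are remapped to -1/1
+before the hinge is applied. The sketch below reproduces the example values with
+plain NumPy (illustrative only, not part of this API).
+
+```python
+import numpy as np
+
+y_true = np.array([[0., 1.], [0., 0.]])      # binary labels
+y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
+
+y_true_signed = 2. * y_true - 1.             # [[-1, 1], [-1, -1]]
+per_sample = np.mean(np.maximum(1. - y_true_signed * y_pred, 0.), axis=-1)
+print(per_sample)          # [1.1, 1.5], the 'none' result above
+print(per_sample.mean())   # 1.3, the 'auto' result above
+```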

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/Huber.md b/site/en/api_docs/python/tf/keras/losses/Huber.md new file mode 100644 index 00000000000..02a7e1961ca --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/Huber.md @@ -0,0 +1,289 @@ +description: Computes the Huber loss between y_true and y_pred. + +
+ + + + + + +
+ +# tf.keras.losses.Huber + + + + + + + + + +Computes the Huber loss between `y_true` and `y_pred`. + + + + + + + + + +For each value x in `error = y_true - y_pred`: + +``` +loss = 0.5 * x^2 if |x| <= d +loss = 0.5 * d^2 + d * (|x| - d) if |x| > d +``` +where d is `delta`. See: https://en.wikipedia.org/wiki/Huber_loss + +#### Usage: + + + +``` +>>> y_true = [[0, 1], [0, 0]] +>>> y_pred = [[0.6, 0.4], [0.4, 0.6]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> h = tf.keras.losses.Huber() +>>> h(y_true, y_pred).numpy() +0.155 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> h(y_true, y_pred, sample_weight=[1, 0]).numpy() +0.09 +``` + +``` +>>> # Using 'sum' reduction type. +>>> h = tf.keras.losses.Huber( +... reduction=tf.keras.losses.Reduction.SUM) +>>> h(y_true, y_pred).numpy() +0.31 +``` + +``` +>>> # Using 'none' reduction type. +>>> h = tf.keras.losses.Huber( +... reduction=tf.keras.losses.Reduction.NONE) +>>> h(y_true, y_pred).numpy() +array([0.18, 0.13], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.Huber()) +``` + + + + + + + + + + + + + + + + +
+`delta` + +A float, the point where the Huber loss function changes from a +quadratic to linear. +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to 'huber_loss'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
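+
+The piecewise definition above can be checked directly: errors with magnitude at
+most `delta` are penalized quadratically, larger errors linearly. The sketch
+below recomputes the example with plain NumPy (illustrative only, not part of
+this API) using the default `delta=1.0`.
+
+```python
+import numpy as np
+
+y_true = np.array([[0., 1.], [0., 0.]])
+y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
+delta = 1.0
+
+x = y_true - y_pred
+quadratic = 0.5 * np.square(x)
+linear = 0.5 * delta**2 + delta * (np.abs(x) - delta)
+per_sample = np.where(np.abs(x) <= delta, quadratic, linear).mean(axis=-1)
+print(per_sample)          # [0.18, 0.13], the 'none' result above
+print(per_sample.mean())   # 0.155, the 'auto' result above
+```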

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/KLD.md b/site/en/api_docs/python/tf/keras/losses/KLD.md new file mode 100644 index 00000000000..af3c093a24b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/KLD.md @@ -0,0 +1,120 @@ +description: Computes Kullback-Leibler divergence loss between y_true and y_pred. + +
+ + +
+ +# tf.keras.losses.KLD + + + + + + + + + +Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + + + + + + + + + +`loss = y_true * log(y_true / y_pred)` + +See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence + +#### Usage: + + + +``` +>>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64) +>>> y_pred = np.random.random(size=(2, 3)) +>>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1) +>>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1) +>>> assert np.array_equal( +... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1)) +``` + + + + + + + + + + + + + +
+`y_true` + +Tensor of true targets. +
+`y_pred` + +Tensor of predicted targets. +
+ + + + + + + + + + + +
+A `Tensor` with loss. +
+ + + + + + + + + + + + +
+`TypeError` + +If `y_true` cannot be cast to the `y_pred.dtype`. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/KLDivergence.md b/site/en/api_docs/python/tf/keras/losses/KLDivergence.md new file mode 100644 index 00000000000..36985a08eff --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/KLDivergence.md @@ -0,0 +1,277 @@ +description: Computes Kullback-Leibler divergence loss between y_true and y_pred. + +
+ + + + + + +
+ +# tf.keras.losses.KLDivergence + + + + + + + + + +Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + + + + + + + + + +`loss = y_true * log(y_true / y_pred)` + +See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence + +#### Usage: + + + +``` +>>> y_true = [[0, 1], [0, 0]] +>>> y_pred = [[0.6, 0.4], [0.4, 0.6]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> kl = tf.keras.losses.KLDivergence() +>>> kl(y_true, y_pred).numpy() +0.458 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> kl(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() +0.366 +``` + +``` +>>> # Using 'sum' reduction type. +>>> kl = tf.keras.losses.KLDivergence( +... reduction=tf.keras.losses.Reduction.SUM) +>>> kl(y_true, y_pred).numpy() +0.916 +``` + +``` +>>> # Using 'none' reduction type. +>>> kl = tf.keras.losses.KLDivergence( +... reduction=tf.keras.losses.Reduction.NONE) +>>> kl(y_true, y_pred).numpy() +array([0.916, -3.08e-06], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.KLDivergence()) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to 'kullback_leibler_divergence'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
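+
+The example values above include the clipping that the Keras implementation
+applies before taking the logarithm: both `y_true` and `y_pred` are clipped to
+`[epsilon, 1]` so that zero entries do not produce infinities. A NumPy sketch
+(illustrative only, not part of this API):
+
+```python
+import numpy as np
+
+y_true = np.array([[0., 1.], [0., 0.]])
+y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
+eps = 1e-7
+
+t = np.clip(y_true, eps, 1.)
+p = np.clip(y_pred, eps, 1.)
+per_sample = np.sum(t * np.log(t / p), axis=-1)
+print(per_sample)          # ~[0.916, -3.1e-06], the 'none' result above
+print(per_sample.mean())   # ~0.458, the 'auto' result above
+```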

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/LogCosh.md b/site/en/api_docs/python/tf/keras/losses/LogCosh.md new file mode 100644 index 00000000000..d55468477e8 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/LogCosh.md @@ -0,0 +1,276 @@ +description: Computes the logarithm of the hyperbolic cosine of the prediction error. + +
+ + + + + + +
+ +# tf.keras.losses.LogCosh + + + + + + + + + +Computes the logarithm of the hyperbolic cosine of the prediction error. + + + + + + + + + +`logcosh = log((exp(x) + exp(-x))/2)`, +where x is the error `y_pred - y_true`. + +#### Usage: + + + +``` +>>> y_true = [[0., 1.], [0., 0.]] +>>> y_pred = [[1., 1.], [0., 0.]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> l = tf.keras.losses.LogCosh() +>>> l(y_true, y_pred).numpy() +0.108 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> l(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() +0.087 +``` + +``` +>>> # Using 'sum' reduction type. +>>> l = tf.keras.losses.LogCosh( +... reduction=tf.keras.losses.Reduction.SUM) +>>> l(y_true, y_pred).numpy() +0.217 +``` + +``` +>>> # Using 'none' reduction type. +>>> l = tf.keras.losses.LogCosh( +... reduction=tf.keras.losses.Reduction.NONE) +>>> l(y_true, y_pred).numpy() +array([0.217, 0.], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.LogCosh()) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to 'logcosh'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
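+
+Keras evaluates `log(cosh(x))` through the algebraically equivalent,
+overflow-safe form `x + softplus(-2x) - log(2)`. The sketch below uses the same
+identity with plain NumPy (illustrative only, not part of this API) and
+reproduces the example values above.
+
+```python
+import numpy as np
+
+y_true = np.array([[0., 1.], [0., 0.]])
+y_pred = np.array([[1., 1.], [0., 0.]])
+
+x = y_pred - y_true
+logcosh = x + np.logaddexp(0., -2. * x) - np.log(2.)   # softplus(-2x) == logaddexp(0, -2x)
+per_sample = logcosh.mean(axis=-1)
+print(per_sample)          # ~[0.217, 0.], the 'none' result above
+print(per_sample.mean())   # ~0.108, the 'auto' result above
+```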

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/Loss.md b/site/en/api_docs/python/tf/keras/losses/Loss.md new file mode 100644 index 00000000000..60485274a03 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/Loss.md @@ -0,0 +1,317 @@ +description: Loss base class. + +
+ + + + + + + +
+ +# tf.keras.losses.Loss + + + + + + + + + +Loss base class. + + + + + + + + + +To be implemented by subclasses: +* `call()`: Contains the logic for loss calculation using `y_true`, `y_pred`. + +Example subclass implementation: +```python +class MeanSquaredError(Loss): + def call(self, y_true, y_pred): + y_pred = ops.convert_to_tensor_v2(y_pred) + y_true = math_ops.cast(y_true, y_pred.dtype) + return K.mean(math_ops.square(y_pred - y_true), axis=-1) +``` + +When used with tf.distribute.Strategy, outside of built-in training loops +such as tf.keras `compile` and `fit`, please use 'SUM' or 'NONE' reduction +types, and reduce losses explicitly in your training loop. Using 'AUTO' or +'SUM_OVER_BATCH_SIZE' will raise an error. + +Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) for more +details on this. + +You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like: +```python +with strategy.scope(): + loss_obj = tf.keras.losses.CategoricalCrossentropy( + reduction=tf.keras.losses.Reduction.NONE) + .... + loss = (tf.reduce_sum(loss_obj(labels, predictions)) * + (1. / global_batch_size)) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. +
+ + + +## Methods + +

call

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+ + + + + + + + + + + +
Returns
+Loss values with the shape `[batch_size, d0, .. dN-1]`. +
+ + + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
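+
+The subclass example in the overview uses TensorFlow-internal helpers (`ops`,
+`math_ops`, `K`). A minimal runnable variant using only the public API might
+look like the sketch below; the class and the 'my_mse' name are illustrative,
+not part of this API.
+
+```python
+import tensorflow as tf
+
+class MyMeanSquaredError(tf.keras.losses.Loss):
+  def call(self, y_true, y_pred):
+    y_pred = tf.convert_to_tensor(y_pred)
+    y_true = tf.cast(y_true, y_pred.dtype)
+    return tf.reduce_mean(tf.math.square(y_pred - y_true), axis=-1)
+
+mse = MyMeanSquaredError(reduction=tf.keras.losses.Reduction.NONE, name='my_mse')
+print(mse([[0., 1.]], [[1., 1.]]).numpy())   # [0.5]
+```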

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/MAE.md b/site/en/api_docs/python/tf/keras/losses/MAE.md new file mode 100644 index 00000000000..3d8d3be16cf --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/MAE.md @@ -0,0 +1,99 @@ +description: Computes the mean absolute error between labels and predictions. + +
+ + +
+ +# tf.keras.losses.MAE + + + + + + + + + +Computes the mean absolute error between labels and predictions. + + + + + + + + + +`loss = mean(abs(y_true - y_pred), axis=-1)` + +#### Usage: + + + +``` +>>> y_true = np.random.randint(0, 2, size=(2, 3)) +>>> y_pred = np.random.random(size=(2, 3)) +>>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> assert np.array_equal( +... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1)) +``` + + + + + + + + + + + + + +
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+ + + + + + + + + + + +
+Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/MAPE.md b/site/en/api_docs/python/tf/keras/losses/MAPE.md new file mode 100644 index 00000000000..1f72065b4bf --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/MAPE.md @@ -0,0 +1,101 @@ +description: Computes the mean absolute percentage error between y_true and y_pred. + +
+ + +
+ +# tf.keras.losses.MAPE + + + + + + + + + +Computes the mean absolute percentage error between `y_true` and `y_pred`. + + + + + + + + + +`loss = 100 * mean(abs(y_true - y_pred) / y_true, axis=-1)` + +#### Usage: + + + +``` +>>> y_true = np.random.random(size=(2, 3)) +>>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero +>>> y_pred = np.random.random(size=(2, 3)) +>>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> assert np.array_equal( +... loss.numpy(), +... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1)) +``` + + + + + + + + + + + + + +
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+ + + + + + + + + + + +
+Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/MSE.md b/site/en/api_docs/python/tf/keras/losses/MSE.md new file mode 100644 index 00000000000..95ae88fc769 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/MSE.md @@ -0,0 +1,102 @@ +description: Computes the mean squared error between labels and predictions. + +
+ + +
+ +# tf.keras.losses.MSE + + + + + + + + + +Computes the mean squared error between labels and predictions. + + + + + + + + + +After computing the squared distance between the inputs, the mean value over +the last dimension is returned. + +`loss = mean(square(y_true - y_pred), axis=-1)` + +#### Usage: + + + +``` +>>> y_true = np.random.randint(0, 2, size=(2, 3)) +>>> y_pred = np.random.random(size=(2, 3)) +>>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> assert np.array_equal( +... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1)) +``` + + + + + + + + + + + + + +
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+ + + + + + + + + + + +
+Mean squared error values. shape = `[batch_size, d0, .. dN-1]`. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/MSLE.md b/site/en/api_docs/python/tf/keras/losses/MSLE.md new file mode 100644 index 00000000000..89429cc89c2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/MSLE.md @@ -0,0 +1,103 @@ +description: Computes the mean squared logarithmic error between y_true and y_pred. + +
+ + +
+ +# tf.keras.losses.MSLE + + + + + + + + + +Computes the mean squared logarithmic error between `y_true` and `y_pred`. + + + + + + + + + +`loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)` + +#### Usage: + + + +``` +>>> y_true = np.random.randint(0, 2, size=(2, 3)) +>>> y_pred = np.random.random(size=(2, 3)) +>>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> y_true = np.maximum(y_true, 1e-7) +>>> y_pred = np.maximum(y_pred, 1e-7) +>>> assert np.array_equal( +... loss.numpy(), +... np.mean( +... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1)) +``` + + + + + + + + + + + + + +
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+ + + + + + + + + + + +
+Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/MeanAbsoluteError.md b/site/en/api_docs/python/tf/keras/losses/MeanAbsoluteError.md new file mode 100644 index 00000000000..422fe1aca71 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/MeanAbsoluteError.md @@ -0,0 +1,275 @@ +description: Computes the mean of absolute difference between labels and predictions. + +
+ + + + + + +
+ +# tf.keras.losses.MeanAbsoluteError + + + + + + + + + +Computes the mean of absolute difference between labels and predictions. + + + + + + + + + +`loss = abs(y_true - y_pred)` + +#### Usage: + + + +``` +>>> y_true = [[0., 1.], [0., 0.]] +>>> y_pred = [[1., 1.], [1., 0.]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> mae = tf.keras.losses.MeanAbsoluteError() +>>> mae(y_true, y_pred).numpy() +0.5 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> mae(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() +0.25 +``` + +``` +>>> # Using 'sum' reduction type. +>>> mae = tf.keras.losses.MeanAbsoluteError( +... reduction=tf.keras.losses.Reduction.SUM) +>>> mae(y_true, y_pred).numpy() +1.0 +``` + +``` +>>> # Using 'none' reduction type. +>>> mae = tf.keras.losses.MeanAbsoluteError( +... reduction=tf.keras.losses.Reduction.NONE) +>>> mae(y_true, y_pred).numpy() +array([0.5, 0.5], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.MeanAbsoluteError()) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to 'mean_absolute_error'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
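+
+To make the `sample_weight` arithmetic above concrete: with the default
+'auto'/'sum_over_batch_size' reduction, the weighted per-sample losses are summed
+and divided by the number of per-sample values, not by the sum of the weights.
+A NumPy sketch (illustrative only, not part of this API):
+
+```python
+import numpy as np
+
+y_true = np.array([[0., 1.], [0., 0.]])
+y_pred = np.array([[1., 1.], [1., 0.]])
+sample_weight = np.array([0.7, 0.3])
+
+per_sample = np.mean(np.abs(y_true - y_pred), axis=-1)   # [0.5, 0.5]
+weighted = per_sample * sample_weight                    # [0.35, 0.15]
+print(weighted.sum() / per_sample.size)                  # 0.25, as in the example above
+```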

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/MeanAbsolutePercentageError.md b/site/en/api_docs/python/tf/keras/losses/MeanAbsolutePercentageError.md new file mode 100644 index 00000000000..56d8128fe80 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/MeanAbsolutePercentageError.md @@ -0,0 +1,276 @@ +description: Computes the mean absolute percentage error between y_true and y_pred. + +
+ + + + + + +
+ +# tf.keras.losses.MeanAbsolutePercentageError + + + + + + + + + +Computes the mean absolute percentage error between `y_true` and `y_pred`. + + + + + + + + + +`loss = 100 * abs(y_true - y_pred) / y_true` + +#### Usage: + + + +``` +>>> y_true = [[2., 1.], [2., 3.]] +>>> y_pred = [[1., 1.], [1., 0.]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> mape = tf.keras.losses.MeanAbsolutePercentageError() +>>> mape(y_true, y_pred).numpy() +50. +``` + +``` +>>> # Calling with 'sample_weight'. +>>> mape(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() +20. +``` + +``` +>>> # Using 'sum' reduction type. +>>> mape = tf.keras.losses.MeanAbsolutePercentageError( +... reduction=tf.keras.losses.Reduction.SUM) +>>> mape(y_true, y_pred).numpy() +100. +``` + +``` +>>> # Using 'none' reduction type. +>>> mape = tf.keras.losses.MeanAbsolutePercentageError( +... reduction=tf.keras.losses.Reduction.NONE) +>>> mape(y_true, y_pred).numpy() +array([25., 75.], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.MeanAbsolutePercentageError()) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to +'mean_absolute_percentage_error'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
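+
+Because the formula divides by `y_true`, the Keras implementation guards the
+denominator with `maximum(abs(y_true), epsilon)` so that zero targets do not
+produce infinities. A NumPy sketch reproducing the example values (illustrative
+only, not part of this API):
+
+```python
+import numpy as np
+
+y_true = np.array([[2., 1.], [2., 3.]])
+y_pred = np.array([[1., 1.], [1., 0.]])
+eps = 1e-7
+
+ratio = np.abs(y_true - y_pred) / np.maximum(np.abs(y_true), eps)
+per_sample = 100. * ratio.mean(axis=-1)
+print(per_sample)          # [25., 75.], the 'none' result above
+print(per_sample.mean())   # 50., the 'auto' result above
+```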

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/MeanSquaredError.md b/site/en/api_docs/python/tf/keras/losses/MeanSquaredError.md new file mode 100644 index 00000000000..ec8b00240e1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/MeanSquaredError.md @@ -0,0 +1,275 @@ +description: Computes the mean of squares of errors between labels and predictions. + +
+ + + + + + +
+ +# tf.keras.losses.MeanSquaredError + + + + + + + + + +Computes the mean of squares of errors between labels and predictions. + + + + + + + + + +`loss = square(y_true - y_pred)` + +#### Usage: + + + +``` +>>> y_true = [[0., 1.], [0., 0.]] +>>> y_pred = [[1., 1.], [1., 0.]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> mse = tf.keras.losses.MeanSquaredError() +>>> mse(y_true, y_pred).numpy() +0.5 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> mse(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() +0.25 +``` + +``` +>>> # Using 'sum' reduction type. +>>> mse = tf.keras.losses.MeanSquaredError( +... reduction=tf.keras.losses.Reduction.SUM) +>>> mse(y_true, y_pred).numpy() +1.0 +``` + +``` +>>> # Using 'none' reduction type. +>>> mse = tf.keras.losses.MeanSquaredError( +... reduction=tf.keras.losses.Reduction.NONE) +>>> mse(y_true, y_pred).numpy() +array([0.5, 0.5], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.MeanSquaredError()) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to 'mean_squared_error'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
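+
+The reduction options above are related in a simple way: 'none' returns the
+per-sample losses, 'sum' adds them up, and 'sum_over_batch_size' (the 'auto'
+default here) divides that sum by the number of per-sample values. A short
+sketch using the same data as the examples above:
+
+```python
+import tensorflow as tf
+
+y_true = [[0., 1.], [0., 0.]]
+y_pred = [[1., 1.], [1., 0.]]
+
+per_sample = tf.keras.losses.MeanSquaredError(
+    reduction=tf.keras.losses.Reduction.NONE)(y_true, y_pred)
+print(per_sample.numpy())                  # [0.5, 0.5]
+print(tf.reduce_sum(per_sample).numpy())   # 1.0, same as Reduction.SUM
+print(tf.reduce_mean(per_sample).numpy())  # 0.5, same as Reduction.SUM_OVER_BATCH_SIZE
+```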

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/MeanSquaredLogarithmicError.md b/site/en/api_docs/python/tf/keras/losses/MeanSquaredLogarithmicError.md new file mode 100644 index 00000000000..f0074f7a415 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/MeanSquaredLogarithmicError.md @@ -0,0 +1,276 @@ +description: Computes the mean squared logarithmic error between y_true and y_pred. + +
+ + + + + + +
+ +# tf.keras.losses.MeanSquaredLogarithmicError + + + + + + + + + +Computes the mean squared logarithmic error between `y_true` and `y_pred`. + + + + + + + + + +`loss = square(log(y_true + 1.) - log(y_pred + 1.))` + +#### Usage: + + + +``` +>>> y_true = [[0., 1.], [0., 0.]] +>>> y_pred = [[1., 1.], [1., 0.]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> msle = tf.keras.losses.MeanSquaredLogarithmicError() +>>> msle(y_true, y_pred).numpy() +0.240 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() +0.120 +``` + +``` +>>> # Using 'sum' reduction type. +>>> msle = tf.keras.losses.MeanSquaredLogarithmicError( +... reduction=tf.keras.losses.Reduction.SUM) +>>> msle(y_true, y_pred).numpy() +0.480 +``` + +``` +>>> # Using 'none' reduction type. +>>> msle = tf.keras.losses.MeanSquaredLogarithmicError( +... reduction=tf.keras.losses.Reduction.NONE) +>>> msle(y_true, y_pred).numpy() +array([0.240, 0.240], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.MeanSquaredLogarithmicError()) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to +'mean_squared_logarithmic_error'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
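+
+Like every `Loss` subclass, this class can be serialized and re-created through
+`get_config`/`from_config`, which is what Keras does when saving and loading
+models. A small sketch (the 'my_msle' name is illustrative, not part of this
+API):
+
+```python
+import tensorflow as tf
+
+msle = tf.keras.losses.MeanSquaredLogarithmicError(
+    reduction=tf.keras.losses.Reduction.SUM, name='my_msle')
+config = msle.get_config()
+print(config)   # e.g. {'reduction': 'sum', 'name': 'my_msle'}
+
+restored = tf.keras.losses.MeanSquaredLogarithmicError.from_config(config)
+print(restored.name, restored.reduction)   # my_msle sum
+```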

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/Poisson.md b/site/en/api_docs/python/tf/keras/losses/Poisson.md new file mode 100644 index 00000000000..8243f24329f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/Poisson.md @@ -0,0 +1,275 @@ +description: Computes the Poisson loss between y_true and y_pred. + +
+ + + + + + +
+ +# tf.keras.losses.Poisson + + + + + + + + + +Computes the Poisson loss between `y_true` and `y_pred`. + + + + + + + + + +`loss = y_pred - y_true * log(y_pred)` + +#### Usage: + + + +``` +>>> y_true = [[0., 1.], [0., 0.]] +>>> y_pred = [[1., 1.], [0., 0.]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> p = tf.keras.losses.Poisson() +>>> p(y_true, y_pred).numpy() +0.5 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> p(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() +0.4 +``` + +``` +>>> # Using 'sum' reduction type. +>>> p = tf.keras.losses.Poisson( +... reduction=tf.keras.losses.Reduction.SUM) +>>> p(y_true, y_pred).numpy() +0.999 +``` + +``` +>>> # Using 'none' reduction type. +>>> p = tf.keras.losses.Poisson( +... reduction=tf.keras.losses.Reduction.NONE) +>>> p(y_true, y_pred).numpy() +array([0.999, 0.], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.Poisson()) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to 'poisson'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
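+
+The example values above include the small epsilon that the Keras implementation
+adds inside the logarithm so that `log(0)` stays finite; that is why the 'none'
+result prints 0.999... rather than exactly 1. A NumPy sketch (illustrative only,
+not part of this API):
+
+```python
+import numpy as np
+
+y_true = np.array([[0., 1.], [0., 0.]])
+y_pred = np.array([[1., 1.], [0., 0.]])
+eps = 1e-7
+
+per_sample = np.mean(y_pred - y_true * np.log(y_pred + eps), axis=-1)
+print(per_sample)          # ~[1., 0.]; TF reports [0.999..., 0.] in float32
+print(per_sample.mean())   # ~0.5, the 'auto' result above
+```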

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/Reduction.md b/site/en/api_docs/python/tf/keras/losses/Reduction.md new file mode 100644 index 00000000000..264920727bb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/Reduction.md @@ -0,0 +1,107 @@ +description: Types of loss reduction. + +
+ + + + + + + + +
+ +# tf.keras.losses.Reduction + + + + + + + + + +Types of loss reduction. + + + + + +Contains the following values: + +* `AUTO`: Indicates that the reduction option will be determined by the usage + context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When + used with tf.distribute.Strategy, outside of built-in training loops such + as tf.keras `compile` and `fit`, we expect reduction value to be + `SUM` or `NONE`. Using `AUTO` in that case will raise an error. +* `NONE`: Weighted losses with one dimension reduced (axis=-1, or axis + specified by loss function). When this reduction type used with built-in + Keras training loops like `fit`/`evaluate`, the unreduced vector loss is + passed to the optimizer but the reported loss will be a scalar value. +* `SUM`: Scalar sum of weighted losses. +* `SUM_OVER_BATCH_SIZE`: Scalar `SUM` divided by number of elements in losses. + This reduction type is not supported when used with + tf.distribute.Strategy outside of built-in training loops like tf.keras + `compile`/`fit`. + + You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like: + ``` + with strategy.scope(): + loss_obj = tf.keras.losses.CategoricalCrossentropy( + reduction=tf.keras.losses.Reduction.NONE) + .... + loss = tf.reduce_sum(loss_object(labels, predictions)) * + (1. / global_batch_size) + ``` + +Please see the +[custom training guide](https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details on this. + +## Methods + +

all

+ +View source + + + + + + +

validate

+ +View source + + + + + + + + +## Class Variables + +* `AUTO = 'auto'` +* `NONE = 'none'` +* `SUM = 'sum'` +* `SUM_OVER_BATCH_SIZE = 'sum_over_batch_size'` diff --git a/site/en/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy.md b/site/en/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy.md new file mode 100644 index 00000000000..742da1415ad --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy.md @@ -0,0 +1,295 @@ +description: Computes the crossentropy loss between the labels and predictions. + +
+ + + + + + +
+ +# tf.keras.losses.SparseCategoricalCrossentropy + + + + + + + + + +Computes the crossentropy loss between the labels and predictions. + + + + + + + + + +Use this crossentropy loss function when there are two or more label classes. +We expect labels to be provided as integers. If you want to provide labels +using `one-hot` representation, please use `CategoricalCrossentropy` loss. +There should be `# classes` floating point values per feature for `y_pred` +and a single floating point value per feature for `y_true`. + +In the snippet below, there is a single floating point value per example for +`y_true` and `# classes` floating pointing values per example for `y_pred`. +The shape of `y_true` is `[batch_size]` and the shape of `y_pred` is +`[batch_size, num_classes]`. + +#### Usage: + + + +``` +>>> y_true = [1, 2] +>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> scce = tf.keras.losses.SparseCategoricalCrossentropy() +>>> scce(y_true, y_pred).numpy() +1.177 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> scce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy() +0.814 +``` + +``` +>>> # Using 'sum' reduction type. +>>> scce = tf.keras.losses.SparseCategoricalCrossentropy( +... reduction=tf.keras.losses.Reduction.SUM) +>>> scce(y_true, y_pred).numpy() +2.354 +``` + +``` +>>> # Using 'none' reduction type. +>>> scce = tf.keras.losses.SparseCategoricalCrossentropy( +... reduction=tf.keras.losses.Reduction.NONE) +>>> scce(y_true, y_pred).numpy() +array([0.0513, 2.303], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.SparseCategoricalCrossentropy()) +``` + + + + + + + + + + + + + + + + +
+`from_logits` + +Whether `y_pred` is expected to be a logits tensor. By +default, we assume that `y_pred` encodes a probability distribution. +**Note - Using from_logits=True may be more numerically stable. +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to +'sparse_categorical_crossentropy'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
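+
+To illustrate the `from_logits` argument above: passing raw logits and letting
+the loss apply softmax internally gives the same value (up to float rounding) as
+softmaxing first and passing probabilities, but is generally more numerically
+stable. The logit values below are arbitrary and only for illustration.
+
+```python
+import tensorflow as tf
+
+y_true = [1, 2]
+logits = [[1.0, 3.0, 0.5], [0.2, 0.4, 2.0]]
+
+scce_from_logits = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
+scce_from_probs = tf.keras.losses.SparseCategoricalCrossentropy()
+
+print(scce_from_logits(y_true, logits).numpy())
+print(scce_from_probs(y_true, tf.nn.softmax(logits)).numpy())   # ~ the same value
+```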

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/SquaredHinge.md b/site/en/api_docs/python/tf/keras/losses/SquaredHinge.md new file mode 100644 index 00000000000..192f8f5f5bd --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/SquaredHinge.md @@ -0,0 +1,278 @@ +description: Computes the squared hinge loss between y_true and y_pred. + +
+ + + + + + +
+ +# tf.keras.losses.SquaredHinge + + + + + + + + + +Computes the squared hinge loss between `y_true` and `y_pred`. + + + + + + + + + +`loss = square(maximum(1 - y_true * y_pred, 0))` + +`y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are +provided we will convert them to -1 or 1. + +#### Usage: + + + +``` +>>> y_true = [[0., 1.], [0., 0.]] +>>> y_pred = [[0.6, 0.4], [0.4, 0.6]] +>>> # Using 'auto'/'sum_over_batch_size' reduction type. +>>> h = tf.keras.losses.SquaredHinge() +>>> h(y_true, y_pred).numpy() +1.86 +``` + +``` +>>> # Calling with 'sample_weight'. +>>> h(y_true, y_pred, sample_weight=[1, 0]).numpy() +0.73 +``` + +``` +>>> # Using 'sum' reduction type. +>>> h = tf.keras.losses.SquaredHinge( +... reduction=tf.keras.losses.Reduction.SUM) +>>> h(y_true, y_pred).numpy() +3.72 +``` + +``` +>>> # Using 'none' reduction type. +>>> h = tf.keras.losses.SquaredHinge( +... reduction=tf.keras.losses.Reduction.NONE) +>>> h(y_true, y_pred).numpy() +array([1.46, 2.26], dtype=float32) +``` + +Usage with the `compile` API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss=tf.keras.losses.SquaredHinge()) +``` + + + + + + + + + + + + + +
+`reduction` + +(Optional) Type of tf.keras.losses.Reduction to apply to +loss. Default value is `AUTO`. `AUTO` indicates that the reduction +option will be determined by the usage context. For almost all cases +this defaults to `SUM_OVER_BATCH_SIZE`. When used with +tf.distribute.Strategy, outside of built-in training loops such as +tf.keras `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` +will raise an error. Please see this custom training [tutorial] +(https://www.tensorflow.org/tutorials/distribute/custom_training) +for more details. +
+`name` + +Optional name for the op. Defaults to 'squared_hinge'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `Loss` from its config (output of `get_config()`). + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `Loss` instance. +
+ + + +

get_config

+ +View source + + + +Returns the config dictionary for a `Loss` instance. + + +
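+It can also help to relate this class to the function form, `tf.keras.losses.squared_hinge`: under the default `AUTO`/`SUM_OVER_BATCH_SIZE` reduction, the class averages the per-sample values the function returns. The sketch below reuses the example values from the usage section above and assumes TensorFlow 2.x.
+
+```python
+import tensorflow as tf
+
+y_true = [[0., 1.], [0., 0.]]
+y_pred = [[0.6, 0.4], [0.4, 0.6]]
+
+per_sample = tf.keras.losses.squared_hinge(y_true, y_pred)   # shape (2,)
+class_form = tf.keras.losses.SquaredHinge()(y_true, y_pred)  # scalar
+
+print(per_sample.numpy())                  # per-sample squared hinge values
+print(class_form.numpy())                  # mean of the per-sample values
+print(tf.reduce_mean(per_sample).numpy())  # same value as the line above
+```
+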

__call__

+ +View source + + + +Invokes the `Loss` instance. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`, except +sparse loss functions such as sparse categorical crossentropy where +shape = `[batch_size, d0, .. dN-1]` +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]` +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the loss. If a scalar is provided, then the loss is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the total loss for each sample of the batch is +rescaled by the corresponding element in the `sample_weight` vector. If +the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be +broadcasted to this shape), then each loss element of `y_pred` is scaled +by the corresponding value of `sample_weight`. (Note on`dN-1`: all loss +functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + +
Returns
+Weighted loss float `Tensor`. If `reduction` is `NONE`, this has +shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` +because all loss functions reduce by 1 dimension, usually axis=-1.) +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the shape of `sample_weight` is invalid. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/losses/binary_crossentropy.md b/site/en/api_docs/python/tf/keras/losses/binary_crossentropy.md new file mode 100644 index 00000000000..3920c4925b9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/binary_crossentropy.md @@ -0,0 +1,113 @@ +description: Computes the binary crossentropy loss. + +
+ + +
+ +# tf.keras.losses.binary_crossentropy + + + + + + + + + +Computes the binary crossentropy loss. + + + + + + + + + + +#### Usage: + + + +``` +>>> y_true = [[0, 1], [0, 0]] +>>> y_pred = [[0.6, 0.4], [0.4, 0.6]] +>>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> loss.numpy() +array([0.916 , 0.714], dtype=float32) +``` + + + + + + + + + + + + + + + + + + + +
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`from_logits` + +Whether `y_pred` is expected to be a logits tensor. By default, +we assume that `y_pred` encodes a probability distribution. +
+`label_smoothing` + +Float in [0, 1]. If > `0` then smooth the labels. +
+ + + + + + + + + + + +
+Binary crossentropy loss value. shape = `[batch_size, d0, .. dN-1]`. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/categorical_crossentropy.md b/site/en/api_docs/python/tf/keras/losses/categorical_crossentropy.md new file mode 100644 index 00000000000..ecf8ab8d863 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/categorical_crossentropy.md @@ -0,0 +1,113 @@ +description: Computes the categorical crossentropy loss. + +
+ + +
+ +# tf.keras.losses.categorical_crossentropy + + + + + + + + + +Computes the categorical crossentropy loss. + + + + + + + + + + +#### Usage: + + + +``` +>>> y_true = [[0, 1, 0], [0, 0, 1]] +>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] +>>> loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> loss.numpy() +array([0.0513, 2.303], dtype=float32) +``` + + + + + + + + + + + + + + + + + + + +
+`y_true` + +Tensor of one-hot true targets. +
+`y_pred` + +Tensor of predicted targets. +
+`from_logits` + +Whether `y_pred` is expected to be a logits tensor. By default, +we assume that `y_pred` encodes a probability distribution. +
+`label_smoothing` + +Float in [0, 1]. If > `0` then smooth the labels. +
+ + + + + + + + + + + +
+Categorical crossentropy loss value. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/categorical_hinge.md b/site/en/api_docs/python/tf/keras/losses/categorical_hinge.md new file mode 100644 index 00000000000..b4fc51fc660 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/categorical_hinge.md @@ -0,0 +1,103 @@ +description: Computes the categorical hinge loss between y_true and y_pred. + +
+ + +
+ +# tf.keras.losses.categorical_hinge + + + + + + + + + +Computes the categorical hinge loss between `y_true` and `y_pred`. + + + + + + + + + +`loss = maximum(neg - pos + 1, 0)` +where `neg = sum(y_true * y_pred)` and `pos = maximum(1 - y_true)` + +#### Usage: + + + +``` +>>> y_true = np.random.randint(0, 3, size=(2,)) +>>> y_true = tf.keras.utils.to_categorical(y_true, num_classes=3) +>>> y_pred = np.random.random(size=(2, 3)) +>>> loss = tf.keras.losses.categorical_hinge(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> pos = np.sum(y_true * y_pred, axis=-1) +>>> neg = np.amax((1. - y_true) * y_pred, axis=-1) +>>> assert np.array_equal(loss.numpy(), np.maximum(0., neg - pos + 1.)) +``` + + + + + + + + + + + + + +
+`y_true` + +The ground truth values. `y_true` values are expected to be -1 or 1. +If binary (0 or 1) labels are provided they will be converted to -1 or 1. +
+`y_pred` + +The predicted values. +
+ + + + + + + + + + + +
+Categorical hinge loss values. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/cosine_similarity.md b/site/en/api_docs/python/tf/keras/losses/cosine_similarity.md new file mode 100644 index 00000000000..44f37c65bb3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/cosine_similarity.md @@ -0,0 +1,117 @@ +description: Computes the cosine similarity between labels and predictions. + +
+ + +
+ +# tf.keras.losses.cosine_similarity + + + + + + + + + +Computes the cosine similarity between labels and predictions. + + + + + + + + + +Note that it is a negative quantity between -1 and 0, where 0 indicates +orthogonality and values closer to -1 indicate greater similarity. This makes +it usable as a loss function in a setting where you try to maximize the +proximity between predictions and targets. If either `y_true` or `y_pred` +is a zero vector, cosine similarity will be 0 regardless of the proximity +between predictions and targets. + +`loss = -sum(l2_norm(y_true) * l2_norm(y_pred))` + +#### Usage: + + + +``` +>>> y_true = [[0., 1.], [1., 1.]] +>>> y_pred =[[1., 0.], [1., 1.]] +>>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1) +>>> # l2_norm(y_true) = [[0., 1.], [1./1.414], 1./1.414]]] +>>> # l2_norm(y_pred) = [[1., 0.], [1./1.414], 1./1.414]]] +>>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]] +>>> # loss = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1)) +>>> # = ((0. + 0.) + (0.5 + 0.5)) / 2 +>>> loss.numpy() +array([-0., -0.999], dtype=float32) +``` + + + + + + + + + + + + + + + + +
+`y_true` + +Tensor of true targets. +
+`y_pred` + +Tensor of predicted targets. +
+`axis` + +Axis along which to determine similarity. +
+ + + + + + + + + + + +
+Cosine similarity tensor. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/deserialize.md b/site/en/api_docs/python/tf/keras/losses/deserialize.md new file mode 100644 index 00000000000..09d90a51bdc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/deserialize.md @@ -0,0 +1,86 @@ +description: Deserializes a serialized loss class/function instance. + +
+ + +
+ +# tf.keras.losses.deserialize + + + + + + + + + +Deserializes a serialized loss class/function instance. + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +Loss configuration. +
+`custom_objects` + +Optional dictionary mapping names (strings) to custom +objects (classes and functions) to be considered during deserialization. +
+ + + + + + + + + + + +
+A Keras `Loss` instance or a loss function. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/get.md b/site/en/api_docs/python/tf/keras/losses/get.md new file mode 100644 index 00000000000..6acd17d2615 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/get.md @@ -0,0 +1,97 @@ +description: Retrieves a Keras loss function. + +
+ + +
+ +# tf.keras.losses.get + + + + + + + + + +Retrieves a Keras loss function. + + + + + + + + + + + + + + + + + + + +
+`identifier` + +A loss identifier. One of None or string name of a loss +function/class or loss configuration dictionary or a loss function or +a loss class instance +
+ + + + + + + + + + + +
+A Keras loss function/ `Loss` class instance. +
+ + + + + + + + + + + + +
+`ValueError` + +If `identifier` cannot be interpreted. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/hinge.md b/site/en/api_docs/python/tf/keras/losses/hinge.md new file mode 100644 index 00000000000..dc78a553b38 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/hinge.md @@ -0,0 +1,102 @@ +description: Computes the hinge loss between y_true and y_pred. + +
+ + +
+ +# tf.keras.losses.hinge + + + + + + + + + +Computes the hinge loss between `y_true` and `y_pred`. + + + + + + + + + +`loss = mean(maximum(1 - y_true * y_pred, 0), axis=-1)` + +#### Usage: + + + +``` +>>> y_true = np.random.choice([-1, 1], size=(2, 3)) +>>> y_pred = np.random.random(size=(2, 3)) +>>> loss = tf.keras.losses.hinge(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> assert np.array_equal( +... loss.numpy(), +... np.mean(np.maximum(1. - y_true * y_pred, 0.), axis=-1)) +``` + + + + + + + + + + + + + +
+`y_true` + +The ground truth values. `y_true` values are expected to be -1 or 1. +If binary (0 or 1) labels are provided they will be converted to -1 or 1. +shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+ + + + + + + + + + + +
+Hinge loss values. shape = `[batch_size, d0, .. dN-1]`. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/logcosh.md b/site/en/api_docs/python/tf/keras/losses/logcosh.md new file mode 100644 index 00000000000..930cfbc50ff --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/logcosh.md @@ -0,0 +1,105 @@ +description: Logarithm of the hyperbolic cosine of the prediction error. + +
+ + +
+ +# tf.keras.losses.logcosh + + + + + + + + + +Logarithm of the hyperbolic cosine of the prediction error. + + + + + + + + + +`log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and +to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly +like the mean squared error, but will not be so strongly affected by the +occasional wildly incorrect prediction. + +#### Usage: + + + +``` +>>> y_true = np.random.random(size=(2, 3)) +>>> y_pred = np.random.random(size=(2, 3)) +>>> loss = tf.keras.losses.logcosh(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> x = y_pred - y_true +>>> assert np.allclose( +... loss.numpy(), +... np.mean(x + np.log(np.exp(-2. * x) + 1.) - math_ops.log(2.), axis=-1), +... atol=1e-5) +``` + + + + + + + + + + + + + +
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+ + + + + + + + + + + +
+Logcosh error values. shape = `[batch_size, d0, .. dN-1]`. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/poisson.md b/site/en/api_docs/python/tf/keras/losses/poisson.md new file mode 100644 index 00000000000..f11ea714ec2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/poisson.md @@ -0,0 +1,119 @@ +description: Computes the Poisson loss between y_true and y_pred. + +
+ + +
+ +# tf.keras.losses.poisson + + + + + + + + + +Computes the Poisson loss between y_true and y_pred. + + + + + + + + + +The Poisson loss is the mean of the elements of the `Tensor` +`y_pred - y_true * log(y_pred)`. + +#### Usage: + + + +``` +>>> y_true = np.random.randint(0, 2, size=(2, 3)) +>>> y_pred = np.random.random(size=(2, 3)) +>>> loss = tf.keras.losses.poisson(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> y_pred = y_pred + 1e-7 +>>> assert np.allclose( +... loss.numpy(), np.mean(y_pred - y_true * np.log(y_pred), axis=-1), +... atol=1e-5) +``` + + + + + + + + + + + + + +
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+ + + + + + + + + + + +
+Poisson loss value. shape = `[batch_size, d0, .. dN-1]`. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +If `y_true` and `y_pred` have incompatible shapes. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/serialize.md b/site/en/api_docs/python/tf/keras/losses/serialize.md new file mode 100644 index 00000000000..36a0942f09e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/serialize.md @@ -0,0 +1,78 @@ +description: Serializes loss function or Loss instance. + +
+ + +
+ +# tf.keras.losses.serialize + + + + + + + + + +Serializes loss function or `Loss` instance. + + + + + + + + + + + + + + + + + + + +
+`loss` + +A Keras `Loss` instance or a loss function. +
+ + + + + + + + + + + +
+Loss configuration dictionary. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/sparse_categorical_crossentropy.md b/site/en/api_docs/python/tf/keras/losses/sparse_categorical_crossentropy.md new file mode 100644 index 00000000000..0e9c6a8774e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/sparse_categorical_crossentropy.md @@ -0,0 +1,114 @@ +description: Computes the sparse categorical crossentropy loss. + +
+ + +
+ +# tf.keras.losses.sparse_categorical_crossentropy + + + + + + + + + +Computes the sparse categorical crossentropy loss. + + + + + + + + + + +#### Usage: + + + +``` +>>> y_true = [1, 2] +>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] +>>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> loss.numpy() +array([0.0513, 2.303], dtype=float32) +``` + + + + + + + + + + + + + + + + + + + +
+`y_true` + +Ground truth values. +
+`y_pred` + +The predicted values. +
+`from_logits` + +Whether `y_pred` is expected to be a logits tensor. By default, +we assume that `y_pred` encodes a probability distribution. +
+`axis` + +(Optional) Defaults to -1. The dimension along which the entropy is +computed. +
+ + + + + + + + + + + +
+Sparse categorical crossentropy loss value. +
+ diff --git a/site/en/api_docs/python/tf/keras/losses/squared_hinge.md b/site/en/api_docs/python/tf/keras/losses/squared_hinge.md new file mode 100644 index 00000000000..305f6c392aa --- /dev/null +++ b/site/en/api_docs/python/tf/keras/losses/squared_hinge.md @@ -0,0 +1,102 @@ +description: Computes the squared hinge loss between y_true and y_pred. + +
+ + +
+ +# tf.keras.losses.squared_hinge + + + + + + + + + +Computes the squared hinge loss between `y_true` and `y_pred`. + + + + + + + + + +`loss = mean(square(maximum(1 - y_true * y_pred, 0)), axis=-1)` + +#### Usage: + + + +``` +>>> y_true = np.random.choice([-1, 1], size=(2, 3)) +>>> y_pred = np.random.random(size=(2, 3)) +>>> loss = tf.keras.losses.squared_hinge(y_true, y_pred) +>>> assert loss.shape == (2,) +>>> assert np.array_equal( +... loss.numpy(), +... np.mean(np.square(np.maximum(1. - y_true * y_pred, 0.)), axis=-1)) +``` + + + + + + + + + + + + + +
+`y_true` + +The ground truth values. `y_true` values are expected to be -1 or 1. +If binary (0 or 1) labels are provided we will convert them to -1 or 1. +shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+ + + + + + + + + + + +
+Squared hinge loss values. shape = `[batch_size, d0, .. dN-1]`. +
+ diff --git a/site/en/api_docs/python/tf/keras/metrics.md b/site/en/api_docs/python/tf/keras/metrics.md new file mode 100644 index 00000000000..60de69700db --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics.md @@ -0,0 +1,167 @@ +description: Built-in metrics. + +
+ + +
+ +# Module: tf.keras.metrics + + + + + + + + + +Built-in metrics. + + + + + +## Classes + +[`class AUC`](../../tf/keras/metrics/AUC.md): Computes the approximate AUC (Area under the curve) via a Riemann sum. + +[`class Accuracy`](../../tf/keras/metrics/Accuracy.md): Calculates how often predictions equals labels. + +[`class BinaryAccuracy`](../../tf/keras/metrics/BinaryAccuracy.md): Calculates how often predictions matches binary labels. + +[`class BinaryCrossentropy`](../../tf/keras/metrics/BinaryCrossentropy.md): Computes the crossentropy metric between the labels and predictions. + +[`class CategoricalAccuracy`](../../tf/keras/metrics/CategoricalAccuracy.md): Calculates how often predictions matches one-hot labels. + +[`class CategoricalCrossentropy`](../../tf/keras/metrics/CategoricalCrossentropy.md): Computes the crossentropy metric between the labels and predictions. + +[`class CategoricalHinge`](../../tf/keras/metrics/CategoricalHinge.md): Computes the categorical hinge metric between `y_true` and `y_pred`. + +[`class CosineSimilarity`](../../tf/keras/metrics/CosineSimilarity.md): Computes the cosine similarity between the labels and predictions. + +[`class FalseNegatives`](../../tf/keras/metrics/FalseNegatives.md): Calculates the number of false negatives. + +[`class FalsePositives`](../../tf/keras/metrics/FalsePositives.md): Calculates the number of false positives. + +[`class Hinge`](../../tf/keras/metrics/Hinge.md): Computes the hinge metric between `y_true` and `y_pred`. + +[`class KLDivergence`](../../tf/keras/metrics/KLDivergence.md): Computes Kullback-Leibler divergence metric between `y_true` and `y_pred`. + +[`class LogCoshError`](../../tf/keras/metrics/LogCoshError.md): Computes the logarithm of the hyperbolic cosine of the prediction error. + +[`class Mean`](../../tf/keras/metrics/Mean.md): Computes the (weighted) mean of the given values. + +[`class MeanAbsoluteError`](../../tf/keras/metrics/MeanAbsoluteError.md): Computes the mean absolute error between the labels and predictions. + +[`class MeanAbsolutePercentageError`](../../tf/keras/metrics/MeanAbsolutePercentageError.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`class MeanIoU`](../../tf/keras/metrics/MeanIoU.md): Computes the mean Intersection-Over-Union metric. + +[`class MeanRelativeError`](../../tf/keras/metrics/MeanRelativeError.md): Computes the mean relative error by normalizing with the given values. + +[`class MeanSquaredError`](../../tf/keras/metrics/MeanSquaredError.md): Computes the mean squared error between `y_true` and `y_pred`. + +[`class MeanSquaredLogarithmicError`](../../tf/keras/metrics/MeanSquaredLogarithmicError.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`class MeanTensor`](../../tf/keras/metrics/MeanTensor.md): Computes the element-wise (weighted) mean of the given tensors. + +[`class Metric`](../../tf/keras/metrics/Metric.md): Encapsulates metric logic and state. + +[`class Poisson`](../../tf/keras/metrics/Poisson.md): Computes the Poisson metric between `y_true` and `y_pred`. + +[`class Precision`](../../tf/keras/metrics/Precision.md): Computes the precision of the predictions with respect to the labels. + +[`class PrecisionAtRecall`](../../tf/keras/metrics/PrecisionAtRecall.md): Computes the precision at a given recall. + +[`class Recall`](../../tf/keras/metrics/Recall.md): Computes the recall of the predictions with respect to the labels. 
+ +[`class RecallAtPrecision`](../../tf/keras/metrics/RecallAtPrecision.md): Computes the maximally achievable recall at a required precision. + +[`class RootMeanSquaredError`](../../tf/keras/metrics/RootMeanSquaredError.md): Computes root mean squared error metric between `y_true` and `y_pred`. + +[`class SensitivityAtSpecificity`](../../tf/keras/metrics/SensitivityAtSpecificity.md): Computes the sensitivity at a given specificity. + +[`class SparseCategoricalAccuracy`](../../tf/keras/metrics/SparseCategoricalAccuracy.md): Calculates how often predictions matches integer labels. + +[`class SparseCategoricalCrossentropy`](../../tf/keras/metrics/SparseCategoricalCrossentropy.md): Computes the crossentropy metric between the labels and predictions. + +[`class SparseTopKCategoricalAccuracy`](../../tf/keras/metrics/SparseTopKCategoricalAccuracy.md): Computes how often integer targets are in the top `K` predictions. + +[`class SpecificityAtSensitivity`](../../tf/keras/metrics/SpecificityAtSensitivity.md): Computes the specificity at a given sensitivity. + +[`class SquaredHinge`](../../tf/keras/metrics/SquaredHinge.md): Computes the squared hinge metric between `y_true` and `y_pred`. + +[`class Sum`](../../tf/keras/metrics/Sum.md): Computes the (weighted) sum of the given values. + +[`class TopKCategoricalAccuracy`](../../tf/keras/metrics/TopKCategoricalAccuracy.md): Computes how often targets are in the top `K` predictions. + +[`class TrueNegatives`](../../tf/keras/metrics/TrueNegatives.md): Calculates the number of true negatives. + +[`class TruePositives`](../../tf/keras/metrics/TruePositives.md): Calculates the number of true positives. + +## Functions + +[`KLD(...)`](../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`MAE(...)`](../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`MAPE(...)`](../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`MSE(...)`](../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`MSLE(...)`](../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`binary_accuracy(...)`](../../tf/keras/metrics/binary_accuracy.md): Calculates how often predictions matches binary labels. + +[`binary_crossentropy(...)`](../../tf/keras/losses/binary_crossentropy.md): Computes the binary crossentropy loss. + +[`categorical_accuracy(...)`](../../tf/keras/metrics/categorical_accuracy.md): Calculates how often predictions matches one-hot labels. + +[`categorical_crossentropy(...)`](../../tf/keras/losses/categorical_crossentropy.md): Computes the categorical crossentropy loss. + +[`deserialize(...)`](../../tf/keras/metrics/deserialize.md) + +[`get(...)`](../../tf/keras/metrics/get.md): Return a metric given its identifer. + +[`hinge(...)`](../../tf/keras/losses/hinge.md): Computes the hinge loss between `y_true` and `y_pred`. + +[`kld(...)`](../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`kullback_leibler_divergence(...)`](../../tf/keras/losses/KLD.md): Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`. + +[`mae(...)`](../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`mape(...)`](../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. 
+ +[`mean_absolute_error(...)`](../../tf/keras/losses/MAE.md): Computes the mean absolute error between labels and predictions. + +[`mean_absolute_percentage_error(...)`](../../tf/keras/losses/MAPE.md): Computes the mean absolute percentage error between `y_true` and `y_pred`. + +[`mean_squared_error(...)`](../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`mean_squared_logarithmic_error(...)`](../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`mse(...)`](../../tf/keras/losses/MSE.md): Computes the mean squared error between labels and predictions. + +[`msle(...)`](../../tf/keras/losses/MSLE.md): Computes the mean squared logarithmic error between `y_true` and `y_pred`. + +[`poisson(...)`](../../tf/keras/losses/poisson.md): Computes the Poisson loss between y_true and y_pred. + +[`serialize(...)`](../../tf/keras/metrics/serialize.md) + +[`sparse_categorical_accuracy(...)`](../../tf/keras/metrics/sparse_categorical_accuracy.md): Calculates how often predictions matches integer labels. + +[`sparse_categorical_crossentropy(...)`](../../tf/keras/losses/sparse_categorical_crossentropy.md): Computes the sparse categorical crossentropy loss. + +[`sparse_top_k_categorical_accuracy(...)`](../../tf/keras/metrics/sparse_top_k_categorical_accuracy.md): Computes how often integer targets are in the top `K` predictions. + +[`squared_hinge(...)`](../../tf/keras/losses/squared_hinge.md): Computes the squared hinge loss between `y_true` and `y_pred`. + +[`top_k_categorical_accuracy(...)`](../../tf/keras/metrics/top_k_categorical_accuracy.md): Computes how often targets are in the top `K` predictions. + diff --git a/site/en/api_docs/python/tf/keras/metrics/AUC.md b/site/en/api_docs/python/tf/keras/metrics/AUC.md new file mode 100644 index 00000000000..1274bc03680 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/AUC.md @@ -0,0 +1,361 @@ +description: Computes the approximate AUC (Area under the curve) via a Riemann sum. + +
+ + + + + + + + +
+ +# tf.keras.metrics.AUC + + + + + + + + + +Computes the approximate AUC (Area under the curve) via a Riemann sum. + +Inherits From: [`Metric`](../../../tf/keras/metrics/Metric.md) + + + + + + + + + +This metric creates four local variables, `true_positives`, `true_negatives`, +`false_positives` and `false_negatives` that are used to compute the AUC. +To discretize the AUC curve, a linearly spaced set of thresholds is used to +compute pairs of recall and precision values. The area under the ROC-curve is +therefore computed using the height of the recall values by the false positive +rate, while the area under the PR-curve is the computed using the height of +the precision values by the recall. + +This value is ultimately returned as `auc`, an idempotent operation that +computes the area under a discretized curve of precision versus recall values +(computed using the aforementioned variables). The `num_thresholds` variable +controls the degree of discretization with larger numbers of thresholds more +closely approximating the true AUC. The quality of the approximation may vary +dramatically depending on `num_thresholds`. The `thresholds` parameter can be +used to manually specify thresholds which split the predictions more evenly. + +For best results, `predictions` should be distributed approximately uniformly +in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC +approximation may be poor if this is not the case. Setting `summation_method` +to 'minoring' or 'majoring' can help quantify the error in the approximation +by providing lower or upper bound estimate of the AUC. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.AUC(num_thresholds=3) +>>> _ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9]) +>>> # threshold values are [0 - 1e-7, 0.5, 1 + 1e-7] +>>> # tp = [2, 1, 0], fp = [2, 0, 0], fn = [0, 1, 2], tn = [0, 2, 2] +>>> # recall = [1, 0.5, 0], fp_rate = [1, 0, 0] +>>> # auc = ((((1+0.5)/2)*(1-0))+ (((0.5+0)/2)*(0-0))) = 0.75 +>>> m.result().numpy() +0.75 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9], +... sample_weight=[1, 0, 0, 1]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.AUC()]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_thresholds` + +(Optional) Defaults to 200. The number of thresholds to +use when discretizing the roc curve. Values must be > 1. +
+`curve` + +(Optional) Specifies the name of the curve to be computed, 'ROC' +[default] or 'PR' for the Precision-Recall-curve. +
+`summation_method` + +(Optional) Specifies the Riemann summation method used +(https://en.wikipedia.org/wiki/Riemann_sum): 'interpolation' [default], +applies mid-point summation scheme for `ROC`. For PR-AUC, interpolates +(true/false) positives but not the ratio that is precision (see Davis +& Goadrich 2006 for details); 'minoring' that applies left summation +for increasing intervals and right summation for decreasing intervals; +'majoring' that does the opposite. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`thresholds` + +(Optional) A list of floating point values to use as the +thresholds for discretizing the curve. If set, the `num_thresholds` +parameter is ignored. Values should be in [0, 1]. Endpoint thresholds +equal to {-epsilon, 1+epsilon} for a small positive epsilon value will +be automatically included with these to correctly handle predictions +equal to exactly 0 or 1. +
+`multi_label` + +boolean indicating whether multilabel data should be +treated as such, wherein AUC is computed separately for each label and +then averaged across labels, or (when False) if the data should be +flattened into a single label before AUC computation. In the latter +case, when multilabel data is passed to AUC, each label-prediction pair +is treated as an individual data point. Should be set to False for +multi-class data. +
+`label_weights` + +(optional) list, array, or tensor of non-negative weights +used to compute AUCs for multilabel data. When `multi_label` is True, +the weights are applied to the individual label AUCs when they are +averaged to produce the multi-label AUC. When it's False, they are used +to weight the individual label predictions in computing the confusion +matrix on the flattened data. Note that this is unlike class_weights in +that class_weights weights the example depending on the value of its +label, whereas label_weights depends only on the index of that label +before flattening; therefore `label_weights` should not be used for +multi-class data. +
+ + + +## Methods + +

interpolate_pr_auc

+ +View source + + + +Interpolation formula inspired by section 4 of Davis & Goadrich 2006. + +https://www.biostat.wisc.edu/~page/rocpr.pdf + +Note here we derive & use a closed formula not present in the paper +as follows: + + Precision = TP / (TP + FP) = TP / P + +Modeling all of TP (true positive), FP (false positive) and their sum +P = TP + FP (predicted positive) as varying linearly within each interval +[A, B] between successive thresholds, we get + + Precision slope = dTP / dP + = (TP_B - TP_A) / (P_B - P_A) + = (TP - TP_A) / (P - P_A) + Precision = (TP_A + slope * (P - P_A)) / P + +The area within the interval is (slope / total_pos_weight) times + + int_A^B{Precision.dP} = int_A^B{(TP_A + slope * (P - P_A)) * dP / P} + int_A^B{Precision.dP} = int_A^B{slope * dP + intercept * dP / P} + +where intercept = TP_A - slope * P_A = TP_B - slope * P_B, resulting in + + int_A^B{Precision.dP} = TP_B - TP_A + intercept * log(P_B / P_A) + +Bringing back the factor (slope / total_pos_weight) we'd put aside, we get + + slope * [dTP + intercept * log(P_B / P_A)] / total_pos_weight + +where dTP == TP_B - TP_A. + +Note that when P_A == 0 the above calculation simplifies into + + int_A^B{Precision.dTP} = int_A^B{slope * dTP} = slope * (TP_B - TP_A) + +which is really equivalent to imputing constant precision throughout the +first bucket having >0 true positives. + + + + + + + + + + +
Returns
+`pr_auc` + +an approximation of the area under the P-R curve. +
+ + + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +
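+`result` reads the value accumulated so far; together with `update_state` and `reset_states` it forms the usual accumulate/read/clear cycle. Below is a minimal sketch of driving the metric by hand in a custom evaluation loop, with illustrative batch values (assumes TensorFlow 2.x).
+
+```python
+import tensorflow as tf
+
+auc = tf.keras.metrics.AUC(num_thresholds=200)
+
+batches = [
+    ([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]),
+    ([0, 1, 1, 0], [0.2, 0.7, 0.6, 0.3]),
+]
+
+for labels, scores in batches:
+    auc.update_state(labels, scores)   # accumulate confusion-matrix counts
+
+print('AUC over all batches:', auc.result().numpy())
+
+auc.reset_states()                     # clear state before the next evaluation
+```
+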

update_state

+ +View source + + + +Accumulates confusion matrix statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/Accuracy.md b/site/en/api_docs/python/tf/keras/metrics/Accuracy.md new file mode 100644 index 00000000000..998e96147c3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/Accuracy.md @@ -0,0 +1,223 @@ +description: Calculates how often predictions equals labels. + +
+ + + + + + + +
+ +# tf.keras.metrics.Accuracy + + + + + + + + + +Calculates how often predictions equals labels. + + + + + + + + + +This metric creates two local variables, `total` and `count` that are used to +compute the frequency with which `y_pred` matches `y_true`. This frequency is +ultimately returned as `binary accuracy`: an idempotent operation that simply +divides `total` by `count`. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.Accuracy() +>>> _ = m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]]) +>>> m.result().numpy() +0.75 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]], +... sample_weight=[1, 1, 0, 0]) +>>> m.result().numpy() +0.5 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.Accuracy()]) +``` + + + + + + + + + + + + + + + + + + + +
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +
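+Because the metric maintains running `total` and `count` variables, `result` reflects every batch seen since the last reset. A minimal sketch with illustrative batches (assumes TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+acc = tf.keras.metrics.Accuracy()
+
+acc.update_state([[1], [2]], [[1], [2]])   # first batch: 2 of 2 correct
+acc.update_state([[3], [4]], [[0], [4]])   # second batch: 1 of 2 correct
+
+print(acc.result().numpy())                # 0.75 over the four examples
+
+acc.reset_states()                         # start fresh, e.g. at a new epoch
+```
+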

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/BinaryAccuracy.md b/site/en/api_docs/python/tf/keras/metrics/BinaryAccuracy.md new file mode 100644 index 00000000000..0e43d496b20 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/BinaryAccuracy.md @@ -0,0 +1,216 @@ +description: Calculates how often predictions matches binary labels. + +
+ + + + + + + +
+ +# tf.keras.metrics.BinaryAccuracy + + + + + + + + + +Calculates how often predictions matches binary labels. + + + + + + + + + +This metric creates two local variables, `total` and `count` that are used to +compute the frequency with which `y_pred` matches `y_true`. This frequency is +ultimately returned as `binary accuracy`: an idempotent operation that simply +divides `total` by `count`. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.BinaryAccuracy() +>>> _ = m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]]) +>>> m.result().numpy() +0.75 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]], +... sample_weight=[1, 0, 0, 1]) +>>> m.result().numpy() +0.5 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.BinaryAccuracy()]) +``` + + + + + + + + + + + + + + + + +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`threshold` + +(Optional) Float representing the threshold for deciding +whether prediction values are 1 or 0. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +
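+The usage section above keeps the default decision threshold of 0.5; the sketch below illustrates the `threshold` constructor argument with made-up predictions (assumes TensorFlow 2.x).
+
+```python
+import tensorflow as tf
+
+# Predictions are binarized at 0.7 before being compared with the labels.
+m = tf.keras.metrics.BinaryAccuracy(threshold=0.7)
+m.update_state([[1], [1], [0], [0]], [[0.98], [0.65], [0.2], [0.75]])
+print(m.result().numpy())   # 0.5: only the first and third rows match
+```
+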

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/BinaryCrossentropy.md b/site/en/api_docs/python/tf/keras/metrics/BinaryCrossentropy.md new file mode 100644 index 00000000000..4a48af8aad6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/BinaryCrossentropy.md @@ -0,0 +1,224 @@ +description: Computes the crossentropy metric between the labels and predictions. + +
+ + + + + + + +
+ +# tf.keras.metrics.BinaryCrossentropy + + + + + + + + + +Computes the crossentropy metric between the labels and predictions. + + + + + + + + + +This is the crossentropy metric class to be used when there are only two +label classes (0 and 1). + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.BinaryCrossentropy() +>>> _ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) +>>> m.result().numpy() +0.81492424 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +0.9162905 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.BinaryCrossentropy()]) +``` + + + + + + + + + + + + + + + + + + + +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`from_logits` + +(Optional )Whether output is expected to be a logits tensor. +By default, we consider that output encodes a probability distribution. +
+`label_smoothing` + +(Optional) Float in [0, 1]. When > 0, label values are +smoothed, meaning the confidence on label values are relaxed. +e.g. `label_smoothing=0.2` means that we will use a value of `0.1` for +label `0` and `0.9` for label `1`" +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +
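+When a model emits raw scores rather than probabilities, the `from_logits` argument documented above tells the metric to apply a sigmoid internally. A minimal sketch with illustrative logits (assumes TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+m = tf.keras.metrics.BinaryCrossentropy(from_logits=True)
+labels = [[1.0, 0.0], [0.0, 1.0]]
+logits = [[2.0, -1.0], [-3.0, 4.0]]   # unbounded scores, not probabilities
+m.update_state(labels, logits)
+print(m.result().numpy())
+```
+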

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/CategoricalAccuracy.md b/site/en/api_docs/python/tf/keras/metrics/CategoricalAccuracy.md new file mode 100644 index 00000000000..2d1b810eb79 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/CategoricalAccuracy.md @@ -0,0 +1,219 @@ +description: Calculates how often predictions matches one-hot labels. + +
+ + + + + + + +
+ +# tf.keras.metrics.CategoricalAccuracy + + + + + + + + + +Calculates how often predictions matches one-hot labels. + + + + + + + + + +You can provide logits of classes as `y_pred`, since argmax of +logits and probabilities are same. + +This metric creates two local variables, `total` and `count` that are used to +compute the frequency with which `y_pred` matches `y_true`. This frequency is +ultimately returned as `categorical accuracy`: an idempotent operation that +simply divides `total` by `count`. + +`y_pred` and `y_true` should be passed in as vectors of probabilities, rather +than as labels. If necessary, use tf.one_hot to expand `y_true` as a vector. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.CategoricalAccuracy() +>>> _ = m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8], +... [0.05, 0.95, 0]]) +>>> m.result().numpy() +0.5 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8], +... [0.05, 0.95, 0]], +... sample_weight=[0.7, 0.3]) +>>> m.result().numpy() +0.3 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.CategoricalAccuracy()]) +``` + + + + + + + + + + + + + +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +
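+As noted at the top of this page, only the argmax of `y_pred` matters, so unnormalized logits can be passed directly. A minimal sketch with illustrative values (assumes TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+m = tf.keras.metrics.CategoricalAccuracy()
+y_true = [[0, 0, 1], [0, 1, 0]]
+logits = [[1.0, 2.0, 5.0], [0.5, -0.3, 0.1]]   # argmax: class 2, class 0
+m.update_state(y_true, logits)
+print(m.result().numpy())   # 0.5: only the first example is classified correctly
+```
+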

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/CategoricalCrossentropy.md b/site/en/api_docs/python/tf/keras/metrics/CategoricalCrossentropy.md new file mode 100644 index 00000000000..5f0b1eb89d9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/CategoricalCrossentropy.md @@ -0,0 +1,275 @@ +description: Computes the crossentropy metric between the labels and predictions. + +
+ + + + + + + +
+ +# tf.keras.metrics.CategoricalCrossentropy + + + + + + + + + +Computes the crossentropy metric between the labels and predictions. + + + + + + + + + +This is the crossentropy metric class to be used when there are multiple +label classes (2 or more). Here we assume that labels are given as a `one_hot` +representation. eg., When labels values are [2, 0, 1], + `y_true` = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]. + +#### Usage: + + + +``` +>>> # EPSILON = 1e-7, y = y_true, y` = y_pred +>>> # y` = clip_ops.clip_by_value(output, EPSILON, 1. - EPSILON) +>>> # y` = [[0.05, 0.95, EPSILON], [0.1, 0.8, 0.1]] +>>> # xent = -sum(y * log(y'), axis = -1) +>>> # = -((log 0.95), (log 0.1)) +>>> # = [0.051, 2.302] +>>> # Reduced xent = (0.051 + 2.302) / 2 +>>> m = tf.keras.metrics.CategoricalCrossentropy() +>>> _ = m.update_state([[0, 1, 0], [0, 0, 1]], +... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]) +>>> m.result().numpy() +1.1769392 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1, 0], [0, 0, 1]], +... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]], +... sample_weight=tf.constant([0.3, 0.7])) +>>> m.result().numpy() +1.6271976 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.CategoricalCrossentropy()]) +``` + + + + + + + + + + + + + + + + + + + +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`from_logits` + +(Optional ) Whether `y_pred` is expected to be a logits tensor. +By default, we assume that `y_pred` encodes a probability distribution. +
+`label_smoothing` + +Float in [0, 1]. When > 0, label values are smoothed, +meaning the confidence on label values are relaxed. e.g. +`label_smoothing=0.2` means that we will use a value of `0.1` for label +`0` and `0.9` for label `1`" +
+ + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +
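+The `label_smoothing` argument documented above relaxes the one-hot targets before the crossentropy is accumulated. A minimal sketch comparing a plain and a smoothed instance on the same illustrative inputs (assumes TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+y_true = [[0, 1, 0], [0, 0, 1]]
+y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]
+
+plain = tf.keras.metrics.CategoricalCrossentropy()
+smoothed = tf.keras.metrics.CategoricalCrossentropy(label_smoothing=0.2)
+
+plain.update_state(y_true, y_pred)
+smoothed.update_state(y_true, y_pred)
+
+print(plain.result().numpy())      # matches the unsmoothed usage example above
+print(smoothed.result().numpy())   # differs, because the targets are relaxed
+```
+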

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/CategoricalHinge.md b/site/en/api_docs/python/tf/keras/metrics/CategoricalHinge.md new file mode 100644 index 00000000000..4835e3ad83d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/CategoricalHinge.md @@ -0,0 +1,219 @@ +description: Computes the categorical hinge metric between y_true and y_pred. + +
+ + + + + + + +
+ +# tf.keras.metrics.CategoricalHinge + + + + + + + + + +Computes the categorical hinge metric between `y_true` and `y_pred`. + + + + + + + + + + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.CategoricalHinge() +>>> _ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) +>>> m.result().numpy() +1.4000001 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +1.2 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.CategoricalHinge()]) +``` + + + + + + + + + + + + + + + + + + + +
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/CosineSimilarity.md b/site/en/api_docs/python/tf/keras/metrics/CosineSimilarity.md new file mode 100644 index 00000000000..b39d299d41b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/CosineSimilarity.md @@ -0,0 +1,222 @@ +description: Computes the cosine similarity between the labels and predictions. + +
+ + + + + + + +
+ +# tf.keras.metrics.CosineSimilarity + + + + + + + + + +Computes the cosine similarity between the labels and predictions. + + + + + + + + + +cosine similarity = (a . b) / ||a|| ||b|| +[Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity) + +This metric keeps the average cosine similarity between `predictions` and +`labels` over a stream of data. + +#### Usage: + + + +``` +>>> # l2_norm(y_true) = [[0., 1.], [1./1.414], 1./1.414]]] +>>> # l2_norm(y_pred) = [[1., 0.], [1./1.414], 1./1.414]]] +>>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]] +>>> # result = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1)) +>>> # = ((0. + 0.) + (0.5 + 0.5)) / 2 +>>> m = tf.keras.metrics.CosineSimilarity(axis=1) +>>> _ = m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]]) +>>> m.result().numpy() +0.49999997 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]], +... sample_weight=[0.3, 0.7]) +>>> m.result().numpy() +0.6999999 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.CosineSimilarity(axis=1)]) +``` + + + + + + + + + + + + + + + + +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`axis` + +(Optional) Defaults to -1. The dimension along which the cosine +similarity is computed. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +
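+Since the metric keeps the average cosine similarity over a stream of data, successive `update_state` calls fold new batches into the running value. A minimal sketch with illustrative batches (assumes TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+m = tf.keras.metrics.CosineSimilarity(axis=1)
+
+m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]])   # first batch
+m.update_state([[1., 0.]], [[1., 0.]])                       # identical vectors
+
+print(m.result().numpy())   # running mean over the three examples seen so far
+```
+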

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/FalseNegatives.md b/site/en/api_docs/python/tf/keras/metrics/FalseNegatives.md new file mode 100644 index 00000000000..4f24bbe6ad7 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/FalseNegatives.md @@ -0,0 +1,210 @@ +description: Calculates the number of false negatives. + +
+ + + + + + + +
+ +# tf.keras.metrics.FalseNegatives + + + + + + + + + +Calculates the number of false negatives. + + + + + + + + + +If `sample_weight` is given, calculates the sum of the weights of +false negatives. This metric creates one local variable, `accumulator` +that is used to keep track of the number of false negatives. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.FalseNegatives() +>>> _ = m.update_state([0, 1, 1, 1], [0, 1, 0, 0]) +>>> m.result().numpy() +2.0 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 1, 1, 1], [0, 1, 0, 0], sample_weight=[0, 0, 1, 0]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.FalseNegatives()]) +``` + + + + + + + + + + + + + + + + +
+`thresholds` + +(Optional) Defaults to 0.5. A float value or a python +list/tuple of float threshold values in [0, 1]. A threshold is compared +with prediction values to determine the truth value of predictions +(i.e., above the threshold is `true`, below is `false`). One metric +value is generated for each threshold value. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates the given confusion matrix condition statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/FalsePositives.md b/site/en/api_docs/python/tf/keras/metrics/FalsePositives.md new file mode 100644 index 00000000000..1fc96a5c76d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/FalsePositives.md @@ -0,0 +1,210 @@ +description: Calculates the number of false positives. + +
+ + + + + + + +
+ +# tf.keras.metrics.FalsePositives + + + + + + + + + +Calculates the number of false positives. + + + + + + + + + +If `sample_weight` is given, calculates the sum of the weights of +false positives. This metric creates one local variable, `accumulator` +that is used to keep track of the number of false positives. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.FalsePositives() +>>> _ = m.update_state([0, 1, 0, 0], [0, 0, 1, 1]) +>>> m.result().numpy() +2.0 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 1, 0, 0], [0, 0, 1, 1], sample_weight=[0, 0, 1, 0]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.FalsePositives()]) +``` + + + + + + + + + + + + + + + + +
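As a cross-check on what is being counted, this sketch reproduces the first example with a one-off computation; the inputs are copied from the example above.

```python
import tensorflow as tf

y_true = tf.constant([0, 1, 0, 0])
y_pred = tf.constant([0, 0, 1, 1])

# A false positive is a positive prediction where the label is 0.
fp = tf.reduce_sum(tf.cast(
    tf.logical_and(tf.equal(y_true, 0), tf.equal(y_pred, 1)), tf.float32))

m = tf.keras.metrics.FalsePositives()
m.update_state(y_true, y_pred)
print(fp.numpy(), m.result().numpy())  # 2.0 2.0
```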
+`thresholds` + +(Optional) Defaults to 0.5. A float value or a python +list/tuple of float threshold values in [0, 1]. A threshold is compared +with prediction values to determine the truth value of predictions +(i.e., above the threshold is `true`, below is `false`). One metric +value is generated for each threshold value. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates the given confusion matrix condition statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/Hinge.md b/site/en/api_docs/python/tf/keras/metrics/Hinge.md new file mode 100644 index 00000000000..d6755b979e7 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/Hinge.md @@ -0,0 +1,218 @@ +description: Computes the hinge metric between y_true and y_pred. + +
+ + + + + + + +
+ +# tf.keras.metrics.Hinge + + + + + + + + + +Computes the hinge metric between `y_true` and `y_pred`. + + + + + + + + + +`y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are +provided we will convert them to -1 or 1. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.Hinge() +>>> _ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) +>>> m.result().numpy() +1.3 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +1.1 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.Hinge()]) +``` + + + + + + + + + + + + + + + + + + + +
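To make the label conversion concrete, the sketch below maps the 0/1 labels of the first example to -1/1 and applies the hinge term by hand; the values are taken from the example above and the code is only illustrative.

```python
import tensorflow as tf

y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[0.6, 0.4], [0.4, 0.6]])

# 0/1 labels become -1/1 before the element-wise term max(1 - y_true * y_pred, 0).
signed = 2. * y_true - 1.
per_sample = tf.reduce_mean(tf.maximum(1. - signed * y_pred, 0.), axis=-1)
print(tf.reduce_mean(per_sample).numpy())  # ~1.3, the same value as the metric
```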
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/KLDivergence.md b/site/en/api_docs/python/tf/keras/metrics/KLDivergence.md new file mode 100644 index 00000000000..3d6dc994cd3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/KLDivergence.md @@ -0,0 +1,217 @@ +description: Computes Kullback-Leibler divergence metric between y_true and y_pred. + +
+ + + + + + + +
+ +# tf.keras.metrics.KLDivergence + + + + + + + + + +Computes Kullback-Leibler divergence metric between `y_true` and `y_pred`. + + + + + + + + + +`metric = y_true * log(y_true / y_pred)` + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.KLDivergence() +>>> _ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) +>>> m.result().numpy() +0.45814306 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +0.9162892 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.KLDivergence()]) +``` + + + + + + + + + + + + + + + + + + + +
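The element-wise formula can be checked by hand. Like the corresponding Keras loss, the computation clips both tensors to a small epsilon before the logarithm, which is why the zero entries in the example contribute essentially nothing; the sketch below mirrors that and is only an illustration.

```python
import tensorflow as tf

y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[0.6, 0.4], [0.4, 0.6]])

# Clip away exact zeros, apply y_true * log(y_true / y_pred), sum per sample, average.
eps = tf.keras.backend.epsilon()
t = tf.clip_by_value(y_true, eps, 1.)
p = tf.clip_by_value(y_pred, eps, 1.)
per_sample = tf.reduce_sum(t * tf.math.log(t / p), axis=-1)
print(tf.reduce_mean(per_sample).numpy())  # ~0.458, matching the first example
```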
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/LogCoshError.md b/site/en/api_docs/python/tf/keras/metrics/LogCoshError.md new file mode 100644 index 00000000000..80a7a83cf66 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/LogCoshError.md @@ -0,0 +1,217 @@ +description: Computes the logarithm of the hyperbolic cosine of the prediction error. + +
+ + + + + + + +
+ +# tf.keras.metrics.LogCoshError + + + + + + + + + +Computes the logarithm of the hyperbolic cosine of the prediction error. + + + + + + + + + +`logcosh = log((exp(x) + exp(-x))/2)`, where x is the error (y_pred - y_true) + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.LogCoshError() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) +>>> m.result().numpy() +0.10844523 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +0.21689045 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.LogCoshError()]) +``` + + + + + + + + + + + + + + + + + + + +
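A direct evaluation of the formula on the example inputs, for illustration only:

```python
import tensorflow as tf

y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[1., 1.], [0., 0.]])

# log(cosh(error)), averaged over the last axis and then over the batch.
x = y_pred - y_true
per_sample = tf.reduce_mean(tf.math.log((tf.exp(x) + tf.exp(-x)) / 2.), axis=-1)
print(tf.reduce_mean(per_sample).numpy())  # ~0.1084, matching the first example
```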
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/Mean.md b/site/en/api_docs/python/tf/keras/metrics/Mean.md new file mode 100644 index 00000000000..a92e0c79d06 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/Mean.md @@ -0,0 +1,194 @@ +description: Computes the (weighted) mean of the given values. + +
+ + + + + + + +
+ +# tf.keras.metrics.Mean + + + + + + + + + +Computes the (weighted) mean of the given values. + + + + + + + + + +For example, if values is [1, 3, 5, 7] then the mean is 4. +If the weights were specified as [1, 1, 0, 0] then the mean would be 2. + +This metric creates two variables, `total` and `count` that are used to +compute the average of `values`. This average is ultimately returned as `mean` +which is an idempotent operation that simply divides `total` by `count`. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.Mean() +>>> _ = m.update_state([1, 3, 5, 7]) +>>> m.result().numpy() +4.0 +>>> m.reset_states() +>>> _ = m.update_state([1, 3, 5, 7], sample_weight=[1, 1, 0, 0]) +>>> m.result().numpy() +2.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.add_metric(tf.keras.metrics.Mean(name='mean_1')(outputs)) +model.compile('sgd', loss='mse') +``` + + + + + + + + + + + + + +
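Because `total` and `count` persist across calls, the result after several batches is the mean of everything seen since the last reset, not the mean of the most recent batch. A small sketch with arbitrary values:

```python
import tensorflow as tf

m = tf.keras.metrics.Mean()
for batch in ([1., 2.], [3., 4., 5.]):
  m.update_state(batch)
print(m.result().numpy())  # 3.0, the mean of all five values seen so far

m.reset_states()  # start a fresh accumulation, e.g. at the next epoch
```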
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates statistics for computing the reduction metric. + +For example, if `values` is [1, 3, 5, 7] and reduction=SUM_OVER_BATCH_SIZE, +then the value of `result()` is 4. If the `sample_weight` is specified as +[1, 1, 0, 0] then value of `result()` would be 2. + + + + + + + + + + + + + +
Args
+`values` + +Per-example value. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/MeanAbsoluteError.md b/site/en/api_docs/python/tf/keras/metrics/MeanAbsoluteError.md new file mode 100644 index 00000000000..08f23723872 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/MeanAbsoluteError.md @@ -0,0 +1,217 @@ +description: Computes the mean absolute error between the labels and predictions. + +
+ + + + + + + +
+ +# tf.keras.metrics.MeanAbsoluteError + + + + + + + + + +Computes the mean absolute error between the labels and predictions. + + + + + + + + + + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.MeanAbsoluteError() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) +>>> m.result().numpy() +0.25 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +0.5 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', loss='mse', metrics=[tf.keras.metrics.MeanAbsoluteError()]) +``` + + + + + + + + + + + + + + + + + + + +
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/MeanAbsolutePercentageError.md b/site/en/api_docs/python/tf/keras/metrics/MeanAbsolutePercentageError.md new file mode 100644 index 00000000000..d51b54bbe64 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/MeanAbsolutePercentageError.md @@ -0,0 +1,219 @@ +description: Computes the mean absolute percentage error between y_true and y_pred. + +
+ + + + + + + +
+ +# tf.keras.metrics.MeanAbsolutePercentageError + + + + + + + + + +Computes the mean absolute percentage error between `y_true` and `y_pred`. + + + + + + + + + + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.MeanAbsolutePercentageError() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) +>>> m.result().numpy() +250000000.0 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +500000000.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.MeanAbsolutePercentageError()]) +``` + + + + + + + + + + + + + + + + + + + +
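The very large values in the example above come from the zeros in `y_true`: the percentage error divides by the true values (internally the denominator is clipped to a tiny epsilon, an implementation detail). With nonzero targets the result is the familiar percentage, as in this sketch with made-up numbers:

```python
import tensorflow as tf

# MAPE = 100 * mean(|y_true - y_pred| / |y_true|), so zero targets inflate the result.
m = tf.keras.metrics.MeanAbsolutePercentageError()
m.update_state([[1., 2.], [4., 5.]], [[1.1, 1.8], [5., 4.]])
print(m.result().numpy())  # ~16.25
```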
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/MeanIoU.md b/site/en/api_docs/python/tf/keras/metrics/MeanIoU.md new file mode 100644 index 00000000000..be00c76e8ac --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/MeanIoU.md @@ -0,0 +1,220 @@ +description: Computes the mean Intersection-Over-Union metric. + +
+ + + + + + + +
+ +# tf.keras.metrics.MeanIoU + + + + + + + + + +Computes the mean Intersection-Over-Union metric. + +Inherits From: [`Metric`](../../../tf/keras/metrics/Metric.md) + + + + + + + + + +Mean Intersection-Over-Union is a common evaluation metric for semantic image +segmentation, which first computes the IOU for each semantic class and then +computes the average over classes. IOU is defined as follows: + IOU = true_positive / (true_positive + false_positive + false_negative). +The predictions are accumulated in a confusion matrix, weighted by +`sample_weight` and the metric is then calculated from it. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> # cm = [[1, 1], +>>> # [1, 1]] +>>> # sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1] +>>> # iou = true_positives / (sum_row + sum_col - true_positives)) +>>> # result = (1 / (2 + 2 - 1) + 1 / (2 + 2 - 1)) / 2 = 0.33 +>>> m = tf.keras.metrics.MeanIoU(num_classes=2) +>>> _ = m.update_state([0, 0, 1, 1], [0, 1, 0, 1]) +>>> m.result().numpy() +0.33333334 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 0, 1, 1], [0, 1, 0, 1], +... sample_weight=[0.3, 0.3, 0.3, 0.1]) +>>> m.result().numpy() +0.23809525 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.MeanIoU(num_classes=2)]) +``` + + + + + + + + + + + + + + + + +
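Because the inputs go straight into a confusion matrix, `y_pred` is expected to contain class indices rather than per-class scores; with a softmax or logits output you would typically take an argmax first. A sketch with made-up logits:

```python
import tensorflow as tf

m = tf.keras.metrics.MeanIoU(num_classes=2)

labels = tf.constant([0, 0, 1, 1])
logits = tf.constant([[2.0, 0.1], [0.3, 1.5], [0.2, 2.2], [1.7, 0.4]])

# Convert per-class scores to class indices before updating the metric.
m.update_state(labels, tf.argmax(logits, axis=-1))
print(m.result().numpy())  # ~0.33 for these made-up logits
```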
+`num_classes` + +The possible number of labels the prediction task can have. +This value must be provided, since a confusion matrix of dimension = +[num_classes, num_classes] will be allocated. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Compute the mean intersection-over-union via the confusion matrix. + + +

update_state

+ +View source + + + +Accumulates the confusion matrix statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/MeanRelativeError.md b/site/en/api_docs/python/tf/keras/metrics/MeanRelativeError.md new file mode 100644 index 00000000000..598e597d7ac --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/MeanRelativeError.md @@ -0,0 +1,211 @@ +description: Computes the mean relative error by normalizing with the given values. + +
+ + + + + + + +
+ +# tf.keras.metrics.MeanRelativeError + + + + + + + + + +Computes the mean relative error by normalizing with the given values. + +Inherits From: [`Mean`](../../../tf/keras/metrics/Mean.md) + + + + + + + + + +This metric creates two local variables, `total` and `count` that are used to +compute the mean relative error. This is weighted by `sample_weight`, and +it is ultimately returned as `mean_relative_error`: +an idempotent operation that simply divides `total` by `count`. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.MeanRelativeError(normalizer=[1, 3, 2, 3]) +>>> _ = m.update_state([1, 3, 2, 3], [2, 4, 6, 8]) +``` + +``` +>>> # metric = mean(|y_pred - y_true| / normalizer) +>>> # = mean([1, 1, 4, 5] / [1, 3, 2, 3]) = mean([1, 1/3, 2, 5/3]) +>>> # = 5/4 = 1.25 +>>> m.result().numpy() +1.25 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.MeanRelativeError(normalizer=[1, 3])]) +``` + + + + + + + + + + + + + + + + +
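A common choice is to normalize by the true values themselves, which turns the metric into a running mean of |y_pred - y_true| / y_true; the numbers below are only illustrative.

```python
import tensorflow as tf

y_true = tf.constant([2., 4., 8.])
y_pred = tf.constant([2.2, 3., 9.])

# The normalizer is fixed at construction time and must match the prediction shape.
m = tf.keras.metrics.MeanRelativeError(normalizer=y_true)
m.update_state(y_true, y_pred)
print(m.result().numpy())  # mean of [0.1, 0.25, 0.125] ~ 0.158
```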
+`normalizer` + +The normalizer values with same shape as predictions. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/MeanSquaredError.md b/site/en/api_docs/python/tf/keras/metrics/MeanSquaredError.md new file mode 100644 index 00000000000..b05ef4af0f6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/MeanSquaredError.md @@ -0,0 +1,217 @@ +description: Computes the mean squared error between y_true and y_pred. + +
+ + + + + + + +
+ +# tf.keras.metrics.MeanSquaredError + + + + + + + + + +Computes the mean squared error between `y_true` and `y_pred`. + + + + + + + + + + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.MeanSquaredError() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) +>>> m.result().numpy() +0.25 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +0.5 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', loss='mse', metrics=[tf.keras.metrics.MeanSquaredError()]) +``` + + + + + + + + + + + + + + + + + + + +
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/MeanSquaredLogarithmicError.md b/site/en/api_docs/python/tf/keras/metrics/MeanSquaredLogarithmicError.md new file mode 100644 index 00000000000..2576f1cda7b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/MeanSquaredLogarithmicError.md @@ -0,0 +1,219 @@ +description: Computes the mean squared logarithmic error between y_true and y_pred. + +
+ + + + + + + +
+ +# tf.keras.metrics.MeanSquaredLogarithmicError + + + + + + + + + +Computes the mean squared logarithmic error between `y_true` and `y_pred`. + + + + + + + + + + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.MeanSquaredLogarithmicError() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) +>>> m.result().numpy() +0.12011322 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +0.24022643 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.MeanSquaredLogarithmicError()]) +``` + + + + + + + + + + + + + + + + + + + +
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/MeanTensor.md b/site/en/api_docs/python/tf/keras/metrics/MeanTensor.md new file mode 100644 index 00000000000..b8be9b201b2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/MeanTensor.md @@ -0,0 +1,209 @@ +description: Computes the element-wise (weighted) mean of the given tensors. + +
+ + + + + + + +
+ +# tf.keras.metrics.MeanTensor + + + + + + + + + +Computes the element-wise (weighted) mean of the given tensors. + +Inherits From: [`Metric`](../../../tf/keras/metrics/Metric.md) + + + + + + + + + +`MeanTensor` returns a tensor with the same shape of the input tensors. The +mean value is updated by keeping local variables `total` and `count`. The +`total` tracks the sum of the weighted values, and `count` stores the sum of +the weighted counts. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.MeanTensor() +>>> _ = m.update_state([0, 1, 2, 3]) +>>> _ = m.update_state([4, 5, 6, 7]) +>>> m.result().numpy() +array([2., 3., 4., 5.], dtype=float32) +``` + +``` +>>> _ = m.update_state([12, 10, 8, 6], sample_weight= [0, 0.2, 0.5, 1]) +>>> m.result().numpy() +array([2. , 3.6363635, 4.8 , 5.3333335], dtype=float32) +``` + + + + + + + + + + + + + +
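Unlike `Mean`, which reduces everything to a single scalar, `MeanTensor` keeps one running mean per element position, so every update is expected to have the same shape as the first one. A minimal sketch:

```python
import tensorflow as tf

m = tf.keras.metrics.MeanTensor()
m.update_state([1., 2., 3.])
m.update_state([3., 2., 1.])
print(m.result().numpy())  # [2. 2. 2.], an element-wise mean rather than a scalar
```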
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + + + + + + + + + + + + + + + +
+`count` + + +
+`total` + + +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates statistics for computing the element-wise mean. + + + + + + + + + + + + + + +
Args
+`values` + +Per-example value. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/Metric.md b/site/en/api_docs/python/tf/keras/metrics/Metric.md new file mode 100644 index 00000000000..cb608a67704 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/Metric.md @@ -0,0 +1,214 @@ +description: Encapsulates metric logic and state. + +
+ + + + + + + + +
+ +# tf.keras.metrics.Metric + + + + + + + + + +Encapsulates metric logic and state. + +Inherits From: [`Layer`](../../../tf/keras/layers/Layer.md) + + + + + + + + + + +#### Usage: + + + +```python +m = SomeMetric(...) +for input in ...: + m.update_state(input) +print('Final result: ', m.result().numpy()) +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Sequential() +model.add(tf.keras.layers.Dense(64, activation='relu')) +model.add(tf.keras.layers.Dense(64, activation='relu')) +model.add(tf.keras.layers.Dense(10, activation='softmax')) + +model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01), + loss=tf.keras.losses.CategoricalCrossentropy(), + metrics=[tf.keras.metrics.CategoricalAccuracy()]) + +data = np.random.random((1000, 32)) +labels = np.random.random((1000, 10)) + +dataset = tf.data.Dataset.from_tensor_slices((data, labels)) +dataset = dataset.batch(32) + +model.fit(dataset, epochs=10) +``` + +To be implemented by subclasses: +* `__init__()`: All state variables should be created in this method by + calling `self.add_weight()` like: `self.var = self.add_weight(...)` +* `update_state()`: Has all updates to the state variables like: + `self.var.assign_add(...)`. +* `result()`: Computes and returns a value for the metric + from the state variables. + +Example subclass implementation: + +```python +class BinaryTruePositives(tf.keras.metrics.Metric): + + def __init__(self, name='binary_true_positives', **kwargs): + super(BinaryTruePositives, self).__init__(name=name, **kwargs) + self.true_positives = self.add_weight(name='tp', initializer='zeros') + + def update_state(self, y_true, y_pred, sample_weight=None): + y_true = tf.cast(y_true, tf.bool) + y_pred = tf.cast(y_pred, tf.bool) + + values = tf.logical_and(tf.equal(y_true, True), tf.equal(y_pred, True)) + values = tf.cast(values, self.dtype) + if sample_weight is not None: + sample_weight = tf.cast(sample_weight, self.dtype) + sample_weight = tf.broadcast_to(sample_weight, tf.shape(values)) + values = tf.multiply(values, sample_weight) + self.true_positives.assign_add(tf.reduce_sum(values)) + + def result(self): + return self.true_positives +``` + +## Methods + +
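Assuming the `BinaryTruePositives` class from the example above has been defined, it behaves like any built-in metric: call `update_state` to accumulate, `result` to read the current value, and pass an instance to `model.compile(metrics=[...])` if desired. A quick standalone check with arbitrary inputs:

```python
# Uses the BinaryTruePositives subclass defined in the example above.
m = BinaryTruePositives()
m.update_state([0, 1, 1, 1], [0, 1, 0, 1])
print(m.result().numpy())  # 2.0 true positives accumulated so far
```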

add_weight

+ +View source + + + +Adds state variable. Only for use by subclasses. + + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates statistics for the metric. + +Note: This function is executed as a graph function in graph mode. +This means: + a) Operations on the same resource are executed in textual order. + This should make it easier to do things like add the updated + value of a variable to another, for example. + b) You don't need to worry about collecting the update ops to execute. + All update ops added to the graph by this function will be executed. + As a result, code should generally work the same way with graph or + eager execution. + + + + + + + + + + + + + +
Args
+`*args` + + +
+`**kwargs` + +A mini-batch of inputs to the Metric. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/Poisson.md b/site/en/api_docs/python/tf/keras/metrics/Poisson.md new file mode 100644 index 00000000000..1ef61dc0ace --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/Poisson.md @@ -0,0 +1,217 @@ +description: Computes the Poisson metric between y_true and y_pred. + +
+ + + + + + + +
+ +# tf.keras.metrics.Poisson + + + + + + + + + +Computes the Poisson metric between `y_true` and `y_pred`. + + + + + + + + + +`metric = y_pred - y_true * log(y_pred)` + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.Poisson() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) +>>> m.result().numpy() +0.49999997 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +0.99999994 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.Poisson()]) +``` + + + + + + + + + + + + + + + + + + + +
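Evaluating the formula by hand on the example inputs (with the usual small epsilon inside the log to avoid log(0)) reproduces the reported value; this is only an illustration.

```python
import tensorflow as tf

y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[1., 1.], [0., 0.]])

# y_pred - y_true * log(y_pred), averaged over the last axis and then over the batch.
eps = tf.keras.backend.epsilon()
per_sample = tf.reduce_mean(y_pred - y_true * tf.math.log(y_pred + eps), axis=-1)
print(tf.reduce_mean(per_sample).numpy())  # ~0.5, matching the first example
```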
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/Precision.md b/site/en/api_docs/python/tf/keras/metrics/Precision.md new file mode 100644 index 00000000000..f066df25ebe --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/Precision.md @@ -0,0 +1,241 @@ +description: Computes the precision of the predictions with respect to the labels. + +
+ + + + + + + +
+ +# tf.keras.metrics.Precision + + + + + + + + + +Computes the precision of the predictions with respect to the labels. + +Inherits From: [`Metric`](../../../tf/keras/metrics/Metric.md) + + + + + + + + + +The metric creates two local variables, `true_positives` and `false_positives` +that are used to compute the precision. This value is ultimately returned as +`precision`, an idempotent operation that simply divides `true_positives` +by the sum of `true_positives` and `false_positives`. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +If `top_k` is set, we'll calculate precision as how often on average a class +among the top-k classes with the highest predicted values of a batch entry is +correct and can be found in the label for that entry. + +If `class_id` is specified, we calculate precision by considering only the +entries in the batch for which `class_id` is above the threshold and/or in the +top-k highest predictions, and computing the fraction of them for which +`class_id` is indeed a correct label. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.Precision() +>>> _ = m.update_state([0, 1, 1, 1], [1, 0, 1, 1]) +>>> m.result().numpy() +0.6666667 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.Precision()]) +``` + + + + + + + + + + + + + + + + + + + + + + +
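To illustrate `top_k`, the sketch below scores each example against one-hot labels and counts only the two highest-scoring classes per example as positive predictions; the scores are made up for illustration.

```python
import tensorflow as tf

m = tf.keras.metrics.Precision(top_k=2)
y_true = tf.constant([[0, 1, 0, 0], [1, 0, 0, 0]])
y_pred = tf.constant([[0.9, 0.8, 0.6, 0.1], [0.7, 0.6, 0.2, 0.1]])

# Class 2 in the first row scores 0.6 but is not in the top 2, so it is not counted.
m.update_state(y_true, y_pred)
print(m.result().numpy())  # 0.5: two of the four top-2 predictions match the labels
```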
+`thresholds` + +(Optional) A float value or a python list/tuple of float +threshold values in [0, 1]. A threshold is compared with prediction +values to determine the truth value of predictions (i.e., above the +threshold is `true`, below is `false`). One metric value is generated +for each threshold value. If neither thresholds nor top_k are set, the +default is to calculate precision with `thresholds=0.5`. +
+`top_k` + +(Optional) Unset by default. An int value specifying the top-k +predictions to consider when calculating precision. +
+`class_id` + +(Optional) Integer class ID for which we want binary metrics. +This must be in the half-open interval `[0, num_classes)`, where +`num_classes` is the last dimension of predictions. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates true positive and false positive statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values, with the same dimensions as `y_pred`. +Will be cast to `bool`. +
+`y_pred` + +The predicted values. Each element must be in the range `[0, 1]`. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/PrecisionAtRecall.md b/site/en/api_docs/python/tf/keras/metrics/PrecisionAtRecall.md new file mode 100644 index 00000000000..d312682b7d1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/PrecisionAtRecall.md @@ -0,0 +1,219 @@ +description: Computes the precision at a given recall. + +
+ + + + + + + +
+ +# tf.keras.metrics.PrecisionAtRecall + + + + + + + + + +Computes the precision at a given recall. + + + + + + + + + +This metric creates four local variables, `true_positives`, `true_negatives`, +`false_positives` and `false_negatives` that are used to compute the +precision at the given recall. The threshold for the given recall +value is computed and used to evaluate the corresponding precision. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.PrecisionAtRecall(0.8, num_thresholds=1) +>>> _ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9]) +>>> m.result().numpy() +1.0 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9], +... sample_weight=[1, 0, 0, 1]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.PrecisionAtRecall(recall=0.8)]) +``` + + + + + + + + + + + + + + + + + + + +
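With the default `num_thresholds`, the metric evaluates a grid of 200 candidate thresholds, picks the one whose recall is closest to the requested value, and reports the precision at that threshold. A sketch with made-up scores:

```python
import tensorflow as tf

m = tf.keras.metrics.PrecisionAtRecall(recall=0.5)
m.update_state([0, 0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8, 0.3])
print(m.result().numpy())  # precision at the threshold that best matches recall=0.5
```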
+`recall` + +A scalar value in range `[0, 1]`. +
+`num_thresholds` + +(Optional) Defaults to 200. The number of thresholds to +use for matching the given recall. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates confusion matrix statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/Recall.md b/site/en/api_docs/python/tf/keras/metrics/Recall.md new file mode 100644 index 00000000000..72c6858865a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/Recall.md @@ -0,0 +1,240 @@ +description: Computes the recall of the predictions with respect to the labels. + +
+ + + + + + + +
+ +# tf.keras.metrics.Recall + + + + + + + + + +Computes the recall of the predictions with respect to the labels. + +Inherits From: [`Metric`](../../../tf/keras/metrics/Metric.md) + + + + + + + + + +This metric creates two local variables, `true_positives` and +`false_negatives`, that are used to compute the recall. This value is +ultimately returned as `recall`, an idempotent operation that simply divides +`true_positives` by the sum of `true_positives` and `false_negatives`. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +If `top_k` is set, recall will be computed as how often on average a class +among the labels of a batch entry is in the top-k predictions. + +If `class_id` is specified, we calculate recall by considering only the +entries in the batch for which `class_id` is in the label, and computing the +fraction of them for which `class_id` is above the threshold and/or in the +top-k predictions. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.Recall() +>>> _ = m.update_state([0, 1, 1, 1], [1, 0, 1, 1]) +>>> m.result().numpy() +0.6666667 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.Recall()]) +``` + + + + + + + + + + + + + + + + + + + + + + +
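To illustrate `class_id`, the sketch below measures recall for class 1 only: of the rows whose label contains class 1, the fraction whose class-1 score clears the default 0.5 threshold. The numbers are made up.

```python
import tensorflow as tf

m = tf.keras.metrics.Recall(class_id=1)
y_true = tf.constant([[0, 1, 0], [0, 1, 0], [1, 0, 0]])
y_pred = tf.constant([[0.2, 0.7, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]])

m.update_state(y_true, y_pred)
print(m.result().numpy())  # 0.5: class 1 appears twice and is recovered once
```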
+`thresholds` + +(Optional) A float value or a python list/tuple of float +threshold values in [0, 1]. A threshold is compared with prediction +values to determine the truth value of predictions (i.e., above the +threshold is `true`, below is `false`). One metric value is generated +for each threshold value. If neither thresholds nor top_k are set, the +default is to calculate recall with `thresholds=0.5`. +
+`top_k` + +(Optional) Unset by default. An int value specifying the top-k +predictions to consider when calculating recall. +
+`class_id` + +(Optional) Integer class ID for which we want binary metrics. +This must be in the half-open interval `[0, num_classes)`, where +`num_classes` is the last dimension of predictions. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates true positive and false negative statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values, with the same dimensions as `y_pred`. +Will be cast to `bool`. +
+`y_pred` + +The predicted values. Each element must be in the range `[0, 1]`. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/RecallAtPrecision.md b/site/en/api_docs/python/tf/keras/metrics/RecallAtPrecision.md new file mode 100644 index 00000000000..ba98a764676 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/RecallAtPrecision.md @@ -0,0 +1,222 @@ +description: Computes the maximally achievable recall at a required precision. + +
+ + + + + + + +
+ +# tf.keras.metrics.RecallAtPrecision + + + + + + + + + +Computes the maximally achievable recall at a required precision. + + + + + + + + + +For a given score-label-distribution the required precision might not +be achievable, in this case 0.0 is returned as recall. + +This metric creates four local variables, `true_positives`, `true_negatives`, +`false_positives` and `false_negatives` that are used to compute the +recall at the given precision. The threshold for the given precision +value is computed and used to evaluate the corresponding recall. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.RecallAtPrecision(0.8, num_thresholds=1) +>>> _ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9]) +>>> m.result().numpy() +0.5 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9], +... sample_weight=[1, 0, 0, 1]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.RecallAtPrecision(precision=0.8)]) +``` + + + + + + + + + + + + + + + + + + + +
+`precision` + +A scalar value in range `[0, 1]`. +
+`num_thresholds` + +(Optional) Defaults to 200. The number of thresholds to +use for matching the given precision. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates confusion matrix statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/RootMeanSquaredError.md b/site/en/api_docs/python/tf/keras/metrics/RootMeanSquaredError.md new file mode 100644 index 00000000000..5988df6132b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/RootMeanSquaredError.md @@ -0,0 +1,199 @@ +description: Computes root mean squared error metric between y_true and y_pred. + +
+ + + + + + + +
+ +# tf.keras.metrics.RootMeanSquaredError + + + + + + + + + +Computes root mean squared error metric between `y_true` and `y_pred`. + +Inherits From: [`Mean`](../../../tf/keras/metrics/Mean.md) + + + + + + + + + + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.RootMeanSquaredError() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) +>>> m.result().numpy() +0.5 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +0.70710677 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.RootMeanSquaredError()]) +``` + + + + + + + + + + + + + +
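The running value tracks the square root of the accumulated mean squared error, not an average of per-batch RMSE values, which matters when results are combined across batches. A small sketch with arbitrary numbers:

```python
import tensorflow as tf

m = tf.keras.metrics.RootMeanSquaredError()
m.update_state([0., 0.], [1., 3.])  # squared errors 1 and 9
m.update_state([0., 0.], [5., 5.])  # squared errors 25 and 25
print(m.result().numpy())  # sqrt((1 + 9 + 25 + 25) / 4) = sqrt(15) ~ 3.873
```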
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates root mean squared error statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/SensitivityAtSpecificity.md b/site/en/api_docs/python/tf/keras/metrics/SensitivityAtSpecificity.md new file mode 100644 index 00000000000..7075962af60 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/SensitivityAtSpecificity.md @@ -0,0 +1,227 @@ +description: Computes the sensitivity at a given specificity. + +
+ + + + + + + +
+ +# tf.keras.metrics.SensitivityAtSpecificity + + + + + + + + + +Computes the sensitivity at a given specificity. + + + + + + + + + +`Sensitivity` measures the proportion of actual positives that are correctly +identified as such (tp / (tp + fn)). +`Specificity` measures the proportion of actual negatives that are correctly +identified as such (tn / (tn + fp)). + +This metric creates four local variables, `true_positives`, `true_negatives`, +`false_positives` and `false_negatives` that are used to compute the +sensitivity at the given specificity. The threshold for the given specificity +value is computed and used to evaluate the corresponding sensitivity. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +For additional information about specificity and sensitivity, see the +following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.SensitivityAtSpecificity(0.4, num_thresholds=1) +>>> _ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9]) +>>> m.result().numpy() +0.5 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9], +... sample_weight=[1, 0, 0, 1]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.SensitivityAtSpecificity(specificity=0.5)]) +``` + + + + + + + + + + + + + + + + + + + +
+`specificity` + +A scalar value in range `[0, 1]`. +
+`num_thresholds` + +(Optional) Defaults to 200. The number of thresholds to +use for matching the given specificity. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates confusion matrix statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/SparseCategoricalAccuracy.md b/site/en/api_docs/python/tf/keras/metrics/SparseCategoricalAccuracy.md new file mode 100644 index 00000000000..35cd02d7184 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/SparseCategoricalAccuracy.md @@ -0,0 +1,229 @@ +description: Calculates how often predictions match integer labels. +
+ + + + + + + +
+ +# tf.keras.metrics.SparseCategoricalAccuracy + + + + + + + + + +Calculates how often predictions matches integer labels. + + + + + + + + + +You can provide logits of classes as `y_pred`, since argmax of +logits and probabilities are same. + +This metric creates two local variables, `total` and `count` that are used to +compute the frequency with which `y_pred` matches `y_true`. This frequency is +ultimately returned as `sparse categorical accuracy`: an idempotent operation +that simply divides `total` by `count`. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.SparseCategoricalAccuracy() +>>> _ = m.update_state([[2], [1]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]) +>>> m.result().numpy() +0.5 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[2], [1]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]], +... sample_weight=[0.7, 0.3]) +>>> m.result().numpy() +0.3 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]) +``` + + + + + + + + + + + + + + + + + + + +
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/SparseCategoricalCrossentropy.md b/site/en/api_docs/python/tf/keras/metrics/SparseCategoricalCrossentropy.md new file mode 100644 index 00000000000..736e07ea93c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/SparseCategoricalCrossentropy.md @@ -0,0 +1,281 @@ +description: Computes the crossentropy metric between the labels and predictions. + +
+ + + + + + + +
+ +# tf.keras.metrics.SparseCategoricalCrossentropy + + + + + + + + + +Computes the crossentropy metric between the labels and predictions. + + + + + + + + + +Use this crossentropy metric when there are two or more label classes. +We expect labels to be provided as integers. If you want to provide labels +using `one-hot` representation, please use `CategoricalCrossentropy` metric. +There should be `# classes` floating point values per feature for `y_pred` +and a single floating point value per feature for `y_true`. + +In the snippet below, there is a single floating point value per example for +`y_true` and `# classes` floating pointing values per example for `y_pred`. +The shape of `y_true` is `[batch_size]` and the shape of `y_pred` is +`[batch_size, num_classes]`. + +#### Usage: + + + +``` +>>> # y_true = one_hot(y_true) = [[0, 1, 0], [0, 0, 1]] +>>> # logits = log(y_pred) +>>> # softmax = exp(logits) / sum(exp(logits), axis=-1) +>>> # softmax = [[0.05, 0.95, EPSILON], [0.1, 0.8, 0.1]] +>>> # xent = -sum(y * log(softmax), 1) +>>> # log(softmax) = [[-2.9957, -0.0513, -16.1181], +>>> # [-2.3026, -0.2231, -2.3026]] +>>> # y_true * log(softmax) = [[0, -0.0513, 0], [0, 0, -2.3026]] +>>> # xent = [0.0513, 2.3026] +>>> # Reduced xent = (0.0513 + 2.3026) / 2 +>>> m = tf.keras.metrics.SparseCategoricalCrossentropy() +>>> _ = m.update_state([1, 2], +... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]) +>>> m.result().numpy() +1.1769392 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([1, 2], +... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]], +... sample_weight=tf.constant([0.3, 0.7])) +>>> m.result().numpy() +1.6271976 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.SparseCategoricalCrossentropy()]) +``` + + + + + + + + + + + + + + + + + + + +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`from_logits` + +(Optional) Whether `y_pred` is expected to be a logits tensor. +By default, we assume that `y_pred` encodes a probability distribution. +
+`axis` + +(Optional) Defaults to -1. The dimension along which the metric is +computed. +
+ + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/SparseTopKCategoricalAccuracy.md b/site/en/api_docs/python/tf/keras/metrics/SparseTopKCategoricalAccuracy.md new file mode 100644 index 00000000000..bc01cb2a1bb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/SparseTopKCategoricalAccuracy.md @@ -0,0 +1,211 @@ +description: Computes how often integer targets are in the top K predictions. + +
+ + + + + + + +
+ +# tf.keras.metrics.SparseTopKCategoricalAccuracy + + + + + + + + + +Computes how often integer targets are in the top `K` predictions. + + + + + + + + + + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=1) +>>> _ = m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]) +>>> m.result().numpy() +0.5 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]], +... sample_weight=[0.7, 0.3]) +>>> m.result().numpy() +0.3 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy()]) +``` + + + + + + + + + + + + + + + + +
+`k` + +(Optional) Number of top elements to look at for computing accuracy. +Defaults to 5. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/SpecificityAtSensitivity.md b/site/en/api_docs/python/tf/keras/metrics/SpecificityAtSensitivity.md new file mode 100644 index 00000000000..f8ea9ac34ae --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/SpecificityAtSensitivity.md @@ -0,0 +1,227 @@ +description: Computes the specificity at a given sensitivity. + +
+ + + + + + + +
+ +# tf.keras.metrics.SpecificityAtSensitivity + + + + + + + + + +Computes the specificity at a given sensitivity. + + + + + + + + + +`Sensitivity` measures the proportion of actual positives that are correctly +identified as such (tp / (tp + fn)). +`Specificity` measures the proportion of actual negatives that are correctly +identified as such (tn / (tn + fp)). + +This metric creates four local variables, `true_positives`, `true_negatives`, +`false_positives` and `false_negatives` that are used to compute the +specificity at the given sensitivity. The threshold for the given sensitivity +value is computed and used to evaluate the corresponding specificity. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +For additional information about specificity and sensitivity, see the +following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.SpecificityAtSensitivity(0.8, num_thresholds=1) +>>> _ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9]) +>>> m.result().numpy() +1.0 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9], +... sample_weight=[1, 0, 0, 1]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.SpecificityAtSensitivity()]) +``` + + + + + + + + + + + + + + + + + + + +
+`sensitivity` + +A scalar value in range `[0, 1]`. +
+`num_thresholds` + +(Optional) Defaults to 200. The number of thresholds to +use for matching the given sensitivity. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates confusion matrix statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
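Note: the `model.compile` snippet above constructs `SpecificityAtSensitivity()` with no arguments, but the constructor requires a target `sensitivity`. A minimal illustrative sketch follows; the tiny model and the value 0.5 are arbitrary choices and not part of the generated docstring.

```python
import tensorflow as tf

# Hypothetical one-output model; the point is only that the metric needs a
# target sensitivity value (0.5 here is an arbitrary choice).
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(inputs)
model = tf.keras.Model(inputs, outputs)
model.compile(
    'sgd',
    loss='mse',
    metrics=[tf.keras.metrics.SpecificityAtSensitivity(sensitivity=0.5)])
```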
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/SquaredHinge.md b/site/en/api_docs/python/tf/keras/metrics/SquaredHinge.md new file mode 100644 index 00000000000..7a88f12b6d1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/SquaredHinge.md @@ -0,0 +1,221 @@ +description: Computes the squared hinge metric between y_true and y_pred. + +
+ + + + + + + +
+ +# tf.keras.metrics.SquaredHinge + + + + + + + + + +Computes the squared hinge metric between `y_true` and `y_pred`. + + + + + + + + + +`y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are +provided we will convert them to -1 or 1. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.SquaredHinge() +>>> _ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) +>>> m.result().numpy() +1.86 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], +... sample_weight=[1, 0]) +>>> m.result().numpy() +1.46 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile( + 'sgd', + loss='mse', + metrics=[tf.keras.metrics.SquaredHinge()]) +``` + + + + + + + + + + + + + + + + + + + +
+`fn` + +The metric function to wrap, with signature +`fn(y_true, y_pred, **kwargs)`. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+`**kwargs` + +The keyword arguments that are passed on to `fn`. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/Sum.md b/site/en/api_docs/python/tf/keras/metrics/Sum.md new file mode 100644 index 00000000000..b04f23e2c3e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/Sum.md @@ -0,0 +1,189 @@ +description: Computes the (weighted) sum of the given values. + +
+ + + + + + + +
+ +# tf.keras.metrics.Sum + + + + + + + + + +Computes the (weighted) sum of the given values. + + + + + + + + + +For example, if values is [1, 3, 5, 7] then the sum is 16. +If the weights were specified as [1, 1, 0, 0] then the sum would be 4. + +This metric creates one variable, `total`, that is used to compute the sum of +`values`. This is ultimately returned as `sum`. + +If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0 +to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.Sum() +>>> _ = m.update_state([1, 3, 5, 7]) +>>> m.result().numpy() +16.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.add_metric(tf.keras.metrics.Sum(name='sum_1')(outputs)) +model.compile('sgd', loss='mse') +``` + + + + + + + + + + + + + +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates statistics for computing the reduction metric. + +For example, if `values` is [1, 3, 5, 7] and reduction=SUM_OVER_BATCH_SIZE, +then the value of `result()` is 4. If the `sample_weight` is specified as +[1, 1, 0, 0] then value of `result()` would be 2. + + + + + + + + + + + + + +
Args
+`values` + +Per-example value. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. +
+ + + + + + + + + + + +
Returns
+Update op. +
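For the `Sum` metric itself (whose reduction is a plain sum), a short sketch of the weighted case from the class description, where weights `[1, 1, 0, 0]` reduce the total of `[1, 3, 5, 7]` to 4; the values are illustrative only.

```python
import tensorflow as tf

m = tf.keras.metrics.Sum()
# Only the first two values carry non-zero weight, so the running total is 1 + 3 = 4.
m.update_state([1, 3, 5, 7], sample_weight=[1, 1, 0, 0])
print(m.result().numpy())  # 4.0
```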
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/TopKCategoricalAccuracy.md b/site/en/api_docs/python/tf/keras/metrics/TopKCategoricalAccuracy.md new file mode 100644 index 00000000000..bac34bfde7d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/TopKCategoricalAccuracy.md @@ -0,0 +1,211 @@ +description: Computes how often targets are in the top K predictions. + +
+ + + + + + + +
+ +# tf.keras.metrics.TopKCategoricalAccuracy + + + + + + + + + +Computes how often targets are in the top `K` predictions. + + + + + + + + + + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.TopKCategoricalAccuracy(k=1) +>>> _ = m.update_state([[0, 0, 1], [0, 1, 0]], +... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]) +>>> m.result().numpy() +0.5 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([[0, 0, 1], [0, 1, 0]], +... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]], +... sample_weight=[0.7, 0.3]) +>>> m.result().numpy() +0.3 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', metrics=[tf.keras.metrics.TopKCategoricalAccuracy()]) +``` + + + + + + + + + + + + + + + + +
+`k` + +(Optional) Number of top elements to look at for computing accuracy. +Defaults to 5. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates metric statistics. + +`y_true` and `y_pred` should have the same shape. + + + + + + + + + + + + + + + + +
Args
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`sample_weight` + +Optional `sample_weight` acts as a +coefficient for the metric. If a scalar is provided, then the metric is +simply scaled by the given value. If `sample_weight` is a tensor of size +`[batch_size]`, then the metric for each sample of the batch is rescaled +by the corresponding element in the `sample_weight` vector. If the shape +of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted +to this shape), then each metric element of `y_pred` is scaled by the +corresponding value of `sample_weight`. (Note on `dN-1`: all metric +functions reduce by 1 dimension, usually the last axis (-1)). +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/TrueNegatives.md b/site/en/api_docs/python/tf/keras/metrics/TrueNegatives.md new file mode 100644 index 00000000000..fe7270f2442 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/TrueNegatives.md @@ -0,0 +1,210 @@ +description: Calculates the number of true negatives. + +
+ + + + + + + +
+ +# tf.keras.metrics.TrueNegatives + + + + + + + + + +Calculates the number of true negatives. + + + + + + + + + +If `sample_weight` is given, calculates the sum of the weights of +true negatives. This metric creates one local variable, `accumulator` +that is used to keep track of the number of true negatives. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.TrueNegatives() +>>> _ = m.update_state([0, 1, 0, 0], [1, 1, 0, 0]) +>>> m.result().numpy() +2.0 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 1, 0, 0], [1, 1, 0, 0], sample_weight=[0, 0, 1, 0]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.TrueNegatives()]) +``` + + + + + + + + + + + + + + + + +
+`thresholds` + +(Optional) Defaults to 0.5. A float value or a python +list/tuple of float threshold values in [0, 1]. A threshold is compared +with prediction values to determine the truth value of predictions +(i.e., above the threshold is `true`, below is `false`). One metric +value is generated for each threshold value. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates the given confusion matrix condition statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/TruePositives.md b/site/en/api_docs/python/tf/keras/metrics/TruePositives.md new file mode 100644 index 00000000000..e963f509a70 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/TruePositives.md @@ -0,0 +1,210 @@ +description: Calculates the number of true positives. + +
+ + + + + + + +
+ +# tf.keras.metrics.TruePositives + + + + + + + + + +Calculates the number of true positives. + + + + + + + + + +If `sample_weight` is given, calculates the sum of the weights of +true positives. This metric creates one local variable, `true_positives` +that is used to keep track of the number of true positives. + +If `sample_weight` is `None`, weights default to 1. +Use `sample_weight` of 0 to mask values. + +#### Usage: + + + +``` +>>> m = tf.keras.metrics.TruePositives() +>>> _ = m.update_state([0, 1, 1, 1], [1, 0, 1, 1]) +>>> m.result().numpy() +2.0 +``` + +``` +>>> m.reset_states() +>>> _ = m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0]) +>>> m.result().numpy() +1.0 +``` + +Usage with tf.keras API: + +```python +model = tf.keras.Model(inputs, outputs) +model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.TruePositives()]) +``` + + + + + + + + + + + + + + + + +
+`thresholds` + +(Optional) Defaults to 0.5. A float value or a python +list/tuple of float threshold values in [0, 1]. A threshold is compared +with prediction values to determine the truth value of predictions +(i.e., above the threshold is `true`, below is `false`). One metric +value is generated for each threshold value. +
+`name` + +(Optional) string name of the metric instance. +
+`dtype` + +(Optional) data type of the metric result. +
+ + + +## Methods + +

reset_states

+ +View source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, +when a metric is evaluated during training. + +

result

+ +View source + + + +Computes and returns the metric value tensor. + +Result computation is an idempotent operation that simply calculates the +metric value using the state variables. + +

update_state

+ +View source + + + +Accumulates the given confusion matrix condition statistics. + + + + + + + + + + + + + + + + + +
Args
+`y_true` + +The ground truth values. +
+`y_pred` + +The predicted values. +
+`sample_weight` + +Optional weighting of each example. Defaults to 1. Can be a +`Tensor` whose rank is either 0, or the same rank as `y_true`, and must +be broadcastable to `y_true`. +
+ + + + + + + + + + + +
Returns
+Update op. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/binary_accuracy.md b/site/en/api_docs/python/tf/keras/metrics/binary_accuracy.md new file mode 100644 index 00000000000..272952578c0 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/binary_accuracy.md @@ -0,0 +1,93 @@ +description: Calculates how often predictions matches binary labels. + +
+ + +
+ +# tf.keras.metrics.binary_accuracy + + + + + + + + + +Calculates how often predictions matches binary labels. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`y_true` + +Ground truth values. shape = `[batch_size, d0, .. dN]`. +
+`y_pred` + +The predicted values. shape = `[batch_size, d0, .. dN]`. +
+`threshold` + +(Optional) Float representing the threshold for deciding whether +prediction values are 1 or 0. +
+ + + + + + + + + + + +
+Binary accuracy values. shape = `[batch_size, d0, .. dN-1]` +
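An illustrative sketch of calling `tf.keras.metrics.binary_accuracy` directly, with made-up values and the default threshold of 0.5 (not part of the generated docstring):

```python
import tensorflow as tf

y_true = tf.constant([[1.], [1.], [0.], [0.]])
y_pred = tf.constant([[0.9], [0.4], [0.2], [0.6]])

# Predictions above 0.5 are treated as 1; accuracy is averaged over the last axis.
acc = tf.keras.metrics.binary_accuracy(y_true, y_pred)
print(acc.numpy())  # [1. 0. 1. 0.]
```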
+ diff --git a/site/en/api_docs/python/tf/keras/metrics/categorical_accuracy.md b/site/en/api_docs/python/tf/keras/metrics/categorical_accuracy.md new file mode 100644 index 00000000000..d9c70e5e121 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/categorical_accuracy.md @@ -0,0 +1,87 @@ +description: Calculates how often predictions matches one-hot labels. + +
+ + +
+ +# tf.keras.metrics.categorical_accuracy + + + + + + + + + +Calculates how often predictions matches one-hot labels. + + + + + + + + + +You can provide logits of classes as `y_pred`, since argmax of +logits and probabilities are same. + + + + + + + + + + + + + +
+`y_true` + +One-hot ground truth values. +
+`y_pred` + +The prediction values. +
+ + + + + + + + + + + +
+Categorical accuracy values. +
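An illustrative sketch of calling `tf.keras.metrics.categorical_accuracy` directly with made-up values (not part of the generated docstring):

```python
import tensorflow as tf

y_true = tf.constant([[0., 0., 1.], [0., 1., 0.]])
y_pred = tf.constant([[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])

# argmax(y_pred) is compared with argmax(y_true) for each example.
acc = tf.keras.metrics.categorical_accuracy(y_true, y_pred)
print(acc.numpy())  # [0. 1.]
```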
+ diff --git a/site/en/api_docs/python/tf/keras/metrics/deserialize.md b/site/en/api_docs/python/tf/keras/metrics/deserialize.md new file mode 100644 index 00000000000..5f13f5c4af9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/deserialize.md @@ -0,0 +1,45 @@ +
+ + +
+ +# tf.keras.metrics.deserialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/get.md b/site/en/api_docs/python/tf/keras/metrics/get.md new file mode 100644 index 00000000000..20f8132438a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/get.md @@ -0,0 +1,47 @@ +description: Return a metric given its identifer. + +
+ + +
+ +# tf.keras.metrics.get + + + + + + + + + +Return a metric given its identifer. + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/serialize.md b/site/en/api_docs/python/tf/keras/metrics/serialize.md new file mode 100644 index 00000000000..fc47657bf01 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/serialize.md @@ -0,0 +1,45 @@ +
+ + +
+ +# tf.keras.metrics.serialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/metrics/sparse_categorical_accuracy.md b/site/en/api_docs/python/tf/keras/metrics/sparse_categorical_accuracy.md new file mode 100644 index 00000000000..9683bde4439 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/sparse_categorical_accuracy.md @@ -0,0 +1,87 @@ +description: Calculates how often predictions matches integer labels. + +
+ + +
+ +# tf.keras.metrics.sparse_categorical_accuracy + + + + + + + + + +Calculates how often predictions matches integer labels. + + + + + + + + + +You can provide logits of classes as `y_pred`, since argmax of +logits and probabilities are same. + + + + + + + + + + + + + +
+`y_true` + +Integer ground truth values. +
+`y_pred` + +The prediction values. +
+ + + + + + + + + + + +
+Sparse categorical accuracy values. +
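An illustrative sketch of calling `tf.keras.metrics.sparse_categorical_accuracy` directly with made-up values (not part of the generated docstring):

```python
import tensorflow as tf

y_true = tf.constant([2, 1])  # integer labels
y_pred = tf.constant([[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])

# argmax(y_pred) is compared with the integer label for each example.
acc = tf.keras.metrics.sparse_categorical_accuracy(y_true, y_pred)
print(acc.numpy())  # [0. 1.]
```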
+ diff --git a/site/en/api_docs/python/tf/keras/metrics/sparse_top_k_categorical_accuracy.md b/site/en/api_docs/python/tf/keras/metrics/sparse_top_k_categorical_accuracy.md new file mode 100644 index 00000000000..6eaa8fdbf89 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/sparse_top_k_categorical_accuracy.md @@ -0,0 +1,93 @@ +description: Computes how often integer targets are in the top K predictions. + +
+ + +
+ +# tf.keras.metrics.sparse_top_k_categorical_accuracy + + + + + + + + + +Computes how often integer targets are in the top `K` predictions. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`y_true` + +tensor of true targets. +
+`y_pred` + +tensor of predicted targets. +
+`k` + +(Optional) Number of top elements to look at for computing accuracy. +Defaults to 5. +
+ + + + + + + + + + + +
+Sparse top K categorical accuracy value. +
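An illustrative sketch of calling `tf.keras.metrics.sparse_top_k_categorical_accuracy` directly with made-up values (not part of the generated docstring):

```python
import tensorflow as tf

y_true = tf.constant([2, 1])  # integer targets
y_pred = tf.constant([[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])

# With k=2, a prediction counts as correct when the true class is among the
# two highest-scoring classes.
acc = tf.keras.metrics.sparse_top_k_categorical_accuracy(y_true, y_pred, k=2)
print(acc.numpy())  # [1. 1.]
```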
+ diff --git a/site/en/api_docs/python/tf/keras/metrics/top_k_categorical_accuracy.md b/site/en/api_docs/python/tf/keras/metrics/top_k_categorical_accuracy.md new file mode 100644 index 00000000000..381ba5ee09b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/metrics/top_k_categorical_accuracy.md @@ -0,0 +1,93 @@ +description: Computes how often targets are in the top K predictions. + +
+ + +
+ +# tf.keras.metrics.top_k_categorical_accuracy + + + + + + + + + +Computes how often targets are in the top `K` predictions. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`y_true` + +The ground truth values. +
+`y_pred` + +The prediction values. +
+`k` + +(Optional) Number of top elements to look at for computing accuracy. +Defaults to 5. +
+ + + + + + + + + + + +
+Top K categorical accuracy value. +
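An illustrative sketch of calling `tf.keras.metrics.top_k_categorical_accuracy` directly with made-up values (not part of the generated docstring):

```python
import tensorflow as tf

y_true = tf.constant([[0., 0., 1.], [0., 1., 0.]])  # one-hot targets
y_pred = tf.constant([[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])

# With k=2, a prediction counts as correct when the true class is among the
# two highest-scoring classes.
acc = tf.keras.metrics.top_k_categorical_accuracy(y_true, y_pred, k=2)
print(acc.numpy())  # [1. 1.]
```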
+ diff --git a/site/en/api_docs/python/tf/keras/mixed_precision.md b/site/en/api_docs/python/tf/keras/mixed_precision.md new file mode 100644 index 00000000000..9e322901642 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/mixed_precision.md @@ -0,0 +1,25 @@ +description: Public API for tf.keras.mixed_precision namespace. + +
+ + +
+ +# Module: tf.keras.mixed_precision + + + + + + + + + +Public API for tf.keras.mixed_precision namespace. + + + +## Modules + +[`experimental`](../../tf/keras/mixed_precision/experimental.md) module: Public API for tf.keras.mixed_precision.experimental namespace. + diff --git a/site/en/api_docs/python/tf/keras/mixed_precision/experimental.md b/site/en/api_docs/python/tf/keras/mixed_precision/experimental.md new file mode 100644 index 00000000000..0eceaba3083 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/mixed_precision/experimental.md @@ -0,0 +1,35 @@ +description: Public API for tf.keras.mixed_precision.experimental namespace. + +
+ + +
+ +# Module: tf.keras.mixed_precision.experimental + + + + + + + + + +Public API for tf.keras.mixed_precision.experimental namespace. + + + +## Classes + +[`class LossScaleOptimizer`](../../../tf/keras/mixed_precision/experimental/LossScaleOptimizer.md): An optimizer that applies loss scaling. + +[`class Policy`](../../../tf/keras/mixed_precision/experimental/Policy.md): A dtype policy for a Keras layer. + +## Functions + +[`get_layer_policy(...)`](../../../tf/keras/mixed_precision/experimental/get_layer_policy.md): Returns the dtype policy of a layer. + +[`global_policy(...)`](../../../tf/keras/mixed_precision/experimental/global_policy.md): Returns the global Policy. + +[`set_policy(...)`](../../../tf/keras/mixed_precision/experimental/set_policy.md): Sets the global Policy. + diff --git a/site/en/api_docs/python/tf/keras/mixed_precision/experimental/LossScaleOptimizer.md b/site/en/api_docs/python/tf/keras/mixed_precision/experimental/LossScaleOptimizer.md new file mode 100644 index 00000000000..d031b09e2a6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/mixed_precision/experimental/LossScaleOptimizer.md @@ -0,0 +1,837 @@ +description: An optimizer that applies loss scaling. + +
+ + + + + + + + + + + + + + + + + + +
+ +# tf.keras.mixed_precision.experimental.LossScaleOptimizer + + + + + + + + + +An optimizer that applies loss scaling. + +Inherits From: [`Optimizer`](../../../../tf/keras/optimizers/Optimizer.md) + + + + + + + + + +Loss scaling is a process that multiplies the loss by a multiplier called the +loss scale, and divides each gradient by the same multiplier. The pseudocode +for this process is: + +``` +loss = ... +loss *= loss_scale +grads = gradients(loss, vars) +grads /= loss_scale +``` + +Mathematically, loss scaling has no effect, but can help avoid numerical +underflow in intermediate gradients when float16 tensors are used. By +multiplying the loss, each intermediate gradient will have the same multiplier +applied. + +The loss scale can either be a fixed constant, chosen by the user, or be +dynamically determined. Dynamically determining the loss scale is convenient +as a loss scale does not have to be explicitly chosen. However it reduces +performance. + +This optimizer wraps another optimizer and applies loss scaling to it via a +`LossScale`. Loss scaling is applied whenever gradients are +computed, either through `minimize()` or `get_gradients()`. The loss scale is +updated via LossScale.update() whenever gradients are applied, either +through `minimize()` or `apply_gradients()`. For example: + +``` +>>> opt = tf.keras.optimizers.SGD(0.25) +>>> opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, +... "dynamic") +>>> var = tf.Variable(1.) +>>> loss_fn = lambda: var ** 2 +>>> # 'minimize' applies loss scaling to the loss and updates the loss sale. +>>> opt.minimize(loss_fn, var_list=var) +>>> var.numpy() +0.5 +``` + +If a tf.GradientTape is used to compute gradients instead of +LossScaleOptimizer.minimize or LossScaleOptimizer.get_gradients, the loss +and gradients must be scaled manually. This can be done by calling +LossScaleOptimizer.get_scaled_loss before passing the loss to +tf.GradientTape, and LossScaleOptimizer.get_unscaled_gradients after +computing the gradients with tf.GradientTape. For example: + +``` +>>> with tf.GradientTape() as tape: +... loss = loss_fn() +... scaled_loss = opt.get_scaled_loss(loss) +>>> scaled_grad = tape.gradient(scaled_loss, var) +>>> (grad,) = opt.get_unscaled_gradients([scaled_grad]) +>>> opt.apply_gradients([(grad, var)]) # Loss scale is updated here +>>> var.numpy() +0.25 +``` + + + + + + + + + + + + + +
+`optimizer` + +The Optimizer instance to wrap. +
+`loss_scale` + +The loss scale to scale the loss and gradients. This can +either be an int/float to use a fixed loss scale, the string "dynamic" +to use dynamic loss scaling, or an instance of a LossScale. The string +"dynamic" is equivalent to passing `DynamicLossScale()`, and passing an +int/float is equivalent to passing a FixedLossScale with the given loss +scale. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`iterations` + +Variable. The number of training steps this Optimizer has run. +
+`learning_rate` + + +
+`loss_scale` + +The `LossScale` instance associated with this optimizer. +
+`lr` + + +
+`weights` + +Returns variables of this Optimizer based on the order created. +
+ + + +## Methods + +

add_slot

+ +View source + + + +Add a new slot variable for `var`. + + +

add_weight

+ +View source + + + + + + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + +The method sums gradients from all replicas in the presence of +tf.distribute.Strategy by default. You can aggregate gradients yourself by +passing `experimental_aggregate_gradients=False`. + +#### Example: + + + +```python +grads = tape.gradient(loss, vars) +grads = tf.distribute.get_replica_context().all_reduce('sum', grads) +# Processing aggregated gradients. +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) + +``` + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs. +
+`name` + +Optional name for the returned operation. Default to the name passed +to the `Optimizer` constructor. +
+`experimental_aggregate_gradients` + +Whether to sum gradients from different +replicas in the presence of tf.distribute.Strategy. If False, it is the +user's responsibility to aggregate the gradients. Defaults to True. +
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+ + + +

from_config

+ +View source + + + +Creates an optimizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same optimizer from the config +dictionary. + + + + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+`custom_objects` + +A Python dictionary mapping names to additional Python +objects used to create this optimizer, such as a function used for a +hyperparameter. +
+ + + + + + + + + + + +
Returns
+An optimizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the optimizer. + +An optimizer config is a Python dictionary (serializable) +containing the configuration of an optimizer. +The same optimizer can be reinstantiated later +(without any saved state) from this configuration. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

get_gradients

+ +View source + + + +Returns gradients of `loss` with respect to `params`. + + + + + + + + + + + + + + +
Arguments
+`loss` + +Loss tensor. +
+`params` + +List of variables. +
+ + + + + + + + + + + +
Returns
+List of gradient tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case any gradient cannot be computed (e.g. if gradient +function not implemented). +
+ + + +

get_scaled_loss

+ +View source + + + +Scales the loss by the loss scale. + +This method is only needed if you compute gradients manually, e.g. with +tf.GradientTape. In that case, call this method to scale the loss before +passing the loss to tf.GradientTape. If you use +LossScaleOptimizer.minimize or LossScaleOptimizer.get_gradients, loss +scaling is automatically applied and this method is unneeded. + +If this method is called, `get_unscaled_gradients` should also be called. +See the tf.keras.mixed_precision.experimental.LossScaleOptimizer doc for +an example. + + + + + + + + + + +
Args
+`loss` + +The loss, which will be multiplied by the loss scale. Can either be +a tensor or a callable returning a tensor. +
+ + + + + + + + + + + +
Returns
+`loss` multiplied by LossScaleOptimizer.loss_scale(). +
+ + + +

get_slot

+ +View source + + + + + + +

get_slot_names

+ +View source + + + +A list of names for this optimizer's slots. + + +

get_unscaled_gradients

+ +View source + + + +Unscales the gradients by the loss scale. + +This method is only needed if you compute gradients manually, e.g. with +tf.GradientTape. In that case, call this method to unscale the gradients +after computing them with tf.GradientTape. If you use +LossScaleOptimizer.minimize or LossScaleOptimizer.get_gradients, loss +scaling is automatically applied and this method is unneeded. + +If this method is called, `get_scaled_loss` should also be called. See +the tf.keras.mixed_precision.experimental.LossScaleOptimizer doc for an +example. + + + + + + + + + + +
Args
+`grads` + +A list of tensors, each of which will be divided by the loss scale. +Can have None values, which are ignored. +
+ + + + + + + + + + + +
Returns
+A new list the same size as `grads`, where every non-None value in `grads` +is divided by LossScaleOptimizer.loss_scale(). +
+ + + +

get_updates

+ +View source + + + + + + +

get_weights

+ +View source + + + +Returns the current weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function returns the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they were created. The returned list can in turn +be used to load state into similarly parameterized optimizers. + +For example, the RMSprop optimizer for this simple model returns a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> len(opt.get_weights()) +3 +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

minimize

+ +View source + + + +Minimize `loss` by updating `var_list`. + +This method simply computes gradient using tf.GradientTape and calls +`apply_gradients()`. If you want to process the gradient before applying +then call tf.GradientTape and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A callable taking no arguments which returns the value to minimize. +
+`var_list` + +List or tuple of `Variable` objects to update to minimize +`loss`, or a callable returning the list or tuple of `Variable` objects. +Use a callable when the variable list would otherwise be incomplete before +`minimize`, since the variables are created the first time `loss` is +called. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
Returns
+An `Operation` that updates the variables in `var_list`. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + +

set_weights

+ +View source + + + +Set the weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function takes the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they are created. The passed values are used to set +the new state of the optimizer. + +For example, the RMSprop optimizer for this simple model takes a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])] +>>> opt.set_weights(new_weights) +>>> opt.iterations + +``` + + + + + + + + + + +
Arguments
+`weights` + +weight values as a list of numpy arrays. +
+ + + +

variables

+ +View source + + + +Returns variables of this Optimizer based on the order created. + + + + diff --git a/site/en/api_docs/python/tf/keras/mixed_precision/experimental/Policy.md b/site/en/api_docs/python/tf/keras/mixed_precision/experimental/Policy.md new file mode 100644 index 00000000000..6dfcec54364 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/mixed_precision/experimental/Policy.md @@ -0,0 +1,465 @@ +description: A dtype policy for a Keras layer. + +
+ + + + + +
+ +# tf.keras.mixed_precision.experimental.Policy + + + + + + + + + +A dtype policy for a Keras layer. + + + + + + + + + +A dtype policy determines dtype-related aspects of a layer, such as its +computation and variable dtypes. Each layer has a policy. Policies can be +passed to the `dtype` argument of layer constructors, or a global policy can +be set with tf.keras.mixed_precision.experimental.set_policy. A layer will +default to the global policy if no policy is passed to it's constructor. + +For many models, each layer's policy will have the same compute dtype and +variable dtype, which will typically be float32. In this case, we refer to the +singular dtype as the layer's dtype, which can be queried by the property +tf.keras.layers.Layer.dtype. + +When mixed precision training is used, most layers will instead have a float16 +or bfloat16 compute dtype and a float32 variable dtype, and so the layer does +not have a single dtype. When the variable dtype does not match the compute +dtype, variables will be automatically casted to the compute dtype to avoid +type errors. In this case, tf.keras.layers.Layer.dtype refers to the +variable dtype, not the compute dtype. See [the mixed precision +guide](https://www.tensorflow.org/guide/keras/mixed_precision) for more +information on how to use mixed precision. + +Certain policies also have a tf.mixed_precision.experimental.LossScale +instance, which is used by tf.keras.Models to performance loss scaling. Loss +scaling is a technique used with mixed precision to avoid numerical underflow +in float16 gradients. Loss scaling is only done by Models in Model.fit, +Model.train_on_batch, and similar methods. Layers which are not Models +ignore the loss scale. + +Policies are constructed by passing a string to the constructor, e.g. +`tf.keras.mixed_precision.experimental.Policy('float32')`. The string +determines the compute and variable dtypes. It can be one of the following: + + * Any dtype name, such as 'float32' or 'float64'. Both the variable and + compute dtypes will be that dtype. No loss scaling is done by default. + * 'mixed_float16' or 'mixed_bfloat16': The compute dtype is float16 or + bfloat16, while the variable dtype is float32. These policies are used for + mixed precision training. With 'mixed_float16', a dynamic loss scale is + used by default. 'mixed_bfloat16' does no loss scaling by default, as loss + scaling is unnecessary with bfloat16. + +### How to use mixed precision in a Keras model + +To use mixed precision in a Keras model, the `'mixed_float16'` or +`'mixed_bfloat16'` policy can be used. +tf.keras.mixed_precision.experimental.set_policy can be used to set the +default policy for layers if no policy is passed to them. For example: + +``` +>>> tf.keras.mixed_precision.experimental.set_policy('mixed_float16') +>>> model = tf.keras.models.Sequential([ +... tf.keras.layers.Input((100,)), +... # Dense layers use global policy of 'mixed_float16', which does +... # computations in float16 while keeping variables in float32. +... tf.keras.layers.Dense(10), +... tf.keras.layers.Dense(10), +... # Softmax should be done in float32 for numeric stability. We pass +... # dtype='float32' to use float32 instead of the global policy. +... tf.keras.layers.Activation('softmax', dtype='float32') +... 
]) +``` + +Alternatively, the policy can be passed to individual layers instead of +setting the global policy with `set_policy`: + +``` +>>> policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16') +>>> model = tf.keras.models.Sequential([ +... tf.keras.layers.Input((100,)), +... tf.keras.layers.Dense(10, dtype=policy), +... tf.keras.layers.Dense(10, dtype=policy), +... # Softmax should be done in float32 for numeric stability. +... tf.keras.layers.Activation('softmax', dtype='float32') +... ]) +``` + +Note the `'mixed_float16'` policy will apply loss scaling by default in +Model.fit, Model.train_on_batch, and other training methods. If no such +method is used (e.g., a custom training loop is used) and `'mixed_float16'` is +used, the loss scale must be manually applied. See +tf.keras.mixed_precision.experimental.LossScaleOptimizer for details. For +`'mixed_bfloat16'`, no loss scaling is done and loss scaling never needs to be +manually applied. + +See [the mixed precision +guide](https://www.tensorflow.org/guide/keras/mixed_precision) for more +information on using mixed precision + +### How to use float64 in a Keras model + +Using float64 is similar to mixed precision. Either the global policy can be +set to float64, or `dtype='float64'` can be passed to individual layers. For +example, to set the global policy: + +``` +>>> tf.keras.mixed_precision.experimental.set_policy('float64') +>>> model = tf.keras.models.Sequential([ +... tf.keras.layers.Input((100,)), +... # All layers use global policy of 'float64', which does computations +... # and creates variables in float64. +... tf.keras.layers.Dense(10), +... tf.keras.layers.Dense(10), +... tf.keras.layers.Activation('softmax') +... ]) +>>> # Optionaly set policy back to float32 if any other models use float32 +>>> tf.keras.mixed_precision.experimental.set_policy('float32') +``` + +### How a layer uses its policy's compute dtype + +A layer will cast its inputs to its compute dtype in TensorFlow 2. For +example: + +``` +>>> x = tf.ones((4, 4, 4, 4), dtype='float64') +>>> # `layer`'s policy defaults to float32. +>>> layer = tf.keras.layers.Conv2D(filters=4, kernel_size=2) +>>> # `layer` casts it's inputs to its compute dtype, which is float32, and +>>> # does computations in float32. +>>> y = layer(x) +>>> y.dtype +tf.float32 +``` + +Note that the base tf.keras.layers.Layer class inserts the casts. If +subclassing your own layer, you do not have to insert any casts. + +Currently, only tensors in the first argument to the layer's `call` method are +casted. For example: + +``` +>>> class MyLayer(tf.keras.layers.Layer): +... # Bug! `b` will not be casted. +... def call(self, a, b): +... return a + 1., b + 1. +>>> a = tf.constant(1., dtype="float32") +>>> b = tf.constant(1., dtype="float32") +>>> layer = MyLayer(dtype="float64") +>>> x, y = layer(a, b) +>>> x.dtype +tf.float64 +>>> y.dtype +tf.float32 +``` + +If writing your own layer, it is recommended to accept tensors only in the +first argument. This way, all tensors are casted to the layer's compute dtype. +`MyLayer` should therefore be written as: + +``` +>>> class MyLayer(tf.keras.layers.Layer): +... # Now, all tensor inputs will be casted. +... def call(self, inputs): +... a, b = inputs +... return a + 1., b + 1. 
+>>> a = tf.constant(1., dtype="float32") +>>> b = tf.constant(1., dtype="float32") +>>> layer = MyLayer(dtype="float64") +>>> x, y = layer((a, b)) +>>> x.dtype +tf.float64 +>>> y.dtype +tf.float64 +``` + +Other arguments are not automatically casted for technical reasons, but this +may change in a future minor release. + +A layer subclass can prevent its inputs from being autocasted by passing +`autocast=False` to the layer constructor. For example: + +``` +>>> class NonAutoCastingLayer(tf.keras.layers.Layer): +... def __init__(self, **kwargs): +... kwargs['autocast'] = False +... super(NonAutoCastingLayer, self).__init__(**kwargs) +... def call(self, inp): +... return inp +>>> x = tf.ones((4, 4, 4, 4), dtype='float32') +>>> layer = NonAutoCastingLayer(dtype='float64') +>>> y = layer(x) # Will not cast inputs to it's compute dtype of float64 +>>> y.dtype +tf.float32 +``` + +### How a layer uses its policy's variable dtype + +The default dtype of variables created by tf.keras.layers.Layer.add_weight +is the layer's policy's variable dtype. + +If a layer's compute and variable dtypes differ, `add_weight` will wrap +floating-point variables with a special wrapper called an `AutoCastVariable`. +This wrapper is identical to the original variable except it casts itself to +the layer's compute dtype when used within Layer.call. Outside Layer.call, +the variable is not casted. + +A layer author can prevent a variable from being wrapped with an +`AutoCastVariable` by passing `experimental_autocast=False` to `add_weight`: + +``` +>>> class MyLayer(tf.keras.layers.Layer): +... def build(self, input_shape): +... self.x = self.add_weight('x') +... self.y = self.add_weight('y', experimental_autocast=False) +>>> policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16') +>>> layer = MyLayer(dtype=policy) +>>> layer.build((2, 2)) +>>> layer.x + +>>> layer.y + +``` + +Passing `experimental_autocast=False` is useful for layers which may +internally do some math in the variable dtype instead of the compute dtype. +For example, you may wish to compute variable statistics, such as mean and +variance, in the variable dtype. + +### How to write a layer that supports mixed precision and float64. + +For the most part, layers will automatically support mixed precision and +float64 without any additional work, due to the fact the base layer +automatically casts inputs, creates variables of the correct type, and in the +case of mixed precision, wraps variables with `AutoCastVariables`. + +For example, this simple dense layer does not require any additional work to +support mixed precision or float64. Keras automatically casts the inputs and +variable to the appropriate dtype. + +``` +>>> class MyDense(tf.keras.layers.Layer): +... def build(self, input_shape): +... self.kernel = self.add_weight('kernel', (input_shape[-1], 10)) +... def call(self, inputs): +... return tf.matmul(inputs, self.kernel) +``` + +``` +>>> policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16') +>>> layer = MyDense(dtype=policy) +>>> x = np.random.rand(10, 10) +>>> y = layer(x) +>>> y.dtype +tf.float16 +``` + +The primary case where you need extra work to support mixed precision or +float64 is when you create a new tensor, such as with tf.ones or +tf.constant. In such cases, you must create the tensor of the correct dtype. +For example, suppose you modify the `MyDense` layer to add a random number to +the output using tf.random.normal. You must pass the input dtype to +tf.random.normal to ensure the dtypes match. 
+ +``` +>>> class MyDense(tf.keras.layers.Layer): +... def build(self, input_shape): +... self.kernel = self.add_weight('kernel', (input_shape[-1], 10)) +... def call(self, inputs): +... rand = tf.random.normal(shape=inputs.shape, dtype=inputs.dtype) +... return tf.matmul(inputs, self.kernel) + rand +>>> +>>> layer = MyDense(dtype=policy) +>>> y = layer(x) +>>> y.dtype +tf.float16 +``` + +If you did not pass `dtype=inputs.dtype` to tf.random.normal, a `TypeError` +would have occurred. This is because the dtype defaults to `"float32"`, so the +layer would only work if the inputs were float32. + +### The deprecated "infer" policy + +In addition to the above mentioned policies, a policy can also be "infer". +This Policy is deprecated, and it is not recommended. When a layer has an +infer policy, it will infer the computation and variable dtype from the first +input the first time the layer is called. Once the layer is called for the +first time, the layer's policy will change to the dtype of the first input. + +In TensorFlow 1, only the "infer" policy is available. + + + + + + + + + + + + + +
+`name` + +A string. Can be one of the following values: +* Any dtype name, such as 'float32' or 'float64'. Both the variable and +compute dtypes will be that dtype. +* 'mixed_float16' or 'mixed_bfloat16': The compute dtype is float16 or +bfloat16, while the variable dtype is float32. With 'mixed_float16', +a dynamic loss scale is used. These policies are used for mixed +precision training. +* 'infer' (deprecated): Infer the compute and variable dtype from the +input dtype. +
+`loss_scale` + +A tf.mixed_precision.experimental.LossScale, an int (which +uses a `FixedLossScale`), or the string "dynamic" (which uses a +`DynamicLossScale`). Defaults to using no loss scaling unless `name` is +"mixed_float16", in which case this defaults to "dynamic". Only +tf.keras.Models, not layers, use the loss scale, and it is only used +during Model.fit, Model.train_on_batch, and other similar methods. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`compute_dtype`
+
+The compute dtype of this policy.
+
+This is the dtype layers will do their computations in.
+
+Note that even if the compute dtype is float16 or bfloat16, hardware devices
+may not do individual adds, multiplies, and other fundamental operations in
+[b]float16, but instead may do some of them in float32 for numeric
+stability. The compute dtype is the dtype of the inputs and outputs of the
+TensorFlow ops that the layer executes. Internally, many TensorFlow ops will
+do certain internal calculations in float32, or some other device-internal
+intermediate format with higher precision than [b]float16, to increase
+numeric stability.
+
+For example, a tf.keras.layers.Dense layer, when run on a GPU with a
+float16 compute dtype, will pass float16 inputs to tf.matmul. But tf.matmul
+will use float32 intermediate math. The performance benefit of float16 is
+still apparent, due to increased memory bandwidth and the fact that modern
+GPUs have specialized hardware for computing matmuls on float16 while still
+keeping intermediate computations in float32.
+
+`loss_scale` + +Returns the loss scale of this Policy. +
+`name` + +Returns the name of this policy. +
+`should_cast_variables`
+
+Returns True if variables should be cast.
+
+This is true if the variable dtype is not the same as the compute dtype.
+
+`variable_dtype` + +The variable dtype of this policy. + +This is the dtype layers will create their variables in, unless a layer +explicitly chooses a different dtype. If this is different than +Policy.compute_dtype, Layers will cast variables to the compute dtype to +avoid type errors. +
+ + + +## Methods + +

from_config

+ +View source + + + + + + +
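+
+A minimal usage sketch of the `get_config`/`from_config` round trip (the
+policy name is illustrative):
+
+```python
+import tensorflow as tf
+
+# Serialize a policy to a plain Python dict and rebuild it from that dict.
+policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
+config = policy.get_config()
+restored = tf.keras.mixed_precision.experimental.Policy.from_config(config)
+assert restored.name == policy.name
+```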

get_config

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/mixed_precision/experimental/get_layer_policy.md b/site/en/api_docs/python/tf/keras/mixed_precision/experimental/get_layer_policy.md new file mode 100644 index 00000000000..5cd8ac5e6d2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/mixed_precision/experimental/get_layer_policy.md @@ -0,0 +1,75 @@ +description: Returns the dtype policy of a layer. + +
+ + +
+ +# tf.keras.mixed_precision.experimental.get_layer_policy + + + + + + + + + +Returns the dtype policy of a layer. + + + + + + + + + + + + + + + + + + + +
+`layer` + +A tf.keras.layers.Layer. +
+ + + + + + + + + + + +
+The tf.keras.mixed_precision.experimental.Policy of the layer. +
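+
+A minimal usage sketch (the layer shown is illustrative):
+
+```python
+import tensorflow as tf
+
+policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
+layer = tf.keras.layers.Dense(10, dtype=policy)
+# Returns the Policy object the layer was constructed with.
+print(tf.keras.mixed_precision.experimental.get_layer_policy(layer).name)
+# mixed_float16
+```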
+ diff --git a/site/en/api_docs/python/tf/keras/mixed_precision/experimental/global_policy.md b/site/en/api_docs/python/tf/keras/mixed_precision/experimental/global_policy.md new file mode 100644 index 00000000000..c71c897370a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/mixed_precision/experimental/global_policy.md @@ -0,0 +1,63 @@ +description: Returns the global Policy. + +
+ + +
+ +# tf.keras.mixed_precision.experimental.global_policy + + + + + + + + + +Returns the global Policy. + + + + + + + + + +The global policy is the default policy used for layers, if no policy is +passed to the layer constructor. If no policy has been set with +keras.mixed_precision.experimental.set_policy, this will return a policy +constructed from tf.keras.backend.floatx() in TensorFlow 2 (floatx defaults +to float32), or an "infer" policy in TensorFlow 1. + +See keras.mixed_precision.experimental.Policy for more information. + + + + + + + + + +
+The global Policy. +
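+
+A minimal sketch of the default behavior, assuming no global policy has been
+set and floatx has not been changed:
+
+```python
+import tensorflow as tf
+
+# With no global policy set, layers default to a policy built from floatx.
+print(tf.keras.backend.floatx())                                    # float32
+print(tf.keras.mixed_precision.experimental.global_policy().name)   # float32
+```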
+ diff --git a/site/en/api_docs/python/tf/keras/mixed_precision/experimental/set_policy.md b/site/en/api_docs/python/tf/keras/mixed_precision/experimental/set_policy.md new file mode 100644 index 00000000000..506e62fb848 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/mixed_precision/experimental/set_policy.md @@ -0,0 +1,67 @@ +description: Sets the global Policy. + +
+ + +
+ +# tf.keras.mixed_precision.experimental.set_policy + + + + + + + + + +Sets the global Policy. + + + + + + + + + +The global policy is the default policy used for layers, if no policy is +passed to the layer constructor. If no global policy is set, layers will +instead default to a Policy constructed from tf.keras.backend.floatx() in +TensorFlow 2. In TensorFlow 1, layers default to an "infer" policy. + +See keras.mixed_precision.experimental.Policy for more information. + + + + + + + + + + +
+`policy`
+
+A Policy, or a string that will be converted to a Policy.
+
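+
+A minimal usage sketch (restoring the default policy afterwards, in case the
+rest of the program expects float32):
+
+```python
+import tensorflow as tf
+
+tf.keras.mixed_precision.experimental.set_policy('mixed_float16')
+layer = tf.keras.layers.Dense(10)  # picks up the global 'mixed_float16' policy
+print(tf.keras.mixed_precision.experimental.global_policy().name)  # mixed_float16
+
+# Restore the default float32 policy.
+tf.keras.mixed_precision.experimental.set_policy('float32')
+```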
+ diff --git a/site/en/api_docs/python/tf/keras/models.md b/site/en/api_docs/python/tf/keras/models.md new file mode 100644 index 00000000000..4f52eefdbfc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/models.md @@ -0,0 +1,41 @@ +description: Code for model cloning, plus model-related API entries. + +
+ + +
+ +# Module: tf.keras.models + + + + + + + + + +Code for model cloning, plus model-related API entries. + + + +## Classes + +[`class Model`](../../tf/keras/Model.md): `Model` groups layers into an object with training and inference features. + +[`class Sequential`](../../tf/keras/Sequential.md): `Sequential` groups a linear stack of layers into a tf.keras.Model. + +## Functions + +[`clone_model(...)`](../../tf/keras/models/clone_model.md): Clone any `Model` instance. + +[`load_model(...)`](../../tf/keras/models/load_model.md): Loads a model saved via `save_model`. + +[`model_from_config(...)`](../../tf/keras/models/model_from_config.md): Instantiates a Keras model from its config. + +[`model_from_json(...)`](../../tf/keras/models/model_from_json.md): Parses a JSON model configuration string and returns a model instance. + +[`model_from_yaml(...)`](../../tf/keras/models/model_from_yaml.md): Parses a yaml model configuration file and returns a model instance. + +[`save_model(...)`](../../tf/keras/models/save_model.md): Saves a model as a TensorFlow SavedModel or HDF5 file. + diff --git a/site/en/api_docs/python/tf/keras/models/clone_model.md b/site/en/api_docs/python/tf/keras/models/clone_model.md new file mode 100644 index 00000000000..6a63ef5cd97 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/models/clone_model.md @@ -0,0 +1,125 @@ +description: Clone any Model instance. + +
+ + +
+ +# tf.keras.models.clone_model + + + + + + + + + +Clone any `Model` instance. + + + + + + + + + +Model cloning is similar to calling a model on new inputs, +except that it creates new layers (and thus new weights) instead +of sharing the weights of the existing layers. + + + + + + + + + + + + + + + + +
+`model` + +Instance of `Model` +(could be a functional model or a Sequential model). +
+`input_tensors` + +optional list of input tensors or InputLayer objects +to build the model upon. If not provided, +placeholders will be created. +
+`clone_function` + +Callable to be used to clone each layer in the target +model (except `InputLayer` instances). It takes as argument the layer +instance to be cloned, and returns the corresponding layer instance to +be used in the model copy. If unspecified, this callable defaults to +the following serialization/deserialization function: +`lambda layer: layer.__class__.from_config(layer.get_config())`. +By passing a custom callable, you can customize your copy of the +model, e.g. by wrapping certain layers of interest (you might want to +replace all `LSTM` instances with equivalent +`Bidirectional(LSTM(...))` instances, for example). +
+ + + + + + + + + + + +
+An instance of `Model` reproducing the behavior
+of the original model, on top of new input tensors,
+using newly instantiated weights. The cloned model might behave
+differently from the original model if a custom `clone_function`
+modifies the layer.
+
+ + + + + + + + + + + + +
+`ValueError` + +in case of invalid `model` argument value. +
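+
+A minimal usage sketch (the model and `clone_function` shown are
+illustrative):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(5, input_shape=(3,)),
+    tf.keras.layers.Softmax()])
+
+# Default cloning: same architecture, newly initialized weights.
+clone = tf.keras.models.clone_model(model)
+
+# A custom clone_function receives each layer and returns its replacement.
+def clone_fn(layer):
+    return layer.__class__.from_config(layer.get_config())
+
+clone2 = tf.keras.models.clone_model(model, clone_function=clone_fn)
+```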
+ diff --git a/site/en/api_docs/python/tf/keras/models/load_model.md b/site/en/api_docs/python/tf/keras/models/load_model.md new file mode 100644 index 00000000000..e33179042be --- /dev/null +++ b/site/en/api_docs/python/tf/keras/models/load_model.md @@ -0,0 +1,141 @@ +description: Loads a model saved via save_model. + +
+ + +
+ +# tf.keras.models.load_model + + + + + + + + + +Loads a model saved via `save_model`. + + + + + + + + + + +#### Usage: + + + +``` +>>> model = tf.keras.Sequential([ +... tf.keras.layers.Dense(5, input_shape=(3,)), +... tf.keras.layers.Softmax()]) +>>> model.save('/tmp/model') +>>> loaded_model = tf.keras.models.load_model('/tmp/model') +>>> x = tf.random.uniform((10, 3)) +>>> assert np.allclose(model.predict(x), loaded_model.predict(x)) +``` + +Note that the model weights may have different scoped names after being +loaded. Scoped names include the model/layer names, such as +"dense_1/kernel:0"`. It is recommended that you use the layer properties to +access specific variables, e.g. `model.get_layer("dense_1").kernel`. + + + + + + + + + + + + + + + + +
+`filepath` + +One of the following: +- String or `pathlib.Path` object, path to the saved model +- `h5py.File` object from which to load the model +
+`custom_objects` + +Optional dictionary mapping names +(strings) to custom classes or functions to be +considered during deserialization. +
+`compile` + +Boolean, whether to compile the model +after loading. +
+ + + + + + + + + + + +
+A Keras model instance. If the original model was compiled, and saved with +the optimizer, then the returned model will be compiled. Otherwise, the +model will be left uncompiled. In the case that an uncompiled model is +returned, a warning is displayed if the `compile` argument is set to +`True`. +
+ + + + + + + + + + + + + + + +
+`ImportError` + +if loading from an hdf5 file and h5py is not available. +
+`IOError` + +In case of an invalid savefile. +
+ diff --git a/site/en/api_docs/python/tf/keras/models/model_from_config.md b/site/en/api_docs/python/tf/keras/models/model_from_config.md new file mode 100644 index 00000000000..ab2d40e3333 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/models/model_from_config.md @@ -0,0 +1,101 @@ +description: Instantiates a Keras model from its config. + +
+ + +
+ +# tf.keras.models.model_from_config + + + + + + + + + +Instantiates a Keras model from its config. + + + + + + + + + + + + + + + + + + + + + + +
+`config` + +Configuration dictionary. +
+`custom_objects` + +Optional dictionary mapping names +(strings) to custom classes or functions to be +considered during deserialization. +
+ + + + + + + + + + + +
+A Keras model instance (uncompiled). +
+ + + + + + + + + + + + +
+`TypeError` + +if `config` is not a dictionary. +
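+
+A minimal sketch of a config round trip, assuming the serialized
+`{'class_name': ..., 'config': ...}` dictionary form (the model is
+illustrative):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(5, input_shape=(3,))])
+config = {'class_name': model.__class__.__name__,
+          'config': model.get_config()}
+new_model = tf.keras.models.model_from_config(config)
+```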
+ diff --git a/site/en/api_docs/python/tf/keras/models/model_from_json.md b/site/en/api_docs/python/tf/keras/models/model_from_json.md new file mode 100644 index 00000000000..5bbdd0d26c0 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/models/model_from_json.md @@ -0,0 +1,96 @@ +description: Parses a JSON model configuration string and returns a model instance. + +
+ + +
+ +# tf.keras.models.model_from_json + + + + + + + + + +Parses a JSON model configuration string and returns a model instance. + + + + + + + + + + +#### Usage: + + + +``` +>>> model = tf.keras.Sequential([ +... tf.keras.layers.Dense(5, input_shape=(3,)), +... tf.keras.layers.Softmax()]) +>>> config = model.to_json() +>>> loaded_model = tf.keras.models.model_from_json(config) +``` + + + + + + + + + + + + + +
+`json_string` + +JSON string encoding a model configuration. +
+`custom_objects` + +Optional dictionary mapping names +(strings) to custom classes or functions to be +considered during deserialization. +
+ + + + + + + + + + + +
+A Keras model instance (uncompiled). +
+ diff --git a/site/en/api_docs/python/tf/keras/models/model_from_yaml.md b/site/en/api_docs/python/tf/keras/models/model_from_yaml.md new file mode 100644 index 00000000000..2b09aa25212 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/models/model_from_yaml.md @@ -0,0 +1,117 @@ +description: Parses a yaml model configuration file and returns a model instance. + +
+ + +
+ +# tf.keras.models.model_from_yaml + + + + + + + + + +Parses a yaml model configuration file and returns a model instance. + + + + + + + + + + +#### Usage: + + + +``` +>>> model = tf.keras.Sequential([ +... tf.keras.layers.Dense(5, input_shape=(3,)), +... tf.keras.layers.Softmax()]) +>>> try: +... import yaml +... config = model.to_yaml() +... loaded_model = tf.keras.models.model_from_yaml(config) +... except ImportError: +... pass +``` + + + + + + + + + + + + + +
+`yaml_string` + +YAML string or open file encoding a model configuration. +
+`custom_objects` + +Optional dictionary mapping names +(strings) to custom classes or functions to be +considered during deserialization. +
+ + + + + + + + + + + +
+A Keras model instance (uncompiled). +
+ + + + + + + + + + + + +
+`ImportError`
+
+if the yaml module is not found.
+
+ diff --git a/site/en/api_docs/python/tf/keras/models/save_model.md b/site/en/api_docs/python/tf/keras/models/save_model.md new file mode 100644 index 00000000000..caea911c47f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/models/save_model.md @@ -0,0 +1,166 @@ +description: Saves a model as a TensorFlow SavedModel or HDF5 file. + +
+ + +
+ +# tf.keras.models.save_model + + + + + + + + + +Saves a model as a TensorFlow SavedModel or HDF5 file. + + + + + + + + + + +#### Usage: + + + +``` +>>> model = tf.keras.Sequential([ +... tf.keras.layers.Dense(5, input_shape=(3,)), +... tf.keras.layers.Softmax()]) +>>> model.save('/tmp/model') +>>> loaded_model = tf.keras.models.load_model('/tmp/model') +>>> x = tf.random.uniform((10, 3)) +>>> assert np.allclose(model.predict(x), loaded_model.predict(x)) +``` + +The saved model contains: + + - the model's configuration (topology) + - the model's weights + - the model's optimizer's state (if any) + +Thus the saved model can be reinstantiated in +the exact same state, without any of the code +used for model definition or training. + +Note that the model weights may have different scoped names after being +loaded. Scoped names include the model/layer names, such as +"dense_1/kernel:0"`. It is recommended that you use the layer properties to +access specific variables, e.g. `model.get_layer("dense_1").kernel`. + +_SavedModel serialization_ + +The SavedModel serialization path uses tf.saved_model.save to save the model +and all trackable objects attached to the model (e.g. layers and variables). +`@tf.function`-decorated methods are also saved. Additional trackable objects +and functions are added to the SavedModel to allow the model to be +loaded back as a Keras Model object. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`model` + +Keras model instance to be saved. +
+`filepath` + +One of the following: +- String or `pathlib.Path` object, path where to save the model +- `h5py.File` object where to save the model +
+`overwrite` + +Whether we should overwrite any existing model at the target +location, or instead ask the user with a manual prompt. +
+`include_optimizer` + +If True, save optimizer's state together. +
+`save_format` + +Either 'tf' or 'h5', indicating whether to save the model +to Tensorflow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5' +in TF 1.X. +
+`signatures` + +Signatures to save with the SavedModel. Applicable to the 'tf' +format only. Please see the `signatures` argument in +tf.saved_model.save for details. +
+`options` + +Optional tf.saved_model.SaveOptions object that specifies +options for saving to SavedModel. +
+ + + + + + + + + + + + +
+`ImportError` + +If save format is hdf5, and h5py is not available. +
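+
+A minimal sketch of choosing the save format explicitly (paths are
+illustrative; the HDF5 path requires h5py):
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(5, input_shape=(3,))])
+
+# TensorFlow SavedModel (the default in TF 2.x).
+tf.keras.models.save_model(model, '/tmp/model_tf', save_format='tf')
+
+# Single-file HDF5.
+tf.keras.models.save_model(model, '/tmp/model.h5', save_format='h5')
+```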
+ diff --git a/site/en/api_docs/python/tf/keras/optimizers.md b/site/en/api_docs/python/tf/keras/optimizers.md new file mode 100644 index 00000000000..5ebad4828bc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers.md @@ -0,0 +1,61 @@ +description: Built-in optimizer classes. + +
+ + +
+ +# Module: tf.keras.optimizers + + + + + + + + + +Built-in optimizer classes. + + + + + +## Modules + +[`schedules`](../../tf/keras/optimizers/schedules.md) module: Public API for tf.keras.optimizers.schedules namespace. + +## Classes + +[`class Adadelta`](../../tf/keras/optimizers/Adadelta.md): Optimizer that implements the Adadelta algorithm. + +[`class Adagrad`](../../tf/keras/optimizers/Adagrad.md): Optimizer that implements the Adagrad algorithm. + +[`class Adam`](../../tf/keras/optimizers/Adam.md): Optimizer that implements the Adam algorithm. + +[`class Adamax`](../../tf/keras/optimizers/Adamax.md): Optimizer that implements the Adamax algorithm. + +[`class Ftrl`](../../tf/keras/optimizers/Ftrl.md): Optimizer that implements the FTRL algorithm. + +[`class Nadam`](../../tf/keras/optimizers/Nadam.md): Optimizer that implements the NAdam algorithm. + +[`class Optimizer`](../../tf/keras/optimizers/Optimizer.md): Updated base class for optimizers. + +[`class RMSprop`](../../tf/keras/optimizers/RMSprop.md): Optimizer that implements the RMSprop algorithm. + +[`class SGD`](../../tf/keras/optimizers/SGD.md): Stochastic gradient descent and momentum optimizer. + +## Functions + +[`deserialize(...)`](../../tf/keras/optimizers/deserialize.md): Inverse of the `serialize` function. + +[`get(...)`](../../tf/keras/optimizers/get.md): Retrieves a Keras Optimizer instance. + +[`serialize(...)`](../../tf/keras/optimizers/serialize.md) + diff --git a/site/en/api_docs/python/tf/keras/optimizers/Adadelta.md b/site/en/api_docs/python/tf/keras/optimizers/Adadelta.md new file mode 100644 index 00000000000..f2cf42e31be --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/Adadelta.md @@ -0,0 +1,701 @@ +description: Optimizer that implements the Adadelta algorithm. + +
+ + + + + + + + + + + + + + + + +
+ +# tf.keras.optimizers.Adadelta + + + + + + + + + +Optimizer that implements the Adadelta algorithm. + +Inherits From: [`Optimizer`](../../../tf/keras/optimizers/Optimizer.md) + + + + + + + + + +Adadelta optimization is a stochastic gradient descent method that is based on +adaptive learning rate per dimension to address two drawbacks: + 1) the continual decay of learning rates throughout training + 2) the need for a manually selected global learning rate + +Two accumulation steps are required: + 1) the accumulation of gradients squared, + 2) the accumulation of updates squared. + +#### Initialization: + + + +$$E[g^2]_0 := 0 \text{(Initialize gradient 2nd order moment vector)}$$ +$$E[\Delta x^2]_0 := 0 \text{(Initialize 2nd order variable update)}$$ + +$$t := t + 1$$ +$$E[g^2]_t := \rho * E[g^2]_{t-1} + (1 - \rho) * g^2$$ +$$\Delta x_t = -RMS[\Delta x]_{t-1} * g_t / RMS[g]_t$$ +$$E[\Delta x^2]_t := \rho * E[\Delta x^2]_{t-1} + (1 - \rho) * \Delta x_t^2$$ +$$x_t := x_{t-1} + \Delta x_{t}$$ + +References + See [M. D. Zeiler](http://arxiv.org/abs/1212.5701) + ([pdf](http://arxiv.org/pdf/1212.5701v1.pdf)) + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A `Tensor`, floating point value, or a schedule that is a +tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate. +To match the exact form in the original paper use 1.0. +
+`rho` + +A `Tensor` or a floating point value. The decay rate. +
+`epsilon`
+
+A `Tensor` or a floating point value. A constant epsilon used
+to better condition the gradient update.
+
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "Adadelta". +
+`**kwargs`
+
+Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`,
+`decay`}. `clipnorm` clips gradients by norm; `clipvalue` clips gradients
+by value; `decay` is included for backward compatibility to allow
+time-inverse decay of the learning rate; `lr` is included for backward
+compatibility, but it is recommended to use `learning_rate` instead.
+
+ + + + + + + + + + + + + + + + + +
+`iterations` + +Variable. The number of training steps this Optimizer has run. +
+`weights` + +Returns variables of this Optimizer based on the order created. +
+ + + +## Methods + +

add_slot

+ +View source + + + +Add a new slot variable for `var`. + + +

add_weight

+ +View source + + + + + + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + +The method sums gradients from all replicas in the presence of +tf.distribute.Strategy by default. You can aggregate gradients yourself by +passing `experimental_aggregate_gradients=False`. + +#### Example: + + + +```python +grads = tape.gradient(loss, vars) +grads = tf.distribute.get_replica_context().all_reduce('sum', grads) +# Processing aggregated gradients. +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) + +``` + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs. +
+`name`
+
+Optional name for the returned operation. Defaults to the name passed
+to the `Optimizer` constructor.
+
+`experimental_aggregate_gradients`
+
+Whether to sum gradients from different
+replicas in the presence of tf.distribute.Strategy. If False, it is the
+user's responsibility to aggregate the gradients. Defaults to True.
+
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+ + + +

from_config

+ +View source + + + +Creates an optimizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same optimizer from the config +dictionary. + + + + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+`custom_objects` + +A Python dictionary mapping names to additional Python +objects used to create this optimizer, such as a function used for a +hyperparameter. +
+ + + + + + + + + + + +
Returns
+An optimizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the optimizer. + +An optimizer config is a Python dictionary (serializable) +containing the configuration of an optimizer. +The same optimizer can be reinstantiated later +(without any saved state) from this configuration. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

get_gradients

+ +View source + + + +Returns gradients of `loss` with respect to `params`. + + + + + + + + + + + + + + +
Arguments
+`loss` + +Loss tensor. +
+`params` + +List of variables. +
+ + + + + + + + + + + +
Returns
+List of gradient tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case any gradient cannot be computed (e.g. if gradient +function not implemented). +
+ + + +

get_slot

+ +View source + + + + + + +

get_slot_names

+ +View source + + + +A list of names for this optimizer's slots. + + +

get_updates

+ +View source + + + + + + +

get_weights

+ +View source + + + +Returns the current weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function returns the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they were created. The returned list can in turn +be used to load state into similarly parameterized optimizers. + +For example, the RMSprop optimizer for this simple model returns a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> len(opt.get_weights()) +3 +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

minimize

+
+View source
+
+
+
+Minimize `loss` by updating `var_list`.
+
+This method simply computes gradients using tf.GradientTape and calls
+`apply_gradients()`. If you want to process the gradients before applying
+them, use tf.GradientTape and `apply_gradients()` explicitly instead
+of this function.
+
+ + + + + + + + + + + + + + + + +
Args
+`loss` + +A callable taking no arguments which returns the value to minimize. +
+`var_list`
+
+list or tuple of `Variable` objects to update to minimize
+`loss`, or a callable returning the list or tuple of `Variable` objects.
+Use a callable when the variable list would otherwise be incomplete before
+`minimize`, since the variables are created the first time `loss` is
+called.
+
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
Returns
+An `Operation` that updates the variables in `var_list`. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + +
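+
+A minimal sketch of `minimize` with a callable loss (the variable and values
+are illustrative):
+
+```python
+import tensorflow as tf
+
+var = tf.Variable(2.0)
+loss = lambda: (var - 1.0) ** 2      # a callable returning the value to minimize
+opt = tf.keras.optimizers.Adadelta(learning_rate=1.0)
+for _ in range(10):
+    opt.minimize(loss, var_list=[var])
+```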

set_weights

+ +View source + + + +Set the weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function takes the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they are created. The passed values are used to set +the new state of the optimizer. + +For example, the RMSprop optimizer for this simple model takes a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])] +>>> opt.set_weights(new_weights) +>>> opt.iterations + +``` + + + + + + + + + + +
Arguments
+`weights` + +weight values as a list of numpy arrays. +
+ + + +

variables

+ +View source + + + +Returns variables of this Optimizer based on the order created. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/Adagrad.md b/site/en/api_docs/python/tf/keras/optimizers/Adagrad.md new file mode 100644 index 00000000000..c9d1f9b72ee --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/Adagrad.md @@ -0,0 +1,717 @@ +description: Optimizer that implements the Adagrad algorithm. + +
+ + + + + + + + + + + + + + + + +
+ +# tf.keras.optimizers.Adagrad + + + + + + + + + +Optimizer that implements the Adagrad algorithm. + +Inherits From: [`Optimizer`](../../../tf/keras/optimizers/Optimizer.md) + + + + + + + + + +Adagrad is an optimizer with parameter-specific learning rates, +which are adapted relative to how frequently a parameter gets +updated during training. The more updates a parameter receives, +the smaller the updates. + +#### Initialization: + + +$$accum_{g_0} := \text{initial_accumulator_value}$$ + +#### Update step: + + +$$t := t + 1$$ +$$accum_{g_t} := accum_{g_{t-1}} + g^2$$ +$$\theta_t := \theta_{t-1} - lr * g / (\sqrt{accum_{g_t}} + \epsilon)$$ + +#### References: + + + +* [Paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf). +* [Introduction] + (https://ppasupat.github.io/a9online/uploads/proximal_notes.pdf). + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A `Tensor`, floating point value, or a schedule that is a +tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate. +
+`initial_accumulator_value` + +A floating point value. +Starting value for the accumulators, must be non-negative. +
+`epsilon` + +A small floating point value to avoid zero denominator. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "Adagrad". +
+`**kwargs`
+
+Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`,
+`decay`}. `clipnorm` clips gradients by norm; `clipvalue` clips gradients
+by value; `decay` is included for backward compatibility to allow
+time-inverse decay of the learning rate; `lr` is included for backward
+compatibility, but it is recommended to use `learning_rate` instead.
+
+ + + + + + + + + + + + +
+`ValueError` + +If the `initial_accumulator_value` or `epsilon` is invalid. +
+ + + + + + + + + + + + + + + + + +
+`iterations` + +Variable. The number of training steps this Optimizer has run. +
+`weights` + +Returns variables of this Optimizer based on the order created. +
+ + + +## Methods + +

add_slot

+ +View source + + + +Add a new slot variable for `var`. + + +

add_weight

+ +View source + + + + + + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + +The method sums gradients from all replicas in the presence of +tf.distribute.Strategy by default. You can aggregate gradients yourself by +passing `experimental_aggregate_gradients=False`. + +#### Example: + + + +```python +grads = tape.gradient(loss, vars) +grads = tf.distribute.get_replica_context().all_reduce('sum', grads) +# Processing aggregated gradients. +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) + +``` + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs. +
+`name`
+
+Optional name for the returned operation. Defaults to the name passed
+to the `Optimizer` constructor.
+
+`experimental_aggregate_gradients`
+
+Whether to sum gradients from different
+replicas in the presence of tf.distribute.Strategy. If False, it is the
+user's responsibility to aggregate the gradients. Defaults to True.
+
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+ + + +

from_config

+ +View source + + + +Creates an optimizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same optimizer from the config +dictionary. + + + + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+`custom_objects` + +A Python dictionary mapping names to additional Python +objects used to create this optimizer, such as a function used for a +hyperparameter. +
+ + + + + + + + + + + +
Returns
+An optimizer instance. +
+ + + +
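+
+A minimal sketch of rebuilding an optimizer from its config (hyperparameter
+values are illustrative):
+
+```python
+import tensorflow as tf
+
+opt = tf.keras.optimizers.Adagrad(learning_rate=0.05,
+                                  initial_accumulator_value=0.2)
+config = opt.get_config()
+restored = tf.keras.optimizers.Adagrad.from_config(config)
+print(restored.get_config()['learning_rate'])  # 0.05
+```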

get_config

+ +View source + + + +Returns the config of the optimizer. + +An optimizer config is a Python dictionary (serializable) +containing the configuration of an optimizer. +The same optimizer can be reinstantiated later +(without any saved state) from this configuration. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

get_gradients

+ +View source + + + +Returns gradients of `loss` with respect to `params`. + + + + + + + + + + + + + + +
Arguments
+`loss` + +Loss tensor. +
+`params` + +List of variables. +
+ + + + + + + + + + + +
Returns
+List of gradient tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case any gradient cannot be computed (e.g. if gradient +function not implemented). +
+ + + +

get_slot

+ +View source + + + + + + +

get_slot_names

+ +View source + + + +A list of names for this optimizer's slots. + + +

get_updates

+ +View source + + + + + + +

get_weights

+ +View source + + + +Returns the current weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function returns the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they were created. The returned list can in turn +be used to load state into similarly parameterized optimizers. + +For example, the RMSprop optimizer for this simple model returns a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> len(opt.get_weights()) +3 +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

minimize

+
+View source
+
+
+
+Minimize `loss` by updating `var_list`.
+
+This method simply computes gradients using tf.GradientTape and calls
+`apply_gradients()`. If you want to process the gradients before applying
+them, use tf.GradientTape and `apply_gradients()` explicitly instead
+of this function.
+
+ + + + + + + + + + + + + + + + +
Args
+`loss` + +A callable taking no arguments which returns the value to minimize. +
+`var_list`
+
+list or tuple of `Variable` objects to update to minimize
+`loss`, or a callable returning the list or tuple of `Variable` objects.
+Use a callable when the variable list would otherwise be incomplete before
+`minimize`, since the variables are created the first time `loss` is
+called.
+
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
Returns
+An `Operation` that updates the variables in `var_list`. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + +

set_weights

+ +View source + + + +Set the weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function takes the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they are created. The passed values are used to set +the new state of the optimizer. + +For example, the RMSprop optimizer for this simple model takes a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])] +>>> opt.set_weights(new_weights) +>>> opt.iterations + +``` + + + + + + + + + + +
Arguments
+`weights` + +weight values as a list of numpy arrays. +
+ + + +

variables

+ +View source + + + +Returns variables of this Optimizer based on the order created. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/Adam.md b/site/en/api_docs/python/tf/keras/optimizers/Adam.md new file mode 100644 index 00000000000..61e79cf2601 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/Adam.md @@ -0,0 +1,710 @@ +description: Optimizer that implements the Adam algorithm. + +
+ + + + + + + + + + + + + + + + +
+ +# tf.keras.optimizers.Adam + + + + + + + + + +Optimizer that implements the Adam algorithm. + +Inherits From: [`Optimizer`](../../../tf/keras/optimizers/Optimizer.md) + + + + + + + + + +Adam optimization is a stochastic gradient descent method that is based on +adaptive estimation of first-order and second-order moments. +According to the paper +[Adam: A Method for Stochastic Optimization. Kingma et al., +2014](http://arxiv.org/abs/1412.6980), the method is "*computationally +efficient, has little memory requirement, invariant to diagonal rescaling of +gradients, and is well suited for problems that are large in terms of +data/parameters*". + +For AMSGrad see [On The Convergence Of Adam And Beyond. +Reddi et al., 5-8](https://openreview.net/pdf?id=ryQu7f-RZ). + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A `Tensor`, floating point value, or a schedule that is a +tf.keras.optimizers.schedules.LearningRateSchedule, or a callable +that takes no arguments and returns the actual value to use, The +learning rate. Defaults to 0.001. +
+`beta_1` + +A float value or a constant float tensor, or a callable +that takes no arguments and returns the actual value to use. The +exponential decay rate for the 1st moment estimates. Defaults to 0.9. +
+`beta_2` + +A float value or a constant float tensor, or a callable +that takes no arguments and returns the actual value to use, The +exponential decay rate for the 2nd moment estimates. Defaults to 0.999. +
+`epsilon` + +A small constant for numerical stability. This epsilon is +"epsilon hat" in the Kingma and Ba paper (in the formula just before +Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to +1e-7. +
+`amsgrad` + +Boolean. Whether to apply AMSGrad variant of this algorithm from +the paper "On the Convergence of Adam and beyond". Defaults to `False`. +
+`name` + +Optional name for the operations created when applying gradients. +Defaults to "Adam". +
+`**kwargs`
+
+Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`,
+`decay`}. `clipnorm` clips gradients by norm; `clipvalue` clips gradients
+by value; `decay` is included for backward compatibility to allow
+time-inverse decay of the learning rate; `lr` is included for backward
+compatibility, but it is recommended to use `learning_rate` instead.
+
+ + + + + + + + + + + + + + + + + +
+`iterations` + +Variable. The number of training steps this Optimizer has run. +
+`weights` + +Returns variables of this Optimizer based on the order created. +
+ + + +## Methods + +

add_slot

+ +View source + + + +Add a new slot variable for `var`. + + +

add_weight

+ +View source + + + + + + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + +The method sums gradients from all replicas in the presence of +tf.distribute.Strategy by default. You can aggregate gradients yourself by +passing `experimental_aggregate_gradients=False`. + +#### Example: + + + +```python +grads = tape.gradient(loss, vars) +grads = tf.distribute.get_replica_context().all_reduce('sum', grads) +# Processing aggregated gradients. +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) + +``` + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs. +
+`name`
+
+Optional name for the returned operation. Defaults to the name passed
+to the `Optimizer` constructor.
+
+`experimental_aggregate_gradients`
+
+Whether to sum gradients from different
+replicas in the presence of tf.distribute.Strategy. If False, it is the
+user's responsibility to aggregate the gradients. Defaults to True.
+
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+ + + +

from_config

+ +View source + + + +Creates an optimizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same optimizer from the config +dictionary. + + + + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+`custom_objects` + +A Python dictionary mapping names to additional Python +objects used to create this optimizer, such as a function used for a +hyperparameter. +
+ + + + + + + + + + + +
Returns
+An optimizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the optimizer. + +An optimizer config is a Python dictionary (serializable) +containing the configuration of an optimizer. +The same optimizer can be reinstantiated later +(without any saved state) from this configuration. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

get_gradients

+ +View source + + + +Returns gradients of `loss` with respect to `params`. + + + + + + + + + + + + + + +
Arguments
+`loss` + +Loss tensor. +
+`params` + +List of variables. +
+ + + + + + + + + + + +
Returns
+List of gradient tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case any gradient cannot be computed (e.g. if gradient +function not implemented). +
+ + + +

get_slot

+ +View source + + + + + + +

get_slot_names

+ +View source + + + +A list of names for this optimizer's slots. + + +

get_updates

+ +View source + + + + + + +

get_weights

+ +View source + + + +Returns the current weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function returns the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they were created. The returned list can in turn +be used to load state into similarly parameterized optimizers. + +For example, the RMSprop optimizer for this simple model returns a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> len(opt.get_weights()) +3 +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

minimize

+
+View source
+
+
+
+Minimize `loss` by updating `var_list`.
+
+This method simply computes gradients using tf.GradientTape and calls
+`apply_gradients()`. If you want to process the gradients before applying
+them, use tf.GradientTape and `apply_gradients()` explicitly instead
+of this function.
+
+ + + + + + + + + + + + + + + + +
Args
+`loss` + +A callable taking no arguments which returns the value to minimize. +
+`var_list`
+
+list or tuple of `Variable` objects to update to minimize
+`loss`, or a callable returning the list or tuple of `Variable` objects.
+Use a callable when the variable list would otherwise be incomplete before
+`minimize`, since the variables are created the first time `loss` is
+called.
+
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
Returns
+An `Operation` that updates the variables in `var_list`. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + +
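+
+A minimal sketch of passing `var_list` as a callable, for variables that are
+created lazily the first time the loss runs (the layer and input are
+illustrative):
+
+```python
+import tensorflow as tf
+
+layer = tf.keras.layers.Dense(1)     # variables are not built yet
+x = tf.ones((2, 3))
+opt = tf.keras.optimizers.Adam()
+loss = lambda: tf.reduce_sum(layer(x) ** 2)          # builds the layer's variables
+opt.minimize(loss, var_list=lambda: layer.trainable_variables)
+```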

set_weights

+ +View source + + + +Set the weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function takes the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they are created. The passed values are used to set +the new state of the optimizer. + +For example, the RMSprop optimizer for this simple model takes a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])] +>>> opt.set_weights(new_weights) +>>> opt.iterations + +``` + + + + + + + + + + +
Arguments
+`weights` + +weight values as a list of numpy arrays. +
+ + + +

variables

+ +View source + + + +Returns variables of this Optimizer based on the order created. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/Adamax.md b/site/en/api_docs/python/tf/keras/optimizers/Adamax.md new file mode 100644 index 00000000000..ecfa0cfa91e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/Adamax.md @@ -0,0 +1,691 @@ +description: Optimizer that implements the Adamax algorithm. + +
+ + + + + + + + + + + + + + + + +
+
+# tf.keras.optimizers.Adamax
+
+ + + + + + + +
+Optimizer that implements the Adamax algorithm.
+
+Inherits From: [`Optimizer`](../../../tf/keras/optimizers/Optimizer.md)
+
+ + + + + + + +
+It is a variant of Adam based on the infinity norm.
+Default parameters follow those provided in the paper.
+Adamax is sometimes superior to Adam, especially in models with embeddings.
+
+References
+  see Section 7 of [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)
+  ([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
+
+ + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A `Tensor`, floating point value, or a schedule that is a +tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate. +
+`beta_1` + +A float value or a constant float tensor. The exponential decay +rate for the 1st moment estimates. +
+`beta_2` + +A float value or a constant float tensor. The exponential decay +rate for the exponentially weighted infinity norm. +
+`epsilon` + +A small constant for numerical stability. +
+`name` + +Optional name for the operations created when applying gradients. +Defaults to "Adamax". +
+`**kwargs`
+
+Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`,
+`decay`}. `clipnorm` clips gradients by norm; `clipvalue` clips gradients
+by value; `decay` is included for backward compatibility to allow
+time-inverse decay of the learning rate; `lr` is included for backward
+compatibility, but it is recommended to use `learning_rate` instead.
+
+ + + + + + + + + + + + + + + + + +
+`iterations` + +Variable. The number of training steps this Optimizer has run. +
+`weights` + +Returns variables of this Optimizer based on the order created. +
+ + + +## Methods + +

add_slot

+ +View source + + + +Add a new slot variable for `var`. + + +

add_weight

+ +View source + + + + + + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + +The method sums gradients from all replicas in the presence of +tf.distribute.Strategy by default. You can aggregate gradients yourself by +passing `experimental_aggregate_gradients=False`. + +#### Example: + + + +```python +grads = tape.gradient(loss, vars) +grads = tf.distribute.get_replica_context().all_reduce('sum', grads) +# Processing aggregated gradients. +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) + +``` + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs. +
+`name`
+
+Optional name for the returned operation. Defaults to the name passed
+to the `Optimizer` constructor.
+
+`experimental_aggregate_gradients`
+
+Whether to sum gradients from different
+replicas in the presence of tf.distribute.Strategy. If False, it is the
+user's responsibility to aggregate the gradients. Defaults to True.
+
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+ + + +

from_config

+ +View source + + + +Creates an optimizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same optimizer from the config +dictionary. + + + + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+`custom_objects` + +A Python dictionary mapping names to additional Python +objects used to create this optimizer, such as a function used for a +hyperparameter. +
+ + + + + + + + + + + +
Returns
+An optimizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the optimizer. + +An optimizer config is a Python dictionary (serializable) +containing the configuration of an optimizer. +The same optimizer can be reinstantiated later +(without any saved state) from this configuration. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

get_gradients

+ +View source + + + +Returns gradients of `loss` with respect to `params`. + + + + + + + + + + + + + + +
Arguments
+`loss` + +Loss tensor. +
+`params` + +List of variables. +
+ + + + + + + + + + + +
Returns
+List of gradient tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case any gradient cannot be computed (e.g. if gradient +function not implemented). +
+ + + +

get_slot

+ +View source + + + + + + +
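+
+A minimal sketch of inspecting slot variables after one update (slot names
+are discovered via `get_slot_names` rather than assumed):
+
+```python
+import tensorflow as tf
+
+var = tf.Variable([1.0, 2.0])
+opt = tf.keras.optimizers.Adamax()
+opt.minimize(lambda: tf.reduce_sum(var ** 2), var_list=[var])
+print(opt.get_slot_names())                    # e.g. ['m', 'v']
+slot = opt.get_slot(var, opt.get_slot_names()[0])
+print(slot.shape)                              # matches var's shape: (2,)
+```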

get_slot_names

+ +View source + + + +A list of names for this optimizer's slots. + + +

get_updates

+ +View source + + + + + + +

get_weights

+ +View source + + + +Returns the current weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function returns the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they were created. The returned list can in turn +be used to load state into similarly parameterized optimizers. + +For example, the RMSprop optimizer for this simple model returns a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> len(opt.get_weights()) +3 +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

minimize

+
+View source
+
+
+
+Minimize `loss` by updating `var_list`.
+
+This method simply computes gradients using tf.GradientTape and calls
+`apply_gradients()`. If you want to process the gradients before applying
+them, use tf.GradientTape and `apply_gradients()` explicitly instead
+of this function.
+
+ + + + + + + + + + + + + + + + +
Args
+`loss` + +A callable taking no arguments which returns the value to minimize. +
+`var_list`
+
+list or tuple of `Variable` objects to update to minimize
+`loss`, or a callable returning the list or tuple of `Variable` objects.
+Use a callable when the variable list would otherwise be incomplete before
+`minimize`, since the variables are created the first time `loss` is
+called.
+
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
Returns
+An `Operation` that updates the variables in `var_list`. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + +

set_weights

+ +View source + + + +Set the weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function takes the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they are created. The passed values are used to set +the new state of the optimizer. + +For example, the RMSprop optimizer for this simple model takes a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])] +>>> opt.set_weights(new_weights) +>>> opt.iterations + +``` + + + + + + + + + + +
Arguments
+`weights` + +weight values as a list of numpy arrays. +
+ + + +

variables

+ +View source + + + +Returns variables of this Optimizer based on the order created. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/Ftrl.md b/site/en/api_docs/python/tf/keras/optimizers/Ftrl.md new file mode 100644 index 00000000000..b9dffcd1b62 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/Ftrl.md @@ -0,0 +1,756 @@ +description: Optimizer that implements the FTRL algorithm. + +
+ + + + + + + + + + + + + + + + +
+
+# tf.keras.optimizers.Ftrl
+
+ + + + + + + +
+Optimizer that implements the FTRL algorithm.
+
+Inherits From: [`Optimizer`](../../../tf/keras/optimizers/Optimizer.md)
+
+ + + + + + + +
+See Algorithm 1 of this [paper](
+https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf).
+This version has support for both online L2 (the L2 penalty given in the paper
+above) and shrinkage-type L2 (which is the addition of an L2 penalty to the
+loss function).
+
+#### Initialization:
+
+$$t = 0$$
+$$n_{0} = 0$$
+$$\sigma_{0} = 0$$
+$$z_{0} = 0$$
+
+Update ($i$ is the variable index):
+$$t = t + 1$$
+$$n_{t,i} = n_{t-1,i} + g_{t,i}^{2}$$
+$$\sigma_{t,i} = (\sqrt{n_{t,i}} - \sqrt{n_{t-1,i}}) / \alpha$$
+$$z_{t,i} = z_{t-1,i} + g_{t,i} - \sigma_{t,i} * w_{t,i}$$
+$$w_{t,i} = - ((\beta+\sqrt{n_{t,i}}) / \alpha + \lambda_{2})^{-1} * (z_{t,i} -
+    sgn(z_{t,i}) * \lambda_{1}) \text{ if } |z_{t,i}| > \lambda_{1} \text{ else } 0$$
+
+Check the documentation for the l2_shrinkage_regularization_strength
+parameter for more details when shrinkage is enabled, in which case gradient
+is replaced with gradient_with_shrinkage.
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`learning_rate` + +A `Tensor`, floating point value, or a schedule that is a +tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate. +
+`learning_rate_power` + +A float value, must be less or equal to zero. +Controls how the learning rate decreases during training. Use zero for +a fixed learning rate. +
+`initial_accumulator_value` + +The starting value for accumulators. +Only zero or positive values are allowed. +
+`l1_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`l2_regularization_strength` + +A float value, must be greater than or +equal to zero. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "Ftrl". +
+`l2_shrinkage_regularization_strength`
+
+A float value, must be greater than
+or equal to zero. This differs from L2 above in that the L2 above is a
+stabilization penalty, whereas this L2 shrinkage is a magnitude penalty.
+The FTRL formulation can be written as:
+w_{t+1} = argmin_w(\hat{g}_{1:t}w + L1*||w||_1 + L2*||w||_2^2), where
+\hat{g} = g + (2*L2_shrinkage*w), and g is the gradient of the loss
+function w.r.t. the weights w.
+Specifically, in the absence of L1 regularization, it is equivalent to
+the following update rule:
+w_{t+1} = w_t - lr_t / (1 + 2*L2*lr_t) * g_t -
+2*L2_shrinkage*lr_t / (1 + 2*L2*lr_t) * w_t
+where lr_t is the learning rate at t.
+When the input is sparse, shrinkage will only happen on the active weights.
+
+`**kwargs`
+
+Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`,
+`decay`}. `clipnorm` clips gradients by norm; `clipvalue` clips gradients
+by value; `decay` is included for backward compatibility to allow
+time-inverse decay of the learning rate; `lr` is included for backward
+compatibility, but it is recommended to use `learning_rate` instead.
+
+ + + + + + + + + + + + +
+`ValueError` + +If one of the arguments is invalid. +
+ + + + + + + + + + + + + + + + + +
+`iterations` + +Variable. The number of training steps this Optimizer has run. +
+`weights` + +Returns variables of this Optimizer based on the order created. +
+ + + +## Methods + +

add_slot

+ +View source + + + +Add a new slot variable for `var`. + + +

add_weight

+ +View source + + + + + + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + +The method sums gradients from all replicas in the presence of +tf.distribute.Strategy by default. You can aggregate gradients yourself by +passing `experimental_aggregate_gradients=False`. + +#### Example: + + + +```python +grads = tape.gradient(loss, vars) +grads = tf.distribute.get_replica_context().all_reduce('sum', grads) +# Processing aggregated gradients. +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) + +``` + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs. +
+`name` + +Optional name for the returned operation. Defaults to the name passed +to the `Optimizer` constructor. +␍
+`experimental_aggregate_gradients` + +Whether to sum gradients from different +replicas in the presence of tf.distribute.Strategy. If `False`, it is the +user's responsibility to aggregate the gradients. Defaults to `True`. +␍
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+ + + +

from_config

+ +View source + + + +Creates an optimizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same optimizer from the config +dictionary. + + + + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+`custom_objects` + +A Python dictionary mapping names to additional Python +objects used to create this optimizer, such as a function used for a +hyperparameter. +
+ + + + + + + + + + + +
Returns
+An optimizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the optimizer. + +An optimizer config is a Python dictionary (serializable) +containing the configuration of an optimizer. +The same optimizer can be reinstantiated later +(without any saved state) from this configuration. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

get_gradients

+ +View source + + + +Returns gradients of `loss` with respect to `params`. + + + + + + + + + + + + + + +
Arguments
+`loss` + +Loss tensor. +
+`params` + +List of variables. +
+ + + + + + + + + + + +
Returns
+List of gradient tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case any gradient cannot be computed (e.g. if gradient +function not implemented). +
+ + + +

get_slot

+ +View source + + + + + + +

get_slot_names

+ +View source + + + +A list of names for this optimizer's slots. + + +

get_updates

+ +View source + + + + + + +

get_weights

+ +View source + + + +Returns the current weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function returns the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they were created. The returned list can in turn +be used to load state into similarly parameterized optimizers. + +For example, the RMSprop optimizer for this simple model returns a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> len(opt.get_weights()) +3 +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

minimize

+ +View source + + + +Minimize `loss` by updating `var_list`. + +This method simply computes gradient using tf.GradientTape and calls +`apply_gradients()`. If you want to process the gradient before applying +then call tf.GradientTape and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A callable taking no arguments which returns the value to minimize. +
+`var_list` + +list or tuple of `Variable` objects to update to minimize +`loss`, or a callable returning the list or tuple of `Variable` objects. +Use callable when the variable list would otherwise be incomplete before +`minimize` since the variables are created at the first time `loss` is +called. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
Returns
+An `Operation` that updates the variables in `var_list`. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + +

set_weights

+ +View source + + + +Set the weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function takes the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they are created. The passed values are used to set +the new state of the optimizer. + +For example, the RMSprop optimizer for this simple model takes a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])] +>>> opt.set_weights(new_weights) +>>> opt.iterations + +``` + + + + + + + + + + +
Arguments
+`weights` + +weight values as a list of numpy arrays. +
+ + + +

variables

+ +View source + + + +Returns variables of this Optimizer based on the order created. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/Nadam.md b/site/en/api_docs/python/tf/keras/optimizers/Nadam.md new file mode 100644 index 00000000000..4489fa66b59 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/Nadam.md @@ -0,0 +1,713 @@ +description: Optimizer that implements the NAdam algorithm. + +
+ + + + + + + + + + + + + + + + +
+ +# tf.keras.optimizers.Nadam + + + + + + + + + +Optimizer that implements the NAdam algorithm. + +Inherits From: [`Optimizer`](../../../tf/keras/optimizers/Optimizer.md) + + + + + + + + + +Much like Adam is essentially RMSprop with momentum, Nadam is Adam with +Nesterov momentum. + +#### Initialization: + + + +$$m_0 := 0 \text{(Initialize 1st moment vector)}$$ +$$v_0 := 0 \text{(Initialize 2nd moment vector)}$$ +$$mu_0 := 1$$ +$$t := 0 \text{(Initialize timestep)}$$ + +#### Computes: + + +$$t := t + 1$$ +$$\mu_t := \beta_1 * (1 - 0.5 * 0.96^{0.004 * t})$$ +$$g' := g / (1 - \prod_{i=1}^{t}{\mu_i})$$ +$$m_t := \beta_1 * m_{t-1} + (1 - \beta_1) * g$$ +$$m' := m_t / (1 - \prod_{i=1}^{t+1}{\mu_i})$$ +$$v_t := \beta_2 * v_{t-1} + (1 - \beta_2) * g * g$$ +$$v' := v_t / (1 - \beta_2^t)$$ +$$\bar{m} := (1 - \mu_t) * g' + \mu_{t+1} * m'$$ +$$\theta_t := \theta_{t-1} - lr * \bar{m} / (\sqrt{v'} + \epsilon)$$ + +gradient is evaluated at theta(t) + momentum * v(t), and the variables always +store theta + beta_1 * m / sqrt(v) instead of theta. + +References + See [Dozat, T., 2015](http://cs229.stanford.edu/proj2015/054_report.pdf). + + + + + + + + + + + + + + + + + + + + + + + + + +
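
A minimal usage sketch (the learning rate below is an illustrative value):

```python
import tensorflow as tf

opt = tf.keras.optimizers.Nadam(learning_rate=0.001)  # illustrative value
var = tf.Variable(10.0)
loss = lambda: (var ** 2) / 2.0  # d(loss)/d(var) = var
opt.minimize(loss, var_list=[var])  # one Nadam update step
```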
+`learning_rate` + +A Tensor or a floating point value. The learning rate. +
+`beta_1` + +A float value or a constant float tensor. The exponential decay +rate for the 1st moment estimates. +
+`beta_2` + +A float value or a constant float tensor. The exponential decay +rate for the 2nd moment estimates. +␍
+`epsilon` + +A small constant for numerical stability. +
+`name` + +Optional name for the operations created when applying gradients. +Defaults to "Nadam". +␍
+`**kwargs` + +Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`, +`decay`}. `clipnorm` clips gradients by norm; `clipvalue` clips gradients +by value; `decay` is included for backward compatibility to allow time +inverse decay of the learning rate; `lr` is included for backward +compatibility, and it is recommended to use `learning_rate` instead. +␍
+ + + + + + + + + + + + + + + + + +
+`iterations` + +Variable. The number of training steps this Optimizer has run. +
+`weights` + +Returns variables of this Optimizer based on the order created. +
+ + + +## Methods + +

add_slot

+ +View source + + + +Add a new slot variable for `var`. + + +

add_weight

+ +View source + + + + + + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + +The method sums gradients from all replicas in the presence of +tf.distribute.Strategy by default. You can aggregate gradients yourself by +passing `experimental_aggregate_gradients=False`. + +#### Example: + + + +```python +grads = tape.gradient(loss, vars) +grads = tf.distribute.get_replica_context().all_reduce('sum', grads) +# Processing aggregated gradients. +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) + +``` + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs. +
+`name` + +Optional name for the returned operation. Defaults to the name passed +to the `Optimizer` constructor. +␍
+`experimental_aggregate_gradients` + +Whether to sum gradients from different +replicas in the presence of tf.distribute.Strategy. If `False`, it is the +user's responsibility to aggregate the gradients. Defaults to `True`. +␍
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+ + + +

from_config

+ +View source + + + +Creates an optimizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same optimizer from the config +dictionary. + + + + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+`custom_objects` + +A Python dictionary mapping names to additional Python +objects used to create this optimizer, such as a function used for a +hyperparameter. +
+ + + + + + + + + + + +
Returns
+An optimizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the optimizer. + +An optimizer config is a Python dictionary (serializable) +containing the configuration of an optimizer. +The same optimizer can be reinstantiated later +(without any saved state) from this configuration. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

get_gradients

+ +View source + + + +Returns gradients of `loss` with respect to `params`. + + + + + + + + + + + + + + +
Arguments
+`loss` + +Loss tensor. +
+`params` + +List of variables. +
+ + + + + + + + + + + +
Returns
+List of gradient tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case any gradient cannot be computed (e.g. if gradient +function not implemented). +
+ + + +

get_slot

+ +View source + + + + + + +

get_slot_names

+ +View source + + + +A list of names for this optimizer's slots. + + +

get_updates

+ +View source + + + + + + +

get_weights

+ +View source + + + +Returns the current weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function returns the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they were created. The returned list can in turn +be used to load state into similarly parameterized optimizers. + +For example, the RMSprop optimizer for this simple model returns a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> len(opt.get_weights()) +3 +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

minimize

+ +View source + + + +Minimize `loss` by updating `var_list`. + +This method simply computes gradient using tf.GradientTape and calls +`apply_gradients()`. If you want to process the gradient before applying +then call tf.GradientTape and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A callable taking no arguments which returns the value to minimize. +
+`var_list` + +list or tuple of `Variable` objects to update to minimize +`loss`, or a callable returning the list or tuple of `Variable` objects. +Use callable when the variable list would otherwise be incomplete before +`minimize` since the variables are created at the first time `loss` is +called. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
Returns
+An `Operation` that updates the variables in `var_list`. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + +

set_weights

+ +View source + + + +Set the weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function takes the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they are created. The passed values are used to set +the new state of the optimizer. + +For example, the RMSprop optimizer for this simple model takes a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])] +>>> opt.set_weights(new_weights) +>>> opt.iterations + +``` + + + + + + + + + + +
Arguments
+`weights` + +weight values as a list of numpy arrays. +
+ + + +

variables

+ +View source + + + +Returns variables of this Optimizer based on the order created. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/Optimizer.md b/site/en/api_docs/python/tf/keras/optimizers/Optimizer.md new file mode 100644 index 00000000000..25020d7e7be --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/Optimizer.md @@ -0,0 +1,821 @@ +description: Updated base class for optimizers. + +
+ + + + + + + + + + + + + + + + +
+ +# tf.keras.optimizers.Optimizer + + + + + + + + + +Updated base class for optimizers. + + + + + + + + + +This class defines the API to add Ops to train a model. You never use this +class directly, but instead instantiate one of its subclasses such as +tf.keras.optimizers.SGD, tf.keras.optimizers.Adam. + +### Usage + +```python +# Create an optimizer with the desired parameters. +opt = tf.keras.optimizers.SGD(learning_rate=0.1) +# `loss` is a callable that takes no argument and returns the value +# to minimize. +loss = lambda: 3 * var1 * var1 + 2 * var2 * var2 +# In graph mode, returns op that minimizes the loss by updating the listed +# variables. +opt_op = opt.minimize(loss, var_list=[var1, var2]) +opt_op.run() +# In eager mode, simply call minimize to update the list of variables. +opt.minimize(loss, var_list=[var1, var2]) +``` + +### Custom training loop with Keras models + +In Keras models, sometimes variables are created when the model is first +called, instead of construction time. Examples include 1) sequential models +without input shape pre-defined, or 2) subclassed models. Pass var_list as +callable in these cases. + +#### Example: + + +```python +opt = tf.keras.optimizers.SGD(learning_rate=0.1) +model = tf.keras.Sequential() +model.add(tf.keras.layers.Dense(num_hidden, activation='relu')) +model.add(tf.keras.layers.Dense(num_classes, activation='sigmoid')) +loss_fn = lambda: tf.keras.losses.mse(model(input), output) +var_list_fn = lambda: model.trainable_weights +for input, output in data: + opt.minimize(loss_fn, var_list_fn) +``` + +### Processing gradients before applying them. + +Calling `minimize()` takes care of both computing the gradients and +applying them to the variables. If you want to process the gradients +before applying them you can instead use the optimizer in three steps: + +1. Compute the gradients with tf.GradientTape. +2. Process the gradients as you wish. +3. Apply the processed gradients with `apply_gradients()`. + +#### Example: + + + +```python +# Create an optimizer. +opt = tf.keras.optimizers.SGD(learning_rate=0.1) + +# Compute the gradients for a list of variables. +with tf.GradientTape() as tape: + loss = +vars = +grads = tape.gradient(loss, vars) + +# Process the gradients, for example cap them, etc. +# capped_grads = [MyCapper(g) for g in grads] +processed_grads = [process_gradient(g) for g in grads] + +# Ask the optimizer to apply the processed gradients. +opt.apply_gradients(zip(processed_grads, var_list)) +``` + +### Use with tf.distribute.Strategy. + +This optimizer class is tf.distribute.Strategy aware, which means it +automatically sums gradients across all replicas. To average gradients, +you divide your loss by the global batch size, which is done +automatically if you use tf.keras built-in training or evaluation loops. +See the `reduction` argument of your loss which should be set to +tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE for averaging or +tf.keras.losses.Reduction.SUM for not. + +To aggregate gradients yourself, call `apply_gradients` with +`experimental_aggregate_gradients` set to False. This is useful if you need to +process aggregated gradients. + +If you are not using these and you want to average gradients, you should use +tf.math.reduce_sum to add up your per-example losses and then divide by the +global batch size. 
Note that when using tf.distribute.Strategy, the first +component of a tensor's shape is the *replica-local* batch size, which is off +by a factor equal to the number of replicas being used to compute a single +step. As a result, using tf.math.reduce_mean will give the wrong answer, +resulting in gradients that can be many times too big. + +### Variable Constraint + +All Keras optimizers respect variable constraints. If constraint function is +passed to any variable, the constraint will be applied to the variable after +the gradient has been applied to the variable. +Important: If gradient is sparse tensor, variable constraint is not supported. + +### Thread Compatibility + +The entire optimizer is currently thread compatible, not thread-safe. The user +needs to perform synchronization if necessary. + +### Slots + +Many optimizer subclasses, such as `Adam` and `Adagrad` allocate and manage +additional variables associated with the variables to train. These are called +Slots. Slots have names and you can ask the optimizer for the names of +the slots that it uses. Once you have a slot name you can ask the optimizer +for the variable it created to hold the slot value. + +This can be useful if you want to log debug a training algorithm, report stats +about the slots, etc. + +### Hyper parameters + +These are arguments passed to the optimizer subclass constructor +(the `__init__` method), and then passed to `self._set_hyper()`. +They can be either regular Python values (like 1.0), tensors, or +callables. If they are callable, the callable will be called during +`apply_gradients()` to get the value for the hyper parameter. + +Hyper parameters can be overwritten through user code: + +#### Example: + + + +```python +# Create an optimizer with the desired parameters. +opt = tf.keras.optimizers.SGD(learning_rate=0.1) +# `loss` is a callable that takes no argument and returns the value +# to minimize. +loss = lambda: 3 * var1 + 2 * var2 +# In eager mode, simply call minimize to update the list of variables. +opt.minimize(loss, var_list=[var1, var2]) +# update learning rate +opt.learning_rate = 0.05 +opt.minimize(loss, var_list=[var1, var2]) +``` + +### Write a customized optimizer. +If you intend to create your own optimization algorithm, simply inherit from +this class and override the following methods: + + - resource_apply_dense (update variable given gradient tensor is dense) + - resource_apply_sparse (update variable given gradient tensor is sparse) + - create_slots (if your optimizer algorithm requires additional variables) + - get_config (serialization of the optimizer, include all hyper parameters) + + + + + + + + + + + + + +
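
The overridable hooks listed above are private methods of this class. The sketch below assumes their actual names carry a leading underscore (`_create_slots`, `_resource_apply_dense`, `_resource_apply_sparse`) and uses the internal helpers `_set_hyper`, `_get_hyper`, and `_serialize_hyperparameter`, none of which are documented on this page, so treat it as an illustration rather than a guaranteed interface:

```python
import tensorflow as tf

class ScaledSGD(tf.keras.optimizers.Optimizer):
  """Sketch of a custom optimizer: plain gradient descent times a constant."""

  def __init__(self, learning_rate=0.01, scale=1.0, name="ScaledSGD", **kwargs):
    super().__init__(name, **kwargs)
    self._set_hyper("learning_rate", learning_rate)  # registered hyper parameter
    self.scale = scale

  def _create_slots(self, var_list):
    pass  # this toy optimizer keeps no per-variable state, so no slots

  def _resource_apply_dense(self, grad, var, apply_state=None):
    lr = self._get_hyper("learning_rate", var.dtype)
    return var.assign_sub(self.scale * lr * grad)

  def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
    lr = self._get_hyper("learning_rate", var.dtype)
    return var.scatter_sub(tf.IndexedSlices(self.scale * lr * grad, indices))

  def get_config(self):
    config = super().get_config()
    config.update({
        "learning_rate": self._serialize_hyperparameter("learning_rate"),
        "scale": self.scale,
    })
    return config
```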
+`name` + +A non-empty string. The name to use for accumulators created +for the optimizer. +
+`**kwargs` + +Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`, +`decay`}. `clipnorm` clips gradients by norm; `clipvalue` clips gradients +by value; `decay` is included for backward compatibility to allow time +inverse decay of the learning rate; `lr` is included for backward +compatibility, and it is recommended to use `learning_rate` instead. +␍
+ + + + + + + + + + + + +
+`ValueError` + +If name is malformed. +
+ + + + + + + + + + + + + + + + + +
+`iterations` + +Variable. The number of training steps this Optimizer has run. +
+`weights` + +Returns variables of this Optimizer based on the order created. +
+ + + +## Methods + +

add_slot

+ +View source + + + +Add a new slot variable for `var`. + + +

add_weight

+ +View source + + + + + + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + +The method sums gradients from all replicas in the presence of +tf.distribute.Strategy by default. You can aggregate gradients yourself by +passing `experimental_aggregate_gradients=False`. + +#### Example: + + + +```python +grads = tape.gradient(loss, vars) +grads = tf.distribute.get_replica_context().all_reduce('sum', grads) +# Processing aggregated gradients. +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) + +``` + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs. +
+`name` + +Optional name for the returned operation. Defaults to the name passed +to the `Optimizer` constructor. +␍
+`experimental_aggregate_gradients` + +Whether to sum gradients from different +replicas in the presence of tf.distribute.Strategy. If `False`, it is the +user's responsibility to aggregate the gradients. Defaults to `True`. +␍
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+ + + +

from_config

+ +View source + + + +Creates an optimizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same optimizer from the config +dictionary. + + + + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+`custom_objects` + +A Python dictionary mapping names to additional Python +objects used to create this optimizer, such as a function used for a +hyperparameter. +
+ + + + + + + + + + + +
Returns
+An optimizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the optimizer. + +An optimizer config is a Python dictionary (serializable) +containing the configuration of an optimizer. +The same optimizer can be reinstantiated later +(without any saved state) from this configuration. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

get_gradients

+ +View source + + + +Returns gradients of `loss` with respect to `params`. + + + + + + + + + + + + + + +
Arguments
+`loss` + +Loss tensor. +
+`params` + +List of variables. +
+ + + + + + + + + + + +
Returns
+List of gradient tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case any gradient cannot be computed (e.g. if gradient +function not implemented). +
+ + + +

get_slot

+ +View source + + + + + + +

get_slot_names

+ +View source + + + +A list of names for this optimizer's slots. + + +

get_updates

+ +View source + + + + + + +

get_weights

+ +View source + + + +Returns the current weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function returns the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they were created. The returned list can in turn +be used to load state into similarly parameterized optimizers. + +For example, the RMSprop optimizer for this simple model returns a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> len(opt.get_weights()) +3 +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

minimize

+ +View source + + + +Minimize `loss` by updating `var_list`. + +This method simply computes gradient using tf.GradientTape and calls +`apply_gradients()`. If you want to process the gradient before applying +then call tf.GradientTape and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + +
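
A short sketch of the equivalence described above (the variable and loss are illustrative):

```python
import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.1)
var = tf.Variable(2.0)
loss = lambda: var ** 2  # a callable taking no arguments

# One call to minimize() computes the gradient with tf.GradientTape
# and applies it, which also increments `opt.iterations` by 1.
opt.minimize(loss, var_list=[var])

# Roughly equivalent manual form:
with tf.GradientTape() as tape:
  loss_value = var ** 2
grads = tape.gradient(loss_value, [var])
opt.apply_gradients(zip(grads, [var]))
```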
Args
+`loss` + +A callable taking no arguments which returns the value to minimize. +
+`var_list` + +list or tuple of `Variable` objects to update to minimize +`loss`, or a callable returning the list or tuple of `Variable` objects. +Use callable when the variable list would otherwise be incomplete before +`minimize` since the variables are created at the first time `loss` is +called. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
Returns
+An `Operation` that updates the variables in `var_list`. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + +

set_weights

+ +View source + + + +Set the weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function takes the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they are created. The passed values are used to set +the new state of the optimizer. + +For example, the RMSprop optimizer for this simple model takes a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])] +>>> opt.set_weights(new_weights) +>>> opt.iterations + +``` + + + + + + + + + + +
Arguments
+`weights` + +weight values as a list of numpy arrays. +
+ + + +

variables

+ +View source + + + +Returns variables of this Optimizer based on the order created. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/RMSprop.md b/site/en/api_docs/python/tf/keras/optimizers/RMSprop.md new file mode 100644 index 00000000000..2f1e2af4473 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/RMSprop.md @@ -0,0 +1,748 @@ +description: Optimizer that implements the RMSprop algorithm. + +
+ + + + + + + + + + + + + + + + +
+ +# tf.keras.optimizers.RMSprop + + + + + + + + +Optimizer that implements the RMSprop algorithm. + +Inherits From: [`Optimizer`](../../../tf/keras/optimizers/Optimizer.md) + + + + + + + + + +A detailed description of RMSprop: + - maintain a moving (discounted) average of the square of gradients + - divide the gradient by the root of this average + +The default settings do not use momentum: + +$$rms_t = \rho * rms_{t-1} + (1-\rho) * g_t^2$$ +$$\theta_t = \theta_{t-1} - \mathrm{learning\_rate} * + g_t / \sqrt{rms_t + \epsilon}$$ + +Since $x/\sqrt{x^2} = sign(x)$, this is a smoothed approximation of: + +$$ \theta_t = \theta_{t-1} - \mathrm{learning\_rate} * sign(g_t) $$ + +With momentum the update is: + +$$rms_t = \rho * rms_{t-1} + (1-\rho) * g_t^2$$ +$$mom_t = \mathrm{momentum} * mom_{t-1} + g_t / \sqrt{rms_t + \epsilon}$$ +$$\theta_t = \theta_{t-1} - \mathrm{learning\_rate} * mom_t$$ + +This implementation of RMSprop uses plain momentum, not Nesterov momentum. + +The centered version additionally maintains a moving average of the +gradients, and uses that average to estimate the variance: + +$$mg_t = \rho * mg_{t-1} + (1-\rho) * g_t$$ +$$rms_t = \rho * rms_{t-1} + (1-\rho) * g_t^2$$ +$$mom_t = \mathrm{momentum} * mom_{t-1} + + \mathrm{learning\_rate} * g_t / \sqrt{rms_t - mg_t^2 + \epsilon}$$ +$$\theta_t = \theta_{t-1} - mom_t$$ + +#### Usage: + + + +``` +>>> opt = tf.keras.optimizers.RMSprop(learning_rate=0.1) +>>> var1 = tf.Variable(10.0) +>>> loss = lambda: (var1 ** 2)/2.0 # d(loss)/d(var1) = var1 +>>> step_count = opt.minimize(loss, [var1]).numpy() +>>> var1.numpy() +9.683772 +``` + +References + See [pdf](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). + + + + + + + + + + + + + + + + + + + + + + + + + + + +␍
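
A sketch of the momentum and centered variants described above (the values are illustrative, not defaults from this page):

```python
import tensorflow as tf

opt = tf.keras.optimizers.RMSprop(
    learning_rate=0.01,  # illustrative; see the argument table below for defaults
    rho=0.9,
    momentum=0.9,
    centered=True)
var = tf.Variable(10.0)
loss = lambda: (var ** 2) / 2.0  # d(loss)/d(var) = var
opt.minimize(loss, var_list=[var])  # one centered, momentum-based RMSprop step
```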
+`learning_rate` + +A `Tensor`, floating point value, or a schedule that is a +tf.keras.optimizers.schedules.LearningRateSchedule, or a callable +that takes no arguments and returns the actual value to use. The +learning rate. Defaults to 0.001. +␍
+`rho` + +Discounting factor for the history/coming gradient. Defaults to 0.9. +
+`momentum` + +A scalar or a scalar `Tensor`. Defaults to 0.0. +
+`epsilon` + +A small constant for numerical stability. This epsilon is +"epsilon hat" in the Kingma and Ba paper (in the formula just before +Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to +1e-7. +
+`centered` + +Boolean. If `True`, gradients are normalized by the estimated +variance of the gradient; if False, by the uncentered second moment. +Setting this to `True` may help with training, but is slightly more +expensive in terms of computation and memory. Defaults to `False`. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to "RMSprop". When eager execution is enabled, +`learning_rate`, `decay`, `momentum`, and `epsilon` can each be a callable +that takes no arguments and returns the actual value to use. This can be +useful for changing these values across different invocations of +optimizer functions. +␍
+`**kwargs` + +Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`, +`decay`}. `clipnorm` clips gradients by norm; `clipvalue` clips gradients +by value; `decay` is included for backward compatibility to allow time +inverse decay of the learning rate; `lr` is included for backward +compatibility, and it is recommended to use `learning_rate` instead. +␍
+ + + + + + + + + + + + + + + + + +
+`iterations` + +Variable. The number of training steps this Optimizer has run. +
+`weights` + +Returns variables of this Optimizer based on the order created. +
+ + + +## Methods + +

add_slot

+ +View source + + + +Add a new slot variable for `var`. + + +

add_weight

+ +View source + + + + + + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + +The method sums gradients from all replicas in the presence of +tf.distribute.Strategy by default. You can aggregate gradients yourself by +passing `experimental_aggregate_gradients=False`. + +#### Example: + + + +```python +grads = tape.gradient(loss, vars) +grads = tf.distribute.get_replica_context().all_reduce('sum', grads) +# Processing aggregated gradients. +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) + +``` + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs. +
+`name` + +Optional name for the returned operation. Defaults to the name passed +to the `Optimizer` constructor. +␍
+`experimental_aggregate_gradients` + +Whether to sum gradients from different +replicas in the presence of tf.distribute.Strategy. If `False`, it is the +user's responsibility to aggregate the gradients. Defaults to `True`. +␍
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+ + + +

from_config

+ +View source + + + +Creates an optimizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same optimizer from the config +dictionary. + + + + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+`custom_objects` + +A Python dictionary mapping names to additional Python +objects used to create this optimizer, such as a function used for a +hyperparameter. +
+ + + + + + + + + + + +
Returns
+An optimizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the optimizer. + +An optimizer config is a Python dictionary (serializable) +containing the configuration of an optimizer. +The same optimizer can be reinstantiated later +(without any saved state) from this configuration. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

get_gradients

+ +View source + + + +Returns gradients of `loss` with respect to `params`. + + + + + + + + + + + + + + +
Arguments
+`loss` + +Loss tensor. +
+`params` + +List of variables. +
+ + + + + + + + + + + +
Returns
+List of gradient tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case any gradient cannot be computed (e.g. if gradient +function not implemented). +
+ + + +

get_slot

+ +View source + + + + + + +

get_slot_names

+ +View source + + + +A list of names for this optimizer's slots. + + +

get_updates

+ +View source + + + + + + +

get_weights

+ +View source + + + +Returns the current weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function returns the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they were created. The returned list can in turn +be used to load state into similarly parameterized optimizers. + +For example, the RMSprop optimizer for this simple model returns a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> len(opt.get_weights()) +3 +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

minimize

+ +View source + + + +Minimize `loss` by updating `var_list`. + +This method simply computes gradient using tf.GradientTape and calls +`apply_gradients()`. If you want to process the gradient before applying +then call tf.GradientTape and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A callable taking no arguments which returns the value to minimize. +
+`var_list` + +list or tuple of `Variable` objects to update to minimize +`loss`, or a callable returning the list or tuple of `Variable` objects. +Use callable when the variable list would otherwise be incomplete before +`minimize` since the variables are created at the first time `loss` is +called. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
Returns
+An `Operation` that updates the variables in `var_list`. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + +

set_weights

+ +View source + + + +Set the weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function takes the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they are created. The passed values are used to set +the new state of the optimizer. + +For example, the RMSprop optimizer for this simple model takes a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])] +>>> opt.set_weights(new_weights) +>>> opt.iterations + +``` + + + + + + + + + + +
Arguments
+`weights` + +weight values as a list of numpy arrays. +
+ + + +

variables

+ +View source + + + +Returns variables of this Optimizer based on the order created. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/SGD.md b/site/en/api_docs/python/tf/keras/optimizers/SGD.md new file mode 100644 index 00000000000..19ff7475826 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/SGD.md @@ -0,0 +1,727 @@ +description: Stochastic gradient descent and momentum optimizer. + +
+ + + + + + + + + + + + + + + + +
+ +# tf.keras.optimizers.SGD + + + + + + + + + +Stochastic gradient descent and momentum optimizer. + +Inherits From: [`Optimizer`](../../../tf/keras/optimizers/Optimizer.md) + + + + + + + + + +The update rule for $\theta$ with gradient $g$ when `momentum` is 0.0: +$$\theta_t = \theta_{t-1} - \mathrm{learning\_rate} * g_t$$ + +The update rule when `momentum` is larger than 0.0: +$$v_t = \mathrm{momentum} * v_{t-1} - \mathrm{learning\_rate} * g_t$$ +$$\theta_t = \theta_{t-1} + v_t$$ +if `nesterov` is False, gradient is evaluated at $\theta_t$. +if `nesterov` is True, gradient is evaluated at $\theta_t + momentum * v_t$, + and the variables always store $\theta + m v$ instead of $theta$ + +#### Usage: + + + +``` +>>> opt = tf.keras.optimizers.SGD(learning_rate=0.1) +>>> var = tf.Variable(1.0) +>>> loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1 +>>> step_count = opt.minimize(loss, [var]).numpy() +>>> # Step is `-learning_rate*grad` +>>> var.numpy() +0.9 +``` + +``` +>>> opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9) +>>> var = tf.Variable(1.0) +>>> val0 = var.value() +>>> loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1 +>>> # First step is `-learning_rate*grad` +>>> step_count = opt.minimize(loss, [var]).numpy() +>>> val1 = var.value() +>>> (val0 - val1).numpy() +0.1 +>>> # On later steps, step-size increases because of momentum +>>> step_count = opt.minimize(loss, [var]).numpy() +>>> val2 = var.value() +>>> (val1 - val2).numpy() +0.18 +``` + +Some of the args below are hyperparameters, where a hyperparameter is +defined as a scalar Tensor, a regular Python value, or a callable (which +will be evaluated when `apply_gradients` is called) returning a scalar +Tensor or a Python value. + +# References + nesterov = True, See [Sutskever et al., 2013]( + http://jmlr.org/proceedings/papers/v28/sutskever13.pdf). + + + + + + + + + + + + + + + + + + + + + + +
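
A sketch of the Nesterov variant described above (the values are illustrative):

```python
import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9, nesterov=True)
var = tf.Variable(1.0)
loss = lambda: (var ** 2) / 2.0  # d(loss)/d(var) = var
opt.minimize(loss, var_list=[var])  # one Nesterov-momentum update step
```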
+`learning_rate` + +A `Tensor`, floating point value, or a schedule that is a +tf.keras.optimizers.schedules.LearningRateSchedule, or a callable +that takes no arguments and returns the actual value to use. The +learning rate. Defaults to 0.01. +
+`momentum` + +float hyperparameter >= 0 that accelerates SGD in the relevant +direction and dampens oscillations. Defaults to 0.0, i.e., SGD. +
+`nesterov` + +boolean. Whether to apply Nesterov momentum. +Defaults to `False`. +
+`name` + +Optional name prefix for the operations created when applying +gradients. Defaults to 'SGD'. +
+`**kwargs` + +Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`, +`decay`}. `clipnorm` clips gradients by norm; `clipvalue` clips gradients +by value; `decay` is included for backward compatibility to allow time +inverse decay of the learning rate; `lr` is included for backward +compatibility, and it is recommended to use `learning_rate` instead. +␍
+ + + + + + + + + + + + + + + + + +
+`iterations` + +Variable. The number of training steps this Optimizer has run. +
+`weights` + +Returns variables of this Optimizer based on the order created. +
+ + + +## Methods + +

add_slot

+ +View source + + + +Add a new slot variable for `var`. + + +

add_weight

+ +View source + + + + + + +

apply_gradients

+ +View source + + + +Apply gradients to variables. + +This is the second part of `minimize()`. It returns an `Operation` that +applies gradients. + +The method sums gradients from all replicas in the presence of +tf.distribute.Strategy by default. You can aggregate gradients yourself by +passing `experimental_aggregate_gradients=False`. + +#### Example: + + + +```python +grads = tape.gradient(loss, vars) +grads = tf.distribute.get_replica_context().all_reduce('sum', grads) +# Processing aggregated gradients. +optimizer.apply_gradients(zip(grads, vars), + experimental_aggregate_gradients=False) + +``` + + + + + + + + + + + + + + + + +
Args
+`grads_and_vars` + +List of (gradient, variable) pairs. +
+`name` + +Optional name for the returned operation. Defaults to the name passed +to the `Optimizer` constructor. +␍
+`experimental_aggregate_gradients` + +Whether to sum gradients from different +replicas in the presence of tf.distribute.Strategy. If `False`, it is the +user's responsibility to aggregate the gradients. Defaults to `True`. +␍
+ + + + + + + + + + + +
Returns
+An `Operation` that applies the specified gradients. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + + + + +
Raises
+`TypeError` + +If `grads_and_vars` is malformed. +
+`ValueError` + +If none of the variables have gradients. +
+ + + +

from_config

+ +View source + + + +Creates an optimizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same optimizer from the config +dictionary. + + + + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+`custom_objects` + +A Python dictionary mapping names to additional Python +objects used to create this optimizer, such as a function used for a +hyperparameter. +
+ + + + + + + + + + + +
Returns
+An optimizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the optimizer. + +An optimizer config is a Python dictionary (serializable) +containing the configuration of an optimizer. +The same optimizer can be reinstantiated later +(without any saved state) from this configuration. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

get_gradients

+ +View source + + + +Returns gradients of `loss` with respect to `params`. + + + + + + + + + + + + + + +
Arguments
+`loss` + +Loss tensor. +
+`params` + +List of variables. +
+ + + + + + + + + + + +
Returns
+List of gradient tensors. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case any gradient cannot be computed (e.g. if gradient +function not implemented). +
+ + + +

get_slot

+ +View source + + + + + + +

get_slot_names

+ +View source + + + +A list of names for this optimizer's slots. + + +

get_updates

+ +View source + + + + + + +

get_weights

+ +View source + + + +Returns the current weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function returns the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they were created. The returned list can in turn +be used to load state into similarly parameterized optimizers. + +For example, the RMSprop optimizer for this simple model returns a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> len(opt.get_weights()) +3 +``` + + + + + + + + + +
Returns
+Weights values as a list of numpy arrays. +
+ + + +

minimize

+ +View source + + + +Minimize `loss` by updating `var_list`. + +This method simply computes gradient using tf.GradientTape and calls +`apply_gradients()`. If you want to process the gradient before applying +then call tf.GradientTape and `apply_gradients()` explicitly instead +of using this function. + + + + + + + + + + + + + + + + + + + +
Args
+`loss` + +A callable taking no arguments which returns the value to minimize. +
+`var_list` + +list or tuple of `Variable` objects to update to minimize +`loss`, or a callable returning the list or tuple of `Variable` objects. +Use callable when the variable list would otherwise be incomplete before +`minimize` since the variables are created at the first time `loss` is +called. +
+`grad_loss` + +Optional. A `Tensor` holding the gradient computed for `loss`. +
+`name` + +Optional name for the returned operation. +
+ + + + + + + + + + + +
Returns
+An `Operation` that updates the variables in `var_list`. The `iterations` +will be automatically increased by 1. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If some of the variables are not `Variable` objects. +
+ + + +

set_weights

+ +View source + + + +Set the weights of the optimizer. + +The weights of an optimizer are its state (ie, variables). +This function takes the weight values associated with this +optimizer as a list of Numpy arrays. The first value is always the +iterations count of the optimizer, followed by the optimizer's state +variables in the order they are created. The passed values are used to set +the new state of the optimizer. + +For example, the RMSprop optimizer for this simple model takes a list of +three values-- the iteration count, followed by the root-mean-square value +of the kernel and bias of the single Dense layer: + +``` +>>> opt = tf.keras.optimizers.RMSprop() +>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) +>>> m.compile(opt, loss='mse') +>>> data = np.arange(100).reshape(5, 20) +>>> labels = np.zeros(5) +>>> print('Training'); results = m.fit(data, labels) +Training ... +>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])] +>>> opt.set_weights(new_weights) +>>> opt.iterations + +``` + + + + + + + + + + +
Arguments
+`weights` + +weight values as a list of numpy arrays. +
+ + + +

variables

+ +View source + + + +Returns variables of this Optimizer based on the order created. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/deserialize.md b/site/en/api_docs/python/tf/keras/optimizers/deserialize.md new file mode 100644 index 00000000000..dafd1047852 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/deserialize.md @@ -0,0 +1,86 @@ +description: Inverse of the serialize function. + +
+ + +
+ +# tf.keras.optimizers.deserialize + + + + + + + + + +Inverse of the `serialize` function. + + + + + + + + + + + + + + + + + + + + + + +
+`config` + +Optimizer configuration dictionary. +
+`custom_objects` + +Optional dictionary mapping names (strings) to custom +objects (classes and functions) to be considered during deserialization. +
+ + + + + + + + + + + +
+A Keras Optimizer instance. +
+ diff --git a/site/en/api_docs/python/tf/keras/optimizers/get.md b/site/en/api_docs/python/tf/keras/optimizers/get.md new file mode 100644 index 00000000000..8ca10b2af04 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/get.md @@ -0,0 +1,99 @@ +description: Retrieves a Keras Optimizer instance. + +
+ + +
+ +# tf.keras.optimizers.get + + + + + + + + + +Retrieves a Keras Optimizer instance. + + + + + + + + + + + + + + + + + + + +
+`identifier` + +Optimizer identifier, one of: +- String: name of an optimizer. +- Dictionary: configuration dictionary. +- Keras Optimizer instance (it will be returned unchanged). +- TensorFlow Optimizer instance (it will be wrapped as a Keras Optimizer). +␍
+ + + + + + + + + + + +
+A Keras Optimizer instance. +
+ + + + + + + + + + + + +
+`ValueError` + +If `identifier` cannot be interpreted. +
+ diff --git a/site/en/api_docs/python/tf/keras/optimizers/schedules.md b/site/en/api_docs/python/tf/keras/optimizers/schedules.md new file mode 100644 index 00000000000..eecaf505af1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/schedules.md @@ -0,0 +1,47 @@ +description: Public API for tf.keras.optimizers.schedules namespace. + +
+ + +
+ +# Module: tf.keras.optimizers.schedules + + + + + + + + + +Public API for tf.keras.optimizers.schedules namespace. + + + + + +## Classes + +[`class ExponentialDecay`](../../../tf/keras/optimizers/schedules/ExponentialDecay.md): A LearningRateSchedule that uses an exponential decay schedule. + +[`class InverseTimeDecay`](../../../tf/keras/optimizers/schedules/InverseTimeDecay.md): A LearningRateSchedule that uses an inverse time decay schedule. + +[`class LearningRateSchedule`](../../../tf/keras/optimizers/schedules/LearningRateSchedule.md): A serializable learning rate decay schedule. + +[`class PiecewiseConstantDecay`](../../../tf/keras/optimizers/schedules/PiecewiseConstantDecay.md): A LearningRateSchedule that uses a piecewise constant decay schedule. + +[`class PolynomialDecay`](../../../tf/keras/optimizers/schedules/PolynomialDecay.md): A LearningRateSchedule that uses a polynomial decay schedule. + +## Functions + +[`deserialize(...)`](../../../tf/keras/optimizers/schedules/deserialize.md) + +[`serialize(...)`](../../../tf/keras/optimizers/schedules/serialize.md) + diff --git a/site/en/api_docs/python/tf/keras/optimizers/schedules/ExponentialDecay.md b/site/en/api_docs/python/tf/keras/optimizers/schedules/ExponentialDecay.md new file mode 100644 index 00000000000..b559ccabc02 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/schedules/ExponentialDecay.md @@ -0,0 +1,178 @@ +description: A LearningRateSchedule that uses an exponential decay schedule. + +
+ + + + + + +
+ +# tf.keras.optimizers.schedules.ExponentialDecay + + + + + + + + + +A LearningRateSchedule that uses an exponential decay schedule. + +Inherits From: [`LearningRateSchedule`](../../../../tf/keras/optimizers/schedules/LearningRateSchedule.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
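
The decay computation referred to in the argument descriptions below is not reproduced on this page; this schedule is commonly documented as computing `initial_learning_rate * decay_rate ** (step / decay_steps)`, with the exponent floored to an integer when `staircase=True`. A usage sketch under that assumption:

```python
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,
    decay_steps=10000,
    decay_rate=0.96,
    staircase=True)
opt = tf.keras.optimizers.SGD(learning_rate=lr_schedule)

lr_schedule(0)      # learning rate at step 0
lr_schedule(10000)  # 0.1 * 0.96 after one full decay period (staircase)
```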
+`initial_learning_rate` + +A scalar `float32` or `float64` `Tensor` or a +Python number. The initial learning rate. +
+`decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. +Must be positive. See the decay computation above. +
+`decay_rate` + +A scalar `float32` or `float64` `Tensor` or a +Python number. The decay rate. +
+`staircase` + +Boolean. If `True`, decay the learning rate at discrete +intervals. +␍
+`name` + +String. Optional name of the operation. Defaults to +'ExponentialDecay'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `LearningRateSchedule` from its config. + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `LearningRateSchedule` instance. +
+ + + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/schedules/InverseTimeDecay.md b/site/en/api_docs/python/tf/keras/optimizers/schedules/InverseTimeDecay.md new file mode 100644 index 00000000000..a1000fe22a6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/schedules/InverseTimeDecay.md @@ -0,0 +1,176 @@ +description: A LearningRateSchedule that uses an inverse time decay schedule. + +
+ + + + + + +
+ +# tf.keras.optimizers.schedules.InverseTimeDecay + + + + + + + + + +A LearningRateSchedule that uses an inverse time decay schedule. + +Inherits From: [`LearningRateSchedule`](../../../../tf/keras/optimizers/schedules/LearningRateSchedule.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
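
This schedule is commonly documented as computing `initial_learning_rate / (1 + decay_rate * step / decay_steps)`, with the quotient floored when `staircase=True`; a usage sketch under that assumption:

```python
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=0.1,
    decay_steps=1000,
    decay_rate=0.5)
opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

lr_schedule(1000)  # 0.1 / (1 + 0.5 * 1), about 0.067
```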
+`initial_learning_rate` + +A scalar `float32` or `float64` `Tensor` or a +Python number. The initial learning rate. +
+`decay_steps` + +How often to apply decay. +
+`decay_rate` + +A Python number. The decay rate. +
+`staircase` + +Whether to apply decay in a discrete staircase, as opposed to +continuous, fashion. +
+`name` + +String. Optional name of the operation. Defaults to +'InverseTimeDecay'. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates a `LearningRateSchedule` from its config. + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `LearningRateSchedule` instance. +
+ + + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/schedules/LearningRateSchedule.md b/site/en/api_docs/python/tf/keras/optimizers/schedules/LearningRateSchedule.md new file mode 100644 index 00000000000..07fef8a4e1a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/schedules/LearningRateSchedule.md @@ -0,0 +1,123 @@ +description: A serializable learning rate decay schedule. + +
+ + + + + +
+ +# tf.keras.optimizers.schedules.LearningRateSchedule + + + + + + + + + +A serializable learning rate decay schedule. + + + + + +`LearningRateSchedule`s can be passed in as the learning rate of optimizers in +tf.keras.optimizers. They can be serialized and deserialized using +tf.keras.optimizers.schedules.serialize and +tf.keras.optimizers.schedules.deserialize. + +## Methods + +

from_config

+ +View source + + + +Instantiates a `LearningRateSchedule` from its config. + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `LearningRateSchedule` instance. +
+ + + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/schedules/PiecewiseConstantDecay.md b/site/en/api_docs/python/tf/keras/optimizers/schedules/PiecewiseConstantDecay.md new file mode 100644 index 00000000000..c395b1ac036 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/schedules/PiecewiseConstantDecay.md @@ -0,0 +1,182 @@ +description: A LearningRateSchedule that uses a piecewise constant decay schedule. + +
+ + + + + + +
+ +# tf.keras.optimizers.schedules.PiecewiseConstantDecay + + + + + + + + + +A LearningRateSchedule that uses a piecewise constant decay schedule. + +Inherits From: [`LearningRateSchedule`](../../../../tf/keras/optimizers/schedules/LearningRateSchedule.md) + + + + + + + + + + + + + + + + + + + + + + + + + +
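
A usage sketch (the boundary and value choices are illustrative):

```python
import tensorflow as tf

# Learning rate of 1.0 for the first 100000 optimizer steps, 0.5 for the
# next 10000 steps, and 0.1 for any further steps.
lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[100000, 110000],
    values=[1.0, 0.5, 0.1])
opt = tf.keras.optimizers.SGD(learning_rate=lr_schedule)
```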
+`boundaries` + +A list of `Tensor`s or `int`s or `float`s with strictly +increasing entries, and with all elements having the same type as the +optimizer step. +
+`values` + +A list of `Tensor`s or `float`s or `int`s that specifies the +values for the intervals defined by `boundaries`. It should have one +more element than `boundaries`, and all elements should have the same +type. +
+`name` + +A string. Optional name of the operation. Defaults to +'PiecewiseConstant'. +
+ + + + + + + + + + + + +
+`ValueError` + +if the number of elements in `boundaries` and `values` does not match +(`values` must have exactly one more element than `boundaries`). +
+ + + +## Methods + +
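+A brief usage sketch (the boundary and value choices are illustrative): the
+learning rate is `values[0]` while the step is at or below `boundaries[0]`,
+`values[1]` afterwards up to `boundaries[1]`, and so on.
+
+```python
+import tensorflow as tf
+
+# Illustrative schedule: 1e-3 for the first 10000 steps, 1e-4 until step
+# 20000, and 1e-5 for any step after that.
+lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
+    boundaries=[10000, 20000],
+    values=[1e-3, 1e-4, 1e-5])
+
+optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)
+
+print(float(lr_schedule(0)))      # 0.001
+print(float(lr_schedule(15000)))  # 0.0001
+print(float(lr_schedule(30000)))  # 1e-05
+```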

from_config

+ +View source + + + +Instantiates a `LearningRateSchedule` from its config. + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `LearningRateSchedule` instance. +
+ + + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/schedules/PolynomialDecay.md b/site/en/api_docs/python/tf/keras/optimizers/schedules/PolynomialDecay.md new file mode 100644 index 00000000000..bf936c448a4 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/schedules/PolynomialDecay.md @@ -0,0 +1,186 @@ +description: A LearningRateSchedule that uses a polynomial decay schedule. + +
+ + + + + + +
+ +# tf.keras.optimizers.schedules.PolynomialDecay + + + + + + + + + +A LearningRateSchedule that uses a polynomial decay schedule. + +Inherits From: [`LearningRateSchedule`](../../../../tf/keras/optimizers/schedules/LearningRateSchedule.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`initial_learning_rate` + +A scalar `float32` or `float64` `Tensor` or a +Python number. The initial learning rate. +
+`decay_steps` + +A scalar `int32` or `int64` `Tensor` or a Python number. +Must be positive. See the decay computation above. +
+`end_learning_rate` + +A scalar `float32` or `float64` `Tensor` or a +Python number. The minimal end learning rate. +
+`power` + +A scalar `float32` or `float64` `Tensor` or a +Python number. The power of the polynomial. Defaults to linear, 1.0. +
+`cycle` + +A boolean, whether or not it should cycle beyond decay_steps. +
+`name` + +String. Optional name of the operation. Defaults to +'PolynomialDecay'. +
+ + + +## Methods + +
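+A short usage sketch (values are illustrative). With `power=1.0` the rate
+decays linearly from `initial_learning_rate` to `end_learning_rate` over
+`decay_steps`, then stays at `end_learning_rate` when `cycle=False`.
+
+```python
+import tensorflow as tf
+
+lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
+    initial_learning_rate=0.1,
+    decay_steps=10000,
+    end_learning_rate=0.01,
+    power=1.0,
+    cycle=False)
+
+optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)
+
+print(float(lr_schedule(0)))      # 0.1
+print(float(lr_schedule(5000)))   # 0.055 (halfway between 0.1 and 0.01)
+print(float(lr_schedule(20000)))  # 0.01 (held at end_learning_rate)
+```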

from_config

+ +View source + + + +Instantiates a `LearningRateSchedule` from its config. + + + + + + + + + + + +
Args
+`config` + +Output of `get_config()`. +
+ + + + + + + + + + + +
Returns
+A `LearningRateSchedule` instance. +
+ + + +

get_config

+ +View source + + + + + + +

__call__

+ +View source + + + +Call self as a function. + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/schedules/deserialize.md b/site/en/api_docs/python/tf/keras/optimizers/schedules/deserialize.md new file mode 100644 index 00000000000..93feaef3fee --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/schedules/deserialize.md @@ -0,0 +1,45 @@ +
+ + +
+ +# tf.keras.optimizers.schedules.deserialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/schedules/serialize.md b/site/en/api_docs/python/tf/keras/optimizers/schedules/serialize.md new file mode 100644 index 00000000000..aee4d95c2aa --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/schedules/serialize.md @@ -0,0 +1,45 @@ +
+ + +
+ +# tf.keras.optimizers.schedules.serialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/optimizers/serialize.md b/site/en/api_docs/python/tf/keras/optimizers/serialize.md new file mode 100644 index 00000000000..7b421fc704f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/optimizers/serialize.md @@ -0,0 +1,45 @@ +
+ + +
+ +# tf.keras.optimizers.serialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/preprocessing.md b/site/en/api_docs/python/tf/keras/preprocessing.md new file mode 100644 index 00000000000..cdc2359399f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing.md @@ -0,0 +1,29 @@ +description: Keras data preprocessing utils. + +
+ + +
+ +# Module: tf.keras.preprocessing + + + + + + + + + +Keras data preprocessing utils. + + + +## Modules + +[`image`](../../tf/keras/preprocessing/image.md) module: Set of tools for real-time data augmentation on image data. + +[`sequence`](../../tf/keras/preprocessing/sequence.md) module: Utilities for preprocessing sequence data. + +[`text`](../../tf/keras/preprocessing/text.md) module: Utilities for text input preprocessing. + diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image.md b/site/en/api_docs/python/tf/keras/preprocessing/image.md new file mode 100644 index 00000000000..225c57482d2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image.md @@ -0,0 +1,59 @@ +description: Set of tools for real-time data augmentation on image data. + +
+ + +
+ +# Module: tf.keras.preprocessing.image + + + + + + + + + +Set of tools for real-time data augmentation on image data. + + + +## Classes + +[`class DirectoryIterator`](../../../tf/keras/preprocessing/image/DirectoryIterator.md): Iterator capable of reading images from a directory on disk. + +[`class ImageDataGenerator`](../../../tf/keras/preprocessing/image/ImageDataGenerator.md): Generate batches of tensor image data with real-time data augmentation. + +[`class Iterator`](../../../tf/keras/preprocessing/image/Iterator.md): Base class for image data iterators. + +[`class NumpyArrayIterator`](../../../tf/keras/preprocessing/image/NumpyArrayIterator.md): Iterator yielding data from a Numpy array. + +## Functions + +[`apply_affine_transform(...)`](../../../tf/keras/preprocessing/image/apply_affine_transform.md): Applies an affine transformation specified by the parameters given. + +[`apply_brightness_shift(...)`](../../../tf/keras/preprocessing/image/apply_brightness_shift.md): Performs a brightness shift. + +[`apply_channel_shift(...)`](../../../tf/keras/preprocessing/image/apply_channel_shift.md): Performs a channel shift. + +[`array_to_img(...)`](../../../tf/keras/preprocessing/image/array_to_img.md): Converts a 3D Numpy array to a PIL Image instance. + +[`img_to_array(...)`](../../../tf/keras/preprocessing/image/img_to_array.md): Converts a PIL Image instance to a Numpy array. + +[`load_img(...)`](../../../tf/keras/preprocessing/image/load_img.md): Loads an image into PIL format. + +[`random_brightness(...)`](../../../tf/keras/preprocessing/image/random_brightness.md): Performs a random brightness shift. + +[`random_channel_shift(...)`](../../../tf/keras/preprocessing/image/random_channel_shift.md): Performs a random channel shift. + +[`random_rotation(...)`](../../../tf/keras/preprocessing/image/random_rotation.md): Performs a random rotation of a Numpy image tensor. + +[`random_shear(...)`](../../../tf/keras/preprocessing/image/random_shear.md): Performs a random spatial shear of a Numpy image tensor. + +[`random_shift(...)`](../../../tf/keras/preprocessing/image/random_shift.md): Performs a random spatial shift of a Numpy image tensor. + +[`random_zoom(...)`](../../../tf/keras/preprocessing/image/random_zoom.md): Performs a random spatial zoom of a Numpy image tensor. + +[`save_img(...)`](../../../tf/keras/preprocessing/image/save_img.md): Saves an image stored as a Numpy array to a path or file object. + diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/DirectoryIterator.md b/site/en/api_docs/python/tf/keras/preprocessing/image/DirectoryIterator.md new file mode 100644 index 00000000000..15f6638a18a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/DirectoryIterator.md @@ -0,0 +1,344 @@ +description: Iterator capable of reading images from a directory on disk. + +
+ + + + + + + + + + + + + +
+ +# tf.keras.preprocessing.image.DirectoryIterator + + + + + + + + + +Iterator capable of reading images from a directory on disk. + +Inherits From: [`Iterator`](../../../../tf/keras/preprocessing/image/Iterator.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`directory` + +Path to the directory to read images from. +Each subdirectory in this directory will be +considered to contain images from one class, +or alternatively you could specify class subdirectories +via the `classes` argument. +
+`image_data_generator` + +Instance of `ImageDataGenerator` +to use for random transformations and normalization. +
+`target_size` + +tuple of integers, dimensions to resize input images to. +
+`color_mode` + +One of `"rgb"`, `"rgba"`, `"grayscale"`. +Color mode to read images. +
+`classes` + +Optional list of strings, names of subdirectories +containing images from each class (e.g. `["dogs", "cats"]`). +It will be computed automatically if not set. +
+`class_mode` + +Mode for yielding the targets: +`"binary"`: binary targets (if there are only two classes), +`"categorical"`: categorical targets, +`"sparse"`: integer targets, +`"input"`: targets are images identical to input images (mainly +used to work with autoencoders), +`None`: no targets get yielded (only input images are yielded). +
+`batch_size` + +Integer, size of a batch. +
+`shuffle` + +Boolean, whether to shuffle the data between epochs. +
+`seed` + +Random seed for data shuffling. +
+`data_format` + +String, one of `channels_first`, `channels_last`. +
+`save_to_dir` + +Optional directory where to save the pictures +being yielded, in a viewable format. This is useful +for visualizing the random transformations being +applied, for debugging purposes. +
+`save_prefix` + +String prefix to use for saving sample +images (if `save_to_dir` is set). +
+`save_format` + +Format to use for saving sample images +(if `save_to_dir` is set). +
+`subset` + +Subset of data (`"training"` or `"validation"`) if +validation_split is set in ImageDataGenerator. +
+`interpolation` + +Interpolation method used to resample the image if the +target size is different from that of the loaded image. +Supported methods are "nearest", "bilinear", and "bicubic". +If PIL version 1.1.3 or newer is installed, "lanczos" is also +supported. If PIL version 3.4.0 or newer is installed, "box" and +"hamming" are also supported. By default, "nearest" is used. +
+`dtype` + +Dtype to use for generated arrays. +
+ + + + + + + + + + + + + + + + + + + + +
+`filepaths` + +List of absolute paths to image files +
+`labels` + +Class labels of every observation +
+`sample_weight` + + +
+ + + +## Methods + +
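+In practice a `DirectoryIterator` is usually obtained from
+`ImageDataGenerator.flow_from_directory` rather than constructed directly.
+A minimal sketch (the `data/train` path and its per-class subdirectories
+are placeholders):
+
+```python
+import tensorflow as tf
+
+datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
+
+# 'data/train' is assumed to contain one subdirectory per class.
+it = datagen.flow_from_directory(
+    'data/train',
+    target_size=(150, 150),
+    batch_size=32,
+    class_mode='categorical')
+
+x_batch, y_batch = next(it)  # batch of images and one-hot labels
+print(it.class_indices)      # mapping from class name to label index
+```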

next

+ + + +For python 2.x. + +# Returns + The next batch. + +

on_epoch_end

+ + + + + + +

reset

+ + + + + + +

set_processing_attrs

+ + + +Sets attributes to use later for processing files into a batch. + +# Arguments + image_data_generator: Instance of `ImageDataGenerator` + to use for random transformations and normalization. + target_size: tuple of integers, dimensions to resize input images to. + color_mode: One of `"rgb"`, `"rgba"`, `"grayscale"`. + Color mode to read images. + data_format: String, one of `channels_first`, `channels_last`. + save_to_dir: Optional directory where to save the pictures + being yielded, in a viewable format. This is useful + for visualizing the random transformations being + applied, for debugging purposes. + save_prefix: String prefix to use for saving sample + images (if `save_to_dir` is set). + save_format: Format to use for saving sample images + (if `save_to_dir` is set). + subset: Subset of data (`"training"` or `"validation"`) if + validation_split is set in ImageDataGenerator. + interpolation: Interpolation method used to resample the image if the + target size is different from that of the loaded image. + Supported methods are "nearest", "bilinear", and "bicubic". + If PIL version 1.1.3 or newer is installed, "lanczos" is also + supported. If PIL version 3.4.0 or newer is installed, "box" and + "hamming" are also supported. By default, "nearest" is used. + +

__getitem__

+ + + + + + +

__iter__

+ + + + + + +

__len__

+ + + + + + + + +## Class Variables + +* `allowed_class_modes` +* `white_list_formats` diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator.md b/site/en/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator.md new file mode 100644 index 00000000000..bc92240715c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator.md @@ -0,0 +1,726 @@ +description: Generate batches of tensor image data with real-time data augmentation. + +
+ + + + + + + + + + + +
+ +# tf.keras.preprocessing.image.ImageDataGenerator + + + + + + + + + +Generate batches of tensor image data with real-time data augmentation. + + + + + + + + + + The data will be looped over (in batches). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`featurewise_center` + +Boolean. +Set input mean to 0 over the dataset, feature-wise. +
+`samplewise_center` + +Boolean. Set each sample mean to 0. +
+`featurewise_std_normalization` + +Boolean. +Divide inputs by std of the dataset, feature-wise. +
+`samplewise_std_normalization` + +Boolean. Divide each input by its std. +
+`zca_epsilon` + +epsilon for ZCA whitening. Default is 1e-6. +
+`zca_whitening` + +Boolean. Apply ZCA whitening. +
+`rotation_range` + +Int. Degree range for random rotations. +
+`width_shift_range` + +Float, 1-D array-like or int +- float: fraction of total width, if < 1, or pixels if >= 1. +- 1-D array-like: random elements from the array. +- int: integer number of pixels from interval +`(-width_shift_range, +width_shift_range)` +- With `width_shift_range=2` possible values +are integers `[-1, 0, +1]`, +same as with `width_shift_range=[-1, 0, +1]`, +while with `width_shift_range=1.0` possible values are floats +in the interval [-1.0, +1.0). +
+`height_shift_range` + +Float, 1-D array-like or int +- float: fraction of total height, if < 1, or pixels if >= 1. +- 1-D array-like: random elements from the array. +- int: integer number of pixels from interval +`(-height_shift_range, +height_shift_range)` +- With `height_shift_range=2` possible values +are integers `[-1, 0, +1]`, +same as with `height_shift_range=[-1, 0, +1]`, +while with `height_shift_range=1.0` possible values are floats +in the interval [-1.0, +1.0). +
+`brightness_range` + +Tuple or list of two floats. Range for picking +a brightness shift value from. +
+`shear_range` + +Float. Shear intensity +(shear angle in counter-clockwise direction, in degrees). +
+`zoom_range` + +Float or [lower, upper]. Range for random zoom. +If a float, `[lower, upper] = [1-zoom_range, 1+zoom_range]`. +
+`channel_shift_range` + +Float. Range for random channel shifts. +
+`fill_mode` + +One of {"constant", "nearest", "reflect" or "wrap"}. +Default is 'nearest'. +Points outside the boundaries of the input are filled +according to the given mode: +- 'constant': kkkkkkkk|abcd|kkkkkkkk (cval=k) +- 'nearest': aaaaaaaa|abcd|dddddddd +- 'reflect': abcddcba|abcd|dcbaabcd +- 'wrap': abcdabcd|abcd|abcdabcd +
+`cval` + +Float or Int. +Value used for points outside the boundaries +when `fill_mode = "constant"`. +
+`horizontal_flip` + +Boolean. Randomly flip inputs horizontally. +
+`vertical_flip` + +Boolean. Randomly flip inputs vertically. +
+`rescale` + +rescaling factor. Defaults to None. +If None or 0, no rescaling is applied, +otherwise we multiply the data by the value provided +(after applying all other transformations). +
+`preprocessing_function` + +function that will be applied on each input. +The function will run after the image is resized and augmented. +The function should take one argument: +one image (Numpy tensor with rank 3), +and should output a Numpy tensor with the same shape. +
+`data_format` + +Image data format, +either "channels_first" or "channels_last". +"channels_last" mode means that the images should have shape +`(samples, height, width, channels)`, +"channels_first" mode means that the images should have shape +`(samples, channels, height, width)`. +It defaults to the `image_data_format` value found in your +Keras config file at `~/.keras/keras.json`. +If you never set it, then it will be "channels_last". +
+`validation_split` + +Float. Fraction of images reserved for validation +(strictly between 0 and 1). +
+`dtype` + +Dtype to use for the generated arrays. +
+ + + +#### Examples: + + + +Example of using `.flow(x, y)`: + +```python +(x_train, y_train), (x_test, y_test) = cifar10.load_data() +y_train = np_utils.to_categorical(y_train, num_classes) +y_test = np_utils.to_categorical(y_test, num_classes) +datagen = ImageDataGenerator( + featurewise_center=True, + featurewise_std_normalization=True, + rotation_range=20, + width_shift_range=0.2, + height_shift_range=0.2, + horizontal_flip=True) +# compute quantities required for featurewise normalization +# (std, mean, and principal components if ZCA whitening is applied) +datagen.fit(x_train) +# fits the model on batches with real-time data augmentation: +model.fit_generator(datagen.flow(x_train, y_train, batch_size=32), + steps_per_epoch=len(x_train) / 32, epochs=epochs) +# here's a more "manual" example +for e in range(epochs): + print('Epoch', e) + batches = 0 + for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32): + model.fit(x_batch, y_batch) + batches += 1 + if batches >= len(x_train) / 32: + # we need to break the loop by hand because + # the generator loops indefinitely + break +``` + +Example of using `.flow_from_directory(directory)`: + +```python +train_datagen = ImageDataGenerator( + rescale=1./255, + shear_range=0.2, + zoom_range=0.2, + horizontal_flip=True) +test_datagen = ImageDataGenerator(rescale=1./255) +train_generator = train_datagen.flow_from_directory( + 'data/train', + target_size=(150, 150), + batch_size=32, + class_mode='binary') +validation_generator = test_datagen.flow_from_directory( + 'data/validation', + target_size=(150, 150), + batch_size=32, + class_mode='binary') +model.fit_generator( + train_generator, + steps_per_epoch=2000, + epochs=50, + validation_data=validation_generator, + validation_steps=800) +``` + +Example of transforming images and masks together. + +```python +# we create two instances with the same arguments +data_gen_args = dict(featurewise_center=True, + featurewise_std_normalization=True, + rotation_range=90, + width_shift_range=0.1, + height_shift_range=0.1, + zoom_range=0.2) +image_datagen = ImageDataGenerator(**data_gen_args) +mask_datagen = ImageDataGenerator(**data_gen_args) +# Provide the same seed and keyword arguments to the fit and flow methods +seed = 1 +image_datagen.fit(images, augment=True, seed=seed) +mask_datagen.fit(masks, augment=True, seed=seed) +image_generator = image_datagen.flow_from_directory( + 'data/images', + class_mode=None, + seed=seed) +mask_generator = mask_datagen.flow_from_directory( + 'data/masks', + class_mode=None, + seed=seed) +# combine generators into one which yields image and masks +train_generator = zip(image_generator, mask_generator) +model.fit_generator( + train_generator, + steps_per_epoch=2000, + epochs=50) +``` + +## Methods + +

apply_transform

+ + + +Applies a transformation to an image according to given parameters. + +# Arguments + x: 3D tensor, single image. + transform_parameters: Dictionary with string - parameter pairs + describing the transformation. + Currently, the following parameters + from the dictionary are used: + - `'theta'`: Float. Rotation angle in degrees. + - `'tx'`: Float. Shift in the x direction. + - `'ty'`: Float. Shift in the y direction. + - `'shear'`: Float. Shear angle in degrees. + - `'zx'`: Float. Zoom in the x direction. + - `'zy'`: Float. Zoom in the y direction. + - `'flip_horizontal'`: Boolean. Horizontal flip. + - `'flip_vertical'`: Boolean. Vertical flip. + - `'channel_shift_intensity'`: Float. Channel shift intensity. + - `'brightness'`: Float. Brightness shift intensity. + +# Returns + A transformed version of the input (same shape). + +

fit

+ + + +Fits the data generator to some sample data. + +This computes the internal data stats related to the +data-dependent transformations, based on an array of sample data. + +Only required if `featurewise_center` or +`featurewise_std_normalization` or `zca_whitening` are set to True. + +When `rescale` is set to a value, rescaling is applied to +sample data before computing the internal data stats. + +# Arguments + x: Sample data. Should have rank 4. + In case of grayscale data, + the channels axis should have value 1, in case + of RGB data, it should have value 3, and in case + of RGBA data, it should have value 4. + augment: Boolean (default: False). + Whether to fit on randomly augmented samples. + rounds: Int (default: 1). + If using data augmentation (`augment=True`), + this is how many augmentation passes over the data to use. + seed: Int (default: None). Random seed. + +
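+A short sketch of when `fit` is needed; the random array below stands in
+for real sample images:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+datagen = tf.keras.preprocessing.image.ImageDataGenerator(
+    featurewise_center=True,
+    featurewise_std_normalization=True)
+
+# Placeholder sample data: 64 RGB images of size 32x32 (rank 4, as required).
+x_sample = np.random.random((64, 32, 32, 3))
+
+# Computes the dataset mean and std that standardize() will later use.
+datagen.fit(x_sample)
+```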

flow

+ + + +Takes data & label arrays, generates batches of augmented data. + +# Arguments + x: Input data. NumPy array of rank 4 or a tuple. + If tuple, the first element + should contain the images and the second element + another NumPy array or a list of NumPy arrays + that gets passed to the output + without any modifications. + Can be used to feed the model miscellaneous data + along with the images. + In case of grayscale data, the channels axis of the image array + should have value 1, in case + of RGB data, it should have value 3, and in case + of RGBA data, it should have value 4. + y: Labels. + batch_size: Int (default: 32). + shuffle: Boolean (default: True). + sample_weight: Sample weights. + seed: Int (default: None). + save_to_dir: None or str (default: None). + This allows you to optionally specify a directory + to which to save the augmented pictures being generated + (useful for visualizing what you are doing). + save_prefix: Str (default: `''`). + Prefix to use for filenames of saved pictures + (only relevant if `save_to_dir` is set). + save_format: one of "png", "jpeg" + (only relevant if `save_to_dir` is set). Default: "png". + subset: Subset of data (`"training"` or `"validation"`) if + `validation_split` is set in `ImageDataGenerator`. + +# Returns + An `Iterator` yielding tuples of `(x, y)` + where `x` is a NumPy array of image data + (in the case of a single image input) or a list + of NumPy arrays (in the case with + additional inputs) and `y` is a NumPy array + of corresponding labels. If 'sample_weight' is not None, + the yielded tuples are of the form `(x, y, sample_weight)`. + If `y` is None, only the NumPy array `x` is returned. + +

flow_from_dataframe

+ + + +Takes the dataframe and the path to a directory + and generates batches of augmented/normalized data. + +**A simple tutorial can be found **[here]( + http://bit.ly/keras_flow_from_dataframe). + +# Arguments + dataframe: Pandas dataframe containing the filepaths relative to + `directory` (or absolute paths if `directory` is None) of the + images in a string column. It should include other column/s + depending on the `class_mode`: + - if `class_mode` is `"categorical"` (default value) it must + include the `y_col` column with the class/es of each image. + Values in column can be string/list/tuple if a single class + or list/tuple if multiple classes. + - if `class_mode` is `"binary"` or `"sparse"` it must include + the given `y_col` column with class values as strings. + - if `class_mode` is `"raw"` or `"multi_output"` it should contain + the columns specified in `y_col`. + - if `class_mode` is `"input"` or `None` no extra column is needed. + directory: string, path to the directory to read images from. If `None`, + data in `x_col` column should be absolute paths. + x_col: string, column in `dataframe` that contains the filenames (or + absolute paths if `directory` is `None`). + y_col: string or list, column/s in `dataframe` that has the target data. + weight_col: string, column in `dataframe` that contains the sample + weights. Default: `None`. + target_size: tuple of integers `(height, width)`, default: `(256, 256)`. + The dimensions to which all images found will be resized. + color_mode: one of "grayscale", "rgb", "rgba". Default: "rgb". + Whether the images will be converted to have 1 or 3 color channels. + classes: optional list of classes (e.g. `['dogs', 'cats']`). + Default: None. If not provided, the list of classes will be + automatically inferred from the `y_col`, + which will map to the label indices, will be alphanumeric). + The dictionary containing the mapping from class names to class + indices can be obtained via the attribute `class_indices`. + class_mode: one of "binary", "categorical", "input", "multi_output", + "raw", sparse" or None. Default: "categorical". + Mode for yielding the targets: + - `"binary"`: 1D NumPy array of binary labels, + - `"categorical"`: 2D NumPy array of one-hot encoded labels. + Supports multi-label output. + - `"input"`: images identical to input images (mainly used to + work with autoencoders), + - `"multi_output"`: list with the values of the different columns, + - `"raw"`: NumPy array of values in `y_col` column(s), + - `"sparse"`: 1D NumPy array of integer labels, + - `None`, no targets are returned (the generator will only yield + batches of image data, which is useful to use in + `model.predict_generator()`). + batch_size: size of the batches of data (default: 32). + shuffle: whether to shuffle the data (default: True) + seed: optional random seed for shuffling and transformations. + save_to_dir: None or str (default: None). + This allows you to optionally specify a directory + to which to save the augmented pictures being generated + (useful for visualizing what you are doing). + save_prefix: str. Prefix to use for filenames of saved pictures + (only relevant if `save_to_dir` is set). + save_format: one of "png", "jpeg" + (only relevant if `save_to_dir` is set). Default: "png". + follow_links: whether to follow symlinks inside class subdirectories + (default: False). + subset: Subset of data (`"training"` or `"validation"`) if + `validation_split` is set in `ImageDataGenerator`. 
+ interpolation: Interpolation method used to resample the image if the + target size is different from that of the loaded image. + Supported methods are `"nearest"`, `"bilinear"`, and `"bicubic"`. + If PIL version 1.1.3 or newer is installed, `"lanczos"` is also + supported. If PIL version 3.4.0 or newer is installed, `"box"` and + `"hamming"` are also supported. By default, `"nearest"` is used. + validate_filenames: Boolean, whether to validate image filenames in + `x_col`. If `True`, invalid images will be ignored. Disabling this + option can lead to speed-up in the execution of this function. + Default: `True`. + +# Returns + A `DataFrameIterator` yielding tuples of `(x, y)` + where `x` is a NumPy array containing a batch + of images with shape `(batch_size, *target_size, channels)` + and `y` is a NumPy array of corresponding labels. + +
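+A usage sketch for `flow_from_dataframe`; the dataframe contents and the
+`data/images` directory below are placeholders, not real data:
+
+```python
+import pandas as pd
+import tensorflow as tf
+
+# Placeholder dataframe: filenames relative to `directory` plus a label column.
+df = pd.DataFrame({
+    "filename": ["cats/cat001.jpg", "dogs/dog001.jpg"],
+    "label": ["cat", "dog"],
+})
+
+datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
+generator = datagen.flow_from_dataframe(
+    dataframe=df,
+    directory="data/images",
+    x_col="filename",
+    y_col="label",
+    target_size=(150, 150),
+    class_mode="categorical",
+    batch_size=2)
+```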

flow_from_directory

+ + + +Takes the path to a directory & generates batches of augmented data. + +# Arguments + directory: string, path to the target directory. + It should contain one subdirectory per class. + Any PNG, JPG, BMP, PPM or TIF images + inside each of the subdirectories directory tree + will be included in the generator. + See [this script]( + https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d) + for more details. + target_size: Tuple of integers `(height, width)`, + default: `(256, 256)`. + The dimensions to which all images found will be resized. + color_mode: One of "grayscale", "rgb", "rgba". Default: "rgb". + Whether the images will be converted to + have 1, 3, or 4 channels. + classes: Optional list of class subdirectories + (e.g. `['dogs', 'cats']`). Default: None. + If not provided, the list of classes will be automatically + inferred from the subdirectory names/structure + under `directory`, where each subdirectory will + be treated as a different class + (and the order of the classes, which will map to the label + indices, will be alphanumeric). + The dictionary containing the mapping from class names to class + indices can be obtained via the attribute `class_indices`. + class_mode: One of "categorical", "binary", "sparse", + "input", or None. Default: "categorical". + Determines the type of label arrays that are returned: + - "categorical" will be 2D one-hot encoded labels, + - "binary" will be 1D binary labels, + "sparse" will be 1D integer labels, + - "input" will be images identical + to input images (mainly used to work with autoencoders). + - If None, no labels are returned + (the generator will only yield batches of image data, + which is useful to use with `model.predict_generator()`). + Please note that in case of class_mode None, + the data still needs to reside in a subdirectory + of `directory` for it to work correctly. + batch_size: Size of the batches of data (default: 32). + shuffle: Whether to shuffle the data (default: True) + If set to False, sorts the data in alphanumeric order. + seed: Optional random seed for shuffling and transformations. + save_to_dir: None or str (default: None). + This allows you to optionally specify + a directory to which to save + the augmented pictures being generated + (useful for visualizing what you are doing). + save_prefix: Str. Prefix to use for filenames of saved pictures + (only relevant if `save_to_dir` is set). + save_format: One of "png", "jpeg" + (only relevant if `save_to_dir` is set). Default: "png". + follow_links: Whether to follow symlinks inside + class subdirectories (default: False). + subset: Subset of data (`"training"` or `"validation"`) if + `validation_split` is set in `ImageDataGenerator`. + interpolation: Interpolation method used to + resample the image if the + target size is different from that of the loaded image. + Supported methods are `"nearest"`, `"bilinear"`, + and `"bicubic"`. + If PIL version 1.1.3 or newer is installed, `"lanczos"` is also + supported. If PIL version 3.4.0 or newer is installed, + `"box"` and `"hamming"` are also supported. + By default, `"nearest"` is used. + +# Returns + A `DirectoryIterator` yielding tuples of `(x, y)` + where `x` is a NumPy array containing a batch + of images with shape `(batch_size, *target_size, channels)` + and `y` is a NumPy array of corresponding labels. + +

get_random_transform

+ + + +Generates random parameters for a transformation. + +# Arguments + seed: Random seed. + img_shape: Tuple of integers. + Shape of the image that is transformed. + +# Returns + A dictionary containing randomly chosen parameters describing the + transformation. + +
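+One common pattern, sketched below with placeholder arrays, is to draw a
+single set of random parameters and apply it to both an image and its mask
+with `apply_transform` so the two stay aligned:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+datagen = tf.keras.preprocessing.image.ImageDataGenerator(
+    rotation_range=30, horizontal_flip=True)
+
+# Placeholder image/mask pair with matching spatial dimensions.
+image = np.random.random((64, 64, 3))
+mask = np.random.random((64, 64, 1))
+
+params = datagen.get_random_transform(image.shape)
+image_aug = datagen.apply_transform(image, params)
+mask_aug = datagen.apply_transform(mask, params)  # same geometric transform
+```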

random_transform

+ + + +Applies a random transformation to an image. + +# Arguments + x: 3D tensor, single image. + seed: Random seed. + +# Returns + A randomly transformed version of the input (same shape). + +

standardize

+ + + +Applies the normalization configuration in-place to a batch of inputs. + +`x` is changed in-place since the function is mainly used internally +to standardize images and feed them to your network. If a copy of `x` +would be created instead it would have a significant performance cost. +If you want to apply this method without changing the input in-place +you can call the method creating a copy before: + +standardize(np.copy(x)) + +# Arguments + x: Batch of inputs to be normalized. + +# Returns + The inputs, normalized. + + + diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/Iterator.md b/site/en/api_docs/python/tf/keras/preprocessing/image/Iterator.md new file mode 100644 index 00000000000..a3784456804 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/Iterator.md @@ -0,0 +1,129 @@ +description: Base class for image data iterators. + +
+ + + + + + + + + + +
+ +# tf.keras.preprocessing.image.Iterator + + + + + + + + + +Base class for image data iterators. + +Inherits From: [`Sequence`](../../../../tf/keras/utils/Sequence.md) + + + + + + + + + +Every `Iterator` must implement the `_get_batches_of_transformed_samples` +method. + +# Arguments + n: Integer, total number of samples in the dataset to loop over. + batch_size: Integer, size of a batch. + shuffle: Boolean, whether to shuffle the data between epochs. + seed: Random seeding for data shuffling. + +## Methods + +
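+A rough sketch of that contract (the subclass below is hypothetical, not a
+documented recipe): the base class handles index generation, shuffling and
+batching, and the subclass only maps a batch of indices to data.
+
+```python
+import numpy as np
+from tensorflow.keras.preprocessing.image import Iterator
+
+
+class ArrayIterator(Iterator):
+  """Hypothetical minimal subclass yielding slices of an in-memory array."""
+
+  def __init__(self, data, batch_size=32, shuffle=True, seed=None):
+    self.data = data
+    super(ArrayIterator, self).__init__(len(data), batch_size, shuffle, seed)
+
+  def _get_batches_of_transformed_samples(self, index_array):
+    # `index_array` holds the sample indices selected for this batch.
+    return self.data[index_array]
+
+
+it = ArrayIterator(np.arange(100).reshape(100, 1), batch_size=10)
+first_batch = next(it)  # array of shape (10, 1)
+```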

next

+ + + +For python 2.x. + +# Returns + The next batch. + +

on_epoch_end

+ + + + + + +

reset

+ + + + + + +

__getitem__

+ + + + + + +

__iter__

+ + + + + + +

__len__

+ + + + + + + + +## Class Variables + +* `white_list_formats` diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/NumpyArrayIterator.md b/site/en/api_docs/python/tf/keras/preprocessing/image/NumpyArrayIterator.md new file mode 100644 index 00000000000..7768be417b5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/NumpyArrayIterator.md @@ -0,0 +1,236 @@ +description: Iterator yielding data from a Numpy array. + +
+ + + + + + + + + + + +
+ +# tf.keras.preprocessing.image.NumpyArrayIterator + + + + + + + + + +Iterator yielding data from a Numpy array. + +Inherits From: [`Iterator`](../../../../tf/keras/preprocessing/image/Iterator.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numpy array of input data or tuple. +If tuple, the second elements is either +another numpy array or a list of numpy arrays, +each of which gets passed +through as an output without any modifications. +
+`y` + +Numpy array of targets data. +
+`image_data_generator` + +Instance of `ImageDataGenerator` +to use for random transformations and normalization. +
+`batch_size` + +Integer, size of a batch. +
+`shuffle` + +Boolean, whether to shuffle the data between epochs. +
+`sample_weight` + +Numpy array of sample weights. +
+`seed` + +Random seed for data shuffling. +
+`data_format` + +String, one of `channels_first`, `channels_last`. +
+`save_to_dir` + +Optional directory where to save the pictures +being yielded, in a viewable format. This is useful +for visualizing the random transformations being +applied, for debugging purposes. +
+`save_prefix` + +String prefix to use for saving sample +images (if `save_to_dir` is set). +
+`save_format` + +Format to use for saving sample images +(if `save_to_dir` is set). +
+`subset` + +Subset of data (`"training"` or `"validation"`) if +validation_split is set in ImageDataGenerator. +
+`dtype` + +Dtype to use for the generated arrays. +
+ + + +## Methods + +
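+Typically a `NumpyArrayIterator` is returned by `ImageDataGenerator.flow`
+rather than instantiated directly. A short sketch with placeholder arrays:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+datagen = tf.keras.preprocessing.image.ImageDataGenerator(horizontal_flip=True)
+
+# Placeholder data: 16 RGB images of size 32x32 with integer labels.
+x = np.random.random((16, 32, 32, 3))
+y = np.arange(16)
+
+it = datagen.flow(x, y, batch_size=4)  # returns a NumpyArrayIterator
+x_batch, y_batch = next(it)            # augmented batch of 4 images
+```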

next

+ + + +For python 2.x. + +# Returns + The next batch. + +

on_epoch_end

+ + + + + + +

reset

+ + + + + + +

__getitem__

+ + + + + + +

__iter__

+ + + + + + +

__len__

+ + + + + + + + +## Class Variables + +* `white_list_formats` diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/apply_affine_transform.md b/site/en/api_docs/python/tf/keras/preprocessing/image/apply_affine_transform.md new file mode 100644 index 00000000000..c21a76969b1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/apply_affine_transform.md @@ -0,0 +1,61 @@ +description: Applies an affine transformation specified by the parameters given. + +
+ + +
+ +# tf.keras.preprocessing.image.apply_affine_transform + + + + + + + + + +Applies an affine transformation specified by the parameters given. + + + + + + + + + +# Arguments + x: 2D numpy array, single image. + theta: Rotation angle in degrees. + tx: Width shift. + ty: Height shift. + shear: Shear angle in degrees. + zx: Zoom in x direction. + zy: Zoom in y direction. + row_axis: Index of axis for rows in the input image. + col_axis: Index of axis for columns in the input image. + channel_axis: Index of axis for channels in the input image. + fill_mode: Points outside the boundaries of the input + are filled according to the given mode + (one of `{'constant', 'nearest', 'reflect', 'wrap'}`). + cval: Value used for points outside the boundaries + of the input if `mode='constant'`. + order: int, order of interpolation. + +# Returns + The transformed version of the input. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/apply_brightness_shift.md new file mode 100644 index 00000000000..51df6df0c4b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/apply_brightness_shift.md @@ -0,0 +1,50 @@ +description: Performs a brightness shift. + +
+ + +
+ +# tf.keras.preprocessing.image.apply_brightness_shift + + + + + + + + + +Performs a brightness shift. + + + + + + + + + +# Arguments + x: Input tensor. Must be 3D. + brightness: Float. The new brightness value. + channel_axis: Index of axis for channels in the input tensor. + +# Returns + Numpy image tensor. + +# Raises + ValueError if `brightness_range` isn't a tuple. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/apply_channel_shift.md b/site/en/api_docs/python/tf/keras/preprocessing/image/apply_channel_shift.md new file mode 100644 index 00000000000..00a82a81a8b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/apply_channel_shift.md @@ -0,0 +1,47 @@ +description: Performs a channel shift. + +
+ + +
+ +# tf.keras.preprocessing.image.apply_channel_shift + + + + + + + + + +Performs a channel shift. + + + + + + + + + +# Arguments + x: Input tensor. Must be 3D. + intensity: Transformation intensity. + channel_axis: Index of axis for channels in the input tensor. + +# Returns + Numpy image tensor. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/array_to_img.md b/site/en/api_docs/python/tf/keras/preprocessing/image/array_to_img.md new file mode 100644 index 00000000000..0ef1cab5cde --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/array_to_img.md @@ -0,0 +1,137 @@ +description: Converts a 3D Numpy array to a PIL Image instance. + +
+ + +
+ +# tf.keras.preprocessing.image.array_to_img + + + + + + + + + +Converts a 3D Numpy array to a PIL Image instance. + + + + + + + + + + +#### Usage: + + + +```python +import numpy as np +import tensorflow as tf + +# Requires the Pillow (PIL) package to be installed. +img = np.random.random(size=(100, 100, 3)) +pil_img = tf.keras.preprocessing.image.array_to_img(img) +``` + + + + + + + + + + + + + + + + + + + + +
+`x` + +Input Numpy array. +
+`data_format` + +Image data format, can be either "channels_first" or +"channels_last". Defaults to `None`, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+`scale` + +Whether to rescale image values to be within `[0, 255]`. Defaults +to `True`. +
+`dtype` + +Dtype to use. Default to `None`, in which case the global setting +tf.keras.backend.floatx() is used (unless you changed it, it defaults +to "float32") +
+ + + + + + + + + + + +
+A PIL Image instance. +
+ + + + + + + + + + + + + + + +
+`ImportError` + +if PIL is not available. +
+`ValueError` + +if invalid `x` or `data_format` is passed. +
+ diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/img_to_array.md b/site/en/api_docs/python/tf/keras/preprocessing/image/img_to_array.md new file mode 100644 index 00000000000..6a224df38bb --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/img_to_array.md @@ -0,0 +1,123 @@ +description: Converts a PIL Image instance to a Numpy array. + +
+ + +
+ +# tf.keras.preprocessing.image.img_to_array + + + + + + + + + +Converts a PIL Image instance to a Numpy array. + + + + + + + + + + +#### Usage: + + + +```python +import numpy as np +import tensorflow as tf + +# Requires the Pillow (PIL) package to be installed. +img_data = np.random.random(size=(100, 100, 3)) +img = tf.keras.preprocessing.image.array_to_img(img_data) +array = tf.keras.preprocessing.image.img_to_array(img) +``` + + + + + + + + + + + + + + + + + +
+`img` + +Input PIL Image instance. +
+`data_format` + +Image data format, can be either "channels_first" or +"channels_last". Defaults to `None`, in which case the global setting +tf.keras.backend.image_data_format() is used (unless you changed it, +it defaults to "channels_last"). +
+`dtype` + +Dtype to use. Default to `None`, in which case the global setting +tf.keras.backend.floatx() is used (unless you changed it, it defaults +to "float32") +
+ + + + + + + + + + + +
+A 3D Numpy array. +
+ + + + + + + + + + + + +
+`ValueError` + +if invalid `img` or `data_format` is passed. +
+ diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/load_img.md b/site/en/api_docs/python/tf/keras/preprocessing/image/load_img.md new file mode 100644 index 00000000000..092a8ca6cca --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/load_img.md @@ -0,0 +1,63 @@ +description: Loads an image into PIL format. + +
+ + +
+ +# tf.keras.preprocessing.image.load_img + + + + + + + + + +Loads an image into PIL format. + + + + + + + + + +# Arguments + path: Path to image file. + grayscale: DEPRECATED use `color_mode="grayscale"`. + color_mode: The desired image format. One of "grayscale", "rgb", "rgba". + "grayscale" supports 8-bit images and 32-bit signed integer images. + Default: "rgb". + target_size: Either `None` (default to original size) + or tuple of ints `(img_height, img_width)`. + interpolation: Interpolation method used to resample the image if the + target size is different from that of the loaded image. + Supported methods are "nearest", "bilinear", and "bicubic". + If PIL version 1.1.3 or newer is installed, "lanczos" is also + supported. If PIL version 3.4.0 or newer is installed, "box" and + "hamming" are also supported. + Default: "nearest". + +# Returns + A PIL Image instance. + +# Raises + ImportError: if PIL is not available. + ValueError: if interpolation method is not supported. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/random_brightness.md b/site/en/api_docs/python/tf/keras/preprocessing/image/random_brightness.md new file mode 100644 index 00000000000..27582cdf954 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/random_brightness.md @@ -0,0 +1,50 @@ +description: Performs a random brightness shift. + +
+ + +
+ +# tf.keras.preprocessing.image.random_brightness + + + + + + + + + +Performs a random brightness shift. + + + + + + + + + +# Arguments + x: Input tensor. Must be 3D. + brightness_range: Tuple of floats; brightness range. + channel_axis: Index of axis for channels in the input tensor. + +# Returns + Numpy image tensor. + +# Raises + ValueError if `brightness_range` isn't a tuple. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/random_channel_shift.md b/site/en/api_docs/python/tf/keras/preprocessing/image/random_channel_shift.md new file mode 100644 index 00000000000..e683842a218 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/random_channel_shift.md @@ -0,0 +1,47 @@ +description: Performs a random channel shift. + +
+ + +
+ +# tf.keras.preprocessing.image.random_channel_shift + + + + + + + + + +Performs a random channel shift. + + + + + + + + + +# Arguments + x: Input tensor. Must be 3D. + intensity_range: Transformation intensity. + channel_axis: Index of axis for channels in the input tensor. + +# Returns + Numpy image tensor. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/random_rotation.md b/site/en/api_docs/python/tf/keras/preprocessing/image/random_rotation.md new file mode 100644 index 00000000000..66c919dc0c4 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/random_rotation.md @@ -0,0 +1,57 @@ +description: Performs a random rotation of a Numpy image tensor. + +
+ + +
+ +# tf.keras.preprocessing.image.random_rotation + + + + + + + + + +Performs a random rotation of a Numpy image tensor. + + + + + + + + + +# Arguments + x: Input tensor. Must be 3D. + rg: Rotation range, in degrees. + row_axis: Index of axis for rows in the input tensor. + col_axis: Index of axis for columns in the input tensor. + channel_axis: Index of axis for channels in the input tensor. + fill_mode: Points outside the boundaries of the input + are filled according to the given mode + (one of `{'constant', 'nearest', 'reflect', 'wrap'}`). + cval: Value used for points outside the boundaries + of the input if `mode='constant'`. + interpolation_order: int, order of spline interpolation. + see `ndimage.interpolation.affine_transform` + +# Returns + Rotated Numpy image tensor. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/random_shear.md b/site/en/api_docs/python/tf/keras/preprocessing/image/random_shear.md new file mode 100644 index 00000000000..1a9316c8bf2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/random_shear.md @@ -0,0 +1,57 @@ +description: Performs a random spatial shear of a Numpy image tensor. + +
+ + +
+ +# tf.keras.preprocessing.image.random_shear + + + + + + + + + +Performs a random spatial shear of a Numpy image tensor. + + + + + + + + + +# Arguments + x: Input tensor. Must be 3D. + intensity: Transformation intensity in degrees. + row_axis: Index of axis for rows in the input tensor. + col_axis: Index of axis for columns in the input tensor. + channel_axis: Index of axis for channels in the input tensor. + fill_mode: Points outside the boundaries of the input + are filled according to the given mode + (one of `{'constant', 'nearest', 'reflect', 'wrap'}`). + cval: Value used for points outside the boundaries + of the input if `mode='constant'`. + interpolation_order: int, order of spline interpolation. + see `ndimage.interpolation.affine_transform` + +# Returns + Sheared Numpy image tensor. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/random_shift.md b/site/en/api_docs/python/tf/keras/preprocessing/image/random_shift.md new file mode 100644 index 00000000000..92cd040b9ba --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/random_shift.md @@ -0,0 +1,58 @@ +description: Performs a random spatial shift of a Numpy image tensor. + +
+ + +
+ +# tf.keras.preprocessing.image.random_shift + + + + + + + + + +Performs a random spatial shift of a Numpy image tensor. + + + + + + + + + +# Arguments + x: Input tensor. Must be 3D. + wrg: Width shift range, as a float fraction of the width. + hrg: Height shift range, as a float fraction of the height. + row_axis: Index of axis for rows in the input tensor. + col_axis: Index of axis for columns in the input tensor. + channel_axis: Index of axis for channels in the input tensor. + fill_mode: Points outside the boundaries of the input + are filled according to the given mode + (one of `{'constant', 'nearest', 'reflect', 'wrap'}`). + cval: Value used for points outside the boundaries + of the input if `mode='constant'`. + interpolation_order: int, order of spline interpolation. + see `ndimage.interpolation.affine_transform` + +# Returns + Shifted Numpy image tensor. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/random_zoom.md b/site/en/api_docs/python/tf/keras/preprocessing/image/random_zoom.md new file mode 100644 index 00000000000..19e21a09fe9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/random_zoom.md @@ -0,0 +1,60 @@ +description: Performs a random spatial zoom of a Numpy image tensor. + +
+ + +
+ +# tf.keras.preprocessing.image.random_zoom + + + + + + + + + +Performs a random spatial zoom of a Numpy image tensor. + + + + + + + + + +# Arguments + x: Input tensor. Must be 3D. + zoom_range: Tuple of floats; zoom range for width and height. + row_axis: Index of axis for rows in the input tensor. + col_axis: Index of axis for columns in the input tensor. + channel_axis: Index of axis for channels in the input tensor. + fill_mode: Points outside the boundaries of the input + are filled according to the given mode + (one of `{'constant', 'nearest', 'reflect', 'wrap'}`). + cval: Value used for points outside the boundaries + of the input if `mode='constant'`. + interpolation_order: int, order of spline interpolation. + see `ndimage.interpolation.affine_transform` + +# Returns + Zoomed Numpy image tensor. + +# Raises + ValueError: if `zoom_range` isn't a tuple. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/image/save_img.md b/site/en/api_docs/python/tf/keras/preprocessing/image/save_img.md new file mode 100644 index 00000000000..7886c5ae883 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/image/save_img.md @@ -0,0 +1,100 @@ +description: Saves an image stored as a Numpy array to a path or file object. + +
+ + +
+ +# tf.keras.preprocessing.image.save_img + + + + + + + + + +Saves an image stored as a Numpy array to a path or file object. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`path` + +Path or file object. +
+`x` + +Numpy array. +
+`data_format` + +Image data format, +either "channels_first" or "channels_last". +
+`file_format` + +Optional file format override. If omitted, the +format to use is determined from the filename extension. +If a file object was used instead of a filename, this +parameter should always be used. +
+`scale` + +Whether to rescale image values to be within `[0, 255]`. +
+`**kwargs` + +Additional keyword arguments passed to `PIL.Image.save()`. +
+ diff --git a/site/en/api_docs/python/tf/keras/preprocessing/sequence.md b/site/en/api_docs/python/tf/keras/preprocessing/sequence.md new file mode 100644 index 00000000000..7b475cdd190 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/sequence.md @@ -0,0 +1,33 @@ +description: Utilities for preprocessing sequence data. + +
+ + +
+ +# Module: tf.keras.preprocessing.sequence + + + + + + + + + +Utilities for preprocessing sequence data. + + + +## Classes + +[`class TimeseriesGenerator`](../../../tf/keras/preprocessing/sequence/TimeseriesGenerator.md): Utility class for generating batches of temporal data. + +## Functions + +[`make_sampling_table(...)`](../../../tf/keras/preprocessing/sequence/make_sampling_table.md): Generates a word rank-based probabilistic sampling table. + +[`pad_sequences(...)`](../../../tf/keras/preprocessing/sequence/pad_sequences.md): Pads sequences to the same length. + +[`skipgrams(...)`](../../../tf/keras/preprocessing/sequence/skipgrams.md): Generates skipgram word pairs. + diff --git a/site/en/api_docs/python/tf/keras/preprocessing/sequence/TimeseriesGenerator.md b/site/en/api_docs/python/tf/keras/preprocessing/sequence/TimeseriesGenerator.md new file mode 100644 index 00000000000..bbdc9c36153 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/sequence/TimeseriesGenerator.md @@ -0,0 +1,183 @@ +description: Utility class for generating batches of temporal data. + +
+ + + + + + + + + +
+ +# tf.keras.preprocessing.sequence.TimeseriesGenerator + + + + + + + + + +Utility class for generating batches of temporal data. + +Inherits From: [`Sequence`](../../../../tf/keras/utils/Sequence.md) + + + + + + + + + +This class takes in a sequence of data-points gathered at +equal intervals, along with time series parameters such as +stride, length of history, etc., to produce batches for +training/validation. +# Arguments + data: Indexable generator (such as list or Numpy array) + containing consecutive data points (timesteps). + The data should be at 2D, and axis 0 is expected + to be the time dimension. + targets: Targets corresponding to timesteps in `data`. + It should have same length as `data`. + length: Length of the output sequences (in number of timesteps). + sampling_rate: Period between successive individual timesteps + within sequences. For rate `r`, timesteps + `data[i]`, `data[i-r]`, ... `data[i - length]` + are used for create a sample sequence. + stride: Period between successive output sequences. + For stride `s`, consecutive output samples would + be centered around `data[i]`, `data[i+s]`, `data[i+2*s]`, etc. + start_index: Data points earlier than `start_index` will not be used + in the output sequences. This is useful to reserve part of the + data for test or validation. + end_index: Data points later than `end_index` will not be used + in the output sequences. This is useful to reserve part of the + data for test or validation. + shuffle: Whether to shuffle output samples, + or instead draw them in chronological order. + reverse: Boolean: if `true`, timesteps in each output sample will be + in reverse chronological order. + batch_size: Number of timeseries samples in each batch + (except maybe the last one). +# Returns + A [Sequence](/utils/#sequence) instance. +# Examples +```python +from keras.preprocessing.sequence import TimeseriesGenerator +import numpy as np +data = np.array([[i] for i in range(50)]) +targets = np.array([[i] for i in range(50)]) +data_gen = TimeseriesGenerator(data, targets, + length=10, sampling_rate=2, + batch_size=2) +assert len(data_gen) == 20 +batch_0 = data_gen[0] +x, y = batch_0 +assert np.array_equal(x, + np.array([[[0], [2], [4], [6], [8]], + [[1], [3], [5], [7], [9]]])) +assert np.array_equal(y, + np.array([[10], [11]])) +``` + +## Methods + +

get_config

+ + + +Returns the TimeseriesGenerator configuration as Python dictionary. + +# Returns + A Python dictionary with the TimeseriesGenerator configuration. + +

on_epoch_end

+ +View source + + + +Method called at the end of every epoch. + + +

to_json

+ + + +Returns a JSON string containing the timeseries generator +configuration. To load a generator from a JSON string, use +`keras.preprocessing.sequence.timeseries_generator_from_json(json_string)`. + +# Arguments + **kwargs: Additional keyword arguments + to be passed to `json.dumps()`. + +# Returns + A JSON string containing the timeseries generator configuration. + +
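+A round-trip sketch; per the note above, `timeseries_generator_from_json`
+lives in `keras.preprocessing.sequence` (it is not listed in the
+`tf.keras.preprocessing.sequence` module page above), so the import below
+assumes the standalone Keras preprocessing utilities are installed:
+
+```python
+import numpy as np
+from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
+from keras.preprocessing.sequence import timeseries_generator_from_json
+
+data = np.arange(100).reshape(100, 1)
+gen = TimeseriesGenerator(data, data, length=10, batch_size=8)
+
+json_string = gen.to_json()  # serializes the generator configuration
+restored = timeseries_generator_from_json(json_string)
+assert len(restored) == len(gen)
+```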

__getitem__

+ + + + + + +

__iter__

+ +View source + + + +Create a generator that iterate over the Sequence. + + +

__len__

+ + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/preprocessing/sequence/make_sampling_table.md b/site/en/api_docs/python/tf/keras/preprocessing/sequence/make_sampling_table.md new file mode 100644 index 00000000000..76a62d83e8d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/sequence/make_sampling_table.md @@ -0,0 +1,66 @@ +description: Generates a word rank-based probabilistic sampling table. + +
+ + +
+ +# tf.keras.preprocessing.sequence.make_sampling_table + + + + + + + + + +Generates a word rank-based probabilistic sampling table. + + + + + + + + + +Used for generating the `sampling_table` argument for `skipgrams`. +`sampling_table[i]` is the probability of sampling +the i-th most common word in a dataset +(more common words should be sampled less frequently, for balance). + +The sampling probabilities are generated according +to the sampling distribution used in word2vec: + +``` +p(word) = (min(1, sqrt(word_frequency / sampling_factor) / + (word_frequency / sampling_factor))) +``` + +We assume that the word frequencies follow Zipf's law (s=1) to derive +a numerical approximation of frequency(rank): + +`frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank))` +where `gamma` is the Euler-Mascheroni constant. + +# Arguments + size: Int, number of possible words to sample. + sampling_factor: The sampling factor in the word2vec formula. + +# Returns + A 1D Numpy array of length `size` where the ith entry + is the probability that a word of rank i should be sampled. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences.md new file mode 100644 index 00000000000..f5e6d9d1570 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences.md @@ -0,0 +1,180 @@ +description: Pads sequences to the same length. + +
+ + +
+ +# tf.keras.preprocessing.sequence.pad_sequences + + + + + + + + + +Pads sequences to the same length. + + + + + + + + + +This function transforms a list (of length `num_samples`) +of sequences (lists of integers) +into a 2D Numpy array of shape `(num_samples, num_timesteps)`. +`num_timesteps` is either the `maxlen` argument if provided, +or the length of the longest sequence in the list. + +Sequences that are shorter than `num_timesteps` +are padded with `value` until they are `num_timesteps` long. + +Sequences longer than `num_timesteps` are truncated +so that they fit the desired length. + +The position where padding or truncation happens is determined by +the arguments `padding` and `truncating`, respectively. +Pre-padding or removing values from the beginning of the sequence is the +default. + +``` +>>> sequence = [[1], [2, 3], [4, 5, 6]] +>>> tf.keras.preprocessing.sequence.pad_sequences(sequence) +array([[0, 0, 1], + [0, 2, 3], + [4, 5, 6]], dtype=int32) +``` + +``` +>>> tf.keras.preprocessing.sequence.pad_sequences(sequence, value=-1) +array([[-1, -1, 1], + [-1, 2, 3], + [ 4, 5, 6]], dtype=int32) +``` + +``` +>>> tf.keras.preprocessing.sequence.pad_sequences(sequence, padding='post') +array([[1, 0, 0], + [2, 3, 0], + [4, 5, 6]], dtype=int32) +``` + +``` +>>> tf.keras.preprocessing.sequence.pad_sequences(sequence, maxlen=2) +array([[0, 1], + [2, 3], + [5, 6]], dtype=int32) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sequences` + +List of sequences (each sequence is a list of integers). +
+`maxlen` + +Optional Int, maximum length of all sequences. If not provided, +sequences will be padded to the length of the longest individual +sequence. +
+`dtype` + +(Optional, defaults to int32). Type of the output sequences. +To pad sequences with variable length strings, you can use `object`. +
+`padding` + +String, 'pre' or 'post' (optional, defaults to 'pre'): +pad either before or after each sequence. +
+`truncating` + +String, 'pre' or 'post' (optional, defaults to 'pre'): +remove values from sequences larger than +`maxlen`, either at the beginning or at the end of the sequences. +
+`value` + +Float or String, padding value. (Optional, defaults to 0.) +
+ + + + + + + + + + + +
+Numpy array with shape `(len(sequences), maxlen)` +
+ + + + + + + + + + + + +
+`ValueError` + +In case of invalid values for `truncating` or `padding`, +or in case of invalid shape for a `sequences` entry. +
+ diff --git a/site/en/api_docs/python/tf/keras/preprocessing/sequence/skipgrams.md b/site/en/api_docs/python/tf/keras/preprocessing/sequence/skipgrams.md new file mode 100644 index 00000000000..ed635ec79c1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/sequence/skipgrams.md @@ -0,0 +1,80 @@ +description: Generates skipgram word pairs. + +
+ + +
+ +# tf.keras.preprocessing.sequence.skipgrams + + + + + + + + + +Generates skipgram word pairs. + + + + + + + + + +This function transforms a sequence of word indexes (list of integers) +into tuples of words of the form: + +- (word, word in the same window), with label 1 (positive samples). +- (word, random word from the vocabulary), with label 0 (negative samples). + +Read more about Skipgram in this gnomic paper by Mikolov et al.: +[Efficient Estimation of Word Representations in +Vector Space](http://arxiv.org/pdf/1301.3781v3.pdf) + +# Arguments + sequence: A word sequence (sentence), encoded as a list + of word indices (integers). If using a `sampling_table`, + word indices are expected to match the rank + of the words in a reference dataset (e.g. 10 would encode + the 10-th most frequently occurring token). + Note that index 0 is expected to be a non-word and will be skipped. + vocabulary_size: Int, maximum possible word index + 1 + window_size: Int, size of sampling windows (technically half-window). + The window of a word `w_i` will be + `[i - window_size, i + window_size+1]`. + negative_samples: Float >= 0. 0 for no negative (i.e. random) samples. + 1 for same number as positive samples. + shuffle: Whether to shuffle the word couples before returning them. + categorical: bool. if False, labels will be + integers (eg. `[0, 1, 1 .. ]`), + if `True`, labels will be categorical, e.g. + `[[1,0],[0,1],[0,1] .. ]`. + sampling_table: 1D array of size `vocabulary_size` where the entry i + encodes the probability to sample a word of rank i. + seed: Random seed. + +# Returns + couples, labels: where `couples` are int pairs and + `labels` are either 0 or 1. + +# Note + By convention, index 0 in the vocabulary is + a non-word and will be skipped. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/text.md b/site/en/api_docs/python/tf/keras/preprocessing/text.md new file mode 100644 index 00000000000..f97915f8f10 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/text.md @@ -0,0 +1,35 @@ +description: Utilities for text input preprocessing. + +
+ + +
+ +# Module: tf.keras.preprocessing.text + + + + + + + + + +Utilities for text input preprocessing. + + + +## Classes + +[`class Tokenizer`](../../../tf/keras/preprocessing/text/Tokenizer.md): Text tokenization utility class. + +## Functions + +[`hashing_trick(...)`](../../../tf/keras/preprocessing/text/hashing_trick.md): Converts a text to a sequence of indexes in a fixed-size hashing space. + +[`one_hot(...)`](../../../tf/keras/preprocessing/text/one_hot.md): One-hot encodes a text into a list of word indexes of size n. + +[`text_to_word_sequence(...)`](../../../tf/keras/preprocessing/text/text_to_word_sequence.md): Converts a text to a sequence of words (or tokens). + +[`tokenizer_from_json(...)`](../../../tf/keras/preprocessing/text/tokenizer_from_json.md): Parses a JSON tokenizer configuration file and returns a + diff --git a/site/en/api_docs/python/tf/keras/preprocessing/text/Tokenizer.md b/site/en/api_docs/python/tf/keras/preprocessing/text/Tokenizer.md new file mode 100644 index 00000000000..0cdb5174d89 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/text/Tokenizer.md @@ -0,0 +1,272 @@ +description: Text tokenization utility class. + +
+ + + + + + + + + + + + + +
+ +# tf.keras.preprocessing.text.Tokenizer + + + + + + + + + +Text tokenization utility class. + + + + + + + + + +This class allows to vectorize a text corpus, by turning each +text into either a sequence of integers (each integer being the index +of a token in a dictionary) or into a vector where the coefficient +for each token could be binary, based on word count, based on tf-idf... + +# Arguments + num_words: the maximum number of words to keep, based + on word frequency. Only the most common `num_words-1` words will + be kept. + filters: a string where each element is a character that will be + filtered from the texts. The default is all punctuation, plus + tabs and line breaks, minus the `'` character. + lower: boolean. Whether to convert the texts to lowercase. + split: str. Separator for word splitting. + char_level: if True, every character will be treated as a token. + oov_token: if given, it will be added to word_index and used to + replace out-of-vocabulary words during text_to_sequence calls + +By default, all punctuation is removed, turning the texts into +space-separated sequences of words +(words maybe include the `'` character). These sequences are then +split into lists of tokens. They will then be indexed or vectorized. + +`0` is a reserved index that won't be assigned to any word. + +## Methods + +

fit_on_sequences

+ + + +Updates internal vocabulary based on a list of sequences. + +Required before using `sequences_to_matrix` +(if `fit_on_texts` was never called). + +# Arguments + sequences: A list of sequence. + A "sequence" is a list of integer word indices. + +

fit_on_texts

+ + + +Updates internal vocabulary based on a list of texts. + +In the case where texts contains lists, +we assume each entry of the lists to be a token. + +Required before using `texts_to_sequences` or `texts_to_matrix`. + +# Arguments + texts: can be a list of strings, + a generator of strings (for memory-efficiency), + or a list of list of strings. + +
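An illustrative sketch (not part of the upstream docstring) of the usual `fit_on_texts` workflow; the corpus and variable names are made up for the example:

```python
from tensorflow.keras.preprocessing.text import Tokenizer

# A tiny made-up corpus; any list of strings works the same way.
corpus = [
    "the cat sat on the mat",
    "the dog ate my homework",
]

tokenizer = Tokenizer(num_words=100, oov_token="<OOV>")
tokenizer.fit_on_texts(corpus)               # builds word_index / word_counts

print(tokenizer.word_index)                  # e.g. {'<OOV>': 1, 'the': 2, 'cat': 3, ...}
print(tokenizer.texts_to_sequences(["the cat ate the mat"]))
# e.g. [[2, 3, 8, 2, 6]] -- indices follow word frequency, then order of first appearance
```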

get_config

+ + + +Returns the tokenizer configuration as a Python dictionary. +The word count dictionaries used by the tokenizer get serialized +into plain JSON, so that the configuration can be read by other +projects. + +# Returns + A Python dictionary with the tokenizer configuration. + +
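As an illustrative sketch (not from the upstream docstring), `get_config` and the related `to_json`/`tokenizer_from_json` pair can round-trip a fitted tokenizer; the sample texts are arbitrary:

```python
from tensorflow.keras.preprocessing.text import Tokenizer, tokenizer_from_json

tokenizer = Tokenizer(num_words=50)
tokenizer.fit_on_texts(["a small example text", "another example"])

config = tokenizer.get_config()      # plain dict; word-count dicts are JSON strings
json_string = tokenizer.to_json()    # full JSON serialization of the tokenizer

restored = tokenizer_from_json(json_string)
assert (restored.texts_to_sequences(["another example"])
        == tokenizer.texts_to_sequences(["another example"]))
```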

sequences_to_matrix

+ + + +Converts a list of sequences into a Numpy matrix. + +# Arguments + sequences: list of sequences + (a sequence is a list of integer word indices). + mode: one of "binary", "count", "tfidf", "freq" + +# Returns + A Numpy matrix. + +# Raises + ValueError: In case of invalid `mode` argument, + or if the Tokenizer requires to be fit to sample data. + +

sequences_to_texts

+ + + +Transforms each sequence into a text (string). + +Only top `num_words-1` most frequent words will be taken into account. +Only words known by the tokenizer will be taken into account. + +# Arguments + sequences: A list of sequences (list of integers). + +# Returns + A list of texts (strings). + +

sequences_to_texts_generator

+ + + +Transforms each sequence in `sequences` into a text (string). + +Each sequence has to be a list of integers. +In other words, `sequences` should be a list of sequences. + +Only top `num_words-1` most frequent words will be taken into account. +Only words known by the tokenizer will be taken into account. + +# Arguments + sequences: A list of sequences. + +# Yields + Yields individual texts. + +

texts_to_matrix

+ + + +Convert a list of texts to a Numpy matrix. + +# Arguments + texts: list of strings. + mode: one of "binary", "count", "tfidf", "freq". + +# Returns + A Numpy matrix. + +
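A short illustrative sketch (not part of the upstream docstring) comparing the supported `mode` values; the texts are arbitrary:

```python
from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=20)
tokenizer.fit_on_texts(["red red blue", "blue green"])

# One row per input text, one column per word index (num_words columns).
binary = tokenizer.texts_to_matrix(["red red blue"], mode="binary")  # 0/1 presence
counts = tokenizer.texts_to_matrix(["red red blue"], mode="count")   # raw term counts
tfidf = tokenizer.texts_to_matrix(["red red blue"], mode="tfidf")    # tf-idf weights

print(binary.shape)  # (1, 20)
```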

texts_to_sequences

+ + + +Transforms each text in texts to a sequence of integers. + +Only top `num_words-1` most frequent words will be taken into account. +Only words known by the tokenizer will be taken into account. + +# Arguments + texts: A list of texts (strings). + +# Returns + A list of sequences. + +
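An illustrative sketch (not part of the upstream docstring) showing how out-of-vocabulary words are handled when an `oov_token` is set; the texts are arbitrary:

```python
from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(oov_token="<OOV>")
tokenizer.fit_on_texts(["deep learning with keras"])

# Unseen words map to the OOV index (1) instead of being dropped.
print(tokenizer.texts_to_sequences(["keras loves tensorflow"]))
# e.g. [[5, 1, 1]]
```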

texts_to_sequences_generator

+ + + +Transforms each text in `texts` to a sequence of integers. + +Each item in texts can also be a list, +in which case we assume each item of that list to be a token. + +Only top `num_words-1` most frequent words will be taken into account. +Only words known by the tokenizer will be taken into account. + +# Arguments + texts: A list of texts (strings). + +# Yields + Yields individual sequences. + +

to_json

+ + + +Returns a JSON string containing the tokenizer configuration. +To load a tokenizer from a JSON string, use +keras.preprocessing.text.tokenizer_from_json(json_string). + +# Arguments + **kwargs: Additional keyword arguments + to be passed to `json.dumps()`. + +# Returns + A JSON string containing the tokenizer configuration. + + + diff --git a/site/en/api_docs/python/tf/keras/preprocessing/text/hashing_trick.md b/site/en/api_docs/python/tf/keras/preprocessing/text/hashing_trick.md new file mode 100644 index 00000000000..22350092965 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/text/hashing_trick.md @@ -0,0 +1,66 @@ +description: Converts a text to a sequence of indexes in a fixed-size hashing space. + +
+ + +
+ +# tf.keras.preprocessing.text.hashing_trick + + + + + + + + + +Converts a text to a sequence of indexes in a fixed-size hashing space. + + + + + + + + + +# Arguments + text: Input text (string). + n: Dimension of the hashing space. + hash_function: defaults to python `hash` function, can be 'md5' or + any function that takes in input a string and returns a int. + Note that 'hash' is not a stable hashing function, so + it is not consistent across different runs, while 'md5' + is a stable hashing function. + filters: list (or concatenation) of characters to filter out, such as + punctuation. Default: ``!"#$%&()*+,-./:;<=>?@[\]^_`{|}~\t\n``, + includes basic punctuation, tabs, and newlines. + lower: boolean. Whether to set the text to lowercase. + split: str. Separator for word splitting. + +# Returns + A list of integer word indices (unicity non-guaranteed). + +`0` is a reserved index that won't be assigned to any word. + +Two or more words may be assigned to the same index, due to possible +collisions by the hashing function. +The [probability]( + https://en.wikipedia.org/wiki/Birthday_problem#Probability_table) +of a collision is in relation to the dimension of the hashing space and +the number of distinct objects. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/text/one_hot.md b/site/en/api_docs/python/tf/keras/preprocessing/text/one_hot.md new file mode 100644 index 00000000000..2ebcf510859 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/text/one_hot.md @@ -0,0 +1,55 @@ +description: One-hot encodes a text into a list of word indexes of size n. + +
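An illustrative sketch (not taken from either page) contrasting `hashing_trick` with its `one_hot` wrapper; the sample sentence and `n` are arbitrary:

```python
from tensorflow.keras.preprocessing.text import hashing_trick, one_hot

text = "the quick brown fox jumps over the lazy dog"

# 'md5' is a stable hash, so the same text always maps to the same indices.
print(hashing_trick(text, n=50, hash_function="md5"))

# one_hot wraps hashing_trick with Python's hash(); indices may collide and
# may differ between interpreter runs.
print(one_hot(text, n=50))
```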
+ + +
+ +# tf.keras.preprocessing.text.one_hot + + + + + + + + + +One-hot encodes a text into a list of word indexes of size n. + + + + + + + + + +This is a wrapper to the `hashing_trick` function using `hash` as the +hashing function; unicity of word to index mapping non-guaranteed. + +# Arguments + text: Input text (string). + n: int. Size of vocabulary. + filters: list (or concatenation) of characters to filter out, such as + punctuation. Default: ``!"#$%&()*+,-./:;<=>?@[\]^_`{|}~\t\n``, + includes basic punctuation, tabs, and newlines. + lower: boolean. Whether to set the text to lowercase. + split: str. Separator for word splitting. + +# Returns + List of integers in [1, n]. Each integer encodes a word + (unicity non-guaranteed). \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/text/text_to_word_sequence.md b/site/en/api_docs/python/tf/keras/preprocessing/text/text_to_word_sequence.md new file mode 100644 index 00000000000..cf9378960a1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/text/text_to_word_sequence.md @@ -0,0 +1,50 @@ +description: Converts a text to a sequence of words (or tokens). + +
+ + +
+ +# tf.keras.preprocessing.text.text_to_word_sequence + + + + + + + + + +Converts a text to a sequence of words (or tokens). + + + + + + + + + +# Arguments + text: Input text (string). + filters: list (or concatenation) of characters to filter out, such as + punctuation. Default: ``!"#$%&()*+,-./:;<=>?@[\]^_`{|}~\t\n``, + includes basic punctuation, tabs, and newlines. + lower: boolean. Whether to convert the input to lowercase. + split: str. Separator for word splitting. + +# Returns + A list of words (or tokens). \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/preprocessing/text/tokenizer_from_json.md b/site/en/api_docs/python/tf/keras/preprocessing/text/tokenizer_from_json.md new file mode 100644 index 00000000000..86f58a7f8d5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/preprocessing/text/tokenizer_from_json.md @@ -0,0 +1,46 @@ +description: Parses a JSON tokenizer configuration file and returns a + +
+ + +
+ +# tf.keras.preprocessing.text.tokenizer_from_json + + + + + + + + + +Parses a JSON tokenizer configuration file and returns a + + + + + + + + +tokenizer instance. + +# Arguments + json_string: JSON string encoding a tokenizer configuration. + +# Returns + A Keras Tokenizer instance \ No newline at end of file diff --git a/site/en/api_docs/python/tf/keras/regularizers.md b/site/en/api_docs/python/tf/keras/regularizers.md new file mode 100644 index 00000000000..1c71e17e215 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/regularizers.md @@ -0,0 +1,41 @@ +description: Built-in regularizers. + +
+ + +
+ +# Module: tf.keras.regularizers + + + + + + + + + +Built-in regularizers. + + + +## Classes + +[`class L1L2`](../../tf/keras/regularizers/L1L2.md): A regularizer that applies both L1 and L2 regularization penalties. + +[`class Regularizer`](../../tf/keras/regularizers/Regularizer.md): Regularizer base class. + +## Functions + +[`deserialize(...)`](../../tf/keras/regularizers/deserialize.md) + +[`get(...)`](../../tf/keras/regularizers/get.md) + +[`l1(...)`](../../tf/keras/regularizers/l1.md): Create a regularizer that applies an L1 regularization penalty. + +[`l1_l2(...)`](../../tf/keras/regularizers/l1_l2.md): Create a regularizer that applies both L1 and L2 penalties. + +[`l2(...)`](../../tf/keras/regularizers/l2.md): Create a regularizer that applies an L2 regularization penalty. + +[`serialize(...)`](../../tf/keras/regularizers/serialize.md) + diff --git a/site/en/api_docs/python/tf/keras/regularizers/L1L2.md b/site/en/api_docs/python/tf/keras/regularizers/L1L2.md new file mode 100644 index 00000000000..f61bdfb1676 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/regularizers/L1L2.md @@ -0,0 +1,188 @@ +description: A regularizer that applies both L1 and L2 regularization penalties. + +
+ + + + + + +
+ +# tf.keras.regularizers.L1L2 + + + + + + + + + +A regularizer that applies both L1 and L2 regularization penalties. + +Inherits From: [`Regularizer`](../../../tf/keras/regularizers/Regularizer.md) + + + + + + + + + +The L1 regularization penalty is computed as: +$$\ell_1\,\,penalty =\ell_1\sum_{i=0}^n|x_i|$$ + +The L2 regularization penalty is computed as +$$\ell_2\,\,penalty =\ell_2\sum_{i=0}^nx_i^2$$ + + + + + + + + + + + + + + + +
+`l1` + +Float; L1 regularization factor. +
+`l2` + +Float; L2 regularization factor. +
+ + + +## Methods + +

from_config

+ +View source + + + +Creates a regularizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same regularizer from the config +dictionary. + +This method is used by Keras `model_to_estimator`, saving and +loading models to HDF5 formats, Keras model cloning, some visualization +utilities, and exporting models to and from JSON. + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+ + + + + + + + + + + +
Returns
+A regularizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the regularizer. + +A regularizer config is a Python dictionary (serializable) +containing all configuration parameters of the regularizer. +The same regularizer can be reinstantiated later +(without any saved state) from this configuration. + +This method is optional if you are just training and executing models, +exporting to and from SavedModels, or using weight checkpoints. + +This method is required for Keras `model_to_estimator`, saving and +loading models to HDF5 formats, Keras model cloning, some visualization +utilities, and exporting models to and from JSON. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +
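A brief illustrative sketch (not part of the original page) of the `get_config`/`from_config` round trip for `L1L2`; the factors and test tensor are arbitrary:

```python
import tensorflow as tf

reg = tf.keras.regularizers.L1L2(l1=0.01, l2=0.001)

config = reg.get_config()                           # e.g. {'l1': 0.01, 'l2': 0.001}
restored = tf.keras.regularizers.L1L2.from_config(config)

# Both instances produce the same penalty for the same weights.
weights = tf.ones((3, 3))
print(float(reg(weights)), float(restored(weights)))
```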

__call__

+ +View source + + + +Compute a regularization penalty from an input tensor. + + + + diff --git a/site/en/api_docs/python/tf/keras/regularizers/Regularizer.md b/site/en/api_docs/python/tf/keras/regularizers/Regularizer.md new file mode 100644 index 00000000000..c500d1c5425 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/regularizers/Regularizer.md @@ -0,0 +1,270 @@ +description: Regularizer base class. + +
+ + + + + +
+ +# tf.keras.regularizers.Regularizer + + + + + + + + + +Regularizer base class. + + + + + +Regularizers allow you to apply penalties on layer parameters or layer +activity during optimization. These penalties are summed into the loss +function that the network optimizes. + +Regularization penalties are applied on a per-layer basis. The exact API will +depend on the layer, but many layers (e.g. `Dense`, `Conv1D`, `Conv2D` and +`Conv3D`) have a unified API. + +These layers expose 3 keyword arguments: + +- `kernel_regularizer`: Regularizer to apply a penalty on the layer's kernel +- `bias_regularizer`: Regularizer to apply a penalty on the layer's bias +- `activity_regularizer`: Regularizer to apply a penalty on the layer's output + +All layers (including custom layers) expose `activity_regularizer` as a +settable property, whether or not it is in the constructor arguments. + +The value returned by the `activity_regularizer` is divided by the input +batch size so that the relative weighting between the weight regularizers and +the activity regularizers does not change with the batch size. + +You can access a layer's regularization penalties by calling `layer.losses` +after calling the layer on inputs. + +## Example + +``` +>>> layer = tf.keras.layers.Dense( +... 5, input_dim=5, +... kernel_initializer='ones', +... kernel_regularizer=tf.keras.regularizers.l1(0.01), +... activity_regularizer=tf.keras.regularizers.l2(0.01)) +>>> tensor = tf.ones(shape=(5, 5)) * 2.0 +>>> out = layer(tensor) +``` + +``` +>>> # The kernel regularization term is 0.25 +>>> # The activity regularization term (after dividing by the batch size) is 5 +>>> tf.math.reduce_sum(layer.losses) + +``` + +## Available penalties + +```python +tf.keras.regularizers.l1(0.3) # L1 Regularization Penalty +tf.keras.regularizers.l2(0.1) # L2 Regularization Penalty +tf.keras.regularizers.l1_l2(l1=0.01, l2=0.01) # L1 + L2 penalties +``` + +## Directly calling a regularizer + +Compute a regularization loss on a tensor by directly calling a regularizer +as if it is a one-argument function. + +E.g. +>>> regularizer = tf.keras.regularizers.l2(2.) +>>> tensor = tf.ones(shape=(5, 5)) +>>> regularizer(tensor) + + + +## Developing new regularizers + +Any function that takes in a weight matrix and returns a scalar +tensor can be used as a regularizer, e.g.: + +``` +>>> @tf.keras.utils.register_keras_serializable(package='Custom', name='l1') +... def l1_reg(weight_matrix): +... return 0.01 * tf.math.reduce_sum(tf.math.abs(weight_matrix)) +... +>>> layer = tf.keras.layers.Dense(5, input_dim=5, +... kernel_initializer='ones', kernel_regularizer=l1_reg) +>>> tensor = tf.ones(shape=(5, 5)) +>>> out = layer(tensor) +>>> layer.losses +[] +``` + +Alternatively, you can write your custom regularizers in an +object-oriented way by extending this regularizer base class, e.g.: + +``` +>>> @tf.keras.utils.register_keras_serializable(package='Custom', name='l2') +... class L2Regularizer(tf.keras.regularizers.Regularizer): +... def __init__(self, l2=0.): +... self.l2 = l2 +... +... def __call__(self, x): +... return self.l2 * tf.math.reduce_sum(tf.math.square(x)) +... +... def get_config(self): +... return {'l2': float(self.l2)} +... +>>> layer = tf.keras.layers.Dense( +... 5, input_dim=5, kernel_initializer='ones', +... 
kernel_regularizer=L2Regularizer(l2=0.5)) +``` + +``` +>>> tensor = tf.ones(shape=(5, 5)) +>>> out = layer(tensor) +>>> layer.losses +[] +``` + +### A note on serialization and deserialization: + +Registering the regularizers as serializable is optional if you are just +training and executing models, exporting to and from SavedModels, or saving +and loading weight checkpoints. + +Registration is required for Keras `model_to_estimator`, saving and +loading models to HDF5 formats, Keras model cloning, some visualization +utilities, and exporting models to and from JSON. If using this functionality, +you must make sure any python process running your model has also defined +and registered your custom regularizer. + +tf.keras.utils.register_keras_serializable is only available in TF 2.1 and +beyond. In earlier versions of TensorFlow you must pass your custom +regularizer to the `custom_objects` argument of methods that expect custom +regularizers to be registered as serializable. + +## Methods + +

from_config

+ +View source + + + +Creates a regularizer from its config. + +This method is the reverse of `get_config`, +capable of instantiating the same regularizer from the config +dictionary. + +This method is used by Keras `model_to_estimator`, saving and +loading models to HDF5 formats, Keras model cloning, some visualization +utilities, and exporting models to and from JSON. + + + + + + + + + + +
Arguments
+`config` + +A Python dictionary, typically the output of get_config. +
+ + + + + + + + + + + +
Returns
+A regularizer instance. +
+ + + +

get_config

+ +View source + + + +Returns the config of the regularizer. + +A regularizer config is a Python dictionary (serializable) +containing all configuration parameters of the regularizer. +The same regularizer can be reinstantiated later +(without any saved state) from this configuration. + +This method is optional if you are just training and executing models, +exporting to and from SavedModels, or using weight checkpoints. + +This method is required for Keras `model_to_estimator`, saving and +loading models to HDF5 formats, Keras model cloning, some visualization +utilities, and exporting models to and from JSON. + + + + + + + + + +
Returns
+Python dictionary. +
+ + + +

__call__

+ +View source + + + +Compute a regularization penalty from an input tensor. + + + + diff --git a/site/en/api_docs/python/tf/keras/regularizers/deserialize.md b/site/en/api_docs/python/tf/keras/regularizers/deserialize.md new file mode 100644 index 00000000000..57266e990c8 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/regularizers/deserialize.md @@ -0,0 +1,42 @@ +
+ + +
+ +# tf.keras.regularizers.deserialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/regularizers/get.md b/site/en/api_docs/python/tf/keras/regularizers/get.md new file mode 100644 index 00000000000..92f6289be25 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/regularizers/get.md @@ -0,0 +1,42 @@ +
+ + +
+ +# tf.keras.regularizers.get + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/regularizers/l1.md b/site/en/api_docs/python/tf/keras/regularizers/l1.md new file mode 100644 index 00000000000..d3c83583fce --- /dev/null +++ b/site/en/api_docs/python/tf/keras/regularizers/l1.md @@ -0,0 +1,77 @@ +description: Create a regularizer that applies an L1 regularization penalty. + +
+ + +
+ +# tf.keras.regularizers.l1 + + + + + + + + + +Create a regularizer that applies an L1 regularization penalty. + + + + + + + + + +The L1 regularization penalty is computed as: +$$\ell_1\,\,penalty =\ell_1\sum_{i=0}^n|x_i|$$ + + + + + + + + + + +
+`l` + +Float; L1 regularization factor. +
+ + + + + + + + + + + +
+An L1 Regularizer with the given regularization factor. +
+ diff --git a/site/en/api_docs/python/tf/keras/regularizers/l1_l2.md b/site/en/api_docs/python/tf/keras/regularizers/l1_l2.md new file mode 100644 index 00000000000..3ec8d123dab --- /dev/null +++ b/site/en/api_docs/python/tf/keras/regularizers/l1_l2.md @@ -0,0 +1,87 @@ +description: Create a regularizer that applies both L1 and L2 penalties. + +
+ + +
+ +# tf.keras.regularizers.l1_l2 + + + + + + + + + +Create a regularizer that applies both L1 and L2 penalties. + + + + + + + + + +The L1 regularization penalty is computed as: +$$\ell_1\,\,penalty =\ell_1\sum_{i=0}^n|x_i|$$ + +The L2 regularization penalty is computed as: +$$\ell_2\,\,penalty =\ell_2\sum_{i=0}^nx_i^2$$ + + + + + + + + + + + + + +
+`l1` + +Float; L1 regularization factor. +
+`l2` + +Float; L2 regularization factor. +
+ + + + + + + + + + + +
+An L1L2 Regularizer with the given regularization factors. +
+ diff --git a/site/en/api_docs/python/tf/keras/regularizers/l2.md b/site/en/api_docs/python/tf/keras/regularizers/l2.md new file mode 100644 index 00000000000..ba561163dd3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/regularizers/l2.md @@ -0,0 +1,77 @@ +description: Create a regularizer that applies an L2 regularization penalty. + +
+ + +
+ +# tf.keras.regularizers.l2 + + + + + + + + + +Create a regularizer that applies an L2 regularization penalty. + + + + + + + + + +The L2 regularization penalty is computed as: +$$\ell_2\,\,penalty =\ell_2\sum_{i=0}^nx_i^2$$ + + + + + + + + + + +
+`l` + +Float; L2 regularization factor. +
+ + + + + + + + + + + +
+An L2 Regularizer with the given regularization factor. +
+ diff --git a/site/en/api_docs/python/tf/keras/regularizers/serialize.md b/site/en/api_docs/python/tf/keras/regularizers/serialize.md new file mode 100644 index 00000000000..81656ab9f17 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/regularizers/serialize.md @@ -0,0 +1,42 @@ +
+ + +
+ +# tf.keras.regularizers.serialize + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/utils.md b/site/en/api_docs/python/tf/keras/utils.md new file mode 100644 index 00000000000..f7299ead439 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils.md @@ -0,0 +1,69 @@ +description: Public API for tf.keras.utils namespace. + +
+ + +
+ +# Module: tf.keras.utils + + + + + + + + + +Public API for tf.keras.utils namespace. + + + +## Classes + +[`class CustomObjectScope`](../../tf/keras/utils/CustomObjectScope.md): Provides a scope that changes to `_GLOBAL_CUSTOM_OBJECTS` cannot escape. + +[`class GeneratorEnqueuer`](../../tf/keras/utils/GeneratorEnqueuer.md): Builds a queue out of a data generator. + +[`class HDF5Matrix`](../../tf/keras/utils/HDF5Matrix.md): Representation of HDF5 dataset to be used instead of a Numpy array. + +[`class OrderedEnqueuer`](../../tf/keras/utils/OrderedEnqueuer.md): Builds a Enqueuer from a Sequence. + +[`class Progbar`](../../tf/keras/utils/Progbar.md): Displays a progress bar. + +[`class Sequence`](../../tf/keras/utils/Sequence.md): Base object for fitting to a sequence of data, such as a dataset. + +[`class SequenceEnqueuer`](../../tf/keras/utils/SequenceEnqueuer.md): Base class to enqueue inputs. + +## Functions + +[`convert_all_kernels_in_model(...)`](../../tf/keras/utils/convert_all_kernels_in_model.md): Converts all convolution kernels in a model from Theano to TensorFlow. (deprecated) + +[`custom_object_scope(...)`](../../tf/keras/utils/custom_object_scope.md): Provides a scope that changes to `_GLOBAL_CUSTOM_OBJECTS` cannot escape. + +[`deserialize_keras_object(...)`](../../tf/keras/utils/deserialize_keras_object.md) + +[`get_custom_objects(...)`](../../tf/keras/utils/get_custom_objects.md): Retrieves a live reference to the global dictionary of custom objects. + +[`get_file(...)`](../../tf/keras/utils/get_file.md): Downloads a file from a URL if it not already in the cache. + +[`get_registered_name(...)`](../../tf/keras/utils/get_registered_name.md): Returns the name registered to an object within the Keras framework. + +[`get_registered_object(...)`](../../tf/keras/utils/get_registered_object.md): Returns the class associated with `name` if it is registered with Keras. + +[`get_source_inputs(...)`](../../tf/keras/utils/get_source_inputs.md): Returns the list of input tensors necessary to compute `tensor`. + +[`model_to_dot(...)`](../../tf/keras/utils/model_to_dot.md): Convert a Keras model to dot format. + +[`multi_gpu_model(...)`](../../tf/keras/utils/multi_gpu_model.md): Replicates a model on different GPUs. (deprecated) + +[`normalize(...)`](../../tf/keras/utils/normalize.md): Normalizes a Numpy array. + +[`plot_model(...)`](../../tf/keras/utils/plot_model.md): Converts a Keras model to dot format and save to a file. + +[`register_keras_serializable(...)`](../../tf/keras/utils/register_keras_serializable.md): Registers an object with the Keras serialization framework. + +[`serialize_keras_object(...)`](../../tf/keras/utils/serialize_keras_object.md): Serialize Keras object into JSON. + +[`to_categorical(...)`](../../tf/keras/utils/to_categorical.md): Converts a class vector (integers) to binary class matrix. + diff --git a/site/en/api_docs/python/tf/keras/utils/CustomObjectScope.md b/site/en/api_docs/python/tf/keras/utils/CustomObjectScope.md new file mode 100644 index 00000000000..b71fc75f9c3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/CustomObjectScope.md @@ -0,0 +1,94 @@ +description: Provides a scope that changes to _GLOBAL_CUSTOM_OBJECTS cannot escape. + +
+ + + + + +
+ +# tf.keras.utils.CustomObjectScope + + + + + + + + + +Provides a scope that changes to `_GLOBAL_CUSTOM_OBJECTS` cannot escape. + + + + + + + + + +Code within a `with` statement will be able to access custom objects +by name. Changes to global custom objects persist +within the enclosing `with` statement. At end of the `with` statement, +global custom objects are reverted to state +at beginning of the `with` statement. + +#### Example: + + + +Consider a custom object `MyObject` (e.g. a class): + +```python + with CustomObjectScope({'MyObject':MyObject}): + layer = Dense(..., kernel_regularizer='MyObject') + # save, load, etc. will recognize custom object by name +``` + +## Methods + +

__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/utils/GeneratorEnqueuer.md b/site/en/api_docs/python/tf/keras/utils/GeneratorEnqueuer.md new file mode 100644 index 00000000000..ee9fc3dcc50 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/GeneratorEnqueuer.md @@ -0,0 +1,199 @@ +description: Builds a queue out of a data generator. + +
+ + + + + + + +
+ +# tf.keras.utils.GeneratorEnqueuer + + + + + + + + + +Builds a queue out of a data generator. + +Inherits From: [`SequenceEnqueuer`](../../../tf/keras/utils/SequenceEnqueuer.md) + + + + + + + + + +The provided generator can be finite in which case the class will throw +a `StopIteration` exception. + +Used in `fit_generator`, `evaluate_generator`, `predict_generator`. + + + + + + + + + + + + + + + + + + + +
+`generator` + +a generator function which yields data +
+`use_multiprocessing` + +use multiprocessing if True, otherwise threading +
+`wait_time` + +time to sleep in-between calls to `put()` +
+`random_seed` + +Initial seed for workers, +will be incremented by one for each worker. +
+ + + +## Methods + +

get

+ +View source + + + +Creates a generator to extract data from the queue. + +Skip the data if it is `None`. + +#### Yields: + +The next element in the queue, i.e. a tuple +`(inputs, targets)` or +`(inputs, targets, sample_weights)`. + + +

is_running

+ +View source + + + + + + +

start

+ +View source + + + +Starts the handler's workers. + + + + + + + + + + + + + + +
Arguments
+`workers` + +Number of workers. +
+`max_queue_size` + +queue size +(when full, workers could block on `put()`) +
+ + + +

stop

+ +View source + + + +Stops running threads and wait for them to exit, if necessary. + +Should be called by the same thread which called `start()`. + + + + + + + + + + +
Arguments
+`timeout` + +maximum time to wait on `thread.join()` +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/utils/HDF5Matrix.md b/site/en/api_docs/python/tf/keras/utils/HDF5Matrix.md new file mode 100644 index 00000000000..a9239bc957e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/HDF5Matrix.md @@ -0,0 +1,187 @@ +description: Representation of HDF5 dataset to be used instead of a Numpy array. + +
+ + + + + + +
+ +# tf.keras.utils.HDF5Matrix + + + + + + + + + +Representation of HDF5 dataset to be used instead of a Numpy array. + + + + + + + + + +THIS CLASS IS DEPRECATED. +Training with HDF5Matrix may not be optimized for performance, and might +not work with every distribution strategy. + +We recommend using https://github.com/tensorflow/io to load your +HDF5 data into a tf.data Dataset and passing that dataset to Keras. + + + + + + + + + + + + + + + + + + + + + + +
+`datapath` + +string, path to a HDF5 file +
+`dataset` + +string, name of the HDF5 dataset in the file specified +in datapath +
+`start` + +int, start of desired slice of the specified dataset +
+`end` + +int, end of desired slice of the specified dataset +
+`normalizer` + +function to be called on data when retrieved +
+ + + + + + + + + + + +
+ImportError if HDF5 & h5py are not installed +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +Gets the datatype of the dataset. +
+`ndim` + +Gets the number of dimensions (rank) of the dataset. +
+`shape` + +Gets a numpy-style shape tuple giving the dataset dimensions. +
+`size` + +Gets the total dataset size (number of elements). +
+ + + +## Methods + +

__getitem__

+ +View source + + + + + + +
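As an illustrative sketch only (this class is deprecated, and the file and dataset names below are hypothetical), `__getitem__` supports NumPy-style slicing that reads directly from disk; `h5py` must be installed:

```python
import h5py
import numpy as np
from tensorflow.keras.utils import HDF5Matrix

# Create a small hypothetical HDF5 file just for the example.
with h5py.File("data.h5", "w") as f:
    f.create_dataset("features", data=np.random.rand(100, 8))

x = HDF5Matrix("data.h5", "features", start=0, end=64)
print(x.shape, x.dtype)  # (64, 8) float64
print(x[0:2])            # slicing reads the rows straight from the HDF5 file
```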

__len__

+ +View source + + + + + + + + +## Class Variables + +* `refs` diff --git a/site/en/api_docs/python/tf/keras/utils/OrderedEnqueuer.md b/site/en/api_docs/python/tf/keras/utils/OrderedEnqueuer.md new file mode 100644 index 00000000000..8ed0b9c9ca1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/OrderedEnqueuer.md @@ -0,0 +1,188 @@ +description: Builds a Enqueuer from a Sequence. + +
+ + + + + + + +
+ +# tf.keras.utils.OrderedEnqueuer + + + + + + + + + +Builds a Enqueuer from a Sequence. + +Inherits From: [`SequenceEnqueuer`](../../../tf/keras/utils/SequenceEnqueuer.md) + + + + + + + + + +Used in `fit_generator`, `evaluate_generator`, `predict_generator`. + + + + + + + + + + + + + + + + +
+`sequence` + +A `tf.keras.utils.data_utils.Sequence` object. +
+`use_multiprocessing` + +use multiprocessing if True, otherwise threading +
+`shuffle` + +whether to shuffle the data at the beginning of each epoch +
+ + + +## Methods + +

get

+ +View source + + + +Creates a generator to extract data from the queue. + +Skip the data if it is `None`. + +#### Yields: + +The next element in the queue, i.e. a tuple +`(inputs, targets)` or +`(inputs, targets, sample_weights)`. + + +
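An illustrative sketch (not part of the original page) of the start/get/stop cycle; `ToySequence` is a made-up `Sequence` used only for the example:

```python
import numpy as np
from tensorflow.keras.utils import OrderedEnqueuer, Sequence

class ToySequence(Sequence):
    """Made-up Sequence yielding (inputs, targets) batches."""
    def __len__(self):
        return 8
    def __getitem__(self, idx):
        x = np.full((4, 2), idx, dtype="float32")
        y = np.zeros((4, 1), dtype="float32")
        return x, y

enqueuer = OrderedEnqueuer(ToySequence(), use_multiprocessing=False)
enqueuer.start(workers=2, max_queue_size=4)
batches = enqueuer.get()          # ordered, effectively infinite stream
for _ in range(3):
    x, y = next(batches)
print(x[0, 0])                    # 2.0 -- batches arrive in order
enqueuer.stop()
```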

is_running

+ +View source + + + + + + +

start

+ +View source + + + +Starts the handler's workers. + + + + + + + + + + + + + + +
Arguments
+`workers` + +Number of workers. +
+`max_queue_size` + +queue size +(when full, workers could block on `put()`) +
+ + + +

stop

+ +View source + + + +Stops running threads and wait for them to exit, if necessary. + +Should be called by the same thread which called `start()`. + + + + + + + + + + +
Arguments
+`timeout` + +maximum time to wait on `thread.join()` +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/utils/Progbar.md b/site/en/api_docs/python/tf/keras/utils/Progbar.md new file mode 100644 index 00000000000..59ca510182a --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/Progbar.md @@ -0,0 +1,168 @@ +description: Displays a progress bar. + +
+ + + + + +
+ +# tf.keras.utils.Progbar + + + + + + + + + +Displays a progress bar. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`target` + +Total number of steps expected, None if unknown. +
+`width` + +Progress bar width on screen. +
+`verbose` + +Verbosity mode, 0 (silent), 1 (verbose), 2 (semi-verbose) +
+`stateful_metrics` + +Iterable of string names of metrics that should *not* be +averaged over time. Metrics in this list will be displayed as-is. All +others will be averaged by the progbar before display. +
+`interval` + +Minimum visual progress update interval (in seconds). +
+`unit_name` + +Display name for step counts (usually "step" or "sample"). +
+ + + +## Methods + +

add

+ +View source + + + + + + +
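A small illustrative sketch (not from the original page): `add` advances the bar by a step delta, while `update` (documented below) takes an absolute step index; the metric names here are arbitrary:

```python
import time
from tensorflow.keras.utils import Progbar

n_steps = 5
bar = Progbar(target=n_steps, stateful_metrics=["lr"])

for step in range(n_steps):
    time.sleep(0.1)  # stand-in for real work
    # "loss" is averaged over time; "lr" is stateful and shown as-is.
    bar.add(1, values=[("loss", 1.0 / (step + 1)), ("lr", 0.001)])
```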

update

+ +View source + + + +Updates the progress bar. + + + + + + + + + + + + + + + + + +
Arguments
+`current` + +Index of current step. +
+`values` + +List of tuples: `(name, value_for_last_step)`. If `name` is in +`stateful_metrics`, `value_for_last_step` will be displayed as-is. +Else, an average of the metric over time will be displayed. +
+`finalize` + +Whether this is the last update for the progress bar. If +`None`, defaults to `current >= self.target`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/utils/Sequence.md b/site/en/api_docs/python/tf/keras/utils/Sequence.md new file mode 100644 index 00000000000..cab684429c9 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/Sequence.md @@ -0,0 +1,182 @@ +description: Base object for fitting to a sequence of data, such as a dataset. + +
+ + + + + + +
+ +# tf.keras.utils.Sequence + + + + + + + + + +Base object for fitting to a sequence of data, such as a dataset. + + + + + +Every `Sequence` must implement the `__getitem__` and the `__len__` methods. +If you want to modify your dataset between epochs you may implement +`on_epoch_end`. +The method `__getitem__` should return a complete batch. + +#### Notes: + + + +`Sequence` are a safer way to do multiprocessing. This structure guarantees +that the network will only train once + on each sample per epoch which is not the case with generators. + +#### Examples: + + + +```python + from skimage.io import imread + from skimage.transform import resize + import numpy as np + import math + + # Here, `x_set` is list of path to the images + # and `y_set` are the associated classes. + + class CIFAR10Sequence(Sequence): + + def __init__(self, x_set, y_set, batch_size): + self.x, self.y = x_set, y_set + self.batch_size = batch_size + + def __len__(self): + return math.ceil(len(self.x) / self.batch_size) + + def __getitem__(self, idx): + batch_x = self.x[idx * self.batch_size:(idx + 1) * + self.batch_size] + batch_y = self.y[idx * self.batch_size:(idx + 1) * + self.batch_size] + + return np.array([ + resize(imread(file_name), (200, 200)) + for file_name in batch_x]), np.array(batch_y) +``` + +## Methods + +

on_epoch_end

+ +View source + + + +Method called at the end of every epoch. + + +
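A minimal sketch (not part of the original page) of a `Sequence` that uses `on_epoch_end` to reshuffle sample order between epochs; `x` and `y` are assumed to be NumPy arrays:

```python
import numpy as np
from tensorflow.keras.utils import Sequence

class ShuffledSequence(Sequence):
    """Sequence that reshuffles the sample order after every epoch."""

    def __init__(self, x, y, batch_size):
        self.x, self.y = x, y
        self.batch_size = batch_size
        self.indices = np.arange(len(self.x))

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        batch = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        return self.x[batch], self.y[batch]

    def on_epoch_end(self):
        # Called by Keras after each epoch; shuffle for the next pass.
        np.random.shuffle(self.indices)
```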

__getitem__

+ +View source + + + +Gets batch at position `index`. + + + + + + + + + + + +
Arguments
+`index` + +position of the batch in the Sequence. +
+ + + + + + + + + + + +
Returns
+A batch +
+ + + +

__iter__

+ +View source + + + +Create a generator that iterates over the Sequence. + + +

__len__

+ +View source + + + +Number of batches in the Sequence. + + + + + + + + + +
Returns
+The number of batches in the Sequence. +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/utils/SequenceEnqueuer.md b/site/en/api_docs/python/tf/keras/utils/SequenceEnqueuer.md new file mode 100644 index 00000000000..903a53a02e0 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/SequenceEnqueuer.md @@ -0,0 +1,168 @@ +description: Base class to enqueue inputs. + +
+ + + + + + + +
+ +# tf.keras.utils.SequenceEnqueuer + + + + + + + + + +Base class to enqueue inputs. + + + + + + + + + +The task of an Enqueuer is to use parallelism to speed up preprocessing. +This is done with processes or threads. + +#### Example: + + + +```python + enqueuer = SequenceEnqueuer(...) + enqueuer.start() + datas = enqueuer.get() + for data in datas: + # Use the inputs; training, evaluating, predicting. + # ... stop sometime. + enqueuer.stop() +``` + +The `enqueuer.get()` should be an infinite stream of datas. + +## Methods + +

get

+ +View source + + + +Creates a generator to extract data from the queue. + +Skip the data if it is `None`. +# Returns + Generator yielding tuples `(inputs, targets)` + or `(inputs, targets, sample_weights)`. + +

is_running

+ +View source + + + + + + +

start

+ +View source + + + +Starts the handler's workers. + + + + + + + + + + + + + + +
Arguments
+`workers` + +Number of workers. +
+`max_queue_size` + +queue size +(when full, workers could block on `put()`) +
+ + + +

stop

+ +View source + + + +Stops running threads and wait for them to exit, if necessary. + +Should be called by the same thread which called `start()`. + + + + + + + + + + +
Arguments
+`timeout` + +maximum time to wait on `thread.join()` +
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/utils/convert_all_kernels_in_model.md b/site/en/api_docs/python/tf/keras/utils/convert_all_kernels_in_model.md new file mode 100644 index 00000000000..ee674e2356f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/convert_all_kernels_in_model.md @@ -0,0 +1,68 @@ +description: Converts all convolution kernels in a model from Theano to TensorFlow. (deprecated) + +
+ + +
+ +# tf.keras.utils.convert_all_kernels_in_model + + + + + + + + + +Converts all convolution kernels in a model from Theano to TensorFlow. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2020-06-23. +Instructions for updating: +The Theano kernel format is legacy; this utility will be removed. + +Also works from TensorFlow to Theano. + +This is used for converting legacy Theano-saved model files. + + + + + + + + + + +
+`model` + +target model for the conversion. +
+ diff --git a/site/en/api_docs/python/tf/keras/utils/custom_object_scope.md b/site/en/api_docs/python/tf/keras/utils/custom_object_scope.md new file mode 100644 index 00000000000..865c491954e --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/custom_object_scope.md @@ -0,0 +1,94 @@ +description: Provides a scope that changes to _GLOBAL_CUSTOM_OBJECTS cannot escape. + +
+ + +
+ +# tf.keras.utils.custom_object_scope + + + + + + + + + +Provides a scope that changes to `_GLOBAL_CUSTOM_OBJECTS` cannot escape. + + + + + + + + + +Convenience wrapper for `CustomObjectScope`. +Code within a `with` statement will be able to access custom objects +by name. Changes to global custom objects persist +within the enclosing `with` statement. At end of the `with` statement, +global custom objects are reverted to state +at beginning of the `with` statement. + +#### Example: + + + +Consider a custom object `MyObject` + +```python + with custom_object_scope({'MyObject':MyObject}): + layer = Dense(..., kernel_regularizer='MyObject') + # save, load, etc. will recognize custom object by name +``` + + + + + + + + + + +
+`*args` + +Variable length list of dictionaries of name, class pairs to add to +custom objects. +
+ + + + + + + + + + + +
+Object of type `CustomObjectScope`. +
+ diff --git a/site/en/api_docs/python/tf/keras/utils/deserialize_keras_object.md b/site/en/api_docs/python/tf/keras/utils/deserialize_keras_object.md new file mode 100644 index 00000000000..196a4c45359 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/deserialize_keras_object.md @@ -0,0 +1,43 @@ +
+ + +
+ +# tf.keras.utils.deserialize_keras_object + + + + + + + + + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/utils/get_custom_objects.md b/site/en/api_docs/python/tf/keras/utils/get_custom_objects.md new file mode 100644 index 00000000000..a81c057929d --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/get_custom_objects.md @@ -0,0 +1,68 @@ +description: Retrieves a live reference to the global dictionary of custom objects. + +
+ + +
+ +# tf.keras.utils.get_custom_objects + + + + + + + + + +Retrieves a live reference to the global dictionary of custom objects. + + + + + + + + + +Updating and clearing custom objects using `custom_object_scope` +is preferred, but `get_custom_objects` can +be used to directly access `_GLOBAL_CUSTOM_OBJECTS`. + +#### Example: + + + +```python + get_custom_objects().clear() + get_custom_objects()['MyObject'] = MyObject +``` + + + + + + + + + +
+Global dictionary of names to classes (`_GLOBAL_CUSTOM_OBJECTS`). +
+ diff --git a/site/en/api_docs/python/tf/keras/utils/get_file.md b/site/en/api_docs/python/tf/keras/utils/get_file.md new file mode 100644 index 00000000000..37e8c55edb6 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/get_file.md @@ -0,0 +1,162 @@ +description: Downloads a file from a URL if it not already in the cache. + +
+ + +
+ +# tf.keras.utils.get_file + + + + + + + + + +Downloads a file from a URL if it is not already in the cache. + + + + + + + + + +By default the file at the url `origin` is downloaded to the +cache_dir `~/.keras`, placed in the cache_subdir `datasets`, +and given the filename `fname`. The final location of a file +`example.txt` would therefore be `~/.keras/datasets/example.txt`. + +Files in tar, tar.gz, tar.bz, and zip formats can also be extracted. +Passing a hash will verify the file after download. The command line +programs `shasum` and `sha256sum` can compute the hash. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fname` + +Name of the file. If an absolute path `/path/to/file.txt` is +specified the file will be saved at that location. +
+`origin` + +Original URL of the file. +
+`untar` + +Deprecated in favor of 'extract'. +boolean, whether the file should be decompressed +
+`md5_hash` + +Deprecated in favor of 'file_hash'. +md5 hash of the file for verification +
+`file_hash` + +The expected hash string of the file after download. +The sha256 and md5 hash algorithms are both supported. +
+`cache_subdir` + +Subdirectory under the Keras cache dir where the file is +saved. If an absolute path `/path/to/folder` is +specified the file will be saved at that location. +
+`hash_algorithm` + +Select the hash algorithm to verify the file. +options are 'md5', 'sha256', and 'auto'. +The default 'auto' detects the hash algorithm in use. +
+`extract` + +True tries extracting the file as an Archive, like tar or zip. +
+`archive_format` + +Archive format to try for extracting the file. +Options are 'auto', 'tar', 'zip', and None. +'tar' includes tar, tar.gz, and tar.bz files. +The default 'auto' is ['tar', 'zip']. +None or an empty list will return no matches found. +
+`cache_dir` + +Location to store cached files, when None it +defaults to the [Keras +Directory](/faq/#where-is-the-keras-configuration-filed-stored). +
+ + + + + + + + + + + +
+Path to the downloaded file +
+ diff --git a/site/en/api_docs/python/tf/keras/utils/get_registered_name.md b/site/en/api_docs/python/tf/keras/utils/get_registered_name.md new file mode 100644 index 00000000000..5bc1c6946e1 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/get_registered_name.md @@ -0,0 +1,79 @@ +description: Returns the name registered to an object within the Keras framework. + +
+ + +
+ +# tf.keras.utils.get_registered_name + + + + + + + + + +Returns the name registered to an object within the Keras framework. + + + + + + + + + +This function is part of the Keras serialization and deserialization +framework. It maps objects to the string names associated with those objects +for serialization/deserialization. + + + + + + + + + + +
+`obj` + +The object to look up. +
+ + + + + + + + + + + +
+The name associated with the object, or the default Python name if the +object is not registered. +
+ diff --git a/site/en/api_docs/python/tf/keras/utils/get_registered_object.md b/site/en/api_docs/python/tf/keras/utils/get_registered_object.md new file mode 100644 index 00000000000..99dcf5a4789 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/get_registered_object.md @@ -0,0 +1,105 @@ +description: Returns the class associated with name if it is registered with Keras. + +
+ + +
+ +# tf.keras.utils.get_registered_object + + + + + + + + + +Returns the class associated with `name` if it is registered with Keras. + + + + + + + + + +This function is part of the Keras serialization and deserialization +framework. It maps strings to the objects associated with them for +serialization/deserialization. + +#### Example: + + +``` +def from_config(cls, config, custom_objects=None): + if 'my_custom_object_name' in config: + config['hidden_cls'] = tf.keras.utils.get_registered_object( + config['my_custom_object_name'], custom_objects=custom_objects) +``` + + + + + + + + + + + + + + + + +
+`name` + +The name to look up. +
+`custom_objects` + +A dictionary of custom objects to look the name up in. +Generally, custom_objects is provided by the user. +
+`module_objects` + +A dictionary of custom objects to look the name up in. +Generally, module_objects is provided by midlevel library implementers. +
+ + + + + + + + + + + +
+An instantiable class associated with 'name', or None if no such class +exists. +
+ diff --git a/site/en/api_docs/python/tf/keras/utils/get_source_inputs.md b/site/en/api_docs/python/tf/keras/utils/get_source_inputs.md new file mode 100644 index 00000000000..410daa4c79c --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/get_source_inputs.md @@ -0,0 +1,92 @@ +description: Returns the list of input tensors necessary to compute tensor. + +
+ + +
+ +# tf.keras.utils.get_source_inputs + + + + + + + + + +Returns the list of input tensors necessary to compute `tensor`. + + + + + + + + + +Output will always be a list of tensors +(potentially with 1 element). + + + + + + + + + + + + + + + + +
+`tensor` + +The tensor to start from. +
+`layer` + +Origin layer of the tensor. Will be +determined via tensor._keras_history if not provided. +
+`node_index` + +Origin node index of the tensor. +
+ + + + + + + + + + + +
+List of input tensors. +
+ diff --git a/site/en/api_docs/python/tf/keras/utils/model_to_dot.md b/site/en/api_docs/python/tf/keras/utils/model_to_dot.md new file mode 100644 index 00000000000..6ae0b11afd2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/model_to_dot.md @@ -0,0 +1,140 @@ +description: Convert a Keras model to dot format. + +
+ + +
+ +# tf.keras.utils.model_to_dot + + + + + + + + + +Convert a Keras model to dot format. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`model` + +A Keras model instance. +
+`show_shapes` + +whether to display shape information. +
+`show_layer_names` + +whether to display layer names. +
+`rankdir` + +`rankdir` argument passed to PyDot, +a string specifying the format of the plot: +'TB' creates a vertical plot; +'LR' creates a horizontal plot. +
+`expand_nested` + +whether to expand nested models into clusters. +
+`dpi` + +Dots per inch. +
+`subgraph` + +whether to return a `pydot.Cluster` instance. +
+ + + + + + + + + + + +
+A `pydot.Dot` instance representing the Keras model or +a `pydot.Cluster` instance representing nested model if +`subgraph=True`. +
+ + + + + + + + + + + + +
+`ImportError` + +if graphviz or pydot are not available. +
+ diff --git a/site/en/api_docs/python/tf/keras/utils/multi_gpu_model.md b/site/en/api_docs/python/tf/keras/utils/multi_gpu_model.md new file mode 100644 index 00000000000..85150c93b21 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/multi_gpu_model.md @@ -0,0 +1,215 @@ +description: Replicates a model on different GPUs. (deprecated) + +
+ + +
+ +# tf.keras.utils.multi_gpu_model + + + + + + + + + +Replicates a model on different GPUs. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2020-04-01. +Instructions for updating: +Use tf.distribute.MirroredStrategy instead. + +Specifically, this function implements single-machine +multi-GPU data parallelism. It works in the following way: + +- Divide the model's input(s) into multiple sub-batches. +- Apply a model copy on each sub-batch. Every model copy + is executed on a dedicated GPU. +- Concatenate the results (on CPU) into one big batch. + +E.g. if your `batch_size` is 64 and you use `gpus=2`, +then we will divide the input into 2 sub-batches of 32 samples, +process each sub-batch on one GPU, then return the full +batch of 64 processed samples. + +This induces quasi-linear speedup on up to 8 GPUs. + +This function is only available with the TensorFlow backend +for the time being. + + + + + + + + + + + + + + + + + + + +
+`model` + +A Keras model instance. To avoid OOM errors, +this model could have been built on CPU, for instance +(see usage example below). +
+`gpus` + +Integer >= 2, number of GPUs on which to create +model replicas. +
+`cpu_merge` + +A boolean value to identify whether to force +merging model weights under the scope of the CPU or not. +
+`cpu_relocation` + +A boolean value to identify whether to +create the model's weights under the scope of the CPU. +If the model is not defined under any preceding device +scope, you can still rescue it by activating this option. +
+ + + + + + + + + + + +
+A Keras `Model` instance which can be used just like the initial +`model` argument, but which distributes its workload on multiple GPUs. +
+ + +Example 1: Training models with weights merge on CPU + +```python + import tensorflow as tf + from keras.applications import Xception + from keras.utils import multi_gpu_model + import numpy as np + + num_samples = 1000 + height = 224 + width = 224 + num_classes = 1000 + + # Instantiate the base model (or "template" model). + # We recommend doing this with under a CPU device scope, + # so that the model's weights are hosted on CPU memory. + # Otherwise they may end up hosted on a GPU, which would + # complicate weight sharing. + with tf.device('/cpu:0'): + model = Xception(weights=None, + input_shape=(height, width, 3), + classes=num_classes) + + # Replicates the model on 8 GPUs. + # This assumes that your machine has 8 available GPUs. + parallel_model = multi_gpu_model(model, gpus=8) + parallel_model.compile(loss='categorical_crossentropy', + optimizer='rmsprop') + + # Generate dummy data. + x = np.random.random((num_samples, height, width, 3)) + y = np.random.random((num_samples, num_classes)) + + # This `fit` call will be distributed on 8 GPUs. + # Since the batch size is 256, each GPU will process 32 samples. + parallel_model.fit(x, y, epochs=20, batch_size=256) + + # Save model via the template model (which shares the same weights): + model.save('my_model.h5') +``` + +Example 2: Training models with weights merge on CPU using cpu_relocation + +```python + .. + # Not needed to change the device scope for model definition: + model = Xception(weights=None, ..) + + try: + model = multi_gpu_model(model, cpu_relocation=True) + print("Training using multiple GPUs..") + except: + print("Training using single GPU or CPU..") + + model.compile(..) + .. +``` + +Example 3: Training models with weights merge on GPU (recommended for NV-link) + +```python + .. + # Not needed to change the device scope for model definition: + model = Xception(weights=None, ..) + + try: + model = multi_gpu_model(model, cpu_merge=False) + print("Training using multiple GPUs..") + except: + print("Training using single GPU or CPU..") + model.compile(..) + .. +``` + + + + + + + + + + +
+`ValueError` + +if the `gpus` argument does not match available devices. +
+ diff --git a/site/en/api_docs/python/tf/keras/utils/normalize.md b/site/en/api_docs/python/tf/keras/utils/normalize.md new file mode 100644 index 00000000000..13212004aa3 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/normalize.md @@ -0,0 +1,89 @@ +description: Normalizes a Numpy array. + +
+ + +
+ +# tf.keras.utils.normalize + + + + + + + + + +Normalizes a Numpy array. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Numpy array to normalize. +
+`axis` + +axis along which to normalize. +
+`order` + +Normalization order (e.g. 2 for L2 norm). +
+ + + + + + + + + + + +
+A normalized copy of the array. +
+ diff --git a/site/en/api_docs/python/tf/keras/utils/plot_model.md b/site/en/api_docs/python/tf/keras/utils/plot_model.md new file mode 100644 index 00000000000..e1f204db4d2 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/plot_model.md @@ -0,0 +1,122 @@ +description: Converts a Keras model to dot format and save to a file. + +
+ + +
+ +# tf.keras.utils.plot_model + + + + + + + + + +Converts a Keras model to dot format and save to a file. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`model` + +A Keras model instance +
+`to_file` + +File name of the plot image. +
+`show_shapes` + +whether to display shape information. +
+`show_layer_names` + +whether to display layer names. +
+`rankdir` + +`rankdir` argument passed to PyDot, +a string specifying the format of the plot: +'TB' creates a vertical plot; +'LR' creates a horizontal plot. +
+`expand_nested` + +Whether to expand nested models into clusters. +
+`dpi` + +Dots per inch. +
+ + + + + + + + + + + +
+A Jupyter notebook Image object if Jupyter is installed. +This enables in-line display of the model plots in notebooks. +
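+
+For illustration, a minimal sketch of typical usage (not part of the
+original docstring). It assumes TensorFlow 2.x plus the `pydot` and
+Graphviz dependencies that `plot_model` relies on; the file name
+`model.png` is arbitrary:
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
+    tf.keras.layers.Dense(1),
+])
+
+# Writes a PNG of the model graph; in a notebook, the returned Image
+# object is rendered in-line.
+tf.keras.utils.plot_model(
+    model,
+    to_file='model.png',
+    show_shapes=True,
+    show_layer_names=True,
+    rankdir='TB',
+    dpi=96)
+```
+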
+ diff --git a/site/en/api_docs/python/tf/keras/utils/register_keras_serializable.md b/site/en/api_docs/python/tf/keras/utils/register_keras_serializable.md new file mode 100644 index 00000000000..b511f5e23b5 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/register_keras_serializable.md @@ -0,0 +1,93 @@ +description: Registers an object with the Keras serialization framework. + +
+ + +
+ +# tf.keras.utils.register_keras_serializable + + + + + + + + + +Registers an object with the Keras serialization framework. + + + + + + + + + +This decorator injects the decorated class or function into the Keras custom +object dictionary, so that it can be serialized and deserialized without +needing an entry in the user-provided custom object dict. It also injects a +function that Keras will call to get the object's serializable string key. + +Note that to be serialized and deserialized, classes must implement the +`get_config()` method. Functions do not have this requirement. + +The object will be registered under the key 'package>name' where `name`, +defaults to the object name if not passed. + + + + + + + + + + + + + +
+`package` + +The package that this class belongs to. +
+`name` + +The name to serialize this class under in this package. If None, the +class's name will be used. +
+ + + + + + + + + + + +
+A decorator that registers the decorated class with the passed names. +
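+
+For illustration, a minimal sketch of how the decorator is typically
+applied (not part of the original docstring); the package name
+`my_package` and the `ScaleLayer` class are invented for the example:
+
+```python
+import tensorflow as tf
+
+@tf.keras.utils.register_keras_serializable(package='my_package')
+class ScaleLayer(tf.keras.layers.Layer):
+  """Multiplies its input by a fixed scale."""
+
+  def __init__(self, scale=2.0, **kwargs):
+    super().__init__(**kwargs)
+    self.scale = scale
+
+  def call(self, inputs):
+    return inputs * self.scale
+
+  def get_config(self):
+    # Required so the class can be serialized and deserialized.
+    config = super().get_config()
+    config.update({'scale': self.scale})
+    return config
+```
+
+The class is then registered under the key `'my_package>ScaleLayer'`,
+so a saved model containing it can be loaded without passing a
+`custom_objects` dictionary.
+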
+ diff --git a/site/en/api_docs/python/tf/keras/utils/serialize_keras_object.md b/site/en/api_docs/python/tf/keras/utils/serialize_keras_object.md new file mode 100644 index 00000000000..a438be0f739 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/serialize_keras_object.md @@ -0,0 +1,44 @@ +description: Serialize Keras object into JSON. + +
+ + +
+ +# tf.keras.utils.serialize_keras_object + + + + + + + + + +Serialize Keras object into JSON. + + + + + + + + diff --git a/site/en/api_docs/python/tf/keras/utils/to_categorical.md b/site/en/api_docs/python/tf/keras/utils/to_categorical.md new file mode 100644 index 00000000000..9898a56b6db --- /dev/null +++ b/site/en/api_docs/python/tf/keras/utils/to_categorical.md @@ -0,0 +1,105 @@ +description: Converts a class vector (integers) to binary class matrix. + +
+ + +
+ +# tf.keras.utils.to_categorical + + + + + + + + + +Converts a class vector (integers) to binary class matrix. + + + + + + + + + +E.g. for use with categorical_crossentropy. + +#### Usage Example: + + + +``` +>>> y = [0, 1, 2, 3] +>>> tf.keras.utils.to_categorical(y, num_classes=4) +array([[1., 0., 0., 0.], + [0., 1., 0., 0.], + [0., 0., 1., 0.], + [0., 0., 0., 1.]], dtype=float32) +``` + + + + + + + + + + + + + + + + +
+`y`
+
+class vector to be converted into a matrix
+(integers from 0 to `num_classes - 1`).
+
+`num_classes` + +total number of classes. +
+`dtype`
+
+The data type of the output matrix. Default: `'float32'`.
+
+ + + + + + + + + + + +
+A binary matrix representation of the input. The classes axis is placed +last. +
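+
+A small supplementary sketch (not part of the original docstring)
+showing the `dtype` argument and how `np.argmax` recovers the original
+labels:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+labels = np.array([0, 2, 1, 2])
+one_hot = tf.keras.utils.to_categorical(labels, num_classes=3,
+                                        dtype='int32')
+print(one_hot)
+# [[1 0 0]
+#  [0 0 1]
+#  [0 1 0]
+#  [0 0 1]]
+
+# The classes axis is last, so argmax over the last axis inverts it.
+print(np.argmax(one_hot, axis=-1))  # [0 2 1 2]
+```
+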
+ diff --git a/site/en/api_docs/python/tf/keras/wrappers.md b/site/en/api_docs/python/tf/keras/wrappers.md new file mode 100644 index 00000000000..f0864990151 --- /dev/null +++ b/site/en/api_docs/python/tf/keras/wrappers.md @@ -0,0 +1,25 @@ +description: Public API for tf.keras.wrappers namespace. + +
+ + +
+ +# Module: tf.keras.wrappers + + + + + + + + + +Public API for tf.keras.wrappers namespace. + + + +## Modules + +[`scikit_learn`](../../tf/keras/wrappers/scikit_learn.md) module: Wrapper for using the Scikit-Learn API with Keras models. + diff --git a/site/en/api_docs/python/tf/keras/wrappers/scikit_learn.md b/site/en/api_docs/python/tf/keras/wrappers/scikit_learn.md new file mode 100644 index 00000000000..3003e972e9b --- /dev/null +++ b/site/en/api_docs/python/tf/keras/wrappers/scikit_learn.md @@ -0,0 +1,27 @@ +description: Wrapper for using the Scikit-Learn API with Keras models. + +
+ + +
+ +# Module: tf.keras.wrappers.scikit_learn + + + + + + + + + +Wrapper for using the Scikit-Learn API with Keras models. + + + +## Classes + +[`class KerasClassifier`](../../../tf/keras/wrappers/scikit_learn/KerasClassifier.md): Implementation of the scikit-learn classifier API for Keras. + +[`class KerasRegressor`](../../../tf/keras/wrappers/scikit_learn/KerasRegressor.md): Implementation of the scikit-learn regressor API for Keras. + diff --git a/site/en/api_docs/python/tf/keras/wrappers/scikit_learn/KerasClassifier.md b/site/en/api_docs/python/tf/keras/wrappers/scikit_learn/KerasClassifier.md new file mode 100644 index 00000000000..6b8fca06adc --- /dev/null +++ b/site/en/api_docs/python/tf/keras/wrappers/scikit_learn/KerasClassifier.md @@ -0,0 +1,539 @@ +description: Implementation of the scikit-learn classifier API for Keras. + +
+ + + + + + + + + + + +
+ +# tf.keras.wrappers.scikit_learn.KerasClassifier + + + + + + + + + +Implementation of the scikit-learn classifier API for Keras. + + + + + + + + + + +## Methods + +

check_params

+ +View source + + + +Checks for user typos in `params`. + + + + + + + + + + + +
Arguments
+`params` + +dictionary; the parameters to be checked +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any member of `params` is not a valid argument. +
+ + + +

filter_sk_params

+ +View source + + + +Filters `sk_params` and returns those in `fn`'s arguments. + + + + + + + + + + + + + + +
Arguments
+`fn` + +arbitrary function +
+`override` + +dictionary, values to override `sk_params` +
+ + + + + + + + + + + + +
Returns
+`res` + +dictionary containing variables +in both `sk_params` and `fn`'s arguments. +
+ + + +

fit

+ +View source + + + +Constructs a new model with `build_fn` & fit the model to `(x, y)`. + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +array-like, shape `(n_samples, n_features)` +Training samples where `n_samples` is the number of samples +and `n_features` is the number of features. +
+`y` + +array-like, shape `(n_samples,)` or `(n_samples, n_outputs)` +True labels for `x`. +
+`**kwargs` + +dictionary arguments +Legal arguments are the arguments of Sequential.fit +
+ + + + + + + + + + + + +
Returns
+`history` + +object +details about the training history at each epoch. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +In case of invalid shape for `y` argument. +
+ + + +

get_params

+ +View source + + + +Gets parameters for this estimator. + + + + + + + + + + + +
Arguments
+`**params` + +ignored (exists for API compatibility). +
+ + + + + + + + + + + +
Returns
+Dictionary of parameter names mapped to their values. +
+ + + +

predict

+ +View source + + + +Returns the class predictions for the given test data. + + + + + + + + + + + + + + +
Arguments
+`x` + +array-like, shape `(n_samples, n_features)` +Test samples where `n_samples` is the number of samples +and `n_features` is the number of features. +
+`**kwargs` + +dictionary arguments +Legal arguments are the arguments +of Sequential.predict_classes. +
+ + + + + + + + + + + + +
Returns
+`preds` + +array-like, shape `(n_samples,)` +Class predictions. +
+ + + +

predict_proba

+ +View source + + + +Returns class probability estimates for the given test data. + + + + + + + + + + + + + + +
Arguments
+`x` + +array-like, shape `(n_samples, n_features)` +Test samples where `n_samples` is the number of samples +and `n_features` is the number of features. +
+`**kwargs` + +dictionary arguments +Legal arguments are the arguments +of Sequential.predict_classes. +
+ + + + + + + + + + + + +
Returns
+`proba`
+
+array-like, shape `(n_samples, n_outputs)`
+Class probability estimates.
+In the case of binary classification,
+to match the scikit-learn API,
+this will return an array of shape `(n_samples, 2)`
+(instead of `(n_samples, 1)` as in Keras).
+
+ + + +

score

+ +View source + + + +Returns the mean accuracy on the given test data and labels. + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +array-like, shape `(n_samples, n_features)` +Test samples where `n_samples` is the number of samples +and `n_features` is the number of features. +
+`y` + +array-like, shape `(n_samples,)` or `(n_samples, n_outputs)` +True labels for `x`. +
+`**kwargs` + +dictionary arguments +Legal arguments are the arguments of Sequential.evaluate. +
+ + + + + + + + + + + + +
Returns
+`score` + +float +Mean accuracy of predictions on `x` wrt. `y`. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the underlying model isn't configured to +compute accuracy. You should pass `metrics=["accuracy"]` to +the `.compile()` method of the model. +
+ + + +

set_params

+ +View source + + + +Sets the parameters of this estimator. + + + + + + + + + + + +
Arguments
+`**params` + +Dictionary of parameter names mapped to their values. +
+ + + + + + + + + + + +
Returns
+self +
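+
+The following end-to-end sketch is illustrative rather than part of the
+original docs; it assumes TensorFlow 2.x and scikit-learn are installed
+and uses a small random dataset:
+
+```python
+import numpy as np
+import tensorflow as tf
+from sklearn.model_selection import cross_val_score
+
+def build_model():
+  model = tf.keras.Sequential([
+      tf.keras.layers.Dense(16, activation='relu', input_shape=(20,)),
+      tf.keras.layers.Dense(1, activation='sigmoid'),
+  ])
+  model.compile(optimizer='adam',
+                loss='binary_crossentropy',
+                metrics=['accuracy'])
+  return model
+
+x = np.random.random((100, 20)).astype('float32')
+y = np.random.randint(0, 2, size=(100,))
+
+# sk_params such as `epochs`, `batch_size` and `verbose` are forwarded
+# to the corresponding Sequential methods.
+clf = tf.keras.wrappers.scikit_learn.KerasClassifier(
+    build_fn=build_model, epochs=3, batch_size=16, verbose=0)
+
+# The wrapper behaves like any scikit-learn estimator.
+scores = cross_val_score(clf, x, y, cv=3)
+print(scores.mean())
+```
+
+Because the wrapper exposes `get_params` and `set_params`, it also
+works with scikit-learn meta-estimators such as `GridSearchCV`.
+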
+ + + + + diff --git a/site/en/api_docs/python/tf/keras/wrappers/scikit_learn/KerasRegressor.md b/site/en/api_docs/python/tf/keras/wrappers/scikit_learn/KerasRegressor.md new file mode 100644 index 00000000000..d5254213c7f --- /dev/null +++ b/site/en/api_docs/python/tf/keras/wrappers/scikit_learn/KerasRegressor.md @@ -0,0 +1,438 @@ +description: Implementation of the scikit-learn regressor API for Keras. + +
+ + + + + + + + + + +
+ +# tf.keras.wrappers.scikit_learn.KerasRegressor + + + + + + + + + +Implementation of the scikit-learn regressor API for Keras. + + + + + + + + + + +## Methods + +

check_params

+ +View source + + + +Checks for user typos in `params`. + + + + + + + + + + + +
Arguments
+`params` + +dictionary; the parameters to be checked +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if any member of `params` is not a valid argument. +
+ + + +

filter_sk_params

+ +View source + + + +Filters `sk_params` and returns those in `fn`'s arguments. + + + + + + + + + + + + + + +
Arguments
+`fn` + +arbitrary function +
+`override` + +dictionary, values to override `sk_params` +
+ + + + + + + + + + + + +
Returns
+`res` + +dictionary containing variables +in both `sk_params` and `fn`'s arguments. +
+ + + +

fit

+ +View source + + + +Constructs a new model with `build_fn` & fit the model to `(x, y)`. + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +array-like, shape `(n_samples, n_features)` +Training samples where `n_samples` is the number of samples +and `n_features` is the number of features. +
+`y` + +array-like, shape `(n_samples,)` or `(n_samples, n_outputs)` +True labels for `x`. +
+`**kwargs` + +dictionary arguments +Legal arguments are the arguments of Sequential.fit +
+ + + + + + + + + + + + +
Returns
+`history` + +object +details about the training history at each epoch. +
+ + + +

get_params

+ +View source + + + +Gets parameters for this estimator. + + + + + + + + + + + +
Arguments
+`**params` + +ignored (exists for API compatibility). +
+ + + + + + + + + + + +
Returns
+Dictionary of parameter names mapped to their values. +
+ + + +

predict

+ +View source + + + +Returns predictions for the given test data. + + + + + + + + + + + + + + +
Arguments
+`x` + +array-like, shape `(n_samples, n_features)` +Test samples where `n_samples` is the number of samples +and `n_features` is the number of features. +
+`**kwargs` + +dictionary arguments +Legal arguments are the arguments of Sequential.predict. +
+ + + + + + + + + + + + +
Returns
+`preds` + +array-like, shape `(n_samples,)` +Predictions. +
+ + + +

score

+ +View source + + + +Returns the mean loss on the given test data and labels. + + + + + + + + + + + + + + + + + +
Arguments
+`x` + +array-like, shape `(n_samples, n_features)` +Test samples where `n_samples` is the number of samples +and `n_features` is the number of features. +
+`y` + +array-like, shape `(n_samples,)` +True labels for `x`. +
+`**kwargs` + +dictionary arguments +Legal arguments are the arguments of Sequential.evaluate. +
+ + + + + + + + + + + + +
Returns
+`score`
+
+float
+Mean loss of predictions on `x` wrt. `y`.
+
+ + + +

set_params

+ +View source + + + +Sets the parameters of this estimator. + + + + + + + + + + + +
Arguments
+`**params` + +Dictionary of parameter names mapped to their values. +
+ + + + + + + + + + + +
Returns
+self +
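+
+An illustrative sketch (not part of the original docs), assuming
+TensorFlow 2.x; the data is random and purely for demonstration:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+def build_model():
+  model = tf.keras.Sequential([
+      tf.keras.layers.Dense(32, activation='relu', input_shape=(8,)),
+      tf.keras.layers.Dense(1),
+  ])
+  model.compile(optimizer='adam', loss='mse')
+  return model
+
+reg = tf.keras.wrappers.scikit_learn.KerasRegressor(
+    build_fn=build_model, epochs=5, batch_size=32, verbose=0)
+
+x = np.random.random((200, 8)).astype('float32')
+y = np.random.random((200,)).astype('float32')
+
+reg.fit(x, y)
+preds = reg.predict(x[:5])  # array-like of shape (5,)
+print(reg.score(x, y))      # evaluates on (x, y); see `score` above
+```
+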
+ + + + + diff --git a/site/en/api_docs/python/tf/linalg.md b/site/en/api_docs/python/tf/linalg.md new file mode 100644 index 00000000000..1ea961cd119 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg.md @@ -0,0 +1,161 @@ +description: Operations for linear algebra. + +
+ + +
+ +# Module: tf.linalg + + + + + + + + + +Operations for linear algebra. + + + +## Modules + +[`experimental`](../tf/linalg/experimental.md) module: Public API for tf.linalg.experimental namespace. + +## Classes + +[`class LinearOperator`](../tf/linalg/LinearOperator.md): Base class defining a [batch of] linear operator[s]. + +[`class LinearOperatorAdjoint`](../tf/linalg/LinearOperatorAdjoint.md): `LinearOperator` representing the adjoint of another operator. + +[`class LinearOperatorBlockDiag`](../tf/linalg/LinearOperatorBlockDiag.md): Combines one or more `LinearOperators` in to a Block Diagonal matrix. + +[`class LinearOperatorBlockLowerTriangular`](../tf/linalg/LinearOperatorBlockLowerTriangular.md): Combines `LinearOperators` into a blockwise lower-triangular matrix. + +[`class LinearOperatorCirculant`](../tf/linalg/LinearOperatorCirculant.md): `LinearOperator` acting like a circulant matrix. + +[`class LinearOperatorCirculant2D`](../tf/linalg/LinearOperatorCirculant2D.md): `LinearOperator` acting like a block circulant matrix. + +[`class LinearOperatorCirculant3D`](../tf/linalg/LinearOperatorCirculant3D.md): `LinearOperator` acting like a nested block circulant matrix. + +[`class LinearOperatorComposition`](../tf/linalg/LinearOperatorComposition.md): Composes one or more `LinearOperators`. + +[`class LinearOperatorDiag`](../tf/linalg/LinearOperatorDiag.md): `LinearOperator` acting like a [batch] square diagonal matrix. + +[`class LinearOperatorFullMatrix`](../tf/linalg/LinearOperatorFullMatrix.md): `LinearOperator` that wraps a [batch] matrix. + +[`class LinearOperatorHouseholder`](../tf/linalg/LinearOperatorHouseholder.md): `LinearOperator` acting like a [batch] of Householder transformations. + +[`class LinearOperatorIdentity`](../tf/linalg/LinearOperatorIdentity.md): `LinearOperator` acting like a [batch] square identity matrix. + +[`class LinearOperatorInversion`](../tf/linalg/LinearOperatorInversion.md): `LinearOperator` representing the inverse of another operator. + +[`class LinearOperatorKronecker`](../tf/linalg/LinearOperatorKronecker.md): Kronecker product between two `LinearOperators`. + +[`class LinearOperatorLowRankUpdate`](../tf/linalg/LinearOperatorLowRankUpdate.md): Perturb a `LinearOperator` with a rank `K` update. + +[`class LinearOperatorLowerTriangular`](../tf/linalg/LinearOperatorLowerTriangular.md): `LinearOperator` acting like a [batch] square lower triangular matrix. + +[`class LinearOperatorPermutation`](../tf/linalg/LinearOperatorPermutation.md): `LinearOperator` acting like a [batch] of permutation matrices. + +[`class LinearOperatorScaledIdentity`](../tf/linalg/LinearOperatorScaledIdentity.md): `LinearOperator` acting like a scaled [batch] identity matrix `A = c I`. + +[`class LinearOperatorToeplitz`](../tf/linalg/LinearOperatorToeplitz.md): `LinearOperator` acting like a [batch] of toeplitz matrices. + +[`class LinearOperatorTridiag`](../tf/linalg/LinearOperatorTridiag.md): `LinearOperator` acting like a [batch] square tridiagonal matrix. + +[`class LinearOperatorZeros`](../tf/linalg/LinearOperatorZeros.md): `LinearOperator` acting like a [batch] zero matrix. + +## Functions + +[`adjoint(...)`](../tf/linalg/adjoint.md): Transposes the last two dimensions of and conjugates tensor `matrix`. + +[`band_part(...)`](../tf/linalg/band_part.md): Copy a tensor setting everything outside a central band in each innermost matrix + +[`cholesky(...)`](../tf/linalg/cholesky.md): Computes the Cholesky decomposition of one or more square matrices. 
+ +[`cholesky_solve(...)`](../tf/linalg/cholesky_solve.md): Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations. + +[`cross(...)`](../tf/linalg/cross.md): Compute the pairwise cross product. + +[`det(...)`](../tf/linalg/det.md): Computes the determinant of one or more square matrices. + +[`diag(...)`](../tf/linalg/diag.md): Returns a batched diagonal tensor with given batched diagonal values. + +[`diag_part(...)`](../tf/linalg/diag_part.md): Returns the batched diagonal part of a batched tensor. + +[`eig(...)`](../tf/linalg/eig.md): Computes the eigen decomposition of a batch of matrices. + +[`eigh(...)`](../tf/linalg/eigh.md): Computes the eigen decomposition of a batch of self-adjoint matrices. + +[`eigvals(...)`](../tf/linalg/eigvals.md): Computes the eigenvalues of one or more matrices. + +[`eigvalsh(...)`](../tf/linalg/eigvalsh.md): Computes the eigenvalues of one or more self-adjoint matrices. + +[`einsum(...)`](../tf/einsum.md): Tensor contraction over specified indices and outer product. + +[`expm(...)`](../tf/linalg/expm.md): Computes the matrix exponential of one or more square matrices. + +[`eye(...)`](../tf/eye.md): Construct an identity matrix, or a batch of matrices. + +[`global_norm(...)`](../tf/linalg/global_norm.md): Computes the global norm of multiple tensors. + +[`inv(...)`](../tf/linalg/inv.md): Computes the inverse of one or more square invertible matrices or their + +[`l2_normalize(...)`](../tf/math/l2_normalize.md): Normalizes along dimension `axis` using an L2 norm. + +[`logdet(...)`](../tf/linalg/logdet.md): Computes log of the determinant of a hermitian positive definite matrix. + +[`logm(...)`](../tf/linalg/logm.md): Computes the matrix logarithm of one or more square matrices: + +[`lstsq(...)`](../tf/linalg/lstsq.md): Solves one or more linear least-squares problems. + +[`lu(...)`](../tf/linalg/lu.md): Computes the LU decomposition of one or more square matrices. + +[`lu_matrix_inverse(...)`](../tf/linalg/lu_matrix_inverse.md): Computes the inverse given the LU decomposition(s) of one or more matrices. + +[`lu_reconstruct(...)`](../tf/linalg/lu_reconstruct.md): The reconstruct one or more matrices from their LU decomposition(s). + +[`lu_solve(...)`](../tf/linalg/lu_solve.md): Solves systems of linear eqns `A X = RHS`, given LU factorizations. + +[`matmul(...)`](../tf/linalg/matmul.md): Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +[`matrix_rank(...)`](../tf/linalg/matrix_rank.md): Compute the matrix rank of one or more matrices. + +[`matrix_transpose(...)`](../tf/linalg/matrix_transpose.md): Transposes last two dimensions of tensor `a`. + +[`matvec(...)`](../tf/linalg/matvec.md): Multiplies matrix `a` by vector `b`, producing `a` * `b`. + +[`norm(...)`](../tf/norm.md): Computes the norm of vectors, matrices, and tensors. + +[`normalize(...)`](../tf/linalg/normalize.md): Normalizes `tensor` along dimension `axis` using specified norm. + +[`pinv(...)`](../tf/linalg/pinv.md): Compute the Moore-Penrose pseudo-inverse of one or more matrices. + +[`qr(...)`](../tf/linalg/qr.md): Computes the QR decompositions of one or more matrices. + +[`set_diag(...)`](../tf/linalg/set_diag.md): Returns a batched matrix tensor with new batched diagonal values. + +[`slogdet(...)`](../tf/linalg/slogdet.md): Computes the sign and the log of the absolute value of the determinant of + +[`solve(...)`](../tf/linalg/solve.md): Solves systems of linear equations. 
+ +[`sqrtm(...)`](../tf/linalg/sqrtm.md): Computes the matrix square root of one or more square matrices: + +[`svd(...)`](../tf/linalg/svd.md): Computes the singular value decompositions of one or more matrices. + +[`tensor_diag(...)`](../tf/linalg/tensor_diag.md): Returns a diagonal tensor with a given diagonal values. + +[`tensor_diag_part(...)`](../tf/linalg/tensor_diag_part.md): Returns the diagonal part of the tensor. + +[`tensordot(...)`](../tf/tensordot.md): Tensor contraction of a and b along specified axes and outer product. + +[`trace(...)`](../tf/linalg/trace.md): Compute the trace of a tensor `x`. + +[`triangular_solve(...)`](../tf/linalg/triangular_solve.md): Solve systems of linear equations with upper or lower triangular matrices. + +[`tridiagonal_matmul(...)`](../tf/linalg/tridiagonal_matmul.md): Multiplies tridiagonal matrix by matrix. + +[`tridiagonal_solve(...)`](../tf/linalg/tridiagonal_solve.md): Solves tridiagonal systems of equations. + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperator.md b/site/en/api_docs/python/tf/linalg/LinearOperator.md new file mode 100644 index 00000000000..e2ab70d1f2d --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperator.md @@ -0,0 +1,1707 @@ +description: Base class defining a [batch of] linear operator[s]. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperator + + + + + + + + + +Base class defining a [batch of] linear operator[s]. + +Inherits From: [`Module`](../../tf/Module.md) + + + + + + + + + +Subclasses of `LinearOperator` provide access to common methods on a +(batch) matrix, without the need to materialize the matrix. This allows: + +* Matrix free computations +* Operators that take advantage of special structure, while providing a + consistent API to users. + +#### Subclassing + +To enable a public method, subclasses should implement the leading-underscore +version of the method. The argument signature should be identical except for +the omission of `name="..."`. For example, to enable +`matmul(x, adjoint=False, name="matmul")` a subclass should implement +`_matmul(x, adjoint=False)`. + +#### Performance contract + +Subclasses should only implement the assert methods +(e.g. `assert_non_singular`) if they can be done in less than `O(N^3)` +time. + +Class docstrings should contain an explanation of computational complexity. +Since this is a high-performance library, attention should be paid to detail, +and explanations can include constants as well as Big-O notation. + +#### Shape compatibility + +`LinearOperator` subclasses should operate on a [batch] matrix with +compatible shape. Class docstrings should define what is meant by compatible +shape. Some subclasses may not support batching. + +#### Examples: + + + +`x` is a batch matrix with compatible shape for `matmul` if + +``` +operator.shape = [B1,...,Bb] + [M, N], b >= 0, +x.shape = [B1,...,Bb] + [N, R] +``` + +`rhs` is a batch matrix with compatible shape for `solve` if + +``` +operator.shape = [B1,...,Bb] + [M, N], b >= 0, +rhs.shape = [B1,...,Bb] + [M, R] +``` + +#### Example docstring for subclasses. + +This operator acts like a (batch) matrix `A` with shape +`[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `m x n` matrix. Again, this matrix `A` may not be materialized, but for +purposes of identifying and working with compatible arguments the shape is +relevant. + +#### Examples: + + + +```python +some_tensor = ... shape = ???? +operator = MyLinOp(some_tensor) + +operator.shape() +==> [2, 4, 4] + +operator.log_abs_determinant() +==> Shape [2] Tensor + +x = ... Shape [2, 4, 5] Tensor + +operator.matmul(x) +==> Shape [2, 4, 5] Tensor +``` + +#### Shape compatibility + +This operator acts on batch matrices with compatible shape. +FILL IN WHAT IS MEANT BY COMPATIBLE SHAPE + +#### Performance + +FILL THIS IN + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype`
+
+The type of this `LinearOperator`. Arguments to `matmul` and
+`solve` will have to be this type.
+
+`graph_parents`
+
+(Deprecated) Python list of graph prerequisites of this
+`LinearOperator`. Typically tensors that are passed during
+initialization.
+
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. If `dtype` is real, this is equivalent to being symmetric. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If any member of graph_parents is `None` or not a `Tensor`. +
+`ValueError` + +If hints are set incorrectly. +
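+
+As a concrete, runnable counterpart to the abstract example above (an
+illustrative sketch, not part of the original docstring), using the
+`LinearOperatorFullMatrix` subclass:
+
+```python
+import tensorflow as tf
+
+# A batch of two 2 x 2 matrices, wrapped as a LinearOperator.
+matrix = tf.constant([[[2., 0.], [0., 4.]],
+                      [[1., 1.], [0., 1.]]])
+operator = tf.linalg.LinearOperatorFullMatrix(matrix,
+                                              is_non_singular=True)
+
+print(operator.shape)               # (2, 2, 2)
+print(operator.to_dense().numpy())  # materializes the batch of matrices
+
+x = tf.ones([2, 2, 3])              # compatible [batch] right-hand side
+y = operator.matmul(x)              # shape (2, 2, 3)
+z = operator.solve(y)               # recovers x up to numerical error
+print(tf.reduce_max(tf.abs(z - x)).numpy())
+```
+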
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square`
+
+Return `True/False` depending on whether this operator is square.
+
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matric A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorAdjoint.md b/site/en/api_docs/python/tf/linalg/LinearOperatorAdjoint.md new file mode 100644 index 00000000000..d5f79988dcb --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorAdjoint.md @@ -0,0 +1,1639 @@ +description: LinearOperator representing the adjoint of another operator. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorAdjoint + + + + + + + + + +`LinearOperator` representing the adjoint of another operator. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator represents the adjoint of another operator. + +```python +# Create a 2 x 2 linear operator. +operator = LinearOperatorFullMatrix([[1 - i., 3.], [0., 1. + i]]) +operator_adjoint = LinearOperatorAdjoint(operator) + +operator_adjoint.to_dense() +==> [[1. + i, 0.] + [3., 1 - i]] + +operator_adjoint.shape +==> [2, 2] + +operator_adjoint.log_abs_determinant() +==> - log(2) + +x = ... Shape [2, 4] Tensor +operator_adjoint.matmul(x) +==> Shape [2, 4] Tensor, equal to operator.matmul(x, adjoint=True) +``` + +#### Performance + +The performance of `LinearOperatorAdjoint` depends on the underlying +operators performance. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`operator` + +`LinearOperator` object. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. Default is `operator.name + +"_adjoint"`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `operator.is_non_singular` is False. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square`
+
+Return `True/False` depending on whether this operator is square.
+
+`operator` + +The operator before taking the adjoint. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matric A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorBlockDiag.md b/site/en/api_docs/python/tf/linalg/LinearOperatorBlockDiag.md new file mode 100644 index 00000000000..153b16bdd7b --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorBlockDiag.md @@ -0,0 +1,1690 @@ +description: Combines one or more LinearOperators into a Block Diagonal matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorBlockDiag + + + + + + + + + +Combines one or more `LinearOperators` into a Block Diagonal matrix. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator combines one or more linear operators `[op1,...,opJ]`, +building a new `LinearOperator`, whose underlying matrix representation is +square and has each operator `opi` on the main diagonal, and zeros elsewhere. + +#### Shape compatibility + +If `opj` acts like a [batch] square matrix `Aj`, then `op_combined` acts like +the [batch] square matrix formed by having each matrix `Aj` on the main +diagonal. + + +Each `opj` is required to represent a square matrix, and hence will have +shape `batch_shape_j + [M_j, M_j]`. + +If `opj` has shape `batch_shape_j + [M_j, M_j]`, then the combined operator +has shape `broadcast_batch_shape + [sum M_j, sum M_j]`, where +`broadcast_batch_shape` is the mutual broadcast of `batch_shape_j`, +`j = 1,...,J`, assuming the intermediate batch shapes broadcast. +Even if the combined shape is well defined, the combined operator's +methods may fail due to lack of broadcasting ability in the defining +operators' methods. + +```python +# Create a 4 x 4 linear operator combined of two 2 x 2 operators. +operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]]) +operator_2 = LinearOperatorFullMatrix([[1., 0.], [0., 1.]]) +operator = LinearOperatorBlockDiag([operator_1, operator_2]) + +operator.to_dense() +==> [[1., 2., 0., 0.], + [3., 4., 0., 0.], + [0., 0., 1., 0.], + [0., 0., 0., 1.]] + +operator.shape +==> [4, 4] + +operator.log_abs_determinant() +==> scalar Tensor + +x1 = ... # Shape [2, 2] Tensor +x2 = ... # Shape [2, 2] Tensor +x = tf.concat([x1, x2], 0) # Shape [4, 2] Tensor +operator.matmul(x) +==> tf.concat([operator_1.matmul(x1), operator_2.matmul(x2)], axis=0) + +# Create a [2, 3] batch of 4 x 4 linear operators. +matrix_44 = tf.random.normal(shape=[2, 3, 4, 4]) +operator_44 = LinearOperatorFullMatrix(matrix_44) + +# Create a [1, 3] batch of 5 x 5 linear operators. +matrix_55 = tf.random.normal(shape=[1, 3, 5, 5]) +operator_55 = LinearOperatorFullMatrix(matrix_55) + +# Combine to create a [2, 3] batch of 9 x 9 operators. +operator_99 = LinearOperatorBlockDiag([operator_44, operator_55]) + +# Create a shape [2, 3, 9] batch of vectors. +x = tf.random.normal(shape=[2, 3, 9]) +operator_99.matvec(x) +==> Shape [2, 3, 9] Tensor +``` + +#### Performance + +The cost of any operation on `LinearOperatorBlockDiag` is the sum of the cost +of that operation on each of the individual operators. + + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`operators` + +Iterable of `LinearOperator` objects, each with +the same `dtype` and composable shape. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +This is true by default, and will raise a `ValueError` otherwise. +
+`name` + +A name for this `LinearOperator`. Default is the individual +operators' names joined with `_o_`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If all operators do not have the same `dtype`. +
+`ValueError` + +If `operators` is empty or contains non-square operators. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on whether this operator is square. +
+`operators` + + +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +
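A sketch of `cholesky` on a block-diagonal operator; the blocks here are hinted as positive definite and self-adjoint so the method does not raise, and the resulting factor is itself block diagonal:

```python
import tensorflow as tf

block_1 = tf.linalg.LinearOperatorDiag(
    [4., 9.], is_positive_definite=True, is_self_adjoint=True)
block_2 = tf.linalg.LinearOperatorFullMatrix(
    [[1.]], is_positive_definite=True, is_self_adjoint=True)

operator = tf.linalg.LinearOperatorBlockDiag(
    [block_1, block_2], is_positive_definite=True, is_self_adjoint=True)

chol = operator.cholesky()
print(chol.to_dense())
# [[2., 0., 0.],
#  [0., 3., 0.],
#  [0., 0., 1.]]
```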

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +
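Because the blocks sit on the main diagonal, the determinant factors over the blocks; a small sketch:

```python
import tensorflow as tf

op_1 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])   # det = -2
op_2 = tf.linalg.LinearOperatorFullMatrix([[2., 0.], [0., 3.]])   # det = 6

operator = tf.linalg.LinearOperatorBlockDiag([op_1, op_2])

print(operator.determinant())                      # -12.0
print(op_1.determinant() * op_2.determinant())     # Also -12.0
```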

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +
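A sketch showing that `matmul` on the combined operator agrees with applying each block to the corresponding slice of `x`:

```python
import tensorflow as tf

op_1 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
op_2 = tf.linalg.LinearOperatorFullMatrix([[5., 6.], [7., 8.]])
operator = tf.linalg.LinearOperatorBlockDiag([op_1, op_2])

x1 = tf.ones([2, 1])
x2 = 2. * tf.ones([2, 1])
x = tf.concat([x1, x2], axis=0)     # Shape [4, 1].

y = operator.matmul(x)              # [[3.], [7.], [22.], [30.]]
y_blockwise = tf.concat([op_1.matmul(x1), op_2.matmul(x2)], axis=0)

# Difference is zero up to round-off.
print(tf.reduce_max(tf.abs(y - y_blockwise)))
```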

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix, meaning for every set of leading +dimensions, the last two dimensions define a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +
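Solving against a block-diagonal operator likewise decomposes into independent per-block solves; a minimal sketch:

```python
import tensorflow as tf

op_1 = tf.linalg.LinearOperatorFullMatrix(
    [[2., 0.], [0., 4.]], is_non_singular=True)
op_2 = tf.linalg.LinearOperatorFullMatrix(
    [[1., 1.], [0., 1.]], is_non_singular=True)
operator = tf.linalg.LinearOperatorBlockDiag([op_1, op_2])

rhs = tf.constant([[2.], [8.], [3.], [1.]])
x = operator.solve(rhs)             # [[1.], [2.], [2.], [1.]]

# The same result, block by block.
x_blockwise = tf.concat(
    [op_1.solve(rhs[:2]), op_2.solve(rhs[2:])], axis=0)
print(tf.reduce_max(tf.abs(x - x_blockwise)))   # ~0.0
```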

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorBlockLowerTriangular.md b/site/en/api_docs/python/tf/linalg/LinearOperatorBlockLowerTriangular.md new file mode 100644 index 00000000000..2b6eaf43960 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorBlockLowerTriangular.md @@ -0,0 +1,1767 @@ +description: Combines LinearOperators into a blockwise lower-triangular matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorBlockLowerTriangular + + + + + + + + + +Combines `LinearOperators` into a blockwise lower-triangular matrix. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator is initialized with a nested list of linear operators, which +are combined into a new `LinearOperator` whose underlying matrix +representation is square and has each operator on or below the main diagonal, +and zero's elsewhere. Each element of the outer list is a list of +`LinearOperators` corresponding to a row-partition of the blockwise structure. +The number of `LinearOperator`s in row-partion `i` must be equal to `i`. + +For example, a blockwise `3 x 3` `LinearOperatorBlockLowerTriangular` is +initialized with the list `[[op_00], [op_10, op_11], [op_20, op_21, op_22]]`, +where the `op_ij`, `i < 3, j <= i`, are `LinearOperator` instances. The +`LinearOperatorBlockLowerTriangular` behaves as the following blockwise +matrix, where `0` represents appropriately-sized [batch] matrices of zeros: + +```none +[[op_00, 0, 0], + [op_10, op_11, 0], + [op_20, op_21, op_22]] +``` + +Each `op_jj` on the diagonal is required to represent a square matrix, and +hence will have shape `batch_shape_j + [M_j, M_j]`. `LinearOperator`s in row +`j` of the blockwise structure must have `range_dimension` equal to that of +`op_jj`, and `LinearOperators` in column `j` must have `domain_dimension` +equal to that of `op_jj`. + +If each `op_jj` on the diagonal has shape `batch_shape_j + [M_j, M_j]`, then +the combined operator has shape `broadcast_batch_shape + [sum M_j, sum M_j]`, +where `broadcast_batch_shape` is the mutual broadcast of `batch_shape_j`, +`j = 0, 1, ..., J`, assuming the intermediate batch shapes broadcast. +Even if the combined shape is well defined, the combined operator's +methods may fail due to lack of broadcasting ability in the defining +operators' methods. + +For example, to create a 4 x 4 linear operator combined of three 2 x 2 +operators: +>>> operator_0 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]]) +>>> operator_1 = tf.linalg.LinearOperatorFullMatrix([[1., 0.], [0., 1.]]) +>>> operator_2 = tf.linalg.LinearOperatorLowerTriangular([[5., 6.], [7., 8]]) +>>> operator = LinearOperatorBlockLowerTriangular( +... [[operator_0], [operator_1, operator_2]]) + +``` +>>> operator.to_dense() + +``` + +``` +>>> operator.shape +TensorShape([4, 4]) +``` + +``` +>>> operator.log_abs_determinant() + +``` + +``` +>>> x0 = [[1., 6.], [-3., 4.]] +>>> x1 = [[0., 2.], [4., 0.]] +>>> x = tf.concat([x0, x1], 0) # Shape [2, 4] Tensor +>>> operator.matmul(x) + +``` + +The above `matmul` is equivalent to: +>>> tf.concat([operator_0.matmul(x0), +... operator_1.matmul(x0) + operator_2.matmul(x1)], axis=0) + + +#### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [M, N], with b >= 0 +x.shape = [B1,...,Bb] + [N, R], with R >= 0. 
+``` + +#### For example: + + + +Create a [2, 3] batch of 4 x 4 linear operators: +>>> matrix_44 = tf.random.normal(shape=[2, 3, 4, 4]) +>>> operator_44 = tf.linalg.LinearOperatorFullMatrix(matrix_44) + +Create a [1, 3] batch of 5 x 4 linear operators: +>>> matrix_54 = tf.random.normal(shape=[1, 3, 5, 4]) +>>> operator_54 = tf.linalg.LinearOperatorFullMatrix(matrix_54) + +Create a [1, 3] batch of 5 x 5 linear operators: +>>> matrix_55 = tf.random.normal(shape=[1, 3, 5, 5]) +>>> operator_55 = tf.linalg.LinearOperatorFullMatrix(matrix_55) + +Combine to create a [2, 3] batch of 9 x 9 operators: +>>> operator_99 = LinearOperatorBlockLowerTriangular( +... [[operator_44], [operator_54, operator_55]]) +>>> operator_99.shape +TensorShape([2, 3, 9, 9]) + +Create a shape [2, 1, 9] batch of vectors and apply the operator to it. +>>> x = tf.random.normal(shape=[2, 1, 9]) +>>> y = operator_99.matvec(x) +>>> y.shape +TensorShape([2, 3, 9]) + +#### Performance + +Suppose `operator` is a `LinearOperatorBlockLowerTriangular` consisting of `D` +row-partitions and `D` column-partitions, such that the total number of +operators is `N = D * (D + 1) // 2`. + +* `operator.matmul` has complexity equal to the sum of the `matmul` + complexities of the individual operators. +* `operator.solve` has complexity equal to the sum of the `solve` complexities + of the operators on the diagonal and the `matmul` complexities of the + operators off the diagonal. +* `operator.determinant` has complexity equal to the sum of the `determinant` + complexities of the operators on the diagonal. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`operators` + +Iterable of iterables of `LinearOperator` objects, each with +the same `dtype`. Each element of `operators` corresponds to a row- +partition, in top-to-bottom order. The operators in each row-partition +are filled in left-to-right. For example, +`operators = [[op_0], [op_1, op_2], [op_3, op_4, op_5]]` creates a +`LinearOperatorBlockLowerTriangular` with full block structure +`[[op_0, 0, 0], [op_1, op_2, 0], [op_3, op_4, op_5]]`. The number of +operators in the `i`th row must be equal to `i`, such that each operator +falls on or below the diagonal of the blockwise structure. +`LinearOperator`s that fall on the diagonal (the last elements of each +row) must be square. The other `LinearOperator`s must have domain +dimension equal to the domain dimension of the `LinearOperator`s in the +same column-partition, and range dimension equal to the range dimension +of the `LinearOperator`s in the same row-partition. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +This will raise a `ValueError` if set to `False`. +
+`name` + +A name for this `LinearOperator`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If all operators do not have the same `dtype`. +
+`ValueError` + +If `operators` is empty, contains an erroneous number of +elements, or contains operators with incompatible shapes. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on whether this operator is square. +
+`operators` + + +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix, meaning for every set of leading +dimensions, the last two dimensions define a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +
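For a blockwise lower-triangular operator, `solve` amounts to block forward substitution: the first block row is solved on its own, and its solution is substituted into the rows below. A small sketch (the operator names are illustrative):

```python
import tensorflow as tf

op_00 = tf.linalg.LinearOperatorDiag([2., 2.], is_non_singular=True)
op_10 = tf.linalg.LinearOperatorFullMatrix([[1., 0.], [0., 1.]])
op_11 = tf.linalg.LinearOperatorLowerTriangular(
    [[3., 0.], [1., 3.]], is_non_singular=True)

operator = tf.linalg.LinearOperatorBlockLowerTriangular(
    [[op_00], [op_10, op_11]])

rhs = tf.constant([[2.], [4.], [4.], [6.]])
x = operator.solve(rhs)             # [[1.], [2.], [1.], [1.]]

# Applying the operator recovers the right-hand side.
print(operator.matmul(x))           # ~[[2.], [4.], [4.], [6.]]
```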

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorCirculant.md b/site/en/api_docs/python/tf/linalg/LinearOperatorCirculant.md new file mode 100644 index 00000000000..a3fd3528e5c --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorCirculant.md @@ -0,0 +1,1917 @@ +description: LinearOperator acting like a circulant matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorCirculant + + + + + + + + + +`LinearOperator` acting like a circulant matrix. + + + + + + + + + +This operator acts like a circulant matrix `A` with +shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x N` matrix. This matrix `A` is not materialized, but for +purposes of broadcasting this shape will be relevant. + +#### Description in terms of circulant matrices + +Circulant means the entries of `A` are generated by a single vector, the +convolution kernel `h`: `A_{mn} := h_{m-n mod N}`. With `h = [w, x, y, z]`, + +``` +A = |w z y x| + |x w z y| + |y x w z| + |z y x w| +``` + +This means that the result of matrix multiplication `v = Au` has `Lth` column +given circular convolution between `h` with the `Lth` column of `u`. + +#### Description in terms of the frequency spectrum + +There is an equivalent description in terms of the [batch] spectrum `H` and +Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch +dimensions. Define the discrete Fourier transform (DFT) and its inverse by + +``` +DFT[ h[n] ] = H[k] := sum_{n = 0}^{N - 1} h_n e^{-i 2pi k n / N} +IDFT[ H[k] ] = h[n] = N^{-1} sum_{k = 0}^{N - 1} H_k e^{i 2pi k n / N} +``` + +From these definitions, we see that + +``` +H[0] = sum_{n = 0}^{N - 1} h_n +H[1] = "the first positive frequency" +H[N - 1] = "the first negative frequency" +``` + +Loosely speaking, with `*` element-wise multiplication, matrix multiplication +is equal to the action of a Fourier multiplier: `A u = IDFT[ H * DFT[u] ]`. +Precisely speaking, given `[N, R]` matrix `u`, let `DFT[u]` be the `[N, R]` +matrix with `rth` column equal to the DFT of the `rth` column of `u`. +Define the `IDFT` similarly. +Matrix multiplication may be expressed columnwise: + +```(A u)_r = IDFT[ H * (DFT[u])_r ]``` + +#### Operator properties deduced from the spectrum. + +Letting `U` be the `kth` Euclidean basis vector, and `U = IDFT[u]`. +The above formulas show that`A U = H_k * U`. We conclude that the elements +of `H` are the eigenvalues of this operator. Therefore + +* This operator is positive definite if and only if `Real{H} > 0`. + +A general property of Fourier transforms is the correspondence between +Hermitian functions and real valued transforms. + +Suppose `H.shape = [B1,...,Bb, N]`. We say that `H` is a Hermitian spectrum +if, with `%` meaning modulus division, + +```H[..., n % N] = ComplexConjugate[ H[..., (-n) % N] ]``` + +* This operator corresponds to a real matrix if and only if `H` is Hermitian. +* This operator is self-adjoint if and only if `H` is real. + +See e.g. "Discrete-Time Signal Processing", Oppenheim and Schafer. + +#### Example of a self-adjoint positive definite operator + +```python +# spectrum is real ==> operator is self-adjoint +# spectrum is positive ==> operator is positive definite +spectrum = [6., 4, 2] + +operator = LinearOperatorCirculant(spectrum) + +# IFFT[spectrum] +operator.convolution_kernel() +==> [4 + 0j, 1 + 0.58j, 1 - 0.58j] + +operator.to_dense() +==> [[4 + 0.0j, 1 - 0.6j, 1 + 0.6j], + [1 + 0.6j, 4 + 0.0j, 1 - 0.6j], + [1 - 0.6j, 1 + 0.6j, 4 + 0.0j]] +``` + +#### Example of defining in terms of a real convolution kernel + +```python +# convolution_kernel is real ==> spectrum is Hermitian. +convolution_kernel = [1., 2., 1.]] +spectrum = tf.signal.fft(tf.cast(convolution_kernel, tf.complex64)) + +# spectrum is Hermitian ==> operator is real. 
+# spectrum is shape [3] ==> operator is shape [3, 3] +# We force the input/output type to be real, which allows this to operate +# like a real matrix. +operator = LinearOperatorCirculant(spectrum, input_output_dtype=tf.float32) + +operator.to_dense() +==> [[ 1, 1, 2], + [ 2, 1, 1], + [ 1, 2, 1]] +``` + +#### Example of Hermitian spectrum + +```python +# spectrum is shape [3] ==> operator is shape [3, 3] +# spectrum is Hermitian ==> operator is real. +spectrum = [1, 1j, -1j] + +operator = LinearOperatorCirculant(spectrum) + +operator.to_dense() +==> [[ 0.33 + 0j, 0.91 + 0j, -0.24 + 0j], + [-0.24 + 0j, 0.33 + 0j, 0.91 + 0j], + [ 0.91 + 0j, -0.24 + 0j, 0.33 + 0j] +``` + +#### Example of forcing real `dtype` when spectrum is Hermitian + +```python +# spectrum is shape [4] ==> operator is shape [4, 4] +# spectrum is real ==> operator is self-adjoint +# spectrum is Hermitian ==> operator is real +# spectrum has positive real part ==> operator is positive-definite. +spectrum = [6., 4, 2, 4] + +# Force the input dtype to be float32. +# Cast the output to float32. This is fine because the operator will be +# real due to Hermitian spectrum. +operator = LinearOperatorCirculant(spectrum, input_output_dtype=tf.float32) + +operator.shape +==> [4, 4] + +operator.to_dense() +==> [[4, 1, 0, 1], + [1, 4, 1, 0], + [0, 1, 4, 1], + [1, 0, 1, 4]] + +# convolution_kernel = tf.signal.ifft(spectrum) +operator.convolution_kernel() +==> [4, 1, 0, 1] +``` + +#### Performance + +Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`, +and `x.shape = [N, R]`. Then + +* `operator.matmul(x)` is `O(R*N*Log[N])` +* `operator.solve(x)` is `O(R*N*Log[N])` +* `operator.determinant()` involves a size `N` `reduce_prod`. + +If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and +`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + +#### References: + +Toeplitz and Circulant Matrices - A Review: + [Gray, 2006](https://www.nowpublishers.com/article/Details/CIT-006) + ([pdf](https://ee.stanford.edu/~gray/toeplitz.pdf)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`spectrum` + +Shape `[B1,...,Bb, N]` `Tensor`. Allowed dtypes: `float16`, +`float32`, `float64`, `complex64`, `complex128`. Type can be different +than `input_output_dtype` +
+`input_output_dtype` + +`dtype` for input/output. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. If `spectrum` is real, this will always be true. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name to prepend to all ops created by this class. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`block_depth` + +Depth of recursively defined circulant blocks defining this `Operator`. + +With `A` the dense representation of this `Operator`, + +`block_depth = 1` means `A` is symmetric circulant. For example, + +``` +A = |w z y x| +|x w z y| +|y x w z| +|z y x w| +``` + +`block_depth = 2` means `A` is block symmetric circulant with symmetric +circulant blocks. For example, with `W`, `X`, `Y`, `Z` symmetric circulant, + +``` +A = |W Z Y X| +|X W Z Y| +|Y X W Z| +|Z Y X W| +``` + +`block_depth = 3` means `A` is block symmetric circulant with block +symmetric circulant blocks. +
+`block_shape` + + +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on whether this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`spectrum` + + +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_hermitian_spectrum

+ +View source + + + +Returns an `Op` that asserts this operator has Hermitian spectrum. + +This operator corresponds to a real-valued matrix if and only if its +spectrum is Hermitian. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Op` that asserts this operator has Hermitian spectrum. +
+ + + +
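A sketch of the Hermitian-spectrum check: a spectrum obtained as the FFT of a real convolution kernel is Hermitian, so the assertion passes; with a non-Hermitian spectrum the returned assertion would raise an `InvalidArgumentError` when run:

```python
import tensorflow as tf

# Real kernel ==> Hermitian spectrum ==> real-valued operator.
kernel = tf.constant([1., 2., 1.])
spectrum = tf.signal.fft(tf.cast(kernel, tf.complex64))

operator = tf.linalg.LinearOperatorCirculant(
    spectrum, input_output_dtype=tf.float32)

# Passes for this spectrum; a non-Hermitian spectrum would trigger the
# InvalidArgumentError described above.
operator.assert_hermitian_spectrum()
```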

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

block_shape_tensor

+ +View source + + + +Shape of the block dimensions of `self.spectrum`. + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

convolution_kernel

+ +View source + + + +Convolution kernel corresponding to `self.spectrum`. + +The `D` dimensional DFT of this kernel is the frequency domain spectrum of +this operator. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with `dtype` `self.dtype`. +
+ + + +
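The convolution kernel is just the inverse DFT of the spectrum, which can be checked directly; a sketch reusing the spectrum `[6., 4, 2, 4]` from the class examples:

```python
import tensorflow as tf

spectrum = tf.constant([6., 4., 2., 4.])
operator = tf.linalg.LinearOperatorCirculant(
    spectrum, input_output_dtype=tf.float32)

print(operator.convolution_kernel())     # ~[4., 1., 0., 1.]

# The same values via an explicit inverse FFT of the spectrum.
kernel = tf.signal.ifft(tf.cast(spectrum, tf.complex64))
print(tf.math.real(kernel))              # ~[4., 1., 0., 1.]
```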

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +
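A short sketch (editorial, eager TensorFlow 2.x assumed) of `inverse`; the `is_non_singular=True` hint and the spectrum values are assumptions made for the example.

```python
import tensorflow as tf

spectrum = tf.constant([3., 2., 1.])
operator = tf.linalg.LinearOperatorCirculant(spectrum, is_non_singular=True)

inv = operator.inverse()         # Also a LinearOperator
operator.matmul(inv.to_dense())  # ==> approximately the 3 x 3 identity matrix
```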

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix, meaning for every set of leading +dimensions, the last two dimensions define a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +
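A small sketch (editorial, eager TensorFlow 2.x assumed) checking the stated identity between `trace` and the sum of `diag_part`; the spectrum is arbitrary example data.

```python
import tensorflow as tf

spectrum = tf.constant([1., 2., 3.])
operator = tf.linalg.LinearOperatorCirculant(spectrum)

operator.trace()                              # Sum of the diagonal entries
tf.reduce_sum(operator.diag_part(), axis=-1)  # ==> the same value
```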

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorCirculant2D.md b/site/en/api_docs/python/tf/linalg/LinearOperatorCirculant2D.md new file mode 100644 index 00000000000..cdb3023011b --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorCirculant2D.md @@ -0,0 +1,1849 @@ +description: LinearOperator acting like a block circulant matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorCirculant2D + + + + + + + + + +`LinearOperator` acting like a block circulant matrix. + + + + + + + + + +This operator acts like a block circulant matrix `A` with +shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x N` matrix. This matrix `A` is not materialized, but for +purposes of broadcasting this shape will be relevant. + +#### Description in terms of block circulant matrices + +If `A` is block circulant, with block sizes `N0, N1` (`N0 * N1 = N`): +`A` has a block circulant structure, composed of `N0 x N0` blocks, with each +block an `N1 x N1` circulant matrix. + +For example, with `W`, `X`, `Y`, `Z` each circulant, + +``` +A = |W Z Y X| + |X W Z Y| + |Y X W Z| + |Z Y X W| +``` + +Note that `A` itself will not in general be circulant. + +#### Description in terms of the frequency spectrum + +There is an equivalent description in terms of the [batch] spectrum `H` and +Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch +dimensions. + +If `H.shape = [N0, N1]`, (`N0 * N1 = N`): +Loosely speaking, matrix multiplication is equal to the action of a +Fourier multiplier: `A u = IDFT2[ H DFT2[u] ]`. +Precisely speaking, given `[N, R]` matrix `u`, let `DFT2[u]` be the +`[N0, N1, R]` `Tensor` defined by re-shaping `u` to `[N0, N1, R]` and taking +a two dimensional DFT across the first two dimensions. Let `IDFT2` be the +inverse of `DFT2`. Matrix multiplication may be expressed columnwise: + +```(A u)_r = IDFT2[ H * (DFT2[u])_r ]``` + +#### Operator properties deduced from the spectrum. + +* This operator is positive definite if and only if `Real{H} > 0`. + +A general property of Fourier transforms is the correspondence between +Hermitian functions and real valued transforms. + +Suppose `H.shape = [B1,...,Bb, N0, N1]`, we say that `H` is a Hermitian +spectrum if, with `%` indicating modulus division, + +``` +H[..., n0 % N0, n1 % N1] = ComplexConjugate[ H[..., (-n0) % N0, (-n1) % N1 ]. +``` + +* This operator corresponds to a real matrix if and only if `H` is Hermitian. +* This operator is self-adjoint if and only if `H` is real. + +See e.g. "Discrete-Time Signal Processing", Oppenheim and Schafer. + +### Example of a self-adjoint positive definite operator + +```python +# spectrum is real ==> operator is self-adjoint +# spectrum is positive ==> operator is positive definite +spectrum = [[1., 2., 3.], + [4., 5., 6.], + [7., 8., 9.]] + +operator = LinearOperatorCirculant2D(spectrum) + +# IFFT[spectrum] +operator.convolution_kernel() +==> [[5.0+0.0j, -0.5-.3j, -0.5+.3j], + [-1.5-.9j, 0, 0], + [-1.5+.9j, 0, 0]] + +operator.to_dense() +==> Complex self adjoint 9 x 9 matrix. +``` + +#### Example of defining in terms of a real convolution kernel, + +```python +# convolution_kernel is real ==> spectrum is Hermitian. +convolution_kernel = [[1., 2., 1.], [5., -1., 1.]] +spectrum = tf.signal.fft2d(tf.cast(convolution_kernel, tf.complex64)) + +# spectrum is shape [2, 3] ==> operator is shape [6, 6] +# spectrum is Hermitian ==> operator is real. +operator = LinearOperatorCirculant2D(spectrum, input_output_dtype=tf.float32) +``` + +#### Performance + +Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`, +and `x.shape = [N, R]`. Then + +* `operator.matmul(x)` is `O(R*N*Log[N])` +* `operator.solve(x)` is `O(R*N*Log[N])` +* `operator.determinant()` involves a size `N` `reduce_prod`. 
+ +If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and +`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`spectrum` + +Shape `[B1,...,Bb, N]` `Tensor`. Allowed dtypes: `float16`, +`float32`, `float64`, `complex64`, `complex128`. Type can be different +than `input_output_dtype` +
+`input_output_dtype` + +`dtype` for input/output. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. If `spectrum` is real, this will always be true. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name to prepend to all ops created by this class. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`block_depth` + +Depth of recursively defined circulant blocks defining this `Operator`. + +With `A` the dense representation of this `Operator`, + +`block_depth = 1` means `A` is symmetric circulant. For example, + +``` +A = |w z y x| +|x w z y| +|y x w z| +|z y x w| +``` + +`block_depth = 2` means `A` is block symmetric circulant with symmetric +circulant blocks. For example, with `W`, `X`, `Y`, `Z` symmetric circulant, + +``` +A = |W Z Y X| +|X W Z Y| +|Y X W Z| +|Z Y X W| +``` + +`block_depth = 3` means `A` is block symmetric circulant with block +symmetric circulant blocks. +
+`block_shape` + + +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on whether this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`spectrum` + + +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_hermitian_spectrum

+ +View source + + + +Returns an `Op` that asserts this operator has Hermitian spectrum. + +This operator corresponds to a real-valued matrix if and only if its +spectrum is Hermitian. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Op` that asserts this operator has Hermitian spectrum. +
+ + + +
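An editorial sketch (eager TensorFlow 2.x assumed): a real convolution kernel yields a Hermitian spectrum, so the assert below should pass without error. The kernel values are arbitrary example data.

```python
import tensorflow as tf

kernel = tf.constant([[1., 2., 1.],
                      [5., -1., 1.]])
spectrum = tf.signal.fft2d(tf.cast(kernel, tf.complex64))
operator = tf.linalg.LinearOperatorCirculant2D(spectrum)

# Runs eagerly; raises InvalidArgumentError only if the spectrum is
# not Hermitian, which cannot happen for a real kernel.
operator.assert_hermitian_spectrum()
```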

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

block_shape_tensor

+ +View source + + + +Shape of the block dimensions of `self.spectrum`. + + +
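A brief editorial sketch (eager TensorFlow 2.x assumed) relating the block shape `[N0, N1]` of a hypothetical 2-D spectrum to the operator's `N0 * N1` matrix dimension.

```python
import tensorflow as tf

spectrum = tf.complex(tf.random.normal([2, 3]), tf.random.normal([2, 3]))
operator = tf.linalg.LinearOperatorCirculant2D(spectrum)

operator.block_shape_tensor()  # ==> [2, 3], i.e. [N0, N1]
operator.shape                 # ==> [6, 6], since N = N0 * N1 = 6
```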

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

convolution_kernel

+ +View source + + + +Convolution kernel corresponding to `self.spectrum`. + +The `D` dimensional DFT of this kernel is the frequency domain spectrum of +this operator. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with `dtype` `self.dtype`. +
+ + + +
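A sketch (editorial, eager TensorFlow 2.x assumed) showing the round trip between a kernel and its spectrum; the kernel values are arbitrary example data.

```python
import tensorflow as tf

kernel = tf.constant([[1., 2., 1.],
                      [5., -1., 1.]])
spectrum = tf.signal.fft2d(tf.cast(kernel, tf.complex64))
operator = tf.linalg.LinearOperatorCirculant2D(spectrum)

# Inverse DFT of the spectrum: recovers the kernel above, up to floating
# point error, as a complex-typed Tensor.
operator.convolution_kernel()
```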

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix, meaning for every set of leading +dimensions, the last two dimensions define a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorCirculant3D.md b/site/en/api_docs/python/tf/linalg/LinearOperatorCirculant3D.md new file mode 100644 index 00000000000..b43bba2cf68 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorCirculant3D.md @@ -0,0 +1,1821 @@ +description: LinearOperator acting like a nested block circulant matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorCirculant3D + + + + + + + + + +`LinearOperator` acting like a nested block circulant matrix. + + + + + + + + + +This operator acts like a block circulant matrix `A` with +shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x N` matrix. This matrix `A` is not materialized, but for +purposes of broadcasting this shape will be relevant. + +#### Description in terms of block circulant matrices + +If `A` is nested block circulant, with block sizes `N0, N1, N2` +(`N0 * N1 * N2 = N`): +`A` has a block structure, composed of `N0 x N0` blocks, with each +block an `N1 x N1` block circulant matrix. + +For example, with `W`, `X`, `Y`, `Z` each block circulant, + +``` +A = |W Z Y X| + |X W Z Y| + |Y X W Z| + |Z Y X W| +``` + +Note that `A` itself will not in general be circulant. + +#### Description in terms of the frequency spectrum + +There is an equivalent description in terms of the [batch] spectrum `H` and +Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch +dimensions. + +If `H.shape = [N0, N1, N2]`, (`N0 * N1 * N2 = N`): +Loosely speaking, matrix multiplication is equal to the action of a +Fourier multiplier: `A u = IDFT3[ H DFT3[u] ]`. +Precisely speaking, given `[N, R]` matrix `u`, let `DFT3[u]` be the +`[N0, N1, N2, R]` `Tensor` defined by re-shaping `u` to `[N0, N1, N2, R]` and +taking a three dimensional DFT across the first three dimensions. Let `IDFT3` +be the inverse of `DFT3`. Matrix multiplication may be expressed columnwise: + +```(A u)_r = IDFT3[ H * (DFT3[u])_r ]``` + +#### Operator properties deduced from the spectrum. + +* This operator is positive definite if and only if `Real{H} > 0`. + +A general property of Fourier transforms is the correspondence between +Hermitian functions and real valued transforms. + +Suppose `H.shape = [B1,...,Bb, N0, N1, N2]`, we say that `H` is a Hermitian +spectrum if, with `%` meaning modulus division, + +``` +H[..., n0 % N0, n1 % N1, n2 % N2] + = ComplexConjugate[ H[..., (-n0) % N0, (-n1) % N1, (-n2) % N2] ]. +``` + +* This operator corresponds to a real matrix if and only if `H` is Hermitian. +* This operator is self-adjoint if and only if `H` is real. + +See e.g. "Discrete-Time Signal Processing", Oppenheim and Schafer. + +### Examples + +See `LinearOperatorCirculant` and `LinearOperatorCirculant2D` for examples. + +#### Performance + +Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`, +and `x.shape = [N, R]`. Then + +* `operator.matmul(x)` is `O(R*N*Log[N])` +* `operator.solve(x)` is `O(R*N*Log[N])` +* `operator.determinant()` involves a size `N` `reduce_prod`. + +If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and +`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`spectrum` + +Shape `[B1,...,Bb, N]` `Tensor`. Allowed dtypes: `float16`, +`float32`, `float64`, `complex64`, `complex128`. Type can be different +than `input_output_dtype` +
+`input_output_dtype` + +`dtype` for input/output. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. If `spectrum` is real, this will always be true. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the real part of all eigenvalues is positive. We do not require +the operator to be self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name to prepend to all ops created by this class. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`block_depth` + +Depth of recursively defined circulant blocks defining this `Operator`. + +With `A` the dense representation of this `Operator`, + +`block_depth = 1` means `A` is symmetric circulant. For example, + +``` +A = |w z y x| +|x w z y| +|y x w z| +|z y x w| +``` + +`block_depth = 2` means `A` is block symmetric circulant with symmetric +circulant blocks. For example, with `W`, `X`, `Y`, `Z` symmetric circulant, + +``` +A = |W Z Y X| +|X W Z Y| +|Y X W Z| +|Z Y X W| +``` + +`block_depth = 3` means `A` is block symmetric circulant with block +symmetric circulant blocks. +
+`block_shape` + + +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on whether this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`spectrum` + + +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_hermitian_spectrum

+ +View source + + + +Returns an `Op` that asserts this operator has Hermitian spectrum. + +This operator corresponds to a real-valued matrix if and only if its +spectrum is Hermitian. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Op` that asserts this operator has Hermitian spectrum. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +
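An editorial sketch (eager TensorFlow 2.x assumed); the spectrum is kept well away from zero so the operator is comfortably non-singular and the assert should pass. The spectrum values are assumptions made for the example.

```python
import tensorflow as tf

spectrum = tf.cast(
    tf.random.uniform([2, 2, 2], minval=1., maxval=2.), tf.complex64)
operator = tf.linalg.LinearOperatorCirculant3D(spectrum)

# Executes eagerly; raises InvalidArgumentError only for a (near) singular
# operator, which this spectrum rules out.
operator.assert_non_singular()
```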

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

block_shape_tensor

+ +View source + + + +Shape of the block dimensions of `self.spectrum`. + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +
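A small sketch (editorial, eager TensorFlow 2.x assumed). For a circulant operator the eigenvalues are the spectrum entries, so the condition number should be roughly the ratio of the largest to smallest spectrum magnitude; the values below are example data.

```python
import tensorflow as tf

# Spectrum magnitudes range from 1 to 8.
spectrum = tf.constant([[[4., 2.], [2., 1.]],
                        [[8., 1.], [2., 1.]]])
operator = tf.linalg.LinearOperatorCirculant3D(spectrum)

operator.cond()  # ==> approximately 8.0 (= 8 / 1), as a scalar of self.dtype
```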

convolution_kernel

+ +View source + + + +Convolution kernel corresponding to `self.spectrum`. + +The `D` dimensional DFT of this kernel is the frequency domain spectrum of +this operator. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with `dtype` `self.dtype`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +
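A sketch (editorial, eager TensorFlow 2.x assumed) where the result can be checked by hand: a constant spectrum of 2 over an 8-point grid makes the operator act like `2 * I` on 8-vectors, an assumption chosen for the example.

```python
import tensorflow as tf

spectrum = tf.cast(tf.fill([2, 2, 2], 2.), tf.complex64)  # N = 2*2*2 = 8
operator = tf.linalg.LinearOperatorCirculant3D(spectrum)

operator.log_abs_determinant()  # ==> log(2 ** 8) = 8 * log(2) ~= 5.545
```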

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix, meaning for every set of leading +dimensions, the last two dimensions define a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +
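A short editorial sketch (eager TensorFlow 2.x assumed): one batch dimension plus the two matrix dimensions gives a tensor rank of 3. The spectrum shape is an assumption made for the example.

```python
import tensorflow as tf

# Spectrum shape [3, 2, 2, 2]: batch shape [3], block shape [2, 2, 2].
spectrum = tf.cast(tf.random.uniform([3, 2, 2, 2]), tf.complex64)
operator = tf.linalg.LinearOperatorCirculant3D(spectrum)

operator.shape                 # ==> [3, 8, 8]
operator.tensor_rank_tensor()  # ==> 3, i.e. b + 2 with b = 1
```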

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +
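A minimal editorial sketch (eager TensorFlow 2.x assumed) of materializing a small circulant operator; the spectrum is example data and the resulting matrix has the operator's `dtype`.

```python
import tensorflow as tf

spectrum = tf.constant([[[4., 2.], [2., 1.]],
                        [[8., 1.], [2., 1.]]])  # block shape [2, 2, 2], N = 8
operator = tf.linalg.LinearOperatorCirculant3D(spectrum)

operator.to_dense()  # ==> dense 8 x 8 matrix of dtype operator.dtype
```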

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorComposition.md b/site/en/api_docs/python/tf/linalg/LinearOperatorComposition.md new file mode 100644 index 00000000000..948be620cb4 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorComposition.md @@ -0,0 +1,1681 @@ +description: Composes one or more LinearOperators. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorComposition + + + + + + + + + +Composes one or more `LinearOperators`. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator composes one or more linear operators `[op1,...,opJ]`, +building a new `LinearOperator` with action defined by: + +``` +op_composed(x) := op1(op2(...(opJ(x))...)) +``` + +If `opj` acts like [batch] matrix `Aj`, then `op_composed` acts like the +[batch] matrix formed with the multiplication `A1 A2...AJ`. + +If `opj` has shape `batch_shape_j + [M_j, N_j]`, then we must have +`N_j = M_{j+1}`, in which case the composed operator has shape equal to +`broadcast_batch_shape + [M_1, N_J]`, where `broadcast_batch_shape` is the +mutual broadcast of `batch_shape_j`, `j = 1,...,J`, assuming the intermediate +batch shapes broadcast. Even if the composed shape is well defined, the +composed operator's methods may fail due to lack of broadcasting ability in +the defining operators' methods. + +```python +# Create a 2 x 2 linear operator composed of two 2 x 2 operators. +operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]]) +operator_2 = LinearOperatorFullMatrix([[1., 0.], [0., 1.]]) +operator = LinearOperatorComposition([operator_1, operator_2]) + +operator.to_dense() +==> [[1., 2.] + [3., 4.]] + +operator.shape +==> [2, 2] + +operator.log_abs_determinant() +==> scalar Tensor + +x = ... Shape [2, 4] Tensor +operator.matmul(x) +==> Shape [2, 4] Tensor + +# Create a [2, 3] batch of 4 x 5 linear operators. +matrix_45 = tf.random.normal(shape=[2, 3, 4, 5]) +operator_45 = LinearOperatorFullMatrix(matrix_45) + +# Create a [2, 3] batch of 5 x 6 linear operators. +matrix_56 = tf.random.normal(shape=[2, 3, 5, 6]) +operator_56 = LinearOperatorFullMatrix(matrix_56) + +# Compose to create a [2, 3] batch of 4 x 6 operators. +operator_46 = LinearOperatorComposition([operator_45, operator_56]) + +# Create a shape [2, 3, 6, 2] batch of 6 x 2 matrices. +x = tf.random.normal(shape=[2, 3, 6, 2]) +operator_46.matmul(x) +==> Shape [2, 3, 4, 2] Tensor +``` + +#### Performance + +The performance of `LinearOperatorComposition` on any operation is equal to +the sum of the individual operators' operations. + + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`operators` + +Iterable of `LinearOperator` objects, each with +the same `dtype` and composable shape. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. Default is the individual +operators names joined with `_o_`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If all operators do not have the same `dtype`. +
+`ValueError` + +If `operators` is empty. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on whether this operator is square. +
+`operators` + + +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +
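An editorial sketch (eager TensorFlow 2.x assumed) using two small full matrices; the operand values are arbitrary example data.

```python
import tensorflow as tf

op_1 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
op_2 = tf.linalg.LinearOperatorFullMatrix([[1., 0.], [0., 1.]])
composed = tf.linalg.LinearOperatorComposition([op_1, op_2])

x = tf.ones([2, 2])
composed.add_to_tensor(x)  # ==> composed.to_dense() + x, here [[2., 3.], [4., 5.]]
```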

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +
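A short sketch (editorial, eager TensorFlow 2.x assumed). For a composition the adjoint reverses the factor order, i.e. `(A1 A2)^H = A2^H A1^H`; with real matrices the dense result is simply the transpose. The matrices below are example data.

```python
import tensorflow as tf

op_1 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
op_2 = tf.linalg.LinearOperatorFullMatrix([[0., 1.], [1., 0.]])
composed = tf.linalg.LinearOperatorComposition([op_1, op_2])

adj = composed.adjoint()  # Equivalent to composed.H
adj.to_dense()            # ==> tf.transpose(composed.to_dense()) for real dtypes
```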

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorDiag.md b/site/en/api_docs/python/tf/linalg/LinearOperatorDiag.md new file mode 100644 index 00000000000..23ec73d2bcb --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorDiag.md @@ -0,0 +1,1682 @@ +description: LinearOperator acting like a [batch] square diagonal matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorDiag + + + + + + + + + +`LinearOperator` acting like a [batch] square diagonal matrix. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator acts like a [batch] diagonal matrix `A` with shape +`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x N` matrix. This matrix `A` is not materialized, but for +purposes of broadcasting this shape will be relevant. + +`LinearOperatorDiag` is initialized with a (batch) vector. + +```python +# Create a 2 x 2 diagonal linear operator. +diag = [1., -1.] +operator = LinearOperatorDiag(diag) + +operator.to_dense() +==> [[1., 0.] + [0., -1.]] + +operator.shape +==> [2, 2] + +operator.log_abs_determinant() +==> scalar Tensor + +x = ... Shape [2, 4] Tensor +operator.matmul(x) +==> Shape [2, 4] Tensor + +# Create a [2, 3] batch of 4 x 4 linear operators. +diag = tf.random.normal(shape=[2, 3, 4]) +operator = LinearOperatorDiag(diag) + +# Create a shape [2, 1, 4, 2] vector. Note that this shape is compatible +# since the batch dimensions, [2, 1], are broadcast to +# operator.batch_shape = [2, 3]. +y = tf.random.normal(shape=[2, 1, 4, 2]) +x = operator.solve(y) +==> operator.matmul(x) = y +``` + +#### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [N, N], with b >= 0 +x.shape = [C1,...,Cc] + [N, R], +and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd] +``` + +#### Performance + +Suppose `operator` is a `LinearOperatorDiag` of shape `[N, N]`, +and `x.shape = [N, R]`. Then + +* `operator.matmul(x)` involves `N * R` multiplications. +* `operator.solve(x)` involves `N` divisions and `N * R` multiplications. +* `operator.determinant()` involves a size `N` `reduce_prod`. + +If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and +`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`diag` + +Shape `[B1,...,Bb, N]` `Tensor` with `b >= 0`, `N >= 0`. +The diagonal of the operator. Allowed dtypes: `float16`, `float32`, +`float64`, `complex64`, `complex128`. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. If `diag.dtype` is real, this is auto-set to `True`. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `diag.dtype` is not an allowed type. +
+`ValueError` + +If `diag.dtype` is real, and `is_self_adjoint` is not `True`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`diag` + + +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on whether this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +
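For illustration, a minimal sketch (the values below are arbitrary) of adding a diagonal operator to a dense `Tensor`:

```python
import tensorflow as tf

# Illustrative 2 x 2 diagonal operator.
operator = tf.linalg.LinearOperatorDiag([1., 2.])

x = tf.ones([2, 2])

# Equivalent to operator.to_dense() + x.
operator.add_to_tensor(x)
# ==> [[2., 1.],
#      [1., 3.]]
```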

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +
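A minimal sketch with arbitrary complex values; `adjoint()` and the `H` property return the same operator, which for a diagonal operator conjugates the diagonal:

```python
import tensorflow as tf

# Illustrative complex diagonal operator.
operator = tf.linalg.LinearOperatorDiag([1. + 2.j, 3. - 1.j])

adjoint_operator = operator.adjoint()  # Same result as operator.H

adjoint_operator.to_dense()
# ==> [[1.-2.j, 0.+0.j],
#      [0.+0.j, 3.+1.j]]
```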

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +
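A short sketch, assuming eager execution and arbitrary values; the returned check runs immediately and raises `InvalidArgumentError` only if the operator looks singular:

```python
import tensorflow as tf

# A diagonal operator with no zero entries is non-singular.
operator = tf.linalg.LinearOperatorDiag([1., -2.])

# Raises InvalidArgumentError if the operator is judged singular;
# otherwise it is effectively a no-op in eager mode.
operator.assert_non_singular()
```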

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +
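A hedged sketch with arbitrary values: a real, positive diagonal auto-sets `is_self_adjoint=True`, and with the `is_positive_definite` hint `cholesky()` is permitted; its dense form should be the elementwise square root of the diagonal:

```python
import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([4., 9.], is_positive_definite=True)

chol = operator.cholesky()

chol.to_dense()
# A = L L^T, with L holding the square roots of the diagonal entries:
# ==> [[2., 0.],
#      [0., 3.]]
```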

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +
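For illustration (arbitrary values), the condition number of a diagonal operator is the ratio of the largest to the smallest absolute diagonal entry:

```python
import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([4., 2.])

operator.cond()
# Largest / smallest singular value = 4 / 2
# ==> 2.0
```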

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +
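A minimal sketch with arbitrary values; for a diagonal operator the determinant is the product of the diagonal entries:

```python
import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([2., 3.])

operator.determinant()
# Product of the diagonal entries: 2 * 3
# ==> 6.0
```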

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +
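An illustrative sketch (values arbitrary): a real diagonal operator is self-adjoint, so `eigvals` is supported, and the eigenvalues are simply the diagonal entries:

```python
import tensorflow as tf

# A real diag auto-sets is_self_adjoint=True, so eigvals is supported.
operator = tf.linalg.LinearOperatorDiag([3., -1.])

operator.eigvals()
# The eigenvalues of a diagonal matrix are its diagonal entries
# (here 3. and -1.).
```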

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +
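A hedged sketch with arbitrary values; with the `is_non_singular` hint, `inverse()` returns an operator whose dense form inverts each diagonal entry:

```python
import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([2., 4.], is_non_singular=True)

inverse_operator = operator.inverse()

inverse_operator.to_dense()
# ==> [[0.5 , 0.  ],
#      [0.  , 0.25]]
```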

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +
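A minimal sketch (values arbitrary) showing the batch behaviour: one value is returned per batch member:

```python
import tensorflow as tf

# Batch of two 2 x 2 diagonal operators (batch_shape == [2]).
operator = tf.linalg.LinearOperatorDiag([[1., -2.],
                                         [3., 4.]])

operator.log_abs_determinant()
# log(|1 * -2|) and log(|3 * 4|), one value per batch member
# ==> approximately [0.6931, 2.4849]
```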

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +
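A minimal sketch with arbitrary values; the trace of a diagonal operator is the sum of its diagonal entries:

```python
import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([1., 2., 3.])

operator.trace()
# Sum of the diagonal entries: 1 + 2 + 3
# ==> 6.0
```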

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorFullMatrix.md b/site/en/api_docs/python/tf/linalg/LinearOperatorFullMatrix.md new file mode 100644 index 00000000000..f00d3304a07 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorFullMatrix.md @@ -0,0 +1,1664 @@ +description: LinearOperator that wraps a [batch] matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorFullMatrix + + + + + + + + + +`LinearOperator` that wraps a [batch] matrix. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator wraps a [batch] matrix `A` (which is a `Tensor`) with shape +`[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `M x N` matrix. + +```python +# Create a 2 x 2 linear operator. +matrix = [[1., 2.], [3., 4.]] +operator = LinearOperatorFullMatrix(matrix) + +operator.to_dense() +==> [[1., 2.] + [3., 4.]] + +operator.shape +==> [2, 2] + +operator.log_abs_determinant() +==> scalar Tensor + +x = ... Shape [2, 4] Tensor +operator.matmul(x) +==> Shape [2, 4] Tensor + +# Create a [2, 3] batch of 4 x 4 linear operators. +matrix = tf.random.normal(shape=[2, 3, 4, 4]) +operator = LinearOperatorFullMatrix(matrix) +``` + +#### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [M, N], with b >= 0 +x.shape = [B1,...,Bb] + [N, R], with R >= 0. +``` + +#### Performance + +`LinearOperatorFullMatrix` has exactly the same performance as would be +achieved by using standard `TensorFlow` matrix ops. Intelligent choices are +made based on the following initialization hints. + +* If `dtype` is real, and `is_self_adjoint` and `is_positive_definite`, a + Cholesky factorization is used for the determinant and solve. + +In all cases, suppose `operator` is a `LinearOperatorFullMatrix` of shape +`[M, N]`, and `x.shape = [N, R]`. Then + +* `operator.matmul(x)` is `O(M * N * R)`. +* If `M=N`, `operator.solve(x)` is `O(N^3 * R)`. +* If `M=N`, `operator.determinant()` is `O(N^3)`. + +If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and +`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`matrix` + +Shape `[B1,...,Bb, M, N]` with `b >= 0`, `M, N >= 0`. +Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, +`complex128`. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `matrix.dtype` is not an allowed type. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on whether this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +
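A hedged sketch (the matrix and right-hand side below are arbitrary): for a real symmetric positive definite matrix, supplying the `is_self_adjoint` and `is_positive_definite` hints allows the Cholesky-based path mentioned in the class description:

```python
import tensorflow as tf

# Illustrative symmetric positive definite 2 x 2 matrix.
matrix = [[4., 1.],
          [1., 3.]]
operator = tf.linalg.LinearOperatorFullMatrix(
    matrix, is_self_adjoint=True, is_positive_definite=True)

rhs = tf.constant([[1.],
                   [2.]])

x = operator.solve(rhs)

# Sanity check: applying the operator should recover rhs (up to numerics).
operator.matmul(x)
# ==> approximately [[1.], [2.]]
```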

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorHouseholder.md b/site/en/api_docs/python/tf/linalg/LinearOperatorHouseholder.md new file mode 100644 index 00000000000..2a6e37fb302 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorHouseholder.md @@ -0,0 +1,1658 @@ +description: LinearOperator acting like a [batch] of Householder transformations. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorHouseholder + + + + + + + + + +`LinearOperator` acting like a [batch] of Householder transformations. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator acts like a [batch] of Householder reflections with shape +`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x N` matrix. This matrix `A` is not materialized, but for +purposes of broadcasting this shape will be relevant. + +`LinearOperatorHouseholder` is initialized with a (batch) vector. + +A Householder reflection is defined via a vector `v`; it reflects points +in `R^n` about the hyperplane orthogonal to `v` and through the origin. + +```python +# Create a 2 x 2 Householder transform. +vec = [1 / np.sqrt(2), 1. / np.sqrt(2)] +operator = LinearOperatorHouseholder(vec) + +operator.to_dense() +==> [[0., -1.] + [-1., -0.]] + +operator.shape +==> [2, 2] + +operator.log_abs_determinant() +==> scalar Tensor + +x = ... Shape [2, 4] Tensor +operator.matmul(x) +==> Shape [2, 4] Tensor +``` + +#### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [N, N], with b >= 0 +x.shape = [C1,...,Cc] + [N, R], +and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd] +``` + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`reflection_axis` + +Shape `[B1,...,Bb, N]` `Tensor` with `b >= 0`, `N >= 0`. +The vector defining the hyperplane to reflect about. +Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, +`complex128`. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. This is automatically set to `True`. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +This is automatically set to `False`. +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +This is automatically set to `True`. +
+`name` + +A name for this `LinearOperator`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `is_self_adjoint` is not `True`, `is_positive_definite` is +not `False`, or `is_square` is not `True`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on whether this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`reflection_axis` + + +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +
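+
+For example, a self-adjoint sketch (`tf.linalg.LinearOperatorFullMatrix` used
+only for illustration; the `is_self_adjoint=True` hint satisfies the note
+above):
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix(
+    [[2., 1.], [1., 2.]], is_self_adjoint=True)
+print(operator.eigvals())   # ~[1., 3.]
+```
+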

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +
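+
+A short sketch (illustrative `tf.linalg.LinearOperatorFullMatrix`; the
+`is_non_singular=True` hint avoids the `ValueError` above). The result is
+itself a `LinearOperator`, typically applied via solves rather than by
+materializing `A^-1`:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix(
+    [[2., 0.], [0., 4.]], is_non_singular=True)
+inv = operator.inverse()        # lazily represents A^-1
+print(inv.matmul(tf.eye(2)))    # [[0.5, 0.], [0., 0.25]]
+```
+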

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +
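+
+A brief sketch of the `adjoint` flag (illustrative
+`tf.linalg.LinearOperatorFullMatrix`, real dtype, so the adjoint is just the
+transpose):
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
+x = tf.eye(2)
+print(operator.matmul(x))                 # A,   i.e. [[1., 2.], [3., 4.]]
+print(operator.matmul(x, adjoint=True))   # A^T, i.e. [[1., 3.], [2., 4.]]
+```
+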

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matric A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +
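+
+For example (illustrative `tf.linalg.LinearOperatorFullMatrix`; note the
+argument is a plain vector, not an `[..., N, 1]` matrix):
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
+print(operator.matvec(tf.constant([1., 1.])))   # [3., 7.]
+```
+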

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +
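+
+A single sketch covering the runtime shape accessors on this page
+(`shape_tensor`, `batch_shape_tensor`, `range_dimension_tensor`,
+`domain_dimension_tensor`, `tensor_rank_tensor`), with
+`tf.linalg.LinearOperatorFullMatrix` used only for illustration:
+
+```python
+import tensorflow as tf
+
+matrix = tf.random.normal([2, 3, 4, 5])   # a [2, 3] batch of 4 x 5 matrices
+operator = tf.linalg.LinearOperatorFullMatrix(matrix)
+print(operator.shape_tensor())              # [2 3 4 5]
+print(operator.batch_shape_tensor())        # [2 3]
+print(operator.range_dimension_tensor())    # 4
+print(operator.domain_dimension_tensor())   # 5
+print(operator.tensor_rank_tensor())        # 4
+```
+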

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +
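+
+A minimal sketch (illustrative `tf.linalg.LinearOperatorFullMatrix`) checking
+the round trip `matmul(solve(rhs)) == rhs`:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix([[2., 0.], [0., 4.]])
+rhs = tf.constant([[2.], [8.]])
+x = operator.solve(rhs)
+print(x)                    # [[1.], [2.]]
+print(operator.matmul(x))   # recovers rhs
+```
+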

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +
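+
+For example (illustrative `tf.linalg.LinearOperatorFullMatrix`; `rhs` is a
+plain vector here, unlike `solve`):
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix([[2., 0.], [0., 4.]])
+print(operator.solvevec(tf.constant([2., 8.])))   # [1., 2.]
+```
+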

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +
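+
+A quick sketch of the identity stated above (illustrative
+`tf.linalg.LinearOperatorFullMatrix`):
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
+print(operator.trace())                      # 5.0
+print(tf.reduce_sum(operator.diag_part()))   # also 5.0
+```
+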

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorIdentity.md b/site/en/api_docs/python/tf/linalg/LinearOperatorIdentity.md new file mode 100644 index 00000000000..f732e664dbf --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorIdentity.md @@ -0,0 +1,1739 @@ +description: LinearOperator acting like a [batch] square identity matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorIdentity + + + + + + + + + +`LinearOperator` acting like a [batch] square identity matrix. + + + + + + + + + +This operator acts like a [batch] identity matrix `A` with shape +`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x N` matrix. This matrix `A` is not materialized, but for +purposes of broadcasting this shape will be relevant. + +`LinearOperatorIdentity` is initialized with `num_rows`, and optionally +`batch_shape`, and `dtype` arguments. If `batch_shape` is `None`, this +operator efficiently passes through all arguments. If `batch_shape` is +provided, broadcasting may occur, which will require making copies. + +```python +# Create a 2 x 2 identity matrix. +operator = LinearOperatorIdentity(num_rows=2, dtype=tf.float32) + +operator.to_dense() +==> [[1., 0.] + [0., 1.]] + +operator.shape +==> [2, 2] + +operator.log_abs_determinant() +==> 0. + +x = ... Shape [2, 4] Tensor +operator.matmul(x) +==> Shape [2, 4] Tensor, same as x. + +y = tf.random.normal(shape=[3, 2, 4]) +# Note that y.shape is compatible with operator.shape because operator.shape +# is broadcast to [3, 2, 2]. +# This broadcast does NOT require copying data, since we can infer that y +# will be passed through without changing shape. We are always able to infer +# this if the operator has no batch_shape. +x = operator.solve(y) +==> Shape [3, 2, 4] Tensor, same as y. + +# Create a 2-batch of 2x2 identity matrices +operator = LinearOperatorIdentity(num_rows=2, batch_shape=[2]) +operator.to_dense() +==> [[[1., 0.] + [0., 1.]], + [[1., 0.] + [0., 1.]]] + +# Here, even though the operator has a batch shape, the input is the same as +# the output, so x can be passed through without a copy. The operator is able +# to detect that no broadcast is necessary because both x and the operator +# have statically defined shape. +x = ... Shape [2, 2, 3] +operator.matmul(x) +==> Shape [2, 2, 3] Tensor, same as x + +# Here the operator and x have different batch_shape, and are broadcast. +# This requires a copy, since the output is different size than the input. +x = ... Shape [1, 2, 3] +operator.matmul(x) +==> Shape [2, 2, 3] Tensor, equal to [x, x] +``` + +### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [N, N], with b >= 0 +x.shape = [C1,...,Cc] + [N, R], +and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd] +``` + +### Performance + +If `batch_shape` initialization arg is `None`: + +* `operator.matmul(x)` is `O(1)` +* `operator.solve(x)` is `O(1)` +* `operator.determinant()` is `O(1)` + +If `batch_shape` initialization arg is provided, and static checks cannot +rule out the need to broadcast: + +* `operator.matmul(x)` is `O(D1*...*Dd*N*R)` +* `operator.solve(x)` is `O(D1*...*Dd*N*R)` +* `operator.determinant()` is `O(B1*...*Bb)` + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. 
+* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_rows` + +Scalar non-negative integer `Tensor`. Number of rows in the +corresponding identity matrix. +
+`batch_shape` + +Optional `1-D` integer `Tensor`. The shape of the leading +dimensions. If `None`, this operator has no leading dimensions. +
+`dtype` + +Data type of the matrix that this operator represents. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`assert_proper_shapes` + +Python `bool`. If `False`, only perform static +checks that initialization and method arguments have proper shape. +If `True`, and static checks are inconclusive, add asserts to the graph. +
+`name` + +A name for this `LinearOperator` +
+ + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If `num_rows` is determined statically to be non-scalar, or +negative. +
+`ValueError` + +If `batch_shape` is determined statically to not be 1-D, or +negative. +
+`ValueError` + +If any of the following is not `True`: +`{is_self_adjoint, is_non_singular, is_positive_definite}`. +
+`TypeError` + +If `num_rows` or `batch_shape` is ref-type (e.g. Variable). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on if this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `mat`. Equiv to `I + mat`. + + + + + + + + + + + + + + +
Args
+`mat` + +`Tensor` with same `dtype` and shape broadcastable to `self`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +
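+
+For example (eager execution assumed), adding the implicit identity to a dense
+matrix:
+
+```python
+import tensorflow as tf
+
+identity = tf.linalg.LinearOperatorIdentity(num_rows=2, dtype=tf.float32)
+mat = tf.constant([[1., 2.], [3., 4.]])
+print(identity.add_to_tensor(mat))   # [[2., 2.], [3., 5.]]
+```
+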

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matric A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +
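+
+A small sketch of the `O(1)` pass-through described in the class docstring
+(no `batch_shape` was given, so `rhs` is expected to come back unchanged):
+
+```python
+import tensorflow as tf
+
+identity = tf.linalg.LinearOperatorIdentity(num_rows=2)
+rhs = tf.random.normal([3, 2, 4])
+x = identity.solve(rhs)
+print(float(tf.reduce_max(tf.abs(x - rhs))))   # expected 0.0
+```
+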

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorInversion.md b/site/en/api_docs/python/tf/linalg/LinearOperatorInversion.md new file mode 100644 index 00000000000..8039b17d4fb --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorInversion.md @@ -0,0 +1,1643 @@ +description: LinearOperator representing the inverse of another operator. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorInversion + + + + + + + + + +`LinearOperator` representing the inverse of another operator. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator represents the inverse of another operator. + +```python +# Create a 2 x 2 linear operator. +operator = LinearOperatorFullMatrix([[1., 0.], [0., 2.]]) +operator_inv = LinearOperatorInversion(operator) + +operator_inv.to_dense() +==> [[1., 0.] + [0., 0.5]] + +operator_inv.shape +==> [2, 2] + +operator_inv.log_abs_determinant() +==> - log(2) + +x = ... Shape [2, 4] Tensor +operator_inv.matmul(x) +==> Shape [2, 4] Tensor, equal to operator.solve(x) +``` + +#### Performance + +The performance of `LinearOperatorInversion` depends on the underlying +operators performance: `solve` and `matmul` are swapped, and determinant is +inverted. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`operator` + +`LinearOperator` object. If `operator.is_non_singular == False`, +an exception is raised. We do allow `operator.is_non_singular == None`, +in which case this operator will have `is_non_singular == None`. +Similarly for `is_self_adjoint` and `is_positive_definite`. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. Default is `operator.name + +"_inv"`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `operator.is_non_singular` is False. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on if this operator is square. +
+`operator` + +The operator before inversion. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +
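+
+A brief sketch of the swap described in the Performance section: `matmul` on
+the inversion wrapper matches `solve` on the wrapped operator:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix(
+    [[1., 0.], [0., 2.]], is_non_singular=True)
+inv_op = tf.linalg.LinearOperatorInversion(operator)
+x = tf.constant([[1., 2.], [3., 4.]])
+print(inv_op.matmul(x))      # same values as the line below
+print(operator.solve(x))
+```
+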

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matric A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorKronecker.md b/site/en/api_docs/python/tf/linalg/LinearOperatorKronecker.md new file mode 100644 index 00000000000..6112e13a617 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorKronecker.md @@ -0,0 +1,1675 @@ +description: Kronecker product between two LinearOperators. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorKronecker + + + + + + + + + +Kronecker product between two `LinearOperators`. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator composes one or more linear operators `[op1,...,opJ]`, +building a new `LinearOperator` representing the Kronecker product: +`op1 x op2 x .. opJ` (we omit parentheses as the Kronecker product is +associative). + +If `opj` has shape `batch_shape_j + [M_j, N_j]`, then the composed operator +will have shape equal to `broadcast_batch_shape + [prod M_j, prod N_j]`, +where the product is over all operators. + +```python +# Create a 4 x 4 linear operator composed of two 2 x 2 operators. +operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]]) +operator_2 = LinearOperatorFullMatrix([[1., 0.], [2., 1.]]) +operator = LinearOperatorKronecker([operator_1, operator_2]) + +operator.to_dense() +==> [[1., 0., 2., 0.], + [2., 1., 4., 2.], + [3., 0., 4., 0.], + [6., 3., 8., 4.]] + +operator.shape +==> [4, 4] + +operator.log_abs_determinant() +==> scalar Tensor + +x = ... Shape [4, 2] Tensor +operator.matmul(x) +==> Shape [4, 2] Tensor + +# Create a [2, 3] batch of 4 x 5 linear operators. +matrix_45 = tf.random.normal(shape=[2, 3, 4, 5]) +operator_45 = LinearOperatorFullMatrix(matrix) + +# Create a [2, 3] batch of 5 x 6 linear operators. +matrix_56 = tf.random.normal(shape=[2, 3, 5, 6]) +operator_56 = LinearOperatorFullMatrix(matrix_56) + +# Compose to create a [2, 3] batch of 20 x 30 operators. +operator_large = LinearOperatorKronecker([operator_45, operator_56]) + +# Create a shape [2, 3, 20, 2] vector. +x = tf.random.normal(shape=[2, 3, 6, 2]) +operator_large.matmul(x) +==> Shape [2, 3, 30, 2] Tensor +``` + +#### Performance + +The performance of `LinearOperatorKronecker` on any operation is equal to +the sum of the individual operators' operations. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`operators` + +Iterable of `LinearOperator` objects, each with +the same `dtype` and composable shape, representing the Kronecker +factors. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix\ +#Extension_for_non_symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. Default is the individual +operators names joined with `_x_`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If all operators do not have the same `dtype`. +
+`ValueError` + +If `operators` is empty. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on if this operator is square. +
+`operators` + + +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +
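+
+Illustrative sketch; the operator is hinted `is_non_singular=True`, and the result is a lazy `LinearOperator` rather than a dense matrix:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorDiag([2., 4.], is_non_singular=True)
+operator.inverse().to_dense()
+# ==> [[0.5 , 0.  ],
+#      [0.  , 0.25]]
+```
+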

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +
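+
+A short illustrative sketch, using `tf.linalg.LinearOperatorFullMatrix` as a concrete subclass; `operator @ x` (see `__matmul__` below) is equivalent to `operator.matmul(x)`:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
+x = tf.constant([[1., 0.], [0., 1.]])
+
+operator.matmul(x)                 # ==> [[1., 2.], [3., 4.]]
+operator @ x                       # same result via __matmul__
+operator.matmul(x, adjoint=True)   # ==> [[1., 3.], [2., 4.]]  (A^H x)
+```
+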

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matric A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +
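+
+Illustrative sketch:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
+operator.matvec([1., 1.])
+# ==> [3., 7.]
+operator.matvec([1., 1.], adjoint=True)
+# ==> [4., 6.]
+```
+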

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +
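+
+The runtime shape helpers fit together as in this illustrative sketch (batch shape `[2, 3]`, `N = 4`):
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorDiag(tf.random.normal(shape=[2, 3, 4]))
+
+operator.shape_tensor()             # ==> [2, 3, 4, 4]
+operator.batch_shape_tensor()       # ==> [2, 3]
+operator.domain_dimension_tensor()  # ==> 4
+operator.tensor_rank_tensor()       # ==> 4
+```
+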

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +
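+
+A small illustrative sketch solving a single 2 x 2 system via `tf.linalg.LinearOperatorFullMatrix`:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorFullMatrix(
+    [[2., 1.], [0., 1.]], is_non_singular=True)
+rhs = tf.constant([[3.], [1.]])
+x = operator.solve(rhs)
+# x ==> [[1.],
+#        [1.]]    since A @ x == rhs
+```
+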

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +
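+
+Illustrative sketch:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorDiag([2., 4.], is_non_singular=True)
+operator.solvevec([2., 8.])
+# ==> [1., 2.]
+```
+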

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +
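+
+Illustrative sketch:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorDiag([1., 2., 3.])
+operator.trace()
+# ==> 6.
+```
+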

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorLowRankUpdate.md b/site/en/api_docs/python/tf/linalg/LinearOperatorLowRankUpdate.md new file mode 100644 index 00000000000..9a8d00da582 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorLowRankUpdate.md @@ -0,0 +1,1760 @@ +description: Perturb a LinearOperator with a rank K update. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorLowRankUpdate + + + + + + + + + +Perturb a `LinearOperator` with a rank `K` update. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator acts like a [batch] matrix `A` with shape +`[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `M x N` matrix. + +`LinearOperatorLowRankUpdate` represents `A = L + U D V^H`, where + +``` +L, is a LinearOperator representing [batch] M x N matrices +U, is a [batch] M x K matrix. Typically K << M. +D, is a [batch] K x K matrix. +V, is a [batch] N x K matrix. Typically K << N. +V^H is the Hermitian transpose (adjoint) of V. +``` + +If `M = N`, determinants and solves are done using the matrix determinant +lemma and Woodbury identities, and thus require L and D to be non-singular. + +Solves and determinants will be attempted unless the "is_non_singular" +property of L and D is False. + +In the event that L and D are positive-definite, and U = V, solves and +determinants can be done using a Cholesky factorization. + +```python +# Create a 3 x 3 diagonal linear operator. +diag_operator = LinearOperatorDiag( + diag_update=[1., 2., 3.], is_non_singular=True, is_self_adjoint=True, + is_positive_definite=True) + +# Perturb with a rank 2 perturbation +operator = LinearOperatorLowRankUpdate( + operator=diag_operator, + u=[[1., 2.], [-1., 3.], [0., 0.]], + diag_update=[11., 12.], + v=[[1., 2.], [-1., 3.], [10., 10.]]) + +operator.shape +==> [3, 3] + +operator.log_abs_determinant() +==> scalar Tensor + +x = ... Shape [3, 4] Tensor +operator.matmul(x) +==> Shape [3, 4] Tensor +``` + +### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [M, N], with b >= 0 +x.shape = [B1,...,Bb] + [N, R], with R >= 0. +``` + +### Performance + +Suppose `operator` is a `LinearOperatorLowRankUpdate` of shape `[M, N]`, +made from a rank `K` update of `base_operator` which performs `.matmul(x)` on +`x` having `x.shape = [N, R]` with `O(L_matmul*N*R)` complexity (and similarly +for `solve`, `determinant`. Then, if `x.shape = [N, R]`, + +* `operator.matmul(x)` is `O(L_matmul*N*R + K*N*R)` + +and if `M = N`, + +* `operator.solve(x)` is `O(L_matmul*N*R + N*K*R + K^2*R + K^3)` +* `operator.determinant()` is `O(L_determinant + L_solve*N*K + K^2*N + K^3)` + +If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and +`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular`, `self_adjoint`, `positive_definite`, +`diag_update_positive` and `square`. These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`base_operator` + +Shape `[B1,...,Bb, M, N]`. +
+`u` + +Shape `[B1,...,Bb, M, K]` `Tensor` of same `dtype` as `base_operator`. +This is `U` above. +
+`diag_update` + +Optional shape `[B1,...,Bb, K]` `Tensor` with same `dtype` +as `base_operator`. This is the diagonal of `D` above. +Defaults to `D` being the identity operator. +
+`v` + +Optional `Tensor` of same `dtype` as `u` and shape `[B1,...,Bb, N, K]` +Defaults to `v = u`, in which case the perturbation is symmetric. +If `M != N`, then `v` must be set since the perturbation is not square. +
+`is_diag_update_positive` + +Python `bool`. +If `True`, expect `diag_update > 0`. +
+`is_non_singular` + +Expect that this operator is non-singular. +Default is `None`, unless `is_positive_definite` is auto-set to be +`True` (see below). +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. Default is `None`, unless `base_operator` is self-adjoint +and `v = None` (meaning `u=v`), in which case this defaults to `True`. +
+`is_positive_definite` + +Expect that this operator is positive definite. +Default is `None`, unless `base_operator` is positive-definite +`v = None` (meaning `u=v`), and `is_diag_update_positive`, in which case +this defaults to `True`. +Note that we say an operator is positive definite when the quadratic +form `x^H A x` has positive real part for all nonzero `x`. +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `is_X` flags are set in an inconsistent way. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`base_operator` + +If this operator is `A = L + U D V^H`, this is the `L`. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`diag_operator` + +If this operator is `A = L + U D V^H`, this is `D`. +
+`diag_update` + +If this operator is `A = L + U D V^H`, this is the diagonal of `D`. +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_diag_update_positive` + +If this operator is `A = L + U D V^H`, this hints `D > 0` elementwise. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on if this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+`u` + +If this operator is `A = L + U D V^H`, this is the `U`. +
+`v` + +If this operator is `A = L + U D V^H`, this is the `V`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matric A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +
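+
+For a `LinearOperatorLowRankUpdate`, the dense form is the base operator plus the explicit low-rank term, as in this illustrative sketch (the values are chosen arbitrarily and `v` defaults to `u`):
+
+```python
+import tensorflow as tf
+
+base = tf.linalg.LinearOperatorDiag(
+    [1., 2., 3.], is_non_singular=True, is_self_adjoint=True,
+    is_positive_definite=True)
+u = tf.constant([[1., 0.], [0., 1.], [0., 0.]])
+d = tf.constant([10., 10.])
+operator = tf.linalg.LinearOperatorLowRankUpdate(base, u=u, diag_update=d)
+
+dense = operator.to_dense()
+explicit = base.to_dense() + tf.matmul(u * d, u, adjoint_b=True)  # L + U D U^H
+# tf.reduce_max(tf.abs(dense - explicit)) is ~0.
+```
+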

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorLowerTriangular.md b/site/en/api_docs/python/tf/linalg/LinearOperatorLowerTriangular.md new file mode 100644 index 00000000000..f130aa4d62e --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorLowerTriangular.md @@ -0,0 +1,1666 @@ +description: LinearOperator acting like a [batch] square lower triangular matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorLowerTriangular + + + + + + + + + +`LinearOperator` acting like a [batch] square lower triangular matrix. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator acts like a [batch] lower triangular matrix `A` with shape +`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x N` matrix. + +`LinearOperatorLowerTriangular` is initialized with a `Tensor` having +dimensions `[B1,...,Bb, N, N]`. The upper triangle of the last two +dimensions is ignored. + +```python +# Create a 2 x 2 lower-triangular linear operator. +tril = [[1., 2.], [3., 4.]] +operator = LinearOperatorLowerTriangular(tril) + +# The upper triangle is ignored. +operator.to_dense() +==> [[1., 0.] + [3., 4.]] + +operator.shape +==> [2, 2] + +operator.log_abs_determinant() +==> scalar Tensor + +x = ... Shape [2, 4] Tensor +operator.matmul(x) +==> Shape [2, 4] Tensor + +# Create a [2, 3] batch of 4 x 4 linear operators. +tril = tf.random.normal(shape=[2, 3, 4, 4]) +operator = LinearOperatorLowerTriangular(tril) +``` + +#### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [N, N], with b >= 0 +x.shape = [B1,...,Bb] + [N, R], with R >= 0. +``` + +#### Performance + +Suppose `operator` is a `LinearOperatorLowerTriangular` of shape `[N, N]`, +and `x.shape = [N, R]`. Then + +* `operator.matmul(x)` involves `N^2 * R` multiplications. +* `operator.solve(x)` involves `N * R` size `N` back-substitutions. +* `operator.determinant()` involves a size `N` `reduce_prod`. + +If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and +`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tril` + +Shape `[B1,...,Bb, N, N]` with `b >= 0`, `N >= 0`. +The lower triangular part of `tril` defines this operator. The strictly +upper triangle is ignored. +
+`is_non_singular` + +Expect that this operator is non-singular. +This operator is non-singular if and only if its diagonal elements are +all non-zero. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. This operator is self-adjoint only if it is diagonal with +real-valued diagonal entries. In this case it is advised to use +`LinearOperatorDiag`. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `is_square` is `False`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on if this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +
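+
+For a triangular operator the (log) determinant comes from the diagonal alone, as in this illustrative sketch:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorLowerTriangular([[2., 0.], [5., 3.]])
+operator.determinant()
+# ==> 6.
+operator.log_abs_determinant()
+# ==> log(2.) + log(3.) ≈ 1.792
+```
+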

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matric A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix meaning for every set of leading +dimensions, the last two dimensions defines a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +
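+
+For this operator, `solve` amounts to back-substitution; an illustrative sketch:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorLowerTriangular(
+    [[2., 0.], [1., 1.]], is_non_singular=True)
+x = operator.solve(tf.constant([[2.], [3.]]))
+# x ==> [[1.],
+#        [2.]]    since 2*1 = 2 and 1*1 + 1*2 = 3
+```
+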

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorPermutation.md b/site/en/api_docs/python/tf/linalg/LinearOperatorPermutation.md new file mode 100644 index 00000000000..edd464bb678 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorPermutation.md @@ -0,0 +1,1676 @@ +description: LinearOperator acting like a [batch] of permutation matrices. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorPermutation + + + + + + + + + +`LinearOperator` acting like a [batch] of permutation matrices. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator acts like a [batch] of permutations with shape +`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x N` matrix. This matrix `A` is not materialized, but for +purposes of broadcasting this shape will be relevant. + +`LinearOperatorPermutation` is initialized with a (batch) vector. + +A permutation, is defined by an integer vector `v` whose values are unique +and are in the range `[0, ... n]`. Applying the permutation on an input +matrix has the folllowing meaning: the value of `v` at index `i` +says to move the `v[i]`-th row of the input matrix to the `i`-th row. +Because all values are unique, this will result in a permutation of the +rows the input matrix. Note, that the permutation vector `v` has the same +semantics as tf.transpose. + +```python +# Create a 3 x 3 permutation matrix that swaps the last two columns. +vec = [0, 2, 1] +operator = LinearOperatorPermutation(vec) + +operator.to_dense() +==> [[1., 0., 0.] + [0., 0., 1.] + [0., 1., 0.]] + +operator.shape +==> [3, 3] + +# This will be zero. +operator.log_abs_determinant() +==> scalar Tensor + +x = ... Shape [3, 4] Tensor +operator.matmul(x) +==> Shape [3, 4] Tensor + +#### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [N, N], with b >= 0 +x.shape = [C1,...,Cc] + [N, R], +and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd] +``` + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`perm` + +Shape `[B1,...,Bb, N]` Integer `Tensor` with `b >= 0` +`N >= 0`. An integer vector that represents the permutation to apply. +Note that this argument is same as tf.transpose. However, this +permutation is applied on the rows, while the permutation in +tf.transpose is applied on the dimensions of the `Tensor`. `perm` +is required to have unique entries from `{0, 1, ... N-1}`. +
+`dtype` + +The `dtype` of arguments to this operator. Default: `float32`. +Allowed dtypes: `float16`, `float32`, `float64`, `complex64`, +`complex128`. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. This is autoset to true +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +This is autoset to false. +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +This is autoset to true. +
+`name` + +A name for this `LinearOperator`. +
+ + + + + + + + + + + + +
+`ValueError` + +`is_self_adjoint` is not `True`, `is_positive_definite` is +not `False` or `is_square` is not `True`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on if this operator is square. +
+`perm` + + +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +
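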

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +
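+
+A small sketch contrasting this runtime value with the static `shape` property, assuming `tf.linalg.LinearOperatorFullMatrix` as an illustrative stand-in:
+
+```python
+import tensorflow as tf
+
+# Illustrative stand-in: a batch of two 3 x 3 operators.
+operator = tf.linalg.LinearOperatorFullMatrix(tf.zeros([2, 3, 3]))
+
+operator.shape           # Static:  TensorShape([2, 3, 3])
+operator.shape_tensor()  # Runtime: `int32` `Tensor` holding [2, 3, 3]
+```
+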

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix, meaning for every set of leading +dimensions, the last two dimensions define a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +
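+
+For example, a small sketch using `tf.linalg.LinearOperatorDiag` (the same illustrative operator as in `diag_part` above):
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorDiag([1., 2., 3.])
+
+operator.trace()                      # ==> 6.
+tf.reduce_sum(operator.diag_part())   # Same value, summed from the diagonal.
+```
+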

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorScaledIdentity.md b/site/en/api_docs/python/tf/linalg/LinearOperatorScaledIdentity.md new file mode 100644 index 00000000000..a28670d307f --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorScaledIdentity.md @@ -0,0 +1,1695 @@ +description: LinearOperator acting like a scaled [batch] identity matrix A = c I. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorScaledIdentity + + + + + + + + + +`LinearOperator` acting like a scaled [batch] identity matrix `A = c I`. + + + + + + + + + +This operator acts like a scaled [batch] identity matrix `A` with shape +`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +a scaled version of the `N x N` identity matrix. + +`LinearOperatorIdentity` is initialized with `num_rows`, and a `multiplier` +(a `Tensor`) of shape `[B1,...,Bb]`. `N` is set to `num_rows`, and the +`multiplier` determines the scale for each batch member. + +```python +# Create a 2 x 2 scaled identity matrix. +operator = LinearOperatorIdentity(num_rows=2, multiplier=3.) + +operator.to_dense() +==> [[3., 0.] + [0., 3.]] + +operator.shape +==> [2, 2] + +operator.log_abs_determinant() +==> 2 * Log[3] + +x = ... Shape [2, 4] Tensor +operator.matmul(x) +==> 3 * x + +y = tf.random.normal(shape=[3, 2, 4]) +# Note that y.shape is compatible with operator.shape because operator.shape +# is broadcast to [3, 2, 2]. +x = operator.solve(y) +==> 3 * x + +# Create a 2-batch of 2x2 identity matrices +operator = LinearOperatorIdentity(num_rows=2, multiplier=5.) +operator.to_dense() +==> [[[5., 0.] + [0., 5.]], + [[5., 0.] + [0., 5.]]] + +x = ... Shape [2, 2, 3] +operator.matmul(x) +==> 5 * x + +# Here the operator and x have different batch_shape, and are broadcast. +x = ... Shape [1, 2, 3] +operator.matmul(x) +==> 5 * x +``` + +### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [N, N], with b >= 0 +x.shape = [C1,...,Cc] + [N, R], +and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd] +``` + +### Performance + +* `operator.matmul(x)` is `O(D1*...*Dd*N*R)` +* `operator.solve(x)` is `O(D1*...*Dd*N*R)` +* `operator.determinant()` is `O(D1*...*Dd)` + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_rows` + +Scalar non-negative integer `Tensor`. Number of rows in the +corresponding identity matrix. +
+`multiplier` + +`Tensor` of shape `[B1,...,Bb]`, or `[]` (a scalar). +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`assert_proper_shapes` + +Python `bool`. If `False`, only perform static +checks that initialization and method arguments have proper shape. +If `True`, and static checks are inconclusive, add asserts to the graph. +
+`name` + +A name for this `LinearOperator` +
+ + + + + + + + + + + + +
+`ValueError` + +If `num_rows` is determined statically to be non-scalar, or +negative. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on if this operator is square. +
+`multiplier` + +The [batch] scalar `Tensor`, `c` in `cI`. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `mat`. Equivalent to `c I + mat`, where `c` is the `multiplier`. + + + + + + + + + + + + + +
Args
+`mat` + +`Tensor` with same `dtype` and shape broadcastable to `self`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +
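+
+For example, with `multiplier = 3.` the result is `3 I + mat`:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorScaledIdentity(num_rows=2, multiplier=3.)
+
+operator.add_to_tensor(tf.ones([2, 2]))
+# ==> [[4., 1.],
+#      [1., 4.]]
+```
+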

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +
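+
+A minimal sketch; a (nonzero) scaled identity is perfectly conditioned, so the condition number is 1:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorScaledIdentity(num_rows=3, multiplier=2.)
+
+operator.cond()
+# ==> 1.   (largest / smallest singular value = 2. / 2.)
+```
+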

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +
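+
+For example, the determinant of `c I` with `N` rows is `c**N`:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorScaledIdentity(num_rows=3, multiplier=2.)
+
+operator.determinant()
+# ==> 8.   (2. ** 3)
+```
+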

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +
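+
+A minimal sketch; the inverse of `c I` is `(1 / c) I`. Here `is_non_singular=True` is passed explicitly so the non-singularity hint is satisfied:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorScaledIdentity(
+    num_rows=2, multiplier=4., is_non_singular=True)
+
+operator.inverse().to_dense()   # Represents A^-1 = (1 / 4) I.
+# ==> [[0.25, 0.  ],
+#      [0.  , 0.25]]
+```
+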

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix, meaning for every set of leading +dimensions, the last two dimensions define a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorToeplitz.md b/site/en/api_docs/python/tf/linalg/LinearOperatorToeplitz.md new file mode 100644 index 00000000000..02b342e5439 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorToeplitz.md @@ -0,0 +1,1670 @@ +description: LinearOperator acting like a [batch] of toeplitz matrices. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorToeplitz + + + + + + + + + +`LinearOperator` acting like a [batch] of toeplitz matrices. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator acts like a [batch] Toeplitz matrix `A` with shape +`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x N` matrix. This matrix `A` is not materialized, but for +purposes of broadcasting this shape will be relevant. + +#### Description in terms of toeplitz matrices + +Toeplitz means that `A` has constant diagonals. Hence, `A` can be generated +with two vectors. One represents the first column of the matrix, and the +other represents the first row. + +Below is a 4 x 4 example: + +``` +A = |a b c d| + |e a b c| + |f e a b| + |g f e a| +``` + +#### Example of a Toeplitz operator. + +```python +# Create a 3 x 3 Toeplitz operator. +col = [1., 2., 3.] +row = [1., 4., -9.] +operator = LinearOperatorToeplitz(col, row) + +operator.to_dense() +==> [[1., 4., -9.], + [2., 1., 4.], + [3., 2., 1.]] + +operator.shape +==> [3, 3] + +operator.log_abs_determinant() +==> scalar Tensor + +x = ... Shape [3, 4] Tensor +operator.matmul(x) +==> Shape [3, 4] Tensor +``` + +#### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [N, N], with b >= 0 +x.shape = [C1,...,Cc] + [N, R], +and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd] +``` + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`col` + +Shape `[B1,...,Bb, N]` `Tensor` with `b >= 0` `N >= 0`. +The first column of the operator. Allowed dtypes: `float16`, `float32`, +`float64`, `complex64`, `complex128`. Note that the first entry of +`col` is assumed to be the same as the first entry of `row`. +
+`row` + +Shape `[B1,...,Bb, N]` `Tensor` with `b >= 0` `N >= 0`. +The first row of the operator. Allowed dtypes: `float16`, `float32`, +`float64`, `complex64`, `complex128`. Note that the first entry of +`row` is assumed to be the same as the first entry of `col`. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. If `diag.dtype` is real, this is auto-set to `True`. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`col` + + +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on if this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`row` + + +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +
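+
+A small sketch using the same 3 x 3 Toeplitz operator as the class example; for a real operator the adjoint is the transpose, so `col` and `row` trade roles:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorToeplitz(col=[1., 2., 3.], row=[1., 4., -9.])
+
+operator.adjoint().to_dense()
+# ==> [[ 1.,  2.,  3.],
+#      [ 4.,  1.,  2.],
+#      [-9.,  4.,  1.]]
+```
+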

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +
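+
+For a Toeplitz operator the main diagonal is constant, so `diag_part` is just the first entry of `col` (equivalently of `row`) repeated `N` times:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorToeplitz(col=[1., 2., 3.], row=[1., 4., -9.])
+
+operator.diag_part()
+# ==> [1., 1., 1.]
+```
+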

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix, meaning for every set of leading +dimensions, the last two dimensions define a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +
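+
+A minimal sketch of the `'compact'` `diagonals_format`, building the same tridiagonal matrix as the class example above (the last superdiagonal entry and the first subdiagonal entry are padding, written here as `0.`):
+
+```python
+import tensorflow as tf
+
+diagonals = tf.constant([[3., 4., 0.],    # superdiagonal, last entry is padding
+                         [1., -1., 2.],   # diagonal
+                         [0., 7., 8.]])   # subdiagonal, first entry is padding
+operator = tf.linalg.LinearOperatorTridiag(diagonals, diagonals_format='compact')
+
+operator.to_dense()
+# ==> [[ 1.,  3.,  0.],
+#      [ 7., -1.,  4.],
+#      [ 0.,  8.,  2.]]
+```
+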

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorTridiag.md b/site/en/api_docs/python/tf/linalg/LinearOperatorTridiag.md new file mode 100644 index 00000000000..611eb3aba4a --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorTridiag.md @@ -0,0 +1,1729 @@ +description: LinearOperator acting like a [batch] square tridiagonal matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorTridiag + + + + + + + + + +`LinearOperator` acting like a [batch] square tridiagonal matrix. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator acts like a [batch] square tridiagonal matrix `A` with shape +`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x M` matrix. This matrix `A` is not materialized, but for +purposes of broadcasting this shape will be relevant. + +#### Example usage: + + + +Create a 3 x 3 tridiagonal linear operator. + +``` +>>> superdiag = [3., 4., 5.] +>>> diag = [1., -1., 2.] +>>> subdiag = [6., 7., 8] +>>> operator = tf.linalg.LinearOperatorTridiag( +... [superdiag, diag, subdiag], +... diagonals_format='sequence') +>>> operator.to_dense() + +>>> operator.shape +TensorShape([3, 3]) +``` + +Scalar Tensor output. + +``` +>>> operator.log_abs_determinant() + +``` + +Create a [2, 3] batch of 4 x 4 linear operators. + +``` +>>> diagonals = tf.random.normal(shape=[2, 3, 3, 4]) +>>> operator = tf.linalg.LinearOperatorTridiag( +... diagonals, +... diagonals_format='compact') +``` + +Create a shape [2, 1, 4, 2] vector. Note that this shape is compatible +since the batch dimensions, [2, 1], are broadcast to +operator.batch_shape = [2, 3]. + +``` +>>> y = tf.random.normal(shape=[2, 1, 4, 2]) +>>> x = operator.solve(y) +>>> x + +``` + +#### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [N, N], with b >= 0 +x.shape = [C1,...,Cc] + [N, R], +and [C1,...,Cc] broadcasts with [B1,...,Bb]. +``` + +#### Performance + +Suppose `operator` is a `LinearOperatorTridiag` of shape `[N, N]`, +and `x.shape = [N, R]`. Then + +* `operator.matmul(x)` will take O(N * R) time. +* `operator.solve(x)` will take O(N * R) time. + +If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and +`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`. + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`diagonals` + +`Tensor` or list of `Tensor`s depending on `diagonals_format`. + +If `diagonals_format=sequence`, this is a list of three `Tensor`'s each +with shape `[B1, ..., Bb, N]`, `b >= 0, N >= 0`, representing the +superdiagonal, diagonal and subdiagonal in that order. Note the +superdiagonal is padded with an element in the last position, and the +subdiagonal is padded with an element in the front. + +If `diagonals_format=matrix` this is a `[B1, ... Bb, N, N]` shaped +`Tensor` representing the full tridiagonal matrix. + +If `diagonals_format=compact` this is a `[B1, ... Bb, 3, N]` shaped +`Tensor` with the second to last dimension indexing the +superdiagonal, diagonal and subdiagonal in that order. Note the +superdiagonal is padded with an element in the last position, and the +subdiagonal is padded with an element in the front. + +In every case, these `Tensor`s are all floating dtype. +
+`diagonals_format` + +one of `matrix`, `sequence`, or `compact`. Default is +`compact`. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. If `diag.dtype` is real, this is auto-set to `True`. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`name` + +A name for this `LinearOperator`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `diag.dtype` is not an allowed type. +
+`ValueError` + +If `diag.dtype` is real, and `is_self_adjoint` is not `True`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`diagonals` + + +
+`diagonals_format` + + +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on if this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `x`. Equivalent to `A + x`. + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with same `dtype` and shape broadcastable to `self.shape`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +
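+
+A small sketch, reusing the same tridiagonal matrix as the class example (padding entries written as `0.`) and cross-checking against the dense computation with `tf.linalg.slogdet`:
+
+```python
+import tensorflow as tf
+
+superdiag = [3., 4., 0.]   # last entry is padding
+diag = [1., -1., 2.]
+subdiag = [0., 7., 8.]     # first entry is padding
+operator = tf.linalg.LinearOperatorTridiag(
+    [superdiag, diag, subdiag], diagonals_format='sequence')
+
+operator.log_abs_determinant()
+# ==> ~4.33, i.e. log(76), since det(A) = -76 for this matrix.
+
+# Same value recovered from the dense matrix.
+sign, log_abs_det = tf.linalg.slogdet(operator.to_dense())
+```
+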

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matric A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +
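+The runtime shape accessors can be illustrated together with a small batched diagonal
+operator (values arbitrary, eager execution assumed):
+
+```python
+import tensorflow as tf
+
+# batch_shape [3], each batch member a 4 x 4 diagonal matrix.
+operator = tf.linalg.LinearOperatorDiag(tf.ones([3, 4]))
+
+operator.shape_tensor()             # ==> [3, 4, 4]
+operator.batch_shape_tensor()       # ==> [3]
+operator.domain_dimension_tensor()  # ==> 4
+operator.range_dimension_tensor()   # ==> 4
+```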

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix, meaning that for every set of leading +dimensions, the last two dimensions define a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +
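+For example, a minimal sketch with a diagonal operator (eager execution assumed):
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorDiag([1., 2.])
+operator.to_dense()
+# ==> [[1., 0.],
+#      [0., 2.]]
+```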

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +
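+A minimal illustrative example with a diagonal operator (eager execution assumed):
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorDiag([1., 2., 4.])
+operator.trace()
+# ==> 7.0  (sum of the diagonal entries)
+```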

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/LinearOperatorZeros.md b/site/en/api_docs/python/tf/linalg/LinearOperatorZeros.md new file mode 100644 index 00000000000..27ea9cab5d7 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/LinearOperatorZeros.md @@ -0,0 +1,1729 @@ +description: LinearOperator acting like a [batch] zero matrix. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.linalg.LinearOperatorZeros + + + + + + + + + +`LinearOperator` acting like a [batch] zero matrix. + +Inherits From: [`LinearOperator`](../../tf/linalg/LinearOperator.md) + + + + + + + + + +This operator acts like a [batch] zero matrix `A` with shape +`[B1,...,Bb, N, M]` for some `b >= 0`. The first `b` indices index a +batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is +an `N x M` matrix. This matrix `A` is not materialized, but for +purposes of broadcasting this shape will be relevant. + +`LinearOperatorZeros` is initialized with `num_rows`, and optionally +`num_columns, `batch_shape`, and `dtype` arguments. If `num_columns` is +`None`, then this operator will be initialized as a square matrix. If +`batch_shape` is `None`, this operator efficiently passes through all +arguments. If `batch_shape` is provided, broadcasting may occur, which will +require making copies. + +```python +# Create a 2 x 2 zero matrix. +operator = LinearOperatorZero(num_rows=2, dtype=tf.float32) + +operator.to_dense() +==> [[0., 0.] + [0., 0.]] + +operator.shape +==> [2, 2] + +operator.determinant() +==> 0. + +x = ... Shape [2, 4] Tensor +operator.matmul(x) +==> Shape [2, 4] Tensor, same as x. + +# Create a 2-batch of 2x2 zero matrices +operator = LinearOperatorZeros(num_rows=2, batch_shape=[2]) +operator.to_dense() +==> [[[0., 0.] + [0., 0.]], + [[0., 0.] + [0., 0.]]] + +# Here, even though the operator has a batch shape, the input is the same as +# the output, so x can be passed through without a copy. The operator is able +# to detect that no broadcast is necessary because both x and the operator +# have statically defined shape. +x = ... Shape [2, 2, 3] +operator.matmul(x) +==> Shape [2, 2, 3] Tensor, same as tf.zeros_like(x) + +# Here the operator and x have different batch_shape, and are broadcast. +# This requires a copy, since the output is different size than the input. +x = ... Shape [1, 2, 3] +operator.matmul(x) +==> Shape [2, 2, 3] Tensor, equal to tf.zeros_like([x, x]) +``` + +### Shape compatibility + +This operator acts on [batch] matrix with compatible shape. +`x` is a batch matrix with compatible shape for `matmul` and `solve` if + +``` +operator.shape = [B1,...,Bb] + [N, M], with b >= 0 +x.shape = [C1,...,Cc] + [M, R], +and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd] +``` + +#### Matrix property hints + +This `LinearOperator` is initialized with boolean flags of the form `is_X`, +for `X = non_singular, self_adjoint, positive_definite, square`. +These have the following meaning: + +* If `is_X == True`, callers should expect the operator to have the + property `X`. This is a promise that should be fulfilled, but is *not* a + runtime assert. For example, finite floating point precision may result + in these promises being violated. +* If `is_X == False`, callers should expect the operator to not have `X`. +* If `is_X == None` (the default), callers should have no expectation either + way. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_rows` + +Scalar non-negative integer `Tensor`. Number of rows in the +corresponding zero matrix. +
+`num_columns` + +Scalar non-negative integer `Tensor`. Number of columns in +the corresponding zero matrix. If `None`, defaults to the value of +`num_rows`. +
+`batch_shape` + +Optional `1-D` integer `Tensor`. The shape of the leading +dimensions. If `None`, this operator has no leading dimensions. +
+`dtype` + +Data type of the matrix that this operator represents. +
+`is_non_singular` + +Expect that this operator is non-singular. +
+`is_self_adjoint` + +Expect that this operator is equal to its hermitian +transpose. +
+`is_positive_definite` + +Expect that this operator is positive definite, +meaning the quadratic form `x^H A x` has positive real part for all +nonzero `x`. Note that we do not require the operator to be +self-adjoint to be positive-definite. See: +https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non-symmetric_matrices +
+`is_square` + +Expect that this operator acts like square [batch] matrices. +
+`assert_proper_shapes` + +Python `bool`. If `False`, only perform static +checks that initialization and method arguments have proper shape. +If `True`, and static checks are inconclusive, add asserts to the graph. +
+`name` + +A name for this `LinearOperator` +
+ + + + + + + + + + + + + + + + + + + + + +
+`ValueError` + +If `num_rows` is determined statically to be non-scalar, or +negative. +
+`ValueError` + +If `num_columns` is determined statically to be non-scalar, +or negative. +
+`ValueError` + +If `batch_shape` is determined statically to not be 1-D, or +negative. +
+`ValueError` + +If any of the following is not `True`: +`{is_self_adjoint, is_non_singular, is_positive_definite}`. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`H` + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. +
+`batch_shape` + +`TensorShape` of batch dimensions of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb])`, equivalent to `A.shape[:-2]` +
+`domain_dimension` + +Dimension (in the sense of vector spaces) of the domain of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. +
+`dtype` + +The `DType` of `Tensor`s handled by this `LinearOperator`. +
+`graph_parents` + +List of graph dependencies of this `LinearOperator`. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Do not call `graph_parents`. +
+`is_non_singular` + + +
+`is_positive_definite` + + +
+`is_self_adjoint` + + +
+`is_square` + +Return `True/False` depending on whether this operator is square. +
+`range_dimension` + +Dimension (in the sense of vector spaces) of the range of this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. +
+`shape` + +`TensorShape` of this `LinearOperator`. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns +`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.shape`. +
+`tensor_rank` + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. +
+ + + +## Methods + +

add_to_tensor

+ +View source + + + +Add matrix represented by this operator to `mat`. Equiv to `I + mat`. + + + + + + + + + + + + + + +
Args
+`mat` + +`Tensor` with same `dtype` and shape broadcastable to `self`. +
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with broadcast shape and same `dtype` as `self`. +
+ + + +
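+For a zero operator, adding the represented matrix leaves `mat` unchanged (up to
+broadcasting). A minimal sketch, assuming eager execution:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorZeros(num_rows=2, dtype=tf.float32)
+mat = tf.constant([[1., 2.],
+                   [3., 4.]])
+
+operator.add_to_tensor(mat)
+# ==> [[1., 2.],
+#      [3., 4.]]
+```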

adjoint

+ +View source + + + +Returns the adjoint of the current `LinearOperator`. + +Given `A` representing this `LinearOperator`, return `A*`. +Note that calling `self.adjoint()` and `self.H` are equivalent. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the adjoint of this `LinearOperator`. +
+ + + +

assert_non_singular

+ +View source + + + +Returns an `Op` that asserts this operator is non singular. + +This operator is considered non-singular if + +``` +ConditionNumber < max{100, range_dimension, domain_dimension} * eps, +eps := np.finfo(self.dtype.as_numpy_dtype).eps +``` + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is singular. +
+ + + +

assert_positive_definite

+ +View source + + + +Returns an `Op` that asserts this operator is positive definite. + +Here, positive definite means that the quadratic form `x^H A x` has positive +real part for all nonzero `x`. Note that we do not require the operator to +be self-adjoint to be positive definite. + + + + + + + + + + +
Args
+`name` + +A name to give this `Op`. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not positive definite. +
+ + + +

assert_self_adjoint

+ +View source + + + +Returns an `Op` that asserts this operator is self-adjoint. + +Here we check that this operator is *exactly* equal to its hermitian +transpose. + + + + + + + + + + +
Args
+`name` + +A string name to prepend to created ops. +
+ + + + + + + + + + + +
Returns
+An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if +the operator is not self-adjoint. +
+ + + +

batch_shape_tensor

+ +View source + + + +Shape of batch dimensions of this operator, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb]`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +
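+For example, a sketch with an explicitly batched zero operator (eager execution assumed):
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorZeros(
+    num_rows=2, batch_shape=[3], dtype=tf.float32)
+
+operator.batch_shape_tensor()   # ==> [3]
+operator.shape_tensor()         # ==> [3, 2, 2]
+```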

cholesky

+ +View source + + + +Returns a Cholesky factor as a `LinearOperator`. + +Given `A` representing this `LinearOperator`, if `A` is positive definite +self-adjoint, return `L`, where `A = L L^T`, i.e. the cholesky +decomposition. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` which represents the lower triangular matrix +in the Cholesky decomposition. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be positive +definite and self adjoint. +
+ + + +

cond

+ +View source + + + +Returns the condition number of this linear operator. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

determinant

+ +View source + + + +Determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +
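+For this operator the determinant of every batch member is zero. A minimal sketch,
+assuming eager execution:
+
+```python
+import tensorflow as tf
+
+operator = tf.linalg.LinearOperatorZeros(num_rows=2, batch_shape=[3], dtype=tf.float32)
+operator.determinant()
+# ==> [0., 0., 0.]  (shape equal to the batch shape)
+```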

diag_part

+ +View source + + + +Efficiently get the [batch] diagonal part of this operator. + +If this operator has shape `[B1,...,Bb, M, N]`, this returns a +`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where +`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`. + +``` +my_operator = LinearOperatorDiag([1., 2.]) + +# Efficiently get the diagonal +my_operator.diag_part() +==> [1., 2.] + +# Equivalent, but inefficient method +tf.linalg.diag_part(my_operator.to_dense()) +==> [1., 2.] +``` + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + + +
Returns
+`diag_part` + +A `Tensor` of same `dtype` as self. +
+ + + +

domain_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the domain of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `N`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

eigvals

+ +View source + + + +Returns the eigenvalues of this linear operator. + +If the operator is marked as self-adjoint (via `is_self_adjoint`) +this computation can be more efficient. + +Note: This currently only supports self-adjoint operators. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb, N]` `Tensor` of same `dtype` as `self`. +
+ + + +

inverse

+ +View source + + + +Returns the Inverse of this `LinearOperator`. + +Given `A` representing this `LinearOperator`, return a `LinearOperator` +representing `A^-1`. + + + + + + + + + + +
Args
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`LinearOperator` representing inverse of this matrix. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +When the `LinearOperator` is not hinted to be `non_singular`. +
+ + + +

log_abs_determinant

+ +View source + + + +Log absolute value of determinant for every batch member. + + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `self.batch_shape` and same `dtype` as `self`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_square` is `False`. +
+ + + +

matmul

+ +View source + + + +Transform [batch] matrix `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +X = ... # shape [..., N, R], batch matrix, R > 0. + +Y = operator.matmul(X) +Y.shape +==> [..., M, R] + +Y[..., :, r] = sum_j A[..., :, j] X[j, r] +``` + + + + + + + + + + + + + + + + + + + +
Args
+`x` + +`LinearOperator` or `Tensor` with compatible shape and same `dtype` as +`self`. See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`adjoint_arg` + +Python `bool`. If `True`, compute `A x^H` where `x^H` is +the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `LinearOperator` or `Tensor` with shape `[..., M, R]` and same `dtype` +as `self`. +
+ + + +

matvec

+ +View source + + + +Transform [batch] vector `x` with left multiplication: `x --> Ax`. + +```python +# Make an operator acting like batch matric A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) + +X = ... # shape [..., N], batch vector + +Y = operator.matvec(X) +Y.shape +==> [..., M] + +Y[..., :] = sum_j A[..., :, j] X[..., j] +``` + + + + + + + + + + + + + + + + +
Args
+`x` + +`Tensor` with compatible shape and same `dtype` as `self`. +`x` is treated as a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. +
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+A `Tensor` with shape `[..., M]` and same `dtype` as `self`. +
+ + + +

range_dimension_tensor

+ +View source + + + +Dimension (in the sense of vector spaces) of the range of this operator. + +Determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `M`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

shape_tensor

+ +View source + + + +Shape of this `LinearOperator`, determined at runtime. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding +`[B1,...,Bb, M, N]`, equivalent to tf.shape(A). + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor` +
+ + + +

solve

+ +View source + + + +Solve (exact or approx) `R` (batch) systems of equations: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve R > 0 linear systems for every member of the batch. +RHS = ... # shape [..., M, R] + +X = operator.solve(RHS) +# X[..., :, r] is the solution to the r'th linear system +# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r] + +operator.matmul(X) +==> RHS +``` + + + + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator and compatible shape. +`rhs` is treated like a [batch] matrix, meaning that for every set of leading +dimensions, the last two dimensions define a matrix. +See class docstring for definition of compatibility. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`adjoint_arg` + +Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` +is the hermitian transpose (transposition and complex conjugation). +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

solvevec

+ +View source + + + +Solve single equation with best effort: `A X = rhs`. + +The returned `Tensor` will be close to an exact solution if `A` is well +conditioned. Otherwise closeness will vary. See class docstring for details. + +#### Examples: + + + +```python +# Make an operator acting like batch matrix A. Assume A.shape = [..., M, N] +operator = LinearOperator(...) +operator.shape = [..., M, N] + +# Solve one linear system for every member of the batch. +RHS = ... # shape [..., M] + +X = operator.solvevec(RHS) +# X is the solution to the linear system +# sum_j A[..., :, j] X[..., j] = RHS[..., :] + +operator.matvec(X) +==> RHS +``` + + + + + + + + + + + + + + + + +
Args
+`rhs` + +`Tensor` with same `dtype` as this operator. +`rhs` is treated like a [batch] vector meaning for every set of leading +dimensions, the last dimension defines a vector. See class docstring +for definition of compatibility regarding batch dimensions. +
+`adjoint` + +Python `bool`. If `True`, solve the system involving the adjoint +of this `LinearOperator`: `A^H X = rhs`. +
+`name` + +A name scope to use for ops added by this method. +
+ + + + + + + + + + + +
Returns
+`Tensor` with shape `[...,N]` and same `dtype` as `rhs`. +
+ + + + + + + + + + + + +
Raises
+`NotImplementedError` + +If `self.is_non_singular` or `is_square` is False. +
+ + + +

tensor_rank_tensor

+ +View source + + + +Rank (in the sense of tensors) of matrix corresponding to this operator. + +If this operator acts like the batch matrix `A` with +`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+`int32` `Tensor`, determined at runtime. +
+ + + +

to_dense

+ +View source + + + +Return a dense (batch) matrix representing this operator. + + +

trace

+ +View source + + + +Trace of the linear operator, equal to sum of `self.diag_part()`. + +If the operator is square, this is also the sum of the eigenvalues. + + + + + + + + + + +
Args
+`name` + +A name for this `Op`. +
+ + + + + + + + + + + +
Returns
+Shape `[B1,...,Bb]` `Tensor` of same `dtype` as `self`. +
+ + + +

__matmul__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/linalg/adjoint.md b/site/en/api_docs/python/tf/linalg/adjoint.md new file mode 100644 index 00000000000..73636f3b952 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/adjoint.md @@ -0,0 +1,96 @@ +description: Transposes the last two dimensions of and conjugates tensor matrix. + +
+ + +
+ +# tf.linalg.adjoint + + + + + + + + + +Transposes the last two dimensions of and conjugates tensor `matrix`. + + + + + + + + + + +#### For example: + + + +```python +x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j], + [4 + 4j, 5 + 5j, 6 + 6j]]) +tf.linalg.adjoint(x) # [[1 - 1j, 4 - 4j], + # [2 - 2j, 5 - 5j], + # [3 - 3j, 6 - 6j]] +``` + + + + + + + + + + + + + +
+`matrix` + +A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`, +or `complex128` with shape `[..., M, M]`. +
+`name` + +A name to give this `Op` (optional). +
+ + + + + + + + + + + +
+The adjoint (a.k.a. Hermitian transpose a.k.a. conjugate transpose) of +matrix. +
+ diff --git a/site/en/api_docs/python/tf/linalg/band_part.md b/site/en/api_docs/python/tf/linalg/band_part.md new file mode 100644 index 00000000000..af3ae0da73a --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/band_part.md @@ -0,0 +1,138 @@ +description: Copy a tensor setting everything outside a central band in each innermost matrix + +
+ + +
+ +# tf.linalg.band_part + + + + + + + + + +Copy a tensor setting everything outside a central band in each innermost matrix + + + + + + + + + +to zero. + +The `band` part is computed as follows: +Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a +tensor with the same shape where + +`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`. + +The indicator function + +`in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower)) && + (num_upper < 0 || (n-m) <= num_upper)`. + +#### For example: + + + +``` +# if 'input' is [[ 0, 1, 2, 3] + [-1, 0, 1, 2] + [-2, -1, 0, 1] + [-3, -2, -1, 0]], + +tf.matrix_band_part(input, 1, -1) ==> [[ 0, 1, 2, 3] + [-1, 0, 1, 2] + [ 0, -1, 0, 1] + [ 0, 0, -1, 0]], + +tf.matrix_band_part(input, 2, 1) ==> [[ 0, 1, 0, 0] + [-1, 0, 1, 0] + [-2, -1, 0, 1] + [ 0, -2, -1, 0]] +``` + +#### Useful special cases: + + + +``` + tf.matrix_band_part(input, 0, -1) ==> Upper triangular part. + tf.matrix_band_part(input, -1, 0) ==> Lower triangular part. + tf.matrix_band_part(input, 0, 0) ==> Diagonal. +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Rank `k` tensor. +
+`num_lower` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +0-D tensor. Number of subdiagonals to keep. If negative, keep entire +lower triangle. +
+`num_upper` + +A `Tensor`. Must have the same type as `num_lower`. +0-D tensor. Number of superdiagonals to keep. If negative, keep +entire upper triangle. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/cholesky.md b/site/en/api_docs/python/tf/linalg/cholesky.md new file mode 100644 index 00000000000..f1e1b9641f1 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/cholesky.md @@ -0,0 +1,91 @@ +description: Computes the Cholesky decomposition of one or more square matrices. + +
+ + +
+ +# tf.linalg.cholesky + + + + + + + + + +Computes the Cholesky decomposition of one or more square matrices. + + + + + + + + + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. + +The input has to be symmetric and positive definite. Only the lower-triangular +part of the input will be used for this operation. The upper-triangular part +will not be read. + +The output is a tensor of the same shape as the input +containing the Cholesky decompositions for all input submatrices `[..., :, :]`. + +**Note**: The gradient computation on GPU is faster for large matrices but +not for large batch dimensions when the submatrices are small. In this +case it might be faster to use the CPU. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/cholesky_solve.md b/site/en/api_docs/python/tf/linalg/cholesky_solve.md new file mode 100644 index 00000000000..a6ee4efa888 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/cholesky_solve.md @@ -0,0 +1,108 @@ +description: Solves systems of linear eqns A X = RHS, given Cholesky factorizations. + +
+ + +
+ +# tf.linalg.cholesky_solve + + + + + + + + + +Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations. + + + + + + + + + +```python +# Solve 10 separate 2x2 linear systems: +A = ... # shape 10 x 2 x 2 +RHS = ... # shape 10 x 2 x 1 +chol = tf.linalg.cholesky(A) # shape 10 x 2 x 2 +X = tf.linalg.cholesky_solve(chol, RHS) # shape 10 x 2 x 1 +# tf.matmul(A, X) ~ RHS +X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0] + +# Solve five linear systems (K = 5) for every member of the length 10 batch. +A = ... # shape 10 x 2 x 2 +RHS = ... # shape 10 x 2 x 5 +... +X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2] +``` + + + + + + + + + + + + + + + + +
+`chol` + +A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`. +Cholesky factorization of `A`, e.g. `chol = tf.linalg.cholesky(A)`. +For that reason, only the lower triangular parts (including the diagonal) +of the last two dimensions of `chol` are used. The strictly upper part is +assumed to be zero and not accessed. +
+`rhs` + +A `Tensor`, same type as `chol`, shape is `[..., M, K]`. +
+`name` + +A name to give this `Op`. Defaults to `cholesky_solve`. +
+ + + + + + + + + + + +
+Solution to `A x = rhs`, shape `[..., M, K]`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/cross.md b/site/en/api_docs/python/tf/linalg/cross.md new file mode 100644 index 00000000000..4a9edd0d250 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/cross.md @@ -0,0 +1,89 @@ +description: Compute the pairwise cross product. + +
+ + +
+ +# tf.linalg.cross + + + + + + + + + +Compute the pairwise cross product. + + + + + + + + + +`a` and `b` must be the same shape; they can either be simple 3-element vectors, +or any shape where the innermost dimension is 3. In the latter case, each pair +of corresponding 3-element vectors is cross-multiplied independently. + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +A tensor containing 3-element vectors. +
+`b` + +A `Tensor`. Must have the same type as `a`. +Another tensor, of same type and shape as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/det.md b/site/en/api_docs/python/tf/linalg/det.md new file mode 100644 index 00000000000..98d8eec3829 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/det.md @@ -0,0 +1,81 @@ +description: Computes the determinant of one or more square matrices. + +
+ + +
+ +# tf.linalg.det + + + + + + + + + +Computes the determinant of one or more square matrices. + + + + + + + + + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a tensor containing the determinants +for all input submatrices `[..., :, :]`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/diag.md b/site/en/api_docs/python/tf/linalg/diag.md new file mode 100644 index 00000000000..013734cd947 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/diag.md @@ -0,0 +1,249 @@ +description: Returns a batched diagonal tensor with given batched diagonal values. + +
+ + +
+ +# tf.linalg.diag + + + + + + + + + +Returns a batched diagonal tensor with given batched diagonal values. + + + + + + + + + +Returns a tensor with the contents in `diagonal` as `k[0]`-th to `k[1]`-th +diagonals of a matrix, with everything else padded with `padding`. `num_rows` +and `num_cols` specify the dimension of the innermost matrix of the output. If +both are not specified, the op assumes the innermost matrix is square and +infers its size from `k` and the innermost dimension of `diagonal`. If only +one of them is specified, the op assumes the unspecified value is the smallest +possible based on other criteria. + +Let `diagonal` have `r` dimensions `[I, J, ..., L, M, N]`. The output tensor +has rank `r+1` with shape `[I, J, ..., L, M, num_rows, num_cols]` when only +one diagonal is given (`k` is an integer or `k[0] == k[1]`). Otherwise, it has +rank `r` with shape `[I, J, ..., L, num_rows, num_cols]`. + +The second innermost dimension of `diagonal` has double meaning. When `k` is +scalar or `k[0] == k[1]`, `M` is part of the batch size [I, J, ..., M], and +the output tensor is: + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper + padding_value ; otherwise +``` + +Otherwise, `M` is treated as the number of diagonals for the matrix in the +same batch (`M = k[1]-k[0]+1`), and the output tensor is: + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1] + padding_value ; otherwise +``` +where `d = n - m`, `diag_index = k[1] - d`, and +`index_in_diag = n - max(d, 0) + offset`. + +`offset` is zero except when the alignment of the diagonal is to the right. +``` +offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT} + and `d >= 0`) or + (`align` in {LEFT_RIGHT, RIGHT_RIGHT} + and `d <= 0`) + 0 ; otherwise +``` +where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`. + +#### For example: + + + +``` +# The main diagonal. +diagonal = np.array([[1, 2, 3, 4], # Input shape: (2, 4) + [5, 6, 7, 8]]) +tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0], # Output shape: (2, 4, 4) + [0, 2, 0, 0], + [0, 0, 3, 0], + [0, 0, 0, 4]], + [[5, 0, 0, 0], + [0, 6, 0, 0], + [0, 0, 7, 0], + [0, 0, 0, 8]]] + +# A superdiagonal (per batch). +diagonal = np.array([[1, 2, 3], # Input shape: (2, 3) + [4, 5, 6]]) +tf.matrix_diag(diagonal, k = 1) + ==> [[[0, 1, 0, 0], # Output shape: (2, 4, 4) + [0, 0, 2, 0], + [0, 0, 0, 3], + [0, 0, 0, 0]], + [[0, 4, 0, 0], + [0, 0, 5, 0], + [0, 0, 0, 6], + [0, 0, 0, 0]]] + +# A tridiagonal band (per batch). +diagonals = np.array([[[8, 9, 0], # Input shape: (2, 2, 3) + [1, 2, 3], + [0, 4, 5]], + [[2, 3, 0], + [6, 7, 9], + [0, 9, 1]]]) +tf.matrix_diag(diagonals, k = (-1, 1)) + ==> [[[1, 8, 0], # Output shape: (2, 3, 3) + [4, 2, 9], + [0, 5, 3]], + [[6, 2, 0], + [9, 7, 3], + [0, 1, 9]]] + +# RIGHT_LEFT alignment. +diagonals = np.array([[[0, 8, 9], # Input shape: (2, 2, 3) + [1, 2, 3], + [4, 5, 0]], + [[0, 2, 3], + [6, 7, 9], + [9, 1, 0]]]) +tf.matrix_diag(diagonals, k = (-1, 1), align="RIGHT_LEFT") + ==> [[[1, 8, 0], # Output shape: (2, 3, 3) + [4, 2, 9], + [0, 5, 3]], + [[6, 2, 0], + [9, 7, 3], + [0, 1, 9]]] + +# Rectangular matrix. +diagonal = np.array([1, 2]) # Input shape: (2) +tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4) + ==> [[0, 0, 0, 0], # Output shape: (3, 4) + [1, 0, 0, 0], + [0, 2, 0, 0]] + +# Rectangular matrix with inferred num_cols and padding_value = 9. 
+tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9) + ==> [[9, 9], # Output shape: (3, 2) + [1, 9], + [9, 2]] +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`diagonal` + +A `Tensor` with `rank k >= 1`. +
+`name` + +A name for the operation (optional). +
+`k` + +Diagonal offset(s). Positive value means superdiagonal, 0 refers to the +main diagonal, and negative value means subdiagonals. `k` can be a single +integer (for a single diagonal) or a pair of integers specifying the low +and high ends of a matrix band. `k[0]` must not be larger than `k[1]`. +
+`num_rows` + +The number of rows of the output matrix. If it is not provided, +the op assumes the output matrix is a square matrix and infers the matrix +size from `d_lower`, `d_upper`, and the innermost dimension of `diagonal`. +
+`num_cols` + +The number of columns of the output matrix. If it is not provided, +the op assumes the output matrix is a square matrix and infers the matrix +size from `d_lower`, `d_upper`, and the innermost dimension of `diagonal`. +
+`padding_value` + +The value to fill the area outside the specified diagonal +band with. Default is 0. +
+`align` + +Some diagonals are shorter than `max_diag_len` and need to be padded. +`align` is a string specifying how superdiagonals and subdiagonals should +be aligned, respectively. There are four possible alignments: "RIGHT_LEFT" +(default), "LEFT_RIGHT", "LEFT_LEFT", and "RIGHT_RIGHT". "RIGHT_LEFT" +aligns superdiagonals to the right (left-pads the row) and subdiagonals to +the left (right-pads the row). It is the packing format LAPACK uses. +cuSPARSE uses "LEFT_RIGHT", which is the opposite alignment. +
+ + + + + + + + + + + +
+A Tensor. Has the same type as `diagonal`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/diag_part.md b/site/en/api_docs/python/tf/linalg/diag_part.md new file mode 100644 index 00000000000..d6130115140 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/diag_part.md @@ -0,0 +1,214 @@ +description: Returns the batched diagonal part of a batched tensor. + +
+ + +
+ +# tf.linalg.diag_part + + + + + + + + + +Returns the batched diagonal part of a batched tensor. + + + + + + + + + +Returns a tensor with the `k[0]`-th to `k[1]`-th diagonals of the batched +`input`. + +Assume `input` has `r` dimensions `[I, J, ..., L, M, N]`. +Let `max_diag_len` be the maximum length among all diagonals to be extracted, +`max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))` +Let `num_diags` be the number of diagonals to extract, +`num_diags = k[1] - k[0] + 1`. + +If `num_diags == 1`, the output tensor is of rank `r - 1` with shape +`[I, J, ..., L, max_diag_len]` and values: + +``` +diagonal[i, j, ..., l, n] + = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N, + padding_value ; otherwise. +``` +where `y = max(-k[1], 0)`, `x = max(k[1], 0)`. + +Otherwise, the output tensor has rank `r` with dimensions +`[I, J, ..., L, num_diags, max_diag_len]` with values: + +``` +diagonal[i, j, ..., l, m, n] + = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N, + padding_value ; otherwise. +``` +where `d = k[1] - m`, `y = max(-d, 0) - offset`, and `x = max(d, 0) - offset`. + +`offset` is zero except when the alignment of the diagonal is to the right. +``` +offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT} + and `d >= 0`) or + (`align` in {LEFT_RIGHT, RIGHT_RIGHT} + and `d <= 0`) + 0 ; otherwise +``` +where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`. + +The input must be at least a matrix. + +#### For example: + + + +``` +input = np.array([[[1, 2, 3, 4], # Input shape: (2, 3, 4) + [5, 6, 7, 8], + [9, 8, 7, 6]], + [[5, 4, 3, 2], + [1, 2, 3, 4], + [5, 6, 7, 8]]]) + +# A main diagonal from each batch. +tf.linalg.diag_part(input) ==> [[1, 6, 7], # Output shape: (2, 3) + [5, 2, 7]] + +# A superdiagonal from each batch. +tf.linalg.diag_part(input, k = 1) + ==> [[2, 7, 6], # Output shape: (2, 3) + [4, 3, 8]] + +# A band from each batch. +tf.linalg.diag_part(input, k = (-1, 2)) + ==> [[[3, 8, 0], # Output shape: (2, 4, 3) + [2, 7, 6], + [1, 6, 7], + [0, 5, 8]], + [[3, 4, 0], + [4, 3, 8], + [5, 2, 7], + [0, 1, 6]]] + +# RIGHT_LEFT alignment. +tf.linalg.diag_part(input, k = (-1, 2), align="RIGHT_LEFT") + ==> [[[0, 3, 8], # Output shape: (2, 4, 3) + [2, 7, 6], + [1, 6, 7], + [5, 8, 0]], + [[0, 3, 4], + [4, 3, 8], + [5, 2, 7], + [1, 6, 0]]] + +# max_diag_len can be shorter than the main diagonal. +tf.linalg.diag_part(input, k = (-2, -1)) + ==> [[[5, 8], + [0, 9]], + [[1, 6], + [0, 5]]] + +# padding_value = 9 +tf.linalg.diag_part(input, k = (1, 3), padding_value = 9) + ==> [[[4, 9, 9], # Output shape: (2, 3, 3) + [3, 8, 9], + [2, 7, 6]], + [[2, 9, 9], + [3, 4, 9], + [4, 3, 8]]] + +``` + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` with `rank k >= 2`. +
+`name` + +A name for the operation (optional). +
+`k` + +Diagonal offset(s). Positive value means superdiagonal, 0 refers to the +main diagonal, and negative value means subdiagonals. `k` can be a single +integer (for a single diagonal) or a pair of integers specifying the low +and high ends of a matrix band. `k[0]` must not be larger than `k[1]`. +
+`padding_value` + +The value to fill the area outside the specified diagonal +band with. Default is 0. +
+`align` + +Some diagonals are shorter than `max_diag_len` and need to be padded. +`align` is a string specifying how superdiagonals and subdiagonals should +be aligned, respectively. There are four possible alignments: "RIGHT_LEFT" +(default), "LEFT_RIGHT", "LEFT_LEFT", and "RIGHT_RIGHT". "RIGHT_LEFT" +aligns superdiagonals to the right (left-pads the row) and subdiagonals to +the left (right-pads the row). It is the packing format LAPACK uses. +cuSPARSE uses "LEFT_RIGHT", which is the opposite alignment. +
+ + + + + + + + + + + +
+A Tensor containing diagonals of `input`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/eig.md b/site/en/api_docs/python/tf/linalg/eig.md new file mode 100644 index 00000000000..2a3fd7d43a5 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/eig.md @@ -0,0 +1,98 @@ +description: Computes the eigen decomposition of a batch of matrices. + +
+ + +
+ +# tf.linalg.eig + + + + + + + + + +Computes the eigen decomposition of a batch of matrices. + + + + + + + + + +The eigenvalues +and eigenvectors for a non-Hermitian matrix in general are complex. The +eigenvectors are not guaranteed to be linearly independent. + +Computes the eigenvalues and right eigenvectors of the innermost +N-by-N matrices in `tensor` such that +`tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i]`, for i=0...N-1. + + + + + + + + + + + + + +
+`tensor` + +`Tensor` of shape `[..., N, N]`. Only the lower triangular part of +each inner matrix is referenced. +
+`name` + +string, optional name of the operation. +
+ + + + + + + + + + + + + + + +
+`e` + +Eigenvalues. Shape is `[..., N]`. Sorted in non-decreasing order. +
+`v` + +Eigenvectors. Shape is `[..., N, N]`. The columns of the innermost +matrices contain eigenvectors of the corresponding matrices in `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/eigh.md b/site/en/api_docs/python/tf/linalg/eigh.md new file mode 100644 index 00000000000..cbafa864e80 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/eigh.md @@ -0,0 +1,97 @@ +description: Computes the eigen decomposition of a batch of self-adjoint matrices. + +
+ + +
+ +# tf.linalg.eigh + + + + + + + + + +Computes the eigen decomposition of a batch of self-adjoint matrices. + + + + + + + + + +Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices +in `tensor` such that +`tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i]`, for i=0...N-1. + + + + + + + + + + + + + +
+`tensor` + +`Tensor` of shape `[..., N, N]`. Only the lower triangular part of +each inner matrix is referenced. +
+`name` + +string, optional name of the operation. +
+ + + + + + + + + + + + + + + +
+`e` + +Eigenvalues. Shape is `[..., N]`. Sorted in non-decreasing order. +
+`v` + +Eigenvectors. Shape is `[..., N, N]`. The columns of the innermost +matrices contain eigenvectors of the corresponding matrices in `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/eigvals.md b/site/en/api_docs/python/tf/linalg/eigvals.md new file mode 100644 index 00000000000..85494ffc383 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/eigvals.md @@ -0,0 +1,88 @@ +description: Computes the eigenvalues of one or more matrices. + +
+ + +
+ +# tf.linalg.eigvals + + + + + + + + + +Computes the eigenvalues of one or more matrices. + + + + + + + + + +Note: If your program backpropagates through this function, you should replace +it with a call to tf.linalg.eig (possibly ignoring the second output) to +avoid computing the eigen decomposition twice. This is because the +eigenvectors are used to compute the gradient w.r.t. the eigenvalues. See +_SelfAdjointEigV2Grad in linalg_grad.py. + + + + + + + + + + + + + +
+`tensor` + +`Tensor` of shape `[..., N, N]`. +
+`name` + +string, optional name of the operation. +
+ + + + + + + + + + + + +
+`e` + +Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N` +eigenvalues of `tensor[..., :, :]`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/eigvalsh.md b/site/en/api_docs/python/tf/linalg/eigvalsh.md new file mode 100644 index 00000000000..af30c9823a6 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/eigvalsh.md @@ -0,0 +1,91 @@ +description: Computes the eigenvalues of one or more self-adjoint matrices. + +
+ + +
+ +# tf.linalg.eigvalsh + + + + + + + + + +Computes the eigenvalues of one or more self-adjoint matrices. + + + + + + + + + +Note: If your program backpropagates through this function, you should replace +it with a call to tf.linalg.eigh (possibly ignoring the second output) to +avoid computing the eigen decomposition twice. This is because the +eigenvectors are used to compute the gradient w.r.t. the eigenvalues. See +_SelfAdjointEigV2Grad in linalg_grad.py. + + + + + + + + + + + + + +
+`tensor` + +`Tensor` of shape `[..., N, N]`. +
+`name` + +string, optional name of the operation. +
+ + + + + + + + + + + + +
+`e` + +Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N` +eigenvalues of `tensor[..., :, :]`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/experimental.md b/site/en/api_docs/python/tf/linalg/experimental.md new file mode 100644 index 00000000000..a8235fd9aad --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/experimental.md @@ -0,0 +1,25 @@ +description: Public API for tf.linalg.experimental namespace. + +
+ + +
+ +# Module: tf.linalg.experimental + + + + + + + + + +Public API for tf.linalg.experimental namespace. + + + +## Functions + +[`conjugate_gradient(...)`](../../tf/linalg/experimental/conjugate_gradient.md): Conjugate gradient solver. + diff --git a/site/en/api_docs/python/tf/linalg/experimental/conjugate_gradient.md b/site/en/api_docs/python/tf/linalg/experimental/conjugate_gradient.md new file mode 100644 index 00000000000..af404b9aec6 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/experimental/conjugate_gradient.md @@ -0,0 +1,141 @@ +description: Conjugate gradient solver. + +
+ + +
+ +# tf.linalg.experimental.conjugate_gradient + + + + + + + + + +Conjugate gradient solver. + + + + + + + + + +Solves a linear system of equations `A*x = rhs` for self-adjoint, positive +definite matrix `A` and right-hand side vector `rhs`, using an iterative, +matrix-free algorithm where the action of the matrix A is represented by +`operator`. The iteration terminates when either the number of iterations +exceeds `max_iter` or when the residual norm has been reduced to `tol` +times its initial value, i.e. \\(||rhs - A x_k|| <= tol ||rhs||\\). + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`operator` + +A `LinearOperator` that is self-adjoint and positive definite. +
+`rhs` + +A possibly batched vector of shape `[..., N]` containing the right-hand +side vector. +
+`preconditioner` + +A `LinearOperator` that approximates the inverse of `A`. +An efficient preconditioner could dramatically improve the rate of +convergence. If `preconditioner` represents matrix `M`(`M` approximates +`A^{-1}`), the algorithm uses `preconditioner.apply(x)` to estimate +`A^{-1}x`. For this to be useful, the cost of applying `M` should be +much lower than computing `A^{-1}` directly. +
+`x` + +A possibly batched vector of shape `[..., N]` containing the initial +guess for the solution. +
+`tol` + +A float scalar convergence tolerance. +
+`max_iter` + +An integer giving the maximum number of iterations. +
+`name` + +A name scope for the operation. +
+ + + + + + + + + + + + +
+`output` + +A namedtuple representing the final state with fields: +- i: A scalar `int32` `Tensor`. Number of iterations executed. +- x: A rank-1 `Tensor` of shape `[..., N]` containing the computed +solution. +- r: A rank-1 `Tensor` of shape `[..., N]` containing the residual vector. +- p: A rank-1 `Tensor` of shape `[..., N]`. `A`-conjugate basis vector. +- gamma: \\(r \cdot M \cdot r\\), equivalent to \\(||r||_2^2\\) when +`preconditioner=None`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/expm.md b/site/en/api_docs/python/tf/linalg/expm.md new file mode 100644 index 00000000000..365fe50928c --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/expm.md @@ -0,0 +1,116 @@ +description: Computes the matrix exponential of one or more square matrices. + +
+ + +
+ +# tf.linalg.expm + + + + + + + + + +Computes the matrix exponential of one or more square matrices. + + + + + + + + + +exp(A) = \sum_{n=0}^\infty A^n/n! + +The exponential is computed using a combination of the scaling and squaring +method and the Pade approximation. Details can be found in: +Nicholas J. Higham, "The scaling and squaring method for the matrix +exponential revisited," SIAM J. Matrix Anal. Applic., 26:1179-1193, 2005. + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a tensor of the same shape as the input +containing the exponential for all input submatrices `[..., :, :]`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`, or +`complex128` with shape `[..., M, M]`. +
+`name` + +A name to give this `Op` (optional). +
+ + + + + + + + + + + +
+the matrix exponential of the input. +
+ + + + + + + + + + + + +
+`ValueError` + +An unsupported type is provided as input. +
+ + + + +#### Scipy Compatibility +Equivalent to scipy.linalg.expm + diff --git a/site/en/api_docs/python/tf/linalg/global_norm.md b/site/en/api_docs/python/tf/linalg/global_norm.md new file mode 100644 index 00000000000..b17d549b5de --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/global_norm.md @@ -0,0 +1,106 @@ +description: Computes the global norm of multiple tensors. + +
+ + +
+ +# tf.linalg.global_norm + + + + + + + + + +Computes the global norm of multiple tensors. + + + + + + + + + +Given a tuple or list of tensors `t_list`, this operation returns the +global norm of the elements in all tensors in `t_list`. The global norm is +computed as: + +`global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))` + +Any entries in `t_list` that are of type None are ignored. + + + + + + + + + + + + + +
+`t_list` + +A tuple or list of mixed `Tensors`, `IndexedSlices`, or None. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A 0-D (scalar) `Tensor` of type `float`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `t_list` is not a sequence. +
+ diff --git a/site/en/api_docs/python/tf/linalg/inv.md b/site/en/api_docs/python/tf/linalg/inv.md new file mode 100644 index 00000000000..3cef559b358 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/inv.md @@ -0,0 +1,96 @@ +description: Computes the inverse of one or more square invertible matrices or their + +
+ + +
+ +# tf.linalg.inv + + + + + + + + + +Computes the inverse of one or more square invertible matrices or their + + + + + + + + + +adjoints (conjugate transposes). + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a tensor of the same shape as the input +containing the inverse for all input submatrices `[..., :, :]`. + +The op uses LU decomposition with partial pivoting to compute the inverses. + +If a matrix is not invertible there is no guarantee what the op does. It +may detect the condition and raise an exception or it may simply return a +garbage result. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`adjoint` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/logdet.md b/site/en/api_docs/python/tf/linalg/logdet.md new file mode 100644 index 00000000000..db937dd46db --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/logdet.md @@ -0,0 +1,96 @@ +description: Computes log of the determinant of a hermitian positive definite matrix. + +
+ + +
+ +# tf.linalg.logdet + + + + + + + + + +Computes log of the determinant of a hermitian positive definite matrix. + + + + + + + + + +```python +# Compute the determinant of a matrix while reducing the chance of over- or +underflow: +A = ... # shape 10 x 10 +det = tf.exp(tf.linalg.logdet(A)) # scalar +``` + + + + + + + + + + + + + +
+`matrix` + +A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`, +or `complex128` with shape `[..., M, M]`. +
+`name` + +A name to give this `Op`. Defaults to `logdet`. +
+ + + + + + + + + + + +
+The natural log of the determinant of `matrix`. +
+ + + + +#### Numpy Compatibility +Equivalent to numpy.linalg.slogdet, although no sign is returned since only +hermitian positive definite matrices are supported. + diff --git a/site/en/api_docs/python/tf/linalg/logm.md b/site/en/api_docs/python/tf/linalg/logm.md new file mode 100644 index 00000000000..1d30fd4e383 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/logm.md @@ -0,0 +1,93 @@ +description: Computes the matrix logarithm of one or more square matrices: + +
+ + +
+ +# tf.linalg.logm + + + + + + + + + +Computes the matrix logarithm of one or more square matrices: + + + + + + + + + + +\\(log(exp(A)) = A\\) + +This op is only defined for complex matrices. If A is positive-definite and +real, then casting to a complex matrix, taking the logarithm and casting back +to a real matrix will give the correct result. + +This function computes the matrix logarithm using the Schur-Parlett algorithm. +Details of the algorithm can be found in Section 11.6.2 of: +Nicholas J. Higham, Functions of Matrices: Theory and Computation, SIAM 2008. +ISBN 978-0-898716-46-7. + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a tensor of the same shape as the input +containing the exponential for all input submatrices `[..., :, :]`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
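#### Example

An illustrative sketch, assuming a real positive-definite matrix that is cast to a complex dtype as described above; the result can be checked with `tf.linalg.expm`.

```python
import tensorflow as tf

# Illustrative values only: a real positive-definite matrix cast to complex,
# since tf.linalg.logm is defined for complex inputs.
a = tf.constant([[2., 0.],
                 [0., 3.]])
log_a = tf.linalg.logm(tf.cast(a, tf.complex64))
# tf.linalg.expm(log_a) is approximately equal to the original (complex-cast) matrix.
a_roundtrip = tf.linalg.expm(log_a)
```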
+ diff --git a/site/en/api_docs/python/tf/linalg/lstsq.md b/site/en/api_docs/python/tf/linalg/lstsq.md new file mode 100644 index 00000000000..cf9b2cc852d --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/lstsq.md @@ -0,0 +1,161 @@ +description: Solves one or more linear least-squares problems. + +
+ + +
+ +# tf.linalg.lstsq + + + + + + + + + +Solves one or more linear least-squares problems. + + + + + + + + + +`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions +form `M`-by-`N` matrices. Rhs is a tensor of shape `[..., M, K]` whose +inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a +`Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `M`-by-`K` +matrices that solve the equations +`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares +sense. + +Below we will use the following notation for each pair of matrix and +right-hand sides in the batch: + +`matrix`=\\(A \in \Re^{m \times n}\\), +`rhs`=\\(B \in \Re^{m \times k}\\), +`output`=\\(X \in \Re^{n \times k}\\), +`l2_regularizer`=\\(\lambda\\). + +If `fast` is `True`, then the solution is computed by solving the normal +equations using Cholesky decomposition. Specifically, if \\(m \ge n\\) then +\\(X = (A^T A + \lambda I)^{-1} A^T B\\), which solves the least-squares +problem \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 + +\lambda ||Z||_F^2\\). If \\(m \lt n\\) then `output` is computed as +\\(X = A^T (A A^T + \lambda I)^{-1} B\\), which (for \\(\lambda = 0\\)) is +the minimum-norm solution to the under-determined linear system, i.e. +\\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2 \\), subject to +\\(A Z = B\\). Notice that the fast path is only numerically stable when +\\(A\\) is numerically full rank and has a condition number +\\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\\) or\\(\lambda\\) +is sufficiently large. + +If `fast` is `False` an algorithm based on the numerically robust complete +orthogonal decomposition is used. This computes the minimum-norm +least-squares solution, even when \\(A\\) is rank deficient. This path is +typically 6-7 times slower than the fast path. If `fast` is `False` then +`l2_regularizer` is ignored. + + + + + + + + + + + + + + + + + + + + + + +
+`matrix` + +`Tensor` of shape `[..., M, N]`. +
+`rhs` + +`Tensor` of shape `[..., M, K]`. +
+`l2_regularizer` + +0-D `double` `Tensor`. Ignored if `fast=False`. +
+`fast` + +bool. Defaults to `True`. +
+`name` + +string, optional name of the operation. +
+ + + + + + + + + + + + +
+`output` + +`Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form +`N`-by-`K` matrices that solve the equations +`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least +squares sense. +
+ + + + + + + + + + + + +
+`NotImplementedError` + +linalg.lstsq is currently disabled for complex128 +and l2_regularizer != 0 due to poor accuracy. +
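#### Example

An illustrative sketch with arbitrary example values, solving an overdetermined system (three equations, two unknowns) in the least-squares sense.

```python
import tensorflow as tf

matrix = tf.constant([[1., 1.],
                      [1., 2.],
                      [1., 3.]])           # shape [3, 2]
rhs = tf.constant([[6.], [0.], [0.]])      # shape [3, 1]
# x minimizes ||matrix @ x - rhs|| over all 2x1 vectors.
x = tf.linalg.lstsq(matrix, rhs, l2_regularizer=0.0)   # shape [2, 1]
```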
+ diff --git a/site/en/api_docs/python/tf/linalg/lu.md b/site/en/api_docs/python/tf/linalg/lu.md new file mode 100644 index 00000000000..424e90c0bce --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/lu.md @@ -0,0 +1,117 @@ +description: Computes the LU decomposition of one or more square matrices. + +
+ + +
+ +# tf.linalg.lu + + + + + + + + + +Computes the LU decomposition of one or more square matrices. + + + + + + + + + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. + +The input has to be invertible. + +The output consists of two tensors LU and P containing the LU decomposition +of all input submatrices `[..., :, :]`. LU encodes the lower triangular and +upper triangular factors. + +For each input submatrix of shape `[M, M]`, L is a lower triangular matrix of +shape `[M, M]` with unit diagonal whose entries correspond to the strictly lower +triangular part of LU. U is a upper triangular matrix of shape `[M, M]` whose +entries correspond to the upper triangular part, including the diagonal, of LU. + +P represents a permutation matrix encoded as a list of indices each between `0` +and `M-1`, inclusive. If P_mat denotes the permutation matrix corresponding to +P, then the L, U and P satisfies P_mat * input = L * U. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +A tensor of shape `[..., M, M]` whose inner-most 2 dimensions form matrices of +size `[M, M]`. +
+`output_idx_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (lu, p). +
+`lu` + +A `Tensor`. Has the same type as `input`. +
+`p` + +A `Tensor` of type `output_idx_type`. +
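#### Example

An illustrative sketch with arbitrary example values; the factors can be checked by rebuilding the original matrix with `tf.linalg.lu_reconstruct`.

```python
import tensorflow as tf

x = tf.constant([[4., 3.],
                 [6., 3.]])
lu, p = tf.linalg.lu(x)
# Reconstructing from the factors recovers x (up to rounding error).
x_reconstructed = tf.linalg.lu_reconstruct(lu, p)
```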
+ diff --git a/site/en/api_docs/python/tf/linalg/lu_matrix_inverse.md b/site/en/api_docs/python/tf/linalg/lu_matrix_inverse.md new file mode 100644 index 00000000000..11d6f51db08 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/lu_matrix_inverse.md @@ -0,0 +1,130 @@ +description: Computes the inverse given the LU decomposition(s) of one or more matrices. + +
+ + +
+ +# tf.linalg.lu_matrix_inverse + + + + + + + + + +Computes the inverse given the LU decomposition(s) of one or more matrices. + + + + + + + + + +This op is conceptually identical to, + +```python +inv_X = tf.lu_matrix_inverse(*tf.linalg.lu(X)) +tf.assert_near(tf.matrix_inverse(X), inv_X) +# ==> True +``` + +Note: this function does not verify the implied matrix is actually invertible +nor is this condition checked even when `validate_args=True`. + + + + + + + + + + + + + + + + + + + +
+`lower_upper` + +`lu` as returned by tf.linalg.lu, i.e., if `matmul(P, +matmul(L, U)) = X` then `lower_upper = L + U - eye`. +
+`perm` + +`p` as returned by `tf.linalg.lu`, i.e., if `matmul(P, matmul(L, U)) = +X` then `perm = argmax(P)`. +
+`validate_args` + +Python `bool` indicating whether arguments should be checked +for correctness. Note: this function does not verify the implied matrix is +actually invertible, even when `validate_args=True`. +Default value: `False` (i.e., don't validate arguments). +
+`name` + +Python `str` name given to ops managed by this object. +Default value: `None` (i.e., 'lu_matrix_inverse'). +
+ + + + + + + + + + + + +
+`inv_x` + +The matrix_inv, i.e., +`tf.matrix_inverse(tf.linalg.lu_reconstruct(lu, perm))`. +
+ + +#### Examples + +```python +import numpy as np +import tensorflow as tf +import tensorflow_probability as tfp + +x = [[[3., 4], [1, 2]], + [[7., 8], [3, 4]]] +inv_x = tf.linalg.lu_matrix_inverse(*tf.linalg.lu(x)) +tf.assert_near(tf.matrix_inverse(x), inv_x) +# ==> True +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/linalg/lu_reconstruct.md b/site/en/api_docs/python/tf/linalg/lu_reconstruct.md new file mode 100644 index 00000000000..6ed289406f2 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/lu_reconstruct.md @@ -0,0 +1,119 @@ +description: The reconstruct one or more matrices from their LU decomposition(s). + +
+ + +
+ +# tf.linalg.lu_reconstruct + + + + + + + + + +The reconstruct one or more matrices from their LU decomposition(s). + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`lower_upper` + +`lu` as returned by tf.linalg.lu, i.e., if `matmul(P, +matmul(L, U)) = X` then `lower_upper = L + U - eye`. +
+`perm` + +`p` as returned by `tf.linalg.lu`, i.e., if `matmul(P, matmul(L, U)) = +X` then `perm = argmax(P)`. +
+`validate_args` + +Python `bool` indicating whether arguments should be checked +for correctness. +Default value: `False` (i.e., don't validate arguments). +
+`name` + +Python `str` name given to ops managed by this object. +Default value: `None` (i.e., 'lu_reconstruct'). +
+ + + + + + + + + + + + +
+`x` + +The original input to tf.linalg.lu, i.e., `x` as in, +`lu_reconstruct(*tf.linalg.lu(x))`. +
+ + +#### Examples + +```python +import numpy as np +import tensorflow as tf +import tensorflow_probability as tfp + +x = [[[3., 4], [1, 2]], + [[7., 8], [3, 4]]] +x_reconstructed = tf.linalg.lu_reconstruct(*tf.linalg.lu(x)) +tf.assert_near(x, x_reconstructed) +# ==> True +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/linalg/lu_solve.md b/site/en/api_docs/python/tf/linalg/lu_solve.md new file mode 100644 index 00000000000..357e409b5ba --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/lu_solve.md @@ -0,0 +1,132 @@ +description: Solves systems of linear eqns A X = RHS, given LU factorizations. + +
+ + +
+ +# tf.linalg.lu_solve + + + + + + + + + +Solves systems of linear eqns `A X = RHS`, given LU factorizations. + + + + + + + + + +Note: this function does not verify the implied matrix is actually invertible +nor is this condition checked even when `validate_args=True`. + + + + + + + + + + + + + + + + + + + + + + +
+`lower_upper` + +`lu` as returned by tf.linalg.lu, i.e., if `matmul(P, +matmul(L, U)) = X` then `lower_upper = L + U - eye`. +
+`perm` + +`p` as returned by `tf.linalg.lu`, i.e., if `matmul(P, matmul(L, U)) = +X` then `perm = argmax(P)`. +
+`rhs` + +Matrix-shaped float `Tensor` representing targets for which to solve; +`A X = RHS`. To handle vector cases, use: `lu_solve(..., rhs[..., +tf.newaxis])[..., 0]`. +
+`validate_args` + +Python `bool` indicating whether arguments should be checked +for correctness. Note: this function does not verify the implied matrix is +actually invertible, even when `validate_args=True`. +Default value: `False` (i.e., don't validate arguments). +
+`name` + +Python `str` name given to ops managed by this object. +Default value: `None` (i.e., 'lu_solve'). +
+ + + + + + + + + + + + +
+`x` + +The `X` in `A @ X = RHS`. +
+ + +#### Examples + +```python +import numpy as np +import tensorflow as tf +import tensorflow_probability as tfp + +x = [[[1., 2], + [3, 4]], + [[7, 8], + [3, 4]]] +inv_x = tf.linalg.lu_solve(*tf.linalg.lu(x), rhs=tf.eye(2)) +tf.assert_near(tf.matrix_inverse(x), inv_x) +# ==> True +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/linalg/matmul.md b/site/en/api_docs/python/tf/linalg/matmul.md new file mode 100644 index 00000000000..f13e6e6ee97 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/matmul.md @@ -0,0 +1,251 @@ +description: Multiplies matrix a by matrix b, producing a * b. + +
+ + +
+ +# tf.linalg.matmul + + + + + + + + + +Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + + + + + + + + + +The inputs must, following any transpositions, be tensors of rank >= 2 +where the inner 2 dimensions specify valid matrix multiplication dimensions, +and any further outer dimensions specify matching batch size. + +Both matrices must be of the same type. The supported types are: +`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`. + +Either matrix can be transposed or adjointed (conjugated and transposed) on +the fly by setting one of the corresponding flag to `True`. These are `False` +by default. + +If one or both of the matrices contain a lot of zeros, a more efficient +multiplication algorithm can be used by setting the corresponding +`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. +This optimization is only available for plain matrices (rank-2 tensors) with +datatypes `bfloat16` or `float32`. + +A simple 2-D tensor matrix multiplication: + +``` +>>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) +>>> a # 2-D tensor + +>>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) +>>> b # 2-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +A batch matrix multiplication with batch shape [2]: + +``` +>>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) +>>> a # 3-D tensor + +>>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) +>>> b # 3-D tensor + +>>> c = tf.matmul(a, b) +>>> c # `a` * `b` + +``` + +Since python >= 3.5 the @ operator is supported +(see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow, +it simply calls the tf.matmul() function, so the following lines are +equivalent: + +``` +>>> d = a @ b @ [[10], [11]] +>>> d = tf.matmul(tf.matmul(a, b), [[10], [11]]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +tf.Tensor of type `float16`, `float32`, `float64`, `int32`, +`complex64`, `complex128` and rank > 1. +
+`b` + +tf.Tensor with same type and rank as `a`. +
+`transpose_a` + +If `True`, `a` is transposed before multiplication. +
+`transpose_b` + +If `True`, `b` is transposed before multiplication. +
+`adjoint_a` + +If `True`, `a` is conjugated and transposed before +multiplication. +
+`adjoint_b` + +If `True`, `b` is conjugated and transposed before +multiplication. +
+`a_is_sparse` + +If `True`, `a` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `a` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`b_is_sparse` + +If `True`, `b` is treated as a sparse matrix. Notice, this +**does not support tf.sparse.SparseTensor**, it just makes optimizations +that assume most values in `b` are zero. +See tf.sparse.sparse_dense_matmul +for some support for tf.SparseTensor multiplication. +
+`name` + +Name for the operation (optional). +
+ + + + + + + + + + + + + + +
+A tf.Tensor of the same type as `a` and `b` where each inner-most matrix +is the product of the corresponding matrices in `a` and `b`, e.g. if all +transpose or adjoint attributes are `False`: + +`output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, +for all indices `i`, `j`. +
+`Note` + +This is matrix product, not element-wise product. +
+ + + + + + + + + + + + +
+`ValueError` + +If `transpose_a` and `adjoint_a`, or `transpose_b` and +`adjoint_b` are both set to `True`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/matrix_rank.md b/site/en/api_docs/python/tf/linalg/matrix_rank.md new file mode 100644 index 00000000000..3b9a0f8fac9 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/matrix_rank.md @@ -0,0 +1,105 @@ +description: Compute the matrix rank of one or more matrices. + +
+ + +
+ +# tf.linalg.matrix_rank + + + + + + + + + +Compute the matrix rank of one or more matrices. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +(Batch of) `float`-like matrix-shaped `Tensor`(s) whose rank is to be +computed. +
+`tol` + +Threshold below which the singular value is counted as 'zero'. +Default value: `None` (i.e., `eps * max(rows, cols) * max(singular_val)`). +
+`validate_args` + +When `True`, additional assertions might be embedded in the +graph. +Default value: `False` (i.e., no graph assertions are added). +
+`name` + +Python `str` prefixed to ops created by this function. +Default value: 'matrix_rank'. +
+ + + + + + + + + + + + +
+`matrix_rank` + +(Batch of) `int32` scalars representing the number of non-zero +singular values. +
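#### Example

An illustrative sketch with arbitrary example values, comparing a full-rank identity matrix against a rank-deficient matrix whose second row is a multiple of the first.

```python
import tensorflow as tf

full_rank = tf.eye(3)
deficient = tf.constant([[1., 2.],
                         [2., 4.]])
tf.linalg.matrix_rank(full_rank)   # => 3
tf.linalg.matrix_rank(deficient)   # => 1
```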
+ diff --git a/site/en/api_docs/python/tf/linalg/matrix_transpose.md b/site/en/api_docs/python/tf/linalg/matrix_transpose.md new file mode 100644 index 00000000000..5c4c7c1d617 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/matrix_transpose.md @@ -0,0 +1,150 @@ +description: Transposes last two dimensions of tensor a. + +
+ + +
+ +# tf.linalg.matrix_transpose + + + + + + + + + +Transposes last two dimensions of tensor `a`. + + + + + + + + + + +#### For example: + + + +```python +x = tf.constant([[1, 2, 3], [4, 5, 6]]) +tf.linalg.matrix_transpose(x) # [[1, 4], + # [2, 5], + # [3, 6]] + +x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j], + [4 + 4j, 5 + 5j, 6 + 6j]]) +tf.linalg.matrix_transpose(x, conjugate=True) # [[1 - 1j, 4 - 4j], + # [2 - 2j, 5 - 5j], + # [3 - 3j, 6 - 6j]] + +# Matrix with two batch dimensions. +# x.shape is [1, 2, 3, 4] +# tf.linalg.matrix_transpose(x) is shape [1, 2, 4, 3] +``` + +Note that tf.matmul provides kwargs allowing for transpose of arguments. +This is done with minimal cost, and is preferable to using this function. E.g. + +```python +# Good! Transpose is taken at minimal additional cost. +tf.matmul(matrix, b, transpose_b=True) + +# Inefficient! +tf.matmul(matrix, tf.linalg.matrix_transpose(b)) +``` + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor` with `rank >= 2`. +
+`name` + +A name for the operation (optional). +
+`conjugate` + +Optional bool. Setting it to `True` is mathematically equivalent +to tf.math.conj(tf.linalg.matrix_transpose(input)). +
+ + + + + + + + + + + +
+A transposed batch matrix `Tensor`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `a` is determined statically to have `rank < 2`. +
+ + + +#### Numpy Compatibility +In `numpy` transposes are memory-efficient constant time operations as they +simply return a new view of the same data with adjusted `strides`. + +TensorFlow does not support strides, linalg.matrix_transpose returns a new +tensor with the items permuted. + diff --git a/site/en/api_docs/python/tf/linalg/matvec.md b/site/en/api_docs/python/tf/linalg/matvec.md new file mode 100644 index 00000000000..f2ce8606455 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/matvec.md @@ -0,0 +1,203 @@ +description: Multiplies matrix a by vector b, producing a * b. + +
+ + +
+ +# tf.linalg.matvec + + + + + + + + + +Multiplies matrix `a` by vector `b`, producing `a` * `b`. + + + + + + + + + +The matrix `a` must, following any transpositions, be a tensor of rank >= 2, +with `shape(a)[-1] == shape(b)[-1]`, and `shape(a)[:-2]` able to broadcast +with `shape(b)[:-1]`. + +Both `a` and `b` must be of the same type. The supported types are: +`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`. + +Matrix `a` can be transposed or adjointed (conjugated and transposed) on +the fly by setting one of the corresponding flag to `True`. These are `False` +by default. + +If one or both of the inputs contain a lot of zeros, a more efficient +multiplication algorithm can be used by setting the corresponding +`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. +This optimization is only available for plain matrices/vectors (rank-2/1 +tensors) with datatypes `bfloat16` or `float32`. + +#### For example: + + + +```python +# 2-D tensor `a` +# [[1, 2, 3], +# [4, 5, 6]] +a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) + +# 1-D tensor `b` +# [7, 9, 11] +b = tf.constant([7, 9, 11], shape=[3]) + +# `a` * `b` +# [ 58, 64] +c = tf.linalg.matvec(a, b) + + +# 3-D tensor `a` +# [[[ 1, 2, 3], +# [ 4, 5, 6]], +# [[ 7, 8, 9], +# [10, 11, 12]]] +a = tf.constant(np.arange(1, 13, dtype=np.int32), + shape=[2, 2, 3]) + +# 2-D tensor `b` +# [[13, 14, 15], +# [16, 17, 18]] +b = tf.constant(np.arange(13, 19, dtype=np.int32), + shape=[2, 3]) + +# `a` * `b` +# [[ 86, 212], +# [410, 563]] +c = tf.linalg.matvec(a, b) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +`Tensor` of type `float16`, `float32`, `float64`, `int32`, `complex64`, +`complex128` and rank > 1. +
+`b` + +`Tensor` with same type as `a` and compatible dimensions. +
+`transpose_a` + +If `True`, `a` is transposed before multiplication. +
+`adjoint_a` + +If `True`, `a` is conjugated and transposed before +multiplication. +
+`a_is_sparse` + +If `True`, `a` is treated as a sparse matrix. +
+`b_is_sparse` + +If `True`, `b` is treated as a sparse matrix. +
+`name` + +Name for the operation (optional). +
+ + + + + + + + + + + + + + +
+A `Tensor` of the same type as `a` and `b` where each inner-most vector is +the product of the corresponding matrices in `a` and vectors in `b`, e.g. if +all transpose or adjoint attributes are `False`: + +`output`[..., i] = sum_k (`a`[..., i, k] * `b`[..., k]), for all indices i. +
+`Note` + +This is matrix-vector product, not element-wise product. +
+ + + + + + + + + + + + +
+`ValueError` + +If transpose_a and adjoint_a are both set to True. +
+ diff --git a/site/en/api_docs/python/tf/linalg/normalize.md b/site/en/api_docs/python/tf/linalg/normalize.md new file mode 100644 index 00000000000..ccf9dc5c483 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/normalize.md @@ -0,0 +1,149 @@ +description: Normalizes tensor along dimension axis using specified norm. + +
+ + +
+ +# tf.linalg.normalize + + + + + + + + + +Normalizes `tensor` along dimension `axis` using specified norm. + + + + + + + + + +This uses tf.linalg.norm to compute the norm along `axis`. + +This function can compute several different vector norms (the 1-norm, the +Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and +matrix norms (Frobenius, 1-norm, 2-norm and inf-norm). + + + + + + + + + + + + + + + + + + + +
+`tensor` + +`Tensor` of types `float32`, `float64`, `complex64`, `complex128` +
+`ord` + +Order of the norm. Supported values are `'fro'`, `'euclidean'`, `1`, +`2`, `np.inf` and any positive real number yielding the corresponding +p-norm. Default is `'euclidean'` which is equivalent to Frobenius norm if +`tensor` is a matrix and equivalent to 2-norm for vectors. +Some restrictions apply: a) The Frobenius norm `'fro'` is not defined for +vectors, b) If axis is a 2-tuple (matrix norm), only `'euclidean'`, +`'fro'`, `1`, `2`, `np.inf` are supported. See the description of `axis` +on how to compute norms for a batch of vectors or matrices stored in a +tensor. +
+`axis` + +If `axis` is `None` (the default), the input is considered a vector +and a single vector norm is computed over the entire set of values in the +tensor, i.e. `norm(tensor, ord=ord)` is equivalent to +`norm(reshape(tensor, [-1]), ord=ord)`. If `axis` is a Python integer, the +input is considered a batch of vectors, and `axis` determines the axis in +`tensor` over which to compute vector norms. If `axis` is a 2-tuple of +Python integers it is considered a batch of matrices and `axis` determines +the axes in `tensor` over which to compute a matrix norm. +Negative indices are supported. Example: If you are passing a tensor that +can be either a matrix or a batch of matrices at runtime, pass +`axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are +computed. +
+`name` + +The name of the op. +
+ + + + + + + + + + + + + + + +
+`normalized` + +A normalized `Tensor` with the same shape as `tensor`. +
+`norm` + +The computed norms with the same shape and dtype as `tensor` but the +final axis is 1 instead. Same as running +`tf.cast(tf.linalg.norm(tensor, ord, axis, keepdims=True), tensor.dtype)`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `ord` or `axis` is invalid. +
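#### Example

An illustrative sketch with arbitrary example values, normalizing a single vector and then each row of a matrix.

```python
import tensorflow as tf

v = tf.constant([3., 4.])
# 2-norm over the whole vector: v_norm == [5.], v_unit == [0.6, 0.8].
v_unit, v_norm = tf.linalg.normalize(v)

m = tf.constant([[3., 4.],
                 [6., 8.]])
# Per-row 2-norms: row_norms == [[5.], [10.]]; each row of rows_unit has unit length.
rows_unit, row_norms = tf.linalg.normalize(m, axis=1)
```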
+ diff --git a/site/en/api_docs/python/tf/linalg/pinv.md b/site/en/api_docs/python/tf/linalg/pinv.md new file mode 100644 index 00000000000..638e4630173 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/pinv.md @@ -0,0 +1,175 @@ +description: Compute the Moore-Penrose pseudo-inverse of one or more matrices. + +
+ + +
+ +# tf.linalg.pinv + + + + + + + + + +Compute the Moore-Penrose pseudo-inverse of one or more matrices. + + + + + + + + + +Calculate the [generalized inverse of a matrix]( +https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) using its +singular-value decomposition (SVD) and including all large singular values. + +The pseudo-inverse of a matrix `A`, is defined as: 'the matrix that 'solves' +[the least-squares problem] `A @ x = b`,' i.e., if `x_hat` is a solution, then +`A_pinv` is the matrix such that `x_hat = A_pinv @ b`. It can be shown that if +`U @ Sigma @ V.T = A` is the singular value decomposition of `A`, then +`A_pinv = V @ inv(Sigma) U^T`. [(Strang, 1980)][1] + +This function is analogous to [`numpy.linalg.pinv`]( +https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.pinv.html). +It differs only in default value of `rcond`. In `numpy.linalg.pinv`, the +default `rcond` is `1e-15`. Here the default is +`10. * max(num_rows, num_cols) * np.finfo(dtype).eps`. + + + + + + + + + + + + + + + + + + + +
+`a` + +(Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be +pseudo-inverted. +
+`rcond` + +`Tensor` of small singular value cutoffs. Singular values smaller +(in modulus) than `rcond` * largest_singular_value (again, in modulus) are +set to zero. Must broadcast against `tf.shape(a)[:-2]`. +Default value: `10. * max(num_rows, num_cols) * np.finfo(a.dtype).eps`. +
+`validate_args` + +When `True`, additional assertions might be embedded in the +graph. +Default value: `False` (i.e., no graph assertions are added). +
+`name` + +Python `str` prefixed to ops created by this function. +Default value: 'pinv'. +
+ + + + + + + + + + + + +
+`a_pinv` + +(Batch of) pseudo-inverse of input `a`. Has same shape as `a` except +rightmost two dimensions are transposed. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if input `a` does not have `float`-like `dtype`. +
+`ValueError` + +if input `a` has fewer than 2 dimensions. +
+ + +#### Examples + +```python +import tensorflow as tf +import tensorflow_probability as tfp + +a = tf.constant([[1., 0.4, 0.5], + [0.4, 0.2, 0.25], + [0.5, 0.25, 0.35]]) +tf.matmul(tf.linalg..pinv(a), a) +# ==> array([[1., 0., 0.], + [0., 1., 0.], + [0., 0., 1.]], dtype=float32) + +a = tf.constant([[1., 0.4, 0.5, 1.], + [0.4, 0.2, 0.25, 2.], + [0.5, 0.25, 0.35, 3.]]) +tf.matmul(tf.linalg..pinv(a), a) +# ==> array([[ 0.76, 0.37, 0.21, -0.02], + [ 0.37, 0.43, -0.33, 0.02], + [ 0.21, -0.33, 0.81, 0.01], + [-0.02, 0.02, 0.01, 1. ]], dtype=float32) +``` + +#### References + +[1]: G. Strang. 'Linear Algebra and Its Applications, 2nd Ed.' Academic Press, + Inc., 1980, pp. 139-142. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/linalg/qr.md b/site/en/api_docs/python/tf/linalg/qr.md new file mode 100644 index 00000000000..7416254f7e1 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/qr.md @@ -0,0 +1,112 @@ +description: Computes the QR decompositions of one or more matrices. + +
+ + +
+ +# tf.linalg.qr + + + + + + + + + +Computes the QR decompositions of one or more matrices. + + + + + + + + + +Computes the QR decomposition of each inner matrix in `tensor` such that +`tensor[..., :, :] = q[..., :, :] * r[..., :,:])` + +```python +# a is a tensor. +# q is a tensor of orthonormal matrices. +# r is a tensor of upper triangular matrices. +q, r = qr(a) +q_full, r_full = qr(a, full_matrices=True) +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +A tensor of shape `[..., M, N]` whose inner-most 2 dimensions +form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`. +
+`full_matrices` + +An optional `bool`. Defaults to `False`. +If true, compute full-sized `q` and `r`. If false +(the default), compute only the leading `P` columns of `q`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (q, r). +
+`q` + +A `Tensor`. Has the same type as `input`. +
+`r` + +A `Tensor`. Has the same type as `input`. +
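#### Example

An illustrative sketch using a randomly generated matrix; the factors can be checked by multiplying them back together.

```python
import tensorflow as tf

a = tf.random.normal([4, 3])
q, r = tf.linalg.qr(a)
# q has orthonormal columns and r is upper triangular;
# q @ r is approximately equal to a.
a_reconstructed = tf.matmul(q, r)
```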
+ diff --git a/site/en/api_docs/python/tf/linalg/set_diag.md b/site/en/api_docs/python/tf/linalg/set_diag.md new file mode 100644 index 00000000000..0698e58bae5 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/set_diag.md @@ -0,0 +1,205 @@ +description: Returns a batched matrix tensor with new batched diagonal values. + +
+ + +
+ +# tf.linalg.set_diag + + + + + + + + + +Returns a batched matrix tensor with new batched diagonal values. + + + + + + + + + +Given `input` and `diagonal`, this operation returns a tensor with the +same shape and values as `input`, except for the specified diagonals of the +innermost matrices. These will be overwritten by the values in `diagonal`. + +`input` has `r+1` dimensions `[I, J, ..., L, M, N]`. When `k` is scalar or +`k[0] == k[1]`, `diagonal` has `r` dimensions `[I, J, ..., L, max_diag_len]`. +Otherwise, it has `r+1` dimensions `[I, J, ..., L, num_diags, max_diag_len]`. +`num_diags` is the number of diagonals, `num_diags = k[1] - k[0] + 1`. +`max_diag_len` is the longest diagonal in the range `[k[0], k[1]]`, +`max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))` + +The output is a tensor of rank `k+1` with dimensions `[I, J, ..., L, M, N]`. +If `k` is scalar or `k[0] == k[1]`: + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1] + input[i, j, ..., l, m, n] ; otherwise +``` + +Otherwise, + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1] + input[i, j, ..., l, m, n] ; otherwise +``` +where `d = n - m`, `diag_index = k[1] - d`, and +`index_in_diag = n - max(d, 0) + offset`. + +`offset` is zero except when the alignment of the diagonal is to the right. +``` +offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT} + and `d >= 0`) or + (`align` in {LEFT_RIGHT, RIGHT_RIGHT} + and `d <= 0`) + 0 ; otherwise +``` +where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`. + +#### For example: + + + +``` +# The main diagonal. +input = np.array([[[7, 7, 7, 7], # Input shape: (2, 3, 4) + [7, 7, 7, 7], + [7, 7, 7, 7]], + [[7, 7, 7, 7], + [7, 7, 7, 7], + [7, 7, 7, 7]]]) +diagonal = np.array([[1, 2, 3], # Diagonal shape: (2, 3) + [4, 5, 6]]) +tf.matrix_set_diag(input, diagonal) + ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4) + [7, 2, 7, 7], + [7, 7, 3, 7]], + [[4, 7, 7, 7], + [7, 5, 7, 7], + [7, 7, 6, 7]]] + +# A superdiagonal (per batch). +tf.matrix_set_diag(input, diagonal, k = 1) + ==> [[[7, 1, 7, 7], # Output shape: (2, 3, 4) + [7, 7, 2, 7], + [7, 7, 7, 3]], + [[7, 4, 7, 7], + [7, 7, 5, 7], + [7, 7, 7, 6]]] + +# A band of diagonals. +diagonals = np.array([[[9, 1, 0], # Diagonal shape: (2, 4, 3) + [6, 5, 8], + [1, 2, 3], + [0, 4, 5]], + [[1, 2, 0], + [5, 6, 4], + [6, 1, 2], + [0, 3, 4]]]) +tf.matrix_set_diag(input, diagonals, k = (-1, 2)) + ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4) + [4, 2, 5, 1], + [7, 5, 3, 8]], + [[6, 5, 1, 7], + [3, 1, 6, 2], + [7, 4, 2, 4]]] + +# RIGHT_LEFT alignment. +diagonals = np.array([[[0, 9, 1], # Diagonal shape: (2, 4, 3) + [6, 5, 8], + [1, 2, 3], + [4, 5, 0]], + [[0, 1, 2], + [5, 6, 4], + [6, 1, 2], + [3, 4, 0]]]) +tf.matrix_set_diag(input, diagonals, k = (-1, 2), align="RIGHT_LEFT") + ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4) + [4, 2, 5, 1], + [7, 5, 3, 8]], + [[6, 5, 1, 7], + [3, 1, 6, 2], + [7, 4, 2, 4]]] + +``` + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` with rank `k + 1`, where `k >= 1`. +
+`diagonal` + +A `Tensor` with rank `k`, when `d_lower == d_upper`, or `k + 1`, +otherwise. `k >= 1`. +
+`name` + +A name for the operation (optional). +
+`k` + +Diagonal offset(s). Positive value means superdiagonal, 0 refers to the +main diagonal, and negative value means subdiagonals. `k` can be a single +integer (for a single diagonal) or a pair of integers specifying the low +and high ends of a matrix band. `k[0]` must not be larger than `k[1]`. +
+`align` + +Some diagonals are shorter than `max_diag_len` and need to be padded. +`align` is a string specifying how superdiagonals and subdiagonals should +be aligned, respectively. There are four possible alignments: "RIGHT_LEFT" +(default), "LEFT_RIGHT", "LEFT_LEFT", and "RIGHT_RIGHT". "RIGHT_LEFT" +aligns superdiagonals to the right (left-pads the row) and subdiagonals to +the left (right-pads the row). It is the packing format LAPACK uses. +cuSPARSE uses "LEFT_RIGHT", which is the opposite alignment. +
+ diff --git a/site/en/api_docs/python/tf/linalg/slogdet.md b/site/en/api_docs/python/tf/linalg/slogdet.md new file mode 100644 index 00000000000..753acf16fe3 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/slogdet.md @@ -0,0 +1,101 @@ +description: Computes the sign and the log of the absolute value of the determinant of + +
+ + +
+ +# tf.linalg.slogdet + + + + + + + + + +Computes the sign and the log of the absolute value of the determinant of + + + + + + + + + +one or more square matrices. + +The input is a tensor of shape `[N, M, M]` whose inner-most 2 dimensions +form square matrices. The outputs are two tensors containing the signs and +absolute values of the log determinants for all N input submatrices +`[..., :, :]` such that the determinant = sign*exp(log_abs_determinant). +The log_abs_determinant is computed as det(P)*sum(log(diag(LU))) where LU +is the LU decomposition of the input and P is the corresponding +permutation matrix. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +Shape is `[N, M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sign, log_abs_determinant). +
+`sign` + +A `Tensor`. Has the same type as `input`. +
+`log_abs_determinant` + +A `Tensor`. Has the same type as `input`. +
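#### Example

An illustrative sketch with arbitrary example values, recovering the determinant from the returned sign and log-magnitude.

```python
import tensorflow as tf

x = tf.constant([[[1., 2.],
                  [3., 4.]]])            # shape [1, 2, 2], determinant == -2
sign, log_abs_det = tf.linalg.slogdet(x)
det = sign * tf.exp(log_abs_det)         # approximately [-2.]
```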
+ diff --git a/site/en/api_docs/python/tf/linalg/solve.md b/site/en/api_docs/python/tf/linalg/solve.md new file mode 100644 index 00000000000..05acb4167f3 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/solve.md @@ -0,0 +1,101 @@ +description: Solves systems of linear equations. + +
+ + +
+ +# tf.linalg.solve + + + + + + + + + +Solves systems of linear equations. + + + + + + + + + +`Matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. `Rhs` is a tensor of shape `[..., M, K]`. The `output` is +a tensor shape `[..., M, K]`. If `adjoint` is `False` then each output matrix +satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. +If `adjoint` is `True` then each output matrix satisfies +`adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`. + + + + + + + + + + + + + + + + + + + +
+`matrix` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`rhs` + +A `Tensor`. Must have the same type as `matrix`. +Shape is `[..., M, K]`. +
+`adjoint` + +An optional `bool`. Defaults to `False`. +Boolean indicating whether to solve with `matrix` or its (block-wise) +adjoint. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `matrix`. +
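#### Example

An illustrative sketch solving a small 2x2 system with arbitrary example values.

```python
import tensorflow as tf

# Solve  3x + y = 9  and  x + 2y = 8.
matrix = tf.constant([[3., 1.],
                      [1., 2.]])
rhs = tf.constant([[9.],
                   [8.]])
x = tf.linalg.solve(matrix, rhs)   # => [[2.], [3.]]
# matrix @ x is approximately equal to rhs.
```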
+ diff --git a/site/en/api_docs/python/tf/linalg/sqrtm.md b/site/en/api_docs/python/tf/linalg/sqrtm.md new file mode 100644 index 00000000000..9211826346a --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/sqrtm.md @@ -0,0 +1,96 @@ +description: Computes the matrix square root of one or more square matrices: + +
+ + +
+ +# tf.linalg.sqrtm + + + + + + + + + +Computes the matrix square root of one or more square matrices: + + + + + + + + + +matmul(sqrtm(A), sqrtm(A)) = A + +The input matrix should be invertible. If the input matrix is real, it should +have no eigenvalues which are real and negative (pairs of complex conjugate +eigenvalues are allowed). + +The matrix square root is computed by first reducing the matrix to +quasi-triangular form with the real Schur decomposition. The square root +of the quasi-triangular matrix is then computed directly. Details of +the algorithm can be found in: Nicholas J. Higham, "Computing real +square roots of a real matrix", Linear Algebra Appl., 1987. + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a tensor of the same shape as the input +containing the matrix square root for all input submatrices `[..., :, :]`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
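#### Example

An illustrative sketch using a diagonal positive-definite matrix, for which the square root is easy to verify by hand.

```python
import tensorflow as tf

a = tf.constant([[4., 0.],
                 [0., 9.]])
sqrt_a = tf.linalg.sqrtm(a)        # => [[2., 0.], [0., 3.]]
# tf.matmul(sqrt_a, sqrt_a) is approximately equal to a.
```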
+ diff --git a/site/en/api_docs/python/tf/linalg/svd.md b/site/en/api_docs/python/tf/linalg/svd.md new file mode 100644 index 00000000000..9de477f8d5e --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/svd.md @@ -0,0 +1,160 @@ +description: Computes the singular value decompositions of one or more matrices. + +
+ + +
+ +# tf.linalg.svd + + + + + + + + + +Computes the singular value decompositions of one or more matrices. + + + + + + + + + +Computes the SVD of each inner matrix in `tensor` such that +`tensor[..., :, :] = u[..., :, :] * diag(s[..., :, :]) * + transpose(conj(v[..., :, :]))` + +```python +# a is a tensor. +# s is a tensor of singular values. +# u is a tensor of left singular vectors. +# v is a tensor of right singular vectors. +s, u, v = svd(a) +s = svd(a, compute_uv=False) +``` + + + + + + + + + + + + + + + + + + + +
+`tensor` + +`Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and +`N`. +
+`full_matrices` + +If true, compute full-sized `u` and `v`. If false +(the default), compute only the leading `P` singular vectors. +Ignored if `compute_uv` is `False`. +
+`compute_uv` + +If `True` then left and right singular vectors will be +computed and returned in `u` and `v`, respectively. Otherwise, only the +singular values will be computed, which can be significantly faster. +
+`name` + +string, optional name of the operation. +
+ + + + + + + + + + + + + + + + + + +
+`s` + +Singular values. Shape is `[..., P]`. The values are sorted in reverse +order of magnitude, so s[..., 0] is the largest value, s[..., 1] is the +second largest, etc. +
+`u` + +Left singular vectors. If `full_matrices` is `False` (default) then +shape is `[..., M, P]`; if `full_matrices` is `True` then shape is +`[..., M, M]`. Not returned if `compute_uv` is `False`. +
+`v` + +Right singular vectors. If `full_matrices` is `False` (default) then +shape is `[..., N, P]`. If `full_matrices` is `True` then shape is +`[..., N, N]`. Not returned if `compute_uv` is `False`. +
+ + + + +#### Numpy Compatibility +Mostly equivalent to numpy.linalg.svd, except that + * The order of output arguments here is `s`, `u`, `v` when `compute_uv` is + `True`, as opposed to `u`, `s`, `v` for numpy.linalg.svd. + * full_matrices is `False` by default as opposed to `True` for + numpy.linalg.svd. + * tf.linalg.svd uses the standard definition of the SVD + \\(A = U \Sigma V^H\\), such that the left singular vectors of `a` are + the columns of `u`, while the right singular vectors of `a` are the + columns of `v`. On the other hand, numpy.linalg.svd returns the adjoint + \\(V^H\\) as the third output argument. +```python +import tensorflow as tf +import numpy as np +s, u, v = tf.linalg.svd(a) +tf_a_approx = tf.matmul(u, tf.matmul(tf.linalg.diag(s), v, adjoint_b=True)) +u, s, v_adj = np.linalg.svd(a, full_matrices=False) +np_a_approx = np.dot(u, np.dot(np.diag(s), v_adj)) +# tf_a_approx and np_a_approx should be numerically close. +``` + diff --git a/site/en/api_docs/python/tf/linalg/tensor_diag.md b/site/en/api_docs/python/tf/linalg/tensor_diag.md new file mode 100644 index 00000000000..23f8fd8e610 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/tensor_diag.md @@ -0,0 +1,97 @@ +description: Returns a diagonal tensor with a given diagonal values. + +
+ + +
+ +# tf.linalg.tensor_diag + + + + + + + + + +Returns a diagonal tensor with a given diagonal values. + + + + + + + + + +Given a `diagonal`, this operation returns a tensor with the `diagonal` and +everything else padded with zeros. The diagonal is computed as follows: + +Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of +rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where: + +`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else. + +#### For example: + + + +``` +# 'diagonal' is [1, 2, 3, 4] +tf.diag(diagonal) ==> [[1, 0, 0, 0] + [0, 2, 0, 0] + [0, 0, 3, 0] + [0, 0, 0, 4]] +``` + + + + + + + + + + + + + +
+`diagonal` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +Rank k tensor where k is at most 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `diagonal`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/tensor_diag_part.md b/site/en/api_docs/python/tf/linalg/tensor_diag_part.md new file mode 100644 index 00000000000..579bb5abe24 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/tensor_diag_part.md @@ -0,0 +1,98 @@ +description: Returns the diagonal part of the tensor. + +
+ + +
+ +# tf.linalg.tensor_diag_part + + + + + + + + + +Returns the diagonal part of the tensor. + + + + + + + + + +This operation returns a tensor with the `diagonal` part +of the `input`. The `diagonal` part is computed as follows: + +Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a +tensor of rank `k` with dimensions `[D1,..., Dk]` where: + +`diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`. + +#### For example: + + + +``` +# 'input' is [[1, 0, 0, 0] + [0, 2, 0, 0] + [0, 0, 3, 0] + [0, 0, 0, 4]] + +tf.diag_part(input) ==> [1, 2, 3, 4] +``` + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +Rank k tensor where k is even and not zero. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/trace.md b/site/en/api_docs/python/tf/linalg/trace.md new file mode 100644 index 00000000000..52a76081186 --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/trace.md @@ -0,0 +1,109 @@ +description: Compute the trace of a tensor x. + +
+ + +
+ +# tf.linalg.trace + + + + + + + + + +Compute the trace of a tensor `x`. + + + + + + + + + +`trace(x)` returns the sum along the main diagonal of each inner-most matrix +in x. If x is of rank `k` with shape `[I, J, K, ..., L, M, N]`, then output +is a tensor of rank `k-2` with dimensions `[I, J, K, ..., L]` where + +`output[i, j, k, ..., l] = trace(x[i, j, i, ..., l, :, :])` + +#### For example: + + + +```python +x = tf.constant([[1, 2], [3, 4]]) +tf.linalg.trace(x) # 5 + +x = tf.constant([[1, 2, 3], + [4, 5, 6], + [7, 8, 9]]) +tf.linalg.trace(x) # 15 + +x = tf.constant([[[1, 2, 3], + [4, 5, 6], + [7, 8, 9]], + [[-1, -2, -3], + [-4, -5, -6], + [-7, -8, -9]]]) +tf.linalg.trace(x) # [15, -15] +``` + + + + + + + + + + + + + +
+`x` + +tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The trace of input tensor. +
+ diff --git a/site/en/api_docs/python/tf/linalg/triangular_solve.md b/site/en/api_docs/python/tf/linalg/triangular_solve.md new file mode 100644 index 00000000000..5844b310b0a --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/triangular_solve.md @@ -0,0 +1,148 @@ +description: Solve systems of linear equations with upper or lower triangular matrices. + +
+ + +
+ +# tf.linalg.triangular_solve + + + + + + + + + +Solve systems of linear equations with upper or lower triangular matrices. + + + + + + + + + +`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form +square matrices. If `lower` is `True` then the strictly upper triangular part +of each inner-most matrix is assumed to be zero and not accessed. If `lower` +is `False` then the strictly lower triangular part of each inner-most matrix +is assumed to be zero and not accessed. `rhs` is a tensor of shape +`[..., M, N]`. + +The output is a tensor of shape `[..., M, N]`. If `adjoint` is `True` then the +innermost matrices in output satisfy matrix equations ` +sum_k matrix[..., i, k] * output[..., k, j] = rhs[..., i, j]`. +If `adjoint` is `False` then the +innermost matrices in output satisfy matrix equations +`sum_k adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`. + +#### Example: + + + +``` +>>> a = tf.constant([[3, 0, 0, 0], +... [2, 1, 0, 0], +... [1, 0, 1, 0], +... [1, 1, 1, 1]], dtype=tf.float32) +``` + +``` +>>> b = tf.constant([[4], [2], [4], [2]], dtype=tf.float32) +>>> x = tf.linalg.triangular_solve(a, b, lower=True) +>>> x + +>>> tf.matmul(a, x) + +``` + + + + + + + + + + + + + + + + + + + + + + +
+`matrix` + +A `Tensor`. Must be one of the following types: `float64`, +`float32`, `half`, `complex64`, `complex128`. Shape is `[..., M, M]`. +
+`rhs` + +A `Tensor`. Must have the same type as `matrix`. Shape is `[..., M, +N]`. +
+`lower` + +An optional `bool`. Defaults to `True`. Boolean indicating whether +the innermost matrices in matrix are lower or upper triangular. +
+`adjoint` + +An optional `bool`. Defaults to `False`. Boolean indicating whether +to solve with matrix or its (block-wise) adjoint. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as matrix, and shape is `[..., M, N]`. +
+ diff --git a/site/en/api_docs/python/tf/linalg/tridiagonal_matmul.md b/site/en/api_docs/python/tf/linalg/tridiagonal_matmul.md new file mode 100644 index 00000000000..1990d16b30e --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/tridiagonal_matmul.md @@ -0,0 +1,148 @@ +description: Multiplies tridiagonal matrix by matrix. + +
+ + +
+ +# tf.linalg.tridiagonal_matmul + + + + + + + + + +Multiplies tridiagonal matrix by matrix. + + + + + + + + + +`diagonals` is representation of 3-diagonal NxN matrix, which depends on +`diagonals_format`. + +In `matrix` format, `diagonals` must be a tensor of shape `[..., M, M]`, with +two inner-most dimensions representing the square tridiagonal matrices. +Elements outside of the three diagonals will be ignored. + +If `sequence` format, `diagonals` is list or tuple of three tensors: +`[superdiag, maindiag, subdiag]`, each having shape [..., M]. Last element +of `superdiag` first element of `subdiag` are ignored. + +In `compact` format the three diagonals are brought together into one tensor +of shape `[..., 3, M]`, with last two dimensions containing superdiagonals, +diagonals, and subdiagonals, in order. Similarly to `sequence` format, +elements `diagonals[..., 0, M-1]` and `diagonals[..., 2, 0]` are ignored. + +The `sequence` format is recommended as the one with the best performance. + +`rhs` is matrix to the right of multiplication. It has shape `[..., M, N]`. + +#### Example: + + + +```python +superdiag = tf.constant([-1, -1, 0], dtype=tf.float64) +maindiag = tf.constant([2, 2, 2], dtype=tf.float64) +subdiag = tf.constant([0, -1, -1], dtype=tf.float64) +diagonals = [superdiag, maindiag, subdiag] +rhs = tf.constant([[1, 1], [1, 1], [1, 1]], dtype=tf.float64) +x = tf.linalg.tridiagonal_matmul(diagonals, rhs, diagonals_format='sequence') +``` + + + + + + + + + + + + + + + + + + + +
+`diagonals` + +A `Tensor` or tuple of `Tensor`s describing left-hand sides. The +shape depends of `diagonals_format`, see description above. Must be +`float32`, `float64`, `complex64`, or `complex128`. +
+`rhs` + +A `Tensor` of shape [..., M, N] and with the same dtype as `diagonals`. +
+`diagonals_format` + +one of `sequence`, or `compact`. Default is `compact`. +
+`name` + +A name to give this `Op` (optional). +
+ + + + + + + + + + + +
+A `Tensor` of shape [..., M, N] containing the result of multiplication. +
+ + + + + + + + + + + + +
+`ValueError` + +An unsupported type is provided as input, or when the input +tensors have incorrect shapes. +
+ diff --git a/site/en/api_docs/python/tf/linalg/tridiagonal_solve.md b/site/en/api_docs/python/tf/linalg/tridiagonal_solve.md new file mode 100644 index 00000000000..3a00a4eba3a --- /dev/null +++ b/site/en/api_docs/python/tf/linalg/tridiagonal_solve.md @@ -0,0 +1,205 @@ +description: Solves tridiagonal systems of equations. + +
+ + +
+ +# tf.linalg.tridiagonal_solve + + + + + + + + + +Solves tridiagonal systems of equations. + + + + + + + + + +The input can be supplied in various formats: `matrix`, `sequence` and +`compact`, specified by the `diagonals_format` arg. + +In `matrix` format, `diagonals` must be a tensor of shape `[..., M, M]`, with +two inner-most dimensions representing the square tridiagonal matrices. +Elements outside of the three diagonals will be ignored. + +In `sequence` format, `diagonals` are supplied as a tuple or list of three +tensors of shapes `[..., N]`, `[..., M]`, `[..., N]` representing +superdiagonals, diagonals, and subdiagonals, respectively. `N` can be either +`M-1` or `M`; in the latter case, the last element of superdiagonal and the +first element of subdiagonal will be ignored. + +In `compact` format the three diagonals are brought together into one tensor +of shape `[..., 3, M]`, with last two dimensions containing superdiagonals, +diagonals, and subdiagonals, in order. Similarly to `sequence` format, +elements `diagonals[..., 0, M-1]` and `diagonals[..., 2, 0]` are ignored. + +The `compact` format is recommended as the one with best performance. In case +you need to cast a tensor into a compact format manually, use tf.gather_nd. +An example for a tensor of shape [m, m]: + +```python +rhs = tf.constant([...]) +matrix = tf.constant([[...]]) +m = matrix.shape[0] +dummy_idx = [0, 0] # An arbitrary element to use as a dummy +indices = [[[i, i + 1] for i in range(m - 1)] + [dummy_idx], # Superdiagonal + [[i, i] for i in range(m)], # Diagonal + [dummy_idx] + [[i + 1, i] for i in range(m - 1)]] # Subdiagonal +diagonals=tf.gather_nd(matrix, indices) +x = tf.linalg.tridiagonal_solve(diagonals, rhs) +``` + +Regardless of the `diagonals_format`, `rhs` is a tensor of shape `[..., M]` or +`[..., M, K]`. The latter allows to simultaneously solve K systems with the +same left-hand sides and K different right-hand sides. If `transpose_rhs` +is set to `True` the expected shape is `[..., M]` or `[..., K, M]`. + +The batch dimensions, denoted as `...`, must be the same in `diagonals` and +`rhs`. + +The output is a tensor of the same shape as `rhs`: either `[..., M]` or +`[..., M, K]`. + +The op isn't guaranteed to raise an error if the input matrix is not +invertible. tf.debugging.check_numerics can be applied to the output to +detect invertibility problems. + +**Note**: with large batch sizes, the computation on the GPU may be slow, if +either `partial_pivoting=True` or there are multiple right-hand sides +(`K > 1`). If this issue arises, consider if it's possible to disable pivoting +and have `K = 1`, or, alternatively, consider using CPU. + +On CPU, solution is computed via Gaussian elimination with or without partial +pivoting, depending on `partial_pivoting` parameter. On GPU, Nvidia's cuSPARSE +library is used: https://docs.nvidia.com/cuda/cusparse/index.html#gtsv + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`diagonals` + +A `Tensor` or tuple of `Tensor`s describing left-hand sides. The +shape depends of `diagonals_format`, see description above. Must be +`float32`, `float64`, `complex64`, or `complex128`. +
+`rhs` + +A `Tensor` of shape [..., M] or [..., M, K] and with the same dtype as +`diagonals`. Note that if the shape of `rhs` and/or `diags` isn't known +statically, `rhs` will be treated as a matrix rather than a vector. +
+`diagonals_format` + +one of `matrix`, `sequence`, or `compact`. Default is +`compact`. +
+`transpose_rhs` + +If `True`, `rhs` is transposed before solving (has no effect +if the shape of rhs is [..., M]). +
+`conjugate_rhs` + +If `True`, `rhs` is conjugated before solving. +
+`name` + +A name to give this `Op` (optional). +
+`partial_pivoting` + +whether to perform partial pivoting. `True` by default. +Partial pivoting makes the procedure more stable, but slower. Partial +pivoting is unnecessary in some cases, including diagonally dominant and +symmetric positive definite matrices (see e.g. theorem 9.12 in [1]). +
+ + + + + + + + + + + +
+A `Tensor` of shape [..., M] or [..., M, K] containing the solutions. +
+ + + + + + + + + + + + +
+`ValueError` + +An unsupported type is provided as input, or when the input +tensors have incorrect shapes. +
+ + +[1] Nicholas J. Higham (2002). Accuracy and Stability of Numerical Algorithms: +Second Edition. SIAM. p. 175. ISBN 978-0-89871-802-7. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/linspace.md b/site/en/api_docs/python/tf/linspace.md new file mode 100644 index 00000000000..4641e728237 --- /dev/null +++ b/site/en/api_docs/python/tf/linspace.md @@ -0,0 +1,105 @@ +description: Generates values in an interval. + +
+ + +
+ +# tf.linspace + + + + + + + + + +Generates values in an interval. + + + + + + + + + +A sequence of `num` evenly-spaced values are generated beginning at `start`. +If `num > 1`, the values in the sequence increase by `stop - start / num - 1`, +so that the last one is exactly `stop`. + +#### For example: + + + +``` +tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0] +``` + + + + + + + + + + + + + + + + + + + +
+`start` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +0-D tensor. First entry in the range. +
+`stop` + +A `Tensor`. Must have the same type as `start`. +0-D tensor. Last entry in the range. +
+`num` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +0-D tensor. Number of values to generate. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `start`. +
+ diff --git a/site/en/api_docs/python/tf/lite.md b/site/en/api_docs/python/tf/lite.md new file mode 100644 index 00000000000..b30d6323643 --- /dev/null +++ b/site/en/api_docs/python/tf/lite.md @@ -0,0 +1,39 @@ +description: Public API for tf.lite namespace. + +
+ + +
+ +# Module: tf.lite + + + + + + + + + +Public API for tf.lite namespace. + + + +## Modules + +[`experimental`](../tf/lite/experimental.md) module: Public API for tf.lite.experimental namespace. + +## Classes + +[`class Interpreter`](../tf/lite/Interpreter.md): Interpreter interface for TensorFlow Lite Models. + +[`class OpsSet`](../tf/lite/OpsSet.md): Enum class defining the sets of ops available to generate TFLite models. + +[`class Optimize`](../tf/lite/Optimize.md): Enum defining the optimizations to apply when generating tflite graphs. + +[`class RepresentativeDataset`](../tf/lite/RepresentativeDataset.md): Representative dataset to evaluate optimizations. + +[`class TFLiteConverter`](../tf/lite/TFLiteConverter.md): Converts a TensorFlow model into TensorFlow Lite model. + +[`class TargetSpec`](../tf/lite/TargetSpec.md): Specification of target device. + diff --git a/site/en/api_docs/python/tf/lite/Interpreter.md b/site/en/api_docs/python/tf/lite/Interpreter.md new file mode 100644 index 00000000000..4d6d155bce9 --- /dev/null +++ b/site/en/api_docs/python/tf/lite/Interpreter.md @@ -0,0 +1,497 @@ +description: Interpreter interface for TensorFlow Lite Models. + +
+ + + + + + + + + + + + + +
+ +# tf.lite.Interpreter + + + + + + + + + +Interpreter interface for TensorFlow Lite Models. + + + + + + + + + +This makes the TensorFlow Lite interpreter accessible in Python. +It is possible to use this interpreter in a multithreaded Python environment, +but you must be sure to call functions of a particular instance from only +one thread at a time. So if you want to have 4 threads running different +inferences simultaneously, create an interpreter for each one as thread-local +data. Similarly, if you are calling invoke() in one thread on a single +interpreter but you want to use tensor() on another thread once it is done, +you must use a synchronization primitive between the threads to ensure invoke +has returned before calling tensor(). + + + + + + + + + + + + + + + + +
+`model_path` + +Path to TF-Lite Flatbuffer file. +
+`model_content` + +Content of model. +
+`experimental_delegates` + +Experimental. Subject to change. List of +[TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates) +objects returned by lite.load_delegate(). +
+ + + + + + + + + + + + +
+`ValueError` + +If the interpreter was unable to create. +
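#### Example

An illustrative end-to-end sketch of a typical inference flow using the methods documented below; `"model.tflite"` is a placeholder path and the zero-filled input stands in for real data.

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed an input of the expected shape/dtype, run inference, then read the output.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
```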
+ + + +## Methods + +

allocate_tensors

+ +View source + + + + + + +

get_input_details

+ +View source + + + +Gets model input details. + + + + + + + + + + +
Returns
+A list of input details. +
+ + + +

get_output_details

+ +View source + + + +Gets model output details. + + + + + + + + + + +
Returns
+A list of output details. +
+ + + +

get_tensor

+ +View source + + + +Gets the value of the input tensor (get a copy). + +If you wish to avoid the copy, use `tensor()`. This function cannot be used +to read intermediate results. + + + + + + + + + + +
Args
+`tensor_index` + +Tensor index of tensor to get. This value can be gotten from +the 'index' field in get_output_details. +
+ + + + + + + + + + + +
Returns
+a numpy array. +
+ + + +

get_tensor_details

+ +View source + + + +Gets tensor details for every tensor with valid tensor details. + +Tensors where required information about the tensor is not found are not +added to the list. This includes temporary tensors without a name. + + + + + + + + + +
Returns
+A list of dictionaries containing tensor information. +
+ + + +

invoke

+ +View source + + + +Invoke the interpreter. + +Be sure to set the input sizes, allocate tensors and fill values before +calling this. Also, note that this function releases the GIL so heavy +computation can be done in the background while the Python interpreter +continues. No other function on this object should be called while the +invoke() call has not finished. + + + + + + + + + + +
Raises
+ +`ValueError` + +When the underlying interpreter fails. +
+ + + +
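+As a minimal end-to-end sketch (assuming a flatbuffer at the placeholder path "model.tflite" with a single input and a single output), the typical call sequence is: allocate tensors, set the input, invoke, then read the output:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
+interpreter.allocate_tensors()
+
+input_details = interpreter.get_input_details()
+output_details = interpreter.get_output_details()
+
+# Random data shaped and typed to match the model's declared input.
+input_data = np.random.random_sample(input_details[0]["shape"]).astype(
+    input_details[0]["dtype"])
+interpreter.set_tensor(input_details[0]["index"], input_data)
+
+interpreter.invoke()
+print(interpreter.get_tensor(output_details[0]["index"]))
+```
+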

reset_all_variables

+ +View source + + + + + + +

resize_tensor_input

+ +View source + + + +Resizes an input tensor. + + + + + + + + + + + + + + +
Args
+`input_index` + +Tensor index of input to set. This value can be gotten from +the 'index' field in get_input_details. +
+`tensor_size` + +The tensor_shape to resize the input to. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the interpreter could not resize the input tensor. +
+ + + +
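+For example, a minimal sketch of resizing the first input to a larger batch; the shape `[4, 224, 224, 3]` is illustrative and assumes the model tolerates a different batch size, and tensors must be re-allocated after any resize:
+
+```python
+import tensorflow as tf
+
+interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
+input_index = interpreter.get_input_details()[0]["index"]
+interpreter.resize_tensor_input(input_index, [4, 224, 224, 3])
+interpreter.allocate_tensors()  # Re-allocate after resizing.
+```
+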

set_tensor

+ +View source + + + +Sets the value of the input tensor. Note this copies data in `value`. + +If you want to avoid copying, you can use the `tensor()` function to get a +numpy buffer pointing to the input buffer in the tflite interpreter. + + + + + + + + + + + + + +
Args
+`tensor_index` + +Tensor index of tensor to set. This value can be gotten from +the 'index' field in get_input_details. +
+`value` + +Value of tensor to set. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the interpreter could not set the tensor. +
+ + + +

tensor

+ +View source + + + +Returns a function that gives a numpy view of the current tensor buffer. + +This allows reading and writing to this tensor without copies. This more closely mirrors the C++ Interpreter class interface's tensor() member, hence the name. Be careful to not hold these output references through calls to `allocate_tensors()` and `invoke()`. This function cannot be used to read intermediate results. + +#### Usage: + + + +``` +interpreter.allocate_tensors() +input = interpreter.tensor(interpreter.get_input_details()[0]["index"]) +output = interpreter.tensor(interpreter.get_output_details()[0]["index"]) +for i in range(10): + input().fill(3.) + interpreter.invoke() + print("inference %s" % output()) +``` + +Notice how this function avoids making a numpy array directly. This is because it is important to not hold actual numpy views to the data longer than necessary. If you do, then the interpreter can no longer be invoked, because it is possible the interpreter would resize and invalidate the referenced tensors. The NumPy API doesn't allow any mutability of the underlying buffers. + +#### WRONG: + + + +``` +input = interpreter.tensor(interpreter.get_input_details()[0]["index"])() +output = interpreter.tensor(interpreter.get_output_details()[0]["index"])() +interpreter.allocate_tensors() # This will throw RuntimeError +for i in range(10): + input.fill(3.) + interpreter.invoke() # this will also throw RuntimeError since the input and output references are no longer valid +``` + + + + + + + + + +
Args
+`tensor_index` + +Tensor index of tensor to get. This value can be gotten from +the 'index' field in get_output_details. +
+ + + + + + + + + + + +
Returns
+A function that can return a new numpy array pointing to the internal +TFLite tensor state at any point. It is safe to hold the function forever, +but it is not safe to hold the numpy array forever. +
+ + + + + diff --git a/site/en/api_docs/python/tf/lite/OpsSet.md b/site/en/api_docs/python/tf/lite/OpsSet.md new file mode 100644 index 00000000000..19d90ea8651 --- /dev/null +++ b/site/en/api_docs/python/tf/lite/OpsSet.md @@ -0,0 +1,47 @@ +description: Enum class defining the sets of ops available to generate TFLite models. + +
+ + + + + +
+ +# tf.lite.OpsSet + + + + + + + + + +Enum class defining the sets of ops available to generate TFLite models. + + + + + +WARNING: Experimental interface, subject to change. + +## Class Variables + +* `SELECT_TF_OPS` +* `TFLITE_BUILTINS` +* `TFLITE_BUILTINS_INT8` diff --git a/site/en/api_docs/python/tf/lite/Optimize.md b/site/en/api_docs/python/tf/lite/Optimize.md new file mode 100644 index 00000000000..b5029f1e350 --- /dev/null +++ b/site/en/api_docs/python/tf/lite/Optimize.md @@ -0,0 +1,47 @@ +description: Enum defining the optimizations to apply when generating tflite graphs. + +
+ + + + + +
+ +# tf.lite.Optimize + + + + + + + + + +Enum defining the optimizations to apply when generating tflite graphs. + + + + + +Some optimizations may come at the cost of accuracy. + +## Class Variables + +* `DEFAULT` +* `OPTIMIZE_FOR_LATENCY` +* `OPTIMIZE_FOR_SIZE` diff --git a/site/en/api_docs/python/tf/lite/RepresentativeDataset.md b/site/en/api_docs/python/tf/lite/RepresentativeDataset.md new file mode 100644 index 00000000000..bd0dc2450db --- /dev/null +++ b/site/en/api_docs/python/tf/lite/RepresentativeDataset.md @@ -0,0 +1,71 @@ +description: Representative dataset to evaluate optimizations. + +
+ + + +
+ +# tf.lite.RepresentativeDataset + + + + + + + + + +Representative dataset to evaluate optimizations. + + + + + + + + + +A representative dataset that can be used to evaluate optimizations by the +converter. E.g. converter can use these examples to estimate (min, max) ranges +by calibrating the model on inputs. This can allow converter to quantize a +converted floating point model. + + + + + + + + + + +
+`input_gen` + +an input generator that can be used to generate input samples +for the model. This must be a callable object that returns an object +that supports the `iter()` protocol (e.g. a generator function). The +elements generated must have same type and shape as inputs to the model. +
+ + + diff --git a/site/en/api_docs/python/tf/lite/TFLiteConverter.md b/site/en/api_docs/python/tf/lite/TFLiteConverter.md new file mode 100644 index 00000000000..da6f1a01a69 --- /dev/null +++ b/site/en/api_docs/python/tf/lite/TFLiteConverter.md @@ -0,0 +1,374 @@ +description: Converts a TensorFlow model into TensorFlow Lite model. + +
+ + + + + + + +
+ +# tf.lite.TFLiteConverter + + + + + + + + + +Converts a TensorFlow model into a TensorFlow Lite model. + + + + + + + + +#### Example usage: + + +```python +# Converting a SavedModel to a TensorFlow Lite model. +converter = lite.TFLiteConverter.from_saved_model(saved_model_dir) +tflite_model = converter.convert() + +# Converting a tf.keras model to a TensorFlow Lite model. +converter = lite.TFLiteConverter.from_keras_model(model) +tflite_model = converter.convert() + +# Converting ConcreteFunctions to a TensorFlow Lite model. +converter = lite.TFLiteConverter.from_concrete_functions([func]) +tflite_model = converter.convert() +``` + + + + + + + + + + + + + +
+`funcs` + +List of TensorFlow ConcreteFunctions. The list should not contain +duplicate elements. +
+`trackable_obj` + +tf.AutoTrackable object associated with `funcs`. A +reference to this object needs to be maintained so that Variables do not +get garbage collected since functions have a weak reference to +Variables. This is only required when the tf.AutoTrackable object is not +maintained by the user (e.g. `from_saved_model`). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`allow_custom_ops` + +Boolean indicating whether to allow custom operations. +When false any unknown operation is an error. When true, custom ops are +created for any op that is unknown. The developer will need to provide +these to the TensorFlow Lite runtime with a custom resolver. +(default False) +
+`target_spec` + +Experimental flag, subject to change. Specification of target +device. +
+`optimizations` + +Experimental flag, subject to change. A list of optimizations +to apply when converting the model. E.g. `[Optimize.DEFAULT]` +
+`representative_dataset` + +A representative dataset that can be used to +generate input and output samples for the model. The converter can use the +dataset to evaluate different optimizations. +
+`experimental_new_converter` + +Experimental flag, subject to change. +Enables MLIR-based conversion instead of TOCO conversion. +
+ + + +## Methods + +

convert

+ +View source + + + +Converts a TensorFlow GraphDef based on instance variables. + + + + + + + + + + +
Returns
+The converted data in serialized format. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +Multiple concrete functions are specified. +Input shape is not specified. +Invalid quantization parameters. +
+ + + +
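+For instance, `convert()` is often combined with the `optimizations` and `representative_dataset` attributes described above for post-training quantization. A minimal sketch, where `saved_model_dir` and the iterable `calibration_inputs` of example input arrays are assumptions rather than part of this API:
+
+```python
+import tensorflow as tf
+
+def representative_data_gen():
+  # `calibration_inputs` is an assumed iterable of numpy arrays matching the
+  # model's input shape and dtype.
+  for sample in calibration_inputs:
+    yield [sample]
+
+converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
+converter.optimizations = [tf.lite.Optimize.DEFAULT]
+converter.representative_dataset = representative_data_gen
+tflite_model = converter.convert()
+
+with open("model.tflite", "wb") as f:
+  f.write(tflite_model)
+```
+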

from_concrete_functions

+ +View source + + + +Creates a TFLiteConverter object from ConcreteFunctions. + + + + + + + + + + + +
Args
+`funcs` + +List of TensorFlow ConcreteFunctions. The list should not contain +duplicate elements. Currently converter can only convert a single +ConcreteFunction. Converting multiple functions is under development. +
+ + + + + + + + + + + +
Returns
+TFLiteConverter object. +
+ + + + + + + + + + + +
Raises
+Invalid input type. +
+ + + +
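+A minimal sketch of obtaining a single ConcreteFunction from a `tf.function` and converting it; the `double` function is purely illustrative:
+
+```python
+import tensorflow as tf
+
+@tf.function(input_signature=[tf.TensorSpec(shape=[1, 4], dtype=tf.float32)])
+def double(x):
+  return 2.0 * x
+
+concrete_func = double.get_concrete_function()
+converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
+tflite_model = converter.convert()
+```
+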

from_keras_model

+ +View source + + + +Creates a TFLiteConverter object from a Keras model. + + + + + + + + + + + +
Args
+`model` + +A tf.keras.Model instance. +
+ + + + + + + + + + + +
Returns
+TFLiteConverter object. +
+ + + +
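+A minimal sketch with a small illustrative Sequential model:
+
+```python
+import tensorflow as tf
+
+model = tf.keras.Sequential(
+    [tf.keras.layers.Dense(units=1, input_shape=[3])])
+converter = tf.lite.TFLiteConverter.from_keras_model(model)
+tflite_model = converter.convert()
+```
+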

from_saved_model

+ +View source + + + +Creates a TFLiteConverter object from a SavedModel directory. + + + + + + + + + + + + + + + + + +
Args
+`saved_model_dir` + +SavedModel directory to convert. +
+`signature_keys` + +List of keys identifying SignatureDef containing inputs and outputs. Elements should not be duplicated. By default the `signatures` attribute of the MetaGraphDef is used. (default saved_model.signatures) +
+`tags` + +Set of tags identifying the MetaGraphDef within the SavedModel to +analyze. All tags in the tag set must be present. (default set(SERVING)) +
+ + + + + + + + + + + +
Returns
+TFLiteConverter object. +
+ + + + + + + + + + + +
Raises
+Invalid signature keys. +
+ + + + + diff --git a/site/en/api_docs/python/tf/lite/TargetSpec.md b/site/en/api_docs/python/tf/lite/TargetSpec.md new file mode 100644 index 00000000000..dc9d4b50fc6 --- /dev/null +++ b/site/en/api_docs/python/tf/lite/TargetSpec.md @@ -0,0 +1,79 @@ +description: Specification of target device. + +
+ + + +
+ +# tf.lite.TargetSpec + + + + + + + + + +Specification of target device. + + + + + + + + + +Details about target device. Converter optimizes the generated model for +specific device. + + + + + + + + + + + + + + + +
+`supported_ops` + +Experimental flag, subject to change. Set of OpsSet options +supported by the device. (default set([OpsSet.TFLITE_BUILTINS])) +
+`supported_types` + +List of types for constant values on the target device. +Supported values are types exported by lite.constants. Frequently, an +optimization choice is driven by the most compact (i.e. smallest) type in +this list (default [constants.FLOAT]) +
+ + + diff --git a/site/en/api_docs/python/tf/lite/experimental.md b/site/en/api_docs/python/tf/lite/experimental.md new file mode 100644 index 00000000000..aa7798dd06e --- /dev/null +++ b/site/en/api_docs/python/tf/lite/experimental.md @@ -0,0 +1,25 @@ +description: Public API for tf.lite.experimental namespace. + +
+ + +
+ +# Module: tf.lite.experimental + + + + + + + + + +Public API for tf.lite.experimental namespace. + + + +## Functions + +[`load_delegate(...)`](../../tf/lite/experimental/load_delegate.md): Returns loaded Delegate object. + diff --git a/site/en/api_docs/python/tf/lite/experimental/load_delegate.md b/site/en/api_docs/python/tf/lite/experimental/load_delegate.md new file mode 100644 index 00000000000..68399c1842d --- /dev/null +++ b/site/en/api_docs/python/tf/lite/experimental/load_delegate.md @@ -0,0 +1,110 @@ +description: Returns loaded Delegate object. + +
+ + +
+ +# tf.lite.experimental.load_delegate + + + + + + + + + +Returns loaded Delegate object. + + + + + + + + + + + + + + + + + + + + + + +
+`library` + +Name of shared library containing the +[TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates). +
+`options` + +Dictionary of options that are required to load the delegate. All +keys and values in the dictionary should be convertible to str. Consult +the documentation of the specific delegate for required and legal options. +(default None) +
+ + + + + + + + + + + +
+Delegate object. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +Delegate failed to load. +
+`RuntimeError` + +If delegate loading is used on unsupported platform. +
+ diff --git a/site/en/api_docs/python/tf/load_library.md b/site/en/api_docs/python/tf/load_library.md new file mode 100644 index 00000000000..4c3af585a17 --- /dev/null +++ b/site/en/api_docs/python/tf/load_library.md @@ -0,0 +1,104 @@ +description: Loads a TensorFlow plugin. + +
+ + +
+ +# tf.load_library + + + + + + + + + +Loads a TensorFlow plugin. + + + + + + + + + +"library_location" can be a path to a specific shared object, or a folder. +If it is a folder, all shared objects that are named "libtfkernel*" will be +loaded. When the library is loaded, kernels registered in the library via the +`REGISTER_*` macros are made available in the TensorFlow process. + + + + + + + + + + +
+`library_location` + +Path to the plugin or the folder of plugins. +Relative or absolute filesystem path to a dynamic library file or folder. +
+ + + + + + + + + + + +
+None +
+ + + + + + + + + + + + + + + +
+`OSError` + +When the file to be loaded is not found. +
+`RuntimeError` + +when unable to load the library. +
+ diff --git a/site/en/api_docs/python/tf/load_op_library.md b/site/en/api_docs/python/tf/load_op_library.md new file mode 100644 index 00000000000..7ae3fbe5ae2 --- /dev/null +++ b/site/en/api_docs/python/tf/load_op_library.md @@ -0,0 +1,101 @@ +description: Loads a TensorFlow plugin, containing custom ops and kernels. + +
+ + +
+ +# tf.load_op_library + + + + + + + + + +Loads a TensorFlow plugin, containing custom ops and kernels. + + + + + + + + + +Pass "library_filename" to a platform-specific mechanism for dynamically +loading a library. The rules for determining the exact location of the +library are platform-specific and are not documented here. When the +library is loaded, ops and kernels registered in the library via the +`REGISTER_*` macros are made available in the TensorFlow process. Note +that ops with the same name as an existing op are rejected and not +registered with the process. + + + + + + + + + + +
+`library_filename` + +Path to the plugin. +Relative or absolute filesystem path to a dynamic library file. +
+ + + + + + + + + + + +
+A python module containing the Python wrappers for Ops defined in +the plugin. +
+ + + + + + + + + + + + +
+`RuntimeError` + +when unable to load the library or get the python wrappers. +
+ diff --git a/site/en/api_docs/python/tf/lookup.md b/site/en/api_docs/python/tf/lookup.md new file mode 100644 index 00000000000..510eee3a63d --- /dev/null +++ b/site/en/api_docs/python/tf/lookup.md @@ -0,0 +1,37 @@ +description: Public API for tf.lookup namespace. + +
+ + +
+ +# Module: tf.lookup + + + + + + + + + +Public API for tf.lookup namespace. + + + +## Modules + +[`experimental`](../tf/lookup/experimental.md) module: Public API for tf.lookup.experimental namespace. + +## Classes + +[`class KeyValueTensorInitializer`](../tf/lookup/KeyValueTensorInitializer.md): Table initializers given `keys` and `values` tensors. + +[`class StaticHashTable`](../tf/lookup/StaticHashTable.md): A generic hash table that is immutable once initialized. + +[`class StaticVocabularyTable`](../tf/lookup/StaticVocabularyTable.md): String to Id table wrapper that assigns out-of-vocabulary keys to buckets. + +[`class TextFileIndex`](../tf/lookup/TextFileIndex.md): The key and value content to get from each line. + +[`class TextFileInitializer`](../tf/lookup/TextFileInitializer.md): Table initializers from a text file. + diff --git a/site/en/api_docs/python/tf/lookup/KeyValueTensorInitializer.md b/site/en/api_docs/python/tf/lookup/KeyValueTensorInitializer.md new file mode 100644 index 00000000000..e83ba748bd4 --- /dev/null +++ b/site/en/api_docs/python/tf/lookup/KeyValueTensorInitializer.md @@ -0,0 +1,185 @@ +description: Table initializers given keys and values tensors. + +
+ + + + +
+ +# tf.lookup.KeyValueTensorInitializer + + + + + + + + + +Table initializers given `keys` and `values` tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`keys` + +The tensor for the keys. +
+`values` + +The tensor for the values. +
+`key_dtype` + +The `keys` data type. Used when `keys` is a python array. +
+`value_dtype` + +The `values` data type. Used when `values` is a python array. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+`key_dtype` + +The expected table key dtype. +
+`value_dtype` + +The expected table value dtype. +
+ + + +## Methods + +

initialize

+ +View source + + + +Initializes the given `table` with `keys` and `values` tensors. + + + + + + + + + + + +
Args
+`table` + +The table to initialize. +
+ + + + + + + + + + + +
Returns
+The operation that initializes the table. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when the keys and values data types do not match the table +key and value data types. +
+ + + + + diff --git a/site/en/api_docs/python/tf/lookup/StaticHashTable.md b/site/en/api_docs/python/tf/lookup/StaticHashTable.md new file mode 100644 index 00000000000..5a1f9704525 --- /dev/null +++ b/site/en/api_docs/python/tf/lookup/StaticHashTable.md @@ -0,0 +1,294 @@ +description: A generic hash table that is immutable once initialized. + +
+ + + + + + +
+ +# tf.lookup.StaticHashTable + + + + + + + + + +A generic hash table that is immutable once initialized. + + + + + + + + +#### Example usage: + + + +```python +keys_tensor = tf.constant([1, 2]) +vals_tensor = tf.constant([3, 4]) +input_tensor = tf.constant([1, 5]) +table = tf.lookup.StaticHashTable( + tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1) +print(table.lookup(input_tensor)) +``` + + + + + + + + + + + + + + + + +
+`initializer` + +The table initializer to use. See `HashTable` kernel for +supported key and value types. +
+`default_value` + +The value to use if a key is missing in the table. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`default_value` + +The default value of the table. +
+`key_dtype` + +The table key dtype. +
+`name` + +The name of the table. +
+`resource_handle` + +Returns the resource handle associated with this Resource. +
+`value_dtype` + +The table value dtype. +
+ + + +## Methods + +

export

+ +View source + + + +Returns tensors of all keys and values in the table. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A pair of tensors with the first tensor containing all keys and the +second tensors containing all values in the table. +
+ + + +
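+For illustration, a minimal sketch that rebuilds the small table from the class example above and dumps its contents; the export order is not necessarily the insertion order:
+
+```python
+import tensorflow as tf
+
+keys_tensor = tf.constant([1, 2])
+vals_tensor = tf.constant([3, 4])
+table = tf.lookup.StaticHashTable(
+    tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1)
+
+exported_keys, exported_values = table.export()
+print(exported_keys.numpy(), exported_values.numpy())
+```
+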

lookup

+ +View source + + + +Looks up `keys` in a table, outputs the corresponding values. + +The `default_value` is used for keys not present in the table. + + + + + + + + + + + + + +
Args
+`keys` + +Keys to look up. May be either a `SparseTensor` or dense `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `SparseTensor` if keys are sparse, otherwise a dense `Tensor`. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when `keys` or `default_value` doesn't match the table data +types. +
+ + + +

size

+ +View source + + + +Compute the number of elements in this table. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A scalar tensor containing the number of elements in this table. +
+ + + + + diff --git a/site/en/api_docs/python/tf/lookup/StaticVocabularyTable.md b/site/en/api_docs/python/tf/lookup/StaticVocabularyTable.md new file mode 100644 index 00000000000..d962a8f3576 --- /dev/null +++ b/site/en/api_docs/python/tf/lookup/StaticVocabularyTable.md @@ -0,0 +1,274 @@ +description: String to Id table wrapper that assigns out-of-vocabulary keys to buckets. + +
+ + + + + +
+ +# tf.lookup.StaticVocabularyTable + + + + + + + + + +String to Id table wrapper that assigns out-of-vocabulary keys to buckets. + + + + + + + +For example, if an instance of `StaticVocabularyTable` is initialized with a string-to-id initializer that maps: + +* `emerson -> 0` +* `lake -> 1` +* `palmer -> 2` + +The `StaticVocabularyTable` object will perform the following mapping: + +* `emerson -> 0` +* `lake -> 1` +* `palmer -> 2` +* `<other term> -> bucket_id`, where bucket_id will be between `3` and `3 + num_oov_buckets - 1`, calculated by: `hash(<term>) % num_oov_buckets + vocab_size` + +If input_tensor is `["emerson", "lake", "palmer", "king", "crimson"]`, the lookup result is `[0, 1, 2, 4, 7]`. + +If `initializer` is None, only out-of-vocabulary buckets are used. + +#### Example usage: + + + +```python +num_oov_buckets = 3 +input_tensor = tf.constant(["emerson", "lake", "palmer", "king", "crimson"]) +table = tf.lookup.StaticVocabularyTable( + tf.lookup.TextFileInitializer( + filename, + key_dtype=tf.string, key_index=tf.lookup.TextFileIndex.WHOLE_LINE, + value_dtype=tf.int64, value_index=tf.lookup.TextFileIndex.LINE_NUMBER, + delimiter="\t"), + num_oov_buckets) +out = table.lookup(input_tensor) +table.init.run() +print(out.eval()) +``` + +The hash function used for generating out-of-vocabulary bucket IDs is Fingerprint64. + + + + + + + + + + + + + + + + + + + +
+`initializer` + +A TableInitializerBase object that contains the data used to +initialize the table. If None, then we only use out-of-vocab buckets. +
+`num_oov_buckets` + +Number of buckets to use for out-of-vocabulary keys. Must +be greater than zero. +
+`lookup_key_dtype` + +Data type of keys passed to `lookup`. Defaults to +`initializer.key_dtype` if `initializer` is specified, otherwise +tf.string. Must be string or integer, and must be castable to +`initializer.key_dtype`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + +
+`ValueError` + +when `num_oov_buckets` is not positive. +
+`TypeError` + +when lookup_key_dtype or initializer.key_dtype are not +integer or string. Also when initializer.value_dtype != int64. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`key_dtype` + +The table key dtype. +
+`name` + +The name of the table. +
+`resource_handle` + +Returns the resource handle associated with this Resource. +
+`value_dtype` + +The table value dtype. +
+ + + +## Methods + +

lookup

+ +View source + + + +Looks up `keys` in the table, outputs the corresponding values. + +It assigns out-of-vocabulary keys to buckets based in their hashes. + + + + + + + + + + + + + +
Args
+`keys` + +Keys to look up. May be either a `SparseTensor` or dense `Tensor`. +
+`name` + +Optional name for the op. +
+ + + + + + + + + + + +
Returns
+A `SparseTensor` if keys are sparse, otherwise a dense `Tensor`. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when `keys` doesn't match the table key data type. +
+ + + +
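+A minimal eager-mode sketch using `tf.lookup.KeyValueTensorInitializer` instead of a vocabulary file, complementing the file-based example above; in-vocabulary strings map to their ids and unknown strings hash into the extra buckets:
+
+```python
+import tensorflow as tf
+
+keys = tf.constant(["emerson", "lake", "palmer"])
+values = tf.constant([0, 1, 2], dtype=tf.int64)
+init = tf.lookup.KeyValueTensorInitializer(keys, values)
+table = tf.lookup.StaticVocabularyTable(init, num_oov_buckets=3)
+
+# Known keys map to 0..2; unknown keys map to a bucket id in 3..5.
+print(table.lookup(tf.constant(["emerson", "king", "crimson"])))
+```
+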

size

+ +View source + + + +Compute the number of elements in this table. + + + + diff --git a/site/en/api_docs/python/tf/lookup/TextFileIndex.md b/site/en/api_docs/python/tf/lookup/TextFileIndex.md new file mode 100644 index 00000000000..90a22788b3d --- /dev/null +++ b/site/en/api_docs/python/tf/lookup/TextFileIndex.md @@ -0,0 +1,55 @@ +description: The key and value content to get from each line. + +
+ + + + +
+ +# tf.lookup.TextFileIndex + + + + + + + + + +The key and value content to get from each line. + + + + + +This class defines the key and value used for tf.lookup.TextFileInitializer. + +The key and value content to get from each line is specified either +by the following, or a value `>=0`. +* TextFileIndex.LINE_NUMBER means use the line number starting from zero, + expects data type int64. +* TextFileIndex.WHOLE_LINE means use the whole line content, expects data + type string. + +A value `>=0` means use the index (starting at zero) of the split line based + on `delimiter`. + +## Class Variables + +* `LINE_NUMBER = -1` +* `WHOLE_LINE = -2` diff --git a/site/en/api_docs/python/tf/lookup/TextFileInitializer.md b/site/en/api_docs/python/tf/lookup/TextFileInitializer.md new file mode 100644 index 00000000000..a90a039cb0b --- /dev/null +++ b/site/en/api_docs/python/tf/lookup/TextFileInitializer.md @@ -0,0 +1,279 @@ +description: Table initializers from a text file. + +
+ + + + +
+ +# tf.lookup.TextFileInitializer + + + + + + + + + +Table initializers from a text file. + + + + + + + + + +This initializer assigns one entry in the table for each line in the file. + +The key and value type of the table to initialize is given by `key_dtype` and +`value_dtype`. + +The key and value content to get from each line is specified by +the `key_index` and `value_index`. + +* TextFileIndex.LINE_NUMBER means use the line number starting from zero, + expects data type int64. +* TextFileIndex.WHOLE_LINE means use the whole line content, expects data + type string. +* A value `>=0` means use the index (starting at zero) of the split line based + on `delimiter`. + +For example if we have a file with the following content: + +``` +emerson 10 +lake 20 +palmer 30 +``` + +The following snippet initializes a table with the first column as keys and +second column as values: + +* `emerson -> 10` +* `lake -> 20` +* `palmer -> 30` + +```python +table = tf.lookup.StaticHashTable(tf.lookup.TextFileInitializer( + "test.txt", tf.string, 0, tf.int64, 1, delimiter=" "), -1) +... +table.init.run() +``` + +Similarly to initialize the whole line as keys and the line number as values. + +* `emerson 10 -> 0` +* `lake 20 -> 1` +* `palmer 30 -> 2` + +```python +table = tf.lookup.StaticHashTable(tf.lookup.TextFileInitializer( + "test.txt", tf.string, tf.lookup.TextFileIndex.WHOLE_LINE, + tf.int64, tf.lookup.TextFileIndex.LINE_NUMBER, delimiter=" "), -1) +... +table.init.run() +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filename` + +The filename of the text file to be used for initialization. The +path must be accessible from wherever the graph is initialized (eg. +trainer or eval workers). The filename may be a scalar `Tensor`. +
+`key_dtype` + +The `key` data type. +
+`key_index` + +the index that represents information of a line to get the +table 'key' values from. +
+`value_dtype` + +The `value` data type. +
+`value_index` + +the index that represents information of a line to get the table 'value' values from. +
+`vocab_size` + +The number of elements in the file, if known. +
+`delimiter` + +The delimiter to separate fields in a line. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+`ValueError` + +when the filename is empty, or when the table key and value +data types do not match the expected data types. +
+ + + + + + + + + + + + + + + + + +
+`key_dtype` + +The expected table key dtype. +
+`value_dtype` + +The expected table value dtype. +
+ + + +## Methods + +

initialize

+ +View source + + + +Initializes the table from a text file. + + + + + + + + + + + +
Args
+`table` + +The table to be initialized. +
+ + + + + + + + + + + +
Returns
+The operation that initializes the table. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when the keys and values data types do not match the table +key and value data types. +
+ + + + + diff --git a/site/en/api_docs/python/tf/lookup/experimental.md b/site/en/api_docs/python/tf/lookup/experimental.md new file mode 100644 index 00000000000..68e156e5f0f --- /dev/null +++ b/site/en/api_docs/python/tf/lookup/experimental.md @@ -0,0 +1,25 @@ +description: Public API for tf.lookup.experimental namespace. + +
+ + +
+ +# Module: tf.lookup.experimental + + + + + + + + + +Public API for tf.lookup.experimental namespace. + + + +## Classes + +[`class DenseHashTable`](../../tf/lookup/experimental/DenseHashTable.md): A generic mutable hash table implementation using tensors as backing store. + diff --git a/site/en/api_docs/python/tf/lookup/experimental/DenseHashTable.md b/site/en/api_docs/python/tf/lookup/experimental/DenseHashTable.md new file mode 100644 index 00000000000..bb190c05a09 --- /dev/null +++ b/site/en/api_docs/python/tf/lookup/experimental/DenseHashTable.md @@ -0,0 +1,667 @@ +description: A generic mutable hash table implementation using tensors as backing store. + +
+ + + + + + + + + + +
+ +# tf.lookup.experimental.DenseHashTable + + + + + + + + + +A generic mutable hash table implementation using tensors as backing store. + + + + + + + + + +Data can be inserted by calling the insert method and removed by calling the remove method. It does not support initialization via the init method. + +It uses "open addressing" with quadratic reprobing to resolve collisions. Compared to `MutableHashTable`, the insert, remove and lookup operations in a `DenseHashTable` are typically faster, but memory usage can be higher. However, `DenseHashTable` does not require additional memory for temporary tensors created during checkpointing and restore operations. + +#### Example usage: + + + +```python +table = tf.lookup.experimental.DenseHashTable(key_dtype=tf.int64, + value_dtype=tf.int64, + default_value=-1, + empty_key=0, + deleted_key=-1) + +keys = tf.constant([11, 12, 13], dtype=tf.int64) +values = tf.constant([21, 22, 23], dtype=tf.int64) +table.insert(keys, values) +out = table.lookup(tf.constant([12, 15], dtype=tf.int64)) +print(out) # value for key 12 and default for the missing key 15: [22 -1] +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key_dtype` + +the type of the key tensors. +
+`value_dtype` + +the type of the value tensors. +
+`default_value` + +The value to use if a key is missing in the table. +
+`empty_key` + +the key to use to represent empty buckets internally. Must not +be used in insert, remove or lookup operations. +
+`deleted_key` + +the key to use to represent deleted buckets internally. Must +not be used in insert, remove or lookup operations and be different from +the empty_key. +
+`initial_num_buckets` + +the initial number of buckets. +
+`name` + +A name for the operation (optional). +
+`checkpoint` + +if True, the contents of the table are saved to and restored +from checkpoints. If `shared_name` is empty for a checkpointed table, it +is shared using the table node name. +
+ + + + + + + + + + + + +
+`ValueError` + +If checkpoint is True and no name was specified. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`key_dtype` + +The table key dtype. +
+`name` + +The name of the table. +
+`resource_handle` + +Returns the resource handle associated with this Resource. +
+`value_dtype` + +The table value dtype. +
+ + + +## Methods + +

erase

+ +View source + + + +Removes `keys` and its associated values from the table. + +If a key is not present in the table, it is silently ignored. + + + + + + + + + + + + + +
Args
+`keys` + +Keys to remove. Can be a tensor of any shape. Must match the table's +key type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when `keys` do not match the table data types. +
+ + + +

export

+ +View source + + + +Returns tensors of all keys and values in the table. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A pair of tensors with the first tensor containing all keys and the +second tensors containing all values in the table. +
+ + + +

insert

+ +View source + + + +Associates `keys` with `values`. + + + + + + + + + + + + + + + + + +
Args
+`keys` + +Keys to insert. Can be a tensor of any shape. Must match the table's +key type. +
+`values` + +Values to be associated with keys. Must be a tensor of the same +shape as `keys` and match the table's value type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when `keys` or `values` doesn't match the table data +types. +
+ + + +

insert_or_assign

+ +View source + + + +Associates `keys` with `values`. + + + + + + + + + + + + + + + + + +
Args
+`keys` + +Keys to insert. Can be a tensor of any shape. Must match the table's +key type. +
+`values` + +Values to be associated with keys. Must be a tensor of the same +shape as `keys` and match the table's value type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when `keys` or `values` doesn't match the table data +types. +
+ + + +

lookup

+ +View source + + + +Looks up `keys` in a table, outputs the corresponding values. + +The `default_value` is used for keys not present in the table. + + + + + + + + + + + + + +
Args
+`keys` + +Keys to look up. Can be a tensor of any shape. Must match the +table's key_dtype. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tensor containing the values in the same shape as `keys` using the +table's value type. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when `keys` do not match the table data types. +
+ + + +

remove

+ +View source + + + +Removes `keys` and its associated values from the table. + +If a key is not present in the table, it is silently ignored. + + + + + + + + + + + + + +
Args
+`keys` + +Keys to remove. Can be a tensor of any shape. Must match the table's +key type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The created Operation. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +when `keys` do not match the table data types. +
+ + + +
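+A minimal eager-mode sketch of insertion followed by removal, with purely illustrative integer keys and values; after `remove`, the deleted key falls back to `default_value`:
+
+```python
+import tensorflow as tf
+
+table = tf.lookup.experimental.DenseHashTable(
+    key_dtype=tf.int64, value_dtype=tf.int64,
+    default_value=-1, empty_key=0, deleted_key=-1)
+
+table.insert(tf.constant([1, 2, 3], dtype=tf.int64),
+             tf.constant([10, 20, 30], dtype=tf.int64))
+table.remove(tf.constant([2], dtype=tf.int64))
+print(table.lookup(tf.constant([1, 2, 3], dtype=tf.int64)))  # [10 -1 30]
+```
+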

size

+ +View source + + + +Compute the number of elements in this table. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A scalar tensor containing the number of elements in this table. +
+ + + + + diff --git a/site/en/api_docs/python/tf/make_ndarray.md b/site/en/api_docs/python/tf/make_ndarray.md new file mode 100644 index 00000000000..00981f38dd4 --- /dev/null +++ b/site/en/api_docs/python/tf/make_ndarray.md @@ -0,0 +1,106 @@ +description: Create a numpy ndarray from a tensor. + +
+ + +
+ +# tf.make_ndarray + + + + + + + + + +Create a numpy ndarray from a tensor. + + + + + + + + + +Create a numpy ndarray with the same shape and data as the tensor. + +#### For example: + + + +```python +# Tensor a has shape (2,3) +a = tf.constant([[1,2,3],[4,5,6]]) +proto_tensor = tf.make_tensor_proto(a) # convert `tensor a` to a proto tensor +tf.make_ndarray(proto_tensor) # output: array([[1, 2, 3], +# [4, 5, 6]], dtype=int32) +# output has shape (2,3) +``` + + + + + + + + + + +
+`tensor` + +A TensorProto. +
+ + + + + + + + + + + +
+A numpy array with the tensor contents. +
+ + + + + + + + + + + + +
+`TypeError` + +if tensor has unsupported type. +
+ diff --git a/site/en/api_docs/python/tf/make_tensor_proto.md b/site/en/api_docs/python/tf/make_tensor_proto.md new file mode 100644 index 00000000000..ef5e821439e --- /dev/null +++ b/site/en/api_docs/python/tf/make_tensor_proto.md @@ -0,0 +1,165 @@ +description: Create a TensorProto. + +
+ + +
+ +# tf.make_tensor_proto + + + + + + + + + +Create a TensorProto. + + + + + + + + + +In TensorFlow 2.0, representing tensors as protos should no longer be a common workflow. That said, this utility function is still useful for generating TF Serving request protos: + +```python + request = tensorflow_serving.apis.predict_pb2.PredictRequest() + request.model_spec.name = "my_model" + request.model_spec.signature_name = "serving_default" + request.inputs["images"].CopyFrom(tf.make_tensor_proto(X_new)) +``` + +`make_tensor_proto` accepts "values" of a python scalar, a python list, a numpy ndarray, or a numpy scalar. + +If "values" is a python scalar or a python list, make_tensor_proto first converts it to a numpy ndarray. If dtype is None, the conversion tries its best to infer the right numpy data type. Otherwise, the resulting numpy array has a data type compatible with the given dtype. + +In either case above, the numpy ndarray (either provided by the caller or auto-converted) must have a type compatible with dtype. + +`make_tensor_proto` then converts the numpy array to a tensor proto. + +If "shape" is None, the resulting tensor proto represents the numpy array precisely. + +Otherwise, "shape" specifies the tensor's shape and the numpy array cannot have more elements than what "shape" specifies. + + + + + + + + + + + + + + + + + + + + + + +
+`values` + +Values to put in the TensorProto. +
+`dtype` + +Optional tensor_pb2 DataType value. +
+`shape` + +List of integers representing the dimensions of tensor. +
+`verify_shape` + +Boolean that enables verification of a shape of values. +
+`allow_broadcast` + +Boolean that enables allowing scalars and 1 length vector +broadcasting. Cannot be true when verify_shape is true. +
+ + + + + + + + + + + +
+A `TensorProto`. Depending on the type, it may contain data in the +"tensor_content" attribute, which is not directly useful to Python programs. +To access the values you should convert the proto back to a numpy ndarray +with tf.make_ndarray(proto). + +If `values` is a `TensorProto`, it is immediately returned; `dtype` and +`shape` are ignored. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if unsupported types are provided. +
+`ValueError` + +if arguments have inappropriate values or if verify_shape is +True and shape of values is not equals to a shape from the argument. +
+ diff --git a/site/en/api_docs/python/tf/map_fn.md b/site/en/api_docs/python/tf/map_fn.md new file mode 100644 index 00000000000..2ed50391a15 --- /dev/null +++ b/site/en/api_docs/python/tf/map_fn.md @@ -0,0 +1,237 @@ +description: map on the list of tensors unpacked from elems on dimension 0. (deprecated argument values) + +
+ + +
+ +# tf.map_fn + + + + + + + + + +map on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values) + + + + + + + +Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(back_prop=False)`. They will be removed in a future version. +Instructions for updating: +back_prop=False is deprecated. Consider using tf.stop_gradient instead. +Instead of: +results = tf.map_fn(fn, elems, back_prop=False) +Use: +results = tf.nest.map_structure(tf.stop_gradient, tf.map_fn(fn, elems)) + +The simplest version of `map_fn` repeatedly applies the callable `fn` to a +sequence of elements from first to last. The elements are made of the +tensors unpacked from `elems`. `dtype` is the data type of the return +value of `fn`. Users must provide `dtype` if it is different from +the data type of `elems`. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is `[values.shape[0]] + fn(values[0]).shape`. + +This method also allows multi-arity `elems` and output of `fn`. If `elems` +is a (possibly nested) list or tuple of tensors, then each of these tensors +must have a matching first (unpack) dimension. The signature of `fn` may +match the structure of `elems`. That is, if `elems` is +`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is: +`fn = lambda (t1, [t2, t3, [t4, t5]]):`. + +Furthermore, `fn` may emit a different structure than its input. For example, +`fn` may look like: `fn = lambda t1: return (t1 + 1, t1 - 1)`. In this case, +the `dtype` parameter is not optional: `dtype` must be a type or (possibly +nested) tuple of types matching the output of `fn`. + +To apply a functional operation to the nonzero elements of a SparseTensor +one of the following methods is recommended. First, if the function is +expressible as TensorFlow ops, use + +```python + result = SparseTensor(input.indices, fn(input.values), input.dense_shape) +``` + +If, however, the function is not expressible as a TensorFlow op, then use + +```python +result = SparseTensor( + input.indices, map_fn(fn, input.values), input.dense_shape) +``` + +instead. + +When executing eagerly, map_fn does not execute in parallel even if +`parallel_iterations` is set to a value > 1. You can still get the +performance benefits of running a function in parallel by using the +tf.function decorator, + +```python +# Assume the function being used in map_fn is fn. +# To ensure map_fn calls fn in parallel, use the tf.function decorator. +@tf.function +def func(tensor): + return tf.map_fn(fn, tensor) +``` + +Note that if you use the tf.function decorator, any non-TensorFlow Python +code that you may have written in your function won't get executed. See +[`tf.function`](https://www.tensorflow.org/api_docs/python/tf/function) for +more details. The recommendation would be to debug without tf.function but +switch to it to get performance benefits of running `map_fn` in parallel. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The callable to be performed. It accepts one argument, which will have +the same (possibly nested) structure as `elems`. Its output must have the +same structure as `dtype` if one is provided, otherwise it must have the +same structure as `elems`. +
+`elems` + +A tensor or (possibly nested) sequence of tensors, each of which will +be unpacked along their first dimension. The nested sequence of the +resulting slices will be applied to `fn`. +
+`dtype` + +(optional) The output type(s) of `fn`. If `fn` returns a structure +of Tensors differing from the structure of `elems`, then `dtype` is not +optional and must have the same structure as the output of `fn`. +
+`parallel_iterations` + +(optional) The number of iterations allowed to run in +parallel. When graph building, the default value is 10. While executing +eagerly, the default value is set to 1. +
+`back_prop` + +(optional) Deprecated. False disables support for back +propagation. Prefer using tf.stop_gradient instead. +
+`swap_memory` + +(optional) True enables GPU-CPU memory swapping. +
+`infer_shape` + +(optional) False disables tests for consistent output shapes. +
+`name` + +(optional) Name prefix for the returned tensors. +
+ + + + + + + + + + + +
+A tensor or (possibly nested) sequence of tensors. Each tensor packs the +results of applying `fn` to tensors unpacked from `elems` along the first +dimension, from first to last. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `fn` is not callable or the structure of the output of +`fn` and `dtype` do not match, or if elems is a SparseTensor. +
+`ValueError` + +if the lengths of the output of `fn` and `dtype` do not match. +
+ + + +#### Examples: + +```python +elems = np.array([1, 2, 3, 4, 5, 6]) +squares = map_fn(lambda x: x * x, elems) +# squares == [1, 4, 9, 16, 25, 36] +``` + +```python +elems = (np.array([1, 2, 3]), np.array([-1, 1, -1])) +alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64) +# alternate == [-1, 2, -3] +``` + +```python +elems = np.array([1, 2, 3]) +alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64)) +# alternates[0] == [1, 2, 3] +# alternates[1] == [-1, -2, -3] +``` diff --git a/site/en/api_docs/python/tf/math.md b/site/en/api_docs/python/tf/math.md new file mode 100644 index 00000000000..2357aef7d77 --- /dev/null +++ b/site/en/api_docs/python/tf/math.md @@ -0,0 +1,335 @@ +description: Math Operations. + +
+ + +
+ +# Module: tf.math + + + + + + + + + +Math Operations. + + +Note: Functions taking `Tensor` arguments can also take anything accepted by tf.convert_to_tensor. + +Note: Elementwise binary operations in TensorFlow follow [numpy-style broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). + +TensorFlow provides a variety of math functions including: + +* Basic arithmetic operators and trigonometric functions. +* Special math functions (like: tf.math.igamma and tf.math.zeta) +* Complex number functions (like: tf.math.imag and tf.math.angle) +* Reductions and scans (like: tf.math.reduce_mean and tf.math.cumsum) +* Segment functions (like: tf.math.segment_sum) + +See: tf.linalg for matrix and tensor functions. + + + +## About Segmentation + +TensorFlow provides several operations that you can use to perform common math computations on tensor segments. Here a segmentation is a partitioning of a tensor along the first dimension, i.e. it defines a mapping from the first dimension onto `segment_ids`. The `segment_ids` tensor should be the size of the first dimension, `d0`, with consecutive IDs in the range `0` to `k`, where `k < d0`. In particular, a segmentation of a matrix tensor is a mapping of rows to segments. + +For example: + +``` python +c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) +tf.math.segment_sum(c, tf.constant([0, 0, 1])) +# ==> [[0 0 0 0] +# [5 6 7 8]] +``` + +The standard `segment_*` functions assert that the segment indices are sorted. If you have unsorted indices use the equivalent `unsorted_segment_` function. These functions take an additional argument `num_segments` so that the output tensor can be efficiently allocated. + +``` python +c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) +tf.math.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2) +# ==> [[ 6, 8, 10, 12], +# [-1, -2, -3, -4]] +``` + +## Modules + +[`special`](../tf/math/special.md) module: Public API for tf.math.special namespace. + +## Functions + +[`abs(...)`](../tf/math/abs.md): Computes the absolute value of a tensor. + +[`accumulate_n(...)`](../tf/math/accumulate_n.md): Returns the element-wise sum of a list of tensors. + +[`acos(...)`](../tf/math/acos.md): Computes acos of x element-wise. + +[`acosh(...)`](../tf/math/acosh.md): Computes inverse hyperbolic cosine of x element-wise. + +[`add(...)`](../tf/math/add.md): Returns x + y element-wise. + +[`add_n(...)`](../tf/math/add_n.md): Adds all input tensors element-wise. + +[`angle(...)`](../tf/math/angle.md): Returns the element-wise argument of a complex (or real) tensor. + +[`argmax(...)`](../tf/math/argmax.md): Returns the index with the largest value across axes of a tensor. + +[`argmin(...)`](../tf/math/argmin.md): Returns the index with the smallest value across axes of a tensor. + +[`asin(...)`](../tf/math/asin.md): Computes the trigonometric inverse sine of x element-wise. + +[`asinh(...)`](../tf/math/asinh.md): Computes inverse hyperbolic sine of x element-wise. + +[`atan(...)`](../tf/math/atan.md): Computes the trigonometric inverse tangent of x element-wise. + +[`atan2(...)`](../tf/math/atan2.md): Computes arctangent of `y/x` element-wise, respecting signs of the arguments. + +[`atanh(...)`](../tf/math/atanh.md): Computes inverse hyperbolic tangent of x element-wise. + +[`bessel_i0(...)`](../tf/math/bessel_i0.md): Computes the Bessel i0 function of `x` element-wise. + +[`bessel_i0e(...)`](../tf/math/bessel_i0e.md): Computes the Bessel i0e function of `x` element-wise. + +[`bessel_i1(...)`](../tf/math/bessel_i1.md): Computes the Bessel i1 function of `x` element-wise. + +[`bessel_i1e(...)`](../tf/math/bessel_i1e.md): Computes the Bessel i1e function of `x` element-wise. 
+ +[`betainc(...)`](../tf/math/betainc.md): Compute the regularized incomplete beta integral \\(I_x(a, b)\\). + +[`bincount(...)`](../tf/math/bincount.md): Counts the number of occurrences of each value in an integer array. + +[`ceil(...)`](../tf/math/ceil.md): Return the ceiling of the input, element-wise. + +[`confusion_matrix(...)`](../tf/math/confusion_matrix.md): Computes the confusion matrix from predictions and labels. + +[`conj(...)`](../tf/math/conj.md): Returns the complex conjugate of a complex number. + +[`cos(...)`](../tf/math/cos.md): Computes cos of x element-wise. + +[`cosh(...)`](../tf/math/cosh.md): Computes hyperbolic cosine of x element-wise. + +[`count_nonzero(...)`](../tf/math/count_nonzero.md): Computes number of nonzero elements across dimensions of a tensor. + +[`cumprod(...)`](../tf/math/cumprod.md): Compute the cumulative product of the tensor `x` along `axis`. + +[`cumsum(...)`](../tf/math/cumsum.md): Compute the cumulative sum of the tensor `x` along `axis`. + +[`cumulative_logsumexp(...)`](../tf/math/cumulative_logsumexp.md): Compute the cumulative log-sum-exp of the tensor `x` along `axis`. + +[`digamma(...)`](../tf/math/digamma.md): Computes Psi, the derivative of Lgamma (the log of the absolute value of + +[`divide(...)`](../tf/math/divide.md): Computes Python style division of `x` by `y`. + +[`divide_no_nan(...)`](../tf/math/divide_no_nan.md): Computes a safe divide which returns 0 if the y is zero. + +[`equal(...)`](../tf/math/equal.md): Returns the truth value of (x == y) element-wise. + +[`erf(...)`](../tf/math/erf.md): Computes the Gauss error function of `x` element-wise. + +[`erfc(...)`](../tf/math/erfc.md): Computes the complementary error function of `x` element-wise. + +[`erfinv(...)`](../tf/math/erfinv.md): Compute inverse error function. + +[`exp(...)`](../tf/math/exp.md): Computes exponential of x element-wise. \\(y = e^x\\). + +[`expm1(...)`](../tf/math/expm1.md): Computes `exp(x) - 1` element-wise. + +[`floor(...)`](../tf/math/floor.md): Returns element-wise largest integer not greater than x. + +[`floordiv(...)`](../tf/math/floordiv.md): Divides `x / y` elementwise, rounding toward the most negative integer. + +[`floormod(...)`](../tf/math/floormod.md): Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + +[`greater(...)`](../tf/math/greater.md): Returns the truth value of (x > y) element-wise. + +[`greater_equal(...)`](../tf/math/greater_equal.md): Returns the truth value of (x >= y) element-wise. + +[`igamma(...)`](../tf/math/igamma.md): Compute the lower regularized incomplete Gamma function `P(a, x)`. + +[`igammac(...)`](../tf/math/igammac.md): Compute the upper regularized incomplete Gamma function `Q(a, x)`. + +[`imag(...)`](../tf/math/imag.md): Returns the imaginary part of a complex (or real) tensor. + +[`in_top_k(...)`](../tf/math/in_top_k.md): Says whether the targets are in the top `K` predictions. + +[`invert_permutation(...)`](../tf/math/invert_permutation.md): Computes the inverse permutation of a tensor. + +[`is_finite(...)`](../tf/math/is_finite.md): Returns which elements of x are finite. + +[`is_inf(...)`](../tf/math/is_inf.md): Returns which elements of x are Inf. + +[`is_nan(...)`](../tf/math/is_nan.md): Returns which elements of x are NaN. + +[`is_non_decreasing(...)`](../tf/math/is_non_decreasing.md): Returns `True` if `x` is non-decreasing. + +[`is_strictly_increasing(...)`](../tf/math/is_strictly_increasing.md): Returns `True` if `x` is strictly increasing. 
+ +[`l2_normalize(...)`](../tf/math/l2_normalize.md): Normalizes along dimension `axis` using an L2 norm. + +[`lbeta(...)`](../tf/math/lbeta.md): Computes \\(ln(|Beta(x)|)\\), reducing along the last dimension. + +[`less(...)`](../tf/math/less.md): Returns the truth value of (x < y) element-wise. + +[`less_equal(...)`](../tf/math/less_equal.md): Returns the truth value of (x <= y) element-wise. + +[`lgamma(...)`](../tf/math/lgamma.md): Computes the log of the absolute value of `Gamma(x)` element-wise. + +[`log(...)`](../tf/math/log.md): Computes natural logarithm of x element-wise. + +[`log1p(...)`](../tf/math/log1p.md): Computes natural logarithm of (1 + x) element-wise. + +[`log_sigmoid(...)`](../tf/math/log_sigmoid.md): Computes log sigmoid of `x` element-wise. + +[`log_softmax(...)`](../tf/nn/log_softmax.md): Computes log softmax activations. + +[`logical_and(...)`](../tf/math/logical_and.md): Logical AND function. + +[`logical_not(...)`](../tf/math/logical_not.md): Returns the truth value of `NOT x` element-wise. + +[`logical_or(...)`](../tf/math/logical_or.md): Returns the truth value of x OR y element-wise. + +[`logical_xor(...)`](../tf/math/logical_xor.md): Logical XOR function. + +[`maximum(...)`](../tf/math/maximum.md): Returns the max of x and y (i.e. x > y ? x : y) element-wise. + +[`minimum(...)`](../tf/math/minimum.md): Returns the min of x and y (i.e. x < y ? x : y) element-wise. + +[`mod(...)`](../tf/math/floormod.md): Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + +[`multiply(...)`](../tf/math/multiply.md): Returns an element-wise x * y. + +[`multiply_no_nan(...)`](../tf/math/multiply_no_nan.md): Computes the product of x and y and returns 0 if the y is zero, even if x is NaN or infinite. + +[`ndtri(...)`](../tf/math/ndtri.md): Compute quantile of Standard Normal. + +[`negative(...)`](../tf/math/negative.md): Computes numerical negative value element-wise. + +[`nextafter(...)`](../tf/math/nextafter.md): Returns the next representable value of `x1` in the direction of `x2`, element-wise. + +[`not_equal(...)`](../tf/math/not_equal.md): Returns the truth value of (x != y) element-wise. + +[`polygamma(...)`](../tf/math/polygamma.md): Compute the polygamma function \\(\psi^{(n)}(x)\\). + +[`polyval(...)`](../tf/math/polyval.md): Computes the elementwise value of a polynomial. + +[`pow(...)`](../tf/math/pow.md): Computes the power of one value to another. + +[`real(...)`](../tf/math/real.md): Returns the real part of a complex (or real) tensor. + +[`reciprocal(...)`](../tf/math/reciprocal.md): Computes the reciprocal of x element-wise. + +[`reciprocal_no_nan(...)`](../tf/math/reciprocal_no_nan.md): Performs a safe reciprocal operation, element wise. + +[`reduce_all(...)`](../tf/reduce_all.md): Computes the "logical and" of elements across dimensions of a tensor. + +[`reduce_any(...)`](../tf/math/reduce_any.md): Computes the "logical or" of elements across dimensions of a tensor. + +[`reduce_euclidean_norm(...)`](../tf/math/reduce_euclidean_norm.md): Computes the Euclidean norm of elements across dimensions of a tensor. + +[`reduce_logsumexp(...)`](../tf/math/reduce_logsumexp.md): Computes log(sum(exp(elements across dimensions of a tensor))). + +[`reduce_max(...)`](../tf/math/reduce_max.md): Computes the maximum of elements across dimensions of a tensor. + +[`reduce_mean(...)`](../tf/math/reduce_mean.md): Computes the mean of elements across dimensions of a tensor. 
+ +[`reduce_min(...)`](../tf/math/reduce_min.md): Computes the minimum of elements across dimensions of a tensor. + +[`reduce_prod(...)`](../tf/math/reduce_prod.md): Computes the product of elements across dimensions of a tensor. + +[`reduce_std(...)`](../tf/math/reduce_std.md): Computes the standard deviation of elements across dimensions of a tensor. + +[`reduce_sum(...)`](../tf/math/reduce_sum.md): Computes the sum of elements across dimensions of a tensor. + +[`reduce_variance(...)`](../tf/math/reduce_variance.md): Computes the variance of elements across dimensions of a tensor. + +[`rint(...)`](../tf/math/rint.md): Returns element-wise integer closest to x. + +[`round(...)`](../tf/math/round.md): Rounds the values of a tensor to the nearest integer, element-wise. + +[`rsqrt(...)`](../tf/math/rsqrt.md): Computes reciprocal of square root of x element-wise. + +[`scalar_mul(...)`](../tf/math/scalar_mul.md): Multiplies a scalar times a `Tensor` or `IndexedSlices` object. + +[`segment_max(...)`](../tf/math/segment_max.md): Computes the maximum along segments of a tensor. + +[`segment_mean(...)`](../tf/math/segment_mean.md): Computes the mean along segments of a tensor. + +[`segment_min(...)`](../tf/math/segment_min.md): Computes the minimum along segments of a tensor. + +[`segment_prod(...)`](../tf/math/segment_prod.md): Computes the product along segments of a tensor. + +[`segment_sum(...)`](../tf/math/segment_sum.md): Computes the sum along segments of a tensor. + +[`sigmoid(...)`](../tf/math/sigmoid.md): Computes sigmoid of `x` element-wise. + +[`sign(...)`](../tf/math/sign.md): Returns an element-wise indication of the sign of a number. + +[`sin(...)`](../tf/math/sin.md): Computes sine of x element-wise. + +[`sinh(...)`](../tf/math/sinh.md): Computes hyperbolic sine of x element-wise. + +[`sobol_sample(...)`](../tf/math/sobol_sample.md): Generates points from the Sobol sequence. + +[`softmax(...)`](../tf/nn/softmax.md): Computes softmax activations. + +[`softplus(...)`](../tf/math/softplus.md): Computes softplus: `log(exp(features) + 1)`. + +[`softsign(...)`](../tf/nn/softsign.md): Computes softsign: `features / (abs(features) + 1)`. + +[`sqrt(...)`](../tf/math/sqrt.md): Computes element-wise square root of the input tensor. + +[`square(...)`](../tf/math/square.md): Computes square of x element-wise. + +[`squared_difference(...)`](../tf/math/squared_difference.md): Returns (x - y)(x - y) element-wise. + +[`subtract(...)`](../tf/math/subtract.md): Returns x - y element-wise. + +[`tan(...)`](../tf/math/tan.md): Computes tan of x element-wise. + +[`tanh(...)`](../tf/math/tanh.md): Computes hyperbolic tangent of `x` element-wise. + +[`top_k(...)`](../tf/math/top_k.md): Finds values and indices of the `k` largest entries for the last dimension. + +[`truediv(...)`](../tf/math/truediv.md): Divides x / y elementwise (using Python 3 division operator semantics). + +[`unsorted_segment_max(...)`](../tf/math/unsorted_segment_max.md): Computes the maximum along segments of a tensor. + +[`unsorted_segment_mean(...)`](../tf/math/unsorted_segment_mean.md): Computes the mean along segments of a tensor. + +[`unsorted_segment_min(...)`](../tf/math/unsorted_segment_min.md): Computes the minimum along segments of a tensor. + +[`unsorted_segment_prod(...)`](../tf/math/unsorted_segment_prod.md): Computes the product along segments of a tensor. + +[`unsorted_segment_sqrt_n(...)`](../tf/math/unsorted_segment_sqrt_n.md): Computes the sum along segments of a tensor divided by the sqrt(N). 
+ +[`unsorted_segment_sum(...)`](../tf/math/unsorted_segment_sum.md): Computes the sum along segments of a tensor. + +[`xdivy(...)`](../tf/math/xdivy.md): Returns 0 if x == 0, and x / y otherwise, elementwise. + +[`xlog1py(...)`](../tf/math/xlog1py.md): Compute x * log1p(y). + +[`xlogy(...)`](../tf/math/xlogy.md): Returns 0 if x == 0, and x * log(y) otherwise, elementwise. + +[`zero_fraction(...)`](../tf/math/zero_fraction.md): Returns the fraction of zeros in `value`. + +[`zeta(...)`](../tf/math/zeta.md): Compute the Hurwitz zeta function \\(\zeta(x, q)\\). + diff --git a/site/en/api_docs/python/tf/math/abs.md b/site/en/api_docs/python/tf/math/abs.md new file mode 100644 index 00000000000..e85a5ba2076 --- /dev/null +++ b/site/en/api_docs/python/tf/math/abs.md @@ -0,0 +1,107 @@ +description: Computes the absolute value of a tensor. + +
+ + +
+ +# tf.math.abs + + + + + + + + + +Computes the absolute value of a tensor. + + + + + + + + + +Given a tensor of integer or floating-point values, this operation returns a +tensor of the same type, where each element contains the absolute value of the +corresponding element in the input. + +Given a tensor `x` of complex numbers, this operation returns a tensor of type +`float32` or `float64` that is the absolute value of each element in `x`. For +a complex number \\(a + bj\\), its absolute value is computed as \\(\sqrt{a^2 ++ b^2}\\). For example: + +``` +>>> x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]]) +>>> tf.abs(x) + +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor` of type `float16`, `float32`, `float64`, +`int32`, `int64`, `complex64` or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor` of the same size, type and sparsity as `x`, +with absolute values. Note, for `complex64` or `complex128` input, the +returned `Tensor` will be of type `float32` or `float64`, respectively. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)` +
+ diff --git a/site/en/api_docs/python/tf/math/accumulate_n.md b/site/en/api_docs/python/tf/math/accumulate_n.md new file mode 100644 index 00000000000..22c742b8e06 --- /dev/null +++ b/site/en/api_docs/python/tf/math/accumulate_n.md @@ -0,0 +1,136 @@ +description: Returns the element-wise sum of a list of tensors. + +
+ + +
+ +# tf.math.accumulate_n + + + + + + + + + +Returns the element-wise sum of a list of tensors. + + + + + + + + + +Optionally, pass `shape` and `tensor_dtype` for shape and type checking, +otherwise, these are inferred. + +`accumulate_n` performs the same operation as tf.math.add_n. + +#### For example: + + + +```python +a = tf.constant([[1, 2], [3, 4]]) +b = tf.constant([[5, 0], [0, 6]]) +tf.math.accumulate_n([a, b, a]) # [[7, 4], [6, 14]] + +# Explicitly pass shape and type +tf.math.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32) + # [[7, 4], + # [6, 14]] +``` + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of `Tensor` objects, each with same shape and type. +
+`shape` + +Expected shape of elements of `inputs` (optional). Also controls the +output shape of this op, which may affect type inference in other ops. A +value of `None` means "infer the input shape from the shapes in `inputs`". +
+`tensor_dtype` + +Expected data type of `inputs` (optional). A value of `None` +means "infer the input dtype from `inputs[0]`". +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of same shape and type as the elements of `inputs`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `inputs` don't all have same shape and dtype or the shape +cannot be inferred. +
+ diff --git a/site/en/api_docs/python/tf/math/acos.md b/site/en/api_docs/python/tf/math/acos.md new file mode 100644 index 00000000000..e12cf3d53a7 --- /dev/null +++ b/site/en/api_docs/python/tf/math/acos.md @@ -0,0 +1,80 @@ +description: Computes acos of x element-wise. + +
+ + +
+ +# tf.math.acos + + + + + + + + + +Computes acos of x element-wise. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
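+
+A minimal usage sketch; the outputs noted in the comment are approximate:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 1.0])
+tf.math.acos(x)  # approximately [3.1415927, 1.5707964, 0.0]
+```
+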
+ diff --git a/site/en/api_docs/python/tf/math/acosh.md b/site/en/api_docs/python/tf/math/acosh.md new file mode 100644 index 00000000000..d4d282c3d20 --- /dev/null +++ b/site/en/api_docs/python/tf/math/acosh.md @@ -0,0 +1,87 @@ +description: Computes inverse hyperbolic cosine of x element-wise. + +
+ + +
+ +# tf.math.acosh + + + + + + + + + +Computes inverse hyperbolic cosine of x element-wise. + + + + + + + + + +Given an input tensor, the function computes inverse hyperbolic cosine of every element. +Input range is `[1, inf]`. It returns `nan` if the input lies outside the range. + +```python +x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float("inf")]) +tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/add.md b/site/en/api_docs/python/tf/math/add.md new file mode 100644 index 00000000000..8af9530f5c9 --- /dev/null +++ b/site/en/api_docs/python/tf/math/add.md @@ -0,0 +1,89 @@ +description: Returns x + y element-wise. + +
+ + +
+ +# tf.math.add + + + + + + + + + +Returns x + y element-wise. + + + + + + + + + +*NOTE*: math.add supports broadcasting. `AddN` does not. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
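+
+A quick sketch of element-wise addition, including scalar broadcasting; the
+expected results are noted in comments:
+
+```python
+import tensorflow as tf
+
+a = tf.constant([1, 2, 3])
+b = tf.constant([4, 5, 6])
+tf.math.add(a, b)   # [5, 7, 9]
+tf.math.add(a, 10)  # [11, 12, 13], the scalar is broadcast
+```
+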
+ diff --git a/site/en/api_docs/python/tf/math/add_n.md b/site/en/api_docs/python/tf/math/add_n.md new file mode 100644 index 00000000000..2b059409502 --- /dev/null +++ b/site/en/api_docs/python/tf/math/add_n.md @@ -0,0 +1,128 @@ +description: Adds all input tensors element-wise. + +
+ + +
+ +# tf.math.add_n + + + + + + + + + +Adds all input tensors element-wise. + + + + + + + + + +tf.math.add_n performs the same operation as tf.math.accumulate_n, but it +waits for all of its inputs to be ready before beginning to sum. +This buffering can result in higher memory consumption when inputs are ready +at different times, since the minimum temporary storage required is +proportional to the input size rather than the output size. + +This op does not [broadcast]( +https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html) +its inputs. If you need broadcasting, use tf.math.add (or the `+` operator) +instead. + +#### For example: + + + +``` +>>> a = tf.constant([[3, 5], [4, 8]]) +>>> b = tf.constant([[1, 6], [2, 9]]) +>>> tf.math.add_n([a, b, a]) + +``` + + + + + + + + + + + + + +
+`inputs` + +A list of tf.Tensor or tf.IndexedSlices objects, each with the +same shape and type. tf.IndexedSlices objects will be converted into +dense tensors prior to adding. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.Tensor of the same shape and type as the elements of `inputs`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `inputs` don't all have same shape and dtype or the shape +cannot be inferred. +
+ diff --git a/site/en/api_docs/python/tf/math/angle.md b/site/en/api_docs/python/tf/math/angle.md new file mode 100644 index 00000000000..a145c28a9e3 --- /dev/null +++ b/site/en/api_docs/python/tf/math/angle.md @@ -0,0 +1,102 @@ +description: Returns the element-wise argument of a complex (or real) tensor. + +
+ + +
+ +# tf.math.angle + + + + + + + + + +Returns the element-wise argument of a complex (or real) tensor. + + + + + + + + + +Given a tensor `input`, this operation returns a tensor of type `float` that +is the argument of each element in `input` considered as a complex number. + +The elements in `input` are considered to be complex numbers of the form +\\(a + bj\\), where *a* is the real part and *b* is the imaginary part. +If `input` is real then *b* is zero by definition. + +The argument returned by this function is of the form \\(atan2(b, a)\\). +If `input` is real, a tensor of all zeros is returned. + +#### For example: + + + +``` +input = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j], dtype=tf.complex64) +tf.math.angle(input).numpy() +# ==> array([2.0131705, 1.056345 ], dtype=float32) +``` + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float`, `double`, +`complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32` or `float64`. +
+ diff --git a/site/en/api_docs/python/tf/math/argmax.md b/site/en/api_docs/python/tf/math/argmax.md new file mode 100644 index 00000000000..098ecf273fa --- /dev/null +++ b/site/en/api_docs/python/tf/math/argmax.md @@ -0,0 +1,111 @@ +description: Returns the index with the largest value across axes of a tensor. + +
+ + +
+ +# tf.math.argmax + + + + + + + + + +Returns the index with the largest value across axes of a tensor. + + + + + + + + + +Note that in case of ties the identity of the return value is not guaranteed. + +#### For example: + + + +``` +>>> A = tf.constant([2, 20, 30, 3, 6]) +>>> tf.math.argmax(A) # A[2] is maximum in tensor A + +>>> B = tf.constant([[2, 20, 30, 3, 6], [3, 11, 16, 1, 8], +... [14, 45, 23, 5, 27]]) +>>> tf.math.argmax(B, 0) + +>>> tf.math.argmax(B, 1) + +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`axis`
+
+An integer, the axis to reduce across. Defaults to 0.
+
+`output_type` + +An optional output dtype (tf.int32 or tf.int64). Defaults +to tf.int64. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of type `output_type`. +
+ diff --git a/site/en/api_docs/python/tf/math/argmin.md b/site/en/api_docs/python/tf/math/argmin.md new file mode 100644 index 00000000000..be5e086a6c3 --- /dev/null +++ b/site/en/api_docs/python/tf/math/argmin.md @@ -0,0 +1,114 @@ +description: Returns the index with the smallest value across axes of a tensor. + +
+ + +
+ +# tf.math.argmin + + + + + + + + + +Returns the index with the smallest value across axes of a tensor. + + + + + + + + + +Note that in case of ties the identity of the return value is not guaranteed. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, +`int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, +`quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, +`uint64`. +
+`axis`
+
+A `Tensor`. Must be one of the following types: `int32`, `int64`.
+int32 or int64, must be in the range `[-rank(input), rank(input))`.
+Describes which axis of the input Tensor to reduce across. For vectors,
+use axis = 0.
+
+`output_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to +tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_type`. +
+
+
+
+#### Usage:
+
+
+```python
+import tensorflow as tf
+a = [1, 10, 26.9, 2.8, 166.32, 62.3]
+b = tf.math.argmin(input = a)
+c = tf.keras.backend.eval(b)
+# c = 0
+# here a[0] = 1 which is the smallest element of a across axis 0
+``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/math/asin.md b/site/en/api_docs/python/tf/math/asin.md new file mode 100644 index 00000000000..2b3a4c5f5c6 --- /dev/null +++ b/site/en/api_docs/python/tf/math/asin.md @@ -0,0 +1,97 @@ +description: Computes the trigonometric inverse sine of x element-wise.
+
+
+ + +
+
+# tf.math.asin
+
+
+
+
+
+
+
+
+
+Computes the trigonometric inverse sine of x element-wise.
+
+
+
+
+
+
+
+
+
+The tf.math.asin operation returns the inverse of tf.math.sin, such that
+if `y = tf.math.sin(x)` then `x = tf.math.asin(y)`.
+
+**Note**: The output of tf.math.asin will lie within the invertible range
+of sine, i.e. [-pi/2, pi/2].
+
+#### For example:
+
+
+
+```python
+# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
+x = tf.constant([1.047, 0.785])
+y = tf.math.sin(x) # [0.8659266, 0.7068252]
+
+tf.math.asin(y) # [1.047, 0.785] = x
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/asinh.md b/site/en/api_docs/python/tf/math/asinh.md new file mode 100644 index 00000000000..25062cc1071 --- /dev/null +++ b/site/en/api_docs/python/tf/math/asinh.md @@ -0,0 +1,88 @@ +description: Computes inverse hyperbolic sine of x element-wise. + +
+ + +
+ +# tf.math.asinh + + + + + + + + + +Computes inverse hyperbolic sine of x element-wise. + + + + + + + + + + Given an input tensor, this function computes inverse hyperbolic sine + for every element in the tensor. Both input and output has a range of + `[-inf, inf]`. + + ```python + x = tf.constant([-float("inf"), -2, -0.5, 1, 1.2, 200, 10000, float("inf")]) + tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/atan.md b/site/en/api_docs/python/tf/math/atan.md new file mode 100644 index 00000000000..66fd9fd9a1c --- /dev/null +++ b/site/en/api_docs/python/tf/math/atan.md @@ -0,0 +1,97 @@ +description: Computes the trigonometric inverse tangent of x element-wise.
+
+
+ + +
+
+# tf.math.atan
+
+
+
+
+
+
+
+
+
+Computes the trigonometric inverse tangent of x element-wise.
+
+
+
+
+
+
+
+
+
+The tf.math.atan operation returns the inverse of tf.math.tan, such that
+if `y = tf.math.tan(x)` then `x = tf.math.atan(y)`.
+
+**Note**: The output of tf.math.atan will lie within the invertible range
+of tan, i.e. (-pi/2, pi/2).
+
+#### For example:
+
+
+
+```python
+# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
+x = tf.constant([1.047, 0.785])
+y = tf.math.tan(x) # [1.731261, 0.99920404]
+
+tf.math.atan(y) # [1.047, 0.785] = x
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/atan2.md b/site/en/api_docs/python/tf/math/atan2.md new file mode 100644 index 00000000000..9e567a6e232 --- /dev/null +++ b/site/en/api_docs/python/tf/math/atan2.md @@ -0,0 +1,92 @@ +description: Computes arctangent of y/x element-wise, respecting signs of the arguments. + +
+ + +
+
+# tf.math.atan2
+
+
+
+
+
+
+
+
+
+Computes arctangent of `y/x` element-wise, respecting signs of the arguments.
+
+
+
+
+
+
+
+
+
+This is the angle \( \theta \in [-\pi, \pi] \) such that
+\[ x = r \cos(\theta) \]
+and
+\[ y = r \sin(\theta) \]
+where \( r = \sqrt{x^2 + y^2} \).
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`y` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`x` + +A `Tensor`. Must have the same type as `y`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `y`. +
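+
+A small sketch showing how the signs of the arguments select the quadrant;
+the outputs in the comment are approximate multiples of pi/4:
+
+```python
+import tensorflow as tf
+
+y = tf.constant([1.0, 1.0, -1.0, -1.0])
+x = tf.constant([1.0, -1.0, 1.0, -1.0])
+tf.math.atan2(y, x)
+# approximately [0.785, 2.356, -0.785, -2.356], i.e. pi/4, 3*pi/4, -pi/4, -3*pi/4
+```
+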
+ diff --git a/site/en/api_docs/python/tf/math/atanh.md b/site/en/api_docs/python/tf/math/atanh.md new file mode 100644 index 00000000000..dc95be677a5 --- /dev/null +++ b/site/en/api_docs/python/tf/math/atanh.md @@ -0,0 +1,90 @@ +description: Computes inverse hyperbolic tangent of x element-wise. + +
+ + +
+ +# tf.math.atanh + + + + + + + + + +Computes inverse hyperbolic tangent of x element-wise. + + + + + + + + + + Given an input tensor, this function computes inverse hyperbolic tangent + for every element in the tensor. Input range is `[-1,1]` and output range is + `[-inf, inf]`. If input is `-1`, output will be `-inf` and if the + input is `1`, output will be `inf`. Values outside the range will have + `nan` as output. + + ```python + x = tf.constant([-float("inf"), -1, -0.5, 1, 0, 0.5, 10, float("inf")]) + tf.math.atanh(x) ==> [nan -inf -0.54930615 inf 0. 0.54930615 nan nan] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/bessel_i0.md b/site/en/api_docs/python/tf/math/bessel_i0.md new file mode 100644 index 00000000000..7c18cbc5c44 --- /dev/null +++ b/site/en/api_docs/python/tf/math/bessel_i0.md @@ -0,0 +1,92 @@ +description: Computes the Bessel i0 function of x element-wise. + +
+ + +
+ +# tf.math.bessel_i0 + + + + + + + + + +Computes the Bessel i0 function of `x` element-wise. + + + + + + + + + +Modified Bessel function of order 0. + +It is preferable to use the numerically stabler function `i0e(x)` instead. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, +`float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. +
+ + + + +#### Scipy Compatibility +Equivalent to scipy.special.i0 + diff --git a/site/en/api_docs/python/tf/math/bessel_i0e.md b/site/en/api_docs/python/tf/math/bessel_i0e.md new file mode 100644 index 00000000000..602c18c3162 --- /dev/null +++ b/site/en/api_docs/python/tf/math/bessel_i0e.md @@ -0,0 +1,84 @@ +description: Computes the Bessel i0e function of x element-wise. + +
+ + +
+ +# tf.math.bessel_i0e + + + + + + + + + +Computes the Bessel i0e function of `x` element-wise. + + + + + + + + + +Exponentially scaled modified Bessel function of order 0 defined as +`bessel_i0e(x) = exp(-abs(x)) bessel_i0(x)`. + +This function is faster and numerically stabler than `bessel_i0(x)`. + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.bessel_i0e(x.values, ...), x.dense_shape)` +
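+
+A brief sketch relating `bessel_i0e` to `bessel_i0` via the definition above;
+the values in the comments are approximate:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.0, 1.0])
+tf.math.bessel_i0e(x)                           # approximately [1.0, 0.4658]
+tf.math.exp(-tf.abs(x)) * tf.math.bessel_i0(x)  # same values, per the definition above
+```
+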
+ diff --git a/site/en/api_docs/python/tf/math/bessel_i1.md b/site/en/api_docs/python/tf/math/bessel_i1.md new file mode 100644 index 00000000000..5ad5e0bce3a --- /dev/null +++ b/site/en/api_docs/python/tf/math/bessel_i1.md @@ -0,0 +1,92 @@ +description: Computes the Bessel i1 function of x element-wise. + +
+ + +
+ +# tf.math.bessel_i1 + + + + + + + + + +Computes the Bessel i1 function of `x` element-wise. + + + + + + + + + +Modified Bessel function of order 1. + +It is preferable to use the numerically stabler function `i1e(x)` instead. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, +`float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. +
+ + + + +#### Scipy Compatibility +Equivalent to scipy.special.i1 + diff --git a/site/en/api_docs/python/tf/math/bessel_i1e.md b/site/en/api_docs/python/tf/math/bessel_i1e.md new file mode 100644 index 00000000000..5ee43e41105 --- /dev/null +++ b/site/en/api_docs/python/tf/math/bessel_i1e.md @@ -0,0 +1,84 @@ +description: Computes the Bessel i1e function of x element-wise. + +
+ + +
+
+# tf.math.bessel_i1e
+
+
+
+
+
+
+
+
+
+Computes the Bessel i1e function of `x` element-wise.
+
+
+
+
+
+
+
+
+
+Exponentially scaled modified Bessel function of order 1 defined as
+`bessel_i1e(x) = exp(-abs(x)) bessel_i1(x)`.
+
+This function is faster and numerically stabler than `bessel_i1(x)`.
+
+
+
+
+
+
+
+
+
+
+
+
+
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.bessel_i1e(x.values, ...), x.dense_shape)` +
+ diff --git a/site/en/api_docs/python/tf/math/betainc.md b/site/en/api_docs/python/tf/math/betainc.md new file mode 100644 index 00000000000..010bb30d904 --- /dev/null +++ b/site/en/api_docs/python/tf/math/betainc.md @@ -0,0 +1,104 @@ +description: Compute the regularized incomplete beta integral \\(I_x(a, b)\\). + +
+ + +
+ +# tf.math.betainc + + + + + + + + + +Compute the regularized incomplete beta integral \\(I_x(a, b)\\). + + + + + + + + + +The regularized incomplete beta integral is defined as: + + +\\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\\) + +where + + +\\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt\\) + + +is the incomplete beta function and \\(B(a, b)\\) is the *complete* +beta function. + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`b` + +A `Tensor`. Must have the same type as `a`. +
+`x` + +A `Tensor`. Must have the same type as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
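+
+A short sketch using the endpoints x = 0 and x = 1 plus a symmetric case
+whose value is known in closed form; outputs in the comment are approximate:
+
+```python
+import tensorflow as tf
+
+a = tf.constant([0.5, 2.0, 2.0])
+b = tf.constant([0.5, 3.0, 3.0])
+x = tf.constant([0.5, 0.0, 1.0])
+tf.math.betainc(a, b, x)  # approximately [0.5, 0.0, 1.0]
+# I_x(a, b) is 0 at x=0, 1 at x=1, and I_0.5(0.5, 0.5) = 0.5 by symmetry.
+```
+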
+ diff --git a/site/en/api_docs/python/tf/math/bincount.md b/site/en/api_docs/python/tf/math/bincount.md new file mode 100644 index 00000000000..2935b1cf2c0 --- /dev/null +++ b/site/en/api_docs/python/tf/math/bincount.md @@ -0,0 +1,144 @@ +description: Counts the number of occurrences of each value in an integer array. + +
+ + +
+
+# tf.math.bincount
+
+
+
+
+
+
+
+
+
+Counts the number of occurrences of each value in an integer array.
+
+
+
+
+
+
+
+If `minlength` and `maxlength` are not given, returns a vector with length
+`tf.reduce_max(arr) + 1` if `arr` is non-empty, and length 0 otherwise.
+If `weights` is non-None, then index `i` of the output stores the sum of the
+value in `weights` at each index where the corresponding value in `arr` is
+`i`.
+
+```python
+values = tf.constant([1,1,2,3,2,4,4,5])
+tf.math.bincount(values) #[0 2 2 1 2 1]
+```
+The maximum element in `values` is 5, so the output vector has length
+5 + 1 = 6.
+
+Each bin value in the output indicates the number of occurrences of that
+index. Here, index 1 in the output has the value 2, because the value 1
+occurs twice in `values`.
+
+```python
+values = tf.constant([1,1,2,3,2,4,4,5])
+weights = tf.constant([1,5,0,1,0,5,4,5])
+tf.math.bincount(values, weights=weights) #[0 6 0 1 9 5]
+```
+When `weights` is given, each bin is incremented by the corresponding weight
+instead of 1. Here, index 1 in the output has the value 6, which is the sum
+of the weights at the positions where `values` equals 1.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`arr` + +An int32 tensor of non-negative values. +
+`weights` + +If non-None, must be the same shape as arr. For each value in +`arr`, the bin will be incremented by the corresponding weight instead of +1. +
+`minlength` + +If given, ensures the output has length at least `minlength`, +padding with zeros at the end if necessary. +
+`maxlength` + +If given, skips values in `arr` that are equal or greater than +`maxlength`, ensuring that the output has length at most `maxlength`. +
+`dtype` + +If `weights` is None, determines the type of the output bins. +
+`name` + +A name scope for the associated operations (optional). +
+ + + + + + + + + + + +
+A vector with the same dtype as `weights` or the given `dtype`. The bin +values. +
+ + + + + + + + + + + +
+`InvalidArgumentError` if negative values are provided as an input. +
+ diff --git a/site/en/api_docs/python/tf/math/ceil.md b/site/en/api_docs/python/tf/math/ceil.md new file mode 100644 index 00000000000..37e474bacdd --- /dev/null +++ b/site/en/api_docs/python/tf/math/ceil.md @@ -0,0 +1,99 @@ +description: Return the ceiling of the input, element-wise. + +
+ + +
+ +# tf.math.ceil + + + + + + + + + +Return the ceiling of the input, element-wise. + + + + + + + + + + +#### For example: + + + +``` +>>> tf.math.ceil([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) + +``` + + + + + + + + + + + + + +
+`x`
+
+A tf.Tensor. Must be one of the following types: `bfloat16`, `half`,
+`float32`, `float64`.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.Tensor. Has the same type as `x`. +
+ + + + +#### Numpy Compatibility +Equivalent to np.ceil + diff --git a/site/en/api_docs/python/tf/math/confusion_matrix.md b/site/en/api_docs/python/tf/math/confusion_matrix.md new file mode 100644 index 00000000000..4348a10f779 --- /dev/null +++ b/site/en/api_docs/python/tf/math/confusion_matrix.md @@ -0,0 +1,152 @@ +description: Computes the confusion matrix from predictions and labels. + +
+ + +
+ +# tf.math.confusion_matrix + + + + + + + + + +Computes the confusion matrix from predictions and labels. + + + + + + + +The matrix columns represent the prediction labels and the rows represent the +real labels. The confusion matrix is always a 2-D array of shape `[n, n]`, +where `n` is the number of valid labels for a given classification task. Both +prediction and labels must be 1-D arrays of the same shape in order for this +function to work. + +If `num_classes` is `None`, then `num_classes` will be set to one plus the +maximum value in either predictions or labels. Class labels are expected to +start at 0. For example, if `num_classes` is 3, then the possible labels +would be `[0, 1, 2]`. + +If `weights` is not `None`, then each prediction contributes its +corresponding weight to the total value of the confusion matrix cell. + +#### For example: + + + +```python + tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==> + [[0 0 0 0 0] + [0 0 1 0 0] + [0 0 1 0 0] + [0 0 0 0 0] + [0 0 0 0 1]] +``` + +Note that the possible labels are assumed to be `[0, 1, 2, 3, 4]`, +resulting in a 5x5 confusion matrix. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +1-D `Tensor` of real labels for the classification task. +
+`predictions` + +1-D `Tensor` of predictions for a given classification. +
+`num_classes` + +The possible number of labels the classification task can +have. If this value is not provided, it will be calculated +using both predictions and labels array. +
+`weights` + +An optional `Tensor` whose shape matches `predictions`. +
+`dtype` + +Data type of the confusion matrix. +
+`name` + +Scope name. +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype` with shape `[n, n]` representing the confusion +matrix, where `n` is the number of possible labels in the classification +task. +
+ + + + + + + + + + + + +
+`ValueError` + +If both predictions and labels are not 1-D vectors and have +mismatched shapes, or if `weights` is not `None` and its shape doesn't +match `predictions`. +
+ diff --git a/site/en/api_docs/python/tf/math/conj.md b/site/en/api_docs/python/tf/math/conj.md new file mode 100644 index 00000000000..adc2a37dfc3 --- /dev/null +++ b/site/en/api_docs/python/tf/math/conj.md @@ -0,0 +1,114 @@ +description: Returns the complex conjugate of a complex number. + +
+ + +
+ +# tf.math.conj + + + + + + + + + +Returns the complex conjugate of a complex number. + + + + + + + + + +Given a tensor `input` of complex numbers, this operation returns a tensor of +complex numbers that are the complex conjugate of each element in `input`. The +complex numbers in `input` must be of the form \\(a + bj\\), where *a* is the +real part and *b* is the imaginary part. + +The complex conjugate returned by this operation is of the form \\(a - bj\\). + +#### For example: + + +# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] +tf.math.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j] + + +If `x` is real, it is returned unchanged. + + + + + + + + + + + + + +
+`x` + +`Tensor` to conjugate. Must have numeric or variant type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` that is the conjugate of `x` (with the same type). +
+ + + + + + + + + + + + +
+`TypeError` + +If `x` is not a numeric tensor. +
+ diff --git a/site/en/api_docs/python/tf/math/cos.md b/site/en/api_docs/python/tf/math/cos.md new file mode 100644 index 00000000000..afb8a11c626 --- /dev/null +++ b/site/en/api_docs/python/tf/math/cos.md @@ -0,0 +1,89 @@ +description: Computes cos of x element-wise. + +
+ + +
+ +# tf.math.cos + + + + + + + + + +Computes cos of x element-wise. + + + + + + + + + + Given an input tensor, this function computes cosine of every + element in the tensor. Input range is `(-inf, inf)` and + output range is `[-1,1]`. If input lies outside the boundary, `nan` + is returned. + + ```python + x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")]) + tf.math.cos(x) ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/cosh.md b/site/en/api_docs/python/tf/math/cosh.md new file mode 100644 index 00000000000..4519421fe23 --- /dev/null +++ b/site/en/api_docs/python/tf/math/cosh.md @@ -0,0 +1,88 @@ +description: Computes hyperbolic cosine of x element-wise. + +
+ + +
+ +# tf.math.cosh + + + + + + + + + +Computes hyperbolic cosine of x element-wise. + + + + + + + + + + Given an input tensor, this function computes hyperbolic cosine of every + element in the tensor. Input range is `[-inf, inf]` and output range + is `[1, inf]`. + + ```python + x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")]) + tf.math.cosh(x) ==> [inf 4.0515420e+03 1.1276259e+00 1.5430807e+00 1.8106556e+00 3.7621956e+00 1.1013233e+04 inf] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/count_nonzero.md b/site/en/api_docs/python/tf/math/count_nonzero.md new file mode 100644 index 00000000000..03eb8912763 --- /dev/null +++ b/site/en/api_docs/python/tf/math/count_nonzero.md @@ -0,0 +1,128 @@ +description: Computes number of nonzero elements across dimensions of a tensor. + +
+ + +
+ +# tf.math.count_nonzero + + + + + + + + + +Computes number of nonzero elements across dimensions of a tensor. + + + + + + + +Reduces `input` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +**NOTE** Floating point comparison to zero is done by exact floating point +equality check. Small values are **not** rounded to zero for purposes of +the nonzero check. + +#### For example: + + + +```python +x = tf.constant([[0, 1, 0], [1, 1, 0]]) +tf.math.count_nonzero(x) # 3 +tf.math.count_nonzero(x, 0) # [1, 2, 0] +tf.math.count_nonzero(x, 1) # [1, 2] +tf.math.count_nonzero(x, 1, keepdims=True) # [[1], [2]] +tf.math.count_nonzero(x, [0, 1]) # 3 +``` + +**NOTE** Strings are compared against zero-length empty string `""`. Any +string with a size greater than zero is already considered as nonzero. + +#### For example: + + +```python +x = tf.constant(["", "a", " ", "b", ""]) +tf.math.count_nonzero(x) # 3, with "a", " ", and "b" as nonzero strings. +``` + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +The tensor to reduce. Should be of numeric type, `bool`, or `string`. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input), rank(input))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`dtype` + +The output dtype; defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced tensor (number of nonzero values). +
+ diff --git a/site/en/api_docs/python/tf/math/cumprod.md b/site/en/api_docs/python/tf/math/cumprod.md new file mode 100644 index 00000000000..0429c5ea3ca --- /dev/null +++ b/site/en/api_docs/python/tf/math/cumprod.md @@ -0,0 +1,134 @@ +description: Compute the cumulative product of the tensor x along axis. + +
+ + +
+ +# tf.math.cumprod + + + + + + + + + +Compute the cumulative product of the tensor `x` along `axis`. + + + + + + + + + +By default, this op performs an inclusive cumprod, which means that the +first element of the input is identical to the first element of the output: + +```python +tf.math.cumprod([a, b, c]) # [a, a * b, a * b * c] +``` + +By setting the `exclusive` kwarg to `True`, an exclusive cumprod is +performed +instead: + +```python +tf.math.cumprod([a, b, c], exclusive=True) # [1, a, a * b] +``` + +By setting the `reverse` kwarg to `True`, the cumprod is performed in the +opposite direction: + +```python +tf.math.cumprod([a, b, c], reverse=True) # [a * b * c, b * c, c] +``` + +This is more efficient than using separate tf.reverse ops. +The `reverse` and `exclusive` kwargs can also be combined: + +```python +tf.math.cumprod([a, b, c], exclusive=True, reverse=True) # [b * c, c, 1] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, +`int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, +`complex128`, `qint8`, `quint8`, `qint32`, `half`. +
+`axis` + +A `Tensor` of type `int32` (default: 0). Must be in the range +`[-rank(x), rank(x))`. +
+`exclusive` + +If `True`, perform exclusive cumprod. +
+`reverse` + +A `bool` (default: False). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/cumsum.md b/site/en/api_docs/python/tf/math/cumsum.md new file mode 100644 index 00000000000..182509d3458 --- /dev/null +++ b/site/en/api_docs/python/tf/math/cumsum.md @@ -0,0 +1,166 @@ +description: Compute the cumulative sum of the tensor x along axis. + +
+ + +
+ +# tf.math.cumsum + + + + + + + + + +Compute the cumulative sum of the tensor `x` along `axis`. + + + + + + + + + +By default, this op performs an inclusive cumsum, which means that the first +element of the input is identical to the first element of the output: +For example: + +``` +>>> # tf.cumsum([a, b, c]) # [a, a + b, a + b + c] +>>> x = tf.constant([2, 4, 6, 8]) +>>> tf.cumsum(x) + +``` + +``` +>>> # using varying `axis` values +>>> y = tf.constant([[2, 4, 6, 8], [1,3,5,7]]) +>>> tf.cumsum(y, axis=0) + +>>> tf.cumsum(y, axis=1) + +``` + +By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed +instead: + +``` +>>> # tf.cumsum([a, b, c], exclusive=True) => [0, a, a + b] +>>> x = tf.constant([2, 4, 6, 8]) +>>> tf.cumsum(x, exclusive=True) + +``` + +By setting the `reverse` kwarg to `True`, the cumsum is performed in the +opposite direction: + +``` +>>> # tf.cumsum([a, b, c], reverse=True) # [a + b + c, b + c, c] +>>> x = tf.constant([2, 4, 6, 8]) +>>> tf.cumsum(x, reverse=True) + +``` + +This is more efficient than using separate tf.reverse ops. +The `reverse` and `exclusive` kwargs can also be combined: + +``` +>>> # tf.cumsum([a, b, c], exclusive=True, reverse=True) # [b + c, c, 0] +>>> x = tf.constant([2, 4, 6, 8]) +>>> tf.cumsum(x, exclusive=True, reverse=True) + +``` + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, +`int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, +`complex128`, `qint8`, `quint8`, `qint32`, `half`. +
+`axis` + +A `Tensor` of type `int32` (default: 0). Must be in the range +`[-rank(x), rank(x))`. +
+`exclusive` + +If `True`, perform exclusive cumsum. +
+`reverse` + +A `bool` (default: False). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/cumulative_logsumexp.md b/site/en/api_docs/python/tf/math/cumulative_logsumexp.md new file mode 100644 index 00000000000..48b0a9244b4 --- /dev/null +++ b/site/en/api_docs/python/tf/math/cumulative_logsumexp.md @@ -0,0 +1,137 @@ +description: Compute the cumulative log-sum-exp of the tensor x along axis. + +
+ + +
+ +# tf.math.cumulative_logsumexp + + + + + + + + + +Compute the cumulative log-sum-exp of the tensor `x` along `axis`. + + + + + + + + + +By default, this op performs an inclusive cumulative log-sum-exp, which means +that the first element of the input is identical to the first element of +the output. + +This operation is significantly more numerically stable than the equivalent +tensorflow operation `tf.math.log(tf.math.cumsum(tf.math.exp(x)))`, although +computes the same result given infinite numerical precision. However, note +that in some cases, it may be less stable than tf.math.reduce_logsumexp +for a given element, as it applies the "log-sum-exp trick" in a different +way. + +More precisely, where tf.math.reduce_logsumexp uses the following trick: + +``` +log(sum(exp(x))) == log(sum(exp(x - max(x)))) + max(x) +``` + +it cannot be directly used here as there is no fast way of applying it +to each prefix `x[:i]`. Instead, this function implements a prefix +scan using pairwise log-add-exp, which is a commutative and associative +(up to floating point precision) operator: + +``` +log_add_exp(x, y) = log(exp(x) + exp(y)) + = log(1 + exp(min(x, y) - max(x, y))) + max(x, y) +``` + +However, reducing using the above operator leads to a different computation +tree (logs are taken repeatedly instead of only at the end), and the maximum +is only computed pairwise instead of over the entire prefix. In general, this +leads to a different and slightly less precise computation. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float16`, `float32`, +`float64`. +
+`axis` + +A `Tensor` of type `int32` or `int64` (default: 0). Must be in the +range `[-rank(x), rank(x))`. +
+`exclusive` + +If `True`, perform exclusive cumulative log-sum-exp. +
+`reverse` + +If `True`, performs the cumulative log-sum-exp in the reverse +direction. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same shape and type as `x`. +
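+
+A small sketch comparing the op to the naive formulation it stabilizes; both
+lines should agree up to floating-point error (values approximate):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.0, 1.0, 2.0])
+tf.math.cumulative_logsumexp(x)              # approximately [0.0, 1.3133, 2.4076]
+tf.math.log(tf.math.cumsum(tf.math.exp(x)))  # same values, but prone to overflow for large x
+```
+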
+ diff --git a/site/en/api_docs/python/tf/math/digamma.md b/site/en/api_docs/python/tf/math/digamma.md new file mode 100644 index 00000000000..7c67c8ad556 --- /dev/null +++ b/site/en/api_docs/python/tf/math/digamma.md @@ -0,0 +1,78 @@ +description: Computes Psi, the derivative of Lgamma (the log of the absolute value of + +
+ + +
+ +# tf.math.digamma + + + + + + + + + +Computes Psi, the derivative of Lgamma (the log of the absolute value of + + + + + + + + + +`Gamma(x)`), element-wise. + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
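+
+A minimal sketch; digamma(1) is minus the Euler-Mascheroni constant and
+digamma(2) is one more than that (values approximate):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0])
+tf.math.digamma(x)  # approximately [-0.5772157, 0.4227843]
+```
+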
+ diff --git a/site/en/api_docs/python/tf/math/divide.md b/site/en/api_docs/python/tf/math/divide.md new file mode 100644 index 00000000000..0400338d2b3 --- /dev/null +++ b/site/en/api_docs/python/tf/math/divide.md @@ -0,0 +1,104 @@ +description: Computes Python style division of x by y. + +
+ + +
+ +# tf.math.divide + + + + + + + + + +Computes Python style division of `x` by `y`. + + + + + + + + + + +#### For example: + + + +``` +>>> x = tf.constant([16, 12, 11]) +>>> y = tf.constant([4, 6, 2]) +>>> tf.divide(x,y) + +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor` +
+`y` + +A `Tensor` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` with the same shape as the input.
+
+ diff --git a/site/en/api_docs/python/tf/math/divide_no_nan.md b/site/en/api_docs/python/tf/math/divide_no_nan.md new file mode 100644 index 00000000000..fd98d8eb013 --- /dev/null +++ b/site/en/api_docs/python/tf/math/divide_no_nan.md @@ -0,0 +1,89 @@ +description: Computes a safe divide which returns 0 if y is zero.
+
+
+ + +
+
+# tf.math.divide_no_nan
+
+
+
+
+
+
+
+
+
+Computes a safe divide which returns 0 if y is zero.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`y` + +A `Tensor` whose dtype is compatible with `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The element-wise value of `x` divided by `y`.
+
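+
+A short sketch contrasting the behavior at a zero denominator with ordinary
+division:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([3.0, 2.0])
+y = tf.constant([2.0, 0.0])
+tf.math.divide_no_nan(x, y)  # [1.5, 0.0] -- the zero denominator yields 0 instead of inf/nan
+```
+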
+ diff --git a/site/en/api_docs/python/tf/math/equal.md b/site/en/api_docs/python/tf/math/equal.md new file mode 100644 index 00000000000..b330edec869 --- /dev/null +++ b/site/en/api_docs/python/tf/math/equal.md @@ -0,0 +1,128 @@ +description: Returns the truth value of (x == y) element-wise. + +
+ + +
+ +# tf.math.equal + + + + + + + + + +Returns the truth value of (x == y) element-wise. + + + + + + + + + +Performs a [broadcast]( +https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the +arguments and then an element-wise equality comparison, returning a Tensor of +boolean values. + +#### For example: + + + +``` +>>> x = tf.constant([2, 4]) +>>> y = tf.constant(2) +>>> tf.math.equal(x, y) + +``` + +``` +>>> x = tf.constant([2, 4]) +>>> y = tf.constant([2, 4]) +>>> tf.math.equal(x, y) + +``` + + + + + + + + + + + + + + + + +
+`x` + +A tf.Tensor or tf.SparseTensor or tf.IndexedSlices. +
+`y` + +A tf.Tensor or tf.SparseTensor or tf.IndexedSlices. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + + + + + + + + + +
+tf.errors.InvalidArgumentError: If shapes of arguments are incompatible +
+ diff --git a/site/en/api_docs/python/tf/math/erf.md b/site/en/api_docs/python/tf/math/erf.md new file mode 100644 index 00000000000..d3256be6fe9 --- /dev/null +++ b/site/en/api_docs/python/tf/math/erf.md @@ -0,0 +1,80 @@ +description: Computes the Gauss error function of x element-wise. + +
+ + +
+ +# tf.math.erf + + + + + + + + + +Computes the Gauss error function of `x` element-wise. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.erf(x.values, ...), x.dense_shape)` +
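+
+A minimal sketch; erf is an odd function that saturates toward +/-1 (values
+approximate):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 1.0])
+tf.math.erf(x)  # approximately [-0.8427, 0.0, 0.8427]
+```
+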
+ diff --git a/site/en/api_docs/python/tf/math/erfc.md b/site/en/api_docs/python/tf/math/erfc.md new file mode 100644 index 00000000000..2a1be3e2ec4 --- /dev/null +++ b/site/en/api_docs/python/tf/math/erfc.md @@ -0,0 +1,77 @@ +description: Computes the complementary error function of x element-wise. + +
+ + +
+ +# tf.math.erfc + + + + + + + + + +Computes the complementary error function of `x` element-wise. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/erfinv.md b/site/en/api_docs/python/tf/math/erfinv.md new file mode 100644 index 00000000000..7968efbce62 --- /dev/null +++ b/site/en/api_docs/python/tf/math/erfinv.md @@ -0,0 +1,84 @@ +description: Compute inverse error function. + +
+ + +
+ +# tf.math.erfinv + + + + + + + + + +Compute inverse error function. + + + + + + + + + +Given `x`, compute the inverse error function of `x`. This function +is the inverse of tf.math.erf. + + + + + + + + + + + + + +
+`x` + +`Tensor` with type `float` or `double`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+Inverse error function of `x`. +
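+
+A round-trip sketch against tf.math.erf; the recovered values should match
+the inputs up to floating-point error:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.0, 0.5, 1.0])
+tf.math.erfinv(tf.math.erf(x))  # approximately [0.0, 0.5, 1.0]
+```
+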
+ diff --git a/site/en/api_docs/python/tf/math/exp.md b/site/en/api_docs/python/tf/math/exp.md new file mode 100644 index 00000000000..12a822c9e4c --- /dev/null +++ b/site/en/api_docs/python/tf/math/exp.md @@ -0,0 +1,122 @@ +description: Computes exponential of x element-wise. \\(y = e^x\\). + +
+ + +
+ +# tf.math.exp + + + + + + + + + +Computes exponential of x element-wise. \\(y = e^x\\). + + + + + + + + + +This function computes the exponential of the input tensor element-wise. +i.e. math.exp(x) or \\(e^x\\), where `x` is the input tensor. +\\(e\\) denotes Euler's number and is approximately equal to 2.718281. +Output is positive for any real input. + +``` +>>> x = tf.constant(2.0) +>>> tf.math.exp(x) + +``` + +``` +>>> x = tf.constant([2.0, 8.0]) +>>> tf.math.exp(x) + +``` + +For complex numbers, the exponential value is calculated as +\\(e^{x+iy}={e^x}{e^{iy}}={e^x}{\\cos(y)+i\\sin(y)}\\) + +For `1+1j` the value would be computed as: +\\(e^1{\\cos(1)+i\\sin(1)} = 2.7182817 \\times (0.5403023+0.84147096j)\\) + +``` +>>> x = tf.constant(1 + 1j) +>>> tf.math.exp(x) + +``` + + + + + + + + + + + + + +
+`x` + +A tf.Tensor. Must be one of the following types: `bfloat16`, `half`, +`float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.Tensor. Has the same type as `x`. +
+ + + + +#### Numpy Compatibility +Equivalent to np.exp + diff --git a/site/en/api_docs/python/tf/math/expm1.md b/site/en/api_docs/python/tf/math/expm1.md new file mode 100644 index 00000000000..5d904769838 --- /dev/null +++ b/site/en/api_docs/python/tf/math/expm1.md @@ -0,0 +1,90 @@ +description: Computes exp(x) - 1 element-wise. + +
+ + +
+ +# tf.math.expm1 + + + + + + + + + +Computes `exp(x) - 1` element-wise. + + + + + + + + + + i.e. `exp(x) - 1` or `e^(x) - 1`, where `x` is the input tensor. + `e` denotes Euler's number and is approximately equal to 2.718281. + + ```python + x = tf.constant(2.0) + tf.math.expm1(x) ==> 6.389056 + + x = tf.constant([2.0, 8.0]) + tf.math.expm1(x) ==> array([6.389056, 2979.958], dtype=float32) + + x = tf.constant(1 + 1j) + tf.math.expm1(x) ==> (0.46869393991588515+2.2873552871788423j) + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/floor.md b/site/en/api_docs/python/tf/math/floor.md new file mode 100644 index 00000000000..1bb589190a9 --- /dev/null +++ b/site/en/api_docs/python/tf/math/floor.md @@ -0,0 +1,80 @@ +description: Returns element-wise largest integer not greater than x. + +
+ + +
+ +# tf.math.floor + + + + + + + + + +Returns element-wise largest integer not greater than x. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
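+
+A minimal sketch; note that negative values round toward negative infinity:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.7, -1.7, 2.0])
+tf.math.floor(x)  # [1.0, -2.0, 2.0]
+```
+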
+ diff --git a/site/en/api_docs/python/tf/math/floordiv.md b/site/en/api_docs/python/tf/math/floordiv.md new file mode 100644 index 00000000000..f7807dae039 --- /dev/null +++ b/site/en/api_docs/python/tf/math/floordiv.md @@ -0,0 +1,115 @@ +description: Divides x / y elementwise, rounding toward the most negative integer. + +
+ + +
+ +# tf.math.floordiv + + + + + + + + + +Divides `x / y` elementwise, rounding toward the most negative integer. + + + + + + + + + +The same as tf.compat.v1.div(x,y) for integers, but uses +`tf.floor(tf.compat.v1.div(x,y))` for +floating point arguments so that the result is always an integer (though +possibly an integer represented as floating point). This op is generated by +`x // y` floor division in Python 3 and in Python 2.7 with +`from __future__ import division`. + +`x` and `y` must have the same type, and the result will have the same type +as well. + + + + + + + + + + + + + + + + +
+`x` + +`Tensor` numerator of real numeric type. +
+`y` + +`Tensor` denominator of real numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+`x / y` rounded down. +
+ + + + + + + + + + + + +
+`TypeError` + +If the inputs are complex. +
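+
+A short sketch of the rounding behavior for integer inputs:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([7, -7])
+y = tf.constant([2, 2])
+tf.math.floordiv(x, y)  # [3, -4], matching Python's // operator
+```
+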
+ diff --git a/site/en/api_docs/python/tf/math/floormod.md b/site/en/api_docs/python/tf/math/floormod.md new file mode 100644 index 00000000000..4910b4e4ac4 --- /dev/null +++ b/site/en/api_docs/python/tf/math/floormod.md @@ -0,0 +1,92 @@ +description: Returns element-wise remainder of division. When x < 0 xor y < 0 is + +
+ + +
+ +# tf.math.floormod + + + + + + + + + +Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + + + + + + + + + +true, this follows Python semantics in that the result here is consistent +with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`. + +*NOTE*: math.floormod supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
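+
+A small sketch illustrating the flooring-divide identity
+`floor(x / y) * y + mod(x, y) = x` described above:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([7, -7])
+y = tf.constant([3, 3])
+tf.math.floormod(x, y)                                # [1, 2], result takes the sign of y
+tf.math.floordiv(x, y) * y + tf.math.floormod(x, y)   # [7, -7], i.e. recovers x
+```
+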
+ diff --git a/site/en/api_docs/python/tf/math/greater.md b/site/en/api_docs/python/tf/math/greater.md new file mode 100644 index 00000000000..f135aae317e --- /dev/null +++ b/site/en/api_docs/python/tf/math/greater.md @@ -0,0 +1,103 @@ +description: Returns the truth value of (x > y) element-wise. + +
+ + +
+ +# tf.math.greater + + + + + + + + + +Returns the truth value of (x > y) element-wise. + + + + + + + + + +*NOTE*: math.greater supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 2, 5]) +tf.math.greater(x, y) ==> [False, True, True] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.greater(x, y) ==> [False, False, True] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/math/greater_equal.md b/site/en/api_docs/python/tf/math/greater_equal.md new file mode 100644 index 00000000000..def4e80b052 --- /dev/null +++ b/site/en/api_docs/python/tf/math/greater_equal.md @@ -0,0 +1,103 @@ +description: Returns the truth value of (x >= y) element-wise. + +
+ + +
+ +# tf.math.greater_equal + + + + + + + + + +Returns the truth value of (x >= y) element-wise. + + + + + + + + + +*NOTE*: math.greater_equal supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5, 2, 5, 10]) +tf.math.greater_equal(x, y) ==> [True, True, True, False] + +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5]) +tf.math.greater_equal(x, y) ==> [True, False, True, True] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/math/igamma.md b/site/en/api_docs/python/tf/math/igamma.md new file mode 100644 index 00000000000..ee17ef7ef12 --- /dev/null +++ b/site/en/api_docs/python/tf/math/igamma.md @@ -0,0 +1,97 @@ +description: Compute the lower regularized incomplete Gamma function P(a, x). + +
+ + +
+ +# tf.math.igamma + + + + + + + + + +Compute the lower regularized incomplete Gamma function `P(a, x)`. + + + + + + + + + +The lower regularized incomplete Gamma function is defined as: + + +\\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\\) + +where + +\\(gamma(a, x) = \\int_{0}^{x} t^{a-1} exp(-t) dt\\) + +is the lower incomplete Gamma function. + +Note, above `Q(a, x)` (`Igammac`) is the upper regularized complete +Gamma function. + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`x` + +A `Tensor`. Must have the same type as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
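+
+A minimal sketch; for a = 1 the lower regularized incomplete Gamma function
+reduces to 1 - exp(-x) (values approximate):
+
+```python
+import tensorflow as tf
+
+a = tf.constant(1.0)
+x = tf.constant([0.5, 1.0, 2.0])
+tf.math.igamma(a, x)  # approximately [0.3935, 0.6321, 0.8647], i.e. 1 - exp(-x)
+```
+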
+ diff --git a/site/en/api_docs/python/tf/math/igammac.md b/site/en/api_docs/python/tf/math/igammac.md new file mode 100644 index 00000000000..b0a36b7ef57 --- /dev/null +++ b/site/en/api_docs/python/tf/math/igammac.md @@ -0,0 +1,96 @@ +description: Compute the upper regularized incomplete Gamma function Q(a, x). + +
+ + +
+
+# tf.math.igammac
+
+
+
+
+
+
+
+
+
+Compute the upper regularized incomplete Gamma function `Q(a, x)`.
+
+
+
+
+
+
+
+
+
+The upper regularized incomplete Gamma function is defined as:
+
+\\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\\)
+
+where
+
+\\(Gamma(a, x) = \\int_{x}^{\infty} t^{a-1} exp(-t) dt\\)
+
+is the upper incomplete Gamma function.
+
+Note, above `P(a, x)` (`Igamma`) is the lower regularized complete
+Gamma function.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`x` + +A `Tensor`. Must have the same type as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
+ diff --git a/site/en/api_docs/python/tf/math/imag.md b/site/en/api_docs/python/tf/math/imag.md new file mode 100644 index 00000000000..77c438971af --- /dev/null +++ b/site/en/api_docs/python/tf/math/imag.md @@ -0,0 +1,95 @@ +description: Returns the imaginary part of a complex (or real) tensor. + +
+ + +
+ +# tf.math.imag + + + + + + + + + +Returns the imaginary part of a complex (or real) tensor. + + + + + + + + + +Given a tensor `input`, this operation returns a tensor of type `float` that +is the imaginary part of each element in `input` considered as a complex +number. If `input` is real, a tensor of all zeros is returned. + +#### For example: + + + +```python +x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j]) +tf.math.imag(x) # [4.75, 5.75] +``` + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float`, `double`, +`complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32` or `float64`. +
+ diff --git a/site/en/api_docs/python/tf/math/in_top_k.md b/site/en/api_docs/python/tf/math/in_top_k.md new file mode 100644 index 00000000000..7088437c632 --- /dev/null +++ b/site/en/api_docs/python/tf/math/in_top_k.md @@ -0,0 +1,109 @@ +description: Says whether the targets are in the top K predictions. + +
+ + +
+ +# tf.math.in_top_k + + + + + + + + + +Says whether the targets are in the top `K` predictions. + + + + + + + + + +This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the +prediction for the target class is finite (not inf, -inf, or nan) and among +the top `k` predictions among all predictions for example `i`. Note that the +behavior of `InTopK` differs from the `TopK` op in its handling of ties; if +multiple classes have the same prediction value and straddle the top-`k` +boundary, all of those classes are considered to be in the top `k`. + +More formally, let + + \\(predictions_i\\) be the predictions for all classes for example `i`, + \\(targets_i\\) be the target class for example `i`, + \\(out_i\\) be the output for example `i`, + +$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$ + + + + + + + + + + + + + + + + + + + +
+`predictions` + +A `Tensor` of type `float32`. +A `batch_size` x `classes` tensor. +
+`targets` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A `batch_size` vector of class ids. +
+`k` + +An `int`. Number of top elements to look at for computing precision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`. +
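+
+A small sketch; keyword arguments are spelled out to make the
+targets/predictions order explicit, and the expected boolean results are
+noted in comments:
+
+```python
+import tensorflow as tf
+
+predictions = tf.constant([[0.1, 0.2, 0.7],
+                           [0.3, 0.5, 0.2]])
+targets = tf.constant([2, 0])
+tf.math.in_top_k(targets=targets, predictions=predictions, k=1)  # [True, False]
+tf.math.in_top_k(targets=targets, predictions=predictions, k=2)  # [True, True]
+```
+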
+ diff --git a/site/en/api_docs/python/tf/math/invert_permutation.md b/site/en/api_docs/python/tf/math/invert_permutation.md new file mode 100644 index 00000000000..a294a1acbed --- /dev/null +++ b/site/en/api_docs/python/tf/math/invert_permutation.md @@ -0,0 +1,94 @@ +description: Computes the inverse permutation of a tensor. + +
+ + +
+ +# tf.math.invert_permutation + + + + + + + + + +Computes the inverse permutation of a tensor. + + + + + + + + + +This operation computes the inverse of an index permutation. It takes a 1-D +integer tensor `x`, which represents the indices of a zero-based array, and +swaps each value with its index position. In other words, for an output tensor +`y` and an input tensor `x`, this operation computes the following: + +`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]` + +The values must include 0. There can be no duplicate values or negative values. + +#### For example: + + + +``` +# tensor `x` is [3, 4, 0, 2, 1] +invert_permutation(x) ==> [2, 4, 3, 0, 1] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/is_finite.md b/site/en/api_docs/python/tf/math/is_finite.md new file mode 100644 index 00000000000..a1de75dfc82 --- /dev/null +++ b/site/en/api_docs/python/tf/math/is_finite.md @@ -0,0 +1,92 @@ +description: Returns which elements of x are finite. + +
+ + +
+ +# tf.math.is_finite + + + + + + + + + +Returns which elements of x are finite. + + + + + + + + + + + +#### Example: + + + +```python +x = tf.constant([5.0, 4.8, 6.8, np.inf, np.nan]) +tf.math.is_finite(x) ==> [True, True, True, False, False] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ + + +#### Numpy Compatibility +Equivalent to np.isfinite + diff --git a/site/en/api_docs/python/tf/math/is_inf.md b/site/en/api_docs/python/tf/math/is_inf.md new file mode 100644 index 00000000000..40ae7be1851 --- /dev/null +++ b/site/en/api_docs/python/tf/math/is_inf.md @@ -0,0 +1,92 @@ +description: Returns which elements of x are Inf. + +
+ + +
+ +# tf.math.is_inf + + + + + + + + + +Returns which elements of x are Inf. + + + + + + + + + + + +#### Example: + + + +```python +x = tf.constant([5.0, np.inf, 6.8, np.inf]) +tf.math.is_inf(x) ==> [False, True, False, True] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ + + +#### Numpy Compatibility +Equivalent to np.isinf + diff --git a/site/en/api_docs/python/tf/math/is_nan.md b/site/en/api_docs/python/tf/math/is_nan.md new file mode 100644 index 00000000000..70d93ba0f44 --- /dev/null +++ b/site/en/api_docs/python/tf/math/is_nan.md @@ -0,0 +1,92 @@ +description: Returns which elements of x are NaN. + +
+ + +
+ +# tf.math.is_nan + + + + + + + + + +Returns which elements of x are NaN. + + + + + + + + + + + +#### Example: + + + +```python +x = tf.constant([5.0, np.nan, 6.8, np.nan, np.inf]) +tf.math.is_nan(x) ==> [False, True, False, True, False] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ + + +#### Numpy Compatibility +Equivalent to np.isnan + diff --git a/site/en/api_docs/python/tf/math/is_non_decreasing.md b/site/en/api_docs/python/tf/math/is_non_decreasing.md new file mode 100644 index 00000000000..724d3f0ebc9 --- /dev/null +++ b/site/en/api_docs/python/tf/math/is_non_decreasing.md @@ -0,0 +1,113 @@ +description: Returns True if x is non-decreasing. + +
+ + +
+ +# tf.math.is_non_decreasing + + + + + + + + + +Returns `True` if `x` is non-decreasing. + + + + + + + + + +Elements of `x` are compared in row-major order. The tensor `[x[0],...]` +is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`. +If `x` has less than two elements, it is trivially non-decreasing. + +See also: `is_strictly_increasing` + +``` +>>> x1 = tf.constant([1.0, 1.0, 3.0]) +>>> tf.math.is_non_decreasing(x1) + +>>> x2 = tf.constant([3.0, 1.0, 2.0]) +>>> tf.math.is_non_decreasing(x2) + +``` + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`name` + +A name for this operation (optional). Defaults to "is_non_decreasing" +
+ + + + + + + + + + + +
+Boolean `Tensor`, equal to `True` iff `x` is non-decreasing. +
+ + + + + + + + + + + + +
+`TypeError` + +if `x` is not a numeric tensor. +
+ diff --git a/site/en/api_docs/python/tf/math/is_strictly_increasing.md b/site/en/api_docs/python/tf/math/is_strictly_increasing.md new file mode 100644 index 00000000000..3a5c870e252 --- /dev/null +++ b/site/en/api_docs/python/tf/math/is_strictly_increasing.md @@ -0,0 +1,114 @@ +description: Returns True if x is strictly increasing. + +
+ + +
+ +# tf.math.is_strictly_increasing + + + + + + + + + +Returns `True` if `x` is strictly increasing. + + + + + + + + + +Elements of `x` are compared in row-major order. The tensor `[x[0],...]` +is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`. +If `x` has less than two elements, it is trivially strictly increasing. + +See also: `is_non_decreasing` + +``` +>>> x1 = tf.constant([1.0, 2.0, 3.0]) +>>> tf.math.is_strictly_increasing(x1) + +>>> x2 = tf.constant([3.0, 1.0, 2.0]) +>>> tf.math.is_strictly_increasing(x2) + +``` + + + + + + + + + + + + + +
+`x` + +Numeric `Tensor`. +
+`name` + +A name for this operation (optional). +Defaults to "is_strictly_increasing" +
+ + + + + + + + + + + +
+Boolean `Tensor`, equal to `True` iff `x` is strictly increasing. +
+ + + + + + + + + + + + +
+`TypeError` + +if `x` is not a numeric tensor. +
+ diff --git a/site/en/api_docs/python/tf/math/l2_normalize.md b/site/en/api_docs/python/tf/math/l2_normalize.md new file mode 100644 index 00000000000..0cecce9f773 --- /dev/null +++ b/site/en/api_docs/python/tf/math/l2_normalize.md @@ -0,0 +1,101 @@ +description: Normalizes along dimension axis using an L2 norm. + +
+ + +
+ +# tf.math.l2_normalize + + + + + + + + + +Normalizes along dimension `axis` using an L2 norm. + + + + + + + + + +For a 1-D tensor with `axis = 0`, computes + + output = x / sqrt(max(sum(x**2), epsilon)) + +For `x` with more dimensions, independently normalizes each 1-D slice along +dimension `axis`. + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. +
+`axis` + +Dimension along which to normalize. A scalar or a vector of +integers. +
+`epsilon` + +A lower bound value for the norm. Will use `sqrt(epsilon)` as the +divisor if `norm < sqrt(epsilon)`. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` with the same shape as `x`. +
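+
+A minimal added sketch, with approximate values in the comments:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([3.0, 4.0])
+# The L2 norm of [3, 4] is 5, so the result is ~[0.6, 0.8].
+tf.math.l2_normalize(x, axis=0)
+```
+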
+ diff --git a/site/en/api_docs/python/tf/math/lbeta.md b/site/en/api_docs/python/tf/math/lbeta.md new file mode 100644 index 00000000000..1921678af8f --- /dev/null +++ b/site/en/api_docs/python/tf/math/lbeta.md @@ -0,0 +1,103 @@ +description: Computes \\(ln(|Beta(x)|)\\), reducing along the last dimension. + +
+ + +
+ +# tf.math.lbeta + + + + + + + + + +Computes \\(ln(|Beta(x)|)\\), reducing along the last dimension. + + + + + + + + + +Given one-dimensional $z = [z_1,...,z_K]$, we define + +$$Beta(z) = \frac{\prod_j \Gamma(z_j)}{\Gamma(\sum_j z_j)},$$ + +where $\Gamma$ is the gamma function. + +And for $n + 1$ dimensional $x$ with shape $[N_1, ..., N_n, K]$, we define + +$$lbeta(x)[i_1, ..., i_n] = \log{|Beta(x[i_1, ..., i_n, :])|}.$$ + +In other words, the last dimension is treated as the $z$ vector. + +Note that if $z = [u, v]$, then + +$$Beta(z) = \frac{\Gamma(u)\Gamma(v)}{\Gamma(u + v)} + = \int_0^1 t^{u-1} (1 - t)^{v-1} \mathrm{d}t,$$ + +which defines the traditional bivariate beta function. + +If the last dimension is empty, we follow the convention that the sum over +the empty set is zero, and the product is one. + + + + + + + + + + + + + +
+`x` + +A rank `n + 1` `Tensor`, `n >= 0` with type `float`, or `double`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The logarithm of \\(|Beta(x)|\\) reducing along the last dimension. +
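+
+An added illustrative sketch; the commented values are approximate:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, 2.0],
+                 [2.0, 2.0]])
+# Row [1, 2]: ln(Beta(1, 2)) = ln(1/2) ~= -0.6931
+# Row [2, 2]: ln(Beta(2, 2)) = ln(1/6) ~= -1.7918
+tf.math.lbeta(x)
+```
+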
+ diff --git a/site/en/api_docs/python/tf/math/less.md b/site/en/api_docs/python/tf/math/less.md new file mode 100644 index 00000000000..0d3a442dedd --- /dev/null +++ b/site/en/api_docs/python/tf/math/less.md @@ -0,0 +1,103 @@ +description: Returns the truth value of (x < y) element-wise. + +
+ + +
+ +# tf.math.less + + + + + + + + + +Returns the truth value of (x < y) element-wise. + + + + + + + + + +*NOTE*: math.less supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less(x, y) ==> [False, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 7]) +tf.math.less(x, y) ==> [False, True, True] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/math/less_equal.md b/site/en/api_docs/python/tf/math/less_equal.md new file mode 100644 index 00000000000..3ac747af316 --- /dev/null +++ b/site/en/api_docs/python/tf/math/less_equal.md @@ -0,0 +1,103 @@ +description: Returns the truth value of (x <= y) element-wise. + +
+ + +
+ +# tf.math.less_equal + + + + + + + + + +Returns the truth value of (x <= y) element-wise. + + + + + + + + + +*NOTE*: math.less_equal supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less_equal(x, y) ==> [True, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 6]) +tf.math.less_equal(x, y) ==> [True, True, True] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/math/lgamma.md b/site/en/api_docs/python/tf/math/lgamma.md new file mode 100644 index 00000000000..f95870a6283 --- /dev/null +++ b/site/en/api_docs/python/tf/math/lgamma.md @@ -0,0 +1,88 @@ +description: Computes the log of the absolute value of Gamma(x) element-wise. + +
+ + +
+ +# tf.math.lgamma + + + + + + + + + +Computes the log of the absolute value of `Gamma(x)` element-wise. + + + + + + + + + + For positive numbers, this function computes log((input - 1)!) for every element in the tensor. + `lgamma(5) = log((5-1)!) = log(4!) = log(24) = 3.1780539` + +#### Example: + + + +```python +x = tf.constant([0, 0.5, 1, 4.5, -4, -5.6]) +tf.math.lgamma(x) ==> [inf, 0.5723649, 0., 2.4537368, inf, -4.6477685] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/log.md b/site/en/api_docs/python/tf/math/log.md new file mode 100644 index 00000000000..cf3bb55e8cc --- /dev/null +++ b/site/en/api_docs/python/tf/math/log.md @@ -0,0 +1,91 @@ +description: Computes natural logarithm of x element-wise. + +
+ + +
+ +# tf.math.log + + + + + + + + + +Computes natural logarithm of x element-wise. + + + + + + + + + +I.e., \\(y = \log_e x\\). + +#### Example: + + + +```python +>>> x = tf.constant([0, 0.5, 1, 5]) +>>> tf.math.log(x) + + +``` + +See: https://en.wikipedia.org/wiki/Logarithm + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/log1p.md b/site/en/api_docs/python/tf/math/log1p.md new file mode 100644 index 00000000000..cbccd6f32e7 --- /dev/null +++ b/site/en/api_docs/python/tf/math/log1p.md @@ -0,0 +1,85 @@ +description: Computes natural logarithm of (1 + x) element-wise. + +
+ + +
+ +# tf.math.log1p + + + + + + + + + +Computes natural logarithm of (1 + x) element-wise. + + + + + + + + + +I.e., \\(y = \log_e (1 + x)\\). + +#### Example: + + +>>> x = tf.constant([0, 0.5, 1, 5]) +>>> tf.math.log1p(x) + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/log_sigmoid.md b/site/en/api_docs/python/tf/math/log_sigmoid.md new file mode 100644 index 00000000000..bf18f18dfef --- /dev/null +++ b/site/en/api_docs/python/tf/math/log_sigmoid.md @@ -0,0 +1,84 @@ +description: Computes log sigmoid of x element-wise. + +
+ + +
+ +# tf.math.log_sigmoid + + + + + + + + + +Computes log sigmoid of `x` element-wise. + + + + + + + + + +Specifically, `y = log(1 / (1 + exp(-x)))`. For numerical stability, +we use `y = -tf.nn.softplus(-x)`. + + + + + + + + + + + + + +
+`x` + +A Tensor with type `float32` or `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A Tensor with the same type as `x`. +
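+
+A short added sketch with approximate values:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 1.0])
+# log(sigmoid(-1)) ~= -1.3133, log(sigmoid(0)) = log(0.5) ~= -0.6931,
+# log(sigmoid(1)) ~= -0.3133
+tf.math.log_sigmoid(x)
+```
+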
+ diff --git a/site/en/api_docs/python/tf/math/logical_and.md b/site/en/api_docs/python/tf/math/logical_and.md new file mode 100644 index 00000000000..ff986eb62d1 --- /dev/null +++ b/site/en/api_docs/python/tf/math/logical_and.md @@ -0,0 +1,125 @@ +description: Logical AND function. + +
+ + +
+ +# tf.math.logical_and + + + + + + + + + +Logical AND function. + + + + + + + + + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical AND with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical AND of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_and(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_and(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_and(y, z) + +``` + + + + + + + + + + + + + + + + +
+`x`
+
+A tf.Tensor of type bool.
+
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.Tensor of type bool with the same size as that of x or y. +
+ diff --git a/site/en/api_docs/python/tf/math/logical_not.md b/site/en/api_docs/python/tf/math/logical_not.md new file mode 100644 index 00000000000..6fb00646254 --- /dev/null +++ b/site/en/api_docs/python/tf/math/logical_not.md @@ -0,0 +1,89 @@ +description: Returns the truth value of NOT x element-wise. + +
+ + +
+ +# tf.math.logical_not + + + + + + + + + +Returns the truth value of `NOT x` element-wise. + + + + + + + + + + +#### Example: + + + +``` +>>> tf.math.logical_not(tf.constant([True, False])) + +``` + + + + + + + + + + + + + +
+`x`
+
+A `Tensor` of type `bool`.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/math/logical_or.md b/site/en/api_docs/python/tf/math/logical_or.md new file mode 100644 index 00000000000..48beeb86967 --- /dev/null +++ b/site/en/api_docs/python/tf/math/logical_or.md @@ -0,0 +1,89 @@ +description: Returns the truth value of x OR y element-wise. + +
+ + +
+ +# tf.math.logical_or + + + + + + + + + +Returns the truth value of x OR y element-wise. + + + + + + + + + +*NOTE*: math.logical_or supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/math/logical_xor.md b/site/en/api_docs/python/tf/math/logical_xor.md new file mode 100644 index 00000000000..bd60984b362 --- /dev/null +++ b/site/en/api_docs/python/tf/math/logical_xor.md @@ -0,0 +1,124 @@ +description: Logical XOR function. + +
+ + +
+ +# tf.math.logical_xor + + + + + + + + + +Logical XOR function. + + + + + + + + + +x ^ y = (x | y) & ~(x & y) + +The operation works for the following input types: + +- Two single elements of type `bool` +- One tf.Tensor of type `bool` and one single `bool`, where the result will + be calculated by applying logical XOR with the single element to each + element in the larger Tensor. +- Two tf.Tensor objects of type `bool` of the same shape. In this case, + the result will be the element-wise logical XOR of the two input tensors. + +#### Usage: + + + +``` +>>> a = tf.constant([True]) +>>> b = tf.constant([False]) +>>> tf.math.logical_xor(a, b) + +``` + +``` +>>> c = tf.constant([True]) +>>> x = tf.constant([False, True, True, False]) +>>> tf.math.logical_xor(c, x) + +``` + +``` +>>> y = tf.constant([False, False, True, True]) +>>> z = tf.constant([False, True, False, True]) +>>> tf.math.logical_xor(y, z) + +``` + + + + + + + + + + + + + + + + +
+`x`
+
+A tf.Tensor of type bool.
+
+`y` + +A tf.Tensor of type bool. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.Tensor of type bool with the same size as that of x or y. +
+ diff --git a/site/en/api_docs/python/tf/math/maximum.md b/site/en/api_docs/python/tf/math/maximum.md new file mode 100644 index 00000000000..b9b7db495c3 --- /dev/null +++ b/site/en/api_docs/python/tf/math/maximum.md @@ -0,0 +1,95 @@ +description: Returns the max of x and y (i.e. x > y ? x : y) element-wise. + +
+ + +
+ +# tf.math.maximum + + + + + + + + + +Returns the max of x and y (i.e. x > y ? x : y) element-wise. + + + + + + + + + + +#### Example: + + +>>> x = tf.constant([0., 0., 0., 0.]) +>>> y = tf.constant([-2., 0., 2., 5.]) +>>> tf.math.maximum(x, y) + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/minimum.md b/site/en/api_docs/python/tf/math/minimum.md new file mode 100644 index 00000000000..8d9eb10a2f2 --- /dev/null +++ b/site/en/api_docs/python/tf/math/minimum.md @@ -0,0 +1,95 @@ +description: Returns the min of x and y (i.e. x < y ? x : y) element-wise. + +
+ + +
+ +# tf.math.minimum + + + + + + + + + +Returns the min of x and y (i.e. x < y ? x : y) element-wise. + + + + + + + + + + +#### Example: + + +>>> x = tf.constant([0., 0., 0., 0.]) +>>> y = tf.constant([-5., -2., 0., 3.]) +>>> tf.math.minimum(x, y) + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/multiply.md b/site/en/api_docs/python/tf/math/multiply.md new file mode 100644 index 00000000000..19474e918ec --- /dev/null +++ b/site/en/api_docs/python/tf/math/multiply.md @@ -0,0 +1,140 @@ +description: Returns an element-wise x * y. + +
+ + +
+
+# tf.math.multiply
+
+
+
+
+
+
+
+
+
+Returns an element-wise x * y.
+
+
+
+
+
+
+
+
+
+
+#### For example:
+
+
+
+```
+>>> x = tf.constant(([1, 2, 3, 4]))
+>>> tf.math.multiply(x, x)
+
+```
+
+Since tf.math.multiply will convert its arguments to `Tensor`s, you can also
+pass in non-`Tensor` arguments:
+
+```
+>>> tf.math.multiply(7,6)
+
+```
+
+If `x.shape` is not the same as `y.shape`, they will be broadcast to a
+compatible shape. (More about broadcasting
+[here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).)
+
+#### For example:
+
+
+
+```
+>>> x = tf.ones([1, 2]);
+>>> y = tf.ones([2, 1]);
+>>> x * y  # Taking advantage of operator overloading
+
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`x` + +A Tensor. Must be one of the following types: `bfloat16`, +`half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, +`int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + +
+ + +A `Tensor`. Has the same type as `x`. + + + + + + + + + +
+* InvalidArgumentError: When `x` and `y` have incompatible shapes or types.
+
+ diff --git a/site/en/api_docs/python/tf/math/multiply_no_nan.md b/site/en/api_docs/python/tf/math/multiply_no_nan.md new file mode 100644 index 00000000000..530c758a331 --- /dev/null +++ b/site/en/api_docs/python/tf/math/multiply_no_nan.md @@ -0,0 +1,89 @@ +description: Computes the product of x and y and returns 0 if the y is zero, even if x is NaN or infinite. + +
+ + +
+ +# tf.math.multiply_no_nan + + + + + + + + + +Computes the product of x and y and returns 0 if the y is zero, even if x is NaN or infinite. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`y` + +A `Tensor` whose dtype is compatible with `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The element-wise value of `x` times `y`.
+
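+
+A small added sketch illustrating the zero-suppression behavior:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+x = tf.constant([2.0, np.nan, np.inf])
+y = tf.constant([3.0, 0.0, 0.0])
+# Zeros in y suppress the NaN and inf in x: ~[6.0, 0.0, 0.0]
+tf.math.multiply_no_nan(x, y)
+```
+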
+ diff --git a/site/en/api_docs/python/tf/math/ndtri.md b/site/en/api_docs/python/tf/math/ndtri.md new file mode 100644 index 00000000000..4acddc6985a --- /dev/null +++ b/site/en/api_docs/python/tf/math/ndtri.md @@ -0,0 +1,82 @@ +description: Compute quantile of Standard Normal. + +
+ + +
+ +# tf.math.ndtri + + + + + + + + + +Compute quantile of Standard Normal. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +`Tensor` with type `float` or `double`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The quantile of the standard normal distribution corresponding to `x`
+(the inverse of the standard normal CDF).
+
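+
+A brief added sketch (approximate values):
+
+```python
+import tensorflow as tf
+
+p = tf.constant([0.025, 0.5, 0.975])
+# Inverse CDF of the standard normal: ~[-1.96, 0.0, 1.96]
+tf.math.ndtri(p)
+```
+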
+ diff --git a/site/en/api_docs/python/tf/math/negative.md b/site/en/api_docs/python/tf/math/negative.md new file mode 100644 index 00000000000..a37cf9462b6 --- /dev/null +++ b/site/en/api_docs/python/tf/math/negative.md @@ -0,0 +1,84 @@ +description: Computes numerical negative value element-wise. + +
+ + +
+ +# tf.math.negative + + + + + + + + + +Computes numerical negative value element-wise. + + + + + + + + + +I.e., \\(y = -x\\). + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)` +
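+
+A minimal added sketch:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1, -2, 0, 3])
+tf.math.negative(x)  # [-1, 2, 0, -3]
+```
+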
+ diff --git a/site/en/api_docs/python/tf/math/nextafter.md b/site/en/api_docs/python/tf/math/nextafter.md new file mode 100644 index 00000000000..a37bdbcfbba --- /dev/null +++ b/site/en/api_docs/python/tf/math/nextafter.md @@ -0,0 +1,94 @@ +description: Returns the next representable value of x1 in the direction of x2, element-wise. + +
+ + +
+ +# tf.math.nextafter + + + + + + + + + +Returns the next representable value of `x1` in the direction of `x2`, element-wise. + + + + + + + + + +This operation returns the same result as the C++ std::nextafter function. + +It can also return a subnormal number. + + + + + + + + + + + + + + + + + + +
+`x1` + +A `Tensor`. Must be one of the following types: `float64`, `float32`. +
+`x2` + +A `Tensor`. Must have the same type as `x1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x1`. +
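+
+An added sketch using `float64`, where one step is one unit in the last place (ulp); the commented values are approximate:
+
+```python
+import tensorflow as tf
+
+x1 = tf.constant([1.0, 1.0], dtype=tf.float64)
+x2 = tf.constant([2.0, 0.0], dtype=tf.float64)
+# Steps one representable float64 value from 1.0 toward 2.0 and toward 0.0:
+# ~[1.0000000000000002, 0.9999999999999999]
+tf.math.nextafter(x1, x2)
+```
+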
+ + + +#### Cpp Compatibility +Equivalent to C++ std::nextafter function. + diff --git a/site/en/api_docs/python/tf/math/not_equal.md b/site/en/api_docs/python/tf/math/not_equal.md new file mode 100644 index 00000000000..e95066d6ea4 --- /dev/null +++ b/site/en/api_docs/python/tf/math/not_equal.md @@ -0,0 +1,128 @@ +description: Returns the truth value of (x != y) element-wise. + +
+ + +
+ +# tf.math.not_equal + + + + + + + + + +Returns the truth value of (x != y) element-wise. + + + + + + + + + +Performs a [broadcast]( +https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the +arguments and then an element-wise inequality comparison, returning a Tensor +of boolean values. + +#### For example: + + + +``` +>>> x = tf.constant([2, 4]) +>>> y = tf.constant(2) +>>> tf.math.not_equal(x, y) + +``` + +``` +>>> x = tf.constant([2, 4]) +>>> y = tf.constant([2, 4]) +>>> tf.math.not_equal(x, y) + +``` + + + + + + + + + + + + + + + + +
+`x` + +A tf.Tensor or tf.SparseTensor or tf.IndexedSlices. +
+`y` + +A tf.Tensor or tf.SparseTensor or tf.IndexedSlices. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.Tensor of type bool with the same size as that of x or y. +
+ + + + + + + + + + + +
+tf.errors.InvalidArgumentError: If shapes of arguments are incompatible +
+ diff --git a/site/en/api_docs/python/tf/math/polygamma.md b/site/en/api_docs/python/tf/math/polygamma.md new file mode 100644 index 00000000000..70f747e4362 --- /dev/null +++ b/site/en/api_docs/python/tf/math/polygamma.md @@ -0,0 +1,91 @@ +description: Compute the polygamma function \\(\psi^{(n)}(x)\\). + +
+ + +
+
+# tf.math.polygamma
+
+
+
+
+
+
+
+
+
+Compute the polygamma function \\(\psi^{(n)}(x)\\).
+
+
+
+
+
+
+
+
+
+The polygamma function is defined as:
+
+
+\\(\psi^{(a)}(x) = \frac{d^a}{dx^a} \psi(x)\\)
+
+where \\(\psi(x)\\) is the digamma function.
+The polygamma function is defined only for non-negative integer orders \\(a\\).
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`x` + +A `Tensor`. Must have the same type as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
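+
+An added sketch using known special values (approximate):
+
+```python
+import tensorflow as tf
+
+a = tf.constant([0.0, 1.0])  # order 0 is the digamma, order 1 the trigamma
+x = tf.constant([1.0, 1.0])
+# psi^(0)(1) = -(Euler-Mascheroni constant) ~= -0.5772
+# psi^(1)(1) = pi**2 / 6 ~= 1.6449
+tf.math.polygamma(a, x)
+```
+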
+ diff --git a/site/en/api_docs/python/tf/math/polyval.md b/site/en/api_docs/python/tf/math/polyval.md new file mode 100644 index 00000000000..46f0c6c5145 --- /dev/null +++ b/site/en/api_docs/python/tf/math/polyval.md @@ -0,0 +1,138 @@ +description: Computes the elementwise value of a polynomial. + +
+ + +
+
+# tf.math.polyval
+
+
+
+
+
+
+
+
+
+Computes the elementwise value of a polynomial.
+
+
+
+
+
+
+
+
+
+If `x` is a tensor and `coeffs` is a list of n + 1 tensors,
+this function returns the value of the n-th order polynomial
+
+  p(x) = coeffs[n] + coeffs[n-1] * x + ... + coeffs[0] * x**n
+
+evaluated using Horner's method, i.e.
+
+  p(x) = coeffs[n] + x * (coeffs[n-1] + ... + x * (coeffs[1] +
+         x * coeffs[0]))
+
+Usage Example:
+
+```
+>>> coefficients = [1.0, 2.5, -4.2]
+>>> x = 5.0
+>>> y = tf.math.polyval(coefficients, x)
+>>> y
+
+```
+
+#### Usage Example:
+
+
+
+```
+>>> tf.math.polyval([2, 1, 0], 3) # evaluates 2 * (3**2) + 1 * (3**1) + 0 * (3**0)
+
+```
+
+tf.math.polyval can also be used in polynomial regression. Taking
+advantage of this function can facilitate writing a polynomial equation
+as compared to explicitly writing it out, especially for higher degree
+polynomials.
+
+```
+>>> x = tf.constant(3)
+>>> theta1 = tf.Variable(2)
+>>> theta2 = tf.Variable(1)
+>>> theta3 = tf.Variable(0)
+>>> tf.math.polyval([theta1, theta2, theta3], x)
+
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`coeffs` + +A list of `Tensor` representing the coefficients of the polynomial. +
+`x` + +A `Tensor` representing the variable of the polynomial. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of the same shape as the expression p(x), with the usual broadcasting
+rules for element-wise addition and multiplication applied.
+
+ + + + +#### Numpy Compatibility +Equivalent to numpy.polyval. + diff --git a/site/en/api_docs/python/tf/math/pow.md b/site/en/api_docs/python/tf/math/pow.md new file mode 100644 index 00000000000..e7d87f0472f --- /dev/null +++ b/site/en/api_docs/python/tf/math/pow.md @@ -0,0 +1,102 @@ +description: Computes the power of one value to another. + +
+ + +
+ +# tf.math.pow + + + + + + + + + +Computes the power of one value to another. + + + + + + + + + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +```python +x = tf.constant([[2, 2], [3, 3]]) +y = tf.constant([[8, 16], [2, 3]]) +tf.pow(x, y) # [[256, 65536], [9, 27]] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`y` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`, +`complex64`, or `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. +
+ diff --git a/site/en/api_docs/python/tf/math/real.md b/site/en/api_docs/python/tf/math/real.md new file mode 100644 index 00000000000..bd28487ce9a --- /dev/null +++ b/site/en/api_docs/python/tf/math/real.md @@ -0,0 +1,95 @@ +description: Returns the real part of a complex (or real) tensor. + +
+ + +
+ +# tf.math.real + + + + + + + + + +Returns the real part of a complex (or real) tensor. + + + + + + + + + +Given a tensor `input`, this operation returns a tensor of type `float` that +is the real part of each element in `input` considered as a complex number. + +#### For example: + + + +```python +x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j]) +tf.math.real(x) # [-2.25, 3.25] +``` + +If `input` is already real, it is returned unchanged. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must have numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32` or `float64`. +
+ diff --git a/site/en/api_docs/python/tf/math/reciprocal.md b/site/en/api_docs/python/tf/math/reciprocal.md new file mode 100644 index 00000000000..c5f299f8db2 --- /dev/null +++ b/site/en/api_docs/python/tf/math/reciprocal.md @@ -0,0 +1,78 @@ +description: Computes the reciprocal of x element-wise. + +
+ + +
+ +# tf.math.reciprocal + + + + + + + + + +Computes the reciprocal of x element-wise. + + + + + + + + + +I.e., \\(y = 1 / x\\). + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
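+
+A minimal added sketch:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([2.0, 0.5, -4.0])
+tf.math.reciprocal(x)  # [0.5, 2.0, -0.25]
+```
+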
+ diff --git a/site/en/api_docs/python/tf/math/reciprocal_no_nan.md b/site/en/api_docs/python/tf/math/reciprocal_no_nan.md new file mode 100644 index 00000000000..44a655e1b73 --- /dev/null +++ b/site/en/api_docs/python/tf/math/reciprocal_no_nan.md @@ -0,0 +1,110 @@ +description: Performs a safe reciprocal operation, element wise. + +
+ + +
+ +# tf.math.reciprocal_no_nan + + + + + + + + + +Performs a safe reciprocal operation, element wise. + + + + + + + + + +If a particular element is zero, the reciprocal for that element is +also set to zero. + +#### For example: + + +```python +x = tf.constant([2.0, 0.5, 0, 1], dtype=tf.float32) +tf.math.reciprocal_no_nan(x) # [ 0.5, 2, 0.0, 1.0 ] +``` + + + + + + + + + + + + + +
+`x`
+
+A `Tensor` of type `float16`, `float32`, `float64`, `complex64` or
+`complex128`.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of same shape and type as `x`. +
+ + + + + + + + + + + + +
+`TypeError` + +x must be of a valid dtype. +
+ diff --git a/site/en/api_docs/python/tf/math/reduce_any.md b/site/en/api_docs/python/tf/math/reduce_any.md new file mode 100644 index 00000000000..b4d2d1b0744 --- /dev/null +++ b/site/en/api_docs/python/tf/math/reduce_any.md @@ -0,0 +1,119 @@ +description: Computes the "logical or" of elements across dimensions of a tensor. + +
+ + +
+ +# tf.math.reduce_any + + + + + + + + + +Computes the "logical or" of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + +#### For example: + + + +```python +x = tf.constant([[True, True], [False, False]]) +tf.reduce_any(x) # True +tf.reduce_any(x, 0) # [True, True] +tf.reduce_any(x, 1) # [True, False] +``` + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The boolean tensor to reduce. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.any + diff --git a/site/en/api_docs/python/tf/math/reduce_euclidean_norm.md b/site/en/api_docs/python/tf/math/reduce_euclidean_norm.md new file mode 100644 index 00000000000..743ebe6c519 --- /dev/null +++ b/site/en/api_docs/python/tf/math/reduce_euclidean_norm.md @@ -0,0 +1,120 @@ +description: Computes the Euclidean norm of elements across dimensions of a tensor. + +
+ + +
+ +# tf.math.reduce_euclidean_norm + + + + + + + + + +Computes the Euclidean norm of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + +#### For example: + + + +```python +x = tf.constant([[1, 2, 3], [1, 1, 1]]) # x.dtype is tf.int32 +tf.math.reduce_euclidean_norm(x) # returns 4 as dtype is tf.int32 +y = tf.constant([[1, 2, 3], [1, 1, 1]], dtype = tf.float32) +tf.math.reduce_euclidean_norm(y) # returns 4.1231055 which is sqrt(17) +tf.math.reduce_euclidean_norm(y, 0) # [sqrt(2), sqrt(5), sqrt(10)] +tf.math.reduce_euclidean_norm(y, 1) # [sqrt(14), sqrt(3)] +tf.math.reduce_euclidean_norm(y, 1, keepdims=True) # [[sqrt(14)], [sqrt(3)]] +tf.math.reduce_euclidean_norm(y, [0, 1]) # sqrt(17) +``` + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced tensor, of the same dtype as the input_tensor. +
+ diff --git a/site/en/api_docs/python/tf/math/reduce_logsumexp.md b/site/en/api_docs/python/tf/math/reduce_logsumexp.md new file mode 100644 index 00000000000..76e78cafca3 --- /dev/null +++ b/site/en/api_docs/python/tf/math/reduce_logsumexp.md @@ -0,0 +1,119 @@ +description: Computes log(sum(exp(elements across dimensions of a tensor))). + +
+ + +
+ +# tf.math.reduce_logsumexp + + + + + + + + + +Computes log(sum(exp(elements across dimensions of a tensor))). + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +This function is more numerically stable than log(sum(exp(input))). It avoids +overflows caused by taking the exp of large inputs and underflows caused by +taking the log of small inputs. + +#### For example: + + + +```python +x = tf.constant([[0., 0., 0.], [0., 0., 0.]]) +tf.reduce_logsumexp(x) # log(6) +tf.reduce_logsumexp(x, 0) # [log(2), log(2), log(2)] +tf.reduce_logsumexp(x, 1) # [log(3), log(3)] +tf.reduce_logsumexp(x, 1, keepdims=True) # [[log(3)], [log(3)]] +tf.reduce_logsumexp(x, [0, 1]) # log(6) +``` + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced tensor. +
+ diff --git a/site/en/api_docs/python/tf/math/reduce_max.md b/site/en/api_docs/python/tf/math/reduce_max.md new file mode 100644 index 00000000000..a341e8339fe --- /dev/null +++ b/site/en/api_docs/python/tf/math/reduce_max.md @@ -0,0 +1,126 @@ +description: Computes the maximum of elements across dimensions of a tensor. + +
+ + +
+ +# tf.math.reduce_max + + + + + + + + + +Computes the maximum of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + +#### Usage example: + + + +``` +>>> x = tf.constant([5, 1, 2, 4]) +>>> print(tf.reduce_max(x)) +tf.Tensor(5, shape=(), dtype=int32) +>>> x = tf.constant([-5, -1, -2, -4]) +>>> print(tf.reduce_max(x)) +tf.Tensor(-1, shape=(), dtype=int32) +>>> x = tf.constant([4, float('nan')]) +>>> print(tf.reduce_max(x)) +tf.Tensor(4.0, shape=(), dtype=float32) +>>> x = tf.constant([float('nan'), float('nan')]) +>>> print(tf.reduce_max(x)) +tf.Tensor(-inf, shape=(), dtype=float32) +>>> x = tf.constant([float('-inf'), float('inf')]) +>>> print(tf.reduce_max(x)) +tf.Tensor(inf, shape=(), dtype=float32) +``` + +See the numpy docs for `np.amax` and `np.nanmax` behavior. + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have real numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced tensor. +
+ diff --git a/site/en/api_docs/python/tf/math/reduce_mean.md b/site/en/api_docs/python/tf/math/reduce_mean.md new file mode 100644 index 00000000000..2bbeab6f7c2 --- /dev/null +++ b/site/en/api_docs/python/tf/math/reduce_mean.md @@ -0,0 +1,138 @@ +description: Computes the mean of elements across dimensions of a tensor. + +
+ + +
+ +# tf.math.reduce_mean + + + + + + + + + +Computes the mean of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis` by computing the +mean of elements across the dimensions in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions are retained +with length 1. + +If `axis` is None, all dimensions are reduced, and a tensor with a single +element is returned. + +#### For example: + + + +``` +>>> x = tf.constant([[1., 1.], [2., 2.]]) +>>> tf.reduce_mean(x) + +>>> tf.reduce_mean(x, 0) + +>>> tf.reduce_mean(x, 1) + +``` + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.mean + +Please note that `np.mean` has a `dtype` parameter that could be used to +specify the output type. By default this is `dtype=float64`. On the other +hand, tf.reduce_mean has an aggressive type inference from `input_tensor`, +for example: + +``` +>>> x = tf.constant([1, 0, 1, 0]) +>>> tf.reduce_mean(x) + +>>> y = tf.constant([1., 0., 1., 0.]) +>>> tf.reduce_mean(y) + +``` + + diff --git a/site/en/api_docs/python/tf/math/reduce_min.md b/site/en/api_docs/python/tf/math/reduce_min.md new file mode 100644 index 00000000000..eeaf42f5ee0 --- /dev/null +++ b/site/en/api_docs/python/tf/math/reduce_min.md @@ -0,0 +1,116 @@ +description: Computes the minimum of elements across dimensions of a tensor. + +
+ + +
+ +# tf.math.reduce_min + + + + + + + + + +Computes the minimum of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have real numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced tensor. +
+ + + +#### For example: + +>>> a = tf.constant([[1, 2], [3, 4]]) +>>> tf.reduce_min(a) + + + + + +#### Numpy Compatibility +Equivalent to np.min + diff --git a/site/en/api_docs/python/tf/math/reduce_prod.md b/site/en/api_docs/python/tf/math/reduce_prod.md new file mode 100644 index 00000000000..eab5c01da85 --- /dev/null +++ b/site/en/api_docs/python/tf/math/reduce_prod.md @@ -0,0 +1,108 @@ +description: Computes the product of elements across dimensions of a tensor. + +
+ + +
+ +# tf.math.reduce_prod + + + + + + + + + +Computes the product of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced tensor. +
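+
+An added sketch mirroring the other reduction examples:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2, 3],
+                 [4, 5, 6]])
+tf.math.reduce_prod(x)     # 720
+tf.math.reduce_prod(x, 0)  # [4, 10, 18]
+tf.math.reduce_prod(x, 1)  # [6, 120]
+```
+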
+ + + + +#### Numpy Compatibility +Equivalent to np.prod + diff --git a/site/en/api_docs/python/tf/math/reduce_std.md b/site/en/api_docs/python/tf/math/reduce_std.md new file mode 100644 index 00000000000..f6021a4cc9f --- /dev/null +++ b/site/en/api_docs/python/tf/math/reduce_std.md @@ -0,0 +1,126 @@ +description: Computes the standard deviation of elements across dimensions of a tensor. + +
+ + +
+ +# tf.math.reduce_std + + + + + + + + + +Computes the standard deviation of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + +#### For example: + + + +```python +x = tf.constant([[1., 2.], [3., 4.]]) +tf.reduce_std(x) # 1.1180339887498949 +tf.reduce_std(x, 0) # [1., 1.] +tf.reduce_std(x, 1) # [0.5, 0.5] +``` + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name scope for the associated operations (optional). +
+ + + + + + + + + + + +
+The reduced tensor, of the same dtype as the input_tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.std + +Please note that `np.std` has a `dtype` parameter that could be used to +specify the output type. By default this is `dtype=float64`. On the other +hand, `tf.reduce_std` has an aggressive type inference from `input_tensor`, + diff --git a/site/en/api_docs/python/tf/math/reduce_sum.md b/site/en/api_docs/python/tf/math/reduce_sum.md new file mode 100644 index 00000000000..9e4983d013e --- /dev/null +++ b/site/en/api_docs/python/tf/math/reduce_sum.md @@ -0,0 +1,122 @@ +description: Computes the sum of elements across dimensions of a tensor. + +
+ + +
+ +# tf.math.reduce_sum + + + + + + + + + +Computes the sum of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + +#### For example: + + + +```python +x = tf.constant([[1, 1, 1], [1, 1, 1]]) +tf.reduce_sum(x) # 6 +tf.reduce_sum(x, 0) # [2, 2, 2] +tf.reduce_sum(x, 1) # [3, 3] +tf.reduce_sum(x, 1, keepdims=True) # [[3], [3]] +tf.reduce_sum(x, [0, 1]) # 6 +``` + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced tensor, of the same dtype as the input_tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.sum apart the fact that numpy upcast uint8 and int32 to +int64 while tensorflow returns the same dtype as the input. + diff --git a/site/en/api_docs/python/tf/math/reduce_variance.md b/site/en/api_docs/python/tf/math/reduce_variance.md new file mode 100644 index 00000000000..3843fc31d9e --- /dev/null +++ b/site/en/api_docs/python/tf/math/reduce_variance.md @@ -0,0 +1,127 @@ +description: Computes the variance of elements across dimensions of a tensor. + +
+ + +
+ +# tf.math.reduce_variance + + + + + + + + + +Computes the variance of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + +#### For example: + + + +```python +x = tf.constant([[1., 2.], [3., 4.]]) +tf.reduce_variance(x) # 1.25 +tf.reduce_variance(x, 0) # [1., 1.] +tf.reduce_variance(x, 1) # [0.25, 0.25] +``` + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The tensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name scope for the associated operations (optional). +
+ + + + + + + + + + + +
+The reduced tensor, of the same dtype as the input_tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.var + +Please note that `np.var` has a `dtype` parameter that could be used to +specify the output type. By default this is `dtype=float64`. On the other +hand, `tf.reduce_variance` has an aggressive type inference from +`input_tensor`, + diff --git a/site/en/api_docs/python/tf/math/rint.md b/site/en/api_docs/python/tf/math/rint.md new file mode 100644 index 00000000000..6e775477493 --- /dev/null +++ b/site/en/api_docs/python/tf/math/rint.md @@ -0,0 +1,86 @@ +description: Returns element-wise integer closest to x. + +
+ + +
+ +# tf.math.rint + + + + + + + + + +Returns element-wise integer closest to x. + + + + + + + + + +If the result is midway between two representable values, +the even representable is chosen. +For example: + +``` +rint(-1.5) ==> -2.0 +rint(0.5000001) ==> 1.0 +rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/round.md b/site/en/api_docs/python/tf/math/round.md new file mode 100644 index 00000000000..ed96ed0d43b --- /dev/null +++ b/site/en/api_docs/python/tf/math/round.md @@ -0,0 +1,93 @@ +description: Rounds the values of a tensor to the nearest integer, element-wise. + +
+ + +
+ +# tf.math.round + + + + + + + + + +Rounds the values of a tensor to the nearest integer, element-wise. + + + + + + + + + +Rounds half to even. Also known as bankers rounding. If you want to round +according to the current system rounding mode use tf::cint. +For example: + +```python +x = tf.constant([0.9, 2.5, 2.3, 1.5, -4.5]) +tf.round(x) # [ 1.0, 2.0, 2.0, 2.0, -4.0 ] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor` of type `float16`, `float32`, `float64`, `int32`, or `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of same shape and type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/rsqrt.md b/site/en/api_docs/python/tf/math/rsqrt.md new file mode 100644 index 00000000000..c43b6fc601f --- /dev/null +++ b/site/en/api_docs/python/tf/math/rsqrt.md @@ -0,0 +1,94 @@ +description: Computes reciprocal of square root of x element-wise. + +
+ + +
+ +# tf.math.rsqrt + + + + + + + + + +Computes reciprocal of square root of x element-wise. + + + + + + + + + + +#### For example: + + + +``` +>>> x = tf.constant([2., 0., -2.]) +>>> tf.math.rsqrt(x) + +``` + + + + + + + + + + + + + +
+`x`
+
+A tf.Tensor. Must be one of the following types: `bfloat16`, `half`,
+`float32`, `float64`.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.Tensor. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/scalar_mul.md b/site/en/api_docs/python/tf/math/scalar_mul.md new file mode 100644 index 00000000000..0d9fe3ee2fa --- /dev/null +++ b/site/en/api_docs/python/tf/math/scalar_mul.md @@ -0,0 +1,106 @@ +description: Multiplies a scalar times a Tensor or IndexedSlices object. + +
+ + +
+ +# tf.math.scalar_mul + + + + + + + + + +Multiplies a scalar times a `Tensor` or `IndexedSlices` object. + + + + + + + + + +Intended for use in gradient code which might deal with `IndexedSlices` +objects, which are easy to multiply by a scalar but more expensive to +multiply with arbitrary tensors. + + + + + + + + + + + + + + + + +
+`scalar` + +A 0-D scalar `Tensor`. Must have known shape. +
+`x` + +A `Tensor` or `IndexedSlices` to be scaled. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+`scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`. +
+ + + + + + + + + + + + +
+`ValueError` + +if scalar is not a 0-D `scalar`. +
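+
+A minimal added sketch:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, 2.0],
+                 [3.0, 4.0]])
+tf.math.scalar_mul(0.5, x)  # [[0.5, 1.0], [1.5, 2.0]]
+```
+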
+ diff --git a/site/en/api_docs/python/tf/math/segment_max.md b/site/en/api_docs/python/tf/math/segment_max.md new file mode 100644 index 00000000000..93d47776512 --- /dev/null +++ b/site/en/api_docs/python/tf/math/segment_max.md @@ -0,0 +1,110 @@ +description: Computes the maximum along segments of a tensor. + +
+ + +
+ +# tf.math.segment_max + + + + + + + + + +Computes the maximum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \max_j(data_j)\\) where `max` is over `j` such +that `segment_ids[j] == i`. + +If the max is empty for a given segment ID `i`, `output[i] = 0`. + +
+ +
+ +#### For example: + + + +``` +c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) +tf.segment_max(c, tf.constant([0, 0, 1])) +# ==> [[4, 3, 3, 4], +# [5, 6, 7, 8]] +``` + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor whose size is equal to the size of `data`'s +first dimension. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/math/segment_mean.md b/site/en/api_docs/python/tf/math/segment_mean.md new file mode 100644 index 00000000000..7226096c32e --- /dev/null +++ b/site/en/api_docs/python/tf/math/segment_mean.md @@ -0,0 +1,111 @@ +description: Computes the mean along segments of a tensor. + +
+ + +
+ +# tf.math.segment_mean + + + + + + + + + +Computes the mean along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \frac{\sum_j data_j}{N}\\) where `mean` is +over `j` such that `segment_ids[j] == i` and `N` is the total number of +values summed. + +If the mean is empty for a given segment ID `i`, `output[i] = 0`. + +
+ +
+ +#### For example: + + + +``` +c = tf.constant([[1.0,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) +tf.segment_mean(c, tf.constant([0, 0, 1])) +# ==> [[2.5, 2.5, 2.5, 2.5], +# [5, 6, 7, 8]] +``` + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor whose size is equal to the size of `data`'s +first dimension. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/math/segment_min.md b/site/en/api_docs/python/tf/math/segment_min.md new file mode 100644 index 00000000000..10138066daa --- /dev/null +++ b/site/en/api_docs/python/tf/math/segment_min.md @@ -0,0 +1,110 @@ +description: Computes the minimum along segments of a tensor. + +
+ + +
+ +# tf.math.segment_min + + + + + + + + + +Computes the minimum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \min_j(data_j)\\) where `min` is over `j` such +that `segment_ids[j] == i`. + +If the min is empty for a given segment ID `i`, `output[i] = 0`. + +
+ +
+ +#### For example: + + + +``` +c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) +tf.segment_min(c, tf.constant([0, 0, 1])) +# ==> [[1, 2, 2, 1], +# [5, 6, 7, 8]] +``` + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor whose size is equal to the size of `data`'s +first dimension. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/math/segment_prod.md b/site/en/api_docs/python/tf/math/segment_prod.md new file mode 100644 index 00000000000..d96a6bdbfdf --- /dev/null +++ b/site/en/api_docs/python/tf/math/segment_prod.md @@ -0,0 +1,110 @@ +description: Computes the product along segments of a tensor. + +
+ + +
+ +# tf.math.segment_prod + + + + + + + + + +Computes the product along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \prod_j data_j\\) where the product is over `j` such +that `segment_ids[j] == i`. + +If the product is empty for a given segment ID `i`, `output[i] = 1`. + +
+ +
+
+#### For example:
+
+```
+c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
+tf.math.segment_prod(c, tf.constant([0, 0, 1]))
+# ==> [[4, 6, 6, 4],
+#      [5, 6, 7, 8]]
+```
+
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor whose size is equal to the size of `data`'s +first dimension. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/math/segment_sum.md b/site/en/api_docs/python/tf/math/segment_sum.md new file mode 100644 index 00000000000..94aa5972c63 --- /dev/null +++ b/site/en/api_docs/python/tf/math/segment_sum.md @@ -0,0 +1,110 @@ +description: Computes the sum along segments of a tensor. + +
+ + +
+ +# tf.math.segment_sum + + + + + + + + + +Computes the sum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \sum_j data_j\\) where sum is over `j` such +that `segment_ids[j] == i`. + +If the sum is empty for a given segment ID `i`, `output[i] = 0`. + +
+ +
+
+#### For example:
+
+```
+c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
+tf.math.segment_sum(c, tf.constant([0, 0, 1]))
+# ==> [[5, 5, 5, 5],
+#      [5, 6, 7, 8]]
+```
+
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor whose size is equal to the size of `data`'s +first dimension. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/math/sigmoid.md b/site/en/api_docs/python/tf/math/sigmoid.md new file mode 100644 index 00000000000..52e53e8e6f1 --- /dev/null +++ b/site/en/api_docs/python/tf/math/sigmoid.md @@ -0,0 +1,132 @@ +description: Computes sigmoid of x element-wise. + +
+ + +
+
+# tf.math.sigmoid
+
+
+
+Computes sigmoid of `x` element-wise.
+
+
+
+Formula for calculating sigmoid(x): `y = 1 / (1 + exp(-x))`.
+
+For x \in (-inf, inf) => sigmoid(x) \in (0, 1).
+
+#### Example Usage:
+
+If a positive number is large, then its sigmoid will approach 1, since the
+formula reduces to `y = large_num / (1 + large_num)`:
+
+```
+>>> x = tf.constant([0.0, 1.0, 50.0, 100.0])
+>>> tf.math.sigmoid(x)
+
+```
+
+If a negative number is large in magnitude, its sigmoid will approach 0, since
+the formula reduces to `y = 1 / (1 + large_num)`:
+
+```
+>>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])
+>>> tf.math.sigmoid(x)
+
+```
+
+`x` + +A Tensor with type `float16`, `float32`, `float64`, `complex64`, or +`complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A Tensor with the same type as `x`. +
+ + + +#### Usage Example: + + + +``` +>>> x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32) +>>> tf.sigmoid(x) + +``` + + + +#### Scipy Compatibility +Equivalent to scipy.special.expit + diff --git a/site/en/api_docs/python/tf/math/sign.md b/site/en/api_docs/python/tf/math/sign.md new file mode 100644 index 00000000000..377cc763969 --- /dev/null +++ b/site/en/api_docs/python/tf/math/sign.md @@ -0,0 +1,104 @@ +description: Returns an element-wise indication of the sign of a number. + +
+ + +
+ +# tf.math.sign + + + + + + + + + +Returns an element-wise indication of the sign of a number. + + + + + + + + + +y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0. + +For complex numbers, y = sign(x) = x / |x| if x != 0, otherwise y = 0. + +#### Example usage: + + + +``` +>>> tf.math.sign([0., 2., -3.]) + +``` + + + + + + + + + + + + + +
+`x` + +A Tensor. Must be one of the following types: bfloat16, half, float32, +float64, int32, int64, complex64, complex128. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`.
+
+If `x` is a `SparseTensor`, returns
+`SparseTensor(x.indices, tf.math.sign(x.values, ...), x.dense_shape)`.
+
+ diff --git a/site/en/api_docs/python/tf/math/sin.md b/site/en/api_docs/python/tf/math/sin.md new file mode 100644 index 00000000000..04a22e5244c --- /dev/null +++ b/site/en/api_docs/python/tf/math/sin.md @@ -0,0 +1,88 @@ +description: Computes sine of x element-wise. + +
+ + +
+ +# tf.math.sin + + + + + + + + + +Computes sine of x element-wise. + + + + + + + + + + Given an input tensor, this function computes sine of every + element in the tensor. Input range is `(-inf, inf)` and + output range is `[-1,1]`. + + ```python + x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10, float("inf")]) + tf.math.sin(x) ==> [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/sinh.md b/site/en/api_docs/python/tf/math/sinh.md new file mode 100644 index 00000000000..5a5b2b3b0ec --- /dev/null +++ b/site/en/api_docs/python/tf/math/sinh.md @@ -0,0 +1,88 @@ +description: Computes hyperbolic sine of x element-wise. + +
+ + +
+ +# tf.math.sinh + + + + + + + + + +Computes hyperbolic sine of x element-wise. + + + + + + + + + + Given an input tensor, this function computes hyperbolic sine of every + element in the tensor. Input range is `[-inf,inf]` and output range + is `[-inf,inf]`. + + ```python + x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")]) + tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/sobol_sample.md b/site/en/api_docs/python/tf/math/sobol_sample.md new file mode 100644 index 00000000000..d76d098fb47 --- /dev/null +++ b/site/en/api_docs/python/tf/math/sobol_sample.md @@ -0,0 +1,108 @@ +description: Generates points from the Sobol sequence. + +
+ + +
+ +# tf.math.sobol_sample + + + + + + + + + +Generates points from the Sobol sequence. + + + + + + + + + +Creates a Sobol sequence with `num_results` samples. Each sample has dimension +`dim`. Skips the first `skip` samples. + + + + + + + + + + + + + + + + + + + + + + +
+`dim` + +Positive scalar `Tensor` representing each sample's dimension. +
+`num_results` + +Positive scalar `Tensor` of dtype int32. The number of Sobol +points to return in the output. +
+`skip` + +(Optional) Positive scalar `Tensor` of dtype int32. The number of +initial points of the Sobol sequence to skip. Default value is 0. +
+`dtype` + +(Optional) The `tf.Dtype` of the sample. One of: tf.float32 or +tf.float64. Defaults to tf.float32. +
+`name` + +(Optional) Python `str` name prefixed to ops created by this function. +
+ + + + + + + + + + + +
+`Tensor` of samples from Sobol sequence with `shape` [num_results, dim]. +
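+
+#### Usage Example:
+
+A minimal sketch of drawing a few quasi-random points; the exact values depend
+on the Sobol construction, but every coordinate lies in the unit interval:
+
+```python
+import tensorflow as tf
+
+# 5 points from the 3-dimensional Sobol sequence, skipping no initial points.
+samples = tf.math.sobol_sample(dim=3, num_results=5, dtype=tf.float64)
+print(samples.shape)  # (5, 3)
+```
+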
+ diff --git a/site/en/api_docs/python/tf/math/softplus.md b/site/en/api_docs/python/tf/math/softplus.md new file mode 100644 index 00000000000..fde2c243071 --- /dev/null +++ b/site/en/api_docs/python/tf/math/softplus.md @@ -0,0 +1,80 @@ +description: Computes softplus: log(exp(features) + 1). + +
+ + +
+ +# tf.math.softplus + + + + + + + + + +Computes softplus: `log(exp(features) + 1)`. + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
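+
+#### Usage Example:
+
+A minimal sketch of the expected behaviour (output values in the comments are
+approximate):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 1.0, 10.0])
+tf.math.softplus(x)
+# ==> approximately [0.3132617, 0.6931472, 1.3132617, 10.000045]
+# softplus behaves like a smooth ReLU: close to 0 for large negative inputs
+# and close to x for large positive inputs.
+```
+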
+ diff --git a/site/en/api_docs/python/tf/math/special.md b/site/en/api_docs/python/tf/math/special.md new file mode 100644 index 00000000000..031261f5df2 --- /dev/null +++ b/site/en/api_docs/python/tf/math/special.md @@ -0,0 +1,33 @@ +description: Public API for tf.math.special namespace. + +
+ + +
+ +# Module: tf.math.special + + + + + + + + + +Public API for tf.math.special namespace. + + + +## Functions + +[`dawsn(...)`](../../tf/math/special/dawsn.md): Computes Dawson's integral of `x` element-wise. + +[`expint(...)`](../../tf/math/special/expint.md): Computes the Exponential integral of `x` element-wise. + +[`fresnel_cos(...)`](../../tf/math/special/fresnel_cos.md): Computes Fresnel's cosine integral of `x` element-wise. + +[`fresnel_sin(...)`](../../tf/math/special/fresnel_sin.md): Computes Fresnel's sine integral of `x` element-wise. + +[`spence(...)`](../../tf/math/special/spence.md): Computes Spence's integral of `x` element-wise. + diff --git a/site/en/api_docs/python/tf/math/special/dawsn.md b/site/en/api_docs/python/tf/math/special/dawsn.md new file mode 100644 index 00000000000..4749a2ce6a3 --- /dev/null +++ b/site/en/api_docs/python/tf/math/special/dawsn.md @@ -0,0 +1,97 @@ +description: Computes Dawson's integral of x element-wise. + +
+ + +
+ +# tf.math.special.dawsn + + + + + + + + + +Computes Dawson's integral of `x` element-wise. + + + + + + + + + +Dawson's integral is defined as `exp(-x**2)` times the integral of +`exp(t**2)` from `0` to `x`, with the domain of definition all real numbers. + +Dawson's function is odd. +>>> tf.math.special.dawsn([-1., -0.5, 0.5, 1.]).numpy() +array([-0.5380795, -0.4244364, 0.4244364, 0.5380795], dtype=float32) + +This implementation is based off of the Cephes math library. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor`. Must be one of the following types: +`float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. +
+ + + + +#### Scipy Compatibility +Equivalent to scipy.special.dawsn + diff --git a/site/en/api_docs/python/tf/math/special/expint.md b/site/en/api_docs/python/tf/math/special/expint.md new file mode 100644 index 00000000000..c7c53dcb7e6 --- /dev/null +++ b/site/en/api_docs/python/tf/math/special/expint.md @@ -0,0 +1,98 @@ +description: Computes the Exponential integral of x element-wise. + +
+ + +
+ +# tf.math.special.expint + + + + + + + + + +Computes the Exponential integral of `x` element-wise. + + + + + + + + + +The Exponential integral is defined as the integral of `exp(t) / t` from +`-inf` to `x`, with the domain of definition all positive real numbers. + +``` +>>> tf.math.special.expint([1., 1.1, 2.1, 4.1]).numpy() +array([ 1.8951179, 2.1673784, 5.3332353, 21.048464], dtype=float32) +``` + +This implementation is based off of the Cephes math library. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor`. Must be one of the following types: +`float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. +
+ + + + +#### Scipy Compatibility +Equivalent to scipy.special.expi + diff --git a/site/en/api_docs/python/tf/math/special/fresnel_cos.md b/site/en/api_docs/python/tf/math/special/fresnel_cos.md new file mode 100644 index 00000000000..6dc3536e48e --- /dev/null +++ b/site/en/api_docs/python/tf/math/special/fresnel_cos.md @@ -0,0 +1,97 @@ +description: Computes Fresnel's cosine integral of x element-wise. + +
+ + +
+ +# tf.math.special.fresnel_cos + + + + + + + + + +Computes Fresnel's cosine integral of `x` element-wise. + + + + + + + + + +The Fresnel cosine integral is defined as the integral of `cos(t^2)` from +`0` to `x`, with the domain of definition all real numbers. + +The Fresnel cosine integral is odd. +>>> tf.math.special.fresnel_cos([-1., -0.1, 0.1, 1.]).numpy() +array([-0.7798934 , -0.09999753, 0.09999753, 0.7798934 ], dtype=float32) + +This implementation is based off of the Cephes math library. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor`. Must be one of the following types: +`float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. +
+ + + + +#### Scipy Compatibility +Equivalent to scipy.special.fresnel second output. + diff --git a/site/en/api_docs/python/tf/math/special/fresnel_sin.md b/site/en/api_docs/python/tf/math/special/fresnel_sin.md new file mode 100644 index 00000000000..824af110335 --- /dev/null +++ b/site/en/api_docs/python/tf/math/special/fresnel_sin.md @@ -0,0 +1,98 @@ +description: Computes Fresnel's sine integral of x element-wise. + +
+ + +
+ +# tf.math.special.fresnel_sin + + + + + + + + + +Computes Fresnel's sine integral of `x` element-wise. + + + + + + + + + +The Fresnel sine integral is defined as the integral of `sin(t^2)` from +`0` to `x`, with the domain of definition all real numbers. + +``` +>>> tf.math.special.fresnel_sin([-1., -0.1, 0.1, 1.]).numpy() +array([-0.43825912, -0.00052359, 0.00052359, 0.43825912], dtype=float32) +``` + +This implementation is based off of the Cephes math library. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor`. Must be one of the following types: +`float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. +
+ + + + +#### Scipy Compatibility +Equivalent to scipy.special.fresnel first output. + diff --git a/site/en/api_docs/python/tf/math/special/spence.md b/site/en/api_docs/python/tf/math/special/spence.md new file mode 100644 index 00000000000..d83aad4558a --- /dev/null +++ b/site/en/api_docs/python/tf/math/special/spence.md @@ -0,0 +1,98 @@ +description: Computes Spence's integral of x element-wise. + +
+ + +
+ +# tf.math.special.spence + + + + + + + + + +Computes Spence's integral of `x` element-wise. + + + + + + + + + +Spence's integral is defined as the integral of `log(t) / (1 - t)` from +`1` to `x`, with the domain of definition all non-negative real numbers. + +``` +>>> tf.math.special.spence([0.5, 1., 2., 3.]).numpy() +array([ 0.58224034, 0. , -0.82246685, -1.4367464], dtype=float32) +``` + +This implementation is based off of the Cephes math library. + + + + + + + + + + + + + +
+`x` + +A `Tensor` or `SparseTensor`. Must be one of the following types: +`float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. +
+ + + + +#### Scipy Compatibility +Equivalent to scipy.special.spence + diff --git a/site/en/api_docs/python/tf/math/sqrt.md b/site/en/api_docs/python/tf/math/sqrt.md new file mode 100644 index 00000000000..9638d634984 --- /dev/null +++ b/site/en/api_docs/python/tf/math/sqrt.md @@ -0,0 +1,111 @@ +description: Computes element-wise square root of the input tensor. + +
+ + +
+
+# tf.math.sqrt
+
+
+
+Computes element-wise square root of the input tensor.
+
+
+
+Note: This operation does not support integer types.
+
+```
+>>> x = tf.constant([[4.0], [16.0]])
+>>> tf.sqrt(x)
+
+>>> y = tf.constant([[-4.0], [16.0]])
+>>> tf.sqrt(y)
+
+>>> z = tf.constant([[-1.0], [16.0]], dtype=tf.complex128)
+>>> tf.sqrt(z)
+
+```
+
+Note: In order to support complex numbers, please provide an input tensor
+of `complex64` or `complex128`.
+
+`x` + +A tf.Tensor of type `bfloat16`, `half`, `float32`, `float64`, +`complex64`, `complex128` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.Tensor of same size, type and sparsity as `x`. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.sqrt(x.values, ...), x.dense_shape)` +
+ diff --git a/site/en/api_docs/python/tf/math/square.md b/site/en/api_docs/python/tf/math/square.md new file mode 100644 index 00000000000..6330d4f7c8a --- /dev/null +++ b/site/en/api_docs/python/tf/math/square.md @@ -0,0 +1,89 @@ +description: Computes square of x element-wise. + +
+ + +
+ +# tf.math.square + + + + + + + + + +Computes square of x element-wise. + + + + + + + + + +I.e., \\(y = x * x = x^2\\). + +``` +>>> tf.math.square([-2., 0., 3.]) + +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.square(x.values, ...), x.dense_shape)` +
+ diff --git a/site/en/api_docs/python/tf/math/squared_difference.md b/site/en/api_docs/python/tf/math/squared_difference.md new file mode 100644 index 00000000000..b94c5955326 --- /dev/null +++ b/site/en/api_docs/python/tf/math/squared_difference.md @@ -0,0 +1,86 @@ +description: Returns (x - y)(x - y) element-wise. + +
+ + +
+ +# tf.math.squared_difference + + + + + + + + + +Returns (x - y)(x - y) element-wise. + + + + + + + + + +*NOTE*: math.squared_difference supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
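+
+#### Usage Example:
+
+A short sketch; the op also broadcasts like the other binary math ops:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0, 3.0])
+y = tf.constant([3.0, 1.0, 3.0])
+tf.math.squared_difference(x, y)
+# ==> [4.0, 1.0, 0.0], i.e. (x - y) * (x - y) element-wise
+```
+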
+ diff --git a/site/en/api_docs/python/tf/math/subtract.md b/site/en/api_docs/python/tf/math/subtract.md new file mode 100644 index 00000000000..03398dd2d3d --- /dev/null +++ b/site/en/api_docs/python/tf/math/subtract.md @@ -0,0 +1,94 @@ +description: Returns x - y element-wise. + +
+ + +
+ +# tf.math.subtract + + + + + + + + + +Returns x - y element-wise. + + + + + + + + + +*NOTE*: `Subtract` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
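+
+#### Usage Example:
+
+A short sketch showing element-wise subtraction with broadcasting:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2], [3, 4]])
+y = tf.constant([1, 1])
+tf.math.subtract(x, y)  # equivalent to x - y
+# ==> [[0, 1],
+#      [2, 3]]
+```
+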
+ diff --git a/site/en/api_docs/python/tf/math/tan.md b/site/en/api_docs/python/tf/math/tan.md new file mode 100644 index 00000000000..a7498aee5c4 --- /dev/null +++ b/site/en/api_docs/python/tf/math/tan.md @@ -0,0 +1,89 @@ +description: Computes tan of x element-wise. + +
+ + +
+ +# tf.math.tan + + + + + + + + + +Computes tan of x element-wise. + + + + + + + + + + Given an input tensor, this function computes tangent of every + element in the tensor. Input range is `(-inf, inf)` and + output range is `(-inf, inf)`. If input lies outside the boundary, `nan` + is returned. + + ```python + x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")]) + tf.math.tan(x) ==> [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/math/tanh.md b/site/en/api_docs/python/tf/math/tanh.md new file mode 100644 index 00000000000..458052913ff --- /dev/null +++ b/site/en/api_docs/python/tf/math/tanh.md @@ -0,0 +1,91 @@ +description: Computes hyperbolic tangent of x element-wise. + +
+ + +
+ +# tf.math.tanh + + + + + + + + + +Computes hyperbolic tangent of `x` element-wise. + + + + + + + + + + Given an input tensor, this function computes hyperbolic tangent of every + element in the tensor. Input range is `[-inf, inf]` and + output range is `[-1,1]`. + + ```python + x = tf.constant([-float("inf"), -5, -0.5, 1, 1.2, 2, 3, float("inf")]) + tf.math.tanh(x) ==> [-1. -0.99990916 -0.46211717 0.7615942 0.8336547 0.9640276 0.9950547 1.] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. + +If `x` is a `SparseTensor`, returns +`SparseTensor(x.indices, tf.math.tanh(x.values, ...), x.dense_shape)` +
+ diff --git a/site/en/api_docs/python/tf/math/top_k.md b/site/en/api_docs/python/tf/math/top_k.md new file mode 100644 index 00000000000..9993af44da3 --- /dev/null +++ b/site/en/api_docs/python/tf/math/top_k.md @@ -0,0 +1,121 @@ +description: Finds values and indices of the k largest entries for the last dimension. + +
+ + +
+ +# tf.math.top_k + + + + + + + + + +Finds values and indices of the `k` largest entries for the last dimension. + + + + + + + + + +If the input is a vector (rank=1), finds the `k` largest entries in the vector +and outputs their values and indices as vectors. Thus `values[j]` is the +`j`-th largest entry in `input`, and its index is `indices[j]`. + +For matrices (resp. higher rank input), computes the top `k` entries in each +row (resp. vector along the last dimension). Thus, + + values.shape = indices.shape = input.shape[:-1] + [k] + +If two elements are equal, the lower-index element appears first. + + + + + + + + + + + + + + + + + + + +
+`input` + +1-D or higher `Tensor` with last dimension at least `k`. +
+`k` + +0-D `int32` `Tensor`. Number of top elements to look for along the last +dimension (along each row for matrices). +
+`sorted` + +If true the resulting `k` elements will be sorted by the values in +descending order. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + + + + + +
+`values` + +The `k` largest elements along each last dimensional slice. +
+`indices` + +The indices of `values` within the last dimension of `input`. +
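+
+#### Usage Example:
+
+A minimal sketch on a rank-1 input:
+
+```python
+import tensorflow as tf
+
+values, indices = tf.math.top_k([1.0, 5.0, 3.0, 4.0], k=2)
+# values  ==> [5.0, 4.0]   (sorted in descending order by default)
+# indices ==> [1, 3]
+```
+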
+ diff --git a/site/en/api_docs/python/tf/math/truediv.md b/site/en/api_docs/python/tf/math/truediv.md new file mode 100644 index 00000000000..55d74123c3e --- /dev/null +++ b/site/en/api_docs/python/tf/math/truediv.md @@ -0,0 +1,122 @@ +description: Divides x / y elementwise (using Python 3 division operator semantics). + +
+ + +
+ +# tf.math.truediv + + + + + + + + + +Divides x / y elementwise (using Python 3 division operator semantics). + + + + + + + + + +NOTE: Prefer using the Tensor operator or tf.divide which obey Python +division operator semantics. + +This function forces Python 3 division operator semantics where all integer +arguments are cast to floating types first. This op is generated by normal +`x / y` division in Python 3 and in Python 2.7 with +`from __future__ import division`. If you want integer division that rounds +down, use `x // y` or `tf.math.floordiv`. + +`x` and `y` must have the same numeric type. If the inputs are floating +point, the output will have the same type. If the inputs are integral, the +inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32` +and `int64` (matching the behavior of Numpy). + + + + + + + + + + + + + + + + +
+`x` + +`Tensor` numerator of numeric type. +
+`y` + +`Tensor` denominator of numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+`x / y` evaluated in floating point. +
+ + + + + + + + + + + + +
+`TypeError` + +If `x` and `y` have different dtypes. +
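+
+#### Usage Example:
+
+A short sketch of the integer-casting behaviour described above:
+
+```python
+import tensorflow as tf
+
+a = tf.constant([7, 8, 9], dtype=tf.int32)
+b = tf.constant(2, dtype=tf.int32)
+tf.math.truediv(a, b)
+# ==> [3.5, 4.0, 4.5]  (int32 inputs are cast to float64 first)
+```
+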
+ diff --git a/site/en/api_docs/python/tf/math/unsorted_segment_max.md b/site/en/api_docs/python/tf/math/unsorted_segment_max.md new file mode 100644 index 00000000000..d4225c532bf --- /dev/null +++ b/site/en/api_docs/python/tf/math/unsorted_segment_max.md @@ -0,0 +1,124 @@ +description: Computes the maximum along segments of a tensor. + +
+ + +
+ +# tf.math.unsorted_segment_max + + + + + + + + + +Computes the maximum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +This operator is similar to the unsorted segment sum operator found +[(here)](../../../api_docs/python/math_ops.md#UnsortedSegmentSum). +Instead of computing the sum over segments, it computes the maximum such that: + +\\(output_i = \max_{j...} data[j...]\\) where max is over tuples `j...` such +that `segment_ids[j...] == i`. + +If the maximum is empty for a given segment ID `i`, it outputs the smallest +possible value for the specific numeric type, +`output[i] = numeric_limits::lowest()`. + +If the given segment ID `i` is negative, then the corresponding value is +dropped, and will not be included in the result. + +
+ +
+
+#### For example:
+
+``` python
+c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
+tf.math.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2)
+# ==> [[ 4, 3, 3, 4],
+#      [5, 6, 7, 8]]
+```
+
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor whose shape is a prefix of `data.shape`. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/math/unsorted_segment_mean.md b/site/en/api_docs/python/tf/math/unsorted_segment_mean.md new file mode 100644 index 00000000000..f58e8bc05b5 --- /dev/null +++ b/site/en/api_docs/python/tf/math/unsorted_segment_mean.md @@ -0,0 +1,116 @@ +description: Computes the mean along segments of a tensor. + +
+ + +
+ +# tf.math.unsorted_segment_mean + + + + + + + + + +Computes the mean along segments of a tensor. + + + + + + + + + +Read [the section on +segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) +for an explanation of segments. + +This operator is similar to the unsorted segment sum operator found +[here](../../../api_docs/python/math_ops.md#UnsortedSegmentSum). +Instead of computing the sum over segments, it computes the mean of all +entries belonging to a segment such that: + +\\(output_i = 1/N_i \sum_{j...} data[j...]\\) where the sum is over tuples +`j...` such that `segment_ids[j...] == i` with \\N_i\\ being the number of +occurrences of id \\i\\. + +If there is no entry for a given segment ID `i`, it outputs 0. + +If the given segment ID `i` is negative, the value is dropped and will not +be added to the sum of the segment. + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor` with floating point or complex dtype. +
+`segment_ids` + +An integer tensor whose shape is a prefix of `data.shape`. +
+`num_segments` + +An integer scalar `Tensor`. The number of distinct segment +IDs. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has same shape as data, except for the first `segment_ids.rank` +dimensions, which are replaced with a single dimension which has size +`num_segments`. +
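+
+#### For example:
+
+A minimal sketch mirroring the examples on the other segment ops; segment 0
+averages rows 0 and 2:
+
+``` python
+import tensorflow as tf
+
+c = tf.constant([[1.0, 2, 3, 4], [5, 6, 7, 8], [4, 3, 2, 1]])
+tf.math.unsorted_segment_mean(c, tf.constant([0, 1, 0]), num_segments=2)
+# ==> [[2.5, 2.5, 2.5, 2.5],
+#      [5.0, 6.0, 7.0, 8.0]]
+```
+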
+ diff --git a/site/en/api_docs/python/tf/math/unsorted_segment_min.md b/site/en/api_docs/python/tf/math/unsorted_segment_min.md new file mode 100644 index 00000000000..06c887f385c --- /dev/null +++ b/site/en/api_docs/python/tf/math/unsorted_segment_min.md @@ -0,0 +1,120 @@ +description: Computes the minimum along segments of a tensor. + +
+ + +
+ +# tf.math.unsorted_segment_min + + + + + + + + + +Computes the minimum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +This operator is similar to the unsorted segment sum operator found +[(here)](../../../api_docs/python/math_ops.md#UnsortedSegmentSum). +Instead of computing the sum over segments, it computes the minimum such that: + +\\(output_i = \min_{j...} data_[j...]\\) where min is over tuples `j...` such +that `segment_ids[j...] == i`. + +If the minimum is empty for a given segment ID `i`, it outputs the largest +possible value for the specific numeric type, +`output[i] = numeric_limits::max()`. + +#### For example: + + + +``` python +c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]]) +tf.unsorted_segment_min(c, tf.constant([0, 1, 0]), num_segments=2) +# ==> [[ 1, 2, 2, 1], +# [5, 6, 7, 8]] +``` + +If the given segment ID `i` is negative, then the corresponding value is +dropped, and will not be included in the result. + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor whose shape is a prefix of `data.shape`. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/math/unsorted_segment_prod.md b/site/en/api_docs/python/tf/math/unsorted_segment_prod.md new file mode 100644 index 00000000000..6f805d54656 --- /dev/null +++ b/site/en/api_docs/python/tf/math/unsorted_segment_prod.md @@ -0,0 +1,119 @@ +description: Computes the product along segments of a tensor. + +
+ + +
+ +# tf.math.unsorted_segment_prod + + + + + + + + + +Computes the product along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +This operator is similar to the unsorted segment sum operator found +[(here)](../../../api_docs/python/math_ops.md#UnsortedSegmentSum). +Instead of computing the sum over segments, it computes the product of all +entries belonging to a segment such that: + +\\(output_i = \prod_{j...} data[j...]\\) where the product is over tuples +`j...` such that `segment_ids[j...] == i`. + +#### For example: + + + +``` python +c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]]) +tf.unsorted_segment_prod(c, tf.constant([0, 1, 0]), num_segments=2) +# ==> [[ 4, 6, 6, 4], +# [5, 6, 7, 8]] +``` + +If there is no entry for a given segment ID `i`, it outputs 1. + +If the given segment ID `i` is negative, then the corresponding value is +dropped, and will not be included in the result. + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor whose shape is a prefix of `data.shape`. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/math/unsorted_segment_sqrt_n.md b/site/en/api_docs/python/tf/math/unsorted_segment_sqrt_n.md new file mode 100644 index 00000000000..fb45808c626 --- /dev/null +++ b/site/en/api_docs/python/tf/math/unsorted_segment_sqrt_n.md @@ -0,0 +1,119 @@ +description: Computes the sum along segments of a tensor divided by the sqrt(N). + +
+ + +
+ +# tf.math.unsorted_segment_sqrt_n + + + + + + + + + +Computes the sum along segments of a tensor divided by the sqrt(N). + + + + + + + + + +Read [the section on +segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) +for an explanation of segments. + +This operator is similar to the unsorted segment sum operator found +[here](../../../api_docs/python/math_ops.md#UnsortedSegmentSum). +Additionally to computing the sum over segments, it divides the results by +sqrt(N). + +\\(output_i = 1/sqrt(N_i) \sum_{j...} data[j...]\\) where the sum is over +tuples `j...` such that `segment_ids[j...] == i` with \\N_i\\ being the +number of occurrences of id \\i\\. + +If there is no entry for a given segment ID `i`, it outputs 0. + +Note that this op only supports floating point and complex dtypes, +due to tf.sqrt only supporting these types. + +If the given segment ID `i` is negative, the value is dropped and will not +be added to the sum of the segment. + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor` with floating point or complex dtype. +
+`segment_ids` + +An integer tensor whose shape is a prefix of `data.shape`. +
+`num_segments` + +An integer scalar `Tensor`. The number of distinct segment +IDs. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has same shape as data, except for the first `segment_ids.rank` +dimensions, which are replaced with a single dimension which has size +`num_segments`. +
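+
+#### For example:
+
+A minimal sketch; segment 0 contains two rows, so its column sums are divided
+by sqrt(2) (values approximate):
+
+``` python
+import tensorflow as tf
+
+c = tf.constant([[1.0, 2, 3, 4], [5, 6, 7, 8], [4, 3, 2, 1]])
+tf.math.unsorted_segment_sqrt_n(c, tf.constant([0, 1, 0]), num_segments=2)
+# ==> [[3.5355, 3.5355, 3.5355, 3.5355],
+#      [5.0, 6.0, 7.0, 8.0]]
+```
+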
+ diff --git a/site/en/api_docs/python/tf/math/unsorted_segment_sum.md b/site/en/api_docs/python/tf/math/unsorted_segment_sum.md new file mode 100644 index 00000000000..4942c73900d --- /dev/null +++ b/site/en/api_docs/python/tf/math/unsorted_segment_sum.md @@ -0,0 +1,118 @@ +description: Computes the sum along segments of a tensor. + +
+ + +
+ +# tf.math.unsorted_segment_sum + + + + + + + + + +Computes the sum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output[i] = \sum_{j...} data[j...]\\) where the sum is over tuples `j...` such +that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids` +need not be sorted and need not cover all values in the full +range of valid values. + +If the sum is empty for a given segment ID `i`, `output[i] = 0`. +If the given segment ID `i` is negative, the value is dropped and will not be +added to the sum of the segment. + +`num_segments` should equal the number of distinct segment IDs. + +
+ +
+
+#### For example:
+
+``` python
+c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
+tf.math.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2)
+# ==> [[ 5, 5, 5, 5],
+#      [5, 6, 7, 8]]
+```
+
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor whose shape is a prefix of `data.shape`. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/math/xdivy.md b/site/en/api_docs/python/tf/math/xdivy.md new file mode 100644 index 00000000000..86c0c3c5188 --- /dev/null +++ b/site/en/api_docs/python/tf/math/xdivy.md @@ -0,0 +1,84 @@ +description: Returns 0 if x == 0, and x / y otherwise, elementwise. + +
+ + +
+ +# tf.math.xdivy + + + + + + + + + +Returns 0 if x == 0, and x / y otherwise, elementwise. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
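+
+#### Usage Example:
+
+A short sketch highlighting the safe handling of `x == 0`:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.0, 4.0])
+y = tf.constant([0.0, 2.0])
+tf.math.xdivy(x, y)
+# ==> [0.0, 2.0]  (0 / 0 yields 0 instead of NaN because x == 0)
+```
+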
+ diff --git a/site/en/api_docs/python/tf/math/xlog1py.md b/site/en/api_docs/python/tf/math/xlog1py.md new file mode 100644 index 00000000000..f04780f84ad --- /dev/null +++ b/site/en/api_docs/python/tf/math/xlog1py.md @@ -0,0 +1,114 @@ +description: Compute x * log1p(y). + +
+ + +
+ +# tf.math.xlog1py + + + + + + + + + +Compute x * log1p(y). + + + + + + + + + +Given `x` and `y`, compute `x * log1p(y)`. This function safely returns +zero when `x = 0`, no matter what the value of `y` is. + +#### Example: + + + +``` +>>> tf.math.xlog1py(0., 1.) + +>>> tf.math.xlog1py(1., 1.) + +>>> tf.math.xlog1py(2., 2.) + +>>> tf.math.xlog1py(0., -1.) + +``` + + + + + + + + + + + + + + + + +
+`x` + +A tf.Tensor of type `bfloat16`, `half`, `float32`, `float64`, +`complex64`, `complex128` +
+`y` + +A tf.Tensor of type `bfloat16`, `half`, `float32`, `float64`, +`complex64`, `complex128` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+`x * log1p(y)`. +
+ + + + +#### Scipy Compatibility +Equivalent to scipy.special.xlog1py + diff --git a/site/en/api_docs/python/tf/math/xlogy.md b/site/en/api_docs/python/tf/math/xlogy.md new file mode 100644 index 00000000000..72d4592c06e --- /dev/null +++ b/site/en/api_docs/python/tf/math/xlogy.md @@ -0,0 +1,84 @@ +description: Returns 0 if x == 0, and x * log(y) otherwise, elementwise. + +
+ + +
+ +# tf.math.xlogy + + + + + + + + + +Returns 0 if x == 0, and x * log(y) otherwise, elementwise. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
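+
+#### Usage Example:
+
+A short sketch highlighting the safe handling of `x == 0` (values approximate):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.0, 2.0])
+y = tf.constant([0.0, 3.0])
+tf.math.xlogy(x, y)
+# ==> [0.0, 2.1972246]  (0 * log(0) is defined to be 0 here)
+```
+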
+ diff --git a/site/en/api_docs/python/tf/math/zero_fraction.md b/site/en/api_docs/python/tf/math/zero_fraction.md new file mode 100644 index 00000000000..5c501d5d384 --- /dev/null +++ b/site/en/api_docs/python/tf/math/zero_fraction.md @@ -0,0 +1,93 @@ +description: Returns the fraction of zeros in value. + +
+ + +
+ +# tf.math.zero_fraction + + + + + + + + + +Returns the fraction of zeros in `value`. + + + + + + + + + +If `value` is empty, the result is `nan`. + +This is useful in summaries to measure and report sparsity. For example, + +```python + z = tf.nn.relu(...) + summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z)) +``` + + + + + + + + + + + + + +
+`value` + +A tensor of numeric type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The fraction of zeros in `value`, with type `float32`. +
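+
+#### Usage Example:
+
+A minimal numeric sketch:
+
+```python
+import tensorflow as tf
+
+z = tf.constant([[0, 1], [2, 0]])
+tf.math.zero_fraction(z)
+# ==> 0.5  (two of the four entries are zero)
+```
+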
+ diff --git a/site/en/api_docs/python/tf/math/zeta.md b/site/en/api_docs/python/tf/math/zeta.md new file mode 100644 index 00000000000..8dfd7f99299 --- /dev/null +++ b/site/en/api_docs/python/tf/math/zeta.md @@ -0,0 +1,88 @@ +description: Compute the Hurwitz zeta function \\(\zeta(x, q)\\). + +
+ + +
+ +# tf.math.zeta + + + + + + + + + +Compute the Hurwitz zeta function \\(\zeta(x, q)\\). + + + + + + + + + +The Hurwitz zeta function is defined as: + + +\\(\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}\\) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`q` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
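+
+#### Usage Example:
+
+A short sketch; with `q = 1` the Hurwitz zeta function reduces to the Riemann
+zeta function (values approximate):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([2.0, 4.0])
+q = tf.constant([1.0, 1.0])
+tf.math.zeta(x, q)
+# ==> [1.6449341, 1.0823232]  (zeta(2) = pi**2 / 6, zeta(4) = pi**4 / 90)
+```
+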
+ diff --git a/site/en/api_docs/python/tf/meshgrid.md b/site/en/api_docs/python/tf/meshgrid.md new file mode 100644 index 00000000000..b60e3cf6932 --- /dev/null +++ b/site/en/api_docs/python/tf/meshgrid.md @@ -0,0 +1,138 @@ +description: Broadcasts parameters for evaluation on an N-D grid. + +
+ + +
+ +# tf.meshgrid + + + + + + + + + +Broadcasts parameters for evaluation on an N-D grid. + + + + + + + + + +Given N one-dimensional coordinate arrays `*args`, returns a list `outputs` +of N-D coordinate arrays for evaluating expressions on an N-D grid. + +#### Notes: + + + +`meshgrid` supports cartesian ('xy') and matrix ('ij') indexing conventions. +When the `indexing` argument is set to 'xy' (the default), the broadcasting +instructions for the first two dimensions are swapped. + +#### Examples: + + + +Calling `X, Y = meshgrid(x, y)` with the tensors + +```python +x = [1, 2, 3] +y = [4, 5, 6] +X, Y = tf.meshgrid(x, y) +# X = [[1, 2, 3], +# [1, 2, 3], +# [1, 2, 3]] +# Y = [[4, 4, 4], +# [5, 5, 5], +# [6, 6, 6]] +``` + + + + + + + + + + + + + +
+`*args` + +`Tensor`s with rank 1. +
+`**kwargs` + +- indexing: Either 'xy' or 'ij' (optional, default: 'xy'). +- name: A name for the operation (optional). +
+ + + + + + + + + + + + +
+`outputs` + +A list of N `Tensor`s with rank N. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +When no keyword arguments (kwargs) are passed. +
+`ValueError` + +When indexing keyword argument is not one of `xy` or `ij`. +
+ diff --git a/site/en/api_docs/python/tf/mixed_precision.md b/site/en/api_docs/python/tf/mixed_precision.md new file mode 100644 index 00000000000..a77ce2880f3 --- /dev/null +++ b/site/en/api_docs/python/tf/mixed_precision.md @@ -0,0 +1,25 @@ +description: Public API for tf.mixed_precision namespace. + +
+ + +
+ +# Module: tf.mixed_precision + + + + + + + + + +Public API for tf.mixed_precision namespace. + + + +## Modules + +[`experimental`](../tf/mixed_precision/experimental.md) module: Public API for tf.mixed_precision.experimental namespace. + diff --git a/site/en/api_docs/python/tf/mixed_precision/experimental.md b/site/en/api_docs/python/tf/mixed_precision/experimental.md new file mode 100644 index 00000000000..5d2d9d686e7 --- /dev/null +++ b/site/en/api_docs/python/tf/mixed_precision/experimental.md @@ -0,0 +1,29 @@ +description: Public API for tf.mixed_precision.experimental namespace. + +
+ + +
+ +# Module: tf.mixed_precision.experimental + + + + + + + + + +Public API for tf.mixed_precision.experimental namespace. + + + +## Classes + +[`class DynamicLossScale`](../../tf/mixed_precision/experimental/DynamicLossScale.md): Loss scale that dynamically adjusts itself. + +[`class FixedLossScale`](../../tf/mixed_precision/experimental/FixedLossScale.md): Loss scale with a fixed value. + +[`class LossScale`](../../tf/mixed_precision/experimental/LossScale.md): Base class for all loss scales. + diff --git a/site/en/api_docs/python/tf/mixed_precision/experimental/DynamicLossScale.md b/site/en/api_docs/python/tf/mixed_precision/experimental/DynamicLossScale.md new file mode 100644 index 00000000000..a56edc5ec4e --- /dev/null +++ b/site/en/api_docs/python/tf/mixed_precision/experimental/DynamicLossScale.md @@ -0,0 +1,191 @@ +description: Loss scale that dynamically adjusts itself. + +
+ + + + + + + +
+ +# tf.mixed_precision.experimental.DynamicLossScale + + + + + + + + + +Loss scale that dynamically adjusts itself. + +Inherits From: [`LossScale`](../../../tf/mixed_precision/experimental/LossScale.md) + + + + + + + + + +Dynamic loss scaling works by adjusting the loss scale as training progresses. +The goal is to keep the loss scale as high as possible without overflowing the +gradients. As long as the gradients do not overflow, raising the loss scale +never hurts. + +The algorithm starts by setting the loss scale to an initial value. Every N +steps that the gradients are finite, the loss scale is increased by some +factor. However, if a NaN or Inf gradient is found, the gradients for that +step are not applied, and the loss scale is decreased by the factor. This +process tends to keep the loss scale as high as possible without gradients +overflowing. + + + + + + + + + + + + + + + + +
+`initial_loss_scale` + +A Python float. The loss scale to use at the +beginning. It's better to start this at a very high number, because a +loss scale that is too high gets lowered far more quickly than a loss +scale that is too low gets raised. The default is 2 ** 15, which is +approximately half the maximum float16 value. +
+`increment_period` + +Increases loss scale every `increment_period` +consecutive steps that finite gradients are encountered. If a nonfinite +gradient is encountered, the count is reset back to zero. +
+`multiplier` + +The multiplier to use when increasing or decreasing the loss +scale. +
+ + + + + + + + + + + + + + + + + + + + +
+`increment_period` + + +
+`initial_loss_scale` + + +
+`multiplier` + + +
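+
+#### Usage Example:
+
+A minimal sketch of constructing a dynamic loss scale with the arguments
+described above; wrapping it in a
+tf.keras.mixed_precision.experimental.LossScaleOptimizer is shown only as one
+common way to consume it:
+
+```python
+import tensorflow as tf
+
+loss_scale = tf.mixed_precision.experimental.DynamicLossScale(
+    initial_loss_scale=2 ** 15, increment_period=2000, multiplier=2.0)
+print(float(loss_scale()))  # 32768.0, the initial loss scale
+
+opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
+    tf.keras.optimizers.SGD(), loss_scale)
+```
+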
+ + + +## Methods + +

from_config

+ +View source + + + +Creates the LossScale from its config. + + +

get_config

+ +View source + + + +Returns the config of this loss scale. + + +

update

+ +View source + + + +Updates loss scale based on if gradients are finite in current step. + + +

__call__

+ +View source + + + +Returns the current loss scale as a scalar `float32` tensor. + + + + diff --git a/site/en/api_docs/python/tf/mixed_precision/experimental/FixedLossScale.md b/site/en/api_docs/python/tf/mixed_precision/experimental/FixedLossScale.md new file mode 100644 index 00000000000..311c3923f1c --- /dev/null +++ b/site/en/api_docs/python/tf/mixed_precision/experimental/FixedLossScale.md @@ -0,0 +1,213 @@ +description: Loss scale with a fixed value. + +
+ + + + + + + +
+ +# tf.mixed_precision.experimental.FixedLossScale + + + + + + + + + +Loss scale with a fixed value. + +Inherits From: [`LossScale`](../../../tf/mixed_precision/experimental/LossScale.md) + + + + + + + + + +The loss scale is not updated for the lifetime of instances of this class. +A given instance of this class always returns the same number when called. + + + + + + + + + + +
+`loss_scale_value` + +A Python float. Its ideal value varies depending on +models to run. Choosing a too small loss_scale might affect model +quality; a too big loss_scale might cause inf or nan. There is no single +right loss_scale to apply. There is no harm choosing a relatively big +number as long as no nan or inf is encountered in training. +
+ + + + + + + + + + + + +
+`ValueError` + +If loss_scale_value is less than 1. +
+ + + +## Methods + +

from_config

+ +View source + + + +Creates the LossScale from its config. + + +

get_config

+ +View source + + + +Returns the config of this loss scale. + + +

update

+
+View source
+
+
+
+Updates the value of the loss scale.
+
+The loss scale will potentially be updated, based on the value of `grads`.
+The tensor returned by calling this class is only updated when this function
+is evaluated.
+
+In eager mode, this directly updates the loss scale, so that calling
+`__call__` will return the newly updated loss scale. In graph mode,
+this returns an op that, when evaluated, updates the loss scale.
+
+This function also returns a `should_apply_gradients` bool. If False,
+gradients should not be applied to the variables for that step, as nonfinite
+gradients were found, and the loss scale has been updated to reduce the
+chance of finding nonfinite gradients in the next step. Some loss scale
+classes will always return True, as they cannot adjust themselves in
+response to nonfinite gradients.
+
+When a DistributionStrategy is used, this function may only be called in a
+cross-replica context.
+
Args
+`grads`
+
+A nested structure of unscaled gradients, each of which is the
+gradient of the loss with respect to a weight. The gradients should have
+already been divided by the loss scale before being passed to this
+function. 'None' gradients are accepted, and are ignored.
+
+ + + + + + + + + + + + + + + +
Returns
+`update_op` + +In eager mode, None. In graph mode, an op to update the loss +scale. +
+`should_apply_gradients` + +Either a bool or a scalar boolean tensor. If +False, the caller should skip applying `grads` to the variables this +step. +
+ + + +

__call__

+ +View source + + + +Returns the current loss scale as a scalar `float32` tensor. + + + + diff --git a/site/en/api_docs/python/tf/mixed_precision/experimental/LossScale.md b/site/en/api_docs/python/tf/mixed_precision/experimental/LossScale.md new file mode 100644 index 00000000000..01f2b35b148 --- /dev/null +++ b/site/en/api_docs/python/tf/mixed_precision/experimental/LossScale.md @@ -0,0 +1,205 @@ +description: Base class for all loss scales. + +
+ + + + + + + +
+ +# tf.mixed_precision.experimental.LossScale + + + + + + + + + +Base class for all loss scales. + + + + + + + + + +This is an abstract base class, so you cannot instantiate it directly. +Instead, use one of its concrete subclasses: + * tf.mixed_precision.experimental.DynamicLossScale (recommended) + * tf.mixed_precision.experimental.FixedLossScale + +It's recommended to use a loss scale with a +tf.keras.mixed_precision.experimental.LossScaleOptimizer, as its easier than +using a loss scale directly. + +Loss scaling is a process that multiplies the loss by a multiplier called the +loss scale, and divides each gradient by the same multiplier. The pseudocode +for this process is: + +``` +loss = ... +loss *= loss_scale +grads = gradients(loss, vars) +grads /= loss_scale +``` + +Mathematically, loss scaling has no effect, but can help avoid numerical +underflow in intermediate gradients when float16 tensors are used for mixed +precision training. By multiplying the loss, each intermediate gradient will +have the same multiplier applied. + +Instances of this class represent a loss scale. Calling instances of this +class returns the loss scale as a scalar float32 tensor, while method +`update()` updates the loss scale depending on the values of the gradients. +Optimizers use instances of this class to scale loss and gradients. + +In most functions that accept a LossScale, you can also pass an int (such as +8) to create a `FixedLossScale` or the string `"dynamic"` to create a dynamic +loss scale. + +## Methods + +

from_config

+ +View source + + + +Creates the LossScale from its config. + + +

get_config

+ +View source + + + +Returns the config of this loss scale. + + +

update

+
+View source
+
+
+
+Updates the value of the loss scale.
+
+The loss scale will potentially be updated, based on the value of `grads`.
+The tensor returned by calling this class is only updated when this function
+is evaluated.
+
+In eager mode, this directly updates the loss scale, so that calling
+`__call__` will return the newly updated loss scale. In graph mode,
+this returns an op that, when evaluated, updates the loss scale.
+
+This function also returns a `should_apply_gradients` bool. If False,
+gradients should not be applied to the variables for that step, as nonfinite
+gradients were found, and the loss scale has been updated to reduce the
+chance of finding nonfinite gradients in the next step. Some loss scale
+classes will always return True, as they cannot adjust themselves in
+response to nonfinite gradients.
+
+When a DistributionStrategy is used, this function may only be called in a
+cross-replica context.
+
Args
+`grads`
+
+A nested structure of unscaled gradients, each of which is the
+gradient of the loss with respect to a weight. The gradients should have
+already been divided by the loss scale before being passed to this
+function. 'None' gradients are accepted, and are ignored.
+
+ + + + + + + + + + + + + + + +
Returns
+`update_op` + +In eager mode, None. In graph mode, an op to update the loss +scale. +
+`should_apply_gradients` + +Either a bool or a scalar boolean tensor. If +False, the caller should skip applying `grads` to the variables this +step. +
+ + + +

__call__

+ +View source + + + +Returns the current loss scale as a scalar `float32` tensor. + + + + diff --git a/site/en/api_docs/python/tf/mlir.md b/site/en/api_docs/python/tf/mlir.md new file mode 100644 index 00000000000..e0a19c2b09a --- /dev/null +++ b/site/en/api_docs/python/tf/mlir.md @@ -0,0 +1,25 @@ +description: Public API for tf.mlir namespace. + +
+ + +
+ +# Module: tf.mlir + + + + + + + + + +Public API for tf.mlir namespace. + + + +## Modules + +[`experimental`](../tf/mlir/experimental.md) module: Public API for tf.mlir.experimental namespace. + diff --git a/site/en/api_docs/python/tf/mlir/experimental.md b/site/en/api_docs/python/tf/mlir/experimental.md new file mode 100644 index 00000000000..4820abb74db --- /dev/null +++ b/site/en/api_docs/python/tf/mlir/experimental.md @@ -0,0 +1,25 @@ +description: Public API for tf.mlir.experimental namespace. + +
+ + +
+ +# Module: tf.mlir.experimental + + + + + + + + + +Public API for tf.mlir.experimental namespace. + + + +## Functions + +[`convert_graph_def(...)`](../../tf/mlir/experimental/convert_graph_def.md): Import a GraphDef and convert it to a textual MLIR module. + diff --git a/site/en/api_docs/python/tf/mlir/experimental/convert_graph_def.md b/site/en/api_docs/python/tf/mlir/experimental/convert_graph_def.md new file mode 100644 index 00000000000..b67c76dbdcb --- /dev/null +++ b/site/en/api_docs/python/tf/mlir/experimental/convert_graph_def.md @@ -0,0 +1,86 @@ +description: Import a GraphDef and convert it to a textual MLIR module. + +
+ + +
+ +# tf.mlir.experimental.convert_graph_def + + + + + + + + + +Import a GraphDef and convert it to a textual MLIR module. + + + + + + + + + + + + + + + + + + + + + + +
+`graph_def` + +An object of type graph_pb2.GraphDef or a textual proto +representation of a valid GraphDef. +
+`pass_pipeline` + +A textual description of an MLIR Pass Pipeline to run on the +module, see MLIR documentation for the +[textual pass pipeline syntax](https://github.com/tensorflow/mlir/blob/master/g3doc/WritingAPass.md#textual-pass-pipeline-specification). +
+ + + + + + + + + + + +
+A textual representation of the MLIR module corresponding to the graphdef. +Raises a RuntimeError on error. +
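+
+#### Usage Example:
+
+A rough sketch, assuming the GraphDef is obtained from a traced tf.function;
+the exact MLIR text depends on the TensorFlow build:
+
+```python
+import tensorflow as tf
+
+@tf.function
+def add(a, b):
+  return a + b
+
+graph_def = add.get_concrete_function(
+    tf.TensorSpec(None, tf.float32),
+    tf.TensorSpec(None, tf.float32)).graph.as_graph_def()
+
+mlir_text = tf.mlir.experimental.convert_graph_def(graph_def)
+print(mlir_text)  # textual MLIR module for the traced graph
+```
+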
+ diff --git a/site/en/api_docs/python/tf/name_scope.md b/site/en/api_docs/python/tf/name_scope.md new file mode 100644 index 00000000000..451de5a53ce --- /dev/null +++ b/site/en/api_docs/python/tf/name_scope.md @@ -0,0 +1,172 @@ +description: A context manager for use when defining a Python op. + +
+ + + + + +
+ +# tf.name_scope + + + + + + + + + +A context manager for use when defining a Python op. + + + + + + + +This context manager pushes a name scope, which will make the name of all +operations added within it have a prefix. + +For example, to define a new Python op called `my_op`: + +```python +def my_op(a, b, c, name=None): + with tf.name_scope("MyOp") as scope: + a = tf.convert_to_tensor(a, name="a") + b = tf.convert_to_tensor(b, name="b") + c = tf.convert_to_tensor(c, name="c") + # Define some computation that uses `a`, `b`, and `c`. + return foo_op(..., name=scope) +``` + +When executed, the Tensors `a`, `b`, `c`, will have names `MyOp/a`, `MyOp/b`, +and `MyOp/c`. + +If the scope name already exists, the name will be made unique by appending +`_n`. For example, calling `my_op` the second time will generate `MyOp_1/a`, +etc. + + + + + + + + + + +
+`name` + +The prefix to use on all names created within the name scope. +
+ + + + + + + + + + + + +
+`ValueError` + +If name is None, or not a string. +
+ + + + + + + + + + + + + + +
+`name` + + +
+ + + +## Methods + +

__enter__

+ +View source + + + +Start the scope block. + + + + + + + + + + +
Returns
+The scope name. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if neither `name` nor `default_name` is provided +but `values` are. +
+ + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/nest.md b/site/en/api_docs/python/tf/nest.md new file mode 100644 index 00000000000..e5e53a2f9cb --- /dev/null +++ b/site/en/api_docs/python/tf/nest.md @@ -0,0 +1,33 @@ +description: Public API for tf.nest namespace. + +
+ + +
+ +# Module: tf.nest + + + + + + + + + +Public API for tf.nest namespace. + + + +## Functions + +[`assert_same_structure(...)`](../tf/nest/assert_same_structure.md): Asserts that two structures are nested in the same way. + +[`flatten(...)`](../tf/nest/flatten.md): Returns a flat list from a given nested structure. + +[`is_nested(...)`](../tf/nest/is_nested.md): Returns true if its input is a collections.abc.Sequence (except strings). + +[`map_structure(...)`](../tf/nest/map_structure.md): Applies `func` to each entry in `structure` and returns a new structure. + +[`pack_sequence_as(...)`](../tf/nest/pack_sequence_as.md): Returns a given flattened sequence packed into a given structure. + diff --git a/site/en/api_docs/python/tf/nest/assert_same_structure.md b/site/en/api_docs/python/tf/nest/assert_same_structure.md new file mode 100644 index 00000000000..009e1b10ce6 --- /dev/null +++ b/site/en/api_docs/python/tf/nest/assert_same_structure.md @@ -0,0 +1,125 @@ +description: Asserts that two structures are nested in the same way. + +
+ + +
+
+# tf.nest.assert_same_structure
+
+
+
+
+
+
+
+
+
+Asserts that two structures are nested in the same way.
+
+
+
+
+
+
+
+
+
+Note that namedtuples with identical name and fields are always considered
+to have the same shallow structure (even with `check_types=True`).
+For instance, the following code does not raise an exception:
+
+```python
+import collections
+
+def nt(a, b):
+  return collections.namedtuple('foo', 'a b')(a, b)
+
+tf.nest.assert_same_structure(nt(0, 1), nt(2, 3))
+```
+
+`nest1` + +an arbitrarily nested structure. +
+`nest2` + +an arbitrarily nested structure. +
+`check_types` + +if `True` (default) types of sequences are checked as well, +including the keys of dictionaries. If set to `False`, for example a +list and a tuple of objects will look the same if they have the same +size. Note that namedtuples with identical name and fields are always +considered to have the same shallow structure. Two types will also be +considered the same if they are both list subtypes (which allows "list" +and "_ListWrapper" from trackable dependency tracking to compare +equal). +
+`expand_composites` + +If true, then composite tensors such as tf.SparseTensor +and tf.RaggedTensor are expanded into their component tensors. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If the two structures do not have the same number of elements or +if the two structures are not nested in the same way. +
+`TypeError` + +If the two structures differ in the type of sequence in any of +their substructures. Only possible if `check_types` is `True`. +
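+
+As an illustrative sketch (assuming eager TensorFlow 2.x), `check_types`
+controls whether a list and a tuple of the same length count as the same
+structure:
+
+```python
+import tensorflow as tf
+
+# Same nesting, different sequence types: allowed when check_types=False.
+tf.nest.assert_same_structure([1, 2, 3], (4, 5, 6), check_types=False)
+
+try:
+  tf.nest.assert_same_structure([1, 2, 3], (4, 5, 6))  # check_types=True
+except TypeError as e:
+  print("Different sequence types:", e)
+```
+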
+ diff --git a/site/en/api_docs/python/tf/nest/flatten.md b/site/en/api_docs/python/tf/nest/flatten.md new file mode 100644 index 00000000000..d91f17d1cc5 --- /dev/null +++ b/site/en/api_docs/python/tf/nest/flatten.md @@ -0,0 +1,157 @@ +description: Returns a flat list from a given nested structure. + +
+ + +
+
+# tf.nest.flatten
+
+
+
+
+
+
+
+
+
+Returns a flat list from a given nested structure.
+
+
+
+
+
+
+
+
+
+If nest is not a structure, tuple (or a namedtuple), dict, or an attrs class,
+then returns a single-element list:
+  [nest].
+
+In the case of dict instances, the sequence consists of the values, sorted by
+key to ensure deterministic behavior. This is true also for OrderedDict
+instances: their sequence order is ignored, the sorting order of keys is used
+instead. The same convention is followed in pack_sequence_as. This correctly
+repacks dicts and OrderedDicts after they have been flattened, and also allows
+flattening an OrderedDict and then repacking it back using a corresponding
+plain dict, or vice-versa. Dictionaries with non-sortable keys cannot be
+flattened.
+
+Users must not modify any collections used in nest while this function is
+running.
+
+#### Examples:
+
+
+
+1. Python dict (ordered by key):
+
+```
+>>> dict = { "key3": "value3", "key1": "value1", "key2": "value2" }
+>>> tf.nest.flatten(dict)
+['value1', 'value2', 'value3']
+```
+
+2. For a nested python tuple:
+
+```
+>>> tuple = ((1.0, 2.0), (3.0, 4.0, 5.0), (6.0))
+>>> tf.nest.flatten(tuple)
+  [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
+```
+
+3. Numpy array (will not flatten):
+
+```
+>>> array = np.array([[1, 2], [3, 4]])
+>>> tf.nest.flatten(array)
+  [array([[1, 2],
+          [3, 4]])]
+```
+
+
+4. tf.Tensor (will not flatten):
+
+```
+>>> tensor = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
+>>> tf.nest.flatten(tensor)
+  [<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
+  array([[1., 2., 3.],
+         [4., 5., 6.],
+         [7., 8., 9.]], dtype=float32)>]
+```
+
+`structure` + +an arbitrarily nested structure. Note, numpy arrays are +considered atoms and are not flattened. +
+`expand_composites` + +If true, then composite tensors such as tf.SparseTensor +and tf.RaggedTensor are expanded into their component tensors. +
+ + + + + + + + + + + +
+A Python list, the flattened version of the input. +
+ + + + + + + + + + + + +
+`TypeError` + +The nest is or contains a dict with non-sortable keys. +
+ diff --git a/site/en/api_docs/python/tf/nest/is_nested.md b/site/en/api_docs/python/tf/nest/is_nested.md new file mode 100644 index 00000000000..acc1659472c --- /dev/null +++ b/site/en/api_docs/python/tf/nest/is_nested.md @@ -0,0 +1,76 @@ +description: Returns true if its input is a collections.abc.Sequence (except strings). + +
+ + +
+ +# tf.nest.is_nested + + + + + + + + + +Returns true if its input is a collections.abc.Sequence (except strings). + + + + + + + + + + + + + + + + + + + +
+`seq` + +an input sequence. +
+ + + + + + + + + + + +
+True if the sequence is not a string and is a collections.abc.Sequence
+or a dict.
+
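+
+A short illustrative sketch (assuming eager TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+print(tf.nest.is_nested([1, 2, 3]))        # True: a list is a nested structure.
+print(tf.nest.is_nested({"a": 1}))         # True: dicts count as nested.
+print(tf.nest.is_nested("hello"))          # False: strings are not treated as sequences.
+print(tf.nest.is_nested(tf.constant(1.)))  # False: a single Tensor is an atom.
+```
+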
+ diff --git a/site/en/api_docs/python/tf/nest/map_structure.md b/site/en/api_docs/python/tf/nest/map_structure.md new file mode 100644 index 00000000000..2cbced724f8 --- /dev/null +++ b/site/en/api_docs/python/tf/nest/map_structure.md @@ -0,0 +1,142 @@ +description: Applies func to each entry in structure and returns a new structure. + +
+ + +
+ +# tf.nest.map_structure + + + + + + + + + +Applies `func` to each entry in `structure` and returns a new structure. + + + + + + + + + +Applies `func(x[0], x[1], ...)` where x[i] is an entry in +`structure[i]`. All structures in `structure` must have the same arity, +and the return value will contain results with the same structure layout. + + + + + + + + + + + + + + + + +
+`func` + +A callable that accepts as many arguments as there are structures. +
+`*structure` + +scalar, or tuple or dict or list of constructed scalars and/or +other tuples/lists, or scalars. Note: numpy arrays are considered as +scalars. +
+`**kwargs` + +Valid keyword args are: + +* `check_types`: If set to `True` (default) the types of +iterables within the structures have to be same (e.g. +`map_structure(func, [1], (1,))` raises a `TypeError` +exception). To allow this set this argument to `False`. +Note that namedtuples with identical name and fields are always +considered to have the same shallow structure. +* `expand_composites`: If set to `True`, then composite tensors such +as tf.SparseTensor and tf.RaggedTensor are expanded into their +component tensors. If `False` (the default), then composite tensors +are not expanded. +
+ + + + + + + + + + + +
+A new structure with the same arity as `structure`, whose values correspond +to `func(x[0], x[1], ...)` where `x[i]` is a value in the corresponding +location in `structure[i]`. If there are different sequence types and +`check_types` is `False` the sequence types of the first structure will be +used. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +If `func` is not callable or if the structures do not match +each other by depth tree. +
+`ValueError` + +If no structure is provided or if the structures do not match +each other by type. +
+`ValueError` + +If wrong keyword arguments are provided. +
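+
+For example, a minimal sketch (assuming eager TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+a = {"x": 1, "y": (2, 3)}
+b = {"x": 10, "y": (20, 30)}
+
+# Applies the function leaf-by-leaf and keeps the dict/tuple layout.
+summed = tf.nest.map_structure(lambda u, v: u + v, a, b)
+print(summed)  # {'x': 11, 'y': (22, 33)}
+```
+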
+ diff --git a/site/en/api_docs/python/tf/nest/pack_sequence_as.md b/site/en/api_docs/python/tf/nest/pack_sequence_as.md new file mode 100644 index 00000000000..06e9801c33a --- /dev/null +++ b/site/en/api_docs/python/tf/nest/pack_sequence_as.md @@ -0,0 +1,132 @@ +description: Returns a given flattened sequence packed into a given structure. + +
+ + +
+ +# tf.nest.pack_sequence_as + + + + + + + + + +Returns a given flattened sequence packed into a given structure. + + + + + + + + + +If `structure` is a scalar, `flat_sequence` must be a single-element list; +in this case the return value is `flat_sequence[0]`. + +If `structure` is or contains a dict instance, the keys will be sorted to +pack the flat sequence in deterministic order. This is true also for +`OrderedDict` instances: their sequence order is ignored, the sorting order of +keys is used instead. The same convention is followed in `flatten`. +This correctly repacks dicts and `OrderedDict`s after they have been +flattened, and also allows flattening an `OrderedDict` and then repacking it +back using a corresponding plain dict, or vice-versa. +Dictionaries with non-sortable keys cannot be flattened. + + + + + + + + + + + + + + + + +
+`structure` + +Nested structure, whose structure is given by nested lists, +tuples, and dicts. Note: numpy arrays and strings are considered +scalars. +
+`flat_sequence` + +flat sequence to pack. +
+`expand_composites` + +If true, then composite tensors such as tf.SparseTensor +and tf.RaggedTensor are expanded into their component tensors. +
+ + + + + + + + + + + + +
+`packed` + +`flat_sequence` converted to have the same recursive structure as +`structure`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `flat_sequence` and `structure` have different +element counts. +
+`TypeError` + +`structure` is or contains a dict with non-sortable keys. +
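+
+A minimal round-trip sketch with `tf.nest.flatten` (assuming eager
+TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+structure = {"a": [1, 2], "b": (3, {"c": 4})}
+flat = tf.nest.flatten(structure)                    # [1, 2, 3, 4]
+rebuilt = tf.nest.pack_sequence_as(structure, flat)
+print(rebuilt)  # {'a': [1, 2], 'b': (3, {'c': 4})}
+```
+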
+ diff --git a/site/en/api_docs/python/tf/nn.md b/site/en/api_docs/python/tf/nn.md new file mode 100644 index 00000000000..5fca01f48e8 --- /dev/null +++ b/site/en/api_docs/python/tf/nn.md @@ -0,0 +1,193 @@ +description: Wrappers for primitive Neural Net (NN) Operations. + +
+ + +
+ +# Module: tf.nn + + + + + + + + + +Wrappers for primitive Neural Net (NN) Operations. + + + +## Classes + +[`class RNNCellDeviceWrapper`](../tf/nn/RNNCellDeviceWrapper.md): Operator that ensures an RNNCell runs on a particular device. + +[`class RNNCellDropoutWrapper`](../tf/nn/RNNCellDropoutWrapper.md): Operator adding dropout to inputs and outputs of the given cell. + +[`class RNNCellResidualWrapper`](../tf/nn/RNNCellResidualWrapper.md): RNNCell wrapper that ensures cell inputs are added to the outputs. + +## Functions + +[`all_candidate_sampler(...)`](../tf/random/all_candidate_sampler.md): Generate the set of all classes. + +[`atrous_conv2d(...)`](../tf/nn/atrous_conv2d.md): Atrous convolution (a.k.a. convolution with holes or dilated convolution). + +[`atrous_conv2d_transpose(...)`](../tf/nn/atrous_conv2d_transpose.md): The transpose of `atrous_conv2d`. + +[`avg_pool(...)`](../tf/nn/avg_pool.md): Performs the avg pooling on the input. + +[`avg_pool1d(...)`](../tf/nn/avg_pool1d.md): Performs the average pooling on the input. + +[`avg_pool2d(...)`](../tf/nn/avg_pool2d.md): Performs the average pooling on the input. + +[`avg_pool3d(...)`](../tf/nn/avg_pool3d.md): Performs the average pooling on the input. + +[`batch_norm_with_global_normalization(...)`](../tf/nn/batch_norm_with_global_normalization.md): Batch normalization. + +[`batch_normalization(...)`](../tf/nn/batch_normalization.md): Batch normalization. + +[`bias_add(...)`](../tf/nn/bias_add.md): Adds `bias` to `value`. + +[`collapse_repeated(...)`](../tf/nn/collapse_repeated.md): Merge repeated labels into single labels. + +[`compute_accidental_hits(...)`](../tf/nn/compute_accidental_hits.md): Compute the position ids in `sampled_candidates` matching `true_classes`. + +[`compute_average_loss(...)`](../tf/nn/compute_average_loss.md): Scales per-example losses with sample_weights and computes their average. + +[`conv1d(...)`](../tf/nn/conv1d.md): Computes a 1-D convolution given 3-D input and filter tensors. + +[`conv1d_transpose(...)`](../tf/nn/conv1d_transpose.md): The transpose of `conv1d`. + +[`conv2d(...)`](../tf/nn/conv2d.md): Computes a 2-D convolution given 4-D `input` and `filters` tensors. + +[`conv2d_transpose(...)`](../tf/nn/conv2d_transpose.md): The transpose of `conv2d`. + +[`conv3d(...)`](../tf/nn/conv3d.md): Computes a 3-D convolution given 5-D `input` and `filters` tensors. + +[`conv3d_transpose(...)`](../tf/nn/conv3d_transpose.md): The transpose of `conv3d`. + +[`conv_transpose(...)`](../tf/nn/conv_transpose.md): The transpose of `convolution`. + +[`convolution(...)`](../tf/nn/convolution.md): Computes sums of N-D convolutions (actually cross-correlation). + +[`crelu(...)`](../tf/nn/crelu.md): Computes Concatenated ReLU. + +[`ctc_beam_search_decoder(...)`](../tf/nn/ctc_beam_search_decoder.md): Performs beam search decoding on the logits given in input. + +[`ctc_greedy_decoder(...)`](../tf/nn/ctc_greedy_decoder.md): Performs greedy decoding on the logits given in input (best path). + +[`ctc_loss(...)`](../tf/nn/ctc_loss.md): Computes CTC (Connectionist Temporal Classification) loss. + +[`ctc_unique_labels(...)`](../tf/nn/ctc_unique_labels.md): Get unique labels and indices for batched labels for tf.nn.ctc_loss. + +[`depth_to_space(...)`](../tf/nn/depth_to_space.md): DepthToSpace for tensors of type T. + +[`depthwise_conv2d(...)`](../tf/nn/depthwise_conv2d.md): Depthwise 2-D convolution. 
+ +[`depthwise_conv2d_backprop_filter(...)`](../tf/nn/depthwise_conv2d_backprop_filter.md): Computes the gradients of depthwise convolution with respect to the filter. + +[`depthwise_conv2d_backprop_input(...)`](../tf/nn/depthwise_conv2d_backprop_input.md): Computes the gradients of depthwise convolution with respect to the input. + +[`dilation2d(...)`](../tf/nn/dilation2d.md): Computes the grayscale dilation of 4-D `input` and 3-D `filters` tensors. + +[`dropout(...)`](../tf/nn/dropout.md): Computes dropout: randomly sets elements to zero to prevent overfitting. + +[`elu(...)`](../tf/nn/elu.md): Computes exponential linear: `exp(features) - 1` if < 0, `features` otherwise. + +[`embedding_lookup(...)`](../tf/nn/embedding_lookup.md): Looks up `ids` in a list of embedding tensors. + +[`embedding_lookup_sparse(...)`](../tf/nn/embedding_lookup_sparse.md): Computes embeddings for the given ids and weights. + +[`erosion2d(...)`](../tf/nn/erosion2d.md): Computes the grayscale erosion of 4-D `value` and 3-D `filters` tensors. + +[`fixed_unigram_candidate_sampler(...)`](../tf/random/fixed_unigram_candidate_sampler.md): Samples a set of classes using the provided (fixed) base distribution. + +[`fractional_avg_pool(...)`](../tf/nn/fractional_avg_pool.md): Performs fractional average pooling on the input. + +[`fractional_max_pool(...)`](../tf/nn/fractional_max_pool.md): Performs fractional max pooling on the input. + +[`in_top_k(...)`](../tf/math/in_top_k.md): Says whether the targets are in the top `K` predictions. + +[`l2_loss(...)`](../tf/nn/l2_loss.md): L2 Loss. + +[`l2_normalize(...)`](../tf/math/l2_normalize.md): Normalizes along dimension `axis` using an L2 norm. + +[`leaky_relu(...)`](../tf/nn/leaky_relu.md): Compute the Leaky ReLU activation function. + +[`learned_unigram_candidate_sampler(...)`](../tf/random/learned_unigram_candidate_sampler.md): Samples a set of classes from a distribution learned during training. + +[`local_response_normalization(...)`](../tf/nn/local_response_normalization.md): Local Response Normalization. + +[`log_poisson_loss(...)`](../tf/nn/log_poisson_loss.md): Computes log Poisson loss given `log_input`. + +[`log_softmax(...)`](../tf/nn/log_softmax.md): Computes log softmax activations. + +[`lrn(...)`](../tf/nn/local_response_normalization.md): Local Response Normalization. + +[`max_pool(...)`](../tf/nn/max_pool.md): Performs the max pooling on the input. + +[`max_pool1d(...)`](../tf/nn/max_pool1d.md): Performs the max pooling on the input. + +[`max_pool2d(...)`](../tf/nn/max_pool2d.md): Performs the max pooling on the input. + +[`max_pool3d(...)`](../tf/nn/max_pool3d.md): Performs the max pooling on the input. + +[`max_pool_with_argmax(...)`](../tf/nn/max_pool_with_argmax.md): Performs max pooling on the input and outputs both max values and indices. + +[`moments(...)`](../tf/nn/moments.md): Calculates the mean and variance of `x`. + +[`nce_loss(...)`](../tf/nn/nce_loss.md): Computes and returns the noise-contrastive estimation training loss. + +[`normalize_moments(...)`](../tf/nn/normalize_moments.md): Calculate the mean and variance of based on the sufficient statistics. + +[`pool(...)`](../tf/nn/pool.md): Performs an N-D pooling operation. + +[`relu(...)`](../tf/nn/relu.md): Computes rectified linear: `max(features, 0)`. + +[`relu6(...)`](../tf/nn/relu6.md): Computes Rectified Linear 6: `min(max(features, 0), 6)`. 
+ +[`safe_embedding_lookup_sparse(...)`](../tf/nn/safe_embedding_lookup_sparse.md): Lookup embedding results, accounting for invalid IDs and empty features. + +[`sampled_softmax_loss(...)`](../tf/nn/sampled_softmax_loss.md): Computes and returns the sampled softmax training loss. + +[`scale_regularization_loss(...)`](../tf/nn/scale_regularization_loss.md): Scales the sum of the given regularization losses by number of replicas. + +[`selu(...)`](../tf/nn/selu.md): Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)` + +[`separable_conv2d(...)`](../tf/nn/separable_conv2d.md): 2-D convolution with separable filters. + +[`sigmoid(...)`](../tf/math/sigmoid.md): Computes sigmoid of `x` element-wise. + +[`sigmoid_cross_entropy_with_logits(...)`](../tf/nn/sigmoid_cross_entropy_with_logits.md): Computes sigmoid cross entropy given `logits`. + +[`softmax(...)`](../tf/nn/softmax.md): Computes softmax activations. + +[`softmax_cross_entropy_with_logits(...)`](../tf/nn/softmax_cross_entropy_with_logits.md): Computes softmax cross entropy between `logits` and `labels`. + +[`softplus(...)`](../tf/math/softplus.md): Computes softplus: `log(exp(features) + 1)`. + +[`softsign(...)`](../tf/nn/softsign.md): Computes softsign: `features / (abs(features) + 1)`. + +[`space_to_batch(...)`](../tf/space_to_batch.md): SpaceToBatch for N-D tensors of type T. + +[`space_to_depth(...)`](../tf/nn/space_to_depth.md): SpaceToDepth for tensors of type T. + +[`sparse_softmax_cross_entropy_with_logits(...)`](../tf/nn/sparse_softmax_cross_entropy_with_logits.md): Computes sparse softmax cross entropy between `logits` and `labels`. + +[`sufficient_statistics(...)`](../tf/nn/sufficient_statistics.md): Calculate the sufficient statistics for the mean and variance of `x`. + +[`swish(...)`](../tf/nn/swish.md): Computes the Swish activation function: `x * sigmoid(x)`. + +[`tanh(...)`](../tf/math/tanh.md): Computes hyperbolic tangent of `x` element-wise. + +[`top_k(...)`](../tf/math/top_k.md): Finds values and indices of the `k` largest entries for the last dimension. + +[`weighted_cross_entropy_with_logits(...)`](../tf/nn/weighted_cross_entropy_with_logits.md): Computes a weighted cross entropy. + +[`weighted_moments(...)`](../tf/nn/weighted_moments.md): Returns the frequency-weighted mean and variance of `x`. + +[`with_space_to_batch(...)`](../tf/nn/with_space_to_batch.md): Performs `op` on the space-to-batch representation of `input`. + +[`zero_fraction(...)`](../tf/math/zero_fraction.md): Returns the fraction of zeros in `value`. + diff --git a/site/en/api_docs/python/tf/nn/RNNCellDeviceWrapper.md b/site/en/api_docs/python/tf/nn/RNNCellDeviceWrapper.md new file mode 100644 index 00000000000..964f6f97499 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/RNNCellDeviceWrapper.md @@ -0,0 +1,126 @@ +description: Operator that ensures an RNNCell runs on a particular device. + +
+ + + + + + +
+ +# tf.nn.RNNCellDeviceWrapper + + + + + + + + + +Operator that ensures an RNNCell runs on a particular device. + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +An instance of `RNNCell`. +
+`device` + +A device string or function, for passing to tf.device. +
+`**kwargs` + +dict of keyword arguments for base layer. +
+ + + + + + + + + + + + + + + + + +
+`output_size` + + +
+`state_size` + + +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/nn/RNNCellDropoutWrapper.md b/site/en/api_docs/python/tf/nn/RNNCellDropoutWrapper.md new file mode 100644 index 00000000000..416eebed25f --- /dev/null +++ b/site/en/api_docs/python/tf/nn/RNNCellDropoutWrapper.md @@ -0,0 +1,233 @@ +description: Operator adding dropout to inputs and outputs of the given cell. + +
+ + + + + + +
+ +# tf.nn.RNNCellDropoutWrapper + + + + + + + + + +Operator adding dropout to inputs and outputs of the given cell. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +an RNNCell, a projection to output_size is added to it. +
+`input_keep_prob` + +unit Tensor or float between 0 and 1, input keep +probability; if it is constant and 1, no input dropout will be added. +
+`output_keep_prob` + +unit Tensor or float between 0 and 1, output keep +probability; if it is constant and 1, no output dropout will be added. +
+`state_keep_prob` + +unit Tensor or float between 0 and 1, output keep +probability; if it is constant and 1, no output dropout will be added. +State dropout is performed on the outgoing states of the cell. **Note** +the state components to which dropout is applied when `state_keep_prob` +is in `(0, 1)` are also determined by the argument +`dropout_state_filter_visitor` (e.g. by default dropout is never applied +to the `c` component of an `LSTMStateTuple`). +
+`variational_recurrent` + +Python bool. If `True`, then the same dropout +pattern is applied across all time steps per run call. If this parameter +is set, `input_size` **must** be provided. +
+`input_size` + +(optional) (possibly nested tuple of) `TensorShape` objects +containing the depth(s) of the input tensors expected to be passed in to +the `DropoutWrapper`. Required and used **iff** `variational_recurrent += True` and `input_keep_prob < 1`. +
+`dtype` + +(optional) The `dtype` of the input, state, and output tensors. +Required and used **iff** `variational_recurrent = True`. +
+`seed` + +(optional) integer, the randomness seed. +
+`dropout_state_filter_visitor`
+
+(optional), default: (see below). Function
+that takes any hierarchical level of the state and returns a scalar or
+depth=1 structure of Python booleans describing which terms in the state
+should be dropped out. In addition, if the function returns `True`,
+dropout is applied across this sublevel. If the function returns
+`False`, dropout is not applied across this entire sublevel.
+Default behavior: perform dropout on all terms except the memory (`c`)
+state of `LSTMCellState` objects, and don't try to apply dropout to
+`TensorArray` objects:
+
+```
+def dropout_state_filter_visitor(s):
+  if isinstance(s, LSTMCellState):
+    # Never perform dropout on the c state.
+    return LSTMCellState(c=False, h=True)
+  elif isinstance(s, TensorArray):
+    return False
+  return True
+```
+
+`**kwargs` + +dict of keyword arguments for base layer. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `cell` is not an `RNNCell`, or `keep_state_fn` is provided +but not `callable`. +
+`ValueError` + +if any of the keep_probs are not between 0 and 1. +
+ + + + + + + + + + + + + + + + + + + + +
+`output_size` + + +
+`state_size` + + +
+`wrapped_cell` + + +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/nn/RNNCellResidualWrapper.md b/site/en/api_docs/python/tf/nn/RNNCellResidualWrapper.md new file mode 100644 index 00000000000..d820b1fb927 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/RNNCellResidualWrapper.md @@ -0,0 +1,129 @@ +description: RNNCell wrapper that ensures cell inputs are added to the outputs. + +
+ + + + + + +
+ +# tf.nn.RNNCellResidualWrapper + + + + + + + + + +RNNCell wrapper that ensures cell inputs are added to the outputs. + + + + + + + + + + + + + + + + + + + + + + + +
+`cell` + +An instance of `RNNCell`. +
+`residual_fn` + +(Optional) The function to map raw cell inputs and raw cell +outputs to the actual cell outputs of the residual network. +Defaults to calling nest.map_structure on (lambda i, o: i + o), inputs +and outputs. +
+`**kwargs` + +dict of keyword arguments for base layer. +
+ + + + + + + + + + + + + + + + + +
+`output_size` + + +
+`state_size` + + +
+ + + +## Methods + +

get_initial_state

+ +View source + + + + + + +

zero_state

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/nn/atrous_conv2d.md b/site/en/api_docs/python/tf/nn/atrous_conv2d.md new file mode 100644 index 00000000000..afa4da81f4a --- /dev/null +++ b/site/en/api_docs/python/tf/nn/atrous_conv2d.md @@ -0,0 +1,247 @@ +description: Atrous convolution (a.k.a. convolution with holes or dilated convolution). + +
+ + +
+ +# tf.nn.atrous_conv2d + + + + + + + + + +Atrous convolution (a.k.a. convolution with holes or dilated convolution). + + + + + + + + + +This function is a simpler wrapper around the more general +tf.nn.convolution, and exists only for backwards compatibility. You can +use tf.nn.convolution to perform 1-D, 2-D, or 3-D atrous convolution. + + +Computes a 2-D atrous convolution, also known as convolution with holes or +dilated convolution, given 4-D `value` and `filters` tensors. If the `rate` +parameter is equal to one, it performs regular 2-D convolution. If the `rate` +parameter is greater than one, it performs convolution with holes, sampling +the input values every `rate` pixels in the `height` and `width` dimensions. +This is equivalent to convolving the input with a set of upsampled filters, +produced by inserting `rate - 1` zeros between two consecutive values of the +filters along the `height` and `width` dimensions, hence the name atrous +convolution or convolution with holes (the French word trous means holes in +English). + +#### More specifically: + + + +``` +output[batch, height, width, out_channel] = + sum_{dheight, dwidth, in_channel} ( + filters[dheight, dwidth, in_channel, out_channel] * + value[batch, height + rate*dheight, width + rate*dwidth, in_channel] + ) +``` + +Atrous convolution allows us to explicitly control how densely to compute +feature responses in fully convolutional networks. Used in conjunction with +bilinear interpolation, it offers an alternative to `conv2d_transpose` in +dense prediction tasks such as semantic image segmentation, optical flow +computation, or depth estimation. It also allows us to effectively enlarge +the field of view of filters without increasing the number of parameters or +the amount of computation. + +For a description of atrous convolution and how it can be used for dense +feature extraction, please see: (Chen et al., 2015). The same operation is +investigated further in (Yu et al., 2016). Previous works that effectively +use atrous convolution in different ways are, among others, +(Sermanet et al., 2014) and (Giusti et al., 2013). +Atrous convolution is also closely related to the so-called noble identities +in multi-rate signal processing. + +There are many different ways to implement atrous convolution (see the refs +above). The implementation here reduces + +```python + atrous_conv2d(value, filters, rate, padding=padding) +``` + +to the following three operations: + +```python + paddings = ... + net = space_to_batch(value, paddings, block_size=rate) + net = conv2d(net, filters, strides=[1, 1, 1, 1], padding="VALID") + crops = ... + net = batch_to_space(net, crops, block_size=rate) +``` + +Advanced usage. Note the following optimization: A sequence of `atrous_conv2d` +operations with identical `rate` parameters, 'SAME' `padding`, and filters +with odd heights/ widths: + +```python + net = atrous_conv2d(net, filters1, rate, padding="SAME") + net = atrous_conv2d(net, filters2, rate, padding="SAME") + ... + net = atrous_conv2d(net, filtersK, rate, padding="SAME") +``` + +can be equivalently performed cheaper in terms of computation and memory as: + +```python + pad = ... # padding so that the input dims are multiples of rate + net = space_to_batch(net, paddings=pad, block_size=rate) + net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding="SAME") + net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding="SAME") + ... 
+ net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding="SAME") + net = batch_to_space(net, crops=pad, block_size=rate) +``` + +because a pair of consecutive `space_to_batch` and `batch_to_space` ops with +the same `block_size` cancel out when their respective `paddings` and `crops` +inputs are identical. + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A 4-D `Tensor` of type `float`. It needs to be in the default "NHWC" +format. Its shape is `[batch, in_height, in_width, in_channels]`. +
+`filters` + +A 4-D `Tensor` with the same type as `value` and shape +`[filter_height, filter_width, in_channels, out_channels]`. `filters`' +`in_channels` dimension must match that of `value`. Atrous convolution is +equivalent to standard convolution with upsampled filters with effective +height `filter_height + (filter_height - 1) * (rate - 1)` and effective +width `filter_width + (filter_width - 1) * (rate - 1)`, produced by +inserting `rate - 1` zeros along consecutive elements across the +`filters`' spatial dimensions. +
+`rate` + +A positive int32. The stride with which we sample input values across +the `height` and `width` dimensions. Equivalently, the rate by which we +upsample the filter values by inserting zeros across the `height` and +`width` dimensions. In the literature, the same parameter is sometimes +called `input stride` or `dilation`. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +
+`name` + +Optional name for the returned tensor. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `value`.
+Output shape with `'VALID'` padding is:
+
+    [batch, height - rate * (filter_height - 1),
+     width - rate * (filter_width - 1), out_channels].
+
+Output shape with `'SAME'` padding is:
+
+    [batch, height, width, out_channels].
+
+ + + + + + + + + + + + +
+`ValueError` + +If input/output depth does not match `filters`' shape, or if +padding is other than `'VALID'` or `'SAME'`. +
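+
+A minimal shape-level sketch (illustrative only; values are random):
+
+```python
+import tensorflow as tf
+
+value = tf.random.normal([1, 32, 32, 3])     # NHWC input.
+filters = tf.random.normal([3, 3, 3, 16])    # 3x3 kernel, 3 -> 16 channels.
+
+# rate=2 samples the input every 2 pixels, i.e. a dilation of 2.
+out = tf.nn.atrous_conv2d(value, filters, rate=2, padding='SAME')
+print(out.shape)  # (1, 32, 32, 16)
+```
+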
+ + + +#### References: + +Multi-Scale Context Aggregation by Dilated Convolutions: + [Yu et al., 2016](https://arxiv.org/abs/1511.07122) + ([pdf](https://arxiv.org/pdf/1511.07122.pdf)) +Semantic Image Segmentation with Deep Convolutional Nets and Fully +Connected CRFs: + [Chen et al., 2015](http://arxiv.org/abs/1412.7062) + ([pdf](https://arxiv.org/pdf/1412.7062)) +OverFeat - Integrated Recognition, Localization and Detection using +Convolutional Networks: + [Sermanet et al., 2014](https://arxiv.org/abs/1312.6229) + ([pdf](https://arxiv.org/pdf/1312.6229.pdf)) +Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks: + [Giusti et al., 2013] + (https://ieeexplore.ieee.org/abstract/document/6738831) + ([pdf](https://arxiv.org/pdf/1302.1700.pdf)) diff --git a/site/en/api_docs/python/tf/nn/atrous_conv2d_transpose.md b/site/en/api_docs/python/tf/nn/atrous_conv2d_transpose.md new file mode 100644 index 00000000000..900e4b6351e --- /dev/null +++ b/site/en/api_docs/python/tf/nn/atrous_conv2d_transpose.md @@ -0,0 +1,154 @@ +description: The transpose of atrous_conv2d. + +
+ + +
+ +# tf.nn.atrous_conv2d_transpose + + + + + + + + + +The transpose of `atrous_conv2d`. + + + + + + + + + +This operation is sometimes called "deconvolution" after +(Zeiler et al., 2010), but is really the transpose (gradient) of +`atrous_conv2d` rather than an actual deconvolution. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC` +format. Its shape is `[batch, in_height, in_width, in_channels]`. +
+`filters` + +A 4-D `Tensor` with the same type as `value` and shape +`[filter_height, filter_width, out_channels, in_channels]`. `filters`' +`in_channels` dimension must match that of `value`. Atrous convolution is +equivalent to standard convolution with upsampled filters with effective +height `filter_height + (filter_height - 1) * (rate - 1)` and effective +width `filter_width + (filter_width - 1) * (rate - 1)`, produced by +inserting `rate - 1` zeros along consecutive elements across the +`filters`' spatial dimensions. +
+`output_shape` + +A 1-D `Tensor` of shape representing the output shape of the +deconvolution op. +
+`rate` + +A positive int32. The stride with which we sample input values across +the `height` and `width` dimensions. Equivalently, the rate by which we +upsample the filter values by inserting zeros across the `height` and +`width` dimensions. In the literature, the same parameter is sometimes +called `input stride` or `dilation`. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +
+`name` + +Optional name for the returned tensor. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `value`. +
+ + + + + + + + + + + + +
+`ValueError` + +If input/output depth does not match `filters`' shape, or if +padding is other than `'VALID'` or `'SAME'`, or if the `rate` is less +than one, or if the output_shape is not a tensor with 4 elements. +
+ + + +#### References: + +Deconvolutional Networks: + [Zeiler et al., 2010] + (https://ieeexplore.ieee.org/abstract/document/5539957) + ([pdf] + (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf)) diff --git a/site/en/api_docs/python/tf/nn/avg_pool.md b/site/en/api_docs/python/tf/nn/avg_pool.md new file mode 100644 index 00000000000..953ae98e05a --- /dev/null +++ b/site/en/api_docs/python/tf/nn/avg_pool.md @@ -0,0 +1,121 @@ +description: Performs the avg pooling on the input. + +
+ + +
+ +# tf.nn.avg_pool + + + + + + + + + +Performs the avg pooling on the input. + + + + + + + + + +Each entry in `output` is the mean of the corresponding size `ksize` +window in `value`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + +[num_channels]` if `data_format` does not start with "NC" (default), or +`[batch_size, num_channels] + input_spatial_shape` if data_format starts +with "NC". Pooling happens over the spatial dimensions only. +
+`ksize` + +An int or list of `ints` that has length `1`, `N` or `N+2`. The size +of the window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1`, `N` or `N+2`. The +stride of the sliding window for each dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. See +the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. Specifies the channel dimension. For N=1 it can be +either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) +or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW". +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of format specified by `data_format`. +The average pooled output tensor. +
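+
+A minimal sketch (illustrative only; values are random):
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([1, 8, 8, 3])  # NHWC input, so N=2 spatial dimensions.
+
+# 2x2 windows, stride 2: each output value is the mean of a 2x2 patch.
+y = tf.nn.avg_pool(x, ksize=2, strides=2, padding='VALID')
+print(y.shape)  # (1, 4, 4, 3)
+```
+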
+ diff --git a/site/en/api_docs/python/tf/nn/avg_pool1d.md b/site/en/api_docs/python/tf/nn/avg_pool1d.md new file mode 100644 index 00000000000..0754d5615f8 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/avg_pool1d.md @@ -0,0 +1,118 @@ +description: Performs the average pooling on the input. + +
+ + +
+ +# tf.nn.avg_pool1d + + + + + + + + + +Performs the average pooling on the input. + + + + + + + + + +Each entry in `output` is the mean of the corresponding size `ksize` +window in `value`. + +Note internally this op reshapes and uses the underlying 2d operation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A 3-D `Tensor` of the format specified by `data_format`. +
+`ksize` + +An int or list of `ints` that has length `1` or `3`. The size of the +window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1` or `3`. The stride of +the sliding window for each dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. See +the "returns" section of tf.nn.convolution for details. +
+`data_format` + +An optional string from: "NWC", "NCW". Defaults to "NWC". +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of format specified by `data_format`.
+The average pooled output tensor.
+
+ diff --git a/site/en/api_docs/python/tf/nn/avg_pool2d.md b/site/en/api_docs/python/tf/nn/avg_pool2d.md new file mode 100644 index 00000000000..d7f89c4a1ef --- /dev/null +++ b/site/en/api_docs/python/tf/nn/avg_pool2d.md @@ -0,0 +1,105 @@ +description: Performs the average pooling on the input. + +
+ + +
+ +# tf.nn.avg_pool2d + + + + + + + + + +Performs the average pooling on the input. + + + + + + + +Each entry in `output` is the mean of the corresponding size `ksize` +window in `value`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A 4-D `Tensor` of shape `[batch, height, width, channels]` and type +`float32`, `float64`, `qint8`, `quint8`, or `qint32`. +
+`ksize` + +An int or list of `ints` that has length `1`, `2` or `4`. The size of +the window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1`, `2` or `4`. The +stride of the sliding window for each dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +See the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. 'NHWC' and 'NCHW' are supported. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `value`. The average pooled output tensor. +
+ diff --git a/site/en/api_docs/python/tf/nn/avg_pool3d.md b/site/en/api_docs/python/tf/nn/avg_pool3d.md new file mode 100644 index 00000000000..c2507cf0942 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/avg_pool3d.md @@ -0,0 +1,116 @@ +description: Performs the average pooling on the input. + +
+ + +
+ +# tf.nn.avg_pool3d + + + + + + + + + +Performs the average pooling on the input. + + + + + + + + + +Each entry in `output` is the mean of the corresponding size `ksize` +window in `value`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input`
+
+A 5-D `Tensor` of shape `[batch, depth, height, width, channels]` and type
+`float32`, `float64`, `qint8`, `quint8`, or `qint32`.
+
+`ksize` + +An int or list of `ints` that has length `1`, `3` or `5`. The size of +the window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1`, `3` or `5`. The +stride of the sliding window for each dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +See the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. 'NDHWC' and 'NCDHW' are supported. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `value`. The average pooled output tensor. +
+ diff --git a/site/en/api_docs/python/tf/nn/batch_norm_with_global_normalization.md b/site/en/api_docs/python/tf/nn/batch_norm_with_global_normalization.md new file mode 100644 index 00000000000..d5b32b3e719 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/batch_norm_with_global_normalization.md @@ -0,0 +1,130 @@ +description: Batch normalization. + +
+ + +
+ +# tf.nn.batch_norm_with_global_normalization + + + + + + + + + +Batch normalization. + + + + + + + +This op is deprecated. See tf.nn.batch_normalization. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A 4D input Tensor. +
+`mean` + +A 1D mean Tensor with size matching the last dimension of t. +This is the first output from tf.nn.moments, +or a saved moving average thereof. +
+`variance` + +A 1D variance Tensor with size matching the last dimension of t. +This is the second output from tf.nn.moments, +or a saved moving average thereof. +
+`beta` + +A 1D beta Tensor with size matching the last dimension of t. +An offset to be added to the normalized tensor. +
+`gamma` + +A 1D gamma Tensor with size matching the last dimension of t. +If "scale_after_normalization" is true, this tensor will be multiplied +with the normalized tensor. +
+`variance_epsilon` + +A small float number to avoid dividing by 0. +
+`scale_after_normalization` + +A bool indicating whether the resulted tensor +needs to be multiplied with gamma. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A batch-normalized `t`. +
+ + + +#### References: + +Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: + [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html) + ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf)) diff --git a/site/en/api_docs/python/tf/nn/batch_normalization.md b/site/en/api_docs/python/tf/nn/batch_normalization.md new file mode 100644 index 00000000000..5a3929e10ae --- /dev/null +++ b/site/en/api_docs/python/tf/nn/batch_normalization.md @@ -0,0 +1,156 @@ +description: Batch normalization. + +
+ + +
+ +# tf.nn.batch_normalization + + + + + + + + + +Batch normalization. + + + + + + + + + +Normalizes a tensor by `mean` and `variance`, and applies (optionally) a +`scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\): + +\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\) + +`mean`, `variance`, `offset` and `scale` are all expected to be of one of two +shapes: + + * In all generality, they can have the same number of dimensions as the + input `x`, with identical sizes as `x` for the dimensions that are not + normalized over (the 'depth' dimension(s)), and dimension 1 for the + others which are being normalized over. + `mean` and `variance` in this case would typically be the outputs of + tf.nn.moments(..., keepdims=True) during training, or running averages + thereof during inference. + * In the common case where the 'depth' dimension is the last dimension in + the input tensor `x`, they may be one dimensional tensors of the same + size as the 'depth' dimension. + This is the case for example for the common `[batch, depth]` layout of + fully-connected layers, and `[batch, height, width, depth]` for + convolutions. + `mean` and `variance` in this case would typically be the outputs of + tf.nn.moments(..., keepdims=False) during training, or running averages + thereof during inference. + +See equation 11 in Algorithm 2 of source: +[Batch Normalization: Accelerating Deep Network Training by +Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy] +(http://arxiv.org/abs/1502.03167). + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +Input `Tensor` of arbitrary dimensionality. +
+`mean` + +A mean `Tensor`. +
+`variance` + +A variance `Tensor`. +
+`offset` + +An offset `Tensor`, often denoted \\(\beta\\) in equations, or +None. If present, will be added to the normalized tensor. +
+`scale` + +A scale `Tensor`, often denoted \\(\gamma\\) in equations, or +`None`. If present, the scale is applied to the normalized tensor. +
+`variance_epsilon` + +A small float number to avoid dividing by 0. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+the normalized, scaled, offset tensor. +
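+
+A minimal sketch (illustrative only) for the common `[batch, depth]` case,
+using tf.nn.moments to obtain the statistics:
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([32, 10])                 # [batch, depth]
+mean, variance = tf.nn.moments(x, axes=[0])    # Per-feature statistics.
+offset = tf.zeros([10])                        # beta
+scale = tf.ones([10])                          # gamma
+
+y = tf.nn.batch_normalization(x, mean, variance, offset, scale,
+                              variance_epsilon=1e-3)
+print(y.shape)  # (32, 10)
+```
+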
+ + + +#### References: + +Batch Normalization - Accelerating Deep Network Training by Reducing +Internal Covariate Shift: + [Ioffe et al., 2015](http://arxiv.org/abs/1502.03167) + ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf)) diff --git a/site/en/api_docs/python/tf/nn/bias_add.md b/site/en/api_docs/python/tf/nn/bias_add.md new file mode 100644 index 00000000000..7b0c5951b73 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/bias_add.md @@ -0,0 +1,122 @@ +description: Adds bias to value. + +
+ + +
+ +# tf.nn.bias_add + + + + + + + + + +Adds `bias` to `value`. + + + + + + + + + +This is (mostly) a special case of tf.add where `bias` is restricted to 1-D. +Broadcasting is supported, so `value` may have any number of dimensions. +Unlike tf.add, the type of `bias` is allowed to differ from `value` in the +case where both types are quantized. + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, +`int16`, `int8`, `complex64`, or `complex128`. +
+`bias` + +A 1-D `Tensor` with size matching the channel dimension of `value`. +Must be the same type as `value` unless `value` is a quantized type, +in which case a different quantized type may be used. +
+`data_format` + +A string. 'N...C' and 'NC...' are supported. If `None` (the +default) is specified then 'N..C' is assumed. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `value`. +
+ + + + + + + + + + + +
+ValueError if data format is unrecognized, if `value` has less than two
+dimensions when `data_format` is 'N..C'/`None` or `value` has less
+than three dimensions when `data_format` is 'NC..', if `bias` does not
+have exactly one dimension (i.e. is not a vector), or if the size of `bias`
+does not match the size of the channel dimension of `value`.
+
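+
+A minimal sketch (illustrative only):
+
+```python
+import tensorflow as tf
+
+value = tf.random.normal([2, 4, 4, 3])  # NHWC, 3 channels.
+bias = tf.constant([0.1, 0.2, 0.3])     # One bias per channel.
+
+out = tf.nn.bias_add(value, bias)       # Broadcast-adds bias over N, H, W.
+print(out.shape)  # (2, 4, 4, 3)
+```
+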
+ diff --git a/site/en/api_docs/python/tf/nn/collapse_repeated.md b/site/en/api_docs/python/tf/nn/collapse_repeated.md new file mode 100644 index 00000000000..5b0c0e40ca5 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/collapse_repeated.md @@ -0,0 +1,105 @@ +description: Merge repeated labels into single labels. + +
+ + +
+ +# tf.nn.collapse_repeated + + + + + + + + + +Merge repeated labels into single labels. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +Tensor of shape [batch, max value in seq_length] +
+`seq_length` + +Tensor of shape [batch], sequence length of each batch element. +
+`name` + +A name for this `Op`. Defaults to "collapse_repeated_labels". +
+ + + + + + + + + + + + + + + + + +
+A tuple `(collapsed_labels, new_seq_length)` where +
+`collapsed_labels` + +Tensor of shape [batch, max_seq_length] with repeated +labels collapsed and padded to max_seq_length, eg: +`[[A, A, B, B, A], [A, B, C, D, E]] => [[A, B, A, 0, 0], [A, B, C, D, E]]` +
+`new_seq_length` + +int tensor of shape [batch] with new sequence lengths. +
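+
+A minimal sketch mirroring the example above (illustrative only):
+
+```python
+import tensorflow as tf
+
+labels = tf.constant([[1, 1, 2, 2, 1],
+                      [1, 2, 3, 4, 5]])
+seq_length = tf.constant([5, 5])
+
+collapsed, new_len = tf.nn.collapse_repeated(labels, seq_length)
+print(collapsed.numpy())  # [[1 2 1 0 0] [1 2 3 4 5]]
+print(new_len.numpy())    # [3 5]
+```
+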
+ diff --git a/site/en/api_docs/python/tf/nn/compute_accidental_hits.md b/site/en/api_docs/python/tf/nn/compute_accidental_hits.md new file mode 100644 index 00000000000..93c76138713 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/compute_accidental_hits.md @@ -0,0 +1,144 @@ +description: Compute the position ids in sampled_candidates matching true_classes. + +
+ + +
+ +# tf.nn.compute_accidental_hits + + + + + + + + + +Compute the position ids in `sampled_candidates` matching `true_classes`. + + + + + + + + + +In Candidate Sampling, this operation facilitates virtually removing +sampled classes which happen to match target classes. This is done +in Sampled Softmax and Sampled Logistic. + +See our [Candidate Sampling Algorithms +Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf). + +We presuppose that the `sampled_candidates` are unique. + +We call it an 'accidental hit' when one of the target classes +matches one of the sampled classes. This operation reports +accidental hits as triples `(index, id, weight)`, where `index` +represents the row number in `true_classes`, `id` represents the +position in `sampled_candidates`, and weight is `-FLOAT_MAX`. + +The result of this op should be passed through a `sparse_to_dense` +operation, then added to the logits of the sampled classes. This +removes the contradictory effect of accidentally sampling the true +target classes as noise classes for the same example. + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64` and shape `[batch_size, +num_true]`. The target classes. +
+`sampled_candidates` + +A tensor of type `int64` and shape `[num_sampled]`. +The sampled_candidates output of CandidateSampler. +
+`num_true` + +An `int`. The number of target classes per training example. +
+`seed` + +An `int`. An operation-specific seed. Default is 0. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + +
+`indices` + +A `Tensor` of type `int32` and shape `[num_accidental_hits]`. +Values indicate rows in `true_classes`. +
+`ids` + +A `Tensor` of type `int64` and shape `[num_accidental_hits]`. +Values indicate positions in `sampled_candidates`. +
+`weights` + +A `Tensor` of type `float` and shape `[num_accidental_hits]`. +Each value is `-FLOAT_MAX`. +
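+
+A minimal sketch (illustrative only), pairing this op with a candidate
+sampler:
+
+```python
+import tensorflow as tf
+
+true_classes = tf.constant([[1, 2], [0, 4]], dtype=tf.int64)
+
+sampled, _, _ = tf.random.log_uniform_candidate_sampler(
+    true_classes=true_classes, num_true=2, num_sampled=5,
+    unique=True, range_max=10)
+
+# Each (index, id) pair marks a sampled class that collides with a true class.
+indices, ids, weights = tf.nn.compute_accidental_hits(
+    true_classes, sampled, num_true=2)
+```
+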
+ diff --git a/site/en/api_docs/python/tf/nn/compute_average_loss.md b/site/en/api_docs/python/tf/nn/compute_average_loss.md new file mode 100644 index 00000000000..8cf378a0687 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/compute_average_loss.md @@ -0,0 +1,107 @@ +description: Scales per-example losses with sample_weights and computes their average. + +
+ + +
+ +# tf.nn.compute_average_loss + + + + + + + + + +Scales per-example losses with sample_weights and computes their average. + + + + + + + + + +Usage with distribution strategy and custom training loop: + +```python +with strategy.scope(): + def compute_loss(labels, predictions, sample_weight=None): + + # If you are using a `Loss` class instead, set reduction to `NONE` so that + # we can do the reduction afterwards and divide by global batch size. + per_example_loss = tf.keras.losses.sparse_categorical_crossentropy( + labels, predictions) + + # Compute loss that is scaled by sample_weight and by global batch size. + return tf.nn.compute_average_loss( + per_example_loss, + sample_weight=sample_weight, + global_batch_size=GLOBAL_BATCH_SIZE) +``` + + + + + + + + + + + + + + + + +
+`per_example_loss` + +Per-example loss. +
+`sample_weight` + +Optional weighting for each example. +
+`global_batch_size` + +Optional global batch size value. Defaults to (size of +first dimension of `losses`) * (number of replicas). +
+ + + + + + + + + + + +
+Scalar loss value. +
+ diff --git a/site/en/api_docs/python/tf/nn/conv1d.md b/site/en/api_docs/python/tf/nn/conv1d.md new file mode 100644 index 00000000000..6a33b2d6ccf --- /dev/null +++ b/site/en/api_docs/python/tf/nn/conv1d.md @@ -0,0 +1,150 @@ +description: Computes a 1-D convolution given 3-D input and filter tensors. + +
+ + +
+ +# tf.nn.conv1d + + + + + + + + + +Computes a 1-D convolution given 3-D input and filter tensors. + + + + + + + +Given an input tensor of shape + [batch, in_width, in_channels] +if data_format is "NWC", or + [batch, in_channels, in_width] +if data_format is "NCW", +and a filter / kernel tensor of shape +[filter_width, in_channels, out_channels], this op reshapes +the arguments to pass them to conv2d to perform the equivalent +convolution operation. + +Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. +For example, if `data_format` does not start with "NC", a tensor of shape + [batch, in_width, in_channels] +is reshaped to + [batch, 1, in_width, in_channels], +and the filter is reshaped to + [1, filter_width, in_channels, out_channels]. +The result is then reshaped back to + [batch, out_width, out_channels] +\(where out_width is a function of the stride and padding as in conv2d\) and +returned to the caller. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A 3D `Tensor`. Must be of type `float16`, `float32`, or `float64`. +
+`filters` + +A 3D `Tensor`. Must have the same type as `input`. +
+`stride` + +An int or list of `ints` that has length `1` or `3`. The number of +entries by which the filter is moved right at each step. +
+`padding` + +'SAME' or 'VALID' +
+`data_format` + +An optional `string` from `"NWC", "NCW"`. Defaults to `"NWC"`, +the data is stored in the order of [batch, in_width, in_channels]. The +`"NCW"` format stores data as [batch, in_channels, in_width]. +
+`dilations` + +An int or list of `ints` that has length `1` or `3` which +defaults to 1. The dilation factor for each dimension of input. If set to +k > 1, there will be k-1 skipped cells between each filter element on that +dimension. Dilations in the batch and depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as input. +
+ + + + + + + + + + + + +
+`ValueError` + +if `data_format` is invalid. +
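+
+A minimal shape-level sketch (illustrative only; values are random):
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([2, 10, 3])       # [batch, in_width, in_channels] (NWC).
+filters = tf.random.normal([3, 3, 8])  # [filter_width, in_channels, out_channels].
+
+y = tf.nn.conv1d(x, filters, stride=1, padding='SAME')
+print(y.shape)  # (2, 10, 8)
+```
+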
+ diff --git a/site/en/api_docs/python/tf/nn/conv1d_transpose.md b/site/en/api_docs/python/tf/nn/conv1d_transpose.md new file mode 100644 index 00000000000..9c7b632f884 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/conv1d_transpose.md @@ -0,0 +1,166 @@ +description: The transpose of conv1d. + +
+ + +
+ +# tf.nn.conv1d_transpose + + + + + + + + + +The transpose of `conv1d`. + + + + + + + + + +This operation is sometimes called "deconvolution" after +(Zeiler et al., 2010), but is actually the transpose (gradient) of `conv1d` +rather than an actual deconvolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A 3-D `Tensor` of type `float` and shape +`[batch, in_width, in_channels]` for `NWC` data format or +`[batch, in_channels, in_width]` for `NCW` data format. +
+`filters` + +A 3-D `Tensor` with the same type as `value` and shape +`[filter_width, output_channels, in_channels]`. `filter`'s +`in_channels` dimension must match that of `value`. +
+`output_shape` + +A 1-D `Tensor`, containing three elements, representing the +output shape of the deconvolution op. +
+`strides` + +An int or list of `ints` that has length `1` or `3`. The number of +entries by which the filter is moved right at each step. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +See the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. `'NWC'` and `'NCW'` are supported. +
+`dilations` + +An int or list of `ints` that has length `1` or `3` which +defaults to 1. The dilation factor for each dimension of input. If set to +k > 1, there will be k-1 skipped cells between each filter element on that +dimension. Dilations in the batch and depth dimensions must be 1. +
+`name` + +Optional name for the returned tensor. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `value`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+If input/output depth does not match `filter`'s shape, if
+`output_shape` is not a 3-element vector, if `padding` is other than
+`'VALID'` or `'SAME'`, or if `data_format` is invalid.
+
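+
+A minimal shape-level sketch (illustrative only; values are random):
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([2, 10, 8])        # [batch, in_width, in_channels] (NWC).
+filters = tf.random.normal([3, 4, 8])   # [filter_width, output_channels, in_channels].
+
+# Upsamples width 10 -> 20 with stride 2.
+y = tf.nn.conv1d_transpose(x, filters, output_shape=[2, 20, 4],
+                           strides=2, padding='SAME')
+print(y.shape)  # (2, 20, 4)
+```
+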
+ + + +#### References: + +Deconvolutional Networks: + [Zeiler et al., 2010] + (https://ieeexplore.ieee.org/abstract/document/5539957) + ([pdf] + (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf)) diff --git a/site/en/api_docs/python/tf/nn/conv2d.md b/site/en/api_docs/python/tf/nn/conv2d.md new file mode 100644 index 00000000000..12fb4adf859 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/conv2d.md @@ -0,0 +1,175 @@ +description: Computes a 2-D convolution given 4-D input and filters tensors. + +
+ + +
+ +# tf.nn.conv2d + + + + + + + + + +Computes a 2-D convolution given 4-D `input` and `filters` tensors. + + + + + + + +Given an input tensor of shape `[batch, in_height, in_width, in_channels]` +and a filter / kernel tensor of shape +`[filter_height, filter_width, in_channels, out_channels]`, this op +performs the following: + +1. Flattens the filter to a 2-D matrix with shape + `[filter_height * filter_width * in_channels, output_channels]`. +2. Extracts image patches from the input tensor to form a *virtual* + tensor of shape `[batch, out_height, out_width, + filter_height * filter_width * in_channels]`. +3. For each patch, right-multiplies the filter matrix and the image patch + vector. + +In detail, with the default NHWC format, + + output[b, i, j, k] = + sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * + filter[di, dj, q, k] + +Must have `strides[0] = strides[3] = 1`. For the most common case of the same +horizontal and vertical strides, `strides = [1, stride, stride, 1]`. + +#### Usage Example: + + + +``` +>>> x_in = np.array([[ +... [[2], [1], [2], [0], [1]], +... [[1], [3], [2], [2], [3]], +... [[1], [1], [3], [3], [0]], +... [[2], [2], [0], [1], [1]], +... [[0], [0], [3], [1], [2]], ]]) +>>> kernel_in = np.array([ +... [ [[2, 0.1]], [[3, 0.2]] ], +... [ [[0, 0.3]],[[1, 0.4]] ], ]) +>>> x = tf.constant(x_in, dtype=tf.float32) +>>> kernel = tf.constant(kernel_in, dtype=tf.float32) +>>> tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding='VALID') + +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: +`half`, `bfloat16`, `float32`, `float64`. +A 4-D tensor. The dimension order is interpreted according to the value +of `data_format`, see below for details. +
+`filters` + +A `Tensor`. Must have the same type as `input`. +A 4-D tensor of shape +`[filter_height, filter_width, in_channels, out_channels]` +
+`strides` + +An int or list of `ints` that has length `1`, `2` or `4`. The +stride of the sliding window for each dimension of `input`. If a single +value is given it is replicated in the `H` and `W` dimension. By default +the `N` and `C` dimensions are set to 1. The dimension order is determined +by the value of `data_format`, see below for details. +
+`padding` + +Either the `string` `"SAME"` or `"VALID"` indicating the type of +padding algorithm to use, or a list indicating the explicit paddings at +the start and end of each dimension. When explicit padding is used and +data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top, +pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used +and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0], +[pad_top, pad_bottom], [pad_left, pad_right]]`. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. +Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, height, width, channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, channels, height, width]. +
+`dilations`
+
+An int or list of `ints` that has length `1`, `2` or `4`,
+defaults to 1. The dilation factor for each dimension of `input`. If a
+single value is given it is replicated in the `H` and `W` dimension. By
+default the `N` and `C` dimensions are set to 1. If set to k > 1, there
+will be k-1 skipped cells between each filter element on that dimension.
+The dimension order is determined by the value of `data_format`, see above
+for details. If a 4-element list is given, the dilations in the batch and
+depth dimensions must be 1.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/nn/conv2d_transpose.md b/site/en/api_docs/python/tf/nn/conv2d_transpose.md new file mode 100644 index 00000000000..14a78e21865 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/conv2d_transpose.md @@ -0,0 +1,161 @@ +description: The transpose of conv2d. + +
+ + +
+ +# tf.nn.conv2d_transpose + + + + + + + + + +The transpose of `conv2d`. + + + + + + + +This operation is sometimes called "deconvolution" after +(Zeiler et al., 2010), but is really the transpose (gradient) of +`atrous_conv2d` rather than an actual deconvolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A 4-D `Tensor` of type `float` and shape `[batch, height, width, +in_channels]` for `NHWC` data format or `[batch, in_channels, height, +width]` for `NCHW` data format. +
+`filters` + +A 4-D `Tensor` with the same type as `input` and shape `[height, +width, output_channels, in_channels]`. `filter`'s `in_channels` dimension +must match that of `input`. +
+`output_shape` + +A 1-D `Tensor` representing the output shape of the +deconvolution op. +
+`strides`
+
+An int or list of `ints` that has length `1`, `2` or `4`. The
+stride of the sliding window for each dimension of `input`. If a single
+value is given it is replicated in the `H` and `W` dimension. By default
+the `N` and `C` dimensions are set to 1. The dimension order is determined
+by the value of `data_format`, see below for details.
+
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. See +the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. 'NHWC' and 'NCHW' are supported. +
+`dilations`
+
+An int or list of `ints` that has length `1`, `2` or `4`,
+defaults to 1. The dilation factor for each dimension of `input`. If a
+single value is given it is replicated in the `H` and `W` dimension. By
+default the `N` and `C` dimensions are set to 1. If set to k > 1, there
+will be k-1 skipped cells between each filter element on that dimension.
+The dimension order is determined by the value of `data_format`, see above
+for details. If a 4-element list is given, the dilations in the batch and
+depth dimensions must be 1.
+
+`name` + +Optional name for the returned tensor. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `input`. +
+ + + + + + + + + + + + +
+`ValueError` + +If input/output depth does not match `filter`'s shape, or if +padding is other than `'VALID'` or `'SAME'`. +
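+
+As an illustrative sketch (not part of the original reference; the shapes and
+random values below are invented for demonstration), a stride-2 transposed
+convolution with `'SAME'` padding doubles each spatial dimension:
+
+```
+import tensorflow as tf
+
+x = tf.random.normal([1, 4, 4, 1])        # [batch, height, width, in_channels]
+kernel = tf.random.normal([3, 3, 1, 1])   # [height, width, output_channels, in_channels]
+y = tf.nn.conv2d_transpose(x, kernel, output_shape=[1, 8, 8, 1],
+                           strides=2, padding='SAME')
+print(y.shape)  # (1, 8, 8, 1)
+```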
+ + + +#### References: + +Deconvolutional Networks: + [Zeiler et al., 2010] + (https://ieeexplore.ieee.org/abstract/document/5539957) + ([pdf] + (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf)) diff --git a/site/en/api_docs/python/tf/nn/conv3d.md b/site/en/api_docs/python/tf/nn/conv3d.md new file mode 100644 index 00000000000..1c51f9e56df --- /dev/null +++ b/site/en/api_docs/python/tf/nn/conv3d.md @@ -0,0 +1,127 @@ +description: Computes a 3-D convolution given 5-D input and filters tensors. + +
+ + +
+ +# tf.nn.conv3d + + + + + + + + + +Computes a 3-D convolution given 5-D `input` and `filters` tensors. + + + + + + + +In signal processing, cross-correlation is a measure of similarity of +two waveforms as a function of a time-lag applied to one of them. This +is also known as a sliding dot product or sliding inner-product. + +Our Conv3D implements a form of cross-correlation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +Shape `[batch, in_depth, in_height, in_width, in_channels]`. +
+`filters` + +A `Tensor`. Must have the same type as `input`. +Shape `[filter_depth, filter_height, filter_width, in_channels, +out_channels]`. `in_channels` must match between `input` and `filters`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. +1-D tensor of length 5. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by the +value of `data_format`, see above for details. Dilations in the batch and +depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
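+
+A minimal shape-only sketch (values are random and purely illustrative): a
+3x3x3 kernel mapping 1 input channel to 4 output channels preserves the
+spatial volume under `'SAME'` padding:
+
+```
+import tensorflow as tf
+
+volume = tf.random.normal([1, 8, 8, 8, 1])   # [batch, in_depth, in_height, in_width, in_channels]
+kernel = tf.random.normal([3, 3, 3, 1, 4])   # [filter_depth, filter_height, filter_width, in_channels, out_channels]
+out = tf.nn.conv3d(volume, kernel, strides=[1, 1, 1, 1, 1], padding='SAME')
+print(out.shape)  # (1, 8, 8, 8, 4)
+```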
+ diff --git a/site/en/api_docs/python/tf/nn/conv3d_transpose.md b/site/en/api_docs/python/tf/nn/conv3d_transpose.md new file mode 100644 index 00000000000..b55e35c2d7d --- /dev/null +++ b/site/en/api_docs/python/tf/nn/conv3d_transpose.md @@ -0,0 +1,143 @@ +description: The transpose of conv3d. + +
+ + +
+ +# tf.nn.conv3d_transpose + + + + + + + + + +The transpose of `conv3d`. + + + + + + + +This operation is sometimes called "deconvolution" after +(Zeiler et al., 2010), but is really the transpose (gradient) of `conv3d` +rather than an actual deconvolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input`
+
+A 5-D `Tensor` of type `float` and shape `[batch, depth, height, width,
+in_channels]` for `NDHWC` data format or `[batch, in_channels, depth,
+height, width]` for `NCDHW` data format.
+
+`filters`
+
+A 5-D `Tensor` with the same type as `input` and shape `[depth, height,
+width, output_channels, in_channels]`. `filters`' `in_channels` dimension
+must match that of `input`.
+
+`output_shape` + +A 1-D `Tensor` representing the output shape of the +deconvolution op. +
+`strides`
+
+An int or list of `ints` that has length `1`, `3` or `5`. The
+stride of the sliding window for each dimension of `input`. If a single
+value is given it is replicated in the `D`, `H` and `W` dimension. By
+default the `N` and `C` dimensions are set to 1. The dimension order is
+determined by the value of `data_format`, see below for details.
+
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. See +the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. 'NDHWC' and 'NCDHW' are supported. +
+`dilations`
+
+An int or list of `ints` that has length `1`, `3` or `5`,
+defaults to 1. The dilation factor for each dimension of `input`. If a
+single value is given it is replicated in the `D`, `H` and `W` dimension.
+By default the `N` and `C` dimensions are set to 1. If set to k > 1, there
+will be k-1 skipped cells between each filter element on that dimension.
+The dimension order is determined by the value of `data_format`, see above
+for details. If a 5-element list is given, the dilations in the batch and
+depth dimensions must be 1.
+
+`name` + +Optional name for the returned tensor. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `input`.
+
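+
+For illustration only (all shapes and values below are made up), a stride-2
+transposed 3-D convolution doubles every spatial dimension; equal input and
+output channel counts are used so the filter layout stays unambiguous:
+
+```
+import tensorflow as tf
+
+x = tf.random.normal([1, 4, 4, 4, 2])
+kernel = tf.random.normal([3, 3, 3, 2, 2])   # [depth, height, width, output_channels, in_channels]
+y = tf.nn.conv3d_transpose(x, kernel, output_shape=[1, 8, 8, 8, 2],
+                           strides=2, padding='SAME')
+print(y.shape)  # (1, 8, 8, 8, 2)
+```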
+ + + +#### References: + +Deconvolutional Networks: + [Zeiler et al., 2010] + (https://ieeexplore.ieee.org/abstract/document/5539957) + ([pdf] + (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf)) diff --git a/site/en/api_docs/python/tf/nn/conv_transpose.md b/site/en/api_docs/python/tf/nn/conv_transpose.md new file mode 100644 index 00000000000..00a33d3123e --- /dev/null +++ b/site/en/api_docs/python/tf/nn/conv_transpose.md @@ -0,0 +1,161 @@ +description: The transpose of convolution. + +
+ + +
+ +# tf.nn.conv_transpose + + + + + + + + + +The transpose of `convolution`. + + + + + + + + + +This operation is sometimes called "deconvolution" after +(Zeiler et al., 2010), but is really the transpose (gradient) of `conv3d` +rather than an actual deconvolution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +An N+2 dimensional `Tensor` of shape +`[batch_size] + input_spatial_shape + [in_channels]` if data_format does +not start with "NC" (default), or +`[batch_size, in_channels] + input_spatial_shape` if data_format starts +with "NC". It must be one of the following types: +`half`, `bfloat16`, `float32`, `float64`. +
+`filters` + +An N+2 dimensional `Tensor` with the same type as `input` and +shape `spatial_filter_shape + [in_channels, out_channels]`. +
+`output_shape` + +A 1-D `Tensor` representing the output shape of the +deconvolution op. +
+`strides`
+
+An int or list of `ints` that has length `1`, `N` or `N+2`. The
+stride of the sliding window for each dimension of `input`. If a single
+value is given it is replicated in the spatial dimensions. By default
+the `N` and `C` dimensions are set to 1. The dimension order is determined
+by the value of `data_format`, see below for details.
+
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. See +the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string or None. Specifies whether the channel dimension of +the `input` and output is the last dimension (default, or if `data_format` +does not start with "NC"), or the second dimension (if `data_format` +starts with "NC"). For N=1, the valid values are "NWC" (default) and +"NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". +For N=3, the valid values are "NDHWC" (default) and "NCDHW". +
+`dilations`
+
+An int or list of `ints` that has length `1`, `N` or `N+2`,
+defaults to 1. The dilation factor for each dimension of `input`. If a
+single value is given it is replicated in the spatial dimensions. By
+default the `N` and `C` dimensions are set to 1. If set to k > 1, there
+will be k-1 skipped cells between each filter element on that dimension.
+The dimension order is determined by the value of `data_format`, see above
+for details.
+
+`name` + +A name for the operation (optional). If not specified "conv_transpose" +is used. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `input`.
+
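+
+A small sketch (shapes invented; `in_channels` equals `out_channels` here so
+the filter layout question does not arise): with a rank-4 `output_shape`,
+`conv_transpose` behaves like `conv2d_transpose`:
+
+```
+import tensorflow as tf
+
+x = tf.random.normal([1, 4, 4, 4])
+kernel = tf.random.normal([3, 3, 4, 4])
+y = tf.nn.conv_transpose(x, kernel, output_shape=[1, 8, 8, 4],
+                         strides=2, padding='SAME')
+print(y.shape)  # (1, 8, 8, 4)
+```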
+ + + +#### References: + +Deconvolutional Networks: + [Zeiler et al., 2010] + (https://ieeexplore.ieee.org/abstract/document/5539957) + ([pdf] + (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf)) diff --git a/site/en/api_docs/python/tf/nn/convolution.md b/site/en/api_docs/python/tf/nn/convolution.md new file mode 100644 index 00000000000..06dc3a82c9f --- /dev/null +++ b/site/en/api_docs/python/tf/nn/convolution.md @@ -0,0 +1,230 @@ +description: Computes sums of N-D convolutions (actually cross-correlation). + +
+ + +
+ +# tf.nn.convolution + + + + + + + + + +Computes sums of N-D convolutions (actually cross-correlation). + + + + + + + +This also supports either output striding via the optional `strides` parameter +or atrous convolution (also known as convolution with holes or dilated +convolution, based on the French word "trous" meaning holes in English) via +the optional `dilations` parameter. Currently, however, output striding +is not supported for atrous convolutions. + +Specifically, in the case that `data_format` does not start with "NC", given +a rank (N+2) `input` Tensor of shape + + [num_batches, + input_spatial_shape[0], + ..., + input_spatial_shape[N-1], + num_input_channels], + +a rank (N+2) `filters` Tensor of shape + + [spatial_filter_shape[0], + ..., + spatial_filter_shape[N-1], + num_input_channels, + num_output_channels], + +an optional `dilations` tensor of shape [N] (defaulting to [1]*N) +specifying the filter upsampling/input downsampling rate, and an optional list +of N `strides` (defaulting [1]*N), this computes for each N-D spatial output +position (x[0], ..., x[N-1]): + +``` + output[b, x[0], ..., x[N-1], k] = + sum_{z[0], ..., z[N-1], q} + filter[z[0], ..., z[N-1], q, k] * + padded_input[b, + x[0]*strides[0] + dilation_rate[0]*z[0], + ..., + x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], + q] +``` +where b is the index into the batch, k is the output channel number, q is the +input channel number, and z is the N-D spatial offset within the filter. Here, +`padded_input` is obtained by zero padding the input using an effective +spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and +output striding `strides` as described in the +[comment here](https://tensorflow.org/api_guides/python/nn#Convolution). + +In the case that `data_format` does start with `"NC"`, the `input` and output +(but not the `filters`) are simply transposed as follows: + + convolution(input, data_format, **kwargs) = + tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), + **kwargs), + [0, N+1] + range(1, N+1)) + +It is required that 1 <= N <= 3. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +An (N+2)-D `Tensor` of type `T`, of shape +`[batch_size] + input_spatial_shape + [in_channels]` if data_format does +not start with "NC" (default), or +`[batch_size, in_channels] + input_spatial_shape` if data_format starts +with "NC". +
+`filters` + +An (N+2)-D `Tensor` with the same type as `input` and shape +`spatial_filter_shape + [in_channels, out_channels]`. +
+`padding` + +A string, either `"VALID"` or `"SAME"`. The padding algorithm. +
+`strides` + +Optional. Sequence of N ints >= 1. Specifies the output stride. +Defaults to [1]*N. If any value of strides is > 1, then all values of +dilation_rate must be 1. +
+`dilations` + +Optional. Sequence of N ints >= 1. Specifies the filter +upsampling/input downsampling rate. In the literature, the same parameter +is sometimes called `input stride` or `dilation`. The effective filter +size used for the convolution will be `spatial_filter_shape + +(spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting +(dilation_rate[i]-1) zeros between consecutive elements of the original +filter in each spatial dimension i. If any value of dilation_rate is > 1, +then all values of strides must be 1. +
+`name` + +Optional name for the returned tensor. +
+`data_format` + +A string or None. Specifies whether the channel dimension of +the `input` and output is the last dimension (default, or if `data_format` +does not start with "NC"), or the second dimension (if `data_format` +starts with "NC"). For N=1, the valid values are "NWC" (default) and +"NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". +For N=3, the valid values are "NDHWC" (default) and "NCDHW". +
+`filters` + +Alias of filter. +
+`dilations` + +Alias of dilation_rate. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `input` of shape + +`[batch_size] + output_spatial_shape + [out_channels]` + +if data_format is None or does not start with "NC", or + +`[batch_size, out_channels] + output_spatial_shape` + +if data_format starts with "NC", +where `output_spatial_shape` depends on the value of `padding`. + +If padding == "SAME": +output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i]) + +If padding == "VALID": +output_spatial_shape[i] = +ceil((input_spatial_shape[i] - +(spatial_filter_shape[i]-1) * dilation_rate[i]) +/ strides[i]). +
+ + + + + + + + + + + + +
+`ValueError` + +If input/output depth does not match `filters` shape, if padding +is other than `"VALID"` or `"SAME"`, or if data_format is invalid. +
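+
+As a hedged illustration (random values, arbitrarily chosen sizes), the same
+kernel can be applied densely or with `dilations=[2, 2]`; with `'SAME'`
+padding both calls preserve the spatial shape:
+
+```
+import tensorflow as tf
+
+x = tf.random.normal([1, 10, 10, 3])
+kernel = tf.random.normal([3, 3, 3, 8])      # spatial_filter_shape + [in_channels, out_channels]
+dense = tf.nn.convolution(x, kernel, padding='SAME')
+atrous = tf.nn.convolution(x, kernel, padding='SAME', dilations=[2, 2])
+print(dense.shape, atrous.shape)             # (1, 10, 10, 8) (1, 10, 10, 8)
+```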
+ diff --git a/site/en/api_docs/python/tf/nn/crelu.md b/site/en/api_docs/python/tf/nn/crelu.md new file mode 100644 index 00000000000..43e15e19444 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/crelu.md @@ -0,0 +1,93 @@ +description: Computes Concatenated ReLU. + +
+ + +
+ +# tf.nn.crelu + + + + + + + + + +Computes Concatenated ReLU. + + + + + + + +Concatenates a ReLU which selects only the positive part of the activation +with a ReLU which selects only the *negative* part of the activation. +Note that as a result this non-linearity doubles the depth of the activations. +Source: [Understanding and Improving Convolutional Neural Networks via +Concatenated Rectified Linear Units. W. Shang, et +al.](https://arxiv.org/abs/1603.05201) + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, +`int16`, or `int8`. +
+`name` + +A name for the operation (optional). +
+`axis` + +The axis that the output values are concatenated along. Default is -1. +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `features`. +
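+
+A tiny worked sketch (input values chosen only for illustration): the output
+concatenates `relu(x)` with `relu(-x)`, doubling the last dimension:
+
+```
+import tensorflow as tf
+
+x = tf.constant([[-2.0, 1.0, 3.0]])
+y = tf.nn.crelu(x)
+print(y.numpy())   # [[0. 1. 3. 2. 0. 0.]]  -- shape (1, 6), depth doubled
+```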
+ + + +#### References: + +Understanding and Improving Convolutional Neural Networks via Concatenated +Rectified Linear Units: + [Shang et al., 2016](http://proceedings.mlr.press/v48/shang16) + ([pdf](http://proceedings.mlr.press/v48/shang16.pdf)) diff --git a/site/en/api_docs/python/tf/nn/ctc_beam_search_decoder.md b/site/en/api_docs/python/tf/nn/ctc_beam_search_decoder.md new file mode 100644 index 00000000000..5798cef7199 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/ctc_beam_search_decoder.md @@ -0,0 +1,126 @@ +description: Performs beam search decoding on the logits given in input. + +
+ + +
+ +# tf.nn.ctc_beam_search_decoder + + + + + + + + + +Performs beam search decoding on the logits given in input. + + + + + + + + + +**Note** The `ctc_greedy_decoder` is a special case of the +`ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but +that decoder is faster for this special case). + + + + + + + + + + + + + + + + + + + +
+`inputs` + +3-D `float` `Tensor`, size `[max_time, batch_size, num_classes]`. +The logits. +
+`sequence_length` + +1-D `int32` vector containing sequence lengths, having size +`[batch_size]`. +
+`beam_width` + +An int scalar >= 0 (beam search beam width). +
+`top_paths` + +An int scalar >= 0, <= beam_width (controls output size). +
+ + + + + + + + + + + + + + + + + +
+A tuple `(decoded, log_probabilities)` where +
+`decoded` + +A list of length top_paths, where `decoded[j]` +is a `SparseTensor` containing the decoded outputs: + +`decoded[j].indices`: Indices matrix `[total_decoded_outputs[j], 2]`; +The rows store: `[batch, time]`. + +`decoded[j].values`: Values vector, size `[total_decoded_outputs[j]]`. +The vector stores the decoded classes for beam `j`. + +`decoded[j].dense_shape`: Shape vector, size `(2)`. +The shape values are: `[batch_size, max_decoded_length[j]]`. +
+`log_probability` + +A `float` matrix `[batch_size, top_paths]` containing +sequence log-probabilities. +
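+
+A shape-only sketch with random logits (all sizes are invented; class
+`num_classes - 1` acts as the blank label):
+
+```
+import tensorflow as tf
+
+logits = tf.random.normal([10, 2, 5])               # [max_time, batch_size, num_classes]
+seq_len = tf.constant([10, 8], dtype=tf.int32)
+decoded, log_probs = tf.nn.ctc_beam_search_decoder(
+    logits, seq_len, beam_width=3, top_paths=2)
+print(len(decoded), log_probs.shape)                # 2 (2, 2)
+```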
+ diff --git a/site/en/api_docs/python/tf/nn/ctc_greedy_decoder.md b/site/en/api_docs/python/tf/nn/ctc_greedy_decoder.md new file mode 100644 index 00000000000..89100cfa4b7 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/ctc_greedy_decoder.md @@ -0,0 +1,128 @@ +description: Performs greedy decoding on the logits given in input (best path). + +
+ + +
+ +# tf.nn.ctc_greedy_decoder + + + + + + + + + +Performs greedy decoding on the logits given in input (best path). + + + + + + + + + +Note: Regardless of the value of merge_repeated, if the maximum index of a +given time and batch corresponds to the blank index `(num_classes - 1)`, no +new element is emitted. + +If `merge_repeated` is `True`, merge repeated classes in output. +This means that if consecutive logits' maximum indices are the same, +only the first of these is emitted. The sequence `A B B * B * B` (where '*' +is the blank label) becomes + + * `A B B B` if `merge_repeated=True`. + * `A B B B B` if `merge_repeated=False`. + + + + + + + + + + + + + + + + +
+`inputs` + +3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`. +The logits. +
+`sequence_length` + +1-D `int32` vector containing sequence lengths, having size +`[batch_size]`. +
+`merge_repeated` + +Boolean. Default: True. +
+ + + + + + + + + + + + + + + + + +
+A tuple `(decoded, neg_sum_logits)` where +
+`decoded`
+
+A single-element list. `decoded[0]`
+is a `SparseTensor` containing the decoded outputs s.t.:
+
+`decoded.indices`: Indices matrix `(total_decoded_outputs, 2)`.
+The rows store: `[batch, time]`.
+
+`decoded.values`: Values vector, size `(total_decoded_outputs)`.
+The vector stores the decoded classes.
+
+`decoded.dense_shape`: Shape vector, size `(2)`.
+The shape values are: `[batch_size, max_decoded_length]`.
+
+`neg_sum_logits` + +A `float` matrix `(batch_size x 1)` containing, for the +sequence found, the negative of the sum of the greatest logit at each +timeframe. +
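+
+A shape-only sketch with random logits (sizes invented for demonstration):
+
+```
+import tensorflow as tf
+
+logits = tf.random.normal([10, 2, 5])               # [max_time, batch_size, num_classes]
+seq_len = tf.constant([10, 8], dtype=tf.int32)
+(decoded,), neg_sum_logits = tf.nn.ctc_greedy_decoder(logits, seq_len)
+print(decoded.dense_shape, neg_sum_logits.shape)    # e.g. [2 max_decoded_length], (2, 1)
+```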
+ diff --git a/site/en/api_docs/python/tf/nn/ctc_loss.md b/site/en/api_docs/python/tf/nn/ctc_loss.md new file mode 100644 index 00000000000..5fb188078da --- /dev/null +++ b/site/en/api_docs/python/tf/nn/ctc_loss.md @@ -0,0 +1,150 @@ +description: Computes CTC (Connectionist Temporal Classification) loss. + +
+ + +
+ +# tf.nn.ctc_loss + + + + + + + + + +Computes CTC (Connectionist Temporal Classification) loss. + + + + + + + +This op implements the CTC loss as presented in (Graves et al., 2016). + +#### Notes: + + + +- Same as the "Classic CTC" in TensorFlow 1.x's tf.compat.v1.nn.ctc_loss + setting of preprocess_collapse_repeated=False, ctc_merge_repeated=True +- Labels may be supplied as either a dense, zero-padded tensor with a + vector of label sequence lengths OR as a SparseTensor. +- On TPU and GPU: Only dense padded labels are supported. +- On CPU: Caller may use SparseTensor or dense padded labels but calling with + a SparseTensor will be significantly faster. +- Default blank label is 0 rather num_classes - 1, unless overridden by + blank_index. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`labels` + +tensor of shape [batch_size, max_label_seq_length] or SparseTensor +
+`logits` + +tensor of shape [frames, batch_size, num_labels], if +logits_time_major == False, shape is [batch_size, frames, num_labels]. +
+`label_length`
+
+tensor of shape [batch_size], or None if labels is a SparseTensor.
+Length of the reference label sequence in labels.
+
+`logit_length`
+
+tensor of shape [batch_size]. Length of the input sequence in
+logits.
+
+`logits_time_major` + +(optional) If True (default), logits is shaped [time, +batch, logits]. If False, shape is [batch, time, logits] +
+`unique` + +(optional) Unique label indices as computed by +ctc_unique_labels(labels). If supplied, enable a faster, memory efficient +implementation on TPU. +
+`blank_index`
+
+(optional) Set the class index to use for the blank label.
+Negative values will start from num_classes, i.e., -1 will reproduce the
+ctc_loss behavior of using num_classes - 1 for the blank symbol. There is
+some memory/performance overhead when switching from the default of 0, as an
+additional shifted copy of the logits may be created.
+
+`name` + +A name for this `Op`. Defaults to "ctc_loss_dense". +
+ + + + + + + + + + + + +
+`loss` + +tensor of shape [batch_size], negative log probabilities. +
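+
+A runnable sketch with dense, zero-padded labels (every shape and value here
+is invented; `blank_index=0` is used so real labels start at 1):
+
+```
+import tensorflow as tf
+
+batch, frames, num_labels, max_label_len = 2, 50, 10, 6
+labels = tf.random.uniform([batch, max_label_len], minval=1,
+                           maxval=num_labels, dtype=tf.int64)
+logits = tf.random.normal([frames, batch, num_labels])
+loss = tf.nn.ctc_loss(labels, logits,
+                      label_length=tf.constant([6, 4]),
+                      logit_length=tf.fill([batch], frames),
+                      logits_time_major=True, blank_index=0)
+print(loss.shape)   # (2,)
+```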
+ + + +#### References: + +Connectionist Temporal Classification - Labeling Unsegmented Sequence Data +with Recurrent Neural Networks: + [Graves et al., 2016](https://dl.acm.org/citation.cfm?id=1143891) + ([pdf](http://www.cs.toronto.edu/~graves/icml_2006.pdf)) diff --git a/site/en/api_docs/python/tf/nn/ctc_unique_labels.md b/site/en/api_docs/python/tf/nn/ctc_unique_labels.md new file mode 100644 index 00000000000..3a902950935 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/ctc_unique_labels.md @@ -0,0 +1,95 @@ +description: Get unique labels and indices for batched labels for tf.nn.ctc_loss. + +
+ + +
+ +# tf.nn.ctc_unique_labels + + + + + + + + + +Get unique labels and indices for batched labels for tf.nn.ctc_loss. + + + + + + + + + +For use with tf.nn.ctc_loss optional argument `unique`: This op can be +used to preprocess labels in input pipeline to for better speed/memory use +computing the ctc loss on TPU. + +#### Example: + +ctc_unique_labels([[3, 4, 4, 3]]) -> + unique labels padded with 0: [[3, 4, 0, 0]] + indices of original labels in unique: [0, 1, 1, 0] + + + + + + + + + + + + + + + +
+`labels` + +tensor of shape [batch_size, max_label_length] padded with 0. +
+`name` + +A name for this `Op`. Defaults to "ctc_unique_labels". +
+ + + + + + + + + + + +
+tuple of +- unique labels, tensor of shape `[batch_size, max_label_length]` +- indices into unique labels, shape `[batch_size, max_label_length]` +
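+
+The example above, as a runnable snippet (batch of one, labels padded with 0):
+
+```
+import tensorflow as tf
+
+labels = tf.constant([[3, 4, 4, 3]])
+unique, idx = tf.nn.ctc_unique_labels(labels)
+print(unique.numpy())   # [[3 4 0 0]]
+print(idx.numpy())      # [[0 1 1 0]]
+```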
+ diff --git a/site/en/api_docs/python/tf/nn/depth_to_space.md b/site/en/api_docs/python/tf/nn/depth_to_space.md new file mode 100644 index 00000000000..b94fafa90d9 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/depth_to_space.md @@ -0,0 +1,175 @@ +description: DepthToSpace for tensors of type T. + +
+ + +
+ +# tf.nn.depth_to_space + + + + + + + + + +DepthToSpace for tensors of type T. + + + + + + + +Rearranges data from depth into blocks of spatial data. +This is the reverse transformation of SpaceToDepth. More specifically, +this op outputs a copy of the input tensor where values from the `depth` +dimension are moved in spatial blocks to the `height` and `width` dimensions. +The attr `block_size` indicates the input block size and how the data is moved. + + * Chunks of data of size `block_size * block_size` from depth are rearranged + into non-overlapping blocks of size `block_size x block_size` + * The width the output tensor is `input_depth * block_size`, whereas the + height is `input_height * block_size`. + * The Y, X coordinates within each block of the output image are determined + by the high order component of the input channel index. + * The depth of the input tensor must be divisible by + `block_size * block_size`. + +The `data_format` attr specifies the layout of the input and output tensors +with the following options: + "NHWC": `[ batch, height, width, channels ]` + "NCHW": `[ batch, channels, height, width ]` + "NCHW_VECT_C": + `qint8 [ batch, channels / 4, height, width, 4 ]` + +It is useful to consider the operation as transforming a 6-D Tensor. +e.g. for data_format = NHWC, + Each element in the input tensor can be specified via 6 coordinates, + ordered by decreasing memory layout significance as: + n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates + within the input image, bX, bY means coordinates + within the output block, oC means output channels). + The output would be the input transposed to the following layout: + n,iY,bY,iX,bX,oC + +This operation is useful for resizing the activations between convolutions +(but keeping all data), e.g. instead of pooling. It is also useful for training +purely convolutional models. + +For example, given an input of shape `[1, 1, 1, 4]`, data_format = "NHWC" and +block_size = 2: + +``` +x = [[[[1, 2, 3, 4]]]] + +``` + +This operation will output a tensor of shape `[1, 2, 2, 1]`: + +``` + [[[[1], [2]], + [[3], [4]]]] +``` + +Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`, +the corresponding output will have 2x2 elements and will have a depth of +1 channel (1 = `4 / (block_size * block_size)`). +The output element shape is `[2, 2, 1]`. + +For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g. + +``` +x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]] +``` + +This operation, for block size of 2, will return the following tensor of shape +`[1, 2, 2, 3]` + +``` + [[[[1, 2, 3], [4, 5, 6]], + [[7, 8, 9], [10, 11, 12]]]] + +``` + +Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2: + +``` +x = [[[[1, 2, 3, 4], + [5, 6, 7, 8]], + [[9, 10, 11, 12], + [13, 14, 15, 16]]]] +``` + +the operator will return the following tensor of shape `[1 4 4 1]`: + +``` +x = [[[ [1], [2], [5], [6]], + [ [3], [4], [7], [8]], + [ [9], [10], [13], [14]], + [ [11], [12], [15], [16]]]] + +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`block_size` + +An `int` that is `>= 2`. +The size of the spatial block, same as in Space2Depth. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW", "NCHW_VECT_C"`. Defaults to `"NHWC"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
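+
+The first example above, as a runnable snippet (NHWC, `block_size=2`):
+
+```
+import tensorflow as tf
+
+x = tf.constant([[[[1, 2, 3, 4]]]])          # shape [1, 1, 1, 4]
+y = tf.nn.depth_to_space(x, block_size=2)
+print(y.shape)                # (1, 2, 2, 1)
+print(y.numpy()[0, :, :, 0])  # [[1 2]
+                              #  [3 4]]
+```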
+ diff --git a/site/en/api_docs/python/tf/nn/depthwise_conv2d.md b/site/en/api_docs/python/tf/nn/depthwise_conv2d.md new file mode 100644 index 00000000000..ebf55339f4a --- /dev/null +++ b/site/en/api_docs/python/tf/nn/depthwise_conv2d.md @@ -0,0 +1,132 @@ +description: Depthwise 2-D convolution. + +
+ + +
+ +# tf.nn.depthwise_conv2d + + + + + + + + + +Depthwise 2-D convolution. + + + + + + + +Given a 4D input tensor ('NHWC' or 'NCHW' data formats) +and a filter tensor of shape +`[filter_height, filter_width, in_channels, channel_multiplier]` +containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` +applies a different filter to each input channel (expanding from 1 channel +to `channel_multiplier` channels for each), then concatenates the results +together. The output has `in_channels * channel_multiplier` channels. + +In detail, with the default NHWC format, + + output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} + filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, + strides[2] * j + rate[1] * dj, k] + +Must have `strides[0] = strides[3] = 1`. For the most common case of the +same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. +If any value in `rate` is greater than 1, we perform atrous depthwise +convolution, in which case all values in the `strides` tensor must be equal +to 1. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +4-D with shape according to `data_format`. +
+`filter` + +4-D with shape +`[filter_height, filter_width, in_channels, channel_multiplier]`. +
+`strides` + +1-D of size 4. The stride of the sliding window for each +dimension of `input`. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. +See the "returns" section of tf.nn.convolution for details. +
+`data_format` + +The data format for input. Either "NHWC" (default) or "NCHW". +
+`dilations` + +1-D of size 2. The dilation rate in which we sample input values +across the `height` and `width` dimensions in atrous convolution. If it is +greater than 1, then all values of strides must be 1. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A 4-D `Tensor` with shape according to `data_format`. E.g., for +"NHWC" format, shape is +`[batch, out_height, out_width, in_channels * channel_multiplier].` +
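+
+A shape-only sketch (random values; sizes invented): three input channels
+with `channel_multiplier=2` yield six output channels:
+
+```
+import tensorflow as tf
+
+x = tf.random.normal([1, 8, 8, 3])
+kernel = tf.random.normal([3, 3, 3, 2])     # [filter_height, filter_width, in_channels, channel_multiplier]
+y = tf.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1], padding='SAME')
+print(y.shape)   # (1, 8, 8, 6)
+```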
+ diff --git a/site/en/api_docs/python/tf/nn/depthwise_conv2d_backprop_filter.md b/site/en/api_docs/python/tf/nn/depthwise_conv2d_backprop_filter.md new file mode 100644 index 00000000000..49fa89cda5f --- /dev/null +++ b/site/en/api_docs/python/tf/nn/depthwise_conv2d_backprop_filter.md @@ -0,0 +1,143 @@ +description: Computes the gradients of depthwise convolution with respect to the filter. + +
+ + +
+ +# tf.nn.depthwise_conv2d_backprop_filter + + + + + + + + + +Computes the gradients of depthwise convolution with respect to the filter. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +4-D with shape based on `data_format`. For example, if +`data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height, +in_width, in_channels]` tensor. +
+`filter_sizes` + +A `Tensor` of type `int32`. +An integer vector representing the tensor shape of `filter`, +where `filter` is a 4-D +`[filter_height, filter_width, in_channels, depthwise_multiplier]` tensor. +
+`out_backprop` + +A `Tensor`. Must have the same type as `input`. +4-D with shape based on `data_format`. +For example, if `data_format` is 'NHWC' then +out_backprop shape is `[batch, out_height, out_width, out_channels]`. +Gradients w.r.t. the output of the convolution. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +of the convolution. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, height, width, channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, channels, height, width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each filter +element on that dimension. The dimension order is determined by the value of +`data_format`, see above for details. Dilations in the batch and depth +dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/nn/depthwise_conv2d_backprop_input.md b/site/en/api_docs/python/tf/nn/depthwise_conv2d_backprop_input.md new file mode 100644 index 00000000000..3fec417c2e7 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/depthwise_conv2d_backprop_input.md @@ -0,0 +1,142 @@ +description: Computes the gradients of depthwise convolution with respect to the input. + +
+ + +
+ +# tf.nn.depthwise_conv2d_backprop_input + + + + + + + + + +Computes the gradients of depthwise convolution with respect to the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_sizes` + +A `Tensor` of type `int32`. +An integer vector representing the shape of `input`, based +on `data_format`. For example, if `data_format` is 'NHWC' then +`input` is a 4-D `[batch, height, width, channels]` tensor. +
+`filter` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +4-D with shape +`[filter_height, filter_width, in_channels, depthwise_multiplier]`. +
+`out_backprop` + +A `Tensor`. Must have the same type as `filter`. +4-D with shape based on `data_format`. +For example, if `data_format` is 'NHWC' then +out_backprop shape is `[batch, out_height, out_width, out_channels]`. +Gradients w.r.t. the output of the convolution. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +of the convolution. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, height, width, channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, channels, height, width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each filter +element on that dimension. The dimension order is determined by the value of +`data_format`, see above for details. Dilations in the batch and depth +dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `filter`. +
+ diff --git a/site/en/api_docs/python/tf/nn/dilation2d.md b/site/en/api_docs/python/tf/nn/dilation2d.md new file mode 100644 index 00000000000..f13537c2148 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/dilation2d.md @@ -0,0 +1,138 @@ +description: Computes the grayscale dilation of 4-D input and 3-D filters tensors. + +
+ + +
+ +# tf.nn.dilation2d + + + + + + + + + +Computes the grayscale dilation of 4-D `input` and 3-D `filters` tensors. + + + + + + + +The `input` tensor has shape `[batch, in_height, in_width, depth]` and the +`filters` tensor has shape `[filter_height, filter_width, depth]`, i.e., each +input channel is processed independently of the others with its own +structuring function. The `output` tensor has shape +`[batch, out_height, out_width, depth]`. The spatial dimensions of the output +tensor depend on the `padding` algorithm. We currently only support the +default "NHWC" `data_format`. + +In detail, the grayscale morphological 2-D dilation is the max-sum correlation +(for consistency with `conv2d`, we use unmirrored filters): + + output[b, y, x, c] = + max_{dy, dx} input[b, + strides[1] * y + rates[1] * dy, + strides[2] * x + rates[2] * dx, + c] + + filters[dy, dx, c] + +Max-pooling is a special case when the filter has size equal to the pooling +kernel size and contains all zeros. + +Note on duality: The dilation of `input` by the `filters` is equal to the +negation of the erosion of `-input` by the reflected `filters`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, +`int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, +`uint32`, `uint64`. +4-D with shape `[batch, in_height, in_width, depth]`. +
+`filters` + +A `Tensor`. Must have the same type as `input`. +3-D with shape `[filter_height, filter_width, depth]`. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the input +tensor. Must be: `[1, stride_height, stride_width, 1]`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +A `string`, only `"NHWC"` is currently supported. +
+`dilations` + +A list of `ints` that has length `>= 4`. +The input stride for atrous morphological dilation. Must be: +`[1, rate_height, rate_width, 1]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
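+
+An illustrative sketch (invented shapes): with an all-zero structuring
+element, grayscale dilation reduces to a 3x3 max filter, as noted above:
+
+```
+import tensorflow as tf
+
+x = tf.random.normal([1, 10, 10, 1])
+se = tf.zeros([3, 3, 1])                    # [filter_height, filter_width, depth]
+y = tf.nn.dilation2d(x, se, strides=[1, 1, 1, 1], padding='SAME',
+                     data_format='NHWC', dilations=[1, 1, 1, 1])
+print(y.shape)   # (1, 10, 10, 1)
+```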
+ diff --git a/site/en/api_docs/python/tf/nn/dropout.md b/site/en/api_docs/python/tf/nn/dropout.md new file mode 100644 index 00000000000..4ea6738f8e7 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/dropout.md @@ -0,0 +1,172 @@ +description: Computes dropout: randomly sets elements to zero to prevent overfitting. + +
+ + +
+ +# tf.nn.dropout + + + + + + + + + +Computes dropout: randomly sets elements to zero to prevent overfitting. + + + + + + + +Note: The behavior of dropout has changed between TensorFlow 1.x and 2.x. +When converting 1.x code, please use named arguments to ensure behavior stays +consistent. + +See also: tf.keras.layers.Dropout for a dropout layer. + +[Dropout](https://arxiv.org/abs/1207.0580) is useful for regularizing DNN +models. Inputs elements are randomly set to zero (and the other elements are +rescaled). This encourages each node to be independently useful, as it cannot +rely on the output of other nodes. + +More precisely: With probability `rate` elements of `x` are set to `0`. +The remaining elements are scaled up by `1.0 / (1 - rate)`, so that the +expected value is preserved. + +``` +>>> tf.random.set_seed(0) +>>> x = tf.ones([3,5]) +>>> tf.nn.dropout(x, rate = 0.5, seed = 1).numpy() +array([[2., 0., 0., 2., 2.], + [2., 2., 2., 2., 2.], + [2., 0., 2., 0., 2.]], dtype=float32) +``` + +``` +>>> tf.random.set_seed(0) +>>> x = tf.ones([3,5]) +>>> tf.nn.dropout(x, rate = 0.8, seed = 1).numpy() +array([[0., 0., 0., 5., 5.], + [0., 5., 0., 5., 0.], + [5., 0., 5., 0., 5.]], dtype=float32) +``` + +``` +>>> tf.nn.dropout(x, rate = 0.0) == x + +``` + + +By default, each element is kept or dropped independently. If `noise_shape` +is specified, it must be +[broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) +to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` +will make independent decisions. This is useful for dropping whole +channels from an image or sequence. For example: + +``` +>>> tf.random.set_seed(0) +>>> x = tf.ones([3,10]) +>>> tf.nn.dropout(x, rate = 2/3, noise_shape=[1,10], seed=1).numpy() +array([[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.], + [0., 0., 0., 3., 3., 0., 3., 3., 3., 0.], + [0., 0., 0., 3., 3., 0., 3., 3., 3., 0.]], dtype=float32) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A floating point tensor. +
+`rate` + +A scalar `Tensor` with the same type as x. The probability +that each element is dropped. For example, setting rate=0.1 would drop +10% of input elements. +
+`noise_shape` + +A 1-D `Tensor` of type `int32`, representing the +shape for randomly generated keep/drop flags. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.random.set_seed for behavior. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A Tensor of the same shape of `x`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+If `rate` is not in `[0, 1)` or if `x` is not a floating point
+tensor. `rate=1` is disallowed, because the output would be all zeros,
+which is likely not what was intended.
+
+ diff --git a/site/en/api_docs/python/tf/nn/elu.md b/site/en/api_docs/python/tf/nn/elu.md new file mode 100644 index 00000000000..06956982b77 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/elu.md @@ -0,0 +1,79 @@ +description: Computes exponential linear: exp(features) - 1 if < 0, features otherwise. + +
+ + +
+ +# tf.nn.elu + + + + + + + + + +Computes exponential linear: `exp(features) - 1` if < 0, `features` otherwise. + + + + + + + + + +See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) +](http://arxiv.org/abs/1511.07289) + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
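+
+A tiny worked example (values chosen only for illustration):
+
+```
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 2.0])
+print(tf.nn.elu(x).numpy())   # approx. [-0.632  0.  2.]
+```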
+ diff --git a/site/en/api_docs/python/tf/nn/embedding_lookup.md b/site/en/api_docs/python/tf/nn/embedding_lookup.md new file mode 100644 index 00000000000..c76c4b609f7 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/embedding_lookup.md @@ -0,0 +1,129 @@ +description: Looks up ids in a list of embedding tensors. + +
+ + +
+ +# tf.nn.embedding_lookup + + + + + + + + + +Looks up `ids` in a list of embedding tensors. + + + + + + + +This function is used to perform parallel lookups on the list of +tensors in `params`. It is a generalization of +tf.gather, where `params` is +interpreted as a partitioning of a large embedding tensor. `params` may be +a `PartitionedVariable` as returned by using tf.compat.v1.get_variable() +with a +partitioner. + +If `len(params) > 1`, each element `id` of `ids` is partitioned between +the elements of `params` according to the `partition_strategy`. +In all strategies, if the id space does not evenly divide the number of +partitions, each of the first `(max_id + 1) % len(params)` partitions will +be assigned one more id. + +The `partition_strategy` is always `"div"` currently. This means that we +assign ids to partitions in a contiguous manner. For instance, 13 ids are +split across 5 partitions as: +`[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]` + +The results of the lookup are concatenated into a dense +tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`. + + + + + + + + + + + + + + + + + + + +
+`params` + +A single tensor representing the complete embedding tensor, or a +list of P tensors all of same shape except for the first dimension, +representing sharded embedding tensors. Alternatively, a +`PartitionedVariable`, created by partitioning along dimension 0. Each +element must be appropriately sized for the 'div' `partition_strategy`. +
+`ids` + +A `Tensor` with type `int32` or `int64` containing the ids to be looked +up in `params`. +
+`max_norm` + +If not `None`, each embedding is clipped if its l2-norm is larger +than this value. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` with the same type as the tensors in `params`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `params` is empty. +
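+
+A minimal sketch (random table, arbitrary ids) showing the
+`shape(ids) + shape(params)[1:]` rule for the result shape:
+
+```
+import tensorflow as tf
+
+params = tf.random.normal([10, 4])          # a single [vocab_size, dim] embedding table
+ids = tf.constant([[0, 3], [7, 9]])
+emb = tf.nn.embedding_lookup(params, ids)
+print(emb.shape)   # (2, 2, 4)
+```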
+ diff --git a/site/en/api_docs/python/tf/nn/embedding_lookup_sparse.md b/site/en/api_docs/python/tf/nn/embedding_lookup_sparse.md new file mode 100644 index 00000000000..ca36705be1d --- /dev/null +++ b/site/en/api_docs/python/tf/nn/embedding_lookup_sparse.md @@ -0,0 +1,174 @@ +description: Computes embeddings for the given ids and weights. + +
+ + +
+ +# tf.nn.embedding_lookup_sparse + + + + + + + + + +Computes embeddings for the given ids and weights. + + + + + + + +This op assumes that there is at least one id for each row in the dense tensor +represented by sp_ids (i.e. there are no rows with empty features), and that +all the indices of sp_ids are in canonical row-major order. + +It also assumes that all id values lie in the range [0, p0), where p0 +is the sum of the size of params along dimension 0. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`params`
+
+A single tensor representing the complete embedding tensor, or a
+list of P tensors all of same shape except for the first dimension,
+representing sharded embedding tensors. Alternatively, a
+`PartitionedVariable`, created by partitioning along dimension 0. Each
+element must be appropriately sized for the 'div' `partition_strategy`.
+
+`sp_ids` + +N x M `SparseTensor` of int64 ids where N is typically batch size +and M is arbitrary. +
+`sp_weights` + +either a `SparseTensor` of float / double weights, or `None` to +indicate all weights should be taken to be 1. If specified, `sp_weights` +must have exactly the same shape and indices as `sp_ids`. +
+`combiner` + +A string specifying the reduction op. Currently "mean", "sqrtn" +and "sum" are supported. "sum" computes the weighted sum of the embedding +results for each row. "mean" is the weighted sum divided by the total +weight. "sqrtn" is the weighted sum divided by the square root of the sum +of the squares of the weights. +
+`max_norm` + +If not `None`, each embedding is clipped if its l2-norm is larger +than this value, before combining. +
+`name` + +Optional name for the op. +
+ + + + + + + + + + + +
+A dense tensor representing the combined embeddings for the +sparse ids. For each row in the dense tensor represented by `sp_ids`, the op +looks up the embeddings for all ids in that row, multiplies them by the +corresponding weight, and combines these embeddings as specified. + +In other words, if + +`shape(combined params) = [p0, p1, ..., pm]` + +and + +`shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]` + +then + +`shape(output) = [d0, d1, ..., dn-1, p1, ..., pm]`. + +For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are + +```python +[0, 0]: id 1, weight 2.0 +[0, 1]: id 3, weight 0.5 +[1, 0]: id 0, weight 1.0 +[2, 3]: id 1, weight 3.0 +``` + +with `combiner`="mean", then the output will be a 3x20 matrix where + +```python +output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5) +output[1, :] = (params[0, :] * 1.0) / 1.0 +output[2, :] = (params[1, :] * 3.0) / 3.0 +``` +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If `sp_ids` is not a `SparseTensor`, or if `sp_weights` is +neither `None` nor `SparseTensor`. +
+`ValueError` + +If `combiner` is not one of {"mean", "sqrtn", "sum"}. +
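+
+The weighted-"mean" case described above, as a hedged sketch (the embedding
+table is random; the ids and weights mirror the illustration in the returns
+section):
+
+```
+import tensorflow as tf
+
+params = tf.random.normal([10, 20])
+sp_ids = tf.SparseTensor(indices=[[0, 0], [0, 1], [1, 0], [2, 3]],
+                         values=tf.constant([1, 3, 0, 1], dtype=tf.int64),
+                         dense_shape=[3, 4])
+sp_weights = tf.SparseTensor(indices=sp_ids.indices,
+                             values=tf.constant([2.0, 0.5, 1.0, 3.0]),
+                             dense_shape=sp_ids.dense_shape)
+out = tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, combiner='mean')
+print(out.shape)   # (3, 20)
+```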
+ diff --git a/site/en/api_docs/python/tf/nn/erosion2d.md b/site/en/api_docs/python/tf/nn/erosion2d.md new file mode 100644 index 00000000000..5e230c7bfd4 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/erosion2d.md @@ -0,0 +1,151 @@ +description: Computes the grayscale erosion of 4-D value and 3-D filters tensors. + +
+ + +
+ +# tf.nn.erosion2d + + + + + + + + + +Computes the grayscale erosion of 4-D `value` and 3-D `filters` tensors. + + + + + + + +The `value` tensor has shape `[batch, in_height, in_width, depth]` and the +`filters` tensor has shape `[filters_height, filters_width, depth]`, i.e., +each input channel is processed independently of the others with its own +structuring function. The `output` tensor has shape +`[batch, out_height, out_width, depth]`. The spatial dimensions of the +output tensor depend on the `padding` algorithm. We currently only support the +default "NHWC" `data_format`. + +In detail, the grayscale morphological 2-D erosion is given by: + + output[b, y, x, c] = + min_{dy, dx} value[b, + strides[1] * y - dilations[1] * dy, + strides[2] * x - dilations[2] * dx, + c] - + filters[dy, dx, c] + +Duality: The erosion of `value` by the `filters` is equal to the negation of +the dilation of `-value` by the reflected `filters`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. 4-D with shape `[batch, in_height, in_width, depth]`. +
+`filters` + +A `Tensor`. Must have the same type as `value`. +3-D with shape `[filters_height, filters_width, depth]`. +
+`strides` + +A list of `ints` that has length `>= 4`. +1-D of length 4. The stride of the sliding window for each dimension of +the input tensor. Must be: `[1, stride_height, stride_width, 1]`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +A `string`, only `"NHWC"` is currently supported. +
+`dilations` + +A list of `ints` that has length `>= 4`. +1-D of length 4. The input stride for atrous morphological dilation. +Must be: `[1, rate_height, rate_width, 1]`. +
+`name` + +A name for the operation (optional). If not specified "erosion2d" +is used. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `value`. +4-D with shape `[batch, out_height, out_width, depth]`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the `value` depth does not match `filters`' shape, or if +padding is other than `'VALID'` or `'SAME'`. +
+ diff --git a/site/en/api_docs/python/tf/nn/fractional_avg_pool.md b/site/en/api_docs/python/tf/nn/fractional_avg_pool.md new file mode 100644 index 00000000000..bae047758df --- /dev/null +++ b/site/en/api_docs/python/tf/nn/fractional_avg_pool.md @@ -0,0 +1,129 @@ +description: Performs fractional average pooling on the input. + +
+ + +
+ +# tf.nn.fractional_avg_pool + + + + + + + + + +Performs fractional average pooling on the input. + + + + + + + +Fractional average pooling is similar to Fractional max pooling in the pooling +region generation step. The only difference is that after pooling regions are +generated, a mean operation is performed instead of a max operation in each +pooling region. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. 4-D with shape `[batch, height, width, channels]`. +
+`pooling_ratio` + +A list of `floats` that has length >= 4. Pooling ratio for +each dimension of `value`, currently only supports row and col dimension +and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, +1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't +allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling +ratio on height and width dimensions respectively. +
+`pseudo_random` + +An optional `bool`. Defaults to `False`. When set to `True`, +generates the pooling sequence in a pseudorandom fashion, otherwise, in a +random fashion. Check paper (Graham, 2015) for difference between +pseudorandom and random. +
+`overlapping` + +An optional `bool`. Defaults to `False`. When set to `True`, +it means when pooling, the values at the boundary of adjacent pooling +cells are used by both cells. For example: +`index 0 1 2 3 4` +`value 20 5 16 3 7` +If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used +twice. The result would be [20, 16] for fractional avg pooling. +
+`seed` + +An optional `int`. Defaults to `0`. If set to be non-zero, the +random number generator is seeded by the given seed. Otherwise it is +seeded by a random seed. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + +
+ + +A tuple of `Tensor` objects (`output`, `row_pooling_sequence`, +`col_pooling_sequence`). + output: Output `Tensor` after fractional avg pooling. Has the same type as + `value`. + row_pooling_sequence: A `Tensor` of type `int64`. + col_pooling_sequence: A `Tensor` of type `int64`. + +#### References: + +Fractional Max-Pooling: + [Graham, 2015](https://arxiv.org/abs/1412.6071) + ([pdf](https://arxiv.org/pdf/1412.6071.pdf)) diff --git a/site/en/api_docs/python/tf/nn/fractional_max_pool.md b/site/en/api_docs/python/tf/nn/fractional_max_pool.md new file mode 100644 index 00000000000..8d9fb94b2c2 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/fractional_max_pool.md @@ -0,0 +1,150 @@ +description: Performs fractional max pooling on the input. + +
+ + +
+ +# tf.nn.fractional_max_pool + + + + + + + + + +Performs fractional max pooling on the input. + + + + + + + +Fractional max pooling is slightly different than regular max pooling. In +regular max pooling, you downsize an input set by taking the maximum value of +smaller N x N subsections of the set (often 2x2), and try to reduce the set by +a factor of N, where N is an integer. Fractional max pooling, as you might +expect from the word "fractional", means that the overall reduction ratio N +does not have to be an integer. + +The sizes of the pooling regions are generated randomly but are fairly +uniform. For example, let's look at the height dimension, and the constraints +on the list of rows that will be pool boundaries. + +First we define the following: + +1. input_row_length : the number of rows from the input set +2. output_row_length : which will be smaller than the input +3. alpha = input_row_length / output_row_length : our reduction ratio +4. K = floor(alpha) +5. row_pooling_sequence : this is the result list of pool boundary rows + +Then, row_pooling_sequence should satisfy: + +1. a[0] = 0 : the first value of the sequence is 0 +2. a[end] = input_row_length : the last value of the sequence is the size +3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size +4. length(row_pooling_sequence) = output_row_length+1 + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. 4-D with shape `[batch, height, width, channels]`. +
+`pooling_ratio` + +An int or list of `ints` that has length `1`, `2` or `4`. +Pooling ratio for each dimension of `value`, currently only supports row +and col dimension and should be >= 1.0. For example, a valid pooling ratio +looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 +because we don't allow pooling on batch and channels dimensions. 1.44 and +1.73 are pooling ratio on height and width dimensions respectively. +
+`pseudo_random` + +An optional `bool`. Defaults to `False`. When set to `True`, +generates the pooling sequence in a pseudorandom fashion, otherwise, in a +random fashion. Check paper (Graham, 2015) for difference between +pseudorandom and random. +
+`overlapping` + +An optional `bool`. Defaults to `False`. When set to `True`, +it means when pooling, the values at the boundary of adjacent pooling +cells are used by both cells. For example: +`index 0 1 2 3 4` +`value 20 5 16 3 7` +If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used +twice. The result would be [20, 16] for fractional max pooling. +
+`seed` + +An optional `int`. Defaults to `0`. If set to be non-zero, the +random number generator is seeded by the given seed. Otherwise it is +seeded by a random seed. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + +
+ + +A tuple of `Tensor` objects (`output`, `row_pooling_sequence`, +`col_pooling_sequence`). + output: Output `Tensor` after fractional max pooling. Has the same type as + `value`. + row_pooling_sequence: A `Tensor` of type `int64`. + col_pooling_sequence: A `Tensor` of type `int64`. + +#### References: + +Fractional Max-Pooling: + [Graham, 2015](https://arxiv.org/abs/1412.6071) + ([pdf](https://arxiv.org/pdf/1412.6071.pdf)) diff --git a/site/en/api_docs/python/tf/nn/l2_loss.md b/site/en/api_docs/python/tf/nn/l2_loss.md new file mode 100644 index 00000000000..0a0299e9095 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/l2_loss.md @@ -0,0 +1,81 @@ +description: L2 Loss. + +
+ + +
+ +# tf.nn.l2_loss + + + + + + + + + +L2 Loss. + + + + + + + + + +Computes half the L2 norm of a tensor without the `sqrt`: + + output = sum(t ** 2) / 2 + + + + + + + + + + + + + +
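+
+For example (an illustrative check, not part of the original docstring):
+
+```python
+import tensorflow as tf
+
+t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+loss = tf.nn.l2_loss(t)                      # (1 + 4 + 9 + 16) / 2 = 15.0
+manual = tf.reduce_sum(tf.square(t)) / 2.0
+print(loss.numpy(), manual.numpy())          # both print 15.0
+```
+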
+`t` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +Typically 2-D, but may have any dimensions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `t`. +
+ diff --git a/site/en/api_docs/python/tf/nn/leaky_relu.md b/site/en/api_docs/python/tf/nn/leaky_relu.md new file mode 100644 index 00000000000..da5ad715c97 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/leaky_relu.md @@ -0,0 +1,75 @@ +description: Compute the Leaky ReLU activation function. + +
+ + +
+ +# tf.nn.leaky_relu + + + + + + + + + +Compute the Leaky ReLU activation function. + + + + + + + + + +Source: [Rectifier Nonlinearities Improve Neural Network Acoustic Models. +AL Maas, AY Hannun, AY Ng - Proc. ICML, 2013] +(https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf). +Args: + features: A `Tensor` representing preactivation values. Must be one of + the following types: `float16`, `float32`, `float64`, `int32`, `int64`. + alpha: Slope of the activation function at x < 0. + name: A name for the operation (optional). + + + + + + + + + +
+The activation value. +
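+
+A small usage sketch (added for illustration; the inputs are arbitrary) showing
+the effect of `alpha` on negative values:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-10.0, -1.0, 0.0, 2.0])
+print(tf.nn.leaky_relu(x).numpy())              # alpha defaults to 0.2: [-2., -0.2, 0., 2.]
+print(tf.nn.leaky_relu(x, alpha=0.01).numpy())  # [-0.1, -0.01, 0., 2.]
+```
+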
+ + + +#### References: + +Rectifier Nonlinearities Improve Neural Network Acoustic Models: + [Maas et al., 2013] + (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.693.1422) + ([pdf] + (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.693.1422&rep=rep1&type=pdf)) diff --git a/site/en/api_docs/python/tf/nn/local_response_normalization.md b/site/en/api_docs/python/tf/nn/local_response_normalization.md new file mode 100644 index 00000000000..5bc59f220ce --- /dev/null +++ b/site/en/api_docs/python/tf/nn/local_response_normalization.md @@ -0,0 +1,123 @@ +description: Local Response Normalization. + +
+ + +
+ +# tf.nn.local_response_normalization + + + + + + + + + +Local Response Normalization. + + + + + + + + + +The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last +dimension), and each vector is normalized independently. Within a given vector, +each component is divided by the weighted, squared sum of inputs within +`depth_radius`. In detail, + + sqr_sum[a, b, c, d] = + sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2) + output = input / (bias + alpha * sqr_sum) ** beta + +For details, see [Krizhevsky et al., ImageNet classification with deep +convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks). + + + + + + + + + + + + + + + + + + + + + + + + + +
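+
+The following sketch (not part of the original docstring; the shape and
+parameters are arbitrary) reproduces the formula above for one channel position:
+
+```python
+import tensorflow as tf
+
+x = tf.random.uniform([1, 1, 1, 5])           # a single 1-D vector along depth
+y = tf.nn.local_response_normalization(x, depth_radius=1, bias=1.0,
+                                        alpha=1.0, beta=0.5)
+
+# Manual check for channel d=2: the window covers channels 1..3.
+v = x[0, 0, 0]
+sqr_sum = tf.reduce_sum(tf.square(v[1:4]))
+manual = v[2] / (1.0 + 1.0 * sqr_sum) ** 0.5
+print(y[0, 0, 0, 2].numpy(), manual.numpy())  # should agree up to float rounding
+```
+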
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. +4-D. +
+`depth_radius` + +An optional `int`. Defaults to `5`. +0-D. Half-width of the 1-D normalization window. +
+`bias` + +An optional `float`. Defaults to `1`. +An offset (usually positive to avoid dividing by 0). +
+`alpha` + +An optional `float`. Defaults to `1`. +A scale factor, usually positive. +
+`beta` + +An optional `float`. Defaults to `0.5`. An exponent. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/nn/log_poisson_loss.md b/site/en/api_docs/python/tf/nn/log_poisson_loss.md new file mode 100644 index 00000000000..592c8ebf2f9 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/log_poisson_loss.md @@ -0,0 +1,135 @@ +description: Computes log Poisson loss given log_input. + +
+ + +
+ +# tf.nn.log_poisson_loss + + + + + + + + + +Computes log Poisson loss given `log_input`. + + + + + + + + + +Gives the log-likelihood loss between the prediction and the target under the +assumption that the target has a Poisson distribution. +Caveat: By default, this is not the exact loss, but the loss minus a + constant term [log(z!)]. That has no effect for optimization, but + does not play well with relative loss comparisons. To compute an + approximation of the log factorial term, specify + compute_full_loss=True to enable Stirling's Approximation. + +For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson +loss is + + -log(exp(-x) * (x^z) / z!) + = -log(exp(-x) * (x^z)) + log(z!) + ~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)] + [ Note the second term is the Stirling's Approximation for log(z!). + It is invariant to x and does not affect optimization, though + important for correct relative loss comparisons. It is only + computed when compute_full_loss == True. ] + = x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)] + = exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)] + + + + + + + + + + + + + + + + + + + +
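+
+A small numerical sketch of the default behaviour (added for illustration;
+the values are arbitrary):
+
+```python
+import tensorflow as tf
+
+targets = tf.constant([1.0, 2.0, 4.0])
+log_input = tf.math.log(tf.constant([1.5, 2.0, 3.0]))   # c = log(x)
+
+loss = tf.nn.log_poisson_loss(targets, log_input)       # compute_full_loss=False
+manual = tf.exp(log_input) - targets * log_input        # exp(c) - z * c
+print(loss.numpy())
+print(manual.numpy())   # identical to the line above
+```
+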
+`targets` + +A `Tensor` of the same type and shape as `log_input`. +
+`log_input` + +A `Tensor` of type `float32` or `float64`. +
+`compute_full_loss` + +whether to compute the full loss. If false, a constant +term is dropped in favor of more efficient optimization. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+
+
+A `Tensor` of the same shape as `log_input` with the componentwise
+log Poisson losses.
+
+ + + + + + + + + + + + +
+`ValueError` + +If `log_input` and `targets` do not have the same shape. +
+ diff --git a/site/en/api_docs/python/tf/nn/log_softmax.md b/site/en/api_docs/python/tf/nn/log_softmax.md new file mode 100644 index 00000000000..eea656ec0b4 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/log_softmax.md @@ -0,0 +1,109 @@ +description: Computes log softmax activations. + +
+ + +
+ +# tf.nn.log_softmax + + + + + + + + + +Computes log softmax activations. + + + + + + + + + +For each batch `i` and class `j` we have + + logsoftmax = logits - log(reduce_sum(exp(logits), axis)) + + + + + + + + + + + + + + + + +
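+
+For illustration (not part of the original docstring), the op agrees with the
+formula above while being numerically safer than `log(softmax(...))`:
+
+```python
+import tensorflow as tf
+
+logits = tf.constant([[2.0, 1.0, 0.1]])
+a = tf.nn.log_softmax(logits)
+b = logits - tf.math.log(tf.reduce_sum(tf.exp(logits), axis=-1, keepdims=True))
+print(a.numpy())
+print(b.numpy())   # same values up to float rounding
+```
+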
+`logits` + +A non-empty `Tensor`. Must be one of the following types: `half`, +`float32`, `float64`. +
+`axis` + +The dimension softmax would be performed on. The default is -1 which +indicates the last dimension. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `logits`. Same shape as `logits`. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if `logits` is empty or `axis` is beyond the last +dimension of `logits`. +
+ diff --git a/site/en/api_docs/python/tf/nn/max_pool.md b/site/en/api_docs/python/tf/nn/max_pool.md new file mode 100644 index 00000000000..ea8900258e5 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/max_pool.md @@ -0,0 +1,119 @@ +description: Performs the max pooling on the input. + +
+ + +
+ +# tf.nn.max_pool + + + + + + + + + +Performs the max pooling on the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + +[num_channels]` if `data_format` does not start with "NC" (default), or +`[batch_size, num_channels] + input_spatial_shape` if data_format starts +with "NC". Pooling happens over the spatial dimensions only. +
+`ksize` + +An int or list of `ints` that has length `1`, `N` or `N+2`. The size +of the window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1`, `N` or `N+2`. The +stride of the sliding window for each dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. See +the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. Specifies the channel dimension. For N=1 it can be +either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) +or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW". +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of format specified by `data_format`. +The max pooled output tensor. +
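+
+An illustrative 2-D case (this snippet is not part of the original docstring):
+
+```python
+import tensorflow as tf
+
+x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])  # NHWC
+y = tf.nn.max_pool(x, ksize=2, strides=2, padding='VALID')
+print(tf.squeeze(y).numpy())
+# [[ 5.  7.]
+#  [13. 15.]]
+```
+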
+ diff --git a/site/en/api_docs/python/tf/nn/max_pool1d.md b/site/en/api_docs/python/tf/nn/max_pool1d.md new file mode 100644 index 00000000000..15a407e49dc --- /dev/null +++ b/site/en/api_docs/python/tf/nn/max_pool1d.md @@ -0,0 +1,115 @@ +description: Performs the max pooling on the input. + +
+ + +
+ +# tf.nn.max_pool1d + + + + + + + + + +Performs the max pooling on the input. + + + + + + + + + +Note internally this op reshapes and uses the underlying 2d operation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A 3-D `Tensor` of the format specified by `data_format`. +
+`ksize` + +An int or list of `ints` that has length `1` or `3`. The size of the +window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1` or `3`. The stride of +the sliding window for each dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. See +the "returns" section of tf.nn.convolution for details. +
+`data_format` + +An optional string from: "NWC", "NCW". Defaults to "NWC". +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of format specified by `data_format`. +The max pooled output tensor. +
+ diff --git a/site/en/api_docs/python/tf/nn/max_pool2d.md b/site/en/api_docs/python/tf/nn/max_pool2d.md new file mode 100644 index 00000000000..4ddea30c49e --- /dev/null +++ b/site/en/api_docs/python/tf/nn/max_pool2d.md @@ -0,0 +1,114 @@ +description: Performs the max pooling on the input. + +
+ + +
+ +# tf.nn.max_pool2d + + + + + + + + + +Performs the max pooling on the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A 4-D `Tensor` of the format specified by `data_format`. +
+`ksize` + +An int or list of `ints` that has length `1`, `2` or `4`. The size of +the window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1`, `2` or `4`. The +stride of the sliding window for each dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. See +the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of format specified by `data_format`. +The max pooled output tensor. +
+ diff --git a/site/en/api_docs/python/tf/nn/max_pool3d.md b/site/en/api_docs/python/tf/nn/max_pool3d.md new file mode 100644 index 00000000000..bc3aba05248 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/max_pool3d.md @@ -0,0 +1,119 @@ +description: Performs the max pooling on the input. + +
+ + +
+ +# tf.nn.max_pool3d + + + + + + + + + +Performs the max pooling on the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A 5-D `Tensor` of the format specified by `data_format`. +
+`ksize` + +An int or list of `ints` that has length `1`, `3` or `5`. The size of +the window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1`, `3` or `5`. The +stride of the sliding window for each dimension of the input tensor. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. See +the "returns" section of tf.nn.convolution for details. +
+`data_format` + +An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". +The data format of the input and output data. With the default format +"NDHWC", the data is stored in the order of: [batch, in_depth, in_height, +in_width, in_channels]. Alternatively, the format could be "NCDHW", the +data storage order is: [batch, in_channels, in_depth, in_height, +in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of format specified by `data_format`. +The max pooled output tensor. +
+ diff --git a/site/en/api_docs/python/tf/nn/max_pool_with_argmax.md b/site/en/api_docs/python/tf/nn/max_pool_with_argmax.md new file mode 100644 index 00000000000..48e85a66484 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/max_pool_with_argmax.md @@ -0,0 +1,151 @@ +description: Performs max pooling on the input and outputs both max values and indices. + +
+ + +
+ +# tf.nn.max_pool_with_argmax + + + + + + + + + +Performs max pooling on the input and outputs both max values and indices. + + + + + + + +The indices in `argmax` are flattened, so that a maximum value at position +`[b, y, x, c]` becomes flattened index: `(y * width + x) * channels + c` if +`include_batch_in_index` is False; +`((b * height + y) * width + x) * channels + c` +if `include_batch_in_index` is True. + +The indices returned are always in `[0, height) x [0, width)` before +flattening, even if padding is involved and the mathematically correct answer +is outside (either negative or too large). This is a bug, but fixing it is +difficult to do in a safe backwards compatible way, especially due to +flattening. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
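+
+A small sketch of the flattened indices (added for illustration; the input is
+arbitrary):
+
+```python
+import tensorflow as tf
+
+x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
+out, argmax = tf.nn.max_pool_with_argmax(x, ksize=2, strides=2, padding='VALID')
+print(tf.squeeze(out).numpy())     # [[ 5.  7.] [13. 15.]]
+# With include_batch_in_index=False, index = (y * width + x) * channels + c.
+print(tf.squeeze(argmax).numpy())  # [[ 5  7] [13 15]] for this input
+```
+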
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, +`int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, +`uint32`, `uint64`. +4-D with shape `[batch, height, width, channels]`. Input to pool over. +
+`ksize` + +An int or list of `ints` that has length `1`, `2` or `4`. +The size of the window for each dimension of the input tensor. +
+`strides` + +An int or list of `ints` that has length `1`, `2` or `4`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string`, must be set to `"NHWC"`. Defaults to +`"NHWC"`. +Specify the data format of the input and output data. +
+`output_dtype` + +An optional tf.DType from: `tf.int32, tf.int64`. +Defaults to tf.int64. +The dtype of the returned argmax tensor. +
+`include_batch_in_index` + +An optional `boolean`. Defaults to `False`. +Whether to include batch dimension in flattened index of `argmax`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, argmax). +
+`output` + +A `Tensor`. Has the same type as `input`. +
+`argmax` + +A `Tensor` of type `output_dtype`. +
+ diff --git a/site/en/api_docs/python/tf/nn/moments.md b/site/en/api_docs/python/tf/nn/moments.md new file mode 100644 index 00000000000..3a0a0327c03 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/moments.md @@ -0,0 +1,105 @@ +description: Calculates the mean and variance of x. + +
+ + +
+ +# tf.nn.moments + + + + + + + + + +Calculates the mean and variance of `x`. + + + + + + + +The mean and variance are calculated by aggregating the contents of `x` +across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean +and variance of a vector. + +Note: shift is currently not used; the true mean is computed and used. + +When using these moments for batch normalization (see +tf.nn.batch_normalization): + + * for so-called "global normalization", used with convolutional filters with + shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`. + * for simple batch normalization pass `axes=[0]` (batch only). + + + + + + + + + + + + + + + + + + + + + + +
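+
+For example (illustrative values, not part of the original docstring),
+per-feature moments over the batch axis:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, 2.0],
+                 [3.0, 4.0],
+                 [5.0, 6.0]])
+mean, variance = tf.nn.moments(x, axes=[0])
+print(mean.numpy())       # [3. 4.]
+print(variance.numpy())   # [2.6666667 2.6666667]
+```
+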
+`x` + +A `Tensor`. +
+`axes` + +Array of ints. Axes along which to compute mean and +variance. +
+`shift` + +Not used in the current implementation. +
+`keepdims` + +produce moments with the same dimensionality as the input. +
+`name` + +Name used to scope the operations that compute the moments. +
+ + + + + + + + + + + +
+Two `Tensor` objects: `mean` and `variance`. +
+ diff --git a/site/en/api_docs/python/tf/nn/nce_loss.md b/site/en/api_docs/python/tf/nn/nce_loss.md new file mode 100644 index 00000000000..4348610b9b4 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/nce_loss.md @@ -0,0 +1,187 @@ +description: Computes and returns the noise-contrastive estimation training loss. + +
+ + +
+ +# tf.nn.nce_loss + + + + + + + + + +Computes and returns the noise-contrastive estimation training loss. + + + + + + + +See [Noise-contrastive estimation: A new estimation principle for +unnormalized statistical +models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). +Also see our [Candidate Sampling Algorithms +Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf) + +A common use case is to use this method for training, and calculate the full +sigmoid loss for evaluation or inference as in the following example: + +```python +if mode == "train": + loss = tf.nn.nce_loss( + weights=weights, + biases=biases, + labels=labels, + inputs=inputs, + ...) +elif mode == "eval": + logits = tf.matmul(inputs, tf.transpose(weights)) + logits = tf.nn.bias_add(logits, biases) + labels_one_hot = tf.one_hot(labels, n_classes) + loss = tf.nn.sigmoid_cross_entropy_with_logits( + labels=labels_one_hot, + logits=logits) + loss = tf.reduce_sum(loss, axis=1) +``` + +Note: when doing embedding lookup on `weights` and `bias`, "div" partition +strategy will be used. Support for other partition strategy will be added +later. + +Note: By default this uses a log-uniform (Zipfian) distribution for sampling, +so your labels must be sorted in order of decreasing frequency to achieve +good results. For more details, see +tf.random.log_uniform_candidate_sampler. + +Note: In the case where `num_true` > 1, we assign to each target class +the target probability 1 / `num_true` so that the target probabilities +sum to 1 per-example. + +Note: It would be useful to allow a variable number of target classes per +example. We hope to provide this functionality in a future release. +For now, if you have a variable number of target classes, you can pad them +out to a constant number by either repeating them or by padding +with an otherwise unused class. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`weights` + +A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` +objects whose concatenation along dimension 0 has shape [num_classes, +dim]. The (possibly-partitioned) class embeddings. +
+`biases` + +A `Tensor` of shape `[num_classes]`. The class biases. +
+`labels` + +A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The +target classes. +
+`inputs` + +A `Tensor` of shape `[batch_size, dim]`. The forward activations of +the input network. +
+`num_sampled` + +An `int`. The number of negative classes to randomly sample +per batch. This single sample of negative classes is evaluated for each +element in the batch. +
+`num_classes` + +An `int`. The number of possible classes. +
+`num_true` + +An `int`. The number of target classes per training example. +
+`sampled_values` + +a tuple of (`sampled_candidates`, `true_expected_count`, +`sampled_expected_count`) returned by a `*_candidate_sampler` function. +(if None, we default to `log_uniform_candidate_sampler`) +
+`remove_accidental_hits` + +A `bool`. Whether to remove "accidental hits" +where a sampled class equals one of the target classes. If set to `True`, +this is a "Sampled Logistic" loss instead of NCE, and we are learning to +generate log-odds instead of log probabilities. See our [Candidate +Sampling Algorithms Reference] +(https://www.tensorflow.org/extras/candidate_sampling.pdf). Default is +False. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `batch_size` 1-D tensor of per-example NCE losses. +
+ diff --git a/site/en/api_docs/python/tf/nn/normalize_moments.md b/site/en/api_docs/python/tf/nn/normalize_moments.md new file mode 100644 index 00000000000..c5b50445dc8 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/normalize_moments.md @@ -0,0 +1,106 @@ +description: Calculate the mean and variance of based on the sufficient statistics. + +
+ + +
+
+# tf.nn.normalize_moments
+
+
+
+
+
+
+
+
+
+Calculate the mean and variance based on the sufficient statistics.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`counts` + +A `Tensor` containing the total count of the data (one value). +
+`mean_ss` + +A `Tensor` containing the mean sufficient statistics: the (possibly +shifted) sum of the elements to average over. +
+`variance_ss` + +A `Tensor` containing the variance sufficient statistics: the +(possibly shifted) squared sum of the data to compute the variance over. +
+`shift` + +A `Tensor` containing the value by which the data is shifted for +numerical stability, or `None` if no shift was performed. +
+`name` + +Name used to scope the operations that compute the moments. +
+ + + + + + + + + + + +
+Two `Tensor` objects: `mean` and `variance`. +
+ diff --git a/site/en/api_docs/python/tf/nn/pool.md b/site/en/api_docs/python/tf/nn/pool.md new file mode 100644 index 00000000000..91fed43697c --- /dev/null +++ b/site/en/api_docs/python/tf/nn/pool.md @@ -0,0 +1,188 @@ +description: Performs an N-D pooling operation. + +
+ + +
+ +# tf.nn.pool + + + + + + + + + +Performs an N-D pooling operation. + + + + + + + +In the case that `data_format` does not start with "NC", computes for + 0 <= b < batch_size, + 0 <= x[i] < output_spatial_shape[i], + 0 <= c < num_channels: + +``` + output[b, x[0], ..., x[N-1], c] = + REDUCE_{z[0], ..., z[N-1]} + input[b, + x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], + ... + x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], + c], +``` + +where the reduction function REDUCE depends on the value of `pooling_type`, +and pad_before is defined based on the value of `padding` as described in +the "returns" section of tf.nn.convolution for details. +The reduction never includes out-of-bounds positions. + +In the case that `data_format` starts with `"NC"`, the `input` and output are +simply transposed as follows: + +``` + pool(input, data_format, **kwargs) = + tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), + **kwargs), + [0, N+1] + range(1, N+1)) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
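+
+As an illustrative sketch (not part of the original docstring), average pooling
+a 4x4 map with a 2x2 window:
+
+```python
+import tensorflow as tf
+
+x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])   # NHWC
+y = tf.nn.pool(x, window_shape=[2, 2], pooling_type='AVG',
+               strides=[2, 2], padding='VALID')
+print(tf.squeeze(y).numpy())
+# [[ 2.5  4.5]
+#  [10.5 12.5]]
+```
+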
+`input` + +Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape + +[num_channels]` if data_format does not start with "NC" (default), or +`[batch_size, num_channels] + input_spatial_shape` if data_format starts +with "NC". Pooling happens over the spatial dimensions only. +
+`window_shape` + +Sequence of N ints >= 1. +
+`pooling_type` + +Specifies pooling operation, must be "AVG" or "MAX". +
+`strides` + +Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of +strides is > 1, then all values of dilation_rate must be 1. +
+`padding` + +The padding algorithm, must be "SAME" or "VALID". Defaults to "SAME". +See the "returns" section of tf.nn.convolution for details. +
+`data_format` + +A string or None. Specifies whether the channel dimension of +the `input` and output is the last dimension (default, or if `data_format` +does not start with "NC"), or the second dimension (if `data_format` +starts with "NC"). For N=1, the valid values are "NWC" (default) and +"NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For +N=3, the valid values are "NDHWC" (default) and "NCDHW". +
+`dilations` + +Optional. Dilation rate. List of N ints >= 1. Defaults to +[1]*N. If any value of dilation_rate is > 1, then all values of strides +must be 1. +
+`name` + +Optional. Name of the op. +
+ + + + + + + + + + + +
+Tensor of rank N+2, of shape +[batch_size] + output_spatial_shape + [num_channels] + +if data_format is None or does not start with "NC", or + +[batch_size, num_channels] + output_spatial_shape + +if data_format starts with "NC", +where `output_spatial_shape` depends on the value of padding: + +If padding = "SAME": +output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i]) + +If padding = "VALID": +output_spatial_shape[i] = +ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) +/ strides[i]). +
+ + + + + + + + + + + + +
+`ValueError` + +if arguments are invalid. +
+ diff --git a/site/en/api_docs/python/tf/nn/relu.md b/site/en/api_docs/python/tf/nn/relu.md new file mode 100644 index 00000000000..9de2bab8826 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/relu.md @@ -0,0 +1,81 @@ +description: Computes rectified linear: max(features, 0). + +
+ + +
+ +# tf.nn.relu + + + + + + + + + +Computes rectified linear: `max(features, 0)`. + + + + + + + + + +See: https://en.wikipedia.org/wiki/Rectifier_(neural_networks) +Example usage: +>>> tf.nn.relu([-2., 0., -0., 3.]).numpy() +array([ 0., 0., -0., 3.], dtype=float32) + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
+ diff --git a/site/en/api_docs/python/tf/nn/relu6.md b/site/en/api_docs/python/tf/nn/relu6.md new file mode 100644 index 00000000000..be9409cd98d --- /dev/null +++ b/site/en/api_docs/python/tf/nn/relu6.md @@ -0,0 +1,90 @@ +description: Computes Rectified Linear 6: min(max(features, 0), 6). + +
+ + +
+ +# tf.nn.relu6 + + + + + + + + + +Computes Rectified Linear 6: `min(max(features, 0), 6)`. + + + + + + + + + + + + + + + + + + + + + + +
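+
+For example (illustrative only):
+
+```python
+import tensorflow as tf
+
+print(tf.nn.relu6(tf.constant([-3.0, 0.0, 4.5, 10.0])).numpy())
+# [0.  0.  4.5 6. ]
+```
+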
+`features` + +A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, +`int16`, or `int8`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `features`. +
+ + + +#### References: + +Convolutional Deep Belief Networks on CIFAR-10: + Krizhevsky et al., 2010 + ([pdf](http://www.cs.utoronto.ca/~kriz/conv-cifar10-aug2010.pdf)) diff --git a/site/en/api_docs/python/tf/nn/safe_embedding_lookup_sparse.md b/site/en/api_docs/python/tf/nn/safe_embedding_lookup_sparse.md new file mode 100644 index 00000000000..bf41f453dad --- /dev/null +++ b/site/en/api_docs/python/tf/nn/safe_embedding_lookup_sparse.md @@ -0,0 +1,151 @@ +description: Lookup embedding results, accounting for invalid IDs and empty features. + +
+ + +
+ +# tf.nn.safe_embedding_lookup_sparse + + + + + + + + + +Lookup embedding results, accounting for invalid IDs and empty features. + + + + + + + +The partitioned embedding in `embedding_weights` must all be the same shape +except for the first dimension. The first dimension is allowed to vary as the +vocabulary size is not necessarily a multiple of `P`. `embedding_weights` +may be a `PartitionedVariable` as returned by using +tf.compat.v1.get_variable() with a +partitioner. + +Invalid IDs (< 0) are pruned from input IDs and weights, as well as any IDs +with non-positive weight. For an entry with no features, the embedding vector +for `default_id` is returned, or the 0-vector if `default_id` is not supplied. + +The ids and weights may be multi-dimensional. Embeddings are always aggregated +along the last dimension. + +Note: when doing embedding lookup on `embedding_weights`, "div" partition +strategy will be used. Support for other partition strategy will be added +later. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
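+
+A minimal sketch of the pruning behaviour (this example is not from the
+original docstring; the vocabulary, ids and shapes are made up, and it assumes
+a plain dense `Tensor` is accepted for `embedding_weights`):
+
+```python
+import tensorflow as tf
+
+embeddings = tf.constant([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])   # vocab 3, dim 2
+sparse_ids = tf.SparseTensor(
+    indices=[[0, 0], [0, 1], [1, 0]],                 # row 2 has no features
+    values=tf.constant([0, 2, -1], dtype=tf.int64),   # -1 is an invalid id
+    dense_shape=[3, 2])
+
+result = tf.nn.safe_embedding_lookup_sparse(embeddings, sparse_ids)
+print(result.numpy())
+# row 0: mean of embeddings 0 and 2 -> [2. 2.]
+# rows 1 and 2: the 0-vector, since default_id is None
+```
+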
+`embedding_weights` + +A list of `P` float `Tensor`s or values representing +partitioned embedding `Tensor`s. Alternatively, a `PartitionedVariable` +created by partitioning along dimension 0. The total unpartitioned shape +should be `[e_0, e_1, ..., e_m]`, where `e_0` represents the vocab size +and `e_1, ..., e_m` are the embedding dimensions. +
+`sparse_ids` + +`SparseTensor` of shape `[d_0, d_1, ..., d_n]` containing the +ids. `d_0` is typically batch size. +
+`sparse_weights`
+
+`SparseTensor` of same shape as `sparse_ids`, containing
+float weights corresponding to `sparse_ids`, or `None` if all weights are
+assumed to be 1.0.
+
+`combiner` + +A string specifying how to combine embedding results for each +entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean" the +default. +
+`default_id` + +The id to use for an entry with no features. +
+`max_norm` + +If not `None`, all embeddings are l2-normalized to max_norm before +combining. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+Dense `Tensor` of shape `[d_0, d_1, ..., d_{n-1}, e_1, ..., e_m]`. +
+ + + + + + + + + + + + +
+`ValueError` + +if `embedding_weights` is empty. +
+ diff --git a/site/en/api_docs/python/tf/nn/sampled_softmax_loss.md b/site/en/api_docs/python/tf/nn/sampled_softmax_loss.md new file mode 100644 index 00000000000..d98e89e0fb5 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/sampled_softmax_loss.md @@ -0,0 +1,180 @@ +description: Computes and returns the sampled softmax training loss. + +
+ + +
+ +# tf.nn.sampled_softmax_loss + + + + + + + + + +Computes and returns the sampled softmax training loss. + + + + + + + +This is a faster way to train a softmax classifier over a huge number of +classes. + +This operation is for training only. It is generally an underestimate of +the full softmax loss. + +A common use case is to use this method for training, and calculate the full +sigmoid loss for evaluation or inference as in the following example: + +```python +if mode == "train": + loss = tf.nn.sampled_softmax_loss( + weights=weights, + biases=biases, + labels=labels, + inputs=inputs, + ...) +elif mode == "eval": + logits = tf.matmul(inputs, tf.transpose(weights)) + logits = tf.nn.bias_add(logits, biases) + labels_one_hot = tf.one_hot(labels, n_classes) + loss = tf.nn.softmax_cross_entropy_with_logits( + labels=labels_one_hot, + logits=logits) +``` + +See our [Candidate Sampling Algorithms Reference] +(https://www.tensorflow.org/extras/candidate_sampling.pdf) + +Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007) +([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math. + +Note: when doing embedding lookup on `weights` and `bias`, "div" partition +strategy will be used. Support for other partition strategy will be added +later. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`weights` + +A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` +objects whose concatenation along dimension 0 has shape [num_classes, +dim]. The (possibly-sharded) class embeddings. +
+`biases` + +A `Tensor` of shape `[num_classes]`. The class biases. +
+`labels` + +A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The +target classes. Note that this format differs from the `labels` argument +of nn.softmax_cross_entropy_with_logits. +
+`inputs` + +A `Tensor` of shape `[batch_size, dim]`. The forward activations of +the input network. +
+`num_sampled` + +An `int`. The number of classes to randomly sample per batch. +
+`num_classes` + +An `int`. The number of possible classes. +
+`num_true` + +An `int`. The number of target classes per training example. +
+`sampled_values` + +a tuple of (`sampled_candidates`, `true_expected_count`, +`sampled_expected_count`) returned by a `*_candidate_sampler` function. +(if None, we default to `log_uniform_candidate_sampler`) +
+`remove_accidental_hits` + +A `bool`. whether to remove "accidental hits" +where a sampled class equals one of the target classes. Default is True. +
+`seed` + +random seed for candidate sampling. Default to None, which doesn't set +the op-level random seed for candidate sampling. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `batch_size` 1-D tensor of per-example sampled softmax losses. +
+ diff --git a/site/en/api_docs/python/tf/nn/scale_regularization_loss.md b/site/en/api_docs/python/tf/nn/scale_regularization_loss.md new file mode 100644 index 00000000000..cc32fa1036d --- /dev/null +++ b/site/en/api_docs/python/tf/nn/scale_regularization_loss.md @@ -0,0 +1,93 @@ +description: Scales the sum of the given regularization losses by number of replicas. + +
+ + +
+ +# tf.nn.scale_regularization_loss + + + + + + + + + +Scales the sum of the given regularization losses by number of replicas. + + + + + + + + + +Usage with distribution strategy and custom training loop: + +```python +with strategy.scope(): + def compute_loss(self, label, predictions): + per_example_loss = tf.keras.losses.sparse_categorical_crossentropy( + labels, predictions) + + # Compute loss that is scaled by sample_weight and by global batch size. + loss = tf.nn.compute_average_loss( + per_example_loss, + sample_weight=sample_weight, + global_batch_size=GLOBAL_BATCH_SIZE) + + # Add scaled regularization losses. + loss += tf.nn.scale_regularization_loss(tf.nn.l2_loss(weights)) + return loss +``` + + + + + + + + + + +
+`regularization_loss` + +Regularization loss. +
+ + + + + + + + + + + +
+Scalar loss value. +
+ diff --git a/site/en/api_docs/python/tf/nn/selu.md b/site/en/api_docs/python/tf/nn/selu.md new file mode 100644 index 00000000000..b803608c515 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/selu.md @@ -0,0 +1,84 @@ +description: Computes scaled exponential linear: scale * alpha * (exp(features) - 1) + +
+ + +
+
+# tf.nn.selu
+
+
+
+
+
+
+
+
+
+Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)`
+
+
+
+
+
+
+
+
+
+if `features < 0`, and `scale * features` otherwise.
+
+To be used together with
+`initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN')`.
+For correct dropout, use `tf.contrib.nn.alpha_dropout`.
+
+See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)
+
+
+
+
+
+
+
+
+
+
+
+
+
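+
+For example (illustrative; the printed values are approximate):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 1.0])
+print(tf.nn.selu(x).numpy())
+# approximately [-1.1113  0.      1.0507], with scale ~1.0507 and alpha ~1.6733
+```
+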
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
+ diff --git a/site/en/api_docs/python/tf/nn/separable_conv2d.md b/site/en/api_docs/python/tf/nn/separable_conv2d.md new file mode 100644 index 00000000000..81b9c7ce5b3 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/separable_conv2d.md @@ -0,0 +1,143 @@ +description: 2-D convolution with separable filters. + +
+ + +
+ +# tf.nn.separable_conv2d + + + + + + + + + +2-D convolution with separable filters. + + + + + + + +Performs a depthwise convolution that acts separately on channels followed by +a pointwise convolution that mixes channels. Note that this is separability +between dimensions `[1, 2]` and `3`, not spatial separability between +dimensions `1` and `2`. + +In detail, with the default NHWC format, + + output[b, i, j, k] = sum_{di, dj, q, r} + input[b, strides[1] * i + di, strides[2] * j + dj, q] * + depthwise_filter[di, dj, q, r] * + pointwise_filter[0, 0, q * channel_multiplier + r, k] + +`strides` controls the strides for the depthwise convolution only, since +the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have +`strides[0] = strides[3] = 1`. For the most common case of the same +horizontal and vertical strides, `strides = [1, stride, stride, 1]`. +If any value in `rate` is greater than 1, we perform atrous depthwise +convolution, in which case all values in the `strides` tensor must be equal +to 1. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
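+
+A shape-level sketch (not part of the original docstring; all sizes and values
+below are arbitrary):
+
+```python
+import tensorflow as tf
+
+batch, h, w, in_ch, mult, out_ch = 1, 8, 8, 3, 2, 16
+x = tf.random.uniform([batch, h, w, in_ch])
+depthwise = tf.random.uniform([3, 3, in_ch, mult])            # depth-1 filters
+pointwise = tf.random.uniform([1, 1, in_ch * mult, out_ch])   # mixes channels
+
+y = tf.nn.separable_conv2d(x, depthwise, pointwise,
+                           strides=[1, 1, 1, 1], padding='SAME')
+print(y.shape)   # (1, 8, 8, 16)
+```
+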
+`input` + +4-D `Tensor` with shape according to `data_format`. +
+`depthwise_filter` + +4-D `Tensor` with shape `[filter_height, filter_width, +in_channels, channel_multiplier]`. Contains `in_channels` convolutional +filters of depth 1. +
+`pointwise_filter` + +4-D `Tensor` with shape `[1, 1, channel_multiplier * +in_channels, out_channels]`. Pointwise filter to mix channels after +`depthwise_filter` has convolved spatially. +
+`strides` + +1-D of size 4. The strides for the depthwise convolution for each +dimension of `input`. +
+`padding` + +A string, either `'VALID'` or `'SAME'`. The padding algorithm. See +the "returns" section of tf.nn.convolution for details. +
+`data_format` + +The data format for input. Either "NHWC" (default) or "NCHW". +
+`dilations` + +1-D of size 2. The dilation rate in which we sample input values +across the `height` and `width` dimensions in atrous convolution. If it is +greater than 1, then all values of strides must be 1. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+A 4-D `Tensor` with shape according to 'data_format'. For +example, with data_format="NHWC", shape is [batch, out_height, +out_width, out_channels]. +
+ diff --git a/site/en/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits.md b/site/en/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits.md new file mode 100644 index 00000000000..3451f2b79cb --- /dev/null +++ b/site/en/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits.md @@ -0,0 +1,122 @@ +description: Computes sigmoid cross entropy given logits. + +
+ + +
+ +# tf.nn.sigmoid_cross_entropy_with_logits + + + + + + + + + +Computes sigmoid cross entropy given `logits`. + + + + + + + +Measures the probability error in discrete classification tasks in which each +class is independent and not mutually exclusive. For instance, one could +perform multilabel classification where a picture can contain both an elephant +and a dog at the same time. + +For brevity, let `x = logits`, `z = labels`. The logistic loss is + + z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x)) + = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x))) + = z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x))) + = z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)) + = (1 - z) * x + log(1 + exp(-x)) + = x - x * z + log(1 + exp(-x)) + +For x < 0, to avoid overflow in exp(-x), we reformulate the above + + x - x * z + log(1 + exp(-x)) + = log(exp(x)) - x * z + log(1 + exp(-x)) + = - x * z + log(1 + exp(x)) + +Hence, to ensure stability and avoid overflow, the implementation uses this +equivalent formulation + + max(x, 0) - x * z + log(1 + exp(-abs(x))) + +`logits` and `labels` must have the same type and shape. + + + + + + + + + + + + + + + + +
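+
+An illustrative check of the stable formulation (not part of the original
+docstring; the inputs are arbitrary):
+
+```python
+import tensorflow as tf
+
+logits = tf.constant([-2.0, 0.5, 3.0])
+labels = tf.constant([0.0, 1.0, 1.0])
+
+loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
+manual = (tf.maximum(logits, 0.0) - logits * labels +
+          tf.math.log(1.0 + tf.exp(-tf.abs(logits))))
+print(loss.numpy())
+print(manual.numpy())   # same values up to float rounding
+```
+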
+`labels` + +A `Tensor` of the same type and shape as `logits`. +
+`logits` + +A `Tensor` of type `float32` or `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of the same shape as `logits` with the componentwise +logistic losses. +
+ + + + + + + + + + + + +
+`ValueError` + +If `logits` and `labels` do not have the same shape. +
+ diff --git a/site/en/api_docs/python/tf/nn/softmax.md b/site/en/api_docs/python/tf/nn/softmax.md new file mode 100644 index 00000000000..7914b1f9c74 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/softmax.md @@ -0,0 +1,109 @@ +description: Computes softmax activations. + +
+ + +
+ +# tf.nn.softmax + + + + + + + + + +Computes softmax activations. + + + + + + + + + +This function performs the equivalent of + + softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis) + + + + + + + + + + + + + + + + +
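+
+For example (illustrative; the printed values are rounded):
+
+```python
+import tensorflow as tf
+
+logits = tf.constant([[2.0, 1.0, 0.1]])
+probs = tf.nn.softmax(logits)
+print(probs.numpy())                           # approx. [[0.659 0.242 0.099]]
+print(tf.reduce_sum(probs, axis=-1).numpy())   # rows sum to 1: [1.]
+```
+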
+`logits` + +A non-empty `Tensor`. Must be one of the following types: `half`, +`float32`, `float64`. +
+`axis` + +The dimension softmax would be performed on. The default is -1 which +indicates the last dimension. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type and shape as `logits`. +
+ + + + + + + + + + + + +
+`InvalidArgumentError` + +if `logits` is empty or `axis` is beyond the last +dimension of `logits`. +
+ diff --git a/site/en/api_docs/python/tf/nn/softmax_cross_entropy_with_logits.md b/site/en/api_docs/python/tf/nn/softmax_cross_entropy_with_logits.md new file mode 100644 index 00000000000..4342b827c52 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/softmax_cross_entropy_with_logits.md @@ -0,0 +1,130 @@ +description: Computes softmax cross entropy between logits and labels. + +
+ + +
+ +# tf.nn.softmax_cross_entropy_with_logits + + + + + + + + + +Computes softmax cross entropy between `logits` and `labels`. + + + + + + + +Measures the probability error in discrete classification tasks in which the +classes are mutually exclusive (each entry is in exactly one class). For +example, each CIFAR-10 image is labeled with one and only one label: an image +can be a dog or a truck, but not both. + +**NOTE:** While the classes are mutually exclusive, their probabilities +need not be. All that is required is that each row of `labels` is +a valid probability distribution. If they are not, the computation of the +gradient will be incorrect. + +If using exclusive `labels` (wherein one and only +one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`. + +#### Usage: + + +>>> logits = [[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]] +>>> labels = [[1.0, 0.0, 0.0], [0.0, 0.8, 0.2]] +>>> tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits) + + +**WARNING:** This op expects unscaled logits, since it performs a `softmax` +on `logits` internally for efficiency. Do not call this op with the +output of `softmax`, as it will produce incorrect results. + +A common use case is to have logits and labels of shape +`[batch_size, num_classes]`, but higher dimensions are supported, with +the `axis` argument specifying the class dimension. + +`logits` and `labels` must have the same dtype (either `float16`, `float32`, +or `float64`). + +Backpropagation will happen into both `logits` and `labels`. To disallow +backpropagation into `labels`, pass label tensors through tf.stop_gradient +before feeding it to this function. + +**Note that to avoid confusion, it is required to pass only named arguments to +this function.** + + + + + + + + + + + + + + + + + + + +
+`labels` + +Each vector along the class dimension should hold a valid +probability distribution e.g. for the case in which labels are of shape +`[batch_size, num_classes]`, each row of `labels[i]` must be a valid +probability distribution. +
+`logits` + +Per-label activations, typically a linear output. These activation +energies are interpreted as unnormalized log probabilities. +
+`axis` + +The class dimension. Defaulted to -1 which is the last dimension. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` that contains the softmax cross entropy loss. Its type is the +same as `logits` and its shape is the same as `labels` except that it does +not have the last dimension of `labels`. +
+ diff --git a/site/en/api_docs/python/tf/nn/softsign.md b/site/en/api_docs/python/tf/nn/softsign.md new file mode 100644 index 00000000000..e292d1cbbf1 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/softsign.md @@ -0,0 +1,80 @@ +description: Computes softsign: features / (abs(features) + 1). + +
+ + +
+ +# tf.nn.softsign + + + + + + + + + +Computes softsign: `features / (abs(features) + 1)`. + + + + + + + + + + + + + + + + + + + + + + +
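+
+For example (illustrative only):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-3.0, 0.0, 3.0])
+print(tf.nn.softsign(x).numpy())   # [-0.75  0.    0.75]
+```
+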
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
+ diff --git a/site/en/api_docs/python/tf/nn/space_to_depth.md b/site/en/api_docs/python/tf/nn/space_to_depth.md new file mode 100644 index 00000000000..a20d48432c4 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/space_to_depth.md @@ -0,0 +1,168 @@ +description: SpaceToDepth for tensors of type T. + +
+ + +
+ +# tf.nn.space_to_depth + + + + + + + + + +SpaceToDepth for tensors of type T. + + + + + + + +Rearranges blocks of spatial data, into depth. More specifically, +this op outputs a copy of the input tensor where values from the `height` +and `width` dimensions are moved to the `depth` dimension. +The attr `block_size` indicates the input block size. + + * Non-overlapping blocks of size `block_size x block size` are rearranged + into depth at each location. + * The depth of the output tensor is `block_size * block_size * input_depth`. + * The Y, X coordinates within each block of the input become the high order + component of the output channel index. + * The input tensor's height and width must be divisible by block_size. + +The `data_format` attr specifies the layout of the input and output tensors +with the following options: + "NHWC": `[ batch, height, width, channels ]` + "NCHW": `[ batch, channels, height, width ]` + "NCHW_VECT_C": + `qint8 [ batch, channels / 4, height, width, 4 ]` + +It is useful to consider the operation as transforming a 6-D Tensor. +e.g. for data_format = NHWC, + Each element in the input tensor can be specified via 6 coordinates, + ordered by decreasing memory layout significance as: + n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates + within the output image, bX, bY means coordinates + within the input block, iC means input channels). + The output would be a transpose to the following layout: + n,oY,oX,bY,bX,iC + +This operation is useful for resizing the activations between convolutions +(but keeping all data), e.g. instead of pooling. It is also useful for training +purely convolutional models. + +For example, given an input of shape `[1, 2, 2, 1]`, data_format = "NHWC" and +block_size = 2: + +``` +x = [[[[1], [2]], + [[3], [4]]]] +``` + +This operation will output a tensor of shape `[1, 1, 1, 4]`: + +``` +[[[[1, 2, 3, 4]]]] +``` + +Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`, +the corresponding output will have a single element (i.e. width and height are +both 1) and will have a depth of 4 channels (1 * block_size * block_size). +The output element shape is `[1, 1, 4]`. + +For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g. + +``` +x = [[[[1, 2, 3], [4, 5, 6]], + [[7, 8, 9], [10, 11, 12]]]] +``` + +This operation, for block_size of 2, will return the following tensor of shape +`[1, 1, 1, 12]` + +``` +[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]] +``` + +Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2: + +``` +x = [[[[1], [2], [5], [6]], + [[3], [4], [7], [8]], + [[9], [10], [13], [14]], + [[11], [12], [15], [16]]]] +``` + +the operator will return the following tensor of shape `[1 2 2 4]`: + +``` +x = [[[[1, 2, 3, 4], + [5, 6, 7, 8]], + [[9, 10, 11, 12], + [13, 14, 15, 16]]]] +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`block_size` + +An `int` that is `>= 2`. The size of the spatial block. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW", "NCHW_VECT_C"`. Defaults to `"NHWC"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits.md b/site/en/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits.md new file mode 100644 index 00000000000..ad4cbc5ae5c --- /dev/null +++ b/site/en/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits.md @@ -0,0 +1,127 @@ +description: Computes sparse softmax cross entropy between logits and labels. + +
+ + +
+ +# tf.nn.sparse_softmax_cross_entropy_with_logits + + + + + + + + + +Computes sparse softmax cross entropy between `logits` and `labels`. + + + + + + + +Measures the probability error in discrete classification tasks in which the +classes are mutually exclusive (each entry is in exactly one class). For +example, each CIFAR-10 image is labeled with one and only one label: an image +can be a dog or a truck, but not both. + +**NOTE:** For this operation, the probability of a given label is considered +exclusive. That is, soft classes are not allowed, and the `labels` vector +must provide a single specific index for the true class for each row of +`logits` (each minibatch entry). For soft softmax classification with +a probability distribution for each entry, see +`softmax_cross_entropy_with_logits_v2`. + +**WARNING:** This op expects unscaled logits, since it performs a `softmax` +on `logits` internally for efficiency. Do not call this op with the +output of `softmax`, as it will produce incorrect results. + +A common use case is to have logits of shape +`[batch_size, num_classes]` and have labels of shape +`[batch_size]`, but higher dimensions are supported, in which +case the `dim`-th dimension is assumed to be of size `num_classes`. +`logits` must have the dtype of `float16`, `float32`, or `float64`, and +`labels` must have the dtype of `int32` or `int64`. + +**Note that to avoid confusion, it is required to pass only named arguments to +this function.** + + + + + + + + + + + + + + + + +
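+
+An illustrative sketch with integer class labels (not part of the original
+docstring; the printed values are approximate):
+
+```python
+import tensorflow as tf
+
+logits = tf.constant([[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]])
+labels = tf.constant([0, 1])   # one true class index per row, no one-hot encoding
+
+loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
+                                                      logits=logits)
+print(loss.numpy())   # approx. [0.1698 0.0247]
+```
+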
+`labels` + +`Tensor` of shape `[d_0, d_1, ..., d_{r-1}]` (where `r` is rank of +`labels` and result) and dtype `int32` or `int64`. Each entry in `labels` +must be an index in `[0, num_classes)`. Other values will raise an +exception when this op is run on CPU, and return `NaN` for corresponding +loss and gradient rows on GPU. +
+`logits` + +Unscaled log probabilities of shape `[d_0, d_1, ..., d_{r-1}, +num_classes]` and dtype `float16`, `float32`, or `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of the same shape as `labels` and of the same type as `logits` +with the softmax cross entropy loss. +
+ + + + + + + + + + + + +
+`ValueError` + +If logits are scalars (need to have rank >= 1) or if the rank +of the labels is not equal to the rank of the logits minus one. +
+ diff --git a/site/en/api_docs/python/tf/nn/sufficient_statistics.md b/site/en/api_docs/python/tf/nn/sufficient_statistics.md new file mode 100644 index 00000000000..b5c8a5445d5 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/sufficient_statistics.md @@ -0,0 +1,102 @@ +description: Calculate the sufficient statistics for the mean and variance of x. + +
+ + +
+ +# tf.nn.sufficient_statistics + + + + + + + + + +Calculate the sufficient statistics for the mean and variance of `x`. + + + + + + + +These sufficient statistics are computed using the one pass algorithm on +an input that's optionally shifted. See: +https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data + + + + + + + + + + + + + + + + + + + + + + +
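+
+As an illustration (not part of the original docstring), combining this op with
+tf.nn.normalize_moments reproduces tf.nn.moments:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
+counts, mean_ss, var_ss, shift = tf.nn.sufficient_statistics(x, axes=[0])
+mean, variance = tf.nn.normalize_moments(counts, mean_ss, var_ss, shift)
+print(mean.numpy(), variance.numpy())   # [3. 4.] [2.6666667 2.6666667]
+```
+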
+`x` + +A `Tensor`. +
+`axes` + +Array of ints. Axes along which to compute mean and variance. +
+`shift` + +A `Tensor` containing the value by which to shift the data for +numerical stability, or `None` if no shift is to be performed. A shift +close to the true mean provides the most numerically stable results. +
+`keepdims` + +produce statistics with the same dimensionality as the input. +
+`name` + +Name used to scope the operations that compute the sufficient stats. +
+ + + + + + + + + + + +
+Four `Tensor` objects of the same type as `x`: + +* the count (number of elements to average over). +* the (possibly shifted) sum of the elements in the array. +* the (possibly shifted) sum of squares of the elements in the array. +* the shift by which the mean must be corrected or None if `shift` is None. +
+ diff --git a/site/en/api_docs/python/tf/nn/swish.md b/site/en/api_docs/python/tf/nn/swish.md new file mode 100644 index 00000000000..9baa215a126 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/swish.md @@ -0,0 +1,84 @@ +description: Computes the Swish activation function: x * sigmoid(x). + +
+ + +
+ +# tf.nn.swish + + + + + + + + + +Computes the Swish activation function: `x * sigmoid(x)`. + + + + + + + + + +Source: "Searching for Activation Functions" (Ramachandran et al. 2017) +https://arxiv.org/abs/1710.05941 + + + + + + + + + + + + + +
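+
+For example (illustrative; the printed values are approximate):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 1.0])
+print(tf.nn.swish(x).numpy())          # approx. [-0.2689  0.      0.7311]
+print((x * tf.sigmoid(x)).numpy())     # same values
+```
+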
+`features` + +A `Tensor` representing preactivation values. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The activation value. +
+ diff --git a/site/en/api_docs/python/tf/nn/weighted_cross_entropy_with_logits.md b/site/en/api_docs/python/tf/nn/weighted_cross_entropy_with_logits.md new file mode 100644 index 00000000000..74cd20bddc7 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/weighted_cross_entropy_with_logits.md @@ -0,0 +1,139 @@ +description: Computes a weighted cross entropy. + +
+ + +
+ +# tf.nn.weighted_cross_entropy_with_logits + + + + + + + + + +Computes a weighted cross entropy. + + + + + + + +This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight`, +allows one to trade off recall and precision by up- or down-weighting the +cost of a positive error relative to a negative error. + +The usual cross-entropy cost is defined as: + + labels * -log(sigmoid(logits)) + + (1 - labels) * -log(1 - sigmoid(logits)) + +A value `pos_weight > 1` decreases the false negative count, hence increasing +the recall. +Conversely setting `pos_weight < 1` decreases the false positive count and +increases the precision. +This can be seen from the fact that `pos_weight` is introduced as a +multiplicative coefficient for the positive labels term +in the loss expression: + + labels * -log(sigmoid(logits)) * pos_weight + + (1 - labels) * -log(1 - sigmoid(logits)) + +For brevity, let `x = logits`, `z = labels`, `q = pos_weight`. +The loss is: + + qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x)) + = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x))) + = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x))) + = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)) + = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x)) + = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x)) + +Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow, +the implementation uses + + (1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0)) + +`logits` and `labels` must have the same type and shape. + + + + + + + + + + + + + + + + + + + +
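+
+An illustrative check against the stable formulation above (not part of the
+original docstring; the inputs are arbitrary):
+
+```python
+import tensorflow as tf
+
+labels = tf.constant([1.0, 0.0, 1.0])
+logits = tf.constant([0.5, -0.5, -2.0])
+
+loss = tf.nn.weighted_cross_entropy_with_logits(labels=labels, logits=logits,
+                                                pos_weight=2.0)
+l = 1.0 + (2.0 - 1.0) * labels
+manual = (1.0 - labels) * logits + l * (
+    tf.math.log(1.0 + tf.exp(-tf.abs(logits))) + tf.maximum(-logits, 0.0))
+print(loss.numpy())
+print(manual.numpy())   # same values up to float rounding
+```
+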
+`labels` + +A `Tensor` of the same type and shape as `logits`. +
+`logits` + +A `Tensor` of type `float32` or `float64`. +
+`pos_weight` + +A coefficient to use on the positive examples. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of the same shape as `logits` with the componentwise +weighted logistic losses. +
+ + + + + + + + + + + + +
+`ValueError` + +If `logits` and `labels` do not have the same shape. +
+ diff --git a/site/en/api_docs/python/tf/nn/weighted_moments.md b/site/en/api_docs/python/tf/nn/weighted_moments.md new file mode 100644 index 00000000000..6312b2298e9 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/weighted_moments.md @@ -0,0 +1,94 @@ +description: Returns the frequency-weighted mean and variance of x. + +
+ + +
+ +# tf.nn.weighted_moments + + + + + + + + + +Returns the frequency-weighted mean and variance of `x`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
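+
+For example (illustrative values; frequency weights of 2, 1, 1 count the first
+element twice):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0, 3.0])
+w = tf.constant([2.0, 1.0, 1.0])
+mean, variance = tf.nn.weighted_moments(x, axes=[0], frequency_weights=w)
+print(mean.numpy(), variance.numpy())   # 1.75 0.6875
+```
+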
+`x` + +A tensor. +
+`axes` + +1-d tensor of int32 values; these are the axes along which +to compute mean and variance. +
+`frequency_weights` + +A tensor of positive weights which can be +broadcast with x. +
+`keepdims` + +Produce moments with the same dimensionality as the input. +
+`name` + +Name used to scope the operation. +
+ + + + + + + + + + + +
+Two tensors: `weighted_mean` and `weighted_variance`. +
+ diff --git a/site/en/api_docs/python/tf/nn/with_space_to_batch.md b/site/en/api_docs/python/tf/nn/with_space_to_batch.md new file mode 100644 index 00000000000..ef113fb5ca9 --- /dev/null +++ b/site/en/api_docs/python/tf/nn/with_space_to_batch.md @@ -0,0 +1,256 @@ +description: Performs op on the space-to-batch representation of input. + +
+ + +
+ +# tf.nn.with_space_to_batch + + + + + + + + + +Performs `op` on the space-to-batch representation of `input`. + + + + + + + + + +This has the effect of transforming sliding window operations into the +corresponding "atrous" operation in which the input is sampled at the +specified `dilation_rate`. + +In the special case that `dilation_rate` is uniformly 1, this simply returns: + + op(input, num_spatial_dims, padding) + +Otherwise, it returns: + + batch_to_space_nd( + op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings), + num_spatial_dims, + "VALID") + adjusted_dilation_rate, + adjusted_crops), + +where: + + adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], + adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2] + +defined as follows: + +We first define two int64 tensors `paddings` and `crops` of shape +`[num_spatial_dims, 2]` based on the value of `padding` and the spatial +dimensions of the `input`: + +If `padding = "VALID"`, then: + + paddings, crops = required_space_to_batch_paddings( + input_shape[spatial_dims], + dilation_rate) + +If `padding = "SAME"`, then: + + dilated_filter_shape = + filter_shape + (filter_shape - 1) * (dilation_rate - 1) + + paddings, crops = required_space_to_batch_paddings( + input_shape[spatial_dims], + dilation_rate, + [(dilated_filter_shape - 1) // 2, + dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2]) + +Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial +dimensions are contiguous starting at the second dimension, but the specified +`spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and +`crops` in order to be usable with these operations. For a given dimension, +if the block size is 1, and both the starting and ending padding and crop +amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, +which is what is needed for dimensions not part of `spatial_dims`. +Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case +efficiently for any number of leading and trailing dimensions. + +For 0 <= i < len(spatial_dims), we assign: + + adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i] + adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :] + adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :] + +All unassigned values of `adjusted_dilation_rate` default to 1, while all +unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0. + +Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID" +padding is equivalent to specifying `padding = "SAME"` with a filter_shape of +`[1]*N`. + +Advanced usage. Note the following optimization: A sequence of +`with_space_to_batch` operations with identical (not uniformly 1) +`dilation_rate` parameters and "VALID" padding + + net = with_space_to_batch(net, dilation_rate, "VALID", op_1) + ... + net = with_space_to_batch(net, dilation_rate, "VALID", op_k) + +can be combined into a single `with_space_to_batch` operation as follows: + + def combined_op(converted_input, num_spatial_dims, _): + result = op_1(converted_input, num_spatial_dims, "VALID") + ... + result = op_k(result, num_spatial_dims, "VALID") + + net = with_space_to_batch(net, dilation_rate, "VALID", combined_op) + +This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and +`batch_to_space_nd`. 
+ +Similarly, a sequence of `with_space_to_batch` operations with identical (not +uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter +dimensions + + net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1) + ... + net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k) + +can be combined into a single `with_space_to_batch` operation as follows: + + def combined_op(converted_input, num_spatial_dims, _): + result = op_1(converted_input, num_spatial_dims, "SAME") + ... + result = op_k(result, num_spatial_dims, "SAME") + + net = with_space_to_batch(net, dilation_rate, "VALID", combined_op) + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +Tensor of rank > max(spatial_dims). +
+`dilation_rate` + +int32 Tensor of *known* shape [num_spatial_dims]. +
+`padding` + +str constant equal to "VALID" or "SAME" +
+`op` + +Function that maps (input, num_spatial_dims, padding) -> output +
+`filter_shape` + +If padding = "SAME", specifies the shape of the convolution +kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. +If padding = "VALID", filter_shape is ignored and need not be specified. +
+`spatial_dims` + +Monotonically increasing sequence of `num_spatial_dims` +integers (which are >= 1) specifying the spatial dimensions of `input` +and output. Defaults to: `range(1, num_spatial_dims+1)`. +
+`data_format` + +A string or None. Specifies whether the channel dimension of +the `input` and output is the last dimension (default, or if `data_format` +does not start with "NC"), or the second dimension (if `data_format` +starts with "NC"). For N=1, the valid values are "NWC" (default) and +"NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". +For N=3, the valid values are "NDHWC" (default) and "NCDHW". +
+ + + + + + + + + + + +
The output Tensor as described above; its dimensions will vary based on the `op` provided. 
+ + + + + + + + + + + + + + + +
+`ValueError` + +if `padding` is invalid or the arguments are incompatible. +
+`ValueError` + +if `spatial_dims` are invalid. +
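As a concrete illustration, here is a minimal sketch (the input shape, the `filters` constant and the `conv_op` helper are illustrative, not part of this API) of using `with_space_to_batch` to turn an ordinary 2-D convolution into its atrous counterpart:

```python
import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])          # NHWC input
filters = tf.random.normal([3, 3, 3, 16])   # 3x3 kernel, 3 in / 16 out channels

def conv_op(converted_input, num_spatial_dims, padding):
  # A plain, non-dilated convolution applied to the space-to-batch
  # representation of the input.
  return tf.nn.conv2d(converted_input, filters, strides=1, padding=padding)

# Behaves like a 2-D convolution with dilation_rate=[2, 2].
y = tf.nn.with_space_to_batch(
    x, dilation_rate=[2, 2], padding="VALID", op=conv_op)
```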
+ diff --git a/site/en/api_docs/python/tf/no_gradient.md b/site/en/api_docs/python/tf/no_gradient.md new file mode 100644 index 00000000000..7a2e200dea2 --- /dev/null +++ b/site/en/api_docs/python/tf/no_gradient.md @@ -0,0 +1,95 @@ +description: Specifies that ops of type op_type are not differentiable. + +
+ + +
# tf.no_gradient

Specifies that ops of type `op_type` are not differentiable.

This function should *not* be used for operations that have a
well-defined gradient that is not yet implemented.

This function is only used when defining a new op type. It may be
used for ops such as tf.size() that are not differentiable. For
example:

```python
tf.no_gradient("Size")
```

The gradient computed for 'op_type' will then propagate zeros.

For ops that have a well-defined gradient but are not yet implemented,
no declaration should be made, and an error *must* be thrown if
an attempt to request its gradient is made.


+`op_type` + +The string type of an operation. This corresponds to the +`OpDef.name` field for the proto that defines the operation. +
+ + + + + + + + + + + + +
+`TypeError` + +If `op_type` is not a string. +
+ diff --git a/site/en/api_docs/python/tf/no_op.md b/site/en/api_docs/python/tf/no_op.md new file mode 100644 index 00000000000..aa907dea47d --- /dev/null +++ b/site/en/api_docs/python/tf/no_op.md @@ -0,0 +1,70 @@ +description: Does nothing. Only useful as a placeholder for control edges. + +
+ + +
+ +# tf.no_op + + + + + + + + + +Does nothing. Only useful as a placeholder for control edges. + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
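A minimal sketch of the typical use, as an anchor for a control dependency inside a graph (the function `f` is illustrative):

```python
import tensorflow as tf

@tf.function
def f(x):
  # tf.no_op produces no output; it only serves as a node that control
  # edges can point to.
  barrier = tf.no_op(name="barrier")
  with tf.control_dependencies([barrier]):
    return x + 1

print(f(tf.constant(1)))  # tf.Tensor(2, shape=(), dtype=int32)
```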
+ diff --git a/site/en/api_docs/python/tf/nondifferentiable_batch_function.md b/site/en/api_docs/python/tf/nondifferentiable_batch_function.md new file mode 100644 index 00000000000..30f019fea44 --- /dev/null +++ b/site/en/api_docs/python/tf/nondifferentiable_batch_function.md @@ -0,0 +1,137 @@ +description: Batches the computation done by the decorated function. + +
+ + +
+ +# tf.nondifferentiable_batch_function + + + + + + + + + +Batches the computation done by the decorated function. + + + + + + + + + +So, for example, in the following code + +```python +@batch_function(1, 2, 3) +def layer(a): + return tf.matmul(a, a) + +b = layer(w) +``` + +if more than one session.run call is simultaneously trying to compute `b` +the values of `w` will be gathered, non-deterministically concatenated +along the first axis, and only one thread will run the computation. See the +documentation of the `Batch` op for more details. + +Assumes that all arguments of the decorated function are Tensors which will +be batched along their first dimension. + +SparseTensor is not supported. The return value of the decorated function +must be a Tensor or a list/tuple of Tensors. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_batch_threads` + +Number of scheduling threads for processing batches +of work. Determines the number of batches processed in parallel. +
+`max_batch_size` + +Batch sizes will never be bigger than this. +
+`batch_timeout_micros` + +Maximum number of microseconds to wait before +outputting an incomplete batch. +
+`allowed_batch_sizes` + +Optional list of allowed batch sizes. If left empty, +does nothing. Otherwise, supplies a list of batch sizes, causing the op +to pad batches up to one of those sizes. The entries must increase +monotonically, and the final entry must equal max_batch_size. +
+`max_enqueued_batches` + +The maximum depth of the batch queue. Defaults to 10. +
+`autograph` + +Whether to use autograph to compile python and eager style code +for efficient graph-mode execution. +
+ + + + + + + + + + + +
+The decorated function will return the unbatched computation output Tensors. +
+ diff --git a/site/en/api_docs/python/tf/norm.md b/site/en/api_docs/python/tf/norm.md new file mode 100644 index 00000000000..2f5e530a9f7 --- /dev/null +++ b/site/en/api_docs/python/tf/norm.md @@ -0,0 +1,163 @@ +description: Computes the norm of vectors, matrices, and tensors. + +
+ + +
+ +# tf.norm + + + + + + + + + +Computes the norm of vectors, matrices, and tensors. + + + + + + + + + +This function can compute several different vector norms (the 1-norm, the +Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and +matrix norms (Frobenius, 1-norm, 2-norm and inf-norm). + + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +`Tensor` of types `float32`, `float64`, `complex64`, `complex128` +
`ord` 

Order of the norm. Supported values are `'fro'`, `'euclidean'`,
`1`, `2`, `np.inf` and any positive real number yielding the corresponding
p-norm. Default is `'euclidean'` which is equivalent to Frobenius norm if
`tensor` is a matrix and equivalent to 2-norm for vectors.
Some restrictions apply:
a) The Frobenius norm `'fro'` is not defined for vectors,
b) If axis is a 2-tuple (matrix norm), only `'euclidean'`, `'fro'`, `1`,
`2`, `np.inf` are supported.
See the description of `axis` on how to compute norms for a batch of
vectors or matrices stored in a tensor.

+`axis` + +If `axis` is `None` (the default), the input is considered a vector +and a single vector norm is computed over the entire set of values in the +tensor, i.e. `norm(tensor, ord=ord)` is equivalent to +`norm(reshape(tensor, [-1]), ord=ord)`. +If `axis` is a Python integer, the input is considered a batch of vectors, +and `axis` determines the axis in `tensor` over which to compute vector +norms. +If `axis` is a 2-tuple of Python integers it is considered a batch of +matrices and `axis` determines the axes in `tensor` over which to compute +a matrix norm. +Negative indices are supported. Example: If you are passing a tensor that +can be either a matrix or a batch of matrices at runtime, pass +`axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are +computed. +
`keepdims` 

If True, the axes indicated in `axis` are kept with size 1.
Otherwise, the dimensions in `axis` are removed from the output shape.

+`name` + +The name of the op. +
+ + + + + + + + + + + + +
`output` 

A `Tensor` of the same type as `tensor`, containing the vector or
matrix norms. If `keepdims` is True then the rank of `output` is equal to
the rank of `tensor`. Otherwise, if `axis` is `None` the output is a scalar;
if `axis` is an integer, the rank of `output` is one less than the rank
of `tensor`; if `axis` is a 2-tuple the rank of `output` is two less
than the rank of `tensor`.

+ + + + + + + + + + + + +
+`ValueError` + +If `ord` or `axis` is invalid. +
+ + + + +#### Numpy Compatibility +Mostly equivalent to numpy.linalg.norm. +Not supported: ord <= 0, 2-norm for matrices, nuclear norm. +Other differences: + a) If axis is `None`, treats the flattened `tensor` as a vector + regardless of rank. + b) Explicitly supports 'euclidean' norm as the default, including for + higher order tensors. + diff --git a/site/en/api_docs/python/tf/numpy_function.md b/site/en/api_docs/python/tf/numpy_function.md new file mode 100644 index 00000000000..dbaa791c40b --- /dev/null +++ b/site/en/api_docs/python/tf/numpy_function.md @@ -0,0 +1,151 @@ +description: Wraps a python function and uses it as a TensorFlow op. + +
+ + +
# tf.numpy_function

Wraps a python function and uses it as a TensorFlow op.

Given a python function `func`, wrap this function as an operation in a
TensorFlow function. `func` must take numpy arrays as its arguments and
return numpy arrays as its outputs.

The following example creates a TensorFlow graph with `np.sinh()` as an
operation in the graph:

```
>>> def my_numpy_func(x):
...   # x will be a numpy array with the contents of the input to the
...   # tf.function
...   return np.sinh(x)
>>> @tf.function(input_signature=[tf.TensorSpec(None, tf.float32)])
... def tf_function(input):
...   y = tf.numpy_function(my_numpy_func, [input], tf.float32)
...   return y * y
>>> tf_function(tf.constant(1.))

```

Comparison to tf.py_function:
tf.py_function and tf.numpy_function are very similar, except that
tf.numpy_function takes numpy arrays, and not tf.Tensors. If you want the
function to contain `tf.Tensors`, and have any TensorFlow operations executed
in the function be differentiable, please use tf.py_function.

Note: The tf.numpy_function operation has the following known
limitations:

* The body of the function (i.e. `func`) will not be serialized in a
  `tf.SavedModel`. Therefore, you should not use this function if you need to
  serialize your model and restore it in a different environment.

* The operation must run in the same address space as the Python program
  that calls tf.numpy_function(). If you are using distributed
  TensorFlow, you must run a tf.distribute.Server in the same process as the
  program that calls tf.numpy_function and you must pin the created
  operation to a device in that server (e.g. using `with tf.device():`).

* Since the function takes numpy arrays, you cannot take gradients
  through a numpy_function. If you require something that is differentiable,
  please consider using tf.py_function.

* The resulting function is assumed stateful and will never be optimized.


`func` 

A Python function, which accepts `numpy.ndarray` objects as arguments
and returns a list of `numpy.ndarray` objects (or a single
`numpy.ndarray`). This function must accept as many arguments as there are
tensors in `inp`, and these argument types will match the corresponding
tf.Tensor objects in `inp`. The returned `numpy.ndarray`s must match the
number and types defined in `Tout`.
Important Note: Input and output `numpy.ndarray`s of `func` are not
guaranteed to be copies. In some cases their underlying memory will be
shared with the corresponding TensorFlow tensors. In-place modification
or storing `func` input or return values in python data structures
without explicit (np.)copy can have non-deterministic consequences.

+`inp` + +A list of tf.Tensor objects. +
+`Tout` + +A list or tuple of tensorflow data types or a single tensorflow data +type if there is only one, indicating what `func` returns. +
+`name` + +(Optional) A name for the operation. +
+ + + + + + + + + + + +
+Single or list of tf.Tensor which `func` computes. +
+ diff --git a/site/en/api_docs/python/tf/one_hot.md b/site/en/api_docs/python/tf/one_hot.md new file mode 100644 index 00000000000..152c405e69d --- /dev/null +++ b/site/en/api_docs/python/tf/one_hot.md @@ -0,0 +1,232 @@ +description: Returns a one-hot tensor. + +
+ + +
+ +# tf.one_hot + + + + + + + + + +Returns a one-hot tensor. + + + + + + + + + +The locations represented by indices in `indices` take value `on_value`, +while all other locations take value `off_value`. + +`on_value` and `off_value` must have matching data types. If `dtype` is also +provided, they must be the same data type as specified by `dtype`. + +If `on_value` is not provided, it will default to the value `1` with type +`dtype` + +If `off_value` is not provided, it will default to the value `0` with type +`dtype` + +If the input `indices` is rank `N`, the output will have rank `N+1`. The +new axis is created at dimension `axis` (default: the new axis is appended +at the end). + +If `indices` is a scalar the output shape will be a vector of length `depth` + +If `indices` is a vector of length `features`, the output shape will be: + +``` + features x depth if axis == -1 + depth x features if axis == 0 +``` + +If `indices` is a matrix (batch) with shape `[batch, features]`, the output +shape will be: + +``` + batch x features x depth if axis == -1 + batch x depth x features if axis == 1 + depth x batch x features if axis == 0 +``` + +If `indices` is a RaggedTensor, the 'axis' argument must be positive and refer +to a non-ragged axis. The output will be equivalent to applying 'one_hot' on +the values of the RaggedTensor, and creating a new RaggedTensor from the +result. + +If `dtype` is not provided, it will attempt to assume the data type of +`on_value` or `off_value`, if one or both are passed in. If none of +`on_value`, `off_value`, or `dtype` are provided, `dtype` will default to the +value tf.float32. + +Note: If a non-numeric data type output is desired (tf.string, tf.bool, +etc.), both `on_value` and `off_value` _must_ be provided to `one_hot`. + +#### For example: + + + +```python +indices = [0, 1, 2] +depth = 3 +tf.one_hot(indices, depth) # output: [3 x 3] +# [[1., 0., 0.], +# [0., 1., 0.], +# [0., 0., 1.]] + +indices = [0, 2, -1, 1] +depth = 3 +tf.one_hot(indices, depth, + on_value=5.0, off_value=0.0, + axis=-1) # output: [4 x 3] +# [[5.0, 0.0, 0.0], # one_hot(0) +# [0.0, 0.0, 5.0], # one_hot(2) +# [0.0, 0.0, 0.0], # one_hot(-1) +# [0.0, 5.0, 0.0]] # one_hot(1) + +indices = [[0, 2], [1, -1]] +depth = 3 +tf.one_hot(indices, depth, + on_value=1.0, off_value=0.0, + axis=-1) # output: [2 x 2 x 3] +# [[[1.0, 0.0, 0.0], # one_hot(0) +# [0.0, 0.0, 1.0]], # one_hot(2) +# [[0.0, 1.0, 0.0], # one_hot(1) +# [0.0, 0.0, 0.0]]] # one_hot(-1) + +indices = tf.ragged.constant([[0, 1], [2]]) +depth = 3 +tf.one_hot(indices, depth) # output: [2 x None x 3] +# [[[1., 0., 0.], +# [0., 1., 0.]], +# [[0., 0., 1.]]] +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`indices` + +A `Tensor` of indices. +
+`depth` + +A scalar defining the depth of the one hot dimension. +
`on_value` 

A scalar defining the value to fill in output when `indices[j] = i`.
(default: 1)

`off_value` 

A scalar defining the value to fill in output when `indices[j] != i`.
(default: 0)

+`axis` + +The axis to fill (default: -1, a new inner-most axis). +
+`dtype` + +The data type of the output tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+`output` + +The one-hot tensor. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +If dtype of either `on_value` or `off_value` don't match `dtype` +
+`TypeError` + +If dtype of `on_value` and `off_value` don't match one another +
+ diff --git a/site/en/api_docs/python/tf/ones.md b/site/en/api_docs/python/tf/ones.md new file mode 100644 index 00000000000..1e5449cdcdd --- /dev/null +++ b/site/en/api_docs/python/tf/ones.md @@ -0,0 +1,103 @@ +description: Creates a tensor with all elements set to one (1). + +
+ + +
+ +# tf.ones + + + + + + + + + +Creates a tensor with all elements set to one (1). + + + + + + + + + +See also tf.ones_like. + +This operation returns a tensor of type `dtype` with shape `shape` and +all elements set to one. + +``` +>>> tf.ones([3, 4], tf.int32) + +``` + + + + + + + + + + + + + + + + +
+`shape` + +A `list` of integers, a `tuple` of integers, or +a 1-D `Tensor` of type `int32`. +
+`dtype` + +Optional DType of an element in the resulting `Tensor`. Default is +tf.float32. +
+`name` + +Optional string. A name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` with all elements set to one (1). +
+ diff --git a/site/en/api_docs/python/tf/ones_initializer.md b/site/en/api_docs/python/tf/ones_initializer.md new file mode 100644 index 00000000000..e1128f9f5e4 --- /dev/null +++ b/site/en/api_docs/python/tf/ones_initializer.md @@ -0,0 +1,203 @@ +description: Initializer that generates tensors initialized to 1. + +
+ + + + + +
+ +# tf.ones_initializer + + + + + + + + + +Initializer that generates tensors initialized to 1. + +Inherits From: [`Initializer`](../tf/keras/initializers/Initializer.md) + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.ones_initializer()) +>>> v1 + +>>> v2 + +>>> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.)) +(, from_config + +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. Only numeric or boolean dtypes are +supported. +
+ + + + + + + + + + + + +
Raises
`ValueError` 

If the dtype is not numeric or boolean.

+ + + + + diff --git a/site/en/api_docs/python/tf/ones_like.md b/site/en/api_docs/python/tf/ones_like.md new file mode 100644 index 00000000000..004d7867125 --- /dev/null +++ b/site/en/api_docs/python/tf/ones_like.md @@ -0,0 +1,97 @@ +description: Creates a tensor of all ones that has the same shape as the input. + +
+ + +
+ +# tf.ones_like + + + + + + + + + +Creates a tensor of all ones that has the same shape as the input. + + + + + + + +See also tf.ones. + +Given a single tensor (`tensor`), this operation returns a tensor of the +same type and shape as `tensor` with all elements set to 1. Optionally, +you can use `dtype` to specify a new type for the returned tensor. + +#### For example: + + + +``` +>>> tensor = tf.constant([[1, 2, 3], [4, 5, 6]]) +>>> tf.ones_like(tensor) + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`dtype` + +A type for the returned `Tensor`. Must be `float16`, `float32`, +`float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, +`complex64`, `complex128`, `bool` or `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` with all elements set to one. +
+ diff --git a/site/en/api_docs/python/tf/pad.md b/site/en/api_docs/python/tf/pad.md new file mode 100644 index 00000000000..3295ffd14bb --- /dev/null +++ b/site/en/api_docs/python/tf/pad.md @@ -0,0 +1,148 @@ +description: Pads a tensor. + +
+ + +
+ +# tf.pad + + + + + + + + + +Pads a tensor. + + + + + + + +This operation pads a `tensor` according to the `paddings` you specify. +`paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of +`tensor`. For each dimension D of `input`, `paddings[D, 0]` indicates how +many values to add before the contents of `tensor` in that dimension, and +`paddings[D, 1]` indicates how many values to add after the contents of +`tensor` in that dimension. If `mode` is "REFLECT" then both `paddings[D, 0]` +and `paddings[D, 1]` must be no greater than `tensor.dim_size(D) - 1`. If +`mode` is "SYMMETRIC" then both `paddings[D, 0]` and `paddings[D, 1]` must be +no greater than `tensor.dim_size(D)`. + +The padded size of each dimension D of the output is: + +`paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]` + +#### For example: + + + +```python +t = tf.constant([[1, 2, 3], [4, 5, 6]]) +paddings = tf.constant([[1, 1,], [2, 2]]) +# 'constant_values' is 0. +# rank of 't' is 2. +tf.pad(t, paddings, "CONSTANT") # [[0, 0, 0, 0, 0, 0, 0], + # [0, 0, 1, 2, 3, 0, 0], + # [0, 0, 4, 5, 6, 0, 0], + # [0, 0, 0, 0, 0, 0, 0]] + +tf.pad(t, paddings, "REFLECT") # [[6, 5, 4, 5, 6, 5, 4], + # [3, 2, 1, 2, 3, 2, 1], + # [6, 5, 4, 5, 6, 5, 4], + # [3, 2, 1, 2, 3, 2, 1]] + +tf.pad(t, paddings, "SYMMETRIC") # [[2, 1, 1, 2, 3, 3, 2], + # [2, 1, 1, 2, 3, 3, 2], + # [5, 4, 4, 5, 6, 6, 5], + # [5, 4, 4, 5, 6, 6, 5]] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`paddings` + +A `Tensor` of type `int32`. +
+`mode` + +One of "CONSTANT", "REFLECT", or "SYMMETRIC" (case-insensitive) +
+`constant_values` + +In "CONSTANT" mode, the scalar pad value to use. Must be +same type as `tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ + + + + + + + + + + + +
+`ValueError` + +When mode is not one of "CONSTANT", "REFLECT", or "SYMMETRIC". +
+ diff --git a/site/en/api_docs/python/tf/parallel_stack.md b/site/en/api_docs/python/tf/parallel_stack.md new file mode 100644 index 00000000000..abd46f7641e --- /dev/null +++ b/site/en/api_docs/python/tf/parallel_stack.md @@ -0,0 +1,115 @@ +description: Stacks a list of rank-R tensors into one rank-(R+1) tensor in parallel. + +
+ + +
+ +# tf.parallel_stack + + + + + + + + + +Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor in parallel. + + + + + + + + + +Requires that the shape of inputs be known at graph construction time. + +Packs the list of tensors in `values` into a tensor with rank one higher than +each tensor in `values`, by packing them along the first dimension. +Given a list of length `N` of tensors of shape `(A, B, C)`; the `output` +tensor will have the shape `(N, A, B, C)`. + +#### For example: + + + +```python +x = tf.constant([1, 4]) +y = tf.constant([2, 5]) +z = tf.constant([3, 6]) +tf.parallel_stack([x, y, z]) # [[1, 4], [2, 5], [3, 6]] +``` + +The difference between `stack` and `parallel_stack` is that `stack` requires +all the inputs be computed before the operation will begin but doesn't require +that the input shapes be known during graph construction. + +`parallel_stack` will copy pieces of the input into the output as they become +available, in some situations this can provide a performance benefit. + +Unlike `stack`, `parallel_stack` does NOT support backpropagation. + +This is the opposite of unstack. The numpy equivalent is + + tf.parallel_stack([x, y, z]) = np.asarray([x, y, z]) + + + + + + + + + + + + + +
+`values` + +A list of `Tensor` objects with the same shape and type. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + + +
+`output` + +A stacked `Tensor` with the same type as `values`. +
+ diff --git a/site/en/api_docs/python/tf/print.md b/site/en/api_docs/python/tf/print.md new file mode 100644 index 00000000000..179a8431d27 --- /dev/null +++ b/site/en/api_docs/python/tf/print.md @@ -0,0 +1,229 @@ +description: Print the specified inputs. + +
+ + +
+ +# tf.print + + + + + + + + + +Print the specified inputs. + + + + + + + + + +A TensorFlow operator that prints the specified inputs to a desired +output stream or logging level. The inputs may be dense or sparse Tensors, +primitive python objects, data structures that contain tensors, and printable +Python objects. Printed tensors will recursively show the first and last +elements of each dimension to summarize. + + + +#### Example: + +Single-input usage: + +```python +tensor = tf.range(10) +tf.print(tensor, output_stream=sys.stderr) +``` + +(This prints "[0 1 2 ... 7 8 9]" to sys.stderr) + +Multi-input usage: + +```python +tensor = tf.range(10) +tf.print("tensors:", tensor, {2: tensor * 2}, output_stream=sys.stdout) +``` + +(This prints "tensors: [0 1 2 ... 7 8 9] {2: [0 2 4 ... 14 16 18]}" to +sys.stdout) + +Changing the input separator: +```python +tensor_a = tf.range(2) +tensor_b = tensor_a * 2 +tf.print(tensor_a, tensor_b, output_stream=sys.stderr, sep=',') +``` + +(This prints "[0 1],[0 2]" to sys.stderr) + +Usage in a tf.function: + +```python +@tf.function +def f(): + tensor = tf.range(10) + tf.print(tensor, output_stream=sys.stderr) + return tensor + +range_tensor = f() +``` + +(This prints "[0 1 2 ... 7 8 9]" to sys.stderr) + + +@compatibility(TF 1.x Graphs and Sessions) +In graphs manually created outside of tf.function, this method returns +the created TF operator that prints the data. To make sure the +operator runs, users need to pass the produced op to +tf.compat.v1.Session's run method, or to use the op as a control +dependency for executed ops by specifying +`with tf.compat.v1.control_dependencies([print_op])`. +@end_compatibility + + Compatibility usage in TF 1.x graphs: + + ```python + sess = tf.compat.v1.Session() + with sess.as_default(): + tensor = tf.range(10) + print_op = tf.print("tensors:", tensor, {2: tensor * 2}, + output_stream=sys.stdout) + with tf.control_dependencies([print_op]): + tripled_tensor = tensor * 3 + sess.run(tripled_tensor) + ``` + + (This prints "tensors: [0 1 2 ... 7 8 9] {2: [0 2 4 ... 14 16 18]}" to + sys.stdout) + +Note: In Jupyter notebooks and colabs, tf.print prints to the notebook + cell outputs. It will not write to the notebook kernel's console logs. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`*inputs` + +Positional arguments that are the inputs to print. Inputs in the +printed output will be separated by spaces. Inputs may be python +primitives, tensors, data structures such as dicts and lists that may +contain tensors (with the data structures possibly nested in arbitrary +ways), and printable python objects. +
`output_stream` 

The output stream, logging level, or file to print to.
Defaults to sys.stderr, but sys.stdout, tf.compat.v1.logging.info,
tf.compat.v1.logging.warning, tf.compat.v1.logging.error,
absl.logging.info, absl.logging.warning and absl.logging.error are also
supported. To print to a file, pass a string starting with "file://"
followed by the file path, e.g., "file:///tmp/foo.out".

+`summarize` + +The first and last `summarize` elements within each dimension are +recursively printed per Tensor. If None, then the first 3 and last 3 +elements of each dimension are printed for each tensor. If set to -1, it +will print all elements of every tensor. +
+`sep` + +The string to use to separate the inputs. Defaults to " ". +
`end` 

End character that is appended at the end of the printed string.
Defaults to the newline character.

+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+None when executing eagerly. During graph tracing this returns +a TF operator that prints the specified inputs in the specified output +stream or logging level. This operator will be automatically executed +except inside of tf.compat.v1 graphs and sessions. +
+ + + + + + + + + + + + +
+`ValueError` + +If an unsupported output stream is specified. +
+ + + +#### Python2 Compatibility +In python 2.7, make sure to import the following: +`from __future__ import print_function` + diff --git a/site/en/api_docs/python/tf/profiler.md b/site/en/api_docs/python/tf/profiler.md new file mode 100644 index 00000000000..1b1b82bbd58 --- /dev/null +++ b/site/en/api_docs/python/tf/profiler.md @@ -0,0 +1,25 @@ +description: Public API for tf.profiler namespace. + +
+ + +
+ +# Module: tf.profiler + + + + + + + + + +Public API for tf.profiler namespace. + + + +## Modules + +[`experimental`](../tf/profiler/experimental.md) module: Public API for tf.profiler.experimental namespace. + diff --git a/site/en/api_docs/python/tf/profiler/experimental.md b/site/en/api_docs/python/tf/profiler/experimental.md new file mode 100644 index 00000000000..c4c5518238b --- /dev/null +++ b/site/en/api_docs/python/tf/profiler/experimental.md @@ -0,0 +1,37 @@ +description: Public API for tf.profiler.experimental namespace. + +
+ + +
+ +# Module: tf.profiler.experimental + + + + + + + + + +Public API for tf.profiler.experimental namespace. + + + +## Modules + +[`client`](../../tf/profiler/experimental/client.md) module: Public API for tf.profiler.experimental.client namespace. + +[`server`](../../tf/profiler/experimental/server.md) module: Public API for tf.profiler.experimental.server namespace. + +## Classes + +[`class Profile`](../../tf/profiler/experimental/Profile.md): Context-manager profile API. + +## Functions + +[`start(...)`](../../tf/profiler/experimental/start.md): Starts profiling. + +[`stop(...)`](../../tf/profiler/experimental/stop.md): Stops the current profiling session. + diff --git a/site/en/api_docs/python/tf/profiler/experimental/Profile.md b/site/en/api_docs/python/tf/profiler/experimental/Profile.md new file mode 100644 index 00000000000..fbc20a45b9d --- /dev/null +++ b/site/en/api_docs/python/tf/profiler/experimental/Profile.md @@ -0,0 +1,76 @@ +description: Context-manager profile API. + +
+ + + + + +
# tf.profiler.experimental.Profile

Context-manager profile API.

Profiling will start when entering the scope, and stop and save the results to
the logdir when exiting the scope. Open the TensorBoard profile tab to view the
results.

#### Example usage:

```python
with tf.profiler.experimental.Profile("/path/to/logdir"):
  # do some work
```

## Methods



__enter__

+ +View source + + + + + + +

__exit__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/profiler/experimental/client.md b/site/en/api_docs/python/tf/profiler/experimental/client.md new file mode 100644 index 00000000000..822c36ff5e9 --- /dev/null +++ b/site/en/api_docs/python/tf/profiler/experimental/client.md @@ -0,0 +1,27 @@ +description: Public API for tf.profiler.experimental.client namespace. + +
+ + +
+ +# Module: tf.profiler.experimental.client + + + + + + + + + +Public API for tf.profiler.experimental.client namespace. + + + +## Functions + +[`monitor(...)`](../../../tf/profiler/experimental/client/monitor.md): Sends grpc requests to profiler server to perform on-demand monitoring. + +[`trace(...)`](../../../tf/profiler/experimental/client/trace.md): Sends grpc requests to profiler server to perform on-demand profiling. + diff --git a/site/en/api_docs/python/tf/profiler/experimental/client/monitor.md b/site/en/api_docs/python/tf/profiler/experimental/client/monitor.md new file mode 100644 index 00000000000..eddd2349ff8 --- /dev/null +++ b/site/en/api_docs/python/tf/profiler/experimental/client/monitor.md @@ -0,0 +1,92 @@ +description: Sends grpc requests to profiler server to perform on-demand monitoring. + +
+ + +
# tf.profiler.experimental.client.monitor

Sends grpc requests to profiler server to perform on-demand monitoring.

The monitoring result is a lightweight performance summary of your model
execution. This method will block the caller thread until it receives the
monitoring result. This method currently supports Cloud TPU only.


+`service_addr` + +gRPC address of profiler service e.g. grpc://10.0.0.2:8466. +
+`duration_ms` + +Duration of monitoring in ms. +
+`level` + +Choose a monitoring level between 1 and 2 to monitor your job. Level +2 is more verbose than level 1 and shows more metrics. +
+ + + + + + + + + + + +
+A string of monitoring output. +
+ + + +#### Example usage: + + +# Continuously send gRPC requests to the Cloud TPU to monitor the model +# execution. +```python +for query in range(0, 100): + print(tf.profiler.experimental.client.monitor('grpc://10.0.0.2:8466', 1000)) \ No newline at end of file diff --git a/site/en/api_docs/python/tf/profiler/experimental/client/trace.md b/site/en/api_docs/python/tf/profiler/experimental/client/trace.md new file mode 100644 index 00000000000..82601c0f0df --- /dev/null +++ b/site/en/api_docs/python/tf/profiler/experimental/client/trace.md @@ -0,0 +1,136 @@ +description: Sends grpc requests to profiler server to perform on-demand profiling. + +
+ + +
# tf.profiler.experimental.client.trace

Sends grpc requests to profiler server to perform on-demand profiling.

This method will block the caller thread until it receives the tracing result.
This method supports CPU, GPU, and Cloud TPU. This method supports profiling a
single host for CPU, GPU, TPU, as well as multiple TPU workers.
The profiled results will be saved to your specified TensorBoard log
directory (e.g. the directory where you save your model checkpoints). Use the
TensorBoard profile plugin to view the visualization and analysis results.


+`service_addr` + +gRPC address of profiler service e.g. grpc://localhost:6009. +
+`logdir` + +Path of TensorBoard log directory e.g. /tmp/tb_log. +
+`duration_ms` + +Duration of tracing or monitoring in ms. +
+`worker_list` + +Optional. The list of workers that we are about to profile in +the current session (TPU only). +
+`num_tracing_attempts` + +Optional. Automatically retry N times when no trace +event is collected (default 3). +
+ + + + + + + + + + + + +
+`UnavailableError` + +If no trace event is collected. +
+ + +Example usage (CPU/GPU): +# Start a profiler server before your model runs. +```python +tf.profiler.experimental.server.start(6009) +# your model code. +# Send gRPC request to the profiler server to collect a trace of your model. +```python +tf.profiler.experimental.client.trace('grpc://localhost:6009', + '/tmp/tb_log', 2000) + +Example usage (TPU): +# Send gRPC request to a TPU worker to collect a trace of your model. A +# profiler service has been started in the TPU worker at port 8466. +```python +# E.g. your TPU IP address is 10.0.0.2 and you want to profile for 2 seconds. +tf.profiler.experimental.client.trace('grpc://10.0.0.2:8466', + 'gs://your_tb_dir', 2000) + +Example usage (Multiple TPUs): +# Send gRPC request to a TPU pod to collect a trace of your model on multiple +# TPUs. A profiler service has been started in all the TPU workers at the +# port 8466. +```python +# E.g. your TPU IP addresses are 10.0.0.2, 10.0.0.3, 10.0.0.4, and you want to +# profile for 2 seconds. +tf.profiler.experimental.client.trace('grpc://10.0.0.2:8466', + 'gs://your_tb_dir', + 2000, '10.0.0.3,10.0.0.4') + +Launch TensorBoard and point it to the same logdir you provided to this API. +$ tensorboard --logdir=/tmp/tb_log (or gs://your_tb_dir in the above examples) +Open your browser and go to localhost:6006/#profile to view profiling results. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/profiler/experimental/server.md b/site/en/api_docs/python/tf/profiler/experimental/server.md new file mode 100644 index 00000000000..ae5d9af84fc --- /dev/null +++ b/site/en/api_docs/python/tf/profiler/experimental/server.md @@ -0,0 +1,25 @@ +description: Public API for tf.profiler.experimental.server namespace. + +
+ + +
+ +# Module: tf.profiler.experimental.server + + + + + + + + + +Public API for tf.profiler.experimental.server namespace. + + + +## Functions + +[`start(...)`](../../../tf/profiler/experimental/server/start.md): Start a profiler grpc server that listens to given port. + diff --git a/site/en/api_docs/python/tf/profiler/experimental/server/start.md b/site/en/api_docs/python/tf/profiler/experimental/server/start.md new file mode 100644 index 00000000000..b0e1b3b70d2 --- /dev/null +++ b/site/en/api_docs/python/tf/profiler/experimental/server/start.md @@ -0,0 +1,55 @@ +description: Start a profiler grpc server that listens to given port. + +
+ + +
+ +# tf.profiler.experimental.server.start + + + + + + + + + +Start a profiler grpc server that listens to given port. + + + + + + + +The profiler server will exit when the process finishes. The service is +defined in tensorflow/core/profiler/profiler_service.proto. + + + + + + + + + + +
+`port` + +port profiler server listens to. +
Example usage:

```python
tf.profiler.experimental.server.start('6009')
# do your training here.
``` diff --git a/site/en/api_docs/python/tf/profiler/experimental/start.md b/site/en/api_docs/python/tf/profiler/experimental/start.md new file mode 100644 index 00000000000..656e787bce4 --- /dev/null +++ b/site/en/api_docs/python/tf/profiler/experimental/start.md @@ -0,0 +1,81 @@ +description: Starts profiling. + +
+ + +
+ +# tf.profiler.experimental.start + + + + + + + + + +Starts profiling. + + + + + + + + + + + + + + + + + +
+`logdir` + +A log directory read by TensorBoard to export the profile results. +
+ + + + + + + + + + + + +
+`AlreadyExistsError` + +If another profiling session is running. +
+ + + +#### Example usage: + + +```python +tf.profiler.experimental.start('logdir_path') +# do your training here. +tf.profiler.experimental.stop() +``` + +Launch TensorBoard and point it to the same logdir you provided to this API. +$ tensorboard --logdir=logdir_path +Open your browser and go to localhost:6006/#profile to view profiling results. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/profiler/experimental/stop.md b/site/en/api_docs/python/tf/profiler/experimental/stop.md new file mode 100644 index 00000000000..3cc5c9b84f4 --- /dev/null +++ b/site/en/api_docs/python/tf/profiler/experimental/stop.md @@ -0,0 +1,68 @@ +description: Stops the current profiling session. + +
+ + +
+ +# tf.profiler.experimental.stop + + + + + + + + + +Stops the current profiling session. + + + + + + + +The profiler session will be stopped and profile results can be saved. + + + + + + + + + + +
+`save` + +An optional variable to save the results to TensorBoard. Default True. +
+ + + + + + + + + + + + +
+`UnavailableError` + +If there is no active profiling session. +
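A minimal sketch of a complete profiling session (`'logdir_path'` is a placeholder directory):

```python
import tensorflow as tf

tf.profiler.experimental.start('logdir_path')
# ... run the steps you want to profile ...
tf.profiler.experimental.stop(save=True)  # export the collected trace to the logdir
```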
+ diff --git a/site/en/api_docs/python/tf/py_function.md b/site/en/api_docs/python/tf/py_function.md new file mode 100644 index 00000000000..c923cc6ecf7 --- /dev/null +++ b/site/en/api_docs/python/tf/py_function.md @@ -0,0 +1,162 @@ +description: Wraps a python function into a TensorFlow op that executes it eagerly. + +
+ + +
+ +# tf.py_function + + + + + + + + + +Wraps a python function into a TensorFlow op that executes it eagerly. + + + + + + + + + +This function allows expressing computations in a TensorFlow graph as +Python functions. In particular, it wraps a Python function `func` +in a once-differentiable TensorFlow operation that executes it with eager +execution enabled. As a consequence, tf.py_function makes it +possible to express control flow using Python constructs (`if`, `while`, +`for`, etc.), instead of TensorFlow control flow constructs (tf.cond, +tf.while_loop). For example, you might use tf.py_function to +implement the log huber function: + +```python +def log_huber(x, m): + if tf.abs(x) <= m: + return x**2 + else: + return m**2 * (1 - 2 * tf.math.log(m) + tf.math.log(x**2)) + +x = tf.compat.v1.placeholder(tf.float32) +m = tf.compat.v1.placeholder(tf.float32) + +y = tf.py_function(func=log_huber, inp=[x, m], Tout=tf.float32) +dy_dx = tf.gradients(y, x)[0] + +with tf.compat.v1.Session() as sess: + # The session executes `log_huber` eagerly. Given the feed values below, + # it will take the first branch, so `y` evaluates to 1.0 and + # `dy_dx` evaluates to 2.0. + y, dy_dx = sess.run([y, dy_dx], feed_dict={x: 1.0, m: 2.0}) +``` + +You can also use tf.py_function to debug your models at runtime +using Python tools, i.e., you can isolate portions of your code that +you want to debug, wrap them in Python functions and insert `pdb` tracepoints +or print statements as desired, and wrap those functions in +tf.py_function. + +For more information on eager execution, see the +[Eager guide](https://tensorflow.org/guide/eager). + +tf.py_function is similar in spirit to tf.compat.v1.py_func, but unlike +the latter, the former lets you use TensorFlow operations in the wrapped +Python function. In particular, while tf.compat.v1.py_func only runs on CPUs +and +wraps functions that take NumPy arrays as inputs and return NumPy arrays as +outputs, tf.py_function can be placed on GPUs and wraps functions +that take Tensors as inputs, execute TensorFlow operations in their bodies, +and return Tensors as outputs. + +Like tf.compat.v1.py_func, tf.py_function has the following limitations +with respect to serialization and distribution: + +* The body of the function (i.e. `func`) will not be serialized in a + `GraphDef`. Therefore, you should not use this function if you need to + serialize your model and restore it in a different environment. + +* The operation must run in the same address space as the Python program + that calls tf.py_function(). If you are using distributed + TensorFlow, you must run a tf.distribute.Server in the same process as the + program that calls tf.py_function() and you must pin the created + operation to a device in that server (e.g. using `with tf.device():`). + + + + + + + + + + + + + + + + + + + + +
+`func` + +A Python function which accepts a list of `Tensor` objects having +element types that match the corresponding tf.Tensor objects in `inp` +and returns a list of `Tensor` objects (or a single `Tensor`, or `None`) +having element types that match the corresponding values in `Tout`. +
+`inp` + +A list of `Tensor` objects. +
+`Tout` + +A list or tuple of tensorflow data types or a single tensorflow data +type if there is only one, indicating what `func` returns; an empty list +if no value is returned (i.e., if the return value is `None`). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` or a single `Tensor` which `func` computes; an empty list +if `func` returns None. +
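The example above uses TF1-style placeholders and sessions; the following is a sketch of the same `log_huber` wrapper written against the TF2 eager API (the surrounding code is illustrative):

```python
import tensorflow as tf

def log_huber(x, m):
  # Runs eagerly inside the wrapped op, so Python control flow and
  # TensorFlow ops can be mixed freely.
  if tf.abs(x) <= m:
    return x ** 2
  return m ** 2 * (1 - 2 * tf.math.log(m) + tf.math.log(x ** 2))

x = tf.constant(1.0)
m = tf.constant(2.0)
with tf.GradientTape() as tape:
  tape.watch(x)
  y = tf.py_function(func=log_huber, inp=[x, m], Tout=tf.float32)
dy_dx = tape.gradient(y, x)  # 2.0, since |x| <= m selects the x**2 branch
```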
+ diff --git a/site/en/api_docs/python/tf/quantization.md b/site/en/api_docs/python/tf/quantization.md new file mode 100644 index 00000000000..5cb1b271ae6 --- /dev/null +++ b/site/en/api_docs/python/tf/quantization.md @@ -0,0 +1,43 @@ +description: Public API for tf.quantization namespace. + +
+ + +
+ +# Module: tf.quantization + + + + + + + + + +Public API for tf.quantization namespace. + + + +## Functions + +[`dequantize(...)`](../tf/quantization/dequantize.md): Dequantize the 'input' tensor into a float or bfloat16 Tensor. + +[`fake_quant_with_min_max_args(...)`](../tf/quantization/fake_quant_with_min_max_args.md): Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. + +[`fake_quant_with_min_max_args_gradient(...)`](../tf/quantization/fake_quant_with_min_max_args_gradient.md): Compute gradients for a FakeQuantWithMinMaxArgs operation. + +[`fake_quant_with_min_max_vars(...)`](../tf/quantization/fake_quant_with_min_max_vars.md): Fake-quantize the 'inputs' tensor of type float via global float scalars `min` + +[`fake_quant_with_min_max_vars_gradient(...)`](../tf/quantization/fake_quant_with_min_max_vars_gradient.md): Compute gradients for a FakeQuantWithMinMaxVars operation. + +[`fake_quant_with_min_max_vars_per_channel(...)`](../tf/quantization/fake_quant_with_min_max_vars_per_channel.md): Fake-quantize the 'inputs' tensor of type float and one of the shapes: `[d]`, + +[`fake_quant_with_min_max_vars_per_channel_gradient(...)`](../tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient.md): Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. + +[`quantize(...)`](../tf/quantization/quantize.md): Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. + +[`quantize_and_dequantize(...)`](../tf/quantization/quantize_and_dequantize.md): Quantizes then dequantizes a tensor. + +[`quantized_concat(...)`](../tf/quantization/quantized_concat.md): Concatenates quantized tensors along one dimension. + diff --git a/site/en/api_docs/python/tf/quantization/dequantize.md b/site/en/api_docs/python/tf/quantization/dequantize.md new file mode 100644 index 00000000000..994c329356d --- /dev/null +++ b/site/en/api_docs/python/tf/quantization/dequantize.md @@ -0,0 +1,181 @@ +description: Dequantize the 'input' tensor into a float or bfloat16 Tensor. + +
+ + +
# tf.quantization.dequantize

Dequantize the 'input' tensor into a float or bfloat16 Tensor.

[min_range, max_range] are scalar floats that specify the range for
the output. The 'mode' attribute controls exactly which calculations are
used to convert the float values to their quantized equivalents.

In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:

```
if T == qint8: in[i] += (range(T) + 1)/ 2.0
out[i] = min_range + (in[i]* (max_range - min_range) / range(T))
```
here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`

*MIN_COMBINED Mode Example*

If the input comes from a QuantizedRelu6, the output type is
quint8 (range of 0-255) but the possible range of QuantizedRelu6 is
0-6. The min_range and max_range values are therefore 0.0 and 6.0.
Dequantize on quint8 will take each value, cast to float, and multiply
by 6 / 255.
Note that if the quantized type is qint8, the operation will additionally add
128 to each value prior to casting.

If the mode is 'MIN_FIRST', then this approach is used:

```c++
num_discrete_values = 1 << (# of bits in T)
range_adjust = num_discrete_values / (num_discrete_values - 1)
range = (range_max - range_min) * range_adjust
range_scale = range / num_discrete_values
const double offset_input = static_cast<double>(input) - lowest_quantized;
result = range_min + ((input - numeric_limits<T>::min()) * range_scale)
```

If the mode is `SCALED`, dequantization is performed by multiplying each
input value by a scaling_factor. (Thus an input of 0 always maps to 0.0).

The scaling_factor is determined from `min_range`, `max_range`, and
`narrow_range` in a way that is compatible with `QuantizeAndDequantize{V2|V3}`
and `QuantizeV2`, using the following algorithm:

```c++

  const int min_expected_T = std::numeric_limits<T>::min() +
    (narrow_range ? 1 : 0);
  const int max_expected_T = std::numeric_limits<T>::max();
  const float max_expected_T = std::numeric_limits<float>::max();

  const float scale_factor =
    (std::numeric_limits<T>::min() == 0) ? (max_range / max_expected_T)
                                         : std::max(min_range / min_expected_T,
                                                    max_range / max_expected_T);
```


+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_range` + +A `Tensor` of type `float32`. +The minimum scalar value possibly produced for the input. +
+`max_range` + +A `Tensor` of type `float32`. +The maximum scalar value possibly produced for the input. +
+`mode` + +An optional `string` from: `"MIN_COMBINED", "MIN_FIRST", "SCALED"`. Defaults to `"MIN_COMBINED"`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`axis` + +An optional `int`. Defaults to `-1`. +
+`dtype` + +An optional tf.DType from: `tf.bfloat16, tf.float32`. Defaults to tf.float32. +Type of the output tensor. Currently Dequantize supports float and bfloat16. +If 'dtype' is 'bfloat16', it only supports 'MIN_COMBINED' mode. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
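A small round-trip sketch (the values and ranges are arbitrary): quantize a float tensor with tf.quantization.quantize, then recover an approximation of it with `dequantize`:

```python
import tensorflow as tf

x = tf.constant([0.0, 1.5, 3.0, 6.0])
q = tf.quantization.quantize(x, min_range=0.0, max_range=6.0, T=tf.quint8)
y = tf.quantization.dequantize(q.output, min_range=0.0, max_range=6.0)
# y is approximately [0.0, 1.5, 3.0, 6.0], up to quantization error.
```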
+ diff --git a/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_args.md b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_args.md new file mode 100644 index 00000000000..4f9e1d2b99e --- /dev/null +++ b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_args.md @@ -0,0 +1,121 @@ +description: Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. + +
+ + +
+ +# tf.quantization.fake_quant_with_min_max_args + + + + + + + + + +Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. + + + + + + + + + +Attributes `[min; max]` define the clamping range for the `inputs` data. +`inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` +when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and +then de-quantized and output as floats in `[min; max]` interval. +`num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive. + +Before quantization, `min` and `max` values are adjusted with the following +logic. +It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values, +the behavior can be unexpected: +If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`. +If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`. +If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `, +`min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`. + +Quantization is called fake since the output is still in floating point. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor` of type `float32`. +
+`min` + +An optional `float`. Defaults to `-6`. +
+`max` + +An optional `float`. Defaults to `6`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
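A minimal sketch (the values are arbitrary); the result stays `float32`, but every element has been clamped to the adjusted `[min; max]` interval and snapped to one of the quantization levels:

```python
import tensorflow as tf

x = tf.constant([-10.0, -1.0, 0.0, 0.37, 1.0, 10.0])
y = tf.quantization.fake_quant_with_min_max_args(
    x, min=-1.0, max=1.0, num_bits=8)
# Out-of-range inputs are clamped to roughly [-1.0, 1.0]; in-range inputs are
# rounded to the nearest of the 256 representable levels.
```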
+ diff --git a/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_args_gradient.md b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_args_gradient.md new file mode 100644 index 00000000000..2eab20529de --- /dev/null +++ b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_args_gradient.md @@ -0,0 +1,114 @@ +description: Compute gradients for a FakeQuantWithMinMaxArgs operation. + +
+ + +
+ +# tf.quantization.fake_quant_with_min_max_args_gradient + + + + + + + + + +Compute gradients for a FakeQuantWithMinMaxArgs operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor` of type `float32`. +Backpropagated gradients above the FakeQuantWithMinMaxArgs operation. +
+`inputs` + +A `Tensor` of type `float32`. +Values passed as inputs to the FakeQuantWithMinMaxArgs operation. +
+`min` + +An optional `float`. Defaults to `-6`. +
+`max` + +An optional `float`. Defaults to `6`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
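This is the gradient kernel behind the fake-quant op and is rarely called directly; a minimal sketch with arbitrary values:

```python
import tensorflow as tf

x = tf.constant([-1.5, -0.5, 0.5, 1.5])
upstream = tf.ones_like(x)  # gradients arriving from the op above
dx = tf.quantization.fake_quant_with_min_max_args_gradient(
    gradients=upstream, inputs=x, min=-1.0, max=1.0, num_bits=8)
# dx passes the upstream gradient through where the inputs fell inside the
# clamping range and is zero where they were clamped.
```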
+ diff --git a/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars.md b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars.md new file mode 100644 index 00000000000..68cb6affa7c --- /dev/null +++ b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars.md @@ -0,0 +1,124 @@ +description: Fake-quantize the 'inputs' tensor of type float via global float scalars min + +
+ + +
+ +# tf.quantization.fake_quant_with_min_max_vars + + + + + + + + + +Fake-quantize the 'inputs' tensor of type float via global float scalars `min` + + + + + + + + + +and `max` to 'outputs' tensor of same shape as `inputs`. + +`[min; max]` define the clamping range for the `inputs` data. +`inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` +when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and +then de-quantized and output as floats in `[min; max]` interval. +`num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive. + +Before quantization, `min` and `max` values are adjusted with the following +logic. +It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values, +the behavior can be unexpected: +If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`. +If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`. +If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `, +`min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`. + +This operation has a gradient and thus allows for training `min` and `max` +values. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor` of type `float32`. +
+`min` + +A `Tensor` of type `float32`. +
+`max` + +A `Tensor` of type `float32`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
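Because the op is differentiable with respect to `min` and `max`, the clamping range can be learned. A sketch with tf.GradientTape (the variables and the loss are illustrative):

```python
import tensorflow as tf

x = tf.constant([-3.0, -1.0, 0.0, 2.0, 8.0])
min_var = tf.Variable(-2.0)
max_var = tf.Variable(4.0)

with tf.GradientTape() as tape:
  y = tf.quantization.fake_quant_with_min_max_vars(
      x, min_var, max_var, num_bits=8)
  loss = tf.reduce_sum(tf.square(y - x))  # penalize quantization error

# Both gradients are defined, so min_var and max_var can be trained.
d_min, d_max = tape.gradient(loss, [min_var, max_var])
```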
+ diff --git a/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_gradient.md b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_gradient.md new file mode 100644 index 00000000000..09ee56c7155 --- /dev/null +++ b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_gradient.md @@ -0,0 +1,138 @@ +description: Compute gradients for a FakeQuantWithMinMaxVars operation. + +
+ + +
+ +# tf.quantization.fake_quant_with_min_max_vars_gradient + + + + + + + + + +Compute gradients for a FakeQuantWithMinMaxVars operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor` of type `float32`. +Backpropagated gradients above the FakeQuantWithMinMaxVars operation. +
+`inputs` + +A `Tensor` of type `float32`. +Values passed as inputs to the FakeQuantWithMinMaxVars operation. +min, max: Quantization interval, scalar floats. +
+`min` + +A `Tensor` of type `float32`. +
+`max` + +A `Tensor` of type `float32`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +The bitwidth of the quantization; between 2 and 8, inclusive. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +Whether to quantize into 2^num_bits - 1 distinct values. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max). +
+`backprops_wrt_input` + +A `Tensor` of type `float32`. +
+`backprop_wrt_min` + +A `Tensor` of type `float32`. +
+`backprop_wrt_max` + +A `Tensor` of type `float32`. +
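As with the args variant, this gradient kernel is normally invoked by the framework rather than by user code; a minimal sketch with arbitrary values:

```python
import tensorflow as tf

x = tf.constant([-3.0, 0.0, 3.0])
upstream = tf.ones_like(x)
d_input, d_min, d_max = tf.quantization.fake_quant_with_min_max_vars_gradient(
    gradients=upstream, inputs=x,
    min=tf.constant(-2.0), max=tf.constant(2.0), num_bits=8)
# d_input is the straight-through gradient w.r.t. the inputs; d_min and d_max
# collect the upstream gradient for inputs clamped at the low and high ends.
```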
+ diff --git a/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel.md b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel.md new file mode 100644 index 00000000000..f399b92bd44 --- /dev/null +++ b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel.md @@ -0,0 +1,125 @@ +description: Fake-quantize the 'inputs' tensor of type float and one of the shapes: [d], + +
+ + +
+ +# tf.quantization.fake_quant_with_min_max_vars_per_channel + + + + + + + + + +Fake-quantize the 'inputs' tensor of type float and one of the shapes: `[d]`, + + + + + + + + + +`[b, d]` `[b, h, w, d]` via per-channel floats `min` and `max` of shape `[d]` +to 'outputs' tensor of same shape as `inputs`. + +`[min; max]` define the clamping range for the `inputs` data. +`inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` +when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and +then de-quantized and output as floats in `[min; max]` interval. +`num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive. + +Before quantization, `min` and `max` values are adjusted with the following +logic. +It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values, +the behavior can be unexpected: +If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`. +If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`. +If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `, +`min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`. + +This operation has a gradient and thus allows for training `min` and `max` +values. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor` of type `float32`. +
+`min` + +A `Tensor` of type `float32`. +
+`max` + +A `Tensor` of type `float32`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
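
A minimal eager-mode sketch (TF 2.x) with a `[b, d]` input and per-channel intervals of shape `[d]`; the tensor values are illustrative:

```python
import tensorflow as tf

x = tf.constant([[0.1, 5.0],
                 [0.9, -2.0]])            # shape [b=2, d=2]
channel_min = tf.constant([0.0, -3.0])    # shape [d]
channel_max = tf.constant([1.0,  3.0])    # shape [d]

y = tf.quantization.fake_quant_with_min_max_vars_per_channel(
    inputs=x, min=channel_min, max=channel_max, num_bits=8)

# Each column is clamped to its own [min, max] interval and snapped to the
# corresponding 8-bit grid; the output has the same shape as `inputs`.
print(y.numpy())
```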
+ diff --git a/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient.md b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient.md new file mode 100644 index 00000000000..6528663d111 --- /dev/null +++ b/site/en/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient.md @@ -0,0 +1,140 @@ +description: Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. + +
+ + +
+ +# tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient + + + + + + + + + +Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor` of type `float32`. +Backpropagated gradients above the FakeQuantWithMinMaxVarsPerChannel operation, +shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`. +
+`inputs` + +A `Tensor` of type `float32`. +Values passed as inputs to the FakeQuantWithMinMaxVarsPerChannel operation, shape +same as `gradients`. +
+`min` + +A `Tensor` of type `float32`. +The lower ends of the per-channel quantization intervals, floats of shape `[d]`. +
+`max` + +A `Tensor` of type `float32`. +The upper ends of the per-channel quantization intervals, floats of shape `[d]`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +The bitwidth of the quantization; between 2 and 16, inclusive. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +Whether to quantize into 2^num_bits - 1 distinct values. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max). +
+`backprops_wrt_input` + +A `Tensor` of type `float32`. +
+`backprop_wrt_min` + +A `Tensor` of type `float32`. +
+`backprop_wrt_max` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/quantization/quantize.md b/site/en/api_docs/python/tf/quantization/quantize.md new file mode 100644 index 00000000000..a527b4c0b88 --- /dev/null +++ b/site/en/api_docs/python/tf/quantization/quantize.md @@ -0,0 +1,292 @@ +description: Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. + +
+ + +
+ +# tf.quantization.quantize + + + + + + + + + +Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. + + + + + + + + + +[min_range, max_range] are scalar floats that specify the range for +the 'input' data. The 'mode' attribute controls exactly which calculations are +used to convert the float values to their quantized equivalents. The +'round_mode' attribute controls which rounding tie-breaking algorithm is used +when rounding float values to their quantized equivalents. + +In 'MIN_COMBINED' mode, each value of the tensor will undergo the following: + +``` +out[i] = (in[i] - min_range) * range(T) / (max_range - min_range) +if T == qint8: out[i] -= (range(T) + 1) / 2.0 +``` + +here `range(T) = numeric_limits::max() - numeric_limits::min()` + +*MIN_COMBINED Mode Example* + +Assume the input is type float and has a possible range of [0.0, 6.0] and the +output type is quint8 ([0, 255]). The min_range and max_range values should be +specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each +value of the input by 255/6 and cast to quint8. + +If the output type was qint8 ([-128, 127]), the operation will additionally +subtract each value by 128 prior to casting, so that the range of values aligns +with the range of qint8. + +If the mode is 'MIN_FIRST', then this approach is used: + +``` +num_discrete_values = 1 << (# of bits in T) +range_adjust = num_discrete_values / (num_discrete_values - 1) +range = (range_max - range_min) * range_adjust +range_scale = num_discrete_values / range +quantized = round(input * range_scale) - round(range_min * range_scale) + + numeric_limits::min() +quantized = max(quantized, numeric_limits::min()) +quantized = min(quantized, numeric_limits::max()) +``` + +The biggest difference between this and MIN_COMBINED is that the minimum range +is rounded first, before it's subtracted from the rounded value. With +MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing +and dequantizing will introduce a larger and larger error. + +*SCALED mode Example* + +`SCALED` mode matches the quantization approach used in +`QuantizeAndDequantize{V2|V3}`. + +If the mode is `SCALED`, the quantization is performed by multiplying each +input value by a scaling_factor. +The scaling_factor is determined from `min_range` and `max_range` to be as large +as possible such that the range from `min_range` to `max_range` is representable +within values of type T. + +```c++ + + const int min_T = std::numeric_limits::min(); + const int max_T = std::numeric_limits::max(); + const float max_float = std::numeric_limits::max(); + + const float scale_factor_from_min_side = + (min_T * min_range > 0) ? min_T / min_range : max_float; + const float scale_factor_from_max_side = + (max_T * max_range > 0) ? max_T / max_range : max_float; + + const float scale_factor = std::min(scale_factor_from_min_side, + scale_factor_from_max_side); +``` + +We next use the scale_factor to adjust min_range and max_range as follows: + +```c++ + min_range = min_T / scale_factor; + max_range = max_T / scale_factor; +``` + + +e.g. if T = qint8, and initially min_range = -10, and max_range = 9, we would +compare -128/-10.0 = 12.8 to 127/9.0 = 14.11, and set scaling_factor = 12.8 +In this case, min_range would remain -10, but max_range would be adjusted to +127 / 12.8 = 9.921875 + +So we will quantize input values in the range (-10, 9.921875) to (-128, 127). 
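
A minimal eager-mode sketch (TF 2.x) of the `SCALED` example above, with `T = qint8`, `min_range = -10`, and `max_range = 9`; the commented values follow the arithmetic described in this section:

```python
import tensorflow as tf

x = tf.constant([-10.0, -1.0, 0.0, 5.0, 9.0])

output, output_min, output_max = tf.quantization.quantize(
    x, min_range=-10.0, max_range=9.0, T=tf.qint8, mode='SCALED')

print(output.numpy())      # qint8 values produced with scale_factor = 12.8
print(output_min.numpy())  # -10.0 (unchanged)
print(output_max.numpy())  # ~9.921875 == 127 / 12.8
```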
+ +The input tensor can now be quantized by clipping values to the range +`min_range` to `max_range`, then multiplying by scale_factor as follows: + +```c++ +result = round(min(max_range, max(min_range, input)) * scale_factor) +``` + +The adjusted `min_range` and `max_range` are returned as outputs 2 and 3 of +this operation. These outputs should be used as the range for any further +calculations. + + +*narrow_range (bool) attribute* + +If true, we do not use the minimum quantized value. +i.e. for int8 the quantized output, it would be restricted to the range +-127..127 instead of the full -128..127 range. +This is provided for compatibility with certain inference backends. +(Only applies to SCALED mode) + + +*axis (int) attribute* + +An optional `axis` attribute can specify a dimension index of the input tensor, +such that quantization ranges will be calculated and applied separately for each +slice of the tensor along that dimension. This is useful for per-channel +quantization. + +If axis is specified, min_range and max_range + +if `axis`=None, per-tensor quantization is performed as normal. + + +*ensure_minimum_range (float) attribute* + +Ensures the minimum quantization range is at least this value. +The legacy default value for this is 0.01, but it is strongly suggested to +set it to 0 for new uses. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `float32`. +
+`min_range` + +A `Tensor` of type `float32`. +The minimum value of the quantization range. This value may be adjusted by the +op depending on other parameters. The adjusted value is written to `output_min`. +If the `axis` attribute is specified, this must be a 1-D tensor whose size +matches the `axis` dimension of the input and output tensors. +
+`max_range` + +A `Tensor` of type `float32`. +The maximum value of the quantization range. This value may be adjusted by the +op depending on other parameters. The adjusted value is written to `output_max`. +If the `axis` attribute is specified, this must be a 1-D tensor whose size +matches the `axis` dimension of the input and output tensors. +
+`T` + +A tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. +
+`mode` + +An optional `string` from: `"MIN_COMBINED", "MIN_FIRST", "SCALED"`. Defaults to `"MIN_COMBINED"`. +
+`round_mode` + +An optional `string` from: `"HALF_AWAY_FROM_ZERO", "HALF_TO_EVEN"`. Defaults to `"HALF_AWAY_FROM_ZERO"`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`axis` + +An optional `int`. Defaults to `-1`. +
+`ensure_minimum_range` + +An optional `float`. Defaults to `0.01`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_min, output_max). +
+`output` + +A `Tensor` of type `T`. +
+`output_min` + +A `Tensor` of type `float32`. +
+`output_max` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/quantization/quantize_and_dequantize.md b/site/en/api_docs/python/tf/quantization/quantize_and_dequantize.md new file mode 100644 index 00000000000..3f624e78d4f --- /dev/null +++ b/site/en/api_docs/python/tf/quantization/quantize_and_dequantize.md @@ -0,0 +1,150 @@ +description: Quantizes then dequantizes a tensor. + +
+ + +
+ +# tf.quantization.quantize_and_dequantize + + + + + + + + + +Quantizes then dequantizes a tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` to quantize and dequantize. +
+`input_min` + +If range_given=True, the minimum input value that needs to be +represented in the quantized representation. If axis is specified, this +should be a vector of minimum values for each slice along axis. +
+`input_max` + +If range_given=True, the maximum input value that needs to be +represented in the quantized representation. If axis is specified, this +should be a vector of maximum values for each slice along axis. +
+`signed_input` + +Whether the quantization is signed (`True`) or unsigned (`False`). +
+`num_bits` + +The bitwidth of the quantization. +
+`range_given` + +If true use `input_min` and `input_max` for the range of the +input, otherwise determine min and max from the input `Tensor`. +
+`round_mode` + +Rounding mode when rounding from float values to quantized ones; +one of ['HALF_TO_EVEN', 'HALF_UP']. +
+`name` + +Optional name for the operation. +
+`narrow_range` + +If true, then the absolute value of the quantized minimum +value is the same as the quantized maximum value, instead of 1 greater. +For example, for 8-bit quantization the minimum value is -127 instead of -128. +
+`axis` + +Integer. If specified, refers to a dimension of the input tensor, such +that quantization will be per slice along that dimension. +
+ + + + + + + + + + + +
+A `Tensor`. Each element is the result of quantizing and dequantizing the +corresponding element of `input`. +
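
A minimal eager-mode sketch (TF 2.x) that measures the error introduced by simulated 8-bit quantization, letting the op derive the range from the input (`range_given=False`); the tensor values are illustrative:

```python
import tensorflow as tf

x = tf.constant([-1.0, -0.33, 0.0, 0.42, 1.0])

y = tf.quantization.quantize_and_dequantize(
    x, input_min=0.0, input_max=0.0, range_given=False, num_bits=8)

print(y.numpy())                              # same shape and dtype as x
print(tf.reduce_max(tf.abs(y - x)).numpy())   # worst-case quantization error
```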
+ diff --git a/site/en/api_docs/python/tf/quantization/quantized_concat.md b/site/en/api_docs/python/tf/quantization/quantized_concat.md new file mode 100644 index 00000000000..29dbe384090 --- /dev/null +++ b/site/en/api_docs/python/tf/quantization/quantized_concat.md @@ -0,0 +1,125 @@ +description: Concatenates quantized tensors along one dimension. + +
+ + +
+ +# tf.quantization.quantized_concat + + + + + + + + + +Concatenates quantized tensors along one dimension. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`concat_dim` + +A `Tensor` of type `int32`. +0-D. The dimension along which to concatenate. Must be in the +range [0, rank(values)). +
+`values` + +A list of at least 2 `Tensor` objects with the same type. +The `N` Tensors to concatenate. Their ranks and types must match, +and their sizes must match in all dimensions except `concat_dim`. +
+`input_mins` + +A list with the same length as `values` of `Tensor` objects with type `float32`. +The minimum scalar values for each of the input tensors. +
+`input_maxes` + +A list with the same length as `values` of `Tensor` objects with type `float32`. +The maximum scalar values for each of the input tensors. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_min, output_max). +
+`output` + +A `Tensor`. Has the same type as `values`. +
+`output_min` + +A `Tensor` of type `float32`. +
+`output_max` + +A `Tensor` of type `float32`. +
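
A minimal eager-mode sketch (TF 2.x): two float tensors are first quantized to `quint8` over the same range, and the quantized results are then concatenated along dimension 0; the tensor values are illustrative:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0, 4.0]])

qa = tf.quantization.quantize(a, 0.0, 6.0, tf.quint8)
qb = tf.quantization.quantize(b, 0.0, 6.0, tf.quint8)

output, output_min, output_max = tf.quantization.quantized_concat(
    concat_dim=0,
    values=[qa.output, qb.output],
    input_mins=[qa.output_min, qb.output_min],
    input_maxes=[qa.output_max, qb.output_max])

print(output.shape)                            # (2, 2), dtype quint8
print(output_min.numpy(), output_max.numpy())  # range covering both inputs
```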
+ diff --git a/site/en/api_docs/python/tf/queue.md b/site/en/api_docs/python/tf/queue.md new file mode 100644 index 00000000000..38f4a7f7121 --- /dev/null +++ b/site/en/api_docs/python/tf/queue.md @@ -0,0 +1,33 @@ +description: Public API for tf.queue namespace. + +
+ + +
+ +# Module: tf.queue + + + + + + + + + +Public API for tf.queue namespace. + + + +## Classes + +[`class FIFOQueue`](../tf/queue/FIFOQueue.md): A queue implementation that dequeues elements in first-in first-out order. + +[`class PaddingFIFOQueue`](../tf/queue/PaddingFIFOQueue.md): A FIFOQueue that supports batching variable-sized tensors by padding. + +[`class PriorityQueue`](../tf/queue/PriorityQueue.md): A queue implementation that dequeues elements in prioritized order. + +[`class QueueBase`](../tf/queue/QueueBase.md): Base class for queue implementations. + +[`class RandomShuffleQueue`](../tf/queue/RandomShuffleQueue.md): A queue implementation that dequeues elements in a random order. + diff --git a/site/en/api_docs/python/tf/queue/FIFOQueue.md b/site/en/api_docs/python/tf/queue/FIFOQueue.md new file mode 100644 index 00000000000..063a2cbc463 --- /dev/null +++ b/site/en/api_docs/python/tf/queue/FIFOQueue.md @@ -0,0 +1,707 @@ +description: A queue implementation that dequeues elements in first-in first-out order. + +
+ + + + + + + + + + + + +
+ +# tf.queue.FIFOQueue + + + + + + + + + +A queue implementation that dequeues elements in first-in first-out order. + +Inherits From: [`QueueBase`](../../tf/queue/QueueBase.md) + + + + + + + + + +See tf.queue.QueueBase for a description of the methods on +this class. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`capacity` + +An integer. The upper bound on the number of elements +that may be stored in this queue. +
+`dtypes` + +A list of `DType` objects. The length of `dtypes` must equal +the number of tensors in each queue element. +
+`shapes` + +(Optional.) A list of fully-defined `TensorShape` objects +with the same length as `dtypes`, or `None`. +
+`names` + +(Optional.) A list of strings naming the components in the queue +with the same length as `dtypes`, or `None`. If specified, the dequeue +methods return a dictionary with the names as keys. +
+`shared_name` + +(Optional.) If non-empty, this queue will be shared under +the given name across multiple sessions. +
+`name` + +Optional name for the queue operation. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +The list of dtypes for each component of a queue element. +
+`name` + +The name of the underlying queue. +
+`names` + +The list of names for each component of a queue element. +
+`queue_ref` + +The underlying queue reference. +
+`shapes` + +The list of shapes for each component of a queue element. +
+ + + +## Methods + +
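
The following is a minimal eager-mode sketch (TF 2.x) exercising a few of the methods below; elements come out in the order they were enqueued, and the values are illustrative:

```python
import tensorflow as tf

q = tf.queue.FIFOQueue(capacity=3, dtypes=tf.int32)

q.enqueue(1)
q.enqueue_many(tf.constant([2, 3]))

print(q.size().numpy())     # 3
print(q.dequeue().numpy())  # 1 -- first in, first out
print(q.dequeue().numpy())  # 2
```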

close

+ +View source + + + +Closes this queue. + +This operation signals that no more elements will be enqueued in +the given queue. Subsequent `enqueue` and `enqueue_many` +operations will fail. Subsequent `dequeue` and `dequeue_many` +operations will continue to succeed if sufficient elements remain +in the queue. Subsequently dequeue and dequeue_many operations +that would otherwise block waiting for more elements (if close +hadn't been called) will now fail immediately. + +If `cancel_pending_enqueues` is `True`, all pending requests will also +be canceled. + + + + + + + + + + + + + +
Args
+`cancel_pending_enqueues` + +(Optional.) A boolean, defaulting to +`False` (described above). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that closes the queue. +
+ + + +

dequeue

+ +View source + + + +Dequeues one element from this queue. + +If the queue is empty when this operation executes, it will block +until there is an element to dequeue. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed, the queue is empty, and there are no pending +enqueue operations that can fulfill this request, +tf.errors.OutOfRangeError will be raised. If the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tuple of tensors that was dequeued. +
+ + + +

dequeue_many

+ +View source + + + +Dequeues and concatenates `n` elements from this queue. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. All of the +components in the dequeued tuple will have size `n` in the 0th dimension. + +If the queue is closed and there are less than `n` elements left, then an +`OutOfRange` exception is raised. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed, the queue contains fewer than `n` elements, and +there are no pending enqueue operations that can fulfill this +request, tf.errors.OutOfRangeError will be raised. If the +session is `tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`n` + +A scalar `Tensor` containing the number of elements to dequeue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The list of concatenated tensors that was dequeued. +
+ + + +

dequeue_up_to

+ +View source + + + +Dequeues and concatenates `n` elements from this queue. + +**Note** This operation is not supported by all queues. If a queue does not +support DequeueUpTo, then a tf.errors.UnimplementedError is raised. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. If the queue +has not been closed, all of the components in the dequeued tuple +will have size `n` in the 0th dimension. + +If the queue is closed and there are more than `0` but fewer than +`n` elements remaining, then instead of raising a +tf.errors.OutOfRangeError like `tf.QueueBase.dequeue_many`, +less than `n` elements are returned immediately. If the queue is +closed and there are `0` elements left in the queue, then a +tf.errors.OutOfRangeError is raised just like in `dequeue_many`. +Otherwise the behavior is identical to `dequeue_many`. + + + + + + + + + + + + + +
Args
+`n` + +A scalar `Tensor` containing the number of elements to dequeue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tuple of concatenated tensors that was dequeued. +
+ + + +

enqueue

+ +View source + + + +Enqueues one element to this queue. + +If the queue is full when this operation executes, it will block +until the element has been enqueued. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed before this operation runs, +tf.errors.CancelledError will be raised. If this operation is +blocked, and either (i) the queue is closed by a close operation +with `cancel_pending_enqueues=True`, or (ii) the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`vals` + +A tensor, a list or tuple of tensors, or a dictionary containing +the values to enqueue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that enqueues a new tuple of tensors to the queue. +
+ + + +

enqueue_many

+ +View source + + + +Enqueues zero or more elements to this queue. + +This operation slices each component tensor along the 0th dimension to +make multiple queue elements. All of the tensors in `vals` must have the +same size in the 0th dimension. + +If the queue is full when this operation executes, it will block +until all of the elements have been enqueued. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed before this operation runs, +tf.errors.CancelledError will be raised. If this operation is +blocked, and either (i) the queue is closed by a close operation +with `cancel_pending_enqueues=True`, or (ii) the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`vals` + +A tensor, a list or tuple of tensors, or a dictionary +from which the queue elements are taken. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that enqueues a batch of tuples of tensors to the queue. +
+ + + +

from_list

+ +View source + + + +Create a queue using the queue reference from `queues[index]`. + + + + + + + + + + + + + + +
Args
+`index` + +An integer scalar tensor that determines the input that gets +selected. +
+`queues` + +A list of `QueueBase` objects. +
+ + + + + + + + + + + +
Returns
+A `QueueBase` object. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +When `queues` is not a list of `QueueBase` objects, +or when the data types of `queues` are not all the same. +
+ + + +

is_closed

+ +View source + + + +Returns true if queue is closed. + +This operation returns true if the queue is closed and false if the queue +is open. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+True if the queue is closed and false if the queue is open. +
+ + + +

size

+ +View source + + + +Compute the number of elements in this queue. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A scalar tensor containing the number of elements in this queue. +
+ + + + + diff --git a/site/en/api_docs/python/tf/queue/PaddingFIFOQueue.md b/site/en/api_docs/python/tf/queue/PaddingFIFOQueue.md new file mode 100644 index 00000000000..6e51b78184c --- /dev/null +++ b/site/en/api_docs/python/tf/queue/PaddingFIFOQueue.md @@ -0,0 +1,732 @@ +description: A FIFOQueue that supports batching variable-sized tensors by padding. + +
+ + + + + + + + + + + + +
+ +# tf.queue.PaddingFIFOQueue + + + + + + + + + +A FIFOQueue that supports batching variable-sized tensors by padding. + +Inherits From: [`QueueBase`](../../tf/queue/QueueBase.md) + + + + + + + + + +A `PaddingFIFOQueue` may contain components with dynamic shape, while also +supporting `dequeue_many`. See the constructor for more details. + +See tf.queue.QueueBase for a description of the methods on +this class. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`capacity` + +An integer. The upper bound on the number of elements +that may be stored in this queue. +
+`dtypes` + +A list of `DType` objects. The length of `dtypes` must equal +the number of tensors in each queue element. +
+`shapes` + +A list of `TensorShape` objects, with the same length as +`dtypes`. Any dimension in the `TensorShape` containing value +`None` is dynamic and allows values to be enqueued with +variable size in that dimension. +
+`names` + +(Optional.) A list of strings naming the components in the queue +with the same length as `dtypes`, or `None`. If specified, the dequeue +methods return a dictionary with the names as keys. +
+`shared_name` + +(Optional.) If non-empty, this queue will be shared under +the given name across multiple sessions. +
+`name` + +Optional name for the queue operation. +
+ + + + + + + + + + + + +
+`ValueError` + +If shapes is not a list of shapes, or the lengths of dtypes +and shapes do not match, or if names is specified and the lengths of +dtypes and names do not match. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +The list of dtypes for each component of a queue element. +
+`name` + +The name of the underlying queue. +
+`names` + +The list of names for each component of a queue element. +
+`queue_ref` + +The underlying queue reference. +
+`shapes` + +The list of shapes for each component of a queue element. +
+ + + +## Methods + +
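
The following is a minimal eager-mode sketch (TF 2.x); a `None` dimension in `shapes` marks the component as variable-sized, and `dequeue_many` pads the batch to the longest element. The values are illustrative:

```python
import tensorflow as tf

q = tf.queue.PaddingFIFOQueue(capacity=4, dtypes=tf.int32, shapes=[[None]])

q.enqueue(tf.constant([1, 2, 3]))
q.enqueue(tf.constant([4, 5]))

batch = q.dequeue_many(2)
print(batch.numpy())
# [[1 2 3]
#  [4 5 0]]   <- the shorter element is zero-padded
```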

close

+ +View source + + + +Closes this queue. + +This operation signals that no more elements will be enqueued in +the given queue. Subsequent `enqueue` and `enqueue_many` +operations will fail. Subsequent `dequeue` and `dequeue_many` +operations will continue to succeed if sufficient elements remain +in the queue. Subsequently dequeue and dequeue_many operations +that would otherwise block waiting for more elements (if close +hadn't been called) will now fail immediately. + +If `cancel_pending_enqueues` is `True`, all pending requests will also +be canceled. + + + + + + + + + + + + + +
Args
+`cancel_pending_enqueues` + +(Optional.) A boolean, defaulting to +`False` (described above). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that closes the queue. +
+ + + +

dequeue

+ +View source + + + +Dequeues one element from this queue. + +If the queue is empty when this operation executes, it will block +until there is an element to dequeue. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed, the queue is empty, and there are no pending +enqueue operations that can fulfill this request, +tf.errors.OutOfRangeError will be raised. If the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tuple of tensors that was dequeued. +
+ + + +

dequeue_many

+ +View source + + + +Dequeues and concatenates `n` elements from this queue. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. All of the +components in the dequeued tuple will have size `n` in the 0th dimension. + +If the queue is closed and there are less than `n` elements left, then an +`OutOfRange` exception is raised. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed, the queue contains fewer than `n` elements, and +there are no pending enqueue operations that can fulfill this +request, tf.errors.OutOfRangeError will be raised. If the +session is `tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`n` + +A scalar `Tensor` containing the number of elements to dequeue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The list of concatenated tensors that was dequeued. +
+ + + +

dequeue_up_to

+ +View source + + + +Dequeues and concatenates `n` elements from this queue. + +**Note** This operation is not supported by all queues. If a queue does not +support DequeueUpTo, then a tf.errors.UnimplementedError is raised. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. If the queue +has not been closed, all of the components in the dequeued tuple +will have size `n` in the 0th dimension. + +If the queue is closed and there are more than `0` but fewer than +`n` elements remaining, then instead of raising a +tf.errors.OutOfRangeError like `tf.QueueBase.dequeue_many`, +less than `n` elements are returned immediately. If the queue is +closed and there are `0` elements left in the queue, then a +tf.errors.OutOfRangeError is raised just like in `dequeue_many`. +Otherwise the behavior is identical to `dequeue_many`. + + + + + + + + + + + + + +
Args
+`n` + +A scalar `Tensor` containing the number of elements to dequeue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tuple of concatenated tensors that was dequeued. +
+ + + +

enqueue

+ +View source + + + +Enqueues one element to this queue. + +If the queue is full when this operation executes, it will block +until the element has been enqueued. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed before this operation runs, +tf.errors.CancelledError will be raised. If this operation is +blocked, and either (i) the queue is closed by a close operation +with `cancel_pending_enqueues=True`, or (ii) the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`vals` + +A tensor, a list or tuple of tensors, or a dictionary containing +the values to enqueue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that enqueues a new tuple of tensors to the queue. +
+ + + +

enqueue_many

+ +View source + + + +Enqueues zero or more elements to this queue. + +This operation slices each component tensor along the 0th dimension to +make multiple queue elements. All of the tensors in `vals` must have the +same size in the 0th dimension. + +If the queue is full when this operation executes, it will block +until all of the elements have been enqueued. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed before this operation runs, +tf.errors.CancelledError will be raised. If this operation is +blocked, and either (i) the queue is closed by a close operation +with `cancel_pending_enqueues=True`, or (ii) the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`vals` + +A tensor, a list or tuple of tensors, or a dictionary +from which the queue elements are taken. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that enqueues a batch of tuples of tensors to the queue. +
+ + + +

from_list

+ +View source + + + +Create a queue using the queue reference from `queues[index]`. + + + + + + + + + + + + + + +
Args
+`index` + +An integer scalar tensor that determines the input that gets +selected. +
+`queues` + +A list of `QueueBase` objects. +
+ + + + + + + + + + + +
Returns
+A `QueueBase` object. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +When `queues` is not a list of `QueueBase` objects, +or when the data types of `queues` are not all the same. +
+ + + +

is_closed

+ +View source + + + +Returns true if queue is closed. + +This operation returns true if the queue is closed and false if the queue +is open. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+True if the queue is closed and false if the queue is open. +
+ + + +

size

+ +View source + + + +Compute the number of elements in this queue. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A scalar tensor containing the number of elements in this queue. +
+ + + + + diff --git a/site/en/api_docs/python/tf/queue/PriorityQueue.md b/site/en/api_docs/python/tf/queue/PriorityQueue.md new file mode 100644 index 00000000000..1bec0b78605 --- /dev/null +++ b/site/en/api_docs/python/tf/queue/PriorityQueue.md @@ -0,0 +1,710 @@ +description: A queue implementation that dequeues elements in prioritized order. + +
+ + + + + + + + + + + + +
+ +# tf.queue.PriorityQueue + + + + + + + + + +A queue implementation that dequeues elements in prioritized order. + +Inherits From: [`QueueBase`](../../tf/queue/QueueBase.md) + + + + + + + + + +See tf.queue.QueueBase for a description of the methods on +this class. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`capacity` + +An integer. The upper bound on the number of elements +that may be stored in this queue. +
+`types` + +A list of `DType` objects. The length of `types` must equal +the number of tensors in each queue element, not counting the first +(priority) component. The first tensor in each element is the priority, +which must be of type int64. +
+`shapes` + +(Optional.) A list of fully-defined `TensorShape` objects, +with the same length as `types`, or `None`. +
+`names` + +(Optional.) A list of strings naming the components in the queue +with the same length as `dtypes`, or `None`. If specified, the dequeue +methods return a dictionary with the names as keys. +
+`shared_name` + +(Optional.) If non-empty, this queue will be shared under +the given name across multiple sessions. +
+`name` + +Optional name for the queue operation. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +The list of dtypes for each component of a queue element. +
+`name` + +The name of the underlying queue. +
+`names` + +The list of names for each component of a queue element. +
+`queue_ref` + +The underlying queue reference. +
+`shapes` + +The list of shapes for each component of a queue element. +
+ + + +## Methods + +
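
The following is a minimal eager-mode sketch (TF 2.x); every enqueued element carries an `int64` priority as its first component, and elements are dequeued lowest priority first. The values are illustrative:

```python
import tensorflow as tf

q = tf.queue.PriorityQueue(capacity=10, types=[tf.string], shapes=[[]])

q.enqueue((tf.constant(2, tf.int64), tf.constant("second")))
q.enqueue((tf.constant(1, tf.int64), tf.constant("first")))

priority, value = q.dequeue()
print(priority.numpy(), value.numpy())   # 1 b'first'
```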

close

+ +View source + + + +Closes this queue. + +This operation signals that no more elements will be enqueued in +the given queue. Subsequent `enqueue` and `enqueue_many` +operations will fail. Subsequent `dequeue` and `dequeue_many` +operations will continue to succeed if sufficient elements remain +in the queue. Subsequently dequeue and dequeue_many operations +that would otherwise block waiting for more elements (if close +hadn't been called) will now fail immediately. + +If `cancel_pending_enqueues` is `True`, all pending requests will also +be canceled. + + + + + + + + + + + + + +
Args
+`cancel_pending_enqueues` + +(Optional.) A boolean, defaulting to +`False` (described above). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that closes the queue. +
+ + + +

dequeue

+ +View source + + + +Dequeues one element from this queue. + +If the queue is empty when this operation executes, it will block +until there is an element to dequeue. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed, the queue is empty, and there are no pending +enqueue operations that can fulfill this request, +tf.errors.OutOfRangeError will be raised. If the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tuple of tensors that was dequeued. +
+ + + +

dequeue_many

+ +View source + + + +Dequeues and concatenates `n` elements from this queue. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. All of the +components in the dequeued tuple will have size `n` in the 0th dimension. + +If the queue is closed and there are less than `n` elements left, then an +`OutOfRange` exception is raised. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed, the queue contains fewer than `n` elements, and +there are no pending enqueue operations that can fulfill this +request, tf.errors.OutOfRangeError will be raised. If the +session is `tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`n` + +A scalar `Tensor` containing the number of elements to dequeue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The list of concatenated tensors that was dequeued. +
+ + + +

dequeue_up_to

+ +View source + + + +Dequeues and concatenates `n` elements from this queue. + +**Note** This operation is not supported by all queues. If a queue does not +support DequeueUpTo, then a tf.errors.UnimplementedError is raised. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. If the queue +has not been closed, all of the components in the dequeued tuple +will have size `n` in the 0th dimension. + +If the queue is closed and there are more than `0` but fewer than +`n` elements remaining, then instead of raising a +tf.errors.OutOfRangeError like `tf.QueueBase.dequeue_many`, +less than `n` elements are returned immediately. If the queue is +closed and there are `0` elements left in the queue, then a +tf.errors.OutOfRangeError is raised just like in `dequeue_many`. +Otherwise the behavior is identical to `dequeue_many`. + + + + + + + + + + + + + +
Args
+`n` + +A scalar `Tensor` containing the number of elements to dequeue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tuple of concatenated tensors that was dequeued. +
+ + + +

enqueue

+ +View source + + + +Enqueues one element to this queue. + +If the queue is full when this operation executes, it will block +until the element has been enqueued. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed before this operation runs, +tf.errors.CancelledError will be raised. If this operation is +blocked, and either (i) the queue is closed by a close operation +with `cancel_pending_enqueues=True`, or (ii) the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`vals` + +A tensor, a list or tuple of tensors, or a dictionary containing +the values to enqueue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that enqueues a new tuple of tensors to the queue. +
+ + + +

enqueue_many

+ +View source + + + +Enqueues zero or more elements to this queue. + +This operation slices each component tensor along the 0th dimension to +make multiple queue elements. All of the tensors in `vals` must have the +same size in the 0th dimension. + +If the queue is full when this operation executes, it will block +until all of the elements have been enqueued. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed before this operation runs, +tf.errors.CancelledError will be raised. If this operation is +blocked, and either (i) the queue is closed by a close operation +with `cancel_pending_enqueues=True`, or (ii) the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`vals` + +A tensor, a list or tuple of tensors, or a dictionary +from which the queue elements are taken. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that enqueues a batch of tuples of tensors to the queue. +
+ + + +

from_list

+ +View source + + + +Create a queue using the queue reference from `queues[index]`. + + + + + + + + + + + + + + +
Args
+`index` + +An integer scalar tensor that determines the input that gets +selected. +
+`queues` + +A list of `QueueBase` objects. +
+ + + + + + + + + + + +
Returns
+A `QueueBase` object. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +When `queues` is not a list of `QueueBase` objects, +or when the data types of `queues` are not all the same. +
+ + + +

is_closed

+ +View source + + + +Returns true if queue is closed. + +This operation returns true if the queue is closed and false if the queue +is open. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+True if the queue is closed and false if the queue is open. +
+ + + +

size

+ +View source + + + +Compute the number of elements in this queue. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A scalar tensor containing the number of elements in this queue. +
+ + + + + diff --git a/site/en/api_docs/python/tf/queue/QueueBase.md b/site/en/api_docs/python/tf/queue/QueueBase.md new file mode 100644 index 00000000000..ac4cc6fa97a --- /dev/null +++ b/site/en/api_docs/python/tf/queue/QueueBase.md @@ -0,0 +1,720 @@ +description: Base class for queue implementations. + +
+ + + + + + + + + + + + +
+ +# tf.queue.QueueBase + + + + + + + + + +Base class for queue implementations. + + + + + + + + + +A queue is a TensorFlow data structure that stores tensors across +multiple steps, and exposes operations that enqueue and dequeue +tensors. + +Each queue element is a tuple of one or more tensors, where each +tuple component has a static dtype, and may have a static shape. The +queue implementations support versions of enqueue and dequeue that +handle single elements, versions that support enqueuing and +dequeuing a batch of elements at once. + +See tf.queue.FIFOQueue and +tf.queue.RandomShuffleQueue for concrete +implementations of this class, and instructions on how to create +them. + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of types. The length of dtypes must equal the number +of tensors in each element. +
+`shapes` + +Constraints on the shapes of tensors in an element: +a list of shape tuples or None. This list is the same length +as dtypes. If the shape of any tensor in the element is constrained, +all must be; shapes can be None if the shapes should not be constrained. +
+`names` + +Optional list of names. If provided, the `enqueue()` and +`dequeue()` methods will use dictionaries with these names as keys. +Must be None or a list or tuple of the same length as `dtypes`. +
+`queue_ref` + +The queue reference, i.e. the output of the queue op. +
+ + + + + + + + + + + + +
+`ValueError` + +If one of the arguments is invalid. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +The list of dtypes for each component of a queue element. +
+`name` + +The name of the underlying queue. +
+`names` + +The list of names for each component of a queue element. +
+`queue_ref` + +The underlying queue reference. +
+`shapes` + +The list of shapes for each component of a queue element. +
+ + + +## Methods + +
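
The following is a minimal eager-mode sketch (TF 2.x) of the shared interface, using the concrete `tf.queue.FIFOQueue` subclass; it shows that remaining elements can still be dequeued after `close()`. The values are illustrative:

```python
import tensorflow as tf

q = tf.queue.FIFOQueue(capacity=5, dtypes=tf.string, shapes=[[]])

q.enqueue_many(tf.constant(["a", "b", "c"]))
q.close()                          # no further enqueues are accepted

print(q.is_closed().numpy())       # True
# dequeue_up_to returns whatever is left instead of raising OutOfRangeError.
print(q.dequeue_up_to(5).numpy())  # [b'a' b'b' b'c']
```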

close

+ +View source + + + +Closes this queue. + +This operation signals that no more elements will be enqueued in +the given queue. Subsequent `enqueue` and `enqueue_many` +operations will fail. Subsequent `dequeue` and `dequeue_many` +operations will continue to succeed if sufficient elements remain +in the queue. Subsequently dequeue and dequeue_many operations +that would otherwise block waiting for more elements (if close +hadn't been called) will now fail immediately. + +If `cancel_pending_enqueues` is `True`, all pending requests will also +be canceled. + + + + + + + + + + + + + +
Args
+`cancel_pending_enqueues` + +(Optional.) A boolean, defaulting to +`False` (described above). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that closes the queue. +
+ + + +

dequeue

+ +View source + + + +Dequeues one element from this queue. + +If the queue is empty when this operation executes, it will block +until there is an element to dequeue. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed, the queue is empty, and there are no pending +enqueue operations that can fulfill this request, +tf.errors.OutOfRangeError will be raised. If the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tuple of tensors that was dequeued. +
+ + + +

dequeue_many

+ +View source + + + +Dequeues and concatenates `n` elements from this queue. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. All of the +components in the dequeued tuple will have size `n` in the 0th dimension. + +If the queue is closed and there are less than `n` elements left, then an +`OutOfRange` exception is raised. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed, the queue contains fewer than `n` elements, and +there are no pending enqueue operations that can fulfill this +request, tf.errors.OutOfRangeError will be raised. If the +session is `tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`n` + +A scalar `Tensor` containing the number of elements to dequeue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The list of concatenated tensors that was dequeued. +
+ + + +

dequeue_up_to

+ +View source + + + +Dequeues and concatenates `n` elements from this queue. + +**Note** This operation is not supported by all queues. If a queue does not +support DequeueUpTo, then a tf.errors.UnimplementedError is raised. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. If the queue +has not been closed, all of the components in the dequeued tuple +will have size `n` in the 0th dimension. + +If the queue is closed and there are more than `0` but fewer than +`n` elements remaining, then instead of raising a +tf.errors.OutOfRangeError like `tf.QueueBase.dequeue_many`, +less than `n` elements are returned immediately. If the queue is +closed and there are `0` elements left in the queue, then a +tf.errors.OutOfRangeError is raised just like in `dequeue_many`. +Otherwise the behavior is identical to `dequeue_many`. + + + + + + + + + + + + + +
Args
+`n` + +A scalar `Tensor` containing the number of elements to dequeue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tuple of concatenated tensors that was dequeued. +
+ + + +

enqueue

+ +View source + + + +Enqueues one element to this queue. + +If the queue is full when this operation executes, it will block +until the element has been enqueued. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed before this operation runs, +tf.errors.CancelledError will be raised. If this operation is +blocked, and either (i) the queue is closed by a close operation +with `cancel_pending_enqueues=True`, or (ii) the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`vals` + +A tensor, a list or tuple of tensors, or a dictionary containing +the values to enqueue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that enqueues a new tuple of tensors to the queue. +
+ + + +

enqueue_many

+ +View source + + + +Enqueues zero or more elements to this queue. + +This operation slices each component tensor along the 0th dimension to +make multiple queue elements. All of the tensors in `vals` must have the +same size in the 0th dimension. + +If the queue is full when this operation executes, it will block +until all of the elements have been enqueued. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed before this operation runs, +tf.errors.CancelledError will be raised. If this operation is +blocked, and either (i) the queue is closed by a close operation +with `cancel_pending_enqueues=True`, or (ii) the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`vals` + +A tensor, a list or tuple of tensors, or a dictionary +from which the queue elements are taken. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that enqueues a batch of tuples of tensors to the queue. +
+ + + +

from_list

+ +View source + + + +Create a queue using the queue reference from `queues[index]`. + + + + + + + + + + + + + + +
Args
+`index` + +An integer scalar tensor that determines the input that gets +selected. +
+`queues` + +A list of `QueueBase` objects. +
+ + + + + + + + + + + +
Returns
+A `QueueBase` object. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +When `queues` is not a list of `QueueBase` objects, +or when the data types of `queues` are not all the same. +
+ + + +

is_closed

+ +View source + + + +Returns true if queue is closed. + +This operation returns true if the queue is closed and false if the queue +is open. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+True if the queue is closed and false if the queue is open. +
+ + + +

size

+ +View source + + + +Compute the number of elements in this queue. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A scalar tensor containing the number of elements in this queue. +
+ + + + + diff --git a/site/en/api_docs/python/tf/queue/RandomShuffleQueue.md b/site/en/api_docs/python/tf/queue/RandomShuffleQueue.md new file mode 100644 index 00000000000..d049587e89e --- /dev/null +++ b/site/en/api_docs/python/tf/queue/RandomShuffleQueue.md @@ -0,0 +1,724 @@ +description: A queue implementation that dequeues elements in a random order. + +
+ + + + + + + + + + + + +
+ +# tf.queue.RandomShuffleQueue + + + + + + + + + +A queue implementation that dequeues elements in a random order. + +Inherits From: [`QueueBase`](../../tf/queue/QueueBase.md) + + + + + + + + + +See tf.queue.QueueBase for a description of the methods on +this class. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`capacity` + +An integer. The upper bound on the number of elements +that may be stored in this queue. +
+`min_after_dequeue` + +An integer (described above). +
+`dtypes` + +A list of `DType` objects. The length of `dtypes` must equal +the number of tensors in each queue element. +
+`shapes` + +(Optional.) A list of fully-defined `TensorShape` objects +with the same length as `dtypes`, or `None`. +
+`names` + +(Optional.) A list of strings naming the components in the queue +with the same length as `dtypes`, or `None`. If specified, the dequeue +methods return a dictionary with the names as keys. +
+`seed` + +A Python integer. Used to create a random seed. See +tf.compat.v1.set_random_seed +for behavior. +
+`shared_name` + +(Optional.) If non-empty, this queue will be shared under +the given name across multiple sessions. +
+`name` + +Optional name for the queue operation. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +The list of dtypes for each component of a queue element. +
+`name` + +The name of the underlying queue. +
+`names` + +The list of names for each component of a queue element. +
+`queue_ref` + +The underlying queue reference. +
+`shapes` + +The list of shapes for each component of a queue element. +
+ + + +## Methods + +
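
The following is a minimal eager-mode sketch (TF 2.x); `dequeue` returns a randomly chosen element and blocks unless at least `min_after_dequeue` elements would remain afterwards. The values are illustrative and the dequeue order is random:

```python
import tensorflow as tf

q = tf.queue.RandomShuffleQueue(
    capacity=10, min_after_dequeue=2, dtypes=tf.int32, shapes=[[]], seed=7)

q.enqueue_many(tf.constant([1, 2, 3, 4, 5]))

# With 5 elements enqueued, up to 3 dequeues can proceed without blocking.
print(q.dequeue().numpy(), q.dequeue().numpy(), q.dequeue().numpy())
```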

close

+ +View source + + + +Closes this queue. + +This operation signals that no more elements will be enqueued in +the given queue. Subsequent `enqueue` and `enqueue_many` +operations will fail. Subsequent `dequeue` and `dequeue_many` +operations will continue to succeed if sufficient elements remain +in the queue. Subsequently dequeue and dequeue_many operations +that would otherwise block waiting for more elements (if close +hadn't been called) will now fail immediately. + +If `cancel_pending_enqueues` is `True`, all pending requests will also +be canceled. + + + + + + + + + + + + + +
Args
+`cancel_pending_enqueues` + +(Optional.) A boolean, defaulting to +`False` (described above). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that closes the queue. +
+ + + +

dequeue

+ +View source + + + +Dequeues one element from this queue. + +If the queue is empty when this operation executes, it will block +until there is an element to dequeue. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed, the queue is empty, and there are no pending +enqueue operations that can fulfill this request, +tf.errors.OutOfRangeError will be raised. If the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tuple of tensors that was dequeued. +
+ + + +

dequeue_many

+ +View source + + + +Dequeues and concatenates `n` elements from this queue. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. All of the +components in the dequeued tuple will have size `n` in the 0th dimension. + +If the queue is closed and there are less than `n` elements left, then an +`OutOfRange` exception is raised. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed, the queue contains fewer than `n` elements, and +there are no pending enqueue operations that can fulfill this +request, tf.errors.OutOfRangeError will be raised. If the +session is `tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`n` + +A scalar `Tensor` containing the number of elements to dequeue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The list of concatenated tensors that was dequeued. +
+ + + +

dequeue_up_to

+ +View source + + + +Dequeues and concatenates `n` elements from this queue. + +**Note** This operation is not supported by all queues. If a queue does not +support DequeueUpTo, then a tf.errors.UnimplementedError is raised. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. If the queue +has not been closed, all of the components in the dequeued tuple +will have size `n` in the 0th dimension. + +If the queue is closed and there are more than `0` but fewer than +`n` elements remaining, then instead of raising a +tf.errors.OutOfRangeError like `tf.QueueBase.dequeue_many`, +less than `n` elements are returned immediately. If the queue is +closed and there are `0` elements left in the queue, then a +tf.errors.OutOfRangeError is raised just like in `dequeue_many`. +Otherwise the behavior is identical to `dequeue_many`. + + + + + + + + + + + + + +
Args
+`n` + +A scalar `Tensor` containing the number of elements to dequeue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The tuple of concatenated tensors that was dequeued. +
+ + + +

enqueue

+ +View source + + + +Enqueues one element to this queue. + +If the queue is full when this operation executes, it will block +until the element has been enqueued. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed before this operation runs, +tf.errors.CancelledError will be raised. If this operation is +blocked, and either (i) the queue is closed by a close operation +with `cancel_pending_enqueues=True`, or (ii) the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`vals` + +A tensor, a list or tuple of tensors, or a dictionary containing +the values to enqueue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that enqueues a new tuple of tensors to the queue. +
+ + + +

enqueue_many

+ +View source + + + +Enqueues zero or more elements to this queue. + +This operation slices each component tensor along the 0th dimension to +make multiple queue elements. All of the tensors in `vals` must have the +same size in the 0th dimension. + +If the queue is full when this operation executes, it will block +until all of the elements have been enqueued. + +At runtime, this operation may raise an error if the queue is +`tf.QueueBase.close` before or during its execution. If the +queue is closed before this operation runs, +tf.errors.CancelledError will be raised. If this operation is +blocked, and either (i) the queue is closed by a close operation +with `cancel_pending_enqueues=True`, or (ii) the session is +`tf.Session.close`, +tf.errors.CancelledError will be raised. + + + + + + + + + + + + + +
Args
+`vals` + +A tensor, a list or tuple of tensors, or a dictionary +from which the queue elements are taken. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+The operation that enqueues a batch of tuples of tensors to the queue. +
+ + + +

from_list

+ +View source + + + +Create a queue using the queue reference from `queues[index]`. + + + + + + + + + + + + + + +
Args
+`index` + +An integer scalar tensor that determines the input that gets +selected. +
+`queues` + +A list of `QueueBase` objects. +
+ + + + + + + + + + + +
Returns
+A `QueueBase` object. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +When `queues` is not a list of `QueueBase` objects, +or when the data types of `queues` are not all the same. +
+ + + +

is_closed

View source

Returns true if the queue is closed.

This operation returns true if the queue is closed and false if the queue
is open.
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+True if the queue is closed and false if the queue is open. +
+ + + +

size

+ +View source + + + +Compute the number of elements in this queue. + + + + + + + + + + + +
Args
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A scalar tensor containing the number of elements in this queue. +
+ + + + + diff --git a/site/en/api_docs/python/tf/ragged.md b/site/en/api_docs/python/tf/ragged.md new file mode 100644 index 00000000000..3882f910276 --- /dev/null +++ b/site/en/api_docs/python/tf/ragged.md @@ -0,0 +1,178 @@ +description: Ragged Tensors. + +
+ + +
+ +# Module: tf.ragged + + + + + + + + + +Ragged Tensors. + + +This package defines ops for manipulating ragged tensors (tf.RaggedTensor), +which are tensors with non-uniform shapes. In particular, each `RaggedTensor` +has one or more *ragged dimensions*, which are dimensions whose slices may have +different lengths. For example, the inner (column) dimension of +`rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is ragged, since the column slices +(`rt[0, :]`, ..., `rt[4, :]`) have different lengths. For a more detailed +description of ragged tensors, see the tf.RaggedTensor class documentation +and the [Ragged Tensor Guide](/guide/ragged_tensors). + + +### Additional ops that support `RaggedTensor` + +Arguments that accept `RaggedTensor`s are marked in **bold**. + +* `tf.batch_gather`(**params**, **indices**, name=`None`) +* tf.bitwise.bitwise_and(**x**, **y**, name=`None`) +* tf.bitwise.bitwise_or(**x**, **y**, name=`None`) +* tf.bitwise.bitwise_xor(**x**, **y**, name=`None`) +* tf.bitwise.invert(**x**, name=`None`) +* tf.bitwise.left_shift(**x**, **y**, name=`None`) +* tf.bitwise.right_shift(**x**, **y**, name=`None`) +* tf.cast(**x**, dtype, name=`None`) +* tf.clip_by_value(**t**, clip_value_min, clip_value_max, name=`None`) +* tf.concat(**values**, axis, name=`'concat'`) +* tf.debugging.check_numerics(**tensor**, message, name=`None`) +* tf.dtypes.complex(**real**, **imag**, name=`None`) +* tf.dtypes.saturate_cast(**value**, dtype, name=`None`) +* tf.dynamic_partition(**data**, **partitions**, num_partitions, name=`None`) +* tf.expand_dims(**input**, axis=`None`, name=`None`, dim=`None`) +* tf.gather_nd(**params**, **indices**, name=`None`, batch_dims=`0`) +* tf.gather(**params**, **indices**, validate_indices=`None`, name=`None`, axis=`None`, batch_dims=`0`) +* tf.identity(**input**, name=`None`) +* tf.io.decode_base64(**input**, name=`None`) +* tf.io.decode_compressed(**bytes**, compression_type=`''`, name=`None`) +* tf.io.encode_base64(**input**, pad=`False`, name=`None`) +* tf.math.abs(**x**, name=`None`) +* tf.math.acos(**x**, name=`None`) +* tf.math.acosh(**x**, name=`None`) +* tf.math.add_n(**inputs**, name=`None`) +* tf.math.add(**x**, **y**, name=`None`) +* tf.math.angle(**input**, name=`None`) +* tf.math.asin(**x**, name=`None`) +* tf.math.asinh(**x**, name=`None`) +* tf.math.atan2(**y**, **x**, name=`None`) +* tf.math.atan(**x**, name=`None`) +* tf.math.atanh(**x**, name=`None`) +* tf.math.ceil(**x**, name=`None`) +* tf.math.conj(**x**, name=`None`) +* tf.math.cos(**x**, name=`None`) +* tf.math.cosh(**x**, name=`None`) +* tf.math.digamma(**x**, name=`None`) +* tf.math.divide_no_nan(**x**, **y**, name=`None`) +* tf.math.divide(**x**, **y**, name=`None`) +* tf.math.equal(**x**, **y**, name=`None`) +* tf.math.erf(**x**, name=`None`) +* tf.math.erfc(**x**, name=`None`) +* tf.math.erfinv(**x**, name=`None`) +* tf.math.exp(**x**, name=`None`) +* tf.math.expm1(**x**, name=`None`) +* tf.math.floor(**x**, name=`None`) +* tf.math.floordiv(**x**, **y**, name=`None`) +* tf.math.floormod(**x**, **y**, name=`None`) +* tf.math.greater_equal(**x**, **y**, name=`None`) +* tf.math.greater(**x**, **y**, name=`None`) +* tf.math.imag(**input**, name=`None`) +* tf.math.is_finite(**x**, name=`None`) +* tf.math.is_inf(**x**, name=`None`) +* tf.math.is_nan(**x**, name=`None`) +* tf.math.less_equal(**x**, **y**, name=`None`) +* tf.math.less(**x**, **y**, name=`None`) +* tf.math.lgamma(**x**, name=`None`) +* tf.math.log1p(**x**, name=`None`) +* tf.math.log_sigmoid(**x**, name=`None`) +* tf.math.log(**x**, 
name=`None`) +* tf.math.logical_and(**x**, **y**, name=`None`) +* tf.math.logical_not(**x**, name=`None`) +* tf.math.logical_or(**x**, **y**, name=`None`) +* tf.math.logical_xor(**x**, **y**, name=`'LogicalXor'`) +* tf.math.maximum(**x**, **y**, name=`None`) +* tf.math.minimum(**x**, **y**, name=`None`) +* tf.math.multiply(**x**, **y**, name=`None`) +* tf.math.ndtri(**x**, name=`None`) +* tf.math.negative(**x**, name=`None`) +* tf.math.not_equal(**x**, **y**, name=`None`) +* tf.math.pow(**x**, **y**, name=`None`) +* tf.math.real(**input**, name=`None`) +* tf.math.reciprocal(**x**, name=`None`) +* tf.math.reduce_any(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.reduce_max(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.reduce_mean(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.reduce_min(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.reduce_prod(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.reduce_sum(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.math.rint(**x**, name=`None`) +* tf.math.round(**x**, name=`None`) +* tf.math.rsqrt(**x**, name=`None`) +* tf.math.sign(**x**, name=`None`) +* tf.math.sin(**x**, name=`None`) +* tf.math.sinh(**x**, name=`None`) +* tf.math.sqrt(**x**, name=`None`) +* tf.math.square(**x**, name=`None`) +* tf.math.squared_difference(**x**, **y**, name=`None`) +* tf.math.subtract(**x**, **y**, name=`None`) +* tf.math.tan(**x**, name=`None`) +* tf.math.truediv(**x**, **y**, name=`None`) +* tf.math.unsorted_segment_max(**data**, **segment_ids**, num_segments, name=`None`) +* tf.math.unsorted_segment_mean(**data**, **segment_ids**, num_segments, name=`None`) +* tf.math.unsorted_segment_min(**data**, **segment_ids**, num_segments, name=`None`) +* tf.math.unsorted_segment_prod(**data**, **segment_ids**, num_segments, name=`None`) +* tf.math.unsorted_segment_sqrt_n(**data**, **segment_ids**, num_segments, name=`None`) +* tf.math.unsorted_segment_sum(**data**, **segment_ids**, num_segments, name=`None`) +* tf.one_hot(**indices**, depth, on_value=`None`, off_value=`None`, axis=`None`, dtype=`None`, name=`None`) +* tf.ones_like(**tensor**, dtype=`None`, name=`None`, optimize=`True`) +* tf.rank(**input**, name=`None`) +* tf.realdiv(**x**, **y**, name=`None`) +* tf.reduce_all(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`) +* tf.reverse(**tensor**, axis, name=`None`) +* tf.size(**input**, name=`None`, out_type=tf.int32) +* tf.squeeze(**input**, axis=`None`, name=`None`, squeeze_dims=`None`) +* tf.stack(**values**, axis=`0`, name=`'stack'`) +* tf.strings.as_string(**input**, precision=`-1`, scientific=`False`, shortest=`False`, width=`-1`, fill=`''`, name=`None`) +* tf.strings.join(**inputs**, separator=`''`, name=`None`) +* tf.strings.length(**input**, name=`None`, unit=`'BYTE'`) +* tf.strings.reduce_join(**inputs**, axis=`None`, keepdims=`False`, separator=`''`, name=`None`) +* tf.strings.regex_full_match(**input**, pattern, name=`None`) +* tf.strings.regex_replace(**input**, pattern, rewrite, replace_global=`True`, name=`None`) +* tf.strings.strip(**input**, name=`None`) +* tf.strings.substr(**input**, pos, len, name=`None`, unit=`'BYTE'`) +* tf.strings.to_hash_bucket_fast(**input**, num_buckets, name=`None`) +* tf.strings.to_hash_bucket_strong(**input**, num_buckets, key, name=`None`) +* tf.strings.to_hash_bucket(**input**, num_buckets, name=`None`) +* tf.strings.to_hash_bucket(**input**, num_buckets, 
name=`None`) +* tf.strings.to_number(**input**, out_type=tf.float32, name=`None`) +* tf.strings.unicode_script(**input**, name=`None`) +* tf.tile(**input**, multiples, name=`None`) +* tf.truncatediv(**x**, **y**, name=`None`) +* tf.truncatemod(**x**, **y**, name=`None`) +* tf.where(**condition**, **x**=`None`, **y**=`None`, name=`None`) +* tf.zeros_like(**tensor**, dtype=`None`, name=`None`, optimize=`True`)n + +## Functions + +[`boolean_mask(...)`](../tf/ragged/boolean_mask.md): Applies a boolean mask to `data` without flattening the mask dimensions. + +[`constant(...)`](../tf/ragged/constant.md): Constructs a constant RaggedTensor from a nested Python list. + +[`map_flat_values(...)`](../tf/ragged/map_flat_values.md): Applies `op` to the values of one or more RaggedTensors. + +[`range(...)`](../tf/ragged/range.md): Returns a `RaggedTensor` containing the specified sequences of numbers. + +[`row_splits_to_segment_ids(...)`](../tf/ragged/row_splits_to_segment_ids.md): Generates the segmentation corresponding to a RaggedTensor `row_splits`. + +[`segment_ids_to_row_splits(...)`](../tf/ragged/segment_ids_to_row_splits.md): Generates the RaggedTensor `row_splits` corresponding to a segmentation. + +[`stack(...)`](../tf/ragged/stack.md): Stacks a list of rank-`R` tensors into one rank-`(R+1)` `RaggedTensor`. + +[`stack_dynamic_partitions(...)`](../tf/ragged/stack_dynamic_partitions.md): Stacks dynamic partitions of a Tensor or RaggedTensor. + diff --git a/site/en/api_docs/python/tf/ragged/boolean_mask.md b/site/en/api_docs/python/tf/ragged/boolean_mask.md new file mode 100644 index 00000000000..87a748eb26d --- /dev/null +++ b/site/en/api_docs/python/tf/ragged/boolean_mask.md @@ -0,0 +1,149 @@ +description: Applies a boolean mask to data without flattening the mask dimensions. + +
+ + +
+ +# tf.ragged.boolean_mask + + + + + + + + + +Applies a boolean mask to `data` without flattening the mask dimensions. + + + + + + + + + +Returns a potentially ragged tensor that is formed by retaining the elements +in `data` where the corresponding value in `mask` is `True`. + +* `output[a1...aA, i, b1...bB] = data[a1...aA, j, b1...bB]` + + Where `j` is the `i`th `True` entry of `mask[a1...aA]`. + +Note that `output` preserves the mask dimensions `a1...aA`; this differs +from tf.boolean_mask, which flattens those dimensions. + + + + + + + + + + + + + + + + +
+`data` + +A potentially ragged tensor. +
+`mask` + +A potentially ragged boolean tensor. `mask`'s shape must be a prefix +of `data`'s shape. `rank(mask)` must be known statically. +
+`name` + +A name prefix for the returned tensor (optional). +
+ + + + + + + + + + + +
+A potentially ragged tensor that is formed by retaining the elements in +`data` where the corresponding value in `mask` is `True`. + +* `rank(output) = rank(data)`. +* `output.ragged_rank = max(data.ragged_rank, rank(mask) - 1)`. +
+ + + + + + + + + + + + +
+`ValueError` + +if `rank(mask)` is not known statically; or if `mask.shape` is +not a prefix of `data.shape`. +
+ + +#### Examples: + +``` +>>> # Aliases for True & False so data and mask line up. +>>> T, F = (True, False) +``` + +``` +>>> tf.ragged.boolean_mask( # Mask a 2D Tensor. +... data=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], +... mask=[[T, F, T], [F, F, F], [T, F, F]]).to_list() +[[1, 3], [], [7]] +``` + +``` +>>> tf.ragged.boolean_mask( # Mask a 2D RaggedTensor. +... tf.ragged.constant([[1, 2, 3], [4], [5, 6]]), +... tf.ragged.constant([[F, F, T], [F], [T, T]])).to_list() +[[3], [], [5, 6]] +``` + +``` +>>> tf.ragged.boolean_mask( # Mask rows of a 2D RaggedTensor. +... tf.ragged.constant([[1, 2, 3], [4], [5, 6]]), +... tf.ragged.constant([True, False, True])).to_list() +[[1, 2, 3], [5, 6]] +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/ragged/constant.md b/site/en/api_docs/python/tf/ragged/constant.md new file mode 100644 index 00000000000..195b46f1da6 --- /dev/null +++ b/site/en/api_docs/python/tf/ragged/constant.md @@ -0,0 +1,155 @@ +description: Constructs a constant RaggedTensor from a nested Python list. + +
+ + +
+ +# tf.ragged.constant + + + + + + + + + +Constructs a constant RaggedTensor from a nested Python list. + + + + + + + + + + +#### Example: + + + +``` +>>> tf.ragged.constant([[1, 2], [3], [4, 5, 6]]) + +``` + +All scalar values in `pylist` must have the same nesting depth `K`, and the +returned `RaggedTensor` will have rank `K`. If `pylist` contains no scalar +values, then `K` is one greater than the maximum depth of empty lists in +`pylist`. All scalar values in `pylist` must be compatible with `dtype`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`pylist` + +A nested `list`, `tuple` or `np.ndarray`. Any nested element that +is not a `list`, `tuple` or `np.ndarray` must be a scalar value +compatible with `dtype`. +
+`dtype` + +The type of elements for the returned `RaggedTensor`. If not +specified, then a default is chosen based on the scalar values in +`pylist`. +
+`ragged_rank` + +An integer specifying the ragged rank of the returned +`RaggedTensor`. Must be nonnegative and less than `K`. Defaults to +`max(0, K - 1)` if `inner_shape` is not specified. Defaults to `max(0, K +- 1 - len(inner_shape))` if `inner_shape` is specified. +
+`inner_shape` + +A tuple of integers specifying the shape for individual inner +values in the returned `RaggedTensor`. Defaults to `()` if `ragged_rank` +is not specified. If `ragged_rank` is specified, then a default is chosen +based on the contents of `pylist`. +
+`name` + +A name prefix for the returned tensor (optional). +
+`row_splits_dtype` + +data type for the constructed `RaggedTensor`'s row_splits. +One of tf.int32 or tf.int64. +
+ + + + + + + + + + + +
+A potentially ragged tensor with rank `K` and the specified `ragged_rank`, +containing the values from `pylist`. +
+ + + + + + + + + + + + +
+`ValueError` + +If the scalar values in `pylist` have inconsistent nesting +depth; or if ragged_rank or inner_shape are incompatible with `pylist`. +
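A hedged sketch of how `ragged_rank` and `inner_shape` affect the result; the particular lists are arbitrary:

```python
import tensorflow as tf

# Default: every dimension below the outermost is ragged.
rt = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
print(rt.shape)   # (3, None)

# With ragged_rank=1 and inner_shape=(2,), the innermost dimension stays
# uniform: each inner value is a dense vector of length 2.
rt2 = tf.ragged.constant([[[1, 2], [3, 4]], [[5, 6]]],
                         ragged_rank=1, inner_shape=(2,))
print(rt2.shape)  # (2, None, 2)
```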
+ diff --git a/site/en/api_docs/python/tf/ragged/map_flat_values.md b/site/en/api_docs/python/tf/ragged/map_flat_values.md new file mode 100644 index 00000000000..49addc39355 --- /dev/null +++ b/site/en/api_docs/python/tf/ragged/map_flat_values.md @@ -0,0 +1,133 @@ +description: Applies op to the values of one or more RaggedTensors. + +
+ + +
+ +# tf.ragged.map_flat_values + + + + + + + + + +Applies `op` to the values of one or more RaggedTensors. + + + + + + + + + +Replaces any `RaggedTensor` in `args` or `kwargs` with its `flat_values` +tensor, and then calls `op`. Returns a `RaggedTensor` that is constructed +from the input `RaggedTensor`s' `nested_row_splits` and the value returned by +the `op`. + +If the input arguments contain multiple `RaggedTensor`s, then they must have +identical `nested_row_splits`. + +#### Examples: + + + +``` +>>> rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]]) +>>> map_flat_values(tf.ones_like, rt).to_list() +[[1, 1, 1], [], [1, 1], [1]] +>>> map_flat_values(tf.multiply, rt, rt).to_list() +[[1, 4, 9], [], [16, 25], [36]] +>>> map_flat_values(tf.add, rt, 5).to_list() +[[6, 7, 8], [], [9, 10], [11]] +``` + + + + + + + + + + + + + + + + +
+`op` + +The operation that should be applied to the RaggedTensor `flat_values`. +`op` is typically an element-wise operation (such as math_ops.add), but +any operation that preserves the size of the outermost dimension can be +used. I.e., `shape[0]` of the value returned by `op` must match +`shape[0]` of the `RaggedTensor`s' `flat_values` tensors. +
+`*args` + +Arguments for `op`. +
+`**kwargs` + +Keyword arguments for `op`. +
+ + + + + + + + + + + +
+A `RaggedTensor` whose `ragged_rank` matches the `ragged_rank` of all +input `RaggedTensor`s. +
+ + + + + + + + + + + + +
+`ValueError` + +If args contains no `RaggedTensors`, or if the `nested_splits` +of the input `RaggedTensor`s are not identical. +
+ diff --git a/site/en/api_docs/python/tf/ragged/range.md b/site/en/api_docs/python/tf/ragged/range.md new file mode 100644 index 00000000000..b584b23d73d --- /dev/null +++ b/site/en/api_docs/python/tf/ragged/range.md @@ -0,0 +1,146 @@ +description: Returns a RaggedTensor containing the specified sequences of numbers. + +
+ + +
# tf.ragged.range

Returns a `RaggedTensor` containing the specified sequences of numbers.

Each row of the returned `RaggedTensor` contains a single sequence:

```python
ragged.range(starts, limits, deltas)[i] ==
    tf.range(starts[i], limits[i], deltas[i])
```

If `starts[i] >= limits[i]` and `deltas[i] > 0`, then `output[i]` will be an
empty list. Similarly, if `starts[i] <= limits[i]` and `deltas[i] < 0`, then
`output[i]` will be an empty list. This behavior is consistent with the
Python `range` function.

#### Examples:

```
>>> tf.ragged.range([3, 5, 2]).to_list()
[[0, 1, 2], [0, 1, 2, 3, 4], [0, 1]]
>>> tf.ragged.range([0, 5, 8], [3, 3, 12]).to_list()
[[0, 1, 2], [], [8, 9, 10, 11]]
>>> tf.ragged.range([0, 5, 8], [3, 3, 12], 2).to_list()
[[0, 2], [], [8, 10]]
```

The input tensors `starts`, `limits`, and `deltas` may be scalars or vectors.
The vector inputs must all have the same size. Scalar inputs are broadcast
to match the size of the vector inputs.
+`starts` + +Vector or scalar `Tensor`. Specifies the first entry for each range +if `limits` is not `None`; otherwise, specifies the range limits, and the +first entries default to `0`. +
+`limits` + +Vector or scalar `Tensor`. Specifies the exclusive upper limits for +each range. +
+`deltas` + +Vector or scalar `Tensor`. Specifies the increment for each range. +Defaults to `1`. +
+`dtype` + +The type of the elements of the resulting tensor. If not specified, +then a value is chosen based on the other args. +
+`name` + +A name for the operation. +
+`row_splits_dtype` + +`dtype` for the returned `RaggedTensor`'s `row_splits` +tensor. One of tf.int32 or tf.int64. +
+ + + + + + + + + + + +
+A `RaggedTensor` of type `dtype` with `ragged_rank=1`. +
+ diff --git a/site/en/api_docs/python/tf/ragged/row_splits_to_segment_ids.md b/site/en/api_docs/python/tf/ragged/row_splits_to_segment_ids.md new file mode 100644 index 00000000000..af1c43acca9 --- /dev/null +++ b/site/en/api_docs/python/tf/ragged/row_splits_to_segment_ids.md @@ -0,0 +1,114 @@ +description: Generates the segmentation corresponding to a RaggedTensor row_splits. + +
+ + +
+ +# tf.ragged.row_splits_to_segment_ids + + + + + + + + + +Generates the segmentation corresponding to a RaggedTensor `row_splits`. + + + + + + + + + +Returns an integer vector `segment_ids`, where `segment_ids[i] == j` if +`splits[j] <= i < splits[j+1]`. Example: + +``` +>>> print(tf.ragged.row_splits_to_segment_ids([0, 3, 3, 5, 6, 9])) + tf.Tensor([0 0 0 2 2 3 4 4 4], shape=(9,), dtype=int64) +``` + + + + + + + + + + + + + + + + +
+`splits` + +A sorted 1-D integer Tensor. `splits[0]` must be zero. +
+`name` + +A name prefix for the returned tensor (optional). +
+`out_type` + +The dtype for the return value. Defaults to `splits.dtype`, +or tf.int64 if `splits` does not have a dtype. +
+ + + + + + + + + + + +
+A sorted 1-D integer Tensor, with `shape=[splits[-1]]` +
+ + + + + + + + + + + + +
+`ValueError` + +If `splits` is invalid. +
+ diff --git a/site/en/api_docs/python/tf/ragged/segment_ids_to_row_splits.md b/site/en/api_docs/python/tf/ragged/segment_ids_to_row_splits.md new file mode 100644 index 00000000000..34b8cf812ad --- /dev/null +++ b/site/en/api_docs/python/tf/ragged/segment_ids_to_row_splits.md @@ -0,0 +1,105 @@ +description: Generates the RaggedTensor row_splits corresponding to a segmentation. + +
+ + +
+ +# tf.ragged.segment_ids_to_row_splits + + + + + + + + + +Generates the RaggedTensor `row_splits` corresponding to a segmentation. + + + + + + + + + +Returns an integer vector `splits`, where `splits[0] = 0` and +`splits[i] = splits[i-1] + count(segment_ids==i)`. Example: + +``` +>>> print(tf.ragged.segment_ids_to_row_splits([0, 0, 0, 2, 2, 3, 4, 4, 4])) +tf.Tensor([0 3 3 5 6 9], shape=(6,), dtype=int64) +``` + + + + + + + + + + + + + + + + + + + +
+`segment_ids` + +A 1-D integer Tensor. +
+`num_segments` + +A scalar integer indicating the number of segments. Defaults +to `max(segment_ids) + 1` (or zero if `segment_ids` is empty). +
+`out_type` + +The dtype for the return value. Defaults to `segment_ids.dtype`, +or tf.int64 if `segment_ids` does not have a dtype. +
+`name` + +A name prefix for the returned tensor (optional). +
+ + + + + + + + + + + +
+A sorted 1-D integer Tensor, with `shape=[num_segments + 1]`. +
+ diff --git a/site/en/api_docs/python/tf/ragged/stack.md b/site/en/api_docs/python/tf/ragged/stack.md new file mode 100644 index 00000000000..88709473acf --- /dev/null +++ b/site/en/api_docs/python/tf/ragged/stack.md @@ -0,0 +1,136 @@ +description: Stacks a list of rank-R tensors into one rank-(R+1) RaggedTensor. + +
+ + +
+ +# tf.ragged.stack + + + + + + + + + +Stacks a list of rank-`R` tensors into one rank-`(R+1)` `RaggedTensor`. + + + + + + + + + +Given a list of tensors or ragged tensors with the same rank `R` +(`R >= axis`), returns a rank-`R+1` `RaggedTensor` `result` such that +`result[i0...iaxis]` is `[value[i0...iaxis] for value in values]`. + +#### Examples: + +``` +>>> # Stacking two ragged tensors. +>>> t1 = tf.ragged.constant([[1, 2], [3, 4, 5]]) +>>> t2 = tf.ragged.constant([[6], [7, 8, 9]]) +>>> tf.ragged.stack([t1, t2], axis=0) + +>>> tf.ragged.stack([t1, t2], axis=1) + +``` + +``` +>>> # Stacking two dense tensors with different sizes. +>>> t3 = tf.constant([[1, 2, 3], [4, 5, 6]]) +>>> t4 = tf.constant([[5], [6], [7]]) +>>> tf.ragged.stack([t3, t4], axis=0) + +``` + + + + + + + + + + + + + + + + +
+`values` + +A list of tf.Tensor or tf.RaggedTensor. May not be empty. All +`values` must have the same rank and the same dtype; but unlike +tf.stack, they can have arbitrary dimension sizes. +
+`axis` + +A python integer, indicating the dimension along which to stack. +(Note: Unlike tf.stack, the `axis` parameter must be statically known.) +Negative values are supported only if the rank of at least one +`values` value is statically known. +
+`name` + +A name prefix for the returned tensor (optional). +
+ + + + + + + + + + + +
A `RaggedTensor` with rank `R+1`.
`result.ragged_rank=1+max(axis, max(rt.ragged_rank for rt in values))`.
+ + + + + + + + + + + + +
+`ValueError` + +If `values` is empty, if `axis` is out of bounds or if +the input tensors have different ranks. +
+ diff --git a/site/en/api_docs/python/tf/ragged/stack_dynamic_partitions.md b/site/en/api_docs/python/tf/ragged/stack_dynamic_partitions.md new file mode 100644 index 00000000000..443c4a6de45 --- /dev/null +++ b/site/en/api_docs/python/tf/ragged/stack_dynamic_partitions.md @@ -0,0 +1,122 @@ +description: Stacks dynamic partitions of a Tensor or RaggedTensor. + +
+ + +
+ +# tf.ragged.stack_dynamic_partitions + + + + + + + + + +Stacks dynamic partitions of a Tensor or RaggedTensor. + + + + + + + + + +Returns a RaggedTensor `output` with `num_partitions` rows, where the row +`output[i]` is formed by stacking all slices `data[j1...jN]` such that +`partitions[j1...jN] = i`. Slices of `data` are stacked in row-major +order. + +If `num_partitions` is an `int` (not a `Tensor`), then this is equivalent to +`tf.ragged.stack(tf.dynamic_partition(data, partitions, num_partitions))`. + +#### Example: + +``` +>>> data = ['a', 'b', 'c', 'd', 'e'] +>>> partitions = [ 3, 0, 2, 2, 3] +>>> num_partitions = 5 +>>> tf.ragged.stack_dynamic_partitions(data, partitions, num_partitions) + +``` + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor` or `RaggedTensor` containing the values to stack. +
+`partitions` + +An `int32` or `int64` `Tensor` or `RaggedTensor` specifying the +partition that each slice of `data` should be added to. +`partitions.shape` must be a prefix of `data.shape`. Values must be +greater than or equal to zero, and less than `num_partitions`. +`partitions` is not required to be sorted. +
+`num_partitions` + +An `int32` or `int64` scalar specifying the number of +partitions to output. This determines the number of rows in `output`. +
+`name` + +A name prefix for the returned tensor (optional). +
+ + + + + + + + + + + +
+A `RaggedTensor` containing the stacked partitions. The returned tensor +has the same dtype as `data`, and its shape is +`[num_partitions, (D)] + data.shape[partitions.rank:]`, where `(D)` is a +ragged dimension whose length is the number of data slices stacked for +each `partition`. +
+ diff --git a/site/en/api_docs/python/tf/random.md b/site/en/api_docs/python/tf/random.md new file mode 100644 index 00000000000..956e1c93f3d --- /dev/null +++ b/site/en/api_docs/python/tf/random.md @@ -0,0 +1,79 @@ +description: Public API for tf.random namespace. + +
+ + +
+ +# Module: tf.random + + + + + + + + + +Public API for tf.random namespace. + + + +## Modules + +[`experimental`](../tf/random/experimental.md) module: Public API for tf.random.experimental namespace. + +## Classes + +[`class Algorithm`](../tf/random/Algorithm.md): An enumeration. + +[`class Generator`](../tf/random/Generator.md): Random-number generator. + +## Functions + +[`all_candidate_sampler(...)`](../tf/random/all_candidate_sampler.md): Generate the set of all classes. + +[`categorical(...)`](../tf/random/categorical.md): Draws samples from a categorical distribution. + +[`create_rng_state(...)`](../tf/random/create_rng_state.md): Creates a RNG state from an integer or a vector. + +[`fixed_unigram_candidate_sampler(...)`](../tf/random/fixed_unigram_candidate_sampler.md): Samples a set of classes using the provided (fixed) base distribution. + +[`gamma(...)`](../tf/random/gamma.md): Draws `shape` samples from each of the given Gamma distribution(s). + +[`get_global_generator(...)`](../tf/random/get_global_generator.md): Retrieves the global generator. + +[`learned_unigram_candidate_sampler(...)`](../tf/random/learned_unigram_candidate_sampler.md): Samples a set of classes from a distribution learned during training. + +[`log_uniform_candidate_sampler(...)`](../tf/random/log_uniform_candidate_sampler.md): Samples a set of classes using a log-uniform (Zipfian) base distribution. + +[`normal(...)`](../tf/random/normal.md): Outputs random values from a normal distribution. + +[`poisson(...)`](../tf/random/poisson.md): Draws `shape` samples from each of the given Poisson distribution(s). + +[`set_global_generator(...)`](../tf/random/set_global_generator.md): Replaces the global generator with another `Generator` object. + +[`set_seed(...)`](../tf/random/set_seed.md): Sets the global random seed. + +[`shuffle(...)`](../tf/random/shuffle.md): Randomly shuffles a tensor along its first dimension. + +[`stateless_binomial(...)`](../tf/random/stateless_binomial.md): Outputs deterministic pseudorandom values from a binomial distribution. + +[`stateless_categorical(...)`](../tf/random/stateless_categorical.md): Draws deterministic pseudorandom samples from a categorical distribution. + +[`stateless_gamma(...)`](../tf/random/stateless_gamma.md): Outputs deterministic pseudorandom values from a gamma distribution. + +[`stateless_normal(...)`](../tf/random/stateless_normal.md): Outputs deterministic pseudorandom values from a normal distribution. + +[`stateless_poisson(...)`](../tf/random/stateless_poisson.md): Outputs deterministic pseudorandom values from a Poisson distribution. + +[`stateless_truncated_normal(...)`](../tf/random/stateless_truncated_normal.md): Outputs deterministic pseudorandom values, truncated normally distributed. + +[`stateless_uniform(...)`](../tf/random/stateless_uniform.md): Outputs deterministic pseudorandom values from a uniform distribution. + +[`truncated_normal(...)`](../tf/random/truncated_normal.md): Outputs random values from a truncated normal distribution. + +[`uniform(...)`](../tf/random/uniform.md): Outputs random values from a uniform distribution. + +[`uniform_candidate_sampler(...)`](../tf/random/uniform_candidate_sampler.md): Samples a set of classes using a uniform base distribution. 
+ diff --git a/site/en/api_docs/python/tf/random/Algorithm.md b/site/en/api_docs/python/tf/random/Algorithm.md new file mode 100644 index 00000000000..07c50e34807 --- /dev/null +++ b/site/en/api_docs/python/tf/random/Algorithm.md @@ -0,0 +1,47 @@ +description: An enumeration. + +
+ + + + +
+ +# tf.random.Algorithm + + + + + + + + + +An enumeration. + + + + + + +## Class Variables + +* `PHILOX` +* `THREEFRY` diff --git a/site/en/api_docs/python/tf/random/Generator.md b/site/en/api_docs/python/tf/random/Generator.md new file mode 100644 index 00000000000..83cce72e3bd --- /dev/null +++ b/site/en/api_docs/python/tf/random/Generator.md @@ -0,0 +1,1171 @@ +description: Random-number generator. + +
+ + + + + + + + + + + + + + + + + + +
# tf.random.Generator

Random-number generator.

#### Example:

Creating a generator from a seed:

```
>>> g = tf.random.Generator.from_seed(1234)
>>> g.normal(shape=(2, 3))
```

Creating a generator from a non-deterministic state:

```
>>> g = tf.random.Generator.from_non_deterministic_state()
>>> g.normal(shape=(2, 3))
```

All the constructors allow explicitly choosing a Random-Number-Generation
(RNG) algorithm. Supported algorithms are `"philox"` and `"threefry"`. For
example:

```
>>> g = tf.random.Generator.from_seed(123, alg="philox")
>>> g.normal(shape=(2, 3))
```

CPU, GPU and TPU with the same algorithm and seed will generate the same
integer random numbers. Floating-point results (such as the output of
`normal`) may have small numerical discrepancies between different devices.

This class uses a tf.Variable to manage its internal state. Every time
random numbers are generated, the state of the generator will change. For
example:

```
>>> g = tf.random.Generator.from_seed(1234)
>>> g.state
>>> g.normal(shape=(2, 3))
<...>
>>> g.state
```

The shape of the state is algorithm-specific.

There is also a global generator:

```
>>> g = tf.random.get_global_generator()
>>> g.normal(shape=(2, 3))
```
+`copy_from` + +a generator to be copied from. +
+`state` + +a vector of dtype STATE_TYPE representing the initial state of the +RNG, whose length and semantics are algorithm-specific. If it's a +variable, the generator will reuse it instead of creating a new +variable. +
+`alg` + +the RNG algorithm. Possible values are +tf.random.Algorithm.PHILOX for the Philox algorithm and +tf.random.Algorithm.THREEFRY for the ThreeFry algorithm +(see paper 'Parallel Random Numbers: As Easy as 1, 2, 3' +[https://www.thesalmons.org/john/random123/papers/random123sc11.pdf]). +The string names `"philox"` and `"threefry"` can also be used. +Note `PHILOX` guarantees the same numbers are produced (given +the same random state) across all architectures (CPU, GPU, XLA etc). +
+ + + + + + + + + + + + + + + + + + + + +
+`algorithm` + +The RNG algorithm id (a Python integer or scalar integer Tensor). +
`key`

The 'key' part of the state of a counter-based RNG.

For a counter-based RNG algorithm such as Philox and ThreeFry (as
described in paper 'Parallel Random Numbers: As Easy as 1, 2, 3'
[https://www.thesalmons.org/john/random123/papers/random123sc11.pdf]),
the RNG state consists of two parts: counter and key. The output is
generated via the formula: output=hash(key, counter), i.e. a hashing of
the counter parametrized by the key. Two RNGs with two different keys can
be thought of as generating two independent random-number streams (a stream
is formed by increasing the counter).
+`state` + +The internal state of the RNG. +
+ + + +## Methods + +

binomial

+ +View source + + + +Outputs random values from a binomial distribution. + +The generated values follow a binomial distribution with specified count and +probability of success parameters. + +#### Example: + + + +```python +counts = [10., 20.] +# Probability of success. +probs = [0.8] + +rng = tf.random.Generator.from_seed(seed=234) +binomial_samples = rng.binomial(shape=[2], counts=counts, probs=probs) + + +counts = ... # Shape [3, 1, 2] +probs = ... # Shape [1, 4, 2] +shape = [3, 4, 3, 4, 2] +rng = tf.random.Generator.from_seed(seed=1717) +# Sample shape will be [3, 4, 3, 4, 2] +binomial_samples = rng.binomial(shape=shape, counts=counts, probs=probs) +``` + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output +tensor. +
+`counts` + +Tensor. The counts of the binomial distribution. Must be +broadcastable with `probs`, and broadcastable with the rightmost +dimensions of `shape`. +
+`probs` + +Tensor. The probability of success for the +binomial distribution. Must be broadcastable with `counts` and +broadcastable with the rightmost dimensions of `shape`. +
+`dtype` + +The type of the output. Default: tf.int32 +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
Returns
+`samples` + +A Tensor of the specified shape filled with random binomial +values. For each i, each samples[i, ...] is an independent draw from +the binomial distribution on counts[i] trials with probability of +success probs[i]. +
+ + + +

from_key_counter

+ +View source + + + +Creates a generator from a key and a counter. + +This constructor only applies if the algorithm is a counter-based algorithm. +See method `key` for the meaning of "key" and "counter". + + + + + + + + + + + + + + + + +
Args
+`key` + +the key for the RNG, a scalar of type STATE_TYPE. +
`counter`

a vector of dtype STATE_TYPE representing the initial counter for
the RNG, whose length is algorithm-specific.
+`alg` + +the RNG algorithm. If None, it will be auto-selected. See +`__init__` for its possible values. +
+ + + + + + + + + + + +
Returns
+The new generator. +
#### Throws:

* `ValueError`: if the generator is created inside a synchronous
  tf.distribute strategy such as `MirroredStrategy` or `TPUStrategy`,
  because there is ambiguity on how to replicate a generator (e.g. should
  it be copied so that each replica will get the same random numbers, or
  should it be "split" into different generators that generate
  different random numbers).
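A minimal sketch, assuming the Philox algorithm (whose counter is a length-2 vector of the state dtype); the key and counter values here are arbitrary:

```python
import tensorflow as tf

g = tf.random.Generator.from_key_counter(
    key=123,
    counter=tf.constant([0, 0], dtype=tf.int64),  # Philox uses a length-2 counter
    alg="philox")
print(g.normal(shape=(2,)))
```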

from_non_deterministic_state

+ +View source + + + +Creates a generator by non-deterministically initializing its state. + +The source of the non-determinism will be platform- and time-dependent. + + + + + + + + + + +
Args
+`alg` + +(optional) the RNG algorithm. If None, it will be auto-selected. See +`__init__` for its possible values. +
+ + + + + + + + + + + +
Returns
+The new generator. +
#### Throws:

* `ValueError`: if the generator is created inside a synchronous
  tf.distribute strategy such as `MirroredStrategy` or `TPUStrategy`,
  because there is ambiguity on how to replicate a generator (e.g. should
  it be copied so that each replica will get the same random numbers, or
  should it be "split" into different generators that generate
  different random numbers).

from_seed

+ +View source + + + +Creates a generator from a seed. + +A seed is a 1024-bit unsigned integer represented either as a Python +integer or a vector of integers. Seeds shorter than 1024-bit will be +padded. The padding, the internal structure of a seed and the way a seed +is converted to a state are all opaque (unspecified). The only semantics +specification of seeds is that two different seeds are likely to produce +two independent generators (but no guarantee). + + + + + + + + + + + + + +
Args
+`seed` + +the seed for the RNG. +
+`alg` + +(optional) the RNG algorithm. If None, it will be auto-selected. See +`__init__` for its possible values. +
+ + + + + + + + + + + +
Returns
+The new generator. +
#### Throws:

* `ValueError`: if the generator is created inside a synchronous
  tf.distribute strategy such as `MirroredStrategy` or `TPUStrategy`,
  because there is ambiguity on how to replicate a generator (e.g. should
  it be copied so that each replica will get the same random numbers, or
  should it be "split" into different generators that generate
  different random numbers).

from_state

+ +View source + + + +Creates a generator from a state. + +See `__init__` for description of `state` and `alg`. + + + + + + + + + + + + + +
Args
+`state` + +the new state. +
+`alg` + +the RNG algorithm. +
+ + + + + + + + + + + +
Returns
+The new generator. +
#### Throws:

* `ValueError`: if the generator is created inside a synchronous
  tf.distribute strategy such as `MirroredStrategy` or `TPUStrategy`,
  because there is ambiguity on how to replicate a generator (e.g. should
  it be copied so that each replica will get the same random numbers, or
  should it be "split" into different generators that generate
  different random numbers).

make_seeds

+ +View source + + + +Generates seeds for stateless random ops. + + +#### For example: + + + +```python +seeds = get_global_generator().make_seeds(count=10) +for i in range(10): + seed = seeds[:, i] + numbers = stateless_random_normal(shape=[2, 3], seed=seed) + ... +``` + + + + + + + + + + +
Args
+`count` + +the number of seed pairs (note that stateless random ops need a +pair of seeds to invoke). +
+ + + + + + + + + + + +
Returns
+A tensor of shape [2, count] and dtype int64. +
+ + + +

normal

+ +View source + + + +Outputs random values from a normal distribution. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output +tensor. +
+`mean` + +A 0-D Tensor or Python value of type `dtype`. The mean of the normal +distribution. +
+`stddev` + +A 0-D Tensor or Python value of type `dtype`. The standard +deviation of the normal distribution. +
+`dtype` + +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tensor of the specified shape filled with random normal values. +
+ + + +
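For instance (a quick sketch; the exact values depend on the seed):

```python
import tensorflow as tf

g = tf.random.Generator.from_seed(0)
samples = g.normal(shape=[10000], mean=3.0, stddev=0.5)
# The sample mean should be close to the requested mean of 3.0.
print(tf.reduce_mean(samples).numpy())
```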

reset

+ +View source + + + +Resets the generator by a new state. + +See `__init__` for the meaning of "state". + + + + + + + + + + +
Args
+`state` + +the new state. +
+ + + +

reset_from_key_counter

+ +View source + + + +Resets the generator by a new key-counter pair. + +See `from_key_counter` for the meaning of "key" and "counter". + + + + + + + + + + + + + +
Args
+`key` + +the new key. +
+`counter` + +the new counter. +
+ + + +

reset_from_seed

+ +View source + + + +Resets the generator by a new seed. + +See `from_seed` for the meaning of "seed". + + + + + + + + + + +
Args
+`seed` + +the new seed. +
+ + + +

skip

+ +View source + + + +Advance the counter of a counter-based RNG. + + + + + + + + + + + +
Args
+`delta` + +the amount of advancement. The state of the RNG after +`skip(n)` will be the same as that after `normal([n])` +(or any other distribution). The actual increment added to the +counter is an unspecified implementation detail. +
+ + + +
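A small sketch of the documented equivalence between `skip(n)` and drawing `n` values (the seed is chosen arbitrarily):

```python
import tensorflow as tf

g1 = tf.random.Generator.from_seed(1)
g2 = tf.random.Generator.from_seed(1)

_ = g1.normal([4])  # advances g1's counter by drawing 4 values
g2.skip(4)          # advances g2's counter without drawing anything

# Both generators are now in the same state, so they agree from here on.
print(tf.reduce_all(g1.normal([2]) == g2.normal([2])).numpy())  # => True
```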

split

View source

Returns a list of independent `Generator` objects.

Two generators are independent of each other in the sense that the
random-number streams they generate don't have statistically detectable
correlations. The new generators are also independent of the old one.
The old generator's state will be changed (like other random-number
generating methods), so two calls of `split` will return different
new generators.

#### For example:

```python
gens = get_global_generator().split(count=10)
for gen in gens:
  numbers = gen.normal(shape=[2, 3])
  # ...
gens2 = get_global_generator().split(count=10)
# gens2 will be different from gens
```

The new generators will be put on the current device (possibly different
from the old generator's), for example:

```python
with tf.device("/device:CPU:0"):
  gen = tf.random.Generator.from_seed(1234)  # gen is on CPU
with tf.device("/device:GPU:0"):
  gens = gen.split(count=10)  # gens are on GPU
```
Args
+`count` + +the number of generators to return. +
+ + + + + + + + + + + +
Returns
+A list (length `count`) of `Generator` objects independent of each other. +The new generators have the same RNG algorithm as the old one. +
+ + + +

truncated_normal

+ +View source + + + +Outputs random values from a truncated normal distribution. + +The generated values follow a normal distribution with specified mean and +standard deviation, except that values whose magnitude is more than +2 standard deviations from the mean are dropped and re-picked. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output +tensor. +
+`mean` + +A 0-D Tensor or Python value of type `dtype`. The mean of the +truncated normal distribution. +
+`stddev` + +A 0-D Tensor or Python value of type `dtype`. The standard +deviation of the normal distribution, before truncation. +
+`dtype` + +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tensor of the specified shape filled with random truncated normal +values. +
+ + + +

uniform

+ +View source + + + +Outputs random values from a uniform distribution. + +The generated values follow a uniform distribution in the range +`[minval, maxval)`. The lower bound `minval` is included in the range, while +the upper bound `maxval` is excluded. (For float numbers especially +low-precision types like bfloat16, because of +rounding, the result may sometimes include `maxval`.) + +For floats, the default range is `[0, 1)`. For ints, at least `maxval` must +be specified explicitly. + +In the integer case, the random integers are slightly biased unless +`maxval - minval` is an exact power of two. The bias is small for values of +`maxval - minval` significantly smaller than the range of the output (either +`2**32` or `2**64`). + + + + + + + + + + + + + + + + + + + + + + +
Args
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output +tensor. +
+`minval` + +A 0-D Tensor or Python value of type `dtype`. The lower bound on +the range of random values to generate. Defaults to 0. +
+`maxval` + +A 0-D Tensor or Python value of type `dtype`. The upper bound on +the range of random values to generate. Defaults to 1 if `dtype` is +floating point. +
+`dtype` + +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A tensor of the specified shape filled with random uniform values. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `dtype` is integral and `maxval` is not specified. +
+ + + +
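For example (a brief sketch; outputs vary with the seed):

```python
import tensorflow as tf

g = tf.random.Generator.from_seed(7)

# Floats default to the range [0, 1).
print(g.uniform(shape=[2, 2]))

# For integer dtypes, maxval must be given explicitly.
print(g.uniform(shape=[3], minval=0, maxval=10, dtype=tf.int32))
```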

uniform_full_int

+ +View source + + + +Uniform distribution on an integer type's entire range. + +The other method `uniform` only covers the range [minval, maxval), which +cannot be `dtype`'s full range because `maxval` is of type `dtype`. + + + + + + + + + + + + + + + + +
Args
+`shape` + +the shape of the output. +
+`dtype` + +(optional) the integer type, default to uint64. +
+`name` + +(optional) the name of the node. +
+ + + + + + + + + + + +
Returns
+A tensor of random numbers of the required shape. +
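A short sketch (the dtype is chosen only for illustration):

```python
import tensorflow as tf

g = tf.random.Generator.from_seed(42)
# Uniform over the entire uint64 range; there are no minval/maxval arguments.
x = g.uniform_full_int(shape=[3], dtype=tf.uint64)
print(x)
```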
+ + + + + diff --git a/site/en/api_docs/python/tf/random/all_candidate_sampler.md b/site/en/api_docs/python/tf/random/all_candidate_sampler.md new file mode 100644 index 00000000000..5ace36f22f3 --- /dev/null +++ b/site/en/api_docs/python/tf/random/all_candidate_sampler.md @@ -0,0 +1,141 @@ +description: Generate the set of all classes. + +
+ + +
+ +# tf.random.all_candidate_sampler + + + + + + + + + +Generate the set of all classes. + + + + + + + + + +Deterministically generates and returns the set of all possible classes. +For testing purposes. There is no need to use this, since you might as +well use full softmax or full logistic regression. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64` and shape `[batch_size, +num_true]`. The target classes. +
+`num_true` + +An `int`. The number of target classes per training example. +
+`num_sampled` + +An `int`. The number of possible classes. +
+`unique` + +A `bool`. Ignored. +unique. +
+`seed` + +An `int`. An operation-specific seed. Default is 0. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + +
+`sampled_candidates` + +A tensor of type `int64` and shape `[num_sampled]`. +This operation deterministically returns the entire range +`[0, num_sampled]`. +
+`true_expected_count` + +A tensor of type `float`. Same shape as +`true_classes`. The expected counts under the sampling distribution +of each of `true_classes`. All returned values are 1.0. +
+`sampled_expected_count` + +A tensor of type `float`. Same shape as +`sampled_candidates`. The expected counts under the sampling distribution +of each of `sampled_candidates`. All returned values are 1.0. +
+ diff --git a/site/en/api_docs/python/tf/random/categorical.md b/site/en/api_docs/python/tf/random/categorical.md new file mode 100644 index 00000000000..1e160778ba5 --- /dev/null +++ b/site/en/api_docs/python/tf/random/categorical.md @@ -0,0 +1,115 @@ +description: Draws samples from a categorical distribution. + +
+ + +
+ +# tf.random.categorical + + + + + + + + + +Draws samples from a categorical distribution. + + + + + + + + + + +#### Example: + + + +```python +# samples has shape [1, 5], where each value is either 0 or 1 with equal +# probability. +samples = tf.random.categorical(tf.math.log([[0.5, 0.5]]), 5) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`logits` + +2-D Tensor with shape `[batch_size, num_classes]`. Each slice +`[i, :]` represents the unnormalized log-probabilities for all classes. +
+`num_samples` + +0-D. Number of independent samples to draw for each row slice. +
+`dtype` + +integer type to use for the output. Defaults to int64. +
+`seed` + +A Python integer. Used to create a random seed for the distribution. +See tf.random.set_seed for behavior. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+The drawn samples of shape `[batch_size, num_samples]`. +
+ diff --git a/site/en/api_docs/python/tf/random/create_rng_state.md b/site/en/api_docs/python/tf/random/create_rng_state.md new file mode 100644 index 00000000000..c1d25ab0b3a --- /dev/null +++ b/site/en/api_docs/python/tf/random/create_rng_state.md @@ -0,0 +1,98 @@ +description: Creates a RNG state from an integer or a vector. + +
+ + +
+ +# tf.random.create_rng_state + + + + + + + + + +Creates a RNG state from an integer or a vector. + + + + + + + + + + +#### Example: + + + +``` +>>> tf.random.create_rng_state( +... 1234, "philox") +array([1234, 0, 0]) +>>> tf.random.create_rng_state( +... [12, 34], "threefry") +array([12, 34]) +``` + + + + + + + + + + + + + +
+`seed` + +an integer or 1-D numpy array. +
+`alg` + +the RNG algorithm. Can be a string, an `Algorithm` or an integer. +
+ + + + + + + + + + + +
+a 1-D numpy array whose size depends on the algorithm. +
+ diff --git a/site/en/api_docs/python/tf/random/experimental.md b/site/en/api_docs/python/tf/random/experimental.md new file mode 100644 index 00000000000..ca94ab9321a --- /dev/null +++ b/site/en/api_docs/python/tf/random/experimental.md @@ -0,0 +1,35 @@ +description: Public API for tf.random.experimental namespace. + +
+ + +
+ +# Module: tf.random.experimental + + + + + + + + + +Public API for tf.random.experimental namespace. + + + +## Classes + +[`class Algorithm`](../../tf/random/Algorithm.md): An enumeration. + +[`class Generator`](../../tf/random/Generator.md): Random-number generator. + +## Functions + +[`create_rng_state(...)`](../../tf/random/create_rng_state.md): Creates a RNG state from an integer or a vector. + +[`get_global_generator(...)`](../../tf/random/get_global_generator.md): Retrieves the global generator. + +[`set_global_generator(...)`](../../tf/random/set_global_generator.md): Replaces the global generator with another `Generator` object. + diff --git a/site/en/api_docs/python/tf/random/fixed_unigram_candidate_sampler.md b/site/en/api_docs/python/tf/random/fixed_unigram_candidate_sampler.md new file mode 100644 index 00000000000..d40d76dcfed --- /dev/null +++ b/site/en/api_docs/python/tf/random/fixed_unigram_candidate_sampler.md @@ -0,0 +1,227 @@ +description: Samples a set of classes using the provided (fixed) base distribution. + +
+ + +
+ +# tf.random.fixed_unigram_candidate_sampler + + + + + + + + + +Samples a set of classes using the provided (fixed) base distribution. + + + + + + + + + +This operation randomly samples a tensor of sampled classes +(`sampled_candidates`) from the range of integers `[0, range_max)`. + +The elements of `sampled_candidates` are drawn without replacement +(if `unique=True`) or with replacement (if `unique=False`) from +the base distribution. + +The base distribution is read from a file or passed in as an +in-memory array. There is also an option to skew the distribution by +applying a distortion power to the weights. + +In addition, this operation returns tensors `true_expected_count` +and `sampled_expected_count` representing the number of times each +of the target classes (`true_classes`) and the sampled +classes (`sampled_candidates`) is expected to occur in an average +tensor of sampled classes. These values correspond to `Q(y|x)` +defined in [this +document](http://www.tensorflow.org/extras/candidate_sampling.pdf). +If `unique=True`, then these are post-rejection probabilities and we +compute them approximately. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64` and shape `[batch_size, +num_true]`. The target classes. +
+`num_true` + +An `int`. The number of target classes per training example. +
+`num_sampled` + +An `int`. The number of classes to randomly sample. +
+`unique` + +A `bool`. Determines whether all sampled classes in a batch are +unique. +
+`range_max` + +An `int`. The number of possible classes. +
+`vocab_file` + +Each valid line in this file (which should have a CSV-like +format) corresponds to a valid word ID. IDs are in sequential order, +starting from num_reserved_ids. The last entry in each line is expected +to be a value corresponding to the count or relative probability. Exactly +one of `vocab_file` and `unigrams` needs to be passed to this operation. +
+`distortion` + +The distortion is used to skew the unigram probability +distribution. Each weight is first raised to the distortion's power +before adding to the internal unigram distribution. As a result, +`distortion = 1.0` gives regular unigram sampling (as defined by the vocab +file), and `distortion = 0.0` gives a uniform distribution. +
+`num_reserved_ids` + +Optionally some reserved IDs can be added in the range +`[0, num_reserved_ids)` by the users. One use case is that a special +unknown word token is used as ID 0. These IDs will have a sampling +probability of 0. +
+`num_shards` + +A sampler can be used to sample from a subset of the original +range in order to speed up the whole computation through parallelism. This +parameter (together with `shard`) indicates the number of partitions that +are being used in the overall computation. +
+`shard` + +A sampler can be used to sample from a subset of the original range +in order to speed up the whole computation through parallelism. This +parameter (together with `num_shards`) indicates the particular partition +number of the operation, when partitioning is being used. +
+`unigrams` + +A list of unigram counts or probabilities, one per ID in +sequential order. Exactly one of `vocab_file` and `unigrams` should be +passed to this operation. +
+`seed` + +An `int`. An operation-specific seed. Default is 0. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + +
+`sampled_candidates` + +A tensor of type `int64` and shape `[num_sampled]`. +The sampled classes. +
+`true_expected_count` + +A tensor of type `float`. Same shape as +`true_classes`. The expected counts under the sampling distribution +of each of `true_classes`. +
+`sampled_expected_count` + +A tensor of type `float`. Same shape as +`sampled_candidates`. The expected counts under the sampling distribution +of each of `sampled_candidates`. +
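A hedged sketch that passes the base distribution via `unigrams`; all numbers here are illustrative:

```python
import tensorflow as tf

# Two training examples, each with one true class (IDs in [0, 5)).
true_classes = tf.constant([[0], [3]], dtype=tf.int64)

sampled, true_expected, sampled_expected = tf.random.fixed_unigram_candidate_sampler(
    true_classes=true_classes,
    num_true=1,
    num_sampled=3,
    unique=True,
    range_max=5,
    unigrams=[0.4, 0.3, 0.1, 0.1, 0.1])  # one weight per class ID

print(sampled)           # 3 sampled class IDs
print(sampled_expected)  # expected counts under the base distribution
```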
+ diff --git a/site/en/api_docs/python/tf/random/gamma.md b/site/en/api_docs/python/tf/random/gamma.md new file mode 100644 index 00000000000..5e3e1ca8c27 --- /dev/null +++ b/site/en/api_docs/python/tf/random/gamma.md @@ -0,0 +1,171 @@ +description: Draws shape samples from each of the given Gamma distribution(s). + +
+ + +
+ +# tf.random.gamma + + + + + + + + + +Draws `shape` samples from each of the given Gamma distribution(s). + + + + + + + + + +`alpha` is the shape parameter describing the distribution(s), and `beta` is +the inverse scale parameter(s). + +Note: Because internal calculations are done using `float64` and casting has +`floor` semantics, we must manually map zero outcomes to the smallest +possible positive floating-point value, i.e., `np.finfo(dtype).tiny`. This +means that `np.finfo(dtype).tiny` occurs more frequently than it otherwise +should. This bias can only happen for small values of `alpha`, i.e., +`alpha << 1` or large values of `beta`, i.e., `beta >> 1`. + +The samples are differentiable w.r.t. alpha and beta. +The derivatives are computed using the approach described in +(Figurnov et al., 2018). + +#### Example: + + + +```python +samples = tf.random.gamma([10], [0.5, 1.5]) +# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents +# the samples drawn from each distribution + +samples = tf.random.gamma([7, 5], [0.5, 1.5]) +# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1] +# represents the 7x5 samples drawn from each of the two distributions + +alpha = tf.constant([[1.],[3.],[5.]]) +beta = tf.constant([[3., 4.]]) +samples = tf.random.gamma([30], alpha=alpha, beta=beta) +# samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions. + +loss = tf.reduce_mean(tf.square(samples)) +dloss_dalpha, dloss_dbeta = tf.gradients(loss, [alpha, beta]) +# unbiased stochastic derivatives of the loss function +alpha.shape == dloss_dalpha.shape # True +beta.shape == dloss_dbeta.shape # True +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output samples +to be drawn per alpha/beta-parameterized distribution. +
+`alpha` + +A Tensor or Python value or N-D array of type `dtype`. `alpha` +provides the shape parameter(s) describing the gamma distribution(s) to +sample. Must be broadcastable with `beta`. +
+`beta` + +A Tensor or Python value or N-D array of type `dtype`. Defaults to 1. +`beta` provides the inverse scale parameter(s) of the gamma +distribution(s) to sample. Must be broadcastable with `alpha`. +
+`dtype` + +The type of alpha, beta, and the output: `float16`, `float32`, or +`float64`. +
+`seed` + +A Python integer. Used to create a random seed for the distributions. +See +tf.random.set_seed +for behavior. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + + +
+`samples` + +a `Tensor` of shape +`tf.concat([shape, tf.shape(alpha + beta)], axis=0)` with values of type +`dtype`. +
+ + + +#### References: + +Implicit Reparameterization Gradients: + [Figurnov et al., 2018] + (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients) + ([pdf] + (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf)) diff --git a/site/en/api_docs/python/tf/random/get_global_generator.md b/site/en/api_docs/python/tf/random/get_global_generator.md new file mode 100644 index 00000000000..46cf2fba0d9 --- /dev/null +++ b/site/en/api_docs/python/tf/random/get_global_generator.md @@ -0,0 +1,63 @@ +description: Retrieves the global generator. + +
+ + +
+ +# tf.random.get_global_generator + + + + + + + + + +Retrieves the global generator. + + + + + + + + + +This function will create the global generator the first time it is called, +and the generator will be placed at the default device at that time, so one +needs to be careful when this function is first called. Using a generator +placed on a less-ideal device will incur performance regression. + + + + + + + + + +
+The global tf.random.Generator object. +
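For instance (a minimal sketch):

```python
import tensorflow as tf

g = tf.random.get_global_generator()
print(g.uniform(shape=[2]))

# Later calls hand back the same Generator object (unless it is replaced
# via tf.random.set_global_generator).
assert g is tf.random.get_global_generator()
```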
+ diff --git a/site/en/api_docs/python/tf/random/learned_unigram_candidate_sampler.md b/site/en/api_docs/python/tf/random/learned_unigram_candidate_sampler.md new file mode 100644 index 00000000000..0c36bb6ce38 --- /dev/null +++ b/site/en/api_docs/python/tf/random/learned_unigram_candidate_sampler.md @@ -0,0 +1,167 @@ +description: Samples a set of classes from a distribution learned during training. + +
+ + +
+ +# tf.random.learned_unigram_candidate_sampler + + + + + + + + + +Samples a set of classes from a distribution learned during training. + + + + + + + + + +This operation randomly samples a tensor of sampled classes +(`sampled_candidates`) from the range of integers `[0, range_max)`. + +The elements of `sampled_candidates` are drawn without replacement +(if `unique=True`) or with replacement (if `unique=False`) from +the base distribution. + +The base distribution for this operation is constructed on the fly +during training. It is a unigram distribution over the target +classes seen so far during training. Every integer in `[0, range_max)` +begins with a weight of 1, and is incremented by 1 each time it is +seen as a target class. The base distribution is not saved to checkpoints, +so it is reset when the model is reloaded. + +In addition, this operation returns tensors `true_expected_count` +and `sampled_expected_count` representing the number of times each +of the target classes (`true_classes`) and the sampled +classes (`sampled_candidates`) is expected to occur in an average +tensor of sampled classes. These values correspond to `Q(y|x)` +defined in [this +document](http://www.tensorflow.org/extras/candidate_sampling.pdf). +If `unique=True`, then these are post-rejection probabilities and we +compute them approximately. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64` and shape `[batch_size, +num_true]`. The target classes. +
+`num_true` + +An `int`. The number of target classes per training example. +
+`num_sampled` + +An `int`. The number of classes to randomly sample. +
+`unique` + +A `bool`. Determines whether all sampled classes in a batch are +unique. +
+`range_max` + +An `int`. The number of possible classes. +
+`seed` + +An `int`. An operation-specific seed. Default is 0. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + +
+`sampled_candidates` + +A tensor of type `int64` and shape `[num_sampled]`. +The sampled classes. +
+`true_expected_count` + +A tensor of type `float`. Same shape as +`true_classes`. The expected counts under the sampling distribution +of each of `true_classes`. +
+`sampled_expected_count` + +A tensor of type `float`. Same shape as +`sampled_candidates`. The expected counts under the sampling distribution +of each of `sampled_candidates`. +
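
A minimal sketch of a call; the class ids, `range_max`, and `num_sampled` below are illustrative values, not defaults:

```python
import tensorflow as tf

# Two training examples, one target class each, out of 1000 possible classes.
true_classes = tf.constant([[12], [345]], dtype=tf.int64)

sampled, true_expected, sampled_expected = tf.random.learned_unigram_candidate_sampler(
    true_classes=true_classes,
    num_true=1,
    num_sampled=5,
    unique=True,
    range_max=1000,
    seed=42)

print(sampled.shape)           # (5,)   sampled class ids
print(true_expected.shape)     # (2, 1) matches true_classes
print(sampled_expected.shape)  # (5,)   matches sampled_candidates
```
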
+ diff --git a/site/en/api_docs/python/tf/random/log_uniform_candidate_sampler.md b/site/en/api_docs/python/tf/random/log_uniform_candidate_sampler.md new file mode 100644 index 00000000000..337a78da6fa --- /dev/null +++ b/site/en/api_docs/python/tf/random/log_uniform_candidate_sampler.md @@ -0,0 +1,167 @@ +description: Samples a set of classes using a log-uniform (Zipfian) base distribution. + +
+ + +
+ +# tf.random.log_uniform_candidate_sampler + + + + + + + + + +Samples a set of classes using a log-uniform (Zipfian) base distribution. + + + + + + + + + +This operation randomly samples a tensor of sampled classes +(`sampled_candidates`) from the range of integers `[0, range_max)`. + +The elements of `sampled_candidates` are drawn without replacement +(if `unique=True`) or with replacement (if `unique=False`) from +the base distribution. + +The base distribution for this operation is an approximately log-uniform +or Zipfian distribution: + +`P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)` + +This sampler is useful when the target classes approximately follow such +a distribution - for example, if the classes represent words in a lexicon +sorted in decreasing order of frequency. If your classes are not ordered by +decreasing frequency, do not use this op. + +In addition, this operation returns tensors `true_expected_count` +and `sampled_expected_count` representing the number of times each +of the target classes (`true_classes`) and the sampled +classes (`sampled_candidates`) is expected to occur in an average +tensor of sampled classes. These values correspond to `Q(y|x)` +defined in [this +document](http://www.tensorflow.org/extras/candidate_sampling.pdf). +If `unique=True`, then these are post-rejection probabilities and we +compute them approximately. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64` and shape `[batch_size, +num_true]`. The target classes. +
+`num_true` + +An `int`. The number of target classes per training example. +
+`num_sampled` + +An `int`. The number of classes to randomly sample. +
+`unique` + +A `bool`. Determines whether all sampled classes in a batch are +unique. +
+`range_max` + +An `int`. The number of possible classes. +
+`seed` + +An `int`. An operation-specific seed. Default is 0. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + +
+`sampled_candidates` + +A tensor of type `int64` and shape `[num_sampled]`. +The sampled classes. +
+`true_expected_count` + +A tensor of type `float`. Same shape as +`true_classes`. The expected counts under the sampling distribution +of each of `true_classes`. +
+`sampled_expected_count` + +A tensor of type `float`. Same shape as +`sampled_candidates`. The expected counts under the sampling distribution +of each of `sampled_candidates`. +
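
A minimal sketch that evaluates the base distribution formula above for a couple of class ids and then draws candidates (the sizes and seeds are illustrative):

```python
import math
import tensorflow as tf

range_max = 1000

def p(class_id):
    # P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
    return (math.log(class_id + 2) - math.log(class_id + 1)) / math.log(range_max + 1)

print(p(0), p(999))  # low ids (frequent classes) get much higher probability

true_classes = tf.constant([[0], [7]], dtype=tf.int64)
sampled, true_expected, sampled_expected = tf.random.log_uniform_candidate_sampler(
    true_classes=true_classes,
    num_true=1,
    num_sampled=10,
    unique=True,
    range_max=range_max,
    seed=7)
```
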
+ diff --git a/site/en/api_docs/python/tf/random/normal.md b/site/en/api_docs/python/tf/random/normal.md new file mode 100644 index 00000000000..3c79a423fc1 --- /dev/null +++ b/site/en/api_docs/python/tf/random/normal.md @@ -0,0 +1,136 @@ +description: Outputs random values from a normal distribution. + +
+ + +
+ +# tf.random.normal + + + + + + + + + +Outputs random values from a normal distribution. + + + + + + + + + +Example that generates a new set of random values every time: + +``` +>>> tf.random.set_seed(5); +>>> tf.random.normal([4], 0, 1, tf.float32) + +``` + +Example that outputs a reproducible result: + +``` +>>> tf.random.set_seed(5); +>>> tf.random.normal([2,2], 0, 1, tf.float32, seed=1) + +``` + +In this case, we are setting both the global and operation-level seed to +ensure this result is reproducible. See tf.random.set_seed for more +information. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output tensor. +
+`mean` + +A Tensor or Python value of type `dtype`, broadcastable with `stddev`. +The mean of the normal distribution. +
+`stddev` + +A Tensor or Python value of type `dtype`, broadcastable with `mean`. +The standard deviation of the normal distribution. +
+`dtype` + +The type of the output. +
+`seed` + +A Python integer. Used to create a random seed for the distribution. +See +tf.random.set_seed +for behavior. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tensor of the specified shape filled with random normal values. +
+ diff --git a/site/en/api_docs/python/tf/random/poisson.md b/site/en/api_docs/python/tf/random/poisson.md new file mode 100644 index 00000000000..85587bbd231 --- /dev/null +++ b/site/en/api_docs/python/tf/random/poisson.md @@ -0,0 +1,118 @@ +description: Draws shape samples from each of the given Poisson distribution(s). + +
+ + +
+ +# tf.random.poisson + + + + + + + + + +Draws `shape` samples from each of the given Poisson distribution(s). + + + + + + + +`lam` is the rate parameter describing the distribution(s). + +#### Example: + + + +```python +samples = tf.random.poisson([10], [0.5, 1.5]) +# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents +# the samples drawn from each distribution + +samples = tf.random.poisson([7, 5], [12.2, 3.3]) +# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1] +# represents the 7x5 samples drawn from each of the two distributions +``` + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output samples +to be drawn per "rate"-parameterized distribution. +
+`lam` + +A Tensor or Python value or N-D array of type `dtype`. +`lam` provides the rate parameter(s) describing the poisson +distribution(s) to sample. +
+`dtype` + +The type of the output: `float16`, `float32`, `float64`, `int32` or +`int64`. +
+`seed` + +A Python integer. Used to create a random seed for the distributions. +See +tf.random.set_seed +for behavior. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + + +
+`samples` + +a `Tensor` of shape `tf.concat([shape, tf.shape(lam)], axis=0)` +with values of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/random/set_global_generator.md b/site/en/api_docs/python/tf/random/set_global_generator.md new file mode 100644 index 00000000000..e6f0fc5dc3c --- /dev/null +++ b/site/en/api_docs/python/tf/random/set_global_generator.md @@ -0,0 +1,74 @@ +description: Replaces the global generator with another Generator object. + +
+ + +
+ +# tf.random.set_global_generator + + + + + + + + + +Replaces the global generator with another `Generator` object. + + + + + + + + + +This function creates a new Generator object (and the Variable object within), +which does not work well with tf.function because (1) tf.function puts +restrictions on Variable creation thus reset_global_generator can't be freely +used inside tf.function; (2) redirecting a global variable to +a new object is problematic with tf.function because the old object may be +captured by a 'tf.function'ed function and still be used by it. +A 'tf.function'ed function only keeps weak references to variables, +so deleting a variable and then calling that function again may raise an +error, as demonstrated by +random_test.py/RandomTest.testResetGlobalGeneratorBadWithDefun . + + + + + + + + + + +
+`generator` + +the new `Generator` object. +
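
A minimal sketch, assuming the generator class is exposed as tf.random.experimental.Generator in this release and using an illustrative seed. As described above, do this before any tf.function has captured the old generator's variables:

```python
import tensorflow as tf

# Build a freshly seeded generator and install it as the global one.
new_gen = tf.random.experimental.Generator.from_seed(42)
tf.random.set_global_generator(new_gen)

# Subsequent draws from the global generator now come from `new_gen`.
print(tf.random.get_global_generator().normal([2]))
```
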
+ diff --git a/site/en/api_docs/python/tf/random/set_seed.md b/site/en/api_docs/python/tf/random/set_seed.md new file mode 100644 index 00000000000..365afcb1eb0 --- /dev/null +++ b/site/en/api_docs/python/tf/random/set_seed.md @@ -0,0 +1,190 @@ +description: Sets the global random seed. + +
+ + +
+ +# tf.random.set_seed + + + + + + + + + +Sets the global random seed. + + + + + + + +Operations that rely on a random seed actually derive it from two seeds: +the global and operation-level seeds. This sets the global seed. + +Its interactions with operation-level seeds is as follows: + + 1. If neither the global seed nor the operation seed is set: A randomly + picked seed is used for this op. + 2. If the graph-level seed is set, but the operation seed is not: + The system deterministically picks an operation seed in conjunction with + the graph-level seed so that it gets a unique random sequence. Within the + same version of tensorflow and user code, this sequence is deterministic. + However across different versions, this sequence might change. If the + code depends on particular seeds to work, specify both graph-level + and operation-level seeds explicitly. + 3. If the operation seed is set, but the global seed is not set: + A default global seed and the specified operation seed are used to + determine the random sequence. + 4. If both the global and the operation seed are set: + Both seeds are used in conjunction to determine the random sequence. + +To illustrate the user-visible effects, consider these examples: + +If neither the global seed nor the operation seed is set, we get different +results for every call to the random op and every re-run of the program: + +```python +print(tf.random.uniform([1])) # generates 'A1' +print(tf.random.uniform([1])) # generates 'A2' +``` + +(now close the program and run it again) + +```python +print(tf.random.uniform([1])) # generates 'A3' +print(tf.random.uniform([1])) # generates 'A4' +``` + +If the global seed is set but the operation seed is not set, we get different +results for every call to the random op, but the same sequence for every +re-run of the program: + +```python +tf.random.set_seed(1234) +print(tf.random.uniform([1])) # generates 'A1' +print(tf.random.uniform([1])) # generates 'A2' +``` + +(now close the program and run it again) + +```python +tf.random.set_seed(1234) +print(tf.random.uniform([1])) # generates 'A1' +print(tf.random.uniform([1])) # generates 'A2' +``` + +The reason we get 'A2' instead 'A1' on the second call of tf.random.uniform +above is because the second call uses a different operation seed. + +Note that tf.function acts like a re-run of a program in this case. When +the global seed is set but operation seeds are not set, the sequence of random +numbers are the same for each tf.function. For example: + +```python +tf.random.set_seed(1234) + +@tf.function +def f(): + a = tf.random.uniform([1]) + b = tf.random.uniform([1]) + return a, b + +@tf.function +def g(): + a = tf.random.uniform([1]) + b = tf.random.uniform([1]) + return a, b + +print(f()) # prints '(A1, A2)' +print(g()) # prints '(A1, A2)' +``` + +If the operation seed is set, we get different results for every call to the +random op, but the same sequence for every re-run of the program: + +```python +print(tf.random.uniform([1], seed=1)) # generates 'A1' +print(tf.random.uniform([1], seed=1)) # generates 'A2' +``` + +(now close the program and run it again) + +```python +print(tf.random.uniform([1], seed=1)) # generates 'A1' +print(tf.random.uniform([1], seed=1)) # generates 'A2' +``` + +The reason we get 'A2' instead 'A1' on the second call of tf.random.uniform +above is because the same tf.random.uniform kernel (i.e. 
internal +representation) is used by TensorFlow for all calls of it with the same +arguments, and the kernel maintains an internal counter which is incremented +every time it is executed, generating different results. + +Calling tf.random.set_seed will reset any such counters: + +```python +tf.random.set_seed(1234) +print(tf.random.uniform([1], seed=1)) # generates 'A1' +print(tf.random.uniform([1], seed=1)) # generates 'A2' +tf.random.set_seed(1234) +print(tf.random.uniform([1], seed=1)) # generates 'A1' +print(tf.random.uniform([1], seed=1)) # generates 'A2' +``` + +When multiple identical random ops are wrapped in a tf.function, their +behaviors change because the ops no long share the same counter. For example: + +```python +@tf.function +def foo(): + a = tf.random.uniform([1], seed=1) + b = tf.random.uniform([1], seed=1) + return a, b +print(foo()) # prints '(A1, A1)' +print(foo()) # prints '(A2, A2)' + +@tf.function +def bar(): + a = tf.random.uniform([1]) + b = tf.random.uniform([1]) + return a, b +print(bar()) # prints '(A1, A2)' +print(bar()) # prints '(A3, A4)' +``` + +The second call of `foo` returns '(A2, A2)' instead of '(A1, A1)' because +tf.random.uniform maintains an internal counter. If you want `foo` to return +'(A1, A1)' every time, use the stateless random ops such as +tf.random.stateless_uniform. Also see tf.random.experimental.Generator for +a new set of stateful random ops that use external variables to manage their +states. + + + + + + + + + + +
+`seed` + +integer. +
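
A minimal sketch of the stateless alternative mentioned above: stateless ops take the seed explicitly, so identical calls return identical values regardless of global state or per-kernel counters (the shape and seed are illustrative):

```python
import tensorflow as tf

a = tf.random.stateless_uniform([3], seed=[1, 2])
b = tf.random.stateless_uniform([3], seed=[1, 2])

# Same seed pair, same values -- no hidden counter is involved.
print(bool(tf.reduce_all(tf.equal(a, b))))  # True
```
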
+ diff --git a/site/en/api_docs/python/tf/random/shuffle.md b/site/en/api_docs/python/tf/random/shuffle.md new file mode 100644 index 00000000000..2ca185f3589 --- /dev/null +++ b/site/en/api_docs/python/tf/random/shuffle.md @@ -0,0 +1,102 @@ +description: Randomly shuffles a tensor along its first dimension. + +
+ + +
+ +# tf.random.shuffle + + + + + + + + + +Randomly shuffles a tensor along its first dimension. + + + + + + + + + +The tensor is shuffled along dimension 0, such that each `value[j]` is mapped +to one and only one `output[i]`. For example, a mapping that might occur for a +3x2 tensor is: + +```python +[[1, 2], [[5, 6], + [3, 4], ==> [1, 2], + [5, 6]] [3, 4]] +``` + + + + + + + + + + + + + + + + +
+`value` + +A Tensor to be shuffled. +
+`seed` + +A Python integer. Used to create a random seed for the distribution. +See +tf.random.set_seed +for behavior. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tensor of same shape and type as `value`, shuffled along its first +dimension. +
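
A minimal runnable sketch showing that first-dimension slices are permuted as whole units (the values are illustrative):

```python
import tensorflow as tf

t = tf.constant([[1, 2], [3, 4], [5, 6]])
shuffled = tf.random.shuffle(t, seed=3)

# The set of rows is unchanged; only their order along axis 0 may differ.
print(shuffled.shape)  # (3, 2)
print(int(tf.reduce_sum(shuffled)) == int(tf.reduce_sum(t)))  # True
```
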
+ diff --git a/site/en/api_docs/python/tf/random/stateless_binomial.md b/site/en/api_docs/python/tf/random/stateless_binomial.md new file mode 100644 index 00000000000..e4e181f8fcb --- /dev/null +++ b/site/en/api_docs/python/tf/random/stateless_binomial.md @@ -0,0 +1,148 @@ +description: Outputs deterministic pseudorandom values from a binomial distribution. + +
+ + +
+ +# tf.random.stateless_binomial + + + + + + + + + +Outputs deterministic pseudorandom values from a binomial distribution. + + + + + + + + + +The generated values follow a binomial distribution with specified count and +probability of success parameters. + +This is a stateless version of tf.random.Generator.binomial: if run twice +with the same seeds, it will produce the same pseudorandom numbers. The +output is consistent across multiple runs on the same hardware (and between +CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU +hardware. + +#### Example: + + + +```python +counts = [10., 20.] +# Probability of success. +probs = [0.8] + +binomial_samples = tf.random.stateless_binomial( + shape=[2], seed=[123, 456], counts=counts, probs=probs) + +counts = ... # Shape [3, 1, 2] +probs = ... # Shape [1, 4, 2] +shape = [3, 4, 3, 4, 2] +# Sample shape will be [3, 4, 3, 4, 2] +binomial_samples = tf.random.stateless_binomial( + shape=shape, seed=[123, 456], counts=counts, probs=probs) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output tensor. +
+`seed` + +A shape [2] integer Tensor of seeds to the random number generator. +
+`counts` + +Tensor. The counts of the binomial distribution. Must be +broadcastable with `probs`, and broadcastable with the rightmost +dimensions of `shape`. +
+`probs` + +Tensor. The probability of success for the binomial distribution. +Must be broadcastable with `counts` and broadcastable with the rightmost +dimensions of `shape`. +
+`output_dtype` + +The type of the output. Default: tf.int32 +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+`samples` + +A Tensor of the specified shape filled with random binomial +values. For each i, each samples[..., i] is an independent draw from +the binomial distribution on counts[i] trials with probability of +success probs[i]. +
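
A concrete, runnable variant of the first example above, also showing the determinism guarantee (the counts, probability, and seed are illustrative):

```python
import tensorflow as tf

counts = tf.constant([10., 20.])
probs = tf.constant([0.8])  # broadcasts against `counts`

samples = tf.random.stateless_binomial(
    shape=[5, 2], seed=[123, 456], counts=counts, probs=probs)
again = tf.random.stateless_binomial(
    shape=[5, 2], seed=[123, 456], counts=counts, probs=probs)

# Identical seeds (and arguments) produce identical draws.
print(bool(tf.reduce_all(tf.equal(samples, again))))  # True
```
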
+ diff --git a/site/en/api_docs/python/tf/random/stateless_categorical.md b/site/en/api_docs/python/tf/random/stateless_categorical.md new file mode 100644 index 00000000000..101df18acae --- /dev/null +++ b/site/en/api_docs/python/tf/random/stateless_categorical.md @@ -0,0 +1,120 @@ +description: Draws deterministic pseudorandom samples from a categorical distribution. + +
+ + +
+ +# tf.random.stateless_categorical + + + + + + + + + +Draws deterministic pseudorandom samples from a categorical distribution. + + + + + + + + + +This is a stateless version of `tf.categorical`: if run twice with the +same seeds, it will produce the same pseudorandom numbers. The output is +consistent across multiple runs on the same hardware (and between CPU +and GPU), but may change between versions of TensorFlow or on non-CPU/GPU +hardware. + +#### Example: + + + +```python +# samples has shape [1, 5], where each value is either 0 or 1 with equal +# probability. +samples = tf.random.stateless_categorical( + tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17]) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`logits` + +2-D Tensor with shape `[batch_size, num_classes]`. Each slice +`[i, :]` represents the unnormalized log-probabilities for all classes. +
+`num_samples` + +0-D. Number of independent samples to draw for each row slice. +
+`seed` + +A shape [2] integer Tensor of seeds to the random number generator. +
+`dtype` + +integer type to use for the output. Defaults to int64. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+The drawn samples of shape `[batch_size, num_samples]`. +
+ diff --git a/site/en/api_docs/python/tf/random/stateless_gamma.md b/site/en/api_docs/python/tf/random/stateless_gamma.md new file mode 100644 index 00000000000..4a138421ccd --- /dev/null +++ b/site/en/api_docs/python/tf/random/stateless_gamma.md @@ -0,0 +1,173 @@ +description: Outputs deterministic pseudorandom values from a gamma distribution. + +
+ + +
+ +# tf.random.stateless_gamma + + + + + + + + + +Outputs deterministic pseudorandom values from a gamma distribution. + + + + + + + + + +The generated values follow a gamma distribution with specified concentration +(`alpha`) and inverse scale (`beta`) parameters. + +This is a stateless version of tf.random.gamma: if run twice with the same +seeds, it will produce the same pseudorandom numbers. The output is consistent +across multiple runs on the same hardware (and between CPU and GPU), but may +change between versions of TensorFlow or on non-CPU/GPU hardware. + +A slight difference exists in the interpretation of the `shape` parameter +between `stateless_gamma` and `gamma`: in `gamma`, the `shape` is always +prepended to the shape of the broadcast of `alpha` with `beta`; whereas in +`stateless_gamma` the `shape` parameter must always encompass the shapes of +each of `alpha` and `beta` (which must broadcast together to match the +trailing dimensions of `shape`). + +Note: Because internal calculations are done using `float64` and casting has +`floor` semantics, we must manually map zero outcomes to the smallest +possible positive floating-point value, i.e., `np.finfo(dtype).tiny`. This +means that `np.finfo(dtype).tiny` occurs more frequently than it otherwise +should. This bias can only happen for small values of `alpha`, i.e., +`alpha << 1` or large values of `beta`, i.e., `beta >> 1`. + +The samples are differentiable w.r.t. alpha and beta. +The derivatives are computed using the approach described in +(Figurnov et al., 2018). + +#### Example: + + + +```python +samples = tf.random.stateless_gamma([10, 2], seed=[12, 34], alpha=[0.5, 1.5]) +# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents +# the samples drawn from each distribution + +samples = tf.random.stateless_gamma([7, 5, 2], seed=[12, 34], alpha=[.5, 1.5]) +# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1] +# represents the 7x5 samples drawn from each of the two distributions + +alpha = tf.constant([[1.], [3.], [5.]]) +beta = tf.constant([[3., 4.]]) +samples = tf.random.stateless_gamma( + [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta) +# samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions. + +with tf.GradientTape() as tape: + tape.watch([alpha, beta]) + loss = tf.reduce_mean(tf.square(tf.random.stateless_gamma( + [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta))) +dloss_dalpha, dloss_dbeta = tape.gradient(loss, [alpha, beta]) +# unbiased stochastic derivatives of the loss function +alpha.shape == dloss_dalpha.shape # True +beta.shape == dloss_dbeta.shape # True +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output tensor. +
+`seed` + +A shape [2] integer Tensor of seeds to the random number generator. +
+`alpha` + +Tensor. The concentration parameter of the gamma distribution. Must +be broadcastable with `beta`, and broadcastable with the rightmost +dimensions of `shape`. +
+`beta` + +Tensor. The inverse scale parameter of the gamma distribution. Must be +broadcastable with `alpha` and broadcastable with the rightmost dimensions +of `shape`. +
+`dtype` + +Floating point dtype of `alpha`, `beta`, and the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+
`samples`

A Tensor of the specified shape filled with random gamma values.
For each i, each `samples[..., i]` is an independent draw from the gamma
distribution with concentration `alpha[i]` and scale `beta[i]`.
+ diff --git a/site/en/api_docs/python/tf/random/stateless_normal.md b/site/en/api_docs/python/tf/random/stateless_normal.md new file mode 100644 index 00000000000..1f0ead2c5cd --- /dev/null +++ b/site/en/api_docs/python/tf/random/stateless_normal.md @@ -0,0 +1,117 @@ +description: Outputs deterministic pseudorandom values from a normal distribution. + +
+ + +
+ +# tf.random.stateless_normal + + + + + + + + + +Outputs deterministic pseudorandom values from a normal distribution. + + + + + + + + + +This is a stateless version of tf.random.normal: if run twice with the +same seeds, it will produce the same pseudorandom numbers. The output is +consistent across multiple runs on the same hardware (and between CPU +and GPU), but may change between versions of TensorFlow or on non-CPU/GPU +hardware. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output tensor. +
+`seed` + +A shape [2] integer Tensor of seeds to the random number generator. +
+`mean` + +A 0-D Tensor or Python value of type `dtype`. The mean of the normal +distribution. +
+`stddev` + +A 0-D Tensor or Python value of type `dtype`. The standard deviation +of the normal distribution. +
+`dtype` + +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tensor of the specified shape filled with random normal values. +
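
A minimal sketch illustrating the determinism guarantee (the shape, seed, and moments are illustrative):

```python
import tensorflow as tf

a = tf.random.stateless_normal([2, 3], seed=[1, 2], mean=0., stddev=1.)
b = tf.random.stateless_normal([2, 3], seed=[1, 2], mean=0., stddev=1.)

# Re-running with the same seed gives identical results.
print(bool(tf.reduce_all(tf.equal(a, b))))  # True
```
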
+ diff --git a/site/en/api_docs/python/tf/random/stateless_poisson.md b/site/en/api_docs/python/tf/random/stateless_poisson.md new file mode 100644 index 00000000000..cffbd6f84c6 --- /dev/null +++ b/site/en/api_docs/python/tf/random/stateless_poisson.md @@ -0,0 +1,140 @@ +description: Outputs deterministic pseudorandom values from a Poisson distribution. + +
+ + +
+ +# tf.random.stateless_poisson + + + + + + + + + +Outputs deterministic pseudorandom values from a Poisson distribution. + + + + + + + + + +The generated values follow a Poisson distribution with specified rate +parameter. + +This is a stateless version of tf.random.poisson: if run twice with the same +seeds, it will produce the same pseudorandom numbers. The output is consistent +across multiple runs on the same hardware (and between CPU and GPU), but may +change between versions of TensorFlow or on non-CPU/GPU hardware. + +A slight difference exists in the interpretation of the `shape` parameter +between `stateless_poisson` and `poisson`: in `poisson`, the `shape` is always +prepended to the shape of `rate`; whereas in `stateless_poisson` the shape of +`rate` must match the trailing dimensions of `shape`. + +#### Example: + + + +```python +samples = tf.random.stateless_poisson([10, 2], seed=[12, 34], lam=[5, 15]) +# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents +# the samples drawn from each distribution + +samples = tf.random.stateless_poisson([7, 5, 2], seed=[12, 34], lam=[5, 15]) +# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1] +# represents the 7x5 samples drawn from each of the two distributions + +rate = tf.constant([[1.], [3.], [5.]]) +samples = tf.random.stateless_poisson([30, 3, 1], seed=[12, 34], lam=rate) +# samples has shape [30, 3, 1], with 30 samples each of 3x1 distributions. +``` + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output tensor. +
+`seed` + +A shape [2] integer Tensor of seeds to the random number generator. +
+`lam` + +Tensor. The rate parameter "lambda" of the Poisson distribution. Shape +must match the rightmost dimensions of `shape`. +
+`dtype` + +Dtype of the samples (int or float dtypes are permissible, as samples +are discrete). Default: int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+`samples` + +A Tensor of the specified shape filled with random Poisson values. +For each i, each `samples[..., i]` is an independent draw from the Poisson +distribution with rate `lam[i]`. +
+ diff --git a/site/en/api_docs/python/tf/random/stateless_truncated_normal.md b/site/en/api_docs/python/tf/random/stateless_truncated_normal.md new file mode 100644 index 00000000000..363e30435e4 --- /dev/null +++ b/site/en/api_docs/python/tf/random/stateless_truncated_normal.md @@ -0,0 +1,122 @@ +description: Outputs deterministic pseudorandom values, truncated normally distributed. + +
+ + +
+ +# tf.random.stateless_truncated_normal + + + + + + + + + +Outputs deterministic pseudorandom values, truncated normally distributed. + + + + + + + + + +This is a stateless version of tf.random.truncated_normal: if run twice with +the +same seeds, it will produce the same pseudorandom numbers. The output is +consistent across multiple runs on the same hardware (and between CPU +and GPU), but may change between versions of TensorFlow or on non-CPU/GPU +hardware. + +The generated values follow a normal distribution with specified mean and +standard deviation, except that values whose magnitude is more than 2 standard +deviations from the mean are dropped and re-picked. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output tensor. +
+`seed` + +A shape [2] integer Tensor of seeds to the random number generator. +
+`mean` + +A 0-D Tensor or Python value of type `dtype`. The mean of the +truncated normal distribution. +
+`stddev` + +A 0-D Tensor or Python value of type `dtype`. The standard deviation +of the normal distribution, before truncation. +
+`dtype` + +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tensor of the specified shape filled with random truncated normal values. +
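
A minimal sketch illustrating the truncation property described above (the sample size and seed are illustrative):

```python
import tensorflow as tf

x = tf.random.stateless_truncated_normal([10000], seed=[4, 2], mean=0., stddev=1.)

# Values more than 2 standard deviations from the mean were re-drawn,
# so every sample lies within [-2, 2].
print(bool(tf.reduce_all(tf.abs(x) <= 2.0)))  # True
```
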
+ diff --git a/site/en/api_docs/python/tf/random/stateless_uniform.md b/site/en/api_docs/python/tf/random/stateless_uniform.md new file mode 100644 index 00000000000..779e7e4b9f6 --- /dev/null +++ b/site/en/api_docs/python/tf/random/stateless_uniform.md @@ -0,0 +1,160 @@ +description: Outputs deterministic pseudorandom values from a uniform distribution. + +
+ + +
+ +# tf.random.stateless_uniform + + + + + + + + + +Outputs deterministic pseudorandom values from a uniform distribution. + + + + + + + + + +This is a stateless version of tf.random.uniform: if run twice with the +same seeds, it will produce the same pseudorandom numbers. The output is +consistent across multiple runs on the same hardware (and between CPU +and GPU), but may change between versions of TensorFlow or on non-CPU/GPU +hardware. + +The generated values follow a uniform distribution in the range +`[minval, maxval)`. The lower bound `minval` is included in the range, while +the upper bound `maxval` is excluded. + +For floats, the default range is `[0, 1)`. For ints, at least `maxval` must +be specified explicitly. + +In the integer case, the random integers are slightly biased unless +`maxval - minval` is an exact power of two. The bias is small for values of +`maxval - minval` significantly smaller than the range of the output (either +`2**32` or `2**64`). + +For full full-range (i.e. inclusive of both max and min) random integers, pass +`minval=None` and `maxval=None` with an integer `dtype`. For an integer dtype +either both `minval` and `maxval` must be `None` or neither may be `None`. For +example: +```python +ints = tf.random.stateless_uniform( + [10], seed=(2, 3), minval=None, maxval=None, dtype=tf.int32) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output tensor. +
+`seed` + +A shape [2] integer Tensor of seeds to the random number generator. +
+`minval` + +A 0-D Tensor or Python value of type `dtype`. The lower bound on the +range of random values to generate. Pass `None` for full-range integers. +Defaults to 0. +
+`maxval` + +A 0-D Tensor or Python value of type `dtype`. The upper bound on the +range of random values to generate. Defaults to 1 if `dtype` is floating +point. Pass `None` for full-range integers. +
+`dtype` + +The type of the output: `float16`, `float32`, `float64`, `int32`, or +`int64`. For unbounded uniform ints (`minval`, `maxval` both `None`), +`uint32` and `uint64` may be used. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tensor of the specified shape filled with random uniform values. +
+ + + + + + + + + + + + +
+`ValueError` + +If `dtype` is integral and only one of `minval` or `maxval` is +specified. +
+ diff --git a/site/en/api_docs/python/tf/random/truncated_normal.md b/site/en/api_docs/python/tf/random/truncated_normal.md new file mode 100644 index 00000000000..ca8b1c816ef --- /dev/null +++ b/site/en/api_docs/python/tf/random/truncated_normal.md @@ -0,0 +1,118 @@ +description: Outputs random values from a truncated normal distribution. + +
+ + +
+ +# tf.random.truncated_normal + + + + + + + + + +Outputs random values from a truncated normal distribution. + + + + + + + + + +The generated values follow a normal distribution with specified mean and +standard deviation, except that values whose magnitude is more than 2 standard +deviations from the mean are dropped and re-picked. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output tensor. +
+`mean` + +A 0-D Tensor or Python value of type `dtype`. The mean of the +truncated normal distribution. +
+`stddev` + +A 0-D Tensor or Python value of type `dtype`. The standard deviation +of the normal distribution, before truncation. +
+`dtype` + +The type of the output. +
+`seed` + +A Python integer. Used to create a random seed for the distribution. +See +tf.random.set_seed +for behavior. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tensor of the specified shape filled with random truncated normal values. +
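
A minimal sketch of a common use, drawing weight-like values whose outliers beyond two standard deviations have been re-drawn (the shape and stddev are illustrative):

```python
import tensorflow as tf

w = tf.random.truncated_normal([64, 32], mean=0.0, stddev=0.05, seed=7)

# No value is further than 2 * stddev from the mean.
print(float(tf.reduce_max(tf.abs(w))) <= 2 * 0.05)  # True
```
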
+ diff --git a/site/en/api_docs/python/tf/random/uniform.md b/site/en/api_docs/python/tf/random/uniform.md new file mode 100644 index 00000000000..b34e8d29680 --- /dev/null +++ b/site/en/api_docs/python/tf/random/uniform.md @@ -0,0 +1,177 @@ +description: Outputs random values from a uniform distribution. + +
+ + +
+ +# tf.random.uniform + + + + + + + + + +Outputs random values from a uniform distribution. + + + + + + + + + +The generated values follow a uniform distribution in the range +`[minval, maxval)`. The lower bound `minval` is included in the range, while +the upper bound `maxval` is excluded. + +For floats, the default range is `[0, 1)`. For ints, at least `maxval` must +be specified explicitly. + +In the integer case, the random integers are slightly biased unless +`maxval - minval` is an exact power of two. The bias is small for values of +`maxval - minval` significantly smaller than the range of the output (either +`2**32` or `2**64`). + +#### Examples: + + + +``` +>>> tf.random.uniform(shape=[2]) + +>>> tf.random.uniform(shape=[], minval=-1., maxval=0.) + +>>> tf.random.uniform(shape=[], minval=5, maxval=10, dtype=tf.int64) + +``` + +The `seed` argument produces a deterministic sequence of tensors across +multiple calls. To repeat that sequence, use tf.random.set_seed: + +``` +>>> tf.random.set_seed(5) +>>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10) + +>>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10) + +>>> tf.random.set_seed(5) +>>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10) + +>>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10) + +``` + +Without tf.random.set_seed but with a `seed` argument is specified, small +changes to function graphs or previously executed operations will change the +returned value. See tf.random.set_seed for details. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A 1-D integer Tensor or Python array. The shape of the output tensor. +
+`minval` + +A Tensor or Python value of type `dtype`, broadcastable with +`maxval`. The lower bound on the range of random values to generate +(inclusive). Defaults to 0. +
+`maxval` + +A Tensor or Python value of type `dtype`, broadcastable with +`minval`. The upper bound on the range of random values to generate +(exclusive). Defaults to 1 if `dtype` is floating point. +
+`dtype` + +The type of the output: `float16`, `float32`, `float64`, `int32`, +or `int64`. +
+`seed` + +A Python integer. Used in combination with tf.random.set_seed to +create a reproducible sequence of tensors across multiple calls. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tensor of the specified shape filled with random uniform values. +
+ + + + + + + + + + + + +
+`ValueError` + +If `dtype` is integral and `maxval` is not specified. +
+ diff --git a/site/en/api_docs/python/tf/random/uniform_candidate_sampler.md b/site/en/api_docs/python/tf/random/uniform_candidate_sampler.md new file mode 100644 index 00000000000..cdaa2f24914 --- /dev/null +++ b/site/en/api_docs/python/tf/random/uniform_candidate_sampler.md @@ -0,0 +1,164 @@ +description: Samples a set of classes using a uniform base distribution. + +
+ + +
+ +# tf.random.uniform_candidate_sampler + + + + + + + + + +Samples a set of classes using a uniform base distribution. + + + + + + + + + +This operation randomly samples a tensor of sampled classes +(`sampled_candidates`) from the range of integers `[0, range_max)`. + +The elements of `sampled_candidates` are drawn without replacement +(if `unique=True`) or with replacement (if `unique=False`) from +the base distribution. + +The base distribution for this operation is the uniform distribution +over the range of integers `[0, range_max)`. + +In addition, this operation returns tensors `true_expected_count` +and `sampled_expected_count` representing the number of times each +of the target classes (`true_classes`) and the sampled +classes (`sampled_candidates`) is expected to occur in an average +tensor of sampled classes. These values correspond to `Q(y|x)` +defined in [this +document](http://www.tensorflow.org/extras/candidate_sampling.pdf). +If `unique=True`, then these are post-rejection probabilities and we +compute them approximately. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64` and shape `[batch_size, +num_true]`. The target classes. +
+`num_true` + +An `int`. The number of target classes per training example. +
+`num_sampled` + +An `int`. The number of classes to randomly sample. The +`sampled_candidates` return value will have shape `[num_sampled]`. If +`unique=True`, `num_sampled` must be less than or equal to `range_max`. +
+`unique` + +A `bool`. Determines whether all sampled classes in a batch are +unique. +
+`range_max` + +An `int`. The number of possible classes. +
+`seed` + +An `int`. An operation-specific seed. Default is 0. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + +
+`sampled_candidates` + +A tensor of type `int64` and shape `[num_sampled]`. The +sampled classes, either with possible duplicates (`unique=False`) or all +unique (`unique=True`). In either case, `sampled_candidates` is +independent of the true classes. +
+`true_expected_count` + +A tensor of type `float`. Same shape as +`true_classes`. The expected counts under the sampling distribution +of each of `true_classes`. +
+`sampled_expected_count` + +A tensor of type `float`. Same shape as +`sampled_candidates`. The expected counts under the sampling distribution +of each of `sampled_candidates`. +
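
A minimal sketch of a call (the class ids, `range_max`, and `num_sampled` are illustrative). Because the base distribution is uniform, the expected counts are the same for every class:

```python
import tensorflow as tf

true_classes = tf.constant([[3], [17]], dtype=tf.int64)

sampled, true_expected, sampled_expected = tf.random.uniform_candidate_sampler(
    true_classes=true_classes,
    num_true=1,
    num_sampled=8,
    unique=True,
    range_max=100,
    seed=11)

print(sampled.shape)          # (8,)
print(true_expected.numpy())  # identical values: the base distribution is uniform
print(sampled_expected.numpy())
```
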
+ diff --git a/site/en/api_docs/python/tf/random_normal_initializer.md b/site/en/api_docs/python/tf/random_normal_initializer.md new file mode 100644 index 00000000000..2f10ecca5b8 --- /dev/null +++ b/site/en/api_docs/python/tf/random_normal_initializer.md @@ -0,0 +1,245 @@ +description: Initializer that generates tensors with a normal distribution. + +
+ + + + + + +
+ +# tf.random_normal_initializer + + + + + + + + + +Initializer that generates tensors with a normal distribution. + +Inherits From: [`Initializer`](../tf/keras/initializers/Initializer.md) + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, +... tf.random_normal_initializer(mean=1., stddev=2.)) +>>> v1 + +>>> v2 +>> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.)) +(, + + + + + + + + + + + + + + +
+`mean` + +a python scalar or a scalar tensor. Mean of the random values to +generate. +
+`stddev` + +a python scalar or a scalar tensor. Standard deviation of the random +values to generate. +
+`seed` + +A Python integer. Used to create random seeds. See +tf.random.set_seed for behavior. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. Only floating point types are +supported. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the dtype is not floating point +
+ + + + + diff --git a/site/en/api_docs/python/tf/random_uniform_initializer.md b/site/en/api_docs/python/tf/random_uniform_initializer.md new file mode 100644 index 00000000000..c2b25fa94af --- /dev/null +++ b/site/en/api_docs/python/tf/random_uniform_initializer.md @@ -0,0 +1,246 @@ +description: Initializer that generates tensors with a uniform distribution. + +
+ + + + + + +
+ +# tf.random_uniform_initializer + + + + + + + + + +Initializer that generates tensors with a uniform distribution. + +Inherits From: [`Initializer`](../tf/keras/initializers/Initializer.md) + + + + + + + + + +Initializers allow you to pre-specify an initialization strategy, encoded in +the Initializer object, without knowing the shape and dtype of the variable +being initialized. + +#### Examples: + + + +``` +>>> def make_variables(k, initializer): +... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)), +... tf.Variable(initializer(shape=[k, k], dtype=tf.float32))) +>>> v1, v2 = make_variables(3, tf.ones_initializer()) +>>> v1 + +>>> v2 + +>>> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.)) +(, + + + + + + + + + + + + + + +
+`minval` + +A python scalar or a scalar tensor. Lower bound of the range of +random values to generate (inclusive). +
+`maxval` + +A python scalar or a scalar tensor. Upper bound of the range of +random values to generate (exclusive). +
+`seed` + +A Python integer. Used to create random seeds. See +tf.random.set_seed for behavior. +
+ + + +## Methods + +

from_config

+ +View source + + + +Instantiates an initializer from a configuration dictionary. + + +#### Example: + + + +```python +initializer = RandomUniform(-1, 1) +config = initializer.get_config() +initializer = RandomUniform.from_config(config) +``` + + + + + + + + + + +
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. Only floating point and integer +types are supported. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If the dtype is not numeric. +
+ + + + + diff --git a/site/en/api_docs/python/tf/range.md b/site/en/api_docs/python/tf/range.md new file mode 100644 index 00000000000..a43c91e3ac0 --- /dev/null +++ b/site/en/api_docs/python/tf/range.md @@ -0,0 +1,149 @@ +description: Creates a sequence of numbers. + +
+ + +
+ +# tf.range + + + + + + + + + +Creates a sequence of numbers. + + + + +```python +tf.range(limit, delta=1, dtype=None, name='range') +tf.range(start, limit, delta=1, dtype=None, name='range') +``` + + + + +Creates a sequence of numbers that begins at `start` and extends by +increments of `delta` up to but not including `limit`. + +The dtype of the resulting tensor is inferred from the inputs unless +it is provided explicitly. + +Like the Python builtin `range`, `start` defaults to 0, so that +`range(n) = range(0, n)`. + +#### For example: + + + +``` +>>> start = 3 +>>> limit = 18 +>>> delta = 3 +>>> tf.range(start, limit, delta) + +``` + +``` +>>> start = 3 +>>> limit = 1 +>>> delta = -0.5 +>>> tf.range(start, limit, delta) + +``` + +``` +>>> limit = 5 +>>> tf.range(limit) + +``` + + + + + + + + + + + + + + + + + + + + + + +
+`start` + +A 0-D `Tensor` (scalar). Acts as first entry in the range if `limit` +is not None; otherwise, acts as range limit and first entry defaults to 0. +
+`limit` + +A 0-D `Tensor` (scalar). Upper limit of sequence, exclusive. If None, +defaults to the value of `start` while the first entry of the range +defaults to 0. +
+`delta` + +A 0-D `Tensor` (scalar). Number that increments `start`. Defaults to +1. +
+`dtype` + +The type of the elements of the resulting tensor. +
+`name` + +A name for the operation. Defaults to "range". +
+ + + + + + + + + + + +
+
A 1-D `Tensor` of type `dtype`.
+ + + + +#### Numpy Compatibility +Equivalent to np.arange + diff --git a/site/en/api_docs/python/tf/rank.md b/site/en/api_docs/python/tf/rank.md new file mode 100644 index 00000000000..298ac5802ab --- /dev/null +++ b/site/en/api_docs/python/tf/rank.md @@ -0,0 +1,103 @@ +description: Returns the rank of a tensor. + +
+ + +
+ +# tf.rank + + + + + + + + + +Returns the rank of a tensor. + + + + + + + + + +Returns a 0-D `int32` `Tensor` representing the rank of `input`. + +#### For example: + + + +```python +# shape of tensor 't' is [2, 2, 3] +t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]) +tf.rank(t) # 3 +``` + +**Note**: The rank of a tensor is not the same as the rank of a matrix. The +rank of a tensor is the number of indices required to uniquely select each +element of the tensor. Rank is also known as "order", "degree", or "ndims." + + + + + + + + + + + + + +
+`input` + +A `Tensor` or `SparseTensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
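
A minimal sketch contrasting the dynamic op with the static shape information available on a `Tensor` (the tensor is illustrative):

```python
import tensorflow as tf

t = tf.zeros([2, 2, 3])

print(tf.rank(t).numpy())   # 3 -- computed at runtime, returned as a Tensor
print(tf.shape(t).numpy())  # [2 2 3]
print(t.shape.rank)         # 3 -- static Python int when the shape is fully known
```
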
+ + + + +#### Numpy Compatibility +Equivalent to np.ndim + diff --git a/site/en/api_docs/python/tf/raw_ops.md b/site/en/api_docs/python/tf/raw_ops.md new file mode 100644 index 00000000000..1aca81ab440 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops.md @@ -0,0 +1,1302 @@ +description: Public API for tf.raw_ops namespace. + +
+ + +
+ +# Module: tf.raw_ops + + + + + + + + + +Public API for tf.raw_ops namespace. + + + + + +Note: `tf.raw_ops` provides direct/low level access to all TensorFlow ops. +See [the RFC](https://github.com/tensorflow/community/blob/master/rfcs/20181225-tf-raw-ops.md) +for details. Unless you are library writer, you likely do not need to use +these ops directly. + + +| Op Name | Has Gradient | +|---------|:------------:| +| Abort | ❌ | +| Abs | ✔️ | +| AccumulateNV2 | ✔️ | +| AccumulatorApplyGradient | ❌ | +| AccumulatorNumAccumulated | ❌ | +| AccumulatorSetGlobalStep | ❌ | +| AccumulatorTakeGradient | ❌ | +| Acos | ✔️ | +| Acosh | ✔️ | +| Add | ✔️ | +| AddManySparseToTensorsMap | ❌ | +| AddN | ✔️ | +| AddSparseToTensorsMap | ❌ | +| AddV2 | ✔️ | +| AdjustContrast | ❌ | +| AdjustContrastv2 | ❌ | +| AdjustHue | ❌ | +| AdjustSaturation | ❌ | +| All | ❌ | +| AllCandidateSampler | ❌ | +| AllToAll | ✔️ | +| Angle | ✔️ | +| AnonymousIterator | ❌ | +| AnonymousIteratorV2 | ❌ | +| AnonymousMemoryCache | ❌ | +| AnonymousMultiDeviceIterator | ❌ | +| AnonymousRandomSeedGenerator | ❌ | +| Any | ❌ | +| ApplyAdaMax | ❌ | +| ApplyAdadelta | ❌ | +| ApplyAdagrad | ❌ | +| ApplyAdagradDA | ❌ | +| ApplyAdagradV2 | ❌ | +| ApplyAdam | ❌ | +| ApplyAddSign | ❌ | +| ApplyCenteredRMSProp | ❌ | +| ApplyFtrl | ❌ | +| ApplyFtrlV2 | ❌ | +| ApplyGradientDescent | ❌ | +| ApplyMomentum | ❌ | +| ApplyPowerSign | ❌ | +| ApplyProximalAdagrad | ❌ | +| ApplyProximalGradientDescent | ❌ | +| ApplyRMSProp | ❌ | +| ApproximateEqual | ✔️ | +| ArgMax | ✔️ | +| ArgMin | ✔️ | +| AsString | ✔️ | +| Asin | ✔️ | +| Asinh | ✔️ | +| Assert | ✔️ | +| AssertCardinalityDataset | ❌ | +| AssertNextDataset | ❌ | +| Assign | ✔️ | +| AssignAdd | ✔️ | +| AssignAddVariableOp | ❌ | +| AssignSub | ✔️ | +| AssignSubVariableOp | ❌ | +| AssignVariableOp | ❌ | +| Atan | ✔️ | +| Atan2 | ✔️ | +| Atanh | ✔️ | +| AudioSpectrogram | ❌ | +| AudioSummary | ✔️ | +| AudioSummaryV2 | ✔️ | +| AutoShardDataset | ❌ | +| AvgPool | ✔️ | +| AvgPool3D | ✔️ | +| AvgPool3DGrad | ✔️ | +| AvgPoolGrad | ✔️ | +| Barrier | ❌ | +| BarrierClose | ❌ | +| BarrierIncompleteSize | ❌ | +| BarrierInsertMany | ❌ | +| BarrierReadySize | ❌ | +| BarrierTakeMany | ❌ | +| Batch | ❌ | +| BatchCholesky | ❌ | +| BatchCholeskyGrad | ❌ | +| BatchDataset | ❌ | +| BatchDatasetV2 | ❌ | +| BatchFFT | ❌ | +| BatchFFT2D | ❌ | +| BatchFFT3D | ❌ | +| BatchFunction | ❌ | +| BatchIFFT | ❌ | +| BatchIFFT2D | ❌ | +| BatchIFFT3D | ❌ | +| BatchMatMul | ✔️ | +| BatchMatMulV2 | ✔️ | +| BatchMatrixBandPart | ❌ | +| BatchMatrixDeterminant | ❌ | +| BatchMatrixDiag | ❌ | +| BatchMatrixDiagPart | ❌ | +| BatchMatrixInverse | ❌ | +| BatchMatrixSetDiag | ❌ | +| BatchMatrixSolve | ❌ | +| BatchMatrixSolveLs | ❌ | +| BatchMatrixTriangularSolve | ❌ | +| BatchNormWithGlobalNormalization | ✔️ | +| BatchNormWithGlobalNormalizationGrad | ❌ | +| BatchSelfAdjointEig | ❌ | +| BatchSelfAdjointEigV2 | ❌ | +| BatchSvd | ❌ | +| BatchToSpace | ✔️ | +| BatchToSpaceND | ✔️ | +| BesselI0e | ✔️ | +| BesselI1e | ✔️ | +| Betainc | ✔️ | +| BiasAdd | ✔️ | +| BiasAddGrad | ✔️ | +| BiasAddV1 | ✔️ | +| Bincount | ❌ | +| Bitcast | ❌ | +| BitwiseAnd | ✔️ | +| BitwiseOr | ✔️ | +| BitwiseXor | ✔️ | +| BlockLSTM | ✔️ | +| BlockLSTMGrad | ❌ | +| BlockLSTMGradV2 | ❌ | +| BlockLSTMV2 | ✔️ | +| BoostedTreesAggregateStats | ❌ | +| BoostedTreesBucketize | ❌ | +| BoostedTreesCalculateBestFeatureSplit | ❌ | +| BoostedTreesCalculateBestFeatureSplitV2 | ❌ | +| BoostedTreesCalculateBestGainsPerFeature | ❌ | +| BoostedTreesCenterBias | ❌ | +| BoostedTreesCreateEnsemble | 
❌ | +| BoostedTreesCreateQuantileStreamResource | ❌ | +| BoostedTreesDeserializeEnsemble | ❌ | +| BoostedTreesEnsembleResourceHandleOp | ❌ | +| BoostedTreesExampleDebugOutputs | ❌ | +| BoostedTreesFlushQuantileSummaries | ❌ | +| BoostedTreesGetEnsembleStates | ❌ | +| BoostedTreesMakeQuantileSummaries | ❌ | +| BoostedTreesMakeStatsSummary | ❌ | +| BoostedTreesPredict | ❌ | +| BoostedTreesQuantileStreamResourceAddSummaries | ❌ | +| BoostedTreesQuantileStreamResourceDeserialize | ❌ | +| BoostedTreesQuantileStreamResourceFlush | ❌ | +| BoostedTreesQuantileStreamResourceGetBucketBoundaries | ❌ | +| BoostedTreesQuantileStreamResourceHandleOp | ❌ | +| BoostedTreesSerializeEnsemble | ❌ | +| BoostedTreesSparseAggregateStats | ❌ | +| BoostedTreesSparseCalculateBestFeatureSplit | ❌ | +| BoostedTreesTrainingPredict | ❌ | +| BoostedTreesUpdateEnsemble | ❌ | +| BoostedTreesUpdateEnsembleV2 | ❌ | +| BroadcastArgs | ❌ | +| BroadcastGradientArgs | ✔️ | +| BroadcastTo | ✔️ | +| Bucketize | ❌ | +| BytesProducedStatsDataset | ❌ | +| CSRSparseMatrixComponents | ❌ | +| CSRSparseMatrixToDense | ✔️ | +| CSRSparseMatrixToSparseTensor | ❌ | +| CSVDataset | ❌ | +| CTCBeamSearchDecoder | ✔️ | +| CTCGreedyDecoder | ✔️ | +| CTCLoss | ✔️ | +| CTCLossV2 | ✔️ | +| CacheDataset | ❌ | +| CacheDatasetV2 | ❌ | +| Case | ✔️ | +| Cast | ✔️ | +| Ceil | ✔️ | +| CheckNumerics | ✔️ | +| CheckNumericsV2 | ✔️ | +| Cholesky | ✔️ | +| CholeskyGrad | ❌ | +| ChooseFastestBranchDataset | ❌ | +| ChooseFastestDataset | ❌ | +| ClipByValue | ❌ | +| CloseSummaryWriter | ❌ | +| CollectiveBcastRecv | ❌ | +| CollectiveBcastSend | ❌ | +| CollectiveGather | ❌ | +| CollectivePermute | ✔️ | +| CollectiveReduce | ❌ | +| CombinedNonMaxSuppression | ❌ | +| CompareAndBitpack | ❌ | +| Complex | ✔️ | +| ComplexAbs | ✔️ | +| ComputeAccidentalHits | ❌ | +| Concat | ✔️ | +| ConcatOffset | ✔️ | +| ConcatV2 | ✔️ | +| ConcatenateDataset | ❌ | +| ConditionalAccumulator | ❌ | +| ConfigureDistributedTPU | ❌ | +| ConfigureTPUEmbedding | ❌ | +| Conj | ✔️ | +| ConjugateTranspose | ✔️ | +| Const | ✔️ | +| ConsumeMutexLock | ❌ | +| ControlTrigger | ❌ | +| Conv2D | ✔️ | +| Conv2DBackpropFilter | ✔️ | +| Conv2DBackpropInput | ✔️ | +| Conv3D | ✔️ | +| Conv3DBackpropFilter | ❌ | +| Conv3DBackpropFilterV2 | ✔️ | +| Conv3DBackpropInput | ❌ | +| Conv3DBackpropInputV2 | ✔️ | +| Copy | ❌ | +| CopyHost | ❌ | +| Cos | ✔️ | +| Cosh | ✔️ | +| CountUpTo | ❌ | +| CreateSummaryDbWriter | ❌ | +| CreateSummaryFileWriter | ❌ | +| CropAndResize | ✔️ | +| CropAndResizeGradBoxes | ❌ | +| CropAndResizeGradImage | ❌ | +| Cross | ✔️ | +| CrossReplicaSum | ✔️ | +| CudnnRNN | ✔️ | +| CudnnRNNBackprop | ❌ | +| CudnnRNNBackpropV2 | ❌ | +| CudnnRNNBackpropV3 | ❌ | +| CudnnRNNCanonicalToParams | ❌ | +| CudnnRNNCanonicalToParamsV2 | ❌ | +| CudnnRNNParamsSize | ❌ | +| CudnnRNNParamsToCanonical | ❌ | +| CudnnRNNParamsToCanonicalV2 | ❌ | +| CudnnRNNV2 | ✔️ | +| CudnnRNNV3 | ✔️ | +| Cumprod | ✔️ | +| Cumsum | ✔️ | +| CumulativeLogsumexp | ✔️ | +| DataFormatDimMap | ❌ | +| DataFormatVecPermute | ❌ | +| DatasetCardinality | ❌ | +| DatasetFromGraph | ❌ | +| DatasetToGraph | ❌ | +| DatasetToGraphV2 | ❌ | +| DatasetToSingleElement | ❌ | +| DatasetToTFRecord | ❌ | +| Dawsn | ✔️ | +| DebugGradientIdentity | ✔️ | +| DebugGradientRefIdentity | ✔️ | +| DebugIdentity | ❌ | +| DebugIdentityV2 | ✔️ | +| DebugNanCount | ❌ | +| DebugNumericSummary | ❌ | +| DebugNumericSummaryV2 | ❌ | +| DecodeAndCropJpeg | ❌ | +| DecodeBase64 | ✔️ | +| DecodeBmp | ❌ | +| DecodeCSV | ❌ | +| DecodeCompressed | ❌ | +| DecodeGif | ❌ | +| 
DecodeJSONExample | ❌ | +| DecodeJpeg | ❌ | +| DecodePaddedRaw | ✔️ | +| DecodePng | ❌ | +| DecodeProtoV2 | ✔️ | +| DecodeRaw | ✔️ | +| DecodeWav | ❌ | +| DeepCopy | ❌ | +| DeleteIterator | ❌ | +| DeleteMemoryCache | ❌ | +| DeleteMultiDeviceIterator | ❌ | +| DeleteRandomSeedGenerator | ❌ | +| DeleteSessionTensor | ✔️ | +| DenseToCSRSparseMatrix | ✔️ | +| DenseToDenseSetOperation | ✔️ | +| DenseToSparseBatchDataset | ❌ | +| DenseToSparseSetOperation | ✔️ | +| DepthToSpace | ✔️ | +| DepthwiseConv2dNative | ✔️ | +| DepthwiseConv2dNativeBackpropFilter | ✔️ | +| DepthwiseConv2dNativeBackpropInput | ✔️ | +| Dequantize | ❌ | +| DeserializeIterator | ❌ | +| DeserializeManySparse | ❌ | +| DeserializeSparse | ❌ | +| DestroyResourceOp | ❌ | +| DestroyTemporaryVariable | ❌ | +| Diag | ✔️ | +| DiagPart | ✔️ | +| Digamma | ✔️ | +| Dilation2D | ✔️ | +| Dilation2DBackpropFilter | ❌ | +| Dilation2DBackpropInput | ❌ | +| DirectedInterleaveDataset | ❌ | +| Div | ✔️ | +| DivNoNan | ✔️ | +| DrawBoundingBoxes | ✔️ | +| DrawBoundingBoxesV2 | ❌ | +| DummyMemoryCache | ❌ | +| DynamicPartition | ✔️ | +| DynamicStitch | ✔️ | +| EagerPyFunc | ✔️ | +| EditDistance | ✔️ | +| Eig | ❌ | +| Einsum | ✔️ | +| Elu | ✔️ | +| EluGrad | ✔️ | +| Empty | ❌ | +| EmptyTensorList | ❌ | +| EncodeBase64 | ✔️ | +| EncodeJpeg | ❌ | +| EncodeJpegVariableQuality | ❌ | +| EncodePng | ❌ | +| EncodeProto | ✔️ | +| EncodeWav | ❌ | +| EnqueueTPUEmbeddingIntegerBatch | ❌ | +| EnqueueTPUEmbeddingSparseBatch | ❌ | +| EnqueueTPUEmbeddingSparseTensorBatch | ❌ | +| EnsureShape | ✔️ | +| Enter | ✔️ | +| Equal | ✔️ | +| Erf | ✔️ | +| Erfc | ✔️ | +| Erfinv | ✔️ | +| EuclideanNorm | ✔️ | +| Exit | ✔️ | +| Exp | ✔️ | +| ExpandDims | ✔️ | +| ExperimentalAssertNextDataset | ❌ | +| ExperimentalAutoShardDataset | ❌ | +| ExperimentalBytesProducedStatsDataset | ❌ | +| ExperimentalCSVDataset | ❌ | +| ExperimentalChooseFastestDataset | ❌ | +| ExperimentalDatasetCardinality | ❌ | +| ExperimentalDatasetToTFRecord | ❌ | +| ExperimentalDenseToSparseBatchDataset | ❌ | +| ExperimentalDirectedInterleaveDataset | ❌ | +| ExperimentalGroupByReducerDataset | ❌ | +| ExperimentalGroupByWindowDataset | ❌ | +| ExperimentalIgnoreErrorsDataset | ❌ | +| ExperimentalIteratorGetDevice | ❌ | +| ExperimentalLMDBDataset | ❌ | +| ExperimentalLatencyStatsDataset | ❌ | +| ExperimentalMapAndBatchDataset | ❌ | +| ExperimentalMapDataset | ❌ | +| ExperimentalMatchingFilesDataset | ❌ | +| ExperimentalMaxIntraOpParallelismDataset | ❌ | +| ExperimentalNonSerializableDataset | ❌ | +| ExperimentalParallelInterleaveDataset | ❌ | +| ExperimentalParseExampleDataset | ❌ | +| ExperimentalPrivateThreadPoolDataset | ❌ | +| ExperimentalRandomDataset | ❌ | +| ExperimentalRebatchDataset | ❌ | +| ExperimentalScanDataset | ❌ | +| ExperimentalSetStatsAggregatorDataset | ❌ | +| ExperimentalSleepDataset | ❌ | +| ExperimentalSlidingWindowDataset | ❌ | +| ExperimentalSqlDataset | ❌ | +| ExperimentalStatsAggregatorHandle | ❌ | +| ExperimentalStatsAggregatorSummary | ❌ | +| ExperimentalTakeWhileDataset | ❌ | +| ExperimentalThreadPoolDataset | ❌ | +| ExperimentalThreadPoolHandle | ❌ | +| ExperimentalUnbatchDataset | ❌ | +| ExperimentalUniqueDataset | ❌ | +| Expint | ✔️ | +| Expm1 | ✔️ | +| ExtractGlimpse | ✔️ | +| ExtractImagePatches | ✔️ | +| ExtractJpegShape | ❌ | +| ExtractVolumePatches | ✔️ | +| FFT | ✔️ | +| FFT2D | ✔️ | +| FFT3D | ✔️ | +| FIFOQueue | ❌ | +| FIFOQueueV2 | ❌ | +| Fact | ❌ | +| FakeParam | ❌ | +| FakeQuantWithMinMaxArgs | ✔️ | +| FakeQuantWithMinMaxArgsGradient | ❌ | +| FakeQuantWithMinMaxVars | 
✔️ | +| FakeQuantWithMinMaxVarsGradient | ❌ | +| FakeQuantWithMinMaxVarsPerChannel | ✔️ | +| FakeQuantWithMinMaxVarsPerChannelGradient | ❌ | +| FakeQueue | ❌ | +| Fill | ✔️ | +| FilterByLastComponentDataset | ❌ | +| FilterDataset | ❌ | +| Fingerprint | ❌ | +| FixedLengthRecordDataset | ❌ | +| FixedLengthRecordDatasetV2 | ❌ | +| FixedLengthRecordReader | ✔️ | +| FixedLengthRecordReaderV2 | ❌ | +| FixedUnigramCandidateSampler | ❌ | +| FlatMapDataset | ❌ | +| Floor | ✔️ | +| FloorDiv | ✔️ | +| FloorMod | ✔️ | +| FlushSummaryWriter | ❌ | +| For | ❌ | +| FractionalAvgPool | ✔️ | +| FractionalAvgPoolGrad | ❌ | +| FractionalMaxPool | ✔️ | +| FractionalMaxPoolGrad | ❌ | +| FresnelCos | ✔️ | +| FresnelSin | ✔️ | +| FusedBatchNorm | ✔️ | +| FusedBatchNormGrad | ✔️ | +| FusedBatchNormGradV2 | ✔️ | +| FusedBatchNormGradV3 | ✔️ | +| FusedBatchNormV2 | ✔️ | +| FusedBatchNormV3 | ✔️ | +| FusedPadConv2D | ❌ | +| FusedResizeAndPadConv2D | ❌ | +| GRUBlockCell | ❌ | +| GRUBlockCellGrad | ❌ | +| Gather | ✔️ | +| GatherNd | ✔️ | +| GatherV2 | ✔️ | +| GenerateBoundingBoxProposals | ✔️ | +| GenerateVocabRemapping | ✔️ | +| GeneratorDataset | ❌ | +| GetSessionHandle | ✔️ | +| GetSessionHandleV2 | ✔️ | +| GetSessionTensor | ✔️ | +| Greater | ✔️ | +| GreaterEqual | ✔️ | +| GroupByReducerDataset | ❌ | +| GroupByWindowDataset | ❌ | +| GuaranteeConst | ❌ | +| HSVToRGB | ✔️ | +| HashTable | ✔️ | +| HashTableV2 | ✔️ | +| HistogramFixedWidth | ❌ | +| HistogramSummary | ✔️ | +| IFFT | ✔️ | +| IFFT2D | ✔️ | +| IFFT3D | ✔️ | +| IRFFT | ✔️ | +| IRFFT2D | ✔️ | +| IRFFT3D | ❌ | +| Identity | ✔️ | +| IdentityN | ✔️ | +| IdentityReader | ✔️ | +| IdentityReaderV2 | ❌ | +| If | ✔️ | +| Igamma | ✔️ | +| IgammaGradA | ❌ | +| Igammac | ✔️ | +| IgnoreErrorsDataset | ❌ | +| Imag | ✔️ | +| ImageProjectiveTransformV2 | ✔️ | +| ImageSummary | ✔️ | +| ImmutableConst | ❌ | +| ImportEvent | ❌ | +| InTopK | ❌ | +| InTopKV2 | ❌ | +| InfeedDequeue | ❌ | +| InfeedDequeueTuple | ❌ | +| InfeedEnqueue | ❌ | +| InfeedEnqueuePrelinearizedBuffer | ❌ | +| InfeedEnqueueTuple | ❌ | +| InitializeTable | ✔️ | +| InitializeTableFromTextFile | ✔️ | +| InitializeTableFromTextFileV2 | ✔️ | +| InitializeTableV2 | ✔️ | +| InplaceAdd | ❌ | +| InplaceSub | ❌ | +| InplaceUpdate | ❌ | +| InterleaveDataset | ❌ | +| Inv | ✔️ | +| InvGrad | ✔️ | +| Invert | ✔️ | +| InvertPermutation | ✔️ | +| IsBoostedTreesEnsembleInitialized | ❌ | +| IsBoostedTreesQuantileStreamResourceInitialized | ❌ | +| IsFinite | ❌ | +| IsInf | ❌ | +| IsNan | ❌ | +| IsVariableInitialized | ❌ | +| Iterator | ❌ | +| IteratorFromStringHandle | ❌ | +| IteratorFromStringHandleV2 | ❌ | +| IteratorGetDevice | ❌ | +| IteratorGetNext | ❌ | +| IteratorGetNextAsOptional | ❌ | +| IteratorGetNextSync | ❌ | +| IteratorToStringHandle | ❌ | +| IteratorV2 | ❌ | +| L2Loss | ✔️ | +| LMDBDataset | ❌ | +| LMDBReader | ✔️ | +| LRN | ✔️ | +| LRNGrad | ❌ | +| LSTMBlockCell | ❌ | +| LSTMBlockCellGrad | ❌ | +| LatencyStatsDataset | ❌ | +| LeakyRelu | ✔️ | +| LeakyReluGrad | ✔️ | +| LearnedUnigramCandidateSampler | ❌ | +| LeftShift | ✔️ | +| LegacyParallelInterleaveDatasetV2 | ❌ | +| Less | ✔️ | +| LessEqual | ✔️ | +| Lgamma | ✔️ | +| LinSpace | ✔️ | +| ListDiff | ❌ | +| LoadAndRemapMatrix | ✔️ | +| LoadTPUEmbeddingADAMParameters | ❌ | +| LoadTPUEmbeddingADAMParametersGradAccumDebug | ❌ | +| LoadTPUEmbeddingAdadeltaParameters | ❌ | +| LoadTPUEmbeddingAdadeltaParametersGradAccumDebug | ❌ | +| LoadTPUEmbeddingAdagradParameters | ❌ | +| LoadTPUEmbeddingAdagradParametersGradAccumDebug | ❌ | +| 
LoadTPUEmbeddingCenteredRMSPropParameters | ❌ | +| LoadTPUEmbeddingFTRLParameters | ❌ | +| LoadTPUEmbeddingFTRLParametersGradAccumDebug | ❌ | +| LoadTPUEmbeddingMDLAdagradLightParameters | ❌ | +| LoadTPUEmbeddingMomentumParameters | ❌ | +| LoadTPUEmbeddingMomentumParametersGradAccumDebug | ❌ | +| LoadTPUEmbeddingProximalAdagradParameters | ❌ | +| LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug | ❌ | +| LoadTPUEmbeddingRMSPropParameters | ❌ | +| LoadTPUEmbeddingRMSPropParametersGradAccumDebug | ❌ | +| LoadTPUEmbeddingStochasticGradientDescentParameters | ❌ | +| Log | ✔️ | +| Log1p | ✔️ | +| LogMatrixDeterminant | ✔️ | +| LogSoftmax | ✔️ | +| LogUniformCandidateSampler | ❌ | +| LogicalAnd | ✔️ | +| LogicalNot | ✔️ | +| LogicalOr | ✔️ | +| LookupTableExport | ❌ | +| LookupTableExportV2 | ❌ | +| LookupTableFind | ✔️ | +| LookupTableFindV2 | ✔️ | +| LookupTableImport | ❌ | +| LookupTableImportV2 | ❌ | +| LookupTableInsert | ✔️ | +| LookupTableInsertV2 | ✔️ | +| LookupTableRemoveV2 | ❌ | +| LookupTableSize | ✔️ | +| LookupTableSizeV2 | ✔️ | +| LoopCond | ✔️ | +| LowerBound | ❌ | +| Lu | ❌ | +| MakeIterator | ❌ | +| MapAndBatchDataset | ❌ | +| MapClear | ❌ | +| MapDataset | ❌ | +| MapDefun | ❌ | +| MapIncompleteSize | ❌ | +| MapPeek | ❌ | +| MapSize | ❌ | +| MapStage | ❌ | +| MapUnstage | ❌ | +| MapUnstageNoKey | ❌ | +| MatMul | ✔️ | +| MatchingFiles | ❌ | +| MatchingFilesDataset | ❌ | +| MatrixBandPart | ✔️ | +| MatrixDeterminant | ✔️ | +| MatrixDiag | ✔️ | +| MatrixDiagPart | ✔️ | +| MatrixDiagPartV2 | ✔️ | +| MatrixDiagPartV3 | ✔️ | +| MatrixDiagV2 | ✔️ | +| MatrixDiagV3 | ✔️ | +| MatrixExponential | ❌ | +| MatrixInverse | ✔️ | +| MatrixLogarithm | ❌ | +| MatrixSetDiag | ✔️ | +| MatrixSetDiagV2 | ✔️ | +| MatrixSetDiagV3 | ✔️ | +| MatrixSolve | ✔️ | +| MatrixSolveLs | ✔️ | +| MatrixSquareRoot | ✔️ | +| MatrixTriangularSolve | ✔️ | +| Max | ✔️ | +| MaxIntraOpParallelismDataset | ❌ | +| MaxPool | ✔️ | +| MaxPool3D | ✔️ | +| MaxPool3DGrad | ✔️ | +| MaxPool3DGradGrad | ✔️ | +| MaxPoolGrad | ✔️ | +| MaxPoolGradGrad | ✔️ | +| MaxPoolGradGradV2 | ❌ | +| MaxPoolGradGradWithArgmax | ❌ | +| MaxPoolGradV2 | ✔️ | +| MaxPoolGradWithArgmax | ❌ | +| MaxPoolV2 | ✔️ | +| MaxPoolWithArgmax | ✔️ | +| Maximum | ✔️ | +| Mean | ✔️ | +| Merge | ✔️ | +| MergeSummary | ✔️ | +| MergeV2Checkpoints | ❌ | +| Mfcc | ❌ | +| Min | ✔️ | +| Minimum | ✔️ | +| MirrorPad | ✔️ | +| MirrorPadGrad | ✔️ | +| Mod | ❌ | +| ModelDataset | ❌ | +| Mul | ✔️ | +| MulNoNan | ✔️ | +| MultiDeviceIterator | ❌ | +| MultiDeviceIteratorFromStringHandle | ❌ | +| MultiDeviceIteratorGetNextFromShard | ❌ | +| MultiDeviceIteratorInit | ❌ | +| MultiDeviceIteratorToStringHandle | ❌ | +| Multinomial | ✔️ | +| MutableDenseHashTable | ✔️ | +| MutableDenseHashTableV2 | ✔️ | +| MutableHashTable | ✔️ | +| MutableHashTableOfTensors | ✔️ | +| MutableHashTableOfTensorsV2 | ✔️ | +| MutableHashTableV2 | ✔️ | +| MutexLock | ❌ | +| MutexV2 | ❌ | +| NcclAllReduce | ✔️ | +| NcclBroadcast | ✔️ | +| NcclReduce | ✔️ | +| Ndtri | ✔️ | +| Neg | ✔️ | +| NextAfter | ✔️ | +| NextIteration | ✔️ | +| NoOp | ❌ | +| NonDeterministicInts | ❌ | +| NonMaxSuppression | ✔️ | +| NonMaxSuppressionV2 | ✔️ | +| NonMaxSuppressionV3 | ❌ | +| NonMaxSuppressionV4 | ❌ | +| NonMaxSuppressionV5 | ❌ | +| NonMaxSuppressionWithOverlaps | ✔️ | +| NonSerializableDataset | ❌ | +| NotEqual | ✔️ | +| NthElement | ✔️ | +| OneHot | ✔️ | +| OneShotIterator | ❌ | +| OnesLike | ✔️ | +| OptimizeDataset | ❌ | +| OptionalFromValue | ✔️ | +| OptionalGetValue | ✔️ | +| OptionalHasValue | ❌ | +| OptionalNone 
| ❌ | +| OrderedMapClear | ❌ | +| OrderedMapIncompleteSize | ❌ | +| OrderedMapPeek | ❌ | +| OrderedMapSize | ❌ | +| OrderedMapStage | ❌ | +| OrderedMapUnstage | ❌ | +| OrderedMapUnstageNoKey | ❌ | +| OutfeedDequeue | ❌ | +| OutfeedDequeueTuple | ❌ | +| OutfeedEnqueue | ❌ | +| OutfeedEnqueueTuple | ❌ | +| Pack | ✔️ | +| Pad | ✔️ | +| PadV2 | ✔️ | +| PaddedBatchDataset | ❌ | +| PaddedBatchDatasetV2 | ❌ | +| PaddingFIFOQueue | ❌ | +| PaddingFIFOQueueV2 | ❌ | +| ParallelConcat | ❌ | +| ParallelDynamicStitch | ✔️ | +| ParallelInterleaveDataset | ❌ | +| ParallelInterleaveDatasetV2 | ❌ | +| ParallelInterleaveDatasetV3 | ❌ | +| ParallelInterleaveDatasetV4 | ❌ | +| ParallelMapDataset | ❌ | +| ParallelMapDatasetV2 | ❌ | +| ParameterizedTruncatedNormal | ✔️ | +| ParseExample | ❌ | +| ParseExampleDataset | ❌ | +| ParseExampleDatasetV2 | ❌ | +| ParseExampleV2 | ❌ | +| ParseSequenceExample | ❌ | +| ParseSequenceExampleV2 | ❌ | +| ParseSingleExample | ❌ | +| ParseSingleSequenceExample | ❌ | +| ParseTensor | ✔️ | +| PartitionedCall | ❌ | +| Placeholder | ❌ | +| PlaceholderV2 | ❌ | +| PlaceholderWithDefault | ✔️ | +| Polygamma | ✔️ | +| PopulationCount | ✔️ | +| Pow | ✔️ | +| PrefetchDataset | ❌ | +| Prelinearize | ❌ | +| PrelinearizeTuple | ❌ | +| PreventGradient | ✔️ | +| Print | ✔️ | +| PrintV2 | ❌ | +| PriorityQueue | ❌ | +| PriorityQueueV2 | ❌ | +| PrivateThreadPoolDataset | ❌ | +| Prod | ✔️ | +| PyFunc | ✔️ | +| PyFuncStateless | ✔️ | +| Qr | ✔️ | +| QuantizeAndDequantize | ✔️ | +| QuantizeAndDequantizeV2 | ✔️ | +| QuantizeAndDequantizeV3 | ✔️ | +| QuantizeDownAndShrinkRange | ❌ | +| QuantizeV2 | ❌ | +| QuantizedAdd | ❌ | +| QuantizedAvgPool | ❌ | +| QuantizedBatchNormWithGlobalNormalization | ❌ | +| QuantizedBiasAdd | ❌ | +| QuantizedConcat | ❌ | +| QuantizedConv2D | ❌ | +| QuantizedConv2DAndRelu | ❌ | +| QuantizedConv2DAndReluAndRequantize | ❌ | +| QuantizedConv2DAndRequantize | ❌ | +| QuantizedConv2DPerChannel | ❌ | +| QuantizedConv2DWithBias | ❌ | +| QuantizedConv2DWithBiasAndRelu | ❌ | +| QuantizedConv2DWithBiasAndReluAndRequantize | ❌ | +| QuantizedConv2DWithBiasAndRequantize | ❌ | +| QuantizedConv2DWithBiasSignedSumAndReluAndRequantize | ❌ | +| QuantizedConv2DWithBiasSumAndRelu | ❌ | +| QuantizedConv2DWithBiasSumAndReluAndRequantize | ❌ | +| QuantizedDepthwiseConv2D | ❌ | +| QuantizedDepthwiseConv2DWithBias | ❌ | +| QuantizedDepthwiseConv2DWithBiasAndRelu | ❌ | +| QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize | ❌ | +| QuantizedInstanceNorm | ❌ | +| QuantizedMatMul | ❌ | +| QuantizedMatMulWithBias | ❌ | +| QuantizedMatMulWithBiasAndDequantize | ❌ | +| QuantizedMatMulWithBiasAndRelu | ❌ | +| QuantizedMatMulWithBiasAndReluAndRequantize | ❌ | +| QuantizedMatMulWithBiasAndRequantize | ❌ | +| QuantizedMaxPool | ❌ | +| QuantizedMul | ❌ | +| QuantizedRelu | ❌ | +| QuantizedRelu6 | ❌ | +| QuantizedReluX | ❌ | +| QuantizedReshape | ❌ | +| QuantizedResizeBilinear | ❌ | +| QueueClose | ✔️ | +| QueueCloseV2 | ❌ | +| QueueDequeue | ✔️ | +| QueueDequeueMany | ✔️ | +| QueueDequeueManyV2 | ❌ | +| QueueDequeueUpTo | ✔️ | +| QueueDequeueUpToV2 | ❌ | +| QueueDequeueV2 | ❌ | +| QueueEnqueue | ✔️ | +| QueueEnqueueMany | ✔️ | +| QueueEnqueueManyV2 | ❌ | +| QueueEnqueueV2 | ❌ | +| QueueIsClosed | ❌ | +| QueueIsClosedV2 | ❌ | +| QueueSize | ✔️ | +| QueueSizeV2 | ❌ | +| RFFT | ✔️ | +| RFFT2D | ✔️ | +| RFFT3D | ❌ | +| RGBToHSV | ✔️ | +| RaggedGather | ✔️ | +| RaggedRange | ✔️ | +| RaggedTensorFromVariant | ❌ | +| RaggedTensorToSparse | ✔️ | +| RaggedTensorToTensor | ✔️ | +| RaggedTensorToVariant | ✔️ | +| 
RandomCrop | ✔️ | +| RandomDataset | ❌ | +| RandomGamma | ✔️ | +| RandomGammaGrad | ❌ | +| RandomPoisson | ❌ | +| RandomPoissonV2 | ❌ | +| RandomShuffle | ❌ | +| RandomShuffleQueue | ❌ | +| RandomShuffleQueueV2 | ❌ | +| RandomStandardNormal | ✔️ | +| RandomUniform | ✔️ | +| RandomUniformInt | ❌ | +| Range | ✔️ | +| RangeDataset | ❌ | +| Rank | ✔️ | +| ReadFile | ❌ | +| ReadVariableOp | ✔️ | +| ReaderNumRecordsProduced | ✔️ | +| ReaderNumRecordsProducedV2 | ❌ | +| ReaderNumWorkUnitsCompleted | ✔️ | +| ReaderNumWorkUnitsCompletedV2 | ❌ | +| ReaderRead | ✔️ | +| ReaderReadUpTo | ✔️ | +| ReaderReadUpToV2 | ❌ | +| ReaderReadV2 | ❌ | +| ReaderReset | ✔️ | +| ReaderResetV2 | ❌ | +| ReaderRestoreState | ✔️ | +| ReaderRestoreStateV2 | ❌ | +| ReaderSerializeState | ✔️ | +| ReaderSerializeStateV2 | ❌ | +| Real | ✔️ | +| RealDiv | ✔️ | +| RebatchDataset | ❌ | +| Reciprocal | ✔️ | +| ReciprocalGrad | ✔️ | +| RecordInput | ❌ | +| Recv | ❌ | +| RecvTPUEmbeddingActivations | ❌ | +| ReduceDataset | ✔️ | +| ReduceJoin | ✔️ | +| RefEnter | ✔️ | +| RefExit | ✔️ | +| RefIdentity | ✔️ | +| RefMerge | ✔️ | +| RefNextIteration | ✔️ | +| RefSelect | ❌ | +| RefSwitch | ✔️ | +| RegexFullMatch | ❌ | +| RegexReplace | ✔️ | +| Relu | ✔️ | +| Relu6 | ✔️ | +| Relu6Grad | ✔️ | +| ReluGrad | ✔️ | +| RemoteCall | ❌ | +| RepeatDataset | ❌ | +| RequantizationRange | ❌ | +| RequantizationRangePerChannel | ❌ | +| Requantize | ❌ | +| RequantizePerChannel | ❌ | +| Reshape | ✔️ | +| ResizeArea | ❌ | +| ResizeBicubic | ✔️ | +| ResizeBicubicGrad | ❌ | +| ResizeBilinear | ✔️ | +| ResizeBilinearGrad | ❌ | +| ResizeNearestNeighbor | ✔️ | +| ResizeNearestNeighborGrad | ❌ | +| ResourceAccumulatorApplyGradient | ❌ | +| ResourceAccumulatorNumAccumulated | ❌ | +| ResourceAccumulatorSetGlobalStep | ❌ | +| ResourceAccumulatorTakeGradient | ❌ | +| ResourceApplyAdaMax | ❌ | +| ResourceApplyAdadelta | ❌ | +| ResourceApplyAdagrad | ❌ | +| ResourceApplyAdagradDA | ❌ | +| ResourceApplyAdagradV2 | ❌ | +| ResourceApplyAdam | ❌ | +| ResourceApplyAdamWithAmsgrad | ❌ | +| ResourceApplyAddSign | ❌ | +| ResourceApplyCenteredRMSProp | ❌ | +| ResourceApplyFtrl | ❌ | +| ResourceApplyFtrlV2 | ❌ | +| ResourceApplyGradientDescent | ❌ | +| ResourceApplyKerasMomentum | ❌ | +| ResourceApplyMomentum | ❌ | +| ResourceApplyPowerSign | ❌ | +| ResourceApplyProximalAdagrad | ❌ | +| ResourceApplyProximalGradientDescent | ❌ | +| ResourceApplyRMSProp | ❌ | +| ResourceConditionalAccumulator | ❌ | +| ResourceCountUpTo | ❌ | +| ResourceGather | ✔️ | +| ResourceGatherNd | ✔️ | +| ResourceScatterAdd | ❌ | +| ResourceScatterDiv | ❌ | +| ResourceScatterMax | ❌ | +| ResourceScatterMin | ❌ | +| ResourceScatterMul | ❌ | +| ResourceScatterNdAdd | ❌ | +| ResourceScatterNdSub | ❌ | +| ResourceScatterNdUpdate | ❌ | +| ResourceScatterSub | ❌ | +| ResourceScatterUpdate | ❌ | +| ResourceSparseApplyAdadelta | ❌ | +| ResourceSparseApplyAdagrad | ❌ | +| ResourceSparseApplyAdagradDA | ❌ | +| ResourceSparseApplyAdagradV2 | ❌ | +| ResourceSparseApplyCenteredRMSProp | ❌ | +| ResourceSparseApplyFtrl | ❌ | +| ResourceSparseApplyFtrlV2 | ❌ | +| ResourceSparseApplyKerasMomentum | ❌ | +| ResourceSparseApplyMomentum | ❌ | +| ResourceSparseApplyProximalAdagrad | ❌ | +| ResourceSparseApplyProximalGradientDescent | ❌ | +| ResourceSparseApplyRMSProp | ❌ | +| ResourceStridedSliceAssign | ❌ | +| Restore | ❌ | +| RestoreSlice | ❌ | +| RestoreV2 | ❌ | +| RetrieveTPUEmbeddingADAMParameters | ❌ | +| RetrieveTPUEmbeddingADAMParametersGradAccumDebug | ❌ | +| RetrieveTPUEmbeddingAdadeltaParameters | ❌ | +| 
RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug | ❌ | +| RetrieveTPUEmbeddingAdagradParameters | ❌ | +| RetrieveTPUEmbeddingAdagradParametersGradAccumDebug | ❌ | +| RetrieveTPUEmbeddingCenteredRMSPropParameters | ❌ | +| RetrieveTPUEmbeddingFTRLParameters | ❌ | +| RetrieveTPUEmbeddingFTRLParametersGradAccumDebug | ❌ | +| RetrieveTPUEmbeddingMDLAdagradLightParameters | ❌ | +| RetrieveTPUEmbeddingMomentumParameters | ❌ | +| RetrieveTPUEmbeddingMomentumParametersGradAccumDebug | ❌ | +| RetrieveTPUEmbeddingProximalAdagradParameters | ❌ | +| RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug | ❌ | +| RetrieveTPUEmbeddingRMSPropParameters | ❌ | +| RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug | ❌ | +| RetrieveTPUEmbeddingStochasticGradientDescentParameters | ❌ | +| Reverse | ✔️ | +| ReverseSequence | ✔️ | +| ReverseV2 | ✔️ | +| RightShift | ✔️ | +| Rint | ✔️ | +| RngSkip | ❌ | +| Roll | ✔️ | +| Round | ✔️ | +| Rsqrt | ✔️ | +| RsqrtGrad | ✔️ | +| SampleDistortedBoundingBox | ✔️ | +| SampleDistortedBoundingBoxV2 | ✔️ | +| SamplingDataset | ❌ | +| Save | ❌ | +| SaveSlices | ❌ | +| SaveV2 | ❌ | +| ScalarSummary | ✔️ | +| ScaleAndTranslate | ✔️ | +| ScaleAndTranslateGrad | ❌ | +| ScanDataset | ❌ | +| ScatterAdd | ✔️ | +| ScatterDiv | ✔️ | +| ScatterMax | ❌ | +| ScatterMin | ❌ | +| ScatterMul | ✔️ | +| ScatterNd | ✔️ | +| ScatterNdAdd | ✔️ | +| ScatterNdNonAliasingAdd | ✔️ | +| ScatterNdSub | ✔️ | +| ScatterNdUpdate | ✔️ | +| ScatterSub | ✔️ | +| ScatterUpdate | ❌ | +| SdcaFprint | ✔️ | +| SdcaOptimizer | ✔️ | +| SdcaOptimizerV2 | ✔️ | +| SdcaShrinkL1 | ✔️ | +| SegmentMax | ✔️ | +| SegmentMean | ✔️ | +| SegmentMin | ✔️ | +| SegmentProd | ❌ | +| SegmentSum | ✔️ | +| Select | ✔️ | +| SelectV2 | ✔️ | +| SelfAdjointEig | ❌ | +| SelfAdjointEigV2 | ✔️ | +| Selu | ✔️ | +| SeluGrad | ✔️ | +| Send | ❌ | +| SendTPUEmbeddingGradients | ❌ | +| SerializeIterator | ❌ | +| SerializeManySparse | ❌ | +| SerializeSparse | ❌ | +| SerializeTensor | ✔️ | +| SetSize | ✔️ | +| SetStatsAggregatorDataset | ❌ | +| Shape | ✔️ | +| ShapeN | ✔️ | +| ShardDataset | ❌ | +| ShardedFilename | ❌ | +| ShardedFilespec | ❌ | +| ShuffleAndRepeatDataset | ❌ | +| ShuffleDataset | ❌ | +| ShuffleDatasetV2 | ❌ | +| ShutdownDistributedTPU | ❌ | +| Sigmoid | ✔️ | +| SigmoidGrad | ✔️ | +| Sign | ✔️ | +| Sin | ✔️ | +| Sinh | ✔️ | +| Size | ✔️ | +| SkipDataset | ❌ | +| SleepDataset | ❌ | +| Slice | ✔️ | +| SlidingWindowDataset | ❌ | +| Snapshot | ❌ | +| SnapshotDataset | ❌ | +| SobolSample | ❌ | +| Softmax | ✔️ | +| SoftmaxCrossEntropyWithLogits | ✔️ | +| Softplus | ✔️ | +| SoftplusGrad | ✔️ | +| Softsign | ✔️ | +| SoftsignGrad | ❌ | +| SpaceToBatch | ✔️ | +| SpaceToBatchND | ✔️ | +| SpaceToDepth | ✔️ | +| SparseAccumulatorApplyGradient | ❌ | +| SparseAccumulatorTakeGradient | ❌ | +| SparseAdd | ✔️ | +| SparseAddGrad | ✔️ | +| SparseApplyAdadelta | ❌ | +| SparseApplyAdagrad | ❌ | +| SparseApplyAdagradDA | ❌ | +| SparseApplyAdagradV2 | ❌ | +| SparseApplyCenteredRMSProp | ❌ | +| SparseApplyFtrl | ❌ | +| SparseApplyFtrlV2 | ❌ | +| SparseApplyMomentum | ❌ | +| SparseApplyProximalAdagrad | ❌ | +| SparseApplyProximalGradientDescent | ❌ | +| SparseApplyRMSProp | ❌ | +| SparseConcat | ✔️ | +| SparseConditionalAccumulator | ❌ | +| SparseCross | ❌ | +| SparseDenseCwiseAdd | ✔️ | +| SparseDenseCwiseDiv | ✔️ | +| SparseDenseCwiseMul | ✔️ | +| SparseFillEmptyRows | ✔️ | +| SparseFillEmptyRowsGrad | ❌ | +| SparseMatMul | ✔️ | +| SparseMatrixAdd | ✔️ | +| SparseMatrixMatMul | ✔️ | +| SparseMatrixMul | ✔️ | +| SparseMatrixNNZ | ✔️ | +| 
SparseMatrixOrderingAMD | ❌ | +| SparseMatrixSoftmax | ✔️ | +| SparseMatrixSoftmaxGrad | ❌ | +| SparseMatrixSparseCholesky | ❌ | +| SparseMatrixSparseMatMul | ✔️ | +| SparseMatrixTranspose | ✔️ | +| SparseMatrixZeros | ✔️ | +| SparseReduceMax | ❌ | +| SparseReduceMaxSparse | ❌ | +| SparseReduceSum | ✔️ | +| SparseReduceSumSparse | ❌ | +| SparseReorder | ✔️ | +| SparseReshape | ❌ | +| SparseSegmentMean | ✔️ | +| SparseSegmentMeanGrad | ❌ | +| SparseSegmentMeanWithNumSegments | ✔️ | +| SparseSegmentSqrtN | ✔️ | +| SparseSegmentSqrtNGrad | ❌ | +| SparseSegmentSqrtNWithNumSegments | ✔️ | +| SparseSegmentSum | ✔️ | +| SparseSegmentSumWithNumSegments | ✔️ | +| SparseSlice | ✔️ | +| SparseSliceGrad | ❌ | +| SparseSoftmax | ✔️ | +| SparseSoftmaxCrossEntropyWithLogits | ✔️ | +| SparseSparseMaximum | ✔️ | +| SparseSparseMinimum | ✔️ | +| SparseSplit | ❌ | +| SparseTensorDenseAdd | ✔️ | +| SparseTensorDenseMatMul | ✔️ | +| SparseTensorSliceDataset | ❌ | +| SparseTensorToCSRSparseMatrix | ❌ | +| SparseToDense | ✔️ | +| SparseToSparseSetOperation | ✔️ | +| Spence | ✔️ | +| Split | ✔️ | +| SplitV | ✔️ | +| SqlDataset | ❌ | +| Sqrt | ✔️ | +| SqrtGrad | ✔️ | +| Square | ✔️ | +| SquaredDifference | ✔️ | +| Squeeze | ✔️ | +| Stack | ✔️ | +| StackClose | ✔️ | +| StackCloseV2 | ❌ | +| StackPop | ✔️ | +| StackPopV2 | ❌ | +| StackPush | ✔️ | +| StackPushV2 | ❌ | +| StackV2 | ❌ | +| Stage | ❌ | +| StageClear | ❌ | +| StagePeek | ❌ | +| StageSize | ❌ | +| StatefulPartitionedCall | ❌ | +| StatefulRandomBinomial | ❌ | +| StatefulStandardNormal | ❌ | +| StatefulStandardNormalV2 | ❌ | +| StatefulTruncatedNormal | ❌ | +| StatefulUniform | ❌ | +| StatefulUniformFullInt | ❌ | +| StatefulUniformInt | ❌ | +| StatelessIf | ✔️ | +| StatelessMultinomial | ✔️ | +| StatelessRandomBinomial | ✔️ | +| StatelessRandomGammaV2 | ✔️ | +| StatelessRandomNormal | ✔️ | +| StatelessRandomPoisson | ✔️ | +| StatelessRandomUniform | ✔️ | +| StatelessRandomUniformFullInt | ✔️ | +| StatelessRandomUniformInt | ✔️ | +| StatelessTruncatedNormal | ✔️ | +| StatelessWhile | ❌ | +| StaticRegexFullMatch | ❌ | +| StaticRegexReplace | ❌ | +| StatsAggregatorHandle | ❌ | +| StatsAggregatorHandleV2 | ❌ | +| StatsAggregatorSetSummaryWriter | ❌ | +| StatsAggregatorSummary | ❌ | +| StopGradient | ✔️ | +| StridedSlice | ✔️ | +| StridedSliceAssign | ❌ | +| StridedSliceGrad | ✔️ | +| StringFormat | ❌ | +| StringJoin | ✔️ | +| StringLength | ❌ | +| StringLower | ❌ | +| StringNGrams | ❌ | +| StringSplit | ✔️ | +| StringSplitV2 | ❌ | +| StringStrip | ❌ | +| StringToHashBucket | ✔️ | +| StringToHashBucketFast | ✔️ | +| StringToHashBucketStrong | ✔️ | +| StringToNumber | ✔️ | +| StringUpper | ❌ | +| Sub | ✔️ | +| Substr | ❌ | +| Sum | ✔️ | +| SummaryWriter | ❌ | +| Svd | ✔️ | +| Switch | ✔️ | +| SymbolicGradient | ❌ | +| TFRecordDataset | ❌ | +| TFRecordReader | ✔️ | +| TFRecordReaderV2 | ❌ | +| TPUCompilationResult | ❌ | +| TPUEmbeddingActivations | ✔️ | +| TPUOrdinalSelector | ❌ | +| TPUPartitionedCall | ❌ | +| TPUReplicateMetadata | ❌ | +| TPUReplicatedInput | ✔️ | +| TPUReplicatedOutput | ❌ | +| TakeDataset | ❌ | +| TakeManySparseFromTensorsMap | ❌ | +| TakeWhileDataset | ❌ | +| Tan | ✔️ | +| Tanh | ✔️ | +| TanhGrad | ✔️ | +| TemporaryVariable | ❌ | +| TensorArray | ✔️ | +| TensorArrayClose | ✔️ | +| TensorArrayCloseV2 | ✔️ | +| TensorArrayCloseV3 | ✔️ | +| TensorArrayConcat | ✔️ | +| TensorArrayConcatV2 | ✔️ | +| TensorArrayConcatV3 | ✔️ | +| TensorArrayGather | ✔️ | +| TensorArrayGatherV2 | ✔️ | +| TensorArrayGatherV3 | ✔️ | +| TensorArrayGrad | ✔️ | +| 
TensorArrayGradV2 | ✔️ | +| TensorArrayGradV3 | ✔️ | +| TensorArrayGradWithShape | ✔️ | +| TensorArrayPack | ❌ | +| TensorArrayRead | ✔️ | +| TensorArrayReadV2 | ✔️ | +| TensorArrayReadV3 | ✔️ | +| TensorArrayScatter | ✔️ | +| TensorArrayScatterV2 | ✔️ | +| TensorArrayScatterV3 | ✔️ | +| TensorArraySize | ✔️ | +| TensorArraySizeV2 | ✔️ | +| TensorArraySizeV3 | ✔️ | +| TensorArraySplit | ✔️ | +| TensorArraySplitV2 | ✔️ | +| TensorArraySplitV3 | ✔️ | +| TensorArrayUnpack | ❌ | +| TensorArrayV2 | ✔️ | +| TensorArrayV3 | ✔️ | +| TensorArrayWrite | ✔️ | +| TensorArrayWriteV2 | ✔️ | +| TensorArrayWriteV3 | ✔️ | +| TensorDataset | ❌ | +| TensorListConcat | ✔️ | +| TensorListConcatLists | ✔️ | +| TensorListConcatV2 | ✔️ | +| TensorListElementShape | ✔️ | +| TensorListFromTensor | ✔️ | +| TensorListGather | ✔️ | +| TensorListGetItem | ✔️ | +| TensorListLength | ✔️ | +| TensorListPopBack | ✔️ | +| TensorListPushBack | ✔️ | +| TensorListPushBackBatch | ✔️ | +| TensorListReserve | ❌ | +| TensorListResize | ✔️ | +| TensorListScatter | ✔️ | +| TensorListScatterIntoExistingList | ✔️ | +| TensorListScatterV2 | ✔️ | +| TensorListSetItem | ✔️ | +| TensorListSplit | ✔️ | +| TensorListStack | ✔️ | +| TensorScatterAdd | ✔️ | +| TensorScatterSub | ✔️ | +| TensorScatterUpdate | ✔️ | +| TensorSliceDataset | ❌ | +| TensorStridedSliceUpdate | ❌ | +| TensorSummary | ✔️ | +| TensorSummaryV2 | ✔️ | +| TextLineDataset | ❌ | +| TextLineReader | ✔️ | +| TextLineReaderV2 | ❌ | +| ThreadPoolDataset | ❌ | +| ThreadPoolHandle | ❌ | +| ThreadUnsafeUnigramCandidateSampler | ❌ | +| Tile | ✔️ | +| TileGrad | ❌ | +| Timestamp | ✔️ | +| ToBool | ❌ | +| TopK | ✔️ | +| TopKV2 | ✔️ | +| Transpose | ✔️ | +| TridiagonalMatMul | ✔️ | +| TridiagonalSolve | ✔️ | +| TruncateDiv | ✔️ | +| TruncateMod | ❌ | +| TruncatedNormal | ✔️ | +| Unbatch | ❌ | +| UnbatchDataset | ❌ | +| UnbatchGrad | ❌ | +| UnicodeDecode | ❌ | +| UnicodeDecodeWithOffsets | ❌ | +| UnicodeEncode | ❌ | +| UnicodeScript | ❌ | +| UnicodeTranscode | ❌ | +| UniformCandidateSampler | ❌ | +| Unique | ❌ | +| UniqueDataset | ❌ | +| UniqueV2 | ❌ | +| UniqueWithCounts | ❌ | +| UniqueWithCountsV2 | ❌ | +| Unpack | ✔️ | +| UnravelIndex | ❌ | +| UnsortedSegmentJoin | ❌ | +| UnsortedSegmentMax | ✔️ | +| UnsortedSegmentMin | ✔️ | +| UnsortedSegmentProd | ✔️ | +| UnsortedSegmentSum | ✔️ | +| Unstage | ❌ | +| UnwrapDatasetVariant | ❌ | +| UpperBound | ❌ | +| VarHandleOp | ❌ | +| VarIsInitializedOp | ✔️ | +| Variable | ❌ | +| VariableShape | ✔️ | +| VariableV2 | ❌ | +| Where | ❌ | +| While | ❌ | +| WholeFileReader | ✔️ | +| WholeFileReaderV2 | ❌ | +| WindowDataset | ❌ | +| WorkerHeartbeat | ❌ | +| WrapDatasetVariant | ❌ | +| WriteAudioSummary | ❌ | +| WriteFile | ❌ | +| WriteGraphSummary | ❌ | +| WriteHistogramSummary | ❌ | +| WriteImageSummary | ❌ | +| WriteRawProtoSummary | ❌ | +| WriteScalarSummary | ❌ | +| WriteSummary | ❌ | +| Xdivy | ✔️ | +| Xlog1py | ✔️ | +| Xlogy | ✔️ | +| ZerosLike | ✔️ | +| Zeta | ✔️ | +| ZipDataset | ❌ | +| __builtins__ | ❌ | +| __cached__ | ❌ | +| __doc__ | ❌ | +| __file__ | ❌ | +| __loader__ | ❌ | +| __name__ | ❌ | +| __package__ | ❌ | +| __path__ | ❌ | +| __spec__ | ❌ | +| _sys | ❌ | \ No newline at end of file diff --git a/site/en/api_docs/python/tf/raw_ops/Abort.md b/site/en/api_docs/python/tf/raw_ops/Abort.md new file mode 100644 index 00000000000..9e9fda1a2c8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Abort.md @@ -0,0 +1,89 @@ +description: Raise a exception to abort the process when called. + +
+ + +
+ +# tf.raw_ops.Abort + + + + + + + + + +Raise an exception to abort the process when called. + + + + + + + + + +If exit_without_error is true, the process will exit normally, +otherwise it will exit with a SIGABORT signal. + +Returns nothing but an exception. + + + + + + + + + + + + + + + + 
+`error_msg` + +An optional `string`. Defaults to `""`. +A string which is the message associated with the exception. +
+`exit_without_error` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
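+
+The following sketch is not part of the generated reference: it builds the op inside a
+`tf.Graph` so that nothing is executed, since actually running `Abort` terminates the
+process. The error message is an arbitrary placeholder.
+
+```python
+import tensorflow as tf
+
+g = tf.Graph()
+with g.as_default():
+    # Constructing the op only adds a node to the graph; executing it would
+    # abort the process (or exit cleanly if exit_without_error=True).
+    abort_op = tf.raw_ops.Abort(error_msg="precondition failed",
+                                exit_without_error=False)
+
+print(abort_op.type)  # "Abort" -- never run this op unless you mean it
+```
+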
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Abs.md b/site/en/api_docs/python/tf/raw_ops/Abs.md new file mode 100644 index 00000000000..562d455eb06 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Abs.md @@ -0,0 +1,80 @@ +description: Computes the absolute value of a tensor. + +
+ + +
+ +# tf.raw_ops.Abs + + + + + + + + + +Computes the absolute value of a tensor. + + + + + + + + + +Given a tensor `x`, this operation returns a tensor containing the absolute +value of each element in `x`. For example, if x is an input element and y is +an output element, this operation computes \\(y = |x|\\). + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
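+
+A minimal eager-mode sketch (not from the generated docs); the input values are made up,
+and the call behaves like `tf.math.abs` for the real dtypes listed above.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.5, 0.0, 2.0])
+y = tf.raw_ops.Abs(x=x)   # raw ops take keyword arguments only
+print(y.numpy())          # [1.5 0.  2. ]
+```
+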
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AccumulateNV2.md b/site/en/api_docs/python/tf/raw_ops/AccumulateNV2.md new file mode 100644 index 00000000000..2e299719975 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AccumulateNV2.md @@ -0,0 +1,94 @@ +description: Returns the element-wise sum of a list of tensors. + +
+ + +
+ +# tf.raw_ops.AccumulateNV2 + + + + + + + + + +Returns the element-wise sum of a list of tensors. + + + + + + + + + +`tf.accumulate_n_v2` performs the same operation as tf.add_n, but does not +wait for all of its inputs to be ready before beginning to sum. This can +save memory if inputs are ready at different times, since minimum temporary +storage is proportional to the output size rather than the inputs size. + +Unlike the original `accumulate_n`, `accumulate_n_v2` is differentiable. + +Returns a `Tensor` of same shape and type as the elements of `inputs`. + + + + + + + + + + + + + + + + +
+`inputs` + +A list of at least 1 `Tensor` objects with the same type in: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A list of `Tensor` objects, each with same shape and type. +
+`shape` + +A tf.TensorShape or list of `ints`. +Shape of elements of `inputs`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `inputs`. +
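+
+A minimal eager-mode sketch (added for illustration; the tensors are made up). Note that
+`shape` is a required attribute describing the common shape of the inputs.
+
+```python
+import tensorflow as tf
+
+a = tf.constant([1.0, 2.0])
+b = tf.constant([3.0, 4.0])
+c = tf.constant([5.0, 6.0])
+total = tf.raw_ops.AccumulateNV2(inputs=[a, b, c], shape=[2])
+print(total.numpy())  # [ 9. 12.]
+```
+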
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AccumulatorApplyGradient.md b/site/en/api_docs/python/tf/raw_ops/AccumulatorApplyGradient.md new file mode 100644 index 00000000000..4dac2dd03ba --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AccumulatorApplyGradient.md @@ -0,0 +1,94 @@ +description: Applies a gradient to a given accumulator. + +
+ + +
+ +# tf.raw_ops.AccumulatorApplyGradient + + + + + + + + + +Applies a gradient to a given accumulator. + + + + + + + + + +Does not add if local_step is less than the accumulator's global_step. + + + + + + + + + + + + + + + + + + + 
+`handle` + +A `Tensor` of type mutable `string`. The handle to an accumulator. +
+`local_step` + +A `Tensor` of type `int64`. +The local_step value at which the gradient was computed. +
+`gradient` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A tensor of the gradient to be accumulated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AccumulatorNumAccumulated.md b/site/en/api_docs/python/tf/raw_ops/AccumulatorNumAccumulated.md new file mode 100644 index 00000000000..4d0bf0b8a51 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AccumulatorNumAccumulated.md @@ -0,0 +1,77 @@ +description: Returns the number of gradients aggregated in the given accumulators. + +
+ + +
+ +# tf.raw_ops.AccumulatorNumAccumulated + + + + + + + + + +Returns the number of gradients aggregated in the given accumulators. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to an accumulator. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AccumulatorSetGlobalStep.md b/site/en/api_docs/python/tf/raw_ops/AccumulatorSetGlobalStep.md new file mode 100644 index 00000000000..5fd3aea99ab --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AccumulatorSetGlobalStep.md @@ -0,0 +1,87 @@ +description: Updates the accumulator with a new value for global_step. + +
+ + +
+ +# tf.raw_ops.AccumulatorSetGlobalStep + + + + + + + + + +Updates the accumulator with a new value for global_step. + + + + + + + + + +Logs warning if the accumulator's value is already higher than +new_global_step. + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to an accumulator. +
+`new_global_step` + +A `Tensor` of type `int64`. +The new global_step value to set. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AccumulatorTakeGradient.md b/site/en/api_docs/python/tf/raw_ops/AccumulatorTakeGradient.md new file mode 100644 index 00000000000..8e1b75a937d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AccumulatorTakeGradient.md @@ -0,0 +1,99 @@ +description: Extracts the average gradient in the given ConditionalAccumulator. + +
+ + +
+ +# tf.raw_ops.AccumulatorTakeGradient + + + + + + + + + +Extracts the average gradient in the given ConditionalAccumulator. + + + + + + + + + +The op blocks until sufficient (i.e., more than num_required) +gradients have been accumulated. If the accumulator has already +aggregated more than num_required gradients, it returns the average of +the accumulated gradients. Also automatically increments the recorded +global_step in the accumulator by 1, and resets the aggregate to 0. + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to an accumulator. +
+`num_required` + +A `Tensor` of type `int32`. +Number of gradients required before we return an aggregate. +
+`dtype` + +A tf.DType from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`. +The data type of accumulated gradients. Needs to correspond to the type +of the accumulator. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
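+
+The raw accumulator ops operate on reference-typed handles, so the sketch below uses the
+higher-level `tf.compat.v1.ConditionalAccumulator` wrapper in graph mode instead of calling
+the raw ops directly; it is illustrative only and the gradient values are made up.
+
+```python
+import tensorflow as tf
+
+tf.compat.v1.disable_eager_execution()
+
+acc = tf.compat.v1.ConditionalAccumulator(dtype=tf.float32, shape=[2])
+apply_op = acc.apply_grad([1.0, 3.0], local_step=0)   # AccumulatorApplyGradient
+avg_grad = acc.take_grad(1)                           # AccumulatorTakeGradient
+
+with tf.compat.v1.Session() as sess:
+    sess.run(apply_op)
+    print(sess.run(avg_grad))  # [1. 3.] -- the average of one accumulated gradient
+```
+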
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Acos.md b/site/en/api_docs/python/tf/raw_ops/Acos.md new file mode 100644 index 00000000000..f7d186466b0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Acos.md @@ -0,0 +1,77 @@ +description: Computes acos of x element-wise. + +
+ + +
+ +# tf.raw_ops.Acos + + + + + + + + + +Computes acos of x element-wise. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
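+
+A minimal eager-mode sketch (added for illustration); real-valued inputs should lie in
+`[-1, 1]` to produce finite results.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 1.0])
+print(tf.raw_ops.Acos(x=x).numpy())  # approximately [3.1415927 1.5707964 0.]
+```
+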
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Acosh.md b/site/en/api_docs/python/tf/raw_ops/Acosh.md new file mode 100644 index 00000000000..63bb6b7acbf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Acosh.md @@ -0,0 +1,84 @@ +description: Computes inverse hyperbolic cosine of x element-wise. + +
+ + +
+ +# tf.raw_ops.Acosh + + + + + + + + + +Computes inverse hyperbolic cosine of x element-wise. + + + + + + + + + +Given an input tensor, the function computes inverse hyperbolic cosine of every element. +Input range is `[1, inf]`. It returns `nan` if the input lies outside the range. + +```python +x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float("inf")]) +tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Add.md b/site/en/api_docs/python/tf/raw_ops/Add.md new file mode 100644 index 00000000000..f97449cb693 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Add.md @@ -0,0 +1,86 @@ +description: Returns x + y element-wise. + +
+ + +
+ +# tf.raw_ops.Add + + + + + + + + + +Returns x + y element-wise. + + + + + + + + + +*NOTE*: math.add supports broadcasting. `AddN` does not. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
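+
+A minimal eager-mode sketch (added for illustration; the values are made up). Broadcasting
+follows the NumPy rules linked above, and, per the type list, `string` inputs are
+concatenated element-wise.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2], [3, 4]])
+y = tf.constant([10, 20])
+print(tf.raw_ops.Add(x=x, y=y).numpy())
+# [[11 22]
+#  [13 24]]
+
+# String inputs are concatenated (unlike AddV2, which is numeric only).
+print(tf.raw_ops.Add(x=tf.constant("foo"), y=tf.constant("bar")).numpy())  # b'foobar'
+```
+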
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AddManySparseToTensorsMap.md b/site/en/api_docs/python/tf/raw_ops/AddManySparseToTensorsMap.md new file mode 100644 index 00000000000..f2dc0591274 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AddManySparseToTensorsMap.md @@ -0,0 +1,136 @@ +description: Add an N-minibatch SparseTensor to a SparseTensorsMap, return N handles. + +
+ + +
+ +# tf.raw_ops.AddManySparseToTensorsMap + + + + + + + + + +Add an `N`-minibatch `SparseTensor` to a `SparseTensorsMap`, return `N` handles. + + + + + + + + + +A `SparseTensor` of rank `R` is represented by three tensors: `sparse_indices`, +`sparse_values`, and `sparse_shape`, where + +```sparse_indices.shape[1] == sparse_shape.shape[0] == R``` + +An `N`-minibatch of `SparseTensor` objects is represented as a `SparseTensor` +having a first `sparse_indices` column taking values between `[0, N)`, where +the minibatch size `N == sparse_shape[0]`. + +The input `SparseTensor` must have rank `R` greater than 1, and the first +dimension is treated as the minibatch dimension. Elements of the `SparseTensor` +must be sorted in increasing order of this first dimension. The stored +`SparseTensor` objects pointed to by each row of the output `sparse_handles` +will have rank `R-1`. + +The `SparseTensor` values can then be read out as part of a minibatch by passing +the given keys as vector elements to `TakeManySparseFromTensorsMap`. To ensure +the correct `SparseTensorsMap` is accessed, ensure that the same +`container` and `shared_name` are passed to that Op. If no `shared_name` +is provided here, instead use the *name* of the Operation created by calling +`AddManySparseToTensorsMap` as the `shared_name` passed to +`TakeManySparseFromTensorsMap`. Ensure the Operations are colocated. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the minibatch `SparseTensor`. +`sparse_indices[:, 0]` must be ordered values in `[0, N)`. +
+`sparse_values` + +A `Tensor`. +1-D. The `values` of the minibatch `SparseTensor`. +
+`sparse_shape` + +A `Tensor` of type `int64`. +1-D. The `shape` of the minibatch `SparseTensor`. +The minibatch size `N == sparse_shape[0]`. +
+`container` + +An optional `string`. Defaults to `""`. +The container name for the `SparseTensorsMap` created by this op. +
+`shared_name` + +An optional `string`. Defaults to `""`. +The shared name for the `SparseTensorsMap` created by this op. +If blank, the new Operation's unique name is used. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AddN.md b/site/en/api_docs/python/tf/raw_ops/AddN.md new file mode 100644 index 00000000000..791c9dbe875 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AddN.md @@ -0,0 +1,83 @@ +description: Add all input tensors element wise. + +
+ + +
+ +# tf.raw_ops.AddN + + + + + + + + + +Add all input tensors element wise. + + + + + + + + + + Inputs must be of same size and shape. + + ```python + x = [9, 7, 10] + tf.math.add_n(x) ==> 26 + ``` + + + + + + + + + + + + + +
+`inputs` + +A list of at least 1 `Tensor` objects with the same type in: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `variant`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `inputs`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AddSparseToTensorsMap.md b/site/en/api_docs/python/tf/raw_ops/AddSparseToTensorsMap.md new file mode 100644 index 00000000000..867b28cd976 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AddSparseToTensorsMap.md @@ -0,0 +1,125 @@ +description: Add a SparseTensor to a SparseTensorsMap return its handle. + +
+ + +
+ +# tf.raw_ops.AddSparseToTensorsMap + + + + + + + + + +Add a `SparseTensor` to a `SparseTensorsMap` return its handle. + + + + + + + + + +A `SparseTensor` is represented by three tensors: `sparse_indices`, +`sparse_values`, and `sparse_shape`. + +This operator takes the given `SparseTensor` and adds it to a container +object (a `SparseTensorsMap`). A unique key within this container is generated +in the form of an `int64`, and this is the value that is returned. + +The `SparseTensor` can then be read out as part of a minibatch by passing +the key as a vector element to `TakeManySparseFromTensorsMap`. To ensure +the correct `SparseTensorsMap` is accessed, ensure that the same +`container` and `shared_name` are passed to that Op. If no `shared_name` +is provided here, instead use the *name* of the Operation created by calling +`AddSparseToTensorsMap` as the `shared_name` passed to +`TakeManySparseFromTensorsMap`. Ensure the Operations are colocated. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the `SparseTensor`. +
+`sparse_values` + +A `Tensor`. 1-D. The `values` of the `SparseTensor`. +
+`sparse_shape` + +A `Tensor` of type `int64`. +1-D. The `shape` of the `SparseTensor`. +
+`container` + +An optional `string`. Defaults to `""`. +The container name for the `SparseTensorsMap` created by this op. +
+`shared_name` + +An optional `string`. Defaults to `""`. +The shared name for the `SparseTensorsMap` created by this op. +If blank, the new Operation's unique name is used. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
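+
+A rough eager-mode sketch, added for illustration only; `example_map` is an arbitrary
+`shared_name` chosen here so that `TakeManySparseFromTensorsMap` could later locate the
+same map.
+
+```python
+import tensorflow as tf
+
+sp = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
+                            values=[1.0, 2.0],
+                            dense_shape=[2, 3])
+handle = tf.raw_ops.AddSparseToTensorsMap(sparse_indices=sp.indices,
+                                          sparse_values=sp.values,
+                                          sparse_shape=sp.dense_shape,
+                                          container="",
+                                          shared_name="example_map")
+print(handle.dtype)  # tf.int64 -- a key into the shared map
+```
+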
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AddV2.md b/site/en/api_docs/python/tf/raw_ops/AddV2.md new file mode 100644 index 00000000000..badc9239832 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AddV2.md @@ -0,0 +1,86 @@ +description: Returns x + y element-wise. + +
+ + +
+ +# tf.raw_ops.AddV2 + + + + + + + + + +Returns x + y element-wise. + + + + + + + + + +*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
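+
+A minimal eager-mode sketch (added for illustration); `AddV2` is the op dispatched by
+`tf.math.add` and the `+` operator for numeric tensors.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0, 3.0])
+y = tf.constant(0.5)                       # scalar broadcasts against x
+print(tf.raw_ops.AddV2(x=x, y=y).numpy())  # [1.5 2.5 3.5]
+```
+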
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AdjustContrast.md b/site/en/api_docs/python/tf/raw_ops/AdjustContrast.md new file mode 100644 index 00000000000..3660d0eec3f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AdjustContrast.md @@ -0,0 +1,98 @@ +description: Deprecated. Disallowed in GraphDef version >= 2. + +
+ + +
+ +# tf.raw_ops.AdjustContrast + + + + + + + + + +Deprecated. Disallowed in GraphDef version >= 2. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `float32`, `float64`. +
+`contrast_factor` + +A `Tensor` of type `float32`. +
+`min_value` + +A `Tensor` of type `float32`. +
+`max_value` + +A `Tensor` of type `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AdjustContrastv2.md b/site/en/api_docs/python/tf/raw_ops/AdjustContrastv2.md new file mode 100644 index 00000000000..985b09134ed --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AdjustContrastv2.md @@ -0,0 +1,95 @@ +description: Adjust the contrast of one or more images. + +
+ + +
+ +# tf.raw_ops.AdjustContrastv2 + + + + + + + + + +Adjust the contrast of one or more images. + + + + + + + + + +`images` is a tensor of at least 3 dimensions. The last 3 dimensions are +interpreted as `[height, width, channels]`. The other dimensions only +represent a collection of images, such as `[batch, height, width, channels].` + +Contrast is adjusted independently for each channel of each image. + +For each channel, the Op first computes the mean of the image pixels in the +channel and then adjusts each component of each pixel to +`(x - mean) * contrast_factor + mean`. + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +Images to adjust. At least 3-D. +
+`contrast_factor` + +A `Tensor` of type `float32`. +A float multiplier for adjusting contrast. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
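+
+A minimal eager-mode sketch (added for illustration) on a single-channel `1x2x2x1` batch;
+with a mean of 0.5, a factor of 2.0 doubles each pixel's distance from that mean.
+
+```python
+import tensorflow as tf
+
+images = tf.constant([[[[0.2], [0.4]],
+                       [[0.6], [0.8]]]], dtype=tf.float32)  # [batch, h, w, channels]
+out = tf.raw_ops.AdjustContrastv2(images=images, contrast_factor=2.0)
+print(out.numpy().reshape(2, 2))
+# approximately:
+# [[-0.1  0.3]
+#  [ 0.7  1.1]]
+```
+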
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AdjustHue.md b/site/en/api_docs/python/tf/raw_ops/AdjustHue.md new file mode 100644 index 00000000000..02a5453b153 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AdjustHue.md @@ -0,0 +1,91 @@ +description: Adjust the hue of one or more images. + +
+ + +
+ +# tf.raw_ops.AdjustHue + + + + + + + + + +Adjust the hue of one or more images. + + + + + + + + + +`images` is a tensor of at least 3 dimensions. The last dimension is +interpreted as channels, and must be three. + +The input image is considered in the RGB colorspace. Conceptually, the RGB +colors are first mapped into HSV. A delta is then applied to all the hue values, +and then remapped back to RGB colorspace. + + + + + + + + + + + + + + + 
+`images` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +Images to adjust. At least 3-D. +
+`delta` + +A `Tensor` of type `float32`. A float delta to add to the hue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
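+
+A minimal eager-mode sketch (added for illustration): shifting the hue of a pure red pixel
+by half the hue wheel gives approximately cyan.
+
+```python
+import tensorflow as tf
+
+image = tf.constant([[[1.0, 0.0, 0.0]]])  # shape [1, 1, 3], pure red
+out = tf.raw_ops.AdjustHue(images=image, delta=0.5)
+print(out.numpy())  # approximately [[[0. 1. 1.]]] (cyan)
+```
+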
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AdjustSaturation.md b/site/en/api_docs/python/tf/raw_ops/AdjustSaturation.md new file mode 100644 index 00000000000..e30905790e4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AdjustSaturation.md @@ -0,0 +1,92 @@ +description: Adjust the saturation of one or more images. + +
+ + +
+ +# tf.raw_ops.AdjustSaturation + + + + + + + + + +Adjust the saturation of one or more images. + + + + + + + + + +`images` is a tensor of at least 3 dimensions. The last dimension is +interpreted as channels, and must be three. + +The input image is considered in the RGB colorspace. Conceptually, the RGB +colors are first mapped into HSV. A scale is then applied to all the saturation +values, and then remapped back to RGB colorspace. + + + + + + + + + + + + + + + 
+`images` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +Images to adjust. At least 3-D. +
+`scale` + +A `Tensor` of type `float32`. +A float scale to add to the saturation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
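+
+A minimal eager-mode sketch (added for illustration): halving the saturation moves a
+reddish pixel toward grey while keeping its value (brightness).
+
+```python
+import tensorflow as tf
+
+image = tf.constant([[[0.8, 0.2, 0.2]]])  # shape [1, 1, 3]
+out = tf.raw_ops.AdjustSaturation(images=image, scale=0.5)
+print(out.numpy())  # approximately [[[0.8 0.5 0.5]]]
+```
+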
+ diff --git a/site/en/api_docs/python/tf/raw_ops/All.md b/site/en/api_docs/python/tf/raw_ops/All.md new file mode 100644 index 00000000000..413fa3210ee --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/All.md @@ -0,0 +1,98 @@ +description: Computes the "logical and" of elements across dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.All + + + + + + + + + +Computes the "logical and" of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input` along the dimensions given in `axis`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`axis`. If `keep_dims` is true, the reduced dimensions are +retained with length 1. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `bool`. The tensor to reduce. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The dimensions to reduce. Must be in the range +`[-rank(input), rank(input))`. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
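+
+A minimal eager-mode sketch (added for illustration): reduce with logical AND along the
+last axis of a made-up boolean matrix.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[True, True], [True, False]])
+print(tf.raw_ops.All(input=x, axis=1, keep_dims=False).numpy())  # [ True False]
+```
+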
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AllCandidateSampler.md b/site/en/api_docs/python/tf/raw_ops/AllCandidateSampler.md new file mode 100644 index 00000000000..f6a758d4b37 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AllCandidateSampler.md @@ -0,0 +1,151 @@ +description: Generates labels for candidate sampling with a learned unigram distribution. + +
+ + +
+ +# tf.raw_ops.AllCandidateSampler + + + + + + + + + +Generates labels for candidate sampling with a learned unigram distribution. + + + + + + + + + +See explanations of candidate sampling and the data formats at +go/candidate-sampling. + +For each batch, this op picks a single set of sampled candidate labels. + +The advantages of sampling candidates per-batch are simplicity and the +possibility of efficient dense matrix multiplication. The disadvantage is that +the sampled candidates must be chosen independently of the context and of the +true labels. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64`. +A batch_size * num_true matrix, in which each row contains the +IDs of the num_true target_classes in the corresponding original label. +
+`num_true` + +An `int` that is `>= 1`. Number of true labels per context. +
+`num_sampled` + +An `int` that is `>= 1`. Number of candidates to produce. +
+`unique` + +A `bool`. +If unique is true, we sample with rejection, so that all sampled +candidates in a batch are unique. This requires some approximation to +estimate the post-rejection sampling probabilities. +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count). +
+`sampled_candidates` + +A `Tensor` of type `int64`. +
+`true_expected_count` + +A `Tensor` of type `float32`. +
+`sampled_expected_count` + +A `Tensor` of type `float32`. +
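+
+A minimal eager-mode sketch (added for illustration; the label values are made up). Because
+this sampler emits every class in `[0, num_sampled)`, the sampled candidates are
+deterministic.
+
+```python
+import tensorflow as tf
+
+true_classes = tf.constant([[0], [2]], dtype=tf.int64)  # batch_size=2, num_true=1
+sampled, true_exp, sampled_exp = tf.raw_ops.AllCandidateSampler(
+    true_classes=true_classes, num_true=1, num_sampled=4, unique=True,
+    seed=0, seed2=0)
+print(sampled.numpy())                    # [0 1 2 3]
+print(true_exp.shape, sampled_exp.shape)  # (2, 1) (4,)
+```
+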
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AllToAll.md b/site/en/api_docs/python/tf/raw_ops/AllToAll.md new file mode 100644 index 00000000000..7b6856f9809 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AllToAll.md @@ -0,0 +1,127 @@ +description: An Op to exchange data across TPU replicas. + +
+ + +
+ +# tf.raw_ops.AllToAll + + + + + + + + + +An Op to exchange data across TPU replicas. + + + + + + + + + +On each replica, the input is split into `split_count` blocks along +`split_dimension` and send to the other replicas given group_assignment. After +receiving `split_count` - 1 blocks from other replicas, we concatenate the +blocks along `concat_dimension` as the output. + +For example, suppose there are 2 TPU replicas: +replica 0 receives input: `[[A, B]]` +replica 1 receives input: `[[C, D]]` + +group_assignment=`[[0, 1]]` +concat_dimension=0 +split_dimension=1 +split_count=2 + +replica 0's output: `[[A], [C]]` +replica 1's output: `[[B], [D]]` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`. +The local input to the sum. +
+`group_assignment` + +A `Tensor` of type `int32`. An int32 tensor with shape +[num_groups, num_replicas_per_group]. `group_assignment[i]` represents the +replica ids in the ith subgroup. +
+`concat_dimension` + +An `int`. The dimension number to concatenate. +
+`split_dimension` + +An `int`. The dimension number to split. +
+`split_count` + +An `int`. +The number of splits; this number must equal the sub-group +size (group_assignment.get_shape()[1]). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Angle.md b/site/en/api_docs/python/tf/raw_ops/Angle.md new file mode 100644 index 00000000000..a63aeb3937d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Angle.md @@ -0,0 +1,106 @@ +description: Returns the argument of a complex number. + +
+ + +
+ +# tf.raw_ops.Angle + + + + + + + + + +Returns the argument of a complex number. + + + + + + + + + +Given a tensor `input` of complex numbers, this operation returns a tensor of +type `float` that is the argument of each element in `input`. All elements in +`input` must be complex numbers of the form \\(a + bj\\), where *a* +is the real part and *b* is the imaginary part. + +The argument returned by this operation is of the form \\(atan2(b, a)\\). + +#### For example: + + + +``` +# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] +tf.angle(input) ==> [2.0132, 1.056] +``` + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +
+`Tout` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tout`. +
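+
+A minimal eager-mode sketch (added for illustration) using the raw-op keyword arguments;
+the input matches the docstring example above.
+
+```python
+import tensorflow as tf
+
+z = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j], dtype=tf.complex64)
+print(tf.raw_ops.Angle(input=z, Tout=tf.float32).numpy())  # approximately [2.0132 1.056 ]
+```
+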
+ + + +#### Numpy Compatibility +Equivalent to np.angle. + diff --git a/site/en/api_docs/python/tf/raw_ops/AnonymousIterator.md b/site/en/api_docs/python/tf/raw_ops/AnonymousIterator.md new file mode 100644 index 00000000000..ff3ebbabb71 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AnonymousIterator.md @@ -0,0 +1,84 @@ +description: A container for an iterator resource. + +
+ + +
+ +# tf.raw_ops.AnonymousIterator + + + + + + + + + +A container for an iterator resource. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AnonymousIteratorV2.md b/site/en/api_docs/python/tf/raw_ops/AnonymousIteratorV2.md new file mode 100644 index 00000000000..8dcf7a4c8d7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AnonymousIteratorV2.md @@ -0,0 +1,98 @@ +description: A container for an iterator resource. + +
+ + +
+ +# tf.raw_ops.AnonymousIteratorV2 + + + + + + + + + +A container for an iterator resource. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (handle, deleter). +
+`handle` + +A `Tensor` of type `resource`. +
+`deleter` + +A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AnonymousMemoryCache.md b/site/en/api_docs/python/tf/raw_ops/AnonymousMemoryCache.md new file mode 100644 index 00000000000..b6a5bf9c066 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AnonymousMemoryCache.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.AnonymousMemoryCache + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (handle, deleter). +
+`handle` + +A `Tensor` of type `resource`. +
+`deleter` + +A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AnonymousMultiDeviceIterator.md b/site/en/api_docs/python/tf/raw_ops/AnonymousMultiDeviceIterator.md new file mode 100644 index 00000000000..a6e149ec3c8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AnonymousMultiDeviceIterator.md @@ -0,0 +1,105 @@ +description: A container for a multi device iterator resource. + +
+ + +
+ +# tf.raw_ops.AnonymousMultiDeviceIterator + + + + + + + + + +A container for a multi device iterator resource. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`devices` + +A list of `strings` that has length `>= 1`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (handle, deleter). +
+`handle` + +A `Tensor` of type `resource`. +
+`deleter` + +A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AnonymousRandomSeedGenerator.md b/site/en/api_docs/python/tf/raw_ops/AnonymousRandomSeedGenerator.md new file mode 100644 index 00000000000..03aa6e14e6a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AnonymousRandomSeedGenerator.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.AnonymousRandomSeedGenerator + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`seed` + +A `Tensor` of type `int64`. +
+`seed2` + +A `Tensor` of type `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (handle, deleter). +
+`handle` + +A `Tensor` of type `resource`. +
+`deleter` + +A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Any.md b/site/en/api_docs/python/tf/raw_ops/Any.md new file mode 100644 index 00000000000..5dcaf3ee260 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Any.md @@ -0,0 +1,98 @@ +description: Computes the "logical or" of elements across dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.Any + + + + + + + + + +Computes the "logical or" of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input` along the dimensions given in `axis`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`axis`. If `keep_dims` is true, the reduced dimensions are +retained with length 1. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `bool`. The tensor to reduce. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The dimensions to reduce. Must be in the range +`[-rank(input), rank(input))`. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
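+
+tf.math.reduce_any is the usual public entry point for this reduction; the sketch below calls the raw op directly with the arguments listed above:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[True, False], [False, False]])
+tf.raw_ops.Any(input=x, axis=[1])                   # [True, False]
+tf.raw_ops.Any(input=x, axis=[0], keep_dims=True)   # [[True, False]], shape (1, 2)
+```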
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyAdaMax.md b/site/en/api_docs/python/tf/raw_ops/ApplyAdaMax.md new file mode 100644 index 00000000000..54febdb9c8e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyAdaMax.md @@ -0,0 +1,155 @@ +description: Update '*var' according to the AdaMax algorithm. + +
+ + +
+ +# tf.raw_ops.ApplyAdaMax + + + + + + + + + +Update '*var' according to the AdaMax algorithm. + + + + + + + + + +m_t <- beta1 * m_{t-1} + (1 - beta1) * g +v_t <- max(beta2 * v_{t-1}, abs(g)) +variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`m` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`v` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`beta1_power` + +A `Tensor`. Must have the same type as `var`. +Must be a scalar. +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`beta1` + +A `Tensor`. Must have the same type as `var`. +Momentum factor. Must be a scalar. +
+`beta2` + +A `Tensor`. Must have the same type as `var`. +Momentum factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `var`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, m, and v tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
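+
+As an illustration of the update equations above, here is a minimal NumPy sketch; plain arrays stand in for the op's mutable ref inputs, and `beta1_power` is the running beta1^t scalar from the argument list:
+
+```python
+import numpy as np
+
+def ada_max_step(var, m, v, beta1_power, lr, beta1, beta2, epsilon, grad):
+    # Same three equations as documented above.
+    m = beta1 * m + (1.0 - beta1) * grad
+    v = np.maximum(beta2 * v, np.abs(grad))
+    var = var - lr / (1.0 - beta1_power) * m / (v + epsilon)
+    return var, m, v
+```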
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyAdadelta.md b/site/en/api_docs/python/tf/raw_ops/ApplyAdadelta.md new file mode 100644 index 00000000000..0f1e495325d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyAdadelta.md @@ -0,0 +1,138 @@ +description: Update '*var' according to the adadelta scheme. + +
+ + +
+ +# tf.raw_ops.ApplyAdadelta + + + + + + + + + +Update '*var' according to the adadelta scheme. + + + + + + + + + +accum = rho * accum + (1 - rho) * grad * grad +update = sqrt(accum_update + epsilon) / sqrt(accum + epsilon) * grad +accum_update = rho * accum_update + (1 - rho) * update * update +var -= update + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`accum_update` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `var`. +Decay factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `var`. +Constant factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var, accum and update_accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
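+
+A NumPy sketch of the documented scheme, with names matched to the op's arguments; folding the scalar `lr` into the step is an assumption based on its "Scaling factor" description:
+
+```python
+import numpy as np
+
+def adadelta_step(var, accum, accum_update, lr, rho, epsilon, grad):
+    accum = rho * accum + (1.0 - rho) * np.square(grad)
+    update = np.sqrt(accum_update + epsilon) / np.sqrt(accum + epsilon) * grad
+    accum_update = rho * accum_update + (1.0 - rho) * np.square(update)
+    var = var - lr * update
+    return var, accum, accum_update
+```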
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyAdagrad.md b/site/en/api_docs/python/tf/raw_ops/ApplyAdagrad.md new file mode 100644 index 00000000000..9c2c5fcf3a7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyAdagrad.md @@ -0,0 +1,120 @@ +description: Update '*var' according to the adagrad scheme. + +
+ + +
+ +# tf.raw_ops.ApplyAdagrad + + + + + + + + + +Update '*var' according to the adagrad scheme. + + + + + + + + + +accum += grad * grad +var -= lr * grad * (1 / sqrt(accum)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`update_slots` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
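+
+The two-line update above, written out as a NumPy sketch; plain arrays stand in for the mutable ref inputs, and passing `update_slots=False` to the op leaves `accum` unchanged (not shown):
+
+```python
+import numpy as np
+
+def adagrad_step(var, accum, lr, grad):
+    accum = accum + np.square(grad)
+    var = var - lr * grad / np.sqrt(accum)
+    return var, accum
+```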
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyAdagradDA.md b/site/en/api_docs/python/tf/raw_ops/ApplyAdagradDA.md new file mode 100644 index 00000000000..af8116dd423 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyAdagradDA.md @@ -0,0 +1,143 @@ +description: Update '*var' according to the proximal adagrad scheme. + +
+ + +
+ +# tf.raw_ops.ApplyAdagradDA + + + + + + + + + +Update '*var' according to the proximal adagrad scheme. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`gradient_accumulator` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`gradient_squared_accumulator` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `var`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `var`. +L2 regularization. Must be a scalar. +
+`global_step` + +A `Tensor` of type `int64`. +Training step number. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var and accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyAdagradV2.md b/site/en/api_docs/python/tf/raw_ops/ApplyAdagradV2.md new file mode 100644 index 00000000000..848f698fd56 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyAdagradV2.md @@ -0,0 +1,129 @@ +description: Update '*var' according to the adagrad scheme. + +
+ + +
+ +# tf.raw_ops.ApplyAdagradV2 + + + + + + + + + +Update '*var' according to the adagrad scheme. + + + + + + + + + +accum += grad * grad +var -= lr * grad * (1 / sqrt(accum)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `var`. +Constant factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`update_slots` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyAdam.md b/site/en/api_docs/python/tf/raw_ops/ApplyAdam.md new file mode 100644 index 00000000000..faca0bdd9e6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyAdam.md @@ -0,0 +1,172 @@ +description: Update '*var' according to the Adam algorithm. + +
+ + +
+ +# tf.raw_ops.ApplyAdam + + + + + + + + + +Update '*var' according to the Adam algorithm. + + + + + + + + + +$$lr_t := \text{learning\_rate} * \sqrt{1 - beta_2^t} / (1 - beta_1^t)$$ +$$m_t := beta_1 * m_{t-1} + (1 - beta_1) * g$$ +$$v_t := beta_2 * v_{t-1} + (1 - beta_2) * g * g$$ +$$variable := variable - lr_t * m_t / (\sqrt{v_t} + \epsilon)$$ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`m` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`v` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`beta1_power` + +A `Tensor`. Must have the same type as `var`. +Must be a scalar. +
+`beta2_power` + +A `Tensor`. Must have the same type as `var`. +Must be a scalar. +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`beta1` + +A `Tensor`. Must have the same type as `var`. +Momentum factor. Must be a scalar. +
+`beta2` + +A `Tensor`. Must have the same type as `var`. +Momentum factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `var`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, m, and v tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`use_nesterov` + +An optional `bool`. Defaults to `False`. +If `True`, uses the nesterov update. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
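+
+A NumPy sketch of the four equations above; plain arrays replace the mutable ref inputs, `beta1_power`/`beta2_power` are the running beta1^t/beta2^t scalars from the argument list, and the `use_nesterov` variant is not shown:
+
+```python
+import numpy as np
+
+def adam_step(var, m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
+    lr_t = lr * np.sqrt(1.0 - beta2_power) / (1.0 - beta1_power)
+    m = beta1 * m + (1.0 - beta1) * grad
+    v = beta2 * v + (1.0 - beta2) * np.square(grad)
+    var = var - lr_t * m / (np.sqrt(v) + epsilon)
+    return var, m, v
+```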
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyAddSign.md b/site/en/api_docs/python/tf/raw_ops/ApplyAddSign.md new file mode 100644 index 00000000000..973fca497b3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyAddSign.md @@ -0,0 +1,136 @@ +description: Update '*var' according to the AddSign update. + +
+ + +
+ +# tf.raw_ops.ApplyAddSign + + + + + + + + + +Update '*var' according to the AddSign update. + + + + + + + + + +m_t <- beta1 * m_{t-1} + (1 - beta1) * g +update <- (alpha + sign_decay * sign(g) *sign(m)) * g +variable <- variable - lr_t * update + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`m` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`alpha` + +A `Tensor`. Must have the same type as `var`. Must be a scalar. +
+`sign_decay` + +A `Tensor`. Must have the same type as `var`. +Must be a scalar. +
+`beta` + +A `Tensor`. Must have the same type as `var`. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and m tensors is +protected by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
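+
+A NumPy sketch of the AddSign update above; the op's `beta` argument plays the role of `beta1` in the pseudocode, and plain arrays stand in for the mutable ref inputs:
+
+```python
+import numpy as np
+
+def add_sign_step(var, m, lr, alpha, sign_decay, beta, grad):
+    m = beta * m + (1.0 - beta) * grad
+    update = (alpha + sign_decay * np.sign(grad) * np.sign(m)) * grad
+    var = var - lr * update
+    return var, m
+```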
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyCenteredRMSProp.md b/site/en/api_docs/python/tf/raw_ops/ApplyCenteredRMSProp.md new file mode 100644 index 00000000000..0e09605fb1a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyCenteredRMSProp.md @@ -0,0 +1,169 @@ +description: Update '*var' according to the centered RMSProp algorithm. + +
+ + +
+ +# tf.raw_ops.ApplyCenteredRMSProp + + + + + + + + + +Update '*var' according to the centered RMSProp algorithm. + + + + + + + + + +The centered RMSProp algorithm uses an estimate of the centered second moment +(i.e., the variance) for normalization, as opposed to regular RMSProp, which +uses the (uncentered) second moment. This often helps with training, but is +slightly more expensive in terms of computation and memory. + +Note that in dense implementation of this algorithm, mg, ms, and mom will +update even if the grad is zero, but in this sparse implementation, mg, ms, +and mom will not update in iterations during which the grad is zero. + +mean_square = decay * mean_square + (1-decay) * gradient ** 2 +mean_grad = decay * mean_grad + (1-decay) * gradient + +Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2) + +mg <- rho * mg_{t-1} + (1-rho) * grad +ms <- rho * ms_{t-1} + (1-rho) * grad * grad +mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon) +var <- var - mom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`mg` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`ms` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`mom` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `var`. +Decay rate. Must be a scalar. +
+`momentum` + +A `Tensor`. Must have the same type as `var`. +
+`epsilon` + +A `Tensor`. Must have the same type as `var`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, mg, ms, and mom tensors is +protected by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
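+
+A NumPy sketch of the centered update (the mg/ms/mom equations above), with plain arrays in place of the mutable ref inputs:
+
+```python
+import numpy as np
+
+def centered_rms_prop_step(var, mg, ms, mom, lr, rho, momentum, epsilon, grad):
+    mg = rho * mg + (1.0 - rho) * grad
+    ms = rho * ms + (1.0 - rho) * np.square(grad)
+    mom = momentum * mom + lr * grad / np.sqrt(ms - np.square(mg) + epsilon)
+    var = var - mom
+    return var, mg, ms, mom
+```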
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyFtrl.md b/site/en/api_docs/python/tf/raw_ops/ApplyFtrl.md new file mode 100644 index 00000000000..727019617df --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyFtrl.md @@ -0,0 +1,148 @@ +description: Update '*var' according to the Ftrl-proximal scheme. + +
+ + +
+ +# tf.raw_ops.ApplyFtrl + + + + + + + + + +Update '*var' according to the Ftrl-proximal scheme. + + + + + + + + + +accum_new = accum + grad * grad +linear += grad + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var +quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 +var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 +accum = accum_new + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`linear` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `var`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `var`. +L2 regularization. Must be a scalar. +
+`lr_power` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyFtrlV2.md b/site/en/api_docs/python/tf/raw_ops/ApplyFtrlV2.md new file mode 100644 index 00000000000..d20707c3c66 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyFtrlV2.md @@ -0,0 +1,158 @@ +description: Update '*var' according to the Ftrl-proximal scheme. + +
+ + +
+ +# tf.raw_ops.ApplyFtrlV2 + + + + + + + + + +Update '*var' according to the Ftrl-proximal scheme. + + + + + + + + + +grad_with_shrinkage = grad + 2 * l2_shrinkage * var +accum_new = accum + grad_with_shrinkage * grad_with_shrinkage +linear += grad_with_shrinkage + + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var +quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 +var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 +accum = accum_new + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`linear` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `var`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `var`. +L2 shrinkage regularization. Must be a scalar. +
+`l2_shrinkage` + +A `Tensor`. Must have the same type as `var`. +
+`lr_power` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyGradientDescent.md b/site/en/api_docs/python/tf/raw_ops/ApplyGradientDescent.md new file mode 100644 index 00000000000..5716c175069 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyGradientDescent.md @@ -0,0 +1,102 @@ +description: Update '*var' by subtracting 'alpha' * 'delta' from it. + +
+ + +
+ +# tf.raw_ops.ApplyGradientDescent + + + + + + + + + +Update '*var' by subtracting 'alpha' * 'delta' from it. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`alpha` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`delta` + +A `Tensor`. Must have the same type as `var`. The change. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, the subtraction will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
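+
+This op expects a TF1-style reference variable, so it does not apply directly to eager tf.Variable objects; the sketch below uses its resource-variable counterpart, tf.raw_ops.ResourceApplyGradientDescent, to show the same var -= alpha * delta update:
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([1.0, 2.0])
+# In-place update: v -= alpha * delta.
+tf.raw_ops.ResourceApplyGradientDescent(
+    var=v.handle, alpha=tf.constant(0.1), delta=tf.constant([1.0, 1.0]))
+print(v.numpy())  # [0.9 1.9]
+```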
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyMomentum.md b/site/en/api_docs/python/tf/raw_ops/ApplyMomentum.md new file mode 100644 index 00000000000..5d423e36930 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyMomentum.md @@ -0,0 +1,134 @@ +description: Update '*var' according to the momentum scheme. + +
+ + +
+ +# tf.raw_ops.ApplyMomentum + + + + + + + + + +Update '*var' according to the momentum scheme. + + + + + + + + + +Set use_nesterov = True if you want to use Nesterov momentum. + +accum = accum * momentum + grad +var -= lr * accum + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`momentum` + +A `Tensor`. Must have the same type as `var`. +Momentum. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`use_nesterov` + +An optional `bool`. Defaults to `False`. +If `True`, the tensor passed to compute grad will be +var - lr * momentum * accum, so in the end, the var you get is actually +var - lr * momentum * accum. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
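+
+A NumPy sketch of the momentum update above; the Nesterov branch follows the usual look-ahead form implied by the `use_nesterov` note (an assumption, since the page only spells out the plain update):
+
+```python
+import numpy as np
+
+def momentum_step(var, accum, lr, grad, momentum, use_nesterov=False):
+    accum = momentum * accum + grad
+    if use_nesterov:
+        var = var - lr * (grad + momentum * accum)
+    else:
+        var = var - lr * accum
+    return var, accum
+```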
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyPowerSign.md b/site/en/api_docs/python/tf/raw_ops/ApplyPowerSign.md new file mode 100644 index 00000000000..0d7fda1e835 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyPowerSign.md @@ -0,0 +1,136 @@ +description: Update '*var' according to the AddSign update. + +
+ + +
+ +# tf.raw_ops.ApplyPowerSign + + + + + + + + + +Update '*var' according to the AddSign update. + + + + + + + + + +m_t <- beta1 * m_{t-1} + (1 - beta1) * g +update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g +variable <- variable - lr_t * update + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`m` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`logbase` + +A `Tensor`. Must have the same type as `var`. Must be a scalar. +
+`sign_decay` + +A `Tensor`. Must have the same type as `var`. +Must be a scalar. +
+`beta` + +A `Tensor`. Must have the same type as `var`. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and m tensors is +protected by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
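+
+A NumPy sketch of the PowerSign update above; the op's `beta` argument is the `beta1` of the pseudocode, and plain arrays stand in for the mutable ref inputs:
+
+```python
+import numpy as np
+
+def power_sign_step(var, m, lr, logbase, sign_decay, beta, grad):
+    m = beta * m + (1.0 - beta) * grad
+    update = np.exp(logbase * sign_decay * np.sign(grad) * np.sign(m)) * grad
+    var = var - lr * update
+    return var, m
+```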
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyProximalAdagrad.md b/site/en/api_docs/python/tf/raw_ops/ApplyProximalAdagrad.md new file mode 100644 index 00000000000..6d8e09b7160 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyProximalAdagrad.md @@ -0,0 +1,129 @@ +description: Update '*var' and '*accum' according to FOBOS with Adagrad learning rate. + +
+ + +
+ +# tf.raw_ops.ApplyProximalAdagrad + + + + + + + + + +Update '*var' and '*accum' according to FOBOS with Adagrad learning rate. + + + + + + + + + +accum += grad * grad +prox_v = var - lr * grad * (1 / sqrt(accum)) +var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0} + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `var`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `var`. +L2 regularization. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var and accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
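+
+A NumPy sketch of the documented FOBOS-with-Adagrad pseudocode, with plain arrays in place of the mutable ref inputs:
+
+```python
+import numpy as np
+
+def proximal_adagrad_step(var, accum, lr, l1, l2, grad):
+    accum = accum + np.square(grad)
+    prox_v = var - lr * grad / np.sqrt(accum)
+    var = np.sign(prox_v) / (1.0 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0.0)
+    return var, accum
+```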
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyProximalGradientDescent.md b/site/en/api_docs/python/tf/raw_ops/ApplyProximalGradientDescent.md new file mode 100644 index 00000000000..2966a675055 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyProximalGradientDescent.md @@ -0,0 +1,120 @@ +description: Update '*var' as FOBOS algorithm with fixed learning rate. + +
+ + +
+ +# tf.raw_ops.ApplyProximalGradientDescent + + + + + + + + + +Update '*var' as FOBOS algorithm with fixed learning rate. + + + + + + + + + +prox_v = var - alpha * delta +var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0} + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`alpha` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `var`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `var`. +L2 regularization. Must be a scalar. +
+`delta` + +A `Tensor`. Must have the same type as `var`. The change. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the subtraction will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApplyRMSProp.md b/site/en/api_docs/python/tf/raw_ops/ApplyRMSProp.md new file mode 100644 index 00000000000..1557e12cb5a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApplyRMSProp.md @@ -0,0 +1,152 @@ +description: Update '*var' according to the RMSProp algorithm. + +
+ + +
+ +# tf.raw_ops.ApplyRMSProp + + + + + + + + + +Update '*var' according to the RMSProp algorithm. + + + + + + + + + +Note that in dense implementation of this algorithm, ms and mom will +update even if the grad is zero, but in this sparse implementation, ms +and mom will not update in iterations during which the grad is zero. + +mean_square = decay * mean_square + (1-decay) * gradient ** 2 +Delta = learning_rate * gradient / sqrt(mean_square + epsilon) + +ms <- rho * ms_{t-1} + (1-rho) * grad * grad +mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon) +var <- var - mom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`ms` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`mom` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `var`. +Decay rate. Must be a scalar. +
+`momentum` + +A `Tensor`. Must have the same type as `var`. +
+`epsilon` + +A `Tensor`. Must have the same type as `var`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, ms, and mom tensors is protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
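+
+A NumPy sketch of the ms/mom/var equations above, with plain arrays standing in for the mutable ref inputs:
+
+```python
+import numpy as np
+
+def rms_prop_step(var, ms, mom, lr, rho, momentum, epsilon, grad):
+    ms = rho * ms + (1.0 - rho) * np.square(grad)
+    mom = momentum * mom + lr * grad / np.sqrt(ms + epsilon)
+    var = var - mom
+    return var, ms, mom
+```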
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ApproximateEqual.md b/site/en/api_docs/python/tf/raw_ops/ApproximateEqual.md new file mode 100644 index 00000000000..d4139d7a21c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ApproximateEqual.md @@ -0,0 +1,91 @@ +description: Returns the truth value of abs(x-y) < tolerance element-wise. + +
+ + +
+ +# tf.raw_ops.ApproximateEqual + + + + + + + + + +Returns the truth value of abs(x-y) < tolerance element-wise. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`tolerance` + +An optional `float`. Defaults to `1e-05`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
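+
+A small eager sketch of the raw op (tf.math.approximate_equal is the usual public wrapper):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0])
+y = tf.constant([1.000001, 2.1])
+tf.raw_ops.ApproximateEqual(x=x, y=y, tolerance=1e-3)  # [True, False]
+```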
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ArgMax.md b/site/en/api_docs/python/tf/raw_ops/ArgMax.md new file mode 100644 index 00000000000..562c1943852 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ArgMax.md @@ -0,0 +1,108 @@ +description: Returns the index with the largest value across dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.ArgMax + + + + + + + + + +Returns the index with the largest value across dimensions of a tensor. + + + + + + + + + +Note that in case of ties the identity of the return value is not guaranteed. + +#### Usage: + +```python +import tensorflow as tf +a = [1, 10, 26.9, 2.8, 166.32, 62.3] +b = tf.math.argmax(input = a) +c = tf.keras.backend.eval(b) +# c = 4 +# here a[4] = 166.32 which is the largest element of a across axis 0 +``` + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`dimension` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +int32 or int64, must be in the range `[-rank(input), rank(input))`. +Describes which dimension of the input Tensor to reduce across. For vectors, +use dimension = 0. +
+`output_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_type`. +
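+
+The raw op takes the reduction axis under the name `dimension`; a small eager sketch mirroring the usage example above:
+
+```python
+import tensorflow as tf
+
+a = tf.constant([1, 10, 26.9, 2.8, 166.32, 62.3])
+tf.raw_ops.ArgMax(input=a, dimension=0)                        # 4, dtype int64
+tf.raw_ops.ArgMax(input=a, dimension=0, output_type=tf.int32)  # 4, dtype int32
+```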
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ArgMin.md b/site/en/api_docs/python/tf/raw_ops/ArgMin.md new file mode 100644 index 00000000000..03df850579a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ArgMin.md @@ -0,0 +1,108 @@ +description: Returns the index with the smallest value across dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.ArgMin + + + + + + + + + +Returns the index with the smallest value across dimensions of a tensor. + + + + + + + + + +Note that in case of ties the identity of the return value is not guaranteed. + +#### Usage: + +```python +import tensorflow as tf +a = [1, 10, 26.9, 2.8, 166.32, 62.3] +b = tf.math.argmin(input = a) +c = tf.keras.backend.eval(b) +# c = 0 +# here a[0] = 1 which is the smallest element of a across axis 0 +``` + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`dimension` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +int32 or int64, must be in the range `[-rank(input), rank(input))`. +Describes which dimension of the input Tensor to reduce across. For vectors, +use dimension = 0. +
+`output_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AsString.md b/site/en/api_docs/python/tf/raw_ops/AsString.md new file mode 100644 index 00000000000..2b35693110a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AsString.md @@ -0,0 +1,139 @@ +description: Converts each entry in the given tensor to strings. + +
+ + +
+ +# tf.raw_ops.AsString + + + + + + + + + +Converts each entry in the given tensor to strings. + + + + + + + + + +Supports many numeric types and boolean. + +For Unicode, see the +[Working with Unicode text](https://www.tensorflow.org/tutorials/representation/unicode) +tutorial. + +#### Examples: + + + +``` +>>> tf.strings.as_string([3, 2]) +<tf.Tensor: shape=(2,), dtype=string, numpy=array([b'3', b'2'], dtype=object)> +>>> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy() +array([b'3.14', b'2.72'], dtype=object) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `float32`, `float64`, `bool`. +
+`precision` + +An optional `int`. Defaults to `-1`. +The post-decimal precision to use for floating point numbers. +Only used if precision > -1. +
+`scientific` + +An optional `bool`. Defaults to `False`. +Use scientific notation for floating point numbers. +
+`shortest` + +An optional `bool`. Defaults to `False`. +Use shortest representation (either scientific or standard) for +floating point numbers. +
+`width` + +An optional `int`. Defaults to `-1`. +Pad pre-decimal numbers to this width. +Applies to both floating point and integer numbers. +Only used if width > -1. +
+`fill` + +An optional `string`. Defaults to `""`. +The value to pad if width > -1. If empty, pads with spaces. +Another typical value is '0'. String cannot be longer than 1 character. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
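+
+A small eager sketch of the raw op itself, exercising the `precision`, `width` and `fill` attributes described above:
+
+```python
+import tensorflow as tf
+
+tf.raw_ops.AsString(input=tf.constant([3.1415926, 2.71828]), precision=2)
+# [b'3.14', b'2.72']
+tf.raw_ops.AsString(input=tf.constant([7, 42]), width=4, fill="0")
+# [b'0007', b'0042']
+```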
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Asin.md b/site/en/api_docs/python/tf/raw_ops/Asin.md new file mode 100644 index 00000000000..2b89fdb218b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Asin.md @@ -0,0 +1,94 @@ +description: Computes the trigonometric inverse sine of x element-wise. + +
+ + +
+ +# tf.raw_ops.Asin + + + + + + + + + +Computes the trigonometric inverse sine of x element-wise. + + + + + + + + + +The tf.math.asin operation returns the inverse of tf.math.sin, such that +if `y = tf.math.sin(x)` then, `x = tf.math.asin(y)`. + +**Note**: The output of tf.math.asin will lie within the invertible range +of sine, i.e. [-pi/2, pi/2]. + +#### For example: + + + +```python +# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)] +x = tf.constant([1.047, 0.785]) +y = tf.math.sin(x) # [0.8659266, 0.7068252] + +tf.math.asin(y) # [1.047, 0.785] = x +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Asinh.md b/site/en/api_docs/python/tf/raw_ops/Asinh.md new file mode 100644 index 00000000000..59de0f65c81 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Asinh.md @@ -0,0 +1,85 @@ +description: Computes inverse hyperbolic sine of x element-wise. + +
+ + +
+ +# tf.raw_ops.Asinh + + + + + + + + + +Computes inverse hyperbolic sine of x element-wise. + + + + + + + + + + Given an input tensor, this function computes inverse hyperbolic sine + for every element in the tensor. Both input and output have a range of + `[-inf, inf]`. + + ```python + x = tf.constant([-float("inf"), -2, -0.5, 1, 1.2, 200, 10000, float("inf")]) + tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Assert.md b/site/en/api_docs/python/tf/raw_ops/Assert.md new file mode 100644 index 00000000000..cc3355a2265 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Assert.md @@ -0,0 +1,95 @@ +description: Asserts that the given condition is true. + +
+ + +
+ +# tf.raw_ops.Assert + + + + + + + + + +Asserts that the given condition is true. + + + + + + + + + +If `condition` evaluates to false, print the list of tensors in `data`. +`summarize` determines how many entries of the tensors to print. + + + + + + + + + + + + + + + + + + + +
+`condition` + +A `Tensor` of type `bool`. The condition to evaluate. +
+`data` + +A list of `Tensor` objects. +The tensors to print out when condition is false. +
+`summarize` + +An optional `int`. Defaults to `3`. +Print this many entries of each tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
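+
+A small eager sketch (tf.debugging.Assert is the public wrapper): a true condition passes silently, while a false one raises InvalidArgumentError and prints up to `summarize` entries of each tensor in `data`:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([2.0, 3.0])
+tf.raw_ops.Assert(condition=tf.reduce_all(x > 0), data=[x], summarize=3)
+```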
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AssertCardinalityDataset.md b/site/en/api_docs/python/tf/raw_ops/AssertCardinalityDataset.md new file mode 100644 index 00000000000..4c7a389a21a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AssertCardinalityDataset.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.AssertCardinalityDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`cardinality` + +A `Tensor` of type `int64`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AssertNextDataset.md b/site/en/api_docs/python/tf/raw_ops/AssertNextDataset.md new file mode 100644 index 00000000000..a8abd4b31c7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AssertNextDataset.md @@ -0,0 +1,110 @@ +description: A transformation that asserts which transformations happen next. + +
+ + +
+ +# tf.raw_ops.AssertNextDataset + + + + + + + + + +A transformation that asserts which transformations happen next. + + + + + + + + + +This transformation checks whether the camel-case names (i.e. "FlatMap", not +"flat_map") of the transformations following this transformation match the list +of names in the `transformations` argument. If there is a mismatch, the +transformation raises an exception. + +The check occurs when iterating over the contents of the dataset, which +means that the check happens *after* any static optimizations are applied +to the dataset graph. + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +`AssertNextDataset` passes through the outputs of its input dataset. +
+`transformations` + +A `Tensor` of type `string`. +A tf.string vector tf.Tensor identifying the transformations that are +expected to happen next. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Assign.md b/site/en/api_docs/python/tf/raw_ops/Assign.md new file mode 100644 index 00000000000..0e4652f321f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Assign.md @@ -0,0 +1,107 @@ +description: Update 'ref' by assigning 'value' to it. + +
+ + +
+ +# tf.raw_ops.Assign + + + + + + + + + +Update 'ref' by assigning 'value' to it. + + + + + + + + + +This operation outputs "ref" after the assignment is done. +This makes it easier to chain operations that need to use the reset value. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. +Should be from a `Variable` node. May be uninitialized. +
+`value` + +A `Tensor`. Must have the same type as `ref`. +The value to be assigned to the variable. +
+`validate_shape` + +An optional `bool`. Defaults to `True`. +If true, the operation will validate that the shape +of 'value' matches the shape of the Tensor being assigned to. If false, +'ref' will take on the shape of 'value'. +
+`use_locking` + +An optional `bool`. Defaults to `True`. +If True, the assignment will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
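+
+This op targets TF1-style reference variables; with TF2 resource variables the equivalent state update is Variable.assign (which lowers to AssignVariableOp), sketched here:
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([1.0, 2.0])
+v.assign([3.0, 4.0])   # resource-variable counterpart of Assign
+print(v.numpy())       # [3. 4.]
+```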
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AssignAdd.md b/site/en/api_docs/python/tf/raw_ops/AssignAdd.md new file mode 100644 index 00000000000..c33a5021650 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AssignAdd.md @@ -0,0 +1,97 @@ +description: Update 'ref' by adding 'value' to it. + +
+ + +
+ +# tf.raw_ops.AssignAdd + + + + + + + + + +Update 'ref' by adding 'value' to it. + + + + + + + + + +This operation outputs "ref" after the update is done. +This makes it easier to chain operations that need to use the reset value. + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a `Variable` node. +
+`value` + +A `Tensor`. Must have the same type as `ref`. +The value to be added to the variable. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the addition will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AssignAddVariableOp.md b/site/en/api_docs/python/tf/raw_ops/AssignAddVariableOp.md new file mode 100644 index 00000000000..924094912dd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AssignAddVariableOp.md @@ -0,0 +1,87 @@ +description: Adds a value to the current value of a variable. + +
+ + +
+ +# tf.raw_ops.AssignAddVariableOp + + + + + + + + + +Adds a value to the current value of a variable. + + + + + + + + + +Any ReadVariableOp with a control dependency on this op is guaranteed to +see the incremented value or a subsequent newer one. + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +handle to the resource in which to store the variable. +
+`value` + +A `Tensor`. the value by which the variable will be incremented. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AssignSub.md b/site/en/api_docs/python/tf/raw_ops/AssignSub.md new file mode 100644 index 00000000000..05a9e4e39f0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AssignSub.md @@ -0,0 +1,97 @@ +description: Update 'ref' by subtracting 'value' from it. + +
+ + +
+ +# tf.raw_ops.AssignSub + + + + + + + + + +Update 'ref' by subtracting 'value' from it. + + + + + + + + + +This operation outputs "ref" after the update is done. +This makes it easier to chain operations that need to use the reset value. + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a `Variable` node. +
+`value` + +A `Tensor`. Must have the same type as `ref`. +The value to be subtracted from the variable. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the subtraction will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AssignSubVariableOp.md b/site/en/api_docs/python/tf/raw_ops/AssignSubVariableOp.md new file mode 100644 index 00000000000..b7de19420b9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AssignSubVariableOp.md @@ -0,0 +1,87 @@ +description: Subtracts a value from the current value of a variable. + +
+ + +
+ +# tf.raw_ops.AssignSubVariableOp + + + + + + + + + +Subtracts a value from the current value of a variable. + + + + + + + + + +Any ReadVariableOp with a control dependency on this op is guaranteed to +see the decremented value or a subsequent newer one. + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +handle to the resource in which to store the variable. +
+`value` + +A `Tensor`. the value by which the variable will be decremented. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AssignVariableOp.md b/site/en/api_docs/python/tf/raw_ops/AssignVariableOp.md new file mode 100644 index 00000000000..25517f30df3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AssignVariableOp.md @@ -0,0 +1,87 @@ +description: Assigns a new value to a variable. + +
+ + +
+ +# tf.raw_ops.AssignVariableOp + + + + + + + + + +Assigns a new value to a variable. + + + + + + + + + +Any ReadVariableOp with a control dependency on this op is guaranteed to return +this value or a subsequent newer value of the variable. + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +handle to the resource in which to store the variable. +
+`value` + +A `Tensor`. the value to assign to the variable. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
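+
+A small eager sketch; the `resource` input is a variable's resource handle (`v.handle` on a tf.Variable):
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([1.0, 2.0])
+tf.raw_ops.AssignVariableOp(resource=v.handle, value=tf.constant([5.0, 6.0]))
+print(v.numpy())  # [5. 6.]
+```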
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Atan.md b/site/en/api_docs/python/tf/raw_ops/Atan.md new file mode 100644 index 00000000000..9161496ac89 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Atan.md @@ -0,0 +1,94 @@ +description: Computes the trigonometric inverse tangent of x element-wise. + +
+ + +
+ +# tf.raw_ops.Atan + + + + + + + + + +Computes the trigonometric inverse tangent of x element-wise. + + + + + + + + + +The tf.math.atan operation returns the inverse of tf.math.tan, such that +if `y = tf.math.tan(x)` then, `x = tf.math.atan(y)`. + +**Note**: The output of tf.math.atan will lie within the invertible range +of tan, i.e. (-pi/2, pi/2). + +#### For example: + + + +```python +# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)] +x = tf.constant([1.047, 0.785]) +y = tf.math.tan(x) # [1.731261, 0.99920404] + +tf.math.atan(y) # [1.047, 0.785] = x +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Atan2.md b/site/en/api_docs/python/tf/raw_ops/Atan2.md new file mode 100644 index 00000000000..6dd60794b8e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Atan2.md @@ -0,0 +1,89 @@ +description: Computes arctangent of y/x element-wise, respecting signs of the arguments. + +
+ + +
+ +# tf.raw_ops.Atan2 + + + + + + + + + +Computes arctangent of `y/x` element-wise, respecting signs of the arguments. + + + + + + + + + +This is the angle \( \theta \in [-\pi, \pi] \) such that +\[ x = r \cos(\theta) \] +and +\[ y = r \sin(\theta) \] +where \(r = \sqrt{x^2 + y^2} \). + + + + + + + + + + + + + + + + +
+`y` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`x` + +A `Tensor`. Must have the same type as `y`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `y`. +
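+
+A small eager sketch of the raw op (tf.math.atan2 is the public wrapper); note the argument order is `y` first, then `x`:
+
+```python
+import tensorflow as tf
+
+y = tf.constant([1.0, 1.0, -1.0])
+x = tf.constant([1.0, -1.0, -1.0])
+tf.raw_ops.Atan2(y=y, x=x)  # ~[0.785, 2.356, -2.356], i.e. [pi/4, 3pi/4, -3pi/4]
+```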
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Atanh.md b/site/en/api_docs/python/tf/raw_ops/Atanh.md new file mode 100644 index 00000000000..76d5ea066dc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Atanh.md @@ -0,0 +1,87 @@ +description: Computes inverse hyperbolic tangent of x element-wise. + +
+ + +
+ +# tf.raw_ops.Atanh + + + + + + + + + +Computes inverse hyperbolic tangent of x element-wise. + + + + + + + + + + Given an input tensor, this function computes inverse hyperbolic tangent + for every element in the tensor. Input range is `[-1,1]` and output range is + `[-inf, inf]`. If input is `-1`, output will be `-inf` and if the + input is `1`, output will be `inf`. Values outside the range will have + `nan` as output. + + ```python + x = tf.constant([-float("inf"), -1, -0.5, 1, 0, 0.5, 10, float("inf")]) + tf.math.atanh(x) ==> [nan -inf -0.54930615 inf 0. 0.54930615 nan nan] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AudioSpectrogram.md b/site/en/api_docs/python/tf/raw_ops/AudioSpectrogram.md new file mode 100644 index 00000000000..d910955b849 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AudioSpectrogram.md @@ -0,0 +1,128 @@ +description: Produces a visualization of audio data over time. + +
+ + +
+ +# tf.raw_ops.AudioSpectrogram + + + + + + + + + +Produces a visualization of audio data over time. + + + + + + + + + +Spectrograms are a standard way of representing audio information as a series of +slices of frequency information, one slice for each window of time. By joining +these together into a sequence, they form a distinctive fingerprint of the sound +over time. + +This op expects to receive audio data as an input, stored as floats in the range +-1 to 1, together with a window width in samples, and a stride specifying how +far to move the window between slices. From this it generates a three +dimensional output. The first dimension is for the channels in the input, so a +stereo audio input would have two here for example. The second dimension is time, +with successive frequency slices. The third dimension has an amplitude value for +each frequency during that time slice. + +This means the layout when converted and saved as an image is rotated 90 degrees +clockwise from a typical spectrogram. Time is descending down the Y axis, and +the frequency decreases from left to right. + +Each value in the result represents the square root of the sum of the real and +imaginary parts of an FFT on the current window of samples. In this way, the +lowest dimension represents the power of each frequency in the current window, +and adjacent windows are concatenated in the next dimension. + +To get a more intuitive and visual look at what this operation does, you can run +tensorflow/examples/wav_to_spectrogram to read in an audio file and save out the +resulting spectrogram as a PNG image. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `float32`. Float representation of audio data. +
+`window_size` + +An `int`. +How wide the input window is in samples. For the highest efficiency +this should be a power of two, but other values are accepted. +
+`stride` + +An `int`. +How widely apart the center of adjacent sample windows should be. +
+`magnitude_squared` + +An optional `bool`. Defaults to `False`. +Whether to return the squared magnitude or just the +magnitude. Using squared magnitude can avoid extra calculations. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
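+
+A minimal eager sketch; the input layout ([samples, channels], as produced by tf.audio.decode_wav) and the synthetic tone are assumptions for illustration:
+
+```python
+import tensorflow as tf
+
+# One second of a 440 Hz tone at 16 kHz, mono: shape [samples, channels].
+t = tf.range(16000, dtype=tf.float32) / 16000.0
+audio = tf.sin(2.0 * 3.14159265 * 440.0 * t)[:, tf.newaxis]
+spec = tf.raw_ops.AudioSpectrogram(input=audio, window_size=512, stride=256)
+print(spec.shape)  # [channels, time_frames, frequency_bins]
+```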
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AudioSummary.md b/site/en/api_docs/python/tf/raw_ops/AudioSummary.md new file mode 100644 index 00000000000..b8da9ba44f0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AudioSummary.md @@ -0,0 +1,111 @@ +description: Outputs a Summary protocol buffer with audio. + +
+ + +
+ +# tf.raw_ops.AudioSummary + + + + + + + + + +Outputs a `Summary` protocol buffer with audio. + + + + + + + + + +The summary has up to `max_outputs` summary values containing audio. The +audio is built from `tensor` which must be 3-D with shape `[batch_size, +frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are +assumed to be in the range of `[-1.0, 1.0]` with a sample rate of `sample_rate`. + +The `tag` argument is a scalar `Tensor` of type `string`. It is used to +build the `tag` of the summary values: + +* If `max_outputs` is 1, the summary value tag is '*tag*/audio'. +* If `max_outputs` is greater than 1, the summary value tags are + generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc. + + + + + + + + + + + + + + + + + + + + + + +
+`tag` + +A `Tensor` of type `string`. +Scalar. Used to build the `tag` attribute of the summary values. +
+`tensor` + +A `Tensor` of type `float32`. 2-D of shape `[batch_size, frames]`. +
+`sample_rate` + +A `float`. The sample rate of the signal in hertz. +
+`max_outputs` + +An optional `int` that is `>= 1`. Defaults to `3`. +Max number of batch elements to generate audio for. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AudioSummaryV2.md b/site/en/api_docs/python/tf/raw_ops/AudioSummaryV2.md new file mode 100644 index 00000000000..5681843e975 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AudioSummaryV2.md @@ -0,0 +1,112 @@ +description: Outputs a Summary protocol buffer with audio. + +
+ + +
+ +# tf.raw_ops.AudioSummaryV2 + + + + + + + + + +Outputs a `Summary` protocol buffer with audio. + + + + + + + + + +The summary has up to `max_outputs` summary values containing audio. The +audio is built from `tensor` which must be 3-D with shape `[batch_size, +frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are +assumed to be in the range of `[-1.0, 1.0]` with a sample rate of `sample_rate`. + +The `tag` argument is a scalar `Tensor` of type `string`. It is used to +build the `tag` of the summary values: + +* If `max_outputs` is 1, the summary value tag is '*tag*/audio'. +* If `max_outputs` is greater than 1, the summary value tags are + generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc. + + + + + + + + + + + + + + + + + + + + + + +
+`tag` + +A `Tensor` of type `string`. +Scalar. Used to build the `tag` attribute of the summary values. +
+`tensor` + +A `Tensor` of type `float32`. 2-D of shape `[batch_size, frames]`. +
+`sample_rate` + +A `Tensor` of type `float32`. +The sample rate of the signal in hertz. +
+`max_outputs` + +An optional `int` that is `>= 1`. Defaults to `3`. +Max number of batch elements to generate audio for. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AutoShardDataset.md b/site/en/api_docs/python/tf/raw_ops/AutoShardDataset.md new file mode 100644 index 00000000000..a22c94cdde1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AutoShardDataset.md @@ -0,0 +1,123 @@ +description: Creates a dataset that shards the input dataset. + +
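A minimal sketch of calling the `tf.raw_ops.AudioSummaryV2` op documented above; the random waveforms, the tag string, and the sample rate are made-up placeholders.

```python
import tensorflow as tf

# A batch of two one-second waveforms in [-1.0, 1.0]: shape [batch_size, frames].
waveforms = tf.random.uniform([2, 16000], minval=-1.0, maxval=1.0)

summary_proto = tf.raw_ops.AudioSummaryV2(
    tag=tf.constant("audio"),
    tensor=waveforms,
    sample_rate=tf.constant(16000.0),
    max_outputs=3)

# The result is a scalar string tensor holding a serialized Summary proto.
print(summary_proto.dtype)  # <dtype: 'string'>
```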
+ + +
+ +# tf.raw_ops.AutoShardDataset + + + + + + + + + +Creates a dataset that shards the input dataset. + + + + + + + + + +Creates a dataset that shards the input dataset by num_workers, returning a +sharded dataset for the index-th worker. This attempts to automatically shard +a dataset by examining the Dataset graph and inserting a shard op before the +inputs to a reader Dataset (e.g. CSVDataset, TFRecordDataset). + +This dataset will throw a NotFound error if we cannot shard the dataset +automatically. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`num_workers` + +A `Tensor` of type `int64`. +A scalar representing the number of workers to distribute this dataset across. +
+`index` + +A `Tensor` of type `int64`. +A scalar representing the index of the current worker out of num_workers. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`auto_shard_policy` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AvgPool.md b/site/en/api_docs/python/tf/raw_ops/AvgPool.md new file mode 100644 index 00000000000..185ae904bf8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AvgPool.md @@ -0,0 +1,116 @@ +description: Performs average pooling on the input. + +
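A rough sketch of driving the `tf.raw_ops.AutoShardDataset` op documented above from eager code. It assumes `tf.data.experimental.to_variant` / `from_variant` for converting between `Dataset` objects and the variant tensors the op consumes, and that a non-file-based `range` dataset falls back to element-wise (data) sharding under the default policy; in ordinary code this rewrite is applied for you by `tf.distribute`.

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(8)

# The raw op works on variant tensors, so convert the dataset first.
variant = tf.data.experimental.to_variant(dataset)

sharded_variant = tf.raw_ops.AutoShardDataset(
    input_dataset=variant,
    num_workers=2,   # total number of workers
    index=0,         # which worker's shard to build
    output_types=[tf.int64],
    output_shapes=[tf.TensorShape([])])

sharded = tf.data.experimental.from_variant(sharded_variant, dataset.element_spec)
print(list(sharded.as_numpy_iterator()))  # e.g. [0, 2, 4, 6]
```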
+ + +
+ +# tf.raw_ops.AvgPool + + + + + + + + + +Performs average pooling on the input. + + + + + + + + + +Each entry in `output` is the mean of the corresponding size `ksize` +window in `value`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +4-D with shape `[batch, height, width, channels]`. +
+`ksize` + +A list of `ints` that has length `>= 4`. +The size of the sliding window for each dimension of `value`. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of `value`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `value`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AvgPool3D.md b/site/en/api_docs/python/tf/raw_ops/AvgPool3D.md new file mode 100644 index 00000000000..befb346f50d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AvgPool3D.md @@ -0,0 +1,116 @@ +description: Performs 3D average pooling on the input. + +
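A small sketch of the `tf.raw_ops.AvgPool` op documented above on a hand-made 4x4 single-channel image; the input values and the 2x2 window are illustrative.

```python
import tensorflow as tf

# A single 4x4 image with one channel, NHWC layout.
x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])

pooled = tf.raw_ops.AvgPool(
    value=x,
    ksize=[1, 2, 2, 1],
    strides=[1, 2, 2, 1],
    padding="VALID",
    data_format="NHWC")

# Each output entry is the mean of the corresponding 2x2 window.
print(pooled[0, :, :, 0])
# [[ 2.5  4.5]
#  [10.5 12.5]]
```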
+ + +
+ +# tf.raw_ops.AvgPool3D + + + + + + + + + +Performs 3D average pooling on the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +Shape `[batch, depth, rows, cols, channels]` tensor to pool over. +
+`ksize` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The size of the window for each dimension of +the input tensor. Must have `ksize[0] = ksize[4] = 1`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AvgPool3DGrad.md b/site/en/api_docs/python/tf/raw_ops/AvgPool3DGrad.md new file mode 100644 index 00000000000..dc01709bbe2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AvgPool3DGrad.md @@ -0,0 +1,124 @@ +description: Computes gradients of average pooling function. + +
+ + +
+ +# tf.raw_ops.AvgPool3DGrad + + + + + + + + + +Computes gradients of average pooling function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`orig_input_shape` + +A `Tensor` of type `int32`. +The original input dimensions. +
+`grad` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +Output backprop of shape `[batch, depth, rows, cols, channels]`. +
+`ksize` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The size of the window for each dimension of +the input tensor. Must have `ksize[0] = ksize[4] = 1`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `grad`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/AvgPoolGrad.md b/site/en/api_docs/python/tf/raw_ops/AvgPoolGrad.md new file mode 100644 index 00000000000..58b8def0a93 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/AvgPoolGrad.md @@ -0,0 +1,123 @@ +description: Computes gradients of the average pooling function. + +
+ + +
+ +# tf.raw_ops.AvgPoolGrad + + + + + + + + + +Computes gradients of the average pooling function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`orig_input_shape` + +A `Tensor` of type `int32`. +1-D. Shape of the original input to `avg_pool`. +
+`grad` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. +the output of `avg_pool`. +
+`ksize` + +A list of `ints` that has length `>= 4`. +The size of the sliding window for each dimension of the input. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the input. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `grad`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Barrier.md b/site/en/api_docs/python/tf/raw_ops/Barrier.md new file mode 100644 index 00000000000..0bf012c1e9b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Barrier.md @@ -0,0 +1,123 @@ +description: Defines a barrier that persists across different graph executions. + +
+ + +
+ +# tf.raw_ops.Barrier + + + + + + + + + +Defines a barrier that persists across different graph executions. + + + + + + + + + +A barrier represents a key-value map, where each key is a string, and +each value is a tuple of tensors. + +At runtime, the barrier contains 'complete' and 'incomplete' +elements. A complete element has defined tensors for all components of +its value tuple, and may be accessed using BarrierTakeMany. An +incomplete element has some undefined components in its value tuple, +and may be updated using BarrierInsertMany. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a value. +
+`shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +The shape of each component in a value. Each shape must be 1 in the +first dimension. The length of this attr must be the same as the length of +component_types. +
+`capacity` + +An optional `int`. Defaults to `-1`. +The capacity of the barrier. The default capacity is MAX_INT32, +which is the largest capacity of the underlying queue. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this barrier is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this barrier will be shared under the given name +across multiple sessions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BarrierClose.md b/site/en/api_docs/python/tf/raw_ops/BarrierClose.md new file mode 100644 index 00000000000..80eb1136fd0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BarrierClose.md @@ -0,0 +1,93 @@ +description: Closes the given barrier. + +
+ + +
+ +# tf.raw_ops.BarrierClose + + + + + + + + + +Closes the given barrier. + + + + + + + + + +This operation signals that no more new elements will be inserted in the +given barrier. Subsequent InsertMany that try to introduce a new key will fail. +Subsequent InsertMany operations that just add missing components to already +existing elements will continue to succeed. Subsequent TakeMany operations will +continue to succeed if sufficient completed elements remain in the barrier. +Subsequent TakeMany operations that would block will fail immediately. + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a barrier. +
+`cancel_pending_enqueues` + +An optional `bool`. Defaults to `False`. +If true, all pending enqueue requests that are +blocked on the barrier's queue will be canceled. InsertMany will fail, even +if no new key is introduced. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BarrierIncompleteSize.md b/site/en/api_docs/python/tf/raw_ops/BarrierIncompleteSize.md new file mode 100644 index 00000000000..dd076b2ed5a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BarrierIncompleteSize.md @@ -0,0 +1,77 @@ +description: Computes the number of incomplete elements in the given barrier. + +
+ + +
+ +# tf.raw_ops.BarrierIncompleteSize + + + + + + + + + +Computes the number of incomplete elements in the given barrier. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a barrier. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BarrierInsertMany.md b/site/en/api_docs/python/tf/raw_ops/BarrierInsertMany.md new file mode 100644 index 00000000000..1e47aa99a13 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BarrierInsertMany.md @@ -0,0 +1,106 @@ +description: For each key, assigns the respective value to the specified component. + +
+ + +
+ +# tf.raw_ops.BarrierInsertMany + + + + + + + + + +For each key, assigns the respective value to the specified component. + + + + + + + + + +If a key is not found in the barrier, this operation will create a new +incomplete element. If a key is found in the barrier, and the element +already has a value at component_index, this operation will fail with +INVALID_ARGUMENT, and leave the barrier in an undefined state. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a barrier. +
+`keys` + +A `Tensor` of type `string`. +A one-dimensional tensor of keys, with length n. +
+`values` + +A `Tensor`. +An any-dimensional tensor of values, which are associated with the +respective keys. The 0th dimension must have length n. +
+`component_index` + +An `int`. +The component of the barrier elements that is being assigned. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BarrierReadySize.md b/site/en/api_docs/python/tf/raw_ops/BarrierReadySize.md new file mode 100644 index 00000000000..f5e627bab96 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BarrierReadySize.md @@ -0,0 +1,77 @@ +description: Computes the number of complete elements in the given barrier. + +
+ + +
+ +# tf.raw_ops.BarrierReadySize + + + + + + + + + +Computes the number of complete elements in the given barrier. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a barrier. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BarrierTakeMany.md b/site/en/api_docs/python/tf/raw_ops/BarrierTakeMany.md new file mode 100644 index 00000000000..c5185397b55 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BarrierTakeMany.md @@ -0,0 +1,149 @@ +description: Takes the given number of completed elements from a barrier. + +
+ + +
+ +# tf.raw_ops.BarrierTakeMany + + + + + + + + + +Takes the given number of completed elements from a barrier. + + + + + + + + + +This operation concatenates completed-element component tensors along +the 0th dimension to make a single component tensor. + +Elements come out of the barrier when they are complete, and in the order +in which they were placed into the barrier. The indices output provides +information about the batch in which each element was originally inserted +into the barrier. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a barrier. +
+`num_elements` + +A `Tensor` of type `int32`. +A single-element tensor containing the number of elements to +take. +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a value. +
+`allow_small_batch`
+
+An optional `bool`. Defaults to `False`.
+Allows returning fewer than num_elements items if the barrier is
+already closed.
+
+`wait_for_incomplete` + +An optional `bool`. Defaults to `False`. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue is empty, this operation will block for up to +timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (indices, keys, values). +
+`indices` + +A `Tensor` of type `int64`. +
+`keys` + +A `Tensor` of type `string`. +
+`values` + +A list of `Tensor` objects of type `component_types`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Batch.md b/site/en/api_docs/python/tf/raw_ops/Batch.md new file mode 100644 index 00000000000..126026d9c32 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Batch.md @@ -0,0 +1,199 @@ +description: Batches all input tensors nondeterministically. + +
+ + +
+
+# tf.raw_ops.Batch
+ + + + + + + + +
+Batches all input tensors nondeterministically.
+ + + + + + + + +
+When many instances of this Op are being run concurrently with the same
+container/shared_name in the same device, some will output zero-shaped Tensors
+and others will output Tensors of size up to max_batch_size.
+
+All Tensors in in_tensors are batched together (so, for example, labels and
+features should be batched with a single instance of this operation).
+
+Each invocation of batch emits an `id` scalar which will be used to identify
+this particular invocation when doing unbatch or its gradient.
+
+Each op which emits a non-empty batch will also emit a non-empty batch_index
+Tensor, which is a [K, 3] matrix where each row contains the invocation's id,
+start, and length of elements of each set of Tensors present in batched_tensors.
+
+Batched tensors are concatenated along the first dimension, and all tensors in
+in_tensors must have the first dimension of the same size.
+
+in_tensors: The tensors to be batched.
+num_batch_threads: Number of scheduling threads for processing batches of work.
+  Determines the number of batches processed in parallel.
+max_batch_size: Batch sizes will never be bigger than this.
+batch_timeout_micros: Maximum number of microseconds to wait before outputting
+  an incomplete batch.
+allowed_batch_sizes: Optional list of allowed batch sizes. If left empty, does
+  nothing. Otherwise, supplies a list of batch sizes, causing the op to pad
+  batches up to one of those sizes. The entries must increase monotonically, and
+  the final entry must equal max_batch_size.
+grad_timeout_micros: The timeout to use for the gradient. See Unbatch.
+batched_tensors: Either empty tensors or a batch of concatenated Tensors.
+batch_index: If out_tensors is non-empty, has information to invert it.
+container: Controls the scope of sharing of this batch.
+id: always contains a scalar with a unique ID for this invocation of Batch.
+shared_name: Concurrently running instances of batch in the same device with the
+  same container and shared_name will batch their elements together. If left
+  empty, the op name will be used as the shared name.
+T: the types of tensors to be batched.
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`in_tensors` + +A list of `Tensor` objects. +
+`num_batch_threads` + +An `int`. +
+`max_batch_size` + +An `int`. +
+`batch_timeout_micros` + +An `int`. +
+`grad_timeout_micros` + +An `int`. +
+`max_enqueued_batches` + +An optional `int`. Defaults to `10`. +
+`allowed_batch_sizes` + +An optional list of `ints`. Defaults to `[]`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`batching_queue` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (batched_tensors, batch_index, id). +
+`batched_tensors` + +A list of `Tensor` objects. Has the same type as `in_tensors`. +
+`batch_index` + +A `Tensor` of type `int64`. +
+`id` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchCholesky.md b/site/en/api_docs/python/tf/raw_ops/BatchCholesky.md new file mode 100644 index 00000000000..a75591b2fc4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchCholesky.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchCholesky + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchCholeskyGrad.md b/site/en/api_docs/python/tf/raw_ops/BatchCholeskyGrad.md new file mode 100644 index 00000000000..551e1aac993 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchCholeskyGrad.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.BatchCholeskyGrad + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`l` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`grad` + +A `Tensor`. Must have the same type as `l`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `l`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchDataset.md b/site/en/api_docs/python/tf/raw_ops/BatchDataset.md new file mode 100644 index 00000000000..3583b6b65b0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchDataset.md @@ -0,0 +1,100 @@ +description: Creates a dataset that batches batch_size elements from input_dataset. + +
+ + +
+ +# tf.raw_ops.BatchDataset + + + + + + + + + +Creates a dataset that batches `batch_size` elements from `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`batch_size` + +A `Tensor` of type `int64`. +A scalar representing the number of elements to accumulate in a +batch. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchDatasetV2.md b/site/en/api_docs/python/tf/raw_ops/BatchDatasetV2.md new file mode 100644 index 00000000000..4eb3035365f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchDatasetV2.md @@ -0,0 +1,116 @@ +description: Creates a dataset that batches batch_size elements from input_dataset. + +
+ + +
+ +# tf.raw_ops.BatchDatasetV2 + + + + + + + + + +Creates a dataset that batches `batch_size` elements from `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`batch_size` + +A `Tensor` of type `int64`. +A scalar representing the number of elements to accumulate in a batch. +
+`drop_remainder` + +A `Tensor` of type `bool`. +A scalar representing whether the last batch should be dropped in case its size +is smaller than desired. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`parallel_copy` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchFFT.md b/site/en/api_docs/python/tf/raw_ops/BatchFFT.md new file mode 100644 index 00000000000..b7207ffd2f9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchFFT.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchFFT + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `complex64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `complex64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchFFT2D.md b/site/en/api_docs/python/tf/raw_ops/BatchFFT2D.md new file mode 100644 index 00000000000..b750933d94d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchFFT2D.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchFFT2D + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `complex64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `complex64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchFFT3D.md b/site/en/api_docs/python/tf/raw_ops/BatchFFT3D.md new file mode 100644 index 00000000000..44beb395636 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchFFT3D.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchFFT3D + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `complex64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `complex64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchFunction.md b/site/en/api_docs/python/tf/raw_ops/BatchFunction.md new file mode 100644 index 00000000000..f95b5e7e469 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchFunction.md @@ -0,0 +1,208 @@ +description: Batches all the input tensors to the computation done by the function. + +
+ + +
+
+# tf.raw_ops.BatchFunction
+ + + + + + + + +
+Batches all the input tensors to the computation done by the function.
+ + + + + + + + +
+So, for example, in the following code
+
+  ```python
+  from tensorflow.python.ops import gen_batch_ops
+
+  # This input will be captured.
+  y = tf.placeholder_with_default(1.0, shape=[])
+
+  # The batched input; square so that tf.matmul(a, a) is defined.
+  a = tf.placeholder(tf.float32, shape=[2, 2])
+
+  @tf.Defun(tf.float32)
+  def computation(a):
+    return tf.matmul(a, a) + y
+
+  b = gen_batch_ops.batch_function(
+          f=computation,
+          in_tensors=[a],
+          captured_tensors=computation.captured_inputs,
+          Tout=[o.type for o in computation.definition.signature.output_arg],
+          num_batch_threads=1,
+          max_batch_size=10,
+          batch_timeout_micros=100000,  # 100ms
+          allowed_batch_sizes=[3, 10],
+          batching_queue="")
+  ```
+
+If more than one session.run call is simultaneously trying to compute `b`,
+the values of `a` will be gathered, non-deterministically concatenated
+along the first axis, and only one thread will run the computation.
+
+Assumes that all arguments of the function are Tensors which will be batched
+along their first dimension.
+
+Arguments that are captured are not batched. The session.run call which does
+the concatenation will use the values of the captured tensors available to it.
+Therefore, typical uses of captured tensors should involve values which remain
+unchanged across session.run calls. Inference is a good example of this.
+
+SparseTensor is not supported. The return value of the decorated function
+must be a Tensor or a list/tuple of Tensors.
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`in_tensors` + +A list of `Tensor` objects. The tensors to be batched. +
+`captured_tensors` + +A list of `Tensor` objects. +The tensors which are captured in the function, and don't need +to be batched. +
+`f` + +A function decorated with @Defun. +
+`num_batch_threads` + +An `int`. +Number of scheduling threads for processing batches of work. +Determines the number of batches processed in parallel. +
+`max_batch_size` + +An `int`. Batch sizes will never be bigger than this. +
+`batch_timeout_micros` + +An `int`. +Maximum number of microseconds to wait before outputting +an incomplete batch. +
+`Tout` + +A list of `tf.DTypes` that has length `>= 1`. +the types of the output tensors. +
+`max_enqueued_batches` + +An optional `int`. Defaults to `10`. +Maximum number of batches enqueued. Default: 10. +
+`allowed_batch_sizes` + +An optional list of `ints`. Defaults to `[]`. +Optional list of allowed batch sizes. If left empty, does +nothing. Otherwise, supplies a list of batch sizes, causing the op to pad +batches up to one of those sizes. The entries must increase monotonically, and +the final entry must equal max_batch_size. +
+`container` + +An optional `string`. Defaults to `""`. +Controls the scope of sharing of this batch. +
+`shared_name` + +An optional `string`. Defaults to `""`. +Concurrently running instances of batch in the same device with the +same container and shared_name will batch their elements together. If left +empty, the op name will be used as the shared name. +
+`batching_queue` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchIFFT.md b/site/en/api_docs/python/tf/raw_ops/BatchIFFT.md new file mode 100644 index 00000000000..672a4ce24dc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchIFFT.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchIFFT + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `complex64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `complex64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchIFFT2D.md b/site/en/api_docs/python/tf/raw_ops/BatchIFFT2D.md new file mode 100644 index 00000000000..d28a0389168 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchIFFT2D.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchIFFT2D + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `complex64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `complex64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchIFFT3D.md b/site/en/api_docs/python/tf/raw_ops/BatchIFFT3D.md new file mode 100644 index 00000000000..d7bbd66d239 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchIFFT3D.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchIFFT3D + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `complex64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `complex64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatMul.md b/site/en/api_docs/python/tf/raw_ops/BatchMatMul.md new file mode 100644 index 00000000000..85601264532 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatMul.md @@ -0,0 +1,123 @@ +description: Multiplies slices of two tensors in batches. + +
+ + +
+ +# tf.raw_ops.BatchMatMul + + + + + + + + + +Multiplies slices of two tensors in batches. + + + + + + + + + +Multiplies all slices of `Tensor` `x` and `y` (each slice can be +viewed as an element of a batch), and arranges the individual results +in a single output tensor of the same batch size. Each of the +individual slices can optionally be adjointed (to adjoint a matrix +means to transpose and conjugate it) before multiplication by setting +the `adj_x` or `adj_y` flag to `True`, which are by default `False`. + +The input tensors `x` and `y` are 2-D or higher with shape `[..., r_x, c_x]` +and `[..., r_y, c_y]`. + +The output tensor is 2-D or higher with shape `[..., r_o, c_o]`, where: + + r_o = c_x if adj_x else r_x + c_o = r_y if adj_y else c_y + +#### It is computed as: + + +output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :]) + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +2-D or higher with shape `[..., r_x, c_x]`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +2-D or higher with shape `[..., r_y, c_y]`. +
+`adj_x` + +An optional `bool`. Defaults to `False`. +If `True`, adjoint the slices of `x`. Defaults to `False`. +
+`adj_y` + +An optional `bool`. Defaults to `False`. +If `True`, adjoint the slices of `y`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatMulV2.md b/site/en/api_docs/python/tf/raw_ops/BatchMatMulV2.md new file mode 100644 index 00000000000..b4cf57daddf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatMulV2.md @@ -0,0 +1,126 @@ +description: Multiplies slices of two tensors in batches. + +
+ + +
+ +# tf.raw_ops.BatchMatMulV2 + + + + + + + + + +Multiplies slices of two tensors in batches. + + + + + + + + + +Multiplies all slices of `Tensor` `x` and `y` (each slice can be +viewed as an element of a batch), and arranges the individual results +in a single output tensor of the same batch size. Each of the +individual slices can optionally be adjointed (to adjoint a matrix +means to transpose and conjugate it) before multiplication by setting +the `adj_x` or `adj_y` flag to `True`, which are by default `False`. + +The input tensors `x` and `y` are 2-D or higher with shape `[..., r_x, c_x]` +and `[..., r_y, c_y]`. + +The output tensor is 2-D or higher with shape `[..., r_o, c_o]`, where: + + r_o = c_x if adj_x else r_x + c_o = r_y if adj_y else c_y + +#### It is computed as: + + +output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :]) + + +*NOTE*: `BatchMatMulV2` supports broadcasting in the batch dimensions. More +about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +2-D or higher with shape `[..., r_x, c_x]`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +2-D or higher with shape `[..., r_y, c_y]`. +
+`adj_x` + +An optional `bool`. Defaults to `False`. +If `True`, adjoint the slices of `x`. Defaults to `False`. +
+`adj_y` + +An optional `bool`. Defaults to `False`. +If `True`, adjoint the slices of `y`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatrixBandPart.md b/site/en/api_docs/python/tf/raw_ops/BatchMatrixBandPart.md new file mode 100644 index 00000000000..6591c34c1d1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatrixBandPart.md @@ -0,0 +1,89 @@ +
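A short sketch of the batch broadcasting described for the `tf.raw_ops.BatchMatMulV2` op above; the shapes are arbitrary, and the comparison against `tf.linalg.matmul` is only a sanity check, not part of the original page.

```python
import tensorflow as tf

# x carries a batch of 4 matrices; y carries a single matrix that is broadcast.
x = tf.random.normal([4, 2, 3])
y = tf.random.normal([1, 5, 3])

# adj_y=True multiplies by the adjoint (here just the transpose) of y,
# so the inner dimensions 3 and 3 line up.
z = tf.raw_ops.BatchMatMulV2(x=x, y=y, adj_x=False, adj_y=True)
print(z.shape)  # (4, 2, 5)

# The public wrapper computes the same product.
same = tf.linalg.matmul(x, y, adjoint_b=True)
print(float(tf.reduce_max(tf.abs(z - same))))  # ~0.0
```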
+ + +
+ +# tf.raw_ops.BatchMatrixBandPart + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`num_lower` + +A `Tensor` of type `int64`. +
+`num_upper` + +A `Tensor` of type `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatrixDeterminant.md b/site/en/api_docs/python/tf/raw_ops/BatchMatrixDeterminant.md new file mode 100644 index 00000000000..40d2b578115 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatrixDeterminant.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchMatrixDeterminant + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatrixDiag.md b/site/en/api_docs/python/tf/raw_ops/BatchMatrixDiag.md new file mode 100644 index 00000000000..47f5f968320 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatrixDiag.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchMatrixDiag + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`diagonal` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `diagonal`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatrixDiagPart.md b/site/en/api_docs/python/tf/raw_ops/BatchMatrixDiagPart.md new file mode 100644 index 00000000000..b75b4fc7d52 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatrixDiagPart.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchMatrixDiagPart + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatrixInverse.md b/site/en/api_docs/python/tf/raw_ops/BatchMatrixInverse.md new file mode 100644 index 00000000000..d3272109efa --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatrixInverse.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.BatchMatrixInverse + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`. +
+`adjoint` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatrixSetDiag.md b/site/en/api_docs/python/tf/raw_ops/BatchMatrixSetDiag.md new file mode 100644 index 00000000000..de9b715fd5b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatrixSetDiag.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.BatchMatrixSetDiag + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`diagonal` + +A `Tensor`. Must have the same type as `input`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatrixSolve.md b/site/en/api_docs/python/tf/raw_ops/BatchMatrixSolve.md new file mode 100644 index 00000000000..ec781df6fc2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatrixSolve.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.BatchMatrixSolve + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`matrix` + +A `Tensor`. Must be one of the following types: `float64`, `float32`. +
+`rhs` + +A `Tensor`. Must have the same type as `matrix`. +
+`adjoint` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `matrix`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatrixSolveLs.md b/site/en/api_docs/python/tf/raw_ops/BatchMatrixSolveLs.md new file mode 100644 index 00000000000..0bdecb42ab9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatrixSolveLs.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.BatchMatrixSolveLs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`matrix` + +A `Tensor`. Must be one of the following types: `float64`, `float32`. +
+`rhs` + +A `Tensor`. Must have the same type as `matrix`. +
+`l2_regularizer` + +A `Tensor` of type `float64`. +
+`fast` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `matrix`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchMatrixTriangularSolve.md b/site/en/api_docs/python/tf/raw_ops/BatchMatrixTriangularSolve.md new file mode 100644 index 00000000000..5fd9625d7d8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchMatrixTriangularSolve.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.BatchMatrixTriangularSolve + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`matrix` + +A `Tensor`. Must be one of the following types: `float64`, `float32`. +
+`rhs` + +A `Tensor`. Must have the same type as `matrix`. +
+`lower` + +An optional `bool`. Defaults to `True`. +
+`adjoint` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `matrix`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchNormWithGlobalNormalization.md b/site/en/api_docs/python/tf/raw_ops/BatchNormWithGlobalNormalization.md new file mode 100644 index 00000000000..58e58f032af --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchNormWithGlobalNormalization.md @@ -0,0 +1,134 @@ +description: Batch normalization. + +
+ + +
+ +# tf.raw_ops.BatchNormWithGlobalNormalization + + + + + + + + + +Batch normalization. + + + + + + + + + +This op is deprecated. Prefer tf.nn.batch_normalization. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`t` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A 4D input Tensor. +
+`m` + +A `Tensor`. Must have the same type as `t`. +A 1D mean Tensor with size matching the last dimension of t. +This is the first output from tf.nn.moments, +or a saved moving average thereof. +
+`v` + +A `Tensor`. Must have the same type as `t`. +A 1D variance Tensor with size matching the last dimension of t. +This is the second output from tf.nn.moments, +or a saved moving average thereof. +
+`beta` + +A `Tensor`. Must have the same type as `t`. +A 1D beta Tensor with size matching the last dimension of t. +An offset to be added to the normalized tensor. +
+`gamma` + +A `Tensor`. Must have the same type as `t`. +A 1D gamma Tensor with size matching the last dimension of t. +If "scale_after_normalization" is true, this tensor will be multiplied +with the normalized tensor. +
+`variance_epsilon` + +A `float`. A small float number to avoid dividing by 0. +
+`scale_after_normalization`
+
+A `bool`.
+A bool indicating whether the resulting tensor
+needs to be multiplied by gamma.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `t`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchNormWithGlobalNormalizationGrad.md b/site/en/api_docs/python/tf/raw_ops/BatchNormWithGlobalNormalizationGrad.md new file mode 100644 index 00000000000..4165c1d4cfe --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchNormWithGlobalNormalizationGrad.md @@ -0,0 +1,167 @@ +description: Gradients for batch normalization. + +
+ + +
+ +# tf.raw_ops.BatchNormWithGlobalNormalizationGrad + + + + + + + + + +Gradients for batch normalization. + + + + + + + + + +This op is deprecated. See tf.nn.batch_normalization. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`t` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A 4D input Tensor. +
+`m` + +A `Tensor`. Must have the same type as `t`. +A 1D mean Tensor with size matching the last dimension of t. +This is the first output from tf.nn.moments, +or a saved moving average thereof. +
+`v` + +A `Tensor`. Must have the same type as `t`. +A 1D variance Tensor with size matching the last dimension of t. +This is the second output from tf.nn.moments, +or a saved moving average thereof. +
+`gamma` + +A `Tensor`. Must have the same type as `t`. +A 1D gamma Tensor with size matching the last dimension of t. +If "scale_after_normalization" is true, this Tensor will be multiplied +with the normalized Tensor. +
+`backprop` + +A `Tensor`. Must have the same type as `t`. 4D backprop Tensor. +
+`variance_epsilon` + +A `float`. A small float number to avoid dividing by 0. +
+`scale_after_normalization`
+
+A `bool`.
+A bool indicating whether the resulting tensor
+needs to be multiplied by gamma.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (dx, dm, dv, db, dg). +
+`dx` + +A `Tensor`. Has the same type as `t`. +
+`dm` + +A `Tensor`. Has the same type as `t`. +
+`dv` + +A `Tensor`. Has the same type as `t`. +
+`db` + +A `Tensor`. Has the same type as `t`. +
+`dg` + +A `Tensor`. Has the same type as `t`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchSelfAdjointEig.md b/site/en/api_docs/python/tf/raw_ops/BatchSelfAdjointEig.md new file mode 100644 index 00000000000..a7c391e4ae5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchSelfAdjointEig.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.BatchSelfAdjointEig + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchSelfAdjointEigV2.md b/site/en/api_docs/python/tf/raw_ops/BatchSelfAdjointEigV2.md new file mode 100644 index 00000000000..91b380e3573 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchSelfAdjointEigV2.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.BatchSelfAdjointEigV2 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`. +
+`compute_v` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (e, v). +
+`e` + +A `Tensor`. Has the same type as `input`. +
+`v` + +A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchSvd.md b/site/en/api_docs/python/tf/raw_ops/BatchSvd.md new file mode 100644 index 00000000000..dbe7b41aafd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchSvd.md @@ -0,0 +1,110 @@ +
+ + +
+ +# tf.raw_ops.BatchSvd + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`. +
+`compute_uv` + +An optional `bool`. Defaults to `True`. +
+`full_matrices` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (s, u, v). +
+`s` + +A `Tensor`. Has the same type as `input`. +
+`u` + +A `Tensor`. Has the same type as `input`. +
+`v` + +A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchToSpace.md b/site/en/api_docs/python/tf/raw_ops/BatchToSpace.md new file mode 100644 index 00000000000..9938cf5419c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchToSpace.md @@ -0,0 +1,106 @@ +description: BatchToSpace for 4-D tensors of type T. + +
+ + +
+ +# tf.raw_ops.BatchToSpace + + + + + + + + + +BatchToSpace for 4-D tensors of type T. + + + + + + + + + +This is a legacy version of the more general BatchToSpaceND. + +Rearranges (permutes) data from batch into blocks of spatial data, followed by +cropping. This is the reverse transformation of SpaceToBatch. More specifically, +this op outputs a copy of the input tensor where values from the `batch` +dimension are moved in spatial blocks to the `height` and `width` dimensions, +followed by cropping along the `height` and `width` dimensions. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. 4-D tensor with shape +`[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, +depth]`. Note that the batch size of the input tensor must be divisible by +`block_size * block_size`. +
+`crops` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D tensor of non-negative integers with shape `[2, 2]`. It specifies +how many elements to crop from the intermediate result across the spatial +dimensions as follows: + +crops = [[crop_top, crop_bottom], [crop_left, crop_right]] +
+`block_size` + +An `int` that is `>= 2`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BatchToSpaceND.md b/site/en/api_docs/python/tf/raw_ops/BatchToSpaceND.md new file mode 100644 index 00000000000..edef8d1d826 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BatchToSpaceND.md @@ -0,0 +1,209 @@ +description: BatchToSpace for N-D tensors of type T. + +
+ + +
+ +# tf.raw_ops.BatchToSpaceND + + + + + + + + + +BatchToSpace for N-D tensors of type T. + + + + + + + + + +This operation reshapes the "batch" dimension 0 into `M + 1` dimensions of shape +`block_shape + [batch]`, interleaves these blocks back into the grid defined by +the spatial dimensions `[1, ..., M]`, to obtain a result with the same rank as +the input. The spatial dimensions of this intermediate result are then +optionally cropped according to `crops` to produce the output. This is the +reverse of SpaceToBatch. See below for a precise description. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`, +where spatial_shape has M dimensions. +
+`block_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D with shape `[M]`, all values must be >= 1. +
+`crops` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D with shape `[M, 2]`, all values must be >= 0. +`crops[i] = [crop_start, crop_end]` specifies the amount to crop from input +dimension `i + 1`, which corresponds to spatial dimension `i`. It is +required that +`crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`. + +This operation is equivalent to the following steps: + +1. Reshape `input` to `reshaped` of shape: +[block_shape[0], ..., block_shape[M-1], +batch / prod(block_shape), +input_shape[1], ..., input_shape[N-1]] + +2. Permute dimensions of `reshaped` to produce `permuted` of shape +[batch / prod(block_shape), + +input_shape[1], block_shape[0], +..., +input_shape[M], block_shape[M-1], + +input_shape[M+1], ..., input_shape[N-1]] + +3. Reshape `permuted` to produce `reshaped_permuted` of shape +[batch / prod(block_shape), + +input_shape[1] * block_shape[0], +..., +input_shape[M] * block_shape[M-1], + +input_shape[M+1], +..., +input_shape[N-1]] + +4. Crop the start and end of dimensions `[1, ..., M]` of +`reshaped_permuted` according to `crops` to produce the output of shape: +[batch / prod(block_shape), + +input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1], +..., +input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1], + +input_shape[M+1], ..., input_shape[N-1]] + +Some examples: + +(1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and +`crops = [[0, 0], [0, 0]]`: + +``` +[[[[1]]], [[[2]]], [[[3]]], [[[4]]]] +``` + +The output tensor has shape `[1, 2, 2, 1]` and value: + +``` +x = [[[[1], [2]], [[3], [4]]]] +``` + +(2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and +`crops = [[0, 0], [0, 0]]`: + +``` +[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] +``` + +The output tensor has shape `[1, 2, 2, 3]` and value: + +``` +x = [[[[1, 2, 3], [4, 5, 6]], +[[7, 8, 9], [10, 11, 12]]]] +``` + +(3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and +`crops = [[0, 0], [0, 0]]`: + +``` +x = [[[[1], [3]], [[9], [11]]], +[[[2], [4]], [[10], [12]]], +[[[5], [7]], [[13], [15]]], +[[[6], [8]], [[14], [16]]]] +``` + +The output tensor has shape `[1, 4, 4, 1]` and value: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]], +[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +(4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and +`crops = [[0, 0], [2, 0]]`: + +``` +x = [[[[0], [1], [3]]], [[[0], [9], [11]]], +[[[0], [2], [4]]], [[[0], [10], [12]]], +[[[0], [5], [7]]], [[[0], [13], [15]]], +[[[0], [6], [8]]], [[[0], [14], [16]]]] +``` + +The output tensor has shape `[2, 2, 4, 1]` and value: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]]], +[[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BesselI0e.md b/site/en/api_docs/python/tf/raw_ops/BesselI0e.md new file mode 100644 index 00000000000..0ae0c1290c5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BesselI0e.md @@ -0,0 +1,81 @@ +description: Computes the Bessel i0e function of x element-wise. + +
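A runnable version of example (1) from the `tf.raw_ops.BatchToSpaceND` description above; only the printing is new.

```python
import tensorflow as tf

# Input of shape [4, 1, 1, 1], block_shape = [2, 2], no cropping.
x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])

y = tf.raw_ops.BatchToSpaceND(
    input=x,
    block_shape=[2, 2],
    crops=[[0, 0], [0, 0]])

print(y.shape)       # (1, 2, 2, 1)
print(y[0, :, :, 0])
# [[1 2]
#  [3 4]]
```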
+ + +
+ +# tf.raw_ops.BesselI0e + + + + + + + + + +Computes the Bessel i0e function of `x` element-wise. + + + + + + + + + +Exponentially scaled modified Bessel function of order 0 defined as +`bessel_i0e(x) = exp(-abs(x)) bessel_i0(x)`. + +This function is faster and numerically stabler than `bessel_i0(x)`. + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BesselI1e.md b/site/en/api_docs/python/tf/raw_ops/BesselI1e.md new file mode 100644 index 00000000000..3b23a60c7ed --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BesselI1e.md @@ -0,0 +1,81 @@ +description: Computes the Bessel i1e function of x element-wise. + +
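A small sketch of the `tf.raw_ops.BesselI0e` op documented above; the sample points are arbitrary, and the check assumes the public `tf.math.bessel_i0e` wrapper is backed by this op.

```python
import tensorflow as tf

x = tf.constant([-1.0, 0.5, 2.0, 10.0])

i0e = tf.raw_ops.BesselI0e(x=x)

# Should agree with the public wrapper to within floating-point error.
print(float(tf.reduce_max(tf.abs(i0e - tf.math.bessel_i0e(x)))))  # ~0.0

# Thanks to the exp(-|x|) scaling the values stay small even for large |x|,
# where the unscaled bessel_i0 would overflow float32.
print(i0e)
```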
+ + +
+ +# tf.raw_ops.BesselI1e + + + + + + + + + +Computes the Bessel i1e function of `x` element-wise. + + + + + + + + + +Exponentially scaled modified Bessel function of order 0 defined as +`bessel_i1e(x) = exp(-abs(x)) bessel_i1(x)`. + +This function is faster and numerically stabler than `bessel_i1(x)`. + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Betainc.md b/site/en/api_docs/python/tf/raw_ops/Betainc.md new file mode 100644 index 00000000000..48a51c0d778 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Betainc.md @@ -0,0 +1,104 @@ +description: Compute the regularized incomplete beta integral \\(I_x(a, b)\\). + +
+ + +
+ +# tf.raw_ops.Betainc + + + + + + + + + +Compute the regularized incomplete beta integral \\(I_x(a, b)\\). + + + + + + + + + +The regularized incomplete beta integral is defined as: + + +\\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\\) + +where + + +\\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt\\) + + +is the incomplete beta function and \\(B(a, b)\\) is the *complete* +beta function. + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`b` + +A `Tensor`. Must have the same type as `a`. +
+`x` + +A `Tensor`. Must have the same type as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BiasAdd.md b/site/en/api_docs/python/tf/raw_ops/BiasAdd.md new file mode 100644 index 00000000000..b5076c23930 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BiasAdd.md @@ -0,0 +1,102 @@ +description: Adds bias to value. + +
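A short sketch of the `tf.raw_ops.Betainc` op documented above; the inputs are arbitrary, and the closed form used as a check, \\(I_x(1/2, 1/2) = \frac{2}{\pi}\arcsin\sqrt{x}\\), applies only to the entries with `a = b = 0.5`.

```python
import math
import tensorflow as tf

a = tf.constant([0.5, 0.5, 2.0])
b = tf.constant([0.5, 0.5, 3.0])
x = tf.constant([0.25, 0.5, 0.4])

out = tf.raw_ops.Betainc(a=a, b=b, x=x)
print(out)

# Closed-form values for the first two entries (a = b = 0.5):
print(2.0 / math.pi * math.asin(math.sqrt(0.25)))  # ~0.3333
print(2.0 / math.pi * math.asin(math.sqrt(0.5)))   # 0.5
```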
+ + +
+ +# tf.raw_ops.BiasAdd + + + + + + + + + +Adds `bias` to `value`. + + + + + + + + + +This is a special case of tf.add where `bias` is restricted to be 1-D. +Broadcasting is supported, so `value` may have any number of dimensions. + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Any number of dimensions. +
+`bias` + +A `Tensor`. Must have the same type as `value`. +1-D with size the last dimension of `value`. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the bias tensor will be added to the last dimension +of the value tensor. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +The tensor will be added to "in_channels", the third-to-the-last +dimension. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `value`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BiasAddGrad.md b/site/en/api_docs/python/tf/raw_ops/BiasAddGrad.md new file mode 100644 index 00000000000..b302a950e59 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BiasAddGrad.md @@ -0,0 +1,95 @@ +description: The backward operation for "BiasAdd" on the "bias" tensor. + +
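A minimal sketch of the `tf.raw_ops.BiasAdd` op documented above with the default `NHWC` layout; the zero input and the bias values are placeholders.

```python
import tensorflow as tf

# NHWC input: batch of 2, 2x2 spatial, 3 channels.
value = tf.zeros([2, 2, 2, 3])
bias = tf.constant([0.1, 0.2, 0.3])

out = tf.raw_ops.BiasAdd(value=value, bias=bias, data_format="NHWC")

# With NHWC the bias is broadcast over the last (channel) dimension.
print(out[0, 0, 0])  # [0.1 0.2 0.3]
```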
+ + +
+ +# tf.raw_ops.BiasAddGrad + + + + + + + + + +The backward operation for "BiasAdd" on the "bias" tensor. + + + + + + + + + +It accumulates all the values from out_backprop into the feature dimension. +For NHWC data format, the feature dimension is the last. For NCHW data format, +the feature dimension is the third-to-last. + + + + + + + + + + + + + + + + +
+`out_backprop` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Any number of dimensions. +
+`data_format`
+</td>
+<td>
+An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`.
+Specifies the data format of the input and output data. With the
+default format "NHWC", the bias tensor is added to the last dimension
+of the value tensor.
+Alternatively, with the "NCHW" format (data storage order
+[batch, in_channels, in_height, in_width]), the bias is added to the
+"in_channels" dimension, i.e. the third-to-last dimension.
+</td>
+</tr><tr>
+<td>
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `out_backprop`. +
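+
+#### Example:
+
+A minimal sketch; with the default `"NHWC"` format the op reduces
+`out_backprop` over every dimension except the last (feature) dimension.
+
+```python
+import tensorflow as tf
+
+out_backprop = tf.constant([[1., 2., 3.],
+                            [4., 5., 6.]])
+# Sum over the batch dimension, keeping the feature dimension.
+print(tf.raw_ops.BiasAddGrad(out_backprop=out_backprop))  # [5. 7. 9.]
+```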
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BiasAddV1.md b/site/en/api_docs/python/tf/raw_ops/BiasAddV1.md new file mode 100644 index 00000000000..215e95d7c16 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BiasAddV1.md @@ -0,0 +1,90 @@ +description: Adds bias to value. + +
+ + +
+ +# tf.raw_ops.BiasAddV1 + + + + + + + + + +Adds `bias` to `value`. + + + + + + + + + +This is a deprecated version of BiasAdd and will be soon removed. + +This is a special case of tf.add where `bias` is restricted to be 1-D. +Broadcasting is supported, so `value` may have any number of dimensions. + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Any number of dimensions. +
+`bias` + +A `Tensor`. Must have the same type as `value`. +1-D with size the last dimension of `value`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `value`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Bincount.md b/site/en/api_docs/python/tf/raw_ops/Bincount.md new file mode 100644 index 00000000000..4b99be5c232 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Bincount.md @@ -0,0 +1,101 @@ +description: Counts the number of occurrences of each value in an integer array. + +
+ + +
+ +# tf.raw_ops.Bincount + + + + + + + + + +Counts the number of occurrences of each value in an integer array. + + + + + + + + + +Outputs a vector with length `size` and the same dtype as `weights`. If +`weights` are empty, then index `i` stores the number of times the value `i` is +counted in `arr`. If `weights` are non-empty, then index `i` stores the sum of +the value in `weights` at each index where the corresponding value in `arr` is +`i`. + +Values in `arr` outside of the range [0, size) are ignored. + + + + + + + + + + + + + + + + + + + +
+`arr` + +A `Tensor` of type `int32`. int32 `Tensor`. +
+`size` + +A `Tensor` of type `int32`. non-negative int32 scalar `Tensor`. +
+`weights`
+</td>
+<td>
+A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
+An int32, int64, float32, or float64 `Tensor` with the same
+shape as `arr`, or a length-0 `Tensor`, in which case it acts as all weights
+equal to 1.
+</td>
+</tr><tr>
+<td>
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `weights`. +
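+
+#### Example:
+
+A minimal sketch showing both the unweighted case (a length-0 `weights`
+tensor) and the weighted case; the output dtype follows `weights`.
+
+```python
+import tensorflow as tf
+
+arr = tf.constant([1, 1, 2, 3, 3, 3], dtype=tf.int32)
+size = tf.constant(5, dtype=tf.int32)
+
+# Length-0 weights: plain counts, returned with the dtype of `weights`.
+counts = tf.raw_ops.Bincount(
+    arr=arr, size=size, weights=tf.constant([], dtype=tf.float32))
+print(counts)  # [0. 2. 1. 3. 0.]
+
+# Non-empty weights: per-value sums of the corresponding weights.
+weighted = tf.raw_ops.Bincount(
+    arr=arr, size=size,
+    weights=tf.constant([1., 1., 2., 0.5, 0.5, 0.5]))
+print(weighted)  # [0. 2. 2. 1.5 0.]
+```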
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Bitcast.md b/site/en/api_docs/python/tf/raw_ops/Bitcast.md new file mode 100644 index 00000000000..e22d29fee59 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Bitcast.md @@ -0,0 +1,146 @@ +description: Bitcasts a tensor from one type to another without copying data. + +
+ + +
+ +# tf.raw_ops.Bitcast + + + + + + + + + +Bitcasts a tensor from one type to another without copying data. + + + + + + + + + +Given a tensor `input`, this operation returns a tensor that has the same buffer +data as `input` with datatype `type`. + +If the input datatype `T` is larger than the output datatype `type` then the +shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)]. + +If `T` is smaller than `type`, the operator requires that the rightmost +dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from +[..., sizeof(`type`)/sizeof(`T`)] to [...]. + +tf.bitcast() and tf.cast() work differently when real dtype is casted as a complex dtype +(e.g. tf.complex64 or tf.complex128) as tf.cast() make imaginary part 0 while tf.bitcast() +gives module error. +For example, + +#### Example 1: + + + +``` +>>> a = [1., 2., 3.] +>>> equality_bitcast = tf.bitcast(a, tf.complex128) +Traceback (most recent call last): +... +InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast] +>>> equality_cast = tf.cast(a, tf.complex128) +>>> print(equality_cast) +tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128) +``` + +#### Example 2: + + + +``` +>>> tf.bitcast(tf.constant(0xffffffff, dtype=tf.uint32), tf.uint8) + +``` + +#### Example 3: + + + +``` +>>> x = [1., 2., 3.] +>>> y = [0., 2., 3.] +>>> equality= tf.equal(x,y) +>>> equality_cast = tf.cast(equality,tf.float32) +>>> equality_bitcast = tf.bitcast(equality_cast,tf.uint8) +>>> print(equality) +tf.Tensor([False True True], shape=(3,), dtype=bool) +>>> print(equality_cast) +tf.Tensor([0. 1. 1.], shape=(3,), dtype=float32) +>>> print(equality_bitcast) +tf.Tensor( + [[ 0 0 0 0] + [ 0 0 128 63] + [ 0 0 128 63]], shape=(3, 4), dtype=uint8) +``` + +*NOTE*: Bitcast is implemented as a low-level cast, so machines with different +endian orderings will give different results. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `complex64`, `complex128`, `qint8`, `quint8`, `qint16`, `quint16`, `qint32`. +
+`type` + +A tf.DType from: `tf.bfloat16, tf.half, tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.uint32, tf.uint64, tf.int8, tf.int16, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BitwiseAnd.md b/site/en/api_docs/python/tf/raw_ops/BitwiseAnd.md new file mode 100644 index 00000000000..09904cee60c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BitwiseAnd.md @@ -0,0 +1,105 @@ +description: Elementwise computes the bitwise AND of x and y. + +
+ + +
+ +# tf.raw_ops.BitwiseAnd + + + + + + + + + +Elementwise computes the bitwise AND of `x` and `y`. + + + + + + + + + +The result will have those bits set, that are set in both `x` and `y`. The +computation is performed on the underlying representations of `x` and `y`. + +#### For example: + + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops +dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64, + tf.uint8, tf.uint16, tf.uint32, tf.uint64] + +for dtype in dtype_list: + lhs = tf.constant([0, 5, 3, 14], dtype=dtype) + rhs = tf.constant([5, 0, 7, 11], dtype=dtype) + exp = tf.constant([0, 0, 3, 10], dtype=tf.float32) + + res = bitwise_ops.bitwise_and(lhs, rhs) + tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BitwiseOr.md b/site/en/api_docs/python/tf/raw_ops/BitwiseOr.md new file mode 100644 index 00000000000..cd75aa8c7f4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BitwiseOr.md @@ -0,0 +1,105 @@ +description: Elementwise computes the bitwise OR of x and y. + +
+ + +
+ +# tf.raw_ops.BitwiseOr + + + + + + + + + +Elementwise computes the bitwise OR of `x` and `y`. + + + + + + + + + +The result will have those bits set, that are set in `x`, `y` or both. The +computation is performed on the underlying representations of `x` and `y`. + +#### For example: + + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops +dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64, + tf.uint8, tf.uint16, tf.uint32, tf.uint64] + +for dtype in dtype_list: + lhs = tf.constant([0, 5, 3, 14], dtype=dtype) + rhs = tf.constant([5, 0, 7, 11], dtype=dtype) + exp = tf.constant([5, 5, 7, 15], dtype=tf.float32) + + res = bitwise_ops.bitwise_or(lhs, rhs) + tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BitwiseXor.md b/site/en/api_docs/python/tf/raw_ops/BitwiseXor.md new file mode 100644 index 00000000000..1cf8f6ae40d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BitwiseXor.md @@ -0,0 +1,105 @@ +description: Elementwise computes the bitwise XOR of x and y. + +
+ + +
+ +# tf.raw_ops.BitwiseXor + + + + + + + + + +Elementwise computes the bitwise XOR of `x` and `y`. + + + + + + + + + +The result will have those bits set, that are different in `x` and `y`. The +computation is performed on the underlying representations of `x` and `y`. + +#### For example: + + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops +dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64, + tf.uint8, tf.uint16, tf.uint32, tf.uint64] + +for dtype in dtype_list: + lhs = tf.constant([0, 5, 3, 14], dtype=dtype) + rhs = tf.constant([5, 0, 7, 11], dtype=dtype) + exp = tf.constant([5, 5, 4, 5], dtype=tf.float32) + + res = bitwise_ops.bitwise_xor(lhs, rhs) + tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BlockLSTM.md b/site/en/api_docs/python/tf/raw_ops/BlockLSTM.md new file mode 100644 index 00000000000..a3511a2f6ce --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BlockLSTM.md @@ -0,0 +1,231 @@ +description: Computes the LSTM cell forward propagation for all the time steps. + +
+ + +
+ +# tf.raw_ops.BlockLSTM + + + + + + + + + +Computes the LSTM cell forward propagation for all the time steps. + + + + + + + + + +This is equivalent to applying LSTMBlockCell in a loop, like so: + +```python +for x1 in unpack(x): + i1, cs1, f1, o1, ci1, co1, h1 = LSTMBlock( + x1, cs_prev, h_prev, w, wci, wcf, wco, b) + cs_prev = cs1 + h_prev = h1 + i.append(i1) + cs.append(cs1) + f.append(f1) + o.append(o1) + ci.append(ci1) + co.append(co1) + h.append(h1) +return pack(i), pack(cs), pack(f), pack(o), pack(ci), pack(ch), pack(h) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`seq_len_max` + +A `Tensor` of type `int64`. +Maximum time length actually used by this input. Outputs are padded +with zeros beyond this length. +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +The sequence input to the LSTM, shape (timelen, batch_size, num_inputs). +
+`cs_prev` + +A `Tensor`. Must have the same type as `x`. +Value of the initial cell state. +
+`h_prev` + +A `Tensor`. Must have the same type as `x`. +Initial output of cell (to be used for peephole). +
+`w` + +A `Tensor`. Must have the same type as `x`. The weight matrix. +
+`wci` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for input gate peephole connection. +
+`wcf` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for forget gate peephole connection. +
+`wco` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for output gate peephole connection. +
+`b` + +A `Tensor`. Must have the same type as `x`. The bias vector. +
+`forget_bias` + +An optional `float`. Defaults to `1`. The forget gate bias. +
+`cell_clip` + +An optional `float`. Defaults to `3`. +Value to clip the 'cs' value to. +
+`use_peephole` + +An optional `bool`. Defaults to `False`. +Whether to use peephole weights. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (i, cs, f, o, ci, co, h). +
+`i` + +A `Tensor`. Has the same type as `x`. +
+`cs` + +A `Tensor`. Has the same type as `x`. +
+`f` + +A `Tensor`. Has the same type as `x`. +
+`o` + +A `Tensor`. Has the same type as `x`. +
+`ci` + +A `Tensor`. Has the same type as `x`. +
+`co` + +A `Tensor`. Has the same type as `x`. +
+`h` + +A `Tensor`. Has the same type as `x`. +
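+
+#### Example:
+
+A rough, shape-oriented sketch in eager mode. `x` follows the documented
+shape `(timelen, batch_size, num_inputs)`; the remaining shapes
+(state tensors `[batch_size, cell_size]`, `w` of shape
+`[num_inputs + cell_size, 4 * cell_size]`, `b` of shape `[4 * cell_size]`)
+are conventional LSTM-block shapes assumed by this sketch rather than
+stated on this page.
+
+```python
+import tensorflow as tf
+
+timelen, batch_size, num_inputs, cell_size = 4, 2, 3, 5
+x = tf.random.normal([timelen, batch_size, num_inputs])
+cs_prev = tf.zeros([batch_size, cell_size])
+h_prev = tf.zeros([batch_size, cell_size])
+w = tf.random.normal([num_inputs + cell_size, 4 * cell_size])
+b = tf.zeros([4 * cell_size])
+# Peephole weights are required inputs even when use_peephole is False.
+wci = wcf = wco = tf.zeros([cell_size])
+
+i, cs, f, o, ci, co, h = tf.raw_ops.BlockLSTM(
+    seq_len_max=tf.constant(timelen, dtype=tf.int64),
+    x=x, cs_prev=cs_prev, h_prev=h_prev,
+    w=w, wci=wci, wcf=wcf, wco=wco, b=b)
+print(h.shape)  # (4, 2, 5): one hidden state per time step
+```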
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BlockLSTMGrad.md b/site/en/api_docs/python/tf/raw_ops/BlockLSTMGrad.md new file mode 100644 index 00000000000..025671d0112 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BlockLSTMGrad.md @@ -0,0 +1,278 @@ +description: Computes the LSTM cell backward propagation for the entire time sequence. + +
+ + +
+ +# tf.raw_ops.BlockLSTMGrad + + + + + + + + + +Computes the LSTM cell backward propagation for the entire time sequence. + + + + + + + + + +This implementation is to be used in conjunction of LSTMBlock. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`seq_len_max` + +A `Tensor` of type `int64`. +Maximum time length actually used by this input. Outputs are padded +with zeros beyond this length. +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +The sequence input to the LSTM, shape (timelen, batch_size, num_inputs). +
+`cs_prev` + +A `Tensor`. Must have the same type as `x`. +Value of the initial cell state. +
+`h_prev` + +A `Tensor`. Must have the same type as `x`. +Initial output of cell (to be used for peephole). +
+`w` + +A `Tensor`. Must have the same type as `x`. The weight matrix. +
+`wci` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for input gate peephole connection. +
+`wcf` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for forget gate peephole connection. +
+`wco` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for output gate peephole connection. +
+`b` + +A `Tensor`. Must have the same type as `x`. The bias vector. +
+`i` + +A `Tensor`. Must have the same type as `x`. +The input gate over the whole time sequence. +
+`cs` + +A `Tensor`. Must have the same type as `x`. +The cell state before the tanh over the whole time sequence. +
+`f` + +A `Tensor`. Must have the same type as `x`. +The forget gate over the whole time sequence. +
+`o` + +A `Tensor`. Must have the same type as `x`. +The output gate over the whole time sequence. +
+`ci` + +A `Tensor`. Must have the same type as `x`. +The cell input over the whole time sequence. +
+`co` + +A `Tensor`. Must have the same type as `x`. +The cell after the tanh over the whole time sequence. +
+`h` + +A `Tensor`. Must have the same type as `x`. +The output h vector over the whole time sequence. +
+`cs_grad` + +A `Tensor`. Must have the same type as `x`. +The current gradient of cs. +
+`h_grad` + +A `Tensor`. Must have the same type as `x`. +The gradient of h vector. +
+`use_peephole` + +A `bool`. Whether to use peephole weights. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (x_grad, cs_prev_grad, h_prev_grad, w_grad, wci_grad, wcf_grad, wco_grad, b_grad). +
+`x_grad` + +A `Tensor`. Has the same type as `x`. +
+`cs_prev_grad` + +A `Tensor`. Has the same type as `x`. +
+`h_prev_grad` + +A `Tensor`. Has the same type as `x`. +
+`w_grad` + +A `Tensor`. Has the same type as `x`. +
+`wci_grad` + +A `Tensor`. Has the same type as `x`. +
+`wcf_grad` + +A `Tensor`. Has the same type as `x`. +
+`wco_grad` + +A `Tensor`. Has the same type as `x`. +
+`b_grad` + +A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BlockLSTMGradV2.md b/site/en/api_docs/python/tf/raw_ops/BlockLSTMGradV2.md new file mode 100644 index 00000000000..9f7a6fccc95 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BlockLSTMGradV2.md @@ -0,0 +1,278 @@ +description: Computes the LSTM cell backward propagation for the entire time sequence. + +
+ + +
+ +# tf.raw_ops.BlockLSTMGradV2 + + + + + + + + + +Computes the LSTM cell backward propagation for the entire time sequence. + + + + + + + + + +This implementation is to be used in conjunction of BlockLSTMV2. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`seq_len_max` + +A `Tensor` of type `int64`. +Maximum time length actually used by this input. Outputs are padded +with zeros beyond this length. +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +The sequence input to the LSTM, shape (timelen, batch_size, num_inputs). +
+`cs_prev` + +A `Tensor`. Must have the same type as `x`. +Value of the initial cell state. +
+`h_prev` + +A `Tensor`. Must have the same type as `x`. +Initial output of cell (to be used for peephole). +
+`w` + +A `Tensor`. Must have the same type as `x`. The weight matrix. +
+`wci` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for input gate peephole connection. +
+`wcf` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for forget gate peephole connection. +
+`wco` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for output gate peephole connection. +
+`b` + +A `Tensor`. Must have the same type as `x`. The bias vector. +
+`i` + +A `Tensor`. Must have the same type as `x`. +The input gate over the whole time sequence. +
+`cs` + +A `Tensor`. Must have the same type as `x`. +The cell state before the tanh over the whole time sequence. +
+`f` + +A `Tensor`. Must have the same type as `x`. +The forget gate over the whole time sequence. +
+`o` + +A `Tensor`. Must have the same type as `x`. +The output gate over the whole time sequence. +
+`ci` + +A `Tensor`. Must have the same type as `x`. +The cell input over the whole time sequence. +
+`co` + +A `Tensor`. Must have the same type as `x`. +The cell after the tanh over the whole time sequence. +
+`h` + +A `Tensor`. Must have the same type as `x`. +The output h vector over the whole time sequence. +
+`cs_grad` + +A `Tensor`. Must have the same type as `x`. +The current gradient of cs. +
+`h_grad` + +A `Tensor`. Must have the same type as `x`. +The gradient of h vector. +
+`use_peephole` + +A `bool`. Whether to use peephole weights. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (x_grad, cs_prev_grad, h_prev_grad, w_grad, wci_grad, wcf_grad, wco_grad, b_grad). +
+`x_grad` + +A `Tensor`. Has the same type as `x`. +
+`cs_prev_grad` + +A `Tensor`. Has the same type as `x`. +
+`h_prev_grad` + +A `Tensor`. Has the same type as `x`. +
+`w_grad` + +A `Tensor`. Has the same type as `x`. +
+`wci_grad` + +A `Tensor`. Has the same type as `x`. +
+`wcf_grad` + +A `Tensor`. Has the same type as `x`. +
+`wco_grad` + +A `Tensor`. Has the same type as `x`. +
+`b_grad` + +A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BlockLSTMV2.md b/site/en/api_docs/python/tf/raw_ops/BlockLSTMV2.md new file mode 100644 index 00000000000..9a1f452d7f1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BlockLSTMV2.md @@ -0,0 +1,228 @@ +description: Computes the LSTM cell forward propagation for all the time steps. + +
+ + +
+ +# tf.raw_ops.BlockLSTMV2 + + + + + + + + + +Computes the LSTM cell forward propagation for all the time steps. + + + + + + + + + +This is equivalent to applying LSTMBlockCell in a loop, like so: + +```python +for x1 in unpack(x): + i1, cs1, f1, o1, ci1, co1, h1 = LSTMBlock( + x1, cs_prev, h_prev, w, wci, wcf, wco, b) + cs_prev = cs1 + h_prev = h1 + i.append(i1) + cs.append(cs1) + f.append(f1) + o.append(o1) + ci.append(ci1) + co.append(co1) + h.append(h1) +return pack(i), pack(cs), pack(f), pack(o), pack(ci), pack(ch), pack(h) + +Note that unlike LSTMBlockCell (and BlockLSTM) which uses ICFO gate layout, +this op uses IFCO. So in order for the following snippet to be equivalent +all gate-related outputs should be reordered. +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`seq_len_max` + +A `Tensor` of type `int64`. +Maximum time length actually used by this input. Outputs are padded +with zeros beyond this length. +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +The sequence input to the LSTM, shape (timelen, batch_size, num_inputs). +
+`cs_prev` + +A `Tensor`. Must have the same type as `x`. +Value of the initial cell state. +
+`h_prev` + +A `Tensor`. Must have the same type as `x`. +Initial output of cell (to be used for peephole). +
+`w` + +A `Tensor`. Must have the same type as `x`. The weight matrix. +
+`wci` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for input gate peephole connection. +
+`wcf` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for forget gate peephole connection. +
+`wco` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for output gate peephole connection. +
+`b` + +A `Tensor`. Must have the same type as `x`. The bias vector. +
+`cell_clip` + +An optional `float`. Defaults to `0`. +Value to clip the 'cs' value to. +
+`use_peephole` + +An optional `bool`. Defaults to `False`. +Whether to use peephole weights. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (i, cs, f, o, ci, co, h). +
+`i` + +A `Tensor`. Has the same type as `x`. +
+`cs` + +A `Tensor`. Has the same type as `x`. +
+`f` + +A `Tensor`. Has the same type as `x`. +
+`o` + +A `Tensor`. Has the same type as `x`. +
+`ci` + +A `Tensor`. Has the same type as `x`. +
+`co` + +A `Tensor`. Has the same type as `x`. +
+`h` + +A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesAggregateStats.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesAggregateStats.md new file mode 100644 index 00000000000..35affb545a1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesAggregateStats.md @@ -0,0 +1,119 @@ +description: Aggregates the summary of accumulated stats for the batch. + +
+ + +
+ +# tf.raw_ops.BoostedTreesAggregateStats + + + + + + + + + +Aggregates the summary of accumulated stats for the batch. + + + + + + + + + +The summary stats contains gradients and hessians accumulated for each node, feature dimension id and bucket. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`node_ids` + +A `Tensor` of type `int32`. +int32; Rank 1 Tensor containing node ids for each example, shape [batch_size]. +
+`gradients` + +A `Tensor` of type `float32`. +float32; Rank 2 Tensor (shape=[batch_size, logits_dimension]) with gradients for each example. +
+`hessians` + +A `Tensor` of type `float32`. +float32; Rank 2 Tensor (shape=[batch_size, hessian_dimension]) with hessians for each example. +
+`feature` + +A `Tensor` of type `int32`. +int32; Rank 2 feature Tensors (shape=[batch_size, feature_dimension]). +
+`max_splits` + +An `int` that is `>= 1`. +int; the maximum number of splits possible in the whole tree. +
+`num_buckets` + +An `int` that is `>= 1`. +int; equals to the maximum possible value of bucketized feature. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesBucketize.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesBucketize.md new file mode 100644 index 00000000000..af3b0b6850b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesBucketize.md @@ -0,0 +1,89 @@ +description: Bucketize each feature based on bucket boundaries. + +
+ + +
+ +# tf.raw_ops.BoostedTreesBucketize + + + + + + + + + +Bucketize each feature based on bucket boundaries. + + + + + + + + + +An op that returns a list of float tensors, where each tensor represents the +bucketized values for a single feature. + + + + + + + + + + + + + + + + +
+`float_values` + +A list of `Tensor` objects with type `float32`. +float; List of Rank 1 Tensor each containing float values for a single feature. +
+`bucket_boundaries` + +A list with the same length as `float_values` of `Tensor` objects with type `float32`. +float; List of Rank 1 Tensors each containing the bucket boundaries for a single +feature. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list with the same length as `float_values` of `Tensor` objects with type `int32`. +
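+
+#### Example:
+
+A minimal sketch with a single feature: each tensor of float values is
+mapped to a tensor of int32 bucket ids. The exact treatment of values that
+fall on a boundary is left to the op and not asserted here.
+
+```python
+import tensorflow as tf
+
+float_values = [tf.constant([-1.0, 0.5, 2.5, 10.0])]
+bucket_boundaries = [tf.constant([0.0, 1.0, 5.0])]
+buckets = tf.raw_ops.BoostedTreesBucketize(
+    float_values=float_values, bucket_boundaries=bucket_boundaries)
+# One int32 tensor of bucket ids per input feature.
+print(buckets[0])
+```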
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesCalculateBestFeatureSplit.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCalculateBestFeatureSplit.md new file mode 100644 index 00000000000..f50e9e1d023 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCalculateBestFeatureSplit.md @@ -0,0 +1,192 @@ +description: Calculates gains for each feature and returns the best possible split information for the feature. + +
+ + +
+ +# tf.raw_ops.BoostedTreesCalculateBestFeatureSplit + + + + + + + + + +Calculates gains for each feature and returns the best possible split information for the feature. + + + + + + + + + +The split information is the best threshold (bucket id), gains and left/right node contributions per node for each feature. + +It is possible that not all nodes can be split on each feature. Hence, the list of possible nodes can differ between the features. Therefore, we return `node_ids_list` for each feature, containing the list of nodes that this feature can be used to split. + +In this manner, the output is the best split per features and per node, so that it needs to be combined later to produce the best split for each node (among all possible features). + +The output shapes are compatible in a way that the first dimension of all tensors are the same and equal to the number of possible split nodes for each feature. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`node_id_range`
+</td>
+<td>
+A `Tensor` of type `int32`.
+A Rank 1 tensor (shape=[2]) to specify the range [first, last) of node ids to process within `stats_summary_list`. The nodes are iterated between the two nodes specified by the tensor, as in `for node_id in range(node_id_range[0], node_id_range[1])` (note that the last index, node_id_range[1], is exclusive).
+</td>
+</tr><tr>
+<td>
+`stats_summary` + +A `Tensor` of type `float32`. +A Rank 4 tensor (#shape=[max_splits, feature_dims, bucket, stats_dims]) for accumulated stats summary (gradient/hessian) per node, per dimension, per buckets for each feature. +The first dimension of the tensor is the maximum number of splits, and thus not all elements of it will be used, but only the indexes specified by node_ids will be used. +
+`l1` + +A `Tensor` of type `float32`. +l1 regularization factor on leaf weights, per instance based. +
+`l2` + +A `Tensor` of type `float32`. +l2 regularization factor on leaf weights, per instance based. +
+`tree_complexity` + +A `Tensor` of type `float32`. +adjustment to the gain, per leaf based. +
+`min_node_weight`
+</td>
+<td>
+A `Tensor` of type `float32`.
+minimum average of hessians in a node required before the node is considered for splitting.
+</td>
+</tr><tr>
+<td>
+`logits_dimension` + +An `int` that is `>= 1`. +The dimension of logit, i.e., number of classes. +
+`split_type` + +An optional `string` from: `"inequality", "equality"`. Defaults to `"inequality"`. +A string indicating if this Op should perform inequality split or equality split. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (node_ids, gains, feature_dimensions, thresholds, left_node_contribs, right_node_contribs, split_with_default_directions). +
+`node_ids` + +A `Tensor` of type `int32`. +
+`gains` + +A `Tensor` of type `float32`. +
+`feature_dimensions` + +A `Tensor` of type `int32`. +
+`thresholds` + +A `Tensor` of type `int32`. +
+`left_node_contribs` + +A `Tensor` of type `float32`. +
+`right_node_contribs` + +A `Tensor` of type `float32`. +
+`split_with_default_directions` + +A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesCalculateBestFeatureSplitV2.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCalculateBestFeatureSplitV2.md new file mode 100644 index 00000000000..860f5e83010 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCalculateBestFeatureSplitV2.md @@ -0,0 +1,207 @@ +description: Calculates gains for each feature and returns the best possible split information for each node. However, if no split is found, then no split information is returned for that node. + +
+ + +
+ +# tf.raw_ops.BoostedTreesCalculateBestFeatureSplitV2 + + + + + + + + + +Calculates gains for each feature and returns the best possible split information for each node. However, if no split is found, then no split information is returned for that node. + + + + + + + + + +The split information is the best threshold (bucket id), gains and left/right node contributions per node for each feature. + +It is possible that not all nodes can be split on each feature. Hence, the list of possible nodes can differ between the features. Therefore, we return `node_ids_list` for each feature, containing the list of nodes that this feature can be used to split. + +In this manner, the output is the best split per features and per node, so that it needs to be combined later to produce the best split for each node (among all possible features). + +The output shapes are compatible in a way that the first dimension of all tensors are the same and equal to the number of possible split nodes for each feature. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`node_id_range`
+</td>
+<td>
+A `Tensor` of type `int32`.
+A Rank 1 tensor (shape=[2]) to specify the range [first, last) of node ids to process within `stats_summary_list`. The nodes are iterated between the two nodes specified by the tensor, as in `for node_id in range(node_id_range[0], node_id_range[1])` (note that the last index, node_id_range[1], is exclusive).
+</td>
+</tr><tr>
+<td>
+`stats_summaries_list` + +A list of at least 1 `Tensor` objects with type `float32`. +A list of Rank 4 tensor (#shape=[max_splits, feature_dims, bucket, stats_dims]) for accumulated stats summary (gradient/hessian) per node, per dimension, per buckets for each feature. +The first dimension of the tensor is the maximum number of splits, and thus not all elements of it will be used, but only the indexes specified by node_ids will be used. +
+`split_types` + +A `Tensor` of type `string`. +A Rank 1 tensor indicating if this Op should perform inequality split or equality split per feature. +
+`candidate_feature_ids` + +A `Tensor` of type `int32`. +Rank 1 tensor with ids for each feature. This is the real id of the feature. +
+`l1` + +A `Tensor` of type `float32`. +l1 regularization factor on leaf weights, per instance based. +
+`l2` + +A `Tensor` of type `float32`. +l2 regularization factor on leaf weights, per instance based. +
+`tree_complexity` + +A `Tensor` of type `float32`. +adjustment to the gain, per leaf based. +
+`min_node_weight`
+</td>
+<td>
+A `Tensor` of type `float32`.
+minimum average of hessians in a node required before the node is considered for splitting.
+</td>
+</tr><tr>
+<td>
+`logits_dimension` + +An `int` that is `>= 1`. +The dimension of logit, i.e., number of classes. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (node_ids, gains, feature_ids, feature_dimensions, thresholds, left_node_contribs, right_node_contribs, split_with_default_directions). +
+`node_ids` + +A `Tensor` of type `int32`. +
+`gains` + +A `Tensor` of type `float32`. +
+`feature_ids` + +A `Tensor` of type `int32`. +
+`feature_dimensions` + +A `Tensor` of type `int32`. +
+`thresholds` + +A `Tensor` of type `int32`. +
+`left_node_contribs` + +A `Tensor` of type `float32`. +
+`right_node_contribs` + +A `Tensor` of type `float32`. +
+`split_with_default_directions` + +A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesCalculateBestGainsPerFeature.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCalculateBestGainsPerFeature.md new file mode 100644 index 00000000000..6dbb28964c5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCalculateBestGainsPerFeature.md @@ -0,0 +1,170 @@ +description: Calculates gains for each feature and returns the best possible split information for the feature. + +
+ + +
+ +# tf.raw_ops.BoostedTreesCalculateBestGainsPerFeature + + + + + + + + + +Calculates gains for each feature and returns the best possible split information for the feature. + + + + + + + + + +The split information is the best threshold (bucket id), gains and left/right node contributions per node for each feature. + +It is possible that not all nodes can be split on each feature. Hence, the list of possible nodes can differ between the features. Therefore, we return `node_ids_list` for each feature, containing the list of nodes that this feature can be used to split. + +In this manner, the output is the best split per features and per node, so that it needs to be combined later to produce the best split for each node (among all possible features). + +The length of output lists are all of the same length, `num_features`. +The output shapes are compatible in a way that the first dimension of all tensors of all lists are the same and equal to the number of possible split nodes for each feature. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`node_id_range`
+</td>
+<td>
+A `Tensor` of type `int32`.
+A Rank 1 tensor (shape=[2]) to specify the range [first, last) of node ids to process within `stats_summary_list`. The nodes are iterated between the two nodes specified by the tensor, as in `for node_id in range(node_id_range[0], node_id_range[1])` (note that the last index, node_id_range[1], is exclusive).
+</td>
+</tr><tr>
+<td>
+`stats_summary_list` + +A list of at least 1 `Tensor` objects with type `float32`. +A list of Rank 3 tensor (#shape=[max_splits, bucket, 2]) for accumulated stats summary (gradient/hessian) per node per buckets for each feature. The first dimension of the tensor is the maximum number of splits, and thus not all elements of it will be used, but only the indexes specified by node_ids will be used. +
+`l1` + +A `Tensor` of type `float32`. +l1 regularization factor on leaf weights, per instance based. +
+`l2` + +A `Tensor` of type `float32`. +l2 regularization factor on leaf weights, per instance based. +
+`tree_complexity` + +A `Tensor` of type `float32`. +adjustment to the gain, per leaf based. +
+`min_node_weight`
+</td>
+<td>
+A `Tensor` of type `float32`.
+minimum average of hessians in a node required before the node is considered for splitting.
+</td>
+</tr><tr>
+<td>
+`max_splits` + +An `int` that is `>= 1`. +the number of nodes that can be split in the whole tree. Used as a dimension of output tensors. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (node_ids_list, gains_list, thresholds_list, left_node_contribs_list, right_node_contribs_list). +
+`node_ids_list` + +A list with the same length as `stats_summary_list` of `Tensor` objects with type `int32`. +
+`gains_list` + +A list with the same length as `stats_summary_list` of `Tensor` objects with type `float32`. +
+`thresholds_list` + +A list with the same length as `stats_summary_list` of `Tensor` objects with type `int32`. +
+`left_node_contribs_list` + +A list with the same length as `stats_summary_list` of `Tensor` objects with type `float32`. +
+`right_node_contribs_list` + +A list with the same length as `stats_summary_list` of `Tensor` objects with type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesCenterBias.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCenterBias.md new file mode 100644 index 00000000000..5b0a9d6bbc4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCenterBias.md @@ -0,0 +1,110 @@ +description: Calculates the prior from the training data (the bias) and fills in the first node with the logits' prior. Returns a boolean indicating whether to continue centering. + +
+ + +
+ +# tf.raw_ops.BoostedTreesCenterBias + + + + + + + + + +Calculates the prior from the training data (the bias) and fills in the first node with the logits' prior. Returns a boolean indicating whether to continue centering. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +Handle to the tree ensemble. +
+`mean_gradients` + +A `Tensor` of type `float32`. +A tensor with shape=[logits_dimension] with mean of gradients for a first node. +
+`mean_hessians` + +A `Tensor` of type `float32`. +A tensor with shape=[logits_dimension] mean of hessians for a first node. +
+`l1` + +A `Tensor` of type `float32`. +l1 regularization factor on leaf weights, per instance based. +
+`l2` + +A `Tensor` of type `float32`. +l2 regularization factor on leaf weights, per instance based. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesCreateEnsemble.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCreateEnsemble.md new file mode 100644 index 00000000000..c4a5c6970b9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCreateEnsemble.md @@ -0,0 +1,94 @@ +description: Creates a tree ensemble model and returns a handle to it. + +
+ + +
+ +# tf.raw_ops.BoostedTreesCreateEnsemble + + + + + + + + + +Creates a tree ensemble model and returns a handle to it. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +Handle to the tree ensemble resource to be created. +
+`stamp_token` + +A `Tensor` of type `int64`. +Token to use as the initial value of the resource stamp. +
+`tree_ensemble_serialized` + +A `Tensor` of type `string`. +Serialized proto of the tree ensemble. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
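+
+#### Example:
+
+A rough sketch of the intended workflow, assuming eager execution: create a
+resource handle, initialize it with an empty serialized ensemble proto, and
+later read it back with ops such as
+`tf.raw_ops.BoostedTreesGetEnsembleStates`. The `shared_name` value and the
+use of an empty serialized proto are assumptions of this sketch.
+
+```python
+import tensorflow as tf
+
+# Hypothetical shared_name chosen for this sketch.
+handle = tf.raw_ops.BoostedTreesEnsembleResourceHandleOp(
+    container="", shared_name="example_ensemble")
+
+# An empty serialized proto is assumed to create an empty ensemble.
+tf.raw_ops.BoostedTreesCreateEnsemble(
+    tree_ensemble_handle=handle,
+    stamp_token=tf.constant(0, dtype=tf.int64),
+    tree_ensemble_serialized=tf.constant("", dtype=tf.string))
+
+# The stamp token and tree counts can then be read back, e.g. with
+# tf.raw_ops.BoostedTreesGetEnsembleStates(tree_ensemble_handle=handle).
+```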
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesCreateQuantileStreamResource.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCreateQuantileStreamResource.md new file mode 100644 index 00000000000..4dc2c828500 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesCreateQuantileStreamResource.md @@ -0,0 +1,103 @@ +description: Create the Resource for Quantile Streams. + +
+ + +
+ +# tf.raw_ops.BoostedTreesCreateQuantileStreamResource + + + + + + + + + +Create the Resource for Quantile Streams. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`quantile_stream_resource_handle` + +A `Tensor` of type `resource`. +resource; Handle to quantile stream resource. +
+`epsilon` + +A `Tensor` of type `float32`. +float; The required approximation error of the stream resource. +
+`num_streams` + +A `Tensor` of type `int64`. +int; The number of streams managed by the resource that shares the same epsilon. +
+`max_elements` + +An optional `int`. Defaults to `1099511627776`. +int; The maximum number of data points that can be fed to the stream. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesDeserializeEnsemble.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesDeserializeEnsemble.md new file mode 100644 index 00000000000..3abbf302a05 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesDeserializeEnsemble.md @@ -0,0 +1,95 @@ +description: Deserializes a serialized tree ensemble config and replaces current tree + +
+ + +
+ +# tf.raw_ops.BoostedTreesDeserializeEnsemble + + + + + + + + + +Deserializes a serialized tree ensemble config and replaces current tree + + + + + + + + + +ensemble. + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +Handle to the tree ensemble. +
+`stamp_token` + +A `Tensor` of type `int64`. +Token to use as the new value of the resource stamp. +
+`tree_ensemble_serialized` + +A `Tensor` of type `string`. +Serialized proto of the ensemble. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesEnsembleResourceHandleOp.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesEnsembleResourceHandleOp.md new file mode 100644 index 00000000000..53a5f84acf0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesEnsembleResourceHandleOp.md @@ -0,0 +1,84 @@ +description: Creates a handle to a BoostedTreesEnsembleResource + +
+ + +
+ +# tf.raw_ops.BoostedTreesEnsembleResourceHandleOp + + + + + + + + + +Creates a handle to a BoostedTreesEnsembleResource + + + + + + + + + + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesExampleDebugOutputs.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesExampleDebugOutputs.md new file mode 100644 index 00000000000..8359a745909 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesExampleDebugOutputs.md @@ -0,0 +1,98 @@ +description: Debugging/model interpretability outputs for each example. + +
+ + +
+ +# tf.raw_ops.BoostedTreesExampleDebugOutputs + + + + + + + + + +Debugging/model interpretability outputs for each example. + + + + + + + + + +It traverses all the trees and computes debug metrics for individual examples, +such as getting split feature ids and logits after each split along the decision +path used to compute directional feature contributions. + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +
+`bucketized_features` + +A list of at least 1 `Tensor` objects with type `int32`. +A list of rank 1 Tensors containing bucket id for each +feature. +
+`logits_dimension` + +An `int`. +scalar, dimension of the logits, to be used for constructing the protos in +examples_debug_outputs_serialized. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesFlushQuantileSummaries.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesFlushQuantileSummaries.md new file mode 100644 index 00000000000..57a92141722 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesFlushQuantileSummaries.md @@ -0,0 +1,88 @@ +description: Flush the quantile summaries from each quantile stream resource. + +
+ + +
+ +# tf.raw_ops.BoostedTreesFlushQuantileSummaries + + + + + + + + + +Flush the quantile summaries from each quantile stream resource. + + + + + + + + + +An op that outputs a list of quantile summaries of a quantile stream resource. +Each summary Tensor is rank 2, containing summaries (value, weight, min_rank, +max_rank) for a single feature. + + + + + + + + + + + + + + + + +
+`quantile_stream_resource_handle` + +A `Tensor` of type `resource`. +resource handle referring to a QuantileStreamResource. +
+`num_features` + +An `int` that is `>= 0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `num_features` `Tensor` objects with type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesGetEnsembleStates.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesGetEnsembleStates.md new file mode 100644 index 00000000000..e3478f60c0b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesGetEnsembleStates.md @@ -0,0 +1,113 @@ +description: Retrieves the tree ensemble resource stamp token, number of trees and growing statistics. + +
+ + +
+ +# tf.raw_ops.BoostedTreesGetEnsembleStates + + + + + + + + + +Retrieves the tree ensemble resource stamp token, number of trees and growing statistics. + + + + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +Handle to the tree ensemble. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (stamp_token, num_trees, num_finalized_trees, num_attempted_layers, last_layer_nodes_range). +
+`stamp_token` + +A `Tensor` of type `int64`. +
+`num_trees` + +A `Tensor` of type `int32`. +
+`num_finalized_trees` + +A `Tensor` of type `int32`. +
+`num_attempted_layers` + +A `Tensor` of type `int32`. +
+`last_layer_nodes_range` + +A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesMakeQuantileSummaries.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesMakeQuantileSummaries.md new file mode 100644 index 00000000000..c318c2856cf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesMakeQuantileSummaries.md @@ -0,0 +1,96 @@ +description: Makes the summary of quantiles for the batch. + +
+ + +
+ +# tf.raw_ops.BoostedTreesMakeQuantileSummaries + + + + + + + + + +Makes the summary of quantiles for the batch. + + + + + + + + + +An op that takes a list of tensors (one tensor per feature) and outputs the +quantile summaries for each tensor. + + + + + + + + + + + + + + + + + + + +
+`float_values` + +A list of `Tensor` objects with type `float32`. +float; List of Rank 1 Tensors each containing values for a single feature. +
+`example_weights` + +A `Tensor` of type `float32`. +float; Rank 1 Tensor with weights per instance. +
+`epsilon` + +A `Tensor` of type `float32`. +float; The required maximum approximation error. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list with the same length as `float_values` of `Tensor` objects with type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesMakeStatsSummary.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesMakeStatsSummary.md new file mode 100644 index 00000000000..bbd26030150 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesMakeStatsSummary.md @@ -0,0 +1,120 @@ +description: Makes the summary of accumulated stats for the batch. + +
+ + +
+ +# tf.raw_ops.BoostedTreesMakeStatsSummary + + + + + + + + + +Makes the summary of accumulated stats for the batch. + + + + + + + + + +The summary stats contains gradients and hessians accumulated into the corresponding node and bucket for each example. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`node_ids` + +A `Tensor` of type `int32`. +int32 Rank 1 Tensor containing node ids, which each example falls into for the requested layer. +
+`gradients` + +A `Tensor` of type `float32`. +float32; Rank 2 Tensor (shape=[#examples, 1]) for gradients. +
+`hessians` + +A `Tensor` of type `float32`. +float32; Rank 2 Tensor (shape=[#examples, 1]) for hessians. +
+`bucketized_features_list` + +A list of at least 1 `Tensor` objects with type `int32`. +int32 list of Rank 1 Tensors, each containing the bucketized feature (for each feature column). +
+`max_splits` + +An `int` that is `>= 1`. +int; the maximum number of splits possible in the whole tree. +
+`num_buckets` + +An `int` that is `>= 1`. +int; equals to the maximum possible value of bucketized feature. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesPredict.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesPredict.md new file mode 100644 index 00000000000..a8673e90a38 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesPredict.md @@ -0,0 +1,97 @@ +description: Runs multiple additive regression ensemble predictors on input instances and + +
+ + +
+ +# tf.raw_ops.BoostedTreesPredict + + + + + + + + + +Runs multiple additive regression ensemble predictors on input instances and + + + + + + + + + +computes the logits. It is designed to be used during prediction. +It traverses all the trees and calculates the final score for each instance. + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +
+`bucketized_features` + +A list of at least 1 `Tensor` objects with type `int32`. +A list of rank 1 Tensors containing bucket id for each +feature. +
+`logits_dimension` + +An `int`. +scalar, dimension of the logits, to be used for partial logits +shape. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceAddSummaries.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceAddSummaries.md new file mode 100644 index 00000000000..c31447d03db --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceAddSummaries.md @@ -0,0 +1,89 @@ +description: Add the quantile summaries to each quantile stream resource. + +
+ + +
+ +# tf.raw_ops.BoostedTreesQuantileStreamResourceAddSummaries + + + + + + + + + +Add the quantile summaries to each quantile stream resource. + + + + + + + + + +An op that adds a list of quantile summaries to a quantile stream resource. Each +summary Tensor is rank 2, containing summaries (value, weight, min_rank, max_rank) +for a single feature. + + + + + + + + + + + + + + + + +
+`quantile_stream_resource_handle` + +A `Tensor` of type `resource`. +resource handle referring to a QuantileStreamResource. +
+`summaries`
+</td>
+<td>
+A list of `Tensor` objects with type `float32`.
+float; List of Rank 2 Tensors, each containing the summaries for a single feature.
+</td>
+</tr><tr>
+<td>
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceDeserialize.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceDeserialize.md new file mode 100644 index 00000000000..49a52fea971 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceDeserialize.md @@ -0,0 +1,87 @@ +description: Deserialize bucket boundaries and ready flag into current QuantileAccumulator. + +
+ + +
+ +# tf.raw_ops.BoostedTreesQuantileStreamResourceDeserialize + + + + + + + + + +Deserialize bucket boundaries and ready flag into current QuantileAccumulator. + + + + + + + + + +An op that deserializes bucket boundaries and are boundaries ready flag into current QuantileAccumulator. + + + + + + + + + + + + + + + + +
+`quantile_stream_resource_handle` + +A `Tensor` of type `resource`. +resource handle referring to a QuantileStreamResource. +
+`bucket_boundaries` + +A list of at least 1 `Tensor` objects with type `float32`. +float; List of Rank 1 Tensors each containing the bucket boundaries for a feature. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceFlush.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceFlush.md new file mode 100644 index 00000000000..439ab449fdf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceFlush.md @@ -0,0 +1,101 @@ +description: Flush the summaries for a quantile stream resource. + +
+ + +
+ +# tf.raw_ops.BoostedTreesQuantileStreamResourceFlush + + + + + + + + + +Flush the summaries for a quantile stream resource. + + + + + + + + + +An op that flushes the summaries for a quantile stream resource. + + + + + + + + + + + + + + + + + + + +
+`quantile_stream_resource_handle` + +A `Tensor` of type `resource`. +resource handle referring to a QuantileStreamResource. +
+`num_buckets` + +A `Tensor` of type `int64`. +int; approximate number of buckets unless using generate_quantiles. +
+`generate_quantiles` + +An optional `bool`. Defaults to `False`. +bool; If True, the output will be the num_quantiles for each stream where the ith +entry is the ith quantile of the input with an approximation error of epsilon. +Duplicate values may be present. +If False, the output will be the points in the histogram that we got which roughly +translates to 1/epsilon boundaries and without any duplicates. +Default to False. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceGetBucketBoundaries.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceGetBucketBoundaries.md new file mode 100644 index 00000000000..7393d1251d8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceGetBucketBoundaries.md @@ -0,0 +1,88 @@ +description: Generate the bucket boundaries for each feature based on accumulated summaries. + +
+ + +
+ +# tf.raw_ops.BoostedTreesQuantileStreamResourceGetBucketBoundaries + + + + + + + + + +Generate the bucket boundaries for each feature based on accumulated summaries. + + + + + + + + + +An op that returns a list of float tensors for a quantile stream resource. Each +tensor is Rank 1 containing bucket boundaries for a single feature. + + + + + + + + + + + + + + + + +
+`quantile_stream_resource_handle` + +A `Tensor` of type `resource`. +resource handle referring to a QuantileStreamResource. +
+`num_features` + +An `int` that is `>= 0`. +inferred int; number of features to get bucket boundaries for. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `num_features` `Tensor` objects with type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceHandleOp.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceHandleOp.md new file mode 100644 index 00000000000..98a5ed6e860 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesQuantileStreamResourceHandleOp.md @@ -0,0 +1,84 @@ +description: Creates a handle to a BoostedTreesQuantileStreamResource. + +
+ + +
+ +# tf.raw_ops.BoostedTreesQuantileStreamResourceHandleOp + + + + + + + + + +Creates a handle to a BoostedTreesQuantileStreamResource. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesSerializeEnsemble.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesSerializeEnsemble.md new file mode 100644 index 00000000000..2c388310d61 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesSerializeEnsemble.md @@ -0,0 +1,92 @@ +description: Serializes the tree ensemble to a proto. + +
+ + +
+ +# tf.raw_ops.BoostedTreesSerializeEnsemble + + + + + + + + + +Serializes the tree ensemble to a proto. + + + + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +Handle to the tree ensemble. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (stamp_token, tree_ensemble_serialized). +
+`stamp_token` + +A `Tensor` of type `int64`. +
+`tree_ensemble_serialized` + +A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesSparseAggregateStats.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesSparseAggregateStats.md new file mode 100644 index 00000000000..ac66f811400 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesSparseAggregateStats.md @@ -0,0 +1,163 @@ +description: Aggregates the summary of accumulated stats for the batch. + +
+ + +
+ +# tf.raw_ops.BoostedTreesSparseAggregateStats + + + + + + + + + +Aggregates the summary of accumulated stats for the batch. + + + + + + + + + +The summary stats contains gradients and hessians accumulated for each node, bucket and dimension id. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`node_ids` + +A `Tensor` of type `int32`. +int32; Rank 1 Tensor containing node ids for each example, shape [batch_size]. +
+`gradients` + +A `Tensor` of type `float32`. +float32; Rank 2 Tensor (shape=[batch_size, logits_dimension]) with gradients for each example. +
+`hessians` + +A `Tensor` of type `float32`. +float32; Rank 2 Tensor (shape=[batch_size, hessian_dimension]) with hessians for each example. +
+`feature_indices` + +A `Tensor` of type `int32`. +int32; Rank 2 indices of feature sparse Tensors (shape=[number of sparse entries, 2]). +Number of sparse entries across all instances from the batch. The first value is +the index of the instance, the second is dimension of the feature. The second axis +can only have 2 values, i.e., the input dense version of Tensor can only be matrix. +
+`feature_values` + +A `Tensor` of type `int32`. +int32; Rank 1 values of feature sparse Tensors (shape=[number of sparse entries]). +Number of sparse entries across all instances from the batch. The first value is +the index of the instance, the second is dimension of the feature. +
+`feature_shape` + +A `Tensor` of type `int32`. +int32; Rank 1 dense shape of feature sparse Tensors (shape=[2]). +The first axis can only have 2 values, [batch_size, feature_dimension]. +
+`max_splits` + +An `int` that is `>= 1`. +int; the maximum number of splits possible in the whole tree. +
+`num_buckets` + +An `int` that is `>= 1`. +int; equals to the maximum possible value of bucketized feature + 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (stats_summary_indices, stats_summary_values, stats_summary_shape). +
+`stats_summary_indices` + +A `Tensor` of type `int32`. +
+`stats_summary_values` + +A `Tensor` of type `float32`. +
+`stats_summary_shape` + +A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesSparseCalculateBestFeatureSplit.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesSparseCalculateBestFeatureSplit.md new file mode 100644 index 00000000000..b9f5cd20896 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesSparseCalculateBestFeatureSplit.md @@ -0,0 +1,209 @@ +description: Calculates gains for each feature and returns the best possible split information for the feature. + +
+ + +
+ +# tf.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit + + + + + + + + + +Calculates gains for each feature and returns the best possible split information for the feature. + + + + + + + + + +The split information is the best threshold (bucket id), gains and left/right node contributions per node for each feature. + +It is possible that not all nodes can be split on each feature. Hence, the list of possible nodes can differ between the features. Therefore, we return `node_ids_list` for each feature, containing the list of nodes that this feature can be used to split. + +In this manner, the output is the best split per features and per node, so that it needs to be combined later to produce the best split for each node (among all possible features). + +The output shapes are compatible in a way that the first dimension of all tensors are the same and equal to the number of possible split nodes for each feature. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`node_id_range` + +A `Tensor` of type `int32`. +A Rank 1 tensor (shape=[2]) to specify the range [first, last) of node ids to process within `stats_summary_list`. The nodes are iterated between the two nodes specified by the tensor, as like `for node_id in range(node_id_range[0], node_id_range[1])` (Note that the last index node_id_range[1] is exclusive). +
+`stats_summary_indices` + +A `Tensor` of type `int32`. +A Rank 2 int32 tensor of dense shape [N, 4] (N specifies the number of non-zero values) for accumulated stats summary (gradient/hessian) per node per bucket for each feature. The second dimension contains node id, feature dimension, bucket id, and stats dim. +stats dim is the sum of the logits dimension and the hessian dimension; the hessian dimension is either the logits dimension if a diagonal hessian is used, or the logits dimension^2 if a full hessian is used. +
+`stats_summary_values` + +A `Tensor` of type `float32`. +A Rank 1 float tensor of dense shape [N] (N specifies the number of non-zero values), which supplies the values for each element in summary_indices. +
+`stats_summary_shape` + +A `Tensor` of type `int32`. +A Rank 1 tensor of dense shape [4], which specifies the dense shape of the sparse tensor: [num tree nodes, feature dimensions, num buckets, stats dim]. +
+`l1` + +A `Tensor` of type `float32`. +l1 regularization factor on leaf weights, per instance based. +
+`l2` + +A `Tensor` of type `float32`. +l2 regularization factor on leaf weights, per instance based. +
+`tree_complexity` + +A `Tensor` of type `float32`. +adjustment to the gain, per leaf based. +
+`min_node_weight` + +A `Tensor` of type `float32`. +minimum average of hessians in a node required before the node is considered for splitting. +
+`logits_dimension` + +An `int` that is `>= 1`. +The dimension of logit, i.e., number of classes. +
+`split_type` + +An optional `string` from: `"inequality"`. Defaults to `"inequality"`. +A string indicating if this Op should perform inequality split or equality split. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (node_ids, gains, feature_dimensions, thresholds, left_node_contribs, right_node_contribs, split_with_default_directions). +
+`node_ids` + +A `Tensor` of type `int32`. +
+`gains` + +A `Tensor` of type `float32`. +
+`feature_dimensions` + +A `Tensor` of type `int32`. +
+`thresholds` + +A `Tensor` of type `int32`. +
+`left_node_contribs` + +A `Tensor` of type `float32`. +
+`right_node_contribs` + +A `Tensor` of type `float32`. +
+`split_with_default_directions` + +A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesTrainingPredict.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesTrainingPredict.md new file mode 100644 index 00000000000..3801c81824b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesTrainingPredict.md @@ -0,0 +1,138 @@ +description: Runs multiple additive regression ensemble predictors on input instances and + +
+ + +
+ +# tf.raw_ops.BoostedTreesTrainingPredict + + + + + + + + + +Runs multiple additive regression ensemble predictors on input instances and + + + + + + + + + +computes the update to cached logits. It is designed to be used during training. +It traverses the trees starting from cached tree id and cached node id and +calculates the updates to be pushed to the cache. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +
+`cached_tree_ids` + +A `Tensor` of type `int32`. +Rank 1 Tensor containing cached tree ids which is the starting +tree of prediction. +
+`cached_node_ids` + +A `Tensor` of type `int32`. +Rank 1 Tensor containing cached node id which is the starting +node of prediction. +
+`bucketized_features` + +A list of at least 1 `Tensor` objects with type `int32`. +A list of rank 1 Tensors containing bucket id for each +feature. +
+`logits_dimension` + +An `int`. +scalar, dimension of the logits, to be used for partial logits +shape. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (partial_logits, tree_ids, node_ids). +
+`partial_logits` + +A `Tensor` of type `float32`. +
+`tree_ids` + +A `Tensor` of type `int32`. +
+`node_ids` + +A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesUpdateEnsemble.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesUpdateEnsemble.md new file mode 100644 index 00000000000..ab7c7a39744 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesUpdateEnsemble.md @@ -0,0 +1,160 @@ +description: Updates the tree ensemble by either adding a layer to the last tree being grown + +
+ + +
+ +# tf.raw_ops.BoostedTreesUpdateEnsemble + + + + + + + + + +Updates the tree ensemble by either adding a layer to the last tree being grown + + + + + + + + + +or by starting a new tree. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +Handle to the ensemble variable. +
+`feature_ids` + +A `Tensor` of type `int32`. +Rank 1 tensor with ids for each feature. This is the real id of +the feature that will be used in the split. +
+`node_ids` + +A list of `Tensor` objects with type `int32`. +List of rank 1 tensors representing the nodes for which this feature +has a split. +
+`gains` + +A list with the same length as `node_ids` of `Tensor` objects with type `float32`. +List of rank 1 tensors representing the gains for each of the feature's +split. +
+`thresholds` + +A list with the same length as `node_ids` of `Tensor` objects with type `int32`. +List of rank 1 tensors representing the thresholds for each of the +feature's splits. +
+`left_node_contribs` + +A list with the same length as `node_ids` of `Tensor` objects with type `float32`. +List of rank 2 tensors with left leaf contribs for each of +the feature's splits. Will be added to the previous node values to constitute +the values of the left nodes. +
+`right_node_contribs` + +A list with the same length as `node_ids` of `Tensor` objects with type `float32`. +List of rank 2 tensors with right leaf contribs for each +of the feature's splits. Will be added to the previous node values to constitute +the values of the right nodes. +
+`max_depth` + +A `Tensor` of type `int32`. Max depth of the tree to build. +
+`learning_rate` + +A `Tensor` of type `float32`. +shrinkage const for each new tree. +
+`pruning_mode` + +An `int` that is `>= 0`. +0-No pruning, 1-Pre-pruning, 2-Post-pruning. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BoostedTreesUpdateEnsembleV2.md b/site/en/api_docs/python/tf/raw_ops/BoostedTreesUpdateEnsembleV2.md new file mode 100644 index 00000000000..47e5f22f163 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BoostedTreesUpdateEnsembleV2.md @@ -0,0 +1,184 @@ +description: Updates the tree ensemble by adding a layer to the last tree being grown + +
+ + +
+ +# tf.raw_ops.BoostedTreesUpdateEnsembleV2 + + + + + + + + + +Updates the tree ensemble by adding a layer to the last tree being grown + + + + + + + + + +or by starting a new tree. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +Handle to the ensemble variable. +
+`feature_ids` + +A list of at least 1 `Tensor` objects with type `int32`. +Rank 1 tensor with ids for each feature. This is the real id of +the feature that will be used in the split. +
+`dimension_ids` + +A list of `Tensor` objects with type `int32`. +List of rank 1 tensors representing the dimension in each feature. +
+`node_ids` + +A list with the same length as `dimension_ids` of `Tensor` objects with type `int32`. +List of rank 1 tensors representing the nodes for which this feature +has a split. +
+`gains` + +A list with the same length as `dimension_ids` of `Tensor` objects with type `float32`. +List of rank 1 tensors representing the gains for each of the feature's +split. +
+`thresholds` + +A list with the same length as `dimension_ids` of `Tensor` objects with type `int32`. +List of rank 1 tensors representing the thresholds for each of the +feature's splits. +
+`left_node_contribs` + +A list with the same length as `dimension_ids` of `Tensor` objects with type `float32`. +List of rank 2 tensors with left leaf contribs for each of +the feature's splits. Will be added to the previous node values to constitute +the values of the left nodes. +
+`right_node_contribs` + +A list with the same length as `dimension_ids` of `Tensor` objects with type `float32`. +List of rank 2 tensors with right leaf contribs for each +of the feature's splits. Will be added to the previous node values to constitute +the values of the right nodes. +
+`split_types` + +A list with the same length as `dimension_ids` of `Tensor` objects with type `string`. +List of rank 1 tensors representing the split type for each feature. +
+`max_depth` + +A `Tensor` of type `int32`. Max depth of the tree to build. +
+`learning_rate` + +A `Tensor` of type `float32`. +shrinkage const for each new tree. +
+`pruning_mode` + +A `Tensor` of type `int32`. +0-No pruning, 1-Pre-pruning, 2-Post-pruning. +
+`logits_dimension` + +An optional `int`. Defaults to `1`. +scalar, dimension of the logits +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BroadcastArgs.md b/site/en/api_docs/python/tf/raw_ops/BroadcastArgs.md new file mode 100644 index 00000000000..cc2b1a2380a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BroadcastArgs.md @@ -0,0 +1,86 @@ +description: Return the shape of s0 op s1 with broadcast. + +
+ + +
+ +# tf.raw_ops.BroadcastArgs + + + + + + + + + +Return the shape of s0 op s1 with broadcast. + + + + + + + + + +Given `s0` and `s1`, tensors that represent shapes, compute `r0`, the +broadcasted shape. `s0`, `s1` and `r0` are all integer vectors. + + + + + + + + + + + + + + + + +
+`s0` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`s1` + +A `Tensor`. Must have the same type as `s0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `s0`. +
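
For illustration, a minimal sketch of calling this op directly through `tf.raw_ops`; the shape vectors below are made-up example values:

```
import tensorflow as tf

# Shapes [2, 3, 1] and [1, 3, 5] broadcast together to [2, 3, 5].
s0 = tf.constant([2, 3, 1], dtype=tf.int32)
s1 = tf.constant([1, 3, 5], dtype=tf.int32)

r0 = tf.raw_ops.BroadcastArgs(s0=s0, s1=s1)
print(r0)  # tf.Tensor([2 3 5], shape=(3,), dtype=int32)
```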
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BroadcastGradientArgs.md b/site/en/api_docs/python/tf/raw_ops/BroadcastGradientArgs.md new file mode 100644 index 00000000000..bb3e6f1fb85 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BroadcastGradientArgs.md @@ -0,0 +1,99 @@ +description: Return the reduction indices for computing gradients of s0 op s1 with broadcast. + +
+ + +
+ +# tf.raw_ops.BroadcastGradientArgs + + + + + + + + + +Return the reduction indices for computing gradients of s0 op s1 with broadcast. + + + + + + + + + +This is typically used by gradient computations for a broadcasting operation. + + + + + + + + + + + + + + + + +
+`s0` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`s1` + +A `Tensor`. Must have the same type as `s0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (r0, r1). +
+`r0` + +A `Tensor`. Has the same type as `s0`. +
+`r1` + +A `Tensor`. Has the same type as `s0`. +
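
As a rough usage sketch (the example shapes are hypothetical), the returned index vectors are the axes a gradient must be summed over to undo broadcasting:

```
import tensorflow as tf

s0 = tf.constant([2, 3, 1], dtype=tf.int32)
s1 = tf.constant([1, 3, 5], dtype=tf.int32)

# r0: axes of s0 that were broadcast (here axis 2); r1: axes of s1 (here axis 0).
r0, r1 = tf.raw_ops.BroadcastGradientArgs(s0=s0, s1=s1)
print(r0.numpy(), r1.numpy())  # [2] [0]
```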
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BroadcastTo.md b/site/en/api_docs/python/tf/raw_ops/BroadcastTo.md new file mode 100644 index 00000000000..0abf4af8cb2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BroadcastTo.md @@ -0,0 +1,105 @@ +description: Broadcast an array for a compatible shape. + +
+ + +
+ +# tf.raw_ops.BroadcastTo + + + + + + + + + +Broadcast an array for a compatible shape. + + + + + + + + + +Broadcasting is the process of making arrays to have compatible shapes +for arithmetic operations. Two shapes are compatible if for each +dimension pair they are either equal or one of them is one. When trying +to broadcast a Tensor to a shape, it starts with the trailing dimensions, +and works its way forward. + +For example, + +``` +>>> x = tf.constant([1, 2, 3]) +>>> y = tf.broadcast_to(x, [3, 3]) +>>> print(y) +tf.Tensor( + [[1 2 3] + [1 2 3] + [1 2 3]], shape=(3, 3), dtype=int32) +``` + +In the above example, the input Tensor with the shape of `[1, 3]` +is broadcasted to output Tensor with shape of `[3, 3]`. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. A Tensor to broadcast. +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D `int` Tensor. The shape of the desired output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Bucketize.md b/site/en/api_docs/python/tf/raw_ops/Bucketize.md new file mode 100644 index 00000000000..da935465f96 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Bucketize.md @@ -0,0 +1,96 @@ +description: Bucketizes 'input' based on 'boundaries'. + +
+ + +
+ +# tf.raw_ops.Bucketize + + + + + + + + + +Bucketizes 'input' based on 'boundaries'. + + + + + + + + + +For example, if the inputs are + boundaries = [0, 10, 100] + input = [[-5, 10000] + [150, 10] + [5, 100]] + +then the output will be + output = [[0, 3] + [3, 2] + [1, 3]] + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`. +A `Tensor` of any shape with int or float values. +
+`boundaries` + +A list of `floats`. +A sorted list of floats giving the boundaries of the buckets. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
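
A small runnable sketch of the example above, calling the op through `tf.raw_ops` with the same sample values as the description:

```
import tensorflow as tf

x = tf.constant([[-5, 10000], [150, 10], [5, 100]], dtype=tf.int32)
out = tf.raw_ops.Bucketize(input=x, boundaries=[0.0, 10.0, 100.0])
print(out)  # [[0 3] [3 2] [1 3]]
```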
+ diff --git a/site/en/api_docs/python/tf/raw_ops/BytesProducedStatsDataset.md b/site/en/api_docs/python/tf/raw_ops/BytesProducedStatsDataset.md new file mode 100644 index 00000000000..327366a98f3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/BytesProducedStatsDataset.md @@ -0,0 +1,98 @@ +description: Records the bytes size of each element of input_dataset in a StatsAggregator. + +
+ + +
+ +# tf.raw_ops.BytesProducedStatsDataset + + + + + + + + + +Records the bytes size of each element of `input_dataset` in a StatsAggregator. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`tag` + +A `Tensor` of type `string`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CSRSparseMatrixComponents.md b/site/en/api_docs/python/tf/raw_ops/CSRSparseMatrixComponents.md new file mode 100644 index 00000000000..dbe993ef552 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CSRSparseMatrixComponents.md @@ -0,0 +1,116 @@ +description: Reads out the CSR components at batch index. + +
+ + +
+ +# tf.raw_ops.CSRSparseMatrixComponents + + + + + + + + + +Reads out the CSR components at batch `index`. + + + + + + + + + +This op is meant only for debugging / testing, and its interface is not expected +to be stable. + + + + + + + + + + + + + + + + + + + +
+`csr_sparse_matrix` + +A `Tensor` of type `variant`. +A batched CSRSparseMatrix. +
+`index` + +A `Tensor` of type `int32`. +The index in `csr_sparse_matrix`'s batch. +
+`type` + +A tf.DType from: `tf.float32, tf.float64, tf.complex64, tf.complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (row_ptrs, col_inds, values). +
+`row_ptrs` + +A `Tensor` of type `int32`. +
+`col_inds` + +A `Tensor` of type `int32`. +
+`values` + +A `Tensor` of type `type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CSRSparseMatrixToDense.md b/site/en/api_docs/python/tf/raw_ops/CSRSparseMatrixToDense.md new file mode 100644 index 00000000000..4ff0f7c2895 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CSRSparseMatrixToDense.md @@ -0,0 +1,84 @@ +description: Convert a (possibly batched) CSRSparseMatrix to dense. + +
+ + +
+ +# tf.raw_ops.CSRSparseMatrixToDense + + + + + + + + + +Convert a (possibly batched) CSRSparseMatrix to dense. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_input` + +A `Tensor` of type `variant`. A batched CSRSparseMatrix. +
+`type` + +A tf.DType from: `tf.float32, tf.float64, tf.complex64, tf.complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CSRSparseMatrixToSparseTensor.md b/site/en/api_docs/python/tf/raw_ops/CSRSparseMatrixToSparseTensor.md new file mode 100644 index 00000000000..144c13cea72 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CSRSparseMatrixToSparseTensor.md @@ -0,0 +1,106 @@ +description: Converts a (possibly batched) CSRSparseMatrix to a SparseTensor. + +
+ + +
+ +# tf.raw_ops.CSRSparseMatrixToSparseTensor + + + + + + + + + +Converts a (possibly batched) CSRSparseMatrix to a SparseTensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_matrix` + +A `Tensor` of type `variant`. +A (possibly batched) CSRSparseMatrix. +
+`type` + +A tf.DType from: `tf.float32, tf.float64, tf.complex64, tf.complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (indices, values, dense_shape). +
+`indices` + +A `Tensor` of type `int64`. +
+`values` + +A `Tensor` of type `type`. +
+`dense_shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CSVDataset.md b/site/en/api_docs/python/tf/raw_ops/CSVDataset.md new file mode 100644 index 00000000000..9b4e55ace87 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CSVDataset.md @@ -0,0 +1,139 @@ +
+ + +
+ +# tf.raw_ops.CSVDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A `Tensor` of type `string`. +
+`compression_type` + +A `Tensor` of type `string`. +
+`buffer_size` + +A `Tensor` of type `int64`. +
+`header` + +A `Tensor` of type `bool`. +
+`field_delim` + +A `Tensor` of type `string`. +
+`use_quote_delim` + +A `Tensor` of type `bool`. +
+`na_value` + +A `Tensor` of type `string`. +
+`select_cols` + +A `Tensor` of type `int64`. +
+`record_defaults` + +A list of `Tensor` objects with types from: `float32`, `float64`, `int32`, `int64`, `string`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CTCBeamSearchDecoder.md b/site/en/api_docs/python/tf/raw_ops/CTCBeamSearchDecoder.md new file mode 100644 index 00000000000..a1dc3e08c8a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CTCBeamSearchDecoder.md @@ -0,0 +1,143 @@ +description: Performs beam search decoding on the logits given in input. + +
+ + +
+ +# tf.raw_ops.CTCBeamSearchDecoder + + + + + + + + + +Performs beam search decoding on the logits given in input. + + + + + + + + + +A note about the attribute merge_repeated: For the beam search decoder, +this means that if consecutive entries in a beam are the same, only +the first of these is emitted. That is, when the top path is "A B B B B", +"A B" is returned if merge_repeated = True but "A B B B B" is +returned if merge_repeated = False. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +3-D, shape: `(max_time x batch_size x num_classes)`, the logits. +
+`sequence_length` + +A `Tensor` of type `int32`. +A vector containing sequence lengths, size `(batch)`. +
+`beam_width` + +An `int` that is `>= 1`. +A scalar >= 0 (beam search beam width). +
+`top_paths` + +An `int` that is `>= 1`. +A scalar >= 0, <= beam_width (controls output size). +
+`merge_repeated` + +An optional `bool`. Defaults to `True`. +If true, merge repeated classes in output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (decoded_indices, decoded_values, decoded_shape, log_probability). +
+`decoded_indices` + +A list of `top_paths` `Tensor` objects with type `int64`. +
+`decoded_values` + +A list of `top_paths` `Tensor` objects with type `int64`. +
+`decoded_shape` + +A list of `top_paths` `Tensor` objects with type `int64`. +
+`log_probability` + +A `Tensor`. Has the same type as `inputs`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CTCGreedyDecoder.md b/site/en/api_docs/python/tf/raw_ops/CTCGreedyDecoder.md new file mode 100644 index 00000000000..62c72ae54a0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CTCGreedyDecoder.md @@ -0,0 +1,131 @@ +description: Performs greedy decoding on the logits given in inputs. + +
+ + +
+ +# tf.raw_ops.CTCGreedyDecoder + + + + + + + + + +Performs greedy decoding on the logits given in inputs. + + + + + + + + + +A note about the attribute merge_repeated: if enabled, when +consecutive logits' maximum indices are the same, only the first of +these is emitted. Labeling the blank '*', the sequence "A B B * B B" +becomes "A B B" if merge_repeated = True and "A B B B B" if +merge_repeated = False. + +Regardless of the value of merge_repeated, if the maximum index of a given +time and batch corresponds to the blank, index `(num_classes - 1)`, no new +element is emitted. + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +3-D, shape: `(max_time x batch_size x num_classes)`, the logits. +
+`sequence_length` + +A `Tensor` of type `int32`. +A vector containing sequence lengths, size `(batch_size)`. +
+`merge_repeated` + +An optional `bool`. Defaults to `False`. +If True, merge repeated classes in output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (decoded_indices, decoded_values, decoded_shape, log_probability). +
+`decoded_indices` + +A `Tensor` of type `int64`. +
+`decoded_values` + +A `Tensor` of type `int64`. +
+`decoded_shape` + +A `Tensor` of type `int64`. +
+`log_probability` + +A `Tensor`. Has the same type as `inputs`. +
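
As a hedged sketch, this op is usually reached through the public wrapper `tf.nn.ctc_greedy_decoder`, which returns the decoded labels as SparseTensors; all shapes and values below are made up:

```
import tensorflow as tf

# Toy logits: max_time=4, batch_size=1, num_classes=3 (the last class is the blank).
logits = tf.random.normal([4, 1, 3])
seq_len = tf.constant([4], dtype=tf.int32)

(decoded,), neg_sum_logits = tf.nn.ctc_greedy_decoder(logits, seq_len)
print(tf.sparse.to_dense(decoded))  # decoded label ids for the single batch element
```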
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CTCLoss.md b/site/en/api_docs/python/tf/raw_ops/CTCLoss.md new file mode 100644 index 00000000000..8b8a3ed6ac9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CTCLoss.md @@ -0,0 +1,151 @@ +description: Calculates the CTC Loss (log probability) for each batch entry. Also calculates + +
+ + +
+ +# tf.raw_ops.CTCLoss + + + + + + + + + +Calculates the CTC Loss (log probability) for each batch entry. Also calculates + + + + + + + + + +the gradient. This class performs the softmax operation for you, so inputs +should be e.g. linear projections of outputs by an LSTM. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +3-D, shape: `(max_time x batch_size x num_classes)`, the logits. +
+`labels_indices` + +A `Tensor` of type `int64`. +The indices of a `SparseTensor`. +`labels_indices(i, :) == [b, t]` means `labels_values(i)` stores the id for +`(batch b, time t)`. +
+`labels_values` + +A `Tensor` of type `int32`. +The values (labels) associated with the given batch and time. +
+`sequence_length` + +A `Tensor` of type `int32`. +A vector containing sequence lengths (batch). +
+`preprocess_collapse_repeated` + +An optional `bool`. Defaults to `False`. +Scalar, if true then repeated labels are +collapsed prior to the CTC calculation. +
+`ctc_merge_repeated` + +An optional `bool`. Defaults to `True`. +Scalar. If set to false, *during* CTC calculation +repeated non-blank labels will not be merged and are interpreted as +individual labels. This is a simplified version of CTC. +
+`ignore_longer_outputs_than_inputs` + +An optional `bool`. Defaults to `False`. +Scalar. If set to true, during CTC +calculation, items that have longer output sequences than input sequences +are skipped: they don't contribute to the loss term and have zero-gradient. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (loss, gradient). +
+`loss` + +A `Tensor`. Has the same type as `inputs`. +
+`gradient` + +A `Tensor`. Has the same type as `inputs`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CTCLossV2.md b/site/en/api_docs/python/tf/raw_ops/CTCLossV2.md new file mode 100644 index 00000000000..f367d854952 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CTCLossV2.md @@ -0,0 +1,152 @@ +description: Calculates the CTC Loss (log probability) for each batch entry. Also calculates + +
+ + +
+ +# tf.raw_ops.CTCLossV2 + + + + + + + + + +Calculates the CTC Loss (log probability) for each batch entry. Also calculates + + + + + + + + + +the gradient. This class performs the softmax operation for you, so inputs +should be e.g. linear projections of outputs by an LSTM. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor` of type `float32`. +3-D, shape: `(max_time x batch_size x num_classes)`, the logits. Default blank +label is 0 rather than num_classes - 1. +
+`labels_indices` + +A `Tensor` of type `int64`. +The indices of a `SparseTensor`. +`labels_indices(i, :) == [b, t]` means `labels_values(i)` stores the id for +`(batch b, time t)`. +
+`labels_values` + +A `Tensor` of type `int32`. +The values (labels) associated with the given batch and time. +
+`sequence_length` + +A `Tensor` of type `int32`. +A vector containing sequence lengths (batch). +
+`preprocess_collapse_repeated` + +An optional `bool`. Defaults to `False`. +Scalar, if true then repeated labels are +collapsed prior to the CTC calculation. +
+`ctc_merge_repeated` + +An optional `bool`. Defaults to `True`. +Scalar. If set to false, *during* CTC calculation +repeated non-blank labels will not be merged and are interpreted as +individual labels. This is a simplified version of CTC. +
+`ignore_longer_outputs_than_inputs` + +An optional `bool`. Defaults to `False`. +Scalar. If set to true, during CTC +calculation, items that have longer output sequences than input sequences +are skipped: they don't contribute to the loss term and have zero-gradient. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (loss, gradient). +
+`loss` + +A `Tensor` of type `float32`. +
+`gradient` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CacheDataset.md b/site/en/api_docs/python/tf/raw_ops/CacheDataset.md new file mode 100644 index 00000000000..e93e04f5a94 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CacheDataset.md @@ -0,0 +1,104 @@ +description: Creates a dataset that caches elements from input_dataset. + +
+ + +
+ +# tf.raw_ops.CacheDataset + + + + + + + + + +Creates a dataset that caches elements from `input_dataset`. + + + + + + + + + +A CacheDataset will iterate over the input_dataset, and store tensors. If the +cache already exists, the cache will be used. If the cache is inappropriate +(e.g. cannot be opened, contains tensors of the wrong shape / size), an error +will be returned when used. + + + + + + + + + + + + + + + + + + + + + + 
+`input_dataset` + +A `Tensor` of type `variant`. +
+`filename` + +A `Tensor` of type `string`. +A path on the filesystem where we should cache the dataset. Note: this +will be a directory. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
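
In everyday code this op is typically created via the `tf.data` `cache` transformation rather than called directly; a minimal sketch (no filename is passed, so this variant caches in memory):

```
import tensorflow as tf

ds = tf.data.Dataset.range(5).map(lambda x: x * 2)
cached = ds.cache()  # pass a filename argument to cache on disk instead
print(list(cached.as_numpy_iterator()))  # [0, 2, 4, 6, 8]
```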
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CacheDatasetV2.md b/site/en/api_docs/python/tf/raw_ops/CacheDatasetV2.md new file mode 100644 index 00000000000..aecd3c184c1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CacheDatasetV2.md @@ -0,0 +1,103 @@ +
+ + +
+ +# tf.raw_ops.CacheDatasetV2 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`filename` + +A `Tensor` of type `string`. +
+`cache` + +A `Tensor` of type `resource`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Case.md b/site/en/api_docs/python/tf/raw_ops/Case.md new file mode 100644 index 00000000000..194cbaced39 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Case.md @@ -0,0 +1,125 @@ +description: An n-way switch statement which calls a single branch function. + +
+ + +
+ +# tf.raw_ops.Case + + + + + + + + + +An n-way switch statement which calls a single branch function. + + + + + + + + + + An n-way switch statement, implementing the following: + ``` + switch (branch_index) { + case 0: + output = branches[0](input); + break; + case 1: + output = branches[1](input); + break; + ... + case [[nbranches-1]]: + default: + output = branches[nbranches-1](input); + break; + } + ``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`branch_index` + +A `Tensor` of type `int32`. +The branch selector, an int32 Tensor. +
+`input` + +A list of `Tensor` objects. +A list of input tensors passed to the branch function. +
+`Tout` + +A list of `tf.DTypes`. A list of output types. +
+`branches` + +A list of functions decorated with @Defun that has length `>= 1`. +A list of functions each of which takes 'inputs' and returns a list of +tensors, whose types are the same as what every other branch returns. +
+`output_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
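
A hedged sketch using the public wrapper `tf.switch_case`, which builds a Case-style op from plain Python callables; the branch values below are arbitrary:

```
import tensorflow as tf

branch_index = tf.constant(1)
output = tf.switch_case(
    branch_index,
    branch_fns={0: lambda: tf.constant(10),
                1: lambda: tf.constant(20),
                2: lambda: tf.constant(30)})
print(output)  # tf.Tensor(20, shape=(), dtype=int32)
```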
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Cast.md b/site/en/api_docs/python/tf/raw_ops/Cast.md new file mode 100644 index 00000000000..8fe2cb2be13 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Cast.md @@ -0,0 +1,91 @@ +description: Cast x of type SrcT to y of DstT. + +
+ + +
+ +# tf.raw_ops.Cast + + + + + + + + + +Cast x of type SrcT to y of DstT. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. +
+`DstT` + +A tf.DType. +
+`Truncate` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `DstT`. +
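
For illustration, a minimal sketch calling the raw op directly (equivalent to the higher-level `tf.cast`); the sample values are arbitrary:

```
import tensorflow as tf

x = tf.constant([1.8, -2.7, 3.1])
y = tf.raw_ops.Cast(x=x, DstT=tf.int32)  # float -> int truncates toward zero
print(y)  # tf.Tensor([ 1 -2  3], shape=(3,), dtype=int32)
```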
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Ceil.md b/site/en/api_docs/python/tf/raw_ops/Ceil.md new file mode 100644 index 00000000000..01e479582a0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Ceil.md @@ -0,0 +1,77 @@ +description: Returns element-wise smallest integer not less than x. + +
+ + +
+ +# tf.raw_ops.Ceil + + + + + + + + + +Returns element-wise smallest integer not less than x. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CheckNumerics.md b/site/en/api_docs/python/tf/raw_ops/CheckNumerics.md new file mode 100644 index 00000000000..772d21b153e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CheckNumerics.md @@ -0,0 +1,86 @@ +description: Checks a tensor for NaN and Inf values. + +
+ + +
+ +# tf.raw_ops.CheckNumerics + + + + + + + + + +Checks a tensor for NaN and Inf values. + + + + + + + + + +When run, reports an `InvalidArgument` error if `tensor` has any values +that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is. + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`message` + +A `string`. Prefix of the error message. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CheckNumericsV2.md b/site/en/api_docs/python/tf/raw_ops/CheckNumericsV2.md new file mode 100644 index 00000000000..725afa6348d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CheckNumericsV2.md @@ -0,0 +1,88 @@ +description: Checks a tensor for NaN, -Inf and +Inf values. + +
+ + +
+ +# tf.raw_ops.CheckNumericsV2 + + + + + + + + + +Checks a tensor for NaN, -Inf and +Inf values. + + + + + + + + + +When run, reports an `InvalidArgument` error if `tensor` has any values +that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is. +Unlike CheckNumerics (V1), CheckNumericsV2 distinguishes -Inf and +Inf in the +errors it throws. + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`message` + +A `string`. Prefix of the error message. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
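
A minimal sketch of the pass-through behaviour; the message string is arbitrary:

```
import tensorflow as tf

ok = tf.raw_ops.CheckNumericsV2(
    tensor=tf.constant([1.0, 2.0]), message="after my op")
print(ok)  # the input is returned unchanged

# A tensor containing Inf or NaN would instead raise an InvalidArgumentError, e.g.:
# tf.raw_ops.CheckNumericsV2(tensor=tf.constant([1.0, float("inf")]), message="after my op")
```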
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Cholesky.md b/site/en/api_docs/python/tf/raw_ops/Cholesky.md new file mode 100644 index 00000000000..5b80dbca1de --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Cholesky.md @@ -0,0 +1,91 @@ +description: Computes the Cholesky decomposition of one or more square matrices. + +
+ + +
+ +# tf.raw_ops.Cholesky + + + + + + + + + +Computes the Cholesky decomposition of one or more square matrices. + + + + + + + + + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. + +The input has to be symmetric and positive definite. Only the lower-triangular +part of the input will be used for this operation. The upper-triangular part +will not be read. + +The output is a tensor of the same shape as the input +containing the Cholesky decompositions for all input submatrices `[..., :, :]`. + +**Note**: The gradient computation on GPU is faster for large matrices but +not for large batch dimensions when the submatrices are small. In this +case it might be faster to use the CPU. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
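
For illustration, a small sketch on a single symmetric positive-definite matrix (the matrix values are made up):

```
import tensorflow as tf

a = tf.constant([[4.0, 2.0],
                 [2.0, 3.0]])             # symmetric positive definite
l = tf.raw_ops.Cholesky(input=a)          # lower-triangular factor
print(tf.matmul(l, l, transpose_b=True))  # reconstructs `a` up to float error
```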
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CholeskyGrad.md b/site/en/api_docs/python/tf/raw_ops/CholeskyGrad.md new file mode 100644 index 00000000000..75b5a66222f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CholeskyGrad.md @@ -0,0 +1,92 @@ +description: Computes the reverse mode backpropagated gradient of the Cholesky algorithm. + +
+ + +
+ +# tf.raw_ops.CholeskyGrad + + + + + + + + + +Computes the reverse mode backpropagated gradient of the Cholesky algorithm. + + + + + + + + + +For an explanation see "Differentiation of the Cholesky algorithm" by +Iain Murray http://arxiv.org/abs/1602.07527. + + + + + + + + + + + + + + + + +
+`l` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +Output of batch Cholesky algorithm l = cholesky(A). Shape is `[..., M, M]`. +Algorithm depends only on lower triangular part of the innermost matrices of +this tensor. +
+`grad` + +A `Tensor`. Must have the same type as `l`. +df/dl where f is some scalar function. Shape is `[..., M, M]`. +Algorithm depends only on lower triangular part of the innermost matrices of +this tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `l`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ChooseFastestBranchDataset.md b/site/en/api_docs/python/tf/raw_ops/ChooseFastestBranchDataset.md new file mode 100644 index 00000000000..f89ffcbdb4b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ChooseFastestBranchDataset.md @@ -0,0 +1,133 @@ +
+ + +
+ +# tf.raw_ops.ChooseFastestBranchDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`ratio_numerator` + +A `Tensor` of type `int64`. +
+`ratio_denominator` + +A `Tensor` of type `int64`. +
+`other_arguments` + +A list of `Tensor` objects. +
+`num_elements_per_branch` + +An `int` that is `>= 1`. +
+`branches` + +A list of functions decorated with @Defun that has length `>= 1`. +
+`other_arguments_lengths` + +A list of `ints` that has length `>= 1`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ChooseFastestDataset.md b/site/en/api_docs/python/tf/raw_ops/ChooseFastestDataset.md new file mode 100644 index 00000000000..a8e4d2ad206 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ChooseFastestDataset.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.ChooseFastestDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_datasets` + +A list of at least 2 `Tensor` objects with type `variant`. +
+`num_experiments` + +An `int`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ClipByValue.md b/site/en/api_docs/python/tf/raw_ops/ClipByValue.md new file mode 100644 index 00000000000..459e0c66243 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ClipByValue.md @@ -0,0 +1,100 @@ +description: Clips tensor values to a specified min and max. + +
+ + +
+ +# tf.raw_ops.ClipByValue + + + + + + + + + +Clips tensor values to a specified min and max. + + + + + + + + + +Given a tensor `t`, this operation returns a tensor of the same type and +shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`. +Any values less than `clip_value_min` are set to `clip_value_min`. Any values +greater than `clip_value_max` are set to `clip_value_max`. + + + + + + + + + + + + + + + + + + + +
+`t` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A `Tensor`. +
+`clip_value_min` + +A `Tensor`. Must have the same type as `t`. +A 0-D (scalar) `Tensor`, or a `Tensor` with the same shape +as `t`. The minimum value to clip by. +
+`clip_value_max` + +A `Tensor`. Must have the same type as `t`. +A 0-D (scalar) `Tensor`, or a `Tensor` with the same shape +as `t`. The maximum value to clip by. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `t`. +
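
A minimal sketch with scalar clip bounds (the sample values are arbitrary); note the bounds must have the same dtype as `t`:

```
import tensorflow as tf

t = tf.constant([-2.0, 0.5, 3.0])
clipped = tf.raw_ops.ClipByValue(
    t=t,
    clip_value_min=tf.constant(0.0),
    clip_value_max=tf.constant(1.0))
print(clipped)  # tf.Tensor([0.  0.5 1. ], shape=(3,), dtype=float32)
```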
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CloseSummaryWriter.md b/site/en/api_docs/python/tf/raw_ops/CloseSummaryWriter.md new file mode 100644 index 00000000000..200197082d6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CloseSummaryWriter.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.CloseSummaryWriter + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CollectiveBcastRecv.md b/site/en/api_docs/python/tf/raw_ops/CollectiveBcastRecv.md new file mode 100644 index 00000000000..f0a8680094a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CollectiveBcastRecv.md @@ -0,0 +1,113 @@ +description: Receives a tensor value broadcast from another device. + +
+ + +
+ +# tf.raw_ops.CollectiveBcastRecv + + + + + + + + + +Receives a tensor value broadcast from another device. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`T` + +A tf.DType from: `tf.bool, tf.float32, tf.half, tf.float64, tf.int32, tf.int64`. +
+`group_size` + +An `int`. +
+`group_key` + +An `int`. +
+`instance_key` + +An `int`. +
+`shape` + +A tf.TensorShape or list of `ints`. +
+`communication_hint` + +An optional `string`. Defaults to `"auto"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `T`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CollectiveBcastSend.md b/site/en/api_docs/python/tf/raw_ops/CollectiveBcastSend.md new file mode 100644 index 00000000000..4d2273baea5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CollectiveBcastSend.md @@ -0,0 +1,113 @@ +description: Broadcasts a tensor value to one or more other devices. + +
+ + +
+ +# tf.raw_ops.CollectiveBcastSend + + + + + + + + + +Broadcasts a tensor value to one or more other devices. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `bool`, `float32`, `half`, `float64`, `int32`, `int64`. +
+`group_size` + +An `int`. +
+`group_key` + +An `int`. +
+`instance_key` + +An `int`. +
+`shape` + +A tf.TensorShape or list of `ints`. +
+`communication_hint` + +An optional `string`. Defaults to `"auto"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CollectiveGather.md b/site/en/api_docs/python/tf/raw_ops/CollectiveGather.md new file mode 100644 index 00000000000..12d3ec216fb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CollectiveGather.md @@ -0,0 +1,113 @@ +description: Mutually accumulates multiple tensors of identical type and shape. + +
+ + +
+ +# tf.raw_ops.CollectiveGather + + + + + + + + + +Mutually accumulates multiple tensors of identical type and shape. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `half`, `float64`, `int32`, `int64`. +
+`group_size` + +An `int`. +
+`group_key` + +An `int`. +
+`instance_key` + +An `int`. +
+`shape` + +A tf.TensorShape or list of `ints`. +
+`communication_hint` + +An optional `string`. Defaults to `"auto"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CollectivePermute.md b/site/en/api_docs/python/tf/raw_ops/CollectivePermute.md new file mode 100644 index 00000000000..21d394cfc1e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CollectivePermute.md @@ -0,0 +1,92 @@ +description: An Op to permute tensors across replicated TPU instances. + +
+ + +
+ +# tf.raw_ops.CollectivePermute + + + + + + + + + +An Op to permute tensors across replicated TPU instances. + + + + + + + + + +Each instance supplies its own input. + +For example, suppose there are 4 TPU instances: `[A, B, C, D]`. Passing +source_target_pairs=`[[0,1],[1,2],[2,3],[3,0]]` gets the outputs: +`[D, A, B, C]`. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The local input to be permuted. Currently only supports float and +bfloat16. +
+`source_target_pairs` + +A `Tensor` of type `int32`. +A tensor with shape [num_pairs, 2]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CollectiveReduce.md b/site/en/api_docs/python/tf/raw_ops/CollectiveReduce.md new file mode 100644 index 00000000000..d73157f7544 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CollectiveReduce.md @@ -0,0 +1,134 @@ +description: Mutually reduces multiple tensors of identical type and shape. + +
+ + +
+ +# tf.raw_ops.CollectiveReduce + + + + + + + + + +Mutually reduces multiple tensors of identical type and shape. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `half`, `float64`, `int32`, `int64`. +
+`group_size` + +An `int`. +
+`group_key` + +An `int`. +
+`instance_key` + +An `int`. +
+`merge_op` + +A `string` from: `"Min", "Max", "Mul", "Add"`. +
+`final_op` + +A `string` from: `"Id", "Div"`. +
+`subdiv_offsets` + +A list of `ints`. +
+`wait_for` + +An optional list of `ints`. Defaults to `[]`. +
+`communication_hint` + +An optional `string`. Defaults to `"auto"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CombinedNonMaxSuppression.md b/site/en/api_docs/python/tf/raw_ops/CombinedNonMaxSuppression.md new file mode 100644 index 00000000000..f5856323b18 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CombinedNonMaxSuppression.md @@ -0,0 +1,188 @@ +description: Greedily selects a subset of bounding boxes in descending order of score, + +
+ + +
+ +# tf.raw_ops.CombinedNonMaxSuppression + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score, + + + + + + + + + +This operation performs non_max_suppression on the inputs per batch, across +all classes. +Prunes away boxes that have high intersection-over-union (IOU) overlap +with previously selected boxes. Bounding boxes are supplied as +[y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any +diagonal pair of box corners and the coordinates can be provided as normalized +(i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm +is agnostic to where the origin is in the coordinate system. Also note that +this algorithm is invariant to orthogonal transformations and translations +of the coordinate system; thus translating or reflections of the coordinate +system result in the same boxes being selected by the algorithm. +The output of this operation is the final boxes, scores and classes tensor +returned after performing non_max_suppression. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`boxes` + +A `Tensor` of type `float32`. +A 4-D float tensor of shape `[batch_size, num_boxes, q, 4]`. If `q` is 1 then +same boxes are used for all classes otherwise, if `q` is equal to number of +classes, class-specific boxes are used. +
+`scores` + +A `Tensor` of type `float32`. +A 3-D float tensor of shape `[batch_size, num_boxes, num_classes]` +representing a single score corresponding to each box (each row of boxes). +
+`max_output_size_per_class` + +A `Tensor` of type `int32`. +A scalar integer tensor representing the maximum number of +boxes to be selected by non max suppression per class +
+`max_total_size` + +A `Tensor` of type `int32`. +A scalar representing maximum number of boxes retained over all classes. +
+`iou_threshold` + +A `Tensor` of type `float32`. +A 0-D float tensor representing the threshold for deciding whether +boxes overlap too much with respect to IOU. +
+`score_threshold` + +A `Tensor` of type `float32`. +A 0-D float tensor representing the threshold for deciding when to remove +boxes based on score. +
+`pad_per_class` + +An optional `bool`. Defaults to `False`. +If false, the output nmsed boxes, scores and classes +are padded/clipped to `max_total_size`. If true, the +output nmsed boxes, scores and classes are padded to be of length +`max_size_per_class`*`num_classes`, unless it exceeds `max_total_size` in +which case it is clipped to `max_total_size`. Defaults to false. +
+`clip_boxes` + +An optional `bool`. Defaults to `True`. +If true, assume the box coordinates are between [0, 1] and clip the output boxes +if they fall beyond [0, 1]. If false, do not do clipping and output the box +coordinates as it is. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (nmsed_boxes, nmsed_scores, nmsed_classes, valid_detections). +
+`nmsed_boxes` + +A `Tensor` of type `float32`. +
+`nmsed_scores` + +A `Tensor` of type `float32`. +
+`nmsed_classes` + +A `Tensor` of type `float32`. +
+`valid_detections` + +A `Tensor` of type `int32`. +
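
As a hedged sketch, the public wrapper `tf.image.combined_non_max_suppression` exposes this op; the two candidate boxes below are made up and heavily overlapping, so only one detection survives:

```
import tensorflow as tf

# batch_size=1, num_boxes=2, q=1 (boxes shared across classes), num_classes=1
boxes = tf.constant([[[[0.0, 0.0, 1.0, 1.0]],
                      [[0.0, 0.1, 1.0, 1.1]]]])   # shape [1, 2, 1, 4]
scores = tf.constant([[[0.9], [0.6]]])            # shape [1, 2, 1]

nmsed_boxes, nmsed_scores, nmsed_classes, valid = (
    tf.image.combined_non_max_suppression(
        boxes, scores,
        max_output_size_per_class=1,
        max_total_size=1,
        iou_threshold=0.5,
        score_threshold=0.1))
print(valid)  # number of valid detections per batch element
```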
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CompareAndBitpack.md b/site/en/api_docs/python/tf/raw_ops/CompareAndBitpack.md new file mode 100644 index 00000000000..14c33621fab --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CompareAndBitpack.md @@ -0,0 +1,109 @@ +description: Compare values of input to threshold and pack resulting bits into a uint8. + +
+ + +
+ +# tf.raw_ops.CompareAndBitpack + + + + + + + + + +Compare values of `input` to `threshold` and pack resulting bits into a `uint8`. + + + + + + + + + +Each comparison returns a boolean `true` (if `input_value > threshold`) +or and `false` otherwise. + +This operation is useful for Locality-Sensitive-Hashing (LSH) and other +algorithms that use hashing approximations of cosine and `L2` distances; +codes can be generated from an input via: + +```python +codebook_size = 50 +codebook_bits = codebook_size * 32 +codebook = tf.get_variable('codebook', [x.shape[-1].value, codebook_bits], + dtype=x.dtype, + initializer=tf.orthogonal_initializer()) +codes = compare_and_threshold(tf.matmul(x, codebook), threshold=0.) +codes = tf.bitcast(codes, tf.int32) # go from uint8 to int32 +# now codes has shape x.shape[:-1] + [codebook_size] +``` + +**NOTE**: Currently, the innermost dimension of the tensor must be divisible +by 8. + +Given an `input` shaped `[s0, s1, ..., s_n]`, the output is +a `uint8` tensor shaped `[s0, s1, ..., s_n / 8]`. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `bool`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`. +Values to compare against `threshold` and bitpack. +
+`threshold` + +A `Tensor`. Must have the same type as `input`. +Threshold to compare against. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Complex.md b/site/en/api_docs/python/tf/raw_ops/Complex.md new file mode 100644 index 00000000000..0c55895e71b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Complex.md @@ -0,0 +1,107 @@ +description: Converts two real numbers to a complex number. + +
+ + +
+ +# tf.raw_ops.Complex + + + + + + + + + +Converts two real numbers to a complex number. + + + + + + + + + +Given a tensor `real` representing the real part of a complex number, and a +tensor `imag` representing the imaginary part of a complex number, this +operation returns complex numbers elementwise of the form \\(a + bj\\), where +*a* represents the `real` part and *b* represents the `imag` part. + +The input tensors `real` and `imag` must have the same shape. + +#### For example: + + + +``` +# tensor 'real' is [2.25, 3.25] +# tensor `imag` is [4.75, 5.75] +tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]] +``` + + + + + + + + + + + + + + + + + + + +
+`real` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`imag` + +A `Tensor`. Must have the same type as `real`. +
+`Tout` + +An optional tf.DType from: `tf.complex64, tf.complex128`. Defaults to tf.complex64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ComplexAbs.md b/site/en/api_docs/python/tf/raw_ops/ComplexAbs.md new file mode 100644 index 00000000000..aa21b1e689e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ComplexAbs.md @@ -0,0 +1,88 @@ +description: Computes the complex absolute value of a tensor. + +
+ + +
+ +# tf.raw_ops.ComplexAbs + + + + + + + + + +Computes the complex absolute value of a tensor. + + + + + + + + + +Given a tensor `x` of complex numbers, this operation returns a tensor of type +`float` or `double` that is the absolute value of each element in `x`. All +elements in `x` must be complex numbers of the form \\(a + bj\\). The absolute +value is computed as \\( \sqrt{a^2 + b^2}\\). + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +
+`Tout` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tout`. +
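
A minimal sketch (the complex sample values are arbitrary):

```
import tensorflow as tf

x = tf.constant([3 + 4j, 5 + 12j], dtype=tf.complex64)
print(tf.raw_ops.ComplexAbs(x=x))  # tf.Tensor([ 5. 13.], shape=(2,), dtype=float32)
```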
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ComputeAccidentalHits.md b/site/en/api_docs/python/tf/raw_ops/ComputeAccidentalHits.md new file mode 100644 index 00000000000..a1ef9b38d52 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ComputeAccidentalHits.md @@ -0,0 +1,136 @@ +description: Computes the ids of the positions in sampled_candidates that match true_labels. + +
+ + +
+ +# tf.raw_ops.ComputeAccidentalHits + + + + + + + + + +Computes the ids of the positions in sampled_candidates that match true_labels. + + + + + + + + + +When doing log-odds NCE, the result of this op should be passed through a +SparseToDense op, then added to the logits of the sampled candidates. This has +the effect of 'removing' the sampled labels that match the true labels by +making the classifier sure that they are sampled labels. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64`. +The true_classes output of UnpackSparseLabels. +
+`sampled_candidates` + +A `Tensor` of type `int64`. +The sampled_candidates output of CandidateSampler. +
+`num_true` + +An `int`. Number of true labels per context. +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 is set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (indices, ids, weights). +
+`indices` + +A `Tensor` of type `int32`. +
+`ids` + +A `Tensor` of type `int64`. +
+`weights` + +A `Tensor` of type `float32`. +
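A minimal sketch of how the op is typically driven (TF 2.x eager execution assumed; the concrete class ids and sampled candidates below are arbitrary):

```python
import tensorflow as tf

true_classes = tf.constant([[1, 5], [3, 7]], dtype=tf.int64)  # [batch_size, num_true]
sampled = tf.constant([5, 2, 3], dtype=tf.int64)              # [num_sampled]
indices, ids, weights = tf.raw_ops.ComputeAccidentalHits(
    true_classes=true_classes, sampled_candidates=sampled, num_true=2)
# Row 0 accidentally re-sampled its true class 5 (position 0 in `sampled`) and
# row 1 re-sampled class 3 (position 2), so indices -> [0, 1] and ids -> [0, 2];
# weights holds large negative values to be scattered into the sampled logits.
```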
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Concat.md b/site/en/api_docs/python/tf/raw_ops/Concat.md new file mode 100644 index 00000000000..a2ebe6b9e63 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Concat.md @@ -0,0 +1,88 @@ +description: Concatenates tensors along one dimension. + +
+ + +
+ +# tf.raw_ops.Concat + + + + + + + + + +Concatenates tensors along one dimension. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`concat_dim` + +A `Tensor` of type `int32`. +0-D. The dimension along which to concatenate. Must be in the +range [0, rank(values)). +
+`values` + +A list of at least 2 `Tensor` objects with the same type. +The `N` Tensors to concatenate. Their ranks and types must match, +and their sizes must match in all dimensions except `concat_dim`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `values`. +
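A minimal sketch (assuming TF 2.x eager execution; values are arbitrary):

```python
import tensorflow as tf

t1 = tf.constant([[1, 2], [3, 4]])
t2 = tf.constant([[5, 6], [7, 8]])
tf.raw_ops.Concat(concat_dim=0, values=[t1, t2])
# -> [[1, 2], [3, 4], [5, 6], [7, 8]]
```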
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ConcatOffset.md b/site/en/api_docs/python/tf/raw_ops/ConcatOffset.md new file mode 100644 index 00000000000..27d0fbb3933 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ConcatOffset.md @@ -0,0 +1,99 @@ +description: Computes offsets of concat inputs within its output. + +
+ + +
+ +# tf.raw_ops.ConcatOffset + + + + + + + + + +Computes offsets of concat inputs within its output. + + + + + + + + + + +#### For example: + + + +``` +# 'x' is [2, 2, 7] +# 'y' is [2, 3, 7] +# 'z' is [2, 5, 7] +concat_offset(2, [x, y, z]) => [0, 0, 0], [0, 2, 0], [0, 5, 0] +``` + +This is typically used by gradient computations for a concat operation. + + + + + + + + + + + + + + + + +
+`concat_dim` + +A `Tensor` of type `int32`. +The dimension along which to concatenate. +
+`shape` + +A list of at least 2 `Tensor` objects with type `int32`. +The `N` int32 vectors representing shape of tensors being concatenated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list with the same length as `shape` of `Tensor` objects with type `int32`. +
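A minimal sketch reproducing the shapes from the example above (TF 2.x eager execution assumed):

```python
import tensorflow as tf

shapes = [tf.constant([2, 2, 7]), tf.constant([2, 3, 7]), tf.constant([2, 5, 7])]
offsets = tf.raw_ops.ConcatOffset(concat_dim=1, shape=shapes)
# offsets -> [[0, 0, 0], [0, 2, 0], [0, 5, 0]], one offset vector per input shape
```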
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ConcatV2.md b/site/en/api_docs/python/tf/raw_ops/ConcatV2.md new file mode 100644 index 00000000000..a0ba1aab3b5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ConcatV2.md @@ -0,0 +1,88 @@ +description: Concatenates tensors along one dimension. + +
+ + +
+ +# tf.raw_ops.ConcatV2 + + + + + + + + + +Concatenates tensors along one dimension. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`values` + +A list of at least 2 `Tensor` objects with the same type. +List of `N` Tensors to concatenate. Their ranks and types must match, +and their sizes must match in all dimensions except `concat_dim`. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +0-D. The dimension along which to concatenate. Must be in the +range [-rank(values), rank(values)). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `values`. +
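A minimal sketch (assuming TF 2.x eager execution; values are arbitrary):

```python
import tensorflow as tf

t1 = tf.constant([[1, 2], [3, 4]])
t2 = tf.constant([[5, 6], [7, 8]])
tf.raw_ops.ConcatV2(values=[t1, t2], axis=1)
# -> [[1, 2, 5, 6], [3, 4, 7, 8]]; axis=-1 would address the same dimension here
```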
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ConcatenateDataset.md b/site/en/api_docs/python/tf/raw_ops/ConcatenateDataset.md new file mode 100644 index 00000000000..192c8fd34f9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ConcatenateDataset.md @@ -0,0 +1,98 @@ +description: Creates a dataset that concatenates input_dataset with another_dataset. + +
+ + +
+ +# tf.raw_ops.ConcatenateDataset + + + + + + + + + +Creates a dataset that concatenates `input_dataset` with `another_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`another_dataset` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
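This op is normally produced by `tf.data.Dataset.concatenate` rather than called directly on variant tensors; a minimal sketch of that high-level path (TF 2.x assumed):

```python
import tensorflow as tf

a = tf.data.Dataset.range(3)        # 0, 1, 2
b = tf.data.Dataset.range(3, 6)     # 3, 4, 5
c = a.concatenate(b)                # lowers to ConcatenateDataset
print(list(c.as_numpy_iterator()))  # [0, 1, 2, 3, 4, 5]
```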
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ConditionalAccumulator.md b/site/en/api_docs/python/tf/raw_ops/ConditionalAccumulator.md new file mode 100644 index 00000000000..2eb37558341 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ConditionalAccumulator.md @@ -0,0 +1,117 @@ +description: A conditional accumulator for aggregating gradients. + +
+ + +
+ +# tf.raw_ops.ConditionalAccumulator + + + + + + + + + +A conditional accumulator for aggregating gradients. + + + + + + + + + +The accumulator accepts gradients marked with local_step greater or +equal to the most recent global_step known to the accumulator. The +average can be extracted from the accumulator, provided sufficient +gradients have been accumulated. Extracting the average automatically +resets the aggregate to 0, and increments the global_step recorded by +the accumulator. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +A tf.DType from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`. +The type of the value being accumulated. +
+`shape` + +A tf.TensorShape or list of `ints`. +The shape of the values, can be [], in which case shape is unknown. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this accumulator is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this accumulator will be shared under the +given name across multiple sessions. +
+`reduction_type` + +An optional `string` from: `"MEAN", "SUM"`. Defaults to `"MEAN"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ConfigureDistributedTPU.md b/site/en/api_docs/python/tf/raw_ops/ConfigureDistributedTPU.md new file mode 100644 index 00000000000..2c12dfcc0ab --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ConfigureDistributedTPU.md @@ -0,0 +1,111 @@ +description: Sets up the centralized structures for a distributed TPU system. + +
+ + +
+ +# tf.raw_ops.ConfigureDistributedTPU + + + + + + + + + +Sets up the centralized structures for a distributed TPU system. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`embedding_config` + +An optional `string`. Defaults to `""`. +Reserved. Do not use. +
+`tpu_embedding_config` + +An optional `string`. Defaults to `""`. +Serialized tensorflow.tpu.TPUEmbeddingConfiguration that +describes the embedding lookups of the program. +
+`is_global_init` + +An optional `bool`. Defaults to `False`. +Reserved. Do not use. +
+`enable_whole_mesh_compilations` + +An optional `bool`. Defaults to `False`. +
+`compilation_failure_closes_chips` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ConfigureTPUEmbedding.md b/site/en/api_docs/python/tf/raw_ops/ConfigureTPUEmbedding.md new file mode 100644 index 00000000000..7b0aaac4345 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ConfigureTPUEmbedding.md @@ -0,0 +1,79 @@ +description: Sets up TPUEmbedding in a distributed TPU system. + +
+ + +
+ +# tf.raw_ops.ConfigureTPUEmbedding + + + + + + + + + +Sets up TPUEmbedding in a distributed TPU system. + + + + + + + + + + + + + + + + + + + + + + +
+`config` + +A `string`. +Serialized tensorflow.tpu.TPUEmbeddingConfiguration that +describes the embedding lookups of the program. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Conj.md b/site/en/api_docs/python/tf/raw_ops/Conj.md new file mode 100644 index 00000000000..d70cc9c5b8c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Conj.md @@ -0,0 +1,92 @@ +description: Returns the complex conjugate of a complex number. + +
+ + +
+ +# tf.raw_ops.Conj + + + + + + + + + +Returns the complex conjugate of a complex number. + + + + + + + + + +Given a tensor `input` of complex numbers, this operation returns a tensor of +complex numbers that are the complex conjugate of each element in `input`. The +complex numbers in `input` must be of the form \\(a + bj\\), where *a* is the +real part and *b* is the imaginary part. + +The complex conjugate returned by this operation is of the form \\(a - bj\\). + +#### For example: + + + +``` +# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] +tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j] +``` + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`, `variant`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
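A minimal sketch matching the example above (TF 2.x eager execution assumed):

```python
import tensorflow as tf

x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j], dtype=tf.complex64)
tf.raw_ops.Conj(input=x)  # -> [-2.25-4.75j, 3.25-5.75j]
```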
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ConjugateTranspose.md b/site/en/api_docs/python/tf/raw_ops/ConjugateTranspose.md new file mode 100644 index 00000000000..2b111480b3d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ConjugateTranspose.md @@ -0,0 +1,87 @@ +description: Shuffle dimensions of x according to a permutation and conjugate the result. + +
+ + +
+ +# tf.raw_ops.ConjugateTranspose + + + + + + + + + +Shuffle dimensions of x according to a permutation and conjugate the result. + + + + + + + + + +The output `y` has the same rank as `x`. The shapes of `x` and `y` satisfy: + `y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]` + `y[i,j,k,...,s,t,u] == conj(x[perm[i], perm[j], perm[k],...,perm[s], perm[t], perm[u]])` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. +
+`perm` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
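A minimal sketch (assuming TF 2.x eager execution; values are arbitrary):

```python
import tensorflow as tf

x = tf.constant([[1 + 1j, 2 + 2j],
                 [3 + 3j, 4 + 4j]], dtype=tf.complex64)
tf.raw_ops.ConjugateTranspose(x=x, perm=[1, 0])
# -> [[1-1j, 3-3j], [2-2j, 4-4j]]  (transpose followed by elementwise conjugation)
```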
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Const.md b/site/en/api_docs/python/tf/raw_ops/Const.md new file mode 100644 index 00000000000..2097775fc4d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Const.md @@ -0,0 +1,84 @@ +description: Returns a constant tensor. + +
+ + +
+ +# tf.raw_ops.Const + + + + + + + + + +Returns a constant tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `tf.TensorProto`. Attr `value` is the tensor to return. +
+`dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
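A minimal sketch of calling the raw op directly (an assumption-laden illustration: TF 2.x eager execution, with the `value` attr built via `tf.make_tensor_proto`; `tf.constant` is the usual high-level entry point):

```python
import tensorflow as tf

proto = tf.make_tensor_proto([1, 2, 3], dtype=tf.int32)  # build the `value` attr
t = tf.raw_ops.Const(value=proto, dtype=tf.int32)        # dtype must match the proto
# t -> [1, 2, 3]
```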
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ConsumeMutexLock.md b/site/en/api_docs/python/tf/raw_ops/ConsumeMutexLock.md new file mode 100644 index 00000000000..8b86a01c5da --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ConsumeMutexLock.md @@ -0,0 +1,85 @@ +description: This op consumes a lock created by MutexLock. + +
+ + +
+ +# tf.raw_ops.ConsumeMutexLock + + + + + + + + + +This op consumes a lock created by `MutexLock`. + + + + + + + + + +This op exists to consume a tensor created by `MutexLock` (other than +direct control dependencies). It should be the only op that consumes the tensor, +and will raise an error if it is not. Its only purpose is to keep the +mutex lock tensor alive until it is consumed by this op. + +**NOTE**: This operation must run on the same device as its input. This may +be enforced via the `colocate_with` mechanism. + + + + + + + + + + + + +
+`mutex_lock` + +A `Tensor` of type `variant`. +A tensor returned by `MutexLock`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ControlTrigger.md b/site/en/api_docs/python/tf/raw_ops/ControlTrigger.md new file mode 100644 index 00000000000..e7c2b5bc48e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ControlTrigger.md @@ -0,0 +1,71 @@ +description: Does nothing. Serves as a control trigger for scheduling. + +
+ + +
+ +# tf.raw_ops.ControlTrigger + + + + + + + + + +Does nothing. Serves as a control trigger for scheduling. + + + + + + + + + +Only useful as a placeholder for control edges. + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Conv2D.md b/site/en/api_docs/python/tf/raw_ops/Conv2D.md new file mode 100644 index 00000000000..ef8ca30443d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Conv2D.md @@ -0,0 +1,170 @@ +description: Computes a 2-D convolution given 4-D input and filter tensors. + +
+ + +
+ +# tf.raw_ops.Conv2D + + + + + + + + + +Computes a 2-D convolution given 4-D `input` and `filter` tensors. + + + + + + + + + +Given an input tensor of shape `[batch, in_height, in_width, in_channels]` +and a filter / kernel tensor of shape +`[filter_height, filter_width, in_channels, out_channels]`, this op +performs the following: + +1. Flattens the filter to a 2-D matrix with shape + `[filter_height * filter_width * in_channels, output_channels]`. +2. Extracts image patches from the input tensor to form a *virtual* + tensor of shape `[batch, out_height, out_width, + filter_height * filter_width * in_channels]`. +3. For each patch, right-multiplies the filter matrix and the image patch + vector. + +In detail, with the default NHWC format, + + output[b, i, j, k] = + sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * + filter[di, dj, q, k] + +Must have `strides[0] = strides[3] = 1`. For the most common case of the same +horizontal and vertical strides, `strides = [1, stride, stride, 1]`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`. +A 4-D tensor. The dimension order is interpreted according to the value +of `data_format`, see below for details. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +A 4-D tensor of shape +`[filter_height, filter_width, in_channels, out_channels]` +
+`strides` + +A list of `ints`. +1-D tensor of length 4. The stride of the sliding window for each +dimension of `input`. The dimension order is determined by the value of +`data_format`, see below for details. +
+`padding` + +A `string` from: `"SAME", "VALID", "EXPLICIT"`. +The type of padding algorithm to use. +
+`use_cudnn_on_gpu` + +An optional `bool`. Defaults to `True`. +
+`explicit_paddings` + +An optional list of `ints`. Defaults to `[]`. +If `padding` is `"EXPLICIT"`, the list of explicit padding amounts. For the ith +dimension, the amount of padding inserted before and after the dimension is +`explicit_paddings[2 * i]` and `explicit_paddings[2 * i + 1]`, respectively. If +`padding` is not `"EXPLICIT"`, `explicit_paddings` must be empty. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, height, width, channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, channels, height, width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by the +value of `data_format`, see above for details. Dilations in the batch and +depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
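A minimal shape-level sketch (TF 2.x eager execution assumed; shapes are arbitrary):

```python
import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])    # NHWC: batch=1, 8x8 image, 3 channels
w = tf.random.normal([3, 3, 3, 16])   # 3x3 kernel, 3 input channels, 16 output channels
y = tf.raw_ops.Conv2D(input=x, filter=w, strides=[1, 1, 1, 1], padding="SAME")
print(y.shape)                        # (1, 8, 8, 16) with "SAME" padding and stride 1
```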
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Conv2DBackpropFilter.md b/site/en/api_docs/python/tf/raw_ops/Conv2DBackpropFilter.md new file mode 100644 index 00000000000..415bc356880 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Conv2DBackpropFilter.md @@ -0,0 +1,158 @@ +description: Computes the gradients of convolution with respect to the filter. + +
+ + +
+ +# tf.raw_ops.Conv2DBackpropFilter + + + + + + + + + +Computes the gradients of convolution with respect to the filter. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +4-D with shape `[batch, in_height, in_width, in_channels]`. +
+`filter_sizes` + +A `Tensor` of type `int32`. +An integer vector representing the tensor shape of `filter`, +where `filter` is a 4-D +`[filter_height, filter_width, in_channels, out_channels]` tensor. +
+`out_backprop` + +A `Tensor`. Must have the same type as `input`. +4-D with shape `[batch, out_height, out_width, out_channels]`. +Gradients w.r.t. the output of the convolution. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +of the convolution. Must be in the same order as the dimension specified with +format. +
+`padding` + +A `string` from: `"SAME", "VALID", "EXPLICIT"`. +The type of padding algorithm to use. +
+`use_cudnn_on_gpu` + +An optional `bool`. Defaults to `True`. +
+`explicit_paddings` + +An optional list of `ints`. Defaults to `[]`. +If `padding` is `"EXPLICIT"`, the list of explicit padding amounts. For the ith +dimension, the amount of padding inserted before and after the dimension is +`explicit_paddings[2 * i]` and `explicit_paddings[2 * i + 1]`, respectively. If +`padding` is not `"EXPLICIT"`, `explicit_paddings` must be empty. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each filter +element on that dimension. The dimension order is determined by the value of +`data_format`, see above for details. Dilations in the batch and depth +dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
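A minimal sketch of the call (TF 2.x eager execution assumed; in ordinary training code this op is produced by the gradient of `Conv2D` rather than called by hand):

```python
import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])     # forward-pass input
dy = tf.random.normal([1, 8, 8, 16])   # gradient flowing back from the conv output
dw = tf.raw_ops.Conv2DBackpropFilter(
    input=x, filter_sizes=[3, 3, 3, 16], out_backprop=dy,
    strides=[1, 1, 1, 1], padding="SAME")
print(dw.shape)                        # (3, 3, 3, 16), matching filter_sizes
```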
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Conv2DBackpropInput.md b/site/en/api_docs/python/tf/raw_ops/Conv2DBackpropInput.md new file mode 100644 index 00000000000..289bfeb3899 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Conv2DBackpropInput.md @@ -0,0 +1,158 @@ +description: Computes the gradients of convolution with respect to the input. + +
+ + +
+ +# tf.raw_ops.Conv2DBackpropInput + + + + + + + + + +Computes the gradients of convolution with respect to the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_sizes` + +A `Tensor` of type `int32`. +An integer vector representing the shape of `input`, +where `input` is a 4-D `[batch, height, width, channels]` tensor. +
+`filter` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`. +4-D with shape +`[filter_height, filter_width, in_channels, out_channels]`. +
+`out_backprop` + +A `Tensor`. Must have the same type as `filter`. +4-D with shape `[batch, out_height, out_width, out_channels]`. +Gradients w.r.t. the output of the convolution. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +of the convolution. Must be in the same order as the dimension specified with +format. +
+`padding` + +A `string` from: `"SAME", "VALID", "EXPLICIT"`. +The type of padding algorithm to use. +
+`use_cudnn_on_gpu` + +An optional `bool`. Defaults to `True`. +
+`explicit_paddings` + +An optional list of `ints`. Defaults to `[]`. +If `padding` is `"EXPLICIT"`, the list of explicit padding amounts. For the ith +dimension, the amount of padding inserted before and after the dimension is +`explicit_paddings[2 * i]` and `explicit_paddings[2 * i + 1]`, respectively. If +`padding` is not `"EXPLICIT"`, `explicit_paddings` must be empty. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each filter +element on that dimension. The dimension order is determined by the value of +`data_format`, see above for details. Dilations in the batch and depth +dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `filter`. +
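A minimal sketch of the call (TF 2.x eager execution assumed; in ordinary training code this op is produced by the gradient of `Conv2D` rather than called by hand):

```python
import tensorflow as tf

w = tf.random.normal([3, 3, 3, 16])    # forward-pass filter
dy = tf.random.normal([1, 8, 8, 16])   # gradient flowing back from the conv output
dx = tf.raw_ops.Conv2DBackpropInput(
    input_sizes=[1, 8, 8, 3], filter=w, out_backprop=dy,
    strides=[1, 1, 1, 1], padding="SAME")
print(dx.shape)                        # (1, 8, 8, 3), matching input_sizes
```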
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Conv3D.md b/site/en/api_docs/python/tf/raw_ops/Conv3D.md new file mode 100644 index 00000000000..c74d9b71600 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Conv3D.md @@ -0,0 +1,134 @@ +description: Computes a 3-D convolution given 5-D input and filter tensors. + +
+ + +
+ +# tf.raw_ops.Conv3D + + + + + + + + + +Computes a 3-D convolution given 5-D `input` and `filter` tensors. + + + + + + + + + +In signal processing, cross-correlation is a measure of similarity of +two waveforms as a function of a time-lag applied to one of them. This +is also known as a sliding dot product or sliding inner-product. + +Our Conv3D implements a form of cross-correlation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +Shape `[batch, in_depth, in_height, in_width, in_channels]`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +Shape `[filter_depth, filter_height, filter_width, in_channels, +out_channels]`. `in_channels` must match between `input` and `filter`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. +1-D tensor of length 5. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by the +value of `data_format`, see above for details. Dilations in the batch and +depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
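A minimal shape-level sketch (TF 2.x eager execution assumed; shapes are arbitrary):

```python
import tensorflow as tf

x = tf.random.normal([1, 4, 8, 8, 3])   # NDHWC: batch=1, depth=4, 8x8 frames, 3 channels
w = tf.random.normal([2, 3, 3, 3, 16])  # 2x3x3 kernel, 3 input channels, 16 output channels
y = tf.raw_ops.Conv3D(input=x, filter=w, strides=[1, 1, 1, 1, 1], padding="SAME")
print(y.shape)                          # (1, 4, 8, 8, 16)
```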
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropFilter.md b/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropFilter.md new file mode 100644 index 00000000000..a3e7b673657 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropFilter.md @@ -0,0 +1,121 @@ +description: Computes the gradients of 3-D convolution with respect to the filter. + +
+ + +
+ +# tf.raw_ops.Conv3DBackpropFilter + + + + + + + + + +Computes the gradients of 3-D convolution with respect to the filter. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +Shape `[batch, depth, rows, cols, in_channels]`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +Shape `[depth, rows, cols, in_channels, out_channels]`. +`in_channels` must match between `input` and `filter`. +
+`out_backprop` + +A `Tensor`. Must have the same type as `input`. +Backprop signal of shape `[batch, out_depth, out_rows, out_cols, +out_channels]`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropFilterV2.md b/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropFilterV2.md new file mode 100644 index 00000000000..f8ec4d913e6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropFilterV2.md @@ -0,0 +1,140 @@ +description: Computes the gradients of 3-D convolution with respect to the filter. + +
+ + +
+ +# tf.raw_ops.Conv3DBackpropFilterV2 + + + + + + + + + +Computes the gradients of 3-D convolution with respect to the filter. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +Shape `[batch, depth, rows, cols, in_channels]`. +
+`filter_sizes` + +A `Tensor` of type `int32`. +An integer vector representing the tensor shape of `filter`, +where `filter` is a 5-D +`[filter_depth, filter_height, filter_width, in_channels, out_channels]` +tensor. +
+`out_backprop` + +A `Tensor`. Must have the same type as `input`. +Backprop signal of shape `[batch, out_depth, out_rows, out_cols, +out_channels]`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. +1-D tensor of length 5. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by the +value of `data_format`, see above for details. Dilations in the batch and +depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropInput.md b/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropInput.md new file mode 100644 index 00000000000..3c23b69d262 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropInput.md @@ -0,0 +1,121 @@ +description: Computes the gradients of 3-D convolution with respect to the input. + +
+ + +
+ +# tf.raw_ops.Conv3DBackpropInput + + + + + + + + + +Computes the gradients of 3-D convolution with respect to the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +Shape `[batch, depth, rows, cols, in_channels]`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +Shape `[depth, rows, cols, in_channels, out_channels]`. +`in_channels` must match between `input` and `filter`. +
+`out_backprop` + +A `Tensor`. Must have the same type as `input`. +Backprop signal of shape `[batch, out_depth, out_rows, out_cols, +out_channels]`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropInputV2.md b/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropInputV2.md new file mode 100644 index 00000000000..2419f508d2e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Conv3DBackpropInputV2.md @@ -0,0 +1,140 @@ +description: Computes the gradients of 3-D convolution with respect to the input. + +
+ + +
+ +# tf.raw_ops.Conv3DBackpropInputV2 + + + + + + + + + +Computes the gradients of 3-D convolution with respect to the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_sizes` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +An integer vector representing the tensor shape of `input`, +where `input` is a 5-D +`[batch, depth, rows, cols, in_channels]` tensor. +
+`filter` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +Shape `[depth, rows, cols, in_channels, out_channels]`. +`in_channels` must match between `input` and `filter`. +
+`out_backprop` + +A `Tensor`. Must have the same type as `filter`. +Backprop signal of shape `[batch, out_depth, out_rows, out_cols, +out_channels]`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`. +1-D tensor of length 5. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by the +value of `data_format`, see above for details. Dilations in the batch and +depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `filter`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Copy.md b/site/en/api_docs/python/tf/raw_ops/Copy.md new file mode 100644 index 00000000000..427b3732ef9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Copy.md @@ -0,0 +1,105 @@ +description: Copy a tensor from CPU-to-CPU or GPU-to-GPU. + +
+ + +
+ +# tf.raw_ops.Copy + + + + + + + + + +Copy a tensor from CPU-to-CPU or GPU-to-GPU. + + + + + + + + + +Performs CPU-to-CPU or GPU-to-GPU deep-copying of tensor, depending on the +device on which the tensor is allocated. +N.B.: If all the downstream attached debug ops are disabled given the current +gRPC gating status, the output will simply forward the input tensor without +deep-copying. See the documentation of Debug* ops for more details. + +Unlike the CopyHost Op, this op does not have HostMemory constraint on its +input or output. + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Input tensor. +
+`tensor_name` + +An optional `string`. Defaults to `""`. +The name of the input tensor. +
+`debug_ops_spec` + +An optional list of `strings`. Defaults to `[]`. +A list of debug op spec (op, url, gated_grpc) for attached debug +ops. Each element of the list has the format +`<op>;<url>;<gated_grpc>`, wherein gated_grpc is boolean represented +as 0/1. E.g., "DebugIdentity;grpc://foo:3333;1", +"DebugIdentity;file:///tmp/tfdbg_1;0". +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CopyHost.md b/site/en/api_docs/python/tf/raw_ops/CopyHost.md new file mode 100644 index 00000000000..12f69ca2365 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CopyHost.md @@ -0,0 +1,103 @@ +description: Copy a tensor to host. + +
+ + +
+ +# tf.raw_ops.CopyHost + + + + + + + + + +Copy a tensor to host. + + + + + + + + + +Performs CPU-to-CPU deep-copying of tensor. +N.B.: If all the downstream attached debug ops are disabled given the current +gRPC gating status, the output will simply forward the input tensor without +deep-copying. See the documentation of Debug* ops for more details. + +Unlike the Copy Op, this op has HostMemory constraint on its input or output. + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Input tensor. +
+`tensor_name` + +An optional `string`. Defaults to `""`. +The name of the input tensor. +
+`debug_ops_spec` + +An optional list of `strings`. Defaults to `[]`. +A list of debug op spec (op, url, gated_grpc) for attached debug +ops. Each element of the list has the format +`<op>;<url>;<gated_grpc>`, wherein gated_grpc is boolean represented +as 0/1. E.g., "DebugIdentity;grpc://foo:3333;1", +"DebugIdentity;file:///tmp/tfdbg_1;0". +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Cos.md b/site/en/api_docs/python/tf/raw_ops/Cos.md new file mode 100644 index 00000000000..52eb2c42f50 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Cos.md @@ -0,0 +1,86 @@ +description: Computes cos of x element-wise. + +
+ + +
+ +# tf.raw_ops.Cos + + + + + + + + + +Computes cos of x element-wise. + + + + + + + + + + Given an input tensor, this function computes cosine of every + element in the tensor. Input range is `(-inf, inf)` and + output range is `[-1,1]`. If input lies outside the boundary, `nan` + is returned. + + ```python + x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")]) + tf.math.cos(x) ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Cosh.md b/site/en/api_docs/python/tf/raw_ops/Cosh.md new file mode 100644 index 00000000000..90daf781350 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Cosh.md @@ -0,0 +1,85 @@ +description: Computes hyperbolic cosine of x element-wise. + +
+ + +
+ +# tf.raw_ops.Cosh + + + + + + + + + +Computes hyperbolic cosine of x element-wise. + + + + + + + + + + Given an input tensor, this function computes hyperbolic cosine of every + element in the tensor. Input range is `[-inf, inf]` and output range + is `[1, inf]`. + + ```python + x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")]) + tf.math.cosh(x) ==> [inf 4.0515420e+03 1.1276259e+00 1.5430807e+00 1.8106556e+00 3.7621956e+00 1.1013233e+04 inf] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CountUpTo.md b/site/en/api_docs/python/tf/raw_ops/CountUpTo.md new file mode 100644 index 00000000000..d10d2ccaa57 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CountUpTo.md @@ -0,0 +1,87 @@ +description: Increments 'ref' until it reaches 'limit'. + +
+ + +
+ +# tf.raw_ops.CountUpTo + + + + + + + + + +Increments 'ref' until it reaches 'limit'. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `int32`, `int64`. +Should be from a scalar `Variable` node. +
+`limit` + +An `int`. +If incrementing ref would bring it above limit, instead generates an +'OutOfRange' error. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CreateSummaryDbWriter.md b/site/en/api_docs/python/tf/raw_ops/CreateSummaryDbWriter.md new file mode 100644 index 00000000000..59e06b40a89 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CreateSummaryDbWriter.md @@ -0,0 +1,103 @@ +
+ + +
+ +# tf.raw_ops.CreateSummaryDbWriter + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`db_uri` + +A `Tensor` of type `string`. +
+`experiment_name` + +A `Tensor` of type `string`. +
+`run_name` + +A `Tensor` of type `string`. +
+`user_name` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CreateSummaryFileWriter.md b/site/en/api_docs/python/tf/raw_ops/CreateSummaryFileWriter.md new file mode 100644 index 00000000000..ab0f6b2d27c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CreateSummaryFileWriter.md @@ -0,0 +1,103 @@ +
+ + +
+ +# tf.raw_ops.CreateSummaryFileWriter + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`logdir` + +A `Tensor` of type `string`. +
+`max_queue` + +A `Tensor` of type `int32`. +
+`flush_millis` + +A `Tensor` of type `int32`. +
+`filename_suffix` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CropAndResize.md b/site/en/api_docs/python/tf/raw_ops/CropAndResize.md new file mode 100644 index 00000000000..5021952f9df --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CropAndResize.md @@ -0,0 +1,150 @@ +description: Extracts crops from the input image tensor and resizes them. + +
+ + +
+ +# tf.raw_ops.CropAndResize + + + + + + + + + +Extracts crops from the input image tensor and resizes them. + + + + + + + + + +Extracts crops from the input image tensor and resizes them using bilinear +sampling or nearest neighbor sampling (possibly with aspect ratio change) to a +common output size specified by `crop_size`. This is more general than the +`crop_to_bounding_box` op which extracts a fixed size slice from the input image +and does not allow resizing or aspect ratio change. + +Returns a tensor with `crops` from the input `image` at positions defined at the +bounding box locations in `boxes`. The cropped boxes are all resized (with +bilinear or nearest neighbor interpolation) to a fixed +`size = [crop_height, crop_width]`. The result is a 4-D tensor +`[num_boxes, crop_height, crop_width, depth]`. The resizing is corner aligned. +In particular, if `boxes = [[0, 0, 1, 1]]`, the method will give identical +results to using `tf.image.resize_bilinear()` or +`tf.image.resize_nearest_neighbor()`(depends on the `method` argument) with +`align_corners=True`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`image` + +A `Tensor`. Must be one of the following types: `uint8`, `uint16`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. +A 4-D tensor of shape `[batch, image_height, image_width, depth]`. +Both `image_height` and `image_width` need to be positive. +
+`boxes` + +A `Tensor` of type `float32`. +A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor +specifies the coordinates of a box in the `box_ind[i]` image and is specified +in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of +`y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the +`[0, 1]` interval of normalized image height is mapped to +`[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in +which case the sampled crop is an up-down flipped version of the original +image. The width dimension is treated similarly. Normalized coordinates +outside the `[0, 1]` range are allowed, in which case we use +`extrapolation_value` to extrapolate the input image values. +
+`box_ind` + +A `Tensor` of type `int32`. +A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`. +The value of `box_ind[i]` specifies the image that the `i`-th box refers to. +
+`crop_size` + +A `Tensor` of type `int32`. +A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All +cropped image patches are resized to this size. The aspect ratio of the image +content is not preserved. Both `crop_height` and `crop_width` need to be +positive. +
+`method` + +An optional `string` from: `"bilinear", "nearest"`. Defaults to `"bilinear"`. +A string specifying the sampling method for resizing. It can be either +`"bilinear"` or `"nearest"` and default to `"bilinear"`. Currently two sampling +methods are supported: Bilinear and Nearest Neighbor. +
+`extrapolation_value` + +An optional `float`. Defaults to `0`. +Value used for extrapolation, when applicable. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
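A minimal sketch (TF 2.x eager execution assumed; image sizes, boxes, and crop size are arbitrary):

```python
import tensorflow as tf

image = tf.random.uniform([2, 100, 100, 3])                  # batch of two images
boxes = tf.constant([[0.1, 0.1, 0.5, 0.5],                   # [y1, x1, y2, x2], normalized
                     [0.0, 0.0, 1.0, 1.0]], dtype=tf.float32)
box_ind = tf.constant([0, 1], dtype=tf.int32)                # image index for each box
crops = tf.raw_ops.CropAndResize(image=image, boxes=boxes,
                                 box_ind=box_ind, crop_size=[24, 24])
print(crops.shape)                                           # (2, 24, 24, 3), float32
```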
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CropAndResizeGradBoxes.md b/site/en/api_docs/python/tf/raw_ops/CropAndResizeGradBoxes.md new file mode 100644 index 00000000000..1c6143a0384 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CropAndResizeGradBoxes.md @@ -0,0 +1,122 @@ +description: Computes the gradient of the crop_and_resize op wrt the input boxes tensor. + +
+ + +
+ +# tf.raw_ops.CropAndResizeGradBoxes + + + + + + + + + +Computes the gradient of the crop_and_resize op wrt the input boxes tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`grads` + +A `Tensor` of type `float32`. +A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`. +
+`image` + +A `Tensor`. Must be one of the following types: `uint8`, `uint16`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. +A 4-D tensor of shape `[batch, image_height, image_width, depth]`. +Both `image_height` and `image_width` need to be positive. +
+`boxes` + +A `Tensor` of type `float32`. +A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor +specifies the coordinates of a box in the `box_ind[i]` image and is specified +in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of +`y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the +`[0, 1]` interval of normalized image height is mapped to +`[0, image_height - 1] in image height coordinates. We do allow y1 > y2, in +which case the sampled crop is an up-down flipped version of the original +image. The width dimension is treated similarly. Normalized coordinates +outside the `[0, 1]` range are allowed, in which case we use +`extrapolation_value` to extrapolate the input image values. +
+`box_ind` + +A `Tensor` of type `int32`. +A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`. +The value of `box_ind[i]` specifies the image that the `i`-th box refers to. +
+`method` + +An optional `string` from: `"bilinear"`. Defaults to `"bilinear"`. +A string specifying the interpolation method. Only 'bilinear' is +supported for now. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CropAndResizeGradImage.md b/site/en/api_docs/python/tf/raw_ops/CropAndResizeGradImage.md new file mode 100644 index 00000000000..0198b0cbbf1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CropAndResizeGradImage.md @@ -0,0 +1,130 @@ +description: Computes the gradient of the crop_and_resize op wrt the input image tensor. + +
+ + +
+ +# tf.raw_ops.CropAndResizeGradImage + + + + + + + + + +Computes the gradient of the crop_and_resize op wrt the input image tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`grads` + +A `Tensor` of type `float32`. +A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`. +
+`boxes` + +A `Tensor` of type `float32`. +A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor +specifies the coordinates of a box in the `box_ind[i]` image and is specified +in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of +`y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the +`[0, 1]` interval of normalized image height is mapped to +`[0, image_height - 1] in image height coordinates. We do allow y1 > y2, in +which case the sampled crop is an up-down flipped version of the original +image. The width dimension is treated similarly. Normalized coordinates +outside the `[0, 1]` range are allowed, in which case we use +`extrapolation_value` to extrapolate the input image values. +
+`box_ind` + +A `Tensor` of type `int32`. +A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`. +The value of `box_ind[i]` specifies the image that the `i`-th box refers to. +
+`image_size` + +A `Tensor` of type `int32`. +A 1-D tensor with value `[batch, image_height, image_width, depth]` +containing the original image size. Both `image_height` and `image_width` need +to be positive. +
+`T` + +A tf.DType from: `tf.float32, tf.half, tf.float64`. +
+`method` + +An optional `string` from: `"bilinear", "nearest"`. Defaults to `"bilinear"`. +A string specifying the interpolation method. Only 'bilinear' is +supported for now. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `T`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Cross.md b/site/en/api_docs/python/tf/raw_ops/Cross.md new file mode 100644 index 00000000000..0db9c48adfc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Cross.md @@ -0,0 +1,89 @@ +description: Compute the pairwise cross product. + +
+ + +
+ +# tf.raw_ops.Cross + + + + + + + + + +Compute the pairwise cross product. + + + + + + + + + +`a` and `b` must be the same shape; they can either be simple 3-element vectors, +or any shape where the innermost dimension is 3. In the latter case, each pair +of corresponding 3-element vectors is cross-multiplied independently. + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +A tensor containing 3-element vectors. +
+`b` + +A `Tensor`. Must have the same type as `a`. +Another tensor, of same type and shape as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
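A minimal sketch (TF 2.x eager execution assumed; the unit vectors below are arbitrary):

```python
import tensorflow as tf

a = tf.constant([1., 0., 0.])
b = tf.constant([0., 1., 0.])
tf.raw_ops.Cross(a=a, b=b)  # -> [0., 0., 1.]
```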
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CrossReplicaSum.md b/site/en/api_docs/python/tf/raw_ops/CrossReplicaSum.md new file mode 100644 index 00000000000..1b688dc1ff6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CrossReplicaSum.md @@ -0,0 +1,93 @@ +description: An Op to sum inputs across replicated TPU instances. + +
+ + +
+ +# tf.raw_ops.CrossReplicaSum + + + + + + + + + +An Op to sum inputs across replicated TPU instances. + + + + + + + + + +Each instance supplies its own input. + +For example, suppose there are 8 TPU instances: `[A, B, C, D, E, F, G, H]`. +Passing group_assignment=`[[0,2,4,6],[1,3,5,7]]` sets `A, C, E, G` as group 0, +and `B, D, F, H` as group 1. Thus we get the outputs: +`[A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H]`. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `bfloat16`, `float32`, `int32`, `uint32`. +The local input to the sum. +
+`group_assignment` + +A `Tensor` of type `int32`. An int32 tensor with shape +[num_groups, num_replicas_per_group]. `group_assignment[i]` represents the +replica ids in the ith subgroup. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNN.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNN.md new file mode 100644 index 00000000000..aef28ff1e30 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNN.md @@ -0,0 +1,207 @@ +description: A RNN backed by cuDNN. + +
+ + +
+ +# tf.raw_ops.CudnnRNN + + + + + + + + + +A RNN backed by cuDNN. + + + + + + + + + +Computes the RNN from the input and initial states, with respect to the params +buffer. + +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicates whether there is a linear projection between the input and + the actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. Should be + "unidirectional" or "bidirectional". +dropout: Dropout probability. When set to 0., dropout is disabled. +seed: The 1st part of a seed to initialize dropout. +seed2: The 2nd part of a seed to initialize dropout. +input: A 3-D tensor with the shape of [seq_length, batch_size, input_size]. +input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, + num_units]. +input_c: For LSTM, a 3-D tensor with the shape of + [num_layer * dir, batch, num_units]. For other models, it is ignored. +params: A 1-D tensor that contains the weights and biases in an opaque layout. + The size must be created through CudnnRNNParamsSize, and initialized + separately. Note that they might not be compatible across different + generations. So it is a good idea to save and restore +output: A 3-D tensor with the shape of [seq_length, batch_size, + dir * num_units]. +output_h: The same shape as input_h. +output_c: The same shape as input_c for LSTM. An empty tensor for other models. +is_training: Indicates whether this operation is used for inference or + training. +reserve_space: An opaque tensor that can be used in backprop calculation. It + is only produced if is_training is true. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +
+`input_h` + +A `Tensor`. Must have the same type as `input`. +
+`input_c` + +A `Tensor`. Must have the same type as `input`. +
+`params` + +A `Tensor`. Must have the same type as `input`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`is_training` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_h, output_c, reserve_space). +
+`output` + +A `Tensor`. Has the same type as `input`. +
+`output_h` + +A `Tensor`. Has the same type as `input`. +
+`output_c` + +A `Tensor`. Has the same type as `input`. +
+`reserve_space` + +A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNNBackprop.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNNBackprop.md new file mode 100644 index 00000000000..85419d742de --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNNBackprop.md @@ -0,0 +1,259 @@ +description: Backprop step of CudnnRNN. + +
+ + +
+ +# tf.raw_ops.CudnnRNNBackprop + + + + + + + + + +Backprop step of CudnnRNN. + + + + + + + + + +Compute the backprop of both data and weights in a RNN. + +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicates whether there is a linear projection between the input and + the actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. Should be + "unidirectional" or "bidirectional". +dropout: Dropout probability. When set to 0., dropout is disabled. +seed: The 1st part of a seed to initialize dropout. +seed2: The 2nd part of a seed to initialize dropout. +input: A 3-D tensor with the shape of [seq_length, batch_size, input_size]. +input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, + num_units]. +input_c: For LSTM, a 3-D tensor with the shape of + [num_layer * dir, batch, num_units]. For other models, it is ignored. +params: A 1-D tensor that contains the weights and biases in an opaque layout. + The size must be created through CudnnRNNParamsSize, and initialized + separately. Note that they might not be compatible across different + generations. So it is a good idea to save and restore +output: A 3-D tensor with the shape of [seq_length, batch_size, + dir * num_units]. +output_h: The same shape as input_h. +output_c: The same shape as input_c for LSTM. An empty tensor for other models. +output_backprop: A 3-D tensor with the same shape as output in the forward pass. +output_h_backprop: A 3-D tensor with the same shape as output_h in the forward + pass. +output_c_backprop: A 3-D tensor with the same shape as output_c in the forward + pass. +reserve_space: The same reserve_space produced in the forward operation. +input_backprop: The backprop to input in the forward pass. Has the same shape + as input. +input_h_backprop: The backprop to input_h in the forward pass. Has the same + shape as input_h. +input_c_backprop: The backprop to input_c in the forward pass. Has the same + shape as input_c. +params_backprop: The backprop to the params buffer in the forward pass. Has the + same shape as params. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +
+`input_h` + +A `Tensor`. Must have the same type as `input`. +
+`input_c` + +A `Tensor`. Must have the same type as `input`. +
+`params` + +A `Tensor`. Must have the same type as `input`. +
+`output` + +A `Tensor`. Must have the same type as `input`. +
+`output_h` + +A `Tensor`. Must have the same type as `input`. +
+`output_c` + +A `Tensor`. Must have the same type as `input`. +
+`output_backprop` + +A `Tensor`. Must have the same type as `input`. +
+`output_h_backprop` + +A `Tensor`. Must have the same type as `input`. +
+`output_c_backprop` + +A `Tensor`. Must have the same type as `input`. +
+`reserve_space` + +A `Tensor`. Must have the same type as `input`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (input_backprop, input_h_backprop, input_c_backprop, params_backprop). +
+`input_backprop` + +A `Tensor`. Has the same type as `input`. +
+`input_h_backprop` + +A `Tensor`. Has the same type as `input`. +
+`input_c_backprop` + +A `Tensor`. Has the same type as `input`. +
+`params_backprop` + +A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNNBackpropV2.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNNBackpropV2.md new file mode 100644 index 00000000000..1fd980fe060 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNNBackpropV2.md @@ -0,0 +1,269 @@ +description: Backprop step of CudnnRNN. + +
+ + +
+ +# tf.raw_ops.CudnnRNNBackpropV2 + + + + + + + + + +Backprop step of CudnnRNN. + + + + + + + + + +Compute the backprop of both data and weights in an RNN. Takes an extra + "host_reserved" input compared to CudnnRNNBackprop, which is used to determine RNN + cudnnRNNAlgo_t and cudnnMathType_t. + +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicates whether there is a linear projection between the input and + the actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. Should be + "unidirectional" or "bidirectional". +dropout: Dropout probability. When set to 0., dropout is disabled. +seed: The 1st part of a seed to initialize dropout. +seed2: The 2nd part of a seed to initialize dropout. +input: A 3-D tensor with the shape of [seq_length, batch_size, input_size]. +input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, + num_units]. +input_c: For LSTM, a 3-D tensor with the shape of + [num_layer * dir, batch, num_units]. For other models, it is ignored. +params: A 1-D tensor that contains the weights and biases in an opaque layout. + The size must be created through CudnnRNNParamsSize, and initialized + separately. Note that they might not be compatible across different + generations. So it is a good idea to save and restore the canonical weights + and biases instead. +output: A 3-D tensor with the shape of [seq_length, batch_size, + dir * num_units]. +output_h: The same shape as input_h. +output_c: The same shape as input_c for LSTM. An empty tensor for other models. +output_backprop: A 3-D tensor with the same shape as output in the forward pass. +output_h_backprop: A 3-D tensor with the same shape as output_h in the forward + pass. +output_c_backprop: A 3-D tensor with the same shape as output_c in the forward + pass. +reserve_space: The same reserve_space produced in the forward operation. +host_reserved: The same host_reserved produced in the forward operation. +input_backprop: The backprop to input in the forward pass. Has the same shape + as input. +input_h_backprop: The backprop to input_h in the forward pass. Has the same + shape as input_h. +input_c_backprop: The backprop to input_c in the forward pass. Has the same + shape as input_c. +params_backprop: The backprop to the params buffer in the forward pass. Has the + same shape as params. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +
+`input_h` + +A `Tensor`. Must have the same type as `input`. +
+`input_c` + +A `Tensor`. Must have the same type as `input`. +
+`params` + +A `Tensor`. Must have the same type as `input`. +
+`output` + +A `Tensor`. Must have the same type as `input`. +
+`output_h` + +A `Tensor`. Must have the same type as `input`. +
+`output_c` + +A `Tensor`. Must have the same type as `input`. +
+`output_backprop` + +A `Tensor`. Must have the same type as `input`. +
+`output_h_backprop` + +A `Tensor`. Must have the same type as `input`. +
+`output_c_backprop` + +A `Tensor`. Must have the same type as `input`. +
+`reserve_space` + +A `Tensor`. Must have the same type as `input`. +
+`host_reserved` + +A `Tensor` of type `int8`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (input_backprop, input_h_backprop, input_c_backprop, params_backprop). +
+`input_backprop` + +A `Tensor`. Has the same type as `input`. +
+`input_h_backprop` + +A `Tensor`. Has the same type as `input`. +
+`input_c_backprop` + +A `Tensor`. Has the same type as `input`. +
+`params_backprop` + +A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNNBackpropV3.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNNBackpropV3.md new file mode 100644 index 00000000000..9cc30461f8d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNNBackpropV3.md @@ -0,0 +1,296 @@ +description: Backprop step of CudnnRNNV3. + +
+ + +
+ +# tf.raw_ops.CudnnRNNBackpropV3 + + + + + + + + + +Backprop step of CudnnRNNV3. + + + + + + + + + +Compute the backprop of both data and weights in a RNN. Takes an extra + "sequence_lengths" input than CudnnRNNBackprop. + +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicates whether there is a linear projection between the input and + the actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. Should be + "unidirectional" or "bidirectional". +dropout: Dropout probability. When set to 0., dropout is disabled. +seed: The 1st part of a seed to initialize dropout. +seed2: The 2nd part of a seed to initialize dropout. +input: If time_major is true, this is a 3-D tensor with the shape of + [seq_length, batch_size, input_size]. If time_major is false, the shape is + [batch_size, seq_length, input_size]. +input_h: If time_major is true, this is a 3-D tensor with the shape of + [num_layer * dir, batch_size, num_units]. If time_major is false, the shape + is [batch_size, num_layer * dir, num_units]. +input_c: For LSTM, a 3-D tensor with the shape of + [num_layer * dir, batch, num_units]. For other models, it is ignored. +params: A 1-D tensor that contains the weights and biases in an opaque layout. + The size must be created through CudnnRNNParamsSize, and initialized + separately. Note that they might not be compatible across different + generations. So it is a good idea to save and restore +sequence_lengths: a vector of lengths of each input sequence. +output: If time_major is true, this is a 3-D tensor with the shape of + [seq_length, batch_size, dir * num_units]. If time_major is false, the + shape is [batch_size, seq_length, dir * num_units]. +output_h: The same shape has input_h. +output_c: The same shape as input_c for LSTM. An empty tensor for other models. +output_backprop: A 3-D tensor with the same shape as output in the forward pass. +output_h_backprop: A 3-D tensor with the same shape as output_h in the forward + pass. +output_c_backprop: A 3-D tensor with the same shape as output_c in the forward + pass. +time_major: Indicates whether the input/output format is time major or batch + major. +reserve_space: The same reserve_space produced in the forward operation. +input_backprop: The backprop to input in the forward pass. Has the same shape + as input. +input_h_backprop: The backprop to input_h in the forward pass. Has the same + shape as input_h. +input_c_backprop: The backprop to input_c in the forward pass. Has the same + shape as input_c. +params_backprop: The backprop to the params buffer in the forward pass. Has the + same shape as params. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +
+`input_h` + +A `Tensor`. Must have the same type as `input`. +
+`input_c` + +A `Tensor`. Must have the same type as `input`. +
+`params` + +A `Tensor`. Must have the same type as `input`. +
+`sequence_lengths` + +A `Tensor` of type `int32`. +
+`output` + +A `Tensor`. Must have the same type as `input`. +
+`output_h` + +A `Tensor`. Must have the same type as `input`. +
+`output_c` + +A `Tensor`. Must have the same type as `input`. +
+`output_backprop` + +A `Tensor`. Must have the same type as `input`. +
+`output_h_backprop` + +A `Tensor`. Must have the same type as `input`. +
+`output_c_backprop` + +A `Tensor`. Must have the same type as `input`. +
+`reserve_space` + +A `Tensor`. Must have the same type as `input`. +
+`host_reserved` + +A `Tensor` of type `int8`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`num_proj` + +An optional `int`. Defaults to `0`. +
+`time_major` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (input_backprop, input_h_backprop, input_c_backprop, params_backprop). +
+`input_backprop` + +A `Tensor`. Has the same type as `input`. +
+`input_h_backprop` + +A `Tensor`. Has the same type as `input`. +
+`input_c_backprop` + +A `Tensor`. Has the same type as `input`. +
+`params_backprop` + +A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNNCanonicalToParams.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNNCanonicalToParams.md new file mode 100644 index 00000000000..f6d4cbe735e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNNCanonicalToParams.md @@ -0,0 +1,178 @@ +description: Converts CudnnRNN params from canonical form to usable form. + +
+ + +
+ +# tf.raw_ops.CudnnRNNCanonicalToParams + + + + + + + + + +Converts CudnnRNN params from canonical form to usable form. + + + + + + + + + +Writes a set of weights into the opaque params buffer so they can be used in +upcoming training or inferences. + +Note that the params buffer may not be compatible across different GPUs. So any +save and restoration should be converted to and from the canonical weights and +biases. + +num_layers: Specifies the number of layers in the RNN model. +num_units: Specifies the size of the hidden state. +input_size: Specifies the size of the input state. +weights: the canonical form of weights that can be used for saving + and restoration. They are more likely to be compatible across different + generations. +biases: the canonical form of biases that can be used for saving + and restoration. They are more likely to be compatible across different + generations. +num_params: number of parameter sets for all layers. + Each layer may contain multiple parameter sets, with each set consisting of + a weight matrix and a bias vector. +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicate whether there is a linear projection between the input and + The actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. + dir = (direction == bidirectional) ? 2 : 1 +dropout: dropout probability. When set to 0., dropout is disabled. +seed: the 1st part of a seed to initialize dropout. +seed2: the 2nd part of a seed to initialize dropout. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_layers` + +A `Tensor` of type `int32`. +
+`num_units` + +A `Tensor` of type `int32`. +
+`input_size` + +A `Tensor` of type `int32`. +
+`weights` + +A list of at least 1 `Tensor` objects with the same type in: `half`, `float32`, `float64`. +
+`biases` + +A list with the same length as `weights` of `Tensor` objects with the same type as `weights`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `weights`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNNCanonicalToParamsV2.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNNCanonicalToParamsV2.md new file mode 100644 index 00000000000..016f5aacced --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNNCanonicalToParamsV2.md @@ -0,0 +1,186 @@ +description: Converts CudnnRNN params from canonical form to usable form. It supports the projection in LSTM. + +
+ + +
+ +# tf.raw_ops.CudnnRNNCanonicalToParamsV2 + + + + + + + + + +Converts CudnnRNN params from canonical form to usable form. It supports the projection in LSTM. + + + + + + + + + +Writes a set of weights into the opaque params buffer so they can be used in +upcoming training or inferences. + +Note that the params buffer may not be compatible across different GPUs. So any +save and restoration should be converted to and from the canonical weights and +biases. + +num_layers: Specifies the number of layers in the RNN model. +num_units: Specifies the size of the hidden state. +input_size: Specifies the size of the input state. +weights: the canonical form of weights that can be used for saving + and restoration. They are more likely to be compatible across different + generations. +biases: the canonical form of biases that can be used for saving + and restoration. They are more likely to be compatible across different + generations. +num_params_weights: number of weight parameter matrix for all layers. +num_params_biases: number of bias parameter vector for all layers. +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicate whether there is a linear projection between the input and + The actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. + dir = (direction == bidirectional) ? 2 : 1 +dropout: dropout probability. When set to 0., dropout is disabled. +seed: the 1st part of a seed to initialize dropout. +seed2: the 2nd part of a seed to initialize dropout. +num_proj: The output dimensionality for the projection matrices. If None or 0, + no projection is performed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_layers` + +A `Tensor` of type `int32`. +
+`num_units` + +A `Tensor` of type `int32`. +
+`input_size` + +A `Tensor` of type `int32`. +
+`weights` + +A list of at least 1 `Tensor` objects with the same type in: `half`, `float32`, `float64`. +
+`biases` + +A list of at least 1 `Tensor` objects with the same type as `weights`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`num_proj` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `weights`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNNParamsSize.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNNParamsSize.md new file mode 100644 index 00000000000..c8e1b257dbb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNNParamsSize.md @@ -0,0 +1,177 @@ +description: Computes size of weights that can be used by a Cudnn RNN model. + +
+ + +
+ +# tf.raw_ops.CudnnRNNParamsSize + + + + + + + + + +Computes size of weights that can be used by a Cudnn RNN model. + + + + + + + + + +Return the params size that can be used by the Cudnn RNN model. Subsequent +weight allocation and initialization should use this size. + +num_layers: Specifies the number of layers in the RNN model. +num_units: Specifies the size of the hidden state. +input_size: Specifies the size of the input state. +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicate whether there is a linear projection between the input and + The actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. + dir = (direction == bidirectional) ? 2 : 1 +dropout: dropout probability. When set to 0., dropout is disabled. +seed: the 1st part of a seed to initialize dropout. +seed2: the 2nd part of a seed to initialize dropout. +params_size: The size of the params buffer that should be allocated and + initialized for this RNN model. Note that this params buffer may not be + compatible across GPUs. Please use CudnnRNNParamsWeights and + CudnnRNNParamsBiases to save and restore them in a way that is compatible + across different runs. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_layers` + +A `Tensor` of type `int32`. +
+`num_units` + +A `Tensor` of type `int32`. +
+`input_size` + +A `Tensor` of type `int32`. +
+`T` + +A tf.DType from: `tf.half, tf.float32, tf.float64`. +
+`S` + +A tf.DType from: `tf.int32, tf.int64`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`num_proj` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `S`. +
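+The sketch below illustrates how the returned size is typically consumed: query the size, then allocate the opaque params buffer that the other CudnnRNN* ops expect. It is a minimal illustration only, assuming eager TF 2.x on a CUDA-enabled GPU with cuDNN (the Cudnn* ops have GPU kernels only); the layer, unit, and input sizes are arbitrary.
+```python
+import tensorflow as tf
+
+# Illustrative sizes; running this op requires a CUDA-enabled GPU with cuDNN.
+params_size = tf.raw_ops.CudnnRNNParamsSize(
+    num_layers=1, num_units=16, input_size=8,
+    T=tf.float32, S=tf.int32, rnn_mode="lstm")
+
+# Allocate and initialize the opaque params buffer used by the CudnnRNN* ops.
+# The reshape keeps this working whether the op returns a scalar or a [1] vector.
+params = tf.random.uniform(tf.reshape(params_size, [-1]), dtype=tf.float32)
+```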
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNNParamsToCanonical.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNNParamsToCanonical.md new file mode 100644 index 00000000000..5199537e932 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNNParamsToCanonical.md @@ -0,0 +1,192 @@ +description: Retrieves CudnnRNN params in canonical form. + +
+ + +
+ +# tf.raw_ops.CudnnRNNParamsToCanonical + + + + + + + + + +Retrieves CudnnRNN params in canonical form. + + + + + + + + + +Retrieves a set of weights from the opaque params buffer that can be saved and +restored in a way compatible with future runs. + +Note that the params buffer may not be compatible across different GPUs. So any +save and restoration should be converted to and from the canonical weights and +biases. + +num_layers: Specifies the number of layers in the RNN model. +num_units: Specifies the size of the hidden state. +input_size: Specifies the size of the input state. +num_params: number of parameter sets for all layers. + Each layer may contain multiple parameter sets, with each set consisting of + a weight matrix and a bias vector. +weights: the canonical form of weights that can be used for saving + and restoration. They are more likely to be compatible across different + generations. +biases: the canonical form of biases that can be used for saving + and restoration. They are more likely to be compatible across different + generations. +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicate whether there is a linear projection between the input and + The actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. + dir = (direction == bidirectional) ? 2 : 1 +dropout: dropout probability. When set to 0., dropout is disabled. +seed: the 1st part of a seed to initialize dropout. +seed2: the 2nd part of a seed to initialize dropout. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_layers` + +A `Tensor` of type `int32`. +
+`num_units` + +A `Tensor` of type `int32`. +
+`input_size` + +A `Tensor` of type `int32`. +
+`params` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +
+`num_params` + +An `int` that is `>= 1`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (weights, biases). +
+`weights` + +A list of `num_params` `Tensor` objects with the same type as `params`. +
+`biases` + +A list of `num_params` `Tensor` objects with the same type as `params`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNNParamsToCanonicalV2.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNNParamsToCanonicalV2.md new file mode 100644 index 00000000000..46b45f61dce --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNNParamsToCanonicalV2.md @@ -0,0 +1,207 @@ +description: Retrieves CudnnRNN params in canonical form. It supports the projection in LSTM. + +
+ + +
+ +# tf.raw_ops.CudnnRNNParamsToCanonicalV2 + + + + + + + + + +Retrieves CudnnRNN params in canonical form. It supports the projection in LSTM. + + + + + + + + + +Retrieves a set of weights from the opaque params buffer that can be saved and +restored in a way compatible with future runs. + +Note that the params buffer may not be compatible across different GPUs. So any +save and restoration should be converted to and from the canonical weights and +biases. + +num_layers: Specifies the number of layers in the RNN model. +num_units: Specifies the size of the hidden state. +input_size: Specifies the size of the input state. +num_params_weights: number of weight parameter matrix for all layers. +num_params_biases: number of bias parameter vector for all layers. +weights: the canonical form of weights that can be used for saving + and restoration. They are more likely to be compatible across different + generations. +biases: the canonical form of biases that can be used for saving + and restoration. They are more likely to be compatible across different + generations. +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicate whether there is a linear projection between the input and + The actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. + dir = (direction == bidirectional) ? 2 : 1 +dropout: dropout probability. When set to 0., dropout is disabled. +seed: the 1st part of a seed to initialize dropout. +seed2: the 2nd part of a seed to initialize dropout. +num_proj: The output dimensionality for the projection matrices. If None or 0, + no projection is performed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_layers` + +A `Tensor` of type `int32`. +
+`num_units` + +A `Tensor` of type `int32`. +
+`input_size` + +A `Tensor` of type `int32`. +
+`params` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +
+`num_params_weights` + +An `int` that is `>= 1`. +
+`num_params_biases` + +An `int` that is `>= 1`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`num_proj` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (weights, biases). +
+`weights` + +A list of `num_params_weights` `Tensor` objects with the same type as `params`. +
+`biases` + +A list of `num_params_biases` `Tensor` objects with the same type as `params`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNNV2.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNNV2.md new file mode 100644 index 00000000000..257d669bfb0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNNV2.md @@ -0,0 +1,217 @@ +description: A RNN backed by cuDNN. + +
+ + +
+ +# tf.raw_ops.CudnnRNNV2 + + + + + + + + + +A RNN backed by cuDNN. + + + + + + + + + +Computes the RNN from the input and initial states, with respect to the params +buffer. Produces one extra output "host_reserved" compared to CudnnRNN. + +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicates whether there is a linear projection between the input and + the actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. Should be + "unidirectional" or "bidirectional". +dropout: Dropout probability. When set to 0., dropout is disabled. +seed: The 1st part of a seed to initialize dropout. +seed2: The 2nd part of a seed to initialize dropout. +input: A 3-D tensor with the shape of [seq_length, batch_size, input_size]. +input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, + num_units]. +input_c: For LSTM, a 3-D tensor with the shape of + [num_layer * dir, batch, num_units]. For other models, it is ignored. +params: A 1-D tensor that contains the weights and biases in an opaque layout. + The size must be created through CudnnRNNParamsSize, and initialized + separately. Note that they might not be compatible across different + generations. So it is a good idea to save and restore the canonical weights + and biases instead. +output: A 3-D tensor with the shape of [seq_length, batch_size, + dir * num_units]. +output_h: The same shape as input_h. +output_c: The same shape as input_c for LSTM. An empty tensor for other models. +is_training: Indicates whether this operation is used for inference or + training. +reserve_space: An opaque tensor that can be used in backprop calculation. It + is only produced if is_training is true. +host_reserved: An opaque tensor that can be used in backprop calculation. It is + only produced if is_training is true. It is output on host memory rather than + device memory. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +
+`input_h` + +A `Tensor`. Must have the same type as `input`. +
+`input_c` + +A `Tensor`. Must have the same type as `input`. +
+`params` + +A `Tensor`. Must have the same type as `input`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`is_training` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_h, output_c, reserve_space, host_reserved). +
+`output` + +A `Tensor`. Has the same type as `input`. +
+`output_h` + +A `Tensor`. Has the same type as `input`. +
+`output_c` + +A `Tensor`. Has the same type as `input`. +
+`reserve_space` + +A `Tensor`. Has the same type as `input`. +
+`host_reserved` + +A `Tensor` of type `int8`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CudnnRNNV3.md b/site/en/api_docs/python/tf/raw_ops/CudnnRNNV3.md new file mode 100644 index 00000000000..d570988aeab --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CudnnRNNV3.md @@ -0,0 +1,242 @@ +description: A RNN backed by cuDNN. + +
+ + +
+ +# tf.raw_ops.CudnnRNNV3 + + + + + + + + + +A RNN backed by cuDNN. + + + + + + + + + +Computes the RNN from the input and initial states, with respect to the params +buffer. Accepts one extra input "sequence_lengths" compared to CudnnRNN. + +rnn_mode: Indicates the type of the RNN model. +input_mode: Indicates whether there is a linear projection between the input and + the actual computation before the first layer. 'skip_input' is only allowed + when input_size == num_units; 'auto_select' implies 'skip_input' when + input_size == num_units; otherwise, it implies 'linear_input'. +direction: Indicates whether a bidirectional model will be used. Should be + "unidirectional" or "bidirectional". +dropout: Dropout probability. When set to 0., dropout is disabled. +seed: The 1st part of a seed to initialize dropout. +seed2: The 2nd part of a seed to initialize dropout. +input: If time_major is true, this is a 3-D tensor with the shape of + [seq_length, batch_size, input_size]. If time_major is false, the shape is + [batch_size, seq_length, input_size]. +input_h: If time_major is true, this is a 3-D tensor with the shape of + [num_layer * dir, batch_size, num_units]. If time_major is false, the shape + is [batch_size, num_layer * dir, num_units]. +input_c: For LSTM, a 3-D tensor with the shape of + [num_layer * dir, batch, num_units]. For other models, it is ignored. +params: A 1-D tensor that contains the weights and biases in an opaque layout. + The size must be created through CudnnRNNParamsSize, and initialized + separately. Note that they might not be compatible across different + generations. So it is a good idea to save and restore the canonical weights + and biases instead. +sequence_lengths: a vector of lengths of each input sequence. +output: If time_major is true, this is a 3-D tensor with the shape of + [seq_length, batch_size, dir * num_units]. If time_major is false, the + shape is [batch_size, seq_length, dir * num_units]. +output_h: The same shape as input_h. +output_c: The same shape as input_c for LSTM. An empty tensor for other models. +is_training: Indicates whether this operation is used for inference or + training. +time_major: Indicates whether the input/output format is time major or batch + major. +reserve_space: An opaque tensor that can be used in backprop calculation. It + is only produced if is_training is true. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +
+`input_h` + +A `Tensor`. Must have the same type as `input`. +
+`input_c` + +A `Tensor`. Must have the same type as `input`. +
+`params` + +A `Tensor`. Must have the same type as `input`. +
+`sequence_lengths` + +A `Tensor` of type `int32`. +
+`rnn_mode` + +An optional `string` from: `"rnn_relu", "rnn_tanh", "lstm", "gru"`. Defaults to `"lstm"`. +
+`input_mode` + +An optional `string` from: `"linear_input", "skip_input", "auto_select"`. Defaults to `"linear_input"`. +
+`direction` + +An optional `string` from: `"unidirectional", "bidirectional"`. Defaults to `"unidirectional"`. +
+`dropout` + +An optional `float`. Defaults to `0`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`num_proj` + +An optional `int`. Defaults to `0`. +
+`is_training` + +An optional `bool`. Defaults to `True`. +
+`time_major` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_h, output_c, reserve_space, host_reserved). +
+`output` + +A `Tensor`. Has the same type as `input`. +
+`output_h` + +A `Tensor`. Has the same type as `input`. +
+`output_c` + +A `Tensor`. Has the same type as `input`. +
+`reserve_space` + +A `Tensor`. Has the same type as `input`. +
+`host_reserved` + +A `Tensor` of type `int8`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Cumprod.md b/site/en/api_docs/python/tf/raw_ops/Cumprod.md new file mode 100644 index 00000000000..960b15efdd8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Cumprod.md @@ -0,0 +1,133 @@ +description: Compute the cumulative product of the tensor x along axis. + +
+ + +
+ +# tf.raw_ops.Cumprod + + + + + + + + + +Compute the cumulative product of the tensor `x` along `axis`. + + + + + + + + + +By default, this op performs an inclusive cumprod, which means that the first +element of the input is identical to the first element of the output: + +```python +tf.cumprod([a, b, c]) # => [a, a * b, a * b * c] +``` + +By setting the `exclusive` kwarg to `True`, an exclusive cumprod is +performed instead: + +```python +tf.cumprod([a, b, c], exclusive=True) # => [1, a, a * b] +``` + +By setting the `reverse` kwarg to `True`, the cumprod is performed in the +opposite direction: + +```python +tf.cumprod([a, b, c], reverse=True) # => [a * b * c, b * c, c] +``` + +This is more efficient than using separate tf.reverse ops. + +The `reverse` and `exclusive` kwargs can also be combined: + +```python +tf.cumprod([a, b, c], exclusive=True, reverse=True) # => [b * c, c, 1] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. Must be in the range `[-rank(x), rank(x))`. +
+`exclusive` + +An optional `bool`. Defaults to `False`. +If `True`, perform exclusive cumprod. +
+`reverse` + +An optional `bool`. Defaults to `False`. +A `bool` (default: False). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
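+A minimal usage sketch (assuming eager TF 2.x; tf.raw_ops endpoints take keyword arguments only, and the public tf.math.cumprod wrapper exposes the same behavior):
+```python
+import tensorflow as tf
+
+x = tf.constant([2., 3., 4.])
+tf.raw_ops.Cumprod(x=x, axis=0)                   # => [2., 6., 24.]
+tf.raw_ops.Cumprod(x=x, axis=0, exclusive=True)   # => [1., 2., 6.]
+tf.raw_ops.Cumprod(x=x, axis=0, reverse=True)     # => [24., 12., 4.]
+```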
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Cumsum.md b/site/en/api_docs/python/tf/raw_ops/Cumsum.md new file mode 100644 index 00000000000..7e5d0949884 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Cumsum.md @@ -0,0 +1,133 @@ +description: Compute the cumulative sum of the tensor x along axis. + +
+ + +
+ +# tf.raw_ops.Cumsum + + + + + + + + + +Compute the cumulative sum of the tensor `x` along `axis`. + + + + + + + + + +By default, this op performs an inclusive cumsum, which means that the first +element of the input is identical to the first element of the output: + +```python +tf.cumsum([a, b, c]) # => [a, a + b, a + b + c] +``` + +By setting the `exclusive` kwarg to `True`, an exclusive cumsum is +performed instead: + +```python +tf.cumsum([a, b, c], exclusive=True) # => [0, a, a + b] +``` + +By setting the `reverse` kwarg to `True`, the cumsum is performed in the +opposite direction: + +```python +tf.cumsum([a, b, c], reverse=True) # => [a + b + c, b + c, c] +``` + +This is more efficient than using separate tf.reverse ops. + +The `reverse` and `exclusive` kwargs can also be combined: + +```python +tf.cumsum([a, b, c], exclusive=True, reverse=True) # => [b + c, c, 0] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. Must be in the range `[-rank(x), rank(x))`. +
+`exclusive` + +An optional `bool`. Defaults to `False`. +If `True`, perform exclusive cumsum. +
+`reverse` + +An optional `bool`. Defaults to `False`. +A `bool` (default: False). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
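+A minimal usage sketch (assuming eager TF 2.x; tf.raw_ops endpoints take keyword arguments only, and the public tf.math.cumsum wrapper exposes the same behavior):
+```python
+import tensorflow as tf
+
+x = tf.constant([1., 2., 3.])
+tf.raw_ops.Cumsum(x=x, axis=0)                   # => [1., 3., 6.]
+tf.raw_ops.Cumsum(x=x, axis=0, exclusive=True)   # => [0., 1., 3.]
+tf.raw_ops.Cumsum(x=x, axis=0, reverse=True)     # => [6., 5., 3.]
+```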
+ diff --git a/site/en/api_docs/python/tf/raw_ops/CumulativeLogsumexp.md b/site/en/api_docs/python/tf/raw_ops/CumulativeLogsumexp.md new file mode 100644 index 00000000000..492a1255770 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/CumulativeLogsumexp.md @@ -0,0 +1,121 @@ +description: Compute the cumulative log-sum-exp of the tensor x along axis. +
+ + +
+ +# tf.raw_ops.CumulativeLogsumexp + + + + + + + + + +Compute the cumulative log-sum-exp of the tensor `x` along `axis`. + + + + + + + + + +By default, this op performs an inclusive cumulative log-sum-exp, +which means that the first +element of the input is identical to the first element of the output: +```python +tf.math.cumulative_logsumexp([a, b, c]) # => [a, log(exp(a) + exp(b)), log(exp(a) + exp(b) + exp(c))] +``` + +By setting the `exclusive` kwarg to `True`, an exclusive cumulative log-sum-exp is +performed instead: +```python +tf.math.cumulative_logsumexp([a, b, c], exclusive=True) # => [-inf, a, log(exp(a) + exp(b))] +``` +Note that the neutral element of the log-sum-exp operation is `-inf`, +however, for performance reasons, the minimal value representable by the +floating point type is used instead. + +By setting the `reverse` kwarg to `True`, the cumulative log-sum-exp is performed in the +opposite direction. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half` (i.e. `float16`), `float32`, `float64`. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. Must be in the range `[-rank(x), rank(x))`. +
+`exclusive` + +An optional `bool`. Defaults to `False`. +If `True`, perform exclusive cumulative log-sum-exp. +
+`reverse` + +An optional `bool`. Defaults to `False`. +A `bool` (default: False). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
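+A minimal numeric sketch (assuming eager TF 2.x; the public tf.math.cumulative_logsumexp wrapper exposes the same behavior, and the printed values are approximate):
+```python
+import tensorflow as tf
+
+x = tf.math.log(tf.constant([1., 2., 3.]))
+# Inclusive scan: log(1), log(1 + 2), log(1 + 2 + 3).
+tf.raw_ops.CumulativeLogsumexp(x=x, axis=0)  # => approx. [0., 1.0986, 1.7918]
+```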
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DataFormatDimMap.md b/site/en/api_docs/python/tf/raw_ops/DataFormatDimMap.md new file mode 100644 index 00000000000..074f69f0616 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DataFormatDimMap.md @@ -0,0 +1,96 @@ +description: Returns the dimension index in the destination data format given the one in + +
+ + +
+ +# tf.raw_ops.DataFormatDimMap + + + + + + + + + +Returns the dimension index in the destination data format given the one in + + + + + + + + + +the source data format. + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A Tensor with each element as a dimension index in source data format. +Must be in the range [-4, 4). +
+`src_format` + +An optional `string`. Defaults to `"NHWC"`. +source data format. +
+`dst_format` + +An optional `string`. Defaults to `"NCHW"`. +destination data format. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
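+For example (a minimal sketch, assuming eager TF 2.x), mapping all four NHWC dimension indices to their NCHW positions:
+```python
+import tensorflow as tf
+
+tf.raw_ops.DataFormatDimMap(x=tf.constant([0, 1, 2, 3]),
+                            src_format="NHWC", dst_format="NCHW")
+# => [0, 2, 3, 1]  (N stays at 0, H moves to 2, W to 3, C to 1)
+```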
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DataFormatVecPermute.md b/site/en/api_docs/python/tf/raw_ops/DataFormatVecPermute.md new file mode 100644 index 00000000000..5d1638b3f4e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DataFormatVecPermute.md @@ -0,0 +1,95 @@ +description: Returns the permuted vector/tensor in the destination data format given the + +
+ + +
+ +# tf.raw_ops.DataFormatVecPermute + + + + + + + + + +Returns the permuted vector/tensor in the destination data format given the + + + + + + + + + +one in the source data format. + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Vector of size 4 or Tensor of shape (4, 2) in source data format. +
+`src_format` + +An optional `string`. Defaults to `"NHWC"`. +source data format. +
+`dst_format` + +An optional `string`. Defaults to `"NCHW"`. +destination data format. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
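+For example (a minimal sketch, assuming eager TF 2.x), permuting a shape vector from NHWC to NCHW order:
+```python
+import tensorflow as tf
+
+# [batch, height, width, channels] reordered to [batch, channels, height, width].
+tf.raw_ops.DataFormatVecPermute(x=tf.constant([1, 32, 32, 3]),
+                                src_format="NHWC", dst_format="NCHW")
+# => [1, 3, 32, 32]
+```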
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DatasetCardinality.md b/site/en/api_docs/python/tf/raw_ops/DatasetCardinality.md new file mode 100644 index 00000000000..e9ebbda0bf3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DatasetCardinality.md @@ -0,0 +1,79 @@ +description: Returns the cardinality of input_dataset. + +
+ + +
+ +# tf.raw_ops.DatasetCardinality + + + + + + + + + +Returns the cardinality of `input_dataset`. + + + + + + + + + +Returns the cardinality of `input_dataset`. + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the dataset to return cardinality for. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
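+A minimal sketch, assuming eager TF 2.x. The raw op consumes the dataset's variant tensor; `_variant_tensor` is a private attribute used here only for illustration, and tf.data.experimental.cardinality is the usual public entry point:
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(5)
+tf.data.experimental.cardinality(ds)                              # => 5
+tf.raw_ops.DatasetCardinality(input_dataset=ds._variant_tensor)   # => 5
+```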
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DatasetFromGraph.md b/site/en/api_docs/python/tf/raw_ops/DatasetFromGraph.md new file mode 100644 index 00000000000..250e92f20fb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DatasetFromGraph.md @@ -0,0 +1,79 @@ +description: Creates a dataset from the given graph_def. + +
+ + +
+ +# tf.raw_ops.DatasetFromGraph + + + + + + + + + +Creates a dataset from the given `graph_def`. + + + + + + + + + +Creates a dataset from the provided `graph_def`. + + + + + + + + + + + + + +
+`graph_def` + +A `Tensor` of type `string`. +The graph representation of the dataset (as serialized GraphDef). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DatasetToGraph.md b/site/en/api_docs/python/tf/raw_ops/DatasetToGraph.md new file mode 100644 index 00000000000..7e9675f4acb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DatasetToGraph.md @@ -0,0 +1,101 @@ +description: Returns a serialized GraphDef representing input_dataset. + +
+ + +
+ +# tf.raw_ops.DatasetToGraph + + + + + + + + + +Returns a serialized GraphDef representing `input_dataset`. + + + + + + + + + +Returns a graph representation for `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the dataset to return the graph representation for. +
+`stateful_whitelist` + +An optional list of `strings`. Defaults to `[]`. +
+`allow_stateful` + +An optional `bool`. Defaults to `False`. +
+`strip_device_assignment` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
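+A round-trip sketch with tf.raw_ops.DatasetFromGraph (assuming eager TF 2.x; `_variant_tensor` is a private attribute, shown only for illustration):
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(3)
+# Serialize the dataset's graph to a scalar string tensor.
+graph_def = tf.raw_ops.DatasetToGraph(input_dataset=ds._variant_tensor)
+# Rebuild a dataset variant from the serialized GraphDef ...
+variant = tf.raw_ops.DatasetFromGraph(graph_def=graph_def)
+# ... and wrap it back into a tf.data.Dataset.
+restored = tf.data.experimental.from_variant(variant, ds.element_spec)
+list(restored.as_numpy_iterator())  # => [0, 1, 2]
+```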
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DatasetToGraphV2.md b/site/en/api_docs/python/tf/raw_ops/DatasetToGraphV2.md new file mode 100644 index 00000000000..609bc936063 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DatasetToGraphV2.md @@ -0,0 +1,94 @@ +description: Returns a serialized GraphDef representing input_dataset. + +
+ + +
+ +# tf.raw_ops.DatasetToGraphV2 + + + + + + + + + +Returns a serialized GraphDef representing `input_dataset`. + + + + + + + + + +Returns a graph representation for `input_dataset`. + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the dataset to return the graph representation for. +
+`external_state_policy` + +An optional `int`. Defaults to `0`. +
+`strip_device_assignment` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DatasetToSingleElement.md b/site/en/api_docs/python/tf/raw_ops/DatasetToSingleElement.md new file mode 100644 index 00000000000..5449f335d57 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DatasetToSingleElement.md @@ -0,0 +1,92 @@ +description: Outputs the single element from the given dataset. + +
+ + +
+ +# tf.raw_ops.DatasetToSingleElement + + + + + + + + + +Outputs the single element from the given dataset. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dataset` + +A `Tensor` of type `variant`. +A handle to a dataset that contains a single element. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `output_types`. +
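+A minimal sketch, assuming eager TF 2.x; tf.data.experimental.get_single_element is the usual public entry point, and `_variant_tensor` is a private attribute used only to feed the raw op:
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.from_tensors(42)
+tf.data.experimental.get_single_element(ds)   # => 42
+tf.raw_ops.DatasetToSingleElement(
+    dataset=ds._variant_tensor,
+    output_types=[tf.int32],
+    output_shapes=[[]])                        # => list with one scalar tensor, 42
+```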
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DatasetToTFRecord.md b/site/en/api_docs/python/tf/raw_ops/DatasetToTFRecord.md new file mode 100644 index 00000000000..6853b2e5211 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DatasetToTFRecord.md @@ -0,0 +1,95 @@ +description: Writes the given dataset to the given file using the TFRecord format. + +
+ + +
+ +# tf.raw_ops.DatasetToTFRecord + + + + + + + + + +Writes the given dataset to the given file using the TFRecord format. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the dataset to write. +
+`filename` + +A `Tensor` of type `string`. +A scalar string tensor representing the filename to use. +
+`compression_type` + +A `Tensor` of type `string`. +A scalar string tensor containing either (i) the empty string (no +compression), (ii) "ZLIB", or (iii) "GZIP". +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
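+A minimal sketch, assuming eager TF 2.x: the dataset must yield scalar strings (for example serialized tf.train.Example protos), the output path is illustrative, and `_variant_tensor` is a private attribute. tf.data.experimental.TFRecordWriter is the usual public entry point for this functionality.
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.from_tensor_slices([b"record-0", b"record-1"])
+tf.raw_ops.DatasetToTFRecord(
+    input_dataset=ds._variant_tensor,   # private attribute, for illustration
+    filename="/tmp/example.tfrecord",   # illustrative output path
+    compression_type="")                # "" means no compression
+```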
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Dawsn.md b/site/en/api_docs/python/tf/raw_ops/Dawsn.md new file mode 100644 index 00000000000..f1ad0c16cd9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Dawsn.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.Dawsn + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
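+This page carries no prose summary; the op evaluates Dawson's integral, D(x) = exp(-x**2) * integral from 0 to x of exp(t**2) dt (exposed as tf.math.special.dawsn in newer TensorFlow releases). A minimal sketch, assuming eager TF 2.x; the printed values are approximate:
+```python
+import tensorflow as tf
+
+tf.raw_ops.Dawsn(x=tf.constant([0.0, 1.0, 2.0]))
+# => approx. [0., 0.5381, 0.3013]
+```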
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DebugGradientIdentity.md b/site/en/api_docs/python/tf/raw_ops/DebugGradientIdentity.md new file mode 100644 index 00000000000..032872bbb6b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DebugGradientIdentity.md @@ -0,0 +1,80 @@ +description: Identity op for gradient debugging. + +
+ + +
+ +# tf.raw_ops.DebugGradientIdentity + + + + + + + + + +Identity op for gradient debugging. + + + + + + + + + +This op is hidden from public in Python. It is used by TensorFlow Debugger to +register gradient tensors for gradient debugging. +This op operates on non-reference-type tensors. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DebugGradientRefIdentity.md b/site/en/api_docs/python/tf/raw_ops/DebugGradientRefIdentity.md new file mode 100644 index 00000000000..58fa8c1df75 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DebugGradientRefIdentity.md @@ -0,0 +1,80 @@ +description: Identity op for gradient debugging. + +
+ + +
+ +# tf.raw_ops.DebugGradientRefIdentity + + + + + + + + + +Identity op for gradient debugging. + + + + + + + + + +This op is hidden from public in Python. It is used by TensorFlow Debugger to +register gradient tensors for gradient debugging. +This op operates on reference-type tensors. + + + + + + + + + + + + + +
+`input` + +A mutable `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DebugIdentity.md b/site/en/api_docs/python/tf/raw_ops/DebugIdentity.md new file mode 100644 index 00000000000..f6e4c961bcd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DebugIdentity.md @@ -0,0 +1,117 @@ +description: Provides an identity mapping of the non-Ref type input tensor for debugging. + +
+ + +
+ +# tf.raw_ops.DebugIdentity + + + + + + + + + +Provides an identity mapping of the non-Ref type input tensor for debugging. + + + + + + + + + +Provides an identity mapping of the non-Ref type input tensor for debugging. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Input tensor, non-Reference type +
+`device_name` + +An optional `string`. Defaults to `""`. +Name of the device on which the tensor resides. +
+`tensor_name` + +An optional `string`. Defaults to `""`. +Name of the input tensor. +
+`debug_urls` + +An optional list of `strings`. Defaults to `[]`. +List of URLs to debug targets, e.g., +file:///foo/tfdbg_dump, grpc://localhost:11011 +
+`gated_grpc` + +An optional `bool`. Defaults to `False`. +Whether this op will be gated. If any of the debug_urls of this +debug node is of the grpc:// scheme, when the value of this attribute is set +to True, the data will not actually be sent via the grpc stream unless this +debug op has been enabled at the debug_url. If all of the debug_urls of this +debug node are of the grpc:// scheme and the debug op is enabled at none of +them, the output will be an empty Tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DebugIdentityV2.md b/site/en/api_docs/python/tf/raw_ops/DebugIdentityV2.md new file mode 100644 index 00000000000..18cc13ad028 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DebugIdentityV2.md @@ -0,0 +1,130 @@ +description: Debug Identity V2 Op. + +
+ + +
+ +# tf.raw_ops.DebugIdentityV2 + + + + + + + + + +Debug Identity V2 Op. + + + + + + + + + +Provides an identity mapping from input to output, while writing the content of +the input tensor by calling DebugEventsWriter. + +The semantics of the input tensor depends on tensor_debug_mode. In typical +usage, the input tensor comes directly from the user computation only when +graph_debug_mode is FULL_TENSOR (see protobuf/debug_event.proto for a +list of all the possible values of graph_debug_mode). For the other debug modes, +the input tensor should be produced by an additional op or subgraph that +computes summary information about one or more tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Input tensor, non-Reference type +
+`tfdbg_context_id` + +An optional `string`. Defaults to `""`. +A tfdbg-generated ID for the context that the op belongs to, +e.g., a concrete compiled tf.function. +
+`op_name` + +An optional `string`. Defaults to `""`. +Optional. Name of the op that the debug op is concerned with. +Used only for single-tensor trace. +
+`output_slot` + +An optional `int`. Defaults to `-1`. +Optional. Output slot index of the tensor that the debug op +is concerned with. Used only for single-tensor trace. +
+`tensor_debug_mode` + +An optional `int`. Defaults to `-1`. +TensorDebugMode enum value. See debug_event.proto for details. +
+`debug_urls` + +An optional list of `strings`. Defaults to `[]`. +List of URLs to debug targets, e.g., file:///foo/tfdbg_dump. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DebugNanCount.md b/site/en/api_docs/python/tf/raw_ops/DebugNanCount.md new file mode 100644 index 00000000000..652f73ee56c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DebugNanCount.md @@ -0,0 +1,116 @@ +description: Debug NaN Value Counter Op. + +
+ + +
+ +# tf.raw_ops.DebugNanCount + + + + + + + + + +Debug NaN Value Counter Op. + + + + + + + + + +Counts number of NaNs in the input tensor, for debugging. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Input tensor, non-Reference type. +
+`device_name` + +An optional `string`. Defaults to `""`. +
+`tensor_name` + +An optional `string`. Defaults to `""`. +Name of the input tensor. +
+`debug_urls` + +An optional list of `strings`. Defaults to `[]`. +List of URLs to debug targets, e.g., +file:///foo/tfdbg_dump, grpc://localhost:11011. +
+`gated_grpc` + +An optional `bool`. Defaults to `False`. +Whether this op will be gated. If any of the debug_urls of this +debug node is of the grpc:// scheme, when the value of this attribute is set +to True, the data will not actually be sent via the grpc stream unless this +debug op has been enabled at the debug_url. If all of the debug_urls of this +debug node are of the grpc:// scheme and the debug op is enabled at none of +them, the output will be an empty Tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DebugNumericSummary.md b/site/en/api_docs/python/tf/raw_ops/DebugNumericSummary.md new file mode 100644 index 00000000000..2cd2b039424 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DebugNumericSummary.md @@ -0,0 +1,172 @@ +description: Debug Numeric Summary Op. + +
+ + +
+ +# tf.raw_ops.DebugNumericSummary + + + + + + + + + +Debug Numeric Summary Op. + + + + + + + + + +Provides a basic summary of numeric value types, range and distribution. + +output: A double tensor of shape [14 + nDimensions], where nDimensions is the + number of dimensions of the tensor's shape. The elements of output are: + [0]: is initialized (1.0) or not (0.0). + [1]: total number of elements + [2]: NaN element count + [3]: generalized -inf count: elements <= lower_bound. lower_bound is -inf by + default. + [4]: negative element count (excluding -inf), if lower_bound is the default + -inf. Otherwise, this is the count of elements > lower_bound and < 0. + [5]: zero element count + [6]: positive element count (excluding +inf), if upper_bound is the default + +inf. Otherwise, this is the count of elements < upper_bound and > 0. + [7]: generalized +inf count, elements >= upper_bound. upper_bound is +inf by + default. +Output elements [1:8] are all zero, if the tensor is uninitialized. + [8]: minimum of all non-inf and non-NaN elements. + If uninitialized or no such element exists: +inf. + [9]: maximum of all non-inf and non-NaN elements. + If uninitialized or no such element exists: -inf. + [10]: mean of all non-inf and non-NaN elements. + If uninitialized or no such element exists: NaN. + [11]: variance of all non-inf and non-NaN elements. + If uninitialized or no such element exists: NaN. + [12]: Data type of the tensor encoded as an enum integer. See the DataType + proto for more details. + [13]: Number of dimensions of the tensor (ndims). + [14+]: Sizes of the dimensions. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Input tensor, non-Reference type. +
+`device_name` + +An optional `string`. Defaults to `""`. +
+`tensor_name` + +An optional `string`. Defaults to `""`. +Name of the input tensor. +
+`debug_urls` + +An optional list of `strings`. Defaults to `[]`. +List of URLs to debug targets, e.g., +file:///foo/tfdbg_dump, grpc://localhost:11011. +
+`lower_bound` + +An optional `float`. Defaults to `float('-inf')`. +(float) The lower bound <= which values will be included in the +generalized -inf count. Default: -inf. +
+`upper_bound` + +An optional `float`. Defaults to `float('inf')`. +(float) The upper bound >= which values will be included in the +generalized +inf count. Default: +inf. +
+`mute_if_healthy` + +An optional `bool`. Defaults to `False`. +(bool) Do not send data to the debug URLs unless at least one +of elements [2], [3] and [7] (i.e., the nan count and the generalized -inf and +inf counts) is non-zero. +
+`gated_grpc` + +An optional `bool`. Defaults to `False`. +Whether this op will be gated. If any of the debug_urls of this +debug node is of the grpc:// scheme, when the value of this attribute is set +to True, the data will not actually be sent via the grpc stream unless this +debug op has been enabled at the debug_url. If all of the debug_urls of this +debug node are of the grpc:// scheme and the debug op is enabled at none of +them, the output will be an empty Tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DebugNumericSummaryV2.md b/site/en/api_docs/python/tf/raw_ops/DebugNumericSummaryV2.md new file mode 100644 index 00000000000..dcd535ea821 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DebugNumericSummaryV2.md @@ -0,0 +1,158 @@ +description: Debug Numeric Summary V2 Op. + +
+ + +
+ +# tf.raw_ops.DebugNumericSummaryV2 + + + + + + + + + +Debug Numeric Summary V2 Op. + + + + + + + + + +Computes a numeric summary of the input tensor. The shape of the output +depends on the tensor_debug_mode attribute. +This op is used internally by TensorFlow Debugger (tfdbg) v2. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Input tensor, to be summarized by the op. +
+`output_dtype` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +Optional. The type of the output. Can be float32 or float64 (default: float32). +
+`tensor_debug_mode` + +An optional `int`. Defaults to `-1`. +Tensor debug mode: the mode in which the input tensor is summarized +by the op. See the TensorDebugMode enum in +tensorflow/core/protobuf/debug_event.proto for details. + +Supported values: +2 (CURT_HEALTH): Output a float32/64 tensor of shape [2]. The 1st +element is the tensor_id, if provided, and -1 otherwise. The 2nd +element is a bit which is set to 1 if the input tensor has an +infinity or nan value, or zero otherwise. + +3 (CONCISE_HEALTH): Output a float32/64 tensor of shape [5]. The 1st +element is the tensor_id, if provided, and -1 otherwise. The +remaining four slots are the total number of elements, -infs, ++infs, and nans in the input tensor respectively. + +4 (FULL_HEALTH): Output a float32/64 tensor of shape [11]. The 1st +element is the tensor_id, if provided, and -1 otherwise. The 2nd +element is the device_id, if provided, and -1 otherwise. The 3rd +element holds the datatype value of the input tensor as according +to the enumerated type in tensorflow/core/framework/types.proto. +The remaining elements hold the total number of elements, -infs, ++infs, nans, negative finite numbers, zeros, and positive finite +numbers in the input tensor respectively. + +5 (SHAPE): Output a float32/64 tensor of shape [10]. The 1st +element is the tensor_id, if provided, and -1 otherwise. The 2nd +element holds the datatype value of the input tensor as according +to the enumerated type in tensorflow/core/framework/types.proto. +The 3rd element holds the rank of the tensor. The 4th element holds +the number of elements within the tensor. Finally the remaining 6 +elements hold the shape of the tensor. If the rank of the tensor +is lower than 6, the shape is right padded with zeros. If the rank +is greater than 6, the head of the shape is truncated. + +6 (FULL_NUMERICS): Output a float32/64 tensor of shape [22]. The 1st +element is the tensor_id, if provided, and -1 otherwise. The 2nd +element is the device_id, if provided, and -1 otherwise. The 3rd +element holds the datatype value of the input tensor as according +to the enumerated type in tensorflow/core/framework/types.proto. +The 4th element holds the rank of the tensor. The 5th to 11th +elements hold the shape of the tensor. If the rank of the tensor +is lower than 6, the shape is right padded with zeros. If the rank +is greater than 6, the head of the shape is truncated. The 12th to +18th elements hold the number of elements, -infs, +infs, nans, +denormal floats, negative finite numbers, zeros, and positive +finite numbers in the input tensor respectively. The final four +elements hold the min value, max value, mean, and variance of the +input tensor. + +8 (REDUCE_INF_NAN_THREE_SLOTS): Output a float32/64 tensor of shape +[3]. The 1st element is -inf if any elements of the input tensor +is -inf, or zero otherwise. The 2nd element is +inf if any elements +of the input tensor is +inf, or zero otherwise. The 3rd element is +nan if any element of the input tensor is nan, or zero otherwise. +
+`tensor_id` + +An optional `int`. Defaults to `-1`. +Optional. An integer identifier for the tensor being summarized by this op. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_dtype`. +
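#### Example

A minimal sketch for illustration (not part of the generated reference), assuming eager execution and `tensor_debug_mode=3` (CONCISE_HEALTH) as described above; the input values are made up:

```
import tensorflow as tf

# Summarize a tensor that contains one +inf and one NaN.
x = tf.constant([1.0, float("inf"), float("nan"), -2.0])
summary = tf.raw_ops.DebugNumericSummaryV2(
    input=x, tensor_debug_mode=3, output_dtype=tf.float64)
# Expected layout (CONCISE_HEALTH):
# [tensor_id (-1 by default), element count, -inf count, +inf count, nan count]
print(summary)
```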
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeAndCropJpeg.md b/site/en/api_docs/python/tf/raw_ops/DecodeAndCropJpeg.md new file mode 100644 index 00000000000..7934091b147 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeAndCropJpeg.md @@ -0,0 +1,161 @@ +description: Decode and Crop a JPEG-encoded image to a uint8 tensor. + +
+ + +
+ +# tf.raw_ops.DecodeAndCropJpeg + + + + + + + + + +Decode and Crop a JPEG-encoded image to a uint8 tensor. + + + + + + + + + +The attr `channels` indicates the desired number of color channels for the +decoded image. + +#### Accepted values are: + + + +* 0: Use the number of channels in the JPEG-encoded image. +* 1: output a grayscale image. +* 3: output an RGB image. + +If needed, the JPEG-encoded image is transformed to match the requested number +of color channels. + +The attr `ratio` allows downscaling the image by an integer factor during +decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than +downscaling the image later. + + +It is equivalent to a combination of decode and crop, but much faster by only +decoding partial jpeg image. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The JPEG-encoded image. +
+`crop_window` + +A `Tensor` of type `int32`. +1-D. The crop window: [crop_y, crop_x, crop_height, crop_width]. +
+`channels` + +An optional `int`. Defaults to `0`. +Number of color channels for the decoded image. +
+`ratio` + +An optional `int`. Defaults to `1`. Downscaling ratio. +
+`fancy_upscaling` + +An optional `bool`. Defaults to `True`. +If true use a slower but nicer upscaling of the +chroma planes (yuv420/422 only). +
+`try_recover_truncated` + +An optional `bool`. Defaults to `False`. +If true try to recover an image from truncated input. +
+`acceptable_fraction` + +An optional `float`. Defaults to `1`. +The minimum required fraction of lines before a truncated +input is accepted. +
+`dct_method` + +An optional `string`. Defaults to `""`. +string specifying a hint about the algorithm used for +decompression. Defaults to "" which maps to a system-specific +default. Currently valid values are ["INTEGER_FAST", +"INTEGER_ACCURATE"]. The hint may be ignored (e.g., the internal +jpeg library changes to a version that does not have that specific +option.) +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
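#### Example

An illustrative sketch (not part of the generated reference); the file path is a placeholder and the crop values are made up:

```
import tensorflow as tf

# Decode only a 100x200 window of the JPEG, starting at (crop_y=10, crop_x=20).
contents = tf.io.read_file("/path/to/image.jpg")  # placeholder path
crop_window = tf.constant([10, 20, 100, 200], dtype=tf.int32)
cropped = tf.raw_ops.DecodeAndCropJpeg(
    contents=contents, crop_window=crop_window, channels=3)
# cropped is a uint8 tensor of shape [100, 200, 3].
```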
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeBase64.md b/site/en/api_docs/python/tf/raw_ops/DecodeBase64.md new file mode 100644 index 00000000000..ea0281eb77a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeBase64.md @@ -0,0 +1,79 @@ +description: Decode web-safe base64-encoded strings. + +
+ + +
+ +# tf.raw_ops.DecodeBase64 + + + + + + + + + +Decode web-safe base64-encoded strings. + + + + + + + + + +Input may or may not have padding at the end. See EncodeBase64 for padding. +Web-safe means that input must use - and _ instead of + and /. + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. Base64 strings to decode. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
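#### Example

A small illustrative sketch (not part of the generated reference):

```
import tensorflow as tf

# "aGVsbG8gd29ybGQ" is the unpadded web-safe base64 encoding of "hello world".
decoded = tf.raw_ops.DecodeBase64(input=tf.constant("aGVsbG8gd29ybGQ"))
print(decoded)  # tf.Tensor(b'hello world', shape=(), dtype=string)
```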
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeBmp.md b/site/en/api_docs/python/tf/raw_ops/DecodeBmp.md new file mode 100644 index 00000000000..709239625b5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeBmp.md @@ -0,0 +1,94 @@ +description: Decode the first frame of a BMP-encoded image to a uint8 tensor. + +
+ + +
+ +# tf.raw_ops.DecodeBmp + + + + + + + + + +Decode the first frame of a BMP-encoded image to a uint8 tensor. + + + + + + + + + +The attr `channels` indicates the desired number of color channels for the +decoded image. + +#### Accepted values are: + + + +* 0: Use the number of channels in the BMP-encoded image. +* 3: output an RGB image. +* 4: output an RGBA image. + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The BMP-encoded image. +
+`channels` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeCSV.md b/site/en/api_docs/python/tf/raw_ops/DecodeCSV.md new file mode 100644 index 00000000000..9c9a92d6954 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeCSV.md @@ -0,0 +1,126 @@ +description: Convert CSV records to tensors. Each column maps to one tensor. + +
+ + +
+ +# tf.raw_ops.DecodeCSV + + + + + + + + + +Convert CSV records to tensors. Each column maps to one tensor. + + + + + + + + + +RFC 4180 format is expected for the CSV records. +(https://tools.ietf.org/html/rfc4180) +Note that we allow leading and trailing spaces with int or float field. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`records` + +A `Tensor` of type `string`. +Each string is a record/row in the csv and all records should have +the same format. +
+`record_defaults` + +A list of `Tensor` objects with types from: `float32`, `float64`, `int32`, `int64`, `string`. +One tensor per column of the input record, with either a +scalar default value for that column or an empty vector if the column is +required. +
+`field_delim` + +An optional `string`. Defaults to `","`. +char delimiter to separate fields in a record. +
+`use_quote_delim` + +An optional `bool`. Defaults to `True`. +If false, treats double quotation marks as regular +characters inside of the string fields (ignoring RFC 4180, Section 2, +Bullet 5). +
+`na_value` + +An optional `string`. Defaults to `""`. +Additional string to recognize as NA/NaN. +
+`select_cols` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects. Has the same type as `record_defaults`. +
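#### Example

An illustrative sketch (not part of the generated reference); the records and defaults are made up:

```
import tensorflow as tf

# Two records with an int column, a float column and a string column.
# record_defaults fixes the column dtypes and supplies values for empty fields.
records = tf.constant(["1,2.5,hello", "4,,world"])
record_defaults = [tf.constant([0]), tf.constant([0.0]), tf.constant([""])]
col_int, col_float, col_str = tf.raw_ops.DecodeCSV(
    records=records, record_defaults=record_defaults)
# col_float == [2.5, 0.0]: the empty field in the second record used its default.
```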
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeCompressed.md b/site/en/api_docs/python/tf/raw_ops/DecodeCompressed.md new file mode 100644 index 00000000000..9057facd629 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeCompressed.md @@ -0,0 +1,93 @@ +description: Decompress strings. + +
+ + +
+ +# tf.raw_ops.DecodeCompressed + + + + + + + + + +Decompress strings. + + + + + + + + + +This op decompresses each element of the `bytes` input `Tensor`, which +is assumed to be compressed using the given `compression_type`. + +The `output` is a string `Tensor` of the same shape as `bytes`, +each element containing the decompressed data from the corresponding +element in `bytes`. + + + + + + + + + + + + + + + + +
+`bytes` + +A `Tensor` of type `string`. +A Tensor of string which is compressed. +
+`compression_type` + +An optional `string`. Defaults to `""`. +A scalar containing either (i) the empty string (no +compression), (ii) "ZLIB", or (iii) "GZIP". +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
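#### Example

A round-trip sketch for illustration (not part of the generated reference): the payload is compressed with Python's `zlib` module and then decompressed by the op.

```
import zlib

import tensorflow as tf

payload = b"compressed payload"
compressed = tf.constant(zlib.compress(payload))  # zlib stream -> "ZLIB" type
decoded = tf.raw_ops.DecodeCompressed(bytes=compressed, compression_type="ZLIB")
print(decoded)  # tf.Tensor(b'compressed payload', shape=(), dtype=string)
```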
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeGif.md b/site/en/api_docs/python/tf/raw_ops/DecodeGif.md new file mode 100644 index 00000000000..d8451635e0c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeGif.md @@ -0,0 +1,85 @@ +description: Decode the frame(s) of a GIF-encoded image to a uint8 tensor. + +
+ + +
+ +# tf.raw_ops.DecodeGif + + + + + + + + + +Decode the frame(s) of a GIF-encoded image to a uint8 tensor. + + + + + + + + + +GIF images with frame or transparency compression are not supported. +On Linux and MacOS systems, convert animated GIFs from compressed to +uncompressed by running: + + convert $src.gif -coalesce $dst.gif + +This op also supports decoding JPEGs and PNGs, though it is cleaner to use +tf.image.decode_image. + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The GIF-encoded image. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeJSONExample.md b/site/en/api_docs/python/tf/raw_ops/DecodeJSONExample.md new file mode 100644 index 00000000000..e9a82bd3226 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeJSONExample.md @@ -0,0 +1,85 @@ +description: Convert JSON-encoded Example records to binary protocol buffer strings. + +
+ + +
+ +# tf.raw_ops.DecodeJSONExample + + + + + + + + + +Convert JSON-encoded Example records to binary protocol buffer strings. + + + + + + + + + +This op translates a tensor containing Example records, encoded using +the [standard JSON +mapping](https://developers.google.com/protocol-buffers/docs/proto3#json), +into a tensor containing the same records encoded as binary protocol +buffers. The resulting tensor can then be fed to any of the other +Example-parsing ops. + + + + + + + + + + + + + +
+`json_examples` + +A `Tensor` of type `string`. +Each string is a JSON object serialized according to the JSON +mapping of the Example proto. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeJpeg.md b/site/en/api_docs/python/tf/raw_ops/DecodeJpeg.md new file mode 100644 index 00000000000..28234a9f9ad --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeJpeg.md @@ -0,0 +1,153 @@ +description: Decode a JPEG-encoded image to a uint8 tensor. + +
+ + +
+ +# tf.raw_ops.DecodeJpeg + + + + + + + + + +Decode a JPEG-encoded image to a uint8 tensor. + + + + + + + + + +The attr `channels` indicates the desired number of color channels for the +decoded image. + +#### Accepted values are: + + + +* 0: Use the number of channels in the JPEG-encoded image. +* 1: output a grayscale image. +* 3: output an RGB image. + +If needed, the JPEG-encoded image is transformed to match the requested number +of color channels. + +The attr `ratio` allows downscaling the image by an integer factor during +decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than +downscaling the image later. + + +This op also supports decoding PNGs and non-animated GIFs since the interface is +the same, though it is cleaner to use tf.image.decode_image. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The JPEG-encoded image. +
+`channels` + +An optional `int`. Defaults to `0`. +Number of color channels for the decoded image. +
+`ratio` + +An optional `int`. Defaults to `1`. Downscaling ratio. +
+`fancy_upscaling` + +An optional `bool`. Defaults to `True`. +If true use a slower but nicer upscaling of the +chroma planes (yuv420/422 only). +
+`try_recover_truncated` + +An optional `bool`. Defaults to `False`. +If true try to recover an image from truncated input. +
+`acceptable_fraction` + +An optional `float`. Defaults to `1`. +The minimum required fraction of lines before a truncated +input is accepted. +
+`dct_method` + +An optional `string`. Defaults to `""`. +string specifying a hint about the algorithm used for +decompression. Defaults to "" which maps to a system-specific +default. Currently valid values are ["INTEGER_FAST", +"INTEGER_ACCURATE"]. The hint may be ignored (e.g., the internal +jpeg library changes to a version that does not have that specific +option.) +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodePaddedRaw.md b/site/en/api_docs/python/tf/raw_ops/DecodePaddedRaw.md new file mode 100644 index 00000000000..4c0c3dedfc5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodePaddedRaw.md @@ -0,0 +1,102 @@ +description: Reinterpret the bytes of a string as a vector of numbers. + +
+ + +
+ +# tf.raw_ops.DecodePaddedRaw + + + + + + + + + +Reinterpret the bytes of a string as a vector of numbers. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_bytes` + +A `Tensor` of type `string`. Tensor of string to be decoded. +
+`fixed_length` + +A `Tensor` of type `int32`. +Length in bytes for each element of the decoded output. Must be a multiple +of the size of the output type. +
+`out_type` + +A tf.DType from: `tf.half, tf.float32, tf.float64, tf.int32, tf.uint16, tf.uint8, tf.int16, tf.int8, tf.int64`. +
+`little_endian` + +An optional `bool`. Defaults to `True`. +Whether the input `input_bytes` is in little-endian order. Ignored for +`out_type` values that are stored in a single byte, like `uint8` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
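#### Example

An illustrative sketch (not part of the generated reference); the byte strings are made up:

```
import tensorflow as tf

# Each 4-byte string is reinterpreted as two little-endian int16 values.
raw = tf.constant([b"\x01\x00\x02\x00", b"\x03\x00\x04\x00"])
values = tf.raw_ops.DecodePaddedRaw(
    input_bytes=raw, fixed_length=4, out_type=tf.int16)
# values == [[1, 2], [3, 4]] with shape [2, 2].
```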
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodePng.md b/site/en/api_docs/python/tf/raw_ops/DecodePng.md new file mode 100644 index 00000000000..93efc610420 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodePng.md @@ -0,0 +1,109 @@ +description: Decode a PNG-encoded image to a uint8 or uint16 tensor. + +
+ + +
+ +# tf.raw_ops.DecodePng + + + + + + + + + +Decode a PNG-encoded image to a uint8 or uint16 tensor. + + + + + + + + + +The attr `channels` indicates the desired number of color channels for the +decoded image. + +#### Accepted values are: + + + +* 0: Use the number of channels in the PNG-encoded image. +* 1: output a grayscale image. +* 3: output an RGB image. +* 4: output an RGBA image. + +If needed, the PNG-encoded image is transformed to match the requested number +of color channels. + +This op also supports decoding JPEGs and non-animated GIFs since the interface +is the same, though it is cleaner to use tf.image.decode_image. + + + + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The PNG-encoded image. +
+`channels` + +An optional `int`. Defaults to `0`. +Number of color channels for the decoded image. +
+`dtype` + +An optional tf.DType from: `tf.uint8, tf.uint16`. Defaults to tf.uint8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
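#### Example

An illustrative sketch (not part of the generated reference); the file path is a placeholder:

```
import tensorflow as tf

# Decode a PNG into a 4-channel (RGBA) uint16 image.
contents = tf.io.read_file("/path/to/image.png")  # placeholder path
image = tf.raw_ops.DecodePng(contents=contents, channels=4, dtype=tf.uint16)
```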
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeProtoV2.md b/site/en/api_docs/python/tf/raw_ops/DecodeProtoV2.md new file mode 100644 index 00000000000..7c54ead9796 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeProtoV2.md @@ -0,0 +1,189 @@ +description: The op extracts fields from a serialized protocol buffers message into tensors. + +
+ + +
+ +# tf.raw_ops.DecodeProtoV2 + + + + + + + + + +The op extracts fields from a serialized protocol buffers message into tensors. + + + + + + + + + +The `decode_proto` op extracts fields from a serialized protocol buffers +message into tensors. The fields in `field_names` are decoded and converted +to the corresponding `output_types` if possible. + +A `message_type` name must be provided to give context for the field names. +The actual message descriptor can be looked up either in the linked-in +descriptor pool or a filename provided by the caller using the +`descriptor_source` attribute. + +Each output tensor is a dense tensor. This means that it is padded to hold +the largest number of repeated elements seen in the input minibatch. (The +shape is also padded by one to prevent zero-sized dimensions). The actual +repeat counts for each example in the minibatch can be found in the `sizes` +output. In many cases the output of `decode_proto` is fed immediately into +tf.squeeze if missing values are not a concern. When using tf.squeeze, always +pass the squeeze dimension explicitly to avoid surprises. + +For the most part, the mapping between Proto field types and TensorFlow dtypes +is straightforward. However, there are a few special cases: + +- A proto field that contains a submessage or group can only be converted +to `DT_STRING` (the serialized submessage). This is to reduce the complexity +of the API. The resulting string can be used as input to another instance of +the decode_proto op. + +- TensorFlow lacks support for unsigned integers. The ops represent uint64 +types as a `DT_INT64` with the same twos-complement bit pattern (the obvious +way). Unsigned int32 values can be represented exactly by specifying type +`DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in +the `output_types` attribute. + +Both binary and text proto serializations are supported, and can be +chosen using the `format` attribute. + +The `descriptor_source` attribute selects the source of protocol +descriptors to consult when looking up `message_type`. This may be: + +- An empty string or "local://", in which case protocol descriptors are +created for C++ (not Python) proto definitions linked to the binary. + +- A file, in which case protocol descriptors are created from the file, +which is expected to contain a `FileDescriptorSet` serialized as a string. +NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out` +and `--include_imports` options to the protocol compiler `protoc`. + +- A "bytes://", in which protocol descriptors are created from ``, +which is expected to be a `FileDescriptorSet` serialized as a string. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`bytes` + +A `Tensor` of type `string`. +Tensor of serialized protos with shape `batch_shape`. +
+`message_type` + +A `string`. Name of the proto message type to decode. +
+`field_names` + +A list of `strings`. +List of strings containing proto field names. An extension field can be decoded +by using its full name, e.g. EXT_PACKAGE.EXT_FIELD_NAME. +
+`output_types` + +A list of `tf.DTypes`. +List of TF types to use for the respective field in field_names. +
+`descriptor_source` + +An optional `string`. Defaults to `"local://"`. +Either the special value `local://` or a path to a file containing +a serialized `FileDescriptorSet`. +
+`message_format` + +An optional `string`. Defaults to `"binary"`. +Either `binary` or `text`. +
+`sanitize` + +An optional `bool`. Defaults to `False`. +Whether to sanitize the result or not. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sizes, values). +
+`sizes` + +A `Tensor` of type `int32`. +
+`values` + +A list of `Tensor` objects of type `output_types`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeRaw.md b/site/en/api_docs/python/tf/raw_ops/DecodeRaw.md new file mode 100644 index 00000000000..6a71a833d04 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeRaw.md @@ -0,0 +1,95 @@ +description: Reinterpret the bytes of a string as a vector of numbers. + +
+ + +
+ +# tf.raw_ops.DecodeRaw + + + + + + + + + +Reinterpret the bytes of a string as a vector of numbers. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`bytes` + +A `Tensor` of type `string`. +All the elements must have the same length. +
+`out_type` + +A tf.DType from: `tf.half, tf.float32, tf.float64, tf.int32, tf.uint16, tf.uint8, tf.int16, tf.int8, tf.int64, tf.complex64, tf.complex128, tf.bool`. +
+`little_endian` + +An optional `bool`. Defaults to `True`. +Whether the input `bytes` are in little-endian order. +Ignored for `out_type` values that are stored in a single byte like +`uint8`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
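#### Example

An illustrative sketch (not part of the generated reference); the byte string is made up:

```
import tensorflow as tf

# Reinterpret an 8-byte string as two little-endian int32 values.
raw = tf.constant([b"\x01\x00\x00\x00\x02\x00\x00\x00"])
values = tf.raw_ops.DecodeRaw(bytes=raw, out_type=tf.int32)
# values == [[1, 2]]
```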
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DecodeWav.md b/site/en/api_docs/python/tf/raw_ops/DecodeWav.md new file mode 100644 index 00000000000..4c2a10c8329 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DecodeWav.md @@ -0,0 +1,122 @@ +description: Decode a 16-bit PCM WAV file to a float tensor. + +
+ + +
+ +# tf.raw_ops.DecodeWav + + + + + + + + + +Decode a 16-bit PCM WAV file to a float tensor. + + + + + + + + + +The -32768 to 32767 signed 16-bit values will be scaled to -1.0 to 1.0 in float. + +When desired_channels is set, if the input contains fewer channels than this +then the last channel will be duplicated to give the requested number, else if +the input has more channels than requested then the additional channels will be +ignored. + +If desired_samples is set, then the audio will be cropped or padded with zeroes +to the requested length. + +The first output contains a Tensor with the content of the audio samples. The +lowest dimension will be the number of channels, and the second will be the +number of samples. For example, a ten-sample-long stereo WAV file should give an +output shape of [10, 2]. + + + + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. +The WAV-encoded audio, usually from a file. +
+`desired_channels` + +An optional `int`. Defaults to `-1`. +Number of sample channels wanted. +
+`desired_samples` + +An optional `int`. Defaults to `-1`. +Length of audio requested. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (audio, sample_rate). +
+`audio` + +A `Tensor` of type `float32`. +
+`sample_rate` + +A `Tensor` of type `int32`. +
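#### Example

An illustrative sketch (not part of the generated reference); the file path is a placeholder:

```
import tensorflow as tf

# Decode a 16-bit PCM WAV file into float samples in [-1.0, 1.0].
contents = tf.io.read_file("/path/to/audio.wav")  # placeholder path
audio, sample_rate = tf.raw_ops.DecodeWav(contents=contents, desired_channels=1)
# audio has shape [num_samples, 1]; sample_rate is a scalar int32 tensor.
```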
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DeepCopy.md b/site/en/api_docs/python/tf/raw_ops/DeepCopy.md new file mode 100644 index 00000000000..58774252cfd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DeepCopy.md @@ -0,0 +1,77 @@ +description: Makes a copy of x. + +
+ + +
+ +# tf.raw_ops.DeepCopy + + + + + + + + + +Makes a copy of `x`. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. The source tensor of type `T`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
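#### Example

A minimal sketch for illustration (not part of the generated reference):

```
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.raw_ops.DeepCopy(x=x)  # y is an independent copy of x's contents
```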
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DeleteIterator.md b/site/en/api_docs/python/tf/raw_ops/DeleteIterator.md new file mode 100644 index 00000000000..1b532ede1bd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DeleteIterator.md @@ -0,0 +1,84 @@ +description: A container for an iterator resource. + +
+ + +
+ +# tf.raw_ops.DeleteIterator + + + + + + + + + +A container for an iterator resource. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. A handle to the iterator to delete. +
+`deleter` + +A `Tensor` of type `variant`. A variant deleter. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DeleteMemoryCache.md b/site/en/api_docs/python/tf/raw_ops/DeleteMemoryCache.md new file mode 100644 index 00000000000..ea28100c05d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DeleteMemoryCache.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.DeleteMemoryCache + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. +
+`deleter` + +A `Tensor` of type `variant`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DeleteMultiDeviceIterator.md b/site/en/api_docs/python/tf/raw_ops/DeleteMultiDeviceIterator.md new file mode 100644 index 00000000000..08418f613bf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DeleteMultiDeviceIterator.md @@ -0,0 +1,93 @@ +description: A container for an iterator resource. + +
+ + +
+ +# tf.raw_ops.DeleteMultiDeviceIterator + + + + + + + + + +A container for an iterator resource. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`multi_device_iterator` + +A `Tensor` of type `resource`. +A handle to the multi device iterator to delete. +
+`iterators` + +A list of `Tensor` objects with type `resource`. +A list of iterator handles (unused). This is added so that automatic control dependencies get added during function tracing that ensure this op runs after all the dependent iterators are deleted. +
+`deleter` + +A `Tensor` of type `variant`. A variant deleter. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DeleteRandomSeedGenerator.md b/site/en/api_docs/python/tf/raw_ops/DeleteRandomSeedGenerator.md new file mode 100644 index 00000000000..1edacf0feba --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DeleteRandomSeedGenerator.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.DeleteRandomSeedGenerator + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. +
+`deleter` + +A `Tensor` of type `variant`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DeleteSessionTensor.md b/site/en/api_docs/python/tf/raw_ops/DeleteSessionTensor.md new file mode 100644 index 00000000000..569da3cfa2d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DeleteSessionTensor.md @@ -0,0 +1,78 @@ +description: Delete the tensor specified by its handle in the session. + +
+ + +
+ +# tf.raw_ops.DeleteSessionTensor + + + + + + + + + +Delete the tensor specified by its handle in the session. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +The handle for a tensor stored in the session state. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DenseToCSRSparseMatrix.md b/site/en/api_docs/python/tf/raw_ops/DenseToCSRSparseMatrix.md new file mode 100644 index 00000000000..c864517c660 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DenseToCSRSparseMatrix.md @@ -0,0 +1,85 @@ +description: Converts a dense tensor to a (possibly batched) CSRSparseMatrix. + +
+ + +
+ +# tf.raw_ops.DenseToCSRSparseMatrix + + + + + + + + + +Converts a dense tensor to a (possibly batched) CSRSparseMatrix. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dense_input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `complex64`, `complex128`. +A Dense tensor. +
+`indices` + +A `Tensor` of type `int64`. Indices of nonzero elements. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DenseToDenseSetOperation.md b/site/en/api_docs/python/tf/raw_ops/DenseToDenseSetOperation.md new file mode 100644 index 00000000000..35dd928689a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DenseToDenseSetOperation.md @@ -0,0 +1,130 @@ +description: Applies set operation along last dimension of 2 Tensor inputs. + +
+ + +
+ +# tf.raw_ops.DenseToDenseSetOperation + + + + + + + + + +Applies set operation along last dimension of 2 `Tensor` inputs. + + + + + + + + + +See SetOperationOp::SetOperationFromContext for values of `set_operation`. + +Output `result` is a `SparseTensor` represented by `result_indices`, +`result_values`, and `result_shape`. For `set1` and `set2` ranked `n`, this +has rank `n` and the same 1st `n-1` dimensions as `set1` and `set2`. The `nth` +dimension contains the result of `set_operation` applied to the corresponding +`[0...n-1]` dimension of `set`. + + + + + + + + + + + + + + + + + + + + + + +
+`set1` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `string`. +`Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set2`. +Dimension `n` contains values in a set, duplicates are allowed but ignored. +
+`set2` + +A `Tensor`. Must have the same type as `set1`. +`Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set1`. +Dimension `n` contains values in a set, duplicates are allowed but ignored. +
+`set_operation` + +A `string`. +
+`validate_indices` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (result_indices, result_values, result_shape). +
+`result_indices` + +A `Tensor` of type `int64`. +
+`result_values` + +A `Tensor`. Has the same type as `set1`. +
+`result_shape` + +A `Tensor` of type `int64`. +
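#### Example

An illustrative sketch (not part of the generated reference); the sets are made up and `set_operation="intersection"` is assumed:

```
import tensorflow as tf

# Row-wise intersection over the last dimension of two dense tensors.
set1 = tf.constant([[1, 2, 3], [4, 5, 6]])
set2 = tf.constant([[1, 3, 5], [5, 6, 8]])
indices, values, shape = tf.raw_ops.DenseToDenseSetOperation(
    set1=set1, set2=set2, set_operation="intersection")
# The result is a SparseTensor: row 0 -> {1, 3}, row 1 -> {5, 6}.
```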
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DenseToSparseBatchDataset.md b/site/en/api_docs/python/tf/raw_ops/DenseToSparseBatchDataset.md new file mode 100644 index 00000000000..b86730dada2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DenseToSparseBatchDataset.md @@ -0,0 +1,111 @@ +description: Creates a dataset that batches input elements into a SparseTensor. + +
+ + +
+ +# tf.raw_ops.DenseToSparseBatchDataset + + + + + + + + + +Creates a dataset that batches input elements into a SparseTensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A handle to an input dataset. Must have a single component. +
+`batch_size` + +A `Tensor` of type `int64`. +A scalar representing the number of elements to accumulate in a +batch. +
+`row_shape` + +A `Tensor` of type `int64`. +A vector representing the dense shape of each row in the produced +SparseTensor. The shape may be partially specified, using `-1` to indicate +that a particular dimension should use the maximum size of all batch elements. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DenseToSparseSetOperation.md b/site/en/api_docs/python/tf/raw_ops/DenseToSparseSetOperation.md new file mode 100644 index 00000000000..38fe6612d2c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DenseToSparseSetOperation.md @@ -0,0 +1,158 @@ +description: Applies set operation along last dimension of Tensor and SparseTensor. + +
+ + +
+ +# tf.raw_ops.DenseToSparseSetOperation + + + + + + + + + +Applies set operation along last dimension of `Tensor` and `SparseTensor`. + + + + + + + + + +See SetOperationOp::SetOperationFromContext for values of `set_operation`. + +Input `set2` is a `SparseTensor` represented by `set2_indices`, `set2_values`, +and `set2_shape`. For `set2` ranked `n`, 1st `n-1` dimensions must be the same +as `set1`. Dimension `n` contains values in a set, duplicates are allowed but +ignored. + +If `validate_indices` is `True`, this op validates the order and range of `set2` +indices. + +Output `result` is a `SparseTensor` represented by `result_indices`, +`result_values`, and `result_shape`. For `set1` and `set2` ranked `n`, this +has rank `n` and the same 1st `n-1` dimensions as `set1` and `set2`. The `nth` +dimension contains the result of `set_operation` applied to the corresponding +`[0...n-1]` dimension of `set`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`set1` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `string`. +`Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set2`. +Dimension `n` contains values in a set, duplicates are allowed but ignored. +
+`set2_indices` + +A `Tensor` of type `int64`. +2D `Tensor`, indices of a `SparseTensor`. Must be in row-major +order. +
+`set2_values` + +A `Tensor`. Must have the same type as `set1`. +1D `Tensor`, values of a `SparseTensor`. Must be in row-major +order. +
+`set2_shape` + +A `Tensor` of type `int64`. +1D `Tensor`, shape of a `SparseTensor`. `set2_shape[0...n-1]` must +be the same as the 1st `n-1` dimensions of `set1`, `result_shape[n]` is the +max set size across `n-1` dimensions. +
+`set_operation` + +A `string`. +
+`validate_indices` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (result_indices, result_values, result_shape). +
+`result_indices` + +A `Tensor` of type `int64`. +
+`result_values` + +A `Tensor`. Has the same type as `set1`. +
+`result_shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DepthToSpace.md b/site/en/api_docs/python/tf/raw_ops/DepthToSpace.md new file mode 100644 index 00000000000..bca71eac7ef --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DepthToSpace.md @@ -0,0 +1,181 @@ +description: DepthToSpace for tensors of type T. + +
+ + +
+ +# tf.raw_ops.DepthToSpace + + + + + + + + + +DepthToSpace for tensors of type T. + + + + + + + + + +Rearranges data from depth into blocks of spatial data. +This is the reverse transformation of SpaceToDepth. More specifically, +this op outputs a copy of the input tensor where values from the `depth` +dimension are moved in spatial blocks to the `height` and `width` dimensions. +The attr `block_size` indicates the input block size and how the data is moved. + + * Chunks of data of size `block_size * block_size` from depth are rearranged + into non-overlapping blocks of size `block_size x block_size` + * The width of the output tensor is `input_width * block_size`, whereas the + height is `input_height * block_size`. + * The Y, X coordinates within each block of the output image are determined + by the high order component of the input channel index. + * The depth of the input tensor must be divisible by + `block_size * block_size`. + +The `data_format` attr specifies the layout of the input and output tensors +with the following options: + "NHWC": `[ batch, height, width, channels ]` + "NCHW": `[ batch, channels, height, width ]` + "NCHW_VECT_C": + `qint8 [ batch, channels / 4, height, width, 4 ]` + +It is useful to consider the operation as transforming a 6-D Tensor. +e.g. for data_format = NHWC, + Each element in the input tensor can be specified via 6 coordinates, + ordered by decreasing memory layout significance as: + n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates + within the input image, bX, bY means coordinates + within the output block, oC means output channels). + The output would be the input transposed to the following layout: + n,iY,bY,iX,bX,oC + +This operation is useful for resizing the activations between convolutions +(but keeping all data), e.g. instead of pooling. It is also useful for training +purely convolutional models. + +For example, given an input of shape `[1, 1, 1, 4]`, data_format = "NHWC" and +block_size = 2: + +``` +x = [[[[1, 2, 3, 4]]]] + +``` + +This operation will output a tensor of shape `[1, 2, 2, 1]`: + +``` + [[[[1], [2]], + [[3], [4]]]] +``` + +Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`, +and the corresponding output will have 2x2 elements and will have a depth of +1 channel (1 = `4 / (block_size * block_size)`). +The output element shape is `[2, 2, 1]`. + +For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g. + +``` +x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]] +``` + +This operation, for block size of 2, will return the following tensor of shape +`[1, 2, 2, 3]` + +``` + [[[[1, 2, 3], [4, 5, 6]], + [[7, 8, 9], [10, 11, 12]]]] + +``` + +Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2: + +``` +x = [[[[1, 2, 3, 4], + [5, 6, 7, 8]], + [[9, 10, 11, 12], + [13, 14, 15, 16]]]] +``` + +the operator will return the following tensor of shape `[1 4 4 1]`: + +``` +x = [[[ [1], [2], [5], [6]], + [ [3], [4], [7], [8]], + [ [9], [10], [13], [14]], + [ [11], [12], [15], [16]]]] + +``` + + + + + + + + + + + + + + + + + + + 
+`input` + +A `Tensor`. +
+`block_size` + +An `int` that is `>= 2`. +The size of the spatial block, same as in Space2Depth. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW", "NCHW_VECT_C"`. Defaults to `"NHWC"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
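#### Example

A sketch of the first example above, for illustration (not part of the generated reference):

```
import tensorflow as tf

x = tf.constant([[[[1, 2, 3, 4]]]])   # shape [1, 1, 1, 4]
y = tf.raw_ops.DepthToSpace(input=x, block_size=2)
# y == [[[[1], [2]], [[3], [4]]]] with shape [1, 2, 2, 1]
```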
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DepthwiseConv2dNative.md b/site/en/api_docs/python/tf/raw_ops/DepthwiseConv2dNative.md new file mode 100644 index 00000000000..357a2983385 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DepthwiseConv2dNative.md @@ -0,0 +1,144 @@ +description: Computes a 2-D depthwise convolution given 4-D input and filter tensors. + +
+ + +
+ +# tf.raw_ops.DepthwiseConv2dNative + + + + + + + + + +Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors. + + + + + + + + + +Given an input tensor of shape `[batch, in_height, in_width, in_channels]` +and a filter / kernel tensor of shape +`[filter_height, filter_width, in_channels, channel_multiplier]`, containing +`in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies +a different filter to each input channel (expanding from 1 channel to +`channel_multiplier` channels for each), then concatenates the results +together. Thus, the output has `in_channels * channel_multiplier` channels. + +``` +for k in 0..in_channels-1 + for q in 0..channel_multiplier-1 + output[b, i, j, k * channel_multiplier + q] = + sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] * + filter[di, dj, k, q] +``` + +Must have `strides[0] = strides[3] = 1`. For the most common case of the same +horizontal and vertical strides, `strides = [1, stride, stride, 1]`. + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +
+`strides` + +A list of `ints`. +1-D of length 4. The stride of the sliding window for each dimension +of `input`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, height, width, channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, channels, height, width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each filter +element on that dimension. The dimension order is determined by the value of +`data_format`, see above for details. Dilations in the batch and depth +dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
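#### Example

An illustrative sketch (not part of the generated reference); the shapes are made up:

```
import tensorflow as tf

# 3 input channels with channel multiplier 2 -> 3 * 2 = 6 output channels.
x = tf.random.normal([1, 8, 8, 3])
filters = tf.random.normal([3, 3, 3, 2])  # [filter_h, filter_w, in_channels, multiplier]
y = tf.raw_ops.DepthwiseConv2dNative(
    input=x, filter=filters, strides=[1, 1, 1, 1], padding="SAME")
# y has shape [1, 8, 8, 6].
```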
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DepthwiseConv2dNativeBackpropFilter.md b/site/en/api_docs/python/tf/raw_ops/DepthwiseConv2dNativeBackpropFilter.md new file mode 100644 index 00000000000..7b9f7e2b501 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DepthwiseConv2dNativeBackpropFilter.md @@ -0,0 +1,143 @@ +description: Computes the gradients of depthwise convolution with respect to the filter. + +
+ + +
+ +# tf.raw_ops.DepthwiseConv2dNativeBackpropFilter + + + + + + + + + +Computes the gradients of depthwise convolution with respect to the filter. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +4-D with shape based on `data_format`. For example, if +`data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height, +in_width, in_channels]` tensor. +
+`filter_sizes` + +A `Tensor` of type `int32`. +An integer vector representing the tensor shape of `filter`, +where `filter` is a 4-D +`[filter_height, filter_width, in_channels, depthwise_multiplier]` tensor. +
+`out_backprop` + +A `Tensor`. Must have the same type as `input`. +4-D with shape based on `data_format`. +For example, if `data_format` is 'NHWC' then +out_backprop shape is `[batch, out_height, out_width, out_channels]`. +Gradients w.r.t. the output of the convolution. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +of the convolution. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, height, width, channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, channels, height, width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each filter +element on that dimension. The dimension order is determined by the value of +`data_format`, see above for details. Dilations in the batch and depth +dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DepthwiseConv2dNativeBackpropInput.md b/site/en/api_docs/python/tf/raw_ops/DepthwiseConv2dNativeBackpropInput.md new file mode 100644 index 00000000000..c17b9988e57 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DepthwiseConv2dNativeBackpropInput.md @@ -0,0 +1,142 @@ +description: Computes the gradients of depthwise convolution with respect to the input. + +
+ + +
+ +# tf.raw_ops.DepthwiseConv2dNativeBackpropInput + + + + + + + + + +Computes the gradients of depthwise convolution with respect to the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_sizes` + +A `Tensor` of type `int32`. +An integer vector representing the shape of `input`, based +on `data_format`. For example, if `data_format` is 'NHWC' then +`input` is a 4-D `[batch, height, width, channels]` tensor. +
+`filter` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +4-D with shape +`[filter_height, filter_width, in_channels, depthwise_multiplier]`. +
+`out_backprop` + +A `Tensor`. Must have the same type as `filter`. +4-D with shape based on `data_format`. +For example, if `data_format` is 'NHWC' then +out_backprop shape is `[batch, out_height, out_width, out_channels]`. +Gradients w.r.t. the output of the convolution. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +of the convolution. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, height, width, channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, channels, height, width]. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each filter +element on that dimension. The dimension order is determined by the value of +`data_format`, see above for details. Dilations in the batch and depth +dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `filter`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Dequantize.md b/site/en/api_docs/python/tf/raw_ops/Dequantize.md new file mode 100644 index 00000000000..94d79664063 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Dequantize.md @@ -0,0 +1,176 @@ +description: Dequantize the 'input' tensor into a float or bfloat16 Tensor. + +
+ + +
+ +# tf.raw_ops.Dequantize + + + + + + + + + +Dequantize the 'input' tensor into a float or bfloat16 Tensor. + + + + + + + + + +[min_range, max_range] are scalar floats that specify the range for +the output. The 'mode' attribute controls exactly which calculations are +used to convert the float values to their quantized equivalents. + +In 'MIN_COMBINED' mode, each value of the tensor will undergo the following: + +``` +if T == qint8: in[i] += (range(T) + 1)/ 2.0 +out[i] = min_range + (in[i]* (max_range - min_range) / range(T)) +``` +here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()` + +*MIN_COMBINED Mode Example* + +If the input comes from a QuantizedRelu6, the output type is +quint8 (range of 0-255) but the possible range of QuantizedRelu6 is +0-6. The min_range and max_range values are therefore 0.0 and 6.0. +Dequantize on quint8 will take each value, cast to float, and multiply +by 6 / 255. +Note that if quantizedtype is qint8, the operation will additionally add +each value by 128 prior to casting. + +If the mode is 'MIN_FIRST', then this approach is used: + +```c++ +num_discrete_values = 1 << (# of bits in T) +range_adjust = num_discrete_values / (num_discrete_values - 1) +range = (range_max - range_min) * range_adjust +range_scale = range / num_discrete_values +const double offset_input = static_cast<double>(input) - lowest_quantized; +result = range_min + ((input - numeric_limits<T>::min()) * range_scale) +``` + +If the mode is `SCALED`, dequantization is performed by multiplying each +input value by a scaling_factor. (Thus an input of 0 always maps to 0.0). + +The scaling_factor is determined from `min_range`, `max_range`, and +`narrow_range` in a way that is compatible with `QuantizeAndDequantize{V2|V3}` +and `QuantizeV2`, using the following algorithm: + +```c++ + + const int min_expected_T = std::numeric_limits<T>::min() + + (narrow_range ? 1 : 0); + const int max_expected_T = std::numeric_limits<T>::max(); + const float max_expected_T = std::numeric_limits<float>::max(); + + const float scale_factor = + (std::numeric_limits<T>::min() == 0) ? (max_range / max_expected_T) + : std::max(min_range / min_expected_T, + max_range / max_expected_T); +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_range` + +A `Tensor` of type `float32`. +The minimum scalar value possibly produced for the input. +
+`max_range` + +A `Tensor` of type `float32`. +The maximum scalar value possibly produced for the input. +
+`mode` + +An optional `string` from: `"MIN_COMBINED", "MIN_FIRST", "SCALED"`. Defaults to `"MIN_COMBINED"`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`axis` + +An optional `int`. Defaults to `-1`. +
+`dtype` + +An optional tf.DType from: `tf.bfloat16, tf.float32`. Defaults to tf.float32. +Type of the output tensor. Currently Dequantize supports float and bfloat16. +If 'dtype' is 'bfloat16', it only supports 'MIN_COMBINED' mode. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
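#### Example

A round-trip sketch for illustration (not part of the generated reference), using tf.raw_ops.QuantizeV2 (documented separately) to produce quantized input; the values and range are made up:

```
import tensorflow as tf

x = tf.constant([0.0, 1.0, 2.5, 6.0])
q, out_min, out_max = tf.raw_ops.QuantizeV2(
    input=x, min_range=0.0, max_range=6.0, T=tf.quint8)
y = tf.raw_ops.Dequantize(input=q, min_range=out_min, max_range=out_max)
# y approximately recovers x, up to quantization error.
```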
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DeserializeIterator.md b/site/en/api_docs/python/tf/raw_ops/DeserializeIterator.md new file mode 100644 index 00000000000..d1e3d2428dc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DeserializeIterator.md @@ -0,0 +1,87 @@ +description: Converts the given variant tensor to an iterator and stores it in the given resource. + +
+ + +
+ +# tf.raw_ops.DeserializeIterator + + + + + + + + + +Converts the given variant tensor to an iterator and stores it in the given resource. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`resource_handle` + +A `Tensor` of type `resource`. +A handle to an iterator resource. +
+`serialized` + +A `Tensor` of type `variant`. +A variant tensor storing the state of the iterator contained in the +resource. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DeserializeManySparse.md b/site/en/api_docs/python/tf/raw_ops/DeserializeManySparse.md new file mode 100644 index 00000000000..c62b01fb920 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DeserializeManySparse.md @@ -0,0 +1,148 @@ +description: Deserialize and concatenate SparseTensors from a serialized minibatch. + +
+ + +
+ +# tf.raw_ops.DeserializeManySparse + + + + + + + + + +Deserialize and concatenate `SparseTensors` from a serialized minibatch. + + + + + + + + + +The input `serialized_sparse` must be a string matrix of shape `[N x 3]` where +`N` is the minibatch size and the rows correspond to packed outputs of +`SerializeSparse`. The ranks of the original `SparseTensor` objects +must all match. When the final `SparseTensor` is created, it has rank one +higher than the ranks of the incoming `SparseTensor` objects +(they have been concatenated along a new row dimension). + +The output `SparseTensor` object's shape values for all dimensions but the +first are the max across the input `SparseTensor` objects' shape values +for the corresponding dimensions. Its first shape value is `N`, the minibatch +size. + +The input `SparseTensor` objects' indices are assumed ordered in +standard lexicographic order. If this is not the case, after this +step run `SparseReorder` to restore index ordering. + +For example, if the serialized input is a `[2 x 3]` matrix representing two +original `SparseTensor` objects: + + index = [ 0] + [10] + [20] + values = [1, 2, 3] + shape = [50] + +and + + index = [ 2] + [10] + values = [4, 5] + shape = [30] + +then the final deserialized `SparseTensor` will be: + + index = [0 0] + [0 10] + [0 20] + [1 2] + [1 10] + values = [1, 2, 3, 4, 5] + shape = [2 50] + + + + + + + + + + + + + + + + +
+`serialized_sparse` + +A `Tensor` of type `string`. +2-D, The `N` serialized `SparseTensor` objects. +Must have 3 columns. +
+`dtype` + +A tf.DType. The `dtype` of the serialized `SparseTensor` objects. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shape). +
+`sparse_indices` + +A `Tensor` of type `int64`. +
+`sparse_values` + +A `Tensor` of type `dtype`. +
+`sparse_shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DeserializeSparse.md b/site/en/api_docs/python/tf/raw_ops/DeserializeSparse.md new file mode 100644 index 00000000000..100cfd9fde3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DeserializeSparse.md @@ -0,0 +1,148 @@ +description: Deserialize SparseTensor objects. + +
+ + +
+ +# tf.raw_ops.DeserializeSparse + + + + + + + + + +Deserialize `SparseTensor` objects. + + + + + + + + + +The input `serialized_sparse` must have the shape `[?, ?, ..., ?, 3]` where +the last dimension stores serialized `SparseTensor` objects and the other N +dimensions (N >= 0) correspond to a batch. The ranks of the original +`SparseTensor` objects must all match. When the final `SparseTensor` is +created, its rank is the rank of the incoming `SparseTensor` objects plus N; +the sparse tensors have been concatenated along new dimensions, one for each +batch. + +The output `SparseTensor` object's shape values for the original dimensions +are the max across the input `SparseTensor` objects' shape values for the +corresponding dimensions. The new dimensions match the size of the batch. + +The input `SparseTensor` objects' indices are assumed ordered in +standard lexicographic order. If this is not the case, after this +step run `SparseReorder` to restore index ordering. + +For example, if the serialized input is a `[2 x 3]` matrix representing two +original `SparseTensor` objects: + + index = [ 0] + [10] + [20] + values = [1, 2, 3] + shape = [50] + +and + + index = [ 2] + [10] + values = [4, 5] + shape = [30] + +then the final deserialized `SparseTensor` will be: + + index = [0 0] + [0 10] + [0 20] + [1 2] + [1 10] + values = [1, 2, 3, 4, 5] + shape = [2 50] + + + + + + + + + + + + + + + + +
+`serialized_sparse` + +A `Tensor`. Must be one of the following types: `string`, `variant`. +The serialized `SparseTensor` objects. The last dimension +must have 3 columns. +
+`dtype` + +A tf.DType. The `dtype` of the serialized `SparseTensor` objects. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shape). +
+`sparse_indices` + +A `Tensor` of type `int64`. +
+`sparse_values` + +A `Tensor` of type `dtype`. +
+`sparse_shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DestroyResourceOp.md b/site/en/api_docs/python/tf/raw_ops/DestroyResourceOp.md new file mode 100644 index 00000000000..5a3ab63b6e5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DestroyResourceOp.md @@ -0,0 +1,88 @@ +description: Deletes the resource specified by the handle. + +
+ + +
+ +# tf.raw_ops.DestroyResourceOp + + + + + + + + + +Deletes the resource specified by the handle. + + + + + + + + + +All subsequent operations using the resource will result in a NotFound +error status. + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. handle to the resource to delete. +
+`ignore_lookup_error` + +An optional `bool`. Defaults to `True`. +whether to ignore the error when the resource +doesn't exist. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DestroyTemporaryVariable.md b/site/en/api_docs/python/tf/raw_ops/DestroyTemporaryVariable.md new file mode 100644 index 00000000000..988105de2fa --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DestroyTemporaryVariable.md @@ -0,0 +1,93 @@ +description: Destroys the temporary variable and returns its final value. + +
+ + +
+ +# tf.raw_ops.DestroyTemporaryVariable + + + + + + + + + +Destroys the temporary variable and returns its final value. + + + + + + + + + +Sets output to the value of the Tensor pointed to by 'ref', then destroys +the temporary variable called 'var_name'. +All other uses of 'ref' *must* have executed before this op. +This is typically achieved by chaining the ref through each assign op, or by +using control dependencies. + +Outputs the final value of the tensor pointed to by 'ref'. + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. A reference to the temporary variable tensor. +
+`var_name` + +A `string`. +Name of the temporary variable, usually the name of the matching +'TemporaryVariable' op. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Diag.md b/site/en/api_docs/python/tf/raw_ops/Diag.md new file mode 100644 index 00000000000..d6beaff7373 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Diag.md @@ -0,0 +1,97 @@ +description: Returns a diagonal tensor with a given diagonal values. + +
+ + +
+ +# tf.raw_ops.Diag + + + + + + + + + +Returns a diagonal tensor with a given diagonal values. + + + + + + + + + +Given a `diagonal`, this operation returns a tensor with the `diagonal` and +everything else padded with zeros. The diagonal is computed as follows: + +Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of +rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where: + +`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else. + +#### For example: + + + +``` +# 'diagonal' is [1, 2, 3, 4] +tf.diag(diagonal) ==> [[1, 0, 0, 0] + [0, 2, 0, 0] + [0, 0, 3, 0] + [0, 0, 0, 4]] +``` + + + + + + + + + + + + + +
+`diagonal` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +Rank k tensor where k is at most 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `diagonal`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DiagPart.md b/site/en/api_docs/python/tf/raw_ops/DiagPart.md new file mode 100644 index 00000000000..1de9677ff23 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DiagPart.md @@ -0,0 +1,98 @@ +description: Returns the diagonal part of the tensor. + +
+ + +
+ +# tf.raw_ops.DiagPart + + + + + + + + + +Returns the diagonal part of the tensor. + + + + + + + + + +This operation returns a tensor with the `diagonal` part +of the `input`. The `diagonal` part is computed as follows: + +Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a +tensor of rank `k` with dimensions `[D1,..., Dk]` where: + +`diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`. + +#### For example: + + + +``` +# 'input' is [[1, 0, 0, 0] + [0, 2, 0, 0] + [0, 0, 3, 0] + [0, 0, 0, 4]] + +tf.diag_part(input) ==> [1, 2, 3, 4] +``` + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +Rank k tensor where k is even and not zero. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Digamma.md b/site/en/api_docs/python/tf/raw_ops/Digamma.md new file mode 100644 index 00000000000..0c8a46031c0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Digamma.md @@ -0,0 +1,78 @@ +description: Computes Psi, the derivative of Lgamma (the log of the absolute value of + +
+ + +
+ +# tf.raw_ops.Digamma + + + + + + + + + +Computes Psi, the derivative of Lgamma (the log of the absolute value of + + + + + + + + + +`Gamma(x)`), element-wise. + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Dilation2D.md b/site/en/api_docs/python/tf/raw_ops/Dilation2D.md new file mode 100644 index 00000000000..5289fca5fae --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Dilation2D.md @@ -0,0 +1,135 @@ +description: Computes the grayscale dilation of 4-D input and 3-D filter tensors. + +
+ + +
+ +# tf.raw_ops.Dilation2D + + + + + + + + + +Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors. + + + + + + + + + +The `input` tensor has shape `[batch, in_height, in_width, depth]` and the +`filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each +input channel is processed independently of the others with its own structuring +function. The `output` tensor has shape +`[batch, out_height, out_width, depth]`. The spatial dimensions of the output +tensor depend on the `padding` algorithm. We currently only support the default +"NHWC" `data_format`. + +In detail, the grayscale morphological 2-D dilation is the max-sum correlation +(for consistency with `conv2d`, we use unmirrored filters): + + output[b, y, x, c] = + max_{dy, dx} input[b, + strides[1] * y + rates[1] * dy, + strides[2] * x + rates[2] * dx, + c] + + filter[dy, dx, c] + +Max-pooling is a special case when the filter has size equal to the pooling +kernel size and contains all zeros. + +Note on duality: The dilation of `input` by the `filter` is equal to the +negation of the erosion of `-input` by the reflected `filter`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +4-D with shape `[batch, in_height, in_width, depth]`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +3-D with shape `[filter_height, filter_width, depth]`. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the input +tensor. Must be: `[1, stride_height, stride_width, 1]`. +
+`rates` + +A list of `ints` that has length `>= 4`. +The input stride for atrous morphological dilation. Must be: +`[1, rate_height, rate_width, 1]`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Dilation2DBackpropFilter.md b/site/en/api_docs/python/tf/raw_ops/Dilation2DBackpropFilter.md new file mode 100644 index 00000000000..1f686afb483 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Dilation2DBackpropFilter.md @@ -0,0 +1,120 @@ +description: Computes the gradient of morphological 2-D dilation with respect to the filter. + +
+ + +
+ +# tf.raw_ops.Dilation2DBackpropFilter + + + + + + + + + +Computes the gradient of morphological 2-D dilation with respect to the filter. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +4-D with shape `[batch, in_height, in_width, depth]`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +3-D with shape `[filter_height, filter_width, depth]`. +
+`out_backprop` + +A `Tensor`. Must have the same type as `input`. +4-D with shape `[batch, out_height, out_width, depth]`. +
+`strides` + +A list of `ints` that has length `>= 4`. +1-D of length 4. The stride of the sliding window for each dimension of +the input tensor. Must be: `[1, stride_height, stride_width, 1]`. +
+`rates` + +A list of `ints` that has length `>= 4`. +1-D of length 4. The input stride for atrous morphological dilation. +Must be: `[1, rate_height, rate_width, 1]`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Dilation2DBackpropInput.md b/site/en/api_docs/python/tf/raw_ops/Dilation2DBackpropInput.md new file mode 100644 index 00000000000..25dcb6ea640 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Dilation2DBackpropInput.md @@ -0,0 +1,120 @@ +description: Computes the gradient of morphological 2-D dilation with respect to the input. + +
+ + +
+ +# tf.raw_ops.Dilation2DBackpropInput + + + + + + + + + +Computes the gradient of morphological 2-D dilation with respect to the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +4-D with shape `[batch, in_height, in_width, depth]`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. +3-D with shape `[filter_height, filter_width, depth]`. +
+`out_backprop` + +A `Tensor`. Must have the same type as `input`. +4-D with shape `[batch, out_height, out_width, depth]`. +
+`strides` + +A list of `ints` that has length `>= 4`. +1-D of length 4. The stride of the sliding window for each dimension of +the input tensor. Must be: `[1, stride_height, stride_width, 1]`. +
+`rates` + +A list of `ints` that has length `>= 4`. +1-D of length 4. The input stride for atrous morphological dilation. +Must be: `[1, rate_height, rate_width, 1]`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DirectedInterleaveDataset.md b/site/en/api_docs/python/tf/raw_ops/DirectedInterleaveDataset.md new file mode 100644 index 00000000000..efa9464d496 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DirectedInterleaveDataset.md @@ -0,0 +1,103 @@ +description: A substitute for InterleaveDataset on a fixed list of N datasets. + +
+ + +
+ +# tf.raw_ops.DirectedInterleaveDataset + + + + + + + + + +A substitute for `InterleaveDataset` on a fixed list of `N` datasets. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`selector_input_dataset` + +A `Tensor` of type `variant`. +A dataset of scalar `DT_INT64` elements that determines which of the +`N` data inputs should produce the next output element. +
+`data_input_datasets` + +A list of at least 1 `Tensor` objects with type `variant`. +`N` datasets with the same type that will be interleaved according to +the values of `selector_input_dataset`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Div.md b/site/en/api_docs/python/tf/raw_ops/Div.md new file mode 100644 index 00000000000..f94fe5bb374 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Div.md @@ -0,0 +1,86 @@ +description: Returns x / y element-wise. + +
+ + +
+ +# tf.raw_ops.Div + + + + + + + + + +Returns x / y element-wise. + + + + + + + + + +*NOTE*: `Div` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DivNoNan.md b/site/en/api_docs/python/tf/raw_ops/DivNoNan.md new file mode 100644 index 00000000000..215b6331bd7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DivNoNan.md @@ -0,0 +1,87 @@ +description: Returns 0 if the denominator is zero. + +
+ + +
+ +# tf.raw_ops.DivNoNan + + + + + + + + + +Returns 0 if the denominator is zero. + + + + + + + + + + +*NOTE*: `DivNoNan` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DrawBoundingBoxes.md b/site/en/api_docs/python/tf/raw_ops/DrawBoundingBoxes.md new file mode 100644 index 00000000000..65d2e332239 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DrawBoundingBoxes.md @@ -0,0 +1,98 @@ +description: Draw bounding boxes on a batch of images. + +
+ + +
+ +# tf.raw_ops.DrawBoundingBoxes + + + + + + + + + +Draw bounding boxes on a batch of images. + + + + + + + + + +Outputs a copy of `images` but draws on top of the pixels zero or more bounding +boxes specified by the locations in `boxes`. The coordinates of the each +bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The +bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and +height of the underlying image. + +For example, if an image is 100 x 200 pixels (height x width) and the bounding +box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of +the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates). + +Parts of the bounding box may fall outside the image. + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `float32`, `half`. +4-D with shape `[batch, height, width, depth]`. A batch of images. +
+`boxes` + +A `Tensor` of type `float32`. +3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding +boxes. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DrawBoundingBoxesV2.md b/site/en/api_docs/python/tf/raw_ops/DrawBoundingBoxesV2.md new file mode 100644 index 00000000000..b24a72f5f9c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DrawBoundingBoxesV2.md @@ -0,0 +1,106 @@ +description: Draw bounding boxes on a batch of images. + +
+ + +
+
+# tf.raw_ops.DrawBoundingBoxesV2
+
+
+
+
+
+
+
+
+
+Draw bounding boxes on a batch of images.
+
+
+
+
+
+
+
+
+
+Outputs a copy of `images` but draws on top of the pixels zero or more bounding
+boxes specified by the locations in `boxes`. The coordinates of each
+bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The
+bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and
+height of the underlying image.
+
+For example, if an image is 100 x 200 pixels (height x width) and the bounding
+box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of
+the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates).
+
+Parts of the bounding box may fall outside the image.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`images` + +A `Tensor`. Must be one of the following types: `float32`, `half`. +4-D with shape `[batch, height, width, depth]`. A batch of images. +
+`boxes` + +A `Tensor` of type `float32`. +3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding +boxes. +
+`colors` + +A `Tensor` of type `float32`. +2-D. A list of RGBA colors to cycle through for the boxes. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DummyMemoryCache.md b/site/en/api_docs/python/tf/raw_ops/DummyMemoryCache.md new file mode 100644 index 00000000000..936f775556c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DummyMemoryCache.md @@ -0,0 +1,68 @@ +
+ + +
+ +# tf.raw_ops.DummyMemoryCache + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DynamicPartition.md b/site/en/api_docs/python/tf/raw_ops/DynamicPartition.md new file mode 100644 index 00000000000..bf12be40d34 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DynamicPartition.md @@ -0,0 +1,132 @@ +description: Partitions data into num_partitions tensors using indices from partitions. + +
+ + +
+ +# tf.raw_ops.DynamicPartition + + + + + + + + + +Partitions `data` into `num_partitions` tensors using indices from `partitions`. + + + + + + + + + +For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]` +becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i` +are placed in `outputs[i]` in lexicographic order of `js`, and the first +dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`. +In detail, + +```python + outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:] + + outputs[i] = pack([data[js, ...] for js if partitions[js] == i]) +``` + +`data.shape` must start with `partitions.shape`. + +#### For example: + + + +```python + # Scalar partitions. + partitions = 1 + num_partitions = 2 + data = [10, 20] + outputs[0] = [] # Empty with shape [0, 2] + outputs[1] = [[10, 20]] + + # Vector partitions. + partitions = [0, 0, 1, 1, 0] + num_partitions = 2 + data = [10, 20, 30, 40, 50] + outputs[0] = [10, 20, 50] + outputs[1] = [30, 40] +``` + +See `dynamic_stitch` for an example on how to merge partitions back. + +
+ +
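+A minimal sketch of the vector-partitions case above, assuming eager execution
+(`tf.raw_ops` endpoints are called with keyword arguments):
+
+```python
+import tensorflow as tf
+
+data = tf.constant([10, 20, 30, 40, 50])
+partitions = tf.constant([0, 0, 1, 1, 0], dtype=tf.int32)
+
+# Split `data` into two output tensors according to `partitions`.
+outputs = tf.raw_ops.DynamicPartition(
+    data=data, partitions=partitions, num_partitions=2)
+
+print(outputs[0].numpy())  # expected: [10 20 50]
+print(outputs[1].numpy())  # expected: [30 40]
+```
+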
+ + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. +
+`partitions` + +A `Tensor` of type `int32`. +Any shape. Indices in the range `[0, num_partitions)`. +
+`num_partitions` + +An `int` that is `>= 1`. +The number of partitions to output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `num_partitions` `Tensor` objects with the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/DynamicStitch.md b/site/en/api_docs/python/tf/raw_ops/DynamicStitch.md new file mode 100644 index 00000000000..6d7c72e4800 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/DynamicStitch.md @@ -0,0 +1,148 @@ +description: Interleave the values from the data tensors into a single tensor. + +
+ + +
+ +# tf.raw_ops.DynamicStitch + + + + + + + + + +Interleave the values from the `data` tensors into a single tensor. + + + + + + + + + +Builds a merged tensor such that + +```python + merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...] +``` + +For example, if each `indices[m]` is scalar or vector, we have + +```python + # Scalar indices: + merged[indices[m], ...] = data[m][...] + + # Vector indices: + merged[indices[m][i], ...] = data[m][i, ...] +``` + +Each `data[i].shape` must start with the corresponding `indices[i].shape`, +and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we +must have `data[i].shape = indices[i].shape + constant`. In terms of this +`constant`, the output shape is + + merged.shape = [max(indices)] + constant + +Values are merged in order, so if an index appears in both `indices[m][i]` and +`indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the +merged result. If you do not need this guarantee, ParallelDynamicStitch might +perform better on some devices. + +#### For example: + + + +```python + indices[0] = 6 + indices[1] = [4, 1] + indices[2] = [[5, 2], [0, 3]] + data[0] = [61, 62] + data[1] = [[41, 42], [11, 12]] + data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]] + merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], + [51, 52], [61, 62]] +``` + +This method can be used to merge partitions created by `dynamic_partition` +as illustrated on the following example: + +```python + # Apply function (increments x_i) on elements for which a certain condition + # apply (x_i != -1 in this example). + x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4]) + condition_mask=tf.not_equal(x,tf.constant(-1.)) + partitioned_data = tf.dynamic_partition( + x, tf.cast(condition_mask, tf.int32) , 2) + partitioned_data[1] = partitioned_data[1] + 1.0 + condition_indices = tf.dynamic_partition( + tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2) + x = tf.dynamic_stitch(condition_indices, partitioned_data) + # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain + # unchanged. +``` + +
+ +
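+A minimal sketch of the merge example above, assuming eager execution
+(`tf.raw_ops` endpoints are called with keyword arguments):
+
+```python
+import tensorflow as tf
+
+indices = [tf.constant(6),
+           tf.constant([4, 1]),
+           tf.constant([[5, 2], [0, 3]])]
+data = [tf.constant([61, 62]),
+        tf.constant([[41, 42], [11, 12]]),
+        tf.constant([[[51, 52], [21, 22]], [[1, 2], [31, 32]]])]
+
+# Interleave the `data` slices into one tensor, positioned by `indices`.
+merged = tf.raw_ops.DynamicStitch(indices=indices, data=data)
+
+print(merged.numpy())
+# expected rows: [1, 2], [11, 12], [21, 22], [31, 32], [41, 42], [51, 52], [61, 62]
+```
+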
+ + + + + + + + + + + + + + + + +
+`indices` + +A list of at least 1 `Tensor` objects with type `int32`. +
+`data` + +A list with the same length as `indices` of `Tensor` objects with the same type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EagerPyFunc.md b/site/en/api_docs/python/tf/raw_ops/EagerPyFunc.md new file mode 100644 index 00000000000..81a167b7de3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EagerPyFunc.md @@ -0,0 +1,100 @@ +description: Eagerly executes a python function to compute func(input)->output. The + +
+ + +
+ +# tf.raw_ops.EagerPyFunc + + + + + + + + + +Eagerly executes a python function to compute func(input)->output. The + + + + + + + + + +semantics of the input, output, and attributes are the same as those for +PyFunc. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A list of `Tensor` objects. +
+`token` + +A `string`. +
+`Tout` + +A list of `tf.DTypes`. +
+`is_async` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EditDistance.md b/site/en/api_docs/python/tf/raw_ops/EditDistance.md new file mode 100644 index 00000000000..114cdc784e3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EditDistance.md @@ -0,0 +1,142 @@ +description: Computes the (possibly normalized) Levenshtein Edit Distance. + +
+ + +
+ +# tf.raw_ops.EditDistance + + + + + + + + + +Computes the (possibly normalized) Levenshtein Edit Distance. + + + + + + + + + +The inputs are variable-length sequences provided by SparseTensors + (hypothesis_indices, hypothesis_values, hypothesis_shape) +and + (truth_indices, truth_values, truth_shape). + +#### The inputs are: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`hypothesis_indices` + +A `Tensor` of type `int64`. +The indices of the hypothesis list SparseTensor. +This is an N x R int64 matrix. +
+`hypothesis_values` + +A `Tensor`. +The values of the hypothesis list SparseTensor. +This is an N-length vector. +
+`hypothesis_shape` + +A `Tensor` of type `int64`. +The shape of the hypothesis list SparseTensor. +This is an R-length vector. +
+`truth_indices` + +A `Tensor` of type `int64`. +The indices of the truth list SparseTensor. +This is an M x R int64 matrix. +
+`truth_values` + +A `Tensor`. Must have the same type as `hypothesis_values`. +The values of the truth list SparseTensor. +This is an M-length vector. +
+`truth_shape` + +A `Tensor` of type `int64`. truth indices, vector. +
+`normalize`
+
+An optional `bool`. Defaults to `True`.
+If `true`, edit distances are normalized by the length of `truth`.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Eig.md b/site/en/api_docs/python/tf/raw_ops/Eig.md new file mode 100644 index 00000000000..db49e42a8cd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Eig.md @@ -0,0 +1,119 @@ +description: Computes the eigen decomposition of one or more square matrices. + +
+ + +
+ +# tf.raw_ops.Eig + + + + + + + + + +Computes the eigen decomposition of one or more square matrices. + + + + + + + + + +Computes the eigenvalues and (optionally) right eigenvectors of each inner matrix in +`input` such that `input[..., :, :] = v[..., :, :] * diag(e[..., :])`. The eigenvalues +are sorted in non-decreasing order. + +```python +# a is a tensor. +# e is a tensor of eigenvalues. +# v is a tensor of eigenvectors. +e, v = eig(a) +e = eig(a, compute_v=False) +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `complex64`, `complex128`. +`Tensor` input of shape `[N, N]`. +
+`Tout` + +A tf.DType from: `tf.complex64, tf.complex128`. +
+`compute_v` + +An optional `bool`. Defaults to `True`. +If `True` then eigenvectors will be computed and returned in `v`. +Otherwise, only the eigenvalues will be computed. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (e, v). +
+`e` + +A `Tensor` of type `Tout`. +
+`v` + +A `Tensor` of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Einsum.md b/site/en/api_docs/python/tf/raw_ops/Einsum.md new file mode 100644 index 00000000000..f5504f7e476 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Einsum.md @@ -0,0 +1,164 @@ +description: Tensor contraction according to Einstein summation convention. + +
+ + +
+ +# tf.raw_ops.Einsum + + + + + + + + + +Tensor contraction according to Einstein summation convention. + + + + + + + + + +Implements generalized Tensor contraction and reduction. Each input Tensor must +have a corresponding input subscript appearing in the comma-separated left-hand +side of the equation. The right-hand side of the equation consists of the +output subscript. The input subscripts and the output subscript should consist +of zero or more named axis labels and at most one ellipsis (`...`). + +The named axis labels may be any single character other than those having +special meaning, namely `,.->`. The behavior of this Op is undefined if it +receives an ill-formatted equation; since the validation is done at +graph-building time, we omit format validation checks at runtime. + +Note: This Op is *not* intended to be called by the user; instead users should +call tf.einsum directly. It is a hidden Op used by tf.einsum. + +Operations are applied to the input(s) according to the following rules: + + (a) Generalized Diagonals: For input dimensions corresponding to axis labels + appearing more than once in the same input subscript, we take the + generalized (`k`-dimensional) diagonal. + For example, in the equation `iii->i` with input shape `[3, 3, 3]`, the + generalized diagonal would consist of `3` elements at indices `(0, 0, 0)`, + `(1, 1, 1)` and `(2, 2, 2)` to create a Tensor of shape `[3]`. + + (b) Reduction: Axes corresponding to labels appearing only in one input + subscript but not in the output subscript are summed over prior to Tensor + contraction. + For example, in the equation `ab,bc->b`, the axis labels `a` and `c` are + the reduction axis labels. + + (c) Batch Dimensions: Axes corresponding to labels appearing in each of the + input subscripts and also in the output subscript make up the batch + dimensions in Tensor contraction. Unnamed axis labels corresponding to + ellipsis (`...`) also correspond to batch dimensions. + For example, for the equation denoting batch matrix multiplication, + `bij,bjk->bik`, the axis label `b` corresponds to a batch dimension. + + (d) Contraction: In case of binary einsum, axes corresponding to labels + appearing in two different inputs (and not in the output) are contracted + against each other. + Considering the batch matrix multiplication equation again + (`bij,bjk->bik`), the contracted axis label is `j`. + + (e) Expand Diagonal: If the output subscripts contain repeated (explicit) axis + labels, the opposite operation of (a) is applied. For example, in the + equation `i->iii`, and input shape `[3]`, the output of shape `[3, 3, 3]` + are all zeros, except for the (generalized) diagonal which is populated + with values from the input. + Note: This operation is not supported by `np.einsum` or tf.einsum; it is + provided to enable computing the symbolic gradient of tf.einsum. + +The output subscripts must contain only labels appearing in at least one of the +input subscripts. Furthermore, all dimensions mapping to the same axis label +must be equal. + +Any of the input and output subscripts may contain at most a single ellipsis +(`...`). These ellipsis are mapped against dimensions not corresponding to any +named axis label. If two inputs contain ellipsis, then they are broadcasted +according to standard NumPy broadcasting +[rules](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). + +The broadcasted dimensions are placed in the corresponding location of the +ellipsis in the output subscript. 
If the broadcasted dimensions are non-empty +and the output subscripts do not contain ellipsis, then an InvalidArgument error +is raised. + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of at least 1 `Tensor` objects with the same type. +List of 1 or 2 Tensors. +
+`equation` + +A `string`. +String describing the Einstein Summation operation; in the format of np.einsum. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `inputs`. +
+ + + +#### Numpy Compatibility +Similar to [`numpy.einsum`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html). + +Comparison with `numpy.einsum`: + + * This Op only supports unary and binary forms of `numpy.einsum`. + * This Op does not support implicit form. (i.e. equations without `->`). + * This Op also supports repeated indices in the output subscript, which is not + supported by `numpy.einsum`. + diff --git a/site/en/api_docs/python/tf/raw_ops/Elu.md b/site/en/api_docs/python/tf/raw_ops/Elu.md new file mode 100644 index 00000000000..d78da24427f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Elu.md @@ -0,0 +1,79 @@ +description: Computes exponential linear: exp(features) - 1 if < 0, features otherwise. + +
+ + +
+ +# tf.raw_ops.Elu + + + + + + + + + +Computes exponential linear: `exp(features) - 1` if < 0, `features` otherwise. + + + + + + + + + +See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) +](http://arxiv.org/abs/1511.07289) + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EluGrad.md b/site/en/api_docs/python/tf/raw_ops/EluGrad.md new file mode 100644 index 00000000000..2a0f64aba75 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EluGrad.md @@ -0,0 +1,86 @@ +description: Computes gradients for the exponential linear (Elu) operation. + +
+ + +
+ +# tf.raw_ops.EluGrad + + + + + + + + + +Computes gradients for the exponential linear (Elu) operation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +The backpropagated gradients to the corresponding Elu operation. +
+`outputs` + +A `Tensor`. Must have the same type as `gradients`. +The outputs of the corresponding Elu operation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `gradients`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Empty.md b/site/en/api_docs/python/tf/raw_ops/Empty.md new file mode 100644 index 00000000000..74ad3c1a7b7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Empty.md @@ -0,0 +1,53 @@ +description: Creates a tensor with the given shape. + +
+ + +
+
+# tf.raw_ops.Empty
+
+
+
+
+
+
+
+
+
+Creates a tensor with the given shape.
+
+
+
+
+
+
+
+
+
+This operation creates a tensor of `shape` and `dtype`.
+
+  Args:
+    shape: A `Tensor` of type `int32`.
+      1-D. Represents the shape of the output tensor.
+    dtype: A tf.DType.
+    init: An optional `bool`. Defaults to `False`.
+      If True, initialize the returned tensor with the default value of dtype. Otherwise, the implementation is free not to initialize the tensor's content.
+    name: A name for the operation (optional).
+
+  Returns:
+    A `Tensor` of type `dtype`.
+  
\ No newline at end of file
diff --git a/site/en/api_docs/python/tf/raw_ops/EmptyTensorList.md b/site/en/api_docs/python/tf/raw_ops/EmptyTensorList.md
new file mode 100644
index 00000000000..3ca6e7cfc68
--- /dev/null
+++ b/site/en/api_docs/python/tf/raw_ops/EmptyTensorList.md
@@ -0,0 +1,53 @@
+description: Creates and returns an empty tensor list.
+
+ + +
+ +# tf.raw_ops.EmptyTensorList + + + + + + + + + +Creates and returns an empty tensor list. + + + + + + + + + +All list elements must be tensors of dtype element_dtype and shape compatible +with element_shape. + +handle: an empty tensor list. +element_dtype: the type of elements in the list. +element_shape: a shape compatible with that of elements in the list. + + + + + + + + + + + + + + + + + + + +
+`element_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`max_num_elements` + +A `Tensor` of type `int32`. +
+`element_dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EncodeBase64.md b/site/en/api_docs/python/tf/raw_ops/EncodeBase64.md new file mode 100644 index 00000000000..6939406c5f1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EncodeBase64.md @@ -0,0 +1,91 @@ +description: Encode strings into web-safe base64 format. + +
+ + +
+
+# tf.raw_ops.EncodeBase64
+
+
+
+
+
+
+
+
+
+Encode strings into web-safe base64 format.
+
+
+
+
+
+
+
+
+
+Refer to the following article for more information on base64 format:
+en.wikipedia.org/wiki/Base64. Base64 strings may have padding with '=' at the
+end so that the encoded string has a length that is a multiple of 4. See the
+Padding section of the link above.
+
+Web-safe means that the encoder uses - and _ instead of + and /.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`input` + +A `Tensor` of type `string`. Strings to be encoded. +
+`pad` + +An optional `bool`. Defaults to `False`. +Bool whether padding is applied at the ends. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EncodeJpeg.md b/site/en/api_docs/python/tf/raw_ops/EncodeJpeg.md new file mode 100644 index 00000000000..17922f822bb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EncodeJpeg.md @@ -0,0 +1,169 @@ +description: JPEG-encode an image. + +
+ + +
+ +# tf.raw_ops.EncodeJpeg + + + + + + + + + +JPEG-encode an image. + + + + + + + + + +`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`. + +The attr `format` can be used to override the color format of the encoded +output. Values can be: + +* `''`: Use a default format based on the number of channels in the image. +* `grayscale`: Output a grayscale JPEG image. The `channels` dimension + of `image` must be 1. +* `rgb`: Output an RGB JPEG image. The `channels` dimension + of `image` must be 3. + +If `format` is not specified or is the empty string, a default format is picked +in function of the number of channels in `image`: + +* 1: Output a grayscale image. +* 3: Output an RGB image. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`image` + +A `Tensor` of type `uint8`. +3-D with shape `[height, width, channels]`. +
+`format` + +An optional `string` from: `"", "grayscale", "rgb"`. Defaults to `""`. +Per pixel image format. +
+`quality` + +An optional `int`. Defaults to `95`. +Quality of the compression from 0 to 100 (higher is better and slower). +
+`progressive` + +An optional `bool`. Defaults to `False`. +If True, create a JPEG that loads progressively (coarse to fine). +
+`optimize_size` + +An optional `bool`. Defaults to `False`. +If True, spend CPU/RAM to reduce size with no quality change. +
+`chroma_downsampling` + +An optional `bool`. Defaults to `True`. +See http://en.wikipedia.org/wiki/Chroma_subsampling. +
+`density_unit` + +An optional `string` from: `"in", "cm"`. Defaults to `"in"`. +Unit used to specify `x_density` and `y_density`: +pixels per inch (`'in'`) or centimeter (`'cm'`). +
+`x_density` + +An optional `int`. Defaults to `300`. +Horizontal pixels per density unit. +
+`y_density` + +An optional `int`. Defaults to `300`. +Vertical pixels per density unit. +
+`xmp_metadata` + +An optional `string`. Defaults to `""`. +If not empty, embed this XMP metadata in the image header. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EncodeJpegVariableQuality.md b/site/en/api_docs/python/tf/raw_ops/EncodeJpegVariableQuality.md new file mode 100644 index 00000000000..2f31bd14b7c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EncodeJpegVariableQuality.md @@ -0,0 +1,86 @@ +description: JPEG encode input image with provided compression quality. + +
+ + +
+ +# tf.raw_ops.EncodeJpegVariableQuality + + + + + + + + + +JPEG encode input image with provided compression quality. + + + + + + + + + +`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`. +`quality` is an int32 jpeg compression quality value between 0 and 100. + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor` of type `uint8`. Images to adjust. At least 3-D. +
+`quality` + +A `Tensor` of type `int32`. An int quality to encode to. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EncodePng.md b/site/en/api_docs/python/tf/raw_ops/EncodePng.md new file mode 100644 index 00000000000..59afe03ed0d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EncodePng.md @@ -0,0 +1,96 @@ +description: PNG-encode an image. + +
+ + +
+ +# tf.raw_ops.EncodePng + + + + + + + + + +PNG-encode an image. + + + + + + + + + +`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]` +where `channels` is: + +* 1: for grayscale. +* 2: for grayscale + alpha. +* 3: for RGB. +* 4: for RGBA. + +The ZLIB compression level, `compression`, can be -1 for the PNG-encoder +default or a value from 0 to 9. 9 is the highest compression level, generating +the smallest output, but is slower. + + + + + + + + + + + + + + + + +
+`image` + +A `Tensor`. Must be one of the following types: `uint8`, `uint16`. +3-D with shape `[height, width, channels]`. +
+`compression` + +An optional `int`. Defaults to `-1`. Compression level. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EncodeProto.md b/site/en/api_docs/python/tf/raw_ops/EncodeProto.md new file mode 100644 index 00000000000..fc59f17fa8f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EncodeProto.md @@ -0,0 +1,149 @@ +description: The op serializes protobuf messages provided in the input tensors. + +
+ + +
+ +# tf.raw_ops.EncodeProto + + + + + + + + + +The op serializes protobuf messages provided in the input tensors. + + + + + + + + + +The types of the tensors in `values` must match the schema for the fields +specified in `field_names`. All the tensors in `values` must have a common +shape prefix, *batch_shape*. + +The `sizes` tensor specifies repeat counts for each field. The repeat count +(last dimension) of a each tensor in `values` must be greater than or equal +to corresponding repeat count in `sizes`. + +A `message_type` name must be provided to give context for the field names. +The actual message descriptor can be looked up either in the linked-in +descriptor pool or a filename provided by the caller using the +`descriptor_source` attribute. + +For the most part, the mapping between Proto field types and TensorFlow dtypes +is straightforward. However, there are a few special cases: + +- A proto field that contains a submessage or group can only be converted +to `DT_STRING` (the serialized submessage). This is to reduce the complexity +of the API. The resulting string can be used as input to another instance of +the decode_proto op. + +- TensorFlow lacks support for unsigned integers. The ops represent uint64 +types as a `DT_INT64` with the same twos-complement bit pattern (the obvious +way). Unsigned int32 values can be represented exactly by specifying type +`DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in +the `output_types` attribute. + +The `descriptor_source` attribute selects the source of protocol +descriptors to consult when looking up `message_type`. This may be: + +- An empty string or "local://", in which case protocol descriptors are +created for C++ (not Python) proto definitions linked to the binary. + +- A file, in which case protocol descriptors are created from the file, +which is expected to contain a `FileDescriptorSet` serialized as a string. +NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out` +and `--include_imports` options to the protocol compiler `protoc`. + +- A "bytes://", in which protocol descriptors are created from ``, +which is expected to be a `FileDescriptorSet` serialized as a string. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sizes` + +A `Tensor` of type `int32`. +Tensor of int32 with shape `[batch_shape, len(field_names)]`. +
+`values` + +A list of `Tensor` objects. +List of tensors containing values for the corresponding field. +
+`field_names` + +A list of `strings`. +List of strings containing proto field names. +
+`message_type` + +A `string`. Name of the proto message type to decode. +
+`descriptor_source` + +An optional `string`. Defaults to `"local://"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EncodeWav.md b/site/en/api_docs/python/tf/raw_ops/EncodeWav.md new file mode 100644 index 00000000000..cef024547a9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EncodeWav.md @@ -0,0 +1,92 @@ +description: Encode audio data using the WAV file format. + +
+ + +
+ +# tf.raw_ops.EncodeWav + + + + + + + + + +Encode audio data using the WAV file format. + + + + + + + + + +This operation will generate a string suitable to be saved out to create a .wav +audio file. It will be encoded in the 16-bit PCM format. It takes in float +values in the range -1.0f to 1.0f, and any outside that value will be clamped to +that range. + +`audio` is a 2-D float Tensor of shape `[length, channels]`. +`sample_rate` is a scalar Tensor holding the rate to use (e.g. 44100). + + + + + + + + + + + + + + + + +
+`audio` + +A `Tensor` of type `float32`. 2-D with shape `[length, channels]`. +
+`sample_rate` + +A `Tensor` of type `int32`. +Scalar containing the sample frequency. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EnqueueTPUEmbeddingIntegerBatch.md b/site/en/api_docs/python/tf/raw_ops/EnqueueTPUEmbeddingIntegerBatch.md new file mode 100644 index 00000000000..bd5def8a205 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EnqueueTPUEmbeddingIntegerBatch.md @@ -0,0 +1,99 @@ +description: An op that enqueues a list of input batch tensors to TPUEmbedding. + +
+ + +
+ +# tf.raw_ops.EnqueueTPUEmbeddingIntegerBatch + + + + + + + + + +An op that enqueues a list of input batch tensors to TPUEmbedding. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`batch` + +A list of at least 1 `Tensor` objects with type `int32`. +A list of 1D tensors, one for each embedding table, containing the +indices into the tables. +
+`mode_override` + +A `Tensor` of type `string`. +A string input that overrides the mode specified in the +TPUEmbeddingConfiguration. Supported values are {'unspecified', 'inference', +'training', 'backward_pass_only'}. When set to 'unspecified', the mode set +in TPUEmbeddingConfiguration is used, otherwise mode_override is used. +
+`device_ordinal` + +An optional `int`. Defaults to `-1`. +The TPU device to use. Should be >= 0 and less than the number +of TPU cores in the task on which the node is placed. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EnqueueTPUEmbeddingSparseBatch.md b/site/en/api_docs/python/tf/raw_ops/EnqueueTPUEmbeddingSparseBatch.md new file mode 100644 index 00000000000..126973b1ac3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EnqueueTPUEmbeddingSparseBatch.md @@ -0,0 +1,142 @@ +description: An op that enqueues TPUEmbedding input indices from a SparseTensor. + +
+ + +
+ +# tf.raw_ops.EnqueueTPUEmbeddingSparseBatch + + + + + + + + + +An op that enqueues TPUEmbedding input indices from a SparseTensor. + + + + + + + + + +This Op eases the porting of code that uses embedding_lookup_sparse(), +although some Python preprocessing of the SparseTensor arguments to +embedding_lookup_sparse() is required to produce the arguments to this Op, +since only a single EnqueueTPUEmbeddingSparseBatch Op is allowed per training +step. + +The tensors at corresponding positions in the three input lists +must have the same shape, i.e. rank 1 with dim_size() equal to the total +number of lookups into the table described by the corresponding table_id. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sample_indices` + +A list of at least 1 `Tensor` objects with the same type in: `int32`, `int64`. +A list of rank 1 Tensors specifying the training example and +feature to which the corresponding embedding_indices and aggregation_weights +values belong. sample_indices[i] must equal b * nf + f, where nf is the +number of features from the corresponding table, f is in [0, nf), and +b is in [0, batch size). +
+`embedding_indices` + +A list with the same length as `sample_indices` of `Tensor` objects with the same type in: `int32`, `int64`. +A list of rank 1 Tensors, indices into the embedding tables. +
+`aggregation_weights` + +A list with the same length as `sample_indices` of `Tensor` objects with the same type in: `float32`, `float64`. +A list of rank 1 Tensors containing per sample -- i.e. per +(training example, feature) -- aggregation weights. +
+`mode_override` + +A `Tensor` of type `string`. +A string input that overrides the mode specified in the +TPUEmbeddingConfiguration. Supported values are {'unspecified', 'inference', +'training', 'backward_pass_only'}. When set to 'unspecified', the mode set +in TPUEmbeddingConfiguration is used, otherwise mode_override is used. +
+`device_ordinal` + +An optional `int`. Defaults to `-1`. +The TPU device to use. Should be >= 0 and less than the number +of TPU cores in the task on which the node is placed. +
+`combiners`
+
+An optional list of `strings`. Defaults to `[]`.
+A list of string scalars, one for each embedding table, specifying how to
+normalize the embedding activations after weighted summation. Supported
+combiners are 'mean', 'sum', or 'sqrtn'. It is invalid to have the sum of the
+weights be 0 for 'mean' or the sum of the squared weights be 0 for 'sqrtn'.
+If combiners isn't passed, the default is to use 'sum' for all tables.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EnqueueTPUEmbeddingSparseTensorBatch.md b/site/en/api_docs/python/tf/raw_ops/EnqueueTPUEmbeddingSparseTensorBatch.md new file mode 100644 index 00000000000..a0d0e2c8ba1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EnqueueTPUEmbeddingSparseTensorBatch.md @@ -0,0 +1,160 @@ +description: Eases the porting of code that uses tf.nn.embedding_lookup_sparse(). + +
+ + +
+ +# tf.raw_ops.EnqueueTPUEmbeddingSparseTensorBatch + + + + + + + + + +Eases the porting of code that uses tf.nn.embedding_lookup_sparse(). + + + + + + + + + +sample_indices[i], embedding_indices[i] and aggregation_weights[i] correspond +to the ith feature. table_ids[i] indicates which embedding table to look up ith +feature. + +The tensors at corresponding positions in the three input lists (sample_indices, +embedding_indices and aggregation_weights) must have the same shape, i.e. rank 1 +with dim_size() equal to the total number of lookups into the table described by +the corresponding feature. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sample_indices` + +A list of at least 1 `Tensor` objects with the same type in: `int32`, `int64`. +A list of rank 1 Tensors specifying the training example to +which the corresponding embedding_indices and aggregation_weights values +belong. It corresponds to sp_ids.indices[:,0] in embedding_lookup_sparse(). +
+`embedding_indices` + +A list with the same length as `sample_indices` of `Tensor` objects with the same type in: `int32`, `int64`. +A list of rank 1 Tensors, indices into the embedding tables. +It corresponds to sp_ids.values in embedding_lookup_sparse(). +
+`aggregation_weights` + +A list with the same length as `sample_indices` of `Tensor` objects with the same type in: `float32`, `float64`. +A list of rank 1 Tensors containing per training example +aggregation weights. It corresponds to sp_weights.values in +embedding_lookup_sparse(). +
+`mode_override` + +A `Tensor` of type `string`. +A string input that overrides the mode specified in the +TPUEmbeddingConfiguration. Supported values are {'unspecified', 'inference', +'training', 'backward_pass_only'}. When set to 'unspecified', the mode set +in TPUEmbeddingConfiguration is used, otherwise mode_override is used. +
+`table_ids` + +A list of `ints`. +A list of integers specifying the identifier of the embedding table +(offset of TableDescriptor in the TPUEmbeddingConfiguration) to lookup the +corresponding input. The ith input is looked up using table_ids[i]. The size +of the table_ids list must be equal to that of sample_indices, +embedding_indices and aggregation_weights. +
+`device_ordinal` + +An optional `int`. Defaults to `-1`. +The TPU device to use. Should be >= 0 and less than the number +of TPU cores in the task on which the node is placed. +
+`combiners`
+
+An optional list of `strings`. Defaults to `[]`.
+A list of string scalars, one for each embedding table, specifying how to
+normalize the embedding activations after weighted summation. Supported
+combiners are 'mean', 'sum', or 'sqrtn'. It is invalid to have the sum of the
+weights be 0 for 'mean' or the sum of the squared weights be 0 for 'sqrtn'.
+If combiners isn't passed, the default is to use 'sum' for all tables.
+
+`max_sequence_lengths` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EnsureShape.md b/site/en/api_docs/python/tf/raw_ops/EnsureShape.md new file mode 100644 index 00000000000..0bf4175bbf2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EnsureShape.md @@ -0,0 +1,87 @@ +description: Ensures that the tensor's shape matches the expected shape. + +
+ + +
+ +# tf.raw_ops.EnsureShape + + + + + + + + + +Ensures that the tensor's shape matches the expected shape. + + + + + + + + + +Raises an error if the input tensor's shape does not match the specified shape. +Returns the input tensor otherwise. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. A tensor, whose shape is to be validated. +
+`shape` + +A tf.TensorShape or list of `ints`. +The expected (possibly partially specified) shape of the input tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Enter.md b/site/en/api_docs/python/tf/raw_ops/Enter.md new file mode 100644 index 00000000000..de7fcf4f1f2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Enter.md @@ -0,0 +1,105 @@ +description: Creates or finds a child frame, and makes data available to the child frame. + +
+ + +
+ +# tf.raw_ops.Enter + + + + + + + + + +Creates or finds a child frame, and makes `data` available to the child frame. + + + + + + + + + +This op is used together with `Exit` to create loops in the graph. +The unique `frame_name` is used by the `Executor` to identify frames. If +`is_constant` is true, `output` is a constant in the child frame; otherwise +it may be changed in the child frame. At most `parallel_iterations` iterations +are run in parallel in the child frame. + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. The tensor to be made available to the child frame. +
+`frame_name` + +A `string`. The name of the child frame. +
+`is_constant` + +An optional `bool`. Defaults to `False`. +If true, the output is constant within the child frame. +
+`parallel_iterations` + +An optional `int`. Defaults to `10`. +The number of iterations allowed to run in parallel. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Equal.md b/site/en/api_docs/python/tf/raw_ops/Equal.md new file mode 100644 index 00000000000..da315c726e6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Equal.md @@ -0,0 +1,103 @@ +description: Returns the truth value of (x == y) element-wise. + +
+ + +
+ +# tf.raw_ops.Equal + + + + + + + + + +Returns the truth value of (x == y) element-wise. + + + + + + + + + +*NOTE*: `Equal` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +```python +x = tf.constant([2, 4]) +y = tf.constant(2) +tf.math.equal(x, y) ==> array([True, False]) + +x = tf.constant([2, 4]) +y = tf.constant([2, 4]) +tf.math.equal(x, y) ==> array([True, True]) +``` + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`incompatible_shape_error` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Erf.md b/site/en/api_docs/python/tf/raw_ops/Erf.md new file mode 100644 index 00000000000..05be965af24 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Erf.md @@ -0,0 +1,77 @@ +description: Computes the Gauss error function of x element-wise. + +
+ + +
+ +# tf.raw_ops.Erf + + + + + + + + + +Computes the Gauss error function of `x` element-wise. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
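+
+A minimal eager-mode sketch (illustrative; printed values are rounded). The same computation is exposed through the public `tf.math.erf` wrapper.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.0, 0.5, 1.0, 2.0])
+y = tf.raw_ops.Erf(x=x)
+print(y.numpy())  # approx. [0.     0.5205 0.8427 0.9953]
+```
+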
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Erfc.md b/site/en/api_docs/python/tf/raw_ops/Erfc.md new file mode 100644 index 00000000000..761ed66ad28 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Erfc.md @@ -0,0 +1,77 @@ +description: Computes the complementary error function of x element-wise. + +
+ + +
+ +# tf.raw_ops.Erfc + + + + + + + + + +Computes the complementary error function of `x` element-wise. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
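+
+A minimal eager-mode sketch (illustrative; printed values are rounded). `erfc(x)` equals `1 - erf(x)`, and the public wrapper is `tf.math.erfc`.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.0, 0.5, 1.0, 2.0])
+y = tf.raw_ops.Erfc(x=x)
+print(y.numpy())  # approx. [1.     0.4795 0.1573 0.0047]
+```
+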
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Erfinv.md b/site/en/api_docs/python/tf/raw_ops/Erfinv.md new file mode 100644 index 00000000000..d5b30c6f9f9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Erfinv.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.Erfinv + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
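+
+This page carries no generated summary. A minimal eager-mode sketch (illustrative; values rounded), assuming the op computes the inverse Gauss error function, as the public `tf.math.erfinv` wrapper does, so that `Erfinv(Erf(x)) ≈ x`:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.0, 0.5, 0.9])
+y = tf.raw_ops.Erfinv(x=x)
+print(y.numpy())                    # approx. [0.     0.4769 1.1631]
+print(tf.raw_ops.Erf(x=y).numpy())  # approx. [0.  0.5 0.9]
+```
+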
+ diff --git a/site/en/api_docs/python/tf/raw_ops/EuclideanNorm.md b/site/en/api_docs/python/tf/raw_ops/EuclideanNorm.md new file mode 100644 index 00000000000..1cbaf4eb56f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/EuclideanNorm.md @@ -0,0 +1,99 @@ +description: Computes the euclidean norm of elements across dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.EuclideanNorm + + + + + + + + + +Computes the euclidean norm of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input` along the dimensions given in `axis`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`axis`. If `keep_dims` is true, the reduced dimensions are +retained with length 1. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The tensor to reduce. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The dimensions to reduce. Must be in the range +`[-rank(input), rank(input))`. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
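+
+A minimal eager-mode sketch (illustrative). The reduction computes `sqrt(sum(x**2))` along `axis`; the public wrapper is `tf.math.reduce_euclidean_norm`.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[3.0, 4.0], [6.0, 8.0]])
+
+# sqrt(3^2 + 4^2) = 5, sqrt(6^2 + 8^2) = 10
+print(tf.raw_ops.EuclideanNorm(input=x, axis=-1).numpy())                  # [ 5. 10.]
+print(tf.raw_ops.EuclideanNorm(input=x, axis=-1, keep_dims=True).numpy())  # [[ 5.] [10.]]
+```
+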
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Exit.md b/site/en/api_docs/python/tf/raw_ops/Exit.md new file mode 100644 index 00000000000..92cc575190a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Exit.md @@ -0,0 +1,78 @@ +description: Exits the current frame to its parent frame. + +
+ + +
+ +# tf.raw_ops.Exit + + + + + + + + + +Exits the current frame to its parent frame. + + + + + + + + + +Exit makes its input `data` available to the parent frame. + + + + + + + + + + + + + +
+`data` + +A `Tensor`. The tensor to be made available to the parent frame. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Exp.md b/site/en/api_docs/python/tf/raw_ops/Exp.md new file mode 100644 index 00000000000..1877ca9cf07 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Exp.md @@ -0,0 +1,103 @@ +description: Computes exponential of x element-wise. \\(y = e^x\\). + +
+ + +
+ +# tf.raw_ops.Exp + + + + + + + + + +Computes exponential of x element-wise. \\(y = e^x\\). + + + + + + + + + + This function computes the exponential of every element in the input tensor. + i.e. `exp(x)` or `e^(x)`, where `x` is the input tensor. + `e` denotes Euler's number and is approximately equal to 2.718281. + Output is positive for any real input. + + ```python + x = tf.constant(2.0) + tf.math.exp(x) ==> 7.389056 + + x = tf.constant([2.0, 8.0]) + tf.math.exp(x) ==> array([7.389056, 2980.958], dtype=float32) + ``` + + For complex numbers, the exponential value is calculated as follows: + + ``` + e^(x+iy) = e^x * e^iy = e^x * (cos y + i sin y) + ``` + + Let's consider complex number 1+1j as an example. + e^1 * (cos 1 + i sin 1) = 2.7182818284590 * (0.54030230586+0.8414709848j) + + ```python + x = tf.constant(1 + 1j) + tf.math.exp(x) ==> 1.4686939399158851+2.2873552871788423j + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExpandDims.md b/site/en/api_docs/python/tf/raw_ops/ExpandDims.md new file mode 100644 index 00000000000..b2a6c2cb4f3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExpandDims.md @@ -0,0 +1,119 @@ +description: Inserts a dimension of 1 into a tensor's shape. + +
+ + +
+ +# tf.raw_ops.ExpandDims + + + + + + + + + +Inserts a dimension of 1 into a tensor's shape. + + + + + + + + + +Given a tensor `input`, this operation inserts a dimension of 1 at the +dimension index `axis` of `input`'s shape. The dimension index `axis` starts at +zero; if you specify a negative number for `axis` it is counted backward from +the end. + +This operation is useful if you want to add a batch dimension to a single +element. For example, if you have a single image of shape `[height, width, +channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`, +which will make the shape `[1, height, width, channels]`. + +#### Other examples: + + + +``` +# 't' is a tensor of shape [2] +shape(expand_dims(t, 0)) ==> [1, 2] +shape(expand_dims(t, 1)) ==> [2, 1] +shape(expand_dims(t, -1)) ==> [2, 1] + +# 't2' is a tensor of shape [2, 3, 5] +shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5] +shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5] +shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1] +``` + +This operation requires that: + +`-1-input.dims() <= dim <= input.dims()` + +This operation is related to `squeeze()`, which removes dimensions of +size 1. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +0-D (scalar). Specifies the dimension index at which to +expand the shape of `input`. Must be in the range +`[-rank(input) - 1, rank(input)]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalAssertNextDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalAssertNextDataset.md new file mode 100644 index 00000000000..53356c0300b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalAssertNextDataset.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.ExperimentalAssertNextDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`transformations` + +A `Tensor` of type `string`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalAutoShardDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalAutoShardDataset.md new file mode 100644 index 00000000000..bc42c40aa85 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalAutoShardDataset.md @@ -0,0 +1,123 @@ +description: Creates a dataset that shards the input dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalAutoShardDataset + + + + + + + + + +Creates a dataset that shards the input dataset. + + + + + + + + + +Creates a dataset that shards the input dataset by num_workers, returning a +sharded dataset for the index-th worker. This attempts to automatically shard +a dataset by examining the Dataset graph and inserting a shard op before the +inputs to a reader Dataset (e.g. CSVDataset, TFRecordDataset). + +This dataset will throw a NotFound error if we cannot shard the dataset +automatically. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`num_workers` + +A `Tensor` of type `int64`. +A scalar representing the number of workers to distribute this dataset across. +
+`index` + +A `Tensor` of type `int64`. +A scalar representing the index of the current worker out of num_workers. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`auto_shard_policy` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalBytesProducedStatsDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalBytesProducedStatsDataset.md new file mode 100644 index 00000000000..15a3821b87f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalBytesProducedStatsDataset.md @@ -0,0 +1,98 @@ +description: Records the bytes size of each element of input_dataset in a StatsAggregator. + +
+ + +
+ +# tf.raw_ops.ExperimentalBytesProducedStatsDataset + + + + + + + + + +Records the bytes size of each element of `input_dataset` in a StatsAggregator. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`tag` + +A `Tensor` of type `string`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalCSVDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalCSVDataset.md new file mode 100644 index 00000000000..1420c0ca2a7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalCSVDataset.md @@ -0,0 +1,139 @@ +
+ + +
+ +# tf.raw_ops.ExperimentalCSVDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A `Tensor` of type `string`. +
+`compression_type` + +A `Tensor` of type `string`. +
+`buffer_size` + +A `Tensor` of type `int64`. +
+`header` + +A `Tensor` of type `bool`. +
+`field_delim` + +A `Tensor` of type `string`. +
+`use_quote_delim` + +A `Tensor` of type `bool`. +
+`na_value` + +A `Tensor` of type `string`. +
+`select_cols` + +A `Tensor` of type `int64`. +
+`record_defaults` + +A list of `Tensor` objects with types from: `float32`, `float64`, `int32`, `int64`, `string`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalChooseFastestDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalChooseFastestDataset.md new file mode 100644 index 00000000000..739e42e7217 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalChooseFastestDataset.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.ExperimentalChooseFastestDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_datasets` + +A list of at least 2 `Tensor` objects with type `variant`. +
+`num_experiments` + +An `int`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalDatasetCardinality.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalDatasetCardinality.md new file mode 100644 index 00000000000..0084d7a91c3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalDatasetCardinality.md @@ -0,0 +1,79 @@ +description: Returns the cardinality of input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalDatasetCardinality + + + + + + + + + +Returns the cardinality of `input_dataset`. + + + + + + + + + +Returns the cardinality of `input_dataset`. + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the dataset to return cardinality for. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
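+
+Calling this op directly requires a dataset `variant` tensor; in user code the same functionality is reached through `tf.data.experimental.cardinality` (a sketch, assuming TF 2.x eager execution):
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(42).batch(8)
+print(tf.data.experimental.cardinality(ds).numpy())  # 6 batches
+
+# Transformations whose output size cannot be inferred report UNKNOWN_CARDINALITY (-2).
+filtered = ds.filter(lambda x: tf.reduce_all(x < 100))
+print(tf.data.experimental.cardinality(filtered).numpy())  # -2
+```
+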
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalDatasetToTFRecord.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalDatasetToTFRecord.md new file mode 100644 index 00000000000..181cdf5c1a5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalDatasetToTFRecord.md @@ -0,0 +1,95 @@ +description: Writes the given dataset to the given file using the TFRecord format. + +
+ + +
+ +# tf.raw_ops.ExperimentalDatasetToTFRecord + + + + + + + + + +Writes the given dataset to the given file using the TFRecord format. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the dataset to write. +
+`filename` + +A `Tensor` of type `string`. +A scalar string tensor representing the filename to use. +
+`compression_type` + +A `Tensor` of type `string`. +A scalar string tensor containing either (i) the empty string (no +compression), (ii) "ZLIB", or (iii) "GZIP". +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalDenseToSparseBatchDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalDenseToSparseBatchDataset.md new file mode 100644 index 00000000000..c6176a0ecc7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalDenseToSparseBatchDataset.md @@ -0,0 +1,111 @@ +description: Creates a dataset that batches input elements into a SparseTensor. + +
+ + +
+ +# tf.raw_ops.ExperimentalDenseToSparseBatchDataset + + + + + + + + + +Creates a dataset that batches input elements into a SparseTensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A handle to an input dataset. Must have a single component. +
+`batch_size` + +A `Tensor` of type `int64`. +A scalar representing the number of elements to accumulate in a +batch. +
+`row_shape` + +A `Tensor` of type `int64`. +A vector representing the dense shape of each row in the produced +SparseTensor. The shape may be partially specified, using `-1` to indicate +that a particular dimension should use the maximum size of all batch elements. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalDirectedInterleaveDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalDirectedInterleaveDataset.md new file mode 100644 index 00000000000..ce2f91713e4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalDirectedInterleaveDataset.md @@ -0,0 +1,103 @@ +description: A substitute for InterleaveDataset on a fixed list of N datasets. + +
+ + +
+ +# tf.raw_ops.ExperimentalDirectedInterleaveDataset + + + + + + + + + +A substitute for `InterleaveDataset` on a fixed list of `N` datasets. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`selector_input_dataset` + +A `Tensor` of type `variant`. +A dataset of scalar `DT_INT64` elements that determines which of the +`N` data inputs should produce the next output element. +
+`data_input_datasets` + +A list of at least 1 `Tensor` objects with type `variant`. +`N` datasets with the same type that will be interleaved according to +the values of `selector_input_dataset`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalGroupByReducerDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalGroupByReducerDataset.md new file mode 100644 index 00000000000..bef6f39108b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalGroupByReducerDataset.md @@ -0,0 +1,166 @@ +description: Creates a dataset that computes a group-by on input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalGroupByReducerDataset + + + + + + + + + +Creates a dataset that computes a group-by on `input_dataset`. + + + + + + + + + +Creates a dataset that computes a group-by on `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`key_func_other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `key_func`. +
+`init_func_other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `init_func`. +
+`reduce_func_other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `reduce_func`. +
+`finalize_func_other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `finalize_func`. +
+`key_func` + +A function decorated with @Defun. +A function mapping an element of `input_dataset`, concatenated +with `key_func_other_arguments` to a scalar value of type DT_INT64. +
+`init_func` + +A function decorated with @Defun. +A function mapping a key of type DT_INT64, concatenated with +`init_func_other_arguments` to the initial reducer state. +
+`reduce_func` + +A function decorated with @Defun. +A function mapping the current reducer state and an element of `input_dataset`, +concatenated with `reduce_func_other_arguments` to a new reducer state. +
+`finalize_func` + +A function decorated with @Defun. +A function mapping the final reducer state to an output element. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalGroupByWindowDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalGroupByWindowDataset.md new file mode 100644 index 00000000000..2494b183d7c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalGroupByWindowDataset.md @@ -0,0 +1,138 @@ +description: Creates a dataset that computes a windowed group-by on input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalGroupByWindowDataset + + + + + + + + + +Creates a dataset that computes a windowed group-by on `input_dataset`. + + + + + + + + + +// + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`key_func_other_arguments` + +A list of `Tensor` objects. +
+`reduce_func_other_arguments` + +A list of `Tensor` objects. +
+`window_size_func_other_arguments` + +A list of `Tensor` objects. +
+`key_func` + +A function decorated with @Defun. +A function mapping an element of `input_dataset`, concatenated +with `key_func_other_arguments` to a scalar value of type DT_INT64. +
+`reduce_func` + +A function decorated with @Defun. +
+`window_size_func` + +A function decorated with @Defun. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
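+
+Wiring up the `@Defun`-decorated functions by hand is rarely done; the same transformation is exposed through `tf.data.experimental.group_by_window` (a sketch, assuming TF 2.x eager execution):
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(10)
+ds = ds.apply(tf.data.experimental.group_by_window(
+    key_func=lambda x: x % 2,                        # group by parity (int64 key)
+    reduce_func=lambda key, window: window.batch(5), # batch each group's window
+    window_size=5))
+print([batch.tolist() for batch in ds.as_numpy_iterator()])
+# [[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]]
+```
+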
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalIgnoreErrorsDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalIgnoreErrorsDataset.md new file mode 100644 index 00000000000..3c05cbf790f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalIgnoreErrorsDataset.md @@ -0,0 +1,91 @@ +description: Creates a dataset that contains the elements of input_dataset ignoring errors. + +
+ + +
+ +# tf.raw_ops.ExperimentalIgnoreErrorsDataset + + + + + + + + + +Creates a dataset that contains the elements of `input_dataset` ignoring errors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalIteratorGetDevice.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalIteratorGetDevice.md new file mode 100644 index 00000000000..7ca73267ba2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalIteratorGetDevice.md @@ -0,0 +1,77 @@ +description: Returns the name of the device on which resource has been placed. + +
+ + +
+ +# tf.raw_ops.ExperimentalIteratorGetDevice + + + + + + + + + +Returns the name of the device on which `resource` has been placed. + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalLMDBDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalLMDBDataset.md new file mode 100644 index 00000000000..5b29608fd74 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalLMDBDataset.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.ExperimentalLMDBDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A `Tensor` of type `string`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalLatencyStatsDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalLatencyStatsDataset.md new file mode 100644 index 00000000000..b3e21382a52 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalLatencyStatsDataset.md @@ -0,0 +1,98 @@ +description: Records the latency of producing input_dataset elements in a StatsAggregator. + +
+ + +
+ +# tf.raw_ops.ExperimentalLatencyStatsDataset + + + + + + + + + +Records the latency of producing `input_dataset` elements in a StatsAggregator. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`tag` + +A `Tensor` of type `string`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalMapAndBatchDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalMapAndBatchDataset.md new file mode 100644 index 00000000000..8b0603997db --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalMapAndBatchDataset.md @@ -0,0 +1,151 @@ +description: Creates a dataset that fuses mapping with batching. + +
+ + +
+ +# tf.raw_ops.ExperimentalMapAndBatchDataset + + + + + + + + + +Creates a dataset that fuses mapping with batching. + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset` and then +batches `batch_size` of them. + +Unlike a "MapDataset", which applies `f` sequentially, this dataset invokes up +to `batch_size * num_parallel_batches` copies of `f` in parallel. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when building a closure +for `f`. +
+`batch_size` + +A `Tensor` of type `int64`. +A scalar representing the number of elements to accumulate in a +batch. It determines the number of concurrent invocations of `f` that process +elements from `input_dataset` in parallel. +
+`num_parallel_calls` + +A `Tensor` of type `int64`. +A scalar representing the maximum number of parallel invocations of the `map_fn` +function. Applying the `map_fn` on consecutive input elements in parallel has +the potential to improve input pipeline throughput. +
+`drop_remainder` + +A `Tensor` of type `bool`. +A scalar representing whether the last batch should be dropped in case its size +is smaller than desired. +
+`f` + +A function decorated with @Defun. +A function to apply to the outputs of `input_dataset`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`preserve_cardinality` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalMapDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalMapDataset.md new file mode 100644 index 00000000000..44edaec3dff --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalMapDataset.md @@ -0,0 +1,120 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalMapDataset + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +
+`f` + +A function decorated with @Defun. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`use_inter_op_parallelism` + +An optional `bool`. Defaults to `True`. +
+`preserve_cardinality` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalMatchingFilesDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalMatchingFilesDataset.md new file mode 100644 index 00000000000..38f8a7722ce --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalMatchingFilesDataset.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.ExperimentalMatchingFilesDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`patterns` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalMaxIntraOpParallelismDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalMaxIntraOpParallelismDataset.md new file mode 100644 index 00000000000..8f3a1c0c3ce --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalMaxIntraOpParallelismDataset.md @@ -0,0 +1,99 @@ +description: Creates a dataset that overrides the maximum intra-op parallelism. + +
+ + +
+ +# tf.raw_ops.ExperimentalMaxIntraOpParallelismDataset + + + + + + + + + +Creates a dataset that overrides the maximum intra-op parallelism. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`max_intra_op_parallelism` + +A `Tensor` of type `int64`. +Identifies the maximum intra-op parallelism to use. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalNonSerializableDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalNonSerializableDataset.md new file mode 100644 index 00000000000..bdcb8666144 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalNonSerializableDataset.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.ExperimentalNonSerializableDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalParallelInterleaveDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalParallelInterleaveDataset.md new file mode 100644 index 00000000000..a56bed22068 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalParallelInterleaveDataset.md @@ -0,0 +1,152 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalParallelInterleaveDataset + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + +The resulting dataset is similar to the `InterleaveDataset`, with the exception +that if retrieving the next value from a dataset would cause the requester to +block, it will skip that input dataset. This dataset is especially useful +when loading data from a variable-latency datastores (e.g. HDFS, GCS), as it +allows the training step to proceed so long as some data is available. + +!! WARNING !! This dataset is not deterministic! + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +
+`cycle_length` + +A `Tensor` of type `int64`. +
+`block_length` + +A `Tensor` of type `int64`. +
+`sloppy` + +A `Tensor` of type `bool`. +
+`buffer_output_elements` + +A `Tensor` of type `int64`. +
+`prefetch_input_elements` + +A `Tensor` of type `int64`. +
+`f` + +A function decorated with @Defun. +A function mapping elements of `input_dataset`, concatenated with +`other_arguments`, to a Dataset variant that contains elements matching +`output_types` and `output_shapes`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalParseExampleDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalParseExampleDataset.md new file mode 100644 index 00000000000..28c6c59f77b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalParseExampleDataset.md @@ -0,0 +1,162 @@ +description: Transforms input_dataset containing Example protos as vectors of DT_STRING into a dataset of Tensor or SparseTensor objects representing the parsed features. + +
+ + +
+ +# tf.raw_ops.ExperimentalParseExampleDataset + + + + + + + + + +Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`num_parallel_calls` + +A `Tensor` of type `int64`. +
+`dense_defaults` + +A list of `Tensor` objects with types from: `float32`, `int64`, `string`. +A dict mapping string keys to `Tensor`s. +The keys of the dict must match the dense_keys of the feature. +
+`sparse_keys` + +A list of `strings`. +A list of string keys in the examples features. +The results for these keys will be returned as `SparseTensor` objects. +
+`dense_keys` + +A list of `strings`. +A list of Ndense string Tensors (scalars). +The keys expected in the Examples features associated with dense values. +
+`sparse_types` + +A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. +A list of `DTypes` of the same length as `sparse_keys`. +Only tf.float32 (`FloatList`), tf.int64 (`Int64List`), +and tf.string (`BytesList`) are supported. +
+`dense_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +List of tuples with the same length as `dense_keys`. +The shape of the data for each dense feature referenced by `dense_keys`. +Required for any input tensors identified by `dense_keys`. Must be +either fully defined, or may contain an unknown first dimension. +An unknown first dimension means the feature is treated as having +a variable number of blocks, and the output shape along this dimension +is considered unknown at graph build time. Padding is applied for +minibatch elements smaller than the maximum number of blocks for the +given feature along this dimension. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type list for the return values. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +The list of shapes being produced. +
+`sloppy` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalPrivateThreadPoolDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalPrivateThreadPoolDataset.md new file mode 100644 index 00000000000..e44cb8d0d21 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalPrivateThreadPoolDataset.md @@ -0,0 +1,99 @@ +description: Creates a dataset that uses a custom thread pool to compute input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalPrivateThreadPoolDataset + + + + + + + + + +Creates a dataset that uses a custom thread pool to compute `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`num_threads` + +A `Tensor` of type `int64`. +Identifies the number of threads to use for the private threadpool. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalRandomDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalRandomDataset.md new file mode 100644 index 00000000000..dfe6dcd0b9a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalRandomDataset.md @@ -0,0 +1,102 @@ +description: Creates a Dataset that returns pseudorandom numbers. + +
+ + +
+ +# tf.raw_ops.ExperimentalRandomDataset + + + + + + + + + +Creates a Dataset that returns pseudorandom numbers. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`seed` + +A `Tensor` of type `int64`. +A scalar seed for the random number generator. If either seed or +seed2 is set to be non-zero, the random number generator is seeded +by the given seed. Otherwise, a random seed is used. +
+`seed2` + +A `Tensor` of type `int64`. +A second scalar seed to avoid seed collision. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalRebatchDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalRebatchDataset.md new file mode 100644 index 00000000000..3c64b43ba37 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalRebatchDataset.md @@ -0,0 +1,112 @@ +description: Creates a dataset that changes the batch size. + +
+ + +
+ +# tf.raw_ops.ExperimentalRebatchDataset + + + + + + + + + +Creates a dataset that changes the batch size. + + + + + + + + + +Creates a dataset that changes the batch size of the dataset to current batch +size // num_replicas. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`num_replicas` + +A `Tensor` of type `int64`. +A scalar representing the number of replicas to distribute this batch across. As +a result of this transformation the current batch size would end up being +divided by this parameter. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`use_fallback` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalScanDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalScanDataset.md new file mode 100644 index 00000000000..1c59529f8a5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalScanDataset.md @@ -0,0 +1,120 @@ +description: Creates a dataset successively reduces f over the elements of input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalScanDataset + + + + + + + + + +Creates a dataset successively reduces `f` over the elements of `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`initial_state` + +A list of `Tensor` objects. +
+`other_arguments` + +A list of `Tensor` objects. +
+`f` + +A function decorated with @Defun. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`preserve_cardinality` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalSetStatsAggregatorDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalSetStatsAggregatorDataset.md new file mode 100644 index 00000000000..b25a507b03e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalSetStatsAggregatorDataset.md @@ -0,0 +1,111 @@ +
+ + +
+ +# tf.raw_ops.ExperimentalSetStatsAggregatorDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`stats_aggregator` + +A `Tensor` of type `resource`. +
+`tag` + +A `Tensor` of type `string`. +
+`counter_prefix` + +A `Tensor` of type `string`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalSleepDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalSleepDataset.md new file mode 100644 index 00000000000..c05ace67ac3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalSleepDataset.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.ExperimentalSleepDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`sleep_microseconds` + +A `Tensor` of type `int64`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalSlidingWindowDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalSlidingWindowDataset.md new file mode 100644 index 00000000000..83a03512e39 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalSlidingWindowDataset.md @@ -0,0 +1,119 @@ +description: Creates a dataset that passes a sliding window over input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalSlidingWindowDataset + + + + + + + + + +Creates a dataset that passes a sliding window over `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`window_size` + +A `Tensor` of type `int64`. +A scalar representing the number of elements in the +sliding window. +
+`window_shift` + +A `Tensor` of type `int64`. +A scalar representing the steps moving the sliding window +forward in one iteration. It must be positive. +
+`window_stride` + +A `Tensor` of type `int64`. +A scalar representing the stride of the input elements of the sliding window. +It must be positive. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalSqlDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalSqlDataset.md new file mode 100644 index 00000000000..aa94a04ed02 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalSqlDataset.md @@ -0,0 +1,107 @@ +description: Creates a dataset that executes a SQL query and emits rows of the result set. + +
+ + +
+ +# tf.raw_ops.ExperimentalSqlDataset + + + + + + + + + +Creates a dataset that executes a SQL query and emits rows of the result set. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`driver_name` + +A `Tensor` of type `string`. +The database type. Currently, the only supported type is 'sqlite'. +
+`data_source_name` + +A `Tensor` of type `string`. +A connection string to connect to the database. +
+`query` + +A `Tensor` of type `string`. A SQL query to execute. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalStatsAggregatorHandle.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalStatsAggregatorHandle.md new file mode 100644 index 00000000000..2dd4a15cf87 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalStatsAggregatorHandle.md @@ -0,0 +1,84 @@ +description: Creates a statistics manager resource. + +
+ + +
+ +# tf.raw_ops.ExperimentalStatsAggregatorHandle + + + + + + + + + +Creates a statistics manager resource. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalStatsAggregatorSummary.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalStatsAggregatorSummary.md new file mode 100644 index 00000000000..3e73d304d95 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalStatsAggregatorSummary.md @@ -0,0 +1,77 @@ +description: Produces a summary of any statistics recorded by the given statistics manager. + +
+ + +
+ +# tf.raw_ops.ExperimentalStatsAggregatorSummary + + + + + + + + + +Produces a summary of any statistics recorded by the given statistics manager. + + + + + + + + + + + + + + + + + + + + + + +
+`iterator` + +A `Tensor` of type `resource`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalTakeWhileDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalTakeWhileDataset.md new file mode 100644 index 00000000000..e528f3b7b38 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalTakeWhileDataset.md @@ -0,0 +1,114 @@ +description: Creates a dataset that stops iteration when predicate is false. + +
+ + +
+ +# tf.raw_ops.ExperimentalTakeWhileDataset + + + + + + + + + +Creates a dataset that stops iteration when predicate` is false. + + + + + + + + + +The `predicate` function must return a scalar boolean and accept the +following arguments: + +* One tensor for each component of an element of `input_dataset`. +* One tensor for each value in `other_arguments`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `predicate`. +
+`predicate` + +A function decorated with @Defun. +A function returning a scalar boolean. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
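+
+In user code the same transformation is exposed through `tf.data.experimental.take_while`, which builds the `predicate` function for you (a sketch, assuming TF 2.x eager execution):
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(10)
+ds = ds.apply(tf.data.experimental.take_while(lambda x: x < 5))
+print(list(ds.as_numpy_iterator()))  # [0, 1, 2, 3, 4]
+```
+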
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalThreadPoolDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalThreadPoolDataset.md new file mode 100644 index 00000000000..eb3c554c2f2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalThreadPoolDataset.md @@ -0,0 +1,99 @@ +description: Creates a dataset that uses a custom thread pool to compute input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalThreadPoolDataset + + + + + + + + + +Creates a dataset that uses a custom thread pool to compute `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`thread_pool` + +A `Tensor` of type `resource`. +A resource produced by the ThreadPoolHandle op. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalThreadPoolHandle.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalThreadPoolHandle.md new file mode 100644 index 00000000000..ff8a211988a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalThreadPoolHandle.md @@ -0,0 +1,111 @@ +description: Creates a dataset that uses a custom thread pool to compute input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalThreadPoolHandle + + + + + + + + + +Creates a dataset that uses a custom thread pool to compute `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_threads` + +An `int`. The number of threads in the thread pool. +
+`display_name` + +A `string`. +A human-readable name for the threads that may be visible in some +visualizations. +
+`max_intra_op_parallelism` + +An optional `int`. Defaults to `1`. +The maximum degree of parallelism to use within operations that execute on this +threadpool. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalUnbatchDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalUnbatchDataset.md new file mode 100644 index 00000000000..1a7ee9e83e1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalUnbatchDataset.md @@ -0,0 +1,91 @@ +description: A dataset that splits the elements of its input into multiple elements. + +
+ + +
+ +# tf.raw_ops.ExperimentalUnbatchDataset + + + + + + + + + +A dataset that splits the elements of its input into multiple elements. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExperimentalUniqueDataset.md b/site/en/api_docs/python/tf/raw_ops/ExperimentalUniqueDataset.md new file mode 100644 index 00000000000..c37b0e3bbe4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExperimentalUniqueDataset.md @@ -0,0 +1,91 @@ +description: Creates a dataset that contains the unique elements of input_dataset. + +
+ + +
+ +# tf.raw_ops.ExperimentalUniqueDataset + + + + + + + + + +Creates a dataset that contains the unique elements of `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Expint.md b/site/en/api_docs/python/tf/raw_ops/Expint.md new file mode 100644 index 00000000000..aed5c86be6d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Expint.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.Expint + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
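+
+This page carries no generated summary. A minimal eager-mode sketch (illustrative; values rounded), assuming the op computes the exponential integral Ei(x), the function exposed as `tf.math.special.expint` in recent TensorFlow releases:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.5, 1.0, 2.0])
+y = tf.raw_ops.Expint(x=x)
+print(y.numpy())  # approx. [0.4542 1.8951 4.9542]
+```
+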
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Expm1.md b/site/en/api_docs/python/tf/raw_ops/Expm1.md new file mode 100644 index 00000000000..7d45712cdb3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Expm1.md @@ -0,0 +1,90 @@ +description: Computes exp(x) - 1 element-wise. + +
+ + +
+ +# tf.raw_ops.Expm1 + + + + + + + + + +Computes `exp(x) - 1` element-wise. + + + + + + + + + + i.e. `exp(x) - 1` or `e^(x) - 1`, where `x` is the input tensor. + `e` denotes Euler's number and is approximately equal to 2.718281. + + ```python + x = tf.constant(2.0) + tf.math.expm1(x) ==> 6.389056 + + x = tf.constant([2.0, 8.0]) + tf.math.expm1(x) ==> array([6.389056, 2979.958], dtype=float32) + + x = tf.constant(1 + 1j) + tf.math.expm1(x) ==> (0.46869393991588515+2.2873552871788423j) + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExtractGlimpse.md b/site/en/api_docs/python/tf/raw_ops/ExtractGlimpse.md new file mode 100644 index 00000000000..e3deebda8be --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExtractGlimpse.md @@ -0,0 +1,157 @@ +description: Extracts a glimpse from the input tensor. + +
+ + +
+ +# tf.raw_ops.ExtractGlimpse + + + + + + + + + +Extracts a glimpse from the input tensor. + + + + + + + + + +Returns a set of windows called glimpses extracted at location +`offsets` from the input tensor. If the windows only partially +overlaps the inputs, the non overlapping areas will be filled with +random noise. + +The result is a 4-D tensor of shape `[batch_size, glimpse_height, +glimpse_width, channels]`. The channels and batch dimensions are the +same as that of the input tensor. The height and width of the output +windows are specified in the `size` parameter. + +The argument `normalized` and `centered` controls how the windows are built: + +* If the coordinates are normalized but not centered, 0.0 and 1.0 + correspond to the minimum and maximum of each height and width + dimension. +* If the coordinates are both normalized and centered, they range from + -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper + left corner, the lower right corner is located at (1.0, 1.0) and the + center is at (0, 0). +* If the coordinates are not normalized they are interpreted as + numbers of pixels. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `float32`. +A 4-D float tensor of shape `[batch_size, height, width, channels]`. +
+`size` + +A `Tensor` of type `int32`. +A 1-D tensor of 2 elements containing the size of the glimpses +to extract. The glimpse height must be specified first, following +by the glimpse width. +
+`offsets` + +A `Tensor` of type `float32`. +A 2-D integer tensor of shape `[batch_size, 2]` containing +the y, x locations of the center of each window. +
+`centered` + +An optional `bool`. Defaults to `True`. +indicates if the offset coordinates are centered relative to +the image, in which case the (0, 0) offset is relative to the center +of the input images. If false, the (0,0) offset corresponds to the +upper left corner of the input images. +
+`normalized` + +An optional `bool`. Defaults to `True`. +indicates if the offset coordinates are normalized. +
+`uniform_noise` + +An optional `bool`. Defaults to `True`. +indicates if the noise should be generated using a +uniform distribution or a Gaussian distribution. +
+`noise` + +An optional `string`. Defaults to `"uniform"`. +indicates if the noise should be `uniform`, `gaussian`, or +`zero`. The default is `uniform`, which means the noise type +will be decided by `uniform_noise`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
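+
+A minimal eager-mode sketch (illustrative). With `centered=True` and `normalized=True`, an offset of `(0, 0)` refers to the image center, so this extracts the central 4x4 window of an 8x8 input:
+
+```python
+import tensorflow as tf
+
+# One 8x8 single-channel image whose pixel values encode their position.
+img = tf.reshape(tf.range(64, dtype=tf.float32), [1, 8, 8, 1])
+
+glimpse = tf.raw_ops.ExtractGlimpse(
+    input=img,
+    size=[4, 4],           # glimpse height, then width
+    offsets=[[0.0, 0.0]],  # y, x center of the window for each batch element
+    centered=True,
+    normalized=True)
+print(glimpse.shape)  # (1, 4, 4, 1)
+```
+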
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExtractImagePatches.md b/site/en/api_docs/python/tf/raw_ops/ExtractImagePatches.md new file mode 100644 index 00000000000..b0982ff1898 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExtractImagePatches.md @@ -0,0 +1,116 @@ +description: Extract patches from images and put them in the "depth" output dimension. + +
+ + +
+ +# tf.raw_ops.ExtractImagePatches + + + + + + + + + +Extract `patches` from `images` and put them in the "depth" output dimension. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`. +4-D Tensor with shape `[batch, in_rows, in_cols, depth]`. +
+`ksizes` + +A list of `ints` that has length `>= 4`. +The size of the sliding window for each dimension of `images`. +
+`strides` + +A list of `ints` that has length `>= 4`. +How far the centers of two consecutive patches are in +the images. Must be: `[1, stride_rows, stride_cols, 1]`. +
+`rates` + +A list of `ints` that has length `>= 4`. +Must be: `[1, rate_rows, rate_cols, 1]`. This is the +input stride, specifying how far two consecutive patch samples are in the +input. Equivalent to extracting patches with +`patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by +subsampling them spatially by a factor of `rates`. This is equivalent to +`rate` in dilated (a.k.a. Atrous) convolutions. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
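+
+A small sketch of how the output shape works out (illustrative only, assuming eager
+execution; the 4x4 input is an arbitrary example):
+
+```python
+import tensorflow as tf
+
+# A single-channel 4x4 image holding the values 1..16.
+images = tf.reshape(tf.range(1.0, 17.0), [1, 4, 4, 1])
+patches = tf.raw_ops.ExtractImagePatches(
+    images=images,
+    ksizes=[1, 2, 2, 1],    # 2x2 patches
+    strides=[1, 2, 2, 1],   # non-overlapping
+    rates=[1, 1, 1, 1],     # no dilation
+    padding="VALID")
+print(patches.shape)  # (1, 2, 2, 4): each 2x2 patch is flattened into the depth dimension
+```
+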
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExtractJpegShape.md b/site/en/api_docs/python/tf/raw_ops/ExtractJpegShape.md new file mode 100644 index 00000000000..bdfc9aa8e2b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExtractJpegShape.md @@ -0,0 +1,87 @@ +description: Extract the shape information of a JPEG-encoded image. + +
+ + +
+ +# tf.raw_ops.ExtractJpegShape + + + + + + + + + +Extract the shape information of a JPEG-encoded image. + + + + + + + + + +This op only parses the image header, so it is much faster than DecodeJpeg. + + + + + + + + + + + + + + + + +
+`contents` + +A `Tensor` of type `string`. 0-D. The JPEG-encoded image. +
+`output_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +(Optional) The output type of the operation (int32 or int64). +Defaults to int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_type`. +
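+
+A minimal sketch (illustrative only, assuming eager execution; the JPEG bytes are
+produced with tf.io.encode_jpeg purely so there is something to parse):
+
+```python
+import tensorflow as tf
+
+# Encode a dummy 8x12 RGB image to get JPEG-encoded bytes.
+image = tf.zeros([8, 12, 3], dtype=tf.uint8)
+contents = tf.io.encode_jpeg(image)
+
+shape = tf.raw_ops.ExtractJpegShape(contents=contents)
+print(shape.numpy())  # [ 8 12  3] -- read from the header only, no full decode
+```
+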
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ExtractVolumePatches.md b/site/en/api_docs/python/tf/raw_ops/ExtractVolumePatches.md new file mode 100644 index 00000000000..20bb0413523 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ExtractVolumePatches.md @@ -0,0 +1,110 @@ +description: Extract patches from input and put them in the "depth" output dimension. 3D extension of extract_image_patches. + +
+ + +
+ +# tf.raw_ops.ExtractVolumePatches + + + + + + + + + +Extract `patches` from `input` and put them in the "depth" output dimension. 3D extension of `extract_image_patches`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +5-D Tensor with shape `[batch, in_planes, in_rows, in_cols, depth]`. +
+`ksizes` + +A list of `ints` that has length `>= 5`. +The size of the sliding window for each dimension of `input`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D of length 5. How far the centers of two consecutive patches are in +`input`. Must be: `[1, stride_planes, stride_rows, stride_cols, 1]`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. + +We specify the size-related attributes as: + +```python +ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1] +strides = [1, stride_planes, stride_rows, stride_cols, 1] +``` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
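+
+A minimal sketch (illustrative only, assuming eager execution; the 2x2x2 volume is an
+arbitrary example):
+
+```python
+import tensorflow as tf
+
+# A single-channel 2x2x2 volume; one 2x2x2 patch covers it entirely.
+volume = tf.reshape(tf.range(1.0, 9.0), [1, 2, 2, 2, 1])
+patches = tf.raw_ops.ExtractVolumePatches(
+    input=volume,
+    ksizes=[1, 2, 2, 2, 1],
+    strides=[1, 2, 2, 2, 1],
+    padding="VALID")
+print(patches.shape)  # (1, 1, 1, 1, 8): the 8 voxels end up in the depth dimension
+```
+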
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FFT.md b/site/en/api_docs/python/tf/raw_ops/FFT.md new file mode 100644 index 00000000000..867d24253b6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FFT.md @@ -0,0 +1,80 @@ +description: Fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.FFT + + + + + + + + + +Fast Fourier transform. + + + + + + + + + +Computes the 1-dimensional discrete Fourier transform over the inner-most +dimension of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
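+
+A short sketch (illustrative only, assuming eager execution):
+
+```python
+import tensorflow as tf
+
+x = tf.cast([1.0, 2.0, 3.0, 4.0], tf.complex64)
+spectrum = tf.raw_ops.FFT(input=x)
+print(spectrum.numpy())  # DFT over the innermost (here, only) dimension
+# tf.raw_ops.IFFT(input=spectrum) recovers x up to floating-point error.
+```
+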
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FFT2D.md b/site/en/api_docs/python/tf/raw_ops/FFT2D.md new file mode 100644 index 00000000000..df403179900 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FFT2D.md @@ -0,0 +1,80 @@ +description: 2D fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.FFT2D + + + + + + + + + +2D fast Fourier transform. + + + + + + + + + +Computes the 2-dimensional discrete Fourier transform over the inner-most +2 dimensions of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FFT3D.md b/site/en/api_docs/python/tf/raw_ops/FFT3D.md new file mode 100644 index 00000000000..f85aa20f3d5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FFT3D.md @@ -0,0 +1,80 @@ +description: 3D fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.FFT3D + + + + + + + + + +3D fast Fourier transform. + + + + + + + + + +Computes the 3-dimensional discrete Fourier transform over the inner-most 3 +dimensions of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FIFOQueue.md b/site/en/api_docs/python/tf/raw_ops/FIFOQueue.md new file mode 100644 index 00000000000..51c1ea97533 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FIFOQueue.md @@ -0,0 +1,116 @@ +description: A queue that produces elements in first-in first-out order. + +
+ + +
+ +# tf.raw_ops.FIFOQueue + + + + + + + + + +A queue that produces elements in first-in first-out order. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a value. +
+`shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +The shape of each component in a value. The length of this attr must +be either 0 or the same as the length of component_types. If the length of +this attr is 0, the shapes of queue elements are not constrained, and +only one element may be dequeued at a time. +
+`capacity` + +An optional `int`. Defaults to `-1`. +The upper bound on the number of elements in this queue. +Negative numbers mean no limit. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this queue will be shared under the given name +across multiple sessions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FIFOQueueV2.md b/site/en/api_docs/python/tf/raw_ops/FIFOQueueV2.md new file mode 100644 index 00000000000..7b124fa0cd5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FIFOQueueV2.md @@ -0,0 +1,116 @@ +description: A queue that produces elements in first-in first-out order. + +
+ + +
+ +# tf.raw_ops.FIFOQueueV2 + + + + + + + + + +A queue that produces elements in first-in first-out order. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a value. +
+`shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +The shape of each component in a value. The length of this attr must +be either 0 or the same as the length of component_types. If the length of +this attr is 0, the shapes of queue elements are not constrained, and +only one element may be dequeued at a time. +
+`capacity` + +An optional `int`. Defaults to `-1`. +The upper bound on the number of elements in this queue. +Negative numbers mean no limit. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this queue will be shared under the given name +across multiple sessions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Fact.md b/site/en/api_docs/python/tf/raw_ops/Fact.md new file mode 100644 index 00000000000..a352169b080 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Fact.md @@ -0,0 +1,70 @@ +description: Output a fact about factorials. + +
+ + +
+ +# tf.raw_ops.Fact + + + + + + + + + +Output a fact about factorials. + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
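+
+The op takes no inputs and simply returns a string scalar (illustrative only, assuming
+eager execution):
+
+```python
+import tensorflow as tf
+
+print(tf.raw_ops.Fact().numpy().decode("utf-8"))  # prints a factorial-themed quip
+```
+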
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FakeParam.md b/site/en/api_docs/python/tf/raw_ops/FakeParam.md new file mode 100644 index 00000000000..72b113a2924 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FakeParam.md @@ -0,0 +1,88 @@ +description: This op is used as a placeholder in If branch functions. It doesn't provide a + +
+ + +
+ +# tf.raw_ops.FakeParam + + + + + + + + + +This op is used as a placeholder in If branch functions. It doesn't provide a + + + + + + + + +valid output when run, so must either be removed (e.g. replaced with a +function input) or guaranteed not to be used (e.g. if mirroring an +intermediate output needed for the gradient computation of the other branch). + + + + + + + + + + + + + + + + +
+`dtype` + +A tf.DType. The type of the output. +
+`shape` + +A tf.TensorShape or list of `ints`. +The purported shape of the output. This is only used for shape inference; +the output will not necessarily have this shape. Can be a partial shape. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxArgs.md b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxArgs.md new file mode 100644 index 00000000000..8ae31e0d8fb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxArgs.md @@ -0,0 +1,121 @@ +description: Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. + +
+ + +
+ +# tf.raw_ops.FakeQuantWithMinMaxArgs + + + + + + + + + +Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. + + + + + + + + + +Attributes `[min; max]` define the clamping range for the `inputs` data. +`inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` +when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and +then de-quantized and output as floats in `[min; max]` interval. +`num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive. + +Before quantization, `min` and `max` values are adjusted with the following +logic. +It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values, +the behavior can be unexpected: +If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`. +If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`. +If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `, +`min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`. + +Quantization is called fake since the output is still in floating point. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor` of type `float32`. +
+`min` + +An optional `float`. Defaults to `-6`. +
+`max` + +An optional `float`. Defaults to `6`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
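+
+A minimal sketch (illustrative only, assuming eager execution; the input values are
+arbitrary):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 2.5, 7.0])
+y = tf.raw_ops.FakeQuantWithMinMaxArgs(inputs=x, min=-6.0, max=6.0, num_bits=8)
+# Values are clamped to the adjusted [min; max] range and snapped onto the
+# 2^8 - 1 step grid, but the output dtype is still float32.
+print(y.numpy())
+```
+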
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxArgsGradient.md b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxArgsGradient.md new file mode 100644 index 00000000000..a3637df00d8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxArgsGradient.md @@ -0,0 +1,114 @@ +description: Compute gradients for a FakeQuantWithMinMaxArgs operation. + +
+ + +
+ +# tf.raw_ops.FakeQuantWithMinMaxArgsGradient + + + + + + + + + +Compute gradients for a FakeQuantWithMinMaxArgs operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor` of type `float32`. +Backpropagated gradients above the FakeQuantWithMinMaxArgs operation. +
+`inputs` + +A `Tensor` of type `float32`. +Values passed as inputs to the FakeQuantWithMinMaxArgs operation. +
+`min` + +An optional `float`. Defaults to `-6`. +
+`max` + +An optional `float`. Defaults to `6`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVars.md b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVars.md new file mode 100644 index 00000000000..9ec3cd85397 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVars.md @@ -0,0 +1,124 @@ +description: Fake-quantize the 'inputs' tensor of type float via global float scalars min + +
+ + +
+ +# tf.raw_ops.FakeQuantWithMinMaxVars + + + + + + + + + +Fake-quantize the 'inputs' tensor of type float via global float scalars `min` + + + + + + + + + +and `max` to 'outputs' tensor of same shape as `inputs`. + +`[min; max]` define the clamping range for the `inputs` data. +`inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` +when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and +then de-quantized and output as floats in `[min; max]` interval. +`num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive. + +Before quantization, `min` and `max` values are adjusted with the following +logic. +It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values, +the behavior can be unexpected: +If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`. +If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`. +If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `, +`min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`. + +This operation has a gradient and thus allows for training `min` and `max` +values. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor` of type `float32`. +
+`min` + +A `Tensor` of type `float32`. +
+`max` + +A `Tensor` of type `float32`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
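+
+Because the op has a registered gradient, `min` and `max` can be held in variables and
+trained. A minimal sketch (illustrative only, assuming eager execution; the values are
+arbitrary):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 2.5, 7.0])
+min_var = tf.Variable(0.0)
+max_var = tf.Variable(6.0)
+
+with tf.GradientTape() as tape:
+  y = tf.raw_ops.FakeQuantWithMinMaxVars(inputs=x, min=min_var, max=max_var)
+
+# Gradients flow back to the clamping range as well as to the inputs.
+d_min, d_max = tape.gradient(y, [min_var, max_var])
+```
+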
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVarsGradient.md b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVarsGradient.md new file mode 100644 index 00000000000..12f47f6d209 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVarsGradient.md @@ -0,0 +1,138 @@ +description: Compute gradients for a FakeQuantWithMinMaxVars operation. + +
+ + +
+ +# tf.raw_ops.FakeQuantWithMinMaxVarsGradient + + + + + + + + + +Compute gradients for a FakeQuantWithMinMaxVars operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor` of type `float32`. +Backpropagated gradients above the FakeQuantWithMinMaxVars operation. +
+`inputs` + +A `Tensor` of type `float32`. +Values passed as inputs to the FakeQuantWithMinMaxVars operation. +min, max: Quantization interval, scalar floats. +
+`min` + +A `Tensor` of type `float32`. +
+`max` + +A `Tensor` of type `float32`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +The bitwidth of the quantization; between 2 and 8, inclusive. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +Whether to quantize into 2^num_bits - 1 distinct values. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max). +
+`backprops_wrt_input` + +A `Tensor` of type `float32`. +
+`backprop_wrt_min` + +A `Tensor` of type `float32`. +
+`backprop_wrt_max` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVarsPerChannel.md b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVarsPerChannel.md new file mode 100644 index 00000000000..703cbd71b31 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVarsPerChannel.md @@ -0,0 +1,125 @@ +description: Fake-quantize the 'inputs' tensor of type float and one of the shapes: [d], + +
+ + +
+ +# tf.raw_ops.FakeQuantWithMinMaxVarsPerChannel + + + + + + + + + +Fake-quantize the 'inputs' tensor of type float and one of the shapes: `[d]`, + + + + + + + + + +`[b, d]` `[b, h, w, d]` via per-channel floats `min` and `max` of shape `[d]` +to 'outputs' tensor of same shape as `inputs`. + +`[min; max]` define the clamping range for the `inputs` data. +`inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` +when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and +then de-quantized and output as floats in `[min; max]` interval. +`num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive. + +Before quantization, `min` and `max` values are adjusted with the following +logic. +It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values, +the behavior can be unexpected: +If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`. +If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`. +If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `, +`min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`. + +This operation has a gradient and thus allows for training `min` and `max` +values. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor` of type `float32`. +
+`min` + +A `Tensor` of type `float32`. +
+`max` + +A `Tensor` of type `float32`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVarsPerChannelGradient.md b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVarsPerChannelGradient.md new file mode 100644 index 00000000000..552ab062ff8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FakeQuantWithMinMaxVarsPerChannelGradient.md @@ -0,0 +1,140 @@ +description: Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. + +
+ + +
+ +# tf.raw_ops.FakeQuantWithMinMaxVarsPerChannelGradient + + + + + + + + + +Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor` of type `float32`. +Backpropagated gradients above the FakeQuantWithMinMaxVars operation, +shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`. +
+`inputs` + +A `Tensor` of type `float32`. +Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape +same as `gradients`. +min, max: Quantization interval, floats of shape `[d]`. +
+`min` + +A `Tensor` of type `float32`. +
+`max` + +A `Tensor` of type `float32`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +The bitwidth of the quantization; between 2 and 16, inclusive. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +Whether to quantize into 2^num_bits - 1 distinct values. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max). +
+`backprops_wrt_input` + +A `Tensor` of type `float32`. +
+`backprop_wrt_min` + +A `Tensor` of type `float32`. +
+`backprop_wrt_max` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FakeQueue.md b/site/en/api_docs/python/tf/raw_ops/FakeQueue.md new file mode 100644 index 00000000000..1ecc2d2181a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FakeQueue.md @@ -0,0 +1,77 @@ +description: Deprecated. Do not use. + +
+ + +
+ +# tf.raw_ops.FakeQueue + + + + + + + + + +Deprecated. Do not use. + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Fill.md b/site/en/api_docs/python/tf/raw_ops/Fill.md new file mode 100644 index 00000000000..448f5ec9019 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Fill.md @@ -0,0 +1,111 @@ +description: Creates a tensor filled with a scalar value. + +
+ + +
+ +# tf.raw_ops.Fill + + + + + + + + + +Creates a tensor filled with a scalar value. + + + + + + + + + +This operation creates a tensor of shape `dims` and fills it with `value`. + +#### For example: + + + +``` +# Output tensor has shape [2, 3]. +fill([2, 3], 9) ==> [[9, 9, 9] + [9, 9, 9]] +``` + +tf.fill differs from tf.constant in a few ways: + +* tf.fill only supports scalar contents, whereas tf.constant supports + Tensor values. +* tf.fill creates an Op in the computation graph that constructs the actual + Tensor value at runtime. This is in contrast to tf.constant which embeds + the entire Tensor into the graph with a `Const` node. +* Because tf.fill evaluates at graph runtime, it supports dynamic shapes + based on other runtime Tensors, unlike tf.constant. + + + + + + + + + + + + + + + + +
+`dims` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D. Represents the shape of the output tensor. +
+`value` + +A `Tensor`. 0-D (scalar). Value to fill the returned tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `value`. +
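+
+A minimal sketch (illustrative only, assuming eager execution):
+
+```python
+import tensorflow as tf
+
+filled = tf.raw_ops.Fill(dims=[2, 3], value=9.0)
+print(filled.numpy())
+# [[9. 9. 9.]
+#  [9. 9. 9.]]
+```
+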
+ + + +#### Numpy Compatibility +Equivalent to np.full + diff --git a/site/en/api_docs/python/tf/raw_ops/FilterByLastComponentDataset.md b/site/en/api_docs/python/tf/raw_ops/FilterByLastComponentDataset.md new file mode 100644 index 00000000000..073bbf89cff --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FilterByLastComponentDataset.md @@ -0,0 +1,91 @@ +description: Creates a dataset containing elements of first component of input_dataset having true in the last component. + +
+ + +
+ +# tf.raw_ops.FilterByLastComponentDataset + + + + + + + + + +Creates a dataset containing elements of first component of `input_dataset` having true in the last component. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FilterDataset.md b/site/en/api_docs/python/tf/raw_ops/FilterDataset.md new file mode 100644 index 00000000000..02a27d42746 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FilterDataset.md @@ -0,0 +1,114 @@ +description: Creates a dataset containing elements of input_dataset matching predicate. + +
+ + +
+ +# tf.raw_ops.FilterDataset + + + + + + + + + +Creates a dataset containing elements of `input_dataset` matching `predicate`. + + + + + + + + + +The `predicate` function must return a scalar boolean and accept the +following arguments: + +* One tensor for each component of an element of `input_dataset`. +* One tensor for each value in `other_arguments`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `predicate`. +
+`predicate` + +A function decorated with @Defun. +A function returning a scalar boolean. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Fingerprint.md b/site/en/api_docs/python/tf/raw_ops/Fingerprint.md new file mode 100644 index 00000000000..b44ac3798a6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Fingerprint.md @@ -0,0 +1,116 @@ +description: Generates fingerprint values. + +
+ + +
+ +# tf.raw_ops.Fingerprint + + + + + + + + + +Generates fingerprint values. + + + + + + + + + +Generates fingerprint values of `data`. + +Fingerprint op considers the first dimension of `data` as the batch dimension, +and `output[i]` contains the fingerprint value generated from contents in +`data[i, ...]` for all `i`. + +Fingerprint op writes fingerprint values as byte arrays. For example, the +default method `farmhash64` generates a 64-bit fingerprint value at a time. +This 8-byte value is written out as an `uint8` array of size 8, in little-endian +order. + +For example, suppose that `data` has data type `DT_INT32` and shape (2, 3, 4), +and that the fingerprint method is `farmhash64`. In this case, the output shape +is (2, 8), where 2 is the batch dimension size of `data`, and 8 is the size of +each fingerprint value in bytes. `output[0, :]` is generated from 12 integers in +`data[0, :, :]` and similarly `output[1, :]` is generated from other 12 integers +in `data[1, :, :]`. + +Note that this op fingerprints the raw underlying buffer, and it does not +fingerprint Tensor's metadata such as data type and/or shape. For example, the +fingerprint values are invariant under reshapes and bitcasts as long as the +batch dimension remain the same: + +``` +Fingerprint(data) == Fingerprint(Reshape(data, ...)) +Fingerprint(data) == Fingerprint(Bitcast(data, ...)) +``` + +For string data, one should expect `Fingerprint(data) != +Fingerprint(ReduceJoin(data))` in general. + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must have rank 1 or higher. +
+`method` + +A `Tensor` of type `string`. +Fingerprint method used by this op. Currently available method is +`farmhash::fingerprint64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
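+
+A small sketch (illustrative only, assuming eager execution; "farmhash64" is the method
+string also used by the higher-level tf.fingerprint wrapper):
+
+```python
+import tensorflow as tf
+
+data = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)
+fp = tf.raw_ops.Fingerprint(data=data, method="farmhash64")
+print(fp.shape)  # (2, 8): one 8-byte fingerprint per element of the batch dimension
+```
+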
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordDataset.md b/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordDataset.md new file mode 100644 index 00000000000..090e24f850d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordDataset.md @@ -0,0 +1,113 @@ +description: Creates a dataset that emits the records from one or more binary files. + +
+ + +
+ +# tf.raw_ops.FixedLengthRecordDataset + + + + + + + + + +Creates a dataset that emits the records from one or more binary files. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A `Tensor` of type `string`. +A scalar or a vector containing the name(s) of the file(s) to be +read. +
+`header_bytes` + +A `Tensor` of type `int64`. +A scalar representing the number of bytes to skip at the +beginning of a file. +
+`record_bytes` + +A `Tensor` of type `int64`. +A scalar representing the number of bytes in each record. +
+`footer_bytes` + +A `Tensor` of type `int64`. +A scalar representing the number of bytes to skip at the end +of a file. +
+`buffer_size` + +A `Tensor` of type `int64`. +A scalar representing the number of bytes to buffer. Must be > 0. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
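+
+A minimal sketch (illustrative only, assuming eager execution; the binary file is created
+just for the example). The op returns a `variant` tensor; consuming it requires the
+dataset iterator ops, so the higher-level tf.data wrappers are usually preferred:
+
+```python
+import os
+import tempfile
+import tensorflow as tf
+
+# Write three 4-byte records into a scratch file.
+path = os.path.join(tempfile.mkdtemp(), "records.bin")
+with open(path, "wb") as f:
+  f.write(b"AAAABBBBCCCC")
+
+dataset_variant = tf.raw_ops.FixedLengthRecordDataset(
+    filenames=tf.constant([path]),
+    header_bytes=0,
+    record_bytes=4,
+    footer_bytes=0,
+    buffer_size=1024)
+print(dataset_variant.dtype)  # <dtype: 'variant'>
+```
+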
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordDatasetV2.md b/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordDatasetV2.md new file mode 100644 index 00000000000..d4781152fd9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordDatasetV2.md @@ -0,0 +1,111 @@ +
+ + +
+ +# tf.raw_ops.FixedLengthRecordDatasetV2 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A `Tensor` of type `string`. +
+`header_bytes` + +A `Tensor` of type `int64`. +
+`record_bytes` + +A `Tensor` of type `int64`. +
+`footer_bytes` + +A `Tensor` of type `int64`. +
+`buffer_size` + +A `Tensor` of type `int64`. +
+`compression_type` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordReader.md b/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordReader.md new file mode 100644 index 00000000000..eea0548e367 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordReader.md @@ -0,0 +1,121 @@ +description: A Reader that outputs fixed-length records from a file. + +
+ + +
+ +# tf.raw_ops.FixedLengthRecordReader + + + + + + + + + +A Reader that outputs fixed-length records from a file. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`record_bytes` + +An `int`. Number of bytes in the record. +
+`header_bytes` + +An optional `int`. Defaults to `0`. +Number of bytes in the header, defaults to 0. +
+`footer_bytes` + +An optional `int`. Defaults to `0`. +Number of bytes in the footer, defaults to 0. +
+`hop_bytes` + +An optional `int`. Defaults to `0`. +Number of bytes to hop before each read. Default of 0 means using +record_bytes. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordReaderV2.md b/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordReaderV2.md new file mode 100644 index 00000000000..4cfee9196bc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FixedLengthRecordReaderV2.md @@ -0,0 +1,130 @@ +description: A Reader that outputs fixed-length records from a file. + +
+ + +
+ +# tf.raw_ops.FixedLengthRecordReaderV2 + + + + + + + + + +A Reader that outputs fixed-length records from a file. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`record_bytes` + +An `int`. Number of bytes in the record. +
+`header_bytes` + +An optional `int`. Defaults to `0`. +Number of bytes in the header, defaults to 0. +
+`footer_bytes` + +An optional `int`. Defaults to `0`. +Number of bytes in the footer, defaults to 0. +
+`hop_bytes` + +An optional `int`. Defaults to `0`. +Number of bytes to hop before each read. Default of 0 means using +record_bytes. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`encoding` + +An optional `string`. Defaults to `""`. +The type of encoding for the file. Currently ZLIB and GZIP +are supported. Defaults to none. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FixedUnigramCandidateSampler.md b/site/en/api_docs/python/tf/raw_ops/FixedUnigramCandidateSampler.md new file mode 100644 index 00000000000..1b4c836399c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FixedUnigramCandidateSampler.md @@ -0,0 +1,232 @@ +description: Generates labels for candidate sampling with a learned unigram distribution. + +
+ + +
+ +# tf.raw_ops.FixedUnigramCandidateSampler + + + + + + + + + +Generates labels for candidate sampling with a learned unigram distribution. + + + + + + + + + +A unigram sampler could use a fixed unigram distribution read from a +file or passed in as an in-memory array instead of building up the distribution +from data on the fly. There is also an option to skew the distribution by +applying a distortion power to the weights. + +The vocabulary file should be in CSV-like format, with the last field +being the weight associated with the word. + +For each batch, this op picks a single set of sampled candidate labels. + +The advantages of sampling candidates per-batch are simplicity and the +possibility of efficient dense matrix multiplication. The disadvantage is that +the sampled candidates must be chosen independently of the context and of the +true labels. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64`. +A batch_size * num_true matrix, in which each row contains the +IDs of the num_true target_classes in the corresponding original label. +
+`num_true` + +An `int` that is `>= 1`. Number of true labels per context. +
+`num_sampled` + +An `int` that is `>= 1`. +Number of candidates to randomly sample. +
+`unique` + +A `bool`. +If unique is true, we sample with rejection, so that all sampled +candidates in a batch are unique. This requires some approximation to +estimate the post-rejection sampling probabilities. +
+`range_max` + +An `int` that is `>= 1`. +The sampler will sample integers from the interval [0, range_max). +
+`vocab_file` + +An optional `string`. Defaults to `""`. +Each valid line in this file (which should have a CSV-like format) +corresponds to a valid word ID. IDs are in sequential order, starting from +num_reserved_ids. The last entry in each line is expected to be a value +corresponding to the count or relative probability. Exactly one of vocab_file +and unigrams needs to be passed to this op. +
+`distortion` + +An optional `float`. Defaults to `1`. +The distortion is used to skew the unigram probability distribution. +Each weight is first raised to the distortion's power before adding to the +internal unigram distribution. As a result, distortion = 1.0 gives regular +unigram sampling (as defined by the vocab file), and distortion = 0.0 gives +a uniform distribution. +
+`num_reserved_ids` + +An optional `int`. Defaults to `0`. +Optionally some reserved IDs can be added in the range [0, +..., num_reserved_ids) by the users. One use case is that a special unknown +word token is used as ID 0. These IDs will have a sampling probability of 0. +
+`num_shards` + +An optional `int` that is `>= 1`. Defaults to `1`. +A sampler can be used to sample from a subset of the original range +in order to speed up the whole computation through parallelism. This parameter +(together with 'shard') indicates the number of partitions that are being +used in the overall computation. +
+`shard` + +An optional `int` that is `>= 0`. Defaults to `0`. +A sampler can be used to sample from a subset of the original range +in order to speed up the whole computation through parallelism. This parameter +(together with 'num_shards') indicates the particular partition number of a +sampler op, when partitioning is being used. +
+`unigrams` + +An optional list of `floats`. Defaults to `[]`. +A list of unigram counts or probabilities, one per ID in sequential +order. Exactly one of vocab_file and unigrams should be passed to this op. +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 is set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count). +
+`sampled_candidates` + +A `Tensor` of type `int64`. +
+`true_expected_count` + +A `Tensor` of type `float32`. +
+`sampled_expected_count` + +A `Tensor` of type `float32`. +
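+
+A minimal sketch using an in-memory `unigrams` table (illustrative only, assuming eager
+execution; the counts below are made up):
+
+```python
+import tensorflow as tf
+
+true_classes = tf.constant([[0], [3]], dtype=tf.int64)  # one true label per example
+sampled, true_expected, sampled_expected = tf.raw_ops.FixedUnigramCandidateSampler(
+    true_classes=true_classes,
+    num_true=1,
+    num_sampled=2,
+    unique=True,
+    range_max=5,
+    unigrams=[40.0, 30.0, 10.0, 10.0, 10.0],  # relative counts for IDs 0..4
+    seed=7)
+print(sampled.numpy())           # two candidate IDs drawn from [0, 5)
+print(sampled_expected.numpy())  # their expected counts under the unigram distribution
+```
+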
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FlatMapDataset.md b/site/en/api_docs/python/tf/raw_ops/FlatMapDataset.md new file mode 100644 index 00000000000..a3d43b21c06 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FlatMapDataset.md @@ -0,0 +1,111 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.FlatMapDataset + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + +Unlike MapDataset, the `f` in FlatMapDataset is expected to return a +Dataset variant, and FlatMapDataset will flatten successive results +into a single Dataset. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +
+`f` + +A function decorated with @Defun. +A function mapping elements of `input_dataset`, concatenated with +`other_arguments`, to a Dataset variant that contains elements matching +`output_types` and `output_shapes`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Floor.md b/site/en/api_docs/python/tf/raw_ops/Floor.md new file mode 100644 index 00000000000..7b9cb5c644b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Floor.md @@ -0,0 +1,77 @@ +description: Returns element-wise largest integer not greater than x. + +
+ + +
+ +# tf.raw_ops.Floor + + + + + + + + + +Returns element-wise largest integer not greater than x. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
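+
+A one-line sketch (illustrative only, assuming eager execution):
+
+```python
+import tensorflow as tf
+
+print(tf.raw_ops.Floor(x=tf.constant([-1.7, 0.2, 2.9])).numpy())  # [-2.  0.  2.]
+```
+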
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FloorDiv.md b/site/en/api_docs/python/tf/raw_ops/FloorDiv.md new file mode 100644 index 00000000000..7f0bd604982 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FloorDiv.md @@ -0,0 +1,86 @@ +description: Returns x // y element-wise. + +
+ + +
+ +# tf.raw_ops.FloorDiv + + + + + + + + + +Returns x // y element-wise. + + + + + + + + + +*NOTE*: `floor_div` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
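+
+A short sketch (illustrative only, assuming eager execution):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([7, -7])
+y = tf.constant([2, 2])
+print(tf.raw_ops.FloorDiv(x=x, y=y).numpy())  # [ 3 -4]: rounding toward negative infinity
+```
+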
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FloorMod.md b/site/en/api_docs/python/tf/raw_ops/FloorMod.md new file mode 100644 index 00000000000..4b141feb5bb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FloorMod.md @@ -0,0 +1,89 @@ +description: Returns element-wise remainder of division. When x < 0 xor y < 0 is + +
+ + +
+ +# tf.raw_ops.FloorMod + + + + + + + + + +Returns element-wise remainder of division. When `x < 0` xor `y < 0` is + + + + + + + + + +true, this follows Python semantics in that the result here is consistent +with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`. + +*NOTE*: math.floormod supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
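+
+A short sketch of the flooring identity `floor(x / y) * y + mod(x, y) = x` (illustrative
+only, assuming eager execution):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([7, -7])
+y = tf.constant([3, 3])
+print(tf.raw_ops.FloorMod(x=x, y=y).numpy())  # [1 2]
+# floor(-7 / 3) = -3 and (-3) * 3 + 2 = -7, so the remainder takes the sign of y.
+```
+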
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FlushSummaryWriter.md b/site/en/api_docs/python/tf/raw_ops/FlushSummaryWriter.md new file mode 100644 index 00000000000..d2c8667b380 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FlushSummaryWriter.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.FlushSummaryWriter + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/For.md b/site/en/api_docs/python/tf/raw_ops/For.md new file mode 100644 index 00000000000..14b41be8b4a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/For.md @@ -0,0 +1,111 @@ +description: python + +
+ + +
+ +# tf.raw_ops.For + + + + + + + + + +```python +output = input +for i in range(start, limit, delta): +    output = body(i, output) +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`start` + +A `Tensor` of type `int32`. The lower bound. An int32 +
+`limit` + +A `Tensor` of type `int32`. The upper bound. An int32 +
+`delta` + +A `Tensor` of type `int32`. The increment. An int32 +
+`input` + +A list of `Tensor` objects. +A list of input tensors whose types are T. +
+`body` + +A function decorated with @Defun. +A function that takes a list of tensors (int32, T) and returns another +list of tensors (T). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FractionalAvgPool.md b/site/en/api_docs/python/tf/raw_ops/FractionalAvgPool.md new file mode 100644 index 00000000000..e96b5e06fe5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FractionalAvgPool.md @@ -0,0 +1,172 @@ +description: Performs fractional average pooling on the input. + +
+ + +
+ +# tf.raw_ops.FractionalAvgPool + + + + + + + + + +Performs fractional average pooling on the input. + + + + + + + + + +Fractional average pooling is similar to Fractional max pooling in the pooling +region generation step. The only difference is that after pooling regions are +generated, a mean operation is performed instead of a max operation in each +pooling region. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`. +4-D with shape `[batch, height, width, channels]`. +
+`pooling_ratio` + +A list of `floats` that has length `>= 4`. +Pooling ratio for each dimension of `value`, currently only +supports row and col dimension and should be >= 1.0. For example, a valid +pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements +must be 1.0 because we don't allow pooling on batch and channels +dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions +respectively. +
+`pseudo_random` + +An optional `bool`. Defaults to `False`. +When set to True, generates the pooling sequence in a +pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin +Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for +difference between pseudorandom and random. +
+`overlapping` + +An optional `bool`. Defaults to `False`. +When set to True, it means when pooling, the values at the boundary +of adjacent pooling cells are used by both cells. For example: + +`index 0 1 2 3 4` + +`value 20 5 16 3 7` + +If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. +The result would be [41/3, 26/3] for fractional avg pooling. +
+`deterministic` + +An optional `bool`. Defaults to `False`. +When set to True, a fixed pooling region will be used when +iterating over a FractionalAvgPool node in the computation graph. Mainly used +in unit test to make FractionalAvgPool deterministic. +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +An second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, row_pooling_sequence, col_pooling_sequence). +
+`output` + +A `Tensor`. Has the same type as `value`. +
+`row_pooling_sequence` + +A `Tensor` of type `int64`. +
+`col_pooling_sequence` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FractionalAvgPoolGrad.md b/site/en/api_docs/python/tf/raw_ops/FractionalAvgPoolGrad.md new file mode 100644 index 00000000000..7054deba5c8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FractionalAvgPoolGrad.md @@ -0,0 +1,127 @@ +description: Computes gradient of the FractionalAvgPool function. + +
+ + +
+ +# tf.raw_ops.FractionalAvgPoolGrad + + + + + + + + + +Computes gradient of the FractionalAvgPool function. + + + + + + + + + +Unlike FractionalMaxPoolGrad, we don't need to find arg_max for +FractionalAvgPoolGrad, we just need to evenly back-propagate each element of +out_backprop to those indices that form the same pooling cell. Therefore, we +just need to know the shape of original input tensor, instead of the whole +tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`orig_input_tensor_shape` + +A `Tensor` of type `int64`. +Original input tensor shape for `fractional_avg_pool` +
+`out_backprop` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`. +4-D with shape `[batch, height, width, channels]`. Gradients +w.r.t. the output of `fractional_avg_pool`. +
+`row_pooling_sequence` + +A `Tensor` of type `int64`. +row pooling sequence, form pooling region with +col_pooling_sequence. +
+`col_pooling_sequence` + +A `Tensor` of type `int64`. +column pooling sequence, form pooling region with +row_pooling sequence. +
+`overlapping` + +An optional `bool`. Defaults to `False`. +When set to True, it means when pooling, the values at the boundary +of adjacent pooling cells are used by both cells. For example: + +`index 0 1 2 3 4` + +`value 20 5 16 3 7` + +If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. +The result would be [41/3, 26/3] for fractional avg pooling. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `out_backprop`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FractionalMaxPool.md b/site/en/api_docs/python/tf/raw_ops/FractionalMaxPool.md new file mode 100644 index 00000000000..24900bbff16 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FractionalMaxPool.md @@ -0,0 +1,196 @@ +description: Performs fractional max pooling on the input. + +
+ + +
+ +# tf.raw_ops.FractionalMaxPool + + + + + + + + + +Performs fractional max pooling on the input. + + + + + + + + + +Fractional max pooling is slightly different than regular max pooling. In +regular max pooling, you downsize an input set by taking the maximum value of +smaller N x N subsections of the set (often 2x2), and try to reduce the set by +a factor of N, where N is an integer. Fractional max pooling, as you might +expect from the word "fractional", means that the overall reduction ratio N +does not have to be an integer. + +The sizes of the pooling regions are generated randomly but are fairly uniform. +For example, let's look at the height dimension, and the constraints on the +list of rows that will be pool boundaries. + +First we define the following: + +1. input_row_length : the number of rows from the input set +2. output_row_length : which will be smaller than the input +3. alpha = input_row_length / output_row_length : our reduction ratio +4. K = floor(alpha) +5. row_pooling_sequence : this is the result list of pool boundary rows + +Then, row_pooling_sequence should satisfy: + +1. a[0] = 0 : the first value of the sequence is 0 +2. a[end] = input_row_length : the last value of the sequence is the size +3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size +4. length(row_pooling_sequence) = output_row_length+1 + +For more details on fractional max pooling, see this paper: +[Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`. +4-D with shape `[batch, height, width, channels]`. +
+`pooling_ratio` + +A list of `floats` that has length `>= 4`. +Pooling ratio for each dimension of `value`, currently only +supports row and col dimension and should be >= 1.0. For example, a valid +pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements +must be 1.0 because we don't allow pooling on batch and channels +dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions +respectively. +
+`pseudo_random` + +An optional `bool`. Defaults to `False`. +When set to True, generates the pooling sequence in a +pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin +Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for +difference between pseudorandom and random. +
+`overlapping` + +An optional `bool`. Defaults to `False`. +When set to True, it means when pooling, the values at the boundary +of adjacent pooling cells are used by both cells. For example: + +`index 0 1 2 3 4` + +`value 20 5 16 3 7` + +If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. +The result would be [20, 16] for fractional max pooling. +
+`deterministic` + +An optional `bool`. Defaults to `False`. +When set to True, a fixed pooling region will be used when +iterating over a FractionalMaxPool node in the computation graph. Mainly used +in unit test to make FractionalMaxPool deterministic. +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 is set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, row_pooling_sequence, col_pooling_sequence). +
+`output` + +A `Tensor`. Has the same type as `value`. +
+`row_pooling_sequence` + +A `Tensor` of type `int64`. +
+`col_pooling_sequence` + +A `Tensor` of type `int64`. +
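+
+A minimal sketch (illustrative only, assuming eager execution; the pooling ratios are
+arbitrary values >= 1.0, with 1.0 on the batch and channel dimensions as required):
+
+```python
+import tensorflow as tf
+
+x = tf.reshape(tf.range(1.0, 26.0), [1, 5, 5, 1])
+output, rows, cols = tf.raw_ops.FractionalMaxPool(
+    value=x,
+    pooling_ratio=[1.0, 1.6, 1.6, 1.0],
+    pseudo_random=True,
+    deterministic=True,
+    seed=13,
+    seed2=37)
+print(output.shape)  # (1, 3, 3, 1): 5 rows/cols reduced by a factor of ~1.6
+print(rows.numpy())  # the row boundaries that defined the pooling regions
+```
+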
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FractionalMaxPoolGrad.md b/site/en/api_docs/python/tf/raw_ops/FractionalMaxPoolGrad.md new file mode 100644 index 00000000000..df9dd4fc480 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FractionalMaxPoolGrad.md @@ -0,0 +1,130 @@ +description: Computes gradient of the FractionalMaxPool function. + +
+ + +
+ +# tf.raw_ops.FractionalMaxPoolGrad + + + + + + + + + +Computes gradient of the FractionalMaxPool function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`orig_input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`. +Original input for `fractional_max_pool` +
+`orig_output` + +A `Tensor`. Must have the same type as `orig_input`. +Original output for `fractional_max_pool` +
+`out_backprop` + +A `Tensor`. Must have the same type as `orig_input`. +4-D with shape `[batch, height, width, channels]`. Gradients +w.r.t. the output of `fractional_max_pool`. +
+`row_pooling_sequence` + +A `Tensor` of type `int64`. +row pooling sequence, form pooling region with +col_pooling_sequence. +
+`col_pooling_sequence` + +A `Tensor` of type `int64`. +column pooling sequence, form pooling region with +row_pooling sequence. +
+`overlapping` + +An optional `bool`. Defaults to `False`. +When set to True, it means when pooling, the values at the boundary +of adjacent pooling cells are used by both cells. For example: + +`index 0 1 2 3 4` + +`value 20 5 16 3 7` + +If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. +The result would be [20, 16] for fractional max pooling. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `orig_input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FresnelCos.md b/site/en/api_docs/python/tf/raw_ops/FresnelCos.md new file mode 100644 index 00000000000..aca9d576a9f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FresnelCos.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.FresnelCos + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FresnelSin.md b/site/en/api_docs/python/tf/raw_ops/FresnelSin.md new file mode 100644 index 00000000000..fb3930c0e08 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FresnelSin.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.FresnelSin + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FusedBatchNorm.md b/site/en/api_docs/python/tf/raw_ops/FusedBatchNorm.md new file mode 100644 index 00000000000..20c6cf3a56b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FusedBatchNorm.md @@ -0,0 +1,182 @@ +description: Batch normalization. + +
+ + +
+ +# tf.raw_ops.FusedBatchNorm + + + + + + + + + +Batch normalization. + + + + + + + + + +Note that the size of 4D Tensors are defined by either "NHWC" or "NCHW". +The size of 1D Tensors matches the dimension C of the 4D Tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`. +A 4D Tensor for input data. +
+`scale` + +A `Tensor`. Must have the same type as `x`. +A 1D Tensor for scaling factor, to scale the normalized x. +
+`offset` + +A `Tensor`. Must have the same type as `x`. +A 1D Tensor for offset, to shift to the normalized x. +
+`mean` + +A `Tensor`. Must have the same type as `x`. +A 1D Tensor for population mean. Used for inference only; +must be empty for training. +
+`variance` + +A `Tensor`. Must have the same type as `x`. +A 1D Tensor for population variance. Used for inference only; +must be empty for training. +
+`epsilon` + +An optional `float`. Defaults to `0.0001`. +A small float number added to the variance of x. +
+`exponential_avg_factor` + +An optional `float`. Defaults to `1`. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +The data format for x and y. Either "NHWC" (default) or "NCHW". +
+`is_training` + +An optional `bool`. Defaults to `True`. +A bool value to indicate the operation is for training (default) +or inference. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (y, batch_mean, batch_variance, reserve_space_1, reserve_space_2). +
+`y` + +A `Tensor`. Has the same type as `x`. +
+`batch_mean` + +A `Tensor`. Has the same type as `x`. +
+`batch_variance` + +A `Tensor`. Has the same type as `x`. +
+`reserve_space_1` + +A `Tensor`. Has the same type as `x`. +
+`reserve_space_2` + +A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FusedBatchNormGrad.md b/site/en/api_docs/python/tf/raw_ops/FusedBatchNormGrad.md new file mode 100644 index 00000000000..bb4674b6906 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FusedBatchNormGrad.md @@ -0,0 +1,181 @@ +description: Gradient for batch normalization. + +
+ + +
+ +# tf.raw_ops.FusedBatchNormGrad + + + + + + + + + +Gradient for batch normalization. + + + + + + + + + +Note that the size of 4D Tensors are defined by either "NHWC" or "NCHW". +The size of 1D Tensors matches the dimension C of the 4D Tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`y_backprop` + +A `Tensor`. Must be one of the following types: `float32`. +A 4D Tensor for the gradient with respect to y. +
+`x` + +A `Tensor`. Must have the same type as `y_backprop`. +A 4D Tensor for input data. +
+`scale` + +A `Tensor`. Must have the same type as `y_backprop`. +A 1D Tensor for scaling factor, to scale the normalized x. +
+`reserve_space_1` + +A `Tensor`. Must have the same type as `y_backprop`. +When is_training is True, a 1D Tensor for the computed batch +mean to be reused in gradient computation. When is_training is +False, a 1D Tensor for the population mean to be reused in both +1st and 2nd order gradient computation. +
+`reserve_space_2` + +A `Tensor`. Must have the same type as `y_backprop`. +When is_training is True, a 1D Tensor for the computed batch +variance (inverted variance in the cuDNN case) to be reused in +gradient computation. When is_training is False, a 1D Tensor +for the population variance to be reused in both 1st and 2nd +order gradient computation. +
+`epsilon` + +An optional `float`. Defaults to `0.0001`. +A small float number added to the variance of x. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +The data format for y_backprop, x, x_backprop. +Either "NHWC" (default) or "NCHW". +
+`is_training` + +An optional `bool`. Defaults to `True`. +A bool value to indicate the operation is for training (default) +or inference. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (x_backprop, scale_backprop, offset_backprop, reserve_space_3, reserve_space_4). +
+`x_backprop` + +A `Tensor`. Has the same type as `y_backprop`. +
+`scale_backprop` + +A `Tensor`. Has the same type as `y_backprop`. +
+`offset_backprop` + +A `Tensor`. Has the same type as `y_backprop`. +
+`reserve_space_3` + +A `Tensor`. Has the same type as `y_backprop`. +
+`reserve_space_4` + +A `Tensor`. Has the same type as `y_backprop`. +
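+
+#### Example:
+
+Gradients are normally produced by `tf.GradientTape` rather than by calling this kernel;
+purely as an illustrative sketch, the reserve-space outputs of `tf.raw_ops.FusedBatchNorm`
+can be fed back into it. All tensor values below are assumptions for demonstration.
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([2, 4, 4, 3])
+scale = tf.ones([3])
+offset = tf.zeros([3])
+empty = tf.constant([], tf.float32)
+
+# Forward pass in training mode; keep the reserve spaces for the backward pass.
+y, _, _, reserve_1, reserve_2 = tf.raw_ops.FusedBatchNorm(
+    x=x, scale=scale, offset=offset, mean=empty, variance=empty,
+    epsilon=0.001, is_training=True)
+
+dx, dscale, doffset, _, _ = tf.raw_ops.FusedBatchNormGrad(
+    y_backprop=tf.ones_like(y), x=x, scale=scale,
+    reserve_space_1=reserve_1, reserve_space_2=reserve_2,
+    epsilon=0.001, is_training=True)
+print(dx.shape, dscale.shape, doffset.shape)  # (2, 4, 4, 3) (3,) (3,)
+```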
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FusedBatchNormGradV2.md b/site/en/api_docs/python/tf/raw_ops/FusedBatchNormGradV2.md new file mode 100644 index 00000000000..b892a4494ab --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FusedBatchNormGradV2.md @@ -0,0 +1,181 @@ +description: Gradient for batch normalization. + +
+ + +
+ +# tf.raw_ops.FusedBatchNormGradV2 + + + + + + + + + +Gradient for batch normalization. + + + + + + + + + +Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". +The sizes of the 1D Tensors match the dimension C of the 4D Tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`y_backprop` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. +A 4D Tensor for the gradient with respect to y. +
+`x` + +A `Tensor`. Must have the same type as `y_backprop`. +A 4D Tensor for input data. +
+`scale` + +A `Tensor` of type `float32`. +A 1D Tensor for scaling factor, to scale the normalized x. +
+`reserve_space_1` + +A `Tensor`. Must be one of the following types: `float32`. +When is_training is True, a 1D Tensor for the computed batch +mean to be reused in gradient computation. When is_training is +False, a 1D Tensor for the population mean to be reused in both +1st and 2nd order gradient computation. +
+`reserve_space_2` + +A `Tensor`. Must have the same type as `reserve_space_1`. +When is_training is True, a 1D Tensor for the computed batch +variance (inverted variance in the cuDNN case) to be reused in +gradient computation. When is_training is False, a 1D Tensor +for the population variance to be reused in both 1st and 2nd +order gradient computation. +
+`epsilon` + +An optional `float`. Defaults to `0.0001`. +A small float number added to the variance of x. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +The data format for y_backprop, x, x_backprop. +Either "NHWC" (default) or "NCHW". +
+`is_training` + +An optional `bool`. Defaults to `True`. +A bool value to indicate the operation is for training (default) +or inference. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (x_backprop, scale_backprop, offset_backprop, reserve_space_3, reserve_space_4). +
+`x_backprop` + +A `Tensor`. Has the same type as `y_backprop`. +
+`scale_backprop` + +A `Tensor`. Has the same type as `reserve_space_1`. +
+`offset_backprop` + +A `Tensor`. Has the same type as `reserve_space_1`. +
+`reserve_space_3` + +A `Tensor`. Has the same type as `reserve_space_1`. +
+`reserve_space_4` + +A `Tensor`. Has the same type as `reserve_space_1`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FusedBatchNormGradV3.md b/site/en/api_docs/python/tf/raw_ops/FusedBatchNormGradV3.md new file mode 100644 index 00000000000..8ad1929e05c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FusedBatchNormGradV3.md @@ -0,0 +1,191 @@ +description: Gradient for batch normalization. + +
+ + +
+ +# tf.raw_ops.FusedBatchNormGradV3 + + + + + + + + + +Gradient for batch normalization. + + + + + + + + + +Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". +The sizes of the 1D Tensors match the dimension C of the 4D Tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`y_backprop` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. +A 4D Tensor for the gradient with respect to y. +
+`x` + +A `Tensor`. Must have the same type as `y_backprop`. +A 4D Tensor for input data. +
+`scale` + +A `Tensor` of type `float32`. +A 1D Tensor for scaling factor, to scale the normalized x. +
+`reserve_space_1` + +A `Tensor`. Must be one of the following types: `float32`. +When is_training is True, a 1D Tensor for the computed batch +mean to be reused in gradient computation. When is_training is +False, a 1D Tensor for the population mean to be reused in both +1st and 2nd order gradient computation. +
+`reserve_space_2` + +A `Tensor`. Must have the same type as `reserve_space_1`. +When is_training is True, a 1D Tensor for the computed batch +variance (inverted variance in the cuDNN case) to be reused in +gradient computation. When is_training is False, a 1D Tensor +for the population variance to be reused in both 1st and 2nd +order gradient computation. +
+`reserve_space_3` + +A `Tensor`. Must have the same type as `reserve_space_1`. +When is_training is True, a 1D Tensor for some intermediate results to be reused +in gradient computation. When is_training is False, a dummy empty Tensor will be +created. +
+`epsilon` + +An optional `float`. Defaults to `0.0001`. +A small float number added to the variance of x. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +The data format for y_backprop, x, x_backprop. +Either "NHWC" (default) or "NCHW". +
+`is_training` + +An optional `bool`. Defaults to `True`. +A bool value to indicate the operation is for training (default) +or inference. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (x_backprop, scale_backprop, offset_backprop, reserve_space_4, reserve_space_5). +
+`x_backprop` + +A `Tensor`. Has the same type as `y_backprop`. +
+`scale_backprop` + +A `Tensor`. Has the same type as `reserve_space_1`. +
+`offset_backprop` + +A `Tensor`. Has the same type as `reserve_space_1`. +
+`reserve_space_4` + +A `Tensor`. Has the same type as `reserve_space_1`. +
+`reserve_space_5` + +A `Tensor`. Has the same type as `reserve_space_1`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FusedBatchNormV2.md b/site/en/api_docs/python/tf/raw_ops/FusedBatchNormV2.md new file mode 100644 index 00000000000..34d8f02b9cf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FusedBatchNormV2.md @@ -0,0 +1,182 @@ +description: Batch normalization. + +
+ + +
+ +# tf.raw_ops.FusedBatchNormV2 + + + + + + + + + +Batch normalization. + + + + + + + + + +Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". +The sizes of the 1D Tensors match the dimension C of the 4D Tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. +A 4D Tensor for input data. +
+`scale` + +A `Tensor`. Must be one of the following types: `float32`. +A 1D Tensor for scaling factor, to scale the normalized x. +
+`offset` + +A `Tensor`. Must have the same type as `scale`. +A 1D Tensor for offset, to shift to the normalized x. +
+`mean` + +A `Tensor`. Must have the same type as `scale`. +A 1D Tensor for population mean. Used for inference only; +must be empty for training. +
+`variance` + +A `Tensor`. Must have the same type as `scale`. +A 1D Tensor for population variance. Used for inference only; +must be empty for training. +
+`epsilon` + +An optional `float`. Defaults to `0.0001`. +A small float number added to the variance of x. +
+`exponential_avg_factor` + +An optional `float`. Defaults to `1`. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +The data format for x and y. Either "NHWC" (default) or "NCHW". +
+`is_training` + +An optional `bool`. Defaults to `True`. +A bool value to indicate the operation is for training (default) +or inference. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (y, batch_mean, batch_variance, reserve_space_1, reserve_space_2). +
+`y` + +A `Tensor`. Has the same type as `x`. +
+`batch_mean` + +A `Tensor`. Has the same type as `scale`. +
+`batch_variance` + +A `Tensor`. Has the same type as `scale`. +
+`reserve_space_1` + +A `Tensor`. Has the same type as `scale`. +
+`reserve_space_2` + +A `Tensor`. Has the same type as `scale`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FusedBatchNormV3.md b/site/en/api_docs/python/tf/raw_ops/FusedBatchNormV3.md new file mode 100644 index 00000000000..956c2679465 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FusedBatchNormV3.md @@ -0,0 +1,189 @@ +description: Batch normalization. + +
+ + +
+ +# tf.raw_ops.FusedBatchNormV3 + + + + + + + + + +Batch normalization. + + + + + + + + + +Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". +The sizes of the 1D Tensors match the dimension C of the 4D Tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. +A 4D Tensor for input data. +
+`scale` + +A `Tensor`. Must be one of the following types: `float32`. +A 1D Tensor for scaling factor, to scale the normalized x. +
+`offset` + +A `Tensor`. Must have the same type as `scale`. +A 1D Tensor for offset, to shift to the normalized x. +
+`mean` + +A `Tensor`. Must have the same type as `scale`. +A 1D Tensor for population mean. Used for inference only; +must be empty for training. +
+`variance` + +A `Tensor`. Must have the same type as `scale`. +A 1D Tensor for population variance. Used for inference only; +must be empty for training. +
+`epsilon` + +An optional `float`. Defaults to `0.0001`. +A small float number added to the variance of x. +
+`exponential_avg_factor` + +An optional `float`. Defaults to `1`. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +The data format for x and y. Either "NHWC" (default) or "NCHW". +
+`is_training` + +An optional `bool`. Defaults to `True`. +A bool value to indicate the operation is for training (default) +or inference. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (y, batch_mean, batch_variance, reserve_space_1, reserve_space_2, reserve_space_3). +
+`y` + +A `Tensor`. Has the same type as `x`. +
+`batch_mean` + +A `Tensor`. Has the same type as `scale`. +
+`batch_variance` + +A `Tensor`. Has the same type as `scale`. +
+`reserve_space_1` + +A `Tensor`. Has the same type as `scale`. +
+`reserve_space_2` + +A `Tensor`. Has the same type as `scale`. +
+`reserve_space_3` + +A `Tensor`. Has the same type as `scale`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FusedPadConv2D.md b/site/en/api_docs/python/tf/raw_ops/FusedPadConv2D.md new file mode 100644 index 00000000000..f864f1de430 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FusedPadConv2D.md @@ -0,0 +1,130 @@ +description: Performs a padding as a preprocess during a convolution. + +
+ + +
+ +# tf.raw_ops.FusedPadConv2D + + + + + + + + + +Performs a padding as a preprocess during a convolution. + + + + + + + + + +Similar to FusedResizeAndPadConv2d, this op allows for an optimized +implementation where the spatial padding transformation stage is fused with the +im2col lookup, but in this case without the bilinear filtering required for +resizing. Fusing the padding prevents the need to write out the intermediate +results as whole tensors, reducing memory pressure, and we can get some latency +gains by merging the transformation calculations. +The data_format attribute for Conv2D isn't supported by this op, and 'NHWC' +order is used instead. +Internally this op uses a single per-graph scratch buffer, which means that it +will block if multiple versions are being run in parallel. This is because this +operator is primarily an optimization to minimize memory usage. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +4-D with shape `[batch, in_height, in_width, in_channels]`. +
+`paddings` + +A `Tensor` of type `int32`. +A two-column matrix specifying the padding sizes. The number of +rows must be the same as the rank of `input`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. 4-D with shape +`[filter_height, filter_width, in_channels, out_channels]`. +
+`mode` + +A `string` from: `"REFLECT", "SYMMETRIC"`. +
+`strides` + +A list of `ints`. +1-D of length 4. The stride of the sliding window for each dimension +of `input`. Must be in the same order as the dimension specified with format. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
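+
+#### Example:
+
+A minimal sketch with assumed shapes: a 5x5 single-channel image is reflect-padded by one
+pixel on each spatial side and then convolved with a 3x3 filter in a single fused call.
+
+```python
+import tensorflow as tf
+
+image = tf.random.normal([1, 5, 5, 1])                    # NHWC
+kernel = tf.random.normal([3, 3, 1, 2])                   # HWIO
+paddings = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])  # pad H and W only
+
+out = tf.raw_ops.FusedPadConv2D(
+    input=image, paddings=paddings, filter=kernel,
+    mode="REFLECT", strides=[1, 1, 1, 1], padding="VALID")
+print(out.shape)  # (1, 5, 5, 2)
+```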
+ diff --git a/site/en/api_docs/python/tf/raw_ops/FusedResizeAndPadConv2D.md b/site/en/api_docs/python/tf/raw_ops/FusedResizeAndPadConv2D.md new file mode 100644 index 00000000000..29215109c1e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/FusedResizeAndPadConv2D.md @@ -0,0 +1,148 @@ +description: Performs a resize and padding as a preprocess during a convolution. + +
+ + +
+ +# tf.raw_ops.FusedResizeAndPadConv2D + + + + + + + + + +Performs a resize and padding as a preprocess during a convolution. + + + + + + + + + +It's often possible to do spatial transformations more efficiently as part of +the packing stage of a convolution, so this op allows for an optimized +implementation where these stages are fused together. This prevents the need to +write out the intermediate results as whole tensors, reducing memory pressure, +and we can get some latency gains by merging the transformation calculations. +The data_format attribute for Conv2D isn't supported by this op, and defaults to +'NHWC' order. +Internally this op uses a single per-graph scratch buffer, which means that it +will block if multiple versions are being run in parallel. This is because this +operator is primarily an optimization to minimize memory usage. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +4-D with shape `[batch, in_height, in_width, in_channels]`. +
+`size` + +A `Tensor` of type `int32`. +A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The +new size for the images. +
+`paddings` + +A `Tensor` of type `int32`. +A two-column matrix specifying the padding sizes. The number of +rows must be the same as the rank of `input`. +
+`filter` + +A `Tensor`. Must have the same type as `input`. 4-D with shape +`[filter_height, filter_width, in_channels, out_channels]`. +
+`mode` + +A `string` from: `"REFLECT", "SYMMETRIC"`. +
+`strides` + +A list of `ints`. +1-D of length 4. The stride of the sliding window for each dimension +of `input`. Must be in the same order as the dimension specified with format. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`resize_align_corners` + +An optional `bool`. Defaults to `False`. +If true, the centers of the 4 corner pixels of the input and output tensors are +aligned, preserving the values at the corner pixels. Defaults to false. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
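+
+#### Example:
+
+A minimal sketch with assumed shapes: the input is bilinearly resized to 8x8,
+reflect-padded, and convolved in one fused call.
+
+```python
+import tensorflow as tf
+
+image = tf.random.normal([1, 5, 5, 1])
+kernel = tf.random.normal([3, 3, 1, 2])
+paddings = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])
+
+out = tf.raw_ops.FusedResizeAndPadConv2D(
+    input=image, size=[8, 8], paddings=paddings, filter=kernel,
+    mode="REFLECT", strides=[1, 1, 1, 1], padding="SAME")
+print(out.shape)  # (1, 10, 10, 2)
+```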
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GRUBlockCell.md b/site/en/api_docs/python/tf/raw_ops/GRUBlockCell.md new file mode 100644 index 00000000000..26a402d9a57 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GRUBlockCell.md @@ -0,0 +1,184 @@ +description: Computes the GRU cell forward propagation for 1 time step. + +
+ + +
+ +# tf.raw_ops.GRUBlockCell + + + + + + + + + +Computes the GRU cell forward propagation for 1 time step. + + + + + + + + + +Args + x: Input to the GRU cell. + h_prev: State input from the previous GRU cell. + w_ru: Weight matrix for the reset and update gate. + w_c: Weight matrix for the cell connection gate. + b_ru: Bias vector for the reset and update gate. + b_c: Bias vector for the cell connection gate. + +Returns + r: Output of the reset gate. + u: Output of the update gate. + c: Output of the cell connection gate. + h: Current state of the GRU cell. + +Note on notation of the variables: + +Concatenation of a and b is represented by a_b +Element-wise dot product of a and b is represented by ab +Element-wise dot product is represented by \circ +Matrix multiplication is represented by * + +Biases are initialized with : +`b_ru` - constant_initializer(1.0) +`b_c` - constant_initializer(0.0) + +This kernel op implements the following mathematical equations: + +``` +x_h_prev = [x, h_prev] + +[r_bar u_bar] = x_h_prev * w_ru + b_ru + +r = sigmoid(r_bar) +u = sigmoid(u_bar) + +h_prevr = h_prev \circ r + +x_h_prevr = [x h_prevr] + +c_bar = x_h_prevr * w_c + b_c +c = tanh(c_bar) + +h = (1-u) \circ c + u \circ h_prev +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`. +
+`h_prev` + +A `Tensor`. Must have the same type as `x`. +
+`w_ru` + +A `Tensor`. Must have the same type as `x`. +
+`w_c` + +A `Tensor`. Must have the same type as `x`. +
+`b_ru` + +A `Tensor`. Must have the same type as `x`. +
+`b_c` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (r, u, c, h). +
+`r` + +A `Tensor`. Has the same type as `x`. +
+`u` + +A `Tensor`. Has the same type as `x`. +
+`c` + +A `Tensor`. Has the same type as `x`. +
+`h` + +A `Tensor`. Has the same type as `x`. +
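+
+#### Example:
+
+A minimal sketch of one GRU step with assumed sizes (batch 2, 3 input features, 4 cell
+units); the weight and bias shapes follow the equations above.
+
+```python
+import tensorflow as tf
+
+batch, input_size, cell_size = 2, 3, 4
+x = tf.random.normal([batch, input_size])
+h_prev = tf.zeros([batch, cell_size])
+w_ru = tf.random.normal([input_size + cell_size, 2 * cell_size])
+w_c = tf.random.normal([input_size + cell_size, cell_size])
+b_ru = tf.ones([2 * cell_size])   # reset/update bias, initialized to 1.0 as noted above
+b_c = tf.zeros([cell_size])
+
+r, u, c, h = tf.raw_ops.GRUBlockCell(
+    x=x, h_prev=h_prev, w_ru=w_ru, w_c=w_c, b_ru=b_ru, b_c=b_c)
+print(h.shape)  # (2, 4)
+```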
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GRUBlockCellGrad.md b/site/en/api_docs/python/tf/raw_ops/GRUBlockCellGrad.md new file mode 100644 index 00000000000..96cf6f00042 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GRUBlockCellGrad.md @@ -0,0 +1,248 @@ +description: Computes the GRU cell back-propagation for 1 time step. + +
+ + +
+ +# tf.raw_ops.GRUBlockCellGrad + + + + + + + + + +Computes the GRU cell back-propagation for 1 time step. + + + + + + + + + +Args + x: Input to the GRU cell. + h_prev: State input from the previous GRU cell. + w_ru: Weight matrix for the reset and update gate. + w_c: Weight matrix for the cell connection gate. + b_ru: Bias vector for the reset and update gate. + b_c: Bias vector for the cell connection gate. + r: Output of the reset gate. + u: Output of the update gate. + c: Output of the cell connection gate. + d_h: Gradients of h_new with respect to the objective function. + +Returns + d_x: Gradients of x with respect to the objective function. + d_h_prev: Gradients of h_prev with respect to the objective function. + d_c_bar: Gradients of c_bar with respect to the objective function. + d_r_bar_u_bar: Gradients of r_bar & u_bar with respect to the objective function. + +This kernel op implements the following mathematical equations: + +Note on notation of the variables: + +Concatenation of a and b is represented by a_b +Element-wise dot product of a and b is represented by ab +Element-wise dot product is represented by \circ +Matrix multiplication is represented by * + +Additional notes for clarity: + +`w_ru` can be segmented into 4 different matrices. +``` +w_ru = [w_r_x w_u_x + w_r_h_prev w_u_h_prev] +``` +Similarly, `w_c` can be segmented into 2 different matrices. +``` +w_c = [w_c_x w_c_h_prevr] +``` +Same goes for biases. +``` +b_ru = [b_ru_x b_ru_h] +b_c = [b_c_x b_c_h] +``` +Another note on notation: +``` +d_x = d_x_component_1 + d_x_component_2 + +where d_x_component_1 = d_r_bar * w_r_x^T + d_u_bar * w_u_x^T +and d_x_component_2 = d_c_bar * w_c_x^T + +d_h_prev = d_h_prev_component_1 + d_h_prevr \circ r + d_h \circ u +where d_h_prev_component_1 = d_r_bar * w_r_h_prev^T + d_u_bar * w_u_h_prev^T +``` + +Mathematics behind the Gradients below: +``` +d_c_bar = d_h \circ (1-u) \circ (1-c \circ c) +d_u_bar = d_h \circ (h_prev-c) \circ u \circ (1-u) + +d_r_bar_u_bar = [d_r_bar d_u_bar] + +[d_x_component_1 d_h_prev_component_1] = d_r_bar_u_bar * w_ru^T + +[d_x_component_2 d_h_prevr] = d_c_bar * w_c^T + +d_x = d_x_component_1 + d_x_component_2 + +d_h_prev = d_h_prev_component_1 + d_h_prevr \circ r + d_h \circ u +``` +The following calculation is performed in the Python wrapper for the gradients +(not in the gradient kernel): +``` +d_w_ru = x_h_prev^T * d_r_bar_u_bar + +d_w_c = x_h_prevr^T * d_c_bar + +d_b_ru = sum of d_r_bar_u_bar along axis = 0 + +d_b_c = sum of d_c_bar along axis = 0 +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`. +
+`h_prev` + +A `Tensor`. Must have the same type as `x`. +
+`w_ru` + +A `Tensor`. Must have the same type as `x`. +
+`w_c` + +A `Tensor`. Must have the same type as `x`. +
+`b_ru` + +A `Tensor`. Must have the same type as `x`. +
+`b_c` + +A `Tensor`. Must have the same type as `x`. +
+`r` + +A `Tensor`. Must have the same type as `x`. +
+`u` + +A `Tensor`. Must have the same type as `x`. +
+`c` + +A `Tensor`. Must have the same type as `x`. +
+`d_h` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (d_x, d_h_prev, d_c_bar, d_r_bar_u_bar). +
+`d_x` + +A `Tensor`. Has the same type as `x`. +
+`d_h_prev` + +A `Tensor`. Has the same type as `x`. +
+`d_c_bar` + +A `Tensor`. Has the same type as `x`. +
+`d_r_bar_u_bar` + +A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Gather.md b/site/en/api_docs/python/tf/raw_ops/Gather.md new file mode 100644 index 00000000000..efe60b25116 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Gather.md @@ -0,0 +1,116 @@ +description: Gather slices from params according to indices. + +
+ + +
+ +# tf.raw_ops.Gather + + + + + + + + + +Gather slices from `params` according to `indices`. + + + + + + + + + +`indices` must be an integer tensor of any dimension (usually 0-D or 1-D). +Produces an output tensor with shape `indices.shape + params.shape[1:]` where: + +```python + # Scalar indices + output[:, ..., :] = params[indices, :, ... :] + + # Vector indices + output[i, :, ..., :] = params[indices[i], :, ... :] + + # Higher rank indices + output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :] +``` + +If `indices` is a permutation and `len(indices) == params.shape[0]` then +this operation will permute `params` accordingly. + +`validate_indices`: DEPRECATED. If this operation is assigned to CPU, values in +`indices` are always validated to be within range. If assigned to GPU, +out-of-bound indices result in safe but unspecified behavior, which may include +raising an error. + +
+ +
+ + + + + + + + + + + + + + + + + + + +
+`params` + +A `Tensor`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`validate_indices` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `params`. +
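+
+#### Example:
+
+A small illustrative sketch; the public `tf.gather` API provides the same functionality
+with an explicit `axis`.
+
+```python
+import tensorflow as tf
+
+params = tf.constant([[1, 2], [3, 4], [5, 6]])
+indices = tf.constant([2, 0])
+print(tf.raw_ops.Gather(params=params, indices=indices).numpy())
+# [[5 6]
+#  [1 2]]
+```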
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GatherNd.md b/site/en/api_docs/python/tf/raw_ops/GatherNd.md new file mode 100644 index 00000000000..7a4bbacbe7d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GatherNd.md @@ -0,0 +1,189 @@ +description: Gather slices from params into a Tensor with shape specified by indices. + +
+ + +
+ +# tf.raw_ops.GatherNd + + + + + + + + + +Gather slices from `params` into a Tensor with shape specified by `indices`. + + + + + + + + + +`indices` is a K-dimensional integer tensor, best thought of as a +(K-1)-dimensional tensor of indices into `params`, where each element defines a +slice of `params`: + + output[\\(i_0, ..., i_{K-2}\\)] = params[indices[\\(i_0, ..., i_{K-2}\\)]] + +Whereas in tf.gather `indices` defines slices into the `axis` +dimension of `params`, in tf.gather_nd, `indices` defines slices into the +first `N` dimensions of `params`, where `N = indices.shape[-1]`. + +The last dimension of `indices` can be at most the rank of +`params`: + + indices.shape[-1] <= params.rank + +The last dimension of `indices` corresponds to elements +(if `indices.shape[-1] == params.rank`) or slices +(if `indices.shape[-1] < params.rank`) along dimension `indices.shape[-1]` +of `params`. The output tensor has shape + + indices.shape[:-1] + params.shape[indices.shape[-1]:] + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, a 0 is stored in the +corresponding output value. + +Some examples below. + +Simple indexing into a matrix: + +```python + indices = [[0, 0], [1, 1]] + params = [['a', 'b'], ['c', 'd']] + output = ['a', 'd'] +``` + +Slice indexing into a matrix: + +```python + indices = [[1], [0]] + params = [['a', 'b'], ['c', 'd']] + output = [['c', 'd'], ['a', 'b']] +``` + +Indexing into a 3-tensor: + +```python + indices = [[1]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[['a1', 'b1'], ['c1', 'd1']]] + + + indices = [[0, 1], [1, 0]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [['c0', 'd0'], ['a1', 'b1']] + + + indices = [[0, 0, 1], [1, 0, 1]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = ['b0', 'b1'] +``` + +Batched indexing into a matrix: + +```python + indices = [[[0, 0]], [[0, 1]]] + params = [['a', 'b'], ['c', 'd']] + output = [['a'], ['b']] +``` + +Batched slice indexing into a matrix: + +```python + indices = [[[1]], [[0]]] + params = [['a', 'b'], ['c', 'd']] + output = [[['c', 'd']], [['a', 'b']]] +``` + +Batched indexing into a 3-tensor: + +```python + indices = [[[1]], [[0]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[[['a1', 'b1'], ['c1', 'd1']]], + [[['a0', 'b0'], ['c0', 'd0']]]] + + indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [[['c0', 'd0'], ['a1', 'b1']], + [['a0', 'b0'], ['c1', 'd1']]] + + + indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]] + params = [[['a0', 'b0'], ['c0', 'd0']], + [['a1', 'b1'], ['c1', 'd1']]] + output = [['b0', 'b1'], ['d0', 'c1']] +``` + +See also tf.gather and `tf.batch_gather`. + + + + + + + + + + + + + + + + +
+`params` + +A `Tensor`. The tensor from which to gather values. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `params`. +
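+
+#### Example:
+
+A runnable sketch of the first example above:
+
+```python
+import tensorflow as tf
+
+params = tf.constant([['a', 'b'], ['c', 'd']])
+indices = tf.constant([[0, 0], [1, 1]])
+print(tf.raw_ops.GatherNd(params=params, indices=indices).numpy())
+# [b'a' b'd']
+```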
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GatherV2.md b/site/en/api_docs/python/tf/raw_ops/GatherV2.md new file mode 100644 index 00000000000..39d2091846d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GatherV2.md @@ -0,0 +1,130 @@ +description: Gather slices from params axis axis according to indices. + +
+ + +
+ +# tf.raw_ops.GatherV2 + + + + + + + + + +Gather slices from `params` axis `axis` according to `indices`. + + + + + + + + + +`indices` must be an integer tensor of any dimension (usually 0-D or 1-D). +Produces an output tensor with shape `params.shape[:axis] + indices.shape + +params.shape[axis + 1:]` where: + +```python + # Scalar indices (output is rank(params) - 1). + output[a_0, ..., a_n, b_0, ..., b_n] = + params[a_0, ..., a_n, indices, b_0, ..., b_n] + + # Vector indices (output is rank(params)). + output[a_0, ..., a_n, i, b_0, ..., b_n] = + params[a_0, ..., a_n, indices[i], b_0, ..., b_n] + + # Higher rank indices (output is rank(params) + rank(indices) - 1). + output[a_0, ..., a_n, i, ..., j, b_0, ... b_n] = + params[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n] +``` + +
+ +
+ +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, a 0 is stored in the +corresponding output value. + +See also `tf.batch_gather` and tf.gather_nd. + + + + + + + + + + + + + + + + + + + + + + +
+`params` + +A `Tensor`. +The tensor from which to gather values. Must be at least rank +`axis + 1`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. Must be in range `[0, params.shape[axis])`. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The axis in `params` to gather `indices` from. Defaults to the first +dimension. Supports negative indexes. +
+`batch_dims` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `params`. +
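+
+#### Example:
+
+A minimal sketch gathering along `axis=1`; this is the op behind `tf.gather`.
+
+```python
+import tensorflow as tf
+
+params = tf.constant([[1, 2, 3],
+                      [4, 5, 6]])
+indices = tf.constant([2, 0])
+print(tf.raw_ops.GatherV2(params=params, indices=indices, axis=1).numpy())
+# [[3 1]
+#  [6 4]]
+```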
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GenerateBoundingBoxProposals.md b/site/en/api_docs/python/tf/raw_ops/GenerateBoundingBoxProposals.md new file mode 100644 index 00000000000..8f23da0b295 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GenerateBoundingBoxProposals.md @@ -0,0 +1,161 @@ +description: This op produces Regions of Interest (RoIs) from given bounding boxes (bbox_deltas) encoded with respect to anchors, according to eq. 2 in arXiv:1506.01497 + +
+ + +
+ +# tf.raw_ops.GenerateBoundingBoxProposals + + + + + + + + + +This op produces Regions of Interest (RoIs) from given bounding boxes (bbox_deltas) encoded with respect to anchors, according to eq. 2 in arXiv:1506.01497 + + + + + + + + + + The op selects top `pre_nms_topn` scoring boxes, decodes them with respect to anchors, + applies non-maximal suppression on overlapping boxes with higher than + `nms_threshold` intersection-over-union (iou) value, discarding boxes where the shorter + side is less than `min_size`. + Inputs: + `scores`: A 4D tensor of shape [Batch, Height, Width, Num Anchors] containing the scores per anchor at a given position + `bbox_deltas`: A tensor of shape [Batch, Height, Width, 4 x Num Anchors] of boxes encoded relative to each anchor + `anchors`: A 1D tensor of shape [4 x Num Anchors], representing the anchors. + Outputs: + `rois`: output RoIs, a 3D tensor of shape [Batch, post_nms_topn, 4], padded with 0 if fewer than post_nms_topn candidates are found. + `roi_probabilities`: probability scores of each RoI in `rois`, a 2D tensor of shape [Batch, post_nms_topn], padded with 0 if needed, sorted by scores. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`scores` + +A `Tensor` of type `float32`. +A 4-D float tensor of shape `[num_images, height, width, num_anchors]` containing the scores of the boxes for the given anchors; may be unsorted. +
+`bbox_deltas` + +A `Tensor` of type `float32`. +A 4-D float tensor of shape `[num_images, height, width, 4 x num_anchors]` encoding boxes with respect to each anchor. +Coordinates are given in the form [dy, dx, dh, dw]. +
+`image_info` + +A `Tensor` of type `float32`. +A 2-D float tensor of shape `[num_images, 5]` containing image information Height, Width, Scale. +
+`anchors` + +A `Tensor` of type `float32`. +A 2-D float tensor of shape `[num_anchors, 4]` describing the anchor boxes. Boxes are formatted in the form [y1, x1, y2, x2]. +
+`nms_threshold` + +A `Tensor` of type `float32`. +A scalar float tensor for non-maximal-suppression threshold. +
+`pre_nms_topn` + +A `Tensor` of type `int32`. +A scalar int tensor for the number of top scoring boxes to be used as input. +
+`min_size` + +A `Tensor` of type `float32`. +A scalar float tensor. Any box that has a smaller size than min_size will be discarded. +
+`post_nms_topn` + +An optional `int`. Defaults to `300`. +An integer. Maximum number of rois in the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (rois, roi_probabilities). +
+`rois` + +A `Tensor` of type `float32`. +
+`roi_probabilities` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GenerateVocabRemapping.md b/site/en/api_docs/python/tf/raw_ops/GenerateVocabRemapping.md new file mode 100644 index 00000000000..4616abdbb14 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GenerateVocabRemapping.md @@ -0,0 +1,152 @@ +description: Given a path to new and old vocabulary files, returns a remapping Tensor of + +
+ + +
+ +# tf.raw_ops.GenerateVocabRemapping + + + + + + + + + +Given a path to new and old vocabulary files, returns a remapping Tensor of + + + + + + + + + +length `num_new_vocab`, where `remapping[i]` contains the row number in the old +vocabulary that corresponds to row `i` in the new vocabulary (starting at line +`new_vocab_offset` and up to `num_new_vocab` entities), or `-1` if entry `i` +in the new vocabulary is not in the old vocabulary. The old vocabulary is +constrained to the first `old_vocab_size` entries if `old_vocab_size` is not the +default value of -1. + +`new_vocab_offset` enables +use in the partitioned variable case, and should generally be set through +examining partitioning info. The files should be text files, +with each line containing a single entity within the vocabulary. + +For example, with `new_vocab_file` a text file containing each of the following +elements on a single line: `[f0, f1, f2, f3]`, old_vocab_file = [f1, f0, f3], +`num_new_vocab = 3, new_vocab_offset = 1`, the returned remapping would be +`[0, -1, 2]`. + +The op also returns a count of how many entries in the new vocabulary +were present in the old vocabulary, which is used to calculate the number of +values to initialize in a weight matrix remapping. + +This functionality can be used to remap both row vocabularies (typically, +features) and column vocabularies (typically, classes) from TensorFlow +checkpoints. Note that the partitioning logic relies on contiguous vocabularies +corresponding to div-partitioned variables. Moreover, the underlying remapping +uses an IndexTable (as opposed to an inexact CuckooTable), so client code should +use the corresponding index_table_from_file() as the FeatureColumn framework +does (as opposed to tf.feature_to_id(), which uses a CuckooTable). + + + + + + + + + + + + + + + + + + + + + + + + +
+`new_vocab_file` + +A `Tensor` of type `string`. Path to the new vocab file. +
+`old_vocab_file` + +A `Tensor` of type `string`. Path to the old vocab file. +
+`new_vocab_offset` + +An `int` that is `>= 0`. +How many entries into the new vocab file to start reading. +
+`num_new_vocab` + +An `int` that is `>= 0`. +Number of entries in the new vocab file to remap. +
+`old_vocab_size` + +An optional `int` that is `>= -1`. Defaults to `-1`. +Number of entries in the old vocab file to consider. If -1, +use the entire old vocabulary. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (remapping, num_present). +
+`remapping` + +A `Tensor` of type `int64`. +
+`num_present` + +A `Tensor` of type `int32`. +
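+
+#### Example:
+
+A sketch reproducing the worked example in the description; the `/tmp` file paths are
+illustrative assumptions.
+
+```python
+import tensorflow as tf
+
+# Hypothetical vocab files matching the example above.
+with open("/tmp/new_vocab.txt", "w") as f:
+    f.write("f0\nf1\nf2\nf3\n")
+with open("/tmp/old_vocab.txt", "w") as f:
+    f.write("f1\nf0\nf3\n")
+
+remapping, num_present = tf.raw_ops.GenerateVocabRemapping(
+    new_vocab_file="/tmp/new_vocab.txt", old_vocab_file="/tmp/old_vocab.txt",
+    new_vocab_offset=1, num_new_vocab=3)
+print(remapping.numpy(), num_present.numpy())  # [ 0 -1  2] 2
+```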
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GeneratorDataset.md b/site/en/api_docs/python/tf/raw_ops/GeneratorDataset.md new file mode 100644 index 00000000000..4a8f3b033bf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GeneratorDataset.md @@ -0,0 +1,127 @@ +description: Creates a dataset that invokes a function to generate elements. + +
+ + +
+ +# tf.raw_ops.GeneratorDataset + + + + + + + + + +Creates a dataset that invokes a function to generate elements. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`init_func_other_args` + +A list of `Tensor` objects. +
+`next_func_other_args` + +A list of `Tensor` objects. +
+`finalize_func_other_args` + +A list of `Tensor` objects. +
+`init_func` + +A function decorated with @Defun. +
+`next_func` + +A function decorated with @Defun. +
+`finalize_func` + +A function decorated with @Defun. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
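+
+#### Example:
+
+This kernel is normally constructed for you by `tf.data.Dataset.from_generator` rather
+than called directly; a sketch of that public entry point:
+
+```python
+import tensorflow as tf
+
+def gen():
+    for i in range(3):
+        yield i
+
+ds = tf.data.Dataset.from_generator(gen, output_types=tf.int32)
+print(list(ds.as_numpy_iterator()))  # [0, 1, 2]
+```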
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GetSessionHandle.md b/site/en/api_docs/python/tf/raw_ops/GetSessionHandle.md new file mode 100644 index 00000000000..29c50c0276b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GetSessionHandle.md @@ -0,0 +1,77 @@ +description: Store the input tensor in the state of the current session. + +
+ + +
+ +# tf.raw_ops.GetSessionHandle + + + + + + + + + +Store the input tensor in the state of the current session. + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. The tensor to be stored. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GetSessionHandleV2.md b/site/en/api_docs/python/tf/raw_ops/GetSessionHandleV2.md new file mode 100644 index 00000000000..3f7cbdb8ce7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GetSessionHandleV2.md @@ -0,0 +1,77 @@ +description: Store the input tensor in the state of the current session. + +
+ + +
+ +# tf.raw_ops.GetSessionHandleV2 + + + + + + + + + +Store the input tensor in the state of the current session. + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. The tensor to be stored. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GetSessionTensor.md b/site/en/api_docs/python/tf/raw_ops/GetSessionTensor.md new file mode 100644 index 00000000000..8dc05c727d3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GetSessionTensor.md @@ -0,0 +1,85 @@ +description: Get the value of the tensor specified by its handle. + +
+ + +
+ +# tf.raw_ops.GetSessionTensor + + + + + + + + + +Get the value of the tensor specified by its handle. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +The handle for a tensor stored in the session state. +
+`dtype` + +A tf.DType. The type of the output value. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Greater.md b/site/en/api_docs/python/tf/raw_ops/Greater.md new file mode 100644 index 00000000000..84d2657a376 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Greater.md @@ -0,0 +1,100 @@ +description: Returns the truth value of (x > y) element-wise. + +
+ + +
+ +# tf.raw_ops.Greater + + + + + + + + + +Returns the truth value of (x > y) element-wise. + + + + + + + + + +*NOTE*: math.greater supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 2, 5]) +tf.math.greater(x, y) ==> [False, True, True] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.greater(x, y) ==> [False, False, True] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GreaterEqual.md b/site/en/api_docs/python/tf/raw_ops/GreaterEqual.md new file mode 100644 index 00000000000..f80ba267fa2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GreaterEqual.md @@ -0,0 +1,100 @@ +description: Returns the truth value of (x >= y) element-wise. + +
+ + +
+ +# tf.raw_ops.GreaterEqual + + + + + + + + + +Returns the truth value of (x >= y) element-wise. + + + + + + + + + +*NOTE*: math.greater_equal supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5, 2, 5, 10]) +tf.math.greater_equal(x, y) ==> [True, True, True, False] + +x = tf.constant([5, 4, 6, 7]) +y = tf.constant([5]) +tf.math.greater_equal(x, y) ==> [True, False, True, True] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GroupByReducerDataset.md b/site/en/api_docs/python/tf/raw_ops/GroupByReducerDataset.md new file mode 100644 index 00000000000..c1c7b2f45d3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GroupByReducerDataset.md @@ -0,0 +1,166 @@ +description: Creates a dataset that computes a group-by on input_dataset. + +
+ + +
+ +# tf.raw_ops.GroupByReducerDataset + + + + + + + + + +Creates a dataset that computes a group-by on `input_dataset`. + + + + + + + + + +Creates a dataset that computes a group-by on `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`key_func_other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `key_func`. +
+`init_func_other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `init_func`. +
+`reduce_func_other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `reduce_func`. +
+`finalize_func_other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `finalize_func`. +
+`key_func` + +A function decorated with @Defun. +A function mapping an element of `input_dataset`, concatenated +with `key_func_other_arguments` to a scalar value of type DT_INT64. +
+`init_func` + +A function decorated with @Defun. +A function mapping a key of type DT_INT64, concatenated with +`init_func_other_arguments` to the initial reducer state. +
+`reduce_func` + +A function decorated with @Defun. +A function mapping the current reducer state and an element of `input_dataset`, +concatenated with `reduce_func_other_arguments` to a new reducer state. +
+`finalize_func` + +A function decorated with @Defun. +A function mapping the final reducer state to an output element. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
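+
+#### Example:
+
+Rather than wiring up the function attributes by hand, this dataset is typically reached
+through `tf.data.experimental.group_by_reducer`; a sketch that sums elements by parity:
+
+```python
+import tensorflow as tf
+
+reducer = tf.data.experimental.Reducer(
+    init_func=lambda key: tf.constant(0, tf.int64),  # initial state for each key
+    reduce_func=lambda state, x: state + x,          # running sum
+    finalize_func=lambda state: state)               # emit the final sum
+
+ds = tf.data.Dataset.range(10).apply(
+    tf.data.experimental.group_by_reducer(key_func=lambda x: x % 2,
+                                          reducer=reducer))
+print(sorted(ds.as_numpy_iterator()))  # [20, 25]
+```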
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GroupByWindowDataset.md b/site/en/api_docs/python/tf/raw_ops/GroupByWindowDataset.md new file mode 100644 index 00000000000..376d6e44ce3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GroupByWindowDataset.md @@ -0,0 +1,138 @@ +description: Creates a dataset that computes a windowed group-by on input_dataset. + +
+ + +
+ +# tf.raw_ops.GroupByWindowDataset + + + + + + + + + +Creates a dataset that computes a windowed group-by on `input_dataset`. + + + + + + + + + +// + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`key_func_other_arguments` + +A list of `Tensor` objects. +
+`reduce_func_other_arguments` + +A list of `Tensor` objects. +
+`window_size_func_other_arguments` + +A list of `Tensor` objects. +
+`key_func` + +A function decorated with @Defun. +A function mapping an element of `input_dataset`, concatenated +with `key_func_other_arguments` to a scalar value of type DT_INT64. +
+`reduce_func` + +A function decorated with @Defun. +
+`window_size_func` + +A function decorated with @Defun. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
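+
+#### Example:
+
+Typically reached through `tf.data.experimental.group_by_window`; a sketch that batches
+elements by parity:
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(10).apply(
+    tf.data.experimental.group_by_window(
+        key_func=lambda x: x % 2,
+        reduce_func=lambda key, window: window.batch(5),
+        window_size=5))
+print([batch.tolist() for batch in ds.as_numpy_iterator()])
+# [[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]]
+```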
+ diff --git a/site/en/api_docs/python/tf/raw_ops/GuaranteeConst.md b/site/en/api_docs/python/tf/raw_ops/GuaranteeConst.md new file mode 100644 index 00000000000..2b6d91e27e0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/GuaranteeConst.md @@ -0,0 +1,83 @@ +description: Gives a guarantee to the TF runtime that the input tensor is a constant. + +
+ + +
+ +# tf.raw_ops.GuaranteeConst + + + + + + + + + +Gives a guarantee to the TF runtime that the input tensor is a constant. + + + + + + + + + +The runtime is then free to make optimizations based on this. + +Only accepts value typed tensors as inputs and rejects resource variable handles +as input. + +Returns the input tensor without modification. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/HSVToRGB.md b/site/en/api_docs/python/tf/raw_ops/HSVToRGB.md new file mode 100644 index 00000000000..3520662e7ae --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/HSVToRGB.md @@ -0,0 +1,83 @@ +description: Convert one or more images from HSV to RGB. + +
+ + +
+ +# tf.raw_ops.HSVToRGB + + + + + + + + + +Convert one or more images from HSV to RGB. + + + + + + + + + +Outputs a tensor of the same shape as the `images` tensor, containing the RGB +value of the pixels. The output is only well defined if the value in `images` +are in `[0,1]`. + +See `rgb_to_hsv` for a description of the HSV encoding. + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +1-D or higher rank. HSV data to convert. Last dimension must be size 3. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
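+
+#### Example:
+
+A small sketch converting two HSV pixels (hue 0 is red, hue 2/3 is blue):
+
+```python
+import tensorflow as tf
+
+hsv = tf.constant([[0.0, 1.0, 1.0],         # red
+                   [2.0 / 3.0, 1.0, 1.0]])  # blue
+rgb = tf.raw_ops.HSVToRGB(images=hsv)
+print(rgb.numpy().round(3))
+# [[1. 0. 0.]
+#  [0. 0. 1.]]
+```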
+ diff --git a/site/en/api_docs/python/tf/raw_ops/HashTable.md b/site/en/api_docs/python/tf/raw_ops/HashTable.md new file mode 100644 index 00000000000..4bc3dae65d5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/HashTable.md @@ -0,0 +1,115 @@ +description: Creates a non-initialized hash table. + +
+ + +
+ +# tf.raw_ops.HashTable + + + + + + + + + +Creates a non-initialized hash table. + + + + + + + + + +This op creates a hash table, specifying the type of its keys and values. +Before using the table you will have to initialize it. After initialization the +table will be immutable. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key_dtype` + +A tf.DType. Type of the table keys. +
+`value_dtype` + +A tf.DType. Type of the table values. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this table is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this table is shared under the given name across +multiple sessions. +
+`use_node_name_sharing` + +An optional `bool`. Defaults to `False`. +If true and shared_name is empty, the table is shared +using the node name. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/HashTableV2.md b/site/en/api_docs/python/tf/raw_ops/HashTableV2.md new file mode 100644 index 00000000000..9e558073f0a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/HashTableV2.md @@ -0,0 +1,115 @@ +description: Creates a non-initialized hash table. + +
+ + +
+ +# tf.raw_ops.HashTableV2 + + + + + + + + + +Creates a non-initialized hash table. + + + + + + + + + +This op creates a hash table, specifying the type of its keys and values. +Before using the table you will have to initialize it. After initialization the +table will be immutable. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key_dtype` + +A tf.DType. Type of the table keys. +
+`value_dtype` + +A tf.DType. Type of the table values. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this table is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this table is shared under the given name across +multiple sessions. +
+`use_node_name_sharing` + +An optional `bool`. Defaults to `False`. +If true and shared_name is empty, the table is shared +using the node name. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
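+
+#### Example:
+
+In user code this resource is usually created through `tf.lookup.StaticHashTable`, which
+wraps a hash-table op like this one; a sketch:
+
+```python
+import tensorflow as tf
+
+keys = tf.constant(["apple", "banana"])
+values = tf.constant([1, 2], dtype=tf.int64)
+table = tf.lookup.StaticHashTable(
+    tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1)
+print(table.lookup(tf.constant(["apple", "cherry"])).numpy())  # [ 1 -1]
+```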
+ diff --git a/site/en/api_docs/python/tf/raw_ops/HistogramFixedWidth.md b/site/en/api_docs/python/tf/raw_ops/HistogramFixedWidth.md new file mode 100644 index 00000000000..de395f47ecc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/HistogramFixedWidth.md @@ -0,0 +1,118 @@ +description: Return histogram of values. + +
+ + +
+ +# tf.raw_ops.HistogramFixedWidth + + + + + + + + + +Return histogram of values. + + + + + + + + + +Given the tensor `values`, this operation returns a rank 1 histogram counting +the number of entries in `values` that fall into every bin. The bins are +equal width and determined by the arguments `value_range` and `nbins`. + +```python +# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf) +nbins = 5 +value_range = [0.0, 5.0] +new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15] + +hist = tf.histogram_fixed_width(new_values, value_range, nbins=5) +hist.numpy()  # => [2, 1, 1, 0, 2] +``` + + + + + + + + + + + + + + + + + + + + + +
+`values` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`. +Numeric `Tensor`. +
+`value_range` + +A `Tensor`. Must have the same type as `values`. +Shape [2] `Tensor` of same `dtype` as `values`. +values <= value_range[0] will be mapped to hist[0], +values >= value_range[1] will be mapped to hist[-1]. +
+`nbins` + +A `Tensor` of type `int32`. +Scalar `int32 Tensor`. Number of histogram bins. +
+`dtype` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/HistogramSummary.md b/site/en/api_docs/python/tf/raw_ops/HistogramSummary.md new file mode 100644 index 00000000000..c8220ec82cf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/HistogramSummary.md @@ -0,0 +1,91 @@ +description: Outputs a Summary protocol buffer with a histogram. + +
+ + +
+ +# tf.raw_ops.HistogramSummary + + + + + + + + + +Outputs a `Summary` protocol buffer with a histogram. + + + + + + + + + +The generated +[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) +has one summary value containing a histogram for `values`. + +This op reports an `InvalidArgument` error if any value is not finite. + + + + + + + + + + + + + + + + +
+`tag` + +A `Tensor` of type `string`. +Scalar. Tag to use for the `Summary.Value`. +
+`values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +Any shape. Values to use to build the histogram. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IFFT.md b/site/en/api_docs/python/tf/raw_ops/IFFT.md new file mode 100644 index 00000000000..eb52530654e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IFFT.md @@ -0,0 +1,80 @@ +description: Inverse fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.IFFT + + + + + + + + + +Inverse fast Fourier transform. + + + + + + + + + +Computes the inverse 1-dimensional discrete Fourier transform over the +inner-most dimension of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
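+
+#### Example:
+
+A sketch showing that `IFFT` inverts `FFT` up to numerical error:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0, 3.0, 4.0], dtype=tf.complex64)
+spectrum = tf.raw_ops.FFT(input=x)
+recovered = tf.raw_ops.IFFT(input=spectrum)
+print(recovered.numpy().real.round(5))  # [1. 2. 3. 4.]
+```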
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IFFT2D.md b/site/en/api_docs/python/tf/raw_ops/IFFT2D.md new file mode 100644 index 00000000000..51d41363cec --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IFFT2D.md @@ -0,0 +1,80 @@ +description: Inverse 2D fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.IFFT2D + + + + + + + + + +Inverse 2D fast Fourier transform. + + + + + + + + + +Computes the inverse 2-dimensional discrete Fourier transform over the +inner-most 2 dimensions of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IFFT3D.md b/site/en/api_docs/python/tf/raw_ops/IFFT3D.md new file mode 100644 index 00000000000..1e701fd076e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IFFT3D.md @@ -0,0 +1,80 @@ +description: Inverse 3D fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.IFFT3D + + + + + + + + + +Inverse 3D fast Fourier transform. + + + + + + + + + +Computes the inverse 3-dimensional discrete Fourier transform over the +inner-most 3 dimensions of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IRFFT.md b/site/en/api_docs/python/tf/raw_ops/IRFFT.md new file mode 100644 index 00000000000..39f3a014fae --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IRFFT.md @@ -0,0 +1,106 @@ +description: Inverse real-valued fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.IRFFT + + + + + + + + + +Inverse real-valued fast Fourier transform. + + + + + + + + + +Computes the inverse 1-dimensional discrete Fourier transform of a real-valued +signal over the inner-most dimension of `input`. + +The inner-most dimension of `input` is assumed to be the result of `RFFT`: the +`fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If +`fft_length` is not provided, it is computed from the size of the inner-most +dimension of `input` (`fft_length = 2 * (inner - 1)`). If the FFT length used to +compute `input` is odd, it should be provided since it cannot be inferred +properly. + +Along the axis `IRFFT` is computed on, if `fft_length / 2 + 1` is smaller +than the corresponding dimension of `input`, the dimension is cropped. If it is +larger, the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [1]. The FFT length. +
+`Treal` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Treal`. +
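+
+#### Example:
+
+A sketch showing `RFFT` followed by `IRFFT` recovering a real signal; `fft_length` is
+passed explicitly:
+
+```python
+import tensorflow as tf
+
+signal = tf.constant([1.0, 2.0, 3.0, 4.0])
+spectrum = tf.raw_ops.RFFT(input=signal, fft_length=[4])
+recovered = tf.raw_ops.IRFFT(input=spectrum, fft_length=[4])
+print(recovered.numpy().round(5))  # [1. 2. 3. 4.]
+```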
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IRFFT2D.md b/site/en/api_docs/python/tf/raw_ops/IRFFT2D.md new file mode 100644 index 00000000000..13f262cc30f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IRFFT2D.md @@ -0,0 +1,107 @@ +description: Inverse 2D real-valued fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.IRFFT2D + + + + + + + + + +Inverse 2D real-valued fast Fourier transform. + + + + + + + + + +Computes the inverse 2-dimensional discrete Fourier transform of a real-valued +signal over the inner-most 2 dimensions of `input`. + +The inner-most 2 dimensions of `input` are assumed to be the result of `RFFT2D`: +The inner-most dimension contains the `fft_length / 2 + 1` unique components of +the DFT of a real-valued signal. If `fft_length` is not provided, it is computed +from the size of the inner-most 2 dimensions of `input`. If the FFT length used +to compute `input` is odd, it should be provided since it cannot be inferred +properly. + +Along each axis `IRFFT2D` is computed on, if `fft_length` (or +`fft_length / 2 + 1` for the inner-most dimension) is smaller than the +corresponding dimension of `input`, the dimension is cropped. If it is larger, +the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [2]. The FFT length for each dimension. +
+`Treal` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Treal`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IRFFT3D.md b/site/en/api_docs/python/tf/raw_ops/IRFFT3D.md new file mode 100644 index 00000000000..96be37a9603 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IRFFT3D.md @@ -0,0 +1,107 @@ +description: Inverse 3D real-valued fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.IRFFT3D + + + + + + + + + +Inverse 3D real-valued fast Fourier transform. + + + + + + + + + +Computes the inverse 3-dimensional discrete Fourier transform of a real-valued +signal over the inner-most 3 dimensions of `input`. + +The inner-most 3 dimensions of `input` are assumed to be the result of `RFFT3D`: +The inner-most dimension contains the `fft_length / 2 + 1` unique components of +the DFT of a real-valued signal. If `fft_length` is not provided, it is computed +from the size of the inner-most 3 dimensions of `input`. If the FFT length used +to compute `input` is odd, it should be provided since it cannot be inferred +properly. + +Along each axis `IRFFT3D` is computed on, if `fft_length` (or +`fft_length / 2 + 1` for the inner-most dimension) is smaller than the +corresponding dimension of `input`, the dimension is cropped. If it is larger, +the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [3]. The FFT length for each dimension. +
+`Treal` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Treal`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Identity.md b/site/en/api_docs/python/tf/raw_ops/Identity.md new file mode 100644 index 00000000000..66549dfe981 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Identity.md @@ -0,0 +1,77 @@ +description: Return a tensor with the same shape and contents as the input tensor or value. + +
+ + +
+ +# tf.raw_ops.Identity + + + + + + + + + +Return a tensor with the same shape and contents as the input tensor or value. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
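A minimal eager-mode example:

```python
import tensorflow as tf

x = tf.constant([[1, 2], [3, 4]])
y = tf.raw_ops.Identity(input=x)
# y has the same shape, dtype, and contents as x.
```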
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IdentityN.md b/site/en/api_docs/python/tf/raw_ops/IdentityN.md new file mode 100644 index 00000000000..0be7ab0ca42 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IdentityN.md @@ -0,0 +1,92 @@ +description: Returns a list of tensors with the same shapes and contents as the input + +
+ + +
+ +# tf.raw_ops.IdentityN + + + + + + + + + +Returns a list of tensors with the same shapes and contents as the input + + + + + + + + + +tensors. + +This op can be used to override the gradient for complicated functions. For +example, suppose y = f(x) and we wish to apply a custom function g for backprop +such that dx = g(dy). In Python, + +```python +with tf.get_default_graph().gradient_override_map( + {'IdentityN': 'OverrideGradientWithG'}): + y, _ = identity_n([f(x), x]) + +@tf.RegisterGradient('OverrideGradientWithG') +def ApplyG(op, dy, _): + return [None, g(dy)] # Do not backprop to f(x). +``` + + + + + + + + + + + + + +
+`input` + +A list of `Tensor` objects. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects. Has the same type as `input`. +
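The gradient-override snippet above is graph-mode (TF1-style) code. As a plain eager call, `IdentityN` simply passes a list of tensors through unchanged; a minimal sketch:

```python
import tensorflow as tf

a = tf.constant(1.0)
b = tf.constant([2, 3])
out = tf.raw_ops.IdentityN(input=[a, b])
# out is a list of two tensors with the same values and dtypes as [a, b].
```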
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IdentityReader.md b/site/en/api_docs/python/tf/raw_ops/IdentityReader.md new file mode 100644 index 00000000000..a244caf37b4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IdentityReader.md @@ -0,0 +1,90 @@ +description: A Reader that outputs the queued work as both the key and value. + +
+ + +
+ +# tf.raw_ops.IdentityReader + + + + + + + + + +A Reader that outputs the queued work as both the key and value. + + + + + + + + + +To use, enqueue strings in a Queue. ReaderRead will take the front +work string and output (work, work). + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IdentityReaderV2.md b/site/en/api_docs/python/tf/raw_ops/IdentityReaderV2.md new file mode 100644 index 00000000000..20e649990a4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IdentityReaderV2.md @@ -0,0 +1,90 @@ +description: A Reader that outputs the queued work as both the key and value. + +
+ + +
+ +# tf.raw_ops.IdentityReaderV2 + + + + + + + + + +A Reader that outputs the queued work as both the key and value. + + + + + + + + + +To use, enqueue strings in a Queue. ReaderRead will take the front +work string and output (work, work). + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/If.md b/site/en/api_docs/python/tf/raw_ops/If.md new file mode 100644 index 00000000000..cc80232c3f0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/If.md @@ -0,0 +1,122 @@ +description: output = cond ? then_branch(input) : else_branch(input) + +
+ + +
+ +# tf.raw_ops.If + + + + + + + + + +output = cond ? then_branch(input) : else_branch(input) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cond` + +A `Tensor`. +A Tensor. If the tensor is a scalar of non-boolean type, the +scalar is converted to a boolean according to the +following rule: if the scalar is a numerical value, non-zero means +`True` and zero means False; if the scalar is a string, non-empty +means `True` and empty means `False`. If the tensor is not a scalar, +being empty means False and being non-empty means True. +
+`input` + +A list of `Tensor` objects. A list of input tensors. +
+`Tout` + +A list of `tf.DTypes`. A list of output types. +
+`then_branch` + +A function decorated with @Defun. +A function that takes 'inputs' and returns a list of tensors, whose +types are the same as what else_branch returns. +
+`else_branch` + +A function decorated with @Defun. +A function that takes 'inputs' and returns a list of tensors, whose +types are the same as what then_branch returns. +
+`output_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Igamma.md b/site/en/api_docs/python/tf/raw_ops/Igamma.md new file mode 100644 index 00000000000..3049771a7e5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Igamma.md @@ -0,0 +1,97 @@ +description: Compute the lower regularized incomplete Gamma function P(a, x). + +
+ + +
+ +# tf.raw_ops.Igamma + + + + + + + + + +Compute the lower regularized incomplete Gamma function `P(a, x)`. + + + + + + + + + +The lower regularized incomplete Gamma function is defined as: + + +\\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\\) + +where + +\\(gamma(a, x) = \\int_{0}^{x} t^{a-1} exp(-t) dt\\) + +is the lower incomplete Gamma function. + +Note, above `Q(a, x)` (`Igammac`) is the upper regularized complete +Gamma function. + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`x` + +A `Tensor`. Must have the same type as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
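A small numeric sketch; together with `tf.raw_ops.Igammac`, the two regularized functions sum to one element-wise:

```python
import tensorflow as tf

a = tf.constant([1.0, 2.0, 4.0])
x = tf.constant([0.5, 2.0, 3.0])
p = tf.raw_ops.Igamma(a=a, x=x)    # lower regularized incomplete gamma P(a, x)
q = tf.raw_ops.Igammac(a=a, x=x)   # upper regularized incomplete gamma Q(a, x)
# p + q is approximately [1.0, 1.0, 1.0].
```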
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IgammaGradA.md b/site/en/api_docs/python/tf/raw_ops/IgammaGradA.md new file mode 100644 index 00000000000..272b2fcf241 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IgammaGradA.md @@ -0,0 +1,84 @@ +description: Computes the gradient of igamma(a, x) wrt a. + +
+ + +
+ +# tf.raw_ops.IgammaGradA + + + + + + + + + +Computes the gradient of `igamma(a, x)` wrt `a`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`x` + +A `Tensor`. Must have the same type as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Igammac.md b/site/en/api_docs/python/tf/raw_ops/Igammac.md new file mode 100644 index 00000000000..91b9b56ed36 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Igammac.md @@ -0,0 +1,96 @@ +description: Compute the upper regularized incomplete Gamma function Q(a, x). + +
+ + +
+ +# tf.raw_ops.Igammac + + + + + + + + + +Compute the upper regularized incomplete Gamma function `Q(a, x)`. + + + + + + + + + +The upper regularized incomplete Gamma function is defined as: + +\\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\\) + +where + +\\(Gamma(a, x) = \\int_{x}^{\infty} t^{a-1} exp(-t) dt\\) + +is the upper incomplete Gamma function. + +Note, above `P(a, x)` (`Igamma`) is the lower regularized complete +Gamma function. + + + + + + + + + + + + + + + + 
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`x` + +A `Tensor`. Must have the same type as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IgnoreErrorsDataset.md b/site/en/api_docs/python/tf/raw_ops/IgnoreErrorsDataset.md new file mode 100644 index 00000000000..7dc946eb504 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IgnoreErrorsDataset.md @@ -0,0 +1,91 @@ +description: Creates a dataset that contains the elements of input_dataset ignoring errors. + +
+ + +
+ +# tf.raw_ops.IgnoreErrorsDataset + + + + + + + + + +Creates a dataset that contains the elements of `input_dataset` ignoring errors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Imag.md b/site/en/api_docs/python/tf/raw_ops/Imag.md new file mode 100644 index 00000000000..5583034b4ae --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Imag.md @@ -0,0 +1,97 @@ +description: Returns the imaginary part of a complex number. + +
+ + +
+ +# tf.raw_ops.Imag + + + + + + + + + +Returns the imaginary part of a complex number. + + + + + + + + + +Given a tensor `input` of complex numbers, this operation returns a tensor of +type `float` that is the imaginary part of each element in `input`. All +elements in `input` must be complex numbers of the form \\(a + bj\\), where *a* +is the real part and *b* is the imaginary part returned by this operation. + +#### For example: + + + +``` +# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] +tf.imag(input) ==> [4.75, 5.75] +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +
+`Tout` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tout`. +
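The snippet above uses the legacy `tf.imag` alias; the equivalent raw-op call in eager mode looks roughly like this:

```python
import tensorflow as tf

x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j], dtype=tf.complex64)
tf.raw_ops.Imag(input=x)   # => [4.75, 5.75] as float32 (the default Tout)
```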
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ImageProjectiveTransformV2.md b/site/en/api_docs/python/tf/raw_ops/ImageProjectiveTransformV2.md new file mode 100644 index 00000000000..5f53436d24d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ImageProjectiveTransformV2.md @@ -0,0 +1,116 @@ +description: Applies the given transform to each of the images. + +
+ + +
+ +# tf.raw_ops.ImageProjectiveTransformV2 + + + + + + + + + +Applies the given transform to each of the images. + + + + + + + + + +If one row of `transforms` is `[a0, a1, a2, b0, b1, b2, c0, c1]`, then it maps +the *output* point `(x, y)` to a transformed *input* point +`(x', y') = ((a0 x + a1 y + a2) / k, (b0 x + b1 y + b2) / k)`, where +`k = c0 x + c1 y + 1`. If the transformed point lays outside of the input +image, the output pixel is set to 0. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `uint8`, `int32`, `int64`, `half`, `float32`, `float64`. +4-D with shape `[batch, height, width, channels]`. +
+`transforms` + +A `Tensor` of type `float32`. +2-D Tensor, `[batch, 8]` or `[1, 8]` matrix, where each row corresponds to a 3 x 3 +projective transformation matrix, with the last entry assumed to be 1. If there +is one row, the same transformation will be applied to all images. +
+`output_shape` + +A `Tensor` of type `int32`. +1-D Tensor [new_height, new_width]. +
+`interpolation` + +A `string`. Interpolation method, "NEAREST" or "BILINEAR". +
+`fill_mode` + +An optional `string`. Defaults to `"CONSTANT"`. +Fill mode, "REFLECT", "WRAP", or "CONSTANT". +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
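A minimal sketch using the identity transform `[1, 0, 0, 0, 1, 0, 0, 0]`, which should return the input image unchanged; the shapes below are illustrative only:

```python
import tensorflow as tf

images = tf.random.uniform([1, 4, 4, 1])                       # [batch, height, width, channels]
transforms = tf.constant([[1., 0., 0., 0., 1., 0., 0., 0.]])   # identity mapping, shape [1, 8]
out = tf.raw_ops.ImageProjectiveTransformV2(
    images=images,
    transforms=transforms,
    output_shape=tf.constant([4, 4], dtype=tf.int32),          # [new_height, new_width]
    interpolation="NEAREST")
# For the identity transform, out has the same shape and values as `images`.
```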
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ImageSummary.md b/site/en/api_docs/python/tf/raw_ops/ImageSummary.md new file mode 100644 index 00000000000..617b31d9911 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ImageSummary.md @@ -0,0 +1,138 @@ +description: Outputs a Summary protocol buffer with images. + +
+ + +
+ +# tf.raw_ops.ImageSummary + + + + + + + + + +Outputs a `Summary` protocol buffer with images. + + + + + + + + + +The summary has up to `max_images` summary values containing images. The +images are built from `tensor` which must be 4-D with shape `[batch_size, +height, width, channels]` and where `channels` can be: + +* 1: `tensor` is interpreted as Grayscale. +* 3: `tensor` is interpreted as RGB. +* 4: `tensor` is interpreted as RGBA. + +The images have the same number of channels as the input tensor. For float +input, the values are normalized one image at a time to fit in the range +`[0, 255]`. `uint8` values are unchanged. The op uses two different +normalization algorithms: + +* If the input values are all positive, they are rescaled so the largest one + is 255. + +* If any input value is negative, the values are shifted so input value 0.0 + is at 127. They are then rescaled so that either the smallest value is 0, + or the largest one is 255. + +The `tag` argument is a scalar `Tensor` of type `string`. It is used to +build the `tag` of the summary values: + +* If `max_images` is 1, the summary value tag is '*tag*/image'. +* If `max_images` is greater than 1, the summary value tags are + generated sequentially as '*tag*/image/0', '*tag*/image/1', etc. + +The `bad_color` argument is the color to use in the generated images for +non-finite input values. It is a `uint8` 1-D tensor of length `channels`. +Each element must be in the range `[0, 255]` (It represents the value of a +pixel in the output image). Non-finite values in the input tensor are +replaced by this tensor in the output image. The default value is the color +red. + + + + + + + + + + + + + + + + + + + + + + +
+`tag` + +A `Tensor` of type `string`. +Scalar. Used to build the `tag` attribute of the summary values. +
+`tensor` + +A `Tensor`. Must be one of the following types: `uint8`, `float32`, `half`, `float64`. +4-D of shape `[batch_size, height, width, channels]` where +`channels` is 1, 3, or 4. +
+`max_images` + +An optional `int` that is `>= 1`. Defaults to `3`. +Max number of batch elements to generate images for. +
+`bad_color` + +An optional `tf.TensorProto`. Defaults to `dtype: DT_UINT8 tensor_shape { dim { size: 4 } } int_val: 255 int_val: 0 int_val: 0 int_val: 255`. +Color to use for pixels with non-finite values. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ImmutableConst.md b/site/en/api_docs/python/tf/raw_ops/ImmutableConst.md new file mode 100644 index 00000000000..bedce2cb905 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ImmutableConst.md @@ -0,0 +1,94 @@ +description: Returns immutable tensor from memory region. + +
+ + +
+ +# tf.raw_ops.ImmutableConst + + + + + + + + + +Returns immutable tensor from memory region. + + + + + + + + + +The current implementation memmaps the tensor from a file. + + + + + + + + + + + + + + + + + + + +
+`dtype` + +A tf.DType. Type of the returned tensor. +
+`shape` + +A tf.TensorShape or list of `ints`. Shape of the returned tensor. +
+`memory_region_name` + +A `string`. +Name of readonly memory region used by the tensor, see +NewReadOnlyMemoryRegionFromFile in tensorflow::Env. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ImportEvent.md b/site/en/api_docs/python/tf/raw_ops/ImportEvent.md new file mode 100644 index 00000000000..391f5bebbe7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ImportEvent.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.ImportEvent + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`event` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InTopK.md b/site/en/api_docs/python/tf/raw_ops/InTopK.md new file mode 100644 index 00000000000..410a768024b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InTopK.md @@ -0,0 +1,107 @@ +description: Says whether the targets are in the top K predictions. + +
+ + +
+ +# tf.raw_ops.InTopK + + + + + + + + + +Says whether the targets are in the top `K` predictions. + + + + + + + + + +This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the +prediction for the target class is among the top `k` predictions among +all predictions for example `i`. Note that the behavior of `InTopK` differs +from the `TopK` op in its handling of ties; if multiple classes have the +same prediction value and straddle the top-`k` boundary, all of those +classes are considered to be in the top `k`. + +More formally, let + + \\(predictions_i\\) be the predictions for all classes for example `i`, + \\(targets_i\\) be the target class for example `i`, + \\(out_i\\) be the output for example `i`, + +$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$ + + + + + + + + + + + + + + + + + + + +
+`predictions` + +A `Tensor` of type `float32`. +A `batch_size` x `classes` tensor. +
+`targets` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A `batch_size` vector of class ids. +
+`k` + +An `int`. Number of top elements to look at for computing precision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
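A small eager-mode example:

```python
import tensorflow as tf

predictions = tf.constant([[0.1, 0.8, 0.1],
                           [0.3, 0.3, 0.4]])
targets = tf.constant([1, 0])   # target class id per example
tf.raw_ops.InTopK(predictions=predictions, targets=targets, k=1)
# => [True, False]: class 1 is the top prediction for example 0,
#    but class 0 is not the top prediction for example 1.
```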
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InTopKV2.md b/site/en/api_docs/python/tf/raw_ops/InTopKV2.md new file mode 100644 index 00000000000..3cb93079614 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InTopKV2.md @@ -0,0 +1,108 @@ +description: Says whether the targets are in the top K predictions. + +
+ + +
+ +# tf.raw_ops.InTopKV2 + + + + + + + + + +Says whether the targets are in the top `K` predictions. + + + + + + + + + +This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the +prediction for the target class is among the top `k` predictions among +all predictions for example `i`. Note that the behavior of `InTopK` differs +from the `TopK` op in its handling of ties; if multiple classes have the +same prediction value and straddle the top-`k` boundary, all of those +classes are considered to be in the top `k`. + +More formally, let + + \\(predictions_i\\) be the predictions for all classes for example `i`, + \\(targets_i\\) be the target class for example `i`, + \\(out_i\\) be the output for example `i`, + +$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$ + + + + + + + + + + + + + + + + + + + +
+`predictions` + +A `Tensor` of type `float32`. +A `batch_size` x `classes` tensor. +
+`targets` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A `batch_size` vector of class ids. +
+`k` + +A `Tensor`. Must have the same type as `targets`. +Number of top elements to look at for computing precision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InfeedDequeue.md b/site/en/api_docs/python/tf/raw_ops/InfeedDequeue.md new file mode 100644 index 00000000000..a78d6aeccb6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InfeedDequeue.md @@ -0,0 +1,84 @@ +description: A placeholder op for a value that will be fed into the computation. + +
+ + +
+ +# tf.raw_ops.InfeedDequeue + + + + + + + + + +A placeholder op for a value that will be fed into the computation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +A tf.DType. The type of elements in the tensor. +
+`shape` + +A tf.TensorShape or list of `ints`. The shape of the tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InfeedDequeueTuple.md b/site/en/api_docs/python/tf/raw_ops/InfeedDequeueTuple.md new file mode 100644 index 00000000000..60cac9b6f8f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InfeedDequeueTuple.md @@ -0,0 +1,86 @@ +description: Fetches multiple values from infeed as an XLA tuple. + +
+ + +
+ +# tf.raw_ops.InfeedDequeueTuple + + + + + + + + + +Fetches multiple values from infeed as an XLA tuple. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +The element types of each element in `outputs`. +
+`shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +The shapes of each tensor in `outputs`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `dtypes`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InfeedEnqueue.md b/site/en/api_docs/python/tf/raw_ops/InfeedEnqueue.md new file mode 100644 index 00000000000..16674059c2b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InfeedEnqueue.md @@ -0,0 +1,106 @@ +description: An op which feeds a single Tensor value into the computation. + +
+ + +
+ +# tf.raw_ops.InfeedEnqueue + + + + + + + + + +An op which feeds a single Tensor value into the computation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +A tensor that will be provided using the infeed mechanism. +
+`shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `[]`. +The shape of the tensor. +
+`layout` + +An optional list of `ints`. Defaults to `[]`. +A vector holding the requested layout in minor-to-major sequence. +If a layout attribute is passed, but its values are all -1, the layout will +be computed by the infeed operation. +
+`device_ordinal` + +An optional `int`. Defaults to `-1`. +The TPU device to use. This should be -1 when the Op +is running on a TPU device, and >= 0 when the Op is running on the CPU +device. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InfeedEnqueuePrelinearizedBuffer.md b/site/en/api_docs/python/tf/raw_ops/InfeedEnqueuePrelinearizedBuffer.md new file mode 100644 index 00000000000..20ce8a73c85 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InfeedEnqueuePrelinearizedBuffer.md @@ -0,0 +1,87 @@ +description: An op which enqueues prelinearized buffer into TPU infeed. + +
+ + +
+ +# tf.raw_ops.InfeedEnqueuePrelinearizedBuffer + + + + + + + + + +An op which enqueues prelinearized buffer into TPU infeed. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `variant`. +A variant tensor representing linearized output. +
+`device_ordinal` + +An optional `int`. Defaults to `-1`. +The TPU device to use. This should be -1 when the Op is running on a TPU device +and >= 0 when the Op is running on the CPU device. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InfeedEnqueueTuple.md b/site/en/api_docs/python/tf/raw_ops/InfeedEnqueueTuple.md new file mode 100644 index 00000000000..3b0aaea6285 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InfeedEnqueueTuple.md @@ -0,0 +1,107 @@ +description: Feeds multiple Tensor values into the computation as an XLA tuple. + +
+ + +
+ +# tf.raw_ops.InfeedEnqueueTuple + + + + + + + + + +Feeds multiple Tensor values into the computation as an XLA tuple. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of `Tensor` objects. +A list of tensors that will be provided using the infeed mechanism. +
+`shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +The shapes of each tensor in `inputs`. +
+`layouts` + +An optional list of `ints`. Defaults to `[]`. +A vector holding the requested layout in minor-to-major sequence for +all the tuple shapes, in the order the shapes appear in the "shapes" input. +The layout elements for a sub-shape can be set to -1, in which case the +corresponding layout will be computed by the infeed operation. +
+`device_ordinal` + +An optional `int`. Defaults to `-1`. +The TPU device to use. This should be -1 when the Op +is running on a TPU device, and >= 0 when the Op is running on the CPU +device. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InitializeTable.md b/site/en/api_docs/python/tf/raw_ops/InitializeTable.md new file mode 100644 index 00000000000..8ffef3a8d7f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InitializeTable.md @@ -0,0 +1,92 @@ +description: Table initializer that takes two tensors for keys and values respectively. + +
+ + +
+ +# tf.raw_ops.InitializeTable + + + + + + + + + +Table initializer that takes two tensors for keys and values respectively. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type mutable `string`. +Handle to a table which will be initialized. +
+`keys` + +A `Tensor`. Keys of type Tkey. +
+`values` + +A `Tensor`. Values of type Tval. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InitializeTableFromTextFile.md b/site/en/api_docs/python/tf/raw_ops/InitializeTableFromTextFile.md new file mode 100644 index 00000000000..edcb3a3cb31 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InitializeTableFromTextFile.md @@ -0,0 +1,129 @@ +description: Initializes a table from a text file. + +
+ + +
+ +# tf.raw_ops.InitializeTableFromTextFile + + + + + + + + + +Initializes a table from a text file. + + + + + + + + + +It inserts one key-value pair into the table for each line of the file. +The key and value is extracted from the whole line content, elements from the +split line based on `delimiter` or the line number (starting from zero). +Where to extract the key and value from a line is specified by `key_index` and +`value_index`. + +- A value of -1 means use the line number(starting from zero), expects `int64`. +- A value of -2 means use the whole line content, expects `string`. +- A value >= 0 means use the index (starting at zero) of the split line based + on `delimiter`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type mutable `string`. +Handle to a table which will be initialized. +
+`filename` + +A `Tensor` of type `string`. Filename of a vocabulary text file. +
+`key_index` + +An `int` that is `>= -2`. +Column index in a line to get the table `key` values from. +
+`value_index` + +An `int` that is `>= -2`. +Column index that represents information of a line to get the table +`value` values from. +
+`vocab_size` + +An optional `int` that is `>= -1`. Defaults to `-1`. +Number of elements of the file, use -1 if unknown. +
+`delimiter` + +An optional `string`. Defaults to `"\t"`. +Delimiter to separate fields in a line. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InitializeTableFromTextFileV2.md b/site/en/api_docs/python/tf/raw_ops/InitializeTableFromTextFileV2.md new file mode 100644 index 00000000000..464f981ef87 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InitializeTableFromTextFileV2.md @@ -0,0 +1,129 @@ +description: Initializes a table from a text file. + +
+ + +
+ +# tf.raw_ops.InitializeTableFromTextFileV2 + + + + + + + + + +Initializes a table from a text file. + + + + + + + + + +It inserts one key-value pair into the table for each line of the file. +The key and value is extracted from the whole line content, elements from the +split line based on `delimiter` or the line number (starting from zero). +Where to extract the key and value from a line is specified by `key_index` and +`value_index`. + +- A value of -1 means use the line number(starting from zero), expects `int64`. +- A value of -2 means use the whole line content, expects `string`. +- A value >= 0 means use the index (starting at zero) of the split line based + on `delimiter`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type `resource`. +Handle to a table which will be initialized. +
+`filename` + +A `Tensor` of type `string`. Filename of a vocabulary text file. +
+`key_index` + +An `int` that is `>= -2`. +Column index in a line to get the table `key` values from. +
+`value_index` + +An `int` that is `>= -2`. +Column index that represents information of a line to get the table +`value` values from. +
+`vocab_size` + +An optional `int` that is `>= -1`. Defaults to `-1`. +Number of elements of the file, use -1 if unknown. +
+`delimiter` + +An optional `string`. Defaults to `"\t"`. +Delimiter to separate fields in a line. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InitializeTableV2.md b/site/en/api_docs/python/tf/raw_ops/InitializeTableV2.md new file mode 100644 index 00000000000..ab7c0cdef32 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InitializeTableV2.md @@ -0,0 +1,92 @@ +description: Table initializer that takes two tensors for keys and values respectively. + +
+ + +
+ +# tf.raw_ops.InitializeTableV2 + + + + + + + + + +Table initializer that takes two tensors for keys and values respectively. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type `resource`. +Handle to a table which will be initialized. +
+`keys` + +A `Tensor`. Keys of type Tkey. +
+`values` + +A `Tensor`. Values of type Tval. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
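A rough sketch of pairing this op with `tf.raw_ops.HashTableV2` and `tf.raw_ops.LookupTableFindV2`; most code would use the higher-level `tf.lookup.StaticHashTable` instead:

```python
import tensorflow as tf

table = tf.raw_ops.HashTableV2(key_dtype=tf.string, value_dtype=tf.int64)
tf.raw_ops.InitializeTableV2(
    table_handle=table,
    keys=tf.constant(["apple", "banana"]),
    values=tf.constant([1, 2], dtype=tf.int64))
tf.raw_ops.LookupTableFindV2(
    table_handle=table,
    keys=tf.constant(["banana", "cherry"]),
    default_value=tf.constant(-1, dtype=tf.int64))
# => [2, -1]
```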
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InplaceAdd.md b/site/en/api_docs/python/tf/raw_ops/InplaceAdd.md new file mode 100644 index 00000000000..0bd05fa9b42 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InplaceAdd.md @@ -0,0 +1,94 @@ +description: Adds v into specified rows of x. + +
+ + +
+ +# tf.raw_ops.InplaceAdd + + + + + + + + + +Adds v into specified rows of x. + + + + + + + + + + Computes y = x; y[i, :] += v; return y. + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. A `Tensor` of type T. +
+`i` + +A `Tensor` of type `int32`. +A vector. Indices into the left-most dimension of `x`. +
+`v` + +A `Tensor`. Must have the same type as `x`. +A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
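A small example that adds ones into rows 0 and 2 of a zero matrix:

```python
import tensorflow as tf

x = tf.zeros([4, 3])
i = tf.constant([0, 2], dtype=tf.int32)   # row indices into x
v = tf.ones([2, 3])                       # same shape as x except the first dimension
y = tf.raw_ops.InplaceAdd(x=x, i=i, v=v)
# Rows 0 and 2 of y are all ones; rows 1 and 3 remain zeros.
```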
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InplaceSub.md b/site/en/api_docs/python/tf/raw_ops/InplaceSub.md new file mode 100644 index 00000000000..b41fc3379a1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InplaceSub.md @@ -0,0 +1,94 @@ +description: Subtracts v into specified rows of x. + +
+ + +
+ +# tf.raw_ops.InplaceSub + + + + + + + + + +Subtracts `v` into specified rows of `x`. + + + + + + + + + + Computes y = x; y[i, :] -= v; return y. + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. A `Tensor` of type T. +
+`i` + +A `Tensor` of type `int32`. +A vector. Indices into the left-most dimension of `x`. +
+`v` + +A `Tensor`. Must have the same type as `x`. +A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InplaceUpdate.md b/site/en/api_docs/python/tf/raw_ops/InplaceUpdate.md new file mode 100644 index 00000000000..ad76ce79d2f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InplaceUpdate.md @@ -0,0 +1,94 @@ +description: Updates specified rows with values in v. + +
+ + +
+ +# tf.raw_ops.InplaceUpdate + + + + + + + + + +Updates specified rows with values in `v`. + + + + + + + + + + Computes `x[i, :] = v; return x`. + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. A tensor of type `T`. +
+`i` + +A `Tensor` of type `int32`. +A vector. Indices into the left-most dimension of `x`. +
+`v` + +A `Tensor`. Must have the same type as `x`. +A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InterleaveDataset.md b/site/en/api_docs/python/tf/raw_ops/InterleaveDataset.md new file mode 100644 index 00000000000..a9b1dda4e40 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InterleaveDataset.md @@ -0,0 +1,128 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.InterleaveDataset + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + +Unlike MapDataset, the `f` in InterleaveDataset is expected to return +a Dataset variant, and InterleaveDataset will flatten successive +results into a single Dataset. Unlike FlatMapDataset, +InterleaveDataset will interleave sequences of up to `block_length` +consecutive elements from `cycle_length` input elements. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +
+`cycle_length` + +A `Tensor` of type `int64`. +
+`block_length` + +A `Tensor` of type `int64`. +
+`f` + +A function decorated with @Defun. +A function mapping elements of `input_dataset`, concatenated with +`other_arguments`, to a Dataset variant that contains elements matching +`output_types` and `output_shapes`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Inv.md b/site/en/api_docs/python/tf/raw_ops/Inv.md new file mode 100644 index 00000000000..1fe2318af5d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Inv.md @@ -0,0 +1,78 @@ +description: Computes the reciprocal of x element-wise. + +
+ + +
+ +# tf.raw_ops.Inv + + + + + + + + + +Computes the reciprocal of x element-wise. + + + + + + + + + +I.e., \\(y = 1 / x\\). + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
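A minimal eager-mode example:

```python
import tensorflow as tf

x = tf.constant([2.0, 4.0, 0.5])
tf.raw_ops.Inv(x=x)   # => [0.5, 0.25, 2.0]
```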
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InvGrad.md b/site/en/api_docs/python/tf/raw_ops/InvGrad.md new file mode 100644 index 00000000000..76682fd2fb6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InvGrad.md @@ -0,0 +1,86 @@ +description: Computes the gradient for the inverse of x wrt its input. + +
+ + +
+ +# tf.raw_ops.InvGrad + + + + + + + + + +Computes the gradient for the inverse of `x` wrt its input. + + + + + + + + + +Specifically, `grad = -dy * y*y`, where `y = 1/x`, and `dy` +is the corresponding input gradient. + + + + + + + + + + + + + + + + +
+`y` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`dy` + +A `Tensor`. Must have the same type as `y`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `y`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Invert.md b/site/en/api_docs/python/tf/raw_ops/Invert.md new file mode 100644 index 00000000000..e4321b42306 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Invert.md @@ -0,0 +1,118 @@ +description: Invert (flip) each bit of supported types; for example, type uint8 value 01010101 becomes 10101010. + +
+ + +
+ +# tf.raw_ops.Invert + + + + + + + + + +Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010. + + + + + + + + + +Flip each bit of supported types. For example, type `int8` (decimal 2) binary 00000010 becomes (decimal -3) binary 11111101. +This operation is performed on each element of the tensor argument `x`. + +#### Example: + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops + +# flip 2 (00000010) to -3 (11111101) +tf.assert_equal(-3, bitwise_ops.invert(2)) + +dtype_list = [dtypes.int8, dtypes.int16, dtypes.int32, dtypes.int64, + dtypes.uint8, dtypes.uint16, dtypes.uint32, dtypes.uint64] + +inputs = [0, 5, 3, 14] +for dtype in dtype_list: + # Because of issues with negative numbers, let's test this indirectly. + # 1. invert(a) and a = 0 + # 2. invert(a) or a = invert(0) + input_tensor = tf.constant([0, 5, 3, 14], dtype=dtype) + not_a_and_a, not_a_or_a, not_0 = [bitwise_ops.bitwise_and( + input_tensor, bitwise_ops.invert(input_tensor)), + bitwise_ops.bitwise_or( + input_tensor, bitwise_ops.invert(input_tensor)), + bitwise_ops.invert( + tf.constant(0, dtype=dtype))] + + expected = tf.constant([0, 0, 0, 0], dtype=tf.float32) + tf.assert_equal(tf.cast(not_a_and_a, tf.float32), expected) + + expected = tf.cast([not_0] * 4, tf.float32) + tf.assert_equal(tf.cast(not_a_or_a, tf.float32), expected) + + # For unsigned dtypes let's also check the result directly. + if dtype.is_unsigned: + inverted = bitwise_ops.invert(input_tensor) + expected = tf.constant([dtype.max - x for x in inputs], dtype=tf.float32) + tf.assert_equal(tf.cast(inverted, tf.float32), tf.cast(expected, tf.float32)) +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/InvertPermutation.md b/site/en/api_docs/python/tf/raw_ops/InvertPermutation.md new file mode 100644 index 00000000000..16ce237fa4a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/InvertPermutation.md @@ -0,0 +1,94 @@ +description: Computes the inverse permutation of a tensor. + +
+ + +
+ +# tf.raw_ops.InvertPermutation + + + + + + + + + +Computes the inverse permutation of a tensor. + + + + + + + + + +This operation computes the inverse of an index permutation. It takes a 1-D +integer tensor `x`, which represents the indices of a zero-based array, and +swaps each value with its index position. In other words, for an output tensor +`y` and an input tensor `x`, this operation computes the following: + +`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]` + +The values must include 0. There can be no duplicate values or negative values. + +#### For example: + + + +``` +# tensor `x` is [3, 4, 0, 2, 1] +invert_permutation(x) ==> [2, 4, 3, 0, 1] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
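The example above as a concrete eager call:

```python
import tensorflow as tf

x = tf.constant([3, 4, 0, 2, 1])
tf.raw_ops.InvertPermutation(x=x)   # => [2, 4, 3, 0, 1]
```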
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IsBoostedTreesEnsembleInitialized.md b/site/en/api_docs/python/tf/raw_ops/IsBoostedTreesEnsembleInitialized.md new file mode 100644 index 00000000000..62415629950 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IsBoostedTreesEnsembleInitialized.md @@ -0,0 +1,78 @@ +description: Checks whether a tree ensemble has been initialized. + +
+ + +
+ +# tf.raw_ops.IsBoostedTreesEnsembleInitialized + + + + + + + + + +Checks whether a tree ensemble has been initialized. + + + + + + + + + + + + + + + + + + + + + + +
+`tree_ensemble_handle` + +A `Tensor` of type `resource`. +Handle to the tree ensemble resource. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IsBoostedTreesQuantileStreamResourceInitialized.md b/site/en/api_docs/python/tf/raw_ops/IsBoostedTreesQuantileStreamResourceInitialized.md new file mode 100644 index 00000000000..7f67b5e2c45 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IsBoostedTreesQuantileStreamResourceInitialized.md @@ -0,0 +1,79 @@ +description: Checks whether a quantile stream has been initialized. + +
+ + +
+ +# tf.raw_ops.IsBoostedTreesQuantileStreamResourceInitialized + + + + + + + + + +Checks whether a quantile stream has been initialized. + + + + + + + + + +An Op that checks if quantile stream resource is initialized. + + + + + + + + + + + + + +
+`quantile_stream_resource_handle` + +A `Tensor` of type `resource`. +resource; The reference to quantile stream resource handle. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IsFinite.md b/site/en/api_docs/python/tf/raw_ops/IsFinite.md new file mode 100644 index 00000000000..fc8fddc8a81 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IsFinite.md @@ -0,0 +1,92 @@ +description: Returns which elements of x are finite. + +
+ + +
+ +# tf.raw_ops.IsFinite + + + + + + + + + +Returns which elements of x are finite. + + + + + + + + + + + +#### Example: + + + +```python +x = tf.constant([5.0, 4.8, 6.8, np.inf, np.nan]) +tf.math.is_finite(x) ==> [True, True, True, False, False] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ + + +#### Numpy Compatibility +Equivalent to np.isfinite + diff --git a/site/en/api_docs/python/tf/raw_ops/IsInf.md b/site/en/api_docs/python/tf/raw_ops/IsInf.md new file mode 100644 index 00000000000..e6a5df6dfa8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IsInf.md @@ -0,0 +1,92 @@ +description: Returns which elements of x are Inf. + +
+ + +
+ +# tf.raw_ops.IsInf + + + + + + + + + +Returns which elements of x are Inf. + + + + + + + + + + + +#### Example: + + + +```python +x = tf.constant([5.0, np.inf, 6.8, np.inf]) +tf.math.is_inf(x) ==> [False, True, False, True] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ + + +#### Numpy Compatibility +Equivalent to np.isinf + diff --git a/site/en/api_docs/python/tf/raw_ops/IsNan.md b/site/en/api_docs/python/tf/raw_ops/IsNan.md new file mode 100644 index 00000000000..5187db4e25f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IsNan.md @@ -0,0 +1,92 @@ +description: Returns which elements of x are NaN. + +
+ + +
+ +# tf.raw_ops.IsNan + + + + + + + + + +Returns which elements of x are NaN. + + + + + + + + + + + +#### Example: + + + +```python +x = tf.constant([5.0, np.nan, 6.8, np.nan, np.inf]) +tf.math.is_nan(x) ==> [False, True, False, True, False] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ + + +#### Numpy Compatibility +Equivalent to np.isnan + diff --git a/site/en/api_docs/python/tf/raw_ops/IsVariableInitialized.md b/site/en/api_docs/python/tf/raw_ops/IsVariableInitialized.md new file mode 100644 index 00000000000..b9de365ce6a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IsVariableInitialized.md @@ -0,0 +1,79 @@ +description: Checks whether a tensor has been initialized. + +
+ + +
+ +# tf.raw_ops.IsVariableInitialized + + + + + + + + + +Checks whether a tensor has been initialized. + + + + + + + + + +Outputs boolean scalar indicating whether the tensor has been initialized. + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. +Should be from a `Variable` node. May be uninitialized. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Iterator.md b/site/en/api_docs/python/tf/raw_ops/Iterator.md new file mode 100644 index 00000000000..72843b89a6e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Iterator.md @@ -0,0 +1,98 @@ +description: A container for an iterator resource. + +
+ + +
+ +# tf.raw_ops.Iterator + + + + + + + + + +A container for an iterator resource. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shared_name` + +A `string`. +
+`container` + +A `string`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IteratorFromStringHandle.md b/site/en/api_docs/python/tf/raw_ops/IteratorFromStringHandle.md new file mode 100644 index 00000000000..3318e686000 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IteratorFromStringHandle.md @@ -0,0 +1,96 @@ +description: Converts the given string representing a handle to an iterator to a resource. + +
+ + +
+ +# tf.raw_ops.IteratorFromStringHandle + + + + + + + + + +Converts the given string representing a handle to an iterator to a resource. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`string_handle` + +A `Tensor` of type `string`. +A string representation of the given handle. +
+`output_types` + +An optional list of `tf.DTypes`. Defaults to `[]`. +If specified, defines the type of each tuple component in an +element produced by the resulting iterator. +
+`output_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +If specified, defines the shape of each tuple component in an +element produced by the resulting iterator. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IteratorFromStringHandleV2.md b/site/en/api_docs/python/tf/raw_ops/IteratorFromStringHandleV2.md new file mode 100644 index 00000000000..4980c7dca20 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IteratorFromStringHandleV2.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.IteratorFromStringHandleV2 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`string_handle` + +A `Tensor` of type `string`. +
+`output_types` + +An optional list of `tf.DTypes`. Defaults to `[]`. +
+`output_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IteratorGetDevice.md b/site/en/api_docs/python/tf/raw_ops/IteratorGetDevice.md new file mode 100644 index 00000000000..b8e48b68731 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IteratorGetDevice.md @@ -0,0 +1,77 @@ +description: Returns the name of the device on which resource has been placed. + +
+ + +
+ +# tf.raw_ops.IteratorGetDevice + + + + + + + + + +Returns the name of the device on which `resource` has been placed. + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IteratorGetNext.md b/site/en/api_docs/python/tf/raw_ops/IteratorGetNext.md new file mode 100644 index 00000000000..ab2ccb6c32b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IteratorGetNext.md @@ -0,0 +1,91 @@ +description: Gets the next output from the given iterator . + +
+ + +
+ +# tf.raw_ops.IteratorGetNext + + + + + + + + + +Gets the next output from the given iterator . + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`iterator` + +A `Tensor` of type `resource`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `output_types`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IteratorGetNextAsOptional.md b/site/en/api_docs/python/tf/raw_ops/IteratorGetNextAsOptional.md new file mode 100644 index 00000000000..c853b56a89b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IteratorGetNextAsOptional.md @@ -0,0 +1,91 @@ +description: Gets the next output from the given iterator as an Optional variant. + +
+ + +
+ +# tf.raw_ops.IteratorGetNextAsOptional + + + + + + + + + +Gets the next output from the given iterator as an Optional variant. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`iterator` + +A `Tensor` of type `resource`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IteratorGetNextSync.md b/site/en/api_docs/python/tf/raw_ops/IteratorGetNextSync.md new file mode 100644 index 00000000000..dea6467cc05 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IteratorGetNextSync.md @@ -0,0 +1,95 @@ +description: Gets the next output from the given iterator. + +
+ + +
+ +# tf.raw_ops.IteratorGetNextSync + + + + + + + + + +Gets the next output from the given iterator. + + + + + + + + + +This operation is a synchronous version IteratorGetNext. It should only be used +in situations where the iterator does not block the calling thread, or where +the calling thread is not a member of the thread pool used to execute parallel +operations (e.g. in eager mode). + + + + + + + + + + + + + + + + + + + +
+`iterator` + +A `Tensor` of type `resource`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `output_types`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IteratorToStringHandle.md b/site/en/api_docs/python/tf/raw_ops/IteratorToStringHandle.md new file mode 100644 index 00000000000..a4b8172d68e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IteratorToStringHandle.md @@ -0,0 +1,78 @@ +description: Converts the given resource_handle representing an iterator to a string. + +
+ + +
+ +# tf.raw_ops.IteratorToStringHandle + + + + + + + + + +Converts the given `resource_handle` representing an iterator to a string. + + + + + + + + + + + + + + + + + + + + + + +
+`resource_handle` + +A `Tensor` of type `resource`. +A handle to an iterator resource. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/IteratorV2.md b/site/en/api_docs/python/tf/raw_ops/IteratorV2.md new file mode 100644 index 00000000000..ea73a9a3caa --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/IteratorV2.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.IteratorV2 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shared_name` + +A `string`. +
+`container` + +A `string`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/L2Loss.md b/site/en/api_docs/python/tf/raw_ops/L2Loss.md new file mode 100644 index 00000000000..64f1f596e28 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/L2Loss.md @@ -0,0 +1,81 @@ +description: L2 Loss. + +
+ + +
+ +# tf.raw_ops.L2Loss + + + + + + + + + +L2 Loss. + + + + + + + + + +Computes half the L2 norm of a tensor without the `sqrt`: + + output = sum(t ** 2) / 2 + + + + + + + + + + + + + +
+`t` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +Typically 2-D, but may have any dimensions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `t`. +
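A quick numeric check of the formula:

```python
import tensorflow as tf

t = tf.constant([1.0, 2.0, 3.0])
tf.raw_ops.L2Loss(t=t)   # => (1 + 4 + 9) / 2 = 7.0
```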
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LMDBDataset.md b/site/en/api_docs/python/tf/raw_ops/LMDBDataset.md new file mode 100644 index 00000000000..db0484a1659 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LMDBDataset.md @@ -0,0 +1,103 @@ +description: Creates a dataset that emits the key-value pairs in one or more LMDB files. + +
+ + +
+ +# tf.raw_ops.LMDBDataset + + + + + + + + + +Creates a dataset that emits the key-value pairs in one or more LMDB files. + + + + + + + + + +The Lightning Memory-Mapped Database Manager, or LMDB, is an embedded binary +key-value database. This dataset can read the contents of LMDB database files, +the names of which generally have the `.mdb` suffix. + +Each output element consists of a key-value pair represented as a pair of +scalar string `Tensor`s, where the first `Tensor` contains the key and the +second `Tensor` contains the value. + +LMDB uses different file formats on big- and little-endian machines. +`LMDBDataset` can only read files in the format of the host machine. + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A `Tensor` of type `string`. +A scalar or a vector containing the name(s) of the binary file(s) to be +read. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LMDBReader.md b/site/en/api_docs/python/tf/raw_ops/LMDBReader.md new file mode 100644 index 00000000000..3ac3bd0e082 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LMDBReader.md @@ -0,0 +1,88 @@ +description: A Reader that outputs the records from a LMDB file. + +
+ + +
+ +# tf.raw_ops.LMDBReader + + + + + + + + + +A Reader that outputs the records from a LMDB file. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LRN.md b/site/en/api_docs/python/tf/raw_ops/LRN.md new file mode 100644 index 00000000000..9abe0bc947d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LRN.md @@ -0,0 +1,120 @@ +description: Local Response Normalization. + +
+ + +
+ +# tf.raw_ops.LRN + + + + + + + + + +Local Response Normalization. + + + + + + + + + +The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last +dimension), and each vector is normalized independently. Within a given vector, +each component is divided by the weighted, squared sum of inputs within +`depth_radius`. In detail, + + sqr_sum[a, b, c, d] = + sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2) + output = input / (bias + alpha * sqr_sum) ** beta + +For details, see [Krizhevsky et al., ImageNet classification with deep +convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. +4-D. +
+`depth_radius` + +An optional `int`. Defaults to `5`. +0-D. Half-width of the 1-D normalization window. +
+`bias` + +An optional `float`. Defaults to `1`. +An offset (usually positive to avoid dividing by 0). +
+`alpha` + +An optional `float`. Defaults to `1`. +A scale factor, usually positive. +
+`beta` + +An optional `float`. Defaults to `0.5`. An exponent. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
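+
+#### Example:
+
+A minimal sketch of calling this op directly; the values and window size are illustrative only:
+
+```python
+import tensorflow as tf
+
+# One batch element, one 1x1 spatial location, four channels.
+x = tf.reshape(tf.constant([1.0, 2.0, 3.0, 4.0]), [1, 1, 1, 4])
+y = tf.raw_ops.LRN(input=x, depth_radius=1, bias=1.0, alpha=1.0, beta=0.5)
+# Each channel is divided by (bias + alpha * sum of squares over its window) ** beta.
+print(y)
+```
+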
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LRNGrad.md b/site/en/api_docs/python/tf/raw_ops/LRNGrad.md new file mode 100644 index 00000000000..f315fcd4273 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LRNGrad.md @@ -0,0 +1,125 @@ +description: Gradients for Local Response Normalization. + +
+ + +
+ +# tf.raw_ops.LRNGrad + + + + + + + + + +Gradients for Local Response Normalization. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_grads` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. +4-D with shape `[batch, height, width, channels]`. +
+`input_image` + +A `Tensor`. Must have the same type as `input_grads`. +4-D with shape `[batch, height, width, channels]`. +
+`output_image` + +A `Tensor`. Must have the same type as `input_grads`. +4-D with shape `[batch, height, width, channels]`. +
+`depth_radius` + +An optional `int`. Defaults to `5`. A depth radius. +
+`bias` + +An optional `float`. Defaults to `1`. +An offset (usually > 0 to avoid dividing by 0). +
+`alpha` + +An optional `float`. Defaults to `1`. +A scale factor, usually positive. +
+`beta` + +An optional `float`. Defaults to `0.5`. An exponent. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input_grads`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LSTMBlockCell.md b/site/en/api_docs/python/tf/raw_ops/LSTMBlockCell.md new file mode 100644 index 00000000000..ca1b1b64911 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LSTMBlockCell.md @@ -0,0 +1,229 @@ +description: Computes the LSTM cell forward propagation for 1 time step. + +
+ + +
+ +# tf.raw_ops.LSTMBlockCell + + + + + + + + + +Computes the LSTM cell forward propagation for 1 time step. + + + + + + + + + +This implementation uses 1 weight matrix and 1 bias vector, and there's an +optional peephole connection. + +This kernel op implements the following mathematical equations: + +```python +xh = [x, h_prev] +[i, f, ci, o] = xh * w + b +f = f + forget_bias + +if not use_peephole: + wci = wcf = wco = 0 + +i = sigmoid(cs_prev * wci + i) +f = sigmoid(cs_prev * wcf + f) +ci = tanh(ci) + +cs = ci .* i + cs_prev .* f +cs = clip(cs, cell_clip) + +o = sigmoid(cs * wco + o) +co = tanh(cs) +h = co .* o +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +The input to the LSTM cell, shape (batch_size, num_inputs). +
+`cs_prev` + +A `Tensor`. Must have the same type as `x`. +Value of the cell state at previous time step. +
+`h_prev` + +A `Tensor`. Must have the same type as `x`. +Output of the previous cell at previous time step. +
+`w` + +A `Tensor`. Must have the same type as `x`. The weight matrix. +
+`wci` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for input gate peephole connection. +
+`wcf` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for forget gate peephole connection. +
+`wco` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for output gate peephole connection. +
+`b` + +A `Tensor`. Must have the same type as `x`. The bias vector. +
+`forget_bias` + +An optional `float`. Defaults to `1`. The forget gate bias. +
+`cell_clip` + +An optional `float`. Defaults to `3`. +Value to clip the 'cs' value to. +
+`use_peephole` + +An optional `bool`. Defaults to `False`. +Whether to use peephole weights. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (i, cs, f, o, ci, co, h). +
+`i` + +A `Tensor`. Has the same type as `x`. +
+`cs` + +A `Tensor`. Has the same type as `x`. +
+`f` + +A `Tensor`. Has the same type as `x`. +
+`o` + +A `Tensor`. Has the same type as `x`. +
+`ci` + +A `Tensor`. Has the same type as `x`. +
+`co` + +A `Tensor`. Has the same type as `x`. +
+`h` + +A `Tensor`. Has the same type as `x`. +
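+
+#### Example:
+
+A rough sketch of a single forward step. The tensor shapes follow the usual block-LSTM kernel
+layout (`w` of shape `[num_inputs + cell_size, 4 * cell_size]`, `b` of shape `[4 * cell_size]`,
+peephole weights of shape `[cell_size]`); these shapes are an assumption, since the page above
+does not spell them out:
+
+```python
+import tensorflow as tf
+
+batch_size, num_inputs, cell_size = 2, 3, 4
+
+x = tf.random.normal([batch_size, num_inputs])
+cs_prev = tf.zeros([batch_size, cell_size])   # previous cell state
+h_prev = tf.zeros([batch_size, cell_size])    # previous output
+w = tf.random.normal([num_inputs + cell_size, 4 * cell_size])  # maps [x, h_prev] to the gate pre-activations
+b = tf.zeros([4 * cell_size])
+wci = wcf = wco = tf.zeros([cell_size])       # peephole weights; unused when use_peephole=False
+
+i, cs, f, o, ci, co, h = tf.raw_ops.LSTMBlockCell(
+    x=x, cs_prev=cs_prev, h_prev=h_prev, w=w, wci=wci, wcf=wcf, wco=wco, b=b,
+    forget_bias=1.0, cell_clip=3.0, use_peephole=False)
+print(h.shape)  # (2, 4)
+```
+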
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LSTMBlockCellGrad.md b/site/en/api_docs/python/tf/raw_ops/LSTMBlockCellGrad.md new file mode 100644 index 00000000000..f811df3da9e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LSTMBlockCellGrad.md @@ -0,0 +1,234 @@ +description: Computes the LSTM cell backward propagation for 1 timestep. + +
+ + +
+ +# tf.raw_ops.LSTMBlockCellGrad + + + + + + + + + +Computes the LSTM cell backward propagation for 1 timestep. + + + + + + + + + +This implementation is to be used in conjunction of LSTMBlockCell. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +The input to the LSTM cell, shape (batch_size, num_inputs). +
+`cs_prev` + +A `Tensor`. Must have the same type as `x`. +The previous cell state. +
+`h_prev` + +A `Tensor`. Must have the same type as `x`. The previous h state. +
+`w` + +A `Tensor`. Must have the same type as `x`. The weight matrix. +
+`wci` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for input gate peephole connection. +
+`wcf` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for forget gate peephole connection. +
+`wco` + +A `Tensor`. Must have the same type as `x`. +The weight matrix for output gate peephole connection. +
+`b` + +A `Tensor`. Must have the same type as `x`. The bias vector. +
+`i` + +A `Tensor`. Must have the same type as `x`. The input gate. +
+`cs` + +A `Tensor`. Must have the same type as `x`. +The cell state before the tanh. +
+`f` + +A `Tensor`. Must have the same type as `x`. The forget gate. +
+`o` + +A `Tensor`. Must have the same type as `x`. The output gate. +
+`ci` + +A `Tensor`. Must have the same type as `x`. The cell input. +
+`co` + +A `Tensor`. Must have the same type as `x`. The cell after the tanh. +
+`cs_grad` + +A `Tensor`. Must have the same type as `x`. +The current gradient of cs. +
+`h_grad` + +A `Tensor`. Must have the same type as `x`. +The gradient of h vector. +
+`use_peephole` + +A `bool`. Whether the cell uses peephole connections. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (cs_prev_grad, dicfo, wci_grad, wcf_grad, wco_grad). +
+`cs_prev_grad` + +A `Tensor`. Has the same type as `x`. +
+`dicfo` + +A `Tensor`. Has the same type as `x`. +
+`wci_grad` + +A `Tensor`. Has the same type as `x`. +
+`wcf_grad` + +A `Tensor`. Has the same type as `x`. +
+`wco_grad` + +A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LatencyStatsDataset.md b/site/en/api_docs/python/tf/raw_ops/LatencyStatsDataset.md new file mode 100644 index 00000000000..018519f1ce7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LatencyStatsDataset.md @@ -0,0 +1,98 @@ +description: Records the latency of producing input_dataset elements in a StatsAggregator. + +
+ + +
+ +# tf.raw_ops.LatencyStatsDataset + + + + + + + + + +Records the latency of producing `input_dataset` elements in a StatsAggregator. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`tag` + +A `Tensor` of type `string`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LeakyRelu.md b/site/en/api_docs/python/tf/raw_ops/LeakyRelu.md new file mode 100644 index 00000000000..068f586470d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LeakyRelu.md @@ -0,0 +1,84 @@ +description: Computes rectified linear: max(features, features * alpha). + +
+ + +
+ +# tf.raw_ops.LeakyRelu + + + + + + + + + +Computes rectified linear: `max(features, features * alpha)`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`alpha` + +An optional `float`. Defaults to `0.2`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
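+
+#### Example:
+
+A minimal sketch (the inputs are illustrative):
+
+```python
+import tensorflow as tf
+
+features = tf.constant([-2.0, -0.5, 0.0, 3.0])
+print(tf.raw_ops.LeakyRelu(features=features, alpha=0.2))
+# [-0.4, -0.1, 0.0, 3.0]  == max(features, features * 0.2)
+```
+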
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LeakyReluGrad.md b/site/en/api_docs/python/tf/raw_ops/LeakyReluGrad.md new file mode 100644 index 00000000000..5b2950d7159 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LeakyReluGrad.md @@ -0,0 +1,94 @@ +description: Computes rectified linear gradients for a LeakyRelu operation. + +
+ + +
+ +# tf.raw_ops.LeakyReluGrad + + + + + + + + + +Computes rectified linear gradients for a LeakyRelu operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +The backpropagated gradients to the corresponding LeakyRelu operation. +
+`features` + +A `Tensor`. Must have the same type as `gradients`. +The features passed as input to the corresponding LeakyRelu operation, +OR the outputs of that operation (both work equivalently). +
+`alpha` + +An optional `float`. Defaults to `0.2`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `gradients`. +
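+
+#### Example:
+
+A minimal sketch showing the gradient rule this op applies (values are illustrative):
+
+```python
+import tensorflow as tf
+
+gradients = tf.constant([1.0, 1.0, 1.0, 1.0])   # upstream gradients
+features = tf.constant([-2.0, -0.5, 0.5, 3.0])  # forward inputs to LeakyRelu
+dx = tf.raw_ops.LeakyReluGrad(gradients=gradients, features=features, alpha=0.2)
+print(dx)  # [0.2, 0.2, 1.0, 1.0]: alpha where the forward input was negative, 1 otherwise
+```
+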
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LearnedUnigramCandidateSampler.md b/site/en/api_docs/python/tf/raw_ops/LearnedUnigramCandidateSampler.md new file mode 100644 index 00000000000..b3542d26696 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LearnedUnigramCandidateSampler.md @@ -0,0 +1,161 @@ +description: Generates labels for candidate sampling with a learned unigram distribution. + +
+ + +
+ +# tf.raw_ops.LearnedUnigramCandidateSampler + + + + + + + + + +Generates labels for candidate sampling with a learned unigram distribution. + + + + + + + + + +See explanations of candidate sampling and the data formats at +go/candidate-sampling. + +For each batch, this op picks a single set of sampled candidate labels. + +The advantages of sampling candidates per-batch are simplicity and the +possibility of efficient dense matrix multiplication. The disadvantage is that +the sampled candidates must be chosen independently of the context and of the +true labels. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64`. +A batch_size * num_true matrix, in which each row contains the +IDs of the num_true target_classes in the corresponding original label. +
+`num_true` + +An `int` that is `>= 1`. Number of true labels per context. +
+`num_sampled` + +An `int` that is `>= 1`. +Number of candidates to randomly sample. +
+`unique` + +A `bool`. +If unique is true, we sample with rejection, so that all sampled +candidates in a batch are unique. This requires some approximation to +estimate the post-rejection sampling probabilities. +
+`range_max` + +An `int` that is `>= 1`. +The sampler will sample integers from the interval [0, range_max). +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count). +
+`sampled_candidates` + +A `Tensor` of type `int64`. +
+`true_expected_count` + +A `Tensor` of type `float32`. +
+`sampled_expected_count` + +A `Tensor` of type `float32`. +
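+
+#### Example:
+
+A rough sketch of a single call; the class IDs and sampler sizes are arbitrary, and a real
+pipeline would feed the true labels of an actual batch:
+
+```python
+import tensorflow as tf
+
+# One row per example; each row holds the IDs of its num_true target classes.
+true_classes = tf.constant([[1], [7]], dtype=tf.int64)
+sampled, true_expected, sampled_expected = tf.raw_ops.LearnedUnigramCandidateSampler(
+    true_classes=true_classes, num_true=1, num_sampled=4, unique=True,
+    range_max=10, seed=0, seed2=0)
+print(sampled)           # four candidate IDs drawn from [0, 10)
+print(sampled_expected)  # expected counts for the sampled candidates
+```
+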
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LeftShift.md b/site/en/api_docs/python/tf/raw_ops/LeftShift.md new file mode 100644 index 00000000000..56cafc1675f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LeftShift.md @@ -0,0 +1,116 @@ +description: Elementwise computes the bitwise left-shift of x and y. + +
+ + +
+ +# tf.raw_ops.LeftShift + + + + + + + + + +Elementwise computes the bitwise left-shift of `x` and `y`. + + + + + + + + + +If `y` is negative, or greater than or equal to the width of `x` in bits the +result is implementation defined. + +#### Example: + + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops +import numpy as np +dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64] + +for dtype in dtype_list: + lhs = tf.constant([-1, -5, -3, -14], dtype=dtype) + rhs = tf.constant([5, 0, 7, 11], dtype=dtype) + + left_shift_result = bitwise_ops.left_shift(lhs, rhs) + + print(left_shift_result) + +# This will print: +# tf.Tensor([ -32 -5 -128 0], shape=(4,), dtype=int8) +# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int16) +# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int32) +# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int64) + +lhs = np.array([-2, 64, 101, 32], dtype=np.int8) +rhs = np.array([-1, -5, -3, -14], dtype=np.int8) +bitwise_ops.left_shift(lhs, rhs) +# +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LegacyParallelInterleaveDatasetV2.md b/site/en/api_docs/python/tf/raw_ops/LegacyParallelInterleaveDatasetV2.md new file mode 100644 index 00000000000..3becfa8dc62 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LegacyParallelInterleaveDatasetV2.md @@ -0,0 +1,152 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.LegacyParallelInterleaveDatasetV2 + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + +The resulting dataset is similar to the `InterleaveDataset`, with the exception +that if retrieving the next value from a dataset would cause the requester to +block, it will skip that input dataset. This dataset is especially useful +when loading data from a variable-latency datastores (e.g. HDFS, GCS), as it +allows the training step to proceed so long as some data is available. + +!! WARNING !! This dataset is not deterministic! + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +
+`cycle_length` + +A `Tensor` of type `int64`. +
+`block_length` + +A `Tensor` of type `int64`. +
+`buffer_output_elements` + +A `Tensor` of type `int64`. +
+`prefetch_input_elements` + +A `Tensor` of type `int64`. +
+`f` + +A function decorated with @Defun. +A function mapping elements of `input_dataset`, concatenated with +`other_arguments`, to a Dataset variant that contains elements matching +`output_types` and `output_shapes`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`deterministic` + +An optional `string`. Defaults to `"default"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Less.md b/site/en/api_docs/python/tf/raw_ops/Less.md new file mode 100644 index 00000000000..27c876f9a09 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Less.md @@ -0,0 +1,100 @@ +description: Returns the truth value of (x < y) element-wise. + +
+ + +
+ +# tf.raw_ops.Less + + + + + + + + + +Returns the truth value of (x < y) element-wise. + + + + + + + + + +*NOTE*: math.less supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less(x, y) ==> [False, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 7]) +tf.math.less(x, y) ==> [False, True, True] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LessEqual.md b/site/en/api_docs/python/tf/raw_ops/LessEqual.md new file mode 100644 index 00000000000..99deb7def0b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LessEqual.md @@ -0,0 +1,100 @@ +description: Returns the truth value of (x <= y) element-wise. + +
+ + +
+ +# tf.raw_ops.LessEqual + + + + + + + + + +Returns the truth value of (x <= y) element-wise. + + + + + + + + + +*NOTE*: math.less_equal supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +#### Example: + + + +```python +x = tf.constant([5, 4, 6]) +y = tf.constant([5]) +tf.math.less_equal(x, y) ==> [True, True, False] + +x = tf.constant([5, 4, 6]) +y = tf.constant([5, 6, 6]) +tf.math.less_equal(x, y) ==> [True, True, True] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Lgamma.md b/site/en/api_docs/python/tf/raw_ops/Lgamma.md new file mode 100644 index 00000000000..d2640630e7f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Lgamma.md @@ -0,0 +1,88 @@ +description: Computes the log of the absolute value of Gamma(x) element-wise. + +
+ + +
+ +# tf.raw_ops.Lgamma + + + + + + + + + +Computes the log of the absolute value of `Gamma(x)` element-wise. + + + + + + + + + + For positive numbers, this function computes log((input - 1)!) for every element in the tensor. + `lgamma(5) = log((5-1)!) = log(4!) = log(24) = 3.1780539` + +#### Example: + + + +```python +x = tf.constant([0, 0.5, 1, 4.5, -4, -5.6]) +tf.math.lgamma(x) ==> [inf, 0.5723649, 0., 2.4537368, inf, -4.6477685] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LinSpace.md b/site/en/api_docs/python/tf/raw_ops/LinSpace.md new file mode 100644 index 00000000000..71c2bbac8c9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LinSpace.md @@ -0,0 +1,105 @@ +description: Generates values in an interval. + +
+ + +
+
+# tf.raw_ops.LinSpace
+
+Generates values in an interval.
+
+A sequence of `num` evenly-spaced values is generated beginning at `start`.
+If `num > 1`, the values in the sequence increase by `(stop - start) / (num - 1)`,
+so that the last one is exactly `stop`.
+
+#### For example:
+
+```
+tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0]
+```
+
+`start` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +0-D tensor. First entry in the range. +
+`stop` + +A `Tensor`. Must have the same type as `start`. +0-D tensor. Last entry in the range. +
+`num` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +0-D tensor. Number of values to generate. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `start`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ListDiff.md b/site/en/api_docs/python/tf/raw_ops/ListDiff.md new file mode 100644 index 00000000000..4db74f07a6c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ListDiff.md @@ -0,0 +1,126 @@ +description: Computes the difference between two lists of numbers or strings. + +
+ + +
+ +# tf.raw_ops.ListDiff + + + + + + + + + +Computes the difference between two lists of numbers or strings. + + + + + + + + + +Given a list `x` and a list `y`, this operation returns a list `out` that +represents all values that are in `x` but not in `y`. The returned list `out` +is sorted in the same order that the numbers appear in `x` (duplicates are +preserved). This operation also returns a list `idx` that represents the +position of each `out` element in `x`. In other words: + +`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]` + +For example, given this input: + +``` +x = [1, 2, 3, 4, 5, 6] +y = [1, 3, 5] +``` + +This operation would return: + +``` +out ==> [2, 4, 6] +idx ==> [1, 3, 5] +``` + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. 1-D. Values to keep. +
+`y` + +A `Tensor`. Must have the same type as `x`. 1-D. Values to remove. +
+`out_idx` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (out, idx). +
+`out` + +A `Tensor`. Has the same type as `x`. +
+`idx` + +A `Tensor` of type `out_idx`. +
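+
+#### Example:
+
+The documented example above, as a runnable eager-mode call:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1, 2, 3, 4, 5, 6])
+y = tf.constant([1, 3, 5])
+out, idx = tf.raw_ops.ListDiff(x=x, y=y, out_idx=tf.int32)
+print(out)  # [2 4 6]
+print(idx)  # [1 3 5]
+```
+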
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadAndRemapMatrix.md b/site/en/api_docs/python/tf/raw_ops/LoadAndRemapMatrix.md new file mode 100644 index 00000000000..abb55cf6a68 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadAndRemapMatrix.md @@ -0,0 +1,179 @@ +description: Loads a 2-D (matrix) Tensor with name old_tensor_name from the checkpoint + +
+ + +
+ +# tf.raw_ops.LoadAndRemapMatrix + + + + + + + + + +Loads a 2-D (matrix) `Tensor` with name `old_tensor_name` from the checkpoint + + + + + + + + + +at `ckpt_path` and potentially reorders its rows and columns using the +specified remappings. + +Most users should use one of the wrapper initializers (such as +`tf.contrib.framework.load_and_remap_matrix_initializer`) instead of this +function directly. + +The remappings are 1-D tensors with the following properties: + +* `row_remapping` must have exactly `num_rows` entries. Row `i` of the output + matrix will be initialized from the row corresponding to index + `row_remapping[i]` in the old `Tensor` from the checkpoint. +* `col_remapping` must have either 0 entries (indicating that no column + reordering is needed) or `num_cols` entries. If specified, column `j` of the + output matrix will be initialized from the column corresponding to index + `col_remapping[j]` in the old `Tensor` from the checkpoint. +* A value of -1 in either of the remappings signifies a "missing" entry. In that + case, values from the `initializing_values` tensor will be used to fill that + missing row or column. If `row_remapping` has `r` missing entries and + `col_remapping` has `c` missing entries, then the following condition must be + true: + +`(r * num_cols) + (c * num_rows) - (r * c) == len(initializing_values)` + +The remapping tensors can be generated using the GenerateVocabRemapping op. + +As an example, with row_remapping = [1, 0, -1], col_remapping = [0, 2, -1], +initializing_values = [0.5, -0.5, 0.25, -0.25, 42], and w(i, j) representing +the value from row i, column j of the old tensor in the checkpoint, the output +matrix will look like the following: + +[[w(1, 0), w(1, 2), 0.5], + [w(0, 0), w(0, 2), -0.5], + [0.25, -0.25, 42]] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`ckpt_path` + +A `Tensor` of type `string`. +Path to the TensorFlow checkpoint (version 2, `TensorBundle`) from +which the old matrix `Tensor` will be loaded. +
+`old_tensor_name` + +A `Tensor` of type `string`. +Name of the 2-D `Tensor` to load from checkpoint. +
+`row_remapping` + +A `Tensor` of type `int64`. +An int `Tensor` of row remappings (generally created by +`generate_vocab_remapping`). Even if no row remapping is needed, this must +still be an index-valued Tensor (e.g. [0, 1, 2, ...]), or a shifted +index-valued `Tensor` (e.g. [8, 9, 10, ...], for partitioned `Variables`). +
+`col_remapping` + +A `Tensor` of type `int64`. +An int `Tensor` of column remappings (generally created by +`generate_vocab_remapping`). May be a size-0 `Tensor` if only row remapping +is to be done (e.g. column ordering is the same). +
+`initializing_values` + +A `Tensor` of type `float32`. +A float `Tensor` containing values to fill in for cells +in the output matrix that are not loaded from the checkpoint. Length must be +exactly the same as the number of missing / new cells. +
+`num_rows` + +An `int` that is `>= 0`. +Number of rows (length of the 1st dimension) in the output matrix. +
+`num_cols` + +An `int` that is `>= 1`. +Number of columns (length of the 2nd dimension) in the output matrix. +
+`max_rows_in_memory` + +An optional `int`. Defaults to `-1`. +The maximum number of rows to load from the checkpoint at +once. If less than or equal to 0, the entire matrix will be loaded into +memory. Setting this arg trades increased disk reads for lower memory usage. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
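+
+#### Example:
+
+Calling this op requires a real checkpoint on disk, so as a rough illustration of the remapping
+rules only, the documented example can be reproduced in plain NumPy. The old matrix `w` below is
+hypothetical, and the row-major fill order of `initializing_values` is inferred from the example
+above:
+
+```python
+import numpy as np
+
+w = np.arange(9, dtype=np.float32).reshape(3, 3)  # hypothetical old checkpointed matrix
+row_remapping = [1, 0, -1]
+col_remapping = [0, 2, -1]
+initializing_values = iter([0.5, -0.5, 0.25, -0.25, 42])
+
+out = np.empty((3, 3), dtype=np.float32)
+for i, r in enumerate(row_remapping):
+    for j, c in enumerate(col_remapping):
+        # -1 marks a missing row/column; such cells are filled from initializing_values.
+        out[i, j] = next(initializing_values) if (r == -1 or c == -1) else w[r, c]
+print(out)  # [[w(1,0) w(1,2) 0.5], [w(0,0) w(0,2) -0.5], [0.25 -0.25 42.]]
+```
+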
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingADAMParameters.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingADAMParameters.md new file mode 100644 index 00000000000..558d61241bb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingADAMParameters.md @@ -0,0 +1,135 @@ +description: Load ADAM embedding parameters. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingADAMParameters + + + + + + + + + +Load ADAM embedding parameters. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the ADAM optimization algorithm. +
+`momenta` + +A `Tensor` of type `float32`. +Value of momenta used in the ADAM optimization algorithm. +
+`velocities` + +A `Tensor` of type `float32`. +Value of velocities used in the ADAM optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingADAMParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingADAMParametersGradAccumDebug.md new file mode 100644 index 00000000000..c369536f3df --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingADAMParametersGradAccumDebug.md @@ -0,0 +1,143 @@ +description: Load ADAM embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingADAMParametersGradAccumDebug + + + + + + + + + +Load ADAM embedding parameters with debug support. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the ADAM optimization algorithm. +
+`momenta` + +A `Tensor` of type `float32`. +Value of momenta used in the ADAM optimization algorithm. +
+`velocities` + +A `Tensor` of type `float32`. +Value of velocities used in the ADAM optimization algorithm. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +Value of gradient_accumulators used in the ADAM optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdadeltaParameters.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdadeltaParameters.md new file mode 100644 index 00000000000..f364e1090e8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdadeltaParameters.md @@ -0,0 +1,135 @@ +description: Load Adadelta embedding parameters. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingAdadeltaParameters + + + + + + + + + +Load Adadelta embedding parameters. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the Adadelta optimization algorithm. +
+`accumulators` + +A `Tensor` of type `float32`. +Value of accumulators used in the Adadelta optimization algorithm. +
+`updates` + +A `Tensor` of type `float32`. +Value of updates used in the Adadelta optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdadeltaParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdadeltaParametersGradAccumDebug.md new file mode 100644 index 00000000000..f3a2a13fcd4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdadeltaParametersGradAccumDebug.md @@ -0,0 +1,143 @@ +description: Load Adadelta parameters with debug support. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingAdadeltaParametersGradAccumDebug + + + + + + + + + +Load Adadelta parameters with debug support. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the Adadelta optimization algorithm. +
+`accumulators` + +A `Tensor` of type `float32`. +Value of accumulators used in the Adadelta optimization algorithm. +
+`updates` + +A `Tensor` of type `float32`. +Value of updates used in the Adadelta optimization algorithm. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +Value of gradient_accumulators used in the Adadelta optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdagradParameters.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdagradParameters.md new file mode 100644 index 00000000000..722e6dcdc8d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdagradParameters.md @@ -0,0 +1,127 @@ +description: Load Adagrad embedding parameters. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingAdagradParameters + + + + + + + + + +Load Adagrad embedding parameters. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the Adagrad optimization algorithm. +
+`accumulators` + +A `Tensor` of type `float32`. +Value of accumulators used in the Adagrad optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdagradParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdagradParametersGradAccumDebug.md new file mode 100644 index 00000000000..2a92f2005bd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingAdagradParametersGradAccumDebug.md @@ -0,0 +1,135 @@ +description: Load Adagrad embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingAdagradParametersGradAccumDebug + + + + + + + + + +Load Adagrad embedding parameters with debug support. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the Adagrad optimization algorithm. +
+`accumulators` + +A `Tensor` of type `float32`. +Value of accumulators used in the Adagrad optimization algorithm. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +Value of gradient_accumulators used in the Adagrad optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingCenteredRMSPropParameters.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingCenteredRMSPropParameters.md new file mode 100644 index 00000000000..8ed5b3744a1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingCenteredRMSPropParameters.md @@ -0,0 +1,143 @@ +description: Load centered RMSProp embedding parameters. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingCenteredRMSPropParameters + + + + + + + + + +Load centered RMSProp embedding parameters. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the centered RMSProp optimization algorithm. +
+`ms` + +A `Tensor` of type `float32`. +Value of ms used in the centered RMSProp optimization algorithm. +
+`mom` + +A `Tensor` of type `float32`. +Value of mom used in the centered RMSProp optimization algorithm. +
+`mg` + +A `Tensor` of type `float32`. +Value of mg used in the centered RMSProp optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingFTRLParameters.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingFTRLParameters.md new file mode 100644 index 00000000000..beb701228e3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingFTRLParameters.md @@ -0,0 +1,135 @@ +description: Load FTRL embedding parameters. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingFTRLParameters + + + + + + + + + +Load FTRL embedding parameters. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the FTRL optimization algorithm. +
+`accumulators` + +A `Tensor` of type `float32`. +Value of accumulators used in the FTRL optimization algorithm. +
+`linears` + +A `Tensor` of type `float32`. +Value of linears used in the FTRL optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingFTRLParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingFTRLParametersGradAccumDebug.md new file mode 100644 index 00000000000..ff6f5cb2d3e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingFTRLParametersGradAccumDebug.md @@ -0,0 +1,143 @@ +description: Load FTRL embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingFTRLParametersGradAccumDebug + + + + + + + + + +Load FTRL embedding parameters with debug support. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the FTRL optimization algorithm. +
+`accumulators` + +A `Tensor` of type `float32`. +Value of accumulators used in the FTRL optimization algorithm. +
+`linears` + +A `Tensor` of type `float32`. +Value of linears used in the FTRL optimization algorithm. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +Value of gradient_accumulators used in the FTRL optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingMDLAdagradLightParameters.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingMDLAdagradLightParameters.md new file mode 100644 index 00000000000..4867fc8c9b5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingMDLAdagradLightParameters.md @@ -0,0 +1,143 @@ +description: Load MDL Adagrad Light embedding parameters. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingMDLAdagradLightParameters + + + + + + + + + +Load MDL Adagrad Light embedding parameters. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the MDL Adagrad Light optimization algorithm. +
+`accumulators` + +A `Tensor` of type `float32`. +Value of accumulators used in the MDL Adagrad Light optimization algorithm. +
+`weights` + +A `Tensor` of type `float32`. +Value of weights used in the MDL Adagrad Light optimization algorithm. +
+`benefits` + +A `Tensor` of type `float32`. +Value of benefits used in the MDL Adagrad Light optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingMomentumParameters.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingMomentumParameters.md new file mode 100644 index 00000000000..a2c396f6dc0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingMomentumParameters.md @@ -0,0 +1,127 @@ +description: Load Momentum embedding parameters. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingMomentumParameters + + + + + + + + + +Load Momentum embedding parameters. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the Momentum optimization algorithm. +
+`momenta` + +A `Tensor` of type `float32`. +Value of momenta used in the Momentum optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingMomentumParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingMomentumParametersGradAccumDebug.md new file mode 100644 index 00000000000..901f93f983c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingMomentumParametersGradAccumDebug.md @@ -0,0 +1,135 @@ +description: Load Momentum embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingMomentumParametersGradAccumDebug + + + + + + + + + +Load Momentum embedding parameters with debug support. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the Momentum optimization algorithm. +
+`momenta` + +A `Tensor` of type `float32`. +Value of momenta used in the Momentum optimization algorithm. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +Value of gradient_accumulators used in the Momentum optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingProximalAdagradParameters.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingProximalAdagradParameters.md new file mode 100644 index 00000000000..3de665b9e02 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingProximalAdagradParameters.md @@ -0,0 +1,127 @@ +description: Load proximal Adagrad embedding parameters. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingProximalAdagradParameters + + + + + + + + + +Load proximal Adagrad embedding parameters. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the proximal Adagrad optimization algorithm. +
+`accumulators` + +A `Tensor` of type `float32`. +Value of accumulators used in the proximal Adagrad optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug.md new file mode 100644 index 00000000000..a9dcb24ae21 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug.md @@ -0,0 +1,135 @@ +description: Load proximal Adagrad embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug + + + + + + + + + +Load proximal Adagrad embedding parameters with debug support. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the proximal Adagrad optimization algorithm. +
+`accumulators` + +A `Tensor` of type `float32`. +Value of accumulators used in the proximal Adagrad optimization algorithm. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +Value of gradient_accumulators used in the proximal Adagrad optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingRMSPropParameters.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingRMSPropParameters.md new file mode 100644 index 00000000000..6c7f5c7d735 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingRMSPropParameters.md @@ -0,0 +1,135 @@ +description: Load RMSProp embedding parameters. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingRMSPropParameters + + + + + + + + + +Load RMSProp embedding parameters. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the RMSProp optimization algorithm. +
+`ms` + +A `Tensor` of type `float32`. +Value of ms used in the RMSProp optimization algorithm. +
+`mom` + +A `Tensor` of type `float32`. +Value of mom used in the RMSProp optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingRMSPropParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingRMSPropParametersGradAccumDebug.md new file mode 100644 index 00000000000..cc322cf2e26 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingRMSPropParametersGradAccumDebug.md @@ -0,0 +1,143 @@ +description: Load RMSProp embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingRMSPropParametersGradAccumDebug + + + + + + + + + +Load RMSProp embedding parameters with debug support. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the RMSProp optimization algorithm. +
+`ms` + +A `Tensor` of type `float32`. +Value of ms used in the RMSProp optimization algorithm. +
+`mom` + +A `Tensor` of type `float32`. +Value of mom used in the RMSProp optimization algorithm. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +Value of gradient_accumulators used in the RMSProp optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingStochasticGradientDescentParameters.md b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingStochasticGradientDescentParameters.md new file mode 100644 index 00000000000..2cd2a2c67c8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoadTPUEmbeddingStochasticGradientDescentParameters.md @@ -0,0 +1,119 @@ +description: Load SGD embedding parameters. + +
+ + +
+ +# tf.raw_ops.LoadTPUEmbeddingStochasticGradientDescentParameters + + + + + + + + + +Load SGD embedding parameters. + + + + + + + + + +An op that loads optimization parameters into HBM for embedding. Must be +preceded by a ConfigureTPUEmbeddingHost op that sets up the correct +embedding table configuration. For example, this op is used to install +parameters that are loaded from a checkpoint before a training loop is +executed. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`parameters` + +A `Tensor` of type `float32`. +Value of parameters used in the stochastic gradient descent optimization algorithm. +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Log.md b/site/en/api_docs/python/tf/raw_ops/Log.md new file mode 100644 index 00000000000..b6245b1a15d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Log.md @@ -0,0 +1,91 @@ +description: Computes natural logarithm of x element-wise. + +
+ + +
+ +# tf.raw_ops.Log + + + + + + + + + +Computes natural logarithm of x element-wise. + + + + + + + + + +I.e., \\(y = \log_e x\\). + +#### Example: + + + +```python +>>> x = tf.constant([0, 0.5, 1, 5]) +>>> tf.math.log(x) + + +``` + +See: https://en.wikipedia.org/wiki/Logarithm + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Log1p.md b/site/en/api_docs/python/tf/raw_ops/Log1p.md new file mode 100644 index 00000000000..97635b9ca2e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Log1p.md @@ -0,0 +1,85 @@ +description: Computes natural logarithm of (1 + x) element-wise. + +
+ + +
+ +# tf.raw_ops.Log1p + + + + + + + + + +Computes natural logarithm of (1 + x) element-wise. + + + + + + + + + +I.e., \\(y = \log_e (1 + x)\\). + +#### Example: + + +>>> x = tf.constant([0, 0.5, 1, 5]) +>>> tf.math.log1p(x) + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LogMatrixDeterminant.md b/site/en/api_docs/python/tf/raw_ops/LogMatrixDeterminant.md new file mode 100644 index 00000000000..1b98e1104d0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LogMatrixDeterminant.md @@ -0,0 +1,101 @@ +description: Computes the sign and the log of the absolute value of the determinant of + +
+ + +
+ +# tf.raw_ops.LogMatrixDeterminant + + + + + + + + + +Computes the sign and the log of the absolute value of the determinant of + + + + + + + + + +one or more square matrices. + +The input is a tensor of shape `[N, M, M]` whose inner-most 2 dimensions +form square matrices. The outputs are two tensors containing the signs and +absolute values of the log determinants for all N input submatrices +`[..., :, :]` such that the determinant = sign*exp(log_abs_determinant). +The log_abs_determinant is computed as det(P)*sum(log(diag(LU))) where LU +is the LU decomposition of the input and P is the corresponding +permutation matrix. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +Shape is `[N, M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sign, log_abs_determinant). +
+`sign` + +A `Tensor`. Has the same type as `input`. +
+`log_abs_determinant` + +A `Tensor`. Has the same type as `input`. +
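+
+#### Example (illustrative):
+
+A minimal eager-mode sketch, with arbitrary input values, showing how the two
+outputs combine back into the ordinary determinant:
+
+```python
+import tensorflow as tf
+
+# A batch of two 2x2 matrices with determinants -2 and 10.
+x = tf.constant([[[1., 2.], [3., 4.]],
+                 [[2., 0.], [0., 5.]]])
+
+sign, log_abs_det = tf.raw_ops.LogMatrixDeterminant(input=x)
+
+# determinant = sign * exp(log_abs_determinant)
+print(sign * tf.exp(log_abs_det))  # approximately [-2., 10.]
+```
+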
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LogSoftmax.md b/site/en/api_docs/python/tf/raw_ops/LogSoftmax.md new file mode 100644 index 00000000000..153b906c21f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LogSoftmax.md @@ -0,0 +1,81 @@ +description: Computes log softmax activations. + +
+ + +
+ +# tf.raw_ops.LogSoftmax + + + + + + + + + +Computes log softmax activations. + + + + + + + + + +For each batch `i` and class `j` we have + + logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i]))) + + + + + + + + + + + + + +
+`logits` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +2-D with shape `[batch_size, num_classes]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `logits`. +
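+
+#### Example (illustrative):
+
+A small sketch of calling the raw op directly (the public wrapper is
+`tf.nn.log_softmax`); the logits values are arbitrary:
+
+```python
+import tensorflow as tf
+
+logits = tf.constant([[1., 2., 3.],
+                      [0., 0., 0.]])
+
+out = tf.raw_ops.LogSoftmax(logits=logits)
+
+# Exponentiating log-softmax values yields probabilities that sum to 1 per row.
+print(tf.reduce_sum(tf.exp(out), axis=-1))  # ~[1., 1.]
+```
+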
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LogUniformCandidateSampler.md b/site/en/api_docs/python/tf/raw_ops/LogUniformCandidateSampler.md new file mode 100644 index 00000000000..ccdf4b75e67 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LogUniformCandidateSampler.md @@ -0,0 +1,161 @@ +description: Generates labels for candidate sampling with a log-uniform distribution. + +
+ + +
+ +# tf.raw_ops.LogUniformCandidateSampler + + + + + + + + + +Generates labels for candidate sampling with a log-uniform distribution. + + + + + + + + + +See explanations of candidate sampling and the data formats at +go/candidate-sampling. + +For each batch, this op picks a single set of sampled candidate labels. + +The advantages of sampling candidates per-batch are simplicity and the +possibility of efficient dense matrix multiplication. The disadvantage is that +the sampled candidates must be chosen independently of the context and of the +true labels. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64`. +A batch_size * num_true matrix, in which each row contains the +IDs of the num_true target_classes in the corresponding original label. +
+`num_true` + +An `int` that is `>= 1`. Number of true labels per context. +
+`num_sampled` + +An `int` that is `>= 1`. +Number of candidates to randomly sample. +
+`unique` + +A `bool`. +If unique is true, we sample with rejection, so that all sampled +candidates in a batch are unique. This requires some approximation to +estimate the post-rejection sampling probabilities. +
+`range_max` + +An `int` that is `>= 1`. +The sampler will sample integers from the interval [0, range_max). +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count). +
+`sampled_candidates` + +A `Tensor` of type `int64`. +
+`true_expected_count` + +A `Tensor` of type `float32`. +
+`sampled_expected_count` + +A `Tensor` of type `float32`. +
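+
+#### Example (illustrative):
+
+A rough sketch of a direct call with arbitrary values; `tf.random.log_uniform_candidate_sampler`
+is the public counterpart:
+
+```python
+import tensorflow as tf
+
+# One true class ID per example (num_true=1), IDs drawn from [0, 1000).
+true_classes = tf.constant([[0], [7], [42]], dtype=tf.int64)
+
+sampled, true_expected, sampled_expected = tf.raw_ops.LogUniformCandidateSampler(
+    true_classes=true_classes,
+    num_true=1,
+    num_sampled=5,
+    unique=True,
+    range_max=1000,
+    seed=3,
+    seed2=0)
+
+print(sampled)           # 5 unique candidate IDs in [0, 1000)
+print(sampled_expected)  # expected counts under the log-uniform distribution
+```
+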
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LogicalAnd.md b/site/en/api_docs/python/tf/raw_ops/LogicalAnd.md new file mode 100644 index 00000000000..5aee23fec96 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LogicalAnd.md @@ -0,0 +1,86 @@ +description: Returns the truth value of x AND y element-wise. + +
+ + +
+ +# tf.raw_ops.LogicalAnd + + + + + + + + + +Returns the truth value of x AND y element-wise. + + + + + + + + + +*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
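+
+#### Example (illustrative):
+
+An eager-mode sketch with arbitrary values; the second call shows the
+broadcasting behaviour described in the note above:
+
+```python
+import tensorflow as tf
+
+a = tf.constant([True, True, False, False])
+b = tf.constant([True, False, True, False])
+print(tf.raw_ops.LogicalAnd(x=a, y=b))  # [True, False, False, False]
+
+# Broadcasting: a scalar operand is combined with every element of the vector.
+print(tf.raw_ops.LogicalAnd(x=a, y=tf.constant(True)))  # same values as `a`
+```
+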
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LogicalNot.md b/site/en/api_docs/python/tf/raw_ops/LogicalNot.md new file mode 100644 index 00000000000..a279baefad6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LogicalNot.md @@ -0,0 +1,86 @@ +description: Returns the truth value of NOT x element-wise. + +
+ + +
+ +# tf.raw_ops.LogicalNot + + + + + + + + + +Returns the truth value of `NOT x` element-wise. + + + + + + + + + + +#### Example: + + + +``` +>>> tf.math.logical_not(tf.constant([True, False])) + +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LogicalOr.md b/site/en/api_docs/python/tf/raw_ops/LogicalOr.md new file mode 100644 index 00000000000..351c1c0b438 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LogicalOr.md @@ -0,0 +1,86 @@ +description: Returns the truth value of x OR y element-wise. + +
+ + +
+ +# tf.raw_ops.LogicalOr + + + + + + + + + +Returns the truth value of x OR y element-wise. + + + + + + + + + +*NOTE*: math.logical_or supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor` of type `bool`. +
+`y` + +A `Tensor` of type `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
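+
+#### Example (illustrative):
+
+A short sketch (shapes chosen arbitrarily) focusing on the broadcasting
+behaviour mentioned in the note above:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[True], [False]])   # shape (2, 1)
+y = tf.constant([False, True])       # shape (2,), broadcasts against x to (2, 2)
+
+print(tf.raw_ops.LogicalOr(x=x, y=y))
+# [[ True  True]
+#  [False  True]]
+```
+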
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableExport.md b/site/en/api_docs/python/tf/raw_ops/LookupTableExport.md new file mode 100644 index 00000000000..c0b2a1cf846 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableExport.md @@ -0,0 +1,105 @@ +description: Outputs all keys and values in the table. + +
+ + +
+ +# tf.raw_ops.LookupTableExport + + + + + + + + + +Outputs all keys and values in the table. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type mutable `string`. Handle to the table. +
+`Tkeys` + +A tf.DType. +
+`Tvalues` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (keys, values). +
+`keys` + +A `Tensor` of type `Tkeys`. +
+`values` + +A `Tensor` of type `Tvalues`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableExportV2.md b/site/en/api_docs/python/tf/raw_ops/LookupTableExportV2.md new file mode 100644 index 00000000000..35d6af49906 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableExportV2.md @@ -0,0 +1,105 @@ +description: Outputs all keys and values in the table. + +
+ + +
+ +# tf.raw_ops.LookupTableExportV2 + + + + + + + + + +Outputs all keys and values in the table. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type `resource`. Handle to the table. +
+`Tkeys` + +A tf.DType. +
+`Tvalues` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (keys, values). +
+`keys` + +A `Tensor` of type `Tkeys`. +
+`values` + +A `Tensor` of type `Tvalues`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableFind.md b/site/en/api_docs/python/tf/raw_ops/LookupTableFind.md new file mode 100644 index 00000000000..2d46ccff434 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableFind.md @@ -0,0 +1,96 @@ +description: Looks up keys in a table, outputs the corresponding values. + +
+ + +
+ +# tf.raw_ops.LookupTableFind + + + + + + + + + +Looks up keys in a table, outputs the corresponding values. + + + + + + + + + +The tensor `keys` must be of the same type as the keys of the table. +The output `values` is of the type of the table values. + +The scalar `default_value` is the value output for keys not present in the +table. It must also be of the same type as the table values. + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type mutable `string`. Handle to the table. +
+`keys` + +A `Tensor`. Any shape. Keys to look up. +
+`default_value` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `default_value`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableFindV2.md b/site/en/api_docs/python/tf/raw_ops/LookupTableFindV2.md new file mode 100644 index 00000000000..5fbad3ba6e0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableFindV2.md @@ -0,0 +1,96 @@ +description: Looks up keys in a table, outputs the corresponding values. + +
+ + +
+ +# tf.raw_ops.LookupTableFindV2 + + + + + + + + + +Looks up keys in a table, outputs the corresponding values. + + + + + + + + + +The tensor `keys` must be of the same type as the keys of the table. +The output `values` is of the type of the table values. + +The scalar `default_value` is the value output for keys not present in the +table. It must also be of the same type as the table values. + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type `resource`. Handle to the table. +
+`keys` + +A `Tensor`. Any shape. Keys to look up. +
+`default_value` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `default_value`. +
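+
+#### Example (illustrative):
+
+An end-to-end sketch. The table handle comes from the companion raw op
+`MutableHashTableV2` (not documented on this page), and the keys and values
+are arbitrary:
+
+```python
+import tensorflow as tf
+
+# Create an empty mutable string -> int64 table and get its resource handle.
+table = tf.raw_ops.MutableHashTableV2(key_dtype=tf.string, value_dtype=tf.int64)
+
+tf.raw_ops.LookupTableInsertV2(
+    table_handle=table,
+    keys=tf.constant(["apple", "banana"]),
+    values=tf.constant([1, 2], dtype=tf.int64))
+
+found = tf.raw_ops.LookupTableFindV2(
+    table_handle=table,
+    keys=tf.constant(["banana", "cherry"]),
+    default_value=tf.constant(-1, dtype=tf.int64))
+print(found)  # [2, -1]; "cherry" is absent, so the scalar default is returned
+```
+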
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableImport.md b/site/en/api_docs/python/tf/raw_ops/LookupTableImport.md new file mode 100644 index 00000000000..eed28bc7c06 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableImport.md @@ -0,0 +1,93 @@ +description: Replaces the contents of the table with the specified keys and values. + +
+ + +
+ +# tf.raw_ops.LookupTableImport + + + + + + + + + +Replaces the contents of the table with the specified keys and values. + + + + + + + + + +The tensor `keys` must be of the same type as the keys of the table. +The tensor `values` must be of the type of the table values. + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type mutable `string`. Handle to the table. +
+`keys` + +A `Tensor`. Any shape. Keys to look up. +
+`values` + +A `Tensor`. Values to associate with keys. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableImportV2.md b/site/en/api_docs/python/tf/raw_ops/LookupTableImportV2.md new file mode 100644 index 00000000000..aa5dab0cc68 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableImportV2.md @@ -0,0 +1,93 @@ +description: Replaces the contents of the table with the specified keys and values. + +
+ + +
+ +# tf.raw_ops.LookupTableImportV2 + + + + + + + + + +Replaces the contents of the table with the specified keys and values. + + + + + + + + + +The tensor `keys` must be of the same type as the keys of the table. +The tensor `values` must be of the type of the table values. + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type `resource`. Handle to the table. +
+`keys` + +A `Tensor`. Any shape. Keys to look up. +
+`values` + +A `Tensor`. Values to associate with keys. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableInsert.md b/site/en/api_docs/python/tf/raw_ops/LookupTableInsert.md new file mode 100644 index 00000000000..3b60b3d53b6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableInsert.md @@ -0,0 +1,93 @@ +description: Updates the table to associate keys with values. + +
+ + +
+ +# tf.raw_ops.LookupTableInsert + + + + + + + + + +Updates the table to associate keys with values. + + + + + + + + + +The tensor `keys` must be of the same type as the keys of the table. +The tensor `values` must be of the type of the table values. + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type mutable `string`. Handle to the table. +
+`keys` + +A `Tensor`. Any shape. Keys to look up. +
+`values` + +A `Tensor`. Values to associate with keys. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableInsertV2.md b/site/en/api_docs/python/tf/raw_ops/LookupTableInsertV2.md new file mode 100644 index 00000000000..61b3ad11e1a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableInsertV2.md @@ -0,0 +1,93 @@ +description: Updates the table to associate keys with values. + +
+ + +
+ +# tf.raw_ops.LookupTableInsertV2 + + + + + + + + + +Updates the table to associate keys with values. + + + + + + + + + +The tensor `keys` must be of the same type as the keys of the table. +The tensor `values` must be of the type of the table values. + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type `resource`. Handle to the table. +
+`keys` + +A `Tensor`. Any shape. Keys to look up. +
+`values` + +A `Tensor`. Values to associate with keys. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableRemoveV2.md b/site/en/api_docs/python/tf/raw_ops/LookupTableRemoveV2.md new file mode 100644 index 00000000000..b225bbf729a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableRemoveV2.md @@ -0,0 +1,86 @@ +description: Removes keys and their associated values from a table. + +
+ + +
+ +# tf.raw_ops.LookupTableRemoveV2 + + + + + + + + + +Removes keys and their associated values from a table. + + + + + + + + + +The tensor `keys` must be of the same type as the keys of the table. Keys not +already in the table are silently ignored. + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type `resource`. Handle to the table. +
+`keys` + +A `Tensor`. Any shape. Keys of the elements to remove. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableSize.md b/site/en/api_docs/python/tf/raw_ops/LookupTableSize.md new file mode 100644 index 00000000000..cd251200bcb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableSize.md @@ -0,0 +1,77 @@ +description: Computes the number of elements in the given table. + +
+ + +
+ +# tf.raw_ops.LookupTableSize + + + + + + + + + +Computes the number of elements in the given table. + + + + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type mutable `string`. Handle to the table. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LookupTableSizeV2.md b/site/en/api_docs/python/tf/raw_ops/LookupTableSizeV2.md new file mode 100644 index 00000000000..47e9750053c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LookupTableSizeV2.md @@ -0,0 +1,77 @@ +description: Computes the number of elements in the given table. + +
+ + +
+ +# tf.raw_ops.LookupTableSizeV2 + + + + + + + + + +Computes the number of elements in the given table. + + + + + + + + + + + + + + + + + + + + + + +
+`table_handle` + +A `Tensor` of type `resource`. Handle to the table. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LoopCond.md b/site/en/api_docs/python/tf/raw_ops/LoopCond.md new file mode 100644 index 00000000000..afd06191e33 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LoopCond.md @@ -0,0 +1,80 @@ +description: Forwards the input to the output. + +
+ + +
+ +# tf.raw_ops.LoopCond + + + + + + + + + +Forwards the input to the output. + + + + + + + + + +This operator represents the loop termination condition used by the +"pivot" switches of a loop. + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `bool`. +A boolean scalar, representing the branch predicate of the Switch op. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/LowerBound.md b/site/en/api_docs/python/tf/raw_ops/LowerBound.md new file mode 100644 index 00000000000..7d1bc498a16 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/LowerBound.md @@ -0,0 +1,110 @@ +description: Applies lower_bound(sorted_search_values, values) along each row. + +
+ + +
+ +# tf.raw_ops.LowerBound + + + + + + + + + +Applies lower_bound(sorted_search_values, values) along each row. + + + + + + + + + +Each set of rows with the same index in (sorted_inputs, values) is treated +independently. The resulting row is the equivalent of calling +`np.searchsorted(sorted_inputs, values, side='left')`. + +The result is not a global index to the entire +`Tensor`, but rather just the index in the last dimension. + +A 2-D example: + sorted_sequence = [[0, 3, 9, 9, 10], + [1, 2, 3, 4, 5]] + values = [[2, 4, 9], + [0, 2, 6]] + + result = LowerBound(sorted_sequence, values) + + result == [[1, 2, 2], + [0, 1, 5]] + + + + + + + + + + + + + + + + + + + +
+`sorted_inputs` + +A `Tensor`. 2-D Tensor where each row is ordered. +
+`values` + +A `Tensor`. Must have the same type as `sorted_inputs`. +2-D Tensor with the same number of rows as `sorted_inputs`. Contains +the values that will be searched for in `sorted_inputs`. +
+`out_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
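+
+#### Example (illustrative):
+
+A runnable version of the 2-D example above, using the same data; the result
+matches `np.searchsorted(..., side='left')` applied row by row:
+
+```python
+import tensorflow as tf
+
+sorted_sequence = tf.constant([[0, 3, 9, 9, 10],
+                               [1, 2, 3, 4, 5]])
+values = tf.constant([[2, 4, 9],
+                      [0, 2, 6]])
+
+print(tf.raw_ops.LowerBound(sorted_inputs=sorted_sequence, values=values))
+# [[1 2 2]
+#  [0 1 5]]
+```
+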
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Lu.md b/site/en/api_docs/python/tf/raw_ops/Lu.md new file mode 100644 index 00000000000..402f93de0e5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Lu.md @@ -0,0 +1,117 @@ +description: Computes the LU decomposition of one or more square matrices. + +
+ + +
+ +# tf.raw_ops.Lu + + + + + + + + + +Computes the LU decomposition of one or more square matrices. + + + + + + + + + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. + +The input has to be invertible. + +The output consists of two tensors LU and P containing the LU decomposition +of all input submatrices `[..., :, :]`. LU encodes the lower triangular and +upper triangular factors. + +For each input submatrix of shape `[M, M]`, L is a lower triangular matrix of +shape `[M, M]` with unit diagonal whose entries correspond to the strictly lower +triangular part of LU. U is a upper triangular matrix of shape `[M, M]` whose +entries correspond to the upper triangular part, including the diagonal, of LU. + +P represents a permutation matrix encoded as a list of indices each between `0` +and `M-1`, inclusive. If P_mat denotes the permutation matrix corresponding to +P, then the L, U and P satisfies P_mat * input = L * U. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +A tensor of shape `[..., M, M]` whose inner-most 2 dimensions form matrices of +size `[M, M]`. +
+`output_idx_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (lu, p). +
+`lu` + +A `Tensor`. Has the same type as `input`. +
+`p` + +A `Tensor` of type `output_idx_type`. +
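+
+#### Example (illustrative):
+
+A small sketch with an arbitrary 2x2 input, showing how the packed `lu` output
+and the permutation `p` relate to the factorization described above:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[4., 3.],
+                 [6., 3.]])
+
+lu, p = tf.raw_ops.Lu(input=x)
+
+# L: unit diagonal plus the strictly lower triangular part of `lu`.
+# U: upper triangular part of `lu`, diagonal included.
+diag = tf.linalg.diag(tf.linalg.diag_part(lu))
+lower = tf.linalg.band_part(lu, -1, 0) - diag + tf.eye(2)
+upper = tf.linalg.band_part(lu, 0, -1)
+
+print(tf.matmul(lower, upper))  # equals `x` with rows permuted according to `p`
+print(p)                        # pivot permutation indices
+```
+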
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MakeIterator.md b/site/en/api_docs/python/tf/raw_ops/MakeIterator.md new file mode 100644 index 00000000000..3790f5fd631 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MakeIterator.md @@ -0,0 +1,86 @@ +description: Makes a new iterator from the given dataset and stores it in iterator. + +
+ + +
+ +# tf.raw_ops.MakeIterator + + + + + + + + + +Makes a new iterator from the given `dataset` and stores it in `iterator`. + + + + + + + + + +This operation may be executed multiple times. Each execution will reset the +iterator in `iterator` to the first element of `dataset`. + + + + + + + + + + + + + + + + +
+`dataset` + +A `Tensor` of type `variant`. +
+`iterator` + +A `Tensor` of type `resource`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MapAndBatchDataset.md b/site/en/api_docs/python/tf/raw_ops/MapAndBatchDataset.md new file mode 100644 index 00000000000..d2b2f757389 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MapAndBatchDataset.md @@ -0,0 +1,151 @@ +description: Creates a dataset that fuses mapping with batching. + +
+ + +
+ +# tf.raw_ops.MapAndBatchDataset + + + + + + + + + +Creates a dataset that fuses mapping with batching. + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset` and then +batches `batch_size` of them. + +Unlike a "MapDataset", which applies `f` sequentially, this dataset invokes up +to `batch_size * num_parallel_batches` copies of `f` in parallel. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when building a closure +for `f`. +
+`batch_size` + +A `Tensor` of type `int64`. +A scalar representing the number of elements to accumulate in a +batch. It determines the number of concurrent invocations of `f` that process +elements from `input_dataset` in parallel. +
+`num_parallel_calls` + +A `Tensor` of type `int64`. +A scalar representing the maximum number of parallel invocations of the `map_fn` +function. Applying the `map_fn` on consecutive input elements in parallel has +the potential to improve input pipeline throughput. +
+`drop_remainder` + +A `Tensor` of type `bool`. +A scalar representing whether the last batch should be dropped in case its size +is smaller than desired. +
+`f` + +A function decorated with @Defun. +A function to apply to the outputs of `input_dataset`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`preserve_cardinality` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
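+
+#### Example (illustrative):
+
+This raw op is normally produced by tf.data's graph rewrites rather than called
+directly. A sketch of the public-API pipeline whose behaviour it fuses
+(parallel map followed by batch); the dataset and map function are arbitrary:
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(10)
+# Map and batch as separate stages; the tf.data optimizer may fuse them into a
+# single MapAndBatchDataset node when map-and-batch fusion is enabled.
+ds = ds.map(lambda x: x * 2, num_parallel_calls=2).batch(4)
+for batch in ds:
+    print(batch)  # [0 2 4 6], then [8 10 12 14], then [16 18]
+```
+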
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MapClear.md b/site/en/api_docs/python/tf/raw_ops/MapClear.md new file mode 100644 index 00000000000..423bccb4ddb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MapClear.md @@ -0,0 +1,105 @@ +description: Op removes all elements in the underlying container. + +
+ + +
+ +# tf.raw_ops.MapClear + + + + + + + + + +Op removes all elements in the underlying container. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MapDataset.md b/site/en/api_docs/python/tf/raw_ops/MapDataset.md new file mode 100644 index 00000000000..21c0f9890af --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MapDataset.md @@ -0,0 +1,120 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.MapDataset + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +
+`f` + +A function decorated with @Defun. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`use_inter_op_parallelism` + +An optional `bool`. Defaults to `True`. +
+`preserve_cardinality` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MapDefun.md b/site/en/api_docs/python/tf/raw_ops/MapDefun.md new file mode 100644 index 00000000000..5b7ce13d6e9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MapDefun.md @@ -0,0 +1,129 @@ +description: Maps a function on the list of tensors unpacked from arguments on dimension 0. + +
+ + +
+ +# tf.raw_ops.MapDefun + + + + + + + + + +Maps a function on the list of tensors unpacked from arguments on dimension 0. + + + + + + + + +The function given by `f` is assumed to be stateless, and is executed +concurrently on all the slices; up to batch_size (i.e. the size of the 0th +dimension of each argument) functions will be scheduled at once. + +The `max_intra_op_parallelism` attr, which defaults to 1, can be used to +limit the intra op parallelism. To limit inter-op parallelism, a user can +set a private threadpool on the dataset using tf.data.Options's +`ThreadingOptions`. + +Note that this op is not exposed to users directly, but is invoked in tf.data +rewrites. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`arguments` + +A list of `Tensor` objects. +A list of tensors whose types are `Targuments`, corresponding to the inputs +the function should be mapped over. +
+`captured_inputs` + +A list of `Tensor` objects. +A list of tensors whose types are `Tcaptured`, corresponding to the captured +inputs of the defun. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +A list of types. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +A list of shapes. +
+`f` + +A function decorated with @Defun. +
+`max_intra_op_parallelism` + +An optional `int`. Defaults to `1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `output_types`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MapIncompleteSize.md b/site/en/api_docs/python/tf/raw_ops/MapIncompleteSize.md new file mode 100644 index 00000000000..dc3ea8940ff --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MapIncompleteSize.md @@ -0,0 +1,105 @@ +description: Op returns the number of incomplete elements in the underlying container. + +
+ + +
+ +# tf.raw_ops.MapIncompleteSize + + + + + + + + + +Op returns the number of incomplete elements in the underlying container. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MapPeek.md b/site/en/api_docs/python/tf/raw_ops/MapPeek.md new file mode 100644 index 00000000000..57e1917f6e1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MapPeek.md @@ -0,0 +1,122 @@ +description: Op peeks at the values at the specified key. If the + +
+ + +
+ +# tf.raw_ops.MapPeek + + + + + + + + + +Op peeks at the values at the specified key. If the + + + + + + + + + +underlying container does not contain this key +this op will block until it does. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A `Tensor` of type `int64`. +
+`indices` + +A `Tensor` of type `int32`. +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `dtypes`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MapSize.md b/site/en/api_docs/python/tf/raw_ops/MapSize.md new file mode 100644 index 00000000000..182bba99009 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MapSize.md @@ -0,0 +1,105 @@ +description: Op returns the number of elements in the underlying container. + +
+ + +
+ +# tf.raw_ops.MapSize + + + + + + + + + +Op returns the number of elements in the underlying container. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MapStage.md b/site/en/api_docs/python/tf/raw_ops/MapStage.md new file mode 100644 index 00000000000..dd3b82105d5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MapStage.md @@ -0,0 +1,133 @@ +description: Stage (key, values) in the underlying container which behaves like a hashtable. + +
+ + +
+ +# tf.raw_ops.MapStage + + + + + + + + + +Stage (key, values) in the underlying container which behaves like a hashtable. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A `Tensor` of type `int64`. +
+`indices` + +A `Tensor` of type `int32`. +
+`values` + +A list of `Tensor` objects. The list of tensors to stage. +
+`dtypes` + +A list of `tf.DTypes`. A list of data types that inserted values should adhere to. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +Maximum number of elements in the Staging Area. If > 0, inserts +on the container will block when the capacity is reached. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. Otherwise, +a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +It is necessary to match this name to the matching Unstage Op. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MapUnstage.md b/site/en/api_docs/python/tf/raw_ops/MapUnstage.md new file mode 100644 index 00000000000..0d2c1dd1326 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MapUnstage.md @@ -0,0 +1,122 @@ +description: Op removes and returns the values associated with the key + +
+ + +
+ +# tf.raw_ops.MapUnstage + + + + + + + + + +Op removes and returns the values associated with the key + + + + + + + + + +from the underlying container. If the underlying container +does not contain this key, the op will block until it does. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A `Tensor` of type `int64`. +
+`indices` + +A `Tensor` of type `int32`. +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `dtypes`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MapUnstageNoKey.md b/site/en/api_docs/python/tf/raw_ops/MapUnstageNoKey.md new file mode 100644 index 00000000000..b769759c8d6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MapUnstageNoKey.md @@ -0,0 +1,129 @@ +description: Op removes and returns a random (key, value) + +
+ + +
+ +# tf.raw_ops.MapUnstageNoKey + + + + + + + + + +Op removes and returns a random (key, value) + + + + + + + + + +from the underlying container. If the underlying container +does not contain elements, the op will block until it does. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`indices` + +A `Tensor` of type `int32`. +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (key, values). +
+`key` + +A `Tensor` of type `int64`. +
+`values` + +A list of `Tensor` objects of type `dtypes`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatMul.md b/site/en/api_docs/python/tf/raw_ops/MatMul.md new file mode 100644 index 00000000000..a7ced8f311f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatMul.md @@ -0,0 +1,107 @@ +description: Multiply the matrix "a" by the matrix "b". + +
+ + +
+ +# tf.raw_ops.MatMul + + + + + + + + + +Multiply the matrix "a" by the matrix "b". + + + + + + + + + +The inputs must be two-dimensional matrices and the inner dimension of +"a" (after being transposed if transpose_a is true) must match the +outer dimension of "b" (after being transposed if transposed_b is +true). + +*Note*: The default kernel implementation for MatMul on GPUs uses +cublas. + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`b` + +A `Tensor`. Must have the same type as `a`. +
+`transpose_a` + +An optional `bool`. Defaults to `False`. +If true, "a" is transposed before multiplication. +
+`transpose_b` + +An optional `bool`. Defaults to `False`. +If true, "b" is transposed before multiplication. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
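+
+#### Example (illustrative):
+
+A brief sketch with arbitrary values; `transpose_b=True` lets the kernel
+transpose "b" internally instead of materializing the transpose first:
+
+```python
+import tensorflow as tf
+
+a = tf.constant([[1., 2., 3.],
+                 [4., 5., 6.]])   # shape (2, 3)
+b = tf.constant([[1., 0., 1.],
+                 [0., 1., 1.]])   # shape (2, 3)
+
+c = tf.raw_ops.MatMul(a=a, b=b, transpose_b=True)
+print(c)  # shape (2, 2): a @ transpose(b)
+```
+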
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatchingFiles.md b/site/en/api_docs/python/tf/raw_ops/MatchingFiles.md new file mode 100644 index 00000000000..41a548c9c9f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatchingFiles.md @@ -0,0 +1,81 @@ +description: Returns the set of files matching one or more glob patterns. + +
+ + +
+ +# tf.raw_ops.MatchingFiles + + + + + + + + + +Returns the set of files matching one or more glob patterns. + + + + + + + + + +Note that this routine only supports wildcard characters in the +basename portion of the pattern, not in the directory portion. +Note also that the order of filenames returned is deterministic. + + + + + + + + + + + + + +
+`pattern` + +A `Tensor` of type `string`. +Shell wildcard pattern(s). Scalar or vector of type string. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
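+
+#### Example (illustrative):
+
+A sketch using a temporary directory and made-up file names; note that the
+wildcard appears only in the basename, as required:
+
+```python
+import os
+import tempfile
+
+import tensorflow as tf
+
+d = tempfile.mkdtemp()
+for name in ("a.txt", "b.txt", "notes.md"):
+    open(os.path.join(d, name), "w").close()
+
+files = tf.raw_ops.MatchingFiles(pattern=os.path.join(d, "*.txt"))
+print(files)  # a string tensor containing the two .txt paths
+```
+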
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatchingFilesDataset.md b/site/en/api_docs/python/tf/raw_ops/MatchingFilesDataset.md new file mode 100644 index 00000000000..3b018579039 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatchingFilesDataset.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.MatchingFilesDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`patterns` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixBandPart.md b/site/en/api_docs/python/tf/raw_ops/MatrixBandPart.md new file mode 100644 index 00000000000..206f7c75f2e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixBandPart.md @@ -0,0 +1,138 @@ +description: Copy a tensor setting everything outside a central band in each innermost matrix + +
+ + +
+ +# tf.raw_ops.MatrixBandPart + + + + + + + + + +Copy a tensor setting everything outside a central band in each innermost matrix + + + + + + + + + +to zero. + +The `band` part is computed as follows: +Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a +tensor with the same shape where + +`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`. + +The indicator function + +`in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower)) && + (num_upper < 0 || (n-m) <= num_upper)`. + +#### For example: + + + +``` +# if 'input' is [[ 0, 1, 2, 3] + [-1, 0, 1, 2] + [-2, -1, 0, 1] + [-3, -2, -1, 0]], + +tf.matrix_band_part(input, 1, -1) ==> [[ 0, 1, 2, 3] + [-1, 0, 1, 2] + [ 0, -1, 0, 1] + [ 0, 0, -1, 0]], + +tf.matrix_band_part(input, 2, 1) ==> [[ 0, 1, 0, 0] + [-1, 0, 1, 0] + [-2, -1, 0, 1] + [ 0, -2, -1, 0]] +``` + +#### Useful special cases: + + + +``` + tf.matrix_band_part(input, 0, -1) ==> Upper triangular part. + tf.matrix_band_part(input, -1, 0) ==> Lower triangular part. + tf.matrix_band_part(input, 0, 0) ==> Diagonal. +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Rank `k` tensor. +
+`num_lower` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +0-D tensor. Number of subdiagonals to keep. If negative, keep entire +lower triangle. +
+`num_upper` + +A `Tensor`. Must have the same type as `num_lower`. +0-D tensor. Number of superdiagonals to keep. If negative, keep +entire upper triangle. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixDeterminant.md b/site/en/api_docs/python/tf/raw_ops/MatrixDeterminant.md new file mode 100644 index 00000000000..d90db7c128f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixDeterminant.md @@ -0,0 +1,81 @@ +description: Computes the determinant of one or more square matrices. + +
+ + +
+ +# tf.raw_ops.MatrixDeterminant + + + + + + + + + +Computes the determinant of one or more square matrices. + + + + + + + + + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a tensor containing the determinants +for all input submatrices `[..., :, :]`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
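+
+#### Example (illustrative):
+
+A minimal batched sketch with arbitrary values:
+
+```python
+import tensorflow as tf
+
+# A batch of two 2x2 matrices.
+x = tf.constant([[[1., 2.], [3., 4.]],
+                 [[2., 0.], [0., 5.]]])
+print(tf.raw_ops.MatrixDeterminant(input=x))  # [-2., 10.]
+```
+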
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixDiag.md b/site/en/api_docs/python/tf/raw_ops/MatrixDiag.md new file mode 100644 index 00000000000..03fd5d6cfd3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixDiag.md @@ -0,0 +1,105 @@ +description: Returns a batched diagonal tensor with a given batched diagonal values. + +
+ + +
+ +# tf.raw_ops.MatrixDiag + + + + + + + + + +Returns a batched diagonal tensor with a given batched diagonal values. + + + + + + + + + +Given a `diagonal`, this operation returns a tensor with the `diagonal` and +everything else padded with zeros. The diagonal is computed as follows: + +Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a +tensor of rank `k+1` with dimensions [I, J, K, ..., N, N]` where: + +`output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`. + +#### For example: + + + +``` +# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]] + +and diagonal.shape = (2, 4) + +tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0] + [0, 2, 0, 0] + [0, 0, 3, 0] + [0, 0, 0, 4]], + [[5, 0, 0, 0] + [0, 6, 0, 0] + [0, 0, 7, 0] + [0, 0, 0, 8]]] + +which has shape (2, 4, 4) +``` + + + + + + + + + + + + + +
+`diagonal` + +A `Tensor`. Rank `k`, where `k >= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `diagonal`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixDiagPart.md b/site/en/api_docs/python/tf/raw_ops/MatrixDiagPart.md new file mode 100644 index 00000000000..89b284478fb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixDiagPart.md @@ -0,0 +1,107 @@ +description: Returns the batched diagonal part of a batched tensor. + +
+ + +
+ +# tf.raw_ops.MatrixDiagPart + + + + + + + + + +Returns the batched diagonal part of a batched tensor. + + + + + + + + + +This operation returns a tensor with the `diagonal` part +of the batched `input`. The `diagonal` part is computed as follows: + +Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a +tensor of rank `k - 1` with dimensions `[I, J, K, ..., min(M, N)]` where: + +`diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]`. + +The input must be at least a matrix. + +#### For example: + + + +``` +# 'input' is [[[1, 0, 0, 0] + [0, 2, 0, 0] + [0, 0, 3, 0] + [0, 0, 0, 4]], + [[5, 0, 0, 0] + [0, 6, 0, 0] + [0, 0, 7, 0] + [0, 0, 0, 8]]] + +and input.shape = (2, 4, 4) + +tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]] + +which has shape (2, 4) +``` + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Rank `k` tensor where `k >= 2`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixDiagPartV2.md b/site/en/api_docs/python/tf/raw_ops/MatrixDiagPartV2.md new file mode 100644 index 00000000000..d03f92c9869 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixDiagPartV2.md @@ -0,0 +1,167 @@ +description: Returns the batched diagonal part of a batched tensor. + +
+ + +
+ +# tf.raw_ops.MatrixDiagPartV2 + + + + + + + + + +Returns the batched diagonal part of a batched tensor. + + + + + + + + + +Returns a tensor with the `k[0]`-th to `k[1]`-th diagonals of the batched +`input`. + +Assume `input` has `r` dimensions `[I, J, ..., L, M, N]`. +Let `max_diag_len` be the maximum length among all diagonals to be extracted, +`max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))` +Let `num_diags` be the number of diagonals to extract, +`num_diags = k[1] - k[0] + 1`. + +If `num_diags == 1`, the output tensor is of rank `r - 1` with shape +`[I, J, ..., L, max_diag_len]` and values: + +``` +diagonal[i, j, ..., l, n] + = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N, + padding_value ; otherwise. +``` +where `y = max(-k[1], 0)`, `x = max(k[1], 0)`. + +Otherwise, the output tensor has rank `r` with dimensions +`[I, J, ..., L, num_diags, max_diag_len]` with values: + +``` +diagonal[i, j, ..., l, m, n] + = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N, + padding_value ; otherwise. +``` +where `d = k[1] - m`, `y = max(-d, 0)`, and `x = max(d, 0)`. + +The input must be at least a matrix. + +#### For example: + + + +``` +input = np.array([[[1, 2, 3, 4], # Input shape: (2, 3, 4) + [5, 6, 7, 8], + [9, 8, 7, 6]], + [[5, 4, 3, 2], + [1, 2, 3, 4], + [5, 6, 7, 8]]]) + +# A main diagonal from each batch. +tf.matrix_diag_part(input) ==> [[1, 6, 7], # Output shape: (2, 3) + [5, 2, 7]] + +# A superdiagonal from each batch. +tf.matrix_diag_part(input, k = 1) + ==> [[2, 7, 6], # Output shape: (2, 3) + [4, 3, 8]] + +# A tridiagonal band from each batch. +tf.matrix_diag_part(input, k = (-1, 1)) + ==> [[[2, 7, 6], # Output shape: (2, 3, 3) + [1, 6, 7], + [5, 8, 0]], + [[4, 3, 8], + [5, 2, 7], + [1, 6, 0]]] + +# Padding value = 9 +tf.matrix_diag_part(input, k = (1, 3), padding_value = 9) + ==> [[[4, 9, 9], # Output shape: (2, 3, 3) + [3, 8, 9], + [2, 7, 6]], + [[2, 9, 9], + [3, 4, 9], + [4, 3, 8]]] +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Rank `r` tensor where `r >= 2`. +
+`k` + +A `Tensor` of type `int32`. +Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main +diagonal, and negative value means subdiagonals. `k` can be a single integer +(for a single diagonal) or a pair of integers specifying the low and high ends +of a matrix band. `k[0]` must not be larger than `k[1]`. +
+`padding_value` + +A `Tensor`. Must have the same type as `input`. +The value to fill the area outside the specified diagonal band with. +Default is 0. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixDiagPartV3.md b/site/en/api_docs/python/tf/raw_ops/MatrixDiagPartV3.md new file mode 100644 index 00000000000..26f2d0daa2d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixDiagPartV3.md @@ -0,0 +1,212 @@ +description: Returns the batched diagonal part of a batched tensor. + +
+ + +
+ +# tf.raw_ops.MatrixDiagPartV3 + + + + + + + + + +Returns the batched diagonal part of a batched tensor. + + + + + + + + + +Returns a tensor with the `k[0]`-th to `k[1]`-th diagonals of the batched +`input`. + +Assume `input` has `r` dimensions `[I, J, ..., L, M, N]`. +Let `max_diag_len` be the maximum length among all diagonals to be extracted, +`max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))` +Let `num_diags` be the number of diagonals to extract, +`num_diags = k[1] - k[0] + 1`. + +If `num_diags == 1`, the output tensor is of rank `r - 1` with shape +`[I, J, ..., L, max_diag_len]` and values: + +``` +diagonal[i, j, ..., l, n] + = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N, + padding_value ; otherwise. +``` +where `y = max(-k[1], 0)`, `x = max(k[1], 0)`. + +Otherwise, the output tensor has rank `r` with dimensions +`[I, J, ..., L, num_diags, max_diag_len]` with values: + +``` +diagonal[i, j, ..., l, m, n] + = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N, + padding_value ; otherwise. +``` +where `d = k[1] - m`, `y = max(-d, 0) - offset`, and `x = max(d, 0) - offset`. + +`offset` is zero except when the alignment of the diagonal is to the right. +``` +offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT} + and `d >= 0`) or + (`align` in {LEFT_RIGHT, RIGHT_RIGHT} + and `d <= 0`) + 0 ; otherwise +``` +where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`. + +The input must be at least a matrix. + +#### For example: + + + +``` +input = np.array([[[1, 2, 3, 4], # Input shape: (2, 3, 4) + [5, 6, 7, 8], + [9, 8, 7, 6]], + [[5, 4, 3, 2], + [1, 2, 3, 4], + [5, 6, 7, 8]]]) + +# A main diagonal from each batch. +tf.matrix_diag_part(input) ==> [[1, 6, 7], # Output shape: (2, 3) + [5, 2, 7]] + +# A superdiagonal from each batch. +tf.matrix_diag_part(input, k = 1) + ==> [[2, 7, 6], # Output shape: (2, 3) + [4, 3, 8]] + +# A band from each batch. +tf.matrix_diag_part(input, k = (-1, 2)) + ==> [[[0, 3, 8], # Output shape: (2, 4, 3) + [2, 7, 6], + [1, 6, 7], + [5, 8, 0]], + [[0, 3, 4], + [4, 3, 8], + [5, 2, 7], + [1, 6, 0]]] + +# LEFT_RIGHT alignment. +tf.matrix_diag_part(input, k = (-1, 2), align="LEFT_RIGHT") + ==> [[[3, 8, 0], # Output shape: (2, 4, 3) + [2, 7, 6], + [1, 6, 7], + [0, 5, 8]], + [[3, 4, 0], + [4, 3, 8], + [5, 2, 7], + [0, 1, 6]]] + +# max_diag_len can be shorter than the main diagonal. +tf.matrix_diag_part(input, k = (-2, -1)) + ==> [[[5, 8], + [9, 0]], + [[1, 6], + [5, 0]]] + +# padding_value = 9 +tf.matrix_diag_part(input, k = (1, 3), padding_value = 9) + ==> [[[9, 9, 4], # Output shape: (2, 3, 3) + [9, 3, 8], + [2, 7, 6]], + [[9, 9, 2], + [9, 3, 4], + [4, 3, 8]]] + +``` + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Rank `r` tensor where `r >= 2`. +
+`k` + +A `Tensor` of type `int32`. +Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main +diagonal, and negative value means subdiagonals. `k` can be a single integer +(for a single diagonal) or a pair of integers specifying the low and high ends +of a matrix band. `k[0]` must not be larger than `k[1]`. +
+`padding_value` + +A `Tensor`. Must have the same type as `input`. +The value to fill the area outside the specified diagonal band with. +Default is 0. +
+`align` + +An optional `string` from: `"LEFT_RIGHT", "RIGHT_LEFT", "LEFT_LEFT", "RIGHT_RIGHT"`. Defaults to `"RIGHT_LEFT"`. +Some diagonals are shorter than `max_diag_len` and need to be padded. `align` is +a string specifying how superdiagonals and subdiagonals should be aligned, +respectively. There are four possible alignments: "RIGHT_LEFT" (default), +"LEFT_RIGHT", "LEFT_LEFT", and "RIGHT_RIGHT". "RIGHT_LEFT" aligns superdiagonals +to the right (left-pads the row) and subdiagonals to the left (right-pads the +row). It is the packing format LAPACK uses. cuSPARSE uses "LEFT_RIGHT", which is +the opposite alignment. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixDiagV2.md b/site/en/api_docs/python/tf/raw_ops/MatrixDiagV2.md new file mode 100644 index 00000000000..fba48916c3e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixDiagV2.md @@ -0,0 +1,206 @@ +description: Returns a batched diagonal tensor with given batched diagonal values. + +
+ + +
+ +# tf.raw_ops.MatrixDiagV2 + + + + + + + + + +Returns a batched diagonal tensor with given batched diagonal values. + + + + + + + + + +Returns a tensor with the contents in `diagonal` as `k[0]`-th to `k[1]`-th +diagonals of a matrix, with everything else padded with `padding`. `num_rows` +and `num_cols` specify the dimension of the innermost matrix of the output. If +both are not specified, the op assumes the innermost matrix is square and infers +its size from `k` and the innermost dimension of `diagonal`. If only one of them +is specified, the op assumes the unspecified value is the smallest possible +based on other criteria. + +Let `diagonal` have `r` dimensions `[I, J, ..., L, M, N]`. The output tensor has +rank `r+1` with shape `[I, J, ..., L, M, num_rows, num_cols]` when only one +diagonal is given (`k` is an integer or `k[0] == k[1]`). Otherwise, it has rank +`r` with shape `[I, J, ..., L, num_rows, num_cols]`. + +The second innermost dimension of `diagonal` has double meaning. +When `k` is scalar or `k[0] == k[1]`, `M` is part of the batch size +[I, J, ..., M], and the output tensor is: + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper + padding_value ; otherwise +``` + +Otherwise, `M` is treated as the number of diagonals for the matrix in the +same batch (`M = k[1]-k[0]+1`), and the output tensor is: + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1] + padding_value ; otherwise +``` +where `d = n - m`, `diag_index = k[1] - d`, and `index_in_diag = n - max(d, 0)`. + +#### For example: + + + +``` +# The main diagonal. +diagonal = np.array([[1, 2, 3, 4], # Input shape: (2, 4) + [5, 6, 7, 8]]) +tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0], # Output shape: (2, 4, 4) + [0, 2, 0, 0], + [0, 0, 3, 0], + [0, 0, 0, 4]], + [[5, 0, 0, 0], + [0, 6, 0, 0], + [0, 0, 7, 0], + [0, 0, 0, 8]]] + +# A superdiagonal (per batch). +diagonal = np.array([[1, 2, 3], # Input shape: (2, 3) + [4, 5, 6]]) +tf.matrix_diag(diagonal, k = 1) + ==> [[[0, 1, 0, 0], # Output shape: (2, 4, 4) + [0, 0, 2, 0], + [0, 0, 0, 3], + [0, 0, 0, 0]], + [[0, 4, 0, 0], + [0, 0, 5, 0], + [0, 0, 0, 6], + [0, 0, 0, 0]]] + +# A band of diagonals. +diagonals = np.array([[[1, 2, 3], # Input shape: (2, 2, 3) + [4, 5, 0]], + [[6, 7, 9], + [9, 1, 0]]]) +tf.matrix_diag(diagonals, k = (-1, 0)) + ==> [[[1, 0, 0], # Output shape: (2, 3, 3) + [4, 2, 0], + [0, 5, 3]], + [[6, 0, 0], + [9, 7, 0], + [0, 1, 9]]] + +# Rectangular matrix. +diagonal = np.array([1, 2]) # Input shape: (2) +tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4) + ==> [[0, 0, 0, 0], # Output shape: (3, 4) + [1, 0, 0, 0], + [0, 2, 0, 0]] + +# Rectangular matrix with inferred num_cols and padding_value = 9. +tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9) + ==> [[9, 9], # Output shape: (3, 2) + [1, 9], + [9, 2]] +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`diagonal` + +A `Tensor`. Rank `r`, where `r >= 1` +
+`k` + +A `Tensor` of type `int32`. +Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main +diagonal, and negative value means subdiagonals. `k` can be a single integer +(for a single diagonal) or a pair of integers specifying the low and high ends +of a matrix band. `k[0]` must not be larger than `k[1]`. +
+`num_rows` + +A `Tensor` of type `int32`. +The number of rows of the output matrix. If it is not provided, the op assumes +the output matrix is a square matrix and infers the matrix size from k and the +innermost dimension of `diagonal`. +
+`num_cols` + +A `Tensor` of type `int32`. +The number of columns of the output matrix. If it is not provided, the op +assumes the output matrix is a square matrix and infers the matrix size from +k and the innermost dimension of `diagonal`. +
+`padding_value` + +A `Tensor`. Must have the same type as `diagonal`. +The number to fill the area outside the specified diagonal band with. +Default is 0. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `diagonal`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixDiagV3.md b/site/en/api_docs/python/tf/raw_ops/MatrixDiagV3.md new file mode 100644 index 00000000000..caa2472028c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixDiagV3.md @@ -0,0 +1,249 @@ +description: Returns a batched diagonal tensor with given batched diagonal values. + +
+ + +
+ +# tf.raw_ops.MatrixDiagV3 + + + + + + + + + +Returns a batched diagonal tensor with given batched diagonal values. + + + + + + + + + +Returns a tensor with the contents in `diagonal` as `k[0]`-th to `k[1]`-th +diagonals of a matrix, with everything else padded with `padding`. `num_rows` +and `num_cols` specify the dimension of the innermost matrix of the output. If +both are not specified, the op assumes the innermost matrix is square and infers +its size from `k` and the innermost dimension of `diagonal`. If only one of them +is specified, the op assumes the unspecified value is the smallest possible +based on other criteria. + +Let `diagonal` have `r` dimensions `[I, J, ..., L, M, N]`. The output tensor has +rank `r+1` with shape `[I, J, ..., L, M, num_rows, num_cols]` when only one +diagonal is given (`k` is an integer or `k[0] == k[1]`). Otherwise, it has rank +`r` with shape `[I, J, ..., L, num_rows, num_cols]`. + +The second innermost dimension of `diagonal` has double meaning. +When `k` is scalar or `k[0] == k[1]`, `M` is part of the batch size +[I, J, ..., M], and the output tensor is: + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper + padding_value ; otherwise +``` + +Otherwise, `M` is treated as the number of diagonals for the matrix in the +same batch (`M = k[1]-k[0]+1`), and the output tensor is: + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1] + padding_value ; otherwise +``` +where `d = n - m`, `diag_index = [k] - d`, and +`index_in_diag = n - max(d, 0) + offset`. + +`offset` is zero except when the alignment of the diagonal is to the right. +``` +offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT} + and `d >= 0`) or + (`align` in {LEFT_RIGHT, RIGHT_RIGHT} + and `d <= 0`) + 0 ; otherwise +``` +where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`. + +#### For example: + + + +``` +# The main diagonal. +diagonal = np.array([[1, 2, 3, 4], # Input shape: (2, 4) + [5, 6, 7, 8]]) +tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0], # Output shape: (2, 4, 4) + [0, 2, 0, 0], + [0, 0, 3, 0], + [0, 0, 0, 4]], + [[5, 0, 0, 0], + [0, 6, 0, 0], + [0, 0, 7, 0], + [0, 0, 0, 8]]] + +# A superdiagonal (per batch). +diagonal = np.array([[1, 2, 3], # Input shape: (2, 3) + [4, 5, 6]]) +tf.matrix_diag(diagonal, k = 1) + ==> [[[0, 1, 0, 0], # Output shape: (2, 4, 4) + [0, 0, 2, 0], + [0, 0, 0, 3], + [0, 0, 0, 0]], + [[0, 4, 0, 0], + [0, 0, 5, 0], + [0, 0, 0, 6], + [0, 0, 0, 0]]] + +# A tridiagonal band (per batch). +diagonals = np.array([[[0, 8, 9], # Input shape: (2, 2, 3) + [1, 2, 3], + [4, 5, 0]], + [[0, 2, 3], + [6, 7, 9], + [9, 1, 0]]]) +tf.matrix_diag(diagonals, k = (-1, 1)) + ==> [[[1, 8, 0], # Output shape: (2, 3, 3) + [4, 2, 9], + [0, 5, 3]], + [[6, 2, 0], + [9, 7, 3], + [0, 1, 9]]] + +# LEFT_RIGHT alignment. +diagonals = np.array([[[8, 9, 0], # Input shape: (2, 2, 3) + [1, 2, 3], + [0, 4, 5]], + [[2, 3, 0], + [6, 7, 9], + [0, 9, 1]]]) +tf.matrix_diag(diagonals, k = (-1, 1), align="LEFT_RIGHT") + ==> [[[1, 8, 0], # Output shape: (2, 3, 3) + [4, 2, 9], + [0, 5, 3]], + [[6, 2, 0], + [9, 7, 3], + [0, 1, 9]]] + +# Rectangular matrix. +diagonal = np.array([1, 2]) # Input shape: (2) +tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4) + ==> [[0, 0, 0, 0], # Output shape: (3, 4) + [1, 0, 0, 0], + [0, 2, 0, 0]] + +# Rectangular matrix with inferred num_cols and padding_value = 9. 
+tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9) + ==> [[9, 9], # Output shape: (3, 2) + [1, 9], + [9, 2]] + +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`diagonal` + +A `Tensor`. Rank `r`, where `r >= 1` +
+`k` + +A `Tensor` of type `int32`. +Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main +diagonal, and negative value means subdiagonals. `k` can be a single integer +(for a single diagonal) or a pair of integers specifying the low and high ends +of a matrix band. `k[0]` must not be larger than `k[1]`. +
+`num_rows` + +A `Tensor` of type `int32`. +The number of rows of the output matrix. If it is not provided, the op assumes +the output matrix is a square matrix and infers the matrix size from k and the +innermost dimension of `diagonal`. +
+`num_cols` + +A `Tensor` of type `int32`. +The number of columns of the output matrix. If it is not provided, the op +assumes the output matrix is a square matrix and infers the matrix size from +k and the innermost dimension of `diagonal`. +
+`padding_value` + +A `Tensor`. Must have the same type as `diagonal`. +The number to fill the area outside the specified diagonal band with. +Default is 0. +
+`align` + +An optional `string` from: `"LEFT_RIGHT", "RIGHT_LEFT", "LEFT_LEFT", "RIGHT_RIGHT"`. Defaults to `"RIGHT_LEFT"`. +Some diagonals are shorter than `max_diag_len` and need to be padded. `align` is +a string specifying how superdiagonals and subdiagonals should be aligned, +respectively. There are four possible alignments: "RIGHT_LEFT" (default), +"LEFT_RIGHT", "LEFT_LEFT", and "RIGHT_RIGHT". "RIGHT_LEFT" aligns superdiagonals +to the right (left-pads the row) and subdiagonals to the left (right-pads the +row). It is the packing format LAPACK uses. cuSPARSE uses "LEFT_RIGHT", which is +the opposite alignment. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `diagonal`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixExponential.md b/site/en/api_docs/python/tf/raw_ops/MatrixExponential.md new file mode 100644 index 00000000000..5fbf7a41731 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixExponential.md @@ -0,0 +1,77 @@ +description: Deprecated, use python implementation tf.linalg.matrix_exponential. + +
+ + +
+ +# tf.raw_ops.MatrixExponential + + + + + + + + + +Deprecated, use python implementation tf.linalg.matrix_exponential. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
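+
+#### Example:
+
+Since this raw op is deprecated, the sketch below (illustrative values, TensorFlow 2.x
+eager execution assumed) uses the recommended public wrapper `tf.linalg.expm`, which is
+the exported name of the Python matrix-exponential implementation mentioned above.
+
+```
+import tensorflow as tf
+
+# expm of a diagonal matrix exponentiates the diagonal entries:
+# expm(diag(a, b)) == diag(e**a, e**b).
+x = tf.constant([[1.0, 0.0],
+                 [0.0, 2.0]], dtype=tf.float64)
+
+y = tf.linalg.expm(x)   # Recommended replacement for this raw op.
+print(y.numpy())        # approximately [[2.7183, 0.], [0., 7.3891]]
+```
+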
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixInverse.md b/site/en/api_docs/python/tf/raw_ops/MatrixInverse.md new file mode 100644 index 00000000000..4c0bee6f088 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixInverse.md @@ -0,0 +1,96 @@ +description: Computes the inverse of one or more square invertible matrices or their + +
+ + +
+ +# tf.raw_ops.MatrixInverse + + + + + + + + + +Computes the inverse of one or more square invertible matrices or their + + + + + + + + + +adjoints (conjugate transposes). + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a tensor of the same shape as the input +containing the inverse for all input submatrices `[..., :, :]`. + +The op uses LU decomposition with partial pivoting to compute the inverses. + +If a matrix is not invertible there is no guarantee what the op does. It +may detect the condition and raise an exception or it may simply return a +garbage result. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`adjoint` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
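+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed); the matrix values are illustrative
+only, and in most code the public wrapper `tf.linalg.inv` is used instead of calling the
+raw op directly.
+
+```
+import tensorflow as tf
+
+a = tf.constant([[2.0, 0.0],
+                 [0.0, 4.0]])            # Shape [M, M]; must be invertible.
+
+# tf.raw_ops functions are called with keyword arguments.
+a_inv = tf.raw_ops.MatrixInverse(input=a, adjoint=False)
+print(a_inv.numpy())                     # [[0.5, 0.], [0., 0.25]]
+
+# Sanity check: A @ A^-1 is (numerically) the identity.
+print(tf.matmul(a, a_inv).numpy())
+```
+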
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixLogarithm.md b/site/en/api_docs/python/tf/raw_ops/MatrixLogarithm.md new file mode 100644 index 00000000000..535a2b12501 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixLogarithm.md @@ -0,0 +1,93 @@ +description: Computes the matrix logarithm of one or more square matrices: + +
+ + +
+ +# tf.raw_ops.MatrixLogarithm + + + + + + + + + +Computes the matrix logarithm of one or more square matrices: + + + + + + + + + + +\\(log(exp(A)) = A\\) + +This op is only defined for complex matrices. If A is positive-definite and +real, then casting to a complex matrix, taking the logarithm and casting back +to a real matrix will give the correct result. + +This function computes the matrix logarithm using the Schur-Parlett algorithm. +Details of the algorithm can be found in Section 11.6.2 of: +Nicholas J. Higham, Functions of Matrices: Theory and Computation, SIAM 2008. +ISBN 978-0-898716-46-7. + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a tensor of the same shape as the input +containing the exponential for all input submatrices `[..., :, :]`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
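+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed). The op only accepts complex input,
+so an illustrative real positive-definite matrix is cast to `complex64` first; the public
+wrapper is `tf.linalg.logm`.
+
+```
+import tensorflow as tf
+
+a = tf.cast(tf.constant([[2.0, 0.0],
+                         [0.0, 3.0]]), tf.complex64)  # Complex input required.
+
+log_a = tf.raw_ops.MatrixLogarithm(input=a)
+print(log_a.numpy())                  # approximately diag(log 2, log 3)
+
+# Round trip: the matrix exponential of the logarithm recovers the input.
+print(tf.linalg.expm(log_a).numpy())  # approximately [[2, 0], [0, 3]]
+```
+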
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixSetDiag.md b/site/en/api_docs/python/tf/raw_ops/MatrixSetDiag.md new file mode 100644 index 00000000000..344b82fc37a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixSetDiag.md @@ -0,0 +1,97 @@ +description: Returns a batched matrix tensor with new batched diagonal values. + +
+ + +
+ +# tf.raw_ops.MatrixSetDiag + + + + + + + + + +Returns a batched matrix tensor with new batched diagonal values. + + + + + + + + + +Given `input` and `diagonal`, this operation returns a tensor with the +same shape and values as `input`, except for the main diagonal of the +innermost matrices. These will be overwritten by the values in `diagonal`. + +The output is computed as follows: + +Assume `input` has `k+1` dimensions `[I, J, K, ..., M, N]` and `diagonal` has +`k` dimensions `[I, J, K, ..., min(M, N)]`. Then the output is a +tensor of rank `k+1` with dimensions `[I, J, K, ..., M, N]` where: + + * `output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]` for `m == n`. + * `output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n]` for `m != n`. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Rank `k+1`, where `k >= 1`. +
+`diagonal` + +A `Tensor`. Must have the same type as `input`. +Rank `k`, where `k >= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
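+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed) with made-up values: a batch of two
+3x4 matrices filled with 7 receives new main diagonals.
+
+```
+import tensorflow as tf
+
+x = tf.fill([2, 3, 4], 7)               # Batch of two 3x4 matrices, all 7s.
+diag = tf.constant([[1, 2, 3],
+                    [4, 5, 6]])         # One main diagonal per batch entry.
+
+out = tf.raw_ops.MatrixSetDiag(input=x, diagonal=diag)
+print(out[0].numpy())
+# [[1 7 7 7]
+#  [7 2 7 7]
+#  [7 7 3 7]]
+```
+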
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixSetDiagV2.md b/site/en/api_docs/python/tf/raw_ops/MatrixSetDiagV2.md new file mode 100644 index 00000000000..a9e565ed71c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixSetDiagV2.md @@ -0,0 +1,170 @@ +description: Returns a batched matrix tensor with new batched diagonal values. + +
+ + +
+ +# tf.raw_ops.MatrixSetDiagV2 + + + + + + + + + +Returns a batched matrix tensor with new batched diagonal values. + + + + + + + + + +Given `input` and `diagonal`, this operation returns a tensor with the +same shape and values as `input`, except for the specified diagonals of the +innermost matrices. These will be overwritten by the values in `diagonal`. + +`input` has `r+1` dimensions `[I, J, ..., L, M, N]`. When `k` is scalar or +`k[0] == k[1]`, `diagonal` has `r` dimensions `[I, J, ..., L, max_diag_len]`. +Otherwise, it has `r+1` dimensions `[I, J, ..., L, num_diags, max_diag_len]`. +`num_diags` is the number of diagonals, `num_diags = k[1] - k[0] + 1`. +`max_diag_len` is the longest diagonal in the range `[k[0], k[1]]`, +`max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))` + +The output is a tensor of rank `k+1` with dimensions `[I, J, ..., L, M, N]`. +If `k` is scalar or `k[0] == k[1]`: + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1] + input[i, j, ..., l, m, n] ; otherwise +``` + +Otherwise, + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1] + input[i, j, ..., l, m, n] ; otherwise +``` +where `d = n - m`, `diag_index = k[1] - d`, and `index_in_diag = n - max(d, 0)`. + +#### For example: + + + +``` +# The main diagonal. +input = np.array([[[7, 7, 7, 7], # Input shape: (2, 3, 4) + [7, 7, 7, 7], + [7, 7, 7, 7]], + [[7, 7, 7, 7], + [7, 7, 7, 7], + [7, 7, 7, 7]]]) +diagonal = np.array([[1, 2, 3], # Diagonal shape: (2, 3) + [4, 5, 6]]) +tf.matrix_set_diag(diagonal) ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4) + [7, 2, 7, 7], + [7, 7, 3, 7]], + [[4, 7, 7, 7], + [7, 5, 7, 7], + [7, 7, 6, 7]]] + +# A superdiagonal (per batch). +tf.matrix_set_diag(diagonal, k = 1) + ==> [[[7, 1, 7, 7], # Output shape: (2, 3, 4) + [7, 7, 2, 7], + [7, 7, 7, 3]], + [[7, 4, 7, 7], + [7, 7, 5, 7], + [7, 7, 7, 6]]] + +# A band of diagonals. +diagonals = np.array([[[1, 2, 3], # Diagonal shape: (2, 2, 3) + [4, 5, 0]], + [[6, 1, 2], + [3, 4, 0]]]) +tf.matrix_set_diag(diagonals, k = (-1, 0)) + ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4) + [4, 2, 7, 7], + [0, 5, 3, 7]], + [[6, 7, 7, 7], + [3, 1, 7, 7], + [7, 4, 2, 7]]] + +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Rank `r+1`, where `r >= 1`. +
+`diagonal`
+
+A `Tensor`. Must have the same type as `input`.
+Rank `r` when `k` is an integer or `k[0] == k[1]`. Otherwise, it has rank `r+1`.
+`r >= 1`.
+
+`k` + +A `Tensor` of type `int32`. +Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main +diagonal, and negative value means subdiagonals. `k` can be a single integer +(for a single diagonal) or a pair of integers specifying the low and high ends +of a matrix band. `k[0]` must not be larger than `k[1]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixSetDiagV3.md b/site/en/api_docs/python/tf/raw_ops/MatrixSetDiagV3.md new file mode 100644 index 00000000000..b8a85f6dea6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixSetDiagV3.md @@ -0,0 +1,217 @@ +description: Returns a batched matrix tensor with new batched diagonal values. + +
+ + +
+ +# tf.raw_ops.MatrixSetDiagV3 + + + + + + + + + +Returns a batched matrix tensor with new batched diagonal values. + + + + + + + + + +Given `input` and `diagonal`, this operation returns a tensor with the +same shape and values as `input`, except for the specified diagonals of the +innermost matrices. These will be overwritten by the values in `diagonal`. + +`input` has `r+1` dimensions `[I, J, ..., L, M, N]`. When `k` is scalar or +`k[0] == k[1]`, `diagonal` has `r` dimensions `[I, J, ..., L, max_diag_len]`. +Otherwise, it has `r+1` dimensions `[I, J, ..., L, num_diags, max_diag_len]`. +`num_diags` is the number of diagonals, `num_diags = k[1] - k[0] + 1`. +`max_diag_len` is the longest diagonal in the range `[k[0], k[1]]`, +`max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))` + +The output is a tensor of rank `k+1` with dimensions `[I, J, ..., L, M, N]`. +If `k` is scalar or `k[0] == k[1]`: + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1] + input[i, j, ..., l, m, n] ; otherwise +``` + +Otherwise, + +``` +output[i, j, ..., l, m, n] + = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1] + input[i, j, ..., l, m, n] ; otherwise +``` +where `d = n - m`, `diag_index = k[1] - d`, and +`index_in_diag = n - max(d, 0) + offset`. + +`offset` is zero except when the alignment of the diagonal is to the right. +``` +offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT} + and `d >= 0`) or + (`align` in {LEFT_RIGHT, RIGHT_RIGHT} + and `d <= 0`) + 0 ; otherwise +``` +where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`. + +#### For example: + + + +``` +# The main diagonal. +input = np.array([[[7, 7, 7, 7], # Input shape: (2, 3, 4) + [7, 7, 7, 7], + [7, 7, 7, 7]], + [[7, 7, 7, 7], + [7, 7, 7, 7], + [7, 7, 7, 7]]]) +diagonal = np.array([[1, 2, 3], # Diagonal shape: (2, 3) + [4, 5, 6]]) +tf.matrix_set_diag(input, diagonal) + ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4) + [7, 2, 7, 7], + [7, 7, 3, 7]], + [[4, 7, 7, 7], + [7, 5, 7, 7], + [7, 7, 6, 7]]] + +# A superdiagonal (per batch). +tf.matrix_set_diag(input, diagonal, k = 1) + ==> [[[7, 1, 7, 7], # Output shape: (2, 3, 4) + [7, 7, 2, 7], + [7, 7, 7, 3]], + [[7, 4, 7, 7], + [7, 7, 5, 7], + [7, 7, 7, 6]]] + +# A band of diagonals. +diagonals = np.array([[[0, 9, 1], # Diagonal shape: (2, 4, 3) + [6, 5, 8], + [1, 2, 3], + [4, 5, 0]], + [[0, 1, 2], + [5, 6, 4], + [6, 1, 2], + [3, 4, 0]]]) +tf.matrix_set_diag(input, diagonals, k = (-1, 2)) + ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4) + [4, 2, 5, 1], + [7, 5, 3, 8]], + [[6, 5, 1, 7], + [3, 1, 6, 2], + [7, 4, 2, 4]]] + +# LEFT_RIGHT alignment. +diagonals = np.array([[[9, 1, 0], # Diagonal shape: (2, 4, 3) + [6, 5, 8], + [1, 2, 3], + [0, 4, 5]], + [[1, 2, 0], + [5, 6, 4], + [6, 1, 2], + [0, 3, 4]]]) +tf.matrix_set_diag(input, diagonals, k = (-1, 2), align="LEFT_RIGHT") + ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4) + [4, 2, 5, 1], + [7, 5, 3, 8]], + [[6, 5, 1, 7], + [3, 1, 6, 2], + [7, 4, 2, 4]]] + +``` + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Rank `r+1`, where `r >= 1`. +
+`diagonal`
+
+A `Tensor`. Must have the same type as `input`.
+Rank `r` when `k` is an integer or `k[0] == k[1]`. Otherwise, it has rank `r+1`.
+`r >= 1`.
+
+`k` + +A `Tensor` of type `int32`. +Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main +diagonal, and negative value means subdiagonals. `k` can be a single integer +(for a single diagonal) or a pair of integers specifying the low and high ends +of a matrix band. `k[0]` must not be larger than `k[1]`. +
+`align` + +An optional `string` from: `"LEFT_RIGHT", "RIGHT_LEFT", "LEFT_LEFT", "RIGHT_RIGHT"`. Defaults to `"RIGHT_LEFT"`. +Some diagonals are shorter than `max_diag_len` and need to be padded. `align` is +a string specifying how superdiagonals and subdiagonals should be aligned, +respectively. There are four possible alignments: "RIGHT_LEFT" (default), +"LEFT_RIGHT", "LEFT_LEFT", and "RIGHT_RIGHT". "RIGHT_LEFT" aligns superdiagonals +to the right (left-pads the row) and subdiagonals to the left (right-pads the +row). It is the packing format LAPACK uses. cuSPARSE uses "LEFT_RIGHT", which is +the opposite alignment. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixSolve.md b/site/en/api_docs/python/tf/raw_ops/MatrixSolve.md new file mode 100644 index 00000000000..6e35407fb5e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixSolve.md @@ -0,0 +1,101 @@ +description: Solves systems of linear equations. + +
+ + +
+ +# tf.raw_ops.MatrixSolve + + + + + + + + + +Solves systems of linear equations. + + + + + + + + + +`Matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. `Rhs` is a tensor of shape `[..., M, K]`. The `output` is +a tensor shape `[..., M, K]`. If `adjoint` is `False` then each output matrix +satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. +If `adjoint` is `True` then each output matrix satisfies +`adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`. + + + + + + + + + + + + + + + + + + + +
+`matrix` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`rhs` + +A `Tensor`. Must have the same type as `matrix`. +Shape is `[..., M, K]`. +
+`adjoint` + +An optional `bool`. Defaults to `False`. +Boolean indicating whether to solve with `matrix` or its (block-wise) +adjoint. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `matrix`. +
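+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed); the system below is illustrative,
+and the public wrapper `tf.linalg.solve` is the usual entry point.
+
+```
+import tensorflow as tf
+
+matrix = tf.constant([[3.0, 1.0],
+                      [1.0, 2.0]])      # [M, M]
+rhs = tf.constant([[9.0],
+                   [8.0]])              # [M, K]
+
+x = tf.raw_ops.MatrixSolve(matrix=matrix, rhs=rhs, adjoint=False)
+print(x.numpy())                        # [[2.], [3.]]
+print(tf.matmul(matrix, x).numpy())     # Recovers rhs: [[9.], [8.]]
+```
+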
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixSolveLs.md b/site/en/api_docs/python/tf/raw_ops/MatrixSolveLs.md new file mode 100644 index 00000000000..3c03c94a522 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixSolveLs.md @@ -0,0 +1,139 @@ +description: Solves one or more linear least-squares problems. + +
+ + +
+ +# tf.raw_ops.MatrixSolveLs + + + + + + + + + +Solves one or more linear least-squares problems. + + + + + + + + + +`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions +form real or complex matrices of size `[M, N]`. `Rhs` is a tensor of the same +type as `matrix` and shape `[..., M, K]`. +The output is a tensor shape `[..., N, K]` where each output matrix solves +each of the equations +`matrix[..., :, :]` * `output[..., :, :]` = `rhs[..., :, :]` +in the least squares sense. + +We use the following notation for (complex) matrix and right-hand sides +in the batch: + +`matrix`=\\(A \in \mathbb{C}^{m \times n}\\), +`rhs`=\\(B \in \mathbb{C}^{m \times k}\\), +`output`=\\(X \in \mathbb{C}^{n \times k}\\), +`l2_regularizer`=\\(\lambda \in \mathbb{R}\\). + +If `fast` is `True`, then the solution is computed by solving the normal +equations using Cholesky decomposition. Specifically, if \\(m \ge n\\) then +\\(X = (A^H A + \lambda I)^{-1} A^H B\\), which solves the least-squares +problem \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k} } ||A Z - B||_F^2 + \lambda ||Z||_F^2\\). +If \\(m \lt n\\) then `output` is computed as +\\(X = A^H (A A^H + \lambda I)^{-1} B\\), which (for \\(\lambda = 0\\)) is the +minimum-norm solution to the under-determined linear system, i.e. +\\(X = \mathrm{argmin}_{Z \in \mathbb{C}^{n \times k} } ||Z||_F^2 \\), +subject to \\(A Z = B\\). Notice that the fast path is only numerically stable +when \\(A\\) is numerically full rank and has a condition number +\\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach} } }\\) or \\(\lambda\\) is +sufficiently large. + +If `fast` is `False` an algorithm based on the numerically robust complete +orthogonal decomposition is used. This computes the minimum-norm +least-squares solution, even when \\(A\\) is rank deficient. This path is +typically 6-7 times slower than the fast path. If `fast` is `False` then +`l2_regularizer` is ignored. + + + + + + + + + + + + + + + + + + + + + + +
+`matrix` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +Shape is `[..., M, N]`. +
+`rhs` + +A `Tensor`. Must have the same type as `matrix`. +Shape is `[..., M, K]`. +
+`l2_regularizer` + +A `Tensor` of type `float64`. Scalar tensor. +
+`fast` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `matrix`. +
+ + + +#### Numpy Compatibility +Equivalent to np.linalg.lstsq + diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixSquareRoot.md b/site/en/api_docs/python/tf/raw_ops/MatrixSquareRoot.md new file mode 100644 index 00000000000..596a06a0e9c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixSquareRoot.md @@ -0,0 +1,93 @@ +description: Computes the matrix square root of one or more square matrices: + +
+ + +
+ +# tf.raw_ops.MatrixSquareRoot + + + + + + + + + +Computes the matrix square root of one or more square matrices: + + + + + + + + + +matmul(sqrtm(A), sqrtm(A)) = A + +The input matrix should be invertible. If the input matrix is real, it should +have no eigenvalues which are real and negative (pairs of complex conjugate +eigenvalues are allowed). + +The matrix square root is computed by first reducing the matrix to +quasi-triangular form with the real Schur decomposition. The square root +of the quasi-triangular matrix is then computed directly. Details of +the algorithm can be found in: Nicholas J. Higham, "Computing real +square roots of a real matrix", Linear Algebra Appl., 1987. + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a tensor of the same shape as the input +containing the matrix square root for all input submatrices `[..., :, :]`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
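+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed) with an illustrative diagonal
+matrix; the public wrapper is `tf.linalg.sqrtm`.
+
+```
+import tensorflow as tf
+
+a = tf.constant([[4.0, 0.0],
+                 [0.0, 9.0]], dtype=tf.float64)
+
+s = tf.raw_ops.MatrixSquareRoot(input=a)
+print(s.numpy())                        # [[2., 0.], [0., 3.]]
+print(tf.matmul(s, s).numpy())          # matmul(sqrtm(A), sqrtm(A)) == A
+```
+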
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MatrixTriangularSolve.md b/site/en/api_docs/python/tf/raw_ops/MatrixTriangularSolve.md new file mode 100644 index 00000000000..e7731de0107 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MatrixTriangularSolve.md @@ -0,0 +1,157 @@ +description: Solves systems of linear equations with upper or lower triangular matrices by backsubstitution. + +
+ + +
+ +# tf.raw_ops.MatrixTriangularSolve + + + + + + + + + +Solves systems of linear equations with upper or lower triangular matrices by backsubstitution. + + + + + + + + + + +`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form +square matrices. If `lower` is `True` then the strictly upper triangular part +of each inner-most matrix is assumed to be zero and not accessed. +If `lower` is False then the strictly lower triangular part of each inner-most +matrix is assumed to be zero and not accessed. +`rhs` is a tensor of shape `[..., M, N]`. + +The output is a tensor of shape `[..., M, N]`. If `adjoint` is +`True` then the innermost matrices in `output` satisfy matrix equations +`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. +If `adjoint` is `False` then the strictly then the innermost matrices in +`output` satisfy matrix equations +`adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`. + +Note, the batch shapes for the inputs only need to broadcast. + +#### Example: + + +```python + +a = tf.constant([[3, 0, 0, 0], + [2, 1, 0, 0], + [1, 0, 1, 0], + [1, 1, 1, 1]], dtype=tf.float32) + +b = tf.constant([[4], + [2], + [4], + [2]], dtype=tf.float32) + +x = tf.linalg.triangular_solve(a, b, lower=True) +x +# + +# in python3 one can use `a@x` +tf.matmul(a, x) +# +``` + + + + + + + + + + + + + + + + + + + + + + +
+`matrix` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +Shape is `[..., M, M]`. +
+`rhs` + +A `Tensor`. Must have the same type as `matrix`. +Shape is `[..., M, K]`. +
+`lower` + +An optional `bool`. Defaults to `True`. +Boolean indicating whether the innermost matrices in `matrix` are +lower or upper triangular. +
+`adjoint` + +An optional `bool`. Defaults to `False`. +Boolean indicating whether to solve with `matrix` or its (block-wise) +adjoint. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `matrix`. +
+ + + +#### Numpy Compatibility +Equivalent to scipy.linalg.solve_triangular + diff --git a/site/en/api_docs/python/tf/raw_ops/Max.md b/site/en/api_docs/python/tf/raw_ops/Max.md new file mode 100644 index 00000000000..1a14f718423 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Max.md @@ -0,0 +1,99 @@ +description: Computes the maximum of elements across dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.Max + + + + + + + + + +Computes the maximum of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input` along the dimensions given in `axis`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`axis`. If `keep_dims` is true, the reduced dimensions are +retained with length 1. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The tensor to reduce. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The dimensions to reduce. Must be in the range +`[-rank(input), rank(input))`. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
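+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed) with made-up values; the public
+wrapper around this op is `tf.reduce_max`.
+
+```
+import tensorflow as tf
+
+x = tf.constant([[1, 5, 3],
+                 [4, 2, 6]])
+
+# Reduce over the columns (axis 1); keep_dims keeps a length-1 axis.
+print(tf.raw_ops.Max(input=x, axis=[1], keep_dims=False).numpy())  # [5 6]
+print(tf.raw_ops.Max(input=x, axis=[1], keep_dims=True).numpy())   # [[5] [6]]
+
+# Equivalent public API call.
+print(tf.reduce_max(x, axis=1).numpy())                            # [5 6]
+```
+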
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxIntraOpParallelismDataset.md b/site/en/api_docs/python/tf/raw_ops/MaxIntraOpParallelismDataset.md new file mode 100644 index 00000000000..73191ce7994 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxIntraOpParallelismDataset.md @@ -0,0 +1,99 @@ +description: Creates a dataset that overrides the maximum intra-op parallelism. + +
+ + +
+ +# tf.raw_ops.MaxIntraOpParallelismDataset + + + + + + + + + +Creates a dataset that overrides the maximum intra-op parallelism. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`max_intra_op_parallelism` + +A `Tensor` of type `int64`. +Identifies the maximum intra-op parallelism to use. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPool.md b/site/en/api_docs/python/tf/raw_ops/MaxPool.md new file mode 100644 index 00000000000..e05555bdb13 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPool.md @@ -0,0 +1,115 @@ +description: Performs max pooling on the input. + +
+ + +
+ +# tf.raw_ops.MaxPool + + + + + + + + + +Performs max pooling on the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `qint8`. +4-D input to pool over. +
+`ksize` + +A list of `ints` that has length `>= 4`. +The size of the window for each dimension of the input tensor. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW", "NCHW_VECT_C"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
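+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed): 2x2, stride-2 pooling over a
+made-up 4x4 single-channel NHWC image; the public wrapper is `tf.nn.max_pool2d`.
+
+```
+import tensorflow as tf
+
+# One 4x4 single-channel image in NHWC layout, values 0..15.
+x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
+
+y = tf.raw_ops.MaxPool(input=x,
+                       ksize=[1, 2, 2, 1],
+                       strides=[1, 2, 2, 1],
+                       padding="VALID")
+print(tf.squeeze(y).numpy())
+# [[ 5.  7.]
+#  [13. 15.]]
+```
+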
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPool3D.md b/site/en/api_docs/python/tf/raw_ops/MaxPool3D.md new file mode 100644 index 00000000000..64b7ed17942 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPool3D.md @@ -0,0 +1,116 @@ +description: Performs 3D max pooling on the input. + +
+ + +
+ +# tf.raw_ops.MaxPool3D + + + + + + + + + +Performs 3D max pooling on the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. +Shape `[batch, depth, rows, cols, channels]` tensor to pool over. +
+`ksize` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The size of the window for each dimension of +the input tensor. Must have `ksize[0] = ksize[4] = 1`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
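+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed): 2x2x2, stride-2 pooling over a
+made-up 2x4x4 single-channel NDHWC volume; the public wrapper is `tf.nn.max_pool3d`.
+
+```
+import tensorflow as tf
+
+# One 2x4x4 single-channel volume in NDHWC layout, values 0..31.
+x = tf.reshape(tf.range(32, dtype=tf.float32), [1, 2, 4, 4, 1])
+
+y = tf.raw_ops.MaxPool3D(input=x,
+                         ksize=[1, 2, 2, 2, 1],
+                         strides=[1, 2, 2, 2, 1],
+                         padding="VALID")
+print(y.shape)                  # (1, 1, 2, 2, 1)
+print(tf.squeeze(y).numpy())
+# [[21. 23.]
+#  [29. 31.]]
+```
+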
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPool3DGrad.md b/site/en/api_docs/python/tf/raw_ops/MaxPool3DGrad.md new file mode 100644 index 00000000000..b7cac7bc9b2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPool3DGrad.md @@ -0,0 +1,133 @@ +description: Computes gradients of max pooling function. + +
+ + +
+ +# tf.raw_ops.MaxPool3DGrad + + + + + + + + + +Computes gradients of max pooling function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`orig_input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. +The original input tensor. +
+`orig_output` + +A `Tensor`. Must have the same type as `orig_input`. +The original output tensor. +
+`grad` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`. +Output backprop of shape `[batch, depth, rows, cols, channels]`. +
+`ksize` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The size of the window for each dimension of +the input tensor. Must have `ksize[0] = ksize[4] = 1`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `grad`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPool3DGradGrad.md b/site/en/api_docs/python/tf/raw_ops/MaxPool3DGradGrad.md new file mode 100644 index 00000000000..ea0c2face7b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPool3DGradGrad.md @@ -0,0 +1,133 @@ +description: Computes second-order gradients of the maxpooling function. + +
+ + +
+ +# tf.raw_ops.MaxPool3DGradGrad + + + + + + + + + +Computes second-order gradients of the maxpooling function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`orig_input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +The original input tensor. +
+`orig_output` + +A `Tensor`. Must have the same type as `orig_input`. +The original output tensor. +
+`grad` + +A `Tensor`. Must have the same type as `orig_input`. +Output backprop of shape `[batch, depth, rows, cols, channels]`. +
+`ksize` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The size of the window for each dimension of +the input tensor. Must have `ksize[0] = ksize[4] = 1`. +
+`strides` + +A list of `ints` that has length `>= 5`. +1-D tensor of length 5. The stride of the sliding window for each +dimension of `input`. Must have `strides[0] = strides[4] = 1`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NDHWC", "NCDHW"`. Defaults to `"NDHWC"`. +The data format of the input and output data. With the +default format "NDHWC", the data is stored in the order of: +[batch, in_depth, in_height, in_width, in_channels]. +Alternatively, the format could be "NCDHW", the data storage order is: +[batch, in_channels, in_depth, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `orig_input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPoolGrad.md b/site/en/api_docs/python/tf/raw_ops/MaxPoolGrad.md new file mode 100644 index 00000000000..b3bdf46aa6e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPoolGrad.md @@ -0,0 +1,132 @@ +description: Computes gradients of the maxpooling function. + +
+ + +
+ +# tf.raw_ops.MaxPoolGrad + + + + + + + + + +Computes gradients of the maxpooling function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`orig_input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +The original input tensor. +
+`orig_output` + +A `Tensor`. Must have the same type as `orig_input`. +The original output tensor. +
+`grad` + +A `Tensor`. Must have the same type as `orig_input`. +4-D. Gradients w.r.t. the output of `max_pool`. +
+`ksize` + +A list of `ints` that has length `>= 4`. +The size of the window for each dimension of the input tensor. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `orig_input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPoolGradGrad.md b/site/en/api_docs/python/tf/raw_ops/MaxPoolGradGrad.md new file mode 100644 index 00000000000..e8714180439 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPoolGradGrad.md @@ -0,0 +1,132 @@ +description: Computes second-order gradients of the maxpooling function. + +
+ + +
+ +# tf.raw_ops.MaxPoolGradGrad + + + + + + + + + +Computes second-order gradients of the maxpooling function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`orig_input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +The original input tensor. +
+`orig_output` + +A `Tensor`. Must have the same type as `orig_input`. +The original output tensor. +
+`grad` + +A `Tensor`. Must have the same type as `orig_input`. +4-D. Gradients of gradients w.r.t. the input of `max_pool`. +
+`ksize` + +A list of `ints` that has length `>= 4`. +The size of the window for each dimension of the input tensor. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `orig_input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPoolGradGradV2.md b/site/en/api_docs/python/tf/raw_ops/MaxPoolGradGradV2.md new file mode 100644 index 00000000000..887c9305d00 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPoolGradGradV2.md @@ -0,0 +1,132 @@ +description: Computes second-order gradients of the maxpooling function. + +
+ + +
+ +# tf.raw_ops.MaxPoolGradGradV2 + + + + + + + + + +Computes second-order gradients of the maxpooling function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`orig_input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +The original input tensor. +
+`orig_output` + +A `Tensor`. Must have the same type as `orig_input`. +The original output tensor. +
+`grad` + +A `Tensor`. Must have the same type as `orig_input`. +4-D. Gradients of gradients w.r.t. the input of `max_pool`. +
+`ksize` + +A `Tensor` of type `int32`. +The size of the window for each dimension of the input tensor. +
+`strides` + +A `Tensor` of type `int32`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `orig_input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPoolGradGradWithArgmax.md b/site/en/api_docs/python/tf/raw_ops/MaxPoolGradGradWithArgmax.md new file mode 100644 index 00000000000..59ad013bbe1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPoolGradGradWithArgmax.md @@ -0,0 +1,129 @@ +description: Computes second-order gradients of the maxpooling function. + +
+ + +
+ +# tf.raw_ops.MaxPoolGradGradWithArgmax + + + + + + + + + +Computes second-order gradients of the maxpooling function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +The original input. +
+`grad` + +A `Tensor`. Must have the same type as `input`. +4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. the +input of `max_pool`. +
+`argmax` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The indices of the maximum values chosen for each output of `max_pool`. +
+`ksize` + +A list of `ints` that has length `>= 4`. +The size of the window for each dimension of the input tensor. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`include_batch_in_index` + +An optional `bool`. Defaults to `False`. +Whether to include batch dimension in flattened index of `argmax`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPoolGradV2.md b/site/en/api_docs/python/tf/raw_ops/MaxPoolGradV2.md new file mode 100644 index 00000000000..63bc23c9e6d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPoolGradV2.md @@ -0,0 +1,132 @@ +description: Computes gradients of the maxpooling function. + +
+ + +
+ +# tf.raw_ops.MaxPoolGradV2 + + + + + + + + + +Computes gradients of the maxpooling function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`orig_input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +The original input tensor. +
+`orig_output` + +A `Tensor`. Must have the same type as `orig_input`. +The original output tensor. +
+`grad` + +A `Tensor`. Must have the same type as `orig_input`. +4-D. Gradients w.r.t. the output of `max_pool`. +
+`ksize` + +A `Tensor` of type `int32`. +The size of the window for each dimension of the input tensor. +
+`strides` + +A `Tensor` of type `int32`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `orig_input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPoolGradWithArgmax.md b/site/en/api_docs/python/tf/raw_ops/MaxPoolGradWithArgmax.md new file mode 100644 index 00000000000..ea24d6bc29c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPoolGradWithArgmax.md @@ -0,0 +1,129 @@ +description: Computes gradients of the maxpooling function. + +
+ + +
+ +# tf.raw_ops.MaxPoolGradWithArgmax + + + + + + + + + +Computes gradients of the maxpooling function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +The original input. +
+`grad` + +A `Tensor`. Must have the same type as `input`. +4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. the +output of `max_pool`. +
+`argmax` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The indices of the maximum values chosen for each output of `max_pool`. +
+`ksize` + +A list of `ints` that has length `>= 4`. +The size of the window for each dimension of the input tensor. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`include_batch_in_index` + +An optional `bool`. Defaults to `False`. +Whether to include batch dimension in flattened index of `argmax`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPoolV2.md b/site/en/api_docs/python/tf/raw_ops/MaxPoolV2.md new file mode 100644 index 00000000000..ec284a62411 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPoolV2.md @@ -0,0 +1,115 @@ +description: Performs max pooling on the input. + +
+ + +
+ +# tf.raw_ops.MaxPoolV2 + + + + + + + + + +Performs max pooling on the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `qint8`. +4-D input to pool over. +
+`ksize` + +A `Tensor` of type `int32`. +The size of the window for each dimension of the input tensor. +
+`strides` + +A `Tensor` of type `int32`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW", "NCHW_VECT_C"`. Defaults to `"NHWC"`. +Specify the data format of the input and output data. With the +default format "NHWC", the data is stored in the order of: +[batch, in_height, in_width, in_channels]. +Alternatively, the format could be "NCHW", the data storage order of: +[batch, in_channels, in_height, in_width]. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MaxPoolWithArgmax.md b/site/en/api_docs/python/tf/raw_ops/MaxPoolWithArgmax.md new file mode 100644 index 00000000000..55cfb51b934 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MaxPoolWithArgmax.md @@ -0,0 +1,142 @@ +description: Performs max pooling on the input and outputs both max values and indices. + +
+ + +
+ +# tf.raw_ops.MaxPoolWithArgmax + + + + + + + + + +Performs max pooling on the input and outputs both max values and indices. + + + + + + + + + +The indices in `argmax` are flattened, so that a maximum value at position +`[b, y, x, c]` becomes flattened index: +`(y * width + x) * channels + c` if `include_batch_in_index` is False; +`((b * height + y) * width + x) * channels + c` if `include_batch_in_index` is True. + +The indices returned are always in `[0, height) x [0, width)` before flattening, +even if padding is involved and the mathematically correct answer is outside +(either negative or too large). This is a bug, but fixing it is difficult to do +in a safe backwards compatible way, especially due to flattening. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +4-D with shape `[batch, height, width, channels]`. Input to pool over. +
+`ksize` + +A list of `ints` that has length `>= 4`. +The size of the window for each dimension of the input tensor. +
+`strides` + +A list of `ints` that has length `>= 4`. +The stride of the sliding window for each dimension of the +input tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`Targmax` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`include_batch_in_index` + +An optional `bool`. Defaults to `False`. +Whether to include batch dimension in flattened index of `argmax`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, argmax). +
+`output` + +A `Tensor`. Has the same type as `input`. +
+`argmax` + +A `Tensor` of type `Targmax`. +
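+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed) on a made-up 4x4 NHWC image; the
+public wrapper is `tf.nn.max_pool_with_argmax`, and kernel availability can depend on
+the device.
+
+```
+import tensorflow as tf
+
+x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
+
+out, argmax = tf.raw_ops.MaxPoolWithArgmax(input=x,
+                                           ksize=[1, 2, 2, 1],
+                                           strides=[1, 2, 2, 1],
+                                           padding="VALID")
+print(tf.squeeze(out).numpy())     # [[ 5.  7.] [13. 15.]]
+# With include_batch_in_index=False, each index is (y * width + x) * channels + c.
+print(tf.squeeze(argmax).numpy())  # [[ 5  7] [13 15]]
+```
+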
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Maximum.md b/site/en/api_docs/python/tf/raw_ops/Maximum.md new file mode 100644 index 00000000000..0f7f94234ee --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Maximum.md @@ -0,0 +1,92 @@ +description: Returns the max of x and y (i.e. x > y ? x : y) element-wise. + +
+ + +
+ +# tf.raw_ops.Maximum + + + + + + + + + +Returns the max of x and y (i.e. x > y ? x : y) element-wise. + + + + + + + + + + +#### Example: + + +>>> x = tf.constant([0., 0., 0., 0.]) +>>> y = tf.constant([-2., 0., 2., 5.]) +>>> tf.math.maximum(x, y) + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Mean.md b/site/en/api_docs/python/tf/raw_ops/Mean.md new file mode 100644 index 00000000000..a2bebbec9be --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Mean.md @@ -0,0 +1,99 @@ +description: Computes the mean of elements across dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.Mean + + + + + + + + + +Computes the mean of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input` along the dimensions given in `axis`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`axis`. If `keep_dims` is true, the reduced dimensions are +retained with length 1. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The tensor to reduce. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The dimensions to reduce. Must be in the range +`[-rank(input), rank(input))`. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
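+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed) with made-up values; the public
+wrapper around this op is `tf.reduce_mean`.
+
+```
+import tensorflow as tf
+
+x = tf.constant([[1., 2., 3.],
+                 [4., 5., 6.]])
+
+print(tf.raw_ops.Mean(input=x, axis=[0], keep_dims=False).numpy())    # [2.5 3.5 4.5]
+print(tf.raw_ops.Mean(input=x, axis=[0, 1], keep_dims=True).numpy())  # [[3.5]]
+
+# Equivalent public API call.
+print(tf.reduce_mean(x, axis=0).numpy())
+```
+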
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Merge.md b/site/en/api_docs/python/tf/raw_ops/Merge.md new file mode 100644 index 00000000000..d6b4078d53c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Merge.md @@ -0,0 +1,97 @@ +description: Forwards the value of an available tensor from inputs to output. + +
+ + +
+ +# tf.raw_ops.Merge + + + + + + + + + +Forwards the value of an available tensor from `inputs` to `output`. + + + + + + + + + +`Merge` waits for at least one of the tensors in `inputs` to become available. +It is usually combined with `Switch` to implement branching. + +`Merge` forwards the first tensor to become available to `output`, and sets +`value_index` to its index in `inputs`. + + + + + + + + + + + + + +
+`inputs` + +A list of at least 1 `Tensor` objects with the same type. +The input tensors, exactly one of which will become available. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, value_index). +
+`output` + +A `Tensor`. Has the same type as `inputs`. +
+`value_index` + +A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MergeSummary.md b/site/en/api_docs/python/tf/raw_ops/MergeSummary.md new file mode 100644 index 00000000000..b14075b7d7c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MergeSummary.md @@ -0,0 +1,86 @@ +description: Merges summaries. + +
+ + +
+ +# tf.raw_ops.MergeSummary + + + + + + + + + +Merges summaries. + + + + + + + + + +This op creates a +[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) +protocol buffer that contains the union of all the values in the input +summaries. + +When the Op is run, it reports an `InvalidArgument` error if multiple values +in the summaries to merge use the same tag. + + + + + + + + + + + + + +
+`inputs` + +A list of at least 1 `Tensor` objects with type `string`. +Can be of any shape. Each must contain serialized `Summary` protocol +buffers. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MergeV2Checkpoints.md b/site/en/api_docs/python/tf/raw_ops/MergeV2Checkpoints.md new file mode 100644 index 00000000000..5cd4b632ef9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MergeV2Checkpoints.md @@ -0,0 +1,102 @@ +description: V2 format specific: merges the metadata files of sharded checkpoints. The + +
+ + +
+ +# tf.raw_ops.MergeV2Checkpoints + + + + + + + + + +V2 format specific: merges the metadata files of sharded checkpoints. The + + + + + + + + + +result is one logical checkpoint, with one physical metadata file and renamed +data files. + +Intended for "grouping" multiple checkpoints in a sharded checkpoint setup. + +If delete_old_dirs is true, attempts to delete recursively the dirname of each +path in the input checkpoint_prefixes. This is useful when those paths are non +user-facing temporary locations. + + + + + + + + + + + + + + + + + + + +
+`checkpoint_prefixes` + +A `Tensor` of type `string`. +prefixes of V2 checkpoints to merge. +
+`destination_prefix` + +A `Tensor` of type `string`. +scalar. The desired final prefix. Allowed to be the same +as one of the checkpoint_prefixes. +
+`delete_old_dirs`
+
+An optional `bool`. Defaults to `True`.
+If true, attempts to recursively delete the directories of the input
+`checkpoint_prefixes` after merging (see above).
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Mfcc.md b/site/en/api_docs/python/tf/raw_ops/Mfcc.md new file mode 100644 index 00000000000..50b27ee3d5d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Mfcc.md @@ -0,0 +1,128 @@ +description: Transforms a spectrogram into a form that's useful for speech recognition. + +
+ + +
+ +# tf.raw_ops.Mfcc + + + + + + + + + +Transforms a spectrogram into a form that's useful for speech recognition. + + + + + + + + + +Mel Frequency Cepstral Coefficients are a way of representing audio data that's +been effective as an input feature for machine learning. They are created by +taking the spectrum of a spectrogram (a 'cepstrum'), and discarding some of the +higher frequencies that are less significant to the human ear. They have a long +history in the speech recognition world, and https://en.wikipedia.org/wiki/Mel-frequency_cepstrum +is a good resource to learn more. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`spectrogram` + +A `Tensor` of type `float32`. +Typically produced by the Spectrogram op, with magnitude_squared +set to true. +
+`sample_rate` + +A `Tensor` of type `int32`. +How many samples per second the source audio used. +
+`upper_frequency_limit`
+
+An optional `float`. Defaults to `4000`.
+The highest frequency to use when calculating the
+cepstrum.
+
+`lower_frequency_limit`
+
+An optional `float`. Defaults to `20`.
+The lowest frequency to use when calculating the
+cepstrum.
+
+`filterbank_channel_count` + +An optional `int`. Defaults to `40`. +Resolution of the Mel bank used internally. +
+`dct_coefficient_count` + +An optional `int`. Defaults to `13`. +How many output channels to produce per time slice. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
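+
+#### Example:
+
+A minimal eager-mode sketch (TensorFlow 2.x assumed). The spectrogram here is random and
+its shape `[channels, frames, bins]` is illustrative only; in practice it would come from
+a spectrogram op run with `magnitude_squared` set to true.
+
+```
+import tensorflow as tf
+
+# Made-up magnitude-squared spectrogram: 1 channel, 10 frames, 257 bins
+# (roughly what a 512-point FFT of 16 kHz audio would produce).
+spectrogram = tf.random.uniform([1, 10, 257], dtype=tf.float32)
+sample_rate = tf.constant(16000, dtype=tf.int32)
+
+mfcc = tf.raw_ops.Mfcc(spectrogram=spectrogram,
+                       sample_rate=sample_rate,
+                       upper_frequency_limit=4000,
+                       lower_frequency_limit=20,
+                       filterbank_channel_count=40,
+                       dct_coefficient_count=13)
+print(mfcc.shape)  # (1, 10, 13): one 13-coefficient vector per frame.
+```
+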
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Min.md b/site/en/api_docs/python/tf/raw_ops/Min.md new file mode 100644 index 00000000000..45544f428e6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Min.md @@ -0,0 +1,99 @@ +description: Computes the minimum of elements across dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.Min + + + + + + + + + +Computes the minimum of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input` along the dimensions given in `axis`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`axis`. If `keep_dims` is true, the reduced dimensions are +retained with length 1. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The tensor to reduce. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The dimensions to reduce. Must be in the range +`[-rank(input), rank(input))`. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Minimum.md b/site/en/api_docs/python/tf/raw_ops/Minimum.md new file mode 100644 index 00000000000..4b1d8c5f2b0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Minimum.md @@ -0,0 +1,92 @@ +description: Returns the min of x and y (i.e. x < y ? x : y) element-wise. + +
+ + +
+ +# tf.raw_ops.Minimum + + + + + + + + + +Returns the min of x and y (i.e. x < y ? x : y) element-wise. + + + + + + + + + + +#### Example: + + +>>> x = tf.constant([0., 0., 0., 0.]) +>>> y = tf.constant([-5., -2., 0., 3.]) +>>> tf.math.minimum(x, y) + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MirrorPad.md b/site/en/api_docs/python/tf/raw_ops/MirrorPad.md new file mode 100644 index 00000000000..2c164e8dd95 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MirrorPad.md @@ -0,0 +1,125 @@ +description: Pads a tensor with mirrored values. + +
+ + +
+ +# tf.raw_ops.MirrorPad + + + + + + + + + +Pads a tensor with mirrored values. + + + + + + + + + +This operation pads an `input` with mirrored values according to the `paddings` +you specify. `paddings` is an integer tensor with shape `[n, 2]`, where n is +the rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates +how many values to add before the contents of `input` in that dimension, and +`paddings[D, 1]` indicates how many values to add after the contents of `input` +in that dimension. Both `paddings[D, 0]` and `paddings[D, 1]` must be no greater +than `input.dim_size(D)` (or `input.dim_size(D) - 1`) if `mode` is `SYMMETRIC` +(or `REFLECT`, respectively). + +The padded size of each dimension D of the output is: + +`paddings(D, 0) + input.dim_size(D) + paddings(D, 1)` + +#### For example: + + + +``` +# 't' is [[1, 2, 3], [4, 5, 6]]. +# 'paddings' is [[1, 1], [2, 2]]. +# 'mode' is SYMMETRIC. +# rank of 't' is 2. +pad(t, paddings) ==> [[2, 1, 1, 2, 3, 3, 2] + [2, 1, 1, 2, 3, 3, 2] + [5, 4, 4, 5, 6, 6, 5] + [5, 4, 4, 5, 6, 6, 5]] +``` + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. The input tensor to be padded. +
+`paddings` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A two-column matrix specifying the padding sizes. The number of +rows must be the same as the rank of `input`. +
+`mode` + +A `string` from: `"REFLECT", "SYMMETRIC"`. +Either `REFLECT` or `SYMMETRIC`. In reflect mode the padded regions +do not include the borders, while in symmetric mode the padded regions +do include the borders. For example, if `input` is `[1, 2, 3]` and `paddings` +is `[0, 2]`, then the output is `[1, 2, 3, 2, 1]` in reflect mode, and +it is `[1, 2, 3, 3, 2]` in symmetric mode. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
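A runnable sketch of the example above (assuming eager execution), contrasting the two padding modes:

```
import tensorflow as tf

t = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
paddings = tf.constant([[1, 1], [2, 2]])

# SYMMETRIC padding includes the border values in the mirrored regions.
print(tf.raw_ops.MirrorPad(input=t, paddings=paddings, mode="SYMMETRIC"))

# REFLECT padding mirrors around the border without repeating it.
print(tf.raw_ops.MirrorPad(input=t, paddings=paddings, mode="REFLECT"))
```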
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MirrorPadGrad.md b/site/en/api_docs/python/tf/raw_ops/MirrorPadGrad.md new file mode 100644 index 00000000000..d684e2a8aa3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MirrorPadGrad.md @@ -0,0 +1,114 @@ +description: Gradient op for MirrorPad op. This op folds a mirror-padded tensor. + +
+ + +
+ +# tf.raw_ops.MirrorPadGrad + + + + + + + + + +Gradient op for `MirrorPad` op. This op folds a mirror-padded tensor. + + + + + + + + + +This operation folds the padded areas of `input` by `MirrorPad` according to the +`paddings` you specify. `paddings` must be the same as the `paddings` argument +given to the corresponding `MirrorPad` op. + +The folded size of each dimension D of the output is: + +`input.dim_size(D) - paddings(D, 0) - paddings(D, 1)` + +#### For example: + + + +``` +# 't' is [[1, 2, 3], [4, 5, 6], [7, 8, 9]]. +# 'paddings' is [[0, 1], [0, 1]]. +# 'mode' is SYMMETRIC. +# rank of 't' is 2. +pad(t, paddings) ==> [[ 1, 5] + [11, 28]] +``` + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. The input tensor to be folded. +
+`paddings` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A two-column matrix specifying the padding sizes. The number of +rows must be the same as the rank of `input`. +
+`mode` + +A `string` from: `"REFLECT", "SYMMETRIC"`. +The mode used in the `MirrorPad` op. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
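This op is normally emitted by the gradient machinery rather than called directly; the sketch below (assuming eager execution) simply reproduces the folded result from the example above.

```
import tensorflow as tf

# Treat `t` as an upstream gradient for a tensor that was mirror-padded
# with the paddings below; MirrorPadGrad folds the padded areas back in.
t = tf.constant([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])
paddings = tf.constant([[0, 1], [0, 1]])

print(tf.raw_ops.MirrorPadGrad(input=t, paddings=paddings, mode="SYMMETRIC"))
# [[ 1  5]
#  [11 28]]
```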
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Mod.md b/site/en/api_docs/python/tf/raw_ops/Mod.md new file mode 100644 index 00000000000..42ff96d08f1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Mod.md @@ -0,0 +1,89 @@ +description: Returns element-wise remainder of division. This emulates C semantics in that + +
+ + +
+ +# tf.raw_ops.Mod + + + + + + + + + +Returns element-wise remainder of division. This emulates C semantics in that + + + + + + + + + +the result here is consistent with a truncating divide. E.g. +`tf.truncatediv(x, y) * y + truncate_mod(x, y) = x`. + +*NOTE*: `Mod` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `half`, `bfloat16`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
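A short sketch (eager execution) showing the truncating, C-style remainder, where the result takes the sign of `x`:

```
import tensorflow as tf

x = tf.constant([ 7, -7,  7, -7])
y = tf.constant([ 5,  5, -5, -5])

# Truncated division means the remainder follows the sign of x.
print(tf.raw_ops.Mod(x=x, y=y))  # [ 2 -2  2 -2]
```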
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ModelDataset.md b/site/en/api_docs/python/tf/raw_ops/ModelDataset.md new file mode 100644 index 00000000000..9acfc2c1ea5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ModelDataset.md @@ -0,0 +1,107 @@ +description: Identity transformation that models performance. + +
+ + +
+ +# tf.raw_ops.ModelDataset + + + + + + + + + +Identity transformation that models performance. + + + + + + + + + +Identity transformation that models performance. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`algorithm` + +An optional `int`. Defaults to `0`. +
+`cpu_budget` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Mul.md b/site/en/api_docs/python/tf/raw_ops/Mul.md new file mode 100644 index 00000000000..0c42cf8c9d7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Mul.md @@ -0,0 +1,86 @@ +description: Returns x * y element-wise. + +
+ + +
+ +# tf.raw_ops.Mul + + + + + + + + + +Returns x * y element-wise. + + + + + + + + + +*NOTE*: `Multiply` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
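A minimal broadcasting sketch (eager execution; values are illustrative):

```
import tensorflow as tf

x = tf.constant([[1., 2., 3.]])     # shape [1, 3]
y = tf.constant([[10.], [100.]])    # shape [2, 1]

# Broadcasting yields a [2, 3] result.
print(tf.raw_ops.Mul(x=x, y=y))
# [[ 10.  20.  30.]
#  [100. 200. 300.]]
```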
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MulNoNan.md b/site/en/api_docs/python/tf/raw_ops/MulNoNan.md new file mode 100644 index 00000000000..8ad886809c0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MulNoNan.md @@ -0,0 +1,86 @@ +description: Returns x * y element-wise. Returns zero if y is zero, even if x is infinite or NaN. + +
+ + +
+ +# tf.raw_ops.MulNoNan + + + + + + + + + +Returns x * y element-wise. Returns zero if y is zero, even if x is infinite or NaN. + + + + + + + + + +*NOTE*: `MulNoNan` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
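A short sketch (eager execution) contrasting `MulNoNan` with plain multiplication where `y` is zero and `x` is non-finite:

```
import tensorflow as tf

x = tf.constant([float("inf"), float("nan"), 2.0])
y = tf.constant([0.0, 0.0, 3.0])

# A zero in y forces a zero output even where x is inf or NaN.
print(tf.raw_ops.MulNoNan(x=x, y=y))  # [0. 0. 6.]

# Plain multiplication propagates NaN for inf * 0 and nan * 0.
print(tf.raw_ops.Mul(x=x, y=y))       # [nan nan 6.]
```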
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MultiDeviceIterator.md b/site/en/api_docs/python/tf/raw_ops/MultiDeviceIterator.md new file mode 100644 index 00000000000..7cfa64bc0d4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MultiDeviceIterator.md @@ -0,0 +1,112 @@ +description: Creates a MultiDeviceIterator resource. + +
+ + +
+ +# tf.raw_ops.MultiDeviceIterator + + + + + + + + + +Creates a MultiDeviceIterator resource. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`devices` + +A list of `strings` that has length `>= 1`. +A list of devices the iterator works across. +
+`shared_name` + +A `string`. +If non-empty, this resource will be shared under the given name +across multiple sessions. +
+`container` + +A `string`. +If non-empty, this resource is placed in the given container. +Otherwise, a default container is used. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type list for the return values. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +The list of shapes being produced. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorFromStringHandle.md b/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorFromStringHandle.md new file mode 100644 index 00000000000..38d842ece35 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorFromStringHandle.md @@ -0,0 +1,94 @@ +description: Generates a MultiDeviceIterator resource from its provided string handle. + +
+ + +
+ +# tf.raw_ops.MultiDeviceIteratorFromStringHandle + + + + + + + + + +Generates a MultiDeviceIterator resource from its provided string handle. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`string_handle` + +A `Tensor` of type `string`. +String representing the resource. +
+`output_types` + +An optional list of `tf.DTypes`. Defaults to `[]`. +The type list for the return values. +
+`output_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +The list of shapes being produced. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorGetNextFromShard.md b/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorGetNextFromShard.md new file mode 100644 index 00000000000..b0669b3f5fb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorGetNextFromShard.md @@ -0,0 +1,111 @@ +description: Gets next element for the provided shard number. + +
+ + +
+ +# tf.raw_ops.MultiDeviceIteratorGetNextFromShard + + + + + + + + + +Gets next element for the provided shard number. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`multi_device_iterator` + +A `Tensor` of type `resource`. +A MultiDeviceIterator resource. +
+`shard_num` + +A `Tensor` of type `int32`. +Integer representing which shard to fetch data for. +
+`incarnation_id` + +A `Tensor` of type `int64`. +Which incarnation of the MultiDeviceIterator is running. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type list for the return values. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +The list of shapes being produced. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `output_types`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorInit.md b/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorInit.md new file mode 100644 index 00000000000..eeed7452db5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorInit.md @@ -0,0 +1,93 @@ +description: Initializes the multi device iterator with the given dataset. + +
+ + +
+ +# tf.raw_ops.MultiDeviceIteratorInit + + + + + + + + + +Initializes the multi device iterator with the given dataset. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dataset` + +A `Tensor` of type `variant`. Dataset to be iterated upon. +
+`multi_device_iterator` + +A `Tensor` of type `resource`. +A MultiDeviceIteratorResource. +
+`max_buffer_size` + +A `Tensor` of type `int64`. +The maximum size of the host side per device buffer to keep. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorToStringHandle.md b/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorToStringHandle.md new file mode 100644 index 00000000000..8934e146ab2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MultiDeviceIteratorToStringHandle.md @@ -0,0 +1,78 @@ +description: Produces a string handle for the given MultiDeviceIterator. + +
+ + +
+ +# tf.raw_ops.MultiDeviceIteratorToStringHandle + + + + + + + + + +Produces a string handle for the given MultiDeviceIterator. + + + + + + + + + + + + + + + + + + + + + + +
+`multi_device_iterator` + +A `Tensor` of type `resource`. +A MultiDeviceIterator resource. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Multinomial.md b/site/en/api_docs/python/tf/raw_ops/Multinomial.md new file mode 100644 index 00000000000..70a5a9f64ac --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Multinomial.md @@ -0,0 +1,111 @@ +description: Draws samples from a multinomial distribution. + +
+ + +
+ +# tf.raw_ops.Multinomial + + + + + + + + + +Draws samples from a multinomial distribution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`logits` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +2-D Tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]` +represents the unnormalized log probabilities for all classes. +
+`num_samples` + +A `Tensor` of type `int32`. +0-D. Number of independent samples to draw for each row slice. +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 is set to be non-zero, the internal random number +generator is seeded by the given seed. Otherwise, a random seed is used. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`output_dtype` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_dtype`. +
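A small sketch (eager execution). The drawn class ids are random, so exact outputs vary; the logits below are illustrative.

```
import tensorflow as tf

# Two rows of unnormalized log-probabilities over three classes.
logits = tf.math.log([[0.7, 0.2, 0.1],
                      [0.1, 0.1, 0.8]])

samples = tf.raw_ops.Multinomial(
    logits=logits, num_samples=5, seed=7, seed2=11)
print(samples.shape)  # (2, 5); each entry is a sampled class id in [0, 3)
```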
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MutableDenseHashTable.md b/site/en/api_docs/python/tf/raw_ops/MutableDenseHashTable.md new file mode 100644 index 00000000000..1d1c8063e8a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MutableDenseHashTable.md @@ -0,0 +1,145 @@ +description: Creates an empty hash table that uses tensors as the backing store. + +
+ + +
+ +# tf.raw_ops.MutableDenseHashTable + + + + + + + + + +Creates an empty hash table that uses tensors as the backing store. + + + + + + + + + +It uses "open addressing" with quadratic reprobing to resolve +collisions. + +This op creates a mutable hash table, specifying the type of its keys and +values. Each value must be a scalar. Data can be inserted into the table using +the insert operations. It does not support the initialization operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`empty_key` + +A `Tensor`. +The key used to represent empty key buckets internally. Must not +be used in insert or lookup operations. +
+`value_dtype` + +A tf.DType. Type of the table values. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this table is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this table is shared under the given name across +multiple sessions. +
+`use_node_name_sharing` + +An optional `bool`. Defaults to `False`. +
+`value_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `[]`. +The shape of each value. +
+`initial_num_buckets` + +An optional `int`. Defaults to `131072`. +The initial number of hash table buckets. Must be a power +of 2. +
+`max_load_factor` + +An optional `float`. Defaults to `0.8`. +The maximum ratio between number of entries and number of +buckets before growing the table. Must be between 0 and 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MutableDenseHashTableV2.md b/site/en/api_docs/python/tf/raw_ops/MutableDenseHashTableV2.md new file mode 100644 index 00000000000..7a448184492 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MutableDenseHashTableV2.md @@ -0,0 +1,152 @@ +description: Creates an empty hash table that uses tensors as the backing store. + +
+ + +
+ +# tf.raw_ops.MutableDenseHashTableV2 + + + + + + + + + +Creates an empty hash table that uses tensors as the backing store. + + + + + + + + + +It uses "open addressing" with quadratic reprobing to resolve +collisions. + +This op creates a mutable hash table, specifying the type of its keys and +values. Each value must be a scalar. Data can be inserted into the table using +the insert operations. It does not support the initialization operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`empty_key` + +A `Tensor`. +The key used to represent empty key buckets internally. Must not +be used in insert or lookup operations. +
+`deleted_key` + +A `Tensor`. Must have the same type as `empty_key`. +
+`value_dtype` + +A tf.DType. Type of the table values. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this table is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this table is shared under the given name across +multiple sessions. +
+`use_node_name_sharing` + +An optional `bool`. Defaults to `False`. +
+`value_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `[]`. +The shape of each value. +
+`initial_num_buckets` + +An optional `int`. Defaults to `131072`. +The initial number of hash table buckets. Must be a power +of 2. +
+`max_load_factor` + +An optional `float`. Defaults to `0.8`. +The maximum ratio between number of entries and number of +buckets before growing the table. Must be between 0 and 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
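A usage sketch (eager execution). It assumes the companion raw ops `LookupTableInsertV2` and `LookupTableFindV2` for writing and reading the table; the reserved key values are illustrative.

```
import tensorflow as tf

# Reserve two int64 values that will never appear as real keys.
table = tf.raw_ops.MutableDenseHashTableV2(
    empty_key=tf.constant(-1, dtype=tf.int64),
    deleted_key=tf.constant(-2, dtype=tf.int64),
    value_dtype=tf.float32)

tf.raw_ops.LookupTableInsertV2(
    table_handle=table,
    keys=tf.constant([1, 2, 3], dtype=tf.int64),
    values=tf.constant([10.0, 20.0, 30.0]))

print(tf.raw_ops.LookupTableFindV2(
    table_handle=table,
    keys=tf.constant([2, 99], dtype=tf.int64),
    default_value=tf.constant(0.0)))  # [20.  0.]
```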
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MutableHashTable.md b/site/en/api_docs/python/tf/raw_ops/MutableHashTable.md new file mode 100644 index 00000000000..b72123d42ad --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MutableHashTable.md @@ -0,0 +1,115 @@ +description: Creates an empty hash table. + +
+ + +
+ +# tf.raw_ops.MutableHashTable + + + + + + + + + +Creates an empty hash table. + + + + + + + + + +This op creates a mutable hash table, specifying the type of its keys and +values. Each value must be a scalar. Data can be inserted into the table using +the insert operations. It does not support the initialization operation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key_dtype` + +A tf.DType. Type of the table keys. +
+`value_dtype` + +A tf.DType. Type of the table values. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this table is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this table is shared under the given name across +multiple sessions. +
+`use_node_name_sharing` + +An optional `bool`. Defaults to `False`. +If true and shared_name is empty, the table is shared +using the node name. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MutableHashTableOfTensors.md b/site/en/api_docs/python/tf/raw_ops/MutableHashTableOfTensors.md new file mode 100644 index 00000000000..d2bbfb33f21 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MutableHashTableOfTensors.md @@ -0,0 +1,120 @@ +description: Creates an empty hash table. + +
+ + +
+ +# tf.raw_ops.MutableHashTableOfTensors + + + + + + + + + +Creates an empty hash table. + + + + + + + + + +This op creates a mutable hash table, specifying the type of its keys and +values. Each value must be a vector. Data can be inserted into the table using +the insert operations. It does not support the initialization operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key_dtype` + +A tf.DType. Type of the table keys. +
+`value_dtype` + +A tf.DType. Type of the table values. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this table is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this table is shared under the given name across +multiple sessions. +
+`use_node_name_sharing` + +An optional `bool`. Defaults to `False`. +
+`value_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MutableHashTableOfTensorsV2.md b/site/en/api_docs/python/tf/raw_ops/MutableHashTableOfTensorsV2.md new file mode 100644 index 00000000000..a43db428683 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MutableHashTableOfTensorsV2.md @@ -0,0 +1,120 @@ +description: Creates an empty hash table. + +
+ + +
+ +# tf.raw_ops.MutableHashTableOfTensorsV2 + + + + + + + + + +Creates an empty hash table. + + + + + + + + + +This op creates a mutable hash table, specifying the type of its keys and +values. Each value must be a vector. Data can be inserted into the table using +the insert operations. It does not support the initialization operation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key_dtype` + +A tf.DType. Type of the table keys. +
+`value_dtype` + +A tf.DType. Type of the table values. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this table is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this table is shared under the given name across +multiple sessions. +
+`use_node_name_sharing` + +An optional `bool`. Defaults to `False`. +
+`value_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MutableHashTableV2.md b/site/en/api_docs/python/tf/raw_ops/MutableHashTableV2.md new file mode 100644 index 00000000000..d79583c05e9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MutableHashTableV2.md @@ -0,0 +1,115 @@ +description: Creates an empty hash table. + +
+ + +
+ +# tf.raw_ops.MutableHashTableV2 + + + + + + + + + +Creates an empty hash table. + + + + + + + + + +This op creates a mutable hash table, specifying the type of its keys and +values. Each value must be a scalar. Data can be inserted into the table using +the insert operations. It does not support the initialization operation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key_dtype` + +A tf.DType. Type of the table keys. +
+`value_dtype` + +A tf.DType. Type of the table values. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this table is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this table is shared under the given name across +multiple sessions. +
+`use_node_name_sharing` + +An optional `bool`. Defaults to `False`. +If true and shared_name is empty, the table is shared +using the node name. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
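A usage sketch (eager execution), assuming the companion raw ops `LookupTableInsertV2` and `LookupTableFindV2` for inserts and lookups:

```
import tensorflow as tf

# An empty string -> int64 table; data is inserted after creation.
table = tf.raw_ops.MutableHashTableV2(key_dtype=tf.string, value_dtype=tf.int64)

tf.raw_ops.LookupTableInsertV2(
    table_handle=table,
    keys=tf.constant(["apple", "banana"]),
    values=tf.constant([5, 7], dtype=tf.int64))

print(tf.raw_ops.LookupTableFindV2(
    table_handle=table,
    keys=tf.constant(["banana", "cherry"]),
    default_value=tf.constant(-1, dtype=tf.int64)))  # [ 7 -1]
```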
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MutexLock.md b/site/en/api_docs/python/tf/raw_ops/MutexLock.md new file mode 100644 index 00000000000..57e7cbf187b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MutexLock.md @@ -0,0 +1,115 @@ +description: Locks a mutex resource. The output is the lock. So long as the lock tensor + +
+ + +
+ +# tf.raw_ops.MutexLock + + + + + + + + + +Locks a mutex resource. The output is the lock. So long as the lock tensor + + + + + + + + + +is alive, any other request to use `MutexLock` with this mutex will wait. + +This is particularly useful for creating a critical section when used in +conjunction with `MutexLockIdentity`: + +```python + +mutex = mutex_v2( + shared_name=handle_name, container=container, name=name) + +def execute_in_critical_section(fn, *args, **kwargs): + lock = gen_resource_variable_ops.mutex_lock(mutex) + + with ops.control_dependencies([lock]): + r = fn(*args, **kwargs) + + with ops.control_dependencies(nest.flatten(r)): + with ops.colocate_with(mutex): + ensure_lock_exists = mutex_lock_identity(lock) + + # Make sure that if any element of r is accessed, all of + # them are executed together. + r = nest.map_structure(tf.identity, r) + + with ops.control_dependencies([ensure_lock_exists]): + return nest.map_structure(tf.identity, r) +``` + +While `fn` is running in the critical section, no other functions which wish to +use this critical section may run. + +Often the use case is that two executions of the same graph, in parallel, +wish to run `fn`; and we wish to ensure that only one of them executes +at a time. This is especially important if `fn` modifies one or more +variables at a time. + +It is also useful if two separate functions must share a resource, but we +wish to ensure the usage is exclusive. + + + + + + + + + + + + + +
+`mutex` + +A `Tensor` of type `resource`. The mutex resource to lock. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/MutexV2.md b/site/en/api_docs/python/tf/raw_ops/MutexV2.md new file mode 100644 index 00000000000..52f34ad60ef --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/MutexV2.md @@ -0,0 +1,88 @@ +description: Creates a Mutex resource that can be locked by MutexLock. + +
+ + +
+ +# tf.raw_ops.MutexV2 + + + + + + + + + +Creates a Mutex resource that can be locked by `MutexLock`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this variable is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this variable is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NcclAllReduce.md b/site/en/api_docs/python/tf/raw_ops/NcclAllReduce.md new file mode 100644 index 00000000000..4d01532fd05 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NcclAllReduce.md @@ -0,0 +1,110 @@ +description: Outputs a tensor containing the reduction across all input tensors. + +
+ + +
+ +# tf.raw_ops.NcclAllReduce + + + + + + + + + +Outputs a tensor containing the reduction across all input tensors. + + + + + + + + + +Outputs a tensor containing the reduction across all input tensors passed to ops +within the same `shared_name`. + +The graph should be constructed so that if one op runs with shared_name value `c`, +then `num_devices` ops will run with shared_name value `c`. Failure to do so +will cause the graph execution to fail to complete. + +input: the input to the reduction +data: the value of the reduction across all `num_devices` devices. +reduction: the reduction operation to perform. +num_devices: The number of devices participating in this reduction. +shared_name: Identifier that is shared between ops of the same reduction. + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`. +
+`reduction` + +A `string` from: `"min", "max", "prod", "sum"`. +
+`num_devices` + +An `int`. +
+`shared_name` + +A `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NcclBroadcast.md b/site/en/api_docs/python/tf/raw_ops/NcclBroadcast.md new file mode 100644 index 00000000000..91df01d2bed --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NcclBroadcast.md @@ -0,0 +1,92 @@ +description: Sends input to all devices that are connected to the output. + +
+ + +
+ +# tf.raw_ops.NcclBroadcast + + + + + + + + + +Sends `input` to all devices that are connected to the output. + + + + + + + + + +Sends `input` to all devices that are connected to the output. + +The graph should be constructed so that all ops connected to the output have a +valid device assignment, and the op itself is assigned one of these devices. + +input: The input to the broadcast. +output: The same as input. +shape: The shape of the input tensor. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`. +
+`shape` + +A tf.TensorShape or list of `ints`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NcclReduce.md b/site/en/api_docs/python/tf/raw_ops/NcclReduce.md new file mode 100644 index 00000000000..b455e8766a5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NcclReduce.md @@ -0,0 +1,92 @@ +description: Reduces input from num_devices using reduction to a single device. + +
+ + +
+ +# tf.raw_ops.NcclReduce + + + + + + + + + +Reduces `input` from `num_devices` using `reduction` to a single device. + + + + + + + + + +Reduces `input` from `num_devices` using `reduction` to a single device. + +The graph should be constructed so that all inputs have a valid device +assignment, and the op itself is assigned one of these devices. + +input: The input to the reduction. +data: the value of the reduction across all `num_devices` devices. +reduction: the reduction operation to perform. + + + + + + + + + + + + + + + + +
+`input` + +A list of at least 1 `Tensor` objects with the same type in: `half`, `float32`, `float64`, `int32`, `int64`. +
+`reduction` + +A `string` from: `"min", "max", "prod", "sum"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Ndtri.md b/site/en/api_docs/python/tf/raw_ops/Ndtri.md new file mode 100644 index 00000000000..a5f1379de77 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Ndtri.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.Ndtri + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
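The page above carries no description; `Ndtri` is understood to compute the inverse CDF (quantile function) of the standard normal distribution. A small sketch under that assumption (eager execution):

```
import tensorflow as tf

# Maps probabilities in (0, 1) to standard-normal quantiles:
# 0.5 -> 0, and 0.975 -> roughly 1.96.
p = tf.constant([0.025, 0.5, 0.975])
print(tf.raw_ops.Ndtri(x=p))  # approximately [-1.96  0.    1.96]
```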
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Neg.md b/site/en/api_docs/python/tf/raw_ops/Neg.md new file mode 100644 index 00000000000..b7ebd129a30 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Neg.md @@ -0,0 +1,78 @@ +description: Computes numerical negative value element-wise. + +
+ + +
+ +# tf.raw_ops.Neg + + + + + + + + + +Computes numerical negative value element-wise. + + + + + + + + + +I.e., \\(y = -x\\). + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NextAfter.md b/site/en/api_docs/python/tf/raw_ops/NextAfter.md new file mode 100644 index 00000000000..e7c0266c3d3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NextAfter.md @@ -0,0 +1,94 @@ +description: Returns the next representable value of x1 in the direction of x2, element-wise. + +
+ + +
+ +# tf.raw_ops.NextAfter + + + + + + + + + +Returns the next representable value of `x1` in the direction of `x2`, element-wise. + + + + + + + + + +This operation returns the same result as the C++ std::nextafter function. + +It can also return a subnormal number. + + + + + + + + + + + + + + + + + + +
+`x1` + +A `Tensor`. Must be one of the following types: `float64`, `float32`. +
+`x2` + +A `Tensor`. Must have the same type as `x1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x1`. +
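A small sketch (eager execution) showing the single-step moves toward `x2`:

```
import tensorflow as tf

x1 = tf.constant([1.0, 1.0], dtype=tf.float64)
x2 = tf.constant([2.0, 0.0], dtype=tf.float64)

# Each element of x1 moves by one representable float64 step toward x2.
nxt = tf.raw_ops.NextAfter(x1=x1, x2=x2)
print(nxt - x1)  # roughly [ 2.2e-16 -1.1e-16]
```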
+ + + +#### Cpp Compatibility +Equivalent to C++ std::nextafter function. + diff --git a/site/en/api_docs/python/tf/raw_ops/NextIteration.md b/site/en/api_docs/python/tf/raw_ops/NextIteration.md new file mode 100644 index 00000000000..124511a29af --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NextIteration.md @@ -0,0 +1,77 @@ +description: Makes its input available to the next iteration. + +
+ + +
+ +# tf.raw_ops.NextIteration + + + + + + + + + +Makes its input available to the next iteration. + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. The tensor to be made available to the next iteration. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NoOp.md b/site/en/api_docs/python/tf/raw_ops/NoOp.md new file mode 100644 index 00000000000..7a4f941df7d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NoOp.md @@ -0,0 +1,70 @@ +description: Does nothing. Only useful as a placeholder for control edges. + +
+ + +
+ +# tf.raw_ops.NoOp + + + + + + + + + +Does nothing. Only useful as a placeholder for control edges. + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NonDeterministicInts.md b/site/en/api_docs/python/tf/raw_ops/NonDeterministicInts.md new file mode 100644 index 00000000000..dada36de540 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NonDeterministicInts.md @@ -0,0 +1,86 @@ +description: Non-deterministically generates some integers. + +
+ + +
+ +# tf.raw_ops.NonDeterministicInts + + + + + + + + + +Non-deterministically generates some integers. + + + + + + + + + +This op may use some OS-provided source of non-determinism (e.g. an RNG), so each execution will give different results. + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. The shape of the output tensor. +
+`dtype` + +An optional tf.DType. Defaults to tf.int64. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NonMaxSuppression.md b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppression.md new file mode 100644 index 00000000000..d93c1421baf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppression.md @@ -0,0 +1,121 @@ +description: Greedily selects a subset of bounding boxes in descending order of score, + +
+ + +
+ +# tf.raw_ops.NonMaxSuppression + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score, + + + + + + + + + +pruning away boxes that have high intersection-over-union (IOU) overlap +with previously selected boxes. Bounding boxes are supplied as +[y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any +diagonal pair of box corners and the coordinates can be provided as normalized +(i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm +is agnostic to where the origin is in the coordinate system. Note that this +algorithm is invariant to orthogonal transformations and translations +of the coordinate system; thus translating or reflections of the coordinate +system result in the same boxes being selected by the algorithm. +The output of this operation is a set of integers indexing into the input +collection of bounding boxes representing the selected boxes. The bounding +box coordinates corresponding to the selected indices can then be obtained +using the `tf.gather operation`. For example: + selected_indices = tf.image.non_max_suppression( + boxes, scores, max_output_size, iou_threshold) + selected_boxes = tf.gather(boxes, selected_indices) + + + + + + + + + + + + + + + + + + + + + + +
+`boxes` + +A `Tensor` of type `float32`. +A 2-D float tensor of shape `[num_boxes, 4]`. +
+`scores` + +A `Tensor` of type `float32`. +A 1-D float tensor of shape `[num_boxes]` representing a single +score corresponding to each box (each row of boxes). +
+`max_output_size` + +A `Tensor` of type `int32`. +A scalar integer tensor representing the maximum number of +boxes to be selected by non max suppression. +
+`iou_threshold` + +An optional `float`. Defaults to `0.5`. +A float representing the threshold for deciding whether boxes +overlap too much with respect to IOU. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV2.md b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV2.md new file mode 100644 index 00000000000..ada6a156ae5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV2.md @@ -0,0 +1,123 @@ +description: Greedily selects a subset of bounding boxes in descending order of score, + +
+ + +
+ +# tf.raw_ops.NonMaxSuppressionV2 + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score, + + + + + + + + + +pruning away boxes that have high intersection-over-union (IOU) overlap +with previously selected boxes. Bounding boxes are supplied as +[y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any +diagonal pair of box corners and the coordinates can be provided as normalized +(i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm +is agnostic to where the origin is in the coordinate system. Note that this +algorithm is invariant to orthogonal transformations and translations +of the coordinate system; thus translating or reflections of the coordinate +system result in the same boxes being selected by the algorithm. + +The output of this operation is a set of integers indexing into the input +collection of bounding boxes representing the selected boxes. The bounding +box coordinates corresponding to the selected indices can then be obtained +using the `tf.gather operation`. For example: + + selected_indices = tf.image.non_max_suppression_v2( + boxes, scores, max_output_size, iou_threshold) + selected_boxes = tf.gather(boxes, selected_indices) + + + + + + + + + + + + + + + + + + + + + + +
+`boxes` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +A 2-D float tensor of shape `[num_boxes, 4]`. +
+`scores` + +A `Tensor`. Must have the same type as `boxes`. +A 1-D float tensor of shape `[num_boxes]` representing a single +score corresponding to each box (each row of boxes). +
+`max_output_size` + +A `Tensor` of type `int32`. +A scalar integer tensor representing the maximum number of +boxes to be selected by non max suppression. +
+`iou_threshold` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +A 0-D float tensor representing the threshold for deciding whether +boxes overlap too much with respect to IOU. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV3.md b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV3.md new file mode 100644 index 00000000000..ff65ef9e52d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV3.md @@ -0,0 +1,131 @@ +description: Greedily selects a subset of bounding boxes in descending order of score, + +
+ + +
+ +# tf.raw_ops.NonMaxSuppressionV3 + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score, + + + + + + + + + +pruning away boxes that have high intersection-over-union (IOU) overlap +with previously selected boxes. Bounding boxes with score less than +`score_threshold` are removed. Bounding boxes are supplied as +[y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any +diagonal pair of box corners and the coordinates can be provided as normalized +(i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm +is agnostic to where the origin is in the coordinate system and more +generally is invariant to orthogonal transformations and translations +of the coordinate system; thus translating or reflections of the coordinate +system result in the same boxes being selected by the algorithm. +The output of this operation is a set of integers indexing into the input +collection of bounding boxes representing the selected boxes. The bounding +box coordinates corresponding to the selected indices can then be obtained +using the `tf.gather operation`. For example: + selected_indices = tf.image.non_max_suppression_v2( + boxes, scores, max_output_size, iou_threshold, score_threshold) + selected_boxes = tf.gather(boxes, selected_indices) + + + + + + + + + + + + + + + + + + + + + + + + + +
+`boxes` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +A 2-D float tensor of shape `[num_boxes, 4]`. +
+`scores` + +A `Tensor`. Must have the same type as `boxes`. +A 1-D float tensor of shape `[num_boxes]` representing a single +score corresponding to each box (each row of boxes). +
+`max_output_size` + +A `Tensor` of type `int32`. +A scalar integer tensor representing the maximum number of +boxes to be selected by non max suppression. +
+`iou_threshold` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +A 0-D float tensor representing the threshold for deciding whether +boxes overlap too much with respect to IOU. +
+`score_threshold` + +A `Tensor`. Must have the same type as `iou_threshold`. +A 0-D float tensor representing the threshold for deciding when to remove +boxes based on score. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
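A runnable sketch (eager execution; box coordinates and scores are illustrative). The first two boxes overlap heavily, so only one of them survives:

```
import tensorflow as tf

# Boxes are [y1, x1, y2, x2].
boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.0, 0.1, 1.0, 1.1],
                     [0.0, 2.0, 1.0, 3.0]])
scores = tf.constant([0.9, 0.75, 0.6])

selected = tf.raw_ops.NonMaxSuppressionV3(
    boxes=boxes,
    scores=scores,
    max_output_size=tf.constant(3),
    iou_threshold=tf.constant(0.5),
    score_threshold=tf.constant(0.0))
print(selected)                    # [0 2]; box 1 is suppressed by box 0
print(tf.gather(boxes, selected))  # coordinates of the kept boxes
```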
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV4.md b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV4.md new file mode 100644 index 00000000000..661a6ccbd7b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV4.md @@ -0,0 +1,155 @@ +description: Greedily selects a subset of bounding boxes in descending order of score, + +
+ + +
+ +# tf.raw_ops.NonMaxSuppressionV4 + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score, + + + + + + + + + +pruning away boxes that have high intersection-over-union (IOU) overlap +with previously selected boxes. Bounding boxes with score less than +`score_threshold` are removed. Bounding boxes are supplied as +[y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any +diagonal pair of box corners and the coordinates can be provided as normalized +(i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm +is agnostic to where the origin is in the coordinate system and more +generally is invariant to orthogonal transformations and translations +of the coordinate system; thus translating or reflections of the coordinate +system result in the same boxes being selected by the algorithm. +The output of this operation is a set of integers indexing into the input +collection of bounding boxes representing the selected boxes. The bounding +box coordinates corresponding to the selected indices can then be obtained +using the `tf.gather operation`. For example: + selected_indices = tf.image.non_max_suppression_v2( + boxes, scores, max_output_size, iou_threshold, score_threshold) + selected_boxes = tf.gather(boxes, selected_indices) + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`boxes` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +A 2-D float tensor of shape `[num_boxes, 4]`. +
+`scores` + +A `Tensor`. Must have the same type as `boxes`. +A 1-D float tensor of shape `[num_boxes]` representing a single +score corresponding to each box (each row of boxes). +
+`max_output_size` + +A `Tensor` of type `int32`. +A scalar integer tensor representing the maximum number of +boxes to be selected by non max suppression. +
+`iou_threshold` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +A 0-D float tensor representing the threshold for deciding whether +boxes overlap too much with respect to IOU. +
+`score_threshold` + +A `Tensor`. Must have the same type as `iou_threshold`. +A 0-D float tensor representing the threshold for deciding when to remove +boxes based on score. +
+`pad_to_max_output_size` + +An optional `bool`. Defaults to `False`. +If true, the output `selected_indices` is padded to be of length +`max_output_size`. Defaults to false. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (selected_indices, valid_outputs). +
+`selected_indices` + +A `Tensor` of type `int32`. +
+`valid_outputs` + +A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV5.md b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV5.md new file mode 100644 index 00000000000..643fc29a7bd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionV5.md @@ -0,0 +1,177 @@ +description: Greedily selects a subset of bounding boxes in descending order of score, + +
+ + +
+ +# tf.raw_ops.NonMaxSuppressionV5 + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score, + + + + + + + + + +pruning away boxes that have high intersection-over-union (IOU) overlap +with previously selected boxes. Bounding boxes with score less than +`score_threshold` are removed. Bounding boxes are supplied as +[y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any +diagonal pair of box corners and the coordinates can be provided as normalized +(i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm +is agnostic to where the origin is in the coordinate system and more +generally is invariant to orthogonal transformations and translations +of the coordinate system; thus translating or reflections of the coordinate +system result in the same boxes being selected by the algorithm. +The output of this operation is a set of integers indexing into the input +collection of bounding boxes representing the selected boxes. The bounding +box coordinates corresponding to the selected indices can then be obtained +using the `tf.gather operation`. For example: + selected_indices = tf.image.non_max_suppression_v2( + boxes, scores, max_output_size, iou_threshold, score_threshold) + selected_boxes = tf.gather(boxes, selected_indices) +This op also supports a Soft-NMS (with Gaussian weighting) mode (c.f. +Bodla et al, https://arxiv.org/abs/1704.04503) where boxes reduce the score +of other overlapping boxes instead of directly causing them to be pruned. +To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be +larger than 0. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`boxes` + +A `Tensor`. Must be one of the following types: `half`, `float32`. +A 2-D float tensor of shape `[num_boxes, 4]`. +
+`scores` + +A `Tensor`. Must have the same type as `boxes`. +A 1-D float tensor of shape `[num_boxes]` representing a single +score corresponding to each box (each row of boxes). +
+`max_output_size` + +A `Tensor` of type `int32`. +A scalar integer tensor representing the maximum number of +boxes to be selected by non max suppression. +
+`iou_threshold` + +A `Tensor`. Must have the same type as `boxes`. +A 0-D float tensor representing the threshold for deciding whether +boxes overlap too much with respect to IOU. +
+`score_threshold` + +A `Tensor`. Must have the same type as `boxes`. +A 0-D float tensor representing the threshold for deciding when to remove +boxes based on score. +
+`soft_nms_sigma` + +A `Tensor`. Must have the same type as `boxes`. +A 0-D float tensor representing the sigma parameter for Soft NMS; see Bodla et +al (c.f. https://arxiv.org/abs/1704.04503). When `soft_nms_sigma=0.0` (which +is default), we fall back to standard (hard) NMS. +
+`pad_to_max_output_size` + +An optional `bool`. Defaults to `False`. +If true, the output `selected_indices` is padded to be of length +`max_output_size`. Defaults to false. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (selected_indices, selected_scores, valid_outputs). +
+`selected_indices` + +A `Tensor` of type `int32`. +
+`selected_scores` + +A `Tensor`. Has the same type as `boxes`. +
+`valid_outputs` + +A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionWithOverlaps.md b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionWithOverlaps.md new file mode 100644 index 00000000000..639094b8289 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NonMaxSuppressionWithOverlaps.md @@ -0,0 +1,129 @@ +description: Greedily selects a subset of bounding boxes in descending order of score, + +
+ + +
+ +# tf.raw_ops.NonMaxSuppressionWithOverlaps + + + + + + + + + +Greedily selects a subset of bounding boxes in descending order of score, + + + + + + + + + +pruning away boxes that have high overlaps +with previously selected boxes. Bounding boxes with score less than +`score_threshold` are removed. N-by-n overlap values are supplied as a square matrix, +which allows for defining a custom overlap criterion (e.g. intersection over union, +intersection over area, etc.). + +The output of this operation is a set of integers indexing into the input +collection of bounding boxes representing the selected boxes. The bounding +box coordinates corresponding to the selected indices can then be obtained +using the `tf.gather` operation. For example: + + selected_indices = tf.image.non_max_suppression_with_overlaps( + overlaps, scores, max_output_size, overlap_threshold, score_threshold) + selected_boxes = tf.gather(boxes, selected_indices) + + + + + + + + + + + + + + + + + + + + + + + +
+`overlaps` + +A `Tensor` of type `float32`. +A 2-D float tensor of shape `[num_boxes, num_boxes]` representing +the n-by-n box overlap values. +
+`scores` + +A `Tensor` of type `float32`. +A 1-D float tensor of shape `[num_boxes]` representing a single +score corresponding to each box (each row of boxes). +
+`max_output_size` + +A `Tensor` of type `int32`. +A scalar integer tensor representing the maximum number of +boxes to be selected by non max suppression. +
+`overlap_threshold` + +A `Tensor` of type `float32`. +A 0-D float tensor representing the threshold for deciding whether +boxes overlap too much. +
+`score_threshold` + +A `Tensor` of type `float32`. +A 0-D float tensor representing the threshold for deciding when to remove +boxes based on score. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NonSerializableDataset.md b/site/en/api_docs/python/tf/raw_ops/NonSerializableDataset.md new file mode 100644 index 00000000000..af01b2bbfea --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NonSerializableDataset.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.NonSerializableDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NotEqual.md b/site/en/api_docs/python/tf/raw_ops/NotEqual.md new file mode 100644 index 00000000000..51b46c11583 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NotEqual.md @@ -0,0 +1,93 @@ +description: Returns the truth value of (x != y) element-wise. + +
+ + +
+ +# tf.raw_ops.NotEqual + + + + + + + + + +Returns the truth value of (x != y) element-wise. + + + + + + + + + +*NOTE*: `NotEqual` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`incompatible_shape_error` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
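A short sketch (eager execution), including broadcasting against a scalar:

```
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([1, 5, 3])
print(tf.raw_ops.NotEqual(x=x, y=y))               # [False  True False]

# Broadcasting against a scalar works as well.
print(tf.raw_ops.NotEqual(x=x, y=tf.constant(2)))  # [ True False  True]
```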
+ diff --git a/site/en/api_docs/python/tf/raw_ops/NthElement.md b/site/en/api_docs/python/tf/raw_ops/NthElement.md new file mode 100644 index 00000000000..b136883d651 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/NthElement.md @@ -0,0 +1,103 @@ +description: Finds values of the n-th order statistic for the last dimension. + +
+ + +
+ +# tf.raw_ops.NthElement + + + + + + + + + +Finds values of the `n`-th order statistic for the last dimension. + + + + + + + + + +If the input is a vector (rank-1), finds the entry that is the nth-smallest +value in the vector and outputs its value as a scalar tensor. + +For matrices (resp. higher rank input), computes the entry that is the +nth-smallest value in each row (resp. vector along the last dimension). Thus, + + values.shape = input.shape[:-1] + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +1-D or higher with last dimension at least `n+1`. +
+`n` + +A `Tensor` of type `int32`. +0-D. Position in the sorted vector to select along the last dimension (along +each row for matrices). Valid range of n is `[0, input.shape[-1])` +
+`reverse` + +An optional `bool`. Defaults to `False`. +When set to True, find the nth-largest value in the vector and vice +versa. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
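A small sketch (eager execution) showing the 0-based `n` and the `reverse` flag:

```
import tensorflow as tf

x = tf.constant([[4., 2., 8., 1.],
                 [9., 3., 5., 7.]])

# n = 1 selects the second-smallest entry of each row.
print(tf.raw_ops.NthElement(input=x, n=1))                # [2. 5.]

# reverse=True selects the second-largest instead.
print(tf.raw_ops.NthElement(input=x, n=1, reverse=True))  # [4. 7.]
```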
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OneHot.md b/site/en/api_docs/python/tf/raw_ops/OneHot.md new file mode 100644 index 00000000000..55b9ed766ee --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OneHot.md @@ -0,0 +1,197 @@ +description: Returns a one-hot tensor. + +
+ + +
+ +# tf.raw_ops.OneHot + + + + + + + + + +Returns a one-hot tensor. + + + + + + + + + +The locations represented by indices in `indices` take value `on_value`, +while all other locations take value `off_value`. + +If the input `indices` is rank `N`, the output will have rank `N+1`, +The new axis is created at dimension `axis` (default: the new axis is +appended at the end). + +If `indices` is a scalar the output shape will be a vector of length `depth`. + +If `indices` is a vector of length `features`, the output shape will be: +``` + features x depth if axis == -1 + depth x features if axis == 0 +``` + +If `indices` is a matrix (batch) with shape `[batch, features]`, +the output shape will be: +``` + batch x features x depth if axis == -1 + batch x depth x features if axis == 1 + depth x batch x features if axis == 0 +``` + + +Examples +========= + +Suppose that +``` + indices = [0, 2, -1, 1] + depth = 3 + on_value = 5.0 + off_value = 0.0 + axis = -1 +``` + +Then output is `[4 x 3]`: +``` +output = + [5.0 0.0 0.0] // one_hot(0) + [0.0 0.0 5.0] // one_hot(2) + [0.0 0.0 0.0] // one_hot(-1) + [0.0 5.0 0.0] // one_hot(1) +``` + +Suppose that +``` + indices = [0, 2, -1, 1] + depth = 3 + on_value = 0.0 + off_value = 3.0 + axis = 0 +``` + +Then output is `[3 x 4]`: +``` +output = + [0.0 3.0 3.0 3.0] + [3.0 3.0 3.0 0.0] + [3.0 3.0 3.0 3.0] + [3.0 0.0 3.0 3.0] +// ^ one_hot(0) +// ^ one_hot(2) +// ^ one_hot(-1) +// ^ one_hot(1) +``` + +Suppose that +``` + indices = [[0, 2], [1, -1]] + depth = 3 + on_value = 1.0 + off_value = 0.0 + axis = -1 +``` + +Then output is `[2 x 2 x 3]`: +``` +output = + [ + [1.0, 0.0, 0.0] // one_hot(0) + [0.0, 0.0, 1.0] // one_hot(2) + ][ + [0.0, 1.0, 0.0] // one_hot(1) + [0.0, 0.0, 0.0] // one_hot(-1) + ] +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`indices` + +A `Tensor`. Must be one of the following types: `uint8`, `int32`, `int64`. +A tensor of indices. +
+`depth` + +A `Tensor` of type `int32`. +A scalar defining the depth of the one hot dimension. +
+`on_value` + +A `Tensor`. +A scalar defining the value to fill in output when `indices[j] = i`. +
+`off_value` + +A `Tensor`. Must have the same type as `on_value`. +A scalar defining the value to fill in output when `indices[j] != i`. +
+`axis` + +An optional `int`. Defaults to `-1`. +The axis to fill (default: -1, a new inner-most axis). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `on_value`. +
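A minimal eager sketch (assuming TensorFlow 2.x) reproducing the first worked example above; the public `tf.one_hot` wrapper is the usual entry point for this op:

```python
import tensorflow as tf

out = tf.raw_ops.OneHot(indices=tf.constant([0, 2, -1, 1]),
                        depth=3, on_value=5.0, off_value=0.0, axis=-1)
print(out.numpy())
# [[5. 0. 0.]
#  [0. 0. 5.]
#  [0. 0. 0.]
#  [0. 5. 0.]]
```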
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OneShotIterator.md b/site/en/api_docs/python/tf/raw_ops/OneShotIterator.md new file mode 100644 index 00000000000..7d58e04d7f5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OneShotIterator.md @@ -0,0 +1,125 @@ +description: Makes a "one-shot" iterator that can be iterated only once. + +
+ + +
+ +# tf.raw_ops.OneShotIterator + + + + + + + + + +Makes a "one-shot" iterator that can be iterated only once. + + + + + + + + + +A one-shot iterator bundles the logic for defining the dataset and +the state of the iterator in a single op, which allows simple input +pipelines to be defined without an additional initialization +("MakeIterator") step. + +One-shot iterators have the following limitations: + +* They do not support parameterization: all logic for creating the underlying + dataset must be bundled in the `dataset_factory` function. +* They are not resettable. Once a one-shot iterator reaches the end of its + underlying dataset, subsequent "IteratorGetNext" operations on that + iterator will always produce an `OutOfRange` error. + +For greater flexibility, use "Iterator" and "MakeIterator" to define +an iterator using an arbitrary subgraph, which may capture tensors +(including fed values) as parameters, and which may be reset multiple +times by rerunning "MakeIterator". + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dataset_factory` + +A function decorated with @Defun. +A function of type `() -> DT_VARIANT`, where the returned +DT_VARIANT is a dataset. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
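This op is normally emitted for you by the TF1-style wrapper `tf.compat.v1.data.make_one_shot_iterator`; a graph-mode sketch (assuming the `tf.compat.v1` API is available and no ops have been created yet in the program):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # must run before any ops are built

dataset = tf.compat.v1.data.Dataset.range(3)
iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
next_element = iterator.get_next()

with tf.compat.v1.Session() as sess:
    print(sess.run(next_element))  # 0
    print(sess.run(next_element))  # 1
    print(sess.run(next_element))  # 2
```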
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OnesLike.md b/site/en/api_docs/python/tf/raw_ops/OnesLike.md new file mode 100644 index 00000000000..ba37924eb5a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OnesLike.md @@ -0,0 +1,78 @@ +description: Returns a tensor of ones with the same shape and type as x. + +
+ + +
+ +# tf.raw_ops.OnesLike + + + + + + + + + +Returns a tensor of ones with the same shape and type as x. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `complex64`, `complex128`, `bool`. +a tensor of type T. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
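A one-line eager sketch (assuming TensorFlow 2.x); the public `tf.ones_like` offers the same behavior:

```python
import tensorflow as tf

x = tf.constant([[1, 2, 3], [4, 5, 6]])
print(tf.raw_ops.OnesLike(x=x).numpy())
# [[1 1 1]
#  [1 1 1]]
```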
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OptimizeDataset.md b/site/en/api_docs/python/tf/raw_ops/OptimizeDataset.md new file mode 100644 index 00000000000..88856555291 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OptimizeDataset.md @@ -0,0 +1,109 @@ +description: Creates a dataset by applying optimizations to input_dataset. + +
+ + +
+ +# tf.raw_ops.OptimizeDataset + + + + + + + + + +Creates a dataset by applying optimizations to `input_dataset`. + + + + + + + + + +Creates a dataset by applying optimizations to `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`optimizations` + +A `Tensor` of type `string`. +A tf.string vector tf.Tensor identifying optimizations to use. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`optimization_configs` + +An optional list of `strings`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OptionalFromValue.md b/site/en/api_docs/python/tf/raw_ops/OptionalFromValue.md new file mode 100644 index 00000000000..b2a6889f3f1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OptionalFromValue.md @@ -0,0 +1,77 @@ +description: Constructs an Optional variant from a tuple of tensors. + +
+ + +
+ +# tf.raw_ops.OptionalFromValue + + + + + + + + + +Constructs an Optional variant from a tuple of tensors. + + + + + + + + + + + + + + + + + + + + + + +
+`components` + +A list of `Tensor` objects. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OptionalGetValue.md b/site/en/api_docs/python/tf/raw_ops/OptionalGetValue.md new file mode 100644 index 00000000000..4c25397ee5a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OptionalGetValue.md @@ -0,0 +1,91 @@ +description: Returns the value stored in an Optional variant or raises an error if none exists. + +
+ + +
+ +# tf.raw_ops.OptionalGetValue + + + + + + + + + +Returns the value stored in an Optional variant or raises an error if none exists. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`optional` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `output_types`. +
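These Optional ops are usually produced internally by tf.data (for example by `get_next_as_optional`), but they can also be exercised directly; a minimal eager sketch (assuming TensorFlow 2.x, with an arbitrarily chosen scalar int32 payload) that round-trips a value through `OptionalFromValue`, `OptionalHasValue`, and `OptionalGetValue`:

```python
import tensorflow as tf

opt = tf.raw_ops.OptionalFromValue(components=[tf.constant(42)])
print(tf.raw_ops.OptionalHasValue(optional=opt).numpy())  # True

(value,) = tf.raw_ops.OptionalGetValue(optional=opt,
                                       output_types=[tf.int32],
                                       output_shapes=[[]])
print(value.numpy())  # 42
```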
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OptionalHasValue.md b/site/en/api_docs/python/tf/raw_ops/OptionalHasValue.md new file mode 100644 index 00000000000..3ffeb17960c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OptionalHasValue.md @@ -0,0 +1,77 @@ +description: Returns true if and only if the given Optional variant has a value. + +
+ + +
+ +# tf.raw_ops.OptionalHasValue + + + + + + + + + +Returns true if and only if the given Optional variant has a value. + + + + + + + + + + + + + + + + + + + + + + +
+`optional` + +A `Tensor` of type `variant`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OptionalNone.md b/site/en/api_docs/python/tf/raw_ops/OptionalNone.md new file mode 100644 index 00000000000..bb87e39decd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OptionalNone.md @@ -0,0 +1,70 @@ +description: Creates an Optional variant with no value. + +
+ + +
+ +# tf.raw_ops.OptionalNone + + + + + + + + + +Creates an Optional variant with no value. + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
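Paired with `OptionalHasValue` above, this op gives the empty case; a short eager sketch (assuming TensorFlow 2.x):

```python
import tensorflow as tf

empty = tf.raw_ops.OptionalNone()
print(tf.raw_ops.OptionalHasValue(optional=empty).numpy())  # False
```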
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OrderedMapClear.md b/site/en/api_docs/python/tf/raw_ops/OrderedMapClear.md new file mode 100644 index 00000000000..e221ae3900c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OrderedMapClear.md @@ -0,0 +1,105 @@ +description: Op removes all elements in the underlying container. + +
+ + +
+ +# tf.raw_ops.OrderedMapClear + + + + + + + + + +Op removes all elements in the underlying container. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OrderedMapIncompleteSize.md b/site/en/api_docs/python/tf/raw_ops/OrderedMapIncompleteSize.md new file mode 100644 index 00000000000..7266bd59578 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OrderedMapIncompleteSize.md @@ -0,0 +1,105 @@ +description: Op returns the number of incomplete elements in the underlying container. + +
+ + +
+ +# tf.raw_ops.OrderedMapIncompleteSize + + + + + + + + + +Op returns the number of incomplete elements in the underlying container. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OrderedMapPeek.md b/site/en/api_docs/python/tf/raw_ops/OrderedMapPeek.md new file mode 100644 index 00000000000..a12e7bfdd3f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OrderedMapPeek.md @@ -0,0 +1,123 @@ +description: Op peeks at the values at the specified key. If the + +
+ + +
+ +# tf.raw_ops.OrderedMapPeek + + + + + + + + + +Op peeks at the values at the specified key. If the + + + + + + + + + +underlying container does not contain this key +this op will block until it does. This Op is optimized for +performance. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A `Tensor` of type `int64`. +
+`indices` + +A `Tensor` of type `int32`. +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `dtypes`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OrderedMapSize.md b/site/en/api_docs/python/tf/raw_ops/OrderedMapSize.md new file mode 100644 index 00000000000..a40220949ce --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OrderedMapSize.md @@ -0,0 +1,105 @@ +description: Op returns the number of elements in the underlying container. + +
+ + +
+ +# tf.raw_ops.OrderedMapSize + + + + + + + + + +Op returns the number of elements in the underlying container. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OrderedMapStage.md b/site/en/api_docs/python/tf/raw_ops/OrderedMapStage.md new file mode 100644 index 00000000000..a7c7fd7e260 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OrderedMapStage.md @@ -0,0 +1,134 @@ +description: Stage (key, values) in the underlying container which behaves like a ordered + +
+ + +
+ +# tf.raw_ops.OrderedMapStage + + + + + + + + + +Stage (key, values) in the underlying container which behaves like a ordered + + + + + + + + + +associative container. Elements are ordered by key. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A `Tensor` of type `int64`. +
+`indices` + +A `Tensor` of type `int32`. +
+`values` + +A list of `Tensor` objects. The tensors to insert at the given key; their data types should adhere to `dtypes`. +
+`dtypes` + +A list of `tf.DTypes`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +Maximum number of elements in the Staging Area. If > 0, inserts +on the container will block when the capacity is reached. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. Otherwise, +a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +It is necessary to match this name to the matching Unstage Op. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OrderedMapUnstage.md b/site/en/api_docs/python/tf/raw_ops/OrderedMapUnstage.md new file mode 100644 index 00000000000..1b596cdf1a4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OrderedMapUnstage.md @@ -0,0 +1,122 @@ +description: Op removes and returns the values associated with the key + +
+ + +
+ +# tf.raw_ops.OrderedMapUnstage + + + + + + + + + +Op removes and returns the values associated with the key + + + + + + + + + +from the underlying container. If the underlying container +does not contain this key, the op will block until it does. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`key` + +A `Tensor` of type `int64`. +
+`indices` + +A `Tensor` of type `int32`. +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `dtypes`. +
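A rough, untested sketch of how this staging family is meant to compose (assuming TensorFlow 2.x eager execution, and assuming that the explicit, made-up `shared_name="demo_map"` makes the Stage, Size, and Unstage calls resolve to the same underlying container, as the `shared_name` documentation suggests):

```python
import tensorflow as tf

# Stage a single-component value under key 7.
tf.raw_ops.OrderedMapStage(
    key=tf.constant(7, tf.int64),
    indices=tf.constant([0], tf.int32),
    values=[tf.constant([1.0, 2.0])],
    dtypes=[tf.float32],
    shared_name="demo_map")

# The map now holds one element.
print(tf.raw_ops.OrderedMapSize(dtypes=[tf.float32],
                                shared_name="demo_map").numpy())  # 1

# Remove and return the staged values for key 7.
values = tf.raw_ops.OrderedMapUnstage(
    key=tf.constant(7, tf.int64),
    indices=tf.constant([0], tf.int32),
    dtypes=[tf.float32],
    shared_name="demo_map")
print(values[0].numpy())  # [1. 2.]
```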
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OrderedMapUnstageNoKey.md b/site/en/api_docs/python/tf/raw_ops/OrderedMapUnstageNoKey.md new file mode 100644 index 00000000000..5c52049bc5e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OrderedMapUnstageNoKey.md @@ -0,0 +1,129 @@ +description: Op removes and returns the (key, value) element with the smallest + +
+ + +
+ +# tf.raw_ops.OrderedMapUnstageNoKey + + + + + + + + + +Op removes and returns the (key, value) element with the smallest + + + + + + + + + +key from the underlying container. If the underlying container +does not contain elements, the op will block until it does. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`indices` + +A `Tensor` of type `int32`. +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (key, values). +
+`key` + +A `Tensor` of type `int64`. +
+`values` + +A list of `Tensor` objects of type `dtypes`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OutfeedDequeue.md b/site/en/api_docs/python/tf/raw_ops/OutfeedDequeue.md new file mode 100644 index 00000000000..dd64e16537c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OutfeedDequeue.md @@ -0,0 +1,95 @@ +description: Retrieves a single tensor from the computation outfeed. + +
+ + +
+ +# tf.raw_ops.OutfeedDequeue + + + + + + + + + +Retrieves a single tensor from the computation outfeed. + + + + + + + + + +This operation will block indefinitely until data is available. + + + + + + + + + + + + + + + + + + + +
+`dtype` + +A tf.DType. The type of elements in the tensor. +
+`shape` + +A tf.TensorShape or list of `ints`. The shape of the tensor. +
+`device_ordinal` + +An optional `int`. Defaults to `-1`. +The TPU device to use. This should be -1 when the Op +is running on a TPU device, and >= 0 when the Op is running on the CPU +device. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OutfeedDequeueTuple.md b/site/en/api_docs/python/tf/raw_ops/OutfeedDequeueTuple.md new file mode 100644 index 00000000000..8a8a032c0d0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OutfeedDequeueTuple.md @@ -0,0 +1,98 @@ +description: Retrieve multiple values from the computation outfeed. + +
+ + +
+ +# tf.raw_ops.OutfeedDequeueTuple + + + + + + + + + +Retrieve multiple values from the computation outfeed. + + + + + + + + + +This operation will block indefinitely until data is available. Output `i` +corresponds to XLA tuple element `i`. + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +The element types of each element in `outputs`. +
+`shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +The shapes of each tensor in `outputs`. +
+`device_ordinal` + +An optional `int`. Defaults to `-1`. +The TPU device to use. This should be -1 when the Op +is running on a TPU device, and >= 0 when the Op is running on the CPU +device. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `dtypes`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OutfeedEnqueue.md b/site/en/api_docs/python/tf/raw_ops/OutfeedEnqueue.md new file mode 100644 index 00000000000..aaf77b1d33f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OutfeedEnqueue.md @@ -0,0 +1,77 @@ +description: Enqueue a Tensor on the computation outfeed. + +
+ + +
+ +# tf.raw_ops.OutfeedEnqueue + + + + + + + + + +Enqueue a Tensor on the computation outfeed. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. A tensor that will be inserted into the outfeed queue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/OutfeedEnqueueTuple.md b/site/en/api_docs/python/tf/raw_ops/OutfeedEnqueueTuple.md new file mode 100644 index 00000000000..b472a83a8b0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/OutfeedEnqueueTuple.md @@ -0,0 +1,79 @@ +description: Enqueue multiple Tensor values on the computation outfeed. + +
+ + +
+ +# tf.raw_ops.OutfeedEnqueueTuple + + + + + + + + + +Enqueue multiple Tensor values on the computation outfeed. + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of `Tensor` objects. +A list of tensors that will be inserted into the outfeed queue as an +XLA tuple. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Pack.md b/site/en/api_docs/python/tf/raw_ops/Pack.md new file mode 100644 index 00000000000..162c4acd923 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Pack.md @@ -0,0 +1,108 @@ +description: Packs a list of N rank-R tensors into one rank-(R+1) tensor. + +
+ + +
+ +# tf.raw_ops.Pack + + + + + + + + + +Packs a list of `N` rank-`R` tensors into one rank-`(R+1)` tensor. + + + + + + + + + +Packs the `N` tensors in `values` into a tensor with rank one higher than each +tensor in `values`, by packing them along the `axis` dimension. +Given a list of tensors of shape `(A, B, C)`; + +if `axis == 0` then the `output` tensor will have the shape `(N, A, B, C)`. +if `axis == 1` then the `output` tensor will have the shape `(A, N, B, C)`. +Etc. + +#### For example: + + + +``` +# 'x' is [1, 4] +# 'y' is [2, 5] +# 'z' is [3, 6] +pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]] # Pack along first dim. +pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]] +``` + +This is the opposite of `unpack`. + + + + + + + + + + + + + + + + +
+`values` + +A list of at least 1 `Tensor` objects with the same type. +Must be of same shape and type. +
+`axis` + +An optional `int`. Defaults to `0`. +Dimension along which to pack. Negative values wrap around, so the +valid range is `[-(R+1), R+1)`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `values`. +
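The public `tf.stack` wrapper is backed by this op; a direct eager sketch of the example above (assuming TensorFlow 2.x):

```python
import tensorflow as tf

x, y, z = tf.constant([1, 4]), tf.constant([2, 5]), tf.constant([3, 6])
print(tf.raw_ops.Pack(values=[x, y, z]).numpy())          # [[1 4] [2 5] [3 6]]
print(tf.raw_ops.Pack(values=[x, y, z], axis=1).numpy())  # [[1 2 3] [4 5 6]]
```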
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Pad.md b/site/en/api_docs/python/tf/raw_ops/Pad.md new file mode 100644 index 00000000000..1d932a43c41 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Pad.md @@ -0,0 +1,108 @@ +description: Pads a tensor with zeros. + +
+ + +
+ +# tf.raw_ops.Pad + + + + + + + + + +Pads a tensor with zeros. + + + + + + + + + +This operation pads a `input` with zeros according to the `paddings` you +specify. `paddings` is an integer tensor with shape `[Dn, 2]`, where n is the +rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates +how many zeros to add before the contents of `input` in that dimension, and +`paddings[D, 1]` indicates how many zeros to add after the contents of `input` +in that dimension. + +The padded size of each dimension D of the output is: + +`paddings(D, 0) + input.dim_size(D) + paddings(D, 1)` + +#### For example: + + + +``` +# 't' is [[1, 1], [2, 2]] +# 'paddings' is [[1, 1], [2, 2]] +# rank of 't' is 2 +pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0] + [0, 0, 1, 1, 0, 0] + [0, 0, 2, 2, 0, 0] + [0, 0, 0, 0, 0, 0]] +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`paddings` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
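A direct eager sketch of the example above (assuming TensorFlow 2.x); the public `tf.pad` covers this op when padding with zeros:

```python
import tensorflow as tf

t = tf.constant([[1, 1], [2, 2]])
paddings = tf.constant([[1, 1], [2, 2]])
print(tf.raw_ops.Pad(input=t, paddings=paddings).numpy())
# [[0 0 0 0 0 0]
#  [0 0 1 1 0 0]
#  [0 0 2 2 0 0]
#  [0 0 0 0 0 0]]
```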
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PadV2.md b/site/en/api_docs/python/tf/raw_ops/PadV2.md new file mode 100644 index 00000000000..4f27bf29dd0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PadV2.md @@ -0,0 +1,117 @@ +description: Pads a tensor. + +
+ + +
+ +# tf.raw_ops.PadV2 + + + + + + + + + +Pads a tensor. + + + + + + + + + +This operation pads `input` according to the `paddings` and `constant_values` +you specify. `paddings` is an integer tensor with shape `[Dn, 2]`, where n is +the rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates +how many padding values to add before the contents of `input` in that dimension, +and `paddings[D, 1]` indicates how many padding values to add after the contents +of `input` in that dimension. `constant_values` is a scalar tensor of the same +type as `input` that indicates the value to use for padding `input`. + +The padded size of each dimension D of the output is: + +`paddings(D, 0) + input.dim_size(D) + paddings(D, 1)` + +#### For example: + + + +``` +# 't' is [[1, 1], [2, 2]] +# 'paddings' is [[1, 1], [2, 2]] +# 'constant_values' is 0 +# rank of 't' is 2 +pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0] + [0, 0, 1, 1, 0, 0] + [0, 0, 2, 2, 0, 0] + [0, 0, 0, 0, 0, 0]] +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`paddings` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`constant_values` + +A `Tensor`. Must have the same type as `input`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
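Same shape as the example above but with a non-zero fill value (an eager sketch assuming TensorFlow 2.x; `tf.pad(..., constant_values=...)` is the usual entry point):

```python
import tensorflow as tf

t = tf.constant([[1, 1], [2, 2]])
paddings = tf.constant([[1, 1], [2, 2]])
print(tf.raw_ops.PadV2(input=t, paddings=paddings,
                       constant_values=tf.constant(9)).numpy())
# [[9 9 9 9 9 9]
#  [9 9 1 1 9 9]
#  [9 9 2 2 9 9]
#  [9 9 9 9 9 9]]
```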
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PaddedBatchDataset.md b/site/en/api_docs/python/tf/raw_ops/PaddedBatchDataset.md new file mode 100644 index 00000000000..958eba3a13e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PaddedBatchDataset.md @@ -0,0 +1,114 @@ +description: Creates a dataset that batches and pads batch_size elements from the input. + +
+ + +
+ +# tf.raw_ops.PaddedBatchDataset + + + + + + + + + +Creates a dataset that batches and pads `batch_size` elements from the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`batch_size` + +A `Tensor` of type `int64`. +A scalar representing the number of elements to accumulate in a +batch. +
+`padded_shapes` + +A list of at least 1 `Tensor` objects with type `int64`. +A list of int64 tensors representing the desired padded shapes +of the corresponding output components. These shapes may be partially +specified, using `-1` to indicate that a particular dimension should be +padded to the maximum size of all batch elements. +
+`padding_values` + +A list of `Tensor` objects. +A list of scalars containing the padding value to use for +each of the outputs. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PaddedBatchDatasetV2.md b/site/en/api_docs/python/tf/raw_ops/PaddedBatchDatasetV2.md new file mode 100644 index 00000000000..fdb2a7a25c0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PaddedBatchDatasetV2.md @@ -0,0 +1,130 @@ +description: Creates a dataset that batches and pads batch_size elements from the input. + +
+ + +
+ +# tf.raw_ops.PaddedBatchDatasetV2 + + + + + + + + + +Creates a dataset that batches and pads `batch_size` elements from the input. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`batch_size` + +A `Tensor` of type `int64`. +A scalar representing the number of elements to accumulate in a +batch. +
+`padded_shapes` + +A list of at least 1 `Tensor` objects with type `int64`. +A list of int64 tensors representing the desired padded shapes +of the corresponding output components. These shapes may be partially +specified, using `-1` to indicate that a particular dimension should be +padded to the maximum size of all batch elements. +
+`padding_values` + +A list of `Tensor` objects. +A list of scalars containing the padding value to use for +each of the outputs. +
+`drop_remainder` + +A `Tensor` of type `bool`. +A scalar representing whether the last batch should be dropped in case its size +is smaller than desired. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`parallel_copy` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
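Both versions of this op are normally created through the public `Dataset.padded_batch` transformation rather than called directly; a short sketch with made-up variable-length elements (assuming TensorFlow 2.x):

```python
import tensorflow as tf

# Elements are int64 vectors of length 2, 3, 1, 2, 3, 1.
ds = tf.data.Dataset.range(1, 7).map(lambda x: tf.range(x % 3 + 1))
ds = ds.padded_batch(batch_size=3,
                     padded_shapes=[None],
                     padding_values=tf.constant(-1, tf.int64),
                     drop_remainder=True)
for batch in ds:
    print(batch.numpy())
# Each of the two batches prints:
# [[ 0  1 -1]
#  [ 0  1  2]
#  [ 0 -1 -1]]
```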
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PaddingFIFOQueue.md b/site/en/api_docs/python/tf/raw_ops/PaddingFIFOQueue.md new file mode 100644 index 00000000000..c88fe73f6f8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PaddingFIFOQueue.md @@ -0,0 +1,123 @@ +description: A queue that produces elements in first-in first-out order. + +
+ + +
+ +# tf.raw_ops.PaddingFIFOQueue + + + + + + + + + +A queue that produces elements in first-in first-out order. + + + + + + + + + +Variable-size shapes are allowed by setting the corresponding shape dimensions +to 0 in the shape attr. In this case DequeueMany will pad up to the maximum +size of any given element in the minibatch. See below for details. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a value. +
+`shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +The shape of each component in a value. The length of this attr must +be either 0 or the same as the length of component_types. +Shapes of fixed rank but variable size are allowed by setting +any shape dimension to -1. In this case, the inputs' shape may vary along +the given dimension, and DequeueMany will pad the given dimension with +zeros up to the maximum shape of all elements in the given batch. +If the length of this attr is 0, different queue elements may have +different ranks and shapes, but only one element may be dequeued at a time. +
+`capacity` + +An optional `int`. Defaults to `-1`. +The upper bound on the number of elements in this queue. +Negative numbers mean no limit. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this queue will be shared under the given name +across multiple sessions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PaddingFIFOQueueV2.md b/site/en/api_docs/python/tf/raw_ops/PaddingFIFOQueueV2.md new file mode 100644 index 00000000000..ee0e6978646 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PaddingFIFOQueueV2.md @@ -0,0 +1,123 @@ +description: A queue that produces elements in first-in first-out order. + +
+ + +
+ +# tf.raw_ops.PaddingFIFOQueueV2 + + + + + + + + + +A queue that produces elements in first-in first-out order. + + + + + + + + + +Variable-size shapes are allowed by setting the corresponding shape dimensions +to 0 in the shape attr. In this case DequeueMany will pad up to the maximum +size of any given element in the minibatch. See below for details. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a value. +
+`shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +The shape of each component in a value. The length of this attr must +be either 0 or the same as the length of component_types. +Shapes of fixed rank but variable size are allowed by setting +any shape dimension to -1. In this case, the inputs' shape may vary along +the given dimension, and DequeueMany will pad the given dimension with +zeros up to the maximum shape of all elements in the given batch. +If the length of this attr is 0, different queue elements may have +different ranks and shapes, but only one element may be dequeued at a time. +
+`capacity` + +An optional `int`. Defaults to `-1`. +The upper bound on the number of elements in this queue. +Negative numbers mean no limit. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this queue will be shared under the given name +across multiple sessions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
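The public `tf.queue.PaddingFIFOQueue` class (backed by the V2 resource op) is the usual way to build this queue; a small eager sketch with made-up values (assuming TensorFlow 2.x):

```python
import tensorflow as tf

q = tf.queue.PaddingFIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[None]])
q.enqueue([tf.constant([1, 2, 3])])
q.enqueue([tf.constant([4])])
# dequeue_many pads the shorter element with zeros up to the longest one.
print(q.dequeue_many(2).numpy())
# [[1 2 3]
#  [4 0 0]]
```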
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParallelConcat.md b/site/en/api_docs/python/tf/raw_ops/ParallelConcat.md new file mode 100644 index 00000000000..22a8c188397 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParallelConcat.md @@ -0,0 +1,106 @@ +description: Concatenates a list of N tensors along the first dimension. + +
+ + +
+ +# tf.raw_ops.ParallelConcat + + + + + + + + + +Concatenates a list of `N` tensors along the first dimension. + + + + + + + + + +The input tensors are all required to have size 1 in the first dimension. + +#### For example: + + + +``` +# 'x' is [[1, 4]] +# 'y' is [[2, 5]] +# 'z' is [[3, 6]] +parallel_concat([x, y, z]) => [[1, 4], [2, 5], [3, 6]] # Pack along first dim. +``` + +The difference between concat and parallel_concat is that concat requires all +of the inputs be computed before the operation will begin but doesn't require +that the input shapes be known during graph construction. Parallel concat +will copy pieces of the input into the output as they become available, in +some situations this can provide a performance benefit. + + + + + + + + + + + + + + + + +
+`values` + +A list of at least 1 `Tensor` objects with the same type. +Tensors to be concatenated. All must have size 1 in the first dimension +and same shape. +
+`shape` + +A tf.TensorShape or list of `ints`. +the final shape of the result; should be equal to the shapes of any input +but with the number of input values in the first dimension. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParallelDynamicStitch.md b/site/en/api_docs/python/tf/raw_ops/ParallelDynamicStitch.md new file mode 100644 index 00000000000..45bbe0efb47 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParallelDynamicStitch.md @@ -0,0 +1,147 @@ +description: Interleave the values from the data tensors into a single tensor. + +
+ + +
+ +# tf.raw_ops.ParallelDynamicStitch + + + + + + + + + +Interleave the values from the `data` tensors into a single tensor. + + + + + + + + + +Builds a merged tensor such that + +```python + merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...] +``` + +For example, if each `indices[m]` is scalar or vector, we have + +```python + # Scalar indices: + merged[indices[m], ...] = data[m][...] + + # Vector indices: + merged[indices[m][i], ...] = data[m][i, ...] +``` + +Each `data[i].shape` must start with the corresponding `indices[i].shape`, +and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we +must have `data[i].shape = indices[i].shape + constant`. In terms of this +`constant`, the output shape is + + merged.shape = [max(indices)] + constant + +Values may be merged in parallel, so if an index appears in both `indices[m][i]` +and `indices[n][j]`, the result may be invalid. This differs from the normal +DynamicStitch operator that defines the behavior in that case. + +#### For example: + + + +```python + indices[0] = 6 + indices[1] = [4, 1] + indices[2] = [[5, 2], [0, 3]] + data[0] = [61, 62] + data[1] = [[41, 42], [11, 12]] + data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]] + merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], + [51, 52], [61, 62]] +``` + +This method can be used to merge partitions created by `dynamic_partition` +as illustrated on the following example: + +```python + # Apply function (increments x_i) on elements for which a certain condition + # apply (x_i != -1 in this example). + x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4]) + condition_mask=tf.not_equal(x,tf.constant(-1.)) + partitioned_data = tf.dynamic_partition( + x, tf.cast(condition_mask, tf.int32) , 2) + partitioned_data[1] = partitioned_data[1] + 1.0 + condition_indices = tf.dynamic_partition( + tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2) + x = tf.dynamic_stitch(condition_indices, partitioned_data) + # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain + # unchanged. +``` + +
+ +
+ + + + + + + + + + + + + + + + +
+`indices` + +A list of at least 1 `Tensor` objects with type `int32`. +
+`data` + +A list with the same length as `indices` of `Tensor` objects with the same type. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
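A compact eager sketch (assuming TensorFlow 2.x) with disjoint indices, so the parallel merge is well defined:

```python
import tensorflow as tf

indices = [tf.constant([0, 2]), tf.constant([1, 3])]
data = [tf.constant([10, 30]), tf.constant([20, 40])]
print(tf.raw_ops.ParallelDynamicStitch(indices=indices, data=data).numpy())
# [10 20 30 40]
```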
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDataset.md b/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDataset.md new file mode 100644 index 00000000000..316ff1c92cb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDataset.md @@ -0,0 +1,176 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.ParallelInterleaveDataset + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + +The resulting dataset is similar to the `InterleaveDataset`, with the exception +that if retrieving the next value from a dataset would cause the requester to +block, it will skip that input dataset. This dataset is especially useful +when loading data from a variable-latency datastores (e.g. HDFS, GCS), as it +allows the training step to proceed so long as some data is available. + +!! WARNING !! If the `sloppy` parameter is set to `True`, the operation of this +dataset will not be deterministic! + +This dataset has been superseded by `ParallelInterleaveDatasetV2`. New code +should use `ParallelInterleaveDatasetV2`. + +The Python API tf.data.experimental.parallel_interleave creates instances of +this op. tf.data.experimental.parallel_interleave is a deprecated API. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +Dataset that produces a stream of arguments for the function `f`. +
+`other_arguments` + +A list of `Tensor` objects. +Additional arguments to pass to `f` beyond those produced by `input_dataset`. +Evaluated once when the dataset is instantiated. +
+`cycle_length` + +A `Tensor` of type `int64`. +Number of datasets (each created by applying `f` to the elements of +`input_dataset`) among which the `ParallelInterleaveDataset` will cycle in a +round-robin fashion. +
+`block_length` + +A `Tensor` of type `int64`. +Number of elements at a time to produce from each interleaved invocation of a +dataset returned by `f`. +
+`sloppy` + +A `Tensor` of type `bool`. +If `True`, return elements as they become available, even if that means returning +these elements in a non-deterministic order. Sloppy operation may result in better +performance in the presence of stragglers, but the dataset will still block if +all of its open streams are blocked. +If `False`, always return elements in a deterministic order. +
+`buffer_output_elements` + +A `Tensor` of type `int64`. +The number of elements each iterator being interleaved should buffer (similar +to the `.prefetch()` transformation for each interleaved iterator). +
+`prefetch_input_elements` + +A `Tensor` of type `int64`. +Determines the number of iterators to prefetch, allowing buffers to warm up and +data to be pre-fetched without blocking the main thread. +
+`f` + +A function decorated with @Defun. +A function mapping elements of `input_dataset`, concatenated with +`other_arguments`, to a Dataset variant that contains elements matching +`output_types` and `output_shapes`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDatasetV2.md b/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDatasetV2.md new file mode 100644 index 00000000000..5561ca494fc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDatasetV2.md @@ -0,0 +1,160 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.ParallelInterleaveDatasetV2 + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + +The resulting dataset is similar to the `InterleaveDataset`, except that the +dataset will fetch records from the interleaved datasets in parallel. + +The tf.data Python API creates instances of this op from +Dataset.interleave() when the `num_parallel_calls` parameter of that method +is set to any value other than `None`. + +By default, the output of this dataset will be deterministic, which may result +in the dataset blocking if the next data item to be returned isn't available. +In order to avoid head-of-line blocking, one can set the +`experimental_deterministic` parameter of tf.data.Options to `False`, +which can improve performance at the expense of non-determinism. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +Dataset that produces a stream of arguments for the function `f`. +
+`other_arguments` + +A list of `Tensor` objects. +Additional arguments to pass to `f` beyond those produced by `input_dataset`. +Evaluated once when the dataset is instantiated. +
+`cycle_length` + +A `Tensor` of type `int64`. +Number of datasets (each created by applying `f` to the elements of +`input_dataset`) among which the `ParallelInterleaveDatasetV2` will cycle in a +round-robin fashion. +
+`block_length` + +A `Tensor` of type `int64`. +Number of elements at a time to produce from each interleaved invocation of a +dataset returned by `f`. +
+`num_parallel_calls` + +A `Tensor` of type `int64`. +Determines the number of threads that should be used for fetching data from +input datasets in parallel. The Python API tf.data.experimental.AUTOTUNE +constant can be used to indicate that the level of parallelism should be autotuned. +
+`f` + +A function decorated with @Defun. +A function mapping elements of `input_dataset`, concatenated with +`other_arguments`, to a Dataset variant that contains elements matching +`output_types` and `output_shapes`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`sloppy` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDatasetV3.md b/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDatasetV3.md new file mode 100644 index 00000000000..c04dd521ddf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDatasetV3.md @@ -0,0 +1,166 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.ParallelInterleaveDatasetV3 + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + +The resulting dataset is similar to the `InterleaveDataset`, except that the +dataset will fetch records from the interleaved datasets in parallel. + +The tf.data Python API creates instances of this op from +Dataset.interleave() when the `num_parallel_calls` parameter of that method +is set to any value other than `None`. + +By default, the output of this dataset will be deterministic, which may result +in the dataset blocking if the next data item to be returned isn't available. +In order to avoid head-of-line blocking, one can either set the `deterministic` +attribute to "false", or leave it as "default" and set the +`experimental_deterministic` parameter of tf.data.Options to `False`. +This can improve performance at the expense of non-determinism. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +Dataset that produces a stream of arguments for the function `f`. +
+`other_arguments` + +A list of `Tensor` objects. +Additional arguments to pass to `f` beyond those produced by `input_dataset`. +Evaluated once when the dataset is instantiated. +
+`cycle_length` + +A `Tensor` of type `int64`. +Number of datasets (each created by applying `f` to the elements of +`input_dataset`) among which the `ParallelInterleaveDatasetV2` will cycle in a +round-robin fashion. +
+`block_length` + +A `Tensor` of type `int64`. +Number of elements at a time to produce from each interleaved invocation of a +dataset returned by `f`. +
+`num_parallel_calls` + +A `Tensor` of type `int64`. +Determines the number of threads that should be used for fetching data from +input datasets in parallel. The Python API tf.data.experimental.AUTOTUNE +constant can be used to indicate that the level of parallelism should be autotuned. +
+`f` + +A function decorated with @Defun. +A function mapping elements of `input_dataset`, concatenated with +`other_arguments`, to a Dataset variant that contains elements matching +`output_types` and `output_shapes`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`deterministic` + +An optional `string`. Defaults to `"default"`. +A string indicating the op-level determinism to use. Deterministic controls +whether the interleave is allowed to return elements out of order if the next +element to be returned isn't available, but a later element is. Options are +"true", "false", and "default". "default" indicates that determinism should be +decided by the `experimental_deterministic` parameter of tf.data.Options. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDatasetV4.md b/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDatasetV4.md new file mode 100644 index 00000000000..ac72323aea4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParallelInterleaveDatasetV4.md @@ -0,0 +1,185 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.ParallelInterleaveDatasetV4 + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + +The resulting dataset is similar to the `InterleaveDataset`, except that the +dataset will fetch records from the interleaved datasets in parallel. + +The tf.data Python API creates instances of this op from +Dataset.interleave() when the `num_parallel_calls` parameter of that method +is set to any value other than `None`. + +By default, the output of this dataset will be deterministic, which may result +in the dataset blocking if the next data item to be returned isn't available. +In order to avoid head-of-line blocking, one can either set the `deterministic` +attribute to "false", or leave it as "default" and set the +`experimental_deterministic` parameter of tf.data.Options to `False`. +This can improve performance at the expense of non-determinism. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +Dataset that produces a stream of arguments for the function `f`. +
+`other_arguments` + +A list of `Tensor` objects. +Additional arguments to pass to `f` beyond those produced by `input_dataset`. +Evaluated once when the dataset is instantiated. +
+`cycle_length` + +A `Tensor` of type `int64`. +Number of datasets (each created by applying `f` to the elements of +`input_dataset`) among which the `ParallelInterleaveDatasetV2` will cycle in a +round-robin fashion. +
+`block_length` + +A `Tensor` of type `int64`. +Number of elements at a time to produce from each interleaved invocation of a +dataset returned by `f`. +
+`buffer_output_elements` + +A `Tensor` of type `int64`. +The number of elements each iterator being interleaved should buffer (similar +to the `.prefetch()` transformation for each interleaved iterator). +
+`prefetch_input_elements` + +A `Tensor` of type `int64`. +Determines the number of iterators to prefetch, allowing buffers to warm up and +data to be pre-fetched without blocking the main thread. +
+`num_parallel_calls` + +A `Tensor` of type `int64`. +Determines the number of threads that should be used for fetching data from +input datasets in parallel. The Python API tf.data.experimental.AUTOTUNE +constant can be used to indicate that the level of parallelism should be autotuned. +
+`f` + +A function decorated with @Defun. +A function mapping elements of `input_dataset`, concatenated with +`other_arguments`, to a Dataset variant that contains elements matching +`output_types` and `output_shapes`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`deterministic` + +An optional `string`. Defaults to `"default"`. +A string indicating the op-level determinism to use. Deterministic controls +whether the interleave is allowed to return elements out of order if the next +element to be returned isn't available, but a later element is. Options are +"true", "false", and "default". "default" indicates that determinism should be +decided by the `experimental_deterministic` parameter of tf.data.Options. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
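All of the versions above are normally created by the public `Dataset.interleave` transformation; a sketch of the parallel, deterministic usage (assuming a recent TensorFlow 2.x release where `Dataset.interleave` accepts a `deterministic` argument):

```python
import tensorflow as tf

ds = tf.data.Dataset.range(3)
ds = ds.interleave(
    lambda x: tf.data.Dataset.from_tensors(x).repeat(2),
    cycle_length=2,
    block_length=1,
    num_parallel_calls=tf.data.experimental.AUTOTUNE,
    deterministic=True)
print(list(ds.as_numpy_iterator()))  # [0, 1, 0, 1, 2, 2]
```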
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParallelMapDataset.md b/site/en/api_docs/python/tf/raw_ops/ParallelMapDataset.md new file mode 100644 index 00000000000..c0f1390832e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParallelMapDataset.md @@ -0,0 +1,139 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.ParallelMapDataset + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + +Unlike a "MapDataset", which applies `f` sequentially, this dataset invokes up +to `num_parallel_calls` copies of `f` in parallel. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +
+`num_parallel_calls` + +A `Tensor` of type `int32`. +The number of concurrent invocations of `f` that process +elements from `input_dataset` in parallel. +
+`f` + +A function decorated with @Defun. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`use_inter_op_parallelism` + +An optional `bool`. Defaults to `True`. +
+`sloppy` + +An optional `bool`. Defaults to `False`. +
+`preserve_cardinality` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParallelMapDatasetV2.md b/site/en/api_docs/python/tf/raw_ops/ParallelMapDatasetV2.md new file mode 100644 index 00000000000..8fac1b4f5b8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParallelMapDatasetV2.md @@ -0,0 +1,139 @@ +description: Creates a dataset that applies f to the outputs of input_dataset. + +
+ + +
+ +# tf.raw_ops.ParallelMapDatasetV2 + + + + + + + + + +Creates a dataset that applies `f` to the outputs of `input_dataset`. + + + + + + + + + +Unlike a "MapDataset", which applies `f` sequentially, this dataset invokes up +to `num_parallel_calls` copies of `f` in parallel. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +
+`num_parallel_calls` + +A `Tensor` of type `int64`. +The number of concurrent invocations of `f` that process +elements from `input_dataset` in parallel. +
+`f` + +A function decorated with @Defun. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`use_inter_op_parallelism` + +An optional `bool`. Defaults to `True`. +
+`deterministic` + +An optional `string`. Defaults to `"default"`. +
+`preserve_cardinality` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
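Both map variants above are created by the public `Dataset.map` transformation when `num_parallel_calls` is set; a minimal sketch (assuming a recent TensorFlow 2.x release):

```python
import tensorflow as tf

ds = tf.data.Dataset.range(5)
ds = ds.map(lambda x: x * 2,
            num_parallel_calls=tf.data.experimental.AUTOTUNE,
            deterministic=True)
print(list(ds.as_numpy_iterator()))  # [0, 2, 4, 6, 8]
```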
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParameterizedTruncatedNormal.md b/site/en/api_docs/python/tf/raw_ops/ParameterizedTruncatedNormal.md new file mode 100644 index 00000000000..10e0b30acbc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParameterizedTruncatedNormal.md @@ -0,0 +1,131 @@ +description: Outputs random values from a normal distribution. The parameters may each be a + +
+ + +
+ +# tf.raw_ops.ParameterizedTruncatedNormal + + + + + + + + + +Outputs random values from a normal distribution. The parameters may each be a + + + + + + + + + +scalar which applies to the entire output, or a vector of length shape[0] which +stores the parameters for each batch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. Batches are indexed by the 0th dimension. +
+`means` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +The mean parameter of each batch. +
+`stdevs` + +A `Tensor`. Must have the same type as `means`. +The standard deviation parameter of each batch. Must be greater than 0. +
+`minvals` + +A `Tensor`. Must have the same type as `means`. +The minimum cutoff. May be -infinity. +
+`maxvals` + +A `Tensor`. Must have the same type as `means`. +The maximum cutoff. May be +infinity, and must be more than the minval +for each batch. +
+`seed` + +An optional `int`. Defaults to `0`. +If either `seed` or `seed2` are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `means`. +
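A small eager sketch (assuming TensorFlow 2.x, with arbitrarily chosen per-batch parameters) drawing three batches of truncated-normal samples:

```python
import tensorflow as tf

samples = tf.raw_ops.ParameterizedTruncatedNormal(
    shape=tf.constant([3, 1000]),           # 3 batches of 1000 samples each
    means=tf.constant([0.0, 5.0, -5.0]),    # one parameter set per batch
    stdevs=tf.constant([1.0, 2.0, 0.5]),
    minvals=tf.constant([-2.0, 3.0, -6.0]),
    maxvals=tf.constant([2.0, 7.0, -4.0]),
    seed=42)
print(samples.shape)                             # (3, 1000)
print(tf.reduce_mean(samples, axis=1).numpy())   # roughly [0., 5., -5.]
```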
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParseExample.md b/site/en/api_docs/python/tf/raw_ops/ParseExample.md new file mode 100644 index 00000000000..b627f08612d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParseExample.md @@ -0,0 +1,187 @@ +description: Transforms a vector of brain.Example protos (as strings) into typed tensors. + +
+ + +
+ +# tf.raw_ops.ParseExample + + + + + + + + + +Transforms a vector of brain.Example protos (as strings) into typed tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A `Tensor` of type `string`. +A vector containing a batch of binary serialized Example protos. +
+`names` + +A `Tensor` of type `string`. +A vector containing the names of the serialized protos. +May contain, for example, table key (descriptive) names for the +corresponding serialized protos. These are purely useful for debugging +purposes, and the presence of values here has no effect on the output. +May also be an empty vector if no names are available. +If non-empty, this vector must be the same length as "serialized". +
+`sparse_keys` + +A list of `Tensor` objects with type `string`. +A list of Nsparse string Tensors (scalars). +The keys expected in the Examples' features associated with sparse values. +
+`dense_keys` + +A list of `Tensor` objects with type `string`. +A list of Ndense string Tensors (scalars). +The keys expected in the Examples' features associated with dense values. +
+`dense_defaults` + +A list of `Tensor` objects with types from: `float32`, `int64`, `string`. +A list of Ndense Tensors (some may be empty). +dense_defaults[j] provides default values +when the example's feature_map lacks dense_key[j]. If an empty Tensor is +provided for dense_defaults[j], then the Feature dense_keys[j] is required. +The input type is inferred from dense_defaults[j], even when it's empty. +If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined, +then the shape of dense_defaults[j] must match that of dense_shapes[j]. +If dense_shapes[j] has an undefined major dimension (variable strides dense +feature), dense_defaults[j] must contain a single element: +the padding element. +
+`sparse_types` + +A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. +A list of Nsparse types; the data types of data in each Feature +given in sparse_keys. +Currently the ParseExample supports DT_FLOAT (FloatList), +DT_INT64 (Int64List), and DT_STRING (BytesList). +
+`dense_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +A list of Ndense shapes; the shapes of data in each Feature +given in dense_keys. +The number of elements in the Feature corresponding to dense_key[j] +must always equal dense_shapes[j].NumEntries(). +If dense_shapes[j] == (D0, D1, ..., DN) then the shape of output +Tensor dense_values[j] will be (|serialized|, D0, D1, ..., DN): +The dense outputs are just the inputs row-stacked by batch. +This works for dense_shapes[j] = (-1, D1, ..., DN). In this case +the shape of the output Tensor dense_values[j] will be +(|serialized|, M, D1, .., DN), where M is the maximum number of blocks +of elements of length D1 * .... * DN, across all minibatch entries +in the input. Any minibatch entry with less than M blocks of elements of +length D1 * ... * DN will be padded with the corresponding default_value +scalar element along the second dimension. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shapes, dense_values). +
+`sparse_indices` + +A list with the same length as `sparse_keys` of `Tensor` objects with type `int64`. +
+`sparse_values` + +A list of `Tensor` objects of type `sparse_types`. +
+`sparse_shapes` + +A list with the same length as `sparse_keys` of `Tensor` objects with type `int64`. +
+`dense_values` + +A list of `Tensor` objects. Has the same type as `dense_defaults`. +
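+
+This op is what the public `tf.io.parse_example` utilities are built on, but it can be invoked directly. Below is a minimal sketch, assuming TF 2.x eager execution and made-up feature names (`age`, `tags`) chosen purely for illustration:
+
+```python
+import tensorflow as tf
+
+# One serialized Example with a dense int64 feature and a variable-length
+# string feature (the feature names are illustrative, not part of the op).
+example = tf.train.Example(features=tf.train.Features(feature={
+    "age": tf.train.Feature(int64_list=tf.train.Int64List(value=[30])),
+    "tags": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"red", b"blue"])),
+}))
+serialized = tf.constant([example.SerializeToString()])  # shape [1]
+
+out = tf.raw_ops.ParseExample(
+    serialized=serialized,
+    names=tf.constant([], dtype=tf.string),             # no debug names
+    sparse_keys=["tags"],                                # Nsparse = 1
+    dense_keys=["age"],                                  # Ndense = 1
+    dense_defaults=[tf.constant([-1], dtype=tf.int64)],
+    sparse_types=[tf.string],
+    dense_shapes=[[1]],
+)
+print(out.dense_values[0])   # [[30]]  -- shape (batch, 1)
+print(out.sparse_values[0])  # [b'red' b'blue']
+```
+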
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParseExampleDataset.md b/site/en/api_docs/python/tf/raw_ops/ParseExampleDataset.md new file mode 100644 index 00000000000..127ef57a45f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParseExampleDataset.md @@ -0,0 +1,183 @@ +description: Transforms input_dataset containing Example protos as vectors of DT_STRING into a dataset of Tensor or SparseTensor objects representing the parsed features. + +
+ + +
+ +# tf.raw_ops.ParseExampleDataset + + + + + + + + + +Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`num_parallel_calls` + +A `Tensor` of type `int64`. +
+`dense_defaults` + +A list of `Tensor` objects with types from: `float32`, `int64`, `string`. +A dict mapping string keys to `Tensor`s. +The keys of the dict must match the dense_keys of the feature. +
+`sparse_keys` + +A list of `strings`. +A list of string keys in the examples features. +The results for these keys will be returned as `SparseTensor` objects. +
+`dense_keys` + +A list of `strings`. +A list of Ndense string Tensors (scalars). +The keys expected in the Examples features associated with dense values. +
+`sparse_types` + +A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. +A list of `DTypes` of the same length as `sparse_keys`. +Only tf.float32 (`FloatList`), tf.int64 (`Int64List`), +and tf.string (`BytesList`) are supported. +
+`dense_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +List of tuples with the same length as `dense_keys`. +The shape of the data for each dense feature referenced by `dense_keys`. +Required for any input tensors identified by `dense_keys`. Must be +either fully defined, or may contain an unknown first dimension. +An unknown first dimension means the feature is treated as having +a variable number of blocks, and the output shape along this dimension +is considered unknown at graph build time. Padding is applied for +minibatch elements smaller than the maximum number of blocks for the +given feature along this dimension. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type list for the return values. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +The list of shapes being produced. +
+`sloppy` + +An optional `bool`. Defaults to `False`. +
+`ragged_keys` + +An optional list of `strings`. Defaults to `[]`. +
+`ragged_value_types` + +An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. +
+`ragged_split_types` + +An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParseExampleDatasetV2.md b/site/en/api_docs/python/tf/raw_ops/ParseExampleDatasetV2.md new file mode 100644 index 00000000000..72847f0aa96 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParseExampleDatasetV2.md @@ -0,0 +1,189 @@ +description: Transforms input_dataset containing Example protos as vectors of DT_STRING into a dataset of Tensor or SparseTensor objects representing the parsed features. + +
+ + +
+ +# tf.raw_ops.ParseExampleDatasetV2 + + + + + + + + + +Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`num_parallel_calls` + +A `Tensor` of type `int64`. +
+`dense_defaults` + +A list of `Tensor` objects with types from: `float32`, `int64`, `string`. +A dict mapping string keys to `Tensor`s. +The keys of the dict must match the dense_keys of the feature. +
+`sparse_keys` + +A list of `strings`. +A list of string keys in the examples features. +The results for these keys will be returned as `SparseTensor` objects. +
+`dense_keys` + +A list of `strings`. +A list of Ndense string Tensors (scalars). +The keys expected in the Examples features associated with dense values. +
+`sparse_types` + +A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. +A list of `DTypes` of the same length as `sparse_keys`. +Only tf.float32 (`FloatList`), tf.int64 (`Int64List`), +and tf.string (`BytesList`) are supported. +
+`dense_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +List of tuples with the same length as `dense_keys`. +The shape of the data for each dense feature referenced by `dense_keys`. +Required for any input tensors identified by `dense_keys`. Must be +either fully defined, or may contain an unknown first dimension. +An unknown first dimension means the feature is treated as having +a variable number of blocks, and the output shape along this dimension +is considered unknown at graph build time. Padding is applied for +minibatch elements smaller than the maximum number of blocks for the +given feature along this dimension. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type list for the return values. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +The list of shapes being produced. +
+`deterministic` + +An optional `string`. Defaults to `"default"`. +A string indicating the op-level determinism to use. Deterministic controls +whether the dataset is allowed to return elements out of order if the next +element to be returned isn't available, but a later element is. Options are +"true", "false", and "default". "default" indicates that determinism should be +decided by the `experimental_deterministic` parameter of tf.data.Options. +
+`ragged_keys` + +An optional list of `strings`. Defaults to `[]`. +
+`ragged_value_types` + +An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. +
+`ragged_split_types` + +An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
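+
+This op is normally created for you by the public `tf.data.experimental.parse_example_dataset` transformation rather than called by hand. A rough sketch of that wrapper, assuming TF 2.x and an illustrative feature name `age`:
+
+```python
+import tensorflow as tf
+
+example = tf.train.Example(features=tf.train.Features(feature={
+    "age": tf.train.Feature(int64_list=tf.train.Int64List(value=[30])),
+}))
+
+# A dataset of *batches* of serialized Example protos, parsed in parallel.
+ds = (tf.data.Dataset.from_tensor_slices([example.SerializeToString()] * 4)
+      .batch(2)
+      .apply(tf.data.experimental.parse_example_dataset(
+          {"age": tf.io.FixedLenFeature([], tf.int64)},
+          num_parallel_calls=2)))
+
+for features in ds:
+    print(features["age"])  # tf.Tensor([30 30], shape=(2,), dtype=int64)
+```
+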
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParseExampleV2.md b/site/en/api_docs/python/tf/raw_ops/ParseExampleV2.md new file mode 100644 index 00000000000..2e857a77b96 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParseExampleV2.md @@ -0,0 +1,237 @@ +description: Transforms a vector of tf.Example protos (as strings) into typed tensors. + +
+ + +
+ +# tf.raw_ops.ParseExampleV2 + + + + + + + + + +Transforms a vector of tf.Example protos (as strings) into typed tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A `Tensor` of type `string`. +A scalar or vector containing binary serialized Example protos. +
+`names` + +A `Tensor` of type `string`. +A tensor containing the names of the serialized protos. +Corresponds 1:1 with the `serialized` tensor. +May contain, for example, table key (descriptive) names for the +corresponding serialized protos. These are purely useful for debugging +purposes, and the presence of values here has no effect on the output. +May also be an empty vector if no names are available. +If non-empty, this tensor must have the same shape as "serialized". +
+`sparse_keys` + +A `Tensor` of type `string`. Vector of strings. +The keys expected in the Examples' features associated with sparse values. +
+`dense_keys` + +A `Tensor` of type `string`. Vector of strings. +The keys expected in the Examples' features associated with dense values. +
+`ragged_keys` + +A `Tensor` of type `string`. Vector of strings. +The keys expected in the Examples' features associated with ragged values. +
+`dense_defaults` + +A list of `Tensor` objects with types from: `float32`, `int64`, `string`. +A list of Tensors (some may be empty). Corresponds 1:1 with `dense_keys`. +dense_defaults[j] provides default values +when the example's feature_map lacks dense_key[j]. If an empty Tensor is +provided for dense_defaults[j], then the Feature dense_keys[j] is required. +The input type is inferred from dense_defaults[j], even when it's empty. +If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined, +then the shape of dense_defaults[j] must match that of dense_shapes[j]. +If dense_shapes[j] has an undefined major dimension (variable strides dense +feature), dense_defaults[j] must contain a single element: +the padding element. +
+`num_sparse` + +An `int` that is `>= 0`. The number of sparse keys. +
+`sparse_types` + +A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. +A list of `num_sparse` types; the data types of data in each Feature +given in sparse_keys. +Currently the ParseExample supports DT_FLOAT (FloatList), +DT_INT64 (Int64List), and DT_STRING (BytesList). +
+`ragged_value_types` + +A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. +A list of `num_ragged` types; the data types of data in each Feature +given in ragged_keys (where `num_ragged = sparse_keys.size()`). +Currently the ParseExample supports DT_FLOAT (FloatList), +DT_INT64 (Int64List), and DT_STRING (BytesList). +
+`ragged_split_types` + +A list of `tf.DTypes` from: `tf.int32, tf.int64`. +A list of `num_ragged` types; the data types of row_splits in each Feature +given in ragged_keys (where `num_ragged = sparse_keys.size()`). +May be DT_INT32 or DT_INT64. +
+`dense_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +A list of `num_dense` shapes; the shapes of data in each Feature +given in dense_keys (where `num_dense = dense_keys.size()`). +The number of elements in the Feature corresponding to dense_key[j] +must always equal dense_shapes[j].NumEntries(). +If dense_shapes[j] == (D0, D1, ..., DN) then the shape of output +Tensor dense_values[j] will be (|serialized|, D0, D1, ..., DN): +The dense outputs are just the inputs row-stacked by batch. +This works for dense_shapes[j] = (-1, D1, ..., DN). In this case +the shape of the output Tensor dense_values[j] will be +(|serialized|, M, D1, .., DN), where M is the maximum number of blocks +of elements of length D1 * .... * DN, across all minibatch entries +in the input. Any minibatch entry with less than M blocks of elements of +length D1 * ... * DN will be padded with the corresponding default_value +scalar element along the second dimension. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shapes, dense_values, ragged_values, ragged_row_splits). +
+`sparse_indices` + +A list of `num_sparse` `Tensor` objects with type `int64`. +
+`sparse_values` + +A list of `Tensor` objects of type `sparse_types`. +
+`sparse_shapes` + +A list of `num_sparse` `Tensor` objects with type `int64`. +
+`dense_values` + +A list of `Tensor` objects. Has the same type as `dense_defaults`. +
+`ragged_values` + +A list of `Tensor` objects of type `ragged_value_types`. +
+`ragged_row_splits` + +A list of `Tensor` objects of type `ragged_split_types`. +
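+
+Calling this op directly means wiring up every input above; in TF 2.x the public `tf.io.parse_example` wrapper (which typically lowers to this op) is the easier route. A minimal sketch with illustrative feature names:
+
+```python
+import tensorflow as tf
+
+example = tf.train.Example(features=tf.train.Features(feature={
+    "age": tf.train.Feature(int64_list=tf.train.Int64List(value=[30])),
+    "tags": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"red", b"blue"])),
+}))
+serialized = tf.constant([example.SerializeToString()])
+
+parsed = tf.io.parse_example(serialized, {
+    "age": tf.io.FixedLenFeature([1], tf.int64, default_value=[-1]),
+    "tags": tf.io.VarLenFeature(tf.string),   # comes back as a SparseTensor
+})
+print(parsed["age"])          # [[30]]
+print(parsed["tags"].values)  # [b'red' b'blue']
+```
+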
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParseSequenceExample.md b/site/en/api_docs/python/tf/raw_ops/ParseSequenceExample.md new file mode 100644 index 00000000000..a8abb443164 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParseSequenceExample.md @@ -0,0 +1,306 @@ +description: Transforms a vector of brain.SequenceExample protos (as strings) into typed tensors. + +
+ + +
+ +# tf.raw_ops.ParseSequenceExample + + + + + + + + + +Transforms a vector of brain.SequenceExample protos (as strings) into typed tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A `Tensor` of type `string`. +A vector containing binary serialized SequenceExample protos. +
+`debug_name` + +A `Tensor` of type `string`. +A vector containing the names of the serialized protos. +May contain, for example, table key (descriptive) name for the +corresponding serialized proto. This is purely useful for debugging +purposes, and the presence of values here has no effect on the output. +May also be an empty vector if no name is available. +
+`context_dense_defaults` + +A list of `Tensor` objects with types from: `float32`, `int64`, `string`. +A list of Ncontext_dense Tensors (some may be empty). +context_dense_defaults[j] provides default values +when the SequenceExample's context map lacks context_dense_key[j]. +If an empty Tensor is provided for context_dense_defaults[j], +then the Feature context_dense_keys[j] is required. +The input type is inferred from context_dense_defaults[j], even when it's +empty. If context_dense_defaults[j] is not empty, its shape must match +context_dense_shapes[j]. +
+`feature_list_dense_missing_assumed_empty` + +A list of `strings`. +A vector listing the +FeatureList keys which may be missing from the SequenceExamples. If the +associated FeatureList is missing, it is treated as empty. By default, +any FeatureList not listed in this vector must exist in the SequenceExamples. +
+`context_sparse_keys` + +A list of `strings`. +A list of Ncontext_sparse string Tensors (scalars). +The keys expected in the Examples' features associated with context_sparse +values. +
+`context_dense_keys` + +A list of `strings`. +A list of Ncontext_dense string Tensors (scalars). +The keys expected in the SequenceExamples' context features associated with +dense values. +
+`feature_list_sparse_keys` + +A list of `strings`. +A list of Nfeature_list_sparse string Tensors +(scalars). The keys expected in the FeatureLists associated with sparse +values. +
+`feature_list_dense_keys` + +A list of `strings`. +A list of Nfeature_list_dense string Tensors (scalars). +The keys expected in the SequenceExamples' feature_lists associated +with lists of dense values. +
+`Ncontext_sparse` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`Ncontext_dense` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`Nfeature_list_sparse` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`Nfeature_list_dense` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`context_sparse_types` + +An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. +A list of Ncontext_sparse types; the data types of data in +each context Feature given in context_sparse_keys. +Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList), +DT_INT64 (Int64List), and DT_STRING (BytesList). +
+`feature_list_dense_types` + +An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. +
+`context_dense_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +A list of Ncontext_dense shapes; the shapes of data in +each context Feature given in context_dense_keys. +The number of elements in the Feature corresponding to context_dense_key[j] +must always equal context_dense_shapes[j].NumEntries(). +The shape of context_dense_values[j] will match context_dense_shapes[j]. +
+`feature_list_sparse_types` + +An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. +A list of Nfeature_list_sparse types; the data types +of data in each FeatureList given in feature_list_sparse_keys. +Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList), +DT_INT64 (Int64List), and DT_STRING (BytesList). +
+`feature_list_dense_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +A list of Nfeature_list_dense shapes; the shapes of +data in each FeatureList given in feature_list_dense_keys. +The shape of each Feature in the FeatureList corresponding to +feature_list_dense_key[j] must always equal +feature_list_dense_shapes[j].NumEntries(). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (context_sparse_indices, context_sparse_values, context_sparse_shapes, context_dense_values, feature_list_sparse_indices, feature_list_sparse_values, feature_list_sparse_shapes, feature_list_dense_values, feature_list_dense_lengths). +
+`context_sparse_indices` + +A list of `Ncontext_sparse` `Tensor` objects with type `int64`. +
+`context_sparse_values` + +A list of `Tensor` objects of type `context_sparse_types`. +
+`context_sparse_shapes` + +A list of `Ncontext_sparse` `Tensor` objects with type `int64`. +
+`context_dense_values` + +A list of `Tensor` objects. Has the same type as `context_dense_defaults`. +
+`feature_list_sparse_indices` + +A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`. +
+`feature_list_sparse_values` + +A list of `Tensor` objects of type `feature_list_sparse_types`. +
+`feature_list_sparse_shapes` + +A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`. +
+`feature_list_dense_values` + +A list of `Tensor` objects of type `feature_list_dense_types`. +
+`feature_list_dense_lengths` + +A list of `Nfeature_list_dense` `Tensor` objects with type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParseSequenceExampleV2.md b/site/en/api_docs/python/tf/raw_ops/ParseSequenceExampleV2.md new file mode 100644 index 00000000000..52584ae4ccc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParseSequenceExampleV2.md @@ -0,0 +1,141 @@ +description: Transforms a vector of tf.io.SequenceExample protos (as strings) into + +
+ + +
+ +# tf.raw_ops.ParseSequenceExampleV2 + + + + + + + + + +Transforms a vector of tf.io.SequenceExample protos (as strings) into + + + + + + + + +typed tensors. + + Args: + serialized: A `Tensor` of type `string`. + A scalar or vector containing binary serialized SequenceExample protos. + debug_name: A `Tensor` of type `string`. + A scalar or vector containing the names of the serialized protos. + May contain, for example, table key (descriptive) name for the + corresponding serialized proto. This is purely useful for debugging + purposes, and the presence of values here has no effect on the output. + May also be an empty vector if no name is available. + context_sparse_keys: A `Tensor` of type `string`. + The keys expected in the Examples' features associated with context_sparse + values. + context_dense_keys: A `Tensor` of type `string`. + The keys expected in the SequenceExamples' context features associated with + dense values. + context_ragged_keys: A `Tensor` of type `string`. + The keys expected in the Examples' features associated with context_ragged + values. + feature_list_sparse_keys: A `Tensor` of type `string`. + The keys expected in the FeatureLists associated with sparse values. + feature_list_dense_keys: A `Tensor` of type `string`. + The keys expected in the SequenceExamples' feature_lists associated + with lists of dense values. + feature_list_ragged_keys: A `Tensor` of type `string`. + The keys expected in the FeatureLists associated with ragged values. + feature_list_dense_missing_assumed_empty: A `Tensor` of type `bool`. + A vector corresponding 1:1 with featue_list_dense_keys, indicating which + features may be missing from the SequenceExamples. If the associated + FeatureList is missing, it is treated as empty. + context_dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`. + A list of Ncontext_dense Tensors (some may be empty). + context_dense_defaults[j] provides default values + when the SequenceExample's context map lacks context_dense_key[j]. + If an empty Tensor is provided for context_dense_defaults[j], + then the Feature context_dense_keys[j] is required. + The input type is inferred from context_dense_defaults[j], even when it's + empty. If context_dense_defaults[j] is not empty, its shape must match + context_dense_shapes[j]. + Ncontext_sparse: An optional `int` that is `>= 0`. Defaults to `0`. + context_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. + A list of Ncontext_sparse types; the data types of data in + each context Feature given in context_sparse_keys. + Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList), + DT_INT64 (Int64List), and DT_STRING (BytesList). + context_ragged_value_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. + RaggedTensor.value dtypes for the ragged context features. + context_ragged_split_types: An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`. + RaggedTensor.row_split dtypes for the ragged context features. + context_dense_shapes: An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. + A list of Ncontext_dense shapes; the shapes of data in + each context Feature given in context_dense_keys. + The number of elements in the Feature corresponding to context_dense_key[j] + must always equal context_dense_shapes[j].NumEntries(). + The shape of context_dense_values[j] will match context_dense_shapes[j]. 
+ Nfeature_list_sparse: An optional `int` that is `>= 0`. Defaults to `0`. + Nfeature_list_dense: An optional `int` that is `>= 0`. Defaults to `0`. + feature_list_dense_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. + feature_list_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. + A list of Nfeature_list_sparse types; the data types + of data in each FeatureList given in feature_list_sparse_keys. + Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList), + DT_INT64 (Int64List), and DT_STRING (BytesList). + feature_list_ragged_value_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. + RaggedTensor.value dtypes for the ragged FeatureList features. + feature_list_ragged_split_types: An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`. + RaggedTensor.row_split dtypes for the ragged FeatureList features. + feature_list_dense_shapes: An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. + A list of Nfeature_list_dense shapes; the shapes of + data in each FeatureList given in feature_list_dense_keys. + The shape of each Feature in the FeatureList corresponding to + feature_list_dense_key[j] must always equal + feature_list_dense_shapes[j].NumEntries(). + name: A name for the operation (optional). + + Returns: + A tuple of `Tensor` objects (context_sparse_indices, context_sparse_values, context_sparse_shapes, context_dense_values, context_ragged_values, context_ragged_row_splits, feature_list_sparse_indices, feature_list_sparse_values, feature_list_sparse_shapes, feature_list_dense_values, feature_list_dense_lengths, feature_list_ragged_values, feature_list_ragged_outer_splits, feature_list_ragged_inner_splits). + + context_sparse_indices: A list of `Ncontext_sparse` `Tensor` objects with type `int64`. + context_sparse_values: A list of `Tensor` objects of type `context_sparse_types`. + context_sparse_shapes: A list of `Ncontext_sparse` `Tensor` objects with type `int64`. + context_dense_values: A list of `Tensor` objects. Has the same type as `context_dense_defaults`. + context_ragged_values: A list of `Tensor` objects of type `context_ragged_value_types`. + context_ragged_row_splits: A list of `Tensor` objects of type `context_ragged_split_types`. + feature_list_sparse_indices: A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`. + feature_list_sparse_values: A list of `Tensor` objects of type `feature_list_sparse_types`. + feature_list_sparse_shapes: A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`. + feature_list_dense_values: A list of `Tensor` objects of type `feature_list_dense_types`. + feature_list_dense_lengths: A list of `Nfeature_list_dense` `Tensor` objects with type `int64`. + feature_list_ragged_values: A list of `Tensor` objects of type `feature_list_ragged_value_types`. + feature_list_ragged_outer_splits: A list of `Tensor` objects of type `feature_list_ragged_split_types`. + feature_list_ragged_inner_splits: A list of `Tensor` objects of type `feature_list_ragged_split_types`. 
+ \ No newline at end of file diff --git a/site/en/api_docs/python/tf/raw_ops/ParseSingleExample.md b/site/en/api_docs/python/tf/raw_ops/ParseSingleExample.md new file mode 100644 index 00000000000..b97cabc3594 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParseSingleExample.md @@ -0,0 +1,177 @@ +description: Transforms a tf.Example proto (as a string) into typed tensors. + +
+ + +
+ +# tf.raw_ops.ParseSingleExample + + + + + + + + + +Transforms a tf.Example proto (as a string) into typed tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A `Tensor` of type `string`. +A scalar containing a binary serialized Example proto. +
+`dense_defaults` + +A list of `Tensor` objects with types from: `float32`, `int64`, `string`. +A list of Tensors (some may be empty), whose length matches +the length of `dense_keys`. dense_defaults[j] provides default values +when the example's feature_map lacks dense_key[j]. If an empty Tensor is +provided for dense_defaults[j], then the Feature dense_keys[j] is required. +The input type is inferred from dense_defaults[j], even when it's empty. +If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined, +then the shape of dense_defaults[j] must match that of dense_shapes[j]. +If dense_shapes[j] has an undefined major dimension (variable strides dense +feature), dense_defaults[j] must contain a single element: +the padding element. +
+`num_sparse` + +An `int` that is `>= 0`. +The number of sparse features to be parsed from the example. This +must match the lengths of `sparse_keys` and `sparse_types`. +
+`sparse_keys` + +A list of `strings`. A list of `num_sparse` strings. +The keys expected in the Examples' features associated with sparse values. +
+`dense_keys` + +A list of `strings`. +The keys expected in the Examples' features associated with dense +values. +
+`sparse_types` + +A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. +A list of `num_sparse` types; the data types of data in each +Feature given in sparse_keys. +Currently the ParseSingleExample op supports DT_FLOAT (FloatList), +DT_INT64 (Int64List), and DT_STRING (BytesList). +
+`dense_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +The shapes of data in each Feature given in dense_keys. +The length of this list must match the length of `dense_keys`. The +number of elements in the Feature corresponding to dense_key[j] must +always equal dense_shapes[j].NumEntries(). If dense_shapes[j] == +(D0, D1, ..., DN) then the shape of output Tensor dense_values[j] +will be (D0, D1, ..., DN): In the case dense_shapes[j] = (-1, D1, +..., DN), the shape of the output Tensor dense_values[j] will be (M, +D1, .., DN), where M is the number of blocks of elements of length +D1 * .... * DN, in the input. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shapes, dense_values). +
+`sparse_indices` + +A list of `num_sparse` `Tensor` objects with type `int64`. +
+`sparse_values` + +A list of `Tensor` objects of type `sparse_types`. +
+`sparse_shapes` + +A list of `num_sparse` `Tensor` objects with type `int64`. +
+`dense_values` + +A list of `Tensor` objects. Has the same type as `dense_defaults`. +
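+
+A minimal sketch of calling the op directly on a single proto, assuming TF 2.x eager execution and illustrative feature names `age` and `tags`:
+
+```python
+import tensorflow as tf
+
+example = tf.train.Example(features=tf.train.Features(feature={
+    "age": tf.train.Feature(int64_list=tf.train.Int64List(value=[30])),
+    "tags": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"red", b"blue"])),
+}))
+
+out = tf.raw_ops.ParseSingleExample(
+    serialized=tf.constant(example.SerializeToString()),   # scalar string
+    dense_defaults=[tf.constant([-1], dtype=tf.int64)],
+    num_sparse=1,
+    sparse_keys=["tags"],
+    dense_keys=["age"],
+    sparse_types=[tf.string],
+    dense_shapes=[[1]],
+)
+print(out.dense_values[0])   # [30]  -- no batch dimension for a single proto
+print(out.sparse_values[0])  # [b'red' b'blue']
+```
+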
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParseSingleSequenceExample.md b/site/en/api_docs/python/tf/raw_ops/ParseSingleSequenceExample.md new file mode 100644 index 00000000000..fa377ef30c3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParseSingleSequenceExample.md @@ -0,0 +1,269 @@ +description: Transforms a scalar brain.SequenceExample proto (as strings) into typed tensors. + +
+ + +
+ +# tf.raw_ops.ParseSingleSequenceExample + + + + + + + + + +Transforms a scalar brain.SequenceExample proto (as strings) into typed tensors. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A `Tensor` of type `string`. +A scalar containing a binary serialized SequenceExample proto. +
+`feature_list_dense_missing_assumed_empty` + +A `Tensor` of type `string`. +A vector listing the +FeatureList keys which may be missing from the SequenceExample. If the +associated FeatureList is missing, it is treated as empty. By default, +any FeatureList not listed in this vector must exist in the SequenceExample. +
+`context_sparse_keys` + +A list of `Tensor` objects with type `string`. +A list of Ncontext_sparse string Tensors (scalars). +The keys expected in the Examples' features associated with context_sparse +values. +
+`context_dense_keys` + +A list of `Tensor` objects with type `string`. +A list of Ncontext_dense string Tensors (scalars). +The keys expected in the SequenceExamples' context features associated with +dense values. +
+`feature_list_sparse_keys` + +A list of `Tensor` objects with type `string`. +A list of Nfeature_list_sparse string Tensors +(scalars). The keys expected in the FeatureLists associated with sparse +values. +
+`feature_list_dense_keys` + +A list of `Tensor` objects with type `string`. +A list of Nfeature_list_dense string Tensors (scalars). +The keys expected in the SequenceExamples' feature_lists associated +with lists of dense values. +
+`context_dense_defaults` + +A list of `Tensor` objects with types from: `float32`, `int64`, `string`. +A list of Ncontext_dense Tensors (some may be empty). +context_dense_defaults[j] provides default values +when the SequenceExample's context map lacks context_dense_key[j]. +If an empty Tensor is provided for context_dense_defaults[j], +then the Feature context_dense_keys[j] is required. +The input type is inferred from context_dense_defaults[j], even when it's +empty. If context_dense_defaults[j] is not empty, its shape must match +context_dense_shapes[j]. +
+`debug_name` + +A `Tensor` of type `string`. +A scalar containing the name of the serialized proto. +May contain, for example, table key (descriptive) name for the +corresponding serialized proto. This is purely useful for debugging +purposes, and the presence of values here has no effect on the output. +May also be an empty scalar if no name is available. +
+`context_sparse_types` + +An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. +A list of Ncontext_sparse types; the data types of data in +each context Feature given in context_sparse_keys. +Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList), +DT_INT64 (Int64List), and DT_STRING (BytesList). +
+`feature_list_dense_types` + +An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. +
+`context_dense_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +A list of Ncontext_dense shapes; the shapes of data in +each context Feature given in context_dense_keys. +The number of elements in the Feature corresponding to context_dense_key[j] +must always equal context_dense_shapes[j].NumEntries(). +The shape of context_dense_values[j] will match context_dense_shapes[j]. +
+`feature_list_sparse_types` + +An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`. +A list of Nfeature_list_sparse types; the data types +of data in each FeatureList given in feature_list_sparse_keys. +Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList), +DT_INT64 (Int64List), and DT_STRING (BytesList). +
+`feature_list_dense_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +A list of Nfeature_list_dense shapes; the shapes of +data in each FeatureList given in feature_list_dense_keys. +The shape of each Feature in the FeatureList corresponding to +feature_list_dense_key[j] must always equal +feature_list_dense_shapes[j].NumEntries(). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (context_sparse_indices, context_sparse_values, context_sparse_shapes, context_dense_values, feature_list_sparse_indices, feature_list_sparse_values, feature_list_sparse_shapes, feature_list_dense_values). +
+`context_sparse_indices` + +A list with the same length as `context_sparse_keys` of `Tensor` objects with type `int64`. +
+`context_sparse_values` + +A list of `Tensor` objects of type `context_sparse_types`. +
+`context_sparse_shapes` + +A list with the same length as `context_sparse_keys` of `Tensor` objects with type `int64`. +
+`context_dense_values` + +A list of `Tensor` objects. Has the same type as `context_dense_defaults`. +
+`feature_list_sparse_indices` + +A list with the same length as `feature_list_sparse_keys` of `Tensor` objects with type `int64`. +
+`feature_list_sparse_values` + +A list of `Tensor` objects of type `feature_list_sparse_types`. +
+`feature_list_sparse_shapes` + +A list with the same length as `feature_list_sparse_keys` of `Tensor` objects with type `int64`. +
+`feature_list_dense_values` + +A list of `Tensor` objects of type `feature_list_dense_types`. +
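+
+The public `tf.io.parse_single_sequence_example` wrapper is built on this op family and is usually more convenient than wiring the raw inputs. A sketch, assuming TF 2.x and illustrative feature names `id` and `tokens`:
+
+```python
+import tensorflow as tf
+
+seq = tf.train.SequenceExample(
+    context=tf.train.Features(feature={
+        "id": tf.train.Feature(int64_list=tf.train.Int64List(value=[7])),
+    }),
+    feature_lists=tf.train.FeatureLists(feature_list={
+        "tokens": tf.train.FeatureList(feature=[
+            tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"hello"])),
+            tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"world"])),
+        ]),
+    }))
+
+context, sequence = tf.io.parse_single_sequence_example(
+    seq.SerializeToString(),
+    context_features={"id": tf.io.FixedLenFeature([], tf.int64)},
+    sequence_features={"tokens": tf.io.FixedLenSequenceFeature([], tf.string)})
+print(context["id"])       # 7
+print(sequence["tokens"])  # [b'hello' b'world']
+```
+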
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ParseTensor.md b/site/en/api_docs/python/tf/raw_ops/ParseTensor.md new file mode 100644 index 00000000000..28347896c9a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ParseTensor.md @@ -0,0 +1,87 @@ +description: Transforms a serialized tensorflow.TensorProto proto into a Tensor. + +
+ + +
+ +# tf.raw_ops.ParseTensor + + + + + + + + + +Transforms a serialized tensorflow.TensorProto proto into a Tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`serialized` + +A `Tensor` of type `string`. +A scalar string containing a serialized TensorProto proto. +
+`out_type` + +A tf.DType. +The type of the serialized tensor. The provided type must match the +type of the serialized tensor and no implicit conversion will take place. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
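+
+A minimal round-trip sketch, assuming TF 2.x eager execution (`tf.io.serialize_tensor` produces the serialized `TensorProto` consumed here):
+
+```python
+import tensorflow as tf
+
+t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+serialized = tf.io.serialize_tensor(t)   # scalar string holding a TensorProto
+
+restored = tf.raw_ops.ParseTensor(serialized=serialized, out_type=tf.float32)
+print(restored)  # same values and shape as `t`
+```
+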
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PartitionedCall.md b/site/en/api_docs/python/tf/raw_ops/PartitionedCall.md new file mode 100644 index 00000000000..1037543c167 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PartitionedCall.md @@ -0,0 +1,116 @@ +description: returns f(inputs), where f's body is placed and partitioned. + +
+ + +
+ +# tf.raw_ops.PartitionedCall + + + + + + + + + +returns `f(inputs)`, where `f`'s body is placed and partitioned. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`args` + +A list of `Tensor` objects. A list of input tensors. +
+`Tout` + +A list of `tf.DTypes`. A list of output types. +
+`f` + +A function decorated with @Defun. +A function that takes 'args', a list of tensors, and returns 'output', +another list of tensors. Input and output types are specified by 'Tin' +and 'Tout'. The function body of f will be placed and partitioned across +devices, setting this op apart from the regular Call op. +
+`config` + +An optional `string`. Defaults to `""`. +
+`config_proto` + +An optional `string`. Defaults to `""`. +
+`executor_type` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Placeholder.md b/site/en/api_docs/python/tf/raw_ops/Placeholder.md new file mode 100644 index 00000000000..5100dbcf66a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Placeholder.md @@ -0,0 +1,89 @@ +description: A placeholder op for a value that will be fed into the computation. + +
+ + +
+ +# tf.raw_ops.Placeholder + + + + + + + + + +A placeholder op for a value that will be fed into the computation. + + + + + + + + + +N.B. This operation will fail with an error if it is executed. It is +intended as a way to represent a value that will always be fed, and to +provide attrs that enable the fed value to be checked at runtime. + + + + + + + + + + + + + + + + +
+`dtype` + +A tf.DType. The type of elements in the tensor. +
+`shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +(Optional) The shape of the tensor. If the shape has 0 dimensions, the +shape is unconstrained. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
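+
+Because executing this op raises an error, it is only meaningful while building a graph that will be fed later, for example through the `tf.compat.v1` Session API. A sketch under that assumption:
+
+```python
+import tensorflow as tf
+
+g = tf.Graph()
+with g.as_default():
+    # Created in graph mode; running it without a feed would fail.
+    x = tf.raw_ops.Placeholder(dtype=tf.float32, shape=[None, 2])
+    y = x * 2.0
+
+with tf.compat.v1.Session(graph=g) as sess:
+    print(sess.run(y, feed_dict={x: [[1.0, 2.0], [3.0, 4.0]]}))
+    # [[2. 4.]
+    #  [6. 8.]]
+```
+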
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PlaceholderV2.md b/site/en/api_docs/python/tf/raw_ops/PlaceholderV2.md new file mode 100644 index 00000000000..0d00a4d2140 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PlaceholderV2.md @@ -0,0 +1,89 @@ +description: A placeholder op for a value that will be fed into the computation. + +
+ + +
+ +# tf.raw_ops.PlaceholderV2 + + + + + + + + + +A placeholder op for a value that will be fed into the computation. + + + + + + + + + +N.B. This operation will fail with an error if it is executed. It is +intended as a way to represent a value that will always be fed, and to +provide attrs that enable the fed value to be checked at runtime. + + + + + + + + + + + + + + + + +
+`dtype` + +A tf.DType. The type of elements in the tensor. +
+`shape` + +A tf.TensorShape or list of `ints`. +The shape of the tensor. The shape can be any partially-specified +shape. To be unconstrained, pass in a shape with unknown rank. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PlaceholderWithDefault.md b/site/en/api_docs/python/tf/raw_ops/PlaceholderWithDefault.md new file mode 100644 index 00000000000..d6551854381 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PlaceholderWithDefault.md @@ -0,0 +1,85 @@ +description: A placeholder op that passes through input when its output is not fed. + +
+ + +
+ +# tf.raw_ops.PlaceholderWithDefault + + + + + + + + + +A placeholder op that passes through `input` when its output is not fed. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. The default value to produce when `output` is not fed. +
+`shape` + +A tf.TensorShape or list of `ints`. +The (possibly partial) shape of the tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
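+
+When the output is not fed (which includes plain eager execution), the op simply forwards `input`, so a minimal sketch assuming TF 2.x eager mode is just:
+
+```python
+import tensorflow as tf
+
+x = tf.raw_ops.PlaceholderWithDefault(input=tf.constant([1, 2, 3]), shape=[3])
+print(x)  # [1 2 3] -- the default passes straight through when nothing is fed
+```
+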
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Polygamma.md b/site/en/api_docs/python/tf/raw_ops/Polygamma.md new file mode 100644 index 00000000000..2f367a8cbda --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Polygamma.md @@ -0,0 +1,91 @@ +description: Compute the polygamma function \\(\psi^{(n)}(x)\\). + +
+ + +
+ +# tf.raw_ops.Polygamma + + + + + + + + + +Compute the polygamma function \\(\psi^{(n)}(x)\\). + + + + + + + + + +The polygamma function is defined as: + + +\\(\psi^{(a)}(x) = \frac{d^a}{dx^a} \psi(x)\\) + +where \\(\psi(x)\\) is the digamma function. +The polygamma function is defined only for non-negative integer orders \\(a\\). + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`x` + +A `Tensor`. Must have the same type as `a`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a`. +
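+
+A small numeric sketch, assuming TF 2.x eager execution (order `a = 1` gives the trigamma function, so \\(\psi^{(1)}(1) = \pi^2/6 \approx 1.6449\\)):
+
+```python
+import tensorflow as tf
+
+print(tf.raw_ops.Polygamma(a=tf.constant([1.0, 1.0]),
+                           x=tf.constant([1.0, 2.0])))
+# approximately [1.6449341, 0.6449341]
+```
+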
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PopulationCount.md b/site/en/api_docs/python/tf/raw_ops/PopulationCount.md new file mode 100644 index 00000000000..802ecad692c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PopulationCount.md @@ -0,0 +1,83 @@ +description: Computes element-wise population count (a.k.a. popcount, bitsum, bitcount). + +
+ + +
+ +# tf.raw_ops.PopulationCount + + + + + + + + + +Computes element-wise population count (a.k.a. popcount, bitsum, bitcount). + + + + + + + + + +For each entry in `x`, calculates the number of `1` (on) bits in the binary +representation of that entry. + +**NOTE**: It is more efficient to first tf.bitcast your tensors into +`int32` or `int64` and perform the bitcount on the result, than to feed in +8- or 16-bit inputs and then aggregate the resulting counts. + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `uint8`. +
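+
+A minimal sketch, assuming TF 2.x eager execution:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0, 1, 2, 3, 255], dtype=tf.int32)
+print(tf.raw_ops.PopulationCount(x=x))  # [0 1 1 2 8], dtype uint8
+```
+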
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Pow.md b/site/en/api_docs/python/tf/raw_ops/Pow.md new file mode 100644 index 00000000000..2ab8f8b545f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Pow.md @@ -0,0 +1,92 @@ +description: Computes the power of one value to another. + +
+ + +
+ +# tf.raw_ops.Pow + + + + + + + + + +Computes the power of one value to another. + + + + + + + + + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +``` +# tensor 'x' is [[2, 2], [3, 3]] +# tensor 'y' is [[8, 16], [2, 3]] +tf.pow(x, y) ==> [[256, 65536], [9, 27]] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `float32`, `half`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PrefetchDataset.md b/site/en/api_docs/python/tf/raw_ops/PrefetchDataset.md new file mode 100644 index 00000000000..90de6466643 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PrefetchDataset.md @@ -0,0 +1,115 @@ +description: Creates a dataset that asynchronously prefetches elements from input_dataset. + +
+ + +
+ +# tf.raw_ops.PrefetchDataset + + + + + + + + + +Creates a dataset that asynchronously prefetches elements from `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`buffer_size` + +A `Tensor` of type `int64`. +The maximum number of elements to buffer in an iterator over +this dataset. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`slack_period` + +An optional `int`. Defaults to `0`. +
+`legacy_autotune` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
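+
+Dataset raw ops consume and produce `variant` tensors; `tf.data.Dataset.prefetch` is the public way to create this op. The sketch below wires the raw op by hand using the experimental variant helpers, purely for illustration (assuming TF 2.x eager execution):
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(5)
+
+variant_out = tf.raw_ops.PrefetchDataset(
+    input_dataset=tf.data.experimental.to_variant(ds),
+    buffer_size=tf.constant(2, dtype=tf.int64),
+    output_types=[tf.int64],
+    output_shapes=[tf.TensorShape([])],
+)
+prefetched = tf.data.experimental.from_variant(variant_out, ds.element_spec)
+print(list(prefetched.as_numpy_iterator()))  # [0, 1, 2, 3, 4]
+# Equivalent public API: ds.prefetch(2)
+```
+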
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Prelinearize.md b/site/en/api_docs/python/tf/raw_ops/Prelinearize.md new file mode 100644 index 00000000000..cd8d25639b7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Prelinearize.md @@ -0,0 +1,95 @@ +description: An op which linearizes one Tensor value to an opaque variant tensor. + +
+ + +
+ +# tf.raw_ops.Prelinearize + + + + + + + + + +An op which linearizes one Tensor value to an opaque variant tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. A tensor that will be linearized. +
+`shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `[]`. +The shape of the tensor. +
+`layout` + +An optional list of `ints`. Defaults to `[]`. +A vector holding the requested layout in minor-to-major sequence. If a layout +attribute is passed but its values are all -1 the layout will be computed by +the infeed operation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PrelinearizeTuple.md b/site/en/api_docs/python/tf/raw_ops/PrelinearizeTuple.md new file mode 100644 index 00000000000..34d6880f660 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PrelinearizeTuple.md @@ -0,0 +1,97 @@ +description: An op which linearizes multiple Tensor values to an opaque variant tensor. + +
+ + +
+ +# tf.raw_ops.PrelinearizeTuple + + + + + + + + + +An op which linearizes multiple Tensor values to an opaque variant tensor. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of `Tensor` objects. +A list of tensors that will be provided using the infeed mechanism. +
+`shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +The shapes of each tensor in `inputs`. +
+`layouts` + +An optional list of `ints`. Defaults to `[]`. +A vector holding the requested layout in minor-to-major sequence for all the +tuple shapes in the order the shapes appear in the "shapes" input. The layout +elements for a sub-shape can be set to -1 in which case the corresponding layout +will be computed by the infeed operation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PreventGradient.md b/site/en/api_docs/python/tf/raw_ops/PreventGradient.md new file mode 100644 index 00000000000..d0d5e52bf88 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PreventGradient.md @@ -0,0 +1,93 @@ +description: An identity op that triggers an error if a gradient is requested. + +
+ + +
+ +# tf.raw_ops.PreventGradient + + + + + + + + + +An identity op that triggers an error if a gradient is requested. + + + + + + + + + +When executed in a graph, this op outputs its input tensor as-is. + +When building ops to compute gradients, the TensorFlow gradient system +will return an error when trying to lookup the gradient of this op, +because no gradient must ever be registered for this function. This +op exists to prevent subtle bugs from silently returning unimplemented +gradients in some corner cases. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. any tensor. +
+`message` + +An optional `string`. Defaults to `""`. +Will be printed in the error when anyone tries to differentiate +this operation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
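+
+A minimal sketch, assuming TF 2.x eager execution; the forward pass is a plain identity, and only a gradient request trips the error:
+
+```python
+import tensorflow as tf
+
+x = tf.constant(3.0)
+with tf.GradientTape() as tape:
+    tape.watch(x)
+    y = tf.raw_ops.PreventGradient(input=x, message="gradients intentionally blocked")
+
+print(y)  # 3.0
+# Calling tape.gradient(y, x) would raise an error containing the message above.
+```
+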
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Print.md b/site/en/api_docs/python/tf/raw_ops/Print.md new file mode 100644 index 00000000000..18c4082b2f8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Print.md @@ -0,0 +1,110 @@ +description: Prints a list of tensors. + +
+ + +
+ +# tf.raw_ops.Print + + + + + + + + + +Prints a list of tensors. + + + + + + + + + +Passes `input` through to `output` and prints `data` when evaluating. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. The tensor passed to `output` +
+`data` + +A list of `Tensor` objects. +A list of tensors to print out when op is evaluated. +
+`message` + +An optional `string`. Defaults to `""`. +A string, prefix of the error message. +
+`first_n` + +An optional `int`. Defaults to `-1`. +Only log `first_n` number of times. -1 disables logging. +
+`summarize` + +An optional `int`. Defaults to `3`. +Only print this many entries of each tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PrintV2.md b/site/en/api_docs/python/tf/raw_ops/PrintV2.md new file mode 100644 index 00000000000..c620f775687 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PrintV2.md @@ -0,0 +1,93 @@ +description: Prints a string scalar. + +
+ + +
+ +# tf.raw_ops.PrintV2 + + + + + + + + + +Prints a string scalar. + + + + + + + + + +Prints a string scalar to the desired output_stream. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. The string scalar to print. +
+`output_stream` + +An optional `string`. Defaults to `"stderr"`. +A string specifying the output stream or logging level to print to. +
+`end` + +An optional `string`. Defaults to `"\n"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
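+
+A minimal sketch, assuming TF 2.x eager execution (`tf.print` is the public wrapper around this op; `tf.strings.format` builds the required string scalar):
+
+```python
+import tensorflow as tf
+
+msg = tf.strings.format("step {}: loss = {}", (tf.constant(10), tf.constant(0.25)))
+tf.raw_ops.PrintV2(input=msg, output_stream="stderr", end="\n")
+# Writes "step 10: loss = 0.25" to stderr.
+```
+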
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PriorityQueue.md b/site/en/api_docs/python/tf/raw_ops/PriorityQueue.md new file mode 100644 index 00000000000..e80b6461973 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PriorityQueue.md @@ -0,0 +1,121 @@ +description: A queue that produces elements sorted by the first component value. + +
+ + +
+ +# tf.raw_ops.PriorityQueue + + + + + + + + + +A queue that produces elements sorted by the first component value. + + + + + + + + + +Note that the PriorityQueue requires the first component of any element +to be a scalar int64, in addition to the other elements declared by +component_types. Therefore calls to Enqueue and EnqueueMany (resp. Dequeue +and DequeueMany) on a PriorityQueue will all require (resp. output) one extra +entry in their input (resp. output) lists. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +The shape of each component in a value. The length of this attr must +be either 0 or the same as the length of component_types. If the length of +this attr is 0, the shapes of queue elements are not constrained, and +only one element may be dequeued at a time. +
+`component_types` + +An optional list of `tf.DTypes`. Defaults to `[]`. +The type of each component in a value. +
+`capacity` + +An optional `int`. Defaults to `-1`. +The upper bound on the number of elements in this queue. +Negative numbers mean no limit. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this queue will be shared under the given name +across multiple sessions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PriorityQueueV2.md b/site/en/api_docs/python/tf/raw_ops/PriorityQueueV2.md new file mode 100644 index 00000000000..84ed7442e2a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PriorityQueueV2.md @@ -0,0 +1,121 @@ +description: A queue that produces elements sorted by the first component value. + +
+ + +
+ +# tf.raw_ops.PriorityQueueV2 + + + + + + + + + +A queue that produces elements sorted by the first component value. + + + + + + + + + +Note that the PriorityQueue requires the first component of any element +to be a scalar int64, in addition to the other elements declared by +component_types. Therefore calls to Enqueue and EnqueueMany (resp. Dequeue +and DequeueMany) on a PriorityQueue will all require (resp. output) one extra +entry in their input (resp. output) lists. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`). +The shape of each component in a value. The length of this attr must +be either 0 or the same as the length of component_types. If the length of +this attr is 0, the shapes of queue elements are not constrained, and +only one element may be dequeued at a time. +
+`component_types` + +An optional list of `tf.DTypes`. Defaults to `[]`. +The type of each component in a value. +
+`capacity` + +An optional `int`. Defaults to `-1`. +The upper bound on the number of elements in this queue. +Negative numbers mean no limit. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this queue will be shared under the given name +across multiple sessions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
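+
+The public `tf.queue.PriorityQueue` class wraps this op and handles the resource plumbing. A rough sketch under TF 2.x eager execution (note the extra leading int64 priority component required by the op):
+
+```python
+import tensorflow as tf
+
+q = tf.queue.PriorityQueue(capacity=10, types=[tf.string], shapes=[[]])
+
+# Each enqueue supplies (priority, value); smaller priorities dequeue first.
+q.enqueue((tf.constant(2, dtype=tf.int64), tf.constant("second")))
+q.enqueue((tf.constant(1, dtype=tf.int64), tf.constant("first")))
+
+priority, value = q.dequeue()   # the priority comes back as an extra component
+print(priority.numpy(), value.numpy())  # 1 b'first'
+```
+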
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PrivateThreadPoolDataset.md b/site/en/api_docs/python/tf/raw_ops/PrivateThreadPoolDataset.md new file mode 100644 index 00000000000..091d45ab7a7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PrivateThreadPoolDataset.md @@ -0,0 +1,99 @@ +description: Creates a dataset that uses a custom thread pool to compute input_dataset. + +
+ + +
+ +# tf.raw_ops.PrivateThreadPoolDataset + + + + + + + + + +Creates a dataset that uses a custom thread pool to compute `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`num_threads` + +A `Tensor` of type `int64`. +Identifies the number of threads to use for the private threadpool. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Prod.md b/site/en/api_docs/python/tf/raw_ops/Prod.md new file mode 100644 index 00000000000..150730239ab --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Prod.md @@ -0,0 +1,99 @@ +description: Computes the product of elements across dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.Prod + + + + + + + + + +Computes the product of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input` along the dimensions given in `axis`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`axis`. If `keep_dims` is true, the reduced dimensions are +retained with length 1. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The tensor to reduce. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The dimensions to reduce. Must be in the range +`[-rank(input), rank(input))`. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
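+
+A minimal sketch, assuming TF 2.x eager execution:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2, 3],
+                 [4, 5, 6]])
+print(tf.raw_ops.Prod(input=x, axis=0, keep_dims=False))      # [ 4 10 18]
+print(tf.raw_ops.Prod(input=x, axis=[0, 1], keep_dims=True))  # [[720]]
+```
+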
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PyFunc.md b/site/en/api_docs/python/tf/raw_ops/PyFunc.md new file mode 100644 index 00000000000..a0fa9c8d5f4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PyFunc.md @@ -0,0 +1,96 @@ +description: Invokes a python function to compute func(input)->output. + +
+ + +
+ +# tf.raw_ops.PyFunc + + + + + + + + + +Invokes a python function to compute func(input)->output. + + + + + + + + + +This operation is considered stateful. For a stateless version, see +PyFuncStateless. + + + + + + + + + + + + + + + + + + + +
+`input` + +A list of `Tensor` objects. +List of Tensors that will provide input to the Op. +
+`token` + +A `string`. +A token representing a registered python function in this address space. +
+`Tout` + +A list of `tf.DTypes`. Data types of the outputs from the op. +The length of the list specifies the number of outputs. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/PyFuncStateless.md b/site/en/api_docs/python/tf/raw_ops/PyFuncStateless.md new file mode 100644 index 00000000000..27004ac0a53 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/PyFuncStateless.md @@ -0,0 +1,91 @@ +description: A stateless version of PyFunc. + +
+ + +
+ +# tf.raw_ops.PyFuncStateless + + + + + + + + + +A stateless version of PyFunc. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A list of `Tensor` objects. +
+`token` + +A `string`. +
+`Tout` + +A list of `tf.DTypes`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Qr.md b/site/en/api_docs/python/tf/raw_ops/Qr.md new file mode 100644 index 00000000000..a2072f363e1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Qr.md @@ -0,0 +1,112 @@ +description: Computes the QR decompositions of one or more matrices. + +
+ + +
+ +# tf.raw_ops.Qr + + + + + + + + + +Computes the QR decompositions of one or more matrices. + + + + + + + + + +Computes the QR decomposition of each inner matrix in `tensor` such that +`tensor[..., :, :] = q[..., :, :] * r[..., :,:])` + +```python +# a is a tensor. +# q is a tensor of orthonormal matrices. +# r is a tensor of upper triangular matrices. +q, r = qr(a) +q_full, r_full = qr(a, full_matrices=True) +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +A tensor of shape `[..., M, N]` whose inner-most 2 dimensions +form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`. +
+`full_matrices` + +An optional `bool`. Defaults to `False`. +If true, compute full-sized `q` and `r`. If false +(the default), compute only the leading `P` columns of `q`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (q, r). +
+`q` + +A `Tensor`. Has the same type as `input`. +
+`r` + +A `Tensor`. Has the same type as `input`. +
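+
+A small eager sketch (the matrix is illustrative; tf.linalg.qr is the public wrapper around this op):
+
+```python
+import tensorflow as tf
+
+a = tf.constant([[1., 2.], [3., 4.], [5., 6.]])               # shape [3, 2], so P = 2
+
+q, r = tf.raw_ops.Qr(input=a, full_matrices=False)            # q: [3, 2], r: [2, 2]
+q_full, r_full = tf.raw_ops.Qr(input=a, full_matrices=True)   # q: [3, 3], r: [3, 2]
+
+# Reconstruction check: a should be (numerically) recovered by q @ r.
+print(tf.reduce_max(tf.abs(tf.matmul(q, r) - a)))
+```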
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizeAndDequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizeAndDequantize.md new file mode 100644 index 00000000000..655a4f6c58f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizeAndDequantize.md @@ -0,0 +1,113 @@ +description: Use QuantizeAndDequantizeV2 instead. + +
+ + +
+ +# tf.raw_ops.QuantizeAndDequantize + + + + + + + + + +Use QuantizeAndDequantizeV2 instead. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`signed_input` + +An optional `bool`. Defaults to `True`. +
+`num_bits` + +An optional `int`. Defaults to `8`. +
+`range_given` + +An optional `bool`. Defaults to `False`. +
+`input_min` + +An optional `float`. Defaults to `0`. +
+`input_max` + +An optional `float`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizeAndDequantizeV2.md b/site/en/api_docs/python/tf/raw_ops/QuantizeAndDequantizeV2.md new file mode 100644 index 00000000000..29ae3b68f0b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizeAndDequantizeV2.md @@ -0,0 +1,209 @@ +description: Quantizes then dequantizes a tensor. + +
+ + +
+ +# tf.raw_ops.QuantizeAndDequantizeV2 + + + + + + + + + +Quantizes then dequantizes a tensor. + + + + + + + + + +This op simulates the precision loss from the quantized forward pass by: + +1. Quantizing the tensor to fixed point numbers, which should match the target + quantization method when it is used in inference. +2. Dequantizing it back to floating point numbers for the following ops, most + likely matmul. + +There are different ways to quantize. This version uses only scaling, so 0.0 +maps to 0. + +From the specified 'num_bits' in the quantized output type, it determines +minimum and maximum representable quantized values. + +e.g. + +* [-128, 127] for signed, num_bits = 8, or +* [0, 255] for unsigned, num_bits = 8. + +If range_given == False, the initial input_min, input_max will be determined +automatically as the minimum and maximum values in the input tensor, otherwise +the specified values of input_min, input_max are used. + +Note: If the input_min, input_max are specified, they do not need to equal the +actual minimum and maximum values in the tensor. e.g. in some cases it may be +beneficial to specify these values such that the low probability extremes of the +input distribution are clipped. + +This op determines the maximum scale_factor that would map the initial +[input_min, input_max] range to a range that lies within the representable +quantized range. + +It determines the scale from one of input_min and input_max, then updates the +other one to maximize the representable range. + +e.g. + +* if the output is signed, num_bits = 8, [input_min, input_max] = [-10.0, + 5.0]: it would use a scale_factor of -128 / -10.0 = 12.8 In this case, it + would update input_max to be 127 / 12.8 = 9.921875 +* if the output is signed, num_bits = 8, [input_min, input_max] = [-10.0, + 10.0]: it would use a scale_factor of 127 / 10.0 = 12.7 In this case, it + would update input_min to be 128.0 / 12.7 = -10.07874 +* if the output is unsigned, input_min is forced to be 0, and only the + specified input_max is used. + +After determining the scale_factor and updating the input range, it applies the +following to each value in the 'input' tensor. + +output = round(clamp(value, input_min, input_max) * scale_factor) / scale_factor. + +The above round function rounds the value based on the given round_mode. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +Tensor to quantize and then dequantize. +
+`input_min` + +A `Tensor`. Must have the same type as `input`. +If `range_given == True`, this specifies the minimum input value that needs to +be represented, otherwise it is determined from the min value of the `input` +tensor. +
+`input_max` + +A `Tensor`. Must have the same type as `input`. +If `range_given == True`, this specifies the maximum input value that needs to +be represented, otherwise it is determined from the max value of the `input` +tensor. +
+`signed_input` + +An optional `bool`. Defaults to `True`. +Whether the quantization is signed or unsigned. (actually this parameter should +have been called `signed_output`) +
+`num_bits` + +An optional `int`. Defaults to `8`. +The bitwidth of the quantization. +
+`range_given` + +An optional `bool`. Defaults to `False`. +Whether the range is given or should be determined from the `input` tensor. +
+`round_mode` + +An optional `string` from: `"HALF_TO_EVEN", "HALF_UP"`. Defaults to `"HALF_TO_EVEN"`. +The 'round_mode' attribute controls which rounding tie-breaking algorithm is +used when rounding float values to their quantized equivalents. The following +rounding modes are currently supported: + +* HALF_TO_EVEN: this is the default round_mode. +* HALF_UP: round towards positive. In this mode 7.5 rounds up to 8 and -7.5 +rounds up to -7. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +If True, then the absolute value of the quantized minimum value is the same as +the quantized maximum value, instead of 1 greater. +i.e. for 8 bit quantization, the minimum value is -127 instead of -128. +
+`axis` + +An optional `int`. Defaults to `-1`. +If specified, this axis is treated as a channel or slice axis, and a separate +quantization range is used for each channel or slice along this axis. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
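+
+A minimal eager sketch of the fake-quantization round trip described above, mirroring the signed, `num_bits=8`, `[-10, 5]` example (values are illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-10.0, -3.3, 0.0, 4.7, 5.0])
+
+# scale_factor = -128 / -10.0 = 12.8, so the representable range becomes
+# [-10.0, 127 / 12.8] = [-10.0, 9.921875] and outputs are multiples of 1/12.8.
+y = tf.raw_ops.QuantizeAndDequantizeV2(
+    input=x, input_min=-10.0, input_max=5.0,
+    signed_input=True, num_bits=8, range_given=True,
+    round_mode="HALF_TO_EVEN")
+print(y)
+```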
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizeAndDequantizeV3.md b/site/en/api_docs/python/tf/raw_ops/QuantizeAndDequantizeV3.md new file mode 100644 index 00000000000..d30ba0d1030 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizeAndDequantizeV3.md @@ -0,0 +1,129 @@ +description: Quantizes then dequantizes a tensor. + +
+ + +
+ +# tf.raw_ops.QuantizeAndDequantizeV3 + + + + + + + + + +Quantizes then dequantizes a tensor. + + + + + + + + + +This is almost identical to QuantizeAndDequantizeV2, except that num_bits is a +tensor, so its value can change during training. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`input_min` + +A `Tensor`. Must have the same type as `input`. +
+`input_max` + +A `Tensor`. Must have the same type as `input`. +
+`num_bits` + +A `Tensor` of type `int32`. +
+`signed_input` + +An optional `bool`. Defaults to `True`. +
+`range_given` + +An optional `bool`. Defaults to `True`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`axis` + +An optional `int`. Defaults to `-1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
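+
+A minimal sketch; the only practical difference from V2 is that `num_bits` is passed as a tensor, so it can be changed during training (values are illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-10.0, -3.3, 0.0, 4.7, 5.0])
+num_bits = tf.constant(8)   # an int32 Tensor rather than a fixed attribute
+
+y = tf.raw_ops.QuantizeAndDequantizeV3(
+    input=x, input_min=-10.0, input_max=5.0,
+    num_bits=num_bits, signed_input=True, range_given=True)
+```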
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizeDownAndShrinkRange.md b/site/en/api_docs/python/tf/raw_ops/QuantizeDownAndShrinkRange.md new file mode 100644 index 00000000000..81aa6ba2513 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizeDownAndShrinkRange.md @@ -0,0 +1,144 @@ +description: Convert the quantized 'input' tensor into a lower-precision 'output', using the + +
+ + +
+ +# tf.raw_ops.QuantizeDownAndShrinkRange + + + + + + + + + +Convert the quantized 'input' tensor into a lower-precision 'output', using the + + + + + + + + + +actual distribution of the values to maximize the usage of the lower bit depth +and adjusting the output min and max ranges accordingly. + +[input_min, input_max] are scalar floats that specify the range for the float +interpretation of the 'input' data. For example, if input_min is -1.0f and +input_max is 1.0f, and we are dealing with quint16 quantized data, then a 0 +value in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f. + +This operator tries to squeeze as much precision as possible into an output with +a lower bit depth by calculating the actual min and max values found in the +data. For example, maybe that quint16 input has no values lower than 16,384 and +none higher than 49,152. That means only half the range is actually needed, all +the float interpretations are between -0.5f and 0.5f, so if we want to compress +the data into a quint8 output, we can use that range rather than the theoretical +-1.0f to 1.0f that is suggested by the input min and max. + +In practice, this is most useful for taking output from operations like +QuantizedMatMul that can produce higher bit-depth outputs than their inputs and +may have large potential output ranges, but in practice have a distribution of +input values that only uses a small fraction of the possible range. By feeding +that output into this operator, we can reduce it from 32 bits down to 8 with +minimal loss of accuracy. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`input_min` + +A `Tensor` of type `float32`. +The float value that the minimum quantized input value represents. +
+`input_max` + +A `Tensor` of type `float32`. +The float value that the maximum quantized input value represents. +
+`out_type` + +A tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. +The type of the output. Should be a lower bit depth than Tinput. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_min, output_max). +
+`output` + +A `Tensor` of type `out_type`. +
+`output_min` + +A `Tensor` of type `float32`. +
+`output_max` + +A `Tensor` of type `float32`. +
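+
+A hedged sketch of the pipeline described above, assuming eager execution and the stock CPU kernels: a qint32 accumulator (here produced by QuantizedAdd) is squeezed back down to quint8 using its actual value range.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0, 3.0])
+qx, x_min, x_max = tf.raw_ops.QuantizeV2(
+    input=x, min_range=0.0, max_range=6.0, T=tf.quint8)
+
+# A 32-bit quantized buffer, standing in for the output of a quantized matmul/add.
+acc, acc_min, acc_max = tf.raw_ops.QuantizedAdd(
+    x=qx, y=qx, min_x=x_min, max_x=x_max, min_y=x_min, max_y=x_max)
+
+# Re-express the accumulator as quint8 over the range actually used by the data.
+out, out_min, out_max = tf.raw_ops.QuantizeDownAndShrinkRange(
+    input=acc, input_min=acc_min, input_max=acc_max, out_type=tf.quint8)
+```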
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizeV2.md b/site/en/api_docs/python/tf/raw_ops/QuantizeV2.md new file mode 100644 index 00000000000..ea166e656a3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizeV2.md @@ -0,0 +1,287 @@ +description: Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. + +
+ + +
+ +# tf.raw_ops.QuantizeV2 + + + + + + + + + +Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. + + + + + + + + + +[min_range, max_range] are scalar floats that specify the range for +the 'input' data. The 'mode' attribute controls exactly which calculations are +used to convert the float values to their quantized equivalents. The +'round_mode' attribute controls which rounding tie-breaking algorithm is used +when rounding float values to their quantized equivalents. + +In 'MIN_COMBINED' mode, each value of the tensor will undergo the following: + +``` +out[i] = (in[i] - min_range) * range(T) / (max_range - min_range) +if T == qint8: out[i] -= (range(T) + 1) / 2.0 +``` + +here `range(T) = numeric_limits::max() - numeric_limits::min()` + +*MIN_COMBINED Mode Example* + +Assume the input is type float and has a possible range of [0.0, 6.0] and the +output type is quint8 ([0, 255]). The min_range and max_range values should be +specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each +value of the input by 255/6 and cast to quint8. + +If the output type was qint8 ([-128, 127]), the operation will additionally +subtract each value by 128 prior to casting, so that the range of values aligns +with the range of qint8. + +If the mode is 'MIN_FIRST', then this approach is used: + +``` +num_discrete_values = 1 << (# of bits in T) +range_adjust = num_discrete_values / (num_discrete_values - 1) +range = (range_max - range_min) * range_adjust +range_scale = num_discrete_values / range +quantized = round(input * range_scale) - round(range_min * range_scale) + + numeric_limits::min() +quantized = max(quantized, numeric_limits::min()) +quantized = min(quantized, numeric_limits::max()) +``` + +The biggest difference between this and MIN_COMBINED is that the minimum range +is rounded first, before it's subtracted from the rounded value. With +MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing +and dequantizing will introduce a larger and larger error. + +*SCALED mode Example* + +`SCALED` mode matches the quantization approach used in +`QuantizeAndDequantize{V2|V3}`. + +If the mode is `SCALED`, the quantization is performed by multiplying each +input value by a scaling_factor. +The scaling_factor is determined from `min_range` and `max_range` to be as large +as possible such that the range from `min_range` to `max_range` is representable +within values of type T. + +```c++ + + const int min_T = std::numeric_limits::min(); + const int max_T = std::numeric_limits::max(); + const float max_float = std::numeric_limits::max(); + + const float scale_factor_from_min_side = + (min_T * min_range > 0) ? min_T / min_range : max_float; + const float scale_factor_from_max_side = + (max_T * max_range > 0) ? max_T / max_range : max_float; + + const float scale_factor = std::min(scale_factor_from_min_side, + scale_factor_from_max_side); +``` + +We next use the scale_factor to adjust min_range and max_range as follows: + +```c++ + min_range = min_T / scale_factor; + max_range = max_T / scale_factor; +``` + + +e.g. if T = qint8, and initially min_range = -10, and max_range = 9, we would +compare -128/-10.0 = 12.8 to 127/9.0 = 14.11, and set scaling_factor = 12.8 +In this case, min_range would remain -10, but max_range would be adjusted to +127 / 12.8 = 9.921875 + +So we will quantize input values in the range (-10, 9.921875) to (-128, 127). 
+ +The input tensor can now be quantized by clipping values to the range +`min_range` to `max_range`, then multiplying by scale_factor as follows: + +```c++ +result = round(min(max_range, max(min_range, input)) * scale_factor) +``` + +The adjusted `min_range` and `max_range` are returned as outputs 2 and 3 of +this operation. These outputs should be used as the range for any further +calculations. + + +*narrow_range (bool) attribute* + +If true, we do not use the minimum quantized value. +i.e. for int8 the quantized output, it would be restricted to the range +-127..127 instead of the full -128..127 range. +This is provided for compatibility with certain inference backends. +(Only applies to SCALED mode) + + +*axis (int) attribute* + +An optional `axis` attribute can specify a dimension index of the input tensor, +such that quantization ranges will be calculated and applied separately for each +slice of the tensor along that dimension. This is useful for per-channel +quantization. + +If axis is specified, min_range and max_range + +if `axis`=None, per-tensor quantization is performed as normal. + + +*ensure_minimum_range (float) attribute* + +Ensures the minimum quantization range is at least this value. +The legacy default value for this is 0.01, but it is strongly suggested to +set it to 0 for new uses. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `float32`. +
+`min_range` + +A `Tensor` of type `float32`. +The minimum value of the quantization range. This value may be adjusted by the +op depending on other parameters. The adjusted value is written to `output_min`. +If the `axis` attribute is specified, this must be a 1-D tensor whose size +matches the `axis` dimension of the input and output tensors. +
+`max_range` + +A `Tensor` of type `float32`. +The maximum value of the quantization range. This value may be adjusted by the +op depending on other parameters. The adjusted value is written to `output_max`. +If the `axis` attribute is specified, this must be a 1-D tensor whose size +matches the `axis` dimension of the input and output tensors. +
+`T` + +A tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. +
+`mode` + +An optional `string` from: `"MIN_COMBINED", "MIN_FIRST", "SCALED"`. Defaults to `"MIN_COMBINED"`. +
+`round_mode` + +An optional `string` from: `"HALF_AWAY_FROM_ZERO", "HALF_TO_EVEN"`. Defaults to `"HALF_AWAY_FROM_ZERO"`. +
+`narrow_range` + +An optional `bool`. Defaults to `False`. +
+`axis` + +An optional `int`. Defaults to `-1`. +
+`ensure_minimum_range` + +An optional `float`. Defaults to `0.01`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_min, output_max). +
+`output` + +A `Tensor` of type `T`. +
+`output_min` + +A `Tensor` of type `float32`. +
+`output_max` + +A `Tensor` of type `float32`. +
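+
+A minimal eager sketch mirroring the MIN_COMBINED example above (values are illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.0, 1.5, 3.0, 6.0])
+
+# Quantize floats in [0.0, 6.0] to quint8 ([0, 255]); each value is scaled by 255/6.
+out, out_min, out_max = tf.raw_ops.QuantizeV2(
+    input=x, min_range=0.0, max_range=6.0, T=tf.quint8)
+
+# Round-trip back to float for inspection.
+print(tf.raw_ops.Dequantize(input=out, min_range=out_min, max_range=out_max))
+```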
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedAdd.md b/site/en/api_docs/python/tf/raw_ops/QuantizedAdd.md new file mode 100644 index 00000000000..70cf2796c77 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedAdd.md @@ -0,0 +1,144 @@ +description: Returns x + y element-wise, working on quantized buffers. + +
+ + +
+ +# tf.raw_ops.QuantizedAdd + + + + + + + + + +Returns x + y element-wise, working on quantized buffers. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`y` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_x` + +A `Tensor` of type `float32`. +The float value that the lowest quantized `x` value represents. +
+`max_x` + +A `Tensor` of type `float32`. +The float value that the highest quantized `x` value represents. +
+`min_y` + +A `Tensor` of type `float32`. +The float value that the lowest quantized `y` value represents. +
+`max_y` + +A `Tensor` of type `float32`. +The float value that the highest quantized `y` value represents. +
+`Toutput` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (z, min_z, max_z). +
+`z` + +A `Tensor` of type `Toutput`. +
+`min_z` + +A `Tensor` of type `float32`. +
+`max_z` + +A `Tensor` of type `float32`. +
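+
+A hedged sketch of adding two quint8 buffers produced by QuantizeV2, assuming the stock CPU kernel (the default `Toutput` is qint32); values are illustrative:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0, 3.0])
+y = tf.constant([0.5, 1.5, 2.5])
+
+qx, min_x, max_x = tf.raw_ops.QuantizeV2(input=x, min_range=0.0, max_range=6.0, T=tf.quint8)
+qy, min_y, max_y = tf.raw_ops.QuantizeV2(input=y, min_range=0.0, max_range=6.0, T=tf.quint8)
+
+z, min_z, max_z = tf.raw_ops.QuantizedAdd(
+    x=qx, y=qy, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)
+
+# Interpret the qint32 sum as floats again.
+print(tf.raw_ops.Dequantize(input=z, min_range=min_z, max_range=max_z))
+```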
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedAvgPool.md b/site/en/api_docs/python/tf/raw_ops/QuantizedAvgPool.md new file mode 100644 index 00000000000..78b4fa3f111 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedAvgPool.md @@ -0,0 +1,141 @@ +description: Produces the average pool of the input tensor for quantized types. + +
+ + +
+ +# tf.raw_ops.QuantizedAvgPool + + + + + + + + + +Produces the average pool of the input tensor for quantized types. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +4-D with shape `[batch, height, width, channels]`. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the lowest quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the highest quantized input value represents. +
+`ksize` + +A list of `ints`. +The size of the window for each dimension of the input tensor. +The length must be 4 to match the number of dimensions of the input. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +tensor. The length must be 4 to match the number of dimensions of the input. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor`. Has the same type as `input`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
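+
+A hedged sketch on a quint8 buffer, assuming eager execution and the stock CPU kernel (shapes and values are illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])   # NHWC
+qx, q_min, q_max = tf.raw_ops.QuantizeV2(
+    input=x, min_range=0.0, max_range=15.0, T=tf.quint8)
+
+out, out_min, out_max = tf.raw_ops.QuantizedAvgPool(
+    input=qx, min_input=q_min, max_input=q_max,
+    ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
+```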
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedBatchNormWithGlobalNormalization.md b/site/en/api_docs/python/tf/raw_ops/QuantizedBatchNormWithGlobalNormalization.md new file mode 100644 index 00000000000..6b3763411a3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedBatchNormWithGlobalNormalization.md @@ -0,0 +1,245 @@ +description: Quantized Batch normalization. + +
+ + +
+ +# tf.raw_ops.QuantizedBatchNormWithGlobalNormalization + + + + + + + + + +Quantized Batch normalization. + + + + + + + + + +This op is deprecated and will be removed in the future. Prefer +tf.nn.batch_normalization. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`t` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +A 4D input Tensor. +
+`t_min` + +A `Tensor` of type `float32`. +The value represented by the lowest quantized input. +
+`t_max` + +A `Tensor` of type `float32`. +The value represented by the highest quantized input. +
+`m` + +A `Tensor`. Must have the same type as `t`. +A 1D mean Tensor with size matching the last dimension of t. +This is the first output from tf.nn.moments, +or a saved moving average thereof. +
+`m_min` + +A `Tensor` of type `float32`. +The value represented by the lowest quantized mean. +
+`m_max` + +A `Tensor` of type `float32`. +The value represented by the highest quantized mean. +
+`v` + +A `Tensor`. Must have the same type as `t`. +A 1D variance Tensor with size matching the last dimension of t. +This is the second output from tf.nn.moments, +or a saved moving average thereof. +
+`v_min` + +A `Tensor` of type `float32`. +The value represented by the lowest quantized variance. +
+`v_max` + +A `Tensor` of type `float32`. +The value represented by the highest quantized variance. +
+`beta` + +A `Tensor`. Must have the same type as `t`. +A 1D beta Tensor with size matching the last dimension of t. +An offset to be added to the normalized tensor. +
+`beta_min` + +A `Tensor` of type `float32`. +The value represented by the lowest quantized offset. +
+`beta_max` + +A `Tensor` of type `float32`. +The value represented by the highest quantized offset. +
+`gamma` + +A `Tensor`. Must have the same type as `t`. +A 1D gamma Tensor with size matching the last dimension of t. +If "scale_after_normalization" is true, this tensor will be multiplied +with the normalized tensor. +
+`gamma_min` + +A `Tensor` of type `float32`. +The value represented by the lowest quantized gamma. +
+`gamma_max` + +A `Tensor` of type `float32`. +The value represented by the highest quantized gamma. +
+`out_type` + +A tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. +
+`variance_epsilon` + +A `float`. A small float number to avoid dividing by 0. +
+`scale_after_normalization` + +A `bool`. +A bool indicating whether the resulted tensor +needs to be multiplied with gamma. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (result, result_min, result_max). +
+`result` + +A `Tensor` of type `out_type`. +
+`result_min` + +A `Tensor` of type `float32`. +
+`result_max` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedBiasAdd.md b/site/en/api_docs/python/tf/raw_ops/QuantizedBiasAdd.md new file mode 100644 index 00000000000..d778bcd14cd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedBiasAdd.md @@ -0,0 +1,146 @@ +description: Adds Tensor 'bias' to Tensor 'input' for Quantized types. + +
+ + +
+ +# tf.raw_ops.QuantizedBiasAdd + + + + + + + + + +Adds Tensor 'bias' to Tensor 'input' for Quantized types. + + + + + + + + + +Broadcasts the values of bias on dimensions 0..N-2 of 'input'. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`bias` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +A 1D bias Tensor with size matching the last dimension of 'input'. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the lowest quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the highest quantized input value represents. +
+`min_bias` + +A `Tensor` of type `float32`. +The float value that the lowest quantized bias value represents. +
+`max_bias` + +A `Tensor` of type `float32`. +The float value that the highest quantized bias value represents. +
+`out_type` + +A tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_out, max_out). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_out` + +A `Tensor` of type `float32`. +
+`max_out` + +A `Tensor` of type `float32`. +
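+
+A hedged sketch, assuming the stock CPU kernel for quint8 inputs with a qint32 output (values are illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+b = tf.constant([0.5, -0.5])
+
+qx, min_x, max_x = tf.raw_ops.QuantizeV2(input=x, min_range=0.0, max_range=4.0, T=tf.quint8)
+qb, min_b, max_b = tf.raw_ops.QuantizeV2(input=b, min_range=-1.0, max_range=1.0, T=tf.quint8)
+
+out, min_out, max_out = tf.raw_ops.QuantizedBiasAdd(
+    input=qx, bias=qb, min_input=min_x, max_input=max_x,
+    min_bias=min_b, max_bias=max_b, out_type=tf.qint32)
+```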
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConcat.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConcat.md new file mode 100644 index 00000000000..064c25ff337 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConcat.md @@ -0,0 +1,125 @@ +description: Concatenates quantized tensors along one dimension. + +
+ + +
+ +# tf.raw_ops.QuantizedConcat + + + + + + + + + +Concatenates quantized tensors along one dimension. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`concat_dim` + +A `Tensor` of type `int32`. +0-D. The dimension along which to concatenate. Must be in the +range [0, rank(values)). +
+`values` + +A list of at least 2 `Tensor` objects with the same type. +The `N` Tensors to concatenate. Their ranks and types must match, +and their sizes must match in all dimensions except `concat_dim`. +
+`input_mins` + +A list with the same length as `values` of `Tensor` objects with type `float32`. +The minimum scalar values for each of the input tensors. +
+`input_maxes` + +A list with the same length as `values` of `Tensor` objects with type `float32`. +The maximum scalar values for each of the input tensors. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_min, output_max). +
+`output` + +A `Tensor`. Has the same type as `values`. +
+`output_min` + +A `Tensor` of type `float32`. +
+`output_max` + +A `Tensor` of type `float32`. +
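+
+A hedged sketch concatenating two quint8 tensors along axis 0, assuming the stock CPU kernel (values are illustrative):
+
+```python
+import tensorflow as tf
+
+a = tf.constant([[1.0, 2.0]])
+b = tf.constant([[3.0, 4.0]])
+
+qa, a_min, a_max = tf.raw_ops.QuantizeV2(input=a, min_range=0.0, max_range=4.0, T=tf.quint8)
+qb, b_min, b_max = tf.raw_ops.QuantizeV2(input=b, min_range=0.0, max_range=4.0, T=tf.quint8)
+
+out, out_min, out_max = tf.raw_ops.QuantizedConcat(
+    concat_dim=0, values=[qa, qb],
+    input_mins=[a_min, b_min], input_maxes=[a_max, b_max])
+```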
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2D.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2D.md new file mode 100644 index 00000000000..1dc947d352e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2D.md @@ -0,0 +1,179 @@ +description: Computes a 2D convolution given quantized 4D input and filter tensors. + +
+ + +
+ +# tf.raw_ops.QuantizedConv2D + + + + + + + + + +Computes a 2D convolution given quantized 4D input and filter tensors. + + + + + + + + + +The inputs are quantized tensors where the lowest value represents the real +number of the associated minimum, and the highest represents the maximum. +This means that you can only interpret the quantized output in the same way, by +taking the returned minimum and maximum values into account. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +filter's input_depth dimension must match input's depth dimensions. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the lowest quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the highest quantized input value represents. +
+`min_filter` + +A `Tensor` of type `float32`. +The float value that the lowest quantized filter value represents. +
+`max_filter` + +A `Tensor` of type `float32`. +The float value that the highest quantized filter value represents. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +tensor. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +1-D tensor of length 4. The dilation factor for each dimension of +`input`. If set to k > 1, there will be k-1 skipped cells between each +filter element on that dimension. The dimension order is determined by the +value of `data_format`, see above for details. Dilations in the batch and +depth dimensions must be 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
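+
+A hedged sketch of a single-channel convolution on quint8 buffers, assuming the stock CPU kernel, which accumulates into qint32 by default (shapes and values are illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.reshape(tf.range(9, dtype=tf.float32), [1, 3, 3, 1])   # NHWC input
+w = tf.ones([2, 2, 1, 1])                                     # HWIO filter
+
+qx, x_min, x_max = tf.raw_ops.QuantizeV2(input=x, min_range=0.0, max_range=8.0, T=tf.quint8)
+qw, w_min, w_max = tf.raw_ops.QuantizeV2(input=w, min_range=0.0, max_range=1.0, T=tf.quint8)
+
+out, out_min, out_max = tf.raw_ops.QuantizedConv2D(
+    input=qx, filter=qw,
+    min_input=x_min, max_input=x_max, min_filter=w_min, max_filter=w_max,
+    strides=[1, 1, 1, 1], padding="VALID")
+
+# out_min / out_max give the float interpretation of the qint32 output buffer.
+```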
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DAndRelu.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DAndRelu.md new file mode 100644 index 00000000000..17880a1bb97 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DAndRelu.md @@ -0,0 +1,167 @@ +
+ + +
+ +# tf.raw_ops.QuantizedConv2DAndRelu + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_input` + +A `Tensor` of type `float32`. +
+`max_input` + +A `Tensor` of type `float32`. +
+`min_filter` + +A `Tensor` of type `float32`. +
+`max_filter` + +A `Tensor` of type `float32`. +
+`strides` + +A list of `ints`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DAndReluAndRequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DAndReluAndRequantize.md new file mode 100644 index 00000000000..85e1c5f7315 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DAndReluAndRequantize.md @@ -0,0 +1,182 @@ +
+ + +
+ +# tf.raw_ops.QuantizedConv2DAndReluAndRequantize + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_input` + +A `Tensor` of type `float32`. +
+`max_input` + +A `Tensor` of type `float32`. +
+`min_filter` + +A `Tensor` of type `float32`. +
+`max_filter` + +A `Tensor` of type `float32`. +
+`min_freezed_output` + +A `Tensor` of type `float32`. +
+`max_freezed_output` + +A `Tensor` of type `float32`. +
+`strides` + +A list of `ints`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DAndRequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DAndRequantize.md new file mode 100644 index 00000000000..a00d5875927 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DAndRequantize.md @@ -0,0 +1,182 @@ +
+ + +
+ +# tf.raw_ops.QuantizedConv2DAndRequantize + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_input` + +A `Tensor` of type `float32`. +
+`max_input` + +A `Tensor` of type `float32`. +
+`min_filter` + +A `Tensor` of type `float32`. +
+`max_filter` + +A `Tensor` of type `float32`. +
+`min_freezed_output` + +A `Tensor` of type `float32`. +
+`max_freezed_output` + +A `Tensor` of type `float32`. +
+`strides` + +A list of `ints`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint8. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DPerChannel.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DPerChannel.md new file mode 100644 index 00000000000..456de0d31d6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DPerChannel.md @@ -0,0 +1,170 @@ +description: Computes QuantizedConv2D per channel. + +
+ + +
+ +# tf.raw_ops.QuantizedConv2DPerChannel + + + + + + + + + +Computes QuantizedConv2D per channel. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original input tensor. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original filter tensor. +
+`min_input` + +A `Tensor` of type `float32`. +The minimum value of the input tensor +
+`max_input` + +A `Tensor` of type `float32`. +The maximum value of the input tensor. +
+`min_filter` + +A `Tensor` of type `float32`. +The minimum value of the filter tensor. +
+`max_filter` + +A `Tensor` of type `float32`. +The maximum value of the filter tensor. +
+`strides` + +A list of `ints`. list of stride values. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +The quantized type of output tensor that needs to be converted. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +list of dilation values. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBias.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBias.md new file mode 100644 index 00000000000..322141c7728 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBias.md @@ -0,0 +1,175 @@ +
+ + +
+ +# tf.raw_ops.QuantizedConv2DWithBias + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`bias` + +A `Tensor` of type `float32`. +
+`min_input` + +A `Tensor` of type `float32`. +
+`max_input` + +A `Tensor` of type `float32`. +
+`min_filter` + +A `Tensor` of type `float32`. +
+`max_filter` + +A `Tensor` of type `float32`. +
+`strides` + +A list of `ints`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasAndRelu.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasAndRelu.md new file mode 100644 index 00000000000..a1b97b9c8f6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasAndRelu.md @@ -0,0 +1,175 @@ +
+ + +
+ +# tf.raw_ops.QuantizedConv2DWithBiasAndRelu + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`bias` + +A `Tensor` of type `float32`. +
+`min_input` + +A `Tensor` of type `float32`. +
+`max_input` + +A `Tensor` of type `float32`. +
+`min_filter` + +A `Tensor` of type `float32`. +
+`max_filter` + +A `Tensor` of type `float32`. +
+`strides` + +A list of `ints`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasAndReluAndRequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasAndReluAndRequantize.md new file mode 100644 index 00000000000..8a55fac9c08 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasAndReluAndRequantize.md @@ -0,0 +1,189 @@ +
+ + +
+ +# tf.raw_ops.QuantizedConv2DWithBiasAndReluAndRequantize + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`bias` + +A `Tensor`. Must be one of the following types: `float32`, `qint32`. +
+`min_input` + +A `Tensor` of type `float32`. +
+`max_input` + +A `Tensor` of type `float32`. +
+`min_filter` + +A `Tensor` of type `float32`. +
+`max_filter` + +A `Tensor` of type `float32`. +
+`min_freezed_output` + +A `Tensor` of type `float32`. +
+`max_freezed_output` + +A `Tensor` of type `float32`. +
+`strides` + +A list of `ints`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasAndRequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasAndRequantize.md new file mode 100644 index 00000000000..634626ddf94 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasAndRequantize.md @@ -0,0 +1,189 @@ +
+ + +
+ +# tf.raw_ops.QuantizedConv2DWithBiasAndRequantize + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`bias` + +A `Tensor`. Must be one of the following types: `float32`, `qint32`. +
+`min_input` + +A `Tensor` of type `float32`. +
+`max_input` + +A `Tensor` of type `float32`. +
+`min_filter` + +A `Tensor` of type `float32`. +
+`max_filter` + +A `Tensor` of type `float32`. +
+`min_freezed_output` + +A `Tensor` of type `float32`. +
+`max_freezed_output` + +A `Tensor` of type `float32`. +
+`strides` + +A list of `ints`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint8. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasSignedSumAndReluAndRequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasSignedSumAndReluAndRequantize.md new file mode 100644 index 00000000000..d6e3a5d1b70 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasSignedSumAndReluAndRequantize.md @@ -0,0 +1,211 @@ +
+ + +
+ +# tf.raw_ops.QuantizedConv2DWithBiasSignedSumAndReluAndRequantize + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`bias` + +A `Tensor`. Must be one of the following types: `float32`, `qint32`. +
+`min_input` + +A `Tensor` of type `float32`. +
+`max_input` + +A `Tensor` of type `float32`. +
+`min_filter` + +A `Tensor` of type `float32`. +
+`max_filter` + +A `Tensor` of type `float32`. +
+`min_freezed_output` + +A `Tensor` of type `float32`. +
+`max_freezed_output` + +A `Tensor` of type `float32`. +
+`summand` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_summand` + +A `Tensor` of type `float32`. +
+`max_summand` + +A `Tensor` of type `float32`. +
+`strides` + +A list of `ints`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasSumAndRelu.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasSumAndRelu.md new file mode 100644 index 00000000000..cf165273752 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasSumAndRelu.md @@ -0,0 +1,182 @@ +
+ + +
+ +# tf.raw_ops.QuantizedConv2DWithBiasSumAndRelu + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`bias` + +A `Tensor` of type `float32`. +
+`min_input` + +A `Tensor` of type `float32`. +
+`max_input` + +A `Tensor` of type `float32`. +
+`min_filter` + +A `Tensor` of type `float32`. +
+`max_filter` + +A `Tensor` of type `float32`. +
+`summand` + +A `Tensor` of type `float32`. +
+`strides` + +A list of `ints`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasSumAndReluAndRequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasSumAndReluAndRequantize.md new file mode 100644 index 00000000000..2ce953e062e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedConv2DWithBiasSumAndReluAndRequantize.md @@ -0,0 +1,211 @@ +
+ + +
+ +# tf.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`bias` + +A `Tensor`. Must be one of the following types: `float32`, `qint32`. +
+`min_input` + +A `Tensor` of type `float32`. +
+`max_input` + +A `Tensor` of type `float32`. +
+`min_filter` + +A `Tensor` of type `float32`. +
+`max_filter` + +A `Tensor` of type `float32`. +
+`min_freezed_output` + +A `Tensor` of type `float32`. +
+`max_freezed_output` + +A `Tensor` of type `float32`. +
+`summand` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_summand` + +A `Tensor` of type `float32`. +
+`max_summand` + +A `Tensor` of type `float32`. +
+`strides` + +A list of `ints`. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2D.md b/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2D.md new file mode 100644 index 00000000000..82cc9c7799e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2D.md @@ -0,0 +1,170 @@ +description: Computes quantized depthwise Conv2D. + +
+ + +
+ +# tf.raw_ops.QuantizedDepthwiseConv2D + + + + + + + + + +Computes quantized depthwise Conv2D. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original input tensor. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original filter tensor. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the minimum quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the maximum quantized input value represents. +
+`min_filter` + +A `Tensor` of type `float32`. +The float value that the minimum quantized filter value represents. +
+`max_filter` + +A `Tensor` of type `float32`. +The float value that the maximum quantized filter value represents. +
+`strides` + +A list of `ints`. List of stride values. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +The type of the output. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +List of dilation values. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2DWithBias.md b/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2DWithBias.md new file mode 100644 index 00000000000..ccdaa553a84 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2DWithBias.md @@ -0,0 +1,177 @@ +description: Computes quantized depthwise Conv2D with Bias. + +
+ + +
+ +# tf.raw_ops.QuantizedDepthwiseConv2DWithBias + + + + + + + + + +Computes quantized depthwise Conv2D with Bias. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original input tensor. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original filter tensor. +
+`bias` + +A `Tensor` of type `float32`. The original bias tensor. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the minimum quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the maximum quantized input value represents. +
+`min_filter` + +A `Tensor` of type `float32`. +The float value that the minimum quantized filter value represents. +
+`max_filter` + +A `Tensor` of type `float32`. +The float value that the maximum quantized filter value represents. +
+`strides` + +A list of `ints`. List of stride values. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +The type of the output. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +List of dilation values. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndRelu.md b/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndRelu.md new file mode 100644 index 00000000000..f4f14f68173 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndRelu.md @@ -0,0 +1,185 @@ +description: Computes quantized depthwise Conv2D with Bias and Relu. + +
+ + +
+ +# tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndRelu + + + + + + + + + +Computes quantized depthwise Conv2D with Bias and Relu. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original input tensor. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original filter tensor. +
+`bias` + +A `Tensor` of type `float32`. The original bias tensor. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the minimum quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the maximum quantized input value represents. +
+`min_filter` + +A `Tensor` of type `float32`. +The float value that the minimum quantized filter value represents. +
+`max_filter` + +A `Tensor` of type `float32`. +The float value that the maximum quantized filter value represents. +
+`strides` + +A list of `ints`. List of stride values. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +The type of the output. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +List of dilation values. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize.md new file mode 100644 index 00000000000..e8e4608171d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize.md @@ -0,0 +1,202 @@ +description: Computes quantized depthwise Conv2D with Bias, Relu and Requantize. + +
+ + +
+ +# tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize + + + + + + + + + +Computes quantized depthwise Conv2D with Bias, Relu and Requantize. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original input tensor. +
+`filter` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original filter tensor. +
+`bias` + +A `Tensor`. Must be one of the following types: `float32`, `qint32`. +The original bias tensor. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the minimum quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the maximum quantized input value represents. +
+`min_filter` + +A `Tensor` of type `float32`. +The float value that the minimum quantized filter value represents. +
+`max_filter` + +A `Tensor` of type `float32`. +The float value that the maximum quantized filter value represents. +
+`min_freezed_output` + +A `Tensor` of type `float32`. +The minimum float value of the output tensor. +
+`max_freezed_output` + +A `Tensor` of type `float32`. +The maximum float value of the output tensor. +
+`strides` + +A list of `ints`. List of stride values. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +The type of the output. +
+`dilations` + +An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. +List of dilation values. +
+`padding_list` + +An optional list of `ints`. Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor` of type `out_type`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedInstanceNorm.md b/site/en/api_docs/python/tf/raw_ops/QuantizedInstanceNorm.md new file mode 100644 index 00000000000..b7fef82a751 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedInstanceNorm.md @@ -0,0 +1,158 @@ +description: Quantized Instance normalization. + +
+ + +
+ +# tf.raw_ops.QuantizedInstanceNorm + + + + + + + + + +Quantized Instance normalization. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +A 4D input Tensor. +
+`x_min` + +A `Tensor` of type `float32`. +The value represented by the lowest quantized input. +
+`x_max` + +A `Tensor` of type `float32`. +The value represented by the highest quantized input. +
+`output_range_given` + +An optional `bool`. Defaults to `False`. +If True, `given_y_min` and `given_y_max` are used as the output range. Otherwise, +the implementation computes the output range. + 
+`given_y_min` + +An optional `float`. Defaults to `0`. +Output in `y_min` if `output_range_given` is True. +
+`given_y_max` + +An optional `float`. Defaults to `0`. +Output in `y_max` if `output_range_given` is True. +
+`variance_epsilon` + +An optional `float`. Defaults to `1e-05`. +A small float number to avoid dividing by 0. +
+`min_separation` + +An optional `float`. Defaults to `0.001`. +Minimum value of `y_max - y_min` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (y, y_min, y_max). +
+`y` + +A `Tensor`. Has the same type as `x`. +
+`y_min` + +A `Tensor` of type `float32`. +
+`y_max` + +A `Tensor` of type `float32`. +
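+As a brief editorial illustration (not part of the generated reference), the snippet below quantizes a small NHWC tensor to `quint8` with `tf.quantization.quantize` and lets the op compute the output range itself; the dtype and shapes are arbitrary example choices.
+
+```python
+import tensorflow as tf
+
+x = tf.random.uniform([1, 4, 4, 3])  # 4-D input tensor
+qx, x_min, x_max = tf.quantization.quantize(x, 0.0, 1.0, tf.quint8)
+
+# output_range_given defaults to False, so y_min / y_max are computed by the op.
+y, y_min, y_max = tf.raw_ops.QuantizedInstanceNorm(
+    x=qx, x_min=x_min, x_max=x_max)
+```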
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedMatMul.md b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMul.md new file mode 100644 index 00000000000..d5b574fb0e8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMul.md @@ -0,0 +1,176 @@ +description: Perform a quantized matrix multiplication of a by the matrix b. + +
+ + +
+ +# tf.raw_ops.QuantizedMatMul + + + + + + + + + +Perform a quantized matrix multiplication of `a` by the matrix `b`. + + + + + + + + + +The inputs must be two-dimensional matrices and the inner dimension of +`a` (after being transposed if `transpose_a` is non-zero) must match the +outer dimension of `b` (after being transposed if `transpose_b` is +non-zero). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+`a` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +Must be a two-dimensional tensor. +
+`b` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +Must be a two-dimensional tensor. +
+`min_a` + +A `Tensor` of type `float32`. +The float value that the lowest quantized `a` value represents. +
+`max_a` + +A `Tensor` of type `float32`. +The float value that the highest quantized `a` value represents. +
+`min_b` + +A `Tensor` of type `float32`. +The float value that the lowest quantized `b` value represents. +
+`max_b` + +A `Tensor` of type `float32`. +The float value that the highest quantized `b` value represents. +
+`Toutput` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +
+`transpose_a` + +An optional `bool`. Defaults to `False`. +If true, `a` is transposed before multiplication. +
+`transpose_b` + +An optional `bool`. Defaults to `False`. +If true, `b` is transposed before multiplication. +
+`Tactivation` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +The type of output produced by activation function +following this operation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (out, min_out, max_out). +
+`out` + +A `Tensor` of type `Toutput`. +
+`min_out` + +A `Tensor` of type `float32`. +
+`max_out` + +A `Tensor` of type `float32`. +
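+For orientation, here is a minimal editorial sketch: two small float matrices are quantized to `quint8` (an example choice) and multiplied with the raw op, keeping the default `Toutput` of `tf.qint32`; `tf.quantization.dequantize` is used only to inspect the result.
+
+```python
+import tensorflow as tf
+
+a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
+qa, a_min, a_max = tf.quantization.quantize(a, 0.0, 8.0, tf.quint8)
+qb, b_min, b_max = tf.quantization.quantize(b, 0.0, 8.0, tf.quint8)
+
+out, out_min, out_max = tf.raw_ops.QuantizedMatMul(
+    a=qa, b=qb, min_a=a_min, max_a=a_max, min_b=b_min, max_b=b_max)
+print(tf.quantization.dequantize(out, out_min, out_max))  # ~ tf.matmul(a, b)
+```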
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBias.md b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBias.md new file mode 100644 index 00000000000..4add448ddb7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBias.md @@ -0,0 +1,81 @@ +description: Performs a quantized matrix multiplication of a by the matrix b with bias + +
+ + +
+ +# tf.raw_ops.QuantizedMatMulWithBias + + + + + + + + + +Performs a quantized matrix multiplication of `a` by the matrix `b` with bias + + + + + + + + +add. + + The inputs must be two-dimensional matrices and a 1D bias vector, and the inner + dimension of `a` (after being transposed if `transpose_a` is non-zero) must + match the outer dimension of `b` (after being transposed if `transpose_b` is + non-zero). A broadcast add with the bias values is then applied to the matrix + multiplication result. The bias size must match the inner dimension of `b`. + + Args: + a: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. + A matrix to be multiplied. Must be a two-dimensional tensor of type `quint8`. + b: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. + A matrix to be multiplied and must be a two-dimensional tensor of type `qint8`. + bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`. + A 1D bias tensor with size matching the inner dimension of `b` (after being + transposed if `transpose_b` is non-zero). + min_a: A `Tensor` of type `float32`. + The float value that the lowest quantized `a` value represents. + max_a: A `Tensor` of type `float32`. + The float value that the highest quantized `a` value represents. + min_b: A `Tensor` of type `float32`. + The float value that the lowest quantized `b` value represents. + max_b: A `Tensor` of type `float32`. + The float value that the highest quantized `b` value represents. + Toutput: An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. + transpose_a: An optional `bool`. Defaults to `False`. + If true, `a` is transposed before multiplication. + transpose_b: An optional `bool`. Defaults to `False`. + If true, `b` is transposed before multiplication. + input_quant_mode: An optional `string` from: `"MIN_FIRST", "SCALED"`. Defaults to `"MIN_FIRST"`. + Input data quantization mode. Either MIN_FIRST (default) or SCALED. + name: A name for the operation (optional). + + Returns: + A tuple of `Tensor` objects (out, min_out, max_out). + + out: A `Tensor` of type `Toutput`. + min_out: A `Tensor` of type `float32`. + max_out: A `Tensor` of type `float32`. + \ No newline at end of file diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndDequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndDequantize.md new file mode 100644 index 00000000000..508e4f958b4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndDequantize.md @@ -0,0 +1,161 @@ +
+ + +
+ +# tf.raw_ops.QuantizedMatMulWithBiasAndDequantize + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`b` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`bias` + +A `Tensor`. Must be one of the following types: `float32`, `qint32`. +
+`min_a` + +A `Tensor` of type `float32`. +
+`max_a` + +A `Tensor` of type `float32`. +
+`min_b` + +A `Tensor` of type `float32`. +
+`max_b` + +A `Tensor` of type `float32`. +
+`min_freezed_output` + +A `Tensor` of type `float32`. +
+`max_freezed_output` + +A `Tensor` of type `float32`. +
+`Toutput` + +A tf.DType from: tf.float32. +
+`transpose_a` + +An optional `bool`. Defaults to `False`. +
+`transpose_b` + +An optional `bool`. Defaults to `False`. +
+`input_quant_mode` + +An optional `string` from: `"MIN_FIRST", "SCALED"`. Defaults to `"MIN_FIRST"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Toutput`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndRelu.md b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndRelu.md new file mode 100644 index 00000000000..d9c40a48f33 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndRelu.md @@ -0,0 +1,82 @@ +description: Perform a quantized matrix multiplication of a by the matrix b with bias + +
+ + +
+ +# tf.raw_ops.QuantizedMatMulWithBiasAndRelu + + + + + + + + + +Perform a quantized matrix multiplication of `a` by the matrix `b` with bias + + + + + + + + +add and relu fusion. + + The inputs must be two-dimensional matrices and a 1D bias vector, and the inner + dimension of `a` (after being transposed if `transpose_a` is non-zero) must + match the outer dimension of `b` (after being transposed if `transpose_b` is + non-zero). A broadcast add with the bias values is then applied to the matrix + multiplication result. The bias size must match the inner dimension of `b`. A + relu activation is then applied to produce a non-negative result. + + Args: + a: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. + A matrix to be multiplied. Must be a two-dimensional tensor of type `quint8`. + b: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. + A matrix to be multiplied and must be a two-dimensional tensor of type `qint8`. + bias: A `Tensor` of type `float32`. + A 1D bias tensor with size matching the inner dimension of `b` (after being + transposed if `transpose_b` is non-zero). + min_a: A `Tensor` of type `float32`. + The float value that the lowest quantized `a` value represents. + max_a: A `Tensor` of type `float32`. + The float value that the highest quantized `a` value represents. + min_b: A `Tensor` of type `float32`. + The float value that the lowest quantized `b` value represents. + max_b: A `Tensor` of type `float32`. + The float value that the highest quantized `b` value represents. + Toutput: An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. + transpose_a: An optional `bool`. Defaults to `False`. + If true, `a` is transposed before multiplication. + transpose_b: An optional `bool`. Defaults to `False`. + If true, `b` is transposed before multiplication. + input_quant_mode: An optional `string` from: `"MIN_FIRST", "SCALED"`. Defaults to `"MIN_FIRST"`. + Input data quantization mode. Either MIN_FIRST (default) or SCALED. + name: A name for the operation (optional). + + Returns: + A tuple of `Tensor` objects (out, min_out, max_out). + + out: A `Tensor` of type `Toutput`. + min_out: A `Tensor` of type `float32`. + max_out: A `Tensor` of type `float32`. + \ No newline at end of file diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndReluAndRequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndReluAndRequantize.md new file mode 100644 index 00000000000..770b3d35f1c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndReluAndRequantize.md @@ -0,0 +1,86 @@ +
+ + +
+ +# tf.raw_ops.QuantizedMatMulWithBiasAndReluAndRequantize + + + + + + + + + +Perform a quantized matrix multiplication of `a` by the matrix `b` with bias + + + + + + + + +add and relu and requantize fusion. + + The inputs must be two-dimensional matrices and a 1D bias vector, and the inner + dimension of `a` (after being transposed if `transpose_a` is non-zero) must + match the outer dimension of `b` (after being transposed if `transpose_b` is + non-zero). A broadcast add with the bias values is then applied to the matrix + multiplication result, followed by a relu activation to produce a non-negative + result and a requantize step to produce the final uint8 result. + + Args: + a: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. + A matrix to be multiplied. Must be a two-dimensional tensor of type `quint8`. + b: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. + A matrix to be multiplied and must be a two-dimensional tensor of type `qint8`. + bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`. + A 1D bias tensor with size matching the inner dimension of `b` (after being + transposed if `transpose_b` is non-zero). + min_a: A `Tensor` of type `float32`. + The float value that the lowest quantized `a` value represents. + max_a: A `Tensor` of type `float32`. + The float value that the highest quantized `a` value represents. + min_b: A `Tensor` of type `float32`. + The float value that the lowest quantized `b` value represents. + max_b: A `Tensor` of type `float32`. + The float value that the highest quantized `b` value represents. + min_freezed_output: A `Tensor` of type `float32`. + The float value that the lowest quantized output value after requantize represents. + max_freezed_output: A `Tensor` of type `float32`. + The float value that the highest quantized output value after requantize represents. + Toutput: An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. + transpose_a: An optional `bool`. Defaults to `False`. + If true, `a` is transposed before multiplication. + transpose_b: An optional `bool`. Defaults to `False`. + If true, `b` is transposed before multiplication. + input_quant_mode: An optional `string` from: `"MIN_FIRST", "SCALED"`. Defaults to `"MIN_FIRST"`. + Input data quantization mode. Either MIN_FIRST (default) or SCALED. + name: A name for the operation (optional). + + Returns: + A tuple of `Tensor` objects (out, min_out, max_out). + + out: A `Tensor` of type `Toutput`. + min_out: A `Tensor` of type `float32`. + max_out: A `Tensor` of type `float32`. + \ No newline at end of file diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndRequantize.md b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndRequantize.md new file mode 100644 index 00000000000..1a1617b1f18 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedMatMulWithBiasAndRequantize.md @@ -0,0 +1,182 @@ +
+ + +
+ +# tf.raw_ops.QuantizedMatMulWithBiasAndRequantize + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`b` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`bias` + +A `Tensor`. Must be one of the following types: `float32`, `qint32`. +
+`min_a` + +A `Tensor` of type `float32`. +
+`max_a` + +A `Tensor` of type `float32`. +
+`min_b` + +A `Tensor` of type `float32`. +
+`max_b` + +A `Tensor` of type `float32`. +
+`min_freezed_output` + +A `Tensor` of type `float32`. +
+`max_freezed_output` + +A `Tensor` of type `float32`. +
+`Toutput` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +
+`transpose_a` + +An optional `bool`. Defaults to `False`. +
+`transpose_b` + +An optional `bool`. Defaults to `False`. +
+`input_quant_mode` + +An optional `string` from: `"MIN_FIRST", "SCALED"`. Defaults to `"MIN_FIRST"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (out, min_out, max_out). +
+`out` + +A `Tensor` of type `Toutput`. +
+`min_out` + +A `Tensor` of type `float32`. +
+`max_out` + +A `Tensor` of type `float32`. +
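+A hedged sketch for this fused variant follows as an editorial addition. It assumes a build where the fused quantized matmul kernels are registered (commonly oneDNN-enabled builds) and uses `quint8` `a`, `qint8` `b`, and `float32` bias purely as example dtypes; the frozen output range stands in for values that would normally come from calibration.
+
+```python
+import tensorflow as tf
+
+a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+b = tf.constant([[0.5, -0.5], [0.25, 0.75]])
+qa, a_min, a_max = tf.quantization.quantize(a, 0.0, 4.0, tf.quint8)
+qb, b_min, b_max = tf.quantization.quantize(b, -1.0, 1.0, tf.qint8)
+bias = tf.constant([0.1, 0.2])
+
+out, out_min, out_max = tf.raw_ops.QuantizedMatMulWithBiasAndRequantize(
+    a=qa, b=qb, bias=bias,
+    min_a=a_min, max_a=a_max, min_b=b_min, max_b=b_max,
+    min_freezed_output=tf.constant(0.0),  # pre-calibrated ("frozen") output
+    max_freezed_output=tf.constant(6.0))  # range used for requantization
+```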
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedMaxPool.md b/site/en/api_docs/python/tf/raw_ops/QuantizedMaxPool.md new file mode 100644 index 00000000000..627b446f0fe --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedMaxPool.md @@ -0,0 +1,141 @@ +description: Produces the max pool of the input tensor for quantized types. + +
+ + +
+ +# tf.raw_ops.QuantizedMaxPool + + + + + + + + + +Produces the max pool of the input tensor for quantized types. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The 4D (batch x rows x cols x depth) Tensor to MaxReduce over. +
+`min_input` + +A `Tensor` of type `float32`. +The float value that the lowest quantized input value represents. +
+`max_input` + +A `Tensor` of type `float32`. +The float value that the highest quantized input value represents. +
+`ksize` + +A list of `ints`. +The size of the window for each dimension of the input tensor. +The length must be 4 to match the number of dimensions of the input. +
+`strides` + +A list of `ints`. +The stride of the sliding window for each dimension of the input +tensor. The length must be 4 to match the number of dimensions of the input. +
+`padding` + +A `string` from: `"SAME", "VALID"`. +The type of padding algorithm to use. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, min_output, max_output). +
+`output` + +A `Tensor`. Has the same type as `input`. +
+`min_output` + +A `Tensor` of type `float32`. +
+`max_output` + +A `Tensor` of type `float32`. +
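+To make the argument layout concrete, here is a short editorial sketch: a 4-D tensor is quantized to `quint8` (an example choice) and max-pooled with a 2x2 window; `ksize` and `strides` are length-4 lists matching the NHWC dimensions, and `tf.quantization.dequantize` is used only to inspect the result.
+
+```python
+import tensorflow as tf
+
+img = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
+q_img, q_min, q_max = tf.quantization.quantize(img, 0.0, 16.0, tf.quint8)
+
+out, out_min, out_max = tf.raw_ops.QuantizedMaxPool(
+    input=q_img, min_input=q_min, max_input=q_max,
+    ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
+print(tf.quantization.dequantize(out, out_min, out_max)[0, :, :, 0])
+```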
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedMul.md b/site/en/api_docs/python/tf/raw_ops/QuantizedMul.md new file mode 100644 index 00000000000..2095a05fd6e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedMul.md @@ -0,0 +1,144 @@ +description: Returns x * y element-wise, working on quantized buffers. + +
+ + +
+ +# tf.raw_ops.QuantizedMul + + + + + + + + + +Returns x * y element-wise, working on quantized buffers. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`y` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_x` + +A `Tensor` of type `float32`. +The float value that the lowest quantized `x` value represents. +
+`max_x` + +A `Tensor` of type `float32`. +The float value that the highest quantized `x` value represents. +
+`min_y` + +A `Tensor` of type `float32`. +The float value that the lowest quantized `y` value represents. +
+`max_y` + +A `Tensor` of type `float32`. +The float value that the highest quantized `y` value represents. +
+`Toutput` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.qint32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (z, min_z, max_z). +
+`z` + +A `Tensor` of type `Toutput`. +
+`min_z` + +A `Tensor` of type `float32`. +
+`max_z` + +A `Tensor` of type `float32`. +
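+A compact editorial sketch: both operands are quantized to `quint8` (an example choice), the default `Toutput` of `tf.qint32` is kept, and the result is dequantized only for inspection.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1.0, 2.0, 3.0])
+y = tf.constant([4.0, 5.0, 6.0])
+qx, x_min, x_max = tf.quantization.quantize(x, 0.0, 8.0, tf.quint8)
+qy, y_min, y_max = tf.quantization.quantize(y, 0.0, 8.0, tf.quint8)
+
+z, z_min, z_max = tf.raw_ops.QuantizedMul(
+    x=qx, y=qy, min_x=x_min, max_x=x_max, min_y=y_min, max_y=y_max)
+print(tf.quantization.dequantize(z, z_min, z_max))  # ~ [4., 10., 18.]
+```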
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedRelu.md b/site/en/api_docs/python/tf/raw_ops/QuantizedRelu.md new file mode 100644 index 00000000000..8a55f119660 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedRelu.md @@ -0,0 +1,121 @@ +description: Computes Quantized Rectified Linear: max(features, 0) + +
+ + +
+ +# tf.raw_ops.QuantizedRelu + + + + + + + + + +Computes Quantized Rectified Linear: `max(features, 0)` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_features` + +A `Tensor` of type `float32`. +The float value that the lowest quantized value represents. +
+`max_features` + +A `Tensor` of type `float32`. +The float value that the highest quantized value represents. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (activations, min_activations, max_activations). +
+`activations` + +A `Tensor` of type `out_type`. +
+`min_activations` + +A `Tensor` of type `float32`. +
+`max_activations` + +A `Tensor` of type `float32`. +
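+As a quick editorial illustration, the snippet below quantizes a small tensor to `quint8` (an example dtype choice) and applies the quantized ReLU; `tf.raw_ops.QuantizedRelu6` and `tf.raw_ops.QuantizedReluX` follow the same calling pattern, with `QuantizedReluX` taking the extra `max_value` input.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[-2.0, -0.5], [0.5, 2.0]])
+qx, x_min, x_max = tf.quantization.quantize(x, -3.0, 3.0, tf.quint8)
+
+act, act_min, act_max = tf.raw_ops.QuantizedRelu(
+    features=qx, min_features=x_min, max_features=x_max, out_type=tf.quint8)
+print(tf.quantization.dequantize(act, act_min, act_max))  # negatives -> ~0
+```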
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedRelu6.md b/site/en/api_docs/python/tf/raw_ops/QuantizedRelu6.md new file mode 100644 index 00000000000..cd728508296 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedRelu6.md @@ -0,0 +1,121 @@ +description: Computes Quantized Rectified Linear 6: min(max(features, 0), 6) + +
+ + +
+ +# tf.raw_ops.QuantizedRelu6 + + + + + + + + + +Computes Quantized Rectified Linear 6: `min(max(features, 0), 6)` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`min_features` + +A `Tensor` of type `float32`. +The float value that the lowest quantized value represents. +
+`max_features` + +A `Tensor` of type `float32`. +The float value that the highest quantized value represents. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (activations, min_activations, max_activations). +
+`activations` + +A `Tensor` of type `out_type`. +
+`min_activations` + +A `Tensor` of type `float32`. +
+`max_activations` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedReluX.md b/site/en/api_docs/python/tf/raw_ops/QuantizedReluX.md new file mode 100644 index 00000000000..96a93c43caf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedReluX.md @@ -0,0 +1,129 @@ +description: Computes Quantized Rectified Linear X: min(max(features, 0), max_value) + +
+ + +
+ +# tf.raw_ops.QuantizedReluX + + + + + + + + + +Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`max_value` + +A `Tensor` of type `float32`. +
+`min_features` + +A `Tensor` of type `float32`. +The float value that the lowest quantized value represents. +
+`max_features` + +A `Tensor` of type `float32`. +The float value that the highest quantized value represents. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (activations, min_activations, max_activations). +
+`activations` + +A `Tensor` of type `out_type`. +
+`min_activations` + +A `Tensor` of type `float32`. +
+`max_activations` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedReshape.md b/site/en/api_docs/python/tf/raw_ops/QuantizedReshape.md new file mode 100644 index 00000000000..17eda66a4d3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedReshape.md @@ -0,0 +1,121 @@ +description: Reshapes a quantized tensor as per the Reshape op. + +
+ + +
+ +# tf.raw_ops.QuantizedReshape + + + + + + + + + +Reshapes a quantized tensor as per the Reshape op. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+`tensor` + +A `Tensor`. +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Defines the shape of the output tensor. +
+`input_min` + +A `Tensor` of type `float32`. The minimum value of the input. +
+`input_max` + +A `Tensor` of type `float32`. The maximum value of the input. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_min, output_max). +
+`output` + +A `Tensor`. Has the same type as `tensor`. +
+`output_min` + +A `Tensor` of type `float32`. +
+`output_max` + +A `Tensor` of type `float32`. +
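+A short editorial sketch: the quantized buffer is reshaped while its float range passes through unchanged (the `quint8` dtype is just an example choice).
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
+qx, x_min, x_max = tf.quantization.quantize(x, 0.0, 6.0, tf.quint8)
+
+out, out_min, out_max = tf.raw_ops.QuantizedReshape(
+    tensor=qx, shape=[3, 2], input_min=x_min, input_max=x_max)
+print(out.shape, float(out_min), float(out_max))  # (3, 2), same range as input
+```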
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QuantizedResizeBilinear.md b/site/en/api_docs/python/tf/raw_ops/QuantizedResizeBilinear.md new file mode 100644 index 00000000000..d9652d00190 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QuantizedResizeBilinear.md @@ -0,0 +1,139 @@ +description: Resize quantized images to size using quantized bilinear interpolation. + +
+ + +
+ +# tf.raw_ops.QuantizedResizeBilinear + + + + + + + + + +Resize quantized `images` to `size` using quantized bilinear interpolation. + + + + + + + + + +Input images and output images must be quantized types. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `quint8`, `qint32`, `float32`. +4-D with shape `[batch, height, width, channels]`. +
+`size` + +A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The +new size for the images. +
+`min` + +A `Tensor` of type `float32`. +
+`max` + +A `Tensor` of type `float32`. +
+`align_corners` + +An optional `bool`. Defaults to `False`. +If true, the centers of the 4 corner pixels of the input and output tensors are +aligned, preserving the values at the corner pixels. Defaults to false. +
+`half_pixel_centers` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (resized_images, out_min, out_max). +
+`resized_images` + +A `Tensor`. Has the same type as `images`. +
+`out_min` + +A `Tensor` of type `float32`. +
+`out_max` + +A `Tensor` of type `float32`. +
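+The following is an editorial sketch using a `quint8` image (one of the documented input types); `size` is passed as a plain 2-element list, which converts to the required 1-D int32 tensor, and the input range is produced with `tf.quantization.quantize`.
+
+```python
+import tensorflow as tf
+
+img = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
+q_img, q_min, q_max = tf.quantization.quantize(img, 0.0, 16.0, tf.quint8)
+
+# Resize the quantized image to 8x8; min/max propagate to the output.
+resized, out_min, out_max = tf.raw_ops.QuantizedResizeBilinear(
+    images=q_img, size=[8, 8], min=q_min, max=q_max)
+```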
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueClose.md b/site/en/api_docs/python/tf/raw_ops/QueueClose.md new file mode 100644 index 00000000000..585c7ad10bc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueClose.md @@ -0,0 +1,91 @@ +description: Closes the given queue. + +
+ + +
+ +# tf.raw_ops.QueueClose + + + + + + + + + +Closes the given queue. + + + + + + + + + +This operation signals that no more elements will be enqueued in the +given queue. Subsequent Enqueue(Many) operations will fail. +Subsequent Dequeue(Many) operations will continue to succeed if +sufficient elements remain in the queue. Subsequent Dequeue(Many) +operations that would block will fail immediately. + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a queue. +
+`cancel_pending_enqueues` + +An optional `bool`. Defaults to `False`. +If true, all pending enqueue requests that are +blocked on the given queue will be canceled. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueCloseV2.md b/site/en/api_docs/python/tf/raw_ops/QueueCloseV2.md new file mode 100644 index 00000000000..51f4f7314cf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueCloseV2.md @@ -0,0 +1,91 @@ +description: Closes the given queue. + +
+ + +
+ +# tf.raw_ops.QueueCloseV2 + + + + + + + + + +Closes the given queue. + + + + + + + + + +This operation signals that no more elements will be enqueued in the +given queue. Subsequent Enqueue(Many) operations will fail. +Subsequent Dequeue(Many) operations will continue to succeed if +sufficient elements remain in the queue. Subsequent Dequeue(Many) +operations that would block will fail immediately. + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a queue. +
+`cancel_pending_enqueues` + +An optional `bool`. Defaults to `False`. +If true, all pending enqueue requests that are +blocked on the given queue will be canceled. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueDequeue.md b/site/en/api_docs/python/tf/raw_ops/QueueDequeue.md new file mode 100644 index 00000000000..b2f6cf3d3bc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueDequeue.md @@ -0,0 +1,101 @@ +description: Dequeues a tuple of one or more tensors from the given queue. + +
+ + +
+ +# tf.raw_ops.QueueDequeue + + + + + + + + + +Dequeues a tuple of one or more tensors from the given queue. + + + + + + + + + +This operation has k outputs, where k is the number of components +in the tuples stored in the given queue, and output i is the ith +component of the dequeued tuple. + +N.B. If the queue is empty, this operation will block until an element +has been dequeued (or 'timeout_ms' elapses, if specified). + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a queue. +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a tuple. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue is empty, this operation will block for up to +timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `component_types`. +
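+The raw queue ops all operate on a queue handle produced by a queue-constructor op; in practice the object-oriented tf.queue.FIFOQueue wrapper, which is built on this family of ops, is the usual entry point. The sketch below is an editorial addition showing the round trip in eager mode with a single-component float queue (an arbitrary example configuration).
+
+```python
+import tensorflow as tf
+
+# tf.queue.FIFOQueue bundles the queue handle with enqueue/dequeue methods.
+q = tf.queue.FIFOQueue(capacity=10, dtypes=[tf.float32], shapes=[[]])
+
+q.enqueue(1.0)                 # one element (QueueEnqueue semantics)
+q.enqueue_many([2.0, 3.0])     # slices along dim 0 into two more elements
+print(q.size().numpy())        # -> 3
+print(q.dequeue().numpy())     # -> 1.0
+q.close()
+print(q.is_closed().numpy())   # -> True
+```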
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueDequeueMany.md b/site/en/api_docs/python/tf/raw_ops/QueueDequeueMany.md new file mode 100644 index 00000000000..d51873cbda8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueDequeueMany.md @@ -0,0 +1,115 @@ +description: Dequeues n tuples of one or more tensors from the given queue. + +
+ + +
+ +# tf.raw_ops.QueueDequeueMany + + + + + + + + + +Dequeues `n` tuples of one or more tensors from the given queue. + + + + + + + + + +If the queue is closed and there are fewer than `n` elements, then an +OutOfRange error is returned. + +This operation concatenates queue-element component tensors along the +0th dimension to make a single component tensor. All of the components +in the dequeued tuple will have size `n` in the 0th dimension. + +This operation has `k` outputs, where `k` is the number of components in +the tuples stored in the given queue, and output `i` is the ith +component of the dequeued tuple. + +N.B. If the queue is empty, this operation will block until `n` elements +have been dequeued (or 'timeout_ms' elapses, if specified). + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a queue. +
+`n` + +A `Tensor` of type `int32`. The number of tuples to dequeue. +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a tuple. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue has fewer than n elements, this operation +will block for up to timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `component_types`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueDequeueManyV2.md b/site/en/api_docs/python/tf/raw_ops/QueueDequeueManyV2.md new file mode 100644 index 00000000000..b651a1cba3e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueDequeueManyV2.md @@ -0,0 +1,115 @@ +description: Dequeues n tuples of one or more tensors from the given queue. + +
+ + +
+ +# tf.raw_ops.QueueDequeueManyV2 + + + + + + + + + +Dequeues `n` tuples of one or more tensors from the given queue. + + + + + + + + + +If the queue is closed and there are fewer than `n` elements, then an +OutOfRange error is returned. + +This operation concatenates queue-element component tensors along the +0th dimension to make a single component tensor. All of the components +in the dequeued tuple will have size `n` in the 0th dimension. + +This operation has `k` outputs, where `k` is the number of components in +the tuples stored in the given queue, and output `i` is the ith +component of the dequeued tuple. + +N.B. If the queue is empty, this operation will block until `n` elements +have been dequeued (or 'timeout_ms' elapses, if specified). + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a queue. +
+`n` + +A `Tensor` of type `int32`. The number of tuples to dequeue. +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a tuple. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue has fewer than n elements, this operation +will block for up to timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `component_types`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueDequeueUpTo.md b/site/en/api_docs/python/tf/raw_ops/QueueDequeueUpTo.md new file mode 100644 index 00000000000..70926898ab2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueDequeueUpTo.md @@ -0,0 +1,119 @@ +description: Dequeues n tuples of one or more tensors from the given queue. + +
+ + +
+ +# tf.raw_ops.QueueDequeueUpTo + + + + + + + + + +Dequeues `n` tuples of one or more tensors from the given queue. + + + + + + + + + +This operation is not supported by all queues. If a queue does not support +DequeueUpTo, then an Unimplemented error is returned. + +If the queue is closed and there are more than 0 but less than `n` +elements remaining, then instead of returning an OutOfRange error like +QueueDequeueMany, less than `n` elements are returned immediately. If +the queue is closed and there are 0 elements left in the queue, then +an OutOfRange error is returned just like in QueueDequeueMany. +Otherwise the behavior is identical to QueueDequeueMany: + +This operation concatenates queue-element component tensors along the +0th dimension to make a single component tensor. All of the components +in the dequeued tuple will have size `n` in the 0th dimension. + +This operation has k outputs, where `k` is the number of components in +the tuples stored in the given queue, and output `i` is the ith +component of the dequeued tuple. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a queue. +
+`n` + +A `Tensor` of type `int32`. The number of tuples to dequeue. +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a tuple. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue has fewer than n elements, this operation +will block for up to timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `component_types`. +
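+To illustrate the closed-queue behaviour described above, here is a small editorial sketch using the `tf.queue.FIFOQueue` wrapper's `dequeue_up_to` method, which corresponds to the DequeueUpTo family of ops; the int32 scalar queue is just an example configuration.
+
+```python
+import tensorflow as tf
+
+q = tf.queue.FIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[]])
+q.enqueue_many(tf.constant([1, 2, 3]))
+q.close()
+
+# The queue is closed with 3 elements left, so asking for 5 returns the
+# remaining 3 instead of raising an OutOfRange error.
+print(q.dequeue_up_to(5).numpy())  # -> [1 2 3]
+```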
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueDequeueUpToV2.md b/site/en/api_docs/python/tf/raw_ops/QueueDequeueUpToV2.md new file mode 100644 index 00000000000..7856892dcd0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueDequeueUpToV2.md @@ -0,0 +1,119 @@ +description: Dequeues n tuples of one or more tensors from the given queue. + +
+ + +
+ +# tf.raw_ops.QueueDequeueUpToV2 + + + + + + + + + +Dequeues `n` tuples of one or more tensors from the given queue. + + + + + + + + + +This operation is not supported by all queues. If a queue does not support +DequeueUpTo, then an Unimplemented error is returned. + +If the queue is closed and there are more than 0 but less than `n` +elements remaining, then instead of returning an OutOfRange error like +QueueDequeueMany, less than `n` elements are returned immediately. If +the queue is closed and there are 0 elements left in the queue, then +an OutOfRange error is returned just like in QueueDequeueMany. +Otherwise the behavior is identical to QueueDequeueMany: + +This operation concatenates queue-element component tensors along the +0th dimension to make a single component tensor. All of the components +in the dequeued tuple will have size n in the 0th dimension. + +This operation has `k` outputs, where `k` is the number of components in +the tuples stored in the given queue, and output `i` is the ith +component of the dequeued tuple. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a queue. +
+`n` + +A `Tensor` of type `int32`. The number of tuples to dequeue. +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a tuple. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue has fewer than n elements, this operation +will block for up to timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `component_types`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueDequeueV2.md b/site/en/api_docs/python/tf/raw_ops/QueueDequeueV2.md new file mode 100644 index 00000000000..a89cd68c8ff --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueDequeueV2.md @@ -0,0 +1,101 @@ +description: Dequeues a tuple of one or more tensors from the given queue. + +
+ + +
+ +# tf.raw_ops.QueueDequeueV2 + + + + + + + + + +Dequeues a tuple of one or more tensors from the given queue. + + + + + + + + + +This operation has k outputs, where k is the number of components +in the tuples stored in the given queue, and output i is the ith +component of the dequeued tuple. + +N.B. If the queue is empty, this operation will block until an element +has been dequeued (or 'timeout_ms' elapses, if specified). + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a queue. +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a tuple. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue is empty, this operation will block for up to +timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `component_types`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueEnqueue.md b/site/en/api_docs/python/tf/raw_ops/QueueEnqueue.md new file mode 100644 index 00000000000..1f883af711d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueEnqueue.md @@ -0,0 +1,100 @@ +description: Enqueues a tuple of one or more tensors in the given queue. + +
+ + +
+ +# tf.raw_ops.QueueEnqueue + + + + + + + + + +Enqueues a tuple of one or more tensors in the given queue. + + + + + + + + + +The components input has k elements, which correspond to the components of +tuples stored in the given queue. + +N.B. If the queue is full, this operation will block until the given +element has been enqueued (or 'timeout_ms' elapses, if specified). + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a queue. +
+`components` + +A list of `Tensor` objects. +One or more tensors from which the enqueued tensors should be taken. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue is full, this operation will block for up to +timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueEnqueueMany.md b/site/en/api_docs/python/tf/raw_ops/QueueEnqueueMany.md new file mode 100644 index 00000000000..37899539028 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueEnqueueMany.md @@ -0,0 +1,105 @@ +description: Enqueues zero or more tuples of one or more tensors in the given queue. + +
+ + +
+ +# tf.raw_ops.QueueEnqueueMany + + + + + + + + + +Enqueues zero or more tuples of one or more tensors in the given queue. + + + + + + + + + +This operation slices each component tensor along the 0th dimension to +make multiple queue elements. All of the tuple components must have the +same size in the 0th dimension. + +The components input has k elements, which correspond to the components of +tuples stored in the given queue. + +N.B. If the queue is full, this operation will block until the given +elements have been enqueued (or 'timeout_ms' elapses, if specified). + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a queue. +
+`components` + +A list of `Tensor` objects. +One or more tensors from which the enqueued tensors should +be taken. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue is too full, this operation will block for up +to timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
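+The slicing behaviour is easiest to see through the `tf.queue.FIFOQueue` wrapper, whose `enqueue_many`/`dequeue_many` methods sit on top of this family of ops; a brief editorial sketch (the shapes and values are arbitrary example choices):
+
+```python
+import tensorflow as tf
+
+q = tf.queue.FIFOQueue(capacity=10, dtypes=[tf.float32], shapes=[[2]])
+
+# A [3, 2] tensor is sliced along dimension 0 into three [2]-shaped elements.
+q.enqueue_many(tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))
+
+# dequeue_many(2) concatenates two elements back along a new 0th dimension.
+print(q.dequeue_many(2).numpy())  # -> [[1. 2.] [3. 4.]]
+```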
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueEnqueueManyV2.md b/site/en/api_docs/python/tf/raw_ops/QueueEnqueueManyV2.md new file mode 100644 index 00000000000..5599c7a873b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueEnqueueManyV2.md @@ -0,0 +1,105 @@ +description: Enqueues zero or more tuples of one or more tensors in the given queue. + +
+ + +
+ +# tf.raw_ops.QueueEnqueueManyV2 + + + + + + + + + +Enqueues zero or more tuples of one or more tensors in the given queue. + + + + + + + + + +This operation slices each component tensor along the 0th dimension to +make multiple queue elements. All of the tuple components must have the +same size in the 0th dimension. + +The components input has k elements, which correspond to the components of +tuples stored in the given queue. + +N.B. If the queue is full, this operation will block until the given +elements have been enqueued (or 'timeout_ms' elapses, if specified). + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a queue. +
+`components` + +A list of `Tensor` objects. +One or more tensors from which the enqueued tensors should +be taken. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue is too full, this operation will block for up +to timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueEnqueueV2.md b/site/en/api_docs/python/tf/raw_ops/QueueEnqueueV2.md new file mode 100644 index 00000000000..cbe5b1bdbda --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueEnqueueV2.md @@ -0,0 +1,100 @@ +description: Enqueues a tuple of one or more tensors in the given queue. + +
+ + +
+ +# tf.raw_ops.QueueEnqueueV2 + + + + + + + + + +Enqueues a tuple of one or more tensors in the given queue. + + + + + + + + + +The components input has k elements, which correspond to the components of +tuples stored in the given queue. + +N.B. If the queue is full, this operation will block until the given +element has been enqueued (or 'timeout_ms' elapses, if specified). + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a queue. +
+`components` + +A list of `Tensor` objects. +One or more tensors from which the enqueued tensors should be taken. +
+`timeout_ms` + +An optional `int`. Defaults to `-1`. +If the queue is full, this operation will block for up to +timeout_ms milliseconds. +Note: This option is not supported yet. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueIsClosed.md b/site/en/api_docs/python/tf/raw_ops/QueueIsClosed.md new file mode 100644 index 00000000000..ee84e864b2d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueIsClosed.md @@ -0,0 +1,79 @@ +description: Returns true if queue is closed. + +
+ + +
+ +# tf.raw_ops.QueueIsClosed + + + + + + + + + +Returns true if queue is closed. + + + + + + + + + +This operation returns true if the queue is closed and false if the queue +is open. + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a queue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueIsClosedV2.md b/site/en/api_docs/python/tf/raw_ops/QueueIsClosedV2.md new file mode 100644 index 00000000000..473612d2116 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueIsClosedV2.md @@ -0,0 +1,79 @@ +description: Returns true if queue is closed. + +
+ + +
+ +# tf.raw_ops.QueueIsClosedV2 + + + + + + + + + +Returns true if queue is closed. + + + + + + + + + +This operation returns true if the queue is closed and false if the queue +is open. + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a queue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueSize.md b/site/en/api_docs/python/tf/raw_ops/QueueSize.md new file mode 100644 index 00000000000..b7f9d93d3f3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueSize.md @@ -0,0 +1,77 @@ +description: Computes the number of elements in the given queue. + +
+ + +
+ +# tf.raw_ops.QueueSize + + + + + + + + + +Computes the number of elements in the given queue. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. The handle to a queue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/QueueSizeV2.md b/site/en/api_docs/python/tf/raw_ops/QueueSizeV2.md new file mode 100644 index 00000000000..424d4ce2052 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/QueueSizeV2.md @@ -0,0 +1,77 @@ +description: Computes the number of elements in the given queue. + +
+ + +
+ +# tf.raw_ops.QueueSizeV2 + + + + + + + + + +Computes the number of elements in the given queue. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a queue. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RFFT.md b/site/en/api_docs/python/tf/raw_ops/RFFT.md new file mode 100644 index 00000000000..b2ba6f8ba21 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RFFT.md @@ -0,0 +1,103 @@ +description: Real-valued fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.RFFT + + + + + + + + + +Real-valued fast Fourier transform. + + + + + + + + + +Computes the 1-dimensional discrete Fourier transform of a real-valued signal +over the inner-most dimension of `input`. + +Since the DFT of a real signal is Hermitian-symmetric, `RFFT` only returns the +`fft_length / 2 + 1` unique components of the FFT: the zero-frequency term, +followed by the `fft_length / 2` positive-frequency terms. + +Along the axis `RFFT` is computed on, if `fft_length` is smaller than the +corresponding dimension of `input`, the dimension is cropped. If it is larger, +the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +A float32 tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [1]. The FFT length. +
+`Tcomplex` + +An optional tf.DType from: `tf.complex64, tf.complex128`. Defaults to tf.complex64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tcomplex`. +
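+A minimal editorial sketch of calling the raw op directly; the public `tf.signal.rfft` wrapper exposes the same computation.
+
+```python
+import tensorflow as tf
+
+# A real-valued length-8 signal; fft_length is a rank-1 int32 tensor of shape [1].
+signal = tf.constant([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0])
+spectrum = tf.raw_ops.RFFT(input=signal,
+                           fft_length=tf.constant([8], dtype=tf.int32))
+print(spectrum.shape)  # (5,): fft_length // 2 + 1 unique complex terms
+```
+
+The 2-D and 3-D variants documented below follow the same pattern, with `fft_length` of shape `[2]` and `[3]` respectively.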
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RFFT2D.md b/site/en/api_docs/python/tf/raw_ops/RFFT2D.md new file mode 100644 index 00000000000..d4ce1b18665 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RFFT2D.md @@ -0,0 +1,104 @@ +description: 2D real-valued fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.RFFT2D + + + + + + + + + +2D real-valued fast Fourier transform. + + + + + + + + + +Computes the 2-dimensional discrete Fourier transform of a real-valued signal +over the inner-most 2 dimensions of `input`. + +Since the DFT of a real signal is Hermitian-symmetric, `RFFT2D` only returns the +`fft_length / 2 + 1` unique components of the FFT for the inner-most dimension +of `output`: the zero-frequency term, followed by the `fft_length / 2` +positive-frequency terms. + +Along each axis `RFFT2D` is computed on, if `fft_length` is smaller than the +corresponding dimension of `input`, the dimension is cropped. If it is larger, +the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +A float32 tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [2]. The FFT length for each dimension. +
+`Tcomplex` + +An optional tf.DType from: `tf.complex64, tf.complex128`. Defaults to tf.complex64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tcomplex`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RFFT3D.md b/site/en/api_docs/python/tf/raw_ops/RFFT3D.md new file mode 100644 index 00000000000..c1c00531be8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RFFT3D.md @@ -0,0 +1,104 @@ +description: 3D real-valued fast Fourier transform. + +
+ + +
+ +# tf.raw_ops.RFFT3D + + + + + + + + + +3D real-valued fast Fourier transform. + + + + + + + + + +Computes the 3-dimensional discrete Fourier transform of a real-valued signal +over the inner-most 3 dimensions of `input`. + +Since the DFT of a real signal is Hermitian-symmetric, `RFFT3D` only returns the +`fft_length / 2 + 1` unique components of the FFT for the inner-most dimension +of `output`: the zero-frequency term, followed by the `fft_length / 2` +positive-frequency terms. + +Along each axis `RFFT3D` is computed on, if `fft_length` is smaller than the +corresponding dimension of `input`, the dimension is cropped. If it is larger, +the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +A float32 tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [3]. The FFT length for each dimension. +
+`Tcomplex` + +An optional tf.DType from: `tf.complex64, tf.complex128`. Defaults to tf.complex64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tcomplex`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RGBToHSV.md b/site/en/api_docs/python/tf/raw_ops/RGBToHSV.md new file mode 100644 index 00000000000..b3a7d99eb56 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RGBToHSV.md @@ -0,0 +1,100 @@ +description: Converts one or more images from RGB to HSV. + +
+ + +
+ +# tf.raw_ops.RGBToHSV + + + + + + + + + +Converts one or more images from RGB to HSV. + + + + + + + + + +Outputs a tensor of the same shape as the `images` tensor, containing the HSV +value of the pixels. The output is only well defined if the value in `images` +are in `[0,1]`. + +`output[..., 0]` contains hue, `output[..., 1]` contains saturation, and +`output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0 +corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue. + +#### Usage Example: + + + +``` +>>> blue_image = tf.stack([ +... tf.zeros([5,5]), +... tf.zeros([5,5]), +... tf.ones([5,5])], +... axis=-1) +>>> blue_hsv_image = tf.image.rgb_to_hsv(blue_image) +>>> blue_hsv_image[0,0].numpy() +array([0.6666667, 1. , 1. ], dtype=float32) +``` + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +1-D or higher rank. RGB data to convert. Last dimension must be size 3. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RaggedGather.md b/site/en/api_docs/python/tf/raw_ops/RaggedGather.md new file mode 100644 index 00000000000..a92cd35aef6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RaggedGather.md @@ -0,0 +1,145 @@ +description: Gather ragged slices from params axis 0 according to indices. + +
+ + +
+ +# tf.raw_ops.RaggedGather + + + + + + + + + +Gather ragged slices from `params` axis `0` according to `indices`. + + + + + + + + + +Outputs a `RaggedTensor` output composed from `output_dense_values` and +`output_nested_splits`, such that: + +```python +output.shape = indices.shape + params.shape[1:] +output.ragged_rank = indices.shape.ndims + params.ragged_rank +output[i...j, d0...dn] = params[indices[i...j], d0...dn] +``` + +where + +* `params = + ragged.from_nested_row_splits(params_dense_values, params_nested_splits)` + provides the values that should be gathered. +* `indices` is a dense tensor with dtype `int32` or `int64`, indicating which + values should be gathered. +* `output = + ragged.from_nested_row_splits(output_dense_values, output_nested_splits)` + is the output tensor. + +(Note: This C++ op is used to implement the higher-level Python +`tf.ragged.gather` op, which also supports ragged indices.) + + + + + + + + + + + + + + + + + + + + + + 
+`params_nested_splits` + +A list of at least 1 `Tensor` objects with the same type in: `int32`, `int64`. +The `nested_row_splits` tensors that define the row-partitioning for the +`params` RaggedTensor input. +
+`params_dense_values` + +A `Tensor`. +The `flat_values` for the `params` RaggedTensor. There was a terminology change +at the python level from dense_values to flat_values, so dense_values is the +deprecated name. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Indices in the outermost dimension of `params` of the values that should be +gathered. +
+`OUTPUT_RAGGED_RANK` + +An `int` that is `>= 0`. +The ragged rank of the output RaggedTensor. `output_nested_splits` will contain +this number of `row_splits` tensors. This value should equal +`indices.shape.ndims + params.ragged_rank - 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_nested_splits, output_dense_values). +
+`output_nested_splits` + +A list of `OUTPUT_RAGGED_RANK` `Tensor` objects with the same type as `params_nested_splits`. +
+`output_dense_values` + +A `Tensor`. Has the same type as `params_dense_values`. +
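+Since the raw op consumes a ragged tensor as row splits plus flat values, the sketch below (an editorial addition) builds those pieces by hand for `params = [[1., 2.], [3., 4., 5.], []]`, gathers rows `[2, 0, 0]`, and reassembles the result with `tf.RaggedTensor.from_nested_row_splits`; the dtypes are example choices.
+
+```python
+import tensorflow as tf
+
+splits = tf.constant([0, 2, 5, 5], dtype=tf.int64)   # row partitioning
+values = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])      # flat values
+indices = tf.constant([2, 0, 0], dtype=tf.int32)
+
+# OUTPUT_RAGGED_RANK = indices.shape.ndims + params.ragged_rank - 1 = 1
+out_splits, out_values = tf.raw_ops.RaggedGather(
+    params_nested_splits=[splits], params_dense_values=values,
+    indices=indices, OUTPUT_RAGGED_RANK=1)
+
+print(tf.RaggedTensor.from_nested_row_splits(out_values, out_splits))
+# -> <tf.RaggedTensor [[], [1.0, 2.0], [1.0, 2.0]]>
+```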
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RaggedRange.md b/site/en/api_docs/python/tf/raw_ops/RaggedRange.md new file mode 100644 index 00000000000..e03c4ee4ec9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RaggedRange.md @@ -0,0 +1,131 @@ +description: Returns a RaggedTensor containing the specified sequences of numbers. + +
+ + +
+ +# tf.raw_ops.RaggedRange + + + + + + + + + +Returns a `RaggedTensor` containing the specified sequences of numbers. + + + + + + + + + + +Returns a `RaggedTensor` `result` composed from `rt_dense_values` and +`rt_nested_splits`, such that +`result[i] = range(starts[i], limits[i], deltas[i])`. + +```python +(rt_nested_splits, rt_dense_values) = ragged_range( + starts=[2, 5, 8], limits=[3, 5, 12], deltas=1) +result = tf.ragged.from_row_splits(rt_dense_values, rt_nested_splits) +print(result) + +``` + +The input tensors `starts`, `limits`, and `deltas` may be scalars or vectors. +The vector inputs must all have the same size. Scalar inputs are broadcast +to match the size of the vector inputs. + + + + + + + + + + + + + + + + + + + + + + +
+`starts` + +A `Tensor`. Must be one of the following types: `bfloat16`, `float32`, `float64`, `int32`, `int64`. +The starts of each range. +
+`limits` + +A `Tensor`. Must have the same type as `starts`. +The limits of each range. +
+`deltas` + +A `Tensor`. Must have the same type as `starts`. +The deltas of each range. +
+`Tsplits` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (rt_nested_splits, rt_dense_values). +
+`rt_nested_splits` + +A `Tensor` of type `Tsplits`. +
+`rt_dense_values` + +A `Tensor`. Has the same type as `starts`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RaggedTensorFromVariant.md b/site/en/api_docs/python/tf/raw_ops/RaggedTensorFromVariant.md new file mode 100644 index 00000000000..aa5b9590c94 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RaggedTensorFromVariant.md @@ -0,0 +1,136 @@ +description: Decodes a variant Tensor into a RaggedTensor. + +
+ + +
+ +# tf.raw_ops.RaggedTensorFromVariant + + + + + + + + + +Decodes a `variant` Tensor into a `RaggedTensor`. + + + + + + + + + +Decodes the given `variant` Tensor and returns a `RaggedTensor`. The input +could be a scalar, meaning it encodes a single `RaggedTensor` with ragged_rank +`output_ragged_rank`. It could also have an arbitrary rank, in which case each +element is decoded into a `RaggedTensor` with ragged_rank `input_ragged_rank` +and these are then stacked according to the input shape to output a single +`RaggedTensor` with ragged_rank `output_ragged_rank`. Each `variant` element in +the input Tensor is decoded by retrieving from the element a 1-D `variant` +Tensor with `input_ragged_rank + 1` Tensors, corresponding to the splits and +values of the decoded `RaggedTensor`. If `input_ragged_rank` is -1, then it is +inferred as `output_ragged_rank` - `rank(encoded_ragged)`. See +`RaggedTensorToVariant` for the corresponding encoding logic. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`encoded_ragged` + +A `Tensor` of type `variant`. +A `variant` Tensor containing encoded `RaggedTensor`s. +
+`input_ragged_rank` + +An `int` that is `>= -1`. +The ragged rank of each encoded `RaggedTensor` component in the input. If set to +-1, this is inferred as `output_ragged_rank` - `rank(encoded_ragged)` +
+`output_ragged_rank` + +An `int` that is `>= 0`. +The expected ragged rank of the output `RaggedTensor`. The following must hold: +`output_ragged_rank = rank(encoded_ragged) + input_ragged_rank`. +
+`Tvalues` + +A tf.DType. +
+`Tsplits` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_nested_splits, output_dense_values). +
+`output_nested_splits` + +A list of `output_ragged_rank` `Tensor` objects with type `Tsplits`. +
+`output_dense_values` + +A `Tensor` of type `Tvalues`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RaggedTensorToSparse.md b/site/en/api_docs/python/tf/raw_ops/RaggedTensorToSparse.md new file mode 100644 index 00000000000..246f3e726b9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RaggedTensorToSparse.md @@ -0,0 +1,109 @@ +description: Converts a RaggedTensor into a SparseTensor with the same values. + +
+ + +
+ +# tf.raw_ops.RaggedTensorToSparse + + + + + + + + + +Converts a `RaggedTensor` into a `SparseTensor` with the same values. + + + + + + + + + +input=ragged.from_nested_row_splits(rt_dense_values, rt_nested_splits) +output=SparseTensor(indices=sparse_indices, values=sparse_values, + dense_shape=sparse_dense_shape) + + + + + + + + + + + + + + + + +
+`rt_nested_splits` + +A list of at least 1 `Tensor` objects with the same type in: `int32`, `int64`. +The `row_splits` for the `RaggedTensor`. +
+`rt_dense_values` + +A `Tensor`. The `flat_values` for the `RaggedTensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_dense_shape). +
+`sparse_indices` + +A `Tensor` of type `int64`. +
+`sparse_values` + +A `Tensor`. Has the same type as `rt_dense_values`. +
+`sparse_dense_shape` + +A `Tensor` of type `int64`. +
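#### For example:

A small sketch (assuming TF 2.x eager execution) converting the ragged tensor `[[1, 2], [3]]`; the expected outputs in the comments follow directly from the definition above.

```python
import tensorflow as tf

rt = tf.ragged.constant([[1, 2], [3]])
indices, values, dense_shape = tf.raw_ops.RaggedTensorToSparse(
    rt_nested_splits=[rt.row_splits],
    rt_dense_values=rt.flat_values)

print(indices)      # [[0 0] [0 1] [1 0]]
print(values)       # [1 2 3]
print(dense_shape)  # [2 2]
```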
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RaggedTensorToTensor.md b/site/en/api_docs/python/tf/raw_ops/RaggedTensorToTensor.md new file mode 100644 index 00000000000..8597cdea1a3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RaggedTensorToTensor.md @@ -0,0 +1,153 @@ +description: Create a dense tensor from a ragged tensor, possibly altering its shape. + +
+ + +
+ +# tf.raw_ops.RaggedTensorToTensor + + + + + + + + + +Create a dense tensor from a ragged tensor, possibly altering its shape. + + + + + + + + + +The `ragged_to_dense` op creates a dense tensor from a list of row partition +tensors, a value vector, and default values. If the shape is unspecified, the +minimal shape required to contain all the elements in the ragged tensor (the +natural shape) will be used. If some dimensions are left unspecified, then the +size of the natural shape is used in that dimension. + +The default_value will be broadcast to the output shape. After that, the values +from the ragged tensor overwrite the default values. Note that the default_value +must have fewer dimensions than the value tensor. + +The row partition tensors are in the order of the dimensions. +At present, the types can be: +* "ROW_SPLITS": the row_splits tensor from the ragged tensor. +* "VALUE_ROWIDS": the value_rowids tensor from the ragged tensor. +* "FIRST_DIM_SIZE": if value_rowids is used for the first dimension, then it + is preceded by "FIRST_DIM_SIZE". + + + + + + + + + + + + + + + + + + + + + + + + + 
+`shape` + +A `Tensor`. Must be one of the following types: `int64`, `int32`. +The desired shape of the output tensor. If left unspecified (empty), +the minimal shape required to contain all the elements in the ragged tensor +(the natural shape) will be used. If some dimensions are left unspecified, then +the size of the natural shape is used in that dimension. + +Note that dense dimensions cannot be modified by the shape argument. Trying to +change the size of a dense dimension will cause the op to fail. +Examples: +natural shape: [4, 5, 6] +shape: -1 +output shape: [4, 5, 6] + +natural shape: [4, 5, 6] +shape: [3, -1, 2] +output shape: [3, 5, 2] + +natural shape: [4, 5, 6] +shape: [3, 7, 2] +output shape: [3, 7, 2] +
+`values` + +A `Tensor`. +A 1D tensor representing the values of the ragged tensor. +
+`default_value` + +A `Tensor`. Must have the same type as `values`. +The default_value when the shape is larger than the ragged tensor. The +default_value is broadcast until it is the shape of the output tensor, and +then overwritten by values in the ragged tensor. The default value must be +compatible with this broadcast operation, and must have fewer dimensions than +the value tensor. +
+`row_partition_tensors` + +A list of at least 1 `Tensor` objects with the same type in: `int64`, `int32`. +
+`row_partition_types` + +A list of `strings`. +The types of the row partition tensors. At present, these can be: +* "ROW_SPLITS": the row_splits tensor from the ragged tensor. +* "VALUE_ROWIDS": the value_rowids tensor from the ragged tensor. +* "FIRST_DIM_SIZE": if value_rowids is used for the first dimension, then it +is preceded by "FIRST_DIM_SIZE". +The tensors are in the order of the dimensions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `values`. +
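#### For example:

A minimal sketch (assuming TF 2.x eager execution and a single `"ROW_SPLITS"` partition) that pads a ragged tensor out to a dense `[3, 3]` tensor filled with a default value of `0`.

```python
import tensorflow as tf

rt = tf.ragged.constant([[1, 2], [3], [4]])
dense = tf.raw_ops.RaggedTensorToTensor(
    shape=tf.constant([3, 3], dtype=tf.int64),
    values=rt.flat_values,
    default_value=tf.constant(0, dtype=tf.int32),
    row_partition_tensors=[rt.row_splits],
    row_partition_types=["ROW_SPLITS"])

print(dense)
# [[1 2 0]
#  [3 0 0]
#  [4 0 0]]
```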
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RaggedTensorToVariant.md b/site/en/api_docs/python/tf/raw_ops/RaggedTensorToVariant.md new file mode 100644 index 00000000000..73ce795eb65 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RaggedTensorToVariant.md @@ -0,0 +1,106 @@ +description: Encodes a RaggedTensor into a variant Tensor. + +
+ + +
+ +# tf.raw_ops.RaggedTensorToVariant + + + + + + + + + +Encodes a `RaggedTensor` into a `variant` Tensor. + + + + + + + + + + +Encodes the given `RaggedTensor` and returns a `variant` Tensor. If +`batched_input` is True, then input `RaggedTensor` is unbatched along the +zero-th dimension, each component `RaggedTensor` is encoded into a scalar +`variant` Tensor, and these are stacked to return a 1-D `variant` Tensor. +If `batched_input` is False, then the input `RaggedTensor` is encoded as is and +a scalar `variant` Tensor is returned. A `RaggedTensor` is encoded by first +creating a 1-D `variant` Tensor with `ragged_rank + 1` elements, containing the +splits and values Tensors of the `RaggedTensor`. Then the 1-D `variant` Tensor +is wrapped in a scalar `variant` Tensor. See `RaggedTensorFromVariant` for the +corresponding decoding logic. + + + + + + + + + + + + + + + + + + + +
+`rt_nested_splits` + +A list of `Tensor` objects with the same type in: `int32`, `int64`. +A list of one or more Tensors representing the splits of the input +`RaggedTensor`. +
+`rt_dense_values` + +A `Tensor`. +A Tensor representing the values of the input `RaggedTensor`. +
+`batched_input` + +A `bool`. +A `bool` denoting whether the input is a batched `RaggedTensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
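#### For example:

A round-trip sketch with the decoding op `tf.raw_ops.RaggedTensorFromVariant` (assuming TF 2.x eager execution and an unbatched input, so the encoding is a scalar `variant`).

```python
import tensorflow as tf

rt = tf.ragged.constant([[1, 2], [], [3]])
encoded = tf.raw_ops.RaggedTensorToVariant(
    rt_nested_splits=[rt.row_splits],
    rt_dense_values=rt.flat_values,
    batched_input=False)  # scalar variant Tensor

splits, values = tf.raw_ops.RaggedTensorFromVariant(
    encoded_ragged=encoded,
    input_ragged_rank=1,
    output_ragged_rank=1,   # rank(encoded) + input_ragged_rank = 0 + 1
    Tvalues=tf.int32,
    Tsplits=tf.int64)

print(splits[0])  # [0 2 2 3]
print(values)     # [1 2 3]
```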
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomCrop.md b/site/en/api_docs/python/tf/raw_ops/RandomCrop.md new file mode 100644 index 00000000000..bd1514d1b91 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomCrop.md @@ -0,0 +1,110 @@ +description: Randomly crop image. + +
+ + +
+ +# tf.raw_ops.RandomCrop + + + + + + + + + +Randomly crop `image`. + + + + + + + + + +`size` is a 1-D int64 tensor with 2 elements representing the crop height and +width. The values must be non-negative. + +This Op picks a random location in `image` and crops a `height` by `width` +rectangle from that location. The random location is picked so the cropped +area will fit inside the original image. + + + + + + + + + + + + + + + + + + + + + + 
+`image` + +A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `float32`, `float64`. +3-D of shape `[height, width, channels]`. +
+`size` + +A `Tensor` of type `int64`. +1-D of length 2 containing: `crop_height`, `crop_width`. +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `image`. +
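#### For example:

A sketch of cropping a toy image eagerly (assuming TF 2.x); only the shape of the result is deterministic, since the crop location is random.

```python
import tensorflow as tf

image = tf.reshape(tf.range(4 * 6 * 3, dtype=tf.float32), [4, 6, 3])
crop = tf.raw_ops.RandomCrop(
    image=image,
    size=tf.constant([2, 3], dtype=tf.int64),  # [crop_height, crop_width]
    seed=7)

print(crop.shape)  # (2, 3, 3)
```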
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomDataset.md b/site/en/api_docs/python/tf/raw_ops/RandomDataset.md new file mode 100644 index 00000000000..e92c5921916 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomDataset.md @@ -0,0 +1,112 @@ +description: Creates a Dataset that returns pseudorandom numbers. + +
+ + +
+ +# tf.raw_ops.RandomDataset + + + + + + + + + +Creates a Dataset that returns pseudorandom numbers. + + + + + + + + + +Creates a Dataset that returns a stream of uniformly distributed +pseudorandom 64-bit signed integers. + +In the TensorFlow Python API, you can instantiate this dataset via the +class tf.data.experimental.RandomDataset. + +Instances of this dataset are also created as a result of the +`hoist_random_uniform` static optimization. Whether this optimization is +performed is determined by the `experimental_optimization.hoist_random_uniform` +option of tf.data.Options. + + + + + + + + + + + + + + + + + + + + + + +
+`seed` + +A `Tensor` of type `int64`. +A scalar seed for the random number generator. If either seed or +seed2 is set to be non-zero, the random number generator is seeded +by the given seed. Otherwise, a random seed is used. +
+`seed2` + +A `Tensor` of type `int64`. +A second scalar seed to avoid seed collision. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomGamma.md b/site/en/api_docs/python/tf/raw_ops/RandomGamma.md new file mode 100644 index 00000000000..9834895bce8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomGamma.md @@ -0,0 +1,109 @@ +description: Outputs random values from the Gamma distribution(s) described by alpha. + +
+ + +
+ +# tf.raw_ops.RandomGamma + + + + + + + + + +Outputs random values from the Gamma distribution(s) described by alpha. + + + + + + + + + +This op uses the algorithm by Marsaglia et al. to acquire samples via +transformation-rejection from pairs of uniform and normal random variables. +See http://dl.acm.org/citation.cfm?id=358414 + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D integer tensor. Shape of independent samples to draw from each +distribution described by the shape parameters given in alpha. +
+`alpha` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +A tensor in which each scalar is a "shape" parameter describing the +associated gamma distribution. +
+`seed` + +An optional `int`. Defaults to `0`. +If either `seed` or `seed2` are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `alpha`. +
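#### For example:

A short sketch (assuming TF 2.x eager execution): drawing two independent samples from each of the Gamma distributions with shape parameters 3.0 and 5.0.

```python
import tensorflow as tf

samples = tf.raw_ops.RandomGamma(
    shape=tf.constant([2], dtype=tf.int32),
    alpha=tf.constant([3.0, 5.0]),
    seed=1, seed2=2)

print(samples.shape)  # (2, 2) -- the draw shape followed by alpha's shape
```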
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomGammaGrad.md b/site/en/api_docs/python/tf/raw_ops/RandomGammaGrad.md new file mode 100644 index 00000000000..1c4796f13f5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomGammaGrad.md @@ -0,0 +1,84 @@ +description: Computes the derivative of a Gamma random sample w.r.t. alpha. + +
+ + +
+ +# tf.raw_ops.RandomGammaGrad + + + + + + + + + +Computes the derivative of a Gamma random sample w.r.t. `alpha`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`alpha` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`sample` + +A `Tensor`. Must have the same type as `alpha`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `alpha`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomPoisson.md b/site/en/api_docs/python/tf/raw_ops/RandomPoisson.md new file mode 100644 index 00000000000..ca5432f0904 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomPoisson.md @@ -0,0 +1,98 @@ +description: Use RandomPoissonV2 instead. + +
+ + +
+ +# tf.raw_ops.RandomPoisson + + + + + + + + + +Use RandomPoissonV2 instead. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`rate` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `rate`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomPoissonV2.md b/site/en/api_docs/python/tf/raw_ops/RandomPoissonV2.md new file mode 100644 index 00000000000..00a7983e408 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomPoissonV2.md @@ -0,0 +1,122 @@ +description: Outputs random values from the Poisson distribution(s) described by rate. + +
+ + +
+ +# tf.raw_ops.RandomPoissonV2 + + + + + + + + + +Outputs random values from the Poisson distribution(s) described by rate. + + + + + + + + + +This op uses two algorithms, depending on rate. If rate >= 10, then +the algorithm by Hormann is used to acquire samples via +transformation-rejection. +See http://www.sciencedirect.com/science/article/pii/0167668793909974. + +Otherwise, Knuth's algorithm is used to acquire samples via multiplying uniform +random variables. +See Donald E. Knuth (1969). Seminumerical Algorithms. The Art of Computer +Programming, Volume 2. Addison Wesley + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D integer tensor. Shape of independent samples to draw from each +distribution described by the shape parameters given in rate. +
+`rate` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`. +A tensor in which each scalar is a "rate" parameter describing the +associated poisson distribution. +
+`seed` + +An optional `int`. Defaults to `0`. +If either `seed` or `seed2` are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`dtype` + +An optional tf.DType from: `tf.half, tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
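#### For example:

A short sketch (assuming TF 2.x eager execution): three draws from each of two Poisson distributions, returned as `int64` counts.

```python
import tensorflow as tf

counts = tf.raw_ops.RandomPoissonV2(
    shape=tf.constant([3], dtype=tf.int32),
    rate=tf.constant([1.5, 10.0]),
    seed=1, seed2=2,
    dtype=tf.int64)

print(counts.shape)  # (3, 2) -- the draw shape followed by rate's shape
```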
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomShuffle.md b/site/en/api_docs/python/tf/raw_ops/RandomShuffle.md new file mode 100644 index 00000000000..52417fafc20 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomShuffle.md @@ -0,0 +1,104 @@ +description: Randomly shuffles a tensor along its first dimension. + +
+ + +
+ +# tf.raw_ops.RandomShuffle + + + + + + + + + +Randomly shuffles a tensor along its first dimension. + + + + + + + + + + The tensor is shuffled along dimension 0, such that each `value[j]` is mapped + to one and only one `output[i]`. For example, a mapping that might occur for a + 3x2 tensor is: + +``` +[[1, 2], [[5, 6], + [3, 4], ==> [1, 2], + [5, 6]] [3, 4]] +``` + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. The tensor to be shuffled. +
+`seed` + +An optional `int`. Defaults to `0`. +If either `seed` or `seed2` are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `value`. +
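#### For example:

A minimal eager sketch (assuming TF 2.x); the rows of the result are the rows of `value` in some random order.

```python
import tensorflow as tf

value = tf.constant([[1, 2], [3, 4], [5, 6]])
shuffled = tf.raw_ops.RandomShuffle(value=value, seed=3)

print(shuffled)  # the same three rows, permuted along dimension 0
```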
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomShuffleQueue.md b/site/en/api_docs/python/tf/raw_ops/RandomShuffleQueue.md new file mode 100644 index 00000000000..4ebe7817544 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomShuffleQueue.md @@ -0,0 +1,144 @@ +description: A queue that randomizes the order of elements. + +
+ + +
+ +# tf.raw_ops.RandomShuffleQueue + + + + + + + + + +A queue that randomizes the order of elements. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a value. +
+`shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +The shape of each component in a value. The length of this attr must +be either 0 or the same as the length of component_types. If the length of +this attr is 0, the shapes of queue elements are not constrained, and +only one element may be dequeued at a time. +
+`capacity` + +An optional `int`. Defaults to `-1`. +The upper bound on the number of elements in this queue. +Negative numbers mean no limit. +
+`min_after_dequeue` + +An optional `int`. Defaults to `0`. +Dequeue will block unless there would be this +many elements after the dequeue or the queue is closed. This +ensures a minimum level of mixing of elements. +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 is set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, a random seed is used. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this queue will be shared under the given name +across multiple sessions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomShuffleQueueV2.md b/site/en/api_docs/python/tf/raw_ops/RandomShuffleQueueV2.md new file mode 100644 index 00000000000..4db94b1b2de --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomShuffleQueueV2.md @@ -0,0 +1,144 @@ +description: A queue that randomizes the order of elements. + +
+ + +
+ +# tf.raw_ops.RandomShuffleQueueV2 + + + + + + + + + +A queue that randomizes the order of elements. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`component_types` + +A list of `tf.DTypes` that has length `>= 1`. +The type of each component in a value. +
+`shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +The shape of each component in a value. The length of this attr must +be either 0 or the same as the length of component_types. If the length of +this attr is 0, the shapes of queue elements are not constrained, and +only one element may be dequeued at a time. +
+`capacity` + +An optional `int`. Defaults to `-1`. +The upper bound on the number of elements in this queue. +Negative numbers mean no limit. +
+`min_after_dequeue` + +An optional `int`. Defaults to `0`. +Dequeue will block unless there would be this +many elements after the dequeue or the queue is closed. This +ensures a minimum level of mixing of elements. +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 is set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, a random seed is used. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this queue will be shared under the given name +across multiple sessions. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomStandardNormal.md b/site/en/api_docs/python/tf/raw_ops/RandomStandardNormal.md new file mode 100644 index 00000000000..777981fc15e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomStandardNormal.md @@ -0,0 +1,105 @@ +description: Outputs random values from a normal distribution. + +
+ + +
+ +# tf.raw_ops.RandomStandardNormal + + + + + + + + + +Outputs random values from a normal distribution. + + + + + + + + + +The generated values will have mean 0 and standard deviation 1. + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`dtype` + +A tf.DType from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. +The type of the output. +
+`seed` + +An optional `int`. Defaults to `0`. +If either `seed` or `seed2` are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
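#### For example:

A minimal eager sketch (assuming TF 2.x) drawing a `2 x 3` matrix of standard normal samples.

```python
import tensorflow as tf

z = tf.raw_ops.RandomStandardNormal(
    shape=tf.constant([2, 3], dtype=tf.int32),
    dtype=tf.float32,
    seed=42)

print(z.shape)            # (2, 3)
print(tf.reduce_mean(z))  # sample mean; approaches 0 as the sample grows
```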
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomUniform.md b/site/en/api_docs/python/tf/raw_ops/RandomUniform.md new file mode 100644 index 00000000000..9a66f382fc4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomUniform.md @@ -0,0 +1,106 @@ +description: Outputs random values from a uniform distribution. + +
+ + +
+ +# tf.raw_ops.RandomUniform + + + + + + + + + +Outputs random values from a uniform distribution. + + + + + + + + + +The generated values follow a uniform distribution in the range `[0, 1)`. The +lower bound 0 is included in the range, while the upper bound 1 is excluded. + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`dtype` + +A tf.DType from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. +The type of the output. +
+`seed` + +An optional `int`. Defaults to `0`. +If either `seed` or `seed2` are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RandomUniformInt.md b/site/en/api_docs/python/tf/raw_ops/RandomUniformInt.md new file mode 100644 index 00000000000..e5e212d60eb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RandomUniformInt.md @@ -0,0 +1,119 @@ +description: Outputs random integers from a uniform distribution. + +
+ + +
+ +# tf.raw_ops.RandomUniformInt + + + + + + + + + +Outputs random integers from a uniform distribution. + + + + + + + + + +The generated values are uniform integers in the range `[minval, maxval)`. +The lower bound `minval` is included in the range, while the upper bound +`maxval` is excluded. + +The random integers are slightly biased unless `maxval - minval` is an exact +power of two. The bias is small for values of `maxval - minval` significantly +smaller than the range of the output (either `2^32` or `2^64`). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`minval` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +0-D. Inclusive lower bound on the generated integers. +
+`maxval` + +A `Tensor`. Must have the same type as `minval`. +0-D. Exclusive upper bound on the generated integers. +
+`seed` + +An optional `int`. Defaults to `0`. +If either `seed` or `seed2` are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `minval`. +
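#### For example:

A minimal eager sketch (assuming TF 2.x): five simulated die rolls. Note that `maxval` is exclusive, and since `maxval - minval = 6` is not a power of two the samples are very slightly biased, as described above.

```python
import tensorflow as tf

rolls = tf.raw_ops.RandomUniformInt(
    shape=tf.constant([5], dtype=tf.int32),
    minval=tf.constant(1),   # inclusive lower bound
    maxval=tf.constant(7),   # exclusive upper bound
    seed=1, seed2=2)

print(rolls)  # five integers in [1, 7)
```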
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Range.md b/site/en/api_docs/python/tf/raw_ops/Range.md new file mode 100644 index 00000000000..d7bd0f8e2f9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Range.md @@ -0,0 +1,107 @@ +description: Creates a sequence of numbers. + +
+ + +
+ +# tf.raw_ops.Range + + + + + + + + + +Creates a sequence of numbers. + + + + + + + + + +This operation creates a sequence of numbers that begins at `start` and +extends by increments of `delta` up to but not including `limit`. + +#### For example: + + + +``` +# 'start' is 3 +# 'limit' is 18 +# 'delta' is 3 +tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15] +``` + + + + + + + + + + + + + + + + + + + +
+`start` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`. +0-D (scalar). First entry in the sequence. +
+`limit` + +A `Tensor`. Must have the same type as `start`. +0-D (scalar). Upper limit of sequence, exclusive. +
+`delta` + +A `Tensor`. Must have the same type as `start`. +0-D (scalar). Optional. Default is 1. Number that increments `start`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `start`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RangeDataset.md b/site/en/api_docs/python/tf/raw_ops/RangeDataset.md new file mode 100644 index 00000000000..1c66724dbc2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RangeDataset.md @@ -0,0 +1,108 @@ +description: Creates a dataset with a range of values. Corresponds to python's xrange. + +
+ + +
+ +# tf.raw_ops.RangeDataset + + + + + + + + + +Creates a dataset with a range of values. Corresponds to python's xrange. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`start` + +A `Tensor` of type `int64`. +corresponds to start in python's xrange(). +
+`stop` + +A `Tensor` of type `int64`. +corresponds to stop in python's xrange(). +
+`step` + +A `Tensor` of type `int64`. +corresponds to step in python's xrange(). +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
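#### For example:

A sketch (assuming TF 2.x eager execution). The raw op returns a `variant` handle that is normally consumed by other dataset ops; the public counterpart shown afterwards is tf.data.Dataset.range.

```python
import tensorflow as tf

variant = tf.raw_ops.RangeDataset(
    start=tf.constant(0, dtype=tf.int64),
    stop=tf.constant(10, dtype=tf.int64),
    step=tf.constant(2, dtype=tf.int64),
    output_types=[tf.int64],
    output_shapes=[tf.TensorShape([])])

# The high-level pipeline that produces the same range of values:
for x in tf.data.Dataset.range(0, 10, 2):
  print(x.numpy())  # 0 2 4 6 8
```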
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Rank.md b/site/en/api_docs/python/tf/raw_ops/Rank.md new file mode 100644 index 00000000000..e0a8160a6a6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Rank.md @@ -0,0 +1,92 @@ +description: Returns the rank of a tensor. + +
+ + +
+ +# tf.raw_ops.Rank + + + + + + + + + +Returns the rank of a tensor. + + + + + + + + + +This operation returns an integer representing the rank of `input`. + +#### For example: + + + +``` +# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] +# shape of tensor 't' is [2, 2, 3] +rank(t) ==> 3 +``` + +**Note**: The rank of a tensor is not the same as the rank of a matrix. The rank +of a tensor is the number of indices required to uniquely select each element +of the tensor. Rank is also known as "order", "degree", or "ndims." + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReadFile.md b/site/en/api_docs/python/tf/raw_ops/ReadFile.md new file mode 100644 index 00000000000..2415cb0d827 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReadFile.md @@ -0,0 +1,77 @@ +description: Reads and outputs the entire contents of the input filename. + +
+ + +
+ +# tf.raw_ops.ReadFile + + + + + + + + + +Reads and outputs the entire contents of the input filename. + + + + + + + + + + + + + + + + + + + + + + +
+`filename` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
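#### For example:

A self-contained sketch (assuming TF 2.x eager execution); the temporary file path is created only for illustration.

```python
import os
import tempfile

import tensorflow as tf

path = os.path.join(tempfile.mkdtemp(), "hello.txt")
tf.io.write_file(path, "Hello, TensorFlow!")

contents = tf.raw_ops.ReadFile(filename=tf.constant(path))
print(contents)  # tf.Tensor(b'Hello, TensorFlow!', shape=(), dtype=string)
```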
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReadVariableOp.md b/site/en/api_docs/python/tf/raw_ops/ReadVariableOp.md new file mode 100644 index 00000000000..de84d7503f8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReadVariableOp.md @@ -0,0 +1,91 @@ +description: Reads the value of a variable. + +
+ + +
+ +# tf.raw_ops.ReadVariableOp + + + + + + + + + +Reads the value of a variable. + + + + + + + + + +The tensor returned by this operation is immutable. + +The value returned by this operation is guaranteed to be influenced by all the +writes on which this operation depends directly or indirectly, and to not be +influenced by any of the writes which depend directly or indirectly on this +operation. + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +handle to the resource in which to store the variable. +
+`dtype` + +A tf.DType. the dtype of the value. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
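#### For example:

A minimal eager sketch (assuming TF 2.x): reading a resource variable through its handle after a write, which illustrates the ordering guarantee described above.

```python
import tensorflow as tf

v = tf.Variable(3.0)
v.assign_add(1.0)

value = tf.raw_ops.ReadVariableOp(resource=v.handle, dtype=tf.float32)
print(value)  # tf.Tensor(4.0, shape=(), dtype=float32)
```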
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderNumRecordsProduced.md b/site/en/api_docs/python/tf/raw_ops/ReaderNumRecordsProduced.md new file mode 100644 index 00000000000..8b4fd735d89 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderNumRecordsProduced.md @@ -0,0 +1,79 @@ +description: Returns the number of records this Reader has produced. + +
+ + +
+ +# tf.raw_ops.ReaderNumRecordsProduced + + + + + + + + + +Returns the number of records this Reader has produced. + + + + + + + + + +This is the same as the number of ReaderRead executions that have +succeeded. + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type mutable `string`. Handle to a Reader. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderNumRecordsProducedV2.md b/site/en/api_docs/python/tf/raw_ops/ReaderNumRecordsProducedV2.md new file mode 100644 index 00000000000..a25b83de911 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderNumRecordsProducedV2.md @@ -0,0 +1,79 @@ +description: Returns the number of records this Reader has produced. + +
+ + +
+ +# tf.raw_ops.ReaderNumRecordsProducedV2 + + + + + + + + + +Returns the number of records this Reader has produced. + + + + + + + + + +This is the same as the number of ReaderRead executions that have +succeeded. + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type `resource`. Handle to a Reader. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderNumWorkUnitsCompleted.md b/site/en/api_docs/python/tf/raw_ops/ReaderNumWorkUnitsCompleted.md new file mode 100644 index 00000000000..daa7281515c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderNumWorkUnitsCompleted.md @@ -0,0 +1,77 @@ +description: Returns the number of work units this Reader has finished processing. + +
+ + +
+ +# tf.raw_ops.ReaderNumWorkUnitsCompleted + + + + + + + + + +Returns the number of work units this Reader has finished processing. + + + + + + + + + + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type mutable `string`. Handle to a Reader. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderNumWorkUnitsCompletedV2.md b/site/en/api_docs/python/tf/raw_ops/ReaderNumWorkUnitsCompletedV2.md new file mode 100644 index 00000000000..30c0a8985e4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderNumWorkUnitsCompletedV2.md @@ -0,0 +1,77 @@ +description: Returns the number of work units this Reader has finished processing. + +
+ + +
+ +# tf.raw_ops.ReaderNumWorkUnitsCompletedV2 + + + + + + + + + +Returns the number of work units this Reader has finished processing. + + + + + + + + + + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type `resource`. Handle to a Reader. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderRead.md b/site/en/api_docs/python/tf/raw_ops/ReaderRead.md new file mode 100644 index 00000000000..c657a3a6dae --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderRead.md @@ -0,0 +1,102 @@ +description: Returns the next record (key, value pair) produced by a Reader. + +
+ + +
+ +# tf.raw_ops.ReaderRead + + + + + + + + + +Returns the next record (key, value pair) produced by a Reader. + + + + + + + + + +Will dequeue from the input queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has finished +with the previous file). + + + + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type mutable `string`. Handle to a Reader. +
+`queue_handle` + +A `Tensor` of type mutable `string`. +Handle to a Queue, with string work items. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (key, value). +
+`key` + +A `Tensor` of type `string`. +
+`value` + +A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderReadUpTo.md b/site/en/api_docs/python/tf/raw_ops/ReaderReadUpTo.md new file mode 100644 index 00000000000..368c4f68977 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderReadUpTo.md @@ -0,0 +1,111 @@ +description: Returns up to num_records (key, value) pairs produced by a Reader. + +
+ + +
+ +# tf.raw_ops.ReaderReadUpTo + + + + + + + + + +Returns up to `num_records` (key, value) pairs produced by a Reader. + + + + + + + + + +Will dequeue from the input queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has finished +with the previous file). +It may return less than `num_records` even before the last batch. + + + + + + + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type mutable `string`. Handle to a `Reader`. +
+`queue_handle` + +A `Tensor` of type mutable `string`. +Handle to a `Queue`, with string work items. +
+`num_records` + +A `Tensor` of type `int64`. +number of records to read from `Reader`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (keys, values). +
+`keys` + +A `Tensor` of type `string`. +
+`values` + +A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderReadUpToV2.md b/site/en/api_docs/python/tf/raw_ops/ReaderReadUpToV2.md new file mode 100644 index 00000000000..c7a7edc415a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderReadUpToV2.md @@ -0,0 +1,111 @@ +description: Returns up to num_records (key, value) pairs produced by a Reader. + +
+ + +
+ +# tf.raw_ops.ReaderReadUpToV2 + + + + + + + + + +Returns up to `num_records` (key, value) pairs produced by a Reader. + + + + + + + + + +Will dequeue from the input queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has finished +with the previous file). +It may return less than `num_records` even before the last batch. + + + + + + + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type `resource`. Handle to a `Reader`. +
+`queue_handle` + +A `Tensor` of type `resource`. +Handle to a `Queue`, with string work items. +
+`num_records` + +A `Tensor` of type `int64`. +number of records to read from `Reader`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (keys, values). +
+`keys` + +A `Tensor` of type `string`. +
+`values` + +A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderReadV2.md b/site/en/api_docs/python/tf/raw_ops/ReaderReadV2.md new file mode 100644 index 00000000000..36fb315a0d1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderReadV2.md @@ -0,0 +1,102 @@ +description: Returns the next record (key, value pair) produced by a Reader. + +
+ + +
+ +# tf.raw_ops.ReaderReadV2 + + + + + + + + + +Returns the next record (key, value pair) produced by a Reader. + + + + + + + + + +Will dequeue from the input queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has finished +with the previous file). + + + + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type `resource`. Handle to a Reader. +
+`queue_handle` + +A `Tensor` of type `resource`. +Handle to a Queue, with string work items. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (key, value). +
+`key` + +A `Tensor` of type `string`. +
+`value` + +A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderReset.md b/site/en/api_docs/python/tf/raw_ops/ReaderReset.md new file mode 100644 index 00000000000..ec035c90b6a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderReset.md @@ -0,0 +1,77 @@ +description: Restore a Reader to its initial clean state. + +
+ + +
+ +# tf.raw_ops.ReaderReset + + + + + + + + + +Restore a Reader to its initial clean state. + + + + + + + + + + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type mutable `string`. Handle to a Reader. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderResetV2.md b/site/en/api_docs/python/tf/raw_ops/ReaderResetV2.md new file mode 100644 index 00000000000..6ea9c36a90f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderResetV2.md @@ -0,0 +1,77 @@ +description: Restore a Reader to its initial clean state. + +
+ + +
+ +# tf.raw_ops.ReaderResetV2 + + + + + + + + + +Restore a Reader to its initial clean state. + + + + + + + + + + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type `resource`. Handle to a Reader. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderRestoreState.md b/site/en/api_docs/python/tf/raw_ops/ReaderRestoreState.md new file mode 100644 index 00000000000..245dc57ce94 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderRestoreState.md @@ -0,0 +1,88 @@ +description: Restore a reader to a previously saved state. + +
+ + +
+ +# tf.raw_ops.ReaderRestoreState + + + + + + + + + +Restore a reader to a previously saved state. + + + + + + + + + +Not all Readers support being restored, so this can produce an +Unimplemented error. + + + + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type mutable `string`. Handle to a Reader. +
+`state` + +A `Tensor` of type `string`. +Result of a ReaderSerializeState of a Reader with type +matching reader_handle. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderRestoreStateV2.md b/site/en/api_docs/python/tf/raw_ops/ReaderRestoreStateV2.md new file mode 100644 index 00000000000..73cdefd5703 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderRestoreStateV2.md @@ -0,0 +1,88 @@ +description: Restore a reader to a previously saved state. + +
+ + +
+ +# tf.raw_ops.ReaderRestoreStateV2 + + + + + + + + + +Restore a reader to a previously saved state. + + + + + + + + + +Not all Readers support being restored, so this can produce an +Unimplemented error. + + + + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type `resource`. Handle to a Reader. +
+`state` + +A `Tensor` of type `string`. +Result of a ReaderSerializeState of a Reader with type +matching reader_handle. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderSerializeState.md b/site/en/api_docs/python/tf/raw_ops/ReaderSerializeState.md new file mode 100644 index 00000000000..a63bcc530c4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderSerializeState.md @@ -0,0 +1,79 @@ +description: Produce a string tensor that encodes the state of a Reader. + +
+ + +
+ +# tf.raw_ops.ReaderSerializeState + + + + + + + + + +Produce a string tensor that encodes the state of a Reader. + + + + + + + + + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type mutable `string`. Handle to a Reader. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReaderSerializeStateV2.md b/site/en/api_docs/python/tf/raw_ops/ReaderSerializeStateV2.md new file mode 100644 index 00000000000..d115c2bfc3b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReaderSerializeStateV2.md @@ -0,0 +1,79 @@ +description: Produce a string tensor that encodes the state of a Reader. + +
+ + +
+ +# tf.raw_ops.ReaderSerializeStateV2 + + + + + + + + + +Produce a string tensor that encodes the state of a Reader. + + + + + + + + + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + + + + + + + + + + + + + +
+`reader_handle` + +A `Tensor` of type `resource`. Handle to a Reader. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Real.md b/site/en/api_docs/python/tf/raw_ops/Real.md new file mode 100644 index 00000000000..e690c575761 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Real.md @@ -0,0 +1,97 @@ +description: Returns the real part of a complex number. + +
+ + +
+ +# tf.raw_ops.Real + + + + + + + + + +Returns the real part of a complex number. + + + + + + + + + +Given a tensor `input` of complex numbers, this operation returns a tensor of +type `float` that is the real part of each element in `input`. All elements in +`input` must be complex numbers of the form \\(a + bj\\), where *a* is the real + part returned by this operation and *b* is the imaginary part. + +#### For example: + + + +``` +# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] +tf.real(input) ==> [-2.25, 3.25] +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +
+`Tout` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RealDiv.md b/site/en/api_docs/python/tf/raw_ops/RealDiv.md new file mode 100644 index 00000000000..46ce3cf0d3a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RealDiv.md @@ -0,0 +1,88 @@ +description: Returns x / y element-wise for real types. + +
+ + +
+ +# tf.raw_ops.RealDiv + + + + + + + + + +Returns x / y element-wise for real types. + + + + + + + + + +If `x` and `y` are reals, this will return the floating-point division. + +*NOTE*: `Div` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
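#### For example:

A small eager sketch (assuming TF 2.x) showing the element-wise division and the broadcasting behaviour mentioned above.

```python
import tensorflow as tf

x = tf.constant([[6.0, 9.0, 12.0],
                 [3.0, 4.5, 6.0]])
y = tf.constant([3.0, 3.0, 3.0])  # broadcast against each row of x

print(tf.raw_ops.RealDiv(x=x, y=y))
# [[2.  3.  4. ]
#  [1.  1.5 2. ]]
```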
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RebatchDataset.md b/site/en/api_docs/python/tf/raw_ops/RebatchDataset.md new file mode 100644 index 00000000000..567d40c3ab6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RebatchDataset.md @@ -0,0 +1,112 @@ +description: Creates a dataset that changes the batch size. + +
+ + +
+ +# tf.raw_ops.RebatchDataset + + + + + + + + + +Creates a dataset that changes the batch size. + + + + + + + + + +Creates a dataset that changes the batch size of the dataset to current batch +size // num_workers. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`num_replicas` + +A `Tensor` of type `int64`. +A scalar representing the number of replicas to distribute this batch across. As +a result of this transformation the current batch size would end up being +divided by this parameter. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`use_fallback` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Reciprocal.md b/site/en/api_docs/python/tf/raw_ops/Reciprocal.md new file mode 100644 index 00000000000..604104270b5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Reciprocal.md @@ -0,0 +1,78 @@ +description: Computes the reciprocal of x element-wise. + +
+ + +
+ +# tf.raw_ops.Reciprocal + + + + + + + + + +Computes the reciprocal of x element-wise. + + + + + + + + + +I.e., \\(y = 1 / x\\). + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReciprocalGrad.md b/site/en/api_docs/python/tf/raw_ops/ReciprocalGrad.md new file mode 100644 index 00000000000..451c65061f9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReciprocalGrad.md @@ -0,0 +1,86 @@ +description: Computes the gradient for the inverse of x wrt its input. + +
+ + +
+ +# tf.raw_ops.ReciprocalGrad + + + + + + + + + +Computes the gradient for the inverse of `x` wrt its input. + + + + + + + + + +Specifically, `grad = -dy * y*y`, where `y = 1/x`, and `dy` +is the corresponding input gradient. + + + + + + + + + + + + + + + + +
+`y` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`dy` + +A `Tensor`. Must have the same type as `y`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `y`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RecordInput.md b/site/en/api_docs/python/tf/raw_ops/RecordInput.md new file mode 100644 index 00000000000..3a2fc9a4d00 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RecordInput.md @@ -0,0 +1,128 @@ +description: Emits randomized records. + +
+ + +
+ +# tf.raw_ops.RecordInput + + + + + + + + + +Emits randomized records. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`file_pattern` + +A `string`. Glob pattern for the data files. +
+`file_random_seed` + +An optional `int`. Defaults to `301`. +Random seeds used to produce randomized records. +
+`file_shuffle_shift_ratio` + +An optional `float`. Defaults to `0`. +Shifts the list of files after the list is randomly +shuffled. +
+`file_buffer_size` + +An optional `int`. Defaults to `10000`. +The randomization shuffling buffer. +
+`file_parallelism` + +An optional `int`. Defaults to `16`. +How many sstables are opened and concurrently iterated over. +
+`batch_size` + +An optional `int`. Defaults to `32`. The batch size. +
+`compression_type` + +An optional `string`. Defaults to `""`. +The type of compression for the file. Currently ZLIB and +GZIP are supported. Defaults to none. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Recv.md b/site/en/api_docs/python/tf/raw_ops/Recv.md new file mode 100644 index 00000000000..40cc450933e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Recv.md @@ -0,0 +1,117 @@ +description: Receives the named tensor from send_device on recv_device. + +
+ + +
+ +# tf.raw_ops.Recv + + + + + + + + + +Receives the named tensor from send_device on recv_device. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensor_type` + +A tf.DType. +
+`tensor_name` + +A `string`. The name of the tensor to receive. +
+`send_device` + +A `string`. The name of the device sending the tensor. +
+`send_device_incarnation` + +An `int`. The current incarnation of send_device. +
+`recv_device` + +A `string`. The name of the device receiving the tensor. +
+`client_terminated` + +An optional `bool`. Defaults to `False`. +If set to true, this indicates that the node was added +to the graph as a result of a client-side feed or fetch of Tensor data, +in which case the corresponding send or recv is expected to be managed +locally by the caller. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `tensor_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RecvTPUEmbeddingActivations.md b/site/en/api_docs/python/tf/raw_ops/RecvTPUEmbeddingActivations.md new file mode 100644 index 00000000000..d8cdb502dae --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RecvTPUEmbeddingActivations.md @@ -0,0 +1,92 @@ +description: An op that receives embedding activations on the TPU. + +
+ + +
+ +# tf.raw_ops.RecvTPUEmbeddingActivations + + + + + + + + + +An op that receives embedding activations on the TPU. + + + + + + + + + +The TPU system performs the embedding lookups and aggregations specified by +the arguments to TPUEmbeddingEnqueue(Integer/Sparse/SparseTensor)Batch. The +results of these aggregations are visible to the Tensorflow Graph as the +outputs of a RecvTPUEmbeddingActivations op. This op returns a list containing +one Tensor of activations per table specified in the model. There can be at +most one RecvTPUEmbeddingActivations op in the TPU graph. + + + + + + + + + + + + + + + + +
+`num_outputs` + +An `int` that is `>= 1`. +The number of output activation tensors, equal to the number of +embedding tables in the model. +
+`config` + +A `string`. Serialized TPUEmbeddingConfiguration proto. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `num_outputs` `Tensor` objects with type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReduceDataset.md b/site/en/api_docs/python/tf/raw_ops/ReduceDataset.md new file mode 100644 index 00000000000..ec0a666619f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReduceDataset.md @@ -0,0 +1,126 @@ +description: Reduces the input dataset to a singleton using a reduce function. + +
+ + +
+ +# tf.raw_ops.ReduceDataset + + + + + + + + + +Reduces the input dataset to a singleton using a reduce function. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`initial_state` + +A list of `Tensor` objects. +A nested structure of tensors, representing the initial state of the +transformation. +
+`other_arguments` + +A list of `Tensor` objects. +
+`f` + +A function decorated with @Defun. +A function that maps `(old_state, input_element)` to `new_state`. It must take +two arguments and return a nested structures of tensors. The structure of +`new_state` must match the structure of `initial_state`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`use_inter_op_parallelism` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `output_types`. +
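#### For example:

This op expects `f` as a `@Defun`-decorated function, so it is rarely called directly; the sketch below (assuming TF 2.x eager execution) uses tf.data.Dataset.reduce, the public API backed by this op, to sum a small dataset.

```python
import tensorflow as tf

ds = tf.data.Dataset.range(5)  # elements 0, 1, 2, 3, 4 (int64)

total = ds.reduce(tf.constant(0, dtype=tf.int64),
                  lambda state, x: state + x)
print(total)  # tf.Tensor(10, shape=(), dtype=int64)
```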
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReduceJoin.md b/site/en/api_docs/python/tf/raw_ops/ReduceJoin.md new file mode 100644 index 00000000000..47f633cf874 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReduceJoin.md @@ -0,0 +1,129 @@ +description: Joins a string Tensor across the given dimensions. + +
+ + +
+ +# tf.raw_ops.ReduceJoin + + + + + + + + + +Joins a string Tensor across the given dimensions. + + + + + + + + + +Computes the string join across dimensions in the given string Tensor of shape +`[\\(d_0, d_1, ..., d_{n-1}\\)]`. Returns a new Tensor created by joining the input +strings with the given separator (default: empty string). Negative indices are +counted backwards from the end, with `-1` being equivalent to `n - 1`. If +indices are not specified, joins across all dimensions beginning from `n - 1` +through `0`. + +#### For example: + + + +```python +# tensor `a` is [["a", "b"], ["c", "d"]] +tf.reduce_join(a, 0) ==> ["ac", "bd"] +tf.reduce_join(a, 1) ==> ["ab", "cd"] +tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"] +tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"] +tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]] +tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]] +tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"] +tf.reduce_join(a, [0, 1]) ==> "acbd" +tf.reduce_join(a, [1, 0]) ==> "abcd" +tf.reduce_join(a, []) ==> [["a", "b"], ["c", "d"]] +tf.reduce_join(a) = tf.reduce_join(a, [1, 0]) ==> "abcd" +``` + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor` of type `string`. +The input to be joined. All reduced indices must have non-zero size. +
+`reduction_indices` + +A `Tensor` of type `int32`. +The dimensions to reduce over. Dimensions are reduced in the +order specified. Omitting `reduction_indices` is equivalent to passing +`[n-1, n-2, ..., 0]`. Negative indices from `-n` to `-1` are supported. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If `True`, retain reduced dimensions with length `1`. +
+`separator` + +An optional `string`. Defaults to `""`. +The separator to use when joining. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
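+
+#### Example:
+
+A short eager sketch (added for illustration; assumes TensorFlow 2.x) using the
+public wrapper `tf.strings.reduce_join`, which corresponds to this op:
+
+```python
+import tensorflow as tf
+
+a = tf.constant([["a", "b"], ["c", "d"]])
+print(tf.strings.reduce_join(a, axis=0).numpy())                 # [b'ac' b'bd']
+print(tf.strings.reduce_join(a, axis=1, separator=".").numpy())  # [b'a.b' b'c.d']
+```
+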
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RefEnter.md b/site/en/api_docs/python/tf/raw_ops/RefEnter.md new file mode 100644 index 00000000000..e30defc20ac --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RefEnter.md @@ -0,0 +1,105 @@ +description: Creates or finds a child frame, and makes data available to the child frame. + +
+ + +
+ +# tf.raw_ops.RefEnter + + + + + + + + + +Creates or finds a child frame, and makes `data` available to the child frame. + + + + + + + + + +The unique `frame_name` is used by the `Executor` to identify frames. If +`is_constant` is true, `output` is a constant in the child frame; otherwise +it may be changed in the child frame. At most `parallel_iterations` iterations +are run in parallel in the child frame. + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A mutable `Tensor`. +The tensor to be made available to the child frame. +
+`frame_name` + +A `string`. The name of the child frame. +
+`is_constant` + +An optional `bool`. Defaults to `False`. +If true, the output is constant within the child frame. +
+`parallel_iterations` + +An optional `int`. Defaults to `10`. +The number of iterations allowed to run in parallel. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RefExit.md b/site/en/api_docs/python/tf/raw_ops/RefExit.md new file mode 100644 index 00000000000..cb5c60d1358 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RefExit.md @@ -0,0 +1,79 @@ +description: Exits the current frame to its parent frame. + +
+ + +
+ +# tf.raw_ops.RefExit + + + + + + + + + +Exits the current frame to its parent frame. + + + + + + + + + +Exit makes its input `data` available to the parent frame. + + + + + + + + + + + + + +
+`data` + +A mutable `Tensor`. +The tensor to be made available to the parent frame. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RefIdentity.md b/site/en/api_docs/python/tf/raw_ops/RefIdentity.md new file mode 100644 index 00000000000..6123cd072e6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RefIdentity.md @@ -0,0 +1,77 @@ +description: Return the same ref tensor as the input ref tensor. + +
+ + +
+ +# tf.raw_ops.RefIdentity + + + + + + + + + +Return the same ref tensor as the input ref tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A mutable `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RefMerge.md b/site/en/api_docs/python/tf/raw_ops/RefMerge.md new file mode 100644 index 00000000000..39c28892533 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RefMerge.md @@ -0,0 +1,97 @@ +description: Forwards the value of an available tensor from inputs to output. + +
+ + +
+ +# tf.raw_ops.RefMerge + + + + + + + + + +Forwards the value of an available tensor from `inputs` to `output`. + + + + + + + + + +`Merge` waits for at least one of the tensors in `inputs` to become available. +It is usually combined with `Switch` to implement branching. + +`Merge` forwards the first tensor for become available to `output`, and sets +`value_index` to its index in `inputs`. + + + + + + + + + + + + + +
+`inputs` + +A list of at least 1 mutable `Tensor` objects with the same type. +The input tensors, exactly one of which will become available. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, value_index). +
+`output` + +A mutable `Tensor`. Has the same type as `inputs`. +
+`value_index` + +A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RefNextIteration.md b/site/en/api_docs/python/tf/raw_ops/RefNextIteration.md new file mode 100644 index 00000000000..ba59c2b2bef --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RefNextIteration.md @@ -0,0 +1,78 @@ +description: Makes its input available to the next iteration. + +
+ + +
+ +# tf.raw_ops.RefNextIteration + + + + + + + + + +Makes its input available to the next iteration. + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A mutable `Tensor`. +The tensor to be made available to the next iteration. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RefSelect.md b/site/en/api_docs/python/tf/raw_ops/RefSelect.md new file mode 100644 index 00000000000..9f6b2b0a928 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RefSelect.md @@ -0,0 +1,86 @@ +description: Forwards the indexth element of inputs to output. + +
+ + +
+ +# tf.raw_ops.RefSelect + + + + + + + + + +Forwards the `index`th element of `inputs` to `output`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`index` + +A `Tensor` of type `int32`. +A scalar that determines the input that gets selected. +
+`inputs` + +A list of at least 1 mutable `Tensor` objects with the same type. +A list of ref tensors, one of which will be forwarded to `output`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `inputs`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RefSwitch.md b/site/en/api_docs/python/tf/raw_ops/RefSwitch.md new file mode 100644 index 00000000000..04b7453e9da --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RefSwitch.md @@ -0,0 +1,104 @@ +description: Forwards the ref tensor data to the output port determined by pred. + +
+ + +
+ +# tf.raw_ops.RefSwitch + + + + + + + + + +Forwards the ref tensor `data` to the output port determined by `pred`. + + + + + + + + + +If `pred` is true, the `data` input is forwarded to `output_true`. Otherwise, +the data goes to `output_false`. + +See also `Switch` and `Merge`. + + + + + + + + + + + + + + + + +
+`data` + +A mutable `Tensor`. +The ref tensor to be forwarded to the appropriate output. +
+`pred` + +A `Tensor` of type `bool`. +A scalar that specifies which output port will receive data. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_false, output_true). +
+`output_false` + +A mutable `Tensor`. Has the same type as `data`. +
+`output_true` + +A mutable `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RegexFullMatch.md b/site/en/api_docs/python/tf/raw_ops/RegexFullMatch.md new file mode 100644 index 00000000000..80ce747a120 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RegexFullMatch.md @@ -0,0 +1,103 @@ +description: Check if the input matches the regex pattern. + +
+ + +
+ +# tf.raw_ops.RegexFullMatch + + + + + + + + + +Check if the input matches the regex pattern. + + + + + + + + + +The input is a string tensor of any shape. The pattern is a scalar +string tensor which is applied to every element of the input tensor. +The boolean values (True or False) of the output tensor indicate +if the input matches the regex pattern provided. + +The pattern follows the re2 syntax (https://github.com/google/re2/wiki/Syntax) + +#### Examples: + + + +``` +>>> tf.strings.regex_full_match(["TF lib", "lib TF"], ".*lib$") + +>>> tf.strings.regex_full_match(["TF lib", "lib TF"], ".*TF$") + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +A string tensor of the text to be processed. +
+`pattern` + +A `Tensor` of type `string`. +A scalar string tensor containing the regular expression to match the input. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
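+
+#### Example:
+
+For completeness, a runnable eager snippet (an illustrative addition, assuming
+TensorFlow 2.x) via the public wrapper `tf.strings.regex_full_match`:
+
+```python
+import tensorflow as tf
+
+m = tf.strings.regex_full_match(["TF lib", "lib TF"], ".*lib$")
+print(m.numpy())  # [ True False]
+```
+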
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RegexReplace.md b/site/en/api_docs/python/tf/raw_ops/RegexReplace.md new file mode 100644 index 00000000000..70e85acf004 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RegexReplace.md @@ -0,0 +1,59 @@ +description: Replaces matches of the pattern regular expression in input with the + +
+ + +
+ +# tf.raw_ops.RegexReplace + + + + + + + + + +Replaces matches of the `pattern` regular expression in `input` with the + + + + + + + + +replacement string provided in `rewrite`. + + It follows the re2 syntax (https://github.com/google/re2/wiki/Syntax) + + Args: + input: A `Tensor` of type `string`. The text to be processed. + pattern: A `Tensor` of type `string`. + The regular expression to be matched in the `input` strings. + rewrite: A `Tensor` of type `string`. + The rewrite string to be substituted for the `pattern` expression where it is + matched in the `input` strings. + replace_global: An optional `bool`. Defaults to `True`. + If True, the replacement is global (that is, all matches of the `pattern` regular + expression in each input string are rewritten), otherwise the `rewrite` + substitution is only made for the first `pattern` match. + name: A name for the operation (optional). + + Returns: + A `Tensor` of type `string`. + \ No newline at end of file diff --git a/site/en/api_docs/python/tf/raw_ops/Relu.md b/site/en/api_docs/python/tf/raw_ops/Relu.md new file mode 100644 index 00000000000..68568347e78 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Relu.md @@ -0,0 +1,81 @@ +description: Computes rectified linear: max(features, 0). + +
+ + +
+ +# tf.raw_ops.Relu + + + + + + + + + +Computes rectified linear: `max(features, 0)`. + + + + + + + + + +See: https://en.wikipedia.org/wiki/Rectifier_(neural_networks) +Example usage: +>>> tf.nn.relu([-2., 0., -0., 3.]).numpy() +array([ 0., 0., -0., 3.], dtype=float32) + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Relu6.md b/site/en/api_docs/python/tf/raw_ops/Relu6.md new file mode 100644 index 00000000000..decf33c792a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Relu6.md @@ -0,0 +1,77 @@ +description: Computes rectified linear 6: min(max(features, 0), 6). + +
+ + +
+ +# tf.raw_ops.Relu6 + + + + + + + + + +Computes rectified linear 6: `min(max(features, 0), 6)`. + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
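+
+#### Example:
+
+A minimal eager sketch (illustrative addition, assuming TensorFlow 2.x) showing
+the clipping at 0 and 6:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-3.0, 0.0, 2.0, 9.0])
+print(tf.raw_ops.Relu6(features=x).numpy())  # [0. 0. 2. 6.]
+```
+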
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Relu6Grad.md b/site/en/api_docs/python/tf/raw_ops/Relu6Grad.md new file mode 100644 index 00000000000..29c1be583f0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Relu6Grad.md @@ -0,0 +1,87 @@ +description: Computes rectified linear 6 gradients for a Relu6 operation. + +
+ + +
+ +# tf.raw_ops.Relu6Grad + + + + + + + + + +Computes rectified linear 6 gradients for a Relu6 operation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +The backpropagated gradients to the corresponding Relu6 operation. +
+`features` + +A `Tensor`. Must have the same type as `gradients`. +The features passed as input to the corresponding Relu6 operation, or +its output; using either one produces the same result. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `gradients`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReluGrad.md b/site/en/api_docs/python/tf/raw_ops/ReluGrad.md new file mode 100644 index 00000000000..0a610d4e060 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReluGrad.md @@ -0,0 +1,87 @@ +description: Computes rectified linear gradients for a Relu operation. + +
+ + +
+ +# tf.raw_ops.ReluGrad + + + + + + + + + +Computes rectified linear gradients for a Relu operation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +The backpropagated gradients to the corresponding Relu operation. +
+`features` + +A `Tensor`. Must have the same type as `gradients`. +The features passed as input to the corresponding Relu operation, OR +the outputs of that operation (both work equivalently). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `gradients`. +
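+
+#### Example:
+
+The sketch below (an illustrative addition, assuming TensorFlow 2.x eager
+execution) shows that the incoming gradient is passed through only where the
+corresponding feature is positive:
+
+```python
+import tensorflow as tf
+
+grads = tf.constant([1.0, 1.0, 1.0])   # backpropagated gradients
+feats = tf.constant([-2.0, 0.0, 3.0])  # inputs to the original Relu
+print(tf.raw_ops.ReluGrad(gradients=grads, features=feats).numpy())  # [0. 0. 1.]
+```
+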
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RemoteCall.md b/site/en/api_docs/python/tf/raw_ops/RemoteCall.md new file mode 100644 index 00000000000..d8993a9efef --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RemoteCall.md @@ -0,0 +1,100 @@ +description: Runs function f on a remote device indicated by target. + +
+ + +
+ +# tf.raw_ops.RemoteCall + + + + + + + + + +Runs function `f` on a remote device indicated by `target`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`target` + +A `Tensor` of type `string`. +A fully specified device name where we want to run the function. +
+`args` + +A list of `Tensor` objects. A list of arguments for the function. +
+`Tout` + +A list of `tf.DTypes` that has length `>= 1`. +The type list for the return values. +
+`f` + +A function decorated with @Defun. The function to run remotely. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RepeatDataset.md b/site/en/api_docs/python/tf/raw_ops/RepeatDataset.md new file mode 100644 index 00000000000..7a1a9570422 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RepeatDataset.md @@ -0,0 +1,100 @@ +description: Creates a dataset that emits the outputs of input_dataset count times. + +
+ + +
+ +# tf.raw_ops.RepeatDataset + + + + + + + + + +Creates a dataset that emits the outputs of `input_dataset` `count` times. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`count` + +A `Tensor` of type `int64`. +A scalar representing the number of times that `input_dataset` should +be repeated. A value of `-1` indicates that it should be repeated infinitely. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
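+
+#### Example:
+
+As an informal illustration (assuming TensorFlow 2.x), the public
+`tf.data.Dataset.repeat` method, which is backed by this op, produces the
+repeated element stream:
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(3).repeat(2)
+print(list(ds.as_numpy_iterator()))  # [0, 1, 2, 0, 1, 2]
+```
+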
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RequantizationRange.md b/site/en/api_docs/python/tf/raw_ops/RequantizationRange.md new file mode 100644 index 00000000000..9508c542cbd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RequantizationRange.md @@ -0,0 +1,111 @@ +description: Computes a range that covers the actual values present in a quantized tensor. + +
+ + +
+ +# tf.raw_ops.RequantizationRange + + + + + + + + + +Computes a range that covers the actual values present in a quantized tensor. + + + + + + + + + +Given a quantized tensor described by `(input, input_min, input_max)`, outputs a +range that covers the actual values present in that tensor. This op is typically +used to produce the `requested_output_min` and `requested_output_max` for +`Requantize`. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`input_min` + +A `Tensor` of type `float32`. +The float value that the minimum quantized input value represents. +
+`input_max` + +A `Tensor` of type `float32`. +The float value that the maximum quantized input value represents. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_min, output_max). +
+`output_min` + +A `Tensor` of type `float32`. +
+`output_max` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RequantizationRangePerChannel.md b/site/en/api_docs/python/tf/raw_ops/RequantizationRangePerChannel.md new file mode 100644 index 00000000000..329973b12e6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RequantizationRangePerChannel.md @@ -0,0 +1,117 @@ +description: Computes requantization range per channel. + +
+ + +
+ +# tf.raw_ops.RequantizationRangePerChannel + + + + + + + + + +Computes requantization range per channel. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original input tensor. +
+`input_min` + +A `Tensor` of type `float32`. +The minimum value of the input tensor +
+`input_max` + +A `Tensor` of type `float32`. +The maximum value of the input tensor. +
+`clip_value_max` + +A `float`. +The maximum value of the output that needs to be clipped. +Example: set this to 6 for Relu6. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_min, output_max). +
+`output_min` + +A `Tensor` of type `float32`. +
+`output_max` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Requantize.md b/site/en/api_docs/python/tf/raw_ops/Requantize.md new file mode 100644 index 00000000000..f46b70a49c0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Requantize.md @@ -0,0 +1,146 @@ +description: Converts the quantized input tensor into a lower-precision output. + +
+ + +
+ +# tf.raw_ops.Requantize + + + + + + + + + +Converts the quantized `input` tensor into a lower-precision `output`. + + + + + + + + + +Converts the quantized `input` tensor into a lower-precision `output`, using the +output range specified with `requested_output_min` and `requested_output_max`. + +`[input_min, input_max]` are scalar floats that specify the range for the float +interpretation of the `input` data. For example, if `input_min` is -1.0f and +`input_max` is 1.0f, and we are dealing with `quint16` quantized data, then a 0 +value in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +
+`input_min` + +A `Tensor` of type `float32`. +The float value that the minimum quantized input value represents. +
+`input_max` + +A `Tensor` of type `float32`. +The float value that the maximum quantized input value represents. +
+`requested_output_min` + +A `Tensor` of type `float32`. +The float value that the minimum quantized output value represents. +
+`requested_output_max` + +A `Tensor` of type `float32`. +The float value that the maximum quantized output value represents. +
+`out_type` + +A tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. +The type of the output. Should be a lower bit depth than Tinput. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_min, output_max). +
+`output` + +A `Tensor` of type `out_type`. +
+`output_min` + +A `Tensor` of type `float32`. +
+`output_max` + +A `Tensor` of type `float32`. +
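+
+#### Example:
+
+The range convention described above can be spelled out with plain arithmetic.
+The snippet below is only an illustration of that affine mapping (it does not
+call the op and ignores kernel-level rounding details):
+
+```python
+# quint16 stores values 0..65535; the floats input_min/input_max define the
+# ends of that range, so each step covers (input_max - input_min) / 65535.
+input_min, input_max = -1.0, 1.0
+scale = (input_max - input_min) / 65535.0
+
+def dequantize(q):
+    return input_min + q * scale
+
+print(dequantize(0))      # -1.0
+print(dequantize(65535))  # ~1.0 (up to floating-point rounding)
+```
+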
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RequantizePerChannel.md b/site/en/api_docs/python/tf/raw_ops/RequantizePerChannel.md new file mode 100644 index 00000000000..43e3e434542 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RequantizePerChannel.md @@ -0,0 +1,140 @@ +description: Requantizes input with min and max values known per channel. + +
+ + +
+ +# tf.raw_ops.RequantizePerChannel + + + + + + + + + +Requantizes input with min and max values known per channel. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. +The original input tensor. +
+`input_min` + +A `Tensor` of type `float32`. +The minimum value of the input tensor +
+`input_max` + +A `Tensor` of type `float32`. +The maximum value of the input tensor. +
+`requested_output_min` + +A `Tensor` of type `float32`. +The minimum value of the output tensor requested. +
+`requested_output_max` + +A `Tensor` of type `float32`. +The maximum value of the output tensor requested. +
+`out_type` + +An optional tf.DType from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to tf.quint8. +The quantized type of output tensor that needs to be converted. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output, output_min, output_max). +
+`output` + +A `Tensor` of type `out_type`. +
+`output_min` + +A `Tensor` of type `float32`. +
+`output_max` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Reshape.md b/site/en/api_docs/python/tf/raw_ops/Reshape.md new file mode 100644 index 00000000000..235fe016eb6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Reshape.md @@ -0,0 +1,146 @@ +description: Reshapes a tensor. + +
+ + +
+ +# tf.raw_ops.Reshape + + + + + + + + + +Reshapes a tensor. + + + + + + + + + +Given `tensor`, this operation returns a tensor that has the same values +as `tensor` with shape `shape`. + +If one component of 1-D tensor `shape` is the special value -1, the size of that +dimension is computed so that the total size remains constant. In particular, a +`shape` of `[-1]` flattens into 1-D. At most one component of `shape` may be +unknown. + +The `shape` must be 1-D and the operation returns a tensor with shape +`shape` filled with the values of `tensor`. In this case, the number of elements +implied by `shape` must be the same as the number of elements in `tensor`. + +It is an error if `shape` is not 1-D. + +#### For example: + + + +``` +# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9] +# tensor 't' has shape [9] +reshape(t, [3, 3]) ==> [[1, 2, 3], + [4, 5, 6], + [7, 8, 9]] + +# tensor 't' is [[[1, 1], [2, 2]], +# [[3, 3], [4, 4]]] +# tensor 't' has shape [2, 2, 2] +reshape(t, [2, 4]) ==> [[1, 1, 2, 2], + [3, 3, 4, 4]] + +# tensor 't' is [[[1, 1, 1], +# [2, 2, 2]], +# [[3, 3, 3], +# [4, 4, 4]], +# [[5, 5, 5], +# [6, 6, 6]]] +# tensor 't' has shape [3, 2, 3] +# pass '[-1]' to flatten 't' +reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6] + +# -1 can also be used to infer the shape + +# -1 is inferred to be 9: +reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], + [4, 4, 4, 5, 5, 5, 6, 6, 6]] +# -1 is inferred to be 2: +reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], + [4, 4, 4, 5, 5, 5, 6, 6, 6]] +# -1 is inferred to be 3: +reshape(t, [ 2, -1, 3]) ==> [[[1, 1, 1], + [2, 2, 2], + [3, 3, 3]], + [[4, 4, 4], + [5, 5, 5], + [6, 6, 6]]] + +# tensor 't' is [7] +# shape `[]` reshapes to a scalar +reshape(t, []) ==> 7 +``` + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Defines the shape of the output tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
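+
+#### Example:
+
+An eager version of the pseudo-code above (illustrative addition, assuming
+TensorFlow 2.x), including shape inference with `-1`:
+
+```python
+import tensorflow as tf
+
+t = tf.range(9)                      # shape [9]
+print(tf.reshape(t, [3, 3]).numpy())
+# [[0 1 2]
+#  [3 4 5]
+#  [6 7 8]]
+print(tf.reshape(t, [-1, 3]).shape)  # (3, 3); the -1 dimension is inferred
+```
+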
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResizeArea.md b/site/en/api_docs/python/tf/raw_ops/ResizeArea.md new file mode 100644 index 00000000000..6acdd900261 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResizeArea.md @@ -0,0 +1,106 @@ +description: Resize images to size using area interpolation. + +
+ + +
+ +# tf.raw_ops.ResizeArea + + + + + + + + + +Resize `images` to `size` using area interpolation. + + + + + + + + + +Input images can be of different types but output images are always float. + +The range of pixel values for the output image might be slightly different +from the range for the input image because of limited numerical precision. +To guarantee an output range, for example `[0.0, 1.0]`, apply +tf.clip_by_value to the output. + +Each output pixel is computed by first transforming the pixel's footprint into +the input tensor and then averaging the pixels that intersect the footprint. An +input pixel's contribution to the average is weighted by the fraction of its +area that intersects the footprint. This is the same as OpenCV's INTER_AREA. + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `half`, `float32`, `float64`. +4-D with shape `[batch, height, width, channels]`. +
+`size` + +A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The +new size for the images. +
+`align_corners` + +An optional `bool`. Defaults to `False`. +If true, the centers of the 4 corner pixels of the input and output tensors are +aligned, preserving the values at the corner pixels. Defaults to false. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
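+
+#### Example:
+
+A small eager check (illustrative addition, assuming TensorFlow 2.x): an exact
+2x downscale averages the four pixels covered by each output footprint.
+
+```python
+import tensorflow as tf
+
+img = tf.constant([[[[1.0], [2.0]],
+                    [[3.0], [4.0]]]])                 # shape [1, 2, 2, 1]
+out = tf.raw_ops.ResizeArea(images=img, size=[1, 1])  # output is float32
+print(out.numpy().squeeze())  # 2.5, the mean of the four input pixels
+```
+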
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResizeBicubic.md b/site/en/api_docs/python/tf/raw_ops/ResizeBicubic.md new file mode 100644 index 00000000000..fbca84e1220 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResizeBicubic.md @@ -0,0 +1,103 @@ +description: Resize images to size using bicubic interpolation. + +
+ + +
+ +# tf.raw_ops.ResizeBicubic + + + + + + + + + +Resize `images` to `size` using bicubic interpolation. + + + + + + + + + +Input images can be of different types but output images are always float. + + + + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `half`, `float32`, `float64`. +4-D with shape `[batch, height, width, channels]`. +
+`size` + +A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The +new size for the images. +
+`align_corners` + +An optional `bool`. Defaults to `False`. +If true, the centers of the 4 corner pixels of the input and output tensors are +aligned, preserving the values at the corner pixels. Defaults to false. +
+`half_pixel_centers` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResizeBicubicGrad.md b/site/en/api_docs/python/tf/raw_ops/ResizeBicubicGrad.md new file mode 100644 index 00000000000..fcb5baed6f6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResizeBicubicGrad.md @@ -0,0 +1,104 @@ +description: Computes the gradient of bicubic interpolation. + +
+ + +
+ +# tf.raw_ops.ResizeBicubicGrad + + + + + + + + + +Computes the gradient of bicubic interpolation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`grads` + +A `Tensor` of type `float32`. +4-D with shape `[batch, height, width, channels]`. +
+`original_image` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +4-D with shape `[batch, orig_height, orig_width, channels]`, +The image tensor that was resized. +
+`align_corners` + +An optional `bool`. Defaults to `False`. +If true, the centers of the 4 corner pixels of the input and grad tensors are +aligned. Defaults to false. +
+`half_pixel_centers` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `original_image`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResizeBilinear.md b/site/en/api_docs/python/tf/raw_ops/ResizeBilinear.md new file mode 100644 index 00000000000..c74e0c6a487 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResizeBilinear.md @@ -0,0 +1,103 @@ +description: Resize images to size using bilinear interpolation. + +
+ + +
+ +# tf.raw_ops.ResizeBilinear + + + + + + + + + +Resize `images` to `size` using bilinear interpolation. + + + + + + + + + +Input images can be of different types but output images are always float. + + + + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +4-D with shape `[batch, height, width, channels]`. +
+`size` + +A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The +new size for the images. +
+`align_corners` + +An optional `bool`. Defaults to `False`. +If true, the centers of the 4 corner pixels of the input and output tensors are +aligned, preserving the values at the corner pixels. Defaults to false. +
+`half_pixel_centers` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
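+
+#### Example:
+
+A shape-level sketch (illustrative addition, assuming TensorFlow 2.x); note that
+the output dtype is always `float32` regardless of the input type, and
+`half_pixel_centers=True` selects the pixel-center sampling convention:
+
+```python
+import tensorflow as tf
+
+img = tf.constant([[[[0], [1]],
+                    [[2], [3]]]], dtype=tf.uint8)     # shape [1, 2, 2, 1]
+out = tf.raw_ops.ResizeBilinear(images=img, size=[4, 4],
+                                half_pixel_centers=True)
+print(out.shape, out.dtype)  # (1, 4, 4, 1) <dtype: 'float32'>
+```
+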
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResizeBilinearGrad.md b/site/en/api_docs/python/tf/raw_ops/ResizeBilinearGrad.md new file mode 100644 index 00000000000..2ac733e71d5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResizeBilinearGrad.md @@ -0,0 +1,104 @@ +description: Computes the gradient of bilinear interpolation. + +
+ + +
+ +# tf.raw_ops.ResizeBilinearGrad + + + + + + + + + +Computes the gradient of bilinear interpolation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`grads` + +A `Tensor` of type `float32`. +4-D with shape `[batch, height, width, channels]`. +
+`original_image` + +A `Tensor`. Must be one of the following types: `float32`, `bfloat16`, `half`, `float64`. +4-D with shape `[batch, orig_height, orig_width, channels]`, +The image tensor that was resized. +
+`align_corners` + +An optional `bool`. Defaults to `False`. +If true, the centers of the 4 corner pixels of the input and grad tensors are +aligned. Defaults to false. +
+`half_pixel_centers` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `original_image`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResizeNearestNeighbor.md b/site/en/api_docs/python/tf/raw_ops/ResizeNearestNeighbor.md new file mode 100644 index 00000000000..92dfe9891d7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResizeNearestNeighbor.md @@ -0,0 +1,102 @@ +description: Resize images to size using nearest neighbor interpolation. + +
+ + +
+ +# tf.raw_ops.ResizeNearestNeighbor + + + + + + + + + +Resize `images` to `size` using nearest neighbor interpolation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `half`, `float32`, `float64`. +4-D with shape `[batch, height, width, channels]`. +
+`size` + +A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The +new size for the images. +
+`align_corners` + +An optional `bool`. Defaults to `False`. +If true, the centers of the 4 corner pixels of the input and output tensors are +aligned, preserving the values at the corner pixels. Defaults to false. +
+`half_pixel_centers` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `images`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResizeNearestNeighborGrad.md b/site/en/api_docs/python/tf/raw_ops/ResizeNearestNeighborGrad.md new file mode 100644 index 00000000000..f08b2a3e04d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResizeNearestNeighborGrad.md @@ -0,0 +1,102 @@ +description: Computes the gradient of nearest neighbor interpolation. + +
+ + +
+ +# tf.raw_ops.ResizeNearestNeighborGrad + + + + + + + + + +Computes the gradient of nearest neighbor interpolation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`grads` + +A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `half`, `float32`, `float64`. +4-D with shape `[batch, height, width, channels]`. +
+`size` + +A 1-D int32 Tensor of 2 elements: `orig_height, orig_width`. The +original input size. +
+`align_corners` + +An optional `bool`. Defaults to `False`. +If true, the centers of the 4 corner pixels of the input and grad tensors are +aligned. Defaults to false. +
+`half_pixel_centers` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `grads`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorApplyGradient.md b/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorApplyGradient.md new file mode 100644 index 00000000000..026974b0225 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorApplyGradient.md @@ -0,0 +1,94 @@ +description: Applies a gradient to a given accumulator. + +
+ + +
+ +# tf.raw_ops.ResourceAccumulatorApplyGradient + + + + + + + + + +Applies a gradient to a given accumulator. + + + + + + + + + +Does not add if local_step is lesser than the accumulator's global_step. + + + + + + + + + + + + + + + + + + + +
+`handle`
+
+A `Tensor` of type `resource`. The handle to an accumulator.
+
+`local_step` + +A `Tensor` of type `int64`. +The local_step value at which the gradient was computed. +
+`gradient` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A tensor of the gradient to be accumulated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorNumAccumulated.md b/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorNumAccumulated.md new file mode 100644 index 00000000000..01ba48a18ab --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorNumAccumulated.md @@ -0,0 +1,77 @@ +description: Returns the number of gradients aggregated in the given accumulators. + +
+ + +
+ +# tf.raw_ops.ResourceAccumulatorNumAccumulated + + + + + + + + + +Returns the number of gradients aggregated in the given accumulators. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to an accumulator. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorSetGlobalStep.md b/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorSetGlobalStep.md new file mode 100644 index 00000000000..5072ddf1c00 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorSetGlobalStep.md @@ -0,0 +1,87 @@ +description: Updates the accumulator with a new value for global_step. + +
+ + +
+ +# tf.raw_ops.ResourceAccumulatorSetGlobalStep + + + + + + + + + +Updates the accumulator with a new value for global_step. + + + + + + + + + +Logs warning if the accumulator's value is already higher than +new_global_step. + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to an accumulator. +
+`new_global_step` + +A `Tensor` of type `int64`. +The new global_step value to set. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorTakeGradient.md b/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorTakeGradient.md new file mode 100644 index 00000000000..44e9dc5421e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceAccumulatorTakeGradient.md @@ -0,0 +1,99 @@ +description: Extracts the average gradient in the given ConditionalAccumulator. + +
+ + +
+ +# tf.raw_ops.ResourceAccumulatorTakeGradient + + + + + + + + + +Extracts the average gradient in the given ConditionalAccumulator. + + + + + + + + + +The op blocks until sufficient (i.e., more than num_required) +gradients have been accumulated. If the accumulator has already +aggregated more than num_required gradients, it returns the average of +the accumulated gradients. Also automatically increments the recorded +global_step in the accumulator by 1, and resets the aggregate to 0. + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to an accumulator. +
+`num_required` + +A `Tensor` of type `int32`. +Number of gradients required before we return an aggregate. +
+`dtype` + +A tf.DType from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`. +The data type of accumulated gradients. Needs to correspond to the type +of the accumulator. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdaMax.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdaMax.md new file mode 100644 index 00000000000..01263921ecb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdaMax.md @@ -0,0 +1,152 @@ +description: Update '*var' according to the AdaMax algorithm. + +
+ + +
+ +# tf.raw_ops.ResourceApplyAdaMax + + + + + + + + + +Update '*var' according to the AdaMax algorithm. + + + + + + + + + +m_t <- beta1 * m_{t-1} + (1 - beta1) * g +v_t <- max(beta2 * v_{t-1}, abs(g)) +variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`m` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`v` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`beta1_power` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Must be a scalar. +
+`lr` + +A `Tensor`. Must have the same type as `beta1_power`. +Scaling factor. Must be a scalar. +
+`beta1` + +A `Tensor`. Must have the same type as `beta1_power`. +Momentum factor. Must be a scalar. +
+`beta2` + +A `Tensor`. Must have the same type as `beta1_power`. +Momentum factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `beta1_power`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `beta1_power`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, m, and v tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdadelta.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdadelta.md new file mode 100644 index 00000000000..1313fda770a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdadelta.md @@ -0,0 +1,135 @@ +description: Update '*var' according to the adadelta scheme. + +
+ + +
+ +# tf.raw_ops.ResourceApplyAdadelta + + + + + + + + + +Update '*var' according to the adadelta scheme. + + + + + + + + + +accum = rho() * accum + (1 - rho()) * grad.square(); +update = (update_accum + epsilon).sqrt() * (accum + epsilon()).rsqrt() * grad; +update_accum = rho() * update_accum + (1 - rho()) * update.square(); +var -= update; + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum_update` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `lr`. +Decay factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `lr`. +Constant factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var, accum and update_accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdagrad.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdagrad.md new file mode 100644 index 00000000000..1827d11511a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdagrad.md @@ -0,0 +1,118 @@ +description: Update '*var' according to the adagrad scheme. + +
+ + +
+ +# tf.raw_ops.ResourceApplyAdagrad + + + + + + + + + +Update '*var' according to the adagrad scheme. + + + + + + + + + +accum += grad * grad +var -= lr * grad * (1 / sqrt(accum)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`update_slots` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
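+
+#### Example:
+
+A single Adagrad step applied directly through the raw op on eager variable
+handles (illustrative addition, assuming TensorFlow 2.x; the values are
+arbitrary):
+
+```python
+import tensorflow as tf
+
+var = tf.Variable([1.0, 2.0])
+accum = tf.Variable([0.1, 0.1])        # Adagrad accumulator slot
+grad = tf.constant([0.5, 0.5])
+
+tf.raw_ops.ResourceApplyAdagrad(var=var.handle, accum=accum.handle,
+                                lr=tf.constant(0.1), grad=grad)
+print(accum.numpy())  # accum += grad * grad            -> [0.35 0.35]
+print(var.numpy())    # var  -= lr * grad / sqrt(accum) -> ~[0.9155 1.9155]
+```
+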
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdagradDA.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdagradDA.md new file mode 100644 index 00000000000..dff1c4cec52 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdagradDA.md @@ -0,0 +1,143 @@ +description: Update '*var' according to the proximal adagrad scheme. + +
+ + +
+ +# tf.raw_ops.ResourceApplyAdagradDA + + + + + + + + + +Update '*var' according to the proximal adagrad scheme. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`gradient_accumulator` + +A `Tensor` of type `resource`. +Should be from a Variable(). +
+`gradient_squared_accumulator` + +A `Tensor` of type `resource`. +Should be from a Variable(). +
+`grad` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The gradient. +
+`lr` + +A `Tensor`. Must have the same type as `grad`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `grad`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `grad`. +L2 regularization. Must be a scalar. +
+`global_step` + +A `Tensor` of type `int64`. +Training step number. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var and accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdagradV2.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdagradV2.md new file mode 100644 index 00000000000..d255e7e4fd2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdagradV2.md @@ -0,0 +1,127 @@ +description: Update '*var' according to the adagrad scheme. + +
+ + +
+ +# tf.raw_ops.ResourceApplyAdagradV2 + + + + + + + + + +Update '*var' according to the adagrad scheme. + + + + + + + + + +accum += grad * grad +var -= lr * grad * (1 / sqrt(accum)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `lr`. +Constant factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`update_slots` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdam.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdam.md new file mode 100644 index 00000000000..6d45bd343d8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdam.md @@ -0,0 +1,169 @@ +description: Update '*var' according to the Adam algorithm. + +
+ + +
+ +# tf.raw_ops.ResourceApplyAdam + + + + + + + + + +Update '*var' according to the Adam algorithm. + + + + + + + + + +$$\text{lr}_t := \mathrm{learning_rate} * \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$$ +$$m_t := \beta_1 * m_{t-1} + (1 - \beta_1) * g$$ +$$v_t := \beta_2 * v_{t-1} + (1 - \beta_2) * g * g$$ +$$\text{variable} := \text{variable} - \text{lr}_t * m_t / (\sqrt{v_t} + \epsilon)$$ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`m` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`v` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`beta1_power` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Must be a scalar. +
+`beta2_power` + +A `Tensor`. Must have the same type as `beta1_power`. +Must be a scalar. +
+`lr` + +A `Tensor`. Must have the same type as `beta1_power`. +Scaling factor. Must be a scalar. +
+`beta1` + +A `Tensor`. Must have the same type as `beta1_power`. +Momentum factor. Must be a scalar. +
+`beta2` + +A `Tensor`. Must have the same type as `beta1_power`. +Momentum factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `beta1_power`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `beta1_power`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, m, and v tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`use_nesterov` + +An optional `bool`. Defaults to `False`. +If `True`, uses the nesterov update. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
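+
+#### Example:
+
+The sketch below applies one Adam step through the raw op on eager variable
+handles (an illustrative addition, assuming TensorFlow 2.x; note that the caller
+supplies `beta1_power = beta1**t` and `beta2_power = beta2**t` for step `t`):
+
+```python
+import tensorflow as tf
+
+var = tf.Variable([1.0, 2.0])
+m = tf.Variable([0.0, 0.0])            # first-moment slot
+v = tf.Variable([0.0, 0.0])            # second-moment slot
+grad = tf.constant([0.1, 0.1])
+
+tf.raw_ops.ResourceApplyAdam(
+    var=var.handle, m=m.handle, v=v.handle,
+    beta1_power=tf.constant(0.9), beta2_power=tf.constant(0.999),  # t = 1
+    lr=tf.constant(0.001), beta1=tf.constant(0.9), beta2=tf.constant(0.999),
+    epsilon=tf.constant(1e-7), grad=grad)
+print(var.numpy())  # the first step moves var by roughly lr: ~[0.999 1.999]
+```
+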
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdamWithAmsgrad.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdamWithAmsgrad.md new file mode 100644 index 00000000000..c4fe9eb648a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAdamWithAmsgrad.md @@ -0,0 +1,169 @@ +description: Update '*var' according to the Adam algorithm. + +
+ + +
+ +# tf.raw_ops.ResourceApplyAdamWithAmsgrad + + + + + + + + + +Update '*var' according to the Adam algorithm. + + + + + + + + + +$$\text{lr}_t := \mathrm{learning_rate} * \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$$ +$$m_t := \beta_1 * m_{t-1} + (1 - \beta_1) * g$$ +$$v_t := \beta_2 * v_{t-1} + (1 - \beta_2) * g * g$$ +$$\hat{v}_t := max{\hat{v}_{t-1}, v_t}$$ +$$\text{variable} := \text{variable} - \text{lr}_t * m_t / (\sqrt{\hat{v}_t} + \epsilon)$$ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`m` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`v` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`vhat` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`beta1_power` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Must be a scalar. +
+`beta2_power` + +A `Tensor`. Must have the same type as `beta1_power`. +Must be a scalar. +
+`lr` + +A `Tensor`. Must have the same type as `beta1_power`. +Scaling factor. Must be a scalar. +
+`beta1` + +A `Tensor`. Must have the same type as `beta1_power`. +Momentum factor. Must be a scalar. +
+`beta2` + +A `Tensor`. Must have the same type as `beta1_power`. +Momentum factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `beta1_power`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `beta1_power`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, m, and v tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyAddSign.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAddSign.md new file mode 100644 index 00000000000..b30b519e701 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyAddSign.md @@ -0,0 +1,133 @@ +description: Update '*var' according to the AddSign update. + +
+ + +
+ +# tf.raw_ops.ResourceApplyAddSign + + + + + + + + + +Update '*var' according to the AddSign update. + + + + + + + + + +m_t <- beta1 * m_{t-1} + (1 - beta1) * g +update <- (alpha + sign_decay * sign(g) *sign(m)) * g +variable <- variable - lr_t * update + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`m` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`alpha` + +A `Tensor`. Must have the same type as `lr`. Must be a scalar. +
+`sign_decay` + +A `Tensor`. Must have the same type as `lr`. Must be a scalar. +
+`beta` + +A `Tensor`. Must have the same type as `lr`. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and m tensors is +protected by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyCenteredRMSProp.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyCenteredRMSProp.md new file mode 100644 index 00000000000..b6740587621 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyCenteredRMSProp.md @@ -0,0 +1,165 @@ +description: Update '*var' according to the centered RMSProp algorithm. + +
+ + +
+ +# tf.raw_ops.ResourceApplyCenteredRMSProp + + + + + + + + + +Update '*var' according to the centered RMSProp algorithm. + + + + + + + + + +The centered RMSProp algorithm uses an estimate of the centered second moment +(i.e., the variance) for normalization, as opposed to regular RMSProp, which +uses the (uncentered) second moment. This often helps with training, but is +slightly more expensive in terms of computation and memory. + +Note that in dense implementation of this algorithm, mg, ms, and mom will +update even if the grad is zero, but in this sparse implementation, mg, ms, +and mom will not update in iterations during which the grad is zero. + +mean_square = decay * mean_square + (1-decay) * gradient ** 2 +mean_grad = decay * mean_grad + (1-decay) * gradient + +Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2) + +mg <- rho * mg_{t-1} + (1-rho) * grad +ms <- rho * ms_{t-1} + (1-rho) * grad * grad +mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon) +var <- var - mom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`mg` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`ms` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`mom` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `lr`. +Decay rate. Must be a scalar. +
+`momentum` + +A `Tensor`. Must have the same type as `lr`. +
+`epsilon` + +A `Tensor`. Must have the same type as `lr`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, mg, ms, and mom tensors is +protected by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyFtrl.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyFtrl.md new file mode 100644 index 00000000000..0821e28ac36 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyFtrl.md @@ -0,0 +1,146 @@ +description: Update '*var' according to the Ftrl-proximal scheme. + +
+ + +
+ +# tf.raw_ops.ResourceApplyFtrl + + + + + + + + + +Update '*var' according to the Ftrl-proximal scheme. + + + + + + + + + +accum_new = accum + grad * grad +linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var +quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 +var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 +accum = accum_new + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`linear` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`grad` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The gradient. +
+`lr` + +A `Tensor`. Must have the same type as `grad`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `grad`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `grad`. +L2 regularization. Must be a scalar. +
+`lr_power` + +A `Tensor`. Must have the same type as `grad`. +Scaling factor. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
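A literal NumPy transcription of the scheme above can help trace a single step; this is an illustrative sketch with made-up values, not the production kernel.

```python
import numpy as np

def ftrl_step(var, accum, linear, grad, lr, l1, l2, lr_power):
    """One Ftrl-proximal step following the formulas quoted above, in place."""
    accum_new = accum + grad * grad
    linear += grad - (accum_new ** -lr_power - accum ** -lr_power) / lr * var
    quadratic = 1.0 / (accum_new ** lr_power * lr) + 2 * l2
    var[:] = np.where(np.abs(linear) > l1,
                      (np.sign(linear) * l1 - linear) / quadratic, 0.0)
    accum[:] = accum_new
    return var

var = np.array([0.5, -0.5])
accum = np.full_like(var, 0.1)   # start slightly above zero to keep the powers finite
linear = np.zeros_like(var)
ftrl_step(var, accum, linear, grad=np.array([0.2, 0.1]),
          lr=0.05, l1=0.001, l2=0.001, lr_power=-0.5)
```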
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyFtrlV2.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyFtrlV2.md new file mode 100644 index 00000000000..1168ae30183 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyFtrlV2.md @@ -0,0 +1,156 @@ +description: Update '*var' according to the Ftrl-proximal scheme. + +
+ + +
+ +# tf.raw_ops.ResourceApplyFtrlV2 + + + + + + + + + +Update '*var' according to the Ftrl-proximal scheme. + + + + + + + + + +grad_with_shrinkage = grad + 2 * l2_shrinkage * var +accum_new = accum + grad_with_shrinkage * grad_with_shrinkage +linear += grad_with_shrinkage + + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var +quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 +var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 +accum = accum_new + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`linear` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`grad` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The gradient. +
+`lr` + +A `Tensor`. Must have the same type as `grad`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `grad`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `grad`. +L2 shrinkage regularization. Must be a scalar. +
+`l2_shrinkage` + +A `Tensor`. Must have the same type as `grad`. +
+`lr_power` + +A `Tensor`. Must have the same type as `grad`. +Scaling factor. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyGradientDescent.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyGradientDescent.md new file mode 100644 index 00000000000..a7e820aa041 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyGradientDescent.md @@ -0,0 +1,101 @@ +description: Update '*var' by subtracting 'alpha' * 'delta' from it. + +
+ + +
+ +# tf.raw_ops.ResourceApplyGradientDescent + + + + + + + + + +Update '*var' by subtracting 'alpha' * 'delta' from it. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`alpha` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`delta` + +A `Tensor`. Must have the same type as `alpha`. The change. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, the subtraction will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
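Because the update is simply `var -= alpha * delta`, the raw op can be exercised directly on a variable's resource handle; a minimal eager-mode sketch with illustrative values:

```python
import tensorflow as tf

v = tf.Variable([1.0, 2.0, 3.0])            # float32 resource variable

# var <- var - alpha * delta, applied to the variable behind the handle.
tf.raw_ops.ResourceApplyGradientDescent(
    var=v.handle,
    alpha=tf.constant(0.1),
    delta=tf.constant([0.5, 0.5, 0.5]))

print(v.numpy())  # [0.95 1.95 2.95]
```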
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyKerasMomentum.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyKerasMomentum.md new file mode 100644 index 00000000000..d8deb2b592e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyKerasMomentum.md @@ -0,0 +1,132 @@ +description: Update '*var' according to the momentum scheme. + +
+ + +
+ +# tf.raw_ops.ResourceApplyKerasMomentum + + + + + + + + + +Update '*var' according to the momentum scheme. + + + + + + + + + +Set use_nesterov = True if you want to use Nesterov momentum. + +accum = accum * momentum - lr * grad +var += accum + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`momentum` + +A `Tensor`. Must have the same type as `lr`. +Momentum. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`use_nesterov` + +An optional `bool`. Defaults to `False`. +If `True`, the tensor passed to compute grad will be +var + momentum * accum, so in the end, the var you get is actually +var + momentum * accum. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
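In plain NumPy the two quoted lines look like this (illustrative helper and values; the Nesterov option only changes the point at which `grad` is computed):

```python
import numpy as np

def keras_momentum_step(var, accum, grad, lr, momentum):
    """accum = accum * momentum - lr * grad; var += accum (as quoted above)."""
    accum[:] = accum * momentum - lr * grad
    var += accum
    return var

var = np.array([1.0, 2.0])
accum = np.zeros_like(var)
keras_momentum_step(var, accum, grad=np.array([0.1, -0.3]), lr=0.01, momentum=0.9)
```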
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyMomentum.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyMomentum.md new file mode 100644 index 00000000000..124b31a07f2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyMomentum.md @@ -0,0 +1,132 @@ +description: Update '*var' according to the momentum scheme. Set use_nesterov = True if you + +
+ + +
+ +# tf.raw_ops.ResourceApplyMomentum + + + + + + + + + +Update '*var' according to the momentum scheme. Set use_nesterov = True if you + + + + + + + + + +want to use Nesterov momentum. + +accum = accum * momentum + grad +var -= lr * accum + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`momentum` + +A `Tensor`. Must have the same type as `lr`. +Momentum. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`use_nesterov` + +An optional `bool`. Defaults to `False`. +If `True`, the tensor passed to compute grad will be +var - lr * momentum * accum, so in the end, the var you get is actually +var - lr * momentum * accum. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
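This classical formulation accumulates the raw gradient and applies the learning rate at update time; a minimal NumPy sketch of the two quoted lines, with illustrative values:

```python
import numpy as np

def momentum_step(var, accum, grad, lr, momentum):
    """accum = accum * momentum + grad; var -= lr * accum (as quoted above)."""
    accum[:] = accum * momentum + grad
    var -= lr * accum
    return var

var = np.array([1.0, 2.0])
accum = np.zeros_like(var)
momentum_step(var, accum, grad=np.array([0.1, -0.3]), lr=0.01, momentum=0.9)
```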
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyPowerSign.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyPowerSign.md new file mode 100644 index 00000000000..aff4239e16c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyPowerSign.md @@ -0,0 +1,133 @@ +description: Update '*var' according to the AddSign update. + +
+ + +
+ +# tf.raw_ops.ResourceApplyPowerSign + + + + + + + + + +Update '*var' according to the AddSign update. + + + + + + + + + +m_t <- beta1 * m_{t-1} + (1 - beta1) * g +update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g +variable <- variable - lr_t * update + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`m` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`logbase` + +A `Tensor`. Must have the same type as `lr`. Must be a scalar. +
+`sign_decay` + +A `Tensor`. Must have the same type as `lr`. Must be a scalar. +
+`beta` + +A `Tensor`. Must have the same type as `lr`. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and m tensors is +protected by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
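A small NumPy sketch of the three quoted lines, with illustrative hyperparameter values (the helper name is not part of the API):

```python
import numpy as np

def power_sign_step(var, m, grad, lr, logbase, sign_decay, beta):
    """One step of the update quoted above, applied in place."""
    m[:] = beta * m + (1 - beta) * grad
    update = np.exp(logbase * sign_decay * np.sign(grad) * np.sign(m)) * grad
    var -= lr * update
    return var

var = np.array([1.0, -1.0])
m = np.zeros_like(var)
power_sign_step(var, m, grad=np.array([0.2, 0.1]),
                lr=0.01, logbase=np.log(10.0), sign_decay=0.99, beta=0.9)
```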
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyProximalAdagrad.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyProximalAdagrad.md new file mode 100644 index 00000000000..12a2f7e3c64 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyProximalAdagrad.md @@ -0,0 +1,127 @@ +description: Update '*var' and '*accum' according to FOBOS with Adagrad learning rate. + +
+ + +
+ +# tf.raw_ops.ResourceApplyProximalAdagrad + + + + + + + + + +Update '*var' and '*accum' according to FOBOS with Adagrad learning rate. + + + + + + + + + +accum += grad * grad +prox_v = var - lr * grad * (1 / sqrt(accum)) +var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0} + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `lr`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `lr`. +L2 regularization. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var and accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
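Read literally, the three lines above correspond to the following NumPy sketch (illustrative values; a transcription of the formulas as written, not the kernel itself):

```python
import numpy as np

def proximal_adagrad_step(var, accum, grad, lr, l1, l2):
    """FOBOS-with-Adagrad step, transcribed from the formulas quoted above."""
    accum += grad * grad
    prox_v = var - lr * grad / np.sqrt(accum)
    var[:] = np.sign(prox_v) / (1 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0.0)
    return var

var = np.array([1.0, -2.0])
accum = np.full_like(var, 0.1)
proximal_adagrad_step(var, accum, grad=np.array([0.3, 0.1]),
                      lr=0.1, l1=0.01, l2=0.01)
```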
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyProximalGradientDescent.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyProximalGradientDescent.md new file mode 100644 index 00000000000..767b349846b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyProximalGradientDescent.md @@ -0,0 +1,119 @@ +description: Update '*var' as FOBOS algorithm with fixed learning rate. + +
+ + +
+ +# tf.raw_ops.ResourceApplyProximalGradientDescent + + + + + + + + + +Update '*var' as FOBOS algorithm with fixed learning rate. + + + + + + + + + +prox_v = var - alpha * delta +var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0} + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`alpha` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `alpha`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `alpha`. +L2 regularization. Must be a scalar. +
+`delta` + +A `Tensor`. Must have the same type as `alpha`. The change. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the subtraction will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
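The two quoted lines, transcribed into a small NumPy sketch with illustrative values:

```python
import numpy as np

def proximal_gd_step(var, delta, alpha, l1, l2):
    """FOBOS step with a fixed learning rate, as quoted above (in place)."""
    prox_v = var - alpha * delta
    var[:] = np.sign(prox_v) / (1 + alpha * l2) * np.maximum(np.abs(prox_v) - alpha * l1, 0.0)
    return var

var = np.array([1.0, -2.0])
proximal_gd_step(var, delta=np.array([0.3, 0.1]), alpha=0.1, l1=0.01, l2=0.01)
```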
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceApplyRMSProp.md b/site/en/api_docs/python/tf/raw_ops/ResourceApplyRMSProp.md new file mode 100644 index 00000000000..53adec714a9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceApplyRMSProp.md @@ -0,0 +1,149 @@ +description: Update '*var' according to the RMSProp algorithm. + +
+ + +
+ +# tf.raw_ops.ResourceApplyRMSProp + + + + + + + + + +Update '*var' according to the RMSProp algorithm. + + + + + + + + + +Note that in dense implementation of this algorithm, ms and mom will +update even if the grad is zero, but in this sparse implementation, ms +and mom will not update in iterations during which the grad is zero. + +mean_square = decay * mean_square + (1-decay) * gradient ** 2 +Delta = learning_rate * gradient / sqrt(mean_square + epsilon) + +ms <- rho * ms_{t-1} + (1-rho) * grad * grad +mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon) +var <- var - mom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`ms` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`mom` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `lr`. +Decay rate. Must be a scalar. +
+`momentum` + +A `Tensor`. Must have the same type as `lr`. +
+`epsilon` + +A `Tensor`. Must have the same type as `lr`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, ms, and mom tensors is protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
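For reference, a minimal NumPy sketch of the dense update rules quoted above (helper name and values are illustrative):

```python
import numpy as np

def rmsprop_step(var, ms, mom, grad, lr, rho, momentum, epsilon):
    """One dense RMSProp step following the rules quoted above, in place."""
    ms[:] = rho * ms + (1 - rho) * grad * grad
    mom[:] = momentum * mom + lr * grad / np.sqrt(ms + epsilon)
    var -= mom
    return var

var = np.array([1.0, 2.0])
ms, mom = np.zeros_like(var), np.zeros_like(var)
rmsprop_step(var, ms, mom, grad=np.array([0.1, -0.2]),
             lr=0.01, rho=0.9, momentum=0.0, epsilon=1e-7)
```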
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceConditionalAccumulator.md b/site/en/api_docs/python/tf/raw_ops/ResourceConditionalAccumulator.md new file mode 100644 index 00000000000..837289da9c3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceConditionalAccumulator.md @@ -0,0 +1,119 @@ +description: A conditional accumulator for aggregating gradients. + +
+ + +
+ +# tf.raw_ops.ResourceConditionalAccumulator + + + + + + + + + +A conditional accumulator for aggregating gradients. + + + + + + + + + +The accumulator accepts gradients marked with local_step greater or +equal to the most recent global_step known to the accumulator. The +average can be extracted from the accumulator, provided sufficient +gradients have been accumulated. Extracting the average automatically +resets the aggregate to 0, and increments the global_step recorded by +the accumulator. +This is a resource version of ConditionalAccumulator that will work in TF2.0 +with tf.cond version 2. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +A tf.DType from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`. +The type of the value being accumulated. +
+`shape` + +A tf.TensorShape or list of `ints`. +The shape of the values, can be [], in which case shape is unknown. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this accumulator is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this accumulator will be shared under the +given name across multiple sessions. +
+`reduction_type` + +An optional `string` from: `"MEAN", "SUM"`. Defaults to `"MEAN"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceCountUpTo.md b/site/en/api_docs/python/tf/raw_ops/ResourceCountUpTo.md new file mode 100644 index 00000000000..d4fb4054614 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceCountUpTo.md @@ -0,0 +1,94 @@ +description: Increments variable pointed to by 'resource' until it reaches 'limit'. + +
+ + +
+ +# tf.raw_ops.ResourceCountUpTo + + + + + + + + + +Increments variable pointed to by 'resource' until it reaches 'limit'. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +Should be from a scalar `Variable` node. +
+`limit` + +An `int`. +If incrementing ref would bring it above limit, instead generates an +'OutOfRange' error. +
+`T` + +A tf.DType from: `tf.int32, tf.int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `T`. +
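A minimal eager-mode sketch of driving the op from a scalar integer variable (values are illustrative); once another increment would push the variable past `limit`, the call raises an `OutOfRange` error:

```python
import tensorflow as tf

counter = tf.Variable(0, dtype=tf.int32)   # scalar int32 resource variable

for _ in range(3):
    # Increments the variable; errors with OutOfRange once `limit` would be exceeded.
    tf.raw_ops.ResourceCountUpTo(resource=counter.handle, limit=3, T=tf.int32)

print(counter.numpy())  # 3
```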
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceGather.md b/site/en/api_docs/python/tf/raw_ops/ResourceGather.md new file mode 100644 index 00000000000..8f3478efdc9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceGather.md @@ -0,0 +1,118 @@ +description: Gather slices from the variable pointed to by resource according to indices. + +
+ + +
+ +# tf.raw_ops.ResourceGather + + + + + + + + + +Gather slices from the variable pointed to by `resource` according to `indices`. + + + + + + + + + +`indices` must be an integer tensor of any dimension (usually 0-D or 1-D). +Produces an output tensor with shape `indices.shape + params.shape[1:]` where: + +```python + # Scalar indices + output[:, ..., :] = params[indices, :, ... :] + + # Vector indices + output[i, :, ..., :] = params[indices[i], :, ... :] + + # Higher rank indices + output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :] +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`dtype` + +A tf.DType. +
+`batch_dims` + +An optional `int`. Defaults to `0`. +
+`validate_indices` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
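A minimal eager-mode sketch with illustrative values: gathering two rows from a 3x2 variable yields a tensor of shape `indices.shape + params.shape[1:]`, i.e. `[2, 2]` here.

```python
import tensorflow as tf

params = tf.Variable([[10.0, 11.0], [20.0, 21.0], [30.0, 31.0]])

rows = tf.raw_ops.ResourceGather(resource=params.handle,
                                 indices=tf.constant([2, 0]),
                                 dtype=tf.float32)   # dtype must match the variable
print(rows.numpy())  # [[30. 31.]
                     #  [10. 11.]]
```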
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceGatherNd.md b/site/en/api_docs/python/tf/raw_ops/ResourceGatherNd.md new file mode 100644 index 00000000000..bc750283439 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceGatherNd.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.ResourceGatherNd + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceScatterAdd.md b/site/en/api_docs/python/tf/raw_ops/ResourceScatterAdd.md new file mode 100644 index 00000000000..6a67def39af --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceScatterAdd.md @@ -0,0 +1,112 @@ +description: Adds sparse updates to the variable referenced by resource. + +
+ + +
+ +# tf.raw_ops.ResourceScatterAdd + + + + + + + + + +Adds sparse updates to the variable referenced by `resource`. + + + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] += updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] += updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] += updates[i, ..., j, ...] + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions add. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A tensor of updated values to add to `ref`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
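A minimal eager-mode sketch with illustrative values; note how the duplicate index accumulates both contributions, as described above:

```python
import tensorflow as tf

v = tf.Variable([1.0, 2.0, 3.0])

# Index 1 appears twice, so it receives 10.0 + 20.0; index 2 receives 5.0.
tf.raw_ops.ResourceScatterAdd(resource=v.handle,
                              indices=tf.constant([1, 1, 2]),
                              updates=tf.constant([10.0, 20.0, 5.0]))
print(v.numpy())  # [ 1. 32.  8.]
```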
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceScatterDiv.md b/site/en/api_docs/python/tf/raw_ops/ResourceScatterDiv.md new file mode 100644 index 00000000000..d817bcd2762 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceScatterDiv.md @@ -0,0 +1,112 @@ +description: Divides sparse updates into the variable referenced by resource. + +
+ + +
+ +# tf.raw_ops.ResourceScatterDiv + + + + + + + + + +Divides sparse updates into the variable referenced by `resource`. + + + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] /= updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] /= updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...] + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions multiply. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A tensor of values that `ref` is divided by. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceScatterMax.md b/site/en/api_docs/python/tf/raw_ops/ResourceScatterMax.md new file mode 100644 index 00000000000..5f040a26f5b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceScatterMax.md @@ -0,0 +1,112 @@ +description: Reduces sparse updates into the variable referenced by resource using the max operation. + +
+ + +
+ +# tf.raw_ops.ResourceScatterMax + + + + + + + + + +Reduces sparse updates into the variable referenced by `resource` using the `max` operation. + + + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] = max(ref[indices, ...], updates[...]) + + # Vector indices (for each i) + ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...]) + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...]) + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions are combined. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A tensor of updated values to reduce into `ref` using the `max` operation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceScatterMin.md b/site/en/api_docs/python/tf/raw_ops/ResourceScatterMin.md new file mode 100644 index 00000000000..9e5c8ef0be5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceScatterMin.md @@ -0,0 +1,112 @@ +description: Reduces sparse updates into the variable referenced by resource using the min operation. + +
+ + +
+ +# tf.raw_ops.ResourceScatterMin + + + + + + + + + +Reduces sparse updates into the variable referenced by `resource` using the `min` operation. + + + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] = min(ref[indices, ...], updates[...]) + + # Vector indices (for each i) + ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...]) + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...]) + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions are combined. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A tensor of updated values to reduce into `ref` using the `min` operation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceScatterMul.md b/site/en/api_docs/python/tf/raw_ops/ResourceScatterMul.md new file mode 100644 index 00000000000..28cc27616f9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceScatterMul.md @@ -0,0 +1,112 @@ +description: Multiplies sparse updates into the variable referenced by resource. + +
+ + +
+ +# tf.raw_ops.ResourceScatterMul + + + + + + + + + +Multiplies sparse updates into the variable referenced by `resource`. + + + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] *= updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] *= updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...] + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions multiply. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A tensor of values to multiply `ref` by. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceScatterNdAdd.md b/site/en/api_docs/python/tf/raw_ops/ResourceScatterNdAdd.md new file mode 100644 index 00000000000..1db933b536d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceScatterNdAdd.md @@ -0,0 +1,138 @@ +description: Applies sparse addition to individual values or slices in a Variable. + +
+ + +
+ +# tf.raw_ops.ResourceScatterNdAdd + + + + + + + + + +Applies sparse addition to individual values or slices in a Variable. + + + + + + + + + +`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`. + +`indices` must be integer tensor, containing indices into `ref`. +It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`. + +The innermost dimension of `indices` (with length `K`) corresponds to +indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th +dimension of `ref`. + +`updates` is `Tensor` of rank `Q-1+P-K` with shape: + +``` +[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]] +``` + +For example, say we want to add 4 scattered elements to a rank-1 tensor to +8 elements. In Python, that addition would look like this: + +```python +ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], use_resource=True) +indices = tf.constant([[4], [3], [1], [7]]) +updates = tf.constant([9, 10, 11, 12]) +add = tf.scatter_nd_add(ref, indices, updates) +with tf.Session() as sess: + print sess.run(add) +``` + +The resulting update to ref would look like this: + + [1, 13, 3, 14, 14, 6, 7, 20] + +See tf.scatter_nd for more details about how to make updates to +slices. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A `Tensor` of type `resource`. +A resource handle. Must be from a VarHandleOp. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of values to add to `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `True`. +If `True`, the assignment will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
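The snippet in the description above uses the TF1 `tf.Session` style; an equivalent eager-mode sketch that calls the raw op on the variable's resource handle (same shapes and values as that example) might look like this:

```python
import tensorflow as tf

ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], dtype=tf.int32)
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])

# Scattered in-place addition through the variable's resource handle.
tf.raw_ops.ResourceScatterNdAdd(ref=ref.handle, indices=indices, updates=updates)
print(ref.numpy())  # [ 1 13  3 14 14  6  7 20]
```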
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceScatterNdSub.md b/site/en/api_docs/python/tf/raw_ops/ResourceScatterNdSub.md new file mode 100644 index 00000000000..c169d3889af --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceScatterNdSub.md @@ -0,0 +1,138 @@ +description: Applies sparse subtraction to individual values or slices in a Variable. + +
+ + +
+ +# tf.raw_ops.ResourceScatterNdSub + + + + + + + + + +Applies sparse subtraction to individual values or slices in a Variable. + + + + + + + + + +`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`. + +`indices` must be integer tensor, containing indices into `ref`. +It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`. + +The innermost dimension of `indices` (with length `K`) corresponds to +indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th +dimension of `ref`. + +`updates` is `Tensor` of rank `Q-1+P-K` with shape: + +``` +[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]] +``` + +For example, say we want to subtract 4 scattered elements from a rank-1 tensor +with 8 elements. In Python, that subtraction would look like this: + +```python +ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], use_resource=True) +indices = tf.constant([[4], [3], [1], [7]]) +updates = tf.constant([9, 10, 11, 12]) +sub = tf.scatter_nd_sub(ref, indices, updates) +with tf.Session() as sess: + print sess.run(sub) +``` + +The resulting update to ref would look like this: + + [1, -9, 3, -6, -4, 6, 7, -4] + +See tf.scatter_nd for more details about how to make updates to +slices. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A `Tensor` of type `resource`. +A resource handle. Must be from a VarHandleOp. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of values to subtract from `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `True`. +If `True`, the assignment will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceScatterNdUpdate.md b/site/en/api_docs/python/tf/raw_ops/ResourceScatterNdUpdate.md new file mode 100644 index 00000000000..cc348c08196 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceScatterNdUpdate.md @@ -0,0 +1,141 @@ +description: Applies sparse updates to individual values or slices within a given + +
+ + +
+ +# tf.raw_ops.ResourceScatterNdUpdate + + + + + + + + + +Applies sparse `updates` to individual values or slices within a given + + + + + + + + + +variable according to `indices`. + +`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`. + +`indices` must be integer tensor, containing indices into `ref`. +It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`. + +The innermost dimension of `indices` (with length `K`) corresponds to +indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th +dimension of `ref`. + +`updates` is `Tensor` of rank `Q-1+P-K` with shape: + +``` +[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]. +``` + +For example, say we want to update 4 scattered elements to a rank-1 tensor to +8 elements. In Python, that update would look like this: + +```python + ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) + indices = tf.constant([[4], [3], [1] ,[7]]) + updates = tf.constant([9, 10, 11, 12]) + update = tf.scatter_nd_update(ref, indices, updates) + with tf.Session() as sess: + print sess.run(update) +``` + +The resulting update to ref would look like this: + + [1, 11, 3, 10, 9, 6, 7, 12] + +See tf.scatter_nd for more details about how to make updates to +slices. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A `Tensor` of type `resource`. +A resource handle. Must be from a VarHandleOp. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to assign to `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `True`. +If `True`, the assignment will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceScatterSub.md b/site/en/api_docs/python/tf/raw_ops/ResourceScatterSub.md new file mode 100644 index 00000000000..65db96162f6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceScatterSub.md @@ -0,0 +1,112 @@ +description: Subtracts sparse updates from the variable referenced by resource. + +
+ + +
+ +# tf.raw_ops.ResourceScatterSub + + + + + + + + + +Subtracts sparse updates from the variable referenced by `resource`. + + + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] -= updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] -= updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...] + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions add. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +A tensor of updated values to subtract from `ref`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceScatterUpdate.md b/site/en/api_docs/python/tf/raw_ops/ResourceScatterUpdate.md new file mode 100644 index 00000000000..954537cb799 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceScatterUpdate.md @@ -0,0 +1,102 @@ +description: Assigns sparse updates to the variable referenced by resource. + +
+ + +
+ +# tf.raw_ops.ResourceScatterUpdate + + + + + + + + + +Assigns sparse updates to the variable referenced by `resource`. + + + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] = updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] = updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] = updates[i, ..., j, ...] + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. A tensor of updated values to assign to `ref`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdadelta.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdadelta.md new file mode 100644 index 00000000000..616e2b8aec4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdadelta.md @@ -0,0 +1,141 @@ +description: var: Should be from a Variable(). + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyAdadelta + + + + + + + + + +var: Should be from a Variable(). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum_update` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Learning rate. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `lr`. +Decay factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `lr`. +Constant factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var and accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagrad.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagrad.md new file mode 100644 index 00000000000..1f242f48a0a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagrad.md @@ -0,0 +1,128 @@ +description: Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyAdagrad + + + + + + + + + +Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + + + + + + + + + +That is for rows we have grad for, we update var and accum as follows: +accum += grad * grad +var -= lr * grad * (1 / sqrt(accum)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Learning rate. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`update_slots` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
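A small NumPy sketch of the row-wise update described above (illustrative helper and values): only the rows named in `indices` are touched.

```python
import numpy as np

def sparse_adagrad_step(var, accum, grad, indices, lr):
    """Apply accum += g*g; var -= lr * g / sqrt(accum) to the listed rows only."""
    for g, i in zip(grad, indices):
        accum[i] += g * g
        var[i] -= lr * g / np.sqrt(accum[i])
    return var

var = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
accum = np.full_like(var, 0.1)
sparse_adagrad_step(var, accum,
                    grad=np.array([[0.5, -0.5], [0.1, 0.2]]),
                    indices=[0, 2], lr=0.1)
```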
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagradDA.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagradDA.md new file mode 100644 index 00000000000..8dd23c3b92d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagradDA.md @@ -0,0 +1,151 @@ +description: Update entries in '*var' and '*accum' according to the proximal adagrad scheme. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyAdagradDA + + + + + + + + + +Update entries in '*var' and '*accum' according to the proximal adagrad scheme. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`gradient_accumulator` + +A `Tensor` of type `resource`. +Should be from a Variable(). +
+`gradient_squared_accumulator` + +A `Tensor` of type `resource`. +Should be from a Variable(). +
+`grad` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`lr` + +A `Tensor`. Must have the same type as `grad`. +Learning rate. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `grad`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `grad`. +L2 regularization. Must be a scalar. +
+`global_step` + +A `Tensor` of type `int64`. +Training step number. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var and accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagradV2.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagradV2.md new file mode 100644 index 00000000000..d2c110d74e6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagradV2.md @@ -0,0 +1,136 @@ +description: Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyAdagradV2 + + + + + + + + + +Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + + + + + + + + + +That is for rows we have grad for, we update var and accum as follows: +accum += grad * grad +var -= lr * grad * (1 / sqrt(accum)) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Learning rate. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `lr`. +Constant factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`update_slots` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyCenteredRMSProp.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyCenteredRMSProp.md new file mode 100644 index 00000000000..ef4a3d1518f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyCenteredRMSProp.md @@ -0,0 +1,171 @@ +description: Update '*var' according to the centered RMSProp algorithm. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyCenteredRMSProp + + + + + + + + + +Update '*var' according to the centered RMSProp algorithm. + + + + + + + + + +The centered RMSProp algorithm uses an estimate of the centered second moment +(i.e., the variance) for normalization, as opposed to regular RMSProp, which +uses the (uncentered) second moment. This often helps with training, but is +slightly more expensive in terms of computation and memory. + +Note that in dense implementation of this algorithm, mg, ms, and mom will +update even if the grad is zero, but in this sparse implementation, mg, ms, +and mom will not update in iterations during which the grad is zero. + +mean_square = decay * mean_square + (1-decay) * gradient ** 2 +mean_grad = decay * mean_grad + (1-decay) * gradient +Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2) + +ms <- rho * ms_{t-1} + (1-rho) * grad * grad +mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon) +var <- var - mom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`mg` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`ms` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`mom` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `lr`. +Decay rate. Must be a scalar. +
+`momentum` + +A `Tensor`. Must have the same type as `lr`. +
+`epsilon` + +A `Tensor`. Must have the same type as `lr`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var, ms and mom. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, mg, ms, and mom tensors is +protected by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyFtrl.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyFtrl.md new file mode 100644 index 00000000000..164a25957c7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyFtrl.md @@ -0,0 +1,156 @@ +description: Update relevant entries in '*var' according to the Ftrl-proximal scheme. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyFtrl + + + + + + + + + +Update relevant entries in '*var' according to the Ftrl-proximal scheme. + + + + + + + + + +That is for rows we have grad for, we update var, accum and linear as follows: +accum_new = accum + grad * grad +linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var +quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 +var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 +accum = accum_new + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`linear` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`grad` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`lr` + +A `Tensor`. Must have the same type as `grad`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `grad`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `grad`. +L2 regularization. Must be a scalar. +
+`lr_power` + +A `Tensor`. Must have the same type as `grad`. +Scaling factor. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyFtrlV2.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyFtrlV2.md new file mode 100644 index 00000000000..07f880c39ff --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyFtrlV2.md @@ -0,0 +1,165 @@ +description: Update relevant entries in '*var' according to the Ftrl-proximal scheme. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyFtrlV2 + + + + + + + + + +Update relevant entries in '*var' according to the Ftrl-proximal scheme. + + + + + + + + + +That is for rows we have grad for, we update var, accum and linear as follows: +grad_with_shrinkage = grad + 2 * l2_shrinkage * var +accum_new = accum + grad_with_shrinkage * grad_with_shrinkage +linear += grad_with_shrinkage + + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var +quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 +var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 +accum = accum_new + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`linear` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`grad` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`lr` + +A `Tensor`. Must have the same type as `grad`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `grad`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `grad`. +L2 shrinkage regularization. Must be a scalar. +
+`l2_shrinkage` + +A `Tensor`. Must have the same type as `grad`. +
+`lr_power` + +A `Tensor`. Must have the same type as `grad`. +Scaling factor. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyKerasMomentum.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyKerasMomentum.md new file mode 100644 index 00000000000..0ca1834aaaf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyKerasMomentum.md @@ -0,0 +1,142 @@ +description: Update relevant entries in '*var' and '*accum' according to the momentum scheme. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyKerasMomentum + + + + + + + + + +Update relevant entries in '*var' and '*accum' according to the momentum scheme. + + + + + + + + + +Set use_nesterov = True if you want to use Nesterov momentum. + +That is for rows we have grad for, we update var and accum as follows: + +accum = accum * momentum - lr * grad +var += accum + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Learning rate. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`momentum` + +A `Tensor`. Must have the same type as `lr`. +Momentum. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`use_nesterov` + +An optional `bool`. Defaults to `False`. +If `True`, the tensor passed to compute grad will be +var + momentum * accum, so in the end, the var you get is actually +var + momentum * accum. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyMomentum.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyMomentum.md new file mode 100644 index 00000000000..1388ee7c4e9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyMomentum.md @@ -0,0 +1,142 @@ +description: Update relevant entries in '*var' and '*accum' according to the momentum scheme. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyMomentum + + + + + + + + + +Update relevant entries in '*var' and '*accum' according to the momentum scheme. + + + + + + + + + +Set use_nesterov = True if you want to use Nesterov momentum. + +That is for rows we have grad for, we update var and accum as follows: + +accum = accum * momentum + grad +var -= lr * accum + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Learning rate. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`momentum` + +A `Tensor`. Must have the same type as `lr`. +Momentum. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`use_nesterov` + +An optional `bool`. Defaults to `False`. +If `True`, the tensor passed to compute grad will be +var - lr * momentum * accum, so in the end, the var you get is actually +var - lr * momentum * accum. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyProximalAdagrad.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyProximalAdagrad.md new file mode 100644 index 00000000000..db0bcb6b88c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyProximalAdagrad.md @@ -0,0 +1,137 @@ +description: Sparse update entries in '*var' and '*accum' according to FOBOS algorithm. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyProximalAdagrad + + + + + + + + + +Sparse update entries in '*var' and '*accum' according to FOBOS algorithm. + + + + + + + + + +That is for rows we have grad for, we update var and accum as follows: +accum += grad * grad +prox_v = var +prox_v -= lr * grad * (1 / sqrt(accum)) +var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0} + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`accum` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Learning rate. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `lr`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `lr`. +L2 regularization. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var and accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
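+
+#### Example:
+
+A worked sketch of the per-row arithmetic above for a single updated row
+(values are made up); it mirrors the accum/prox_v/var formulas rather than
+calling the op itself.
+
+```python
+import tensorflow as tf
+
+var_row = tf.constant([1.0, -0.5])
+accum_row = tf.constant([0.1, 0.1])
+grad_row = tf.constant([0.2, 0.3])
+lr, l1, l2 = 0.1, 0.01, 0.01
+
+accum_row += grad_row * grad_row                       # accum += grad * grad
+prox_v = var_row - lr * grad_row / tf.sqrt(accum_row)  # prox_v = var - lr * grad / sqrt(accum)
+var_row = (tf.sign(prox_v) / (1.0 + lr * l2)
+           * tf.maximum(tf.abs(prox_v) - lr * l1, 0.0))
+print(var_row.numpy())
+```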
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyProximalGradientDescent.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyProximalGradientDescent.md new file mode 100644 index 00000000000..fe84ebb3167 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyProximalGradientDescent.md @@ -0,0 +1,128 @@ +description: Sparse update '*var' as FOBOS algorithm with fixed learning rate. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyProximalGradientDescent + + + + + + + + + +Sparse update '*var' as FOBOS algorithm with fixed learning rate. + + + + + + + + + +That is for rows we have grad for, we update var as follows: +prox_v = var - alpha * grad +var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0} + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`alpha` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `alpha`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `alpha`. +L2 regularization. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `alpha`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the subtraction will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
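+
+#### Example:
+
+An illustrative direct call, assuming eager execution in TF 2.x; the values are
+made up, and only row 1 of `var` is touched because that is the only index
+supplied.
+
+```python
+import tensorflow as tf
+
+var = tf.Variable([[1.0], [2.0], [3.0]])
+tf.raw_ops.ResourceSparseApplyProximalGradientDescent(
+    var=var.handle,
+    alpha=tf.constant(0.1), l1=tf.constant(0.01), l2=tf.constant(0.0),
+    grad=tf.constant([[0.5]]), indices=tf.constant([1], dtype=tf.int32))
+
+# prox_v = 2.0 - 0.1 * 0.5 = 1.95
+# var[1] = sign(1.95) / (1 + 0.1 * 0.0) * max(|1.95| - 0.1 * 0.01, 0) = 1.949
+print(var.numpy())
+```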
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyRMSProp.md b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyRMSProp.md new file mode 100644 index 00000000000..ed33738f1b6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceSparseApplyRMSProp.md @@ -0,0 +1,158 @@ +description: Update '*var' according to the RMSProp algorithm. + +
+ + +
+ +# tf.raw_ops.ResourceSparseApplyRMSProp + + + + + + + + + +Update '*var' according to the RMSProp algorithm. + + + + + + + + + +Note that in dense implementation of this algorithm, ms and mom will +update even if the grad is zero, but in this sparse implementation, ms +and mom will not update in iterations during which the grad is zero. + +mean_square = decay * mean_square + (1-decay) * gradient ** 2 +Delta = learning_rate * gradient / sqrt(mean_square + epsilon) + +ms <- rho * ms_{t-1} + (1-rho) * grad * grad +mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon) +var <- var - mom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`ms` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`mom` + +A `Tensor` of type `resource`. Should be from a Variable(). +
+`lr` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Scaling factor. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `lr`. +Decay rate. Must be a scalar. +
+`momentum` + +A `Tensor`. Must have the same type as `lr`. +
+`epsilon` + +A `Tensor`. Must have the same type as `lr`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `lr`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var, ms and mom. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, ms, and mom tensors is protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
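+
+#### Example:
+
+A sketch of the per-row recurrence above (values made up), written out in plain
+TensorFlow; this is what the op applies to each row selected by `indices`.
+
+```python
+import tensorflow as tf
+
+var_row = tf.constant([1.0, 2.0])
+ms = tf.zeros(2)
+mom = tf.zeros(2)
+grad_row = tf.constant([0.1, -0.2])
+lr, rho, momentum, epsilon = 0.01, 0.9, 0.0, 1e-10
+
+ms = rho * ms + (1.0 - rho) * grad_row * grad_row
+mom = momentum * mom + lr * grad_row / tf.sqrt(ms + epsilon)
+var_row = var_row - mom
+print(var_row.numpy())
+```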
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ResourceStridedSliceAssign.md b/site/en/api_docs/python/tf/raw_ops/ResourceStridedSliceAssign.md new file mode 100644 index 00000000000..c5020d3e5d0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ResourceStridedSliceAssign.md @@ -0,0 +1,147 @@ +description: Assign value to the sliced l-value reference of ref. + +
+ + +
+ +# tf.raw_ops.ResourceStridedSliceAssign + + + + + + + + + +Assign `value` to the sliced l-value reference of `ref`. + + + + + + + + + +The values of `value` are assigned to the positions in the variable +`ref` that are selected by the slice parameters. The slice parameters +`begin, `end`, `strides`, etc. work exactly as in `StridedSlice`. + +NOTE this op currently does not support broadcasting and so `value`'s +shape must be exactly the shape produced by the slice of `ref`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A `Tensor` of type `resource`. +
+`begin` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`end` + +A `Tensor`. Must have the same type as `begin`. +
+`strides` + +A `Tensor`. Must have the same type as `begin`. +
+`value` + +A `Tensor`. +
+`begin_mask` + +An optional `int`. Defaults to `0`. +
+`end_mask` + +An optional `int`. Defaults to `0`. +
+`ellipsis_mask` + +An optional `int`. Defaults to `0`. +
+`new_axis_mask` + +An optional `int`. Defaults to `0`. +
+`shrink_axis_mask` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
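+
+#### Example:
+
+A minimal sketch, assuming eager execution in TF 2.x; it writes two made-up
+values into positions 1:3 of a variable, which has the same effect as
+`v[1:3].assign([10.0, 20.0])`.
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([0.0, 1.0, 2.0, 3.0])
+tf.raw_ops.ResourceStridedSliceAssign(
+    ref=v.handle,
+    begin=tf.constant([1]), end=tf.constant([3]), strides=tf.constant([1]),
+    value=tf.constant([10.0, 20.0]))
+print(v.numpy())  # [ 0. 10. 20.  3.]
+```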
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Restore.md b/site/en/api_docs/python/tf/raw_ops/Restore.md new file mode 100644 index 00000000000..d66e5ce18c1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Restore.md @@ -0,0 +1,120 @@ +description: Restores a tensor from checkpoint files. + +
+ + +
+ +# tf.raw_ops.Restore + + + + + + + + + +Restores a tensor from checkpoint files. + + + + + + + + + +Reads a tensor stored in one or several files. If there are several files (for +instance because a tensor was saved as slices), `file_pattern` may contain +wildcard symbols (`*` and `?`) in the filename portion only, not in the +directory portion. + +If a `file_pattern` matches several files, `preferred_shard` can be used to hint +in which file the requested tensor is likely to be found. This op will first +open the file at index `preferred_shard` in the list of matching files and try +to restore tensors from that file. Only if some tensors or tensor slices are +not found in that first file, then the Op opens all the files. Setting +`preferred_shard` to match the value passed as the `shard` input +of a matching `Save` Op may speed up Restore. This attribute only affects +performance, not correctness. The default value -1 means files are processed in +order. + +See also `RestoreSlice`. + + + + + + + + + + + + + + + + + + + + + + +
+`file_pattern` + +A `Tensor` of type `string`. +Must have a single element. The pattern of the files from +which we read the tensor. +
+`tensor_name` + +A `Tensor` of type `string`. +Must have a single element. The name of the tensor to be +restored. +
+`dt` + +A tf.DType. The type of the tensor to be restored. +
+`preferred_shard` + +An optional `int`. Defaults to `-1`. +Index of file to open first if multiple files match +`file_pattern`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dt`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RestoreSlice.md b/site/en/api_docs/python/tf/raw_ops/RestoreSlice.md new file mode 100644 index 00000000000..a30ee25b98c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RestoreSlice.md @@ -0,0 +1,119 @@ +description: Restores a tensor from checkpoint files. + +
+ + +
+ +# tf.raw_ops.RestoreSlice + + + + + + + + + +Restores a tensor from checkpoint files. + + + + + + + + + +This is like `Restore` except that restored tensor can be listed as filling +only a slice of a larger tensor. `shape_and_slice` specifies the shape of the +larger tensor and the slice that the restored tensor covers. + +The `shape_and_slice` input has the same format as the +elements of the `shapes_and_slices` input of the `SaveSlices` op. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`file_pattern` + +A `Tensor` of type `string`. +Must have a single element. The pattern of the files from +which we read the tensor. +
+`tensor_name` + +A `Tensor` of type `string`. +Must have a single element. The name of the tensor to be +restored. +
+`shape_and_slice` + +A `Tensor` of type `string`. +Scalar. The shape and slice specification to use when +restoring a tensor. +
+`dt` + +A tf.DType. The type of the tensor to be restored. +
+`preferred_shard` + +An optional `int`. Defaults to `-1`. +Index of file to open first if multiple files match +`file_pattern`. See the documentation for `Restore`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dt`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RestoreV2.md b/site/en/api_docs/python/tf/raw_ops/RestoreV2.md new file mode 100644 index 00000000000..5c14ef78e47 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RestoreV2.md @@ -0,0 +1,117 @@ +description: Restores tensors from a V2 checkpoint. + +
+ + +
+ +# tf.raw_ops.RestoreV2 + + + + + + + + + +Restores tensors from a V2 checkpoint. + + + + + + + + + +For backward compatibility with the V1 format, this Op currently allows +restoring from a V1 checkpoint as well: + - This Op first attempts to find the V2 index file pointed to by "prefix", and + if found proceed to read it as a V2 checkpoint; + - Otherwise the V1 read path is invoked. +Relying on this behavior is not recommended, as the ability to fall back to read +V1 might be deprecated and eventually removed. + +By default, restores the named tensors in full. If the caller wishes to restore +specific slices of stored tensors, "shape_and_slices" should be non-empty +strings and correspondingly well-formed. + +Callers must ensure all the named tensors are indeed stored in the checkpoint. + + + + + + + + + + + + + + + + + + + + + + +
+`prefix` + +A `Tensor` of type `string`. +Must have a single element. The prefix of a V2 checkpoint. +
+`tensor_names` + +A `Tensor` of type `string`. +shape {N}. The names of the tensors to be restored. +
+`shape_and_slices` + +A `Tensor` of type `string`. +shape {N}. The slice specs of the tensors to be restored. +Empty strings indicate that they are non-partitioned tensors. +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +shape {N}. The list of expected dtype for the tensors. Must match +those stored in the checkpoint. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `dtypes`. +
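+
+#### Example:
+
+A small round trip through a V2 checkpoint with the paired `SaveV2`/`RestoreV2`
+raw ops, assuming eager execution; the prefix path and tensor name below are
+made up for illustration.
+
+```python
+import tensorflow as tf
+
+prefix = "/tmp/restore_v2_demo"
+tf.raw_ops.SaveV2(prefix=prefix, tensor_names=["w"], shape_and_slices=[""],
+                  tensors=[tf.constant([1.0, 2.0, 3.0])])
+
+# Empty slice specs mean the tensors are restored in full.
+restored = tf.raw_ops.RestoreV2(prefix=prefix, tensor_names=["w"],
+                                shape_and_slices=[""], dtypes=[tf.float32])
+print(restored[0].numpy())  # [1. 2. 3.]
+```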
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingADAMParameters.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingADAMParameters.md new file mode 100644 index 00000000000..3c1525f0211 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingADAMParameters.md @@ -0,0 +1,130 @@ +description: Retrieve ADAM embedding parameters. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingADAMParameters + + + + + + + + + +Retrieve ADAM embedding parameters. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, momenta, velocities). +
+`parameters` + +A `Tensor` of type `float32`. +
+`momenta` + +A `Tensor` of type `float32`. +
+`velocities` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingADAMParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingADAMParametersGradAccumDebug.md new file mode 100644 index 00000000000..524acb02587 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingADAMParametersGradAccumDebug.md @@ -0,0 +1,137 @@ +description: Retrieve ADAM embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingADAMParametersGradAccumDebug + + + + + + + + + +Retrieve ADAM embedding parameters with debug support. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, momenta, velocities, gradient_accumulators). +
+`parameters` + +A `Tensor` of type `float32`. +
+`momenta` + +A `Tensor` of type `float32`. +
+`velocities` + +A `Tensor` of type `float32`. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParameters.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParameters.md new file mode 100644 index 00000000000..6e1780d378f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParameters.md @@ -0,0 +1,130 @@ +description: Retrieve Adadelta embedding parameters. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParameters + + + + + + + + + +Retrieve Adadelta embedding parameters. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, accumulators, updates). +
+`parameters` + +A `Tensor` of type `float32`. +
+`accumulators` + +A `Tensor` of type `float32`. +
+`updates` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug.md new file mode 100644 index 00000000000..c069da2a8ee --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug.md @@ -0,0 +1,137 @@ +description: Retrieve Adadelta embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug + + + + + + + + + +Retrieve Adadelta embedding parameters with debug support. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, accumulators, updates, gradient_accumulators). +
+`parameters` + +A `Tensor` of type `float32`. +
+`accumulators` + +A `Tensor` of type `float32`. +
+`updates` + +A `Tensor` of type `float32`. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdagradParameters.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdagradParameters.md new file mode 100644 index 00000000000..580a4d5f0e2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdagradParameters.md @@ -0,0 +1,123 @@ +description: Retrieve Adagrad embedding parameters. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingAdagradParameters + + + + + + + + + +Retrieve Adagrad embedding parameters. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, accumulators). +
+`parameters` + +A `Tensor` of type `float32`. +
+`accumulators` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdagradParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdagradParametersGradAccumDebug.md new file mode 100644 index 00000000000..f3ae418a267 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdagradParametersGradAccumDebug.md @@ -0,0 +1,130 @@ +description: Retrieve Adagrad embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug + + + + + + + + + +Retrieve Adagrad embedding parameters with debug support. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, accumulators, gradient_accumulators). +
+`parameters` + +A `Tensor` of type `float32`. +
+`accumulators` + +A `Tensor` of type `float32`. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingCenteredRMSPropParameters.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingCenteredRMSPropParameters.md new file mode 100644 index 00000000000..1492ca0bd1e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingCenteredRMSPropParameters.md @@ -0,0 +1,137 @@ +description: Retrieve centered RMSProp embedding parameters. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters + + + + + + + + + +Retrieve centered RMSProp embedding parameters. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, ms, mom, mg). +
+`parameters` + +A `Tensor` of type `float32`. +
+`ms` + +A `Tensor` of type `float32`. +
+`mom` + +A `Tensor` of type `float32`. +
+`mg` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingFTRLParameters.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingFTRLParameters.md new file mode 100644 index 00000000000..e9b3cc2b5ee --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingFTRLParameters.md @@ -0,0 +1,130 @@ +description: Retrieve FTRL embedding parameters. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingFTRLParameters + + + + + + + + + +Retrieve FTRL embedding parameters. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, accumulators, linears). +
+`parameters` + +A `Tensor` of type `float32`. +
+`accumulators` + +A `Tensor` of type `float32`. +
+`linears` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.md new file mode 100644 index 00000000000..36329d214e5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.md @@ -0,0 +1,137 @@ +description: Retrieve FTRL embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug + + + + + + + + + +Retrieve FTRL embedding parameters with debug support. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, accumulators, linears, gradient_accumulators). +
+`parameters` + +A `Tensor` of type `float32`. +
+`accumulators` + +A `Tensor` of type `float32`. +
+`linears` + +A `Tensor` of type `float32`. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingMDLAdagradLightParameters.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingMDLAdagradLightParameters.md new file mode 100644 index 00000000000..705da2737bb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingMDLAdagradLightParameters.md @@ -0,0 +1,137 @@ +description: Retrieve MDL Adagrad Light embedding parameters. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingMDLAdagradLightParameters + + + + + + + + + +Retrieve MDL Adagrad Light embedding parameters. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, accumulators, weights, benefits). +
+`parameters` + +A `Tensor` of type `float32`. +
+`accumulators` + +A `Tensor` of type `float32`. +
+`weights` + +A `Tensor` of type `float32`. +
+`benefits` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingMomentumParameters.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingMomentumParameters.md new file mode 100644 index 00000000000..a79a784dfef --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingMomentumParameters.md @@ -0,0 +1,123 @@ +description: Retrieve Momentum embedding parameters. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingMomentumParameters + + + + + + + + + +Retrieve Momentum embedding parameters. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, momenta). +
+`parameters` + +A `Tensor` of type `float32`. +
+`momenta` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingMomentumParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingMomentumParametersGradAccumDebug.md new file mode 100644 index 00000000000..51ae18b0bf5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingMomentumParametersGradAccumDebug.md @@ -0,0 +1,130 @@ +description: Retrieve Momentum embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingMomentumParametersGradAccumDebug + + + + + + + + + +Retrieve Momentum embedding parameters with debug support. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, momenta, gradient_accumulators). +
+`parameters` + +A `Tensor` of type `float32`. +
+`momenta` + +A `Tensor` of type `float32`. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParameters.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParameters.md new file mode 100644 index 00000000000..bfd0ba0745e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParameters.md @@ -0,0 +1,123 @@ +description: Retrieve proximal Adagrad embedding parameters. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters + + + + + + + + + +Retrieve proximal Adagrad embedding parameters. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, accumulators). +
+`parameters` + +A `Tensor` of type `float32`. +
+`accumulators` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug.md new file mode 100644 index 00000000000..4a6d7098028 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug.md @@ -0,0 +1,130 @@ +description: Retrieve proximal Adagrad embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug + + + + + + + + + +Retrieve proximal Adagrad embedding parameters with debug support. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, accumulators, gradient_accumulators). +
+`parameters` + +A `Tensor` of type `float32`. +
+`accumulators` + +A `Tensor` of type `float32`. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingRMSPropParameters.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingRMSPropParameters.md new file mode 100644 index 00000000000..6cec43b0edb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingRMSPropParameters.md @@ -0,0 +1,130 @@ +description: Retrieve RMSProp embedding parameters. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingRMSPropParameters + + + + + + + + + +Retrieve RMSProp embedding parameters. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, ms, mom). +
+`parameters` + +A `Tensor` of type `float32`. +
+`ms` + +A `Tensor` of type `float32`. +
+`mom` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug.md new file mode 100644 index 00000000000..866fb741f59 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug.md @@ -0,0 +1,137 @@ +description: Retrieve RMSProp embedding parameters with debug support. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug + + + + + + + + + +Retrieve RMSProp embedding parameters with debug support. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (parameters, ms, mom, gradient_accumulators). +
+`parameters` + +A `Tensor` of type `float32`. +
+`ms` + +A `Tensor` of type `float32`. +
+`mom` + +A `Tensor` of type `float32`. +
+`gradient_accumulators` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingStochasticGradientDescentParameters.md b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingStochasticGradientDescentParameters.md new file mode 100644 index 00000000000..e2a43645c02 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingStochasticGradientDescentParameters.md @@ -0,0 +1,109 @@ +description: Retrieve SGD embedding parameters. + +
+ + +
+ +# tf.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParameters + + + + + + + + + +Retrieve SGD embedding parameters. + + + + + + + + + +An op that retrieves optimization parameters from embedding to host +memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up +the correct embedding table configuration. For example, this op is +used to retrieve updated parameters before saving a checkpoint. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_shards` + +An `int`. +
+`shard_id` + +An `int`. +
+`table_id` + +An optional `int` that is `>= -1`. Defaults to `-1`. +
+`table_name` + +An optional `string`. Defaults to `""`. +
+`config` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Reverse.md b/site/en/api_docs/python/tf/raw_ops/Reverse.md new file mode 100644 index 00000000000..f211cfe1ae9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Reverse.md @@ -0,0 +1,131 @@ +description: Reverses specific dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.Reverse + + + + + + + + + +Reverses specific dimensions of a tensor. + + + + + + + + + +Given a `tensor`, and a `bool` tensor `dims` representing the dimensions +of `tensor`, this operation reverses each dimension i of `tensor` where +`dims[i]` is `True`. + +`tensor` can have up to 8 dimensions. The number of dimensions +of `tensor` must equal the number of elements in `dims`. In other words: + +`rank(tensor) = size(dims)` + +#### For example: + + + +``` +# tensor 't' is [[[[ 0, 1, 2, 3], +# [ 4, 5, 6, 7], +# [ 8, 9, 10, 11]], +# [[12, 13, 14, 15], +# [16, 17, 18, 19], +# [20, 21, 22, 23]]]] +# tensor 't' shape is [1, 2, 3, 4] + +# 'dims' is [False, False, False, True] +reverse(t, dims) ==> [[[[ 3, 2, 1, 0], + [ 7, 6, 5, 4], + [ 11, 10, 9, 8]], + [[15, 14, 13, 12], + [19, 18, 17, 16], + [23, 22, 21, 20]]]] + +# 'dims' is [False, True, False, False] +reverse(t, dims) ==> [[[[12, 13, 14, 15], + [16, 17, 18, 19], + [20, 21, 22, 23] + [[ 0, 1, 2, 3], + [ 4, 5, 6, 7], + [ 8, 9, 10, 11]]]] + +# 'dims' is [False, False, True, False] +reverse(t, dims) ==> [[[[8, 9, 10, 11], + [4, 5, 6, 7], + [0, 1, 2, 3]] + [[20, 21, 22, 23], + [16, 17, 18, 19], + [12, 13, 14, 15]]]] +``` + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Must be one of the following types: `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `bool`, `half`, `float32`, `float64`, `complex64`, `complex128`, `string`. +Up to 8-D. +
+`dims` + +A `Tensor` of type `bool`. 1-D. The dimensions to reverse. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReverseSequence.md b/site/en/api_docs/python/tf/raw_ops/ReverseSequence.md new file mode 100644 index 00000000000..a0cebd8d888 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReverseSequence.md @@ -0,0 +1,157 @@ +description: Reverses variable length slices. + +
+ + +
+ +# tf.raw_ops.ReverseSequence + + + + + + + + + +Reverses variable length slices. + + + + + + + + + +This op first slices `input` along the dimension `batch_dim`, and for each +slice `i`, reverses the first `seq_lengths[i]` elements along +the dimension `seq_dim`. + +The elements of `seq_lengths` must obey `seq_lengths[i] <= input.dims[seq_dim]`, +and `seq_lengths` must be a vector of length `input.dims[batch_dim]`. + +The output slice `i` along dimension `batch_dim` is then given by input +slice `i`, with the first `seq_lengths[i]` slices along dimension +`seq_dim` reversed. + +#### For example: + + + +``` +# Given this: +batch_dim = 0 +seq_dim = 1 +input.dims = (4, 8, ...) +seq_lengths = [7, 2, 3, 5] + +# then slices of input are reversed on seq_dim, but only up to seq_lengths: +output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...] +output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...] +output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...] +output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...] + +# while entries past seq_lens are copied through: +output[0, 7:, :, ...] = input[0, 7:, :, ...] +output[1, 2:, :, ...] = input[1, 2:, :, ...] +output[2, 3:, :, ...] = input[2, 3:, :, ...] +output[3, 2:, :, ...] = input[3, 2:, :, ...] +``` + +In contrast, if: + +``` +# Given this: +batch_dim = 2 +seq_dim = 0 +input.dims = (8, ?, 4, ...) +seq_lengths = [7, 2, 3, 5] + +# then slices of input are reversed on seq_dim, but only up to seq_lengths: +output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...] +output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...] +output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...] +output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...] + +# while entries past seq_lens are copied through: +output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...] +output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...] +output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...] +output[2:, :, 3, :, ...] = input[2:, :, 3, :, ...] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. The input to reverse. +
+`seq_lengths` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D with length `input.dims(batch_dim)` and +`max(seq_lengths) <= input.dims(seq_dim)` +
+`seq_dim` + +An `int`. The dimension which is partially reversed. +
+`batch_dim` + +An optional `int`. Defaults to `0`. +The dimension along which reversal is performed. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
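+
+#### Example:
+
+A short runnable illustration with made-up values via the public wrapper
+`tf.reverse_sequence`, which corresponds to this op: only the first
+`seq_lengths[i]` elements of each row are reversed.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2, 3, 4],
+                 [5, 6, 7, 8]])
+seq_lengths = tf.constant([3, 2])
+print(tf.reverse_sequence(x, seq_lengths, seq_axis=1, batch_axis=0).numpy())
+# [[3 2 1 4]
+#  [6 5 7 8]]
+```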
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ReverseV2.md b/site/en/api_docs/python/tf/raw_ops/ReverseV2.md new file mode 100644 index 00000000000..63b917b9f45 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ReverseV2.md @@ -0,0 +1,135 @@ +description: Reverses specific dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.ReverseV2 + + + + + + + + + +Reverses specific dimensions of a tensor. + + + + + + + + + +NOTE tf.reverse has now changed behavior in preparation for 1.0. +`tf.reverse_v2` is currently an alias that will be deprecated before TF 1.0. + +Given a `tensor`, and a `int32` tensor `axis` representing the set of +dimensions of `tensor` to reverse. This operation reverses each dimension +`i` for which there exists `j` s.t. `axis[j] == i`. + +`tensor` can have up to 8 dimensions. The number of dimensions specified +in `axis` may be 0 or more entries. If an index is specified more than +once, a InvalidArgument error is raised. + +#### For example: + + + +``` +# tensor 't' is [[[[ 0, 1, 2, 3], +# [ 4, 5, 6, 7], +# [ 8, 9, 10, 11]], +# [[12, 13, 14, 15], +# [16, 17, 18, 19], +# [20, 21, 22, 23]]]] +# tensor 't' shape is [1, 2, 3, 4] + +# 'dims' is [3] or 'dims' is [-1] +reverse(t, dims) ==> [[[[ 3, 2, 1, 0], + [ 7, 6, 5, 4], + [ 11, 10, 9, 8]], + [[15, 14, 13, 12], + [19, 18, 17, 16], + [23, 22, 21, 20]]]] + +# 'dims' is '[1]' (or 'dims' is '[-3]') +reverse(t, dims) ==> [[[[12, 13, 14, 15], + [16, 17, 18, 19], + [20, 21, 22, 23] + [[ 0, 1, 2, 3], + [ 4, 5, 6, 7], + [ 8, 9, 10, 11]]]] + +# 'dims' is '[2]' (or 'dims' is '[-2]') +reverse(t, dims) ==> [[[[8, 9, 10, 11], + [4, 5, 6, 7], + [0, 1, 2, 3]] + [[20, 21, 22, 23], + [16, 17, 18, 19], + [12, 13, 14, 15]]]] +``` + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Must be one of the following types: `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `bool`, `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`, `string`. +Up to 8-D. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D. The indices of the dimensions to reverse. Must be in the range +`[-rank(tensor), rank(tensor))`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
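+
+#### Example:
+
+A quick runnable check of the behavior described above through the public
+`tf.reverse` wrapper, which dispatches to this op.
+
+```python
+import tensorflow as tf
+
+t = tf.constant([[1, 2, 3],
+                 [4, 5, 6]])
+print(tf.reverse(t, axis=[1]).numpy())
+# [[3 2 1]
+#  [6 5 4]]
+```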
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RightShift.md b/site/en/api_docs/python/tf/raw_ops/RightShift.md new file mode 100644 index 00000000000..c26d5cff6d1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RightShift.md @@ -0,0 +1,119 @@ +description: Elementwise computes the bitwise right-shift of x and y. + +
+ + +
+ +# tf.raw_ops.RightShift + + + + + + + + + +Elementwise computes the bitwise right-shift of `x` and `y`. + + + + + + + + + +Performs a logical shift for unsigned integer types, and an arithmetic shift +for signed integer types. + +If `y` is negative, or greater than or equal to than the width of `x` in bits +the result is implementation defined. + +#### Example: + + + +```python +import tensorflow as tf +from tensorflow.python.ops import bitwise_ops +import numpy as np +dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64] + +for dtype in dtype_list: + lhs = tf.constant([-1, -5, -3, -14], dtype=dtype) + rhs = tf.constant([5, 0, 7, 11], dtype=dtype) + + right_shift_result = bitwise_ops.right_shift(lhs, rhs) + + print(right_shift_result) + +# This will print: +# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int8) +# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int16) +# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int32) +# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int64) + +lhs = np.array([-2, 64, 101, 32], dtype=np.int8) +rhs = np.array([-1, -5, -3, -14], dtype=np.int8) +bitwise_ops.right_shift(lhs, rhs) +# +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Rint.md b/site/en/api_docs/python/tf/raw_ops/Rint.md new file mode 100644 index 00000000000..32a1cdfb189 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Rint.md @@ -0,0 +1,86 @@ +description: Returns element-wise integer closest to x. + +
+ + +
+ +# tf.raw_ops.Rint + + + + + + + + + +Returns element-wise integer closest to x. + + + + + + + + + +If the result is midway between two representable values, +the even representable is chosen. +For example: + +``` +rint(-1.5) ==> -2.0 +rint(0.5000001) ==> 1.0 +rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.] +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
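+
+#### Example:
+
+A runnable version of the values listed above, assuming eager execution.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
+print(tf.raw_ops.Rint(x=x).numpy())
+# [-2. -2. -0.  0.  2.  2.  2.]
+```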
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RngSkip.md b/site/en/api_docs/python/tf/raw_ops/RngSkip.md new file mode 100644 index 00000000000..e2cf948d954 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RngSkip.md @@ -0,0 +1,96 @@ +description: Advance the counter of a counter-based RNG. + +
+ + +
+ +# tf.raw_ops.RngSkip + + + + + + + + + +Advance the counter of a counter-based RNG. + + + + + + + + + +The state of the RNG after +`rng_skip(n)` will be the same as that after `stateful_uniform([n])` +(or any other distribution). The actual increment added to the +counter is an unspecified implementation detail. + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +The handle of the resource variable that stores the state of the RNG. +
+`algorithm` + +A `Tensor` of type `int64`. The RNG algorithm. +
+`delta` + +A `Tensor` of type `int64`. The amount of advancement. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Roll.md b/site/en/api_docs/python/tf/raw_ops/Roll.md new file mode 100644 index 00000000000..a1ef7570ac4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Roll.md @@ -0,0 +1,121 @@ +description: Rolls the elements of a tensor along an axis. + +
+ + +
+ +# tf.raw_ops.Roll + + + + + + + + + +Rolls the elements of a tensor along an axis. + + + + + + + + + +The elements are shifted positively (towards larger indices) by the offset of +`shift` along the dimension of `axis`. Negative `shift` values will shift +elements in the opposite direction. Elements that roll passed the last position +will wrap around to the first and vice versa. Multiple shifts along multiple +axes may be specified. + +#### For example: + + + +``` +# 't' is [0, 1, 2, 3, 4] +roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2] + +# shifting along multiple dimensions +# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]] +roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]] + +# shifting along the same axis multiple times +# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]] +roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]] +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`shift` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Dimension must be 0-D or 1-D. `shift[i]` specifies the number of places by which +elements are shifted positively (towards larger indices) along the dimension +specified by `axis[i]`. Negative shifts will roll the elements in the opposite +direction. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Dimension must be 0-D or 1-D. `axis[i]` specifies the dimension along which the shift +`shift[i]` should occur. If the same axis is referenced more than once, the +total shift for that axis will be the sum of all the shifts that belong to that +axis. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Round.md b/site/en/api_docs/python/tf/raw_ops/Round.md new file mode 100644 index 00000000000..b3d3d4793f1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Round.md @@ -0,0 +1,79 @@ +description: Rounds the values of a tensor to the nearest integer, element-wise. + +
+ + +
+ +# tf.raw_ops.Round + + + + + + + + + +Rounds the values of a tensor to the nearest integer, element-wise. + + + + + + + + + +Rounds half to even. Also known as bankers rounding. If you want to round +according to the current system rounding mode use std::cint. + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
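+
+#### Example:
+
+A small demonstration of the round-half-to-even behavior through the public
+`tf.round` wrapper for this op: the halves 0.5, 1.5, and 2.5 all go to the
+nearest even integer.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0.5, 1.5, 2.5, -0.5, 2.4])
+print(tf.round(x).numpy())
+# [ 0.  2.  2. -0.  2.]
+```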
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Rsqrt.md b/site/en/api_docs/python/tf/raw_ops/Rsqrt.md new file mode 100644 index 00000000000..73ed8444c54 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Rsqrt.md @@ -0,0 +1,78 @@ +description: Computes reciprocal of square root of x element-wise. + +
+ + +
+ +# tf.raw_ops.Rsqrt + + + + + + + + + +Computes reciprocal of square root of x element-wise. + + + + + + + + + +I.e., \\(y = 1 / \sqrt{x}\\). + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/RsqrtGrad.md b/site/en/api_docs/python/tf/raw_ops/RsqrtGrad.md new file mode 100644 index 00000000000..d7aa24fea7b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/RsqrtGrad.md @@ -0,0 +1,86 @@ +description: Computes the gradient for the rsqrt of x wrt its input. + +
+ + +
+ +# tf.raw_ops.RsqrtGrad + + + + + + + + + +Computes the gradient for the rsqrt of `x` wrt its input. + + + + + + + + + +Specifically, `grad = dy * -0.5 * y^3`, where `y = rsqrt(x)`, and `dy` +is the corresponding input gradient. + + + + + + + + + + + + + + + + +
+`y` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`dy` + +A `Tensor`. Must have the same type as `y`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `y`. +
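+
+#### Example:
+
+A quick check of the stated identity `grad = dy * -0.5 * y^3` using automatic
+differentiation (made-up input; here `dy = 1`).
+
+```python
+import tensorflow as tf
+
+x = tf.constant(4.0)
+with tf.GradientTape() as tape:
+  tape.watch(x)
+  y = tf.math.rsqrt(x)          # y = 0.5
+
+print(tape.gradient(y, x).numpy())              # -0.0625
+print((-0.5 * tf.math.rsqrt(x) ** 3).numpy())   # -0.0625
+```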
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SampleDistortedBoundingBox.md b/site/en/api_docs/python/tf/raw_ops/SampleDistortedBoundingBox.md new file mode 100644 index 00000000000..f0772a3d5f2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SampleDistortedBoundingBox.md @@ -0,0 +1,215 @@ +description: Generate a single randomly distorted bounding box for an image. + +
+ + +
+ +# tf.raw_ops.SampleDistortedBoundingBox + + + + + + + + + +Generate a single randomly distorted bounding box for an image. + + + + + + + + + +Bounding box annotations are often supplied in addition to ground-truth labels +in image recognition or object localization tasks. A common technique for +training such a system is to randomly distort an image while preserving +its content, i.e. *data augmentation*. This Op outputs a randomly distorted +localization of an object, i.e. bounding box, given an `image_size`, +`bounding_boxes` and a series of constraints. + +The output of this Op is a single bounding box that may be used to crop the +original image. The output is returned as 3 tensors: `begin`, `size` and +`bboxes`. The first 2 tensors can be fed directly into tf.slice to crop the +image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize +what the bounding box looks like. + +Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The +bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and +height of the underlying image. + +For example, + +```python + # Generate a single distorted bounding box. + begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box( + tf.shape(image), + bounding_boxes=bounding_boxes) + + # Draw the bounding box in an image summary. + image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0), + bbox_for_draw) + tf.summary.image('images_with_box', image_with_box) + + # Employ the bounding box to distort the image. + distorted_image = tf.slice(image, begin, size) +``` + +Note that if no bounding box information is available, setting +`use_image_if_no_bounding_boxes = true` will assume there is a single implicit +bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is +false and no bounding boxes are supplied, an error is raised. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`image_size` + +A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`. +1-D, containing `[height, width, channels]`. +
+`bounding_boxes` + +A `Tensor` of type `float32`. +3-D with shape `[batch, N, 4]` describing the N bounding boxes +associated with the image. +
+`seed` + +An optional `int`. Defaults to `0`. +If either `seed` or `seed2` are set to non-zero, the random number +generator is seeded by the given `seed`. Otherwise, it is seeded by a random +seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`min_object_covered` + +An optional `float`. Defaults to `0.1`. +The cropped area of the image must contain at least this +fraction of any bounding box supplied. The value of this parameter should be +non-negative. In the case of 0, the cropped area does not need to overlap +any of the bounding boxes supplied. +
+`aspect_ratio_range` + +An optional list of `floats`. Defaults to `[0.75, 1.33]`. +The cropped area of the image must have an aspect ratio = +width / height within this range. +
+`area_range` + +An optional list of `floats`. Defaults to `[0.05, 1]`. +The cropped area of the image must contain a fraction of the +supplied image within this range. +
+`max_attempts` + +An optional `int`. Defaults to `100`. +Number of attempts at generating a cropped region of the image +of the specified constraints. After `max_attempts` failures, return the entire +image. +
+`use_image_if_no_bounding_boxes` + +An optional `bool`. Defaults to `False`. +Controls behavior if no bounding boxes supplied. +If true, assume an implicit bounding box covering the whole input. If false, +raise an error. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (begin, size, bboxes). +
+`begin` + +A `Tensor`. Has the same type as `image_size`. +
+`size` + +A `Tensor`. Has the same type as `image_size`. +
+`bboxes` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SampleDistortedBoundingBoxV2.md b/site/en/api_docs/python/tf/raw_ops/SampleDistortedBoundingBoxV2.md new file mode 100644 index 00000000000..9063508d468 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SampleDistortedBoundingBoxV2.md @@ -0,0 +1,215 @@ +description: Generate a single randomly distorted bounding box for an image. + +
+ + +
+ +# tf.raw_ops.SampleDistortedBoundingBoxV2 + + + + + + + + + +Generate a single randomly distorted bounding box for an image. + + + + + + + + + +Bounding box annotations are often supplied in addition to ground-truth labels +in image recognition or object localization tasks. A common technique for +training such a system is to randomly distort an image while preserving +its content, i.e. *data augmentation*. This Op outputs a randomly distorted +localization of an object, i.e. bounding box, given an `image_size`, +`bounding_boxes` and a series of constraints. + +The output of this Op is a single bounding box that may be used to crop the +original image. The output is returned as 3 tensors: `begin`, `size` and +`bboxes`. The first 2 tensors can be fed directly into tf.slice to crop the +image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize +what the bounding box looks like. + +Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The +bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and +height of the underlying image. + +For example, + +```python + # Generate a single distorted bounding box. + begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box( + tf.shape(image), + bounding_boxes=bounding_boxes) + + # Draw the bounding box in an image summary. + image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0), + bbox_for_draw) + tf.summary.image('images_with_box', image_with_box) + + # Employ the bounding box to distort the image. + distorted_image = tf.slice(image, begin, size) +``` + +Note that if no bounding box information is available, setting +`use_image_if_no_bounding_boxes = true` will assume there is a single implicit +bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is +false and no bounding boxes are supplied, an error is raised. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`image_size` + +A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`. +1-D, containing `[height, width, channels]`. +
+`bounding_boxes` + +A `Tensor` of type `float32`. +3-D with shape `[batch, N, 4]` describing the N bounding boxes +associated with the image. +
+`min_object_covered` + +A `Tensor` of type `float32`. +The cropped area of the image must contain at least this +fraction of any bounding box supplied. The value of this parameter should be +non-negative. In the case of 0, the cropped area does not need to overlap +any of the bounding boxes supplied. +
+`seed` + +An optional `int`. Defaults to `0`. +If either `seed` or `seed2` are set to non-zero, the random number +generator is seeded by the given `seed`. Otherwise, it is seeded by a random +seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`aspect_ratio_range` + +An optional list of `floats`. Defaults to `[0.75, 1.33]`. +The cropped area of the image must have an aspect ratio = +width / height within this range. +
+`area_range` + +An optional list of `floats`. Defaults to `[0.05, 1]`. +The cropped area of the image must contain a fraction of the +supplied image within this range. +
+`max_attempts` + +An optional `int`. Defaults to `100`. +Number of attempts at generating a cropped region of the image +of the specified constraints. After `max_attempts` failures, return the entire +image. +
+`use_image_if_no_bounding_boxes` + +An optional `bool`. Defaults to `False`. +Controls behavior if no bounding boxes supplied. +If true, assume an implicit bounding box covering the whole input. If false, +raise an error. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (begin, size, bboxes). +
+`begin` + +A `Tensor`. Has the same type as `image_size`. +
+`size` + +A `Tensor`. Has the same type as `image_size`. +
+`bboxes` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SamplingDataset.md b/site/en/api_docs/python/tf/raw_ops/SamplingDataset.md new file mode 100644 index 00000000000..45dacfdebad --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SamplingDataset.md @@ -0,0 +1,121 @@ +description: Creates a dataset that takes a Bernoulli sample of the contents of another dataset. + +
+ + +
+ +# tf.raw_ops.SamplingDataset + + + + + + + + + +Creates a dataset that takes a Bernoulli sample of the contents of another dataset. + + + + + + + + + +There is no transformation in the tf.data Python API for creating this dataset. +Instead, it is created as a result of the `filter_with_random_uniform_fusion` +static optimization. Whether this optimization is performed is determined by the +`experimental_optimization.filter_with_random_uniform_fusion` option of +tf.data.Options. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`rate` + +A `Tensor` of type `float32`. +A scalar representing the sample rate. Each element of `input_dataset` is +retained with this probability, independent of all other elements. +
+`seed` + +A `Tensor` of type `int64`. +A scalar representing seed of random number generator. +
+`seed2` + +A `Tensor` of type `int64`. +A scalar representing seed2 of random number generator. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
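There is no direct constructor for this dataset in user code; the closest one can get is to opt in to the fusion described above. A minimal sketch, assuming a TensorFlow build in which the `filter_with_random_uniform_fusion` optimization option is still present:

```python
import tensorflow as tf

# A filter whose predicate compares against a random uniform draw; this is
# the pattern the static optimization can rewrite into a SamplingDataset.
ds = tf.data.Dataset.range(1000)
ds = ds.filter(lambda x: tf.random.uniform([]) < 0.1)

options = tf.data.Options()
options.experimental_optimization.filter_with_random_uniform_fusion = True
ds = ds.with_options(options)

for x in ds.take(3):
  print(x.numpy())
```

Whether the fusion actually fires is an implementation detail; the pipeline behaves the same either way.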
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Save.md b/site/en/api_docs/python/tf/raw_ops/Save.md new file mode 100644 index 00000000000..7e2893a3d0c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Save.md @@ -0,0 +1,98 @@ +description: Saves the input tensors to disk. + +
+ + +
+ +# tf.raw_ops.Save + + + + + + + + + +Saves the input tensors to disk. + + + + + + + + + +The size of `tensor_names` must match the number of tensors in `data`. `data[i]` +is written to `filename` with name `tensor_names[i]`. + +See also `SaveSlices`. + + + + + + + + + + + + + + + + + + + +
+`filename` + +A `Tensor` of type `string`. +Must have a single element. The name of the file to which we write +the tensor. +
+`tensor_names` + +A `Tensor` of type `string`. +Shape `[N]`. The names of the tensors to be saved. +
+`data` + +A list of `Tensor` objects. `N` tensors to save. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
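As with all `tf.raw_ops` symbols, the op is called with keyword arguments only. A small sketch, using a placeholder output path:

```python
import tensorflow as tf

# Writes a V1-format checkpoint file containing two named tensors.
tf.raw_ops.Save(
    filename=tf.constant("/tmp/raw_save_demo.ckpt"),   # placeholder path
    tensor_names=tf.constant(["weights", "bias"]),
    data=[tf.constant([[1.0, 2.0], [3.0, 4.0]]),
          tf.constant([0.1, 0.2])])
```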
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SaveSlices.md b/site/en/api_docs/python/tf/raw_ops/SaveSlices.md new file mode 100644 index 00000000000..5c3a573487b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SaveSlices.md @@ -0,0 +1,124 @@ +description: Saves input tensors slices to disk. + +
+ + +
+ +# tf.raw_ops.SaveSlices + + + + + + + + + +Saves input tensors slices to disk. + + + + + + + + + +This is like `Save` except that tensors can be listed in the saved file as being +a slice of a larger tensor. `shapes_and_slices` specifies the shape of the +larger tensor and the slice that this tensor covers. `shapes_and_slices` must +have as many elements as `tensor_names`. + +Elements of the `shapes_and_slices` input must either be: + +* The empty string, in which case the corresponding tensor is + saved normally. +* A string of the form `dim0 dim1 ... dimN-1 slice-spec` where the + `dimI` are the dimensions of the larger tensor and `slice-spec` + specifies what part is covered by the tensor to save. + +`slice-spec` itself is a `:`-separated list: `slice0:slice1:...:sliceN-1` +where each `sliceI` is either: + +* The string `-` meaning that the slice covers all indices of this dimension +* `start,length` where `start` and `length` are integers. In that + case the slice covers `length` indices starting at `start`. + +See also `Save`. + + + + + + + + + + + + + + + + + + + + + + +
+`filename` + +A `Tensor` of type `string`. +Must have a single element. The name of the file to which we write the +tensor. +
+`tensor_names` + +A `Tensor` of type `string`. +Shape `[N]`. The names of the tensors to be saved. +
+`shapes_and_slices` + +A `Tensor` of type `string`. +Shape `[N]`. The shapes and slice specifications to use when +saving the tensors. +
+`data` + +A list of `Tensor` objects. `N` tensors to save. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
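To make the `shapes_and_slices` string format concrete, here is a hedged sketch that saves elements 2..5 of a conceptual length-10 vector; the file path is a placeholder:

```python
import tensorflow as tf

# "10 2,4": the full tensor has dim0 = 10, and the saved slice starts at
# index 2 with length 4 (per the slice-spec format described above).
part = tf.range(2.0, 6.0)   # the 4 values actually written

tf.raw_ops.SaveSlices(
    filename=tf.constant("/tmp/raw_save_slices_demo.ckpt"),  # placeholder path
    tensor_names=tf.constant(["big_vector"]),
    shapes_and_slices=tf.constant(["10 2,4"]),
    data=[part])
```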
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SaveV2.md b/site/en/api_docs/python/tf/raw_ops/SaveV2.md new file mode 100644 index 00000000000..04adaeefd83 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SaveV2.md @@ -0,0 +1,106 @@ +description: Saves tensors in V2 checkpoint format. + +
+ + +
+ +# tf.raw_ops.SaveV2 + + + + + + + + + +Saves tensors in V2 checkpoint format. + + + + + + + + + +By default, saves the named tensors in full. If the caller wishes to save +specific slices of full tensors, "shape_and_slices" should be non-empty strings +and correspondingly well-formed. + + + + + + + + + + + + + + + + + + + + + + +
+`prefix` + +A `Tensor` of type `string`. +Must have a single element. The prefix of the V2 checkpoint to which we +write the tensors. +
+`tensor_names` + +A `Tensor` of type `string`. +shape {N}. The names of the tensors to be saved. +
+`shape_and_slices` + +A `Tensor` of type `string`. +shape {N}. The slice specs of the tensors to be saved. +Empty strings indicate that they are non-partitioned tensors. +
+`tensors` + +A list of `Tensor` objects. `N` tensors to save. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
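A hedged sketch of a round trip through the V2 checkpoint format, pairing this op with its counterpart `tf.raw_ops.RestoreV2`; the checkpoint prefix is a placeholder:

```python
import tensorflow as tf

prefix = tf.constant("/tmp/raw_savev2_demo")   # placeholder checkpoint prefix
names = tf.constant(["w", "b"])
slices = tf.constant(["", ""])                 # empty strings: not partitioned

tf.raw_ops.SaveV2(prefix=prefix, tensor_names=names,
                  shape_and_slices=slices,
                  tensors=[tf.constant([[1.0, 2.0]]), tf.constant([0.5])])

# Read the same tensors back from the V2 checkpoint.
w, b = tf.raw_ops.RestoreV2(prefix=prefix, tensor_names=names,
                            shape_and_slices=slices,
                            dtypes=[tf.float32, tf.float32])
print(w.numpy(), b.numpy())
```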
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScalarSummary.md b/site/en/api_docs/python/tf/raw_ops/ScalarSummary.md new file mode 100644 index 00000000000..cf0134f287e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScalarSummary.md @@ -0,0 +1,87 @@ +description: Outputs a Summary protocol buffer with scalar values. + +
+ + +
+ +# tf.raw_ops.ScalarSummary + + + + + + + + + +Outputs a `Summary` protocol buffer with scalar values. + + + + + + + + + +The input `tags` and `values` must have the same shape. The generated summary +has a summary value for each tag-value pair in `tags` and `values`. + + + + + + + + + + + + + + + + +
+`tags` + +A `Tensor` of type `string`. Tags for the summary. +
+
`values`

A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
Same shape as `tags`. Values for the summary.
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
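A small sketch of calling the op directly; the result is a scalar string tensor holding a serialized `Summary` protocol buffer with one value per tag/value pair:

```python
import tensorflow as tf

summary_bytes = tf.raw_ops.ScalarSummary(
    tags=tf.constant(["loss", "accuracy"]),
    values=tf.constant([0.25, 0.91]))

# A scalar string tensor containing the serialized Summary proto.
print(summary_bytes.dtype)  # <dtype: 'string'>
```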
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScaleAndTranslate.md b/site/en/api_docs/python/tf/raw_ops/ScaleAndTranslate.md new file mode 100644 index 00000000000..090f0ae3323 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScaleAndTranslate.md @@ -0,0 +1,111 @@ +
+ + +
+ +# tf.raw_ops.ScaleAndTranslate + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`images` + +A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`size` + +A `Tensor` of type `int32`. +
+`scale` + +A `Tensor` of type `float32`. +
+`translation` + +A `Tensor` of type `float32`. +
+`kernel_type` + +An optional `string`. Defaults to `"lanczos3"`. +
+`antialias` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
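Since this page carries no prose description, here is a hedged sketch of a direct call. The `scale` and `translation` inputs are two-element float vectors applied per spatial axis; the axis ordering is assumed here to match `size` (rows, then columns):

```python
import tensorflow as tf

# One 4x4 single-channel image with values 0..15.
images = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])

resized = tf.raw_ops.ScaleAndTranslate(
    images=images,
    size=tf.constant([8, 8], dtype=tf.int32),   # output spatial size
    scale=tf.constant([2.0, 2.0]),              # per-axis scale factor
    translation=tf.constant([0.0, 0.0]),        # per-axis offset
    kernel_type="lanczos3",
    antialias=True)

print(resized.shape)  # (1, 8, 8, 1)
```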
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScaleAndTranslateGrad.md b/site/en/api_docs/python/tf/raw_ops/ScaleAndTranslateGrad.md new file mode 100644 index 00000000000..3af50ff4cdc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScaleAndTranslateGrad.md @@ -0,0 +1,111 @@ +
+ + +
+ +# tf.raw_ops.ScaleAndTranslateGrad + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`grads` + +A `Tensor`. Must be one of the following types: `float32`. +
+`original_image` + +A `Tensor`. Must have the same type as `grads`. +
+`scale` + +A `Tensor` of type `float32`. +
+`translation` + +A `Tensor` of type `float32`. +
+`kernel_type` + +An optional `string`. Defaults to `"lanczos3"`. +
+`antialias` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `grads`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScanDataset.md b/site/en/api_docs/python/tf/raw_ops/ScanDataset.md new file mode 100644 index 00000000000..bf0b72b81f4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScanDataset.md @@ -0,0 +1,127 @@ +description: Creates a dataset successively reduces f over the elements of input_dataset. + +
+ + +
+ +# tf.raw_ops.ScanDataset + + + + + + + + + +Creates a dataset successively reduces `f` over the elements of `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`initial_state` + +A list of `Tensor` objects. +
+`other_arguments` + +A list of `Tensor` objects. +
+`f` + +A function decorated with @Defun. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`preserve_cardinality` + +An optional `bool`. Defaults to `False`. +
+`use_default_device` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
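In user code this dataset is normally produced by the public `tf.data.experimental.scan` transformation rather than by calling the raw op with a `@Defun` function. A hedged sketch of the equivalent user-level code, computing a running sum:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(5)
ds = ds.apply(tf.data.experimental.scan(
    initial_state=tf.constant(0, dtype=tf.int64),
    scan_func=lambda state, x: (state + x, state + x)))

print(list(ds.as_numpy_iterator()))  # [0, 1, 3, 6, 10]
```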
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterAdd.md b/site/en/api_docs/python/tf/raw_ops/ScatterAdd.md new file mode 100644 index 00000000000..63dc3c4ccf9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterAdd.md @@ -0,0 +1,125 @@ +description: Adds sparse updates to a variable reference. + +
+ + +
+ +# tf.raw_ops.ScatterAdd + + + + + + + + + +Adds sparse updates to a variable reference. + + + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] += updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] += updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] += updates[i, ..., j, ...] + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions add. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to add to `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the addition will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
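The `ref` input is a TF1-style reference variable. With TF2 resource variables, a counterpart of the same sparse addition is available as `tf.Variable.scatter_add`; a small sketch:

```python
import tensorflow as tf

v = tf.Variable([1.0, 2.0, 3.0, 4.0])
# Add 10 at index 0 and 20 at index 2.
v.scatter_add(tf.IndexedSlices(values=tf.constant([10.0, 20.0]),
                               indices=tf.constant([0, 2])))
print(v.numpy())  # [11.  2. 23.  4.]
```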
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterDiv.md b/site/en/api_docs/python/tf/raw_ops/ScatterDiv.md new file mode 100644 index 00000000000..fdeeb6061f2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterDiv.md @@ -0,0 +1,123 @@ +description: Divides a variable reference by sparse updates. + +
+ + +
+ +# tf.raw_ops.ScatterDiv + + + + + + + + + +Divides a variable reference by sparse updates. + + + + + + + + + +This operation computes + +```python + # Scalar indices + ref[indices, ...] /= updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] /= updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...] +``` + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions divide. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of values that `ref` is divided by. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the operation will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterMax.md b/site/en/api_docs/python/tf/raw_ops/ScatterMax.md new file mode 100644 index 00000000000..8ee11196a94 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterMax.md @@ -0,0 +1,125 @@ +description: Reduces sparse updates into a variable reference using the max operation. + +
+ + +
+ +# tf.raw_ops.ScatterMax + + + + + + + + + +Reduces sparse updates into a variable reference using the `max` operation. + + + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] = max(ref[indices, ...], updates[...]) + + # Vector indices (for each i) + ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...]) + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...]) + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions combine. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`, `int64`. +Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to reduce into `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the update will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
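As with the other scatter ops, TF2 resource variables expose a counterpart of this reduction as `tf.Variable.scatter_max`; a small sketch:

```python
import tensorflow as tf

v = tf.Variable([1.0, 5.0, 3.0])
# Element-wise max of the existing values and the updates at the given indices.
v.scatter_max(tf.IndexedSlices(values=tf.constant([4.0, 2.0]),
                               indices=tf.constant([0, 1])))
print(v.numpy())  # [4. 5. 3.]
```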
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterMin.md b/site/en/api_docs/python/tf/raw_ops/ScatterMin.md new file mode 100644 index 00000000000..9332cde86f8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterMin.md @@ -0,0 +1,125 @@ +description: Reduces sparse updates into a variable reference using the min operation. + +
+ + +
+ +# tf.raw_ops.ScatterMin + + + + + + + + + +Reduces sparse updates into a variable reference using the `min` operation. + + + + + + + + + +This operation computes + + # Scalar indices + ref[indices, ...] = min(ref[indices, ...], updates[...]) + + # Vector indices (for each i) + ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...]) + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...]) + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions combine. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`, `int64`. +Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to reduce into `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the update will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterMul.md b/site/en/api_docs/python/tf/raw_ops/ScatterMul.md new file mode 100644 index 00000000000..25b93ae80ee --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterMul.md @@ -0,0 +1,123 @@ +description: Multiplies sparse updates into a variable reference. + +
+ + +
+ +# tf.raw_ops.ScatterMul + + + + + + + + + +Multiplies sparse updates into a variable reference. + + + + + + + + + +This operation computes + +```python + # Scalar indices + ref[indices, ...] *= updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] *= updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...] +``` + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions multiply. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to multiply to `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the operation will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterNd.md b/site/en/api_docs/python/tf/raw_ops/ScatterNd.md new file mode 100644 index 00000000000..d44d5b7cc40 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterNd.md @@ -0,0 +1,173 @@ +description: Scatter updates into a new tensor according to indices. + +
+ + +
+ +# tf.raw_ops.ScatterNd + + + + + + + + + +Scatter `updates` into a new tensor according to `indices`. + + + + + + + + + +Creates a new tensor by applying sparse `updates` to individual values or +slices within a tensor (initially zero for numeric, empty for string) of +the given `shape` according to indices. This operator is the inverse of the +tf.gather_nd operator which extracts values or slices from a given tensor. + +This operation is similar to tensor_scatter_add, except that the tensor is +zero-initialized. Calling tf.scatter_nd(indices, values, shape) is identical +to `tensor_scatter_add(tf.zeros(shape, values.dtype), indices, values)` + +If `indices` contains duplicates, then their updates are accumulated (summed). + +**WARNING**: The order in which updates are applied is nondeterministic, so the +output will be nondeterministic if `indices` contains duplicates -- because +of some numerical approximation issues, numbers summed in different order +may yield different results. + +`indices` is an integer tensor containing indices into a new tensor of shape +`shape`. The last dimension of `indices` can be at most the rank of `shape`: + + indices.shape[-1] <= shape.rank + +The last dimension of `indices` corresponds to indices into elements +(if `indices.shape[-1] = shape.rank`) or slices +(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of +`shape`. `updates` is a tensor with shape + + indices.shape[:-1] + shape[indices.shape[-1]:] + +The simplest form of scatter is to insert individual elements in a tensor by +index. For example, say we want to insert 4 scattered elements in a rank-1 +tensor with 8 elements. + +
+ +
+

In Python, this scatter operation would look like this:

```python
    indices = tf.constant([[4], [3], [1], [7]])
    updates = tf.constant([9, 10, 11, 12])
    shape = tf.constant([8])
    scatter = tf.scatter_nd(indices, updates, shape)
    print(scatter)
```

The resulting tensor would look like this:

    [0, 11, 0, 10, 9, 0, 0, 12]

We can also insert entire slices of a higher-rank tensor all at once. For
example, we can insert two slices into the first dimension of a rank-3 tensor
with two matrices of new values.
+ +
+ +In Python, this scatter operation would look like this: + +```python + indices = tf.constant([[0], [2]]) + updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]], + [[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]]]) + shape = tf.constant([4, 4, 4]) + scatter = tf.scatter_nd(indices, updates, shape) + print(scatter) +``` + +The resulting tensor would look like this: + + [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], + [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], + [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], + [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]] + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, the index is ignored. + + + + + + + + + + + + + + + + + + + +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`updates` + +A `Tensor`. Updates to scatter into output. +
+`shape` + +A `Tensor`. Must have the same type as `indices`. +1-D. The shape of the resulting tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `updates`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterNdAdd.md b/site/en/api_docs/python/tf/raw_ops/ScatterNdAdd.md new file mode 100644 index 00000000000..b4bd4822e1b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterNdAdd.md @@ -0,0 +1,139 @@ +description: Applies sparse addition to individual values or slices in a Variable. + +
+ + +
+

# tf.raw_ops.ScatterNdAdd




Applies sparse addition to individual values or slices in a Variable.




`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`.
It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
```

For example, say we want to add 4 scattered elements to a rank-1 tensor
with 8 elements. In Python, that addition would look like this:

```python
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = tf.scatter_nd_add(ref, indices, updates)
with tf.Session() as sess:
  print(sess.run(add))
```

The resulting update to `ref` would look like this:

    [1, 13, 3, 14, 14, 6, 7, 20]

See tf.scatter_nd for more details about how to make updates to
slices.
+
`ref`

A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
Should be from a `Variable` node.

+
`indices`

A `Tensor`. Must be one of the following types: `int32`, `int64`.
A tensor of indices into `ref`.

+
`updates`

A `Tensor`. Must have the same type as `ref`.
A tensor of updated values to add to `ref`.

+
`use_locking`

An optional `bool`. Defaults to `False`.
If True, the assignment will be protected by a lock;
otherwise the behavior is undefined, but may exhibit less contention.
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterNdNonAliasingAdd.md b/site/en/api_docs/python/tf/raw_ops/ScatterNdNonAliasingAdd.md new file mode 100644 index 00000000000..c01a8f5f43e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterNdNonAliasingAdd.md @@ -0,0 +1,129 @@ +description: Applies sparse addition to input using individual values or slices + +
+ + +
+ +# tf.raw_ops.ScatterNdNonAliasingAdd + + + + + + + + + +Applies sparse addition to `input` using individual values or slices + + + + + + + + + +from `updates` according to indices `indices`. The updates are non-aliasing: +`input` is only modified in-place if no other operations will use it. +Otherwise, a copy of `input` is made. This operation has a gradient with +respect to both `input` and `updates`. + +`input` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`. + +`indices` must be integer tensor, containing indices into `input`. +It must be shape \\([d_0, ..., d_{Q-2}, K]\\) where `0 < K <= P`. + +The innermost dimension of `indices` (with length `K`) corresponds to +indices into elements (if `K = P`) or `(P-K)`-dimensional slices +(if `K < P`) along the `K`th dimension of `input`. + +`updates` is `Tensor` of rank `Q-1+P-K` with shape: + +$$[d_0, ..., d_{Q-2}, input.shape[K], ..., input.shape[P-1]].$$ + +For example, say we want to add 4 scattered elements to a rank-1 tensor to 8 +elements. In Python, that addition would look like this: + + input = tf.constant([1, 2, 3, 4, 5, 6, 7, 8]) + indices = tf.constant([[4], [3], [1], [7]]) + updates = tf.constant([9, 10, 11, 12]) + output = tf.scatter_nd_non_aliasing_add(input, indices, updates) + with tf.Session() as sess: + print(sess.run(output)) + +The resulting value `output` would look like this: + + [1, 13, 3, 14, 14, 6, 7, 20] + +See tf.scatter_nd for more details about how to make updates to slices. + + + + + + + + + + + + + + + + + + + +
+
`input`

A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.

+
`indices`

A `Tensor`. Must be one of the following types: `int32`, `int64`.
A tensor of indices into `input`.

+
`updates`

A `Tensor`. Must have the same type as `input`.
A tensor of updated values to add to `input`.
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterNdSub.md b/site/en/api_docs/python/tf/raw_ops/ScatterNdSub.md new file mode 100644 index 00000000000..e6358208c5e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterNdSub.md @@ -0,0 +1,141 @@ +description: Applies sparse subtraction to individual values or slices in a Variable. + +
+ + +
+

# tf.raw_ops.ScatterNdSub




Applies sparse subtraction to individual values or slices in a Variable.




The subtraction is applied within the given variable according to `indices`.

`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`.
It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
```

For example, say we want to subtract 4 scattered elements from a rank-1 tensor
with 8 elements. In Python, that subtraction would look like this:

```python
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
sub = tf.scatter_nd_sub(ref, indices, updates)
with tf.Session() as sess:
  print(sess.run(sub))
```

The resulting update to `ref` would look like this:

    [1, -9, 3, -6, -4, 6, 7, -4]

See tf.scatter_nd for more details about how to make updates to
slices.
+
`ref`

A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
Should be from a `Variable` node.

+
`indices`

A `Tensor`. Must be one of the following types: `int32`, `int64`.
A tensor of indices into `ref`.

+
`updates`

A `Tensor`. Must have the same type as `ref`.
A tensor of updated values to subtract from `ref`.

+
`use_locking`

An optional `bool`. Defaults to `False`.
If True, the assignment will be protected by a lock;
otherwise the behavior is undefined, but may exhibit less contention.
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterNdUpdate.md b/site/en/api_docs/python/tf/raw_ops/ScatterNdUpdate.md new file mode 100644 index 00000000000..9e263a52275 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterNdUpdate.md @@ -0,0 +1,140 @@ +description: Applies sparse updates to individual values or slices within a given + +
+ + +
+

# tf.raw_ops.ScatterNdUpdate




Applies sparse `updates` to individual values or slices within a given variable according to `indices`.




`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`.
It must be shape \\([d_0, ..., d_{Q-2}, K]\\) where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

$$[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].$$

For example, say we want to update 4 scattered elements in a rank-1 tensor
with 8 elements. In Python, that update would look like this:

```python
    ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
    indices = tf.constant([[4], [3], [1], [7]])
    updates = tf.constant([9, 10, 11, 12])
    update = tf.scatter_nd_update(ref, indices, updates)
    with tf.Session() as sess:
      print(sess.run(update))
```

The resulting update to `ref` would look like this:

    [1, 11, 3, 10, 9, 6, 7, 12]

See tf.scatter_nd for more details about how to make updates to
slices.

See also `tf.scatter_update` and `tf.batch_scatter_update`.
+
`ref`

A mutable `Tensor`. Should be from a `Variable` node.

+
`indices`

A `Tensor`. Must be one of the following types: `int32`, `int64`.
A tensor of indices into `ref`.

+
`updates`

A `Tensor`. Must have the same type as `ref`.
A tensor of updated values to store in `ref`.

+
`use_locking`

An optional `bool`. Defaults to `True`.
If True, the assignment will be protected by a lock;
otherwise the behavior is undefined, but may exhibit less contention.
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterSub.md b/site/en/api_docs/python/tf/raw_ops/ScatterSub.md new file mode 100644 index 00000000000..8433c5734c4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterSub.md @@ -0,0 +1,125 @@ +description: Subtracts sparse updates to a variable reference. + +
+ + +
+ +# tf.raw_ops.ScatterSub + + + + + + + + + +Subtracts sparse updates to a variable reference. + + + + + + + + + +```python + # Scalar indices + ref[indices, ...] -= updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] -= updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...] +``` + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their (negated) contributions add. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to subtract from `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the subtraction will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ScatterUpdate.md b/site/en/api_docs/python/tf/raw_ops/ScatterUpdate.md new file mode 100644 index 00000000000..fe1595d1cc5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ScatterUpdate.md @@ -0,0 +1,129 @@ +description: Applies sparse updates to a variable reference. + +
+ + +
+ +# tf.raw_ops.ScatterUpdate + + + + + + + + + +Applies sparse updates to a variable reference. + + + + + + + + + +This operation computes + +```python + # Scalar indices + ref[indices, ...] = updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] = updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] = updates[i, ..., j, ...] +``` + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +If values in `ref` is to be updated more than once, because there are +duplicate entries in `indices`, the order at which the updates happen +for each value is undefined. + +Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`. + +
+ +
+ +See also `tf.batch_scatter_update` and `tf.scatter_nd_update`. + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. Should be from a `Variable` node. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor of indices into the first dimension of `ref`. +
+`updates` + +A `Tensor`. Must have the same type as `ref`. +A tensor of updated values to store in `ref`. +
+`use_locking` + +An optional `bool`. Defaults to `True`. +If True, the assignment will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
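For TF2 resource variables, the corresponding assignment-style update is available as `tf.Variable.scatter_update`; a small sketch:

```python
import tensorflow as tf

v = tf.Variable([10.0, 20.0, 30.0, 40.0])
# Overwrite the values at indices 1 and 3.
v.scatter_update(tf.IndexedSlices(values=tf.constant([-1.0, -2.0]),
                                  indices=tf.constant([1, 3])))
print(v.numpy())  # [10. -1. 30. -2.]
```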
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SdcaFprint.md b/site/en/api_docs/python/tf/raw_ops/SdcaFprint.md new file mode 100644 index 00000000000..376e48334c6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SdcaFprint.md @@ -0,0 +1,78 @@ +description: Computes fingerprints of the input strings. + +
+ + +
+ +# tf.raw_ops.SdcaFprint + + + + + + + + + +Computes fingerprints of the input strings. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +vector of strings to compute fingerprints on. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SdcaOptimizer.md b/site/en/api_docs/python/tf/raw_ops/SdcaOptimizer.md new file mode 100644 index 00000000000..c1ff908da35 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SdcaOptimizer.md @@ -0,0 +1,245 @@ +description: Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for + +
+ + +
+ +# tf.raw_ops.SdcaOptimizer + + + + + + + + + +Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for + + + + + + + + + +linear models with L1 + L2 regularization. As global optimization objective is +strongly-convex, the optimizer optimizes the dual objective at each step. The +optimizer applies each update one example at a time. Examples are sampled +uniformly, and the optimizer is learning rate free and enjoys linear convergence +rate. + +[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf).
+Shai Shalev-Shwartz, Tong Zhang. 2012 + +$$Loss Objective = \sum f_{i} (wx_{i}) + (l2 / 2) * |w|^2 + l1 * |w|$$ + +[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508).
+Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, +Peter Richtarik, Martin Takac. 2015 + +[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053).
+Dominik Csiba, Zheng Qu, Peter Richtarik. 2015 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_example_indices` + +A list of `Tensor` objects with type `int64`. +a list of vectors which contain example indices. +
+`sparse_feature_indices` + +A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. +a list of vectors which contain feature indices. +
+`sparse_feature_values` + +A list of `Tensor` objects with type `float32`. +a list of vectors which contains feature value +associated with each feature group. +
+`dense_features` + +A list of `Tensor` objects with type `float32`. +a list of matrices which contains the dense feature values. +
+`example_weights` + +A `Tensor` of type `float32`. +a vector which contains the weight associated with each +example. +
+`example_labels` + +A `Tensor` of type `float32`. +a vector which contains the label/target associated with each +example. +
+
`sparse_indices`

A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`.
a list of vectors where each value is the index which has a
corresponding weight in sparse_weights. This field may be omitted for the
dense approach.
+`sparse_weights` + +A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. +a list of vectors where each value is the weight associated with +a sparse feature group. +
+`dense_weights` + +A list with the same length as `dense_features` of `Tensor` objects with type `float32`. +a list of vectors where the values are the weights associated +with a dense feature group. +
+`example_state_data` + +A `Tensor` of type `float32`. +a list of vectors containing the example state data. +
+`loss_type` + +A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. +Type of the primal loss. Currently SdcaSolver supports logistic, +squared and hinge losses. +
+`l1` + +A `float`. Symmetric l1 regularization strength. +
+`l2` + +A `float`. Symmetric l2 regularization strength. +
+`num_loss_partitions` + +An `int` that is `>= 1`. +Number of partitions of the global loss function. +
+`num_inner_iterations` + +An `int` that is `>= 1`. +Number of iterations per mini-batch. +
+`adaptative` + +An optional `bool`. Defaults to `True`. +Whether to use Adaptive SDCA for the inner loop. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights). +
+`out_example_state_data` + +A `Tensor` of type `float32`. +
+`out_delta_sparse_weights` + +A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. +
+`out_delta_dense_weights` + +A list with the same length as `dense_features` of `Tensor` objects with type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SdcaOptimizerV2.md b/site/en/api_docs/python/tf/raw_ops/SdcaOptimizerV2.md new file mode 100644 index 00000000000..646471e6211 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SdcaOptimizerV2.md @@ -0,0 +1,245 @@ +description: Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for + +
+ + +
+ +# tf.raw_ops.SdcaOptimizerV2 + + + + + + + + + +Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for + + + + + + + + + +linear models with L1 + L2 regularization. As global optimization objective is +strongly-convex, the optimizer optimizes the dual objective at each step. The +optimizer applies each update one example at a time. Examples are sampled +uniformly, and the optimizer is learning rate free and enjoys linear convergence +rate. + +[Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf).
+Shai Shalev-Shwartz, Tong Zhang. 2012 + +$$Loss Objective = \sum f_{i} (wx_{i}) + (l2 / 2) * |w|^2 + l1 * |w|$$ + +[Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508).
+Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, +Peter Richtarik, Martin Takac. 2015 + +[Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053).
+Dominik Csiba, Zheng Qu, Peter Richtarik. 2015 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_example_indices` + +A list of `Tensor` objects with type `int64`. +a list of vectors which contain example indices. +
+`sparse_feature_indices` + +A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`. +a list of vectors which contain feature indices. +
+`sparse_feature_values` + +A list of `Tensor` objects with type `float32`. +a list of vectors which contains feature value +associated with each feature group. +
+`dense_features` + +A list of `Tensor` objects with type `float32`. +a list of matrices which contains the dense feature values. +
+`example_weights` + +A `Tensor` of type `float32`. +a vector which contains the weight associated with each +example. +
+`example_labels` + +A `Tensor` of type `float32`. +a vector which contains the label/target associated with each +example. +
+
`sparse_indices`

A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`.
a list of vectors where each value is the index which has a
corresponding weight in sparse_weights. This field may be omitted for the
dense approach.
+`sparse_weights` + +A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. +a list of vectors where each value is the weight associated with +a sparse feature group. +
+`dense_weights` + +A list with the same length as `dense_features` of `Tensor` objects with type `float32`. +a list of vectors where the values are the weights associated +with a dense feature group. +
+`example_state_data` + +A `Tensor` of type `float32`. +a list of vectors containing the example state data. +
+`loss_type` + +A `string` from: `"logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss"`. +Type of the primal loss. Currently SdcaSolver supports logistic, +squared and hinge losses. +
+`l1` + +A `float`. Symmetric l1 regularization strength. +
+`l2` + +A `float`. Symmetric l2 regularization strength. +
+`num_loss_partitions` + +An `int` that is `>= 1`. +Number of partitions of the global loss function. +
+`num_inner_iterations` + +An `int` that is `>= 1`. +Number of iterations per mini-batch. +
+`adaptive` + +An optional `bool`. Defaults to `True`. +Whether to use Adaptive SDCA for the inner loop. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights). +
+`out_example_state_data` + +A `Tensor` of type `float32`. +
+`out_delta_sparse_weights` + +A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`. +
+`out_delta_dense_weights` + +A list with the same length as `dense_features` of `Tensor` objects with type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SdcaShrinkL1.md b/site/en/api_docs/python/tf/raw_ops/SdcaShrinkL1.md new file mode 100644 index 00000000000..dfcb0895a54 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SdcaShrinkL1.md @@ -0,0 +1,94 @@ +description: Applies L1 regularization shrink step on the parameters. + +
+ + +
+ +# tf.raw_ops.SdcaShrinkL1 + + + + + + + + + +Applies L1 regularization shrink step on the parameters. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`weights` + +A list of `Tensor` objects with type mutable `float32`. +a list of vectors where each value is the weight associated with a +feature group. +
+`l1` + +A `float`. Symmetric l1 regularization strength. +
+`l2` + +A `float`. +Symmetric l2 regularization strength. Should be a positive float. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SegmentMax.md b/site/en/api_docs/python/tf/raw_ops/SegmentMax.md new file mode 100644 index 00000000000..ce6446fb8ce --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SegmentMax.md @@ -0,0 +1,110 @@ +description: Computes the maximum along segments of a tensor. + +
+ + +
+ +# tf.raw_ops.SegmentMax + + + + + + + + + +Computes the maximum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \max_j(data_j)\\) where `max` is over `j` such +that `segment_ids[j] == i`. + +If the max is empty for a given segment ID `i`, `output[i] = 0`. + +
+ +
+ +#### For example: + + + +``` +c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) +tf.segment_max(c, tf.constant([0, 0, 1])) +# ==> [[4, 3, 3, 4], +# [5, 6, 7, 8]] +``` + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor whose size is equal to the size of `data`'s +first dimension. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SegmentMean.md b/site/en/api_docs/python/tf/raw_ops/SegmentMean.md new file mode 100644 index 00000000000..edfbb0f751a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SegmentMean.md @@ -0,0 +1,111 @@ +description: Computes the mean along segments of a tensor. + +
+ + +
+ +# tf.raw_ops.SegmentMean + + + + + + + + + +Computes the mean along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \frac{\sum_j data_j}{N}\\) where `mean` is +over `j` such that `segment_ids[j] == i` and `N` is the total number of +values summed. + +If the mean is empty for a given segment ID `i`, `output[i] = 0`. + +
+ +
+ +#### For example: + + + +``` +c = tf.constant([[1.0,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) +tf.segment_mean(c, tf.constant([0, 0, 1])) +# ==> [[2.5, 2.5, 2.5, 2.5], +# [5, 6, 7, 8]] +``` + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor whose size is equal to the size of `data`'s +first dimension. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SegmentMin.md b/site/en/api_docs/python/tf/raw_ops/SegmentMin.md new file mode 100644 index 00000000000..ae473146bcc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SegmentMin.md @@ -0,0 +1,110 @@ +description: Computes the minimum along segments of a tensor. + +
+ + +
+ +# tf.raw_ops.SegmentMin + + + + + + + + + +Computes the minimum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \min_j(data_j)\\) where `min` is over `j` such +that `segment_ids[j] == i`. + +If the min is empty for a given segment ID `i`, `output[i] = 0`. + +
+ +
+ +#### For example: + + + +``` +c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) +tf.segment_min(c, tf.constant([0, 0, 1])) +# ==> [[1, 2, 2, 1], +# [5, 6, 7, 8]] +``` + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor whose size is equal to the size of `data`'s +first dimension. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SegmentProd.md b/site/en/api_docs/python/tf/raw_ops/SegmentProd.md new file mode 100644 index 00000000000..4e5dd00efee --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SegmentProd.md @@ -0,0 +1,110 @@ +description: Computes the product along segments of a tensor. + +
+ + +
+ +# tf.raw_ops.SegmentProd + + + + + + + + + +Computes the product along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \prod_j data_j\\) where the product is over `j` such +that `segment_ids[j] == i`. + +If the product is empty for a given segment ID `i`, `output[i] = 1`. + +
+ +
+ +#### For example: + + + +``` +c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) +tf.segment_prod(c, tf.constant([0, 0, 1])) +# ==> [[4, 6, 6, 4], +# [5, 6, 7, 8]] +``` + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor whose size is equal to the size of `data`'s +first dimension. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SegmentSum.md b/site/en/api_docs/python/tf/raw_ops/SegmentSum.md new file mode 100644 index 00000000000..38e4670840f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SegmentSum.md @@ -0,0 +1,110 @@ +description: Computes the sum along segments of a tensor. + +
+ + +
+ +# tf.raw_ops.SegmentSum + + + + + + + + + +Computes the sum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \sum_j data_j\\) where sum is over `j` such +that `segment_ids[j] == i`. + +If the sum is empty for a given segment ID `i`, `output[i] = 0`. + +
+ +
+ +#### For example: + + + +``` +c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) +tf.segment_sum(c, tf.constant([0, 0, 1])) +# ==> [[5, 5, 5, 5], +# [5, 6, 7, 8]] +``` + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor whose size is equal to the size of `data`'s +first dimension. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Select.md b/site/en/api_docs/python/tf/raw_ops/Select.md new file mode 100644 index 00000000000..8e61ac7446d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Select.md @@ -0,0 +1,133 @@ +description: Selects elements from x or y, depending on condition. + +
+ + +
+ +# tf.raw_ops.Select + + + + + + + + + +Selects elements from `x` or `y`, depending on `condition`. + + + + + + + + + +The `x`, and `y` tensors must all have the same shape, and the +output will also have that shape. + +The `condition` tensor must be a scalar if `x` and `y` are scalars. +If `x` and `y` are vectors or higher rank, then `condition` must be either a +scalar, a vector with size matching the first dimension of `x`, or must have +the same shape as `x`. + +The `condition` tensor acts as a mask that chooses, based on the value at each +element, whether the corresponding element / row in the output should be +taken from `x` (if true) or `y` (if false). + +If `condition` is a vector and `x` and `y` are higher rank matrices, then +it chooses which row (outer dimension) to copy from `x` and `y`. +If `condition` has the same shape as `x` and `y`, then it chooses which +element to copy from `x` and `y`. + +#### For example: + + + +```python +# 'condition' tensor is [[True, False] +# [False, True]] +# 't' is [[1, 2], +# [3, 4]] +# 'e' is [[5, 6], +# [7, 8]] +select(condition, t, e) # => [[1, 6], [7, 4]] + + +# 'condition' tensor is [True, False] +# 't' is [[1, 2], +# [3, 4]] +# 'e' is [[5, 6], +# [7, 8]] +select(condition, t, e) ==> [[1, 2], + [7, 8]] + +``` + + + + + + + + + + + + + + + + + + + +
+`condition` + +A `Tensor` of type `bool`. +
+`x` + +A `Tensor` which may have the same shape as `condition`. +If `condition` is rank 1, `x` may have higher rank, +but its first dimension must match the size of `condition`. +
+`y` + +A `Tensor` with the same type and shape as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
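As a hedged, runnable illustration (TensorFlow 2.x assumed): when `condition`, `x`, and `y` all share the same shape, the public `tf.where` op performs this element-wise selection; the vector-`condition` row-selection form described above is specific to the legacy `Select` kernel, and `SelectV2` below is the broadcasting variant used by `tf.where` in TF 2.x.

```python
import tensorflow as tf

condition = tf.constant([[True, False],
                         [False, True]])
t = tf.constant([[1, 2], [3, 4]])
e = tf.constant([[5, 6], [7, 8]])

# Element-wise selection: take from `t` where condition is True, else from `e`.
print(tf.where(condition, t, e).numpy())   # [[1 6]
                                           #  [7 4]]
```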
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SelectV2.md b/site/en/api_docs/python/tf/raw_ops/SelectV2.md new file mode 100644 index 00000000000..5f46f77804b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SelectV2.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.SelectV2 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`condition` + +A `Tensor` of type `bool`. +
+`t` + +A `Tensor`. +
+`e` + +A `Tensor`. Must have the same type as `t`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `t`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SelfAdjointEig.md b/site/en/api_docs/python/tf/raw_ops/SelfAdjointEig.md new file mode 100644 index 00000000000..22b8ee97735 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SelfAdjointEig.md @@ -0,0 +1,85 @@ +description: Computes the Eigen Decomposition of a batch of square self-adjoint matrices. + +
+ + +
+ +# tf.raw_ops.SelfAdjointEig + + + + + + + + + +Computes the Eigen Decomposition of a batch of square self-adjoint matrices. + + + + + + + + + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices, with the same constraints as the single matrix +SelfAdjointEig. + +The result is a [..., M+1, M] matrix with [..., 0,:] containing the +eigenvalues, and subsequent [...,1:, :] containing the eigenvectors. The eigenvalues +are sorted in non-decreasing order. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`. +Shape is `[..., M, M]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SelfAdjointEigV2.md b/site/en/api_docs/python/tf/raw_ops/SelfAdjointEigV2.md new file mode 100644 index 00000000000..bdfb051d0ed --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SelfAdjointEigV2.md @@ -0,0 +1,112 @@ +description: Computes the eigen decomposition of one or more square self-adjoint matrices. + +
+ + +
+ +# tf.raw_ops.SelfAdjointEigV2 + + + + + + + + + +Computes the eigen decomposition of one or more square self-adjoint matrices. + + + + + + + + + +Computes the eigenvalues and (optionally) eigenvectors of each inner matrix in +`input` such that `input[..., :, :] = v[..., :, :] * diag(e[..., :])`. The eigenvalues +are sorted in non-decreasing order. + +```python +# a is a tensor. +# e is a tensor of eigenvalues. +# v is a tensor of eigenvectors. +e, v = self_adjoint_eig(a) +e = self_adjoint_eig(a, compute_v=False) +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +`Tensor` input of shape `[N, N]`. +
+`compute_v` + +An optional `bool`. Defaults to `True`. +If `True` then eigenvectors will be computed and returned in `v`. +Otherwise, only the eigenvalues will be computed. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (e, v). +
+`e` + +A `Tensor`. Has the same type as `input`. +
+`v` + +A `Tensor`. Has the same type as `input`. +
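A short runnable sketch, assuming TensorFlow 2.x; `tf.linalg.eigh` is the public entry point that typically exposes this decomposition.

```python
import tensorflow as tf

# A real symmetric (self-adjoint) matrix.
a = tf.constant([[2.0, 1.0],
                 [1.0, 2.0]])

e, v = tf.linalg.eigh(a)   # eigenvalues in non-decreasing order, eigenvectors as columns of v
print(e.numpy())           # approximately [1. 3.]

# Check the decomposition: a is close to v @ diag(e) @ v^T.
reconstructed = v @ tf.linalg.diag(e) @ tf.linalg.matrix_transpose(v)
print(tf.reduce_max(tf.abs(reconstructed - a)).numpy())   # close to 0
```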
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Selu.md b/site/en/api_docs/python/tf/raw_ops/Selu.md new file mode 100644 index 00000000000..afd7aea16b8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Selu.md @@ -0,0 +1,84 @@ +description: Computes scaled exponential linear: scale * alpha * (exp(features) - 1) + +
+ + +
+ +# tf.raw_ops.Selu + + + + + + + + + +Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)` + + + + + + + + + +if `features < 0`, `scale * features` otherwise. + +To be used together with a LeCun normal weight initializer such as +`tf.keras.initializers.VarianceScaling(scale=1.0, mode='fan_in')`. +For correct dropout, use `tf.keras.layers.AlphaDropout`. + +See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515). + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
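A quick runnable check, assuming TensorFlow 2.x; `tf.nn.selu` is the public wrapper.

```python
import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0])
y = tf.nn.selu(x)
print(y.numpy())
# Negative inputs follow scale * alpha * (exp(x) - 1), non-negative inputs follow scale * x,
# with scale ~ 1.0507 and alpha ~ 1.6733, so roughly [-1.1113, 0., 1.0507].
```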
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SeluGrad.md b/site/en/api_docs/python/tf/raw_ops/SeluGrad.md new file mode 100644 index 00000000000..b56bba44ef8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SeluGrad.md @@ -0,0 +1,86 @@ +description: Computes gradients for the scaled exponential linear (Selu) operation. + +
+ + +
+ +# tf.raw_ops.SeluGrad + + + + + + + + + +Computes gradients for the scaled exponential linear (Selu) operation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +The backpropagated gradients to the corresponding Selu operation. +
+`outputs` + +A `Tensor`. Must have the same type as `gradients`. +The outputs of the corresponding Selu operation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `gradients`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Send.md b/site/en/api_docs/python/tf/raw_ops/Send.md new file mode 100644 index 00000000000..5cf8681e835 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Send.md @@ -0,0 +1,117 @@ +description: Sends the named tensor from send_device to recv_device. + +
+ + +
+ +# tf.raw_ops.Send + + + + + + + + + +Sends the named tensor from send_device to recv_device. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. The tensor to send. +
+`tensor_name` + +A `string`. The name of the tensor to send. +
+`send_device` + +A `string`. The name of the device sending the tensor. +
+`send_device_incarnation` + +An `int`. The current incarnation of send_device. +
+`recv_device` + +A `string`. The name of the device receiving the tensor. +
+`client_terminated` + +An optional `bool`. Defaults to `False`. +If set to true, this indicates that the node was added +to the graph as a result of a client-side feed or fetch of Tensor data, +in which case the corresponding send or recv is expected to be managed +locally by the caller. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SendTPUEmbeddingGradients.md b/site/en/api_docs/python/tf/raw_ops/SendTPUEmbeddingGradients.md new file mode 100644 index 00000000000..15c6c46b358 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SendTPUEmbeddingGradients.md @@ -0,0 +1,103 @@ +description: Performs gradient updates of embedding tables. + +
+ + +
+ +# tf.raw_ops.SendTPUEmbeddingGradients + + + + + + + + + +Performs gradient updates of embedding tables. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of at least 1 `Tensor` objects with type `float32`. +A TensorList of gradients with which to update embedding tables. +This argument has the same length and shapes as the return value of +RecvTPUEmbeddingActivations, but contains gradients of the model's loss +with respect to the embedding activations. The embedding tables are updated +from these gradients via the optimizer specified in the TPU embedding +configuration given to tpu.initialize_system. +
+`learning_rates` + +A list of `Tensor` objects with type `float32`. +A TensorList of float32 scalars, one for each dynamic learning +rate tag: see the comments in +//third_party/tensorflow/core/protobuf/tpu/optimization_parameters.proto. +Multiple tables can share the same dynamic learning rate tag as specified +in the configuration. If the learning rates for all tables are constant, +this list should be empty. +
+`config` + +A `string`. Serialized TPUEmbeddingConfiguration proto. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SerializeIterator.md b/site/en/api_docs/python/tf/raw_ops/SerializeIterator.md new file mode 100644 index 00000000000..f8896de14ed --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SerializeIterator.md @@ -0,0 +1,85 @@ +description: Converts the given resource_handle representing an iterator to a variant tensor. + +
+ + +
+ +# tf.raw_ops.SerializeIterator + + + + + + + + + +Converts the given `resource_handle` representing an iterator to a variant tensor. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`resource_handle` + +A `Tensor` of type `resource`. +A handle to an iterator resource. +
+`external_state_policy` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SerializeManySparse.md b/site/en/api_docs/python/tf/raw_ops/SerializeManySparse.md new file mode 100644 index 00000000000..91721d595c8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SerializeManySparse.md @@ -0,0 +1,111 @@ +description: Serialize an N-minibatch SparseTensor into an [N, 3] Tensor object. + +
+ + +
+ +# tf.raw_ops.SerializeManySparse + + + + + + + + + +Serialize an `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor` object. + + + + + + + + + +The `SparseTensor` must have rank `R` greater than 1, and the first dimension +is treated as the minibatch dimension. Elements of the `SparseTensor` +must be sorted in increasing order of this first dimension. The serialized +`SparseTensor` objects going into each row of `serialized_sparse` will have +rank `R-1`. + +The minibatch size `N` is extracted from `sparse_shape[0]`. + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the minibatch `SparseTensor`. +
+`sparse_values` + +A `Tensor`. +1-D. The `values` of the minibatch `SparseTensor`. +
+`sparse_shape` + +A `Tensor` of type `int64`. +1-D. The `shape` of the minibatch `SparseTensor`. +
+`out_type` + +An optional tf.DType from: `tf.string, tf.variant`. Defaults to tf.string. +The `dtype` to use for serialization; the supported types are `string` +(default) and `variant`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SerializeSparse.md b/site/en/api_docs/python/tf/raw_ops/SerializeSparse.md new file mode 100644 index 00000000000..095d59f184d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SerializeSparse.md @@ -0,0 +1,103 @@ +description: Serialize a SparseTensor into a [3] Tensor object. + +
+ + +
+ +# tf.raw_ops.SerializeSparse + + + + + + + + + +Serialize a `SparseTensor` into a `[3]` `Tensor` object. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the `SparseTensor`. +
+`sparse_values` + +A `Tensor`. 1-D. The `values` of the `SparseTensor`. +
+`sparse_shape` + +A `Tensor` of type `int64`. +1-D. The `shape` of the `SparseTensor`. +
+`out_type` + +An optional tf.DType from: `tf.string, tf.variant`. Defaults to tf.string. +The `dtype` to use for serialization; the supported types are `string` +(default) and `variant`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
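A small runnable sketch, assuming TensorFlow 2.x; `tf.io.serialize_sparse` is the public wrapper that generally maps to this op.

```python
import tensorflow as tf

sp = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[10, 20],
                            dense_shape=[2, 3])

# One serialized string per component: indices, values, shape.
serialized = tf.io.serialize_sparse(sp)
print(serialized.shape, serialized.dtype)   # (3,) <dtype: 'string'>
```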
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SerializeTensor.md b/site/en/api_docs/python/tf/raw_ops/SerializeTensor.md new file mode 100644 index 00000000000..2fd5dc2e10a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SerializeTensor.md @@ -0,0 +1,77 @@ +description: Transforms a Tensor into a serialized TensorProto proto. + +
+ + +
+ +# tf.raw_ops.SerializeTensor + + + + + + + + + +Transforms a Tensor into a serialized TensorProto proto. + + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. A Tensor of type `T`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
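A runnable round-trip sketch, assuming TensorFlow 2.x; `tf.io.serialize_tensor` and `tf.io.parse_tensor` are the public wrappers.

```python
import tensorflow as tf

t = tf.constant([[1.0, 2.0], [3.0, 4.0]])

proto_bytes = tf.io.serialize_tensor(t)            # scalar string holding a TensorProto
restored = tf.io.parse_tensor(proto_bytes, out_type=tf.float32)

print(tf.reduce_all(restored == t).numpy())        # True
```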
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SetSize.md b/site/en/api_docs/python/tf/raw_ops/SetSize.md new file mode 100644 index 00000000000..a9a21cf56d8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SetSize.md @@ -0,0 +1,107 @@ +description: Number of unique elements along last dimension of input set. + +
+ + +
+ +# tf.raw_ops.SetSize + + + + + + + + + +Number of unique elements along last dimension of input `set`. + + + + + + + + + +Input `set` is a `SparseTensor` represented by `set_indices`, `set_values`, +and `set_shape`. The last dimension contains values in a set, duplicates are +allowed but ignored. + +If `validate_indices` is `True`, this op validates the order and range of `set` +indices. + + + + + + + + + + + + + + + + + + + + + + +
+`set_indices` + +A `Tensor` of type `int64`. +2D `Tensor`, indices of a `SparseTensor`. +
+`set_values` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `string`. +1D `Tensor`, values of a `SparseTensor`. +
+`set_shape` + +A `Tensor` of type `int64`. +1D `Tensor`, shape of a `SparseTensor`. +
+`validate_indices` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
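A small sketch, assuming TensorFlow 2.x; `tf.sets.size` is the public wrapper, and `tf.sparse.from_dense` builds the `SparseTensor` input (note that `from_dense` drops zero entries).

```python
import tensorflow as tf

# Two "sets" along the last dimension; zero entries are dropped by from_dense.
dense = tf.constant([[1, 2, 2, 0],
                     [3, 0, 0, 0]])
sp = tf.sparse.from_dense(dense)

print(tf.sets.size(sp).numpy())   # unique elements per row: [2 1]
```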
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SetStatsAggregatorDataset.md b/site/en/api_docs/python/tf/raw_ops/SetStatsAggregatorDataset.md new file mode 100644 index 00000000000..8353a2da0e4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SetStatsAggregatorDataset.md @@ -0,0 +1,111 @@ +
+ + +
+ +# tf.raw_ops.SetStatsAggregatorDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`stats_aggregator` + +A `Tensor` of type `resource`. +
+`tag` + +A `Tensor` of type `string`. +
+`counter_prefix` + +A `Tensor` of type `string`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Shape.md b/site/en/api_docs/python/tf/raw_ops/Shape.md new file mode 100644 index 00000000000..a160d230da3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Shape.md @@ -0,0 +1,94 @@ +description: Returns the shape of a tensor. + +
+ + +
+ +# tf.raw_ops.Shape + + + + + + + + + +Returns the shape of a tensor. + + + + + + + + + +This operation returns a 1-D integer tensor representing the shape of `input`. + +#### For example: + + + +``` +# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] +shape(t) ==> [2, 2, 3] +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`out_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
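A runnable contrast between the static shape and this op's dynamic result, assuming TensorFlow 2.x; `tf.shape` is the public wrapper.

```python
import tensorflow as tf

t = tf.zeros([2, 2, 3])

print(t.shape)                                 # static shape: (2, 2, 3)
print(tf.shape(t).numpy())                     # dynamic 1-D tensor: [2 2 3]
print(tf.shape(t, out_type=tf.int64).dtype)    # <dtype: 'int64'>
```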
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ShapeN.md b/site/en/api_docs/python/tf/raw_ops/ShapeN.md new file mode 100644 index 00000000000..2a21711b3fe --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ShapeN.md @@ -0,0 +1,85 @@ +description: Returns shape of tensors. + +
+ + +
+ +# tf.raw_ops.ShapeN + + + + + + + + + +Returns shape of tensors. + + + + + + + + + +This operation returns N 1-D integer tensors representing shape of `input[i]s`. + + + + + + + + + + + + + + + + +
+`input` + +A list of at least 1 `Tensor` objects with the same type. +
+`out_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list with the same length as `input` of `Tensor` objects with type `out_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ShardDataset.md b/site/en/api_docs/python/tf/raw_ops/ShardDataset.md new file mode 100644 index 00000000000..28e848cb601 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ShardDataset.md @@ -0,0 +1,115 @@ +description: Creates a Dataset that includes only 1/num_shards of this dataset. + +
+ + +
+ +# tf.raw_ops.ShardDataset + + + + + + + + + +Creates a `Dataset` that includes only 1/`num_shards` of this dataset. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`num_shards` + +A `Tensor` of type `int64`. +An integer representing the number of shards operating in parallel. +
+`index` + +A `Tensor` of type `int64`. +An integer representing the current worker index. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`require_non_empty` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
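A runnable sketch, assuming TensorFlow 2.x; `tf.data.Dataset.shard` is the public method that typically builds this op.

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(10)

# Worker 0 of 3 sees every third element starting at 0.
worker_0 = dataset.shard(num_shards=3, index=0)
print(list(worker_0.as_numpy_iterator()))   # [0, 3, 6, 9]
```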
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ShardedFilename.md b/site/en/api_docs/python/tf/raw_ops/ShardedFilename.md new file mode 100644 index 00000000000..1ecb8ea78f2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ShardedFilename.md @@ -0,0 +1,92 @@ +description: Generate a sharded filename. The filename is printf formatted as + +
+ + +
+ +# tf.raw_ops.ShardedFilename + + + + + + + + + +Generate a sharded filename. The filename is printf formatted as + + + + + + + + + + %s-%05d-of-%05d, basename, shard, num_shards. + + + + + + + + + + + + + + + + + + + +
+`basename` + +A `Tensor` of type `string`. +
+`shard` + +A `Tensor` of type `int32`. +
+`num_shards` + +A `Tensor` of type `int32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
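A sketch of calling the raw op directly, assuming TensorFlow 2.x (`tf.raw_ops` calls are keyword-only), just to make the printf pattern above concrete.

```python
import tensorflow as tf

name = tf.raw_ops.ShardedFilename(basename=tf.constant("model.ckpt"),
                                  shard=2,
                                  num_shards=10)
print(name.numpy())   # b'model.ckpt-00002-of-00010'  (the %s-%05d-of-%05d pattern)
```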
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ShardedFilespec.md b/site/en/api_docs/python/tf/raw_ops/ShardedFilespec.md new file mode 100644 index 00000000000..e872c846715 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ShardedFilespec.md @@ -0,0 +1,84 @@ +description: Generate a glob pattern matching all sharded file names. + +
+ + +
+ +# tf.raw_ops.ShardedFilespec + + + + + + + + + +Generate a glob pattern matching all sharded file names. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`basename` + +A `Tensor` of type `string`. +
+`num_shards` + +A `Tensor` of type `int32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ShuffleAndRepeatDataset.md b/site/en/api_docs/python/tf/raw_ops/ShuffleAndRepeatDataset.md new file mode 100644 index 00000000000..a9bd741121e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ShuffleAndRepeatDataset.md @@ -0,0 +1,130 @@ +description: Creates a dataset that shuffles and repeats elements from input_dataset + +
+ + +
+ +# tf.raw_ops.ShuffleAndRepeatDataset + + + + + + + + + +Creates a dataset that shuffles and repeats elements from `input_dataset` + + + + + + + + + +pseudorandomly. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`buffer_size` + +A `Tensor` of type `int64`. +The number of output elements to buffer in an iterator over +this dataset. Compare with the `min_after_dequeue` attr when creating a +`RandomShuffleQueue`. +
+`seed` + +A `Tensor` of type `int64`. +A scalar seed for the random number generator. If either `seed` or +`seed2` is set to be non-zero, the random number generator is seeded +by the given seed. Otherwise, a random seed is used. +
+`seed2` + +A `Tensor` of type `int64`. +A second scalar seed to avoid seed collision. +
+`count` + +A `Tensor` of type `int64`. +A scalar representing the number of times the underlying dataset +should be repeated. The default is `-1`, which results in infinite repetition. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ShuffleDataset.md b/site/en/api_docs/python/tf/raw_ops/ShuffleDataset.md new file mode 100644 index 00000000000..b2ed156f14b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ShuffleDataset.md @@ -0,0 +1,132 @@ +description: Creates a dataset that shuffles elements from input_dataset pseudorandomly. + +
+ + +
+ +# tf.raw_ops.ShuffleDataset + + + + + + + + + +Creates a dataset that shuffles elements from `input_dataset` pseudorandomly. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`buffer_size` + +A `Tensor` of type `int64`. +The number of output elements to buffer in an iterator over +this dataset. Compare with the `min_after_dequeue` attr when creating a +`RandomShuffleQueue`. +
+`seed` + +A `Tensor` of type `int64`. +A scalar seed for the random number generator. If either `seed` or +`seed2` is set to be non-zero, the random number generator is seeded +by the given seed. Otherwise, a random seed is used. +
+`seed2` + +A `Tensor` of type `int64`. +A second scalar seed to avoid seed collision. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`reshuffle_each_iteration` + +An optional `bool`. Defaults to `True`. +If true, each iterator over this dataset will be given +a different pseudorandomly generated seed, based on a sequence seeded by the +`seed` and `seed2` inputs. If false, each iterator will be given the same +seed, and repeated iteration over this dataset will yield the exact same +sequence of results. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
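A runnable sketch, assuming TensorFlow 2.x; `tf.data.Dataset.shuffle` is the public method, and chaining `.repeat()` gives the shuffle-and-repeat behavior described above.

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5).shuffle(buffer_size=5, seed=42,
                                           reshuffle_each_iteration=True)

# Two passes over the same dataset; with reshuffle_each_iteration=True the
# orders generally differ between epochs (both are permutations of 0..4).
print(list(dataset.as_numpy_iterator()))
print(list(dataset.as_numpy_iterator()))

# Shuffle plus repeat in one pipeline (cf. ShuffleAndRepeatDataset above).
two_epochs = tf.data.Dataset.range(5).shuffle(5, seed=42).repeat(2)
```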
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ShuffleDatasetV2.md b/site/en/api_docs/python/tf/raw_ops/ShuffleDatasetV2.md new file mode 100644 index 00000000000..a36e0d928ed --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ShuffleDatasetV2.md @@ -0,0 +1,104 @@ +
+ + +
+ +# tf.raw_ops.ShuffleDatasetV2 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`buffer_size` + +A `Tensor` of type `int64`. +
+`seed_generator` + +A `Tensor` of type `resource`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ShutdownDistributedTPU.md b/site/en/api_docs/python/tf/raw_ops/ShutdownDistributedTPU.md new file mode 100644 index 00000000000..7c82efe2ba6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ShutdownDistributedTPU.md @@ -0,0 +1,71 @@ +description: Shuts down a running distributed TPU system. + +
+ + +
+ +# tf.raw_ops.ShutdownDistributedTPU + + + + + + + + + +Shuts down a running distributed TPU system. + + + + + + + + + +The op returns an error if no system is running. + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Sigmoid.md b/site/en/api_docs/python/tf/raw_ops/Sigmoid.md new file mode 100644 index 00000000000..9adcb0da254 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Sigmoid.md @@ -0,0 +1,78 @@ +description: Computes sigmoid of x element-wise. + +
+ + +
+ +# tf.raw_ops.Sigmoid + + + + + + + + + +Computes sigmoid of `x` element-wise. + + + + + + + + + +Specifically, `y = 1 / (1 + exp(-x))`. + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SigmoidGrad.md b/site/en/api_docs/python/tf/raw_ops/SigmoidGrad.md new file mode 100644 index 00000000000..86fb198efd9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SigmoidGrad.md @@ -0,0 +1,86 @@ +description: Computes the gradient of the sigmoid of x wrt its input. + +
+ + +
+ +# tf.raw_ops.SigmoidGrad + + + + + + + + + +Computes the gradient of the sigmoid of `x` wrt its input. + + + + + + + + + +Specifically, `grad = dy * y * (1 - y)`, where `y = sigmoid(x)`, and +`dy` is the corresponding input gradient. + + + + + + + + + + + + + + + + +
+`y` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`dy` + +A `Tensor`. Must have the same type as `y`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `y`. +
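Gradient kernels such as this one are normally emitted by autodiff rather than called directly. A runnable check of the stated identity `grad = dy * y * (1 - y)` with `tf.GradientTape`, assuming TensorFlow 2.x:

```python
import tensorflow as tf

x = tf.constant([-2.0, 0.0, 3.0])

with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.math.sigmoid(x)

dy_dx = tape.gradient(y, x)      # upstream dy defaults to ones, so grad = y * (1 - y)
print(dy_dx.numpy())
print((y * (1.0 - y)).numpy())   # matches the line above
```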
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Sign.md b/site/en/api_docs/python/tf/raw_ops/Sign.md new file mode 100644 index 00000000000..50455c1e9ae --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Sign.md @@ -0,0 +1,86 @@ +description: Returns an element-wise indication of the sign of a number. + +
+ + +
+ +# tf.raw_ops.Sign + + + + + + + + + +Returns an element-wise indication of the sign of a number. + + + + + + + + + +`y = sign(x) = -1` if `x < 0`; 0 if `x == 0`; 1 if `x > 0`. + +For complex numbers, `y = sign(x) = x / |x|` if `x != 0`, otherwise `y = 0`. + +#### Example usage: + + +>>> tf.math.sign([0., 2., -3.]) + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Sin.md b/site/en/api_docs/python/tf/raw_ops/Sin.md new file mode 100644 index 00000000000..f7dab395593 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Sin.md @@ -0,0 +1,85 @@ +description: Computes sine of x element-wise. + +
+ + +
+ +# tf.raw_ops.Sin + + + + + + + + + +Computes sine of x element-wise. + + + + + + + + + + Given an input tensor, this function computes sine of every + element in the tensor. Input range is `(-inf, inf)` and + output range is `[-1,1]`. + + ```python + x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10, float("inf")]) + tf.math.sin(x) ==> [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Sinh.md b/site/en/api_docs/python/tf/raw_ops/Sinh.md new file mode 100644 index 00000000000..6a4d403dd16 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Sinh.md @@ -0,0 +1,85 @@ +description: Computes hyperbolic sine of x element-wise. + +
+ + +
+ +# tf.raw_ops.Sinh + + + + + + + + + +Computes hyperbolic sine of x element-wise. + + + + + + + + + + Given an input tensor, this function computes hyperbolic sine of every + element in the tensor. Input range is `[-inf,inf]` and output range + is `[-inf,inf]`. + + ```python + x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")]) + tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Size.md b/site/en/api_docs/python/tf/raw_ops/Size.md new file mode 100644 index 00000000000..09e18fe561c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Size.md @@ -0,0 +1,95 @@ +description: Returns the size of a tensor. + +
+ + +
+ +# tf.raw_ops.Size + + + + + + + + + +Returns the size of a tensor. + + + + + + + + + +This operation returns an integer representing the number of elements in +`input`. + +#### For example: + + + +``` +# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] +size(t) ==> 12 +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`out_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SkipDataset.md b/site/en/api_docs/python/tf/raw_ops/SkipDataset.md new file mode 100644 index 00000000000..2c9c8fe0e33 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SkipDataset.md @@ -0,0 +1,100 @@ +description: Creates a dataset that skips count elements from the input_dataset. + +
+ + +
+ +# tf.raw_ops.SkipDataset + + + + + + + + + +Creates a dataset that skips `count` elements from the `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`count` + +A `Tensor` of type `int64`. +A scalar representing the number of elements from the `input_dataset` +that should be skipped. If count is -1, skips everything. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SleepDataset.md b/site/en/api_docs/python/tf/raw_ops/SleepDataset.md new file mode 100644 index 00000000000..280d364f83b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SleepDataset.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.SleepDataset + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`sleep_microseconds` + +A `Tensor` of type `int64`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Slice.md b/site/en/api_docs/python/tf/raw_ops/Slice.md new file mode 100644 index 00000000000..672f08a7279 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Slice.md @@ -0,0 +1,103 @@ +description: Return a slice from 'input'. + +
+ + +
+ +# tf.raw_ops.Slice + + + + + + + + + +Return a slice from 'input'. + + + + + + + + + +The output tensor is a tensor with dimensions described by 'size' +whose values are extracted from 'input' starting at the offsets in +'begin'. + +*Requirements*: + 0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n) + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`begin` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +begin[i] specifies the offset into the 'i'th dimension of +'input' to slice from. +
+`size` + +A `Tensor`. Must have the same type as `begin`. +size[i] specifies the number of elements of the 'i'th dimension +of 'input' to slice. If size[i] is -1, all remaining elements in dimension +i are included in the slice (i.e. this is equivalent to setting +size[i] = input.dim_size(i) - begin[i]). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
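A runnable sketch, assuming TensorFlow 2.x; `tf.slice` is the public wrapper.

```python
import tensorflow as tf

t = tf.constant([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])

# begin=[1, 0]: start at row 1, column 0; size=[2, 2]: take 2 rows and 2 columns.
print(tf.slice(t, begin=[1, 0], size=[2, 2]).numpy())    # [[4 5]
                                                         #  [7 8]]

# A size of -1 keeps everything remaining in that dimension.
print(tf.slice(t, begin=[1, 1], size=[-1, -1]).numpy())  # [[5 6]
                                                         #  [8 9]]
```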
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SlidingWindowDataset.md b/site/en/api_docs/python/tf/raw_ops/SlidingWindowDataset.md new file mode 100644 index 00000000000..9f4e4816197 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SlidingWindowDataset.md @@ -0,0 +1,119 @@ +description: Creates a dataset that passes a sliding window over input_dataset. + +
+ + +
+ +# tf.raw_ops.SlidingWindowDataset + + + + + + + + + +Creates a dataset that passes a sliding window over `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`window_size` + +A `Tensor` of type `int64`. +A scalar representing the number of elements in the +sliding window. +
+`window_shift` + +A `Tensor` of type `int64`. +A scalar representing the steps moving the sliding window +forward in one iteration. It must be positive. +
+`window_stride` + +A `Tensor` of type `int64`. +A scalar representing the stride of the input elements of the sliding window. +It must be positive. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
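The closest public counterpart is `tf.data.Dataset.window`; a runnable sketch, assuming TensorFlow 2.x, mapping `window_size`/`window_shift`/`window_stride` onto `size`/`shift`/`stride`:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(6)

# Windows of 3 elements, moving forward by 1 element per window.
windows = dataset.window(size=3, shift=1, stride=1, drop_remainder=True)

# Each window is itself a small dataset; batch it to see the elements.
flat = windows.flat_map(lambda w: w.batch(3))
print(list(flat.as_numpy_iterator()))
# [array([0, 1, 2]), array([1, 2, 3]), array([2, 3, 4]), array([3, 4, 5])]
```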
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Snapshot.md b/site/en/api_docs/python/tf/raw_ops/Snapshot.md new file mode 100644 index 00000000000..7e6c16178f9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Snapshot.md @@ -0,0 +1,77 @@ +description: Returns a copy of the input tensor. + +
+ + +
+ +# tf.raw_ops.Snapshot + + + + + + + + + +Returns a copy of the input tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SnapshotDataset.md b/site/en/api_docs/python/tf/raw_ops/SnapshotDataset.md new file mode 100644 index 00000000000..a31931daaec --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SnapshotDataset.md @@ -0,0 +1,207 @@ +description: Creates a dataset that will write to / read from a snapshot. + +
+ + +
+ +# tf.raw_ops.SnapshotDataset + + + + + + + + + +Creates a dataset that will write to / read from a snapshot. + + + + + + + + + +This dataset attempts to determine whether a valid snapshot exists at the +`snapshot_path`, and reads from the snapshot in lieu of using `input_dataset`. +If not, it will run the preprocessing pipeline as usual, and write out a +snapshot of the data processed for future use. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +A variant tensor representing the input dataset. +
+`path` + +A `Tensor` of type `string`. +The path we should write snapshots to / read snapshots from. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`compression` + +An optional `string`. Defaults to `""`. +
+`reader_path_prefix` + +An optional `string`. Defaults to `""`. +
+`writer_path_prefix` + +An optional `string`. Defaults to `""`. +
+`shard_size_bytes` + +An optional `int`. Defaults to `10737418240`. +
+`pending_snapshot_expiry_seconds` + +An optional `int`. Defaults to `86400`. +
+`num_reader_threads` + +An optional `int`. Defaults to `1`. +
+`reader_buffer_size` + +An optional `int`. Defaults to `1`. +
+`num_writer_threads` + +An optional `int`. Defaults to `1`. +
+`writer_buffer_size` + +An optional `int`. Defaults to `1`. +
+`shuffle_on_read` + +An optional `bool`. Defaults to `False`. +
+`seed` + +An optional `int`. Defaults to `0`. +
+`seed2` + +An optional `int`. Defaults to `0`. +
+`mode` + +An optional `string`. Defaults to `"auto"`. +
+`snapshot_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SobolSample.md b/site/en/api_docs/python/tf/raw_ops/SobolSample.md new file mode 100644 index 00000000000..12b368d4a2c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SobolSample.md @@ -0,0 +1,106 @@ +description: Generates points from the Sobol sequence. + +
+ + +
+ +# tf.raw_ops.SobolSample + + + + + + + + + +Generates points from the Sobol sequence. + + + + + + + + + +Creates a Sobol sequence with `num_results` samples. Each sample has dimension +`dim`. Skips the first `skip` samples. + + + + + + + + + + + + + + + + + + + + + + +
+`dim` + +A `Tensor` of type `int32`. +Positive scalar `Tensor` representing each sample's dimension. +
+`num_results` + +A `Tensor` of type `int32`. +Positive scalar `Tensor` of dtype int32. The number of Sobol points to return +in the output. +
+`skip` + +A `Tensor` of type `int32`. +Positive scalar `Tensor` of dtype int32. The number of initial points of the +Sobol sequence to skip. +
+`dtype` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +The type of the sample. One of: `float32` or `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
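A short sketch, assuming TensorFlow 2.1 or later, where `tf.math.sobol_sample` is the public wrapper.

```python
import tensorflow as tf

# 5 quasi-random points in 3 dimensions, skipping no initial points.
points = tf.math.sobol_sample(dim=3, num_results=5, skip=0, dtype=tf.float32)
print(points.shape)   # (5, 3); values lie in the unit hypercube [0, 1)
```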
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Softmax.md b/site/en/api_docs/python/tf/raw_ops/Softmax.md new file mode 100644 index 00000000000..0297692159d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Softmax.md @@ -0,0 +1,81 @@ +description: Computes softmax activations. + +
+ + +
+ +# tf.raw_ops.Softmax + + + + + + + + + +Computes softmax activations. + + + + + + + + + +For each batch `i` and class `j` we have + + $$softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))$$ + + + + + + + + + + + + + +
+`logits` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +2-D with shape `[batch_size, num_classes]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `logits`. +
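A runnable check that each row of the result sums to one, assuming TensorFlow 2.x; `tf.nn.softmax` is the public wrapper.

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.0, 0.0, 0.0]])

probs = tf.nn.softmax(logits)                  # softmax over the last axis
print(probs.numpy())
print(tf.reduce_sum(probs, axis=-1).numpy())   # [1. 1.]
```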
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SoftmaxCrossEntropyWithLogits.md b/site/en/api_docs/python/tf/raw_ops/SoftmaxCrossEntropyWithLogits.md new file mode 100644 index 00000000000..a0faac2ad45 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SoftmaxCrossEntropyWithLogits.md @@ -0,0 +1,103 @@ +description: Computes softmax cross entropy cost and gradients to backpropagate. + +
+ + +
+ +# tf.raw_ops.SoftmaxCrossEntropyWithLogits + + + + + + + + + +Computes softmax cross entropy cost and gradients to backpropagate. + + + + + + + + + +Inputs are the logits, not probabilities. + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +batch_size x num_classes matrix +
+`labels` + +A `Tensor`. Must have the same type as `features`. +batch_size x num_classes matrix +The caller must ensure that each batch of labels represents a valid +probability distribution. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (loss, backprop). +
+`loss` + +A `Tensor`. Has the same type as `features`. +
+`backprop` + +A `Tensor`. Has the same type as `features`. +
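A runnable sketch, assuming TensorFlow 2.x; `tf.nn.softmax_cross_entropy_with_logits` is the public wrapper and, as noted above, expects raw logits plus labels that form valid probability distributions. The public function returns only the per-example loss; the `backprop` output of the raw op is consumed by autodiff.

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.1, 1.0, 2.0]])
labels = tf.constant([[1.0, 0.0, 0.0],    # one-hot rows are valid distributions
                      [0.0, 0.0, 1.0]])

loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
print(loss.numpy())   # one loss value per batch row, roughly [0.417, 0.417]
```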
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Softplus.md b/site/en/api_docs/python/tf/raw_ops/Softplus.md new file mode 100644 index 00000000000..f61692b72a7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Softplus.md @@ -0,0 +1,77 @@ +description: Computes softplus: log(exp(features) + 1). + +
+ + +
+ +# tf.raw_ops.Softplus + + + + + + + + + +Computes softplus: `log(exp(features) + 1)`. + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SoftplusGrad.md b/site/en/api_docs/python/tf/raw_ops/SoftplusGrad.md new file mode 100644 index 00000000000..2ac28cac9a4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SoftplusGrad.md @@ -0,0 +1,86 @@ +description: Computes softplus gradients for a softplus operation. + +
+ + +
+ +# tf.raw_ops.SoftplusGrad + + + + + + + + + +Computes softplus gradients for a softplus operation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +The backpropagated gradients to the corresponding softplus operation. +
+`features` + +A `Tensor`. Must have the same type as `gradients`. +The features passed as input to the corresponding softplus operation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `gradients`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Softsign.md b/site/en/api_docs/python/tf/raw_ops/Softsign.md new file mode 100644 index 00000000000..3937590df3b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Softsign.md @@ -0,0 +1,77 @@ +description: Computes softsign: features / (abs(features) + 1). + +
+ + +
+ +# tf.raw_ops.Softsign + + + + + + + + + +Computes softsign: `features / (abs(features) + 1)`. + + + + + + + + + + + + + + + + + + + + + + +
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `features`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SoftsignGrad.md b/site/en/api_docs/python/tf/raw_ops/SoftsignGrad.md new file mode 100644 index 00000000000..f8c13d37fe3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SoftsignGrad.md @@ -0,0 +1,86 @@ +description: Computes softsign gradients for a softsign operation. + +
+ + +
+ +# tf.raw_ops.SoftsignGrad + + + + + + + + + +Computes softsign gradients for a softsign operation. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`gradients` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +The backpropagated gradients to the corresponding softsign operation. +
+`features` + +A `Tensor`. Must have the same type as `gradients`. +The features passed as input to the corresponding softsign operation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `gradients`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SpaceToBatch.md b/site/en/api_docs/python/tf/raw_ops/SpaceToBatch.md new file mode 100644 index 00000000000..95f4fe25c5a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SpaceToBatch.md @@ -0,0 +1,183 @@ +description: SpaceToBatch for 4-D tensors of type T. + +
+ + +
+ +# tf.raw_ops.SpaceToBatch + + + + + + + + + +SpaceToBatch for 4-D tensors of type T. + + + + + + + + + +This is a legacy version of the more general SpaceToBatchND. + +Zero-pads and then rearranges (permutes) blocks of spatial data into batch. +More specifically, this op outputs a copy of the input tensor where values from +the `height` and `width` dimensions are moved to the `batch` dimension. After +the zero-padding, both `height` and `width` of the input must be divisible by the +block size. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. 4-D with shape `[batch, height, width, depth]`. +
+`paddings` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D tensor of non-negative integers with shape `[2, 2]`. It specifies +the padding of the input with zeros across the spatial dimensions as follows: + +paddings = [[pad_top, pad_bottom], [pad_left, pad_right]] + +The effective spatial dimensions of the zero-padded input tensor will be: + +height_pad = pad_top + height + pad_bottom +width_pad = pad_left + width + pad_right + +The attr `block_size` must be greater than one. It indicates the block size. + +* Non-overlapping blocks of size `block_size x block size` in the height and +width dimensions are rearranged into the batch dimension at each location. +* The batch of the output tensor is `batch * block_size * block_size`. +* Both height_pad and width_pad must be divisible by block_size. + +The shape of the output will be: + +[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, +depth] + +Some examples: + +(1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2: + +``` +x = [[[[1], [2]], [[3], [4]]]] +``` + +The output tensor has shape `[4, 1, 1, 1]` and value: + +``` +[[[[1]]], [[[2]]], [[[3]]], [[[4]]]] +``` + +(2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2: + +``` +x = [[[[1, 2, 3], [4, 5, 6]], +[[7, 8, 9], [10, 11, 12]]]] +``` + +The output tensor has shape `[4, 1, 1, 3]` and value: + +``` +[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] +``` + +(3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]], +[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +The output tensor has shape `[4, 2, 2, 1]` and value: + +``` +x = [[[[1], [3]], [[9], [11]]], +[[[2], [4]], [[10], [12]]], +[[[5], [7]], [[13], [15]]], +[[[6], [8]], [[14], [16]]]] +``` + +(4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]]], +[[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +The output tensor has shape `[8, 1, 2, 1]` and value: + +``` +x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]], +[[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]] +``` + +Among others, this operation is useful for reducing atrous convolution into +regular convolution. +
+`block_size` + +An `int` that is `>= 2`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SpaceToBatchND.md b/site/en/api_docs/python/tf/raw_ops/SpaceToBatchND.md new file mode 100644 index 00000000000..89faf8cc521 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SpaceToBatchND.md @@ -0,0 +1,210 @@ +description: SpaceToBatch for N-D tensors of type T. + +
+ + +
+ +# tf.raw_ops.SpaceToBatchND + + + + + + + + + +SpaceToBatch for N-D tensors of type T. + + + + + + + + + +This operation divides "spatial" dimensions `[1, ..., M]` of the input into a +grid of blocks of shape `block_shape`, and interleaves these blocks with the +"batch" dimension (0) such that in the output, the spatial dimensions +`[1, ..., M]` correspond to the position within the grid, and the batch +dimension combines both the position within a spatial block and the original +batch position. Prior to division into blocks, the spatial dimensions of the +input are optionally zero padded according to `paddings`. See below for a +precise description. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`, +where spatial_shape has `M` dimensions. +
+`block_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D with shape `[M]`, all values must be >= 1. +
+`paddings` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D with shape `[M, 2]`, all values must be >= 0. +`paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension +`i + 1`, which corresponds to spatial dimension `i`. It is required that +`block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`. + +This operation is equivalent to the following steps: + +1. Zero-pad the start and end of dimensions `[1, ..., M]` of the +input according to `paddings` to produce `padded` of shape `padded_shape`. + +2. Reshape `padded` to `reshaped_padded` of shape: + +[batch] + +[padded_shape[1] / block_shape[0], +block_shape[0], +..., +padded_shape[M] / block_shape[M-1], +block_shape[M-1]] + +remaining_shape + +3. Permute dimensions of `reshaped_padded` to produce +`permuted_reshaped_padded` of shape: + +block_shape + +[batch] + +[padded_shape[1] / block_shape[0], +..., +padded_shape[M] / block_shape[M-1]] + +remaining_shape + +4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch +dimension, producing an output tensor of shape: + +[batch * prod(block_shape)] + +[padded_shape[1] / block_shape[0], +..., +padded_shape[M] / block_shape[M-1]] + +remaining_shape + +Some examples: + +(1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and +`paddings = [[0, 0], [0, 0]]`: + +``` +x = [[[[1], [2]], [[3], [4]]]] +``` + +The output tensor has shape `[4, 1, 1, 1]` and value: + +``` +[[[[1]]], [[[2]]], [[[3]]], [[[4]]]] +``` + +(2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and +`paddings = [[0, 0], [0, 0]]`: + +``` +x = [[[[1, 2, 3], [4, 5, 6]], +[[7, 8, 9], [10, 11, 12]]]] +``` + +The output tensor has shape `[4, 1, 1, 3]` and value: + +``` +[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] +``` + +(3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and +`paddings = [[0, 0], [0, 0]]`: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]], +[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +The output tensor has shape `[4, 2, 2, 1]` and value: + +``` +x = [[[[1], [3]], [[9], [11]]], +[[[2], [4]], [[10], [12]]], +[[[5], [7]], [[13], [15]]], +[[[6], [8]], [[14], [16]]]] +``` + +(4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and +paddings = `[[0, 0], [2, 0]]`: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]]], +[[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +The output tensor has shape `[8, 1, 3, 1]` and value: + +``` +x = [[[[0], [1], [3]]], [[[0], [9], [11]]], +[[[0], [2], [4]]], [[[0], [10], [12]]], +[[[0], [5], [7]]], [[[0], [13], [15]]], +[[[0], [6], [8]]], [[[0], [14], [16]]]] +``` + +Among others, this operation is useful for reducing atrous convolution into +regular convolution. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SpaceToDepth.md b/site/en/api_docs/python/tf/raw_ops/SpaceToDepth.md new file mode 100644 index 00000000000..f4469767998 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SpaceToDepth.md @@ -0,0 +1,174 @@ +description: SpaceToDepth for tensors of type T. + +
+ + +
+ +# tf.raw_ops.SpaceToDepth + + + + + + + + + +SpaceToDepth for tensors of type T. + + + + + + + + + +Rearranges blocks of spatial data, into depth. More specifically, +this op outputs a copy of the input tensor where values from the `height` +and `width` dimensions are moved to the `depth` dimension. +The attr `block_size` indicates the input block size. + + * Non-overlapping blocks of size `block_size x block size` are rearranged + into depth at each location. + * The depth of the output tensor is `block_size * block_size * input_depth`. + * The Y, X coordinates within each block of the input become the high order + component of the output channel index. + * The input tensor's height and width must be divisible by block_size. + +The `data_format` attr specifies the layout of the input and output tensors +with the following options: + "NHWC": `[ batch, height, width, channels ]` + "NCHW": `[ batch, channels, height, width ]` + "NCHW_VECT_C": + `qint8 [ batch, channels / 4, height, width, 4 ]` + +It is useful to consider the operation as transforming a 6-D Tensor. +e.g. for data_format = NHWC, + Each element in the input tensor can be specified via 6 coordinates, + ordered by decreasing memory layout significance as: + n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates + within the output image, bX, bY means coordinates + within the input block, iC means input channels). + The output would be a transpose to the following layout: + n,oY,oX,bY,bX,iC + +This operation is useful for resizing the activations between convolutions +(but keeping all data), e.g. instead of pooling. It is also useful for training +purely convolutional models. + +For example, given an input of shape `[1, 2, 2, 1]`, data_format = "NHWC" and +block_size = 2: + +``` +x = [[[[1], [2]], + [[3], [4]]]] +``` + +This operation will output a tensor of shape `[1, 1, 1, 4]`: + +``` +[[[[1, 2, 3, 4]]]] +``` + +Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`, +the corresponding output will have a single element (i.e. width and height are +both 1) and will have a depth of 4 channels (1 * block_size * block_size). +The output element shape is `[1, 1, 4]`. + +For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g. + +``` +x = [[[[1, 2, 3], [4, 5, 6]], + [[7, 8, 9], [10, 11, 12]]]] +``` + +This operation, for block_size of 2, will return the following tensor of shape +`[1, 1, 1, 12]` + +``` +[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]] +``` + +Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2: + +``` +x = [[[[1], [2], [5], [6]], + [[3], [4], [7], [8]], + [[9], [10], [13], [14]], + [[11], [12], [15], [16]]]] +``` + +the operator will return the following tensor of shape `[1 2 2 4]`: + +``` +x = [[[[1, 2, 3, 4], + [5, 6, 7, 8]], + [[9, 10, 11, 12], + [13, 14, 15, 16]]]] +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`block_size` + +An `int` that is `>= 2`. The size of the spatial block. +
+`data_format` + +An optional `string` from: `"NHWC", "NCHW", "NCHW_VECT_C"`. Defaults to `"NHWC"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
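+
+For reference, the first example above can be reproduced with a direct call
+to the op. This is an illustrative sketch (assuming TensorFlow 2.x eager
+execution):
+
+```python
+import tensorflow as tf
+
+# A [1, 2, 2, 1] input and block_size = 2, as in the first example.
+x = tf.constant([[[[1], [2]], [[3], [4]]]])
+
+y = tf.raw_ops.SpaceToDepth(input=x, block_size=2)
+
+print(y.shape)    # (1, 1, 1, 4)
+print(y.numpy())  # [[[[1 2 3 4]]]]
+```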
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseAccumulatorApplyGradient.md b/site/en/api_docs/python/tf/raw_ops/SparseAccumulatorApplyGradient.md new file mode 100644 index 00000000000..43ce2eed004 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseAccumulatorApplyGradient.md @@ -0,0 +1,124 @@ +description: Applies a sparse gradient to a given accumulator. + +
+ + +
+ +# tf.raw_ops.SparseAccumulatorApplyGradient + + + + + + + + + +Applies a sparse gradient to a given accumulator. + + + + + + + + + +Does not add if local_step is smaller than the accumulator's +global_step. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle`
+
+A `Tensor` of type mutable `string`. The handle to an accumulator.
+
+`local_step` + +A `Tensor` of type `int64`. +The local_step value at which the sparse gradient was computed. +
+`gradient_indices` + +A `Tensor` of type `int64`. +Indices of the sparse gradient to be accumulated. Must be a +vector. +
+`gradient_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Values are the non-zero slices of the gradient, and must have +the same first dimension as indices, i.e., the nnz represented by indices and +values must be consistent. +
+`gradient_shape` + +A `Tensor` of type `int64`. +Shape of the sparse gradient to be accumulated. +
+`has_known_shape` + +A `bool`. +Boolean indicating whether gradient_shape is unknown, in which +case the input is ignored during validation. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseAccumulatorTakeGradient.md b/site/en/api_docs/python/tf/raw_ops/SparseAccumulatorTakeGradient.md new file mode 100644 index 00000000000..8646b4dcff9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseAccumulatorTakeGradient.md @@ -0,0 +1,122 @@ +description: Extracts the average sparse gradient in a SparseConditionalAccumulator. + +
+ + +
+
+# tf.raw_ops.SparseAccumulatorTakeGradient
+
+
+
+
+
+
+
+
+
+Extracts the average sparse gradient in a SparseConditionalAccumulator.
+
+
+
+
+
+
+
+
+
+The op blocks until sufficient (i.e., more than num_required)
+gradients have been accumulated. If the accumulator has already
+aggregated more than num_required gradients, it returns the
+average of the accumulated gradients. Also automatically increments
+the recorded global_step in the accumulator by 1, and resets the
+aggregate to 0.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`handle` + +A `Tensor` of type mutable `string`. +The handle to a SparseConditionalAccumulator. +
+`num_required` + +A `Tensor` of type `int32`. +Number of gradients required before we return an aggregate. +
+`dtype` + +A tf.DType from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`. +The data type of accumulated gradients. Needs to correspond to the type +of the accumulator. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (indices, values, shape). +
+`indices` + +A `Tensor` of type `int64`. +
+`values` + +A `Tensor` of type `dtype`. +
+`shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseAdd.md b/site/en/api_docs/python/tf/raw_ops/SparseAdd.md new file mode 100644 index 00000000000..252b9b5bacb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseAdd.md @@ -0,0 +1,161 @@ +description: Adds two SparseTensor objects to produce another SparseTensor. + +
+ + +
+ +# tf.raw_ops.SparseAdd + + + + + + + + + +Adds two `SparseTensor` objects to produce another `SparseTensor`. + + + + + + + + + +The input `SparseTensor` objects' indices are assumed ordered in standard +lexicographic order. If this is not the case, before this step run +`SparseReorder` to restore index ordering. + +By default, if two values sum to zero at some index, the output `SparseTensor` +would still include that particular location in its index, storing a zero in the +corresponding value slot. To override this, callers can specify `thresh`, +indicating that if the sum has a magnitude strictly smaller than `thresh`, its +corresponding value and index would then not be included. In particular, +`thresh == 0` (default) means everything is kept and actual thresholding happens +only for a positive value. + +In the following shapes, `nnz` is the count after taking `thresh` into account. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the first `SparseTensor`, size `[nnz, ndims]` Matrix. +
+`a_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. The `values` of the first `SparseTensor`, size `[nnz]` Vector. +
+`a_shape` + +A `Tensor` of type `int64`. +1-D. The `shape` of the first `SparseTensor`, size `[ndims]` Vector. +
+`b_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the second `SparseTensor`, size `[nnz, ndims]` Matrix. +
+`b_values` + +A `Tensor`. Must have the same type as `a_values`. +1-D. The `values` of the second `SparseTensor`, size `[nnz]` Vector. +
+`b_shape` + +A `Tensor` of type `int64`. +1-D. The `shape` of the second `SparseTensor`, size `[ndims]` Vector. +
+`thresh` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +0-D. The magnitude threshold that determines if an output value/index +pair takes space. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sum_indices, sum_values, sum_shape). +
+`sum_indices` + +A `Tensor` of type `int64`. +
+`sum_values` + +A `Tensor`. Has the same type as `a_values`. +
+`sum_shape` + +A `Tensor` of type `int64`. +
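+
+A minimal eager-mode sketch of adding two small COO-format operands (an
+illustration, not part of the reference text; the values here are arbitrary):
+
+```python
+import tensorflow as tf
+
+# Two 2x3 sparse operands: A has entries at [0, 0] and [1, 2],
+# B has entries at [0, 0] and [1, 0].
+a_indices = tf.constant([[0, 0], [1, 2]], dtype=tf.int64)
+a_values = tf.constant([1.0, 2.0])
+a_shape = tf.constant([2, 3], dtype=tf.int64)
+
+b_indices = tf.constant([[0, 0], [1, 0]], dtype=tf.int64)
+b_values = tf.constant([3.0, 4.0])
+b_shape = tf.constant([2, 3], dtype=tf.int64)
+
+sum_indices, sum_values, sum_shape = tf.raw_ops.SparseAdd(
+    a_indices=a_indices, a_values=a_values, a_shape=a_shape,
+    b_indices=b_indices, b_values=b_values, b_shape=b_shape,
+    thresh=tf.constant(0.0))
+
+print(sum_indices.numpy())  # [[0 0] [1 0] [1 2]]
+print(sum_values.numpy())   # [4. 4. 2.]
+```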
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseAddGrad.md b/site/en/api_docs/python/tf/raw_ops/SparseAddGrad.md new file mode 100644 index 00000000000..5560bf1bbd4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseAddGrad.md @@ -0,0 +1,122 @@ +description: The gradient operator for the SparseAdd op. + +
+ + +
+ +# tf.raw_ops.SparseAddGrad + + + + + + + + + +The gradient operator for the SparseAdd op. + + + + + + + + + +The SparseAdd op calculates A + B, where A, B, and the sum are all represented +as `SparseTensor` objects. This op takes in the upstream gradient w.r.t. +non-empty values of the sum, and outputs the gradients w.r.t. the non-empty +values of A and B. + + + + + + + + + + + + + + + + + + + + + + +
+`backprop_val_grad` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D with shape `[nnz(sum)]`. The gradient with respect to +the non-empty values of the sum. +
+`a_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the `SparseTensor` A, size `[nnz(A), ndims]`. +
+`b_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the `SparseTensor` B, size `[nnz(B), ndims]`. +
+`sum_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the sum `SparseTensor`, size +`[nnz(sum), ndims]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (a_val_grad, b_val_grad). +
+`a_val_grad` + +A `Tensor`. Has the same type as `backprop_val_grad`. +
+`b_val_grad` + +A `Tensor`. Has the same type as `backprop_val_grad`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyAdadelta.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyAdadelta.md new file mode 100644 index 00000000000..f6b4409e039 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyAdadelta.md @@ -0,0 +1,142 @@ +description: var: Should be from a Variable(). + +
+ + +
+ +# tf.raw_ops.SparseApplyAdadelta + + + + + + + + + +var: Should be from a Variable(). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`accum_update`
+
+A mutable `Tensor`. Must have the same type as `var`.
+Should be from a Variable().
+
+`lr` + +A `Tensor`. Must have the same type as `var`. +Learning rate. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `var`. +Decay factor. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `var`. +Constant factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var and accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyAdagrad.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyAdagrad.md new file mode 100644 index 00000000000..3b82f486462 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyAdagrad.md @@ -0,0 +1,130 @@ +description: Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + +
+ + +
+ +# tf.raw_ops.SparseApplyAdagrad + + + + + + + + + +Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + + + + + + + + + +That is for rows we have grad for, we update var and accum as follows: +$$accum += grad * grad$$ +$$var -= lr * grad * (1 / sqrt(accum))$$ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Learning rate. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`update_slots` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
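+
+The update rule above can be sketched with ordinary resource variables and
+tensor ops. This is an illustration of the math only, not a call to the raw
+op itself (which expects legacy reference variables):
+
+```python
+import tensorflow as tf
+
+var = tf.Variable([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
+accum = tf.Variable(tf.fill([3, 2], 0.1))
+lr = 0.1
+indices = tf.constant([0, 2])
+grad = tf.constant([[0.5, 0.5], [1.0, 1.0]])  # one gradient row per index
+
+# accum[indices] += grad * grad
+accum.scatter_add(tf.IndexedSlices(grad * grad, indices))
+# var[indices] -= lr * grad / sqrt(accum[indices])
+accum_rows = tf.gather(accum, indices)
+var.scatter_sub(tf.IndexedSlices(lr * grad / tf.sqrt(accum_rows), indices))
+
+print(var.numpy())  # rows 0 and 2 updated, row 1 untouched
+```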
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyAdagradDA.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyAdagradDA.md new file mode 100644 index 00000000000..20271fac5dd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyAdagradDA.md @@ -0,0 +1,151 @@ +description: Update entries in '*var' and '*accum' according to the proximal adagrad scheme. + +
+ + +
+ +# tf.raw_ops.SparseApplyAdagradDA + + + + + + + + + +Update entries in '*var' and '*accum' according to the proximal adagrad scheme. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`gradient_accumulator` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`gradient_squared_accumulator` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Learning rate. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `var`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `var`. +L2 regularization. Must be a scalar. +
+`global_step` + +A `Tensor` of type `int64`. +Training step number. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var and accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyAdagradV2.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyAdagradV2.md new file mode 100644 index 00000000000..e23cc80e82b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyAdagradV2.md @@ -0,0 +1,138 @@ +description: Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + +
+ + +
+ +# tf.raw_ops.SparseApplyAdagradV2 + + + + + + + + + +Update relevant entries in '*var' and '*accum' according to the adagrad scheme. + + + + + + + + + +That is for rows we have grad for, we update var and accum as follows: +$$accum += grad * grad$$ +$$var -= lr * grad * (1 / sqrt(accum))$$ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Learning rate. Must be a scalar. +
+`epsilon` + +A `Tensor`. Must have the same type as `var`. +Constant factor. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`update_slots` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyCenteredRMSProp.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyCenteredRMSProp.md new file mode 100644 index 00000000000..fbff7ad5377 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyCenteredRMSProp.md @@ -0,0 +1,175 @@ +description: Update '*var' according to the centered RMSProp algorithm. + +
+ + +
+ +# tf.raw_ops.SparseApplyCenteredRMSProp + + + + + + + + + +Update '*var' according to the centered RMSProp algorithm. + + + + + + + + + +The centered RMSProp algorithm uses an estimate of the centered second moment +(i.e., the variance) for normalization, as opposed to regular RMSProp, which +uses the (uncentered) second moment. This often helps with training, but is +slightly more expensive in terms of computation and memory. + +Note that in dense implementation of this algorithm, mg, ms, and mom will +update even if the grad is zero, but in this sparse implementation, mg, ms, +and mom will not update in iterations during which the grad is zero. + +mean_square = decay * mean_square + (1-decay) * gradient ** 2 +mean_grad = decay * mean_grad + (1-decay) * gradient +Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2) + +$$ms <- rho * ms_{t-1} + (1-rho) * grad * grad$$ +$$mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)$$ +$$var <- var - mom$$ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`mg` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`ms` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`mom` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `var`. +Decay rate. Must be a scalar. +
+`momentum` + +A `Tensor`. Must have the same type as `var`. +
+`epsilon` + +A `Tensor`. Must have the same type as `var`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var, ms and mom. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, mg, ms, and mom tensors is +protected by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyFtrl.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyFtrl.md new file mode 100644 index 00000000000..b3a4ecaa614 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyFtrl.md @@ -0,0 +1,158 @@ +description: Update relevant entries in '*var' according to the Ftrl-proximal scheme. + +
+ + +
+
+# tf.raw_ops.SparseApplyFtrl
+
+
+
+
+
+
+
+
+
+Update relevant entries in '*var' according to the Ftrl-proximal scheme.
+
+
+
+
+
+
+
+
+
+That is for rows we have grad for, we update var, accum and linear as follows:
+$$accum_new = accum + grad * grad$$
+$$linear += grad + (accum_{new}^{-lr_{power}} - accum^{-lr_{power}}) / lr * var$$
+$$quadratic = 1.0 / (accum_{new}^{lr_{power}} * lr) + 2 * l2$$
+$$var = (sign(linear) * l1 - linear) / quadratic\ if\ |linear| > l1\ else\ 0.0$$
+$$accum = accum_{new}$$
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`linear` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `var`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `var`. +L2 regularization. Must be a scalar. +
+`lr_power` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
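+
+For readers who find the equations above easier to follow as code, here is a
+direct NumPy transcription of the per-row update (an illustrative sketch of
+the documented formulas, not the kernel implementation):
+
+```python
+import numpy as np
+
+def sparse_ftrl_update(var, accum, linear, grad, indices,
+                       lr, l1, l2, lr_power):
+    # Update only the rows selected by `indices`, as described above.
+    for g, i in zip(grad, indices):
+        accum_new = accum[i] + g * g
+        linear[i] += g + (accum_new ** -lr_power
+                          - accum[i] ** -lr_power) / lr * var[i]
+        quadratic = 1.0 / (accum_new ** lr_power * lr) + 2.0 * l2
+        var[i] = np.where(np.abs(linear[i]) > l1,
+                          (np.sign(linear[i]) * l1 - linear[i]) / quadratic,
+                          0.0)
+        accum[i] = accum_new
+    return var, accum, linear
+```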
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyFtrlV2.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyFtrlV2.md new file mode 100644 index 00000000000..5b9e9ba0199 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyFtrlV2.md @@ -0,0 +1,167 @@ +description: Update relevant entries in '*var' according to the Ftrl-proximal scheme. + +
+ + +
+ +# tf.raw_ops.SparseApplyFtrlV2 + + + + + + + + + +Update relevant entries in '*var' according to the Ftrl-proximal scheme. + + + + + + + + + +That is for rows we have grad for, we update var, accum and linear as follows: +grad_with_shrinkage = grad + 2 * l2_shrinkage * var +accum_new = accum + grad_with_shrinkage * grad_with_shrinkage +linear += grad_with_shrinkage + + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var +quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 +var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 +accum = accum_new + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`linear` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `var`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `var`. +L2 shrinkage regularization. Must be a scalar. +
+`l2_shrinkage` + +A `Tensor`. Must have the same type as `var`. +
+`lr_power` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyMomentum.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyMomentum.md new file mode 100644 index 00000000000..dbba87823db --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyMomentum.md @@ -0,0 +1,144 @@ +description: Update relevant entries in '*var' and '*accum' according to the momentum scheme. + +
+ + +
+ +# tf.raw_ops.SparseApplyMomentum + + + + + + + + + +Update relevant entries in '*var' and '*accum' according to the momentum scheme. + + + + + + + + + +Set use_nesterov = True if you want to use Nesterov momentum. + +That is for rows we have grad for, we update var and accum as follows: + +$$accum = accum * momentum + grad$$ +$$var -= lr * accum$$ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Learning rate. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`momentum` + +A `Tensor`. Must have the same type as `var`. +Momentum. Must be a scalar. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var and accum tensors will be protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`use_nesterov` + +An optional `bool`. Defaults to `False`. +If `True`, the tensor passed to compute grad will be +var - lr * momentum * accum, so in the end, the var you get is actually +var - lr * momentum * accum. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyProximalAdagrad.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyProximalAdagrad.md new file mode 100644 index 00000000000..408d6217e9a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyProximalAdagrad.md @@ -0,0 +1,139 @@ +description: Sparse update entries in '*var' and '*accum' according to FOBOS algorithm. + +
+ + +
+ +# tf.raw_ops.SparseApplyProximalAdagrad + + + + + + + + + +Sparse update entries in '*var' and '*accum' according to FOBOS algorithm. + + + + + + + + + +That is for rows we have grad for, we update var and accum as follows: +$$accum += grad * grad$$ +$$prox_v = var$$ +$$prox_v -= lr * grad * (1 / sqrt(accum))$$ +$$var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0}$$ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`accum` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Learning rate. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `var`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `var`. +L2 regularization. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, updating of the var and accum tensors will be protected by +a lock; otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyProximalGradientDescent.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyProximalGradientDescent.md new file mode 100644 index 00000000000..c8992d2fab4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyProximalGradientDescent.md @@ -0,0 +1,129 @@ +description: Sparse update '*var' as FOBOS algorithm with fixed learning rate. + +
+ + +
+ +# tf.raw_ops.SparseApplyProximalGradientDescent + + + + + + + + + +Sparse update '*var' as FOBOS algorithm with fixed learning rate. + + + + + + + + + +That is for rows we have grad for, we update var as follows: +$$prox_v = var - alpha * grad$$ +$$var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0}$$ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`alpha` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`l1` + +A `Tensor`. Must have the same type as `var`. +L1 regularization. Must be a scalar. +
+`l2` + +A `Tensor`. Must have the same type as `var`. +L2 regularization. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var and accum. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If True, the subtraction will be protected by a lock; +otherwise the behavior is undefined, but may exhibit less contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseApplyRMSProp.md b/site/en/api_docs/python/tf/raw_ops/SparseApplyRMSProp.md new file mode 100644 index 00000000000..1951f0e4fb7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseApplyRMSProp.md @@ -0,0 +1,161 @@ +description: Update '*var' according to the RMSProp algorithm. + +
+ + +
+ +# tf.raw_ops.SparseApplyRMSProp + + + + + + + + + +Update '*var' according to the RMSProp algorithm. + + + + + + + + + +Note that in dense implementation of this algorithm, ms and mom will +update even if the grad is zero, but in this sparse implementation, ms +and mom will not update in iterations during which the grad is zero. + +mean_square = decay * mean_square + (1-decay) * gradient ** 2 +Delta = learning_rate * gradient / sqrt(mean_square + epsilon) + +$$ms <- rho * ms_{t-1} + (1-rho) * grad * grad$$ +$$mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)$$ +$$var <- var - mom$$ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`var` + +A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +Should be from a Variable(). +
+`ms` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`mom` + +A mutable `Tensor`. Must have the same type as `var`. +Should be from a Variable(). +
+`lr` + +A `Tensor`. Must have the same type as `var`. +Scaling factor. Must be a scalar. +
+`rho` + +A `Tensor`. Must have the same type as `var`. +Decay rate. Must be a scalar. +
+`momentum` + +A `Tensor`. Must have the same type as `var`. +
+`epsilon` + +A `Tensor`. Must have the same type as `var`. +Ridge term. Must be a scalar. +
+`grad` + +A `Tensor`. Must have the same type as `var`. The gradient. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A vector of indices into the first dimension of var, ms and mom. +
+`use_locking` + +An optional `bool`. Defaults to `False`. +If `True`, updating of the var, ms, and mom tensors is protected +by a lock; otherwise the behavior is undefined, but may exhibit less +contention. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `var`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseConcat.md b/site/en/api_docs/python/tf/raw_ops/SparseConcat.md new file mode 100644 index 00000000000..0740442281f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseConcat.md @@ -0,0 +1,165 @@ +description: Concatenates a list of SparseTensor along the specified dimension. + +
+ + +
+ +# tf.raw_ops.SparseConcat + + + + + + + + + +Concatenates a list of `SparseTensor` along the specified dimension. + + + + + + + + + +Concatenation is with respect to the dense versions of these sparse tensors. +It is assumed that each input is a `SparseTensor` whose elements are ordered +along increasing dimension number. + +All inputs' shapes must match, except for the concat dimension. The +`indices`, `values`, and `shapes` lists must have the same length. + +The output shape is identical to the inputs', except along the concat +dimension, where it is the sum of the inputs' sizes along that dimension. + +The output elements will be resorted to preserve the sort order along +increasing dimension number. + +This op runs in `O(M log M)` time, where `M` is the total number of non-empty +values across all inputs. This is due to the need for an internal sort in +order to concatenate efficiently across an arbitrary dimension. + +For example, if `concat_dim = 1` and the inputs are + + sp_inputs[0]: shape = [2, 3] + [0, 2]: "a" + [1, 0]: "b" + [1, 1]: "c" + + sp_inputs[1]: shape = [2, 4] + [0, 1]: "d" + [0, 2]: "e" + +then the output will be + + shape = [2, 7] + [0, 2]: "a" + [0, 4]: "d" + [0, 5]: "e" + [1, 0]: "b" + [1, 1]: "c" + +Graphically this is equivalent to doing + + [ a] concat [ d e ] = [ a d e ] + [b c ] [ ] [b c ] + + + + + + + + + + + + + + + + + + + + + + +
+`indices` + +A list of at least 2 `Tensor` objects with type `int64`. +2-D. Indices of each input `SparseTensor`. +
+`values` + +A list with the same length as `indices` of `Tensor` objects with the same type. +1-D. Non-empty values of each `SparseTensor`. +
+`shapes` + +A list with the same length as `indices` of `Tensor` objects with type `int64`. +1-D. Shapes of each `SparseTensor`. +
+`concat_dim` + +An `int`. +Dimension to concatenate along. Must be in range [-rank, rank), +where rank is the number of dimensions in each input `SparseTensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_values, output_shape). +
+`output_indices` + +A `Tensor` of type `int64`. +
+`output_values` + +A `Tensor`. Has the same type as `values`. +
+`output_shape` + +A `Tensor` of type `int64`. +
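+
+The example above can be reproduced with a direct call to the op. This is an
+illustrative eager-mode sketch:
+
+```python
+import tensorflow as tf
+
+# sp_inputs[0]: shape [2, 3] with "a", "b", "c".
+ind0 = tf.constant([[0, 2], [1, 0], [1, 1]], dtype=tf.int64)
+val0 = tf.constant([b"a", b"b", b"c"])
+shp0 = tf.constant([2, 3], dtype=tf.int64)
+
+# sp_inputs[1]: shape [2, 4] with "d", "e".
+ind1 = tf.constant([[0, 1], [0, 2]], dtype=tf.int64)
+val1 = tf.constant([b"d", b"e"])
+shp1 = tf.constant([2, 4], dtype=tf.int64)
+
+out = tf.raw_ops.SparseConcat(
+    indices=[ind0, ind1], values=[val0, val1], shapes=[shp0, shp1],
+    concat_dim=1)
+
+print(out.output_shape.numpy())   # [2 7]
+print(out.output_values.numpy())  # [b'a' b'd' b'e' b'b' b'c']
+```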
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseConditionalAccumulator.md b/site/en/api_docs/python/tf/raw_ops/SparseConditionalAccumulator.md new file mode 100644 index 00000000000..e46a43a2285 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseConditionalAccumulator.md @@ -0,0 +1,116 @@ +description: A conditional accumulator for aggregating sparse gradients. + +
+ + +
+ +# tf.raw_ops.SparseConditionalAccumulator + + + + + + + + + +A conditional accumulator for aggregating sparse gradients. + + + + + + + + + +The accumulator accepts gradients marked with local_step greater or +equal to the most recent global_step known to the accumulator. The +average can be extracted from the accumulator, provided sufficient +gradients have been accumulated. Extracting the average automatically +resets the aggregate to 0, and increments the global_step recorded by +the accumulator. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +A tf.DType from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`. +The type of the value being accumulated. +
+`shape` + +A tf.TensorShape or list of `ints`. The shape of the values. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this accumulator is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this accumulator will be shared under the given name +across multiple sessions. +
+`reduction_type` + +An optional `string` from: `"MEAN", "SUM"`. Defaults to `"MEAN"`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseCross.md b/site/en/api_docs/python/tf/raw_ops/SparseCross.md new file mode 100644 index 00000000000..21f8b4e301f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseCross.md @@ -0,0 +1,200 @@ +description: Generates sparse cross from a list of sparse and dense tensors. + +
+ + +
+ +# tf.raw_ops.SparseCross + + + + + + + + + +Generates sparse cross from a list of sparse and dense tensors. + + + + + + + + + +The op takes two lists, one of 2D `SparseTensor` and one of 2D `Tensor`, each +representing features of one feature column. It outputs a 2D `SparseTensor` with +the batchwise crosses of these features. + +For example, if the inputs are + + inputs[0]: SparseTensor with shape = [2, 2] + [0, 0]: "a" + [1, 0]: "b" + [1, 1]: "c" + + inputs[1]: SparseTensor with shape = [2, 1] + [0, 0]: "d" + [1, 0]: "e" + + inputs[2]: Tensor [["f"], ["g"]] + +then the output will be + + shape = [2, 2] + [0, 0]: "a_X_d_X_f" + [1, 0]: "b_X_e_X_g" + [1, 1]: "c_X_e_X_g" + +if hashed_output=true then the output will be + + shape = [2, 2] + [0, 0]: FingerprintCat64( + Fingerprint64("f"), FingerprintCat64( + Fingerprint64("d"), Fingerprint64("a"))) + [1, 0]: FingerprintCat64( + Fingerprint64("g"), FingerprintCat64( + Fingerprint64("e"), Fingerprint64("b"))) + [1, 1]: FingerprintCat64( + Fingerprint64("g"), FingerprintCat64( + Fingerprint64("e"), Fingerprint64("c"))) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`indices` + +A list of `Tensor` objects with type `int64`. +2-D. Indices of each input `SparseTensor`. +
+`values` + +A list of `Tensor` objects with types from: `int64`, `string`. +1-D. values of each `SparseTensor`. +
+`shapes` + +A list with the same length as `indices` of `Tensor` objects with type `int64`. +1-D. Shapes of each `SparseTensor`. +
+`dense_inputs` + +A list of `Tensor` objects with types from: `int64`, `string`. +2-D. Columns represented by dense `Tensor`. +
+`hashed_output`
+
+A `bool`.
+If true, returns the hash of the cross instead of the string.
+This avoids string manipulations.
+
+`num_buckets` + +An `int` that is `>= 0`. It is used if hashed_output is true. +output = hashed_value%num_buckets if num_buckets > 0 else hashed_value. +
+`hash_key` + +An `int`. +Specify the hash_key that will be used by the `FingerprintCat64` +function to combine the crosses fingerprints. +
+`out_type` + +A tf.DType from: `tf.int64, tf.string`. +
+`internal_type` + +A tf.DType from: `tf.int64, tf.string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_values, output_shape). +
+`output_indices` + +A `Tensor` of type `int64`. +
+`output_values` + +A `Tensor` of type `out_type`. +
+`output_shape` + +A `Tensor` of type `int64`. +
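+
+The string-cross example above, expressed as a direct call to the op. This is
+an illustrative sketch; when `hashed_output` is false, `num_buckets` and
+`hash_key` are still required but do not affect the result:
+
+```python
+import tensorflow as tf
+
+# inputs[0]: SparseTensor of shape [2, 2] with "a", "b", "c".
+ind0 = tf.constant([[0, 0], [1, 0], [1, 1]], dtype=tf.int64)
+val0 = tf.constant([b"a", b"b", b"c"])
+shp0 = tf.constant([2, 2], dtype=tf.int64)
+
+# inputs[1]: SparseTensor of shape [2, 1] with "d", "e".
+ind1 = tf.constant([[0, 0], [1, 0]], dtype=tf.int64)
+val1 = tf.constant([b"d", b"e"])
+shp1 = tf.constant([2, 1], dtype=tf.int64)
+
+# inputs[2]: a dense string column.
+dense = tf.constant([[b"f"], [b"g"]])
+
+out = tf.raw_ops.SparseCross(
+    indices=[ind0, ind1], values=[val0, val1], shapes=[shp0, shp1],
+    dense_inputs=[dense],
+    hashed_output=False, num_buckets=0, hash_key=0,
+    out_type=tf.string, internal_type=tf.string)
+
+print(out.output_values.numpy())
+# [b'a_X_d_X_f' b'b_X_e_X_g' b'c_X_e_X_g']
+```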
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseDenseCwiseAdd.md b/site/en/api_docs/python/tf/raw_ops/SparseDenseCwiseAdd.md new file mode 100644 index 00000000000..37f87fb8c3e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseDenseCwiseAdd.md @@ -0,0 +1,111 @@ +description: Adds up a SparseTensor and a dense Tensor, using these special rules: + +
+ + +
+ +# tf.raw_ops.SparseDenseCwiseAdd + + + + + + + + + +Adds up a SparseTensor and a dense Tensor, using these special rules: + + + + + + + + + +(1) Broadcasts the dense side to have the same shape as the sparse side, if + eligible; +(2) Then, only the dense values pointed to by the indices of the SparseTensor + participate in the cwise addition. + +By these rules, the result is a logical SparseTensor with exactly the same +indices and shape, but possibly with different non-zero values. The output of +this Op is the resultant non-zero values. + + + + + + + + + + + + + + + + + + + + + + +
+`sp_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, possibly not in canonical ordering. +
+`sp_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `sp_indices`. +
+`sp_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`dense` + +A `Tensor`. Must have the same type as `sp_values`. +`R`-D. The dense Tensor operand. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `sp_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseDenseCwiseDiv.md b/site/en/api_docs/python/tf/raw_ops/SparseDenseCwiseDiv.md new file mode 100644 index 00000000000..a86a4793a49 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseDenseCwiseDiv.md @@ -0,0 +1,105 @@ +description: Component-wise divides a SparseTensor by a dense Tensor. + +
+ + +
+ +# tf.raw_ops.SparseDenseCwiseDiv + + + + + + + + + +Component-wise divides a SparseTensor by a dense Tensor. + + + + + + + + + +*Limitation*: this Op only broadcasts the dense side to the sparse side, but not +the other direction. + + + + + + + + + + + + + + + + + + + + + + +
+`sp_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, possibly not in canonical ordering. +
+`sp_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `sp_indices`. +
+`sp_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`dense` + +A `Tensor`. Must have the same type as `sp_values`. +`R`-D. The dense Tensor operand. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `sp_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseDenseCwiseMul.md b/site/en/api_docs/python/tf/raw_ops/SparseDenseCwiseMul.md new file mode 100644 index 00000000000..e6437a244c4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseDenseCwiseMul.md @@ -0,0 +1,109 @@ +description: Component-wise multiplies a SparseTensor by a dense Tensor. + +
+ + +
+ +# tf.raw_ops.SparseDenseCwiseMul + + + + + + + + + +Component-wise multiplies a SparseTensor by a dense Tensor. + + + + + + + + + +The output locations corresponding to the implicitly zero elements in the sparse +tensor will be zero (i.e., will not take up storage space), regardless of the +contents of the dense tensor (even if it's +/-INF and that INF*0 == NaN). + +*Limitation*: this Op only broadcasts the dense side to the sparse side, but not +the other direction. + + + + + + + + + + + + + + + + + + + + + + +
+`sp_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, possibly not in canonical ordering. +
+`sp_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `sp_indices`. +
+`sp_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`dense` + +A `Tensor`. Must have the same type as `sp_values`. +`R`-D. The dense Tensor operand. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `sp_values`. +
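+
+A minimal eager-mode sketch (illustrative values only):
+
+```python
+import tensorflow as tf
+
+# A 2x3 SparseTensor with non-zeros at [0, 0] and [1, 2].
+sp_indices = tf.constant([[0, 0], [1, 2]], dtype=tf.int64)
+sp_values = tf.constant([3.0, 5.0])
+sp_shape = tf.constant([2, 3], dtype=tf.int64)
+
+dense = tf.constant([[10.0, 10.0, 10.0],
+                     [20.0, 20.0, 20.0]])
+
+out = tf.raw_ops.SparseDenseCwiseMul(
+    sp_indices=sp_indices, sp_values=sp_values, sp_shape=sp_shape,
+    dense=dense)
+
+# Only the values at the sparse positions are produced.
+print(out.numpy())  # [ 30. 100.]
+```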
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseFillEmptyRows.md b/site/en/api_docs/python/tf/raw_ops/SparseFillEmptyRows.md new file mode 100644 index 00000000000..d688e9ab80b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseFillEmptyRows.md @@ -0,0 +1,167 @@ +description: Fills empty rows in the input 2-D SparseTensor with a default value. + +
+ + +
+ +# tf.raw_ops.SparseFillEmptyRows + + + + + + + + + +Fills empty rows in the input 2-D `SparseTensor` with a default value. + + + + + + + + + +The input `SparseTensor` is represented via the tuple of inputs +(`indices`, `values`, `dense_shape`). The output `SparseTensor` has the +same `dense_shape` but with indices `output_indices` and values +`output_values`. + +This op inserts a single entry for every row that doesn't have any values. +The index is created as `[row, 0, ..., 0]` and the inserted value +is `default_value`. + +For example, suppose `sp_input` has shape `[5, 6]` and non-empty values: + + [0, 1]: a + [0, 3]: b + [2, 0]: c + [3, 1]: d + +Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values: + + [0, 1]: a + [0, 3]: b + [1, 0]: default_value + [2, 0]: c + [3, 1]: d + [4, 0]: default_value + +The output `SparseTensor` will be in row-major order and will have the +same shape as the input. + +This op also returns an indicator vector shaped `[dense_shape[0]]` such that + + empty_row_indicator[i] = True iff row i was an empty row. + +And a reverse index map vector shaped `[indices.shape[0]]` that is used during +backpropagation, + + reverse_index_map[j] = out_j s.t. indices[j, :] == output_indices[out_j, :] + + + + + + + + + + + + + + + + + + + + + + +
+`indices` + +A `Tensor` of type `int64`. +2-D. the indices of the sparse tensor. +
+`values` + +A `Tensor`. 1-D. the values of the sparse tensor. +
+`dense_shape` + +A `Tensor` of type `int64`. +1-D. the shape of the sparse tensor. +
+`default_value` + +A `Tensor`. Must have the same type as `values`. +0-D. default value to insert into location `[row, 0, ..., 0]` +for rows missing from the input sparse tensor. +output indices: 2-D. the indices of the filled sparse tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_values, empty_row_indicator, reverse_index_map). +
+`output_indices` + +A `Tensor` of type `int64`. +
+`output_values` + +A `Tensor`. Has the same type as `values`. +
+`empty_row_indicator` + +A `Tensor` of type `bool`. +
+`reverse_index_map` + +A `Tensor` of type `int64`. +
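+
+The example above can be reproduced directly (an illustrative eager-mode
+sketch using string values):
+
+```python
+import tensorflow as tf
+
+# A [5, 6] sparse tensor in which rows 1 and 4 are empty.
+indices = tf.constant([[0, 1], [0, 3], [2, 0], [3, 1]], dtype=tf.int64)
+values = tf.constant([b"a", b"b", b"c", b"d"])
+dense_shape = tf.constant([5, 6], dtype=tf.int64)
+
+out = tf.raw_ops.SparseFillEmptyRows(
+    indices=indices, values=values, dense_shape=dense_shape,
+    default_value=tf.constant(b"default_value"))
+
+print(out.output_indices.numpy())       # entries [1, 0] and [4, 0] are added
+print(out.empty_row_indicator.numpy())  # [False  True False False  True]
+```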
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseFillEmptyRowsGrad.md b/site/en/api_docs/python/tf/raw_ops/SparseFillEmptyRowsGrad.md new file mode 100644 index 00000000000..82963ec2a5a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseFillEmptyRowsGrad.md @@ -0,0 +1,107 @@ +description: The gradient of SparseFillEmptyRows. + +
+ + +
+ +# tf.raw_ops.SparseFillEmptyRowsGrad + + + + + + + + + +The gradient of SparseFillEmptyRows. + + + + + + + + + +Takes vectors reverse_index_map, shaped `[N]`, and grad_values, +shaped `[N_full]`, where `N_full >= N` and copies data into either +`d_values` or `d_default_value`. Here `d_values` is shaped `[N]` and +`d_default_value` is a scalar. + + d_values[j] = grad_values[reverse_index_map[j]] + d_default_value = sum_{k : 0 .. N_full - 1} ( + grad_values[k] * 1{k not in reverse_index_map}) + + + + + + + + + + + + + + + + +
+`reverse_index_map` + +A `Tensor` of type `int64`. +1-D. The reverse index map from SparseFillEmptyRows. +
+`grad_values` + +A `Tensor`. 1-D. The gradients from backprop. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (d_values, d_default_value). +
+`d_values` + +A `Tensor`. Has the same type as `grad_values`. +
+`d_default_value` + +A `Tensor`. Has the same type as `grad_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatMul.md b/site/en/api_docs/python/tf/raw_ops/SparseMatMul.md new file mode 100644 index 00000000000..b4871c4b4ad --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatMul.md @@ -0,0 +1,122 @@ +description: Multiply matrix "a" by matrix "b". + +
+ + +
+ +# tf.raw_ops.SparseMatMul + + + + + + + + + +Multiply matrix "a" by matrix "b". + + + + + + + + + +The inputs must be two-dimensional matrices and the inner dimension of "a" must +match the outer dimension of "b". Both "a" and "b" must be `Tensor`s not +`SparseTensor`s. This op is optimized for the case where at least one of "a" or +"b" is sparse, in the sense that they have a large proportion of zero values. +The breakeven for using this versus a dense matrix multiply on one platform was +30% zero values in the sparse matrix. + +The gradient computation of this operation will only take advantage of sparsity +in the input gradient when that gradient comes from a Relu. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor`. Must be one of the following types: `float32`, `bfloat16`. +
+`b` + +A `Tensor`. Must be one of the following types: `float32`, `bfloat16`. +
+`transpose_a` + +An optional `bool`. Defaults to `False`. +
+`transpose_b` + +An optional `bool`. Defaults to `False`. +
+`a_is_sparse` + +An optional `bool`. Defaults to `False`. +
+`b_is_sparse` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
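+
+A minimal eager-mode sketch; note that both operands are ordinary dense
+`Tensor`s, and `a_is_sparse` is only a hint about their contents:
+
+```python
+import tensorflow as tf
+
+# "a" is a dense Tensor that happens to be mostly zeros.
+a = tf.constant([[0.0, 0.0, 1.0],
+                 [0.0, 2.0, 0.0]])
+b = tf.constant([[1.0, 1.0],
+                 [1.0, 1.0],
+                 [1.0, 1.0]])
+
+c = tf.raw_ops.SparseMatMul(a=a, b=b, a_is_sparse=True)
+
+print(c.numpy())  # [[1. 1.]
+                  #  [2. 2.]]
+```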
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixAdd.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixAdd.md new file mode 100644 index 00000000000..fabc20ba908 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixAdd.md @@ -0,0 +1,101 @@ +description: Sparse addition of two CSR matrices, C = alpha * A + beta * B. + +
+ + +
+ +# tf.raw_ops.SparseMatrixAdd + + + + + + + + + +Sparse addition of two CSR matrices, C = alpha * A + beta * B. + + + + + + + + + +The gradients of SparseMatrixAdd outputs with respect to alpha and beta are not +currently defined (TensorFlow will return zeros for these entries). + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor` of type `variant`. A CSRSparseMatrix. +
+`b` + +A `Tensor` of type `variant`. A CSRSparseMatrix. +
+`alpha` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `complex64`, `complex128`. +A constant scalar. +
+`beta` + +A `Tensor`. Must have the same type as `alpha`. A constant scalar. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixMatMul.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixMatMul.md new file mode 100644 index 00000000000..2f2be61aa4c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixMatMul.md @@ -0,0 +1,158 @@ +description: Matrix-multiplies a sparse matrix with a dense matrix. + +
+ + +
+ +# tf.raw_ops.SparseMatrixMatMul + + + + + + + + + +Matrix-multiplies a sparse matrix with a dense matrix. + + + + + + + + + +Returns a dense matrix. +For inputs A and B, where A is CSR and B is dense; this op returns a dense C; + +If transpose_output is false, returns: +``` + C = A . B +``` + +If transpose_output is `true`, returns: +``` + C = transpose(A . B) = transpose(B) . transpose(A) +``` +where the transposition is performed along the two innermost (matrix) +dimensions. + +If conjugate_output is `true`, returns: +``` + C = conjugate(A . B) = conjugate(A) . conjugate(B) +``` + +If both conjugate_output and transpose_output are `true`, returns: +``` + C = conjugate(transpose(A . B)) = conjugate(transpose(B)) . + conjugate(transpose(A)) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor` of type `variant`. A CSRSparseMatrix. +
+`b` + +A `Tensor`. A dense tensor. +
+`transpose_a` + +An optional `bool`. Defaults to `False`. +Indicates whether `a` should be transposed. +
+`transpose_b` + +An optional `bool`. Defaults to `False`. +Indicates whether `b` should be transposed. +
+`adjoint_a` + +An optional `bool`. Defaults to `False`. +Indicates whether `a` should be conjugate-transposed. +
+`adjoint_b` + +An optional `bool`. Defaults to `False`. +Indicates whether `b` should be conjugate-transposed. +
+`transpose_output` + +An optional `bool`. Defaults to `False`. +Transposes the product of `a` and `b`. +
+`conjugate_output` + +An optional `bool`. Defaults to `False`. +Conjugates the product of `a` and `b`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `b`. +
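+
+An illustrative eager-mode sketch. It assumes the companion raw op
+`SparseTensorToCSRSparseMatrix` to build the CSR operand from COO data:
+
+```python
+import tensorflow as tf
+
+# A = [[2, 0], [0, 3]] as a CSRSparseMatrix.
+indices = tf.constant([[0, 0], [1, 1]], dtype=tf.int64)
+values = tf.constant([2.0, 3.0])
+dense_shape = tf.constant([2, 2], dtype=tf.int64)
+a_csr = tf.raw_ops.SparseTensorToCSRSparseMatrix(
+    indices=indices, values=values, dense_shape=dense_shape)
+
+b = tf.constant([[1.0, 2.0],
+                 [3.0, 4.0]])
+
+c = tf.raw_ops.SparseMatrixMatMul(a=a_csr, b=b)
+
+print(c.numpy())  # [[2. 4.]
+                  #  [9. 12.]]
+```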
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixMul.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixMul.md new file mode 100644 index 00000000000..031c3e214f0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixMul.md @@ -0,0 +1,92 @@ +description: Element-wise multiplication of a sparse matrix with a dense tensor. + +
+ + +
+ +# tf.raw_ops.SparseMatrixMul + + + + + + + + + +Element-wise multiplication of a sparse matrix with a dense tensor. + + + + + + + + + +Returns a sparse matrix. + +The dense tensor `b` may be either a scalar; otherwise `a` must be a rank-3 +`SparseMatrix`; in this case `b` must be shaped `[batch_size, 1, 1]` and the +multiply operation broadcasts. + +**NOTE** even if `b` is zero, the sparsity structure of the output does not +change. + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor` of type `variant`. A CSRSparseMatrix. +
+`b` + +A `Tensor`. A dense tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixNNZ.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixNNZ.md new file mode 100644 index 00000000000..def0640a3ab --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixNNZ.md @@ -0,0 +1,77 @@ +description: Returns the number of nonzeroes of sparse_matrix. + +
+ + +
+ +# tf.raw_ops.SparseMatrixNNZ + + + + + + + + + +Returns the number of nonzeroes of `sparse_matrix`. + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_matrix` + +A `Tensor` of type `variant`. A CSRSparseMatrix. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixOrderingAMD.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixOrderingAMD.md new file mode 100644 index 00000000000..911781a3a96 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixOrderingAMD.md @@ -0,0 +1,128 @@ +description: Computes the Approximate Minimum Degree (AMD) ordering of input. + +
+ + +
+
+# tf.raw_ops.SparseMatrixOrderingAMD
+
+
+
+
+
+
+
+
+
+Computes the Approximate Minimum Degree (AMD) ordering of `input`.
+
+
+
+
+
+
+
+
+
+Computes the Approximate Minimum Degree (AMD) ordering for a sparse matrix.
+
+The returned permutation may be used to permute the rows and columns of the
+given sparse matrix. This typically results in the permuted sparse matrix's
+sparse Cholesky (or other) decompositions having less zero fill-in than a
+decomposition of the original matrix.
+
+The input sparse matrix may have rank 2 or rank 3. The output Tensor,
+representing the permutation, would then have rank 1 or 2 respectively, with
+the same batch shape as the input.
+
+Each component of the input sparse matrix must represent a square symmetric
+matrix; only the lower triangular part of the matrix is read. The values of the
+sparse matrix do not affect the returned permutation, only the sparsity
+pattern of the sparse matrix is used. Hence, a single AMD ordering may be
+reused for the Cholesky decompositions of sparse matrices with the same sparsity
+pattern but with possibly different values.
+
+Each batch component of the output permutation represents a permutation of `N`
+elements, where the input sparse matrix components each have `N` rows. That is,
+the component contains each of the integers `{0, .. N-1}` exactly once. The
+`i`th element represents the row index that the `i`th row maps to.
+
+#### Usage example:
+
+
+
+```python
+  from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops
+
+  a_indices = np.array([[0, 0], [1, 1], [2, 1], [2, 2], [3, 3]])
+  a_values = np.array([1.0, 2.0, 1.0, 3.0, 4.0], np.float32)
+  a_dense_shape = [4, 4]
+
+  with tf.Session() as sess:
+    # Define (COO format) SparseTensor over Numpy array.
+    a_st = tf.SparseTensor(a_indices, a_values, a_dense_shape)
+
+    # Convert SparseTensors to CSR SparseMatrix.
+    a_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(
+        a_st.indices, a_st.values, a_st.dense_shape)
+
+    # Obtain the AMD Ordering for the CSR SparseMatrix.
+    ordering_amd = sparse_csr_matrix_ops.sparse_matrix_ordering_amd(a_sm)
+
+    ordering_amd_value = sess.run(ordering_amd)
+```
+
+`ordering_amd_value` stores the AMD ordering: `[1 2 3 0]`.
+
+input: A `CSRSparseMatrix`.
+
+
+
+
+
+
+
+
+
+
+
+
+
+`input` + +A `Tensor` of type `variant`. A `CSRSparseMatrix`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixSoftmax.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixSoftmax.md new file mode 100644 index 00000000000..317cabe39b4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixSoftmax.md @@ -0,0 +1,89 @@ +description: Calculates the softmax of a CSRSparseMatrix. + +
+ + +
+ +# tf.raw_ops.SparseMatrixSoftmax + + + + + + + + + +Calculates the softmax of a CSRSparseMatrix. + + + + + + + + + +Calculate the softmax of the innermost dimensions of a SparseMatrix. + +Missing values are treated as `-inf` (i.e., logits of zero probability); and +the output has the same sparsity structure as the input (though missing values +in the output may now be treated as having probability zero). + + + + + + + + + + + + + + + + +
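A minimal sketch, not part of the original page, assuming TensorFlow 2.x eager execution and the helper raw ops `SparseTensorToCSRSparseMatrix` / `CSRSparseMatrixToDense` for building and inspecting the CSR matrix:

```python
import tensorflow as tf

# Row 0 stores two equal logits; row 1 stores a single logit.
st = tf.sparse.SparseTensor(
    indices=[[0, 0], [0, 2], [1, 1]], values=[1.0, 1.0, 5.0], dense_shape=[2, 3])
logits_sm = tf.raw_ops.SparseTensorToCSRSparseMatrix(
    indices=st.indices, values=st.values, dense_shape=st.dense_shape)

softmax_sm = tf.raw_ops.SparseMatrixSoftmax(logits=logits_sm, type=tf.float32)

dense = tf.raw_ops.CSRSparseMatrixToDense(sparse_matrix=softmax_sm, type=tf.float32)
# Row 0 becomes [0.5, 0, 0.5]; row 1 becomes [0, 1.0, 0]; missing entries stay missing.
```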
+`logits` + +A `Tensor` of type `variant`. A CSRSparseMatrix. +
+`type` + +A tf.DType from: `tf.float32, tf.float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixSoftmaxGrad.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixSoftmaxGrad.md new file mode 100644 index 00000000000..92e55731b5e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixSoftmaxGrad.md @@ -0,0 +1,91 @@ +description: Calculates the gradient of the SparseMatrixSoftmax op. + +
+ + +
+ +# tf.raw_ops.SparseMatrixSoftmaxGrad + + + + + + + + + +Calculates the gradient of the SparseMatrixSoftmax op. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`softmax` + +A `Tensor` of type `variant`. A CSRSparseMatrix. +
+`grad_softmax` + +A `Tensor` of type `variant`. The gradient of `softmax`. +
+`type` + +A tf.DType from: `tf.float32, tf.float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixSparseCholesky.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixSparseCholesky.md new file mode 100644 index 00000000000..7104decb645 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixSparseCholesky.md @@ -0,0 +1,171 @@ +description: Computes the sparse Cholesky decomposition of input. + +
+ + +
+ +# tf.raw_ops.SparseMatrixSparseCholesky + + + + + + + + + +Computes the sparse Cholesky decomposition of `input`. + + + + + + + + + +Computes the Sparse Cholesky decomposition of a sparse matrix, with the given +fill-in reducing permutation. + +The input sparse matrix and the fill-in reducing permutation `permutation` must +have compatible shapes. If the sparse matrix has rank 3, with batch +dimension `B`, then `permutation` must be of rank 2, with the same batch +dimension `B`. There is no support for broadcasting. + +Furthermore, each component vector of `permutation` must be of length `N`, +containing each of the integers {0, 1, ..., N - 1} exactly once, where `N` is +the number of rows of each component of the sparse matrix. + +Each component of the input sparse matrix must represent a symmetric positive +definite (SPD) matrix, although only the lower triangular part of the matrix is +read. If any individual component is not SPD, then an InvalidArgument error is +thrown. + +The returned sparse matrix has the same dense shape as the input sparse matrix. +For each component `A` of the input sparse matrix, the corresponding output +sparse matrix represents `L`, the lower triangular Cholesky factor satisfying +the following identity: + +``` + A = L * Lt +``` + +where Lt denotes the transpose of L (or its conjugate transpose, if `type` is +`complex64` or `complex128`). + +The `type` parameter denotes the type of the matrix elements. The supported +types are: `float32`, `float64`, `complex64` and `complex128`. + +#### Usage example: + + + +```python + from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops + + a_indices = np.array([[0, 0], [1, 1], [2, 1], [2, 2], [3, 3]]) + a_values = np.array([1.0, 2.0, 1.0, 3.0, 4.0], np.float32) + a_dense_shape = [4, 4] + + with tf.Session() as sess: + # Define (COO format) SparseTensor over Numpy array. + a_st = tf.SparseTensor(a_indices, a_values, a_dense_shape) + + # Convert SparseTensors to CSR SparseMatrix. + a_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( + a_st.indices, a_st.values, a_st.dense_shape) + + # Obtain the Sparse Cholesky factor using AMD Ordering for reducing zero + # fill-in (number of structural non-zeros in the sparse Cholesky factor). + ordering_amd = sparse_csr_matrix_ops.sparse_matrix_ordering_amd(a_sm) + cholesky_sparse_matrices = ( + sparse_csr_matrix_ops.sparse_matrix_sparse_cholesky( + a_sm, ordering_amd, type=tf.float32)) + + # Convert the CSRSparseMatrix Cholesky factor to a dense Tensor + dense_cholesky = sparse_csr_matrix_ops.csr_sparse_matrix_to_dense( + cholesky_sparse_matrices, tf.float32) + + # Evaluate the dense Tensor value. + dense_cholesky_value = sess.run(dense_cholesky) +``` + +`dense_cholesky_value` stores the dense Cholesky factor: + +``` + [[ 1. 0. 0. 0.] + [ 0. 1.41 0. 0.] + [ 0. 0.70 1.58 0.] + [ 0. 0. 0. 2.]] +``` + + +input: A `CSRSparseMatrix`. +permutation: A `Tensor`. +type: The type of `input`. + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `variant`. A `CSRSparseMatrix`. +
+`permutation` + +A `Tensor` of type `int32`. +A fill-in reducing permutation matrix. +
+`type` + +A tf.DType from: `tf.float32, tf.float64, tf.complex64, tf.complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixSparseMatMul.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixSparseMatMul.md new file mode 100644 index 00000000000..332697beb87 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixSparseMatMul.md @@ -0,0 +1,202 @@ +description: Sparse-matrix-multiplies two CSR matrices a and b. + +
+ + +
+ +# tf.raw_ops.SparseMatrixSparseMatMul + + + + + + + + + +Sparse-matrix-multiplies two CSR matrices `a` and `b`. + + + + + + + + + +Performs a matrix multiplication of a sparse matrix `a` with a sparse matrix +`b`; returns a sparse matrix `a * b`, unless either `a` or `b` is transposed or +adjointed. + +Each matrix may be transposed or adjointed (conjugated and transposed) +according to the Boolean parameters `transpose_a`, `adjoint_a`, `transpose_b` +and `adjoint_b`. At most one of `transpose_a` or `adjoint_a` may be True. +Similarly, at most one of `transpose_b` or `adjoint_b` may be True. + +The inputs must have compatible shapes. That is, the inner dimension of `a` +must be equal to the outer dimension of `b`. This requirement is adjusted +according to whether either `a` or `b` is transposed or adjointed. + +The `type` parameter denotes the type of the matrix elements. Both `a` and `b` +must have the same type. The supported types are: `float32`, `float64`, +`complex64` and `complex128`. + +Both `a` and `b` must have the same rank. Broadcasting is not supported. If they +have rank 3, each batch of 2D CSRSparseMatrices within `a` and `b` must have the +same dense shape. + +The sparse matrix product may have numeric (non-structural) zeros. + +#### Usage example: + + + +```python + from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops + + a_indices = np.array([[0, 0], [2, 3], [2, 4], [3, 0]]) + a_values = np.array([1.0, 5.0, -1.0, -2.0], np.float32) + a_dense_shape = [4, 5] + + b_indices = np.array([[0, 0], [3, 0], [3, 1]]) + b_values = np.array([2.0, 7.0, 8.0], np.float32) + b_dense_shape = [5, 3] + + with tf.Session() as sess: + # Define (COO format) Sparse Tensors over Numpy arrays + a_st = tf.SparseTensor(a_indices, a_values, a_dense_shape) + b_st = tf.SparseTensor(b_indices, b_values, b_dense_shape) + + # Convert SparseTensors to CSR SparseMatrix + a_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( + a_st.indices, a_st.values, a_st.dense_shape) + b_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( + b_st.indices, b_st.values, b_st.dense_shape) + + # Compute the CSR SparseMatrix matrix multiplication + c_sm = sparse_csr_matrix_ops.sparse_matrix_sparse_mat_mul( + a=a_sm, b=b_sm, type=tf.float32) + + # Convert the CSR SparseMatrix product to a dense Tensor + c_sm_dense = sparse_csr_matrix_ops.csr_sparse_matrix_to_dense( + c_sm, tf.float32) + # Evaluate the dense Tensor value + c_sm_dense_value = sess.run(c_sm_dense) +``` + +`c_sm_dense_value` stores the dense matrix product: + +``` + [[ 2. 0. 0.] + [ 0. 0. 0.] + [ 35. 40. 0.] + [ -4. 0. 0.]] +``` + +a: A `CSRSparseMatrix`. +b: A `CSRSparseMatrix` with the same type and rank as `a`. +type: The type of both `a` and `b`. +transpose_a: If True, `a` is transposed before multiplication. +transpose_b: If True, `b` is transposed before multiplication. +adjoint_a: If True, `a` is adjointed before multiplication. +adjoint_b: If True, `b` is adjointed before multiplication. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +A `Tensor` of type `variant`. A CSRSparseMatrix. +
+`b` + +A `Tensor` of type `variant`. A CSRSparseMatrix. +
+`type` + +A tf.DType from: `tf.float32, tf.float64, tf.complex64, tf.complex128`. +
+`transpose_a` + +An optional `bool`. Defaults to `False`. +Indicates whether `a` should be transposed. +
+`transpose_b` + +An optional `bool`. Defaults to `False`. +Indicates whether `b` should be transposed. +
+`adjoint_a` + +An optional `bool`. Defaults to `False`. +Indicates whether `a` should be conjugate-transposed. +
+`adjoint_b` + +An optional `bool`. Defaults to `False`. +Indicates whether `b` should be conjugate-transposed. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixTranspose.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixTranspose.md new file mode 100644 index 00000000000..830fb333794 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixTranspose.md @@ -0,0 +1,94 @@ +description: Transposes the inner (matrix) dimensions of a CSRSparseMatrix. + +
+ + +
+ +# tf.raw_ops.SparseMatrixTranspose + + + + + + + + + +Transposes the inner (matrix) dimensions of a CSRSparseMatrix. + + + + + + + + + +Transposes the inner (matrix) dimensions of a SparseMatrix and optionally +conjugates its values. + + + + + + + + + + + + + + + + + + + +
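The sketch below is illustrative only (TensorFlow 2.x eager execution assumed; `SparseTensorToCSRSparseMatrix` and `CSRSparseMatrixToDense` are used as helpers):

```python
import tensorflow as tf

st = tf.sparse.SparseTensor(
    indices=[[0, 1], [1, 2]], values=[4.0, 5.0], dense_shape=[2, 3])
a_sm = tf.raw_ops.SparseTensorToCSRSparseMatrix(
    indices=st.indices, values=st.values, dense_shape=st.dense_shape)

# Transpose the 2x3 CSR matrix into a 3x2 CSR matrix (conjugation is a no-op for floats).
at_sm = tf.raw_ops.SparseMatrixTranspose(input=a_sm, type=tf.float32, conjugate=False)

dense_t = tf.raw_ops.CSRSparseMatrixToDense(sparse_matrix=at_sm, type=tf.float32)
# dense_t has shape [3, 2], with 4.0 at [1, 0] and 5.0 at [2, 1].
```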
+`input` + +A `Tensor` of type `variant`. A CSRSparseMatrix. +
+`type` + +A tf.DType from: `tf.float32, tf.float64, tf.complex64, tf.complex128`. +
+`conjugate` + +An optional `bool`. Defaults to `False`. +Indicates whether `input` should be conjugated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseMatrixZeros.md b/site/en/api_docs/python/tf/raw_ops/SparseMatrixZeros.md new file mode 100644 index 00000000000..59816bfe759 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseMatrixZeros.md @@ -0,0 +1,84 @@ +description: Creates an all-zeros CSRSparseMatrix with shape dense_shape. + +
+ + +
+ +# tf.raw_ops.SparseMatrixZeros + + + + + + + + + +Creates an all-zeros CSRSparseMatrix with shape `dense_shape`. + + + + + + + + + + + + + + + + + + + + + + + + + +
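A hedged example (TensorFlow 2.x eager execution assumed); note that `dense_shape` must be an `int64` tensor:

```python
import tensorflow as tf

zeros_sm = tf.raw_ops.SparseMatrixZeros(
    dense_shape=tf.constant([2, 3], dtype=tf.int64), type=tf.float32)

# The resulting CSRSparseMatrix stores no entries at all.
nnz = tf.raw_ops.SparseMatrixNNZ(sparse_matrix=zeros_sm)  # => 0
```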
+`dense_shape` + +A `Tensor` of type `int64`. The desired matrix shape. +
+`type` + +A tf.DType from: `tf.float32, tf.float64, tf.complex64, tf.complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseReduceMax.md b/site/en/api_docs/python/tf/raw_ops/SparseReduceMax.md new file mode 100644 index 00000000000..aa3789cc695 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseReduceMax.md @@ -0,0 +1,124 @@ +description: Computes the max of elements across dimensions of a SparseTensor. + +
+ + +
+ +# tf.raw_ops.SparseReduceMax + + + + + + + + + +Computes the max of elements across dimensions of a SparseTensor. + + + + + + + + + +This Op takes a SparseTensor and is the sparse counterpart to +tf.reduce_max(). In particular, this Op also returns a dense `Tensor` +instead of a sparse one. + +Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained +with length 1. + +If `reduction_axes` has no entries, all dimensions are reduced, and a tensor +with a single element is returned. Additionally, the axes can be negative, +which are interpreted according to the indexing rules in Python. + + + + + + + + + + + + + + + + + + + + + + + + + +
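A minimal sketch of calling the raw op directly on the components of a `SparseTensor` (assumes TensorFlow 2.x eager execution; `tf.raw_ops` endpoints take keyword arguments only):

```python
import tensorflow as tf

input_indices = tf.constant([[0, 0], [0, 2], [1, 1]], dtype=tf.int64)
input_values = tf.constant([1.0, 3.0, 2.0])
input_shape = tf.constant([2, 3], dtype=tf.int64)

# Reduce over axis 1; the result is a dense tensor with one value per row.
row_max = tf.raw_ops.SparseReduceMax(
    input_indices=input_indices, input_values=input_values,
    input_shape=input_shape,
    reduction_axes=tf.constant([1], dtype=tf.int32), keep_dims=False)
# row_max => [3.0, 2.0]
```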
+`input_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, possibly not in canonical ordering. +
+`input_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `input_indices`. +
+`input_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`reduction_axes` + +A `Tensor` of type `int32`. +1-D. Length-`K` vector containing the reduction axes. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseReduceMaxSparse.md b/site/en/api_docs/python/tf/raw_ops/SparseReduceMaxSparse.md new file mode 100644 index 00000000000..bd9bfda37bc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseReduceMaxSparse.md @@ -0,0 +1,145 @@ +description: Computes the max of elements across dimensions of a SparseTensor. + +
+ + +
+ +# tf.raw_ops.SparseReduceMaxSparse + + + + + + + + + +Computes the max of elements across dimensions of a SparseTensor. + + + + + + + + + +This Op takes a SparseTensor and is the sparse counterpart to +tf.reduce_max(). In contrast to SparseReduceMax, this Op returns a +SparseTensor. + +Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained +with length 1. + +If `reduction_axes` has no entries, all dimensions are reduced, and a tensor +with a single element is returned. Additionally, the axes can be negative, +which are interpreted according to the indexing rules in Python. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, possibly not in canonical ordering. +
+`input_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `input_indices`. +
+`input_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`reduction_axes` + +A `Tensor` of type `int32`. +1-D. Length-`K` vector containing the reduction axes. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_values, output_shape). +
+`output_indices` + +A `Tensor` of type `int64`. +
+`output_values` + +A `Tensor`. Has the same type as `input_values`. +
+`output_shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseReduceSum.md b/site/en/api_docs/python/tf/raw_ops/SparseReduceSum.md new file mode 100644 index 00000000000..3b71f5d7ab0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseReduceSum.md @@ -0,0 +1,124 @@ +description: Computes the sum of elements across dimensions of a SparseTensor. + +
+ + +
+ +# tf.raw_ops.SparseReduceSum + + + + + + + + + +Computes the sum of elements across dimensions of a SparseTensor. + + + + + + + + + +This Op takes a SparseTensor and is the sparse counterpart to +tf.reduce_sum(). In particular, this Op also returns a dense `Tensor` +instead of a sparse one. + +Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained +with length 1. + +If `reduction_axes` has no entries, all dimensions are reduced, and a tensor +with a single element is returned. Additionally, the axes can be negative, +which are interpreted according to the indexing rules in Python. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, possibly not in canonical ordering. +
+`input_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `input_indices`. +
+`input_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`reduction_axes` + +A `Tensor` of type `int32`. +1-D. Length-`K` vector containing the reduction axes. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseReduceSumSparse.md b/site/en/api_docs/python/tf/raw_ops/SparseReduceSumSparse.md new file mode 100644 index 00000000000..aeecbd89593 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseReduceSumSparse.md @@ -0,0 +1,145 @@ +description: Computes the sum of elements across dimensions of a SparseTensor. + +
+ + +
+ +# tf.raw_ops.SparseReduceSumSparse + + + + + + + + + +Computes the sum of elements across dimensions of a SparseTensor. + + + + + + + + + +This Op takes a SparseTensor and is the sparse counterpart to +tf.reduce_sum(). In contrast to SparseReduceSum, this Op returns a +SparseTensor. + +Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained +with length 1. + +If `reduction_axes` has no entries, all dimensions are reduced, and a tensor +with a single element is returned. Additionally, the axes can be negative, +which are interpreted according to the indexing rules in Python. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, possibly not in canonical ordering. +
+`input_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `input_indices`. +
+`input_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`reduction_axes` + +A `Tensor` of type `int32`. +1-D. Length-`K` vector containing the reduction axes. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_values, output_shape). +
+`output_indices` + +A `Tensor` of type `int64`. +
+`output_values` + +A `Tensor`. Has the same type as `input_values`. +
+`output_shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseReorder.md b/site/en/api_docs/python/tf/raw_ops/SparseReorder.md new file mode 100644 index 00000000000..39dfd8ed77b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseReorder.md @@ -0,0 +1,117 @@ +description: Reorders a SparseTensor into the canonical, row-major ordering. + +
+ + +
+ +# tf.raw_ops.SparseReorder + + + + + + + + + +Reorders a SparseTensor into the canonical, row-major ordering. + + + + + + + + + +Note that by convention, all sparse ops preserve the canonical ordering along +increasing dimension number. The only time ordering can be violated is during +manual manipulation of the indices and values vectors to add entries. + +Reordering does not affect the shape of the SparseTensor. + +If the tensor has rank `R` and `N` non-empty values, `input_indices` has +shape `[N, R]`, input_values has length `N`, and input_shape has length `R`. + + + + + + + + + + + + + + + + + + + +
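As an illustrative sketch (not from the original docs; TensorFlow 2.x eager execution assumed), out-of-order indices are rearranged into row-major order and the values are permuted to match:

```python
import tensorflow as tf

# These indices are deliberately NOT in canonical row-major order.
input_indices = tf.constant([[1, 0], [0, 2]], dtype=tf.int64)
input_values = tf.constant([10.0, 20.0])
input_shape = tf.constant([2, 3], dtype=tf.int64)

reordered = tf.raw_ops.SparseReorder(
    input_indices=input_indices, input_values=input_values,
    input_shape=input_shape)
# reordered.output_indices => [[0, 2], [1, 0]]
# reordered.output_values  => [20.0, 10.0]
```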
+`input_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, possibly not in canonical ordering. +
+`input_values` + +A `Tensor`. +1-D. `N` non-empty values corresponding to `input_indices`. +
+`input_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_values). +
+`output_indices` + +A `Tensor` of type `int64`. +
+`output_values` + +A `Tensor`. Has the same type as `input_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseReshape.md b/site/en/api_docs/python/tf/raw_ops/SparseReshape.md new file mode 100644 index 00000000000..ecc255523b4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseReshape.md @@ -0,0 +1,124 @@ +description: Reshapes a SparseTensor to represent values in a new dense shape. + +
+ + +
+ +# tf.raw_ops.SparseReshape + + + + + + + + + +Reshapes a SparseTensor to represent values in a new dense shape. + + + + + + + + + +This operation has the same semantics as reshape on the represented dense +tensor. The `input_indices` are recomputed based on the requested `new_shape`. + +If one component of `new_shape` is the special value -1, the size of that +dimension is computed so that the total dense size remains constant. At +most one component of `new_shape` can be -1. The number of dense elements +implied by `new_shape` must be the same as the number of dense elements +originally implied by `input_shape`. + +Reshaping does not affect the order of values in the SparseTensor. + +If the input tensor has rank `R_in` and `N` non-empty values, and `new_shape` +has length `R_out`, then `input_indices` has shape `[N, R_in]`, +`input_shape` has length `R_in`, `output_indices` has shape `[N, R_out]`, and +`output_shape` has length `R_out`. + + + + + + + + + + + + + + + + + + + +
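A short sketch (TensorFlow 2.x eager execution assumed); only indices and shapes participate, since the values are untouched by a reshape:

```python
import tensorflow as tf

input_indices = tf.constant([[0, 0], [1, 2]], dtype=tf.int64)
input_shape = tf.constant([2, 3], dtype=tf.int64)

reshaped = tf.raw_ops.SparseReshape(
    input_indices=input_indices, input_shape=input_shape,
    new_shape=tf.constant([3, -1], dtype=tf.int64))
# The -1 is inferred as 2, so:
# reshaped.output_shape   => [3, 2]
# reshaped.output_indices => [[0, 0], [2, 1]]
```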
+`input_indices` + +A `Tensor` of type `int64`. +2-D. `N x R_in` matrix with the indices of non-empty values in a +SparseTensor. +
+`input_shape` + +A `Tensor` of type `int64`. +1-D. `R_in` vector with the input SparseTensor's dense shape. +
+`new_shape` + +A `Tensor` of type `int64`. +1-D. `R_out` vector with the requested new dense shape. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_shape). +
+`output_indices` + +A `Tensor` of type `int64`. +
+`output_shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSegmentMean.md b/site/en/api_docs/python/tf/raw_ops/SparseSegmentMean.md new file mode 100644 index 00000000000..6252b50f583 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSegmentMean.md @@ -0,0 +1,97 @@ +description: Computes the mean along sparse segments of a tensor. + +
+ + +
+ +# tf.raw_ops.SparseSegmentMean + + + + + + + + + +Computes the mean along sparse segments of a tensor. + + + + + + + + + +See tf.sparse.segment_sum for usage examples. + +Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first +dimension, selecting a subset of dimension 0, specified by `indices`. + + + + + + + + + + + + + + + + + + + +
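A minimal sketch of the raw op (TensorFlow 2.x eager execution assumed):

```python
import tensorflow as tf

data = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

# Rows 0 and 1 are averaged into segment 0; row 2 forms segment 1.
out = tf.raw_ops.SparseSegmentMean(
    data=data,
    indices=tf.constant([0, 1, 2], dtype=tf.int32),
    segment_ids=tf.constant([0, 0, 1], dtype=tf.int32))
# out => [[2.0, 3.0], [5.0, 6.0]]
```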
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor. Has same rank as `segment_ids`. +
+`segment_ids` + +A `Tensor` of type `int32`. +A 1-D tensor. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSegmentMeanGrad.md b/site/en/api_docs/python/tf/raw_ops/SparseSegmentMeanGrad.md new file mode 100644 index 00000000000..63ad8bab8b8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSegmentMeanGrad.md @@ -0,0 +1,104 @@ +description: Computes gradients for SparseSegmentMean. + +
+ + +
+ +# tf.raw_ops.SparseSegmentMeanGrad + + + + + + + + + +Computes gradients for SparseSegmentMean. + + + + + + + + + +Returns tensor "output" with same shape as grad, except for dimension 0 whose +value is output_dim0. + + + + + + + + + + + + + + + + + + + + + + +
+`grad` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +gradient propagated to the SparseSegmentMean op. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +indices passed to the corresponding SparseSegmentMean op. +
+`segment_ids` + +A `Tensor` of type `int32`. +segment_ids passed to the corresponding SparseSegmentMean op. +
+`output_dim0` + +A `Tensor` of type `int32`. +dimension 0 of "data" passed to SparseSegmentMean op. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `grad`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSegmentMeanWithNumSegments.md b/site/en/api_docs/python/tf/raw_ops/SparseSegmentMeanWithNumSegments.md new file mode 100644 index 00000000000..845fd6cf40a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSegmentMeanWithNumSegments.md @@ -0,0 +1,107 @@ +description: Computes the mean along sparse segments of a tensor. + +
+ + +
+ +# tf.raw_ops.SparseSegmentMeanWithNumSegments + + + + + + + + + +Computes the mean along sparse segments of a tensor. + + + + + + + + + +Like `SparseSegmentMean`, but allows missing ids in `segment_ids`. If an id is +missing, the `output` tensor at that position will be zeroed. + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor. Has same rank as `segment_ids`. +
+`segment_ids` + +A `Tensor` of type `int32`. +A 1-D tensor. Values should be sorted and can be repeated. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Should equal the number of distinct segment IDs. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSegmentSqrtN.md b/site/en/api_docs/python/tf/raw_ops/SparseSegmentSqrtN.md new file mode 100644 index 00000000000..7dab5ba8b3c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSegmentSqrtN.md @@ -0,0 +1,96 @@ +description: Computes the sum along sparse segments of a tensor divided by the sqrt of N. + +
+ + +
+ +# tf.raw_ops.SparseSegmentSqrtN + + + + + + + + + +Computes the sum along sparse segments of a tensor divided by the sqrt of N. + + + + + + + + + +N is the size of the segment being reduced. + +See tf.sparse.segment_sum for usage examples. + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor. Has same rank as `segment_ids`. +
+`segment_ids` + +A `Tensor` of type `int32`. +A 1-D tensor. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSegmentSqrtNGrad.md b/site/en/api_docs/python/tf/raw_ops/SparseSegmentSqrtNGrad.md new file mode 100644 index 00000000000..93e54f31212 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSegmentSqrtNGrad.md @@ -0,0 +1,104 @@ +description: Computes gradients for SparseSegmentSqrtN. + +
+ + +
+ +# tf.raw_ops.SparseSegmentSqrtNGrad + + + + + + + + + +Computes gradients for SparseSegmentSqrtN. + + + + + + + + + +Returns tensor "output" with same shape as grad, except for dimension 0 whose +value is output_dim0. + + + + + + + + + + + + + + + + + + + + + + +
+`grad` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +gradient propagated to the SparseSegmentSqrtN op. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +indices passed to the corresponding SparseSegmentSqrtN op. +
+`segment_ids` + +A `Tensor` of type `int32`. +segment_ids passed to the corresponding SparseSegmentSqrtN op. +
+`output_dim0` + +A `Tensor` of type `int32`. +dimension 0 of "data" passed to SparseSegmentSqrtN op. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `grad`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSegmentSqrtNWithNumSegments.md b/site/en/api_docs/python/tf/raw_ops/SparseSegmentSqrtNWithNumSegments.md new file mode 100644 index 00000000000..951e1abd4af --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSegmentSqrtNWithNumSegments.md @@ -0,0 +1,109 @@ +description: Computes the sum along sparse segments of a tensor divided by the sqrt of N. + +
+ + +
+ +# tf.raw_ops.SparseSegmentSqrtNWithNumSegments + + + + + + + + + +Computes the sum along sparse segments of a tensor divided by the sqrt of N. + + + + + + + + + +N is the size of the segment being reduced. + +Like `SparseSegmentSqrtN`, but allows missing ids in `segment_ids`. If an id is +missing, the `output` tensor at that position will be zeroed. + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor. Has same rank as `segment_ids`. +
+`segment_ids` + +A `Tensor` of type `int32`. +A 1-D tensor. Values should be sorted and can be repeated. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Should equal the number of distinct segment IDs. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSegmentSum.md b/site/en/api_docs/python/tf/raw_ops/SparseSegmentSum.md new file mode 100644 index 00000000000..31d399deec4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSegmentSum.md @@ -0,0 +1,124 @@ +description: Computes the sum along sparse segments of a tensor. + +
+ + +
+ +# tf.raw_ops.SparseSegmentSum + + + + + + + + + +Computes the sum along sparse segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Like `SegmentSum`, but `segment_ids` can have rank less than `data`'s first +dimension, selecting a subset of dimension 0, specified by `indices`. + +#### For example: + + + +```python +c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) + +# Select two rows, one segment. +tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0])) +# => [[0 0 0 0]] + +# Select two rows, two segments. +tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1])) +# => [[ 1 2 3 4] +# [-1 -2 -3 -4]] + +# Select all rows, two segments. +tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1])) +# => [[0 0 0 0] +# [5 6 7 8]] + +# Which is equivalent to: +tf.segment_sum(c, tf.constant([0, 0, 1])) +``` + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor. Has same rank as `segment_ids`. +
+`segment_ids` + +A `Tensor` of type `int32`. +A 1-D tensor. Values should be sorted and can be repeated. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSegmentSumWithNumSegments.md b/site/en/api_docs/python/tf/raw_ops/SparseSegmentSumWithNumSegments.md new file mode 100644 index 00000000000..3669ead2c33 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSegmentSumWithNumSegments.md @@ -0,0 +1,130 @@ +description: Computes the sum along sparse segments of a tensor. + +
+ + +
+ +# tf.raw_ops.SparseSegmentSumWithNumSegments + + + + + + + + + +Computes the sum along sparse segments of a tensor. + + + + + + + + + +Like `SparseSegmentSum`, but allows missing ids in `segment_ids`. If an id is +missing, the `output` tensor at that position will be zeroed. + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/sparse#Segmentation) +for an explanation of segments. + +#### For example: + + + +```python +c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) + +tf.sparse_segment_sum_with_num_segments( + c, tf.constant([0, 1]), tf.constant([0, 0]), num_segments=3) +# => [[0 0 0 0] +# [0 0 0 0] +# [0 0 0 0]] + +tf.sparse_segment_sum_with_num_segments(c, + tf.constant([0, 1]), + tf.constant([0, 2]), + num_segments=4) +# => [[ 1 2 3 4] +# [ 0 0 0 0] +# [-1 -2 -3 -4] +# [ 0 0 0 0]] +``` + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1-D tensor. Has same rank as `segment_ids`. +
+`segment_ids` + +A `Tensor` of type `int32`. +A 1-D tensor. Values should be sorted and can be repeated. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Should equal the number of distinct segment IDs. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSlice.md b/site/en/api_docs/python/tf/raw_ops/SparseSlice.md new file mode 100644 index 00000000000..0a167d6410c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSlice.md @@ -0,0 +1,147 @@ +description: Slice a SparseTensor based on the start and size. + +
+ + +
+ +# tf.raw_ops.SparseSlice + + + + + + + + + +Slice a `SparseTensor` based on the `start` and `size`. + + + + + + + + + +For example, if the input is + + input_tensor = shape = [2, 7] + [ a d e ] + [b c ] + +Graphically the output tensors are: + + sparse_slice([0, 0], [2, 4]) = shape = [2, 4] + [ a ] + [b c ] + + sparse_slice([0, 4], [2, 3]) = shape = [2, 3] + [ d e ] + [ ] + + + + + + + + + + + + + + + + + + + + + + + + + +
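Mirroring the example above with concrete numbers, a hedged sketch (TensorFlow 2.x eager execution assumed):

```python
import tensorflow as tf

# A 2x7 sparse tensor with entries at [0, 1], [0, 3], [1, 0] and [1, 2].
indices = tf.constant([[0, 1], [0, 3], [1, 0], [1, 2]], dtype=tf.int64)
values = tf.constant([10.0, 20.0, 30.0, 40.0])
shape = tf.constant([2, 7], dtype=tf.int64)

# Keep both rows but only the first two columns.
sliced = tf.raw_ops.SparseSlice(
    indices=indices, values=values, shape=shape,
    start=tf.constant([0, 0], dtype=tf.int64),
    size=tf.constant([2, 2], dtype=tf.int64))
# sliced.output_shape => [2, 2]; the surviving entries are [0, 1] -> 10.0 and [1, 0] -> 30.0.
```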
+`indices` + +A `Tensor` of type `int64`. +2-D tensor represents the indices of the sparse tensor. +
+`values` + +A `Tensor`. 1-D tensor represents the values of the sparse tensor. +
+`shape` + +A `Tensor` of type `int64`. +1-D. tensor represents the shape of the sparse tensor. +
+`start` + +A `Tensor` of type `int64`. +1-D. tensor represents the start of the slice. +
+`size` + +A `Tensor` of type `int64`. +1-D. tensor represents the size of the slice. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_values, output_shape). +
+`output_indices` + +A `Tensor` of type `int64`. +
+`output_values` + +A `Tensor`. Has the same type as `values`. +
+`output_shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSliceGrad.md b/site/en/api_docs/python/tf/raw_ops/SparseSliceGrad.md new file mode 100644 index 00000000000..6efca9e5542 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSliceGrad.md @@ -0,0 +1,106 @@ +description: The gradient operator for the SparseSlice op. + +
+ + +
+ +# tf.raw_ops.SparseSliceGrad + + + + + + + + + +The gradient operator for the SparseSlice op. + + + + + + + + + +This op takes in the upstream gradient w.r.t. non-empty values of +the sliced `SparseTensor`, and outputs the gradients w.r.t. +the non-empty values of input `SparseTensor`. + + + + + + + + + + + + + + + + + + + + + + +
+`backprop_val_grad` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. The gradient with respect to +the non-empty values of the sliced `SparseTensor`. +
+`input_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the input `SparseTensor`. +
+`input_start` + +A `Tensor` of type `int64`. +1-D. tensor represents the start of the slice. +
+`output_indices` + +A `Tensor` of type `int64`. +2-D. The `indices` of the sliced `SparseTensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `backprop_val_grad`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSoftmax.md b/site/en/api_docs/python/tf/raw_ops/SparseSoftmax.md new file mode 100644 index 00000000000..5e0a3167fcd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSoftmax.md @@ -0,0 +1,110 @@ +description: Applies softmax to a batched N-D SparseTensor. + +
+ + +
+ +# tf.raw_ops.SparseSoftmax + + + + + + + + + +Applies softmax to a batched N-D `SparseTensor`. + + + + + + + + + +The inputs represent an N-D SparseTensor with logical shape `[..., B, C]` +(where `N >= 2`), and with indices sorted in the canonical lexicographic order. + +This op is equivalent to applying the normal tf.nn.softmax() to each innermost +logical submatrix with shape `[B, C]`, but with the catch that *the implicitly +zero elements do not participate*. Specifically, the algorithm is equivalent +to the following: + + (1) Applies tf.nn.softmax() to a densified view of each innermost submatrix + with shape `[B, C]`, along the size-C dimension; + (2) Masks out the original implicitly-zero locations; + (3) Renormalizes the remaining elements. + +Hence, the `SparseTensor` result has exactly the same non-zero indices and +shape. + + + + + + + + + + + + + + + + + + + +
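A minimal sketch (TensorFlow 2.x eager execution assumed); the indices must already be in canonical row-major order:

```python
import tensorflow as tf

sp_indices = tf.constant([[0, 0], [0, 2], [1, 1]], dtype=tf.int64)
sp_values = tf.constant([2.0, 2.0, 7.0])
sp_shape = tf.constant([2, 3], dtype=tf.int64)

out_values = tf.raw_ops.SparseSoftmax(
    sp_indices=sp_indices, sp_values=sp_values, sp_shape=sp_shape)
# Row 0 has two equal logits -> [0.5, 0.5]; row 1 has a single logit -> [1.0].
# out_values => [0.5, 0.5, 1.0]
```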
+`sp_indices` + +A `Tensor` of type `int64`. +2-D. `NNZ x R` matrix with the indices of non-empty values in a +SparseTensor, in canonical ordering. +
+`sp_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +1-D. `NNZ` non-empty values corresponding to `sp_indices`. +
+`sp_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `sp_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSoftmaxCrossEntropyWithLogits.md b/site/en/api_docs/python/tf/raw_ops/SparseSoftmaxCrossEntropyWithLogits.md new file mode 100644 index 00000000000..3f7437bd4ac --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSoftmaxCrossEntropyWithLogits.md @@ -0,0 +1,107 @@ +description: Computes softmax cross entropy cost and gradients to backpropagate. + +
+ + +
+ +# tf.raw_ops.SparseSoftmaxCrossEntropyWithLogits + + + + + + + + + +Computes softmax cross entropy cost and gradients to backpropagate. + + + + + + + + + +Unlike `SoftmaxCrossEntropyWithLogits`, this operation does not accept +a matrix of label probabilities, but rather a single label per row +of features. This label is considered to have probability 1.0 for the +given row. + +Inputs are the logits, not probabilities. + + + + + + + + + + + + + + + + +
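A hedged sketch of the raw op (TensorFlow 2.x eager execution assumed): two examples over three classes, with one integer label per example.

```python
import tensorflow as tf

features = tf.constant([[2.0, 1.0, 0.1],
                        [0.0, 0.0, 5.0]])          # logits, shape [2, 3]
labels = tf.constant([0, 2], dtype=tf.int32)       # one class id per row

result = tf.raw_ops.SparseSoftmaxCrossEntropyWithLogits(
    features=features, labels=labels)
# result.loss has shape [2]: one cross-entropy value per example.
# result.backprop has shape [2, 3]: softmax(features) minus the one-hot labels.
```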
+`features` + +A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. +batch_size x num_classes matrix +
+`labels` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +batch_size vector with values in [0, num_classes). +This is the label for the given minibatch entry. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (loss, backprop). +
+`loss` + +A `Tensor`. Has the same type as `features`. +
+`backprop` + +A `Tensor`. Has the same type as `features`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSparseMaximum.md b/site/en/api_docs/python/tf/raw_ops/SparseSparseMaximum.md new file mode 100644 index 00000000000..9665e1573a5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSparseMaximum.md @@ -0,0 +1,134 @@ +description: Returns the element-wise max of two SparseTensors. + +
+ + +
+ +# tf.raw_ops.SparseSparseMaximum + + + + + + + + + +Returns the element-wise max of two SparseTensors. + + + + + + + + + +Assumes the two SparseTensors have the same shape, i.e., no broadcasting. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
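A short sketch (TensorFlow 2.x eager execution assumed); positions stored in only one operand are compared against the other operand's implicit zero:

```python
import tensorflow as tf

a_indices = tf.constant([[0, 0], [1, 1]], dtype=tf.int64)
a_values = tf.constant([1.0, -2.0])
b_indices = tf.constant([[0, 0], [0, 1]], dtype=tf.int64)
b_values = tf.constant([3.0, 4.0])
shape = tf.constant([2, 2], dtype=tf.int64)

result = tf.raw_ops.SparseSparseMaximum(
    a_indices=a_indices, a_values=a_values, a_shape=shape,
    b_indices=b_indices, b_values=b_values, b_shape=shape)
# Output indices are the union of both inputs:
# [0, 0] -> max(1, 3) = 3.0, [0, 1] -> max(0, 4) = 4.0, [1, 1] -> max(-2, 0) = 0.0
```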
+`a_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, in the canonical lexicographic ordering. +
+`a_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `a_indices`. +
+`a_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`b_indices` + +A `Tensor` of type `int64`. +counterpart to `a_indices` for the other operand. +
+`b_values` + +A `Tensor`. Must have the same type as `a_values`. +counterpart to `a_values` for the other operand; must be of the same dtype. +
+`b_shape` + +A `Tensor` of type `int64`. +counterpart to `a_shape` for the other operand; the two shapes must be equal. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_values). +
+`output_indices` + +A `Tensor` of type `int64`. +
+`output_values` + +A `Tensor`. Has the same type as `a_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSparseMinimum.md b/site/en/api_docs/python/tf/raw_ops/SparseSparseMinimum.md new file mode 100644 index 00000000000..fa10459d4d7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSparseMinimum.md @@ -0,0 +1,134 @@ +description: Returns the element-wise min of two SparseTensors. + +
+ + +
+ +# tf.raw_ops.SparseSparseMinimum + + + + + + + + + +Returns the element-wise min of two SparseTensors. + + + + + + + + + +Assumes the two SparseTensors have the same shape, i.e., no broadcasting. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`a_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, in the canonical lexicographic ordering. +
+`a_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `a_indices`. +
+`a_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`b_indices` + +A `Tensor` of type `int64`. +counterpart to `a_indices` for the other operand. +
+`b_values` + +A `Tensor`. Must have the same type as `a_values`. +counterpart to `a_values` for the other operand; must be of the same dtype. +
+`b_shape` + +A `Tensor` of type `int64`. +counterpart to `a_shape` for the other operand; the two shapes must be equal. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_values). +
+`output_indices` + +A `Tensor` of type `int64`. +
+`output_values` + +A `Tensor`. Has the same type as `a_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseSplit.md b/site/en/api_docs/python/tf/raw_ops/SparseSplit.md new file mode 100644 index 00000000000..60cf91cfced --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseSplit.md @@ -0,0 +1,149 @@ +description: Split a SparseTensor into num_split tensors along one dimension. + +
+ + +
+ +# tf.raw_ops.SparseSplit + + + + + + + + + +Split a `SparseTensor` into `num_split` tensors along one dimension. + + + + + + + + + +If `shape[split_dim]` is not an integer multiple of `num_split`, slices +`[0 : shape[split_dim] % num_split]` get one extra dimension. +For example, if `split_dim = 1` and `num_split = 2` and the input is + + input_tensor = shape = [2, 7] + [ a d e ] + [b c ] + +Graphically the output tensors are: + + output_tensor[0] = shape = [2, 4] + [ a ] + [b c ] + + output_tensor[1] = shape = [2, 3] + [ d e ] + [ ] + + + + + + + + + + + + + + + + + + + + + + + +
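A numeric counterpart to the example above, as a hedged sketch (TensorFlow 2.x eager execution assumed):

```python
import tensorflow as tf

# A 2x7 sparse tensor split into two pieces along dimension 1.
indices = tf.constant([[0, 1], [0, 4], [1, 0]], dtype=tf.int64)
values = tf.constant([10.0, 20.0, 30.0])
shape = tf.constant([2, 7], dtype=tf.int64)

result = tf.raw_ops.SparseSplit(
    split_dim=tf.constant(1, dtype=tf.int64),
    indices=indices, values=values, shape=shape, num_split=2)
# Since 7 % 2 == 1, the first piece gets the extra column: shapes [2, 4] and [2, 3].
# Entries [0, 1] and [1, 0] stay in the first piece; [0, 4] becomes [0, 0] of the second.
```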
+`split_dim` + +A `Tensor` of type `int64`. +0-D. The dimension along which to split. Must be in the range +`[0, rank(shape))`. +
+`indices` + +A `Tensor` of type `int64`. +2-D tensor represents the indices of the sparse tensor. +
+`values` + +A `Tensor`. 1-D tensor represents the values of the sparse tensor. +
+`shape` + +A `Tensor` of type `int64`. +1-D. tensor represents the shape of the sparse tensor. +
+`num_split` + +An `int` that is `>= 1`. The number of ways to split. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_indices, output_values, output_shape). +
+`output_indices` + +A list of `num_split` `Tensor` objects with type `int64`. +
+`output_values` + +A list of `num_split` `Tensor` objects with the same type as `values`. +
+`output_shape` + +A list of `num_split` `Tensor` objects with type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseTensorDenseAdd.md b/site/en/api_docs/python/tf/raw_ops/SparseTensorDenseAdd.md new file mode 100644 index 00000000000..2bbee79f699 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseTensorDenseAdd.md @@ -0,0 +1,103 @@ +description: Adds up a SparseTensor and a dense Tensor, producing a dense Tensor. + +
+ + +
+ +# tf.raw_ops.SparseTensorDenseAdd + + + + + + + + + +Adds up a `SparseTensor` and a dense `Tensor`, producing a dense `Tensor`. + + + + + + + + + +This Op does not require `a_indices` be sorted in standard lexicographic order. + + + + + + + + + + + + + + + + + + + + + + +
+`a_indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D. The `indices` of the `SparseTensor`, with shape `[nnz, ndims]`. +
+`a_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. The `values` of the `SparseTensor`, with shape `[nnz]`. +
+`a_shape` + +A `Tensor`. Must have the same type as `a_indices`. +1-D. The `shape` of the `SparseTensor`, with shape `[ndims]`. +
+`b` + +A `Tensor`. Must have the same type as `a_values`. +`ndims`-D Tensor. With shape `a_shape`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseTensorDenseMatMul.md b/site/en/api_docs/python/tf/raw_ops/SparseTensorDenseMatMul.md new file mode 100644 index 00000000000..1df58ce8206 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseTensorDenseMatMul.md @@ -0,0 +1,129 @@ +description: Multiply SparseTensor (of rank 2) "A" by dense matrix "B". + +
+ + +
+ +# tf.raw_ops.SparseTensorDenseMatMul + + + + + + + + + +Multiply SparseTensor (of rank 2) "A" by dense matrix "B". + + + + + + + + + +No validity checking is performed on the indices of A. However, the following +input format is recommended for optimal behavior: + +if adjoint_a == false: + A should be sorted in lexicographically increasing order. Use SparseReorder + if you're not sure. +if adjoint_a == true: + A should be sorted in order of increasing dimension 1 (i.e., "column major" + order instead of "row major" order). + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
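A minimal sketch (TensorFlow 2.x eager execution assumed): a 2x3 sparse matrix `A` times a 3x2 dense matrix `B`.

```python
import tensorflow as tf

a_indices = tf.constant([[0, 0], [1, 2]], dtype=tf.int64)   # A is [[2, 0, 0], [0, 0, 3]]
a_values = tf.constant([2.0, 3.0])
a_shape = tf.constant([2, 3], dtype=tf.int64)
b = tf.constant([[1.0, 2.0],
                 [3.0, 4.0],
                 [5.0, 6.0]])

product = tf.raw_ops.SparseTensorDenseMatMul(
    a_indices=a_indices, a_values=a_values, a_shape=a_shape, b=b)
# product => [[2.0, 4.0], [15.0, 18.0]]
```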
+`a_indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D. The `indices` of the `SparseTensor`, size `[nnz, 2]` Matrix. +
+`a_values` + +A `Tensor`. +1-D. The `values` of the `SparseTensor`, size `[nnz]` Vector. +
+`a_shape` + +A `Tensor` of type `int64`. +1-D. The `shape` of the `SparseTensor`, size `[2]` Vector. +
+`b` + +A `Tensor`. Must have the same type as `a_values`. +2-D. A dense Matrix. +
+`adjoint_a` + +An optional `bool`. Defaults to `False`. +Use the adjoint of A in the matrix multiply. If A is complex, this +is transpose(conj(A)). Otherwise it's transpose(A). +
+`adjoint_b` + +An optional `bool`. Defaults to `False`. +Use the adjoint of B in the matrix multiply. If B is complex, this +is transpose(conj(B)). Otherwise it's transpose(B). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `a_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseTensorSliceDataset.md b/site/en/api_docs/python/tf/raw_ops/SparseTensorSliceDataset.md new file mode 100644 index 00000000000..d9bc0cbcefa --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseTensorSliceDataset.md @@ -0,0 +1,91 @@ +description: Creates a dataset that splits a SparseTensor into elements row-wise. + +
+ + +
+ +# tf.raw_ops.SparseTensorSliceDataset + + + + + + + + + +Creates a dataset that splits a SparseTensor into elements row-wise. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`indices` + +A `Tensor` of type `int64`. +
+`values` + +A `Tensor`. +
+`dense_shape` + +A `Tensor` of type `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseTensorToCSRSparseMatrix.md b/site/en/api_docs/python/tf/raw_ops/SparseTensorToCSRSparseMatrix.md new file mode 100644 index 00000000000..784de9c2eaa --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseTensorToCSRSparseMatrix.md @@ -0,0 +1,92 @@ +description: Converts a SparseTensor to a (possibly batched) CSRSparseMatrix. + +
+ + +
+ +# tf.raw_ops.SparseTensorToCSRSparseMatrix + + + + + + + + + +Converts a SparseTensor to a (possibly batched) CSRSparseMatrix. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
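As a hedged round-trip sketch (TensorFlow 2.x eager execution assumed; the reverse op `tf.raw_ops.CSRSparseMatrixToSparseTensor` is assumed to be available):

```python
import tensorflow as tf

st = tf.sparse.SparseTensor(
    indices=[[0, 0], [2, 1]], values=[1.0, 2.0], dense_shape=[3, 2])

# COO components -> CSR variant handle.
csr = tf.raw_ops.SparseTensorToCSRSparseMatrix(
    indices=st.indices, values=st.values, dense_shape=st.dense_shape)

# CSR variant handle -> COO components again.
back = tf.raw_ops.CSRSparseMatrixToSparseTensor(sparse_matrix=csr, type=tf.float32)
# back.indices, back.values and back.dense_shape match the original components.
```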
+`indices` + +A `Tensor` of type `int64`. SparseTensor indices. +
+`values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `complex64`, `complex128`. +SparseTensor values. +
+`dense_shape` + +A `Tensor` of type `int64`. SparseTensor dense shape. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseToDense.md b/site/en/api_docs/python/tf/raw_ops/SparseToDense.md new file mode 100644 index 00000000000..ba7f1cd6312 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseToDense.md @@ -0,0 +1,134 @@ +description: Converts a sparse representation into a dense tensor. + +
+ + +
+ +# tf.raw_ops.SparseToDense + + + + + + + + + +Converts a sparse representation into a dense tensor. + + + + + + + + + +Builds an array `dense` with shape `output_shape` such that + +``` +# If sparse_indices is scalar +dense[i] = (i == sparse_indices ? sparse_values : default_value) + +# If sparse_indices is a vector, then for each i +dense[sparse_indices[i]] = sparse_values[i] + +# If sparse_indices is an n by d matrix, then for each i in [0, n) +dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i] +``` + +All other values in `dense` are set to `default_value`. If `sparse_values` is a +scalar, all sparse indices are set to this single value. + +Indices should be sorted in lexicographic order, and indices must not +contain any repeats. If `validate_indices` is true, these properties +are checked during execution. + + + + + + + + + + + + + + + + + + + + + + + + + +
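A minimal sketch of scattering two values into a dense 3x4 tensor (TensorFlow 2.x eager execution assumed; `output_shape` must share the index dtype):

```python
import tensorflow as tf

dense = tf.raw_ops.SparseToDense(
    sparse_indices=tf.constant([[0, 1], [2, 3]], dtype=tf.int64),
    output_shape=tf.constant([3, 4], dtype=tf.int64),
    sparse_values=tf.constant([5.0, 6.0]),
    default_value=tf.constant(0.0),
    validate_indices=True)
# dense => [[0. 5. 0. 0.]
#           [0. 0. 0. 0.]
#           [0. 0. 0. 6.]]
```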
+`sparse_indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +0-D, 1-D, or 2-D. `sparse_indices[i]` contains the complete +index where `sparse_values[i]` will be placed. +
+`output_shape` + +A `Tensor`. Must have the same type as `sparse_indices`. +1-D. Shape of the dense output tensor. +
+`sparse_values` + +A `Tensor`. +1-D. Values corresponding to each row of `sparse_indices`, +or a scalar value to be used for all sparse indices. +
+`default_value` + +A `Tensor`. Must have the same type as `sparse_values`. +Scalar value to set for indices not specified in +`sparse_indices`. +
+`validate_indices` + +An optional `bool`. Defaults to `True`. +If true, indices are checked to make sure they are sorted in +lexicographic order and that there are no repeats. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `sparse_values`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SparseToSparseSetOperation.md b/site/en/api_docs/python/tf/raw_ops/SparseToSparseSetOperation.md new file mode 100644 index 00000000000..45827de0560 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SparseToSparseSetOperation.md @@ -0,0 +1,185 @@ +description: Applies set operation along last dimension of 2 SparseTensor inputs. + +
+ + +
+ +# tf.raw_ops.SparseToSparseSetOperation + + + + + + + + + +Applies set operation along last dimension of 2 `SparseTensor` inputs. + + + + + + + + + +See SetOperationOp::SetOperationFromContext for values of `set_operation`. + +If `validate_indices` is `True`, `SparseToSparseSetOperation` validates the +order and range of `set1` and `set2` indices. + +Input `set1` is a `SparseTensor` represented by `set1_indices`, `set1_values`, +and `set1_shape`. For `set1` ranked `n`, 1st `n-1` dimensions must be the same +as `set2`. Dimension `n` contains values in a set, duplicates are allowed but +ignored. + +Input `set2` is a `SparseTensor` represented by `set2_indices`, `set2_values`, +and `set2_shape`. For `set2` ranked `n`, 1st `n-1` dimensions must be the same +as `set1`. Dimension `n` contains values in a set, duplicates are allowed but +ignored. + +If `validate_indices` is `True`, this op validates the order and range of `set1` +and `set2` indices. + +Output `result` is a `SparseTensor` represented by `result_indices`, +`result_values`, and `result_shape`. For `set1` and `set2` ranked `n`, this +has rank `n` and the same 1st `n-1` dimensions as `set1` and `set2`. The `nth` +dimension contains the result of `set_operation` applied to the corresponding +`[0...n-1]` dimension of `set`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`set1_indices` + +A `Tensor` of type `int64`. +2D `Tensor`, indices of a `SparseTensor`. Must be in row-major +order. +
+`set1_values` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `string`. +1D `Tensor`, values of a `SparseTensor`. Must be in row-major +order. +
+`set1_shape` + +A `Tensor` of type `int64`. +1D `Tensor`, shape of a `SparseTensor`. `set1_shape[0...n-1]` must +be the same as `set2_shape[0...n-1]`, `set1_shape[n]` is the +max set size across `0...n-1` dimensions. +
+`set2_indices` + +A `Tensor` of type `int64`. +2D `Tensor`, indices of a `SparseTensor`. Must be in row-major +order. +
+`set2_values` + +A `Tensor`. Must have the same type as `set1_values`. +1D `Tensor`, values of a `SparseTensor`. Must be in row-major +order. +
+`set2_shape` + +A `Tensor` of type `int64`. +1D `Tensor`, shape of a `SparseTensor`. `set2_shape[0...n-1]` must +be the same as `set1_shape[0...n-1]`, `set2_shape[n]` is the +max set size across `0...n-1` dimensions. +
+`set_operation` + +A `string`. +
+`validate_indices` + +An optional `bool`. Defaults to `True`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (result_indices, result_values, result_shape). +
+`result_indices` + +A `Tensor` of type `int64`. +
+`result_values` + +A `Tensor`. Has the same type as `set1_values`. +
+`result_shape` + +A `Tensor` of type `int64`. +
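A hedged sketch of driving the op from Python; the inputs are illustrative, and `tf.sparse.from_dense` is used only as a convenient way to build row-major `SparseTensor` components (zeros are dropped, so they are simply absent from each set):

```python
import tensorflow as tf

# Each row is one set; the last dimension holds that set's members.
s1 = tf.sparse.from_dense(tf.constant([[1, 2, 3], [4, 5, 0]], dtype=tf.int64))
s2 = tf.sparse.from_dense(tf.constant([[2, 3, 0], [5, 6, 0]], dtype=tf.int64))

result_indices, result_values, result_shape = tf.raw_ops.SparseToSparseSetOperation(
    set1_indices=s1.indices, set1_values=s1.values, set1_shape=s1.dense_shape,
    set2_indices=s2.indices, set2_values=s2.values, set2_shape=s2.dense_shape,
    set_operation="intersection")

out = tf.SparseTensor(result_indices, result_values, result_shape)
print(tf.sparse.to_dense(out))  # row 0 -> {2, 3}, row 1 -> {5}
```

In practice the higher-level `tf.sets` API wraps this kind of kernel and is usually more convenient.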
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Spence.md b/site/en/api_docs/python/tf/raw_ops/Spence.md new file mode 100644 index 00000000000..06886af3a46 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Spence.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.Spence + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Split.md b/site/en/api_docs/python/tf/raw_ops/Split.md new file mode 100644 index 00000000000..8f4abd70e87 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Split.md @@ -0,0 +1,95 @@ +description: Splits a tensor into num_split tensors along one dimension. + +
+ + +
+ +# tf.raw_ops.Split + + + + + + + + + +Splits a tensor into `num_split` tensors along one dimension. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`axis` + +A `Tensor` of type `int32`. +0-D. The dimension along which to split. Must be in the range +`[-rank(value), rank(value))`. +
+`value` + +A `Tensor`. The tensor to split. +
+`num_split` + +An `int` that is `>= 1`. +The number of ways to split. Must evenly divide +`value.shape[split_dim]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `num_split` `Tensor` objects with the same type as `value`. +
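A small sketch under eager execution, splitting a `(2, 6)` tensor into three equal pieces along axis 1; the values are illustrative:

```python
import tensorflow as tf

value = tf.reshape(tf.range(12), [2, 6])
parts = tf.raw_ops.Split(axis=1, value=value, num_split=3)  # 6 / 3 = 2 columns per piece
print([p.shape.as_list() for p in parts])  # [[2, 2], [2, 2], [2, 2]]
```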
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SplitV.md b/site/en/api_docs/python/tf/raw_ops/SplitV.md new file mode 100644 index 00000000000..a27f1d61246 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SplitV.md @@ -0,0 +1,103 @@ +description: Splits a tensor into num_split tensors along one dimension. + +
+ + +
+ +# tf.raw_ops.SplitV + + + + + + + + + +Splits a tensor into `num_split` tensors along one dimension. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. The tensor to split. +
+`size_splits` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +list containing the sizes of each output tensor along the split +dimension. Must sum to the dimension of value along split_dim. +Can contain one -1 indicating that dimension is to be inferred. +
+`axis` + +A `Tensor` of type `int32`. +0-D. The dimension along which to split. Must be in the range +`[-rank(value), rank(value))`. +
+`num_split` + +An `int` that is `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `num_split` `Tensor` objects with the same type as `value`. +
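A small sketch under eager execution showing uneven splits, including the single `-1` entry that lets the op infer one size; the values are illustrative:

```python
import tensorflow as tf

value = tf.reshape(tf.range(12), [2, 6])
parts = tf.raw_ops.SplitV(
    value=value,
    size_splits=tf.constant([1, 2, -1]),  # -1: infer the remaining size (here 3)
    axis=1,
    num_split=3)
print([p.shape.as_list() for p in parts])  # [[2, 1], [2, 2], [2, 3]]
```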
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SqlDataset.md b/site/en/api_docs/python/tf/raw_ops/SqlDataset.md new file mode 100644 index 00000000000..65378639ae6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SqlDataset.md @@ -0,0 +1,107 @@ +description: Creates a dataset that executes a SQL query and emits rows of the result set. + +
+ + +
+ +# tf.raw_ops.SqlDataset + + + + + + + + + +Creates a dataset that executes a SQL query and emits rows of the result set. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`driver_name` + +A `Tensor` of type `string`. +The database type. Currently, the only supported type is 'sqlite'. +
+`data_source_name` + +A `Tensor` of type `string`. +A connection string to connect to the database. +
+`query` + +A `Tensor` of type `string`. A SQL query to execute. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Sqrt.md b/site/en/api_docs/python/tf/raw_ops/Sqrt.md new file mode 100644 index 00000000000..ceaad81c8b4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Sqrt.md @@ -0,0 +1,78 @@ +description: Computes square root of x element-wise. + +
+ + +
+ +# tf.raw_ops.Sqrt + + + + + + + + + +Computes square root of x element-wise. + + + + + + + + + +I.e., \\(y = \sqrt{x} = x^{1/2}\\). + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
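A minimal illustrative call (eager execution assumed):

```python
import tensorflow as tf

x = tf.constant([4.0, 9.0, 2.0])
print(tf.raw_ops.Sqrt(x=x))  # [2.0, 3.0, 1.4142135...]
```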
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SqrtGrad.md b/site/en/api_docs/python/tf/raw_ops/SqrtGrad.md new file mode 100644 index 00000000000..a0cb66ad25b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SqrtGrad.md @@ -0,0 +1,86 @@ +description: Computes the gradient for the sqrt of x wrt its input. + +
+ + +
+ +# tf.raw_ops.SqrtGrad + + + + + + + + + +Computes the gradient for the sqrt of `x` wrt its input. + + + + + + + + + +Specifically, `grad = dy * 0.5 / y`, where `y = sqrt(x)`, and `dy` +is the corresponding input gradient. + + + + + + + + + + + + + + + + +
+`y` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`dy` + +A `Tensor`. Must have the same type as `y`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `y`. +
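A hedged sketch of the identity stated above, `grad = dy * 0.5 / y` with `y = sqrt(x)`; it cross-checks the raw op against `tf.GradientTape`, and the inputs are illustrative:

```python
import tensorflow as tf

x = tf.constant([4.0, 16.0])
dy = tf.constant([1.0, 1.0])               # upstream gradient

y = tf.sqrt(x)
manual = tf.raw_ops.SqrtGrad(y=y, dy=dy)   # dy * 0.5 / y

with tf.GradientTape() as tape:
    tape.watch(x)
    out = tf.sqrt(x)
auto = tape.gradient(out, x)

print(manual.numpy(), auto.numpy())        # both approximately [0.25, 0.125]
```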
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Square.md b/site/en/api_docs/python/tf/raw_ops/Square.md new file mode 100644 index 00000000000..a17fdec71e9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Square.md @@ -0,0 +1,83 @@ +description: Computes square of x element-wise. + +
+ + +
+ +# tf.raw_ops.Square + + + + + + + + + +Computes square of x element-wise. + + + + + + + + + +I.e., \\(y = x * x = x^2\\). + +``` +>>> tf.math.square([-2., 0., 3.]) + +``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SquaredDifference.md b/site/en/api_docs/python/tf/raw_ops/SquaredDifference.md new file mode 100644 index 00000000000..b193086137c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SquaredDifference.md @@ -0,0 +1,86 @@ +description: Returns (x - y)(x - y) element-wise. + +
+ + +
+ +# tf.raw_ops.SquaredDifference + + + + + + + + + +Returns (x - y)(x - y) element-wise. + + + + + + + + + +*NOTE*: math.squared_difference supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
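A minimal illustrative call showing the broadcasting mentioned above (eager execution assumed):

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.constant([1.0, 1.0])                     # broadcasts against each row of x
print(tf.raw_ops.SquaredDifference(x=x, y=y))   # [[0. 1.] [4. 9.]]
```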
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Squeeze.md b/site/en/api_docs/python/tf/raw_ops/Squeeze.md new file mode 100644 index 00000000000..63d02a19816 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Squeeze.md @@ -0,0 +1,107 @@ +description: Removes dimensions of size 1 from the shape of a tensor. + +
+ + +
+ +# tf.raw_ops.Squeeze + + + + + + + + + +Removes dimensions of size 1 from the shape of a tensor. + + + + + + + + + +Given a tensor `input`, this operation returns a tensor of the same type with +all dimensions of size 1 removed. If you don't want to remove all size 1 +dimensions, you can remove specific size 1 dimensions by specifying +`axis`. + +#### For example: + + + +``` +# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] +shape(squeeze(t)) ==> [2, 3] +``` + +Or, to remove specific size 1 dimensions: + +``` +# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] +shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1] +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. The `input` to squeeze. +
+`axis` + +An optional list of `ints`. Defaults to `[]`. +If specified, only squeezes the dimensions listed. The dimension +index starts at 0. It is an error to squeeze a dimension that is not 1. Must +be in the range `[-rank(input), rank(input))`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
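A minimal sketch mirroring the shape examples above (eager execution assumed):

```python
import tensorflow as tf

t = tf.zeros([1, 2, 1, 3, 1, 1])
print(tf.raw_ops.Squeeze(input=t).shape)               # (2, 3)
print(tf.raw_ops.Squeeze(input=t, axis=[2, 4]).shape)  # (1, 2, 3, 1)
```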
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Stack.md b/site/en/api_docs/python/tf/raw_ops/Stack.md new file mode 100644 index 00000000000..07e8a658f34 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Stack.md @@ -0,0 +1,84 @@ +description: Deprecated, use StackV2. + +
+ + +
+ +# tf.raw_ops.Stack + + + + + + + + + +Deprecated, use StackV2. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`elem_type` + +A tf.DType. +
+`stack_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StackClose.md b/site/en/api_docs/python/tf/raw_ops/StackClose.md new file mode 100644 index 00000000000..31ff2b7415a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StackClose.md @@ -0,0 +1,77 @@ +description: Deprecated, use StackCloseV2. + +
+ + +
+ +# tf.raw_ops.StackClose + + + + + + + + + +Deprecated, use StackCloseV2. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StackCloseV2.md b/site/en/api_docs/python/tf/raw_ops/StackCloseV2.md new file mode 100644 index 00000000000..4c5a4ce1a9e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StackCloseV2.md @@ -0,0 +1,77 @@ +description: Delete the stack from its resource container. + +
+ + +
+ +# tf.raw_ops.StackCloseV2 + + + + + + + + + +Delete the stack from its resource container. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a stack. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StackPop.md b/site/en/api_docs/python/tf/raw_ops/StackPop.md new file mode 100644 index 00000000000..c36c2147031 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StackPop.md @@ -0,0 +1,84 @@ +description: Deprecated, use StackPopV2. + +
+ + +
+ +# tf.raw_ops.StackPop + + + + + + + + + +Deprecated, use StackPopV2. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`elem_type` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `elem_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StackPopV2.md b/site/en/api_docs/python/tf/raw_ops/StackPopV2.md new file mode 100644 index 00000000000..234062dd92f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StackPopV2.md @@ -0,0 +1,84 @@ +description: Pop the element at the top of the stack. + +
+ + +
+ +# tf.raw_ops.StackPopV2 + + + + + + + + + +Pop the element at the top of the stack. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a stack. +
+`elem_type` + +A tf.DType. The type of the elem that is popped. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `elem_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StackPush.md b/site/en/api_docs/python/tf/raw_ops/StackPush.md new file mode 100644 index 00000000000..99fef57cc37 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StackPush.md @@ -0,0 +1,91 @@ +description: Deprecated, use StackPushV2. + +
+ + +
+ +# tf.raw_ops.StackPush + + + + + + + + + +Deprecated, use StackPushV2. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`elem` + +A `Tensor`. +
+`swap_memory` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `elem`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StackPushV2.md b/site/en/api_docs/python/tf/raw_ops/StackPushV2.md new file mode 100644 index 00000000000..fc65336f146 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StackPushV2.md @@ -0,0 +1,92 @@ +description: Push an element onto the stack. + +
+ + +
+ +# tf.raw_ops.StackPushV2 + + + + + + + + + +Push an element onto the stack. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a stack. +
+`elem` + +A `Tensor`. The tensor to be pushed onto the stack. +
+`swap_memory` + +An optional `bool`. Defaults to `False`. +Swap `elem` to CPU. Default to false. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `elem`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StackV2.md b/site/en/api_docs/python/tf/raw_ops/StackV2.md new file mode 100644 index 00000000000..a9b0eee3725 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StackV2.md @@ -0,0 +1,95 @@ +description: A stack that produces elements in first-in last-out order. + +
+ + +
+ +# tf.raw_ops.StackV2 + + + + + + + + + +A stack that produces elements in first-in last-out order. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`max_size` + +A `Tensor` of type `int32`. +The maximum size of the stack if non-negative. If negative, the stack +size is unlimited. +
+`elem_type` + +A tf.DType. The type of the elements on the stack. +
+`stack_name` + +An optional `string`. Defaults to `""`. +Overrides the name used for the temporary stack resource. Default +value is the name of the 'Stack' op (which is guaranteed unique). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
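A hedged end-to-end sketch tying the stack ops together: `StackV2` creates the resource, `StackPushV2`/`StackPopV2` use it in first-in last-out order, and `StackCloseV2` releases it. It assumes eager execution and that `elem_type` matches the pushed tensors:

```python
import tensorflow as tf

handle = tf.raw_ops.StackV2(max_size=10, elem_type=tf.float32, stack_name="demo")

tf.raw_ops.StackPushV2(handle=handle, elem=tf.constant(1.0))
tf.raw_ops.StackPushV2(handle=handle, elem=tf.constant(2.0))

print(tf.raw_ops.StackPopV2(handle=handle, elem_type=tf.float32))  # 2.0 (last in, first out)
print(tf.raw_ops.StackPopV2(handle=handle, elem_type=tf.float32))  # 1.0

tf.raw_ops.StackCloseV2(handle=handle)
```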
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Stage.md b/site/en/api_docs/python/tf/raw_ops/Stage.md new file mode 100644 index 00000000000..07f6c3642b4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Stage.md @@ -0,0 +1,115 @@ +description: Stage values similar to a lightweight Enqueue. + +
+ + +
+ +# tf.raw_ops.Stage + + + + + + + + + +Stage values similar to a lightweight Enqueue. + + + + + + + + + +The basic functionality of this Op is similar to a queue with many +fewer capabilities and options. This Op is optimized for performance. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`values` + +A list of `Tensor` objects; the tensors to stage. Their element types +determine the `dtypes` attribute, the list of data types that inserted +values should adhere to. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +Maximum number of elements in the Staging Area. If > 0, inserts +on the container will block when the capacity is reached. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +The maximum number of bytes allowed for Tensors in the Staging Area. +If > 0, inserts will block until sufficient space is available. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this queue is placed in the given container. Otherwise, +a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +It is necessary to match this name to the matching Unstage Op. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StageClear.md b/site/en/api_docs/python/tf/raw_ops/StageClear.md new file mode 100644 index 00000000000..dcd2310a8b3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StageClear.md @@ -0,0 +1,105 @@ +description: Op removes all elements in the underlying container. + +
+ + +
+ +# tf.raw_ops.StageClear + + + + + + + + + +Op removes all elements in the underlying container. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StagePeek.md b/site/en/api_docs/python/tf/raw_ops/StagePeek.md new file mode 100644 index 00000000000..540ccd39a27 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StagePeek.md @@ -0,0 +1,116 @@ +description: Op peeks at the values at the specified index. If the + +
+ + +
+ +# tf.raw_ops.StagePeek + + + + + + + + + +Op peeks at the values at the specified index. + + + + + + + + + +If the underlying container does not contain sufficient elements, +this op will block until it does. This op is optimized for +performance. + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`index` + +A `Tensor` of type `int32`. +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `dtypes`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StageSize.md b/site/en/api_docs/python/tf/raw_ops/StageSize.md new file mode 100644 index 00000000000..1b9abbc7c09 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StageSize.md @@ -0,0 +1,105 @@ +description: Op returns the number of elements in the underlying container. + +
+ + +
+ +# tf.raw_ops.StageSize + + + + + + + + + +Op returns the number of elements in the underlying container. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatefulPartitionedCall.md b/site/en/api_docs/python/tf/raw_ops/StatefulPartitionedCall.md new file mode 100644 index 00000000000..82451a6233c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatefulPartitionedCall.md @@ -0,0 +1,117 @@ +description: returns f(inputs), where f's body is placed and partitioned. + +
+ + +
+ +# tf.raw_ops.StatefulPartitionedCall + + + + + + + + + +returns `f(inputs)`, where `f`'s body is placed and partitioned. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`args` + +A list of `Tensor` objects. A list of input tensors. +
+`Tout` + +A list of `tf.DTypes`. A list of output types. +
+`f` + +A function decorated with @Defun. +A function that takes 'args', a list of tensors, and returns 'output', +another list of tensors. Input and output types are specified by 'Tin' +and 'Tout'. The function body of f will be placed and partitioned across +devices, setting this op apart from the regular Call op. This op is +stateful. +
+`config` + +An optional `string`. Defaults to `""`. +
+`config_proto` + +An optional `string`. Defaults to `""`. +
+`executor_type` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatefulRandomBinomial.md b/site/en/api_docs/python/tf/raw_ops/StatefulRandomBinomial.md new file mode 100644 index 00000000000..84db59cf53d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatefulRandomBinomial.md @@ -0,0 +1,110 @@ +
+ + +
+ +# tf.raw_ops.StatefulRandomBinomial + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +
+`algorithm` + +A `Tensor` of type `int64`. +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`counts` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`. +
+`probs` + +A `Tensor`. Must have the same type as `counts`. +
+`dtype` + +An optional tf.DType from: `tf.half, tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatefulStandardNormal.md b/site/en/api_docs/python/tf/raw_ops/StatefulStandardNormal.md new file mode 100644 index 00000000000..cf23016fc61 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatefulStandardNormal.md @@ -0,0 +1,94 @@ +description: Outputs random values from a normal distribution. This op is deprecated in favor of op 'StatefulStandardNormalV2' + +
+ + +
+ +# tf.raw_ops.StatefulStandardNormal + + + + + + + + + +Outputs random values from a normal distribution. This op is deprecated in favor of op 'StatefulStandardNormalV2' + + + + + + + + + +The generated values will have mean 0 and standard deviation 1. + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +The handle of the resource variable that stores the state of the RNG. +
+`shape` + +A `Tensor`. The shape of the output tensor. +
+`dtype` + +An optional tf.DType. Defaults to tf.float32. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatefulStandardNormalV2.md b/site/en/api_docs/python/tf/raw_ops/StatefulStandardNormalV2.md new file mode 100644 index 00000000000..7284393c6e1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatefulStandardNormalV2.md @@ -0,0 +1,101 @@ +description: Outputs random values from a normal distribution. + +
+ + +
+ +# tf.raw_ops.StatefulStandardNormalV2 + + + + + + + + + +Outputs random values from a normal distribution. + + + + + + + + + +The generated values will have mean 0 and standard deviation 1. + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +The handle of the resource variable that stores the state of the RNG. +
+`algorithm` + +A `Tensor` of type `int64`. The RNG algorithm. +
+`shape` + +A `Tensor`. The shape of the output tensor. +
+`dtype` + +An optional tf.DType. Defaults to tf.float32. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatefulTruncatedNormal.md b/site/en/api_docs/python/tf/raw_ops/StatefulTruncatedNormal.md new file mode 100644 index 00000000000..1511d046203 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatefulTruncatedNormal.md @@ -0,0 +1,103 @@ +description: Outputs random values from a truncated normal distribution. + +
+ + +
+ +# tf.raw_ops.StatefulTruncatedNormal + + + + + + + + + +Outputs random values from a truncated normal distribution. + + + + + + + + + +The generated values follow a normal distribution with mean 0 and standard +deviation 1, except that values whose magnitude is more than 2 standard +deviations from the mean are dropped and re-picked. + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +The handle of the resource variable that stores the state of the RNG. +
+`algorithm` + +A `Tensor` of type `int64`. The RNG algorithm. +
+`shape` + +A `Tensor`. The shape of the output tensor. +
+`dtype` + +An optional tf.DType. Defaults to tf.float32. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatefulUniform.md b/site/en/api_docs/python/tf/raw_ops/StatefulUniform.md new file mode 100644 index 00000000000..51da61d5353 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatefulUniform.md @@ -0,0 +1,102 @@ +description: Outputs random values from a uniform distribution. + +
+ + +
+ +# tf.raw_ops.StatefulUniform + + + + + + + + + +Outputs random values from a uniform distribution. + + + + + + + + + +The generated values follow a uniform distribution in the range `[0, 1)`. The +lower bound 0 is included in the range, while the upper bound 1 is excluded. + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +The handle of the resource variable that stores the state of the RNG. +
+`algorithm` + +A `Tensor` of type `int64`. The RNG algorithm. +
+`shape` + +A `Tensor`. The shape of the output tensor. +
+`dtype` + +An optional tf.DType. Defaults to tf.float32. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatefulUniformFullInt.md b/site/en/api_docs/python/tf/raw_ops/StatefulUniformFullInt.md new file mode 100644 index 00000000000..139afffe80c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatefulUniformFullInt.md @@ -0,0 +1,101 @@ +description: Outputs random integers from a uniform distribution. + +
+ + +
+ +# tf.raw_ops.StatefulUniformFullInt + + + + + + + + + +Outputs random integers from a uniform distribution. + + + + + + + + + +The generated values are uniform integers covering the whole range of `dtype`. + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +The handle of the resource variable that stores the state of the RNG. +
+`algorithm` + +A `Tensor` of type `int64`. The RNG algorithm. +
+`shape` + +A `Tensor`. The shape of the output tensor. +
+`dtype` + +An optional tf.DType. Defaults to tf.uint64. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatefulUniformInt.md b/site/en/api_docs/python/tf/raw_ops/StatefulUniformInt.md new file mode 100644 index 00000000000..5497c4caf01 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatefulUniformInt.md @@ -0,0 +1,114 @@ +description: Outputs random integers from a uniform distribution. + +
+ + +
+ +# tf.raw_ops.StatefulUniformInt + + + + + + + + + +Outputs random integers from a uniform distribution. + + + + + + + + + +The generated values are uniform integers in the range `[minval, maxval)`. +The lower bound `minval` is included in the range, while the upper bound +`maxval` is excluded. + +The random integers are slightly biased unless `maxval - minval` is an exact +power of two. The bias is small for values of `maxval - minval` significantly +smaller than the range of the output (either `2^32` or `2^64`). + + + + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. +The handle of the resource variable that stores the state of the RNG. +
+`algorithm` + +A `Tensor` of type `int64`. The RNG algorithm. +
+`shape` + +A `Tensor`. The shape of the output tensor. +
+`minval` + +A `Tensor`. Minimum value (inclusive, scalar). +
+`maxval` + +A `Tensor`. Must have the same type as `minval`. +Maximum value (exclusive, scalar). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `minval`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessIf.md b/site/en/api_docs/python/tf/raw_ops/StatelessIf.md new file mode 100644 index 00000000000..0bedca94d77 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessIf.md @@ -0,0 +1,125 @@ +description: output = cond ? then_branch(input) : else_branch(input) + +
+ + +
+ +# tf.raw_ops.StatelessIf + + + + + + + + + +output = cond ? then_branch(input) : else_branch(input) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cond` + +A `Tensor`. +A Tensor. If the tensor is a scalar of non-boolean type, the +scalar is converted to a boolean according to the +following rule: if the scalar is a numerical value, non-zero means +`True` and zero means False; if the scalar is a string, non-empty +means `True` and empty means `False`. If the tensor is not a scalar, +being empty means False and being non-empty means True. + +This should only be used when the if then/else body functions do not +have stateful ops. +
+`input` + +A list of `Tensor` objects. A list of input tensors. +
+`Tout` + +A list of `tf.DTypes`. A list of output types. +
+`then_branch` + +A function decorated with @Defun. +A function that takes 'inputs' and returns a list of tensors, whose +types are the same as what else_branch returns. +
+`else_branch` + +A function decorated with @Defun. +A function that takes 'inputs' and returns a list of tensors, whose +types are the same as what then_branch returns. +
+`output_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessMultinomial.md b/site/en/api_docs/python/tf/raw_ops/StatelessMultinomial.md new file mode 100644 index 00000000000..38abd243471 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessMultinomial.md @@ -0,0 +1,102 @@ +description: Draws samples from a multinomial distribution. + +
+ + +
+ +# tf.raw_ops.StatelessMultinomial + + + + + + + + + +Draws samples from a multinomial distribution. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`logits` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +2-D Tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]` +represents the unnormalized log probabilities for all classes. +
+`num_samples` + +A `Tensor` of type `int32`. +0-D. Number of independent samples to draw for each row slice. +
+`seed` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2 seeds (shape [2]). +
+`output_dtype` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `output_dtype`. +
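A minimal sketch (illustrative logits, eager execution assumed); with a fixed `seed` the same samples come back on every call:

```python
import tensorflow as tf

logits = tf.math.log([[0.1, 0.3, 0.6]])   # shape [batch_size=1, num_classes=3]
samples = tf.raw_ops.StatelessMultinomial(
    logits=logits, num_samples=5, seed=[7, 13])
print(samples)                            # shape [1, 5], dtype int64
```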
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessRandomBinomial.md b/site/en/api_docs/python/tf/raw_ops/StatelessRandomBinomial.md new file mode 100644 index 00000000000..45bd7792230 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessRandomBinomial.md @@ -0,0 +1,115 @@ +description: Outputs deterministic pseudorandom random numbers from a binomial distribution. + +
+ + +
+ +# tf.raw_ops.StatelessRandomBinomial + + + + + + + + + +Outputs deterministic pseudorandom random numbers from a binomial distribution. + + + + + + + + + +Outputs random values from a binomial distribution. + +The outputs are a deterministic function of `shape`, `seed`, `counts`, and `probs`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`seed` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2 seeds (shape [2]). +
+`counts` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`. +The counts of the binomial distribution. Must be broadcastable with `probs`, +and broadcastable with the rightmost dimensions of `shape`. +
+`probs` + +A `Tensor`. Must have the same type as `counts`. +The probability of success for the binomial distribution. Must be broadcastable +with `counts` and broadcastable with the rightmost dimensions of `shape`. +
+`dtype` + +An optional tf.DType from: `tf.half, tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to tf.int64. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessRandomGammaV2.md b/site/en/api_docs/python/tf/raw_ops/StatelessRandomGammaV2.md new file mode 100644 index 00000000000..bc03e10dea4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessRandomGammaV2.md @@ -0,0 +1,98 @@ +description: Outputs deterministic pseudorandom random numbers from a gamma distribution. + +
+ + +
+ +# tf.raw_ops.StatelessRandomGammaV2 + + + + + + + + + +Outputs deterministic pseudorandom random numbers from a gamma distribution. + + + + + + + + + +Outputs random values from a gamma distribution. + +The outputs are a deterministic function of `shape`, `seed`, and `alpha`. + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`seed` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2 seeds (shape [2]). +
+`alpha` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +The concentration of the gamma distribution. Shape must match the rightmost +dimensions of `shape`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `alpha`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessRandomNormal.md b/site/en/api_docs/python/tf/raw_ops/StatelessRandomNormal.md new file mode 100644 index 00000000000..d7f2b20913d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessRandomNormal.md @@ -0,0 +1,97 @@ +description: Outputs deterministic pseudorandom values from a normal distribution. + +
+ + +
+ +# tf.raw_ops.StatelessRandomNormal + + + + + + + + + +Outputs deterministic pseudorandom values from a normal distribution. + + + + + + + + + +The generated values will have mean 0 and standard deviation 1. + +The outputs are a deterministic function of `shape` and `seed`. + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`seed` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2 seeds (shape [2]). +
+`dtype` + +An optional tf.DType from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to tf.float32. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
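A minimal sketch of the determinism described above (eager execution assumed):

```python
import tensorflow as tf

a = tf.raw_ops.StatelessRandomNormal(shape=[2, 3], seed=[1, 2])
b = tf.raw_ops.StatelessRandomNormal(shape=[2, 3], seed=[1, 2])
print(bool(tf.reduce_all(a == b)))  # True: same shape and seed give the same values
```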
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessRandomPoisson.md b/site/en/api_docs/python/tf/raw_ops/StatelessRandomPoisson.md new file mode 100644 index 00000000000..1ba898680b9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessRandomPoisson.md @@ -0,0 +1,106 @@ +description: Outputs deterministic pseudorandom random numbers from a Poisson distribution. + +
+ + +
+ +# tf.raw_ops.StatelessRandomPoisson + + + + + + + + + +Outputs deterministic pseudorandom random numbers from a Poisson distribution. + + + + + + + + + +Outputs random values from a Poisson distribution. + +The outputs are a deterministic function of `shape`, `seed`, and `lam`. + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`seed` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2 seeds (shape [2]). +
+`lam` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`. +The rate of the Poisson distribution. Shape must match the rightmost dimensions +of `shape`. +
+`dtype` + +A tf.DType from: `tf.half, tf.float32, tf.float64, tf.int32, tf.int64`. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessRandomUniform.md b/site/en/api_docs/python/tf/raw_ops/StatelessRandomUniform.md new file mode 100644 index 00000000000..3b4e5dda108 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessRandomUniform.md @@ -0,0 +1,98 @@ +description: Outputs deterministic pseudorandom random values from a uniform distribution. + +
+ + +
+ +# tf.raw_ops.StatelessRandomUniform + + + + + + + + + +Outputs deterministic pseudorandom random values from a uniform distribution. + + + + + + + + + +The generated values follow a uniform distribution in the range `[0, 1)`. The +lower bound 0 is included in the range, while the upper bound 1 is excluded. + +The outputs are a deterministic function of `shape` and `seed`. + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`seed` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2 seeds (shape [2]). +
+`dtype` + +An optional tf.DType from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to tf.float32. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessRandomUniformFullInt.md b/site/en/api_docs/python/tf/raw_ops/StatelessRandomUniformFullInt.md new file mode 100644 index 00000000000..9f71e2d2762 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessRandomUniformFullInt.md @@ -0,0 +1,97 @@ +description: Outputs deterministic pseudorandom random integers from a uniform distribution. + +
+ + +
+ +# tf.raw_ops.StatelessRandomUniformFullInt + + + + + + + + + +Outputs deterministic pseudorandom random integers from a uniform distribution. + + + + + + + + + +The generated values are uniform integers covering the whole range of `dtype`. + +The outputs are a deterministic function of `shape` and `seed`. + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`seed` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `uint32`, `uint64`. +2 seeds (shape [2]). +
+`dtype` + +An optional tf.DType from: `tf.int32, tf.int64, tf.uint32, tf.uint64`. Defaults to tf.uint64. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessRandomUniformInt.md b/site/en/api_docs/python/tf/raw_ops/StatelessRandomUniformInt.md new file mode 100644 index 00000000000..3808c79bd47 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessRandomUniformInt.md @@ -0,0 +1,105 @@ +description: Outputs deterministic pseudorandom random integers from a uniform distribution. + +
+ + +
+ +# tf.raw_ops.StatelessRandomUniformInt + + + + + + + + + +Outputs deterministic pseudorandom random integers from a uniform distribution. + + + + + + + + + +The generated values follow a uniform distribution in the range `[minval, maxval)`. + +The outputs are a deterministic function of `shape`, `seed`, `minval`, and `maxval`. + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`seed` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2 seeds (shape [2]). +
+`minval` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Minimum value (inclusive, scalar). +
+`maxval` + +A `Tensor`. Must have the same type as `minval`. +Maximum value (exclusive, scalar). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `minval`. +
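A minimal sketch (illustrative values, eager execution assumed) drawing integers in `[1, 7)`:

```python
import tensorflow as tf

dice = tf.raw_ops.StatelessRandomUniformInt(
    shape=[10],
    seed=[42, 0],
    minval=tf.constant(1, dtype=tf.int64),
    maxval=tf.constant(7, dtype=tf.int64))  # values 1..6, identical on every call
print(dice)
```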
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessTruncatedNormal.md b/site/en/api_docs/python/tf/raw_ops/StatelessTruncatedNormal.md new file mode 100644 index 00000000000..e777df168f8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessTruncatedNormal.md @@ -0,0 +1,99 @@ +description: Outputs deterministic pseudorandom values from a truncated normal distribution. + +
+ + +
+ +# tf.raw_ops.StatelessTruncatedNormal + + + + + + + + + +Outputs deterministic pseudorandom values from a truncated normal distribution. + + + + + + + + + +The generated values follow a normal distribution with mean 0 and standard +deviation 1, except that values whose magnitude is more than 2 standard +deviations from the mean are dropped and re-picked. + +The outputs are a deterministic function of `shape` and `seed`. + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`seed` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2 seeds (shape [2]). +
+`dtype` + +An optional tf.DType from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to tf.float32. +The type of the output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatelessWhile.md b/site/en/api_docs/python/tf/raw_ops/StatelessWhile.md new file mode 100644 index 00000000000..3954fb9d09c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatelessWhile.md @@ -0,0 +1,119 @@ +description: output = input; While (Cond(output)) { output = Body(output) } + +
+ + +
+ +# tf.raw_ops.StatelessWhile + + + + + + + + + +output = input; While (Cond(output)) { output = Body(output) } + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A list of `Tensor` objects. +A list of input tensors whose types are T. +
+`cond` + +A function decorated with @Defun. +A function that takes 'input' and returns a tensor. If the tensor is +a scalar of non-boolean type, the scalar is converted to a boolean +according to the following rule: if the scalar is a numerical +value, non-zero means True and zero means False; if the scalar is +a string, non-empty means True and empty means False. If the +tensor is not a scalar, non-emptiness means True and emptiness +means False. + +This should only be used when the while condition and body functions +do not have stateful ops. +
+`body` + +A function decorated with @Defun. +A function that takes a list of tensors and returns another +list of tensors. Both lists have the same types as specified +by T. +
+`output_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +
+`parallel_iterations` + +An optional `int`. Defaults to `10`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StaticRegexFullMatch.md b/site/en/api_docs/python/tf/raw_ops/StaticRegexFullMatch.md new file mode 100644 index 00000000000..fac4d727675 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StaticRegexFullMatch.md @@ -0,0 +1,91 @@ +description: Check if the input matches the regex pattern. + +
+ + +
+ +# tf.raw_ops.StaticRegexFullMatch + + + + + + + + + +Check if the input matches the regex pattern. + + + + + + + + + +The input is a string tensor of any shape. The pattern is the +regular expression to be matched with every element of the input tensor. +The boolean values (True or False) of the output tensor indicate +if the input matches the regex pattern provided. + +The pattern follows the re2 syntax (https://github.com/google/re2/wiki/Syntax) + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +A string tensor of the text to be processed. +
+`pattern` + +A `string`. The regular expression to match the input. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
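A minimal sketch (illustrative strings and pattern, eager execution assumed); note the whole element must match:

```python
import tensorflow as tf

text = tf.constant(["abc123", "abc", "123"])
print(tf.raw_ops.StaticRegexFullMatch(input=text, pattern=r"[a-z]+[0-9]+"))
# [ True False False]
```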
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StaticRegexReplace.md b/site/en/api_docs/python/tf/raw_ops/StaticRegexReplace.md new file mode 100644 index 00000000000..9915b8f1134 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StaticRegexReplace.md @@ -0,0 +1,101 @@ +description: Replaces the match of pattern in input with rewrite. + +
+ + +
+ +# tf.raw_ops.StaticRegexReplace + + + + + + + + + +Replaces the match of pattern in input with rewrite. + + + + + + + + + +It follows the re2 syntax (https://github.com/google/re2/wiki/Syntax) + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. The text to be processed. +
+`pattern` + +A `string`. The regular expression to match the input. +
+`rewrite` + +A `string`. The rewrite to be applied to the matched expression. +
+`replace_global` + +An optional `bool`. Defaults to `True`. +If True, the replacement is global, otherwise the replacement +is done only on the first match. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
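A minimal sketch (illustrative input and pattern, eager execution assumed) showing the effect of `replace_global`:

```python
import tensorflow as tf

text = tf.constant(["a1 b22 c333"])
print(tf.raw_ops.StaticRegexReplace(input=text, pattern=r"[0-9]+", rewrite="#"))
# [b'a# b# c#']
print(tf.raw_ops.StaticRegexReplace(
    input=text, pattern=r"[0-9]+", rewrite="#", replace_global=False))
# [b'a# b22 c333']
```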
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatsAggregatorHandle.md b/site/en/api_docs/python/tf/raw_ops/StatsAggregatorHandle.md new file mode 100644 index 00000000000..127b1d79711 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatsAggregatorHandle.md @@ -0,0 +1,84 @@ +description: Creates a statistics manager resource. + +
+ + +
+ +# tf.raw_ops.StatsAggregatorHandle + + + + + + + + + +Creates a statistics manager resource. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatsAggregatorHandleV2.md b/site/en/api_docs/python/tf/raw_ops/StatsAggregatorHandleV2.md new file mode 100644 index 00000000000..65cd9e0e47f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatsAggregatorHandleV2.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.StatsAggregatorHandleV2 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatsAggregatorSetSummaryWriter.md b/site/en/api_docs/python/tf/raw_ops/StatsAggregatorSetSummaryWriter.md new file mode 100644 index 00000000000..39ec973c478 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatsAggregatorSetSummaryWriter.md @@ -0,0 +1,84 @@ +description: Set a summary_writer_interface to record statistics using given stats_aggregator. + +
+ + +
+ +# tf.raw_ops.StatsAggregatorSetSummaryWriter + + + + + + + + + +Set a summary_writer_interface to record statistics using given stats_aggregator. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`stats_aggregator` + +A `Tensor` of type `resource`. +
+`summary` + +A `Tensor` of type `resource`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StatsAggregatorSummary.md b/site/en/api_docs/python/tf/raw_ops/StatsAggregatorSummary.md new file mode 100644 index 00000000000..c66f11f87a9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StatsAggregatorSummary.md @@ -0,0 +1,77 @@ +description: Produces a summary of any statistics recorded by the given statistics manager. + +
+ + +
+ +# tf.raw_ops.StatsAggregatorSummary + + + + + + + + + +Produces a summary of any statistics recorded by the given statistics manager. + + + + + + + + + + + + + + + + + + + + + + +
+`iterator` + +A `Tensor` of type `resource`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StopGradient.md b/site/en/api_docs/python/tf/raw_ops/StopGradient.md new file mode 100644 index 00000000000..b0036da08f3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StopGradient.md @@ -0,0 +1,96 @@ +description: Stops gradient computation. + +
+ + +
+ +# tf.raw_ops.StopGradient + + + + + + + + + +Stops gradient computation. + + + + + + + + + +When executed in a graph, this op outputs its input tensor as-is. + +When building ops to compute gradients, this op prevents the contribution of +its inputs from being taken into account. Normally, the gradient generator adds ops +to a graph to compute the derivatives of a specified 'loss' by recursively +finding out inputs that contributed to its computation. If you insert this op +in the graph, its inputs are masked from the gradient generator. They are not +taken into account for computing gradients. + +This is useful any time you want to compute a value with TensorFlow but need +to pretend that the value was a constant. Some examples include: + +* The *EM* algorithm where the *M-step* should not involve backpropagation + through the output of the *E-step*. +* Contrastive divergence training of Boltzmann machines where, when + differentiating the energy function, the training must not backpropagate + through the graph that generated the samples from the model. +* Adversarial training, where no backprop should happen through the adversarial + example generation process. + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
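A short sketch of the "pretend it is a constant" behaviour described above, checked with `tf.GradientTape` in eager mode:

```python
import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = x * tf.raw_ops.StopGradient(input=x)  # the second factor is treated as a constant

print(tape.gradient(y, x))  # 3.0; without StopGradient, d(x*x)/dx would be 6.0
```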
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StridedSlice.md b/site/en/api_docs/python/tf/raw_ops/StridedSlice.md new file mode 100644 index 00000000000..f3e4877ad87 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StridedSlice.md @@ -0,0 +1,254 @@ +description: Return a strided slice from input. + +
+ + +
+ +# tf.raw_ops.StridedSlice + + + + + + + + + +Return a strided slice from `input`. + + + + + + + + + +Note, most Python users will want to use the Python Tensor.__getitem__ +or Variable.__getitem__ rather than this op directly. + +The goal of this op is to produce a new tensor with a subset of +the elements from the `n` dimensional `input` tensor. The subset is chosen using +a sequence of `m` sparse range specifications encoded into the arguments +of this function. Note, in some cases +`m` could be equal to `n`, but this need not be the case. Each +range specification entry can be one of the following: + +- An ellipsis (...). Ellipses are used to imply zero or more + dimensions of full-dimension selection and are produced using + `ellipsis_mask`. For example, `foo[...]` is the identity slice. + +- A new axis. This is used to insert a new shape=1 dimension and is + produced using `new_axis_mask`. For example, `foo[tf.newaxis, ...]` where + `foo` is shape `(3, 4)` produces a `(1, 3, 4)` tensor. + + +- A range `begin:end:stride`. This is used to specify how much to choose from + a given dimension. `stride` can be any integer but 0. `begin` is an integer + which represents the index of the first value to select while `end` represents + the index of the last value to select. The number of values selected in each + dimension is `end - begin` if `stride > 0` and `begin - end` if `stride < 0`. + `begin` and `end` can be negative where `-1` is the last element, `-2` is + the second to last. `begin_mask` controls whether to replace the explicitly + given `begin` with an implicit effective value of `0` if `stride > 0` and + `-1` if `stride < 0`. `end_mask` is analogous but produces the number + required to create the largest open interval. For example, given a shape + `(3,)` tensor `foo[:]`, the effective `begin` and `end` are `0` and `3`. Do + not assume this is equivalent to `foo[0:-1]` which has an effective `begin` + and `end` of `0` and `2`. Another example is `foo[-2::-1]` which reverses the + first dimension of a tensor while dropping the last two (in the original + order) elements. For example `foo = [1,2,3,4]; foo[-2::-1]` is `[4,3]`. + +- A single index. This is used to keep only elements that have a given + index. For example, `foo[2, :]` on a shape `(5,6)` tensor produces a + shape `(6,)` tensor. This is encoded in `begin` and `end` and + `shrink_axis_mask`. + +Each conceptual range specification is encoded in the op's argument. This +encoding is best understood by considering a non-trivial example. In +particular, +`foo[1, 2:4, None, ..., :-3:-1, :]` will be encoded as + +``` +begin = [1, 2, x, x, 0, x] # x denotes don't care (usually 0) +end = [2, 4, x, x, -3, x] +strides = [1, 1, x, x, -1, 1] +begin_mask = 1<<4 | 1 << 5 = 48 +end_mask = 1<<5 = 32 +ellipsis_mask = 1<<3 = 8 +new_axis_mask = 1<<2 = 4 +shrink_axis_mask = 1<<0 = 1 +``` + +In this case if `foo.shape` is (5, 5, 5, 5, 5, 5) the final shape of +the slice becomes (2, 1, 5, 5, 2, 5). +Let us walk step by step through each argument specification. + +1. The first argument in the example slice is turned into `begin = 1` and +`end = begin + 1 = 2`. To disambiguate from the original spec `2:4` we +also set the appropriate bit in `shrink_axis_mask`. + +2. `2:4` contributes 2, 4, 1 to begin, end, and stride. All masks have +zero bits contributed. + +3. None is a synonym for tf.newaxis. This means inserting a dimension of size 1 +in the final shape. Dummy values are contributed to begin, +end and stride, while the new_axis_mask bit is set.
+ +4. `...` grabs the full ranges from as many dimensions as needed to +fully specify a slice for every dimension of the input shape. + +5. `:-3:-1` shows the use of negative indices. A negative index `i` associated +with a dimension that has shape `s` is converted to a positive index +`s + i`. So `-1` becomes `s-1` (i.e. the last element). This conversion +is done internally so begin, end and strides receive x, -3, and -1. +The appropriate begin_mask bit is set to indicate the start range is the +full range (ignoring the x). + +6. `:` indicates that the entire contents of the corresponding dimension +is selected. This is equivalent to `::` or `0::1`. begin, end, and strides +receive 0, 0, and 1, respectively. The appropriate bits in `begin_mask` and +`end_mask` are also set. + +*Requirements*: + `0 != strides[i] for i in [0, m)` + `ellipsis_mask must be a power of two (only one ellipsis)` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`begin` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +`begin[k]` specifies the offset into the `k`th range specification. +The exact dimension this corresponds to will be determined by context. +Out-of-bounds values will be silently clamped. If the `k`th bit of +`begin_mask` is set, then `begin[k]` is ignored and the full range of the +appropriate dimension is used instead. Negative values cause indexing +to start from the highest element, e.g. if `foo==[1,2,3]` then `foo[-1]==3`. +
+`end` + +A `Tensor`. Must have the same type as `begin`. +`end[i]` is like `begin` with the exception that `end_mask` is +used to determine full ranges. +
+`strides` + +A `Tensor`. Must have the same type as `begin`. +`strides[i]` specifies the increment in the `i`th specification +after extracting a given element. Negative indices will reverse +the original order. Out of range values are +clamped to `[0,dim[i]) if slice[i]>0` or `[-1,dim[i]-1] if slice[i] < 0`. +
+`begin_mask` + +An optional `int`. Defaults to `0`. +a bitmask where a bit i being 1 means to ignore the begin +value and instead use the largest interval possible. At runtime +begin[i] will be replaced with `[0, n-1)` if `stride[i] > 0` or +`[-1, n-1]` if `stride[i] < 0` +
+`end_mask` + +An optional `int`. Defaults to `0`. Analogous to `begin_mask`. +
+`ellipsis_mask` + +An optional `int`. Defaults to `0`. +a bitmask where bit `i` being 1 means the `i`th +position is actually an ellipsis. One bit at most can be 1. +If `ellipsis_mask == 0`, then an implicit ellipsis mask of `1 << (m+1)` +is provided. This means that `foo[3:5] == foo[3:5, ...]`. An ellipsis +implicitly creates as many range specifications as necessary to fully +specify the sliced range for every dimension. For example for a 4-dimensional +tensor `foo` the slice `foo[2, ..., 5:8]` implies `foo[2, :, :, 5:8]`. +
+`new_axis_mask` + +An optional `int`. Defaults to `0`. +a bitmask where bit `i` being 1 means the `i`th +specification creates a new shape 1 dimension. For example +`foo[:4, tf.newaxis, :2]` would produce a shape `(4, 1, 2)` tensor. +
+`shrink_axis_mask` + +An optional `int`. Defaults to `0`. +a bitmask where bit `i` implies that the `i`th +specification should shrink the dimensionality. begin and end +must imply a slice of size 1 in the dimension. For example in +python one might do `foo[:, 3, :]` which would result in +`shrink_axis_mask` being 2. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
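+
+#### Example:
+
+A minimal sketch, assuming eager execution, of how the mask encoding described
+above maps onto a direct call to this op. The tensor values are illustrative
+only; in practice the Python slicing syntax is the recommended entry point.
+
+```python
+import tensorflow as tf
+
+foo = tf.reshape(tf.range(30), [5, 6])
+
+# The usual entry point: Python slicing, which lowers to this op.
+sliced = foo[1:4:2, ::-1]
+
+# The same slice through the raw op. Bit 1 of begin_mask and end_mask marks
+# the second dimension's begin/end as "use the full range", and the stride
+# of -1 walks that dimension in reverse.
+raw = tf.raw_ops.StridedSlice(
+    input=foo,
+    begin=[1, 0],
+    end=[4, 0],
+    strides=[2, -1],
+    begin_mask=0b10,
+    end_mask=0b10,
+)
+
+tf.debugging.assert_equal(raw, sliced)  # both have shape (2, 6)
+```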
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StridedSliceAssign.md b/site/en/api_docs/python/tf/raw_ops/StridedSliceAssign.md new file mode 100644 index 00000000000..27d4f7ddf8f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StridedSliceAssign.md @@ -0,0 +1,147 @@ +description: Assign value to the sliced l-value reference of ref. + +
+ + +
+ +# tf.raw_ops.StridedSliceAssign + + + + + + + + + +Assign `value` to the sliced l-value reference of `ref`. + + + + + + + + + +The values of `value` are assigned to the positions in the variable +`ref` that are selected by the slice parameters. The slice parameters +`begin`, `end`, `strides`, etc. work exactly as in `StridedSlice`. + +NOTE this op currently does not support broadcasting and so `value`'s +shape must be exactly the shape produced by the slice of `ref`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`ref` + +A mutable `Tensor`. +
+`begin` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`end` + +A `Tensor`. Must have the same type as `begin`. +
+`strides` + +A `Tensor`. Must have the same type as `begin`. +
+`value` + +A `Tensor`. Must have the same type as `ref`. +
+`begin_mask` + +An optional `int`. Defaults to `0`. +
+`end_mask` + +An optional `int`. Defaults to `0`. +
+`ellipsis_mask` + +An optional `int`. Defaults to `0`. +
+`new_axis_mask` + +An optional `int`. Defaults to `0`. +
+`shrink_axis_mask` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor`. Has the same type as `ref`. +
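+
+#### Example:
+
+A hedged sketch of the equivalent high-level pattern: sliced assignment on a
+`tf.Variable`, which is what typically lowers to a strided-slice assignment op.
+The raw op itself expects a reference-typed `ref` tensor.
+
+```python
+import tensorflow as tf
+
+v = tf.Variable([0., 0., 0., 0., 0.])
+
+# Assign into a slice of the variable; the slice parameters follow the
+# same begin/end/strides semantics as StridedSlice.
+v[1:4].assign([10., 20., 30.])
+
+print(v.numpy())  # [ 0. 10. 20. 30.  0.]
+```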
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StridedSliceGrad.md b/site/en/api_docs/python/tf/raw_ops/StridedSliceGrad.md new file mode 100644 index 00000000000..a6141256a87 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StridedSliceGrad.md @@ -0,0 +1,149 @@ +description: Returns the gradient of StridedSlice. + +
+ + +
+
+# tf.raw_ops.StridedSliceGrad
+
+
+
+
+
+
+
+
+
+
+Returns the gradient of `StridedSlice`.
+
+
+
+
+
+
+
+
+
+
+Since `StridedSlice` cuts out pieces of its `input`, whose shape is
+`shape`, its gradient will have the same shape (which is passed here
+as `shape`). The gradient will be zero in any element that the slice
+does not select.
+
+Arguments are the same as for `StridedSlice`, with the exception that
+`dy` is the input gradient to be propagated and `shape` is the
+shape of `StridedSlice`'s `input`.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`begin` + +A `Tensor`. Must have the same type as `shape`. +
+`end` + +A `Tensor`. Must have the same type as `shape`. +
+`strides` + +A `Tensor`. Must have the same type as `shape`. +
+`dy` + +A `Tensor`. +
+`begin_mask` + +An optional `int`. Defaults to `0`. +
+`end_mask` + +An optional `int`. Defaults to `0`. +
+`ellipsis_mask` + +An optional `int`. Defaults to `0`. +
+`new_axis_mask` + +An optional `int`. Defaults to `0`. +
+`shrink_axis_mask` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `dy`. +
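+
+#### Example:
+
+An illustrative sketch using `tf.GradientTape`, which produces this gradient op
+under the hood; elements that the slice does not select receive a zero gradient.
+
+```python
+import tensorflow as tf
+
+x = tf.Variable([[1., 2., 3.],
+                 [4., 5., 6.]])
+
+with tf.GradientTape() as tape:
+    y = x[0, 1:]            # a strided slice of x
+    loss = tf.reduce_sum(y)
+
+print(tape.gradient(loss, x).numpy())
+# [[0. 1. 1.]
+#  [0. 0. 0.]]
+```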
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringFormat.md b/site/en/api_docs/python/tf/raw_ops/StringFormat.md new file mode 100644 index 00000000000..4900c193c0e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringFormat.md @@ -0,0 +1,103 @@ +description: Formats a string template using a list of tensors. + +
+ + +
+ +# tf.raw_ops.StringFormat + + + + + + + + + +Formats a string template using a list of tensors. + + + + + + + + + +Formats a string template using a list of tensors, pretty-printing tensor summaries. + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of `Tensor` objects. +The list of tensors to format into the placeholder string. +
+`template` + +An optional `string`. Defaults to `"%s"`. +A string, the template to format tensor summaries into. +
+`placeholder` + +An optional `string`. Defaults to `"%s"`. +A string, at each placeholder in the template a subsequent tensor summary will be inserted. +
+`summarize` + +An optional `int`. Defaults to `3`. +When formatting the tensor summaries print the first and last summarize entries of each tensor dimension. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
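+
+#### Example:
+
+A small sketch of how `template` and `placeholder` interact; the higher-level
+`tf.strings.format` wraps the same functionality, and the exact summary text
+shown in the comment is indicative only.
+
+```python
+import tensorflow as tf
+
+t = tf.constant([[1, 2], [3, 4]])
+
+msg = tf.raw_ops.StringFormat(
+    inputs=[t],
+    template="tensor summary: {}",
+    placeholder="{}",
+    summarize=2,
+)
+print(msg.numpy())  # e.g. b'tensor summary: [[1 2]\n [3 4]]'
+```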
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringJoin.md b/site/en/api_docs/python/tf/raw_ops/StringJoin.md new file mode 100644 index 00000000000..f5a470b3889 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringJoin.md @@ -0,0 +1,99 @@ +description: Joins the strings in the given list of string tensors into one tensor; + +
+ + +
+ +# tf.raw_ops.StringJoin + + + + + + + + + +Joins the strings in the given list of string tensors into one tensor; + + + + + + + + + +with the given separator (default is an empty separator). + +#### Examples: + + + +``` +>>> s = ["hello", "world", "tensorflow"] +>>> tf.strings.join(s, " ") + +``` + + + + + + + + + + + + + + + + +
+`inputs` + +A list of at least 1 `Tensor` objects with type `string`. +A list of string tensors. The tensors must all have the same shape, +or be scalars. Scalars may be mixed in; these will be broadcast to the shape +of non-scalar inputs. +
+`separator` + +An optional `string`. Defaults to `""`. +string, an optional join separator. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringLength.md b/site/en/api_docs/python/tf/raw_ops/StringLength.md new file mode 100644 index 00000000000..09574a1c081 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringLength.md @@ -0,0 +1,99 @@ +description: String lengths of input. + +
+ + +
+ +# tf.raw_ops.StringLength + + + + + + + + + +String lengths of `input`. + + + + + + + + + +Computes the length of each string given in the input tensor. + +``` +>>> strings = tf.constant(['Hello','TensorFlow', '\U0001F642']) +>>> tf.strings.length(strings).numpy() # default counts bytes +array([ 5, 10, 4], dtype=int32) +>>> tf.strings.length(strings, unit="UTF8_CHAR").numpy() +array([ 5, 10, 1], dtype=int32) +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +The strings for which to compute the length for each element. +
+`unit` + +An optional `string` from: `"BYTE", "UTF8_CHAR"`. Defaults to `"BYTE"`. +The unit that is counted to compute string length. One of: `"BYTE"` (for +the number of bytes in each string) or `"UTF8_CHAR"` (for the number of UTF-8 +encoded Unicode code points in each string). Results are undefined +if `unit=UTF8_CHAR` and the `input` strings do not contain structurally +valid UTF-8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringLower.md b/site/en/api_docs/python/tf/raw_ops/StringLower.md new file mode 100644 index 00000000000..e358e571fbf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringLower.md @@ -0,0 +1,93 @@ +description: Converts all uppercase characters into their respective lowercase replacements. + +
+ + +
+ +# tf.raw_ops.StringLower + + + + + + + + + +Converts all uppercase characters into their respective lowercase replacements. + + + + + + + + + + +#### Example: + + + +``` +>>> tf.strings.lower("CamelCase string and ALL CAPS") + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +
+`encoding` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringNGrams.md b/site/en/api_docs/python/tf/raw_ops/StringNGrams.md new file mode 100644 index 00000000000..a5d14d94ff6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringNGrams.md @@ -0,0 +1,156 @@ +description: Creates ngrams from ragged string data. + +
+ + +
+ +# tf.raw_ops.StringNGrams + + + + + + + + + +Creates ngrams from ragged string data. + + + + + + + + + +This op accepts a ragged tensor with 1 ragged dimension containing only +strings and outputs a ragged tensor with 1 ragged dimension containing ngrams +of that string, joined along the innermost axis. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor` of type `string`. +The values tensor of the ragged string tensor to make ngrams out of. Must be a +1D string tensor. +
+`data_splits` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The splits tensor of the ragged string tensor to make ngrams out of. +
+`separator` + +A `string`. +The string to append between elements of the token. Use "" for no separator. +
+`ngram_widths` + +A list of `ints`. The sizes of the ngrams to create. +
+`left_pad` + +A `string`. +The string to use to pad the left side of the ngram sequence. Only used if +pad_width != 0. +
+`right_pad` + +A `string`. +The string to use to pad the right side of the ngram sequence. Only used if +pad_width != 0. +
+`pad_width` + +An `int`. +The number of padding elements to add to each side of each +sequence. Note that padding will never be greater than 'ngram_widths'-1 +regardless of this value. If `pad_width=-1`, then add `max(ngram_widths)-1` +elements. +
+`preserve_short_sequences` + +A `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (ngrams, ngrams_splits). +
+`ngrams` + +A `Tensor` of type `string`. +
+`ngrams_splits` + +A `Tensor`. Has the same type as `data_splits`. +
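+
+#### Example:
+
+A hedged sketch of the values-plus-row-splits convention described above; the
+outputs in the comments are indicative, and the higher-level
+`tf.strings.ngrams` is usually the more convenient entry point.
+
+```python
+import tensorflow as tf
+
+# Two ragged rows, ["a", "b", "c"] and ["d", "e"], encoded as values + splits.
+ngrams, splits = tf.raw_ops.StringNGrams(
+    data=tf.constant(["a", "b", "c", "d", "e"]),
+    data_splits=tf.constant([0, 3, 5], dtype=tf.int64),
+    separator=" ",
+    ngram_widths=[2],
+    left_pad="",
+    right_pad="",
+    pad_width=0,
+    preserve_short_sequences=False,
+)
+
+print(ngrams.numpy())  # expected: [b'a b', b'b c', b'd e']
+print(splits.numpy())  # expected: [0 2 3]
+```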
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringSplit.md b/site/en/api_docs/python/tf/raw_ops/StringSplit.md new file mode 100644 index 00000000000..4fc817aa673 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringSplit.md @@ -0,0 +1,137 @@ +description: Split elements of input based on delimiter into a SparseTensor. + +
+ + +
+
+# tf.raw_ops.StringSplit
+
+
+
+
+
+
+
+
+
+
+Split elements of `input` based on `delimiter` into a `SparseTensor`.
+
+
+
+
+
+
+
+
+
+
+Let N be the size of `input` (typically N will be the batch size). Split each
+element of `input` based on `delimiter` and return a `SparseTensor`
+containing the split tokens. Empty tokens are ignored.
+
+`delimiter` can be empty, or a string of split characters. If `delimiter` is an
+  empty string, each element of `input` is split into individual single-byte
+  character strings, including splitting of UTF-8 multibyte sequences. Otherwise
+  every character of `delimiter` is a potential split point.
+
+#### For example:
+
+N = 2, input[0] is 'hello world' and input[1] is 'a b c', then the output
+will be
+
+indices = [0, 0;
+           0, 1;
+           1, 0;
+           1, 1;
+           1, 2]
+shape = [2, 3]
+values = ['hello', 'world', 'a', 'b', 'c']
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`input` + +A `Tensor` of type `string`. 1-D. Strings to split. +
+`delimiter` + +A `Tensor` of type `string`. +0-D. Delimiter characters (bytes), or empty string. +
+`skip_empty` + +An optional `bool`. Defaults to `True`. +A `bool`. If `True`, skip the empty strings from the result. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (indices, values, shape). +
+`indices` + +A `Tensor` of type `int64`. +
+`values` + +A `Tensor` of type `string`. +
+`shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringSplitV2.md b/site/en/api_docs/python/tf/raw_ops/StringSplitV2.md new file mode 100644 index 00000000000..13edcb208c2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringSplitV2.md @@ -0,0 +1,139 @@ +description: Split elements of source based on sep into a SparseTensor. + +
+ + +
+
+# tf.raw_ops.StringSplitV2
+
+
+
+
+
+
+
+
+
+
+Split elements of `source` based on `sep` into a `SparseTensor`.
+
+
+
+
+
+
+
+
+
+
+Let N be the size of source (typically N will be the batch size). Split each
+element of `source` based on `sep` and return a `SparseTensor`
+containing the split tokens. Empty tokens are ignored.
+
+For example, N = 2, source[0] is 'hello world' and source[1] is 'a b c',
+then the output will be
+```
+st.indices = [0, 0;
+              0, 1;
+              1, 0;
+              1, 1;
+              1, 2]
+st.shape = [2, 3]
+st.values = ['hello', 'world', 'a', 'b', 'c']
+```
+
+If `sep` is given, consecutive delimiters are not grouped together and are
+deemed to delimit empty strings. For example, source of `"1<>2<><>3"` and
+sep of `"<>"` returns `["1", "2", "", "3"]`. If `sep` is None or an empty
+string, consecutive whitespace is regarded as a single separator, and the
+result will contain no empty strings at the start or end if the string has
+leading or trailing whitespace.
+
+Note that the above-mentioned behavior matches Python's str.split.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`input` + +A `Tensor` of type `string`. +`1-D` string `Tensor`, the strings to split. +
+`sep` + +A `Tensor` of type `string`. +`0-D` string `Tensor`, the delimiter character. +
+`maxsplit` + +An optional `int`. Defaults to `-1`. +An `int`. If `maxsplit > 0`, limit of the split of the result. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (indices, values, shape). +
+`indices` + +A `Tensor` of type `int64`. +
+`values` + +A `Tensor` of type `string`. +
+`shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringStrip.md b/site/en/api_docs/python/tf/raw_ops/StringStrip.md new file mode 100644 index 00000000000..ade39f08526 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringStrip.md @@ -0,0 +1,77 @@ +description: Strip leading and trailing whitespaces from the Tensor. + +
+ + +
+ +# tf.raw_ops.StringStrip + + + + + + + + + +Strip leading and trailing whitespaces from the Tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. A string `Tensor` of any shape. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
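+
+#### Example:
+
+A short illustration; `tf.strings.strip` is the public wrapper for this op.
+
+```python
+import tensorflow as tf
+
+s = tf.constant(["  hello ", "\tworld\n"])
+
+# Leading and trailing whitespace (spaces, tabs, newlines) is removed.
+print(tf.raw_ops.StringStrip(input=s).numpy())  # [b'hello' b'world']
+```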
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringToHashBucket.md b/site/en/api_docs/python/tf/raw_ops/StringToHashBucket.md new file mode 100644 index 00000000000..ecae266fedb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringToHashBucket.md @@ -0,0 +1,90 @@ +description: Converts each string in the input Tensor to its hash mod by a number of buckets. + +
+ + +
+ +# tf.raw_ops.StringToHashBucket + + + + + + + + + +Converts each string in the input Tensor to its hash mod by a number of buckets. + + + + + + + + + +The hash function is deterministic on the content of the string within the +process. + +Note that the hash function may change from time to time. +This functionality will be deprecated and it's recommended to use +`tf.string_to_hash_bucket_fast()` or `tf.string_to_hash_bucket_strong()`. + + + + + + + + + + + + + + + + +
+`string_tensor` + +A `Tensor` of type `string`. +
+`num_buckets` + +An `int` that is `>= 1`. The number of buckets. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringToHashBucketFast.md b/site/en/api_docs/python/tf/raw_ops/StringToHashBucketFast.md new file mode 100644 index 00000000000..616523c2d1d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringToHashBucketFast.md @@ -0,0 +1,99 @@ +description: Converts each string in the input Tensor to its hash mod by a number of buckets. + +
+ + +
+ +# tf.raw_ops.StringToHashBucketFast + + + + + + + + + +Converts each string in the input Tensor to its hash mod by a number of buckets. + + + + + + + + + +The hash function is deterministic on the content of the string within the +process and will never change. However, it is not suitable for cryptography. +This function may be used when CPU time is scarce and inputs are trusted or +unimportant. There is a risk of adversaries constructing inputs that all hash +to the same bucket. To prevent this problem, use a strong hash function with +`tf.string_to_hash_bucket_strong`. + +#### Examples: + + + +``` +>>> tf.strings.to_hash_bucket_fast(["Hello", "TensorFlow", "2.x"], 3).numpy() +array([0, 2, 2]) +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. The strings to assign a hash bucket. +
+`num_buckets` + +An `int` that is `>= 1`. The number of buckets. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringToHashBucketStrong.md b/site/en/api_docs/python/tf/raw_ops/StringToHashBucketStrong.md new file mode 100644 index 00000000000..f7233320783 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringToHashBucketStrong.md @@ -0,0 +1,115 @@ +description: Converts each string in the input Tensor to its hash mod by a number of buckets. + +
+ + +
+ +# tf.raw_ops.StringToHashBucketStrong + + + + + + + + + +Converts each string in the input Tensor to its hash mod by a number of buckets. + + + + + + + + + +The hash function is deterministic on the content of the string within the +process. The hash function is a keyed hash function, where attribute `key` +defines the key of the hash function. `key` is an array of 2 elements. + +A strong hash is important when inputs may be malicious, e.g. URLs with +additional components. Adversaries could try to make their inputs hash to the +same bucket for a denial-of-service attack or to skew the results. A strong +hash can be used to make it difficult to find inputs with a skewed hash value +distribution over buckets. This requires that the hash function is +seeded by a high-entropy (random) "key" unknown to the adversary. + +The additional robustness comes at a cost of roughly 4x higher compute +time than `tf.string_to_hash_bucket_fast`. + +#### Examples: + + + +``` +>>> tf.strings.to_hash_bucket_strong(["Hello", "TF"], 3, [1, 2]).numpy() +array([2, 0]) +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. The strings to assign a hash bucket. +
+`num_buckets` + +An `int` that is `>= 1`. The number of buckets. +
+`key` + +A list of `ints`. +The key used to seed the hash function, passed as a list of two uint64 +elements. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringToNumber.md b/site/en/api_docs/python/tf/raw_ops/StringToNumber.md new file mode 100644 index 00000000000..356bc6676bc --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringToNumber.md @@ -0,0 +1,97 @@ +description: Converts each string in the input Tensor to the specified numeric type. + +
+ + +
+ +# tf.raw_ops.StringToNumber + + + + + + + + + +Converts each string in the input Tensor to the specified numeric type. + + + + + + + + + +(Note that int32 overflow results in an error while float overflow +results in a rounded value.) + +#### Example: + + + +``` +>>> strings = ["5.0", "3.0", "7.0"] +>>> tf.strings.to_number(strings) + +``` + + + + + + + + + + + + + + + + +
+`string_tensor` + +A `Tensor` of type `string`. +
+`out_type` + +An optional tf.DType from: `tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to tf.float32. +The numeric type to interpret each string in `string_tensor` as. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/StringUpper.md b/site/en/api_docs/python/tf/raw_ops/StringUpper.md new file mode 100644 index 00000000000..12a46eca281 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/StringUpper.md @@ -0,0 +1,93 @@ +description: Converts all lowercase characters into their respective uppercase replacements. + +
+ + +
+ +# tf.raw_ops.StringUpper + + + + + + + + + +Converts all lowercase characters into their respective uppercase replacements. + + + + + + + + + + +#### Example: + + + +``` +>>> tf.strings.upper("CamelCase string and ALL CAPS") + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +
+`encoding` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Sub.md b/site/en/api_docs/python/tf/raw_ops/Sub.md new file mode 100644 index 00000000000..9adde6acb7c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Sub.md @@ -0,0 +1,86 @@ +description: Returns x - y element-wise. + +
+ + +
+ +# tf.raw_ops.Sub + + + + + + + + + +Returns x - y element-wise. + + + + + + + + + +*NOTE*: `Subtract` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
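+
+#### Example:
+
+A brief sketch of the element-wise subtraction with broadcasting; the same
+result is produced by `tf.subtract` or the `-` operator.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2, 3],
+                 [4, 5, 6]])
+y = tf.constant([1, 2, 3])  # broadcast across the rows of x
+
+print(tf.raw_ops.Sub(x=x, y=y).numpy())
+# [[0 0 0]
+#  [3 3 3]]
+```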
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Substr.md b/site/en/api_docs/python/tf/raw_ops/Substr.md new file mode 100644 index 00000000000..3da403c49f9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Substr.md @@ -0,0 +1,197 @@ +description: Return substrings from Tensor of strings. + +
+ + +
+ +# tf.raw_ops.Substr + + + + + + + + + +Return substrings from `Tensor` of strings. + + + + + + + + + +For each string in the input `Tensor`, creates a substring starting at index +`pos` with a total length of `len`. + +If `len` defines a substring that would extend beyond the length of the input +string, or if `len` is negative, then as many characters as possible are used. + +A negative `pos` indicates distance within the string backwards from the end. + +If `pos` specifies an index which is out of range for any of the input strings, +then an `InvalidArgumentError` is thrown. + +`pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on +Op creation. + +*NOTE*: `Substr` supports broadcasting up to two dimensions. More about +broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +--- + +Examples + +Using scalar `pos` and `len`: + +```python +input = [b'Hello', b'World'] +position = 1 +length = 3 + +output = [b'ell', b'orl'] +``` + +Using `pos` and `len` with same shape as `input`: + +```python +input = [[b'ten', b'eleven', b'twelve'], + [b'thirteen', b'fourteen', b'fifteen'], + [b'sixteen', b'seventeen', b'eighteen']] +position = [[1, 2, 3], + [1, 2, 3], + [1, 2, 3]] +length = [[2, 3, 4], + [4, 3, 2], + [5, 5, 5]] + +output = [[b'en', b'eve', b'lve'], + [b'hirt', b'urt', b'te'], + [b'ixtee', b'vente', b'hteen']] +``` + +Broadcasting `pos` and `len` onto `input`: + +``` +input = [[b'ten', b'eleven', b'twelve'], + [b'thirteen', b'fourteen', b'fifteen'], + [b'sixteen', b'seventeen', b'eighteen'], + [b'nineteen', b'twenty', b'twentyone']] +position = [1, 2, 3] +length = [1, 2, 3] + +output = [[b'e', b'ev', b'lve'], + [b'h', b'ur', b'tee'], + [b'i', b've', b'hte'], + [b'i', b'en', b'nty']] +``` + +Broadcasting `input` onto `pos` and `len`: + +``` +input = b'thirteen' +position = [1, 5, 7] +length = [3, 2, 1] + +output = [b'hir', b'ee', b'n'] +``` + + + + + + + + + +
+
+* `ValueError`: If the first argument cannot be converted to a
+Tensor of `dtype string`.
+* `InvalidArgumentError`: If indices are out of range.
+* `ValueError`: If `pos` and `len` are not the same shape.
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. Tensor of strings +
+`pos` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Scalar defining the position of first character in each substring +
+`len` + +A `Tensor`. Must have the same type as `pos`. +Scalar defining the number of characters to include in each substring +
+`unit` + +An optional `string` from: `"BYTE", "UTF8_CHAR"`. Defaults to `"BYTE"`. +The unit that is used to create the substring. One of: `"BYTE"` (for +defining position and length by bytes) or `"UTF8_CHAR"` (for the UTF-8 +encoded Unicode code points). The default is `"BYTE"`. Results are undefined if +`unit=UTF8_CHAR` and the `input` strings do not contain structurally valid +UTF-8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Sum.md b/site/en/api_docs/python/tf/raw_ops/Sum.md new file mode 100644 index 00000000000..81287b49fa3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Sum.md @@ -0,0 +1,99 @@ +description: Computes the sum of elements across dimensions of a tensor. + +
+ + +
+ +# tf.raw_ops.Sum + + + + + + + + + +Computes the sum of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input` along the dimensions given in `axis`. Unless +`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in +`axis`. If `keep_dims` is true, the reduced dimensions are +retained with length 1. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +The tensor to reduce. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The dimensions to reduce. Must be in the range +`[-rank(input), rank(input))`. +
+`keep_dims` + +An optional `bool`. Defaults to `False`. +If true, retain reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
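+
+#### Example:
+
+A small sketch of the reduction and `keep_dims` behaviour; `tf.reduce_sum` is
+the usual entry point for the same computation.
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1., 2., 3.],
+                 [4., 5., 6.]])
+
+# Reduce along axis 1, dropping the reduced dimension.
+print(tf.raw_ops.Sum(input=x, axis=[1], keep_dims=False).numpy())  # [ 6. 15.]
+
+# Keep the reduced dimension with length 1.
+print(tf.raw_ops.Sum(input=x, axis=[1], keep_dims=True).numpy())   # [[ 6.] [15.]]
+```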
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SummaryWriter.md b/site/en/api_docs/python/tf/raw_ops/SummaryWriter.md new file mode 100644 index 00000000000..075512e22f6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SummaryWriter.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.SummaryWriter + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Svd.md b/site/en/api_docs/python/tf/raw_ops/Svd.md new file mode 100644 index 00000000000..493e45e02ca --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Svd.md @@ -0,0 +1,131 @@ +description: Computes the singular value decompositions of one or more matrices. + +
+ + +
+ +# tf.raw_ops.Svd + + + + + + + + + +Computes the singular value decompositions of one or more matrices. + + + + + + + + + +Computes the SVD of each inner matrix in `input` such that +`input[..., :, :] = u[..., :, :] * diag(s[..., :, :]) * transpose(v[..., :, :])` + +```python +# a is a tensor containing a batch of matrices. +# s is a tensor of singular values for each matrix. +# u is the tensor containing the left singular vectors for each matrix. +# v is the tensor containing the right singular vectors for each matrix. +s, u, v = svd(a) +s, _, _ = svd(a, compute_uv=False) +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`. +A tensor of shape `[..., M, N]` whose inner-most 2 dimensions +form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`. +
+
+`compute_uv`
+
+An optional `bool`. Defaults to `True`.
+If true, left and right singular vectors will be
+computed and returned in `u` and `v`, respectively.
+If false, `u` and `v` are not set and should never be referenced.
+
+`full_matrices` + +An optional `bool`. Defaults to `False`. +If true, compute full-sized `u` and `v`. If false +(the default), compute only the leading `P` singular vectors. +Ignored if `compute_uv` is `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (s, u, v). +
+`s` + +A `Tensor`. Has the same type as `input`. +
+`u` + +A `Tensor`. Has the same type as `input`. +
+`v` + +A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Switch.md b/site/en/api_docs/python/tf/raw_ops/Switch.md new file mode 100644 index 00000000000..c5d3b8bc58f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Switch.md @@ -0,0 +1,103 @@ +description: Forwards data to the output port determined by pred. + +
+ + +
+ +# tf.raw_ops.Switch + + + + + + + + + +Forwards `data` to the output port determined by `pred`. + + + + + + + + + +If `pred` is true, the `data` input is forwarded to `output_true`. Otherwise, +the data goes to `output_false`. + +See also `RefSwitch` and `Merge`. + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. The tensor to be forwarded to the appropriate output. +
+`pred` + +A `Tensor` of type `bool`. +A scalar that specifies which output port will receive data. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_false, output_true). +
+`output_false` + +A `Tensor`. Has the same type as `data`. +
+`output_true` + +A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/SymbolicGradient.md b/site/en/api_docs/python/tf/raw_ops/SymbolicGradient.md new file mode 100644 index 00000000000..6d6f5f1467d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/SymbolicGradient.md @@ -0,0 +1,110 @@ +description: Computes the gradient function for function f via backpropagation. + +
+ + +
+ +# tf.raw_ops.SymbolicGradient + + + + + + + + + +Computes the gradient function for function f via backpropagation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A list of `Tensor` objects. a list of input tensors of size N + M; +
+`Tout` + +A list of `tf.DTypes` that has length `>= 1`. +the type list for the input list. +
+
+`f`
+
+A function decorated with @Defun.
+The function we want to compute the gradient for.
+
+The function 'f' must be a numerical function which takes N inputs and
+produces M outputs. Its gradient function 'g', which is computed by
+this SymbolicGradient op, is a function taking N + M inputs and
+producing N outputs.
+
+I.e. if we have
+`(y1, y2, ..., y_M) = f(x1, x2, ..., x_N)`,
+then
+`(dL/dx1, dL/dx2, ..., dL/dx_N) = g(x1, x2, ..., x_N, dL/dy1, dL/dy2, ..., dL/dy_M)`,
+
+where L is a scalar-valued function of `(x1, x2, ..., x_N)` (e.g., the
+loss function) and `dL/dx_i` is the partial derivative of L with respect
+to `x_i`.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TFRecordDataset.md b/site/en/api_docs/python/tf/raw_ops/TFRecordDataset.md new file mode 100644 index 00000000000..63f63ae60a9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TFRecordDataset.md @@ -0,0 +1,97 @@ +description: Creates a dataset that emits the records from one or more TFRecord files. + +
+ + +
+ +# tf.raw_ops.TFRecordDataset + + + + + + + + + +Creates a dataset that emits the records from one or more TFRecord files. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A `Tensor` of type `string`. +A scalar or vector containing the name(s) of the file(s) to be +read. +
+`compression_type` + +A `Tensor` of type `string`. +A scalar containing either (i) the empty string (no +compression), (ii) "ZLIB", or (iii) "GZIP". +
+`buffer_size` + +A `Tensor` of type `int64`. +A scalar representing the number of bytes to buffer. A value of +0 means no buffering will be performed. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
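+
+#### Example:
+
+A hedged sketch; the file path is hypothetical and exists only for
+illustration. The raw op yields a variant-typed dataset tensor, while most
+code uses the high-level `tf.data.TFRecordDataset` wrapper instead.
+
+```python
+import tensorflow as tf
+
+filenames = tf.constant(["/tmp/example.tfrecord"])  # hypothetical path
+
+variant = tf.raw_ops.TFRecordDataset(
+    filenames=filenames,
+    compression_type="",   # or "ZLIB" / "GZIP"
+    buffer_size=262144,    # 256 KiB read buffer
+)
+
+# High-level equivalent used in most pipelines:
+ds = tf.data.TFRecordDataset(["/tmp/example.tfrecord"])
+```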
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TFRecordReader.md b/site/en/api_docs/python/tf/raw_ops/TFRecordReader.md new file mode 100644 index 00000000000..7e798b003b9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TFRecordReader.md @@ -0,0 +1,95 @@ +description: A Reader that outputs the records from a TensorFlow Records file. + +
+ + +
+ +# tf.raw_ops.TFRecordReader + + + + + + + + + +A Reader that outputs the records from a TensorFlow Records file. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`compression_type` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TFRecordReaderV2.md b/site/en/api_docs/python/tf/raw_ops/TFRecordReaderV2.md new file mode 100644 index 00000000000..fb34f02e343 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TFRecordReaderV2.md @@ -0,0 +1,95 @@ +description: A Reader that outputs the records from a TensorFlow Records file. + +
+ + +
+ +# tf.raw_ops.TFRecordReaderV2 + + + + + + + + + +A Reader that outputs the records from a TensorFlow Records file. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`compression_type` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TPUCompilationResult.md b/site/en/api_docs/python/tf/raw_ops/TPUCompilationResult.md new file mode 100644 index 00000000000..9901bc26b81 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TPUCompilationResult.md @@ -0,0 +1,73 @@ +description: Returns the result of a TPU compilation. + +
+ + +
+ +# tf.raw_ops.TPUCompilationResult + + + + + + + + + +Returns the result of a TPU compilation. + + + + + + + + + +This operation returns the result of a TPU compilation as a serialized +CompilationResultProto, which holds a status and an error message if an error +occurred during compilation. + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TPUEmbeddingActivations.md b/site/en/api_docs/python/tf/raw_ops/TPUEmbeddingActivations.md new file mode 100644 index 00000000000..ea861cf6f5f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TPUEmbeddingActivations.md @@ -0,0 +1,109 @@ +description: An op enabling differentiation of TPU Embeddings. + +
+ + +
+ +# tf.raw_ops.TPUEmbeddingActivations + + + + + + + + + +An op enabling differentiation of TPU Embeddings. + + + + + + + + + +This op simply returns its first input, which is assumed to have been sliced +from the Tensors returned by TPUEmbeddingDequeueActivations. The presence of +this op, and its first argument being a trainable Variable, enables automatic +differentiation of graphs containing embeddings via the TPU Embedding Python +libraries. + + + + + + + + + + + + + + + + + + + + + + +
+`embedding_variable` + +A `Tensor` of type `float32`. +A trainable variable, enabling optimizers to find this op. +
+`sliced_activations` + +A `Tensor` of type `float32`. +The embedding activations Tensor to return. +
+`table_id` + +An `int` that is `>= 0`. +The id of the table in the embedding layer configuration from which +these activations were computed. +
+`lookup_id` + +An `int` that is `>= 0`. +Identifier of the set of embedding indices which produced these +activations. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TPUOrdinalSelector.md b/site/en/api_docs/python/tf/raw_ops/TPUOrdinalSelector.md new file mode 100644 index 00000000000..7722549c355 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TPUOrdinalSelector.md @@ -0,0 +1,73 @@ +description: A TPU core selector Op. + +
+ + +
+ +# tf.raw_ops.TPUOrdinalSelector + + + + + + + + + +A TPU core selector Op. + + + + + + + + + +This Op produces a set of TPU cores (for warm-up) or a single TPU core +(for regular inference) to execute the TPU program on. The output is +consumed by TPUPartitionedCall. + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TPUPartitionedCall.md b/site/en/api_docs/python/tf/raw_ops/TPUPartitionedCall.md new file mode 100644 index 00000000000..d93747a3f9f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TPUPartitionedCall.md @@ -0,0 +1,106 @@ +description: Calls a function placed on a specified TPU device. + +
+ + +
+ +# tf.raw_ops.TPUPartitionedCall + + + + + + + + + +Calls a function placed on a specified TPU device. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`args` + +A list of `Tensor` objects. The arguments to the function. +
+`device_ordinal` + +A `Tensor` of type `int32`. +The TPU device ordinal to run the function on. +
+`Tout` + +A list of `tf.DTypes`. The types of the outputs of the function. +
+`f` + +A function decorated with @Defun. The function to call. +
+`autotuner_thresh` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `Tout`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TPUReplicateMetadata.md b/site/en/api_docs/python/tf/raw_ops/TPUReplicateMetadata.md new file mode 100644 index 00000000000..b66a46586b1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TPUReplicateMetadata.md @@ -0,0 +1,150 @@ +description: Metadata indicating how the TPU computation should be replicated. + +
+ + +
+ +# tf.raw_ops.TPUReplicateMetadata + + + + + + + + + +Metadata indicating how the TPU computation should be replicated. + + + + + + + + + +This operation holds the metadata common to operations of a `tpu.replicate()` computation subgraph. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_replicas` + +An `int` that is `>= 0`. +Number of replicas of the computation +
+`num_cores_per_replica` + +An optional `int`. Defaults to `1`. +Number of cores per replica. Used for model parallelism. +
+`topology` + +An optional `string`. Defaults to `""`. +TopologyProto indicating the topology of the TPU pod slice. +
+`use_tpu` + +An optional `bool`. Defaults to `True`. +Whether to place the computation on the TPU. +
+`device_assignment` + +An optional list of `ints`. Defaults to `[]`. +The assignment of devices for the computation. +
+`computation_shape` + +An optional list of `ints`. Defaults to `[]`. +DEPRECATED. Use num_cores_per_replica instead. +
+`host_compute_core` + +An optional list of `strings`. Defaults to `[]`. +
+`padding_map` + +An optional list of `strings`. Defaults to `[]`. +
+`step_marker_location` + +An optional `string`. Defaults to `"STEP_MARK_AT_ENTRY"`. +
+`allow_soft_placement` + +An optional `bool`. Defaults to `False`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TPUReplicatedInput.md b/site/en/api_docs/python/tf/raw_ops/TPUReplicatedInput.md new file mode 100644 index 00000000000..7bbe227d508 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TPUReplicatedInput.md @@ -0,0 +1,104 @@ +description: Connects N inputs to an N-way replicated TPU computation. + +
+ + +
+ +# tf.raw_ops.TPUReplicatedInput + + + + + + + + + +Connects N inputs to an N-way replicated TPU computation. + + + + + + + + + +This operation holds a replicated input to a `tpu.replicate()` computation subgraph. +Each replicated input has the same shape and type alongside the output. + +#### For example: + + +``` +%a = "tf.opA"() +%b = "tf.opB"() +%replicated_input = "tf.TPUReplicatedInput"(%a, %b) +%computation = "tf.Computation"(%replicated_input) +``` +The above computation has a replicated input of two replicas. + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A list of at least 1 `Tensor` objects with the same type. +
+`is_mirrored_variable` + +An optional `bool`. Defaults to `False`. +
+`index` + +An optional `int`. Defaults to `-1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `inputs`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TPUReplicatedOutput.md b/site/en/api_docs/python/tf/raw_ops/TPUReplicatedOutput.md new file mode 100644 index 00000000000..f2a83fc74db --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TPUReplicatedOutput.md @@ -0,0 +1,95 @@ +description: Connects N outputs from an N-way replicated TPU computation. + +
+ + +
+ +# tf.raw_ops.TPUReplicatedOutput + + + + + + + + + +Connects N outputs from an N-way replicated TPU computation. + + + + + + + + + +This operation holds a replicated output from a `tpu.replicate()` computation subgraph. +Each replicated output has the same shape and type alongside the input. + +#### For example: + + +``` +%computation = "tf.Computation"() +%replicated_output:2 = "tf.TPUReplicatedOutput"(%computation) +``` +The above computation has a replicated output of two replicas. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`num_replicas` + +An `int` that is `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `num_replicas` `Tensor` objects with the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TakeDataset.md b/site/en/api_docs/python/tf/raw_ops/TakeDataset.md new file mode 100644 index 00000000000..63942f54a85 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TakeDataset.md @@ -0,0 +1,101 @@ +description: Creates a dataset that contains count elements from the input_dataset. + +
+ + +
+ +# tf.raw_ops.TakeDataset + + + + + + + + + +Creates a dataset that contains `count` elements from the `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`count` + +A `Tensor` of type `int64`. +A scalar representing the number of elements from the `input_dataset` +that should be taken. A value of `-1` indicates that all of `input_dataset` +is taken. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
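+
+#### Example:
+
+The high-level `tf.data.Dataset.take` transformation corresponds to this op;
+a short sketch of the counting behaviour, including the `-1` case.
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(10)
+
+print(list(ds.take(3).as_numpy_iterator()))   # [0, 1, 2]
+print(list(ds.take(-1).as_numpy_iterator()))  # count = -1 takes all elements
+```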
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TakeManySparseFromTensorsMap.md b/site/en/api_docs/python/tf/raw_ops/TakeManySparseFromTensorsMap.md new file mode 100644 index 00000000000..473fd279284 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TakeManySparseFromTensorsMap.md @@ -0,0 +1,175 @@ +description: Read SparseTensors from a SparseTensorsMap and concatenate them. + +
+ + +
+ +# tf.raw_ops.TakeManySparseFromTensorsMap + + + + + + + + + +Read `SparseTensors` from a `SparseTensorsMap` and concatenate them. + + + + + + + + + +The input `sparse_handles` must be an `int64` matrix of shape `[N, 1]` where +`N` is the minibatch size and the rows correspond to the output handles of +`AddSparseToTensorsMap` or `AddManySparseToTensorsMap`. The ranks of the +original `SparseTensor` objects that went into the given input ops must all +match. When the final `SparseTensor` is created, it has rank one +higher than the ranks of the incoming `SparseTensor` objects +(they have been concatenated along a new row dimension on the left). + +The output `SparseTensor` object's shape values for all dimensions but the +first are the max across the input `SparseTensor` objects' shape values +for the corresponding dimensions. Its first shape value is `N`, the minibatch +size. + +The input `SparseTensor` objects' indices are assumed ordered in +standard lexicographic order. If this is not the case, after this +step run `SparseReorder` to restore index ordering. + +For example, if the handles represent an input, which is a `[2, 3]` matrix +representing two original `SparseTensor` objects: + +``` + index = [ 0] + [10] + [20] + values = [1, 2, 3] + shape = [50] +``` + +and + +``` + index = [ 2] + [10] + values = [4, 5] + shape = [30] +``` + +then the final `SparseTensor` will be: + +``` + index = [0 0] + [0 10] + [0 20] + [1 2] + [1 10] + values = [1, 2, 3, 4, 5] + shape = [2 50] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`sparse_handles` + +A `Tensor` of type `int64`. +1-D, The `N` serialized `SparseTensor` objects. +Shape: `[N]`. +
+`dtype` + +A tf.DType. +The `dtype` of the `SparseTensor` objects stored in the +`SparseTensorsMap`. +
+`container` + +An optional `string`. Defaults to `""`. +The container name for the `SparseTensorsMap` read by this op. +
+`shared_name` + +An optional `string`. Defaults to `""`. +The shared name for the `SparseTensorsMap` read by this op. +It should not be blank; rather the `shared_name` or unique Operation name +of the Op that created the original `SparseTensorsMap` should be used. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shape). +
+`sparse_indices` + +A `Tensor` of type `int64`. +
+`sparse_values` + +A `Tensor` of type `dtype`. +
+`sparse_shape` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TakeWhileDataset.md b/site/en/api_docs/python/tf/raw_ops/TakeWhileDataset.md new file mode 100644 index 00000000000..5bbcc7951c4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TakeWhileDataset.md @@ -0,0 +1,114 @@ +description: Creates a dataset that stops iteration when predicate is false. + +
+ + +
+
+# tf.raw_ops.TakeWhileDataset
+
+
+
+
+
+
+
+
+
+
+Creates a dataset that stops iteration when `predicate` is false.
+
+
+
+
+
+
+
+
+
+
+The `predicate` function must return a scalar boolean and accept the
+following arguments:
+
+* One tensor for each component of an element of `input_dataset`.
+* One tensor for each value in `other_arguments`.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`input_dataset` + +A `Tensor` of type `variant`. +
+`other_arguments` + +A list of `Tensor` objects. +A list of tensors, typically values that were captured when +building a closure for `predicate`. +
+`predicate` + +A function decorated with @Defun. +A function returning a scalar boolean. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
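+
+#### Example:
+
+A hedged sketch using the high-level counterpart,
+`tf.data.experimental.take_while`, which builds this dataset op.
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.range(10)
+
+# Stop emitting elements as soon as the predicate returns False.
+ds = ds.apply(tf.data.experimental.take_while(lambda x: x < 4))
+
+print(list(ds.as_numpy_iterator()))  # [0, 1, 2, 3]
+```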
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Tan.md b/site/en/api_docs/python/tf/raw_ops/Tan.md new file mode 100644 index 00000000000..1911d08c862 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Tan.md @@ -0,0 +1,86 @@ +description: Computes tan of x element-wise. + +
+ + +
+ +# tf.raw_ops.Tan + + + + + + + + + +Computes tan of x element-wise. + + + + + + + + + + Given an input tensor, this function computes tangent of every + element in the tensor. Input range is `(-inf, inf)` and + output range is `(-inf, inf)`. If input lies outside the boundary, `nan` + is returned. + + ```python + x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")]) + tf.math.tan(x) ==> [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Tanh.md b/site/en/api_docs/python/tf/raw_ops/Tanh.md new file mode 100644 index 00000000000..7b19daef4e4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Tanh.md @@ -0,0 +1,85 @@ +description: Computes hyperbolic tangent of x element-wise. + +
+ + +
+ +# tf.raw_ops.Tanh + + + + + + + + + +Computes hyperbolic tangent of `x` element-wise. + + + + + + + + + + Given an input tensor, this function computes hyperbolic tangent of every + element in the tensor. Input range is `[-inf, inf]` and + output range is `[-1,1]`. + + ```python + x = tf.constant([-float("inf"), -5, -0.5, 1, 1.2, 2, 3, float("inf")]) + tf.math.tanh(x) ==> [-1. -0.99990916 -0.46211717 0.7615942 0.8336547 0.9640276 0.9950547 1.] + ``` + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TanhGrad.md b/site/en/api_docs/python/tf/raw_ops/TanhGrad.md new file mode 100644 index 00000000000..5f7cac933a3 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TanhGrad.md @@ -0,0 +1,86 @@ +description: Computes the gradient for the tanh of x wrt its input. + +
+ + +
+ +# tf.raw_ops.TanhGrad + + + + + + + + + +Computes the gradient for the tanh of `x` wrt its input. + + + + + + + + + +Specifically, `grad = dy * (1 - y*y)`, where `y = tanh(x)`, and `dy` +is the corresponding input gradient. + + + + + + + + + + + + + + + + +
+`y` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`dy` + +A `Tensor`. Must have the same type as `y`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `y`. +
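+
+#### Example:
+
+An illustrative numerical check of the identity `grad = dy * (1 - y*y)` using
+`tf.GradientTape` (with `dy` implicitly set to ones).
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-1.0, 0.0, 2.0])
+
+with tf.GradientTape() as tape:
+    tape.watch(x)
+    y = tf.tanh(x)
+
+dy_dx = tape.gradient(y, x)  # gradient with dy = ones
+manual = 1.0 - y * y         # the identity stated above
+
+tf.debugging.assert_near(dy_dx, manual)
+```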
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TemporaryVariable.md b/site/en/api_docs/python/tf/raw_ops/TemporaryVariable.md new file mode 100644 index 00000000000..2715b89e2bf --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TemporaryVariable.md @@ -0,0 +1,108 @@ +description: Returns a tensor that may be mutated, but only persists within a single step. + +
+ + +
+ +# tf.raw_ops.TemporaryVariable + + + + + + + + + +Returns a tensor that may be mutated, but only persists within a single step. + + + + + + + + + +This is an experimental op for internal use only and it is possible to use this +op in unsafe ways. DO NOT USE unless you fully understand the risks. + +It is the caller's responsibility to ensure that 'ref' is eventually passed to a +matching 'DestroyTemporaryVariable' op after all other uses have completed. + +Outputs a ref to the tensor state so it may be read or modified. + + E.g. + var = state_ops._temporary_variable([1, 2], types.float_) + var_name = var.op.name + var = state_ops.assign(var, [[4.0, 5.0]]) + var = state_ops.assign_add(var, [[6.0, 7.0]]) + final = state_ops._destroy_temporary_variable(var, var_name=var_name) + + + + + + + + + + + + + + + + + + + +
+`shape` + +A tf.TensorShape or list of `ints`. +The shape of the variable tensor. +
+`dtype` + +A tf.DType. The type of elements in the variable tensor. +
+`var_name` + +An optional `string`. Defaults to `""`. +Overrides the name used for the temporary variable resource. Default +value is the name of the 'TemporaryVariable' op (which is guaranteed unique). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArray.md b/site/en/api_docs/python/tf/raw_ops/TensorArray.md new file mode 100644 index 00000000000..e4ab62d421d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArray.md @@ -0,0 +1,111 @@ +
+ + +
+ +# tf.raw_ops.TensorArray + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`size` + +A `Tensor` of type `int32`. +
+`dtype` + +A tf.DType. +
+`dynamic_size` + +An optional `bool`. Defaults to `False`. +
+`clear_after_read` + +An optional `bool`. Defaults to `True`. +
+`tensor_array_name` + +An optional `string`. Defaults to `""`. +
+`element_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayClose.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayClose.md new file mode 100644 index 00000000000..26463f60cdd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayClose.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.TensorArrayClose + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayCloseV2.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayCloseV2.md new file mode 100644 index 00000000000..8497fa0ea0b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayCloseV2.md @@ -0,0 +1,77 @@ +description: Deprecated. Use TensorArrayCloseV3 + +
+ + +
+ +# tf.raw_ops.TensorArrayCloseV2 + + + + + + + + + +Deprecated. Use TensorArrayCloseV3 + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayCloseV3.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayCloseV3.md new file mode 100644 index 00000000000..e1d0746433c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayCloseV3.md @@ -0,0 +1,80 @@ +description: Delete the TensorArray from its resource container. + +
+ + +
+ +# tf.raw_ops.TensorArrayCloseV3 + + + + + + + + + +Delete the TensorArray from its resource container. + + + + + + + + + +This enables the user to close and release the resource in the middle +of a step/run. + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. +The handle to a TensorArray (output of TensorArray or TensorArrayGrad). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayConcat.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayConcat.md new file mode 100644 index 00000000000..2dc97ad63a7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayConcat.md @@ -0,0 +1,110 @@ +
+ + +
+ +# tf.raw_ops.TensorArrayConcat + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`dtype` + +A tf.DType. +
+`element_shape_except0` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (value, lengths). +
+`value` + +A `Tensor` of type `dtype`. +
+`lengths` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayConcatV2.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayConcatV2.md new file mode 100644 index 00000000000..c4857dda4cd --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayConcatV2.md @@ -0,0 +1,112 @@ +description: Deprecated. Use TensorArrayConcatV3 + +
+ + +
+ +# tf.raw_ops.TensorArrayConcatV2 + + + + + + + + + +Deprecated. Use TensorArrayConcatV3 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`dtype` + +A tf.DType. +
+`element_shape_except0` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (value, lengths). +
+`value` + +A `Tensor` of type `dtype`. +
+`lengths` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayConcatV3.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayConcatV3.md new file mode 100644 index 00000000000..ff9f661fe50 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayConcatV3.md @@ -0,0 +1,128 @@ +description: Concat the elements from the TensorArray into value value. + +
+ + +
+ +# tf.raw_ops.TensorArrayConcatV3 + + + + + + + + + +Concat the elements from the TensorArray into value `value`. + + + + + + + + + +Takes `T` elements of shapes + + ``` + (n0 x d0 x d1 x ...), (n1 x d0 x d1 x ...), ..., (n(T-1) x d0 x d1 x ...) + ``` + +and concatenates them into a Tensor of shape: + + ```(n0 + n1 + ... + n(T-1) x d0 x d1 x ...)``` + +All elements must have the same shape (excepting the first dimension). + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a TensorArray. +
+`flow_in` + +A `Tensor` of type `float32`. +A float scalar that enforces proper chaining of operations. +
+`dtype` + +A `tf.DType`. The type of the elem that is returned. +
+`element_shape_except0` + +An optional `tf.TensorShape` or list of `ints`. Defaults to `None`. +The expected shape of an element, if known, +excluding the first dimension. Used to validate the shapes of +TensorArray elements. If this shape is not fully specified, concatenating +zero-size TensorArrays is an error. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (value, lengths). +
+`value` + +A `Tensor` of type `dtype`. +
+`lengths` + +A `Tensor` of type `int64`. +
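+
+A minimal usage sketch, assuming TF 2.x eager execution. It drives the same
+concat semantics through the public `tf.TensorArray` wrapper (which, depending
+on execution mode, may be backed by this op or by the TensorList ops):
+
+```python
+import tensorflow as tf
+
+ta = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True,
+                    infer_shape=False, element_shape=tf.TensorShape([None, 2]))
+ta = ta.write(0, tf.zeros([2, 2], tf.int32))  # element of shape (2 x 2)
+ta = ta.write(1, tf.ones([3, 2], tf.int32))   # element of shape (3 x 2)
+value = ta.concat()                           # shape (2 + 3, 2) == (5, 2)
+print(value.shape)  # (5, 2)
+```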
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayGather.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayGather.md new file mode 100644 index 00000000000..ebe509dba33 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayGather.md @@ -0,0 +1,103 @@ +
+ + +
+ +# tf.raw_ops.TensorArrayGather + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`indices` + +A `Tensor` of type `int32`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`dtype` + +A tf.DType. +
+`element_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayGatherV2.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayGatherV2.md new file mode 100644 index 00000000000..69afd0add87 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayGatherV2.md @@ -0,0 +1,105 @@ +description: Deprecated. Use TensorArrayGatherV3 + +
+ + +
+ +# tf.raw_ops.TensorArrayGatherV2 + + + + + + + + + +Deprecated. Use TensorArrayGatherV3 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +
+`indices` + +A `Tensor` of type `int32`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`dtype` + +A tf.DType. +
+`element_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayGatherV3.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayGatherV3.md new file mode 100644 index 00000000000..dd8def35890 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayGatherV3.md @@ -0,0 +1,111 @@ +description: Gather specific elements from the TensorArray into output value. + +
+ + +
+ +# tf.raw_ops.TensorArrayGatherV3 + + + + + + + + + +Gather specific elements from the TensorArray into output `value`. + + + + + + + + + +All elements selected by `indices` must have the same shape. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a TensorArray. +
+`indices` + +A `Tensor` of type `int32`. +The locations in the TensorArray from which to read tensor elements. +
+`flow_in` + +A `Tensor` of type `float32`. +A float scalar that enforces proper chaining of operations. +
+`dtype` + +A tf.DType. The type of the elem that is returned. +
+`element_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +The expected shape of an element, if known. Used to +validate the shapes of TensorArray elements. If this shape is not +fully specified, gathering zero-size TensorArrays is an error. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
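+
+A minimal usage sketch, assuming TF 2.x eager execution, using the public
+`tf.TensorArray.gather` method, which exposes the same gather semantics:
+
+```python
+import tensorflow as tf
+
+ta = tf.TensorArray(dtype=tf.int32, size=4)
+for i in range(4):
+    ta = ta.write(i, tf.fill([2], i))  # element i is [i, i]
+print(ta.gather([3, 0]).numpy())       # [[3 3]
+                                       #  [0 0]]
+```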
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayGrad.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayGrad.md new file mode 100644 index 00000000000..25acd892a90 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayGrad.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.TensorArrayGrad + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`source` + +A `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayGradV2.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayGradV2.md new file mode 100644 index 00000000000..56a2f03de7c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayGradV2.md @@ -0,0 +1,91 @@ +description: Deprecated. Use TensorArrayGradV3 + +
+ + +
+ +# tf.raw_ops.TensorArrayGradV2 + + + + + + + + + +Deprecated. Use TensorArrayGradV3 + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`source` + +A `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayGradV3.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayGradV3.md new file mode 100644 index 00000000000..5e05a25fd79 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayGradV3.md @@ -0,0 +1,145 @@ +description: Creates a TensorArray for storing the gradients of values in the given handle. + +
+ + +
+ +# tf.raw_ops.TensorArrayGradV3 + + + + + + + + + +Creates a TensorArray for storing the gradients of values in the given handle. + + + + + + + + + +If the given TensorArray gradient already exists, returns a reference to it. + +Locks the size of the original TensorArray by disabling its dynamic size flag. + +**A note about the input flow_in:** + +The handle flow_in forces the execution of the gradient lookup to occur +only after certain other operations have occurred. For example, when +the forward TensorArray is dynamically sized, writes to this TensorArray +may resize the object. The gradient TensorArray is statically sized based +on the size of the forward TensorArray when this operation executes. +Furthermore, the size of the forward TensorArray is frozen by this call. +As a result, the flow is used to ensure that the call to generate the gradient +TensorArray only happens after all writes are executed. + +In the case of dynamically sized TensorArrays, gradient computation should +only be performed on read operations that have themselves been chained via +flow to occur only after all writes have executed. That way the final size +of the forward TensorArray is known when this operation is called. + +**A note about the source attribute:** + +TensorArray gradient calls use an accumulator TensorArray object. If +multiple gradients are calculated and run in the same session, the multiple +gradient nodes may accidentally flow through the same accumulator TensorArray. +This double counts and generally breaks the TensorArray gradient flow. + +The solution is to identify which gradient call this particular +TensorArray gradient is being called in. This is performed by identifying +a unique string (e.g. "gradients", "gradients_1", ...) from the input +gradient Tensor's name. This string is used as a suffix when creating +the TensorArray gradient object here (the attribute `source`). + +The attribute `source` is added as a suffix to the forward TensorArray's +name when performing the creation / lookup, so that each separate gradient +calculation gets its own TensorArray accumulator. + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. +The handle to the forward TensorArray. +
+`flow_in` + +A `Tensor` of type `float32`. +A float scalar that enforces proper chaining of operations. +
+`source` + +A `string`. +The gradient source string, used to decide which gradient TensorArray +to return. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (grad_handle, flow_out). +
+`grad_handle` + +A `Tensor` of type `resource`. +
+`flow_out` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayGradWithShape.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayGradWithShape.md new file mode 100644 index 00000000000..175e88dff7e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayGradWithShape.md @@ -0,0 +1,123 @@ +description: Creates a TensorArray for storing multiple gradients of values in the given handle. + +
+ + +
+ +# tf.raw_ops.TensorArrayGradWithShape + + + + + + + + + +Creates a TensorArray for storing multiple gradients of values in the given handle. + + + + + + + + + +Similar to TensorArrayGradV3. However it creates an accumulator with an +expanded shape compared to the input TensorArray whose gradient is being +computed. This enables multiple gradients for the same TensorArray to be +calculated using the same accumulator. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. +The handle to the forward TensorArray. +
+`flow_in` + +A `Tensor` of type `float32`. +A float scalar that enforces proper chaining of operations. +
+`shape_to_prepend` + +A `Tensor` of type `int32`. +An int32 vector representing a shape. Elements in the gradient accumulator will +have shape which is this shape_to_prepend value concatenated with shape of the +elements in the TensorArray corresponding to the input handle. +
+`source` + +A `string`. +The gradient source string, used to decide which gradient TensorArray +to return. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (grad_handle, flow_out). +
+`grad_handle` + +A `Tensor` of type `resource`. +
+`flow_out` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayPack.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayPack.md new file mode 100644 index 00000000000..5ad9d89f0be --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayPack.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.TensorArrayPack + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`dtype` + +A tf.DType. +
+`element_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayRead.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayRead.md new file mode 100644 index 00000000000..ab3118f2ab4 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayRead.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.TensorArrayRead + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`index` + +A `Tensor` of type `int32`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayReadV2.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayReadV2.md new file mode 100644 index 00000000000..a9e019aa34f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayReadV2.md @@ -0,0 +1,98 @@ +description: Deprecated. Use TensorArrayReadV3 + +
+ + +
+ +# tf.raw_ops.TensorArrayReadV2 + + + + + + + + + +Deprecated. Use TensorArrayReadV3 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +
+`index` + +A `Tensor` of type `int32`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayReadV3.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayReadV3.md new file mode 100644 index 00000000000..8b1e5827a52 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayReadV3.md @@ -0,0 +1,99 @@ +description: Read an element from the TensorArray into output value. + +
+ + +
+ +# tf.raw_ops.TensorArrayReadV3 + + + + + + + + + +Read an element from the TensorArray into output `value`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a TensorArray. +
+`index` + +A `Tensor` of type `int32`. +
+`flow_in` + +A `Tensor` of type `float32`. +A float scalar that enforces proper chaining of operations. +
+`dtype` + +A tf.DType. The type of the elem that is returned. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayScatter.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayScatter.md new file mode 100644 index 00000000000..1ec789ba846 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayScatter.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.TensorArrayScatter + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`indices` + +A `Tensor` of type `int32`. +
+`value` + +A `Tensor`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayScatterV2.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayScatterV2.md new file mode 100644 index 00000000000..d7633c2dc48 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayScatterV2.md @@ -0,0 +1,98 @@ +description: Deprecated. Use TensorArrayScatterV3 + +
+ + +
+ +# tf.raw_ops.TensorArrayScatterV2 + + + + + + + + + +Deprecated. Use TensorArrayScatterV3 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +
+`indices` + +A `Tensor` of type `int32`. +
+`value` + +A `Tensor`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayScatterV3.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayScatterV3.md new file mode 100644 index 00000000000..b675e3c2585 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayScatterV3.md @@ -0,0 +1,101 @@ +description: Scatter the data from the input value into specific TensorArray elements. + +
+ + +
+ +# tf.raw_ops.TensorArrayScatterV3 + + + + + + + + + +Scatter the data from the input value into specific TensorArray elements. + + + + + + + + + +`indices` must be a vector, its length must match the first dim of `value`. + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a TensorArray. +
+`indices` + +A `Tensor` of type `int32`. +The locations at which to write the tensor elements. +
+`value` + +A `Tensor`. The concatenated tensor to write to the TensorArray. +
+`flow_in` + +A `Tensor` of type `float32`. +A float scalar that enforces proper chaining of operations. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
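+
+A minimal usage sketch, assuming TF 2.x eager execution, using the public
+`tf.TensorArray.scatter` method (same semantics: row `i` of `value` is written
+to position `indices[i]`):
+
+```python
+import tensorflow as tf
+
+ta = tf.TensorArray(dtype=tf.int32, size=4)
+value = tf.constant([[1, 1], [2, 2]])          # first dim matches len(indices)
+ta = ta.scatter(indices=[2, 0], value=value)
+print(ta.read(2).numpy())                      # [1 1]
+print(ta.read(0).numpy())                      # [2 2]
+```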
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArraySize.md b/site/en/api_docs/python/tf/raw_ops/TensorArraySize.md new file mode 100644 index 00000000000..ef472607e21 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArraySize.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.TensorArraySize + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArraySizeV2.md b/site/en/api_docs/python/tf/raw_ops/TensorArraySizeV2.md new file mode 100644 index 00000000000..cbcb9a4dc62 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArraySizeV2.md @@ -0,0 +1,84 @@ +description: Deprecated. Use TensorArraySizeV3 + +
+ + +
+ +# tf.raw_ops.TensorArraySizeV2 + + + + + + + + + +Deprecated. Use TensorArraySizeV3 + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArraySizeV3.md b/site/en/api_docs/python/tf/raw_ops/TensorArraySizeV3.md new file mode 100644 index 00000000000..9d398d0d000 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArraySizeV3.md @@ -0,0 +1,86 @@ +description: Get the current size of the TensorArray. + +
+ + +
+ +# tf.raw_ops.TensorArraySizeV3 + + + + + + + + + +Get the current size of the TensorArray. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. +The handle to a TensorArray (output of TensorArray or TensorArrayGrad). +
+`flow_in` + +A `Tensor` of type `float32`. +A float scalar that enforces proper chaining of operations. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArraySplit.md b/site/en/api_docs/python/tf/raw_ops/TensorArraySplit.md new file mode 100644 index 00000000000..e21ff4e1b72 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArraySplit.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.TensorArraySplit + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`value` + +A `Tensor`. +
+`lengths` + +A `Tensor` of type `int64`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArraySplitV2.md b/site/en/api_docs/python/tf/raw_ops/TensorArraySplitV2.md new file mode 100644 index 00000000000..4c2756adf9a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArraySplitV2.md @@ -0,0 +1,98 @@ +description: Deprecated. Use TensorArraySplitV3 + +
+ + +
+ +# tf.raw_ops.TensorArraySplitV2 + + + + + + + + + +Deprecated. Use TensorArraySplitV3 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +
+`value` + +A `Tensor`. +
+`lengths` + +A `Tensor` of type `int64`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArraySplitV3.md b/site/en/api_docs/python/tf/raw_ops/TensorArraySplitV3.md new file mode 100644 index 00000000000..2aed6afbd47 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArraySplitV3.md @@ -0,0 +1,118 @@ +description: Split the data from the input value into TensorArray elements. + +
+ + +
+ +# tf.raw_ops.TensorArraySplitV3 + + + + + + + + + +Split the data from the input value into TensorArray elements. + + + + + + + + + +Assuming that `lengths` takes on values + + ```(n0, n1, ..., n(T-1))``` + +and that `value` has shape + + ```(n0 + n1 + ... + n(T-1) x d0 x d1 x ...)```, + +this splits values into a TensorArray with T tensors. + +TensorArray index t will be the subtensor of values with starting position + + ```(n0 + n1 + ... + n(t-1), 0, 0, ...)``` + +and having size + + ```nt x d0 x d1 x ...``` + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a TensorArray. +
+`value` + +A `Tensor`. The concatenated tensor to write to the TensorArray. +
+`lengths` + +A `Tensor` of type `int64`. +The vector of lengths, how to split the rows of value into the +TensorArray. +
+`flow_in` + +A `Tensor` of type `float32`. +A float scalar that enforces proper chaining of operations. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
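+
+A minimal usage sketch, assuming TF 2.x eager execution, using the public
+`tf.TensorArray.split` method with `T = 2`, `lengths = (2, 3)` and a `value`
+of shape `(2 + 3) x 2`:
+
+```python
+import tensorflow as tf
+
+value = tf.constant([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]])  # shape (5, 2)
+lengths = tf.constant([2, 3], dtype=tf.int64)
+ta = tf.TensorArray(dtype=tf.int32, size=2, infer_shape=False)
+ta = ta.split(value, lengths)
+print(ta.read(0).numpy())  # [[1 1] [2 2]]        -> shape (2, 2)
+print(ta.read(1).numpy())  # [[3 3] [4 4] [5 5]]  -> shape (3, 2)
+```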
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayUnpack.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayUnpack.md new file mode 100644 index 00000000000..de7771ca024 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayUnpack.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.TensorArrayUnpack + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`value` + +A `Tensor`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayV2.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayV2.md new file mode 100644 index 00000000000..c66865a5467 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayV2.md @@ -0,0 +1,113 @@ +description: Deprecated. Use TensorArrayV3 + +
+ + +
+ +# tf.raw_ops.TensorArrayV2 + + + + + + + + + +Deprecated. Use TensorArrayV3 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`size` + +A `Tensor` of type `int32`. +
+`dtype` + +A tf.DType. +
+`element_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +
+`dynamic_size` + +An optional `bool`. Defaults to `False`. +
+`clear_after_read` + +An optional `bool`. Defaults to `True`. +
+`tensor_array_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayV3.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayV3.md new file mode 100644 index 00000000000..215d722a34e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayV3.md @@ -0,0 +1,152 @@ +description: An array of Tensors of given size. + +
+ + +
+ +# tf.raw_ops.TensorArrayV3 + + + + + + + + + +An array of Tensors of given size. + + + + + + + + + +Write data via Write and read via Read or Pack. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`size` + +A `Tensor` of type `int32`. The size of the array. +
+`dtype` + +A tf.DType. The type of the elements on the tensor_array. +
+`element_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +The expected shape of an element, if known. Used to +validate the shapes of TensorArray elements. If this shape is not +fully specified, gathering zero-size TensorArrays is an error. +
+`dynamic_size` + +An optional `bool`. Defaults to `False`. +A boolean that determines whether writes to the TensorArray +are allowed to grow the size. By default, this is not allowed. +
+`clear_after_read` + +An optional `bool`. Defaults to `True`. +If true (default), Tensors in the TensorArray are cleared +after being read. This disables multiple read semantics but allows early +release of memory. +
+`identical_element_shapes` + +An optional `bool`. Defaults to `False`. +If true (default is false), then all +elements in the TensorArray will be expected to have identical shapes. +This allows certain behaviors, like dynamically checking for +consistent shapes on write, and being able to fill in properly +shaped zero tensors on stack -- even if the element_shape attribute +is not fully defined. +
+`tensor_array_name` + +An optional `string`. Defaults to `""`. +Overrides the name used for the temporary tensor_array +resource. Default value is the name of the 'TensorArray' op (which +is guaranteed unique). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (handle, flow). +
+`handle` + +A `Tensor` of type `resource`. +
+`flow` + +A `Tensor` of type `float32`. +
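+
+A minimal usage sketch, assuming TF 2.x eager execution. The public
+`tf.TensorArray` class is the supported way to get this write/read/stack
+behavior; the raw handle/flow plumbing is managed for you:
+
+```python
+import tensorflow as tf
+
+ta = tf.TensorArray(dtype=tf.float32, size=3)
+ta = ta.write(0, 10.0)
+ta = ta.write(1, 20.0)
+ta = ta.write(2, 30.0)
+print(ta.read(1).numpy())   # 20.0
+print(ta.stack().numpy())   # [10. 20. 30.]
+```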
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayWrite.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayWrite.md new file mode 100644 index 00000000000..ca04a1e4b91 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayWrite.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.TensorArrayWrite + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type mutable `string`. +
+`index` + +A `Tensor` of type `int32`. +
+`value` + +A `Tensor`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayWriteV2.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayWriteV2.md new file mode 100644 index 00000000000..e95fb46f889 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayWriteV2.md @@ -0,0 +1,98 @@ +description: Deprecated. Use TensorArrayWriteV3 + +
+ + +
+ +# tf.raw_ops.TensorArrayWriteV2 + + + + + + + + + +Deprecated. Use TensorArrayWriteV3 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `string`. +
+`index` + +A `Tensor` of type `int32`. +
+`value` + +A `Tensor`. +
+`flow_in` + +A `Tensor` of type `float32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorArrayWriteV3.md b/site/en/api_docs/python/tf/raw_ops/TensorArrayWriteV3.md new file mode 100644 index 00000000000..ac9685307f5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorArrayWriteV3.md @@ -0,0 +1,100 @@ +description: Push an element onto the tensor_array. + +
+ + +
+ +# tf.raw_ops.TensorArrayWriteV3 + + + + + + + + + +Push an element onto the tensor_array. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`handle` + +A `Tensor` of type `resource`. The handle to a TensorArray. +
+`index` + +A `Tensor` of type `int32`. +The position to write to inside the TensorArray. +
+`value` + +A `Tensor`. The tensor to write to the TensorArray. +
+`flow_in` + +A `Tensor` of type `float32`. +A float scalar that enforces proper chaining of operations. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorDataset.md b/site/en/api_docs/python/tf/raw_ops/TensorDataset.md new file mode 100644 index 00000000000..06ff93bd34b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorDataset.md @@ -0,0 +1,84 @@ +description: Creates a dataset that emits components as a tuple of tensors once. + +
+ + +
+ +# tf.raw_ops.TensorDataset + + + + + + + + + +Creates a dataset that emits `components` as a tuple of tensors once. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`components` + +A list of `Tensor` objects. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
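+
+A minimal usage sketch, assuming TF 2.x: `tf.data.Dataset.from_tensors` is the
+public entry point for this behavior, yielding the components exactly once:
+
+```python
+import tensorflow as tf
+
+ds = tf.data.Dataset.from_tensors((tf.constant([1, 2]), tf.constant("x")))
+for a, b in ds:                   # a single element containing both tensors
+    print(a.numpy(), b.numpy())   # [1 2] b'x'
+```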
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListConcat.md b/site/en/api_docs/python/tf/raw_ops/TensorListConcat.md new file mode 100644 index 00000000000..9c7e71794e8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListConcat.md @@ -0,0 +1,110 @@ +description: Concats all tensors in the list along the 0th dimension. + +
+ + +
+ +# tf.raw_ops.TensorListConcat + + + + + + + + + +Concats all tensors in the list along the 0th dimension. + + + + + + + + + +Requires that all tensors have the same shape except the first dimension. + +input_handle: The input list. +tensor: The concated result. +lengths: Output tensor containing sizes of the 0th dimension of tensors in the list, used for computing the gradient. + + + + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`element_dtype` + +A tf.DType. +
+`element_shape` + +An optional tf.TensorShape or list of `ints`. Defaults to `None`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (tensor, lengths). +
+`tensor` + +A `Tensor` of type `element_dtype`. +
+`lengths` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListConcatLists.md b/site/en/api_docs/python/tf/raw_ops/TensorListConcatLists.md new file mode 100644 index 00000000000..4082a62cf37 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListConcatLists.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.TensorListConcatLists + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_a` + +A `Tensor` of type `variant`. +
+`input_b` + +A `Tensor` of type `variant`. +
+`element_dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListConcatV2.md b/site/en/api_docs/python/tf/raw_ops/TensorListConcatV2.md new file mode 100644 index 00000000000..f99c383e54e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListConcatV2.md @@ -0,0 +1,123 @@ +description: Concats all tensors in the list along the 0th dimension. + +
+ + +
+ +# tf.raw_ops.TensorListConcatV2 + + + + + + + + + +Concats all tensors in the list along the 0th dimension. + + + + + + + + + +Requires that all tensors have the same shape except the first dimension. + +input_handle: The input list. +element_shape: The shape of the uninitialized elements in the list. If the first + dimension is not -1, it is assumed that all list elements have the same + leading dim. +leading_dims: The list of leading dims of uninitialized list elements. Used if + the leading dim of input_handle.element_shape or the element_shape input arg + is not already set. +tensor: The concated result. +lengths: Output tensor containing sizes of the 0th dimension of tensors in the list, used for computing the gradient. + + + + + + + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`element_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`leading_dims` + +A `Tensor` of type `int64`. +
+`element_dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (tensor, lengths). +
+`tensor` + +A `Tensor` of type `element_dtype`. +
+`lengths` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListElementShape.md b/site/en/api_docs/python/tf/raw_ops/TensorListElementShape.md new file mode 100644 index 00000000000..2bc100df448 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListElementShape.md @@ -0,0 +1,86 @@ +description: The shape of the elements of the given list, as a tensor. + +
+ + +
+ +# tf.raw_ops.TensorListElementShape + + + + + + + + + +The shape of the elements of the given list, as a tensor. + + + + + + + + + + input_handle: the list + element_shape: the shape of elements of the list + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`shape_type` + +A tf.DType from: `tf.int32, tf.int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `shape_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListFromTensor.md b/site/en/api_docs/python/tf/raw_ops/TensorListFromTensor.md new file mode 100644 index 00000000000..44b914eb96c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListFromTensor.md @@ -0,0 +1,88 @@ +description: Creates a TensorList which, when stacked, has the value of tensor. + +
+ + +
+ +# tf.raw_ops.TensorListFromTensor + + + + + + + + + +Creates a TensorList which, when stacked, has the value of `tensor`. + + + + + + + + + +Each tensor in the result list corresponds to one row of the input tensor. + +tensor: The input tensor. +output_handle: The list. + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`element_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
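+
+A minimal sketch calling the raw op directly in TF 2.x eager mode, assuming
+int32 rows of shape `[2]`; the companion list ops are used to inspect the
+resulting list:
+
+```python
+import tensorflow as tf
+
+t = tf.constant([[1, 2], [3, 4], [5, 6]])
+shape = tf.constant([2], tf.int32)                 # shape of each list element
+handle = tf.raw_ops.TensorListFromTensor(tensor=t, element_shape=shape)
+print(tf.raw_ops.TensorListLength(input_handle=handle).numpy())   # 3
+item = tf.raw_ops.TensorListGetItem(input_handle=handle, index=1,
+                                    element_shape=shape, element_dtype=tf.int32)
+print(item.numpy())                                               # [3 4]
+stacked = tf.raw_ops.TensorListStack(input_handle=handle, element_shape=shape,
+                                     element_dtype=tf.int32)      # back to t
+```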
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListGather.md b/site/en/api_docs/python/tf/raw_ops/TensorListGather.md new file mode 100644 index 00000000000..085ea88910a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListGather.md @@ -0,0 +1,104 @@ +description: Creates a Tensor by indexing into the TensorList. + +
+ + +
+ +# tf.raw_ops.TensorListGather + + + + + + + + + +Creates a Tensor by indexing into the TensorList. + + + + + + + + + +Each row in the produced Tensor corresponds to the element in the TensorList +specified by the given index (see tf.gather). + +input_handle: The input tensor list. +indices: The indices used to index into the list. +values: The tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`indices` + +A `Tensor` of type `int32`. +
+`element_shape` + +A `Tensor` of type `int32`. +
+`element_dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `element_dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListGetItem.md b/site/en/api_docs/python/tf/raw_ops/TensorListGetItem.md new file mode 100644 index 00000000000..ee89bc42f45 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListGetItem.md @@ -0,0 +1,101 @@ +description: Returns the item in the list with the given index. + +
+ + +
+ +# tf.raw_ops.TensorListGetItem + + + + + + + + + +Returns the item in the list with the given index. + + + + + + + + + +input_handle: the list +index: the position in the list from which an element will be retrieved +item: the element at that position + + + + + + + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`index` + +A `Tensor` of type `int32`. +
+`element_shape` + +A `Tensor` of type `int32`. +
+`element_dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `element_dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListLength.md b/site/en/api_docs/python/tf/raw_ops/TensorListLength.md new file mode 100644 index 00000000000..59b2161ff9d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListLength.md @@ -0,0 +1,79 @@ +description: Returns the number of tensors in the input tensor list. + +
+ + +
+ +# tf.raw_ops.TensorListLength + + + + + + + + + +Returns the number of tensors in the input tensor list. + + + + + + + + + +input_handle: the input list +length: the number of tensors in the list + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListPopBack.md b/site/en/api_docs/python/tf/raw_ops/TensorListPopBack.md new file mode 100644 index 00000000000..d72f387239d --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListPopBack.md @@ -0,0 +1,111 @@ +description: Returns the last element of the input list as well as a list with all but that element. + +
+ + +
+ +# tf.raw_ops.TensorListPopBack + + + + + + + + + +Returns the last element of the input list as well as a list with all but that element. + + + + + + + + + +Fails if the list is empty. + +input_handle: the input list +tensor: the withdrawn last element of the list +element_dtype: the type of elements in the list +element_shape: the shape of the output tensor + + + + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`element_shape` + +A `Tensor` of type `int32`. +
+`element_dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (output_handle, tensor). +
+`output_handle` + +A `Tensor` of type `variant`. +
+`tensor` + +A `Tensor` of type `element_dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListPushBack.md b/site/en/api_docs/python/tf/raw_ops/TensorListPushBack.md new file mode 100644 index 00000000000..3de9e1635c0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListPushBack.md @@ -0,0 +1,89 @@ +description: Returns a list which has the passed-in Tensor as last element and the other elements of the given list in input_handle. + +
+ + +
+ +# tf.raw_ops.TensorListPushBack + + + + + + + + + +Returns a list which has the passed-in `Tensor` as last element and the other elements of the given list in `input_handle`. + + + + + + + + + +tensor: The tensor to put on the list. +input_handle: The old list. +output_handle: A list with the elements of the old list followed by tensor. +element_dtype: the type of elements in the list. +element_shape: a shape compatible with that of elements in the list. + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`tensor` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
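+
+A minimal sketch in TF 2.x eager mode, assuming an empty list created with the
+companion `tf.raw_ops.EmptyTensorList` op; two scalars are pushed and the last
+one is popped back off with `TensorListPopBack`:
+
+```python
+import tensorflow as tf
+
+scalar_shape = tf.constant([], tf.int32)
+handle = tf.raw_ops.EmptyTensorList(element_shape=scalar_shape,
+                                    max_num_elements=-1,
+                                    element_dtype=tf.float32)
+handle = tf.raw_ops.TensorListPushBack(input_handle=handle,
+                                       tensor=tf.constant(1.0))
+handle = tf.raw_ops.TensorListPushBack(input_handle=handle,
+                                       tensor=tf.constant(2.0))
+handle, last = tf.raw_ops.TensorListPopBack(input_handle=handle,
+                                            element_shape=scalar_shape,
+                                            element_dtype=tf.float32)
+print(last.numpy())   # 2.0
+```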
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListPushBackBatch.md b/site/en/api_docs/python/tf/raw_ops/TensorListPushBackBatch.md new file mode 100644 index 00000000000..ec602d92867 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListPushBackBatch.md @@ -0,0 +1,82 @@ +
+ + +
+ +# tf.raw_ops.TensorListPushBackBatch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_handles` + +A `Tensor` of type `variant`. +
+`tensor` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListReserve.md b/site/en/api_docs/python/tf/raw_ops/TensorListReserve.md new file mode 100644 index 00000000000..ec9d5d4faa2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListReserve.md @@ -0,0 +1,95 @@ +description: List of the given size with empty elements. + +
+ + +
+ +# tf.raw_ops.TensorListReserve + + + + + + + + + +List of the given size with empty elements. + + + + + + + + + +element_shape: the shape of the future elements of the list +num_elements: the number of elements to reserve +handle: the output list +element_dtype: the desired type of elements in the list. + + + + + + + + + + + + + + + + + + + +
+`element_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`num_elements` + +A `Tensor` of type `int32`. +
+`element_dtype` + +A tf.DType. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
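+
+A minimal sketch in TF 2.x eager mode: reserve two slots, fill them with the
+companion `TensorListSetItem` op, then stack the result back into a tensor:
+
+```python
+import tensorflow as tf
+
+shape = tf.constant([2], tf.int32)
+handle = tf.raw_ops.TensorListReserve(element_shape=shape, num_elements=2,
+                                      element_dtype=tf.int32)
+handle = tf.raw_ops.TensorListSetItem(input_handle=handle, index=0,
+                                      item=tf.constant([1, 2]))
+handle = tf.raw_ops.TensorListSetItem(input_handle=handle, index=1,
+                                      item=tf.constant([3, 4]))
+result = tf.raw_ops.TensorListStack(input_handle=handle, element_shape=shape,
+                                    element_dtype=tf.int32)
+print(result.numpy())   # [[1 2]
+                        #  [3 4]]
+```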
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListResize.md b/site/en/api_docs/python/tf/raw_ops/TensorListResize.md new file mode 100644 index 00000000000..745b55918a6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListResize.md @@ -0,0 +1,87 @@ +description: Resizes the list. + +
+ + +
+ +# tf.raw_ops.TensorListResize + + + + + + + + + +Resizes the list. + + + + + + + + + + +input_handle: the input list +size: size of the output list + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`size` + +A `Tensor` of type `int32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListScatter.md b/site/en/api_docs/python/tf/raw_ops/TensorListScatter.md new file mode 100644 index 00000000000..f91c17b72a0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListScatter.md @@ -0,0 +1,99 @@ +description: Creates a TensorList by indexing into a Tensor. + +
+ + +
+ +# tf.raw_ops.TensorListScatter + + + + + + + + + +Creates a TensorList by indexing into a Tensor. + + + + + + + + + +Each member of the TensorList corresponds to one row of the input tensor, +specified by the given index (see tf.gather). + +tensor: The input tensor. +indices: The indices used to index into the list. +element_shape: The shape of the elements in the list (can be less specified than + the shape of the tensor). +output_handle: The TensorList. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`indices` + +A `Tensor` of type `int32`. +
+`element_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListScatterIntoExistingList.md b/site/en/api_docs/python/tf/raw_ops/TensorListScatterIntoExistingList.md new file mode 100644 index 00000000000..1ec3f43bfd8 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListScatterIntoExistingList.md @@ -0,0 +1,98 @@ +description: Scatters tensor at indices in an input list. + +
+ + +
+ +# tf.raw_ops.TensorListScatterIntoExistingList + + + + + + + + + +Scatters tensor at indices in an input list. + + + + + + + + + +Each member of the TensorList corresponds to one row of the input tensor, +specified by the given index (see tf.gather). + +input_handle: The list to scatter into. +tensor: The input tensor. +indices: The indices used to index into the list. +output_handle: The TensorList. + + + + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`tensor` + +A `Tensor`. +
+`indices` + +A `Tensor` of type `int32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListScatterV2.md b/site/en/api_docs/python/tf/raw_ops/TensorListScatterV2.md new file mode 100644 index 00000000000..332668c40a5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListScatterV2.md @@ -0,0 +1,109 @@ +description: Creates a TensorList by indexing into a Tensor. + +
+ + +
+ +# tf.raw_ops.TensorListScatterV2 + + + + + + + + + +Creates a TensorList by indexing into a Tensor. + + + + + + + + + +Each member of the TensorList corresponds to one row of the input tensor, +specified by the given index (see tf.gather). + +tensor: The input tensor. +indices: The indices used to index into the list. +element_shape: The shape of the elements in the list (can be less specified than + the shape of the tensor). +num_elements: The size of the output list. Must be large enough to accommodate + the largest index in indices. If -1, the list is just large enough to include + the largest index in indices. +output_handle: The TensorList. + + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`indices` + +A `Tensor` of type `int32`. +
+`element_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`num_elements` + +A `Tensor` of type `int32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListSetItem.md b/site/en/api_docs/python/tf/raw_ops/TensorListSetItem.md new file mode 100644 index 00000000000..394e0e8ab14 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListSetItem.md @@ -0,0 +1,95 @@ +description: Sets the index-th position of the list to contain the given tensor. + +
+ + +
+ +# tf.raw_ops.TensorListSetItem + + + + + + + + + +Sets the index-th position of the list to contain the given tensor. + + + + + + + + + +input_handle: the list +index: the position in the list to which the tensor will be assigned +item: the element to be assigned to that position +output_handle: the new list, with the element in the proper position + + + + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`index` + +A `Tensor` of type `int32`. +
+`item` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListSplit.md b/site/en/api_docs/python/tf/raw_ops/TensorListSplit.md new file mode 100644 index 00000000000..9b0f0608c7a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListSplit.md @@ -0,0 +1,98 @@ +description: Splits a tensor into a list. + +
+ + +
+ +# tf.raw_ops.TensorListSplit + + + + + + + + + +Splits a tensor into a list. + + + + + + + + + +list[i] corresponds to lengths[i] tensors from the input tensor. +The tensor must have rank at least 1 and contain exactly sum(lengths) elements. + +tensor: The input tensor. +element_shape: A shape compatible with that of elements in the tensor. +lengths: Vector of sizes of the 0th dimension of tensors in the list. +output_handle: The list. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`element_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`lengths` + +A `Tensor` of type `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
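+
+A minimal sketch in TF 2.x eager mode, assuming `lengths = [1, 2]` so that
+list[0] gets one row and list[1] gets two rows of the input tensor:
+
+```python
+import tensorflow as tf
+
+elem_shape = tf.constant([-1, 2], tf.int32)   # leading dim varies per element
+handle = tf.raw_ops.TensorListSplit(
+    tensor=tf.constant([[1, 1], [2, 2], [3, 3]]),
+    element_shape=elem_shape,
+    lengths=tf.constant([1, 2], tf.int64))
+first = tf.raw_ops.TensorListGetItem(input_handle=handle, index=0,
+                                     element_shape=elem_shape,
+                                     element_dtype=tf.int32)
+print(first.numpy())   # [[1 1]]
+```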
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorListStack.md b/site/en/api_docs/python/tf/raw_ops/TensorListStack.md new file mode 100644 index 00000000000..534333ae9ba --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorListStack.md @@ -0,0 +1,103 @@ +description: Stacks all tensors in the list. + +
+ + +
+ +# tf.raw_ops.TensorListStack + + + + + + + + + +Stacks all tensors in the list. + + + + + + + + + +Requires that all tensors have the same shape. + +input_handle: the input list +tensor: the gathered result +num_elements: optional. If not -1, the number of elements in the list. + + + + + + + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`element_shape` + +A `Tensor` of type `int32`. +
+`element_dtype` + +A tf.DType. +
+`num_elements` + +An optional `int`. Defaults to `-1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `element_dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorScatterAdd.md b/site/en/api_docs/python/tf/raw_ops/TensorScatterAdd.md new file mode 100644 index 00000000000..0b66321af97 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorScatterAdd.md @@ -0,0 +1,155 @@ +description: Adds sparse updates to an existing tensor according to indices. + +
+ + +
+ +# tf.raw_ops.TensorScatterAdd + + + + + + + + + +Adds sparse `updates` to an existing tensor according to `indices`. + + + + + + + + + +This operation creates a new tensor by adding sparse `updates` to the passed +in `tensor`. +This operation is very similar to `tf.scatter_nd_add`, except that the updates +are added onto an existing tensor (as opposed to a variable). If the memory +for the existing tensor cannot be re-used, a copy is made and updated. + +`indices` is an integer tensor containing indices into a new tensor of shape +`shape`. The last dimension of `indices` can be at most the rank of `shape`: + + indices.shape[-1] <= shape.rank + +The last dimension of `indices` corresponds to indices into elements +(if `indices.shape[-1] = shape.rank`) or slices +(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of +`shape`. `updates` is a tensor with shape + + indices.shape[:-1] + shape[indices.shape[-1]:] + +The simplest form of tensor_scatter_add is to add individual elements to a +tensor by index. For example, say we want to add 4 elements in a rank-1 +tensor with 8 elements. + +In Python, this scatter add operation would look like this: + +```python + indices = tf.constant([[4], [3], [1], [7]]) + updates = tf.constant([9, 10, 11, 12]) + tensor = tf.ones([8], dtype=tf.int32) + updated = tf.tensor_scatter_nd_add(tensor, indices, updates) + print(updated) +``` + +The resulting tensor would look like this: + + [1, 12, 1, 11, 10, 1, 1, 13] + +We can also, insert entire slices of a higher rank tensor all at once. For +example, if we wanted to insert two slices in the first dimension of a +rank-3 tensor with two matrices of new values. + +In Python, this scatter add operation would look like this: + +```python + indices = tf.constant([[0], [2]]) + updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]], + [[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]]]) + tensor = tf.ones([4, 4, 4],dtype=tf.int32) + updated = tf.tensor_scatter_nd_add(tensor, indices, updates) + print(updated) +``` + +The resulting tensor would look like this: + + [[[6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8], [9, 9, 9, 9]], + [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], + [[6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8], [9, 9, 9, 9]], + [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]] + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, the index is ignored. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Tensor to copy/update. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`updates` + +A `Tensor`. Must have the same type as `tensor`. +Updates to scatter into output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorScatterSub.md b/site/en/api_docs/python/tf/raw_ops/TensorScatterSub.md new file mode 100644 index 00000000000..7f7af032ef0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorScatterSub.md @@ -0,0 +1,155 @@ +description: Subtracts sparse updates from an existing tensor according to indices. + +
+ + +
+ +# tf.raw_ops.TensorScatterSub + + + + + + + + + +Subtracts sparse `updates` from an existing tensor according to `indices`. + + + + + + + + + +This operation creates a new tensor by subtracting sparse `updates` from the +passed in `tensor`. +This operation is very similar to `tf.scatter_nd_sub`, except that the updates +are subtracted from an existing tensor (as opposed to a variable). If the memory +for the existing tensor cannot be re-used, a copy is made and updated. + +`indices` is an integer tensor containing indices into a new tensor of shape +`shape`. The last dimension of `indices` can be at most the rank of `shape`: + + indices.shape[-1] <= shape.rank + +The last dimension of `indices` corresponds to indices into elements +(if `indices.shape[-1] = shape.rank`) or slices +(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of +`shape`. `updates` is a tensor with shape + + indices.shape[:-1] + shape[indices.shape[-1]:] + +The simplest form of tensor_scatter_sub is to subtract individual elements +from a tensor by index. For example, say we want to insert 4 scattered elements +in a rank-1 tensor with 8 elements. + +In Python, this scatter subtract operation would look like this: + +```python + indices = tf.constant([[4], [3], [1], [7]]) + updates = tf.constant([9, 10, 11, 12]) + tensor = tf.ones([8], dtype=tf.int32) + updated = tf.tensor_scatter_nd_sub(tensor, indices, updates) + print(updated) +``` + +The resulting tensor would look like this: + + [1, -10, 1, -9, -8, 1, 1, -11] + +We can also, insert entire slices of a higher rank tensor all at once. For +example, if we wanted to insert two slices in the first dimension of a +rank-3 tensor with two matrices of new values. + +In Python, this scatter add operation would look like this: + +```python + indices = tf.constant([[0], [2]]) + updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]], + [[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]]]) + tensor = tf.ones([4, 4, 4],dtype=tf.int32) + updated = tf.tensor_scatter_nd_sub(tensor, indices, updates) + print(updated) +``` + +The resulting tensor would look like this: + + [[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]], + [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], + [[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]], + [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]] + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, the index is ignored. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Tensor to copy/update. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`updates` + +A `Tensor`. Must have the same type as `tensor`. +Updates to scatter into output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorScatterUpdate.md b/site/en/api_docs/python/tf/raw_ops/TensorScatterUpdate.md new file mode 100644 index 00000000000..ad507e1337e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorScatterUpdate.md @@ -0,0 +1,170 @@ +description: Scatter updates into an existing tensor according to indices. + +
+ + +
+ +# tf.raw_ops.TensorScatterUpdate + + + + + + + + + +Scatter `updates` into an existing tensor according to `indices`. + + + + + + + + + +This operation creates a new tensor by applying sparse `updates` to the passed +in `tensor`. +This operation is very similar to tf.scatter_nd, except that the updates are +scattered onto an existing tensor (as opposed to a zero-tensor). If the memory +for the existing tensor cannot be re-used, a copy is made and updated. + +If `indices` contains duplicates, then their updates are accumulated (summed). + +**WARNING**: The order in which updates are applied is nondeterministic, so the +output will be nondeterministic if `indices` contains duplicates -- because +of some numerical approximation issues, numbers summed in different order +may yield different results. + +`indices` is an integer tensor containing indices into a new tensor of shape +`shape`. The last dimension of `indices` can be at most the rank of `shape`: + + indices.shape[-1] <= shape.rank + +The last dimension of `indices` corresponds to indices into elements +(if `indices.shape[-1] = shape.rank`) or slices +(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of +`shape`. `updates` is a tensor with shape + + indices.shape[:-1] + shape[indices.shape[-1]:] + +The simplest form of scatter is to insert individual elements in a tensor by +index. For example, say we want to insert 4 scattered elements in a rank-1 +tensor with 8 elements. + +
+ +
+ +In Python, this scatter operation would look like this: + + ``` + >>> indices = tf.constant([[4], [3], [1], [7]]) + >>> updates = tf.constant([9, 10, 11, 12]) + >>> tensor = tf.ones([8], dtype=tf.int32) + >>> print(tf.tensor_scatter_nd_update(tensor, indices, updates)) + tf.Tensor([ 1 11 1 10 9 1 1 12], shape=(8,), dtype=int32) + ``` + +We can also, insert entire slices of a higher rank tensor all at once. For +example, if we wanted to insert two slices in the first dimension of a +rank-3 tensor with two matrices of new values. + +In Python, this scatter operation would look like this: + + ``` + >>> indices = tf.constant([[0], [2]]) + >>> updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], + ... [7, 7, 7, 7], [8, 8, 8, 8]], + ... [[5, 5, 5, 5], [6, 6, 6, 6], + ... [7, 7, 7, 7], [8, 8, 8, 8]]]) + >>> tensor = tf.ones([4, 4, 4], dtype=tf.int32) + >>> print(tf.tensor_scatter_nd_update(tensor, indices, updates).numpy()) + [[[5 5 5 5] + [6 6 6 6] + [7 7 7 7] + [8 8 8 8]] + [[1 1 1 1] + [1 1 1 1] + [1 1 1 1] + [1 1 1 1]] + [[5 5 5 5] + [6 6 6 6] + [7 7 7 7] + [8 8 8 8]] + [[1 1 1 1] + [1 1 1 1] + [1 1 1 1] + [1 1 1 1]]] + ``` + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, the index is ignored. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Tensor to copy/update. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`updates` + +A `Tensor`. Must have the same type as `tensor`. +Updates to scatter into output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorSliceDataset.md b/site/en/api_docs/python/tf/raw_ops/TensorSliceDataset.md new file mode 100644 index 00000000000..dfa3fee15e5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorSliceDataset.md @@ -0,0 +1,84 @@ +description: Creates a dataset that emits each dim-0 slice of components once. + +
+ + +
+ +# tf.raw_ops.TensorSliceDataset + + + + + + + + + +Creates a dataset that emits each dim-0 slice of `components` once. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`components` + +A list of `Tensor` objects. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
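+
+The `variant` returned by this op is the resource behind a `tf.data` pipeline
+rather than something to consume directly; the usual high-level entry point is
+`tf.data.Dataset.from_tensor_slices`. A minimal sketch (assuming TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+# Each dim-0 slice of the input becomes one dataset element.
+components = tf.constant([[1, 2], [3, 4], [5, 6]])
+dataset = tf.data.Dataset.from_tensor_slices(components)
+
+for element in dataset:
+    print(element.numpy())  # [1 2], then [3 4], then [5 6]
+```
+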
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorStridedSliceUpdate.md b/site/en/api_docs/python/tf/raw_ops/TensorStridedSliceUpdate.md new file mode 100644 index 00000000000..bb24b18b576 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorStridedSliceUpdate.md @@ -0,0 +1,147 @@ +description: Assign value to the sliced l-value reference of input. + +
+ + +
+ +# tf.raw_ops.TensorStridedSliceUpdate + + + + + + + + + +Assign `value` to the sliced l-value reference of `input`. + + + + + + + + + +The values of `value` are assigned to the positions in the tensor `input` that +are selected by the slice parameters. The slice parameters `begin` `end` +`strides` etc. work exactly as in `StridedSlice`. + +NOTE this op currently does not support broadcasting and so `value`'s shape +must be exactly the shape produced by the slice of `input`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`begin` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`end` + +A `Tensor`. Must have the same type as `begin`. +
+`strides` + +A `Tensor`. Must have the same type as `begin`. +
+`value` + +A `Tensor`. Must have the same type as `input`. +
+`begin_mask` + +An optional `int`. Defaults to `0`. +
+`end_mask` + +An optional `int`. Defaults to `0`. +
+`ellipsis_mask` + +An optional `int`. Defaults to `0`. +
+`new_axis_mask` + +An optional `int`. Defaults to `0`. +
+`shrink_axis_mask` + +An optional `int`. Defaults to `0`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
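+
+A minimal eager-mode sketch (assuming TensorFlow 2.x); the slice parameters
+below select indices 1 and 3, so `value` must have exactly that shape:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([0, 0, 0, 0, 0, 0])
+result = tf.raw_ops.TensorStridedSliceUpdate(
+    input=x,
+    begin=tf.constant([1]),
+    end=tf.constant([5]),
+    strides=tf.constant([2]),   # begin=1, end=5, strides=2 -> indices 1 and 3
+    value=tf.constant([7, 9]))
+print(result.numpy())  # [0 7 0 9 0 0]
+```
+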
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorSummary.md b/site/en/api_docs/python/tf/raw_ops/TensorSummary.md new file mode 100644 index 00000000000..10fa37933ab --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorSummary.md @@ -0,0 +1,103 @@ +description: Outputs a Summary protocol buffer with a tensor. + +
+ + +
+ +# tf.raw_ops.TensorSummary + + + + + + + + + +Outputs a `Summary` protocol buffer with a tensor. + + + + + + + + + +This op is being phased out in favor of TensorSummaryV2, which lets callers pass +a tag as well as a serialized SummaryMetadata proto string that contains +plugin-specific data. We will keep this op to maintain backwards compatibility. + + + + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. A tensor to serialize. +
+`description` + +An optional `string`. Defaults to `""`. +A json-encoded SummaryDescription proto. +
+`labels` + +An optional list of `strings`. Defaults to `[]`. +An unused list of strings. +
+`display_name` + +An optional `string`. Defaults to `""`. An unused string. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TensorSummaryV2.md b/site/en/api_docs/python/tf/raw_ops/TensorSummaryV2.md new file mode 100644 index 00000000000..434b34a6683 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TensorSummaryV2.md @@ -0,0 +1,94 @@ +description: Outputs a Summary protocol buffer with a tensor and per-plugin data. + +
+ + +
+ +# tf.raw_ops.TensorSummaryV2 + + + + + + + + + +Outputs a `Summary` protocol buffer with a tensor and per-plugin data. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`tag` + +A `Tensor` of type `string`. +A string attached to this summary. Used for organization in TensorBoard. +
+`tensor` + +A `Tensor`. A tensor to serialize. +
+`serialized_summary_metadata` + +A `Tensor` of type `string`. +A serialized SummaryMetadata proto. Contains plugin +data. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TextLineDataset.md b/site/en/api_docs/python/tf/raw_ops/TextLineDataset.md new file mode 100644 index 00000000000..c60a59d5a7f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TextLineDataset.md @@ -0,0 +1,96 @@ +description: Creates a dataset that emits the lines of one or more text files. + +
+ + +
+ +# tf.raw_ops.TextLineDataset + + + + + + + + + +Creates a dataset that emits the lines of one or more text files. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`filenames` + +A `Tensor` of type `string`. +A scalar or a vector containing the name(s) of the file(s) to be +read. +
+`compression_type` + +A `Tensor` of type `string`. +A scalar containing either (i) the empty string (no +compression), (ii) "ZLIB", or (iii) "GZIP". +
+`buffer_size` + +A `Tensor` of type `int64`. +A scalar containing the number of bytes to buffer. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
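+
+`tf.data.TextLineDataset` is the usual high-level wrapper around this op. A
+minimal sketch (assuming TensorFlow 2.x and a locally writable temp file):
+
+```python
+import tempfile
+import tensorflow as tf
+
+# Write a small uncompressed text file, then read its lines back.
+path = tempfile.NamedTemporaryFile(suffix=".txt", delete=False).name
+with open(path, "w") as f:
+    f.write("first line\nsecond line\n")
+
+dataset = tf.data.TextLineDataset([path])
+for line in dataset:
+    print(line.numpy())  # b'first line', then b'second line'
+```
+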
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TextLineReader.md b/site/en/api_docs/python/tf/raw_ops/TextLineReader.md new file mode 100644 index 00000000000..4fe45262a4c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TextLineReader.md @@ -0,0 +1,96 @@ +description: A Reader that outputs the lines of a file delimited by '\n'. + +
+ + +
+ +# tf.raw_ops.TextLineReader + + + + + + + + + +A Reader that outputs the lines of a file delimited by '\n'. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`skip_header_lines` + +An optional `int`. Defaults to `0`. +Number of lines to skip from the beginning of every file. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TextLineReaderV2.md b/site/en/api_docs/python/tf/raw_ops/TextLineReaderV2.md new file mode 100644 index 00000000000..e5d56640ceb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TextLineReaderV2.md @@ -0,0 +1,96 @@ +description: A Reader that outputs the lines of a file delimited by '\n'. + +
+ + +
+ +# tf.raw_ops.TextLineReaderV2 + + + + + + + + + +A Reader that outputs the lines of a file delimited by '\n'. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`skip_header_lines` + +An optional `int`. Defaults to `0`. +Number of lines to skip from the beginning of every file. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ThreadPoolDataset.md b/site/en/api_docs/python/tf/raw_ops/ThreadPoolDataset.md new file mode 100644 index 00000000000..978decdc729 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ThreadPoolDataset.md @@ -0,0 +1,99 @@ +description: Creates a dataset that uses a custom thread pool to compute input_dataset. + +
+ + +
+ +# tf.raw_ops.ThreadPoolDataset + + + + + + + + + +Creates a dataset that uses a custom thread pool to compute `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`thread_pool` + +A `Tensor` of type `resource`. +A resource produced by the ThreadPoolHandle op. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ThreadPoolHandle.md b/site/en/api_docs/python/tf/raw_ops/ThreadPoolHandle.md new file mode 100644 index 00000000000..ee2b1aee7f6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ThreadPoolHandle.md @@ -0,0 +1,111 @@ +description: Creates a dataset that uses a custom thread pool to compute input_dataset. + +
+ + +
+ +# tf.raw_ops.ThreadPoolHandle + + + + + + + + + +Creates a dataset that uses a custom thread pool to compute `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`num_threads` + +An `int`. The number of threads in the thread pool. +
+`display_name`
+
+A `string`.
+A human-readable name for the threads that may be visible in some
+visualizations.
+
+`max_intra_op_parallelism` + +An optional `int`. Defaults to `1`. +The maximum degree of parallelism to use within operations that execute on this +threadpool. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ThreadUnsafeUnigramCandidateSampler.md b/site/en/api_docs/python/tf/raw_ops/ThreadUnsafeUnigramCandidateSampler.md new file mode 100644 index 00000000000..106d9ef94fe --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ThreadUnsafeUnigramCandidateSampler.md @@ -0,0 +1,161 @@ +description: Generates labels for candidate sampling with a learned unigram distribution. + +
+ + +
+ +# tf.raw_ops.ThreadUnsafeUnigramCandidateSampler + + + + + + + + + +Generates labels for candidate sampling with a learned unigram distribution. + + + + + + + + + +See explanations of candidate sampling and the data formats at +go/candidate-sampling. + +For each batch, this op picks a single set of sampled candidate labels. + +The advantages of sampling candidates per-batch are simplicity and the +possibility of efficient dense matrix multiplication. The disadvantage is that +the sampled candidates must be chosen independently of the context and of the +true labels. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64`. +A batch_size * num_true matrix, in which each row contains the +IDs of the num_true target_classes in the corresponding original label. +
+`num_true` + +An `int` that is `>= 1`. Number of true labels per context. +
+`num_sampled` + +An `int` that is `>= 1`. +Number of candidates to randomly sample. +
+`unique` + +A `bool`. +If unique is true, we sample with rejection, so that all sampled +candidates in a batch are unique. This requires some approximation to +estimate the post-rejection sampling probabilities. +
+`range_max` + +An `int` that is `>= 1`. +The sampler will sample integers from the interval [0, range_max). +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2`
+
+An optional `int`. Defaults to `0`.
+A second seed to avoid seed collision.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count). +
+`sampled_candidates` + +A `Tensor` of type `int64`. +
+`true_expected_count` + +A `Tensor` of type `float32`. +
+`sampled_expected_count` + +A `Tensor` of type `float32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Tile.md b/site/en/api_docs/python/tf/raw_ops/Tile.md new file mode 100644 index 00000000000..d4542bf0d3e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Tile.md @@ -0,0 +1,113 @@ +description: Constructs a tensor by tiling a given tensor. + +
+ + +
+ +# tf.raw_ops.Tile + + + + + + + + + +Constructs a tensor by tiling a given tensor. + + + + + + + + + +This operation creates a new tensor by replicating `input` `multiples` times. +The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements, +and the values of `input` are replicated `multiples[i]` times along the 'i'th +dimension. For example, tiling `[a b c d]` by `[2]` produces +`[a b c d a b c d]`. + +``` +>>> a = tf.constant([[1,2,3],[4,5,6]], tf.int32) +>>> b = tf.constant([1,2], tf.int32) +>>> tf.tile(a, b) + +>>> c = tf.constant([2,1], tf.int32) +>>> tf.tile(a, c) + +>>> d = tf.constant([2,2], tf.int32) +>>> tf.tile(a, d) + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. 1-D or higher. +
+`multiples` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D. Length must be the same as the number of dimensions in `input` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TileGrad.md b/site/en/api_docs/python/tf/raw_ops/TileGrad.md new file mode 100644 index 00000000000..9e0c66f92fa --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TileGrad.md @@ -0,0 +1,87 @@ +description: Returns the gradient of Tile. + +
+ + +
+ +# tf.raw_ops.TileGrad + + + + + + + + + +Returns the gradient of `Tile`. + + + + + + + + + +Since `Tile` takes an input and repeats the input `multiples` times +along each dimension, `TileGrad` takes in `multiples` and aggregates +each repeated tile of `input` into `output`. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`multiples` + +A `Tensor` of type `int32`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Timestamp.md b/site/en/api_docs/python/tf/raw_ops/Timestamp.md new file mode 100644 index 00000000000..e9f63098e17 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Timestamp.md @@ -0,0 +1,74 @@ +description: Provides the time since epoch in seconds. + +
+ + +
+ +# tf.raw_ops.Timestamp + + + + + + + + + +Provides the time since epoch in seconds. + + + + + + + + + +Returns the timestamp as a `float64` for seconds since the Unix epoch. + +Note: the timestamp is computed when the op is executed, not when it is added +to the graph. + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float64`. +
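+
+A short eager-mode sketch (assuming TensorFlow 2.x); the exact value depends on
+when the op runs:
+
+```python
+import time
+import tensorflow as tf
+
+t = tf.raw_ops.Timestamp()    # float64 scalar, seconds since the Unix epoch
+print(float(t), time.time())  # the two values should be very close
+```
+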
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ToBool.md b/site/en/api_docs/python/tf/raw_ops/ToBool.md new file mode 100644 index 00000000000..8f80f6379fa --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ToBool.md @@ -0,0 +1,89 @@ +description: Converts a tensor to a scalar predicate. + +
+ + +
+ +# tf.raw_ops.ToBool + + + + + + + + + +Converts a tensor to a scalar predicate. + + + + + + + + + +Converts a tensor to a scalar predicate with the following rules: + +- For 0D tensors, truthiness is determined by comparing against a "zero" + value. For numerical types it is the obvious zero. For strings it is the + empty string. + +- For >0D tensors, truthiness is determined by looking at the number of + elements. If has zero elements, then the result is false. Otherwise the + result is true. + +This matches the behavior of If and While for determining if a tensor counts +as true/false for a branch condition. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
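+
+A minimal eager-mode sketch of the rules above (assuming TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+print(tf.raw_ops.ToBool(input=tf.constant(0)))       # False: zero scalar
+print(tf.raw_ops.ToBool(input=tf.constant(3)))       # True: non-zero scalar
+print(tf.raw_ops.ToBool(input=tf.constant("")))      # False: empty string scalar
+print(tf.raw_ops.ToBool(input=tf.constant([0, 0])))  # True: two elements, values ignored
+print(tf.raw_ops.ToBool(input=tf.zeros([0])))        # False: zero elements
+```
+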
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TopK.md b/site/en/api_docs/python/tf/raw_ops/TopK.md new file mode 100644 index 00000000000..e69d568cf71 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TopK.md @@ -0,0 +1,122 @@ +description: Finds values and indices of the k largest elements for the last dimension. + +
+ + +
+ +# tf.raw_ops.TopK + + + + + + + + + +Finds values and indices of the `k` largest elements for the last dimension. + + + + + + + + + +If the input is a vector (rank-1), finds the `k` largest entries in the vector +and outputs their values and indices as vectors. Thus `values[j]` is the +`j`-th largest entry in `input`, and its index is `indices[j]`. + +For matrices (resp. higher rank input), computes the top `k` entries in each +row (resp. vector along the last dimension). Thus, + + values.shape = indices.shape = input.shape[:-1] + [k] + +If two elements are equal, the lower-index element appears first. + +If `k` varies dynamically, use `TopKV2` below. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +1-D or higher with last dimension at least `k`. +
+`k` + +An `int` that is `>= 0`. +Number of top elements to look for along the last dimension (along each +row for matrices). +
+`sorted` + +An optional `bool`. Defaults to `True`. +If true the resulting `k` elements will be sorted by the values in +descending order. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (values, indices). +
+`values` + +A `Tensor`. Has the same type as `input`. +
+`indices` + +A `Tensor` of type `int32`. +
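+
+A minimal eager-mode sketch (assuming TensorFlow 2.x); `k` here is a static
+attribute, so use `TopKV2` (or `tf.math.top_k`) when `k` is only known at
+runtime:
+
+```python
+import tensorflow as tf
+
+values, indices = tf.raw_ops.TopK(input=tf.constant([1., 5., 3., 4.]), k=2)
+print(values.numpy())   # [5. 4.]
+print(indices.numpy())  # [1 3]
+```
+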
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TopKV2.md b/site/en/api_docs/python/tf/raw_ops/TopKV2.md new file mode 100644 index 00000000000..5db978a7fb6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TopKV2.md @@ -0,0 +1,120 @@ +description: Finds values and indices of the k largest elements for the last dimension. + +
+ + +
+ +# tf.raw_ops.TopKV2 + + + + + + + + + +Finds values and indices of the `k` largest elements for the last dimension. + + + + + + + + + +If the input is a vector (rank-1), finds the `k` largest entries in the vector +and outputs their values and indices as vectors. Thus `values[j]` is the +`j`-th largest entry in `input`, and its index is `indices[j]`. + +For matrices (resp. higher rank input), computes the top `k` entries in each +row (resp. vector along the last dimension). Thus, + + values.shape = indices.shape = input.shape[:-1] + [k] + +If two elements are equal, the lower-index element appears first. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +1-D or higher with last dimension at least `k`. +
+`k` + +A `Tensor` of type `int32`. +0-D. Number of top elements to look for along the last dimension (along each +row for matrices). +
+`sorted` + +An optional `bool`. Defaults to `True`. +If true the resulting `k` elements will be sorted by the values in +descending order. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (values, indices). +
+`values` + +A `Tensor`. Has the same type as `input`. +
+`indices` + +A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Transpose.md b/site/en/api_docs/python/tf/raw_ops/Transpose.md new file mode 100644 index 00000000000..6a69d9871b2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Transpose.md @@ -0,0 +1,86 @@ +description: Shuffle dimensions of x according to a permutation. + +
+ + +
+ +# tf.raw_ops.Transpose + + + + + + + + + +Shuffle dimensions of x according to a permutation. + + + + + + + + + +The output `y` has the same rank as `x`. The shapes of `x` and `y` satisfy: + `y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. +
+`perm` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
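+
+A minimal eager-mode sketch (assuming TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2, 3],
+                 [4, 5, 6]])
+y = tf.raw_ops.Transpose(x=x, perm=[1, 0])  # swap the two dimensions
+print(y.numpy())
+# [[1 4]
+#  [2 5]
+#  [3 6]]
+```
+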
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TridiagonalMatMul.md b/site/en/api_docs/python/tf/raw_ops/TridiagonalMatMul.md new file mode 100644 index 00000000000..adade809197 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TridiagonalMatMul.md @@ -0,0 +1,107 @@ +description: Calculate product with tridiagonal matrix. + +
+ + +
+ +# tf.raw_ops.TridiagonalMatMul + + + + + + + + + +Calculate product with tridiagonal matrix. + + + + + + + + + +Calculates product of two matrices, where left matrix is a tridiagonal matrix. + + + + + + + + + + + + + + + + + + + + + + +
+`superdiag` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`. +Tensor of shape `[..., 1, M]`, representing superdiagonals of +tri-diagonal matrices to the left of multiplication. Last element is ignored. +
+`maindiag` + +A `Tensor`. Must have the same type as `superdiag`. +Tensor of shape `[..., 1, M]`, representing main diagonals of tri-diagonal +matrices to the left of multiplication. +
+`subdiag` + +A `Tensor`. Must have the same type as `superdiag`. +Tensor of shape `[..., 1, M]`, representing subdiagonals of tri-diagonal +matrices to the left of multiplication. First element is ignored. +
+`rhs` + +A `Tensor`. Must have the same type as `superdiag`. +Tensor of shape `[..., M, N]`, representing MxN matrices to the right of +multiplication. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `superdiag`. +
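+
+A minimal eager-mode sketch (assuming TensorFlow 2.x). Multiplying by the
+identity reproduces the packed tridiagonal matrix, which makes the diagonal
+layout easy to check; see also `tf.linalg.tridiagonal_matmul`:
+
+```python
+import tensorflow as tf
+
+# Tridiagonal matrix [[2, 1, 0],
+#                     [1, 2, 1],
+#                     [0, 1, 2]]
+superdiag = tf.constant([[1., 1., 0.]])  # last element ignored
+maindiag  = tf.constant([[2., 2., 2.]])
+subdiag   = tf.constant([[0., 1., 1.]])  # first element ignored
+rhs = tf.eye(3)
+
+product = tf.raw_ops.TridiagonalMatMul(
+    superdiag=superdiag, maindiag=maindiag, subdiag=subdiag, rhs=rhs)
+print(product.numpy())  # [[2. 1. 0.], [1. 2. 1.], [0. 1. 2.]]
+```
+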
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TridiagonalSolve.md b/site/en/api_docs/python/tf/raw_ops/TridiagonalSolve.md new file mode 100644 index 00000000000..6aeb7113714 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TridiagonalSolve.md @@ -0,0 +1,105 @@ +description: Solves tridiagonal systems of equations. + +
+ + +
+ +# tf.raw_ops.TridiagonalSolve + + + + + + + + + +Solves tridiagonal systems of equations. + + + + + + + + + + Solves tridiagonal systems of equations. + Supports batch dimensions and multiple right-hand sides per each left-hand + side. + On CPU, solution is computed via Gaussian elimination with or without partial + pivoting, depending on `partial_pivoting` attribute. On GPU, Nvidia's cuSPARSE + library is used: https://docs.nvidia.com/cuda/cusparse/index.html#gtsv + + + + + + + + + + + + + + + + + + + +
+`diagonals` + +A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`. +Tensor of shape `[..., 3, M]` whose innermost 2 dimensions represent the +tridiagonal matrices with three rows being the superdiagonal, diagonals, and +subdiagonals, in order. The last element of the superdiagonal and the first +element of the subdiagonal is ignored. +
+`rhs` + +A `Tensor`. Must have the same type as `diagonals`. +Tensor of shape `[..., M, K]`, representing K right-hand sides per each +left-hand side. +
+`partial_pivoting` + +An optional `bool`. Defaults to `True`. +Whether to apply partial pivoting. Partial pivoting makes the procedure more +stable, but slower. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `diagonals`. +
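+
+A minimal eager-mode sketch (assuming TensorFlow 2.x); the system below has the
+known solution `[1, 1, 1]`. See also `tf.linalg.tridiagonal_solve`:
+
+```python
+import tensorflow as tf
+
+# A = [[2, 1, 0],
+#      [1, 2, 1],
+#      [0, 1, 2]], packed as rows: superdiag, main diagonal, subdiag.
+diagonals = tf.constant([[1., 1., 0.],   # superdiag (last entry ignored)
+                         [2., 2., 2.],   # main diagonal
+                         [0., 1., 1.]])  # subdiag (first entry ignored)
+rhs = tf.constant([[3.], [4.], [3.]])    # A @ [1, 1, 1]^T
+
+x = tf.raw_ops.TridiagonalSolve(diagonals=diagonals, rhs=rhs)
+print(x.numpy())  # approximately [[1.], [1.], [1.]]
+```
+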
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TruncateDiv.md b/site/en/api_docs/python/tf/raw_ops/TruncateDiv.md new file mode 100644 index 00000000000..4f3e7534fcb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TruncateDiv.md @@ -0,0 +1,91 @@ +description: Returns x / y element-wise for integer types. + +
+ + +
+ +# tf.raw_ops.TruncateDiv + + + + + + + + + +Returns x / y element-wise for integer types. + + + + + + + + + +Truncation designates that negative numbers will round fractional quantities +toward zero. I.e. -7 / 5 = -1. This matches C semantics but it is different +than Python semantics. See `FloorDiv` for a division function that matches +Python Semantics. + +*NOTE*: `truncatediv` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
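+
+A minimal eager-mode sketch of the difference from Python-style floor division
+(assuming TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-7, 7])
+y = tf.constant([5, 5])
+
+print(tf.raw_ops.TruncateDiv(x=x, y=y).numpy())  # [-1  1]  rounds toward zero
+print((x // y).numpy())                          # [-2  1]  floor division
+```
+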
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TruncateMod.md b/site/en/api_docs/python/tf/raw_ops/TruncateMod.md new file mode 100644 index 00000000000..e58c18d8b5c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TruncateMod.md @@ -0,0 +1,89 @@ +description: Returns element-wise remainder of division. This emulates C semantics in that + +
+ + +
+ +# tf.raw_ops.TruncateMod + + + + + + + + + +Returns element-wise remainder of division. This emulates C semantics in that + + + + + + + + + +the result here is consistent with a truncating divide. E.g. `truncate(x / y) * +y + truncate_mod(x, y) = x`. + +*NOTE*: `truncatemod` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
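+
+A minimal eager-mode sketch contrasting the C-style remainder with Python-style
+modulo (assuming TensorFlow 2.x):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-7, 7])
+y = tf.constant([5, 5])
+
+print(tf.raw_ops.TruncateMod(x=x, y=y).numpy())  # [-2  2]  sign follows x
+print((x % y).numpy())                           # [ 3  2]  Python-style (floor) modulo
+```
+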
+ diff --git a/site/en/api_docs/python/tf/raw_ops/TruncatedNormal.md b/site/en/api_docs/python/tf/raw_ops/TruncatedNormal.md new file mode 100644 index 00000000000..8b5def301e1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/TruncatedNormal.md @@ -0,0 +1,107 @@ +description: Outputs random values from a truncated normal distribution. + +
+ + +
+ +# tf.raw_ops.TruncatedNormal + + + + + + + + + +Outputs random values from a truncated normal distribution. + + + + + + + + + +The generated values follow a normal distribution with mean 0 and standard +deviation 1, except that values whose magnitude is more than 2 standard +deviations from the mean are dropped and re-picked. + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +The shape of the output tensor. +
+`dtype` + +A tf.DType from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. +The type of the output. +
+`seed` + +An optional `int`. Defaults to `0`. +If either `seed` or `seed2` are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2` + +An optional `int`. Defaults to `0`. +A second seed to avoid seed collision. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `dtype`. +
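+
+A minimal eager-mode sketch (assuming TensorFlow 2.x); the values are random,
+but every draw stays within two standard deviations of the mean:
+
+```python
+import tensorflow as tf
+
+sample = tf.raw_ops.TruncatedNormal(shape=tf.constant([2, 3]),
+                                    dtype=tf.float32, seed=42)
+print(sample.numpy())                              # a 2x3 float32 sample
+print(bool(tf.reduce_max(tf.abs(sample)) <= 2.0))  # True
+```
+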
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Unbatch.md b/site/en/api_docs/python/tf/raw_ops/Unbatch.md new file mode 100644 index 00000000000..74c7ff39403 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Unbatch.md @@ -0,0 +1,131 @@ +description: Reverses the operation of Batch for a single output Tensor. + +
+ + +
+ +# tf.raw_ops.Unbatch + + + + + + + + + +Reverses the operation of Batch for a single output Tensor. + + + + + + + + + +An instance of Unbatch either receives an empty batched_tensor, in which case it +asynchronously waits until the values become available from a concurrently +running instance of Unbatch with the same container and shared_name, or receives +a non-empty batched_tensor in which case it finalizes all other concurrently +running instances and outputs its own element from the batch. + +batched_tensor: The possibly transformed output of Batch. The size of the first + dimension should remain unchanged by the transformations for the operation to + work. +batch_index: The matching batch_index obtained from Batch. +id: The id scalar emitted by Batch. +unbatched_tensor: The Tensor corresponding to this execution. +timeout_micros: Maximum amount of time (in microseconds) to wait to receive the + batched input tensor associated with a given invocation of the op. +container: Container to control resource sharing. +shared_name: Instances of Unbatch with the same container and shared_name are + assumed to possibly belong to the same batch. If left empty, the op name will + be used as the shared name. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`batched_tensor` + +A `Tensor`. +
+`batch_index` + +A `Tensor` of type `int64`. +
+`id` + +A `Tensor` of type `int64`. +
+`timeout_micros` + +An `int`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `batched_tensor`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnbatchDataset.md b/site/en/api_docs/python/tf/raw_ops/UnbatchDataset.md new file mode 100644 index 00000000000..15bee8dcc05 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnbatchDataset.md @@ -0,0 +1,91 @@ +description: A dataset that splits the elements of its input into multiple elements. + +
+ + +
+ +# tf.raw_ops.UnbatchDataset + + + + + + + + + +A dataset that splits the elements of its input into multiple elements. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnbatchGrad.md b/site/en/api_docs/python/tf/raw_ops/UnbatchGrad.md new file mode 100644 index 00000000000..240929965a7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnbatchGrad.md @@ -0,0 +1,126 @@ +description: Gradient of Unbatch. + +
+ + +
+ +# tf.raw_ops.UnbatchGrad + + + + + + + + + +Gradient of Unbatch. + + + + + + + + + +Acts like Batch but using the given batch_index index of batching things as they +become available. This ensures that the gradients are propagated back in the +same session which did the forward pass. + +original_input: The input to the Unbatch operation this is the gradient of. +batch_index: The batch_index given to the Unbatch operation this is the gradient +of. +grad: The downstream gradient. +id: The id scalar emitted by Batch. +batched_grad: The return value, either an empty tensor or the batched gradient. +container: Container to control resource sharing. +shared_name: Instances of UnbatchGrad with the same container and shared_name + are assumed to possibly belong to the same batch. If left empty, the op name + will be used as the shared name. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`original_input` + +A `Tensor`. +
+`batch_index` + +A `Tensor` of type `int64`. +
+`grad` + +A `Tensor`. Must have the same type as `original_input`. +
+`id` + +A `Tensor` of type `int64`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `original_input`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnicodeDecode.md b/site/en/api_docs/python/tf/raw_ops/UnicodeDecode.md new file mode 100644 index 00000000000..eb5711ad994 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnicodeDecode.md @@ -0,0 +1,157 @@ +description: Decodes each string in input into a sequence of Unicode code points. + +
+ + +
+ +# tf.raw_ops.UnicodeDecode + + + + + + + + + +Decodes each string in `input` into a sequence of Unicode code points. + + + + + + + + + +The character codepoints for all strings are returned using a single vector +`char_values`, with strings expanded to characters in row-major order. + +The `row_splits` tensor indicates where the codepoints for +each input string begin and end within the `char_values` tensor. +In particular, the values for the `i`th +string (in row-major order) are stored in the slice +`[row_splits[i]:row_splits[i+1]]`. Thus: + +* `char_values[row_splits[i]+j]` is the Unicode codepoint for the `j`th + character in the `i`th string (in row-major order). +* `row_splits[i+1] - row_splits[i]` is the number of characters in the `i`th + string (in row-major order). + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +The text to be decoded. Can have any shape. Note that the output is flattened +to a vector of char values. +
+`input_encoding` + +A `string`. +Text encoding of the input strings. This is any of the encodings supported +by ICU ucnv algorithmic converters. Examples: `"UTF-16", "US ASCII", "UTF-8"`. +
+`errors`
+
+An optional `string` from: `"strict", "replace", "ignore"`. Defaults to `"replace"`.
+Error handling policy when there is invalid formatting found in the input.
+The value of 'strict' will cause the operation to produce an InvalidArgument
+error on any invalid input formatting. A value of 'replace' (the default) will
+cause the operation to replace any invalid formatting in the input with the
+`replacement_char` codepoint. A value of 'ignore' will cause the operation to
+skip any invalid formatting in the input and produce no corresponding output
+character.
+
+`replacement_char`
+
+An optional `int`. Defaults to `65533`.
+The replacement character codepoint to be used in place of any invalid
+formatting in the input when `errors='replace'`. Any valid Unicode codepoint may
+be used. The default value is the Unicode replacement character, 0xFFFD
+(decimal 65533).
+
+`replace_control_characters` + +An optional `bool`. Defaults to `False`. +Whether to replace the C0 control characters (00-1F) with the +`replacement_char`. Default is false. +
+`Tsplits` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (row_splits, char_values). +
+`row_splits` + +A `Tensor` of type `Tsplits`. +
+`char_values` + +A `Tensor` of type `int32`. +
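+
+A minimal eager-mode sketch (assuming TensorFlow 2.x); `row_splits` delimits the
+codepoints belonging to each input string:
+
+```python
+import tensorflow as tf
+
+row_splits, char_values = tf.raw_ops.UnicodeDecode(
+    input=tf.constant(["Hi", "héllo"]), input_encoding="UTF-8")
+print(row_splits.numpy())   # [0 2 7]: "Hi" owns chars [0:2], "héllo" owns [2:7]
+print(char_values.numpy())  # [ 72 105 104 233 108 108 111]
+```
+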
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnicodeDecodeWithOffsets.md b/site/en/api_docs/python/tf/raw_ops/UnicodeDecodeWithOffsets.md new file mode 100644 index 00000000000..eec6678f9a6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnicodeDecodeWithOffsets.md @@ -0,0 +1,168 @@ +description: Decodes each string in input into a sequence of Unicode code points. + +
+ + +
+ +# tf.raw_ops.UnicodeDecodeWithOffsets + + + + + + + + + +Decodes each string in `input` into a sequence of Unicode code points. + + + + + + + + + +The character codepoints for all strings are returned using a single vector +`char_values`, with strings expanded to characters in row-major order. +Similarly, the character start byte offsets are returned using a single vector +`char_to_byte_starts`, with strings expanded in row-major order. + +The `row_splits` tensor indicates where the codepoints and start offsets for +each input string begin and end within the `char_values` and +`char_to_byte_starts` tensors. In particular, the values for the `i`th +string (in row-major order) are stored in the slice +`[row_splits[i]:row_splits[i+1]]`. Thus: + +* `char_values[row_splits[i]+j]` is the Unicode codepoint for the `j`th + character in the `i`th string (in row-major order). +* `char_to_bytes_starts[row_splits[i]+j]` is the start byte offset for the `j`th + character in the `i`th string (in row-major order). +* `row_splits[i+1] - row_splits[i]` is the number of characters in the `i`th + string (in row-major order). + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +The text to be decoded. Can have any shape. Note that the output is flattened +to a vector of char values. +
+`input_encoding` + +A `string`. +Text encoding of the input strings. This is any of the encodings supported +by ICU ucnv algorithmic converters. Examples: `"UTF-16", "US ASCII", "UTF-8"`. +
+`errors`
+
+An optional `string` from: `"strict", "replace", "ignore"`. Defaults to `"replace"`.
+Error handling policy when there is invalid formatting found in the input.
+The value of 'strict' will cause the operation to produce an InvalidArgument
+error on any invalid input formatting. A value of 'replace' (the default) will
+cause the operation to replace any invalid formatting in the input with the
+`replacement_char` codepoint. A value of 'ignore' will cause the operation to
+skip any invalid formatting in the input and produce no corresponding output
+character.
+
+`replacement_char`
+
+An optional `int`. Defaults to `65533`.
+The replacement character codepoint to be used in place of any invalid
+formatting in the input when `errors='replace'`. Any valid Unicode codepoint may
+be used. The default value is the Unicode replacement character, 0xFFFD
+(decimal 65533).
+
+`replace_control_characters` + +An optional `bool`. Defaults to `False`. +Whether to replace the C0 control characters (00-1F) with the +`replacement_char`. Default is false. +
+`Tsplits` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (row_splits, char_values, char_to_byte_starts). +
+`row_splits` + +A `Tensor` of type `Tsplits`. +
+`char_values` + +A `Tensor` of type `int32`. +
+`char_to_byte_starts` + +A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnicodeEncode.md b/site/en/api_docs/python/tf/raw_ops/UnicodeEncode.md new file mode 100644 index 00000000000..92e7d75fc96 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnicodeEncode.md @@ -0,0 +1,140 @@ +description: Encode a tensor of ints into unicode strings. + +
+ + +
+ +# tf.raw_ops.UnicodeEncode + + + + + + + + + +Encode a tensor of ints into unicode strings. + + + + + + + + + +Returns a vector of strings, where `output[i]` is constructed by encoding the +Unicode codepoints in `input_values[input_splits[i]:input_splits[i+1]]` +using `output_encoding`. + +--- + +#### Example: + + + +``` +input_values = [72, 101, 108, 108, 111, 87, 111, 114, 108, 100] +input_splits = [0, 5, 10] +output_encoding = 'UTF-8' + +output = ['Hello', 'World'] +``` + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_values` + +A `Tensor` of type `int32`. +A 1D tensor containing the unicode codepoints that should be encoded. +
+`input_splits` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 1D tensor specifying how the unicode codepoints should be split into strings. +In particular, `output[i]` is constructed by encoding the codepoints in the +slice `input_values[input_splits[i]:input_splits[i+1]]`. +
+`output_encoding` + +A `string` from: `"UTF-8", "UTF-16-BE", "UTF-32-BE"`. +Unicode encoding of the output strings. Valid encodings are: `"UTF-8", +"UTF-16-BE", and "UTF-32-BE"`. +
+`errors`
+
+An optional `string` from: `"ignore", "replace", "strict"`. Defaults to `"replace"`.
+Error handling policy when there is invalid formatting found in the input.
+The value of 'strict' will cause the operation to produce an InvalidArgument
+error on any invalid input formatting. A value of 'replace' (the default) will
+cause the operation to replace any invalid formatting in the input with the
+`replacement_char` codepoint. A value of 'ignore' will cause the operation to
+skip any invalid formatting in the input and produce no corresponding output
+character.
+
+`replacement_char`
+
+An optional `int`. Defaults to `65533`.
+The replacement character codepoint to be used in place of any invalid
+formatting in the input when `errors='replace'`. Any valid Unicode codepoint may
+be used. The default value is the Unicode replacement character, 0xFFFD
+(decimal 65533).
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnicodeScript.md b/site/en/api_docs/python/tf/raw_ops/UnicodeScript.md new file mode 100644 index 00000000000..f088912c1f5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnicodeScript.md @@ -0,0 +1,91 @@ +description: Determine the script codes of a given tensor of Unicode integer code points. + +
+ + +
+ +# tf.raw_ops.UnicodeScript + + + + + + + + + +Determine the script codes of a given tensor of Unicode integer code points. + + + + + + + + + +This operation converts Unicode code points to script codes corresponding to +each code point. Script codes correspond to International Components for +Unicode (ICU) UScriptCode values. See http://icu-project.org/apiref/icu4c/uscript_8h.html. +Returns -1 (USCRIPT_INVALID_CODE) for invalid codepoints. Output shape will +match input shape. + +#### Examples: + + + +``` +>>> tf.strings.unicode_script([1, 31, 38]) + +``` + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `int32`. A Tensor of int32 Unicode code points. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnicodeTranscode.md b/site/en/api_docs/python/tf/raw_ops/UnicodeTranscode.md new file mode 100644 index 00000000000..a95c5c008aa --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnicodeTranscode.md @@ -0,0 +1,174 @@ +description: Transcode the input text from a source encoding to a destination encoding. + +
+ + +
+ +# tf.raw_ops.UnicodeTranscode + + + + + + + + + +Transcode the input text from a source encoding to a destination encoding. + + + + + + + + + +The input is a string tensor of any shape. The output is a string tensor of +the same shape containing the transcoded strings. Output strings are always +valid unicode. If the input contains invalid encoding positions, the +`errors` attribute sets the policy for how to deal with them. If the default +error-handling policy is used, invalid formatting will be substituted in the +output by the `replacement_char`. If the errors policy is to `ignore`, any +invalid encoding positions in the input are skipped and not included in the +output. If it set to `strict` then any invalid formatting will result in an +InvalidArgument error. + +This operation can be used with `output_encoding = input_encoding` to enforce +correct formatting for inputs even if they are already in the desired encoding. + +If the input is prefixed by a Byte Order Mark needed to determine encoding +(e.g. if the encoding is UTF-16 and the BOM indicates big-endian), then that +BOM will be consumed and not emitted into the output. If the input encoding +is marked with an explicit endianness (e.g. UTF-16-BE), then the BOM is +interpreted as a non-breaking-space and is preserved in the output (including +always for UTF-8). + +The end result is that if the input is marked as an explicit endianness the +transcoding is faithful to all codepoints in the source. If it is not marked +with an explicit endianness, the BOM is not considered part of the string itself +but as metadata, and so is not preserved in the output. + +#### Examples: + + + +``` +>>> tf.strings.unicode_transcode(["Hello", "TensorFlow", "2.x"], "UTF-8", "UTF-16-BE") + +>>> tf.strings.unicode_transcode(["A", "B", "C"], "US ASCII", "UTF-8").numpy() +array([b'A', b'B', b'C'], dtype=object) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +The text to be processed. Can have any shape. +
+`input_encoding` + +A `string`. +Text encoding of the input strings. This is any of the encodings supported +by ICU ucnv algorithmic converters. Examples: `"UTF-16", "US ASCII", "UTF-8"`. +
+`output_encoding` + +A `string` from: `"UTF-8", "UTF-16-BE", "UTF-32-BE"`. +The unicode encoding to use in the output. Must be one of +`"UTF-8", "UTF-16-BE", "UTF-32-BE"`. Multi-byte encodings will be big-endian. +
+`errors`
+
+An optional `string` from: `"strict", "replace", "ignore"`. Defaults to `"replace"`.
+Error handling policy when there is invalid formatting found in the input.
+The value of 'strict' will cause the operation to produce an InvalidArgument
+error on any invalid input formatting. A value of 'replace' (the default) will
+cause the operation to replace any invalid formatting in the input with the
+`replacement_char` codepoint. A value of 'ignore' will cause the operation to
+skip any invalid formatting in the input and produce no corresponding output
+character.
+
+`replacement_char`
+
+An optional `int`. Defaults to `65533`.
+The replacement character codepoint to be used in place of any invalid
+formatting in the input when `errors='replace'`. Any valid Unicode codepoint may
+be used. The default value is the Unicode replacement character, 0xFFFD
+(decimal 65533).
+
+Note that for UTF-8, passing a replacement character expressible in 1 byte, such
+as ' ', will preserve string alignment to the source since invalid bytes will be
+replaced with a 1-byte replacement. For UTF-16-BE and UTF-16-LE, any 1 or 2 byte
+replacement character will preserve byte alignment to the source.
+
+`replace_control_characters` + +An optional `bool`. Defaults to `False`. +Whether to replace the C0 control characters (00-1F) with the +`replacement_char`. Default is false. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UniformCandidateSampler.md b/site/en/api_docs/python/tf/raw_ops/UniformCandidateSampler.md new file mode 100644 index 00000000000..1ee76195ee9 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UniformCandidateSampler.md @@ -0,0 +1,161 @@ +description: Generates labels for candidate sampling with a uniform distribution. + +
+ + +
+ +# tf.raw_ops.UniformCandidateSampler + + + + + + + + + +Generates labels for candidate sampling with a uniform distribution. + + + + + + + + + +See explanations of candidate sampling and the data formats at +go/candidate-sampling. + +For each batch, this op picks a single set of sampled candidate labels. + +The advantages of sampling candidates per-batch are simplicity and the +possibility of efficient dense matrix multiplication. The disadvantage is that +the sampled candidates must be chosen independently of the context and of the +true labels. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`true_classes` + +A `Tensor` of type `int64`. +A batch_size * num_true matrix, in which each row contains the +IDs of the num_true target_classes in the corresponding original label. +
+`num_true` + +An `int` that is `>= 1`. Number of true labels per context. +
+`num_sampled` + +An `int` that is `>= 1`. +Number of candidates to randomly sample. +
+`unique` + +A `bool`. +If unique is true, we sample with rejection, so that all sampled +candidates in a batch are unique. This requires some approximation to +estimate the post-rejection sampling probabilities. +
+`range_max` + +An `int` that is `>= 1`. +The sampler will sample integers from the interval [0, range_max). +
+`seed` + +An optional `int`. Defaults to `0`. +If either seed or seed2 are set to be non-zero, the random number +generator is seeded by the given seed. Otherwise, it is seeded by a +random seed. +
+`seed2`
+
+An optional `int`. Defaults to `0`.
+A second seed to avoid seed collision.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count). +
+`sampled_candidates` + +A `Tensor` of type `int64`. +
+`true_expected_count` + +A `Tensor` of type `float32`. +
+`sampled_expected_count` + +A `Tensor` of type `float32`. +
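+
+A minimal eager-mode sketch (assuming TensorFlow 2.x); the sampled ids below are
+only an example, since they are drawn at random:
+
+```python
+import tensorflow as tf
+
+true_classes = tf.constant([[0, 4]], dtype=tf.int64)  # batch_size=1, num_true=2
+sampled, true_expected, sampled_expected = tf.raw_ops.UniformCandidateSampler(
+    true_classes=true_classes,
+    num_true=2,
+    num_sampled=3,
+    unique=True,
+    range_max=10)
+print(sampled.numpy())           # e.g. [8 1 5], three ids drawn uniformly from [0, 10)
+print(true_expected.numpy())     # expected counts for the true classes
+print(sampled_expected.numpy())  # expected counts for the sampled ids
+```
+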
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Unique.md b/site/en/api_docs/python/tf/raw_ops/Unique.md new file mode 100644 index 00000000000..1edcf583d7c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Unique.md @@ -0,0 +1,122 @@ +description: Finds unique elements in a 1-D tensor. + +
+ + +
+ +# tf.raw_ops.Unique + + + + + + + + + +Finds unique elements in a 1-D tensor. + + + + + + + + + +This operation returns a tensor `y` containing all of the unique elements of `x` +sorted in the same order that they occur in `x`; `x` does not need to be sorted. +This operation also returns a tensor `idx` the same size as `x` that contains +the index of each value of `x` in the unique output `y`. In other words: + +`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]` + +#### Examples: + + + +``` +# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] +y, idx = unique(x) +y ==> [1, 2, 4, 7, 8] +idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] +``` + +``` +# tensor 'x' is [4, 5, 1, 2, 3, 3, 4, 5] +y, idx = unique(x) +y ==> [4, 5, 1, 2, 3] +idx ==> [0, 1, 2, 3, 4, 4, 0, 1] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. 1-D. +
+`out_idx` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (y, idx). +
+`y` + +A `Tensor`. Has the same type as `x`. +
+`idx` + +A `Tensor` of type `out_idx`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UniqueDataset.md b/site/en/api_docs/python/tf/raw_ops/UniqueDataset.md new file mode 100644 index 00000000000..ad6916e66eb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UniqueDataset.md @@ -0,0 +1,91 @@ +description: Creates a dataset that contains the unique elements of input_dataset. + +
+ + +
+ +# tf.raw_ops.UniqueDataset + + + + + + + + + +Creates a dataset that contains the unique elements of `input_dataset`. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UniqueV2.md b/site/en/api_docs/python/tf/raw_ops/UniqueV2.md new file mode 100644 index 00000000000..87d248d6e56 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UniqueV2.md @@ -0,0 +1,152 @@ +description: Finds unique elements along an axis of a tensor. + +
+ + +
+ +# tf.raw_ops.UniqueV2 + + + + + + + + + +Finds unique elements along an axis of a tensor. + + + + + + + + + +This operation either returns a tensor `y` containing unique elements +along the `axis` of a tensor. The returned unique elements is sorted +in the same order as they occur along `axis` in `x`. +This operation also returns a tensor `idx` that is the same size as +the number of the elements in `x` along the `axis` dimension. It +contains the index in the unique output `y`. +In other words, for an `1-D` tensor `x` with `axis = None: + +`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]` + +#### For example: + + + +``` +# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] +y, idx = unique(x) +y ==> [1, 2, 4, 7, 8] +idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] +``` + +For an `2-D` tensor `x` with `axis = 0`: + +``` +# tensor 'x' is [[1, 0, 0], +# [1, 0, 0], +# [2, 0, 0]] +y, idx = unique(x, axis=0) +y ==> [[1, 0, 0], + [2, 0, 0]] +idx ==> [0, 0, 1] +``` + +For an `2-D` tensor `x` with `axis = 1`: + +``` +# tensor 'x' is [[1, 0, 0], +# [1, 0, 0], +# [2, 0, 0]] +y, idx = unique(x, axis=1) +y ==> [[1, 0], + [1, 0], + [2, 0]] +idx ==> [0, 1, 1] +``` + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. A `Tensor`. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A `Tensor` of type `int32` (default: None). The axis of the Tensor to +find the unique elements. +
+`out_idx` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (y, idx). +
+`y` + +A `Tensor`. Has the same type as `x`. +
+`idx` + +A `Tensor` of type `out_idx`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UniqueWithCounts.md b/site/en/api_docs/python/tf/raw_ops/UniqueWithCounts.md new file mode 100644 index 00000000000..44669608330 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UniqueWithCounts.md @@ -0,0 +1,124 @@ +description: Finds unique elements in a 1-D tensor. + +
+ + +
+ +# tf.raw_ops.UniqueWithCounts + + + + + + + + + +Finds unique elements in a 1-D tensor. + + + + + + + + + +This operation returns a tensor `y` containing all of the unique elements of `x` +sorted in the same order that they occur in `x`. This operation also returns a +tensor `idx` the same size as `x` that contains the index of each value of `x` +in the unique output `y`. Finally, it returns a third tensor `count` that +contains the count of each element of `y` in `x`. In other words: + +`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]` + +#### For example: + + + +``` +# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] +y, idx, count = unique_with_counts(x) +y ==> [1, 2, 4, 7, 8] +idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] +count ==> [2, 1, 3, 1, 2] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. 1-D. +
+`out_idx` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (y, idx, count). +
+`y` + +A `Tensor`. Has the same type as `x`. +
+`idx` + +A `Tensor` of type `out_idx`. +
+`count` + +A `Tensor` of type `out_idx`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UniqueWithCountsV2.md b/site/en/api_docs/python/tf/raw_ops/UniqueWithCountsV2.md new file mode 100644 index 00000000000..5179ef2c495 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UniqueWithCountsV2.md @@ -0,0 +1,163 @@ +description: Finds unique elements along an axis of a tensor. + +
+ + +
+ +# tf.raw_ops.UniqueWithCountsV2 + + + + + + + + + +Finds unique elements along an axis of a tensor. + + + + + + + + + +This operation either returns a tensor `y` containing unique elements +along the `axis` of a tensor. The returned unique elements is sorted +in the same order as they occur along `axis` in `x`. +This operation also returns a tensor `idx` and a tensor `count` +that are the same size as the number of the elements in `x` along the +`axis` dimension. The `idx` contains the index in the unique output `y` +and the `count` contains the count in the unique output `y`. +In other words, for an `1-D` tensor `x` with `axis = None: + +`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]` + +#### For example: + + + +``` +# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] +y, idx, count = unique_with_counts(x) +y ==> [1, 2, 4, 7, 8] +idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] +count ==> [2, 1, 3, 1, 2] +``` + +For an `2-D` tensor `x` with `axis = 0`: + +``` +# tensor 'x' is [[1, 0, 0], +# [1, 0, 0], +# [2, 0, 0]] +y, idx, count = unique_with_counts(x, axis=0) +y ==> [[1, 0, 0], + [2, 0, 0]] +idx ==> [0, 0, 1] +count ==> [2, 1] +``` + +For an `2-D` tensor `x` with `axis = 1`: + +``` +# tensor 'x' is [[1, 0, 0], +# [1, 0, 0], +# [2, 0, 0]] +y, idx, count = unique_with_counts(x, axis=1) +y ==> [[1, 0], + [1, 0], + [2, 0]] +idx ==> [0, 1, 1] +count ==> [1, 2] +``` + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. A `Tensor`. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A `Tensor` of type `int32` (default: None). The axis of the Tensor to +find the unique elements. +
+`out_idx` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (y, idx, count). +
+`y` + +A `Tensor`. Has the same type as `x`. +
+`idx` + +A `Tensor` of type `out_idx`. +
+`count` + +A `Tensor` of type `out_idx`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Unpack.md b/site/en/api_docs/python/tf/raw_ops/Unpack.md new file mode 100644 index 00000000000..56eb3aaffe5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Unpack.md @@ -0,0 +1,106 @@ +description: Unpacks a given dimension of a rank-R tensor into num rank-(R-1) tensors. + +
+ + +
+ +# tf.raw_ops.Unpack + + + + + + + + + +Unpacks a given dimension of a rank-`R` tensor into `num` rank-`(R-1)` tensors. + + + + + + + + + +Unpacks `num` tensors from `value` by chipping it along the `axis` dimension. +For example, given a tensor of shape `(A, B, C, D)`; + +If `axis == 0` then the i'th tensor in `output` is the slice `value[i, :, :, :]` + and each tensor in `output` will have shape `(B, C, D)`. (Note that the + dimension unpacked along is gone, unlike `split`). + +If `axis == 1` then the i'th tensor in `output` is the slice `value[:, i, :, :]` + and each tensor in `output` will have shape `(A, C, D)`. +Etc. + +This is the opposite of `pack`. + + + + + + + + + + + + + + + + + + + +
+`value` + +A `Tensor`. +1-D or higher, with `axis` dimension size equal to `num`. +
+`num` + +An `int` that is `>= 0`. +
+`axis` + +An optional `int`. Defaults to `0`. +Dimension along which to unpack. Negative values wrap around, so the +valid range is `[-R, R)`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `num` `Tensor` objects with the same type as `value`. +
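+
+#### Example usage:
+
+A minimal, hedged sketch (not part of the original op documentation), assuming an
+eager TensorFlow 2 environment:
+
+``` python
+import tensorflow as tf
+
+# Unpack a (2, 3) tensor along axis 0 into two tensors of shape (3,).
+value = tf.constant([[1, 2, 3],
+                     [4, 5, 6]])
+pieces = tf.raw_ops.Unpack(value=value, num=2, axis=0)
+# pieces[0] ==> [1, 2, 3]
+# pieces[1] ==> [4, 5, 6]
+```
+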
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnravelIndex.md b/site/en/api_docs/python/tf/raw_ops/UnravelIndex.md new file mode 100644 index 00000000000..4056b6bf389 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnravelIndex.md @@ -0,0 +1,113 @@ +description: Converts an array of flat indices into a tuple of coordinate arrays. + +
+ + +
+ +# tf.raw_ops.UnravelIndex + + + + + + + + + +Converts an array of flat indices into a tuple of coordinate arrays. + + + + + + + + + + +#### Example: + + + +``` +y = tf.unravel_index(indices=[2, 5, 7], dims=[3, 3]) +# 'dims' represent a hypothetical (3, 3) tensor of indices: +# [[0, 1, *2*], +# [3, 4, *5*], +# [6, *7*, 8]] +# For each entry from 'indices', this operation returns +# its coordinates (marked with '*'), such as +# 2 ==> (0, 2) +# 5 ==> (1, 2) +# 7 ==> (2, 1) +y ==> [[0, 1, 2], [2, 2, 1]] +``` + + + + + + + + + + + + + + + + + + +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A 0-D or 1-D `int` Tensor whose elements are indices into the +flattened version of an array of dimensions `dims`. +
+`dims` + +A `Tensor`. Must have the same type as `indices`. +A 1-D `int` Tensor. The shape of the array to use for unraveling +indices. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `indices`. +
+ + + +#### Numpy Compatibility +Equivalent to np.unravel_index + diff --git a/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentJoin.md b/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentJoin.md new file mode 100644 index 00000000000..dc9e507e29f --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentJoin.md @@ -0,0 +1,130 @@ +description: Joins the elements of inputs based on segment_ids. + +
+ + +
+ +# tf.raw_ops.UnsortedSegmentJoin + + + + + + + + + +Joins the elements of `inputs` based on `segment_ids`. + + + + + + + + + +Computes the string join along segments of a tensor. +Given `segment_ids` with rank `N` and `data` with rank `N+M`: + + `output[i, k1...kM] = strings.join([data[j1...jN, k1...kM]])` + +where the join is over all [j1...jN] such that segment_ids[j1...jN] = i. +Strings are joined in row-major order. + +#### For example: + + + +```python +inputs = [['Y', 'q', 'c'], ['Y', '6', '6'], ['p', 'G', 'a']] +output_array = string_ops.unsorted_segment_join(inputs=inputs, + segment_ids=[1, 0, 1], + num_segments=2, + separator=':') +# output_array ==> [['Y', '6', '6'], ['Y:p', 'q:G', 'c:a']] + + +inputs = ['this', 'is', 'a', 'test'] +output_array = string_ops.unsorted_segment_join(inputs=inputs, + segment_ids=[0, 0, 0, 0], + num_segments=1, + separator=':') +# output_array ==> ['this:is:a:test'] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A `Tensor` of type `string`. The input to be joined. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor whose shape is a prefix of `data.shape`. Negative segment ids are not +supported. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A scalar. +
+`separator` + +An optional `string`. Defaults to `""`. +The separator to use when joining. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentMax.md b/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentMax.md new file mode 100644 index 00000000000..aa679b8767e --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentMax.md @@ -0,0 +1,124 @@ +description: Computes the maximum along segments of a tensor. + +
+ + +
+ +# tf.raw_ops.UnsortedSegmentMax + + + + + + + + + +Computes the maximum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +This operator is similar to the unsorted segment sum operator found +[(here)](../../../api_docs/python/math_ops.md#UnsortedSegmentSum). +Instead of computing the sum over segments, it computes the maximum such that: + +\\(output_i = \max_{j...} data[j...]\\) where max is over tuples `j...` such +that `segment_ids[j...] == i`. + +If the maximum is empty for a given segment ID `i`, it outputs the smallest +possible value for the specific numeric type, +`output[i] = numeric_limits::lowest()`. + +If the given segment ID `i` is negative, then the corresponding value is +dropped, and will not be included in the result. + +
+ +
+ +#### For example: + + + +``` python +c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]]) +tf.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2) +# ==> [[ 4, 3, 3, 4], +# [5, 6, 7, 8]] +``` + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor whose shape is a prefix of `data.shape`. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentMin.md b/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentMin.md new file mode 100644 index 00000000000..3bf4ee69e94 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentMin.md @@ -0,0 +1,120 @@ +description: Computes the minimum along segments of a tensor. + +
+ + +
+ +# tf.raw_ops.UnsortedSegmentMin + + + + + + + + + +Computes the minimum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +This operator is similar to the unsorted segment sum operator found +[(here)](../../../api_docs/python/math_ops.md#UnsortedSegmentSum). +Instead of computing the sum over segments, it computes the minimum such that: + +\\(output_i = \min_{j...} data[j...]\\) where min is over tuples `j...` such +that `segment_ids[j...] == i`. + +If the minimum is empty for a given segment ID `i`, it outputs the largest +possible value for the specific numeric type, +`output[i] = numeric_limits::max()`. + +#### For example: + + + +``` python +c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]]) +tf.unsorted_segment_min(c, tf.constant([0, 1, 0]), num_segments=2) +# ==> [[ 1, 2, 2, 1], +# [5, 6, 7, 8]] +``` + +If the given segment ID `i` is negative, then the corresponding value is +dropped, and will not be included in the result. + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor whose shape is a prefix of `data.shape`. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentProd.md b/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentProd.md new file mode 100644 index 00000000000..ce575a24e26 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentProd.md @@ -0,0 +1,119 @@ +description: Computes the product along segments of a tensor. + +
+ + +
+ +# tf.raw_ops.UnsortedSegmentProd + + + + + + + + + +Computes the product along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +This operator is similar to the unsorted segment sum operator found +[(here)](../../../api_docs/python/math_ops.md#UnsortedSegmentSum). +Instead of computing the sum over segments, it computes the product of all +entries belonging to a segment such that: + +\\(output_i = \prod_{j...} data[j...]\\) where the product is over tuples +`j...` such that `segment_ids[j...] == i`. + +#### For example: + + + +``` python +c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]]) +tf.unsorted_segment_prod(c, tf.constant([0, 1, 0]), num_segments=2) +# ==> [[ 4, 6, 6, 4], +# [5, 6, 7, 8]] +``` + +If there is no entry for a given segment ID `i`, it outputs 1. + +If the given segment ID `i` is negative, then the corresponding value is +dropped, and will not be included in the result. + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor whose shape is a prefix of `data.shape`. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentSum.md b/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentSum.md new file mode 100644 index 00000000000..ca0c38d66c2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnsortedSegmentSum.md @@ -0,0 +1,118 @@ +description: Computes the sum along segments of a tensor. + +
+ + +
+ +# tf.raw_ops.UnsortedSegmentSum + + + + + + + + + +Computes the sum along segments of a tensor. + + + + + + + + + +Read +[the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output[i] = \sum_{j...} data[j...]\\) where the sum is over tuples `j...` such +that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids` +need not be sorted and need not cover all values in the full +range of valid values. + +If the sum is empty for a given segment ID `i`, `output[i] = 0`. +If the given segment ID `i` is negative, the value is dropped and will not be +added to the sum of the segment. + +`num_segments` should equal the number of distinct segment IDs. + +
+ +
+ +``` python +c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]]) +tf.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2) +# ==> [[ 5, 5, 5, 5], +# [5, 6, 7, 8]] +``` + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor whose shape is a prefix of `data.shape`. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `data`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Unstage.md b/site/en/api_docs/python/tf/raw_ops/Unstage.md new file mode 100644 index 00000000000..98c0bc27e5a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Unstage.md @@ -0,0 +1,107 @@ +description: Op is similar to a lightweight Dequeue. + +
+ + +
+ +# tf.raw_ops.Unstage + + + + + + + + + +Op is similar to a lightweight Dequeue. + + + + + + + + + +The basic functionality is similar to dequeue with many fewer +capabilities and options. This Op is optimized for performance. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtypes` + +A list of `tf.DTypes` that has length `>= 1`. +
+`capacity` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`memory_limit` + +An optional `int` that is `>= 0`. Defaults to `0`. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects of type `dtypes`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UnwrapDatasetVariant.md b/site/en/api_docs/python/tf/raw_ops/UnwrapDatasetVariant.md new file mode 100644 index 00000000000..e141ad238ad --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UnwrapDatasetVariant.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.UnwrapDatasetVariant + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/UpperBound.md b/site/en/api_docs/python/tf/raw_ops/UpperBound.md new file mode 100644 index 00000000000..eb39682a3e2 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/UpperBound.md @@ -0,0 +1,110 @@ +description: Applies upper_bound(sorted_search_values, values) along each row. + +
+ + +
+ +# tf.raw_ops.UpperBound + + + + + + + + + +Applies upper_bound(sorted_search_values, values) along each row. + + + + + + + + + +Each set of rows with the same index in (sorted_inputs, values) is treated +independently. The resulting row is the equivalent of calling +`np.searchsorted(sorted_inputs, values, side='right')`. + +The result is not a global index to the entire +`Tensor`, but rather just the index in the last dimension. + +A 2-D example: + +``` +sorted_sequence = [[0, 3, 9, 9, 10], + [1, 2, 3, 4, 5]] +values = [[2, 4, 9], + [0, 2, 6]] + +result = UpperBound(sorted_sequence, values) + +result == [[1, 2, 4], + [0, 2, 5]] +``` + + + + + + + + + + + + + + + + + + + +
+`sorted_inputs` + +A `Tensor`. 2-D Tensor where each row is ordered. +
+`values` + +A `Tensor`. Must have the same type as `sorted_inputs`. +2-D Tensor with the same number of rows as `sorted_inputs`. Contains +the values that will be searched for in `sorted_inputs`. +
+`out_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/VarHandleOp.md b/site/en/api_docs/python/tf/raw_ops/VarHandleOp.md new file mode 100644 index 00000000000..4f712ff2921 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/VarHandleOp.md @@ -0,0 +1,102 @@ +description: Creates a handle to a Variable resource. + +
+ + +
+ +# tf.raw_ops.VarHandleOp + + + + + + + + + +Creates a handle to a Variable resource. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dtype` + +A tf.DType. The type of this variable. Must agree with the dtypes +of all ops using this variable. +
+`shape` + +A tf.TensorShape or list of `ints`. +The (possibly partially specified) shape of this variable. +
+`container` + +An optional `string`. Defaults to `""`. +the container this variable is placed in. +
+`shared_name` + +An optional `string`. Defaults to `""`. +the name by which this variable is referred to. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
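+
+#### Example usage:
+
+A hedged sketch (not part of the original op documentation): tf.Variable creates its
+resource handle through this op, so the handle of an existing variable can be used to
+illustrate the related resource-variable ops.
+
+``` python
+import tensorflow as tf
+
+v = tf.Variable([1.0, 2.0])  # internally creates a handle via VarHandleOp
+
+# The handle is a `resource` tensor that the resource-variable ops accept.
+tf.raw_ops.ReadVariableOp(resource=v.handle, dtype=tf.float32)   # ==> [1.0, 2.0]
+tf.raw_ops.VarIsInitializedOp(resource=v.handle)                 # ==> True
+```
+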
+ diff --git a/site/en/api_docs/python/tf/raw_ops/VarIsInitializedOp.md b/site/en/api_docs/python/tf/raw_ops/VarIsInitializedOp.md new file mode 100644 index 00000000000..ebe1e7a6d9a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/VarIsInitializedOp.md @@ -0,0 +1,77 @@ +description: Checks whether a resource handle-based variable has been initialized. + +
+ + +
+ +# tf.raw_ops.VarIsInitializedOp + + + + + + + + + +Checks whether a resource handle-based variable has been initialized. + + + + + + + + + + + + + + + + + + + + + + +
+`resource` + +A `Tensor` of type `resource`. The input resource handle. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Variable.md b/site/en/api_docs/python/tf/raw_ops/Variable.md new file mode 100644 index 00000000000..b5bf8bddcfb --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Variable.md @@ -0,0 +1,98 @@ +description: Use VariableV2 instead. + +
+ + +
+ +# tf.raw_ops.Variable + + + + + + + + + +Use VariableV2 instead. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A tf.TensorShape or list of `ints`. +
+`dtype` + +A tf.DType. +
+`container` + +An optional `string`. Defaults to `""`. +
+`shared_name` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/VariableShape.md b/site/en/api_docs/python/tf/raw_ops/VariableShape.md new file mode 100644 index 00000000000..68ea517602c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/VariableShape.md @@ -0,0 +1,94 @@ +description: Returns the shape of the variable pointed to by resource. + +
+ + +
+ +# tf.raw_ops.VariableShape + + + + + + + + + +Returns the shape of the variable pointed to by `resource`. + + + + + + + + + +This operation returns a 1-D integer tensor representing the shape of `input`. + +#### For example: + + + +``` +# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] +shape(t) ==> [2, 2, 3] +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `resource`. +
+`out_type` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/VariableV2.md b/site/en/api_docs/python/tf/raw_ops/VariableV2.md new file mode 100644 index 00000000000..3b9ab8cb041 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/VariableV2.md @@ -0,0 +1,106 @@ +description: Holds state in the form of a tensor that persists across steps. + +
+ + +
+ +# tf.raw_ops.VariableV2 + + + + + + + + + +Holds state in the form of a tensor that persists across steps. + + + + + + + + + +Outputs a ref to the tensor state so it may be read or modified. +See the TensorFlow documentation on variables for more information +about sharing states in TensorFlow. + + + + + + + + + + + + + + + + + + + + + + +
+`shape` + +A tf.TensorShape or list of `ints`. +The shape of the variable tensor. +
+`dtype` + +A tf.DType. The type of elements in the variable tensor. +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this variable is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this variable is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A mutable `Tensor` of type `dtype`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Where.md b/site/en/api_docs/python/tf/raw_ops/Where.md new file mode 100644 index 00000000000..836bc079a87 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Where.md @@ -0,0 +1,138 @@ +description: Returns locations of nonzero / true values in a tensor. + +
+ + +
+ +# tf.raw_ops.Where + + + + + + + + + +Returns locations of nonzero / true values in a tensor. + + + + + + + + + +This operation returns the coordinates of true elements in `condition`. The +coordinates are returned in a 2-D tensor where the first dimension (rows) +represents the number of true elements, and the second dimension (columns) +represents the coordinates of the true elements. Keep in mind, the shape of +the output tensor can vary depending on how many true values there are in +`condition`. Indices are output in row-major order. + +#### For example: + + + +``` +# 'input' tensor is [[True, False] +# [True, False]] +# 'input' has two true values, so output has two coordinates. +# 'input' has rank of 2, so coordinates have two indices. +where(input) ==> [[0, 0], + [1, 0]] + +# `condition` tensor is [[[True, False] +# [True, False]] +# [[False, True] +# [False, True]] +# [[False, False] +# [False, True]]] +# 'input' has 5 true values, so output has 5 coordinates. +# 'input' has rank of 3, so coordinates have three indices. +where(input) ==> [[0, 0, 0], + [0, 1, 0], + [1, 0, 1], + [1, 1, 1], + [2, 1, 1]] + +# `condition` tensor is [[[1.5, 0.0] +# [-0.5, 0.0]] +# [[0.0, 0.25] +# [0.0, 0.75]] +# [[0.0, 0.0] +# [0.0, 0.01]]] +# 'input' has 5 nonzero values, so output has 5 coordinates. +# 'input' has rank of 3, so coordinates have three indices. +where(input) ==> [[0, 0, 0], + [0, 1, 0], + [1, 0, 1], + [1, 1, 1], + [2, 1, 1]] + +# `condition` tensor is [[[1.5 + 0.0j, 0.0 + 0.0j] +# [0.0 + 0.5j, 0.0 + 0.0j]] +# [[0.0 + 0.0j, 0.25 + 1.5j] +# [0.0 + 0.0j, 0.75 + 0.0j]] +# [[0.0 + 0.0j, 0.0 + 0.0j] +# [0.0 + 0.0j, 0.01 + 0.0j]]] +# 'input' has 5 nonzero magnitude values, so output has 5 coordinates. +# 'input' has rank of 3, so coordinates have three indices. +where(input) ==> [[0, 0, 0], + [0, 1, 0], + [1, 0, 1], + [1, 1, 1], + [2, 1, 1]] +``` + + + + + + + + + + + + + +
+`condition` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/While.md b/site/en/api_docs/python/tf/raw_ops/While.md new file mode 100644 index 00000000000..18ee9315333 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/While.md @@ -0,0 +1,116 @@ +description: output = input; While (Cond(output)) { output = Body(output) } + +
+ + +
+ +# tf.raw_ops.While + + + + + + + + + +output = input; While (Cond(output)) { output = Body(output) } + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A list of `Tensor` objects. +A list of input tensors whose types are T. +
+`cond` + +A function decorated with @Defun. +A function that takes 'input' and returns a tensor. If the tensor is +a non-boolean scalar, the scalar is converted to a boolean +according to the following rule: if the scalar is a numerical +value, non-zero means True and zero means False; if the scalar is +a string, non-empty means True and empty means False. If the +tensor is not a scalar, non-emptiness means True and emptiness +means False. +
+`body` + +A function decorated with @Defun. +A function that takes a list of tensors and returns another +list of tensors. Both lists have the same types as specified +by T. +
+`output_shapes` + +An optional list of shapes (each a tf.TensorShape or list of `ints`). Defaults to `[]`. +
+`parallel_iterations` + +An optional `int`. Defaults to `10`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list of `Tensor` objects. Has the same type as `input`. +
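+
+#### Example usage:
+
+A hedged sketch (not part of the original op documentation) using the higher-level
+tf.while_loop, which expresses the same
+`output = input; While (Cond(output)) { output = Body(output) }` pattern without
+constructing `@Defun` functions by hand:
+
+``` python
+import tensorflow as tf
+
+i = tf.constant(0)
+cond = lambda i: tf.less(i, 10)     # keep iterating while i < 10
+body = lambda i: (tf.add(i, 1),)    # each iteration returns the updated loop vars
+
+result = tf.while_loop(cond, body, [i])
+# The loop runs until the condition is False, so the final loop variable is 10.
+```
+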
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WholeFileReader.md b/site/en/api_docs/python/tf/raw_ops/WholeFileReader.md new file mode 100644 index 00000000000..d36f24e821b --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WholeFileReader.md @@ -0,0 +1,90 @@ +description: A Reader that outputs the entire contents of a file as a value. + +
+ + +
+ +# tf.raw_ops.WholeFileReader + + + + + + + + + +A Reader that outputs the entire contents of a file as a value. + + + + + + + + + +To use, enqueue filenames in a Queue. The output of ReaderRead will +be a filename (key) and the contents of that file (value). + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type mutable `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WholeFileReaderV2.md b/site/en/api_docs/python/tf/raw_ops/WholeFileReaderV2.md new file mode 100644 index 00000000000..5e94100759a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WholeFileReaderV2.md @@ -0,0 +1,90 @@ +description: A Reader that outputs the entire contents of a file as a value. + +
+ + +
+ +# tf.raw_ops.WholeFileReaderV2 + + + + + + + + + +A Reader that outputs the entire contents of a file as a value. + + + + + + + + + +To use, enqueue filenames in a Queue. The output of ReaderRead will +be a filename (key) and the contents of that file (value). + + + + + + + + + + + + + + + + +
+`container` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is placed in the given container. +Otherwise, a default container is used. +
+`shared_name` + +An optional `string`. Defaults to `""`. +If non-empty, this reader is named in the given bucket +with this shared_name. Otherwise, the node name is used instead. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `resource`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WindowDataset.md b/site/en/api_docs/python/tf/raw_ops/WindowDataset.md new file mode 100644 index 00000000000..619fba70fc5 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WindowDataset.md @@ -0,0 +1,173 @@ +description: Combines (nests of) input elements into a dataset of (nests of) windows. + +
+ + +
+ +# tf.raw_ops.WindowDataset + + + + + + + + + +Combines (nests of) input elements into a dataset of (nests of) windows. + + + + + + + + + +A "window" is a finite dataset of flat elements of size `size` (or possibly +fewer if there are not enough input elements to fill the window and +`drop_remainder` evaluates to false). + +The `shift` argument determines the number of input elements by which +the window moves on each iteration. The first element in the `k`th window +will be element + +``` +1 + (k-1) * shift +``` + +of the input dataset. In particular, the first element of the first window +will always be the first element of the input dataset. + +If the `stride` parameter is greater than 1, then each window will skip +`(stride - 1)` input elements between each element that appears in the +window. Output windows will still contain `size` elements regardless of +the value of `stride`. + +The `stride` argument determines the stride of the input elements, and the +`shift` argument determines the shift of the window. + +For example, letting `{...}` to represent a Dataset: + +- `tf.data.Dataset.range(7).window(2)` produces + `{{0, 1}, {2, 3}, {4, 5}, {6}}` +- `tf.data.Dataset.range(7).window(3, 2, 1, True)` produces + `{{0, 1, 2}, {2, 3, 4}, {4, 5, 6}}` +- `tf.data.Dataset.range(7).window(3, 1, 2, True)` produces + `{{0, 2, 4}, {1, 3, 5}, {2, 4, 6}}` + +Note that when the `window` transformation is applied to a dataset of +nested elements, it produces a dataset of nested windows. + +#### For example: + + + +- `tf.data.Dataset.from_tensor_slices((range(4), range(4))).window(2)` + produces `{({0, 1}, {0, 1}), ({2, 3}, {2, 3})}` +- `tf.data.Dataset.from_tensor_slices({"a": range(4)}).window(2)` + produces `{{"a": {0, 1}}, {"a": {2, 3}}}` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_dataset` + +A `Tensor` of type `variant`. +
+`size` + +A `Tensor` of type `int64`. +An integer scalar, representing the number of elements +of the input dataset to combine into a window. Must be positive. +
+`shift` + +A `Tensor` of type `int64`. +An integer scalar, representing the number of input elements +by which the window moves in each iteration. Defaults to `size`. +Must be positive. +
+`stride` + +A `Tensor` of type `int64`. +An integer scalar, representing the stride of the input elements +in the sliding window. Must be positive. The default value of 1 means +"retain every input element". +
+`drop_remainder` + +A `Tensor` of type `bool`. +A Boolean scalar, representing whether the last window should be +dropped if its size is smaller than `size`. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WorkerHeartbeat.md b/site/en/api_docs/python/tf/raw_ops/WorkerHeartbeat.md new file mode 100644 index 00000000000..e0672c135b7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WorkerHeartbeat.md @@ -0,0 +1,80 @@ +description: Worker heartbeat op. + +
+ + +
+ +# tf.raw_ops.WorkerHeartbeat + + + + + + + + + +Worker heartbeat op. + + + + + + + + + +Heartbeats may be sent periodically to indicate the coordinator is still active, +to retrieve the current worker status and to expedite shutdown when necessary. + + + + + + + + + + + + + +
+`request` + +A `Tensor` of type `string`. +A string tensor containing a serialized WorkerHeartbeatRequest +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WrapDatasetVariant.md b/site/en/api_docs/python/tf/raw_ops/WrapDatasetVariant.md new file mode 100644 index 00000000000..f8b7b23b4a7 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WrapDatasetVariant.md @@ -0,0 +1,75 @@ +
+ + +
+ +# tf.raw_ops.WrapDatasetVariant + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_handle` + +A `Tensor` of type `variant`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WriteAudioSummary.md b/site/en/api_docs/python/tf/raw_ops/WriteAudioSummary.md new file mode 100644 index 00000000000..78713e673e0 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WriteAudioSummary.md @@ -0,0 +1,110 @@ +
+ + +
+ +# tf.raw_ops.WriteAudioSummary + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`step` + +A `Tensor` of type `int64`. +
+`tag` + +A `Tensor` of type `string`. +
+`tensor` + +A `Tensor` of type `float32`. +
+`sample_rate` + +A `Tensor` of type `float32`. +
+`max_outputs` + +An optional `int` that is `>= 1`. Defaults to `3`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WriteFile.md b/site/en/api_docs/python/tf/raw_ops/WriteFile.md new file mode 100644 index 00000000000..37f367e6bec --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WriteFile.md @@ -0,0 +1,87 @@ +description: Writes contents to the file at input filename. + +
+ + +
+ +# tf.raw_ops.WriteFile + + + + + + + + + +Writes `contents` to the file at input `filename`. + + + + + + + + + +Creates the file and recursively creates the directory if it does not exist. + + + + + + + + + + + + + + + +
+`filename` + +A `Tensor` of type `string`. +scalar. The name of the file to which we write the contents. +
+`contents` + +A `Tensor` of type `string`. +scalar. The content to be written to the output file. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
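+
+#### Example usage:
+
+A minimal, hedged sketch (not part of the original op documentation); the file path is
+hypothetical:
+
+``` python
+import tensorflow as tf
+
+path = "/tmp/write_file_example.txt"  # hypothetical path
+tf.raw_ops.WriteFile(filename=tf.constant(path),
+                     contents=tf.constant("hello, world"))
+
+tf.io.read_file(path)  # ==> b'hello, world'
+```
+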
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WriteGraphSummary.md b/site/en/api_docs/python/tf/raw_ops/WriteGraphSummary.md new file mode 100644 index 00000000000..99fbcb5d665 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WriteGraphSummary.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.WriteGraphSummary + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`step` + +A `Tensor` of type `int64`. +
+`tensor` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WriteHistogramSummary.md b/site/en/api_docs/python/tf/raw_ops/WriteHistogramSummary.md new file mode 100644 index 00000000000..913ed4b9e1c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WriteHistogramSummary.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.WriteHistogramSummary + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`step` + +A `Tensor` of type `int64`. +
+`tag` + +A `Tensor` of type `string`. +
+`values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WriteImageSummary.md b/site/en/api_docs/python/tf/raw_ops/WriteImageSummary.md new file mode 100644 index 00000000000..7c320788056 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WriteImageSummary.md @@ -0,0 +1,110 @@ +
+ + +
+ +# tf.raw_ops.WriteImageSummary + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`step` + +A `Tensor` of type `int64`. +
+`tag` + +A `Tensor` of type `string`. +
+`tensor` + +A `Tensor`. Must be one of the following types: `uint8`, `float32`, `half`. +
+`bad_color` + +A `Tensor` of type `uint8`. +
+`max_images` + +An optional `int` that is `>= 1`. Defaults to `3`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WriteRawProtoSummary.md b/site/en/api_docs/python/tf/raw_ops/WriteRawProtoSummary.md new file mode 100644 index 00000000000..b6b11bf0261 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WriteRawProtoSummary.md @@ -0,0 +1,89 @@ +
+ + +
+ +# tf.raw_ops.WriteRawProtoSummary + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`step` + +A `Tensor` of type `int64`. +
+`tensor` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WriteScalarSummary.md b/site/en/api_docs/python/tf/raw_ops/WriteScalarSummary.md new file mode 100644 index 00000000000..b554fb96fee --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WriteScalarSummary.md @@ -0,0 +1,96 @@ +
+ + +
+ +# tf.raw_ops.WriteScalarSummary + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`step` + +A `Tensor` of type `int64`. +
+`tag` + +A `Tensor` of type `string`. +
+`value` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/WriteSummary.md b/site/en/api_docs/python/tf/raw_ops/WriteSummary.md new file mode 100644 index 00000000000..f49b5d4f232 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/WriteSummary.md @@ -0,0 +1,103 @@ +
+ + +
+ +# tf.raw_ops.WriteSummary + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`writer` + +A `Tensor` of type `resource`. +
+`step` + +A `Tensor` of type `int64`. +
+`tensor` + +A `Tensor`. +
+`tag` + +A `Tensor` of type `string`. +
+`summary_metadata` + +A `Tensor` of type `string`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created Operation. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Xdivy.md b/site/en/api_docs/python/tf/raw_ops/Xdivy.md new file mode 100644 index 00000000000..8f227e830d6 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Xdivy.md @@ -0,0 +1,84 @@ +description: Returns 0 if x == 0, and x / y otherwise, elementwise. + +
+ + +
+ +# tf.raw_ops.Xdivy + + + + + + + + + +Returns 0 if x == 0, and x / y otherwise, elementwise. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
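+
+#### Example usage:
+
+A minimal, hedged sketch (not part of the original op documentation), showing that the
+0 / 0 entry yields 0 rather than NaN:
+
+``` python
+import tensorflow as tf
+
+x = tf.constant([0.0, 4.0])
+y = tf.constant([0.0, 2.0])
+tf.raw_ops.Xdivy(x=x, y=y)
+# ==> [0.0, 2.0]
+```
+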
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Xlog1py.md b/site/en/api_docs/python/tf/raw_ops/Xlog1py.md new file mode 100644 index 00000000000..cfdbf089776 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Xlog1py.md @@ -0,0 +1,84 @@ +description: Returns 0 if x == 0, and x * log1p(y) otherwise, elementwise. + +
+ + +
+ +# tf.raw_ops.Xlog1py + + + + + + + + + +Returns 0 if x == 0, and x * log1p(y) otherwise, elementwise. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Xlogy.md b/site/en/api_docs/python/tf/raw_ops/Xlogy.md new file mode 100644 index 00000000000..a035b9c51c1 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Xlogy.md @@ -0,0 +1,84 @@ +description: Returns 0 if x == 0, and x * log(y) otherwise, elementwise. + +
+ + +
+ +# tf.raw_ops.Xlogy + + + + + + + + + +Returns 0 if x == 0, and x * log(y) otherwise, elementwise. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
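+
+#### Example usage:
+
+A minimal, hedged sketch (not part of the original op documentation); the x == 0 entry
+returns 0 instead of the NaN that 0 * log(0) would produce:
+
+``` python
+import tensorflow as tf
+
+x = tf.constant([0.0, 1.0])
+y = tf.constant([0.0, 10.0])
+tf.raw_ops.Xlogy(x=x, y=y)
+# ==> [0.0, 2.3025851]   (the second entry is log(10))
+```
+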
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ZerosLike.md b/site/en/api_docs/python/tf/raw_ops/ZerosLike.md new file mode 100644 index 00000000000..80d904c201a --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ZerosLike.md @@ -0,0 +1,77 @@ +description: Returns a tensor of zeros with the same shape and type as x. + +
+ + +
+ +# tf.raw_ops.ZerosLike + + + + + + + + + +Returns a tensor of zeros with the same shape and type as x. + + + + + + + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. A tensor of type `T`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
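+
+#### Example usage:
+
+A minimal, hedged sketch (not part of the original op documentation):
+
+``` python
+import tensorflow as tf
+
+x = tf.constant([[1, 2, 3],
+                 [4, 5, 6]])
+tf.raw_ops.ZerosLike(x=x)
+# ==> [[0, 0, 0],
+#      [0, 0, 0]]
+```
+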
+ diff --git a/site/en/api_docs/python/tf/raw_ops/Zeta.md b/site/en/api_docs/python/tf/raw_ops/Zeta.md new file mode 100644 index 00000000000..22527f33637 --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/Zeta.md @@ -0,0 +1,88 @@ +description: Compute the Hurwitz zeta function \\(\zeta(x, q)\\). + +
+ + +
+ +# tf.raw_ops.Zeta + + + + + + + + + +Compute the Hurwitz zeta function \\(\zeta(x, q)\\). + + + + + + + + + +The Hurwitz zeta function is defined as: + + +\\(\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}\\) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +
+`q` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
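+
+#### Example usage:
+
+A minimal, hedged sketch (not part of the original op documentation). With `q = 1` the
+Hurwitz zeta function reduces to the Riemann zeta function, e.g.
+\\(\zeta(2, 1) = \pi^2 / 6\\):
+
+``` python
+import tensorflow as tf
+
+x = tf.constant([2.0, 4.0])
+q = tf.constant([1.0, 1.0])
+tf.raw_ops.Zeta(x=x, q=q)
+# ==> [1.6449341, 1.0823232]   (pi**2 / 6 and pi**4 / 90)
+```
+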
+ diff --git a/site/en/api_docs/python/tf/raw_ops/ZipDataset.md b/site/en/api_docs/python/tf/raw_ops/ZipDataset.md new file mode 100644 index 00000000000..837a6aba78c --- /dev/null +++ b/site/en/api_docs/python/tf/raw_ops/ZipDataset.md @@ -0,0 +1,97 @@ +description: Creates a dataset that zips together input_datasets. + +
+ + +
+ +# tf.raw_ops.ZipDataset + + + + + + + + + +Creates a dataset that zips together `input_datasets`. + + + + + + + + + +The elements of the resulting dataset are created by zipping corresponding +elements from each of the input datasets. + +The size of the resulting dataset will match the size of the smallest input +dataset, and no error will be raised if input datasets have different sizes. + + + + + + + + + + + + + + + + + + + +
+`input_datasets` + +A list of at least 1 `Tensor` objects with type `variant`. +List of `N` variant Tensors representing datasets to be zipped together. +
+`output_types` + +A list of `tf.DTypes` that has length `>= 1`. +
+`output_shapes` + +A list of shapes (each a tf.TensorShape or list of `ints`) that has length `>= 1`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `variant`. +
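+
+#### Example usage:
+
+A hedged sketch (not part of the original op documentation) using the high-level
+tf.data.Dataset.zip API rather than calling the raw op directly:
+
+``` python
+import tensorflow as tf
+
+a = tf.data.Dataset.range(3)        # 0, 1, 2
+b = tf.data.Dataset.range(10, 14)   # 10, 11, 12, 13
+zipped = tf.data.Dataset.zip((a, b))
+
+# Elements: (0, 10), (1, 11), (2, 12) -- the shorter input bounds the result.
+for x, y in zipped:
+    print(x.numpy(), y.numpy())
+```
+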
+ diff --git a/site/en/api_docs/python/tf/realdiv.md b/site/en/api_docs/python/tf/realdiv.md new file mode 100644 index 00000000000..a53ef67e2fa --- /dev/null +++ b/site/en/api_docs/python/tf/realdiv.md @@ -0,0 +1,88 @@ +description: Returns x / y element-wise for real types. + +
+ + +
+ +# tf.realdiv + + + + + + + + + +Returns x / y element-wise for real types. + + + + + + + + + +If `x` and `y` are reals, this will return the floating-point division. + +*NOTE*: `Div` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
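+
+#### Example usage:
+
+A minimal, hedged sketch (not part of the original documentation):
+
+``` python
+import tensorflow as tf
+
+x = tf.constant([7.0, 3.0])
+y = tf.constant([2.0, 2.0])
+tf.realdiv(x, y)
+# ==> [3.5, 1.5]
+```
+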
+ diff --git a/site/en/api_docs/python/tf/recompute_grad.md b/site/en/api_docs/python/tf/recompute_grad.md new file mode 100644 index 00000000000..963c39645b8 --- /dev/null +++ b/site/en/api_docs/python/tf/recompute_grad.md @@ -0,0 +1,85 @@ +description: An eager-compatible version of recompute_grad. + +
+ + +
+ +# tf.recompute_grad + + + + + + + + + +An eager-compatible version of recompute_grad. + + + + + + + + + +For f(*args, **kwargs), this supports gradients with respect to args or +kwargs, but kwargs are currently only supported in eager-mode. +Note that for keras layer and model objects, this is handled automatically. + +Warning: If `f` was originally a tf.keras Model or Layer object, `g` will not +be able to access the member variables of that object, because `g` returns +through the wrapper function `inner`. When recomputing gradients through +objects that inherit from keras, we suggest keeping a reference to the +underlying object around for the purpose of accessing these variables. + + + + + + + + + + +
+`f` + +function `f(*x)` that returns a `Tensor` or sequence of `Tensor` outputs. +
+ + + + + + + + + + + +
+A function `g` that wraps `f`, but which recomputes `f` on the backwards +pass of a gradient call. +
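+
+#### Example usage:
+
+A hedged sketch (not part of the original documentation): the intermediate activations of
+the wrapped function are not kept for the backward pass; they are recomputed when the
+gradient is requested.
+
+``` python
+import tensorflow as tf
+
+@tf.recompute_grad
+def block(x):
+    # Activations of this block are recomputed during backprop instead of stored.
+    return tf.nn.relu(tf.matmul(x, x))
+
+x = tf.Variable(tf.ones([2, 2]))
+with tf.GradientTape() as tape:
+    y = tf.reduce_sum(block(x))
+grads = tape.gradient(y, [x])
+```
+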
+ diff --git a/site/en/api_docs/python/tf/reduce_all.md b/site/en/api_docs/python/tf/reduce_all.md new file mode 100644 index 00000000000..16bc5f5c663 --- /dev/null +++ b/site/en/api_docs/python/tf/reduce_all.md @@ -0,0 +1,119 @@ +description: Computes the "logical and" of elements across dimensions of a tensor. + +
+ + +
+ +# tf.reduce_all + + + + + + + + + +Computes the "logical and" of elements across dimensions of a tensor. + + + + + + + + + +Reduces `input_tensor` along the dimensions given in `axis`. +Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each +entry in `axis`. If `keepdims` is true, the reduced dimensions +are retained with length 1. + +If `axis` is None, all dimensions are reduced, and a +tensor with a single element is returned. + +#### For example: + + + +```python +x = tf.constant([[True, True], [False, False]]) +tf.reduce_all(x) # False +tf.reduce_all(x, 0) # [False, False] +tf.reduce_all(x, 1) # [True, False] +``` + + + + + + + + + + + + + + + + + + + +
+`input_tensor` + +The boolean tensor to reduce. +
+`axis` + +The dimensions to reduce. If `None` (the default), reduces all +dimensions. Must be in the range `[-rank(input_tensor), +rank(input_tensor))`. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced tensor. +
+ + + + +#### Numpy Compatibility +Equivalent to np.all + diff --git a/site/en/api_docs/python/tf/register_tensor_conversion_function.md b/site/en/api_docs/python/tf/register_tensor_conversion_function.md new file mode 100644 index 00000000000..06066ce19cd --- /dev/null +++ b/site/en/api_docs/python/tf/register_tensor_conversion_function.md @@ -0,0 +1,119 @@ +description: Registers a function for converting objects of base_type to Tensor. + +
+ + +
+ +# tf.register_tensor_conversion_function + + + + + + + + + +Registers a function for converting objects of `base_type` to `Tensor`. + + + + + + + + + +The conversion function must have the following signature: + +```python + def conversion_func(value, dtype=None, name=None, as_ref=False): + # ... +``` + +It must return a `Tensor` with the given `dtype` if specified. If the +conversion function creates a new `Tensor`, it should use the given +`name` if specified. All exceptions will be propagated to the caller. + +The conversion function may return `NotImplemented` for some +inputs. In this case, the conversion process will continue to try +subsequent conversion functions. + +If `as_ref` is true, the function must return a `Tensor` reference, +such as a `Variable`. + +NOTE: The conversion functions will execute in order of priority, +followed by order of registration. To ensure that a conversion function +`F` runs before another conversion function `G`, ensure that `F` is +registered with a smaller priority than `G`. + + + + + + + + + + + + + + + + +
+`base_type` + +The base type or tuple of base types for all objects that +`conversion_func` accepts. +
+`conversion_func` + +A function that converts instances of `base_type` to +`Tensor`. +
+`priority` + +Optional integer that indicates the priority for applying this +conversion function. Conversion functions with smaller priority values run +earlier than conversion functions with larger priority values. Defaults to +100. +
+ + + + + + + + + + + + +
+`TypeError` + +If the arguments do not have the appropriate type. +
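+
+#### Example usage:
+
+A hedged sketch (not part of the original documentation); the `Celsius` wrapper class and
+its conversion function are hypothetical:
+
+``` python
+import tensorflow as tf
+
+class Celsius:
+    def __init__(self, degrees):
+        self.degrees = degrees
+
+def celsius_to_tensor(value, dtype=None, name=None, as_ref=False):
+    del name, as_ref  # unused in this sketch
+    return tf.constant(value.degrees, dtype=dtype or tf.float32)
+
+tf.register_tensor_conversion_function(Celsius, celsius_to_tensor)
+
+tf.convert_to_tensor(Celsius(21.5))  # ==> tf.Tensor(21.5, shape=(), dtype=float32)
+```
+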
+ diff --git a/site/en/api_docs/python/tf/repeat.md b/site/en/api_docs/python/tf/repeat.md new file mode 100644 index 00000000000..29439e4348a --- /dev/null +++ b/site/en/api_docs/python/tf/repeat.md @@ -0,0 +1,141 @@ +description: Repeat elements of input. + +
+ + +
+ +# tf.repeat + + + + + + + + + +Repeat elements of `input`. + + + + + + + + + +See also tf.concat, tf.stack, tf.tile. + + + + + + + + + + + + + + + + + + + +
+`input` + +An `N`-dimensional Tensor. +
+`repeats` + +A 1-D `int` Tensor. The number of repetitions for each element. +`repeats` is broadcast to fit the shape of the given axis. `len(repeats)` +must equal `input.shape[axis]` if axis is not None. +
+`axis` + +An int. The axis along which to repeat values. By default (axis=None), +use the flattened input array, and return a flat output array. +
+`name` + +A name for the operation. +
+ + + + + + + + + + + +
+A Tensor which has the same shape as `input`, except along the given axis. +If axis is None then the output array is flattened to match the flattened +input array. +
+ + + +#### Example usage: + + + +``` +>>> repeat(['a', 'b', 'c'], repeats=[3, 0, 2], axis=0) + +``` + +``` +>>> repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=0) + +``` + +``` +>>> repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=1) + +``` + +``` +>>> repeat(3, repeats=4) + +``` + +``` +>>> repeat([[1,2], [3,4]], repeats=2) + +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/required_space_to_batch_paddings.md b/site/en/api_docs/python/tf/required_space_to_batch_paddings.md new file mode 100644 index 00000000000..2f25ae114db --- /dev/null +++ b/site/en/api_docs/python/tf/required_space_to_batch_paddings.md @@ -0,0 +1,116 @@ +description: Calculate padding required to make block_shape divide input_shape. + +
+ + +
+ +# tf.required_space_to_batch_paddings + + + + + + + + + +Calculate padding required to make block_shape divide input_shape. + + + + + + + + + +This function can be used to calculate a suitable paddings argument for use +with space_to_batch_nd and batch_to_space_nd. + + + + + + + + + + + + + + + + + + + +
+`input_shape` + +int32 Tensor of shape [N]. +
+`block_shape` + +int32 Tensor of shape [N]. +
+`base_paddings` + +Optional int32 Tensor of shape [N, 2]. Specifies the minimum +amount of padding to use. All elements must be >= 0. If not specified, +defaults to 0. +
+`name` + +string. Optional name prefix. +
+ + + + + + + + + + + + + + +
+(paddings, crops), where: + +`paddings` and `crops` are int32 Tensors of rank 2 and shape [N, 2] +
+`satisfying` + +paddings[i, 0] = base_paddings[i, 0]. +0 <= paddings[i, 1] - base_paddings[i, 1] < block_shape[i] +(input_shape[i] + paddings[i, 0] + paddings[i, 1]) % block_shape[i] == 0 + +crops[i, 0] = 0 +crops[i, 1] = paddings[i, 1] - base_paddings[i, 1] +
+ + +Raises: ValueError if called with incompatible shapes. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/reshape.md b/site/en/api_docs/python/tf/reshape.md new file mode 100644 index 00000000000..5342619f44d --- /dev/null +++ b/site/en/api_docs/python/tf/reshape.md @@ -0,0 +1,229 @@ +description: Reshapes a tensor. + +
+ + +
+ +# tf.reshape + + + + + + + + + +Reshapes a tensor. + + + + + + + + + +Given `tensor`, this operation returns a new tf.Tensor that has the same +values as `tensor` in the same order, except with a new shape given by +`shape`. + +``` +>>> t1 = [[1, 2, 3], +... [4, 5, 6]] +>>> print(tf.shape(t1).numpy()) +[2 3] +>>> t2 = tf.reshape(t1, [6]) +>>> t2 + +>>> tf.reshape(t2, [3, 2]) + +``` + +The tf.reshape does not change the order of or the total number of elements +in the tensor, and so it can reuse the underlying data buffer. This makes it +a fast operation independent of how big of a tensor it is operating on. + +``` +>>> tf.reshape([1, 2, 3], [2, 2]) +Traceback (most recent call last): +... +InvalidArgumentError: Input to reshape is a tensor with 3 values, but the +requested shape has 4 +``` + +To instead reorder the data to rearrange the dimensions of a tensor, see +tf.transpose. + +``` +>>> t = [[1, 2, 3], +... [4, 5, 6]] +>>> tf.reshape(t, [3, 2]).numpy() +array([[1, 2], + [3, 4], + [5, 6]], dtype=int32) +>>> tf.transpose(t, perm=[1, 0]).numpy() +array([[1, 4], + [2, 5], + [3, 6]], dtype=int32) +``` + +If one component of `shape` is the special value -1, the size of that +dimension is computed so that the total size remains constant. In particular, +a `shape` of `[-1]` flattens into 1-D. At most one component of `shape` can +be -1. + +``` +>>> t = [[1, 2, 3], +... [4, 5, 6]] +>>> tf.reshape(t, [-1]) + +>>> tf.reshape(t, [3, -1]) + +>>> tf.reshape(t, [-1, 2]) + +``` + +`tf.reshape(t, [])` reshapes a tensor `t` with one element to a scalar. + +``` +>>> tf.reshape([7], []).numpy() +7 +``` + +#### More examples: + + + +``` +>>> t = [1, 2, 3, 4, 5, 6, 7, 8, 9] +>>> print(tf.shape(t).numpy()) +[9] +>>> tf.reshape(t, [3, 3]) + +``` + +``` +>>> t = [[[1, 1], [2, 2]], +... [[3, 3], [4, 4]]] +>>> print(tf.shape(t).numpy()) +[2 2 2] +>>> tf.reshape(t, [2, 4]) + +``` + +``` +>>> t = [[[1, 1, 1], +... [2, 2, 2]], +... [[3, 3, 3], +... [4, 4, 4]], +... [[5, 5, 5], +... [6, 6, 6]]] +>>> print(tf.shape(t).numpy()) +[3 2 3] +>>> # Pass '[-1]' to flatten 't'. +>>> tf.reshape(t, [-1]) + +>>> # -- Using -1 to infer the shape -- +>>> # Here -1 is inferred to be 9: +>>> tf.reshape(t, [2, -1]) + +>>> # -1 is inferred to be 2: +>>> tf.reshape(t, [-1, 9]) + +>>> # -1 is inferred to be 3: +>>> tf.reshape(t, [ 2, -1, 3]) + +``` + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. +
+`shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Defines the shape of the output tensor. +
+`name` + +Optional string. A name for the operation. +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/reverse.md b/site/en/api_docs/python/tf/reverse.md new file mode 100644 index 00000000000..a89d60b3ec7 --- /dev/null +++ b/site/en/api_docs/python/tf/reverse.md @@ -0,0 +1,135 @@ +description: Reverses specific dimensions of a tensor. + +
+ + +
+ +# tf.reverse + + + + + + + + + +Reverses specific dimensions of a tensor. + + + + + + + + + +NOTE tf.reverse has now changed behavior in preparation for 1.0. +`tf.reverse_v2` is currently an alias that will be deprecated before TF 1.0. + +Given a `tensor`, and a `int32` tensor `axis` representing the set of +dimensions of `tensor` to reverse. This operation reverses each dimension +`i` for which there exists `j` s.t. `axis[j] == i`. + +`tensor` can have up to 8 dimensions. The number of dimensions specified +in `axis` may be 0 or more entries. If an index is specified more than +once, a InvalidArgument error is raised. + +#### For example: + + + +``` +# tensor 't' is [[[[ 0, 1, 2, 3], +# [ 4, 5, 6, 7], +# [ 8, 9, 10, 11]], +# [[12, 13, 14, 15], +# [16, 17, 18, 19], +# [20, 21, 22, 23]]]] +# tensor 't' shape is [1, 2, 3, 4] + +# 'dims' is [3] or 'dims' is [-1] +reverse(t, dims) ==> [[[[ 3, 2, 1, 0], + [ 7, 6, 5, 4], + [ 11, 10, 9, 8]], + [[15, 14, 13, 12], + [19, 18, 17, 16], + [23, 22, 21, 20]]]] + +# 'dims' is '[1]' (or 'dims' is '[-3]') +reverse(t, dims) ==> [[[[12, 13, 14, 15], + [16, 17, 18, 19], + [20, 21, 22, 23] + [[ 0, 1, 2, 3], + [ 4, 5, 6, 7], + [ 8, 9, 10, 11]]]] + +# 'dims' is '[2]' (or 'dims' is '[-2]') +reverse(t, dims) ==> [[[[8, 9, 10, 11], + [4, 5, 6, 7], + [0, 1, 2, 3]] + [[20, 21, 22, 23], + [16, 17, 18, 19], + [12, 13, 14, 15]]]] +``` + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Must be one of the following types: `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `bool`, `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`, `string`. +Up to 8-D. +
+`axis` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D. The indices of the dimensions to reverse. Must be in the range +`[-rank(tensor), rank(tensor))`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/reverse_sequence.md b/site/en/api_docs/python/tf/reverse_sequence.md new file mode 100644 index 00000000000..32b2df9288f --- /dev/null +++ b/site/en/api_docs/python/tf/reverse_sequence.md @@ -0,0 +1,104 @@ +description: Reverses variable length slices. (deprecated arguments) (deprecated arguments) + +
+ + +
+ +# tf.reverse_sequence + + + + + + + + + +Reverses variable length slices. (deprecated arguments) (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(seq_dim)`. They will be removed in a future version. +Instructions for updating: +seq_dim is deprecated, use seq_axis instead + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(batch_dim)`. They will be removed in a future version. +Instructions for updating: +batch_dim is deprecated, use batch_axis instead + +This op first slices `input` along the dimension `batch_axis`, and for +each slice `i`, reverses the first `seq_lengths[i]` elements along the +dimension `seq_axis`. + +The elements of `seq_lengths` must obey `seq_lengths[i] <= +input.dims[seq_dim]`, and `seq_lengths` must be a vector of length +`input.dims[batch_dim]`. + +The output slice `i` along dimension `batch_axis` is then given by +input slice `i`, with the first `seq_lengths[i]` slices along +dimension `seq_axis` reversed. + +#### Example usage: + + + +``` +>>> seq_lengths = [7, 2, 3, 5] +>>> input = [[1, 2, 3, 4, 5, 0, 0, 0], [1, 2, 0, 0, 0, 0, 0, 0], +... [1, 2, 3, 4, 0, 0, 0, 0], [1, 2, 3, 4, 5, 6, 7, 8]] +>>> output = tf.reverse_sequence(input, seq_lengths, seq_axis=1, batch_axis=0) +>>> output + +``` + + + + + + + + + +
+
`input`

A `Tensor`. The input to reverse.

`seq_lengths`

A `Tensor`. Must be one of the following types: `int32`, `int64`.
1-D with length `input.dims(batch_dim)` and
`max(seq_lengths) <= input.dims(seq_dim)`.

`seq_axis`

An `int`. The dimension which is partially reversed.

`batch_axis`

An optional `int`. Defaults to `0`. The dimension along which
reversal is performed.

`name`

A name for the operation (optional).
+ + + + + + + + + + + +
+
A `Tensor`. Has the same type as `input`.
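
A minimal runnable sketch of the behaviour described above, with the per-row results spelled out in the comments:

```python
import tensorflow as tf

seq_lengths = tf.constant([7, 2, 3, 5])
inputs = tf.constant([[1, 2, 3, 4, 5, 0, 0, 0],
                      [1, 2, 0, 0, 0, 0, 0, 0],
                      [1, 2, 3, 4, 0, 0, 0, 0],
                      [1, 2, 3, 4, 5, 6, 7, 8]])

# For each row i, the first seq_lengths[i] entries are reversed in place.
output = tf.reverse_sequence(inputs, seq_lengths, seq_axis=1, batch_axis=0)
print(output.numpy())
# [[0 0 5 4 3 2 1 0]
#  [2 1 0 0 0 0 0 0]
#  [3 2 1 4 0 0 0 0]
#  [5 4 3 2 1 6 7 8]]
```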
+ diff --git a/site/en/api_docs/python/tf/roll.md b/site/en/api_docs/python/tf/roll.md new file mode 100644 index 00000000000..3ce12a1a310 --- /dev/null +++ b/site/en/api_docs/python/tf/roll.md @@ -0,0 +1,126 @@ +description: Rolls the elements of a tensor along an axis. + +
+ + +
+

# tf.roll



Rolls the elements of a tensor along an axis.



The elements are shifted positively (towards larger indices) by the offset of
`shift` along the dimension of `axis`. Negative `shift` values will shift
elements in the opposite direction. Elements that roll past the last position
will wrap around to the first and vice versa. Multiple shifts along multiple
axes may be specified.

#### For example:



```
# 't' is [0, 1, 2, 3, 4]
roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2]

# shifting along multiple dimensions
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]

# shifting along the same axis multiple times
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]
```


+`input` + +A `Tensor`. +
+`shift` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Dimension must be 0-D or 1-D. `shift[i]` specifies the number of places by which +elements are shifted positively (towards larger indices) along the dimension +specified by `axis[i]`. Negative shifts will roll the elements in the opposite +direction. +
+
`axis`

A `Tensor`. Must be one of the following types: `int32`, `int64`.
Dimension must be 0-D or 1-D. `axis[i]` specifies the dimension in which the
shift `shift[i]` should occur. If the same axis is referenced more than once,
the total shift for that axis will be the sum of all the shifts that belong to
that axis.
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
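
A runnable sketch of the pseudocode above (tensor values are arbitrary):

```python
import tensorflow as tf

t = tf.constant([0, 1, 2, 3, 4])
print(tf.roll(t, shift=2, axis=0).numpy())   # [3 4 0 1 2]

m = tf.constant([[0, 1, 2, 3, 4],
                 [5, 6, 7, 8, 9]])
# Shift rows down by 1 and columns left by 2.
print(tf.roll(m, shift=[1, -2], axis=[0, 1]).numpy())
# [[7 8 9 5 6]
#  [2 3 4 0 1]]
```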
+ diff --git a/site/en/api_docs/python/tf/saved_model.md b/site/en/api_docs/python/tf/saved_model.md new file mode 100644 index 00000000000..8b2dc1fb2e8 --- /dev/null +++ b/site/en/api_docs/python/tf/saved_model.md @@ -0,0 +1,85 @@ +description: Public API for tf.saved_model namespace. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# Module: tf.saved_model + + + + + + + + + +Public API for tf.saved_model namespace. + + + +## Classes + +[`class Asset`](../tf/saved_model/Asset.md): Represents a file asset to hermetically include in a SavedModel. + +[`class SaveOptions`](../tf/saved_model/SaveOptions.md): Options for saving to SavedModel. + +## Functions + +[`contains_saved_model(...)`](../tf/saved_model/contains_saved_model.md): Checks whether the provided export directory could contain a SavedModel. + +[`load(...)`](../tf/saved_model/load.md): Load a SavedModel from `export_dir`. + +[`save(...)`](../tf/saved_model/save.md): Exports the Trackable object `obj` to [SavedModel format](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md). + +## Other Members + +* `ASSETS_DIRECTORY = 'assets'` +* `ASSETS_KEY = 'saved_model_assets'` +* `CLASSIFY_INPUTS = 'inputs'` +* `CLASSIFY_METHOD_NAME = 'tensorflow/serving/classify'` +* `CLASSIFY_OUTPUT_CLASSES = 'classes'` +* `CLASSIFY_OUTPUT_SCORES = 'scores'` +* `DEBUG_DIRECTORY = 'debug'` +* `DEBUG_INFO_FILENAME_PB = 'saved_model_debug_info.pb'` +* `DEFAULT_SERVING_SIGNATURE_DEF_KEY = 'serving_default'` +* `GPU = 'gpu'` +* `PREDICT_INPUTS = 'inputs'` +* `PREDICT_METHOD_NAME = 'tensorflow/serving/predict'` +* `PREDICT_OUTPUTS = 'outputs'` +* `REGRESS_INPUTS = 'inputs'` +* `REGRESS_METHOD_NAME = 'tensorflow/serving/regress'` +* `REGRESS_OUTPUTS = 'outputs'` +* `SAVED_MODEL_FILENAME_PB = 'saved_model.pb'` +* `SAVED_MODEL_FILENAME_PBTXT = 'saved_model.pbtxt'` +* `SAVED_MODEL_SCHEMA_VERSION = 1` +* `SERVING = 'serve'` +* `TPU = 'tpu'` +* `TRAINING = 'train'` +* `VARIABLES_DIRECTORY = 'variables'` +* `VARIABLES_FILENAME = 'variables'` diff --git a/site/en/api_docs/python/tf/saved_model/Asset.md b/site/en/api_docs/python/tf/saved_model/Asset.md new file mode 100644 index 00000000000..876ab58a0f7 --- /dev/null +++ b/site/en/api_docs/python/tf/saved_model/Asset.md @@ -0,0 +1,98 @@ +description: Represents a file asset to hermetically include in a SavedModel. + +
+ + + +
+ +# tf.saved_model.Asset + + + + + + + + + +Represents a file asset to hermetically include in a SavedModel. + + + + + + + + + +A SavedModel can include arbitrary files, called assets, that are needed +for its use. For example a vocabulary file used initialize a lookup table. + +When a trackable object is exported via tf.saved_model.save(), all the +`Asset`s reachable from it are copied into the SavedModel assets directory. +Upon loading, the assets and the serialized functions that depend on them +will refer to the correct filepaths inside the SavedModel directory. + +#### Example: + + + +``` +filename = tf.saved_model.Asset("file.txt") + +@tf.function(input_signature=[]) +def func(): + return tf.io.read_file(filename) + +trackable_obj = tf.train.Checkpoint() +trackable_obj.func = func +trackable_obj.filename = filename +tf.saved_model.save(trackable_obj, "/tmp/saved_model") + +# The created SavedModel is hermetic, it does not depend on +# the original file and can be moved to another path. +tf.io.gfile.remove("file.txt") +tf.io.gfile.rename("/tmp/saved_model", "/tmp/new_location") + +reloaded_obj = tf.saved_model.load("/tmp/new_location") +print(reloaded_obj.func()) +``` + + + + + + + + + + + + +
+`asset_path` + +A 0-D tf.string tensor with path to the asset. +
+ + + diff --git a/site/en/api_docs/python/tf/saved_model/SaveOptions.md b/site/en/api_docs/python/tf/saved_model/SaveOptions.md new file mode 100644 index 00000000000..65cfa016e76 --- /dev/null +++ b/site/en/api_docs/python/tf/saved_model/SaveOptions.md @@ -0,0 +1,120 @@ +description: Options for saving to SavedModel. + +
+ + + + + + +
+ +# tf.saved_model.SaveOptions + + + + + + + + + +Options for saving to SavedModel. + + + + + + + + + +This function may be used in the `options` argument in functions that +save a SavedModel (tf.saved_model.save, tf.keras.models.save_model). + + + + + + + + + + + + + + + + +
+`namespace_whitelist` + +List of strings containing op namespaces to whitelist +when saving a model. Saving an object that uses namespaced ops must +explicitly add all namespaces to the whitelist. The namespaced ops must +be registered into the framework when loading the SavedModel. +
+`save_debug_info` + +Boolean indicating whether debug information is saved. +If True, then a debug/saved_model_debug_info.pb file will be written +with the contents of a GraphDebugInfo binary protocol buffer containing +stack trace information for all ops and functions that are saved. +
+
`function_aliases`

Python dict. Mapping from string to object returned by
@tf.function.
A single tf.function can generate many ConcreteFunctions. If a
downstream tool wants to refer to all concrete functions generated by a
single tf.function you can use the `function_aliases` argument to store
a map from the alias name to all concrete function names.
E.g.
```python
class MyModel(tf.Module):

  @tf.function
  def func(self):
    ...

  @tf.function
  def serve(self):
    ...
    self.func()

model = MyModel()
signatures = {
    'serving_default': model.serve.get_concrete_function(),
}
options = tf.saved_model.SaveOptions(function_aliases={
    'my_func': model.func,
})
tf.saved_model.save(model, export_dir, signatures, options)
```
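
For context, a minimal sketch of passing a `SaveOptions` instance to tf.saved_model.save; the `Doubler` module, its contents and the export path are illustrative only:

```python
import tensorflow as tf

class Doubler(tf.Module):

  @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
  def __call__(self, x):
    return 2.0 * x

module = Doubler()

# Ask for stack-trace debug information to be written alongside the model.
options = tf.saved_model.SaveOptions(save_debug_info=True)
tf.saved_model.save(module, "/tmp/doubler", options=options)
```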
+ + + +## Class Variables + +* `function_aliases` +* `namespace_whitelist` +* `save_debug_info` diff --git a/site/en/api_docs/python/tf/saved_model/contains_saved_model.md b/site/en/api_docs/python/tf/saved_model/contains_saved_model.md new file mode 100644 index 00000000000..109e165af49 --- /dev/null +++ b/site/en/api_docs/python/tf/saved_model/contains_saved_model.md @@ -0,0 +1,69 @@ +description: Checks whether the provided export directory could contain a SavedModel. + +
+ + +
+ +# tf.saved_model.contains_saved_model + + + + + + + + + +Checks whether the provided export directory could contain a SavedModel. + + + + + + + +Note that the method does not load any data by itself. If the method returns +`false`, the export directory definitely does not contain a SavedModel. If the +method returns `true`, the export directory may contain a SavedModel but +provides no guarantee that it can be loaded. + + + + + + + + + + +
+`export_dir` + +Absolute string path to possible export location. For example, +'/my/foo/model'. +
+ + + + + + + + + + + +
+True if the export directory contains SavedModel files, False otherwise. +
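
A small sketch of the intended usage; the export paths are illustrative, and a trivial empty module is saved purely to create a SavedModel on disk:

```python
import tensorflow as tf

export_dir = "/tmp/contains_saved_model_demo"   # illustrative path
tf.saved_model.save(tf.Module(), export_dir)

print(tf.saved_model.contains_saved_model(export_dir))         # True
print(tf.saved_model.contains_saved_model("/tmp/nonexistent"))  # False
```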
+ diff --git a/site/en/api_docs/python/tf/saved_model/load.md b/site/en/api_docs/python/tf/saved_model/load.md new file mode 100644 index 00000000000..b703a75b52b --- /dev/null +++ b/site/en/api_docs/python/tf/saved_model/load.md @@ -0,0 +1,173 @@ +description: Load a SavedModel from export_dir. + +
+ + +
+ +# tf.saved_model.load + + + + + + + + + +Load a SavedModel from `export_dir`. + + + + + + + + + +Signatures associated with the SavedModel are available as functions: + +```python +imported = tf.saved_model.load(path) +f = imported.signatures["serving_default"] +print(f(x=tf.constant([[1.]]))) +``` + +Objects exported with tf.saved_model.save additionally have trackable +objects and functions assigned to attributes: + +```python +exported = tf.train.Checkpoint(v=tf.Variable(3.)) +exported.f = tf.function( + lambda x: exported.v * x, + input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)]) +tf.saved_model.save(exported, path) +imported = tf.saved_model.load(path) +assert 3. == imported.v.numpy() +assert 6. == imported.f(x=tf.constant(2.)).numpy() +``` + +_Loading Keras models_ + +Keras models are trackable, so they can be saved to SavedModel. The object +returned by tf.saved_model.load is not a Keras object (i.e. doesn't have +`.fit`, `.predict`, etc. methods). A few attributes and functions are still +available: `.variables`, `.trainable_variables` and `.__call__`. + +```python +model = tf.keras.Model(...) +tf.saved_model.save(model, path) +imported = tf.saved_model.load(path) +outputs = imported(inputs) +``` + +Use tf.keras.models.load_model to restore the Keras model. + +_Importing SavedModels from TensorFlow 1.x_ + +SavedModels from tf.estimator.Estimator or 1.x SavedModel APIs have a flat +graph instead of tf.function objects. These SavedModels will be loaded with +the following attributes: + +* `.signatures`: A dictionary mapping signature names to functions. +* `.prune(feeds, fetches) `: A method which allows you to extract + functions for new subgraphs. This is equivalent to importing the SavedModel + and naming feeds and fetches in a Session from TensorFlow 1.x. + + ```python + imported = tf.saved_model.load(path_to_v1_saved_model) + pruned = imported.prune("x:0", "out:0") + pruned(tf.ones([])) + ``` + + See tf.compat.v1.wrap_function for details. +* `.variables`: A list of imported variables. +* `.graph`: The whole imported graph. +* `.restore(save_path)`: A function that restores variables from a checkpoint + saved from `tf.compat.v1.Saver`. + +_Consuming SavedModels asynchronously_ + +When consuming SavedModels asynchronously (the producer is a separate +process), the SavedModel directory will appear before all files have been +written, and tf.saved_model.load will fail if pointed at an incomplete +SavedModel. Rather than checking for the directory, check for +"saved_model_dir/saved_model.pb". This file is written atomically as the last +tf.saved_model.save file operation. + + + + + + + + + + + + + +
+`export_dir` + +The SavedModel directory to load from. +
+`tags` + +A tag or sequence of tags identifying the MetaGraph to load. Optional +if the SavedModel contains a single MetaGraph, as for those exported from +tf.saved_model.save. +
+ + + + + + + + + + + +
+
A trackable object with a `signatures` attribute mapping from signature
keys to functions. If the SavedModel was exported by tf.saved_model.save,
it also points to the trackable objects, functions, and debug info with
which it was saved.
+ + + + + + + + + + + + +
+`ValueError` + +If `tags` don't match a MetaGraph in the SavedModel. +
+ diff --git a/site/en/api_docs/python/tf/saved_model/save.md b/site/en/api_docs/python/tf/saved_model/save.md new file mode 100644 index 00000000000..01f2c465e83 --- /dev/null +++ b/site/en/api_docs/python/tf/saved_model/save.md @@ -0,0 +1,267 @@ +description: Exports the Trackable object obj to [SavedModel format](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md). + +
+ + +
+ +# tf.saved_model.save + + + + + + + + + +Exports the Trackable object `obj` to [SavedModel format](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md). + + + + + + + + + + +#### Example usage: + + + +```python +class Adder(tf.Module): + + @tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)]) + def add(self, x): + return x + x + 1. + +to_export = Adder() +tf.saved_model.save(to_export, '/tmp/adder') +``` + +The resulting SavedModel is then servable with an input named "x", its value +having any shape and dtype float32. + +The optional `signatures` argument controls which methods in `obj` will be +available to programs which consume `SavedModel`s, for example, serving +APIs. Python functions may be decorated with +`@tf.function(input_signature=...)` and passed as signatures directly, or +lazily with a call to `get_concrete_function` on the method decorated with +`@tf.function`. + +If the `signatures` argument is omitted, `obj` will be searched for +`@tf.function`-decorated methods. If exactly one `@tf.function` is found, that +method will be used as the default signature for the SavedModel. This behavior +is expected to change in the future, when a corresponding +tf.saved_model.load symbol is added. At that point signatures will be +completely optional, and any `@tf.function` attached to `obj` or its +dependencies will be exported for use with `load`. + +When invoking a signature in an exported SavedModel, `Tensor` arguments are +identified by name. These names will come from the Python function's argument +names by default. They may be overridden by specifying a `name=...` argument +in the corresponding tf.TensorSpec object. Explicit naming is required if +multiple `Tensor`s are passed through a single argument to the Python +function. + +The outputs of functions used as `signatures` must either be flat lists, in +which case outputs will be numbered, or a dictionary mapping string keys to +`Tensor`, in which case the keys will be used to name outputs. + +Signatures are available in objects returned by tf.saved_model.load as a +`.signatures` attribute. This is a reserved attribute: tf.saved_model.save +on an object with a custom `.signatures` attribute will raise an exception. + +Since tf.keras.Model objects are also Trackable, this function can be +used to export Keras models. For example, exporting with a signature +specified: + +```python +class Model(tf.keras.Model): + + @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)]) + def serve(self, serialized): + ... + +m = Model() +tf.saved_model.save(m, '/tmp/saved_model/') +``` + +Exporting from a function without a fixed signature: + +```python +class Model(tf.keras.Model): + + @tf.function + def call(self, x): + ... + +m = Model() +tf.saved_model.save( + m, '/tmp/saved_model/', + signatures=m.call.get_concrete_function( + tf.TensorSpec(shape=[None, 3], dtype=tf.float32, name="inp"))) +``` + +tf.keras.Model instances constructed from inputs and outputs already have a +signature and so do not require a `@tf.function` decorator or a `signatures` +argument. If neither are specified, the model's forward pass is exported. 
+ +```python +x = input_layer.Input((4,), name="x") +y = core.Dense(5, name="out")(x) +model = training.Model(x, y) +tf.saved_model.save(model, '/tmp/saved_model/') +# The exported SavedModel takes "x" with shape [None, 4] and returns "out" +# with shape [None, 5] +``` + +Variables must be tracked by assigning them to an attribute of a tracked +object or to an attribute of `obj` directly. TensorFlow objects (e.g. layers +from tf.keras.layers, optimizers from tf.train) track their variables +automatically. This is the same tracking scheme that tf.train.Checkpoint +uses, and an exported `Checkpoint` object may be restored as a training +checkpoint by pointing tf.train.Checkpoint.restore to the SavedModel's +"variables/" subdirectory. Currently, variables are the only stateful objects +supported by tf.saved_model.save, but others (e.g. tables) will be supported +in the future. + +tf.function does not hard-code device annotations from outside the function +body, instead of using the calling context's device. This means for example +that exporting a model that runs on a GPU and serving it on a CPU will +generally work, with some exceptions. tf.device annotations inside the body +of the function will be hard-coded in the exported model; this type of +annotation is discouraged. Device-specific operations, e.g. with "cuDNN" in +the name or with device-specific layouts, may cause issues. Currently a +`DistributionStrategy` is another exception: active distribution strategies +will cause device placements to be hard-coded in a function. Exporting a +single-device computation and importing under a `DistributionStrategy` is +not currently supported, but may be in the future. + +SavedModels exported with tf.saved_model.save [strip default-valued +attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes) +automatically, which removes one source of incompatibilities when the consumer +of a SavedModel is running an older TensorFlow version than the +producer. There are however other sources of incompatibilities which are not +handled automatically, such as when the exported model contains operations +which the consumer does not have definitions for. + +A single tf.function can generate many ConcreteFunctions. If a downstream tool +wants to refer to all concrete functions generated by a single tf.function you +can use the `function_aliases` argument to store a map from the alias name to +all concrete function names. +E.g. +```python +class MyModel: +@tf.function +def func(): + ... + +@tf.function +def serve(): + ... + func() + +model = MyModel() +signatures = { + 'serving_default': model.serve.get_concrete_function(), +} +options = tf.saved_model.SaveOptions(function_aliases={ + 'my_func': func, +}) +tf.saved_model.save(model, export_dir, signatures, options) +``` + + + + + + + + + + + + + + + + + + + +
+`obj` + +A trackable object to export. +
+`export_dir` + +A directory in which to write the SavedModel. +
+`signatures` + +Optional, either a tf.function with an input signature +specified or the result of `f.get_concrete_function` on a +`@tf.function`-decorated function `f`, in which case `f` will be used to +generate a signature for the SavedModel under the default serving +signature key. `signatures` may also be a dictionary, in which case it +maps from signature keys to either tf.function instances with input +signatures or concrete functions. The keys of such a dictionary may be +arbitrary strings, but will typically be from the +`tf.saved_model.signature_constants` module. +
+`options` + +Optional, tf.saved_model.SaveOptions object that specifies +options for saving. +
+ + + + + + + + + + + + +
+`ValueError` + +If `obj` is not trackable. +
+ + + + +#### Eager Compatibility +Not well supported when graph building. From TensorFlow 1.x, +tf.compat.v1.enable_eager_execution() should run first. Calling +tf.saved_model.save in a loop when graph building from TensorFlow 1.x will +add new save operations to the default graph each iteration. + +May not be called from within a function body. + diff --git a/site/en/api_docs/python/tf/scan.md b/site/en/api_docs/python/tf/scan.md new file mode 100644 index 00000000000..990314e1f65 --- /dev/null +++ b/site/en/api_docs/python/tf/scan.md @@ -0,0 +1,226 @@ +description: scan on the list of tensors unpacked from elems on dimension 0. (deprecated argument values) + +
+ + +
+ +# tf.scan + + + + + + + + + +scan on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values) + + + + + + + +Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(back_prop=False)`. They will be removed in a future version. +Instructions for updating: +back_prop=False is deprecated. Consider using tf.stop_gradient instead. +Instead of: +results = tf.scan(fn, elems, back_prop=False) +Use: +results = tf.nest.map_structure(tf.stop_gradient, tf.scan(fn, elems)) + +The simplest version of `scan` repeatedly applies the callable `fn` to a +sequence of elements from first to last. The elements are made of the tensors +unpacked from `elems` on dimension 0. The callable fn takes two tensors as +arguments. The first argument is the accumulated value computed from the +preceding invocation of fn, and the second is the value at the current +position of `elems`. If `initializer` is None, `elems` must contain at least +one element, and its first element is used as the initializer. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is `[len(values)] + fn(initializer, values[0]).shape`. +If reverse=True, it's fn(initializer, values[-1]).shape. + +This method also allows multi-arity `elems` and accumulator. If `elems` +is a (possibly nested) list or tuple of tensors, then each of these tensors +must have a matching first (unpack) dimension. The second argument of +`fn` must match the structure of `elems`. + +If no `initializer` is provided, the output structure and dtypes of `fn` +are assumed to be the same as its input; and in this case, the first +argument of `fn` must match the structure of `elems`. + +If an `initializer` is provided, then the output of `fn` must have the same +structure as `initializer`; and the first argument of `fn` must match +this structure. + +For example, if `elems` is `(t1, [t2, t3])` and `initializer` is +`[i1, i2]` then an appropriate signature for `fn` in `python2` is: +`fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]):` and `fn` must return a list, +`[acc_n1, acc_n2]`. An alternative correct signature for `fn`, and the + one that works in `python3`, is: +`fn = lambda a, t:`, where `a` and `t` correspond to the input tuples. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`fn` + +The callable to be performed. It accepts two arguments. The first will +have the same structure as `initializer` if one is provided, otherwise it +will have the same structure as `elems`. The second will have the same +(possibly nested) structure as `elems`. Its output must have the same +structure as `initializer` if one is provided, otherwise it must have the +same structure as `elems`. +
+`elems` + +A tensor or (possibly nested) sequence of tensors, each of which will +be unpacked along their first dimension. The nested sequence of the +resulting slices will be the first argument to `fn`. +
+`initializer` + +(optional) A tensor or (possibly nested) sequence of tensors, +initial value for the accumulator, and the expected output type of `fn`. +
+`parallel_iterations` + +(optional) The number of iterations allowed to run in +parallel. +
+`back_prop` + +(optional) Deprecated. False disables support for back +propagation. Prefer using tf.stop_gradient instead. +
+`swap_memory` + +(optional) True enables GPU-CPU memory swapping. +
+`infer_shape` + +(optional) False disables tests for consistent output shapes. +
+`reverse` + +(optional) True scans the tensor last to first (instead of first to +last). +
+`name` + +(optional) Name prefix for the returned tensors. +
+ + + + + + + + + + + +
+A tensor or (possibly nested) sequence of tensors. Each tensor packs the +results of applying `fn` to tensors unpacked from `elems` along the first +dimension, and the previous accumulator value(s), from first to last (or +last to first, if `reverse=True`). +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `fn` is not callable or the structure of the output of +`fn` and `initializer` do not match. +
+`ValueError` + +if the lengths of the output of `fn` and `initializer` +do not match. +
+ + + +#### Examples: + +```python +elems = np.array([1, 2, 3, 4, 5, 6]) +sum = scan(lambda a, x: a + x, elems) +# sum == [1, 3, 6, 10, 15, 21] +sum = scan(lambda a, x: a + x, elems, reverse=True) +# sum == [21, 20, 18, 15, 11, 6] +``` + +```python +elems = np.array([1, 2, 3, 4, 5, 6]) +initializer = np.array(0) +sum_one = scan( + lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer) +# sum_one == [1, 2, 3, 4, 5, 6] +``` + +```python +elems = np.array([1, 0, 0, 0, 0, 0]) +initializer = (np.array(0), np.array(1)) +fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer) +# fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13]) +``` diff --git a/site/en/api_docs/python/tf/scatter_nd.md b/site/en/api_docs/python/tf/scatter_nd.md new file mode 100644 index 00000000000..1d1f2f977e1 --- /dev/null +++ b/site/en/api_docs/python/tf/scatter_nd.md @@ -0,0 +1,173 @@ +description: Scatter updates into a new tensor according to indices. + +
+ + +
+ +# tf.scatter_nd + + + + + + + + + +Scatter `updates` into a new tensor according to `indices`. + + + + + + + + + +Creates a new tensor by applying sparse `updates` to individual values or +slices within a tensor (initially zero for numeric, empty for string) of +the given `shape` according to indices. This operator is the inverse of the +tf.gather_nd operator which extracts values or slices from a given tensor. + +This operation is similar to tensor_scatter_add, except that the tensor is +zero-initialized. Calling tf.scatter_nd(indices, values, shape) is identical +to `tensor_scatter_add(tf.zeros(shape, values.dtype), indices, values)` + +If `indices` contains duplicates, then their updates are accumulated (summed). + +**WARNING**: The order in which updates are applied is nondeterministic, so the +output will be nondeterministic if `indices` contains duplicates -- because +of some numerical approximation issues, numbers summed in different order +may yield different results. + +`indices` is an integer tensor containing indices into a new tensor of shape +`shape`. The last dimension of `indices` can be at most the rank of `shape`: + + indices.shape[-1] <= shape.rank + +The last dimension of `indices` corresponds to indices into elements +(if `indices.shape[-1] = shape.rank`) or slices +(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of +`shape`. `updates` is a tensor with shape + + indices.shape[:-1] + shape[indices.shape[-1]:] + +The simplest form of scatter is to insert individual elements in a tensor by +index. For example, say we want to insert 4 scattered elements in a rank-1 +tensor with 8 elements. + +
+ +
+

In Python, this scatter operation would look like this:

```python
    indices = tf.constant([[4], [3], [1], [7]])
    updates = tf.constant([9, 10, 11, 12])
    shape = tf.constant([8])
    scatter = tf.scatter_nd(indices, updates, shape)
    print(scatter)
```

The resulting tensor would look like this:

    [0, 11, 0, 10, 9, 0, 0, 12]

We can also insert entire slices of a higher-rank tensor all at once. For
example, we can insert two slices into the first dimension of a rank-3
tensor with two matrices of new values.
+ +
+ +In Python, this scatter operation would look like this: + +```python + indices = tf.constant([[0], [2]]) + updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]], + [[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]]]) + shape = tf.constant([4, 4, 4]) + scatter = tf.scatter_nd(indices, updates, shape) + print(scatter) +``` + +The resulting tensor would look like this: + + [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], + [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], + [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], + [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]] + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, the index is ignored. + + + + + + + + + + + + + + + + + + + +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`updates` + +A `Tensor`. Updates to scatter into output. +
+`shape` + +A `Tensor`. Must have the same type as `indices`. +1-D. The shape of the resulting tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `updates`. +
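
A short runnable sketch of the duplicate-index accumulation described above (index and update values are arbitrary):

```python
import tensorflow as tf

indices = tf.constant([[1], [3], [1]])   # index 1 appears twice
updates = tf.constant([10, 20, 30])
shape = tf.constant([5])

# Duplicate indices are summed: position 1 receives 10 + 30.
print(tf.scatter_nd(indices, updates, shape).numpy())
# [ 0 40  0 20  0]
```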
+ diff --git a/site/en/api_docs/python/tf/searchsorted.md b/site/en/api_docs/python/tf/searchsorted.md new file mode 100644 index 00000000000..e85917048bc --- /dev/null +++ b/site/en/api_docs/python/tf/searchsorted.md @@ -0,0 +1,144 @@ +description: Searches input tensor for values on the innermost dimension. + +
+ + +
+ +# tf.searchsorted + + + + + + + + + +Searches input tensor for values on the innermost dimension. + + + + + + + + + +A 2-D example: + +``` + sorted_sequence = [[0, 3, 9, 9, 10], + [1, 2, 3, 4, 5]] + values = [[2, 4, 9], + [0, 2, 6]] + + result = searchsorted(sorted_sequence, values, side="left") + + result == [[1, 2, 2], + [0, 1, 5]] + + result = searchsorted(sorted_sequence, values, side="right") + + result == [[1, 2, 4], + [0, 2, 5]] +``` + + + + + + + + + + + + + + + + + + + + + + +
+`sorted_sequence` + +N-D `Tensor` containing a sorted sequence. +
+`values` + +N-D `Tensor` containing the search values. +
+`side` + +'left' or 'right'; 'left' corresponds to lower_bound and 'right' to +upper_bound. +
+`out_type` + +The output type (`int32` or `int64`). Default is tf.int32. +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+An N-D `Tensor` the size of values containing the result of applying either +lower_bound or upper_bound (depending on side) to each value. The result +is not a global index to the entire `Tensor`, but the index in the last +dimension. +
+ + + + + + + + + + + + +
+
`ValueError`

If the last dimension of `sorted_sequence` has `2^31-1` or more elements.
If the total size of `values` exceeds `2^31 - 1` elements.
If the first `N-1` dimensions of the two tensors don't match.
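
A runnable version of the 2-D example above:

```python
import tensorflow as tf

sorted_sequence = tf.constant([[0, 3, 9, 9, 10],
                               [1, 2, 3, 4, 5]])
values = tf.constant([[2, 4, 9],
                      [0, 2, 6]])

# 'left' gives the first insertion point that keeps each row sorted.
print(tf.searchsorted(sorted_sequence, values, side="left").numpy())
# [[1 2 2]
#  [0 1 5]]
```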
+ diff --git a/site/en/api_docs/python/tf/sequence_mask.md b/site/en/api_docs/python/tf/sequence_mask.md new file mode 100644 index 00000000000..806ce287913 --- /dev/null +++ b/site/en/api_docs/python/tf/sequence_mask.md @@ -0,0 +1,135 @@ +description: Returns a mask tensor representing the first N positions of each cell. + +
+ + +
+ +# tf.sequence_mask + + + + + + + + + +Returns a mask tensor representing the first N positions of each cell. + + + + + + + + + +If `lengths` has shape `[d_1, d_2, ..., d_n]` the resulting tensor `mask` has +dtype `dtype` and shape `[d_1, d_2, ..., d_n, maxlen]`, with + +``` +mask[i_1, i_2, ..., i_n, j] = (j < lengths[i_1, i_2, ..., i_n]) +``` + +#### Examples: + + + +```python +tf.sequence_mask([1, 3, 2], 5) # [[True, False, False, False, False], + # [True, True, True, False, False], + # [True, True, False, False, False]] + +tf.sequence_mask([[1, 3],[2,0]]) # [[[True, False, False], + # [True, True, True]], + # [[True, True, False], + # [False, False, False]]] +``` + + + + + + + + + + + + + + + + + + + +
+`lengths` + +integer tensor, all its values <= maxlen. +
+`maxlen` + +scalar integer tensor, size of last dimension of returned tensor. +Default is the maximum value in `lengths`. +
+`dtype` + +output type of the resulting tensor. +
+`name` + +name of the op. +
+ + + + + + + + + + + +
+A mask tensor of shape `lengths.shape + (maxlen,)`, cast to specified dtype. +
+ + + + + + + + + + + + +
+`ValueError` + +if `maxlen` is not a scalar. +
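
A runnable sketch of the first example above, plus a float mask produced via the `dtype` argument:

```python
import tensorflow as tf

# Boolean mask (the default dtype) padded out to maxlen=5.
print(tf.sequence_mask([1, 3, 2], maxlen=5).numpy())
# [[ True False False False False]
#  [ True  True  True False False]
#  [ True  True False False False]]

# The same mask as floats, e.g. for weighting a loss.
print(tf.sequence_mask([1, 3, 2], maxlen=5, dtype=tf.float32).numpy())
# [[1. 0. 0. 0. 0.]
#  [1. 1. 1. 0. 0.]
#  [1. 1. 0. 0. 0.]]
```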
+ diff --git a/site/en/api_docs/python/tf/sets.md b/site/en/api_docs/python/tf/sets.md new file mode 100644 index 00000000000..f73c435661b --- /dev/null +++ b/site/en/api_docs/python/tf/sets.md @@ -0,0 +1,31 @@ +description: Tensorflow set operations. + +
+ + +
+ +# Module: tf.sets + + + + + + + + + +Tensorflow set operations. + + + +## Functions + +[`difference(...)`](../tf/sets/difference.md): Compute set difference of elements in last dimension of `a` and `b`. + +[`intersection(...)`](../tf/sets/intersection.md): Compute set intersection of elements in last dimension of `a` and `b`. + +[`size(...)`](../tf/sets/size.md): Compute number of unique elements along last dimension of `a`. + +[`union(...)`](../tf/sets/union.md): Compute set union of elements in last dimension of `a` and `b`. + diff --git a/site/en/api_docs/python/tf/sets/difference.md b/site/en/api_docs/python/tf/sets/difference.md new file mode 100644 index 00000000000..fe33bb55c27 --- /dev/null +++ b/site/en/api_docs/python/tf/sets/difference.md @@ -0,0 +1,182 @@ +description: Compute set difference of elements in last dimension of a and b. + +
+ + +
+ +# tf.sets.difference + + + + + + + + + +Compute set difference of elements in last dimension of `a` and `b`. + + + + + + + + + +All but the last dimension of `a` and `b` must match. + +#### Example: + + + +```python + import tensorflow as tf + import collections + + # Represent the following array of sets as a sparse tensor: + # a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]]) + a = collections.OrderedDict([ + ((0, 0, 0), 1), + ((0, 0, 1), 2), + ((0, 1, 0), 3), + ((1, 0, 0), 4), + ((1, 1, 0), 5), + ((1, 1, 1), 6), + ]) + a = tf.SparseTensor(list(a.keys()), list(a.values()), dense_shape=[2, 2, 2]) + + # np.array([[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]]) + b = collections.OrderedDict([ + ((0, 0, 0), 1), + ((0, 0, 1), 3), + ((0, 1, 0), 2), + ((1, 0, 0), 4), + ((1, 0, 1), 5), + ((1, 1, 0), 5), + ((1, 1, 1), 6), + ((1, 1, 2), 7), + ((1, 1, 3), 8), + ]) + b = tf.SparseTensor(list(b.keys()), list(b.values()), dense_shape=[2, 2, 4]) + + # `set_difference` is applied to each aligned pair of sets. + tf.sets.difference(a, b) + + # The result will be equivalent to either of: + # + # np.array([[{2}, {3}], [{}, {}]]) + # + # collections.OrderedDict([ + # ((0, 0, 0), 2), + # ((0, 1, 0), 3), + # ]) +``` + + + + + + + + + + + + + + + + + + + +
+`a` + +`Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices +must be sorted in row-major order. +
+`b` + +`Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices +must be sorted in row-major order. +
+`aminusb` + +Whether to subtract `b` from `a`, vs vice versa. +
+`validate_indices` + +Whether to validate the order and range of sparse indices +in `a` and `b`. +
+ + + + + + + + + + + +
+A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but +the last dimension the same. Elements along the last dimension contain the +differences. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +If inputs are invalid types, or if `a` and `b` have +different types. +
+`ValueError` + +If `a` is sparse and `b` is dense. +
+`errors_impl.InvalidArgumentError` + +If the shapes of `a` and `b` do not +match in any dimension other than the last dimension. +
+ diff --git a/site/en/api_docs/python/tf/sets/intersection.md b/site/en/api_docs/python/tf/sets/intersection.md new file mode 100644 index 00000000000..a3001e5d46f --- /dev/null +++ b/site/en/api_docs/python/tf/sets/intersection.md @@ -0,0 +1,141 @@ +description: Compute set intersection of elements in last dimension of a and b. + +
+ + +
+ +# tf.sets.intersection + + + + + + + + + +Compute set intersection of elements in last dimension of `a` and `b`. + + + + + + + + + +All but the last dimension of `a` and `b` must match. + +#### Example: + + + +```python + import tensorflow as tf + import collections + + # Represent the following array of sets as a sparse tensor: + # a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]]) + a = collections.OrderedDict([ + ((0, 0, 0), 1), + ((0, 0, 1), 2), + ((0, 1, 0), 3), + ((1, 0, 0), 4), + ((1, 1, 0), 5), + ((1, 1, 1), 6), + ]) + a = tf.SparseTensor(list(a.keys()), list(a.values()), dense_shape=[2,2,2]) + + # b = np.array([[{1}, {}], [{4}, {5, 6, 7, 8}]]) + b = collections.OrderedDict([ + ((0, 0, 0), 1), + ((1, 0, 0), 4), + ((1, 1, 0), 5), + ((1, 1, 1), 6), + ((1, 1, 2), 7), + ((1, 1, 3), 8), + ]) + b = tf.SparseTensor(list(b.keys()), list(b.values()), dense_shape=[2, 2, 4]) + + # `tf.sets.intersection` is applied to each aligned pair of sets. + tf.sets.intersection(a, b) + + # The result will be equivalent to either of: + # + # np.array([[{1}, {}], [{4}, {5, 6}]]) + # + # collections.OrderedDict([ + # ((0, 0, 0), 1), + # ((1, 0, 0), 4), + # ((1, 1, 0), 5), + # ((1, 1, 1), 6), + # ]) +``` + + + + + + + + + + + + + + + + +
+`a` + +`Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices +must be sorted in row-major order. +
+`b` + +`Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices +must be sorted in row-major order. +
+`validate_indices` + +Whether to validate the order and range of sparse indices +in `a` and `b`. +
+ + + + + + + + + + + +
+A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but +the last dimension the same. Elements along the last dimension contain the +intersections. +
+ diff --git a/site/en/api_docs/python/tf/sets/size.md b/site/en/api_docs/python/tf/sets/size.md new file mode 100644 index 00000000000..1cb361d385f --- /dev/null +++ b/site/en/api_docs/python/tf/sets/size.md @@ -0,0 +1,102 @@ +description: Compute number of unique elements along last dimension of a. + +
+ + +
+ +# tf.sets.size + + + + + + + + + +Compute number of unique elements along last dimension of `a`. + + + + + + + + + + + + + + + + + + + + + + +
+`a` + +`SparseTensor`, with indices sorted in row-major order. +
+`validate_indices` + +Whether to validate the order and range of sparse indices +in `a`. +
+ + + + + + + + + + + +
+`int32` `Tensor` of set sizes. For `a` ranked `n`, this is a `Tensor` with +rank `n-1`, and the same 1st `n-1` dimensions as `a`. Each value is the +number of unique elements in the corresponding `[0...n-1]` dimension of `a`. +
+ + + + + + + + + + + + +
+
`TypeError`

If `a` is an invalid type.
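
A minimal sketch of computing set sizes, assuming the sparse operand is built with tf.sparse.from_dense and zeros are used only as padding (from_dense drops them):

```python
import tensorflow as tf

# Two rows of sets, zero-padded: {1, 2, 2} -> {1, 2} and {3} -> {3}.
dense = tf.constant([[1, 2, 2, 0],
                     [3, 0, 0, 0]])
a = tf.sparse.from_dense(dense)   # zeros are dropped, giving ragged sets

# Counts of unique elements per set along the last dimension.
print(tf.sets.size(a).numpy())    # [2 1]
```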
+ diff --git a/site/en/api_docs/python/tf/sets/union.md b/site/en/api_docs/python/tf/sets/union.md new file mode 100644 index 00000000000..a954409e063 --- /dev/null +++ b/site/en/api_docs/python/tf/sets/union.md @@ -0,0 +1,150 @@ +description: Compute set union of elements in last dimension of a and b. + +
+ + +
+ +# tf.sets.union + + + + + + + + + +Compute set union of elements in last dimension of `a` and `b`. + + + + + + + + + +All but the last dimension of `a` and `b` must match. + +#### Example: + + + +```python + import tensorflow as tf + import collections + + # [[{1, 2}, {3}], [{4}, {5, 6}]] + a = collections.OrderedDict([ + ((0, 0, 0), 1), + ((0, 0, 1), 2), + ((0, 1, 0), 3), + ((1, 0, 0), 4), + ((1, 1, 0), 5), + ((1, 1, 1), 6), + ]) + a = tf.SparseTensor(list(a.keys()), list(a.values()), dense_shape=[2, 2, 2]) + + # [[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]] + b = collections.OrderedDict([ + ((0, 0, 0), 1), + ((0, 0, 1), 3), + ((0, 1, 0), 2), + ((1, 0, 0), 4), + ((1, 0, 1), 5), + ((1, 1, 0), 5), + ((1, 1, 1), 6), + ((1, 1, 2), 7), + ((1, 1, 3), 8), + ]) + b = tf.SparseTensor(list(b.keys()), list(b.values()), dense_shape=[2, 2, 4]) + + # `set_union` is applied to each aligned pair of sets. + tf.sets.union(a, b) + + # The result will be a equivalent to either of: + # + # np.array([[{1, 2, 3}, {2, 3}], [{4, 5}, {5, 6, 7, 8}]]) + # + # collections.OrderedDict([ + # ((0, 0, 0), 1), + # ((0, 0, 1), 2), + # ((0, 0, 2), 3), + # ((0, 1, 0), 2), + # ((0, 1, 1), 3), + # ((1, 0, 0), 4), + # ((1, 0, 1), 5), + # ((1, 1, 0), 5), + # ((1, 1, 1), 6), + # ((1, 1, 2), 7), + # ((1, 1, 3), 8), + # ]) +``` + + + + + + + + + + + + + + + + +
+`a` + +`Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices +must be sorted in row-major order. +
+`b` + +`Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices +must be sorted in row-major order. +
+`validate_indices` + +Whether to validate the order and range of sparse indices +in `a` and `b`. +
+ + + + + + + + + + + +
+A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but +the last dimension the same. Elements along the last dimension contain the +unions. +
+ diff --git a/site/en/api_docs/python/tf/shape.md b/site/en/api_docs/python/tf/shape.md new file mode 100644 index 00000000000..f14c7f6e2d0 --- /dev/null +++ b/site/en/api_docs/python/tf/shape.md @@ -0,0 +1,115 @@ +description: Returns the shape of a tensor. + +
+ + +
+ +# tf.shape + + + + + + + + + +Returns the shape of a tensor. + + + + + + + +See also tf.size. + +This operation returns a 1-D integer tensor representing the shape of `input`. +This represents the minimal set of known information at definition time. + +#### For example: + + + +``` +>>> t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]) +>>> tf.shape(t) + +>>> tf.shape(t).numpy() +array([2, 2, 3], dtype=int32) +``` + +Note: When using symbolic tensors, such as when using the Keras functional +API, tf.shape() will return the shape of the symbolic tensor. + +``` +>>> a = tf.keras.layers.Input((None, 10)) +>>> tf.shape(a) + +``` + +In these cases, using tf.Tensor.shape will return more informative results. + +``` +>>> a.shape +TensorShape([None, None, 10]) +``` + +tf.shape and Tensor.shape should be identical in eager mode. Within +tf.function or within a compat.v1 context, not all dimensions may be +known until execution time. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` or `SparseTensor`. +
+`out_type` + +(Optional) The specified output type of the operation (`int32` or +`int64`). Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
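
A small runnable sketch of the static versus dynamic distinction discussed above:

```python
import tensorflow as tf

t = tf.zeros([2, 2, 3])
print(tf.shape(t).numpy())   # [2 2 3] -- the dynamic, runtime shape
print(t.shape)               # (2, 2, 3) -- the static shape information

@tf.function(input_signature=[tf.TensorSpec(shape=[None, 3])])
def dynamic_rows(x):
  # The leading dimension is unknown when the function is traced,
  # but tf.shape still yields its runtime value.
  return tf.shape(x)[0]

print(dynamic_rows(tf.ones([4, 3])).numpy())   # 4
```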
+ diff --git a/site/en/api_docs/python/tf/shape_n.md b/site/en/api_docs/python/tf/shape_n.md new file mode 100644 index 00000000000..89c5c956d0d --- /dev/null +++ b/site/en/api_docs/python/tf/shape_n.md @@ -0,0 +1,91 @@ +description: Returns shape of tensors. + +
+ + +
+ +# tf.shape_n + + + + + + + + + +Returns shape of tensors. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A list of at least 1 `Tensor` object with the same type. +
+
`out_type`

The specified output type of the operation (`int32` or `int64`).
Defaults to tf.int32 (optional).
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A list with the same length as `input` of `Tensor` objects with +type `out_type`. +
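
A minimal sketch of querying several shapes in one call (tensor shapes are arbitrary):

```python
import tensorflow as tf

a = tf.zeros([2, 3])
b = tf.zeros([4, 1, 7])

# One call returns the shape of every input tensor.
shapes = tf.shape_n([a, b])
print([s.numpy().tolist() for s in shapes])   # [[2, 3], [4, 1, 7]]
```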
+ diff --git a/site/en/api_docs/python/tf/signal.md b/site/en/api_docs/python/tf/signal.md new file mode 100644 index 00000000000..211b444ff4a --- /dev/null +++ b/site/en/api_docs/python/tf/signal.md @@ -0,0 +1,92 @@ +description: Signal processing operations. + +
+ + +
+ +# Module: tf.signal + + + + + + + + + +Signal processing operations. + + +See the [tf.signal](https://tensorflow.org/api_guides/python/contrib.signal) +guide. + + +[hamming]: https://en.wikipedia.org/wiki/Window_function#Hamming_window +[hann]: https://en.wikipedia.org/wiki/Window_function#Hann_window +[mel]: https://en.wikipedia.org/wiki/Mel_scale +[mfcc]: https://en.wikipedia.org/wiki/Mel-frequency_cepstrum +[stft]: https://en.wikipedia.org/wiki/Short-time_Fourier_transform + +## Functions + +[`dct(...)`](../tf/signal/dct.md): Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`. + +[`fft(...)`](../tf/signal/fft.md): Fast Fourier transform. + +[`fft2d(...)`](../tf/signal/fft2d.md): 2D fast Fourier transform. + +[`fft3d(...)`](../tf/signal/fft3d.md): 3D fast Fourier transform. + +[`fftshift(...)`](../tf/signal/fftshift.md): Shift the zero-frequency component to the center of the spectrum. + +[`frame(...)`](../tf/signal/frame.md): Expands `signal`'s `axis` dimension into frames of `frame_length`. + +[`hamming_window(...)`](../tf/signal/hamming_window.md): Generate a [Hamming][hamming] window. + +[`hann_window(...)`](../tf/signal/hann_window.md): Generate a [Hann window][hann]. + +[`idct(...)`](../tf/signal/idct.md): Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of `input`. + +[`ifft(...)`](../tf/signal/ifft.md): Inverse fast Fourier transform. + +[`ifft2d(...)`](../tf/signal/ifft2d.md): Inverse 2D fast Fourier transform. + +[`ifft3d(...)`](../tf/signal/ifft3d.md): Inverse 3D fast Fourier transform. + +[`ifftshift(...)`](../tf/signal/ifftshift.md): The inverse of fftshift. + +[`inverse_mdct(...)`](../tf/signal/inverse_mdct.md): Computes the inverse modified DCT of `mdcts`. + +[`inverse_stft(...)`](../tf/signal/inverse_stft.md): Computes the inverse [Short-time Fourier Transform][stft] of `stfts`. + +[`inverse_stft_window_fn(...)`](../tf/signal/inverse_stft_window_fn.md): Generates a window function that can be used in `inverse_stft`. + +[`irfft(...)`](../tf/signal/irfft.md): Inverse real-valued fast Fourier transform. + +[`irfft2d(...)`](../tf/signal/irfft2d.md): Inverse 2D real-valued fast Fourier transform. + +[`irfft3d(...)`](../tf/signal/irfft3d.md): Inverse 3D real-valued fast Fourier transform. + +[`kaiser_bessel_derived_window(...)`](../tf/signal/kaiser_bessel_derived_window.md): Generate a [Kaiser Bessel derived window][kbd]. + +[`kaiser_window(...)`](../tf/signal/kaiser_window.md): Generate a [Kaiser window][kaiser]. + +[`linear_to_mel_weight_matrix(...)`](../tf/signal/linear_to_mel_weight_matrix.md): Returns a matrix to warp linear scale spectrograms to the [mel scale][mel]. + +[`mdct(...)`](../tf/signal/mdct.md): Computes the [Modified Discrete Cosine Transform][mdct] of `signals`. + +[`mfccs_from_log_mel_spectrograms(...)`](../tf/signal/mfccs_from_log_mel_spectrograms.md): Computes [MFCCs][mfcc] of `log_mel_spectrograms`. + +[`overlap_and_add(...)`](../tf/signal/overlap_and_add.md): Reconstructs a signal from a framed representation. + +[`rfft(...)`](../tf/signal/rfft.md): Real-valued fast Fourier transform. + +[`rfft2d(...)`](../tf/signal/rfft2d.md): 2D real-valued fast Fourier transform. + +[`rfft3d(...)`](../tf/signal/rfft3d.md): 3D real-valued fast Fourier transform. + +[`stft(...)`](../tf/signal/stft.md): Computes the [Short-time Fourier Transform][stft] of `signals`. + +[`vorbis_window(...)`](../tf/signal/vorbis_window.md): Generate a [Vorbis power complementary window][vorbis]. 
+ diff --git a/site/en/api_docs/python/tf/signal/dct.md b/site/en/api_docs/python/tf/signal/dct.md new file mode 100644 index 00000000000..f0a79f50d15 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/dct.md @@ -0,0 +1,161 @@ +description: Computes the 1D [Discrete Cosine Transform (DCT)][dct] of input. + +
+ + +
+ +# tf.signal.dct + + + + + + + + + +Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`. + + + + + + + + + +Types I, II, III and IV are supported. +Type I is implemented using a length `2N` padded tf.signal.rfft. +Type II is implemented using a length `2N` padded tf.signal.rfft, as + described here: [Type 2 DCT using 2N FFT padded (Makhoul)] + (https://dsp.stackexchange.com/a/10606). +Type III is a fairly straightforward inverse of Type II + (i.e. using a length `2N` padded tf.signal.irfft). + Type IV is calculated through 2N length DCT2 of padded signal and +picking the odd indices. + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `[..., samples]` `float32`/`float64` `Tensor` containing the +signals to take the DCT of. +
+`type` + +The DCT type to perform. Must be 1, 2, 3 or 4. +
+`n` + +The length of the transform. If length is less than sequence length, +only the first n elements of the sequence are considered for the DCT. +If n is greater than the sequence length, zeros are padded and then +the DCT is computed as usual. +
+`axis` + +For future expansion. The axis to compute the DCT along. Must be `-1`. +
+`norm` + +The normalization to apply. `None` for no normalization or `'ortho'` +for orthonormal normalization. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `[..., samples]` `float32`/`float64` `Tensor` containing the DCT of +`input`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `type` is not `1`, `2`, `3` or `4`, `axis` is +not `-1`, `n` is not `None` or greater than 0, +or `norm` is not `None` or `'ortho'`. +
+`ValueError` + +If `type` is `1` and `norm` is `ortho`. +
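
A minimal runnable sketch of a Type-II DCT with orthonormal normalization (input values are arbitrary):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0, 4.0])

# Type-II DCT (the default) with orthonormal normalization.
y = tf.signal.dct(x, type=2, norm='ortho')

# The orthonormal transform preserves energy (up to float rounding).
print(tf.reduce_sum(x ** 2).numpy())   # 30.0
print(tf.reduce_sum(y ** 2).numpy())   # ~30.0
```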
+ + +[dct]: https://en.wikipedia.org/wiki/Discrete_cosine_transform + +#### Scipy Compatibility +Equivalent to [scipy.fftpack.dct] + (https://docs.scipy.org/doc/scipy-1.4.0/reference/generated/scipy.fftpack.dct.html) + for Type-I, Type-II, Type-III and Type-IV DCT. + diff --git a/site/en/api_docs/python/tf/signal/fft.md b/site/en/api_docs/python/tf/signal/fft.md new file mode 100644 index 00000000000..dc3386b5634 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/fft.md @@ -0,0 +1,80 @@ +description: Fast Fourier transform. + +
+ + +
+ +# tf.signal.fft + + + + + + + + + +Fast Fourier transform. + + + + + + + + + +Computes the 1-dimensional discrete Fourier transform over the inner-most +dimension of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
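
A minimal runnable sketch; the unit impulse is chosen because its spectrum is easy to verify by hand:

```python
import tensorflow as tf

# The FFT of a unit impulse is a flat spectrum of ones.
impulse = tf.constant([1.0, 0.0, 0.0, 0.0])
spectrum = tf.signal.fft(tf.cast(impulse, tf.complex64))
print(spectrum.numpy())   # [1.+0.j 1.+0.j 1.+0.j 1.+0.j]
```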
+ diff --git a/site/en/api_docs/python/tf/signal/fft2d.md b/site/en/api_docs/python/tf/signal/fft2d.md new file mode 100644 index 00000000000..0d54603bfea --- /dev/null +++ b/site/en/api_docs/python/tf/signal/fft2d.md @@ -0,0 +1,80 @@ +description: 2D fast Fourier transform. + +
+ + +
+ +# tf.signal.fft2d + + + + + + + + + +2D fast Fourier transform. + + + + + + + + + +Computes the 2-dimensional discrete Fourier transform over the inner-most +2 dimensions of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/signal/fft3d.md b/site/en/api_docs/python/tf/signal/fft3d.md new file mode 100644 index 00000000000..012b8e039ee --- /dev/null +++ b/site/en/api_docs/python/tf/signal/fft3d.md @@ -0,0 +1,80 @@ +description: 3D fast Fourier transform. + +
+ + +
+ +# tf.signal.fft3d + + + + + + + + + +3D fast Fourier transform. + + + + + + + + + +Computes the 3-dimensional discrete Fourier transform over the inner-most 3 +dimensions of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/signal/fftshift.md b/site/en/api_docs/python/tf/signal/fftshift.md new file mode 100644 index 00000000000..01e571ef8ef --- /dev/null +++ b/site/en/api_docs/python/tf/signal/fftshift.md @@ -0,0 +1,109 @@ +description: Shift the zero-frequency component to the center of the spectrum. + +
+ + +
+ +# tf.signal.fftshift + + + + + + + + + +Shift the zero-frequency component to the center of the spectrum. + + + + + + + + + +This function swaps half-spaces for all axes listed (defaults to all). +Note that ``y[0]`` is the Nyquist component only if ``len(x)`` is even. + + + +#### For example: + + + +```python +x = tf.signal.fftshift([ 0., 1., 2., 3., 4., -5., -4., -3., -2., -1.]) +x.numpy() # array([-5., -4., -3., -2., -1., 0., 1., 2., 3., 4.]) +``` + + + + + + + + + + + + + + + + +
+`x` + +`Tensor`, input tensor. +
+
`axes`

`int` or shape `tuple`, optional. Axes over which to shift. Default is
None, which shifts all axes.
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor`, The shifted tensor. +
+ + + +#### Numpy Compatibility +Equivalent to numpy.fft.fftshift. +https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftshift.html + diff --git a/site/en/api_docs/python/tf/signal/frame.md b/site/en/api_docs/python/tf/signal/frame.md new file mode 100644 index 00000000000..d378d47b475 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/frame.md @@ -0,0 +1,166 @@ +description: Expands signal's axis dimension into frames of frame_length. + +
+ + +
+ +# tf.signal.frame + + + + + + + + + +Expands `signal`'s `axis` dimension into frames of `frame_length`. + + + + + + + + + +Slides a window of size `frame_length` over `signal`'s `axis` dimension +with a stride of `frame_step`, replacing the `axis` dimension with +`[frames, frame_length]` frames. + +If `pad_end` is True, window positions that are past the end of the `axis` +dimension are padded with `pad_value` until the window moves fully past the +end of the dimension. Otherwise, only window positions that fully overlap the +`axis` dimension are produced. + +#### For example: + + + +```python +# A batch size 3 tensor of 9152 audio samples. +audio = tf.random.normal([3, 9152]) + +# Compute overlapping frames of length 512 with a step of 180 (frames overlap +# by 332 samples). By default, only 50 frames are generated since the last +# 152 samples do not form a full frame. +frames = tf.signal.frame(audio, 512, 180) +frames.shape.assert_is_compatible_with([3, 50, 512]) + +# When pad_end is enabled, the final frame is kept (padded with zeros). +frames = tf.signal.frame(audio, 512, 180, pad_end=True) +frames.shape.assert_is_compatible_with([3, 51, 512]) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`signal` + +A `[..., samples, ...]` `Tensor`. The rank and dimensions +may be unknown. Rank must be at least 1. +
+`frame_length` + +The frame length in samples. An integer or scalar `Tensor`. +
+`frame_step` + +The frame hop size in samples. An integer or scalar `Tensor`. +
+`pad_end` + +Whether to pad the end of `signal` with `pad_value`. +
+`pad_value` + +An optional scalar `Tensor` to use where the input signal +does not exist when `pad_end` is True. +
+`axis` + +A scalar integer `Tensor` indicating the axis to frame. Defaults to +the last axis. Supports negative values for indexing from the end. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of frames with shape `[..., frames, frame_length, ...]`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `frame_length`, `frame_step`, `pad_value`, or `axis` are not +scalar. +
+ diff --git a/site/en/api_docs/python/tf/signal/hamming_window.md b/site/en/api_docs/python/tf/signal/hamming_window.md new file mode 100644 index 00000000000..d72e120e26f --- /dev/null +++ b/site/en/api_docs/python/tf/signal/hamming_window.md @@ -0,0 +1,119 @@ +description: Generate a [Hamming][hamming] window. + +
+ + +
+ +# tf.signal.hamming_window + + + + + + + + + +Generate a [Hamming][hamming] window. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`window_length` + +A scalar `Tensor` indicating the window length to generate. +
+`periodic` + +A bool `Tensor` indicating whether to generate a periodic or +symmetric window. Periodic windows are typically used for spectral +analysis while symmetric windows are typically used for digital +filter design. +
+`dtype` + +The data type to produce. Must be a floating point type. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of shape `[window_length]` of type `dtype`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `dtype` is not a floating point type. +
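
A minimal sketch of generating a window and applying it to a batch of frames (the frame values are arbitrary):

```python
import tensorflow as tf

# A periodic 8-sample Hamming window, as typically used for spectral analysis.
window = tf.signal.hamming_window(8, periodic=True)

# Apply it to a small batch of audio frames.
frames = tf.random.normal([3, 8])
windowed = frames * window   # broadcasts the window over the batch
print(windowed.shape)        # (3, 8)
```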
+ + +[hamming]: + https://en.wikipedia.org/wiki/Window_function#Hann_and_Hamming_windows \ No newline at end of file diff --git a/site/en/api_docs/python/tf/signal/hann_window.md b/site/en/api_docs/python/tf/signal/hann_window.md new file mode 100644 index 00000000000..f1332ac7d1c --- /dev/null +++ b/site/en/api_docs/python/tf/signal/hann_window.md @@ -0,0 +1,118 @@ +description: Generate a [Hann window][hann]. + +
+ + +
+ +# tf.signal.hann_window + + + + + + + + + +Generate a [Hann window][hann]. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`window_length` + +A scalar `Tensor` indicating the window length to generate. +
+`periodic` + +A bool `Tensor` indicating whether to generate a periodic or +symmetric window. Periodic windows are typically used for spectral +analysis while symmetric windows are typically used for digital +filter design. +
+`dtype` + +The data type to produce. Must be a floating point type. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of shape `[window_length]` of type `dtype`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `dtype` is not a floating point type. +
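+
+A minimal sketch (illustrative, not from the original page) showing the two
+variants; the periodic form is also the default `window_fn` of
+tf.signal.stft:
+
+```python
+import tensorflow as tf
+
+# Periodic Hann window, the usual choice for spectral analysis.
+w_periodic = tf.signal.hann_window(400, periodic=True)
+
+# Symmetric Hann window, typically used for FIR filter design.
+w_symmetric = tf.signal.hann_window(400, periodic=False)
+
+print(w_periodic.shape, w_symmetric.shape)  # (400,) (400,)
+```
+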
+ + +[hann]: https://en.wikipedia.org/wiki/Window_function#Hann_and_Hamming_windows \ No newline at end of file diff --git a/site/en/api_docs/python/tf/signal/idct.md b/site/en/api_docs/python/tf/signal/idct.md new file mode 100644 index 00000000000..c374161e8e3 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/idct.md @@ -0,0 +1,150 @@ +description: Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of input. + +
+ + +
+ +# tf.signal.idct + + + + + + + + + +Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of `input`. + + + + + + + + + +Currently Types I, II, III, IV are supported. Type III is the inverse of +Type II, and vice versa. + +Note that you must re-normalize by 1/(2n) to obtain an inverse if `norm` is +not `'ortho'`. That is: +`signal == idct(dct(signal)) * 0.5 / signal.shape[-1]`. +When `norm='ortho'`, we have: +`signal == idct(dct(signal, norm='ortho'), norm='ortho')`. + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `[..., samples]` `float32`/`float64` `Tensor` containing the +signals to take the DCT of. +
+`type` + +The IDCT type to perform. Must be 1, 2, 3 or 4. +
+`n` + +For future expansion. The length of the transform. Must be `None`. +
+`axis` + +For future expansion. The axis to compute the DCT along. Must be `-1`. +
+`norm` + +The normalization to apply. `None` for no normalization or `'ortho'` +for orthonormal normalization. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `[..., samples]` `float32`/`float64` `Tensor` containing the IDCT of +`input`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+If `type` is not `1`, `2`, `3` or `4`, `n` is not `None`, `axis` is
+not `-1`, or `norm` is not `None` or `'ortho'`.
+
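+
+A small round-trip sketch of the normalization rules described above
+(illustrative values, not part of the original reference):
+
+```python
+import tensorflow as tf
+
+signal = tf.random.normal([8, 128])
+
+# With norm='ortho', the DCT and its inverse round-trip without rescaling.
+ortho = tf.signal.idct(tf.signal.dct(signal, norm='ortho'), norm='ortho')
+
+# Without normalization, rescale by 0.5 / N as noted above (here N = 128).
+unnormed = tf.signal.idct(tf.signal.dct(signal)) * 0.5 / 128
+
+print(tf.reduce_max(tf.abs(signal - ortho)).numpy())     # close to 0
+print(tf.reduce_max(tf.abs(signal - unnormed)).numpy())  # close to 0
+```
+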
+ + +[idct]: +https://en.wikipedia.org/wiki/Discrete_cosine_transform#Inverse_transforms + +#### Scipy Compatibility +Equivalent to [scipy.fftpack.idct] + (https://docs.scipy.org/doc/scipy-1.4.0/reference/generated/scipy.fftpack.idct.html) + for Type-I, Type-II, Type-III and Type-IV DCT. + diff --git a/site/en/api_docs/python/tf/signal/ifft.md b/site/en/api_docs/python/tf/signal/ifft.md new file mode 100644 index 00000000000..73467240952 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/ifft.md @@ -0,0 +1,80 @@ +description: Inverse fast Fourier transform. + +
+ + +
+ +# tf.signal.ifft + + + + + + + + + +Inverse fast Fourier transform. + + + + + + + + + +Computes the inverse 1-dimensional discrete Fourier transform over the +inner-most dimension of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
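+
+A brief sketch (illustrative shapes) showing that `ifft` inverts
+tf.signal.fft along the inner-most dimension of a complex tensor:
+
+```python
+import tensorflow as tf
+
+x = tf.complex(tf.random.normal([2, 16]), tf.random.normal([2, 16]))
+spectrum = tf.signal.fft(x)
+recovered = tf.signal.ifft(spectrum)
+
+print(tf.reduce_max(tf.abs(x - recovered)).numpy())  # small numerical error
+```
+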
+ diff --git a/site/en/api_docs/python/tf/signal/ifft2d.md b/site/en/api_docs/python/tf/signal/ifft2d.md new file mode 100644 index 00000000000..75f93bd9126 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/ifft2d.md @@ -0,0 +1,80 @@ +description: Inverse 2D fast Fourier transform. + +
+ + +
+ +# tf.signal.ifft2d + + + + + + + + + +Inverse 2D fast Fourier transform. + + + + + + + + + +Computes the inverse 2-dimensional discrete Fourier transform over the +inner-most 2 dimensions of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/signal/ifft3d.md b/site/en/api_docs/python/tf/signal/ifft3d.md new file mode 100644 index 00000000000..654f9898de5 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/ifft3d.md @@ -0,0 +1,80 @@ +description: Inverse 3D fast Fourier transform. + +
+ + +
+ +# tf.signal.ifft3d + + + + + + + + + +Inverse 3D fast Fourier transform. + + + + + + + + + +Computes the inverse 3-dimensional discrete Fourier transform over the +inner-most 3 dimensions of `input`. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/signal/ifftshift.md b/site/en/api_docs/python/tf/signal/ifftshift.md new file mode 100644 index 00000000000..20fde4c15b8 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/ifftshift.md @@ -0,0 +1,109 @@ +description: The inverse of fftshift. + +
+ + +
+ +# tf.signal.ifftshift + + + + + + + + + +The inverse of fftshift. + + + + + + + + + +Although identical for even-length x, +the functions differ by one sample for odd-length x. + + + +#### For example: + + + +```python +x = tf.signal.ifftshift([[ 0., 1., 2.],[ 3., 4., -4.],[-3., -2., -1.]]) +x.numpy() # array([[ 4., -4., 3.],[-2., -1., -3.],[ 1., 2., 0.]]) +``` + + + + + + + + + + + + + + + + +
+`x` + +`Tensor`, input tensor. +
+`axes` + +`int` or shape `tuple` Axes over which to calculate. Defaults to None, +which shifts all axes. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor`, The shifted tensor. +
+ + + +#### Numpy Compatibility +Equivalent to numpy.fft.ifftshift. +https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.ifftshift.html + diff --git a/site/en/api_docs/python/tf/signal/inverse_mdct.md b/site/en/api_docs/python/tf/signal/inverse_mdct.md new file mode 100644 index 00000000000..a63b9dbd803 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/inverse_mdct.md @@ -0,0 +1,155 @@ +description: Computes the inverse modified DCT of mdcts. + +
+ + +
+ +# tf.signal.inverse_mdct + + + + + + + + + +Computes the inverse modified DCT of `mdcts`. + + + + + + + + + +To reconstruct an original waveform, the same window function should +be used with `mdct` and `inverse_mdct`. + +#### Example usage: + + + +``` +>>> @tf.function +... def compare_round_trip(): +... samples = 1000 +... frame_length = 400 +... halflen = frame_length // 2 +... waveform = tf.random.normal(dtype=tf.float32, shape=[samples]) +... waveform_pad = tf.pad(waveform, [[halflen, 0],]) +... mdct = tf.signal.mdct(waveform_pad, frame_length, pad_end=True, +... window_fn=tf.signal.vorbis_window) +... inverse_mdct = tf.signal.inverse_mdct(mdct, +... window_fn=tf.signal.vorbis_window) +... inverse_mdct = inverse_mdct[halflen: halflen + samples] +... return waveform, inverse_mdct +>>> waveform, inverse_mdct = compare_round_trip() +>>> np.allclose(waveform.numpy(), inverse_mdct.numpy(), rtol=1e-3, atol=1e-4) +True +``` + +Implemented with TPU/GPU-compatible ops and supports gradients. + + + + + + + + + + + + + + + + + + + +
+`mdcts` + +A `float32`/`float64` `[..., frames, frame_length // 2]` +`Tensor` of MDCT bins representing a batch of `frame_length // 2`-point +MDCTs. +
+`window_fn` + +A callable that takes a frame_length and a `dtype` keyword +argument and returns a `[frame_length]` `Tensor` of samples in the +provided datatype. If set to `None`, a rectangular window with a scale of +1/sqrt(2) is used. For perfect reconstruction of a signal from `mdct` +followed by `inverse_mdct`, please use tf.signal.vorbis_window, +tf.signal.kaiser_bessel_derived_window or `None`. If using another +window function, make sure that w[n]^2 + w[n + frame_length // 2]^2 = 1 +and w[n] = w[frame_length - n - 1] for n = 0,...,frame_length // 2 - 1 to +achieve perfect reconstruction. +
+`norm`
+
+If "ortho", an orthonormal inverse DCT-4 is performed; if it is `None`,
+a regular DCT-4 followed by scaling by `1/frame_length` is performed.
+
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `[..., samples]` `Tensor` of `float32`/`float64` signals representing +the inverse MDCT for each input MDCT in `mdcts` where `samples` is +`(frames - 1) * (frame_length // 2) + frame_length`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `mdcts` is not at least rank 2. +
+ + +[mdct]: https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform \ No newline at end of file diff --git a/site/en/api_docs/python/tf/signal/inverse_stft.md b/site/en/api_docs/python/tf/signal/inverse_stft.md new file mode 100644 index 00000000000..387367eca59 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/inverse_stft.md @@ -0,0 +1,170 @@ +description: Computes the inverse [Short-time Fourier Transform][stft] of stfts. + +
+ + +
+ +# tf.signal.inverse_stft + + + + + + + + + +Computes the inverse [Short-time Fourier Transform][stft] of `stfts`. + + + + + + + + + +To reconstruct an original waveform, a complementary window function should +be used with `inverse_stft`. Such a window function can be constructed with +tf.signal.inverse_stft_window_fn. +Example: + +```python +frame_length = 400 +frame_step = 160 +waveform = tf.random.normal(dtype=tf.float32, shape=[1000]) +stft = tf.signal.stft(waveform, frame_length, frame_step) +inverse_stft = tf.signal.inverse_stft( + stft, frame_length, frame_step, + window_fn=tf.signal.inverse_stft_window_fn(frame_step)) +``` + +If a custom `window_fn` is used with tf.signal.stft, it must be passed to +tf.signal.inverse_stft_window_fn: + +```python +frame_length = 400 +frame_step = 160 +window_fn = tf.signal.hamming_window +waveform = tf.random.normal(dtype=tf.float32, shape=[1000]) +stft = tf.signal.stft( + waveform, frame_length, frame_step, window_fn=window_fn) +inverse_stft = tf.signal.inverse_stft( + stft, frame_length, frame_step, + window_fn=tf.signal.inverse_stft_window_fn( + frame_step, forward_window_fn=window_fn)) +``` + +Implemented with TPU/GPU-compatible ops and supports gradients. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`stfts` + +A `complex64`/`complex128` `[..., frames, fft_unique_bins]` +`Tensor` of STFT bins representing a batch of `fft_length`-point STFTs +where `fft_unique_bins` is `fft_length // 2 + 1` +
+`frame_length` + +An integer scalar `Tensor`. The window length in samples. +
+`frame_step` + +An integer scalar `Tensor`. The number of samples to step. +
+`fft_length` + +An integer scalar `Tensor`. The size of the FFT that produced +`stfts`. If not provided, uses the smallest power of 2 enclosing +`frame_length`. +
+`window_fn` + +A callable that takes a window length and a `dtype` keyword +argument and returns a `[window_length]` `Tensor` of samples in the +provided datatype. If set to `None`, no windowing is used. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `[..., samples]` `Tensor` of `float32`/`float64` signals representing +the inverse STFT for each input STFT in `stfts`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `stfts` is not at least rank 2, `frame_length` is not scalar, +`frame_step` is not scalar, or `fft_length` is not scalar. +
+ + +[stft]: https://en.wikipedia.org/wiki/Short-time_Fourier_transform \ No newline at end of file diff --git a/site/en/api_docs/python/tf/signal/inverse_stft_window_fn.md b/site/en/api_docs/python/tf/signal/inverse_stft_window_fn.md new file mode 100644 index 00000000000..9b50b1fbc29 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/inverse_stft_window_fn.md @@ -0,0 +1,97 @@ +description: Generates a window function that can be used in inverse_stft. + +
+ + +
+ +# tf.signal.inverse_stft_window_fn + + + + + + + + + +Generates a window function that can be used in `inverse_stft`. + + + + + + + + + +Constructs a window that is equal to the forward window with a further +pointwise amplitude correction. `inverse_stft_window_fn` is equivalent to +`forward_window_fn` in the case where it would produce an exact inverse. + +See examples in `inverse_stft` documentation for usage. + + + + + + + + + + + + + + + + +
+`frame_step` + +An integer scalar `Tensor`. The number of samples to step. +
+`forward_window_fn` + +window_fn used in the forward transform, `stft`. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A callable that takes a window length and a `dtype` keyword argument and +returns a `[window_length]` `Tensor` of samples in the provided datatype. +The returned window is suitable for reconstructing original waveform in +inverse_stft. +
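+
+A minimal sketch (values are illustrative) of building the synthesis window
+for a Hann-windowed forward STFT; the returned callable has the same
+`(window_length, dtype)` interface as the other tf.signal window functions:
+
+```python
+import tensorflow as tf
+
+frame_length, frame_step = 400, 160
+
+synthesis_window_fn = tf.signal.inverse_stft_window_fn(
+    frame_step, forward_window_fn=tf.signal.hann_window)
+
+# Inspect the corrected window directly, or pass the callable to inverse_stft.
+window = synthesis_window_fn(frame_length, dtype=tf.float32)
+print(window.shape)  # (400,)
+```
+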
+ diff --git a/site/en/api_docs/python/tf/signal/irfft.md b/site/en/api_docs/python/tf/signal/irfft.md new file mode 100644 index 00000000000..4939c877011 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/irfft.md @@ -0,0 +1,111 @@ +description: Inverse real-valued fast Fourier transform. + +
+ + +
+ +# tf.signal.irfft + + + + + + + + + +Inverse real-valued fast Fourier transform. + + + + + + + + + +Computes the inverse 1-dimensional discrete Fourier transform of a real-valued +signal over the inner-most dimension of `input`. + +The inner-most dimension of `input` is assumed to be the result of `RFFT`: the +`fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If +`fft_length` is not provided, it is computed from the size of the inner-most +dimension of `input` (`fft_length = 2 * (inner - 1)`). If the FFT length used to +compute `input` is odd, it should be provided since it cannot be inferred +properly. + +Along the axis `IRFFT` is computed on, if `fft_length / 2 + 1` is smaller +than the corresponding dimension of `input`, the dimension is cropped. If it is +larger, the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [1]. The FFT length. +
+`Treal` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Treal`. +
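+
+A short sketch (illustrative lengths) of the round trip with `rfft`, including
+the odd-length case where `fft_length` must be passed explicitly:
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([2, 32])
+spectrum = tf.signal.rfft(x)           # shape [2, 17] == fft_length // 2 + 1
+recovered = tf.signal.irfft(spectrum)  # fft_length inferred as 2 * (17 - 1)
+
+# For odd lengths the FFT length cannot be inferred, so provide it.
+x_odd = tf.random.normal([2, 31])
+recovered_odd = tf.signal.irfft(tf.signal.rfft(x_odd), fft_length=[31])
+
+print(recovered.shape, recovered_odd.shape)  # (2, 32) (2, 31)
+```
+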
+ diff --git a/site/en/api_docs/python/tf/signal/irfft2d.md b/site/en/api_docs/python/tf/signal/irfft2d.md new file mode 100644 index 00000000000..d0e48be1598 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/irfft2d.md @@ -0,0 +1,112 @@ +description: Inverse 2D real-valued fast Fourier transform. + +
+ + +
+ +# tf.signal.irfft2d + + + + + + + + + +Inverse 2D real-valued fast Fourier transform. + + + + + + + + + +Computes the inverse 2-dimensional discrete Fourier transform of a real-valued +signal over the inner-most 2 dimensions of `input`. + +The inner-most 2 dimensions of `input` are assumed to be the result of `RFFT2D`: +The inner-most dimension contains the `fft_length / 2 + 1` unique components of +the DFT of a real-valued signal. If `fft_length` is not provided, it is computed +from the size of the inner-most 2 dimensions of `input`. If the FFT length used +to compute `input` is odd, it should be provided since it cannot be inferred +properly. + +Along each axis `IRFFT2D` is computed on, if `fft_length` (or +`fft_length / 2 + 1` for the inner-most dimension) is smaller than the +corresponding dimension of `input`, the dimension is cropped. If it is larger, +the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [2]. The FFT length for each dimension. +
+`Treal` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Treal`. +
+ diff --git a/site/en/api_docs/python/tf/signal/irfft3d.md b/site/en/api_docs/python/tf/signal/irfft3d.md new file mode 100644 index 00000000000..f395b028150 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/irfft3d.md @@ -0,0 +1,112 @@ +description: Inverse 3D real-valued fast Fourier transform. + +
+ + +
+ +# tf.signal.irfft3d + + + + + + + + + +Inverse 3D real-valued fast Fourier transform. + + + + + + + + + +Computes the inverse 3-dimensional discrete Fourier transform of a real-valued +signal over the inner-most 3 dimensions of `input`. + +The inner-most 3 dimensions of `input` are assumed to be the result of `RFFT3D`: +The inner-most dimension contains the `fft_length / 2 + 1` unique components of +the DFT of a real-valued signal. If `fft_length` is not provided, it is computed +from the size of the inner-most 3 dimensions of `input`. If the FFT length used +to compute `input` is odd, it should be provided since it cannot be inferred +properly. + +Along each axis `IRFFT3D` is computed on, if `fft_length` (or +`fft_length / 2 + 1` for the inner-most dimension) is smaller than the +corresponding dimension of `input`, the dimension is cropped. If it is larger, +the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +A complex tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [3]. The FFT length for each dimension. +
+`Treal` + +An optional tf.DType from: `tf.float32, tf.float64`. Defaults to tf.float32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Treal`. +
+ diff --git a/site/en/api_docs/python/tf/signal/kaiser_bessel_derived_window.md b/site/en/api_docs/python/tf/signal/kaiser_bessel_derived_window.md new file mode 100644 index 00000000000..3dc13977900 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/kaiser_bessel_derived_window.md @@ -0,0 +1,99 @@ +description: Generate a [Kaiser Bessel derived window][kbd]. + +
+ + +
+ +# tf.signal.kaiser_bessel_derived_window + + + + + + + + + +Generate a [Kaiser Bessel derived window][kbd]. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`window_length` + +A scalar `Tensor` indicating the window length to generate. +
+`beta` + +Beta parameter for Kaiser window. +
+`dtype` + +The data type to produce. Must be a floating point type. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of shape `[window_length]` of type `dtype`. +
+ + +[kbd]: + https://en.wikipedia.org/wiki/Kaiser_window#Kaiser%E2%80%93Bessel-derived_(KBD)_window \ No newline at end of file diff --git a/site/en/api_docs/python/tf/signal/kaiser_window.md b/site/en/api_docs/python/tf/signal/kaiser_window.md new file mode 100644 index 00000000000..6229cbf32f9 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/kaiser_window.md @@ -0,0 +1,99 @@ +description: Generate a [Kaiser window][kaiser]. + +
+ + +
+ +# tf.signal.kaiser_window + + + + + + + + + +Generate a [Kaiser window][kaiser]. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`window_length` + +A scalar `Tensor` indicating the window length to generate. +
+`beta` + +Beta parameter for Kaiser window, see reference below. +
+`dtype` + +The data type to produce. Must be a floating point type. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of shape `[window_length]` of type `dtype`. +
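+
+An illustrative snippet (parameter values are arbitrary); larger `beta` values
+lower the side lobes at the cost of a wider main lobe:
+
+```python
+import tensorflow as tf
+
+w_beta4 = tf.signal.kaiser_window(64, beta=4.0)
+w_beta12 = tf.signal.kaiser_window(64, beta=12.0)
+print(w_beta4.shape, w_beta12.shape)  # (64,) (64,)
+```
+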
+ + +[kaiser]: + https://docs.scipy.org/doc/numpy/reference/generated/numpy.kaiser.html \ No newline at end of file diff --git a/site/en/api_docs/python/tf/signal/linear_to_mel_weight_matrix.md b/site/en/api_docs/python/tf/signal/linear_to_mel_weight_matrix.md new file mode 100644 index 00000000000..4b021269e1b --- /dev/null +++ b/site/en/api_docs/python/tf/signal/linear_to_mel_weight_matrix.md @@ -0,0 +1,180 @@ +description: Returns a matrix to warp linear scale spectrograms to the [mel scale][mel]. + +
+ + +
+
+# tf.signal.linear_to_mel_weight_matrix
+
+
+
+
+
+
+
+
+
+Returns a matrix to warp linear scale spectrograms to the [mel scale][mel].
+
+
+
+
+
+
+
+
+
+Returns a weight matrix that can be used to re-weight a `Tensor` containing
+`num_spectrogram_bins` linearly sampled frequency information from
+`[0, sample_rate / 2]` into `num_mel_bins` frequency information from
+`[lower_edge_hertz, upper_edge_hertz]` on the [mel scale][mel].
+
+This function follows the [Hidden Markov Model Toolkit
+(HTK)](http://htk.eng.cam.ac.uk/) convention, defining the mel scale in
+terms of a frequency in hertz according to the following formula:
+
+$$\textrm{mel}(f) = 2595 * \textrm{log}_{10}(1 + \frac{f}{700})$$
+
+In the returned matrix, all the triangles (filterbanks) have a peak value
+of 1.0.
+
+For example, the returned matrix `A` can be used to right-multiply a
+spectrogram `S` of shape `[frames, num_spectrogram_bins]` of linear
+scale spectrum values (e.g. STFT magnitudes) to generate a "mel spectrogram"
+`M` of shape `[frames, num_mel_bins]`.
+
+    # `S` has shape [frames, num_spectrogram_bins]
+    # `M` has shape [frames, num_mel_bins]
+    M = tf.matmul(S, A)
+
+The matrix can be used with tf.tensordot to convert an arbitrary rank
+`Tensor` of linear-scale spectral bins into the mel scale.
+
+    # S has shape [..., num_spectrogram_bins].
+    # M has shape [..., num_mel_bins].
+    M = tf.tensordot(S, A, 1)
+    # tf.tensordot does not support shape inference for this case yet.
+    M.set_shape(S.shape[:-1].concatenate(A.shape[-1:]))
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+`num_mel_bins` + +Python int. How many bands in the resulting mel spectrum. +
+`num_spectrogram_bins` + +An integer `Tensor`. How many bins there are in the +source spectrogram data, which is understood to be `fft_size // 2 + 1`, +i.e. the spectrogram only contains the nonredundant FFT bins. +
+`sample_rate` + +An integer or float `Tensor`. Samples per second of the input +signal used to create the spectrogram. Used to figure out the frequencies +corresponding to each spectrogram bin, which dictates how they are mapped +into the mel scale. +
+`lower_edge_hertz` + +Python float. Lower bound on the frequencies to be +included in the mel spectrum. This corresponds to the lower edge of the +lowest triangular band. +
+`upper_edge_hertz` + +Python float. The desired top edge of the highest +frequency band. +
+`dtype` + +The `DType` of the result matrix. Must be a floating point type. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of shape `[num_spectrogram_bins, num_mel_bins]`. +
+ + + + + + + + + + + + +
+`ValueError`
+
+If `num_mel_bins`/`num_spectrogram_bins`/`sample_rate` are not
+positive, `lower_edge_hertz` is negative, frequency edges are incorrectly
+ordered, or `upper_edge_hertz` is larger than the Nyquist frequency.
+
+ + +[mel]: https://en.wikipedia.org/wiki/Mel_scale \ No newline at end of file diff --git a/site/en/api_docs/python/tf/signal/mdct.md b/site/en/api_docs/python/tf/signal/mdct.md new file mode 100644 index 00000000000..2563b9a232e --- /dev/null +++ b/site/en/api_docs/python/tf/signal/mdct.md @@ -0,0 +1,146 @@ +description: Computes the [Modified Discrete Cosine Transform][mdct] of signals. + +
+ + +
+ +# tf.signal.mdct + + + + + + + + + +Computes the [Modified Discrete Cosine Transform][mdct] of `signals`. + + + + + + + + + +Implemented with TPU/GPU-compatible ops and supports gradients. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`signals` + +A `[..., samples]` `float32`/`float64` `Tensor` of real-valued +signals. +
+`frame_length` + +An integer scalar `Tensor`. The window length in samples +which must be divisible by 4. +
+`window_fn` + +A callable that takes a frame_length and a `dtype` keyword +argument and returns a `[frame_length]` `Tensor` of samples in the +provided datatype. If set to `None`, a rectangular window with a scale of +1/sqrt(2) is used. For perfect reconstruction of a signal from `mdct` +followed by `inverse_mdct`, please use tf.signal.vorbis_window, +tf.signal.kaiser_bessel_derived_window or `None`. If using another +window function, make sure that w[n]^2 + w[n + frame_length // 2]^2 = 1 +and w[n] = w[frame_length - n - 1] for n = 0,...,frame_length // 2 - 1 to +achieve perfect reconstruction. +
+`pad_end` + +Whether to pad the end of `signals` with zeros when the provided +frame length and step produces a frame that lies partially past its end. +
+`norm`
+
+If it is `None`, an unnormalized DCT-4 is used; if it is "ortho", an
+orthonormal DCT-4 is used.
+
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `[..., frames, frame_length // 2]` `Tensor` of `float32`/`float64` +MDCT values where `frames` is roughly `samples // (frame_length // 2)` +when `pad_end=False`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `signals` is not at least rank 1, `frame_length` is +not scalar, or `frame_length` is not a multiple of `4`. +
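+
+A minimal usage sketch (shapes are illustrative): `frame_length` must be a
+multiple of 4, and the Vorbis window satisfies the perfect-reconstruction
+conditions listed above:
+
+```python
+import tensorflow as tf
+
+samples = tf.random.normal([2, 4096])
+mdcts = tf.signal.mdct(samples, frame_length=512, pad_end=True,
+                       window_fn=tf.signal.vorbis_window)
+print(mdcts.shape)  # (2, frames, 256), i.e. frame_length // 2 bins per frame
+```
+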
+ + +[mdct]: https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform \ No newline at end of file diff --git a/site/en/api_docs/python/tf/signal/mfccs_from_log_mel_spectrograms.md b/site/en/api_docs/python/tf/signal/mfccs_from_log_mel_spectrograms.md new file mode 100644 index 00000000000..eb5b4e22203 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/mfccs_from_log_mel_spectrograms.md @@ -0,0 +1,148 @@ +description: Computes [MFCCs][mfcc] of log_mel_spectrograms. + +
+ + +
+ +# tf.signal.mfccs_from_log_mel_spectrograms + + + + + + + + + +Computes [MFCCs][mfcc] of `log_mel_spectrograms`. + + + + + + + + + +Implemented with GPU-compatible ops and supports gradients. + +[Mel-Frequency Cepstral Coefficient (MFCC)][mfcc] calculation consists of +taking the DCT-II of a log-magnitude mel-scale spectrogram. [HTK][htk]'s MFCCs +use a particular scaling of the DCT-II which is almost orthogonal +normalization. We follow this convention. + +All `num_mel_bins` MFCCs are returned and it is up to the caller to select +a subset of the MFCCs based on their application. For example, it is typical +to only use the first few for speech recognition, as this results in +an approximately pitch-invariant representation of the signal. + +#### For example: + + + +```python +batch_size, num_samples, sample_rate = 32, 32000, 16000.0 +# A Tensor of [batch_size, num_samples] mono PCM samples in the range [-1, 1]. +pcm = tf.random.normal([batch_size, num_samples], dtype=tf.float32) + +# A 1024-point STFT with frames of 64 ms and 75% overlap. +stfts = tf.signal.stft(pcm, frame_length=1024, frame_step=256, + fft_length=1024) +spectrograms = tf.abs(stfts) + +# Warp the linear scale spectrograms into the mel-scale. +num_spectrogram_bins = stfts.shape[-1].value +lower_edge_hertz, upper_edge_hertz, num_mel_bins = 80.0, 7600.0, 80 +linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix( + num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz, + upper_edge_hertz) +mel_spectrograms = tf.tensordot( + spectrograms, linear_to_mel_weight_matrix, 1) +mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate( + linear_to_mel_weight_matrix.shape[-1:])) + +# Compute a stabilized log to get log-magnitude mel-scale spectrograms. +log_mel_spectrograms = tf.math.log(mel_spectrograms + 1e-6) + +# Compute MFCCs from log_mel_spectrograms and take the first 13. +mfccs = tf.signal.mfccs_from_log_mel_spectrograms( + log_mel_spectrograms)[..., :13] +``` + + + + + + + + + + + + + +
+`log_mel_spectrograms` + +A `[..., num_mel_bins]` `float32`/`float64` `Tensor` +of log-magnitude mel-scale spectrograms. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `[..., num_mel_bins]` `float32`/`float64` `Tensor` of the MFCCs of +`log_mel_spectrograms`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `num_mel_bins` is not positive. +
+ + +[mfcc]: https://en.wikipedia.org/wiki/Mel-frequency_cepstrum +[htk]: https://en.wikipedia.org/wiki/HTK_(software) \ No newline at end of file diff --git a/site/en/api_docs/python/tf/signal/overlap_and_add.md b/site/en/api_docs/python/tf/signal/overlap_and_add.md new file mode 100644 index 00000000000..c06db80f664 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/overlap_and_add.md @@ -0,0 +1,115 @@ +description: Reconstructs a signal from a framed representation. + +
+ + +
+ +# tf.signal.overlap_and_add + + + + + + + + + +Reconstructs a signal from a framed representation. + + + + + + + + + +Adds potentially overlapping frames of a signal with shape +`[..., frames, frame_length]`, offsetting subsequent frames by `frame_step`. +The resulting tensor has shape `[..., output_size]` where + + output_size = (frames - 1) * frame_step + frame_length + + + + + + + + + + + + + + + + +
+`signal` + +A [..., frames, frame_length] `Tensor`. All dimensions may be +unknown, and rank must be at least 2. +
+`frame_step` + +An integer or scalar `Tensor` denoting overlap offsets. Must be +less than or equal to `frame_length`. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` with shape `[..., output_size]` containing the overlap-added +frames of `signal`'s inner-most two dimensions. +
+ + + + + + + + + + + + +
+`ValueError` + +If `signal`'s rank is less than 2, or `frame_step` is not a +scalar integer. +
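+
+A tiny worked example (values chosen for illustration) of the `output_size`
+formula above, with two length-4 frames and a step of 2:
+
+```python
+import tensorflow as tf
+
+frames = tf.constant([[1., 2., 3., 4.],
+                      [10., 20., 30., 40.]])
+
+# output_size = (2 - 1) * 2 + 4 = 6; the overlapping region is summed.
+reconstructed = tf.signal.overlap_and_add(frames, frame_step=2)
+print(reconstructed.numpy())  # [ 1.  2. 13. 24. 30. 40.]
+```
+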
+ diff --git a/site/en/api_docs/python/tf/signal/rfft.md b/site/en/api_docs/python/tf/signal/rfft.md new file mode 100644 index 00000000000..71612492d62 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/rfft.md @@ -0,0 +1,108 @@ +description: Real-valued fast Fourier transform. + +
+ + +
+ +# tf.signal.rfft + + + + + + + + + +Real-valued fast Fourier transform. + + + + + + + + + +Computes the 1-dimensional discrete Fourier transform of a real-valued signal +over the inner-most dimension of `input`. + +Since the DFT of a real signal is Hermitian-symmetric, `RFFT` only returns the +`fft_length / 2 + 1` unique components of the FFT: the zero-frequency term, +followed by the `fft_length / 2` positive-frequency terms. + +Along the axis `RFFT` is computed on, if `fft_length` is smaller than the +corresponding dimension of `input`, the dimension is cropped. If it is larger, +the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +A float32 tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [1]. The FFT length. +
+`Tcomplex` + +An optional tf.DType from: `tf.complex64, tf.complex128`. Defaults to tf.complex64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tcomplex`. +
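+
+A brief sketch (illustrative sizes) of the output shape and of `fft_length`
+padding:
+
+```python
+import tensorflow as tf
+
+x = tf.random.normal([3, 64])
+
+spectrum = tf.signal.rfft(x)
+print(spectrum.shape, spectrum.dtype)  # (3, 33) <dtype: 'complex64'>
+
+# fft_length zero-pads (or crops) the inner dimension before the transform.
+padded = tf.signal.rfft(x, fft_length=[128])
+print(padded.shape)  # (3, 65)
+```
+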
+ diff --git a/site/en/api_docs/python/tf/signal/rfft2d.md b/site/en/api_docs/python/tf/signal/rfft2d.md new file mode 100644 index 00000000000..acec2ba26b8 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/rfft2d.md @@ -0,0 +1,109 @@ +description: 2D real-valued fast Fourier transform. + +
+ + +
+ +# tf.signal.rfft2d + + + + + + + + + +2D real-valued fast Fourier transform. + + + + + + + + + +Computes the 2-dimensional discrete Fourier transform of a real-valued signal +over the inner-most 2 dimensions of `input`. + +Since the DFT of a real signal is Hermitian-symmetric, `RFFT2D` only returns the +`fft_length / 2 + 1` unique components of the FFT for the inner-most dimension +of `output`: the zero-frequency term, followed by the `fft_length / 2` +positive-frequency terms. + +Along each axis `RFFT2D` is computed on, if `fft_length` is smaller than the +corresponding dimension of `input`, the dimension is cropped. If it is larger, +the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +A float32 tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [2]. The FFT length for each dimension. +
+`Tcomplex` + +An optional tf.DType from: `tf.complex64, tf.complex128`. Defaults to tf.complex64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tcomplex`. +
+ diff --git a/site/en/api_docs/python/tf/signal/rfft3d.md b/site/en/api_docs/python/tf/signal/rfft3d.md new file mode 100644 index 00000000000..2b477a049fc --- /dev/null +++ b/site/en/api_docs/python/tf/signal/rfft3d.md @@ -0,0 +1,109 @@ +description: 3D real-valued fast Fourier transform. + +
+ + +
+ +# tf.signal.rfft3d + + + + + + + + + +3D real-valued fast Fourier transform. + + + + + + + + + +Computes the 3-dimensional discrete Fourier transform of a real-valued signal +over the inner-most 3 dimensions of `input`. + +Since the DFT of a real signal is Hermitian-symmetric, `RFFT3D` only returns the +`fft_length / 2 + 1` unique components of the FFT for the inner-most dimension +of `output`: the zero-frequency term, followed by the `fft_length / 2` +positive-frequency terms. + +Along each axis `RFFT3D` is computed on, if `fft_length` is smaller than the +corresponding dimension of `input`, the dimension is cropped. If it is larger, +the dimension is padded with zeros. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `float32`, `float64`. +A float32 tensor. +
+`fft_length` + +A `Tensor` of type `int32`. +An int32 tensor of shape [3]. The FFT length for each dimension. +
+`Tcomplex` + +An optional tf.DType from: `tf.complex64, tf.complex128`. Defaults to tf.complex64. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `Tcomplex`. +
+ diff --git a/site/en/api_docs/python/tf/signal/stft.md b/site/en/api_docs/python/tf/signal/stft.md new file mode 100644 index 00000000000..dbf284f766e --- /dev/null +++ b/site/en/api_docs/python/tf/signal/stft.md @@ -0,0 +1,146 @@ +description: Computes the [Short-time Fourier Transform][stft] of signals. + +
+ + +
+ +# tf.signal.stft + + + + + + + + + +Computes the [Short-time Fourier Transform][stft] of `signals`. + + + + + + + + + +Implemented with TPU/GPU-compatible ops and supports gradients. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`signals` + +A `[..., samples]` `float32`/`float64` `Tensor` of real-valued +signals. +
+`frame_length` + +An integer scalar `Tensor`. The window length in samples. +
+`frame_step` + +An integer scalar `Tensor`. The number of samples to step. +
+`fft_length` + +An integer scalar `Tensor`. The size of the FFT to apply. +If not provided, uses the smallest power of 2 enclosing `frame_length`. +
+`window_fn` + +A callable that takes a window length and a `dtype` keyword +argument and returns a `[window_length]` `Tensor` of samples in the +provided datatype. If set to `None`, no windowing is used. +
+`pad_end` + +Whether to pad the end of `signals` with zeros when the provided +frame length and step produces a frame that lies partially past its end. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `[..., frames, fft_unique_bins]` `Tensor` of `complex64`/`complex128` +STFT values where `fft_unique_bins` is `fft_length // 2 + 1` (the unique +components of the FFT). +
+ + + + + + + + + + + + +
+`ValueError` + +If `signals` is not at least rank 1, `frame_length` is +not scalar, or `frame_step` is not scalar. +
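+
+A minimal spectrogram sketch (the sample rate and frame sizes are illustrative,
+not part of the original reference):
+
+```python
+import tensorflow as tf
+
+# A batch of two 1-second signals at 16 kHz.
+signals = tf.random.normal([2, 16000])
+
+# 25 ms frames with a 10 ms hop and the default periodic Hann window.
+stfts = tf.signal.stft(signals, frame_length=400, frame_step=160,
+                       fft_length=512)
+spectrograms = tf.abs(stfts)
+
+print(stfts.shape)  # (2, frames, 257) since fft_length // 2 + 1 == 257
+```
+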
+ + +[stft]: https://en.wikipedia.org/wiki/Short-time_Fourier_transform \ No newline at end of file diff --git a/site/en/api_docs/python/tf/signal/vorbis_window.md b/site/en/api_docs/python/tf/signal/vorbis_window.md new file mode 100644 index 00000000000..9bf81232ec6 --- /dev/null +++ b/site/en/api_docs/python/tf/signal/vorbis_window.md @@ -0,0 +1,92 @@ +description: Generate a [Vorbis power complementary window][vorbis]. + +
+ + +
+ +# tf.signal.vorbis_window + + + + + + + + + +Generate a [Vorbis power complementary window][vorbis]. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`window_length` + +A scalar `Tensor` indicating the window length to generate. +
+`dtype` + +The data type to produce. Must be a floating point type. +
+`name` + +An optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` of shape `[window_length]` of type `dtype`. +
+ + +[vorbis]: + https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform#Window_functions \ No newline at end of file diff --git a/site/en/api_docs/python/tf/size.md b/site/en/api_docs/python/tf/size.md new file mode 100644 index 00000000000..5b0ef32f6ea --- /dev/null +++ b/site/en/api_docs/python/tf/size.md @@ -0,0 +1,99 @@ +description: Returns the size of a tensor. + +
+ + +
+ +# tf.size + + + + + + + + + +Returns the size of a tensor. + + + + + + + +See also tf.shape. + +Returns a 0-D `Tensor` representing the number of elements in `input` +of type `out_type`. Defaults to tf.int32. + +#### For example: + + + +``` +>>> t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]) +>>> tf.size(t) + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` or `SparseTensor`. +
+`name` + +A name for the operation (optional). +
+`out_type` + +(Optional) The specified non-quantized numeric output type of the +operation. Defaults to tf.int32. +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. Defaults to tf.int32. +
+ + + + +#### Numpy Compatibility +Equivalent to np.size() + diff --git a/site/en/api_docs/python/tf/slice.md b/site/en/api_docs/python/tf/slice.md new file mode 100644 index 00000000000..dca101be646 --- /dev/null +++ b/site/en/api_docs/python/tf/slice.md @@ -0,0 +1,132 @@ +description: Extracts a slice from a tensor. + +
+ + +
+ +# tf.slice + + + + + + + + + +Extracts a slice from a tensor. + + + + + + + + + +This operation extracts a slice of size `size` from a tensor `input_` starting +at the location specified by `begin`. The slice `size` is represented as a +tensor shape, where `size[i]` is the number of elements of the 'i'th dimension +of `input_` that you want to slice. The starting location (`begin`) for the +slice is represented as an offset in each dimension of `input_`. In other +words, `begin[i]` is the offset into the i'th dimension of `input_` that you +want to slice from. + +Note that tf.Tensor.__getitem__ is typically a more pythonic way to +perform slices, as it allows you to write `foo[3:7, :-2]` instead of +`tf.slice(foo, [3, 0], [4, foo.get_shape()[1]-2])`. + +`begin` is zero-based; `size` is one-based. If `size[i]` is -1, +all remaining elements in dimension i are included in the +slice. In other words, this is equivalent to setting: + +`size[i] = input_.dim_size(i) - begin[i]` + +This operation requires that: + +`0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]` + +#### For example: + + + +```python +t = tf.constant([[[1, 1, 1], [2, 2, 2]], + [[3, 3, 3], [4, 4, 4]], + [[5, 5, 5], [6, 6, 6]]]) +tf.slice(t, [1, 0, 0], [1, 1, 3]) # [[[3, 3, 3]]] +tf.slice(t, [1, 0, 0], [1, 2, 3]) # [[[3, 3, 3], + # [4, 4, 4]]] +tf.slice(t, [1, 0, 0], [2, 1, 3]) # [[[3, 3, 3]], + # [[5, 5, 5]]] +``` + + + + + + + + + + + + + + + + + + + +
+`input_` + +A `Tensor`. +
+`begin` + +An `int32` or `int64` `Tensor`. +
+`size` + +An `int32` or `int64` `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` the same type as `input_`. +
+ diff --git a/site/en/api_docs/python/tf/sort.md b/site/en/api_docs/python/tf/sort.md new file mode 100644 index 00000000000..60e67cbe573 --- /dev/null +++ b/site/en/api_docs/python/tf/sort.md @@ -0,0 +1,128 @@ +description: Sorts a tensor. + +
+ + +
+ +# tf.sort + + + + + + + + + +Sorts a tensor. + + + + + + + + + + +#### Usage: + + + +```python +import tensorflow as tf +a = [1, 10, 26.9, 2.8, 166.32, 62.3] +b = tf.sort(a,axis=-1,direction='ASCENDING',name=None) +c = tf.keras.backend.eval(b) +# Here, c = [ 1. 2.8 10. 26.9 62.3 166.32] +``` + + + + + + + + + + + + + + + + + + + +
+`values` + +1-D or higher numeric `Tensor`. +
+`axis` + +The axis along which to sort. The default is -1, which sorts the last +axis. +
+`direction` + +The direction in which to sort the values (`'ASCENDING'` or +`'DESCENDING'`). +
+`name` + +Optional name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` with the same dtype and shape as `values`, with the elements +sorted along the given `axis`. +
+ + + + + + + + + + + + +
+`ValueError` + +If axis is not a constant scalar, or the direction is invalid. +
+ diff --git a/site/en/api_docs/python/tf/space_to_batch.md b/site/en/api_docs/python/tf/space_to_batch.md new file mode 100644 index 00000000000..11671b4b62e --- /dev/null +++ b/site/en/api_docs/python/tf/space_to_batch.md @@ -0,0 +1,212 @@ +description: SpaceToBatch for N-D tensors of type T. + +
+ + +
+ +# tf.space_to_batch + + + + + + + + + +SpaceToBatch for N-D tensors of type T. + + + + + + + + + +This operation divides "spatial" dimensions `[1, ..., M]` of the input into a +grid of blocks of shape `block_shape`, and interleaves these blocks with the +"batch" dimension (0) such that in the output, the spatial dimensions +`[1, ..., M]` correspond to the position within the grid, and the batch +dimension combines both the position within a spatial block and the original +batch position. Prior to division into blocks, the spatial dimensions of the +input are optionally zero padded according to `paddings`. See below for a +precise description. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`, +where spatial_shape has `M` dimensions. +
+`block_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D with shape `[M]`, all values must be >= 1. +
+`paddings` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D with shape `[M, 2]`, all values must be >= 0. +`paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension +`i + 1`, which corresponds to spatial dimension `i`. It is required that +`block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`. + +This operation is equivalent to the following steps: + +1. Zero-pad the start and end of dimensions `[1, ..., M]` of the +input according to `paddings` to produce `padded` of shape `padded_shape`. + +2. Reshape `padded` to `reshaped_padded` of shape: + +[batch] + +[padded_shape[1] / block_shape[0], +block_shape[0], +..., +padded_shape[M] / block_shape[M-1], +block_shape[M-1]] + +remaining_shape + +3. Permute dimensions of `reshaped_padded` to produce +`permuted_reshaped_padded` of shape: + +block_shape + +[batch] + +[padded_shape[1] / block_shape[0], +..., +padded_shape[M] / block_shape[M-1]] + +remaining_shape + +4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch +dimension, producing an output tensor of shape: + +[batch * prod(block_shape)] + +[padded_shape[1] / block_shape[0], +..., +padded_shape[M] / block_shape[M-1]] + +remaining_shape + +Some examples: + +(1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and +`paddings = [[0, 0], [0, 0]]`: + +``` +x = [[[[1], [2]], [[3], [4]]]] +``` + +The output tensor has shape `[4, 1, 1, 1]` and value: + +``` +[[[[1]]], [[[2]]], [[[3]]], [[[4]]]] +``` + +(2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and +`paddings = [[0, 0], [0, 0]]`: + +``` +x = [[[[1, 2, 3], [4, 5, 6]], +[[7, 8, 9], [10, 11, 12]]]] +``` + +The output tensor has shape `[4, 1, 1, 3]` and value: + +``` +[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] +``` + +(3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and +`paddings = [[0, 0], [0, 0]]`: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]], +[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +The output tensor has shape `[4, 2, 2, 1]` and value: + +``` +x = [[[[1], [3]], [[9], [11]]], +[[[2], [4]], [[10], [12]]], +[[[5], [7]], [[13], [15]]], +[[[6], [8]], [[14], [16]]]] +``` + +(4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and +paddings = `[[0, 0], [2, 0]]`: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]]], +[[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +The output tensor has shape `[8, 1, 3, 1]` and value: + +``` +x = [[[[0], [1], [3]]], [[[0], [9], [11]]], +[[[0], [2], [4]]], [[[0], [10], [12]]], +[[[0], [5], [7]]], [[[0], [13], [15]]], +[[[0], [6], [8]]], [[[0], [14], [16]]]] +``` + +Among others, this operation is useful for reducing atrous convolution into +regular convolution. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
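+
+As a runnable companion to example (1) above (a sketch, not part of the
+original reference):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[[[1], [2]], [[3], [4]]]])  # shape [1, 2, 2, 1]
+y = tf.space_to_batch(x, block_shape=[2, 2], paddings=[[0, 0], [0, 0]])
+
+print(y.shape)                # (4, 1, 1, 1)
+print(tf.squeeze(y).numpy())  # [1 2 3 4]
+```
+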
+ diff --git a/site/en/api_docs/python/tf/space_to_batch_nd.md b/site/en/api_docs/python/tf/space_to_batch_nd.md new file mode 100644 index 00000000000..142de5c08bb --- /dev/null +++ b/site/en/api_docs/python/tf/space_to_batch_nd.md @@ -0,0 +1,210 @@ +description: SpaceToBatch for N-D tensors of type T. + +
+ + +
+ +# tf.space_to_batch_nd + + + + + + + + + +SpaceToBatch for N-D tensors of type T. + + + + + + + + + +This operation divides "spatial" dimensions `[1, ..., M]` of the input into a +grid of blocks of shape `block_shape`, and interleaves these blocks with the +"batch" dimension (0) such that in the output, the spatial dimensions +`[1, ..., M]` correspond to the position within the grid, and the batch +dimension combines both the position within a spatial block and the original +batch position. Prior to division into blocks, the spatial dimensions of the +input are optionally zero padded according to `paddings`. See below for a +precise description. + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`, +where spatial_shape has `M` dimensions. +
+`block_shape` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D with shape `[M]`, all values must be >= 1. +
+`paddings` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +2-D with shape `[M, 2]`, all values must be >= 0. +`paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension +`i + 1`, which corresponds to spatial dimension `i`. It is required that +`block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`. + +This operation is equivalent to the following steps: + +1. Zero-pad the start and end of dimensions `[1, ..., M]` of the +input according to `paddings` to produce `padded` of shape `padded_shape`. + +2. Reshape `padded` to `reshaped_padded` of shape: + +[batch] + +[padded_shape[1] / block_shape[0], +block_shape[0], +..., +padded_shape[M] / block_shape[M-1], +block_shape[M-1]] + +remaining_shape + +3. Permute dimensions of `reshaped_padded` to produce +`permuted_reshaped_padded` of shape: + +block_shape + +[batch] + +[padded_shape[1] / block_shape[0], +..., +padded_shape[M] / block_shape[M-1]] + +remaining_shape + +4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch +dimension, producing an output tensor of shape: + +[batch * prod(block_shape)] + +[padded_shape[1] / block_shape[0], +..., +padded_shape[M] / block_shape[M-1]] + +remaining_shape + +Some examples: + +(1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and +`paddings = [[0, 0], [0, 0]]`: + +``` +x = [[[[1], [2]], [[3], [4]]]] +``` + +The output tensor has shape `[4, 1, 1, 1]` and value: + +``` +[[[[1]]], [[[2]]], [[[3]]], [[[4]]]] +``` + +(2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and +`paddings = [[0, 0], [0, 0]]`: + +``` +x = [[[[1, 2, 3], [4, 5, 6]], +[[7, 8, 9], [10, 11, 12]]]] +``` + +The output tensor has shape `[4, 1, 1, 3]` and value: + +``` +[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]] +``` + +(3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and +`paddings = [[0, 0], [0, 0]]`: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]], +[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +The output tensor has shape `[4, 2, 2, 1]` and value: + +``` +x = [[[[1], [3]], [[9], [11]]], +[[[2], [4]], [[10], [12]]], +[[[5], [7]], [[13], [15]]], +[[[6], [8]], [[14], [16]]]] +``` + +(4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and +paddings = `[[0, 0], [2, 0]]`: + +``` +x = [[[[1], [2], [3], [4]], +[[5], [6], [7], [8]]], +[[[9], [10], [11], [12]], +[[13], [14], [15], [16]]]] +``` + +The output tensor has shape `[8, 1, 3, 1]` and value: + +``` +x = [[[[0], [1], [3]]], [[[0], [9], [11]]], +[[[0], [2], [4]]], [[[0], [10], [12]]], +[[[0], [5], [7]]], [[[0], [13], [15]]], +[[[0], [6], [8]]], [[[0], [14], [16]]]] +``` + +Among others, this operation is useful for reducing atrous convolution into +regular convolution. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/sparse.md b/site/en/api_docs/python/tf/sparse.md new file mode 100644 index 00000000000..6c24bb31763 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse.md @@ -0,0 +1,82 @@ +description: Sparse Tensor Representation. + +
+ + +
+ +# Module: tf.sparse + + + + + + + + + +Sparse Tensor Representation. + + +See also tf.SparseTensor. + +## Classes + +[`class SparseTensor`](../tf/sparse/SparseTensor.md): Represents a sparse tensor. + +## Functions + +[`add(...)`](../tf/sparse/add.md): Adds two tensors, at least one of each is a `SparseTensor`. + +[`concat(...)`](../tf/sparse/concat.md): Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments) + +[`cross(...)`](../tf/sparse/cross.md): Generates sparse cross from a list of sparse and dense tensors. + +[`cross_hashed(...)`](../tf/sparse/cross_hashed.md): Generates hashed sparse cross from a list of sparse and dense tensors. + +[`expand_dims(...)`](../tf/sparse/expand_dims.md): Inserts a dimension of 1 into a tensor's shape. + +[`eye(...)`](../tf/sparse/eye.md): Creates a two-dimensional sparse tensor with ones along the diagonal. + +[`fill_empty_rows(...)`](../tf/sparse/fill_empty_rows.md): Fills empty rows in the input 2-D `SparseTensor` with a default value. + +[`from_dense(...)`](../tf/sparse/from_dense.md): Converts a dense tensor into a sparse tensor. + +[`mask(...)`](../tf/sparse/mask.md): Masks elements of `IndexedSlices`. + +[`maximum(...)`](../tf/sparse/maximum.md): Returns the element-wise max of two SparseTensors. + +[`minimum(...)`](../tf/sparse/minimum.md): Returns the element-wise min of two SparseTensors. + +[`reduce_max(...)`](../tf/sparse/reduce_max.md): Computes the max of elements across dimensions of a SparseTensor. + +[`reduce_sum(...)`](../tf/sparse/reduce_sum.md): Computes the sum of elements across dimensions of a SparseTensor. + +[`reorder(...)`](../tf/sparse/reorder.md): Reorders a `SparseTensor` into the canonical, row-major ordering. + +[`reset_shape(...)`](../tf/sparse/reset_shape.md): Resets the shape of a `SparseTensor` with indices and values unchanged. + +[`reshape(...)`](../tf/sparse/reshape.md): Reshapes a `SparseTensor` to represent values in a new dense shape. + +[`retain(...)`](../tf/sparse/retain.md): Retains specified non-empty values within a `SparseTensor`. + +[`segment_mean(...)`](../tf/sparse/segment_mean.md): Computes the mean along sparse segments of a tensor. + +[`segment_sqrt_n(...)`](../tf/sparse/segment_sqrt_n.md): Computes the sum along sparse segments of a tensor divided by the sqrt(N). + +[`segment_sum(...)`](../tf/sparse/segment_sum.md): Computes the sum along sparse segments of a tensor. + +[`slice(...)`](../tf/sparse/slice.md): Slice a `SparseTensor` based on the `start` and `size. + +[`softmax(...)`](../tf/sparse/softmax.md): Applies softmax to a batched N-D `SparseTensor`. + +[`sparse_dense_matmul(...)`](../tf/sparse/sparse_dense_matmul.md): Multiply SparseTensor (or dense Matrix) (of rank 2) "A" by dense matrix + +[`split(...)`](../tf/sparse/split.md): Split a `SparseTensor` into `num_split` tensors along `axis`. + +[`to_dense(...)`](../tf/sparse/to_dense.md): Converts a `SparseTensor` into a dense tensor. + +[`to_indicator(...)`](../tf/sparse/to_indicator.md): Converts a `SparseTensor` of ids into a dense bool indicator tensor. + +[`transpose(...)`](../tf/sparse/transpose.md): Transposes a `SparseTensor` + diff --git a/site/en/api_docs/python/tf/sparse/SparseTensor.md b/site/en/api_docs/python/tf/sparse/SparseTensor.md new file mode 100644 index 00000000000..e3e8c72259c --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/SparseTensor.md @@ -0,0 +1,510 @@ +description: Represents a sparse tensor. + +
+ + + + + + + + + + +
+ +# tf.sparse.SparseTensor + + + + + + + + + +Represents a sparse tensor. + + + + + + + + + +TensorFlow represents a sparse tensor as three separate dense tensors: +`indices`, `values`, and `dense_shape`. In Python, the three tensors are +collected into a `SparseTensor` class for ease of use. If you have separate +`indices`, `values`, and `dense_shape` tensors, wrap them in a `SparseTensor` +object before passing to the ops below. + +Concretely, the sparse tensor `SparseTensor(indices, values, dense_shape)` +comprises the following components, where `N` and `ndims` are the number +of values and number of dimensions in the `SparseTensor`, respectively: + +* `indices`: A 2-D int64 tensor of shape `[N, ndims]`, which specifies the + indices of the elements in the sparse tensor that contain nonzero values + (elements are zero-indexed). For example, `indices=[[1,3], [2,4]]` specifies + that the elements with indexes of [1,3] and [2,4] have nonzero values. + +* `values`: A 1-D tensor of any type and shape `[N]`, which supplies the + values for each element in `indices`. For example, given `indices=[[1,3], + [2,4]]`, the parameter `values=[18, 3.6]` specifies that element [1,3] of + the sparse tensor has a value of 18, and element [2,4] of the tensor has a + value of 3.6. + +* `dense_shape`: A 1-D int64 tensor of shape `[ndims]`, which specifies the + dense_shape of the sparse tensor. Takes a list indicating the number of + elements in each dimension. For example, `dense_shape=[3,6]` specifies a + two-dimensional 3x6 tensor, `dense_shape=[2,3,4]` specifies a + three-dimensional 2x3x4 tensor, and `dense_shape=[9]` specifies a + one-dimensional tensor with 9 elements. + +The corresponding dense tensor satisfies: + +```python +dense.shape = dense_shape +dense[tuple(indices[i])] = values[i] +``` + +By convention, `indices` should be sorted in row-major order (or equivalently +lexicographic order on the tuples `indices[i]`). This is not enforced when +`SparseTensor` objects are constructed, but most ops assume correct ordering. +If the ordering of sparse tensor `st` is wrong, a fixed version can be +obtained by calling tf.sparse.reorder(st). + +Example: The sparse tensor + +```python +SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]) +``` + +represents the dense tensor + +```python +[[1, 0, 0, 0] + [0, 0, 2, 0] + [0, 0, 0, 0]] +``` + + + + + + + + + + + + + + + + +
+`indices` + +A 2-D int64 tensor of shape `[N, ndims]`. +
+`values` + +A 1-D tensor of any type and shape `[N]`. +
+`dense_shape` + +A 1-D int64 tensor of shape `[ndims]`. +
+ + + + + + + + + + + + +
+`ValueError` + +When building an eager SparseTensor if `dense_shape` is +unknown or contains unknown elements (None or -1). +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`dense_shape` + +A 1-D Tensor of int64 representing the shape of the dense tensor. +
+`dtype` + +The `DType` of elements in this tensor. +
+`graph` + +The `Graph` that contains the index, value, and dense_shape tensors. +
+`indices` + +The indices of non-zero values in the represented dense tensor. +
+`op` + +The `Operation` that produces `values` as an output. +
+`shape` + +Get the `TensorShape` representing the shape of the dense tensor. +
+`values` + +The non-zero values in the represented dense tensor. +
+ + + +## Methods + +

consumers

+ +View source + + + + + + +

eval

+ +View source + + + +Evaluates this sparse tensor in a `Session`. + +Calling this method will execute all preceding operations that +produce the inputs needed for the operation that produces this +tensor. + +*N.B.* Before invoking SparseTensor.eval(), its graph must have been +launched in a session, and either a default session must be +available, or `session` must be specified explicitly. + + + + + + + + + + + + + +
Args
+`feed_dict` + +A dictionary that maps `Tensor` objects to feed values. See +`tf.Session.run` for a description of the valid feed values. +
+`session` + +(Optional.) The `Session` to be used to evaluate this sparse +tensor. If none, the default session will be used. +
+ + + + + + + + + + + +
Returns
+A `SparseTensorValue` object. +
+ + + +

from_value

+ +View source + + + + + + +

get_shape

+ +View source + + + +Get the `TensorShape` representing the shape of the dense tensor. + + + + + + + + + + +
Returns
+A `TensorShape` object. +
+ + + +

__div__

+ +View source + + + +Component-wise divides a SparseTensor by a dense Tensor. + +*Limitation*: this Op only broadcasts the dense side to the sparse side, but not +the other direction. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`sp_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, possibly not in canonical ordering. +
+`sp_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `sp_indices`. +
+`sp_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`dense` + +A `Tensor`. Must have the same type as `sp_values`. +`R`-D. The dense Tensor operand. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `sp_values`. +
+ + + +

__mul__

+ +View source + + + +Component-wise multiplies a SparseTensor by a dense Tensor. + +The output locations corresponding to the implicitly zero elements in the sparse +tensor will be zero (i.e., will not take up storage space), regardless of the +contents of the dense tensor (even if it's +/-INF and that INF*0 == NaN). + +*Limitation*: this Op only broadcasts the dense side to the sparse side, but not +the other direction. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`sp_indices` + +A `Tensor` of type `int64`. +2-D. `N x R` matrix with the indices of non-empty values in a +SparseTensor, possibly not in canonical ordering. +
+`sp_values` + +A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. +1-D. `N` non-empty values corresponding to `sp_indices`. +
+`sp_shape` + +A `Tensor` of type `int64`. +1-D. Shape of the input SparseTensor. +
+`dense` + +A `Tensor`. Must have the same type as `sp_values`. +`R`-D. The dense Tensor operand. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
Returns
+A `Tensor`. Has the same type as `sp_values`. +
+ + + +

__truediv__

+ +View source + + + +Internal helper function for 'sp_t / dense_t'. + + + + diff --git a/site/en/api_docs/python/tf/sparse/add.md b/site/en/api_docs/python/tf/sparse/add.md new file mode 100644 index 00000000000..0674cf3fd1f --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/add.md @@ -0,0 +1,132 @@ +description: Adds two tensors, at least one of each is a SparseTensor. + +
+ + +
+ +# tf.sparse.add + + + + + + + + + +Adds two tensors, at least one of each is a `SparseTensor`. + + + + + + + +If one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If +both arguments are `SparseTensor`s, this returns a `SparseTensor`. The order +of arguments does not matter. Use vanilla tf.add() for adding two dense +`Tensor`s. + +The shapes of the two operands must match: broadcasting is not supported. + +The indices of any input `SparseTensor` are assumed ordered in standard +lexicographic order. If this is not the case, before this step run +`SparseReorder` to restore index ordering. + +If both arguments are sparse, we perform "clipping" as follows. By default, +if two values sum to zero at some index, the output `SparseTensor` would still +include that particular location in its index, storing a zero in the +corresponding value slot. To override this, callers can specify `threshold`, +indicating that if the sum has a magnitude strictly smaller than `threshold`, +its corresponding value and index would then not be included. In particular, +`threshold == 0.0` (default) means everything is kept and actual thresholding +happens only for a positive value. + +For example, suppose the logical sum of two sparse operands is (densified): + + [ 2] + [.1 0] + [ 6 -.2] + +Then, + +* `threshold == 0` (the default): all 5 index/value pairs will be + returned. +* `threshold == 0.11`: only .1 and 0 will vanish, and the remaining three + index/value pairs will be returned. +* `threshold == 0.21`: .1, 0, and -.2 will vanish. + + + + + + + + + + + + + + + + +
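As an illustrative sketch of the threshold behavior described above (the operand values are made up for this example, and TensorFlow 2.x eager mode is assumed):

```python
import tensorflow as tf

sp_a = tf.sparse.SparseTensor(indices=[[0, 1], [1, 0]], values=[2.0, 0.1],
                              dense_shape=[3, 2])
sp_b = tf.sparse.SparseTensor(indices=[[1, 0], [2, 0]], values=[-0.1, 6.0],
                              dense_shape=[3, 2])

# Sparse + sparse -> SparseTensor. The sum at [1, 0] is exactly 0.0 and is
# still stored, because the default threshold keeps everything.
summed = tf.sparse.add(sp_a, sp_b)
print(summed.values.numpy())  # [2. 0. 6.]

# Raising the threshold drops entries whose magnitude falls below it.
print(tf.sparse.add(sp_a, sp_b, threshold=0.11).values.numpy())  # [2. 6.]

# Sparse + dense -> dense Tensor.
print(tf.sparse.add(sp_a, tf.ones([3, 2])).numpy())
```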
+`a` + +The first operand; `SparseTensor` or `Tensor`. +
+`b` + +The second operand; `SparseTensor` or `Tensor`. At least one operand +must be sparse. +
+`threshold` + +A 0-D `Tensor`. The magnitude threshold that determines if an +output value/index pair takes space. Its dtype should match that of the +values if they are real; if the latter are complex64/complex128, then the +dtype should be float32/float64, correspondingly. +
+ + + + + + + + + + + +
+A `SparseTensor` or a `Tensor`, representing the sum. +
+ + + + + + + + + + + + +
+`TypeError` + +If both `a` and `b` are `Tensor`s. Use tf.add() instead. +
+ diff --git a/site/en/api_docs/python/tf/sparse/concat.md b/site/en/api_docs/python/tf/sparse/concat.md new file mode 100644 index 00000000000..2df431f5574 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/concat.md @@ -0,0 +1,199 @@ +description: Concatenates a list of SparseTensor along the specified dimension. (deprecated arguments) + +
+ + +
+ +# tf.sparse.concat + + + + + + + + + +Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments) + + + + + + + +Warning: SOME ARGUMENTS ARE DEPRECATED: `(concat_dim)`. They will be removed in a future version. +Instructions for updating: +concat_dim is deprecated, use axis instead + +Concatenation is with respect to the dense versions of each sparse input. +It is assumed that each inputs is a `SparseTensor` whose elements are ordered +along increasing dimension number. + +If expand_nonconcat_dim is False, all inputs' shapes must match, except for +the concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are +allowed to vary among all inputs. + +The `indices`, `values`, and `shapes` lists must have the same length. + +If expand_nonconcat_dim is False, then the output shape is identical to the +inputs', except along the concat dimension, where it is the sum of the inputs' +sizes along that dimension. + +If expand_nonconcat_dim is True, then the output shape along the non-concat +dimensions will be expand to be the largest among all inputs, and it is the +sum of the inputs sizes along the concat dimension. + +The output elements will be resorted to preserve the sort order along +increasing dimension number. + +This op runs in `O(M log M)` time, where `M` is the total number of non-empty +values across all inputs. This is due to the need for an internal sort in +order to concatenate efficiently across an arbitrary dimension. + +For example, if `axis = 1` and the inputs are + + sp_inputs[0]: shape = [2, 3] + [0, 2]: "a" + [1, 0]: "b" + [1, 1]: "c" + + sp_inputs[1]: shape = [2, 4] + [0, 1]: "d" + [0, 2]: "e" + +then the output will be + + shape = [2, 7] + [0, 2]: "a" + [0, 4]: "d" + [0, 5]: "e" + [1, 0]: "b" + [1, 1]: "c" + +Graphically this is equivalent to doing + + [ a] concat [ d e ] = [ a d e ] + [b c ] [ ] [b c ] + +Another example, if 'axis = 1' and the inputs are + + sp_inputs[0]: shape = [3, 3] + [0, 2]: "a" + [1, 0]: "b" + [2, 1]: "c" + + sp_inputs[1]: shape = [2, 4] + [0, 1]: "d" + [0, 2]: "e" + +if expand_nonconcat_dim = False, this will result in an error. But if +expand_nonconcat_dim = True, this will result in: + + shape = [3, 7] + [0, 2]: "a" + [0, 4]: "d" + [0, 5]: "e" + [1, 0]: "b" + [2, 1]: "c" + +Graphically this is equivalent to doing + + [ a] concat [ d e ] = [ a d e ] + [b ] [ ] [b ] + [ c ] [ c ] + + + + + + + + + + + + + + + + + + + + + + + + + + +
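A runnable version of the first example above, under the assumption of TensorFlow 2.x eager execution:

```python
import tensorflow as tf

sp_0 = tf.sparse.SparseTensor(indices=[[0, 2], [1, 0], [1, 1]],
                              values=["a", "b", "c"], dense_shape=[2, 3])
sp_1 = tf.sparse.SparseTensor(indices=[[0, 1], [0, 2]],
                              values=["d", "e"], dense_shape=[2, 4])

out = tf.sparse.concat(axis=1, sp_inputs=[sp_0, sp_1])
print(out.dense_shape.numpy())  # [2 7]
print(tf.sparse.to_dense(out, default_value="").numpy())
# [[b'' b'' b'a' b'' b'd' b'e' b'']
#  [b'b' b'c' b'' b'' b'' b'' b'']]
```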
+`axis` + +Dimension to concatenate along. Must be in range [-rank, rank), +where rank is the number of dimensions in each input `SparseTensor`. +
+`sp_inputs` + +List of `SparseTensor` to concatenate. +
+`name` + +A name prefix for the returned tensors (optional). +
+`expand_nonconcat_dim` + +Whether to allow the expansion in the non-concat +dimensions. Defaulted to False. +
+`concat_dim` + +The old (deprecated) name for axis. +
+`expand_nonconcat_dims` + +alias for expand_nonconcat_dim +
+ + + + + + + + + + + +
+A `SparseTensor` with the concatenated output. +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_inputs` is not a list of `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/cross.md b/site/en/api_docs/python/tf/sparse/cross.md new file mode 100644 index 00000000000..c92da553629 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/cross.md @@ -0,0 +1,99 @@ +description: Generates sparse cross from a list of sparse and dense tensors. + +
+ + +
+ +# tf.sparse.cross + + + + + + + + + +Generates sparse cross from a list of sparse and dense tensors. + + + + + + + + + +For example, if the inputs are + + * inputs[0]: SparseTensor with shape = [2, 2] + [0, 0]: "a" + [1, 0]: "b" + [1, 1]: "c" + * inputs[1]: SparseTensor with shape = [2, 1] + [0, 0]: "d" + [1, 0]: "e" + * inputs[2]: Tensor [["f"], ["g"]] + +then the output will be: + + shape = [2, 2] + [0, 0]: "a_X_d_X_f" + [1, 0]: "b_X_e_X_g" + [1, 1]: "c_X_e_X_g" + + + + + + + + + + + + + +
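The example above can be checked with a short sketch like the following; it assumes TensorFlow 2.x eager execution, and the `_X_` separator in the comments is taken from the output shown above:

```python
import tensorflow as tf

sp_a = tf.sparse.SparseTensor(indices=[[0, 0], [1, 0], [1, 1]],
                              values=["a", "b", "c"], dense_shape=[2, 2])
sp_b = tf.sparse.SparseTensor(indices=[[0, 0], [1, 0]],
                              values=["d", "e"], dense_shape=[2, 1])
dense = tf.constant([["f"], ["g"]])

crossed = tf.sparse.cross([sp_a, sp_b, dense])
print(crossed.values.numpy())
# [b'a_X_d_X_f' b'b_X_e_X_g' b'c_X_e_X_g']
```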
+`inputs` + +An iterable of `Tensor` or `SparseTensor`. +
+`name` + +Optional name for the op. +
+ + + + + + + + + + + +
+A `SparseTensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/cross_hashed.md b/site/en/api_docs/python/tf/sparse/cross_hashed.md new file mode 100644 index 00000000000..b107707a386 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/cross_hashed.md @@ -0,0 +1,121 @@ +description: Generates hashed sparse cross from a list of sparse and dense tensors. + +
+ + +
+ +# tf.sparse.cross_hashed + + + + + + + + + +Generates hashed sparse cross from a list of sparse and dense tensors. + + + + + + + + + +For example, if the inputs are + + * inputs[0]: SparseTensor with shape = [2, 2] + [0, 0]: "a" + [1, 0]: "b" + [1, 1]: "c" + * inputs[1]: SparseTensor with shape = [2, 1] + [0, 0]: "d" + [1, 0]: "e" + * inputs[2]: Tensor [["f"], ["g"]] + +then the output will be: + + shape = [2, 2] + [0, 0]: FingerprintCat64( + Fingerprint64("f"), FingerprintCat64( + Fingerprint64("d"), Fingerprint64("a"))) + [1, 0]: FingerprintCat64( + Fingerprint64("g"), FingerprintCat64( + Fingerprint64("e"), Fingerprint64("b"))) + [1, 1]: FingerprintCat64( + Fingerprint64("g"), FingerprintCat64( + Fingerprint64("e"), Fingerprint64("c"))) + + + + + + + + + + + + + + + + + + + +
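A minimal usage sketch, assuming TensorFlow 2.x eager execution; the exact bucket ids depend on the fingerprint function and on `num_buckets`, so they are not reproduced here:

```python
import tensorflow as tf

ids = tf.sparse.SparseTensor(indices=[[0, 0], [1, 0]],
                             values=["a", "b"], dense_shape=[2, 1])
dense = tf.constant([["f"], ["g"]])

hashed = tf.sparse.cross_hashed([ids, dense], num_buckets=100)
print(hashed.values.dtype)    # <dtype: 'int64'>
print(hashed.values.numpy())  # two bucket ids in [0, 100); values depend on
                              # the fingerprint function
```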
+`inputs` + +An iterable of `Tensor` or `SparseTensor`. +
+`num_buckets` + +An `int` that is `>= 0`. +output = hashed_value%num_buckets if num_buckets > 0 else hashed_value. +
+`hash_key` + +Integer hash_key that will be used by the `FingerprintCat64` +function. If not given, will use a default key. +
+`name` + +Optional name for the op. +
+ + + + + + + + + + + +
+A `SparseTensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/expand_dims.md b/site/en/api_docs/python/tf/sparse/expand_dims.md new file mode 100644 index 00000000000..b28b7fedce6 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/expand_dims.md @@ -0,0 +1,96 @@ +description: Inserts a dimension of 1 into a tensor's shape. + +
+ + +
+ +# tf.sparse.expand_dims + + + + + + + + + +Inserts a dimension of 1 into a tensor's shape. + + + + + + + + + +Given a tensor `sp_input`, this operation inserts a dimension of 1 at the +dimension index `axis` of `sp_input`'s shape. The dimension index `axis` +starts at zero; if you specify a negative number for `axis` it is counted +backwards from the end. + + + + + + + + + + + + + + + + +
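For instance, under TensorFlow 2.x eager execution (the values below are illustrative):

```python
import tensorflow as tf

st = tf.sparse.SparseTensor(indices=[[0, 1], [2, 0]], values=[7, 8],
                            dense_shape=[3, 2])
print(tf.sparse.expand_dims(st, axis=0).dense_shape.numpy())   # [1 3 2]
print(tf.sparse.expand_dims(st, axis=-1).dense_shape.numpy())  # [3 2 1]
```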
+`sp_input` + +A `SparseTensor`. +
+`axis` + +0-D (scalar). Specifies the dimension index at which to expand the +shape of `input`. Must be in the range `[-rank(sp_input) - 1, +rank(sp_input)]`. +
+`name` + +The name of the output `SparseTensor`. +
+ + + + + + + + + + + +
+A `SparseTensor` with the same data as `sp_input`, but its shape has an +additional dimension of size 1 added. +
+ diff --git a/site/en/api_docs/python/tf/sparse/eye.md b/site/en/api_docs/python/tf/sparse/eye.md new file mode 100644 index 00000000000..98a2bb47998 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/eye.md @@ -0,0 +1,99 @@ +description: Creates a two-dimensional sparse tensor with ones along the diagonal. + +
+ + +
+ +# tf.sparse.eye + + + + + + + + + +Creates a two-dimensional sparse tensor with ones along the diagonal. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
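A small usage sketch, assuming TensorFlow 2.x eager execution:

```python
import tensorflow as tf

identity = tf.sparse.eye(3, dtype=tf.float32)
print(tf.sparse.to_dense(identity).numpy())
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
```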
+`num_rows` + +Non-negative integer or `int32` scalar `tensor` giving the number +of rows in the resulting matrix. +
+`num_columns` + +Optional non-negative integer or `int32` scalar `tensor` giving +the number of columns in the resulting matrix. Defaults to `num_rows`. +
+`dtype` + +The type of element in the resulting `Tensor`. +
+`name` + +A name for this `Op`. Defaults to "eye". +
+ + + + + + + + + + + +
+A `SparseTensor` of shape [num_rows, num_columns] with ones along the +diagonal. +
+ diff --git a/site/en/api_docs/python/tf/sparse/fill_empty_rows.md b/site/en/api_docs/python/tf/sparse/fill_empty_rows.md new file mode 100644 index 00000000000..ff9152261ca --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/fill_empty_rows.md @@ -0,0 +1,147 @@ +description: Fills empty rows in the input 2-D SparseTensor with a default value. + +
+ + +
+ +# tf.sparse.fill_empty_rows + + + + + + + + + +Fills empty rows in the input 2-D `SparseTensor` with a default value. + + + + + + + + + +This op adds entries with the specified `default_value` at index +`[row, 0]` for any row in the input that does not already have a value. + +For example, suppose `sp_input` has shape `[5, 6]` and non-empty values: + + [0, 1]: a + [0, 3]: b + [2, 0]: c + [3, 1]: d + +Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values: + + [0, 1]: a + [0, 3]: b + [1, 0]: default_value + [2, 0]: c + [3, 1]: d + [4, 0]: default_value + +Note that the input may have empty columns at the end, with no effect on +this op. + +The output `SparseTensor` will be in row-major order and will have the +same shape as the input. + +This op also returns an indicator vector such that + + empty_row_indicator[i] = True iff row i was an empty row. + + + + + + + + + + + + + + + + +
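The documented example can be exercised roughly as follows; TensorFlow 2.x eager mode is assumed, and `"zz"` is an arbitrary default value chosen for this sketch:

```python
import tensorflow as tf

sp = tf.sparse.SparseTensor(indices=[[0, 1], [0, 3], [2, 0], [3, 1]],
                            values=["a", "b", "c", "d"],
                            dense_shape=[5, 6])
filled, empty_rows = tf.sparse.fill_empty_rows(sp, default_value="zz")
print(empty_rows.numpy())     # [False  True False False  True]
print(filled.values.numpy())  # rows 1 and 4 now hold b'zz'
```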
+`sp_input` + +A `SparseTensor` with shape `[N, M]`. +
+`default_value` + +The value to fill for empty rows, with the same type as +`sp_input.` +
+`name` + +A name prefix for the returned tensors (optional) +
+ + + + + + + + + + + + + + + +
+`sp_ordered_output` + +A `SparseTensor` with shape `[N, M]`, and with all empty +rows filled in with `default_value`. +
+`empty_row_indicator` + +A bool vector of length `N` indicating whether each +input row was empty. +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/from_dense.md b/site/en/api_docs/python/tf/sparse/from_dense.md new file mode 100644 index 00000000000..872d4cc38b4 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/from_dense.md @@ -0,0 +1,84 @@ +description: Converts a dense tensor into a sparse tensor. + +
+ + +
+ +# tf.sparse.from_dense + + + + + + + + + +Converts a dense tensor into a sparse tensor. + + + + + + + + + +Only elements not equal to zero will be present in the result. The resulting +`SparseTensor` has the same dtype and shape as the input. + + + + + + + + + + + + + +
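A minimal round-trip sketch, assuming TensorFlow 2.x eager execution:

```python
import tensorflow as tf

dense = tf.constant([[0, 3, 0],
                     [1, 0, 0]])
sp = tf.sparse.from_dense(dense)
print(sp.indices.numpy())  # [[0 1] [1 0]]
print(sp.values.numpy())   # [3 1]
# Converting back yields the original dense tensor.
print(tf.sparse.to_dense(sp).numpy())
```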
+`tensor` + +A dense `Tensor` to be converted to a `SparseTensor`. +
+`name` + +Optional name for the op. +
+ + + + + + + + + + + +
+The `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/mask.md b/site/en/api_docs/python/tf/sparse/mask.md new file mode 100644 index 00000000000..e5f49af5e26 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/mask.md @@ -0,0 +1,114 @@ +description: Masks elements of IndexedSlices. + +
+ + +
+ +# tf.sparse.mask + + + + + + + + + +Masks elements of `IndexedSlices`. + + + + + + + + + +Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that +contains a subset of the slices of `a`. Only the slices at indices not +specified in `mask_indices` are returned. + +This is useful when you need to extract a subset of slices in an +`IndexedSlices` object. + +#### For example: + + + +```python +# `a` contains slices at indices [12, 26, 37, 45] from a large tensor +# with shape [1000, 10] +a.indices # [12, 26, 37, 45] +tf.shape(a.values) # [4, 10] + +# `b` will be the subset of `a` slices at its second and third indices, so +# we want to mask its first and last indices (which are at absolute +# indices 12, 45) +b = tf.sparse.mask(a, [12, 45]) + +b.indices # [26, 37] +tf.shape(b.values) # [2, 10] +``` + + + + + + + + + + + + + + + + +
+`a` + +An `IndexedSlices` instance. +
+`mask_indices` + +Indices of elements to mask. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The masked `IndexedSlices` instance. +
+ diff --git a/site/en/api_docs/python/tf/sparse/maximum.md b/site/en/api_docs/python/tf/sparse/maximum.md new file mode 100644 index 00000000000..c456f8fe738 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/maximum.md @@ -0,0 +1,103 @@ +description: Returns the element-wise max of two SparseTensors. + +
+ + +
# tf.sparse.maximum

Returns the element-wise max of two SparseTensors.

Assumes the two SparseTensors have the same shape, i.e., no broadcasting.
Example:

```python
sp_zero = tf.sparse.SparseTensor([[0]], [0], [7])
sp_one = tf.sparse.SparseTensor([[1]], [1], [7])
res = tf.sparse.maximum(sp_zero, sp_one)
# "res" should be equal to SparseTensor([[0], [1]], [0, 1], [7]).
```
+`sp_a` + +a `SparseTensor` operand whose dtype is real, and indices +lexicographically ordered. +
+`sp_b` + +the other `SparseTensor` operand with the same requirements (and the +same shape). +
+`name` + +optional name of the operation. +
+ + + + + + + + + + + + +
+`output` + +the output SparseTensor. +
+ diff --git a/site/en/api_docs/python/tf/sparse/minimum.md b/site/en/api_docs/python/tf/sparse/minimum.md new file mode 100644 index 00000000000..e5c03b7217d --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/minimum.md @@ -0,0 +1,103 @@ +description: Returns the element-wise min of two SparseTensors. + +
+ + +
# tf.sparse.minimum

Returns the element-wise min of two SparseTensors.

Assumes the two SparseTensors have the same shape, i.e., no broadcasting.
Example:

```python
sp_zero = tf.sparse.SparseTensor([[0]], [0], [7])
sp_one = tf.sparse.SparseTensor([[1]], [1], [7])
res = tf.sparse.minimum(sp_zero, sp_one)
# "res" should be equal to SparseTensor([[0], [1]], [0, 0], [7]).
```
+`sp_a` + +a `SparseTensor` operand whose dtype is real, and indices +lexicographically ordered. +
+`sp_b` + +the other `SparseTensor` operand with the same requirements (and the +same shape). +
+`name` + +optional name of the operation. +
+ + + + + + + + + + + + +
+`output` + +the output SparseTensor. +
+ diff --git a/site/en/api_docs/python/tf/sparse/reduce_max.md b/site/en/api_docs/python/tf/sparse/reduce_max.md new file mode 100644 index 00000000000..401b26e9cce --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/reduce_max.md @@ -0,0 +1,136 @@ +description: Computes the max of elements across dimensions of a SparseTensor. + +
+ + +
# tf.sparse.reduce_max

Computes the max of elements across dimensions of a SparseTensor.

This Op takes a SparseTensor and is the sparse counterpart to
tf.reduce_max(). In particular, this Op also returns a dense `Tensor`
if `output_is_sparse` is `False`, or a `SparseTensor` if `output_is_sparse`
is `True`.

Note: A gradient is not defined for this function, so it can't be used
in training models that need gradient descent.

Reduces `sp_input` along the dimensions given in `axis`. Unless
`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in
`axis`. If `keepdims` is true, the reduced dimensions are retained
with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor
with a single element is returned. Additionally, the axes can be negative,
similar to the indexing rules in Python.

The values not defined in `sp_input` don't participate in the reduce max,
as opposed to being implicitly assumed to be 0 -- hence it can return negative
values for sparse `axis`. But, in case there are no values in
`axis`, it will reduce to 0. See the second example below.

#### For example:

```python
# 'x' represents [[1, ?, 2]
#                 [?, 3, ?]]
# where ? is implicitly-zero.
tf.sparse.reduce_max(x) ==> 3
tf.sparse.reduce_max(x, 0) ==> [1, 3, 2]
tf.sparse.reduce_max(x, 1) ==> [2, 3]  # Can also use -1 as the axis.
tf.sparse.reduce_max(x, 1, keepdims=True) ==> [[2], [3]]
tf.sparse.reduce_max(x, [0, 1]) ==> 3

# 'y' represents [[-7, ?]
#                 [ 4, 3]
#                 [ ?, ?]]
tf.sparse.reduce_max(y, 1) ==> [-7, 4, 0]
```
+`sp_input` + +The SparseTensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce; list or scalar. If `None` (the +default), reduces all dimensions. +
+`keepdims` + +If true, retain reduced dimensions with length 1. +
+`output_is_sparse` + +If true, returns a `SparseTensor` instead of a dense +`Tensor` (the default). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced Tensor or the reduced SparseTensor if `output_is_sparse` is +True. +
+ diff --git a/site/en/api_docs/python/tf/sparse/reduce_sum.md b/site/en/api_docs/python/tf/sparse/reduce_sum.md new file mode 100644 index 00000000000..b58f427c5ef --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/reduce_sum.md @@ -0,0 +1,125 @@ +description: Computes the sum of elements across dimensions of a SparseTensor. + +
+ + +
+ +# tf.sparse.reduce_sum + + + + + + + + + +Computes the sum of elements across dimensions of a SparseTensor. + + + + + + + +This Op takes a SparseTensor and is the sparse counterpart to +tf.reduce_sum(). In particular, this Op also returns a dense `Tensor` +if `output_is_sparse` is `False`, or a `SparseTensor` if `output_is_sparse` +is `True`. + +Note: if `output_is_sparse` is True, a gradient is not defined for this +function, so it can't be used in training models that need gradient descent. + +Reduces `sp_input` along the dimensions given in `axis`. Unless `keepdims` is +true, the rank of the tensor is reduced by 1 for each entry in `axis`. If +`keepdims` is true, the reduced dimensions are retained with length 1. + +If `axis` has no entries, all dimensions are reduced, and a tensor +with a single element is returned. Additionally, the axes can be negative, +similar to the indexing rules in Python. + +#### For example: + + + +```python +# 'x' represents [[1, ?, 1] +# [?, 1, ?]] +# where ? is implicitly-zero. +tf.sparse.reduce_sum(x) ==> 3 +tf.sparse.reduce_sum(x, 0) ==> [1, 1, 1] +tf.sparse.reduce_sum(x, 1) ==> [2, 1] # Can also use -1 as the axis. +tf.sparse.reduce_sum(x, 1, keepdims=True) ==> [[2], [1]] +tf.sparse.reduce_sum(x, [0, 1]) ==> 3 +``` + + + + + + + + + + + + + + + + + + + + + + +
+`sp_input` + +The SparseTensor to reduce. Should have numeric type. +
+`axis` + +The dimensions to reduce; list or scalar. If `None` (the +default), reduces all dimensions. +
+`keepdims` + +If true, retain reduced dimensions with length 1. +
+`output_is_sparse` + +If true, returns a `SparseTensor` instead of a dense +`Tensor` (the default). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The reduced Tensor or the reduced SparseTensor if `output_is_sparse` is +True. +
+ diff --git a/site/en/api_docs/python/tf/sparse/reorder.md b/site/en/api_docs/python/tf/sparse/reorder.md new file mode 100644 index 00000000000..dafd4ae7f01 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/reorder.md @@ -0,0 +1,120 @@ +description: Reorders a SparseTensor into the canonical, row-major ordering. + +
+ + +
+ +# tf.sparse.reorder + + + + + + + + + +Reorders a `SparseTensor` into the canonical, row-major ordering. + + + + + + + + + +Note that by convention, all sparse ops preserve the canonical ordering +along increasing dimension number. The only time ordering can be violated +is during manual manipulation of the indices and values to add entries. + +Reordering does not affect the shape of the `SparseTensor`. + +For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`: + + [0, 3]: b + [0, 1]: a + [3, 1]: d + [2, 0]: c + +then the output will be a `SparseTensor` of shape `[4, 5]` and +`indices` / `values`: + + [0, 1]: a + [0, 3]: b + [2, 0]: c + [3, 1]: d + + + + + + + + + + + + + +
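For illustration, a sketch that feeds deliberately unordered indices (TensorFlow 2.x eager mode assumed; the values mirror the example above):

```python
import tensorflow as tf

# Indices deliberately out of canonical (row-major) order.
st = tf.sparse.SparseTensor(indices=[[0, 3], [0, 1], [3, 1], [2, 0]],
                            values=["b", "a", "d", "c"],
                            dense_shape=[4, 5])
reordered = tf.sparse.reorder(st)
print(reordered.indices.numpy())
# [[0 1] [0 3] [2 0] [3 1]]
print(reordered.values.numpy())
# [b'a' b'b' b'c' b'd']
```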
+`sp_input` + +The input `SparseTensor`. +
+`name` + +A name prefix for the returned tensors (optional) +
+ + + + + + + + + + + +
+A `SparseTensor` with the same shape and non-empty values, but in +canonical ordering. +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/reset_shape.md b/site/en/api_docs/python/tf/sparse/reset_shape.md new file mode 100644 index 00000000000..27c90187f21 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/reset_shape.md @@ -0,0 +1,162 @@ +description: Resets the shape of a SparseTensor with indices and values unchanged. + +
+ + +
+ +# tf.sparse.reset_shape + + + + + + + + + +Resets the shape of a `SparseTensor` with indices and values unchanged. + + + + + + + + + +If `new_shape` is None, returns a copy of `sp_input` with its shape reset +to the tight bounding box of `sp_input`. This will be a shape consisting of +all zeros if sp_input has no values. + +If `new_shape` is provided, then it must be larger or equal in all dimensions +compared to the shape of `sp_input`. When this condition is met, the returned +SparseTensor will have its shape reset to `new_shape` and its indices and +values unchanged from that of `sp_input.` + +#### For example: + + +Consider a `sp_input` with shape [2, 3, 5]: + + [0, 0, 1]: a + [0, 1, 0]: b + [0, 2, 2]: c + [1, 0, 3]: d + +- It is an error to set `new_shape` as [3, 7] since this represents a + rank-2 tensor while `sp_input` is rank-3. This is either a ValueError + during graph construction (if both shapes are known) or an OpError during + run time. + +- Setting `new_shape` as [2, 3, 6] will be fine as this shape is larger or + equal in every dimension compared to the original shape [2, 3, 5]. + +- On the other hand, setting new_shape as [2, 3, 4] is also an error: The + third dimension is smaller than the original shape [2, 3, 5] (and an + `InvalidArgumentError` will be raised). + +- If `new_shape` is None, the returned SparseTensor will have a shape + [2, 3, 4], which is the tight bounding box of `sp_input`. + + + + + + + + + + + + + + + +
+`sp_input` + +The input `SparseTensor`. +
+`new_shape` + +None or a vector representing the new shape for the returned +`SparseTensor`. +
+ + + + + + + + + + + +
A `SparseTensor` with indices and values unchanged from `sp_input`. Its shape is
`new_shape` if that is set. Otherwise it is the tight bounding box of
`sp_input`.
+ + + + + + + + + + + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+`ValueError` + +If `new_shape` represents a tensor with a different rank from +that of `sp_input` (if shapes are known when graph is constructed). +
+`ValueError` + +If `new_shape` is determined during graph build to have +dimension sizes that are too small. +
+`OpError` + +- If `new_shape` has dimension sizes that are too small. +- If shapes are not known during graph construction time, and during run +time it is found out that the ranks do not match. +
+ diff --git a/site/en/api_docs/python/tf/sparse/reshape.md b/site/en/api_docs/python/tf/sparse/reshape.md new file mode 100644 index 00000000000..d1dd6ab0696 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/reshape.md @@ -0,0 +1,151 @@ +description: Reshapes a SparseTensor to represent values in a new dense shape. + +
+ + +
+ +# tf.sparse.reshape + + + + + + + + + +Reshapes a `SparseTensor` to represent values in a new dense shape. + + + + + + + + + +This operation has the same semantics as `reshape` on the represented dense +tensor. The indices of non-empty values in `sp_input` are recomputed based +on the new dense shape, and a new `SparseTensor` is returned containing the +new indices and new shape. The order of non-empty values in `sp_input` is +unchanged. + +If one component of `shape` is the special value -1, the size of that +dimension is computed so that the total dense size remains constant. At +most one component of `shape` can be -1. The number of dense elements +implied by `shape` must be the same as the number of dense elements +originally represented by `sp_input`. + +For example, if `sp_input` has shape `[2, 3, 6]` and `indices` / `values`: + + [0, 0, 0]: a + [0, 0, 1]: b + [0, 1, 0]: c + [1, 0, 0]: d + [1, 2, 3]: e + +and `shape` is `[9, -1]`, then the output will be a `SparseTensor` of +shape `[9, 4]` and `indices` / `values`: + + [0, 0]: a + [0, 1]: b + [1, 2]: c + [4, 2]: d + [8, 1]: e + + + + + + + + + + + + + + + + +
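The example above corresponds to roughly the following, assuming TensorFlow 2.x eager execution:

```python
import tensorflow as tf

st = tf.sparse.SparseTensor(
    indices=[[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 2, 3]],
    values=["a", "b", "c", "d", "e"],
    dense_shape=[2, 3, 6])
reshaped = tf.sparse.reshape(st, shape=[9, -1])
print(reshaped.dense_shape.numpy())  # [9 4]
print(reshaped.indices.numpy())
# [[0 0] [0 1] [1 2] [4 2] [8 1]]
```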
+`sp_input` + +The input `SparseTensor`. +
+`shape` + +A 1-D (vector) int64 `Tensor` specifying the new dense shape of the +represented `SparseTensor`. +
+`name` + +A name prefix for the returned tensors (optional) +
+ + + + + + + + + + + +
+A `SparseTensor` with the same non-empty values but with indices calculated +by the new dense shape. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+`ValueError` + +If argument `shape` requests a `SparseTensor` with a different +number of elements than `sp_input`. +
+`ValueError` + +If `shape` has more than one inferred (== -1) dimension. +
+ diff --git a/site/en/api_docs/python/tf/sparse/retain.md b/site/en/api_docs/python/tf/sparse/retain.md new file mode 100644 index 00000000000..cfee3289166 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/retain.md @@ -0,0 +1,112 @@ +description: Retains specified non-empty values within a SparseTensor. + +
+ + +
+ +# tf.sparse.retain + + + + + + + + + +Retains specified non-empty values within a `SparseTensor`. + + + + + + + + + +For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values: + + [0, 1]: a + [0, 3]: b + [2, 0]: c + [3, 1]: d + +and `to_retain = [True, False, False, True]`, then the output will +be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values: + + [0, 1]: a + [3, 1]: d + + + + + + + + + + + + + +
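A short sketch of the example above (TensorFlow 2.x eager mode assumed):

```python
import tensorflow as tf

sp = tf.sparse.SparseTensor(indices=[[0, 1], [0, 3], [2, 0], [3, 1]],
                            values=["a", "b", "c", "d"],
                            dense_shape=[4, 5])
kept = tf.sparse.retain(sp, to_retain=[True, False, False, True])
print(kept.indices.numpy())  # [[0 1] [3 1]]
print(kept.values.numpy())   # [b'a' b'd']
```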
+`sp_input` + +The input `SparseTensor` with `N` non-empty elements. +
+`to_retain` + +A bool vector of length `N` with `M` true values. +
+ + + + + + + + + + + +
+A `SparseTensor` with the same shape as the input and `M` non-empty +elements corresponding to the true positions in `to_retain`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/segment_mean.md b/site/en/api_docs/python/tf/sparse/segment_mean.md new file mode 100644 index 00000000000..87a5fe48f0a --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/segment_mean.md @@ -0,0 +1,107 @@ +description: Computes the mean along sparse segments of a tensor. + +
+ + +
+ +# tf.sparse.segment_mean + + + + + + + + + +Computes the mean along sparse segments of a tensor. + + + + + + + +Read [the section on +segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) +for an explanation of segments. + +Like tf.math.segment_mean, but `segment_ids` can have rank less than +`data`'s first dimension, selecting a subset of dimension 0, specified by +`indices`. +`segment_ids` is allowed to have missing ids, in which case the output will +be zeros at those indices. In those cases `num_segments` is used to determine +the size of the output. + + + + + + + + + + + + + + + + + + + + + + +
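An illustrative sketch; the data and segment layout are made up for this example, and TensorFlow 2.x eager execution is assumed:

```python
import tensorflow as tf

c = tf.constant([[1.0, 2.0, 3.0, 4.0],
                 [-1.0, -2.0, -3.0, -4.0],
                 [5.0, 6.0, 7.0, 8.0]])

# Average rows 0 and 2 into segment 0, keep row 1 as segment 1.
out = tf.sparse.segment_mean(c,
                             indices=tf.constant([0, 2, 1]),
                             segment_ids=tf.constant([0, 0, 1]))
print(out.numpy())
# [[ 3.  4.  5.  6.]
#  [-1. -2. -3. -4.]]
```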
+`data` + +A `Tensor` with data that will be assembled in the output. +
+`indices` + +A 1-D `Tensor` with indices into `data`. Has same rank as +`segment_ids`. +
+`segment_ids` + +A 1-D `Tensor` with indices into the output `Tensor`. Values +should be sorted and can be repeated. +
+`num_segments` + +An optional int32 scalar. Indicates the size of the output +`Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
A `tensor` of the same shape as `data`, except for dimension 0, which
has size `k`, the number of segments specified via `num_segments` or
inferred from the last element in `segment_ids`.
+ diff --git a/site/en/api_docs/python/tf/sparse/segment_sqrt_n.md b/site/en/api_docs/python/tf/sparse/segment_sqrt_n.md new file mode 100644 index 00000000000..3feab938528 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/segment_sqrt_n.md @@ -0,0 +1,103 @@ +description: Computes the sum along sparse segments of a tensor divided by the sqrt(N). + +
+ + +
+ +# tf.sparse.segment_sqrt_n + + + + + + + + + +Computes the sum along sparse segments of a tensor divided by the sqrt(N). + + + + + + + +Read [the section on +segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) +for an explanation of segments. + +Like tf.sparse.segment_mean, but instead of dividing by the size of the +segment, `N`, divide by `sqrt(N)` instead. + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor` with data that will be assembled in the output. +
+`indices` + +A 1-D `Tensor` with indices into `data`. Has same rank as +`segment_ids`. +
+`segment_ids` + +A 1-D `Tensor` with indices into the output `Tensor`. Values +should be sorted and can be repeated. +
+`num_segments` + +An optional int32 scalar. Indicates the size of the output +`Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
A `tensor` of the same shape as `data`, except for dimension 0, which
has size `k`, the number of segments specified via `num_segments` or
inferred from the last element in `segment_ids`.
+ diff --git a/site/en/api_docs/python/tf/sparse/segment_sum.md b/site/en/api_docs/python/tf/sparse/segment_sum.md new file mode 100644 index 00000000000..6cea4ec099d --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/segment_sum.md @@ -0,0 +1,139 @@ +description: Computes the sum along sparse segments of a tensor. + +
+ + +
+ +# tf.sparse.segment_sum + + + + + + + + + +Computes the sum along sparse segments of a tensor. + + + + + + + +Read [the section on +segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation) +for an explanation of segments. + +Like tf.math.segment_sum, but `segment_ids` can have rank less than `data`'s +first dimension, selecting a subset of dimension 0, specified by `indices`. +`segment_ids` is allowed to have missing ids, in which case the output will +be zeros at those indices. In those cases `num_segments` is used to determine +the size of the output. + +#### For example: + + + +```python +c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) + +# Select two rows, one segment. +tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0])) +# => [[0 0 0 0]] + +# Select two rows, two segment. +tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1])) +# => [[ 1 2 3 4] +# [-1 -2 -3 -4]] + +# With missing segment ids. +tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 2]), + num_segments=4) +# => [[ 1 2 3 4] +# [ 0 0 0 0] +# [-1 -2 -3 -4] +# [ 0 0 0 0]] + +# Select all rows, two segments. +tf.sparse.segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1])) +# => [[0 0 0 0] +# [5 6 7 8]] + +# Which is equivalent to: +tf.math.segment_sum(c, tf.constant([0, 0, 1])) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A `Tensor` with data that will be assembled in the output. +
+`indices` + +A 1-D `Tensor` with indices into `data`. Has same rank as +`segment_ids`. +
+`segment_ids` + +A 1-D `Tensor` with indices into the output `Tensor`. Values +should be sorted and can be repeated. +
+`num_segments` + +An optional int32 scalar. Indicates the size of the output +`Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
A `tensor` of the same shape as `data`, except for dimension 0, which
has size `k`, the number of segments specified via `num_segments` or
inferred from the last element in `segment_ids`.
+ diff --git a/site/en/api_docs/python/tf/sparse/slice.md b/site/en/api_docs/python/tf/sparse/slice.md new file mode 100644 index 00000000000..6e75cf4de40 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/slice.md @@ -0,0 +1,128 @@ +description: Slice a SparseTensor based on the start and size. + +
+ + +
# tf.sparse.slice

Slice a `SparseTensor` based on the `start` and `size`.

For example, if the input is

    input_tensor = shape = [2, 7]
    [    a   d e  ]
    [b c          ]

Graphically the output tensors are:

    sparse.slice([0, 0], [2, 4]) = shape = [2, 4]
    [    a  ]
    [b c    ]

    sparse.slice([0, 4], [2, 3]) = shape = [2, 3]
    [ d e  ]
    [      ]
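A runnable approximation of the example above; the column positions chosen for the non-empty values are an assumption consistent with the sketch, and TensorFlow 2.x eager mode is assumed:

```python
import tensorflow as tf

# Dense view:  [[    a   d e  ]
#               [b c          ]]
sp = tf.sparse.SparseTensor(indices=[[0, 2], [0, 4], [0, 5], [1, 0], [1, 1]],
                            values=["a", "d", "e", "b", "c"],
                            dense_shape=[2, 7])
left = tf.sparse.slice(sp, start=[0, 0], size=[2, 4])
right = tf.sparse.slice(sp, start=[0, 4], size=[2, 3])
print(left.values.numpy())   # [b'a' b'b' b'c']
print(right.values.numpy())  # [b'd' b'e']
```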
+`sp_input` + +The `SparseTensor` to split. +
+`start` + +1-D. tensor represents the start of the slice. +
+`size` + +1-D. tensor represents the size of the slice. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
A `SparseTensor` object resulting from slicing.
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/softmax.md b/site/en/api_docs/python/tf/sparse/softmax.md new file mode 100644 index 00000000000..0f63609e09d --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/softmax.md @@ -0,0 +1,123 @@ +description: Applies softmax to a batched N-D SparseTensor. + +
+ + +
+ +# tf.sparse.softmax + + + + + + + + + +Applies softmax to a batched N-D `SparseTensor`. + + + + + + + + + +The inputs represent an N-D SparseTensor with logical shape `[..., B, C]` +(where `N >= 2`), and with indices sorted in the canonical lexicographic +order. + +This op is equivalent to applying the normal tf.nn.softmax() to each +innermost logical submatrix with shape `[B, C]`, but with the catch that *the +implicitly zero elements do not participate*. Specifically, the algorithm is +equivalent to: + + (1) Applies tf.nn.softmax() to a densified view of each innermost + submatrix with shape `[B, C]`, along the size-C dimension; + (2) Masks out the original implicitly-zero locations; + (3) Renormalizes the remaining elements. + +Hence, the `SparseTensor` result has exactly the same non-zero indices and +shape. + +#### Example: + + + +```python +# First batch: +# [? e.] +# [1. ? ] +# Second batch: +# [e ? ] +# [e e ] +shape = [2, 2, 2] # 3-D SparseTensor +values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]]) +indices = np.vstack(np.where(values)).astype(np.int64).T + +result = tf.sparse.softmax(tf.SparseTensor(indices, values, shape)) +# ...returning a 3-D SparseTensor, equivalent to: +# [? 1.] [1 ?] +# [1. ? ] and [.5 .5] +# where ? means implicitly zero. +``` + + + + + + + + + + + + + +
+`sp_input` + +N-D `SparseTensor`, where `N >= 2`. +
+`name` + +optional name of the operation. +
+ + + + + + + + + + + + +
+`output` + +N-D `SparseTensor` representing the results. +
+ diff --git a/site/en/api_docs/python/tf/sparse/sparse_dense_matmul.md b/site/en/api_docs/python/tf/sparse/sparse_dense_matmul.md new file mode 100644 index 00000000000..827f3313e75 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/sparse_dense_matmul.md @@ -0,0 +1,293 @@ +description: Multiply SparseTensor (or dense Matrix) (of rank 2) "A" by dense matrix + +
+ + +
+ +# tf.sparse.sparse_dense_matmul + + + + + + + + + +Multiply SparseTensor (or dense Matrix) (of rank 2) "A" by dense matrix + + + + + + + + + +(or SparseTensor) "B". Please note that one and only one of the inputs MUST +be a SparseTensor and the other MUST be a dense matrix. + +No validity checking is performed on the indices of `A`. However, the +following input format is recommended for optimal behavior: + +* If `adjoint_a == false`: `A` should be sorted in lexicographically + increasing order. Use sparse.reorder if you're not sure. +* If `adjoint_a == true`: `A` should be sorted in order of increasing + dimension 1 (i.e., "column major" order instead of "row major" order). + +Using tf.nn.embedding_lookup_sparse for sparse multiplication: + +It's not obvious but you can consider `embedding_lookup_sparse` as another +sparse and dense multiplication. In some situations, you may prefer to use +`embedding_lookup_sparse` even though you're not dealing with embeddings. + +There are two questions to ask in the decision process: Do you need gradients +computed as sparse too? Is your sparse data represented as two +`SparseTensor`s: ids and values? There is more explanation about data format +below. If you answer any of these questions as yes, consider using +tf.nn.embedding_lookup_sparse. + +Following explains differences between the expected SparseTensors: +For example if dense form of your sparse data has shape `[3, 5]` and values: + + [[ a ] + [b c] + [ d ]] + + +`SparseTensor` format expected by `sparse_tensor_dense_matmul`: + `sp_a` (indices, values): + + [0, 1]: a + [1, 0]: b + [1, 4]: c + [2, 2]: d + +`SparseTensor` format expected by `embedding_lookup_sparse`: + `sp_ids` `sp_weights` + + [0, 0]: 1 [0, 0]: a + [1, 0]: 0 [1, 0]: b + [1, 1]: 4 [1, 1]: c + [2, 0]: 2 [2, 0]: d + + +Deciding when to use `sparse_tensor_dense_matmul` vs. +`matmul`(a_is_sparse=True): + +There are a number of questions to ask in the decision process, including: + +* Will the SparseTensor `A` fit in memory if densified? +* Is the column count of the product large (>> 1)? +* Is the density of `A` larger than approximately 15%? + +If the answer to several of these questions is yes, consider +converting the `SparseTensor` to a dense one and using tf.matmul with +`a_is_sparse=True`. + +This operation tends to perform well when `A` is more sparse, if the column +size of the product is small (e.g. matrix-vector multiplication), if +`sp_a.dense_shape` takes on large values. + +Below is a rough speed comparison between `sparse_tensor_dense_matmul`, +labeled 'sparse', and `matmul`(a_is_sparse=True), labeled 'dense'. For +purposes of the comparison, the time spent converting from a `SparseTensor` to +a dense `Tensor` is not included, so it is overly conservative with respect to +the time ratio. 
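Before the benchmarks, a minimal usage sketch may help; it assumes TensorFlow 2.x eager execution and uses made-up values:

```python
import tensorflow as tf

sp_a = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                              values=[1.0, 2.0],
                              dense_shape=[2, 3])
b = tf.ones([3, 4])

# Rank-2 SparseTensor times dense matrix -> dense matrix.
c = tf.sparse.sparse_dense_matmul(sp_a, b)
print(c.shape)  # (2, 4)
print(c.numpy())
# [[1. 1. 1. 1.]
#  [2. 2. 2. 2.]]
```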
+ +#### Benchmark system: + + +CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB +GPU: NVidia Tesla k40c + +#### Compiled with: + + +`-c opt --config=cuda --copt=-mavx` + +``` +tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks +A sparse [m, k] with % nonzero values between 1% and 80% +B dense [k, n] + +% nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense) +0.01 1 True 100 100 0.000221166 0.00010154 0.459112 +0.01 1 True 100 1000 0.00033858 0.000109275 0.322745 +0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385 +0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669 +0.01 1 False 100 100 0.000208085 0.000107603 0.51711 +0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762 +0.01 1 False 1000 100 0.000308222 0.00010345 0.335635 +0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124 +0.01 10 True 100 100 0.000218522 0.000105537 0.482958 +0.01 10 True 100 1000 0.000340882 0.000111641 0.327506 +0.01 10 True 1000 100 0.000315472 0.000117376 0.372064 +0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128 +0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354 +0.01 10 False 100 1000 0.000330552 0.000112615 0.340687 +0.01 10 False 1000 100 0.000341277 0.000114097 0.334324 +0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549 +0.01 25 True 100 100 0.000207806 0.000105977 0.509981 +0.01 25 True 100 1000 0.000322879 0.00012921 0.400181 +0.01 25 True 1000 100 0.00038262 0.00014158 0.370035 +0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504 +0.01 25 False 100 100 0.000209401 0.000104696 0.499979 +0.01 25 False 100 1000 0.000321161 0.000130737 0.407076 +0.01 25 False 1000 100 0.000377012 0.000136801 0.362856 +0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413 +0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833 +0.2 1 True 100 1000 0.000348674 0.000147475 0.422959 +0.2 1 True 1000 100 0.000336908 0.00010122 0.300439 +0.2 1 True 1000 1000 0.001022 0.000203274 0.198898 +0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746 +0.2 1 False 100 1000 0.000356127 0.000146824 0.41228 +0.2 1 False 1000 100 0.000322664 0.000100918 0.312764 +0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648 +0.2 10 True 100 100 0.000211692 0.000109903 0.519165 +0.2 10 True 100 1000 0.000372819 0.000164321 0.440753 +0.2 10 True 1000 100 0.000338651 0.000144806 0.427596 +0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064 +0.2 10 False 100 100 0.000215727 0.000110502 0.512231 +0.2 10 False 100 1000 0.000375419 0.0001613 0.429653 +0.2 10 False 1000 100 0.000336999 0.000145628 0.432132 +0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618 +0.2 25 True 100 100 0.000218705 0.000129913 0.594009 +0.2 25 True 100 1000 0.000394794 0.00029428 0.745402 +0.2 25 True 1000 100 0.000404483 0.0002693 0.665788 +0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052 +0.2 25 False 100 100 0.000221494 0.0001306 0.589632 +0.2 25 False 100 1000 0.000396436 0.000297204 0.74969 +0.2 25 False 1000 100 0.000409346 0.000270068 0.659754 +0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046 +0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836 +0.5 1 True 100 1000 0.000415328 0.000223073 0.537101 +0.5 1 True 1000 100 0.000358324 0.00011269 0.314492 +0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851 +0.5 1 False 100 100 0.000224196 0.000101423 0.452386 +0.5 1 False 100 1000 0.000400987 0.000223286 0.556841 +0.5 1 False 1000 100 0.000368825 0.00011224 0.304318 +0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563 +0.5 10 True 100 100 0.000222125 0.000112308 0.505608 +0.5 10 True 100 1000 
0.000461088 0.00032357 0.701753 +0.5 10 True 1000 100 0.000394624 0.000225497 0.571422 +0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801 +0.5 10 False 100 100 0.000232083 0.000114978 0.495418 +0.5 10 False 100 1000 0.000454574 0.000324632 0.714146 +0.5 10 False 1000 100 0.000379097 0.000227768 0.600817 +0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638 +0.5 25 True 100 100 0.00023429 0.000151703 0.647501 +0.5 25 True 100 1000 0.000497462 0.000598873 1.20386 +0.5 25 True 1000 100 0.000460778 0.000557038 1.20891 +0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845 +0.5 25 False 100 100 0.000228981 0.000155334 0.678371 +0.5 25 False 100 1000 0.000496139 0.000620789 1.25124 +0.5 25 False 1000 100 0.00045473 0.000551528 1.21287 +0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927 +0.8 1 True 100 100 0.000222037 0.000105301 0.47425 +0.8 1 True 100 1000 0.000410804 0.000329327 0.801664 +0.8 1 True 1000 100 0.000349735 0.000131225 0.375212 +0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633 +0.8 1 False 100 100 0.000214079 0.000107486 0.502085 +0.8 1 False 100 1000 0.000413746 0.000323244 0.781261 +0.8 1 False 1000 100 0.000348983 0.000131983 0.378193 +0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282 +0.8 10 True 100 100 0.000229159 0.00011825 0.516017 +0.8 10 True 100 1000 0.000498845 0.000532618 1.0677 +0.8 10 True 1000 100 0.000383126 0.00029935 0.781336 +0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689 +0.8 10 False 100 100 0.000230783 0.000124958 0.541452 +0.8 10 False 100 1000 0.000493393 0.000550654 1.11606 +0.8 10 False 1000 100 0.000377167 0.000298581 0.791642 +0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024 +0.8 25 True 100 100 0.000233496 0.000175241 0.75051 +0.8 25 True 100 1000 0.00055654 0.00102658 1.84458 +0.8 25 True 1000 100 0.000463814 0.000783267 1.68875 +0.8 25 True 1000 1000 0.00186905 0.00755344 4.04132 +0.8 25 False 100 100 0.000240243 0.000175047 0.728625 +0.8 25 False 100 1000 0.000578102 0.00104499 1.80763 +0.8 25 False 1000 100 0.000485113 0.000776849 1.60138 +0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992 +``` + + + + + + + + + + + + + + + + + + + + + + +
+`sp_a` + +SparseTensor (or dense Matrix) A, of rank 2. +
+`b` + +dense Matrix (or SparseTensor) B, with the same dtype as sp_a. +
+`adjoint_a` + +Use the adjoint of A in the matrix multiply. If A is complex, +this is transpose(conj(A)). Otherwise it's transpose(A). +
+`adjoint_b` + +Use the adjoint of B in the matrix multiply. If B is complex, +this is transpose(conj(B)). Otherwise it's transpose(B). +
+`name` + +A name prefix for the returned tensors (optional) +
+ + + + + + + + + + + +
+A dense matrix (pseudo-code in dense np.matrix notation): +`A = A.H if adjoint_a else A` +`B = B.H if adjoint_b else B` +`return A*B` +
+ diff --git a/site/en/api_docs/python/tf/sparse/split.md b/site/en/api_docs/python/tf/sparse/split.md new file mode 100644 index 00000000000..d9e8bcc2cf1 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/split.md @@ -0,0 +1,120 @@ +description: Split a SparseTensor into num_split tensors along axis. + +
+ + +
+ +# tf.sparse.split + + + + + + + + + +Split a `SparseTensor` into `num_split` tensors along `axis`. + + + + + + + +If the `sp_input.dense_shape[axis]` is not an integer multiple of `num_split` +each slice starting from 0:`shape[axis] % num_split` gets extra one +dimension. For example, if `axis = 1` and `num_split = 2` and the +input is: + + input_tensor = shape = [2, 7] + [ a d e ] + [b c ] + +Graphically the output tensors are: + + output_tensor[0] = + [ a ] + [b c ] + + output_tensor[1] = + [ d e ] + [ ] + + + + + + + + + + + + + + + + + + + +
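A runnable sketch of the example above; TensorFlow 2.x eager mode is assumed, and keyword arguments are used for clarity:

```python
import tensorflow as tf

# Dense view:  [[    a   d e  ]
#               [b c          ]]
sp = tf.sparse.SparseTensor(indices=[[0, 2], [0, 4], [0, 5], [1, 0], [1, 1]],
                            values=["a", "d", "e", "b", "c"],
                            dense_shape=[2, 7])
out0, out1 = tf.sparse.split(sp_input=sp, num_split=2, axis=1)
print(out0.dense_shape.numpy())  # [2 4]  (slice 0 gets the extra column)
print(out1.dense_shape.numpy())  # [2 3]
print(out0.values.numpy())       # [b'a' b'b' b'c']
print(out1.values.numpy())       # [b'd' b'e']
```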
+`sp_input` + +The `SparseTensor` to split. +
+`num_split` + +A Python integer. The number of ways to split. +
+`axis` + +A 0-D `int32` `Tensor`. The dimension along which to split. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+`num_split` `SparseTensor` objects resulting from splitting `value`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/to_dense.md b/site/en/api_docs/python/tf/sparse/to_dense.md new file mode 100644 index 00000000000..993eacbf4b5 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/to_dense.md @@ -0,0 +1,134 @@ +description: Converts a SparseTensor into a dense tensor. + +
+ + +
+ +# tf.sparse.to_dense + + + + + + + + + +Converts a `SparseTensor` into a dense tensor. + + + + + + + + + +This op is a convenience wrapper around `sparse_to_dense` for `SparseTensor`s. + +For example, if `sp_input` has shape `[3, 5]` and non-empty string values: + + [0, 1]: a + [0, 3]: b + [2, 0]: c + +and `default_value` is `x`, then the output will be a dense `[3, 5]` +string tensor with values: + + [[x a x b x] + [x x x x x] + [c x x x x]] + +Indices must be without repeats. This is only +tested if `validate_indices` is `True`. + + + + + + + + + + + + + + + + + + + +
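The example above corresponds to the following sketch, assuming TensorFlow 2.x eager execution:

```python
import tensorflow as tf

sp = tf.sparse.SparseTensor(indices=[[0, 1], [0, 3], [2, 0]],
                            values=["a", "b", "c"],
                            dense_shape=[3, 5])
print(tf.sparse.to_dense(sp, default_value="x").numpy())
# [[b'x' b'a' b'x' b'b' b'x']
#  [b'x' b'x' b'x' b'x' b'x']
#  [b'c' b'x' b'x' b'x' b'x']]
```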
+`sp_input` + +The input `SparseTensor`. +
+`default_value` + +Scalar value to set for indices not specified in +`sp_input`. Defaults to zero. +
+`validate_indices` + +A boolean value. If `True`, indices are checked to make +sure they are sorted in lexicographic order and that there are no repeats. +
+`name` + +A name prefix for the returned tensors (optional). +
+ + + + + + + + + + + +
+A dense tensor with shape `sp_input.dense_shape` and values specified by +the non-empty values in `sp_input`. Indices not in `sp_input` are assigned +`default_value`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/to_indicator.md b/site/en/api_docs/python/tf/sparse/to_indicator.md new file mode 100644 index 00000000000..9df1e6febb0 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/to_indicator.md @@ -0,0 +1,137 @@ +description: Converts a SparseTensor of ids into a dense bool indicator tensor. + +
+ + +
+ +# tf.sparse.to_indicator + + + + + + + + + +Converts a `SparseTensor` of ids into a dense bool indicator tensor. + + + + + + + + + +The last dimension of `sp_input.indices` is discarded and replaced with +the values of `sp_input`. If `sp_input.dense_shape = [D0, D1, ..., Dn, K]`, +then `output.shape = [D0, D1, ..., Dn, vocab_size]`, where + + output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True + +and False elsewhere in `output`. + +For example, if `sp_input.dense_shape = [2, 3, 4]` with non-empty values: + + [0, 0, 0]: 0 + [0, 1, 0]: 10 + [1, 0, 3]: 103 + [1, 1, 1]: 150 + [1, 1, 2]: 149 + [1, 1, 3]: 150 + [1, 2, 1]: 121 + +and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool +tensor with False everywhere except at positions + + (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150), + (1, 2, 121). + +Note that repeats are allowed in the input SparseTensor. +This op is useful for converting `SparseTensor`s into dense formats for +compatibility with ops that expect dense tensors. + +The input `SparseTensor` must be in row-major order. + + + + + + + + + + + + + + + + +
+`sp_input` + +A `SparseTensor` with `values` property of type `int32` or +`int64`. +
+`vocab_size` + +A scalar int64 Tensor (or Python int) containing the new size +of the last dimension, `all(0 <= sp_input.values < vocab_size)`. +
+`name` + +A name prefix for the returned tensors (optional) +
+ + + + + + + + + + + +
+A dense bool indicator tensor representing the indices with specified value. +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/sparse/transpose.md b/site/en/api_docs/python/tf/sparse/transpose.md new file mode 100644 index 00000000000..3c7b626d0c9 --- /dev/null +++ b/site/en/api_docs/python/tf/sparse/transpose.md @@ -0,0 +1,125 @@ +description: Transposes a SparseTensor + +
+ + +
+ +# tf.sparse.transpose + + + + + + + + + +Transposes a `SparseTensor` + + + + + + + + + +The returned tensor's dimension i will correspond to the input dimension +`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is +the rank of the input tensor. Hence by default, this operation performs a +regular matrix transpose on 2-D input Tensors. + +For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`: + + [0, 3]: b + [0, 1]: a + [3, 1]: d + [2, 0]: c + +then the output will be a `SparseTensor` of shape `[5, 4]` and +`indices` / `values`: + + [0, 2]: c + [1, 0]: a + [1, 3]: d + [3, 0]: b + + + + + + + + + + + + + + + + +
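A runnable sketch of the default transpose (TensorFlow 2.x eager mode assumed; the values mirror the example above):

```python
import tensorflow as tf

sp = tf.sparse.SparseTensor(indices=[[0, 1], [0, 3], [2, 0], [3, 1]],
                            values=["a", "b", "c", "d"],
                            dense_shape=[4, 5])
transposed = tf.sparse.transpose(sp)   # default perm reverses the dimensions
print(transposed.dense_shape.numpy())  # [5 4]
print(transposed.indices.numpy())
# [[0 2] [1 0] [1 3] [3 0]]
print(transposed.values.numpy())
# [b'c' b'a' b'd' b'b']
```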
+`sp_input` + +The input `SparseTensor`. +
+`perm` + +A permutation of the dimensions of `sp_input`. +
+`name` + +A name prefix for the returned tensors (optional) +
+ + + + + + + + + + + +
+A transposed `SparseTensor`. +
+ + + + + + + + + + + + +
+`TypeError` + +If `sp_input` is not a `SparseTensor`. +
+ diff --git a/site/en/api_docs/python/tf/split.md b/site/en/api_docs/python/tf/split.md new file mode 100644 index 00000000000..81860329629 --- /dev/null +++ b/site/en/api_docs/python/tf/split.md @@ -0,0 +1,161 @@ +description: Splits a tensor value into a list of sub tensors. + +
+ + +
+ +# tf.split + + + + + + + + + +Splits a tensor `value` into a list of sub tensors. + + + + + + + + + +See also tf.unstack. + +If `num_or_size_splits` is an integer, then `value` is split along the +dimension `axis` into `num_split` smaller tensors. This requires that +`value.shape[axis]` is divisible by `num_split`. + +If `num_or_size_splits` is a 1-D Tensor (or list), we call it `size_splits` +and `value` is split into `len(size_splits)` elements. The shape of the `i`-th +element has the same size as the `value` except along dimension `axis` where +the size is `size_splits[i]`. + +#### For example: + + + +``` +>>> x = tf.Variable(tf.random.uniform([5, 30], -1, 1)) +``` + +Split `x` into 3 tensors along dimension 1 +>>> s0, s1, s2 = tf.split(x, num_or_size_splits=3, axis=1) +>>> tf.shape(s0).numpy() +array([ 5, 10], dtype=int32) + +Split `x` into 3 tensors with sizes [4, 15, 11] along dimension 1 +>>> split0, split1, split2 = tf.split(x, [4, 15, 11], 1) +>>> tf.shape(split0).numpy() +array([5, 4], dtype=int32) +>>> tf.shape(split1).numpy() +array([ 5, 15], dtype=int32) +>>> tf.shape(split2).numpy() +array([ 5, 11], dtype=int32) + + + + + + + + + + + + + + + + + + + + + + +
+`value` + +The `Tensor` to split. +
+`num_or_size_splits` + +Either an integer indicating the number of splits along +`axis` or a 1-D integer `Tensor` or Python list containing the sizes of +each output tensor along `axis`. If a scalar, then it must evenly divide +`value.shape[axis]`; otherwise the sum of sizes along the split axis +must match that of the `value`. +
+`axis` + +An integer or scalar `int32` `Tensor`. The dimension along which to +split. Must be in the range `[-rank(value), rank(value))`. Defaults to 0. +
+`num` + +Optional, used to specify the number of outputs when it cannot be +inferred from the shape of `size_splits`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+if `num_or_size_splits` is a scalar returns a list of `num_or_size_splits` +`Tensor` objects; if `num_or_size_splits` is a 1-D Tensor returns +`num_or_size_splits.get_shape[0]` `Tensor` objects resulting from splitting +`value`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `num` is unspecified and cannot be inferred. +
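+
+A minimal eager-mode sketch of both calling conventions described above (the
+tensor contents are made up for illustration):
+
+```python
+import tensorflow as tf
+
+x = tf.reshape(tf.range(12), [3, 4])
+# An integer split: two equal pieces of shape [3, 2] along axis 1.
+a, b = tf.split(x, num_or_size_splits=2, axis=1)
+# A size-list split: pieces of sizes 1 and 3 along axis 1.
+c, d = tf.split(x, num_or_size_splits=[1, 3], axis=1)
+print(a.shape, b.shape, c.shape, d.shape)  # (3, 2) (3, 2) (3, 1) (3, 3)
+```
+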
+ diff --git a/site/en/api_docs/python/tf/squeeze.md b/site/en/api_docs/python/tf/squeeze.md new file mode 100644 index 00000000000..66b6639c17f --- /dev/null +++ b/site/en/api_docs/python/tf/squeeze.md @@ -0,0 +1,128 @@ +description: Removes dimensions of size 1 from the shape of a tensor. + +
+ + +
+ +# tf.squeeze + + + + + + + + + +Removes dimensions of size 1 from the shape of a tensor. + + + + + + + +Given a tensor `input`, this operation returns a tensor of the same type with +all dimensions of size 1 removed. If you don't want to remove all size 1 +dimensions, you can remove specific size 1 dimensions by specifying +`axis`. + +#### For example: + + + +```python +# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] +tf.shape(tf.squeeze(t)) # [2, 3] +``` + +Or, to remove specific size 1 dimensions: + +```python +# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] +tf.shape(tf.squeeze(t, [2, 4])) # [1, 2, 3, 1] +``` + +Unlike the older op tf.compat.v1.squeeze, this op does not accept a +deprecated `squeeze_dims` argument. + +Note: if `input` is a tf.RaggedTensor, then this operation takes `O(N)` +time, where `N` is the number of elements in the squeezed dimensions. + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. The `input` to squeeze. +
+`axis` + +An optional list of `ints`. Defaults to `[]`. If specified, only +squeezes the dimensions listed. The dimension index starts at 0. It is an +error to squeeze a dimension that is not 1. Must be in the range +`[-rank(input), rank(input))`. Must be specified if `input` is a +`RaggedTensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +Contains the same data as `input`, but has one or more dimensions of +size 1 removed. +
+ + + + + + + + + + + + +
+`ValueError` + +The input cannot be converted to a tensor, or the specified +axis cannot be squeezed. +
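+
+A short eager-mode sketch of the behaviour described above, using a zero-filled
+tensor purely for illustration:
+
+```python
+import tensorflow as tf
+
+t = tf.zeros([1, 2, 1, 3, 1, 1])
+print(tf.squeeze(t).shape)               # (2, 3)
+print(tf.squeeze(t, axis=[2, 4]).shape)  # (1, 2, 3, 1)
+```
+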
+ diff --git a/site/en/api_docs/python/tf/stack.md b/site/en/api_docs/python/tf/stack.md new file mode 100644 index 00000000000..8b78d3356b6 --- /dev/null +++ b/site/en/api_docs/python/tf/stack.md @@ -0,0 +1,145 @@ +description: Stacks a list of rank-R tensors into one rank-(R+1) tensor. + +
+ + +
+ +# tf.stack + + + + + + + + + +Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor. + + + + + + + + + +See also tf.concat, tf.tile, tf.repeat. + +Packs the list of tensors in `values` into a tensor with rank one higher than +each tensor in `values`, by packing them along the `axis` dimension. +Given a list of length `N` of tensors of shape `(A, B, C)`; + +if `axis == 0` then the `output` tensor will have the shape `(N, A, B, C)`. +if `axis == 1` then the `output` tensor will have the shape `(A, N, B, C)`. +Etc. + +#### For example: + + + +``` +>>> x = tf.constant([1, 4]) +>>> y = tf.constant([2, 5]) +>>> z = tf.constant([3, 6]) +>>> tf.stack([x, y, z]) + +>>> tf.stack([x, y, z], axis=1) + +``` + +This is the opposite of unstack. The numpy equivalent is `np.stack` + +``` +>>> np.array_equal(np.stack([x, y, z]), tf.stack([x, y, z])) +True +``` + + + + + + + + + + + + + + + + +
+`values` + +A list of `Tensor` objects with the same shape and type. +
+`axis` + +An `int`. The axis to stack along. Defaults to the first dimension. +Negative values wrap around, so the valid range is `[-(R+1), R+1)`. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + + +
+`output` + +A stacked `Tensor` with the same type as `values`. +
+ + + + + + + + + + + + +
+`ValueError` + +If `axis` is out of the range [-(R+1), R+1). +
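+
+As an additional sketch of the `axis` argument (the tensors here stand in for,
+say, grayscale images and are made up for illustration):
+
+```python
+import tensorflow as tf
+
+images = [tf.zeros([28, 28]) for _ in range(3)]
+batch = tf.stack(images, axis=0)           # shape (3, 28, 28)
+channels_last = tf.stack(images, axis=-1)  # shape (28, 28, 3)
+print(batch.shape, channels_last.shape)
+```
+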
+ diff --git a/site/en/api_docs/python/tf/stop_gradient.md b/site/en/api_docs/python/tf/stop_gradient.md new file mode 100644 index 00000000000..47328595eb0 --- /dev/null +++ b/site/en/api_docs/python/tf/stop_gradient.md @@ -0,0 +1,96 @@ +description: Stops gradient computation. + +
+ + +
+ +# tf.stop_gradient + + + + + + + + + +Stops gradient computation. + + + + + + + + + +When executed in a graph, this op outputs its input tensor as-is. + +When building ops to compute gradients, this op prevents the contribution of +its inputs to be taken into account. Normally, the gradient generator adds ops +to a graph to compute the derivatives of a specified 'loss' by recursively +finding out inputs that contributed to its computation. If you insert this op +in the graph it inputs are masked from the gradient generator. They are not +taken into account for computing gradients. + +This is useful any time you want to compute a value with TensorFlow but need +to pretend that the value was a constant. Some examples include: + +* The *EM* algorithm where the *M-step* should not involve backpropagation + through the output of the *E-step*. +* Contrastive divergence training of Boltzmann machines where, when + differentiating the energy function, the training must not backpropagate + through the graph that generated the samples from the model. +* Adversarial training, where no backprop should happen through the adversarial + example generation process. + + + + + + + + + + + + + +
+`input` + +A `Tensor`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
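+
+A minimal eager-mode sketch of the effect described above, assuming a
+tf.GradientTape context (the variable value is arbitrary):
+
+```python
+import tensorflow as tf
+
+x = tf.Variable(3.0)
+with tf.GradientTape() as tape:
+    y = x * x
+    # Treat y as a constant when differentiating z.
+    z = tf.stop_gradient(y) + x
+print(tape.gradient(z, x).numpy())  # 1.0; only the `+ x` term contributes
+```
+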
+ diff --git a/site/en/api_docs/python/tf/strided_slice.md b/site/en/api_docs/python/tf/strided_slice.md new file mode 100644 index 00000000000..edc96c02500 --- /dev/null +++ b/site/en/api_docs/python/tf/strided_slice.md @@ -0,0 +1,210 @@ +description: Extracts a strided slice of a tensor (generalized python array indexing). + +
+ + +
+ +# tf.strided_slice + + + + + + + + + +Extracts a strided slice of a tensor (generalized python array indexing). + + + + + + + + + +**Instead of calling this op directly most users will want to use the +NumPy-style slicing syntax (e.g. `tensor[..., 3:4:-1, tf.newaxis, 3]`), which +is supported via tf.Tensor.__getitem__ and tf.Variable.__getitem__.** +The interface of this op is a low-level encoding of the slicing syntax. + +Roughly speaking, this op extracts a slice of size `(end-begin)/stride` +from the given `input_` tensor. Starting at the location specified by `begin` +the slice continues by adding `stride` to the index until all dimensions are +not less than `end`. +Note that a stride can be negative, which causes a reverse slice. + +Given a Python slice `input[spec0, spec1, ..., specn]`, +this function will be called as follows. + +`begin`, `end`, and `strides` will be vectors of length n. +n in general is not equal to the rank of the `input_` tensor. + +In each mask field (`begin_mask`, `end_mask`, `ellipsis_mask`, +`new_axis_mask`, `shrink_axis_mask`) the ith bit will correspond to +the ith spec. + +If the ith bit of `begin_mask` is set, `begin[i]` is ignored and +the fullest possible range in that dimension is used instead. +`end_mask` works analogously, except with the end range. + +`foo[5:,:,:3]` on a 7x8x9 tensor is equivalent to `foo[5:7,0:8,0:3]`. +`foo[::-1]` reverses a tensor with shape 8. + +If the ith bit of `ellipsis_mask` is set, as many unspecified dimensions +as needed will be inserted between other dimensions. Only one +non-zero bit is allowed in `ellipsis_mask`. + +For example `foo[3:5,...,4:5]` on a shape 10x3x3x10 tensor is +equivalent to `foo[3:5,:,:,4:5]` and +`foo[3:5,...]` is equivalent to `foo[3:5,:,:,:]`. + +If the ith bit of `new_axis_mask` is set, then `begin`, +`end`, and `stride` are ignored and a new length 1 dimension is +added at this point in the output tensor. + +For example, +`foo[:4, tf.newaxis, :2]` would produce a shape `(4, 1, 2)` tensor. + +If the ith bit of `shrink_axis_mask` is set, it implies that the ith +specification shrinks the dimensionality by 1, taking on the value at index +`begin[i]`. `end[i]` and `strides[i]` are ignored in this case. For example in +Python one might do `foo[:, 3, :]` which would result in `shrink_axis_mask` +equal to 2. + + +NOTE: `begin` and `end` are zero-indexed. +`strides` entries must be non-zero. + + +```python +t = tf.constant([[[1, 1, 1], [2, 2, 2]], + [[3, 3, 3], [4, 4, 4]], + [[5, 5, 5], [6, 6, 6]]]) +tf.strided_slice(t, [1, 0, 0], [2, 1, 3], [1, 1, 1]) # [[[3, 3, 3]]] +tf.strided_slice(t, [1, 0, 0], [2, 2, 3], [1, 1, 1]) # [[[3, 3, 3], + # [4, 4, 4]]] +tf.strided_slice(t, [1, -1, 0], [2, -3, 3], [1, -1, 1]) # [[[4, 4, 4], + # [3, 3, 3]]] +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input_` + +A `Tensor`. +
+`begin` + +An `int32` or `int64` `Tensor`. +
+`end` + +An `int32` or `int64` `Tensor`. +
+`strides` + +An `int32` or `int64` `Tensor`. +
+`begin_mask` + +An `int32` mask. +
+`end_mask` + +An `int32` mask. +
+`ellipsis_mask` + +An `int32` mask. +
+`new_axis_mask` + +An `int32` mask. +
+`shrink_axis_mask` + +An `int32` mask. +
+`var` + +The variable corresponding to `input_` or None +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` the same type as `input`. +
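+
+A short sketch relating the low-level call to the equivalent slicing syntax
+(the tensor values are taken from the example above):
+
+```python
+import tensorflow as tf
+
+t = tf.constant([[[1, 1, 1], [2, 2, 2]],
+                 [[3, 3, 3], [4, 4, 4]],
+                 [[5, 5, 5], [6, 6, 6]]])
+low_level = tf.strided_slice(t, [1, 0, 0], [2, 1, 3], [1, 1, 1])
+sugar = t[1:2, 0:1, :]  # the same slice via __getitem__
+print(low_level.numpy())                          # [[[3 3 3]]]
+print(tf.reduce_all(low_level == sugar).numpy())  # True
+```
+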
+ diff --git a/site/en/api_docs/python/tf/strings.md b/site/en/api_docs/python/tf/strings.md new file mode 100644 index 00000000000..737f1537c4b --- /dev/null +++ b/site/en/api_docs/python/tf/strings.md @@ -0,0 +1,75 @@ +description: Operations for working with string Tensors. + +
+ + +
+ +# Module: tf.strings + + + + + + + + + +Operations for working with string Tensors. + + + +## Functions + +[`as_string(...)`](../tf/strings/as_string.md): Converts each entry in the given tensor to strings. + +[`bytes_split(...)`](../tf/strings/bytes_split.md): Split string elements of `input` into bytes. + +[`format(...)`](../tf/strings/format.md): Formats a string template using a list of tensors. + +[`join(...)`](../tf/strings/join.md): Perform element-wise concatenation of a list of string tensors. + +[`length(...)`](../tf/strings/length.md): String lengths of `input`. + +[`lower(...)`](../tf/strings/lower.md): Converts all uppercase characters into their respective lowercase replacements. + +[`ngrams(...)`](../tf/strings/ngrams.md): Create a tensor of n-grams based on `data`. + +[`reduce_join(...)`](../tf/strings/reduce_join.md): Joins all strings into a single string, or joins along an axis. + +[`regex_full_match(...)`](../tf/strings/regex_full_match.md): Check if the input matches the regex pattern. + +[`regex_replace(...)`](../tf/strings/regex_replace.md): Replace elements of `input` matching regex `pattern` with `rewrite`. + +[`split(...)`](../tf/strings/split.md): Split elements of `input` based on `sep` into a `RaggedTensor`. + +[`strip(...)`](../tf/strings/strip.md): Strip leading and trailing whitespaces from the Tensor. + +[`substr(...)`](../tf/strings/substr.md): Return substrings from `Tensor` of strings. + +[`to_hash_bucket(...)`](../tf/strings/to_hash_bucket.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`to_hash_bucket_fast(...)`](../tf/strings/to_hash_bucket_fast.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`to_hash_bucket_strong(...)`](../tf/strings/to_hash_bucket_strong.md): Converts each string in the input Tensor to its hash mod by a number of buckets. + +[`to_number(...)`](../tf/strings/to_number.md): Converts each string in the input Tensor to the specified numeric type. + +[`unicode_decode(...)`](../tf/strings/unicode_decode.md): Decodes each string in `input` into a sequence of Unicode code points. + +[`unicode_decode_with_offsets(...)`](../tf/strings/unicode_decode_with_offsets.md): Decodes each string into a sequence of code points with start offsets. + +[`unicode_encode(...)`](../tf/strings/unicode_encode.md): Encodes each sequence of Unicode code points in `input` into a string. + +[`unicode_script(...)`](../tf/strings/unicode_script.md): Determine the script codes of a given tensor of Unicode integer code points. + +[`unicode_split(...)`](../tf/strings/unicode_split.md): Splits each string in `input` into a sequence of Unicode code points. + +[`unicode_split_with_offsets(...)`](../tf/strings/unicode_split_with_offsets.md): Splits each string into a sequence of code points with start offsets. + +[`unicode_transcode(...)`](../tf/strings/unicode_transcode.md): Transcode the input text from a source encoding to a destination encoding. + +[`unsorted_segment_join(...)`](../tf/strings/unsorted_segment_join.md): Joins the elements of `inputs` based on `segment_ids`. + +[`upper(...)`](../tf/strings/upper.md): Converts all lowercase characters into their respective uppercase replacements. 
+ diff --git a/site/en/api_docs/python/tf/strings/as_string.md b/site/en/api_docs/python/tf/strings/as_string.md new file mode 100644 index 00000000000..a6526f69f1d --- /dev/null +++ b/site/en/api_docs/python/tf/strings/as_string.md @@ -0,0 +1,142 @@ +description: Converts each entry in the given tensor to strings. + +
+ + +
+ +# tf.strings.as_string + + + + + + + + + +Converts each entry in the given tensor to strings. + + + + + + + + + +Supports many numeric types and boolean. + +For Unicode, see the +[https://www.tensorflow.org/tutorials/representation/unicode](Working with Unicode text) +tutorial. + +#### Examples: + + + +``` +>>> tf.strings.as_string([3, 2]) + +>>> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy() +array([b'3.14', b'2.72'], dtype=object) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `float32`, `float64`, `bool`. +
+`precision` + +An optional `int`. Defaults to `-1`. +The post-decimal precision to use for floating point numbers. +Only used if precision > -1. +
+`scientific` + +An optional `bool`. Defaults to `False`. +Use scientific notation for floating point numbers. +
+`shortest` + +An optional `bool`. Defaults to `False`. +Use shortest representation (either scientific or standard) for +floating point numbers. +
+`width` + +An optional `int`. Defaults to `-1`. +Pad pre-decimal numbers to this width. +Applies to both floating point and integer numbers. +Only used if width > -1. +
+`fill` + +An optional `string`. Defaults to `""`. +The value to pad if width > -1. If empty, pads with spaces. +Another typical value is '0'. String cannot be longer than 1 character. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
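+
+A further sketch of the formatting flags; the exact rendered strings assume the
+usual printf-style behaviour of the underlying op:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1234.5678, 0.00042])
+print(tf.strings.as_string(x, precision=3).numpy())
+# [b'1234.568' b'0.000']
+print(tf.strings.as_string(x, scientific=True, precision=3).numpy())
+# [b'1.235e+03' b'4.200e-04']
+```
+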
+ diff --git a/site/en/api_docs/python/tf/strings/bytes_split.md b/site/en/api_docs/python/tf/strings/bytes_split.md new file mode 100644 index 00000000000..ab956e5ec8d --- /dev/null +++ b/site/en/api_docs/python/tf/strings/bytes_split.md @@ -0,0 +1,99 @@ +description: Split string elements of input into bytes. + +
+ + +
+ +# tf.strings.bytes_split + + + + + + + + + +Split string elements of `input` into bytes. + + + + + + + + + + +#### Examples: + + + +``` +>>> tf.strings.bytes_split('hello').numpy() +array([b'h', b'e', b'l', b'l', b'o'], dtype=object) +>>> tf.strings.bytes_split(['hello', '123']) + +``` + +Note that this op splits strings into bytes, not unicode characters. To +split strings into unicode characters, use tf.strings.unicode_split. + +See also: tf.io.decode_raw, tf.strings.split, tf.strings.unicode_split. + + + + + + + + + + + + + +
+`input` + +A string `Tensor` or `RaggedTensor`: the strings to split. Must +have a statically known rank (`N`). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `RaggedTensor` of rank `N+1`: the bytes that make up the source strings. +
+ diff --git a/site/en/api_docs/python/tf/strings/format.md b/site/en/api_docs/python/tf/strings/format.md new file mode 100644 index 00000000000..8169b367cf3 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/format.md @@ -0,0 +1,170 @@ +description: Formats a string template using a list of tensors. + +
+ + +
+ +# tf.strings.format + + + + + + + + + +Formats a string template using a list of tensors. + + + + + + + + + +Formats a string template using a list of tensors, abbreviating tensors by +only printing the first and last `summarize` elements of each dimension +(recursively). If formatting only one tensor into a template, the tensor does +not have to be wrapped in a list. + +#### Example: + +Formatting a single-tensor template: +```python +sess = tf.compat.v1.Session() +with sess.as_default(): + tensor = tf.range(10) + formatted = tf.strings.format("tensor: {}, suffix", tensor) + out = sess.run(formatted) + expected = "tensor: [0 1 2 ... 7 8 9], suffix" + + assert(out.decode() == expected) +``` + +Formatting a multi-tensor template: +```python +sess = tf.compat.v1.Session() +with sess.as_default(): + tensor_one = tf.reshape(tf.range(100), [10, 10]) + tensor_two = tf.range(10) + formatted = tf.strings.format("first: {}, second: {}, suffix", + (tensor_one, tensor_two)) + + out = sess.run(formatted) + expected = ("first: [[0 1 2 ... 7 8 9]\n" + " [10 11 12 ... 17 18 19]\n" + " [20 21 22 ... 27 28 29]\n" + " ...\n" + " [70 71 72 ... 77 78 79]\n" + " [80 81 82 ... 87 88 89]\n" + " [90 91 92 ... 97 98 99]], second: [0 1 2 ... 7 8 9], suffix") + + assert(out.decode() == expected) +``` + + + + + + + + + + + + + + + + + + + + + + + + +
+`template` + +A string template to format tensor values into. +
+`inputs` + +A list of `Tensor` objects, or a single Tensor. +The list of tensors to format into the template string. If a solitary +tensor is passed in, the input tensor will automatically be wrapped as a +list. +
+`placeholder` + +An optional `string`. Defaults to `{}`. +At each placeholder occurring in the template, a subsequent tensor +will be inserted. +
+`summarize` + +An optional `int`. Defaults to `3`. +When formatting the tensors, show the first and last `summarize` +entries of each tensor dimension (recursively). If set to -1, all +elements of the tensor will be shown. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A scalar `Tensor` of type `string`. +
+ + + + + + + + + + + + +
+`ValueError` + +if the number of placeholders does not match the number of +inputs. +
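+
+The session-based examples above target graph mode; a minimal eager-mode
+equivalent (TF 2.x) might look like the following sketch:
+
+```python
+import tensorflow as tf
+
+tensor = tf.range(10)
+formatted = tf.strings.format("tensor: {}, suffix", tensor)
+print(formatted.numpy().decode())
+# tensor: [0 1 2 ... 7 8 9], suffix
+```
+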
+ diff --git a/site/en/api_docs/python/tf/strings/join.md b/site/en/api_docs/python/tf/strings/join.md new file mode 100644 index 00000000000..2a333d23e37 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/join.md @@ -0,0 +1,105 @@ +description: Perform element-wise concatenation of a list of string tensors. + +
+ + +
+ +# tf.strings.join + + + + + + + + + +Perform element-wise concatenation of a list of string tensors. + + + + + + + + + +Given a list of string tensors of same shape, performs element-wise +concatenation of the strings of the same index in all tensors. + + +``` +>>> tf.strings.join(['abc','def']).numpy() +b'abcdef' +>>> tf.strings.join([['abc','123'], +... ['def','456'], +... ['ghi','789']]).numpy() +array([b'abcdefghi', b'123456789'], dtype=object) +>>> tf.strings.join([['abc','123'], +... ['def','456']], +... separator=" ").numpy() +array([b'abc def', b'123 456'], dtype=object) +``` + + + + + + + + + + + + + + + + +
+`inputs` + +A list of tf.Tensor objects of same size and tf.string dtype. +
+`separator` + +A string added between each string being joined. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.string tensor. +
+ diff --git a/site/en/api_docs/python/tf/strings/length.md b/site/en/api_docs/python/tf/strings/length.md new file mode 100644 index 00000000000..3bf152c2850 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/length.md @@ -0,0 +1,93 @@ +description: String lengths of input. + +
+ + +
+ +# tf.strings.length + + + + + + + + + +String lengths of `input`. + + + + + + + +Computes the length of each string given in the input tensor. + +``` +>>> strings = tf.constant(['Hello','TensorFlow', '\U0001F642']) +>>> tf.strings.length(strings).numpy() # default counts bytes +array([ 5, 10, 4], dtype=int32) +>>> tf.strings.length(strings, unit="UTF8_CHAR").numpy() +array([ 5, 10, 1], dtype=int32) +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +The strings for which to compute the length for each element. +
+`unit` + +An optional `string` from: `"BYTE", "UTF8_CHAR"`. Defaults to `"BYTE"`. +The unit that is counted to compute string length. One of: `"BYTE"` (for +the number of bytes in each string) or `"UTF8_CHAR"` (for the number of UTF-8 +encoded Unicode code points in each string). Results are undefined +if `unit=UTF8_CHAR` and the `input` strings do not contain structurally +valid UTF-8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/strings/lower.md b/site/en/api_docs/python/tf/strings/lower.md new file mode 100644 index 00000000000..488af464bdf --- /dev/null +++ b/site/en/api_docs/python/tf/strings/lower.md @@ -0,0 +1,93 @@ +description: Converts all uppercase characters into their respective lowercase replacements. + +
+ + +
+ +# tf.strings.lower + + + + + + + + + +Converts all uppercase characters into their respective lowercase replacements. + + + + + + + + + + +#### Example: + + + +``` +>>> tf.strings.lower("CamelCase string and ALL CAPS") + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +
+`encoding` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/strings/ngrams.md b/site/en/api_docs/python/tf/strings/ngrams.md new file mode 100644 index 00000000000..cee48db5665 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/ngrams.md @@ -0,0 +1,186 @@ +description: Create a tensor of n-grams based on data. + +
+ + +
+ +# tf.strings.ngrams + + + + + + + + + +Create a tensor of n-grams based on `data`. + + + + + + + + + +Creates a tensor of n-grams based on `data`. The n-grams are created by +joining windows of `width` adjacent strings from the inner axis of `data` +using `separator`. + +The input data can be padded on both the start and end of the sequence, if +desired, using the `pad_values` argument. If set, `pad_values` should contain +either a tuple of strings or a single string; the 0th element of the tuple +will be used to pad the left side of the sequence and the 1st element of the +tuple will be used to pad the right side of the sequence. The `padding_width` +arg controls how many padding values are added to each side; it defaults to +`ngram_width-1`. + +If this op is configured to not have padding, or if it is configured to add +padding with `padding_width` set to less than ngram_width-1, it is possible +that a sequence, or a sequence plus padding, is smaller than the ngram +width. In that case, no ngrams will be generated for that sequence. This can +be prevented by setting `preserve_short_sequences`, which will cause the op +to always generate at least one ngram per non-empty sequence. + +#### Examples: + + + +``` +>>> tf.strings.ngrams(["A", "B", "C", "D"], 2).numpy() +array([b'A B', b'B C', b'C D'], dtype=object) +>>> tf.strings.ngrams(["TF", "and", "keras"], 1).numpy() +array([b'TF', b'and', b'keras'], dtype=object) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`data` + +A Tensor or RaggedTensor containing the source data for the ngrams. +
+`ngram_width` + +The width(s) of the ngrams to create. If this is a list or +tuple, the op will return ngrams of all specified arities in list order. +Values must be non-Tensor integers greater than 0. +
+`separator` + +The separator string used between ngram elements. Must be a +string constant, not a Tensor. +
+`pad_values` + +A tuple of (left_pad_value, right_pad_value), a single string, +or None. If None, no padding will be added; if a single string, then that +string will be used for both left and right padding. Values must be Python +strings. +
+`padding_width` + +If set, `padding_width` pad values will be added to both +sides of each sequence. Defaults to `ngram_width`-1. Must be greater than +0. (Note that 1-grams are never padded, regardless of this value.) +
+`preserve_short_sequences` + +If true, then ensure that at least one ngram is +generated for each input sequence. In particular, if an input sequence is +shorter than `min(ngram_width) + 2*pad_width`, then generate a single +ngram containing the entire sequence. If false, then no ngrams are +generated for these short input sequences. +
+`name` + +The op name. +
+ + + + + + + + + + + +
+A RaggedTensor of ngrams. If `data.shape=[D1...DN, S]`, then +`output.shape=[D1...DN, NUM_NGRAMS]`, where +`NUM_NGRAMS=S-ngram_width+1+2*padding_width`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `pad_values` is set to an invalid type. +
+`ValueError` + +if `pad_values`, `padding_width`, or `ngram_width` is set to an +invalid value. +
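+
+A sketch of the padding options described above; the "[S]"/"[E]" boundary
+markers are arbitrary strings chosen for illustration:
+
+```python
+import tensorflow as tf
+
+words = tf.ragged.constant([["the", "quick", "brown", "fox"], ["jumped"]])
+# Bigrams with one pad value on each side (padding_width defaults to
+# ngram_width - 1, i.e. 1 here).
+bigrams = tf.strings.ngrams(words, 2, pad_values=("[S]", "[E]"))
+print(bigrams.to_list())
+# [[b'[S] the', b'the quick', b'quick brown', b'brown fox', b'fox [E]'],
+#  [b'[S] jumped', b'jumped [E]']]
+```
+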
+ diff --git a/site/en/api_docs/python/tf/strings/reduce_join.md b/site/en/api_docs/python/tf/strings/reduce_join.md new file mode 100644 index 00000000000..73d4857fcc1 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/reduce_join.md @@ -0,0 +1,106 @@ +description: Joins all strings into a single string, or joins along an axis. + +
+ + +
+ +# tf.strings.reduce_join + + + + + + + + + +Joins all strings into a single string, or joins along an axis. + + + + + + + +``` +>>> tf.strings.reduce_join([['abc','123'], +... ['def','456']]).numpy() +b'abc123def456' +>>> tf.strings.reduce_join([['abc','123'], +... ['def','456']], axis=-1).numpy() +array([b'abc123', b'def456'], dtype=object) +>>> tf.strings.reduce_join([['abc','123'], +... ['def','456']], +... axis=-1, +... separator=" ").numpy() +array([b'abc 123', b'def 456'], dtype=object) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`inputs` + +A tf.string tensor. +
+`axis` + +Which axis to join along. The default behavior is to join all +elements, producing a scalar. +
+`keepdims` + +If true, retains reduced dimensions with length 1. +
+`separator` + +a string added between each string being joined. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tf.string tensor. +
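+
+One more sketch, showing `keepdims` together with a custom separator (the input
+strings are made up for illustration):
+
+```python
+import tensorflow as tf
+
+words = tf.constant([["hello", "world"], ["tensor", "flow"]])
+print(tf.strings.reduce_join(words, axis=1, keepdims=True, separator="-").numpy())
+# [[b'hello-world']
+#  [b'tensor-flow']]
+```
+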
+ diff --git a/site/en/api_docs/python/tf/strings/regex_full_match.md b/site/en/api_docs/python/tf/strings/regex_full_match.md new file mode 100644 index 00000000000..178f7f62301 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/regex_full_match.md @@ -0,0 +1,108 @@ +description: Check if the input matches the regex pattern. + +
+ + +
+ +# tf.strings.regex_full_match + + + + + + + + + +Check if the input matches the regex pattern. + + + + + + + + + +The input is a string tensor of any shape. The pattern is a scalar +string tensor which is applied to every element of the input tensor. +The boolean values (True or False) of the output tensor indicate +if the input matches the regex pattern provided. + +The pattern follows the re2 syntax (https://github.com/google/re2/wiki/Syntax) + +#### Examples: + + + +``` +>>> tf.strings.regex_full_match(["TF lib", "lib TF"], ".*lib$") + +>>> tf.strings.regex_full_match(["TF lib", "lib TF"], ".*TF$") + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +A string tensor of the text to be processed. +
+`pattern` + +A `Tensor` of type `string`. +A scalar string tensor containing the regular expression to match the input. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `bool`. +
+ diff --git a/site/en/api_docs/python/tf/strings/regex_replace.md b/site/en/api_docs/python/tf/strings/regex_replace.md new file mode 100644 index 00000000000..000b99993b3 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/regex_replace.md @@ -0,0 +1,112 @@ +description: Replace elements of input matching regex pattern with rewrite. + +
+ + +
+ +# tf.strings.regex_replace + + + + + + + + + +Replace elements of `input` matching regex `pattern` with `rewrite`. + + + + + + + + + +``` +>>> tf.strings.regex_replace("Text with tags.
contains html", +... "<[^>]+>", " ") + +``` + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +string `Tensor`, the source strings to process. +
+`pattern` + +string or scalar string `Tensor`, regular expression to use, +see more details at https://github.com/google/re2/wiki/Syntax +
+`rewrite`
+
+string or scalar string `Tensor`, value to use in match
+replacement; supports backslash-escaped digits (\1 to \9) that can be used to
+insert text matching the corresponding parenthesized group.
+
+`replace_global` + +`bool`, if `True` replace all non-overlapping matches, +else replace only the first match. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+string `Tensor` of the same shape as `input` with specified replacements. +
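+
+A sketch of backreference-based rewriting (the date strings and the target
+format are invented for illustration):
+
+```python
+import tensorflow as tf
+
+text = tf.constant(["2024-01-31", "1999-12-25"])
+# Rewrite ISO dates to DD/MM/YYYY using the captured groups \1..\3.
+reordered = tf.strings.regex_replace(
+    text, r"(\d{4})-(\d{2})-(\d{2})", r"\3/\2/\1")
+print(reordered.numpy())  # [b'31/01/2024' b'25/12/1999']
+```
+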
+ diff --git a/site/en/api_docs/python/tf/strings/split.md b/site/en/api_docs/python/tf/strings/split.md new file mode 100644 index 00000000000..439569f3e3f --- /dev/null +++ b/site/en/api_docs/python/tf/strings/split.md @@ -0,0 +1,127 @@ +description: Split elements of input based on sep into a RaggedTensor. + +
+ + +
+ +# tf.strings.split + + + + + + + + + +Split elements of `input` based on `sep` into a `RaggedTensor`. + + + + + + + +Let N be the size of `input` (typically N will be the batch size). Split each +element of `input` based on `sep` and return a `RaggedTensor` containing the +split tokens. Empty tokens are ignored. + +#### Example: + + + +``` +>>> tf.strings.split('hello world').numpy() + array([b'hello', b'world'], dtype=object) +>>> tf.strings.split(['hello world', 'a b c']) + +``` + +If `sep` is given, consecutive delimiters are not grouped together and are +deemed to delimit empty strings. For example, `input` of `"1<>2<><>3"` and +`sep` of `"<>"` returns `["1", "2", "", "3"]`. If `sep` is None or an empty +string, consecutive whitespace are regarded as a single separator, and the +result will contain no empty strings at the start or end if the string has +leading or trailing whitespace. + +Note that the above mentioned behavior matches python's str.split. + + + + + + + + + + + + + + + + + + + +
+`input` + +A string `Tensor` of rank `N`, the strings to split. If +`rank(input)` is not known statically, then it is assumed to be `1`. +
+`sep` + +`0-D` string `Tensor`, the delimiter string. +
+`maxsplit`
+
+An `int`. If `maxsplit > 0`, limits the number of splits in the result.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + +
+`ValueError` + +If sep is not a string. +
+ + + + + + + + + + + +
+A `RaggedTensor` of rank `N+1`, the strings split according to the +delimiter. +
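+
+A short sketch of the explicit-separator behaviour described above, including
+the empty tokens produced by consecutive delimiters:
+
+```python
+import tensorflow as tf
+
+print(tf.strings.split("a,b,,c", sep=",").numpy())
+# [b'a' b'b' b'' b'c']
+print(tf.strings.split(["a,b,,c", "d,e"], sep=",").to_list())
+# [[b'a', b'b', b'', b'c'], [b'd', b'e']]
+```
+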
+ diff --git a/site/en/api_docs/python/tf/strings/strip.md b/site/en/api_docs/python/tf/strings/strip.md new file mode 100644 index 00000000000..edea4f572f0 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/strip.md @@ -0,0 +1,77 @@ +description: Strip leading and trailing whitespaces from the Tensor. + +
+ + +
+ +# tf.strings.strip + + + + + + + + + +Strip leading and trailing whitespaces from the Tensor. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. A string `Tensor` of any shape. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
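+
+A minimal eager-mode sketch (the input strings are made up for illustration):
+
+```python
+import tensorflow as tf
+
+print(tf.strings.strip(["  hello  ", "\tTensorFlow\n", "no-spaces"]).numpy())
+# [b'hello' b'TensorFlow' b'no-spaces']
+```
+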
+ diff --git a/site/en/api_docs/python/tf/strings/substr.md b/site/en/api_docs/python/tf/strings/substr.md new file mode 100644 index 00000000000..9f7326a24e1 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/substr.md @@ -0,0 +1,191 @@ +description: Return substrings from Tensor of strings. + +
+ + +
+ +# tf.strings.substr + + + + + + + + + +Return substrings from `Tensor` of strings. + + + + + + + +For each string in the input `Tensor`, creates a substring starting at index +`pos` with a total length of `len`. + +If `len` defines a substring that would extend beyond the length of the input +string, or if `len` is negative, then as many characters as possible are used. + +A negative `pos` indicates distance within the string backwards from the end. + +If `pos` specifies an index which is out of range for any of the input strings, +then an `InvalidArgumentError` is thrown. + +`pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on +Op creation. + +*NOTE*: `Substr` supports broadcasting up to two dimensions. More about +broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + +--- + +Examples + +Using scalar `pos` and `len`: + +```python +input = [b'Hello', b'World'] +position = 1 +length = 3 + +output = [b'ell', b'orl'] +``` + +Using `pos` and `len` with same shape as `input`: + +```python +input = [[b'ten', b'eleven', b'twelve'], + [b'thirteen', b'fourteen', b'fifteen'], + [b'sixteen', b'seventeen', b'eighteen']] +position = [[1, 2, 3], + [1, 2, 3], + [1, 2, 3]] +length = [[2, 3, 4], + [4, 3, 2], + [5, 5, 5]] + +output = [[b'en', b'eve', b'lve'], + [b'hirt', b'urt', b'te'], + [b'ixtee', b'vente', b'hteen']] +``` + +Broadcasting `pos` and `len` onto `input`: + +``` +input = [[b'ten', b'eleven', b'twelve'], + [b'thirteen', b'fourteen', b'fifteen'], + [b'sixteen', b'seventeen', b'eighteen'], + [b'nineteen', b'twenty', b'twentyone']] +position = [1, 2, 3] +length = [1, 2, 3] + +output = [[b'e', b'ev', b'lve'], + [b'h', b'ur', b'tee'], + [b'i', b've', b'hte'], + [b'i', b'en', b'nty']] +``` + +Broadcasting `input` onto `pos` and `len`: + +``` +input = b'thirteen' +position = [1, 5, 7] +length = [3, 2, 1] + +output = [b'hir', b'ee', b'n'] +``` + + + + + + + + + +
+* `ValueError`: If the first argument cannot be converted to a
+Tensor of `dtype string`.
+* `InvalidArgumentError`: If indices are out of range.
+* `ValueError`: If `pos` and `len` are not the same shape.
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. Tensor of strings +
+`pos` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Scalar defining the position of first character in each substring +
+`len` + +A `Tensor`. Must have the same type as `pos`. +Scalar defining the number of characters to include in each substring +
+`unit` + +An optional `string` from: `"BYTE", "UTF8_CHAR"`. Defaults to `"BYTE"`. +The unit that is used to create the substring. One of: `"BYTE"` (for +defining position and length by bytes) or `"UTF8_CHAR"` (for the UTF-8 +encoded Unicode code points). The default is `"BYTE"`. Results are undefined if +`unit=UTF8_CHAR` and the `input` strings do not contain structurally valid +UTF-8. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
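+
+An eager-mode sketch of scalar `pos`/`len` and of negative positions (the input
+strings are made up for illustration):
+
+```python
+import tensorflow as tf
+
+words = tf.constant(["Hello", "TensorFlow"])
+print(tf.strings.substr(words, pos=1, len=3).numpy())   # [b'ell' b'ens']
+# A negative position counts back from the end of each string.
+print(tf.strings.substr(words, pos=-3, len=3).numpy())  # [b'llo' b'low']
+```
+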
+ diff --git a/site/en/api_docs/python/tf/strings/to_hash_bucket.md b/site/en/api_docs/python/tf/strings/to_hash_bucket.md new file mode 100644 index 00000000000..a3632cc9355 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/to_hash_bucket.md @@ -0,0 +1,93 @@ +description: Converts each string in the input Tensor to its hash mod by a number of buckets. + +
+ + +
+ +# tf.strings.to_hash_bucket + + + + + + + + + +Converts each string in the input Tensor to its hash mod by a number of buckets. + + + + + + + +The hash function is deterministic on the content of the string within the +process. + +Note that the hash function may change from time to time. +This functionality will be deprecated and it's recommended to use +tf.strings.to_hash_bucket_fast() or tf.strings.to_hash_bucket_strong(). + +#### Examples: + + + +``` +>>> tf.strings.to_hash_bucket(["Hello", "TensorFlow", "2.x"], 3) + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +
+`num_buckets` + +An `int` that is `>= 1`. The number of buckets. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/strings/to_hash_bucket_fast.md b/site/en/api_docs/python/tf/strings/to_hash_bucket_fast.md new file mode 100644 index 00000000000..d40f3f00578 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/to_hash_bucket_fast.md @@ -0,0 +1,99 @@ +description: Converts each string in the input Tensor to its hash mod by a number of buckets. + +
+ + +
+ +# tf.strings.to_hash_bucket_fast + + + + + + + + + +Converts each string in the input Tensor to its hash mod by a number of buckets. + + + + + + + + + +The hash function is deterministic on the content of the string within the +process and will never change. However, it is not suitable for cryptography. +This function may be used when CPU time is scarce and inputs are trusted or +unimportant. There is a risk of adversaries constructing inputs that all hash +to the same bucket. To prevent this problem, use a strong hash function with +`tf.string_to_hash_bucket_strong`. + +#### Examples: + + + +``` +>>> tf.strings.to_hash_bucket_fast(["Hello", "TensorFlow", "2.x"], 3).numpy() +array([0, 2, 2]) +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. The strings to assign a hash bucket. +
+`num_buckets` + +An `int` that is `>= 1`. The number of buckets. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/strings/to_hash_bucket_strong.md b/site/en/api_docs/python/tf/strings/to_hash_bucket_strong.md new file mode 100644 index 00000000000..4cf96aea18e --- /dev/null +++ b/site/en/api_docs/python/tf/strings/to_hash_bucket_strong.md @@ -0,0 +1,115 @@ +description: Converts each string in the input Tensor to its hash mod by a number of buckets. + +
+ + +
+ +# tf.strings.to_hash_bucket_strong + + + + + + + + + +Converts each string in the input Tensor to its hash mod by a number of buckets. + + + + + + + + + +The hash function is deterministic on the content of the string within the +process. The hash function is a keyed hash function, where attribute `key` +defines the key of the hash function. `key` is an array of 2 elements. + +A strong hash is important when inputs may be malicious, e.g. URLs with +additional components. Adversaries could try to make their inputs hash to the +same bucket for a denial-of-service attack or to skew the results. A strong +hash can be used to make it difficult to find inputs with a skewed hash value +distribution over buckets. This requires that the hash function is +seeded by a high-entropy (random) "key" unknown to the adversary. + +The additional robustness comes at a cost of roughly 4x higher compute +time than `tf.string_to_hash_bucket_fast`. + +#### Examples: + + + +``` +>>> tf.strings.to_hash_bucket_strong(["Hello", "TF"], 3, [1, 2]).numpy() +array([2, 0]) +``` + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. The strings to assign a hash bucket. +
+`num_buckets` + +An `int` that is `>= 1`. The number of buckets. +
+`key` + +A list of `ints`. +The key used to seed the hash function, passed as a list of two uint64 +elements. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int64`. +
+ diff --git a/site/en/api_docs/python/tf/strings/to_number.md b/site/en/api_docs/python/tf/strings/to_number.md new file mode 100644 index 00000000000..9d5574698ee --- /dev/null +++ b/site/en/api_docs/python/tf/strings/to_number.md @@ -0,0 +1,93 @@ +description: Converts each string in the input Tensor to the specified numeric type. + +
+ + +
+ +# tf.strings.to_number + + + + + + + + + +Converts each string in the input Tensor to the specified numeric type. + + + + + + + +(Note that int32 overflow results in an error while float overflow +results in a rounded value.) + +#### Examples: + + + +``` +>>> tf.strings.to_number("1.55") + +>>> tf.strings.to_number("3", tf.int32) + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +
+`out_type` + +An optional tf.DType from: `tf.float32, tf.float64, tf.int32, +tf.int64`. Defaults to `tf.float32`. +The numeric type to interpret each string in `string_tensor` as. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `out_type`. +
+ diff --git a/site/en/api_docs/python/tf/strings/unicode_decode.md b/site/en/api_docs/python/tf/strings/unicode_decode.md new file mode 100644 index 00000000000..7b8f3f83e9c --- /dev/null +++ b/site/en/api_docs/python/tf/strings/unicode_decode.md @@ -0,0 +1,132 @@ +description: Decodes each string in input into a sequence of Unicode code points. + +
+ + +
+ +# tf.strings.unicode_decode + + + + + + + + + +Decodes each string in `input` into a sequence of Unicode code points. + + + + + + + + + +`result[i1...iN, j]` is the Unicode codepoint for the `j`th character in +`input[i1...iN]`, when decoded using `input_encoding`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +An `N` dimensional potentially ragged `string` tensor with shape +`[D1...DN]`. `N` must be statically known. +
+`input_encoding` + +String name for the unicode encoding that should be used to +decode each string. +
+`errors` + +Specifies the response when an input string can't be converted +using the indicated encoding. One of: +* `'strict'`: Raise an exception for any illegal substrings. +* `'replace'`: Replace illegal substrings with `replacement_char`. +* `'ignore'`: Skip illegal substrings. +
+`replacement_char` + +The replacement codepoint to be used in place of invalid +substrings in `input` when `errors='replace'`; and in place of C0 control +characters in `input` when `replace_control_characters=True`. +
+`replace_control_characters` + +Whether to replace the C0 control characters +`(U+0000 - U+001F)` with the `replacement_char`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `N+1` dimensional `int32` tensor with shape `[D1...DN, (num_chars)]`. +The returned tensor is a tf.Tensor if `input` is a scalar, or a +tf.RaggedTensor otherwise. +
+ + +#### Example: + +``` +>>> input = [s.encode('utf8') for s in (u'G\xf6\xf6dnight', u'\U0001f60a')] +>>> tf.strings.unicode_decode(input, 'UTF-8').to_list() +[[71, 246, 246, 100, 110, 105, 103, 104, 116], [128522]] +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/strings/unicode_decode_with_offsets.md b/site/en/api_docs/python/tf/strings/unicode_decode_with_offsets.md new file mode 100644 index 00000000000..bc6d01d6319 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/unicode_decode_with_offsets.md @@ -0,0 +1,147 @@ +description: Decodes each string into a sequence of code points with start offsets. + +
+ + +
+ +# tf.strings.unicode_decode_with_offsets + + + + + + + + + +Decodes each string into a sequence of code points with start offsets. + + + + + + + + + +This op is similar to `tf.strings.decode(...)`, but it also returns the +start offset for each character in its respective string. This information +can be used to align the characters with the original byte sequence. + +Returns a tuple `(codepoints, start_offsets)` where: + +* `codepoints[i1...iN, j]` is the Unicode codepoint for the `j`th character + in `input[i1...iN]`, when decoded using `input_encoding`. +* `start_offsets[i1...iN, j]` is the start byte offset for the `j`th + character in `input[i1...iN]`, when decoded using `input_encoding`. + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +An `N` dimensional potentially ragged `string` tensor with shape +`[D1...DN]`. `N` must be statically known. +
+`input_encoding` + +String name for the unicode encoding that should be used to +decode each string. +
+`errors` + +Specifies the response when an input string can't be converted +using the indicated encoding. One of: +* `'strict'`: Raise an exception for any illegal substrings. +* `'replace'`: Replace illegal substrings with `replacement_char`. +* `'ignore'`: Skip illegal substrings. +
+`replacement_char` + +The replacement codepoint to be used in place of invalid +substrings in `input` when `errors='replace'`; and in place of C0 control +characters in `input` when `replace_control_characters=True`. +
+`replace_control_characters` + +Whether to replace the C0 control characters +`(U+0000 - U+001F)` with the `replacement_char`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tuple of `N+1` dimensional tensors `(codepoints, start_offsets)`. + +* `codepoints` is an `int32` tensor with shape `[D1...DN, (num_chars)]`. +* `offsets` is an `int64` tensor with shape `[D1...DN, (num_chars)]`. + +The returned tensors are tf.Tensors if `input` is a scalar, or +tf.RaggedTensors otherwise. +
+ + +#### Example: + +``` +>>> input = [s.encode('utf8') for s in (u'G\xf6\xf6dnight', u'\U0001f60a')] +>>> result = tf.strings.unicode_decode_with_offsets(input, 'UTF-8') +>>> result[0].to_list() # codepoints +[[71, 246, 246, 100, 110, 105, 103, 104, 116], [128522]] +>>> result[1].to_list() # offsets +[[0, 1, 3, 5, 6, 7, 8, 9, 10], [0]] +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/strings/unicode_encode.md b/site/en/api_docs/python/tf/strings/unicode_encode.md new file mode 100644 index 00000000000..3af3db68409 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/unicode_encode.md @@ -0,0 +1,125 @@ +description: Encodes each sequence of Unicode code points in input into a string. + +
+ + +
+ +# tf.strings.unicode_encode + + + + + + + + + +Encodes each sequence of Unicode code points in `input` into a string. + + + + + + + + + +`result[i1...iN]` is the string formed by concatenating the Unicode +codepoints `input[1...iN, :]`, encoded using `output_encoding`. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +An `N+1` dimensional potentially ragged integer tensor with shape +`[D1...DN, num_chars]`. +
+`output_encoding` + +Unicode encoding that should be used to encode each +codepoint sequence. Can be `"UTF-8"`, `"UTF-16-BE"`, or `"UTF-32-BE"`. +
+`errors` + +Specifies the response when an invalid codepoint is encountered +(optional). One of: +* `'replace'`: Replace invalid codepoint with the +`replacement_char`. (default) +* `'ignore'`: Skip invalid codepoints. +* `'strict'`: Raise an exception for any invalid codepoint. +
+`replacement_char`
+
+The replacement character codepoint to be used in place of
+any invalid input when `errors='replace'`. Any valid unicode codepoint may
+be used. The default value is the default unicode replacement character,
+U+FFFD (0xFFFD, decimal 65533).
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `N` dimensional `string` tensor with shape `[D1...DN]`. +
+ + +#### Example: + +``` +>>> input = tf.ragged.constant( +... [[71, 246, 246, 100, 110, 105, 103, 104, 116], [128522]]) +>>> print(unicode_encode(input, 'UTF-8')) +tf.Tensor([b'G\xc3\xb6\xc3\xb6dnight' b'\xf0\x9f\x98\x8a'], + shape=(2,), dtype=string) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/strings/unicode_script.md b/site/en/api_docs/python/tf/strings/unicode_script.md new file mode 100644 index 00000000000..3f7332cbec2 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/unicode_script.md @@ -0,0 +1,91 @@ +description: Determine the script codes of a given tensor of Unicode integer code points. + +
+ + +
+ +# tf.strings.unicode_script + + + + + + + + + +Determine the script codes of a given tensor of Unicode integer code points. + + + + + + + + + +This operation converts Unicode code points to script codes corresponding to +each code point. Script codes correspond to International Components for +Unicode (ICU) UScriptCode values. See http://icu-project.org/apiref/icu4c/uscript_8h.html. +Returns -1 (USCRIPT_INVALID_CODE) for invalid codepoints. Output shape will +match input shape. + +#### Examples: + + + +``` +>>> tf.strings.unicode_script([1, 31, 38]) + +``` + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `int32`. A Tensor of int32 Unicode code points. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `int32`. +
+ diff --git a/site/en/api_docs/python/tf/strings/unicode_split.md b/site/en/api_docs/python/tf/strings/unicode_split.md new file mode 100644 index 00000000000..42476372218 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/unicode_split.md @@ -0,0 +1,123 @@ +description: Splits each string in input into a sequence of Unicode code points. + +
+ + +
+ +# tf.strings.unicode_split + + + + + + + + + +Splits each string in `input` into a sequence of Unicode code points. + + + + + + + + + +`result[i1...iN, j]` is the substring of `input[i1...iN]` that encodes its +`j`th character, when decoded using `input_encoding`. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +An `N` dimensional potentially ragged `string` tensor with shape +`[D1...DN]`. `N` must be statically known. +
+`input_encoding` + +String name for the unicode encoding that should be used to +decode each string. +
+`errors` + +Specifies the response when an input string can't be converted +using the indicated encoding. One of: +* `'strict'`: Raise an exception for any illegal substrings. +* `'replace'`: Replace illegal substrings with `replacement_char`. +* `'ignore'`: Skip illegal substrings. +
+`replacement_char` + +The replacement codepoint to be used in place of invalid +substrings in `input` when `errors='replace'`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `N+1` dimensional `int32` tensor with shape `[D1...DN, (num_chars)]`. +The returned tensor is a tf.Tensor if `input` is a scalar, or a +tf.RaggedTensor otherwise. +
+ + +#### Example: + +``` +>>> input = [s.encode('utf8') for s in (u'G\xf6\xf6dnight', u'\U0001f60a')] +>>> tf.strings.unicode_split(input, 'UTF-8').to_list() +[[b'G', b'\xc3\xb6', b'\xc3\xb6', b'd', b'n', b'i', b'g', b'h', b't'], + [b'\xf0\x9f\x98\x8a']] +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/strings/unicode_split_with_offsets.md b/site/en/api_docs/python/tf/strings/unicode_split_with_offsets.md new file mode 100644 index 00000000000..28c0aae9395 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/unicode_split_with_offsets.md @@ -0,0 +1,138 @@ +description: Splits each string into a sequence of code points with start offsets. + +
+ + +
+ +# tf.strings.unicode_split_with_offsets + + + + + + + + + +Splits each string into a sequence of code points with start offsets. + + + + + + + + + +This op is similar to `tf.strings.decode(...)`, but it also returns the +start offset for each character in its respective string. This information +can be used to align the characters with the original byte sequence. + +Returns a tuple `(chars, start_offsets)` where: + +* `chars[i1...iN, j]` is the substring of `input[i1...iN]` that encodes its + `j`th character, when decoded using `input_encoding`. +* `start_offsets[i1...iN, j]` is the start byte offset for the `j`th + character in `input[i1...iN]`, when decoded using `input_encoding`. + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +An `N` dimensional potentially ragged `string` tensor with shape +`[D1...DN]`. `N` must be statically known. +
+`input_encoding` + +String name for the unicode encoding that should be used to +decode each string. +
+`errors` + +Specifies the response when an input string can't be converted +using the indicated encoding. One of: +* `'strict'`: Raise an exception for any illegal substrings. +* `'replace'`: Replace illegal substrings with `replacement_char`. +* `'ignore'`: Skip illegal substrings. +
+`replacement_char` + +The replacement codepoint to be used in place of invalid +substrings in `input` when `errors='replace'`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A tuple of `N+1` dimensional tensors `(codepoints, start_offsets)`. + +* `codepoints` is an `int32` tensor with shape `[D1...DN, (num_chars)]`. +* `offsets` is an `int64` tensor with shape `[D1...DN, (num_chars)]`. + +The returned tensors are tf.Tensors if `input` is a scalar, or +tf.RaggedTensors otherwise. +
+ + +#### Example: + +``` +>>> input = [s.encode('utf8') for s in (u'G\xf6\xf6dnight', u'\U0001f60a')] +>>> result = tf.strings.unicode_split_with_offsets(input, 'UTF-8') +>>> result[0].to_list() # character substrings +[[b'G', b'\xc3\xb6', b'\xc3\xb6', b'd', b'n', b'i', b'g', b'h', b't'], + [b'\xf0\x9f\x98\x8a']] +>>> result[1].to_list() # offsets +[[0, 1, 3, 5, 6, 7, 8, 9, 10], [0]] +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/strings/unicode_transcode.md b/site/en/api_docs/python/tf/strings/unicode_transcode.md new file mode 100644 index 00000000000..d958c9225e7 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/unicode_transcode.md @@ -0,0 +1,174 @@ +description: Transcode the input text from a source encoding to a destination encoding. + +
+ + +
+ +# tf.strings.unicode_transcode + + + + + + + + + +Transcode the input text from a source encoding to a destination encoding. + + + + + + + + + +The input is a string tensor of any shape. The output is a string tensor of +the same shape containing the transcoded strings. Output strings are always +valid unicode. If the input contains invalid encoding positions, the +`errors` attribute sets the policy for how to deal with them. If the default +error-handling policy is used, invalid formatting will be substituted in the +output by the `replacement_char`. If the errors policy is to `ignore`, any +invalid encoding positions in the input are skipped and not included in the +output. If it set to `strict` then any invalid formatting will result in an +InvalidArgument error. + +This operation can be used with `output_encoding = input_encoding` to enforce +correct formatting for inputs even if they are already in the desired encoding. + +If the input is prefixed by a Byte Order Mark needed to determine encoding +(e.g. if the encoding is UTF-16 and the BOM indicates big-endian), then that +BOM will be consumed and not emitted into the output. If the input encoding +is marked with an explicit endianness (e.g. UTF-16-BE), then the BOM is +interpreted as a non-breaking-space and is preserved in the output (including +always for UTF-8). + +The end result is that if the input is marked as an explicit endianness the +transcoding is faithful to all codepoints in the source. If it is not marked +with an explicit endianness, the BOM is not considered part of the string itself +but as metadata, and so is not preserved in the output. + +#### Examples: + + + +``` +>>> tf.strings.unicode_transcode(["Hello", "TensorFlow", "2.x"], "UTF-8", "UTF-16-BE") + +>>> tf.strings.unicode_transcode(["A", "B", "C"], "US ASCII", "UTF-8").numpy() +array([b'A', b'B', b'C'], dtype=object) +``` + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` of type `string`. +The text to be processed. Can have any shape. +
+`input_encoding` + +A `string`. +Text encoding of the input strings. This is any of the encodings supported +by ICU ucnv algorithmic converters. Examples: `"UTF-16", "US ASCII", "UTF-8"`. +
+`output_encoding` + +A `string` from: `"UTF-8", "UTF-16-BE", "UTF-32-BE"`. +The unicode encoding to use in the output. Must be one of +`"UTF-8", "UTF-16-BE", "UTF-32-BE"`. Multi-byte encodings will be big-endian. +
+`errors` + +An optional `string` from: `"strict", "replace", "ignore"`. Defaults to `"replace"`. +Error handling policy when there is invalid formatting found in the input. +The value of 'strict' will cause the operation to produce a InvalidArgument +error on any invalid input formatting. A value of 'replace' (the default) will +cause the operation to replace any invalid formatting in the input with the +`replacement_char` codepoint. A value of 'ignore' will cause the operation to +skip any invalid formatting in the input and produce no corresponding output +character. +
+`replacement_char`
+
+An optional `int`. Defaults to `65533`.
+The replacement character codepoint to be used in place of any invalid
+formatting in the input when `errors='replace'`. Any valid unicode codepoint may
+be used. The default value is the unicode replacement character,
+U+FFFD (0xFFFD, decimal 65533).
+
+Note that for UTF-8, passing a replacement character expressible in 1 byte, such
+as ' ', will preserve string alignment to the source since invalid bytes will be
+replaced with a 1-byte replacement. For UTF-16-BE and UTF-16-LE, any 1 or 2 byte
+replacement character will preserve byte alignment to the source.
+
+`replace_control_characters` + +An optional `bool`. Defaults to `False`. +Whether to replace the C0 control characters (00-1F) with the +`replacement_char`. Default is false. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/strings/unsorted_segment_join.md b/site/en/api_docs/python/tf/strings/unsorted_segment_join.md new file mode 100644 index 00000000000..4c20104c1c6 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/unsorted_segment_join.md @@ -0,0 +1,130 @@ +description: Joins the elements of inputs based on segment_ids. + +
+ + +
+
# tf.strings.unsorted_segment_join


Joins the elements of `inputs` based on `segment_ids`.


Computes the string join along segments of a tensor.
Given `segment_ids` with rank `N` and `data` with rank `N+M`:

    `output[i, k1...kM] = strings.join([data[j1...jN, k1...kM]])`

where the join is over all [j1...jN] such that `segment_ids[j1...jN] = i`.
Strings are joined in row-major order.

#### For example:

```python
inputs = [['Y', 'q', 'c'], ['Y', '6', '6'], ['p', 'G', 'a']]
output_array = tf.strings.unsorted_segment_join(inputs=inputs,
                                                segment_ids=[1, 0, 1],
                                                num_segments=2,
                                                separator=':')
# output_array ==> [['Y', '6', '6'], ['Y:p', 'q:G', 'c:a']]


inputs = ['this', 'is', 'a', 'test']
output_array = tf.strings.unsorted_segment_join(inputs=inputs,
                                                segment_ids=[0, 0, 0, 0],
                                                num_segments=1,
                                                separator=':')
# output_array ==> ['this:is:a:test']
```

+`inputs` + +A `Tensor` of type `string`. The input to be joined. +
+`segment_ids` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A tensor whose shape is a prefix of data.shape. Negative segment ids are not +supported. +
+`num_segments` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +A scalar. +
+`separator` + +An optional `string`. Defaults to `""`. +The separator to use when joining. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/strings/upper.md b/site/en/api_docs/python/tf/strings/upper.md new file mode 100644 index 00000000000..0f6c8a953e9 --- /dev/null +++ b/site/en/api_docs/python/tf/strings/upper.md @@ -0,0 +1,93 @@ +description: Converts all lowercase characters into their respective uppercase replacements. + +
+ + +
+
# tf.strings.upper


Converts all lowercase characters into their respective uppercase replacements.


#### Example:

```
>>> tf.strings.upper("CamelCase string and ALL CAPS")
<tf.Tensor: shape=(), dtype=string, numpy=b'CAMELCASE STRING AND ALL CAPS'>
```

+`input` + +A `Tensor` of type `string`. +
+`encoding` + +An optional `string`. Defaults to `""`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `string`. +
+ diff --git a/site/en/api_docs/python/tf/summary.md b/site/en/api_docs/python/tf/summary.md new file mode 100644 index 00000000000..39b69348c39 --- /dev/null +++ b/site/en/api_docs/python/tf/summary.md @@ -0,0 +1,113 @@ +description: Operations for writing summary data, for use in analysis and visualization. + +
+ + +
+ +# Module: tf.summary + + + + + + + + + +Operations for writing summary data, for use in analysis and visualization. + + +The tf.summary module provides APIs for writing summary data. This data can be +visualized in TensorBoard, the visualization toolkit that comes with TensorFlow. +See the [TensorBoard website](https://www.tensorflow.org/tensorboard) for more +detailed tutorials about how to use these APIs, or some quick examples below. + +Example usage with eager execution, the default in TF 2.0: + +```python +writer = tf.summary.create_file_writer("/tmp/mylogs") +with writer.as_default(): + for step in range(100): + # other model code would go here + tf.summary.scalar("my_metric", 0.5, step=step) + writer.flush() +``` + +Example usage with tf.function graph execution: + +```python +writer = tf.summary.create_file_writer("/tmp/mylogs") + +@tf.function +def my_func(step): + # other model code would go here + with writer.as_default(): + tf.summary.scalar("my_metric", 0.5, step=step) + +for step in range(100): + my_func(step) + writer.flush() +``` + +Example usage with legacy TF 1.x graph execution: + +```python +with tf.compat.v1.Graph().as_default(): + step = tf.Variable(0, dtype=tf.int64) + step_update = step.assign_add(1) + writer = tf.summary.create_file_writer("/tmp/mylogs") + with writer.as_default(): + tf.summary.scalar("my_metric", 0.5, step=step) + all_summary_ops = tf.compat.v1.summary.all_v2_summary_ops() + writer_flush = writer.flush() + + sess = tf.compat.v1.Session() + sess.run([writer.init(), step.initializer]) + for i in range(100): + sess.run(all_summary_ops) + sess.run(step_update) + sess.run(writer_flush) +``` + +## Modules + +[`experimental`](../tf/summary/experimental.md) module: Public API for tf.summary.experimental namespace. + +## Classes + +[`class SummaryWriter`](../tf/summary/SummaryWriter.md): Interface representing a stateful summary writer object. + +## Functions + +[`audio(...)`](../tf/summary/audio.md): Write an audio summary. + +[`create_file_writer(...)`](../tf/summary/create_file_writer.md): Creates a summary file writer for the given log directory. + +[`create_noop_writer(...)`](../tf/summary/create_noop_writer.md): Returns a summary writer that does nothing. + +[`flush(...)`](../tf/summary/flush.md): Forces summary writer to send any buffered data to storage. + +[`histogram(...)`](../tf/summary/histogram.md): Write a histogram summary. + +[`image(...)`](../tf/summary/image.md): Write an image summary. + +[`record_if(...)`](../tf/summary/record_if.md): Sets summary recording on or off per the provided boolean value. + +[`scalar(...)`](../tf/summary/scalar.md): Write a scalar summary. + +[`text(...)`](../tf/summary/text.md): Write a text summary. + +[`trace_export(...)`](../tf/summary/trace_export.md): Stops and exports the active trace as a Summary and/or profile file. + +[`trace_off(...)`](../tf/summary/trace_off.md): Stops the current trace and discards any collected information. + +[`trace_on(...)`](../tf/summary/trace_on.md): Starts a trace to record computation graphs and profiling information. + +[`write(...)`](../tf/summary/write.md): Writes a generic summary to the default SummaryWriter if one exists. + diff --git a/site/en/api_docs/python/tf/summary/SummaryWriter.md b/site/en/api_docs/python/tf/summary/SummaryWriter.md new file mode 100644 index 00000000000..6bcbadb7079 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/SummaryWriter.md @@ -0,0 +1,94 @@ +description: Interface representing a stateful summary writer object. + +
+ + + + + + + +
+ +# tf.summary.SummaryWriter + + + + + + + + + +Interface representing a stateful summary writer object. + + + + +## Methods + +

as_default

+ +View source + + + +Returns a context manager that enables summary writing. + + +
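
A minimal usage sketch (not from the TensorFlow sources); the log directory
`/tmp/mylogs` is a placeholder:

```python
import tensorflow as tf

# Create a concrete writer; any SummaryWriter implementation works the same way.
writer = tf.summary.create_file_writer("/tmp/mylogs")

# Summaries written inside the context go to this writer.
with writer.as_default():
    tf.summary.scalar("accuracy", 0.9, step=0)
writer.flush()
```
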

close

+ +View source + + + +Flushes and closes the summary writer. + + +

flush

+ +View source + + + +Flushes any buffered data. + + +

init

+ +View source + + + +Initializes the summary writer. + + +

set_as_default

+ +View source + + + +Enables this summary writer for the current thread. + + + + diff --git a/site/en/api_docs/python/tf/summary/audio.md b/site/en/api_docs/python/tf/summary/audio.md new file mode 100644 index 00000000000..8c5584dfeb8 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/audio.md @@ -0,0 +1,140 @@ +description: Write an audio summary. + +
+ + +
+ +# tf.summary.audio + + + + + + + + + +Write an audio summary. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for this summary. The summary tag used for TensorBoard will +be this name prefixed by any active name scopes. +
+`data` + +A `Tensor` representing audio data with shape `[k, t, c]`, +where `k` is the number of audio clips, `t` is the number of +frames, and `c` is the number of channels. Elements should be +floating-point values in `[-1.0, 1.0]`. Any of the dimensions may +be statically unknown (i.e., `None`). +
+`sample_rate` + +An `int` or rank-0 `int32` `Tensor` that represents the +sample rate, in Hz. Must be positive. +
+`step` + +Explicit `int64`-castable monotonic step value for this summary. If +omitted, this defaults to tf.summary.experimental.get_step(), which must +not be None. +
+`max_outputs` + +Optional `int` or rank-0 integer `Tensor`. At most this +many audio clips will be emitted at each step. When more than +`max_outputs` many clips are provided, the first `max_outputs` +many clips will be used and the rest silently discarded. +
+`encoding` + +Optional constant `str` for the desired encoding. Only "wav" +is currently supported, but this is not guaranteed to remain the +default, so if you want "wav" in particular, set this explicitly. +
+`description` + +Optional long-form description for this summary, as a +constant `str`. Markdown is supported. Defaults to empty. +
+ + + + + + + + + + + +
+True on success, or false if no summary was emitted because no default +summary writer was available. +
+ + + + + + + + + + + + +
+`ValueError` + +if a default writer exists, but no step was provided and +tf.summary.experimental.get_step() is None. +
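
A small usage sketch (not from the TensorFlow sources); the log directory and
the random waveform are placeholders for real audio data:

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/audio_logs")  # placeholder path
with writer.as_default():
    # One 1-second mono clip at 16 kHz; values must lie in [-1.0, 1.0].
    waveform = tf.random.uniform([1, 16000, 1], minval=-1.0, maxval=1.0)
    tf.summary.audio("noise_clip", waveform, sample_rate=16000, step=0)
```
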
+ diff --git a/site/en/api_docs/python/tf/summary/create_file_writer.md b/site/en/api_docs/python/tf/summary/create_file_writer.md new file mode 100644 index 00000000000..847891863ac --- /dev/null +++ b/site/en/api_docs/python/tf/summary/create_file_writer.md @@ -0,0 +1,93 @@ +description: Creates a summary file writer for the given log directory. + +
+ + +
+ +# tf.summary.create_file_writer + + + + + + + + + +Creates a summary file writer for the given log directory. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`logdir` + +a string specifying the directory in which to write an event file. +
+`max_queue` + +the largest number of summaries to keep in a queue; will +flush once the queue gets bigger than this. Defaults to 10. +
+`flush_millis` + +the largest interval between flushes. Defaults to 120,000. +
+`filename_suffix` + +optional suffix for the event file name. Defaults to `.v2`. +
+`name` + +a name for the op that creates the writer. +
+ + + + + + + + + + + +
+A SummaryWriter object. +
+ diff --git a/site/en/api_docs/python/tf/summary/create_noop_writer.md b/site/en/api_docs/python/tf/summary/create_noop_writer.md new file mode 100644 index 00000000000..2d6c2819dc7 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/create_noop_writer.md @@ -0,0 +1,33 @@ +description: Returns a summary writer that does nothing. + +
+ + +
+ +# tf.summary.create_noop_writer + + + + + + + + + +Returns a summary writer that does nothing. + + + + + + + +This is useful as a placeholder in code that expects a context manager. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/summary/experimental.md b/site/en/api_docs/python/tf/summary/experimental.md new file mode 100644 index 00000000000..9e860e2cdef --- /dev/null +++ b/site/en/api_docs/python/tf/summary/experimental.md @@ -0,0 +1,31 @@ +description: Public API for tf.summary.experimental namespace. + +
+ + +
+ +# Module: tf.summary.experimental + + + + + + + + + +Public API for tf.summary.experimental namespace. + + + +## Functions + +[`get_step(...)`](../../tf/summary/experimental/get_step.md): Returns the default summary step for the current thread. + +[`set_step(...)`](../../tf/summary/experimental/set_step.md): Sets the default summary step for the current thread. + +[`summary_scope(...)`](../../tf/summary/experimental/summary_scope.md): Experimental context manager for use when defining a custom summary op. + +[`write_raw_pb(...)`](../../tf/summary/experimental/write_raw_pb.md): Writes a summary using raw tf.compat.v1.Summary protocol buffers. + diff --git a/site/en/api_docs/python/tf/summary/experimental/get_step.md b/site/en/api_docs/python/tf/summary/experimental/get_step.md new file mode 100644 index 00000000000..c44b04e14b3 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/experimental/get_step.md @@ -0,0 +1,46 @@ +description: Returns the default summary step for the current thread. + +
+ + +
+ +# tf.summary.experimental.get_step + + + + + + + + + +Returns the default summary step for the current thread. + + + + + + + + + + + + + + + + +
+The step set by tf.summary.experimental.set_step() if one has been set, +otherwise None. +
+ diff --git a/site/en/api_docs/python/tf/summary/experimental/set_step.md b/site/en/api_docs/python/tf/summary/experimental/set_step.md new file mode 100644 index 00000000000..be6af149078 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/experimental/set_step.md @@ -0,0 +1,58 @@ +description: Sets the default summary step for the current thread. + +
+ + +
+ +# tf.summary.experimental.set_step + + + + + + + + + +Sets the default summary step for the current thread. + + + + + + + +For convenience, this function sets a default value for the `step` parameter +used in summary-writing functions elsewhere in the API so that it need not +be explicitly passed in every such invocation. The value can be a constant +or a variable, and can be retrieved via tf.summary.experimental.get_step(). + +Note: when using this with @tf.functions, the step value will be captured at +the time the function is traced, so changes to the step outside the function +will not be reflected inside the function unless using a tf.Variable step. + + + + + + + + + + +
+`step` + +An `int64`-castable default step value, or None to unset. +
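
A brief sketch of setting a default step (not from the TensorFlow sources);
the log directory is a placeholder, and a `tf.Variable` is used so that step
updates remain visible inside `tf.function`s, per the note above:

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/step_logs")  # placeholder path

step = tf.Variable(0, dtype=tf.int64)
tf.summary.experimental.set_step(step)

with writer.as_default():
    for _ in range(3):
        tf.summary.scalar("loss", 0.5)  # no explicit step: the default is used
        step.assign_add(1)
```
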
+ diff --git a/site/en/api_docs/python/tf/summary/experimental/summary_scope.md b/site/en/api_docs/python/tf/summary/experimental/summary_scope.md new file mode 100644 index 00000000000..3299a102af2 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/experimental/summary_scope.md @@ -0,0 +1,87 @@ +description: Experimental context manager for use when defining a custom summary op. + +
+ + +
+ +# tf.summary.experimental.summary_scope + + + + + + + + + +Experimental context manager for use when defining a custom summary op. + + + + + + + +This behaves similarly to tf.name_scope, except that it returns a generated +summary tag in addition to the scope name. The tag is structurally similar to +the scope name - derived from the user-provided name, prefixed with enclosing +name scopes if any - but we relax the constraint that it be uniquified, as +well as the character set limitation (so the user-provided name can contain +characters not legal for scope names; in the scope name these are removed). + +This makes the summary tag more predictable and consistent for the user. + +For example, to define a new summary op called `my_op`: + +```python +def my_op(name, my_value, step): + with tf.summary.summary_scope(name, "MyOp", [my_value]) as (tag, scope): + my_value = tf.convert_to_tensor(my_value) + return tf.summary.write(tag, my_value, step=step) +``` + + + + + + + + + + + + + + + + +
+`name` + +string name for the summary. +
+`default_name` + +Optional; if provided, used as default name of the summary. +
+`values` + +Optional; passed as `values` parameter to name_scope. +
+ + + +#### Yields: + +A tuple `(tag, scope)` as described above. diff --git a/site/en/api_docs/python/tf/summary/experimental/write_raw_pb.md b/site/en/api_docs/python/tf/summary/experimental/write_raw_pb.md new file mode 100644 index 00000000000..5128d1d0dfe --- /dev/null +++ b/site/en/api_docs/python/tf/summary/experimental/write_raw_pb.md @@ -0,0 +1,102 @@ +description: Writes a summary using raw tf.compat.v1.Summary protocol buffers. + +
+ + +
+ +# tf.summary.experimental.write_raw_pb + + + + + + + + + +Writes a summary using raw tf.compat.v1.Summary protocol buffers. + + + + + + + +Experimental: this exists to support the usage of V1-style manual summary +writing (via the construction of a tf.compat.v1.Summary protocol buffer) +with the V2 summary writing API. + + + + + + + + + + + + + + + + +
+`tensor` + +the string Tensor holding one or more serialized `Summary` protobufs +
+`step` + +Explicit `int64`-castable monotonic step value for this summary. If +omitted, this defaults to tf.summary.experimental.get_step(), which must +not be None. +
+`name` + +Optional string name for this op. +
+ + + + + + + + + + + +
+True on success, or false if no summary was written because no default +summary writer was available. +
+ + + + + + + + + + + + +
+
`ValueError`
+
+if a default writer exists, but no step was provided and
tf.summary.experimental.get_step() is None.
+
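
A minimal sketch of writing a hand-built legacy proto (not from the TensorFlow
sources); the tag, value, and log directory are placeholders:

```python
import tensorflow as tf

# Build a V1-style Summary proto by hand.
pb = tf.compat.v1.Summary()
pb.value.add(tag="legacy/accuracy", simple_value=0.75)

writer = tf.summary.create_file_writer("/tmp/raw_logs")  # placeholder path
with writer.as_default():
    # The serialized proto is passed as a string tensor.
    tf.summary.experimental.write_raw_pb(
        tf.constant(pb.SerializeToString()), step=0)
```
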
+ diff --git a/site/en/api_docs/python/tf/summary/flush.md b/site/en/api_docs/python/tf/summary/flush.md new file mode 100644 index 00000000000..b315d73b291 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/flush.md @@ -0,0 +1,74 @@ +description: Forces summary writer to send any buffered data to storage. + +
+ + +
+ +# tf.summary.flush + + + + + + + + + +Forces summary writer to send any buffered data to storage. + + + + + + + +This operation blocks until that finishes. + + + + + + + + + + + + + +
+
`writer`
+
+The tf.summary.SummaryWriter resource to flush. If this parameter is None,
the thread-default writer is used; if no such writer exists, a tf.no_op is
returned instead.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The created tf.Operation. +
+ diff --git a/site/en/api_docs/python/tf/summary/histogram.md b/site/en/api_docs/python/tf/summary/histogram.md new file mode 100644 index 00000000000..3f0ea2d144c --- /dev/null +++ b/site/en/api_docs/python/tf/summary/histogram.md @@ -0,0 +1,119 @@ +description: Write a histogram summary. + +
+ + +
+ +# tf.summary.histogram + + + + + + + + + +Write a histogram summary. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for this summary. The summary tag used for TensorBoard will +be this name prefixed by any active name scopes. +
+`data` + +A `Tensor` of any shape. Must be castable to `float64`. +
+`step` + +Explicit `int64`-castable monotonic step value for this summary. If +omitted, this defaults to tf.summary.experimental.get_step(), which must +not be None. +
+`buckets` + +Optional positive `int`. The output will have this +many buckets, except in two edge cases. If there is no data, then +there are no buckets. If there is data but all points have the +same value, then there is one bucket whose left and right +endpoints are the same. +
+`description` + +Optional long-form description for this summary, as a +constant `str`. Markdown is supported. Defaults to empty. +
+ + + + + + + + + + + +
+True on success, or false if no summary was emitted because no default +summary writer was available. +
+ + + + + + + + + + + + +
+`ValueError` + +if a default writer exists, but no step was provided and +tf.summary.experimental.get_step() is None. +
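
A small usage sketch (not from the TensorFlow sources); the log directory and
the synthetic values are placeholders:

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/hist_logs")  # placeholder path
with writer.as_default():
    for step in range(5):
        # Log the distribution of some synthetic activations.
        values = tf.random.normal([1000], mean=0.0, stddev=1.0 + step)
        tf.summary.histogram("activations", values, step=step, buckets=30)
```
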
+ diff --git a/site/en/api_docs/python/tf/summary/image.md b/site/en/api_docs/python/tf/summary/image.md new file mode 100644 index 00000000000..c9b122f982e --- /dev/null +++ b/site/en/api_docs/python/tf/summary/image.md @@ -0,0 +1,123 @@ +description: Write an image summary. + +
+ + +
+ +# tf.summary.image + + + + + + + + + +Write an image summary. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for this summary. The summary tag used for TensorBoard will +be this name prefixed by any active name scopes. +
+`data` + +A `Tensor` representing pixel data with shape `[k, h, w, c]`, +where `k` is the number of images, `h` and `w` are the height and +width of the images, and `c` is the number of channels, which +should be 1, 2, 3, or 4 (grayscale, grayscale with alpha, RGB, RGBA). +Any of the dimensions may be statically unknown (i.e., `None`). +Floating point data will be clipped to the range [0,1). +
+`step` + +Explicit `int64`-castable monotonic step value for this summary. If +omitted, this defaults to tf.summary.experimental.get_step(), which must +not be None. +
+`max_outputs` + +Optional `int` or rank-0 integer `Tensor`. At most this +many images will be emitted at each step. When more than +`max_outputs` many images are provided, the first `max_outputs` many +images will be used and the rest silently discarded. +
+`description` + +Optional long-form description for this summary, as a +constant `str`. Markdown is supported. Defaults to empty. +
+ + + + + + + + + + + +
+True on success, or false if no summary was emitted because no default +summary writer was available. +
+ + + + + + + + + + + + +
+`ValueError` + +if a default writer exists, but no step was provided and +tf.summary.experimental.get_step() is None. +
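
A small usage sketch (not from the TensorFlow sources); the log directory and
the random pixel data are placeholders for real images:

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/image_logs")  # placeholder path
with writer.as_default():
    # Four 28x28 single-channel images with float values in [0, 1).
    images = tf.random.uniform([4, 28, 28, 1])
    tf.summary.image("samples", images, step=0, max_outputs=4)
```
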
+ diff --git a/site/en/api_docs/python/tf/summary/record_if.md b/site/en/api_docs/python/tf/summary/record_if.md new file mode 100644 index 00000000000..1e39d61eee4 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/record_if.md @@ -0,0 +1,60 @@ +description: Sets summary recording on or off per the provided boolean value. + +
+ + +
+
# tf.summary.record_if


Sets summary recording on or off per the provided boolean value.


The provided value can be a Python boolean, a scalar boolean Tensor, or
a callable providing such a value; if a callable is passed it will be
invoked on demand to determine whether summary writing will occur.

+`condition` + +can be True, False, a bool Tensor, or a callable providing such. +
+ + + +#### Yields: + +Returns a context manager that sets this value on enter and restores the +previous value on exit. diff --git a/site/en/api_docs/python/tf/summary/scalar.md b/site/en/api_docs/python/tf/summary/scalar.md new file mode 100644 index 00000000000..98f48b84407 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/scalar.md @@ -0,0 +1,108 @@ +description: Write a scalar summary. + +
+ + +
+ +# tf.summary.scalar + + + + + + + + + +Write a scalar summary. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for this summary. The summary tag used for TensorBoard will +be this name prefixed by any active name scopes. +
+`data` + +A real numeric scalar value, convertible to a `float32` Tensor. +
+`step` + +Explicit `int64`-castable monotonic step value for this summary. If +omitted, this defaults to tf.summary.experimental.get_step(), which must +not be None. +
+`description` + +Optional long-form description for this summary, as a +constant `str`. Markdown is supported. Defaults to empty. +
+ + + + + + + + + + + +
+True on success, or false if no summary was written because no default +summary writer was available. +
+ + + + + + + + + + + + +
+`ValueError` + +if a default writer exists, but no step was provided and +tf.summary.experimental.get_step() is None. +
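
A minimal training-loop sketch (not from the TensorFlow sources); the log
directory and the fake loss values are placeholders for real metrics:

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/scalar_logs")  # placeholder path
with writer.as_default():
    for step in range(100):
        fake_loss = 1.0 / (step + 1)  # stand-in for a real training metric
        tf.summary.scalar("loss", fake_loss, step=step)
writer.flush()
```
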
+ diff --git a/site/en/api_docs/python/tf/summary/text.md b/site/en/api_docs/python/tf/summary/text.md new file mode 100644 index 00000000000..55a2c50870a --- /dev/null +++ b/site/en/api_docs/python/tf/summary/text.md @@ -0,0 +1,108 @@ +description: Write a text summary. + +
+ + +
+ +# tf.summary.text + + + + + + + + + +Write a text summary. + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`name` + +A name for this summary. The summary tag used for TensorBoard will +be this name prefixed by any active name scopes. +
+`data` + +A UTF-8 string tensor value. +
+`step` + +Explicit `int64`-castable monotonic step value for this summary. If +omitted, this defaults to tf.summary.experimental.get_step(), which must +not be None. +
+`description` + +Optional long-form description for this summary, as a +constant `str`. Markdown is supported. Defaults to empty. +
+ + + + + + + + + + + +
+True on success, or false if no summary was emitted because no default +summary writer was available. +
+ + + + + + + + + + + + +
+`ValueError` + +if a default writer exists, but no step was provided and +tf.summary.experimental.get_step() is None. +
+ diff --git a/site/en/api_docs/python/tf/summary/trace_export.md b/site/en/api_docs/python/tf/summary/trace_export.md new file mode 100644 index 00000000000..74a759cb566 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/trace_export.md @@ -0,0 +1,89 @@ +description: Stops and exports the active trace as a Summary and/or profile file. + +
+ + +
+ +# tf.summary.trace_export + + + + + + + + + +Stops and exports the active trace as a Summary and/or profile file. + + + + + + + +Stops the trace and exports all metadata collected during the trace to the +default SummaryWriter, if one has been set. + + + + + + + + + + + + + + + + +
+`name` + +A name for the summary to be written. +
+`step` + +Explicit `int64`-castable monotonic step value for this summary. If +omitted, this defaults to tf.summary.experimental.get_step(), which must +not be None. +
+`profiler_outdir` + +Output directory for profiler. This is only used when the +profiler was enabled when the trace was started. In that case, if there is +a logdir-based default SummaryWriter, this defaults to the same directory, +but otherwise the argument must be passed. +
+ + + + + + + + + + + + +
+`ValueError` + +if a default writer exists, but no step was provided and +tf.summary.experimental.get_step() is None. +
+ diff --git a/site/en/api_docs/python/tf/summary/trace_off.md b/site/en/api_docs/python/tf/summary/trace_off.md new file mode 100644 index 00000000000..a69fc3e33e9 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/trace_off.md @@ -0,0 +1,31 @@ +description: Stops the current trace and discards any collected information. + +
+ + +
+ +# tf.summary.trace_off + + + + + + + + + +Stops the current trace and discards any collected information. + + + + + + diff --git a/site/en/api_docs/python/tf/summary/trace_on.md b/site/en/api_docs/python/tf/summary/trace_on.md new file mode 100644 index 00000000000..9930e825441 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/trace_on.md @@ -0,0 +1,70 @@ +description: Starts a trace to record computation graphs and profiling information. + +
+ + +
+
# tf.summary.trace_on


Starts a trace to record computation graphs and profiling information.


Must be invoked in eager mode.

When enabled, the TensorFlow runtime will collect information that can later be
exported and consumed by TensorBoard. The trace is activated across the entire
TensorFlow runtime and affects all threads of execution.

To stop the trace and export the collected information, use
tf.summary.trace_export. To stop the trace without exporting, use
tf.summary.trace_off.

+`graph` + +If True, enables collection of executed graphs. It includes ones from +tf.function invocation and ones from the legacy graph mode. The default +is True. +
+`profiler` + +If True, enables the advanced profiler. Enabling profiler +implicitly enables the graph collection. The profiler may incur a high +memory overhead. The default is False. +
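
A short end-to-end sketch combining tf.summary.trace_on and
tf.summary.trace_export (not from the TensorFlow sources); the function,
the log directory, and the trace name are placeholders:

```python
import tensorflow as tf

@tf.function
def square(x):
    return x * x

writer = tf.summary.create_file_writer("/tmp/trace_logs")  # placeholder path

tf.summary.trace_on(graph=True, profiler=False)
square(tf.constant(3.0))  # run once so the tf.function graph is traced
with writer.as_default():
    # Exporting also stops the active trace.
    tf.summary.trace_export(name="square_trace", step=0)
```
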
+ diff --git a/site/en/api_docs/python/tf/summary/write.md b/site/en/api_docs/python/tf/summary/write.md new file mode 100644 index 00000000000..9d30b8f7705 --- /dev/null +++ b/site/en/api_docs/python/tf/summary/write.md @@ -0,0 +1,120 @@ +description: Writes a generic summary to the default SummaryWriter if one exists. + +
+ + +
+ +# tf.summary.write + + + + + + + + + +Writes a generic summary to the default SummaryWriter if one exists. + + + + + + + +This exists primarily to support the definition of type-specific summary ops +like scalar() and image(), and is not intended for direct use unless defining +a new type-specific summary op. + + + + + + + + + + + + + + + + + + + + + + +
+`tag` + +string tag used to identify the summary (e.g. in TensorBoard), usually +generated with `tf.summary.summary_scope` +
+`tensor` + +the Tensor holding the summary data to write or a callable that +returns this Tensor. If a callable is passed, it will only be called when +a default SummaryWriter exists and the recording condition specified by +`record_if()` is met. +
+`step` + +Explicit `int64`-castable monotonic step value for this summary. If +omitted, this defaults to tf.summary.experimental.get_step(), which must +not be None. +
+`metadata` + +Optional SummaryMetadata, as a proto or serialized bytes +
+`name` + +Optional string name for this op. +
+ + + + + + + + + + + +
+True on success, or false if no summary was written because no default +summary writer was available. +
+ + + + + + + + + + + + +
+`ValueError` + +if a default writer exists, but no step was provided and +tf.summary.experimental.get_step() is None. +
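
A minimal sketch of the low-level entry point (not from the TensorFlow
sources); the tag, tensor, and log directory are placeholders. Most users
would call the type-specific ops such as tf.summary.scalar instead:

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/generic_logs")  # placeholder path
with writer.as_default():
    # Write a raw tensor under an explicit tag.
    tf.summary.write("my_tag", tf.constant([1.0, 2.0, 3.0]), step=0)
```
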
+ diff --git a/site/en/api_docs/python/tf/switch_case.md b/site/en/api_docs/python/tf/switch_case.md new file mode 100644 index 00000000000..c758bc634a8 --- /dev/null +++ b/site/en/api_docs/python/tf/switch_case.md @@ -0,0 +1,182 @@ +description: Create a switch/case operation, i.e. an integer-indexed conditional. + +
+ + +
+ +# tf.switch_case + + + + + + + + + +Create a switch/case operation, i.e. an integer-indexed conditional. + + + + + + + + + +See also tf.case. + +This op can be substantially more efficient than tf.case when exactly one +branch will be selected. tf.switch_case is more like a C++ switch/case +statement than tf.case, which is more like an if/elif/elif/else chain. + +The `branch_fns` parameter is either a dict from `int` to callables, or list +of (`int`, callable) pairs, or simply a list of callables (in which case the +index is implicitly the key). The `branch_index` `Tensor` is used to select an +element in `branch_fns` with matching `int` key, falling back to `default` +if none match, or `max(keys)` if no `default` is provided. The keys must form +a contiguous set from `0` to `len(branch_fns) - 1`. + +tf.switch_case supports nested structures as implemented in tf.nest. All +callables must return the same (possibly nested) value structure of lists, +tuples, and/or named tuples. + +**Example:** + +#### Pseudocode: + + + +```c++ +switch (branch_index) { // c-style switch + case 0: return 17; + case 1: return 31; + default: return -1; +} +``` +or +```python +branches = {0: lambda: 17, 1: lambda: 31} +branches.get(branch_index, lambda: -1)() +``` + +#### Expressions: + + + +```python +def f1(): return tf.constant(17) +def f2(): return tf.constant(31) +def f3(): return tf.constant(-1) +r = tf.switch_case(branch_index, branch_fns={0: f1, 1: f2}, default=f3) +# Equivalent: tf.switch_case(branch_index, branch_fns={0: f1, 1: f2, 2: f3}) +``` + + + + + + + + + + + + + + + + + + + +
+`branch_index` + +An int Tensor specifying which of `branch_fns` should be +executed. +
+`branch_fns` + +A `dict` mapping `int`s to callables, or a `list` of +(`int`, callable) pairs, or simply a list of callables (in which case the +index serves as the key). Each callable must return a matching structure +of tensors. +
+`default` + +Optional callable that returns a structure of tensors. +
+`name` + +A name for this operation (optional). +
+ + + + + + + + + + + +
+The tensors returned by the callable identified by `branch_index`, or those +returned by `default` if no key matches and `default` was provided, or those +returned by the max-keyed `branch_fn` if no `default` is provided. +
+ + + + + + + + + + + + + + + + + + +
+`TypeError` + +If `branch_fns` is not a list/dictionary. +
+`TypeError` + +If `branch_fns` is a list but does not contain 2-tuples or +callables. +
+
`TypeError`
+
+If `branch_fns[i]` is not callable for any `i`, or `default` is not
callable.
+
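
A runnable version of the expressions above, as a sketch (the branch index
value is arbitrary):

```python
import tensorflow as tf

def f1(): return tf.constant(17)
def f2(): return tf.constant(31)
def f3(): return tf.constant(-1)

branch_index = tf.constant(1)
r = tf.switch_case(branch_index, branch_fns={0: f1, 1: f2}, default=f3)
print(r.numpy())  # 31
```
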
+ diff --git a/site/en/api_docs/python/tf/sysconfig.md b/site/en/api_docs/python/tf/sysconfig.md new file mode 100644 index 00000000000..7d1f4b79823 --- /dev/null +++ b/site/en/api_docs/python/tf/sysconfig.md @@ -0,0 +1,37 @@ +description: System configuration library. + +
+ + + + +
+ +# Module: tf.sysconfig + + + + + + + + + +System configuration library. + + + +## Functions + +[`get_compile_flags(...)`](../tf/sysconfig/get_compile_flags.md): Get the compilation flags for custom operators. + +[`get_include(...)`](../tf/sysconfig/get_include.md): Get the directory containing the TensorFlow C++ header files. + +[`get_lib(...)`](../tf/sysconfig/get_lib.md): Get the directory containing the TensorFlow framework library. + +[`get_link_flags(...)`](../tf/sysconfig/get_link_flags.md): Get the link flags for custom operators. + +## Other Members + +* `CXX11_ABI_FLAG = 0` +* `MONOLITHIC_BUILD = 0` diff --git a/site/en/api_docs/python/tf/sysconfig/get_compile_flags.md b/site/en/api_docs/python/tf/sysconfig/get_compile_flags.md new file mode 100644 index 00000000000..9a21fbaf4c0 --- /dev/null +++ b/site/en/api_docs/python/tf/sysconfig/get_compile_flags.md @@ -0,0 +1,56 @@ +description: Get the compilation flags for custom operators. + +
+ + +
+ +# tf.sysconfig.get_compile_flags + + + + + + + + + +Get the compilation flags for custom operators. + + + + + + + + + + + + + + + + + + +
+The compilation flags. +
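
A small sketch of how the flags are typically used when building a custom op
(not from the TensorFlow sources); `zero_out.cc` and the exact compiler
invocation are hypothetical:

```python
import tensorflow as tf

# Flags come back as lists of strings, ready to splice into a compiler call.
cflags = " ".join(tf.sysconfig.get_compile_flags())
lflags = " ".join(tf.sysconfig.get_link_flags())

print(f"g++ -std=c++14 -shared zero_out.cc -o zero_out.so -fPIC {cflags} {lflags} -O2")
```
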
+ diff --git a/site/en/api_docs/python/tf/sysconfig/get_include.md b/site/en/api_docs/python/tf/sysconfig/get_include.md new file mode 100644 index 00000000000..e03908c8666 --- /dev/null +++ b/site/en/api_docs/python/tf/sysconfig/get_include.md @@ -0,0 +1,56 @@ +description: Get the directory containing the TensorFlow C++ header files. + +
+ + +
+ +# tf.sysconfig.get_include + + + + + + + + + +Get the directory containing the TensorFlow C++ header files. + + + + + + + + + + + + + + + + + + +
+The directory as string. +
+ diff --git a/site/en/api_docs/python/tf/sysconfig/get_lib.md b/site/en/api_docs/python/tf/sysconfig/get_lib.md new file mode 100644 index 00000000000..c7f92ead6e1 --- /dev/null +++ b/site/en/api_docs/python/tf/sysconfig/get_lib.md @@ -0,0 +1,56 @@ +description: Get the directory containing the TensorFlow framework library. + +
+ + +
+ +# tf.sysconfig.get_lib + + + + + + + + + +Get the directory containing the TensorFlow framework library. + + + + + + + + + + + + + + + + + + +
+The directory as string. +
+ diff --git a/site/en/api_docs/python/tf/sysconfig/get_link_flags.md b/site/en/api_docs/python/tf/sysconfig/get_link_flags.md new file mode 100644 index 00000000000..466a59a08c6 --- /dev/null +++ b/site/en/api_docs/python/tf/sysconfig/get_link_flags.md @@ -0,0 +1,56 @@ +description: Get the link flags for custom operators. + +
+ + +
+ +# tf.sysconfig.get_link_flags + + + + + + + + + +Get the link flags for custom operators. + + + + + + + + + + + + + + + + + + +
+The link flags. +
+ diff --git a/site/en/api_docs/python/tf/tensor_scatter_nd_add.md b/site/en/api_docs/python/tf/tensor_scatter_nd_add.md new file mode 100644 index 00000000000..50535c385a9 --- /dev/null +++ b/site/en/api_docs/python/tf/tensor_scatter_nd_add.md @@ -0,0 +1,155 @@ +description: Adds sparse updates to an existing tensor according to indices. + +
+ + +
+ +# tf.tensor_scatter_nd_add + + + + + + + + + +Adds sparse `updates` to an existing tensor according to `indices`. + + + + + + + + + +This operation creates a new tensor by adding sparse `updates` to the passed +in `tensor`. +This operation is very similar to `tf.scatter_nd_add`, except that the updates +are added onto an existing tensor (as opposed to a variable). If the memory +for the existing tensor cannot be re-used, a copy is made and updated. + +`indices` is an integer tensor containing indices into a new tensor of shape +`shape`. The last dimension of `indices` can be at most the rank of `shape`: + + indices.shape[-1] <= shape.rank + +The last dimension of `indices` corresponds to indices into elements +(if `indices.shape[-1] = shape.rank`) or slices +(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of +`shape`. `updates` is a tensor with shape + + indices.shape[:-1] + shape[indices.shape[-1]:] + +The simplest form of tensor_scatter_add is to add individual elements to a +tensor by index. For example, say we want to add 4 elements in a rank-1 +tensor with 8 elements. + +In Python, this scatter add operation would look like this: + +```python + indices = tf.constant([[4], [3], [1], [7]]) + updates = tf.constant([9, 10, 11, 12]) + tensor = tf.ones([8], dtype=tf.int32) + updated = tf.tensor_scatter_nd_add(tensor, indices, updates) + print(updated) +``` + +The resulting tensor would look like this: + + [1, 12, 1, 11, 10, 1, 1, 13] + +We can also, insert entire slices of a higher rank tensor all at once. For +example, if we wanted to insert two slices in the first dimension of a +rank-3 tensor with two matrices of new values. + +In Python, this scatter add operation would look like this: + +```python + indices = tf.constant([[0], [2]]) + updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]], + [[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]]]) + tensor = tf.ones([4, 4, 4],dtype=tf.int32) + updated = tf.tensor_scatter_nd_add(tensor, indices, updates) + print(updated) +``` + +The resulting tensor would look like this: + + [[[6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8], [9, 9, 9, 9]], + [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], + [[6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8], [9, 9, 9, 9]], + [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]] + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, the index is ignored. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Tensor to copy/update. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`updates` + +A `Tensor`. Must have the same type as `tensor`. +Updates to scatter into output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/tensor_scatter_nd_sub.md b/site/en/api_docs/python/tf/tensor_scatter_nd_sub.md new file mode 100644 index 00000000000..d4f03a5bd08 --- /dev/null +++ b/site/en/api_docs/python/tf/tensor_scatter_nd_sub.md @@ -0,0 +1,155 @@ +description: Subtracts sparse updates from an existing tensor according to indices. + +
+ + +
+ +# tf.tensor_scatter_nd_sub + + + + + + + + + +Subtracts sparse `updates` from an existing tensor according to `indices`. + + + + + + + + + +This operation creates a new tensor by subtracting sparse `updates` from the +passed in `tensor`. +This operation is very similar to `tf.scatter_nd_sub`, except that the updates +are subtracted from an existing tensor (as opposed to a variable). If the memory +for the existing tensor cannot be re-used, a copy is made and updated. + +`indices` is an integer tensor containing indices into a new tensor of shape +`shape`. The last dimension of `indices` can be at most the rank of `shape`: + + indices.shape[-1] <= shape.rank + +The last dimension of `indices` corresponds to indices into elements +(if `indices.shape[-1] = shape.rank`) or slices +(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of +`shape`. `updates` is a tensor with shape + + indices.shape[:-1] + shape[indices.shape[-1]:] + +The simplest form of tensor_scatter_sub is to subtract individual elements +from a tensor by index. For example, say we want to insert 4 scattered elements +in a rank-1 tensor with 8 elements. + +In Python, this scatter subtract operation would look like this: + +```python + indices = tf.constant([[4], [3], [1], [7]]) + updates = tf.constant([9, 10, 11, 12]) + tensor = tf.ones([8], dtype=tf.int32) + updated = tf.tensor_scatter_nd_sub(tensor, indices, updates) + print(updated) +``` + +The resulting tensor would look like this: + + [1, -10, 1, -9, -8, 1, 1, -11] + +We can also, insert entire slices of a higher rank tensor all at once. For +example, if we wanted to insert two slices in the first dimension of a +rank-3 tensor with two matrices of new values. + +In Python, this scatter add operation would look like this: + +```python + indices = tf.constant([[0], [2]]) + updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]], + [[5, 5, 5, 5], [6, 6, 6, 6], + [7, 7, 7, 7], [8, 8, 8, 8]]]) + tensor = tf.ones([4, 4, 4],dtype=tf.int32) + updated = tf.tensor_scatter_nd_sub(tensor, indices, updates) + print(updated) +``` + +The resulting tensor would look like this: + + [[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]], + [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], + [[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]], + [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]] + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, the index is ignored. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Tensor to copy/update. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`updates` + +A `Tensor`. Must have the same type as `tensor`. +Updates to scatter into output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/tensor_scatter_nd_update.md b/site/en/api_docs/python/tf/tensor_scatter_nd_update.md new file mode 100644 index 00000000000..a03fa1e534a --- /dev/null +++ b/site/en/api_docs/python/tf/tensor_scatter_nd_update.md @@ -0,0 +1,170 @@ +description: Scatter updates into an existing tensor according to indices. + +
+ + +
+ +# tf.tensor_scatter_nd_update + + + + + + + + + +Scatter `updates` into an existing tensor according to `indices`. + + + + + + + + + +This operation creates a new tensor by applying sparse `updates` to the passed +in `tensor`. +This operation is very similar to tf.scatter_nd, except that the updates are +scattered onto an existing tensor (as opposed to a zero-tensor). If the memory +for the existing tensor cannot be re-used, a copy is made and updated. + +If `indices` contains duplicates, then their updates are accumulated (summed). + +**WARNING**: The order in which updates are applied is nondeterministic, so the +output will be nondeterministic if `indices` contains duplicates -- because +of some numerical approximation issues, numbers summed in different order +may yield different results. + +`indices` is an integer tensor containing indices into a new tensor of shape +`shape`. The last dimension of `indices` can be at most the rank of `shape`: + + indices.shape[-1] <= shape.rank + +The last dimension of `indices` corresponds to indices into elements +(if `indices.shape[-1] = shape.rank`) or slices +(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of +`shape`. `updates` is a tensor with shape + + indices.shape[:-1] + shape[indices.shape[-1]:] + +The simplest form of scatter is to insert individual elements in a tensor by +index. For example, say we want to insert 4 scattered elements in a rank-1 +tensor with 8 elements. + +
+ +
+ +In Python, this scatter operation would look like this: + + ``` + >>> indices = tf.constant([[4], [3], [1], [7]]) + >>> updates = tf.constant([9, 10, 11, 12]) + >>> tensor = tf.ones([8], dtype=tf.int32) + >>> print(tf.tensor_scatter_nd_update(tensor, indices, updates)) + tf.Tensor([ 1 11 1 10 9 1 1 12], shape=(8,), dtype=int32) + ``` + +We can also, insert entire slices of a higher rank tensor all at once. For +example, if we wanted to insert two slices in the first dimension of a +rank-3 tensor with two matrices of new values. + +In Python, this scatter operation would look like this: + + ``` + >>> indices = tf.constant([[0], [2]]) + >>> updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], + ... [7, 7, 7, 7], [8, 8, 8, 8]], + ... [[5, 5, 5, 5], [6, 6, 6, 6], + ... [7, 7, 7, 7], [8, 8, 8, 8]]]) + >>> tensor = tf.ones([4, 4, 4], dtype=tf.int32) + >>> print(tf.tensor_scatter_nd_update(tensor, indices, updates).numpy()) + [[[5 5 5 5] + [6 6 6 6] + [7 7 7 7] + [8 8 8 8]] + [[1 1 1 1] + [1 1 1 1] + [1 1 1 1] + [1 1 1 1]] + [[5 5 5 5] + [6 6 6 6] + [7 7 7 7] + [8 8 8 8]] + [[1 1 1 1] + [1 1 1 1] + [1 1 1 1] + [1 1 1 1]]] + ``` + +Note that on CPU, if an out of bound index is found, an error is returned. +On GPU, if an out of bound index is found, the index is ignored. + + + + + + + + + + + + + + + + + + + +
+`tensor` + +A `Tensor`. Tensor to copy/update. +
+`indices` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +Index tensor. +
+`updates` + +A `Tensor`. Must have the same type as `tensor`. +Updates to scatter into output. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `tensor`. +
+ diff --git a/site/en/api_docs/python/tf/tensordot.md b/site/en/api_docs/python/tf/tensordot.md new file mode 100644 index 00000000000..f652958d082 --- /dev/null +++ b/site/en/api_docs/python/tf/tensordot.md @@ -0,0 +1,158 @@ +description: Tensor contraction of a and b along specified axes and outer product. + +
+ + +
+ +# tf.tensordot + + + + + + + + + +Tensor contraction of a and b along specified axes and outer product. + + + + + + + + + +Tensordot (also known as tensor contraction) sums the product of elements +from `a` and `b` over the indices specified by `a_axes` and `b_axes`. +The lists `a_axes` and `b_axes` specify those pairs of axes along which to +contract the tensors. The axis `a_axes[i]` of `a` must have the same dimension +as axis `b_axes[i]` of `b` for all `i` in `range(0, len(a_axes))`. The lists +`a_axes` and `b_axes` must have identical length and consist of unique +integers that specify valid axes for each of the tensors. Additionally +outer product is supported by passing `axes=0`. + +This operation corresponds to `numpy.tensordot(a, b, axes)`. + +Example 1: When `a` and `b` are matrices (order 2), the case `axes = 1` +is equivalent to matrix multiplication. + +Example 2: When `a` and `b` are matrices (order 2), the case +`axes = [[1], [0]]` is equivalent to matrix multiplication. + +Example 3: When `a` and `b` are matrices (order 2), the case `axes=0` gives +the outer product, a tensor of order 4. + +Example 4: Suppose that \\(a_{ijk}\\) and \\(b_{lmn}\\) represent two +tensors of order 3. Then, `contract(a, b, [[0], [2]])` is the order 4 tensor +\\(c_{jklm}\\) whose entry +corresponding to the indices \\((j,k,l,m)\\) is given by: + +\\( c_{jklm} = \sum_i a_{ijk} b_{lmi} \\). + +In general, `order(c) = order(a) + order(b) - 2*len(axes[0])`. + + + + + + + + + + + + + + + + + + + +
+`a` + +`Tensor` of type `float32` or `float64`. +
+`b` + +`Tensor` with the same type as `a`. +
+`axes` + +Either a scalar `N`, or a list or an `int32` `Tensor` of shape [2, k]. +If axes is a scalar, sum over the last N axes of a and the first N axes of +b in order. If axes is a list or `Tensor` the first and second row contain +the set of unique integers specifying axes along which the contraction is +computed, for `a` and `b`, respectively. The number of axes for `a` and +`b` must be equal. If `axes=0`, computes the outer product between `a` and +`b`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` with the same type as `a`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If the shapes of `a`, `b`, and `axes` are incompatible. +
+`IndexError` + +If the values in axes exceed the rank of the corresponding +tensor. +
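
A runnable sketch of Examples 1 and 3 above (not from the TensorFlow sources);
the input values are arbitrary:

```python
import tensorflow as tf

a = tf.reshape(tf.range(6.0), [2, 3])
b = tf.reshape(tf.range(12.0), [3, 4])

# axes=1 contracts the last axis of `a` with the first axis of `b`,
# which is ordinary matrix multiplication for rank-2 tensors.
c = tf.tensordot(a, b, axes=1)
print(c.shape)  # (2, 4)

# axes=0 is the outer product: an order-4 result here.
d = tf.tensordot(a, b, axes=0)
print(d.shape)  # (2, 3, 3, 4)
```
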
+ diff --git a/site/en/api_docs/python/tf/test.md b/site/en/api_docs/python/tf/test.md new file mode 100644 index 00000000000..b0c241497ee --- /dev/null +++ b/site/en/api_docs/python/tf/test.md @@ -0,0 +1,51 @@ +description: Testing. + +
+ + +
+ +# Module: tf.test + + + + + + + + + +Testing. + + + +## Classes + +[`class Benchmark`](../tf/test/Benchmark.md): Abstract class that provides helpers for TensorFlow benchmarks. + +[`class TestCase`](../tf/test/TestCase.md): Base class for tests that need to test TensorFlow. + +## Functions + +[`assert_equal_graph_def(...)`](../tf/test/assert_equal_graph_def.md): Asserts that two `GraphDef`s are (mostly) the same. + +[`benchmark_config(...)`](../tf/test/benchmark_config.md): Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer. + +[`compute_gradient(...)`](../tf/test/compute_gradient.md): Computes the theoretical and numeric Jacobian of `f`. + +[`create_local_cluster(...)`](../tf/test/create_local_cluster.md): Create and start local servers and return the associated `Server` objects. + +[`gpu_device_name(...)`](../tf/test/gpu_device_name.md): Returns the name of a GPU device if available or the empty string. + +[`is_built_with_cuda(...)`](../tf/test/is_built_with_cuda.md): Returns whether TensorFlow was built with CUDA (GPU) support. + +[`is_built_with_gpu_support(...)`](../tf/test/is_built_with_gpu_support.md): Returns whether TensorFlow was built with GPU (i.e. CUDA or ROCm) support. + +[`is_built_with_rocm(...)`](../tf/test/is_built_with_rocm.md): Returns whether TensorFlow was built with ROCm (GPU) support. + +[`is_built_with_xla(...)`](../tf/test/is_built_with_xla.md): Returns whether TensorFlow was built with XLA support. + +[`is_gpu_available(...)`](../tf/test/is_gpu_available.md): Returns whether TensorFlow can access a GPU. (deprecated) + +[`main(...)`](../tf/test/main.md): Runs all unit tests. + diff --git a/site/en/api_docs/python/tf/test/Benchmark.md b/site/en/api_docs/python/tf/test/Benchmark.md new file mode 100644 index 00000000000..6589a71d1d1 --- /dev/null +++ b/site/en/api_docs/python/tf/test/Benchmark.md @@ -0,0 +1,308 @@ +description: Abstract class that provides helpers for TensorFlow benchmarks. + +
+ + + + + + + +
+ +# tf.test.Benchmark + + + + + + + + + +Abstract class that provides helpers for TensorFlow benchmarks. + + + + + + + + + + +## Methods + +

evaluate

+ +View source + + + +Evaluates tensors and returns numpy values. + + + + + + + + + + + +
Args
+`tensors` + +A Tensor or a nested list/tuple of Tensors. +
+ + + + + + + + + + + +
Returns
+tensors numpy values. +
+ + + +

is_abstract

+ +View source + + + + + + +

report_benchmark

+ +View source + + + +Report a benchmark. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`iters` + +(optional) How many iterations were run +
+`cpu_time` + +(optional) Median or mean cpu time in seconds. +
+`wall_time` + +(optional) Median or mean wall time in seconds. +
+`throughput` + +(optional) Throughput (in MB/s) +
+`extras` + +(optional) Dict mapping string keys to additional benchmark info. +Values may be either floats or values that are convertible to strings. +
+`name` + +(optional) Override the BenchmarkEntry name with `name`. +Otherwise it is inferred from the top-level method name. +
+`metrics` + +(optional) A list of dict, where each dict has the keys below +name (required), string, metric name +value (required), double, metric value +min_value (optional), double, minimum acceptable metric value +max_value (optional), double, maximum acceptable metric value +
+ + + +

run_op_benchmark

+ +View source + + + +Run an op or tensor in the given session. Report the results. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`sess` + +`Session` object to use for timing. +
+`op_or_tensor` + +`Operation` or `Tensor` to benchmark. +
+`feed_dict` + +A `dict` of values to feed for each op iteration (see the +`feed_dict` parameter of `Session.run`). +
+`burn_iters` + +Number of burn-in iterations to run. +
+`min_iters` + +Minimum number of iterations to use for timing. +
+`store_trace` + +Boolean, whether to run an extra untimed iteration and +store the trace of iteration in returned extras. +The trace will be stored as a string in Google Chrome trace format +in the extras field "full_trace_chrome_format". Note that trace +will not be stored in test_log_pb2.TestResults proto. +
+`store_memory_usage` + +Boolean, whether to run an extra untimed iteration, +calculate memory usage, and store that in extras fields. +
+`name` + +(optional) Override the BenchmarkEntry name with `name`. +Otherwise it is inferred from the top-level method name. +
+`extras` + +(optional) Dict mapping string keys to additional benchmark info. +Values may be either floats or values that are convertible to strings. +
+`mbs` + +(optional) The number of megabytes moved by this op, used to +calculate the ops throughput. +
+ + + + + + + + + + + +
Returns
+A `dict` containing the key-value pairs that were passed to +`report_benchmark`. If `store_trace` option is used, then +`full_chrome_trace_format` will be included in return dictionary even +though it is not passed to `report_benchmark` with `extras`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/test/TestCase.md b/site/en/api_docs/python/tf/test/TestCase.md new file mode 100644 index 00000000000..6079c21c06b --- /dev/null +++ b/site/en/api_docs/python/tf/test/TestCase.md @@ -0,0 +1,4129 @@ +description: Base class for tests that need to test TensorFlow. + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +# tf.test.TestCase + + + + + + + + + +Base class for tests that need to test TensorFlow. + + + + + + + + + + +## Child Classes +[`class failureException`](../../tf/test/TestCase/failureException.md) + +## Methods + +

addClassCleanup

+ + + +Same as addCleanup, except the cleanup items are called even if +setUpClass fails (unlike tearDownClass). + +

addCleanup

+ + + +Add a function, with arguments, to be called when the test is +completed. Functions added are called on a LIFO basis and are +called after tearDown on test failure or success. + +Cleanup items are called even if setUp fails (unlike tearDown). + +

addTypeEqualityFunc

+ + + +Add a type specific assertEqual style function to compare a type. + +This method is for use by TestCase subclasses that need to register +their own type equality functions to provide nicer error messages. + + + + + + + + + + + + + +
Args
+`typeobj` + +The data type to call this function on when both values +are of the same type in assertEqual(). +
+`function` + +The callable taking two arguments and an optional +msg= argument that raises self.failureException with a +useful error message when the two arguments are not equal. +
+ + + +

assertAllClose

+ +View source + + + +Asserts that two structures of numpy arrays or Tensors, have near values. + +`a` and `b` can be arbitrarily nested structures. A layer of a nested +structure can be a `dict`, `namedtuple`, `tuple` or `list`. + + + + + + + + + + + + + + + + + + + + + + +
Args
+
`a`
+
+The expected numpy `ndarray`, or anything that can be converted into a
numpy `ndarray` (including Tensor), or any arbitrarily nested
structure of these.
+
+
`b`
+
+The actual numpy `ndarray`, or anything that can be converted into a
numpy `ndarray` (including Tensor), or any arbitrarily nested
structure of these.
+
+`rtol` + +relative tolerance. +
+`atol` + +absolute tolerance. +
+`msg` + +Optional message to report on failure. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +if only one of `a[p]` and `b[p]` is a dict or +`a[p]` and `b[p]` have different length, where `[p]` denotes a path +to the nested structure, e.g. given `a = [(1, 1), {'d': (6, 7)}]` and +`[p] = [1]['d']`, then `a[p] = (6, 7)`. +
+ + + +
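
A minimal test-case sketch (not from the TensorFlow sources); the class name
and values are placeholders:

```python
import tensorflow as tf

class NearnessTest(tf.test.TestCase):

    def test_values_are_close(self):
        predictions = tf.constant([0.999999, 2.000001])
        # Tensors, lists, and numpy arrays can be compared interchangeably.
        self.assertAllClose(predictions, [1.0, 2.0], rtol=1e-4, atol=1e-4)

if __name__ == "__main__":
    tf.test.main()
```
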

assertAllCloseAccordingToType

+ +View source + + + +Like assertAllClose, but also suitable for comparing fp16 arrays. + +In particular, the tolerance is reduced to 1e-3 if at least +one of the arguments is of type float16. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+
`a`
+
+the expected numpy ndarray, or anything that can be converted to one.
+
+
`b`
+
+the actual numpy ndarray, or anything that can be converted to one.
+
+`rtol` + +relative tolerance. +
+`atol` + +absolute tolerance. +
+`float_rtol` + +relative tolerance for float32. +
+`float_atol` + +absolute tolerance for float32. +
+`half_rtol` + +relative tolerance for float16. +
+`half_atol` + +absolute tolerance for float16. +
+`bfloat16_rtol` + +relative tolerance for bfloat16. +
+`bfloat16_atol` + +absolute tolerance for bfloat16. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertAllEqual

+ +View source + + + +Asserts that two numpy arrays or Tensors have the same values. + + + + + + + + + + + + + + + + + +
Args
+`a` + +the expected numpy ndarray or anything can be converted to one. +
+`b` + +the actual numpy ndarray or anything can be converted to one. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertAllGreater

+ +View source + + + +Assert element values are all greater than a target value. + + + + + + + + + + + + + + +
Args
+`a` + +The numpy `ndarray`, or anything that can be converted into a numpy +`ndarray` (including Tensor). +
+`comparison_target` + +The target value of comparison. +
+ + + +

assertAllGreaterEqual

+ +View source + + + +Assert element values are all greater than or equal to a target value. + + + + + + + + + + + + + + +
Args
+`a` + +The numpy `ndarray`, or anything that can be converted into a numpy +`ndarray` (including Tensor). +
+`comparison_target` + +The target value of comparison. +
+ + + +

assertAllInRange

+ +View source + + + +Assert that elements in a Tensor are all in a given range. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`target` + +The numpy `ndarray`, or anything that can be converted into a +numpy `ndarray` (including Tensor). +
+`lower_bound` + +lower bound of the range +
+`upper_bound` + +upper bound of the range +
+`open_lower_bound` + +(`bool`) whether the lower bound is open (i.e., > rather +than the default >=) +
+`open_upper_bound` + +(`bool`) whether the upper bound is open (i.e., < rather +than the default <=) +
+ + + + + + + + + + + + +
Raises
+`AssertionError` + +if the value tensor does not have an ordered numeric type (float* or +int*), or +if there are nan values, or +if any of the elements do not fall in the specified range. +
+ + + +
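+
+A minimal sketch (hypothetical test, not from the original docstring) showing
+the open-bound options:
+
+```python
+import tensorflow as tf
+
+
+class RangeTest(tf.test.TestCase):
+
+  def test_sigmoid_stays_in_unit_interval(self):
+    probs = tf.math.sigmoid(tf.constant([-10.0, 0.0, 10.0]))
+    # Sigmoid outputs are strictly between 0 and 1, so both bounds are open.
+    self.assertAllInRange(probs, 0.0, 1.0,
+                          open_lower_bound=True, open_upper_bound=True)
+```
+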

assertAllInSet

+ +View source + + + +Assert that elements of a Tensor are all in a given closed set. + + + + + + + + + + + + + + +
Args
+`target` + +The numpy `ndarray`, or anything that can be converted into a +numpy `ndarray` (including Tensor). +
+`expected_set` + +(`list`, `tuple` or `set`) The closed set that the elements +of the value of `target` are expected to fall into. +
+ + + + + + + + + + + + +
Raises
+`AssertionError` + +if any of the elements do not fall into `expected_set`. +
+ + + +

assertAllLess

+ +View source + + + +Assert element values are all less than a target value. + + + + + + + + + + + + + + +
Args
+`a` + +The numpy `ndarray`, or anything that can be converted into a numpy +`ndarray` (including Tensor). +
+`comparison_target` + +The target value of comparison. +
+ + + +

assertAllLessEqual

+ +View source + + + +Assert element values are all less than or equal to a target value. + + + + + + + + + + + + + + +
Args
+`a` + +The numpy `ndarray`, or anything that can be converted into a numpy +`ndarray` (including Tensor). +
+`comparison_target` + +The target value of comparison. +
+ + + +

assertAlmostEqual

+ + + +Fail if the two objects are unequal as determined by their +difference rounded to the given number of decimal places +(default 7) and comparing to zero, or by comparing that the +difference between the two objects is more than the given +delta. + +Note that decimal places (from zero) are usually not the same +as significant digits (measured from the most significant digit). + +If the two objects compare equal then they will automatically +compare almost equal. + +

assertAlmostEquals

+ + + + + + +

assertArrayNear

+ +View source + + + +Asserts that two float arrays are near each other. + +Checks that for all elements of farray1 and farray2 +|f1 - f2| < err. Asserts a test failure if not. + + + + + + + + + + + + + + + + + + + +
Args
+`farray1` + +a list of float values. +
+`farray2` + +a list of float values. +
+`err` + +a float value. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertBetween

+ + + +Asserts that value is between minv and maxv (inclusive). + + +

assertCommandFails

+ + + +Asserts a shell command fails and the error matches a regex in a list. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`command` + +List or string representing the command to run. +
+`regexes` + +the list of regular expression strings. +
+`env` + +Dictionary of environment variable settings. If None, no environment +variables will be set for the child process. This is to make tests +more hermetic. NOTE: this behavior is different than the standard +subprocess module. +
+`close_fds` + +Whether or not to close all open fd's in the child after +forking. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertCommandSucceeds

+ + + +Asserts that a shell command succeeds (i.e. exits with code 0). + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`command` + +List or string representing the command to run. +
+`regexes` + +List of regular expression byte strings that match success. +
+`env` + +Dictionary of environment variable settings. If None, no environment +variables will be set for the child process. This is to make tests +more hermetic. NOTE: this behavior is different than the standard +subprocess module. +
+`close_fds` + +Whether or not to close all open fd's in the child after +forking. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertContainsExactSubsequence

+ + + +Asserts that "container" contains "subsequence" as an exact subsequence. + +Asserts that "container" contains all the elements of "subsequence", in +order, and without other elements interspersed. For example, [1, 2, 3] is an +exact subsequence of [0, 0, 1, 2, 3, 0] but not of [0, 0, 1, 2, 0, 3, 0]. + + + + + + + + + + + + + + + + +
Args
+`container` + +the list we're testing for subsequence inclusion. +
+`subsequence` + +the list we hope will be an exact subsequence of container. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertContainsInOrder

+ + + +Asserts that the strings provided are found in the target in order. + +This may be useful for checking HTML output. + + + + + + + + + + + + + + + + +
Args
+`strings` + +A list of strings, such as [ 'fox', 'dog' ] +
+`target` + +A target string in which to look for the strings, such as +'The quick brown fox jumped over the lazy dog'. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertContainsSubsequence

+ + + +Asserts that "container" contains "subsequence" as a subsequence. + +Asserts that "container" contains all the elements of "subsequence", in +order, but possibly with other elements interspersed. For example, [1, 2, 3] +is a subsequence of [0, 0, 1, 2, 0, 3, 0] but not of [0, 0, 1, 3, 0, 2, 0]. + + + + + + + + + + + + + + + + +
Args
+`container` + +the list we're testing for subsequence inclusion. +
+`subsequence` + +the list we hope will be a subsequence of container. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertContainsSubset

+ + + +Checks whether actual iterable is a superset of expected iterable. + + +

assertCountEqual

+ + + +Asserts that two iterables have the same elements, the same number of +times, without regard to order. + + self.assertEqual(Counter(list(first)), + Counter(list(second))) + + Example: + - [0, 1, 1] and [1, 0, 1] compare equal. + - [0, 0, 1] and [0, 1] compare unequal. + +

assertDTypeEqual

+ +View source + + + +Assert ndarray data type is equal to expected. + + + + + + + + + + + + + + +
Args
+`target` + +The numpy `ndarray`, or anything that can be converted into a +numpy `ndarray` (including Tensor). +
+`expected_dtype` + +Expected data type. +
+ + + +

assertDeviceEqual

+ +View source + + + +Asserts that the two given devices are the same. + + + + + + + + + + + + + + + + + +
Args
+`device1` + +A string device name or TensorFlow `DeviceSpec` object. +
+`device2` + +A string device name or TensorFlow `DeviceSpec` object. +
+`msg` + +Optional message to report on failure. +
+ + + +
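+
+For illustration, a hypothetical check that two spellings of the same device
+name compare equal after canonicalization (assuming the usual TensorFlow
+device-string shorthand):
+
+```python
+import tensorflow as tf
+
+
+class DeviceNameTest(tf.test.TestCase):
+
+  def test_equivalent_device_strings(self):
+    # Both strings are canonicalized (e.g. 'gpu:0' -> 'device:GPU:0')
+    # before they are compared.
+    self.assertDeviceEqual('/job:worker/task:0/gpu:0',
+                           '/job:worker/task:0/device:GPU:0')
+```
+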

assertDictContainsSubset

+ + + +Checks whether dictionary is a superset of subset. + + +

assertDictEqual

+ + + +Raises AssertionError if a and b are not equal dictionaries. + + + + + + + + + + + + + + + + + +
Args
+`a` + +A dict, the expected value. +
+`b` + +A dict, the actual value. +
+`msg` + +An optional str, the associated message. +
+ + + + + + + + + + + + +
Raises
+`AssertionError` + +if the dictionaries are not equal. +
+ + + +

assertEmpty

+ + + +Asserts that an object has zero length. + + + + + + + + + + + + + + +
Args
+`container` + +Anything that implements the collections.abc.Sized interface. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertEndsWith

+ + + +Asserts that actual.endswith(expected_end) is True. + + + + + + + + + + + + + + + + + +
Args
+`actual` + +str +
+`expected_end` + +str +
+`msg` + +Optional message to report on failure. +
+ + + +

assertEqual

+ + + +Fail if the two objects are unequal as determined by the '==' +operator. + +

assertEquals

+ + + + + + +

assertFalse

+ + + +Check that the expression is false. + + +

assertGreater

+ + + +Just like self.assertTrue(a > b), but with a nicer default message. + + +

assertGreaterEqual

+ + + +Just like self.assertTrue(a >= b), but with a nicer default message. + + +

assertIn

+ + + +Just like self.assertTrue(a in b), but with a nicer default message. + + +

assertIs

+ + + +Just like self.assertTrue(a is b), but with a nicer default message. + + +

assertIsInstance

+ + + +Same as self.assertTrue(isinstance(obj, cls)), with a nicer +default message. + +

assertIsNone

+ + + +Same as self.assertTrue(obj is None), with a nicer default message. + + +

assertIsNot

+ + + +Just like self.assertTrue(a is not b), but with a nicer default message. + + +

assertIsNotNone

+ + + +Included for symmetry with assertIsNone. + + +

assertItemsEqual

+ + + +Asserts that two iterables have the same elements, the same number of +times, without regard to order. + + self.assertEqual(Counter(list(first)), + Counter(list(second))) + + Example: + - [0, 1, 1] and [1, 0, 1] compare equal. + - [0, 0, 1] and [0, 1] compare unequal. + +

assertJsonEqual

+ + + +Asserts that the JSON objects defined in two strings are equal. + +A summary of the differences will be included in the failure message +using assertSameStructure. + + + + + + + + + + + + + + + + +
Args
+`first` + +A string containing JSON to decode and compare to second. +
+`second` + +A string containing JSON to decode and compare to first. +
+`msg` + +Additional text to include in the failure message. +
+ + + +
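+
+A small hypothetical example; the two strings decode to the same JSON object
+even though key order and whitespace differ:
+
+```python
+import tensorflow as tf
+
+
+class JsonTest(tf.test.TestCase):
+
+  def test_equivalent_json_strings(self):
+    self.assertJsonEqual('{"a": 1, "b": [1, 2]}',
+                         '{ "b": [1, 2], "a": 1 }')
+```
+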

assertLen

+ + + +Asserts that an object has the expected length. + + + + + + + + + + + + + + + + + +
Args
+`container` + +Anything that implements the collections.abc.Sized interface. +
+`expected_len` + +The expected length of the container. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertLess

+ + + +Just like self.assertTrue(a < b), but with a nicer default message. + + +

assertLessEqual

+ + + +Just like self.assertTrue(a <= b), but with a nicer default message. + + +

assertListEqual

+ + + +A list-specific equality assertion. + + + + + + + + + + + + + + + + + +
Args
+`list1` + +The first list to compare. +
+`list2` + +The second list to compare. +
+`msg` + +Optional message to use on failure instead of a list of +differences. +
+ + + +

assertLogs

+ + + +Fail unless a log message of level *level* or higher is emitted +on *logger_name* or its children. If omitted, *level* defaults to +INFO and *logger* defaults to the root logger. + +This method must be used as a context manager, and will yield +a recording object with two attributes: `output` and `records`. +At the end of the context manager, the `output` attribute will +be a list of the matching formatted log messages and the +`records` attribute will be a list of the corresponding LogRecord +objects. + +Example:: + + with self.assertLogs('foo', level='INFO') as cm: + logging.getLogger('foo').info('first message') + logging.getLogger('foo.bar').error('second message') + self.assertEqual(cm.output, ['INFO:foo:first message', + 'ERROR:foo.bar:second message']) + +

assertMultiLineEqual

+ + + +Asserts that two multi-line strings are equal. + + +

assertNDArrayNear

+ +View source + + + +Asserts that two numpy arrays have near values. + + + + + + + + + + + + + + + + + + + + +
Args
+`ndarray1` + +a numpy ndarray. +
+`ndarray2` + +a numpy ndarray. +
+`err` + +a float. The maximum absolute difference allowed. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertNear

+ +View source + + + +Asserts that two floats are near each other. + +Checks that |f1 - f2| < err and asserts a test failure +if not. + + + + + + + + + + + + + + + + + + + +
Args
+`f1` + +A float value. +
+`f2` + +A float value. +
+`err` + +A float value. +
+`msg` + +An optional string message to append to the failure message. +
+ + + +
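+
+For illustration, a hypothetical single-value comparison:
+
+```python
+import tensorflow as tf
+
+
+class NearTest(tf.test.TestCase):
+
+  def test_mean_is_near(self):
+    mean = tf.reduce_mean([0.1, 0.2, 0.3])
+    # Passes because |float(mean) - 0.2| is well below err.
+    self.assertNear(float(mean), 0.2, err=1e-6)
+```
+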

assertNoCommonElements

+ + + +Checks whether actual iterable and expected iterable are disjoint. + + +

assertNotAllClose

+ +View source + + + +Assert that two numpy arrays, or Tensors, do not have near values. + + + + + + + + + + + + + + + + + +
Args
+`a` + +the first value to compare. +
+`b` + +the second value to compare. +
+`**kwargs` + +additional keyword arguments to be passed to the underlying +`assertAllClose` call. +
+ + + + + + + + + + + + +
Raises
+`AssertionError` + +If `a` and `b` are unexpectedly close at all elements. +
+ + + +
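+
+For illustration, a hypothetical test asserting that dropout actually changed
+its input:
+
+```python
+import tensorflow as tf
+
+
+class NotAllCloseTest(tf.test.TestCase):
+
+  def test_dropout_changes_values(self):
+    x = tf.ones([1000])
+    y = tf.nn.dropout(x, rate=0.5)
+    # Kept elements are rescaled and dropped elements become zero,
+    # so x and y cannot be element-wise close.
+    self.assertNotAllClose(x, y)
+```
+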

assertNotAllEqual

+ +View source + + + +Asserts that two numpy arrays or Tensors do not have the same values. + + + + + + + + + + + + + + + + + +
Args
+`a`
+
+the expected numpy ndarray or anything that can be converted to one.
+
+`b`
+
+the actual numpy ndarray or anything that can be converted to one.
+
+`msg` + +Optional message to report on failure. +
+ + + +

assertNotAlmostEqual

+ + + +Fail if the two objects are equal as determined by their +difference rounded to the given number of decimal places +(default 7) and comparing to zero, or by comparing that the +difference between the two objects is less than the given delta. + +Note that decimal places (from zero) are usually not the same +as significant digits (measured from the most significant digit). + +Objects that are equal automatically fail. + +

assertNotAlmostEquals

+ + + + + + +

assertNotEmpty

+ + + +Asserts that an object has non-zero length. + + + + + + + + + + + + + + +
Args
+`container` + +Anything that implements the collections.abc.Sized interface. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertNotEndsWith

+ + + +Asserts that actual.endswith(unexpected_end) is False. + + + + + + + + + + + + + + + + + +
Args
+`actual` + +str +
+`unexpected_end` + +str +
+`msg` + +Optional message to report on failure. +
+ + + +

assertNotEqual

+ + + +Fail if the two objects are equal as determined by the '!=' +operator. + +

assertNotEquals

+ + + + + + +

assertNotIn

+ + + +Just like self.assertTrue(a not in b), but with a nicer default message. + + +

assertNotIsInstance

+ + + +Included for symmetry with assertIsInstance. + + +

assertNotRegex

+ + + +Fail the test if the text matches the regular expression. + + +

assertNotRegexpMatches

+ + + + + + +

assertNotStartsWith

+ + + +Asserts that actual.startswith(unexpected_start) is False. + + + + + + + + + + + + + + + + + +
Args
+`actual` + +str +
+`unexpected_start` + +str +
+`msg` + +Optional message to report on failure. +
+ + + +

assertProtoEquals

+
+View source
+
+
+
+Asserts that `message` is the same as the parsed `expected_message_maybe_ascii`.
+
+Creates another proto of the same type as `message`, parses the ASCII form
+into it and then compares the two using self._AssertProtoEqual().
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`expected_message_maybe_ascii` + +proto message in original or ascii form. +
+`message` + +the message to validate. +
+`msg` + +Optional message to report on failure. +
+ + + +
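+
+A minimal sketch (hypothetical proto contents) comparing a message against its
+text-format representation:
+
+```python
+import tensorflow as tf
+
+
+class ProtoTest(tf.test.TestCase):
+
+  def test_example_matches_text_format(self):
+    example = tf.train.Example()
+    example.features.feature['age'].int64_list.value.append(29)
+    expected = """
+      features {
+        feature { key: "age" value { int64_list { value: 29 } } }
+      }
+    """
+    # The text is parsed into a proto of the same type as `example`
+    # and the two messages are then compared.
+    self.assertProtoEquals(expected, example)
+```
+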

assertProtoEqualsVersion

+ +View source + + + + + + +

assertRaises

+ + + +Fail unless an exception of class expected_exception is raised +by the callable when invoked with specified positional and +keyword arguments. If a different type of exception is +raised, it will not be caught, and the test case will be +deemed to have suffered an error, exactly as for an +unexpected exception. + +If called with the callable and arguments omitted, will return a +context object used like this:: + + with self.assertRaises(SomeException): + do_something() + +An optional keyword argument 'msg' can be provided when assertRaises +is used as a context object. + +The context manager keeps a reference to the exception as +the 'exception' attribute. This allows you to inspect the +exception after the assertion:: + + with self.assertRaises(SomeException) as cm: + do_something() + the_exception = cm.exception + self.assertEqual(the_exception.error_code, 3) + +

assertRaisesOpError

+ +View source + + + + + + +

assertRaisesRegex

+ + + +Asserts that the message in a raised exception matches a regex. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`expected_exception` + +Exception class expected to be raised. +
+`expected_regex` + +Regex (re.Pattern object or string) expected +to be found in error message. +
+`args` + +Function to be called and extra positional args. +
+`kwargs` + +Extra kwargs. +
+`msg` + +Optional message used in case of failure. Can only be used +when assertRaisesRegex is used as a context manager. +
+ + + +

assertRaisesRegexp

+ + + +Asserts that the message in a raised exception matches a regex. + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`expected_exception` + +Exception class expected to be raised. +
+`expected_regex` + +Regex (re.Pattern object or string) expected +to be found in error message. +
+`args` + +Function to be called and extra positional args. +
+`kwargs` + +Extra kwargs. +
+`msg` + +Optional message used in case of failure. Can only be used +when assertRaisesRegex is used as a context manager. +
+ + + +

assertRaisesWithLiteralMatch

+ + + +Asserts that the message in a raised exception equals the given string. + +Unlike assertRaisesRegex, this method takes a literal string, not +a regular expression. + +with self.assertRaisesWithLiteralMatch(ExType, 'message'): + DoSomething() + + + + + + + + + + + + + + + + + + + + + + +
Args
+`expected_exception` + +Exception class expected to be raised. +
+`expected_exception_message`
+
+String message expected in the raised
+exception. For a raised exception e, expected_exception_message must
+equal str(e).
+
+`callable_obj` + +Function to be called, or None to return a context. +
+`*args` + +Extra args. +
+`**kwargs` + +Extra kwargs. +
+ + + + + + + + + + + +
Returns
+A context manager if callable_obj is None. Otherwise, None. +
+ + + + + + + + + + + +
Raises
+self.failureException if callable_obj does not raise a matching exception. +
+ + + +

assertRaisesWithPredicateMatch

+ +View source + + + +Returns a context manager to enclose code expected to raise an exception. + +If the exception is an OpError, the op stack is also included in the message +predicate search. + + + + + + + + + + + + + +
Args
+`exception_type` + +The expected type of exception that should be raised. +
+`expected_err_re_or_predicate` + +If this is callable, it should be a function +of one argument that inspects the passed-in exception and returns True +(success) or False (please fail the test). Otherwise, the error message +is expected to match this regular expression partially. +
+ + + + + + + + + + + +
Returns
+A context manager to surround code that is expected to raise an +exception. +
+ + + +
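+
+For illustration, a hypothetical test whose predicate inspects the raised
+error's message (the exact error text produced by the op is an assumption
+here, which is why the predicate only looks for the word "shape"):
+
+```python
+import tensorflow as tf
+
+
+class PredicateMatchTest(tf.test.TestCase):
+
+  def test_shape_mismatch_is_reported(self):
+    # The predicate receives the exception and returns True to pass the test.
+    with self.assertRaisesWithPredicateMatch(
+        tf.errors.InvalidArgumentError, lambda e: 'shape' in str(e).lower()):
+      tf.add(tf.ones([2, 3]), tf.ones([4, 5]))
+```
+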

assertRegex

+ + + +Fail the test unless the text matches the regular expression. + + +

assertRegexMatch

+ + + +Asserts that at least one regex in regexes matches str. + +If possible you should use `assertRegex`, which is a simpler +version of this method. `assertRegex` takes a single regular +expression (a string or re compiled object) instead of a list. + +#### Notes: + + +1. This function uses substring matching, i.e. the matching + succeeds if *any* substring of the error message matches *any* + regex in the list. This is more convenient for the user than + full-string matching. + +2. If regexes is the empty list, the matching will always fail. + +3. Use regexes=[''] for a regex that will always pass. + +4. '.' matches any single character *except* the newline. To + match any character, use '(.|\n)'. + +5. '^' matches the beginning of each line, not just the beginning + of the string. Similarly, '$' matches the end of each line. + +6. An exception will be thrown if regexes contains an invalid + regex. + + + + + + + + + + + + + + + + +
Args
+`actual_str` + +The string we try to match with the items in regexes. +
+`regexes` + +The regular expressions we want to match against str. +See "Notes" above for detailed notes on how this is interpreted. +
+`message` + +The message to be printed if the test fails. +
+ + + +
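+
+A small hypothetical example; the assertion passes if any regex in the list
+matches a substring of the input:
+
+```python
+import tensorflow as tf
+
+
+class RegexMatchTest(tf.test.TestCase):
+
+  def test_version_string_format(self):
+    self.assertRegexMatch(tf.version.VERSION,
+                          [r'\d+\.\d+\.\d+', r'nightly'])
+```
+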

assertRegexpMatches

+ + + + + + +

assertSameElements

+ + + +Asserts that two sequences have the same elements (in any order). + +This method, unlike assertCountEqual, doesn't care about any +duplicates in the expected and actual sequences. + + >> assertSameElements([1, 1, 1, 0, 0, 0], [0, 1]) + # Doesn't raise an AssertionError + +If possible, you should use assertCountEqual instead of +assertSameElements. + + + + + + + + + + + + + + + + +
Args
+`expected_seq` + +A sequence containing elements we are expecting. +
+`actual_seq` + +The sequence that we are testing. +
+`msg` + +The message to be printed if the test fails. +
+ + + +

assertSameStructure

+ + + +Asserts that two values contain the same structural content. + +The two arguments should be data trees consisting of trees of dicts and +lists. They will be deeply compared by walking into the contents of dicts +and lists; other items will be compared using the == operator. +If the two structures differ in content, the failure message will indicate +the location within the structures where the first difference is found. +This may be helpful when comparing large structures. + +Mixed Sequence and Set types are supported. Mixed Mapping types are +supported, but the order of the keys will not be considered in the +comparison. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`a` + +The first structure to compare. +
+`b` + +The second structure to compare. +
+`aname` + +Variable name to use for the first structure in assertion messages. +
+`bname` + +Variable name to use for the second structure. +
+`msg` + +Additional text to include in the failure message. +
+ + + +
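+
+For illustration, a hypothetical comparison of two configuration dicts:
+
+```python
+import tensorflow as tf
+
+
+class StructureTest(tf.test.TestCase):
+
+  def test_config_dicts_match(self):
+    expected = {'optimizer': 'sgd', 'layers': [32, 32], 'dropout': 0.1}
+    actual = {'dropout': 0.1, 'layers': [32, 32], 'optimizer': 'sgd'}
+    # Dicts and lists are compared recursively; key order does not matter,
+    # and the first differing path would be named in the failure message.
+    self.assertSameStructure(expected, actual)
+```
+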

assertSequenceAlmostEqual

+ + + +An approximate equality assertion for ordered sequences. + +Fail if the two sequences are unequal as determined by their value +differences rounded to the given number of decimal places (default 7) and +comparing to zero, or by comparing that the difference between each value +in the two sequences is more than the given delta. + +Note that decimal places (from zero) are usually not the same as significant +digits (measured from the most significant digit). + +If the two sequences compare equal then they will automatically compare +almost equal. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`expected_seq` + +A sequence containing elements we are expecting. +
+`actual_seq` + +The sequence that we are testing. +
+`places` + +The number of decimal places to compare. +
+`msg` + +The message to be printed if the test fails. +
+`delta` + +The OK difference between compared values. +
+ + + +

assertSequenceEqual

+ + + +An equality assertion for ordered sequences (like lists and tuples). + +For the purposes of this function, a valid ordered sequence type is one +which can be indexed, has a length, and has an equality operator. + + + + + + + + + + + + + + + + + + + +
Args
+`seq1` + +The first sequence to compare. +
+`seq2` + +The second sequence to compare. +
+`seq_type` + +The expected datatype of the sequences, or None if no +datatype should be enforced. +
+`msg` + +Optional message to use on failure instead of a list of +differences. +
+ + + +

assertSequenceStartsWith

+ + + +An equality assertion for the beginning of ordered sequences. + +If prefix is an empty sequence, it will raise an error unless whole is also +an empty sequence. + +If prefix is not a sequence, it will raise an error if the first element of +whole does not match. + + + + + + + + + + + + + + + + +
Args
+`prefix` + +A sequence expected at the beginning of the whole parameter. +
+`whole` + +The sequence in which to look for prefix. +
+`msg` + +Optional message to report on failure. +
+ + + +

assertSetEqual

+ + + +A set-specific equality assertion. + + + + + + + + + + + + + + + + + +
Args
+`set1` + +The first set to compare. +
+`set2` + +The second set to compare. +
+`msg` + +Optional message to use on failure instead of a list of +differences. +
+
+
+assertSetEqual uses duck typing to support different types of sets, and
+is optimized for sets specifically (parameters must support a
+difference method).
+

assertShapeEqual

+ +View source + + + +Asserts that a Numpy ndarray and a TensorFlow tensor have the same shape. + + + + + + + + + + + + + + + + + +
Args
+`np_array` + +A Numpy ndarray or Numpy scalar. +
+`tf_tensor` + +A Tensor. +
+`msg` + +Optional message to report on failure. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the arguments have the wrong type. +
+ + + +
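+
+A minimal hypothetical shape check; note the argument order (NumPy array
+first, Tensor second):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+
+class ShapeTest(tf.test.TestCase):
+
+  def test_matmul_output_shape(self):
+    product = tf.matmul(tf.ones([2, 3]), tf.ones([3, 4]))
+    self.assertShapeEqual(np.zeros((2, 4)), product)
+```
+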

assertStartsWith

+ +View source + + + +Assert that actual.startswith(expected_start) is True. + + + + + + + + + + + + + + + + + +
Args
+`actual` + +str +
+`expected_start` + +str +
+`msg` + +Optional message to report on failure. +
+ + + +

assertTotallyOrdered

+ + + +Asserts that total ordering has been implemented correctly. + +For example, say you have a class A that compares only on its attribute x. +Comparators other than __lt__ are omitted for brevity. + +class A(object): + def __init__(self, x, y): + self.x = x + self.y = y + + def __hash__(self): + return hash(self.x) + + def __lt__(self, other): + try: + return self.x < other.x + except AttributeError: + return NotImplemented + +assertTotallyOrdered will check that instances can be ordered correctly. +For example, + +self.assertTotallyOrdered( + [None], # None should come before everything else. + [1], # Integers sort earlier. + [A(1, 'a')], + [A(2, 'b')], # 2 is after 1. + [A(3, 'c'), A(3, 'd')], # The second argument is irrelevant. + [A(4, 'z')], + ['foo']) # Strings sort last. + + + + + + + + + + +
Args
+`*groups` + +A list of groups of elements. Each group of elements is a list +of objects that are equal. The elements in each group must be less +than the elements in the group after it. For example, these groups are +totally ordered: [None], [1], [2, 2], [3]. +**kwargs: optional msg keyword argument can be passed. +
+ + + +

assertTrue

+ + + +Check that the expression is true. + + +

assertTupleEqual

+ + + +A tuple-specific equality assertion. + + + + + + + + + + + + + + + + + +
Args
+`tuple1` + +The first tuple to compare. +
+`tuple2` + +The second tuple to compare. +
+`msg` + +Optional message to use on failure instead of a list of +differences. +
+ + + +

assertUrlEqual

+ + + +Asserts that urls are equal, ignoring ordering of query params. + + +

assertWarns

+ + + +Fail unless a warning of class warnClass is triggered +by the callable when invoked with specified positional and +keyword arguments. If a different type of warning is +triggered, it will not be handled: depending on the other +warning filtering rules in effect, it might be silenced, printed +out, or raised as an exception. + +If called with the callable and arguments omitted, will return a +context object used like this:: + + with self.assertWarns(SomeWarning): + do_something() + +An optional keyword argument 'msg' can be provided when assertWarns +is used as a context object. + +The context manager keeps a reference to the first matching +warning as the 'warning' attribute; similarly, the 'filename' +and 'lineno' attributes give you information about the line +of Python code from which the warning was triggered. +This allows you to inspect the warning after the assertion:: + + with self.assertWarns(SomeWarning) as cm: + do_something() + the_warning = cm.warning + self.assertEqual(the_warning.some_attribute, 147) + +

assertWarnsRegex

+ + + +Asserts that the message in a triggered warning matches a regexp. +Basic functioning is similar to assertWarns() with the addition +that only warnings whose messages also match the regular expression +are considered successful matches. + + + + + + + + + + + + + + + + + + + + + + +
Args
+`expected_warning` + +Warning class expected to be triggered. +
+`expected_regex` + +Regex (re.Pattern object or string) expected +to be found in error message. +
+`args` + +Function to be called and extra positional args. +
+`kwargs` + +Extra kwargs. +
+`msg` + +Optional message used in case of failure. Can only be used +when assertWarnsRegex is used as a context manager. +
+ + + +

assert_

+ + + + + + +

cached_session

+
+View source
+
+
+
+Returns a TensorFlow Session for use in executing tests.
+
+This method behaves differently than self.session(): for performance reasons
+`cached_session` will by default reuse the same session within the same
+test. The session returned by this function will only be closed at the end
+of the test (in the TearDown function).
+
+Use the `use_gpu` and `force_gpu` options to control where ops are run. If
+`force_gpu` is True, all ops are pinned to `/device:GPU:0`. Otherwise, if
+`use_gpu` is True, TensorFlow tries to run as many ops on the GPU as
+possible. If both `force_gpu` and `use_gpu` are False, all ops are pinned to
+the CPU.
+
+#### Example:
+
+
+```python
+class MyOperatorTest(test_util.TensorFlowTestCase):
+  def testMyOperator(self):
+    with self.cached_session(use_gpu=True) as sess:
+      valid_input = [1.0, 2.0, 3.0, 4.0, 5.0]
+      result = MyOperator(valid_input).eval()
+      self.assertEqual(result, [1.0, 2.0, 3.0, 5.0, 8.0])
+      invalid_input = [-1.0, 2.0, 7.0]
+      with self.assertRaisesOpError("negative input not supported"):
+        MyOperator(invalid_input).eval()
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`graph` + +Optional graph to use during the returned session. +
+`config` + +An optional config_pb2.ConfigProto to use to configure the +session. +
+`use_gpu` + +If True, attempt to run as many ops as possible on GPU. +
+`force_gpu` + +If True, pin all ops to `/device:GPU:0`. +
+ + + +#### Yields: + +A Session object that should be used as a context manager to surround +the graph building and execution code in a test case. + + +

captureWritesToStream

+ +View source + + + +A context manager that captures the writes to a given stream. + +This context manager captures all writes to a given stream inside of a +`CapturedWrites` object. When this context manager is created, it yields +the `CapturedWrites` object. The captured contents can be accessed by +calling `.contents()` on the `CapturedWrites`. + +For this function to work, the stream must have a file descriptor that +can be modified using `os.dup` and `os.dup2`, and the stream must support +a `.flush()` method. The default python sys.stdout and sys.stderr are +examples of this. Note that this does not work in Colab or Jupyter +notebooks, because those use alternate stdout streams. + +#### Example: + + +```python +class MyOperatorTest(test_util.TensorFlowTestCase): + def testMyOperator(self): + input = [1.0, 2.0, 3.0, 4.0, 5.0] + with self.captureWritesToStream(sys.stdout) as captured: + result = MyOperator(input).eval() + self.assertStartsWith(captured.contents(), "This was printed.") +``` + + + + + + + + + + +
Args
+`stream` + +The stream whose writes should be captured. This stream must have +a file descriptor, support writing via using that file descriptor, and +must have a `.flush()` method. +
+ + + +#### Yields: + +A `CapturedWrites` object that contains all writes to the specified stream +made during this context. + + +

checkedThread

+ +View source + + + +Returns a Thread wrapper that asserts 'target' completes successfully. + +This method should be used to create all threads in test cases, as +otherwise there is a risk that a thread will silently fail, and/or +assertions made in the thread will not be respected. + + + + + + + + + + + + + + + + +
Args
+`target` + +A callable object to be executed in the thread. +
+`args` + +The argument tuple for the target invocation. Defaults to (). +
+`kwargs` + +A dictionary of keyword arguments for the target invocation. +Defaults to {}. +
+ + + + + + + + + + + +
Returns
+A wrapper for threading.Thread that supports start() and join() methods. +
+ + + +
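+
+For illustration, a hypothetical test showing that assertions made inside the
+wrapped thread are surfaced when the thread is joined:
+
+```python
+import tensorflow as tf
+
+
+class ThreadTest(tf.test.TestCase):
+
+  def test_worker_thread(self):
+    results = []
+
+    def worker(value):
+      self.assertGreater(value, 0)  # A failure here fails the test on join().
+      results.append(value * 2)
+
+    t = self.checkedThread(target=worker, args=(21,))
+    t.start()
+    t.join()
+    self.assertAllEqual([42], results)
+```
+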

countTestCases

+ + + + + + +

create_tempdir

+ + + +Create a temporary directory specific to the test. + +NOTE: The directory and its contents will be recursively cleared before +creation. This ensures that there is no pre-existing state. + +This creates a named directory on disk that is isolated to this test, and +will be properly cleaned up by the test. This avoids several pitfalls of +creating temporary directories for test purposes, as well as makes it easier +to setup directories and verify their contents. + +See also: `create_tempfile()` for creating temporary files. + + + + + + + + + + + + + +
Args
+`name` + +Optional name of the directory. If not given, a unique +name will be generated and used. +
+`cleanup` + +Optional cleanup policy on when/if to remove the directory (and +all its contents) at the end of the test. If None, then uses +`self.tempfile_cleanup`. +
+ + + + + + + + + + + +
Returns
+A _TempDir representing the created directory. +
+ + + +

create_tempfile

+ + + +Create a temporary file specific to the test. + +This creates a named file on disk that is isolated to this test, and will +be properly cleaned up by the test. This avoids several pitfalls of +creating temporary files for test purposes, as well as makes it easier +to setup files, their data, read them back, and inspect them when +a test fails. + +NOTE: This will zero-out the file. This ensures there is no pre-existing +state. +NOTE: If the file already exists, it will be made writable and overwritten. + +See also: `create_tempdir()` for creating temporary directories, and +`_TempDir.create_file` for creating files within a temporary directory. + + + + + + + + + + + + + + + + + + + + + + + + + +
Args
+`file_path` + +Optional file path for the temp file. If not given, a unique +file name will be generated and used. Slashes are allowed in the name; +any missing intermediate directories will be created. NOTE: This path is +the path that will be cleaned up, including any directories in the path, +e.g., 'foo/bar/baz.txt' will `rm -r foo`. +
+`content` + +Optional string or +bytes to initially write to the file. If not +specified, then an empty file is created. +
+`mode` + +Mode string to use when writing content. Only used if `content` is +non-empty. +
+`encoding` + +Encoding to use when writing string content. Only used if +`content` is text. +
+`errors` + +How to handle text to bytes encoding errors. Only used if +`content` is text. +
+`cleanup` + +Optional cleanup policy on when/if to remove the directory (and +all its contents) at the end of the test. If None, then uses +`self.tempfile_cleanup`. +
+ + + + + + + + + + + +
Returns
+A _TempFile representing the created file. +
+ + + +
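+
+A small hypothetical sketch (the file name and contents are made up) of
+writing a temp file and reading it back:
+
+```python
+import tensorflow as tf
+
+
+class TempFileTest(tf.test.TestCase):
+
+  def test_round_trip(self):
+    # Intermediate directories in the path are created automatically and
+    # everything is cleaned up when the test ends.
+    tmp = self.create_tempfile('vocab/words.txt', content='hello\nworld\n')
+    with open(tmp.full_path) as f:
+      self.assertEqual('hello\nworld\n', f.read())
+```
+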

debug

+ + + +Run the test without collecting errors in a TestResult + + +

defaultTestResult

+ + + + + + +

doClassCleanups

+ + + +Execute all class cleanup functions. Normally called for you after +tearDownClass. + +

doCleanups

+ + + +Execute all cleanup functions. Normally called for you after +tearDown. + +

enter_context

+
+
+Returns the CM's value after registering it with the exit stack.
+
+Entering a context pushes it onto a stack of contexts. The context is exited
+when the test completes. Contexts are exited in the reverse order of
+entering. They will always be exited, regardless of test failure/success.
+The context stack is specific to the test being run.
+
+This is useful to eliminate per-test boilerplate when context managers
+are used. For example, instead of decorating every test with `@mock.patch`,
+simply do `self.foo = self.enter_context(mock.patch(...))` in `setUp()`.
+
+NOTE: The context managers will always be exited without any error
+information. This is an unfortunate implementation detail due to some
+internals of how unittest runs tests.
+
+
+
+
+
+
+
+
+
+
Args
+`manager` + +The context manager to enter. +
+ + + +

evaluate

+ +View source + + + +Evaluates tensors and returns numpy values. + + + + + + + + + + + +
Args
+`tensors` + +A Tensor or a nested list/tuple of Tensors. +
+ + + + + + + + + + + +
Returns
+tensors numpy values. +
+ + + +
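+
+For illustration, a hypothetical test that works unchanged in graph and eager
+mode:
+
+```python
+import tensorflow as tf
+
+
+class EvaluateTest(tf.test.TestCase):
+
+  def test_tuple_of_tensors(self):
+    # A tuple of Tensors comes back as a tuple of NumPy values.
+    total, product = self.evaluate((tf.add(1, 2), tf.multiply(2, 3)))
+    self.assertEqual(3, total)
+    self.assertEqual(6, product)
+```
+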

fail

+ + + +Fail immediately with the given message, optionally prefixed. + + +

failIf

+ + + + + + +

failIfAlmostEqual

+ + + + + + +

failIfEqual

+ + + + + + +

failUnless

+ + + + + + +

failUnlessAlmostEqual

+ + + + + + +

failUnlessEqual

+ + + + + + +

failUnlessRaises

+ + + + + + +

get_temp_dir

+
+View source
+
+
+
+Returns a unique temporary directory for the test to use.
+
+If you call this method multiple times in a test, it will return the
+same folder. However, across different runs the directories will be
+different. This will ensure that across different runs tests will not be
+able to pollute each other's environment.
+If you need multiple unique directories within a single test, you should
+use tempfile.mkdtemp as follows:
+  tempfile.mkdtemp(dir=self.get_temp_dir())
+
+
+
+
+
+
+
+
+
Returns
+string, the path to the unique temporary directory created for this test. +
+ + + +

id

+ + + + + + +

run

+ + + + + + +

session

+
+View source
+
+
+
+A context manager for a TensorFlow Session for use in executing tests.
+
+Note that this will set this session and the graph as global defaults.
+
+Use the `use_gpu` and `force_gpu` options to control where ops are run. If
+`force_gpu` is True, all ops are pinned to `/device:GPU:0`. Otherwise, if
+`use_gpu` is True, TensorFlow tries to run as many ops on the GPU as
+possible. If both `force_gpu` and `use_gpu` are False, all ops are pinned to
+the CPU.
+
+#### Example:
+
+
+
+``` python
+class MyOperatorTest(test_util.TensorFlowTestCase):
+  def testMyOperator(self):
+    with self.session(use_gpu=True):
+      valid_input = [1.0, 2.0, 3.0, 4.0, 5.0]
+      result = MyOperator(valid_input).eval()
+      self.assertEqual(result, [1.0, 2.0, 3.0, 5.0, 8.0])
+      invalid_input = [-1.0, 2.0, 7.0]
+      with self.assertRaisesOpError("negative input not supported"):
+        MyOperator(invalid_input).eval()
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Args
+`graph` + +Optional graph to use during the returned session. +
+`config` + +An optional config_pb2.ConfigProto to use to configure the +session. +
+`use_gpu` + +If True, attempt to run as many ops as possible on GPU. +
+`force_gpu` + +If True, pin all ops to `/device:GPU:0`. +
+ + + +#### Yields: + +A Session object that should be used as a context manager to surround +the graph building and execution code in a test case. + + +

setUp

+ +View source + + + +Hook method for setting up the test fixture before exercising it. + + +

setUpClass

+ + + +Hook method for setting up class fixture before running tests in the class. + + +

shortDescription

+ + + +Formats both the test method name and the first line of its docstring. + +If no docstring is given, only returns the method name. + +This method overrides unittest.TestCase.shortDescription(), which +only returns the first line of the docstring, obscuring the name +of the test upon failure. + + + + + + + + + + +
Returns
+`desc` + +A short description of a test method. +
+ + + +

skipTest

+ + + +Skip this test. + + +

subTest

+ + + +Return a context manager that will return the enclosed block +of code in a subtest identified by the optional message and +keyword parameters. A failure in the subtest marks the test +case as failed but resumes execution at the end of the enclosed +block, allowing further test code to be executed. + +

tearDown

+ +View source + + + +Hook method for deconstructing the test fixture after testing it. + + +

tearDownClass

+ + + +Hook method for deconstructing the class fixture after running all tests in the class. + + +

test_session

+ +View source + + + +Use cached_session instead. (deprecated) + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `self.session()` or `self.cached_session()` instead. + +

__call__

+ + + +Call self as a function. + + +

__eq__

+ + + +Return self==value. + + + + +## Class Variables + +* `longMessage = True` +* `maxDiff = 1600` +* `tempfile_cleanup` diff --git a/site/en/api_docs/python/tf/test/TestCase/failureException.md b/site/en/api_docs/python/tf/test/TestCase/failureException.md new file mode 100644 index 00000000000..e147449100c --- /dev/null +++ b/site/en/api_docs/python/tf/test/TestCase/failureException.md @@ -0,0 +1,43 @@ +description: Assertion failed. + +
+ + + + +
+ +# tf.test.TestCase.failureException + + + + + + + + + +Assertion failed. + + + + + + + + + + diff --git a/site/en/api_docs/python/tf/test/assert_equal_graph_def.md b/site/en/api_docs/python/tf/test/assert_equal_graph_def.md new file mode 100644 index 00000000000..d91df6a931c --- /dev/null +++ b/site/en/api_docs/python/tf/test/assert_equal_graph_def.md @@ -0,0 +1,85 @@ +description: Asserts that two GraphDefs are (mostly) the same. + +
+ + +
+ +# tf.test.assert_equal_graph_def + + + + + + + + + +Asserts that two `GraphDef`s are (mostly) the same. + + + + + + + +Compares two `GraphDef` protos for equality, ignoring versions and ordering of +nodes, attrs, and control inputs. Node names are used to match up nodes +between the graphs, so the naming of nodes must be consistent. This function +ignores randomized attribute values that may appear in V2 checkpoints. + + + + + + + + + + + + + +
+`expected` + +The `GraphDef` we expected. +
+`actual` + +The `GraphDef` we have. +
+ + + + + + + + + + + + + + + +
+`AssertionError` + +If the `GraphDef`s do not match. +
+`TypeError` + +If either argument is not a `GraphDef`. +
+ diff --git a/site/en/api_docs/python/tf/test/benchmark_config.md b/site/en/api_docs/python/tf/test/benchmark_config.md new file mode 100644 index 00000000000..77042e65563 --- /dev/null +++ b/site/en/api_docs/python/tf/test/benchmark_config.md @@ -0,0 +1,56 @@ +description: Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer. + +
+ + +
+ +# tf.test.benchmark_config + + + + + + + + + +Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer. + + + + + + + + + + + + + + + + + + +
+A TensorFlow ConfigProto object. +
+ diff --git a/site/en/api_docs/python/tf/test/compute_gradient.md b/site/en/api_docs/python/tf/test/compute_gradient.md new file mode 100644 index 00000000000..52e6954f3cd --- /dev/null +++ b/site/en/api_docs/python/tf/test/compute_gradient.md @@ -0,0 +1,121 @@ +description: Computes the theoretical and numeric Jacobian of f. + +
+ + +
+ +# tf.test.compute_gradient + + + + + + + + + +Computes the theoretical and numeric Jacobian of `f`. + + + + + + + +With y = f(x), computes the theoretical and numeric Jacobian dy/dx. + + + + + + + + + + + + + + + + +
+`f` + +the function. +
+`x`
+
+a list of arguments for the function.
+
+`delta` + +(optional) perturbation used to compute numeric Jacobian. +
+ + + + + + + + + + + +
+A pair of lists, where the first is a list of 2-d numpy arrays representing +the theoretical Jacobians for each argument, and the second list is the +numerical ones. Each 2-d array has "y_size" rows +and "x_size" columns where "x_size" is the number of elements in the +corresponding argument and "y_size" is the number of elements in f(x). +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If result is empty but the gradient is nonzero. +
+`ValueError` + +If x is not list, but any other type. +
+ + + +#### Example: + + +```python +@tf.function +def test_func(x): + return x*x + +theoretical, numerical = tf.test.compute_gradient(test_func, [1.0]) +theoretical, numerical +# ((array([[2.]], dtype=float32),), (array([[2.000004]], dtype=float32),)) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/test/create_local_cluster.md b/site/en/api_docs/python/tf/test/create_local_cluster.md new file mode 100644 index 00000000000..f624b4f4b24 --- /dev/null +++ b/site/en/api_docs/python/tf/test/create_local_cluster.md @@ -0,0 +1,158 @@ +description: Create and start local servers and return the associated Server objects. + +
+ + +
+ +# tf.test.create_local_cluster + + + + + + + + + +Create and start local servers and return the associated `Server` objects. + + + + + + + + + +"PS" stands for "parameter server": a task responsible for storing and +updating the model's parameters. Other tasks send updates to these parameters +as they work on optimizing the parameters. This particular division of labor +between tasks is not required, but is common for distributed training. + +Read more at https://www.tensorflow.org/guide/extend/architecture + +![components](https://www.tensorflow.org/images/diag1.svg "components") + + +Figure illustrates the interaction of these components. +"/job:worker/task:0" and "/job:ps/task:0" are both tasks with worker services. + + +#### Example: + + +```python +workers, _ = tf.test.create_local_cluster(num_workers=2, num_ps=2) + +worker_sessions = [tf.compat.v1.Session(w.target) for w in workers] + +with tf.device("/job:ps/task:0"): + ... +with tf.device("/job:ps/task:1"): + ... +with tf.device("/job:worker/task:0"): + ... +with tf.device("/job:worker/task:1"): + ... + +worker_sessions[0].run(...) +``` + + + + + + + + + + + + + + + + + + + + + + +
+`num_workers` + +Number of worker servers to start. +
+`num_ps` + +Number of PS servers to start. +
+`protocol` + +Communication protocol. Allowed values are documented in the +documentation of tf.distribute.Server. +
+`worker_config` + +(optional) `tf.ConfigProto` to initialize workers. Can be +used to instantiate multiple devices etc. +
+`ps_config` + +(optional) `tf.ConfigProto` to initialize PS servers. +
+ + + + + + + + + + + +
+A tuple `(worker_servers, ps_servers)`. `worker_servers` is a list +of `num_workers` objects of type tf.distribute.Server (all running +locally); +and `ps_servers` is a list of `num_ps` objects of similar type. +
+ + + + + + + + + + + + +
+`ImportError` + +if portpicker module was not found at load time +
+ diff --git a/site/en/api_docs/python/tf/test/gpu_device_name.md b/site/en/api_docs/python/tf/test/gpu_device_name.md new file mode 100644 index 00000000000..180728671e0 --- /dev/null +++ b/site/en/api_docs/python/tf/test/gpu_device_name.md @@ -0,0 +1,42 @@ +description: Returns the name of a GPU device if available or the empty string. + +
+ + +
+ +# tf.test.gpu_device_name + + + + + + + + + +Returns the name of a GPU device if available or the empty string. + + + + + + + + diff --git a/site/en/api_docs/python/tf/test/is_built_with_cuda.md b/site/en/api_docs/python/tf/test/is_built_with_cuda.md new file mode 100644 index 00000000000..102f2479147 --- /dev/null +++ b/site/en/api_docs/python/tf/test/is_built_with_cuda.md @@ -0,0 +1,42 @@ +description: Returns whether TensorFlow was built with CUDA (GPU) support. + +
+ + +
+ +# tf.test.is_built_with_cuda + + + + + + + + + +Returns whether TensorFlow was built with CUDA (GPU) support. + + + + + + + + diff --git a/site/en/api_docs/python/tf/test/is_built_with_gpu_support.md b/site/en/api_docs/python/tf/test/is_built_with_gpu_support.md new file mode 100644 index 00000000000..3702b50b9d5 --- /dev/null +++ b/site/en/api_docs/python/tf/test/is_built_with_gpu_support.md @@ -0,0 +1,42 @@ +description: Returns whether TensorFlow was built with GPU (i.e. CUDA or ROCm) support. + +
+ + +
+ +# tf.test.is_built_with_gpu_support + + + + + + + + + +Returns whether TensorFlow was built with GPU (i.e. CUDA or ROCm) support. + + + + + + + + diff --git a/site/en/api_docs/python/tf/test/is_built_with_rocm.md b/site/en/api_docs/python/tf/test/is_built_with_rocm.md new file mode 100644 index 00000000000..f3c317684d9 --- /dev/null +++ b/site/en/api_docs/python/tf/test/is_built_with_rocm.md @@ -0,0 +1,42 @@ +description: Returns whether TensorFlow was built with ROCm (GPU) support. + +
+ + +
+ +# tf.test.is_built_with_rocm + + + + + + + + + +Returns whether TensorFlow was built with ROCm (GPU) support. + + + + + + + + diff --git a/site/en/api_docs/python/tf/test/is_built_with_xla.md b/site/en/api_docs/python/tf/test/is_built_with_xla.md new file mode 100644 index 00000000000..81e06f07870 --- /dev/null +++ b/site/en/api_docs/python/tf/test/is_built_with_xla.md @@ -0,0 +1,42 @@ +description: Returns whether TensorFlow was built with XLA support. + +
+ + +
+ +# tf.test.is_built_with_xla + + + + + + + + + +Returns whether TensorFlow was built with XLA support. + + + + + + + + diff --git a/site/en/api_docs/python/tf/test/is_gpu_available.md b/site/en/api_docs/python/tf/test/is_gpu_available.md new file mode 100644 index 00000000000..78f66ed7523 --- /dev/null +++ b/site/en/api_docs/python/tf/test/is_gpu_available.md @@ -0,0 +1,104 @@ +description: Returns whether TensorFlow can access a GPU. (deprecated) + +
+ + +
+ +# tf.test.is_gpu_available + + + + + + + + + +Returns whether TensorFlow can access a GPU. (deprecated) + + + + + + + + + +Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. +Instructions for updating: +Use `tf.config.list_physical_devices('GPU')` instead. + +Warning: if a non-GPU version of the package is installed, the function would +also return False. Use tf.test.is_built_with_cuda to validate if TensorFlow +was build with CUDA support. + + + + + + + + + + + + + +
+`cuda_only` + +limit the search to CUDA GPUs. +
+`min_cuda_compute_capability` + +a (major,minor) pair that indicates the minimum +CUDA compute capability required, or None if no requirement. +
+
+
+Note that the keyword arg name "cuda_only" is misleading (since the routine
+will return true when a GPU device is available irrespective of whether TF
+was built with CUDA support or ROCm support). However, no changes are made
+here because:
+
+++ Changing the name "cuda_only" to something more generic would break
+   backward compatibility
+
+++ Adding an equivalent "rocm_only" would require the implementation to check
+   the build type. This in turn would require doing the same for CUDA and thus
+   potentially break backward compatibility
+
+++ Adding a new "cuda_or_rocm_only" would not break backward compatibility,
+   but would require most (if not all) callers to update the call to use
+   "cuda_or_rocm_only" instead of "cuda_only"
+
+
+
+
+
+
+
+
+
+
+True if a GPU device of the requested kind is available. +
+ diff --git a/site/en/api_docs/python/tf/test/main.md b/site/en/api_docs/python/tf/test/main.md new file mode 100644 index 00000000000..af607aca47e --- /dev/null +++ b/site/en/api_docs/python/tf/test/main.md @@ -0,0 +1,44 @@ +description: Runs all unit tests. + +
+ + +
+ +# tf.test.main + + + + + + + + + +Runs all unit tests. + + + + + + + + diff --git a/site/en/api_docs/python/tf/tile.md b/site/en/api_docs/python/tf/tile.md new file mode 100644 index 00000000000..ce51cd80f90 --- /dev/null +++ b/site/en/api_docs/python/tf/tile.md @@ -0,0 +1,113 @@ +description: Constructs a tensor by tiling a given tensor. + +
+ + +
+ +# tf.tile + + + + + + + + + +Constructs a tensor by tiling a given tensor. + + + + + + + + + +This operation creates a new tensor by replicating `input` `multiples` times. +The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements, +and the values of `input` are replicated `multiples[i]` times along the 'i'th +dimension. For example, tiling `[a b c d]` by `[2]` produces +`[a b c d a b c d]`. + +``` +>>> a = tf.constant([[1,2,3],[4,5,6]], tf.int32) +>>> b = tf.constant([1,2], tf.int32) +>>> tf.tile(a, b) + +>>> c = tf.constant([2,1], tf.int32) +>>> tf.tile(a, c) + +>>> d = tf.constant([2,2], tf.int32) +>>> tf.tile(a, d) + +``` + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor`. 1-D or higher. +
+`multiples` + +A `Tensor`. Must be one of the following types: `int32`, `int64`. +1-D. Length must be the same as the number of dimensions in `input` +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `input`. +
+ diff --git a/site/en/api_docs/python/tf/timestamp.md b/site/en/api_docs/python/tf/timestamp.md new file mode 100644 index 00000000000..15d81993bd1 --- /dev/null +++ b/site/en/api_docs/python/tf/timestamp.md @@ -0,0 +1,74 @@ +description: Provides the time since epoch in seconds. + +
+ + +
+ +# tf.timestamp + + + + + + + + + +Provides the time since epoch in seconds. + + + + + + + + + +Returns the timestamp as a `float64` for seconds since the Unix epoch. + +Note: the timestamp is computed when the op is executed, not when it is added +to the graph. + + + + + + + + + + +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` of type `float64`. +
+ diff --git a/site/en/api_docs/python/tf/tpu.md b/site/en/api_docs/python/tf/tpu.md new file mode 100644 index 00000000000..74da0dc4702 --- /dev/null +++ b/site/en/api_docs/python/tf/tpu.md @@ -0,0 +1,25 @@ +description: Ops related to Tensor Processing Units. + +
+ + +
+ +# Module: tf.tpu + + + + + + + + + +Ops related to Tensor Processing Units. + + + +## Modules + +[`experimental`](../tf/tpu/experimental.md) module: Public API for tf.tpu.experimental namespace. + diff --git a/site/en/api_docs/python/tf/tpu/experimental.md b/site/en/api_docs/python/tf/tpu/experimental.md new file mode 100644 index 00000000000..c397a34dade --- /dev/null +++ b/site/en/api_docs/python/tf/tpu/experimental.md @@ -0,0 +1,31 @@ +description: Public API for tf.tpu.experimental namespace. + +
+ + +
+ +# Module: tf.tpu.experimental + + + + + + + + + +Public API for tf.tpu.experimental namespace. + + + +## Classes + +[`class DeviceAssignment`](../../tf/tpu/experimental/DeviceAssignment.md): Mapping from logical cores in a computation to the physical TPU topology. + +## Functions + +[`initialize_tpu_system(...)`](../../tf/tpu/experimental/initialize_tpu_system.md): Initialize the TPU devices. + +[`shutdown_tpu_system(...)`](../../tf/tpu/experimental/shutdown_tpu_system.md): Shuts down the TPU devices. + diff --git a/site/en/api_docs/python/tf/tpu/experimental/DeviceAssignment.md b/site/en/api_docs/python/tf/tpu/experimental/DeviceAssignment.md new file mode 100644 index 00000000000..7093d1da405 --- /dev/null +++ b/site/en/api_docs/python/tf/tpu/experimental/DeviceAssignment.md @@ -0,0 +1,286 @@ +description: Mapping from logical cores in a computation to the physical TPU topology. + +
+ + + + + + + + + +
+ +# tf.tpu.experimental.DeviceAssignment + + + + + + + + + +Mapping from logical cores in a computation to the physical TPU topology. + + + + + + + + + +Prefer to use the DeviceAssignment.build() helper to construct a +`DeviceAssignment`; it is easier if less flexible than constructing a +`DeviceAssignment` directly. + + + + + + + + + + + + + +
+`topology` + +A `Topology` object that describes the physical TPU topology. +
+`core_assignment` + +A logical to physical core mapping, represented as a +rank 3 numpy array. See the description of the `core_assignment` +property for more details. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `topology` is not `Topology` object. +
+`ValueError` + +If `core_assignment` is not a rank 3 numpy array. +
+ + + + + + + + + + + + + + + + + + + + + + + +
+`core_assignment` + +The logical to physical core mapping. +
+`num_cores_per_replica` + +The number of cores per replica. +
+`num_replicas` + +The number of replicas of the computation. +
+`topology` + +A `Topology` that describes the TPU topology. +
+ + + +## Methods + +

build

+ +View source + + + + + + +

coordinates

+ +View source + + + +Returns the physical topology coordinates of a logical core. + + +

host_device

+ +View source + + + +Returns the CPU device attached to a logical core. + + +

lookup_replicas

+ +View source + + + +Lookup replica ids by task number and logical core. + + + + + + + + + + + + + + +
Args
+`task_id` + +TensorFlow task number. +
+`logical_core` + +An integer, identifying a logical core. +
+ + + + + + + + + + + +
Returns
+A sorted list of the replicas that are attached to that task and +logical_core. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If no replica exists in the task which contains the logical +core. +
+ + + +

tpu_device

+ +View source + + + +Returns the name of the TPU device assigned to a logical core. + + +

tpu_ordinal

+ +View source + + + +Returns the ordinal of the TPU device assigned to a logical core. + + + + diff --git a/site/en/api_docs/python/tf/tpu/experimental/initialize_tpu_system.md b/site/en/api_docs/python/tf/tpu/experimental/initialize_tpu_system.md new file mode 100644 index 00000000000..696bcf1106e --- /dev/null +++ b/site/en/api_docs/python/tf/tpu/experimental/initialize_tpu_system.md @@ -0,0 +1,94 @@ +description: Initialize the TPU devices. + +
+ + +
+ +# tf.tpu.experimental.initialize_tpu_system + + + + + + + + + +Initialize the TPU devices. + + + + + + + + + + + + + + + + + + + +
+`cluster_resolver` + +A tf.distribute.cluster_resolver.TPUClusterResolver, +which provides information about the TPU cluster. +
+ + + + + + + + + + + +
+The tf.tpu.Topology object for the topology of the TPU cluster. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If no TPU devices found for eager execution or if run in a +tf.function. +
+ diff --git a/site/en/api_docs/python/tf/tpu/experimental/shutdown_tpu_system.md b/site/en/api_docs/python/tf/tpu/experimental/shutdown_tpu_system.md new file mode 100644 index 00000000000..c5b4b709f30 --- /dev/null +++ b/site/en/api_docs/python/tf/tpu/experimental/shutdown_tpu_system.md @@ -0,0 +1,83 @@ +description: Shuts down the TPU devices. + +
+ + +
+ +# tf.tpu.experimental.shutdown_tpu_system + + + + + + + + + +Shuts down the TPU devices. + + + + + + + + + +This will clear all caches, even those that are maintained through sequential +calls to tf.tpu.experimental.initialize_tpu_system, such as the compilation +cache. + + + + + + + + + + +
+`cluster_resolver` + +A tf.distribute.cluster_resolver.TPUClusterResolver, +which provides information about the TPU cluster. +
+ + + + + + + + + + + + +
+`RuntimeError` + +If no TPU devices are found for eager execution, or if run in a +tf.function. +
+ diff --git a/site/en/api_docs/python/tf/train.md b/site/en/api_docs/python/tf/train.md new file mode 100644 index 00000000000..4c611bfbc0d --- /dev/null +++ b/site/en/api_docs/python/tf/train.md @@ -0,0 +1,76 @@ +description: Support for training models. + +
+ + +
+ +# Module: tf.train + + + + + + + + + +Support for training models. + + +See the [Training](https://tensorflow.org/api_guides/python/train) guide. + +## Modules + +[`experimental`](../tf/train/experimental.md) module: Public API for tf.train.experimental namespace. + +## Classes + +[`class BytesList`](../tf/train/BytesList.md): A ProtocolMessage + +[`class Checkpoint`](../tf/train/Checkpoint.md): Groups trackable objects, saving and restoring them. + +[`class CheckpointManager`](../tf/train/CheckpointManager.md): Deletes old checkpoints. + +[`class ClusterDef`](../tf/train/ClusterDef.md): A ProtocolMessage + +[`class ClusterSpec`](../tf/train/ClusterSpec.md): Represents a cluster as a set of "tasks", organized into "jobs". + +[`class Coordinator`](../tf/train/Coordinator.md): A coordinator for threads. + +[`class Example`](../tf/train/Example.md): A ProtocolMessage + +[`class ExponentialMovingAverage`](../tf/train/ExponentialMovingAverage.md): Maintains moving averages of variables by employing an exponential decay. + +[`class Feature`](../tf/train/Feature.md): A ProtocolMessage + +[`class FeatureList`](../tf/train/FeatureList.md): A ProtocolMessage + +[`class FeatureLists`](../tf/train/FeatureLists.md): A ProtocolMessage + +[`class Features`](../tf/train/Features.md): A ProtocolMessage + +[`class FloatList`](../tf/train/FloatList.md): A ProtocolMessage + +[`class Int64List`](../tf/train/Int64List.md): A ProtocolMessage + +[`class JobDef`](../tf/train/JobDef.md): A ProtocolMessage + +[`class SequenceExample`](../tf/train/SequenceExample.md): A ProtocolMessage + +[`class ServerDef`](../tf/train/ServerDef.md): A ProtocolMessage + +## Functions + +[`checkpoints_iterator(...)`](../tf/train/checkpoints_iterator.md): Continuously yield new checkpoint files as they appear. + +[`get_checkpoint_state(...)`](../tf/train/get_checkpoint_state.md): Returns CheckpointState proto from the "checkpoint" file. + +[`latest_checkpoint(...)`](../tf/train/latest_checkpoint.md): Finds the filename of latest saved checkpoint file. + +[`list_variables(...)`](../tf/train/list_variables.md): Returns list of all variables in the checkpoint. + +[`load_checkpoint(...)`](../tf/train/load_checkpoint.md): Returns `CheckpointReader` for checkpoint found in `ckpt_dir_or_file`. + +[`load_variable(...)`](../tf/train/load_variable.md): Returns the tensor value of the given variable in the checkpoint. + diff --git a/site/en/api_docs/python/tf/train/BytesList.md b/site/en/api_docs/python/tf/train/BytesList.md new file mode 100644 index 00000000000..6830c81d261 --- /dev/null +++ b/site/en/api_docs/python/tf/train/BytesList.md @@ -0,0 +1,57 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.BytesList + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + +
+`value` + +`repeated bytes value` +
+ + + diff --git a/site/en/api_docs/python/tf/train/Checkpoint.md b/site/en/api_docs/python/tf/train/Checkpoint.md new file mode 100644 index 00000000000..68a1dc904d6 --- /dev/null +++ b/site/en/api_docs/python/tf/train/Checkpoint.md @@ -0,0 +1,390 @@ +description: Groups trackable objects, saving and restoring them. + +
+ + + + + + +
+ +# tf.train.Checkpoint + + + + + + + + + +Groups trackable objects, saving and restoring them. + + + + + + + +`Checkpoint`'s constructor accepts keyword arguments whose values are types +that contain trackable state, such as tf.keras.optimizers.Optimizer +implementations, tf.Variable, `tf.keras.Layer` implementations, or +tf.keras.Model implementations. It saves these values with a checkpoint, and +maintains a `save_counter` for numbering checkpoints. + +#### Example usage: + + + +```python +import tensorflow as tf +import os + +checkpoint_directory = "/tmp/training_checkpoints" +checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt") + +checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model) +status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory)) +for _ in range(num_training_steps): + optimizer.minimize( ... ) # Variables will be restored on creation. +status.assert_consumed() # Optional sanity checks. +checkpoint.save(file_prefix=checkpoint_prefix) +``` + +Checkpoint.save and Checkpoint.restore write and read object-based +checkpoints, in contrast to TensorFlow 1.x's tf.compat.v1.train.Saver which +writes and +reads `variable.name` based checkpoints. Object-based checkpointing saves a +graph of dependencies between Python objects (`Layer`s, `Optimizer`s, +`Variable`s, etc.) with named edges, and this graph is used to match variables +when restoring a checkpoint. It can be more robust to changes in the Python +program, and helps to support restore-on-create for variables. + +`Checkpoint` objects have dependencies on the objects passed as keyword +arguments to their constructors, and each dependency is given a name that is +identical to the name of the keyword argument for which it was created. +TensorFlow classes like `Layer`s and `Optimizer`s will automatically add +dependencies on their variables (e.g. "kernel" and "bias" for +tf.keras.layers.Dense). Inheriting from tf.keras.Model makes managing +dependencies easy in user-defined classes, since `Model` hooks into attribute +assignment. For example: + +```python +class Regress(tf.keras.Model): + + def __init__(self): + super(Regress, self).__init__() + self.input_transform = tf.keras.layers.Dense(10) + # ... + + def call(self, inputs): + x = self.input_transform(inputs) + # ... +``` + +This `Model` has a dependency named "input_transform" on its `Dense` layer, +which in turn depends on its variables. As a result, saving an instance of +`Regress` using tf.train.Checkpoint will also save all the variables created +by the `Dense` layer. + +When variables are assigned to multiple workers, each worker writes its own +section of the checkpoint. These sections are then merged/re-indexed to behave +as a single checkpoint. This avoids copying all variables to one worker, but +does require that all workers see a common filesystem. + +While tf.keras.Model.save_weights and tf.train.Checkpoint.save save in the +same format, note that the root of the resulting checkpoint is the object the +save method is attached to. This means saving a tf.keras.Model using +`save_weights` and loading into a tf.train.Checkpoint with a `Model` +attached (or vice versa) will not match the `Model`'s variables. See the +[guide to training +checkpoints](https://www.tensorflow.org/guide/checkpoint) for +details. Prefer tf.train.Checkpoint over tf.keras.Model.save_weights for +training checkpoints. + + + + + + + + + + +
+`**kwargs` + +Keyword arguments are set as attributes of this object, and are +saved with the checkpoint. Values must be trackable objects. +
+ + + + + + + + + + + + +
+`ValueError` + +If objects in `kwargs` are not trackable. +
+ + + + + + + + + + + + + + +
+`save_counter` + +Incremented when `save()` is called. Used to number +checkpoints. +
+ + + +## Methods + +

restore

+ +View source + + + +Restore a training checkpoint. + +Restores this `Checkpoint` and any objects it depends on. + +Either assigns values immediately if variables to restore have been created +already, or defers restoration until the variables are created. Dependencies +added after this call will be matched if they have a corresponding object in +the checkpoint (the restore request will queue in any trackable object +waiting for the expected dependency to be added). + +To ensure that loading is complete and no more assignments will take place, +use the `assert_consumed()` method of the status object returned by +`restore`: + +```python +checkpoint = tf.train.Checkpoint( ... ) +checkpoint.restore(path).assert_consumed() +``` + +An exception will be raised if any Python objects in the dependency graph +were not found in the checkpoint, or if any checkpointed values do not have +a matching Python object. + +Name-based tf.compat.v1.train.Saver checkpoints from TensorFlow 1.x can be +loaded +using this method. Names are used to match variables. Re-encode name-based +checkpoints using tf.train.Checkpoint.save as soon as possible. + + + + + + + + + + +
Args
+`save_path` + +The path to the checkpoint, as returned by `save` or +tf.train.latest_checkpoint. If None (as when there is no latest +checkpoint for tf.train.latest_checkpoint to return), returns an +object which may run initializers for objects in the dependency graph. +If the checkpoint was written by the name-based +tf.compat.v1.train.Saver, names are used to match variables. +
+ + + + + + + + + + + +
Returns
+A load status object, which can be used to make assertions about the +status of a checkpoint restoration. + +The returned status object has the following methods: + +* `assert_consumed()`: +Raises an exception if any variables are unmatched: either +checkpointed values which don't have a matching Python object or +Python objects in the dependency graph with no values in the +checkpoint. This method returns the status object, and so may be +chained with other assertions. + +* `assert_existing_objects_matched()`: +Raises an exception if any existing Python objects in the dependency +graph are unmatched. Unlike `assert_consumed`, this assertion will +pass if values in the checkpoint have no corresponding Python +objects. For example a `tf.keras.Layer` object which has not yet been +built, and so has not created any variables, will pass this assertion +but fail `assert_consumed`. Useful when loading part of a larger +checkpoint into a new Python program, e.g. a training checkpoint with +a tf.compat.v1.train.Optimizer was saved but only the state required +for +inference is being loaded. This method returns the status object, and +so may be chained with other assertions. + +* `assert_nontrivial_match()`: Asserts that something aside from the root +object was matched. This is a very weak assertion, but is useful for +sanity checking in library code where objects may exist in the +checkpoint which haven't been created in Python and some Python +objects may not have a checkpointed value. + +* `expect_partial()`: Silence warnings about incomplete checkpoint +restores. Warnings are otherwise printed for unused parts of the +checkpoint file or object when the `Checkpoint` object is deleted +(often at program shutdown). +
+ + + +
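A short sketch of the two common ways to use the returned status object; `model`, `optimizer`, and the checkpoint directory are placeholders:

```python
checkpoint = tf.train.Checkpoint(model=model, optimizer=optimizer)
status = checkpoint.restore(
    tf.train.latest_checkpoint("/tmp/training_checkpoints"))
status.assert_consumed()  # raises if anything in the checkpoint went unmatched

# Inference-only loading: optimizer slots in the checkpoint are intentionally
# unused, so silence the warnings rather than asserting full consumption.
inference_ckpt = tf.train.Checkpoint(model=model)
inference_ckpt.restore(
    tf.train.latest_checkpoint("/tmp/training_checkpoints")).expect_partial()
```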

save

+ +View source + + + +Saves a training checkpoint and provides basic checkpoint management. + +The saved checkpoint includes variables created by this object and any +trackable objects it depends on at the time Checkpoint.save() is +called. + +`save` is a basic convenience wrapper around the `write` method, +sequentially numbering checkpoints using `save_counter` and updating the +metadata used by tf.train.latest_checkpoint. More advanced checkpoint +management, for example garbage collection and custom numbering, may be +provided by other utilities which also wrap `write` +(tf.train.CheckpointManager for example). + + + + + + + + + + +
Args
+`file_prefix` + +A prefix to use for the checkpoint filenames +(/path/to/directory/and_a_prefix). Names are generated based on this +prefix and Checkpoint.save_counter. +
+ + + + + + + + + + + +
Returns
+The full path to the checkpoint. +
+ + + +
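A quick sketch of the numbering behaviour (the prefix is an arbitrary placeholder):

```python
ckpt = tf.train.Checkpoint(step=tf.Variable(1))
path_1 = ckpt.save("/tmp/ckpt_demo/example")   # e.g. "/tmp/ckpt_demo/example-1"
path_2 = ckpt.save("/tmp/ckpt_demo/example")   # e.g. "/tmp/ckpt_demo/example-2"
print(int(ckpt.save_counter))                  # 2
```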

write

+ +View source + + + +Writes a training checkpoint. + +The checkpoint includes variables created by this object and any +trackable objects it depends on at the time Checkpoint.write() is +called. + +`write` does not number checkpoints, increment `save_counter`, or update the +metadata used by tf.train.latest_checkpoint. It is primarily intended for +use by higher level checkpoint management utilities. `save` provides a very +basic implementation of these features. + + + + + + + + + + +
Args
+`file_prefix` + +A prefix to use for the checkpoint filenames +(/path/to/directory/and_a_prefix). +
+ + + + + + + + + + + +
Returns
+The full path to the checkpoint (i.e. `file_prefix`). +
+ + + + + diff --git a/site/en/api_docs/python/tf/train/CheckpointManager.md b/site/en/api_docs/python/tf/train/CheckpointManager.md new file mode 100644 index 00000000000..3e038113902 --- /dev/null +++ b/site/en/api_docs/python/tf/train/CheckpointManager.md @@ -0,0 +1,326 @@ +description: Deletes old checkpoints. + +
+ + + + + +
+ +# tf.train.CheckpointManager + + + + + + + + + +Deletes old checkpoints. + + + + + + + + + + +#### Example usage: + + + +```python +import tensorflow as tf +checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model) +manager = tf.train.CheckpointManager( + checkpoint, directory="/tmp/model", max_to_keep=5) +status = checkpoint.restore(manager.latest_checkpoint) +while True: + # train + manager.save() +``` + +`CheckpointManager` preserves its own state across instantiations (see the +`__init__` documentation for details). Only one should be active in a +particular directory at a time. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`checkpoint` + +The tf.train.Checkpoint instance to save and manage +checkpoints for. +
+`directory` + +The path to a directory in which to write checkpoints. A +special file named "checkpoint" is also written to this directory (in a +human-readable text format) which contains the state of the +`CheckpointManager`. +
+`max_to_keep` + +An integer, the number of checkpoints to keep. Unless +preserved by `keep_checkpoint_every_n_hours`, checkpoints will be +deleted from the active set, oldest first, until only `max_to_keep` +checkpoints remain. If `None`, no checkpoints are deleted and everything +stays in the active set. Note that `max_to_keep=None` will keep all +checkpoint paths in memory and in the checkpoint state protocol buffer +on disk. +
+`keep_checkpoint_every_n_hours` + +Upon removal from the active set, a +checkpoint will be preserved if it has been at least +`keep_checkpoint_every_n_hours` hours since the last preserved checkpoint. The +default setting of `None` does not preserve any checkpoints in this way. +
+`checkpoint_name` + +Custom name for the checkpoint file. +
+`step_counter` + +A tf.Variable instance for checking the current step +counter value, in case users want to save checkpoints every N steps. +
+`checkpoint_interval` + +An integer, indicates the minimum step interval +between two checkpoints. +
+`init_fn` + +Callable. A function to do customized initialization if no +checkpoints are in the directory. +
+ + + + + + + + + + + + +
+`ValueError` + +If `max_to_keep` is not a positive integer. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+`checkpoint` + +Returns the tf.train.Checkpoint object. +
+`checkpoint_interval` + + +
+`checkpoints` + +A list of managed checkpoints. + +Note that checkpoints saved due to `keep_checkpoint_every_n_hours` will not +show up in this list (to avoid ever-growing filename lists). +
+`directory` + + +
+`latest_checkpoint` + +The prefix of the most recent checkpoint in `directory`. + +Equivalent to tf.train.latest_checkpoint(directory) where `directory` is +the constructor argument to `CheckpointManager`. + +Suitable for passing to tf.train.Checkpoint.restore to resume training. +
+ + + +## Methods + +

restore_or_initialize

+ +View source + + + +Restore items in `checkpoint` from the latest checkpoint file. + +This method will first try to restore from the most recent checkpoint in +`directory`. If no checkpoints exist in `directory`, and `init_fn` is +specified, this method will call `init_fn` to do customized +initialization. This can be used to support initialization from pretrained +models. + +Note that unlike tf.train.Checkpoint.restore(), this method doesn't return +a load status object that users can run assertions on +(e.g. assert_consumed()). Thus to run assertions, users should directly use +tf.train.Checkpoint.restore() method. + + + + + + + + + +
Returns
+The restored checkpoint path if the latest checkpoint is found and +restored. Otherwise None. +
+ + + +
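A hedged sketch of this pattern; `build_model()` and the pretrained-weights path are hypothetical stand-ins:

```python
model = build_model()                       # hypothetical model constructor
optimizer = tf.keras.optimizers.Adam()
ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
manager = tf.train.CheckpointManager(
    ckpt, directory="/tmp/model", max_to_keep=3,
    init_fn=lambda: model.load_weights("/tmp/pretrained_weights"))

restored_path = manager.restore_or_initialize()
if restored_path is None:
  print("No checkpoint in /tmp/model; initialization came from init_fn.")
```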

save

+ +View source + + + +Creates a new checkpoint and manages it. + + + + + + + + + + + + + + +
Args
+`checkpoint_number` + +An optional integer, or an integer-dtype `Variable` or +`Tensor`, used to number the checkpoint. If `None` (default), +checkpoints are numbered using `checkpoint.save_counter`. Even if +`checkpoint_number` is provided, `save_counter` is still incremented. A +user-provided `checkpoint_number` is not incremented even if it is a +`Variable`. +
+`check_interval` + +An optional boolean. The argument is only effective when +`checkpoint_interval` is passed into the manager. If `True`, the manager +will only save the checkpoint if the interval between checkpoints is +larger than `checkpoint_interval`. Otherwise it will always save the +checkpoint unless a checkpoint has already been saved for the current +step. +
+ + + + + + + + + + + +
Returns
+The path to the new checkpoint. It is also recorded in the `checkpoints` +and `latest_checkpoint` properties. `None` if no checkpoint is saved. +
+ + + + + diff --git a/site/en/api_docs/python/tf/train/ClusterDef.md b/site/en/api_docs/python/tf/train/ClusterDef.md new file mode 100644 index 00000000000..e3ddb9d0ee0 --- /dev/null +++ b/site/en/api_docs/python/tf/train/ClusterDef.md @@ -0,0 +1,57 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.ClusterDef + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + +
+`job` + +`repeated JobDef job` +
+ + + diff --git a/site/en/api_docs/python/tf/train/ClusterSpec.md b/site/en/api_docs/python/tf/train/ClusterSpec.md new file mode 100644 index 00000000000..213f2b49e86 --- /dev/null +++ b/site/en/api_docs/python/tf/train/ClusterSpec.md @@ -0,0 +1,492 @@ +description: Represents a cluster as a set of "tasks", organized into "jobs". + +
+ + + + + + + + + + + + + +
+ +# tf.train.ClusterSpec + + + + + + + + + +Represents a cluster as a set of "tasks", organized into "jobs". + + + + + + + + + +A tf.train.ClusterSpec represents the set of processes that +participate in a distributed TensorFlow computation. Every +tf.distribute.Server is constructed in a particular cluster. + +To create a cluster with two jobs and five tasks, you specify the +mapping from job names to lists of network addresses (typically +hostname-port pairs). + +```python +cluster = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222", + "worker1.example.com:2222", + "worker2.example.com:2222"], + "ps": ["ps0.example.com:2222", + "ps1.example.com:2222"]}) +``` + +Each job may also be specified as a sparse mapping from task indices +to network addresses. This enables a server to be configured without +needing to know the identity of (for example) all other worker +tasks: + +```python +cluster = tf.train.ClusterSpec({"worker": {1: "worker1.example.com:2222"}, + "ps": ["ps0.example.com:2222", + "ps1.example.com:2222"]}) +``` + + + + + + + + + + +
+`cluster` + +A dictionary mapping one or more job names to (i) a list of +network addresses, or (ii) a dictionary mapping integer task indices to +network addresses; or a tf.train.ClusterDef protocol buffer. +
+ + + + + + + + + + + + +
+`TypeError` + +If `cluster` is not a dictionary mapping strings to lists +of strings, and not a tf.train.ClusterDef protobuf. +
+ + + + + + + + + + + + + + +
+`jobs` + +Returns a list of job names in this cluster. +
+ + + +## Methods + +

as_cluster_def

+ +View source + + + +Returns a tf.train.ClusterDef protocol buffer based on this cluster. + + +

as_dict

+ +View source + + + +Returns a dictionary from job names to their tasks. + +For each job, if the task index space is dense, the corresponding +value will be a list of network addresses; otherwise it will be a +dictionary mapping (sparse) task indices to the corresponding +addresses. + + + + + + + + + +
Returns
+A dictionary mapping job names to lists or dictionaries +describing the tasks in those jobs. +
+ + + +
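For example (addresses are made up), a densely indexed job comes back as a list, while a sparsely indexed job comes back as a dictionary:

```python
dense = tf.train.ClusterSpec({"ps": ["ps0.example.com:2222",
                                     "ps1.example.com:2222"]})
dense.as_dict()   # {'ps': ['ps0.example.com:2222', 'ps1.example.com:2222']}

sparse = tf.train.ClusterSpec({"worker": {1: "worker1.example.com:2222"}})
sparse.as_dict()  # {'worker': {1: 'worker1.example.com:2222'}}
```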

job_tasks

+ +View source + + + +Returns a mapping from task ID to address in the given job. + +NOTE: For backwards compatibility, this method returns a list. If +the given job was defined with a sparse set of task indices, the +length of this list may not reflect the number of tasks defined in +this job. Use the tf.train.ClusterSpec.num_tasks method +to find the number of tasks defined in a particular job. + + + + + + + + + + +
Args
+`job_name` + +The string name of a job in this cluster. +
+ + + + + + + + + + + +
Returns
+A list of task addresses, where the index in the list +corresponds to the task index of each task. The list may contain +`None` if the job was defined with a sparse set of task indices. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `job_name` does not name a job in this cluster. +
+ + + +
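A small sketch of the list behaviour described above (the address is made up):

```python
cluster = tf.train.ClusterSpec({"worker": {1: "worker1.example.com:2222"}})
cluster.job_tasks("worker")   # [None, 'worker1.example.com:2222']
cluster.num_tasks("worker")   # 1 -- use this instead of len() for sparse jobs
```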

num_tasks

+ +View source + + + +Returns the number of tasks defined in the given job. + + + + + + + + + + + +
Args
+`job_name` + +The string name of a job in this cluster. +
+ + + + + + + + + + + +
Returns
+The number of tasks defined in the given job. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `job_name` does not name a job in this cluster. +
+ + + +

task_address

+ +View source + + + +Returns the address of the given task in the given job. + + + + + + + + + + + + + + +
Args
+`job_name` + +The string name of a job in this cluster. +
+`task_index` + +A non-negative integer. +
+ + + + + + + + + + + +
Returns
+The address of the given task in the given job. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `job_name` does not name a job in this cluster, +or no task with index `task_index` is defined in that job. +
+ + + +
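For example (made-up addresses):

```python
cluster = tf.train.ClusterSpec({"ps": ["ps0.example.com:2222",
                                       "ps1.example.com:2222"]})
cluster.task_address("ps", 1)   # 'ps1.example.com:2222'
cluster.task_indices("ps")      # [0, 1]
```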

task_indices

+ +View source + + + +Returns a list of valid task indices in the given job. + + + + + + + + + + + +
Args
+`job_name` + +The string name of a job in this cluster. +
+ + + + + + + + + + + +
Returns
+A list of valid task indices in the given job. +
+ + + + + + + + + + + + +
Raises
+`ValueError` + +If `job_name` does not name a job in this cluster. +
+ + + +

__bool__

+ +View source + + + + + + +

__eq__

+ +View source + + + +Return self==value. + + +

__ne__

+ +View source + + + +Return self!=value. + + +

__nonzero__

+ +View source + + + + + + + + diff --git a/site/en/api_docs/python/tf/train/Coordinator.md b/site/en/api_docs/python/tf/train/Coordinator.md new file mode 100644 index 00000000000..6d623db41cd --- /dev/null +++ b/site/en/api_docs/python/tf/train/Coordinator.md @@ -0,0 +1,474 @@ +description: A coordinator for threads. + +
+ + + + + + + + + + + +
+ +# tf.train.Coordinator + + + + + + + + + +A coordinator for threads. + + + + + + + + + +This class implements a simple mechanism to coordinate the termination of a +set of threads. + +#### Usage: + +```python +# Create a coordinator. +coord = Coordinator() +# Start a number of threads, passing the coordinator to each of them. +...start thread 1...(coord, ...) +...start thread N...(coord, ...) +# Wait for all the threads to terminate. +coord.join(threads) +``` + +Any of the threads can call `coord.request_stop()` to ask for all the threads +to stop. To cooperate with the requests, each thread must check for +`coord.should_stop()` on a regular basis. `coord.should_stop()` returns +`True` as soon as `coord.request_stop()` has been called. + +A typical thread running with a coordinator will do something like: + +```python +while not coord.should_stop(): + ...do some work... +``` + +#### Exception handling: + +A thread can report an exception to the coordinator as part of the +`request_stop()` call. The exception will be re-raised from the +`coord.join()` call. + +#### Thread code: + + + +```python +try: + while not coord.should_stop(): + ...do some work... +except Exception as e: + coord.request_stop(e) +``` + +#### Main code: + + + +```python +try: + ... + coord = Coordinator() + # Start a number of threads, passing the coordinator to each of them. + ...start thread 1...(coord, ...) + ...start thread N...(coord, ...) + # Wait for all the threads to terminate. + coord.join(threads) +except Exception as e: + ...exception that was passed to coord.request_stop() +``` + +To simplify the thread implementation, the Coordinator provides a +context handler `stop_on_exception()` that automatically requests a stop if +an exception is raised. Using the context handler the thread code above +can be written as: + +```python +with coord.stop_on_exception(): + while not coord.should_stop(): + ...do some work... +``` + +#### Grace period for stopping: + +After a thread has called `coord.request_stop()` the other threads have a +fixed time to stop, this is called the 'stop grace period' and defaults to 2 +minutes. If any of the threads is still alive after the grace period expires +`coord.join()` raises a RuntimeError reporting the laggards. + +```python +try: + ... + coord = Coordinator() + # Start a number of threads, passing the coordinator to each of them. + ...start thread 1...(coord, ...) + ...start thread N...(coord, ...) + # Wait for all the threads to terminate, give them 10s grace period + coord.join(threads, stop_grace_period_secs=10) +except RuntimeError: + ...one of the threads took more than 10s to stop after request_stop() + ...was called. +except Exception: + ...exception that was passed to coord.request_stop() +``` + + + + + + + + + + +
+`clean_stop_exception_types` + +Optional tuple of Exception types that should +cause a clean stop of the coordinator. If an exception of one of these +types is reported to `request_stop(ex)` the coordinator will behave as +if `request_stop(None)` was called. Defaults to +`(tf.errors.OutOfRangeError,)` which is used by input queues to signal +the end of input. When feeding training data from a Python iterator it +is common to add `StopIteration` to this list. +
+ + + + + + + + + + + + + + +
+`joined` + + +
+ + + +## Methods + +

clear_stop

+ +View source + + + +Clears the stop flag. + +After this is called, calls to `should_stop()` will return `False`. + +

join

+ +View source + + + +Wait for threads to terminate. + +This call blocks until a set of threads have terminated. The set of thread +is the union of the threads passed in the `threads` argument and the list +of threads that registered with the coordinator by calling +Coordinator.register_thread(). + +After the threads stop, if an `exc_info` was passed to `request_stop`, that +exception is re-raised. + +Grace period handling: When `request_stop()` is called, threads are given +'stop_grace_period_secs' seconds to terminate. If any of them is still +alive after that period expires, a `RuntimeError` is raised. Note that if +an `exc_info` was passed to `request_stop()` then it is raised instead of +that `RuntimeError`. + + + + + + + + + + + + + + + + +
Args
+`threads` + +List of `threading.Threads`. The started threads to join in +addition to the registered threads. +
+`stop_grace_period_secs` + +Number of seconds given to threads to stop after +`request_stop()` has been called. +
+`ignore_live_threads` + +If `False`, raises an error if any of the threads are +still alive after `stop_grace_period_secs`. +
+ + + + + + + + + + + + +
Raises
+`RuntimeError` + +If any thread is still alive after `request_stop()` +is called and the grace period expires. +
+ + + +
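A concrete sketch of this pattern using the standard `threading` module; the worker body is a placeholder:

```python
import threading
import tensorflow as tf

def worker(coord):
  try:
    while not coord.should_stop():
      pass  # ...do some work...
  except Exception as e:  # report any failure to the coordinator
    coord.request_stop(e)

coord = tf.train.Coordinator()
threads = [threading.Thread(target=worker, args=(coord,)) for _ in range(4)]
for t in threads:
  t.start()
coord.request_stop()                            # ask the workers to finish
coord.join(threads, stop_grace_period_secs=10)  # re-raises any reported error
```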

raise_requested_exception

+ +View source + + + +If an exception has been passed to `request_stop`, this raises it. + + +

register_thread

+ +View source + + + +Register a thread to join. + + + + + + + + + + + +
Args
+`thread` + +A Python thread to join. +
+ + + +

request_stop

+ +View source + + + +Request that the threads stop. + +After this is called, calls to `should_stop()` will return `True`. + +Note: If an exception is being passed in, it must be in the context of +handling the exception (i.e. `try: ... except Exception as ex: ...`) and not +a newly created one. + + + + + + + + + + +
Args
+`ex` + +Optional `Exception`, or Python `exc_info` tuple as returned by +`sys.exc_info()`. If this is the first call to `request_stop()` the +corresponding exception is recorded and re-raised from `join()`. +
+ + + +

should_stop

+ +View source + + + +Check if stop was requested. + + + + + + + + + + +
Returns
+True if a stop was requested. +
+ + + +

stop_on_exception

+ +View source + + + +Context manager to request stop when an Exception is raised. + +Code that uses a coordinator must catch exceptions and pass +them to the `request_stop()` method to stop the other threads +managed by the coordinator. + +This context handler simplifies the exception handling. +Use it as follows: + +```python +with coord.stop_on_exception(): + # Any exception raised in the body of the with + # clause is reported to the coordinator before terminating + # the execution of the body. + ...body... +``` + +This is completely equivalent to the slightly longer code: + +```python +try: + ...body... +except: + coord.request_stop(sys.exc_info()) +``` + +#### Yields: + +nothing. + + +

wait_for_stop

+ +View source + + + +Wait till the Coordinator is told to stop. + + + + + + + + + + + +
Args
+`timeout` + +Float. Sleep for up to that many seconds waiting for +should_stop() to become True. +
+ + + + + + + + + + + +
Returns
+True if the Coordinator is told to stop, False if the timeout expired. +
+ + + + + diff --git a/site/en/api_docs/python/tf/train/Example.md b/site/en/api_docs/python/tf/train/Example.md new file mode 100644 index 00000000000..87b9bd21345 --- /dev/null +++ b/site/en/api_docs/python/tf/train/Example.md @@ -0,0 +1,57 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.Example + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + +
+`features` + +`Features features` +
+ + + diff --git a/site/en/api_docs/python/tf/train/ExponentialMovingAverage.md b/site/en/api_docs/python/tf/train/ExponentialMovingAverage.md new file mode 100644 index 00000000000..f28d320a3b6 --- /dev/null +++ b/site/en/api_docs/python/tf/train/ExponentialMovingAverage.md @@ -0,0 +1,435 @@ +description: Maintains moving averages of variables by employing an exponential decay. + +
+ + + + + + + +
+ +# tf.train.ExponentialMovingAverage + + + + + + + + + +Maintains moving averages of variables by employing an exponential decay. + + + + + + + + + +When training a model, it is often beneficial to maintain moving averages of +the trained parameters. Evaluations that use averaged parameters sometimes +produce significantly better results than the final trained values. + +The `apply()` method adds shadow copies of trained variables and add ops that +maintain a moving average of the trained variables in their shadow copies. +It is used when building the training model. The ops that maintain moving +averages are typically run after each training step. +The `average()` and `average_name()` methods give access to the shadow +variables and their names. They are useful when building an evaluation +model, or when restoring a model from a checkpoint file. They help use the +moving averages in place of the last trained values for evaluations. + +The moving averages are computed using exponential decay. You specify the +decay value when creating the `ExponentialMovingAverage` object. The shadow +variables are initialized with the same initial values as the trained +variables. When you run the ops to maintain the moving averages, each +shadow variable is updated with the formula: + + `shadow_variable -= (1 - decay) * (shadow_variable - variable)` + +This is mathematically equivalent to the classic formula below, but the use +of an `assign_sub` op (the `"-="` in the formula) allows concurrent lockless +updates to the variables: + + `shadow_variable = decay * shadow_variable + (1 - decay) * variable` + +Reasonable values for `decay` are close to 1.0, typically in the +multiple-nines range: 0.999, 0.9999, etc. + +Example usage when creating a training model: + +```python +# Create variables. +var0 = tf.Variable(...) +var1 = tf.Variable(...) +# ... use the variables to build a training model... +... +# Create an op that applies the optimizer. This is what we usually +# would use as a training op. +opt_op = opt.minimize(my_loss, [var0, var1]) + +# Create an ExponentialMovingAverage object +ema = tf.train.ExponentialMovingAverage(decay=0.9999) + +with tf.control_dependencies([opt_op]): + # Create the shadow variables, and add ops to maintain moving averages + # of var0 and var1. This also creates an op that will update the moving + # averages after each training step. This is what we will use in place + # of the usual training op. + training_op = ema.apply([var0, var1]) + +...train the model by running training_op... +``` + +There are two ways to use the moving averages for evaluations: + +* Build a model that uses the shadow variables instead of the variables. + For this, use the `average()` method which returns the shadow variable + for a given variable. +* Build a model normally but load the checkpoint files to evaluate by using + the shadow variable names. For this use the `average_name()` method. See + the tf.compat.v1.train.Saver for more + information on restoring saved variables. + +Example of restoring the shadow variable values: + +```python +# Create a Saver that loads variables from their saved shadow values. +shadow_var0_name = ema.average_name(var0) +shadow_var1_name = ema.average_name(var1) +saver = tf.compat.v1.train.Saver({shadow_var0_name: var0, shadow_var1_name: +var1}) +saver.restore(...checkpoint filename...) +# var0 and var1 now hold the moving average values +``` + + + + + + + + + + + + + + + + + + + +
+`decay` + +Float. The decay to use. +
+`num_updates` + +Optional count of number of updates applied to variables. +
+`zero_debias` + +If `True`, zero debias moving-averages that are initialized +with tensors. +
+`name` + +String. Optional prefix name to use for the name of ops added in +`apply()`. +
+ + + + + + + + + + + + + + +
+`name` + +The name of this ExponentialMovingAverage object. +
+ + + +## Methods + +

apply

+ +View source + + + +Maintains moving averages of variables. + +`var_list` must be a list of `Variable` or `Tensor` objects. This method +creates shadow variables for all elements of `var_list`. Shadow variables +for `Variable` objects are initialized to the variable's initial value. +They will be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection. +For `Tensor` objects, the shadow variables are initialized to 0 and zero +debiased (see docstring in `assign_moving_average` for more details). + +shadow variables are created with `trainable=False` and added to the +`GraphKeys.ALL_VARIABLES` collection. They will be returned by calls to +tf.compat.v1.global_variables(). + +Returns an op that updates all shadow variables from the current value of +their associated variables. + +Note that `apply()` can be called multiple times. When eager execution is +enabled each call to apply will update the variables once, so this needs to +be called in a loop. + + + + + + + + + + +
Args
+`var_list` + +A list of Variable or Tensor objects. The variables and Tensors +must be of types bfloat16, float16, float32, or float64. +
+ + + + + + + + + + + +
Returns
+An Operation that updates the moving averages. +
+ + + + + + + + + + + + +
Raises
+`TypeError` + +If the arguments are not an allowed type. +
+ + + +

average

+ +View source + + + +Returns the `Variable` holding the average of `var`. + + + + + + + + + + + +
Args
+`var` + +A `Variable` object. +
+ + + + + + + + + + + +
Returns
+A `Variable` object or `None` if the moving average of `var` +is not maintained. +
+ + + +
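A minimal sketch, assuming eager execution:

```python
var = tf.Variable(0.0)
ema = tf.train.ExponentialMovingAverage(decay=0.9)
ema.apply([var])                  # creates the shadow variable (value 0.0)
var.assign(10.0)
ema.apply([var])                  # shadow -= (1 - 0.9) * (shadow - var)
print(ema.average(var).numpy())   # 1.0
```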

average_name

+ +View source + + + +Returns the name of the `Variable` holding the average for `var`. + +The typical scenario for `ExponentialMovingAverage` is to compute moving +averages of variables during training, and restore the variables from the +computed moving averages during evaluations. + +To restore variables, you have to know the name of the shadow variables. +That name and the original variable can then be passed to a `Saver()` object +to restore the variable from the moving average value with: + `saver = tf.compat.v1.train.Saver({ema.average_name(var): var})` + +`average_name()` can be called whether or not `apply()` has been called. + + + + + + + + + + +
Args
+`var` + +A `Variable` object. +
+ + + + + + + + + + + +
Returns
+A string: The name of the variable that will be used or was used +by the `ExponentialMovingAverage class` to hold the moving average of +`var`. +
+ + + +

variables_to_restore

+ +View source + + + +Returns a map of names to `Variables` to restore. + +If a variable has a moving average, use the moving average variable name as +the restore name; otherwise, use the variable name. + +For example, + +```python + variables_to_restore = ema.variables_to_restore() + saver = tf.compat.v1.train.Saver(variables_to_restore) +``` + +Below is an example of such mapping: + +``` + conv/batchnorm/gamma/ExponentialMovingAverage: conv/batchnorm/gamma, + conv_4/conv2d_params/ExponentialMovingAverage: conv_4/conv2d_params, + global_step: global_step +``` + + + + + + + + + + +
Args
+`moving_avg_variables` + +A list of variables that require the use of the +moving average variable name to be restored. If None, it will default to +variables.moving_average_variables() + variables.trainable_variables() +
+ + + + + + + + + + + +
Returns
+A map from restore_names to variables. The restore_name is either the +original or the moving average version of the variable name, depending +on whether the variable name is in the `moving_avg_variables`. +
+ + + + + diff --git a/site/en/api_docs/python/tf/train/Feature.md b/site/en/api_docs/python/tf/train/Feature.md new file mode 100644 index 00000000000..93a6993c3da --- /dev/null +++ b/site/en/api_docs/python/tf/train/Feature.md @@ -0,0 +1,71 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.Feature + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + +
+`bytes_list` + +`BytesList bytes_list` +
+`float_list` + +`FloatList float_list` +
+`int64_list` + +`Int64List int64_list` +
+ + + diff --git a/site/en/api_docs/python/tf/train/FeatureList.md b/site/en/api_docs/python/tf/train/FeatureList.md new file mode 100644 index 00000000000..754815d2f00 --- /dev/null +++ b/site/en/api_docs/python/tf/train/FeatureList.md @@ -0,0 +1,57 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.FeatureList + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + +
+`feature` + +`repeated Feature feature` +
+ + + diff --git a/site/en/api_docs/python/tf/train/FeatureLists.md b/site/en/api_docs/python/tf/train/FeatureLists.md new file mode 100644 index 00000000000..4c968c263ef --- /dev/null +++ b/site/en/api_docs/python/tf/train/FeatureLists.md @@ -0,0 +1,61 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.train.FeatureLists + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + +
+`feature_list` + +`repeated FeatureListEntry feature_list` +
+ + + +## Child Classes +[`class FeatureListEntry`](../../tf/train/FeatureLists/FeatureListEntry.md) + diff --git a/site/en/api_docs/python/tf/train/FeatureLists/FeatureListEntry.md b/site/en/api_docs/python/tf/train/FeatureLists/FeatureListEntry.md new file mode 100644 index 00000000000..26f6ab15435 --- /dev/null +++ b/site/en/api_docs/python/tf/train/FeatureLists/FeatureListEntry.md @@ -0,0 +1,64 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.FeatureLists.FeatureListEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + +
+`key` + +`string key` +
+`value` + +`FeatureList value` +
+ + + diff --git a/site/en/api_docs/python/tf/train/Features.md b/site/en/api_docs/python/tf/train/Features.md new file mode 100644 index 00000000000..ad1a106d11e --- /dev/null +++ b/site/en/api_docs/python/tf/train/Features.md @@ -0,0 +1,61 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.train.Features + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + +
+`feature` + +`repeated FeatureEntry feature` +
+ + + +## Child Classes +[`class FeatureEntry`](../../tf/train/Features/FeatureEntry.md) + diff --git a/site/en/api_docs/python/tf/train/Features/FeatureEntry.md b/site/en/api_docs/python/tf/train/Features/FeatureEntry.md new file mode 100644 index 00000000000..5701b4e6979 --- /dev/null +++ b/site/en/api_docs/python/tf/train/Features/FeatureEntry.md @@ -0,0 +1,64 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.Features.FeatureEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + +
+`key` + +`string key` +
+`value` + +`Feature value` +
+ + + diff --git a/site/en/api_docs/python/tf/train/FloatList.md b/site/en/api_docs/python/tf/train/FloatList.md new file mode 100644 index 00000000000..b1fec90b2ea --- /dev/null +++ b/site/en/api_docs/python/tf/train/FloatList.md @@ -0,0 +1,57 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.FloatList + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + +
+`value` + +`repeated float value` +
+ + + diff --git a/site/en/api_docs/python/tf/train/Int64List.md b/site/en/api_docs/python/tf/train/Int64List.md new file mode 100644 index 00000000000..3e743635261 --- /dev/null +++ b/site/en/api_docs/python/tf/train/Int64List.md @@ -0,0 +1,57 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.Int64List + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + +
+`value` + +`repeated int64 value` +
+ + + diff --git a/site/en/api_docs/python/tf/train/JobDef.md b/site/en/api_docs/python/tf/train/JobDef.md new file mode 100644 index 00000000000..01eb2a33867 --- /dev/null +++ b/site/en/api_docs/python/tf/train/JobDef.md @@ -0,0 +1,68 @@ +description: A ProtocolMessage + +
+ + + +
+ +# tf.train.JobDef + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + +
+`name` + +`string name` +
+`tasks` + +`repeated TasksEntry tasks` +
+ + + +## Child Classes +[`class TasksEntry`](../../tf/train/JobDef/TasksEntry.md) + diff --git a/site/en/api_docs/python/tf/train/JobDef/TasksEntry.md b/site/en/api_docs/python/tf/train/JobDef/TasksEntry.md new file mode 100644 index 00000000000..6a6acbc080c --- /dev/null +++ b/site/en/api_docs/python/tf/train/JobDef/TasksEntry.md @@ -0,0 +1,64 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.JobDef.TasksEntry + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + +
+`key` + +`int32 key` +
+`value` + +`string value` +
+ + + diff --git a/site/en/api_docs/python/tf/train/SequenceExample.md b/site/en/api_docs/python/tf/train/SequenceExample.md new file mode 100644 index 00000000000..fb1f2ec62db --- /dev/null +++ b/site/en/api_docs/python/tf/train/SequenceExample.md @@ -0,0 +1,64 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.SequenceExample + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + +
+`context` + +`Features context` +
+`feature_lists` + +`FeatureLists feature_lists` +
+ + + diff --git a/site/en/api_docs/python/tf/train/ServerDef.md b/site/en/api_docs/python/tf/train/ServerDef.md new file mode 100644 index 00000000000..cb4240e6bb0 --- /dev/null +++ b/site/en/api_docs/python/tf/train/ServerDef.md @@ -0,0 +1,99 @@ +description: A ProtocolMessage + +
+ + +
+ +# tf.train.ServerDef + + + + + + + + + +A ProtocolMessage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+`cluster` + +`ClusterDef cluster` +
+`cluster_device_filters` + +`ClusterDeviceFilters cluster_device_filters` +
+`default_session_config` + +`ConfigProto default_session_config` +
+`job_name` + +`string job_name` +
+`port` + +`int32 port` +
+`protocol` + +`string protocol` +
+`task_index` + +`int32 task_index` +
+ + + diff --git a/site/en/api_docs/python/tf/train/checkpoints_iterator.md b/site/en/api_docs/python/tf/train/checkpoints_iterator.md new file mode 100644 index 00000000000..e286cab2a39 --- /dev/null +++ b/site/en/api_docs/python/tf/train/checkpoints_iterator.md @@ -0,0 +1,113 @@ +description: Continuously yield new checkpoint files as they appear. + +
+ + +
+ +# tf.train.checkpoints_iterator + + + + + + + + + +Continuously yield new checkpoint files as they appear. + + + + + + + + + +The iterator only checks for new checkpoints when control flow has been +reverted to it. This means it can miss checkpoints if your code takes longer +to run between iterations than `min_interval_secs` or the interval at which +new checkpoints are written. + +The `timeout` argument is the maximum number of seconds to block waiting for +a new checkpoint. It is used in combination with the `timeout_fn` as +follows: + +* If the timeout expires and no `timeout_fn` was specified, the iterator + stops yielding. +* If a `timeout_fn` was specified, that function is called and if it returns + a true boolean value the iterator stops yielding. +* If the function returns a false boolean value then the iterator resumes the + wait for new checkpoints. At this point the timeout logic applies again. + +This behavior gives control to callers on what to do if checkpoints do not +come fast enough or stop being generated. For example, if callers have a way +to detect that the training has stopped and know that no new checkpoints +will be generated, they can provide a `timeout_fn` that returns `True` when +the training has stopped. If they know that the training is still going on +they return `False` instead. + + + + + + + + + + + + + + + + + + + +
+`checkpoint_dir` + +The directory in which checkpoints are saved. +
+`min_interval_secs` + +The minimum number of seconds between yielding +checkpoints. +
+`timeout` + +The maximum number of seconds to wait between checkpoints. If left +as `None`, then the process will wait indefinitely. +
+`timeout_fn` + +Optional function to call after a timeout. If the function +returns True, then it means that no new checkpoints will be generated and +the iterator will exit. The function is called with no arguments. +
+ + + +#### Yields: + +String paths to latest checkpoint files as they arrive. diff --git a/site/en/api_docs/python/tf/train/experimental.md b/site/en/api_docs/python/tf/train/experimental.md new file mode 100644 index 00000000000..cd8d5ecb946 --- /dev/null +++ b/site/en/api_docs/python/tf/train/experimental.md @@ -0,0 +1,37 @@ +description: Public API for tf.train.experimental namespace. + +
+ + +
+ +# Module: tf.train.experimental + + + + + + + + + +Public API for tf.train.experimental namespace. + + + +## Classes + +[`class DynamicLossScale`](../../tf/mixed_precision/experimental/DynamicLossScale.md): Loss scale that dynamically adjusts itself. + +[`class FixedLossScale`](../../tf/mixed_precision/experimental/FixedLossScale.md): Loss scale with a fixed value. + +[`class LossScale`](../../tf/mixed_precision/experimental/LossScale.md): Base class for all loss scales. + +[`class PythonState`](../../tf/train/experimental/PythonState.md): A mixin for putting Python state in an object-based checkpoint. + +## Functions + +[`disable_mixed_precision_graph_rewrite(...)`](../../tf/train/experimental/disable_mixed_precision_graph_rewrite.md): Disables the mixed precision graph rewrite. + +[`enable_mixed_precision_graph_rewrite(...)`](../../tf/train/experimental/enable_mixed_precision_graph_rewrite.md): Enable mixed precision via a graph rewrite. + diff --git a/site/en/api_docs/python/tf/train/experimental/PythonState.md b/site/en/api_docs/python/tf/train/experimental/PythonState.md new file mode 100644 index 00000000000..fe660bf0b5b --- /dev/null +++ b/site/en/api_docs/python/tf/train/experimental/PythonState.md @@ -0,0 +1,111 @@ +description: A mixin for putting Python state in an object-based checkpoint. + +
+ + + + +
+ +# tf.train.experimental.PythonState + + + + + + + + + +A mixin for putting Python state in an object-based checkpoint. + + + + + +This is an abstract class which allows extensions to TensorFlow's object-based +checkpointing (see tf.train.Checkpoint). For example a wrapper for NumPy +arrays: + +```python +import io +import numpy + +class NumpyWrapper(tf.train.experimental.PythonState): + + def __init__(self, array): + self.array = array + + def serialize(self): + string_file = io.BytesIO() + try: + numpy.save(string_file, self.array, allow_pickle=False) + serialized = string_file.getvalue() + finally: + string_file.close() + return serialized + + def deserialize(self, string_value): + string_file = io.BytesIO(string_value) + try: + self.array = numpy.load(string_file, allow_pickle=False) + finally: + string_file.close() +``` + +Instances of `NumpyWrapper` are checkpointable objects, and will be saved and +restored from checkpoints along with TensorFlow state like variables. + +```python +root = tf.train.Checkpoint(numpy=NumpyWrapper(numpy.array([1.]))) +save_path = root.save(prefix) +root.numpy.array *= 2. +assert [2.] == root.numpy.array +root.restore(save_path) +assert [1.] == root.numpy.array +``` + +## Methods + +

deserialize

+ +View source + + + +Callback to deserialize the object. + + +

serialize

+ +View source + + + +Callback to serialize the object. Returns a string. + + + + diff --git a/site/en/api_docs/python/tf/train/experimental/disable_mixed_precision_graph_rewrite.md b/site/en/api_docs/python/tf/train/experimental/disable_mixed_precision_graph_rewrite.md new file mode 100644 index 00000000000..cb374a10080 --- /dev/null +++ b/site/en/api_docs/python/tf/train/experimental/disable_mixed_precision_graph_rewrite.md @@ -0,0 +1,44 @@ +description: Disables the mixed precision graph rewrite. + +
+ + +
+ +# tf.train.experimental.disable_mixed_precision_graph_rewrite + + + + + + + + + +Disables the mixed precision graph rewrite. + + + + + + + +After this is called, the mixed precision graph rewrite will no longer run for +tf.functions, and so float32 operations will no longer be converted to +float16. + +This does not undo the effects of loss scaling. Any optimizers wrapped with a +LossScaleOptimizer will continue to do loss scaling, although this loss +scaling will no longer be useful, as the graph rewrite no longer converts +tf.functions to use float16. + +This function is useful for unit testing. A unit test can test using the mixed +precision graph rewrite, then disable it so future unit tests continue using +float32. \ No newline at end of file diff --git a/site/en/api_docs/python/tf/train/experimental/enable_mixed_precision_graph_rewrite.md b/site/en/api_docs/python/tf/train/experimental/enable_mixed_precision_graph_rewrite.md new file mode 100644 index 00000000000..df717a7f938 --- /dev/null +++ b/site/en/api_docs/python/tf/train/experimental/enable_mixed_precision_graph_rewrite.md @@ -0,0 +1,213 @@ +description: Enable mixed precision via a graph rewrite. + +
+ + +
+ +# tf.train.experimental.enable_mixed_precision_graph_rewrite + + + + + + + + + +Enable mixed precision via a graph rewrite. + + + + + + + +Mixed precision is the use of both float32 and float16 data types when +training a model to improve performance. This is achieved via a graph rewrite +operation and a loss-scale optimizer. + +Performing arithmetic operations in float16 takes advantage of specialized +processing units, such as NVIDIA Tensor Cores, for much higher arithmetic +throughput. However, due to the smaller representable range, performing the +entire training with float16 can result in gradient underflow, that is, small +gradient values becoming zeroes. Instead, performing only select arithmetic +operations in float16 results in higher throughput and decreased training +time when using compatible hardware accelerators while also reducing memory +usage, typically without sacrificing model accuracy. + +Note: While the mixed precision rewrite changes the datatype of various +layers throughout the model, the same accuracy reached in float32 is +expected. If a `NaN` gradient occurs with dynamic loss scaling, the model +update for that batch is skipped. In this case, the global step count is not +incremented, and the `LossScaleOptimizer` attempts to decrease the loss +scaling value to avoid `NaN` values in subsequent iterations. This approach +has been shown to achieve the same accuracy as float32 and, in most cases, +better training throughput. + +#### Example: + + + +```python +model = tf.keras.models.Sequential([ + tf.keras.layers.Dense(64, activation='relu'), + tf.keras.layers.Dense(64, activation='softmax'), +]) + +opt = tf.keras.optimizers.SGD() +opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt) +model.compile(loss="mse", optimizer=opt) + +x_train = np.random.random((1024, 64)) +y_train = np.random.random((1024, 64)) +model.fit(x_train, y_train) +``` + +Calling `enable_mixed_precision_graph_rewrite(opt)` enables the graph rewrite +operation before computing gradients. The function additionally returns an +`Optimizer` (`opt`) wrapped with a `LossScaleOptimizer`. This prevents +underflow in the float16 tensors during the backward pass. An optimizer of +type tf.keras.optimizers.Optimizer or tf.compat.v1.train.Optimizer must be +passed to this function, which will then be wrapped to use loss scaling. + +The graph rewrite operation changes the dtype of certain operations in the +graph from float32 to float16. There are several categories of operations +that are either included or excluded by this rewrite operation. The following +categories of Ops are defined inside corresponding functions under the class +`AutoMixedPrecisionLists` in + +auto_mixed_precision_lists.h: + +* `ClearList`: Ops that do not have numerically significant adverse effects. +E.g. `ArgMax` and `Floor`. +* `WhiteList`: Ops that are considered numerically safe for execution in +float16, and thus are always converted. E.g. `Conv2D`. +* `BlackList`: Ops that are numerically unsafe to execute in float16 and +can negatively affect downstream nodes. E.g. `Softmax`. +* `GrayList`: Ops that are considered numerically safe for execution in +float16 unless downstream from a BlackList Op. E.g. `Add` and `AvgPool`. + +When this function is used, gradients should be computed and applied with the +returned optimizer, either by calling `opt.minimize()` or +`opt.compute_gradients()` followed by `opt.apply_gradients()`. 
If gradients +are instead computed with tf.gradients or tf.GradientTape, loss scaling +will not be applied, which will likely cause your model not to converge due to +float16 underflow problems. To apply lossing scaling with tf.gradients or +tf.GradientTape, LossScaleOptimizer.get_scaled_loss and +LossScaleOptimizer.get_unscaled_gradients. See +keras.mixed_precision.experimental.LossScaleOptimizer for details how to do +this. + +When eager execution is enabled, the mixed precision graph rewrite is only +enabled within tf.functions, as outside tf.functions, there is no graph. + +For NVIDIA GPUs with Tensor cores, as a general performance guide, dimensions +(such as batch size, input size, output size, and channel counts) +should be powers of two if under 256, or otherwise divisible by 8 if above +256. For more information, check out the +[NVIDIA Deep Learning Performance Guide]( +https://docs.nvidia.com/deeplearning/sdk/dl-performance-guide/index.html). + +Currently, mixed precision is only enabled on NVIDIA Tensor Core GPUs with +Compute Capability 7.0 and above (Volta, Turing, or newer architectures). The +parts of the graph on CPUs and TPUs are untouched by the graph rewrite. + +## Comparison with the Keras mixed precision API +Both this function and the [Keras mixed precision +API](https://www.tensorflow.org/guide/keras/mixed_precision) enable the use of +mixed precision in a model. Therefore, only one of the two APIs can be used. +We recommend using the Keras mixed precision API, as it is more customizable +and supports Eager execution. However, it only supports models which use Keras +layers, while the graph rewrite works in any model that uses tf.functions. + +The core difference between the two APIs is that this function is a graph +rewrite, and so it changes the graph to use mixed precision under the hood. +You still build your graph in float32, and the graph rewrite will change +certain ops to float16. The Keras mixed precision API directly builds the +Keras Model using a mix of float16 and float32. + +One core advantage of the Keras API is it supports mixed precision with Eager +execution, i.e. mixed precision outside tf.functions. The graph rewrite will +only affect ops within tf.functions, making it harder to debug if issues +occur with mixed precision. The Keras API is also more customizable, as you +can override any layer to run in float32 by passing `dtype="float32"` to the +layer constructor. Additionally, you can query the dtype of tensors in the +model by checking `tensor.dtype`. With the graph rewrite, all tensors appear +to be float32 since the dtype is only changed under the hood. + +The main advantage of the graph rewrite (this function) is that it works even +if you do not use Keras layers or any other part of Keras. The Keras mixed +precision API requires models which use Keras layers, as it only inserts casts +inside Keras layers and models. Another advantage is that the graph rewrite +never results in a TypeError, which the Keras API may introduce if you do +certain operations outside Keras. For example, the following will result in a +TypeError if the Keras mixed precision API is enabled, as a float16 and +float32 tensor will be added: +`tf.keras.layers.Dense(2)(x) + tf.keras.layers.Dense(2, dtype="float32")(x)` + + + + + + + + + +
+`ValueError`, if the tf.keras.mixed_precision API is also used by calling +tf.keras.mixed_precision.experimental.set_policy. Only one mixed precision +API can be used. +
+ + + + + + + + + + + + + + + +
+`opt` + +An instance of a tf.keras.optimizers.Optimizer. +
+`loss_scale` + +Either an int/float, the string `"dynamic"`, or an instance of a +tf.mixed_precision.experimental.LossScale. The loss scale to use. It is +recommended to keep this as its default value of `"dynamic"`, which will +adjust the scaling automatically to prevent `Inf` or `NaN` values. +
+ + + + + + + + + + + +
+A version of `opt` that will use loss scaling to prevent underflow. +
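+
+The returned optimizer can also drive a custom training step. Below is a small,
+hedged sketch (not part of the original reference; the toy variable and the
+constants are illustrative only) that calls `minimize()` with a loss callable
+inside a `tf.function`, which is where the graph rewrite takes effect:
+
+```python
+import tensorflow as tf
+
+opt = tf.keras.optimizers.SGD(0.01)
+# Wrap the optimizer with loss scaling and enable the graph rewrite.
+opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
+
+w = tf.Variable(2.0)  # toy parameter, purely for illustration
+
+@tf.function
+def train_step(x, y):
+  # The loss is passed as a callable so the LossScaleOptimizer can scale it
+  # before gradients are computed, preventing float16 underflow.
+  loss_fn = lambda: tf.reduce_mean(tf.square(w * x - y))
+  opt.minimize(loss_fn, var_list=[w])
+  return loss_fn()
+
+print(train_step(tf.constant(3.0), tf.constant(9.0)).numpy())
+```
+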
+ diff --git a/site/en/api_docs/python/tf/train/get_checkpoint_state.md b/site/en/api_docs/python/tf/train/get_checkpoint_state.md new file mode 100644 index 00000000000..dadc346ab74 --- /dev/null +++ b/site/en/api_docs/python/tf/train/get_checkpoint_state.md @@ -0,0 +1,103 @@ +description: Returns CheckpointState proto from the "checkpoint" file. + +
+ + +
+ +# tf.train.get_checkpoint_state + + + + + + + + + +Returns CheckpointState proto from the "checkpoint" file. + + + + + + + + + +If the "checkpoint" file contains a valid CheckpointState +proto, returns it. + + + + + + + + + + + + + +
+`checkpoint_dir` + +The directory of checkpoints. +
+`latest_filename`
+
+Optional name of the checkpoint file. Defaults to
+'checkpoint'.
+
+ + + + + + + + + + + +
+A CheckpointState if the state was available, None +otherwise. +
+ + + + + + + + + + + + +
+`ValueError` + +if the checkpoint read doesn't have model_checkpoint_path set. +
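+
+As a quick usage sketch (the directory path below is only a placeholder), the
+returned proto exposes the most recent checkpoint prefix as well as all
+retained ones:
+
+```python
+import tensorflow as tf
+
+state = tf.train.get_checkpoint_state("/tmp/my_model_dir")  # hypothetical path
+if state is not None:
+  print(state.model_checkpoint_path)        # most recent checkpoint prefix
+  print(state.all_model_checkpoint_paths)   # every retained checkpoint prefix
+```
+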
+ diff --git a/site/en/api_docs/python/tf/train/latest_checkpoint.md b/site/en/api_docs/python/tf/train/latest_checkpoint.md new file mode 100644 index 00000000000..3ae35c3fce2 --- /dev/null +++ b/site/en/api_docs/python/tf/train/latest_checkpoint.md @@ -0,0 +1,93 @@ +description: Finds the filename of latest saved checkpoint file. + +
+ + +
+
+# tf.train.latest_checkpoint
+
+
+
+Finds the filename of the latest saved checkpoint file.
+
+
+
+Gets the checkpoint state given the provided checkpoint_dir and looks for a
+corresponding TensorFlow 2 (preferred) or TensorFlow 1.x checkpoint path.
+The latest_filename argument is only applicable if you are saving checkpoints
+using `v1.Saver.save`.
+
+See the [Training Checkpoints
+Guide](https://www.tensorflow.org/guide/checkpoint) for more details and
+examples.
+
+`checkpoint_dir` + +Directory where the variables were saved. +
+`latest_filename` + +Optional name for the protocol buffer file that +contains the list of most recent checkpoint filenames. +See the corresponding argument to `v1.Saver.save`. +
+ + + + + + + + + + + +
+The full path to the latest checkpoint or `None` if no checkpoint was found. +
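+
+A minimal round trip, assuming the object-based tf.train.Checkpoint API and a
+scratch directory, might look like this:
+
+```python
+import tensorflow as tf
+
+ckpt = tf.train.Checkpoint(step=tf.Variable(1))
+ckpt.save("/tmp/ckpt_demo/ckpt")                # writes ckpt-1.* plus "checkpoint"
+
+path = tf.train.latest_checkpoint("/tmp/ckpt_demo")
+print(path)                                     # e.g. "/tmp/ckpt_demo/ckpt-1"
+ckpt.restore(path)
+```
+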
+ diff --git a/site/en/api_docs/python/tf/train/list_variables.md b/site/en/api_docs/python/tf/train/list_variables.md new file mode 100644 index 00000000000..43980e0f569 --- /dev/null +++ b/site/en/api_docs/python/tf/train/list_variables.md @@ -0,0 +1,75 @@ +description: Returns list of all variables in the checkpoint. + +
+ + +
+ +# tf.train.list_variables + + + + + + + + + +Returns list of all variables in the checkpoint. + + + + + + + + + + + + + + + + + + + +
+`ckpt_dir_or_file` + +Directory with checkpoints file or path to checkpoint. +
+ + + + + + + + + + + +
+List of tuples `(name, shape)`. +
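+
+For example, a sketch with a throwaway checkpoint (the path is arbitrary):
+
+```python
+import tensorflow as tf
+
+ckpt = tf.train.Checkpoint(w=tf.Variable(tf.zeros([3, 2])))
+ckpt.save("/tmp/list_demo/ckpt")
+
+for name, shape in tf.train.list_variables("/tmp/list_demo"):
+  print(name, shape)   # e.g. "w/.ATTRIBUTES/VARIABLE_VALUE" [3, 2]
+```
+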
+ diff --git a/site/en/api_docs/python/tf/train/load_checkpoint.md b/site/en/api_docs/python/tf/train/load_checkpoint.md new file mode 100644 index 00000000000..62a2305e82a --- /dev/null +++ b/site/en/api_docs/python/tf/train/load_checkpoint.md @@ -0,0 +1,96 @@ +description: Returns CheckpointReader for checkpoint found in ckpt_dir_or_file. + +
+ + +
+ +# tf.train.load_checkpoint + + + + + + + + + +Returns `CheckpointReader` for checkpoint found in `ckpt_dir_or_file`. + + + + + + + + + +If `ckpt_dir_or_file` resolves to a directory with multiple checkpoints, +reader for the latest checkpoint is returned. + + + + + + + + + + +
+`ckpt_dir_or_file` + +Directory with checkpoints file or path to checkpoint +file. +
+ + + + + + + + + + + +
+`CheckpointReader` object. +
+ + + + + + + + + + + + +
+`ValueError` + +If `ckpt_dir_or_file` resolves to a directory with no +checkpoints. +
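+
+A typical inspection loop over the returned reader (the directory name is
+assumed here) looks like:
+
+```python
+import tensorflow as tf
+
+reader = tf.train.load_checkpoint("/tmp/list_demo")   # hypothetical path
+shape_map = reader.get_variable_to_shape_map()
+dtype_map = reader.get_variable_to_dtype_map()
+for name in shape_map:
+  print(name, shape_map[name], dtype_map[name])
+  value = reader.get_tensor(name)   # numpy array holding the stored value
+```
+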
+ diff --git a/site/en/api_docs/python/tf/train/load_variable.md b/site/en/api_docs/python/tf/train/load_variable.md new file mode 100644 index 00000000000..59db7650b04 --- /dev/null +++ b/site/en/api_docs/python/tf/train/load_variable.md @@ -0,0 +1,82 @@ +description: Returns the tensor value of the given variable in the checkpoint. + +
+ + +
+ +# tf.train.load_variable + + + + + + + + + +Returns the tensor value of the given variable in the checkpoint. + + + + + + + + + + + + + + + + + + + + + + +
+`ckpt_dir_or_file` + +Directory with checkpoints file or path to checkpoint. +
+`name` + +Name of the variable to return. +
+ + + + + + + + + + + +
+A numpy `ndarray` with a copy of the value of this variable. +
+ diff --git a/site/en/api_docs/python/tf/transpose.md b/site/en/api_docs/python/tf/transpose.md new file mode 100644 index 00000000000..1d8a9208361 --- /dev/null +++ b/site/en/api_docs/python/tf/transpose.md @@ -0,0 +1,161 @@ +description: Transposes a, where a is a Tensor. + +
+ + +
+
+# tf.transpose
+
+
+
+Transposes `a`, where `a` is a Tensor.
+
+
+
+Permutes the dimensions according to the value of `perm`.
+
+The returned tensor's dimension `i` will correspond to the input dimension
+`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is the rank
+of the input tensor. Hence by default, this operation performs a regular
+matrix transpose on 2-D input Tensors.
+
+If conjugate is `True` and `a.dtype` is either `complex64` or `complex128`
+then the values of `a` are conjugated and transposed.
+
+#### For example:
+
+
+
+```
+>>> x = tf.constant([[1, 2, 3], [4, 5, 6]])
+>>> tf.transpose(x)
+
+```
+
+Equivalently, you could call `tf.transpose(x, perm=[1, 0])`.
+
+If `x` is complex, setting conjugate=True gives the conjugate transpose:
+
+```
+>>> x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],
+...                  [4 + 4j, 5 + 5j, 6 + 6j]])
+>>> tf.transpose(x, conjugate=True)
+
+```
+
+`perm` is more useful for n-dimensional tensors where n > 2:
+
+```
+>>> x = tf.constant([[[ 1,  2,  3],
+...                   [ 4,  5,  6]],
+...                  [[ 7,  8,  9],
+...                   [10, 11, 12]]])
+```
+
+As above, simply calling tf.transpose will default to `perm=[2,1,0]`.
+
+To take the transpose of the matrices in dimension-0 (such as when you are
+transposing matrices where 0 is the batch dimension), you would set
+`perm=[0,2,1]`.
+
+```
+>>> tf.transpose(x, perm=[0, 2, 1])
+
+```
+
+Note: This has a shorthand, tf.linalg.matrix_transpose.
+
+`a` + +A `Tensor`. +
+`perm` + +A permutation of the dimensions of `a`. This should be a vector. +
+`conjugate` + +Optional bool. Setting it to `True` is mathematically equivalent +to tf.math.conj(tf.transpose(input)). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A transposed `Tensor`. +
+ + + +#### Numpy Compatibility +In `numpy` transposes are memory-efficient constant time operations as they +simply return a new view of the same data with adjusted `strides`. + +TensorFlow does not support strides, so `transpose` returns a new tensor with +the items permuted. + diff --git a/site/en/api_docs/python/tf/truncatediv.md b/site/en/api_docs/python/tf/truncatediv.md new file mode 100644 index 00000000000..4bb3c70262b --- /dev/null +++ b/site/en/api_docs/python/tf/truncatediv.md @@ -0,0 +1,91 @@ +description: Returns x / y element-wise for integer types. + +
+ + +
+ +# tf.truncatediv + + + + + + + + + +Returns x / y element-wise for integer types. + + + + + + + + + +Truncation designates that negative numbers will round fractional quantities +toward zero. I.e. -7 / 5 = -1. This matches C semantics but it is different +than Python semantics. See `FloorDiv` for a division function that matches +Python Semantics. + +*NOTE*: `truncatediv` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
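+
+A short runnable illustration of the truncation behaviour compared with
+Python-style floor division (values chosen only to show the sign difference):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-7, 7])
+y = tf.constant([5, 5])
+print(tf.truncatediv(x, y).numpy())    # [-1  1]  rounds toward zero (C semantics)
+print(tf.math.floordiv(x, y).numpy())  # [-2  1]  rounds toward -inf (Python //)
+```
+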
+ diff --git a/site/en/api_docs/python/tf/truncatemod.md b/site/en/api_docs/python/tf/truncatemod.md new file mode 100644 index 00000000000..204439e1929 --- /dev/null +++ b/site/en/api_docs/python/tf/truncatemod.md @@ -0,0 +1,89 @@ +description: Returns element-wise remainder of division. This emulates C semantics in that + +
+ + +
+ +# tf.truncatemod + + + + + + + + + +Returns element-wise remainder of division. This emulates C semantics in that + + + + + + + + + +the result here is consistent with a truncating divide. E.g. `truncate(x / y) * +y + truncate_mod(x, y) = x`. + +*NOTE*: `truncatemod` supports broadcasting. More about broadcasting +[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`. +
+`y` + +A `Tensor`. Must have the same type as `x`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `x`. +
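+
+As a small illustration, the remainder keeps the sign of `x`, unlike
+`tf.math.floormod`, which follows Python's `%`:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([-7, 7])
+y = tf.constant([5, 5])
+print(tf.truncatemod(x, y).numpy())    # [-2  2]  sign follows x (C semantics)
+print(tf.math.floormod(x, y).numpy())  # [ 3  2]  sign follows y (Python %)
+```
+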
+ diff --git a/site/en/api_docs/python/tf/tuple.md b/site/en/api_docs/python/tf/tuple.md new file mode 100644 index 00000000000..487a2d04c40 --- /dev/null +++ b/site/en/api_docs/python/tf/tuple.md @@ -0,0 +1,117 @@ +description: Group tensors together. + +
+ + +
+ +# tf.tuple + + + + + + + + + +Group tensors together. + + + + + + + +This creates a tuple of tensors with the same values as the `tensors` +argument, except that the value of each tensor is only returned after the +values of all tensors have been computed. + +`control_inputs` contains additional ops that have to finish before this op +finishes, but whose outputs are not returned. + +This can be used as a "join" mechanism for parallel computations: all the +argument tensors can be computed in parallel, but the values of any tensor +returned by `tuple` are only available after all the parallel computations +are done. + +See also tf.group and +tf.control_dependencies. + + + + + + + + + + + + + + + + +
+`tensors` + +A list of `Tensor`s or `IndexedSlices`, some entries can be `None`. +
+`control_inputs` + +List of additional ops to finish before returning. +
+`name` + +(optional) A name to use as a `name_scope` for the operation. +
+ + + + + + + + + + + +
+Same as `tensors`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `tensors` does not contain any `Tensor` or `IndexedSlices`. +
+`TypeError` + +If `control_inputs` is not a list of `Operation` or `Tensor` +objects. +
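+
+The sketch below is contrived (the counter variable exists only to provide a
+side effect) and shows how the returned tensors are gated on a control input
+inside a `tf.function`:
+
+```python
+import tensorflow as tf
+
+counter = tf.Variable(0)
+
+@tf.function
+def compute():
+  a = tf.constant(1.0) * 2.0
+  b = tf.constant(3.0) + 4.0
+  bump = counter.assign_add(1)   # side effect that must complete first
+  # a2 and b2 have the same values as a and b, but only become available
+  # once a, b and the control input have all been computed.
+  a2, b2 = tf.tuple([a, b], control_inputs=[bump])
+  return a2, b2
+
+print(compute())
+```
+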
+ diff --git a/site/en/api_docs/python/tf/unique.md b/site/en/api_docs/python/tf/unique.md new file mode 100644 index 00000000000..e8b2aca2b85 --- /dev/null +++ b/site/en/api_docs/python/tf/unique.md @@ -0,0 +1,127 @@ +description: Finds unique elements in a 1-D tensor. + +
+ + +
+ +# tf.unique + + + + + + + + + +Finds unique elements in a 1-D tensor. + + + + + + + + + +This operation returns a tensor `y` containing all of the unique elements of `x` +sorted in the same order that they occur in `x`; `x` does not need to be sorted. +This operation also returns a tensor `idx` the same size as `x` that contains +the index of each value of `x` in the unique output `y`. In other words: + +`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]` + +#### Examples: + + + +``` +# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] +y, idx = unique(x) +y ==> [1, 2, 4, 7, 8] +idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] +``` + +``` +# tensor 'x' is [4, 5, 1, 2, 3, 3, 4, 5] +y, idx = unique(x) +y ==> [4, 5, 1, 2, 3] +idx ==> [0, 1, 2, 3, 4, 4, 0, 1] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. 1-D. +
+`out_idx` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (y, idx). +
+`y` + +A `Tensor`. Has the same type as `x`. +
+`idx` + +A `Tensor` of type `out_idx`. +
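+
+The examples above are easy to reproduce eagerly; note also that
+`tf.gather(y, idx)` reconstructs the original input:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1, 1, 2, 4, 4, 4, 7, 8, 8])
+y, idx = tf.unique(x)
+print(y.numpy())                   # [1 2 4 7 8]
+print(idx.numpy())                 # [0 0 1 2 2 2 3 4 4]
+print(tf.gather(y, idx).numpy())   # [1 1 2 4 4 4 7 8 8], i.e. x again
+```
+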
+ diff --git a/site/en/api_docs/python/tf/unique_with_counts.md b/site/en/api_docs/python/tf/unique_with_counts.md new file mode 100644 index 00000000000..d31214788b2 --- /dev/null +++ b/site/en/api_docs/python/tf/unique_with_counts.md @@ -0,0 +1,129 @@ +description: Finds unique elements in a 1-D tensor. + +
+ + +
+ +# tf.unique_with_counts + + + + + + + + + +Finds unique elements in a 1-D tensor. + + + + + + + + + +This operation returns a tensor `y` containing all of the unique elements of `x` +sorted in the same order that they occur in `x`. This operation also returns a +tensor `idx` the same size as `x` that contains the index of each value of `x` +in the unique output `y`. Finally, it returns a third tensor `count` that +contains the count of each element of `y` in `x`. In other words: + +`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]` + +#### For example: + + + +``` +# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] +y, idx, count = unique_with_counts(x) +y ==> [1, 2, 4, 7, 8] +idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] +count ==> [2, 1, 3, 1, 2] +``` + + + + + + + + + + + + + + + + +
+`x` + +A `Tensor`. 1-D. +
+`out_idx` + +An optional tf.DType from: `tf.int32, tf.int64`. Defaults to tf.int32. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + + + + + + + + + + +
+A tuple of `Tensor` objects (y, idx, count). +
+`y` + +A `Tensor`. Has the same type as `x`. +
+`idx` + +A `Tensor` of type `out_idx`. +
+`count` + +A `Tensor` of type `out_idx`. +
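+
+A runnable version of the behaviour described above, using a different input
+for variety:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([4, 5, 1, 2, 3, 3, 4, 5])
+y, idx, count = tf.unique_with_counts(x)
+print(y.numpy())       # [4 5 1 2 3]
+print(idx.numpy())     # [0 1 2 3 4 4 0 1]
+print(count.numpy())   # [2 2 1 1 2]
+```
+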
+ diff --git a/site/en/api_docs/python/tf/unravel_index.md b/site/en/api_docs/python/tf/unravel_index.md new file mode 100644 index 00000000000..8cfd5164683 --- /dev/null +++ b/site/en/api_docs/python/tf/unravel_index.md @@ -0,0 +1,113 @@ +description: Converts an array of flat indices into a tuple of coordinate arrays. + +
+ + +
+ +# tf.unravel_index + + + + + + + + + +Converts an array of flat indices into a tuple of coordinate arrays. + + + + + + + + + + +#### Example: + + + +``` +y = tf.unravel_index(indices=[2, 5, 7], dims=[3, 3]) +# 'dims' represent a hypothetical (3, 3) tensor of indices: +# [[0, 1, *2*], +# [3, 4, *5*], +# [6, *7*, 8]] +# For each entry from 'indices', this operation returns +# its coordinates (marked with '*'), such as +# 2 ==> (0, 2) +# 5 ==> (1, 2) +# 7 ==> (2, 1) +y ==> [[0, 1, 2], [2, 2, 1]] +``` + + + + + + + + + + + + + + + + + + +
+`indices`
+
+A `Tensor`. Must be one of the following types: `int32`, `int64`.
+A 0-D or 1-D `int` Tensor whose elements are indices into the
+flattened version of an array of dimensions `dims`.
+
+`dims`
+
+A `Tensor`. Must have the same type as `indices`.
+A 1-D `int` Tensor. The shape of the array to use for unraveling
+`indices`.
+
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor`. Has the same type as `indices`. +
+ + + +#### Numpy Compatibility +Equivalent to np.unravel_index + diff --git a/site/en/api_docs/python/tf/unstack.md b/site/en/api_docs/python/tf/unstack.md new file mode 100644 index 00000000000..ccb5c80de43 --- /dev/null +++ b/site/en/api_docs/python/tf/unstack.md @@ -0,0 +1,137 @@ +description: Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors. + +
+ + +
+ +# tf.unstack + + + + + + + + + +Unpacks the given dimension of a rank-`R` tensor into rank-`(R-1)` tensors. + + + + + + + + + +Unpacks `num` tensors from `value` by chipping it along the `axis` dimension. +If `num` is not specified (the default), it is inferred from `value`'s shape. +If `value.shape[axis]` is not known, `ValueError` is raised. + +For example, given a tensor of shape `(A, B, C, D)`; + +If `axis == 0` then the i'th tensor in `output` is the slice + `value[i, :, :, :]` and each tensor in `output` will have shape `(B, C, D)`. + (Note that the dimension unpacked along is gone, unlike `split`). + +If `axis == 1` then the i'th tensor in `output` is the slice + `value[:, i, :, :]` and each tensor in `output` will have shape `(A, C, D)`. +Etc. + +This is the opposite of stack. + + + + + + + + + + + + + + + + + + + +
+`value` + +A rank `R > 0` `Tensor` to be unstacked. +
+`num` + +An `int`. The length of the dimension `axis`. Automatically inferred if +`None` (the default). +
+`axis` + +An `int`. The axis to unstack along. Defaults to the first dimension. +Negative values wrap around, so the valid range is `[-R, R)`. +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+The list of `Tensor` objects unstacked from `value`. +
+ + + + + + + + + + + + + + + +
+`ValueError` + +If `num` is unspecified and cannot be inferred. +
+`ValueError` + +If `axis` is out of the range [-R, R). +
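+
+For instance, a small eager sketch:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2, 3],
+                 [4, 5, 6]])            # shape (2, 3)
+
+rows = tf.unstack(x)                    # axis=0: two tensors of shape (3,)
+cols = tf.unstack(x, axis=1)            # axis=1: three tensors of shape (2,)
+
+print([r.numpy() for r in rows])        # [array([1, 2, 3]), array([4, 5, 6])]
+print([c.numpy() for c in cols])        # [array([1, 4]), array([2, 5]), array([3, 6])]
+```
+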
+ diff --git a/site/en/api_docs/python/tf/variable_creator_scope.md b/site/en/api_docs/python/tf/variable_creator_scope.md new file mode 100644 index 00000000000..2519425a4a4 --- /dev/null +++ b/site/en/api_docs/python/tf/variable_creator_scope.md @@ -0,0 +1,109 @@ +description: Scope which defines a variable creation function to be used by variable(). + +
+ + +
+
+# tf.variable_creator_scope
+
+
+
+Scope which defines a variable creation function to be used by variable().
+
+
+
+variable_creator is expected to be a function with the following signature:
+
+```
+  def variable_creator(next_creator, **kwargs)
+```
+
+The creator is supposed to eventually call the next_creator to create a
+variable if it does want to create a variable and not call Variable or
+ResourceVariable directly. This helps make creators composable. A creator may
+choose to create multiple variables, return already existing variables, or
+simply register that a variable was created and defer to the next creators in
+line. Creators can also modify the keyword arguments seen by the next
+creators.
+
+Custom getters in the variable scope will eventually resolve down to these
+custom creators when they do create variables.
+
+The valid keyword arguments in kwds are:
+
+  * initial_value: A `Tensor`, or Python object convertible to a `Tensor`,
+    which is the initial value for the Variable. The initial value must have
+    a shape specified unless `validate_shape` is set to False. Can also be a
+    callable with no argument that returns the initial value when called. In
+    that case, `dtype` must be specified. (Note that initializer functions
+    from init_ops.py must first be bound to a shape before being used here.)
+  * trainable: If `True`, the default, GradientTapes automatically watch
+    uses of this Variable.
+  * validate_shape: If `False`, allows the variable to be initialized with a
+    value of unknown shape. If `True`, the default, the shape of
+    `initial_value` must be known.
+  * caching_device: Optional device string describing where the Variable
+    should be cached for reading. Defaults to the Variable's device.
+    If not `None`, caches on another device. Typical use is to cache
+    on the device where the Ops using the Variable reside, to deduplicate
+    copying through `Switch` and other conditional statements.
+  * name: Optional name for the variable. Defaults to `'Variable'` and gets
+    uniquified automatically.
+  * dtype: If set, initial_value will be converted to the given type.
+    If `None`, either the datatype will be kept (if `initial_value` is
+    a Tensor), or `convert_to_tensor` will decide.
+  * constraint: A constraint function to be applied to the variable after
+    updates by some algorithms.
+  * synchronization: Indicates when a distributed variable will be
+    aggregated. Accepted values are constants defined in the class
+    tf.VariableSynchronization. By default the synchronization is set to
+    `AUTO` and the current `DistributionStrategy` chooses
+    when to synchronize.
+  * aggregation: Indicates how a distributed variable will be aggregated.
+    Accepted values are constants defined in the class
+    tf.VariableAggregation.
+
+This set may grow over time, so it's important the signature of creators is as
+mentioned above.
+
+`variable_creator` + +the passed creator +
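+
+A minimal sketch of a custom creator (the creator below simply logs the
+requested name and defers to the next creator in the chain; it is illustrative,
+not part of the reference):
+
+```python
+import tensorflow as tf
+
+def logging_creator(next_creator, **kwargs):
+  print("creating variable:", kwargs.get("name"))
+  return next_creator(**kwargs)
+
+with tf.variable_creator_scope(logging_creator):
+  v = tf.Variable(tf.zeros([2, 3]), name="weights")   # prints "creating variable: weights"
+
+print(v.shape)   # (2, 3)
+```
+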
+ + + +#### Yields: + +A scope in which the creator is active diff --git a/site/en/api_docs/python/tf/vectorized_map.md b/site/en/api_docs/python/tf/vectorized_map.md new file mode 100644 index 00000000000..653f27c445e --- /dev/null +++ b/site/en/api_docs/python/tf/vectorized_map.md @@ -0,0 +1,150 @@ +description: Parallel map on the list of tensors unpacked from elems on dimension 0. + +
+ + +
+ +# tf.vectorized_map + + + + + + + + + +Parallel map on the list of tensors unpacked from `elems` on dimension 0. + + + + + + + + + + +This method works similar to tf.map_fn but is optimized to run much faster, +possibly with a much larger memory footprint. The speedups are obtained by +vectorization (see https://arxiv.org/pdf/1903.04243.pdf). The idea behind +vectorization is to semantically launch all the invocations of `fn` in +parallel and fuse corresponding operations across all these invocations. This +fusion is done statically at graph generation time and the generated code is +often similar in performance to a manually fused version. + +Because tf.vectorized_map fully parallelizes the batch, this method will +generally be significantly faster than using tf.map_fn, especially in eager +mode. However this is an experimental feature and currently has a lot of +limitations: + - There should be no data dependency between the different semantic + invocations of `fn`, i.e. it should be safe to map the elements of the + inputs in any order. + - Stateful kernels may mostly not be supported since these often imply a + data dependency. We do support a limited set of such stateful kernels + though (like RandomFoo, Variable operations like reads, etc). + - `fn` has limited support for control flow operations. tf.cond in + particular is not supported. + - `fn` should return nested structure of Tensors or Operations. However + if an Operation is returned, it should have zero outputs. + - The shape and dtype of any intermediate or output tensors in the + computation of `fn` should not depend on the input to `fn`. + +#### Examples: + + +```python +def outer_product(a): + return tf.tensordot(a, a, 0) + +batch_size = 100 +a = tf.ones((batch_size, 32, 32)) +c = tf.vectorized_map(outer_product, a) +assert c.shape == (batch_size, 32, 32, 32, 32) +``` + +```python +# Computing per-example gradients + +batch_size = 10 +num_features = 32 +layer = tf.keras.layers.Dense(1) + +def model_fn(arg): + with tf.GradientTape() as g: + inp, label = arg + inp = tf.expand_dims(inp, 0) + label = tf.expand_dims(label, 0) + prediction = layer(inp) + loss = tf.nn.l2_loss(label - prediction) + return g.gradient(loss, (layer.kernel, layer.bias)) + +inputs = tf.random.uniform([batch_size, num_features]) +labels = tf.random.uniform([batch_size, 1]) +per_example_gradients = tf.vectorized_map(model_fn, (inputs, labels)) +assert per_example_gradients[0].shape == (batch_size, num_features, 1) +assert per_example_gradients[1].shape == (batch_size, 1) +``` + + + + + + + + + + + + + +
+`fn` + +The callable to be performed. It accepts one argument, which will have +the same (possibly nested) structure as `elems`, and returns a possibly +nested structure of Tensors and Operations, which may be different than +the structure of `elems`. +
+`elems` + +A tensor or (possibly nested) sequence of tensors, each of which will +be unpacked along their first dimension. The nested sequence of the +resulting slices will be mapped over by `fn`. +
+ + + + + + + + + + + +
+A tensor or (possibly nested) sequence of tensors. Each tensor packs the +results of applying fn to tensors unpacked from elems along the first +dimension, from first to last. +
+ diff --git a/site/en/api_docs/python/tf/version.md b/site/en/api_docs/python/tf/version.md new file mode 100644 index 00000000000..9402a663616 --- /dev/null +++ b/site/en/api_docs/python/tf/version.md @@ -0,0 +1,35 @@ +description: Public API for tf.version namespace. + +
+ + + + + + + + +
+ +# Module: tf.version + + + + + + + + + +Public API for tf.version namespace. + + + +## Other Members + +* `COMPILER_VERSION = '7.3.1 20180303'` +* `GIT_VERSION = 'v2.2.0-rc4-8-g2b96f3662b'` +* `GRAPH_DEF_VERSION = 175` +* `GRAPH_DEF_VERSION_MIN_CONSUMER = 0` +* `GRAPH_DEF_VERSION_MIN_PRODUCER = 0` +* `VERSION = '2.2.0'` diff --git a/site/en/api_docs/python/tf/where.md b/site/en/api_docs/python/tf/where.md new file mode 100644 index 00000000000..5da5768459b --- /dev/null +++ b/site/en/api_docs/python/tf/where.md @@ -0,0 +1,192 @@ +description: Return the elements where condition is True (multiplexing x and y). + +
+ + +
+ +# tf.where + + + + + + + + + +Return the elements where `condition` is `True` (multiplexing `x` and `y`). + + + + + + + + + +This operator has two modes: in one mode both `x` and `y` are provided, in +another mode neither are provided. `condition` is always expected to be a +tf.Tensor of type `bool`. + +#### Retrieving indices of `True` elements + +If `x` and `y` are not provided (both are None): + +tf.where will return the indices of `condition` that are `True`, in +the form of a 2-D tensor with shape (n, d). +(Where n is the number of matching indices in `condition`, +and d is the number of dimensions in `condition`). + +Indices are output in row-major order. + +``` +>>> tf.where([True, False, False, True]) + +``` + +``` +>>> tf.where([[True, False], [False, True]]) + +``` + +``` +>>> tf.where([[[True, False], [False, True], [True, True]]]) + +``` + +#### Multiplexing between `x` and `y` + +If `x` and `y` are provided (both have non-None values): + +tf.where will choose an output shape from the shapes of `condition`, `x`, +and `y` that all three shapes are +[broadcastable](https://docs.scipy.org/doc/numpy/reference/ufuncs.html) to. + +The `condition` tensor acts as a mask that chooses whether the corresponding +element / row in the output should be taken from `x` +(if the elemment in `condition is True) or `y` (if it is false). + +``` +>>> tf.where([True, False, False, True], [1,2,3,4], [100,200,300,400]) + +>>> tf.where([True, False, False, True], [1,2,3,4], [100]) + +>>> tf.where([True, False, False, True], [1,2,3,4], 100) + +>>> tf.where([True, False, False, True], 1, 100) + +``` + +``` +>>> tf.where(True, [1,2,3,4], 100) + +>>> tf.where(False, [1,2,3,4], 100) + +``` + + + + + + + + + + + + + + + + + + + +
+`condition` + +A tf.Tensor of type `bool` +
+`x` + +If provided, a Tensor which is of the same type as `y`, and has a shape +broadcastable with `condition` and `y`. +
+`y`
+
+If provided, a Tensor which is of the same type as `x`, and has a shape
+broadcastable with `condition` and `x`.
+
+`name` + +A name of the operation (optional). +
+ + + + + + + + + + + +
+If `x` and `y` are provided: +A `Tensor` with the same type as `x` and `y`, and shape that +is broadcast from `condition`, `x`, and `y`. +Otherwise, a `Tensor` with shape `(num_true, dim_size(condition))`. +
+ + + + + + + + + + + + +
+`ValueError` + +When exactly one of `x` or `y` is non-None, or the shapes +are not all broadcastable. +
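+
+As an applied sketch combining both modes (the values are arbitrary):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1.0, -2.0],
+                 [-3.0, 4.0]])
+
+idx = tf.where(x < 0)                           # indices mode: [[0 1], [1 0]]
+print(tf.gather_nd(x, idx).numpy())             # [-2. -3.]
+
+clipped = tf.where(x < 0, tf.zeros_like(x), x)  # multiplexing mode
+print(clipped.numpy())                          # [[1. 0.] [0. 4.]]
+```
+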
+ diff --git a/site/en/api_docs/python/tf/while_loop.md b/site/en/api_docs/python/tf/while_loop.md new file mode 100644 index 00000000000..94586c467b9 --- /dev/null +++ b/site/en/api_docs/python/tf/while_loop.md @@ -0,0 +1,291 @@ +description: Repeat body while the condition cond is true. (deprecated argument values) + +
+ + +
+ +# tf.while_loop + + + + + + + + + +Repeat `body` while the condition `cond` is true. (deprecated argument values) + + + + + + + +Warning: SOME ARGUMENT VALUES ARE DEPRECATED: `(back_prop=False)`. They will be removed in a future version. +Instructions for updating: +back_prop=False is deprecated. Consider using tf.stop_gradient instead. +Instead of: +results = tf.while_loop(c, b, vars, back_prop=False) +Use: +results = tf.nest.map_structure(tf.stop_gradient, tf.while_loop(c, b, vars)) + +`cond` is a callable returning a boolean scalar tensor. `body` is a callable +returning a (possibly nested) tuple, namedtuple or list of tensors of the same +arity (length and structure) and types as `loop_vars`. `loop_vars` is a +(possibly nested) tuple, namedtuple or list of tensors that is passed to both +`cond` and `body`. `cond` and `body` both take as many arguments as there are +`loop_vars`. + +In addition to regular Tensors or IndexedSlices, the body may accept and +return TensorArray objects. The flows of the TensorArray objects will +be appropriately forwarded between loops and during gradient calculations. + +Note that `while_loop` calls `cond` and `body` *exactly once* (inside the +call to `while_loop`, and not at all during `Session.run()`). `while_loop` +stitches together the graph fragments created during the `cond` and `body` +calls with some additional graph nodes to create the graph flow that +repeats `body` until `cond` returns false. + +For correctness, tf.while_loop() strictly enforces shape invariants for +the loop variables. A shape invariant is a (possibly partial) shape that +is unchanged across the iterations of the loop. An error will be raised +if the shape of a loop variable after an iteration is determined to be more +general than or incompatible with its shape invariant. For example, a shape +of [11, None] is more general than a shape of [11, 17], and [11, 21] is not +compatible with [11, 17]. By default (if the argument `shape_invariants` is +not specified), it is assumed that the initial shape of each tensor in +`loop_vars` is the same in every iteration. The `shape_invariants` argument +allows the caller to specify a less specific shape invariant for each loop +variable, which is needed if the shape varies between iterations. The +tf.Tensor.set_shape +function may also be used in the `body` function to indicate that +the output loop variable has a particular shape. The shape invariant for +SparseTensor and IndexedSlices are treated specially as follows: + +a) If a loop variable is a SparseTensor, the shape invariant must be +TensorShape([r]) where r is the rank of the dense tensor represented +by the sparse tensor. It means the shapes of the three tensors of the +SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here +is the shape of the SparseTensor.dense_shape property. It must be the shape of +a vector. + +b) If a loop variable is an IndexedSlices, the shape invariant must be +a shape invariant of the values tensor of the IndexedSlices. It means +the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]], +[shape.ndims]). + +`while_loop` implements non-strict semantics, enabling multiple iterations +to run in parallel. The maximum number of parallel iterations can be +controlled by `parallel_iterations`, which gives users some control over +memory consumption and execution order. For correct programs, `while_loop` +should return the same result for any parallel_iterations > 0. 
+
+For training, TensorFlow stores the tensors that are produced in the
+forward inference and are needed in back propagation. These tensors are a
+main source of memory consumption and often cause OOM errors when training
+on GPUs. When the flag `swap_memory` is true, we swap out these tensors from
+GPU to CPU. This, for example, allows us to train RNN models with very long
+sequences and large batches.
+
+`cond` + +A callable that represents the termination condition of the loop. +
+`body` + +A callable that represents the loop body. +
+`loop_vars` + +A (possibly nested) tuple, namedtuple or list of numpy array, +`Tensor`, and `TensorArray` objects. +
+`shape_invariants` + +The shape invariants for the loop variables. +
+`parallel_iterations` + +The number of iterations allowed to run in parallel. It +must be a positive integer. +
+`back_prop` + +(optional) Deprecated. False disables support for back +propagation. Prefer using tf.stop_gradient instead. +
+`swap_memory` + +Whether GPU-CPU memory swap is enabled for this loop. +
+`maximum_iterations` + +Optional maximum number of iterations of the while loop +to run. If provided, the `cond` output is AND-ed with an additional +condition ensuring the number of iterations executed is no greater than +`maximum_iterations`. +
+`name` + +Optional name prefix for the returned tensors. +
+ + + + + + + + + + + +
+The output tensors for the loop variables after the loop. The return value +has the same structure as `loop_vars`. +
+ + + + + + + + + + + + + + + +
+`TypeError` + +if `cond` or `body` is not callable. +
+`ValueError` + +if `loop_vars` is empty. +
+ + + +#### Example: + + + +```python +i = tf.constant(0) +c = lambda i: tf.less(i, 10) +b = lambda i: (tf.add(i, 1), ) +r = tf.while_loop(c, b, [i]) +``` + +Example with nesting and a namedtuple: + +```python +import collections +Pair = collections.namedtuple('Pair', 'j, k') +ijk_0 = (tf.constant(0), Pair(tf.constant(1), tf.constant(2))) +c = lambda i, p: i < 10 +b = lambda i, p: (i + 1, Pair((p.j + p.k), (p.j - p.k))) +ijk_final = tf.while_loop(c, b, ijk_0) +``` + +Example using shape_invariants: + +```python +i0 = tf.constant(0) +m0 = tf.ones([2, 2]) +c = lambda i, m: i < 10 +b = lambda i, m: [i+1, tf.concat([m, m], axis=0)] +tf.while_loop( + c, b, loop_vars=[i0, m0], + shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])]) +``` + +Example which demonstrates non-strict semantics: In the following +example, the final value of the counter `i` does not depend on `x`. So +the `while_loop` can increment the counter parallel to updates of `x`. +However, because the loop counter at one loop iteration depends +on the value at the previous iteration, the loop counter itself cannot +be incremented in parallel. Hence if we just want the final value of the +counter (which we print on the line `print(sess.run(i))`), then +`x` will never be incremented, but the counter will be updated on a +single thread. Conversely, if we want the value of the output (which we +print on the line `print(sess.run(out).shape)`), then the counter may be +incremented on its own thread, while `x` can be incremented in +parallel on a separate thread. In the extreme case, it is conceivable +that the thread incrementing the counter runs until completion before +`x` is incremented even a single time. The only thing that can never +happen is that the thread updating `x` can never get ahead of the +counter thread because the thread incrementing `x` depends on the value +of the counter. + +```python +import tensorflow as tf + +n = 10000 +x = tf.constant(list(range(n))) +c = lambda i, x: i < n +b = lambda i, x: (tf.compat.v1.Print(i + 1, [i]), tf.compat.v1.Print(x + 1, +[i], "x:")) +i, out = tf.while_loop(c, b, (0, x)) +with tf.compat.v1.Session() as sess: + print(sess.run(i)) # prints [0] ... [9999] + + # The following line may increment the counter and x in parallel. + # The counter thread may get ahead of the other thread, but not the + # other way around. So you may see things like + # [9996] x:[9987] + # meaning that the counter thread is on iteration 9996, + # while the other thread is on iteration 9987 + print(sess.run(out).shape) +``` \ No newline at end of file diff --git a/site/en/api_docs/python/tf/xla.md b/site/en/api_docs/python/tf/xla.md new file mode 100644 index 00000000000..e4e93ed19e1 --- /dev/null +++ b/site/en/api_docs/python/tf/xla.md @@ -0,0 +1,25 @@ +description: Public API for tf.xla namespace. + +
+ + +
+ +# Module: tf.xla + + + + + + + + + +Public API for tf.xla namespace. + + + +## Modules + +[`experimental`](../tf/xla/experimental.md) module: Public API for tf.xla.experimental namespace. + diff --git a/site/en/api_docs/python/tf/xla/experimental.md b/site/en/api_docs/python/tf/xla/experimental.md new file mode 100644 index 00000000000..669067c85bb --- /dev/null +++ b/site/en/api_docs/python/tf/xla/experimental.md @@ -0,0 +1,27 @@ +description: Public API for tf.xla.experimental namespace. + +
+ + +
+ +# Module: tf.xla.experimental + + + + + + + + + +Public API for tf.xla.experimental namespace. + + + +## Functions + +[`compile(...)`](../../tf/xla/experimental/compile.md): Builds an operator that compiles and runs `computation` with XLA. + +[`jit_scope(...)`](../../tf/xla/experimental/jit_scope.md): Enable or disable JIT compilation of operators within the scope. + diff --git a/site/en/api_docs/python/tf/xla/experimental/compile.md b/site/en/api_docs/python/tf/xla/experimental/compile.md new file mode 100644 index 00000000000..f527563213d --- /dev/null +++ b/site/en/api_docs/python/tf/xla/experimental/compile.md @@ -0,0 +1,133 @@ +description: Builds an operator that compiles and runs computation with XLA. + +
+ + +
+ +# tf.xla.experimental.compile + + + + + + + + + +Builds an operator that compiles and runs `computation` with XLA. + + + + + + + + + +NOTE: In eager mode, `computation` will have `@tf.function` semantics. + + + + + + + + + + + + + +
+`computation` + +A Python function that builds a computation to apply to the +input. If the function takes n inputs, 'inputs' should be a list of n +tensors. + +`computation` may return a list of operations and tensors. Tensors must +come before operations in the returned list. The return value of +`compile` is a list of tensors corresponding to the tensors from the +output of `computation`. + +All `Operation`s returned from `computation` will be executed when +evaluating any of the returned output tensors. +
+`inputs` + +A list of inputs or `None` (equivalent to an empty list). Each input +can be a nested structure containing values that are convertible to +tensors. Note that passing an N-dimension list of compatible values will +result in a N-dimension list of scalar tensors rather than a single Rank-N +tensors. If you need different behavior, convert part of inputs to tensors +with tf.convert_to_tensor. +
+ + + + + + + + + + + +
+Same data structure as if computation(*inputs) is called directly with some +exceptions for correctness. Exceptions include: +1) None output: a NoOp would be returned which control-depends on +computation. +2) Single value output: A tuple containing the value would be returned. +3) Operation-only outputs: a NoOp would be returned which +control-depends on computation. +
+ + + + + + + + + + + + +
+`RuntimeError` + +if called when eager execution is enabled. +
+ + + +#### Known issues: + +When a tf.random operation is built with XLA, the implementation doesn't + pass the user provided seed to the XLA compiler. As such, the XLA compiler + generates a random number and uses it as a seed when compiling the + operation. This implementation causes a violation of the Tensorflow + defined semantics in two aspects. First, changing the value of the user + defined seed doesn't change the numbers generated by the operation. + Second, when a seed is not specified, running the program multiple times + will generate the same numbers. diff --git a/site/en/api_docs/python/tf/xla/experimental/jit_scope.md b/site/en/api_docs/python/tf/xla/experimental/jit_scope.md new file mode 100644 index 00000000000..79d5235b133 --- /dev/null +++ b/site/en/api_docs/python/tf/xla/experimental/jit_scope.md @@ -0,0 +1,129 @@ +description: Enable or disable JIT compilation of operators within the scope. + +
+ + +
+ +# tf.xla.experimental.jit_scope + + + + + + + + + +Enable or disable JIT compilation of operators within the scope. + + + + + + + + + +NOTE: This is an experimental feature. + +The compilation is a hint and only supported on a best-effort basis. + +#### Example usage: + + +```python +with tf.xla.experimental.jit_scope(): + c = tf.matmul(a, b) # compiled +with tf.xla.experimental.jit_scope(compile_ops=False): + d = tf.matmul(a, c) # not compiled +with tf.xla.experimental.jit_scope( + compile_ops=lambda node_def: 'matmul' in node_def.op.lower()): + e = tf.matmul(a, b) + d # matmul is compiled, the addition is not. +``` + + +Example of `separate_compiled_gradients`: + + ```python + # In the example below, the computations for f, g and h will all be compiled + # in separate scopes. + with tf.xla.experimental.jit_scope( + separate_compiled_gradients=True): + f = tf.matmul(a, b) + g = tf.gradients([f], [a, b], name='mygrads1') + h = tf.gradients([f], [a, b], name='mygrads2') + ``` + + + + + + + + + + + + + +
+`compile_ops` + +Whether to enable or disable compilation in the scope. +Either a Python bool, or a callable that accepts the parameter +`node_def` and returns a python bool. +
+`separate_compiled_gradients` + +If true put each gradient subgraph into a +separate compilation scope. This gives fine-grained control over which +portions of the graph will be compiled as a single unit. Compiling +gradients separately may yield better performance for some graphs. +The scope is named based on the scope of the forward computation as well +as the name of the gradients. As a result, the gradients will be compiled +in a scope that is separate from both the forward computation, and from +other gradients. +
+ + + + + + + + + + + + +
+`RuntimeError` + +if called when eager execution is enabled. +
+ + + +#### Yields: + +The current scope, enabling or disabling compilation. diff --git a/site/en/api_docs/python/tf/zeros.md b/site/en/api_docs/python/tf/zeros.md new file mode 100644 index 00000000000..83b325ac03f --- /dev/null +++ b/site/en/api_docs/python/tf/zeros.md @@ -0,0 +1,100 @@ +description: Creates a tensor with all elements set to zero. + +
+ + +
+ +# tf.zeros + + + + + + + + + +Creates a tensor with all elements set to zero. + + + + + + + + + +This operation returns a tensor of type `dtype` with shape `shape` and +all elements set to zero. + +``` +>>> tf.zeros([3, 4], tf.int32) + +``` + + + + + + + + + + + + + + + + +
+`shape` + +A `list` of integers, a `tuple` of integers, or +a 1-D `Tensor` of type `int32`. +
+`dtype` + +The DType of an element in the resulting `Tensor`. +
+`name` + +Optional string. A name for the operation. +
+ + + + + + + + + + + +
+A `Tensor` with all elements set to zero. +
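+
+Two more small cases, shown for illustration: the default dtype is `float32`,
+and the shape may itself be a 1-D `int32` tensor, e.g. one produced by
+`tf.shape`:
+
+```python
+import tensorflow as tf
+
+print(tf.zeros([2, 3]))                            # dtype defaults to float32
+like = tf.ones([4, 5])
+print(tf.zeros(tf.shape(like), dtype=like.dtype))  # dynamic shape from a tensor
+```
+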
+ diff --git a/site/en/api_docs/python/tf/zeros_initializer.md b/site/en/api_docs/python/tf/zeros_initializer.md new file mode 100644 index 00000000000..4a16c107ba0 --- /dev/null +++ b/site/en/api_docs/python/tf/zeros_initializer.md @@ -0,0 +1,203 @@ +description: Initializer that generates tensors initialized to 0. + +
+ + + + + +
+
+# tf.zeros_initializer
+
+
+
+Initializer that generates tensors initialized to 0.
+
+Inherits From: [`Initializer`](../tf/keras/initializers/Initializer.md)
+
+
+
+Initializers allow you to pre-specify an initialization strategy, encoded in
+the Initializer object, without knowing the shape and dtype of the variable
+being initialized.
+
+#### Examples:
+
+
+
+```
+>>> def make_variables(k, initializer):
+...   return (tf.Variable(initializer(shape=[k], dtype=tf.float32)),
+...           tf.Variable(initializer(shape=[k, k], dtype=tf.float32)))
+>>> v1, v2 = make_variables(3, tf.zeros_initializer())
+>>> v1
+
+>>> v2
+
+>>> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.))
+
+```
+
+## Methods
+
+from_config
+
+View source
+
+
+
+Instantiates an initializer from a configuration dictionary.
+
+
+#### Example:
+
+
+
+```python
+initializer = RandomUniform(-1, 1)
+config = initializer.get_config()
+initializer = RandomUniform.from_config(config)
+```
+
Args
+`config` + +A Python dictionary. +It will typically be the output of `get_config`. +
+ + + + + + + + + + + +
Returns
+An Initializer instance. +
+ + + +

get_config

+ +View source + + + +Returns the configuration of the initializer as a JSON-serializable dict. + + + + + + + + + + +
Returns
+A JSON-serializable Python dict. +
+ + + +

__call__

+ +View source + + + +Returns a tensor object initialized as specified by the initializer. + + + + + + + + + + + + + + +
Args
+`shape` + +Shape of the tensor. +
+`dtype` + +Optional dtype of the tensor. Only numeric or boolean dtypes are +supported. +
+ + + + + + + + + + + + +
Raises
+`ValueError`
+
+If the dtype is not numeric or boolean.
+
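+
+In addition to calling the initializer directly, it is commonly passed to a
+Keras layer; a brief sketch (the layer sizes are arbitrary):
+
+```python
+import tensorflow as tf
+
+init = tf.zeros_initializer()
+print(init(shape=(2, 2), dtype=tf.float32).numpy())   # [[0. 0.] [0. 0.]]
+
+layer = tf.keras.layers.Dense(4, kernel_initializer=tf.zeros_initializer(),
+                              bias_initializer=tf.zeros_initializer())
+_ = layer(tf.ones([1, 3]))            # build the layer
+print(layer.kernel.numpy().sum())     # 0.0
+```
+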
+ + + + + diff --git a/site/en/api_docs/python/tf/zeros_like.md b/site/en/api_docs/python/tf/zeros_like.md new file mode 100644 index 00000000000..d6dd7065f01 --- /dev/null +++ b/site/en/api_docs/python/tf/zeros_like.md @@ -0,0 +1,112 @@ +description: Creates a tensor with all elements set to zero. + +
+ + +
+ +# tf.zeros_like + + + + + + + + + +Creates a tensor with all elements set to zero. + + + + + + + +See also tf.zeros. + +Given a single tensor or array-like object (`input`), this operation returns +a tensor of the same type and shape as `input` with all elements set to zero. +Optionally, you can use `dtype` to specify a new type for the returned tensor. + +#### Examples: + + +``` +>>> tensor = tf.constant([[1, 2, 3], [4, 5, 6]]) +>>> tf.zeros_like(tensor) + +``` + +``` +>>> tf.zeros_like(tensor, dtype=tf.float32) + +``` + +``` +>>> tf.zeros_like([[1, 2, 3], [4, 5, 6]]) + +``` + + + + + + + + + + + + + + + + + + +
+`input` + +A `Tensor` or array-like object. +
+`dtype` + +A type for the returned `Tensor`. Must be `float16`, `float32`, +`float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, +`complex64`, `complex128`, `bool` or `string` (optional). +
+`name` + +A name for the operation (optional). +
+ + + + + + + + + + + +
+A `Tensor` with all elements set to zero. +
+